
Oracle RAC Performance on VMware All-Flash Virtual SAN (4 nodes)


Oracle RAC 12c is a popular release of the Oracle Database solution, widely used by customers for mission-critical workloads. VMware Virtual SAN aims to provide highly scalable, available, reliable, high-performance storage using cost-effective commodity hardware, specifically direct-attached disks in ESXi hosts. Virtual SAN 6.2 added many enhancements, including deduplication and compression, software checksum, and erasure coding, to make the hyper-converged solution more efficient. All-Flash Virtual SAN, together with these new features, helps customers achieve consistently low latency and high performance while still benefiting from space-efficient storage for mission-critical applications such as Oracle databases.

All-Flash Virtual SAN Architecture

To measure OLTP performance on All-Flash Virtual SAN, we created a 4-node cluster and scaled the database from one Oracle RAC node up to four. The deduplication and compression feature was enabled to use the All-Flash Virtual SAN space efficiently. Figure 1 shows the solution architecture:


 

Figure 1. All-Flash Virtual SAN Datastore for Oracle RAC

Hardware Resources

We used direct-attached SSDs on each ESXi server to provide the Virtual SAN storage. Each ESXi server has two disk groups, each consisting of one cache-tier SSD and four capacity-tier SSDs; splitting the disks into two groups improves performance by putting more SSDs into the cache tier on each server. The raw capacity of the Virtual SAN datastore is around 11.88TB.
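As a quick sanity check on that figure, the snippet below is a back-of-the-envelope sketch of our own (not from the original post) that derives the raw capacity from the disk layout in Table 1; the small gap to the reported 11.88TB comes down to the drives' exact formatted sizes and unit conventions.

```python
# Approximate raw Virtual SAN capacity from the cluster layout described above.
# Only capacity-tier SSDs contribute to raw capacity; cache-tier SSDs do not.
HOSTS = 4                    # ESXi servers in the cluster
DISK_GROUPS_PER_HOST = 2     # each group: 1 cache SSD + 4 capacity SSDs
CAPACITY_SSDS_PER_GROUP = 4
SSD_SIZE_GB = 400            # nominal (decimal) drive size

capacity_ssds = HOSTS * DISK_GROUPS_PER_HOST * CAPACITY_SSDS_PER_GROUP  # 32
raw_gb = capacity_ssds * SSD_SIZE_GB   # 12,800 GB decimal
raw_tib = raw_gb * 1e9 / 2**40         # ~11.64 TiB in binary units

print(f"{capacity_ssds} capacity SSDs -> {raw_gb/1000:.1f} TB decimal, "
      f"~{raw_tib:.2f} TiB binary")
```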

Each ESXi server in the Virtual SAN cluster has the configuration shown in Table 1.

Table 1. ESXi Server Configuration

Property        | Specification
Server          | 4 x Dell PowerEdge R630
CPU             | 2 sockets, 12 cores per socket, 2.3GHz, hyper-threading enabled
RAM             | 256GB DDR4 RDIMM
Network adapter | 2 x Intel 10 Gigabit X540-AT2 + Intel I350 1Gb Ethernet
Storage adapter | 2 x 12Gbps SAS PCI-Express
Disks           | SSD: 2 x 400GB drives as cache tier; SSD: 8 x 400GB drives as capacity tier

Software Resources

Table 2 shows the software resources used in this solution.

Table 2. Software Resources

Software                       | Version    | Purpose
VMware vCenter and ESXi        | 6.0 U2     | ESXi cluster to host the virtual machines and provide the Virtual SAN cluster; vCenter Server provides a centralized platform for managing the vSphere environment
VMware Virtual SAN             | 6.2        | Software-defined storage solution for hyper-converged infrastructure
Oracle Enterprise Linux        | 6.7        | Operating system of the Oracle Database server nodes
Oracle 12c Grid Infrastructure | 12.1.0.2.0 | Oracle Database and clusterware software
SwingBench                     | 2.5        | TPC-C-like benchmark tool used to generate the Oracle workload

Network Configuration

A VMware vSphere Distributed Switch™ acts as a single virtual switch across all associated hosts in the data cluster. This setup allows virtual machines to maintain a consistent network configuration as they migrate across multiple hosts.

The vSphere Distributed Switch uses two 10GbE adapters for teaming and failover. A port group defines properties regarding security, traffic shaping, and NIC teaming. We used the default port group settings except for the uplink failover order shown in Table 3, which lists the distributed switch port groups created for the different functions and their respective active and standby uplinks, chosen to balance traffic across the available uplinks.

Table 3. Uplink and VLAN Settings of the Distributed Switch Port Groups

Distributed Switch Port Group Name | VLAN | Active Uplink | Standby Uplink
vSphere vMotion                    | 4021 | Uplink1       | Uplink2
Virtual SAN Cluster                | 1284 | Uplink2       | Uplink1

 

We used different VLANs to separate the vSphere vMotion and Virtual SAN traffic while still providing NIC failover for both; a configuration sketch follows below.
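For readers who automate this kind of setup, the following is a minimal pyVmomi sketch, not part of the original study, of how distributed port groups with the VLANs and failover orders from Table 3 could be defined. The uplink names, port count, and binding type are assumptions, and connection handling is omitted.

```python
# Minimal pyVmomi sketch: build distributed port group specs that mirror the
# VLAN and failover settings in Table 3. Assumes an existing vSphere
# Distributed Switch whose uplinks are named "Uplink1" and "Uplink2".
from pyVmomi import vim

def port_group_spec(name, vlan_id, active, standby):
    """DVPortgroup config spec with a VLAN ID and explicit failover order."""
    vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
        vlanId=vlan_id, inherited=False)
    order = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        activeUplinkPort=active, standbyUplinkPort=standby)
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
        uplinkPortOrder=order)
    port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        vlan=vlan, uplinkTeamingPolicy=teaming)
    return vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        name=name, type="earlyBinding", numPorts=16,
        defaultPortConfig=port_config)

specs = [
    port_group_spec("vSphere vMotion", 4021, ["Uplink1"], ["Uplink2"]),
    port_group_spec("Virtual SAN Cluster", 1284, ["Uplink2"], ["Uplink1"]),
]
# With a vim.DistributedVirtualSwitch object `dvs` obtained from a vCenter
# session (e.g., via pyVim.connect.SmartConnect), create the port groups:
#     task = dvs.AddDVPortgroup_Task(specs)
```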

 

Performance of Oracle RAC

With 150 users configured in SwingBench, Oracle reached a maximum of 383,050 transactions per minute (TPM), with an average of 329,258. From the AWR report, the top wait event was "log file sync" and the second was "db file sequential read". The maximum wait time for "log file sync" was less than 7ms, and the maximum wait time for "db file sequential read" was less than 2ms. When four RAC nodes worked together, the average CPU utilization of every node was less than 62 percent. See Figure 2 and Figure 3 for the AWR details.

Figure 2. Average CPU Utilization on the 4-Node RAC

Figure 3. Oracle I/O Wait Events

 

Table 4. I/O Workload from the Oracle AWR Report

I/O Type        | IOPS
Physical Reads  | 16,824
Physical Writes | 9,590
Total           | 26,414

Performance on Virtual SAN Backend

From the storage perspective, Virtual SAN delivered an average of 48,300 IOPS at an average throughput of 730MB/s. Response time was less than 1.5ms for reads and less than 1ms for writes during most of the period, as observed with the Virtual SAN performance monitor.
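Two derived figures help put these numbers in context; the arithmetic below is a back-of-the-envelope sketch of our own, not taken from the report. Note that the backend IOPS exceed the guest-visible IOPS in Table 4, largely because Virtual SAN mirrors each write to a replica on another host and issues checksum and deduplication metadata I/O of its own.

```python
# Back-of-the-envelope figures derived from the averages reported above.
backend_iops = 48_300   # average Virtual SAN backend IOPS
backend_mbps = 730      # average backend throughput in MB/s

avg_io_kb = backend_mbps * 1000 / backend_iops  # ~15.1 KB per I/O
print(f"Average backend I/O size: ~{avg_io_kb:.1f} KB")

# Read/write mix as seen by Oracle (Table 4).
reads, writes = 16_824, 9_590
total = reads + writes  # 26,414 IOPS in the AWR report
print(f"Guest I/O mix: {reads/total:.0%} reads / {writes/total:.0%} writes")
```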

Figure 4. Virtual SAN Backend Performance: IOPS, Bandwidth, and Latency

 

Oracle RAC Scalability on Virtual SAN

This test focuses on 4-node Oracle RAC database performance and RAC scalability on Virtual SAN storage. Four test runs were performed, scaling from a one-node RAC up to a four-node RAC.

The number of Oracle transactions per minute increased near-linearly as Oracle RAC nodes were added. As the node count grew, the database response time reported by Swingbench decreased accordingly, from 52ms with one node to 25ms with four (Figure 5); a sketch after the figure shows one way to quantify such scaling.

Figure 5. RAC Scalability on Virtual SAN
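To make "near-linear" concrete, the helper below shows one way to compute scaling efficiency from TPM measured at each cluster size. The TPM values are illustrative placeholders only; the post reports the four-node aggregate, not the per-step results.

```python
# Scaling efficiency: measured throughput vs. perfect linear scaling from the
# 1-node result. The TPM list is HYPOTHETICAL, not the study's measurements.
def scaling_efficiency(tpm_by_nodes):
    base = tpm_by_nodes[0]  # 1-node throughput
    return {n + 1: tpm / (base * (n + 1))
            for n, tpm in enumerate(tpm_by_nodes)}

example_tpm = [100_000, 195_000, 285_000, 370_000]  # 1..4 nodes (placeholders)
for nodes, eff in scaling_efficiency(example_tpm).items():
    print(f"{nodes} node(s): {eff:.0%} of linear scaling")
```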

Summary

Virtual SAN is a cost-effective and high-performance storage platform that is rapidly deployed, easy to manage, and fully integrated into the industry-leading VMware vSphere platform.

This solution validates All-Flash Virtual SAN as a storage platform supporting a high-performing Oracle RAC database.

