Christian Bilien's Oracle performance and tuning blog

Spotlight on Oracle replication options within a SAN (2/2)

This post is a follow-up to “Spotlight on Oracle replication options within a SAN (1/2)”, which covered the available replication options.

In this post I will address a specific performance aspect that matters a great deal to one of my customers. This is an organization where many performance challenges come down to commit wait time: the applications trade at the millisecond level, which translates into database log file syncs measured in hundreds of microseconds. It is a basic DRP requirement that applications must be synchronously replicated over a 2.5 km (1.5 miles) Fibre Channel network between a local and a remote EMC DMX 1000 storage array. The multipathing software is PowerPath; the DMX 1000 volumes may be mirrored from the local array to the remote one by either VxVM, ASM or SRDF.
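
To put numbers on those commit waits, the average log file sync time can be read straight from v$system_event. Here is a minimal sketch in Python, assuming the cx_Oracle driver and a hypothetical connect string:

    import cx_Oracle

    # Hypothetical credentials/TNS alias: replace with your own.
    conn = cx_Oracle.connect("perf/perf@trading")
    cur = conn.cursor()
    cur.execute("""
        SELECT total_waits, time_waited_micro
          FROM v$system_event
         WHERE event = 'log file sync'
    """)
    waits, micros = cur.fetchone()
    # Average commit wait since instance startup, in microseconds.
    print("avg log file sync: %.0f us over %d waits" % (micros / waits, waits))
    conn.close()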

Two options may be considered:

- Host-based replication: the volumes are mirrored across the two arrays by a volume manager (VxVM or ASM).
- Array-based replication: the mirroring is handled by SRDF.

Not all options are always available: a RAC installation spanning the two sites requires host-based replication, while simple replication with no clustering may use either SRDF or volume manager replication.

I ran some unit tests aimed at comparing the SRDF protocol against volume manager replication. Recall that an SRDF-mirrored I/O goes into the local storage array cache and is acknowledged to the calling program only once the remote cache has been updated. A volume manager mirror is no different in principle: the PowerPath policy dictates that both storage arrays must acknowledge the I/O before the calling program considers it complete.
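
The tests boil down to timing small synchronous writes against a raw device. A minimal sketch of that kind of test in Python, assuming a hypothetical raw device path and O_SYNC semantics (this is an illustration, not the tool actually used):

    import os
    import time

    DEV = "/dev/raw/raw1"   # hypothetical raw device path
    BS = 2 * 1024           # 2 KB writes, as in the first row of each table
    COUNT = 10000
    buf = b"\0" * BS

    # O_SYNC makes each write() return only once the array (and, when
    # mirrored, its remote peer) has acknowledged it - which is what the
    # log writer effectively waits for at commit time.
    fd = os.open(DEV, os.O_WRONLY | os.O_SYNC)
    t0 = time.perf_counter()
    for _ in range(COUNT):
        os.write(fd, buf)
    elapsed = time.perf_counter() - t0
    os.close(fd)

    print("I/O/s: %.0f   avg I/O time: %.2f ms   MB/s: %.1f" % (
        COUNT / elapsed,
        1000.0 * elapsed / COUNT,
        COUNT * BS / elapsed / 1e6))

Rerunning with BS = 5 * 1024 gives the second row of each table below.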

Test conditions:

Baseline: local raw device on a DMX 1000
Throughput = 1 PowerPath link throughput x 2

Block size (KB)    I/O/s    I/O time (ms)    MB/s
2                  1516     0.66             3.0
5                  1350     0.74             6.6

Test 1: distant raw device on a DMX
Throughput = 1 PowerPath link throughput x 2

Block size (KB)    I/O/s    I/O time (ms)    MB/s
2                  1370     0.73             2.7
5                  1281     0.78             6.3

The degradation due to distance is less than 10%. This is the I/O time and throughput I expect when the array volumes are mirrored by VxVM or ASM.
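
The arithmetic behind that expectation, using the 2 KB rows above (a simplified model; the assumption that the two legs of a VM mirror are written in parallel is mine):

    # 2 KB write service times measured above, in ms
    local_leg = 0.66    # baseline: local array
    remote_leg = 0.73   # test 1: distant array

    # VM mirror: both legs are issued in parallel, so the write completes
    # when the slower (remote) leg acknowledges.
    vm_mirror = max(local_leg, remote_leg)
    print("expected VM mirror I/O time: %.2f ms" % vm_mirror)  # ~0.73 ms

Compare that with the 1.77 ms measured for SRDF below, where the local cache write and the remote propagation are effectively serialized.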

Test 2: local raw device on a DMX, SRDF mirrored
Throughput = 1 PowerPath link throughput x 2

Block size (KB)    I/O/s    I/O time (ms)    MB/s
2                  566      1.77             1.1
5                  562      1.78             2.7

This is where it gets interesting: SRDF more than doubles the I/O time and cuts the throughput by more than half.

Conclusion: when you need log file write performance in order to minimize log file sync wait times, use a volume manager (including ASM) rather than SRDF. I believe the same kind of result can be expected with EVA or XP Continuous Access. SRDF-mirrored I/Os are also bound to suffer more as the write load on the storage arrays increases, because SRDF mirroring is usually performed through dedicated ports which carry all of the writes sent to the array. This bottleneck does not exist for volume manager replication.