Christian Bilien’s Oracle performance and tuning blog

February 10, 2007

Join the BAARF Party (or not) (1/2)

Filed under: Oracle,Storage — christianbilien @ 6:38 pm

This is the title of the last chapter of “Oracle Insights, Tales of the Oak Table”, which I just finished reading. The “Battle Against Any Raid Five” party was also mentioned in “Oracle Wait Interface: A Practical Guide to Performance Diagnostics & Tuning”. So is RAID 5 really so bad? I’ll start with some generalities, and in a second post I’ll present some stripe aggregation tests I ran.

1. Large reads, i.e. reads of a full stripe group, can be done in parallel by reading from the four disks that contain the stripe group (assuming a 4+1 group).

2. Small reads, i.e. reads of one stripe unit, exhibit good performance because they tie up only one disk, allowing other small reads to other disks to proceed in parallel.

3. Large writes, i.e. writes of a full stripe group, require writing to five disks: the four data disks and the parity disk for the stripe unit. A RAID 5 MR3 optimization is implemented, for example, in EMC Clariions such as the CX700: it delays writing from the cache until a RAID 5 stripe has filled, at which time a modified RAID 3 write (MR3) is performed. The steps are: the entire stripe is XORed in memory to produce a parity segment, then the entire stripe, including the new parity, is written to the disks. In comparison with a mirrored stripe scheme (RAID 10), when writing a full stripe the RAID 5 engine writes to N+1 disks, while RAID 10 writes to 2*N disks.

– Large I/O stripe detection: when a large I/O is received, the RAID engine detects whether it can fill a stripe and, if so, writes the stripe out in MR3 fashion.

– Sequential write detection: the RAID engine detects data being written sequentially to the LUN and delays flushing the cache pages for a stripe until the sequential writes have filled it.
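The MR3 full-stripe write described above can be sketched in a few lines. This is a minimal illustration, not the Clariion implementation; the 4+1 layout and byte-sized stripe units are assumptions for the example:

```python
from functools import reduce

def mr3_full_stripe_write(stripe_units):
    """Modified RAID 3 (MR3) style full-stripe write: XOR all data
    stripe units in memory to produce the parity segment, then write
    data plus parity in one pass. For N data disks this is N+1 disk
    writes, versus the 2*N writes a RAID 10 mirror would need."""
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                    stripe_units)
    return stripe_units + [parity]  # N data writes + 1 parity write

# 4+1 example: four 8-byte stripe units
units = [bytes([i] * 8) for i in range(1, 5)]
writes = mr3_full_stripe_write(units)
assert len(writes) == 5  # RAID 5 4+1: five disk writes in total
```

Because the parity is computed in memory before anything is destaged, no disk reads are needed, which is what makes the full-stripe path so much cheaper than the small write path below.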

4. Small writes, i.e. writes of one stripe unit, require that the parity block for the entire stripe be recomputed. Thus a small write requires reading the old stripe unit and the old parity block (hopefully in parallel), computing the new parity block, and writing the new stripe unit and the new parity block in parallel. On a RAID 5 4+1, a write to one stripe unit therefore incurs four physical I/Os. This is known as the RAID 5 small write penalty.
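The read-modify-write sequence behind the small write penalty can be illustrated as follows. The byte-level XOR is the standard RAID 5 parity update rule; the function itself is just a sketch:

```python
def raid5_small_write(old_data, old_parity, new_data):
    """RAID 5 read-modify-write for a single stripe unit:
      1. read old data, 2. read old parity      (2 reads)
      3. new_parity = old_parity XOR old_data XOR new_data
      4. write new data, 5. write new parity    (2 writes)
    => 4 physical I/Os for one logical write (the small write penalty)."""
    new_parity = bytes(p ^ od ^ nd
                       for p, od, nd in zip(old_parity, old_data, new_data))
    io_count = 2 + 2  # two reads plus two writes
    return new_parity, io_count

parity, ios = raid5_small_write(bytes([1] * 4), bytes([7] * 4), bytes([2] * 4))
assert ios == 4
```

XORing out the old data and XORing in the new one is why only the old unit and the old parity need to be read, not the whole stripe.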

5. If the cache is saturated, RAID 1+0 allows more writes to the system before cache flushing increases response time.

6. Alignment: it is always desirable (but seldom feasible) to align I/Os on stripe element boundaries to avoid disk crossings (RAID stripe misalignment). A single I/O split across two disks actually incurs two I/Os on two different stripe elements. This is even more costly with RAID 5, as there is an additional stripe’s worth of parity to calculate. On Intel architecture systems (at least the Xeon; it may be different for the Itanium), the placement of the Master Boot Record (MBR) at the beginning of each logical device causes the subsequent data structures to be misaligned by 63 sectors (of 512 bytes each). The LUN alignment offset must be specified in Navisphere to overcome this problem.
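The effect of the 63-sector MBR shift can be checked with a little arithmetic. This is a sketch; the 64 KB stripe element size and the `disks_touched` helper are assumptions for illustration:

```python
SECTOR = 512
MBR_OFFSET_SECTORS = 63          # data starts 63 sectors into the LUN
STRIPE_ELEMENT = 64 * 1024       # assumed 64 KB stripe element size

def disks_touched(io_offset, io_size, align_offset=0):
    """Number of stripe elements (hence disks) a single I/O touches.
    Without an alignment offset, even an element-sized I/O issued at
    an element-sized logical boundary crosses two elements, because
    the MBR shifts everything by 63 sectors."""
    start = io_offset + MBR_OFFSET_SECTORS * SECTOR - align_offset
    end = start + io_size - 1
    return end // STRIPE_ELEMENT - start // STRIPE_ELEMENT + 1

# misaligned: one 64 KB write crosses two stripe elements (two disks)
assert disks_touched(0, 64 * 1024) == 2
# with the LUN alignment offset compensating for the MBR, it hits one
assert disks_touched(0, 64 * 1024, align_offset=63 * SECTOR) == 1
```

Every such crossing doubles the disk operations for the I/O, and on RAID 5 it also means two parity updates instead of one.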


Storage array cache sizing

Filed under: Oracle,Storage — christianbilien @ 11:28 am

I often have to argue, or even fight, with storage providers about cache sizing. Here is the set of rules I apply in proofs of concept and disk I/O modeling.

Write cache size:

1. Cache sizing: the real issue is not the cache size but how fast the cache can flush to disk. In other words, under sustained I/O, the cache will fill and I/O will bottleneck if this condition is met: the rate of incoming I/O is greater than the cache’s ability to flush.
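This condition can be put into a back-of-the-envelope formula. A minimal sketch, assuming steady incoming and destage rates (the figures are made up for the example):

```python
def seconds_until_cache_full(cache_mb, incoming_mb_s, flush_mb_s):
    """Under sustained writes, the cache fills whenever the incoming
    rate exceeds the flush (destage) rate; cache size only buys time.
    Returns how long the cache can absorb the excess before forced
    flushing throttles response time (inf if the flush keeps up)."""
    excess = incoming_mb_s - flush_mb_s
    if excess <= 0:
        return float("inf")  # cache never fills under this load
    return cache_mb / excess

# 4 GB of write cache, 300 MB/s incoming, 200 MB/s destage capacity:
# doubling the cache would only double the grace period, it would not
# remove the bottleneck
assert seconds_until_cache_full(4096, 300, 200) == 40.96
```

This is why the destage bandwidth of the backend, not the cache size, is the first number to negotiate.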

2. The size of a write cache matters when the array must handle a burst of write I/O. A larger cache can absorb bigger write bursts, such as database checkpoints, so that the burst can be contained without triggering a forced flush.

3. Write cache mirroring between the two SPs is normally activated for redundancy and single point of failure removal. That is, the write cache in each SP contains both the primary cached data for the logical units it owns and a secondary copy of the cache data for the LUNs owned by its peer SP. In other words, SP1’s write cache holds a copy of SP2’s write cache and vice versa. Overall, the real write cache size (seen from a performance point of view) is half the configured write cache.
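The halving is worth making explicit when comparing vendor data sheets. A trivial sketch (the two-SP layout matches the description above; the sizes are invented):

```python
def effective_write_cache(per_sp_cache_mb, sp_count=2):
    """With mirrored write caching, each SP holds a secondary copy of
    its peer's cached writes, so the write cache actually available
    to absorb new writes is half of the configured total."""
    configured_total = per_sp_cache_mb * sp_count
    return configured_total / 2

# two SPs with 2 GB of write cache each: 4 GB configured, 2 GB usable
assert effective_write_cache(2048) == 2048.0
```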

4. Write caching is used for RAID 5 full stripe aggregation (when the storage firmware supports this feature) and parity calculation, a very useful feature in many large environments. Lack of space must not force the cache to destage.

Read cache size:

1. Random reads have little chance of being in cache: the storage array read cache is unlikely to provide much value on top of the SGA and possibly the file system buffer cache (depending on FILESYSTEMIO_OPTIONS and the file system direct I/O mount options).

2. However, array read caches are very useful for prefetches. I can see several cases where this occurs:

  • Oracle direct reads (or direct path reads) are operating-system asynchronous reads (I’ll write a new post on this later). Assuming a well-designed backend, you will be able to use a large disk bandwidth to take advantage of large numbers of parallel disk I/Os.
  • Buffered reads (imports for example) will take advantage of both file system and storage array read-ahead operations.
