Christian Bilien's Oracle performance and tuning blog

Storage array bottlenecks


A switch is “oversubscribed” when the aggregate speed of its ports exceeds the internal bandwidth of the device; that said, many switches (at least the director class) are “non-blocking”, meaning that all ports can operate at full speed simultaneously. I’ll write a post one day on SAN bottlenecks, but for now here is a view of the main hardware bottlenecks encountered in storage arrays:

Host ports

The available bandwidth is 2 or 4Gb/s (200 or 400MB/s – FC uses 8b/10b encoding, so each byte travels as 10 bits on the wire) per port. As load balancing software (Powerpath, MPxIO, DMP, etc.) is most of the time used both for redundancy and load balancing, I/Os coming from a host can take advantage of the aggregated bandwidth of two ports. However, a given read can use only one path, whereas writes are duplicated, i.e. a host write ends up as one write on each host port.
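A quick back-of-the-envelope sketch of these numbers (the write-duplication model and figures are assumptions based on the description above, not measurements):

```python
# Usable FC host-port bandwidth, assuming 8b/10b encoding
# (each data byte is sent as 10 bits on the wire).

def fc_port_mb_per_s(gbit_per_s: float) -> float:
    """Usable payload bandwidth of one FC port in MB/s."""
    return gbit_per_s * 1000 / 10  # 10 wire bits per payload byte

def host_bandwidth(ports: int, gbit_per_s: float, write_fraction: float) -> float:
    """Aggregate host bandwidth over load-balanced ports.

    Writes are duplicated (one copy on each host port), so each MB
    written by the host costs 2 MB of port bandwidth.
    """
    raw = ports * fc_port_mb_per_s(gbit_per_s)
    return raw / (1 + write_fraction)

print(fc_port_mb_per_s(2))        # 200.0 MB/s per 2Gb/s port
print(host_bandwidth(2, 2, 0.0))  # 400.0 MB/s aggregated, reads only
print(host_bandwidth(2, 2, 1.0))  # 200.0 MB/s, pure writes (duplicated)
```

So with two 2Gb/s ports, a pure-write workload effectively sees a single port’s worth of bandwidth.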

Below is an example of a couple of host ports on an EMC DMX1000 (2Gb/s host ports).

Thanks to Powerpath, the load is well spread over the two ports. Both ports are running at about half of their bandwidth (but queuing theory shows that queues start to be non-negligible once utilization reaches 50%).
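The 50% remark can be illustrated with the simplest queueing model, M/M/1 (an assumption; real port and disk behaviour is more complex), where the average response time is R = S / (1 - ρ) for service time S and utilization ρ:

```python
# How queueing inflates response time as utilization grows,
# under an M/M/1 model: R = S / (1 - rho).

def response_time(service_ms: float, utilization: float) -> float:
    """Average response time (service + queueing) under M/M/1."""
    assert 0 <= utilization < 1, "model diverges at 100% utilization"
    return service_ms / (1 - utilization)

# Hypothetical 10 ms service time, swept across utilizations:
for rho in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"rho={rho:.1f}  R={response_time(10, rho):6.1f} ms")
```

Response time doubles at 50% utilization and grows steeply beyond it, which is why queues stop being negligible around that point.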

Array service processors

Depending on the hardware maker, service processors may either be bound to specific volumes or be accessed by any of the array SPs. I wrote a blog entry some time ago about SP binding. Higher-end arrays such as the DMX and XP do not bind LUNs to SPs, whilst the Clariion and EVA do.

Back end controllers

Back end controllers are the access points for the disk FC loops. They also have a finite throughput, in practice limited by the fact that at any given point in time, the dual-ported disks within an FC-AL loop only allow two senders and two receivers. Below is a DMX1000 controller utilization rate, where almost all disks are running at a minimum of 60% of their available bandwidth, with 30 RAID 10 disks in each loop.
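To get a feel for why a shared back-end loop becomes a bottleneck, here is a rough per-disk share calculation, assuming a 2Gb/s loop (about 200MB/s of payload after 8b/10b encoding) and the 30 disks per loop mentioned above; the figures are illustrative:

```python
# Rough per-disk bandwidth share on a shared FC-AL back-end loop.
# Assumptions: 2Gb/s loop (~200 MB/s payload), 30 disks per loop.

LOOP_MB_S = 200.0      # 2Gb/s loop after 8b/10b encoding
DISKS_PER_LOOP = 30

per_disk = LOOP_MB_S / DISKS_PER_LOOP
print(f"{per_disk:.1f} MB/s per disk if all disks stream concurrently")
```

A single modern disk can stream far more than that on its own, so a busy loop saturates well before the individual disks do.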




From a host standpoint, disks can sustain a much higher utilization rate when they sit behind a cache than when they are accessed directly: remember that a disk running at a utilization rate of 50% will queue on average one I/O out of two (seen from the host, the I/O service time will then be on average 50% higher than the disk service time). It is not uncommon to measure disk utilization rates of nearly 100%. This only becomes a problem when the array cache stops buffering the I/Os because its space is exhausted.
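A toy model of the cache effect (all figures are assumptions for illustration): while the write cache has free space, the host sees the fast cache service time and destaging to disk happens asynchronously; once the cache fills, host I/Os wait on the busy disk, here approximated with M/M/1 queueing:

```python
# Why cached disks tolerate near-100% utilization: the host sees the
# cache service time until the cache runs out of space.
# All numbers below are illustrative assumptions.

CACHE_SERVICE_MS = 0.5   # assumed host-visible time for a cached write
DISK_SERVICE_MS = 8.0    # assumed raw disk service time

def host_time(utilization: float, cache_has_space: bool) -> float:
    """Host-visible I/O time in ms."""
    if cache_has_space:
        return CACHE_SERVICE_MS          # destaging is asynchronous
    # Cache full: the host waits for the disk, M/M/1 approximation.
    return DISK_SERVICE_MS / (1 - utilization)

print(host_time(0.95, True))             # 0.5 ms despite a 95%-busy disk
print(round(host_time(0.95, False), 1))  # ~160 ms once the cache is full
```

The cliff between the two numbers is exactly the symptom described above: everything looks fine until the cache space is exhausted, then response times explode.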