Capacity planning's fundamental laws are seldom used to identify benchmark flaws, even though some of these laws are almost trivial. Worse, some performance assessments comment on each performance output individually without realizing that physical laws bind them together.
Perhaps the simplest of them all is the Utilization Law, which states that the utilization of a resource equals the product of its throughput and its average service time: U = X × S. Utilization is the fraction of time the resource is busy serving requests. CPU utilizations are given by sar -u; the individual disk utilizations in a storage array come from the storage vendor's proprietary tools, while sar -d or iostat can be used to collect data for internal disks.
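The Utilization Law can be sketched in a few lines. The function name and the sample figures below are illustrative choices, not values from the article's measurements:

```python
# Minimal sketch of the Utilization Law: U = X * S.
# The numbers are hypothetical, chosen only to illustrate the arithmetic.

def utilization(throughput_per_s: float, avg_service_time_s: float) -> float:
    """Utilization of a resource = throughput x average service time."""
    return throughput_per_s * avg_service_time_s

# A disk serving 50 I/Os per second at 8 ms average service time
# is busy 40% of the time.
u = utilization(50, 0.008)
print(f"{u:.0%}")
```

Note that U is dimensionless: throughput in requests per second multiplied by service time in seconds per request leaves a pure fraction.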
Take a disk serving a fairly steady load that I picked up on an HP-UX test system with no storage array attached. sar -d gave the following data:
This formula can be verified for the three data points I picked up:
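A verification of this kind is straightforward to script. The samples below are hypothetical stand-ins in the style of sar -d output (%busy, I/Os per second, average service time in milliseconds), not the article's actual measurements:

```python
# Hypothetical sar -d style samples: (%busy, r+w/s, avg serv in ms).
# These figures are illustrative, not the measurements from the text.
samples = [
    (24.0, 30.0, 8.0),
    (48.0, 60.0, 8.0),
    (36.0, 45.0, 8.0),
]

for busy_pct, iops, serv_ms in samples:
    # Utilization Law: U = X * S, expressed here as a percentage.
    predicted = iops * serv_ms / 1000 * 100
    print(f"measured {busy_pct:.0f}%  predicted {predicted:.0f}%")
```

On real data the measured and predicted columns will rarely agree exactly, since %busy and the average service time are themselves sampled over an interval, but they should track each other closely when the load is steady.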
Let's replace our disk with a single volume that encompasses a whole RAID group inside an array, and consider that this RAID group is dedicated to a single batch. Other RAID groups participate in the transactions, but we'll focus on the most accessed one. If our transaction needs to make 10 synchronous visits (meaning each of them has to wait for the previous one to complete) to the most accessed volume in the storage array, and each of the visits "costs" 10 ms, we'll have