Christian Bilien’s Oracle performance and tuning blog

January 28, 2008

An upper bound on transaction throughput

Filed under: Models and Methods — christianbilien @ 9:46 pm

The fundamental laws of capacity planning are seldom used to identify benchmark flaws, even though some of these laws are almost trivial. Worse, some performance assessments comment on individual performance figures without realizing that physical laws bind them together.

Perhaps the simplest of them all is the Utilization Law, which states that the utilization of a resource is equal to the product of its throughput and its average service time. The utilization is the fraction of time the resource is busy serving requests. CPU utilization is given by sar -u, and the utilization of the individual disks in a storage array by the storage vendor's proprietary tools; sar -d or iostat can be used to collect data for internal disks.

Take a disk serving a fairly steady load, which I picked up on an HP-UX test system with no storage array attached. sar -d gave the following data:

Time   Utilization (%)   Service time (ms)   Reads/s   Writes/s
15:07  70.8              10                  4         67
15:12  67.3              12.3                30.6      24.1
15:17  67.8              12                  33.7      22.7

The Utilization Law can be verified for the 3 points I picked up (with the service time expressed in seconds, so that the utilization comes out as a fraction):

Utilization = (reads/s + writes/s) x service time
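
For instance, for the 15:07 sample:

(4 + 67) x 0.010 = 0.71

i.e. 71%, which matches the 70.8% reported by sar within rounding. The other two samples check out the same way: (30.6 + 24.1) x 0.0123 ≈ 0.673 and (33.7 + 22.7) x 0.012 ≈ 0.677.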

This law can be used to define an asymptotic bound, which is an optimistic bound since it indicates the best possible performance. If each application transaction spends {D}_{k} seconds on disk k, and X is the application throughput, the Utilization Law can be rewritten for disk k as {U}_{k}=X*{D}_{k}. An increase in the arrival rate can be accommodated as long as none of the disks is saturated (i.e. reaches a utilization of 100%). The throughput bound {X}_{max} is therefore the arrival rate at which one of the disks saturates. If {D}_{max} is the largest of the per-disk demands, the upper bound on the transaction throughput is reached when that disk has a utilization of 1 (100%):


{D}_{max}*{X}_{max}=1

therefore

{X}_{max}=\frac{1}{{D}_{max}}

Let’s replace our disk by a single volume which encompasses a whole RAID group inside an array, and consider that this RAID group is dedicated to a single batch. Other RAID groups participate in the transactions, but we’ll focus on the most accessed one. If our transaction needs to make 10 synchronous visits (meaning each of them has to wait for the previous one to complete) to the most accessed volume in the storage array, and each visit “costs” 10ms, we’ll have {D}_{max}=100ms=0.1s. The best possible throughput we can get is 10 transactions per second.
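
A minimal sketch of this bound calculation in Python, using hypothetical per-volume demands (the volume names, visit counts and service times below are illustrative, not measured values):

```python
# Upper bound on transaction throughput derived from per-volume demands.
# D_k = visits_k x service_time_k is the time a transaction spends on volume k.

# Hypothetical demands: {volume: (synchronous visits per transaction, service time in s)}
visits_and_service_times = {
    "raid_group_1": (10, 0.010),  # most accessed volume: 10 visits of 10 ms each
    "raid_group_2": (4, 0.008),   # illustrative values for the other volumes
    "raid_group_3": (2, 0.012),
}

# Per-volume demand D_k in seconds
demands = {vol: visits * svc
           for vol, (visits, svc) in visits_and_service_times.items()}

# The bound is reached when the most loaded volume saturates: X_max = 1 / D_max
d_max = max(demands.values())
x_max = 1.0 / d_max

print(f"D_max = {d_max * 1000:.0f} ms -> X_max = {x_max:.1f} transactions/s")
# With these figures, D_max = 100 ms, so at best 10 transactions per second.
```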


August 16, 2007

Workload characterization for the uninformed capacity planner

Filed under: HP-UX,Models and Methods,Oracle,Solaris,Storage — christianbilien @ 7:32 pm

Doug Burns initiated an interesting thread a while ago about user or application workloads, their meanings and the difficulties associated with their determination. But workload characterization is both essential and probably the hardest and most error-prone part of the whole forecasting process. Models that fail to validate (i.e. are not usable) most of the time fall into one of these categories:

  • The choice of characteristics and parameters is not relevant enough to describe the workloads and their variations
  • The analysis and reduction of performance data was incorrect
  • Data collection errors, misinterpretations, etc.

Unless you already know the business environment and the applications, or some previous workload characterization is already in place, you are facing a blank page. You can always try to partition the workload smartly along functional lines, but this effort is unfortunately often preposterous and doomed to failure because of time constraints. So what can be done?

I find clustering analysis a good compromise between time to deliver and faithfulness to the business transactions. Caveat: this method ignores any data cache (storage array, Oracle and file system caches, etc.) and locks/latches or any other waits that are not resource waits.

A simple example will explain how it works:

Let’s assume that we have a server with a single CPU and a single I/O path to a disk array. We’ll represent each transaction running on our server by a couple of attributes: the service time it requires from each of the two physical resources. In other words, each transaction will require, in absolute terms, a given number of seconds of presence on the disk array and another number of seconds on the CPU. We’ll call a required service time a “demand on a service center” to avoid confusion. The sum of those two values would represent the response time on an otherwise empty system, assuming no interaction occurs with any other external factor. As soon as you start running concurrent transactions, you introduce on one hand waits on locks, latches, etc., and on the other hand queues on the resources: the sum of the demands is no longer the response time. Any transaction may of course visit each resource several times: the sum of the times spent using each service center will simply equal the demand.

Let us consider that we are able to collect the demands each single transaction j requires from our two service centers. We’ll name {D}_{j1} the CPU demand and {D}_{j2} the disk demand of transaction j. Transaction j can now be represented by a two-component workload: {w}_{j}=({D}_{j1},{D}_{j2}). Let’s now start the collection. We’ll collect over time every {w}_{j} that goes through the system. Below is a real 300-point collection on a Windows server. I cheated a little bit because there are four CPUs on this machine, but we’ll just say a single queue represents the four CPUs.

[Figure: scatter plot of the 300 collected (CPU demand, disk demand) points]

The problem is now obvious: there is no natural grouping of transactions with similar requirements. Another attempt can be made using natural (Napierian) logarithms to distort the scales:

[Figure: the same 300 points plotted on logarithmic scales]

This is not good enough either to identify meaningful workloads.

The Minimum Spanning Tree (MST) method can be used to perform successive fusions of data until the wanted number of representative workloads is obtained. It begins by considering each workload to be a single-point cluster. Next, the two clusters with the minimum distance are fused into one. The process iterates until the desired number of clusters is reached. A minimal sketch of this reduction is given after the figure below.

  • Distance: let’s assume two workloads represented by {w}_{i}=({D}_{i1},{D}_{i2},...,{D}_{iK}) and {w}_{j}=({D}_{j1},{D}_{j2},...,{D}_{jK}). I moved from just two attributes per workload to K attributes, which will correspond to service times at K service centers. The Euclidean distance between the two workloads will be d=\sqrt{\sum_{n=1}^{K}({D}_{in}-{D}_{jn})^{2}}.
  • Each cluster is represented at each iteration by its centroid whose parameter values are the means of the parameter values of all points in the cluster.

    Below is a 20-point reduction of the 300 initial points. In real life, thousands of points are used to avoid outliers and to average the transactions.

    [Figure: the 20 representative workloads obtained by clustering]
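
Here is a minimal sketch of that reduction, using synthetic (CPU demand, disk demand) pairs in place of the real collection and a naive centroid-based fusion loop in the spirit of the method described above, rather than an optimized MST implementation; all names and values are illustrative only:

```python
import math
import random

# Synthetic stand-ins for the real 300 collected workloads w_j = (D_j1, D_j2),
# i.e. (CPU demand, disk demand) in seconds. Illustrative values only.
random.seed(1)
workloads = [(random.lognormvariate(-3.0, 1.0), random.lognormvariate(-2.5, 1.2))
             for _ in range(300)]

def centroid(cluster):
    """Component-wise mean of the demands of all points in a cluster."""
    k = len(cluster[0])
    return tuple(sum(point[i] for point in cluster) / len(cluster) for i in range(k))

def distance(a, b):
    """Euclidean distance between two centroids."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def reduce_workloads(points, n_clusters):
    """Fuse the two closest clusters until n_clusters representative workloads remain."""
    clusters = [[p] for p in points]            # each workload starts as its own cluster
    while len(clusters) > n_clusters:
        cents = [centroid(c) for c in clusters]
        best = None                             # (distance, i, j) of the closest pair
        for i in range(len(cents)):
            for j in range(i + 1, len(cents)):
                d = distance(cents[i], cents[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters[j])         # fuse the closest pair of clusters...
        del clusters[j]                         # ...and drop the absorbed one
    return [centroid(c) for c in clusters]      # one representative workload per cluster

for cpu_demand, disk_demand in reduce_workloads(workloads, 20):
    print(f"CPU demand: {cpu_demand:.4f} s   disk demand: {disk_demand:.4f} s")
```

In practice you would feed real measured demands in and use far more points (and probably a library implementation of hierarchical clustering), but the fusion loop is enough to show how 300 points collapse into 20 representative workloads.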

February 16, 2007

Service time at Disk Arrays

Filed under: Models and Methods,Storage — christianbilien @ 7:20 pm

Reliability and increased performance in disk access are obtained by using RAID arrays. Many options and trade-offs exist when it comes to organizing those disk arrays. Models can easily be built in order to avoid common pitfalls, justify RAID 10, etc. As I could not write a post entry because of the math-type figures, I went for a note instead: it is intended to present the basic issues of I/O bottleneck removal and the fundamental laws on which the I/O models are based: http://www.enezis.com/Disk_maths.pdf
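
As an illustration of the kind of fundamental law such models rest on (a generic example, not necessarily taken from the note), a single disk modelled as an open single-server (M/M/1) queue has an average response time of

R=\frac{S}{1-U}

where S is the service time and U the utilization: at 50% utilization the response time is already twice the service time, and it grows without bound as the disk approaches saturation.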
