Christian Bilien’s Oracle performance and tuning blog

July 27, 2007

Oracle DB operations (cautiously) venturing into the ITIL world

Filed under: ITIL,Oracle — christianbilien @ 9:16 pm

My interest in ITIL started a couple of years ago, when activities I had practiced routinely for more than 15 years started to appear in large IT departments as processes within a larger framework of best practices. My initial interest went to Availability, IT Service Continuity and Capacity Management, which are the ITIL processes I had practiced from the technical side. I then expanded my knowledge to the other processes, and I am now studying for the Service Manager certification. Although I am an ITIL practitioner, I reckon I’ll need two to three months of evening time to get ready for the exams. Incidentally, this does not help me keep up with my other nightly activities, such as blogging…

ITIL is big in the UK and in Northern Europe, and a number of organizations I know in the US financial world also adopted ITIL years ago and have now reached the first levels of maturity in several key ITIL processes.

It is beyond the scope of this post to explain what ITIL is (the official publications are the reference; V3 came out in April 2007). ITIL is also one of those buzzwords used out of context in many press articles whenever a link has to be established between the IT user perception and the IT deliverables. Just out of curiosity, I tried to figure out where ITIL stands in the Oracle database world.

  • My first encounter with ITIL within the Oracle community was in January 2007, when I downloaded from the RAC SIG site a presentation by Kirk McGowan, the “Rac Pack” technical director at Oracle. He called his presentation “Rac & ASM best practices”, which led me to initially believe it would be the usual blurb about installation procedures one can otherwise find in the Oracle books. But it wasn’t. I hope I do not over-summarize his presentation by saying it boiled down to “why do RAC implementations fail?”. The answer was: “Operational process requirements were not met” in terms of change management, availability and capacity planning, SLAs, etc., despite the fact that the ITIL framework (among others) had been there for years.
  • Second encounter: the Siebel Help Desk. It is hardly surprising that ITIL gets mentioned all over the place in the marketing materials, as the Service Desk is one of the ITIL functions.
  • Third, Oracle started to label some existing functions of Enterprise Manager as contributors to ITIL processes. Incident and Problem Management are shown within the Siebel perimeter, while you’ll find EM servicing configuration, change and release management, as well as service level compliance monitoring.
  • Fourth: the marketing stuff. On demand, grid, virtualization, etc., all labeled “ITIL ready” (what on earth could that mean?). No need to elaborate.

A somewhat more sarcastic view for the ITIL skeptics:

I occasionally write in “IT-Expert”, a French IT magazine. I wrote an article about the coherence and relationships of the ITIL functions and processes in the July-August issue.


July 17, 2007

Hooray for the 11g ASM: “Fast” Mirror Resync at last!

Filed under: Oracle,RAC — christianbilien @ 9:05 pm

Forgive me if I sound over-enthusiastic: I already mentioned in RAC geo clusters on HP-UX and in RAC geo clusters on Solaris how annoying the absence of incremental mirror rebuild was for ASM-based RAC geo clusters. In fact, the need for a full rebuild of the mirror structure writes off this architecture for databases over a few hundred GB. Here is a situation very common to me: you have a two-site RAC/ASM geo cluster made of one storage array and one node on each site. Site A has to be taken down for cooling maintenance (sites are actually taken down at least 5 times a year), but the surviving site B must be kept up to preserve the weekend operations. The ASM instances on site A are gracefully taken down, but all of the mirrored ASM failure groups have to be fully rebuilt when they are brought back up. The more databases, the more terabytes have to move around.
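To get a feel for what a full rebuild costs, here is a rough back-of-the-envelope sketch. The 200 MB/s effective copy rate is an assumption for illustration only, not a measured figure:

```python
def full_rebuild_hours(mirrored_tb, copy_rate_mb_s=200.0):
    """Rough time to re-copy an entire mirrored failure group.

    mirrored_tb    : terabytes that must be fully re-mirrored
    copy_rate_mb_s : assumed effective inter-site copy rate (illustrative)
    """
    mb = mirrored_tb * 1024 * 1024           # TB -> MB
    return mb / copy_rate_mb_s / 3600        # seconds -> hours

# 2 TB of ASM failure groups at an assumed 200 MB/s: roughly 2.9 hours of copying
print(round(full_rebuild_hours(2), 1))
```

The point is the linearity: every additional terabyte of mirrored storage adds the same slice of exposure time, during which the surviving site runs unprotected.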

An actual outage is an even more dramatic opportunity for trouble: assuming the cluster transfers the application load to your backup site, you nonetheless have to wait for the outage cause to be fixed, plus the ASM rebuild time, before you are back to normal. Just pray you don’t have a second outage on the backup site while the first site is busy rebuilding its ASM failure groups.

The consequences of this single ASM weakness reach as far as having to use a third-party cluster on a SAN just to be able to take advantage of VxVM Dirty Region Logging (DRL) for large databases. Having to make a “strategic” decision (third-party cluster or not) with such far-reaching consequences solely on this basis is to me a major annoyance.

There are a few promising words in the Oracle 11g new features area posted on the OTN sites about a “Fast Mirror Resync” in the Automatic Storage Management New Features Overview, which should be the long-awaited DRL-like ASM rebuild feature. ASM can now resynchronize only the extents that have been modified during the outage. It also looks like a failure group has a DISK_REPAIR_TIME attribute that defines a window in which the failure can be repaired and the mirrored failure group storage array brought back online, after which an “ALTER DISKGROUP DISK ONLINE” starts the resynchronization process. What happens if you exceed DISK_REPAIR_TIME is not stated.

July 2, 2007

Asynchronous checkpoints (db file parallel write waits) and the physics of distance

Filed under: HP-UX,Oracle,Solaris,Storage — christianbilien @ 5:15 pm

The first post ( “Log file write time and the physics of distance” ) devoted to the physics of distance targeted log file writes and “log file sync” waits. It assumed that:

  • The percentage of the bandwidth occupied by all the applications sharing the pipe was negligible
  • No other I/O subsystem waits were occurring.
  • The application streams writes, i.e. it is able to issue an I/O as soon as the channel is open.

This set of assumptions is legitimate if the application is indeed “waiting” (i.e. not consuming CPU) on log file writes but not on any other I/O-related events, and if the fraction of available bandwidth is large enough for a frame not to be delayed by another application sharing the same pipe, such as an array replication stream.

Another common Oracle wait event is the checkpoint completion wait (db file parallel write). In this post I’ll try to explore how the replication distance factor influences checkpoint durations. Streams of small transactions make the calling program synchronous with the log file writes, but checkpoint writes are by nature much less critical because they are asynchronous from the user program’s perspective. They only hurt response time when “db file parallel write” waits start to appear. The word “asynchronous” could be a source of confusion, but it is not here: the checkpoint I/Os are doubly asynchronous, because the I/Os are also asynchronous at the DBWR level.

1. Synchronous writes: relationship of I/O/s to throughput and percent bandwidth

We did some maths in figure 3 of “Log file write time and the physics of distance” aimed at calculating the time to complete a log write. Let’s do the same with larger writes over a 50 km distance on a 2 Gb/s FC link. We’ll also add a couple of columns: the number of I/O/s and the fraction of the bandwidth used. 2 Gb/s = 200 MB/s because Fibre Channel uses 8b/10b encoding, i.e. 10 bits on the wire per byte of payload.


Figure 1: throughput and percent bandwidth as a function of the I/O size (synchronous writes)

[table data lost: I/O size | Time to load (ms) | Round trip latency (ms) | Time to complete an I/O (ms) | I/O/s | % bandwidth]
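The numbers behind figure 1 can be reproduced with a few lines. A minimal sketch of the model (50 km, 2 Gb/s ≈ 200 MB/s usable, ~5 µs/km in fiber; all figures come from the model, not from measurements):

```python
def sync_write_profile(io_kb, km=50, mb_s=200.0, us_per_km=5.0):
    """Synchronous remote write: time to clock the frame out plus round-trip latency."""
    load_ms = io_kb / (mb_s * 1024) * 1000        # time to load the frame onto the link
    rtt_ms = 2 * km * us_per_km / 1000            # out and back at ~5 us/km
    total_ms = load_ms + rtt_ms                   # one full synchronous write
    iops = 1000 / total_ms                        # one write at a time, no overlap
    pct_bw = iops * io_kb / (mb_s * 1024) * 100   # fraction of the 200 MB/s pipe used
    return total_ms, iops, pct_bw

for size in (2, 16, 128):
    t, iops, bw = sync_write_profile(size)
    print(f"{size:3d} KB: {t:.2f} ms/IO, {iops:5.0f} IO/s, {bw:4.1f}% of bandwidth")
```

The striking point: for small synchronous writes the round-trip latency dominates and only a sliver of the pipe is used, while large writes amortize the round trip and drive the bandwidth fraction up.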
So what change should we expect to the above results if we change from synchronous writes to asynchronous writes?

2. Asynchronous writes

Instead of firing one write at a time and waiting for completion before issuing the next one, we’ll stream writes one after the other, leaving no “gap” between consecutive writes.

Three new elements will influence the expected maximum number of I/O streams in the pipe:

  • Channel buffer-to-buffer credits
  • The number of outstanding I/Os (if any) the controller can support. This is for example 32 for an HP EVA
  • The number of outstanding I/Os (if any) the system, or a SCSI target, can support. On HP-UX, the default number of I/Os that a single SCSI target will queue up for execution is for example 8; the maximum is 255.

Over 50 km, and knowing that the speed of light in fiber corresponds to about 5 microseconds per kilometer, the relationship between the I/O size and the packet length in the pipe is shown in figure 2:

Figure 2: relationship between the I/O size and the packet length in the Fibre Channel pipe

[table data lost: I/O size | Time to load | Packet length]
The packet length for 2KB writes requires a capacity of 25 outstanding I/Os to fill the 50 km pipe, but only one I/O can be active for 128KB packet streams. Again, this statement only holds true if the “space” between frames is negligible.

Assuming a zero gap between 2KB frames, an observation post would see an I/O pass through every 10 µs, which corresponds to 100,000 I/O/s. At that point replication is no longer the bottleneck: other limiting factors, such as the storage arrays and the computers at both ends, take precedence. However, only a single 128KB packet will be in the pipe at any given time: the next has to wait for the previous one to complete. Sounds familiar, doesn’t it? When the packet length exceeds the window size, replication won’t give any benefit to asynchronous writes, because asynchronous writes behave synchronously.
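The window arithmetic above can be sketched directly. Same illustrative model as before (200 MB/s link, ~5 µs/km in fiber, zero inter-frame gap):

```python
import math

SPEED_KM_PER_US = 1 / 5.0   # ~5 us per km in fiber -> 0.2 km per us

def packet_length_km(io_kb, mb_s=200.0):
    """Physical length a frame occupies in the fiber while being clocked out."""
    load_us = io_kb / (mb_s * 1024) * 1e6   # time to load the frame, in us
    return load_us * SPEED_KM_PER_US

def streams_in_pipe(io_kb, km=50):
    """How many back-to-back writes fit in the pipe before the first one lands."""
    return max(1, math.floor(km / packet_length_km(io_kb)))

print(streams_in_pipe(2))     # -> 25: it takes ~25 outstanding 2KB I/Os to fill 50 km
print(streams_in_pipe(128))   # -> 1: a 128KB frame is longer than the pipe itself
```

This is the whole argument in two calls: once the frame is physically longer than the link, queue depth and buffer-to-buffer credits stop helping, and the “asynchronous” stream degenerates into one synchronous write at a time.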

