Christian Bilien’s Oracle performance and tuning blog

September 28, 2007

Is dbms_stats.set_xxx_stats insane tuning? (part 2/2)

Filed under: Oracle — christianbilien @ 1:37 pm

This is the second post about using dbms_stats.set_xxx_stats (and feeling guilty about it). As a preparatory step, the first post gave some straightforward cost calculations for an oversimplified (yet real) example of an index scan and the underlying table scan.

select * from mytable d where status_code='AAAA';

status_code is indexed by MYTABLEI000003; no histograms were calculated. The “adjusted dbf_mbrc” (obtained by dividing the number of blocks below the high water mark by the full table scan cost) is 6.59. This is all detailed here.

Things I did not emphasize in this case:

The number of distinct values of status_code is 6 and never varies, whatever the number of rows in the table. The selectivity of status_code will always be 1/6, which is quite high (bad selectivity).

The distribution is very uneven: 90% of the rows have the same status code. Bind peeking is enabled, but the select call does not necessarily go after the low cardinality values.

The table scan cost is now BETTER (lower) than the index scan cost.

optimizer_index_cost_adj is set to 50 (the final index cost is the index cost calculated as explained in the first post, divided by 2).
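For scale (an illustrative check using the figures from the first post, not fresh output from this system): a raw index cost of 20872 becomes round(20872 * 50/100) = 10436 once optimizer_index_cost_adj = 50 is applied.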

 

The index access cost of MYTABLEI000003 follows this formula:

Cost=blevel + ceiling(leaf_blocks * effective index selectivity) + ceiling (clustering factor * effective table selectivity)

which can be approximated to:

cost=(leaf blocks + clustering factor)/6
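As a sanity check with the first post's figures (again an illustration): (8576 + 116639)/6 ≈ 20869, within a few units of the exact cost of 20872, so the shortcut holds.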

I went back to old statistics by using various copies of the database made from production to development or user acceptance environments to get a picture of what the statistics were in the past. I then plotted the index vs. the full table scan costs as a function of the number of blocks below the high water mark:

[Figure ftsvsidx11.GIF: index scan cost vs. full table scan cost as a function of the number of blocks below the high water mark]

 

Here is a zoom of the last two points:

[Figure ftsvsidx12.GIF: zoom on the last two points]


The FTS cost was slightly above the index cost in the past, but it is now under it. This is unfortunate, as the actual FTS execution time is always much higher here than when the index is chosen.

So what can be done?

Lowering optimizer_index_cost_adj: the whole instance would be impacted.

Creating an outline: I oversimplified the case by presenting just one select, but there are actually many calls derived from it; in any case, the index scan is always the good option.

Hints and code changes: this is a vendor package, so the code cannot be amended.

This is where the use of dbms_stats.set_column_stats or dbms_stats.set_index_stats seems to me to be a valid option. True, tweaking the statistics goes against common sense, and may be seen as insane by some. However, we are in a blatant case where:

  1. a slight variation in the statistics calculation causes plans to go from good to (very) bad
  2. the modified statistics will only impact a known perimeter
  3. we do not have any other satisfactory solution

A last question remains: because of the relative weights involved in the index cost calculation, it is only practicable to make the index more attractive when

Clustering Factor x selectivity

is lowered.

Of the two factors, I prefer to modify the clustering factor, because the optimizer calculates the selectivity either from the density or from num_distinct depending upon the presence of histograms. One last word: the clustering factor you set will of course be wiped out the next time statistics are calculated on this index.
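If the statistics are gathered by a scheduled job, a 10g-only guard is to lock the table statistics so that manual settings survive the next gather. A minimal sketch (remember to unlock before the next intended gather):

begin
dbms_stats.lock_table_stats(
ownname => 'MYOWNER',
tabname => 'MYTABLE');
end;
/
-- later: dbms_stats.unlock_table_stats(ownname => 'MYOWNER', tabname => 'MYTABLE');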

 

Modify the index clustering factor:

 

begin
dbms_stats.set_index_stats(
ownname => 'MYOWNER',
indname => 'MYTABLEI000003',
clstfct => 100000);
end;
/

Modify the column density:

 

begin
dbms_stats.set_column_stats(
ownname => 'MYOWNER',
tabname => 'MYTABLE',
colname => 'STATUS_CODE',
density => 0.0001);
end;
/
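If you want a way back that does not depend on a full re-gather, the original statistics can be exported to a user statistics table beforehand and re-imported later. A sketch, assuming a (hypothetical) MYSTATS statistics table:

begin
dbms_stats.create_stat_table(ownname => 'MYOWNER', stattab => 'MYSTATS');
dbms_stats.export_index_stats(ownname => 'MYOWNER', indname => 'MYTABLEI000003', stattab => 'MYSTATS');
dbms_stats.export_column_stats(ownname => 'MYOWNER', tabname => 'MYTABLE', colname => 'STATUS_CODE', stattab => 'MYSTATS');
end;
/
-- restore later with dbms_stats.import_index_stats / dbms_stats.import_column_stats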

 

 

 

September 20, 2007

Is dbms_stats.set_xxx_stats insane tuning? (part 1/2)

Filed under: Oracle — christianbilien @ 11:04 am

 

The challenge is a very common one: changing the plan of SQL calls

– generated by a vendor package (i.e. you cannot have the code easily changed, so no hints can be used)
and
– where the SQL calls cannot be syntactically predicted, either because the calls are assembled according to user input criteria (i.e. no outlines) or because the set of calls is too numerous to handle one by one
and
– where I do not want to alter the initialization parameters (OPTIMIZER_INDEX_COST_ADJ and OPTIMIZER_INDEX_CACHING most of the time) because they apply to the whole instance
and
– where dbms_sqltune / SQL Advisor (10g) does not provide the right plan, or I am working on a previous version (*)

This is where dbms_stats.set_xxx_stats comes to my mind, with the fear that one of my interlocutors will start calling me insane once I start muttering “I may have another option”.

Some background might be necessary: ever since I first read, some years ago, the Wolfgang Breitling papers (http://www.centrexcc.com/) about the 10053 trace and the cost algorithm, followed by a growing enthusiasm while reading Jonathan Lewis’ “CBO fundamentals”, I have thought that setting statistics is a valid approach provided that three requirements are met:

  • There is hardly another way of getting to an acceptable plan (not even necessarily the “best”)
  • You know what you are doing and the potential consequences on other plans
  • The customer is psychologically receptive: he or she understands what you are doing, the reasons you are doing it that way and the need to maintain the solution or even to discard it in the future.

I’ll explain in this first part the basics to clarify my point. Jump directly to the second part (once it is written) if you already know about cost calculations (or risk being bored).

On the technical side, the most common use of dbms_stats.set_xxx_stats for me is to change index statistics via the set_index_stats procedure to make an index more attractive.

I’ll start with the most basic example to make my point. Let’s consider a select statement such as:

select * from mytable where status_code='AAAA';

mytable is not partitioned, status_code is indexed (non-unique) and no histograms were calculated.

In this particular case, having status_code compared to a bind value or a literal does not make any difference as far as cost calculation is concerned, but it does when the predicate is >, <, >=, etc. The most annoying bit is that you will see significant changes to the cost selectivity algorithms in 9i and in each 10g version when the predicates differ from a simple equality. Another glitch is the change in calculations depending on status_code being in bounds or out of bounds (i.e. being or not being between user_tab_col_statistics.low_value and user_tab_col_statistics.high_value). Check Wolfgang Breitling’s “A look under the hood of the CBO” and Jonathan Lewis’ book for more comprehensive research.

So let’s get back to our simple select * from mytable where status_code='AAAA';

We know the I/O cost of an index-driven access path will be:

Cost=
blevel +
ceiling(leaf_blocks * effective index selectivity) +
ceiling (clustering factor * effective table selectivity)

Blevel, the height of the index, is usually a small value compared to the two other factors. Blevel, leaf_blocks and clustering_factor are columns of the user_indexes view.
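They can be read straight from the dictionary; for instance (an illustration using the object names of this post):

select index_name, blevel, leaf_blocks, clustering_factor
from user_indexes
where table_name = 'MYTABLE';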

 

The effective index selectivity and the effective table selectivity are both calculated by the optimizer.

The effective index selectivity is found in a 9i 10053 trace file as IXSEL. In our simple select, the index selectivity can be obtained from user_tab_col_statistics or user_tab_columns (the Oracle documentation states that density and num_distinct from user_tab_columns are only maintained for backward compatibility).

The effective table selectivity is found as TB_SEL in the 9i 10053 trace (it is IX_SEL_WITH_FILTERS in 10g). With the “=” predicate, it is:

  • the density column if a histogram is used. You’ll find that density differs from 1/NUM_DISTINCT when histograms are calculated
  • 1/NUM_DISTINCT (of the filtered column) if not. In this case density would report 1/NUM_DISTINCT.

The third factor is often overwhelmingly dominant (it is not, for example, when fast full index scans can be used). It is essential to know which of DENSITY and NUM_DISTINCT is really used, especially in 10g where many databases are left with the default statistics calculations (with histograms).
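A quick way to see the inputs the optimizer has for the column (an illustrative query; num_buckets > 1 reveals a histogram):

select column_name, num_distinct, density, num_buckets
from user_tab_col_statistics
where table_name = 'MYTABLE'
and column_name = 'STATUS_CODE';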

Let’s see in 9i (9.2.0.7) how this works with cpu costing disabled:

 

 

Index access cost

set autotrace traceonly
alter session set "_optimizer_cost_model"=io; -- because cpu costing is enabled for this database
select * from mytable where status_code='AAAA';

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE (Cost=20872 Card=690782 Bytes=148518130)
1    0   TABLE ACCESS (BY INDEX ROWID) OF 'mytable' (Cost=20872 Card=690782 Bytes=148518130)
2    1     INDEX (RANGE SCAN) OF 'mytableI000003' (NON-UNIQUE) (Cost=1432 Card=690782)

Let’s make the computation to verify the cost of 20872:

Blevel = 2
Leaf blocks = 8576
Clustering factor = 116639
Here, index selectivity = table selectivity = density = 1/num_distinct = 1/6 = 0.166667

leaf_blocks * effective index selectivity = 1429.33
clustering factor * table selectivity = 19439.83

Cost = blevel + ceiling(leaf_blocks * effective index selectivity) + ceiling(clustering factor * effective table selectivity) = 2 + 1430 + 19440 = 20872

As expected, clustering factor * table selectivity is by far the dominant factor.

 

Table access cost

The basic formula is derived from “number of blocks below the high water mark” divided by db_file_multiblock_read_count. This formula has to be amended: Oracle actually uses a divisor (an “adjusted dbf_mbrc”, as Jonathan Lewis calls it) which is a function of both db_file_multiblock_read_count and the block size. W. Breitling inferred in “A look under the hood of the CBO” how the divisor varies as a function of db_file_multiblock_read_count for a fixed block size.

I used a 16k block size and db_file_multiblock_read_count = 16. I derived the adjusted dbf_mbrc by dividing the number of blocks below the HWM (46398) by the IO_COST:

alter session set optimizer_index_cost_adj=100;
alter session set "_optimizer_cost_model"=io;
select /*+ full(d) */ * from mytable d where status_code='AAAA';

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE (Cost=7044 Card=544275 Bytes=111032100)
1    0   TABLE ACCESS (FULL) OF 'MYTABLE' (Cost=7044 Card=544275 Bytes=111032100)

The adjusted dbf_mbrc = 46398/7044 = 6.59 (remember that db_file_multiblock_read_count = 16!).

Interestingly, the adjusted dbf_mbrc = 4.17 when db_file_multiblock_read_count = 8: the adjustment is much less “intrusive”.

A final word: the adjusted dbf_mbrc only exists in the cost calculation; it says nothing about the actual I/O size.
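For reference, the numerator comes from the table statistics: the BLOCKS column of user_tables holds the blocks below the HWM as of the last gather (an illustrative check):

select blocks from user_tables where table_name = 'MYTABLE';
-- 46398 here, and 46398/7044 = 6.59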

 

 

 

(*) Jonathan Lewis describes in the “CBO fundamentals” book a possible tweak to profiles using the OPT_ESTIMATE hint, with a warning that this is an undocumented, subject-to-change feature, so I’ll leave it outside this discussion. Dbms_stats.set_xxx_stats is, on the other hand, a documented and supported option.

 

 

September 14, 2007

Strategies for parallelized queries across RAC instances (part 2/2)

Filed under: Oracle,RAC — christianbilien @ 5:52 am

If you have not done so, it would be beneficial to first go through the first post, “Strategies for RAC inter-instance parallelized queries”, to get an understanding of the concepts and the challenge. There I described an asymmetric strategy for the handling of RAC inter-instance parallelized queries while still being able to force both the slaves and the coordinator to reside on just one node. The asymmetric configuration is a connection and service based scheme which allows users

  • connected to node1 to execute both the coordinator and the slaves on node1, and prevent the slaves from spilling onto node2
  • connected to node2, but unable to do an alter session (because the code belongs to an external provider), to load balance their queries across the nodes
  • connected to node1, and able to issue an alter session, to load balance.

Another way of doing things is to load balance the default service to which the applications connect, and to use restricted services for node1 and node2. This is a symmetric configuration.

Load balancing:

Tnsnames and service names are the same as in the asymmetric configuration.

bothnodes =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = myvip1)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = myvip2)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = bothnodestaf)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
(SERVER = DEDICATED)
)
)

execute dbms_service.modify_service (service_name => 'bothnodestaf' -
, aq_ha_notifications => true -
, goal => dbms_service.goal_throughput -
, failover_method => dbms_service.failover_method_basic -
, failover_type => dbms_service.failover_type_select -
, failover_retries => 180 -
, failover_delay => 5 -
, clb_goal => dbms_service.clb_goal_short);
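To check that the attributes were taken into account, the service can be queried back (a sketch; these columns exist in 10gR2):

select name, goal, clb_goal, failover_method, failover_type
from dba_services
where name = 'bothnodestaf';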

Both spfiles were changed:

On node1, spfileI1.ora contains:

INSTANCE_GROUPS ='IG1','IGCLUSTER'

PARALLEL_INSTANCE_GROUP='IGCLUSTER'

On node2, spfileI2.ora contains:

INSTANCE_GROUPS ='IG2','IGCLUSTER'

PARALLEL_INSTANCE_GROUP='IGCLUSTER'


select inst_id, name, value from gv$parameter where name like '%instance%group%';

INST_ID NAME                      VALUE
------- ------------------------- ----------------
      1 instance_groups           IG1, IGCLUSTER
      1 parallel_instance_group   IGCLUSTER
      2 instance_groups           IG2, IGCLUSTER
      2 parallel_instance_group   IGCLUSTER

By default, both nodes behave in the same way. As PARALLEL_INSTANCE_GROUP matches one of the INSTANCE_GROUPS on both nodes, load balancing will work by default whatever the node the application connects to.

On node 1:

$sqlplus mylogin/mypass@bothnodes

 

< no alter session set parallel_instance_group= >

select /*+ full(orders_part) */ count(*) from orders_part;

QCSID Inst Group Set PROGRAM

———- ———- ———- ———- ————————————————
1050 1 1 1 oracle@node1 (P010)
1050 1 1 1 oracle@node1 (P011)
1050 1 1 1 oracle@node1 (P009)
1050 1 1 1 oracle@node1 (P008)
1050 2 1 1 oracle@node2 (P009)
1050 2 1 1 oracle@node2 (P011)
1050 2 1 1 oracle@node2 (P010)
1050 2 1 1 oracle@node2 (P008)
1050 1 sqlplus@node1 (TNS V1-V3)

On node 2:

$ sqlplus mylogin/mypass@bothnodes

< no alter session set parallel_instance_group= >

select /*+ full(orders_part) */ count(*) from orders_part;

QCSID Inst Group Set PROGRAM
———- ———- ———- ———- ————————————————
997 1 1 1 oracle@node1 (P010)
997 1 1 1 oracle@node1 (P011)
997 1 1 1 oracle@node1 (P009)
997 1 1 1 oracle@node1 (P008)
997 2 1 1 oracle@node2 (P009)
997 2 1 1 oracle@node2 (P011)
997 2 1 1 oracle@node2 (P010)
997 2 1 1 oracle@node2 (P008)
997 2 sqlplus@node2 (TNS V1-V3)

You may notice a subtle difference: the coordinator runs, as expected, on the node the service connects to.

Node restriction:

The tnsnames.ora and the service definition are left unchanged from the tests performed in the previous post: nodetaf1 is set to node1='PREFERRED' and node2='NONE'.

On node1:

Tnsnames.ora: (unchanged)

node1-only =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
)
(LOAD_BALANCING=NO)
(CONNECT_DATA =
(SERVICE_NAME= nodetaf1)
(SID = EMDWH1)
(SERVER = DEDICATED)
)
)

$ sqlplus mylogin/mypass@node1-only

alter session set parallel_instance_group='IG1';

select /*+ full(orders_part) */ count(*) from orders_part;

QCSID Inst Group Set PROGRAM
———- ———- ———- ———- ————————————————
984 1 1 1 oracle@node1 (P000)
984 1 1 1 oracle@node1 (P004)
984 1 1 1 oracle@node1 (P002)
984 1 1 1 oracle@node1 (P005)
984 1 1 1 oracle@node1 (P006)
984 1 1 1 oracle@node1 (P001)
984 1 1 1 oracle@node1 (P007)
984 1 1 1 oracle@node1 (P003)
984 1 sqlplus@node1 (TNS V1-V3)

On node 2:

The tnsnames.ora and service definition are symmetric to what they are on node1.

$ sqlplus mylogin/mypass@node2-only

alter session set parallel_instance_group='IG2';

select /*+ full(orders_part) */ count(*) from orders_part;

QCSID Inst Group Set PROGRAM

———- ———- ———- ———- ————————————————
952 1 1 1 oracle@node2 (P000)
952 1 1 1 oracle@node2 (P004)
952 1 1 1 oracle@node2 (P002)
952 1 1 1 oracle@node2 (P005)
952 1 1 1 oracle@node2 (P006)
952 1 1 1 oracle@node2 (P001)
952 1 1 1 oracle@node2 (P007)
952 1 1 1 oracle@node2 (P003)
952 1 sqlplus@node2 (TNS V1-V3)

September 12, 2007

Strategies for RAC inter-instance parallelized queries (part 1/2)

Filed under: Oracle,RAC — christianbilien @ 8:32 pm

I recently had to sit down and think about how I would spread workloads across nodes in a RAC data warehouse configuration. The challenge was quite interesting; here were the specifications:

  • Being able to fire PQ slaves on different instances in order to use all available resources for most of the aggregation queries.
  • Based on a tnsnames connection, being able to restrict PQ access to one node because external tables may be used. Slaves running on the wrong node would fail:

insert /*+ append parallel(my_table,4) */ into my_table
ERROR at line 1:
ORA-12801: error signaled in parallel query server P001, instance node-db04:INST2 (2)
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04040: file myfile408.12.myfile_P22.20070907.0.txt in MYDB_INBOUND_DIR not found
ORA-06512: at “SYS.ORACLE_LOADER”, line 19

I could come up with two basic configurations based on combinations of services, INSTANCE_GROUPS and PARALLEL_INSTANCE_GROUP. This post is about an asymmetric configuration; the second will deal with a symmetric configuration.

Parallel executions are service-aware: the slaves inherit the database service from the coordinator. But the coordinator may execute slaves on any instance of the cluster, which means that the service localization ('PREFERRED' instance) will create the coordinator on the designated instance, while the slaves are not bound by the service configuration. So I rewrote “being able to restrict PQ access to one node” into “being able to restrict both PQ access and the coordinator to one node”.

An instance belongs to the instance groups it declares in its init.ora/spfile. Each instance may have several instance groups declared: INSTANCE_GROUPS ='IG1','IG2' (be careful not to set 'IG1,IG2'). PARALLEL_INSTANCE_GROUP is another parameter, which can be configured either at the system or at the session level. Sessions may fire PQ slaves on instances for which the parallel instance group matches an instance group the instance belongs to.
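To illustrate the quoting pitfall (a sketch; the sid value is whatever your instance is named):

-- correct: declares two groups, IG1 and IG2
alter system set instance_groups='IG1','IG2' scope=spfile sid='I1';
-- wrong: declares a single group literally named 'IG1,IG2'
-- alter system set instance_groups='IG1,IG2' scope=spfile sid='I1';

-- parallel_instance_group can also be switched per session:
alter session set parallel_instance_group='IG2';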

Let’s consider a two-node configuration: node1 and node2, with instances I1 and I2 respectively running on node1 and node2.
The Oracle Data Warehousing Guide states that a SELECT statement can be parallelized for objects created with a PARALLEL declaration only if the query involves either a full table scan or an inter-partition index range scan. I’ll force a full table scan in my test case to have the CBO pick a parallel plan:

select /*+ full(orders_part)*/ count(*) from orders_part;

The configuration described below will allow users:

  • connected to node1 to execute both the coordinator and the slaves on node1, and prevent the slaves from spilling onto node2
  • connected to node2, but unable to do an alter session (because the code belongs to an external provider), to load balance their queries across the nodes
  • connected to node1, and able to issue an alter session, to load balance.

In a nutshell, users connecting to node1 will restrict their slave scope to node1, while users connecting to node2 will be allowed to load balance their slaves over all the nodes.

For the test I set parallel_min_servers=0: I can then see the slaves starting whenever Oracle decides to fire them.

Asymmetric INSTANCE_GROUPS configuration:

On node1, spfileI1.ora looks like:

INSTANCE_GROUPS ='IG1','IG2'
PARALLEL_INSTANCE_GROUP='IG1'

On node2, spfileI2.ora contains:

INSTANCE_GROUPS ='IG2'
PARALLEL_INSTANCE_GROUP='IG2'

select inst_id, name, value from gv$parameter where name like '%instance%group%';

INST_ID NAME                      VALUE
------- ------------------------- ----------
      1 instance_groups           IG1, IG2
      1 parallel_instance_group   IG1
      2 instance_groups           IG2
      2 parallel_instance_group   IG2

Single node access

I configured via dbca a service nodetaf1, for which node1/instance I1 was the preferred node and node2 was set to “not used” (I do not want to connect on one node and execute on another, thereby unnecessarily clogging the interconnect), and did the same for node2/instance I2.
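The same services could also be created from the command line; a sketch, assuming a (hypothetical) database name EMDWH matching the EMDWH1 instance shown below:

srvctl add service -d EMDWH -s nodetaf1 -r I1
srvctl add service -d EMDWH -s nodetaf2 -r I2
srvctl start service -d EMDWH -s nodetaf1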

SQL> select inst_id, name from gv$active_services where name like '%taf%';

INST_ID NAME
------- ----------------
      2 nodetaf2
      1 nodetaf1

The related TNSNAMES entry for a node1-only access looks like:

node1-only =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
)
(LOAD_BALANCING=NO)
(CONNECT_DATA =
(SERVICE_NAME = nodetaf1)
(SID = EMDWH1)
(SERVER = DEDICATED)
)
)

Note that the host name is not the VIP address (because there is no point in switching nodes should node1 fail).

Test results:

$ sqlplus mylogin/mypass@node1-only

< no alter session set parallel_instance_group= >
select /*+ full(orders_part) */ count(*) from orders_part;

select qcsid, p.inst_id "Inst", p.server_group "Group", p.server_set "Set", s.program
from gv$px_session p, gv$session s
where s.sid = p.sid and qcsid = &1
order by qcinst_id, p.inst_id, server_group, server_set;

QCSID Inst Group Set PROGRAM
———- ———- ———- ———- ————————————————

938 1 1 1 oracle@node1 (P000)
938 1 1 1 oracle@node1 (P004)
938 1 1 1 oracle@node1 (P002)
938 1 1 1 oracle@node1 (P005)
938 1 1 1 oracle@node1 (P006)
938 1 1 1 oracle@node1 (P001)
938 1 1 1 oracle@node1 (P007)
938 1 1 1 oracle@node1 (P003)
938 1 sqlplus@node1 (TNS V1-V3)

The coordinator and the slaves stay on node1.

Dual node access

I then added a service aimed at firing slave executions on both nodes. The ‘bothnodestaf’ service was added using dbca, then modified to give it the “goal_throughput” and “clb_goal_short” load balancing advisories: according to the documentation, load balancing is based on the rate at which work is completed in the service, plus the available bandwidth. I’ll dig into that one day to get a better understanding of the available LB strategies.

execute dbms_service.modify_service (service_name => 'bothnodestaf' -
, aq_ha_notifications => true -
, goal => dbms_service.goal_throughput -
, failover_method => dbms_service.failover_method_basic -
, failover_type => dbms_service.failover_type_select -
, failover_retries => 180 -
, failover_delay => 5 -
, clb_goal => dbms_service.clb_goal_short);

In the tnsnames.ora:

bothnodes =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = myvip1)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = myvip2)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = bothnodestaf)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
(SERVER = DEDICATED)
)
)

Note that LOAD_BALANCE is unset (i.e. OFF) in the tnsnames.ora to allow server-side connection balancing.

On node1:

$ sqlplus mylogin/mypass@bothnodes

alter session set parallel_instance_group='IG2';
select /*+ full(orders_part) */ count(*) from orders_part;

QCSID Inst Group Set PROGRAM

———- ———- ———- ———- ————————————————

936 1 1 1 oracle@node1 (P002)
936 1 1 1 oracle@node1 (P003)
936 1 1 1 oracle@node1 (P001)
936 1 1 1 oracle@node1 (P000)
936 1 1 1 oracle@node2 (P037)
936 1 1 1 oracle@node2 (P027)
936 2 1 1 oracle@node1 (O001)
936 2 1 1 oracle@node2 (P016)
936 2 1 1 oracle@node2 (P019)
936 2 1 1 oracle@node2 (P017)
936 2 1 1 oracle@node2 (P018)
936 1 sqlplus@node1 (TNS V1-V3)

The slaves started on both nodes.

On node2:

$ sqlplus mylogin/mypass@bothnodes
< no alter session set parallel_instance_group= >
select /*+ full(orders_part) */ count(*) from orders_part;

QCSID Inst Group Set PROGRAM

———- ——————————————————————————-

952 1 1 1 oracle@node1 (P002)
952 1 1 1 oracle@node1 (P003)
952 1 1 1 oracle@node1 (P001)
952 1 1 1 oracle@node1 (P000)
952 1 1 1 oracle@node2 (P037)
952 1 1 1 oracle@node2 (P027)
952 2 1 1 oracle@node1 (O001)
952 2 1 1 oracle@node2 (P016)
952 2 1 1 oracle@node2 (P019)
952 2 1 1 oracle@node2 (P017)
952 2 1 1 oracle@node2 (P018)
952 1 sqlplus@node1 (TNS V1-V3)

Again, connecting to the service on node2 allows load balancing between the nodes.

 

 
