Latches: What do we know?

Latches are low-level serialization mechanisms that protect memory areas inside the SGA. They are lightweight and less sophisticated than enqueues, and can be acquired and released very quickly.

Latch acquisition does not involve any complex algorithm; it is based on the atomic test-and-set instruction of the processor.

Latch Classification:

Latches can be classified in multiple ways:

Shared latch and exclusive latch:

A shared latch is one that can be held by multiple processes/sessions at the same time.

For example, if a session wants to read a block in memory and at the same time another session wants to read the same block, both can acquire the shared latch. An example of a shared latch is the “cache buffers chains” latch. This latch was exclusive in older versions; Oracle changed it to a shared latch from 9i onwards.

An exclusive latch is one that is acquired when a session wants to modify a block or memory area. An exclusive latch can be held by only one session/process at a time, and it is not compatible with any other latch.

Shared and exclusive latches are not compatible with each other. A process holding a shared latch will block other processes that need the exclusive latch, but will allow other processes that need the shared latch on the same memory area. Similarly, a process holding an exclusive latch will block any other process that needs either the shared or the exclusive latch.

An example of exclusive latches is the library cache latches in earlier versions. These were taken in exclusive mode even for traversing a hash bucket. They are no longer present in 11g, having been replaced by mutexes.

As per Andrey Nikolaev, 460 out of 551 latches are exclusive in Oracle 11.2.0.2. In each new version, Oracle tries to make latches more sharable in order to reduce contention.

Another classification of latches is immediate latches and willing-to-wait latches.

Immediate latch and willing-to-wait latch:

Immediate latches are those that a session tries to acquire immediately, without waiting. If the latch is not available, the session will not wait; it may get terminated or check for another latch. An example of an immediate latch is the redo copy latch. It is immediate because Oracle only wants to know whether anyone else is currently copying redo data to the log buffer, not who exactly is copying and where, as this does not matter to LGWR.

Willing-to-wait latches are those for which a session will wait if the latch is not available. These latches have slightly more complex behavior than immediate latches. Most latches in Oracle are willing-to-wait latches.
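The two acquisition modes are visible in v$latch, which keeps separate counters for each mode. A minimal sketch (column names as of 11g): GETS/MISSES/SLEEPS count willing-to-wait requests, while IMMEDIATE_GETS/IMMEDIATE_MISSES count no-wait requests.

select name, gets, misses, sleeps, immediate_gets, immediate_misses
from v$latch
where name in ('redo copy', 'cache buffers chains');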

Why do we see wait events on shared latches?

After reading about shared latches and exclusive latches, one question comes to mind regarding shared latches.

If the “cache buffers chains” latch is a shared latch (from 9i onwards) and shared latches don’t block each other, why do we still see “latch: cache buffers chains” wait events even in the latest versions of Oracle?

The answer to this is explained by Andrey Nikolaev, who gives a very practical example of why this happens. When multiple processes are acquiring a shared latch (for example a CBC latch), we should not see any wait events or processes blocking each other. But the moment another session tries to acquire the latch in exclusive mode on that resource (where the other sessions hold the shared latch), the exclusive request is given higher preference. So the session wanting the exclusive latch will get access to the resource and will block all other sessions wanting the shared latch, forming a queue of sessions willing to acquire the shared latch. The strange part of this algorithm is that even after the exclusive latch has been released by the process holding it, the other processes cannot acquire the shared latch concurrently. They remain in the latch queue and get the shared latch one by one: the first process in the queue gets the latch, and once it is done with its access, it releases the latch and posts the next process that the latch is available. This is one of the major reasons why we see wait events on shared latches.

Process flow for latch acquisition

Following is the process flow for acquiring a latch:

 - Immediate latch gets
       - Spin latch gets
             - Add process to queue of latch waiters
                   - Sleep until posted

The last step in the process flow (sleep until posted) is a changed behavior. Until 9i, Oracle used to wake up periodically to check whether the latch had been freed, and if the latch was still not available, go back to sleep. So the process flow in 9i used to be the following:

 - Immediate latch gets
       - Spin latch gets
             - Add process to queue of latch waiters
                   - Sleep for fixed time and wake up
             - Immediate latch gets
                   - Sleep for fixed time and wake up
             - Immediate latch gets
                   - Sleep for fixed time and wake up

This behavior changed from 10g onwards: the holding process wakes up the waiter process after the latch becomes free. This has been explained well by Tanel Poder, and in more detail by Alex Fatkulin.

Sometimes the holding process was not able to wake up the process waiting for the latch, because of bugs or a lower kernel version. So older versions of Oracle used to add a default timeout, so that if the holding process missed the wakeup call, the waiter would not wait indefinitely and would wake up after the timeout. Oracle introduced a parameter, _enable_reliable_latch_waits={true|false}, which alters this behavior. If it is set to true, no timeout is added and the waiting process continues to sleep until it gets posted by the holding process. False gives the behavior with the timeout.
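For illustration, the parameter would be set like this (a hedged sketch: it is an underscore parameter, so change it only under Oracle Support guidance, and it needs a bounce to take effect):

alter system set "_enable_reliable_latch_waits" = true scope=spfile;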

The only latch that is an exception to the above process flow is the “process allocation” latch. This latch does not depend on the holding process to post it when the latch is free; it wakes up periodically to acquire the latch.

Latch Behavior

Oracle has introduced a few parameters to control latch behavior. We will briefly discuss those parameters and how they affect shared and exclusive latches.

When a process tries to acquire a latch, it first attempts to acquire the latch in immediate mode. If the latch is not available in immediate mode, it will spin for a defined number of times and try again. If the latch is still not free, the process goes to sleep.

Process Spin:

Spinning is the act of consuming/burning CPU so that the process stays “ON CPU” and does not get descheduled. During spinning, the process burns CPU for a few microseconds in the hope that after that time the latch will be available for it to acquire. This increases CPU consumption, but it saves time, as it may avoid the need for the process to sleep (which is expensive, as it involves context switches).

The number of times a process spins to acquire a latch depends on the type of latch.

Exclusive latch:

For exclusive latches, this “can be” controlled by _spin_count, but that needs a database bounce. I said “can be” because exclusive latch spins are actually decided by the “spin” column in the x$ksllclass table.


SQL>select indx, spin from x$ksllclass;

      INDX       SPIN
---------- ----------
         0      20000
         1      20000
         2      20000
         3      20000
         4      20000
         5      20000
         6      20000
         7      20000

8 rows selected.

There are 8 classes of latches (as indicated by the indx column), which we will discuss later in this article. By default all latches belong to class 0.
If we want to change the spin count for exclusive latches, we need to change the value of the SPIN column for class 0. This can be done by changing _spin_count and bouncing the instance (but that changes the spin count for all classes), or by setting the _latch_class_0 parameter (which changes the spin count for class 0 only). We have similar parameters for the other classes (_latch_class_[0-7]).

So changing _spin_count is not a good idea. Instead, we can move the specific latch whose spin count we want to change into another class (1-7) and change the corresponding underscore parameter (_latch_class_[1-7]), as sketched below.
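A hedged sketch of what that could look like, assuming we want to move the “cache buffers chains” latch to class 1 (the latch# below is purely illustrative; look up the real one in v$latchname). Both are underscore parameters, require a bounce, and should only be changed under Oracle Support guidance. The first value of _latch_class_1 is the spin count; the parameter accepts further tuning values, as described on Andrey Nikolaev's blog.

-- find the latch number of the latch we want to move
select latch#, name from v$latchname where name = 'cache buffers chains';

-- map that latch# (say 103) to class 1, then set the spin count for class 1
alter system set "_latch_classes" = "103:1" scope=spfile;
alter system set "_latch_class_1" = "10000" scope=spfile;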

By default, the _spin_count parameter is not applicable to exclusive latches, since its default value is 2000 whereas exclusive latches spin 20000 times, as shown in the x$ksllclass output above.
But explicitly changing _spin_count makes it applicable to exclusive latches as well.

Shared Latch:

For shared latches, the number of times a process spins is _spin_count * 2. This has been demonstrated by Andrey Nikolaev. Also, the _spin_count parameter is applicable to shared latches by default. So, since the default value of _spin_count is 2000, shared latches spin 4000 times (2000 * 2).

Diagnosing latch contention:

I am not going to mention much here, because Tanel Poder has already written great scripts – latchprof.sql and latchprofx.sql – which can be used to analyze latch wait events.

Tanel has also written a great article on how to diagnose latch wait events – http://tech.e2sn.com/oracle/troubleshooting/latch-contention-troubleshooting

In the next article, I will try to cover mutexes.

Hope this helps !!

Reference:

https://andreynikolaev.wordpress.com

http://tech.e2sn.com/oracle/troubleshooting/latch-contention-troubleshooting

Brief about Workload Management in Oracle RAC

This is a brief article about workload management in RAC. I have tried to cover the different components of workload management in RAC and how they are configured on the client side or the server side. I have not gone into the details of the configuration steps, but have briefly mentioned how each can be done.

Readers are advised to refer to the Oracle documentation for the details of configuring workload management.

Workload management on RAC

There are 2 major components of workload management:

  1. Failover – if connection to one instance fails, Oracle should failover the connection to another instance
  2. Load Balancing – Workload on RAC instances should be distributed equally

Failover can be connect time or run time. Similarly load balancing can be achieved during connect time or run time.

We can also configure these components either on client side or on server side.

I have tried to put workload management in RAC into a single block diagram to give a high-level overview. The following figure summarizes how workload management is configured in RAC.

[Figure: high-level overview of workload management components in RAC]

Let's check how to configure each of these components.

Failover

We can achieve failover at connect time or at run time. Depending on when we want failover to happen (connect time or run time), we configure it on the client side or on the server side.

Connect time failover, Client side

Connect time failover must be configured on the client side, since this failover happens when an instance is down before the connection can even complete. So logically it is not possible to have it on the server side (that would require the connection to complete, in which case it would not be connect time failover).

This is achieved on the client side using the FAILOVER=ON parameter in the TNS string.
Example:

ORCL=
     (DESCRIPTION=
         (ADDRESS_LIST=
         (FAILOVER=ON)
         (ADDRESS= (PROTOCOL=TCP) (HOST=orcl_node1-vip) (PORT=1521))
         (ADDRESS= (PROTOCOL=TCP) (HOST=orcl_node2-vip) (PORT=1521))
         (ADDRESS= (PROTOCOL=TCP) (HOST=orcl_node3-vip) (PORT=1521))
         (ADDRESS= (PROTOCOL=TCP) (HOST=orcl_node4-vip) (PORT=1521))
     )
    (CONNECT_DATA= (SERVICE_NAME= ORCL))
)


Run time failover, Client side

At run time, we can achieve failover using Transparent Application Failover (TAF).
TAF can be configured on the client side in the TNS string using the FAILOVER_MODE parameter.
Example:

ORCL=
     (DESCRIPTION=
         (ADDRESS_LIST=
         (FAILOVER=ON)
         (ADDRESS= (PROTOCOL=TCP) (HOST=orcl_node1-vip) (PORT=1521))
         (ADDRESS= (PROTOCOL=TCP) (HOST=orcl_node2-vip) (PORT=1521))
         (ADDRESS= (PROTOCOL=TCP) (HOST=orcl_node3-vip) (PORT=1521))
         (ADDRESS= (PROTOCOL=TCP) (HOST=orcl_node4-vip) (PORT=1521))
     )
     (CONNECT_DATA=
         (SERVICE_NAME=ORCL)
         (FAILOVER_MODE=(TYPE=select)(METHOD=basic))
     )
)


If you check the above TNS string, we have the FAILOVER_MODE parameter, which specifies the failover type and method. If FAILOVER_MODE is specified, then in case of an instance outage, existing connected sessions will automatically fail over at run time to the surviving instances. There is more to TAF than described here; check the Oracle documentation or the reference links in this article for complete details.

Run time failover, Server side

The same TAF implementation can be done on the server side as well. This is done as part of service management in RAC.
We can use SRVCTL to configure services on RAC and add the TAF parameters.

[oracle@orcl_node1 ~]$ srvctl add service -d orcl -s test -r orcl1 -P BASIC -e SESSION -m BASIC
[oracle@orcl_node1 ~]$ srvctl start service -d orcl -s test
[oracle@orcl_node1 ~]$ srvctl status service -d orcl -s test
Service test is running on instance(s) orcl1
[oracle@orcl_node1 ~]$ srvctl config service -d orcl -s test
Service name: test
Service is enabled
Server pool: orcl_test
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: SESSION
Failover method: BASIC
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: 
Runtime Load Balancing Goal:
TAF policy specification: BASIC
Preferred instances: orcl1
Available instances:
[oracle@orcl_node1 ~]$

We can also use Fast Connection Failover (FCF), which relies on Fast Application Notification (FAN) events, in OCI clients to get notifications about instance availability. Based on these notifications, clients can reconnect to available instances.
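For example, FAN notifications over AQ can be enabled per service for OCI clients (a sketch reusing the test service above; the -q flag corresponds to the "AQ HA notifications" attribute in the srvctl config output):

[oracle@orcl_node1 ~]$ srvctl modify service -d orcl -s test -q TRUE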

Load Balancing

We can achieve load balancing at connect time or at run time. Depending on when we want load balancing to happen (connect time or run time), we configure it on the client side or on the server side.

Connect time load balancing, Client side

We can achieve connect time load balancing on the client side using the LOAD_BALANCE=ON parameter in the TNS string.

Example:

ORCL=
     (DESCRIPTION=
         (ADDRESS_LIST=
         (LOAD_BALANCE=ON)
         (ADDRESS= (PROTOCOL=TCP) (HOST=orcl_node1-vip) (PORT=1521))
         (ADDRESS= (PROTOCOL=TCP) (HOST=orcl_node2-vip) (PORT=1521))
         (ADDRESS= (PROTOCOL=TCP) (HOST=orcl_node3-vip) (PORT=1521))
         (ADDRESS= (PROTOCOL=TCP) (HOST=orcl_node4-vip) (PORT=1521))
     )
    (CONNECT_DATA= (SERVICE_NAME= ORCL))
)

The LOAD_BALANCE parameter is ON by default, so we do not have to specify it explicitly; putting LOAD_BALANCE=OFF, however, will disable load balancing. Oracle picks random hosts from the address list and tries to balance connections across the database instances. With 11.2, Oracle introduced the SCAN listener, which hands out host IPs in round-robin fashion. So with a single SCAN alias in the TNS entry, connections are balanced across hosts automatically, which makes the LOAD_BALANCE parameter in TNS largely redundant. Example:

ORCL=
     (DESCRIPTION=
         (LOAD_BALANCE=ON)
         (ADDRESS= (PROTOCOL=TCP) (HOST=orcl_scan) (PORT=1521))
         (CONNECT_DATA= (SERVICE_NAME= ORCL))
     )


Connect time load balancing, Server side

We can enable server-side load balancing using the CLB_GOAL service attribute.
Oracle introduced the load balancing advisory in 10g, which keeps track of the load on individual instances. Dynamic registration keeps all listeners aware of the load profile of each instance. We need to set remote_listener to a TNS alias containing the virtual IP addresses of all nodes in the cluster; even with the SCAN listener, we need to keep the SCAN VIP in the remote_listener parameter.
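A minimal sketch, assuming a SCAN name of orcl-scan listening on the default port:

alter system set remote_listener = 'orcl-scan:1521' scope=both sid='*';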

CLB_GOAL stands for connect time load balancing goal. It defines the expected session duration for the service. For example, if we have an OLTP service and expect lots of short sessions lasting a very short time (a few seconds to a few minutes), we can set CLB_GOAL for that service to SHORT. If the service is expected to serve sessions that stay connected for a longer duration (a few minutes to hours), we can set CLB_GOAL to LONG. Setting CLB_GOAL instructs the listener to route connections based on one of two metrics: load per node, based on the CPU run queue (CLB_GOAL=SHORT), or the number of current connections (CLB_GOAL=LONG).

  • If CLB_GOAL is SHORT, Oracle uses the load per node (based on the CPU run queue) as the metric and routes connections to the host with the lowest load.
  • If CLB_GOAL is LONG, Oracle uses the number of connections to each instance as the metric and routes connections to the host with the fewest connections.

Example:

[oracle@orcl_node1 ~]$ srvctl modify service -d orcl -s test -j SHORT
[oracle@orcl_node1 ~]$ srvctl config service -d orcl -s test
Service name: test
Service is enabled
Server pool: orcl_test
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: SESSION
Failover method: BASIC
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: SHORT
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Preferred instances: orcl1
Available instances:
[oracle@orcl_node1 ~]$

Run time load balancing, Server side

We cannot have client-side load balancing at run time, because run time load balancing needs to balance each transaction rather than each connection.
The load balancing advisory serves as the basis for runtime connection load balancing. Using dynamic service registration, services are registered with all listeners, and the PMON of each instance updates the load profile on all listeners. Since the listeners know the load profile of all instances, sessions are directed to the most appropriate instance depending on the runtime load balancing goal. Connection allocation is based on the current performance level of the database instances, as indicated by load balancing advisory FAN events. This provides load balancing at the transaction level instead of only at the time of the initial connection.

The service level of the instances is analyzed based on the runtime load balancing goal:

  • Service time (internet web processing)
  • Throughput (batch processing)

The above runtime load balancing goals can be set using the GOAL parameter of the service (don't confuse this with the CLB_GOAL parameter, which is for connect time load balancing).

We can set the GOAL parameter on the server side for each service using SRVCTL.

Once we set this parameter to either GOAL_SERVICE_TIME or GOAL_THROUGHPUT, Oracle balances the load using the following metrics:

  • If GOAL_SERVICE_TIME is used, Oracle checks the service time, i.e. how fast an instance is serving a single transaction. Oracle keeps these metrics for each instance, and connections for the service are directed to the instance where the service time is best. This is mainly for OLTP transactions.
  • If GOAL_THROUGHPUT is used, Oracle checks the throughput metric, i.e. which instance is doing the maximum amount of work in the least time, and directs work to the instance with the best throughput. This is mainly for batch processing.

Example:

[oracle@orcl_node1 ~]$ srvctl modify service -d orcl -s test -B SERVICE_TIME
[oracle@orcl_node1 ~]$ srvctl config service -d orcl -s test
Service name: test
Service is enabled
Server pool: orcl_test
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: SESSION
Failover method: BASIC
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: SHORT
Runtime Load Balancing Goal: SERVICE_TIME
TAF policy specification: BASIC
Preferred instances: orcl1
Available instances:

Reference:

http://www.oracle.com/technetwork/database/features/oci/taf-10-133239.pdf

http://docs.oracle.com/cd/B19306_01/rac.102/b14197/hafeats.htm

How many checkpoints in Oracle database?

This question came to my mind when I was discussing "direct path reads" with DBA candidates. I was surprised that many DBAs were not aware of the object-level checkpoint that happens during direct path reads. So I thought many DBAs may not know the different levels of checkpoints that happen in the database, hence the question.

Well, the answer is 7 (as far as I know). Oracle does 7 different checkpoints at various stages. Let's check out what those checkpoints are and at what point they happen:

  1. Full checkpoint
  2. Thread checkpoint
  3. File level checkpoint
  4. Object level checkpoint
  5. Parallel query checkpoint <– BTW, this is the checkpoint that happens during direct path reads
  6. Incremental checkpoint
  7. Checkpoint during log switch

Full Checkpoint

This happens when a DBA explicitly issues the checkpoint command using “alter system checkpoint”. When this happens, all dirty blocks in the db cache (from all instances in case of RAC) are written to disk. This includes committed as well as uncommitted data blocks.

This checkpoint also happens when a DBA shuts the database down cleanly using “shutdown immediate” or puts the entire database in begin backup mode using “alter database begin backup”.

Thread Checkpoint

A thread checkpoint is basically a full checkpoint in a single-instance database. A redo thread is associated with an instance, and a thread checkpoint writes all dirty blocks of a single thread, or instance, to disk. In RAC, when a checkpoint is done for one specific instance, it is called a thread checkpoint. This is done using “alter system checkpoint local”.

File level Checkpoint

A file-level checkpoint writes the dirty blocks associated with the set of datafiles belonging to a tablespace. This happens when we put a tablespace in begin backup mode, take a tablespace offline, or make a tablespace read-only. Oracle writes all dirty blocks associated with the datafiles of that tablespace to disk before it changes the status of the tablespace, as shown below.
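For illustration, each of the following statements triggers a file-level checkpoint on the affected tablespace (a sketch assuming a tablespace named USERS):

alter tablespace users begin backup;
alter tablespace users offline normal;
alter tablespace users read only;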

Object level Checkpoint

All the dirty blocks that belong to a specific object are written to the datafiles by the DBWR process. This happens when you perform one of the following actions on an object (for example a table or an index):

  • Drop table
  • truncate table
  • drop index
  • drop table purge

You might be thinking: if we are dropping an object, why does Oracle have to write its blocks to the datafiles? Two reasons:

  1. Oracle writes these blocks to the datafiles “before” performing the above DDL tasks.
  2. It’s required for recovery purposes. If in future you have to restore and recover the database, Oracle needs the previous blocks so that it can roll forward and roll back.

Parallel query checkpoint

Whenever you run queries using parallel workers, Oracle does direct path reads and reads data from the datafiles directly into the PGA, bypassing the SGA. Starting from 11g, direct path reads also happen for serial full table scans where the table size is larger than the _small_table_threshold parameter.

Imagine a session that connected earlier, did some DML on a table in the SGA (buffer cache), and committed the changes. A checkpoint doesn't happen as soon as we commit, so what happens when another session connects immediately and does a full table scan or parallel scan of that table? How will it see the latest data?

This is where the parallel query checkpoint comes in. When you run a full table scan or a parallel query scan, you will see the direct path read wait event, but at the beginning you will also see the “enq: KO - fast object checkpoint” wait event. This checkpoints any dirty blocks belonging to the object you are about to read via direct path, so that the latest changes go into the datafiles.
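A quick way to confirm this from another session while the scan starts is to look for the KO enqueue wait; a minimal sketch (the event name is as listed in v$event_name):

select sid, event, total_waits, time_waited
from v$session_event
where event = 'enq: KO - fast object checkpoint';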

Incremental Checkpoint

Prior to Oracle 8i, Oracle used to checkpoint only during log switches. So at a log file switch, Oracle had to write a lot of blocks to disk at once, and we would see a sudden spike in IO. It also increased recovery time: if the database crashed near the end of a log file, no checkpoint had happened since the previous log switch, so all of that redo had to be applied during recovery.

Starting with 8i, Oracle began doing incremental checkpoints from time to time. This logic has evolved from 8i until now, and different parameters control this behavior in the latest versions compared to older ones.

In earlier releases we had the log_checkpoint_interval and log_checkpoint_timeout parameters, which controlled how frequently incremental checkpoints happened in order to meet recovery SLAs. In later releases, Oracle provided the fast_start_mttr_target and fast_start_io_target parameters. These parameters take our SLA timings, and internally Oracle decides how frequently it has to take incremental checkpoints.
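A minimal sketch: set a 60-second recovery target and then check how Oracle is tracking it in v$instance_recovery (TARGET_MTTR and ESTIMATED_MTTR are in seconds):

alter system set fast_start_mttr_target = 60 scope=both;

select target_mttr, estimated_mttr, recovery_estimated_ios
from v$instance_recovery;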

There is another situation where Oracle goes for an incremental checkpoint even though the above parameters (fast_start_mttr_target or fast_start_io_target) have not yet triggered one. This happens when Oracle is not able to find any free buffers in the buffer cache. At that point Oracle has to flush the least recently used blocks to the datafiles in order to make room for new buffers coming in. By default Oracle scans _db_block_max_scan_pct of the buffers before it decides to flush LRU blocks to the datafiles. If it is not able to find the required number of free buffers even after scanning _db_block_max_scan_pct of them, it goes for an incremental checkpoint starting with the LRU blocks.

Checkpoint logic is much more complex than what I have explained here. But the objective of this article was just to introduce the different types of checkpoints, so I am not covering the details of the checkpoint algorithm.

Checkpoint during log switch

This is the most obvious checkpoint. It happens whenever a log switch happens. But note that during a log switch, only the dirty blocks whose redo is protected by that log file are written to the datafiles. So not all dirty blocks are written during a log switch.

Hope this helps !!

Reference

https://bdrouvot.wordpress.com/2015/04/30/direct-path-read-and-enq-ko-fast-object-checkpoint/

Effect of Object Statistics on SQL Execution Statistics

This is a small article to demonstrate why correct statistics are important and how they affect the execution statistics of the same plan.
In the past we learned that changing table or index statistics (or rebuilding an index) can cause the plan for a SQL to change, because when statistics change, the optimizer will try to generate a new plan based on the changed statistics.
With 11g, Oracle provided SQL plan baselines to ensure stability of SQL plans. So if you have a single baseline enabled for a SQL, you essentially have a single plan for that SQL, and that plan will not change unless the auto tuning job evolves another plan or you evolve one manually.

Does that mean object statistics have no role to play if your plan is fixed?

Let's have a quick demo. In the exercise below, I will not be changing the plan (as I am using baselines), but we will see that execution statistics such as buffer gets and disk reads change as we change the object statistics.

I faced this issue on one of our production databases, where stats were completely inaccurate; when we gathered stats on the tables and indexes, the execution stats for SQLs changed while the plans stayed the same.

Setup

I am performing these tests on 11.2.0.4.
Let's create 2 tables, T1 and T2. I will use one query on a single table that goes for a FTS, and another query joining T1 and T2 that uses an index.


SQL>create table T1 as select * from dba_tables;

Table created.

SQL>insert into T1 select * from T1;

2804 rows created.

SQL>insert into T1 select * from T1;

5608 rows created.
...
...

SQL>commit;

Commit complete.

SQL>--create index on this table on TABLE_NAME column and gather stats

SQL>create index I_T1_TABLE_NAME on T1(TABLE_NAME);

Index created.

SQL>exec dbms_stats.gather_table_stats(null,'T1');

PL/SQL procedure successfully completed.

Create the 2nd table from some other view, let's say dba_tab_statistics:


DEO>create table T2 as select * from dba_tab_statistics;

Table created.

DEO>insert into T2 select * from T2;

16289 rows created.

...
...

DEO>commit;

Commit complete.

DEO>--create index on TABLE_NAME column in T2 table

DEO>create index I_T2_TABLE_NAME on T2(TABLE_NAME);

Index created.

DEO>exec dbms_stats.gather_table_stats(null,'T2');

PL/SQL procedure successfully completed.

Following are the table- and index-level statistics for T1, T2 and their corresponding indexes:


DEO>select table_name, num_rows, blocks, avg_row_len from dba_tables where table_name in ('T1','T2');

TABLE_NAME			           NUM_ROWS   BLOCKS     AVG_ROW_LEN
------------------------------ ---------- ---------- -----------
T1				               34783	  3214	     247
T2				               298096	  4351	     99

DEO>select INDEX_NAME, LEAF_BLOCKS, DISTINCT_KEYS, CLUSTERING_FACTOR, NUM_ROWS from dba_indexes where table_name in ('T1','T2');

INDEX_NAME		               LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR NUM_ROWS
------------------------------ ----------- ------------- ----------------- ----------
I_T1_TABLE_NAME 		       360	       1414	         34675	           34783
I_T2_TABLE_NAME 		       1813	       2364	         74610             298096

With accurate and current object statistics, I see the following execution statistics for the following 2 queries.

Query 1:

select a.table_name, sum(b.num_rows) from T1 a, T2 b where a.table_name= b.table_name and a.owner = b.owner and a.table_name = :b1 group by a.table_name;


DEO>var b1 varchar2(40);
DEO>exec :b1 := 'ABC';

PL/SQL procedure successfully completed.

DEO>select a.table_name, sum(b.num_rows) from T1 a, T2 b where a.table_name= b.table_name and a.owner = b.owner and a.table_name = :b1 group by a.table_name;

TABLE_NAME		               SUM(B.NUM_ROWS)
------------------------------ ---------------
ABC				               240678640


Execution Plan
----------------------------------------------------------
Plan hash value: 1483037242

--------------------------------------------------------------------------------------------------------
| Id  | Operation			                 | Name 	       | Rows  | Bytes | Cost (%CPU)| Time	   |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT		             |		           |	 1 |	61 |	 4  (50)| 00:00:01 |
|   1 |  SORT GROUP BY NOSORT		         |		           |	 1 |	61 |	 4  (50)| 00:00:01 |
|   2 |   MERGE JOIN			             |		           |	47 |  2867 |	 4  (50)| 00:00:01 |
|   3 |    SORT JOIN			             |		           |	11 |   396 |	 2  (50)| 00:00:01 |
|   4 |     VIEW			                 | VW_GBC_5	       |	11 |   396 |	 2  (50)| 00:00:01 |
|   5 |      HASH GROUP BY		             |		           |	11 |   297 |	 2  (50)| 00:00:01 |
|   6 |       TABLE ACCESS BY INDEX ROWID    | T2		       |   126 |  3402 |	 1   (0)| 00:00:01 |
|*  7 |        INDEX RANGE SCAN 	         | I_T2_TABLE_NAME |   126 |	   |	 1   (0)| 00:00:01 |
|*  8 |    SORT JOIN			             |		           |	25 |   625 |	 2  (50)| 00:00:01 |
|   9 |     TABLE ACCESS BY INDEX ROWID      | T1		       |	25 |   625 |	 1   (0)| 00:00:01 |
|* 10 |      INDEX RANGE SCAN		         | I_T1_TABLE_NAME |	25 |	   |	 1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   7 - access("B"."TABLE_NAME"=:B1)
   8 - access("A"."TABLE_NAME"="ITEM_2" AND "A"."OWNER"="ITEM_1")
       filter("A"."OWNER"="ITEM_1" AND "A"."TABLE_NAME"="ITEM_2")
  10 - access("A"."TABLE_NAME"=:B1)

Note
-----
   - SQL plan baseline "SQL_PLAN_gt8npsxnncgm2abc84fa9" used for this statement


Statistics
----------------------------------------------------------
	  1  recursive calls
	  0  db block gets
	157  consistent gets
	  3  physical reads
	  0  redo size
	598  bytes sent via SQL*Net to client
	453  bytes received via SQL*Net from client
	  2  SQL*Net roundtrips to/from client
	  2  sorts (memory)
	  0  sorts (disk)
	  1  rows processed

The above query has done 157 consistent gets and 3 physical reads.

Query 2:

select count(1) from T2 where owner = :b2;


DEO>var b2 varchar2(40);
DEO>exec :b2 := 'SYS';

PL/SQL procedure successfully completed.

DEO>select count(1) from T2 where owner = :b2;

  COUNT(1)
----------
    251088


Execution Plan
----------------------------------------------------------
Plan hash value: 3321871023

---------------------------------------------------------------------------
| Id  | Operation	       | Name | Rows  | Bytes | Cost (%CPU)| Time	  |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |	  |	1     |	5     |   625   (2)| 00:00:03 |
|   1 |  SORT AGGREGATE    |	  |	1     |	5     |	           |	      |
|*  2 |   TABLE ACCESS FULL| T2   | 19873 | 99365 |   625   (2)| 00:00:03 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("OWNER"=:B2)

Note
-----
   - SQL plan baseline "SQL_PLAN_9wq67xmrafzda1c6cf506" used for this statement


Statistics
----------------------------------------------------------
	  1  recursive calls
	  0  db block gets
   4350  consistent gets
	  0  physical reads
	  0  redo size
	517  bytes sent via SQL*Net to client
	453  bytes received via SQL*Net from client
	  2  SQL*Net roundtrips to/from client
	  0  sorts (memory)
	  0  sorts (disk)
	  1  rows processed

The above query has done 4350 consistent gets and no physical reads.

Let's fake the statistics to random high values and see the effect. Note that both queries above are using baselines, so the plans will not change with further executions, even after changing the object stats.

I am going to set the table and index stats as follows:


DEO>exec dbms_stats.SET_TABLE_STATS(null,'T1',NUMROWS=>3221020,NUMBLKS=>202590, AVGRLEN=>150)

PL/SQL procedure successfully completed.

DEO>exec dbms_stats.SET_INDEX_STATS(null,'I_T1_TABLE_NAME',NUMROWS=>3153430,NUMLBLKS=>124232,NUMDIST=>6,AVGLBLK=>2990,AVGDBLK=>19002,CLSTFCT=>76010)

PL/SQL procedure successfully completed.

DEO>exec dbms_stats.SET_TABLE_STATS(null,'T2',NUMROWS=>140000000,NUMBLKS=>13540202, AVGRLEN=>120)

PL/SQL procedure successfully completed.

DEO>exec dbms_stats.SET_INDEX_STATS(null,'I_T2_TABLE_NAME',NUMROWS=>13345304,NUMLBLKS=>1242022,NUMDIST=>12,AVGLBLK=>324,AVGDBLK=>1342,CLSTFCT=>260123)

PL/SQL procedure successfully completed.

DEO>select table_name, num_rows, blocks, avg_row_len from dba_tables where table_name in ('T1','T2');

TABLE_NAME			           NUM_ROWS   BLOCKS     AVG_ROW_LEN
------------------------------ ---------- ---------- -----------
T1				               3221020    202590	 150
T2				               140000000  13540202	 120

DEO>select INDEX_NAME, LEAF_BLOCKS, DISTINCT_KEYS, CLUSTERING_FACTOR, NUM_ROWS from dba_indexes where table_name in ('T1','T2');

INDEX_NAME		               LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR NUM_ROWS
------------------------------ ----------- ------------- ----------------- ----------
I_T1_TABLE_NAME 		       124232	   6	         76010             3153430
I_T2_TABLE_NAME 		       1242022	   12	         260123            13345304


I will purge those queries from the shared pool and run the same queries again:


DEO>!cat purgesql.sql
accept sql_id prompt 'Enter SQL_ID:- '
DECLARE
 name varchar2(50);
BEGIN
 select distinct address||','||hash_value into name
 from v$sqlarea
 where sql_id like '&sql_id';

 sys.dbms_shared_pool.purge(name,'C',65);

END;
/

DEO>@purgesql
Enter SQL_ID:- 46j56265bmw7u

PL/SQL procedure successfully completed.

DEO>@purgesql
Enter SQL_ID:- gkbtfpmvxw4hn

PL/SQL procedure successfully completed.

Let's run the queries again after changing the stats.

Query 1:

select a.table_name, sum(b.num_rows) from T1 a, T2 b where a.table_name= b.table_name and a.owner = b.owner and a.table_name = :b1 group by a.table_name;


DEO>set autotrace on
DEO>select a.table_name, sum(b.num_rows) from T1 a, T2 b where a.table_name= b.table_name and a.owner = b.owner and a.table_name = :b1 group by a.table_name;

TABLE_NAME		               SUM(B.NUM_ROWS)
------------------------------ ---------------
ABC				               240678640


Execution Plan
----------------------------------------------------------
Plan hash value: 1483037242

----------------------------------------------------------------------------------------------------
| Id  | Operation			              | Name 	        | Rows  | Bytes | Cost (%CPU)| Time	   |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT		          |		            |	 1  |	61  |	74   (9)| 00:00:01 |
|   1 |  SORT GROUP BY NOSORT		      |		            |	 1  |	61  |	74   (9)| 00:00:01 |
|   2 |   MERGE JOIN			          |		            |  4464 |   265K|	74   (9)| 00:00:01 |
|   3 |    SORT JOIN			          |		            |	11  |   396 |	71   (6)| 00:00:01 |
|   4 |     VIEW			              | VW_GBC_5	    |	11  |   396 |	71   (6)| 00:00:01 |
|   5 |      HASH GROUP BY		          |		            |	11  |   297 |	71   (6)| 00:00:01 |
|   6 |       TABLE ACCESS BY INDEX ROWID | T2		        | 59222 |  1561K|	67   (0)| 00:00:01 |
|*  7 |        INDEX RANGE SCAN 	      | I_T2_TABLE_NAME | 59222 |	    |	55   (0)| 00:00:01 |
|*  8 |    SORT JOIN			          |		            |  2278 | 56950 |	 3  (67)| 00:00:01 |
|   9 |     TABLE ACCESS BY INDEX ROWID   | T1		        |  2278 | 56950 |	 1   (0)| 00:00:01 |
|* 10 |      INDEX RANGE SCAN		      | I_T1_TABLE_NAME |  2278 |	    |	 1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

   7 - access("B"."TABLE_NAME"=:B1)
   8 - access("A"."TABLE_NAME"="ITEM_2" AND "A"."OWNER"="ITEM_1")
       filter("A"."OWNER"="ITEM_1" AND "A"."TABLE_NAME"="ITEM_2")
  10 - access("A"."TABLE_NAME"=:B1)

Note
-----
   - SQL plan baseline "SQL_PLAN_gt8npsxnncgm2abc84fa9" used for this statement


Statistics
----------------------------------------------------------
	 26  recursive calls
	 62  db block gets
	231  consistent gets
	  4  physical reads
  14528  redo size
	598  bytes sent via SQL*Net to client
	453  bytes received via SQL*Net from client
	  2  SQL*Net roundtrips to/from client
	  2  sorts (memory)
	  0  sorts (disk)
	  1  rows processed

Note that the plan didn't change, but consistent gets (buffer_gets/exec) increased from the previous value of 157 to 231.
Physical reads are more or less the same (an increase of just 1).

Let's check the 2nd query, which does a FTS.

Query 2:

select count(1) from T2 where owner = :b2;


DEO>set  autotrace on
DEO>select count(1) from T2 where owner = :b2;

  COUNT(1)
----------
      9600

Execution Plan
----------------------------------------------------------
Plan hash value: 3321871023

---------------------------------------------------------------------------
| Id  | Operation	       | Name | Rows  | Bytes | Cost (%CPU)| Time	  |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |	  |	1     |	5     |  1926K  (1)| 02:10:54 |
|   1 |  SORT AGGREGATE    |	  |	1     |	5     |	           |	      |
|*  2 |   TABLE ACCESS FULL| T2   |  9333K|    44M|  1926K  (1)| 02:10:54 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("OWNER"=:B2)

Note
-----
   - SQL plan baseline "SQL_PLAN_9wq67xmrafzda1c6cf506" used for this statement


Statistics
----------------------------------------------------------
	  1  recursive calls
	  0  db block gets
   8195  consistent gets
   4342  physical reads
	  0  redo size
	515  bytes sent via SQL*Net to client
	453  bytes received via SQL*Net from client
	  2  SQL*Net roundtrips to/from client
	  0  sorts (memory)
	  0  sorts (disk)
	  1  rows processed


A huge increase in consistent gets compared to the previous run: with accurate stats the query showed 4350 consistent gets, but with the inflated stats it did 8195.
Also, there were no disk reads previously, probably because all blocks were in the buffer cache. But with the changed object-level stats we see 4342 physical reads; the inflated block count has likely pushed the full scan over the direct path read threshold, so it bypasses the buffer cache (see the direct path read article below).

So always make sure you have up-to-date, accurate statistics for the tables and indexes in your database. You may have the right plans, but bad object stats can still cause Oracle to do more work.
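As a hedged follow-up to the demo: dba_tab_statistics flags stale statistics, and dbms_stats can restore an earlier version of the statistics if a set/gather went wrong (the timestamp below is illustrative):

DEO>select table_name, stale_stats, last_analyzed from dba_tab_statistics where table_name in ('T1','T2');

DEO>exec dbms_stats.restore_table_stats(null, 'T1', as_of_timestamp => systimestamp - interval '1' hour);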

Hope this helps !!

Tracing Single SQL in Oracle

Many times while doing SQL tuning, we want to trace (event 10046) a single SQL in the database. Instead of going for module-level tracing or session-level tracing using DBMS_MONITOR, we can simply use the alter system command below to trace a specific SQL.

Example: I have a table T1 and an index T_I_TABLE_NAME on that table.
I am running the following SQL, and I want to trace it:

select * from T1 where table_name = 'SINGLE_PRODUCT_GROUPS';

I can just find out the SQL_ID of the above SQL:

select sql_id, sql_text from v$sql where sql_text like '%SINGLE_PRODUCT_GROUPS%';

SQL_ID	      SQL_TEXT
------------- --------------------------------------------------------------------------------
8kybysnu4nn34 select * from T1 where table_name = 'SINGLE_PRODUCT_GROUPS'

Once I have the SQL_ID, I can use the alter system command below to trace this SQL:


alter system set events 'sql_trace[SQL:8kybysnu4nn34] plan_stat=all_executions,wait=true,bind=true';

Note that even though this is alter system, running other SQL_IDs will not generate any tracing, because we are tracing only this specific SQL_ID. So unless SQL_ID 8kybysnu4nn34 is run, no trace file will be generated.

Once the SQL has run and the trace is generated, you can turn off tracing using the following statement:


alter system set events 'sql_trace[SQL:8kybysnu4nn34] off';

This might generate multiple trace files, as multiple sessions might run the same SQL_ID (depending on the application).
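To locate the generated trace files, v$diag_info can be queried on 11g and above; a minimal sketch:

select name, value from v$diag_info where name in ('Diag Trace', 'Default Trace File');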

Hope this helps !!!

direct path read behavior in Oracle 11.2

Prior to 11g, whenever the optimizer went for a full table scan, Oracle used to show the “db file scattered read” wait event. But starting from 11g, a full table scan can show (depending on certain conditions) the “direct path read” wait event.

db file scattered read – happens when blocks of a table are read from the datafile into the buffer cache in the SGA

direct path read – happens when blocks of a table are read from the datafile directly into the PGA

The problem with fetching data into the db cache was that, if the table was big enough, it would occupy many blocks in the db cache, causing performance issues. And if multiple SQLs were doing FTS (full table scans), every session would try to access those blocks in the buffer cache by acquiring the latch, which could cause “latch: cache buffers chains” contention. This would spike CPU on the host.

With direct path reads, blocks read from the datafile go to the PGA, and every session has its own chunk of memory allocated in the PGA. So multiple sessions doing FTS on the same table do not make the situation as bad as it used to be with “db file scattered read”.

There are certain conditions based on which Oracle decides whether to go for direct path reads or use the buffer cache.
In the reference section, I have mentioned many good blog articles which discuss one or more of these behaviors.

_small_table_threshold

_small_table_threshold: This is a hidden parameter which helps the server process decide whether to go for direct path reads or the buffer cache. The unit of this parameter is number of blocks, and its default value is 2% of the buffer cache size.

DEO>@param
Enter Parameter name: _small_table

Parameter                  Description                                            Session Value  Instance Value
-------------------------- ------------------------------------------------------ -------------- --------------
_small_table_threshold     lower threshold level of table size for direct reads   27107          27107

In my case this is set to 27107 blocks, which is approximately 2% of my buffer cache in blocks (convert db_cache_size into Oracle blocks and take 2% of that value to arrive at the default).
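The @param script itself is not shown above; a minimal sketch of such a hidden-parameter query might look like this (the x$ views are normally visible only when connected as SYS; ksppcv holds session values and ksppsv instance values):

select i.ksppinm parameter, i.ksppdesc description,
       c.ksppstvl session_value, s.ksppstvl instance_value
from x$ksppi i, x$ksppcv c, x$ksppsv s
where i.indx = c.indx
and i.indx = s.indx
and i.ksppinm like '%&param%';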

If the size of the table (blocks below the high water mark) is > _small_table_threshold, Oracle chooses to go for direct path reads.
Many blog articles state that direct path reads kick in only when the table is > 5 times _small_table_threshold.

During my testing, I was not able to verify the 5 times _small_table_threshold claim, because as soon as the table size is more than _small_table_threshold, it goes for direct path reads. I have checked this behavior in 11.2.0.3 as well as in 11.2.0.4.

Let's test the dependency of direct path reads on this hidden parameter for partitioned as well as non-partitioned tables.

Non-Partitioned Table

My Oracle version is 11.2.0.4. I have a non-partitioned table T1 created from DBA_OBJECTS. You can create a test table of significant size from any view in the database, or by using a “CONNECT BY rownum<=” clause against dual. Gather stats on the table.


DEO>select table_name, num_rows, blocks from dba_tables where table_name = 'T1';

TABLE_NAME			           NUM_ROWS   BLOCKS
------------------------------ ---------- ----------
T1				               91200	    3214

My table has 3214 blocks.

Setting _small_table_threshold to 5000


DEO>alter session set "_small_table_threshold" = 5000;

Session altered.

Since the table size is < _small_table_threshold, it should not go for direct path reads; instead we should see blocks going into the db cache.
We can check which objects' blocks are present in the buffer cache using the X$KCBOQH table. So when a full table scan goes through the db cache, we should be able to see it in this table.

Let's flush the buffer cache to make sure nothing is present in it.


DEO>alter system flush buffer_cache;

System altered.

DEO>select count(*) from T1;

  COUNT(*)
----------
     91200

DEO>select obj#, sum(num_buf) from X$KCBOQH where obj# = (select object_id from dba_objects where object_name = 'T1') group by obj#;

      OBJ# SUM(NUM_BUF)
---------- ------------
     93840	   3209

As shown above, the FTS has read all blocks into the db cache.

Now, let's reduce _small_table_threshold to 3000 and flush the buffer cache again.

DEO>alter session set "_small_table_threshold"=3000;

Session altered.

DEO>alter system flush buffer_cache;

System altered.

Since the table size > _small_table_threshold, we should see direct path reads.
We can verify whether a session has done direct path reads using the “physical reads direct” statistic from v$sesstat.
The following query shows the same; in this case my SID is 6531.

DEO>r
  1* select a.sid, b.name, a.value from v$sesstat a, v$statname b where a.statistic# = b.statistic# and a.statistic# in (97) and a.sid = 6531

       SID NAME 								                            VALUE
---------- ---------------------------------------------------------------- ----------
      6531 physical reads direct						                    2

So the current value of the direct path read statistic is 2 before reading from the table. Let's read data from the table.


DEO>select count(*) from T1;

  COUNT(*)
----------
     91200

DEO>select a.sid, b.name, a.value from v$sesstat a, v$statname b where a.statistic# = b.statistic# and a.statistic# in (97) and a.sid = 6531;

       SID NAME 								                            VALUE
---------- ---------------------------------------------------------------- ----------
      6531 physical reads direct						                    3210

We see that when the table size is more than _small_table_threshold, it goes for direct path reads.
We can also confirm whether the db cache holds anything for our table T1. Remember that we flushed the buffer cache before running the last query on T1.

DEO>select obj#, sum(num_buf) from X$KCBOQH where obj# = (select object_id from dba_objects where object_name = 'T1') group by obj#;

      OBJ# SUM(NUM_BUF)
---------- ------------
     93840	      1

We can see 1 block in the db cache, which is the segment header block. The header block is always read into the buffer cache.

Partitioned Table

In the case of a partitioned table, the same behavior is observed, but _small_table_threshold is compared against the size of the partitions being selected rather than the entire table.

I have created a similar partitioned table with 4 partitions:


CREATE TABLE T1_PART
(
    col1 NUMBER,
    col2 VARCHAR2(100)
 )
 PARTITION BY LIST (col1)
 (
    PARTITION PART_1  VALUES (1)
  , PARTITION PART_2  VALUES (2)
  , PARTITION PART_3  VALUES (3)
  , PARTITION PART_4  VALUES (4)
 ) ;

INSERT INTO T1_PART (SELECT mod(rownum,4)+1, rpad('X',100,'X') FROM dual CONNECT BY rownum<=896000);

DEO>select table_name, partition_name, num_rows, blocks, avg_row_len from user_tab_partitions where table_name='T1_PART';

TABLE_NAME		       PARTITION_NAME			              NUM_ROWS   BLOCKS     AVG_ROW_LEN
-------------------    -------------------------------------- ---------- ---------- -----------
T1_PART 		       PART_1				                  128000      1948	    104
T1_PART 		       PART_2				                  128000      1948	    104
T1_PART 		       PART_3				                  128000      1948	    104
T1_PART 		       PART_4				                  128000      1948	    104

4 rows selected.

Each partition is 1948 blocks.
Let's change _small_table_threshold to 1800 and select from a single partition. Since the size of the partition we are selecting is > _small_table_threshold, it should go for direct path reads.


DEO>alter session set "_small_table_threshold"=1800;

Session altered.

DEO>select count(1) from T1_PART where col1 = 1;

  COUNT(1)
----------
    128000

DEO>select obj#, sum(num_buf) from X$KCBOQH where obj# = (select object_id from dba_objects where object_name = 'T1_PART' and SUBOBJECT_NAME = 'PART_1') group by obj#;

      OBJ# SUM(NUM_BUF)
---------- ------------
     93845	      1

DEO>select a.sid, b.name, a.value from v$sesstat a, v$statname b where a.statistic# = b.statistic# and a.statistic# in (97) and a.sid = 6531;

       SID NAME 								                            VALUE
---------- ---------------------------------------------------------------- ----------
      6531 physical reads direct						                    5166

We see the “physical reads direct” statistic at 5166. This is a cumulative value; the previous value was 3210.
So it has read 5166 – 3210 = 1956 blocks, which is approximately the size of 1 partition.

Also, we see 1 block read into the buffer cache, which is the header block.

To verify that _small_table_threshold is checked against the partitions being selected and not at the table level, let's increase _small_table_threshold to 3600 and continue selecting a single partition.


DEO>alter session set "_small_table_threshold"=3600;

Session altered.

DEO>select count(1) from T1_PART where col1 = 1;

  COUNT(1)
----------
    128000

DEO>select a.sid, b.name, a.value from v$sesstat a, v$statname b where a.statistic# = b.statistic# and a.statistic# in (97) and a.sid = 6531;

       SID NAME 								                            VALUE
---------- ---------------------------------------------------------------- ----------
      6531 physical reads direct						                    5168

DEO>select obj#, sum(num_buf) from X$KCBOQH where obj# = (select object_id from dba_objects where object_name = 'T1_PART' and SUBOBJECT_NAME = 'PART_1') group by obj#;

      OBJ# SUM(NUM_BUF)
---------- ------------
     93845	   1949

So there is no increase in “physical reads direct”, but we can see the blocks were read into the buffer cache, as SUM(NUM_BUF) now matches the number of blocks in that partition.

If we select 2 partitions, it will again go for direct path reads, because the total number of blocks in the 2 partitions > _small_table_threshold (3600).


DEO>select count(1) from T1_PART where col1 in (1,2);

  COUNT(1)
----------
    256000

DEO>select a.sid, b.name, a.value from v$sesstat a, v$statname b where a.statistic# = b.statistic# and a.statistic# in (97) and a.sid = 6531;

       SID NAME 								                            VALUE
---------- ---------------------------------------------------------------- ----------
      6531 physical reads direct						                    9064

Both partitions went for direct path reads, even though 1 partition was in the buffer cache.
The stat value above is cumulative: it increased by 3896 blocks (9064 – 5168), which is the size of 2 partitions.

We can change this behavior using the _direct_read_decision_statistics_driven parameter, sketched below.
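As a hedged aside: when this hidden parameter is true (the default in recent 11.2 patchsets), the direct read decision is driven by the optimizer statistics of the segment rather than the actual block counts; setting it to false at session level reverts to the older behavior. Experiment only on test systems:

DEO>alter session set "_direct_read_decision_statistics_driven" = false;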

What happens when the blocks are cached in the buffer cache?

If we have at least 50% of the blocks cached in the buffer cache, Oracle will choose to use the db cache instead of direct path reads for a FTS. Alex Fatkulin has published a very nice article about this behavior – http://afatkulin.blogspot.ie/2009/01/11g-adaptive-direct-path-reads-what-is.html – and has provided a function to test it.

But I am seeing different behavior in my instance: in my tests, Oracle does not go for the db cache unless about 98% of the blocks are in the buffer cache.

In my case, for table T1, we can select half the records from the table to get approximately 50% of the blocks cached in the buffer cache, and check the behavior.

Non-Partitioned Table

Let's increase _small_table_threshold to 5000 so that Oracle does not go for direct path reads, and read approximately half the blocks into the buffer cache. We also flush the buffer cache first to make sure nothing is present.


DEO>alter system flush buffer_cache;

System altered.

DEO>alter session set "_small_table_threshold"=5000;

Session altered.

DEO>select count(1) from T1 where rownum < 50000;

  COUNT(1)
----------
     49999

DEO>select obj#, sum(num_buf) from X$KCBOQH where obj# = (select object_id from dba_objects where object_name = 'T1') group by obj#;

      OBJ# SUM(NUM_BUF)
---------- ------------
     93840	   1753

DEO>select a.sid, b.name, a.value from v$sesstat a, v$statname b where a.statistic# = b.statistic# and a.statistic# in (97) and a.sid = 6531;

       SID NAME 								                            VALUE
---------- ---------------------------------------------------------------- ----------
      6531 physical reads direct						                     5168

From the above output we can see that more than 50% of the blocks are cached in the buffer cache.
Now let's change _small_table_threshold back to 3000 and check whether the FTS goes for direct path reads or the buffer cache.


DEO>alter session set "_small_table_threshold"=3000;

Session altered.

DEO>select count(1) from T1;

  COUNT(1)
----------
     91200

DEO>select obj#, sum(num_buf) from X$KCBOQH where obj# = (select object_id from dba_objects where object_name = 'T1') group by obj#;

      OBJ# SUM(NUM_BUF)
---------- ------------
     93840	   1753

DEO>select a.sid, b.name, a.value from v$sesstat a, v$statname b where a.statistic# = b.statistic# and a.statistic# in (97) and a.sid = 6531;

       SID NAME 								                            VALUE
---------- ---------------------------------------------------------------- ----------
      6531 physical reads direct					                        8376

So even when we had more than 50% of the blocks in the buffer cache, it still went for direct path reads: SUM(NUM_BUF) didn't change from 1753 to 3209 (all the blocks of table T1). I increased the number of blocks cached in the buffer cache to 90%, and it was still going for direct path reads.

Only when approximately 98% of the blocks are cached does Oracle go for the db cache. I validated the same behavior in 11.2.0.3 as well.

Partitioned Table

I repeated the same process for the partitioned table and found the same behavior. In this case Oracle checks each partition instead of the whole table.

Each partition has 1948 blocks, so let's put 90% of the blocks in the db cache and check the behavior.
We increase _small_table_threshold to 3000 deliberately so that Oracle uses the db cache.


DEO>alter session set "_small_table_threshold"=3000;

Session altered.

DEO>alter system flush buffer_cache;

System altered.

DEO>select obj#, sum(num_buf) from X$KCBOQH where obj# = (select object_id from dba_objects where object_name = 'T1_PART' and SUBOBJECT_NAME = 'PART_1') group by obj#;

no rows selected

DEO>select a.sid, b.name, a.value from v$sesstat a, v$statname b where a.statistic# = b.statistic# and a.statistic# in (97) and a.sid = 6531;

       SID NAME 								                            VALUE
---------- ---------------------------------------------------------------- ----------
      6531 physical reads direct						                    2

DEO>--selecting 90% of records from PART_1
DEO>select count(*) from T1_PART where col1 = 1 and rownum < (128000*0.9);

  COUNT(*)
----------
    115199

DEO>select a.sid, b.name, a.value from v$sesstat a, v$statname b where a.statistic# = b.statistic# and a.statistic# in (97) and a.sid = 6531;

       SID NAME 								                            VALUE
---------- ---------------------------------------------------------------- ----------
      6531 physical reads direct						                    2

DEO>select obj#, sum(num_buf) from X$KCBOQH where obj# = (select object_id from dba_objects where object_name = 'T1_PART' and SUBOBJECT_NAME = 'PART_1') group by obj#;

      OBJ# SUM(NUM_BUF)
---------- ------------
     93845         1753

We now have 90% of the blocks of partition PART_1 in the buffer cache.
Now we can reduce _small_table_threshold to 1800 and try selecting.


DEO>alter session set "_small_table_threshold"=1800;

Session altered.

DEO>--selecting complete PART_1
DEO>select count(*) from T1_PART where col1 = 1;

  COUNT(*)
----------
    128000

DEO>select obj#, sum(num_buf) from X$KCBOQH where obj# = (select object_id from dba_objects where object_name = 'T1_PART' and SUBOBJECT_NAME = 'PART_1') group by obj#;

      OBJ# SUM(NUM_BUF)
---------- ------------
     93845         1753

DEO>select a.sid, b.name, a.value from v$sesstat a, v$statname b where a.statistic# = b.statistic# and a.statistic# in (97) and a.sid = 6531;

       SID NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
      6531 physical reads direct                                                  1950

DEO>

As seen above, even with 90% of the blocks in the db cache, Oracle goes for direct path read.
Let's repeat the same test with 98% of the blocks cached.


DEO>alter session set "_small_table_threshold"=3000;

Session altered.

DEO>alter system flush buffer_cache;

System altered.

DEO>select count(*) from T1_PART where col1 = 1 and rownum < (128000*0.98);

  COUNT(*)
----------
    125439

DEO>select obj#, sum(num_buf) from X$KCBOQH where obj# = (select object_id from dba_objects where object_name = 'T1_PART' and SUBOBJECT_NAME = 'PART_1') group by obj#;

      OBJ# SUM(NUM_BUF)
---------- ------------
     93845         1910

DEO>alter session set "_small_table_threshold"=1800;

Session altered.

DEO>select count(*) from T1_PART where col1 = 1;

  COUNT(*)
----------
    128000

DEO>select obj#, sum(num_buf) from X$KCBOQH where obj# = (select object_id from dba_objects where object_name = 'T1_PART' and SUBOBJECT_NAME = 'PART_1') group by obj#;

      OBJ# SUM(NUM_BUF)
---------- ------------
     93845         1949

DEO>select a.sid, b.name, a.value from v$sesstat a, v$statname b where a.statistic# = b.statistic# and a.statistic# in (97) and a.sid = 6531;

       SID NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
      6531 physical reads direct                                                  1950

So with approximately 98% of the rows/blocks in the db cache, Oracle does NOT go for direct path read.

What if we have 1 partition in the cache and we select 2 partitions? Will Oracle go for direct path read?

I have kept _small_table_threshold=3000, which is more than the size of 1 partition but less than the size of 2 partitions.


DEO>alter session set "_small_table_threshold"=3000;

Session altered.

DEO>select count(*) from T1_PART where col1 = 1;

  COUNT(*)
----------
    128000

DEO>select obj#, sum(num_buf) from X$KCBOQH where obj# in (select object_id from dba_objects where object_name = 'T1_PART' and SUBOBJECT_NAME in ('PART_1','PART_2')) group by obj#;

      OBJ# SUM(NUM_BUF)
---------- ------------
     93845         1949

As shown above, we have PART_1 in the db cache and nothing of PART_2. Let's select 2 partitions.


DEO>select count(*) from T1_PART where col1 in (1,2);

  COUNT(*)
----------
    256000

DEO>select obj#, sum(num_buf) from X$KCBOQH where obj# in (select object_id from dba_objects where object_name = 'T1_PART' and SUBOBJECT_NAME in ('PART_1','PART_2')) group by obj#;

      OBJ# SUM(NUM_BUF)
---------- ------------
     93845         1949
     93846            1

DEO>select a.sid, b.name, a.value from v$sesstat a, v$statname b where a.statistic# = b.statistic# and a.statistic# in (97) and a.sid = 6531;

       SID NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
      6531 physical reads direct                                                  5848

So even when 1 partition was in the db cache, selecting 2 partitions goes for direct path read for both partitions.
The stats value of 5848 above is cumulative on top of the previous value of 1950. The difference comes out to 3898, which is the size in blocks of the 2 partitions.

_direct_read_decision_statistics_driven

This parameter mainly affects the behavior of partitioned tables.

We have seen above that if we have one partition cached in the db cache and we select 2 partitions, both partitions go for direct path read. This happens when _direct_read_decision_statistics_driven is set to true (the default).

When _direct_read_decision_statistics_driven is true, Oracle uses the optimizer statistics to decide whether to go for direct path read or the db cache (i.e. db file scattered reads).
When it is false, Oracle uses the segment header block (the one block which always gets read into the db cache) to decide whether to go for direct path read or the db cache. In that case the value of _small_table_threshold is compared against each individual segment (partition), not against the total number of blocks of all the partitions we are selecting.

In all the above exercises _direct_read_decision_statistics_driven was set to true, so the Oracle server process used table statistics to decide the fetch method.
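
If you want to verify what these hidden parameters are currently set to, a commonly used sketch against the X$ views is below (this requires connecting as SYS, and X$ structures can vary across versions):


-- current values of the underscore parameters discussed in this post
select a.ksppinm parameter, b.ksppstvl value
from   x$ksppi a, x$ksppcv b
where  a.indx = b.indx
and    a.ksppinm in ('_small_table_threshold',
                     '_direct_read_decision_statistics_driven',
                     '_serial_direct_read');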

Let's set _direct_read_decision_statistics_driven to false and check the behavior.


DEO>alter session set "_direct_read_decision_statistics_driven"=false;

Session altered.

We still have partition PART_1 cached entirely in the db cache.


DEO>select obj#, sum(num_buf) from X$KCBOQH where obj# in (select object_id from dba_objects where object_name = 'T1_PART' and SUBOBJECT_NAME in ('PART_1','PART_2')) group by obj#;

      OBJ# SUM(NUM_BUF)
---------- ------------
     93845         1949
     93846            1

The current value of the "physical reads direct" statistic is shown below, and _small_table_threshold is set to 1800, which is less than the size of a single partition.


DEO>select a.sid, b.name, a.value from v$sesstat a, v$statname b where a.statistic# = b.statistic# and a.statistic# in (97) and a.sid = 6531;

       SID NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
      6531 physical reads direct                                                  3896

DEO>alter session set "_small_table_threshold" = 1800;

Session altered.

Let's select 2 partitions.


DEO>select count(1) from T1_PART where col1 in (1,2);

  COUNT(1)
----------
    256000

1 row selected.

DEO>select a.sid, b.name, a.value from v$sesstat a, v$statname b where a.statistic# = b.statistic# and a.statistic# in (97) and a.sid = 6531;

       SID NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
      6531 physical reads direct                                                  5846

1 row selected.

DEO>select obj#, sum(num_buf) from X$KCBOQH where obj# in (select object_id from dba_objects where object_name = 'T1_PART' and SUBOBJECT_NAME in ('PART_1','PART_2')) group by obj#;

      OBJ# SUM(NUM_BUF)
---------- ------------
     93845         1949
     93846            1

2 rows selected.

If we check carefully, direct path reads were done for only 1 partition (5846 - 3896 = 1950, and 1950 is the size of a single partition).
So Oracle went for direct path read for the 2nd partition and used the blocks cached in the buffer cache for the 1st partition.
Since we have set _direct_read_decision_statistics_driven to false, Oracle reads the header block of each individual segment to decide the fetch method for that segment.
Since the 1st segment, PART_1, was present entirely in the db cache, Oracle decided to use the cache for it. Of PART_2 only the header block was present, so Oracle compared the PART_2 size with _small_table_threshold, and since PART_2 size > _small_table_threshold, it went for direct path read.

In the earlier exercise (_direct_read_decision_statistics_driven = true), when we set _small_table_threshold to 3000 for the partitioned table and selected 2 partitions, both went for direct path read. If we repeat that now, both partitions will be read through the db cache, because the size of each individual partition (taken from the header block of each segment) is compared with the threshold separately, not the sum of the blocks of all the partitions we are selecting.

_serial_direct_read

If you want to disable direct path reads for full table scans, you can make use of the _serial_direct_read parameter. Many DBAs increase _small_table_threshold to a very high value (higher than any table in the database) to prevent direct path reads, but Oracle provides the hidden parameter _serial_direct_read which, when set to false, disables direct path reads.

The default value of this parameter is true, which means direct path reads are allowed.

You can also set event 10949 to disable direct path reads; setting this event is equivalent to setting _serial_direct_read=false.

alter session set events '10949 trace name context forever, level 1';
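
For reference, the parameter-based approach, and how to switch the event back off, look like this (note that on 11.2.0.2 and later _serial_direct_read also accepts AUTO/NEVER/ALWAYS):

-- disable serial direct path reads for this session via the hidden parameter
alter session set "_serial_direct_read"=false;

-- switch event 10949 off again
alter session set events '10949 trace name context off';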

Hope this helps !!

References

https://dioncho.wordpress.com/2009/07/21/disabling-direct-path-read-for-the-serial-full-table-scan-11g/
http://afatkulin.blogspot.ie/2009/01/11g-adaptive-direct-path-reads-what-is.html

Optimizer statistics-driven direct path read decision for full table scans (_direct_read_decision_statistics_driven)

http://oracle-tech.blogspot.com.tr/2014/04/smalltablethreshold-and-direct-path.html
http://oracle-tech.blogspot.com.tr/2014/04/directreaddecisionstatistcsdriven.html

Oracle Performance Tuning – ASH Basics

Recently I got an opportunity to dig into the performance of a database. We were not facing any performance issue, but we wanted to understand what's happening under the hood: what the bottlenecks are, and how we prepare and scale our database further.

In the future, the load on our databases is going to be crazy, and we want to make sure we look into every consumer of the database and optimize its usage.

I went back to the basics of database performance tuning – ASH. Active Session History is a beast with loads of information, and it provides that information at a very granular level – the session. We can dig into ASH data and generate different reports rolled up against various entities like module, action, sql_id, user_id etc., and determine how much each of these entities is using database resources.

DB Time:

One of the most fundamental measures of database performance is DB Time. This is the amount of time a session has spent on database activities: CPU time plus non-idle wait time. Idle wait time is not included, so if a session has finished its activity and is just sitting idle doing nothing, that idle time is not counted as part of DB Time.

This gives a very accurate measure of the amount of database resources a session is burning.
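
If you want a quick cross-check without ASH, the time model statistics expose the same measure directly per session. A minimal sketch (SID 6531 is just an example SID):


-- DB time and DB CPU accumulated by one session, converted from microseconds to seconds
select stat_name, round(value/1000000,1) seconds
from   v$sess_time_model
where  sid = 6531
and    stat_name in ('DB time', 'DB CPU');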

Active Session History:

ASH, or Active Session History, provides snapshots of the active sessions connected to the database (status = ACTIVE in v$session) along with loads of other information about what those sessions were doing when each snapshot was taken. To understand how we can extract this information and make sense of it, we need to understand how ASH is built and what that information means.

V$ACTIVE_SESSION_HISTORY

Active session history is exposed by the database through the view V$ACTIVE_SESSION_HISTORY. This view stores samples taken every 1 sec: the background process MMNL (Memory Monitor Light) takes a snapshot from v$session and v$session_wait of the current state of the active sessions and dumps the information into V$ACTIVE_SESSION_HISTORY. Note that only active sessions are sampled.
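
You can eyeball the raw samples directly. For example, the last minute of samples, showing just a few of the view's many columns:


-- one row per active session per 1-second sample
select sample_time, session_id, session_state, sql_id, event
from   v$active_session_history
where  sample_time > sysdate - interval '1' minute
order  by sample_time, session_id;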

V$ACTIVE_SESSION_HISTORY is a memory view stored in the form of a circular buffer, which means it has a size limit on its growth (approx 2MB per CPU). When it reaches that size, it starts overwriting the oldest records in the circular buffer.
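
You can check the size of the ASH buffer in your own instance from v$sgastat (on 11.2 the entry I see is named 'ASH buffers'; the name may differ across versions):


-- size of the ASH circular buffer in the SGA
select pool, name, round(bytes/1024/1024,1) mb
from   v$sgastat
where  name = 'ASH buffers';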

 

You can visualize the circular buffer using the following figure.

[Figure: ASH circular buffer architecture (from "ASH Architecture and Advanced Usage", RMOUG 2014)]

So how are these samples useful to us in finding DB Time?

Let's say a session has spent 10 sec of active time in the database, i.e. its DB Time is 10 sec. During those 10 sec of active time, 10 samples of that session will have been taken, as the interval between samples is 1 sec. So if we just take a count(*) from V$ACTIVE_SESSION_HISTORY, we will see 10 records for that session. This means that to get the DB Time of a session we just have to take count(*) from V$ACTIVE_SESSION_HISTORY for that session.

It takes a little getting used to that a time measure can be obtained with a count(*) from a view, but it works because the interval between samples is 1 sec.

So DB Time is actually – [ Number of samples ] X [ Time interval between samples ]

Since the time interval between samples is 1 sec, DB Time equals the number of samples, which is count(*) from V$ACTIVE_SESSION_HISTORY.
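
As a minimal sketch of this idea, the following counts samples, i.e. seconds of DB Time, per session over the last 10 minutes:


-- approximate DB Time (seconds) per session: one sample = one second
select session_id, session_serial#, count(*) db_time_secs
from   v$active_session_history
where  sample_time > sysdate - interval '10' minute
group  by session_id, session_serial#
order  by db_time_secs desc;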

Again, calculating DB Time using count(*) from ASH is an approximation. A session that is active for a fraction of a second between two samples is not counted at all, and a session that goes idle just before the next sample is counted only up to the last sample at which it was active. But such cases are few, and the DB Time missed this way is very small compared to the total DB Time we are measuring. You can visualize this with the following figure.

 

[Figure: ASH samples vs. actual active session time]

DBA_HIST_ACTIVE_SESS_HISTORY

DBA_HIST_ACTIVE_SESS_HISTORY is a history table which maintains active session history across AWR snapshots. Since V$ACTIVE_SESSION_HISTORY is a memory view with limited space for growth, data from V$ACTIVE_SESSION_HISTORY flows into DBA_HIST_ACTIVE_SESS_HISTORY.

DBA_HIST_ACTIVE_SESS_HISTORY enables us to view active session data from the past. Data in DBA_HIST_ACTIVE_SESS_HISTORY is retained according to the retention period set for AWR snapshots (8 days by default in 11g). Every time an AWR snapshot happens, data from V$ACTIVE_SESSION_HISTORY goes into this table.

But NOT all data from V$ACTIVE_SESSION_HISTORY goes into DBA_HIST_ACTIVE_SESS_HISTORY: when an AWR snapshot happens, only 1 out of every 10 samples from V$ACTIVE_SESSION_HISTORY is copied into the history table.
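
The snapshot interval and retention in effect on your database can be checked from DBA_HIST_WR_CONTROL:


-- AWR snapshot interval and retention settings
select snap_interval, retention
from   dba_hist_wr_control;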

The complete flow of active session samples can be visualized as below.

[Figure: flow of samples from V$ACTIVE_SESSION_HISTORY into DBA_HIST_ACTIVE_SESS_HISTORY (from "ASH Architecture and Advanced Usage", RMOUG 2014)]

Since the AWR snapshot interval is 1 hour, a snapshot is taken every hour, and at that point 1 out of every 10 samples from V$ACTIVE_SESSION_HISTORY moves to the DBA_HIST_ACTIVE_SESS_HISTORY table.

So if we want to calculate DB Time from DBA_HIST_ACTIVE_SESS_HISTORY, we need to use a sample interval of 10 sec instead of 1 sec.

DB Time = [ Number of samples ] X [ Time interval between samples ]
        = count(1) * 10

That's why, when calculating the DB Time of a session or SQL from DBA_HIST_ACTIVE_SESS_HISTORY, we need to take count(*) * 10.
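
As an example of the same convention, here is a sketch that ranks SQL statements by approximate DB Time between two AWR snapshots (snap ids supplied via substitution variables, as in the module queries below):


-- top SQL by approximate DB Time; each sample represents ~10 seconds
select sql_id, count(*)*10 db_time_secs
from   dba_hist_active_sess_history
where  snap_id >= &&begin_snap_id
and    snap_id < &&end_snap_id
and    SESSION_TYPE = 'FOREGROUND'
group  by sql_id
order  by db_time_secs desc;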

Calculating DB Time

With this brief understanding of ASH, we can use these views to find the DB Time for a session, and consolidate sessions to calculate DB Time per module, user etc.

If we list modules by the % of DB Time they consume, the modules with the highest % of DB Time are the ones using the most resources and having the highest impact on the database. The same holds true at the user level: we can find out which user contributes the most DB Time.

I have given a couple of queries below to find the % of DB Time different modules contribute in the database. This gives a fair idea about module-wise resource consumption.

Note that I have randomized the module names using DBMS_RANDOM.string('L', TRUNC(DBMS_RANDOM.value(10,21))) to avoid exposing the actual module names.


set line 999
set pagesize 999
set verify off
col module format a30;
BREAK ON REPORT
COLUMN DUMMY HEADING ''
compute sum of PCT_CPU on REPORT
with total_cpu as
(select   count(1)
 from     V$ACTIVE_SESSION_HISTORY
 where    SESSION_TYPE = 'FOREGROUND')
 select   module,
          count(1) "module_cpu",
          (select * from total_cpu) "total_cpu",
          round((count(1)/(select * from total_cpu))*100,2) PCT_CPU
from      V$ACTIVE_SESSION_HISTORY
where     SESSION_TYPE = 'FOREGROUND'
group by module order by PCT_CPU desc;

MODULE                         module_cpu  total_cpu    PCT_CPU
------------------------------ ---------- ---------- ----------
vvadcagpas                        29355     187065      15.69
cbkcbqchts                        26506     187065      14.17
zugkvhbajxpff                     24655     187065      13.18
tvfqhrkqzktiiceolrfd              10046     187065       5.37
paczxkndgsaqt                      6740     187065        3.6
pzqacdgxeqzbtqhalfc                6220     187065       3.33
nvqqscmayx                         5905     187065       3.16
zbirttdyoxxbslopnm                 5412     187065       2.89
bbztiihehigeqwxvk                  5051     187065        2.7
kvosekxfhkmfw                      4687     187065       2.51
aenyopwmok                         3977     187065       2.13
brfqfuwdyhyrej                     3256     187065       1.74
abruxykxhvqgfg                     3227     187065       1.73
zfrahazoqgdxubpni                  2759     187065       1.47
ddxinnevqd                         2633     187065       1.41
lructcsjkxhxatfidpf                2518     187065       1.35
naawhsfqvcaoathn                   2246     187065        1.2
sljzltjhhigepqfmcji                2142     187065       1.15
chvjvntynkjb                       2102     187065       1.12
zlzhwwhzbhfdtsszfedj               2005     187065       1.07
yvkuwzjkjhyeamco                   1610     187065        .86
fdjwspubzyhyxyzzkgyi               1578     187065        .84
risowlcnkizrbpgt                   1557     187065        .83
ywwdycosdpcfsflaubia               1539     187065        .82
pxlqrkmvreppum                     1496     187065         .8
amivlnlivqeesnqdkto                1404     187065        .75
jjaqyosqsc                         1347     187065        .72
yxkgfgktafvx                       1354     187065        .72
gqistslkwdw                        1328     187065        .71
bhuzvdpklocmnyimtzux               1272     187065        .68
uimycabvbjcczibrn                  1053     187065        .56
ksnvqslyker                         997     187065        .53
ofekwpklwpjtha                      975     187065        .52
wmkmovueneno                        904     187065        .48
umdtggtvdptkubfs                    795     187065        .42
dddegpqpwyalotxtoh                  761     187065        .41
pwycwyhavccwjuufqjx                 733     187065        .39
pxvcbhpfmeuwen                      729     187065        .39
nhezqmgkqccdcog                     716     187065        .38
ixtljofuazhpp                       628     187065        .34
mbotmyoairyqbdg                     600     187065        .32
aocabnswozdg                        586     187065        .31
qtdbdbuffprxj                       586     187065        .31
suiwwwtolcnkbnkcfwxw                560     187065         .3
btbdbtaztym                         562     187065         .3
xmvdgeczrot                         567     187065         .3
gkwzuwlgsh                          538     187065        .29
yhgmyqgqkvzmvbpsi                   463     187065        .25
obahkcsiafagmkxxu                   457     187065        .24
tcjoiyhzpuutbfhegn                  448     187065        .24
brlrkszrpmtw                        426     187065        .23
spzvvwzvoh                          435     187065        .23
ckvvarzrtpedhjl                     334     187065        .18
mnamshpswmsif                       329     187065        .18
gwvwypqrpwxkbugms                   293     187065        .16
clkuqearqhguezagwjlc                302     187065        .16
fskwtuctalnxhhu                     306     187065        .16
dyljhijlwyfmjy                      277     187065        .15
eqdifihzda                          255     187065        .14
nkrvwqrldd                          229     187065        .12
jkevokwkwokbnkbx                    231     187065        .12
niatevdadex                         198     187065        .11
qfjiifhiubvstyp                     200     187065        .11
acquoqslwxedulsmekbu                180     187065         .1
hjfimltalvopvuum                    156     187065        .08
bmyyhtzdqebo                        152     187065        .08
tkehpofgfe                          157     187065        .08
qnhgoqgstvq                         131     187065        .07
ddlloytpmyhpzpmk                    125     187065        .07
bvborwlump                          133     187065        .07
juirsmdjuejot                       124     187065        .07
saeoojlzfepjfybpclls                129     187065        .07
ppfdzcozyhxmf                       135     187065        .07
xbrwqdopemjqrhebglzs                115     187065        .06
zrfdsejwxgitsorat                   119     187065        .06
vsmsefmxnkoeb                        97     187065        .05
dvhoxhzayqyhsny                      98     187065        .05
cwjpiesdhpxvjs                      101     187065        .05
bvffncgtiojaj                        98     187065        .05
flfuciblxfzdcbo                      87     187065        .05
asucxewzdruwyfmg                     66     187065        .04
dztczkivnnbuzyuh                     67     187065        .04
hslcljfjmrhduw                       73     187065        .04
lywascpugxxjidx                      74     187065        .04
cawgdisaagltmie                      73     187065        .04
dpqbuvfkef                           68     187065        .04
tjpaushwfftmm                        54     187065        .03
ulgktcqvsazktojskfo                  56     187065        .03
ckoayyqtcxybkukgqvgl                 49     187065        .03
aqmphwbrngbotpxehyhe                 49     187065        .03
asqllnbivusyu                        60     187065        .03
skvrrqoshnirxbud                     59     187065        .03
vobtqhfewje                          58     187065        .03
xurrifyontk                          34     187065        .02
rliqgkwdwao                          42     187065        .02
ziwftblonh                           36     187065        .02
wzybbgmxkx                           30     187065        .02
vhumxxhimauc                         16     187065        .01
vssaefphdkebdx                       26     187065        .01
vjbjocafrwgdv                        11     187065        .01
xgcwcxrlzrjvysanlijw                 14     187065        .01
ztzxyfykhnxozdn                      20     187065        .01
btlbvffybkyjsn                       12     187065        .01
pmmqhsjuqhdaf                        22     187065        .01
kaumdvjqfcprmaeygr                   11     187065        .01
kadliemxdxhtzleqjr                   28     187065        .01
hlebszxkejjgkfbpmjew                 16     187065        .01
atwvjasykmlmldbipswh                 16     187065        .01
                                                     ----------
sum                                                      99.95

Similarly, to check DB Time for sessions in the past, we use the DBA_HIST_ACTIVE_SESS_HISTORY table. The following query provides the % of DB Time used by individual modules, calculated against the total DB Time between 2 AWR snapshots.

set line 999
set pagesize 999
set verify off
BREAK ON REPORT
COLUMN DUMMY HEADING ''
compute sum of PCT_CPU on REPORT
with total_cpu as
(select   count(1)*10
from      dba_hist_active_sess_history
where     snap_id >= &&begin_snap_id
and       snap_id < &&end_snap_id
and       SESSION_TYPE = 'FOREGROUND')
select    module,
          count(1)*10 "module_cpu",
          (select * from total_cpu) "total_cpu",
          round((count(1)*10/(select * from total_cpu))*100,2) PCT_CPU
from      dba_hist_active_sess_history
where     snap_id >= &&begin_snap_id
and       snap_id < &&end_snap_id
and       SESSION_TYPE = 'FOREGROUND'
group by module order by PCT_CPU desc;

 

 

Note that we have used count(*) * 10 in the above query because only 1 of every 10 samples goes into this table. Also, you need to specify begin_snap_id and end_snap_id; module DB Time usage between those snapshots will be calculated.
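
The snap_ids bracketing your window of interest can be looked up from DBA_HIST_SNAPSHOT, for example:


-- list AWR snapshots with their time windows
select snap_id, begin_interval_time, end_interval_time
from   dba_hist_snapshot
order  by snap_id;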

CPU Time

If we want to ignore waits and focus only on pure CPU used by modules, we can list modules by the CPU time they consume as a % of the total CPU time of the database.

I think DB Time is still a better metric than CPU time, because DB Time covers waits as well, and waits slow the database down.

We can calculate the CPU time of a session by doing the same count(*) from V$ACTIVE_SESSION_HISTORY, but counting only those samples during which the session was actually using CPU. An ACTIVE session is either using CPU to do some processing or actively waiting on something (for example I/O during a db file sequential read). A session using CPU is represented by session_state = 'ON CPU', so we count only the samples for a session where session_state was ON CPU.

The following query gives the CPU time used by each module as a % of the total CPU time, using V$ACTIVE_SESSION_HISTORY.


set line 999
set pagesize 999
set verify off
BREAK ON REPORT
COLUMN DUMMY HEADING ''
compute sum of PCT_CPU on REPORT
with total_cpu as
(select   count(1)
 from     V$ACTIVE_SESSION_HISTORY
 where    SESSION_TYPE != 'BACKGROUND'
 and      session_state = 'ON CPU')
select    module,
          count(1) "module_cpu",
          (select * from total_cpu) "total_cpu",
          round((count(1)/(select * from total_cpu))*100,2) PCT_CPU
from      V$ACTIVE_SESSION_HISTORY
where     SESSION_TYPE != 'BACKGROUND'
and       session_state = 'ON CPU'
group by  module order by PCT_CPU desc;

We can find the same CPU time for past sessions using the DBA_HIST_ACTIVE_SESS_HISTORY table. The following query provides the CPU time used by each module as a % of the total CPU time between 2 AWR snapshots.


set line 999
set pagesize 999
set verify off
BREAK ON REPORT
COLUMN DUMMY HEADING ''
compute sum of PCT_CPU on REPORT
with total_cpu as
(select   count(1)*10
 from     dba_hist_active_sess_history
 where    snap_id >= &&begin_snap_id
 and      snap_id < &&end_snap_id
 and      SESSION_TYPE != 'BACKGROUND'
 and      session_state = 'ON CPU')
select    module,
          count(1)*10 "module_cpu",
          (select * from total_cpu) "total_cpu",
          round((count(1)*10/(select * from total_cpu))*100,2) PCT_CPU
from      dba_hist_active_sess_history
where     snap_id >= &&begin_snap_id
and       snap_id < &&end_snap_id
and       SESSION_TYPE != 'BACKGROUND'
and       session_state = 'ON CPU'
group by  module order by PCT_CPU desc;

Note that in the above query against DBA_HIST_ACTIVE_SESS_HISTORY we have used count(*) * 10, because only 1 of every 10 samples goes into this table.

I think the above information is sufficient for now to give a basic idea about ASH.

I will be writing more articles on performance tuning using ASH as I dig further.

Hope this helps !!