Oracle Corporate timeline

I was going through some sites and found good information on the events that brought Oracle to where it is today. It has been a long journey, with Oracle's business spreading from the database to applications and middle-tier architecture.

Here are the brief events I have listed.

  • 1979: Larry Ellison and friends founded Software Development Laboratories.
  • 1979: SDL changed its company-name to “Relational Software, Inc.” (RSI) and introduced its product Oracle V2 as an early commercially-available relational database system. The version did not support transactions, but implemented the basic SQL functionality of queries and joins. (RSI never released a version 1 – instead calling the first version version 2 as a marketing gimmick.)
  • 1983: RSI in its turn changed its name, becoming known as “Oracle Corporation”, to align itself more closely with its flagship-product. The company released Oracle version 3, which it had re-written using the C programming language and which supported COMMIT and ROLLBACK functionality for transactions. Version 3 extended platform support from the existing Digital VAX/VMS systems to include Unix environments.
  • 1984: Oracle Corporation released Oracle version 4, which supported read-consistency.
  • 1985: the Oracle RDBMS began supporting the client-server model, with networks becoming more widely available in the mid-1980s. Oracle version 5.0 supported distributed queries.
  • 1989: Oracle Corporation entered the application products market and developed its ERP product, Oracle E-Business Suite, based on the Oracle relational database. Oracle RDBMS version 6 came out with support for PL/SQL embedded within Oracle Forms v3 (version 6 could not store PL/SQL in the database proper), row-level locking and hot backups.
  • 1992: Oracle version 7 appeared with support for referential integrity, stored procedures and triggers.
  • 1997: Oracle Corporation released version 8, which supported object-oriented development and multimedia applications.
  • 1999: The release of Oracle8i aimed to provide a database inter-operating better with the Internet (the i in the name stands for “Internet”). The Oracle 8i database incorporated a native Java virtual machine (Oracle JVM).
  • 2001: Oracle9i was released with 400 new features, including the ability to read and write XML documents. 9i also provided an option for Oracle RAC, or “Real Application Clusters”, a computer-cluster database, as a replacement for the Oracle Parallel Server (OPS) option.
  • 2003: Oracle Corporation released Oracle Database 10g. (The g stands for “grid”; emphasizing a marketing thrust of presenting 10g as “grid-computing ready”.)
  • 2005: Oracle Database 10g Release 2 (10gR2) appeared.
  • 2007: Oracle Corporation released Oracle Database 11g for Linux and Microsoft Windows.

Oracle Database versions

Since version 7, Oracle’s RDBMS release numbering has used the following codes:

  • Oracle7: 7.0.16 — 7.3.4
  • Oracle8 Database: 8.0.3 — 8.0.6
  • Oracle8i Database Release 1: —
  • Oracle8i Database Release 2: —
  • Oracle8i Database Release 3: —
  • Oracle9i Database Release 1: — (Latest current patchset as of December 2003)
  • Oracle9i Database Release 2: — (Latest current patchset as of April 2007)
  • Oracle Database 10g Release 1: — (Latest current patchset as of February 2006)
  • Oracle Database 10g Release 2: — (Latest current patchset as of November 2006)
  • Oracle Database 11g Release 1: — no patchset available as of October 2007

The version numbering syntax within each release follows the pattern: major.maintenance.application-server.component-specific.platform-specific. For example, “ for 64-bit Solaris” means: 10th major version of Oracle, maintenance level 2, Oracle Application Server (OracleAS) 0, level 1 for Solaris 64-bit.

Reference: Wikipedia


Database Wait Statistics – Oracle Database 10g

When a user submits a SQL statement to an Oracle database, Oracle never executes it continuously in one go. The Oracle process never gets to work on the execution of the statement without interruptions; often the process has to pause and wait for some event to occur or some resource to be released. Thus, at any point in time an active Oracle process is doing one of the following:

  • The process is executing the SQL statement.
  • The process is waiting for something (for example, a resource such as a database buffer or a latch). It could be waiting for an action such as a write to the buffer cache to complete.

That's why the response time, the total time taken by Oracle to finish work, is correctly defined as follows:

response time = service time + wait time

So only part of the time is spent by the Oracle process actually "doing" something; the rest of the time the process just waits for some resource to be freed up. It could be waiting on the log writer process, the database writer process, or some other resource.

The wait event may also be due to unavailable buffers or latches.

Four dynamic performance views contain wait information: V$SESSION, V$SYSTEM_EVENT, V$SESSION_EVENT, and V$SESSION_WAIT. These four views list just about all the events the instance was waiting for and the duration of these waits. Understanding these wait events is essential for resolving performance issues.

There are different wait classes defined in the database, and each class contains different wait events. There are around 860 wait events defined in Oracle Database 10g, classified under the different wait classes.
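As a quick sanity check, the events and their classes can be counted straight from the data dictionary; a minimal sketch, assuming the 10g V$EVENT_NAME view with its WAIT_CLASS column (the exact total varies by patch level):

```sql
-- Count the wait events defined in each wait class (Oracle 10g)
SELECT wait_class, COUNT(*) AS event_count
FROM v$event_name
GROUP BY wait_class
ORDER BY event_count DESC;
```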

Some of the main wait classes include:

  • Administrative: Waits caused by administrative commands, such as rebuilding an index, for example.
  • Application: Waits due to the application code.
  • Cluster: Waits related to Real Application Cluster management.
  • Commit: Consists of the single wait event log file sync, which is a wait caused by commits in the database.
  • Concurrency: Waits for database resources that are used for locking; for example, latches.
  • Configuration: Waits caused by database or instance configuration problems, including a low shared-pool memory size, for example.
  • Idle: Idle wait events indicate waits that occur when a session isn’t active; for example, the ‘SQL*Net message from client’ wait event.

You can see the complete list of wait classes using the V$SESSION_WAIT_CLASS dynamic performance view.
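For instance-wide totals per class, a sketch along these lines can be used (V$SYSTEM_WAIT_CLASS aggregates the same wait classes across the whole instance):

```sql
-- Instance-wide waits summarized by wait class
SELECT wait_class, total_waits, time_waited
FROM v$system_wait_class
ORDER BY time_waited DESC;
```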

Analyzing Instance Performance

You can check the percentage of time spent by the database waiting for resources versus the percentage of time spent in actual execution.

SQL> SELECT METRIC_NAME, VALUE
  2  FROM V$SYSMETRIC
  3  WHERE METRIC_NAME IN ('Database CPU Time Ratio',
  4  'Database Wait Time Ratio') AND
  5  INTSIZE_CSEC =
  6  (select max(INTSIZE_CSEC) from V$SYSMETRIC);

METRIC_NAME                                                           VALUE
—————————————————————- ———-
Database Wait Time Ratio                                         15.6260647
Database CPU Time Ratio                                          84.3739353

If the database shows that 'Database Wait Time Ratio' is greater than 'Database CPU Time Ratio', or the value of 'Database Wait Time Ratio' is significant, then you need to dig deeper to find out where exactly Oracle is waiting. You basically need to find the type of wait; this will give you the root cause, and once you have the root cause you can work on fixing it.

You can determine the total waits and percentage of waits by wait class:

SQL> SELECT WAIT_CLASS, TOTAL_WAITS,
  2  round(100 * (TOTAL_WAITS / SUM_WAITS),2) PCT_WAITS,
  3  round((TIME_WAITED / 100),2) TOT_TIME_WAITED,
  4  round(100 * (TIME_WAITED / SUM_TIME),2) PCT_TIME
  5  FROM
  6  (select WAIT_CLASS, TOTAL_WAITS, TIME_WAITED
  7  from V$SYSTEM_WAIT_CLASS
  8  where WAIT_CLASS != 'Idle'),
  9  (select sum(TOTAL_WAITS) SUM_WAITS, sum(TIME_WAITED) SUM_TIME
 10  from V$SYSTEM_WAIT_CLASS
 11  where WAIT_CLASS != 'Idle');

WAIT_CLASS           TOTAL_WAITS    PCT_WAITS TOT_TIME_WAITED   PCT_TIME
——————– ———– ———— ————— ———-
System I/O                180300        19.96          3008.8      49.53
Commit                     67855         7.51         1302.46      21.44
User I/O                  291565        32.28         1056.55      17.39
Application                 3637           .4          596.66       9.82
Other                      15388          1.7            67.4       1.11
Concurrency                 1264          .14           38.12        .63
Network                   343169        37.99            3.86        .06
Configuration                 22            0               1        .02

8 rows selected.

In the above output the percentage of time waited (last column) is more important, and gives the correct picture of the impact of the waits. For example, the total number of Network waits is large, but the actual percentage of time contributed to the waits is very small (0.06%).

The key dynamic performance tables for finding wait information are the V$SYSTEM_EVENT, V$SESSION_EVENT, V$SESSION_WAIT, and the V$SESSION views. The first two views show the waiting time for different events.


The V$SYSTEM_EVENT view shows the total time waited for all the events for the entire system since the instance started up. The view doesn't focus on the individual sessions experiencing waits, and therefore it gives you a high-level view of waits in the system. You can use this view to find out what the top instance-wide wait events are. You can calculate the top n waits in the system by dividing the event's wait time by the total wait time for all events.

SQL> select event, total_waits, time_waited, wait_class
  2  from v$system_event
  3  where wait_class != 'Idle'
  4  order by time_waited desc;

EVENT                          TOTAL_WAITS TIME_WAITED WAIT_CLASS
—————————— ———– ———– ——————–
log file parallel write             128953      210308 System I/O
log file sync                        67904      130313 Commit
db file sequential read             259065       73686 User I/O
enq: TX – row lock contention          226       59080 Application
control file parallel write          28282       57929 System I/O
db file parallel write               19155       32924 System I/O
db file scattered read               31841       30925 User I/O
os thread startup                       95        3262 Concurrency
rdbms ipc reply                        485        2116 Other
PX Deq: Signal ACK                    1971        1103 Other
local write wait                       245         864 User I/O

We can get the session-level waits for each event using the V$SESSION_EVENT view. In this view, TIME_WAITED is the wait time per session.


SQL> select SID, EVENT, TOTAL_WAITS, TIME_WAITED, WAIT_CLASS
  2  from V$SESSION_EVENT
  3  where WAIT_CLASS != 'Idle'
  4  order by TIME_WAITED;

       SID EVENT                          TOTAL_WAITS TIME_WAITED WAIT_CLASS
———- —————————— ———– ———– ————
390 os thread startup                       55      1918   Concurrency
393 db file sequential read              10334      4432   User I/O
396 db file parallel write                8637      14915  System I/O
397 db file parallel write               10535      18035  System I/O
394 control file parallel write          28294      57928  System I/O
395 log file parallel write             129020      210405 System I/O

As we can see from the above output, session 395 has the maximum wait time, because of system I/O. Here, system I/O is the I/O performed by background processes such as DBWR and LGWR.

You can get all the database wait events from the V$EVENT_NAME view; the meaning of each wait event is described in the Oracle 10g documentation.


The third dynamic view is the V$SESSION_WAIT view, which shows the current waits or just completed waits for sessions. The information on waits in this view changes continuously based on the types of waits that are occurring in the system. The real-time information in this view provides you with tremendous insight into what’s holding up things in the database right now. The
V$SESSION_WAIT view provides detailed information on the wait event, including details such as file number, latch numbers, and block number. This detailed level of information provided by the V$SESSION_WAIT view enables you to probe into the exact bottleneck that’s slowing down the database. The low-level information helps you zoom in on the root cause of performance problems.

SQL> select sid, event, WAIT_CLASS, WAIT_TIME, SECONDS_IN_WAIT, STATE from v$session_wait
2  where wait_class != 'Idle';

       SID EVENT                          WAIT_CLASS            WAIT_TIME
———- —————————— ——————– ———-
————— ——————-
337 SQL*Net message to client      Network                      -1

Here a wait time of -1 means that the session has waited for less than 1/100th of a second.
You can get the complete wait information for a particular session using the V$SESSION view, including the SQL_ID of the SQL that is causing the wait.
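A minimal sketch of such a per-session query (columns as in the 10g V$SESSION view; SECONDS_IN_WAIT shows how long the current wait has lasted):

```sql
-- Current wait and the SQL_ID responsible, for active non-idle sessions
SELECT sid, username, sql_id, event, wait_class, seconds_in_wait, state
FROM v$session
WHERE wait_class != 'Idle'
AND status = 'ACTIVE';
```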


To analyze the wait statistics, you can follow this methodology:

  • First, look at the V$SYSTEM_EVENT view and rank the top wait events by the total amount of time waited, as well as the average wait time for that event.
  • Next, find out more details about the specific wait event that's at the top of the list. You can check the V$WAITSTAT view for this. Check the type of wait this view shows; if the wait is due to an undo header or undo block, then the wait is related to undo segments.
  • Finally, use the V$SESSION view to find out the exact objects that may be the source of a problem. For example, if you have a high amount of db file scattered read-type waits, the V$SESSION view will give you the file number and block number involved in the wait events.

In V$SESSION we have a column called BLOCKING_SESSION_STATUS. If this column's value is VALID, we can presume that the corresponding session is being blocked.
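A sketch of using that column to pair blocked sessions with their blockers (when the status is VALID, the BLOCKING_SESSION column holds the SID of the blocker):

```sql
-- Sessions that are currently blocked, and who is blocking them
SELECT sid, blocking_session, event, seconds_in_wait
FROM v$session
WHERE blocking_session_status = 'VALID';
```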


The V$SESSION_WAIT_HISTORY view holds information about the last ten wait events for each active session. The other wait-related views, such as the V$SESSION and the V$SESSION_WAIT, show you only the wait information for the most recent wait. This may be a short wait, thus escaping your scrutiny.

SQL> select seq#, event, p1, p2, p3, wait_time
  2  from v$session_wait_history
  3  where sid = (select sid from v$session_wait_history
  4  where wait_time = (select max(wait_time) from v$session_wait_history));

      SEQ# EVENT                            P1         P2         P3  WAIT_TIME
———- ———————— ———- ———- ———- ———-
1 rdbms ipc message            180000          0          0     175787
2 rdbms ipc message            180000          0          0     175787
3 rdbms ipc message            180000          0          0      60782
4 rdbms ipc message            180000          0          0     175787
5 rdbms ipc message            180000          0          0     138705
6 db file sequential read           1        368          1          0
7 rdbms ipc message            180000          0          0     158646
8 db file sequential read           1        368          1          0
9 db file sequential read           1         73          1          0
10 db file sequential read           1         30          1          0

Note that a zero value under the WAIT_TIME column means that the session is waiting for a specific wait event. A nonzero value represents the time waited for the last event.


The V$SESSION_WAIT view tells you what resource a session is waiting for. The V$SESSION view also provides significant wait information for active sessions. However, neither of these views provides historical information about the waits in your instance. Once the wait is over, you can no longer view the wait information using the V$SESSION_WAIT view. The waits are so fleeting that by the time you query the views, the wait is usually over. The Active Session History (ASH) feature, by recording session information, enables you to go back in time and review the history of a performance bottleneck in your database. Although the AWR provides hourly snapshots of the instance by default, you won't be able to analyze events that occurred five or ten minutes ago based on AWR data. This is where the ASH information comes in handy.

ASH samples the V$SESSION view every second and collects the wait information for all active sessions. An active session is defined as a session that's on the CPU or waiting for a resource. You can view the ASH session statistics through the V$ACTIVE_SESSION_HISTORY view, which contains a row for each active session sampled in your instance. ASH is a rolling buffer in memory, with older information being overwritten by new session data. Every 60 minutes, the MMON background process flushes filtered ASH data to disk as part of the hourly AWR snapshots. If the ASH buffer fills up sooner, the MMNL background process performs the flushing. Once the ASH data is flushed to disk, you won't be able to see it in the V$ACTIVE_SESSION_HISTORY view; you'll have to use the DBA_HIST_ACTIVE_SESS_HISTORY view to look at the historical data.

Obtaining the Objects with the Highest Waits

SELECT a.current_obj#, o.object_name, o.object_type, a.event,
SUM(a.wait_time +
a.time_waited) total_wait_time
FROM v$active_session_history a,
dba_objects o
WHERE a.sample_time between sysdate - 30/2880 and sysdate
AND a.current_obj# = o.object_id
GROUP BY a.current_obj#, o.object_name, o.object_type, a.event
ORDER BY total_wait_time;

 OBJECT_NAME                OBJECT_TYPE         EVENT                  TOTAL_WAIT_TIME
——————–       ——————- ———————– —————-
FND_LOGINS                 TABLE               db file sequential read   47480
KOTTB$                     TABLE               db file sequential read   48077
SCHEDULER$_WINDOW          TABLE               db file sequential read   49205
ENG_CHANGE_ROUTE_STEPS_TL  TABLE               db file sequential read   52534
JDR_PATHS_N1               INDEX               db file sequential read   58888
MTL_ITEM_REVISIONS_B       TABLE               SQL*Net more data to client

select p1text, p1, p2text, p2, p3text, p3, a.event
from v$active_session_history a
WHERE a.sample_time between sysdate - 30/2880 and sysdate
AND a.current_obj# = 1938000;

P1TEXT     P1          P2TEXT     P2   P3TEXT          P3  EVENT
——–   —         ——- ——- ———— —— ————————
file#      71          block#    4389  blocks           1  db file sequential read
file#      187         block#   89977  blocks           1  db file sequential read
file#      80          block#   79301  blocks           1  db file sequential read
driver id  675562835   #bytes       1  0
file#      11          block#     831  blocks           1  db file sequential read
driver id  675562835   #bytes       1  0

So we can see a few historical wait events for a particular object in the database. We can get the segment statistics for this object, come to some conclusion, and implement a solution to reduce the wait. For example, if it is a 'db file sequential read' wait, then:

  • Increase the buffer cache size; before doing this, check the cache-miss percentages, since a larger cache alone may not help much.
  • Check the query and optimize it, so that it reads fewer blocks.
  • Increase the freelists for that segment.
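For the first point, the buffer cache hit ratio can be estimated from V$SYSSTAT before resizing anything; a sketch, using statistic names as they appear in 10g:

```sql
-- Hit ratio = 1 - physical reads / (db block gets + consistent gets)
SELECT round(100 * (1 - phy.value / (db.value + con.value)), 2) AS hit_ratio_pct
FROM v$sysstat phy, v$sysstat db, v$sysstat con
WHERE = 'physical reads'
AND = 'db block gets'
AND = 'consistent gets';
```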

Some Important Wait Events

The following query lists the most important wait events in your database in the last 15 minutes:

SELECT a.event,
SUM(a.wait_time +
a.time_waited) total_wait_time
FROM v$active_session_history a
WHERE a.sample_time between
sysdate - 30/2880 and sysdate
GROUP BY a.event
ORDER BY total_wait_time DESC;

EVENT                                                            TOTAL_WAIT_TIME
—————————————————————- —————
enq: TX – row lock contention                                          877360289
TCP Socket (KGAS)                                                       13787430
SQL*Net break/reset to client                                            6675324
db file sequential read                                                  2318850
control file parallel write                                              1790011
log file parallel write                                                  1411201
db file scattered read                                                     62132
os thread startup                                                          39640
null event                                                                     0

Users with the Most Waits

The following query lists the users with the highest wait times within the last 15 minutes:

SELECT s.sid, s.username,
SUM(a.wait_time +
a.time_waited) total_wait_time
FROM v$active_session_history a,
v$session s
WHERE a.sample_time between sysdate - 30/2880 and sysdate
AND a.session_id=s.sid
GROUP BY s.sid, s.username
ORDER BY total_wait_time DESC;

       SID USERNAME                       TOTAL_WAIT_TIME
———- —————————— —————
773 APPS                                 877360543
670 APPS                                 374767126
797                                       98408003
713 APPS                                  97655307
638 APPS                                  53719218
726 APPS                                  39072236
673 APPS                                  29353667
762 APPS                                  29307261
746 APPS                                  29307183
653 APPS                                  14677170
675 APPS                                  14676426

Identifying SQL with the Highest Waits

Using the following query, you can identify the SQL that's waiting the most in your instance within the last 15 minutes:

SELECT a.user_id,d.username,s.sql_text,
SUM(a.wait_time + a.time_waited) total_wait_time
FROM v$active_session_history a,
v$sqlarea s,
dba_users d
WHERE a.sample_time between sysdate - 30/2880 and sysdate
AND a.sql_id = s.sql_id
AND a.user_id = d.user_id
GROUP BY a.user_id,s.sql_text, d.username
ORDER BY total_wait_time DESC;

Monitoring IO statistics – Oracle Database 10g

There are several ways to check I/O statistics and the load on devices. At the OS level we have utilities like iostat and sar. We will first check the OS utilities and then move on to checking I/O stats using database queries.

Example of using sar -d for IO stats:

(appmgr01) appmgr – -bash $ sar -d 10 5
Linux 2.4.21- (ap6157rt)    06/18/2007

02:19:21 AM       DEV       tps  rd_sec/s  wr_sec/s
02:19:31 AM    dev8-0      1.30      0.00     19.20
02:19:31 AM    dev8-1      0.00      0.00      0.00
02:19:31 AM    dev8-2      1.30      0.00     19.20

02:19:31 AM       DEV       tps  rd_sec/s  wr_sec/s
02:19:41 AM    dev8-0      0.50      0.00     12.81
02:19:41 AM    dev8-1      0.00      0.00      0.00
02:19:41 AM    dev8-2      0.50      0.00     12.81

Here we can see that dev8-0, dev8-1 and dev8-2 are the devices (disks).

  • tps: the transfer rate, in requests per second.
  • rd_sec/s: the number of sectors read from the device per second (a sector is 512 bytes).
  • wr_sec/s: the number of sectors written to the device per second (a sector is 512 bytes).

Another command is iostat

-bash-2.05b$ iostat
Linux 2.4.21- (ap6188rt)    06/18/2007

avg-cpu:  %user   %nice    %sys %iowait   %idle
7.54    0.00    6.37    0.14   85.94

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               1.21         6.16        15.18   19695596   48524898
sda1              0.94         6.15        10.84   19655648   34666374
sda2              0.26         0.01         4.33      39404   13858522

Here you can check the %user column, which shows the percentage of CPU time spent executing at the user (application) level.

Other significant fields are already explained in Performance Tuning Oracle Database 10g.

You can check the IO stats using some SQLs on dynamic views as given below. This query will provide you with the read/write statistics for each file of database.

SQL> select d.name,
  2  f.phyrds reads,
  3  f.phywrts wrts,
  4  (f.readtim / decode(f.phyrds,0,-1,f.phyrds)) readtime,
  5  (f.writetim / decode(f.phywrts,0,-1,phywrts)) writetime
  6  from v$datafile d,
  7  v$filestat f
  8  where d.file# = f.file#;

NAME                                                    READS       WRTS   READTIME   WRITETIME
————————————————– ———- ———- ———-   ———-
/slot/ems1149/oracle/db/apps_st/data/tx_idx20.dbf          70        129   .9         .031007752
/slot/ems1149/oracle/db/apps_st/data/tx_idx21.dbf          46         72   .6521739   .041666667
/slot/ems1149/oracle/db/apps_st/data/tx_idx22.dbf         151        428   .7814569   .044392523
/slot/ems1149/oracle/db/apps_st/data/tx_idx23.dbf         107        300   1.177570        .08
/slot/ems1149/oracle/db/apps_st/data/tx_idx24.dbf         177        935   .4293785   .098395722
/slot/ems1149/oracle/db/apps_st/data/tx_idx3.dbf          103       1624   1.427184   .580049261

The v$filestat view gives the read/write statistics for datafiles. Here phyrds is the number of physical reads done from the respective datafile, and phywrts is the number of physical writes done by the DBWR process when writing from the buffer cache to disk. The readtime we calculate here is the average read time per request; similarly for writetime.

Excessive disk read and write times show that you have I/O contention. Here are some points you can note in order to reduce I/O contention:

  • Increase the number of disks in the storage system.
  • Separate the database and the redo log files.
  • For a large table, use partitions to reduce I/O.
  • Stripe the data either manually or by using a RAID disk-striping system.
  • Invest in cutting-edge technology, such as file caching, to avoid I/O bottlenecks.
  • Consider using the new Automatic Storage Management system.

Monitoring CPU Usage – Oracle Database 10g

System Activity Report (SAR)

-bash-2.05b$ sar -u 10 5
Linux 2.4.21- (ap6188rt)    06/12/2007

10:48:24 PM       CPU     %user     %nice   %system   %iowait     %idle
10:48:34 PM       all     22.07      0.00     14.36      0.03     63.54
10:48:44 PM       all     16.70      0.00     13.93      0.17     69.20
10:48:54 PM       all      8.80      0.00      8.15      0.25     82.80
10:49:04 PM       all      2.52      0.00      3.55      0.00     93.92
10:49:14 PM       all      2.05      0.00      4.00      0.00     93.95
Average:          all     10.43      0.00      8.80      0.09     80.69

We can check which processes are consuming high CPU. With the command ps -eaf, the fourth column (C) shows the CPU utilization of each process. Example:

oracle03 12979     1  0 21:39 ?        00:00:00 ora_q003_tkr12m1
oracle03 22815     1  0 21:57 ?        00:00:00 ora_q001_tkr12m1
oracle03  2720     1  0 22:36 ?        00:00:00 oracletkr12m1 (LOCAL=NO)
oracle03 30548     1  0 22:42 ?        00:00:00 oracletkr12m1 (LOCAL=NO)
oracle03 25572     1  0 22:48 ?        00:00:00 oracletkr12m1 (LOCAL=NO)
oracle03 14232     1 35 22:53 ?        00:00:03 oracletkr12m1 (LOCAL=NO)

You can also check which users are consuming high CPU.

SELECT n.username, s.sid, s.value
FROM v$sesstat s, v$statname t, v$session n
WHERE s.statistic# = t.statistic#
AND n.sid = s.sid
AND t.name = 'CPU used by this session'
ORDER BY s.value desc;

USERNAME                              SID      VALUE
—————————— ———- ———-
1093     194184
1092      77446
1088      67564
1089      43054
1090      19192
1072      15009
APPS                                  832       1777
APPS                                  998       1190
APPS                                  822        577
APPS                                  900        508
APPS                                  823        477

You can also check session level CPU using

SQL> select * from v$sesstat
2  where statistic# = 12
3  order by value desc;

       SID STATISTIC#      VALUE
———- ———- ———-
1093         12     194184
1092         12      77475
1088         12      67579
1089         12      43062
1090         12      19194
1072         12      15010
832         12       1785
998         12       1197
822         12        577

You can also decompose the total CPU usage. Basically, total CPU time is:

total CPU time = parsing CPU usage + recursive CPU usage + other CPU usage

SQL> SELECT NAME, VALUE FROM V$SYSSTAT
  2  WHERE NAME IN ('CPU used by this session',
  3  'recursive cpu usage',
  4  'parse time cpu');

NAME                                                                  VALUE
—————————————————————- ———-
recursive cpu usage                                                 6276669
CPU used by this session                                            8806491
parse time cpu                                                       482645

Ideally, (parsing CPU usage + recursive CPU usage) should be significantly less than CPU used by this session. Here we can see that (parsing CPU usage + recursive CPU usage) is almost equal to CPU used by this session, so we need to do some tuning to reduce the recursive CPU usage.

To determine the percentage of CPU usage for parsing:

select (a.value / b.value)*100 "% CPU for parsing"
from v$sysstat a, v$sysstat b
where = 'parse time cpu'
and = 'CPU used by this session';

% CPU for parsing

Reducing Parse Time CPU Usage

If parse time CPU is the major part of total CPU usage, you need to reduce this by performing the following steps:

  1. Use bind variables and remove hard-coded literal values from code, as explained in the
    “Optimizing the Library Cache” section earlier in this chapter.
  2. Make sure you aren’t allocating too much memory for the shared pool. Remember that even if you have an exact copy of a new SQL statement in your library cache, Oracle has to find it by scanning all the statements in the cache. If you have a zillion relatively useless statements sitting in the cache, all they’re doing is slowing down the instance by increasing the parse time.
  3. Make sure you don't have latch contention on the library cache, which could result in increased parse time CPU usage. We will look at latch contention and how to reduce it later.

To determine the percentage of recursive CPU usage:

select (a.value / b.value)*100 "% recursive cpu usage"
from v$sysstat a, v$sysstat b
where = 'recursive cpu usage'
and = 'CPU used by this session';

Recursive CPU usage

This is really a bad number for recursive CPU usage. Sometimes, to execute a SQL statement issued by a user, the Oracle Server must issue additional statements. Such statements are called recursive calls or recursive SQL statements. For example, if you insert a row into a table that does not have enough space to hold that row, the Oracle Server makes recursive calls to allocate the space dynamically if dictionary managed tablespaces are being used.

Recursive calls are also generated due to the unavailability of dictionary info in the dictionary cache, the firing of database triggers, the execution of DDL, the execution of SQL within PL/SQL blocks, functions, or stored procedures, and the enforcement of referential integrity constraints.

If the recursive CPU usage percentage is a large proportion of total CPU usage, you may want to make sure the shared pool memory allocation is adequate. However, a PL/SQL-based application will always have a significant amount of recursive CPU usage.

To reduce the recursive CPU usage, make sure that the shared pool is sized correctly. You can also check parameters like:

SESSION_CACHED_CURSORS - the value can be a numeric value such as 500.
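As a sketch, the parameter can be inspected and raised like this (the value 500 is just the illustrative figure from above; in 10g a system-wide change made in the spfile takes effect after an instance restart):

```sql
-- Check the current setting
SHOW PARAMETER session_cached_cursors

-- Raise it in the spfile
ALTER SYSTEM SET session_cached_cursors = 500 SCOPE = SPFILE;
```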

Tuning PGA Memory – Oracle database 10g

Correct size of PGA

You can get the correct size of the PGA using the V$PGA_TARGET_ADVICE dynamic performance view.

SQL> SELECT ROUND(pga_target_for_estimate/1024/1024) target_mb,
  2  estd_pga_cache_hit_percentage cache_hit_perc,
  3  estd_overalloc_count
  4  FROM v$pga_target_advice;

 TARGET_MB CACHE_HIT_PERC ESTD_OVERALLOC_COUNT
———- ————– ——————–
512 98 909
1024 100 0
2048 100 0
3072 100 0
4096 100 0
4915 100 0
5734 100 0
6554 100 0
7373 100 0
8192 100 0
12288 100 0

———- ————– ——————–
16384 100 0
24576 100 0
32768 100 0

14 rows selected.

Checking PGA for each sessions

You can check session-level PGA usage with the V$SESSTAT and V$SESSION views, and also see which username is using that memory.

SQL> SELECT s.value, s.sid, a.username
  2  FROM v$sesstat s, v$statname n, v$session a
  3  WHERE n.statistic# = s.statistic# AND
  4  name = 'session pga memory'
  5  AND s.sid = a.sid
  6  ORDER BY s.value;

———- ———- ——————————
487276 1070 APPS
552812 1068 SYS
552812 1088
618348 1009 APPS_READ_ONLY
683884 1091
749420 846 MOBILEADMIN
749420 1090
749420 1051 APPLSYSPUB
749420 1000 APPLSYSPUB
749420 929 APPLSYSPUB
790412 1093

To check the total PGA in use and the hit ratio for the PGA, query V$PGASTAT:

SQL> select name, value, unit from v$pgastat;
————————————————– ———- ————
aggregate PGA target parameter 4294967296 bytes
aggregate PGA auto target 3674290176 bytes
global memory bound 201252864 bytes
total PGA inuse 218925056 bytes
total PGA allocated 433349632 bytes
maximum PGA allocated 1526665216 bytes
total freeable PGA memory 86835200 bytes
process count 113
max processes count 250
PGA memory freed back to OS 8.3910E+10 bytes
total PGA used for auto workareas 6505472 bytes

————————————————– ———- ————
maximum PGA used for auto workareas 70296576 bytes
total PGA used for manual workareas 0 bytes
maximum PGA used for manual workareas 4292608 bytes
over allocation count 0
bytes processed 2.1553E+11 bytes
extra bytes read/written 10403840 bytes
cache hit percentage 99.99 percent
recompute count (total) 205474

19 rows selected.

The ideal way to perform a sort is to do the entire job in memory. A sort job that Oracle performs entirely in memory is said to be an optimal sort. If you set PGA_AGGREGATE_TARGET too low, some of the sort data is written out directly to disk (the temporary tablespace) because the sorts are too large to fit in memory. If only part of a sort job spills over to disk, it's called a one-pass sort. If the instance performs most of the sort on disk instead of in memory, the response time will be high; this is called a multipass sort.
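The three categories can be illustrated with a toy classifier (a simplification for intuition only, not Oracle's actual spill logic; the one-pass threshold used here is an assumption of this sketch):

```python
# Toy illustration of the three sort categories: compare the sort size
# to the work-area memory available. The one-pass bound (sort fits in
# the square of the work area, i.e. one merge pass over spilled runs)
# is a simplification, not Oracle's exact rule.
def classify_sort(sort_size_mb, work_area_mb):
    if sort_size_mb <= work_area_mb:
        return "optimal"      # entire sort done in memory
    elif sort_size_mb <= work_area_mb ** 2:
        return "one-pass"     # one extra pass over spilled runs
    else:
        return "multipass"    # repeated merge passes on disk

print(classify_sort(10, 64))      # optimal
print(classify_sort(100, 64))     # one-pass
print(classify_sort(100000, 64))  # multipass
```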

Another method of checking the efficiency of PGA memory is to check V$SQL_WORKAREA_HISTOGRAM.

V$SQL_WORKAREA_HISTOGRAM displays the cumulative work area execution statistics (cumulated since instance startup) for different work area groups. The work areas are split into 33 groups based on their optimal memory requirements with the requirements increasing in powers of two. That is, work areas whose optimal requirement varies from 0 KB to 1 KB, 1 KB to 2 KB, 2 KB to 4 KB, … and 2 TB to 4 TB.

For each work area group, the V$SQL_WORKAREA_HISTOGRAM view shows how many work areas in that group were able to run in optimal mode, how many were able to run in one-pass mode, and finally how many ran in multi-pass mode. The DBA can take a snapshot at the beginning and the end of a desired time interval to derive the same statistics for that interval.
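The power-of-two bucketing can be sketched as follows (illustrative Python; `workarea_bucket` is a made-up helper for this example, not an Oracle function):

```python
import math

# Map a work area's optimal memory requirement (in KB) to the bounds of
# the histogram group it falls into: 0-1 KB, 1-2 KB, 2-4 KB, ... with
# bounds growing in powers of two, as V$SQL_WORKAREA_HISTOGRAM does.
def workarea_bucket(optimal_kb):
    if optimal_kb < 1:
        return (0, 1)
    low = 2 ** int(math.floor(math.log2(optimal_kb)))
    return (low, low * 2)

print(workarea_bucket(3))    # (2, 4)
print(workarea_bucket(100))  # (64, 128)
```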

SQL> SELECT low_optimal_size/1024 "Low (K)",
  2         (high_optimal_size + 1)/1024 "High (K)",
  3         optimal_executions "Optimal",
  4         onepass_executions "1-Pass",
  5         multipasses_executions ">1 Pass"
  6  FROM   v$sql_workarea_histogram
  7  WHERE  total_executions <> 0;

   Low (K)   High (K)    Optimal     1-Pass    >1 Pass
---------- ---------- ---------- ---------- ----------
2 4 6254487 0 0
64 128 110568 0 0
128 256 20041 0 0
256 512 86399 0 0
512 1024 145082 0 0
1024 2048 31207 0 0
2048 4096 104 0 0
4096 8192 79 2 0
8192 16384 116 0 0
16384 32768 30 0 0
32768 65536 4 0 0

65536 131072 2 0 0

12 rows selected.

You can also check what proportion of work areas have run with optimal, one-pass, and multipass PGA memory sizes since the instance started.

SQL> SELECT name, cnt,
  2         DECODE(total, 0, 0, ROUND(cnt*100/total)) percentage
  3  FROM   (SELECT name, value cnt, (SUM(value) OVER ()) total
  4          FROM   v$sysstat
  5          WHERE  name LIKE 'workarea exec%');

NAME                                                      CNT PERCENTAGE
-------------------------------------------------- ---------- ----------
workarea executions – optimal 6650608 100
workarea executions – onepass 2 0
workarea executions – multipass 0 0

Since almost all the sort and temporary operations fall into the optimal category, we can conclude that our PGA is sized correctly.

Tuning Buffer cache – Oracle Database 10g

To tune the buffer cache, we need to understand the following points closely.

Physical reads: These are the data blocks that Oracle reads from disk. Reading data from disk is much more expensive than reading data that’s already in Oracle’s memory. When you issue a query, Oracle always first tries to retrieve the data from memory—the database buffer cache—and not disk.

DB block gets: This is a read of the buffer cache, to retrieve a block in current mode. This most often happens during data modification when Oracle has to be sure that it’s updating the most recent version of the block. So, when Oracle finds the required data in the database buffer cache, it checks whether the data in the blocks is up to date. If a user changes the data in the buffer cache but hasn’t committed those changes yet, new requests for the same data can’t show these interim changes. If the data in the buffer blocks is up to date, each such data block retrieved is counted as a DB block get.
Consistent gets: This is a read of the buffer cache, to retrieve a block in consistent mode. This may include a read of undo segments to maintain the read consistency principle. If Oracle finds that another session has updated the data in that block since the read began, then it will apply the new information from the undo segments.

Logical reads: Every time Oracle is able to satisfy a request for data by reading it from the database buffer cache, you get a logical read. Thus logical reads include both DB block gets and consistent gets.

Buffer gets: This term refers to the number of database cache buffers retrieved. This value is the same as the logical reads described earlier.

So, basically, the buffer cache hit ratio measures how often requested data is found in the memory cache rather than read from disk.

In short, from the above definitions:

Buffer cache hit ratio = 1 – (physical read/logical reads)

Here logical reads means reading from memory.

SQL> SELECT name, value FROM v$sysstat
  2  WHERE name IN ('physical reads cache',
  3                 'consistent gets from cache',
  4                 'db block gets from cache');

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
db block gets from cache                                          164998308
consistent gets from cache                                       2136448944
physical reads cache                                                2787422

Here, physical reads correspond to the 'physical reads cache' statistic in V$SYSSTAT, and logical reads are the sum of 'consistent gets from cache' and 'db block gets from cache'.

So our buffer cache hit ratio will be 1 - (2787422 / (2136448944 + 164998308)) = 0.9988.
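The arithmetic above can be captured in a small helper (illustrative Python using the V$SYSSTAT values shown; the function name is invented for this sketch):

```python
# Buffer cache hit ratio = 1 - physical reads / logical reads, where
# logical reads = consistent gets from cache + db block gets from cache.
def buffer_cache_hit_ratio(physical_reads_cache, consistent_gets, db_block_gets):
    logical_reads = consistent_gets + db_block_gets
    return 1 - (physical_reads_cache / logical_reads)

ratio = buffer_cache_hit_ratio(2787422, 2136448944, 164998308)
print(round(ratio, 4))  # 0.9988
```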

Another way to calculate the buffer cache hit ratio is with a single query on V$SYSSTAT:

SQL> SELECT 1 - ((p.value - l.value - d.value) / s.value)
  2         AS "Buffer Cache Hit Ratio"
  3  FROM   v$sysstat s, v$sysstat l, v$sysstat d, v$sysstat p
  4  WHERE = 'session logical reads'
  5  AND = 'physical reads direct'
  6  AND = 'physical reads direct (lob)'
  7  AND = 'physical reads';

The above query gives almost the same result, because:

physical reads cache = physical reads - (physical reads direct (lob) + physical reads direct)
session logical reads = consistent gets from cache + db block gets from cache

LRU Algorithm or Touch Count Algorithm?

Since version 8.1, Oracle has used a concept called touch count to measure how many times an object is accessed in the buffer cache. This algorithm of using touch counts for managing the buffer cache is somewhat different from the traditional modified LRU algorithm that Oracle used to employ for managing the cache. Each time a buffer is accessed, the touch count is incremented.

A low touch count means that the block isn't being reused frequently and is therefore wasting database buffer cache space. If you have large objects that have a low touch count but occupy a significant proportion of the buffer cache, they are ideal candidates for the recycle pool.

TCH (touch count) is a column in the X$BH fixed table, which keeps track of the touch count of each buffer.
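A toy model of touch-count-based eviction (a deliberate simplification for intuition; Oracle's real algorithm also involves the LRU list, hot/cold regions, and batched count updates):

```python
# Simplified touch-count cache: each access increments a counter, and
# when space is needed the buffer with the lowest count is evicted.
class TouchCountCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.touch = {}  # block id -> touch count

    def access(self, block):
        if block not in self.touch and len(self.touch) >= self.capacity:
            # evict the coldest block (lowest touch count)
            coldest = min(self.touch, key=self.touch.get)
            del self.touch[coldest]
        self.touch[block] = self.touch.get(block, 0) + 1

cache = TouchCountCache(capacity=2)
for blk in ["a", "a", "a", "b", "c"]:  # "b" is touched only once,
    cache.access(blk)                  # so it is evicted for "c"
print(sorted(cache.touch))  # ['a', 'c']
```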

The following query gives the objects that are consuming a considerable amount of memory and are candidates for getting flushed out of the buffer cache.

SQL> SELECT obj object,
  2         COUNT(1) buffers,
  3         (COUNT(1)/totsize) * 100 percent_cache
  4  FROM   x$bh,
  5         (SELECT value totsize
  6          FROM   v$parameter
  7          WHERE  name = 'db_block_buffers')
  8  WHERE  tch = 1
  9  OR     (tch = 0 AND lru_flag < 10)
 10  GROUP BY obj, totsize
 11  HAVING (COUNT(1)/totsize) * 100 > 5;

Remember that this explanation is just for understanding: we don't have to do anything in the buffer cache to get these objects flushed out. The database engine handles their removal automatically.

From this query you get the object_id, and from that you can find the object_name (using DBA_OBJECTS).

Using Multiple Pools for the Buffer Cache

As you already know, we can have multiple pools for the buffer cache, so I won't describe that in detail here, or we will lose the focus of the current discussion.
We basically have three types of buffer pools:

  1. KEEP
  2. RECYCLE
  3. DEFAULT

The default pool will always be there; depending on the situation, we can also create keep and recycle pools. If some objects are accessed frequently, we will want to keep them in the database cache, and for such objects we can use the keep buffer pool. Objects that are big and not accessed frequently can be put in the recycle pool. If we don't specify a buffer pool, an object always goes to the default pool.

V$BUFFER_POOL_STATISTICS will give statistics for all pools.

Determining Candidates for the Recycle Buffer Pool

Candidates with a low TCH value, as returned by the query above, are the best ones to put in the recycle pool.

Determining Candidates for the Keep Buffer Cache

SQL> SELECT obj object,
  2         COUNT(1) buffers,
  3         AVG(tch) average_touch_count
  4  FROM   x$bh
  5  WHERE  lru_flag = 8
  6  GROUP BY obj
  7  HAVING AVG(tch) > 5
  8  AND    COUNT(1) > 25;

The above query gives the candidates that have an average TCH value of more than 5 and occupy more than 25 buffers in the buffer cache. Such objects can be placed in the KEEP buffer cache. You can place an object in a particular pool using the ALTER TABLE command.

ALTER TABLE test1 STORAGE (buffer_pool keep);

Sizing buffer cache

To size the buffer cache, you can check the V$DB_CACHE_ADVICE view. It shows the estimated number of physical reads for various buffer cache sizes. Based on this result, you should be able to decide the correct size for the database buffer cache.

SQL> select name, size_for_estimate, size_factor,
  2         estd_physical_reads, estd_pct_of_db_time_for_reads
  3  from   v$db_cache_advice;

NAME      SIZE_FOR_ESTIMATE SIZE_FACTOR ESTD_PHYSICAL_READS ESTD_PCT_OF_DB_TIME_FOR_READS
--------- ----------------- ----------- ------------------- -----------------------------
DEFAULT       112             .0946      6871847             2.4
DEFAULT       224             .1892      5435019             1.8
DEFAULT       336             .2838      4600629             1.5
DEFAULT       448             .3784      4125422             1.3
DEFAULT       560              .473      3831101             1.1
DEFAULT       672             .5676      3598589             1
DEFAULT       784             .6622      3381913             .9
DEFAULT       896             .7568      3154726             .8
DEFAULT      1008             .8514      2957398             .8
DEFAULT      1120             .9459      2841502             .7
DEFAULT      1184                 1      2791921             .7
DEFAULT      1232            1.0405      2757728             .7
DEFAULT      1344            1.1351      2689955             .6
DEFAULT      1456            1.2297      2653143             .6
DEFAULT      1568            1.3243      2631218             .6
DEFAULT      1680            1.4189      2608447             .6
DEFAULT      1792            1.5135      2588899             .6
DEFAULT      1904            1.6081      2573463             .6
DEFAULT      2016            1.7027      2561587             .6
DEFAULT      2128            1.7973      2549937             .6
DEFAULT      2240            1.8919      2535972             .6

21 rows selected.     

Based on ESTD_PCT_OF_DB_TIME_FOR_READS and ESTD_PHYSICAL_READS, you can decide the correct size for db buffer cache.

Tuning Shared pool – Oracle Database 10g

Shared Pool

The most important memory resource to consider on a production system is the shared pool. The shared pool is a part of the SGA that holds almost all the elements necessary for the execution of SQL statements and PL/SQL programs.
In addition to caching program code, the shared pool caches the data dictionary information that Oracle needs to refer to often during the course of program execution.
Proper shared pool configuration leads to dramatic improvements in performance. An improperly tuned shared pool leads to problems such as the following:

  • Fragmentation of the pool
  • Increased latch contention with the resulting demand for more CPU resources
  • Greater I/O because executable forms of SQL aren’t present in the shared pool
  • Higher CPU usage because of unnecessary parsing of SQL code

The general increase in shared pool waits and other waits observed during a severe slowdown of the production database is the result of SQL code that fails to use bind variables.

As the number of users increases, so does the demand on shared pool memory and latches, which are internal locks for memory areas. If there are excessive latches the result might be a higher wait time and a slower response time. Sometimes the entire database seems to hang. The shared pool consists of two major areas: the library cache and the data dictionary cache. You can’t allocate or decrease memory specifically for one of these components. If you increase the total shared pool memory size, both components will increase in some ratio that Oracle determines. Similarly, when you decrease the total shared pool memory, both components will decrease in size.

Let’s look at these two important components of the shared pool in detail.

The Library Cache

The library cache holds the parsed and executable versions of SQL and PL/SQL code. As you may recall from Chapter 21, all SQL statements undergo the following steps during their processing:

Parsing, which includes syntactic and semantic verification of SQL statements and checking of object privileges to perform the actions.

Optimization, where the Oracle optimizer evaluates how to process the statement with the
least cost, after it evaluates several alternatives.

Execution, where Oracle uses the optimized physical execution plan to perform the action
stated in the SQL statement.

Fetching, which only applies to SELECT statements where Oracle has to return rows to you.

This step isn't necessary in any nonquery-type statements.

Parsing is a resource-intensive operation, and if your application needs to execute the same SQL statement repeatedly, having a parsed version in memory will reduce contention for latches, CPU, I/O, and memory usage. The first time Oracle parses a statement, it creates a parse tree. The optimization step is necessary only for the first execution of a SQL statement. Once the statement is optimized, the best access path is encapsulated in the access plan. Both the parse tree and the access plan are stored in the library cache before the statement is executed for the first time. Future invocations of the same statement need to go through only the last stage, execution, which avoids the overhead of parsing and optimizing as long as Oracle can find the parse tree and access plan in the library cache. Of course, if the statement is a SQL query, the last step will be the fetch.

The library cache, being limited in size, discards old SQL statements when there's no more room for new SQL statements. The only way you can use a parsed statement repeatedly for multiple executions is if a SQL statement is identical to the parsed statement. Two SQL statements are identical if they use exactly the same code, including case and spaces. The reason for this is that when Oracle compares a new statement to existing statements in the library cache, it uses simple string comparisons. In addition, any bind variables used must be similar in data type and size. Here are a couple of examples that show you how picky Oracle is when it comes to considering whether two SQL statements are identical.

In the following example, the statements aren’t considered identical because of an extra space in the second statement:

SELECT * FROM employees;

SELECT * FROM  employees;

In the next example, the statements aren’t considered identical because of the different case used for the table Employees in the second statement. The two versions of employees are termed literals because they’re literally different from each other.

SELECT * FROM employees;

SELECT * FROM Employees;

Let’s say users in the database issue the following three SQL statements:

SELECT * FROM persons WHERE person_id = 10

SELECT * FROM persons WHERE person_id = 999

SELECT * FROM persons WHERE person_id = 6666

Oracle uses a different execution plan for the preceding three statements, even though they seem to be identical in every respect, except for the value of person_id. Each of these statements has to be parsed and executed separately, as if they were entirely different. Because all three are essentially the same, this is inherently inefficient. As you can imagine, if hundreds of thousands of such statements are issued during the day, you’re wasting database resources and the query performance will be slow. Bind variables allow you to reuse SQL statements by making them “identical,” and thus eligible to share the same execution plan.

In our example, you can use a bind variable, which I'll call :var, to help Oracle view the three statements as identical, thus requiring a single execution instead of multiple ones. The person_id values 10, 999, and 6666 are "bound" to the bind variable :var. Your replacement SQL statement using a bind variable, then, would be this:

SELECT * FROM persons WHERE person_id = :var
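A toy model of why the literal versions each cost a hard parse while the bind-variable version is parsed once (the "cache" here is just a Python dict keyed by the exact SQL text, mimicking the string comparison described earlier; this is an illustration, not Oracle's implementation):

```python
# Library cache modeled as a dict keyed by exact SQL text.
plan_cache = {}

def parse(sql_text):
    """Return 'soft parse' if the exact text was seen before,
    otherwise build and cache a stand-in plan ('hard parse')."""
    if sql_text in plan_cache:
        return "soft parse"
    plan_cache[sql_text] = object()  # stand-in for a real plan
    return "hard parse"

for pid in (10, 999, 6666):
    parse(f"SELECT * FROM persons WHERE person_id = {pid}")
print(len(plan_cache))  # 3 -- one hard parse per literal value

plan_cache.clear()
for pid in (10, 999, 6666):
    parse("SELECT * FROM persons WHERE person_id = :var")
print(len(plan_cache))  # 1 -- the text is identical every time
```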

Using bind variables can dramatically increase query performance. You can even "force" Oracle to use bind variables, even if an application doesn't use them.

Checking hard parsing and soft parsing

When a query is fired from the SQL prompt, it undergoes the steps explained above. The role of the library cache is to hold the parse tree and execution plan. If we run a query and this information is found directly in the library cache, Oracle doesn't have to do the time-consuming work of parsing the statement, creating candidate execution plans, and selecting the best one; it saves time by picking up the execution plan directly from the cache. This is called a soft parse.

But if Oracle cannot find the execution plan in the library cache and has to carry out all of these steps, the operation is much more expensive. This is called a hard parse.

For a query we can check if it was a hard parse or soft parse. For that we need to turn on the session level tracing.

Usually when we fire a query for the first time it's a hard parse, and with subsequent executions Oracle will find the information in the cache, so it will be a soft parse (provided the query we fire is exactly the same).

Lets try this.

Step 1)

SQL> alter system flush shared_pool;

System altered.


SQL> alter session set sql_trace = true;

Session altered.

SQL> select * from emp where empno = 2;

If we check the trace file, we see the following information:

PARSING IN CURSOR #5 len=43 dep=1 uid=0 oct=3 lid=0 tim=2461739217275 hv=1682066536 ad='bb4c0df8'
select user#,type# from user$ where name=:1
PARSE #5:c=0,e=2245,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,tim=2461739217253
EXEC #5:c=10000,e=5622,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,tim=2461739223682

Step 2) Run the same statement again by enabling the tracing

We see the following in the trace file:

PARSING IN CURSOR #1 len=33 dep=0 uid=5 oct=3 lid=5 tim=2461984275940 hv=1387342950 ad='bc88b950'
select * from emp where empno = 2
PARSE #1:c=0,e=631,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=2461984275914
EXEC #1:c=0,e=487,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=2461984277279

Since we used exactly the same statement (not even a change in a space), Oracle reuses the parsed version. Hence mis=0 indicates there wasn't a hard parse but merely a soft parse, which is a lot cheaper in terms of resource usage.

Following is the TKPROF output.

select * from emp where empno = 2
call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        2      0.04       0.03          0          2          0          0
Execute      2      0.00       0.00          0          0          0          0
Fetch        4      0.00       0.00          0          6          0          2
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        8      0.04       0.03          0          8          0          2

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5
Rows Row Source Operation
——- —————————————————
1 TABLE ACCESS FULL EMP (cr=3 pr=0 pw=0 time=217 us)

As you can see from the above output there is one library cache miss and that was
during first executing of statement.

There is a measure to check parsing efficiency: the parse-to-execute ratio, which compares the number of parses with the number of executions made. In a real database, some SQL statements will be fully reentrant (execute to parse = 100%), while others must be re-parsed for every execution (execute to parse = 1%).
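The execute-to-parse percentage reported in Statspack/AWR instance efficiency sections is computed as 100 * (1 - parses/executions). A quick check with hypothetical counts chosen to reproduce the 70.27 figure that appears in the report below:

```python
# Execute-to-parse %: 100 means every execution reuses a parsed cursor,
# 0 means every execution required its own parse. The counts here are
# hypothetical, picked only to land on 70.27 for illustration.
def execute_to_parse_pct(parse_count, execute_count):
    return 100 * (1 - parse_count / execute_count)

print(round(execute_to_parse_pct(2973, 10000), 2))  # 70.27
```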

You can use a Statspack or AWR report to get the instance efficiency percentages, as shown below.

Let's first check the ADDM report.

SQL> @?/rdbms/admin/addmrpt.sql

Current Instance

DB Id DB Name Inst Num Instance
———– ———— ——– ————
1053231558 TKR12M 1 tkr12m1

Instances in this Workload Repository schema

DB Id Inst Num DB Name Instance Host
———— ——– ———— ———— ————
3640598056 1 LA4001 LA4001 adsdemo
4208397564 1 VISASUPG VISASUPG ap6004bld
1053231558 1 TKR12M tkr12m ap6313rt
3143212236 1 AZ2TP202 AZ2TP202 ap6002rems
1967308183 1 AR1202D2 ar1202d2 ap6175rt

Using 1053231558 for database Id
Using 1 for instance number

Specify the number of days of snapshots to choose from
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed. Pressing Enter without
specifying a number lists all completed snapshots.

Listing the last 3 days of Completed Snapshots

Instance DB Name Snap Id Snap Started Level
———— ———— ——— —————— —–
tkr12m1 TKR12M 334 07 Jun 2007 00:00 1
335 07 Jun 2007 01:00 1
336 07 Jun 2007 02:00 1

Specify the Begin and End Snapshot Ids
Enter value for begin_snap: 334
Begin Snapshot Id specified: 334

Enter value for end_snap: 357
End Snapshot Id specified: 357

Specify the Report Name
The default report file name is addmrpt_1_334_357.txt. To use this name,
press Enter to continue, otherwise enter an alternative.

Enter value for report_name: ./addmrpt_1_334_357.txt

Running the ADDM analysis on the specified pair of snapshots …

For creating the statspack report you need to first install statspack in Oracle database 10g. Following are the steps we have to do for getting a statspack report.

1) Install Statspack
For installing statspack you have to create a tablespace perfstat and after that run the script @?/rdbms/admin/spcreate

SQL> create tablespace perfstat datafile '/dy/oracle/product/db10g/dbf/perfstat01.dbf' size 1000M;

Tablespace created.

Once the tablespace is created run the script @?/rdbms/admin/spcreate and provide perfstat user password, perfstat tablespace name and temp tablespace name.

2) Take snapshots

use the command “execute statspack.snap” to take snapshot. Also after some time take another snapshot.

3) Run reports

use @?/rdbms/admin/spreport to generate statspack report.

Instance Efficiency Percentages
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 100.00 In-memory Sort %: 100.00
Library Hit %: 84.44 Soft Parse %: 64.14
Execute to Parse %: 70.27 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 99.08 % Non-Parse CPU: 67.62

From here we get Execute to Parse %: 70.27, which means that only about 70% of executions reuse an already parsed cursor, so a significant amount of parsing is still going on in the database.

The soft parse percentage (64.14) is also low, which points to a fair amount of hard parsing.

You can also find out the sessions with high number of hard parses.

SQL> SELECT s.sid, s.value "Hard Parses",
  2         t.value "Executions Count"
  3  FROM   v$sesstat s, v$sesstat t
  4  WHERE  s.sid = t.sid
  5  AND    s.statistic# = (SELECT statistic#
  6                         FROM v$statname WHERE name = 'parse count (hard)')
  7  AND    t.statistic# = (SELECT statistic#
  8                         FROM v$statname WHERE name = 'execute count')
  9  AND    s.value > 0
 10  ORDER BY 2 DESC;

       SID Hard Parses Executions Count
---------- ----------- ----------------
      1089        2023           125305
      1063         557           129291
       963         338            55504
       878         252            46754
      1072         226           179833
       871         181            61504

The ratio parse count (hard) / execute count should be on the lower side for better performance.

You can also check the ratio parse count (total) / execute count. This ratio, too, should be on the lower side.

There are various other ratios that can be computed to assist in determining whether parsing may be a problem:

For example, we can derive other ratios from the following query:

SQL> SELECT name, value FROM v$sysstat
  2  WHERE name IN ('parse time cpu', 'parse time elapsed',
  3                 'parse count (hard)', 'CPU used by this session');

NAME                                                            VALUE
---------------------------------------------------------- ----------
CPU used by this session 88709
parse time cpu 7402
parse time elapsed 7570
parse count (hard) 5871

1) parse time CPU / parse time elapsed

This ratio indicates how much of the time spent parsing was due to the parse operation itself, rather than waiting for resources, such as latches. A ratio of one is good, indicating that the elapsed time was not spent waiting for highly contended resources.
The parse time elapsed Oracle metric is the total elapsed time for parsing, in tens of milliseconds. By subtracting parse time cpu from this statistic, the total waiting time for parse resources is determined.

SQL> select a.value/b.value from v$sysstat a, v$sysstat b
  2  where = 'parse time cpu'
  3  and = 'parse time elapsed';


2) parse time CPU / CPU used by this session

This ratio indicates how much of the total CPU used by Oracle server processes was spent on parse-related operations. A ratio closer to zero is good, indicating that the majority of CPU is not spent on parsing.

SQL> select a.value/b.value from v$sysstat a, v$sysstat b
  2  where = 'parse time cpu'
  3  and = 'CPU used by this session';
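Plugging the V$SYSSTAT values shown earlier into the two ratios (illustrative arithmetic only):

```python
# Values from the V$SYSSTAT query output earlier in this section.
parse_time_cpu = 7402
parse_time_elapsed = 7570
cpu_used_by_session = 88709

# Ratio 1: close to 1 means parse time was real CPU work, not waiting.
ratio1 = parse_time_cpu / parse_time_elapsed
# Ratio 2: close to 0 means little of total CPU goes to parsing.
ratio2 = parse_time_cpu / cpu_used_by_session

print(round(ratio1, 4))  # 0.9778
print(round(ratio2, 4))  # 0.0834
```

Both values look healthy here: parsing rarely waits on contended resources, and parsing consumes under 10% of session CPU.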


We could go on deriving ratios, but let's stop here, since we still have other optimization techniques to check.

Checking Library cache performance

SQL> select sum(pinhits)/sum(pins) from V$LIBRARYCACHE;


In V$LIBRARYCACHE we have following three columns

GETS: the number of lookups for objects of the namespace.
PINS: the number of reads or executions of the object.
RELOADS: the number of cache misses.

So basically pins are the actual executions.

A library cache hit ratio between 95 and 99 percent is considered good. But don't rely on these numbers alone, because they can be misleading: there can always be a situation where your library cache shows a good hit ratio while sessions are waiting for resources. Always check for that as well.
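The hit ratio in the query above is simply sum(pinhits)/sum(pins); the same arithmetic works for a single namespace, e.g. with the SQL AREA figures from the namespace listing shown below (the helper name is invented for this sketch):

```python
# Library cache hit ratio: pinhits / pins. The numbers used are the
# SQL AREA row from the V$LIBRARYCACHE listing later in this section.
def lib_cache_hit_ratio(pinhits, pins):
    return pinhits / pins

print(round(lib_cache_hit_ratio(122594, 129364), 4))  # 0.9477
```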

Measuring Library Cache Efficiency

We can also check the efficiency of the library cache using the same view, V$LIBRARYCACHE.

SQL> select namespace, pins, pinhits, reloads from V$LIBRARYCACHE
2 order by namespace;

NAMESPACE               PINS    PINHITS    RELOADS
--------------- ------------ ---------- ----------
BODY                     546        473         20
CLUSTER                 1075       1055          6
INDEX                   2016       1588         41
JAVA DATA                  0          0          0
JAVA RESOURCE              0          0          0
JAVA SOURCE                0          0          0
OBJECT                     0          0          0
PIPE                       0          0          0
SQL AREA              129364     122594       1415
TABLE/PROCEDURE        35855      29402       1359
TRIGGER                    6          4          0

11 rows selected.

Here we see a high number of reloads for SQL AREA and TABLE/PROCEDURE. The memory Oracle uses for the library cache is not just a single heap: for better efficiency, memory is allocated to different kinds of entities. For example, a chunk of memory is allocated to SQL statements, called the SQL AREA; memory is likewise allocated for procedures, objects, triggers, and so on.

For each of these areas we can see PINS, PINHITS and RELOADS to get the efficiencies. Note that RELOAD involves reloading of SQL statements after being aged out.

Sometimes, increasing the size of the shared pool won't help with this kind of output, because the application itself can be big and the number of SQL statements too high to keep them all in memory. Another solution is to create KEEP and RECYCLE caches: one for statements that are used intensively and one for those that can be flushed out. You can also use the DBMS_SHARED_POOL package (KEEP and UNKEEP procedures) to pin objects so that they remain in memory.

You can use the V$LIBRARY_CACHE_MEMORY view to determine the number of library cache memory objects currently in use in the shared pool, and the number of freeable library cache memory objects in the shared pool.

Once we get the exact info about the issues related to library cache like hit ratio, efficiency, information about hard parse and soft parse etc, we can move ahead for optimizing library cache.

Optimizing the Library Cache

We can optimize Library cache using some of the initialization parameters. We will see some of the parameters here now.

Using the CURSOR_SHARING (Literal Replacement) Parameter

When a SQL statement is executed, the Oracle engine generates an execution plan and stores it in the library cache. If a statement contains a literal value and the same statement is used with the same value next time, there is no extra overhead. But if the same statement is used with a different literal value, Oracle has to reparse the statement, generate execution plans, and select the best one. This adds to the system overhead and performance goes down, so it's recommended to use bind variables. The following example shows the use of a bind variable.

SQL> VARIABLE bindvar NUMBER
SQL> BEGIN
  2     :bindvar := 7900;
  3  END;
  4  /

PL/SQL procedure successfully completed.

SQL> SELECT ename FROM scott.emp WHERE empno = :bindvar;

In this case the statement can be executed many times, but it will be parsed only once. We can change the value of the bind variable bindvar and reuse the query efficiently.

But it's also true that in many applications developers don't use bind variables in queries. Here we can help from the database side by setting the CURSOR_SHARING parameter to FORCE or SIMILAR. By default its value is EXACT, which means Oracle reuses a cached statement only if the text matches exactly (even the spaces; it actually uses a string comparison). If the statement is found in the library cache it won't be parsed again, otherwise it will. The FORCE option enables Oracle to reuse parsed SQL statements in its library cache: Oracle replaces the literal values with bind values to make the statements identical.

The CURSOR_SHARING=FORCE option forces the use of bind variables under all circumstances, whereas the CURSOR_SHARING=SIMILAR option does so only when Oracle thinks doing so won't adversely affect optimization.

You can change this parameter dynamically if you are using SPFILE, using following statements.

SQL> show parameters CURSOR_SHARING

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cursor_sharing                       string      EXACT

SQL> alter system set CURSOR_SHARING=FORCE;

System altered.


Ideally, an application should have all the parsed statements available in separate cursors, so that if it has to execute a new statement, all it has to do is pick the parsed statement and change the value of the variables. If the application reuses a single cursor with different SQL statements, it still has to pay the cost of a soft parse. After opening a cursor for the first time, Oracle will parse the statement, and then it can reuse this parsed version in the future. This is a much better strategy than re-creating the cursor each time the database executes the same SQL statement. If you can cache all the cursors, you'll retain the server-side context, even when clients close the cursors or reuse them for new SQL statements.

You’ll appreciate the usefulness of the SESSION_CACHED_CURSORS parameter in a situation where users repeatedly parse the same statements, as happens in an Oracle Forms-based application when users switch among various forms. Using the SESSION_CACHED_CURSORS parameter ensures that for any cursor for which more than three parse requests are made, the parse requests are automatically cached in the session cursor cache. Thus new calls to parse the same statement avoid the parsing overhead. Using the initialization parameter SESSION_CACHED_CURSORS and setting it to a high number makes the query processing more efficient. Although soft parses are cheaper than hard parses, you can reduce even soft parsing by using the SESSION_CACHED_CURSORS parameter and setting it to a high number.

You can enforce session caching of cursors by setting SESSION_CACHED_CURSORS in your initialization parameter file, or dynamically by using the following ALTER SESSION command:

SQL> alter session set SESSION_CACHED_CURSORS = 500;

Session altered.
You can check how good your SESSION_CACHED_CURSORS parameter value is by using the V$SYSSTAT view. If the value of session cursor cache hits is low compared to the total parse count for a session, then the SESSION_CACHED_CURSORS parameter value should be bumped up.
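As a sketch, this comparison can also be done in a single query; the statistic names used here are the standard V$SYSSTAT names.

```sql
-- Rough cursor-cache effectiveness: session cursor cache hits
-- as a percentage of total parse calls (instance-wide).
SELECT ROUND(100 * hits.value / parses.value, 2) AS pct_cache_hits
FROM   v$sysstat hits, v$sysstat parses
WHERE  hits.name   = 'session cursor cache hits'
AND    parses.name = 'parse count (total)';
```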


SQL> show parameters SESSION_CACHED_CURSORS

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
session_cached_cursors               integer     500

SQL> select name, value from V$SYSSTAT
  2  where name in ('session cursor cache hits','parse count (total)');

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
session cursor cache hits                                           8194208
parse count (total)                                                13252290

The perfect situation is where a SQL statement is soft parsed once in a session and executed multiple times. For a good explanation of bind variables, cursor sharing, and related issues, please read the Oracle white paper “Efficient use of bind variables, cursor_sharing and related cursor parameters”.

Sizing the Shared Pool

The best way to maintain the correct size of the memory structures is to set a value for SGA_TARGET. This is a new initialization parameter introduced in Oracle Database 10g for automatic management of memory allocation across components like the shared pool, buffer cache, large pool, and Java pool.

Also, to size the shared pool accurately, you can use the view V$SHARED_POOL_ADVICE. The current total shared pool usage can be obtained from V$SGASTAT:

SQL> select sum(bytes)/1024/1024 from v$sgastat
  2  where pool = 'shared pool';


You can also check the estimated library cache size for a specified shared pool size using the following query.

SQL> select shared_pool_size_for_estimate, shared_pool_size_factor, estd_lc_size
  2  from v$shared_pool_advice;

SHARED_POOL_SIZE_FOR_ESTIMATE SHARED_POOL_SIZE_FACTOR ESTD_LC_SIZE
----------------------------- ----------------------- ------------
                           60                   .7143           12
                           72                   .8571
                           84                        1
                           96                  1.1429
                          108                  1.2857
                          120                  1.4286           66
                          132                  1.5714           77
                          144                  1.7143           88
                          156                  1.8571           99
                          168                        2          110

10 rows selected.

The ESTD_LC_TIME_SAVED column tells the estimated elapsed parse time saved (in seconds), owing to library cache memory objects being found in a shared pool of the specified size. This is the time that would otherwise have been spent reloading the required objects into the shared pool after they were aged out due to insufficient free memory.

SQL> select shared_pool_size_for_estimate, shared_pool_size_factor, estd_lc_time_saved
  2  from v$shared_pool_advice;

SHARED_POOL_SIZE_FOR_ESTIMATE SHARED_POOL_SIZE_FACTOR ESTD_LC_TIME_SAVED
----------------------------- ----------------------- ------------------
                           60                   .7143                552
                           72                   .8571                560
                           84                        1                564
                           96                  1.1429                567
                          108                  1.2857                577
                          120                  1.4286                577
                          132                  1.5714                577
                          144                  1.7143                577
                          156                  1.8571                577
                          168                        2                577

10 rows selected.

So from the above queries we can see that even though the current size of the shared pool is 84M, the better size would be 108M, since the estimated time saved stops improving beyond that point. You can also check the estimated parse time required at different shared pool sizes using the following query.

SQL> select shared_pool_size_for_estimate, shared_pool_size_factor, estd_lc_load_time
  2  from v$shared_pool_advice;

SHARED_POOL_SIZE_FOR_ESTIMATE SHARED_POOL_SIZE_FACTOR ESTD_LC_LOAD_TIME
----------------------------- ----------------------- -----------------
                           60                   .7143                94
                           72                   .8571                86
                           84                        1                82
                           96                  1.1429                79
                          108                  1.2857                69
                          120                  1.4286                69
                          132                  1.5714                69
                          144                  1.7143                69
                          156                  1.8571                69
                          168                        2                69

10 rows selected.

This also shows that a shared pool of 108M reaches the minimum elapsed parse time (69 seconds).

Pinning Objects in the Shared Pool

As I have discussed, if code objects have to be repeatedly hard-parsed and executed, database performance will eventually deteriorate. Your goal should be to keep as much of the executed code in memory as possible so compiled code can be re-executed. You can avoid repeated reloading of objects in your library cache by pinning objects using the DBMS_SHARED_POOL package. Pinning an object means keeping its parse tree and execution plan in memory continuously until the instance is restarted. We can determine the objects to be pinned using the following query.

SQL> SELECT type, count(*) objects,
  2         SUM(DECODE(kept, 'YES', 1, 0)) kept,
  3         SUM(loads) - count(*) reloads
  4  FROM v$db_object_cache
  5  GROUP BY type
  6  ORDER BY objects DESC;

TYPE                            OBJECTS       KEPT    RELOADS
---------------------------- ---------- ---------- ----------
CURSOR                             1428                   584
                                   1423          0        179
                                    252         28        320
                                     88          0        117
                                     41          0          0
                                     41         37         12
                                     26          7          0
                                     10          1         21
                                     10          0          2
                                      8          6          1
PACKAGE BODY                          7          1          6

TYPE                            OBJECTS       KEPT    RELOADS
---------------------------- ---------- ---------- ----------
                                      4          0          1
TYPE BODY                             2          0          0
                                      1          0          1
                                      1          0          0
                                      1          0          8
                                      1          0          0

17 rows selected.

If the number of reloads is high, you should make sure those objects are pinned in the shared pool.

SQL> EXECUTE SYS.DBMS_SHARED_POOL.KEEP('object_name','object_type');

You can use the following statements to pin a package first in the shared pool and then remove it, if necessary:


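A minimal sketch, assuming a package named SCOTT.MY_PKG (a hypothetical name): DBMS_SHARED_POOL.KEEP pins it, and DBMS_SHARED_POOL.UNKEEP releases the pin.

```sql
-- Pin a hypothetical package in the shared pool ('P' = procedure/function/package)
EXECUTE SYS.DBMS_SHARED_POOL.KEEP('SCOTT.MY_PKG', 'P');

-- Later, remove the pin if it is no longer needed
EXECUTE SYS.DBMS_SHARED_POOL.UNKEEP('SCOTT.MY_PKG', 'P');
```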

You can check the number of objects of a given type, and the total memory they consume, using the following queries.

SQL> select count(*) from v$DB_OBJECT_CACHE
  2  where type = 'CURSOR';


SQL> select sum(sharable_mem) from v$DB_OBJECT_CACHE
  2  where type = 'CURSOR';


The size reported is in bytes, so holding 1431 cursors needs approximately 15MB of shared pool memory.
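The two queries above can also be combined into a single per-type report; a sketch:

```sql
-- Object count and total sharable memory (in MB) per object type
SELECT type, COUNT(*) objects,
       ROUND(SUM(sharable_mem)/1024/1024, 2) mem_mb
FROM   v$db_object_cache
GROUP  BY type
ORDER  BY mem_mb DESC;
```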

So basically it all depends on the reports that you take from the database and your initialization parameter settings. So check your database and tune it. Good luck !!!

The Dictionary Cache

The dictionary cache, as mentioned earlier, caches data dictionary information. This cache is much smaller than the library cache, and to increase or decrease it you modify the shared pool accordingly. If your library cache is satisfactorily configured, chances are that the dictionary cache is going to be fine too. You can get an idea about the efficiency of the dictionary cache by using the following query:

SQL> select sum(gets - getmisses - fixed) / sum(gets) from v$rowcache;


Usually, it’s a good idea to shoot for a dictionary cache hit ratio as high as 95 to 99 percent, although Oracle itself sometimes seems to refer to a figure of 85 percent as being adequate. To increase the dictionary cache hit ratio, you simply increase the shared pool size for the instance.


Expert Oracle Database 10g Administration – Sam R. Alapati

Oracle Performance Tuning Guide

Oracle Data Dictionary Reference

Performance Tuning Oracle Database 10g


For tuning Oracle Database 10g we need to follow some standard steps, which I call a protocol for performance tuning. To begin with, as a DBA, we first need to know that there is a problem with our database. Sounds weird !! but it's a fact. Many a times some DBAs don't even know that there is an issue with the database they are managing. The sources for knowing about this are quite ordinary, like users complaining that their programs are taking unusually long. Database 10g also provides lots of performance monitoring tools for checking the health of the database. You can proactively check the health of your database manually by running some scripts or by checking the alerts in Enterprise Manager. You can also check the alerts from the backend using the DBA_OUTSTANDING_ALERTS view.

So once you come to know there is an issue, you need to find out what the issue is about. I mean to say, find what the issue is, not yet what its cause is. The cause can be found once we know the issue. The issue can be related to a query taking time, or to the memory structures, or to I/O as well.

Once you identify the issue, you can check the trend, get to the root cause, and form the solution. This may involve changing some parameters, clearing locks, etc.

The final step is implementing the solution.

In Oracle 10g we have a lot of performance views available for checking the performance of the database. We will describe some of the ways to improve the performance and tune your database. I hope the following discussion will help you get the right configuration for your database to perform well.

Performance monitoring views

CPU resource monitoring:

At the very top we have to identify whether an Oracle process is consuming the resource or some other OS process is using the CPU. This you can get from the top command on Unix. Following is a brief description.


 04:24:24  up 28 days,  2:54, 31 users,  load average: 0.80, 1.12, 1.26
1195 processes: 1190 sleeping, 2 running, 3 zombie, 0 stopped
CPU states:  cpu    user    nice  system    irq  softirq  iowait    idle
total    1.9%    0.0%    4.7%   0.0%     0.1%    0.0%   93.0%
cpu00    2.9%    0.0%    4.4%   0.0%     0.0%    0.0%   92.6%
cpu01    1.5%    0.0%    5.6%   0.0%     0.3%    0.0%   92.4%
cpu02    2.5%    0.0%    3.8%   0.1%     0.3%    0.0%   93.0%
cpu03    0.9%    0.0%    5.0%   0.0%     0.0%    0.0%   93.9%
Mem:  8103716k av, 7862584k used,  241132k free,       0k shrd,   14136k buff
5462332k actv, 1102660k in_d,  146424k in_c
Swap: 24314368k av, 1574788k used, 22739580k free                 4345360k cached

5194 appmgr02  16   0  2652 2652   844 S     0.8  0.0  25:43   0 top
29525 oracle03  16   0  2524 2524   844 R     0.7  0.0   0:00   0 top
2357 root      15   0 19384 1952   984 S     0.6  0.0 129:28   1 dcstor32d
2356 root      15   0 19384 1952   984 S     0.1  0.0  15:30   3 dcstor32d
8367 oracle03  15   0  636M 633M  632M S     0.1  8.0   1:48   2 oracle
1 root      15   0   508  472   436 S     0.0  0.0   1:02   2 init
2 root      RT   0     0    0     0 SW    0.0  0.0   0:00   0 migration/0
3 root      RT   0     0    0     0 SW    0.0  0.0   0:00   1 migration/1
4 root      RT   0     0    0     0 SW    0.0  0.0   0:00   2 migration/2
5 root      RT   0     0    0     0 SW    0.0  0.0   0:00   3 migration/3
6 root      15   0     0    0     0 SW    0.0  0.0   0:00   1 keventd
7 root      34  19     0    0     0 SWN   0.0  0.0   0:00   0 ksoftirqd/0
8 root      34  19     0    0     0 SWN   0.0  0.0   0:00   1 ksoftirqd/1
9 root      34  19     0    0     0 SWN   0.0  0.0   0:00   2 ksoftirqd/2
10 root      34  19     0    0     0 SWN   0.0  0.0   0:00   3 ksoftirqd/3

From the above stats we can say the following. Let's split the first line:
04:24:24 –> current time
up 28 days, 2:54 –> continuous up time till now (28 days, 2 hours and 54 mins)
31 users –> total number of users currently logged in
load average: 0.80, 1.12, 1.26 –> system load averages for the past 1, 5, and 15 minutes.

1195 processes: 1190 sleeping, 2 running, 3 zombie, 0 stopped –> This is self-explanatory, except that a zombie is a process that has terminated but whose exit status has not yet been collected by its parent.

cpu00 – cpu03 are the 4 CPUs on the system. Stats about each CPU are available in the form of a usage table.
user column –> percentage of CPU time spent in user processes
nice column –> the NI (nice) value is the dispatching priority of the task, and refers to the rate at which the task receives service from the CPU
system column –> percentage of CPU time spent in system (kernel) processes
iowait column –> percentage of time the CPU was idle while waiting for I/O to complete
idle column –> percentage of CPU time that is idle

Next comes the memory part for the server.

Mem: 8103716k av, 7862584k used, 241132k free, 0k shrd, 14136k buff
5462332k actv, 1102660k in_d, 146424k in_c
Swap: 24314368k av, 1574788k used, 22739580k free 4345360k cached

The important numbers to note here are:

8103716k is the total memory available.
7862584k is the currently used memory.
241132k is the free memory.

Total swap memory is 24314368k.
Used swap memory is 1574788k.
And free swap memory is 22739580k.

The remaining listing shows the processes currently running on the system, in descending order of CPU usage. If Oracle processes appear at the top, we can conclude that Oracle processes are consuming CPU time.

To check which Oracle processes are involved in high CPU usage, we can check the views V$SYSSTAT and V$SESSTAT.

To begin with, you can check the view V$SYSSTAT, which lists all the system statistics that the database captures for you. The view contains the statistic name and its value. For example we have:

SQL> select * from v$sysstat
  2  where name = 'CPU used by this session';

STATISTIC# NAME                                                                  CLASS      VALUE    STAT_ID
---------- ---------------------------------------------------------------- ---------- ---------- ----------
        12 CPU used by this session                                                  1    5448603   24469293

This gives the total CPU used by all the sessions.

To check the CPU and DB TIME used by individual sessions we can check V$SESSTAT. Please note that for DB TIME statistic# is 13.

SQL> select * from v$sesstat
  2  where statistic# = 13
  3  order by value desc;

       SID STATISTIC#      VALUE
---------- ---------- ----------
      1025         13   14904405
       993         13   14904380
      1019         13   14895026
       968         13   11158910
       840         13    7445590
       925         13    3505544
       912         13    3504272
Now we see that SID 1025 has the highest DB time. For checking the CPU time used we can use statistic# 12. These statistic# values can be obtained from V$STATNAME.
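The statistic numbers can vary between Oracle versions, so rather than hard-coding 12 and 13 you can look them up from V$STATNAME; a sketch:

```sql
-- Resolve statistic# values by name instead of hard-coding them
SELECT statistic#, name
FROM   v$statname
WHERE  name IN ('CPU used by this session', 'DB time');
```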

SQL> select * from v$sesstat
  2  where statistic# = 12
  3  order by value desc;

       SID STATISTIC#      VALUE
---------- ---------- ----------
      1092         12      47445
      1088         12      33410
      1089         12      24896
      1063         12      19364
      1072         12      13726
      1090         12      10588
       963         12      10465
       850         12       9955
       871         12       9001

So as per the records, the session with SID 1092 is using the maximum CPU time. Note that this statistic is measured in hundredths of a second.

Once we get the SID we can check all the details of the user from V$SESSION, including the SQL_ID of the resource-intensive SQL query the user is running.

SQL> select SID, serial#, sql_id from v$session
  2  where SID=1025;

       SID    SERIAL# SQL_ID
---------- ---------- -------------
      1025       3307 ay44jj6hphbuz

And as usual you can get the full sql query from V$SQL.
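For example, given the SID found above, a sketch of joining V$SESSION to V$SQL to see the statement text:

```sql
-- Fetch the SQL text for the statement a session is currently running
SELECT s.sid, s.serial#, q.sql_text
FROM   v$session s, v$sql q
WHERE  s.sql_id = q.sql_id
AND    s.sid    = 1025;   -- SID taken from the earlier output
```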

Checking the IO resource statistics:

We can check the IO statistics with OS-level commands. Again, if the OS commands show that Oracle processes are doing a lot of IO, we can dig further and check which user is causing the problem.

To check IO stats at the Unix level we have, for example, the iostat command:

-bash-2.05b$ iostat
Linux 2.4.21- (ap6188rt) 06/09/2007

avg-cpu: %user %nice %sys %iowait %idle
5.49 0.00 4.90 0.12 89.48

Device:            tps   Blk_read/s   Blk_wrtn/s    Blk_read    Blk_wrtn
sda               1.05         5.27        13.23    12823196    32184706
sda1              0.81         5.26         9.05    12800280    22003398
sda2              0.25         0.01         4.19       22452    10181306

As we can see, it generates two reports: one is the CPU utilization report and the other is the disk IO report. In the CPU utilization report we see the average CPU consumption across all CPUs. user, nice, sys, iowait and idle were already explained above.

In case of disk report we have following.

tps –> the transfer rate per second, in number of requests
Blk_read/s –> the number of blocks read per second. Each block is 512 bytes.
Blk_wrtn/s –> the number of blocks written per second. Again, each block is 512 bytes.
Blk_read and Blk_wrtn –> the total number of blocks read and written.

If you see heavy input-output on a device, check the locations of the Oracle files. If the Oracle files happen to be on the same device, we can say that the heavy IO is because of some Oracle process (which may be because some user is executing a heavy query).
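To compare the busy device against Oracle file locations, you can list the data, log, and control files; a sketch:

```sql
-- File locations to compare against the busy device reported by iostat
SELECT name   FROM v$datafile
UNION ALL
SELECT member FROM v$logfile
UNION ALL
SELECT name   FROM v$controlfile;
```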

In that case again, you have 3 views to confirm that.

First check the view V$SESSION_WAIT

SQL> select SID, event, wait_time, seconds_in_wait, state from v$session_wait
  2  where upper(wait_class) != 'IDLE';

       SID EVENT                           WAIT_TIME SECONDS_IN_WAIT STATE
---------- ------------------------------ ---------- --------------- -------------------
       941 SQL*Net message to client              -1               0 WAITED SHORT TIME

Here the event is SQL*Net message to client with wait_time = -1. When the WAIT_TIME column is -1, the actual wait time was less than 1/100th of a second, and the STATE column shows “WAITED SHORT TIME”.

Also, if the WAIT_TIME column is zero then the STATE column will have the value “WAITING” and we can infer that the process is currently waiting.

If WAIT_TIME = 0, then SECONDS_IN_WAIT is the seconds spent in the current wait condition. If WAIT_TIME > 0, then SECONDS_IN_WAIT is the seconds since the start of the last wait.

From this view you can get the SID of the user who is waiting. You can then check the view V$SESS_IO to get information about the session's IO activity, in terms of blocks read and written; from it we can find the SID of the user doing the most physical reads.

SQL> select * from v$sess_IO
  2  order by PHYSICAL_READS desc;

       SID BLOCK_GETS CONSISTENT_GETS PHYSICAL_READS BLOCK_CHANGES
---------- ---------- --------------- -------------- -------------
      1092     298236         3975622         170767        362249
      1093          0             155          18401             0
       871       6872         4535603          12013          8340

Here we can see SID 1092 is doing high physical reads, which is nothing but IO.

Two related statistics here are:

CONSISTENT_CHANGES – the number of changes applied to blocks (logical I/O) for read-consistency purposes in this session.
CONSISTENT_GETS – the number of blocks requested in consistent mode in this session.

Blocks are requested in consistent mode for consistent read operations. When a block is requested in consistent mode, any changes committed or made to that block since the requesting statement (or transaction) began must be rolled back for the purposes of the read, so that the SELECT query gets a read-consistent view of the data as of the time it started.

Finally you can also check V$SESSION view to get all information of the user and SQL query he is running.

SQL> select program from v$session
  2  where SID=1092;

PROGRAM
------------------------------------------------
oracle@ap6188rt (SMON)

So we see here that the SMON process was responsible for the IO.

To Be Continued . . .

Locking issues in Oracle Database 10g

Many a times we face an issue where a particular SQL query is holding lock(s) on certain objects. This creates concurrency problems for other DML statements as well. Some other user may complain that his DML query (insert/delete/update) is taking a lot of time. One of the reasons might be that the object the second query wants to use is already locked by the first query. In Oracle Database 10g we can check which objects are locked by which user. We have a view called V$LOCKED_OBJECT, which lists all locks acquired by every transaction on the system.

For example, let's say a user has been holding a lock on a certain table through some DML query for a long time. As a DBA you want to find the session and kill it so that others can use the resource. The following steps can be followed to get this information and remove the lock.

SQL> select object_id, session_id, oracle_username, os_user_name, process
  2  from v$locked_object;

 OBJECT_ID  SESSION_ID ORACLE_USERNAME        OS_USER_NAME      PROCESS
---------- ----------- ---------------------- ----------------- --------
   1991774         368 APPS                   plnops            26188
   2019121         368 APPS                   plnops            26188
     34376         507 APPS                   plnops            25922

Based on the above output you can check which object the user has locked. For example, if we take object ID 1991774, the following query will give us the object name.

SQL> select object_name from dba_objects
2 where object_id = 1991774;


Also, from the SESSION_ID value we can get the session details of the user. Let's check SESSION_ID 368. The SESSION_ID column of V$LOCKED_OBJECT corresponds to the SID column of V$SESSION.

SQL> select SID, SERIAL#, sql_id from v$session
  2  where SID=368;

       SID    SERIAL# SQL_ID
---------- ---------- -------------
       368        545 ag47457n3xck1

From the SQL_ID we can easily get the SQL query the user is running, as shown below.

SQL> select sql_text from v$sql
  2  where sql_id = 'ag47457n3xck1';



The above query gives the first 1000 characters of the SQL statement. To get the complete SQL you can check the SQL_FULLTEXT column in the same view (V$SQL).
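A sketch of pulling the complete statement, since SQL_FULLTEXT is a CLOB:

```sql
-- SQL_FULLTEXT is a CLOB holding the entire statement text
SELECT sql_fulltext
FROM   v$sql
WHERE  sql_id = 'ag47457n3xck1';
```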

Once you have the session information for the SQL, you can kill the session using the following command.

SQL> alter system kill session '368,545';

System altered.

Check the performance after killing the session. Usually the performance improves and other users will appreciate you, as their queries will run faster. (:-))