Database Wait Statistics – Oracle Database 10g

When a user submits a SQL statement to the Oracle database, Oracle never executes it continuously in one go. The Oracle process never gets to work on the statement without interruptions; often it has to pause and wait for some event to complete or for some resource to be released. Thus an active Oracle process is doing one of the following things at any point of time:

The process is executing the SQL statement.

The process is waiting for something (for example, a resource such as a database buffer or a latch). It could be waiting for an action such as a write to the buffer cache to complete.
That’s why the response time—the total time taken by Oracle to finish work—is correctly defined as follows:

response time = service time + wait time

So only part of the time is spent by the Oracle process actually “doing” something; the rest of the time the process just waits for some resource to be freed up. It can be waiting for the log writer process, the database writer process, or some other resource.

The wait event may also be due to unavailable buffers or latches.

Four dynamic performance views contain wait information: V$SESSION, V$SYSTEM_EVENT, V$SESSION_EVENT, and V$SESSION_WAIT. These four views list just about all the events the instance was waiting for and the duration of these waits. Understanding these wait events is essential for resolving performance issues.

There are different wait classes defined in the database, and each class contains different wait events. There are around 860 wait events defined in Oracle Database 10g, classified under these wait classes.

Some of the main wait classes include:

  • Administrative: Waits caused by administrative commands, such as rebuilding an index.
  • Application: Waits due to the application code.
  • Cluster: Waits related to Real Application Cluster management.
  • Commit: Consists of the single wait event log file sync, which is a wait caused by commits in the database.
  • Concurrency: Waits for database resources that are used for locking; for example, latches.
  • Configuration: Waits caused by database or instance configuration problems, such as a low shared-pool memory size.
  • Idle: Idle wait events indicate waits that occur when a session isn’t active; for example, the ‘SQL*Net message from client’ wait event.

You can see the complete list of wait classes using the V$SESSION_WAIT_CLASS dynamic performance view.
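
For example, a minimal sketch that lists the wait classes known to the instance (V$SESSION_WAIT_CLASS carries one row per session and wait class, so DISTINCT collapses it to the class names):

SQL> SELECT DISTINCT wait_class FROM v$session_wait_class ORDER BY wait_class;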

Analyzing Instance Performance

You can check the percentage of time spent by the database waiting for resources and the percentage of time spent in actual execution.

SQL> SELECT METRIC_NAME, VALUE
2  FROM V$SYSMETRIC
3  WHERE METRIC_NAME IN ('Database CPU Time Ratio',
4  'Database Wait Time Ratio') AND
5  INTSIZE_CSEC =
6  (select max(INTSIZE_CSEC) from V$SYSMETRIC);

METRIC_NAME                                                           VALUE
---------------------------------------------------------------- ----------
Database Wait Time Ratio                                         15.6260647
Database CPU Time Ratio                                          84.3739353

If ‘Database Wait Time Ratio’ is greater than ‘Database CPU Time Ratio’, or the value of ‘Database Wait Time Ratio’ is quite significant, then you need to dig deeper to find out where exactly Oracle is waiting. Basically, you need to find the type of wait; this gives you the root cause, and once you have the root cause you can work on fixing it.

You can determine the total waits and percentage of waits by wait class.

SELECT WAIT_CLASS,
TOTAL_WAITS,
round(100 * (TOTAL_WAITS / SUM_WAITS),2) PCT_TOTWAITS,
ROUND((TIME_WAITED / 100),2) TOT_TIME_WAITED,
round(100 * (TIME_WAITED / SUM_TIME),2) PCT_TIME
FROM
(select WAIT_CLASS,
TOTAL_WAITS,
TIME_WAITED
FROM V$SYSTEM_WAIT_CLASS
WHERE WAIT_CLASS != 'Idle'),
(select sum(TOTAL_WAITS) SUM_WAITS,
sum(TIME_WAITED) SUM_TIME
from V$SYSTEM_WAIT_CLASS
where WAIT_CLASS != 'Idle')
ORDER BY PCT_TIME DESC;

WAIT_CLASS           TOTAL_WAITS PCT_TOTWAITS TOT_TIME_WAITED   PCT_TIME
-------------------- ----------- ------------ --------------- ----------
System I/O                180300        19.96          3008.8      49.53
Commit                     67855         7.51         1302.46      21.44
User I/O                  291565        32.28         1056.55      17.39
Application                 3637           .4          596.66       9.82
Other                      15388          1.7            67.4       1.11
Concurrency                 1264          .14           38.12        .63
Network                   343169        37.99            3.86        .06
Configuration                 22            0               1        .02

8 rows selected.

In the above output, the percentage of time waited (last column) is more important, and it gives the correct picture of the impact of each wait class. For example, the total number of Network waits is large, but the actual percentage of time contributed by them is very small (0.06%).

The key dynamic performance tables for finding wait information are the V$SYSTEM_EVENT, V$SESSION_EVENT, V$SESSION_WAIT, and the V$SESSION views. The first two views show the waiting time for different events.

V$SYSTEM_EVENT

The V$SYSTEM_EVENT view shows the total time waited for all the events for the entire system since the instance started up. The view doesn’t focus on the individual sessions experiencing waits, so it gives you a high-level view of waits in the system. You can use this view to find out what the top instance-wide wait events are. You can calculate the top n waits in the system by dividing each event’s wait time by the total wait time for all events.

select EVENT, TOTAL_WAITS, TIME_WAITED, WAIT_CLASS from V$SYSTEM_EVENT
where wait_class != 'Idle'
order by time_waited desc;

EVENT                          TOTAL_WAITS TIME_WAITED WAIT_CLASS
------------------------------ ----------- ----------- --------------------
log file parallel write             128953      210308 System I/O
log file sync                        67904      130313 Commit
db file sequential read             259065       73686 User I/O
enq: TX - row lock contention          226       59080 Application
control file parallel write          28282       57929 System I/O
db file parallel write               19155       32924 System I/O
db file scattered read               31841       30925 User I/O
os thread startup                       95        3262 Concurrency
rdbms ipc reply                        485        2116 Other
PX Deq: Signal ACK                    1971        1103 Other
local write wait                       245         864 User I/O
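
The query above ranks events by raw wait time. To express each event’s share of the total, a small sketch using an analytic SUM (the alias pct_time is my own):

select event, time_waited,
       round(100 * time_waited / sum(time_waited) over (), 2) pct_time
from v$system_event
where wait_class != 'Idle'
order by time_waited desc;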

We can get the session-level waits for each event using the V$SESSION_EVENT view. In this view, TIME_WAITED is the wait time per session.

V$SESSION_EVENT

select sid, EVENT, TOTAL_WAITS, TIME_WAITED, WAIT_CLASS
from V$SESSION_EVENT
where WAIT_CLASS != 'Idle'
order by TIME_WAITED;

  SID EVENT                          TOTAL_WAITS TIME_WAITED WAIT_CLASS
---------- ------------------------------ ----------- ----------- ------------
390 os thread startup                       55      1918   Concurrency
393 db file sequential read              10334      4432   User I/O
396 db file parallel write                8637      14915  System I/O
397 db file parallel write               10535      18035  System I/O
394 control file parallel write          28294      57928  System I/O
395 log file parallel write             129020      210405 System I/O

As we can see from the above output, session 395 has the maximum wait time, due to System I/O. Here System I/O is the I/O performed by background processes such as DBWR and LGWR.

You can get the full list of database wait events from V$EVENT_NAME, and the meaning of each wait event is described in the Oracle 10g documentation.

V$SESSION_WAIT

The third dynamic view is V$SESSION_WAIT, which shows the current or just-completed waits for sessions. The information in this view changes continuously based on the types of waits occurring in the system. The real-time information in this view gives you tremendous insight into what’s holding things up in the database right now. The V$SESSION_WAIT view provides detailed information on the wait event, including details such as file number, latch number, and block number. This detailed level of information enables you to probe into the exact bottleneck that’s slowing down the database. The low-level information helps you zoom in on the root cause of performance problems.

SQL> select sid, event, WAIT_CLASS, WAIT_TIME, SECONDS_IN_WAIT, STATE from v$session_wait
2  where wait_class != 'Idle';

       SID EVENT                          WAIT_CLASS            WAIT_TIME SECONDS_IN_WAIT STATE
---------- ------------------------------ -------------------- ---------- --------------- -----------------
       337 SQL*Net message to client      Network                      -1               0 WAITED SHORT TIME

Here a wait time of -1 means that the session has waited for less than 1/100th of a second.
You can get the complete wait information for a particular session using the V$SESSION view, including the SQL_ID of the SQL statement that is causing the wait.

V$SESSION

To gather wait statistics, you can follow this methodology:

  • First, look at the V$SYSTEM_EVENT view and rank the top wait events by the total amount of time waited, as well as the average wait time for that event.
  • Next, find out more details about the specific wait event that’s at the top of the list. You can check the V$WAITSTAT view for this and note the type of wait it reports. If the wait is on an undo header or undo block, the wait is related to undo segments.
  • Finally, use the V$SESSION view to find out the exact objects that may be the source of a problem. For example, if you have a high amount of db file scattered read-type waits, the V$SESSION view will give you the file number and block number involved in the wait events.

In V$SESSION we have a column called BLOCKING_SESSION_STATUS. If this column’s value is VALID, we can presume that the corresponding session is being blocked, and the BLOCKING_SESSION column identifies the blocker.
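
As a minimal sketch combining these points, the following pulls the current wait, the SQL_ID, and any blocker for non-idle sessions (all of these columns exist in the 10g V$SESSION view):

SELECT sid, sql_id, event, wait_class, seconds_in_wait,
       blocking_session, blocking_session_status
FROM v$session
WHERE wait_class != 'Idle';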

V$SESSION_WAIT_HISTORY

The V$SESSION_WAIT_HISTORY view holds information about the last ten wait events for each active session. The other wait-related views, such as the V$SESSION and the V$SESSION_WAIT, show you only the wait information for the most recent wait. This may be a short wait, thus escaping your scrutiny.

SQL> select seq#, event, p1, p2, p3, wait_time
2  from v$session_wait_history
3  where sid = &sid;

      SEQ# EVENT                            P1         P2         P3  WAIT_TIME
---------- ------------------------ ---------- ---------- ---------- ----------
1 rdbms ipc message            180000          0          0     175787
2 rdbms ipc message            180000          0          0     175787
3 rdbms ipc message            180000          0          0      60782
4 rdbms ipc message            180000          0          0     175787
5 rdbms ipc message            180000          0          0     138705
6 db file sequential read           1        368          1          0
7 rdbms ipc message            180000          0          0     158646
8 db file sequential read           1        368          1          0
9 db file sequential read           1         73          1          0
10 db file sequential read           1         30          1          0

Note that a zero value under the WAIT_TIME column means that the session is waiting for a specific wait event. A nonzero value represents the time waited for the last event.

V$ACTIVE_SESSION_HISTORY

The V$SESSION_WAIT view tells you what resource a session is waiting for. The V$SESSION view also provides significant wait information for active sessions. However, neither of these views provides historical information about the waits in your instance. Once a wait is over, you can no longer view its details using the V$SESSION_WAIT view. The waits are so fleeting that by the time you query the views, the wait is usually over. The Active Session History (ASH) feature, by recording session information, enables you to go back in time and review the history of a performance bottleneck in your database.

Although the AWR provides hourly snapshots of the instance by default, you won’t be able to analyze events that occurred five or ten minutes ago based on AWR data. This is where the ASH information comes in handy. ASH samples the V$SESSION view every second and collects the wait information for all active sessions. An active session is defined as a session that’s on the CPU or waiting for a resource. You can view the ASH session statistics through the V$ACTIVE_SESSION_HISTORY view, which contains a single row for each active-session sample in your instance.

ASH is a rolling buffer in memory, with older information being overwritten by new session data. Every 60 minutes, the MMON background process flushes filtered ASH data to disk as part of the hourly AWR snapshots. If the ASH buffer fills up sooner, the MMNL background process performs the flushing. Once the ASH data is flushed to disk, you won’t be able to see it in the V$ACTIVE_SESSION_HISTORY view; you’ll have to use the DBA_HIST_ACTIVE_SESS_HISTORY view to look at the historical data.
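
For example, a hedged sketch against the on-disk history, listing the most-sampled non-idle waits over the last day (column names as in DBA_HIST_ACTIVE_SESS_HISTORY):

SELECT session_id, event, COUNT(*) samples
FROM dba_hist_active_sess_history
WHERE sample_time > sysdate - 1
AND event IS NOT NULL
GROUP BY session_id, event
ORDER BY samples DESC;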

Obtaining the Objects with the Highest Waits

SELECT a.current_obj#, o.object_name, o.object_type, a.event,
SUM(a.wait_time +
a.time_waited) total_wait_time
FROM v$active_session_history a,
dba_objects o
WHERE a.sample_time between sysdate - 30/2880 and sysdate
AND a.current_obj# = o.object_id
GROUP BY a.current_obj#, o.object_name, o.object_type, a.event
ORDER BY total_wait_time;

 OBJECT_NAME                OBJECT_TYPE         EVENT                  TOTAL_WAIT_TIME
--------------------       ------------------- ----------------------- ----------------
FND_LOGINS                 TABLE               db file sequential read   47480
KOTTB$                     TABLE               db file sequential read   48077
SCHEDULER$_WINDOW          TABLE               db file sequential read   49205
ENG_CHANGE_ROUTE_STEPS_TL  TABLE               db file sequential read   52534
JDR_PATHS_N1               INDEX               db file sequential read   58888
MTL_ITEM_REVISIONS_B       TABLE               SQL*Net more data to client

select p1text, p1, p2text, p2, p3text, p3, a.event
from v$active_session_history a
WHERE a.sample_time between sysdate - 30/2880 and sysdate
AND a.current_obj# = 1938000;

P1TEXT     P1          P2TEXT     P2   P3TEXT          P3  EVENT
--------   ---------   ------- ------- ------------ ------ ------------------------
file#      71          block#    4389  blocks           1  db file sequential read
file#      187         block#   89977  blocks           1  db file sequential read
file#      80          block#   79301  blocks           1  db file sequential read
driver id  675562835   #bytes       1  0
file#      11          block#     831  blocks           1  db file sequential read
driver id  675562835   #bytes       1  0

So we can see a few historical wait events for a particular object in the database. We can also get the segment statistics for this object (see the sketch after the list below). Finally, we can come to a conclusion and implement a solution to reduce the wait. For example, if it is a ‘db file sequential read’ wait, then:

  • Increase the buffer cache size; this alone won’t help much, so before doing it, check the cache miss percentages.
  • Check the query and optimize it so that it reads fewer blocks.
  • Increase the freelists for that segment.
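
To pull the segment statistics mentioned above, a minimal sketch (the object name FND_LOGINS is simply an example taken from the earlier output):

SELECT object_name, statistic_name, value
FROM v$segment_statistics
WHERE object_name = 'FND_LOGINS'
AND statistic_name IN ('physical reads', 'logical reads');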

Some Important Wait Events

The following query lists the most important wait events in your database in the last 15 minutes:

SELECT a.event,
SUM(a.wait_time +
a.time_waited) total_wait_time
FROM v$active_session_history a
WHERE a.sample_time between
sysdate - 30/2880 and sysdate
GROUP BY a.event
ORDER BY total_wait_time DESC;

EVENT                                                            TOTAL_WAIT_TIME
---------------------------------------------------------------- ---------------
enq: TX – row lock contention                                          877360289
816854999
TCP Socket (KGAS)                                                       13787430
SQL*Net break/reset to client                                            6675324
db file sequential read                                                  2318850
control file parallel write                                              1790011
log file parallel write                                                  1411201
db file scattered read                                                     62132
os thread startup                                                          39640
null event                                                                     0

Users with the Most Waits

The following query lists the users with the highest wait times within the last 15 minutes:

SELECT s.sid, s.username,
SUM(a.wait_time +
a.time_waited) total_wait_time
FROM v$active_session_history a,
v$session s
WHERE a.sample_time between sysdate - 30/2880 and sysdate
AND a.session_id=s.sid
GROUP BY s.sid, s.username
ORDER BY total_wait_time DESC;

       SID USERNAME                       TOTAL_WAIT_TIME
---------- ------------------------------ ---------------
773 APPS                                 877360543
670 APPS                                 374767126
797                                       98408003
713 APPS                                  97655307
638 APPS                                  53719218
726 APPS                                  39072236
673 APPS                                  29353667
762 APPS                                  29307261
746 APPS                                  29307183
653 APPS                                  14677170
675 APPS                                  14676426

Identifying SQL with the Highest Waits

Using the following query, you can identify the SQL statements that waited the most in your instance within the last 15 minutes:

SELECT a.user_id,d.username,s.sql_text,
SUM(a.wait_time + a.time_waited) total_wait_time
FROM v$active_session_history a,
v$sqlarea s,
dba_users d
WHERE a.sample_time between sysdate - 30/2880 and sysdate
AND a.sql_id = s.sql_id
AND a.user_id = d.user_id
GROUP BY a.user_id,s.sql_text, d.username
ORDER BY total_wait_time DESC;

Monitoring IO statistics – Oracle Database 10g

There are several ways to check I/O statistics and the load on the devices. At the OS level we have utilities like iostat and sar. We will look at the OS utilities first and then check I/O stats using database queries.

Example of using sar -d for IO stats:

(appmgr01) appmgr – -bash $ sar -d 10 5
Linux 2.4.21-37.0.0.1.4.ELhugemem (ap6157rt)    06/18/2007

02:19:21 AM       DEV       tps  rd_sec/s  wr_sec/s
02:19:31 AM    dev8-0      1.30      0.00     19.20
02:19:31 AM    dev8-1      0.00      0.00      0.00
02:19:31 AM    dev8-2      1.30      0.00     19.20

02:19:31 AM       DEV       tps  rd_sec/s  wr_sec/s
02:19:41 AM    dev8-0      0.50      0.00     12.81
02:19:41 AM    dev8-1      0.00      0.00      0.00
02:19:41 AM    dev8-2      0.50      0.00     12.81

Here we can see that dev8-0, dev8-1 and dev8-2 are the devices (disks).

tps is the transfer rate per second, in number of I/O requests.
rd_sec/s is the number of sectors read from the device per second. The size of a sector is 512 bytes.
wr_sec/s is the number of sectors written to the device per second. The size of a sector is 512 bytes.

Another command is iostat

-bash-2.05b$ iostat
Linux 2.4.21-37.0.0.1.4.ELhugemem (ap6188rt)    06/18/2007

avg-cpu:  %user   %nice    %sys %iowait   %idle
7.54    0.00    6.37    0.14   85.94

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               1.21         6.16        15.18   19695596   48524898
sda1              0.94         6.15        10.84   19655648   34666374
sda2              0.26         0.01         4.33      39404   13858522

Here you can check the %user column, which will show you the percentage of CPU that occurred while executing at the user level (application).

Other significant fields are already explained in Performance Tuning Oracle Database 10g.

You can check the I/O stats using some SQL on dynamic views, as given below. This query provides the read/write statistics for each file of the database.

SELECT d.name,
f.phyrds reads,
f.phywrts wrts,
(f.readtim / decode(f.phyrds,0,-1,f.phyrds)) readtime,
(f.writetim / decode(f.phywrts,0,-1,phywrts)) writetime
FROM
v$datafile d,
v$filestat f
WHERE
d.file# = f.file#
ORDER BY
d.name;

NAME                                                    READS       WRTS   READTIME   WRITETIME
-------------------------------------------------- ---------- ---------- ----------   ----------
/slot/ems1149/oracle/db/apps_st/data/tx_idx20.dbf          70        129   .9         .031007752
/slot/ems1149/oracle/db/apps_st/data/tx_idx21.dbf          46         72   .6521739   .041666667
/slot/ems1149/oracle/db/apps_st/data/tx_idx22.dbf         151        428   .7814569   .044392523
/slot/ems1149/oracle/db/apps_st/data/tx_idx23.dbf         107        300   1.177570        .08
/slot/ems1149/oracle/db/apps_st/data/tx_idx24.dbf         177        935   .4293785   .098395722
/slot/ems1149/oracle/db/apps_st/data/tx_idx3.dbf          103       1624   1.427184   .580049261

The v$filestat view gives the read/write statistics for datafiles. Here phyrds is the number of physical reads done from the respective datafile, and phywrts is the number of physical writes done by the DBWR process when writing from the buffer cache to disk. The readtime we calculate here is the average read time per request, and similarly for writetime.

Excessive disk read and write times show that you have I/O contention. Here are some points you can note in order to reduce I/O contention.

  • Increase the number of disks in the storage system.
  • Separate the database and the redo log files.
  • For a large table, use partitions to reduce I/O.
  • Stripe the data either manually or by using a RAID disk-striping system.
  • Invest in cutting-edge technology, such as file caching, to avoid I/O bottlenecks.
  • Consider using the new Automatic Storage Management system.

Monitoring CPU Usage – Oracle Database 10g

System Activity Report (SAR)

-bash-2.05b$ sar -u 10 5
Linux 2.4.21-37.0.0.1.4.ELhugemem (ap6188rt)    06/12/2007

10:48:24 PM       CPU     %user     %nice   %system   %iowait     %idle
10:48:34 PM       all     22.07      0.00     14.36      0.03     63.54
10:48:44 PM       all     16.70      0.00     13.93      0.17     69.20
10:48:54 PM       all      8.80      0.00      8.15      0.25     82.80
10:49:04 PM       all      2.52      0.00      3.55      0.00     93.92
10:49:14 PM       all      2.05      0.00      4.00      0.00     93.95
Average:          all     10.43      0.00      8.80      0.09     80.69

We can check the processes that are consuming high CPU. When we use the command ps -eaf, the fourth column (C) shows the processor utilization of each process. Example:

oracle03 12979     1  0 21:39 ?        00:00:00 ora_q003_tkr12m1
oracle03 22815     1  0 21:57 ?        00:00:00 ora_q001_tkr12m1
oracle03  2720     1  0 22:36 ?        00:00:00 oracletkr12m1 (LOCAL=NO)
oracle03 30548     1  0 22:42 ?        00:00:00 oracletkr12m1 (LOCAL=NO)
oracle03 25572     1  0 22:48 ?        00:00:00 oracletkr12m1 (LOCAL=NO)
oracle03 14232     1 35 22:53 ?        00:00:03 oracletkr12m1 (LOCAL=NO)

You can also check the users, which are consuming high CPU.

SELECT n.username,
s.sid,
s.value
FROM v$sesstat s,v$statname t, v$session n
WHERE s.statistic# = t.statistic#
AND n.sid = s.sid
AND t.name='CPU used by this session'
ORDER BY s.value desc;

USERNAME                              SID      VALUE
------------------------------ ---------- ----------
1093     194184
1092      77446
1088      67564
1089      43054
1090      19192
1072      15009
APPS                                  832       1777
APPS                                  998       1190
APPS                                  822        577
APPS                                  900        508
APPS                                  823        477

You can also check session-level CPU directly from V$SESSTAT (statistic# 12 corresponds to ‘CPU used by this session’ in this release; statistic numbers can vary between versions):

SQL> select * from v$sesstat
2  where statistic# = 12
3  order by value desc;

       SID STATISTIC#      VALUE
---------- ---------- ----------
1093         12     194184
1092         12      77475
1088         12      67579
1089         12      43062
1090         12      19194
1072         12      15010
832         12       1785
998         12       1197
822         12        577

You can also decompose the total CPU usage. Basically, total CPU time is

total CPU time = parsing CPU usage + recursive CPU usage + other CPU usage

SELECT name,value FROM V$SYSSTAT
WHERE NAME IN ('CPU used by this session',
'recursive cpu usage',
'parse time cpu');

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
recursive cpu usage                                                 6276669
CPU used by this session                                            8806491
parse time cpu                                                       482645

Ideally, (parsing CPU usage + recursive CPU usage) should be significantly less than ‘CPU used by this session’. Here we can see that (parsing CPU usage + recursive CPU usage) is almost equal to ‘CPU used by this session’, so we need to do some tuning to reduce the recursive CPU usage.

To determine the percentage of CPU usage for parsing:

select (a.value / b.value)*100 "% CPU for parsing"
from V$SYSSTAT a, V$SYSSTAT b
where a.name = 'parse time cpu'
and b.name = 'CPU used by this session';

% CPU for parsing
-----------------
5.48300146

Reducing Parse Time CPU Usage

If parse time CPU is the major part of total CPU usage, you need to reduce this by performing the following steps:

  1. Use bind variables and remove hard-coded literal values from code, as explained in the
    “Optimizing the Library Cache” section earlier in this chapter.
  2. Make sure you aren’t allocating too much memory for the shared pool. Remember that even if you have an exact copy of a new SQL statement in your library cache, Oracle has to find it by scanning all the statements in the cache. If you have a zillion relatively useless statements sitting in the cache, all they’re doing is slowing down the instance by increasing the parse time.
  3. Make sure you don’t have latch contention on the library cache, which could result in increased parse time CPU usage. We will see latch contentions and how to reduce it – later.

To determine the percentage of recursive CPU usage:

select (a.value / b.value)*100 "% recursive cpu usage"
from V$SYSSTAT a, V$SYSSTAT b
where a.name = 'recursive cpu usage'
and b.name = 'CPU used by this session';

Recursive CPU usage
---------------------
71.421139

This is really a bad number for recursive CPU usage. Sometimes, to execute a SQL statement issued by a user, the Oracle Server must issue additional statements. Such statements are called recursive calls or recursive SQL statements. For example, if you insert a row into a table that does not have enough space to hold that row, the Oracle Server makes recursive calls to allocate the space dynamically if dictionary managed tablespaces are being used.

Recursive calls are also generated due to the unavailability of dictionary info in the dictionary cache, the firing of database triggers, the execution of DDL, the execution of SQL within PL/SQL blocks, functions, or stored procedures, and the enforcement of referential integrity constraints.

If the recursive CPU usage percentage is a large proportion of total CPU usage, you may want to make sure the shared pool memory allocation is adequate. However, a PL/SQL-based application will always have a significant amount of recursive CPU usage.

To reduce recursive CPU usage, make sure that the shared pool is sized correctly. You can also check parameters like

CURSOR_SHARING – values can be EXACT, SIMILAR, FORCE
SESSION_CACHED_CURSORS – value can be anything numeric like 500.
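
A sketch of adjusting these parameters; the values shown are illustrative assumptions, not recommendations. Note that SESSION_CACHED_CURSORS cannot be changed in memory at the system level in 10g, hence SCOPE=SPFILE:

ALTER SYSTEM SET cursor_sharing = 'SIMILAR' SCOPE=BOTH;
ALTER SYSTEM SET session_cached_cursors = 500 SCOPE=SPFILE;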

Tuning PGA Memory – Oracle Database 10g

Correct size of PGA

You can get the correct size of the PGA using the V$PGA_TARGET_ADVICE dynamic performance view.

SQL> SELECT ROUND(pga_target_for_estimate/1024/1024) target_mb,
2 estd_pga_cache_hit_percentage cache_hit_perc,
3 estd_overalloc_count
4 FROM V$PGA_TARGET_ADVICE;

 TARGET_MB CACHE_HIT_PERC ESTD_OVERALLOC_COUNT
---------- -------------- --------------------
       512             98                  909
      1024            100                    0
      2048            100                    0
      3072            100                    0
      4096            100                    0
      4915            100                    0
      5734            100                    0
      6554            100                    0
      7373            100                    0
      8192            100                    0
     12288            100                    0

 TARGET_MB CACHE_HIT_PERC ESTD_OVERALLOC_COUNT
---------- -------------- --------------------
     16384            100                    0
     24576            100                    0
     32768            100                    0

14 rows selected.
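
In this output, the smallest target with a 100 percent estimated cache hit ratio and zero over-allocations is 1024 MB, so a sketch of applying it (PGA_AGGREGATE_TARGET is dynamically modifiable):

SQL> ALTER SYSTEM SET pga_aggregate_target = 1024M SCOPE=BOTH;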

Checking PGA for each session

You can check session-level PGA using the V$SESSTAT and V$SESSION views, and also see the username that is using that memory.

SELECT
s.value,s.sid,a.username
FROM
V$SESSTAT S, V$STATNAME N, V$SESSION A
WHERE
n.STATISTIC# = s.STATISTIC# and
name = 'session pga memory'
AND s.sid=a.sid
ORDER BY s.value;

VALUE SID USERNAME
---------- ---------- ------------------------------
487276 1070 APPS
552812 1068 SYS
552812 1088
618348 1009 APPS_READ_ONLY
683884 1091
749420 846 MOBILEADMIN
749420 1090
749420 1051 APPLSYSPUB
749420 1000 APPLSYSPUB
749420 929 APPLSYSPUB
790412 1093

To check the total PGA in use and hit ratio for PGA

SQL> SELECT * FROM V$PGASTAT;

NAME VALUE UNIT
-------------------------------------------------- ---------- ------------
aggregate PGA target parameter 4294967296 bytes
aggregate PGA auto target 3674290176 bytes
global memory bound 201252864 bytes
total PGA inuse 218925056 bytes
total PGA allocated 433349632 bytes
maximum PGA allocated 1526665216 bytes
total freeable PGA memory 86835200 bytes
process count 113
max processes count 250
PGA memory freed back to OS 8.3910E+10 bytes
total PGA used for auto workareas 6505472 bytes

NAME VALUE UNIT
-------------------------------------------------- ---------- ------------
maximum PGA used for auto workareas 70296576 bytes
total PGA used for manual workareas 0 bytes
maximum PGA used for manual workareas 4292608 bytes
over allocation count 0
bytes processed 2.1553E+11 bytes
extra bytes read/written 10403840 bytes
cache hit percentage 99.99 percent
recompute count (total) 205474

19 rows selected.

The ideal way to perform sorts is by doing the entire job in memory. A sort job that Oracle performs entirely in memory is said to be an optimal sort. If you set PGA_AGGREGATE_TARGET too low, some of the sort data is written out directly to disk (temporary tablespace) because the sorts are too large to fit in memory. If only part of a sort job spills over to disk, it’s called a one-pass sort. If the instance performs most of the sort on disk instead of in memory, the response time will be high; this is called a multi-pass sort.

Another method of checking the efficiency of PGA memory is to check V$SQL_WORKAREA_HISTOGRAM.

V$SQL_WORKAREA_HISTOGRAM displays the cumulative work area execution statistics (cumulated since instance startup) for different work area groups. The work areas are split into 33 groups based on their optimal memory requirements with the requirements increasing in powers of two. That is, work areas whose optimal requirement varies from 0 KB to 1 KB, 1 KB to 2 KB, 2 KB to 4 KB, … and 2 TB to 4 TB.

For each work area group, the V$SQL_WORKAREA_HISTOGRAM view shows how many work areas in that group were able to run in optimal mode, how many were able to run in one-pass mode, and finally how many ran in multi-pass mode. The DBA can take a snapshot at the beginning and the end of a desired time interval to derive the same statistics for that interval.

SELECT
low_optimal_size/1024 "Low (K)",
(high_optimal_size + 1)/1024 "High (K)",
optimal_executions "Optimal",
onepass_executions "1-Pass",
multipasses_executions ">1 Pass"
FROM v$sql_workarea_histogram
WHERE total_executions <> 0;

Low (K) High (K) Optimal 1-Pass >1 Pass
---------- ---------- ---------- ---------- ----------
2 4 6254487 0 0
64 128 110568 0 0
128 256 20041 0 0
256 512 86399 0 0
512 1024 145082 0 0
1024 2048 31207 0 0
2048 4096 104 0 0
4096 8192 79 2 0
8192 16384 116 0 0
16384 32768 30 0 0
32768 65536 4 0 0

Low (K) High (K) Optimal 1-Pass >1 Pass
---------- ---------- ---------- ---------- ----------
65536 131072 2 0 0

12 rows selected.

You can check the proportion of work areas since you started the Oracle instance, using optimal, 1-pass, and multipass PGA memory sizes.

SELECT name PROFILE, cnt COUNT,
DECODE(total, 0, 0, ROUND(cnt*100/total)) PERCENTAGE
FROM (SELECT name, value cnt, (sum(value) over ()) total
FROM V$SYSSTAT
WHERE name like 'workarea exec%');


PROFILE COUNT PERCENTAGE
-------------------------------------------------- ---------- ----------
workarea executions – optimal 6650608 100
workarea executions – onepass 2 0
workarea executions – multipass 0 0

Since almost all the sort and temporary operations are carried out in the optimal category, we can conclude that our PGA is sized correctly.

Tuning Buffer cache – Oracle Database 10g

For tuning the buffer cache, we need to understand the following points very closely.

Physical reads: These are the data blocks that Oracle reads from disk. Reading data from disk is much more expensive than reading data that’s already in Oracle’s memory. When you issue a query, Oracle always first tries to retrieve the data from memory—the database buffer cache—and not disk.

DB block gets: This is a read of the buffer cache, to retrieve a block in current mode. This most often happens during data modification when Oracle has to be sure that it’s updating the most recent version of the block. So, when Oracle finds the required data in the database buffer cache, it checks whether the data in the blocks is up to date. If a user changes the data in the buffer cache but hasn’t committed those changes yet, new requests for the same data can’t show these interim changes. If the data in the buffer blocks is up to date, each such data block retrieved is counted as a DB block get.
Consistent gets: This is a read of the buffer cache, to retrieve a block in consistent mode. This may include a read of undo segments to maintain the read consistency principle. If Oracle finds that another session has updated the data in that block since the read began, then it will apply the new information from the undo segments.

Logical reads: Every time Oracle is able to satisfy a request for data by reading it from the database buffer cache, you get a logical read. Thus logical reads include both DB block gets and consistent gets.

Buffer gets: This term refers to the number of database cache buffers retrieved. This value is the same as the logical reads described earlier.

So basically the buffer cache hit ratio is about the rate at which you find the data you need in the memory cache, rather than having to access the disk.

So, in short, from the above definitions:

Buffer cache hit ratio = 1 - (physical reads / logical reads)

Here logical reads means reads from memory.

SQL> SELECT name, value FROM v$sysstat
2  where name IN ('physical reads cache',
3  'consistent gets from cache',
4  'db block gets from cache');


NAME                                                                  VALUE
---------------------------------------------------------------- ----------
db block gets from cache                                          164998308
consistent gets from cache                                       2136448944
physical reads cache                                                2787422

Here physical reads is the ‘physical reads cache’ statistic stored in v$sysstat, and logical reads is the sum of ‘consistent gets from cache’ and ‘db block gets from cache’.

so our buffer cache hit ratio will be 1 - (2787422 / (2136448944 + 164998308)) = 0.9987

Another way to calculate the same buffer cache hit ratio is this query on v$sysstat:

SQL> SELECT 1 - ((p.value - l.value - d.value) / s.value)
AS "Buffer Cache Hit Ratio"
FROM v$sysstat s, v$sysstat l, v$sysstat d, v$sysstat p
WHERE s.name = 'session logical reads'
AND d.name = 'physical reads direct'
AND l.name = 'physical reads direct (lob)'
AND p.name = 'physical reads';

The above query will give almost the same result, because:

physical reads cache = physical reads - (physical reads direct (lob) + physical reads direct)
session logical reads = consistent gets from cache + db block gets from cache

LRU Algorithm or Touch Count Algorithm?

Since version 8.1, Oracle has used a concept called touch count to measure how many times an object is accessed in the buffer cache. This algorithm of using touch counts for managing the buffer cache is somewhat different from the traditional modified LRU algorithm that Oracle used to employ for managing the cache. Each time a buffer is accessed, the touch count is incremented.

A low touch count means that the block isn’t being reused frequently and is therefore wasting database buffer cache space. If you have large objects that have a low touch count but occupy a significant proportion of the buffer cache, you can consider them ideal candidates for the recycle pool.

TCH (touch count) is a column in the x$bh fixed table, which keeps track of the touch count of each buffer.

The following query gives the objects that are consuming a reasonable amount of memory and are candidates for getting flushed out of the buffer cache.

SELECT
obj object,
count(1) buffers,
(count(1)/totsize) * 100 percent_cache
FROM x$bh,
(select value totsize
FROM v$parameter
WHERE name = 'db_block_buffers')
WHERE tch = 1
OR (tch = 0 and lru_flag < 10)
GROUP BY obj, totsize
HAVING (count(1)/totsize) * 100 > 5;

Remember that this explanation is just for understanding; we don’t have to do anything in the buffer cache to get these objects flushed out. The removal of these objects is handled automatically by the database engine.

From this query you get the object id, and from that you can find the object name (using dba_objects), as sketched below.
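
A sketch of the lookup; x$bh.obj corresponds to the data object id, and &obj is a substitution variable standing in for a value returned by the query above:

SELECT object_name, object_type
FROM dba_objects
WHERE data_object_id = &obj;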

Using Multiple Pools for the Buffer Cache

As you already know, we can have multiple pools for the buffer cache, so I won’t describe that again here, or we will lose the focus of the current discussion.
We have basically 3 types of buffer pools.

  1. KEEP
  2. RECYCLE
  3. DEFAULT

The default pool will always be there. Depending on the situation, we can also create keep and recycle pools. If there are objects that are accessed frequently, we will want to keep them in the database cache; for such objects we can use the keep buffer pool. Objects that are big and not accessed frequently can be put in the recycle pool. By default, if we don’t specify a buffer pool, an object always goes to the default pool.

V$BUFFER_POOL_STATISTICS will give statistics for all pools.

Determining Candidates for the Recycle Buffer Pool

Candidates that have a low TCH value, as given by the query above, are the best ones to put in the recycle pool.
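
For example, a sketch of moving such a segment to the recycle pool (big_log_table is a hypothetical table name):

ALTER TABLE big_log_table STORAGE (buffer_pool recycle);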

Determining Candidates for the Keep Buffer Cache

SELECT obj object,
count(1) buffers,
AVG(tch) average_touch_count
FROM x$bh
WHERE lru_flag = 8
GROUP BY obj
HAVING avg(tch) > 5
AND count(1) > 25;

The above query gives the candidates that have an average TCH value of more than 5 and occupy more than 25 buffers in the buffer cache. Such objects can be placed in the KEEP buffer pool. You can place an object in a particular pool using the ALTER TABLE command.

ALTER TABLE test1 STORAGE (buffer_pool keep);

Sizing buffer cache

For sizing the buffer cache, you can check the V$DB_CACHE_ADVICE view. This view provides information about various buffer cache sizes and the estimated physical reads for each size. Based on this result you should be able to decide the correct size of the database buffer cache.

SQL> select NAME, SIZE_FOR_ESTIMATE, SIZE_FACTOR, ESTD_PHYSICAL_READS, ESTD_PCT_OF_DB_TIME_FOR_READS
2  from v$db_cache_advice;

NAME       SIZE_FOR_ESTIMATE SIZE_FACTOR ESTD_PHYSICAL_READS ESTD_PCT_OF_DB_TIME_FOR_READS
---------  ----------------- ----------- ------------------- -----------------------------
DEFAULT       112             .0946      6871847             2.4
DEFAULT       224             .1892      5435019             1.8
DEFAULT       336             .2838      4600629             1.5
DEFAULT       448             .3784      4125422             1.3
DEFAULT       560              .473      3831101             1.1
DEFAULT       672             .5676      3598589             1
DEFAULT       784             .6622      3381913             .9
DEFAULT       896             .7568      3154726             .8
DEFAULT      1008             .8514      2957398             .8
DEFAULT      1120             .9459      2841502             .7
DEFAULT      1184                 1      2791921             .7
DEFAULT      1232            1.0405      2757728             .7
DEFAULT      1344            1.1351      2689955             .6
DEFAULT      1456            1.2297      2653143             .6
DEFAULT      1568            1.3243      2631218             .6
DEFAULT      1680            1.4189      2608447             .6
DEFAULT      1792            1.5135      2588899             .6
DEFAULT      1904            1.6081      2573463             .6
DEFAULT      2016            1.7027      2561587             .6
DEFAULT      2128            1.7973      2549937             .6
DEFAULT      2240            1.8919      2535972             .6

21 rows selected.     

Based on ESTD_PCT_OF_DB_TIME_FOR_READS and ESTD_PHYSICAL_READS, you can decide the correct size for the db buffer cache.
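
For instance, the estimated physical reads above flatten out beyond roughly 2 GB, so a sketch of applying a new size (the value is an illustrative choice, not a recommendation):

SQL> ALTER SYSTEM SET db_cache_size = 2048M SCOPE=BOTH;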

Setting up Oracle DataGuard for 10g

Introduction

Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise data. Data Guard provides a comprehensive set of services that create, maintain, manage, and monitor one or more standby databases to enable production Oracle databases to survive disasters and data corruptions. Data Guard maintains these standby databases as transactionally consistent copies of the production database. Then, if the production database becomes unavailable because of a planned or an unplanned outage, Data Guard can switch any standby database to the production role, minimizing the downtime associated with the outage. Data Guard can be used with traditional backup, restoration, and cluster techniques to provide a high level of data protection and data availability.

Data Guard Configurations:

A Data Guard configuration consists of one production database and one or more standby databases. The databases in a Data Guard configuration are connected by Oracle Net and may be dispersed geographically. There are no restrictions on where the databases are located, provided they can communicate with each other. For example, you can have a standby database on the same system as the production database, along with two standby databases on other systems at remote locations.

You can manage primary and standby databases using the SQL command-line interfaces or the Data Guard broker interfaces, including a command-line interface (DGMGRL) and a graphical user interface that is integrated in Oracle Enterprise Manager.

Primary Database

A Data Guard configuration contains one production database, also referred to as the primary database, that functions in the primary role. This is the database that is accessed by most of your applications.

The primary database can be either a single-instance Oracle database or an Oracle Real Application Clusters database.

Standby Database

A standby database is a transactionally consistent copy of the primary database. Using a backup copy of the primary database, you can create up to nine standby databases and incorporate them in a Data Guard configuration. Once created, Data Guard automatically maintains each standby database by transmitting redo data from the primary database and then applying the redo to the standby database.

Similar to a primary database, a standby database can be either a single-instance Oracle database or an Oracle Real Application Clusters database.

A standby database can be either a physical standby database or a logical standby database:

Physical standby database

Provides a physically identical copy of the primary database, with on disk database structures that are identical to the primary database on a block-for-block basis. The database schema, including indexes, are the same. A physical standby database is kept synchronized with the primary database by recovering the redo data received from the primary database.

Logical standby database

Contains the same logical information as the production database, although the physical organization and structure of the data can be different. The logical standby database is kept synchronized with the primary database by transforming the data in the redo received from the primary database into SQL statements and then executing the SQL statements on the standby database. A logical standby database can be used for other business purposes in addition to disaster recovery requirements. This allows users to access a logical standby database for queries and reporting purposes at any time. Also, using a logical standby database, you can upgrade Oracle Database software and patch sets with almost no downtime. Thus, a logical standby database can be used concurrently for data protection, reporting, and database upgrades.

Figure 1-1 shows a typical Data Guard configuration that contains a primary database instance that transmits redo data to a physical standby database. The physical standby database is remotely located from the primary database instance for disaster recovery and backup operations. You can configure the standby database at the same location as the primary database. However, for disaster recovery purposes, Oracle recommends you configure standby databases at remote locations.

Figure 1-1 shows a typical Data Guard configuration in which archived redo log files are being applied to a physical standby database.

Data Guard Services

The following sections explain how Data Guard manages the transmission of redo data, the application of redo data, and changes to the database roles:

Log Transport Services

Control the automated transfer of redo data from the production database to one or more archival destinations.

Log Apply Services

Apply redo data on the standby database to maintain transactional synchronization with the primary database. Redo data can be applied either from archived redo log files, or, if real-time apply is enabled, directly from the standby redo log files as they are being filled, without requiring the redo data to be archived first at the standby database.

Role Management Services

Change the role of a database from a standby database to a primary database, or from a primary database to a standby database using either a switchover or a failover operation.

A database can operate in one of the two mutually exclusive roles: primary or standby database.

  • Failover

During a failover, one of the standby databases takes the primary database role.

  • Switchover

The primary and standby databases can continue to alternate roles: the primary database can switch to the standby role, and one of the standby databases can switch roles to become the primary.

The main difference between physical and logical standby databases is the manner in which log apply services apply the archived redo data:

For physical standby databases, Data Guard uses Redo Apply technology, which applies redo data on the standby database using standard recovery techniques of an Oracle database, as shown in Figure 1-2.

Figure 1-2 Automatic Updating of a Physical Standby Database

For logical standby databases, Data Guard uses SQL Apply technology, which first transforms the received redo data into SQL statements and then executes the generated SQL statements on the logical standby database, as shown in Figure 1-3.

Figure 1-3 Automatic Updating of a Logical Standby Database

Data Guard Interfaces

Oracle provides three ways to manage a Data Guard environment:

1. SQL*Plus and SQL Statements

You can use SQL*Plus and SQL commands to manage the Data Guard environment. The following SQL statement initiates a switchover operation:

SQL> alter database commit to switchover to physical standby;

2. Data Guard Broker GUI Interface (Data Guard Manager)

Data Guard Manager is a GUI version of the Data Guard broker interface that allows you to automate many of the tasks involved in configuring and monitoring a Data Guard environment.

3. Data Guard Broker Command-Line Interface (CLI)

It is an alternative to the Data Guard Manager, useful if you want to use the broker from batch programs or scripts. You can perform most of the activities required to manage and monitor the Data Guard environment using the CLI.

The Oracle Data Guard broker is a distributed management framework that automates and centralizes the creation, maintenance, and monitoring of Data Guard configurations. The following are some of the operations that the broker automates and simplifies:

  • Automated creation of Data Guard configurations incorporating a primary database, a new or existing (physical or logical) standby database, log transport services, and log apply services, where any of the databases could be Real Application Clusters (RAC) databases.
  • Adding up to 8 additional new or existing (physical or logical, RAC, or non-RAC) standby databases to each existing Data Guard configuration, for a total of one primary database, and from 1 to 9 standby databases in the same configuration.
  • Managing an entire Data Guard configuration, including all databases, log transport services, and log apply services, through a client connection to any database in the configuration.
  • Invoking switchover or failover with a single command to initiate and control complex role changes across all databases in the configuration.
  • Monitoring the status of the entire configuration, capturing diagnostic information, reporting statistics such as the log apply rate and the redo generation rate, and detecting problems quickly with centralized monitoring, testing, and performance tools.

You can perform all management operations locally or remotely through the broker’s easy-to-use interfaces: the Data Guard web pages of Oracle Enterprise Manager, which is the broker’s graphical user interface (GUI), and the Data Guard command-line interface (CLI) called DGMGRL.
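
As a small sketch, once a broker configuration exists you can inspect it from the CLI (the connect string here is illustrative):

DGMGRL> connect sys/oracle
DGMGRL> show configuration;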

Configuring Oracle DataGuard using SQL commands

Creating a physical standby database

Step 1) Getting the primary database ready (on Primary host)

We are assuming that you are using an SPFILE for your current (primary) instance. You can check whether your instance is using an SPFILE with the spfile parameter.

SQL> show parameter spfile

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string

You can create an spfile as shown below (on the primary host).

SQL> create spfile from pfile;

File created.

The primary database must meet two conditions before a standby database can be created from it:

  1. It must be in force logging mode, and
  2. It must be in archive log mode (also, automatic archiving must be enabled and a local archiving destination must be defined).

Before putting the database in force logging mode, check whether the database is already in force logging mode using:

SQL> select force_logging from v$database;

FOR

NO

Also the database is not in archive log mode as shown below.

SQL> archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination /dy/oracle/product/db10g/dbs/arch
Oldest online log sequence 10
Current log sequence 12

Now we will start the database in archive log and force logging mode.

Starting the database in archive log mode:

We need to set the following two parameters to put the database in archive log mode.

Log_archive_dest_1='Location=/u00/oracle/product/10.2.0/archive/orcl'
log_archive_format = "ARCH_%r_%t_%s.ARC"

If a database is in force logging mode, all changes, except those in temporary tablespaces, will be logged, independent of any NOLOGGING specification.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size 1984184 bytes
Variable Size 750786888 bytes
Database Buffers 314572800 bytes
Redo Buffers 6397952 bytes
Database mounted.

SQL> alter database archivelog;

Database altered.

SQL> alter database force logging;

Database altered.

SQL> alter database open;

Database altered.

So now our primary database is in archive log mode and in force logging mode.

SQL> select log_mode, force_logging from v$database;

LOG_MODE FOR
———— —
ARCHIVELOG YES

init.ora file for primary

control_files = /u00/oracle/product/10.2.0/oradata_orcl/orclcontrol.ctl
db_name = orcl

db_domain = us.oracle.com

db_block_size = 8192
pga_aggregate_target = 250M

processes = 300
sessions = 300
open_cursors = 1024

undo_management = AUTO

undo_tablespace = undotbs
compatible = 10.2.0

sga_target = 600M

nls_language = AMERICAN
nls_territory = AMERICA
background_dump_dest=/u00/oracle/product/10.2.0/db_1/admin/orcl/bdump
user_dump_dest=/u00/oracle/product/10.2.0/db_1/admin/orcl/udump
core_dump_dest=/u00/oracle/product/10.2.0/db_1/admin/orcl/cdump
db_unique_name='PRIMARY'
Log_archive_dest_1='Location=/u00/oracle/product/10.2.0/archive/orcl'
Log_archive_dest_state_1=ENABLE

Step 2) Creating the standby database

Since we are creating a physical standby database, we have to copy all the datafiles of the primary database to the standby location. For that, you need to shut down the primary database, copy its files to the new location, and start the primary database again.

Step 3) Creating a standby database control file

A control file needs to be created for the standby system. Execute the following on the primary system:

SQL> alter database create standby controlfile as '/dy/oracle/product/db10g/dbf/standby.ctl';

Database altered.

Step 4) Creating an init file

SQL> show parameters spfile

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string /dy/oracle/product/db10g/dbs/s
pfiletest.ora

Step 5) Changing init.ora file for standby database

A pfile is created from the spfile. This pfile needs to be modified and then used on the standby system to create an spfile from it. So, create a pfile from the spfile on the primary database.

create pfile='/some/path/to/a/file' from spfile

SQL> create pfile='/dy/oracle/product/db10g/dbs/standby.ora' from spfile;

File created.

The following parameters must be modified or added:

  • control_files
  • standby_archive_dest
  • db_file_name_convert (only if directory structure is different on primary and standby server)
  • log_file_name_convert (only if directory structure is different on primary and standby server)
  • log_archive_format
  • log_archive_dest_1 — This value is used if a standby becomes primary during a switchover or a failover.
  • standby_file_management — Set to auto

init.ora parameters for standby

control_files = /u00/oracle/product/10.2.0/oradata/orcl/standby_orcl.ctl
db_name = orcl
db_domain = us.oracle.com
db_block_size = 8192
pga_aggregate_target = 250M
processes = 300
sessions = 300
open_cursors = 1024
undo_management = AUTO

undo_tablespace = undotbs
compatible = 10.2.0
sga_target = 600M
nls_language = AMERICAN
nls_territory = AMERICA
background_dump_dest=/u00/oracle/product/10.2.0/db_1/admin/orcl/bdump
user_dump_dest=/u00/oracle/product/10.2.0/db_1/admin/orcl/udump
core_dump_dest=/u00/oracle/product/10.2.0/db_1/admin/orcl/cdump
db_unique_name='STANDBY'
Log_archive_dest_1='Location=/u00/oracle/product/10.2.0/archive/orcl'
Log_archive_dest_state_1=ENABLE
standby_archive_dest=/u00/oracle/product/10.2.0/prim_archive

db_file_name_convert='/u00/oracle/product/10.2.0/oradata',\
'/u00/oracle/product/10.2.0/oradata/orcl'
log_file_name_convert='/u00/oracle/product/10.2.0/oradata',\
'/u00/oracle/product/10.2.0/oradata/orcl'

standby_file_management=auto

FAL_Client='to_standby'

Step 6) Creating the spfile on the standby database
set ORACLE_SID=orcl
sqlplus "/ as sysdba"

create spfile from pfile='/…/../modified-pfile';

Step 7) On the standby database
SQL> startup nomount pfile=standby.ora
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size 1984184 bytes
Variable Size 754981192 bytes
Database Buffers 314572800 bytes
Redo Buffers 2203648 bytes

SQL> alter database mount standby database;

Database altered.

Add the following parameters on the standby side

FAL_Client='to_standby'
FAL_Server='to_primary'
Log_archive_dest_2='Service=to_primary VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=primary'
Log_archive_dest_state_2=ENABLE
remote_login_passwordfile='SHARED'

Add the following parameters on the primary side

Log_archive_dest_2='Service=to_standby lgwr'
Log_archive_dest_state_2=ENABLE
Standby_File_Management='AUTO'
REMOTE_LOGIN_PASSWORDFILE='SHARED'

Create a password file on both sides

orapwd file=/u00/oracle/product/10.2.0/db_1/dbs/orapworcl.ora password=oracle entries=5 force=y

FTP the password file to the standby location.

Step 8) Configuring the listener

Creating net service names

Net service names must be created on both the primary and the standby database; these will be used by log transport services. That is, something like the following lines must be added to the tnsnames.ora.

Setup listener configuration

On Primary:

ORCL=
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ocsun216.us.oracle.com)(PORT = 1530))
)
)
)

SID_LIST_ORCL=
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /u00/oracle/product/10.2.0/db_1)
(PROGRAM = extproc)
)
(SID_DESC =
(GLOBAL_DBNAME = orcl)
(ORACLE_HOME = /u00/oracle/product/10.2.0/db_1)
(SID_NAME = orcl)
)
)

On Standby:

SID_LIST_orcl=
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /u00/oracle/product/10.2.0/db_1)
(PROGRAM = extproc)
)
(SID_DESC =
(GLOBAL_DBNAME = orcl)
(ORACLE_HOME = /u00/oracle/product/10.2.0/db_1)
(SID_NAME = orcl)
)
)

orcl =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
(ADDRESS = (PROTOCOL = TCP)(HOST = ocsun215)(PORT = 1538))
)
)

TNSNAMES settings

On Primary:

orcl =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ocsun216.us.oracle.com)(PORT = 1530))
)
(CONNECT_DATA =
(SERVICE_NAME = orcl)
)
)

TO_STANDBY =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ocsun215.us.oracle.com)(PORT = 1538))
)
(CONNECT_DATA =
(SERVICE_NAME = orcl)
)
)

On standby:

TO_PRIMARY =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ocsun216.us.oracle.com)(PORT = 1530))
)
(CONNECT_DATA =
(SERVICE_NAME = orcl)
)
)

orcl =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = ocsun215.us.oracle.com)(PORT = 1538))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = orcl)
)
)
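
Before testing SQL*Plus connections, it is worth verifying that the new net service names resolve from each host, using the standard tnsping utility:

From the primary host: tnsping to_standby
From the standby host: tnsping to_primary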

On the standby side, run the following commands:

SQL> startup nomount pfile=standby_orcl.ora
ORACLE instance started.

Total System Global Area 629145600 bytes
Fixed Size 1980744 bytes
Variable Size 171968184 bytes
Database Buffers 452984832 bytes
Redo Buffers 2211840 bytes

SQL> alter database mount standby database;

Database altered.

Try to connect to the standby database from the primary database.

The following connections should work now.

From the primary host:

sqlplus sys/oracle@orcl as sysdba        -- connects to the primary database
sqlplus sys/oracle@to_standby as sysdba  -- connects to the standby database from the primary host

From the standby host:

sqlplus sys/oracle@orcl as sysdba        -- connects to the standby database
sqlplus sys/oracle@to_primary as sysdba  -- connects to the primary database from the standby host

LOG SHIPPING

On the PRIMARY site, enable log_archive_dest_state_2 to start shipping archived redo logs.

SQL> alter system set log_archive_dest_state_2=ENABLE scope=both;

System altered.

Check the sequence number and the archiving mode by executing the following command.

SQL> archive log list

Then switch the logfile on the primary side.

SQL> alter system switch logfile;

System altered.

Start the managed recovery (physical apply) service on the standby side.

SQL> alter database recover managed standby database disconnect;

Database altered.

Your session remains available to you because the MRP (Managed Recovery Process) runs as a background process and applies the redo logs.
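
You can verify that the MRP has started by querying V$MANAGED_STANDBY on the standby side; the PROCESS column shows MRP0 while managed recovery is running:

SQL> select process, status, sequence# from v$managed_standby;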

You can check whether a log has been applied or not by querying V$ARCHIVED_LOG.

SQL> select name, applied, archived from v$archived_log;

This query returns the names of the archived files and whether each has been archived and applied.

Once you complete the above steps, you are done with the physical standby. We can verify which log files were applied using the following commands.

On Standby side

SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

 SEQUENCE# FIRST_TIM NEXT_TIME
---------- --------- ---------
         1 11-JUN-07 12-JUN-07
         2 12-JUN-07 12-JUN-07
         3 12-JUN-07 12-JUN-07
         4 12-JUN-07 12-JUN-07
         5 12-JUN-07 12-JUN-07
         6 12-JUN-07 12-JUN-07
         7 12-JUN-07 12-JUN-07
         8 12-JUN-07 12-JUN-07
         9 12-JUN-07 12-JUN-07

9 rows selected.

On Primary side

SQL> select status, error from v$archive_dest where dest_id=2;

STATUS    ERROR
--------- -----------------------------------------------------------------
VALID

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

System altered.

On Standby side

SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

 SEQUENCE# FIRST_TIM NEXT_TIME
---------- --------- ---------
         1 11-JUN-07 12-JUN-07
         2 12-JUN-07 12-JUN-07
         3 12-JUN-07 12-JUN-07
         4 12-JUN-07 12-JUN-07
         5 12-JUN-07 12-JUN-07
         6 12-JUN-07 12-JUN-07
         7 12-JUN-07 12-JUN-07
         8 12-JUN-07 12-JUN-07
         9 12-JUN-07 12-JUN-07
        10 12-JUN-07 12-JUN-07

10 rows selected.

As you can see, after switching the archive log on the primary side, an archived log file was shipped to and applied on the standby database.

Again on primary

SQL> alter system switch logfile;

System altered.

SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

 SEQUENCE# FIRST_TIM NEXT_TIME
---------- --------- ---------
         1 11-JUN-07 12-JUN-07
         2 12-JUN-07 12-JUN-07
         3 12-JUN-07 12-JUN-07
         4 12-JUN-07 12-JUN-07
         5 12-JUN-07 12-JUN-07
         6 12-JUN-07 12-JUN-07
         7 12-JUN-07 12-JUN-07
         8 12-JUN-07 12-JUN-07
         9 12-JUN-07 12-JUN-07
        10 12-JUN-07 12-JUN-07
        11 12-JUN-07 12-JUN-07

11 rows selected.

After switching the log file on the primary, that log file got applied to the standby database. Note that, unless you explicitly change it, the configuration runs in the default protection mode, maximum performance. I will explain the different protection modes and how to switch between them in my next post. Keep watching !!!

Tuning Shared pool – Oracle Database 10g

Shared Pool

The most important memory resource to consider on a production system is the shared pool. The shared pool is a part of the SGA that holds almost all the elements necessary for execution of SQL statements and PL/SQL programs.
In addition to caching program code, the shared pool caches the data dictionary information that Oracle needs to refer to often during the course of program execution.
Proper shared pool configuration leads to dramatic improvements in performance. An improperly tuned shared pool leads to problems such as the following:

  • Fragmentation of the pool
  • Increased latch contention with the resulting demand for more CPU resources
  • Greater I/O because executable forms of SQL aren’t present in the shared pool
  • Higher CPU usage because of unnecessary parsing of SQL code

The general increase in shared pool waits and other waits observed during a severe slowdown of the production database is the result of SQL code that fails to use bind variables.

As the number of users increases, so does the demand on shared pool memory and latches, which are internal locks for memory areas. If there is excessive latch contention, the result might be a higher wait time and a slower response time. Sometimes the entire database seems to hang. The shared pool consists of two major areas: the library cache and the data dictionary cache. You can't allocate or decrease memory specifically for one of these components. If you increase the total shared pool memory size, both components will increase in some ratio that Oracle determines. Similarly, when you decrease the total shared pool memory, both components will decrease in size.

Let’s look at these two important components of the shared pool in detail.

The Library Cache

The library cache holds the parsed and executable versions of SQL and PL/SQL code. All SQL statements undergo the following steps during their processing:

Parsing, which includes syntactic and semantic verification of SQL statements and checking of object privileges to perform the actions.

Optimization, where the Oracle optimizer evaluates how to process the statement with the
least cost, after it evaluates several alternatives.

Execution, where Oracle uses the optimized physical execution plan to perform the action
stated in the SQL statement.

Fetching, which only applies to SELECT statements where Oracle has to return rows to you. This step isn't necessary in any nonquery-type statements.

Parsing is a resource-intensive operation, and if your application needs to execute the same SQL statement repeatedly, having a parsed version in memory will reduce contention for latches, CPU, I/O, and memory usage. The first time Oracle parses a statement, it creates a parse tree. The optimization step is necessary only for the first execution of a SQL statement. Once the statement is optimized, the best access path is encapsulated in the access plan. Both the parse tree and the access plan are stored in the library cache before the statement is executed for the first time. Future invocations of the same statement will need to go through only the last stage, execution, which avoids the overhead of parsing and optimizing as long as Oracle can find the parse tree and access plan in the library cache. Of course, if the statement is a SQL query, the last step will be the fetch operation.

The library cache, being limited in size, discards old SQL statements when there’s no more
room for new SQL statements. The only way you can use a parsed statement repeatedly for multiple executions is if a SQL statement is identical to the parsed statement. Two SQL statements are identical if they use exactly the same code, including case and spaces. The reason for this is that when Oracle compares a new statement to existing statements in the library cache, it uses simple string comparisons. In addition, any bind variables used must be similar in data type and size. Here are a couple of examples that show you how picky Oracle is when it comes to considering whether two SQL statements are identical.

In the following example, the statements aren't considered identical because of an extra space in the second statement:

SELECT * FROM employees;

SELECT *  FROM employees;

In the next example, the statements aren't considered identical because of the different case used for the table name Employees in the second statement. As far as Oracle's string comparison is concerned, the two versions of the table name are literally different:

SELECT * FROM employees;

SELECT * FROM Employees;

Let’s say users in the database issue the following three SQL statements:

SELECT * FROM persons WHERE person_id = 10

SELECT * FROM persons WHERE person_id = 999

SELECT * FROM persons WHERE person_id = 6666

Oracle uses a different execution plan for the preceding three statements, even though they seem to be identical in every respect, except for the value of person_id. Each of these statements has to be parsed and executed separately, as if they were entirely different. Because all three are essentially the same, this is inherently inefficient. As you can imagine, if hundreds of thousands of such statements are issued during the day, you’re wasting database resources and the query performance will be slow. Bind variables allow you to reuse SQL statements by making them “identical,” and thus eligible to share the same execution plan.

In our example, you can use a bind variable, which I'll call :var, to help Oracle view the three statements as identical, thus requiring a single execution instead of multiple ones. The person_id values 10, 999, and 6666 are "bound" to the bind variable, :var. Your replacement SQL statement using a bind variable, then, would be this:

SELECT * FROM persons WHERE person_id = :var

Using bind variables can dramatically improve query performance; you can even "force" Oracle to use bind variables, even if an application doesn't use them, as you'll see below.
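
You can observe this effect directly in V$SQL, which keeps one cursor per distinct SQL text (a quick sketch, assuming the three persons statements above were actually executed; each literal version shows up as its own cursor with its own execution count):

SQL> select sql_text, parse_calls, executions
2 from v$sql
3 where sql_text like 'SELECT * FROM persons%';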

Checking hard parsing and soft parsing

When a query is fired from the SQL prompt, it undergoes the steps explained above. The role of the library cache is to hold the parse tree and execution plan of each statement. If we run a query and this information is found directly in the library cache, Oracle doesn't have to do the time-consuming work of parsing the statement, generating candidate execution plans, and selecting the best one; it saves that time by picking the execution plan straight from the cache. This is called a soft parse.

But if Oracle cannot find the execution plan in the library cache and has to carry out all of these steps, the operation is much more expensive. This is called a hard parse.

For a query, we can check whether it was a hard parse or a soft parse. For that we need to turn on session-level tracing.

Usually, when we fire a query for the first time it is a hard parse; on subsequent executions Oracle finds the information in the cache, so it is a soft parse (provided the query we fire is exactly the same).

Let's try this.

Step 1)

SQL> alter system flush shared_pool;

System altered.

SQL> ALTER SESSION SET SQL_TRACE=TRUE;

Session altered.

SQL> select * from emp where empno = 2;

If we check the trace file, we see entries like the following (this particular cursor is a recursive dictionary statement executed as part of our query; mis=1 in the PARSE line indicates a library cache miss, that is, a hard parse):

PARSING IN CURSOR #5 len=43 dep=1 uid=0 oct=3 lid=0 tim=2461739217275 hv=1682066536 ad='bb4c0df8'
select user#,type# from user$ where name=:1
END OF STMT
PARSE #5:c=0,e=2245,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,tim=2461739217253
EXEC #5:c=10000,e=5622,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,tim=2461739223682

Step 2) Run the same statement again with tracing enabled

We see the following in the trace file.

PARSING IN CURSOR #1 len=33 dep=0 uid=5 oct=3 lid=5 tim=2461984275940 hv=1387342950 ad='bc88b950'
select * from emp where empno = 2
END OF STMT
PARSE #1:c=0,e=631,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=2461984275914
EXEC #1:c=0,e=487,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=2461984277279

Since we used exactly the same statement (not even a change in spacing), Oracle reuses the parsed version. Here mis=0 indicates there wasn't a hard parse but merely a soft parse, which is a lot cheaper in terms of resource usage.

Following is the TKPROF output.
*************************************************************************

select * from emp where empno = 2
call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        2      0.04       0.03          0          2          0          0
Execute      2      0.00       0.00          0          0          0          0
Fetch        4      0.00       0.00          0          6          0          2
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        8      0.04       0.03          0          8          0          2

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5
Rows Row Source Operation
——- —————————————————
1 TABLE ACCESS FULL EMP (cr=3 pr=0 pw=0 time=217 us)
*************************************************************************

As you can see from the above output, there is one library cache miss, and it occurred during the first execution of the statement.
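
Besides tracing, you can see the same thing in V$SQL (a quick check; LOADS counts how many times the cursor was hard parsed into the library cache, while PARSE_CALLS counts all parse calls, soft or hard):

SQL> select sql_text, loads, parse_calls, executions
2 from v$sql
3 where sql_text like 'select * from emp%';

For our statement, LOADS would typically stay at 1 while PARSE_CALLS and EXECUTIONS grow with every run.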

There is a measure for checking parsing efficiency: the parse-to-execute ratio, in which we compare the number of parse calls with the number of executions. In a real database, some SQL statements will be fully reentrant (execute to parse close to 100%), while others must be re-parsed for every execution (execute to parse close to 0%).
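
The figure that statspack/AWR reports as Execute to Parse % can be computed directly from V$SYSSTAT using the standard formula 100 * (1 - parse count/execute count); a minimal sketch:

SQL> select round(100 * (1 - p.value / e.value), 2) "Execute to Parse %"
2 from v$sysstat p, v$sysstat e
3 where p.name = 'parse count (total)'
4 and e.name = 'execute count';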

You can use the statspack or AWR report to get the instance efficiency percentages as shown below.

Let's first check the ADDM report.

SQL> @?/rdbms/admin/addmrpt.sql

Current Instance
~~~~~~~~~~~~~~~~

   DB Id     DB Name      Inst Num Instance
----------- ------------ -------- ------------
 1053231558 TKR12M              1 tkr12m1


Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    DB Id    Inst Num DB Name      Instance     Host
----------- -------- ------------ ------------ ------------
 3640598056        1 LA4001       LA4001       adsdemo
 4208397564        1 VISASUPG     VISASUPG     ap6004bld
 1053231558        1 TKR12M       tkr12m       ap6313rt
 3143212236        1 AZ2TP202     AZ2TP202     ap6002rems
 1967308183        1 AR1202D2     ar1202d2     ap6175rt


Using 1053231558 for database Id
Using 1 for instance number


Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed. Pressing <return> without
specifying a number lists all completed snapshots.

Listing the last 3 days of Completed Snapshots

                                                       Snap
Instance     DB Name        Snap Id Snap Started       Level
------------ ------------ --------- ------------------ -----
tkr12m1      TKR12M             334 07 Jun 2007 00:00       1
                                335 07 Jun 2007 01:00       1
                                336 07 Jun 2007 02:00       1


Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 334
Begin Snapshot Id specified: 334

Enter value for end_snap: 357
End Snapshot Id specified: 357

Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is addmrpt_1_334_357.txt. To use this name,
press <return> to continue, otherwise enter an alternative.

Enter value for report_name: ./addmrpt_1_334_357.txt

Running the ADDM analysis on the specified pair of snapshots …

To create a statspack report you first need to install statspack in the Oracle Database 10g instance. The following are the steps for generating a statspack report.

1) Install statspack
To install statspack you have to create a tablespace perfstat and then run the script @?/rdbms/admin/spcreate.

SQL> create tablespace perfstat datafile '/dy/oracle/product/db10g/dbf/perfstat01.dbf' size 1000M;

Tablespace created.

Once the tablespace is created, run the script @?/rdbms/admin/spcreate and provide the perfstat user password, the perfstat tablespace name, and the temp tablespace name.

2) Take snapshots

Use the command "execute statspack.snap" to take a snapshot. After some time, take another snapshot.

3) Run reports

Use @?/rdbms/admin/spreport to generate the statspack report.

Instance Efficiency Percentages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
             Buffer Nowait %:  100.00       Redo NoWait %:  100.00
                Buffer Hit %:  100.00   In-memory Sort %:  100.00
               Library Hit %:   84.44       Soft Parse %:   64.14
          Execute to Parse %:   70.27        Latch Hit %:  100.00
 Parse CPU to Parse Elapsd %:   99.08    % Non-Parse CPU:   67.62

So from here we get Execute to Parse %: 70.27, which means nearly 30 percent of all execute calls are accompanied by a parse call.

Also, the Soft Parse % of 64.14 is quite low, indicating a significant amount of hard parsing in the database.

You can also find the sessions with a high number of hard parses:

SQL> SELECT s.sid, s.value "Hard Parses", t.value "Executions Count"
2 FROM v$sesstat s, v$sesstat t
3 WHERE s.sid = t.sid
4 AND s.statistic# = (select statistic#
5 FROM v$statname where name = 'parse count (hard)')
6 AND t.statistic# = (select statistic#
7 FROM v$statname where name = 'execute count')
8 AND s.value > 0
9 ORDER BY 2 desc;

       SID Hard Parses Executions Count
---------- ----------- ----------------
      1089        2023           125305
      1063         557           129291
       963         338            55504
       878         252            46754
      1072         226           179833
       871         181            61504

The ratio of parse count (hard) to execute count should be low for better performance.

You can also check the ratio of parse count (total) to execute count; this ratio too should be low, as the example after this paragraph shows.
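
For example, the instance-wide value of this ratio can be taken straight from V$SYSSTAT:

SQL> select a.value/b.value "Parse to Execute Ratio"
2 from v$sysstat a, v$sysstat b
3 where a.name = 'parse count (total)'
4 and b.name = 'execute count';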

There are various other ratios that can be computed to assist in determining whether parsing may be a problem. For example, we can derive further ratios from the statistics returned by the following query:

SQL> SELECT NAME, VALUE
2 FROM V$SYSSTAT
3 WHERE NAME IN ('parse time cpu', 'parse time elapsed',
4 'parse count (hard)', 'CPU used by this session');

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
CPU used by this session                                              88709
parse time cpu                                                         7402
parse time elapsed                                                     7570
parse count (hard)                                                     5871

1) parse time CPU / parse time elapsed

This ratio indicates how much of the time spent parsing was due to the parse operation itself, rather than waiting for resources, such as latches. A ratio of one is good, indicating that the elapsed time was not spent waiting for highly contended resources. The parse time elapsed statistic is the total elapsed time for parsing, in hundredths of a second. By subtracting parse time cpu from this statistic, the total waiting time for parse resources is determined.

SQL> select a.value/b.value from v$sysstat a, v$sysstat b
2 where a.name = 'parse time cpu'
3 and b.name = 'parse time elapsed';

A.VALUE/B.VALUE
—————
.977810065

2) parse time CPU / CPU used by this session

This ratio indicates how much of the total CPU used by Oracle server processes was spent on parse-related operations. A ratio closer to zero is good, indicating that the majority of CPU is not spent on parsing.

SQL> select a.value/b.value from v$sysstat a, v$sysstat b
2 where a.name = 'parse time cpu'
3 and b.name = 'CPU used by this session';

A.VALUE/B.VALUE
—————
.083350217

We could keep computing further ratios, but let's stop here, since we have other optimization techniques to look at.

Checking Library cache performance

SQL> select sum(pinhits)/sum(pins) from V$LIBRARYCACHE;

SUM(PINHITS)/SUM(PINS)
———————-
.918569272

In V$LIBRARYCACHE we have the following three columns:

GETS: the number of lookups for objects of the namespace.
PINS: the number of reads or executions of the object.
RELOADS: the number of library cache misses at execution time.

So basically pins are the actual executions.

A library cache hit ratio between 95 and 99 percent is considered good. But don't rely on these numbers alone, because they can be misleading: there can always be a situation where your library cache shows a good hit ratio while sessions are waiting for resources. So always check for that, as the example below shows.
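
One way to perform that check is to look for sessions currently waiting on library cache resources (a minimal sketch; in 10g the relevant wait events include 'latch: library cache', 'library cache lock', and 'library cache pin'):

SQL> select sid, event, seconds_in_wait
2 from v$session_wait
3 where event like '%library cache%';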

Measuring Library Cache Efficiency

We can also check the efficiency of the library cache using the same view, V$LIBRARYCACHE.

SQL> select namespace, pins, pinhits, reloads from V$LIBRARYCACHE
2 order by namespace;

NAMESPACE             PINS    PINHITS    RELOADS
--------------- ---------- ---------- ----------
BODY                   546        473         20
CLUSTER               1075       1055          6
INDEX                 2016       1588         41
JAVA DATA                0          0          0
JAVA RESOURCE            0          0          0
JAVA SOURCE              0          0          0
OBJECT                   0          0          0
PIPE                     0          0          0
SQL AREA            129364     122594       1415
TABLE/PROCEDURE      35855      29402       1359
TRIGGER                  6          4          0

11 rows selected.

Here we see a high number of reloads for SQL AREA and TABLE/PROCEDURE. The memory used by Oracle for the library cache is not just a single heap: for better efficiency, memory is allocated to different kinds of entities. For example, a chunk of memory is allocated to SQL statements (the SQL AREA), and memory is likewise allocated for procedures, objects, triggers, and so on.

For each of these areas we can look at PINS, PINHITS, and RELOADS to get the efficiency, as the query below illustrates. Note that a RELOAD involves reloading a SQL statement after it has been aged out.
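
For example, the per-namespace hit ratio can be computed directly from the same view (restricted to namespaces that have actually been used):

SQL> select namespace, round(pinhits/pins, 3) "Hit Ratio", reloads
2 from v$librarycache
3 where pins > 0
4 order by "Hit Ratio";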

Sometimes increasing the size of the shared pool won't help with this kind of output, because the application itself can be big and the number of SQL statements too high to keep them all in memory. Another solution is to create KEEP and RECYCLE caches, to store the objects that are used intensively and those that can be flushed, respectively. You can also use the DBMS_SHARED_POOL package (its KEEP and UNKEEP procedures) to pin objects so that they remain in memory.

You can use the V$LIBRARY_CACHE_MEMORY view to determine the number of library cache memory objects currently in use in the shared pool and the number of freeable library cache memory objects in the shared pool.

Once we have exact information about the issues related to the library cache, such as hit ratio, efficiency, and the amount of hard and soft parsing, we can move ahead with optimizing the library cache.

Optimizing the Library Cache

We can optimize Library cache using some of the initialization parameters. We will see some of the parameters here now.

Using the CURSOR_SHARING (Literal Replacement) Parameter

When a SQL statement is used, the Oracle engine generates an execution plan and stores it in the library cache. If a statement uses a literal value and the same statement is fired with the same value next time, there is no extra overhead. But if the same statement is fired with a different value, Oracle has to reparse the statement, generate execution plans, and select the best plan, which adds system overhead and degrades performance. So it's recommended to use bind variables. The following example shows the use of bind variables.

SQL> VARIABLE bindvar NUMBER;
SQL> BEGIN
2 :bindvar := 7900;
3 END;
4 /

PL/SQL procedure successfully completed.

SQL> SELECT ename FROM scott.emp WHERE empno = :bindvar;

ENAME
----------
JAMES

In this case the statement can be executed many times, but it is parsed only once. We can change the value of the bind variable bindvar and reuse the query efficiently.

But it's also true that in many applications developers don't use bind variables in queries. Here we can compensate on the database side by setting the CURSOR_SHARING parameter to FORCE or SIMILAR. By default the value of this parameter is EXACT, which means Oracle will reuse a cached cursor only for an exactly matching statement (down to the spaces; it actually uses a string comparison). If the statement is found in the library cache it isn't parsed again; otherwise it is. The FORCE option enables Oracle to reuse parsed SQL statements in its library cache: Oracle replaces the literal values with bind values to make the statements identical.

The CURSOR_SHARING=FORCE option forces the use of bind variables under all circumstances, whereas the CURSOR_SHARING=SIMILAR option does so only when Oracle thinks doing so won't adversely affect optimization.

You can change this parameter dynamically, if you are using an SPFILE, with the following statements.

SQL> show parameters CURSOR_SHARING

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cursor_sharing                       string      EXACT

SQL> alter system set CURSOR_SHARING=FORCE;

System altered.

Using the SESSION_CACHED_CURSORS Parameter

Ideally, an application should have all the parsed statements available in separate cursors, so that if it has to execute a new statement, all it has to do is pick the parsed statement and change the value of the variables. If the application reuses a single cursor with different SQL statements, it still has to pay the cost of a soft parse. After opening a cursor for the first time, Oracle will parse the statement, and then it can reuse this parsed version in the future. This is a much better strategy than re-creating the cursor each time the database executes the same SQL statement. If you can cache all the cursors,
you’ll retain the server-side context, even when clients close the cursors or reuse them for new SQL statements.

You’ll appreciate the usefulness of the SESSION_CACHED_CURSORS parameter in a situation where users repeatedly parse the same statements, as happens in an Oracle Forms-based application when users switch among various forms. Using the SESSION_CACHED_CURSORS parameter ensures that for any cursor for which more than three parse requests are made, the parse requests are automatically cached in the session cursor cache. Thus new calls to parse the same statement avoid the parsing overhead. Using the initialization parameter SESSION_CACHED_CURSORS and setting it to a high number makes the query processing more efficient. Although soft parses are cheaper than hard parses, you can reduce even soft parsing by using the SESSION_CACHED_CURSORS parameter and setting it to a high number.

You can enforce session caching of cursors by setting the SESSION_CACHED_CURSORS in your initialization parameter file, or dynamically by using the following ALTER SESSION command:

SQL> ALTER SESSION SET SESSION_CACHED_CURSORS = value;

You can check how good your SESSION_CACHED_CURSORS parameter value is by using the V$SYSSTAT view. If the value of session cursor cache hits is low compared to the total parse count for a session, then the SESSION_CACHED_CURSORS parameter value should be bumped up.

SQL> show parameters SESSION_CACHED_CURSORS

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
session_cached_cursors               integer     500

SQL> select name, value from V$SYSSTAT
2 where name in ('session cursor cache hits','parse count (total)');

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
session cursor cache hits                                           8194208
parse count (total)                                                13252290
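
Here the session cursor cache satisfies roughly 62 percent of all parse calls (8194208 out of 13252290). The percentage can be computed in a single query:

SQL> select round(100 * c.value / t.value, 2) "Cursor Cache Hit %"
2 from v$sysstat c, v$sysstat t
3 where c.name = 'session cursor cache hits'
4 and t.name = 'parse count (total)';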

The perfect situation is where a SQL statement is soft parsed once in a session and executed multiple times. For a good explanation of bind variables, cursor sharing, and related issues, please read the Oracle white paper "Efficient use of bind variables, cursor_sharing and related cursor parameters" (http://otn.oracle.com/deploy/performance/pdf/cursor.pdf).

Sizing the Shared Pool

The best way to maintain the correct sizes of the memory structures is to set a value for SGA_TARGET. This is a new initialization parameter introduced in Oracle Database 10g for automatic management of the following memory allocations (see the example after this list):

  1. SHARED_POOL_SIZE
  2. DB_CACHE_SIZE
  3. JAVA_POOL_SIZE
  4. LARGE_POOL_SIZE
  5. STREAMS_POOL_SIZE
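
For example (a minimal illustration with an arbitrary target value), when running with an SPFILE you can set the target dynamically and let Oracle grow and shrink the individual pools listed above within that total:

SQL> alter system set sga_target = 600M scope=both;

System altered.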

Also, for sizing the shared pool accurately, you can use the V$SHARED_POOL_ADVICE view. We can get the total shared pool memory currently in use with:

SQL> select sum(bytes)/1024/1024 from v$sgastat
2 where pool = 'shared pool';

SUM(BYTES)/1024/1024
——————–
84.0046158

You can also check the estimated library cache size for a given shared pool size using the following query.

SQL> select SHARED_POOL_SIZE_FOR_ESTIMATE, SHARED_POOL_SIZE_FACTOR, ESTD_LC_SIZE
2 from v$shared_pool_advice;

SHARED_POOL_SIZE_FOR_ESTIMATE SHARED_POOL_SIZE_FACTOR ESTD_LC_SIZE
----------------------------- ----------------------- ------------
                           60                   .7143           12
                           72                   .8571           22
                           84                        1           33
                           96                  1.1429           44
                          108                  1.2857           55
                          120                  1.4286           66
                          132                  1.5714           77
                          144                  1.7143           88
                          156                  1.8571           99
                          168                        2          110

10 rows selected.

The ESTD_LC_TIME_SAVED column tells the estimated elapsed parse time saved (in seconds), owing to library cache memory objects being found in a shared pool of the specified size. This is the time that would have been spent reloading the required objects into the shared pool had they been aged out due to an insufficient amount of available free memory.

SQL> select SHARED_POOL_SIZE_FOR_ESTIMATE, SHARED_POOL_SIZE_FACTOR, ESTD_LC_TIME_SAVED
2 from v$shared_pool_advice;

SHARED_POOL_SIZE_FOR_ESTIMATE SHARED_POOL_SIZE_FACTOR ESTD_LC_TIME_SAVED
----------------------------- ----------------------- ------------------
                           60                   .7143                552
                           72                   .8571                560
                           84                        1               564
                           96                  1.1429                567
                          108                  1.2857                577
                          120                  1.4286                577
                          132                  1.5714                577
                          144                  1.7143                577
                          156                  1.8571                577
                          168                        2               577

10 rows selected.

So from the above queries we can see that even though the current size of the shared pool is 84M, a better size would be around 108M. You can also check the estimated time needed to load the library cache at different shared pool sizes using the following query.

SQL> select SHARED_POOL_SIZE_FOR_ESTIMATE, SHARED_POOL_SIZE_FACTOR, ESTD_LC_LOAD_TIME
2 from v$shared_pool_advice;

SHARED_POOL_SIZE_FOR_ESTIMATE SHARED_POOL_SIZE_FACTOR ESTD_LC_LOAD_TIME
----------------------------- ----------------------- -----------------
                           60                   .7143                94
                           72                   .8571                86
                           84                        1               82
                           96                  1.1429                79
                          108                  1.2857                69
                          120                  1.4286                69
                          132                  1.5714                69
                          144                  1.7143                69
                          156                  1.8571                69
                          168                        2               69

10 rows selected.

This also shows that a shared pool of 108M gives the minimum estimated load time (69 seconds).

Pinning Objects in the Shared Pool

As I have discussed, if code objects have to be repeatedly hard-parsed and executed, database performance will eventually deteriorate. Your goal should be to see that as much of the executed code as possible remains in memory, so compiled code can be re-executed. You can avoid repeated reloading of objects in your library cache by pinning objects using the DBMS_SHARED_POOL package. Pinning an object means keeping its parse tree and execution plan in memory continuously until the instance is restarted. We can determine the objects to be pinned using the following query.

SELECT type, COUNT(*) OBJECTS,
SUM(DECODE(kept,'YES',1,0)) KEPT,
SUM(loads) - COUNT(*) RELOADS
FROM V$DB_OBJECT_CACHE
GROUP BY type
ORDER BY objects DESC;

TYPE                            OBJECTS       KEPT    RELOADS
---------------------------- ---------- ---------- ----------
CURSOR                             1428        584        719
NOT LOADED                         1423          0        179
TABLE                               252         28        320
VIEW                                 88          0        117
SYNONYM                              41          0          0
INVALID TYPE                         41         37         12
INDEX                                26          7          0
PACKAGE                              10          1         21
TYPE                                 10          0          2
CLUSTER                               8          6          1
PACKAGE BODY                          7          1          6
SEQUENCE                              4          0          1
TYPE BODY                             2          0          0
RSRC CONSUMER GROUP                   1          0          1
NON-EXISTENT                          1          0          0
PUB_SUB                               1          0          8
FUNCTION                              1          0          0

17 rows selected.

If the number of reloads is high, you need to make sure that the objects are pinned in the database:

SQL> EXECUTE SYS.DBMS_SHARED_POOL.KEEP('object_name','flag');

You can use the following statements to pin a package first in the shared pool and then remove it, if necessary (the flag argument defaults to 'P', which covers packages, procedures, and functions):

SQL> EXECUTE SYS.DBMS_SHARED_POOL.KEEP('NEW_EMP.PKG');

SQL> EXECUTE SYS.DBMS_SHARED_POOL.UNKEEP('NEW_EMP.PKG');
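
You can confirm which objects are currently pinned by checking the KEPT column of V$DB_OBJECT_CACHE:

SQL> select owner, name, type
2 from v$db_object_cache
3 where kept = 'YES';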

You can check the number of objects of a given type, and the total memory they consume, using the following queries:

SQL> select count(*) from v$DB_OBJECT_CACHE
2 where type = 'CURSOR';

COUNT(*)
----------
1431

SQL> select sum(sharable_mem) from v$DB_OBJECT_CACHE
2 where type = 'CURSOR';

SUM(SHARABLE_MEM)
-----------------
15300756

The above size is in bytes, so to hold 1431 cursors the shared pool needs approximately 15MB of memory.

So basically it all depends on the reports you take from the database and on your database initialization parameter settings. So check your database and tune it. Good luck !!!

The Dictionary Cache

The dictionary cache, as mentioned earlier, caches data dictionary information. This cache is much smaller than the library cache, and to increase or decrease it you modify the shared pool accordingly. If your library cache is satisfactorily configured, chances are that the dictionary cache is going to be fine too. You can get an idea about the efficiency of the dictionary cache by using the following query:

SQL> select sum(gets - getmisses - fixed) / sum(gets) from v$rowcache;

SUM(GETS-GETMISSES-FIXED)/SUM(GETS)
———————————–
.995904392

Usually, it's a good idea to shoot for a dictionary cache hit ratio as high as 95 to 99 percent, although Oracle itself sometimes seems to refer to a figure of 85 percent as being adequate. To increase the dictionary cache hit ratio, you simply increase the shared pool size for the instance.
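
If the ratio looks low, the same view shows which dictionary areas are taking the misses; the PARAMETER column names the dictionary item (dc_objects or dc_segments, for example):

SQL> select parameter, gets, getmisses
2 from v$rowcache
3 where gets > 0
4 order by getmisses desc;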

References:

Expert Oracle Database 10g Administration – Sam R. Alapati

Oracle Performance Tuning Guide

Oracle Data Dictionary Reference