Tuning the Shared Pool – Oracle Database 10g

Shared Pool

The most important memory resource to consider in a production system is the shared pool. The shared pool is the part of the SGA that holds almost all the elements necessary for executing SQL statements and PL/SQL programs.
In addition to caching program code, the shared pool caches the data dictionary information that Oracle needs to refer to often during the course of program execution.
Proper shared pool configuration leads to dramatic improvements in performance. An improperly tuned shared pool leads to problems such as the following:

  • Fragmentation of the pool
  • Increased latch contention with the resulting demand for more CPU resources
  • Greater I/O because executable forms of SQL aren’t present in the shared pool
  • Higher CPU usage because of unnecessary parsing of SQL code

The general increase in shared pool waits and other waits observed during a severe slowdown of the production database is the result of SQL code that fails to use bind variables.

As the number of users increases, so does the demand on shared pool memory and latches, which are internal locks for memory areas. Excessive latch contention results in higher wait times and slower response times; sometimes the entire database seems to hang. The shared pool consists of two major areas: the library cache and the data dictionary cache. You can't allocate or decrease memory specifically for one of these components. If you increase the total shared pool memory size, both components will increase in some ratio that Oracle determines. Similarly, when you decrease the total shared pool memory, both components will decrease in size.
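To see how the shared pool memory is actually distributed among its internal allocations (library cache structures, dictionary cache rows, free memory, and so on), you can query V$SGASTAT. This is a minimal sketch; the allocation names it returns vary by release:

```sql
-- Largest allocations inside the shared pool, in MB
SELECT name, ROUND(bytes / 1024 / 1024, 1) AS mb
  FROM v$sgastat
 WHERE pool = 'shared pool'
 ORDER BY bytes DESC;
```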

Let’s look at these two important components of the shared pool in detail.

The Library Cache

The library cache holds the parsed and executable versions of SQL and PL/SQL code. As you may recall from Chapter 21, all SQL statements undergo the following steps during their processing:

  • Parsing, which includes syntactic and semantic verification of SQL statements and checking of object privileges to perform the actions.

  • Optimization, where the Oracle optimizer evaluates how to process the statement with the least cost, after it evaluates several alternatives.

  • Execution, where Oracle uses the optimized physical execution plan to perform the action stated in the SQL statement.

  • Fetching, which only applies to SELECT statements where Oracle has to return rows to you. This step isn't necessary for nonquery statements.

Parsing is a resource-intensive operation, and if your application needs to execute the same SQL statement repeatedly, having a parsed version in memory will reduce contention for latches, CPU, I/O, and memory usage. The first time Oracle parses a statement, it creates a parse tree. The optimization step is necessary only for the first execution of a SQL statement. Once the statement is optimized, the best access path is encapsulated in the access plan. Both the parse tree and the access plan are stored in the library cache before the statement is executed for the first time. Future invocations of the same statement need to go through only the last stage, execution, which avoids the overhead of parsing and optimizing, as long as Oracle can find the parse tree and access plan in the library cache. Of course, if the statement is a SQL query, the last step will be the fetch operation.

The library cache, being limited in size, discards old SQL statements when there’s no more
room for new SQL statements. The only way you can use a parsed statement repeatedly for multiple executions is if a SQL statement is identical to the parsed statement. Two SQL statements are identical if they use exactly the same code, including case and spaces. The reason for this is that when Oracle compares a new statement to existing statements in the library cache, it uses simple string comparisons. In addition, any bind variables used must be similar in data type and size. Here are a couple of examples that show you how picky Oracle is when it comes to considering whether two SQL statements are identical.

In the following example, the statements aren’t considered identical because of an extra space in the second statement:

SELECT * FROM employees;

SELECT *  FROM employees;

In the next example, the statements aren't considered identical because of the different case used for the table name in the second statement. To Oracle's string comparison, employees and Employees are simply different pieces of text, so each statement is parsed separately.

SELECT * FROM employees;

SELECT * FROM Employees;

Let’s say users in the database issue the following three SQL statements:

SELECT * FROM persons WHERE person_id = 10

SELECT * FROM persons WHERE person_id = 999

SELECT * FROM persons WHERE person_id = 6666

Oracle parses and executes each of the preceding three statements separately, even though they're identical in every respect except for the value of person_id. Because all three are essentially the same query, this is inherently inefficient. As you can imagine, if hundreds of thousands of such statements are issued during the day, you're wasting database resources and query performance will be slow. Bind variables allow you to reuse SQL statements by making them "identical," and thus eligible to share the same parsed version and execution plan.

In our example, you can use a bind variable, which I'll call :var, to help Oracle view the three statements as identical, thus requiring a single parse instead of three. The person_id values 10, 999, and 6666 are "bound" to the bind variable :var. Your replacement SQL statement using a bind variable, then, would be this:

SELECT * FROM persons WHERE person_id = :var

Using bind variables can dramatically improve query performance. You can even force Oracle to use bind variables when an application doesn't use them, as discussed later under CURSOR_SHARING.

Checking hard parsing and soft parsing

When a query is fired from the SQL prompt, it undergoes the steps explained above. The role of the library cache is to hold the parse tree and execution plan for each statement. If we run a query and this information is found directly in the library cache, Oracle doesn't have to do the time-consuming work of parsing the statement, generating candidate execution plans, and selecting the best one; it saves time by picking up the execution plan directly from the cache. This is called a soft parse.

But if Oracle cannot find the execution plan in the library cache and has to carry out all of those steps, the operation is expensive. This is called a hard parse.

For a given query we can check whether it was hard parsed or soft parsed. For that we need to turn on session-level tracing. Usually, the first execution of a query is a hard parse; on subsequent executions Oracle finds the information in the cache, so it's a soft parse (provided the query we fire is exactly the same).

Let's try this.

Step 1) Flush the shared pool, enable session tracing, and run the query

SQL> alter system flush shared_pool;

System altered.

SQL> ALTER SESSION SET SQL_TRACE=TRUE;

Session altered.

SQL> select * from emp where empno = 2;

If we check the trace file, we see the following information (mis=1 on the PARSE line indicates a library cache miss, that is, a hard parse):

PARSING IN CURSOR #5 len=43 dep=1 uid=0 oct=3 lid=0 tim=2461739217275 hv=1682066536 ad='bb4c0df8'
select user#,type# from user$ where name=:1
END OF STMT
PARSE #5:c=0,e=2245,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,tim=2461739217253
EXEC #5:c=10000,e=5622,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,tim=2461739223682

Step 2) Run the same statement again, with tracing still enabled

We see the following in the trace file:

PARSING IN CURSOR #1 len=33 dep=0 uid=5 oct=3 lid=5 tim=2461984275940 hv=1387342950 ad='bc88b950'
select * from emp where empno = 2
END OF STMT
PARSE #1:c=0,e=631,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=2461984275914
EXEC #1:c=0,e=487,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=2461984277279

Since we used exactly the same statement (not even a change in spacing), Oracle reuses the parsed version. Here mis=0 indicates there wasn't a hard parse but merely a soft parse, which is a lot cheaper in terms of resource usage.

Following is the TKPROF output.
*************************************************************************

select * from emp where empno = 2
call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        2      0.04       0.03          0          2          0          0
Execute      2      0.00       0.00          0          0          0          0
Fetch        4      0.00       0.00          0          6          0          2
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        8      0.04       0.03          0          8          0          2

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5
Rows Row Source Operation
——- —————————————————
1 TABLE ACCESS FULL EMP (cr=3 pr=0 pw=0 time=217 us)
*************************************************************************

As you can see from the above output, there is one library cache miss, and it occurred during the first execution of the statement.

There is a measure to check parsing efficiency: the parse-to-execute ratio, which compares the number of parse calls to the number of executions. In a real database, some SQL statements will be fully reentrant (execute to parse close to 100%), while others must be re-parsed for every execution (execute to parse close to 1%).
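The instance-wide figure can be computed directly from V$SYSSTAT. This sketch mirrors the Execute to Parse % formula that Statspack/AWR reports, 100 * (1 - parses/executions):

```sql
-- Execute to Parse %: how many executions avoid a parse call entirely
SELECT ROUND(100 * (1 - p.value / e.value), 2) AS execute_to_parse_pct
  FROM v$sysstat p, v$sysstat e
 WHERE p.name = 'parse count (total)'
   AND e.name = 'execute count';
```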

You can use the Statspack or AWR report to get the instance efficiency percentages as shown below.

Let's first check the ADDM report:

SQL> @?/rdbms/admin/addmrpt.sql

Current Instance
~~~~~~~~~~~~~~~~

DB Id DB Name Inst Num Instance
———– ———— ——– ————
1053231558 TKR12M 1 tkr12m1


Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

DB Id Inst Num DB Name Instance Host
———— ——– ———— ———— ————
3640598056 1 LA4001 LA4001 adsdemo
4208397564 1 VISASUPG VISASUPG ap6004bld
1053231558 1 TKR12M tkr12m ap6313rt
3143212236 1 AZ2TP202 AZ2TP202 ap6002rems
1967308183 1 AR1202D2 ar1202d2 ap6175rt


Using 1053231558 for database Id
Using 1 for instance number


Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed. Pressing <return> without
specifying a number lists all completed snapshots.

Listing the last 3 days of Completed Snapshots

Snap
Instance DB Name Snap Id Snap Started Level
———— ———— ——— —————— —–
tkr12m1 TKR12M 334 07 Jun 2007 00:00 1
335 07 Jun 2007 01:00 1
336 07 Jun 2007 02:00 1


Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 334
Begin Snapshot Id specified: 334

Enter value for end_snap: 357
End Snapshot Id specified: 357

Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is addmrpt_1_334_357.txt. To use this name,
press <return> to continue, otherwise enter an alternative.

Enter value for report_name: ./addmrpt_1_334_357.txt

Running the ADDM analysis on the specified pair of snapshots …

For creating the Statspack report you need to first install Statspack in the Oracle Database 10g instance. Following are the steps to get a Statspack report.

1) Install Statspack
For installing Statspack, you have to create a perfstat tablespace and then run the script @?/rdbms/admin/spcreate:

SQL> create tablespace perfstat datafile '/dy/oracle/product/db10g/dbf/perfstat01.dbf' size 1000M;

Tablespace created.

Once the tablespace is created, run the script @?/rdbms/admin/spcreate and provide the perfstat user password, the perfstat tablespace name, and the temp tablespace name.

2) Take snapshots

Use the command EXECUTE statspack.snap to take a snapshot; after some time, take another snapshot.

3) Run reports

Use @?/rdbms/admin/spreport to generate the Statspack report.

Instance Efficiency Percentages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 100.00 In-memory Sort %: 100.00
Library Hit %: 84.44 Soft Parse %: 64.14
Execute to Parse %: 70.27 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 99.08 % Non-Parse CPU: 67.62

So from here we get Execute to Parse %: 70.27, which means roughly 30 parse calls occur for every 100 executions; a significant amount of re-parsing is going on in the database.

Also, the Soft Parse % (64.14) is low, which indicates a high proportion of hard parses.

You can also find out the sessions with high number of hard parses.

SQL> SELECT s.sid, s.value "Hard Parses", t.value "Executions Count"
  2  FROM v$sesstat s, v$sesstat t
  3  WHERE s.sid = t.sid
  4  AND s.statistic# = (SELECT statistic#
  5                      FROM v$statname WHERE name = 'parse count (hard)')
  6  AND t.statistic# = (SELECT statistic#
  7                      FROM v$statname WHERE name = 'execute count')
  8  AND s.value > 0
  9  ORDER BY 2 DESC;

       SID Hard Parses Executions Count
---------- ----------- ----------------
      1089        2023           125305
      1063         557           129291
       963         338            55504
       878         252            46754
      1072         226           179833
       871         181            61504

The ratio parse count (hard)/execute count should be low for better performance.

You can also check the ratio parse count (total)/execute count; this ratio, too, should be low.
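Both ratios can be checked instance-wide from V$SYSSTAT with a single query; a sketch:

```sql
-- Hard and total parses per execution; lower values indicate better reuse
SELECT ROUND(h.value / e.value, 4) AS hard_parses_per_execute,
       ROUND(t.value / e.value, 4) AS total_parses_per_execute
  FROM v$sysstat h, v$sysstat t, v$sysstat e
 WHERE h.name = 'parse count (hard)'
   AND t.name = 'parse count (total)'
   AND e.name = 'execute count';
```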

There are various other ratios that can be computed to assist in determining whether parsing may be a problem. For example, we can derive them from the following query:

SQL> SELECT name, value
  2  FROM v$sysstat
  3  WHERE name IN ('parse time cpu', 'parse time elapsed',
  4                 'parse count (hard)', 'CPU used by this session');

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
CPU used by this session                                              88709
parse time cpu                                                         7402
parse time elapsed                                                     7570
parse count (hard)                                                     5871

1) parse time CPU / parse time elapsed

This ratio indicates how much of the time spent parsing was due to the parse operation itself, rather than waiting for resources such as latches. A ratio close to one is good, indicating that little of the elapsed time was spent waiting for highly contended resources. The parse time elapsed statistic is the total elapsed time for parsing, in tens of milliseconds; by subtracting parse time cpu from this statistic, you get the total waiting time for parse resources.

SQL> select a.value/b.value from v$sysstat a, v$sysstat b
  2  where a.name = 'parse time cpu'
  3  and b.name = 'parse time elapsed';

A.VALUE/B.VALUE
—————
.977810065

2) parse time CPU / CPU used by this session

This ratio indicates how much of the total CPU used by Oracle server processes was spent on parse-related operations. A ratio closer to zero is good, indicating that the majority of CPU is not spent on parsing.

SQL> select a.value/b.value from v$sysstat a, v$sysstat b
  2  where a.name = 'parse time cpu'
  3  and b.name = 'CPU used by this session';

A.VALUE/B.VALUE
—————
.083350217

We could go on computing different ratios, but let's stop here, since we have other optimization techniques to check.

Checking Library Cache Performance

SQL> select sum(pinhits)/sum(pins) from V$LIBRARYCACHE;

SUM(PINHITS)/SUM(PINS)
———————-
.918569272

V$LIBRARYCACHE has the following three important columns:

GETS: the number of lookups for objects of the namespace.
PINS: the number of reads or executions of the object.
RELOADS: the number of cache misses that required the object to be reloaded.

So basically, pins are the actual executions.

A library cache hit ratio between 95 and 99 percent is considered good. But don't rely on these numbers alone, because they can be misleading: there can always be a situation where your library cache shows a good hit ratio while sessions are waiting on library cache resources. So always check for waits as well.
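One way to check for such waits is to look at the library cache latches directly in V$LATCH; a sketch (latch names vary slightly by release):

```sql
-- Misses and sleeps relative to gets should be very low;
-- high sleeps suggest latch contention despite a good hit ratio
SELECT name, gets, misses, sleeps
  FROM v$latch
 WHERE name LIKE 'library cache%';
```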

Measuring Library Cache Efficiency

We can also check the efficiency of the library cache, per namespace, using the same view, V$LIBRARYCACHE:

SQL> select namespace, pins, pinhits, reloads from V$LIBRARYCACHE
2 order by namespace;

NAMESPACE             PINS    PINHITS    RELOADS
--------------- ---------- ---------- ----------
BODY                   546        473         20
CLUSTER               1075       1055          6
INDEX                 2016       1588         41
JAVA DATA                0          0          0
JAVA RESOURCE            0          0          0
JAVA SOURCE              0          0          0
OBJECT                   0          0          0
PIPE                     0          0          0
SQL AREA            129364     122594       1415
TABLE/PROCEDURE      35855      29402       1359
TRIGGER                  6          4          0

11 rows selected.

Here we see a high number of reloads for the SQL AREA and TABLE/PROCEDURE namespaces. The memory used by Oracle for the library cache is not just a single heap; for better efficiency, memory is allocated to different kinds of entities. For example, a chunk of memory is allocated to SQL statements, called the SQL AREA, and memory is likewise allocated for procedures, objects, triggers, and so on.

For each of these areas we can see PINS, PINHITS, and RELOADS to gauge the efficiency. Note that a RELOAD involves reloading a SQL statement after it has been aged out of the cache.

Sometimes, increasing the size of the shared pool won't help with this kind of output, because the application itself may be big and the number of distinct SQL statements too high to keep them all in memory. Another solution is to use a KEEP and RECYCLE approach: retain the statements that are used intensively, and let the rest be flushed. You can use the DBMS_SHARED_POOL package's KEEP and UNKEEP procedures to pin objects so that they remain in memory.

You can use the V$LIBRARY_CACHE_MEMORY view to determine the number of library cache memory objects currently in use in the shared pool, and the number of freeable library cache memory objects.
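For example, a query of the following shape (a sketch using the columns documented for this view) breaks the library cache memory down by namespace:

```sql
-- In-use vs. freeable library cache memory objects per namespace
SELECT lc_namespace,
       lc_inuse_memory_objects,
       lc_inuse_memory_size,
       lc_freeable_memory_objects,
       lc_freeable_memory_size
  FROM v$library_cache_memory;
```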

Once we have the details of library cache behavior (hit ratio, per-namespace efficiency, the amount of hard and soft parsing, and so on), we can move ahead and optimize the library cache.

Optimizing the Library Cache

We can optimize the library cache using certain initialization parameters. Let's look at some of them now.

Using the CURSOR_SHARING (Literal Replacement) Parameter

When a SQL statement is executed, the Oracle engine generates an execution plan and stores it in the library cache. If a statement contains a literal value and the same statement is later used with the same value, there is no extra overhead. But if the same statement is used with a different literal value, Oracle has to reparse the statement, generate candidate execution plans, and select the best one. This adds overhead and performance goes down, so it's recommended to use bind variables. The following example shows the use of bind variables.

SQL> VARIABLE bindvar NUMBER;
SQL> BEGIN
  2    :bindvar := 7900;
  3  END;
  4  /

PL/SQL procedure successfully completed.

SQL> SELECT ename FROM scott.emp WHERE empno = :bindvar;

ENAME
----------
JAMES

In this case the statement can be executed many times, but it is parsed only once. We can change the value of the bind variable bindvar and reuse this query efficiently.

But it's also true that in many applications, developers don't use bind variables in queries. Here we can help ourselves by setting the CURSOR_SHARING parameter to FORCE or SIMILAR. By default, the value of this parameter is EXACT, which means Oracle looks for the exact statement text (down to the spaces; it uses a simple string comparison). If the statement is found in the library cache it isn't reparsed; otherwise it is. The FORCE option enables Oracle to reuse parsed SQL statements in its library cache: Oracle replaces the literal values with bind values to make the statements identical.

The CURSOR_SHARING=FORCE option forces the use of bind variables under all circumstances, whereas the CURSOR_SHARING=SIMILAR option does so only when Oracle thinks doing so won't adversely affect optimization.

You can change this parameter dynamically (if you are using an SPFILE) with the following statements:

SQL> show parameters CURSOR_SHARING

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cursor_sharing                       string      EXACT

SQL> alter system set CURSOR_SHARING=FORCE;

System altered.

Using the SESSION_CACHED_CURSORS Parameter

Ideally, an application should have all the parsed statements available in separate cursors, so that if it has to execute a new statement, all it has to do is pick the parsed statement and change the value of the variables. If the application reuses a single cursor with different SQL statements, it still has to pay the cost of a soft parse. After opening a cursor for the first time, Oracle will parse the statement, and then it can reuse this parsed version in the future. This is a much better strategy than re-creating the cursor each time the database executes the same SQL statement. If you can cache all the cursors,
you’ll retain the server-side context, even when clients close the cursors or reuse them for new SQL statements.

You'll appreciate the usefulness of the SESSION_CACHED_CURSORS parameter in a situation where users repeatedly parse the same statements, as happens in an Oracle Forms-based application when users switch among various forms. The SESSION_CACHED_CURSORS parameter ensures that any cursor for which more than three parse requests are made is automatically cached in the session cursor cache, so new calls to parse the same statement avoid the parsing overhead. Although soft parses are cheaper than hard parses, you can reduce even soft parsing by setting SESSION_CACHED_CURSORS to a high number.

You can enforce session caching of cursors by setting the SESSION_CACHED_CURSORS in your initialization parameter file, or dynamically by using the following ALTER SESSION command:

SQL> ALTER SESSION SET SESSION_CACHED_CURSORS = value;

You can check how good your SESSION_CACHED_CURSORS parameter value is by using the V$SYSSTAT view. If the value of session cursor cache hits is low compared to the total parse count for a session, then the SESSION_CACHED_CURSORS parameter value should be bumped up.

SQL> show parameters SESSION_CACHED_CURSORS

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
session_cached_cursors               integer     500

SQL> select name, value from V$SYSSTAT
  2  where name in ('session cursor cache hits','parse count (total)');

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
session cursor cache hits                                           8194208
parse count (total)                                                13252290

The perfect situation is where a SQL statement is soft parsed once in a session and executed multiple times. For a good explanation of bind variables, cursor sharing, and related issues, please read the Oracle white paper “Efficient use of bind variables, cursor_sharing and related cursor parameters” (http://otn.oracle.com/deploy/performance/pdf/cursor.pdf).
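From the figures above, the session cursor cache is satisfying roughly 62% of parse calls (8194208 / 13252290); a sketch of the ratio query:

```sql
-- Share of parse calls satisfied from the session cursor cache;
-- a low value suggests raising SESSION_CACHED_CURSORS
SELECT ROUND(100 * h.value / p.value, 2) AS session_cache_hit_pct
  FROM v$sysstat h, v$sysstat p
 WHERE h.name = 'session cursor cache hits'
   AND p.name = 'parse count (total)';
```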

Sizing the Shared Pool

The best way to maintain the correct sizes of the memory structures is to set a value for SGA_TARGET, a new initialization parameter introduced in Oracle Database 10g for automatic management of memory allocations such as:

  1. SHARED_POOL_SIZE
  2. DB_CACHE_SIZE
  3. JAVA_POOL_SIZE
  4. LARGE_POOL_SIZE
  5. STREAMS_POOL_SIZE
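Setting SGA_TARGET itself is a one-line change when using an SPFILE; a hypothetical example with a 1 GB target:

```sql
-- Enable Automatic Shared Memory Management with a 1 GB SGA target.
-- With SGA_TARGET set, the pools above are sized automatically;
-- any values you still set explicitly act as minimums.
ALTER SYSTEM SET SGA_TARGET = 1G SCOPE = BOTH;
```

Note that SGA_TARGET cannot exceed SGA_MAX_SIZE.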

Also, to size the shared pool accurately, you can use the V$SHARED_POOL_ADVICE view, shown below. First, we can get the current total shared pool memory usage using:

SQL> select sum(bytes)/1024/1024 from v$sgastat
  2  where pool = 'shared pool';

SUM(BYTES)/1024/1024
——————–
84.0046158

You can also check the estimated library cache size for a given shared pool size using the following query.

SQL> select SHARED_POOL_SIZE_FOR_ESTIMATE, SHARED_POOL_SIZE_FACTOR, ESTD_LC_SIZE
2 from v$shared_pool_advice;

SHARED_POOL_SIZE_FOR_ESTIMATE SHARED_POOL_SIZE_FACTOR ESTD_LC_SIZE
----------------------------- ----------------------- ------------
                           60                   .7143           12
                           72                   .8571           22
                           84                        1           33
                           96                  1.1429           44
                          108                  1.2857           55
                          120                  1.4286           66
                          132                  1.5714           77
                          144                  1.7143           88
                          156                  1.8571           99
                          168                        2          110

10 rows selected.

The ESTD_LC_TIME_SAVED column gives the estimated elapsed parse time saved (in seconds) owing to library cache memory objects being found in a shared pool of the specified size. This is the time that would otherwise have been spent reloading the required objects into the shared pool after they were aged out due to insufficient free memory.

SQL> select SHARED_POOL_SIZE_FOR_ESTIMATE, SHARED_POOL_SIZE_FACTOR, ESTD_LC_TIME_SAVED
2 from v$shared_pool_advice;

SHARED_POOL_SIZE_FOR_ESTIMATE SHARED_POOL_SIZE_FACTOR ESTD_LC_TIME_SAVED
----------------------------- ----------------------- ------------------
                           60                   .7143                552
                           72                   .8571                560
                           84                        1                564
                           96                  1.1429                567
                          108                  1.2857                577
                          120                  1.4286                577
                          132                  1.5714                577
                          144                  1.7143                577
                          156                  1.8571                577
                          168                        2                577

10 rows selected.

From the above output we can see that although the current shared pool size is 84M, the estimated time saved stops increasing at 108M, so 108M would be the better size. You can also check the estimated parse (load) time at different shared pool sizes using the following query.

SQL> select SHARED_POOL_SIZE_FOR_ESTIMATE, SHARED_POOL_SIZE_FACTOR, ESTD_LC_LOAD_TIME
2 from v$shared_pool_advice;

SHARED_POOL_SIZE_FOR_ESTIMATE SHARED_POOL_SIZE_FACTOR ESTD_LC_LOAD_TIME
----------------------------- ----------------------- -----------------
                           60                   .7143                94
                           72                   .8571                86
                           84                        1                82
                           96                  1.1429                79
                          108                  1.2857                69
                          120                  1.4286                69
                          132                  1.5714                69
                          144                  1.7143                69
                          156                  1.8571                69
                          168                        2                69

10 rows selected.

This also shows that a shared pool of 108M achieves the minimum estimated load time (69 seconds).

Pinning Objects in the Shared Pool

As I have discussed, if code objects have to be repeatedly hard-parsed and executed, database performance will eventually deteriorate. Your goal should be to keep as much of the executed code in memory as possible so that compiled code can be re-executed. You can avoid repeated reloading of objects in your library cache by pinning objects using the DBMS_SHARED_POOL package. Pinning an object means keeping its parse tree and execution plan in memory continuously until the instance restarts. We can determine the objects to pin using the following query.

SELECT type, COUNT(*) OBJECTS,
       SUM(DECODE(kept,'YES',1,0)) KEPT,
       SUM(loads) - COUNT(*) RELOADS
FROM V$DB_OBJECT_CACHE
GROUP BY type
ORDER BY objects DESC;

TYPE                            OBJECTS       KEPT    RELOADS
---------------------------- ---------- ---------- ----------
CURSOR                             1428        584        719
NOT LOADED                         1423          0        179
TABLE                               252         28        320
VIEW                                 88          0        117
SYNONYM                              41          0          0
INVALID TYPE                         41         37         12
INDEX                                26          7          0
PACKAGE                              10          1         21
TYPE                                 10          0          2
CLUSTER                               8          6          1
PACKAGE BODY                          7          1          6
SEQUENCE                              4          0          1
TYPE BODY                             2          0          0
RSRC CONSUMER GROUP                   1          0          1
NON-EXISTENT                          1          0          0
PUB_SUB                               1          0          8
FUNCTION                              1          0          0

17 rows selected.

If the number of reloads is high, you need to make sure that the relevant objects are pinned in the shared pool:

SQL> EXECUTE SYS.DBMS_SHARED_POOL.KEEP('object_name','flag');

You can use the following statements to pin a package first in the shared pool and then remove it, if necessary (the flag 'P', the default, denotes a package, procedure, or function):

SQL> EXECUTE SYS.DBMS_SHARED_POOL.KEEP('NEW_EMP.PKG','P');

SQL> EXECUTE SYS.DBMS_SHARED_POOL.UNKEEP('NEW_EMP.PKG','P');

You can check the number of objects of a given type, and the total memory they consume, using the following queries.

SQL> select count(*) from V$DB_OBJECT_CACHE
  2  where type = 'CURSOR';

  COUNT(*)
----------
      1431

SQL> select sum(sharable_mem) from V$DB_OBJECT_CACHE
  2  where type = 'CURSOR';

SUM(SHARABLE_MEM)
-----------------
         15300756

The size above is in bytes, so holding 1,431 cursors takes approximately 15MB of shared pool memory.

Ultimately, it all depends on the reports you gather from the database and on your initialization parameter settings. So check your database and tune it. Good luck !!!

The Dictionary Cache

The dictionary cache, as mentioned earlier, caches data dictionary information. This cache is much smaller than the library cache, and to increase or decrease it you modify the shared pool accordingly. If your library cache is satisfactorily configured, chances are that the dictionary cache is going to be fine too. You can get an idea about the efficiency of the dictionary cache by using the following query:

SQL> select sum(gets – getmisses – fixed) / sum(gets) from v$rowcache;

SUM(GETS-GETMISSES-FIXED)/SUM(GETS)
———————————–
.995904392

Usually, it's a good idea to shoot for a dictionary cache hit ratio as high as 95 to 99 percent, although Oracle itself sometimes seems to refer to a figure of 85 percent as being adequate. To increase the dictionary cache hit ratio, you simply increase the shared pool size for the instance.

References:

Expert Oracle Database 10g Administration – Sam R. Alapati

Oracle Performance Tuning Guide

Oracle Data Dictionary Reference
