Fixing SQL Plans: The hard way – Part 2

In my previous article – https://avdeo.com/2012/06/20/fixing-sql-plans-the-hard-way-part-1/ – I showed you a way to fix a query's plan when a good plan for that query is available in some other similar prod DB.

But what if you don't have any other prod DB, or the query is so unique that it runs in only one database?

Fixing such SQLs needs a deeper look into the SQL and its plan.

We have various methods to fix such SQLs, like:

– Creating an index

– Adding a hint (maybe at the code level)

– Gathering stats, etc.

Let's say you have all the indexes in place, but the query is not picking an index. There could be many reasons why an index is not picked by a query. Common ones include incorrect or missing stats for the index, or an optimizer_index_cost_adj value that is too high.

Changing any parameter, gathering stats, or making any other change to the optimizer environment can have an adverse effect on other queries.

At least in prod, it's not advisable to change any part of the optimizer environment.

One of the best ways to fix queries in such a scenario is to use a hint so that the index will be picked by the CBO (cost-based optimizer).

Having said that, even if we use hints in the query so that the CBO picks the right index, how are we going to push such a plan to the actual query? We cannot (and should not) change application code and put a hint in it.

The query should essentially remain the same, but it should pick the index.

Fixing a query using hints

We want a solution where we put hints in a copy of the query to generate the desired plan, and then make the original SQL ID pick that newly generated plan.

Here is the example:

SQL>create table T as select * from dba_objects;

Table created.

SQL>col name format a30;
SQL>col value format a30;
SQL>select name, value from v$parameter where name = 'optimizer_index_cost_adj';

NAME                   VALUE
------------------------------ ------------------------------
optimizer_index_cost_adj       1

SQL>alter session set optimizer_index_cost_adj = 10000;

Session altered.

SQL>create index T_IDX on T(data_object_id);

Index created.

SQL>

SQL>exec dbms_stats.gather_table_stats('ORACLE_DBA','T');

PL/SQL procedure successfully completed.

SQL>exec dbms_stats.gather_index_stats('ORACLE_DBA','T_IDX');

PL/SQL procedure successfully completed.

SQL>select count(1) from T where data_object_id > 65000 and status = 'VALID';

  COUNT(1)
----------
     32103

SQL>select count(1) from T where data_object_id > 65000 and status = 'VALID';

  COUNT(1)
----------
     32103

SQL>select sql_id from v$sql where sql_text like 'select count(1) from T where data_object_id > 65000 and status%';

SQL_ID
-------------
cj4sqr25b6b8k

SQL>@explain
Enter SQL ID:- cj4sqr25b6b8k

PL/SQL procedure successfully completed.

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID    cj4sqr25b6b8k, child number 0
-------------------------------------
select count(1) from T where data_object_id > 65000 and status = 'VALID'

Plan hash value: 2966233522

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |   258 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |    11 |            |          |
|*  2 |   TABLE ACCESS FULL| T    | 17690 |   190K|   258   (1)| 00:00:04 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(("STATUS"='VALID' AND "DATA_OBJECT_ID">65000))

19 rows selected.

SQL>

Now we know that since the optimizer_index_cost_adj value is too high, the optimizer will not try to use index T_IDX on the data_object_id column, because the indexed plan looks too expensive.

But what if we want our query to use the index? This may not be a perfect test case, but there are scenarios where using the index will actually improve the performance of the query. The CBO may not be able to see that (based on cost), but we know it.

So let's force index usage with a hint.

SQL>select /*+ index(T T_IDX) */ count(1) from T where data_object_id > 65000 and status = 'VALID';

  COUNT(1)
----------
     32103

SQL>select /*+ index(T T_IDX) */ count(1) from T where data_object_id > 65000 and status = 'VALID';

  COUNT(1)
----------
     32103

SQL>select sql_id from v$sql where sql_text like 'select /*+ index(T T_IDX) */ count(1) from T where data_object_id > 65000 and status%';

SQL_ID
-------------
dut61hdv6b12t

SQL>@explain
Enter SQL ID:- dut61hdv6b12t

PL/SQL procedure successfully completed.

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID    dut61hdv6b12t, child number 0
-------------------------------------
select /*+ index(T T_IDX) */ count(1) from T where data_object_id >
65000 and status = 'VALID'

Plan hash value: 1789076273

--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |       |       |  267K (100)|          |
|   1 |  SORT AGGREGATE              |       |     1 |    11 |            |          |
|*  2 |   TABLE ACCESS BY INDEX ROWID| T     | 17690 |   190K|  267K   (1)| 00:53:31 |
|*  3 |    INDEX RANGE SCAN          | T_IDX | 35380 |       |  8253   (1)| 00:01:40 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("STATUS"='VALID')
   3 - access("DATA_OBJECT_ID">65000)

22 rows selected.

SQL>

We can see that the index hint has forced a plan that uses the index.
Now we can easily get the required hints from the v$sql_plan view, since it has the other_xml column.
You can use the same SQL I mentioned in the previous post (https://avdeo.com/2012/06/20/fixing-sql-plans-the-hard-way-part-1/) to get the required hints:

select  CHR(9)||''''
        ||regexp_replace(extractvalue(value(d),'/hint'),'''','''''')
        || ''','
from    xmltable('/*/outline_data/hint'
                passing (select     xmltype(other_xml) as xmlval
                        from        v$sql_plan
                        where       sql_id = '&GOOD_SQL_ID'
                        and         CHILD_NUMBER = &CHILD_NO
                        and         other_xml is not null)) d;

In this case GOOD_SQL_ID = dut61hdv6b12t and child number = 0

    'IGNORE_OPTIM_EMBEDDED_HINTS',
    'OPTIMIZER_FEATURES_ENABLE(''11.2.0.2'')',
    'DB_VERSION(''11.2.0.2'')',
    'OPT_PARAM(''_b_tree_bitmap_plans'' ''false'')',
    'OPT_PARAM(''_optim_peek_user_binds'' ''false'')',
    'OPT_PARAM(''_optimizer_connect_by_cost_based'' ''false'')',
    'OPT_PARAM(''optimizer_index_cost_adj'' 10000)',
    'OPT_PARAM(''optimizer_index_caching'' 80)',
    'OUTLINE_LEAF(@"SEL$1")',
    'INDEX_RS_ASC(@"SEL$1" "T"@"SEL$1" ("T"."DATA_OBJECT_ID"))',
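
As a quick cross-check, the same outline hints can also be displayed straight from the cursor cache with DBMS_XPLAN. This is not how the profile is built here, just a convenient way to eyeball the hints (child number 0, as above):

select * from table(dbms_xplan.display_cursor('dut61hdv6b12t', 0, 'TYPICAL +OUTLINE'));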

The rest of the procedure remains the same. The only change I made in the PL/SQL code below, compared to the code in my previous post, is that I replaced the dba_hist_sqltext table with v$sql and the dba_hist_sql_plan table with the v$sql_plan view. This is because the new query we ran with hints may not be part of the DBA_HIST* views yet; being new, it will be present only in the v$ views.

The following PL/SQL code can be used:

declare
    ar_profile_hints sys.sqlprof_attr;
begin
    ar_profile_hints := sys.sqlprof_attr(
    'BEGIN_OUTLINE_DATA',
    'IGNORE_OPTIM_EMBEDDED_HINTS',
    'OPTIMIZER_FEATURES_ENABLE(''11.2.0.2'')',
    'DB_VERSION(''11.2.0.2'')',
    'OPT_PARAM(''_b_tree_bitmap_plans'' ''false'')',
    'OPT_PARAM(''_optim_peek_user_binds'' ''false'')',
    'OPT_PARAM(''_optimizer_connect_by_cost_based'' ''false'')',
    'OPT_PARAM(''optimizer_index_cost_adj'' 10000)',
    'OPT_PARAM(''optimizer_index_caching'' 80)',
    'OUTLINE_LEAF(@"SEL$1")',
    'INDEX_RS_ASC(@"SEL$1" "T"@"SEL$1" ("T"."DATA_OBJECT_ID"))',
    'END_OUTLINE_DATA');
    for sql_rec in (
        select t.sql_id, t.sql_text
        from   v$sql t, v$sql_plan p
        where  t.sql_id = p.sql_id
        and    p.sql_id = '&SQL_ID_TO_FIX'
        and    p.plan_hash_value = &BAD_PLAN_HASH_VALUE
        and    p.parent_id is null
    )
    loop
        DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
            sql_text => sql_rec.sql_text,
            profile  => ar_profile_hints,
            name     => 'PROFILE_&&SQL_ID_TO_FIX');
    end loop;
end;
/
Enter value for sql_id_to_fix: cj4sqr25b6b8k
Enter value for bad_plan_hash_value: 2966233522
Enter value for sql_id_to_fix: cj4sqr25b6b8k

PL/SQL procedure successfully completed.

In this case, SQL_ID_TO_FIX is our original SQL ID (cj4sqr25b6b8k) and BAD_PLAN_HASH_VALUE is the plan hash of the original SQL (2966233522).
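
At this point the profile should be visible in the dictionary. A quick sanity check (the name follows the PROFILE_<sql_id> convention used in the PL/SQL block above):

select name, status, created
from   dba_sql_profiles
where  name = 'PROFILE_cj4sqr25b6b8k';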

So the code at the end of my previous post – https://avdeo.com/2012/06/20/fixing-sql-plans-the-hard-way-part-1/ – can be run on the same database instead of on another database. The only change is to use the v$ views instead of the DBA_HIST* views.

Now let's check the plan of the original query.

First purge the SQL from the shared pool using "sys.dbms_shared_pool.purge" so that the old plan gets flushed out, and then run the query to see the new plan.

 

SQL>@purgesql
Enter SQL_ID:- cj4sqr25b6b8k

PL/SQL procedure successfully completed.
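
The @purgesql script itself is not shown in this post; a minimal sketch of what such a script might do (dbms_shared_pool.purge identifies a cursor by its 'address,hash_value' string, with flag 'C' for cursor):

declare
  l_name varchar2(100);
begin
  -- build the 'address,hash_value' key that dbms_shared_pool.purge expects
  select address || ',' || hash_value
  into   l_name
  from   v$sqlarea
  where  sql_id = '&&SQL_ID';

  sys.dbms_shared_pool.purge(l_name, 'C');
end;
/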

SQL>select count(1) from T where data_object_id > 65000 and status = 'VALID';

  COUNT(1)
----------
     32103

SQL>select count(1) from T where data_object_id > 65000 and status = 'VALID';

  COUNT(1)
----------
     32103

SQL>@explain
Enter SQL ID:- cj4sqr25b6b8k

PL/SQL procedure successfully completed.


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID    cj4sqr25b6b8k, child number 0
-------------------------------------
select count(1) from T where data_object_id > 65000 and status = 'VALID'

Plan hash value: 1789076273

--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |       |       |  267K (100)|          |
|   1 |  SORT AGGREGATE              |       |     1 |    11 |            |          |
|*  2 |   TABLE ACCESS BY INDEX ROWID| T     | 17690 |   190K|  267K   (1)| 00:53:31 |
|*  3 |    INDEX RANGE SCAN          | T_IDX | 35380 |       |  8253   (1)| 00:01:40 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("STATUS"='VALID')
   3 - access("DATA_OBJECT_ID">65000)

Note
-----
   - SQL profile PROFILE_cj4sqr25b6b8k used for this statement


25 rows selected.

SQL>

 

We basically simulated a different plan (using the index) to get the required hints from V$SQL_PLAN. We then forced those hints onto the original SQL by creating a profile, so that the original SQL starts using the index.

Hope the above procedure is clear !!


Auto-Capture Baseline Behaviour

This article demonstrates auto-capture baseline behavior, which I was surprised to find many people are not aware of.

Auto-capture of baselines behaves in the following way.

Case 1) optimizer_capture_sql_plan_baselines parameter is TRUE

If you have the optimizer_capture_sql_plan_baselines parameter set to TRUE, baselines for SQLs will be captured automatically when a SQL query runs 2 or more times. I believe all of us know about this feature.

Case 2) optimizer_capture_sql_plan_baselines parameter is FALSE

If the optimizer_capture_sql_plan_baselines parameter is set to FALSE, many people think that no new baselines will be captured in the database. This is WRONG. Even when this parameter is set to FALSE, baselines will be captured in the following case:

*** If a query has one or more baselines with ENABLED=YES ***

Let's take an example.

SQL>create table T as select * from dba_objects;

Table created.

SQL>explain plan for 
  2  select count(1) from T where data_object_id between 1000 and 2000;

Explained.

SQL>@display

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2966233522

-----------------------------------
| Id  | Operation          | Name |
-----------------------------------
|   0 | SELECT STATEMENT   |      |
|   1 |  SORT AGGREGATE    |      |
|*  2 |   TABLE ACCESS FULL| T    |
-----------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATA_OBJECT_ID"<=2000 AND "DATA_OBJECT_ID">=1000)

Note
-----
   - rule based optimizer used (consider using cbo)

18 rows selected.

SQL>select count(1) from T where data_object_id between 1000 and 2000;

  COUNT(1)
----------
     0

SQL>/

  COUNT(1)
----------
     0
         
SQL>select sql_id from v$sql where sql_text like 'select count(1) from T where data_object_id between 1000 and%';

SQL_ID
-------------
7fbxxz894w0mg

SQL>@baseline
Enter sql ID:- 7fbxxz894w0mg

SQL_TEXT
----------------------------------------------------------------------------------------------------
select count(1) from T where data_object_id between 1000 and 2000


SQL_HANDLE               PLAN_NAME              CREATOR          ORIGIN         LAST_MODIFIED            ENA ACC FIX
------------------------------ ------------------------------ --------------- -------------- ------------------------------ --- --- ---
SYS_SQL_46f1e9bfbc331557       SYS_SQL_PLAN_bc3315573fdbb376  ORACLE_DBA      AUTO-CAPTURE   26-JUN-12 11.30.08.000000 AM   YES YES NO

SQL_HANDLE            PLAN_NAME                  PLAN_HASH_VALUE     ENABLED    ACCEPTED     FIXED
-------------------------   ----------------------------      --------------     -------    -------     -------
SYS_SQL_46f1e9bfbc331557    SYS_SQL_PLAN_bc3315573fdbb376     2966233522      YES         YES       NO

PL/SQL procedure successfully completed.

SQL>show parameters  capture

optimizer_capture_sql_plan_baselines    boolean         TRUE

SQL>alter system set optimizer_capture_sql_plan_baselines=false;

System altered.

SQL>create index T_IDX on T(data_object_id);

Index created.

SQL>select count(1) from T where data_object_id between 1000 and 2000;

  COUNT(1)
----------
     0

SQL>/

  COUNT(1)
----------
     0

         
SQL>@baseline
Enter sql ID:- 7fbxxz894w0mg

SQL_TEXT
----------------------------------------------------------------------------------------------------
select count(1) from T where data_object_id between 1000 and 2000


SQL_HANDLE               PLAN_NAME              CREATOR          ORIGIN         LAST_MODIFIED            ENA ACC FIX
------------------------------ ------------------------------ --------------- -------------- ------------------------------ --- --- ---
SYS_SQL_46f1e9bfbc331557       SYS_SQL_PLAN_bc3315573fdbb376  ORACLE_DBA      AUTO-CAPTURE   26-JUN-12 11.30.08.000000 AM   YES YES NO
SYS_SQL_46f1e9bfbc331557       SYS_SQL_PLAN_bc331557ded8ae2f  ORACLE_DBA      AUTO-CAPTURE   26-JUN-12 11.31.20.000000 AM   YES NO  NO

SQL_HANDLE            PLAN_NAME                  PLAN_HASH_VALUE     ENABLED    ACCEPTED     FIXED
-------------------------   ----------------------------      --------------     -------    -------     -------
SYS_SQL_46f1e9bfbc331557    SYS_SQL_PLAN_bc3315573fdbb376     2966233522      YES         YES       NO
SYS_SQL_46f1e9bfbc331557    SYS_SQL_PLAN_bc331557ded8ae2f     293504097      YES        NO         NO

PL/SQL procedure successfully completed.

SQL>

This proves that even when the optimizer_capture_sql_plan_baselines parameter is set to FALSE, baselines can still be captured for our old SQLs.

If we think about this feature, this is perfect.

Imagine you are rolling out a new module and new queries are going to run against new tables.

At first you will not have enough data in the new tables, so your queries will do FTS (Full Table Scans) and will not use indexes, even if you create them. Based on the value of the optimizer_index_cost_adj parameter, the optimizer might decide to get the data directly from the table rather than going to the index and then the table.

If you have optimizer_capture_sql_plan_baselines set to TRUE, this is going to create baselines for those queries (which will be automatically accepted, since the first baseline for a statement is accepted automatically).

At a later point, when the data grows and you really want the indexes to be used, your query will still be doing FTS (because of the auto-captured baseline in ACCEPTED state). So even though new baselines are getting captured, they are not getting used.

So you have to manually (or automatically) evolve those baselines to correct the plans.
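
Evolving is a single call; a minimal sketch (the SQL handle for the statement comes from DBA_SQL_PLAN_BASELINES):

SELECT DBMS_SPM.evolve_sql_plan_baseline(sql_handle => '&SQL_HANDLE') FROM dual;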

Instead, my approach would be to enable the optimizer_capture_sql_plan_baselines parameter only when your database is really baked and you have enough data available. That way it will create baselines that are right the first time.

If your database is still not baked enough and you are looking for plan stability, I would suggest getting baselines from another similar prod database that has been baked for a long time.

Once you have baselines in place for almost all queries, turn off the optimizer_capture_sql_plan_baselines parameter. This will prevent baseline capture for new queries; at the same time, it doesn't stop new baselines from being captured for old SQLs.

Hope this helps !!

Oracle SQL Plan Management – Part 1

SQL Plan Management – Oracle Database 11g

SQL Plan Management, or SPM, is a new feature introduced in 11g. We will take a detailed look at SPM in the following sections.
Before starting with a practical example, let's see some theory.

What is SQL Plan Management?

SPM is a new feature introduced in Oracle Database 11g which stores baselines (plans + hints) inside the database.
SPM allows an Oracle DBA to capture and preserve the most efficient execution plans for any SQL statement, thus limiting the impact of refreshed optimizer statistics, changes to existing applications, and even upgraded database versions.

Why do we need SPM?

SPM is basically used for plan stability. If you have a database environment with several queries running, you always want your queries to keep running the way they are, irrespective of the changes that you make to the environment.
For example, if you upgrade your database, your queries might behave differently because of the change in the optimizer or the environment, and right after the upgrade you may have to fix many queries in your production DB.
To avoid this, SPM was introduced to give you plan stability. So even after a change in the environment, your queries will use the same plans as before. More plans (better or worse) may exist in the new environment, but the optimizer is not allowed to use those plans without you confirming the plan change.

What was wrong with outlines?

Well, outlines are about fixing the plan that the optimizer will use for a query. So if there is a change in the environment, the plan will still be the same.
There are certain situations where you want the plan to change with, let's say, a change in the data. What if your data gets skewed over time?

Initially you had 100 entries for month "DEC" out of 150 total entries, so the query was using FTS. Now, with the increase in data, there are 1 million records and the entries for month "DEC" are around 1000. In that case it makes sense for the optimizer to change the plan and start using an index scan instead of FTS.
But since you used an outline, you are forcing the optimizer to go for FTS. This is just an example; things can change in your environment, so an outline is not a very good approach for such situations.

What was wrong with profile?

Profiles are another feature, introduced in 10g, that can be implemented at the SQL level. Profiles are better than outlines in that they do not fix a plan for the entire life of the SQL. With a change in the environment, the plan can change even if a profile is used.

So whats the difference between profile and SQL plan baselines?

A profile is more of an advisor. It gives advice to the optimizer while the query is being optimized, providing the optimizer with more precise estimates. Profiles are more about correcting the optimizer so it picks the right plan when the underlying data is skewed or has changed drastically.
The goal is to arrive at the best execution plan for the SQL by giving very precise data to the optimizer. It is still the optimizer that decides what the plan should be, based on the information/hints it has received from the profile.

SQL plan baselines work differently. A baseline enforces the plan to be used for a SQL. It does not provide any estimates and does not help the optimizer in any other way. It just tells the optimizer to ignore everything else and use the plan we are giving it.
The optimizer simply follows the plan provided by the baseline. So here the baseline is the driving mechanism for getting the correct plan.

Finally, how are baselines different from outlines then?

Well, a baseline always keeps the optimized plans for your SQL. If there is a change in the environment, a new plan will be captured for the same SQL and stored in the baseline history.
Depending on the baseline parameter settings in your environment, the new plan will either be used automatically or you will have to evolve it before the optimizer can use it.

So in a way SQL plan baselines are a combination of outlines and profiles. They give plan stability similar to outlines, and they also allow capturing better plans in case the environment changes.

Not only that, SQL plan baselines give the DBA complete control over:

  1. Whether to capture new and better plans.
  2. Whether to use the new plans automatically, without the DBA's intervention.

So DBAs now have complete control of the environment. Far better than profiles and outlines.

SQL Plan Baseline Parameters:

Before we look at how to use SQL plan baselines, let's consider the significance of 2 important baseline parameters.

1) optimizer_capture_sql_plan_baselines – Default “FALSE”

This parameter is responsible for automatically capturing baselines for SQLs. If we set this parameter to TRUE, we are asking Oracle to automatically gather baselines for our SQL.
When you run a query for the first time (and a parsed version is not present in the shared pool), Oracle considers it a fresh new query and does not create a baseline.
When you run the query a second time, Oracle considers the query repetitive and automatically creates a baseline.
These baselines can be seen in the DBA_SQL_PLAN_BASELINES view.

If this parameter is set to FALSE, then we (DBAs) have to create baselines for the SQL manually. There are 2 procedures available for creating baselines manually.

1) Using DBMS_SPM.LOAD_PLANS_FROM_SQLSET

This procedure is usually used when we create a SQL tuning set and store our SQLs in it, normally before we upgrade the database to 11g. In our 10g database we create a SQL tuning set and store all our SQLs. Once we upgrade to 11g, we can create baselines from all the plans in the SQL tuning set. That way, whatever plans were effective in 10g will also be used in 11g, and there won't be any plan flips. A minimal sketch is shown below.
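
A minimal sketch of such a load, assuming a tuning set named MY_10G_STS (a placeholder name) already exists:

DECLARE
  l_plans PLS_INTEGER;
BEGIN
  l_plans := DBMS_SPM.load_plans_from_sqlset(
                 sqlset_name  => 'MY_10G_STS',   -- hypothetical tuning set name
                 sqlset_owner => 'SYS');
  DBMS_OUTPUT.put_line('Plans loaded: ' || l_plans);
END;
/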

2) Using DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE

This is used when we want to fix one of the SQLs currently running in our 11g database. We just load the plan from the cursor cache (shared pool) and create a baseline out of it. We need to give the SQL ID as input to this procedure, as in the sketch below.
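
A minimal sketch, loading the plan for one SQL ID from the cursor cache:

DECLARE
  l_plans PLS_INTEGER;
BEGIN
  l_plans := DBMS_SPM.load_plans_from_cursor_cache(
                 sql_id => '&SQL_ID');   -- optionally pass plan_hash_value to pick one plan
  DBMS_OUTPUT.put_line('Plans loaded: ' || l_plans);
END;
/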

We will see how to create a baseline using the manual method at a later point.

2) optimizer_use_sql_plan_baselines – Default “TRUE”

This parameter allows the optimizer to use the baselines listed in DBA_SQL_PLAN_BASELINES. If you set this parameter to FALSE, your 11g DB will start behaving just like a 10g DB:
any change in the environment might flip the plan. Keeping this parameter TRUE is important in 11g.

How to use SQL Plan Management baselines?

Let's take an example in a test database.

Table T with 1540 records.

SQL> select count(1) from t;

  COUNT(1)
----------
      1540

Data is skewed and distribution is as given below.

SQL> select col1, count(1) from t group by col1;

      COL1   COUNT(1)
---------- ----------
         1          1
         2          3
         3       1536

Gather the stats on the table

SQL> exec dbms_stats.gather_table_stats(OWNNAME=> 'ADVAITD_DBA',TABNAME => 'T', DEGREE => 6, GRANULARITY => 'ALL' ,CASCADE => TRUE , METHOD_OPT => 'FOR ALL INDEXED COLUMNS SIZE 254');

SQL>

Currently I don't have any baselines.

SQL> select count(1) from dba_sql_plan_baselines;

  COUNT(1)
----------
         0

SQL>

My baseline parameters setting is as below.

SQL> show parameters baselines

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_capture_sql_plan_baselines boolean     FALSE
optimizer_use_sql_plan_baselines     boolean     TRUE
SQL>

Let us first consider the auto capture utility for baselines.

AUTO Capture of baseline

SQL> alter session set optimizer_capture_sql_plan_baselines=TRUE;

Session altered.

SQL>

As mentioned earlier, we need to run the query 2 times in order to automatically create the baseline.

SQL> select * from t where col1 = 1;

      COL1 COL2                                               COL3
---------- -------------------------------------------------- -----------
         1 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 01-JUN-2011

SQL> select * from t where col1 = 1;

      COL1 COL2                                               COL3
---------- -------------------------------------------------- -----------
         1 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 01-JUN-2011

If we check DBA_SQL_PLAN_BASELINES we will see a sql baseline created.

SQL> select SQL_HANDLE, PLAN_NAME, ENABLED, ACCEPTED, FIXED
  2  from dba_sql_plan_baselines
  3  WHERE sql_text like 'select * from t%';

SQL_HANDLE                     PLAN_NAME                      ENA ACC FIX
------------------------------ ------------------------------ --- --- ---
SYS_SQL_1447ba3a1d83920f       SYS_SQL_PLAN_1d83920f94ecae5c  YES YES NO

Turning off auto SQL plan baseline capture

SQL> alter session set optimizer_capture_sql_plan_baselines=FALSE;

Session altered.

SQL>

The following statement shows the plan stored in the baseline. DBMS_XPLAN has a new function, DISPLAY_SQL_PLAN_BASELINE, which displays the baseline.

SQL> set line 999
SQL> set pagesize 999
SQL> select * from table(dbms_xplan.display_sql_plan_baseline(plan_name=>'SYS_SQL_PLAN_1d83920f94ecae5c'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

--------------------------------------------------------------------------------
SQL handle: SYS_SQL_1447ba3a1d83920f
SQL text: select * from t where col1 = 1
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Plan name: SYS_SQL_PLAN_1d83920f94ecae5c
Enabled: YES     Fixed: NO      Accepted: YES     Origin: AUTO-CAPTURE
--------------------------------------------------------------------------------

Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    15 |   735 |     9   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |    15 |   735 |     9   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("COL1"=1)

24 rows selected.

SQL>

Origin: AUTO-CAPTURE shown above tells us that this baseline was captured automatically.

Let's now create an index and gather stats on it.

SQL> create index t_idx on t(col1);

Index created.

SQL> exec DBMS_STATS.GATHER_INDEX_STATS(OWNNAME => 'ADVAITD_DBA', INDNAME=>'T_IDX');

Run the same query now. Since the index has been created, we expect the query to use it.

SQL> explain plan for
  2  select * from t where col1 = 1;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    15 |   735 |     9   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |    15 |   735 |     9   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("COL1"=1)

Note
-----
   - SQL plan baseline "SYS_SQL_PLAN_1d83920f94ecae5c" used for this statement

17 rows selected.

SQL>

The reason we are still seeing a full table scan is the NOTE at the end, which says "SQL plan baseline "SYS_SQL_PLAN_1d83920f94ecae5c" used for this statement".

Since we have a baseline created for this SQL, the plan is not allowed to change. This is the kind of stability that SQL plan baselines give.
But using the index would be beneficial in our case.

If we check DBA_SQL_PLAN_BASELINES, we can see that a new plan has been captured (PLAN_NAME = SYS_SQL_PLAN_1d83920fae82cf72), but it is not yet ACCEPTED. The plan is ENABLED, though.

SQL> select SQL_HANDLE, PLAN_NAME, ENABLED, ACCEPTED, FIXED
  2  from dba_sql_plan_baselines
  3  WHERE sql_text like 'select * from t%';

SQL_HANDLE                     PLAN_NAME                      ENA ACC FIX
------------------------------ ------------------------------ --- --- ---
SYS_SQL_1447ba3a1d83920f       SYS_SQL_PLAN_1d83920f94ecae5c  YES YES NO
SYS_SQL_1447ba3a1d83920f       SYS_SQL_PLAN_1d83920fae82cf72  YES NO  NO

We can check what the new plan looks like using dbms_xplan.display_sql_plan_baseline:

SQL> select * from table(dbms_xplan.display_sql_plan_baseline(plan_name=>'SYS_SQL_PLAN_1d83920fae82cf72'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

--------------------------------------------------------------------------------
SQL handle: SYS_SQL_1447ba3a1d83920f
SQL text: select * from t where col1 = 1
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Plan name: SYS_SQL_PLAN_1d83920fae82cf72
Enabled: YES     Fixed: NO      Accepted: NO      Origin: AUTO-CAPTURE
--------------------------------------------------------------------------------

Plan hash value: 470836197

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |    15 |   735 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T     |    15 |   735 |     1   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T_IDX |     6 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("COL1"=1)

25 rows selected.

SQL>

As seen above, the new plan uses the index. Let's evolve this plan now.
Evolving a plan involves evaluating its cost and accepting it if it turns out to be better than all the accepted plans for this query.

SQL> SELECT DBMS_SPM.evolve_sql_plan_baseline(sql_handle => 'SYS_SQL_1447ba3a1d83920f') from dual;

DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(SQL_HANDLE=>'SYS_SQL_1447BA3A1D83920F')
--------------------------------------------------------------------------------

-------------------------------------------------------------------------------
                        Evolve SQL Plan Baseline Report
-------------------------------------------------------------------------------

Inputs:
-------
  SQL_HANDLE = SYS_SQL_1447ba3a1d83920f
  PLAN_NAME  =
  TIME_LIMIT = DBMS_SPM.AUTO_LIMIT
  VERIFY     = YES
  COMMIT     = YES

Plan: SYS_SQL_PLAN_1d83920fae82cf72
-----------------------------------
  Plan was verified: Time used .01 seconds.
  Passed performance criterion: Compound improvement ratio >= 7.33
  Plan was changed to an accepted plan.

                      Baseline Plan      Test Plan     Improv. Ratio
                      -------------      ---------     -------------
  Execution Status:        COMPLETE       COMPLETE
  Rows Processed:                 1              1
  Elapsed Time(ms):               0              0
  CPU Time(ms):                   0              0
  Buffer Gets:                   22              3              7.33
  Disk Reads:                     0              0
  Direct Writes:                  0              0
  Fetches:                        0              0
  Executions:                     1              1

-------------------------------------------------------------------------------
                                 Report Summary
-------------------------------------------------------------------------------
Number of SQL plan baselines verified: 1.
Number of SQL plan baselines evolved: 1.

Sometimes your plan may not get evolved, because Oracle sees that other already-ACCEPTED plans are better than the plan you are trying to evolve.
But if you know your plan is better and still want to deploy it, you can do so by manually changing the ENABLED and ACCEPTED attributes, as shown below.

SET SERVEROUTPUT ON
DECLARE
  l_plans_altered  PLS_INTEGER;
BEGIN
  l_plans_altered := DBMS_SPM.alter_sql_plan_baseline(
    sql_handle      => 'SYS_SQL_1447ba3a1d83920f',
    plan_name       => 'SYS_SQL_PLAN_1d83920fae82cf72',
    attribute_name  => 'ENABLED',
    attribute_value => 'YES');

  DBMS_OUTPUT.put_line('Plans Altered: ' || l_plans_altered);
END;
/

SET SERVEROUTPUT ON
DECLARE
  l_plans_altered  PLS_INTEGER;
BEGIN
  l_plans_altered := DBMS_SPM.alter_sql_plan_baseline(
    sql_handle      => 'SYS_SQL_1447ba3a1d83920f',
    plan_name       => 'SYS_SQL_PLAN_1d83920fae82cf72',
    attribute_name  => 'ACCEPTED',
    attribute_value => 'YES');

  DBMS_OUTPUT.put_line('Plans Altered: ' || l_plans_altered);
END;
/

You can also set the value of the FIXED attribute using the above function. Here is the meaning of ENABLED, ACCEPTED and FIXED:

ENABLED  – 'YES' means the plan is available for use by the optimizer. It may or may not be used, depending on its accepted status.
ACCEPTED – 'YES' means the plan can be used by the optimizer while running the query. 'NO' means the optimizer will not use the plan.
FIXED    – 'YES' means the SQL plan baseline is not evolved over time. A fixed plan takes precedence over a non-fixed plan.
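
For completeness, if you ever want to back a plan out entirely, it can be removed with DBMS_SPM.DROP_SQL_PLAN_BASELINE (passing only the handle drops every plan under it). A short sketch using the handle and plan name from this example:

SET SERVEROUTPUT ON
DECLARE
  l_dropped PLS_INTEGER;
BEGIN
  l_dropped := DBMS_SPM.drop_sql_plan_baseline(
    sql_handle => 'SYS_SQL_1447ba3a1d83920f',
    plan_name  => 'SYS_SQL_PLAN_1d83920fae82cf72');
  DBMS_OUTPUT.put_line('Plans Dropped: ' || l_dropped);
END;
/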

Once you evolve the plan, you can see that plan is ACCEPTED now.

SQL> select SQL_HANDLE, PLAN_NAME, ENABLED, ACCEPTED, FIXED
  2  from dba_sql_plan_baselines
  3  WHERE sql_text like 'select * from t%';

SQL_HANDLE                     PLAN_NAME                      ENA ACC FIX
------------------------------ ------------------------------ --- --- ---
SYS_SQL_1447ba3a1d83920f       SYS_SQL_PLAN_1d83920f94ecae5c  YES YES NO
SYS_SQL_1447ba3a1d83920f       SYS_SQL_PLAN_1d83920fae82cf72  YES YES NO

Now if you run the explain plan you can see Index T_IDX is getting used.

SQL> explain plan for
  2  select * from t where col1 = 1;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 470836197

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |    15 |   735 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T     |    15 |   735 |     1   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T_IDX |     6 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("COL1"=1)

Note
-----
   - SQL plan baseline "SYS_SQL_PLAN_1d83920fae82cf72" used for this statement

18 rows selected.

SQL>

Hope this helps.

Part 2 can be viewed at: https://avdeo.com/2011/06/07/oracle-sql-plan-management-%E2%80%93-part-2/


Analyze Index Vs. Gather_Index_Stats – Oracle Database 10g

One of the differences I noticed between the analyze index command and the gather_index_stats procedure is that when we use the analyze index command, it updates the index statistics with the number of leaf blocks equal to the number of leaf blocks below the HWM (high water mark).

However, if we use gather_index_stats, the statistics show the number of leaf blocks equal to the number of leaf blocks that actually hold data.

Here is an illustration.

Create a table

SQL> create table t1 pctfree 99 pctused 1  
2  as  
3  select rownum id,  
4  trunc(100 * dbms_random.normal) val,  
5  rpad('X',100) padding  
6  from all_objects where rownum <= 10000;

Table created.

I intentionally created table with pctfree as 99 so that each block will have 1 row and it uses 10000 blocks to store 10000 rows.

Create an index

SQL> create index t1_i on t1(val) ;
Index created.

Check the number of leaf blocks

SQL> select BLEVEL, LEAF_BLOCKS, DISTINCT_KEYS from dba_indexes where index_name = 'T1_I';

    BLEVEL LEAF_BLOCKS DISTINCT_KEYS
---------- ----------- -------------
         1          21           563

We have around 1600 rows whose val > 100.

SQL> select count(*) from t1 where val > 100;

  COUNT(*)
----------
      1598

Delete rows from the table. This will also delete entries from the index, and some of the leaf blocks will become empty.

SQL> delete from t1 where val > 100;
1598 rows deleted.

Analyze index

SQL> analyze index t1_i validate structure;
Index analyzed.

Check again the number of leaf blocks

SQL> select BLEVEL, LEAF_BLOCKS, DISTINCT_KEYS from dba_indexes where index_name = 'T1_I';

    BLEVEL LEAF_BLOCKS DISTINCT_KEYS
---------- ----------- -------------
         1          21           563

The number of leaf blocks reported is still 21.

Gather stats on index

SQL> exec dbms_stats.gather_index_stats('ADVAITD_DBA','T1_I');
PL/SQL procedure successfully completed.

Check the number of leaf blocks again

SQL> select BLEVEL, LEAF_BLOCKS, DISTINCT_KEYS from dba_indexes where index_name = 'T1_I';

    BLEVEL LEAF_BLOCKS DISTINCT_KEYS
---------- ----------- -------------
         1          18           379

Now the number of leaf blocks reported is 18. This plays quite a big role for the optimizer in creating plans for queries. If we don't have correct stats for an index, it may lead to expensive plans.

So it's better to use gather_index_stats rather than analyze index. (Gathering table stats with CASCADE, as sketched below, refreshes the index stats as well.)
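
A minimal sketch of a single call that refreshes both table and index statistics together:

-- cascade => TRUE gathers stats on the table's indexes as part of the same call
exec dbms_stats.gather_table_stats(ownname => 'ADVAITD_DBA', tabname => 'T1', cascade => TRUE);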

Hope this helps !!

Row Chaining and Migration

Overview

If you notice poor performance in your Oracle database, row chaining and migration may be one of several reasons, and we can prevent some of them by properly designing and/or diagnosing the database. Row migration and row chaining are two potential problems that can be prevented, and by diagnosing them suitably we can improve database performance. The main considerations are:

  • What is row migration and row chaining?
  • How to identify row migration and row chaining?
  • How to avoid row migration and row chaining?

Migrated rows affect OLTP systems which use indexed reads to read singleton rows. In the worst case, you can add an extra I/O to all reads, which would be really bad. Truly chained rows affect index reads and full table scans.

Oracle Block

The Operating System block size is the minimum unit of operation (read/write) by the OS and is a property of the OS file system. While creating an Oracle database we have to choose the «Data Base Block Size» as a multiple of the Operating System block size. The minimum unit of operation (read/write) by the Oracle database is this «Oracle block», not the OS block. Once set, the «Data Base Block Size» cannot be changed during the life of the database (although from Oracle 9i onwards you can create additional tablespaces with non-default block sizes). To decide on a suitable block size for the database, we take into consideration factors like the size of the database and the number of concurrent transactions expected. The database block has the following structure (within the whole database structure).

[Figure: Oracle data block structure]

Header

The header contains general information about the data, i.e. the block address and the type of segment (table, index, etc.). It also contains information about the table and the actual row addresses that hold the data.

Free Space

Space allocated for future update/insert operations. Generally affected by the values of PCTFREE and PCTUSED parameters.

Data

Actual row data.

FREELIST, PCTFREE and PCTUSED

While creating or altering any table/index, Oracle uses two storage parameters, plus the FREELIST structure, for space control.

  • PCTFREE – The percentage of space reserved for future update of existing data.
  • PCTUSED – The percentage of minimum space used for insertion of new row data. This value determines when the block gets back into the FREELISTS structure.
  • FREELIST – Structure where Oracle maintains a list of all free available blocks. Oracle will first search for a free block in the FREELIST and then the data is inserted into that block. The availability of the block in the FREELIST is decided by the PCTFREE value. Initially an empty block will be listed in the FREELIST structure, and it will continue to remain there until the free space reaches the PCTFREE value.

When the free space reaches the PCTFREE value, the block is removed from the FREELIST; it is re-listed in the FREELIST when the volume of data in the block falls below the PCTUSED value. Oracle uses the FREELIST to improve performance: for every insert operation, Oracle only needs to search the FREELIST for a free block instead of searching all blocks. A hypothetical example is sketched below.
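
For illustration, a hypothetical table that reserves 20% of each block for row growth and returns a block to the FREELIST once it drops below 40% used (these parameters apply to manual segment space management):

create table free_space_demo (     -- hypothetical demo table
  id   number,
  note varchar2(200)
) pctfree 20 pctused 40;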

Row Migration

We will migrate a row when an update to that row would cause it to not fit on the block anymore (with all of the other data that exists there currently). A migration means that the entire row will move and we just leave behind the «forwarding address». So, the original block just has the rowid of the new block and the entire row is moved.

[Figure: row migration]

Full Table Scans are not affected by migrated rows

The forwarding addresses are ignored. We know that as we continue the full scan, we’ll eventually get to that row so we can ignore the forwarding address and just process the row when we get there. Hence, in a full scan migrated rows don’t cause us to really do any extra work — they are meaningless.

Index reads will cause additional I/Os on migrated rows

When we index-read into a table, a migrated row will cause an additional I/O. That is because the index tells us «goto file X, block Y, slot Z to find this row», but when we get there we find a message that says «well, really goto file A, block B, slot C to find this row». We have to do another I/O (logical or physical) to find the row.

Row Chaining

Row chaining occurs when a row is too large to fit into a single database block. For example, if you use a 4KB block size for your database and you need to insert a row of 8KB into it, Oracle will use 3 blocks and store the row in pieces. Some conditions that will cause row chaining are:

  • Tables whose rowsize exceeds the blocksize.
  • Tables with LONG and LONG RAW columns.
  • Tables with more than 255 columns, as Oracle breaks such wide rows up into pieces.

So, instead of just having a forwarding address on one block and the data on another, we have data on two or more blocks.

[Figure: row chaining]

Chained rows affect us differently; here it depends on the data we need. If we had a row with two columns spread over two blocks, the query select column1 from table (where column1 is in block 1) would not cause a «table fetch continued row»: it does not actually have to get column2, so it does not follow the chained row all the way out. On the other hand, if we ask for select column2 from table, and column2 is in block 2 due to row chaining, then you would in fact see a «table fetch continued row».

Example

The following example was published by Tom Kyte; it shows row migration and chaining. We are using a 4k block size:

select name, value from v$parameter where name = 'db_block_size';

NAME            VALUE
--------------- ------
db_block_size   4096

Create the following table with CHAR fixed columns:

create table row_mig_chain_demo (
x int primary key,
a char(1000),
b char(1000),
c char(1000),
d char(1000),
e char(1000)
);

That is our table. The char(1000)’s will let us easily cause rows to migrate or chain. We used 5 columns a,b,c,d,e so that the total rowsize can grow to about 5K, bigger than one block, ensuring we can truly chain a row.

insert into row_mig_chain_demo (x) values (1);
insert into row_mig_chain_demo (x) values (2);
insert into row_mig_chain_demo (x) values (3);
commit;

We are not interested in seeing a, b, c, d, e – just in fetching them. They are really wide, so we'll suppress their display.

column a noprint
column b noprint
column c noprint
column d noprint
column e noprint

select * from row_mig_chain_demo;

         X
----------
         1
         2
         3

Check for chained rows:

select a.name, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) = 'table fetch continued row';

NAME                             VALUE
----------------------------     ----------
table fetch continued row        0

Now that is to be expected, the rows came out in the order we put them in (Oracle full scanned this query, it processed the data as it found it). Also expected is the table fetch continued row is zero. This data is so small right now, we know that all three rows fit on a single block. No chaining.

[Figure: all three rows stored in a single block]

Demonstration of the Row Migration

Now let's do some updates in a specific way. We want to demonstrate the row migration issue and how it affects the full scan:

update row_mig_chain_demo set a = 'z1', b = 'z2', c = 'z3' where x = 3;
commit;
update row_mig_chain_demo set a = 'y1', b = 'y2', c = 'y3' where x = 2;
commit;
update row_mig_chain_demo set a = 'w1', b = 'w2', c = 'w3' where x = 1;
commit;

Note the order of updates, we did last row first, first row last.

select * from row_mig_chain_demo;

         X
----------
         3
         2
         1

select a.name, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) = 'table fetch continued row';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table fetch continued row                                                 0

Interesting, the rows came out «backwards» now. That is because we updated row 3 first. It did not have to migrate, but it filled up block 1. We then updated row 2; with row 3 hogging all of the space, it had to migrate to block 2. We then updated row 1, and it migrated to block 3. We migrated rows 2 and 1, leaving 3 where it started.

So, when Oracle full scanned the table, it found row 3 on block 1 first, row 2 on block 2 second and row 1 on block 3 third. It ignored the head rowid piece on block 1 for rows 1 and 2 and just found the rows as it scanned the table. That is why the table fetch continued row is still zero. No chaining.

[Figure: rows 1 and 2 migrated to new blocks]

So, let's see a migrated row affecting the «table fetch continued row»:

select * from row_mig_chain_demo where x = 3;

         X
----------
         3

select a.name, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) = 'table fetch continued row';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table fetch continued row                                                 0

This was an index range scan / table access by rowid using the primary key.  We didn’t increment the «table fetch continued row» yet since row 3 isn’t migrated.

select * from row_mig_chain_demo where x = 1;

         X
----------
         1

select a.name, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) = 'table fetch continued row';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table fetch continued row                                                 1

Row 1 is migrated; by using the primary key index, we forced a «table fetch continued row».

Demonstration of the Row Chaining

update row_mig_chain_demo set d = 'z4', e = 'z5' where x = 3;
commit;

Row 3 no longer fits on block 1. With d and e set, the rowsize is about 5k; it is truly chained.

select x,a from row_mig_chain_demo where x = 3;

         X
----------
         3

select a.name, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) = 'table fetch continued row';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table fetch continued row                                                 1

We fetched columns «x» and «a» from row 3, which are located in the «head» of the row; this will not cause a «table fetch continued row». No extra I/O to get them.

[Figure: row 3 chained across blocks]

select x,d,e from row_mig_chain_demo where x = 3;

select a.name, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) = 'table fetch continued row';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table fetch continued row                                                 2

Now we fetch from the «tail» of the row via the primary key index. This increments «table fetch continued row» by one, because Oracle has to put the row back together from its head to its tail to get that data.

Now let’s see a full table scan – it is affected as well:

select * from row_mig_chain_demo;

         X
----------
         3
         2
         1

select a.name, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) = 'table fetch continued row';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table fetch continued row                                                 3

The «table fetch continued row» was incremented here because of row 3: we had to assemble it to get the trailing columns. Rows 1 and 2, even though they are migrated, don't increment «table fetch continued row», since we full scanned.

select x,a from row_mig_chain_demo;

         X
----------
         3
         2
         1

select a.name, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) = 'table fetch continued row';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table fetch continued row                                                 3

No additional «table fetch continued row», since we didn't have to assemble row 3; we just needed the first two columns.

select x,e from row_mig_chain_demo;

         X
----------
         3
         2
         1

select a.name, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) = 'table fetch continued row';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table fetch continued row                                                 4

But by fetching d and e, we incremented the «table fetch continued row». We most likely have only migrated rows, but even if they are truly chained, the columns we were selecting before were at the front of the row.

So, how can you decide whether you have migrated or truly chained rows?

Count the last column in the table. That will force Oracle to construct the entire row.

select count(e) from row_mig_chain_demo;

  COUNT(E)
----------
         1

select a.name, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) = 'table fetch continued row';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
table fetch continued row                                                 5

Analyze the table to verify its chain count:

analyze table row_mig_chain_demo compute statistics;

select chain_cnt
from user_tables
where table_name = 'ROW_MIG_CHAIN_DEMO';

 CHAIN_CNT
----------
         3

Three rows are chained. As we saw, 2 of them are migrated (rows 1 and 2) and one is truly chained (row 3).

Total Number of «table fetch continued row» since instance startup?

The V$SYSSTAT view tells you how many times, since the system (database) was started, a «table fetch continued row» was done, over all tables.

sqlplus system/<password>

SELECT 'Chained or Migrated Rows = '||value
FROM v$sysstat
WHERE name = 'table fetch continued row';

Chained or Migrated Rows = 31637

You could have 1 table with 1 chained row that was fetched 31'637 times. You could have 31'637 tables, each with a chained row, each of which was fetched once. You could have any combination of the above.

Also, 31'637 – maybe that's good, maybe that's bad. It is a function of:

  • how long has the database has been up
  • how many rows is this as a percentage of total fetched rows.
    For example, if 0.001% of your fetches are «table fetch continued row», who cares!

Therefore, always compare the total fetched rows against the continued rows.
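
A minimal sketch of that comparison, using the statistic names from the listing below:

select round(100 * c.value / nullif(s.value + r.value, 0), 4) pct_continued
from   v$sysstat c, v$sysstat s, v$sysstat r
where  c.name = 'table fetch continued row'
and    s.name = 'table scan rows gotten'
and    r.name = 'table fetch by rowid';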

select name,value from v$sysstat where name like '%table%';

NAME                                                                  VALUE
—————————————————————- ———-
table scans (short tables)                                           124338
table scans (long tables)                                              1485
table scans (rowid ranges)                                                0
table scans (cache partitions)                                           10
table scans (direct read)                                                 0
table scan rows gotten                                             20164484
table scan blocks gotten                                            1658293
table fetch by rowid                                                1883112
table fetch continued row                                             31637
table lookup prefetch client count                                        0
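
As a rough sketch of that comparison (using the statistic names from the output above; total fetched rows is approximated here as full-scan rows plus rowid fetches):

select round(
         100 * (select value from v$sysstat where name = 'table fetch continued row')
             / nullif( (select value from v$sysstat where name = 'table scan rows gotten')
                     + (select value from v$sysstat where name = 'table fetch by rowid'), 0)
       , 4) pct_continued
from dual;

If this percentage is tiny (say 0.001%), chained and migrated rows are not your problem.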

How many Rows in a Table are chained?

The USER_TABLES view tells you, immediately after an ANALYZE (the column is null otherwise), how many rows in the table are chained or migrated.

analyze table row_mig_chain_demo compute statistics;

select chain_cnt,
round(chain_cnt/num_rows*100,2) pct_chained,
avg_row_len, pct_free , pct_used
from user_tables
where table_name = 'ROW_MIG_CHAIN_DEMO';

 CHAIN_CNT PCT_CHAINED AVG_ROW_LEN   PCT_FREE   PCT_USED
---------- ----------- ----------- ---------- ----------
         3         100        3691         10         40

PCT_CHAINED shows 100%, which means all rows are chained or migrated.

How to avoid Chained and Migrated Rows?

Increasing PCTFREE can help to avoid migrated rows. If you leave more free space available in the block, rows have room to grow. You can also reorganize or re-create tables and indexes that have high deletion rates. If tables frequently have rows deleted, data blocks can end up with partially free space. If rows are inserted and later expanded, the inserted rows might land in blocks with deleted rows but still not have enough room to expand. Reorganizing the table ensures that the main free space consists of totally empty blocks.
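
As a minimal sketch (the value 30 is illustrative), PCTFREE can be raised on an existing table; note that this only influences blocks used by future inserts, and already-migrated rows stay where they are until the table is rebuilt, for example with the ALTER TABLE ... MOVE shown below:

alter table row_mig_chain_demo pctfree 30;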

The ALTER TABLE … MOVE statement enables you to relocate data of a nonpartitioned table or of a partition of a partitioned table into a new segment, and optionally into a different tablespace for which you have quota. This statement also lets you modify any of the storage attributes of the table or partition, including those which cannot be modified using ALTER TABLE. You can also use the ALTER TABLE … MOVE statement with the COMPRESS keyword to store the new segment using table compression.

  1. ALTER TABLE MOVE

    First count the number of rows per block before the ALTER TABLE MOVE:

    select dbms_rowid.rowid_block_number(rowid) "Block-Nr", count(*) "Rows"
    from row_mig_chain_demo
    group by dbms_rowid.rowid_block_number(rowid) order by 1;

    Block-Nr        Rows
    ---------- ----------
    2066          3

    Now, de-chain the table. The ALTER TABLE MOVE rebuilds the row_mig_chain_demo table in a new segment, specifying new storage parameters:

    ALTER TABLE row_mig_chain_demo MOVE
    PCTFREE 20
    PCTUSED 40
    STORAGE (INITIAL 20K
    NEXT 40K
    MINEXTENTS 2
    MAXEXTENTS 20
    PCTINCREASE 0);

    Table altered.

    Again count the number of Rows per Block after the ALTER TABLE MOVE

    select dbms_rowid.rowid_block_number(rowid) "Block-Nr", count(*) "Rows"
    from row_mig_chain_demo
    group by dbms_rowid.rowid_block_number(rowid) order by 1;

    Block-Nr        Rows
    ---------- ----------
    2322          1
    2324          1
    2325          1

  2. Rebuild the Indexes for the Table

    Moving a table changes the rowids of the rows in the table. This causes indexes on the table to be marked UNUSABLE, and DML accessing the table using these indexes will receive an ORA-01502 error. The indexes on the table must be dropped or rebuilt. Likewise, any statistics for the table become invalid, and new statistics should be collected after moving the table.

    analyze table row_mig_chain_demo compute statistics;

    ERROR at line 1:
    ORA-01502: index 'SCOTT.SYS_C003228' or partition of such index is in unusable state

    This is the primary key of the table, which must be rebuilt.

    alter index SYS_C003228 rebuild;

    Index altered.

    analyze table row_mig_chain_demo compute statistics;

    Table analyzed.

    select chain_cnt,
    round(chain_cnt/num_rows*100,2) pct_chained,
    avg_row_len, pct_free , pct_used
    from user_tables
    where table_name = 'ROW_MIG_CHAIN_DEMO';

    CHAIN_CNT PCT_CHAINED AVG_ROW_LEN   PCT_FREE   PCT_USED
    ---------- ----------- ----------- ---------- ----------
             1       33.33        3687         20         40

    Only one truly chained row remains after the rebuild. If the table includes LOB column(s), the MOVE statement can also be used to move the LOB data and LOB index segments that the user explicitly specifies; if not specified, the default is to not move the LOB data and LOB index segments.
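
A hedged sketch of that LOB syntax (the table, column and tablespace names here are purely illustrative, since our demo table has no LOB column):

ALTER TABLE my_lob_table MOVE
TABLESPACE users
LOB (doc_body) STORE AS (TABLESPACE lob_ts);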

Detect all Tables with Chained and Migrated Rows

Using the CHAINED_ROWS table, you can find the tables that contain chained or migrated rows.

  1. Create the CHAINED_ROWS table

    cd $ORACLE_HOME/rdbms/admin
    sqlplus scott/tiger
    @utlchain.sql

  2. Analyse all or only your tables

    select 'analyze table '||table_name||' list chained rows into CHAINED_ROWS;'
    from user_tables
    /

    analyze table ROW_MIG_CHAIN_DEMO list chained rows into CHAINED_ROWS;
    analyze table DEPT list chained rows into CHAINED_ROWS;
    analyze table EMP list chained rows into CHAINED_ROWS;
    analyze table BONUS list chained rows into CHAINED_ROWS;
    analyze table SALGRADE list chained rows into CHAINED_ROWS;
    analyze table DUMMY list chained rows into CHAINED_ROWS;

    Table analyzed.

  3. Show the RowIDs for all chained rows

    This allows you to quickly see how much of a problem chaining is in each table. If chaining is prevalent in a table, that table should be rebuilt with a higher value for PCTFREE.

    select owner_name,
    table_name,
    count(head_rowid) row_count
    from chained_rows
    group by owner_name,table_name
    /

    OWNER_NAME                     TABLE_NAME                      ROW_COUNT
    ------------------------------ ------------------------------ ----------
    SCOTT                          ROW_MIG_CHAIN_DEMO                      1
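
Once the offending rows are identified, a classic repair for migrated rows, sketched here on the assumption that the table can be quiesced while you work, is to copy them out, delete the originals, and re-insert them so they land in blocks with enough free space (truly chained rows that simply exceed the block size will not be helped by this):

-- copy the listed rows aside (sketch)
create table chained_rows_tmp as
select * from row_mig_chain_demo
where rowid in (select head_rowid from chained_rows
                where table_name = 'ROW_MIG_CHAIN_DEMO');

-- remove the migrated originals and re-insert the saved copies
delete from row_mig_chain_demo
where rowid in (select head_rowid from chained_rows
                where table_name = 'ROW_MIG_CHAIN_DEMO');

insert into row_mig_chain_demo select * from chained_rows_tmp;
commit;
drop table chained_rows_tmp;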

Conclusion

Migrated rows hurt OLTP systems that use indexed reads to fetch single rows: in the worst case, every such read pays an extra I/O, which is really bad. Truly chained rows affect both index reads and full table scans.

  • Row migration is typically caused by UPDATE operations.
  • Row chaining is typically caused by INSERT operations.
  • SQL statements that create or query chained/migrated rows suffer degraded performance because of the extra I/O work.
  • To diagnose chained/migrated rows, use the ANALYZE command and query the V$SYSSTAT view.
  • To remove chained/migrated rows, rebuild the table with a higher PCTFREE using ALTER TABLE ... MOVE.


Multiple Database Block Sizes and the Buffer Cache

In Oracle 10g we can have multiple block sizes in the same database. When a tablespace is created, we can assign a block size for the objects that will be created in that tablespace.

The DB_BLOCK_SIZE parameter in your initialization parameter file determines the standard block size of the database and frequently is the only block size for the entire database.

The DB_CACHE_SIZE parameter in your initialization parameter file specifies the size (in bytes) of the cache of the standard block sized buffers. Notice that you don’t set the number of database buffers; rather, you specify the size of the buffer cache itself in the DB_CACHE_SIZE parameter.

You can have up to five different database block sizes in your database. That is, you can create your tablespaces with any one of the five allowable database block sizes.

But before you use a non-standard block size, you have to define a cache for that block size. There is a parameter called DB_nK_CACHE_SIZE for setting the cache size for each non-standard block size.

The new init.ora parameters that allow you to use non-default block sizes are:

DB_2K_CACHE_SIZE
DB_4K_CACHE_SIZE
DB_8K_CACHE_SIZE
DB_16K_CACHE_SIZE
DB_32K_CACHE_SIZE

Buffer caches can also be classified by the algorithm used to retain contents in the cache. There are basically 3 types in this category.

1) DB_KEEP_CACHE_SIZE
2) DB_RECYCLE_CACHE_SIZE
3) DB_CACHE_SIZE

DB_KEEP_CACHE_SIZE sizes the keep pool, where objects remain cached once they are loaded. The objects that qualify for this pool are those that are accessed very frequently and have to be retained in memory, for example frequently used small lookup tables. Note that from Oracle9i onwards this pool is allocated independently of the default cache defined by the DB_CACHE_SIZE parameter (see the table below). For any database, DB_CACHE_SIZE must be set.

DB_RECYCLE_CACHE_SIZE sizes the recycle pool, for objects you do not want to retain in the cache: you want them aged out of the cache as soon as they have been used. You have to be careful while using this, since you may incur a performance hit if you assign a frequently used object to this pool.

DB_CACHE_SIZE is the size of the default cache.

It is important to note that the init.ora parameters and functionality regarding the keep and recycle buffer pools changed between Oracle8i and Oracle9i. Those changes are summarized in the table below:

Version of Oracle    Init.ora parameters                Functionality
-------------------  ---------------------------------  ------------------------
Oracle8i             BUFFER_POOL_KEEP = <buffers>       Subsets of the data
                     BUFFER_POOL_RECYCLE = <buffers>    buffer cache

Oracle9i and 10g     DB_KEEP_CACHE_SIZE = <size>        Independent of the data
                     DB_RECYCLE_CACHE_SIZE = <size>     buffer cache

To specify the use of the keep, recycle or default buffer pools, you can use the storage clause of the alter table statement:

alter table <table_name> storage (buffer_pool keep);
alter table <table_name> storage (buffer_pool recycle);
alter table <table_name> storage (buffer_pool default);
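
To confirm which pool a table ended up in, you can check the BUFFER_POOL column of USER_TABLES (a quick sketch; the table name TEST is illustrative):

select table_name, buffer_pool
from user_tables
where table_name = 'TEST';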

Note: The keep and recycle buffer pools are only available for the standard
block size. Non-standard block-size caches have a single default pool.

So, back to non-standard cache sizes. Let's say the default block size is 8K and you want to use one more block size for your future tablespaces. In that case you have to assign a buffer cache for that block size in memory. Remember, when you create a non-standard block size, the memory (cache size) for it is allocated in addition to the existing caches, so RAM consumption goes up; it is not taken from the existing db_cache_size.

Example

SQL> show parameters db_cache_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_cache_size                        big integer 200M

Now we have 200M set for DB_CACHE_SIZE, while db_keep_cache_size and db_recycle_cache_size are not set.

SQL> show parameter db_keep_cache_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_keep_cache_size                   big integer 0

SQL> show parameter db_recycle_cache_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recycle_cache_size                big integer 0

Also we can see the size of buffer cache using show sga

SQL> show sga

Total System Global Area 1073741824 bytes
Fixed Size                  1984184 bytes
Variable Size             750786888 bytes
Database Buffers 209715200 bytes
Redo Buffers                6397952 bytes
Let's now try to create a tablespace with a 4K block size.

SQL> create tablespace test_tbs4k datafile '/dy/oracle/product/db10g/dbf/test_tbs4k_1.dbf' size 100M blocksize 4K;
create tablespace test_tbs4k datafile '/dy/oracle/product/db10g/dbf/test_tbs4k_1.dbf' size 100M blocksize 4K
*
ERROR at line 1:
ORA-29339: tablespace block size 4096 does not match configured block sizes

Oracle does not transparently map one block size onto another: it cannot load the contents of a datafile created with a 4K block size into buffers sized for 8K blocks.

So we first need to create a buffer cache with a 4K block size; only after that can we create a tablespace with a 4K block size.

We can see that the parameter db_4k_cache_size is not set:

SQL> show parameters db_4k_cache_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_4k_cache_size                     big integer 0

SQL> alter system set db_4k_cache_size = 100M;

System altered.

SQL> show parameter db_4k_cache_size;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_4k_cache_size                     big integer 100M
SQL> show parameter db_cache_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_cache_size                        big integer 200M
SQL> show sga

Total System Global Area 1073741824 bytes
Fixed Size                  1984184 bytes
Variable Size             750786888 bytes
Database Buffers 314572800 bytes
Redo Buffers                6397952 bytes
SQL>

The above stats clearly show that db_cache_size has not been reduced, but the database buffer size has increased to 300M (314572800 bytes: the 200M default cache plus the new 100M 4K cache). Now we can create a tablespace with a 4K block size.

SQL> create tablespace test_tbs4k datafile '/dy/oracle/product/db10g/dbf/test_tbs4k_1.dbf' size 100M blocksize 4K;

Tablespace created.

SQL>

You can get information about your database buffer caches from the view v$buffer_pool:

SQL> select name, block_size, current_size from v$buffer_pool;

NAME                 BLOCK_SIZE CURRENT_SIZE
-------------------- ---------- ------------
DEFAULT                    8192          200
DEFAULT                    4096          100

You can get more stats and information on your buffer pools using the view v$buffer_pool_statistics.


Restoring the statistics – Oracle Database 10g

Here is a small article on how to restore the statistics on a table. Sometimes gathering stats on a table flips the plan of a query accessing that table. That can greatly improve performance for some queries, but there are situations where query performance degrades instead.

Certain tables in every database are "hot" tables, and a DBA should not play around with the statistics of those tables, as the impact can be quite large.

If stats have been gathered and you need to restore the previous stats to bring everything back to normal, here is what one should do.

Using RESTORE_TABLE_STATS

Step 1) Check the history of stats gathered on the table using DBA_TAB_STATS_HISTORY

SQL> select TABLE_NAME, STATS_UPDATE_TIME from dba_tab_stats_history where table_name = 'PICKED_SHIPMENT_BATCHES';
TABLE_NAME                     STATS_UPDATE_TIME
------------------------------ ----------------------------------------
PICKED_SHIPMENT_BATCHES        30-OCT-10 07.50.59.539450 PM +02:00
PICKED_SHIPMENT_BATCHES        31-OCT-10 12.02.58.979300 AM +02:00
PICKED_SHIPMENT_BATCHES        31-OCT-10 12.17.19.315201 AM +02:00

STATS_UPDATE_TIME tells us when the stats were last updated.

Step 2) Check when the table was last analyzed

DCFRA1>select last_analyzed from dba_tables where table_name = 'PICKED_SHIPMENT_BATCHES';
LAST_ANALYZED
----------------
2010-10-31:20:12


Step 3) Use DBMS_STATS.RESTORE_TABLE_STATS procedure to restore the stats on the table

SQL> execute dbms_stats.restore_table_stats(<OWNER>,<TABLE_NAME>,<TIMESTAMP WITH TIMEZONE>);

The timestamp can be any value from the STATS_UPDATE_TIME column of DBA_TAB_STATS_HISTORY; the stats are restored as they were at that point in time.

After the restore, check the LAST_ANALYZED date in DBA_TABLES and you should see the old date.
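
For instance, here is a sketch using one of the timestamps from the history above (the owner WMS is illustrative; the first two arguments are strings, so quote them):

SQL> exec dbms_stats.restore_table_stats('WMS','PICKED_SHIPMENT_BATCHES', to_timestamp_tz('30-OCT-10 07.50.59.539450 PM +02:00','DD-MON-RR HH.MI.SS.FF PM TZH:TZM'));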
Following is a short FAQ on statistics.

Where does Oracle store the statistics?

The DBA_OPTSTAT_OPERATIONS view contains a log of the statistics operations performed at the schema and database level.

SQL> select OPERATION, START_TIME, END_TIME from dba_optstat_operations;
OPERATION                      START_TIME                               END_TIME
------------------------------ ---------------------------------------- ----------------------------------------
gather_database_stats(auto)    30-OCT-10 06.00.04.042555 AM +02:00      31-OCT-10 11.00.02.114381 PM +01:00


How does Oracle maintain the Statistics History?

We can check the oldest point in time to which statistics can be restored using the GET_STATS_HISTORY_AVAILABILITY function.

SQL> select DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY  from dual;
GET_STATS_HISTORY_AVAILABILITY
---------------------------------------------------------------------------
30-SEP-10 11.22.25.817428000 PM +01:00

By default, Oracle retains the stats history for 31 days (about 1 month).

DCFRA1>select DBMS_STATS.GET_STATS_HISTORY_RETENTION from dual;
GET_STATS_HISTORY_RETENTION
---------------------------
                        31

You can change the retention using the ALTER_STATS_HISTORY_RETENTION procedure.
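
For example, to keep 60 days of history (a one-line sketch):

SQL> exec dbms_stats.alter_stats_history_retention(60);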

Using EXPORT/IMPORT


Using export/import of statistics is a 6-step process.

Step 1) Create stats table in the database

EXEC DBMS_STATS.create_stat_table('SYSTEM','STATS_TABLE');

Step 2) Export table/schema statistics

EXEC DBMS_STATS.export_schema_stats('DEMO_SCHEMA','STATS_TABLE',NULL,'SYSTEM');

Step 3) Export the table STATS_TABLE using exp utility.
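
A sketch of the exp call (the dump file name is illustrative):

exp system/<password> tables=STATS_TABLE file=stats_table.dmp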

This completes the export part. You can later import the stats using the following steps.

Step 4) Import table STATS_TABLE using imp utility
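
Again as a sketch, mirroring the exp call above:

imp system/<password> tables=STATS_TABLE file=stats_table.dmp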

Step 5) Import table/schema statistics

EXEC DBMS_STATS.import_schema_stats('DEMO_SCHEMA','STATS_TABLE',NULL,'SYSTEM');

Step 6) Drop the stats table

EXEC DBMS_STATS.drop_stat_table('SYSTEM','STATS_TABLE');


Hope this helps !!

Unlocking the locked table

Sometimes we get an error while running a DDL statement on a table, something like the one below.

SQL> drop table aa;
drop table aa
*
ERROR at line 1:
ORA-14452: attempt to create, alter or drop an index on temporary table already
in use

This happens because some other session is using the table or holding a lock on it (ORA-14452 specifically indicates a global temporary table that is in use).

Following is a simple procedure to kill the session holding the lock on the table and then drop the table. Note that this should be done only if you are sure the table is no longer required and that all sessions holding locks on it can safely be killed.

1. Get the object ID of the table to be dropped

SQL> select object_id from dba_objects where object_name = 'AA';
 OBJECT_ID
----------
   3735492

2. Get the session ID which is holding a lock on this object, from the v$locked_object view

SQL> select OBJECT_ID, SESSION_ID, ORACLE_USERNAME, PROCESS from v$locked_object where OBJECT_ID = 3735492;

 OBJECT_ID SESSION_ID ORACLE_USERNAME                PROCESS
---------- ---------- ------------------------------ ------------------------
   3735492       1124 MSC                            4092@AKPRADH-LAP

3. Get the serial# of the SID using v$session

SQL> select sid, serial# from v$session where sid = 1124;

       SID    SERIAL#
---------- ----------
      1124      51189

4. Kill the session by connecting as sysdba, then try dropping the table

SQL> alter system kill session '1124, 51189';

System altered.

SQL> drop table aa;

Table dropped.

Once the locks are removed, you should be able to drop the table.
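
For convenience, steps 1 to 3 can be combined into a single query (a sketch joining v$locked_object, v$session and dba_objects):

select s.sid, s.serial#, lo.oracle_username
from v$locked_object lo, v$session s, dba_objects o
where lo.session_id = s.sid
and lo.object_id = o.object_id
and o.object_name = 'AA';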
Hope this helps !!

Identifying Locks on Objects using V$LOCKS

Ever wondered what to do when a user comes to you saying that he/she cannot run a DML statement, one that should normally complete within a second?
We usually suspect 2 reasons:
1) The database is terribly slow (unlikely for a simple update)
2) Someone is holding an exclusive lock on the object the user is trying to update (quite possible).

There can be many more reasons, which further investigation may reveal. In this post we will see how to investigate reason 2: whether some other user is holding a lock on the object this user is trying to modify.

Let's take a simple scenario.

session 1:

SQL> create table test (col1 number, col2 varchar2(5));

Table created.

SQL> insert into test values (1,'a');

1 row created.

SQL> insert into test values (2,'b');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from test for update;

      COL1 COL2
---------- -----
         1 a
         2 b

Session 2:

SQL> update test set col2='a' where col1 = 1;

The update in session 2 will hang !!!


SQL> select * from v$lock;

ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
00000003BFD5D868 00000003BFD5D888         54 CF          0          0          2          0     669613          0
00000003BFD5D900 00000003BFD5D920         54 XR          4          0          1          0     669625          0
00000003BFD5DA30 00000003BFD5DA50         54 RS         25          1          2          0     669613          0
00000003BFD5DAC8 00000003BFD5DAE8         35 TX     196652        882          0          6        344          0
00000003BFD5DB60 00000003BFD5DB80         53 TS          3          1          3          0     669599          0
00000003BFD5DBF8 00000003BFD5DC18         55 RT          1          0          6          0     669613          0
00000003BFD5DDC0 00000003BFD5DDE0         57 MR          1          0          4          0     669601          0
00000003BFD5DE58 00000003BFD5DE78         57 MR          2          0          4          0     669601          0
00000003BFD5DEF0 00000003BFD5DF10         57 MR          3          0          4          0     669601          0
00000003BFD5DF88 00000003BFD5DFA8         57 MR          4          0          4          0     669601          0
00000003BFD5E020 00000003BFD5E040         57 PW          1          0          3          0     669599          0

ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
00000003BFD5E3C8 00000003BFD5E3E8         57 MR         81          0          4          0     669593          0
00000003BE50B9B8 00000003BE50B9E0         49 TM      21837          0          3          0        374          0
00000003BE50BAB8 00000003BE50BAE0         35 TM      21837          0          3          0        344          0
00000003BDC81138 00000003BDC812C0         49 TX     196652        882          6          0        374          1

15 rows selected.

If a session is blocking another session, you will see BLOCK = 1 for that session. So in our case SID=49 is blocking some other session. We can also find the session that got blocked by SID=49.
Look at the two columns ID1 and ID2: the values of ID1 and ID2 for SID=49 match those of some other SID in v$lock. If you look carefully, in our case they match SID=35, so SID 35 is the session blocked by SID=49. A blocked session cannot get its request processed, which is why its REQUEST column holds a value > 0.
From the above analysis we can say that SID 49 is blocking SID 35.

We can directly write a query which will give the required output.

SQL> select a.SID "Blocking Session", b.SID "Blocked Session"
2  from v$lock a, v$lock b
3  where a.SID != b.SID
4  and a.ID1 = b.ID1
5  and a.ID2 = b.ID2
6  and b.request > 0
7  and a.block = 1;

Blocking Session Blocked Session
---------------- ---------------
              49              35
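
On Oracle 10g you can cross-check this directly from the BLOCKING_SESSION column of v$session (a quick sketch):

SQL> select sid, blocking_session from v$session where blocking_session is not null;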

Let's understand the rest of the columns in the v$lock view.

ID1 and ID2 -> These represent the rollback segment and transaction table entries for that transaction. When session 1 acquired the lock, it was assigned a rollback segment and transaction table entry. When another session requested the same row, Oracle tried to generate a consistent read (CR) image of it by applying the rollback segment, but because of the exclusive lock the change itself could not proceed: unless the first session releases the lock, the second session cannot continue. The second session therefore shows the same ID1 and ID2 values as session 1, since it is waiting on exactly that transaction's rollback segment and transaction table entry.

TYPE -> This column gives the type of lock obtained on the data. For more information on lock types, check the definition of the v$lock view in the Oracle reference documentation. For a session to change data in a table, it has to acquire a TX lock; this is the transaction enqueue lock.

LMODE -> This is the mode of the lock. It takes values from 0 to 6, with 6 the most restrictive and 0 the least restrictive. When a session obtains a lock in mode 6, it holds an exclusive lock and no other session is allowed to update the data. For SID=49 we can see that the lock mode is exclusive (6). For more information on lock modes, check the v$lock definition in the Oracle reference documentation.

REQUEST -> This column represents the lock mode requested by a blocked (waiting) session. The value is non-zero only while the session is waiting. For example, in our case SID=35 is blocked, so its LMODE column shows 0 while its REQUEST column shows 6: SID 35 has requested lock mode 6, which has not yet been granted to it.

Sometimes when a session requests an exclusive lock (mode 6), it may not get it immediately because no Interested Transaction List (ITL) slot is available in the block whose data the user wants to change. In that case the session waits in mode 4 (share mode) for some time, and as soon as an ITL slot becomes available it obtains the lock in mode 6.

Also, in the rows above you can see that for SIDs 49 and 35 there are 2 more rows whose TYPE column shows 'TM'. These are DML-level enqueues, acquired here in lock mode 3 (row exclusive). This lock prevents any DDL activity on the table.
We can find the object name from the ID1 column value of these TM rows: 21837 in our case.

SQL> select object_name from dba_objects where object_id=21837;

OBJECT_NAME
————–
TEST

We can even get the exact row being waited on by the blocked transaction, using v$session.

SQL> select row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#
2  from v$session where sid=35;

ROW_WAIT_OBJ# ROW_WAIT_FILE# ROW_WAIT_BLOCK# ROW_WAIT_ROW#
------------- -------------- --------------- -------------
        21837              1           45082             0

The above 4 values are the components of a ROWID, and we can construct the ROWID from these components using the DBMS_ROWID package.

SQL> select dbms_rowid.rowid_create (1, ROW_WAIT_OBJ#, ROW_WAIT_FILE#, ROW_WAIT_BLOCK#, ROW_WAIT_ROW#)
2  from v$session where sid=35;

DBMS_ROWID.ROWID_C
------------------
AAAFVNAABAAALAaAAA

Now we can check whether this is the row the blocked session was trying to update:

SQL> select * from test where rowid = 'AAAFVNAABAAALAaAAA';

      COL1 COL2
---------- -----
         1 a

This is indeed the row the blocked session was trying to update.

Hope this helps !!

Understanding Oracle Trace Utility – Oracle Database 10g

Overview:

Sometimes a situation demands more information about an event. Take, for example, an error occurring in our database because of some application. In that case it is the developer's job to drill down into the application and find the exact queries that are failing. I have seen many situations where we get ORA-00600 or ORA-07445 errors. To debug such an issue, one needs to know the exact failing query, so that the issue can be reproduced from the command prompt; and once it can be reproduced from the command prompt, detailed information about the running query can be obtained by setting tracing levels.

This post is all about setting tracing at the session level (either your own session or some other session) and getting detailed information about a query. I will show you the different tracing events that exist and the various levels of tracing.

I will discuss 2 types of tracing here

  1. SQL tracing
  2. Event based tracing

SQL Tracing:

Session Level:

One of the simplest methods to trace SQL is the "alter session set SQL_TRACE=true" command; Oracle will then generate a trace file for the SQL statements you run in that session.

System Level:

You can also set SQL trace at the system level using the "alter system set SQL_TRACE=true" command. But be careful with this: it generates huge trace files, and especially when the number of users is large it will degrade database performance.

The prerequisites for enabling SQL trace, at either session or system level, are that the TIMED_STATISTICS parameter is set to TRUE and STATISTICS_LEVEL is set to at least TYPICAL. STATISTICS_LEVEL can be set to ALL as well, which gives the highest level of tracing information, but you have to be careful with it.

Important *** If possible, do not set the statistics level to ALL at the system level, only at the session level; otherwise database performance will suffer, since the database has to produce a very detailed level of tracing. Likewise, do not set SQL trace at the system level, only at the session level: setting SQL trace at the system level will generate huge trace files and database performance will degrade.

For other sessions:

You can also set SQL_TRACE for some other session as well using DBMS_SYSTEM package.

The procedure DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION enables you to do this.

login as sysdba and execute

exec DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(<sid>,<serial#>,true) - to enable tracing

exec DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(<sid>,<serial#>,false) - to disable tracing
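
For example, to trace another user's session (a sketch; the username and the sid/serial# values are hypothetical):

SQL> select sid, serial# from v$session where username = 'SCOTT';
SQL> exec dbms_system.set_sql_trace_in_session(1124, 51189, true);
SQL> exec dbms_system.set_sql_trace_in_session(1124, 51189, false);

The second call, with false, switches tracing off again once the session has run its workload.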

Event Based Tracing

Session Level:

Event-level tracing is a way to produce detailed information about some kind of event. Now, what exactly do we mean by an event?

“An event is simply a flag to the Oracle kernel to tell it to emit some trace messages or to add some additional processing or to activate some new functionality.”

Some events are used by support analysts and developers to force certain conditions to occur for testing purposes.

For example, event 10046 enables SQL statement timing. Setting this event tells the Oracle kernel to write timing information about SQL statements into its trace files. Likewise, there are several other events.

The details of all the events and actions are provided in $ORACLE_HOME/rdbms/mesg/oraus.msg file.

For each of the events mentioned in the oraus.msg file, we can set a level: this is the amount of information the kernel should put into the trace files. Below are the various trace levels in the Oracle tracing utility.

  • Level 0 = No statistics generated
  • Level 1 = Standard trace output including parsing, executes and fetches, plus more
  • Level 2 = Same as level 1
  • Level 4 = Same as level 1 but includes bind information
  • Level 8 = Same as level 1 but includes wait information
  • Level 12 = Same as level 1 but includes both binds and waits

Also, setting any event other than the SQL trace event (10046) should not be done without the guidance of Oracle Support.

There are many ways in which we can enable event level tracing. Below are some of the methods to do that.

Method 1: Setting the trace for event 10046 level 8

It is better to make a new connection each time before you start a trace; otherwise, if a trace has already been started, the new information will be appended to the existing trace file.

alter session set max_dump_file_size=unlimited;
alter session SET timed_statistics = true;
alter session set STATISTICS_LEVEL = ALL ;

We should be able to distinguish our trace file from other trace files. For that we can set the tracefile_identifier parameter.

Example:

alter session set tracefile_identifier = index_trace;

Setting this makes our trace file names end in index_trace.trc (they will match %index_trace.trc).
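
The trace files themselves are written to the directory pointed to by the user_dump_dest parameter, which you can check with:

SQL> show parameter user_dump_dest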

A simple example of analyzing an index with trace event 10236 set is shown below. Event 10236 is 'dump redo on object no. or block type mismatch errors 1410/8103'.

SQL> alter session set max_dump_file_size=unlimited;

Session altered.

SQL> alter session SET timed_statistics = true;

Session altered.

SQL> alter session set STATISTICS_LEVEL = ALL ;

Session altered.

SQL> alter session set tracefile_identifier = index_trace;

Session altered.

SQL> alter session set events '10236 trace name context forever, level 8';

Session altered.

SQL> analyze index msc.test_idx validate structure online;
analyze index msc.test_idx validate structure online
*
ERROR at line 1:
ORA-01410: invalid ROWID

This is used to investigate this kind of corruption.

SQL> ALTER SESSION SET EVENTS '10236 trace name context off';

Session altered.

Method 2: Using DBMS_SYSTEM package.

Another way to set an event trace is to use the DBMS_SYSTEM package, as given below.

exec DBMS_SYSTEM.SET_EV(<sid>, <serial#>, <event#>, <level>, '');

Example: exec DBMS_SYSTEM.SET_EV(10, 20, 10046, 8, '');

This lets you set event tracing for any session.

Method 3: Using DBMS_SUPPORT package.

Another method to set tracing is to use the DBMS_SUPPORT package. This package is not installed by default and is in fact undocumented; indeed, on some platforms and versions it is not even shipped, and you will need to talk to Oracle Support and get it from Metalink.

Installing the package first:

SQL> @?/rdbms/admin/dbmssupp.sql

Package created.

Package body created.
SQL>

Using DBMS_SUPPORT package for tracing.

exec DBMS_SUPPORT.START_TRACE_IN_SESSION(SID,SERIAL#,WAITS,BINDS);

Example: exec dbms_support.start_trace_in_session(10,20,waits=>true,binds=>false);

To stop tracing: exec dbms_support.stop_trace_in_session(10,20);

Start tracing in your own session:

exec dbms_support.start_trace(waits=>true,binds=>false);

Stop tracing in your session: exec dbms_support.stop_trace;

System Level:

If you want to set event-level tracing at the system level, you can either use alter system instead of alter session, or set the events via an init.ora parameter.

Method 1: Using Alter System

SQL> alter system set events '10046 trace name context forever, level 1';

System altered.

To turn off the tracing at system level again we can use the below command.

SQL> alter system set events '10046 trace name context off';

System altered.

Method 2: Using init.ora parameter

The init.ora parameter name is "event" and it can be set as shown below. But be careful while setting this up, as it will generate huge trace files.

event = "10046 trace name context forever, level 12"

References:

http://www.petefinnigan.com/ramblings/how_to_set_trace.htm
http://tonguc.wordpress.com/