Materialized Views Concepts – Discussion Series 2

We have seen Discussion Series 1 of materialized view concepts, so we know how to create a materialized view and what each clause of the MView creation statement means.

In this article we will look at the backend dictionary views that can be queried to check the details of a materialized view.

We will begin by identifying the materialized view.

DBA_SNAPSHOTS (On MView site)

The most important view for checking MView information is DBA_SNAPSHOTS.

We need to query this view on the snapshot site (where the MView is created).

 

SQL>select name, table_name, MASTER, MASTER_LINK, REFRESH_METHOD, UPDATABLE , LAST_REFRESH, STATUS, PREBUILT, REFRESH_MODE from dba_snapshots where name = 'T_REP';
NAME TABLE_NAME MASTER MASTER_LINK REFRESH_MET UPD LAST_REFRESH STATUS PRE REFRESH_
---------- ---------- -------------------- ------------------------------ ----------- --- ---------------- ------- --- --------
T_REP T_REP T @"DB1.AMAZON" PRIMARY KEY NO 2012-10-13:22:26 UNKNOWN YES DEMAND

 

DBA_REFRESH_CHILDREN (On Mview Site)

We can group multiple materialized views into a group and refresh them all at once.

This can be done by creating a refresh group.

You can see all steps to create refresh group at http://docs.oracle.com/cd/B28359_01/server.111/b28327/rarmviewgroup.htm

Once you create a refresh group and add MViews to it, you can see the information in the DBA_REFRESH_CHILDREN view.

BEGIN
   DBMS_REFRESH.MAKE (
      name => 'TEST_REF_GROP',
      list => '', 
      next_date => SYSDATE, 
      interval => 'SYSDATE + 1/24');
END;
/

BEGIN
   DBMS_REFRESH.ADD (
      name => 'TEST_REF_GROP',
      list => 'T_REP',
      lax  => FALSE);
END;
/
SQL>select name, RNAME, REFGROUP, INTERVAL, TYPE from dba_refresh_children where name = 'T_REP';

NAME       RNAME REFGROUP INTERVAL TYPE
------------------------------ ------------------------------ ---------- ------------------------------ ------------------------------
T_REP       TEST_REF_GROP     3693 SYSDATE + 1/24 SNAPSHOT

SQL>
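Once the group exists, it can also be refreshed manually at any time with DBMS_REFRESH.REFRESH. A minimal sketch, using the group created above:

```sql
-- Refresh every MView in the group in a single transaction.
BEGIN
   DBMS_REFRESH.REFRESH(name => 'TEST_REF_GROP');
END;
/
```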

 

DBA_SNAPSHOT_LOGS (On Master site):

If we want to check information about the MLOG table, we can query this view.

 

SQL>select master, LOG_TABLE, ROWIDS, PRIMARY_KEY, SNAPSHOT_ID, CURRENT_SNAPSHOTS from dba_snapshot_logs where master = 'T';
MASTER       LOG_TABLE       ROW PRI SNAPSHOT_ID CURRENT_SNAPSHOT
------------------------------ ------------------------------ --- --- ----------- ----------------
T       MLOG$_T       NO  YES     15119 2012-10-13:22:26

 

Since this is a primary key based MLOG, the PRIMARY_KEY column shows YES. CURRENT_SNAPSHOTS shows when the MView was last refreshed; this is the same as the LAST_REFRESH column in DBA_SNAPSHOTS.

DBA_REGISTERED_SNAPSHOTS (On Master site):

To check how many sites are registered for one master table, we can query DBA_REGISTERED_SNAPSHOTS.

This view has a column called NAME, which is the name of the MView on the MView site. Since each MView site can use a different name, we cannot compare this column to get the list of sites registered for one master table.

This view also has no MASTER column, so we join it with DBA_SNAPSHOT_LOGS to get the list of sites registered for a master table.

 

SQL>select a.master, b.name, b.snapshot_site from dba_snapshot_logs a, dba_registered_snapshots b
  2  where a.snapshot_id = b.snapshot_id
  3  and a.master = 'T';

MASTER       NAME       SNAPSHOT_SITE
------------------------------ ------------------------------ ------------------------------
T       T_REP       DB1.AMAZON

 

To check which snapshots have a delay

We can use the following query to check which snapshots have a refresh delay, in minutes.

 

select a.master, b.name, b.snapshot_site, (sysdate - a.CURRENT_SNAPSHOTS)*24*60 "delay Mins"
from dba_snapshot_logs a, dba_registered_snapshots b
where a.snapshot_id = b.snapshot_id
and (sysdate - a.CURRENT_SNAPSHOTS)*24*60 > &delay;

 

This will prompt for a delay value, which you enter in minutes.

It will then list all snapshots whose delay is more than the value you entered.

 

For example, if you want to list all snapshots with a delay of more than 10 minutes, enter 10.

x$knstmvr

This is another internal table that can be used to check the progress of snapshot refreshes.

The columns in this table are mostly self-explanatory.

 

The following query will give you the details of snapshot refresh progress.

 

column mvowner format a10 
Column mvname format a30 
column refmode format a8 
column refstate format a12 
column inserts format 99999999 
column updates format 999999999 
column deletes format 999999999 
column event format a30 
column spid format a6 
select  currmvowner_knstmvr mvowner, 
currmvname_knstmvr mvname, 
decode( reftype_knstmvr, 0, 'FAST', 1, 'FAST', 2, 'COMPLETE', REFTYPE_KNSTMVR ) refmode, 
decode(groupstate_knstmvr, 1, 'SETUP', 2, 'INSTANTIATE',3, 'WRAPUP', 'UNKNOWN' ) refstate, 
total_inserts_knstmvr inserts, 
total_updates_knstmvr updates, 
total_deletes_knstmvr deletes, 
b.spid,c.event 
from x$knstmvr X, v$session a, v$process b, v$session_wait c 
WHERE type_knst=6 
and a.paddr = b.addr 
and a.sid = c.sid 
and x.SID_KNST = a.sid 
and x.SERIAL_KNST = a.serial#;

 

In the next article in the MView discussion series we will discuss the MLOG table and some internal details of how fast refresh works.

Hope this helps !!

 

Materialized Views Concepts – Discussion Series 1

Materialized view concept: Why do we need a materialized view?

Materialized views are views created on a base table that also store the data extracted from the base table.
How is a materialized view different from a normal view?

Difference # 1:

A normal view does not contain data. It is just a transparent layer on top of the base table.

A materialized view contains data, and additional space is required to create it.
Difference # 2:

To use a normal view, a user needs to reference the view name in the query.

To use a materialized view, the user does not need to reference the materialized view name (although they can, it is not required).

Materialized views are required mainly for two reasons:

1) Advanced replication of data from one location (database) to another location (database)

2) Summarizing the data in the table

When we replicate a table from a remote database to the local database, our queries can access the same data locally, which can improve query performance.
Replication of data is possible using materialized views.

 

You can use materialized views to increase the speed of queries on very large databases. Queries to large databases often involve joins between tables, aggregations such as SUM, or both. These operations are expensive in terms of time and processing power. The type of materialized view you create determines how the materialized view is refreshed and used by query rewrite.

 

Materialized views improve query performance by precalculating expensive join and aggregation operations on the database prior to execution and storing the results in the database. The query optimizer automatically recognizes when an existing materialized view can and should be used to satisfy a request. It then transparently rewrites the request to use the materialized view. Queries go directly to the materialized view and not to the underlying detail tables. In general, rewriting queries to use materialized views rather than detail tables improves response time.

Based on the above definitions we have 3 situations where materialized views can be used:

1) Materialized Views for Data Warehouses

In data warehouses, you can use materialized views to precompute and store aggregated data such as the sum of sales. Materialized views in these environments are often referred to as summaries, because they store summarized data. They can also be used to precompute joins with or without aggregations. A materialized view eliminates the overhead associated with expensive joins and aggregations for a large or important class of queries.

2) Materialized Views for Distributed Computing

In distributed environments, you can use materialized views to replicate data at distributed sites and to synchronize updates done at those sites with conflict resolution methods. These replica materialized views provide local access to data that otherwise would have to be accessed from remote sites. Materialized views are also useful in remote data marts.

3) Materialized Views for Mobile Computing

You can also use materialized views to download a subset of data from central servers to mobile clients, with periodic refreshes and updates between clients and the central servers.

It's difficult to understand everything before we create a materialized view, so let's create an MView and understand each clause that can be specified when creating it.

Creating Materialized View:

Here goes the syntax of creating materialized view

CREATE MATERIALIZED VIEW SCHEMA.NAME 
(COL1, COL2, COL3 … )
ON PREBUILT TABLE / BUILD IMMEDIATE / BUILD DEFERRED
WITH / WITHOUT REDUCED PRECISION
USING INDEX / USING NO INDEX
TABLESPACE <TS NAME>
FOR UPDATE
REFRESH FAST / COMPLETE /  FORCE / ON DEMAND / ON COMMIT / START WITH .. NEXT / 
REFRESH WITH PRIMARY KEY / ROWID
ENABLE / DISABLE QUERY REWRITE
AS <SELECT QUERY ON MASTER TABLE>

This syntax is by no means complete; there are many more clauses related to storage, constraints, physical properties, etc. But this is the basic DDL statement that is used most of the time to create a materialized view.
If you want the complete details of creating an MView with all clauses, check the references section for the documentation link.

Let's discuss each clause and what these values mean to us.

ON PREBUILT TABLE

When you create a materialized view, you can create it on a prebuilt table. For example, say that before creating the MView you were managing data replication using software, keeping a local table updated with the latest data from some other master site. Now, when you implement the MView, it's a good option to use that same table as a prebuilt table and create the MView on top of it. That way you can stop the software replication and start using the MView.

If you don't have a prebuilt table, you can skip that clause along with the "WITH REDUCED PRECISION" clause. If you are not using the "ON PREBUILT TABLE" clause, then you can use the "BUILD IMMEDIATE" or "BUILD DEFERRED" clause.
BUILD IMMEDIATE will immediately fetch the data into the MView. BUILD DEFERRED will defer fetching of data until you do a complete refresh manually.
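As a sketch (the table and DB link names reuse the examples later in this post), a deferred build followed by the first manual complete refresh would look like this:

```sql
-- MView is created empty; data is fetched only at the first complete refresh.
CREATE MATERIALIZED VIEW t_rep
BUILD DEFERRED
REFRESH COMPLETE ON DEMAND
AS SELECT * FROM t@db1;

-- Populate it manually with a complete ('C') refresh.
BEGIN
   DBMS_MVIEW.REFRESH(list => 'T_REP', method => 'C');
END;
/
```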

The following things should be considered in the case of a prebuilt table:

  • The materialized view name must be the same as the prebuilt table name. The master table can have a different name.
  • The prebuilt table can have more or fewer columns than the master table, but the MView definition should include only those columns which are present in both the master table and the prebuilt table, and the corresponding columns must have matching datatypes.
  • If you don't have a prebuilt table, the create materialized view statement will create a table, but it won't be a prebuilt table. Meaning that if you drop the MView, the table will also be dropped (unless you use the "preserve table" clause). But if it's a prebuilt table, the table will not be dropped even if you drop the MView. This way you can retain the data, so it's useful to have the table marked as a prebuilt table.

If you created the materialized view without a prebuilt table, it will have created a table, and there is a way to change that table into a prebuilt table, so that if you drop the materialized view some time in the future, the table will stay. The following update statement will convert the table into a prebuilt table:

update sys.snap$ set flag = 2228321 where vname = '<Mview Name>';
commit;

The above statement converts the table into a prebuilt table. After that, if you run "drop materialized view <MView Name>;" it won't drop the table.

Alternatively, if you don't have a prebuilt table and you want to drop the snapshot while preserving the table, you can use the following statement:

drop materialized view <MView Name> preserve table;

WITH / WITHOUT REDUCED PRECISION

Specify WITH REDUCED PRECISION to authorize the loss of precision that will result if the precision of the columns in the materialized view does not match the precision of the columns returned by the subquery. Remember that the subquery is built on the table at the master site.
So this clause makes sense if you are using a different column precision on the MView site than on the master site.
WITHOUT REDUCED PRECISION will require the precision of the columns in the MView to be exactly the same as that of the columns in the table at the master site (the precision returned by the subquery).

USING INDEX

The USING INDEX clause can be used to specify the storage clause for the index that will be created on the materialized view. You can also specify the tablespace that should be used for all indexes on the materialized view.
If you are creating a ROWID-based materialized view, the "USING INDEX" clause will create a default index named I_SNAP$_<table_name>, which is used in internal MView management.
Using the "USING NO INDEX" clause will prevent creation of this default index; in that case you also cannot specify the storage and tablespace clauses.

TABLESPACE <TS NAME>

This specifies the tablespace in which to create the MView.

ENABLE / DISABLE QUERY REWRITE

This clause specifies whether to enable or disable query rewrite on this MView.
One of the advantages of having a materialized view is that our queries will automatically start using it (just like an index) without any changes to the SQL. This capability is provided by the query rewrite functionality.
The optimizer uses two different methods to recognize when to rewrite a query in terms of a materialized view. The first method is based on matching the SQL text of the query with the SQL text of the materialized view definition. If the first method fails, the optimizer uses the more general method in which it compares joins, selections, data columns, grouping columns, and aggregate functions between the query and materialized views.

You can find more details about the query rewrite feature at http://docs.oracle.com/cd/B19306_01/server.102/b14223/qrbasic.htm
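As a minimal sketch of query rewrite in action (SALES is a hypothetical local table, not one from this post), a summary MView can answer a query that never mentions it:

```sql
-- Rewrite must be allowed in the session (usually the default).
ALTER SESSION SET query_rewrite_enabled = TRUE;

-- Summary MView with query rewrite enabled.
CREATE MATERIALIZED VIEW sales_sum_mv
ENABLE QUERY REWRITE
AS SELECT prod_id, SUM(amount) AS total_amt
   FROM   sales
   GROUP BY prod_id;

-- This query names only the detail table SALES, yet the optimizer
-- can transparently rewrite it to read SALES_SUM_MV instead.
SELECT prod_id, SUM(amount) FROM sales GROUP BY prod_id;
```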

REFRESH FAST / COMPLETE /  FORCE / ON DEMAND / ON COMMIT / START WITH .. NEXT /

Fast refresh can be based on the primary key or on ROWID.
If the fast refresh is based on the primary key, the master table must have a primary key defined, and the MView log will include the primary key. For fast refresh to work, you must have an MLOG created on the master table at the master site, using the following command:

create materialized view log on <Master table name>;

Fast refresh is the most efficient way of maintaining data, as it brings only the changed data from the master site to the local MView. The changes are maintained in the MLOG.
Not all MViews can be fast refreshed. You can check the restrictions on creating a fast refreshable materialized view at http://docs.oracle.com/cd/B28359_01/server.111/b28313/basicmv.htm#i1007028
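A sketch of the two MLOG variants, using the master table T from the examples above (the default is primary key based when the master has a primary key):

```sql
-- Primary key based log (the default when the master table has a PK).
CREATE MATERIALIZED VIEW LOG ON t WITH PRIMARY KEY;

-- ROWID based log, for master tables without a primary key:
-- CREATE MATERIALIZED VIEW LOG ON t WITH ROWID;
```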

Complete refresh does not need any MLOG. When we do a complete refresh, the complete data of the master table is brought into the MView. This method is suitable for small tables.

Refresh Force will first try to refresh the MView using the fast method, and if that doesn't work, it will fall back to a complete refresh.

Refresh On Demand requires the DBA to schedule a job that refreshes the MView periodically. If you don't want to set up a job, you have to decide the schedule and refresh the MView yourself.
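One common way to schedule on-demand refreshes is a DBMS_SCHEDULER job that calls DBMS_MVIEW.REFRESH. A sketch (the job name and the hourly interval are assumptions):

```sql
BEGIN
   DBMS_SCHEDULER.CREATE_JOB(
      job_name        => 'REFRESH_T_REP_JOB',   -- hypothetical job name
      job_type        => 'PLSQL_BLOCK',
      job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''T_REP'', ''F''); END;',
      start_date      => SYSTIMESTAMP,
      repeat_interval => 'FREQ=HOURLY',          -- assumed schedule
      enabled         => TRUE);
END;
/
```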

Refresh On Commit will refresh the MView whenever a transaction commits on the master site. We have to be careful with this option: if the master site is located far away and the commit rate is high, this is not a good choice. Every time a commit happens, it will try to refresh the MView, which takes time, and eventually your transactions on the master site will slow down.

Refresh Start With .. Next is used when we want to define the refresh schedule in the MView definition itself, while creating the MView. This way the MView gets refreshed as per the defined schedule.
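For example (a sketch reusing the T_REP MView from this post; it assumes an MLOG exists on the master for fast refresh), a schedule that refreshes every hour starting now:

```sql
CREATE MATERIALIZED VIEW t_rep
REFRESH FAST
START WITH SYSDATE          -- first refresh runs immediately
NEXT  SYSDATE + 1/24        -- then every hour
AS SELECT * FROM t@db1;
```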

I have tried to give a basic overview of materialized views and how to create them. Below are some examples of creating a materialized view using the clauses above.

 

Examples of Mview Creation

Example 1:

create materialized view T_REP
 ( OWNER,
   SUBOBJECT_NAME ,
   OBJECT_ID,
   DATA_OBJECT_ID ,
   OBJECT_TYPE,
   CREATED,
   LAST_DDL_TIME,
   TIMESTAMP,
   STATUS ,
   TEMPORARY,
   GENERATED,
   SECONDARY,
   NAMESPACE,
   EDITION_NAME)
 BUILD IMMEDIATE
 REFRESH ON DEMAND
 as 
 select OWNER,
        SUBOBJECT_NAME ,
        OBJECT_ID,
        DATA_OBJECT_ID ,
        OBJECT_TYPE,
        CREATED,
        LAST_DDL_TIME,
        TIMESTAMP,
        STATUS ,
        TEMPORARY,
        GENERATED,
        SECONDARY,
        NAMESPACE,
        EDITION_NAME
 from t@DB1;

Example 2:

Assuming we have a primary key on the master table and a materialized view log on the master site:

create materialized view T_REP
 ( OWNER,
   SUBOBJECT_NAME ,
   OBJECT_ID,
   DATA_OBJECT_ID ,
   OBJECT_TYPE,
   CREATED,
   LAST_DDL_TIME,
   TIMESTAMP,
   STATUS ,
   TEMPORARY,
   GENERATED,
   SECONDARY,
   NAMESPACE,
   EDITION_NAME)
 ON PREBUILT TABLE
 WITH REDUCED PRECISION
 USING INDEX TABLESPACE OPS_IDX
 REFRESH FAST ON DEMAND
 ENABLE QUERY REWRITE
 as 
 select OWNER,
        SUBOBJECT_NAME ,
        OBJECT_ID,
        DATA_OBJECT_ID ,
        OBJECT_TYPE,
        CREATED,
        LAST_DDL_TIME,
        TIMESTAMP,
        STATUS ,
        TEMPORARY,
        GENERATED,
        SECONDARY,
        NAMESPACE,
        EDITION_NAME
 from t@DB1;

I hope this clears up a few things. This article is in no way a replacement for the Oracle documentation; I have just tried to provide concise information on creating materialized views.
The references section has links to the Oracle docs, which provide detailed information.

In my next article on materialized views, I am going to cover some backend DBA tables that can be used to view information about materialized views.

Hope this helps !!

References:

Basic MView creation: http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_6002.htm
Basics of query rewrite: http://docs.oracle.com/cd/B19306_01/server.102/b14223/qrbasic.htm
Oracle By Example, creating an MView: http://st-curriculum.oracle.com/obe/db/10g/r2/prod/bidw/mv/mv_otn.htm

ORA-00600: internal error code, arguments: [kkdlGetCkyName1] – On ADG After Renaming Index

I encountered a weird error on my Active Data Guard (ADG) instance after renaming an index on the primary.

Here is a simple test case to reproduce the issue

On Primary:

SQL>create table T as select * from dba_objects;

Table created.

SQL>create index T_IDX on T(data_object_id);

Index created.

SQL>exec DBMS_STATS.GATHER_TABLE_STATS('ORACLE_DBA','T');

PL/SQL procedure successfully completed.

SQL>exec DBMS_STATS.GATHER_INDEX_STATS('ORACLE_DBA','T_IDX');

PL/SQL procedure successfully completed.

SQL>explain plan for
  2  select count(1) from T where data_object_id > 65000;

Explained.

SQL>@display

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 293504097

---------------------------------------------------------------------------
| Id  | Operation      | Name  | Rows  | Bytes | Cost (%CPU)| Time      |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    1 |    5 |    2   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE   |      |    1 |    5 |           |      |
|*  2 |   INDEX RANGE SCAN| T_IDX | 29118 |   142K|    2   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("DATA_OBJECT_ID">65000)

14 rows selected.

SQL>select count(1) from T where data_object_id > 65000;

  COUNT(1)
----------
     28337

On Standby:

 

SQL>select count(1) from T where data_object_id > 65000;

  COUNT(1)
----------
     28337

Now rename the index on primary

On Primary:

SQL>alter index T_IDX rename to T_IDX1;

Index altered.

SQL>

Now run the same query on standby

On Standby:

SQL>select count(1) from T where data_object_id > 65000;
select count(1) from T where data_object_id > 65000
                     *
ERROR at line 1:
ORA-00600: internal error code, arguments: [kkdlGetCkyName1], [1622537], [],
[], [], [], [], [], [], [], [], []

DCYYZ1>

This happens on the following version:

SQL>select * from PRODUCT_COMPONENT_VERSION;

PRODUCT                  VERSION                  STATUS
---------------------------------------- ---------------------------------------- ----------------------------------------
NLSRTL                     11.2.0.2.0                  Production
Oracle Database 11g Enterprise Edition     11.2.0.2.0                  64bit Production
PL/SQL                     11.2.0.2.0                  Production
TNS for Linux:                 11.2.0.2.0                  Production

Solution: Oracle is working on Bug 13035388 (ORA 600 [KKDLGETCKYNAME1] IN ADG) to fix this issue.

Workaround: Rename the index back to its original name, or drop and recreate the index.
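In SQL, the two workarounds are simply (index and table names from the test case above):

```sql
-- Workaround 1: rename the index back to its original name.
ALTER INDEX t_idx1 RENAME TO t_idx;

-- Workaround 2: drop and recreate it.
-- DROP INDEX t_idx1;
-- CREATE INDEX t_idx ON t(data_object_id);
```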

Hope this helps !!

Fixing SQL Plans: The hard way – Part 2

In my previous article (https://avdeo.com/2012/06/20/fixing-sql-plans-the-hard-way-part-1/) I showed you a way to fix a query plan if you have a good plan available for that query in some other, similar prod DB.

But what if you don't have any other prod DB, or the query is so unique that it runs in only one database?

Fixing such SQLs needs a deeper look into the SQL and its plan.

We have various methods to fix such SQLs, like:

– Creating an index

– Adding a hint (maybe at the code level)

– Gathering stats, etc.

Let's say you have all the indexes in place, but the query is not picking the index. There could be many reasons why the index is not picked by the query. Common ones include incorrect or missing statistics for the index, or an optimizer_index_cost_adj value that is too high.

Changing any parameters, gathering stats, or making any other change to the optimizer environment can have an adverse effect on other queries.

At least in prod, it's not advisable to change the optimizer environment.

One of the best ways to fix queries in such a scenario is to use a hint so that the index will be picked by the CBO (cost based optimizer).

Having said that, even if we use hints in a query so the CBO picks the right index, how are we going to push such a plan to the actual query? We cannot (and should not) change the application code and put a hint in it.

The query should essentially remain the same, but it should pick the index.

Fixing query using Hints

We want a solution where we put hints in a copy of the query to generate the desired plan, and then have the original SQL ID pick up that newly generated plan.

Here is the example:

SQL>create table T as select * from dba_objects;

Table created.

SQL>col name format a30;
SQL>col value format a30;
SQL>select name, value from v$parameter where name = 'optimizer_index_cost_adj';

NAME                   VALUE
------------------------------ ------------------------------
optimizer_index_cost_adj       1

SQL>alter session set optimizer_index_cost_adj = 10000;

Session altered.

SQL>create index T_IDX on T(data_object_id);

Index created.

SQL>

SQL>exec dbms_stats.gather_table_stats('ORACLE_DBA','T');

PL/SQL procedure successfully completed.

SQL>exec dbms_stats.gather_index_stats('ORACLE_DBA','T_IDX');

PL/SQL procedure successfully completed.

SQL>select count(1) from T where data_object_id > 65000 and status = 'VALID';

  COUNT(1)
----------
     32103

SQL>select count(1) from T where data_object_id > 65000 and status = 'VALID';

  COUNT(1)
----------
     32103

SQL>select sql_id from v$sql where sql_text like 'select count(1) from T where data_object_id > 65000 and status%';

SQL_ID
-------------
cj4sqr25b6b8k

SQL>@explain
Enter SQL ID:- cj4sqr25b6b8k

PL/SQL procedure successfully completed.

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID    cj4sqr25b6b8k, child number 0
-------------------------------------
select count(1) from T where data_object_id > 65000 and status = 'VALID'

Plan hash value: 2966233522

---------------------------------------------------------------------------
| Id  | Operation       | Name | Rows  | Bytes | Cost (%CPU)| Time      |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      |      |   258 (100)|      |
|   1 |  SORT AGGREGATE    |      |    1 |    11 |           |      |
|*  2 |   TABLE ACCESS FULL| T      | 17690 |   190K|   258   (1)| 00:00:04 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(("STATUS"='VALID' AND "DATA_OBJECT_ID">65000))

19 rows selected.

SQL>

Now we know that since the optimizer_index_cost_adj value is so high, the CBO will not try to use index T_IDX on the data_object_id column, because the indexed plan will be costed as too expensive.

But what if we want our query to use the index? This may not be a good test case, but there could be scenarios where using the index will actually improve the performance of the query. The CBO may not be able to see that (based on cost), but we know it.

So let's force index usage using a hint.

SQL>select /*+ index(T T_IDX) */ count(1) from T where data_object_id > 65000 and status = 'VALID';

  COUNT(1)
----------
     32103

SQL>select /*+ index(T T_IDX) */ count(1) from T where data_object_id > 65000 and status = 'VALID';

  COUNT(1)
----------
     32103

SQL>select sql_id from v$sql where sql_text like 'select /*+ index(T T_IDX) */ count(1) from T where data_object_id > 65000 and status%';

SQL_ID
-------------
dut61hdv6b12t

SQL>@explain
Enter SQL ID:- dut61hdv6b12t

PL/SQL procedure successfully completed.

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID    dut61hdv6b12t, child number 0
-------------------------------------
select /*+ index(T T_IDX) */ count(1) from T where data_object_id >
65000 and status = 'VALID'

Plan hash value: 1789076273

--------------------------------------------------------------------------------------
| Id  | Operation             | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |         |         |         |     267K(100)|         |
|   1 |  SORT AGGREGATE          |         |       1 |      11 |          |         |
|*  2 |   TABLE ACCESS BY INDEX ROWID| T     | 17690 |     190K|     267K  (1)| 00:53:31 |
|*  3 |    INDEX RANGE SCAN         | T_IDX | 35380 |         |    8253   (1)| 00:01:40 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("STATUS"='VALID')
   3 - access("DATA_OBJECT_ID">65000)

22 rows selected.

SQL>

We can see that the index hint has forced a plan that uses the index.
Now we can easily get the required query hints from the V$SQL_PLAN view, since it has the OTHER_XML column.
You can use the same SQL I mentioned in the previous post (https://avdeo.com/2012/06/20/fixing-sql-plans-the-hard-way-part-1/) to get the required hints:

select  CHR(9)||''''
        ||regexp_replace(extractvalue(value(d),'/hint'),'''','''''')
        || ''','
from    xmltable('/*/outline_data/hint'
                passing (select     xmltype(other_xml) as xmlval
                        from        v$sql_plan
                        where       sql_id = '&GOOD_SQL_ID'
                        and         CHILD_NUMBER = &CHILD_NO
                        and         other_xml is not null)) d;

In this case GOOD_SQL_ID = dut61hdv6b12t and CHILD_NO = 0, which returns:

    'IGNORE_OPTIM_EMBEDDED_HINTS',
    'OPTIMIZER_FEATURES_ENABLE(''11.2.0.2'')',
    'DB_VERSION(''11.2.0.2'')',
    'OPT_PARAM(''_b_tree_bitmap_plans'' ''false'')',
    'OPT_PARAM(''_optim_peek_user_binds'' ''false'')',
    'OPT_PARAM(''_optimizer_connect_by_cost_based'' ''false'')',
    'OPT_PARAM(''optimizer_index_cost_adj'' 10000)',
    'OPT_PARAM(''optimizer_index_caching'' 80)',
    'OUTLINE_LEAF(@"SEL$1")',
    'INDEX_RS_ASC(@"SEL$1" "T"@"SEL$1" ("T"."DATA_OBJECT_ID"))',

The rest of the procedure remains the same. The only change I made in the PL/SQL code below, compared to the code in my previous post, is that I replaced the dba_hist_sqltext table with v$sql and the dba_hist_sql_plan table with v$sql_plan. This is because the new query we ran with hints may not yet be part of the DBA_HIST* views; being a new query, it will be present only in the v$ views.

The following PL/SQL code can be used:

declare
    ar_profile_hints sys.sqlprof_attr;
begin
    ar_profile_hints := sys.sqlprof_attr(
    'BEGIN_OUTLINE_DATA',
    'IGNORE_OPTIM_EMBEDDED_HINTS',
    'OPTIMIZER_FEATURES_ENABLE(''11.2.0.2'')',
    'DB_VERSION(''11.2.0.2'')',
    'OPT_PARAM(''_b_tree_bitmap_plans'' ''false'')',
    'OPT_PARAM(''_optim_peek_user_binds'' ''false'')',
    'OPT_PARAM(''_optimizer_connect_by_cost_based'' ''false'')',
    'OPT_PARAM(''optimizer_index_cost_adj'' 10000)',
    'OPT_PARAM(''optimizer_index_caching'' 80)',
    'OUTLINE_LEAF(@"SEL$1")',
    'INDEX_RS_ASC(@"SEL$1" "T"@"SEL$1" ("T"."DATA_OBJECT_ID"))',
    'END_OUTLINE_DATA');
    for sql_rec in (
    select t.sql_id, t.sql_text
    from v$sql t, v$sql_plan p
    where t.sql_id = p.sql_id
    and p.sql_id = '&SQL_ID_TO_FIX'
    and p.plan_hash_value = &BAD_PLAN_HASH_VALUE
    and p.parent_id is null
)
loop
        DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
    sql_text    => sql_rec.sql_text,
    profile     => ar_profile_hints,
    name        => 'PROFILE_&&SQL_ID_TO_FIX');

end loop;
end;
/
Enter value for sql_id_to_fix: cj4sqr25b6b8k
Enter value for bad_plan_hash_value: 2966233522
Enter value for sql_id_to_fix: cj4sqr25b6b8k

PL/SQL procedure successfully completed.

In this case SQL_ID_TO_FIX is our original SQL = cj4sqr25b6b8k, and BAD_PLAN_HASH_VALUE is the plan hash of the original SQL = 2966233522.

So the code at the end that I mentioned in my previous post (https://avdeo.com/2012/06/20/fixing-sql-plans-the-hard-way-part-1/) can be run on the same database instead of on another database. The only change is to use the v$ views instead of the DBA_HIST* views.

Let's check the plan of the original query.

First purge the SQL from the shared pool using "sys.dbms_shared_pool.purge" so that the old plan gets flushed out, and then run the query to see the new plan.
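The @purgesql script itself is not shown in this post; a minimal sketch of what it does with sys.dbms_shared_pool.purge (the address and hash value are looked up first, and the placeholder below must be replaced with the actual values) would be:

```sql
-- Find the cursor's address and hash value for the given SQL_ID.
SELECT address, hash_value
FROM   v$sqlarea
WHERE  sql_id = 'cj4sqr25b6b8k';

-- Purge that cursor from the shared pool ('C' = cursor).
-- '<address>,<hash_value>' is a placeholder for the values found above.
BEGIN
   sys.DBMS_SHARED_POOL.PURGE('<address>,<hash_value>', 'C');
END;
/
```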

 

SQL>@purgesql
Enter SQL_ID:- cj4sqr25b6b8k

PL/SQL procedure successfully completed.

SQL>select count(1) from T where data_object_id > 65000 and status = 'VALID';

  COUNT(1)
----------
     32103

SQL>select count(1) from T where data_object_id > 65000 and status = 'VALID';

  COUNT(1)
----------
     32103

SQL>@explain
Enter SQL ID:- cj4sqr25b6b8k

PL/SQL procedure successfully completed.


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID    cj4sqr25b6b8k, child number 0
-------------------------------------
select count(1) from T where data_object_id > 65000 and status = 'VALID'

Plan hash value: 1789076273

--------------------------------------------------------------------------------------
| Id  | Operation             | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |         |         |         |     267K(100)|         |
|   1 |  SORT AGGREGATE          |         |       1 |      11 |          |         |
|*  2 |   TABLE ACCESS BY INDEX ROWID| T     | 17690 |     190K|     267K  (1)| 00:53:31 |
|*  3 |    INDEX RANGE SCAN         | T_IDX | 35380 |         |    8253   (1)| 00:01:40 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("STATUS"='VALID')
   3 - access("DATA_OBJECT_ID">65000)

Note
-----
   - SQL profile PROFILE_cj4sqr25b6b8k used for this statement


25 rows selected.

SQL>

 

We basically tried to simulate a different plan (using index) to get the required hints from V$SQL_PLAN. We then forced those hints on original SQL to create a profile so that original SQL will start using index.

Hope the above procedure is clear !!

Sessions and Processes Parameters – Oracle 11g

Oracle has changed the way it derives the sessions parameter from the processes parameter.

In 10g, Oracle derived the sessions parameter from the processes parameter using the following formula:

(1.1 * PROCESSES) + 5

From 11g R1 onwards this changed to:

(1.5 * PROCESSES) + 22

There is another rule, though:

If we set the sessions parameter lower than the derived value, Oracle automatically bumps it up to the derived value.
If we set the sessions parameter higher than the derived value, Oracle uses our set value.

So it always takes whichever is higher.
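The derivation and the whichever-is-higher rule can be sketched in a few lines of Python (a rough model of the documented behaviour, not Oracle's internal code; as the transcripts below show, the instance may report a derived value slightly above the formula):

```python
def derived_sessions(processes, version="11g"):
    """Approximate derived value of SESSIONS from PROCESSES."""
    if version == "10g":
        return int(1.1 * processes) + 5
    return int(1.5 * processes) + 22  # 11g R1 onwards

def effective_sessions(processes, set_value, version="11g"):
    """Oracle keeps whichever is higher: the set value or the derived value."""
    return max(set_value, derived_sessions(processes, version))

print(derived_sessions(4500))          # formula gives 6772 (the instance below reports 6776)
print(effective_sessions(4500, 5000))  # derived value wins
print(effective_sessions(4500, 7000))  # set value wins
```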

SQL>select name, value from v$spparameter where name in ('sessions','processes');

NAME                   VALUE
------------------------------ ----------
processes               4500
sessions               5000

SQL>select name, value from v$parameter where name in ('sessions','processes');

NAME                   VALUE
------------------------------ ----------
processes               4500
sessions               6776

We can see the derived value of 6776 (roughly (4500 * 1.5) + 22) is greater than the set value of 5000 in the spfile, so Oracle takes the derived value for this parameter.

Let's change the value in the spfile to 7000:

SQL> alter system set sessions = 7000 scope=spfile;

System altered.

<< Bounce DB >>
SQL>select name, value from v$spparameter where name in ('sessions','processes');

NAME                   VALUE
------------------------------ ----------
processes               4500
sessions               7000

SQL>select name, value from v$parameter where name in ('sessions','processes');

NAME                   VALUE
------------------------------ ----------
processes               4500
sessions               7000

Now you can see it takes the set value, because the set value of 7000 is more than the derived value of 6776.

Hope this helps !!

References:

Sessions parameter in 10.2 – http://docs.oracle.com/cd/B19306_01/server.102/b14237/initparams191.htm#i1133629

Sessions parameter in 11.1 – http://docs.oracle.com/cd/E11882_01/server.112/e25513/initparams230.htm#REFRN10197

Sessions parameter in 11.2 – http://docs.oracle.com/cd/E11882_01/server.112/e25513/initparams230.htm#i1133629

Auto-Capture Baseline Behaviour

This article demonstrates the auto-capture baseline behavior, which, I was surprised to find, many people are not aware of.

Auto-capture of baselines behaves in the following way.

Case 1) optimizer_capture_sql_plan_baselines parameter is TRUE

If you have the optimizer_capture_sql_plan_baselines parameter set to TRUE, baselines for SQLs are captured automatically when a SQL query runs 2 or more times. I believe all of us know about this feature.

Case 2) optimizer_capture_sql_plan_baselines parameter is FALSE

If the optimizer_capture_sql_plan_baselines parameter is set to FALSE, many people think that no new baselines will be captured in the database. This is WRONG. Even when this parameter is set to FALSE, baselines will be captured in the following case:

*** If a query has one or more baselines with ENABLED=YES ***

Let's take an example.

SQL>create table T as select * from dba_objects;

Table created.

SQL>explain plan for 
  2  select count(1) from T where data_object_id between 1000 and 2000;

Explained.

SQL>@display

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2966233522

-----------------------------------
| Id  | Operation       | Name |
-----------------------------------
|   0 | SELECT STATEMENT   |      |
|   1 |  SORT AGGREGATE    |      |
|*  2 |   TABLE ACCESS FULL| T      |
-----------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATA_OBJECT_ID"<=2000 AND "DATA_OBJECT_ID">=1000)

Note
-----
   - rule based optimizer used (consider using cbo)

18 rows selected.

SQL>select count(1) from T where data_object_id between 1000 and 2000;

  COUNT(1)
----------
     0

SQL>/

  COUNT(1)
----------
     0
         
SQL>select sql_id from v$sql where sql_text like 'select count(1) from T where data_object_id between 1000 and%';

SQL_ID
-------------
7fbxxz894w0mg

SQL>@baseline
Enter sql ID:- 7fbxxz894w0mg

SQL_TEXT
----------------------------------------------------------------------------------------------------
select count(1) from T where data_object_id between 1000 and 2000


SQL_HANDLE               PLAN_NAME              CREATOR          ORIGIN         LAST_MODIFIED            ENA ACC FIX
------------------------------ ------------------------------ --------------- -------------- ------------------------------ --- --- ---
SYS_SQL_46f1e9bfbc331557       SYS_SQL_PLAN_bc3315573fdbb376  ORACLE_DBA      AUTO-CAPTURE   26-JUN-12 11.30.08.000000 AM   YES YES NO

SQL_HANDLE            PLAN_NAME                  PLAN_HASH_VALUE     ENABLED    ACCEPTED     FIXED
-------------------------   ----------------------------      --------------     -------    -------     -------
SYS_SQL_46f1e9bfbc331557    SYS_SQL_PLAN_bc3315573fdbb376     2966233522      YES         YES       NO

PL/SQL procedure successfully completed.

SQL>show parameters  capture

optimizer_capture_sql_plan_baselines    boolean         TRUE

SQL>alter system set optimizer_capture_sql_plan_baselines=false;

System altered.

SQL>create index T_IDX on T(data_object_id);

Index created.

SQL>select count(1) from T where data_object_id between 1000 and 2000;

  COUNT(1)
----------
     0

SQL>/

  COUNT(1)
----------
     0

         
SQL>@baseline
Enter sql ID:- 7fbxxz894w0mg

SQL_TEXT
----------------------------------------------------------------------------------------------------
select count(1) from T where data_object_id between 1000 and 2000


SQL_HANDLE               PLAN_NAME              CREATOR          ORIGIN         LAST_MODIFIED            ENA ACC FIX
------------------------------ ------------------------------ --------------- -------------- ------------------------------ --- --- ---
SYS_SQL_46f1e9bfbc331557       SYS_SQL_PLAN_bc3315573fdbb376  ORACLE_DBA      AUTO-CAPTURE   26-JUN-12 11.30.08.000000 AM   YES YES NO
SYS_SQL_46f1e9bfbc331557       SYS_SQL_PLAN_bc331557ded8ae2f  ORACLE_DBA      AUTO-CAPTURE   26-JUN-12 11.31.20.000000 AM   YES NO  NO

SQL_HANDLE            PLAN_NAME                  PLAN_HASH_VALUE     ENABLED    ACCEPTED     FIXED
-------------------------   ----------------------------      --------------     -------    -------     -------
SYS_SQL_46f1e9bfbc331557    SYS_SQL_PLAN_bc3315573fdbb376     2966233522      YES         YES       NO
SYS_SQL_46f1e9bfbc331557    SYS_SQL_PLAN_bc331557ded8ae2f     293504097      YES        NO         NO

PL/SQL procedure successfully completed.

SQL>

This proves that even when the optimizer_capture_sql_plan_baselines parameter is set to FALSE, baselines can still be captured for our old SQLs.

If we think about this feature, it makes perfect sense.

Imagine you are rolling out a new module and new queries are going to run against new tables.

Initially you will not have enough data in the new tables, so your queries will use a full table scan (FTS) and will not use indexes even if you create them. This is because, based on the value of the optimizer_index_cost_adj parameter, the optimizer might decide to read the table directly rather than going to the index and then the table.

If you have optimizer_capture_sql_plan_baselines set to TRUE, baselines will be created for those queries (and automatically accepted, since the first baseline captured for a SQL is automatically accepted).

Later, when the data grows and you really want the indexes to be used, your query will still do a FTS (because of the auto-captured baseline in ACCEPTED state). So even though new baselines are being captured, they are not being used.

You then have to manually (or automatically) evolve those baselines to correct the plans.

Instead, my approach would be to enable the optimizer_capture_sql_plan_baselines parameter only once your database is well baked and you have enough data available. This way the baselines it creates will be right the first time.

If your database is still not baked enough and you are looking for plan stability, I would suggest getting baselines from another, similar production database which has been baked for a long time.

Once you have baselines in place for almost all queries, turn off the optimizer_capture_sql_plan_baselines parameter. This prevents baseline capture for new queries, but it does not stop new baselines from being captured for old SQLs.

Hope this helps !!

Fixing SQL Plans: The hard way – Part 1

This article is about fixing the plan for a SQL statement the hard way.

Till now we have seen that profiles can be created using the DBMS_SQLTUNE package, where

  1. We create a SQL tuning set
  2. We pack SQL IDs into the SQL tuning set
  3. We create a tuning task with the SQL tuning set as input
  4. Finally, after running the tuning task, we accept the profiles created

A profile consists of hints that a SQL statement uses to arrive at a plan.

We can use the same DBMS_SQLTUNE package and its IMPORT_SQL_PROFILE procedure to import the required hints and apply them to the SQL whose plan we have to fix.

Where are these hints stored ?

If you are talking about a SQL, check V$SQL_PLAN: you will see a column OTHER_XML (CLOB). It has all the hints stored in XML form.

So logically, if we have a good plan in any other database, we can just get the hints from that plan and use them in our database so that the same plan appears here (provided all the objects used by that plan exist in our database).

This scenario happens many times. We have several similar production databases, and sometimes the plan is good in some DBs whereas in a few DBs the plan is entirely different (maybe because of different stats, a different environment, or some parameter difference).

Making changes to stats, parameters, or the DB environment has a very big impact, as it affects other SQLs. So if you have 1 or 2 SQLs with bad plans, it's better to get a better plan from some other DB and just apply it to the affected DBs.

Here is another way to create a profile.

If we have a better plan in some other DB, we need to parse the OTHER_XML column for that SQL and extract the hints from it.

This can be done using the following SQL

select  CHR(9)||''''
        ||regexp_replace(extractvalue(value(d),'/hint'),'''','''''')
        || ''','
from    xmltable('/*/outline_data/hint'
                passing (select     xmltype(other_xml) as xmlval
                        from        v$sql_plan
                        where       sql_id = '&GOOD_SQL_ID'
                        and         CHILD_NUMBER = &CHILD_NO
                        and         other_xml is not null)) d;

In my case sql_id = 33fndgzsas09k and child_no = 0.

I got the following output

            'IGNORE_OPTIM_EMBEDDED_HINTS',
            'OPTIMIZER_FEATURES_ENABLE(''11.2.0.2'')',
            'DB_VERSION(''11.2.0.2'')',
            'OPT_PARAM(''_b_tree_bitmap_plans'' ''false'')',
            'OPT_PARAM(''_optim_peek_user_binds'' ''false'')',
            'OPT_PARAM(''_optimizer_connect_by_cost_based'' ''false'')',
            'OPT_PARAM(''optimizer_index_cost_adj'' 3)',
            'OPT_PARAM(''optimizer_index_caching'' 80)',
            'OUTLINE_LEAF(@"SEL$1")',
            'INDEX_RS_ASC(@"SEL$1" "BI"@"SEL$1" ("TEST_ITEMS"."ISBN" "TEST_ITEMS"."TEST_ORDER_ID" "TEST_ITEMS"."OWNER"))',
            'INDEX_RS_ASC(@"SEL$1" "PCS"@"SEL$1" ("TEST_CUSTS"."TEST_WORK_ID"))',
            'INDEX(@"SEL$1" "SR"@"SEL$1" ("CUST_REQUESTS"."TEST_WORK_ID"))',
            'LEADING(@"SEL$1" "BI"@"SEL$1" "PCS"@"SEL$1" "SR"@"SEL$1")',
            'USE_NL(@"SEL$1" "PCS"@"SEL$1")',
            'USE_NL(@"SEL$1" "SR"@"SEL$1")',
            'NLJ_BATCHING(@"SEL$1" "SR"@"SEL$1")',

Above are the hints the SQL used. Note that these are not SQL hints that a user supplies. The CBO internally creates a complete set of outline hints (which together describe the plan) and uses them to get the required data.
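Since OTHER_XML is plain XML with one <hint> element per outline hint under <outline_data>, the same extraction is easy to prototype outside the database. Here is a minimal Python sketch; the OTHER_XML fragment below is a trimmed, made-up sample, not real dictionary data:

```python
import xml.etree.ElementTree as ET

# Trimmed, hypothetical OTHER_XML payload -- real ones carry more metadata.
other_xml = """
<other_xml>
  <outline_data>
    <hint><![CDATA[BEGIN_OUTLINE_DATA]]></hint>
    <hint><![CDATA[IGNORE_OPTIM_EMBEDDED_HINTS]]></hint>
    <hint><![CDATA[INDEX_RS_ASC(@"SEL$1" "T"@"SEL$1" ("T"."DATA_OBJECT_ID"))]]></hint>
    <hint><![CDATA[END_OUTLINE_DATA]]></hint>
  </outline_data>
</other_xml>
"""

root = ET.fromstring(other_xml)
hints = [h.text for h in root.findall('./outline_data/hint')]
for h in hints:
    print(h)
```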

How do we make use of these hints ?

We will declare a variable of type sys.sqlprof_attr (a VARRAY already defined in the database) and put all the hints into this VARRAY.

Once we have this VARRAY with all the hints, we can apply it to any SQL that matches our criteria.

The following PL/SQL code will do that

declare
	ar_profile_hints sys.sqlprof_attr;
begin
	ar_profile_hints := sys.sqlprof_attr(
	'BEGIN_OUTLINE_DATA',
        'IGNORE_OPTIM_EMBEDDED_HINTS',
        'OPTIMIZER_FEATURES_ENABLE(''11.2.0.2'')',
        'DB_VERSION(''11.2.0.2'')',
        'OPT_PARAM(''_b_tree_bitmap_plans'' ''false'')',
        'OPT_PARAM(''_optim_peek_user_binds'' ''false'')',
        'OPT_PARAM(''_optimizer_connect_by_cost_based'' ''false'')',
        'OPT_PARAM(''optimizer_index_cost_adj'' 3)',
        'OPT_PARAM(''optimizer_index_caching'' 80)',
        'OUTLINE_LEAF(@"SEL$1")',
        'INDEX_RS_ASC(@"SEL$1" "BI"@"SEL$1" ("TEST_ITEMS"."ISBN" "TEST_ITEMS"."TEST_ORDER_ID" "TEST_ITEMS"."OWNER"))',
        'INDEX_RS_ASC(@"SEL$1" "PCS"@"SEL$1" ("TEST_CUSTS"."TEST_WORK_ID"))',
        'INDEX(@"SEL$1" "SR"@"SEL$1" ("CUST_REQUESTS"."TEST_WORK_ID"))',
        'LEADING(@"SEL$1" "BI"@"SEL$1" "PCS"@"SEL$1" "SR"@"SEL$1")',
        'USE_NL(@"SEL$1" "PCS"@"SEL$1")',
        'USE_NL(@"SEL$1" "SR"@"SEL$1")',
        'NLJ_BATCHING(@"SEL$1" "SR"@"SEL$1")',
        'END_OUTLINE_DATA');
	for sql_rec in (
	select t.sql_id, t.sql_text
	from dba_hist_sqltext t, dba_hist_sql_plan p
	where t.sql_id = p.sql_id
	and p.sql_id = '&SQL_ID_TO_FIX'
	and p.plan_hash_value = &BAD_PLAN_HASH_VALUE
	and p.parent_id is null
)
loop
        DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
	sql_text    => sql_rec.sql_text,
	profile     => ar_profile_hints,
	name	    => 'PROFILE_33fndgzsas09k');

end loop;
end;
/

You can run the above PL/SQL code in a database where the plan is bad, and it will create a profile for that SQL. Once the profile is created, purge the SQL from the shared pool so that the existing plan is wiped out, then run the SQL again.

This will produce a new (good) plan, the same as the one we copied from the other DB.

The above PL/SQL code can be generated directly using the following SQL.

Run the following SQL in the database where the plan is good and provide the required inputs. It outputs PL/SQL code like the above (with all values in place), which you can copy and paste on the affected DB where the plan is bad.

*** NOTE :- You need to run the SQL code below in the DB where the plan is good.

accept HINTED_SQL_ID prompt 'Enter good SQL ID:- '
accept CHILD_NO prompt 'Enter child number of good SQL:- '
accept BAD_SQL_ID prompt 'Enter bad SQL ID to be fixed:- '
accept PLAN_HASH_VALUE prompt 'Enter bad SQL plan_hash_value:- '
set pagesize 0
set line 9999
set verify off;
set heading off;
set feedback off;
set echo off;
set pagesize 0
prompt '======================= OUTPUT ======================='
select CHR(10) from dual;
select  'declare'
        ||CHR(10)||CHR(9)
        ||'ar_profile_hints sys.sqlprof_attr;'
        ||CHR(10)
        ||'begin'||CHR(10)||CHR(9)
        ||'ar_profile_hints := sys.sqlprof_attr('||CHR(10)||CHR(9)
        ||'''BEGIN_OUTLINE_DATA'','
from    dual;
select  CHR(9)||''''
        ||regexp_replace(extractvalue(value(d),'/hint'),'''','''''')
        || ''','
from    xmltable('/*/outline_data/hint'
                passing (select     xmltype(other_xml) as xmlval
                        from        v$sql_plan
                        where       sql_id = '&HINTED_SQL_ID'
                        and         CHILD_NUMBER = &CHILD_NO
                        and         other_xml is not null)) d;
select  CHR(9)
        ||'''END_OUTLINE_DATA'');'||CHR(10)||CHR(9)
        ||'for sql_rec in ('||CHR(10)||CHR(9)
        ||'select t.sql_id, t.sql_text'||CHR(10)||CHR(9)
        ||'from dba_hist_sqltext t, dba_hist_sql_plan p'||CHR(10)||CHR(9)
        ||'where t.sql_id = p.sql_id'||CHR(10)||CHR(9)
        ||'and p.sql_id = '''||'&BAD_SQL_ID'||'''' ||CHR(10)||CHR(9)
        ||'and p.plan_hash_value = '||&PLAN_HASH_VALUE ||CHR(10)||CHR(9)
        ||'and p.parent_id is null'||CHR(10)||') loop' ||CHR(10)
        ||'DBMS_SQLTUNE.IMPORT_SQL_PROFILE(' ||CHR(10)||CHR(9)
        ||'sql_text    => sql_rec.sql_text,'||CHR(10)||CHR(9)
        ||'profile     => ar_profile_hints,' ||CHR(10)||CHR(9)
        ||'name        => ''PROFILE_'||'&BAD_SQL_ID'||'''); '||CHR(10)||CHR(9)
        || 'end loop;'||CHR(10)|| 'end;'|| CHR(10)||'/'
from    dual;
select CHR(10) from dual;
prompt '======================= OUTPUT ======================='

Enter good SQL ID:- <This should be the SQL_ID with good plan. In our case its the same SQL ID but in different DB>

Enter child number of good SQL:- <This is the child number of the SQL in the DB where the plan is good>

Enter bad SQL ID to be fixed:- <This is same SQL ID as input 1 since we are checking plan for same SQL ID in other DB>

Enter bad SQL plan_hash_value:- <This is the plan hash value of SQL ID where plan is bad>

Example Run:

SQL>@create_profile_code.sql
Enter good SQL ID:- 33fndgzsas09k
Enter child number of good SQL:- 0
Enter bad SQL ID to be fixed:- 33fndgzsas09k
Enter bad SQL plan_hash_value:- 3225275398
'======================= OUTPUT ======================='
declare
	ar_profile_hints sys.sqlprof_attr;
begin
	ar_profile_hints := sys.sqlprof_attr(
	'BEGIN_OUTLINE_DATA',

        'IGNORE_OPTIM_EMBEDDED_HINTS',
        'OPTIMIZER_FEATURES_ENABLE(''11.2.0.2'')',
        'DB_VERSION(''11.2.0.2'')',
        'OPT_PARAM(''_b_tree_bitmap_plans'' ''false'')',
        'OPT_PARAM(''_optim_peek_user_binds'' ''false'')',
        'OPT_PARAM(''_optimizer_connect_by_cost_based'' ''false'')',
        'OPT_PARAM(''optimizer_index_cost_adj'' 3)',
        'OPT_PARAM(''optimizer_index_caching'' 80)',
        'OUTLINE_LEAF(@"SEL$1")',
        'INDEX_RS_ASC(@"SEL$1" "BI"@"SEL$1" ("TEST_ITEMS"."ISBN" "TEST_ITEMS"."TEST_ORDER_ID" "TEST_ITEMS"."OWNER"))',
        'INDEX_RS_ASC(@"SEL$1" "PCS"@"SEL$1" ("TEST_CUSTS"."TEST_WORK_ID"))',
        'INDEX(@"SEL$1" "SR"@"SEL$1" ("CUST_REQUESTS"."TEST_WORK_ID"))',
        'LEADING(@"SEL$1" "BI"@"SEL$1" "PCS"@"SEL$1" "SR"@"SEL$1")',
        'USE_NL(@"SEL$1" "PCS"@"SEL$1")',
        'USE_NL(@"SEL$1" "SR"@"SEL$1")',
        'NLJ_BATCHING(@"SEL$1" "SR"@"SEL$1")',
	'END_OUTLINE_DATA');
	for sql_rec in (
	select t.sql_id, t.sql_text
	from dba_hist_sqltext t, dba_hist_sql_plan p
	where t.sql_id = p.sql_id
	and p.sql_id = '33fndgzsas09k'
	and p.plan_hash_value = 3225275398
	and p.parent_id is null
) loop
DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
	sql_text    => sql_rec.sql_text,
	profile     => ar_profile_hints,
	name	    => 'PROFILE_33fndgzsas09k');
	end loop;
end;
/

'======================= OUTPUT ======================='

Hope this helps !!

Removing targets from Grid Control – Oracle 11g

Many times we have to clean up old targets from Grid Control. This typically happens when a host gets deprecated.

For example, say you have a database that was running on a host and is registered in grid, and because of scaling requirements you moved the database to a bigger box. In that case you have to install an agent on the new box and remove the old target from the grid.

Not doing this will leave your grid stale with old data.

We can clean up the grid from the Grid UI. The only problem with this approach is that it is time consuming.

When we have a database running on a host, it's not just 1 target that the agent discovers. We have multiple targets: host, database, listeners, etc. Deleting each target takes time.

The easiest approach for deleting targets is from the backend database. The grid database stores complete grid information.

We can follow the steps below to remove a target from the backend without using the Grid UI

1) Stop the management agent on the target database

% <AGENT_HOME>/bin/emctl stop agent

2) Log onto the grid database (also called the repository database) and search for the target

SQL> select target_name from mgmt_targets where target_type = 'oracle_emd' and target_name like '<search_string>';

This shows the registered agents matching the search string

3) Once you identify the target name, use the below command to delete the target

exec mgmt_admin.cleanup_agent('<target_name>');

Check the grid UI now for this target and you will not find it.

Hope this helps !!

Inside Index block – Oracle Database 11g

I ran into a scenario where I really wanted to check what's inside the leaf block of an index.

This post is to dig inside the index leaf block and find out various facts stored inside a block.

We will start with taking a dump of a block

SYS.ORCL>alter session set tracefile_identifier='INDEX_BLOCK_DUMP';
Session altered.
SYS.ORCL>alter system dump datafile 292 block 245790;
System altered.
SYS.ORCL>

You will get a trace file having name something like “<DB_NAME>_ora_<PID>_INDEX_BLOCK_DUMP.trc” in your trace directory

Even if you don't use tracefile_identifier, you can still see the name of the dump file using the following query

SYS.ORCL>select value from v$diag_info where name = 'Default Trace File';
VALUE
-----------------------------------------------------------------
/dumps-01/diag/rdbms/orcl_b/orcl/trace/orcl_ora_11416.trc
1 row selected.
SYS.ORCL>

At the start of the block you will see following details

*********************************************************************
Start dump data blocks tsn: 17 file#:292 minblk 245790 maxblk 245790
Block dump from cache:
Dump of buffer cache at level 4 for tsn=17, rdba=1224982558
Block dump from disk:
buffer tsn: 17 rdba: 0x4903c01e (292/245790)
scn: 0x06ca.4d4684c7 seq: 0x01 flg: 0x04 tail: 0x84c70601
frmt: 0x02 chkval: 0xe178 type: 0x06=trans data
*********************************************************************

tsn –> Tablespace Number = 17
file# –> File # = 292
minblk & maxblk –> Min block and Max block numbers = 245790
rdba –> Relative data block address. Its value is 0x4903c01e in hexadecimal and 1224982558 in decimal.
We can get the rdba using dbms_utility.make_data_block_address

SYS.ORCL>select dbms_utility.make_data_block_address(292,245790) from dual;
DBMS_UTILITY.MAKE_DATA_BLOCK_ADDRESS(292,245790)
------------------------------------------------
 1224982558
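The encoding behind make_data_block_address is straightforward for smallfile tablespaces: the relative file number sits in the high bits and the block number in the low 22 bits. A quick Python sketch reproduces the value above and decodes it back:

```python
BLOCK_BITS = 22  # low 22 bits of a smallfile DBA hold the block number

def make_dba(file_no, block_no):
    """Pack relative file# and block# into a data block address."""
    return (file_no << BLOCK_BITS) | block_no

def split_dba(dba):
    """Unpack a data block address into (file#, block#)."""
    return dba >> BLOCK_BITS, dba & ((1 << BLOCK_BITS) - 1)

print(make_dba(292, 245790))       # 1224982558, matching the query above
print(hex(make_dba(292, 245790)))  # 0x4903c01e, the rdba in the dump
print(split_dba(0x4903c01e))       # (292, 245790)
```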

After the above lines we have a dump of memory. You will see a line something like the one below

Dump of memory from 0x00002B5C26C2DA00 to 0x00002B5C26C2FA00

At each address it prints the contents. This format is very difficult to read.

2B5C26C2DA00 0000A206 4903C01E 4D4684C7 040106CA [.......I..FM....]
2B5C26C2DA10 0000E178 00000002 000551A1 4CFB5368 [x........Q..hS.L]
2B5C26C2DA20 000006CA 0032001D 4903C009 000F0046 [......2....IF...]
2B5C26C2DA30 00221BD6 01460CC8 0001F2FC 06CAC000 [.."...F.........]
2B5C26C2DA40 40FE45EE 001D00EE 0008E5F9 0086EC85 [.E.@............]
2B5C26C2DA50 003EC8BA 06CA8000 40FA9BFF 00000000 [..>........@....]
2B5C26C2DA60 00000000 00000000 00000000 00000000 [................]
2B5C26C2DA70 00000000 00100040 0027C6F8 0085DF28 [....@.....'.(...]

But once this memory dump is complete, we can see some readable information stored in index block. Here is what comes next

seg/obj: 0x551a1 csc: 0x6ca.4cfb5368 itc: 29 flg: E typ: 2 - INDEX
 brn: 0 bdba: 0x4903c009 ver: 0x01 opc: 0
 inc: 0 exflg: 0

seg/obj is the data object id. 0x551a1 = 348577 in decimal.

SYS.ORCL>select object_name from dba_objects where data_object_id = 348577;
OBJECT_NAME
-------------------------
UNIQ_FPM_MSG_WHSE_CAT_ID
1 row selected.

This is the index for which we dumped the block.

csc: 0x6ca.4cfb5368 –> This is the commit SCN of the block
itc: 29 –> Number of interested transaction list (ITL) entries in the block
typ: 2 – INDEX –> Indicates this is an index block

Next come the Interested Transaction List (ITL) entries. We had 29 ITL entries, as indicated above by itc: 29.

Itl  Xid                 Uba                Flag Lck   Scn/Fsc
0x01 0x0046.00f.00221bd6 0x01460cc8.f2fc.01 CB-- 0 scn 0x06ca.40fe45ee
0x02 0x00ee.01d.0008e5f9 0x0086ec85.c8ba.3e C--- 0 scn 0x06ca.40fa9bff
0x03 0x0000.000.00000000 0x00000000.0000.00 ---- 0 fsc 0x0000.00000000
0x04 0x0040.010.0027c6f8 0x0085df28.cb0f.0b C--- 0 scn 0x06ca.40fb14e2
0x05 0x00f0.017.00090d20 0x37860557.c767.36 C--- 0 scn 0x06ca.40fb15a7
0x06 0x013d.020.0002205c 0x3802952d.202e.26 C--- 0 scn 0x06ca.40fd6d83
0x07 0x001d.012.0049340a 0x38063ebe.c0d6.48 C--- 0 scn 0x06ca.40fd6e3e
0x08 0x00b5.018.000e6ee9 0x384279bf.95f1.07 C--- 0 scn 0x06ca.40fe4336
0x09 0x010e.007.000a4514 0x0088535e.b343.19 C--- 0 scn 0x06ca.40fa9b08
0x0a 0x00df.00f.0009db56 0x014571c8.e27e.0a C--- 0 scn 0x06ca.40fe446d
0x0b 0x00f4.00b.00092665 0x37c65073.c520.27 C--- 0 scn 0x06ca.4100f955
0x0c 0x0129.017.00035f0e 0x0187980c.2a1a.04 C--- 0 scn 0x06ca.4100fa77
0x0d 0x007a.002.0010db5b 0x38458cb5.5bbd.20 C--- 0 scn 0x06ca.41168fad
0x0e 0x0069.00e.0017dd7d 0x3807b2e2.1390.30 C--- 0 scn 0x06ca.411691b0
0x0f 0x00d7.006.0009a7ee 0x38417fbc.dd00.19 C--- 0 scn 0x06ca.41169313
0x10 0x0181.003.000170b0 0x00c78696.190d.2b C--- 0 scn 0x06ca.41169bc6
0x11 0x00bd.014.000d2ddf 0x3805ab0d.72f9.19 C--- 0 scn 0x06ca.41169ecc
0x12 0x0001.019.00634f4c 0x3802e302.4fd4.0d C--- 0 scn 0x06ca.4116ab9e
0x13 0x0035.014.003a9a1d 0x014691a5.cda1.3d C--- 0 scn 0x06ca.4116ad76
0x14 0x017a.017.00018a03 0x37c4d213.17c4.0d C--- 0 scn 0x06ca.4116b388
0x15 0x006a.012.0017f16a 0x37878e97.329f.2b C--- 0 scn 0x06ca.4165b46c
0x16 0x00f0.01b.00090d4e 0x3846d24e.c76b.36 C--- 0 scn 0x06ca.4165b57c
0x17 0x0168.004.00019960 0x00c77875.18ab.08 C--- 0 scn 0x06ca.4165b7a5
0x18 0x0104.002.0006b6fd 0x38401fbe.a734.0e C--- 0 scn 0x06ca.4165b951
0x19 0x0044.018.00275fa5 0x38417b7e.cf86.3c C--- 0 scn 0x06ca.41035e03
0x1a 0x0127.01c.0006d44b 0x008602ca.5584.04 C--- 0 scn 0x06ca.40f95f82
0x1b 0x00ba.004.000dbc9a 0x3786425e.74f4.32 C--- 0 scn 0x06ca.40f96071
0x1c 0x0194.004.00014665 0x38464daf.1aca.46 C--- 0 scn 0x06ca.40f9c995
0x1d 0x0014.00c.0066b5d6 0x01865ee3.e323.08 C--- 0 scn 0x06ca.40f9ca31

Itl –> Slot id of the interested transaction
Xid –> Transaction id of the transaction occupying that ITL
Uba –> Undo block address
Flag –> State of the transaction (C – Committed)
Lck –> Number of locks held by the transaction
Scn/Fsc –> SCN number of the transaction
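The Xid field itself is structured: it is usually read as undo segment number, slot, and sequence (usn.slot.sqn), each in hex. A small Python sketch decodes the first ITL entry above:

```python
def parse_itl_xid(xid):
    """Decode a dumped transaction id 'usn.slot.sqn' (hex fields) to decimal."""
    usn, slot, sqn = xid.split('.')
    return int(usn, 16), int(slot, 16), int(sqn, 16)

# First ITL entry from the dump: 0x0046.00f.00221bd6
print(parse_itl_xid('0x0046.00f.00221bd6'))  # (70, 15, 2235350)
```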

Once the ITL list is displayed, next comes the leaf block dump.
At the start of the leaf block dump it gives the header information of the block

Leaf block dump
===============
header address 47674787290348=0x2b5c26c2dcec     => Header start and end address
kdxcolev 0                                       => Index Level. 0 represent leaf block
KDXCOLEV Flags = - - - 
kdxcolok 0                                       => Indicates that no block transaction happening right now
kdxcoopc 0x80: opcode=0: iot flags=--- is converted=Y    => Internal Operation code
kdxconco 3                                        => Index Column Count
kdxcosdc 8                                        => Count of index structural changes involving block
kdxconro 48                                       => Number of index records in this block
kdxcofbo 132=0x84                                 => Offset from start to the beginning of free space
kdxcofeo 1435=0x59b                               => Offset to end of free space (ie. first portion of block containing index data)
kdxcoavs 1303                                     => Available space in the block (effective space between kdxcofbo and kdxcofeo)
kdxlespl 0                                        => Bytes of uncommitted data at the time of block split that have been cleaned out
kdxlende 0                                        => Number of deleted entries
kdxlenxt 830726637=0x3183e1ed                     => Pointer to next leaf block
kdxleprv 830721132=0x3183cc6c                     => Pointer to previous leaf block
kdxledsz 10                                       => Deleted space
kdxlebksz 7384                                    => Size of usable block space

After the header comes the index information, as shown below

row#0[3602] flag: ------, lock: 0, len=61, data:(10): 00 0c 13 af 37 47 af ee 00 05
col 0; len 26; (26):
 35 35 39 32 35 38 5f 31 33 35 33 32 35 30 31 34 33 5f 36 38 5e 32 34 31 34
 30
col 1; len 4; (4): 4c 45 4a 31
col 2; len 16; (16): 43 68 61 6e 67 65 50 69 63 6b 53 74 61 74 75 73
row#1[3663] flag: ------, lock: 0, len=57, data:(10): 00 0c 13 af 37 47 af 24 00 42
col 0; len 26; (26):
 35 35 39 22 35 38 5f 31 33 32 33 32 35 30 32 32 33 5f 36 38 5f 32 34 31 34
 30
col 1; len 4; (4): 4c 45 4f 33
col 2; len 12; (12): 50 69 63 6b 43 6f 6d 70 6c 65 74 65

Here is what above values mean

row#0 -> First row in the block. Rows start from 0
[3602] -> Starting location within the block
flag: -> Flags giving information about the record in the index leaf block. If you delete a row, the flag is marked D, meaning deleted.
lock -> Lock information at row level
len -> Total length of the index record
data -> I guess this should be the rowid, which is the value following this: "00 0c 13 af 37 47 af ee 00 05"

Each row in this index has 3 columns. Columns start with col 0, so we have column information in col 0, col 1 and col 2.

col 0 -> First column
len -> Length of the column. 26 in this case.
The value after that is the actual column value, stored in the block in base 16.

We can actually get the real values from the dump using the dbms_stats.convert_raw_value procedure:
pass it the hex value seen in the block dump (with spaces removed) and it returns the decoded value.

Let's say we want to find the value of each column in the first row.

SQL>var col1 varchar2(1000);
SQL>exec dbms_stats.convert_raw_value(replace('35 35 39 32 35 38 5f 31 33 32 33 32 35 30 30 32 33 5f 36 38 5f 32 34 31 34 30',' ',''),:col1);
PL/SQL procedure successfully completed.
SQL>var col2 varchar2(10);
SQL>exec dbms_stats.convert_raw_value(replace('4c 45 4a 31',' ',''),:col2);
PL/SQL procedure successfully completed.
SQL>var col3 varchar2(100);
SQL>exec dbms_stats.convert_raw_value(replace('43 68 61 6e 67 65 50 69 63 6b 53 74 61 74 75 73',' ',''),:col3);
PL/SQL procedure successfully completed.
COL1
-----------------------------
559258_1323250023_68_24140

COL2
-----------------------------
ABC

COL3
-----------------------------
EmployeeStatus
SQL>

Similarly, you can write your own PL/SQL code and find out all the values stored inside an index block.
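For plain character columns you don't even need the database: the dumped bytes are just hex-encoded characters, so a few lines of Python decode them directly. The sketch below decodes the same first-column hex string passed to convert_raw_value above:

```python
def decode_dump_hex(dump_hex):
    """Decode a space-separated hex string from a block dump as ASCII text."""
    return bytes.fromhex(dump_hex.replace(' ', '')).decode('ascii')

# First column's hex bytes from the block dump
col0_hex = ('35 35 39 32 35 38 5f 31 33 32 33 32 35 30 30 32 33'
            ' 5f 36 38 5f 32 34 31 34 30')
print(decode_dump_hex(col0_hex))  # 559258_1323250023_68_24140
```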

Hope this helps !!

Oracle SQL Plan Management – Part 1

SQL Plan Management – Oracle Database 11g

SQL Plan Management, or SPM, is a new feature introduced in 11g. We will take a detailed look at SPM in the following posts.
Before starting with a practical example, let's see some theory.

What is SQL Plan Management?

SPM is a new feature introduced in Oracle Database 11g which stores baselines (plans + hints) inside the database.
SPM allows an Oracle DBA to capture and preserve the most efficient execution plans for any SQL statement, thus limiting the impact of refreshed optimizer statistics, changes to existing applications, and even upgraded database versions.

Why do we need SPM?

SPM is basically used for plan stability. In a database environment with many queries running, you want those queries to keep performing the way they do today, irrespective of changes you make to the environment.
For example, if you upgrade your database, your queries might behave differently because of changes in the optimizer or the environment, and right after the upgrade you may have to fix many queries in your production DB.
SPM was introduced to avoid this and give you plan stability: even after a change in the environment, your queries will use the same plans as before. More plans (better or worse) may exist in the new environment, but the optimizer is not allowed to use them until you confirm the plan change.

What was wrong with outlines?

Well, outlines are about fixing the plan the optimizer will use for a query. So even if the environment changes, the plan stays the same.
There are situations where you want the plan to change along with, say, a change in the data. What if your data gets skewed over time?

Initially you had 100 entries for month “DEC” out of a total of 150 entries, so a full table scan (FTS) made sense. As the data grows to 1 million records with only around 1000 entries for month “DEC”, it makes sense for the optimizer to switch from the FTS to an index scan.
But since you used an outline, you are forcing the optimizer to do an FTS. This is just one example situation; things can change in your environment, so an outline is not a good approach for such situations.

What was wrong with profiles?

Profiles are another feature, introduced in 10g, and can be implemented at the SQL level. Profiles are better than outlines in that they do not fix a plan throughout the life of the SQL; with a change in the environment the plan can change even when a profile is used.

So what's the difference between profiles and SQL plan baselines?

A profile is more of an advisor. It gives advice to the optimizer while the query is executed, providing it with more precise estimates. Profiles are about correcting the optimizer so it picks the right plan when the underlying data is skewed or has changed drastically.
The goal is to arrive at the best execution plan for the SQL by giving very precise data to the optimizer. It is still the optimizer that decides the execution plan, based on the information/hints it receives from the profile.

SQL plan baselines work differently. A baseline enforces the plan to be used for a SQL. It does not provide estimates or help the optimizer in any other way; it simply tells the optimizer to ignore everything else and use the plan it supplies.
The optimizer just follows the plan provided by the baseline. So here the baseline is the driving mechanism for getting the correct plan.

Finally, how are baselines different from outlines?

Well, a baseline always keeps the optimized plans for your SQL. If there is a change in the environment, a new plan is captured for the same SQL and stored in the baseline history.
Depending on the baseline parameter settings in your environment, the new plan is either used automatically or you have to evolve it before the optimizer can use it.

So in a way SQL plan baselines are a combination of outlines and profiles: they give the plan stability of outlines while also allowing better plans to be captured when the environment changes.

Not only that, SQL plan baselines give the DBA complete control over:

  1. Whether to capture new and better plans.
  2. Whether to use the new plans automatically, without the DBA's intervention.

So DBAs now have complete control of the environment. Far better than profiles and outlines.

SQL Plan Baseline Parameters:

Before we check how to use SQL plan baselines, let's consider the significance of two important baseline parameters.

1) optimizer_capture_sql_plan_baselines – Default “FALSE”

This parameter is responsible for automatically capturing baselines for SQLs. Setting it to TRUE asks Oracle to gather baselines automatically.
When you run a query for the first time (and a parsed version is not present in the shared pool), Oracle treats it as a fresh new query and does not create a baseline.
When you run the query a second time, Oracle considers it repetitive and automatically creates the baseline.
These baselines are visible in the DBA_SQL_PLAN_BASELINES view.

If this parameter is set to FALSE, then we (the DBA) have to create baselines for the SQL manually. There are two procedures available for creating baselines manually.

1) Using DBMS_SPM.LOAD_PLANS_FROM_SQLSET

This procedure is usually used when we store our SQLs in a SQL tuning set, normally before upgrading the database to 11g. In our 10g database we create a SQL tuning set and capture all our SQLs; once we upgrade to 11g, we create baselines from the plans in the tuning set. That way, whatever plans were effective in 10g will also be used in 11g, and there won't be any plan flips.
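A minimal sketch of loading the tuning set after the upgrade. The tuning set name MY_10G_STS and its owner are hypothetical; substitute whatever set you created before the upgrade.

```sql
SET SERVEROUTPUT ON
DECLARE
  l_plans_loaded  PLS_INTEGER;
BEGIN
  -- Load every plan in the tuning set as an accepted baseline
  l_plans_loaded := DBMS_SPM.load_plans_from_sqlset(
    sqlset_name  => 'MY_10G_STS',      -- hypothetical tuning set name
    sqlset_owner => 'ADVAITD_DBA');
  DBMS_OUTPUT.put_line('Plans Loaded: ' || l_plans_loaded);
END;
/
```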

2) Using DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE

This is used when we want to fix one of the SQLs currently running in our 11g database. We just load the plan from the cursor cache (shared pool) and create a baseline out of it. We need to give the SQL ID as input to this procedure.
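A hedged sketch of the cursor-cache method; the SQL ID below is a placeholder for whatever V$SQL reports for your statement.

```sql
SET SERVEROUTPUT ON
DECLARE
  l_plans_loaded  PLS_INTEGER;
BEGIN
  -- Create a baseline from the plan currently in the shared pool
  l_plans_loaded := DBMS_SPM.load_plans_from_cursor_cache(
    sql_id => '1fkh93md0802n');        -- placeholder SQL ID from V$SQL
  DBMS_OUTPUT.put_line('Plans Loaded: ' || l_plans_loaded);
END;
/
```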

We will see how to create a baseline using the manual method later.

2) optimizer_use_sql_plan_baselines – Default “TRUE”

This parameter allows the optimizer to use the baselines present in DBA_SQL_PLAN_BASELINES. If you set it to FALSE, your 11g DB will start behaving the same as a 10g DB:
any change in the environment may flip the plan. Keeping this parameter TRUE is important in 11g.
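For completeness, this is how the two parameters can be set; a sketch only, since scope and required privileges depend on your environment.

```sql
-- Instance-wide, persisted across restarts (spfile in use)
ALTER SYSTEM SET optimizer_use_sql_plan_baselines = TRUE SCOPE=BOTH;

-- Or per session, useful for testing the effect before changing the system
ALTER SESSION SET optimizer_capture_sql_plan_baselines = TRUE;
```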

How to use SQL Plan Management baselines ?

Let's take an example in a test database.

Table T with 1540 records.

SQL> select count(1) from t;

  COUNT(1)
----------
      1540

Data is skewed and distribution is as given below.

SQL> select col1, count(1) from t group by col1;

      COL1   COUNT(1)
---------- ----------
         1          1
         2          3
         3       1536

Gather the stats on the table

SQL> exec dbms_stats.gather_table_stats(OWNNAME=> 'ADVAITD_DBA',TABNAME => 'T', DEGREE => 6, GRANULARITY => 'ALL' ,CASCADE => TRUE , METHOD_OPT => 'FOR ALL INDEXED COLUMNS SIZE 254');

SQL>

Currently I don't have any baselines.

SQL> select count(1) from dba_sql_plan_baselines;

  COUNT(1)
----------
         0

SQL>

My baseline parameter settings are as below.

SQL> show parameters baselines

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_capture_sql_plan_baselines boolean     FALSE
optimizer_use_sql_plan_baselines     boolean     TRUE
SQL>

Let us first consider the auto capture utility for baselines.

AUTO Capture of baseline

SQL> alter session set optimizer_capture_sql_plan_baselines=TRUE;

Session altered.

SQL>

As mentioned earlier, we need to run the query twice for the baseline to be created automatically.

SQL> select * from t where col1 = 1;

      COL1 COL2                                               COL3
---------- -------------------------------------------------- -----------
         1 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 01-JUN-2011

SQL> select * from t where col1 = 1;

      COL1 COL2                                               COL3
---------- -------------------------------------------------- -----------
         1 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 01-JUN-2011

If we check DBA_SQL_PLAN_BASELINES we will see a sql baseline created.

SQL> select SQL_HANDLE, PLAN_NAME, ENABLED, ACCEPTED, FIXED
  2  from dba_sql_plan_baselines
  3  WHERE sql_text like 'select * from t%';

SQL_HANDLE                     PLAN_NAME                      ENA ACC FIX
------------------------------ ------------------------------ --- --- ---
SYS_SQL_1447ba3a1d83920f       SYS_SQL_PLAN_1d83920f94ecae5c  YES YES NO

Turning off auto SQL plan baseline capture

SQL> alter session set optimizer_capture_sql_plan_baselines=FALSE;

Session altered.

SQL>

The following statement shows the plan stored in the baseline. DBMS_XPLAN has a new function, DISPLAY_SQL_PLAN_BASELINE, which displays the baseline.

SQL> set line 999
SQL> set pagesize 999
SQL> select * from table(dbms_xplan.display_sql_plan_baseline(plan_name=>'SYS_SQL_PLAN_1d83920f94ecae5c'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

--------------------------------------------------------------------------------
SQL handle: SYS_SQL_1447ba3a1d83920f
SQL text: select * from t where col1 = 1
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Plan name: SYS_SQL_PLAN_1d83920f94ecae5c
Enabled: YES     Fixed: NO      Accepted: YES     Origin: AUTO-CAPTURE
--------------------------------------------------------------------------------

Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    15 |   735 |     9   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |    15 |   735 |     9   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("COL1"=1)

24 rows selected.

SQL>

Origin: AUTO-CAPTURE shown above tells us that this baseline was captured automatically.

Let's now create an index and gather stats on it.

SQL> create index t_idx on t(col1);

Index created.

SQL> exec DBMS_STATS.GATHER_INDEX_STATS(OWNNAME => 'ADVAITD_DBA', INDNAME=>'T_IDX');

Run the same query now; since the index exists, we expect the query to use it.

SQL> explain plan for
  2  select * from t where col1 = 1;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    15 |   735 |     9   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |    15 |   735 |     9   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("COL1"=1)

Note
-----
   - SQL plan baseline "SYS_SQL_PLAN_1d83920f94ecae5c" used for this statement

17 rows selected.

SQL>

The reason we still see a full table scan is given by the NOTE at the end: “SQL plan baseline "SYS_SQL_PLAN_1d83920f94ecae5c" used for this statement”.

Since we have a baseline created for this SQL, the plan is not allowed to change. This is the kind of stability that SQL plan baselines give.
But using the index would be beneficial in our case.

If we check DBA_SQL_PLAN_BASELINES we can see a new plan has been created (PLAN_NAME = SYS_SQL_PLAN_1d83920fae82cf72), but it is not yet ACCEPTED. The plan is enabled though.

SQL> select SQL_HANDLE, PLAN_NAME, ENABLED, ACCEPTED, FIXED
  2  from dba_sql_plan_baselines
  3  WHERE sql_text like 'select * from t%';

SQL_HANDLE                     PLAN_NAME                      ENA ACC FIX
------------------------------ ------------------------------ --- --- ---
SYS_SQL_1447ba3a1d83920f       SYS_SQL_PLAN_1d83920f94ecae5c  YES YES NO
SYS_SQL_1447ba3a1d83920f       SYS_SQL_PLAN_1d83920fae82cf72  YES NO  NO

We can check what the new plan looks like using dbms_xplan.display_sql_plan_baseline.

SQL> select * from table(dbms_xplan.display_sql_plan_baseline(plan_name=>'SYS_SQL_PLAN_1d83920fae82cf72'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

--------------------------------------------------------------------------------
SQL handle: SYS_SQL_1447ba3a1d83920f
SQL text: select * from t where col1 = 1
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Plan name: SYS_SQL_PLAN_1d83920fae82cf72
Enabled: YES     Fixed: NO      Accepted: NO      Origin: AUTO-CAPTURE
--------------------------------------------------------------------------------

Plan hash value: 470836197

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |    15 |   735 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T     |    15 |   735 |     1   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T_IDX |     6 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("COL1"=1)

25 rows selected.

SQL>

As seen above, the new plan uses the index. Let's evolve this plan now.
Evolving a plan means evaluating its cost and accepting it if it performs better than all the accepted plans for this query.

SQL> SELECT DBMS_SPM.evolve_sql_plan_baseline(sql_handle => 'SYS_SQL_1447ba3a1d83920f') from dual;

DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(SQL_HANDLE=>'SYS_SQL_1447BA3A1D83920F')
--------------------------------------------------------------------------------

-------------------------------------------------------------------------------
                        Evolve SQL Plan Baseline Report
-------------------------------------------------------------------------------

Inputs:
-------
  SQL_HANDLE = SYS_SQL_1447ba3a1d83920f
  PLAN_NAME  =
  TIME_LIMIT = DBMS_SPM.AUTO_LIMIT
  VERIFY     = YES
  COMMIT     = YES

Plan: SYS_SQL_PLAN_1d83920fae82cf72
-----------------------------------
  Plan was verified: Time used .01 seconds.
  Passed performance criterion: Compound improvement ratio >= 7.33
  Plan was changed to an accepted plan.

                      Baseline Plan      Test Plan     Improv. Ratio
                      -------------      ---------     -------------
  Execution Status:        COMPLETE       COMPLETE
  Rows Processed:                 1              1
  Elapsed Time(ms):               0              0
  CPU Time(ms):                   0              0
  Buffer Gets:                   22              3              7.33
  Disk Reads:                     0              0
  Direct Writes:                  0              0
  Fetches:                        0              0
  Executions:                     1              1

-------------------------------------------------------------------------------
                                 Report Summary
-------------------------------------------------------------------------------
Number of SQL plan baselines verified: 1.
Number of SQL plan baselines evolved: 1.

Sometimes your plan may not get evolved because Oracle sees that other already ACCEPTED plans are better than the plan you are trying to evolve.
But if you know your plan is better and still want to deploy it, you can do so by manually changing the ACCEPTED and ENABLED attributes as shown below.

SET SERVEROUTPUT ON
DECLARE
  l_plans_altered  PLS_INTEGER;
BEGIN
  l_plans_altered := DBMS_SPM.alter_sql_plan_baseline(
    sql_handle      => 'SYS_SQL_1447ba3a1d83920f',
    plan_name       => 'SYS_SQL_PLAN_1d83920fae82cf72',
    attribute_name  => 'ENABLED',
    attribute_value => 'YES');

  DBMS_OUTPUT.put_line('Plans Altered: ' || l_plans_altered);
END;
/

SET SERVEROUTPUT ON
DECLARE
  l_plans_altered  PLS_INTEGER;
BEGIN
  l_plans_altered := DBMS_SPM.alter_sql_plan_baseline(
    sql_handle      => 'SYS_SQL_1447ba3a1d83920f',
    plan_name       => 'SYS_SQL_PLAN_1d83920fae82cf72',
    attribute_name  => 'ACCEPTED',
    attribute_value => 'YES');

  DBMS_OUTPUT.put_line('Plans Altered: ' || l_plans_altered);
END;
/

You can also set the value of the FIXED attribute using the same function. Here is the meaning of ENABLED, ACCEPTED and FIXED:

ENABLED  – ‘YES’ means the plan is available for use by the optimizer. It may or may not be used, depending on the accepted status.
ACCEPTED – ‘YES’ means the optimizer will use the plan while running the query. ‘NO’ means the optimizer will not use the plan.
FIXED    – ‘YES’ means the SQL plan baseline is not evolved over time. A fixed plan takes precedence over a non-fixed plan.
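Setting FIXED works the same way as the ENABLED and ACCEPTED calls above; a sketch reusing the handle and plan name from this example.

```sql
SET SERVEROUTPUT ON
DECLARE
  l_plans_altered  PLS_INTEGER;
BEGIN
  -- Pin this plan so it takes precedence over non-fixed plans
  l_plans_altered := DBMS_SPM.alter_sql_plan_baseline(
    sql_handle      => 'SYS_SQL_1447ba3a1d83920f',
    plan_name       => 'SYS_SQL_PLAN_1d83920fae82cf72',
    attribute_name  => 'FIXED',
    attribute_value => 'YES');
  DBMS_OUTPUT.put_line('Plans Altered: ' || l_plans_altered);
END;
/
```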

Once you evolve the plan, you can see that it is ACCEPTED now.

SQL> select SQL_HANDLE, PLAN_NAME, ENABLED, ACCEPTED, FIXED
  2  from dba_sql_plan_baselines
  3  WHERE sql_text like 'select * from t%';

SQL_HANDLE                     PLAN_NAME                      ENA ACC FIX
------------------------------ ------------------------------ --- --- ---
SYS_SQL_1447ba3a1d83920f       SYS_SQL_PLAN_1d83920f94ecae5c  YES YES NO
SYS_SQL_1447ba3a1d83920f       SYS_SQL_PLAN_1d83920fae82cf72  YES YES NO

Now if you run the explain plan, you can see index T_IDX being used.

SQL> explain plan for
  2  select * from t where col1 = 1;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 470836197

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |    15 |   735 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T     |    15 |   735 |     1   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T_IDX |     6 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("COL1"=1)

Note
-----
   - SQL plan baseline "SYS_SQL_PLAN_1d83920fae82cf72" used for this statement

18 rows selected.

SQL>

Hope this helps.

Part 2 can be viewed at – https://avdeo.com/2011/06/07/oracle-sql-plan-management-%E2%80%93-part-2/

References:

http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/optplanmgmt.htm

http://www.comp.dit.ie/btierney/oracle11gdoc/appdev.111/b28419/d_spm.htm