AWS Solutions Architect – Associate Certification

Hi All,

I am glad to announce that I have completed the AWS Solutions Architect – Associate level certification.


This is a completely new world with lots of opportunities to learn new things.

Below is the list of resources I used to prepare for the certification. This should be helpful for anyone aspiring to the AWS Solutions Architect – Associate certification.

  • Thanks to acloud.guru for providing complete course material for this certification. It was really helpful. Using this course material is recommended.
  • Went through the following white papers (these are also mentioned in the acloud.guru course material):
    • AWS Storage Services Overview
    • Architecting for the Cloud – AWS Best practices
    • Amazon Web Services: Risk and Compliance
    • Storage Options in the AWS Cloud
    • AWS Well-Architected Framework
    • Overview of Security Processes
  • Went through various product FAQs
  • Went through resource limitations for various products in AWS documentation

Hope this helps !!

 

Book on Oracle 12c New features

A Book on Oracle 12c New features for administrators

Hi All,

It’s our pleasure to announce the availability of our first book – OCP 12c upgrade 1Z0-060 Exam Guide

 


The book covers the new features of Oracle Database 12c for DBAs from the OCP 12c upgrade exam standpoint. The book is available worldwide in Kindle format as well as paperback.

Link to Amazon.com – https://www.amazon.com/OCP-Upgrade-1Z0-060-Exam-Guide-ebook/dp/B01N3XXYYC/ref=sr_1_1?ie=UTF8&qid=1481109922&sr=8-1&keywords=B01N3XXYYC

Link to Amazon.in – https://www.amazon.in/OCP-Upgrade-1Z0-060-Exam-Guide-ebook/dp/B01N3XXYYC/ref=sr_1_1?ie=UTF8&qid=1481221495&sr=8-1&keywords=B01N3XXYYC

Link to Packt Publishing – https://www.packtpub.com/big-data-and-business-intelligence/ocp-12c-upgrade-1z0-060-exam-guide

This book covers all the new features of the Oracle 12.1.0.1 database. So, if you want to upgrade your knowledge of Oracle 12c, this book certainly serves the purpose, and it is an ideal book for DBAs aspiring to the OCP 12c certification.

Feel free to post your comments about this book on the Amazon.com page, or post them below.

Following are the objectives of the chapters covered in this book; they are completely in line with the Oracle Database 12c exam syllabus.

Section 1 – Oracle 12c new features

Chapter 1 – Getting Started with Oracle 12c: This chapter introduces some of the new features related to the installation and configuration of Oracle 12c, including EM Database Express.
Chapter 2 – Multitenant Container Database (CDB) Architecture: This chapter introduces you to the Oracle 12c multitenant architecture. It covers the different components of the multitenant architecture, how to create and configure a multitenant database, and the different ways to create a pluggable database (PDB).
Chapter 3 – Managing CDBs and PDBs: This chapter covers establishing connections to container and pluggable databases, performing general administration tasks, and managing tablespaces, users, roles and privileges in CDBs and PDBs.
Chapter 4 – Information Life Cycle Management and Storage Enhancements: This chapter introduces you to information life cycle management in Oracle 12c and how it has been remarkably automated in this release.
Chapter 5 – Auditing, Privileges and Data Redaction: As the title says, this chapter covers the new unified auditing feature of Oracle 12c, along with the new privilege analysis tool and, finally, the new data redaction feature.
Chapter 6 – Database Tuning and Troubleshooting: In this chapter we learn about the new features related to performance tuning. It covers the new optimizer features in Oracle 12c and explains adaptive SQL plans, dynamic statistics and SQL plan directives. We also look at some improvements related to statistics gathering, followed by enhancements to histograms.
Chapter 7 – Backup and Flashback: This chapter covers backup and recovery scenarios and how the multitenant architecture affects them. We also look into some of the RMAN enhancements in Oracle 12c, followed by table restore and recovery using a simple RMAN command.
Chapter 8 – Database Administration Enhancements: In this chapter we look into Resource Manager and how it works with the Oracle 12c multitenant database.
Chapter 9 – Miscellaneous New Features: This chapter bundles the other new features introduced in Oracle 12c, including enhancements related to Oracle Data Pump, SQL*Loader, online operations, etc. It also covers the new partitioning enhancements and the new top-N SQL clauses in Oracle 12c.

Section 2 – General Administration

Chapter 10 – Core Administration: This chapter is more about general administration; we explain the fundamentals of database administration.
Chapter 11 – Performance Management: This chapter discusses designing the database for optimal performance, monitoring database performance in order to improve it, analyzing and identifying performance issues, and performing Real Application Testing to analyze performance.
Chapter 12 – Storage: In this chapter we introduce the logical and physical structures of the database and how they differ. It also covers ASM and how it can improve storage management.
Chapter 13 – Security: This chapter covers developing and implementing a robust security policy for an Oracle database, auditing actions in the database, creating a password file, and authenticating privileged users remotely using the password file.

Regards,
Advait & Indira
Authors – OCP 12c Upgrade 1Z0-060 Exam Guide

Adaptive Cursor Sharing

Adaptive cursor sharing was introduced by Oracle in 11g Release 1 as a way to generate the best plan for a SQL statement in all situations. Prior to adaptive cursor sharing, the optimizer generated a single plan for a SQL statement, and that plan was used by all cursors of that SQL_ID. So if we have data skewness in a column and that column is used in the “where” clause of the SQL statement, a single plan for that SQL will not be optimal for all the values of that column which are passed to the bind variable.

Let's take a simple example. We will create a table and add random data to it.

SQL> create table T1
 2 (emp_id number,
 3 dept_id number,
 4 salary number,
 5 gender char(1),
 6 dob date,
 7 address varchar2(60));

Table created.

Let me add close to 10K records (9,999, to be exact) to this table

SQL> insert into T1
     SELECT LEVEL,
            (MOD (ROWNUM, 2)+1)*10,
            TRUNC (DBMS_RANDOM.VALUE (1000, 500000), 2),
            DECODE (ROUND (DBMS_RANDOM.VALUE (1, 2)), 1, 'M', 2, 'F'),
            TO_DATE ( ROUND (DBMS_RANDOM.VALUE (1, 28))
                     || '-'
                     || ROUND (DBMS_RANDOM.VALUE (1, 12))
                     || '-'
                     || ROUND (DBMS_RANDOM.VALUE (1900, 2010)),
                        'DD-MM-YYYY'
                     ),
            DBMS_RANDOM.STRING ('x', DBMS_RANDOM.VALUE (20, 50))
     FROM DUAL
     CONNECT BY LEVEL < 10000;

9999 rows created.

We can make one of the columns – DEPT_ID – skewed by updating the records as below

SQL> update T1 set dept_id = 10 where rownum <= 9950;

9950 rows updated.

SQL> update T1 set dept_id = 20 where dept_id != 10;

49 rows updated.

SQL> select dept_id, count(1) from T1 group by dept_id;

   DEPT_ID COUNT(1)
---------- ----------
        20 49
        10 9950

So we have skewness for column DEPT_ID now.
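
The execution plans shown later reference an index I_T1_DEPT_ID on the DEPT_ID column. Its creation is not shown in the listing above, but it would have been created with something like the following (the exact statement is an assumption based on the index name in the plans):

SQL> create index I_T1_DEPT_ID on T1(dept_id);

Index created.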

Let's gather statistics on this table with the cascade option so that statistics are gathered on the index as well. Since the data in the indexed column is skewed, it will also generate a histogram.


SQL> exec dbms_stats.gather_table_stats(OWNNAME=>user, TABNAME=>'T1',CASCADE => TRUE , METHOD_OPT => 'FOR ALL INDEXED COLUMNS SIZE 254');

PL/SQL procedure successfully completed.

SQL> select * from dba_histograms where TABLE_NAME = 'T1';

OWNER                          TABLE_NAME                     COLUMN_NAME                    ENDPOINT_NUMBER ENDPOINT_VALUE ENDPOINT_ACTUAL_VALU
------------------------------ ------------------------------ ------------------------------ --------------- -------------- --------------------
ADVAITD_DBA                    T1                             DEPT_ID                                   9950             10
ADVAITD_DBA                    T1                             DEPT_ID                                   9999             20

SQL> select table_name, column_name, histogram from user_tab_cols where table_name = 'T1' and histogram != 'NONE';

TABLE_NAME                     COLUMN_NAME                    HISTOGRAM
------------------------------ ------------------------------ ---------------
T1                             DEPT_ID                        FREQUENCY

We can see that it has created a frequency histogram on the indexed column – DEPT_ID.

Adaptive cursor sharing is affected by multiple factors. Following are some of the important ones:

  • Whether adaptive cursor sharing is enabled – this is the first, basic factor
  • The CURSOR_SHARING parameter and the value we set for it
  • SQL plan baselines – whether baselines are enabled and which baselines exist for the SQL with ENABLED=YES and ACCEPTED=YES
  • OPTIMIZER_INDEX_COST_ADJ – the value this parameter is set to in the database

You can check whether adaptive cursor sharing is enabled by looking at the parameter _optimizer_adaptive_cursor_sharing

SQL>@param
Enter Parameter name: _adaptive_cursor_sharing

Parameter                            |Description                                                 |Session Va|Instance V
-------------------------------------|------------------------------------------------------------|----------|----------
_optimizer_adaptive_cursor_sharing   |optimizer adaptive cursor sharing                           |TRUE      |TRUE

Now that we have adaptive cursor sharing enabled, let's see how it works and the effect the other factors have on it.

How adaptive cursor sharing works

So we have a table T1 with ~10K records, and one of its columns, DEPT_ID, has skewed data as seen above: 9950 records with DEPT_ID = 10 and 49 records with DEPT_ID = 20.

Let's run some queries and check how the optimizer behaves.

Running SQL for DEPT_NO = 20


SQL>exec :dept_no :=20;
SQL>select /*+ GATHER_PLAN_STATISTICS */ count(1), max(dob) from t1 where dept_id = :dept_no;

  COUNT(1)|MAX(DOB)
----------|----------------
        49|2002-09-06:00:00

SQL>select sql_id, child_number, executions, buffer_gets, is_bind_sensitive, is_bind_aware, is_shareable from v$sql where sql_text like '%GATHER_PLAN_STATISTICS%';

SQL_ID       |CHILD_NUMBER|PLAN_HASH_VALUE|EXECUTIONS|BUFFER_GETS|I|I|I
-------------|------------|---------------|----------|-----------|-|-|-
btj8b1hx3cf0g|           0|     3776485839|         1|         58|Y|N|Y

Pay attention to the columns IS_BIND_SENSITIVE, IS_BIND_AWARE and IS_SHAREABLE. The IS_BIND_SENSITIVE column tells whether the query is sensitive to the values of the bind variable – meaning, if we change the value of the bind variable, could the optimal plan change significantly?

In the above case, the optimizer shows IS_BIND_SENSITIVE=Y based on the histogram present on the column used in the “where” clause.

As per the Oracle definition, IS_BIND_SENSITIVE (VARCHAR2(1)) indicates whether the cursor is bind sensitive (Y) or not (N). A query is considered bind-sensitive if the optimizer peeked at one of its bind variable values when computing predicate selectivities and a change in a bind variable value may cause the optimizer to generate a different plan.

The plan it is currently using is an index range scan:

select /*+ GATHER_PLAN_STATISTICS */ count(:"SYS_B_0"), max(dob) from
t1 where dept_id = :dept_no

Plan hash value: 3776485839

---------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |              |       |       |     1 (100)|          |
|   1 |  SORT AGGREGATE              |              |     1 |    12 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1           |    49 |   588 |     1   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | I_T1_DEPT_ID |    49 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("DEPT_ID"=:DEPT_NO)

Let's run the query again, but this time with a bind value of 10, which matches 9950 records.

Running SQL for DEPT_NO = 10


SQL>exec :dept_no := 10;
SQL>select /*+ GATHER_PLAN_STATISTICS */ count(1), max(dob) from t1 where dept_id = :dept_no;

  COUNT(1)|MAX(DOB)
----------|----------------
      9950|2010-12-27:00:00

SQL>select sql_id, child_number, plan_hash_value , executions, buffer_gets, is_bind_sensitive, is_bind_aware,IS_SHAREABLE from v$sql where sql_text like '%GATHER_PLAN_STATISTICS%';

SQL_ID       |CHILD_NUMBER|PLAN_HASH_VALUE|EXECUTIONS|BUFFER_GETS|I|I|I
-------------|------------|---------------|----------|-----------|-|-|-
btj8b1hx3cf0g|           0|     3776485839|         2|        169|Y|N|Y

You can see that we now have 2 executions but the same cursor is being used. During the second run we used a different bind value, and the optimizer became aware that this bind value fetches a very different number of records and that the existing plan is expensive for it.

So even though it didn't generate a new cursor for this run and used the same plan as earlier, it will generate a new cursor when we run this again for DEPT_NO = 10, and hopefully it will also use a new plan (probably a full table scan).

Let's run this query again for the same DEPT_NO = 10.

Running SQL for DEPT_NO = 10


SQL>exec :dept_no := 10;
SQL>select /*+ GATHER_PLAN_STATISTICS */ count(1), max(dob) from t1 where dept_id = :dept_no;

  COUNT(1)|MAX(DOB)
----------|----------------
      9950|2010-12-27:00:00

SQL>select sql_id, child_number, plan_hash_value , executions, buffer_gets, is_bind_sensitive, is_bind_aware,IS_SHAREABLE from v$sql where sql_text like '%GATHER_PLAN_STATISTICS%';

SQL_ID       |CHILD_NUMBER|PLAN_HASH_VALUE|EXECUTIONS|BUFFER_GETS|I|I|I
-------------|------------|---------------|----------|-----------|-|-|-
btj8b1hx3cf0g|           0|     3776485839|         2|        169|Y|N|N
btj8b1hx3cf0g|           1|     3724264953|         1|         96|Y|Y|Y

Now you can see that it has created a new cursor along with a new plan, and IS_BIND_AWARE becomes YES for the new cursor. At the same time, cursor 0 becomes non-shareable (the IS_SHAREABLE column shows ‘N’). Since the optimizer knows that this query is bind sensitive, it creates new cursors which are bind aware and stops using the cursor it created initially, which was not bind aware.

As per the Oracle definition, IS_BIND_AWARE (VARCHAR2(1)) indicates whether the cursor is bind aware (Y) or not (N). A query is considered bind-aware if it has been marked to use extended cursor sharing. The query would already have been marked as bind-sensitive.

Sometimes the optimizer just creates a new cursor with the same plan, and it might go for an index range scan even when selecting the majority of rows from the table. This happened to me while trying this example. In such cases, the cost of the plan is influenced by the OPTIMIZER_INDEX_COST_ADJ parameter. In my case, the value of this parameter was set to the lowest value of 1, so it was going for an index range scan in all cases. A lower value of this parameter represents a lower cost for index access and favors the use of indexes. I set the value of this parameter to 50, and after that it went for a full table scan.
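
For reference, the change can be made at session level as sketched below (50 mirrors the value mentioned above; the appropriate value depends on your environment):

SQL> alter session set optimizer_index_cost_adj = 50;

Session altered.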

So if you check the plan for cursor 1, it shows a full table scan, as we are selecting more than 99% of the rows from the table (DEPT_NO = 10 fetches 9950 rows).

SQL>select * from table(dbms_xplan.display_cursor('btj8b1hx3cf0g',1));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  btj8b1hx3cf0g, child number 1
-------------------------------------
select /*+ GATHER_PLAN_STATISTICS */ count(:"SYS_B_0"), max(dob) from
t1 where dept_id = :dept_no

Plan hash value: 3724264953

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |    27 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |    12 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |  9950 |   116K|    27   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DEPT_ID"=:DEPT_NO)

At this point if we go for another execution with DEPT_NO = 20, it will create another cursor, which will be bind aware

Running SQL for DEPT_NO = 20

SQL>exec :dept_no := 20;
SQL>select /*+ GATHER_PLAN_STATISTICS */ count(1), max(dob) from t1 where dept_id = :dept_no;

  COUNT(1)|MAX(DOB)
----------|----------------
        49|2002-09-06:00:00

SQL>select sql_id, child_number, plan_hash_value , executions, buffer_gets, is_bind_sensitive, is_bind_aware,IS_SHAREABLE from v$sql where sql_text like '%GATHER_PLAN_STATISTICS%';

SQL_ID       |CHILD_NUMBER|PLAN_HASH_VALUE|EXECUTIONS|BUFFER_GETS|I|I|I
-------------|------------|---------------|----------|-----------|-|-|-
btj8b1hx3cf0g|           0|     3776485839|         2|        169|Y|N|N
btj8b1hx3cf0g|           1|     3724264953|         1|         96|Y|Y|Y
btj8b1hx3cf0g|           2|     3776485839|         1|          3|Y|Y|Y

As we can see, it created cursor 2, which is bind aware. So now cursor 0 is not shareable and will be aged out of the shared pool after some time, while cursors 1 and 2 are shareable and will be used by further executions.

Effect of bind peeking on adaptive cursor sharing

One important requirement for adaptive cursor sharing is that bind peeking must be enabled in the database. Without bind peeking, adaptive cursor sharing will not work.

Initially, in my environment, bind peeking was not enabled, and because of that the optimizer had no visibility into what values were being passed to the bind variables. In that case the optimizer will not generate new cursors even when adaptive cursor sharing is enabled. The optimizer does NOT even mark the cursor as bind sensitive, as it has no visibility into the bind values.

As you can see below, it was using the same cursor for multiple executions.


SQL>select sql_id, child_number, executions, buffer_gets, is_bind_sensitive, is_bind_aware from v$sql where sql_text like '%test_sql%';

SQL_ID       |CHILD_NUMBER|EXECUTIONS|BUFFER_GETS|I|I
-------------|------------|----------|-----------|-|-
2bxcmkhms19jn|           0|        12|        220|N|N

If we check the bind peeking, it was turned off

SQL>@param
Enter Parameter name: peek

Parameter                            |Description                                                 |Session Va|Instance V
-------------------------------------|------------------------------------------------------------|----------|----------
_optim_peek_user_binds               |enable peeking of user binds                                |FALSE     |FALSE
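
If you hit this, bind peeking has to be re-enabled for adaptive cursor sharing to get a chance to work. A minimal sketch at session level is shown below; note that _optim_peek_user_binds is an underscore parameter, so changing it is normally done only under Oracle Support guidance.

SQL> alter session set "_optim_peek_user_binds" = TRUE;

Session altered.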

Effect of CURSOR_SHARING parameter on adaptive cursor sharing

Cursor sharing plays an important role when we are using adaptive cursor sharing. The CURSOR_SHARING parameter determines whether literals in SQL statements are replaced with system-generated bind variables so that cursors can be shared. This was discussed very well at https://blogs.oracle.com/optimizer/entry/explain_adaptive_cursor_sharing_behavior_with_cursor_sharing_similar_and_force

But here is a brief overview of how CURSOR_SHARING plays an important role.
Until 11g R2, we had 3 values for the CURSOR_SHARING parameter:

  • EXACT: With this value, the optimizer will not replace the literals used in the statement with system-generated bind variables. So if you are using literals in the statement, each statement is considered different depending on the literal values used.
  • SIMILAR: With this value, the optimizer will replace the literals with system-generated bind variables only if there is no histogram on the column for which the literal is used. If there is a histogram, the optimizer will NOT replace the literal with a bind variable.
  • FORCE: This value replaces all literals with system-generated bind variables irrespective of whether a histogram is present on the column or not.

Example: we have the following statement with a literal

select /*+ GATHER_PLAN_STATISTICS */ count(1), max(dob) from t1 where dept_id = 10;

This is the same statement that we used above, and the optimizer knows that the DEPT_ID column is skewed and has a histogram defined on it. Because of the histogram, the optimizer knows that the data is skewed and that replacing the literal value with a system-generated bind variable can lead to a suboptimal plan.

So if we use CURSOR_SHARING = SIMILAR and we have statements like the one above, where literal values are used on columns with histograms, the optimizer will not replace the literals with bind variables.

In such cases adaptive cursor sharing will not kick in, as no bind variables are present in the query.

For adaptive cursor sharing to work in such a situation, we would need to drop the histograms on the columns where literal values are used. This has its own problems, as data skewness then cannot be determined by the optimizer.

This is one of the main reasons why the value SIMILAR is deprecated for the CURSOR_SHARING parameter in Oracle 12c. Oracle encourages using only FORCE and EXACT going forward.

Effect of baseline on adaptive cursor sharing

This is covered in good detail at https://blogs.oracle.com/optimizer/entry/how_do_adaptive_cursor_sharing. I am just covering this topic briefly for the sake of completeness.

SQL plan baselines have a different purpose than adaptive cursor sharing. While adaptive cursor sharing deals with using the right cursor and plan for a SQL statement depending on the bind values we pass and the skewness of the data, the purpose of a baseline is to provide plan stability and avoid using any new, unverified plan.

Adaptive cursor sharing kicks in before SQL plan management. So when the statement is executed, adaptive cursor sharing will peek at the bind values and check the skewness of the data. Based on these inputs, it will check whether one of the existing cursors and plans can be used or whether a new, better plan can be derived. Adaptive cursor sharing can tell the optimizer that a new plan or cursor needs to be created, and after that control moves on to the next part of the optimizer. So adaptive cursor sharing only suggests that a new cursor or plan is better and needs to be created, and provides all the required data.

Now, if we already have a SQL plan baseline enabled for this SQL, it will prevent the optimizer from using a new plan. However, the optimizer can create a new plan and save it in the plan history of the SQL management base. This new plan will be enabled but NOT ACCEPTED. Either the DBA has to manually accept the plan, or we have to wait for the “sql tuning advisor” auto task to kick in and accept the plan if it is verified as good. Only after the new plan is accepted will it be used for the SQL.

In my case, I had a baseline created for the SQL initially.


SQL>exec :dept_no :=20;
SQL>select /*+ GATHER_PLAN_STATISTICS */ count(1), max(dob) from t1 where dept_id = :dept_no;

  COUNT(1)|MAX(DOB)
----------|----------------
        49|2002-09-06:00:00


If you check the explain plan for this SQL

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  btj8b1hx3cf0g, child number 0
-------------------------------------
select /*+ GATHER_PLAN_STATISTICS */ count(:"SYS_B_0"), max(dob) from
t1 where dept_id = :dept_no

Plan hash value: 3776485839

---------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |              |       |       |     1 (100)|          |
|   1 |  SORT AGGREGATE              |              |     1 |    12 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1           |    49 |   588 |     1   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | I_T1_DEPT_ID |    49 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("DEPT_ID"=:DEPT_NO)

Note
-----
   - SQL plan baseline SQL_PLAN_gwfyvvw5h1w96e68c0740 used for this statement

We can see in the Note section that the baseline is being used. We can also see the details of the baseline in the DBA_SQL_PLAN_BASELINES view.


SQL>select sql_handle, plan_name from dba_sql_plan_baselines where signature = (select exact_matching_signature from v$sql where sql_id = 'btj8b1hx3cf0g');

SQL_HANDLE                    |PLAN_NAME
------------------------------|------------------------------
SQL_fe3bdbdf0b00f126          |SQL_PLAN_gwfyvvw5h1w96e68c0740

Even after running the SQL multiple times with different bind values, we do not get any new cursor. Also, the cursor is not marked as bind sensitive, as shown in the output below.


SQL>select sql_id, child_number, plan_hash_value , executions, buffer_gets, is_bind_sensitive, is_bind_aware,IS_SHAREABLE from v$sql where sql_text like '%GATHER_PLAN_STATISTICS%';

SQL_ID       |CHILD_NUMBER|PLAN_HASH_VALUE|EXECUTIONS|BUFFER_GETS|I|I|I
-------------|------------|---------------|----------|-----------|-|-|-
btj8b1hx3cf0g|           0|     3776485839|         5|        339|N|N|Y

So if we have a SQL plan baseline enabled for a SQL and only one plan accepted, we will not see the advantages of adaptive cursor sharing unless multiple plans are ACCEPTED, so that the optimizer can use a different plan as suggested by adaptive cursor sharing.
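
For completeness, a new unaccepted plan can be verified and accepted through DBMS_SPM. A minimal sketch, using the SQL handle shown earlier (this evolves the unaccepted plans for that handle; adapt the parameters to your own baseline):

SQL> variable report clob
SQL> exec :report := dbms_spm.evolve_sql_plan_baseline(sql_handle => 'SQL_fe3bdbdf0b00f126', verify => 'YES', commit => 'YES');

PL/SQL procedure successfully completed.

SQL> print report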

This covers most of the situations involving adaptive cursor sharing.

Hope this helps !!

References

https://blogs.oracle.com/optimizer/entry/how_do_adaptive_cursor_sharing

https://blogs.oracle.com/optimizer/entry/update_on_adaptive_cursor_sharing

https://docs.oracle.com/database/121/REFRN/GUID-455358F8-D657-49A2-B32B-13A1DC53E7D2.htm#REFRN10025

https://blogs.oracle.com/optimizer/entry/explain_adaptive_cursor_sharing_behavior_with_cursor_sharing_similar_and_force

Oracle 12c Learning Series: Automatic Table Recovery Using RMAN

Table recovery was possible in earlier releases as well. Until the previous release, if we wanted to recover a table, we had the following options:

  • Database point in time recovery (DBPITR)
  • Tablespace point in time recovery (TSPITR)
  • Flashback technology

In Oracle 12c, RMAN has been enhanced to perform recovery of individual tables. We have a new command in RMAN which automates the complete process of recovering a table. The new process does not affect the existing objects in the database and reduces the time and space required for table recovery.

How table recovery works

  • RMAN uses the database backups taken previously to recover tables or table partitions to a specified point in time. You need to provide the following inputs to RMAN:
    • The table or partition names to be recovered
    • The point in time until which the recovery should be done
    • Whether the recovered tables should be imported back into the original database
  • RMAN determines the correct backup based on your inputs
  • RMAN creates an auxiliary instance
  • RMAN restores the controlfile
  • RMAN restores the files necessary for obtaining an older image of the table. This may include the SYSTEM, SYSAUX and UNDO tablespaces and the tablespace containing the required table.
  • RMAN recovers the table or table partitions in the auxiliary instance until the specified time.
  • RMAN creates a Data Pump export dump file that contains the recovered objects
  • RMAN imports the recovered objects into the original database

Recovery Point-in-Time Options

You can recover a table or table partition to a past point in time by specifying one of the following three clauses (example commands are shown after the list):

  • UNTIL SCN – The system change number (SCN)
  • UNTIL TIME – A time, interpreted using the NLS_DATE_FORMAT environment variable
  • UNTIL SEQUENCE – The log sequence number and thread number
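
For illustration, hedged sketches of the two alternatives to UNTIL SCN are shown here (the table name, time, sequence and destination are placeholders, not values from the example that follows):

RMAN> recover table advaitd.t1 until time "to_date('26-07-2016 19:30:00','DD-MM-YYYY HH24:MI:SS')" auxiliary destination '/u01/app/oracle/oradata/aux';

RMAN> recover table advaitd.t1 until sequence 25 thread 1 auxiliary destination '/u01/app/oracle/oradata/aux';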

Steps for performing table recovery

Let’s take an example. I have a table T1 created in the DEO database.

 

[oracle@advait ~]$ sqlplus advaitd/oracle@deo

SQL> select * from T1;

      COL1
----------
         1
         2
         3

We can note down the current SCN while the table is still available.


SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    1243251

Following are the datafiles available in my database


SQL> select file#, name, con_id from v$datafile order by 3,1;  

     FILE# NAME                                                                                 CON_ID
---------- -------------------------------------------------------------------------------- ----------
         1 /u01/app/oracle/oradata/orcl/orcl/datafiles/system-01.dbf                                 0
         2 /u01/app/oracle/oradata/orcl/orcl/datafiles/sysaux-01.dbf                                 0
         3 /u01/app/oracle/oradata/orcl/orcl/datafiles/undotbs1-01.dbf                               0
         4 /u01/app/oracle/oradata/orcl/orcl/users.dbf                                               0
         5 /u01/app/oracle/oradata/orcl/orcl/datafiles/mgmt_depot.dbf                                0
         6 /u01/app/oracle/oradata/orcl/orcl/datafiles/mgmt.dbf                                      0
         7 /u01/app/oracle/oradata/orcl/orcl/datafiles/mgmt_ad4j.dbf                                 0

Let's drop the table T1.


SQL> drop table T1;

Table dropped.

We can use the command below to recover this table. Note that the SCN we are specifying is the one we noted down while the table still existed.

RMAN> recover table advaitd.t1 until scn 1243251 auxiliary destination '/u01/app/oracle/oradata/aux';

Let’s see what this command does exactly.

Step 1) The command creates a new ad hoc dummy (auxiliary) instance, as shown below

 


RMAN> recover table advaitd.t1 until scn 1243251 auxiliary destination '/u01/app/oracle/oradata/aux';

Starting recover at 26-JUL-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=107 device type=DISK
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified Point-in-Time

List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1

Creating automatic instance, with SID='jCvj'

It uses the controlfile autobackup to restore the controlfile, which is then used to mount the dummy instance.

Step 2) It then restores the controlfile for the auxiliary instance

The following memory script shows the commands for restoring the controlfile:


contents of Memory Script:
{
# set requested point in time
set until  scn 1243251;
# restore the controlfile
restore clone controlfile;
 
# mount the controlfile
sql clone 'alter database mount clone database';
 
# archive current online log 
sql 'alter system archive log current';
}

Step 3) Next, it restores the tablespaces required to perform the point-in-time recovery

The following memory script shows that it is going to restore the required datafiles (until SCN 1243251) into the auxiliary destination and switch the datafiles to the copies.
When we use UNTIL SCN, RMAN restores datafiles from a backup taken a little before this SCN.


contents of Memory Script:
{
# set requested point in time
set until  scn 1243251;
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile  1 to new;
set newname for clone datafile  3 to new;
set newname for clone datafile  2 to new;
set newname for clone tempfile  1 to new;
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  1, 3, 2;
 
switch clone datafile all;
}

Step 4) Recover the datafiles until the specified SCN

The memory script below shows that RMAN recovers these datafiles until the specified SCN. Once they are recovered, it opens the database in read-only mode.


contents of Memory Script:
{
# set requested point in time
set until  scn 1243251;
# online the datafiles restored or switched
sql clone "alter database datafile  1 online";
sql clone "alter database datafile  3 online";
sql clone "alter database datafile  2 online";
# recover and open database read only
recover clone database tablespace  "SYSTEM", "UNDOTBS1", "SYSAUX";
sql clone 'alter database open read only';
}

Step 5) Create an SPFILE for the auxiliary instance and mount the DB again

The following memory script tells us that RMAN is creating an SPFILE for the auxiliary instance, setting the CONTROL_FILES parameter in it, and mounting the auxiliary instance again.


contents of Memory Script:
{
   sql clone "create spfile from memory";
   shutdown clone immediate;
   startup clone nomount;
   sql clone "alter system set  control_files = 
  ''/u01/app/oracle/oradata/aux/DEO/controlfile/o1_mf_ct8t8hyq_.ctl'' comment=
 ''RMAN set'' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
# mount database
sql clone 'alter database mount clone database';
}

Step 6) Restore the USERS tablespace, where the table belongs

The following memory script shows that RMAN restores the USERS tablespace (datafile 4) and switches to the copy.


contents of Memory Script:
{
# set requested point in time
set until  scn 1243251;
# set destinations for recovery set and auxiliary set datafiles
set newname for datafile  4 to new;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  4;
 
switch clone datafile all;
}

Step 7) Recover the auxiliary database until the required SCN

The following memory script shows that RMAN recovers the USERS tablespace, along with the other tablespaces it restored previously, to the required SCN. It also opens the database with RESETLOGS.


contents of Memory Script:
{
# set requested point in time
set until  scn 1243251;
# online the datafiles restored or switched
sql clone "alter database datafile  4 online";
# recover and open resetlogs
recover clone database tablespace  "USERS", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}

Step 8) Create a directory for the Data Pump export and export the required table

The following memory script shows that RMAN creates a directory object for the export, and the log that follows shows the table being exported.


contents of Memory Script:
{
# create directory for datapump import
sql "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/u01/app/oracle/oradata/aux''";
# create directory for datapump export
sql clone "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/u01/app/oracle/oradata/aux''";
}
executing Memory Script

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/u01/app/oracle/oradata/aux''

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/u01/app/oracle/oradata/aux''

Performing export of tables...
   EXPDP> Starting "SYS"."TSPITR_EXP_jCvj_rjBm":  
   EXPDP> Estimate in progress using BLOCKS method...
   EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   EXPDP> Total estimation using BLOCKS method: 64 KB
   EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   EXPDP> . . exported "ADVAITD"."T1"                              5.062 KB       3 rows
   EXPDP> Master table "SYS"."TSPITR_EXP_jCvj_rjBm" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TSPITR_EXP_jCvj_rjBm is:
   EXPDP>   /u01/app/oracle/oradata/aux/tspitr_jCvj_52000.dmp
   EXPDP> Job "SYS"."TSPITR_EXP_jCvj_rjBm" successfully completed at Tue Jul 26 19:55:13 2016 elapsed 0 00:00:21
Export completed

Step 9) Import the table into the original database

The following memory script and log show RMAN importing the table into the original database. You can provide additional options to prevent RMAN from importing the table, or import options to remap the table to a different name or append the content to an existing table (see the example commands after the import log).


contents of Memory Script:
{
# shutdown clone before import
shutdown clone abort
}
executing Memory Script

Oracle instance shut down

Performing import of tables...
   IMPDP> Master table "SYS"."TSPITR_IMP_jCvj_tCzd" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_jCvj_tCzd":  
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   IMPDP> . . imported "ADVAITD"."T1"                              5.062 KB       3 rows
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   IMPDP> Job "SYS"."TSPITR_IMP_jCvj_tCzd" successfully completed at Tue Jul 26 19:55:24 2016 elapsed 0 00:00:04
Import completed
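
As mentioned in the step above, the import behaviour can be changed with extra options. A couple of hedged sketches (the new table name is a placeholder):

RMAN> recover table advaitd.t1 until scn 1243251 auxiliary destination '/u01/app/oracle/oradata/aux' remap table 'ADVAITD'.'T1':'T1_RESTORED';

RMAN> recover table advaitd.t1 until scn 1243251 auxiliary destination '/u01/app/oracle/oradata/aux' notableimport;

The first variant imports the recovered table under a new name; the second stops after creating the export dump file so that you can import it manually later.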

Step 10) Finally, RMAN will clean up and remove the auxiliary instance


Removing automatic instance
Automatic instance removed
auxiliary instance file /u01/app/oracle/oradata/aux/DEO/datafile/o1_mf_temp_csgwmp8s_.tmp deleted
auxiliary instance file /u01/app/oracle/oradata/aux/JCVJ_PITR_DEO/onlinelog/o1_mf_2_csgwo8fy_.log deleted
auxiliary instance file /u01/app/oracle/oradata/aux/JCVJ_PITR_DEO/onlinelog/o1_mf_1_csgwo7ht_.log deleted
auxiliary instance file /u01/app/oracle/oradata/aux/JCVJ_PITR_DEO/datafile/o1_mf_users_csgwo53j_.dbf deleted
auxiliary instance file /u01/app/oracle/oradata/aux/DEO/datafile/o1_mf_sysaux_csgwkm4b_.dbf deleted
auxiliary instance file /u01/app/oracle/oradata/aux/DEO/datafile/o1_mf_undotbs1_csgwkm4f_.dbf deleted
auxiliary instance file /u01/app/oracle/oradata/aux/DEO/datafile/o1_mf_system_csgwkm2x_.dbf deleted
auxiliary instance file /u01/app/oracle/oradata/aux/DEO/controlfile/o1_mf_csgwkf6f_.ctl deleted
auxiliary instance file tspitr_jCvj_52000.dmp deleted
Finished recover at 26-JUL-16

RMAN> 

You can verify whether you got the required table back.


SQL> select * from T1;

      COL1
----------
         1
         2
         3

As you can see, all the steps are performed by RMAN automatically once we execute a single RMAN command to recover the table.

This was a small test database, but for production databases we have to make sure that the auxiliary destination has enough space available for restoring the required tablespaces; otherwise the table recovery will fail.

Hope this helps !!

Monitoring transaction recovery

Sometimes we end up in a situation where a long-running transaction is not completing and we are not sure how much more time it is going to take. This happened with one of our DBAs, who found an MLOG bloated because of one orphan snapshot entry. Orphan entries are the ones where the actual site is not registered on the master (no entry in DBA_REGISTERED_SNAPSHOTS), but an entry for the MLOG still exists (an entry in DBA_SNAPSHOT_LOGS). This can happen if we drop a snapshot from a downstream database and it does not get cleaned up on the upstream database.

In the situation that I faced, the upstream team had an MLOG which was bloated to 18GB, and the MLOG also had an index which was bloated to 30GB (yes, I know it's bad :-)).

So they identified the orphan snapshot ID and wanted to purge it from the snapshot log to reduce the size of the MLOG (the plan was to then move the MLOG and rebuild the index after the purge).

They used the following procedure of DBMS_SNAPSHOT for purging the snapshot ID from the log

PROCEDURE PURGE_SNAPSHOT_FROM_LOG
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SNAPSHOT_ID                    BINARY_INTEGER          IN

They started the activity in the morning and monitored it until evening, but it was still not complete. I helped them track the progress by checking the real-time SQL monitoring report, which showed that the session had already read around 60GB and had used around 48GB of undo by that time. It was still not clear how the command had read 60GB worth of data when the MLOG size was only 18GB.

Also, the original base table was just 2GB.

At this point they wanted to kill the session. But killing the session would not help immediately, as it would have to perform a huge rollback as well (48GB of undo).

Since the command was not completing and had taken almost the entire shift, they decided to kill the session. The session was killed using “ALTER SYSTEM KILL SESSION ‘<sid>,<serial#>’ IMMEDIATE” and was marked for kill. However, the session was just marked as killed and was still holding its locks (as seen in the V$LOCK view). This was because the session was doing the rollback. We can monitor the progress of the rollback using the V$TRANSACTION view.

You can look at USED_UBLK in V$TRANSACTION to estimate how long it is going to take to complete the rollback.

SQL> SELECT a.used_ublk 
 FROM v$transaction a, v$session b
 WHERE a.addr = b.taddr AND b.sid = <SID>;

For example:

If USED_UBLK showed 29,900 twelve hours ago and now shows 22,900, it has taken 12 hours to roll back 7,000 undo blocks. At that rate, the remaining 22,900 blocks will take roughly another 39 hours to complete, depending on the types of transactions that are rolling back.

The rollback was very slow, as the session was doing serial recovery. Next we found the OS PID of the session and killed the OS process as well, so that recovery could happen in the background via SMON. Within a few minutes PMON performed the cleanup and the lock was released.

The rollback continued in the background, and this is faster than the rollback performed by the session. If we kill the session and its shadow process at the OS level, SMON picks up the rollback and can perform parallel rollback, which is faster.

V$FAST_START_TRANSACTIONS & X$KTUXE

We can monitor the progress of the rollback in the V$FAST_START_TRANSACTIONS view.

V$FAST_START_TRANSACTIONS contains one row for each of the transactions that Oracle is recovering in parallel.

The FAST_START_PARALLEL_ROLLBACK parameter determines the maximum number of processes that may be used for performing parallel rollback.

In fast-start parallel rollback, the background process SMON acts as a coordinator and rolls back a set of transactions in parallel using multiple server processes.

Fast-start parallel rollback is mainly useful when a system has transactions that run a long time before committing, especially parallel INSERT, UPDATE and DELETE operations. When SMON discovers that the amount of recovery work is above a certain threshold, it automatically begins parallel rollback by dispersing the work among several parallel processes.

The following queries are available to monitor the progress of the transaction recovery

set linesize 100 
 alter session set NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS'; 
 select usn, state, undoblockstotal "Total", undoblocksdone "Done", undoblockstotal-undoblocksdone "ToDo", 
 decode(cputime,0,'unknown',sysdate+(((undoblockstotal-undoblocksdone) / (undoblocksdone / cputime)) / 86400)) 
 "Estimated time to complete" 
 from v$fast_start_transactions;

Run the above query several times in a row; this will give you a good idea of how SMON is progressing.

  • In some versions the cputime does not work (always 0), hence the estimated completion time will not be displayed
  • In some cases the v$fast_start_transactions view will not work. If this is the case then you can query the internal data dictionary view x$ktuxe

The ‘ktuxesiz’ column represents the remaining number of undo blocks required for rollback:

select ktuxeusn, to_char(sysdate,'DD-MON-YYYY HH24:MI:SS') "Time", ktuxesiz, ktuxesta 
 from x$ktuxe 
 where ktuxecfl = 'DEAD';

I was not able to see the recovery progress using V$FAST_START_TRANSACTIONS, but I was able to see the progress in the X$KTUXE view.

 


select ktuxeusn, to_char(sysdate,'DD-MON-YYYY HH24:MI:SS') "Time", ktuxesiz, ktuxesta 
from x$ktuxe 
where ktuxecfl = 'DEAD'; 

  KTUXEUSN|Time                      |  KTUXESIZ|KTUXESTA
----------|--------------------------|----------|----------------
      2167|01-AUG-2016 12:05:14      |   5260156|ACTIVE
SQL>/

  KTUXEUSN|Time                      |  KTUXESIZ|KTUXESTA
----------|--------------------------|----------|----------------
      2167|01-AUG-2016 12:05:15      |   5259945|ACTIVE
SRW1NA>/

  KTUXEUSN|Time                      |  KTUXESIZ|KTUXESTA
----------|--------------------------|----------|----------------
      2167|01-AUG-2016 12:05:15      |   5259854|ACTIVE

..
..
..
<After 2-3 hours>

  KTUXEUSN|Time                      |  KTUXESIZ|KTUXESTA
----------|--------------------------|----------|----------------
      2167|01-AUG-2016 16:31:47      |    612697|ACTIVE

Speeding up recovery

We can further improve the speed of recovery by taking the following steps.

1) There are cases where parallel transaction recovery is not as fast as serial transaction recovery, because the PQ slaves interfere with each other. To check the parallel recovery processes and their state, query:

select * from v$fast_start_servers;

The STATE column shows the state of each server, either IDLE or RECOVERING. If only one process is in state RECOVERING while the other processes are in state IDLE, then you should disable parallel transaction recovery. How to do this is outlined in the following note:

Note 238507.1: How to Disable Parallel Transaction Recovery When Parallel Txn Recovery is Active

2) If all the rows show RECOVERING in the STATE column of v$fast_start_servers, then you will benefit from adding more processes for the recovery.

You can do so by setting the FAST_START_PARALLEL_ROLLBACK parameter. Set it to HIGH if you want to speed up the recovery.

Following are the possible values of this parameter:

  • FALSE – Parallel rollback is disabled
  • LOW – Limits the maximum degree of parallelism to 2 * CPU_COUNT
  • HIGH – Limits the maximum degree of parallelism to 4 * CPU_COUNT

Note that if you change the value of this parameter, transaction recovery will be stopped and restarted with the new implied degree of parallelism. So if you have already completed more than half of the rollback, it may not be worth changing it; if you do change it, recovery starts again from the beginning.
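
A hedged sketch of the change (in my understanding the parameter can be set with ALTER SYSTEM; in our case we also bounced the database, as described below):

SQL> alter system set fast_start_parallel_rollback = HIGH;

System altered.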

3) Increase the parameter ‘_cleanup_rollback_entries’

This parameter determines the number of undo entries to apply per transaction cleanup. The default value is 100. You can change it to, say, 400. This parameter cannot be changed dynamically, so the database will need to be restarted for the change to take effect.
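
A hedged sketch of the change (underscore parameters should normally be changed only under Oracle Support guidance; 400 is just the example value used above). The value is written to the spfile and takes effect after the restart:

SQL> alter system set "_cleanup_rollback_entries" = 400 scope=spfile;

System altered.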

In our specific situation, we knew that a huge rollback needed to be performed and we were monitoring the rollback progress from the beginning. So we decided at the very start to set FAST_START_PARALLEL_ROLLBACK to HIGH and bounce the DB. This improved the recovery speed right from the beginning.

References:

Troubleshooting Database Transaction Recovery (Doc ID 1494886.1)

Database Hangs Because SMON Is Taking 100% CPU Doing Transaction Recovery (Doc ID 414242.1)

SMON: Parallel transaction recovery tried (Doc ID 1458738.1)

Note 238507.1: How to Disable Parallel Transaction Recovery When Parallel Txn Recovery is Active

Oracle 12c Learning Series: In-database Archiving and Temporal Validity

One of the major challenges faced by an Oracle DBA is how to deal effectively with historical data. Today, if we consider the database tables of an enterprise, the data in a table often goes back several years and most of it is inactive. The challenge is how to archive this data and still make our queries run efficiently.

If we choose to archive old data outside of the database on tape, the cost of storing the data is reduced, but the older data is not available for one-off reports. Also, regulatory compliance often requires that older historical data be accessible at all times.

If we choose to maintain old data inside the database, the size of the database grows, so does the size of backups, and the storage cost increases. Not only that, queries on bigger tables do not run efficiently, so there is a performance cost associated with queries, DML and DDL (for example, index creation and rebuilds).

In 11g, older archived data is handled using the following two solutions:

  1. Partitioning – Distinguish data as old or new based on the creation date or some other date attribute in the table, and partition the table on that column. The DBA can manually move older partitions to low-cost storage.
  2. Flashback data archive (the Total Recall feature in 11g) – This feature automatically tracks changes to data over a period of time and maintains archived transactional data in a different tablespace based on a RETENTION parameter. Historical data can be queried using the flashback query AS OF clause. Archived transactional data which has aged beyond the retention period is purged automatically.

Oracle 12c provides additional solutions to the above challenges. We have two ways to handle archival data in Oracle 12c:

  1. In-database archiving
  2. Temporal validity

In-Database Archiving

Until now, it was very difficult to distinguish active data from inactive data at the row level. In Oracle 12c, we can identify active vs inactive data at the row level. This is made possible by attaching a property to every row of a table to mark it as inactive; if a row is not marked inactive, it is an active row. Setting such a property at the row level helps separate active and inactive data in a table and lets you archive rows by marking them inactive. The key thing to understand here is that the data remains in the table and you can compress it, but to applications this (inactive) part of the data remains invisible. We also have a session-level parameter which enables us to see active as well as inactive data (if required).

In-database archiving is enabled at the table level, either during table creation or later by altering the table. Following is an example of creating a table with in-database archiving.


SQL> create table sales_archive
 2 (year    number(4),
 3 product  varchar2(10),
 4 amt      number(10,2))
 5 row archival;

Table created.

When you create a table with the “ROW ARCHIVAL” clause (line 5 in the above example), Oracle automatically adds an extra column, ORA_ARCHIVE_STATE, to the table. But this column is added as a hidden column, so if you describe the table, you won't see it.


SQL> desc sales_archive
 Name                    Null?    Type
 ----------------------- -------- ----------------
 YEAR                             NUMBER(4)
 PRODUCT                          VARCHAR2(10)
 AMT                              NUMBER(10,2)

But you can check whether the column has been added using the DBA_TAB_COLS view.


SQL> select table_name, column_name, column_id, HIDDEN_COLUMN from dba_tab_cols where table_name = 'SALES_ARCHIVE';

TABLE_NAME           COLUMN_NAME          COLUMN_ID  HID
-------------------- -------------------- ---------- ---
SALES_ARCHIVE        ORA_ARCHIVE_STATE               YES
SALES_ARCHIVE        YEAR                          1 NO
SALES_ARCHIVE        PRODUCT                       2 NO
SALES_ARCHIVE        AMT                           3 NO

Similarly, you can enable an existing table for row archival using alter table as shown below.


SQL> alter table sales row archival;

Table altered.

SQL> select table_name, column_name, column_id, HIDDEN_COLUMN from dba_tab_cols where table_name = 'SALES';

TABLE_NAME           COLUMN_NAME          COLUMN_ID  HID
-------------------- -------------------- ---------- ---
SALES                YEAR                          1 NO
SALES                PRODUCT                       2 NO
SALES                AMT                           3 NO
SALES                ORA_ARCHIVE_STATE               YES

The ORA_ARCHIVE_STATE column is the key column for distinguishing active data from inactive data. It takes the value 0 for active rows and a non-zero value (conventionally 1) for inactive rows. By default, a newly inserted row is active and has the value 0. When rows are rarely accessed and no longer updated, they can be considered inactive and are marked with the value 1 (or any value other than zero) in the ORA_ARCHIVE_STATE column.

If you want to check the value of ORA_ARCHIVE_STATE column, you have to select the column explicitly in your select statement.


SQL> select year, product, amt, ORA_ARCHIVE_STATE from sales_archive;

      YEAR PRODUCT           AMT ORA_ARCHIV
---------- ---------- ---------- ----------
      2001 A                 100 0
      2002 B                 200 0
      2003 C                 300 0
      2004 D                 400 0
      2005 E                 500 0
      2006 F                 600 0
      2007 G                 700 0

7 rows selected.

As you can see, we have a value of 0 in all the selected rows, which means all this data is active.

Note that if rows are rarely accessed, the ORA_ARCHIVE_STATE column will not be set to 1 automatically. The DBA or the owner of the data has to decide which rows are no longer being accessed and are inactive, and has to manually update the value of the ORA_ARCHIVE_STATE column.

Following DML shows how to mark the data inactive

SQL> update sales_archive set ORA_ARCHIVE_STATE = DBMS_ILM.ARCHIVESTATENAME(1) where year = 2001;

1 row updated.

SQL> commit;

Commit complete.

In the above DML, we used the DBMS_ILM.ARCHIVESTATENAME function. This function is the documented way of setting the value of the ORA_ARCHIVE_STATE column. You can also directly update this column to any non-zero value and the result will be the same. In the above DML, we updated the one record for year 2001 and marked it as inactive (or archived).

If you query the table again, you won’t see the record that we updated above.


SQL> select year, product, amt, ORA_ARCHIVE_STATE from sales_archive;

      YEAR PRODUCT           AMT ORA_ARCHIV
---------- ---------- ---------- ----------
      2002 B                 200 0
      2003 C                 300 0
      2004 D                 400 0
      2005 E                 500 0
      2006 F                 600 0
      2007 G                 700 0

6 rows selected.

This is because, by default, Oracle only shows active records. Any records marked inactive will not be shown. Oracle does this by adding a simple filter to your queries, which we can see in the explain plan below.

SQL> explain plan for
 2 select year, product, amt, ORA_ARCHIVE_STATE from sales_archive;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------
Plan hash value: 4043476784

----------------------------------------------------------------------------------
| Id | Operation         | Name          | Rows | Bytes | Cost (%CPU) | Time     |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT   |               | 6    | 12210 | 3 (0)       | 00:00:01 |
|* 1 | TABLE ACCESS FULL | SALES_ARCHIVE | 6    | 12210 | 3 (0)       | 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("SALES_ARCHIVE"."ORA_ARCHIVE_STATE"='0')

Note
-----
 - dynamic statistics used: dynamic sampling (level=2)

17 rows selected.

You can view archived records again by setting the ROW ARCHIVAL VISIBILITY parameter to ALL at the session level. By default, this parameter is set to ACTIVE, which is why we see only active records. The following shows the same:


SQL> alter session set row archival visibility = ALL;

Session altered.

SQL> select year, product, amt, ORA_ARCHIVE_STATE from sales_archive;

YEAR       PRODUCT           AMT ORA_ARCHIV
---------- ---------- ---------- ----------
      2001 A                 100 1
      2002 B                 200 0
      2003 C                 300 0
      2004 D                 400 0
      2005 E                 500 0
      2006 F                 600 0
      2007 G                 700 0

7 rows selected.

After setting row archival visibility to ALL, we are able to see the archived rows as well. This is one great advantage of in-database archiving: if some ad hoc report needs access to archived records, we can simply set row archival visibility to ALL at the beginning of the report and get the desired result, then set the visibility back to ACTIVE afterwards.

SQL> alter session set row archival visibility = ACTIVE;

Session altered.

You can also restore archived data back to active by running an UPDATE on the archived rows using the DBMS_ILM.ARCHIVESTATENAME function, this time passing 0 as the value. You need to set ROW ARCHIVAL VISIBILITY to ALL before running the UPDATE, otherwise the archived rows won't be visible to the statement.
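As a minimal sketch, restoring the year 2001 record that we archived earlier would look like the following (using the same sales_archive table as above):

SQL> alter session set row archival visibility = ALL;

SQL> update sales_archive set ORA_ARCHIVE_STATE = DBMS_ILM.ARCHIVESTATENAME(0) where year = 2001;

SQL> commit;

SQL> alter session set row archival visibility = ACTIVE;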

You can disable row archival for a table by using ALTER TABLE … NO ROW ARCHIVAL. When you disable row archival, the ORA_ARCHIVE_STATE column is dropped immediately and all the data becomes visible to your SELECT statements:

SQL> alter table sales no row archival;

Table altered.

Note that if you use CTAS (CREATE TABLE AS SELECT) to create a new table from an existing table, the new table will contain both active and inactive data. In-database archiving only applies to SELECTs and DMLs, not to DDLs.

Temporal Validity and Temporal History

In in-database archiving, we distinguished active rows from non-active rows using the row archival column (ORA_ARCHIVE_STATE). You can achieve similar functionality using temporal validity. Many applications today have data that depends on dates relevant to the underlying business. For example, an insurance application records an insurance start date and an insurance end date; together they represent the valid time during which the insurance policy is active for a customer. Such date attributes, which denote the valid time of a record, are called temporal validity. Temporal validity lets you keep active and inactive data together in the same table while providing all the benefits of archiving non-active data. Temporal validity support helps in cases where it's critical to know when certain data becomes valid and when it becomes invalid.

Similar to the business-relevant date attributes an application keeps in its tables, Oracle provides temporal validity by creating its own date attributes (a start date and an end date) that represent the valid-time dimension during which a record is active. The application user or DBA decides which records are active or inactive by updating the start date and end date attributes of each record. Records whose end date has passed are inactive.

By applying an implicit filter on the valid-time dimension, queries can show only rows that are currently valid or that will be valid in the future, and hide rows whose facts are not currently valid.

Temporal History

Before we go further, it's worth understanding temporal history and how it differs from temporal validity. Temporal history was called Flashback Data Archive (FDA) in previous releases.

Temporal validity dates and times are different from the dates and times when the record is created. The date and time when the record was created in the database is the temporal history. For example, if we have a new customer whose insurance policy starts today, we might create the record in the application tomorrow or a week later but still set the insurance start date to today. In this case the temporal validity date is today, but since the record is created a week later, the temporal history date will be a week later.

We can use temporal history to retrieve past data, and we can combine temporal history with temporal validity to retrieve rows that were valid in the past.

Defining temporal validity

You define a valid-time dimension at table creation time, or later by altering the table. To create the valid-time dimension, you specify the PERIOD FOR clause in the CREATE TABLE statement.

The following example creates a table with a valid-time dimension by explicitly defining the two date-time columns USER_TIME_START and USER_TIME_END:


SQL> create table policy
2 (cust_id         number,
3 policy_no        number,
4 policy_type      varchar2(20),
5 user_time_start  date,
6 user_time_end    date,
7 period for user_time (user_time_start, user_time_end));

Table created.

In the above example, we explicitly specified the valid-time dimension columns in the CREATE TABLE statement. But even if we don't specify the column names, the valid-time clause creates the required columns for us:


SQL> create table policy
2 (cust_id    number,
3 policy_no   number,
4 policy_type varchar2(20),
5 period for user_time);

Table created.

A valid-time dimension, represented by the new PERIOD FOR clause, consists of two date-time columns. If you don't specify the columns explicitly, as in the second example, Oracle automatically creates two hidden columns whose names start with the prefix provided in the PERIOD FOR clause and end with "start" and "end". So in the second example, the two hidden columns created are user_time_start and user_time_end (the same names as in the first example).

The only difference between the first example and the second is the visibility of the valid-time dimension columns. In the first table the columns are visible because we created them explicitly, whereas in the second example the columns are hidden.
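If you want to confirm which columns were created as hidden, a quick sketch is to query the data dictionary (USER_TAB_COLS exposes a HIDDEN_COLUMN flag):

SQL> select column_name, hidden_column from user_tab_cols where table_name = 'POLICY';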

If the columns are created hidden, you need to specify them explicitly in SELECTs, UPDATEs, and INSERTs. The following example shows an insert into the policy table where the valid-time dimension columns were created implicitly:


SQL> insert into policy (cust_id, policy_no, policy_type, user_time_start, user_time_end) values (123, 3424, 'HEALTH',sysdate, null);

1 row created.

SQL> select * from policy;

   CUST_ID POLICY_NO  POLICY_TYPE
---------- ---------- --------------------
       123 3424       HEALTH

SQL> select cust_id, policy_no, policy_type, user_time_start from policy;

   CUST_ID POLICY_NO  POLICY_TYPE          USER_TIME_START
---------- ---------- -------------------- ----------------------------------------
       123 3424       HEALTH               20-JUN-16 12.33.53.000000 AM +05:30

As you can see, "select *" does not show the hidden valid-time dimension columns. In the second statement we select them explicitly and see the values.

Now that we have seen how to create valid-time columns in a table to separate active data from inactive data, let's look at how to filter on these valid-time columns and select only valid data. There are two ways to do this:

  • PERIOD FOR clause
  • Using DBMS_FLASHBACK_ARCHIVE procedure

PERIOD FOR clause

To filter valid data, you can use a SELECT statement with the new PERIOD FOR clause. One set of rows is valid based on the start date and end date of the valid-time dimension, and another set of rows in the same table is invalid because it falls outside that start and end time. Both sets of rows reside in the same table; however, by controlling visibility, you can limit queries and DMLs to see only the valid data.

For each record that you insert into the table, you specify a valid-time start date and end date. These dates represent whether the data is active and are entered manually by end users. The date when the record is actually inserted into the table is the transaction time, which Oracle 12c calls temporal history.

You can use either the AS OF PERIOD FOR clause or the VERSIONS PERIOD FOR clause to display valid data. AS OF PERIOD FOR is used when we want to see data that was valid as of a specific date, whereas VERSIONS PERIOD FOR is used when we want to see data that was valid between two dates (a range).

The following example displays all policies that were active as of 01-JAN-2016:

select * from policy AS OF PERIOD FOR user_time to_date('01-JAN-2016','DD-MON-YYYY');

Here is another example, which shows data that was valid between 01-JAN-2016 and 01-FEB-2016:

select * from policy VERSIONS PERIOD FOR user_time between to_date('01-JAN-2016','DD-MON-YYYY') and to_date('01-FEB-2016','DD-MON-YYYY');

DBMS_FLASHBACK_ARCHIVE

You can also use the DBMS_FLASHBACK_ARCHIVE package to control which valid data is shown. Its ENABLE_AT_VALID_TIME procedure sets the visibility of data at the session level; once the visibility is set, SELECT statements on a valid-time temporal table show only the data that was valid at the chosen time.

In the following example, we set the visibility of data as of a given time, 01-JAN-2016. This means any SELECT statements or DMLs in the same session will see data that was valid as of 01-JAN-2016:


SQL> exec DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME('ASOF',to_date('01-JAN-2016','DD-MON-YYYY'));

PL/SQL procedure successfully completed.

You can also set the visibility to the current time, which shows the data that is currently valid:


SQL> exec DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME('CURRENT');

PL/SQL procedure successfully completed.
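
To see all rows again regardless of their validity, the same procedure accepts a level of 'ALL' (a small sketch):

SQL> exec DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME('ALL');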

Note that this visibility setting affects only SELECTs and DMLs; it doesn't affect DDL, which always sees the complete data (valid as well as invalid). Also, in-database archiving and temporal validity are NOT supported in a CDB; they are supported only in a non-CDB database.

Hope this helps !!

Fixing SQL plans on ADG using SQL Profiles

With an Active Data Guard (ADG) setup, many read-only applications run on ADG, and sometimes the SQLs used by these applications run into bad plans. Because ADG is a read-only database, it's not possible to create a baseline or a profile on ADG directly.

One way to get a good plan for SQLs running on ADG is to run them on the primary first, fix the plan by creating a profile or baseline there, and then move the workload back to ADG. In practice, I have seen many times that dev teams are not very flexible about changing the configuration to run the queries on the primary. Either their configuration setup is complex and takes time and effort to change, or, in bigger enterprises, they have a central configuration file which, when changed, changes the configuration for the entire region; in that case the complete software stack for the region would run on the primary, not just one database.

Another way to make the queries run on the primary is to point the ADG CNAME to the primary. That way all the applications which are supposed to run on ADG start running on the primary. The downside is the increased load on the primary, since all of the ADG applications now connect to it. That is too big a change just to fix one SQL on ADG.

This article is about fixing plans on ADG using a SQL profile, without making any other change. We don't have to run the query on the primary to fix the plan.

Fixing SQL plan on ADG

In one of my old articles, "Fixing SQL Plans: The hard way – Part 1", I described the DBMS_SQLTUNE.IMPORT_SQL_PROFILE procedure. This procedure takes, as input, the internal SQL hints that the optimizer understands and creates a profile to fix the plan.

In that method, the SQL text is provided as input, but it is obtained by selecting the SQL_TEXT column from either the V$SQLTEXT view or the DBA_HIST_SQLTEXT view.

The following is partial code from the PL/SQL block I used to create the profile:

for sql_rec in (
 select t.sql_id, t.sql_text
 from dba_hist_sqltext t, dba_hist_sql_plan p
 where t.sql_id = p.sql_id
 and p.sql_id = '&SQL_ID_TO_FIX'
 and p.plan_hash_value = &BAD_PLAN_HASH_VALUE
 and p.parent_id is null
)
loop
 DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
 sql_text => sql_rec.sql_text,
 profile => ar_profile_hints,
 name => 'PROFILE_33fndgzsas09k');

end loop;

Why can't we use this method for ADG?

We cannot use this method as-is for ADG. You will find the required SQL text in the V$SQLTEXT view on ADG, but you cannot create a SQL profile there because it's a read-only database. Also, you cannot create the profile on the primary this way, because the SQL text is not present on the primary.

So, how do we fix this?

Method 1:

One method I thought of for fixing the SQL is to create a database link from the primary to ADG (say adg_db_link) and use that DB link to get the SQL text from the standby. The above PL/SQL code then looks like the block below. Note that I have replaced the DBA_HIST* tables with V$ views, because the ADG data is present only in the V$ views; the DBA_HIST* tables contain data captured from the primary database as part of AWR snapshots.

for sql_rec in (
 select t.sql_id, t.sql_text
 from v$sqltext@adg_db_link t, v$sql_plan@adg_db_link p
 where t.sql_id = p.sql_id
 and p.sql_id = '&SQL_ID_TO_FIX'
 and p.plan_hash_value = &BAD_PLAN_HASH_VALUE
 and p.parent_id is null
)
loop
 DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
 sql_text => sql_rec.sql_text,
 profile => ar_profile_hints,
 name => 'PROFILE_33fndgzsas09k');

end loop;

You can run the complete PL/SQL block on the primary; the DB link fetches the required SQL text from ADG, and the SQL profile is created on the primary.

Once this SQL profile is created, it should be picked up by the SQL query on ADG. Note that you need to purge the SQL from the shared pool of ADG so that the SQL goes through a hard parse and picks up the profile.

Method 2:

Another method is to pass the SQL text directly as input to the DBMS_SQLTUNE.IMPORT_SQL_PROFILE procedure. This is the method I implemented, and it worked for me.

Let's take an example.

The following is the SQL text that I want to fix. The actual SQL in my environment was completely different, very big and complex, and I cannot share its real text, so I simplified it and changed the table and column names for readability.

select * from ( 
          select this_.col1 as col1,
                 this_.col2 as col2, 
                 this_.col3 as col3, 
                 this_.col4 as col4
          from TAB1 this_ 
          inner join TAB2 compaddres3_ on this_.col2 = compaddres3_.col2 
          inner join TAB3 compshipme1_ on this_.col1=compshipme1_.col1
          where this_.col1 in (:1) 
          and   this_.col2 in (:2 , :3) 
          and   compshipme1_.col1 in (:4 )
          and   this_.col8 = 'OPEN'
          and   this_.col5>=:5 
          and   this_.col5<=:6 
          order by this_.col1 asc ) 
where rownum <= :7

Assuming that the existing plan in the ADG database is bad, we need to find a good plan for this SQL and create a profile so that the new, good plan is picked up on ADG.
We are going to do that without running this SQL on the primary and without pointing the ADG CNAME to the primary. We are also not going to make any code changes to add hints.

Following are the steps to fix the plan on ADG

1) Get a good plan for this SQL

You can add hints to the SQL text and run an explain plan on the primary to check whether the plan looks good. Alternatively, if you want to be sure, you can run the hinted SQL on ADG to confirm that it runs efficiently.

I used a few index hints in my SQL to make sure the correct index gets picked, and then did an "explain plan" for the hinted SQL:

explain plan for
select * from ( 
          select /*+ index(tab1 I_UNIQ_COL2) */ this_.col1 as col1,
                 this_.col2 as col2, 
                 this_.col3 as col3, 
                 this_.col4 as col4
          from TAB1 this_ 
          inner join TAB2 compaddres3_ on this_.col2 = compaddres3_.col2 
          inner join TAB3 compshipme1_ on this_.col1=compshipme1_.col1
          where this_.col1 in (:1) 
          and   this_.col2 in (:2 , :3) 
          and   compshipme1_.col1 in (:4 )
          and   this_.col8 = 'OPEN'
          and   this_.col5>=:5 
          and   this_.col5<=:6 
          order by this_.col1 asc ) 
where rownum <= :7

I got the following plan after using the required hints:

Plan hash value: 2524091007

-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name         | Rows | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------
| 0   | SELECT STATEMENT                          |              | 134  | 1524K | 4134   (3) | 00:00:50 |       |       |
|* 1  |  COUNT STOPKEY                            |              |      |       |            |          |       |       |
| 2   |   VIEW                                    |              | 134  | 1524K | 4134   (3) | 00:00:50 |       |       |
|* 3  |    SORT ORDER BY STOPKEY                  |              | 134  | 71154 | 4134   (3) | 00:00:50 |       |       |
|* 4  |     FILTER                                |              |      |       |            |          |       |       |
| 5   |      NESTED LOOPS                         |              | 134  | 71154 | 4133   (3) | 00:00:50 |       |       |
| 6   |       NESTED LOOPS                        |              | 134  | 43550 | 3731   (3) | 00:00:45 |       |       |
| 7   |        TABLE ACCESS BY INDEX ROWID        | TAB1         | 1    | 22    | 1      (0) | 00:00:01 |       |       |
|* 8  |         INDEX UNIQUE SCAN                 | I_UNIQ_COL2  | 1    |       | 0      (0) | 00:00:01 |       |       |
| 9   |        TABLE ACCESS BY GLOBAL INDEX ROWID | TAB2         | 134  | 40602 | 3730   (3) | 00:00:45 | ROWID | ROWID |
|* 10 |         INDEX RANGE SCAN                  | I_TAB2_COL2  | 242  |       | 3507   (3) | 00:00:43 |       |       |
| 11  |        TABLE ACCESS BY GLOBAL INDEX ROWID | TAB3         | 1    | 206   | 3      (0) | 00:00:01 | ROWID | ROWID |
|* 12 |         INDEX RANGE SCAN                  | PK_TAB3_COL1 | 1    |       | 2      (0) | 00:00:01 |       |       |
-------------------------------------------------------------------------------------------------------------------------

2) Obtain outline hints for SQL

Once you have run explain plan for the SQL, you can use FORMAT=>'ADVANCED' to get the necessary outline hints:

select * from table(dbms_xplan.display(format=>'advanced'));

/*+
 BEGIN_OUTLINE_DATA
 USE_NL(@"SEL$54D64B3C" "COMPADDRES3_"@"SEL$2")
 USE_NL(@"SEL$54D64B3C" "THIS_"@"SEL$2")
 LEADING(@"SEL$54D64B3C" "COMPSHIPME1_"@"SEL$3" "THIS_"@"SEL$2" "COMPADDRES3_"@"SEL$2")
 INDEX_RS_ASC(@"SEL$54D64B3C" "COMPADDRES3_"@"SEL$2" ("TAB2"."COL2"))
 INDEX_RS_ASC(@"SEL$54D64B3C" "THIS_"@"SEL$2" ("TAB1"."COL1" "TAB1"."COL2" "TAB1"."COL3" "TAB1"."COL4"))
 INDEX_RS_ASC(@"SEL$54D64B3C" "COMPSHIPME1_"@"SEL$3" ("TAB3"."COL1"))
 NO_ACCESS(@"SEL$1" "from$_subquery$_001"@"SEL$1")
 OUTLINE(@"SEL$2")
 OUTLINE(@"SEL$3")
 MERGE(@"SEL$2")
 OUTLINE(@"SEL$64EAE176")
 OUTLINE(@"SEL$4")
 OUTLINE_LEAF(@"SEL$1")
 MERGE(@"SEL$64EAE176")
 OUTLINE_LEAF(@"SEL$54D64B3C")
 ALL_ROWS
 OPT_PARAM('_optimizer_extended_cursor_sharing_rel' 'none')
 DB_VERSION('11.2.0.4')
 OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
 IGNORE_OPTIM_EMBEDDED_HINTS
 END_OUTLINE_DATA
 */

3) Use the above outline data to create a profile for the query running on ADG

The SQL profile has to be created on the primary, since we cannot create a SQL profile on ADG (a read-only database).
The method described previously in "Fixing SQL Plans: The hard way – Part 1" uses the SQL ID as input and queries V$SQLTEXT or the DBA_HIST* tables to get the required SQL text, which is then supplied to the DBMS_SQLTUNE.IMPORT_SQL_PROFILE procedure.

But since we don't have the SQL text on the primary, we supply the SQL text directly to the DBMS_SQLTUNE.IMPORT_SQL_PROFILE procedure, as shown below.

Note – If you have single quotes in your SQL text, you need to change each of them to two single quotes; the first single quote works as an escape character.

In my SQL text I have a quoted literal, so I have to double its single quotes as shown below:

select * from ( 
       select this_.col1 as col1,
              this_.col2 as col2, 
              this_.col3 as col3, 
              this_.col4 as col4
       from TAB1 this_ 
       inner join TAB2 compaddres3_ on this_.col2 = compaddres3_.col2 
       inner join TAB3 compshipme1_ on this_.col1=compshipme1_.col1
       where this_.col1 in (:1) 
       and   this_.col2 in (:2 , :3) 
       and   compshipme1_.col1 in (:4 )
       and   this_.col8 = ''OPEN''
       and   this_.col5>=:5 
       and   this_.col5<=:6 
       order by this_.col1 asc ) 
where rownum <= :7

The above SQL will be given as input to the DBMS_SQLTUNE.IMPORT_SQL_PROFILE procedure.
Similarly, if there are single quotes in the outline hints we extracted, we need to change them to two single quotes as shown below.

We also need to enclose every outline hint in single quotes.

'BEGIN_OUTLINE_DATA',
'USE_NL(@"SEL$54D64B3C" "COMPADDRES3_"@"SEL$2")',
'USE_NL(@"SEL$54D64B3C" "THIS_"@"SEL$2")',
'LEADING(@"SEL$54D64B3C" "COMPSHIPME1_"@"SEL$3" "THIS_"@"SEL$2" "COMPADDRES3_"@"SEL$2")',
'INDEX_RS_ASC(@"SEL$54D64B3C" "COMPADDRES3_"@"SEL$2" ("TAB2"."COL2"))',
'INDEX_RS_ASC(@"SEL$54D64B3C" "THIS_"@"SEL$2" ("TAB1"."COL1" "TAB1"."COL2" "TAB1"."COL3" "TAB1"."COL4"))',
'INDEX_RS_ASC(@"SEL$54D64B3C" "COMPSHIPME1_"@"SEL$3" ("TAB3"."COL1"))',
'NO_ACCESS(@"SEL$1" "from$_subquery$_001"@"SEL$1")',
'OUTLINE(@"SEL$2")',
'OUTLINE(@"SEL$3")',
'MERGE(@"SEL$2")',
'OUTLINE(@"SEL$64EAE176")',
'OUTLINE(@"SEL$4")',
'OUTLINE_LEAF(@"SEL$1")',
'MERGE(@"SEL$64EAE176")',
'OUTLINE_LEAF(@"SEL$54D64B3C")',
'ALL_ROWS',
'OPT_PARAM(''_optimizer_extended_cursor_sharing_rel'' ''none'')',
'DB_VERSION(''11.2.0.4'')',
'OPTIMIZER_FEATURES_ENABLE(''11.2.0.4'')',
'IGNORE_OPTIM_EMBEDDED_HINTS',
'END_OUTLINE_DATA'

Note that we have enclosed the hints in single quotes and also put a comma at the end of every hint except the last hint.

So now we have the outline hints and the SQL text that will be provided as input to the DBMS_SQLTUNE.IMPORT_SQL_PROFILE procedure.

4) Create the required profile on the primary

The following code creates the SQL profile on the primary using the inputs prepared above.

declare
ar_profile_hints sys.sqlprof_attr;
begin
  ar_profile_hints := sys.sqlprof_attr(
     'BEGIN_OUTLINE_DATA',
     'USE_NL(@"SEL$54D64B3C" "COMPADDRES3_"@"SEL$2")',
     'USE_NL(@"SEL$54D64B3C" "THIS_"@"SEL$2")',
     'LEADING(@"SEL$54D64B3C" "COMPSHIPME1_"@"SEL$3" "THIS_"@"SEL$2" "COMPADDRES3_"@"SEL$2")',
     'INDEX_RS_ASC(@"SEL$54D64B3C" "COMPADDRES3_"@"SEL$2" ("TAB2"."COL2"))',
     'INDEX_RS_ASC(@"SEL$54D64B3C" "THIS_"@"SEL$2" ("TAB1"."COL1" "TAB1"."COL2" "TAB1"."COL3" "TAB1"."COL4"))',
     'INDEX_RS_ASC(@"SEL$54D64B3C" "COMPSHIPME1_"@"SEL$3" ("TAB3"."COL1"))',
     'NO_ACCESS(@"SEL$1" "from$_subquery$_001"@"SEL$1")',
     'OUTLINE(@"SEL$2")',
     'OUTLINE(@"SEL$3")',
     'MERGE(@"SEL$2")',
     'OUTLINE(@"SEL$64EAE176")',
     'OUTLINE(@"SEL$4")',
     'OUTLINE_LEAF(@"SEL$1")',
     'MERGE(@"SEL$64EAE176")',
     'OUTLINE_LEAF(@"SEL$54D64B3C")',
     'ALL_ROWS',
     'OPT_PARAM(''_optimizer_extended_cursor_sharing_rel'' ''none'')',
     'DB_VERSION(''11.2.0.4'')',
     'OPTIMIZER_FEATURES_ENABLE(''11.2.0.4'')',
     'IGNORE_OPTIM_EMBEDDED_HINTS',
     'END_OUTLINE_DATA'
 );
 DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
 sql_text => 'select * from ( 
                    select this_.col1 as col1,
                          this_.col2 as col2, 
                          this_.col3 as col3, 
                          this_.col4 as col4
                   from TAB1 this_ 
                   inner join TAB2 compaddres3_ on this_.col2 = compaddres3_.col2 
                   inner join TAB3 compshipme1_ on this_.col1=compshipme1_.col1
                   where this_.col1 in (:1) 
                   and   this_.col2 in (:2 , :3) 
                   and   compshipme1_.col1 in (:4 )
                   and   this_.col8 = ''OPEN''
                   and   this_.col5>=:5 
                   and   this_.col5<=:6 
                   order by this_.col1 asc ) 
             where rownum <= :7',
 profile => ar_profile_hints,name => 'PROFILE_d7w7fruzwyh2s'
 );
end;
/

You can run the above code on the primary to create the required profile.
Once the profile is created, purge the SQL from the shared pool on ADG using the sys.dbms_shared_pool.purge procedure.
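
As a rough sketch of that purge step (the address and hash value placeholders are illustrative, not actual values), you can look up the cursor on ADG and pass it to dbms_shared_pool.purge with the 'C' (cursor) flag:

SQL> select address, hash_value from v$sqlarea where sql_id = '&SQL_ID_TO_FIX';

SQL> exec sys.dbms_shared_pool.purge('<address>,<hash_value>', 'C');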

The next time the SQL runs on ADG, it automatically picks up the newly created profile, and the plan will be the one we tested earlier with hints.

I was able to see the following in the Note section when I ran DBMS_XPLAN.DISPLAY_CURSOR on ADG:

Note
-----
   - SQL profile PROFILE_d7w7fruzwyh2s used for this statement

Hope this helps !!

Reference

https://avdeo.com/2012/06/20/fixing-sql-plans-the-hard-way-part-1/

https://avdeo.com/2012/07/04/fixing-sql-plans-the-hard-way-part-2/

https://avdeo.com/2012/07/13/fixing-sql-plans-the-hard-way-part-3/

 

Using UDev to configure ASM disks

This is a small article on using UDev on RHEL 7. I have a VirtualBox VM running RHEL 7, and I am configuring ASM diskgroups on it.

Before we configure ASM diskgroups, we need disks/partitions available at the OS level that can be recognized by ASM. Oracle provides a utility called ASMLib which can be installed and used quite easily to configure disks/partitions at the OS level.

ASMLib stamps the headers of partitions/disks at the OS level with an Oracle format, making them easily detected by the ASM instance during disk discovery. You can go through ASMLib installation, configuration, and usage in an article written by my friend Anand Prakash: https://aprakash.wordpress.com/2016/05/19/oracle-asmlib

In this article we will focus on using UDev for setting up ASM disks.

Let’s start with some information on UDev

What is Udev?

Udev is the device manager for the Linux 2.6 kernel that creates/removes device nodes in the /dev directory dynamically. It is the successor of devfs and hotplug. It runs in userspace and the user can change device names using Udev rules.

Why Do We Need It ?

In older kernels, the /dev directory contained static device files. With dynamic device creation, device nodes are created only for devices that are actually present in the system. Let us see the disadvantages of the static /dev directory, which led to the development of Udev.

Problems Identifying the Exact Hardware Device for a Device Node in /dev

The kernel assigns a major/minor number pair when it detects a hardware device while booting the system. Consider two hard disks connected so that one is the master and the other the slave: the Linux system will call them /dev/hda and /dev/hdb. Now, if we interchange the disks, the device names change. This makes it difficult to identify the correct device related to a given static device node, and the situation gets worse when there are many hard disks connected to the system.

Udev provides a persistent device naming system through the /dev directory, making it easier to identify the device.

Huge Number of Device Nodes in /dev

In the static model of device node creation, no method was available to identify the hardware devices actually present in the system. So, device nodes were created for all the devices that Linux was known to support at the time. The huge mess of device nodes in /dev made it difficult to identify the devices actually present in the system.

Not Enough Major/Minor Number Pairs

The number of static device nodes to be included increased a lot in recent times and the 8-bit scheme, that was used, proved to be insufficient for handling all the devices. As a result the major/minor number pairs started running out.

Working of Udev

The Udev daemon listens to the netlink socket that the kernel uses for communicating with user space applications. The kernel will send a bunch of data through the netlink socket when a device is added to, or removed from a system. The Udev daemon catches all this data and will do the rest, i.e., device node creation, module loading etc.

Kernel Device Event Management

  • When bootup is initialized, the /dev directory is mounted in tmpfs.
  • After that, Udev will copy the static device nodes from /lib/udev/devices to the /dev directory.
  • The Udev daemon then runs and collects uevents from the kernel, for all the devices connected to the system.
  • The Udev daemon will parse the uevent data and it will match the data with the rules specified in /etc/udev/rules.d.
  • It will create the device nodes and symbolic links for the devices as specified in the rules.
    The Udev daemon reads the rules from /etc/udev/rules.d/*.rules and stores them in the memory.
  • Udev will receive an inotify event, if any rules were changed. It will read the changes and will update the memory.

Let's start by creating disks at the OS level. VirtualBox has commands that create block devices for the VM.

Using Udev to configure disks

Step 1) Creating Block devices at OS level

I am creating 4 disks of 10 GB each, using the following commands:

VBoxManage createhd --filename asm1.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename asm2.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename asm3.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename asm4.vdi --size 10240 --format VDI --variant Fixed

Step 2) Attaching the disk to the correct storage

VBoxManage storageattach 12102 --storagectl "SATA" --port 1 --device 0 --type hdd --medium asm1.vdi --mtype shareable
VBoxManage storageattach 12102 --storagectl "SATA" --port 2 --device 0 --type hdd --medium asm2.vdi --mtype shareable
VBoxManage storageattach 12102 --storagectl "SATA" --port 3 --device 0 --type hdd --medium asm3.vdi --mtype shareable
VBoxManage storageattach 12102 --storagectl "SATA" --port 4 --device 0 --type hdd --medium asm4.vdi --mtype shareable

Here 12102 is the name of the virtual machine. You can see the virtual machine name in the screenshot below.

vbox

Step 3) (optional) Making disks shareable

This step is optional and is required only if you are using an ASM cluster. In that case the ASM disks must be made shareable, as they have to be attached to all machines in the cluster. You can use the following commands to make the disks shareable:

VBoxManage modifyhd asm1.vdi --type shareable
VBoxManage modifyhd asm2.vdi --type shareable
VBoxManage modifyhd asm3.vdi --type shareable
VBoxManage modifyhd asm4.vdi --type shareable

Step 4) Check that the disks now show up in VirtualBox

The following screenshot shows that the disks are now added to the correct virtual machine:

vbox copy

Also, after you log in to the VM, you should be able to see the new devices under the /dev directory:

[root@advait ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8, 0 Jul 18 05:36 /dev/sda
brw-rw---- 1 root disk 8, 1 Jul 18 05:36 /dev/sda1
brw-rw---- 1 root disk 8, 2 Jul 18 05:36 /dev/sda2
brw-rw---- 1 root disk 8, 16 Jul 18 05:36 /dev/sdb
brw-rw---- 1 root disk 8, 32 Jul 18 05:36 /dev/sdc
brw-rw---- 1 root disk 8, 48 Jul 18 05:36 /dev/sdd
brw-rw---- 1 root disk 8, 64 Jul 18 05:36 /dev/sde

/dev/sda, /dev/sda1 and /dev/sda2 belong to the main device used by the VM itself.
/dev/sdb, /dev/sdc, /dev/sdd and /dev/sde are the devices we added above.

Step 5) Format the new devices and create partitions

You need to create new partitions using the fdisk utility. I am showing the commands for one disk; you can repeat the same for the remaining disks.

[root@advait ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xbda01838.

Command (m for help): n
Partition type:
 p primary (0 primary, 0 extended, 4 free)
 e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519): 
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set

Command (m for help): p

Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xbda01838

Device Boot Start End Blocks Id System
/dev/sdb1 2048 20971519 10484736 83 Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

This creates a single partition on each of the devices:

[root@advait ~]# ls -rlt /dev/sd*1
brw-rw---- 1 root disk 8, 1 Jul 18 05:36 /dev/sda1
brw-rw---- 1 root disk 8, 17 Jul 18 05:40 /dev/sdb1
brw-rw---- 1 root disk 8, 33 Jul 18 05:40 /dev/sdc1
brw-rw---- 1 root disk 8, 49 Jul 18 05:41 /dev/sdd1
brw-rw---- 1 root disk 8, 65 Jul 18 05:41 /dev/sde1

Step 6) Configure UDev rules

Before creating the UDev rules, we need to understand exactly what we want to do. We want to create an alias for each of the disks we created at the OS level so that each disk is always identified the same way, regardless of the device name Linux assigns to it. This can be done by recognizing each device based on a unique ID and assigning the alias to that device, and UDev can do just that.

Each disk has a unique SCSI ID. We can use this ID to recognize the disk and assign the required alias to it.
We can get the SCSI ID using the following command:

[root@advait ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdb1
1ATA_VBOX_HARDDISK_VB544d069c-abd3901e

The scsi_id command is located in the /usr/lib/udev/ directory on RHEL 7; in previous releases it was located under /sbin/.

Likewise, we can find the SCSI ID for each disk that we added.

Rules are defined in the /etc/udev/rules.d directory. Udev reads these rules and applies them to the devices listed in the /dev directory.

A rule looks like the following (each rule is written on a single line in the rules file):

KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB88ee7178-f28aa887", SYMLINK+="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"

Following is the explanation of each parameter

  • KERNEL=="sd?1" – This matches the kernel name of the device. In our case all our partitions have names of the form sd?1 (sdb1, sdc1, etc.), so this key matches the kernel name of the devices.
  • SUBSYSTEM=="block" – This key matches the subsystem of the devices. SUBSYSTEM could be block, scsi, usb, etc. All of our devices are block devices.
  • PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent" – This gets the unique SCSI ID of the device matched by the first two keys (KERNEL and SUBSYSTEM).
  • RESULT=="1ATA_VBOX_HARDDISK_VB88ee7178-f28aa887" – This matches the output of the PROGRAM command against RESULT. If they match, the actions below are taken.
  • SYMLINK+="asm-disk1" – This is part of the action. If the PROGRAM output matches RESULT, a symlink named asm-disk1 is created, pointing to the device in question.
  • OWNER="oracle" – This action changes the ownership of the device to the oracle user.
  • GROUP="dba" – This action changes the group of the device to the dba group.
  • MODE="0660" – This changes the permissions of the device file to 0660.

So the above rule means that the partition matching "sd?1" on the disk with the SCSI ID "1ATA_VBOX_HARDDISK_VB88ee7178-f28aa887" will always be referred to by the symlink /dev/asm-disk1, regardless of which letter Linux assigns when the device is discovered. In addition, the device will have the correct ownership and permissions for ASM.

We can create such a rule for each device, or, if the number of devices is large, we can use wildcards and matching patterns to create more general rules that take the required actions.

I created a new rules file, /etc/udev/rules.d/99-oracle-asm-disks.rules, and put the following rules in it, one for each device:

KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB544d069c-abd3901e", SYMLINK+="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB9a630cc5-d9697727", SYMLINK+="asm-disk2", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB446fdf92-8640efff", SYMLINK+="asm-disk3", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB3a71b4f2-8c603b78", SYMLINK+="asm-disk4", OWNER="oracle", GROUP="dba", MODE="0660"

Step 7) Load updated block device partition tables

You can use partprobe to load the partition tables for block devices

/sbin/partprobe /dev/sdb1
/sbin/partprobe /dev/sdc1
/sbin/partprobe /dev/sdd1
/sbin/partprobe /dev/sde1

Step 8) Test the rules

This is an optional step to check whether the rules work as expected. You can run the following commands to test them:

udevadm test /block/sdb/sdb1
udevadm test /block/sdc/sdc1
udevadm test /block/sdd/sdd1
udevadm test /block/sde/sde1

The output of one of the above commands looks like the following:

calling: test
version 219
This program is for debugging only, it does not run any program
specified by a RUN key. It may show incorrect results, because
some values may be different, or not available at a simulation run.

=== trie on-disk ===
tool version: 219
file size: 6984832 bytes
header size 80 bytes
strings 1805856 bytes
nodes 5178896 bytes
Load module index
Created link configuration context.
timestamp of '/etc/udev/rules.d' changed
Reading rules file: /usr/lib/udev/rules.d/10-dm.rules
Reading rules file: /usr/lib/udev/rules.d/100-balloon.rules
Reading rules file: /usr/lib/udev/rules.d/13-dm-disk.rules
Reading rules file: /usr/lib/udev/rules.d/40-redhat.rules
Reading rules file: /usr/lib/udev/rules.d/42-usb-hid-pm.rules
Reading rules file: /usr/lib/udev/rules.d/50-udev-default.rules
Reading rules file: /usr/lib/udev/rules.d/60-alias-kmsg.rules
Reading rules file: /usr/lib/udev/rules.d/60-cdrom_id.rules
Reading rules file: /usr/lib/udev/rules.d/60-drm.rules
Reading rules file: /usr/lib/udev/rules.d/60-keyboard.rules
Reading rules file: /usr/lib/udev/rules.d/60-net.rules
Reading rules file: /usr/lib/udev/rules.d/60-persistent-alsa.rules
Reading rules file: /usr/lib/udev/rules.d/60-persistent-input.rules
Reading rules file: /usr/lib/udev/rules.d/60-persistent-serial.rules
Reading rules file: /usr/lib/udev/rules.d/60-persistent-storage-tape.rules
Reading rules file: /usr/lib/udev/rules.d/60-persistent-storage.rules
Reading rules file: /usr/lib/udev/rules.d/60-persistent-v4l.rules
Reading rules file: /usr/lib/udev/rules.d/60-raw.rules
Reading rules file: /usr/lib/udev/rules.d/61-accelerometer.rules
Reading rules file: /usr/lib/udev/rules.d/64-btrfs.rules
Reading rules file: /usr/lib/udev/rules.d/70-mouse.rules
Reading rules file: /usr/lib/udev/rules.d/70-power-switch.rules
Reading rules file: /usr/lib/udev/rules.d/70-touchpad.rules
Reading rules file: /usr/lib/udev/rules.d/70-uaccess.rules
Reading rules file: /usr/lib/udev/rules.d/71-biosdevname.rules
Reading rules file: /usr/lib/udev/rules.d/71-seat.rules
Reading rules file: /usr/lib/udev/rules.d/73-idrac.rules
Reading rules file: /usr/lib/udev/rules.d/73-seat-late.rules
Reading rules file: /usr/lib/udev/rules.d/75-net-description.rules
Reading rules file: /usr/lib/udev/rules.d/75-probe_mtd.rules
Reading rules file: /usr/lib/udev/rules.d/75-tty-description.rules
Reading rules file: /usr/lib/udev/rules.d/78-sound-card.rules
Reading rules file: /usr/lib/udev/rules.d/80-drivers.rules
Reading rules file: /usr/lib/udev/rules.d/80-net-name-slot.rules
Reading rules file: /usr/lib/udev/rules.d/80-net-setup-link.rules
Reading rules file: /usr/lib/udev/rules.d/81-kvm-rhel.rules
Reading rules file: /usr/lib/udev/rules.d/85-nm-unmanaged.rules
Reading rules file: /usr/lib/udev/rules.d/90-alsa-tools-firmware.rules
Reading rules file: /usr/lib/udev/rules.d/90-iprutils.rules
Reading rules file: /usr/lib/udev/rules.d/90-vconsole.rules
Reading rules file: /usr/lib/udev/rules.d/91-drm-modeset.rules
Reading rules file: /usr/lib/udev/rules.d/95-dm-notify.rules
Reading rules file: /usr/lib/udev/rules.d/95-udev-late.rules
Reading rules file: /usr/lib/udev/rules.d/98-kexec.rules
Reading rules file: /usr/lib/udev/rules.d/98-rdma.rules
Reading rules file: /etc/udev/rules.d/99-oracle-asm-disks.rules
Reading rules file: /usr/lib/udev/rules.d/99-systemd.rules
rules contain 24576 bytes tokens (2048 * 12 bytes), 12216 bytes strings
1803 strings (22584 bytes), 1184 de-duplicated (10988 bytes), 620 trie nodes used
GROUP 6 /usr/lib/udev/rules.d/50-udev-default.rules:52
LINK 'disk/by-id/ata-VBOX_HARDDISK_VB544d069c-abd3901e-part1' /usr/lib/udev/rules.d/60-persistent-storage.rules:43
IMPORT builtin 'blkid' /usr/lib/udev/rules.d/60-persistent-storage.rules:72
probe /dev/sdb1 raid offset=0
PROGRAM '/usr/lib/udev/scsi_id -g -u -d /dev/sdb' /etc/udev/rules.d/99-oracle-asm-disks.rules:1
starting '/usr/lib/udev/scsi_id -g -u -d /dev/sdb'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdb'(out) '1ATA_VBOX_HARDDISK_VB544d069c-abd3901e'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdb' [10122] exit with return code 0
OWNER 54321 /etc/udev/rules.d/99-oracle-asm-disks.rules:1
GROUP 54322 /etc/udev/rules.d/99-oracle-asm-disks.rules:1
MODE 0660 /etc/udev/rules.d/99-oracle-asm-disks.rules:1
LINK 'asm-disk1' /etc/udev/rules.d/99-oracle-asm-disks.rules:1
PROGRAM '/usr/lib/udev/scsi_id -g -u -d /dev/sdb' /etc/udev/rules.d/99-oracle-asm-disks.rules:2
starting '/usr/lib/udev/scsi_id -g -u -d /dev/sdb'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdb'(out) '1ATA_VBOX_HARDDISK_VB544d069c-abd3901e'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdb' [10123] exit with return code 0
PROGRAM '/usr/lib/udev/scsi_id -g -u -d /dev/sdb' /etc/udev/rules.d/99-oracle-asm-disks.rules:3
starting '/usr/lib/udev/scsi_id -g -u -d /dev/sdb'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdb'(out) '1ATA_VBOX_HARDDISK_VB544d069c-abd3901e'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdb' [10124] exit with return code 0
PROGRAM '/usr/lib/udev/scsi_id -g -u -d /dev/sdb' /etc/udev/rules.d/99-oracle-asm-disks.rules:4
starting '/usr/lib/udev/scsi_id -g -u -d /dev/sdb'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdb'(out) '1ATA_VBOX_HARDDISK_VB544d069c-abd3901e'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdb' [10125] exit with return code 0
handling device node '/dev/sdb1', devnum=b8:17, mode=0660, uid=54321, gid=54322
preserve permissions /dev/sdb1, 060660, uid=54321, gid=54322
preserve already existing symlink '/dev/block/8:17' to '../sdb1'
found 'b8:17' claiming '/run/udev/links/\x2fasm-disk1'
creating link '/dev/asm-disk1' to '/dev/sdb1'
preserve already existing symlink '/dev/asm-disk1' to 'sdb1'
found 'b8:17' claiming '/run/udev/links/\x2fdisk\x2fby-id\x2fata-VBOX_HARDDISK_VB544d069c-abd3901e-part1'
creating link '/dev/disk/by-id/ata-VBOX_HARDDISK_VB544d069c-abd3901e-part1' to '/dev/sdb1'
preserve already existing symlink '/dev/disk/by-id/ata-VBOX_HARDDISK_VB544d069c-abd3901e-part1' to '../../sdb1'
created db file '/run/udev/data/b8:17' for '/block/sdb/sdb1'
ACTION=add
DEVLINKS=/dev/asm-disk1 /dev/disk/by-id/ata-VBOX_HARDDISK_VB544d069c-abd3901e-part1
DEVNAME=/dev/sdb1
DEVPATH=/block/sdb/sdb1
DEVTYPE=partition
ID_ATA=1
ID_ATA_FEATURE_SET_PM=1
ID_ATA_FEATURE_SET_PM_ENABLED=1
ID_ATA_SATA=1
ID_ATA_SATA_SIGNAL_RATE_GEN2=1
ID_ATA_WRITE_CACHE=1
ID_ATA_WRITE_CACHE_ENABLED=1
ID_BUS=ata
ID_MODEL=VBOX_HARDDISK
ID_MODEL_ENC=VBOX\x20HARDDISK\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
ID_PART_ENTRY_DISK=8:16
ID_PART_ENTRY_NUMBER=1
ID_PART_ENTRY_OFFSET=2048
ID_PART_ENTRY_SCHEME=dos
ID_PART_ENTRY_SIZE=20969472
ID_PART_ENTRY_TYPE=0x83
ID_PART_TABLE_TYPE=dos
ID_REVISION=1.0
ID_SERIAL=VBOX_HARDDISK_VB544d069c-abd3901e
ID_SERIAL_SHORT=VB544d069c-abd3901e
ID_TYPE=disk
MAJOR=8
MINOR=17
SUBSYSTEM=block
TAGS=:systemd:
USEC_INITIALIZED=404668
Unload module index
Unloaded link configuration context.

Step 9) Reload the udev rules

Use the following command to reload the rules:

udevadm control --reload-rules

Check whether the required symlinks and other actions have been applied to the devices:

[root@advait ~]# ls -rlt /dev/asm*
lrwxrwxrwx 1 root root 4 Jul 18 06:20 /dev/asm-disk2 -> sdc1
lrwxrwxrwx 1 root root 4 Jul 18 06:20 /dev/asm-disk3 -> sdd1
lrwxrwxrwx 1 root root 4 Jul 18 06:20 /dev/asm-disk4 -> sde1
lrwxrwxrwx 1 root root 4 Jul 18 06:20 /dev/asm-disk1 -> sdb1

The symlinks above are owned by root, but the underlying devices are owned by oracle:dba:

[root@advait ~]# ls -rlt /dev/sd?1
brw-rw---- 1 root disk 8, 1 Jul 18 05:36 /dev/sda1
brw-rw---- 1 oracle dba 8, 33 Jul 18 06:20 /dev/sdc1
brw-rw---- 1 oracle dba 8, 49 Jul 18 06:20 /dev/sdd1
brw-rw---- 1 oracle dba 8, 65 Jul 18 06:20 /dev/sde1
brw-rw---- 1 oracle dba 8, 17 Jul 18 06:20 /dev/sdb1

Now ASM can identify the disks as /dev/asm-disk*, and these symlinks will persist across host reboots.
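
As a small sketch of how ASM would then consume these devices (the diskgroup name and redundancy level are illustrative choices, not from this setup), you can point the ASM disk discovery string at the symlinks and create a diskgroup from them in the ASM instance:

SQL> alter system set asm_diskstring = '/dev/asm-disk*' scope=both;

SQL> create diskgroup DATA external redundancy
  2  disk '/dev/asm-disk1', '/dev/asm-disk2', '/dev/asm-disk3', '/dev/asm-disk4';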

Hope this helps !!

References:

https://www.linux.com/news/udev-introduction-device-management-modern-linux-system
http://www.reactivated.net/writing_udev_rules.html

Oracle 12c Learning Series: Automatic Data Optimization – ADO

I am starting an Oracle 12c learning series, in which I plan to publish multiple articles on Oracle 12c new features. I hope these articles will be helpful to DBAs aspiring for OCP 12c certification, and also to others who are planning to implement these features in their databases.

This is a very long post about ADO – Automatic Data Optimization, which is one of the ILM strategies for managing aging data, so please be patient while reading it. The article provides end-to-end details about ADO implementation, command syntax, etc.

Typical lifecycle of data

The lifecycle of data starts with insert statements: new data is inserted into the table. At this stage the data is either kept uncompressed or is compressed by the DBA, but compression for such active data is optimized for DMLs (for example, OLTP table compression): the compression ratio is lower, and the impact on DMLs is minimal.

After a month, activity has subsided, although significant OLTP transactions are still carried out. At this stage, data in the formerly most active partition stays in OLTP compressed format, and a new partition is automatically created.

Two months down the line, the data is rarely modified and rarely accessed. At this stage, the partition can be moved to a low-cost storage tier and compressed at a higher level, such as Hybrid Columnar Compression (HCC).

After a year, the data is considered dormant and is no longer accessed or updated. At this stage, it can be moved to a low-cost storage tier with the highest level of HCC compression, and may also be marked read-only.

How (ADO) Automatic Data Optimization works

Automatic Data Optimization (ADO) is an ILM strategy that uses activity tracking, via the heat map, to determine whether data is still being accessed or not.

The heat map provides the ability to track and mark data as it goes through lifecycle changes. It can track data at the following levels:

  • Data accesses at segment level
  • Data modification at segment level and block level

The collected heat map statistics are stored in the SYSAUX tablespace. They provide a "heat map" of hot and cold data based on how frequently the data is accessed or modified.

ADO allows you to create policies that use the heat map statistics to compress and move data when necessary. ADO automatically evaluates and executes the policies that perform compression and storage tiering actions.

Following are the outline steps which can be used to implement and use ADO policies

1)    Enable heat map at the database level. Heat map tracks data access at the segment level and data modification at the segment and row level.

2)    Create ADO policies.

You can create ADO policies at different scopes/level

a)    Tablespace level

b)    Group level

c)     Segment level

d)    Row level

You can also create ADO policies based on different operations to track.

a)    Creation

b)    Access

c)     Modification

You can also provide condition when this policy takes effect

a)    Based on time period. Ex: After 3 days, After 1 week, After 1 year

b)    Tablespace usage above threshold

You can also specify different kind of action for ADO policy

a)    Compression

b)    Moving to different storage tier

c)     Both compression + move

3)    Evaluating and executing ADO policies. By default, segment-level ADO policies are evaluated every day during the maintenance window; a DBA can also create a custom schedule for evaluating them. Row-level policies are evaluated by MMON every 15 minutes. (A small sketch of triggering an immediate evaluation follows this list.)

4)    Verify ADO execution details using the DBA_ILMEVALUATIONDETAILS and DBA_ILMRESULTS views.

5)    Verify whether the segment has been compressed or moved, based on the action specified in the executed policy.
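
For reference, here is a minimal sketch of forcing an immediate policy evaluation on a single table with DBMS_ILM.EXECUTE_ILM instead of waiting for the maintenance window (the owner and table names are just placeholders):

declare
  v_task_id number;
begin
  -- evaluate and execute ADO/ILM policies for one object immediately
  dbms_ilm.execute_ilm(owner => 'SCOTT', object_name => 'SALES', task_id => v_task_id);
  dbms_output.put_line('ILM task id: ' || v_task_id);
end;
/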

Let's now go into the details of each of the steps above and see how we can enable ADO policies as per our business requirements.

Step 1) Enabling Heat Map

Enabling and disabling heat map

The very first step before you can start using ADO is to enable activity tracking, which is done by setting the new initialization parameter HEAT_MAP:

SQL> alter system set heat_map = on;

System altered.

The above parameter enables activity tracking for the entire instance. Enabling heat map tracks both DML and access at the segment level and stores these details in the relevant tables in the SYSAUX tablespace. Access to segments in the SYSTEM and SYSAUX tablespaces is not tracked.

*** An important thing to note here is that heat map does not work with CDB databases; it works only for non-CDB databases, so you cannot use this feature in a CDB.

You can also enable heat map at the session level using the following command:

SQL> alter session set heat_map = on;

Session altered.

You can turn off heat map by setting the parameter to off as below

SQL> alter system set heat_map = off;

System altered.

Checking heat map details

Once you enable heat map, you can check various details of the activity tracking it provides: when segments were accessed recently, whether the access was by ROWID or by full scan, and when the most recent DML was performed on the segment.

V$HEAT_MAP_SEGMENT view

Tracking details of segments are available in real time in the V$HEAT_MAP_SEGMENT view. As soon as a session accesses an object in the database, its heat map information is available immediately in this view.

For example, I just accessed one of the demo tables (SALES) in my test database after enabling heat map, and I can see the following information:


SQL> select object_name, SUBOBJECT_NAME, TRACK_TIME, SEGMENT_WRITE, SEGMENT_READ, FULL_SCAN, LOOKUP_SCAN, con_id from v$heat_map_segment order by 2;

OBJECT_NAME          SUBOBJECT_ TRACK_TIM SEG SEG FUL LOO     CON_ID
-------------------- ---------- --------- --- --- --- --- ----------
SALES                P10        14-JUN-16 NO  NO  YES NO           0
SALES                P11        14-JUN-16 NO  NO  YES NO           0
SALES                P12        14-JUN-16 YES NO  YES NO           0
SALES                P2         14-JUN-16 NO  NO  YES NO           0
SALES                P3         14-JUN-16 NO  NO  YES NO           0
SALES                P4         14-JUN-16 NO  NO  YES NO           0
SALES                P5         14-JUN-16 NO  NO  YES NO           0
SALES                P6         14-JUN-16 NO  NO  YES NO           0
SALES                P7         14-JUN-16 NO  NO  YES NO           0
SALES                P8         14-JUN-16 NO  NO  YES NO           0
SALES                P9         14-JUN-16 NO  NO  YES NO           0
SALES                PM         14-JUN-16 NO  NO  YES NO           0
I_SALES_ITEM_YEAR               14-JUN-16 NO  NO  NO  YES          0
SALES_ITEM                      14-JUN-16 NO  NO  YES NO           0

14 rows selected.

If the FULL_SCAN column shows YES, the access was done using a full scan.

If the SEGMENT_WRITE column shows YES, the table was modified.

TRACK_TIME is the timestamp when the access or modification took place.

You can also see that the LOOKUP_SCAN column is YES for one row, where the object_name is the name of an index. So if an index is used in your select plan, LOOKUP_SCAN shows YES for that access.

SYS.HEAT_MAP_STAT$ and DBA_HEAT_MAP_SEG_HISTOGRAM

Data from V$HEAT_MAP_SEGMENT is persisted into the SYS.HEAT_MAP_STAT$ table in the SYSAUX tablespace by a DBMS_SCHEDULER job at regular intervals. This data is then available via the DBA_HEAT_MAP_SEG_HISTOGRAM view. The following shows similar data when we query DBA_HEAT_MAP_SEG_HISTOGRAM:

SQL> select object_name, subobject_name, track_time, SEGMENT_WRITE, full_scan, lookup_scan from DBA_HEAT_MAP_SEG_HISTOGRAM;

OBJECT_NAME          SUBOBJECT_ TRACK_TIM SEG FUL LOO
-------------------- ---------- --------- --- --- ---
SALES                P2         14-JUN-16 NO  YES NO
SALES                P3         14-JUN-16 NO  YES NO
SALES                P4         14-JUN-16 NO  YES NO
SALES                P5         14-JUN-16 NO  YES NO
SALES                P6         14-JUN-16 NO  YES NO
SALES                P7         14-JUN-16 NO  YES NO
SALES                P8         14-JUN-16 NO  YES NO
SALES                P9         14-JUN-16 NO  YES NO
SALES                P10        14-JUN-16 NO  YES NO
SALES                P11        14-JUN-16 NO  YES NO
SALES                P12        14-JUN-16 YES YES NO
SALES                PM         14-JUN-16 NO  YES NO
SALES_ITEM                      14-JUN-16 NO  YES NO
I_SALES_ITEM_YEAR               14-JUN-16 NO  NO  YES

14 rows selected.

DBA_HEATMAP_TOP_TABLESPACES

This view shows heat map information for the top 100 tablespaces. It reports at the tablespace level instead of the segment level. You can find the MIN, MAX and AVG read time, write time, full table access time and lookup access time for each tablespace.

DBMS_HEAT_MAP package

The DBMS_HEAT_MAP package can be used to get heat map statistics at an even deeper level. The views above only give you heat map statistics at the segment level, but if you want to go deeper, for example to check which blocks were accessed and when, you can use the subprograms in the DBMS_HEAT_MAP package.

Following is a summary of the procedures and functions in DBMS_HEAT_MAP package:

  • BLOCK_HEAT_MAP function – Returns the last modification time for each block in a table segment
  • EXTENT_HEAT_MAP function – Returns the extent-level Heat Map statistics for a table segment
  • OBJECT_HEAT_MAP procedure – Shows the minimum, maximum, and average access times for all segments belonging to an object
  • SEGMENT_HEAT_MAP procedure – Shows the Heat Map attributes for a segment
  • TABLESPACE_HEAT_MAP procedure – Shows minimum, maximum, and average access times for all segments in a tablespace

For example, we can use DBMS_HEAT_MAP.BLOCK_HEAT_MAP to get heat map statistics for the blocks of a segment. Similarly, the DBMS_HEAT_MAP.EXTENT_HEAT_MAP function provides heat map statistics at the extent level.
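
As a rough sketch (assuming the SALES table from the earlier output and that its owner is SCOTT; both names are illustrative), the block-level subprogram can be queried as a table function:

SQL> select * from table(DBMS_HEAT_MAP.BLOCK_HEAT_MAP('SCOTT', 'SALES'));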

Step 2) Enabling ADO policies

Let's move to the second step of implementing ILM. Now that we have enabled heat map in step 1, we need to create policies at various levels to manage our data. The policies we create will use the heat map statistics and take the action we define in the policy. There are four parts to an ADO policy:

Level at which you can create ADO policies

  • Tablespace level – A default ADO policy can be defined at the tablespace level; it applies to all segments existing in that tablespace and to new segments created in it.
  • Group level – With a group-level ADO policy, if a table is eligible for the ADO action defined in the policy, the same action is performed on all dependent objects. If the action is compression and the table has LOB columns, all SecureFiles LOBs are compressed as well.
  • Segment level – Applies to tables and table partitions. If no policy is defined at the segment level and there is a default policy at the tablespace level, the tablespace-level policy applies to the segment; a segment-level policy overrides the tablespace-level policy.
  • Row level – Can be created only for tracking modifications. We cannot have a row-level policy for creation or access operations.

Operations to track

  • Creation – Action is taken once the specified time condition after creation of the data is met
  • Access – Action is taken once the specified time condition after the last access of the data is met
  • Modification – Action is taken once the specified time condition after the last modification of the data is met

Condition when this policy takes effect

  • Based on time period. Ex: After 3 days, After 1 week, After 1 year
  • Tablespace usage above threshold

Action to take for ADO policy

  • Compression
  • Data movement – Moving to a different storage tier
  • Both compression and data movement

You can create ILM policies as part of the CREATE TABLE statement, or you can add ILM policies to existing tables using ALTER TABLE statements. A segment can have multiple ILM policies defined on it, as shown in the sketch below.
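
For example, a policy can be attached at table creation time. The following is only a sketch – SALES_HISTORY and its columns are made-up names:

CREATE TABLE SALES_HISTORY
( SALE_ID NUMBER, SALE_DATE DATE, AMOUNT NUMBER )
ILM ADD POLICY ROW STORE COMPRESS ADVANCED
SEGMENT AFTER 90 DAYS OF NO MODIFICATION;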

Before we look at the different examples of adding ILM policies with ALTER TABLE, let's take a look at the various compression levels available. Data can be compressed while it is being inserted, updated or loaded into a table via bulk load.

We have following 4 compression levels available

  • ROW STORE COMPRESS BASIC

This is the basic level of compression. It compresses data loaded into a table using direct-path insert and does not require the Advanced Compression Option (ACO).

  • ROW STORE COMPRESS ADVANCED

This is a renamed syntax for the previous OLTP table compression feature that was part of ACO. ROW STORE COMPRESS ADVANCED on a heap table maps to LOW compression for SecureFiles LOB segments when a GROUP level ADO policy is implemented.

  • COLUMN STORE COMPRESS FOR QUERY LOW or HIGH

This provides Hybrid Columnar Compression (HCC) and gives a higher level of compression than ROW STORE COMPRESS. It works well when load performance is critical, queries are run frequently against the data and no DML is expected. COLUMN STORE COMPRESS FOR QUERY LOW or HIGH maps to MEDIUM compression for SecureFiles LOB segments when a GROUP level ADO policy is implemented.

  • COLUMN STORE COMPRESS FOR ARCHIVE LOW or HIGH

This provides Hybrid Columnar Compression (HCC) and gives the highest level of compression. It works well when data is accessed very infrequently and no DML is run against the data. COLUMN STORE COMPRESS FOR ARCHIVE LOW or HIGH maps to HIGH compression for SecureFiles LOB segments when a GROUP level ADO policy is implemented.

Let’s look at creating ADO policies for the compression action and the data movement action.

ADO policies for Compression action

Following are the different examples of creating ADO compression policies

Tablespace level compression policy

ALTER TABLESPACE USERS DEFAULT ILM ADD POLICY
ROW STORE COMPRESS ADVANCED
SEGMENT AFTER 30 DAYS OF NO ACCESS

In the above command, we can see the various sections required in a policy definition. The ALTER TABLESPACE ... DEFAULT clause makes this the default policy at tablespace level, applicable to the segments in that tablespace as represented by the SEGMENT keyword.

  • ILM ADD POLICY – This is required to add a policy to a tablespace or segment
  • ROW STORE COMPRESS ADVANCED – This is the compression level we are using, as this is a compression-based ILM policy
  • SEGMENT – This indicates that the policy is applicable to all segments in the USERS tablespace
  • AFTER 30 DAYS OF – This is the condition that makes the policy eligible to take the required action of compression
  • NO ACCESS – This is the operation to track. We have 3 different types of operations, as defined above – access, modification and creation

So if we alter tablespace USERS using the above command and add this ILM policy, then any segment in that tablespace (which has no other ILM policy set) that is not accessed for more than 30 days will be compressed with ROW STORE COMPRESS ADVANCED.

Group level compression policy

ALTER TABLE SALES ILM ADD POLICY
ROW STORE COMPRESS ADVANCED
GROUP AFTER 60 DAYS OF NO MODIFICATION

In the above example, we are implementing a group level ILM policy. This policy will automatically compress the table SALES if no modification is done for 60 days. Since we are using a GROUP level policy, dependent SecureFiles LOBs are compressed with LOW compression after 60 days of no modification.

ALTER TABLE SALES MODIFY PARTITION P1 ILM ADD POLICY
COLUMN STORE COMPRESS FOR ARCHIVE HIGH
GROUP AFTER 3 MONTHS OF CREATION

In this example, we are using a GROUP level compression policy on a partition of a table. This policy will automatically compress partition P1 after 3 months of creation, applying the highest level of compression (HCC), and the corresponding LOBs will be compressed with HIGH compression. Global indexes are maintained.

Segment level compression policy

ALTER TABLE SALES ILM ADD POLICY
COLUMN STORE COMPRESS FOR QUERY HIGH
SEGMENT AFTER 90 DAYS OF NO MODIFICATION

In this example, we are creating a segment level compression policy on the SALES table. This policy will automatically compress the SALES table with COLUMN STORE COMPRESS FOR QUERY HIGH if it is not modified for 90 days.

ALTER TABLE SALES ILM ADD POLICY
ROW STORE COMPRESS
SEGMENT AFTER 1 YEAR OF CREATION

This example implements the basic level of compression (ROW STORE COMPRESS, i.e. BASIC) on the SALES table: the table will be automatically compressed 1 year after its creation.

Row level compression policy

ALTER TABLE SALES ILM ADD POLICY
ROW STORE COMPRESS ADVANCED
ROW AFTER 30 DAYS OF NO MODIFICATION

The above policy will compress any block of the SALES table in which no row has been modified for the last 30 days. If even one row in a block was modified within that window, the block does not qualify for compression.

An important thing to note here is that row level compression policies can be created based on modification time only; they cannot be created based on creation time or access time. Also, the only compression type available for a row level policy is ROW STORE COMPRESS ADVANCED/BASIC. We cannot use columnar compression for a row level policy.

ADO policies for Data movement action

This is the other type of ADO policy, and it can be set at segment level only. We cannot set it at tablespace level, group level or any other level. Also, data movement happens at tablespace level, meaning that we can move a table or partition to another tablespace, typically on lower-cost storage.

Data movement policy takes following format

ALTER TABLE <TABLE_NAME> ILM ADD POLICY
TIER TO <TABLESPACE_NAME>
[ CUSTOM ILM RULE | SEGMENT <CONDITION> <OPERATION> ]

TIER TO is a new clause in which we specify the tablespace the segment should move to once it satisfies the condition set. In the last section of the command we can also provide a custom ILM rule instead of a condition (for example AFTER 30 DAYS, AFTER 1 YEAR etc.) and operation (for example CREATION, NO MODIFICATION, NO ACCESS etc.).

Let’s take an example

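ALTER TABLE SALES ILM ADD POLICY
TIER TO low_cost_storage
SEGMENT AFTER 60 DAYS OF CREATION
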
In the above example, we replaced the compression clause with the TIER TO clause.

TIER TO – This clause provides the tablespace name where the segment should move after it satisfies the condition. In this example, the ILM policy will automatically move the SALES table to the low_cost_storage tablespace after 60 days of creation.

If we don’t provide the last part (condition and operation), the policy is still created, and the data movement action takes place when the tablespace is running out of space. The space thresholds for tablespaces are already set, and we can see those values in the DBA_ILMPARAMETERS view.

Following are the tiering fullness thresholds for a tablespace:

  • TBS PERCENT USED (default 85): Objects with a tiering policy start moving out of the tablespace once it is TBS PERCENT USED full (default 85% full)
  • TBS PERCENT FREE (default 25): Objects with a tiering policy continue to move out until the source tablespace has TBS PERCENT FREE free space (default 25% free)

You can see the current values by querying the DBA_ILMPARAMETERS view, and you can customize these parameters using the DBMS_ILM_ADMIN package.


SQL> select * from dba_ilmparameters;

NAME                      VALUE
-------------------- ----------
ENABLED                       1
RETENTION TIME               30
JOB LIMIT                     2
EXECUTION MODE                2
EXECUTION INTERVAL            1
TBS PERCENT USED             90
TBS PERCENT FREE             15
POLICY TIME                   0

8 rows selected.

For example, if we want to change TBS PERCENT USED from the existing 85% to 90%, we can use the following procedure:

SQL> exec dbms_ilm_admin.customize_ilm(dbms_ilm_admin.tbs_percent_used,90);

PL/SQL procedure successfully completed.

We will check out other procedures and functions in the DBMS_ILM_ADMIN package in a later section of this chapter.

So if we set the following policy, the SALES table will be moved out of its current tablespace to the low_cost_storage tablespace when that tablespace reaches 90% fullness (the TBS PERCENT USED value we set above).

ALTER TABLE SALES ILM ADD POLICY
TIER TO low_cost_storage

Also, if we set this policy on many tables in the same tablespace and the tablespace reaches the fullness threshold, tables are moved to the target tablespace starting with the oldest accessed tables first.

The last thing to know about data movement policies is that they are executed only once in their lifetime. Since they apply only at segment level, once the policy has been executed on the segment and the required movement has taken place, the policy is disabled for that segment. This rule applies only to data movement policies and not to compression policies.

CUSTOM_ILM_RULES

Instead of using the condition and operation in the simplified syntax provided by Oracle, you can implement a more complex algorithm for deciding when the desired ILM action should take place by specifying a custom ILM rule. The syntax for a custom ILM rule looks like below:

ALTER TABLE SALES ILM ADD POLICY
<Compression clause> | <Data movement clause>
ON <CUSTOM_ILM_RULE>

A custom ILM rule is a function that returns a Boolean. You can implement complex logic inside the function and return a Boolean after evaluating that logic. If the return value is TRUE, the ILM policy takes its action; if the return value is FALSE, it does not. A custom ILM rule can be used for both compression and data movement policies, as in the sketch below.
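
The following is only a rough sketch – the rule name and its logic are made up, and the assumption that the function receives the object number (objn) of the segment being evaluated should be verified against the ADO documentation:

CREATE OR REPLACE FUNCTION sales_archival_rule (objn IN NUMBER)
RETURN BOOLEAN
IS
BEGIN
  -- Allow the ILM action only outside the quarter-end reporting months
  RETURN EXTRACT(MONTH FROM SYSDATE) NOT IN (3, 6, 9, 12);
END;
/

ALTER TABLE SALES ILM ADD POLICY
TIER TO low_cost_storage
ON sales_archival_rule;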

Implementing multiple policies on segment

A segment can have multiple ADO policies implemented on it. This makes sense, as we typically want different actions taken on data as it ages. For example, we might go for a basic level of compression one week after data insertion, more advanced compression after a month of no modification, and the highest level of compression, or a move to low-cost storage, after a month of no access or after a year of creation. To satisfy all these timelines and criteria it is not unusual to have multiple ILM policies on a segment. But there are rules to follow.

Rule 1) Policies should be applied on the same operation or statistic (creation, modification or access)

If you create a segment level policy on a table based on ACCESS, make sure that other policies on that table are also created based on the ACCESS operation only.

Example:

I create a policy on the SALES table based on ACCESS:

SQL> alter table demo_user.sales ilm add policy row store compress advanced segment after 2 days of no access;

Table altered.

Then I create another policy on the same table, also based on ACCESS:

SQL> alter table demo_user.sales ilm add policy column store compress for query high segment after 30 days of no access;

Table altered.

Now, if I try to create a 3rd policy based on creation or modification, I get a conflict:

SQL> alter table demo_user.sales ilm add policy column store compress for archive high segment after 60 days of creation;
alter table demo_user.sales ilm add policy column store compress for archive high segment after 60 days of creation
*
ERROR at line 1:
ORA-38323: policy conflicts with policy 21

Rule 2) Compression level should increase with time

You should not reduce the compression level as the time condition increases. In the following example, I tried to set columnar compression after 7 days of no access and row store compression (which is a lower level of compression than columnar) after 30 days of no access. This causes a conflict:

SQL> alter table demo_user.sales ilm add policy column store compress for query high segment after 7 days of no access;

Table altered.

SQL> alter table demo_user.sales ilm add policy row store compress advanced segment after 30 days of no access;
alter table demo_user.sales ilm add policy row store compress advanced segment after 30 days of no access
*
ERROR at line 1:
ORA-38325: policy conflicts with policy 23

Rule 3) Only 1 row level policy allowed per segment

You can have only 1 row level policy on a table. In the following example, I tried to create 2 row level policies on the same table and it was not allowed:

SQL> alter table demo_user.sales ilm add policy row store compress advanced row after 7 days of no modification;

Table altered.

SQL> alter table demo_user.sales ilm add policy row store compress advanced row after 30 days of no modification;
alter table demo_user.sales ilm add policy row store compress advanced row after 30 days of no modification
*
ERROR at line 1:
ORA-38325: policy conflicts with policy 24

Also remember that you can create only a “no modification” operation policy at row level.

Rule 4) Segment level policy does not override row level policy

Continuing with the above example, after creating the row level policy, we can still create a segment level policy:

SQL> alter table demo_user.sales ilm add policy column store compress for query high segment after 30 days of no modification;

Table altered.

Policy priority and policy Inheritance

As we discussed before, we can create policies at tablespace level (which serve as the default for every segment created in that tablespace with no policy of its own), at segment level and even at partition level. The question remains: which policy actually takes effect?

Again, this follows rules. The following simple rules describe the behavior:

  • A child level policy always overrides a parent level policy for the same action. By this logic, a policy set on a table partition overrides the one set at table level if both have the same action, and a policy at table level overrides the policy at tablespace level if both have the same action
  • If no policy is defined at the child level, inheritance takes place. For example, if no ILM policy is defined at partition level, the table level policy is applied at partition level. Likewise, if no ILM policy is defined at table level, the tablespace level policy is inherited at table level
  • Inheritance is additive if the policy actions are different. For example, if you have a compression policy at tablespace level with one time condition and a data movement policy at segment level with another condition, the total effect on the segment is the sum of both policies – i.e. compression + data movement

SQL> select policy_name, object_name, subobject_name, object_type, inherited_from from dba_ilmobjects;

POLICY_NAM OBJECT_NAME          SUBOBJECT_ OBJECT_TYPE                    INHERITED_FROM
---------- -------------------- ---------- ------------------------------ --------------------
P25        SALES                P1         TABLE PARTITION                TABLE
P25        SALES                P10        TABLE PARTITION                TABLE
P25        SALES                P11        TABLE PARTITION                TABLE
P25        SALES                P12        TABLE PARTITION                TABLE
P25        SALES                P2         TABLE PARTITION                TABLE
P25        SALES                P3         TABLE PARTITION                TABLE
P25        SALES                P4         TABLE PARTITION                TABLE
P25        SALES                P5         TABLE PARTITION                TABLE
P25        SALES                P6         TABLE PARTITION                TABLE
P25        SALES                P7         TABLE PARTITION                TABLE
P25        SALES                P8         TABLE PARTITION                TABLE
P25        SALES                P9         TABLE PARTITION                TABLE
P25        SALES                PM         TABLE PARTITION                TABLE
P25        SALES                           TABLE                          POLICY NOT INHERITED
P41        SALES_ITEM                      TABLE                          TABLESPACE

15 rows selected.

We can check the different policies implemented on different objects using the DBA_ILMOBJECTS view.

In the above example, policy P41 is applied to the SALES_ITEM table because it exists in the USERS tablespace and has no ILM policy defined on it directly. Since we have a default ILM policy on the USERS tablespace, SALES_ITEM inherits this policy from the tablespace, and the INHERITED_FROM column shows the value TABLESPACE.

Similarly, policy P25 applies to the SALES table and all of its partitions. We created the policy at table level only, not at partition level, so all partitions of the table inherit the table level policy.

For object type TABLE and policy P25, we see “POLICY NOT INHERITED” in the INHERITED_FROM column. This is because we created this policy at table level itself and it is not inherited from any other level.

Checking ILM policy details

We have a couple of views that provide details about the ILM policies that exist at various levels in the database.

DBA_ILMPOLICIES

You can view all the policies configured in the database using this view:


SQL> select * from dba_ilmpolicies;

POLICY_NAM POLICY_TYPE   TABLESPACE ENABLE DELETED
---------- ------------- ---------- ------ -------
P24        DATA MOVEMENT            YES    NO
P25        DATA MOVEMENT            YES    NO
P41        DATA MOVEMENT USERS      YES    NO

Even compression policies are represented as data movement policies in this view. This is because data movement happens during compression.

DBA_ILMDATAMOVEMENTPOLICIES

This view shows the action type, compression level, condition type and condition duration for each policy.


SQL> select policy_name, action_type, compression_level, condition_type, condition_days from DBA_ILMDATAMOVEMENTPOLICIES;

POLICY_NAM ACTION_TYPE COMPRESSION_LEV CONDITION_TYPE         CONDITION_DAYS
---------- ----------- --------------- ---------------------- --------------
P41        COMPRESSION ADVANCED        LAST MODIFICATION TIME             60
P24        COMPRESSION ADVANCED        LAST MODIFICATION TIME              7
P25        COMPRESSION QUERY HIGH      LAST MODIFICATION TIME             30

Step 3) Evaluating and Executing ADO policies

We have 2 ways of evaluating and executing ADO policies – via the scheduled maintenance window, or by evaluating and executing them manually. Let's look at both approaches.

Scheduled in maintenance window

Row level policies are evaluated and executed every 15 minutes by the MMON background process. Segment level policies are evaluated and executed once daily during the maintenance window using a scheduler job. This scheduler job uses the DBMS_ILM.EXECUTE_ILM_TASK procedure to evaluate and execute ADO policies.

** Note that the name of this package is DBMS_ILM, as opposed to DBMS_ILM_ADMIN, which is used for customizing ILM parameters and other admin-related tasks. We will look into the DBMS_ILM_ADMIN package in a short while.

In the DBMS_ILM.EXECUTE_ILM_TASK procedure, TASK_ID is an input parameter; the task id itself is generated automatically during the evaluation process, which also happens during the scheduled maintenance window.

You can change the interval of evaluation and execution using the DBMS_ILM_ADMIN.CUSTOMIZE_ILM procedure. We saw this procedure previously when changing the TBS PERCENT FREE and TBS PERCENT USED parameters. We can use the same procedure to customize the execution interval of ILM tasks:

SQL> exec dbms_ilm_admin.customize_ilm(dbms_ilm_admin.execution_interval, 1);

PL/SQL procedure successfully completed.

Manually executing ADO policies

For manually executing ADO policies, we have 2 options

  1. Directly execute ADO policies without doing any customization
  2. Preview the evaluation of ADO policies, make any desired changes to the task and then execute the task

You can directly execute ADO policies without any customization using the DBMS_ILM.EXECUTE_ILM procedure. Note that this procedure is different from the one that runs in the scheduled maintenance window (DBMS_ILM.EXECUTE_ILM_TASK). The procedure is overloaded; in its simplest form you just pass the owner and table name, and all policies on that table are executed.
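
For illustration only, a sketch of this simple form (DEMO_USER and SALES are assumed names; the exact overload – owner, object name and an OUT task id, as I recall – should be verified in the PL/SQL packages reference):

SQL> var v_task_id number
SQL> exec DBMS_ILM.EXECUTE_ILM(OWNER=>'DEMO_USER', OBJECT_NAME=>'SALES', TASK_ID=>:v_task_id);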

You can also preview the evaluation results before you execute the task. Perform the following steps to customize ILM tasks:

  • Evaluate a set of ADO policies and preview the result. Check which tables/policies will be executed
  • Add/remove objects/subobjects from the ILM task evaluation result
  • Execute the final customized ILM task

You can use the following procedures, in the same order, to perform the above steps of customizing ILM tasks:

  • Execute DBMS_ILM.PREVIEW_ILM procedure to preview ILM tasks for database or specific schema
  • Execute DBMS_ILM.ADD_TO_ILM or DBMS_ILM.REMOVE_FROM_ILM to add or remove objects/subobjects from evaluated task
  • Execute DBMS_ILM.EXECUTE_ILM_TASK to execute the above customized ILM task

Example:

var v_task_id number;
exec DBMS_ILM.PREVIEW_ILM(TASK_ID=> :v_task_id, ILM_SCOPE=>DBMS_ILM.SCOPE_SCHEMA);

Let’s say the above command generated task_id 1234.

exec DBMS_ILM.ADD_TO_ILM(TASK_ID=>1234, OWN=>'DEMO_USER', OBJ_NAME=>'NEW_TABLE');

exec DBMS_ILM.EXECUTE_ILM_TASK(TASK_ID=>1234);

You can also pass the following 2 parameters to DBMS_ILM.EXECUTE_ILM_TASK:

EXECUTION_MODE: The value ILM_EXECUTION_ONLINE executes the task online and ILM_EXECUTION_OFFLINE executes the task offline.

EXECUTION_SCHEDULE: The only possible value for this parameter is DBMS_ILM.SCHEDULE_IMMEDIATE.
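
For example, a sketch of an offline, immediate execution of the customized task (task id 1234 is carried over from the earlier example):

SQL> exec DBMS_ILM.EXECUTE_ILM_TASK(TASK_ID=>1234, EXECUTION_MODE=>DBMS_ILM.ILM_EXECUTION_OFFLINE, EXECUTION_SCHEDULE=>DBMS_ILM.SCHEDULE_IMMEDIATE);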

Step 4) Checking execution details of ADO policies

You can check the details of ADO policy evaluation and execution using the following 3 views:

  • DBA_ILMEVALUATIONDETAILS
  • DBA_ILMRESULTS
  • DBA_ILMTASKS

DBA_ILMTASKS

Every task that is executed, whether manually or automatically via the scheduler, is recorded in the DBA_ILMTASKS view. It stores the creation time, start time and completion time of the task, and whether the task is completed, failed or active.


SQL> select TASK_ID, STATE, START_TIME, COMPLETION_TIME from DBA_ILMTASKS order by task_id;

   TASK_ID STATE     START_TIME                     COMPLETION_TIME
---------- --------- ------------------------------ ------------------------------
         1 COMPLETED 15-JUN-16 10.19.24.976731 PM   15-JUN-16 10.19.24.976731 PM
         2 COMPLETED 15-JUN-16 10.34.27.587548 PM   15-JUN-16 10.34.27.587548 PM
         3 COMPLETED 15-JUN-16 10.49.30.158279 PM   15-JUN-16 10.49.30.158279 PM
         4 COMPLETED 15-JUN-16 11.04.32.861197 PM   15-JUN-16 11.04.32.861197 PM
         5 COMPLETED 15-JUN-16 11.19.35.192715 PM   15-JUN-16 11.19.35.192715 PM

DBA_ILMEVALUATIONDETAILS

This view can be used to see the details of task execution after the evaluation of the ADO policies is completed. If the SELECTED_FOR_EXECUTION column in this view has the value “selected for execution,” then the policy has indeed been selected for execution and an ADO job will be run to satisfy the ILM policy. However, a job may not be executed if the SELECTED_FOR_EXECUTION column holds any other value.

The following values all mean that the ADO policy hasn’t successfully passed the evaluation:

  • Precondition not satisfied
  • Policy disabled
  • Policy overruled
  • Inherited policy overruled
  • Job already exists
  • No operation since last ILM action
  • Target compression not higher than current
  • Statistics not available
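
As an illustration, a sketch of checking the evaluation outcome for a task (task id 1234 is the assumed id from the earlier example; column names other than SELECTED_FOR_EXECUTION should be verified against your release):

SQL> select policy_name, object_name, selected_for_execution from DBA_ILMEVALUATIONDETAILS where task_id = 1234;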

DBA_ILMRESULTS

This view provides details about completed ILM tasks. It has a STATISTICS column, which is a CLOB that provides job-specific statistics, such as the space saved via compression.

Enabling and disabling ADO policies

You can enable or disable ADO policies at individual table level, or disable ILM completely at database level.

Enabling/disabling ADO policies

You can enable/disable an individual ADO policy on a table:

SQL> alter table demo_user.sales ilm disable policy P25;

Table altered.

SQL> alter table demo_user.sales ilm enable policy P25;

Table altered.

You can also enable/disable all policies on a table

SQL> alter table demo_user.sales ilm disable_all;

Table altered.

SQL> alter table demo_user.sales ilm enable_all;

Table altered.

Deleting ADO policies

You can delete an individual policy on a table using the policy name:

SQL> alter table demo_user.sales ilm delete policy P24;

Table altered.

If you want to delete all policies on a table, you can use delete_all

SQL> alter table demo_user.sales ilm delete_all;

Table altered.

Enable/disable ILM

You can disable ILM completely in the database using the DBMS_ILM_ADMIN.DISABLE_ILM procedure. This disables ILM but keeps all policies intact; those policies will not be executed until we enable ILM again.

SQL> exec DBMS_ILM_ADMIN.DISABLE_ILM;

PL/SQL procedure successfully completed.

SQL> exec DBMS_ILM_ADMIN.ENABLE_ILM;

PL/SQL procedure successfully completed.

Hope this helps !!

Previewing Backup Restore

Introduction:

This is a short article on RMAN showing how to check whether our backups are really intact and will help us in a critical situation when we have to restore and recover the database.

We have a command option called PREVIEW which we can use with RESTORE DATABASE. This option does not actually restore the datafiles from the backup; it just tells us the backups that would be used, the SCN from which media recovery must start, and the SCN beyond which we need to recover the database.

Let's take a simple example where I have a complete database backup for my database, including archivelogs.

I will run the RESTORE DATABASE PREVIEW command to check whether I can get my database back in case of any issues. Then I will wipe out my database and try to recover from the backup to validate that the PREVIEW option showed correct information.

Validating Backupsets

We can validate that our backupsets are intact and do not have any corruption. We can check the backupsets for both physical and logical corruption and make sure they are good and can be used for restore.

The following RMAN commands will confirm the same.

Physical corruption validation

 


RMAN> backup validate database archivelog all;

Starting backup at 08-FEB-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=272 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=13 device type=DISK
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
...
...
...

Logical corruption validation


RMAN> backup validate check logical database archivelog all;

Starting backup at 08-FEB-16
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00008 name=/u01/app/oracle/oradata/orcl/datafiles/users02.dbf
...
...
...

If the above commands don't report any issues, we can be confident that the backupsets are intact and can be used for restore/recovery.

Checking PREVIEW

We can use the RESTORE DATABASE PREVIEW command in RMAN to get a preview of all the backup sets that would be used for the restore.

This command also gives us the SCN up to which we need to recover the database to get all datafiles out of fuzzy status.


RMAN> restore database preview;

Starting restore at 08-FEB-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=58 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=264 device type=DISK
List of Backup Sets
===================
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
48 Full 209.01M DISK 00:00:43 08-FEB-16
BP Key: 48 Status: AVAILABLE Compressed: YES Tag: TAG20160208T053652
Piece Name: /u01/app/oracle/fast_recovery_area/ORCL/backupset/2016_02_08/o1_mf_nnndf_TAG20160208T053652_ccjvy4y2_.bkp
List of Datafiles in backup set 48
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 2955419 08-FEB-16 /u01/app/oracle/oradata/orcl/datafiles/system01.dbf
4 Full 2955419 08-FEB-16 /u01/app/oracle/oradata/orcl/datafiles/undotbs01.dbf
8 Full 2955419 08-FEB-16 /u01/app/oracle/oradata/orcl/datafiles/users02.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
47 Full 152.79M DISK 00:00:41 08-FEB-16
BP Key: 47 Status: AVAILABLE Compressed: YES Tag: TAG20160208T053652
Piece Name: /u01/app/oracle/fast_recovery_area/ORCL/backupset/2016_02_08/o1_mf_nnndf_TAG20160208T053652_ccjvy4yx_.bkp
List of Datafiles in backup set 47
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
3 Full 2955421 08-FEB-16 /u01/app/oracle/oradata/orcl/datafiles/sysaux01.dbf
6 Full 2955421 08-FEB-16 /u01/app/oracle/oradata/orcl/datafiles/users01.dbf
9 Full 2955421 08-FEB-16 /u01/app/oracle/oradata/orcl/datafiles/users03.dbf

...
...
...
List of Backup Sets
===================
BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
54 3.97M DISK 00:00:01 08-FEB-16
BP Key: 54 Status: AVAILABLE Compressed: YES Tag: TAG20160208T055120
Piece Name: /u01/app/oracle/fast_recovery_area/ORCL/backupset/2016_02_08/o1_mf_annnn_TAG20160208T055120_ccjws8m9_.bkp

List of Archived Logs in backup set 54
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------
1 43 2955401 08-FEB-16 2956061 08-FEB-16

BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
55 80.50K DISK 00:00:00 08-FEB-16
BP Key: 55 Status: AVAILABLE Compressed: YES Tag: TAG20160208T055120
Piece Name: /u01/app/oracle/fast_recovery_area/ORCL/backupset/2016_02_08/o1_mf_annnn_TAG20160208T055120_ccjws9xm_.bkp

List of Archived Logs in backup set 55
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------
1 44 2956061 08-FEB-16 2956142 08-FEB-16
1 45 2956142 08-FEB-16 2956308 08-FEB-16
RMAN-05119: recovery can not be done to a consistent state.
Media recovery start SCN is 2955419
Recovery must be done beyond SCN 2955603 to clear datafile fuzziness
Finished restore at 08-FEB-16

If you check the last few lines, they show the media recovery start SCN – 2955419 – and that recovery must be done beyond SCN 2955603 to clear datafile fuzziness.

Starting from SCN 2955419 we need to apply archive logs. We can identify the log sequence in which this SCN falls using the SQL below:


SQL> select sequence#, first_change#, next_change# from v$archived_log
     where FIRST_CHANGE# <= 2955419 and NEXT_CHANGE# >= 2955419;

SEQUENCE#  FIRST_CHANGE# NEXT_CHANGE#
---------- ------------- ------------
43         2955401       2956061

So we will need archives from sequence# 43 and we need to recover beyond SCN 2955603.

Trying restore/recover

Let's say a few (or all) of the datafiles of the database are missing. I tried to bounce the DB and got the following error:


SQL> startup
ORACLE instance started.

Total System Global Area 943718400 bytes
Fixed Size 2931136 bytes
Variable Size 348128832 bytes
Database Buffers 587202560 bytes
Redo Buffers 5455872 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: '/u01/app/oracle/oradata/orcl/datafiles/system01.dbf'

Let's try to restore:


[oracle@advait ~]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Feb 8 06:56:26 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1429382412, not open)

RMAN> restore database;

Starting restore at 08-FEB-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=248 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=13 device type=DISK

skipping datafile 5; already restored to file /u01/app/oracle/oradata/orcl/pdbseed/system01.dbf
skipping datafile 7; already restored to file /u01/app/oracle/oradata/orcl/pdbseed/sysaux01.dbf
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00003 to /u01/app/oracle/oradata/orcl/datafiles/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00006 to /u01/app/oracle/oradata/orcl/datafiles/users01.dbf
channel ORA_DISK_1: restoring datafile 00009 to /u01/app/oracle/oradata/orcl/datafiles/users03.dbf
...
...
...
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:01:01
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/ORCL/2B0D04DCEFAF55F5E0531438A8C0EDC4/backupset/2016_02_08/o1_mf_nnndf_TAG20160208T053652_ccjvzlmt_.bkp tag=TAG20160208T053652
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:35
Finished restore at 08-FEB-16

RMAN>

Once the restore is complete, let's try to recover:


SQL> recover database;
ORA-00279: change 2955419 generated at 02/08/2016 05:36:52 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/fast_recovery_area/ORCL/archivelog/2016_02_08/o1_mf_1_43_%u_.arc
ORA-00280: change 2955419 for thread 1 is in sequence #43
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}

As expected, it is asking for recovery starting from sequence# 43.

We can restore the required archivelogs as well


RMAN> restore archivelog from sequence 43;

Starting restore at 08-FEB-16
using channel ORA_DISK_1
using channel ORA_DISK_2

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=43
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/ORCL/backupset/2016_02_08/o1_mf_annnn_TAG20160208T055120_ccjws8m9_.bkp
...
...
...
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:01
Finished restore at 08-FEB-16

RMAN>

Once the archives are restored, we can carry on with the recovery:


SQL> recover database
ORA-00279: change 2955419 generated at 02/08/2016 05:36:52 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/fast_recovery_area/ORCL/archivelog/2016_02_08/o1_mf_1_43_cck2fwv
k_.arc
ORA-00280: change 2955419 for thread 1 is in sequence #43
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/u01/app/oracle/fast_recovery_area/ORCL/archivelog/2016_02_08/o1_mf_1_43_cck2fwvk_.arc
Log applied.
Media recovery complete.
SQL> alter database open;

Database altered.

SQL>

In this case recovery only needed sequence# 43, because the NEXT_CHANGE# for sequence# 43 was 2956061, which is beyond SCN 2955603, the point required to clear the fuzzy state. The rest of the redo information was also present in the online redo logs, so the database applied it and opened cleanly.

Hope this helps !!