Book on Oracle 12c New features

A Book on Oracle 12c New features for administrators

Hi All,

It’s our pleasure to announce the availability of our first book – OCP 12c upgrade 1Z0-060 Exam Guide

 


The book covers the new features of Oracle Database 12c for DBAs from the OCP 12c upgrade exam standpoint. It is available worldwide in Kindle format as well as paperback.

Link to Amazon.com – https://www.amazon.com/OCP-Upgrade-1Z0-060-Exam-Guide-ebook/dp/B01N3XXYYC/ref=sr_1_1?ie=UTF8&qid=1481109922&sr=8-1&keywords=B01N3XXYYC

Link to Amazon.in – https://www.amazon.in/OCP-Upgrade-1Z0-060-Exam-Guide-ebook/dp/B01N3XXYYC/ref=sr_1_1?ie=UTF8&qid=1481221495&sr=8-1&keywords=B01N3XXYYC

Link to Packt Publishing – https://www.packtpub.com/big-data-and-business-intelligence/ocp-12c-upgrade-1z0-060-exam-guide

This book covers all the new features of the Oracle 12.1.0.1 database. So, if you want to upgrade your knowledge of Oracle 12c, this book certainly serves the purpose, and it is an ideal book for DBAs aspiring to the OCP 12c certification.

Feel free to post your comments about this book on the Amazon.com page, or you can post your comments below.

Following are the objectives of the chapters covered in this book; they are fully in line with the Oracle Database 12c exam syllabus.

Section 1 – Oracle 12c new features

Chapter 1 – Getting Started with Oracle 12c – This chapter introduces some of the new features related to the installation and configuration of Oracle 12c, including Database Express.
Chapter 2 – Multitenant Container Database (CDB) Architecture – This chapter introduces you to the Oracle 12c multitenant architecture. It covers the different components of the multitenant architecture, how to create and configure a multitenant database, and the different ways to create a pluggable database (PDB).
Chapter 3 – Managing CDBs and PDBs – This chapter covers establishing connections to container and pluggable databases, performing general administration tasks, and managing tablespaces, users, roles, and privileges in CDBs and PDBs.
Chapter 4 – Information Life Cycle Management and Storage Enhancements – This chapter introduces you to information life cycle management and how it has been remarkably automated in Oracle 12c.
Chapter 5 – Auditing, Privileges and Data Redaction – As the title says, this chapter covers the new unified auditing feature of Oracle 12c, along with the new privilege analysis tool and, finally, the new data redaction feature.
Chapter 6 – Database Tuning and Troubleshooting – In this chapter we learn about the new features related to performance tuning. It covers the new optimizer behavior in Oracle 12c, including explanations of new features such as adaptive SQL plans, dynamic statistics, and SQL plan directives. We also look at improvements to statistics gathering, followed by enhancements to histograms.
Chapter 7 – Backup and Flashback – This chapter covers backup and recovery scenarios and how the multitenant architecture affects them. We also look into some of the RMAN enhancements in Oracle 12c, followed by table restore and recovery using a simple RMAN command.
Chapter 8 – Database Administration Enhancements – In this chapter we look into Resource Manager and how it works with the Oracle 12c multitenant database.
Chapter 9 – Miscellaneous New Features – This chapter bundles the other new features introduced in Oracle 12c, including enhancements related to Oracle Data Pump, SQL*Loader, online operations, and more. It also covers the new partitioning enhancements and the new Top-N SQL clauses in Oracle 12c.

Section 2 – General Administration

Chapter 10 – Core Administration – This chapter is more about general administration; we explain the fundamentals of database administration.
Chapter 11 – Performance Management – This chapter discusses designing the database for optimal performance, monitoring performance in order to improve it, analyzing and identifying performance issues, and performing Real Application Testing to analyze performance.
Chapter 12 – Storage – In this chapter we introduce the logical and physical structures of the database and how they differ. It also covers ASM and how it can improve storage management.
Chapter 13 – Security – This chapter covers developing and implementing a robust security policy for the Oracle database, auditing actions in the database, creating a password file, and authenticating privileged users remotely using the password file.

Regards,
Advait & Indira
Authors – OCP 12c upgrade 1Z0-060 Exam Guide

Adaptive Cursor Sharing

Adaptive cursor sharing was introduced in Oracle 11g Release 1 as a way to generate the best plan for a SQL statement in all situations. Prior to adaptive cursor sharing, the optimizer generated a single plan for a SQL statement, and that plan was used by all cursors of that SQL_ID. So if we have data skewness in a column and that column is used in the “where” clause of the SQL statement, a single plan for that SQL will not be optimal for all values of that column passed to the bind variable.

Let’s take a simple example. We will create a table and add random data to it.

SQL> create table T1
 2 (emp_id number,
 3 dept_id number,
 4 salary number,
 5 gender char(1),
 6 dob date,
 7 address varchar2(60));

Table created.

Let’s add about 10K records to this table:

SQL> insert into T1
     SELECT LEVEL,
            (MOD (ROWNUM, 2)+1)*10,
            TRUNC (DBMS_RANDOM.VALUE (1000, 500000), 2),
            DECODE (ROUND (DBMS_RANDOM.VALUE (1, 2)), 1, 'M', 2, 'F'),
            TO_DATE ( ROUND (DBMS_RANDOM.VALUE (1, 28))
                     || '-'
                     || ROUND (DBMS_RANDOM.VALUE (1, 12))
                     || '-'
                     || ROUND (DBMS_RANDOM.VALUE (1900, 2010)),
                        'DD-MM-YYYY'
                     ),
            DBMS_RANDOM.STRING ('x', DBMS_RANDOM.VALUE (20, 50))
     FROM DUAL
     CONNECT BY LEVEL < 10000;

9999 rows created.

We can make one of the columns, DEPT_ID, skewed by updating the records as below:

SQL> update T1 set dept_id = 10 where rownum <= 9950;

9950 rows updated.

SQL> update T1 set dept_id = 20 where dept_id != 10;

49 rows updated.

SQL> select dept_id, count(1) from T1 group by dept_id;

   DEPT_ID COUNT(1)
---------- ----------
        20 49
        10 9950

So we have skewness for column DEPT_ID now.

Let’s gather statistics on this table with the cascade option so that it gathers statistics on the index as well. Since the data in the indexed column is skewed, this will also generate a histogram.


SQL> exec dbms_stats.gather_table_stats(OWNNAME=>user, TABNAME=>'T1',CASCADE => TRUE , METHOD_OPT => 'FOR ALL INDEXED COLUMNS SIZE 254');

PL/SQL procedure successfully completed.

SQL> select * from dba_histograms where TABLE_NAME = 'T1';

OWNER                          TABLE_NAME                     COLUMN_NAME                    ENDPOINT_NUMBER ENDPOINT_VALUE ENDPOINT_ACTUAL_VALU
------------------------------ ------------------------------ ------------------------------ --------------- -------------- --------------------
ADVAITD_DBA                    T1                             DEPT_ID                                   9950             10
ADVAITD_DBA                    T1                             DEPT_ID                                   9999             20

SQL> select table_name, column_name, histogram from user_tab_cols where table_name = 'T1' and histogram != 'NONE';

TABLE_NAME                     COLUMN_NAME                    HISTOGRAM
------------------------------ ------------------------------ ---------------
T1                             DEPT_ID                        FREQUENCY

We can see it has created a frequency histogram on the indexed column, DEPT_ID.

Adaptive cursor sharing is affected by multiple factors. Following are some of the important factors which affect it:

  • Whether adaptive cursor sharing is enabled
  • The CURSOR_SHARING parameter and the value we set for it
  • SQL plan baselines – whether baselines are enabled, and which baselines for the SQL are in ENABLED=YES and ACCEPTED=YES state
  • OPTIMIZER_INDEX_COST_ADJ – the value this parameter is set to in the database
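Before experimenting, these settings can be reviewed in one place; a quick sketch, assuming a SQL*Plus session with privileges to query the DBA views:

```sql
-- Parameters that influence adaptive cursor sharing
SHOW PARAMETER cursor_sharing
SHOW PARAMETER optimizer_index_cost_adj

-- Baselines that could pin the plan for our test SQL
SELECT sql_handle, plan_name, enabled, accepted
FROM   dba_sql_plan_baselines;
```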

You can check whether adaptive cursor sharing is enabled by checking the hidden parameter _optimizer_adaptive_cursor_sharing:

SQL>@param
Enter Parameter name: _adaptive_cursor_sharing

Parameter                            |Description                                                 |Session Va|Instance V
-------------------------------------|------------------------------------------------------------|----------|----------
_optimizer_adaptive_cursor_sharing   |optimizer adaptive cursor sharing                           |TRUE      |TRUE

Now that we have adaptive cursor sharing enabled, let’s see how it works and the effect of the other factors on it.

How does adaptive cursor sharing work?

So we have a table T1 with ~10K records, and one of its columns, DEPT_ID, has skewed data as seen above: 9950 records with DEPT_ID = 10 and 49 records with DEPT_ID = 20.

Let’s run some queries and check how the optimizer behaves.

Running SQL for DEPT_NO = 20


SQL>exec :dept_no :=20;
SQL>select /*+ GATHER_PLAN_STATISTICS */ count(1), max(dob) from t1 where dept_id = :dept_no;

  COUNT(1)|MAX(DOB)
----------|----------------
        49|2002-09-06:00:00

SQL>select sql_id, child_number, executions, buffer_gets, is_bind_sensitive, is_bind_aware, is_shareable from v$sql where sql_text like '%GATHER_PLAN_STATISTICS%';

SQL_ID       |CHILD_NUMBER|PLAN_HASH_VALUE|EXECUTIONS|BUFFER_GETS|I|I|I
-------------|------------|---------------|----------|-----------|-|-|-
btj8b1hx3cf0g|           0|     3776485839|         1|         58|Y|N|Y

Pay attention to the columns IS_BIND_SENSITIVE, IS_BIND_AWARE, and IS_SHAREABLE. IS_BIND_SENSITIVE tells whether the output of the query is sensitive to the values of the bind variable – that is, if we change the value of the bind variable, will the output change significantly?

In the above case, the optimizer is showing IS_BIND_SENSITIVE=Y based on the histogram present on the column used in the “where” clause.

As per the Oracle documentation: IS_BIND_SENSITIVE VARCHAR2(1) – Indicates whether the cursor is bind sensitive (Y) or not (N). A query is considered bind-sensitive if the optimizer peeked at one of its bind variable values when computing predicate selectivities and a change in a bind variable value may cause the optimizer to generate a different plan.

The plan it is currently using is an index range scan:

select /*+ GATHER_PLAN_STATISTICS */ count(:"SYS_B_0"), max(dob) from
t1 where dept_id = :dept_no

Plan hash value: 3776485839

---------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |              |       |       |     1 (100)|          |
|   1 |  SORT AGGREGATE              |              |     1 |    12 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1           |    49 |   588 |     1   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | I_T1_DEPT_ID |    49 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("DEPT_ID"=:DEPT_NO)

Let’s run the query again, but this time with the bind value 10, which matches 9950 records.

Running SQL for DEPT_NO = 10


SQL>exec :dept_no := 10;
SQL>select /*+ GATHER_PLAN_STATISTICS */ count(1), max(dob) from t1 where dept_id = :dept_no;

  COUNT(1)|MAX(DOB)
----------|----------------
      9950|2010-12-27:00:00

SQL>select sql_id, child_number, plan_hash_value , executions, buffer_gets, is_bind_sensitive, is_bind_aware,IS_SHAREABLE from v$sql where sql_text like '%GATHER_PLAN_STATISTICS%';

SQL_ID       |CHILD_NUMBER|PLAN_HASH_VALUE|EXECUTIONS|BUFFER_GETS|I|I|I
-------------|------------|---------------|----------|-----------|-|-|-
btj8b1hx3cf0g|           0|     3776485839|         2|        169|Y|N|Y

You can see we now have 2 executions, but the same cursor is being used. During the second run we used a different bind value, and the optimizer became aware that this bind value actually fetches a very different number of records and that the plan is expensive.

So even though it didn’t generate a new cursor and used the same plan as before, it will generate a new cursor when we run this again with DEPT_NO = 10, and hopefully it will also use a new plan (probably a full table scan).

Let’s run this query again with the same DEPT_NO = 10.

Running SQL for DEPT_NO = 10


SQL>exec :dept_no := 10;
SQL>select /*+ GATHER_PLAN_STATISTICS */ count(1), max(dob) from t1 where dept_id = :dept_no;

  COUNT(1)|MAX(DOB)
----------|----------------
      9950|2010-12-27:00:00

SQL>select sql_id, child_number, plan_hash_value , executions, buffer_gets, is_bind_sensitive, is_bind_aware,IS_SHAREABLE from v$sql where sql_text like '%GATHER_PLAN_STATISTICS%';

SQL_ID       |CHILD_NUMBER|PLAN_HASH_VALUE|EXECUTIONS|BUFFER_GETS|I|I|I
-------------|------------|---------------|----------|-----------|-|-|-
btj8b1hx3cf0g|           0|     3776485839|         2|        169|Y|N|N
btj8b1hx3cf0g|           1|     3724264953|         1|         96|Y|Y|Y

Now you can see that it has created a new cursor with a new plan, and IS_BIND_AWARE is Y for the new cursor. At the same time, cursor 0 has become non-shareable (the IS_SHAREABLE column shows ‘N’). Since the optimizer knows this query is bind sensitive, it will create new cursors that are bind aware and will no longer use the initial cursor, which was not bind aware.

As per the Oracle documentation: IS_BIND_AWARE VARCHAR2(1) – Indicates whether the cursor is bind aware (Y) or not (N). A query is considered bind-aware if it has been marked to use extended cursor sharing. Such a query will already have been marked as bind-sensitive.

Sometimes the optimizer just creates a new cursor with the same plan, and it may go for an index range scan even when selecting the majority of rows from the table. This happened to me when I was trying this example. In such cases, the cost of the plan is influenced by the OPTIMIZER_INDEX_COST_ADJ parameter. In my case this parameter was set to its lowest value, 1, so the optimizer went for an index range scan in all cases. A lower value of this parameter represents a lower cost for index access and so favors the use of indexes. I set the value of this parameter to 50, and after that it went for a full table scan.
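The experiment described above can be reproduced at session level; a sketch, assuming the same T1 table and :dept_no bind variable from this example:

```sql
-- Default is 100; the lower the value, the cheaper index access appears.
-- A session-level change is enough for this experiment.
ALTER SESSION SET optimizer_index_cost_adj = 50;

-- Re-run the test query; a new bind-aware child cursor created after this
-- point should now prefer the full table scan for the unselective value.
SELECT /*+ GATHER_PLAN_STATISTICS */ COUNT(1), MAX(dob)
FROM   t1
WHERE  dept_id = :dept_no;
```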

So if you check the plan for cursor 1, it shows a full table scan, as we are selecting over 99% of the rows in the table (DEPT_NO = 10 fetches 9950 rows).

SQL>select * from table(dbms_xplan.display_cursor('btj8b1hx3cf0g',1));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  btj8b1hx3cf0g, child number 1
-------------------------------------
select /*+ GATHER_PLAN_STATISTICS */ count(:"SYS_B_0"), max(dob) from
t1 where dept_id = :dept_no

Plan hash value: 3724264953

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |    27 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |    12 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |  9950 |   116K|    27   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DEPT_ID"=:DEPT_NO)

At this point, if we go for another execution with DEPT_NO = 20, it creates yet another cursor, which is also bind aware.

Running SQL for DEPT_NO = 20

SQL>exec :dept_no := 20;
SQL>select /*+ GATHER_PLAN_STATISTICS */ count(1), max(dob) from t1 where dept_id = :dept_no;

  COUNT(1)|MAX(DOB)
----------|----------------
        49|2002-09-06:00:00

SQL>select sql_id, child_number, plan_hash_value , executions, buffer_gets, is_bind_sensitive, is_bind_aware,IS_SHAREABLE from v$sql where sql_text like '%GATHER_PLAN_STATISTICS%';

SQL_ID       |CHILD_NUMBER|PLAN_HASH_VALUE|EXECUTIONS|BUFFER_GETS|I|I|I
-------------|------------|---------------|----------|-----------|-|-|-
btj8b1hx3cf0g|           0|     3776485839|         2|        169|Y|N|N
btj8b1hx3cf0g|           1|     3724264953|         1|         96|Y|Y|Y
btj8b1hx3cf0g|           2|     3776485839|         1|          3|Y|Y|Y

As we can see, it created cursor 2, which is bind aware. So cursor 0 is no longer shareable and will be aged out of the shared pool after some time, but cursors 1 and 2 are shareable and will be used by further executions.
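The selectivity ranges that adaptive cursor sharing recorded for each child cursor can be inspected through the V$SQL_CS_* views; a sketch, using the SQL_ID from this example:

```sql
-- Selectivity range tracked for each bind-aware child cursor
SELECT child_number, predicate, low, high
FROM   v$sql_cs_selectivity
WHERE  sql_id = 'btj8b1hx3cf0g';

-- Peeked bind sets and per-cursor execution statistics
SELECT child_number, peeked, executions, buffer_gets
FROM   v$sql_cs_statistics
WHERE  sql_id = 'btj8b1hx3cf0g';
```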

Effect of bind peeking on adaptive cursor sharing

One of the important prerequisites for adaptive cursor sharing is that bind peeking must be enabled in the database. Without bind peeking, adaptive cursor sharing will not work.

Initially, in my environment, bind peeking was not enabled, so the optimizer had no visibility into the values being passed to the bind variables. In that case the optimizer will not generate new cursors even when adaptive cursor sharing is enabled. It does not even mark the cursor as bind sensitive, as it has no visibility into the bind values.

As you can see below, the same cursor was used for multiple executions.


SQL>select sql_id, child_number, executions, buffer_gets, is_bind_sensitive, is_bind_aware from v$sql where sql_text like '%test_sql%';

SQL_ID       |CHILD_NUMBER|EXECUTIONS|BUFFER_GETS|I|I
-------------|------------|----------|-----------|-|-
2bxcmkhms19jn|           0|        12|        220|N|N

If we check bind peeking, it was turned off:

SQL>@param
Enter Parameter name: peek

Parameter                            |Description                                                 |Session Va|Instance V
-------------------------------------|------------------------------------------------------------|----------|----------
_optim_peek_user_binds               |enable peeking of user binds                                |FALSE     |FALSE
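If bind peeking was explicitly disabled like this, it can be switched back on. Note that _optim_peek_user_binds is a hidden parameter that defaults to TRUE, so normally no action is needed, and changing hidden parameters is best done under Oracle Support guidance; this is just a sketch:

```sql
-- Re-enable bind peeking (defaults to TRUE; only needed if it was
-- explicitly disabled). Hidden parameters require the quoted syntax.
ALTER SYSTEM SET "_optim_peek_user_binds" = TRUE SCOPE = BOTH;
```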

Effect of CURSOR_SHARING parameter on adaptive cursor sharing

Cursor sharing plays an important role when we are using adaptive cursor sharing. The CURSOR_SHARING parameter determines whether a cursor can be shared or a new cursor should be created. This was discussed very well at https://blogs.oracle.com/optimizer/entry/explain_adaptive_cursor_sharing_behavior_with_cursor_sharing_similar_and_force

But here is a brief overview of how CURSOR_SHARING plays a role.
Until 11g R2, we had 3 possible values for the CURSOR_SHARING parameter:

  • EXACT: The optimizer will not replace the literals used in the statement with system-generated bind variables. So if you are using literals, each statement is considered different depending on the literal values used.
  • SIMILAR: The optimizer will replace literals with system-generated bind variables only if there is no histogram on the column the literal is applied to. If a histogram exists, the optimizer will NOT replace the literal with a bind variable.
  • FORCE: The optimizer will replace all literals with system-generated bind variables, irrespective of whether a histogram is present on the column.
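The difference between the values can be seen with a pair of literal queries against our T1 table; a sketch:

```sql
-- With EXACT, each distinct literal produces its own parent cursor
ALTER SESSION SET cursor_sharing = EXACT;
SELECT COUNT(1) FROM t1 WHERE dept_id = 10;
SELECT COUNT(1) FROM t1 WHERE dept_id = 20;  -- separate cursor

-- With FORCE, both literals are rewritten to a system bind (:"SYS_B_0"),
-- the statements share one parent cursor, and adaptive cursor sharing
-- can then kick in on that bind
ALTER SESSION SET cursor_sharing = FORCE;
SELECT COUNT(1) FROM t1 WHERE dept_id = 10;
SELECT COUNT(1) FROM t1 WHERE dept_id = 20;
```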

Example: we have the following statement with a literal:

select /*+ GATHER_PLAN_STATISTICS */ count(1), max(dob) from t1 where dept_id = 10;

This is the same statement we used above, and the optimizer knows that the DEPT_ID column is skewed and has a histogram defined on it. Because of the histogram, the optimizer knows the data is skewed and that replacing the literal value with a system-generated bind variable could lead to a suboptimal plan.

So if we use CURSOR_SHARING = SIMILAR and we have statements like the above, using literal values on columns with histograms, the optimizer will not replace the literals with bind variables.

In such cases adaptive cursor sharing will not kick in, as no bind variables are present in the query.

For adaptive cursor sharing to work in this situation, we would need to drop the histograms on the columns where literal values are used. This has its own problems, as the optimizer can then no longer detect the data skewness.

This is one of the main reasons why the SIMILAR value of the CURSOR_SHARING parameter is deprecated in Oracle 12c. Oracle encourages using only FORCE and EXACT going forward.

Effect of baseline on adaptive cursor sharing

This is one of the things covered in good detail at https://blogs.oracle.com/optimizer/entry/how_do_adaptive_cursor_sharing. I am covering it briefly for the sake of completeness.

A baseline has a different purpose than adaptive cursor sharing. While adaptive cursor sharing deals with using the right cursor and plan for a SQL statement depending on the bind values passed and the skewness of the data, the purpose of a baseline is to provide plan stability and avoid using new, unverified plans.

Adaptive cursor sharing kicks in before SQL plan management. When the statement is executed, adaptive cursor sharing peeks at the bind values and checks the skewness of the data. Based on these inputs, it checks whether an existing cursor and plan can be used or whether a new, better plan can be derived. Adaptive cursor sharing can tell the optimizer that a new plan or cursor needs to be created, and then control moves on to the next part of the optimizer. So adaptive cursor sharing only suggests that a new cursor or plan would be better, and provides all the required data.

Now, if we already have a SQL plan baseline enabled for this SQL, it will prevent the optimizer from using a new plan. The optimizer can still create a new plan and save it in the history of the SQL management base. This new plan will be ENABLED but NOT ACCEPTED. Either the DBA has to manually accept the plan, or we have to wait for the “sql tuning advisor” auto task to kick in and accept the plan if it is verified as good. Only after the new plan is accepted will it be used for the SQL.
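Accepting such an unaccepted plan manually can be done through the DBMS_SPM package; a sketch, using the SQL handle from this example (EVOLVE_SQL_PLAN_BASELINE verifies the plan and accepts it only if it performs better):

```sql
-- Verify and, if it performs better, accept the unaccepted plan(s)
-- recorded for this SQL handle (from DBA_SQL_PLAN_BASELINES)
VARIABLE report CLOB
BEGIN
  :report := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(
               sql_handle => 'SQL_fe3bdbdf0b00f126',
               verify     => 'YES',
               commit     => 'YES');
END;
/
PRINT report
```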

In my case, I had a baseline created for the SQL initially.


SQL>exec :dept_no :=20;
SQL>select /*+ GATHER_PLAN_STATISTICS */ count(1), max(dob) from t1 where dept_id = :dept_no;

  COUNT(1)|MAX(DOB)
----------|----------------
        49|2002-09-06:00:00


If you check the explain plan for this SQL

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  btj8b1hx3cf0g, child number 0
-------------------------------------
select /*+ GATHER_PLAN_STATISTICS */ count(:"SYS_B_0"), max(dob) from
t1 where dept_id = :dept_no

Plan hash value: 3776485839

---------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |              |       |       |     1 (100)|          |
|   1 |  SORT AGGREGATE              |              |     1 |    12 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1           |    49 |   588 |     1   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | I_T1_DEPT_ID |    49 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("DEPT_ID"=:DEPT_NO)

Note
-----
   - SQL plan baseline SQL_PLAN_gwfyvvw5h1w96e68c0740 used for this statement

We can see in the Note section that the baseline is being used. We can also see the details of the baseline in the DBA_SQL_PLAN_BASELINES view.


SQL>select sql_handle, plan_name from dba_sql_plan_baselines where signature = (select exact_matching_signature from v$sql where sql_id = 'btj8b1hx3cf0g');

SQL_HANDLE                    |PLAN_NAME
------------------------------|------------------------------
SQL_fe3bdbdf0b00f126          |SQL_PLAN_gwfyvvw5h1w96e68c0740

Even after running the SQL multiple times with different bind values, we do not get any new cursors. Also, the cursor is not marked as bind sensitive, as shown in the output below.


SQL>select sql_id, child_number, plan_hash_value , executions, buffer_gets, is_bind_sensitive, is_bind_aware,IS_SHAREABLE from v$sql where sql_text like '%GATHER_PLAN_STATISTICS%';

SQL_ID       |CHILD_NUMBER|PLAN_HASH_VALUE|EXECUTIONS|BUFFER_GETS|I|I|I
-------------|------------|---------------|----------|-----------|-|-|-
btj8b1hx3cf0g|           0|     3776485839|         5|        339|N|N|Y

So if we have a SQL plan baseline enabled for a SQL and only one baseline plan is ACCEPTED, we will not see the advantages of adaptive cursor sharing unless multiple baseline plans are ACCEPTED, so that the optimizer can use a different plan as suggested by adaptive cursor sharing.

This covers most of the situations for using adaptive cursor sharing.

Hope this helps !!

Reference

https://blogs.oracle.com/optimizer/entry/how_do_adaptive_cursor_sharing

https://blogs.oracle.com/optimizer/entry/update_on_adaptive_cursor_sharing

https://docs.oracle.com/database/121/REFRN/GUID-455358F8-D657-49A2-B32B-13A1DC53E7D2.htm#REFRN10025

https://blogs.oracle.com/optimizer/entry/explain_adaptive_cursor_sharing_behavior_with_cursor_sharing_similar_and_force

Oracle 12c Learning Series: Automatic Table Recovery Using RMAN

Table recovery was possible in earlier releases as well. Until the previous release, if we wanted to recover a table, we had the following options:

  • Database point in time recovery (DBPITR)
  • Tablespace point in time recovery (TSPITR)
  • Flashback technology

In Oracle 12c, RMAN has been enhanced to perform recovery of individual tables. There is a new command in RMAN that automates the complete process of recovering a table. The new process does not affect the existing objects in the database and reduces the time and space required for table recovery.

How does table recovery work?

  • RMAN uses previously taken database backups to recover a table or table partitions to a specified point in time. You need to provide the following inputs to RMAN:
    • Table name or partition names to be recovered
    • Point in time until which the recovery should be done
    • Whether recovered tables should be imported back to original database
  • RMAN determines the correct backup based on your inputs
  • RMAN creates an auxiliary instance
  • RMAN restores the controlfile
  • RMAN restores the necessary files required for obtaining the older image of the table. This may include the SYSTEM, SYSAUX, and UNDO tablespaces and the tablespace that contains the required table.
  • RMAN recovers the table or table partitions in the auxiliary instance until the specified time
  • RMAN creates a Data Pump export dump file that contains the recovered objects
  • RMAN imports the recovered objects into the original database

Recovery Point-in-Time Options

You can recover a table or table partition to a past point in time by specifying one of the following three clauses:

  • UNTIL SCN – The system change number (SCN)
  • UNTIL TIME – A time, in the format specified by the NLS_DATE_FORMAT environment variable
  • UNTIL SEQUENCE – The log sequence number and thread number
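For illustration, the clauses look like this in RMAN (the timestamp and the remapped table name below are hypothetical; REMAP TABLE imports the recovered table under a new name so the existing one is untouched):

```sql
RMAN> RECOVER TABLE advaitd.t1
      UNTIL SCN 1243251
      AUXILIARY DESTINATION '/u01/app/oracle/oradata/aux';

RMAN> RECOVER TABLE advaitd.t1
      UNTIL TIME "TO_DATE('26-07-2016 10:00:00','DD-MM-YYYY HH24:MI:SS')"
      AUXILIARY DESTINATION '/u01/app/oracle/oradata/aux'
      REMAP TABLE advaitd.t1:t1_recovered;
```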

Steps for performing table recovery

Let’s take an example. I have a table T1 created in the DEO database.

 

[oracle@advait ~]$ sqlplus advaitd/oracle@deo

SQL> select * from T1;

      COL1
----------
         1
         2
         3

We note down the current SCN while the table is still available:


SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    1243251

Following are the datafiles in my database:


SQL> select file#, name, con_id from v$datafile order by 3,1;  

     FILE# NAME                                                                                 CON_ID
---------- -------------------------------------------------------------------------------- ----------
         1 /u01/app/oracle/oradata/orcl/orcl/datafiles/system-01.dbf                                 0
         2 /u01/app/oracle/oradata/orcl/orcl/datafiles/sysaux-01.dbf                                 0
         3 /u01/app/oracle/oradata/orcl/orcl/datafiles/undotbs1-01.dbf                               0
         4 /u01/app/oracle/oradata/orcl/orcl/users.dbf                                               0
         5 /u01/app/oracle/oradata/orcl/orcl/datafiles/mgmt_depot.dbf                                0
         6 /u01/app/oracle/oradata/orcl/orcl/datafiles/mgmt.dbf                                      0
         7 /u01/app/oracle/oradata/orcl/orcl/datafiles/mgmt_ad4j.dbf                                 0

Let’s drop the table T1:


SQL> drop table T1;

Table dropped.

We can use the command below to recover this table. Note that the SCN we are specifying is the SCN we noted down while the table still existed.

RMAN> recover table advaitd.t1 until scn 1243251 auxiliary destination '/u01/app/oracle/oradata/aux';

Let’s see what this command does exactly.

Step 1) The command creates and mounts a new ad hoc dummy instance, as shown below

 


RMAN> recover table advaitd.t1 until scn 1243251 auxiliary destination '/u01/app/oracle/oradata/aux';

Starting recover at 26-JUL-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=107 device type=DISK
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified Point-in-Time

List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1

Creating automatic instance, with SID='jCvj'

It uses the controlfile autobackup to restore the controlfile needed to mount the dummy instance.

Step 2) It then restores the controlfile for the auxiliary instance

The following memory script shows the commands for restoring the controlfile:


contents of Memory Script:
{
# set requested point in time
set until  scn 1243251;
# restore the controlfile
restore clone controlfile;
 
# mount the controlfile
sql clone 'alter database mount clone database';
 
# archive current online log 
sql 'alter system archive log current';
}

Step 3) Next, it restores the tablespaces required to perform the point-in-time recovery

The following memory script shows that it is going to restore the required datafiles (until SCN 1243251) into the auxiliary destination and switch the datafiles to the copies.
When we use UNTIL SCN, RMAN restores datafiles that are slightly older than this SCN.


contents of Memory Script:
{
# set requested point in time
set until  scn 1243251;
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile  1 to new;
set newname for clone datafile  3 to new;
set newname for clone datafile  2 to new;
set newname for clone tempfile  1 to new;
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  1, 3, 2;
 
switch clone datafile all;
}

Step 4) Recover the datafiles until the specified SCN

The following memory script shows that RMAN recovers these datafiles until the specified SCN. Once recovered, it opens the database in read-only mode.


contents of Memory Script:
{
# set requested point in time
set until  scn 1243251;
# online the datafiles restored or switched
sql clone "alter database datafile  1 online";
sql clone "alter database datafile  3 online";
sql clone "alter database datafile  2 online";
# recover and open database read only
recover clone database tablespace  "SYSTEM", "UNDOTBS1", "SYSAUX";
sql clone 'alter database open read only';
}

Step 5) Create SPFILE for auxiliary instance and mount the DB again

The following memory script shows that RMAN creates an SPFILE for the auxiliary instance, sets the CONTROL_FILES parameter in it, and mounts the auxiliary instance again.


contents of Memory Script:
{
   sql clone "create spfile from memory";
   shutdown clone immediate;
   startup clone nomount;
   sql clone "alter system set  control_files = 
  ''/u01/app/oracle/oradata/aux/DEO/controlfile/o1_mf_ct8t8hyq_.ctl'' comment=
 ''RMAN set'' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
# mount database
sql clone 'alter database mount clone database';
}

Step 6) Restore the USERS tablespace, to which the table belongs

The following memory script shows that RMAN restores the USERS tablespace (datafile 4) and switches to the restored copy.


contents of Memory Script:
{
# set requested point in time
set until  scn 1243251;
# set destinations for recovery set and auxiliary set datafiles
set newname for datafile  4 to new;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  4;
 
switch clone datafile all;
}

Step 7) Recover the auxiliary database until the required SCN

The following memory script shows that RMAN recovers the USERS tablespace, along with the tablespaces restored earlier, up to the required SCN. It then opens the database with RESETLOGS.


contents of Memory Script:
{
# set requested point in time
set until  scn 1243251;
# online the datafiles restored or switched
sql clone "alter database datafile  4 online";
# recover and open resetlogs
recover clone database tablespace  "USERS", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}

Step 8) Create directory for datapump export and export the required table

The following memory script shows that RMAN creates a directory object for the Data Pump export and then exports the required table.


contents of Memory Script:
{
# create directory for datapump import
sql "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/u01/app/oracle/oradata/aux''";
# create directory for datapump export
sql clone "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/u01/app/oracle/oradata/aux''";
}
executing Memory Script

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/u01/app/oracle/oradata/aux''

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/u01/app/oracle/oradata/aux''

Performing export of tables...
   EXPDP> Starting "SYS"."TSPITR_EXP_jCvj_rjBm":  
   EXPDP> Estimate in progress using BLOCKS method...
   EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   EXPDP> Total estimation using BLOCKS method: 64 KB
   EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   EXPDP> . . exported "ADVAITD"."T1"                              5.062 KB       3 rows
   EXPDP> Master table "SYS"."TSPITR_EXP_jCvj_rjBm" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TSPITR_EXP_jCvj_rjBm is:
   EXPDP>   /u01/app/oracle/oradata/aux/tspitr_jCvj_52000.dmp
   EXPDP> Job "SYS"."TSPITR_EXP_jCvj_rjBm" successfully completed at Tue Jul 26 19:55:13 2016 elapsed 0 00:00:21
Export completed

Step 9) Importing the table to original database

The following memory script shows that RMAN imports the table into the original database. You can provide additional options to prevent RMAN from importing the table, or you can provide import options to remap the table to a different name or append the content to an existing table.


contents of Memory Script:
{
# shutdown clone before import
shutdown clone abort
}
executing Memory Script

Oracle instance shut down

Performing import of tables...
   IMPDP> Master table "SYS"."TSPITR_IMP_jCvj_tCzd" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_jCvj_tCzd":  
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   IMPDP> . . imported "ADVAITD"."T1"                              5.062 KB       3 rows
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   IMPDP> Job "SYS"."TSPITR_IMP_jCvj_tCzd" successfully completed at Tue Jul 26 19:55:24 2016 elapsed 0 00:00:04
Import completed
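As a sketch of the options mentioned above, the original RECOVER TABLE command could be varied as follows (the SCN and paths reuse the values from this example; the remapped table name is hypothetical):

```
# Import the recovered table under a different name
RMAN> recover table advaitd.t1 until scn 1243251
      auxiliary destination '/u01/app/oracle/oradata/aux'
      remap table advaitd.t1:t1_recovered;

# Keep only the Data Pump dump file and skip the import entirely
RMAN> recover table advaitd.t1 until scn 1243251
      auxiliary destination '/u01/app/oracle/oradata/aux'
      notableimport;
```

With NOTABLEIMPORT, you can later import the dump file manually with impdp, for example using its own REMAP_TABLE or TABLE_EXISTS_ACTION=APPEND options.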

Step 10) Finally, RMAN will clean up and remove the auxiliary instance


Removing automatic instance
Automatic instance removed
auxiliary instance file /u01/app/oracle/oradata/aux/DEO/datafile/o1_mf_temp_csgwmp8s_.tmp deleted
auxiliary instance file /u01/app/oracle/oradata/aux/JCVJ_PITR_DEO/onlinelog/o1_mf_2_csgwo8fy_.log deleted
auxiliary instance file /u01/app/oracle/oradata/aux/JCVJ_PITR_DEO/onlinelog/o1_mf_1_csgwo7ht_.log deleted
auxiliary instance file /u01/app/oracle/oradata/aux/JCVJ_PITR_DEO/datafile/o1_mf_users_csgwo53j_.dbf deleted
auxiliary instance file /u01/app/oracle/oradata/aux/DEO/datafile/o1_mf_sysaux_csgwkm4b_.dbf deleted
auxiliary instance file /u01/app/oracle/oradata/aux/DEO/datafile/o1_mf_undotbs1_csgwkm4f_.dbf deleted
auxiliary instance file /u01/app/oracle/oradata/aux/DEO/datafile/o1_mf_system_csgwkm2x_.dbf deleted
auxiliary instance file /u01/app/oracle/oradata/aux/DEO/controlfile/o1_mf_csgwkf6f_.ctl deleted
auxiliary instance file tspitr_jCvj_52000.dmp deleted
Finished recover at 26-JUL-16

RMAN> 

You can verify whether the required table has been recovered:


SQL> select * from T1;

      COL1
----------
         1
         2
         3

As you can see, RMAN performs all these steps automatically once we execute a single RECOVER TABLE command.

This was a small test database, but for production databases we have to make sure the auxiliary destination has enough space available for restoring the required tablespaces; otherwise the table recovery will fail.
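As a rough sketch of that space check (assuming the auxiliary instance restores SYSTEM, SYSAUX, the undo tablespace, and the tablespace containing the table, as in the steps above):

```sql
-- Approximate space needed at the auxiliary destination
-- (tablespace names are from this example and may differ on your system)
SELECT SUM(bytes)/1024/1024/1024 AS approx_required_gb
FROM   dba_data_files
WHERE  tablespace_name IN ('SYSTEM','SYSAUX','UNDOTBS1','USERS');
```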

Hope this helps !!

Monitoring transaction recovery

Sometimes we end up in a situation where a long-running transaction is not completing, and we are not sure how much more time it is going to take. This happened with one of our DBAs, who found an MLOG bloated because of one orphan snapshot entry. Orphan entries are the ones where the actual site is not registered on the master (no entry in DBA_REGISTERED_SNAPSHOTS), but there is still an entry for the MLOG (an entry in DBA_SNAPSHOT_LOGS). This can happen if we drop a snapshot from the downstream database and it does not get cleaned up on the upstream database.
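A hedged sketch of identifying such orphan entries (column names taken from the standard dictionary views; verify against your version):

```sql
-- Snapshot IDs present in the MLOG bookkeeping but not registered on the master
SELECT l.log_owner, l.master, l.snapshot_id
FROM   dba_snapshot_logs l
WHERE  l.snapshot_id IS NOT NULL
AND    l.snapshot_id NOT IN (SELECT r.snapshot_id
                             FROM   dba_registered_snapshots r);
```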

So in the situation that I faced, the upstream team had an MLOG which was bloated to 18 GB, and the MLOG also had an index which was bloated to 30 GB (yes, I know it's bad :-)).

So they identified the orphan snapshot ID and wanted to purge it from the snapshot log to reduce the size of the MLOG (by moving the MLOG and rebuilding the index after doing the purge).

They used the following procedure of DBMS_SNAPSHOT for purging the snapshot ID from the log:

PROCEDURE PURGE_SNAPSHOT_FROM_LOG
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SNAPSHOT_ID                    BINARY_INTEGER          IN
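A minimal usage sketch (the snapshot ID 12345 is a placeholder for the orphan ID identified earlier):

```sql
-- Purge all MLOG entries kept on behalf of the orphan snapshot ID
EXEC DBMS_SNAPSHOT.PURGE_SNAPSHOT_FROM_LOG(12345);
```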

They started the activity in the morning, and even after monitoring it until evening, it was still not complete. I helped them track the progress by checking the Real-Time SQL Monitoring report, which showed that the session had already read around 60 GB, and the undo used until that time was around 48 GB. It was not clear how the command had read 60 GB worth of data when the MLOG size was only 18 GB.

Also, original base table was just 2GB.

At this point they wanted to kill the session. But killing the session would not help immediately, as it would have to perform a huge rollback as well (48 GB of undo).

But since the command was not completing and had taken almost the entire shift, they decided to kill the session. The session was killed using ALTER SYSTEM KILL SESSION '<sid>,<serial#>' IMMEDIATE and was marked for kill. But the session was only marked as killed; it was still holding its locks (visible in the V$LOCK view). This was because the session was performing rollback. We can monitor the progress of the rollback using the V$TRANSACTION view.

You can look at USED_UBLK in V$TRANSACTION to estimate how long the rollback will take to complete.

SQL> SELECT a.used_ublk 
 FROM v$transaction a, v$session b
 WHERE a.addr = b.taddr AND b.sid = <SID>;

For example:

If USED_UBLK showed 29,900 undo blocks 12 hours ago and now shows 22,900, the session has rolled back 7,000 blocks in 12 hours. At that rate, the remaining 22,900 blocks will take approximately another 39 hours, depending on the types of transactions being rolled back.

The recovery was very slow because the session was doing serial rollback. Next, we found the OS PID of the session and killed the OS process as well, so that the rollback could continue in the background via SMON. Within a few minutes PMON performed the cleanup and the lock was released.

The rollback continued in the background, which is faster than the rollback performed by the session: if we kill the session and its shadow process at the OS level, SMON picks up the rollback and can perform it in parallel.

V$FAST_START_TRANSACTIONS & X$KTUXE

We can monitor the progress of the rollback in the V$FAST_START_TRANSACTIONS view, which contains one row for each transaction that Oracle is recovering in parallel.

The FAST_START_PARALLEL_ROLLBACK parameter controls the maximum number of processes that may be used for parallel rollback.

In fast-start parallel rollback, the background process SMON acts as a coordinator and rolls back a set of transactions in parallel using multiple server processes.

Fast-start parallel rollback is mainly useful when a system has transactions that run a long time before committing, especially parallel INSERT, UPDATE, and DELETE operations. When SMON discovers that the amount of recovery work is above a certain threshold, it automatically begins parallel rollback by dispersing the work among several parallel processes.

The following query can be used to monitor the progress of transaction recovery:

set linesize 100
alter session set NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS';
select usn, state, undoblockstotal "Total", undoblocksdone "Done",
       undoblockstotal-undoblocksdone "ToDo",
       decode(cputime,0,'unknown',
              sysdate+(((undoblockstotal-undoblocksdone)/(undoblocksdone/cputime))/86400))
       "Estimated time to complete"
from v$fast_start_transactions;

Run the above query several times in a row; this will give you a good idea of how SMON is progressing.

  • In some versions CPUTIME does not work (always 0), so the estimated completion time will not be displayed
  • In some cases the V$FAST_START_TRANSACTIONS view will not work; if so, you can query the internal data dictionary view X$KTUXE

The ‘ktuxesiz’ column represents the remaining number of undo blocks required for rollback:

select ktuxeusn, to_char(sysdate,'DD-MON-YYYY HH24:MI:SS') "Time", ktuxesiz, ktuxesta 
 from x$ktuxe 
 where ktuxecfl = 'DEAD';

I was not able to see the recovery progress using V$FAST_START_TRANSACTIONS, but I was able to see it in the X$KTUXE view.

 


select ktuxeusn, to_char(sysdate,'DD-MON-YYYY HH24:MI:SS') "Time", ktuxesiz, ktuxesta 
from x$ktuxe 
where ktuxecfl = 'DEAD'; 

  KTUXEUSN|Time                      |  KTUXESIZ|KTUXESTA
----------|--------------------------|----------|----------------
      2167|01-AUG-2016 12:05:14      |   5260156|ACTIVE
SQL>/

  KTUXEUSN|Time                      |  KTUXESIZ|KTUXESTA
----------|--------------------------|----------|----------------
      2167|01-AUG-2016 12:05:15      |   5259945|ACTIVE
SRW1NA>/

  KTUXEUSN|Time                      |  KTUXESIZ|KTUXESTA
----------|--------------------------|----------|----------------
      2167|01-AUG-2016 12:05:15      |   5259854|ACTIVE

..
..
..
<After 2-3 hours>

  KTUXEUSN|Time                      |  KTUXESIZ|KTUXESTA
----------|--------------------------|----------|----------------
      2167|01-AUG-2016 16:31:47      |    612697|ACTIVE

Speeding up recovery

We can further improve the speed of recovery by taking the following steps.

1) There are cases where parallel transaction recovery is not as fast as serial transaction recovery, because the PQ slaves interfere with each other. To check the parallel recovery processes and their state, query:

select * from v$fast_start_servers;

The STATE column shows whether a server is IDLE or RECOVERING. If only one process is in state RECOVERING while the other processes are IDLE, then you should disable parallel transaction recovery. How to do this is outlined in the following note:

Note 238507.1: How to Disable Parallel Transaction Recovery When Parallel Txn Recovery is Active

2) If all the rows show RECOVERING in the STATE column of V$FAST_START_SERVERS, then you will benefit from adding more threads for doing the recovery.

You can do so by setting the FAST_START_PARALLEL_ROLLBACK parameter. Set it to HIGH if you want to speed up the recovery.

The parameter takes the following values:

  • FALSE – Parallel rollback is disabled
  • LOW – Limits the maximum degree of parallelism to 2 * CPU_COUNT
  • HIGH – Limits the maximum degree of parallelism to 4 * CPU_COUNT

Note that although this parameter can be changed with ALTER SYSTEM, changing it stops transaction recovery and restarts it with the new implied degree of parallelism. So if the rollback is already more than half done and you think it's not worth changing the parameter, you can leave it; if you change it anyway, recovery will start from the beginning again.
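A minimal sketch of setting the parameter (assuming it is set system-wide; in the scenario described below it was set before bouncing the database):

```sql
-- Raise the maximum degree of parallel rollback to 4 * CPU_COUNT;
-- SMON will restart transaction recovery with the new degree of parallelism
ALTER SYSTEM SET fast_start_parallel_rollback = HIGH SCOPE=BOTH;
```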

3) Increase the '_cleanup_rollback_entries' parameter

This parameter determines the number of undo entries to apply per transaction cleanup. The default value is 100; you can change it to, say, 400. This parameter cannot be changed dynamically, so the database will need to be restarted to change it.
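A sketch, assuming the change is made in the spfile before the restart (underscore parameters should normally be changed only under Oracle Support guidance):

```sql
-- Apply 400 undo entries per transaction cleanup instead of the default 100
ALTER SYSTEM SET "_cleanup_rollback_entries" = 400 SCOPE=SPFILE;
```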

In our specific situation, we knew that a huge rollback needed to be performed, and we were monitoring the rollback progress from the beginning. So we decided at the very start to set FAST_START_PARALLEL_ROLLBACK to HIGH and bounce the DB. This improved recovery speed right from the beginning.

References:

SMON: Parallel transaction recovery tried (Doc ID 1458738.1)

Troubleshooting Database Transaction Recovery (Doc ID 1494886.1)

Database Hangs Because SMON Is Taking 100% CPU Doing Transaction Recovery (Doc ID 414242.1)


Note 238507.1: How to Disable Parallel Transaction Recovery When Parallel Txn Recovery is Active

Oracle 12c Learning Series: In-database Archiving and Temporal Validity

One of the major challenges faced by an Oracle DBA is how to effectively deal with historical data. Today, if we consider the database tables of an enterprise, the data in a table goes back several years, and most of it is inactive. The challenge is how to archive this data while keeping our queries running efficiently.

If we choose to archive old data outside the database on tape, the cost of storing the data is reduced, but the older data is not available for one-off reports. Also, regulatory compliance often requires that older historical data be accessible at all times.

If we choose to maintain old data inside the database, the size of the database grows, and so does the size of backups, increasing storage cost. Moreover, queries on bigger tables do not run efficiently, so there is a performance cost for queries, DMLs, and DDLs (for example, index creation and rebuilds).

In 11g, older archived data is handled using the following two solutions:

  1. Partitioning – Distinguish data as old or new based on the date of creation or some date attribute in the table, and partition the table on that column. The DBA can then manually move older partitions to low-cost storage.
  2. Flashback Data Archive (the Total Recall feature in 11g) – This feature automatically tracks changes to data over a period of time and maintains archived transactional data in a different tablespace based on the RETENTION parameter. Historical data can be queried using the flashback query AS OF clause. Archived transactional data that has aged beyond RETENTION is purged automatically.

Oracle 12c provides additional solutions to the above challenges. We have two ways to handle archive data in Oracle 12c:

  1. In-database archiving
  2. Temporal validity

In-Database Archiving

Until now, it was very difficult to distinguish active data from inactive data at the row level. In Oracle 12c, we can make this distinction by attaching a property to every row of a table to mark it as inactive; if a row is not marked inactive, it's an active row. Setting such a property at the row level helps separate active from inactive data in a table, and lets you archive rows by marking them inactive. The key thing to understand here is that the data remains in the table, and you can compress it, but to applications this inactive part of the data remains invisible. We also have a session-level parameter which enables us to see active as well as inactive data (if required).

In-database archiving is enabled at the table level, either when the table is created or later by altering the table. The following is an example of creating a table with in-database archiving:


SQL> create table sales_archive
 2 (year    number(4),
 3 product  varchar2(10),
 4 amt      number(10,2))
 5 row archival;

Table created.

When you create a table with the ROW ARCHIVAL clause (line 5 in the above example), Oracle automatically adds an extra column, ORA_ARCHIVE_STATE, to the table. This column is added as a hidden column, so if you describe the table, you won't see it.


SQL> desc sales_archive
 Name                    Null?    Type
 ----------------------- -------- ----------------
 YEAR                             NUMBER(4)
 PRODUCT                          VARCHAR2(10)
 AMT                              NUMBER(10,2)

But you can check whether the column has been added using the DBA_TAB_COLS view.


SQL> select table_name, column_name, column_id, HIDDEN_COLUMN from dba_tab_cols where table_name = 'SALES_ARCHIVE';

TABLE_NAME           COLUMN_NAME          COLUMN_ID  HID
-------------------- -------------------- ---------- ---
SALES_ARCHIVE        ORA_ARCHIVE_STATE               YES
SALES_ARCHIVE        YEAR                          1 NO
SALES_ARCHIVE        PRODUCT                       2 NO
SALES_ARCHIVE        AMT                           3 NO

Similarly, you can enable row archival on an existing table using ALTER TABLE, as shown below.


SQL> alter table sales row archival;

Table altered.

SQL> select table_name, column_name, column_id, HIDDEN_COLUMN from dba_tab_cols where table_name = 'SALES';

TABLE_NAME           COLUMN_NAME          COLUMN_ID  HID
-------------------- -------------------- ---------- ---
SALES                YEAR                          1 NO
SALES                PRODUCT                       2 NO
SALES                AMT                           3 NO
SALES                ORA_ARCHIVE_STATE               YES

The ORA_ARCHIVE_STATE column is the key to distinguishing active from inactive data. It can take two values: 0 and 1. By default, a newly inserted row is active, denoted by the value 0 in ORA_ARCHIVE_STATE. When rows are rarely accessed and no longer updated, they can be considered inactive; this is denoted by the value 1 (or any non-zero value) in ORA_ARCHIVE_STATE.

If you want to check the value of the ORA_ARCHIVE_STATE column, you have to select it explicitly in your SELECT statement.


SQL> select year, product, amt, ORA_ARCHIVE_STATE from sales_archive;

      YEAR PRODUCT           AMT ORA_ARCHIV
---------- ---------- ---------- ----------
      2001 A                 100 0
      2002 B                 200 0
      2003 C                 300 0
      2004 D                 400 0
      2005 E                 500 0
      2006 F                 600 0
      2007 G                 700 0

7 rows selected.

As you can see, we have value of 0 in all the selected rows, which means all this data is active.

Note that if rows are rarely accessed, the ORA_ARCHIVE_STATE column will not be set to 1 automatically. The DBA or the owner of the data has to decide which rows are inactive and manually update the value of ORA_ARCHIVE_STATE.

The following DML shows how to mark data inactive:

SQL> update sales_archive set ORA_ARCHIVE_STATE = DBMS_ILM.ARCHIVESTATENAME(1) where year = 2001;

1 row updated.

SQL> commit;

Commit complete.

In the above DML, we used the DBMS_ILM.ARCHIVESTATENAME function, which is the documented way of updating the value of ORA_ARCHIVE_STATE. You can also directly update the column to any non-zero value, and the result will be the same. In the above DML, we updated one record (year 2001) and marked it as inactive (archived).
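For example, a direct update is equivalent to the DML above (ORA_ARCHIVE_STATE is stored as a character value, so a quoted literal is used here):

```sql
-- Any non-zero value marks the row as archived (inactive)
UPDATE sales_archive SET ora_archive_state = '1' WHERE year = 2001;
COMMIT;
```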

If you query the table again, you won’t see the record that we updated above.


SQL> select year, product, amt, ORA_ARCHIVE_STATE from sales_archive;

      YEAR PRODUCT           AMT ORA_ARCHIV
---------- ---------- ---------- ----------
      2002 B                 200 0
      2003 C                 300 0
      2004 D                 400 0
      2005 E                 500 0
      2006 F                 600 0
      2007 G                 700 0

6 rows selected.

This is because, by default, Oracle only shows active records; any records marked inactive will not be shown. Oracle does this by adding a simple filter to your queries, as we can see in the explain plan below.

SQL> explain plan for
 2 select year, product, amt, ORA_ARCHIVE_STATE from sales_archive;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------
Plan hash value: 4043476784

----------------------------------------------------------------------------------
| Id | Operation         | Name          | Rows | Bytes | Cost (%CPU) | Time     |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT   |               | 6    | 12210 | 3 (0)       | 00:00:01 |
|* 1 | TABLE ACCESS FULL | SALES_ARCHIVE | 6    | 12210 | 3 (0)       | 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("SALES_ARCHIVE"."ORA_ARCHIVE_STATE"='0')

Note
-----
 - dynamic statistics used: dynamic sampling (level=2)

17 rows selected.

You can view archived records again by setting the ROW ARCHIVAL VISIBILITY parameter to ALL at the session level. By default, the value of this parameter is ACTIVE, which is why we were seeing only active records. The following shows this:


SQL> alter session set row archival visibility = ALL;

Session altered.

SQL> select year, product, amt, ORA_ARCHIVE_STATE from sales_archive;

YEAR       PRODUCT           AMT ORA_ARCHIV
---------- ---------- ---------- ----------
      2001 A                 100 1
      2002 B                 200 0
      2003 C                 300 0
      2004 D                 400 0
      2005 E                 500 0
      2006 F                 600 0
      2007 G                 700 0

7 rows selected.

After setting ROW ARCHIVAL VISIBILITY to ALL, we are able to see archived values. This is one great advantage of in-database archiving: if some ad hoc report needs access to archived records, we can just set ROW ARCHIVAL VISIBILITY to ALL at the beginning of the report and get the desired result. You can then set visibility back to ACTIVE.

SQL> alter session set row archival visibility = ACTIVE;

Session altered.

You can also restore archived data back to active by running an UPDATE on it using the DBMS_ILM.ARCHIVESTATENAME function, this time passing 0 as the value. You also need to set ROW ARCHIVAL VISIBILITY to ALL before running the UPDATE; otherwise the archived data won't be visible to it.
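As a sketch, restoring the year 2001 row archived earlier back to active would look like this:

```sql
-- Archived rows must be visible before they can be updated
ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL;

-- Passing 0 marks the row as active again
UPDATE sales_archive
SET    ora_archive_state = DBMS_ILM.ARCHIVESTATENAME(0)
WHERE  year = 2001;
COMMIT;

ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ACTIVE;
```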

You can disable row archival for a table using ALTER TABLE ... NO ROW ARCHIVAL. If you do so, the ORA_ARCHIVE_STATE column is dropped immediately and all the data becomes visible to your SELECT statements.

SQL> alter table sales no row archival;

Table altered.

Note that if you use CTAS (CREATE TABLE AS SELECT) to create a new table from an existing table, the new table will have active as well as inactive data. In-database archiving applies only to SELECTs and DMLs, not to DDLs.

Temporal Validity and Temporal History

With in-database archiving, we distinguish active rows from inactive rows using the row archival column (ORA_ARCHIVE_STATE). You can achieve similar functionality using temporal validity. Many applications today have data dependent on various dates that are relevant to the underlying business. For example, an insurance application records basic dates such as the insurance start date and end date; these represent the valid time during which the policy is active for a customer. Such date attributes, which denote the valid time of a record, are called temporal validity. Temporal validity lets you keep active and inactive data together in the same table, while providing all the benefits of archiving inactive data. Temporal validity support helps in cases where it's critical to know when certain data becomes valid and when it becomes invalid.

Similar to the business-relevant date attributes applications keep in their tables, Oracle provides temporal validity by creating its own date attributes (a start date and an end date) to represent the valid-time dimension during which a record is active. The application user or DBA decides which records are active by updating the start-date and end-date attributes of each record; records whose end date has passed are inactive.
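For example, a record can be marked inactive by closing its valid-time period (a sketch only; it assumes a POLICY table with a USER_TIME_END valid-time column, as created later in this section):

```sql
-- Close the valid-time period so the row is no longer currently valid
UPDATE policy SET user_time_end = SYSDATE WHERE policy_no = 3424;
COMMIT;
```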

By applying an implicit filter on the valid-time dimension, queries can show rows that are currently valid or that will be valid in the future, and hide rows whose facts are not currently valid.

Temporal History

Before we go further, it's better to understand temporal history and how temporal validity differs from it. Temporal history was called Flashback Data Archive (FDA) in the previous release. To understand the difference, consider the following.

Temporal validity dates and times are different from the dates and times when the record is created: the date and time when a record was created in the database is its temporal history. For example, if we create a new customer whose insurance policy starts today, it's possible that we actually create the record tomorrow or after a week, but still put today as the insurance start date. In that case, the temporal validity date is today, but since the record was created a week later, the temporal history date is a week later.

We can use temporal history to get the past data. We can also use temporal history along with temporal validity to get rows which were valid in the past.

Defining temporal validity

You define a valid-time dimension at table creation time, or by altering a table. In order to create the valid-time dimension, you specify the PERIOD FOR clause in the table creation statement.

The following example creates a table with a valid-time dimension, explicitly defining the two date-time columns USER_TIME_START and USER_TIME_END:


SQL> create table policy
2 (cust_id         number,
3 policy_no        number,
4 policy_type      varchar2(20),
5 user_time_start  date,
6 user_time_end    date,
7 period for user_time (user_time_start, user_time_end));

Table created.

In the above example, we explicitly specified the valid-time dimension columns in the CREATE TABLE statement. But even if we don't specify the column names, the PERIOD FOR clause creates the required columns:


SQL> create table policy
2 (cust_id    number,
3 policy_no   number,
4 policy_type varchar2(20),
5 period for user_time);

Table created.

A valid-time dimension, represented by the new PERIOD FOR clause, consists of two date-time columns. If you don't specify the columns explicitly, as in the second example, Oracle automatically creates two hidden columns whose names start with the prefix provided in the PERIOD FOR clause and end with "start" and "end". So in the second example, the two hidden columns created are USER_TIME_START and USER_TIME_END (the same as in the first example).

The only difference between the first example and the second is the visibility of the valid-time dimension columns: in the first table the columns are visible, since we explicitly created them, whereas in the second they are hidden.
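The hidden columns can be confirmed the same way as with row archival, via DBA_TAB_COLS (a sketch; it assumes the second POLICY table above):

```sql
-- Hidden valid-time columns show up here but not in DESC or SELECT *
SELECT column_name, hidden_column
FROM   dba_tab_cols
WHERE  table_name = 'POLICY';
```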

If the columns are hidden, you need to specify them explicitly in SELECTs, UPDATEs, and INSERTs. The following example shows an INSERT into the POLICY table created with implicit valid-time dimension columns:


SQL> insert into policy (cust_id, policy_no, policy_type, user_time_start, user_time_end) values (123, 3424, 'HEALTH',sysdate, null);

1 row created.

SQL> select * from policy;

   CUST_ID POLICY_NO  POLICY_TYPE
---------- ---------- --------------------
       123 3424       HEALTH

SQL> select cust_id, policy_no, policy_type, user_time_start from policy;

   CUST_ID POLICY_NO  POLICY_TYPE          USER_TIME_START
---------- ---------- -------------------- ----------------------------------------
       123 3424       HEALTH               20-JUN-16 12.33.53.000000 AM +05:30

As you can see, SELECT * doesn't show the hidden valid-time dimension columns; in the second statement, we select them explicitly and see the values.

Now that we have seen how to create valid-time columns in a table to separate active from inactive data, let's check how to filter on these columns and select only valid data. There are two methods:

  • PERIOD FOR clause
  • DBMS_FLASHBACK_ARCHIVE package

PERIOD FOR clause

To filter valid data, you can use a SELECT statement with the new PERIOD FOR clause. One set of data in the table is valid based on the start and end dates of its valid-time period, and another set in the same table is invalid because it falls outside that period. Both sets of rows reside in the same table; however, by controlling visibility, you can limit queries and DMLs to see only the active data.

For each record you insert into the table, you specify the valid-time start and end dates, which represent when the data is active; these dates are entered manually by end users. The date when the record is actually inserted into the table is the transaction time, called temporal history in Oracle 12c.

You can use either the AS OF PERIOD FOR clause or the VERSIONS PERIOD FOR clause to display valid data. AS OF PERIOD FOR is used when we want to see data valid as of a specific date, whereas VERSIONS PERIOD FOR is used when we want to see data valid between two dates (a range).

The following example displays all policies that were active as of 01-JAN-2016:

select * from policy AS OF PERIOD FOR user_time to_date('01-JAN-2016','DD-MON-YYYY');

Here is another example, which shows valid data between 01-JAN-2016 and 01-FEB-2016:

select * from policy VERSIONS PERIOD FOR user_time between to_date('01-JAN-2016','DD-MON-YYYY') and to_date('01-FEB-2016','DD-MON-YYYY');

DBMS_FLASHBACK_ARCHIVE

You can also use the DBMS_FLASHBACK_ARCHIVE package to show valid data. Its ENABLE_AT_VALID_TIME procedure sets the visibility of data at the session level. Once visibility is set, a select on a valid-time temporal table shows only the data that was valid at the set time.

In the following example, we set the visibility of data as of a given time, 01-JAN-2016. Any select or DML statement in the same session will then see data that was valid as of 01-JAN-2016.


SQL> exec DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME('ASOF',to_date('01-JAN-2016','DD-MON-YYYY'));

PL/SQL procedure successfully completed.

You can also set the visibility to the current time, which shows the data that is currently valid:


SQL> exec DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME('CURRENT');

PL/SQL procedure successfully completed.

Note that this visibility setting affects only selects and DML; it does not affect DDL, which sees the complete data (valid as well as invalid). Also, in-database archiving and Temporal Validity are not supported in a CDB in this release; they are supported only in a non-CDB database.
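If you later want the session to see all rows again, valid and invalid, the valid-time visibility can be disabled - a sketch assuming the DBMS_FLASHBACK_ARCHIVE.DISABLE_ASOF_VALID_TIME procedure:

```
SQL> exec DBMS_FLASHBACK_ARCHIVE.DISABLE_ASOF_VALID_TIME;

PL/SQL procedure successfully completed.
```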

Hope this helps !!

Fixing SQL plans on ADG using SQL Profiles

With an Active Data Guard (ADG) setup, many read-only applications run on ADG, and sometimes the SQLs used by these applications run into bad plans. Because ADG is a read-only database, it is not possible to create a baseline or a profile on ADG.

One way to get a good plan for SQLs running on ADG is to run them on the primary first, fix the plan there by creating a profile or baseline, and then let the SQLs run on ADG again. In practice, I have often seen that dev teams are not very flexible about changing their configuration to run queries on the primary. Either their configuration setup is complex and takes time and effort to change, or, in bigger enterprises, there is a central configuration file which, when changed, changes the configuration for the entire region. In that case the complete software stack for the entire region would run on the primary, not just one database.

Another way to make the queries run on the primary is to point the ADG CNAME to the primary. That way, all applications that are supposed to run on ADG start running on the primary. The downside is the increased load on the primary, as all ADG applications now connect to it. That is too big a change just to fix one SQL on ADG.

This article is about fixing plans on ADG using a profile, without making any other change. We don't have to run the query on the primary to fix the plan.

Fixing SQL plan on ADG

In one of my old articles, "Fixing SQL Plans: The hard way – Part 1", I described the DBMS_SQLTUNE.IMPORT_SQL_PROFILE procedure. It takes the internal SQL hints that the optimizer understands as input and creates a profile to fix the plan.

In that method, SQL text is provided as input by selecting the SQL_TEXT column from either the V$SQLTEXT view or the DBA_HIST_SQLTEXT view.

Following is the partial code of the PL/SQL block I used to create the profile:

for sql_rec in (
 select t.sql_id, t.sql_text
 from dba_hist_sqltext t, dba_hist_sql_plan p
 where t.sql_id = p.sql_id
 and p.sql_id = '&SQL_ID_TO_FIX'
 and p.plan_hash_value = &BAD_PLAN_HASH_VALUE
 and p.parent_id is null
)
loop
 DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
 sql_text => sql_rec.sql_text,
 profile => ar_profile_hints,
 name => 'PROFILE_33fndgzsas09k');

end loop;

Why can't we use this method for ADG?

You will find the required SQL text in the V$SQLTEXT view on ADG, but you cannot create a SQL profile there, since ADG is a read-only database. Nor can you create the profile on the primary, because the SQL text is not present on the primary.

So, how do we fix this?

Method 1:

One method of fixing the SQL is to create a database link between the primary and ADG (say, adg_db_link) and use that link to fetch the SQL text from the standby. The PL/SQL code then looks like the following. Note that I have replaced the DBA_HIST* views with V$ views, because the ADG data is present only in the V$ views; the DBA_HIST* views contain data captured from the primary database as part of AWR snapshots.

for sql_rec in (
 select t.sql_id, t.sql_text
 from v$sqltext@adg_db_link t, v$sql_plan@adg_db_link p
 where t.sql_id = p.sql_id
 and p.sql_id = '&SQL_ID_TO_FIX'
 and p.plan_hash_value = &BAD_PLAN_HASH_VALUE
 and p.parent_id is null
)
loop
 DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
 sql_text => sql_rec.sql_text,
 profile => ar_profile_hints,
 name => 'PROFILE_33fndgzsas09k');

end loop;

You can run the complete PL/SQL block on the primary; the DB link fetches the required SQL text from ADG, and the SQL profile is created on the primary.

Once this SQL profile is created, it should be picked up by the SQL on ADG. Note that you need to purge the SQL from the shared pool on ADG so that it goes through a hard parse and picks up the profile.

Method 2:

Another method is to pass the SQL text directly as input to the DBMS_SQLTUNE.IMPORT_SQL_PROFILE procedure. This is the method I implemented, and it worked for me.

Let's take an example.

Following is the SQL text that I want to fix. The actual SQL in my environment was very big and complex and I cannot share its text, so I have simplified it and changed the table and column names for readability.

select * from ( 
          select this_.col1 as col1,
                 this_.col2 as col2, 
                 this_.col3 as col3, 
                 this_.col4 as col4
          from TAB1 this_ 
          inner join TAB2 compaddres3_ on this_.col2 = compaddres3_.col2 
          inner join TAB3 compshipme1_ on this_.col1=compshipme1_.col1
          where this_.col1 in (:1) 
          and   this_.col2 in (:2 , :3) 
          and   compshipme1_.col1 in (:4 )
          and   this_.col8 = 'OPEN'
          and   this_.col5>=:5 
          and   this_.col5<=:6 
          order by this_.col1 asc ) 
where rownum <= :7

Assuming that the existing plan on the ADG database is bad, we need to find a good plan for this SQL and create a profile so that the new plan is picked up on ADG.
We are going to do that without running this SQL on the primary and without pointing the ADG CNAME to the primary. We are also not going to make any code changes to add hints.

Following are the steps to fix the plan on ADG

1) Get a good plan for this SQL

You can add hints to the SQL text and do an explain plan on the primary to check whether the plan looks good. Alternatively, if you want to be sure, you can run the hinted SQL on ADG to confirm that it runs efficiently.

I used a few index hints in my SQL to make sure the correct index is picked, and then ran EXPLAIN PLAN for the SQL to check the plan.

explain plan for
select * from ( 
          select /*+ index(tab1 I_UNIQ_COL2) */ this_.col1 as col1,
                 this_.col2 as col2, 
                 this_.col3 as col3, 
                 this_.col4 as col4
          from TAB1 this_ 
          inner join TAB2 compaddres3_ on this_.col2 = compaddres3_.col2 
          inner join TAB3 compshipme1_ on this_.col1=compshipme1_.col1
          where this_.col1 in (:1) 
          and   this_.col2 in (:2 , :3) 
          and   compshipme1_.col1 in (:4 )
          and   this_.col8 = 'OPEN'
          and   this_.col5>=:5 
          and   this_.col5<=:6 
          order by this_.col1 asc ) 
where rownum <= :7

I got the following plan after adding the required hints:

Plan hash value: 2524091007

-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name         | Rows | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------
| 0   | SELECT STATEMENT                          |              | 134  | 1524K | 4134   (3) | 00:00:50 |       |       |
|* 1  |  COUNT STOPKEY                            |              |      |       |            |          |       |       |
| 2   |   VIEW                                    |              | 134  | 1524K | 4134   (3) | 00:00:50 |       |       |
|* 3  |    SORT ORDER BY STOPKEY                  |              | 134  | 71154 | 4134   (3) | 00:00:50 |       |       |
|* 4  |     FILTER                                |              |      |       |            |          |       |       |
| 5   |      NESTED LOOPS                         |              | 134  | 71154 | 4133   (3) | 00:00:50 |       |       |
| 6   |       NESTED LOOPS                        |              | 134  | 43550 | 3731   (3) | 00:00:45 |       |       |
| 7   |        TABLE ACCESS BY INDEX ROWID        | TAB1         | 1    | 22    | 1      (0) | 00:00:01 |       |       |
|* 8  |         INDEX UNIQUE SCAN                 | I_UNIQ_COL2  | 1    |       | 0      (0) | 00:00:01 |       |       |
| 9   |        TABLE ACCESS BY GLOBAL INDEX ROWID | TAB2         | 134  | 40602 | 3730   (3) | 00:00:45 | ROWID | ROWID |
|* 10 |         INDEX RANGE SCAN                  | I_TAB2_COL2  | 242  |       | 3507   (3) | 00:00:43 |       |       |
| 11  |        TABLE ACCESS BY GLOBAL INDEX ROWID | TAB3         | 1    | 206   | 3      (0) | 00:00:01 | ROWID | ROWID |
|* 12 |         INDEX RANGE SCAN                  | PK_TAB3_COL1 | 1    |       | 2      (0) | 00:00:01 |       |       |
-------------------------------------------------------------------------------------------------------------------------

2) Obtain outline hints for SQL

Once you have run EXPLAIN PLAN for the SQL, you can use FORMAT=>'ADVANCED' to get the necessary outline hints:

select * from table(dbms_xplan.display(format=>'advanced'));

/*+
 BEGIN_OUTLINE_DATA
 USE_NL(@"SEL$54D64B3C" "COMPADDRES3_"@"SEL$2")
 USE_NL(@"SEL$54D64B3C" "THIS_"@"SEL$2")
 LEADING(@"SEL$54D64B3C" "COMPSHIPME1_"@"SEL$3" "THIS_"@"SEL$2" "COMPADDRES3_"@"SEL$2")
 INDEX_RS_ASC(@"SEL$54D64B3C" "COMPADDRES3_"@"SEL$2" ("TAB2"."COL2"))
 INDEX_RS_ASC(@"SEL$54D64B3C" "THIS_"@"SEL$2" ("TAB1"."COL1" "TAB1"."COL2" "TAB1"."COL3" "TAB1"."COL4"))
 INDEX_RS_ASC(@"SEL$54D64B3C" "COMPSHIPME1_"@"SEL$3" ("TAB3"."COL1"))
 NO_ACCESS(@"SEL$1" "from$_subquery$_001"@"SEL$1")
 OUTLINE(@"SEL$2")
 OUTLINE(@"SEL$3")
 MERGE(@"SEL$2")
 OUTLINE(@"SEL$64EAE176")
 OUTLINE(@"SEL$4")
 OUTLINE_LEAF(@"SEL$1")
 MERGE(@"SEL$64EAE176")
 OUTLINE_LEAF(@"SEL$54D64B3C")
 ALL_ROWS
 OPT_PARAM('_optimizer_extended_cursor_sharing_rel' 'none')
 DB_VERSION('11.2.0.4')
 OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
 IGNORE_OPTIM_EMBEDDED_HINTS
 END_OUTLINE_DATA
 */

3) Use the outline data to create a profile for the query running on ADG

The SQL profile has to be created on the primary, since we cannot create a SQL profile on ADG (a read-only database). The method described in "Fixing SQL Plans: The hard way – Part 1" takes the SQL ID as input and queries V$SQLTEXT or the DBA_HIST* views to get the required SQL text, which is then supplied to the DBMS_SQLTUNE.IMPORT_SQL_PROFILE procedure.

But since we don't have the SQL text on the primary, we supply the SQL text directly to DBMS_SQLTUNE.IMPORT_SQL_PROFILE, as shown below.

Note – if your SQL text contains single quotes, you need to change each to two single quotes; the first single quote acts as an escape character.

My SQL text contains a single-quoted literal, so I have doubled that quote as shown below:

select * from ( 
       select this_.col1 as col1,
              this_.col2 as col2, 
              this_.col3 as col3, 
              this_.col4 as col4
       from TAB1 this_ 
       inner join TAB2 compaddres3_ on this_.col2 = compaddres3_.col2 
       inner join TAB3 compshipme1_ on this_.col1=compshipme1_.col1
       where this_.col1 in (:1) 
       and   this_.col2 in (:2 , :3) 
       and   compshipme1_.col1 in (:4 )
       and   this_.col8 = ''OPEN''
       and   this_.col5>=:5 
       and   this_.col5<=:6 
       order by this_.col1 asc ) 
where rownum <= :7

The above SQL will be given as input to the DBMS_SQLTUNE.IMPORT_SQL_PROFILE procedure. Similarly, any single quotes in the outline hints we extracted must be doubled in the same way.

Also, each outline hint must be enclosed in single quotes:

'BEGIN_OUTLINE_DATA',
'USE_NL(@"SEL$54D64B3C" "COMPADDRES3_"@"SEL$2")',
'USE_NL(@"SEL$54D64B3C" "THIS_"@"SEL$2")',
'LEADING(@"SEL$54D64B3C" "COMPSHIPME1_"@"SEL$3" "THIS_"@"SEL$2" "COMPADDRES3_"@"SEL$2")',
'INDEX_RS_ASC(@"SEL$54D64B3C" "COMPADDRES3_"@"SEL$2" ("TAB2"."COL2"))',
'INDEX_RS_ASC(@"SEL$54D64B3C" "THIS_"@"SEL$2" ("TAB1"."COL1" "TAB1"."COL2" "TAB1"."COL3" "TAB1"."COL4"))',
'INDEX_RS_ASC(@"SEL$54D64B3C" "COMPSHIPME1_"@"SEL$3" ("TAB3"."COL1"))',
'NO_ACCESS(@"SEL$1" "from$_subquery$_001"@"SEL$1")',
'OUTLINE(@"SEL$2")',
'OUTLINE(@"SEL$3")',
'MERGE(@"SEL$2")',
'OUTLINE(@"SEL$64EAE176")',
'OUTLINE(@"SEL$4")',
'OUTLINE_LEAF(@"SEL$1")',
'MERGE(@"SEL$64EAE176")',
'OUTLINE_LEAF(@"SEL$54D64B3C")',
'ALL_ROWS',
'OPT_PARAM(''_optimizer_extended_cursor_sharing_rel'' ''none'')',
'DB_VERSION(''11.2.0.4'')',
'OPTIMIZER_FEATURES_ENABLE(''11.2.0.4'')',
'IGNORE_OPTIM_EMBEDDED_HINTS',
'END_OUTLINE_DATA'

Note that we have enclosed each hint in single quotes, with a comma at the end of every hint except the last.

So we now have the outline hints and the SQL text that will be provided as input to the DBMS_SQLTUNE.IMPORT_SQL_PROFILE procedure.

4) Create the required profile on the primary

The following code creates the SQL profile on the primary using the inputs prepared above:

declare
ar_profile_hints sys.sqlprof_attr;
begin
  ar_profile_hints := sys.sqlprof_attr(
     'BEGIN_OUTLINE_DATA',
     'USE_NL(@"SEL$54D64B3C" "COMPADDRES3_"@"SEL$2")',
     'USE_NL(@"SEL$54D64B3C" "THIS_"@"SEL$2")',
     'LEADING(@"SEL$54D64B3C" "COMPSHIPME1_"@"SEL$3" "THIS_"@"SEL$2" "COMPADDRES3_"@"SEL$2")',
     'INDEX_RS_ASC(@"SEL$54D64B3C" "COMPADDRES3_"@"SEL$2" ("TAB2"."COL2"))',
     'INDEX_RS_ASC(@"SEL$54D64B3C" "THIS_"@"SEL$2" ("TAB1"."COL1" "TAB1"."COL2" "TAB1"."COL3" "TAB1"."COL4"))',
     'INDEX_RS_ASC(@"SEL$54D64B3C" "COMPSHIPME1_"@"SEL$3" ("TAB3"."COL1"))',
     'NO_ACCESS(@"SEL$1" "from$_subquery$_001"@"SEL$1")',
     'OUTLINE(@"SEL$2")',
     'OUTLINE(@"SEL$3")',
     'MERGE(@"SEL$2")',
     'OUTLINE(@"SEL$64EAE176")',
     'OUTLINE(@"SEL$4")',
     'OUTLINE_LEAF(@"SEL$1")',
     'MERGE(@"SEL$64EAE176")',
     'OUTLINE_LEAF(@"SEL$54D64B3C")',
     'ALL_ROWS',
     'OPT_PARAM(''_optimizer_extended_cursor_sharing_rel'' ''none'')',
     'DB_VERSION(''11.2.0.4'')',
     'OPTIMIZER_FEATURES_ENABLE(''11.2.0.4'')',
     'IGNORE_OPTIM_EMBEDDED_HINTS',
     'END_OUTLINE_DATA'
 );
 DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
 sql_text => 'select * from ( 
                    select this_.col1 as col1,
                          this_.col2 as col2, 
                          this_.col3 as col3, 
                          this_.col4 as col4
                   from TAB1 this_ 
                   inner join TAB2 compaddres3_ on this_.col2 = compaddres3_.col2 
                   inner join TAB3 compshipme1_ on this_.col1=compshipme1_.col1
                   where this_.col1 in (:1) 
                   and   this_.col2 in (:2 , :3) 
                   and   compshipme1_.col1 in (:4 )
                   and   this_.col8 = ''OPEN''
                   and   this_.col5>=:5 
                   and   this_.col5<=:6 
                   order by this_.col1 asc ) 
             where rownum <= :7',
 profile => ar_profile_hints,name => 'PROFILE_d7w7fruzwyh2s'
 );
end;
/

You can run the above code on the primary to create the required profile.
Once the profile is created, purge the SQL on ADG using the sys.dbms_shared_pool.purge procedure.
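As a sketch of that purge step (the address and hash value below are hypothetical; look them up in V$SQLAREA for your SQL_ID on ADG):

```
SQL> select address, hash_value from v$sqlarea where sql_id = '&SQL_ID_TO_FIX';

SQL> exec sys.dbms_shared_pool.purge('000000008BA21A50, 1882119256', 'C');
```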

When the SQL next runs on ADG, it will automatically pick up the profile we created, and the plan will be the one we verified earlier with hints.

I could see the following in the Note section when I ran DBMS_XPLAN.DISPLAY_CURSOR on ADG:

Note
-----
   - SQL profile PROFILE_d7w7fruzwyh2s used for this statement

Hope this helps !!

Reference

Fixing SQL Plans: The hard way – Part 1

Fixing SQL Plans: The hard way – Part 2

Fixing SQL Plans: The hard way – Part 3

 

Using UDev to configure ASM disks

This is a short article on using udev on RHEL 7. I have a VirtualBox VM running RHEL 7, and I am configuring ASM disk groups.

Before we can configure ASM disk groups, disks/partitions must be available at the OS level and recognizable by ASM. Oracle provides a utility called ASMLib which can be installed and used very easily to configure disks/partitions at the OS level.

ASMLib stamps the headers of partitions/disks at the OS level with an Oracle format, making them easily detected by an ASM instance doing disk discovery. You can read about ASMLib installation, configuration and usage in an article written by my friend Anand Prakash – https://aprakash.wordpress.com/2016/05/19/oracle-asmlib

In this article we will focus on using UDev for setting up ASM disks.

Let’s start with some information on UDev

What is Udev?

Udev is the device manager for the Linux 2.6 kernel that dynamically creates/removes device nodes in the /dev directory. It is the successor of devfs and hotplug. It runs in user space, and the user can change device names using udev rules.

Why Do We Need It ?

In the older kernels, the /dev directory contained static device files. With dynamic device creation, device nodes are created only for the devices actually present in the system. Let us see the disadvantages of the static /dev directory, which led to the development of udev.

Problems Identifying the Exact Hardware Device for a Device Node in /dev

The kernel assigns a major/minor number pair when it detects a hardware device while booting the system. Consider two hard disks, connected such that one is the master and the other the slave. The Linux system will call them /dev/hda and /dev/hdb. Now, if we interchange the disks, the device names change, making it difficult to identify the correct device for a given static device node. The situation gets worse when a bunch of hard disks is connected to the system.

Udev provides a persistent device naming system through the /dev directory, making it easier to identify the device.

Huge Number of Device Nodes in /dev

In the static model of device node creation, no method was available to identify the hardware devices actually present in the system. So, device nodes were created for all the devices that Linux was known to support at the time. The huge mess of device nodes in /dev made it difficult to identify the devices actually present in the system.

Not Enough Major/Minor Number Pairs

The number of static device nodes to include kept increasing, and the 8-bit scheme in use proved insufficient for handling all the devices. As a result, the major/minor number pairs started running out.

Working of Udev

The Udev daemon listens to the netlink socket that the kernel uses for communicating with user space applications. The kernel will send a bunch of data through the netlink socket when a device is added to, or removed from a system. The Udev daemon catches all this data and will do the rest, i.e., device node creation, module loading etc.

Kernel Device Event Management

  • When bootup is initialized, the /dev directory is mounted in tmpfs.
  • After that, Udev will copy the static device nodes from /lib/udev/devices to the /dev directory.
  • The Udev daemon then runs and collects uevents from the kernel, for all the devices connected to the system.
  • The Udev daemon will parse the uevent data and it will match the data with the rules specified in /etc/udev/rules.d.
  • It will create the device nodes and symbolic links for the devices as specified in the rules.
  • The Udev daemon reads the rules from /etc/udev/rules.d/*.rules and stores them in memory.
  • Udev will receive an inotify event, if any rules were changed. It will read the changes and will update the memory.

Let's start by creating disks at the OS level. VirtualBox provides commands that create block devices at the OS level.

Using Udev to configure disks

Step 1) Creating Block devices at OS level

I am creating 4 disks of 10 GB each, using the following commands:

VBoxManage createhd --filename asm1.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename asm2.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename asm3.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename asm4.vdi --size 10240 --format VDI --variant Fixed

Step 2) Attaching the disk to the correct storage

VBoxManage storageattach 12102 --storagectl "SATA" --port 1 --device 0 --type hdd --medium asm1.vdi --mtype shareable
VBoxManage storageattach 12102 --storagectl "SATA" --port 2 --device 0 --type hdd --medium asm2.vdi --mtype shareable
VBoxManage storageattach 12102 --storagectl "SATA" --port 3 --device 0 --type hdd --medium asm3.vdi --mtype shareable
VBoxManage storageattach 12102 --storagectl "SATA" --port 4 --device 0 --type hdd --medium asm4.vdi --mtype shareable

12102 is the name of the virtual machine. You can see the virtual machine name in the screenshot below.

vbox

Step 3) (optional) Making disks shareable

This step is optional and required only if you are using clustered ASM, in which case the ASM disks must be shareable so they can be attached to all machines in the cluster. You can use the following commands to make the disks shareable:

VBoxManage modifyhd asm1.vdi --type shareable
VBoxManage modifyhd asm2.vdi --type shareable
VBoxManage modifyhd asm3.vdi --type shareable
VBoxManage modifyhd asm4.vdi --type shareable

Step 4) Check that the disks now appear in VirtualBox

The following screenshot shows the disks added to the correct virtual machine:

vbox copy

Also, after you log in to the VM, you should be able to see the new devices under the /dev directory:

[root@advait ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8, 0 Jul 18 05:36 /dev/sda
brw-rw---- 1 root disk 8, 1 Jul 18 05:36 /dev/sda1
brw-rw---- 1 root disk 8, 2 Jul 18 05:36 /dev/sda2
brw-rw---- 1 root disk 8, 16 Jul 18 05:36 /dev/sdb
brw-rw---- 1 root disk 8, 32 Jul 18 05:36 /dev/sdc
brw-rw---- 1 root disk 8, 48 Jul 18 05:36 /dev/sdd
brw-rw---- 1 root disk 8, 64 Jul 18 05:36 /dev/sde

/dev/sda, /dev/sda1 and /dev/sda2 are partitions of the main device used by the VM.
/dev/sdb, /dev/sdc, /dev/sdd and /dev/sde are the devices we added above.

Step 5) Format the new devices and create partitions

You need to create the new partitions using the fdisk utility. I am showing the commands for one device; you can repeat the same for the remaining devices.

[root@advait ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xbda01838.

Command (m for help): n
Partition type:
 p primary (0 primary, 0 extended, 4 free)
 e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519): 
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set

Command (m for help): p

Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xbda01838

Device Boot Start End Blocks Id System
/dev/sdb1 2048 20971519 10484736 83 Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

This creates a single partition on each of the devices.

[root@advait ~]# ls -rlt /dev/sd*1
brw-rw---- 1 root disk 8, 1 Jul 18 05:36 /dev/sda1
brw-rw---- 1 root disk 8, 17 Jul 18 05:40 /dev/sdb1
brw-rw---- 1 root disk 8, 33 Jul 18 05:40 /dev/sdc1
brw-rw---- 1 root disk 8, 49 Jul 18 05:41 /dev/sdd1
brw-rw---- 1 root disk 8, 65 Jul 18 05:41 /dev/sde1

Step 6) Configure udev rules

Before creating a udev rule, we need to understand what exactly we want to do: create an alias for each disk we created at the OS level, so that the disk is always identified the same way, regardless of the device name Linux assigns it. This can be done by recognizing each device by some unique ID and assigning the alias to that device, and udev can do just that.

Each disk has a unique SCSI ID. We can use this ID to recognize the disk and assign the required alias to it. We can get the SCSI ID using the following command:

[root@advait ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdb1
1ATA_VBOX_HARDDISK_VB544d069c-abd3901e

The scsi_id command is located in the /usr/lib/udev/ directory on RHEL 7; in previous releases it used to be in /sbin/.

Likewise, we can find the SCSI ID of each disk that we added.

Rules are defined in the /etc/udev/rules.d directory. Udev reads these rules and applies them to the devices listed in the /dev directory.

A rule looks like the following (wrapped here for readability; in the rules file each rule is written on a single line):

KERNEL=="sd?1", SUBSYSTEM=="block",
PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent",
RESULT=="1ATA_VBOX_HARDDISK_VB88ee7178-f28aa887",
SYMLINK+="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"

Following is the explanation of each parameter

  • KERNEL=="sd?1" – matches the kernel name of the device. In our case all the partitions are named sd?1 (sdb1, sdc1, etc.), so this key matches the kernel name of the devices.
  • SUBSYSTEM=="block" – matches the subsystem of the device. SUBSYSTEM could be block, scsi, usb, etc.; ours are all block devices.
  • PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent" – runs scsi_id to get the unique SCSI ID of the device matched by the first two keys (KERNEL and SUBSYSTEM).
  • RESULT=="1ATA_VBOX_HARDDISK_VB544d069c-abd3901e" – compares the output of PROGRAM with RESULT. If they match, the assignments below are applied.
  • SYMLINK+="asm-disk1" – an assignment key: if the PROGRAM output matches RESULT, a symlink named asm-disk1 is created, pointing to the device in question.
  • OWNER="oracle" – also an assignment key; changes the ownership of the device to the oracle user.
  • GROUP="dba" – changes the group of the device to the dba group.
  • MODE="0660" – changes the permissions of the device file to 0660.

So the above rule means that the device corresponding to partition "sd?1" on the disk with the SCSI ID "1ATA_VBOX_HARDDISK_VB88ee7178-f28aa887" will always be reachable via the symlink /dev/asm-disk1, regardless of which letter Linux assigns when the device is discovered. In addition, the device will have the correct ownership and permissions for ASM.

We can create such a rule for each device, or, if the number of devices is huge, we can use wildcards and matching patterns to create more intelligent rules.

I have created the following rules in a new rules file, /etc/udev/rules.d/99-oracle-asm-disks.rules, with one rule per device:

KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB544d069c-abd3901e", SYMLINK+="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB9a630cc5-d9697727", SYMLINK+="asm-disk2", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB446fdf92-8640efff", SYMLINK+="asm-disk3", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB3a71b4f2-8c603b78", SYMLINK+="asm-disk4", OWNER="oracle", GROUP="dba", MODE="0660"
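If you have many disks, the rule lines can be generated with a small shell helper instead of hand-editing each one. This is a sketch; the function name is made up, and the sample SCSI ID is the one from the example above:

```shell
# gen_asm_rules: print one udev rule line per SCSI ID argument,
# numbering the asm-disk symlinks sequentially.
gen_asm_rules() {
  i=1
  for id in "$@"; do
    # Single quotes keep $parent literal for udev to expand, not the shell.
    printf 'KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="%s", SYMLINK+="asm-disk%s", OWNER="oracle", GROUP="dba", MODE="0660"\n' "$id" "$i"
    i=$((i+1))
  done
}

gen_asm_rules 1ATA_VBOX_HARDDISK_VB544d069c-abd3901e
```

You could redirect the output of gen_asm_rules (with all four SCSI IDs as arguments) into the rules file as root.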

Step 7) Load the updated block device partition tables

You can use partprobe to reload the partition tables for the block devices:

/sbin/partprobe /dev/sdb1
/sbin/partprobe /dev/sdc1
/sbin/partprobe /dev/sdd1
/sbin/partprobe /dev/sde1

Step 8) Test the rules

This optional step checks that the rules work as expected. You can run the following commands to test them:

udevadm test /block/sdb/sdb1
udevadm test /block/sdc/sdc1
udevadm test /block/sdd/sdd1
udevadm test /block/sde/sde1

The output of one of the above commands looks like the following:

calling: test
version 219
This program is for debugging only, it does not run any program
specified by a RUN key. It may show incorrect results, because
some values may be different, or not available at a simulation run.

=== trie on-disk ===
tool version: 219
file size: 6984832 bytes
header size 80 bytes
strings 1805856 bytes
nodes 5178896 bytes
Load module index
Created link configuration context.
timestamp of '/etc/udev/rules.d' changed
Reading rules file: /usr/lib/udev/rules.d/10-dm.rules
Reading rules file: /usr/lib/udev/rules.d/100-balloon.rules
Reading rules file: /usr/lib/udev/rules.d/13-dm-disk.rules
Reading rules file: /usr/lib/udev/rules.d/40-redhat.rules
Reading rules file: /usr/lib/udev/rules.d/42-usb-hid-pm.rules
Reading rules file: /usr/lib/udev/rules.d/50-udev-default.rules
Reading rules file: /usr/lib/udev/rules.d/60-alias-kmsg.rules
Reading rules file: /usr/lib/udev/rules.d/60-cdrom_id.rules
Reading rules file: /usr/lib/udev/rules.d/60-drm.rules
Reading rules file: /usr/lib/udev/rules.d/60-keyboard.rules
Reading rules file: /usr/lib/udev/rules.d/60-net.rules
Reading rules file: /usr/lib/udev/rules.d/60-persistent-alsa.rules
Reading rules file: /usr/lib/udev/rules.d/60-persistent-input.rules
Reading rules file: /usr/lib/udev/rules.d/60-persistent-serial.rules
Reading rules file: /usr/lib/udev/rules.d/60-persistent-storage-tape.rules
Reading rules file: /usr/lib/udev/rules.d/60-persistent-storage.rules
Reading rules file: /usr/lib/udev/rules.d/60-persistent-v4l.rules
Reading rules file: /usr/lib/udev/rules.d/60-raw.rules
Reading rules file: /usr/lib/udev/rules.d/61-accelerometer.rules
Reading rules file: /usr/lib/udev/rules.d/64-btrfs.rules
Reading rules file: /usr/lib/udev/rules.d/70-mouse.rules
Reading rules file: /usr/lib/udev/rules.d/70-power-switch.rules
Reading rules file: /usr/lib/udev/rules.d/70-touchpad.rules
Reading rules file: /usr/lib/udev/rules.d/70-uaccess.rules
Reading rules file: /usr/lib/udev/rules.d/71-biosdevname.rules
Reading rules file: /usr/lib/udev/rules.d/71-seat.rules
Reading rules file: /usr/lib/udev/rules.d/73-idrac.rules
Reading rules file: /usr/lib/udev/rules.d/73-seat-late.rules
Reading rules file: /usr/lib/udev/rules.d/75-net-description.rules
Reading rules file: /usr/lib/udev/rules.d/75-probe_mtd.rules
Reading rules file: /usr/lib/udev/rules.d/75-tty-description.rules
Reading rules file: /usr/lib/udev/rules.d/78-sound-card.rules
Reading rules file: /usr/lib/udev/rules.d/80-drivers.rules
Reading rules file: /usr/lib/udev/rules.d/80-net-name-slot.rules
Reading rules file: /usr/lib/udev/rules.d/80-net-setup-link.rules
Reading rules file: /usr/lib/udev/rules.d/81-kvm-rhel.rules
Reading rules file: /usr/lib/udev/rules.d/85-nm-unmanaged.rules
Reading rules file: /usr/lib/udev/rules.d/90-alsa-tools-firmware.rules
Reading rules file: /usr/lib/udev/rules.d/90-iprutils.rules
Reading rules file: /usr/lib/udev/rules.d/90-vconsole.rules
Reading rules file: /usr/lib/udev/rules.d/91-drm-modeset.rules
Reading rules file: /usr/lib/udev/rules.d/95-dm-notify.rules
Reading rules file: /usr/lib/udev/rules.d/95-udev-late.rules
Reading rules file: /usr/lib/udev/rules.d/98-kexec.rules
Reading rules file: /usr/lib/udev/rules.d/98-rdma.rules
Reading rules file: /etc/udev/rules.d/99-oracle-asm-disks.rules
Reading rules file: /usr/lib/udev/rules.d/99-systemd.rules
rules contain 24576 bytes tokens (2048 * 12 bytes), 12216 bytes strings
1803 strings (22584 bytes), 1184 de-duplicated (10988 bytes), 620 trie nodes used
GROUP 6 /usr/lib/udev/rules.d/50-udev-default.rules:52
LINK 'disk/by-id/ata-VBOX_HARDDISK_VB544d069c-abd3901e-part1' /usr/lib/udev/rules.d/60-persistent-storage.rules:43
IMPORT builtin 'blkid' /usr/lib/udev/rules.d/60-persistent-storage.rules:72
probe /dev/sdb1 raid offset=0
PROGRAM '/usr/lib/udev/scsi_id -g -u -d /dev/sdb' /etc/udev/rules.d/99-oracle-asm-disks.rules:1
starting '/usr/lib/udev/scsi_id -g -u -d /dev/sdb'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdb'(out) '1ATA_VBOX_HARDDISK_VB544d069c-abd3901e'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdb' [10122] exit with return code 0
OWNER 54321 /etc/udev/rules.d/99-oracle-asm-disks.rules:1
GROUP 54322 /etc/udev/rules.d/99-oracle-asm-disks.rules:1
MODE 0660 /etc/udev/rules.d/99-oracle-asm-disks.rules:1
LINK 'asm-disk1' /etc/udev/rules.d/99-oracle-asm-disks.rules:1
PROGRAM '/usr/lib/udev/scsi_id -g -u -d /dev/sdb' /etc/udev/rules.d/99-oracle-asm-disks.rules:2
starting '/usr/lib/udev/scsi_id -g -u -d /dev/sdb'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdb'(out) '1ATA_VBOX_HARDDISK_VB544d069c-abd3901e'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdb' [10123] exit with return code 0
PROGRAM '/usr/lib/udev/scsi_id -g -u -d /dev/sdb' /etc/udev/rules.d/99-oracle-asm-disks.rules:3
starting '/usr/lib/udev/scsi_id -g -u -d /dev/sdb'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdb'(out) '1ATA_VBOX_HARDDISK_VB544d069c-abd3901e'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdb' [10124] exit with return code 0
PROGRAM '/usr/lib/udev/scsi_id -g -u -d /dev/sdb' /etc/udev/rules.d/99-oracle-asm-disks.rules:4
starting '/usr/lib/udev/scsi_id -g -u -d /dev/sdb'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdb'(out) '1ATA_VBOX_HARDDISK_VB544d069c-abd3901e'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdb' [10125] exit with return code 0
handling device node '/dev/sdb1', devnum=b8:17, mode=0660, uid=54321, gid=54322
preserve permissions /dev/sdb1, 060660, uid=54321, gid=54322
preserve already existing symlink '/dev/block/8:17' to '../sdb1'
found 'b8:17' claiming '/run/udev/links/\x2fasm-disk1'
creating link '/dev/asm-disk1' to '/dev/sdb1'
preserve already existing symlink '/dev/asm-disk1' to 'sdb1'
found 'b8:17' claiming '/run/udev/links/\x2fdisk\x2fby-id\x2fata-VBOX_HARDDISK_VB544d069c-abd3901e-part1'
creating link '/dev/disk/by-id/ata-VBOX_HARDDISK_VB544d069c-abd3901e-part1' to '/dev/sdb1'
preserve already existing symlink '/dev/disk/by-id/ata-VBOX_HARDDISK_VB544d069c-abd3901e-part1' to '../../sdb1'
created db file '/run/udev/data/b8:17' for '/block/sdb/sdb1'
ACTION=add
DEVLINKS=/dev/asm-disk1 /dev/disk/by-id/ata-VBOX_HARDDISK_VB544d069c-abd3901e-part1
DEVNAME=/dev/sdb1
DEVPATH=/block/sdb/sdb1
DEVTYPE=partition
ID_ATA=1
ID_ATA_FEATURE_SET_PM=1
ID_ATA_FEATURE_SET_PM_ENABLED=1
ID_ATA_SATA=1
ID_ATA_SATA_SIGNAL_RATE_GEN2=1
ID_ATA_WRITE_CACHE=1
ID_ATA_WRITE_CACHE_ENABLED=1
ID_BUS=ata
ID_MODEL=VBOX_HARDDISK
ID_MODEL_ENC=VBOX\x20HARDDISK\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
ID_PART_ENTRY_DISK=8:16
ID_PART_ENTRY_NUMBER=1
ID_PART_ENTRY_OFFSET=2048
ID_PART_ENTRY_SCHEME=dos
ID_PART_ENTRY_SIZE=20969472
ID_PART_ENTRY_TYPE=0x83
ID_PART_TABLE_TYPE=dos
ID_REVISION=1.0
ID_SERIAL=VBOX_HARDDISK_VB544d069c-abd3901e
ID_SERIAL_SHORT=VB544d069c-abd3901e
ID_TYPE=disk
MAJOR=8
MINOR=17
SUBSYSTEM=block
TAGS=:systemd:
USEC_INITIALIZED=404668
Unload module index
Unloaded link configuration context.
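For reference, the first rule exercised in the trace above (line 1 of /etc/udev/rules.d/99-oracle-asm-disks.rules) has roughly the following shape. The PROGRAM command, RESULT string, owner/group/mode and symlink name are taken directly from the trace; the matching keys (KERNEL, SUBSYSTEM) are assumptions about how the rule was written:

```
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB544d069c-abd3901e", SYMLINK+="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"
```

The remaining three rules (trace lines for rules :2 through :4) would differ only in the RESULT string and symlink name for /dev/sdc1, /dev/sdd1 and /dev/sde1.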

Step 8) Reload the udev rules

Use the following command to reload the rules:

udevadm control --reload-rules
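Note that --reload-rules only makes udev re-read the rule files; it does not re-apply them to devices that already exist. If the symlinks do not appear after the reload, re-running the rules against the existing block devices usually does the trick (a sketch; run as root):

```shell
# Reload rule files, then re-run them for block devices that are
# already present (--reload-rules alone does not touch existing devices).
udevadm control --reload-rules
udevadm trigger --type=devices --subsystem-match=block --action=change
```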

Verify that the required symlinks were created and that the ownership and permission changes were applied to the devices:

[root@advait ~]# ls -rlt /dev/asm*
lrwxrwxrwx 1 root root 4 Jul 18 06:20 /dev/asm-disk2 -> sdc1
lrwxrwxrwx 1 root root 4 Jul 18 06:20 /dev/asm-disk3 -> sdd1
lrwxrwxrwx 1 root root 4 Jul 18 06:20 /dev/asm-disk4 -> sde1
lrwxrwxrwx 1 root root 4 Jul 18 06:20 /dev/asm-disk1 -> sdb1

The symlinks above are owned by root, but the underlying block devices are owned by oracle:dba:

[root@advait ~]# ls -rlt /dev/sd?1
brw-rw---- 1 root disk 8, 1 Jul 18 05:36 /dev/sda1
brw-rw---- 1 oracle dba 8, 33 Jul 18 06:20 /dev/sdc1
brw-rw---- 1 oracle dba 8, 49 Jul 18 06:20 /dev/sdd1
brw-rw---- 1 oracle dba 8, 65 Jul 18 06:20 /dev/sde1
brw-rw---- 1 oracle dba 8, 17 Jul 18 06:20 /dev/sdb1

Now ASM can discover the disks as /dev/asm-disk*, and these symlinks will persist across host reboots.
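You can also confirm per-device that the symlinks were recorded in the udev database. This sketch queries /dev/sdb1 (the device from the trace above); substitute any of the other partitions as needed:

```shell
# List the symlinks udev has recorded for this partition.
# For /dev/sdb1 the output should include asm-disk1 and the
# disk/by-id/ata-* link seen in the DEVLINKS line of the trace.
udevadm info --query=symlink --name=/dev/sdb1
```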

Hope this helps !!

References:

https://www.linux.com/news/udev-introduction-device-management-modern-linux-system
http://www.reactivated.net/writing_udev_rules.html