Oracle 12c Learning Series: In-database Archiving and Temporal Validity

One of the major challenges an Oracle DBA faces is how to deal effectively with historical data. If we look at the tables of a typical enterprise database today, the data goes back several years and most of it is inactive. The challenge is how to archive this data while keeping our queries running efficiently.

If we choose to archive old data outside the database on tape, the cost of storing it drops, but the older data is no longer available for one-off reports. Regulatory compliance also often requires that older historical data be accessible at all times.

If we choose to keep old data inside the database, the size of the database grows, and with it the size of backups and the storage cost. In addition, queries on bigger tables do not run efficiently, so there is a performance cost for queries, DMLs and DDLs (for example index creation and rebuild).

In 11g, older archived data was handled using the following two solutions:

  1. Partitioning – Distinguish the data as old or new based on the date of creation or some other date attribute in the table, and partition the table on that column. The DBA can then manually move older partitions to low-cost storage.
  2. Flashback data archive (the Total Recall feature in 11g) – This feature automatically tracks changes to data over a period of time and maintains the archived transactional data in a different tablespace, governed by a RETENTION parameter. Historical data can be queried with the flashback query AS OF clause, as illustrated below. Archived transactional data that has aged beyond RETENTION is purged automatically.
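For illustration, a flashback query against a table covered by a flashback data archive might look like the following (a sketch only; the SALES table, the predicate and the 30-day interval are placeholders, and it assumes flashback archiving is enabled on the table):

select * from sales
as of timestamp (systimestamp - interval '30' day)
where product = 'A';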

Oracle 12c provides additional solutions to these challenges. We have two ways to handle archive data in Oracle 12c:

  1. In-database archiving
  2. Temporal validity

In-Database Archiving

Until now, it was very difficult to distinguish active data from inactive data at the row level. In Oracle 12c we can, by attaching a property to every row of a table that marks it as inactive; if the row is not marked inactive, it is active. Setting this property at the row level separates active from inactive data in a table and lets you archive rows simply by marking them inactive. The key point is that the data remains in the table and can be compressed, but to applications the inactive portion is invisible. A session-level parameter also lets us see both active and inactive data when required.

In-database archiving is enabled at the table level, either when the table is created or later by altering it. The following is an example of creating a table with in-database archiving:


SQL> create table sales_archive
 2 (year    number(4),
 3 product  varchar2(10),
 4 amt      number(10,2))
 5 row archival;

Table created.

When you create a table with the ROW ARCHIVAL clause (line 5 in the example above), Oracle automatically adds an extra column, ORA_ARCHIVE_STATE, to the table. The column is added as a hidden column, so if you describe the table you won't see it.


SQL> desc sales_archive
 Name                    Null?    Type
 ----------------------- -------- ----------------
 YEAR                             NUMBER(4)
 PRODUCT                          VARCHAR2(10)
 AMT                              NUMBER(10,2)

But you can check that the column has been added using the DBA_TAB_COLS view.


SQL> select table_name, column_name, column_id, HIDDEN_COLUMN from dba_tab_cols where table_name = 'SALES_ARCHIVE';

TABLE_NAME           COLUMN_NAME          COLUMN_ID  HID
-------------------- -------------------- ---------- ---
SALES_ARCHIVE        ORA_ARCHIVE_STATE               YES
SALES_ARCHIVE        YEAR                          1 NO
SALES_ARCHIVE        PRODUCT                       2 NO
SALES_ARCHIVE        AMT                           3 NO

Similarly, you can enable an existing table for row archival using ALTER TABLE, as shown below.


SQL> alter table sales row archival;

Table altered.

SQL> select table_name, column_name, column_id, HIDDEN_COLUMN from dba_tab_cols where table_name = 'SALES';

TABLE_NAME           COLUMN_NAME          COLUMN_ID  HID
-------------------- -------------------- ---------- ---
SALES                YEAR                          1 NO
SALES                PRODUCT                       2 NO
SALES                AMT                           3 NO
SALES                ORA_ARCHIVE_STATE               YES

The ORA_ARCHIVE_STATE column is the key to distinguishing active data from inactive data. The column can take two values: 0 and 1. By default a newly inserted row is active, denoted by the value 0 in ORA_ARCHIVE_STATE. When rows are rarely accessed and no longer updated, they are considered inactive and are denoted by the value 1 (or any non-zero value) in ORA_ARCHIVE_STATE.

If you want to check the value of the ORA_ARCHIVE_STATE column, you have to select it explicitly in your SELECT statement.


SQL> select year, product, amt, ORA_ARCHIVE_STATE from sales_archive;

      YEAR PRODUCT           AMT ORA_ARCHIV
---------- ---------- ---------- ----------
      2001 A                 100 0
      2002 B                 200 0
      2003 C                 300 0
      2004 D                 400 0
      2005 E                 500 0
      2006 F                 600 0
      2007 G                 700 0

7 rows selected.

As you can see, the value is 0 in all the selected rows, which means all this data is active.

Note that ORA_ARCHIVE_STATE is not set to 1 automatically when rows become rarely accessed. A DBA or the owner of the data has to decide which rows are inactive and manually update the value of the ORA_ARCHIVE_STATE column.

The following DML shows how to mark data inactive:

SQL> update sales_archive set ORA_ARCHIVE_STATE = DBMS_ILM.ARCHIVESTATENAME(1) where year = 2001;

1 row updated.

SQL> commit;

Commit complete.

In the above DML we used the DBMS_ILM.ARCHIVESTATENAME function, which is the documented way to set the value of ORA_ARCHIVE_STATE. You can also update this column directly to any non-zero value and the result will be the same. Here we updated the one record for year 2001 and marked it as inactive (archived).

If you query the table again, you won’t see the record that we updated above.


SQL> select year, product, amt, ORA_ARCHIVE_STATE from sales_archive;

      YEAR PRODUCT           AMT ORA_ARCHIV
---------- ---------- ---------- ----------
      2002 B                 200 0
      2003 C                 300 0
      2004 D                 400 0
      2005 E                 500 0
      2006 F                 600 0
      2007 G                 700 0

6 rows selected.

This is because, by default, Oracle shows only active records; any record marked inactive is not returned. Oracle does this by adding a simple filter to your queries, which we can see in the explain plan below.

SQL> explain plan for
 2 select year, product, amt, ORA_ARCHIVE_STATE from sales_archive;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------
Plan hash value: 4043476784

----------------------------------------------------------------------------------
| Id | Operation         | Name          | Rows | Bytes | Cost (%CPU) | Time     |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT   |               | 6    | 12210 | 3 (0)       | 00:00:01 |
|* 1 | TABLE ACCESS FULL | SALES_ARCHIVE | 6    | 12210 | 3 (0)       | 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("SALES_ARCHIVE"."ORA_ARCHIVE_STATE"='0')

Note
-----
 - dynamic statistics used: dynamic sampling (level=2)

17 rows selected.

You can view archived records again by setting the ROW ARCHIVAL VISIBILITY parameter to ALL at the session level. By default its value is ACTIVE, which is why we saw only active records. The following shows this:


SQL> alter session set row archival visibility = ALL;

Session altered.

SQL> select year, product, amt, ORA_ARCHIVE_STATE from sales_archive;

YEAR       PRODUCT           AMT ORA_ARCHIV
---------- ---------- ---------- ----------
      2001 A                 100 1
      2002 B                 200 0
      2003 C                 300 0
      2004 D                 400 0
      2005 E                 500 0
      2006 F                 600 0
      2007 G                 700 0

7 rows selected.

After setting ROW ARCHIVAL VISIBILITY to ALL, we can see the archived rows. This is a great advantage of in-database archiving: if an ad-hoc report needs access to archived records, we can simply set ROW ARCHIVAL VISIBILITY to ALL at the beginning of the report and get the desired result. You can then set visibility back to ACTIVE.

SQL> alter session set row archival visibility = ACTIVE;

Session altered.

You can also restore archived data back to active by running an UPDATE on the archived rows with the DBMS_ILM.ARCHIVESTATENAME function, this time passing 0 as the value. You also need to set ROW ARCHIVAL VISIBILITY to ALL before you run the UPDATE, otherwise the archived rows won't be visible.
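For example, reusing the SALES_ARCHIVE example above, the year-2001 row archived earlier can be brought back to active like this:

SQL> alter session set row archival visibility = ALL;

Session altered.

SQL> update sales_archive set ORA_ARCHIVE_STATE = DBMS_ILM.ARCHIVESTATENAME(0) where year = 2001;

1 row updated.

SQL> commit;

Commit complete.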

You can disable row archival for a table with ALTER TABLE ... NO ROW ARCHIVAL. When you do, the ORA_ARCHIVE_STATE column is dropped immediately and all the data becomes visible to your SELECT statements.

SQL> alter table sales no row archival;

Table altered.

Note that if you use CTAS (CREATE TABLE AS SELECT) to create a new table from an existing table, the new table will contain active as well as inactive data. In-database archiving applies only to SELECTs and DMLs, not to DDLs.

Temporal Validity and Temporal history

With in-database archiving we distinguish active rows from inactive rows using the row archival column (ORA_ARCHIVE_STATE). You can achieve similar functionality using temporal validity. Many applications today have data that depends on dates relevant to the underlying business; for example, an insurance application records an insurance start date and an insurance end date, representing the valid time during which the policy is active for a customer. Such date attributes denoting the valid time of a record are the basis of temporal validity. Temporal validity lets you keep active and inactive data together in the same table while providing the benefits of archiving inactive data, and it helps in cases where it is critical to know when certain data becomes valid and when it becomes invalid.

Similar to the business-relevant date attributes an application keeps in its tables, Oracle provides temporal validity by creating its own date attributes (a start date and an end date) that represent the valid-time dimension during which a record is active. An application user or DBA decides which records are active or inactive by updating the start date and end date of each record; records whose end date has passed are inactive.

By applying an implicit filter on the valid-time dimension, queries can show rows that are currently valid or that will become valid in the future, and hide rows whose facts are not currently valid.

Temporal History

Before we go further, it's worth understanding temporal history and how temporal validity differs from it. Temporal history is what was called Flashback Data Archive (FDA) in the previous release. To understand the difference, consider the following.

Temporal validity dates and times are different from the date and time at which the record is created. The date and time at which a record is created in the database is its temporal history. For example, if a new customer's insurance policy starts today, we might create the record for that policy tomorrow or a week later, but still record today's date as the insurance start date. In this case the temporal validity date is today, but since the record is created a week later, the temporal history date is a week later.

We can use temporal history to get past data. We can also combine temporal history with temporal validity to get rows that were valid in the past.

Defining temporal validity

You define a valid-time dimension at table creation time, or by altering a table. To create the valid-time dimension, you specify the PERIOD FOR clause in the CREATE TABLE statement.

The following example creates a table with a valid-time dimension, explicitly defining the two date-time columns USER_TIME_START and USER_TIME_END:


SQL> create table policy
2 (cust_id         number,
3 policy_no        number,
4 policy_type      varchar2(20),
5 user_time_start  date,
6 user_time_end    date,
7 period for user_time (user_time_start, user_time_end));

Table created.

In the above example we explicitly specified the valid-time dimension columns in the CREATE TABLE statement. But even if we don't specify the column names, valid-time temporal creates the required columns:


SQL> create table policy
2 (cust_id    number,
3 policy_no   number,
4 policy_type varchar2(20),
5 period for user_time);

Table created.

A valid-time dimension, represented by the new PERIOD FOR clause, consists of two date-time columns. If you don't specify the columns explicitly, as in the second example, Oracle automatically creates two hidden columns whose names start with the prefix given in the PERIOD FOR clause and end with "START" and "END". So in the second example the two hidden columns created are USER_TIME_START and USER_TIME_END (the same as in the first example).

The only difference between the first example and the second is the visibility of the valid-time dimension columns. In the first table the columns are visible because we created them explicitly, whereas in the second example they are hidden.
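As with ORA_ARCHIVE_STATE earlier, you can confirm the hidden columns through DBA_TAB_COLS. For the second (implicitly defined) POLICY table, a query along these lines should list USER_TIME_START and USER_TIME_END with HIDDEN_COLUMN = YES:

SQL> select column_name, column_id, hidden_column from dba_tab_cols where table_name = 'POLICY';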

If the columns are hidden, you need to specify them explicitly in SELECTs, UPDATEs and INSERTs. The following example shows an insert into the POLICY table where the valid-time dimension columns were created implicitly:


SQL> insert into policy (cust_id, policy_no, policy_type, user_time_start, user_time_end) values (123, 3424, 'HEALTH',sysdate, null);

1 row created.

SQL> select * from policy;

   CUST_ID POLICY_NO  POLICY_TYPE
---------- ---------- --------------------
       123 3424       HEALTH

SQL> select cust_id, policy_no, policy_type, user_time_start from policy;

   CUST_ID POLICY_NO  POLICY_TYPE          USER_TIME_START
---------- ---------- -------------------- ----------------------------------------
       123 3424       HEALTH               20-JUN-16 12.33.53.000000 AM +05:30

As you can see, "select *" does not show the hidden valid-time dimension columns; in the second statement we select them explicitly and see the values.

Now that we have seen how to create valid-time columns to separate active data from inactive data, let's look at how to filter on these columns and select only valid data. There are two ways to select active data by filtering on the valid-time columns:

  • PERIOD FOR clause
  • The DBMS_FLASHBACK_ARCHIVE package

PERIOD FOR clause

To filter valid data you can use a SELECT statement with the new PERIOD FOR clause. One set of data in the table is valid based on the start and end dates of its valid-time period, and another set in the same table is invalid because it falls outside that period. Both sets of rows reside in the same table; however, by controlling visibility, you can limit queries and DMLs to see only the valid data.

For each record you insert into the table, you specify the valid-time start date and end date; these dates represent whether the data is active and are entered by end users. The date on which the record is actually inserted into the table is the transaction time, which is the temporal history in Oracle 12c.
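For example, a policy valid from the start of 2015 through the end of 2016 could be recorded as follows (the values are illustrative only):

insert into policy (cust_id, policy_no, policy_type, user_time_start, user_time_end)
values (124, 3425, 'AUTO', to_date('01-JAN-2015','DD-MON-YYYY'), to_date('31-DEC-2016','DD-MON-YYYY'));

commit;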

You can use either the AS OF PERIOD FOR clause or the VERSIONS PERIOD FOR clause to display valid data. AS OF PERIOD FOR is used when we want to see data valid on a specific date, whereas VERSIONS PERIOD FOR is used when we want to see data valid between two dates (a range).

The following example displays all policies that were valid as of 01-Jan-2016:

select * from policy AS OF PERIOD FOR user_time to_date('01-JAN-2016','DD-MON-YYYY');

Here is another example, which shows data valid between 01-JAN-2016 and 01-FEB-2016:

select * from policy VERSIONS PERIOD FOR user_time between to_date('01-JAN-2016','DD-MON-YYYY') and to_date('01-FEB-2016','DD-MON-YYYY');

DBMS_FLASHBACK_ARCHIVE

You can also use the DBMS_FLASHBACK_ARCHIVE package to show valid data. Its ENABLE_AT_VALID_TIME procedure sets the visibility of data at the session level. Once visibility is set, a SELECT against a valid-time temporal table shows the data that was valid at the specified time.

In the following example we set the visibility of data as of a given time, 01-JAN-2016. This means any SELECT statement or DML in the same session will see the data valid as of 01-JAN-2016.


SQL> exec DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME('ASOF',to_date('01-JAN-2016','DD-MON-YYYY'));

PL/SQL procedure successfully completed.

You can also set the visibility to the current time, which shows the data that is currently valid:


SQL> exec DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME('CURRENT');

PL/SQL procedure successfully completed.
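To go back to seeing all rows regardless of validity, the visibility can be set to ALL (a quick sketch):

SQL> exec DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME('ALL');

PL/SQL procedure successfully completed.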

Note that this visibility setting affects only SELECTs and DMLs; it does not affect DDL, which sees the complete data (valid as well as invalid). Also, in-database archiving and temporal validity are NOT supported in a CDB; they are supported only in non-CDB databases.

Hope this helps !!

Fixing SQL plans on ADG using SQL Profiles

With an Active Data Guard (ADG) setup, many read-only applications run on ADG, and sometimes the SQLs used by these applications run into bad plans. ADG being a read-only database, it is not possible to create a baseline or a profile on ADG.

One way to get a good plan for SQLs running on ADG is to run them on the primary first, fix the plan by creating a profile or baseline, and then move the SQLs back to ADG. In practice I have seen many times that dev teams are not very flexible about changing the configuration to run the queries on the primary. Either their configuration setup is complex and takes time and effort to change, or, as is common in bigger enterprises, they have a central configuration file which, when changed, changes the configuration for an entire region; in that case the complete software stack for the region would run on the primary, not just one database.

Another way to make the queries run on the primary is to point the ADG CNAME to the primary. That way all the applications that are supposed to run on ADG start running on the primary. The downside is the increased load on the primary, since all ADG applications now connect to it. It is too big a change to fix one SQL on ADG.

This article is about fixing plans on ADG using a profile, without making any other change. We don't have to run the query on the primary to fix the plan.

Fixing SQL plan on ADG

In an old article of mine, "Fixing SQL Plans: The hard way – Part 1", I described the DBMS_SQLTUNE.IMPORT_SQL_PROFILE procedure. It takes the internal SQL hints that the optimizer understands as input and creates a profile to fix the plan.

In that method I provide the SQL text as input, obtaining it by selecting the SQL_TEXT column from either the V$SQLTEXT view or the DBA_HIST_SQLTEXT view.

Following is the partial code of the PL/SQL block I used to create the profile:

for sql_rec in (
 select t.sql_id, t.sql_text
 from dba_hist_sqltext t, dba_hist_sql_plan p
 where t.sql_id = p.sql_id
 and p.sql_id = '&SQL_ID_TO_FIX'
 and p.plan_hash_value = &BAD_PLAN_HASH_VALUE
 and p.parent_id is null
)
loop
 DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
 sql_text => sql_rec.sql_text,
 profile => ar_profile_hints,
 name => 'PROFILE_33fndgzsas09k');

end loop;

Why we cannot use this method for ADG?

We cannot use this method for ADG. You will find the required SQL text in the V$SQLTEXT view on ADG, but you cannot create a SQL profile on ADG because it is a read-only database.
Also, you cannot create the profile on the primary, because the SQL text is not present on the primary.

So, how do we fix this?

Method 1:

One method I thought of for fixing the SQL is to create a database link from the primary to ADG (say adg_db_link) and use that DB link to get the SQL text from the standby. The PL/SQL code above then looks like the following. Note that I have changed the DBA_HIST* tables to V$ views, because the ADG data is present only in the V$ views; the DBA_HIST* tables contain data from the primary database only, captured as part of AWR snapshots.

for sql_rec in (
 select t.sql_id, t.sql_text
 from v$sqltext@adg_db_link t, v$sql_plan@adg_db_link p
 where t.sql_id = p.sql_id
 and p.sql_id = '&SQL_ID_TO_FIX'
 and p.plan_hash_value = &BAD_PLAN_HASH_VALUE
 and p.parent_id is null
)
loop
 DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
 sql_text => sql_rec.sql_text,
 profile => ar_profile_hints,
 name => 'PROFILE_33fndgzsas09k');

end loop;

You can run the complete PL/SQL block on the primary; the DB link fetches the required SQL text from ADG, and the SQL profile is created on the primary.

Once this SQL profile is created, it should be picked up by the SQL query on ADG. Note that you need to purge the SQL from the shared pool of ADG so that the SQL goes for a hard parse and picks up the profile.

Method 2:

Another method is to pass the SQL text directly as input to the DBMS_SQLTUNE.IMPORT_SQL_PROFILE procedure. This is the method I implemented, and it worked for me.

Let's take an example.

Following is the SQL text whose plan I want to fix. The actual SQL in my environment was different, very big and complex, and I cannot share its real text, so I have simplified it and changed the table and column names to make it easy to follow.

select * from ( 
          select this_.col1 as col1,
                 this_.col2 as col2, 
                 this_.col3 as col3, 
                 this_.col4 as col4
          from TAB1 this_ 
          inner join TAB2 compaddres3_ on this_.col2 = compaddres3_.col2 
          inner join TAB3 compshipme1_ on this_.col1=compshipme1_.col1
          where this_.col1 in (:1) 
          and   this_.col2 in (:2 , :3) 
          and   compshipme1_.col1 in (:4 )
          and   this_.col8 = 'OPEN'
          and   this_.col5>=:5 
          and   this_.col5<=:6 
          order by this_.col1 asc ) 
where rownum <= :7

Assuming that the existing plan in the ADG database is bad, we need to find a good plan for this SQL and create a profile so that the new, good plan is picked up on ADG.
We are going to do that without running this SQL on the primary and without pointing the ADG CNAME to the primary. We are also not going to make any code changes to add hints.

Following are the steps to fix the plan on ADG

1) Get a good plan for this SQL

You can add hints to the SQL text and do an EXPLAIN PLAN on the primary to check whether the plan looks good. Alternatively, if you want to be sure, you can run the hinted SQL on ADG to confirm that it runs efficiently.

I used a few index hints in my SQL to make sure the correct index gets picked, and did an EXPLAIN PLAN for the SQL to check the plan.

explain plan for
select * from ( 
          select /*+ index(tab1 I_UNIQ_COL2) */ this_.col1 as col1,
                 this_.col2 as col2, 
                 this_.col3 as col3, 
                 this_.col4 as col4
          from TAB1 this_ 
          inner join TAB2 compaddres3_ on this_.col2 = compaddres3_.col2 
          inner join TAB3 compshipme1_ on this_.col1=compshipme1_.col1
          where this_.col1 in (:1) 
          and   this_.col2 in (:2 , :3) 
          and   compshipme1_.col1 in (:4 )
          and   this_.col8 = 'OPEN'
          and   this_.col5>=:5 
          and   this_.col5<=:6 
          order by this_.col1 asc ) 
where rownum <= :7

I got the following plan after applying the required hints:

Plan hash value: 2524091007

-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name         | Rows | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------
| 0   | SELECT STATEMENT                          |              | 134  | 1524K | 4134   (3) | 00:00:50 |       |       |
|* 1  |  COUNT STOPKEY                            |              |      |       |            |          |       |       |
| 2   |   VIEW                                    |              | 134  | 1524K | 4134   (3) | 00:00:50 |       |       |
|* 3  |    SORT ORDER BY STOPKEY                  |              | 134  | 71154 | 4134   (3) | 00:00:50 |       |       |
|* 4  |     FILTER                                |              |      |       |            |          |       |       |
| 5   |      NESTED LOOPS                         |              | 134  | 71154 | 4133   (3) | 00:00:50 |       |       |
| 6   |       NESTED LOOPS                        |              | 134  | 43550 | 3731   (3) | 00:00:45 |       |       |
| 7   |        TABLE ACCESS BY INDEX ROWID        | TAB1         | 1    | 22    | 1      (0) | 00:00:01 |       |       |
|* 8  |         INDEX UNIQUE SCAN                 | I_UNIQ_COL2  | 1    |       | 0      (0) | 00:00:01 |       |       |
| 9   |        TABLE ACCESS BY GLOBAL INDEX ROWID | TAB2         | 134  | 40602 | 3730   (3) | 00:00:45 | ROWID | ROWID |
|* 10 |         INDEX RANGE SCAN                  | I_TAB2_COL2  | 242  |       | 3507   (3) | 00:00:43 |       |       |
| 11  |        TABLE ACCESS BY GLOBAL INDEX ROWID | TAB3         | 1    | 206   | 3      (0) | 00:00:01 | ROWID | ROWID |
|* 12 |         INDEX RANGE SCAN                  | PK_TAB3_COL1 | 1    |       | 2      (0) | 00:00:01 |       |       |
-------------------------------------------------------------------------------------------------------------------------

2) Obtain outline hints for SQL

Once you have done the EXPLAIN PLAN for the SQL, you can use FORMAT=>'ADVANCED' to get the necessary outline hints:

select * from table(dbms_xplan.display(format=>'advanced'));

/*+
 BEGIN_OUTLINE_DATA
 USE_NL(@"SEL$54D64B3C" "COMPADDRES3_"@"SEL$2")
 USE_NL(@"SEL$54D64B3C" "THIS_"@"SEL$2")
 LEADING(@"SEL$54D64B3C" "COMPSHIPME1_"@"SEL$3" "THIS_"@"SEL$2" "COMPADDRES3_"@"SEL$2")
 INDEX_RS_ASC(@"SEL$54D64B3C" "COMPADDRES3_"@"SEL$2" ("TAB2"."COL2"))
 INDEX_RS_ASC(@"SEL$54D64B3C" "THIS_"@"SEL$2" ("TAB1"."COL1" "TAB1"."COL2" "TAB1"."COL3" "TAB1"."COL4"))
 INDEX_RS_ASC(@"SEL$54D64B3C" "COMPSHIPME1_"@"SEL$3" ("TAB3"."COL1"))
 NO_ACCESS(@"SEL$1" "from$_subquery$_001"@"SEL$1")
 OUTLINE(@"SEL$2")
 OUTLINE(@"SEL$3")
 MERGE(@"SEL$2")
 OUTLINE(@"SEL$64EAE176")
 OUTLINE(@"SEL$4")
 OUTLINE_LEAF(@"SEL$1")
 MERGE(@"SEL$64EAE176")
 OUTLINE_LEAF(@"SEL$54D64B3C")
 ALL_ROWS
 OPT_PARAM('_optimizer_extended_cursor_sharing_rel' 'none')
 DB_VERSION('11.2.0.4')
 OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
 IGNORE_OPTIM_EMBEDDED_HINTS
 END_OUTLINE_DATA
 */

Step 3) Use the outline data to create a profile for the query running on ADG

The SQL profile has to be created on the primary, since we cannot create a SQL profile on ADG (a read-only database).
The method described in "Fixing SQL Plans: The hard way – Part 1" takes the SQL_ID as input and queries V$SQLTEXT or the DBA_HIST* tables to get the required SQL text, which is then supplied to the DBMS_SQLTUNE.IMPORT_SQL_PROFILE procedure.

But since we don't have the SQL text on the primary, we supply the SQL text directly to the DBMS_SQLTUNE.IMPORT_SQL_PROFILE procedure, as shown below.

Note: if you have single quotes in your SQL text, you need to change each of them to two single quotes; the first single quote acts as an escape character.

In my SQL text I have a single-quoted literal, so I have to escape it with two single quotes as shown below.

select * from ( 
       select this_.col1 as col1,
              this_.col2 as col2, 
              this_.col3 as col3, 
              this_.col4 as col4
       from TAB1 this_ 
       inner join TAB2 compaddres3_ on this_.col2 = compaddres3_.col2 
       inner join TAB3 compshipme1_ on this_.col1=compshipme1_.col1
       where this_.col1 in (:1) 
       and   this_.col2 in (:2 , :3) 
       and   compshipme1_.col1 in (:4 )
       and   this_.col8 = ''OPEN''
       and   this_.col5>=:5 
       and   this_.col5<=:6 
       order by this_.col1 asc ) 
where rownum <= :7

The above SQL will be given as input to the DBMS_SQLTUNE.IMPORT_SQL_PROFILE procedure.
Similarly, any single quotes in the outline hints we extracted need to be changed to two single quotes, as shown below.

Also, each outline hint needs to be enclosed in single quotes.

'BEGIN_OUTLINE_DATA',
'USE_NL(@"SEL$54D64B3C" "COMPADDRES3_"@"SEL$2")',
'USE_NL(@"SEL$54D64B3C" "THIS_"@"SEL$2")',
'LEADING(@"SEL$54D64B3C" "COMPSHIPME1_"@"SEL$3" "THIS_"@"SEL$2" "COMPADDRES3_"@"SEL$2")',
'INDEX_RS_ASC(@"SEL$54D64B3C" "COMPADDRES3_"@"SEL$2" ("TAB2"."COL2"))',
'INDEX_RS_ASC(@"SEL$54D64B3C" "THIS_"@"SEL$2" ("TAB1"."COL1" "TAB1"."COL2" "TAB1"."COL3" "TAB1"."COL4"))',
'INDEX_RS_ASC(@"SEL$54D64B3C" "COMPSHIPME1_"@"SEL$3" ("TAB3"."COL1"))',
'NO_ACCESS(@"SEL$1" "from$_subquery$_001"@"SEL$1")',
'OUTLINE(@"SEL$2")',
'OUTLINE(@"SEL$3")',
'MERGE(@"SEL$2")',
'OUTLINE(@"SEL$64EAE176")',
'OUTLINE(@"SEL$4")',
'OUTLINE_LEAF(@"SEL$1")',
'MERGE(@"SEL$64EAE176")',
'OUTLINE_LEAF(@"SEL$54D64B3C")',
'ALL_ROWS',
'OPT_PARAM(''_optimizer_extended_cursor_sharing_rel'' ''none'')',
'DB_VERSION(''11.2.0.4'')',
'OPTIMIZER_FEATURES_ENABLE(''11.2.0.4'')',
'IGNORE_OPTIM_EMBEDDED_HINTS',
'END_OUTLINE_DATA'

Note that we have enclosed the hints in single quotes and put a comma at the end of every hint except the last one.

So we now have the outline hints and the SQL text that will be provided as input to the DBMS_SQLTUNE.IMPORT_SQL_PROFILE procedure.

Step 4) Create the required profile on the primary

Below is the code for creating the SQL profile on the primary with the required inputs.

declare
ar_profile_hints sys.sqlprof_attr;
begin
  ar_profile_hints := sys.sqlprof_attr(
     'BEGIN_OUTLINE_DATA',
     'USE_NL(@"SEL$54D64B3C" "COMPADDRES3_"@"SEL$2")',
     'USE_NL(@"SEL$54D64B3C" "THIS_"@"SEL$2")',
     'LEADING(@"SEL$54D64B3C" "COMPSHIPME1_"@"SEL$3" "THIS_"@"SEL$2" "COMPADDRES3_"@"SEL$2")',
     'INDEX_RS_ASC(@"SEL$54D64B3C" "COMPADDRES3_"@"SEL$2" ("TAB2"."COL2"))',
     'INDEX_RS_ASC(@"SEL$54D64B3C" "THIS_"@"SEL$2" ("TAB1"."COL1" "TAB1"."COL2" "TAB1"."COL3" "TAB1"."COL4"))',
     'INDEX_RS_ASC(@"SEL$54D64B3C" "COMPSHIPME1_"@"SEL$3" ("TAB3"."COL1"))',
     'NO_ACCESS(@"SEL$1" "from$_subquery$_001"@"SEL$1")',
     'OUTLINE(@"SEL$2")',
     'OUTLINE(@"SEL$3")',
     'MERGE(@"SEL$2")',
     'OUTLINE(@"SEL$64EAE176")',
     'OUTLINE(@"SEL$4")',
     'OUTLINE_LEAF(@"SEL$1")',
     'MERGE(@"SEL$64EAE176")',
     'OUTLINE_LEAF(@"SEL$54D64B3C")',
     'ALL_ROWS',
     'OPT_PARAM(''_optimizer_extended_cursor_sharing_rel'' ''none'')',
     'DB_VERSION(''11.2.0.4'')',
     'OPTIMIZER_FEATURES_ENABLE(''11.2.0.4'')',
     'IGNORE_OPTIM_EMBEDDED_HINTS',
     'END_OUTLINE_DATA'
 );
 DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
 sql_text => 'select * from ( 
                    select this_.col1 as col1,
                          this_.col2 as col2, 
                          this_.col3 as col3, 
                          this_.col4 as col4
                   from TAB1 this_ 
                   inner join TAB2 compaddres3_ on this_.col2 = compaddres3_.col2 
                   inner join TAB3 compshipme1_ on this_.col1=compshipme1_.col1
                   where this_.col1 in (:1) 
                   and   this_.col2 in (:2 , :3) 
                   and   compshipme1_.col1 in (:4 )
                   and   this_.col8 = ''OPEN''
                   and   this_.col5>=:5 
                   and   this_.col5<=:6 
                   order by this_.col1 asc ) 
             where rownum <= :7',
 profile => ar_profile_hints,name => 'PROFILE_d7w7fruzwyh2s'
 );
end;
/

You can run the above code on the primary to create the required profile.
Once the profile is created, purge the SQL on ADG using the sys.dbms_shared_pool.purge procedure.
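A sketch of how that purge might look (the address and hash_value come from V$SQLAREA for the SQL in question; the substitution variables here are placeholders):

SQL> select address, hash_value from v$sqlarea where sql_id = '&SQL_ID_TO_FIX';

SQL> exec sys.dbms_shared_pool.purge('&ADDRESS,&HASH_VALUE', 'C');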

The next time the SQL runs on ADG, it automatically picks up the newly created profile, and the plan is the one we tested earlier with the hints.

I could see the following in the Note section when I ran DBMS_XPLAN.DISPLAY_CURSOR for the statement on ADG.
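The check itself was along these lines (the SQL_ID substitution variable is a placeholder):

select * from table(dbms_xplan.display_cursor('&SQL_ID', null, 'TYPICAL'));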

Note
-----
   - SQL profile PROFILE_d7w7fruzwyh2s used for this statement

Hope this helps !!

Reference

https://avdeo.com/2012/06/20/fixing-sql-plans-the-hard-way-part-1/

https://avdeo.com/2012/07/04/fixing-sql-plans-the-hard-way-part-2/

https://avdeo.com/2012/07/13/fixing-sql-plans-the-hard-way-part-3/

 

Using UDev to configure ASM disks

This is a small article on using udev on RHEL 7. I have a VirtualBox VM with RHEL 7, and I am configuring ASM disk groups.

Before we configure ASM disk groups, we need disks/partitions available at the OS level that ASM can recognize. Oracle provides a utility called ASMLib which can be installed and used very easily to configure disks/partitions at the OS level.

ASMLib stamps the header of the partition/disk at the OS level with an Oracle format, making it easily detected by an ASM instance performing disk discovery. You can go through ASMLib installation, configuration and usage in an article written by my friend Anand Prakash: https://aprakash.wordpress.com/2016/05/19/oracle-asmlib

In this article we will focus on using udev to set up ASM disks.

Let’s start with some information on UDev

What is Udev?

Udev is the device manager for the Linux 2.6 kernel that creates/removes device nodes in the /dev directory dynamically. It is the successor of devfs and hotplug. It runs in userspace and the user can change device names using Udev rules.

Why Do We Need It?

In older kernels, the /dev directory contained static device files. But with dynamic device creation, device nodes are created only for those devices actually present in the system. Let us look at the disadvantages of the static /dev directory, which led to the development of udev.

Problems Identifying the Exact Hardware Device for a Device Node in /dev

The kernel assigns a major/minor number pair when it detects a hardware device while booting the system. Consider two hard disks connected so that one is the master and the other the slave; the Linux system will call them /dev/hda and /dev/hdb. Now if we interchange the disks, the device names change. This makes it difficult to identify the correct device for the available static device node, and the situation gets worse when a bunch of hard disks are connected to the system.

Udev provides a persistent device naming system through the /dev directory, making it easier to identify the device.

Huge Number of Device Nodes in /dev

In the static model of device node creation, no method was available to identify the hardware devices actually present in the system. So, device nodes were created for all the devices that Linux was known to support at the time. The huge mess of device nodes in /dev made it difficult to identify the devices actually present in the system.

Not Enough Major/Minor Number Pairs

The number of static device nodes to be included increased a lot in recent times and the 8-bit scheme, that was used, proved to be insufficient for handling all the devices. As a result the major/minor number pairs started running out.

Working of Udev

The Udev daemon listens to the netlink socket that the kernel uses for communicating with user space applications. The kernel will send a bunch of data through the netlink socket when a device is added to, or removed from a system. The Udev daemon catches all this data and will do the rest, i.e., device node creation, module loading etc.

Kernel Device Event Management

  • When bootup is initialized, the /dev directory is mounted in tmpfs.
  • After that, Udev will copy the static device nodes from /lib/udev/devices to the /dev directory.
  • The Udev daemon then runs and collects uevents from the kernel, for all the devices connected to the system.
  • The Udev daemon will parse the uevent data and it will match the data with the rules specified in /etc/udev/rules.d.
  • It will create the device nodes and symbolic links for the devices as specified in the rules.
  • The Udev daemon reads the rules from /etc/udev/rules.d/*.rules and stores them in memory.
  • Udev will receive an inotify event if any rules were changed. It will read the changes and update the memory.

Let's start by creating disks at the OS level. VirtualBox has commands that create block devices at the OS level.

Using Udev to configure disks

Step 1) Creating Block devices at OS level

I am creating 4 disks of 10GB each using the following commands:

VBoxManage createhd --filename asm1.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename asm2.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename asm3.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename asm4.vdi --size 10240 --format VDI --variant Fixed

Step 2) Attaching the disk to the correct storage

VBoxManage storageattach 12102 --storagectl "SATA" --port 1 --device 0 --type hdd --medium asm1.vdi --mtype shareable
VBoxManage storageattach 12102 --storagectl "SATA" --port 2 --device 0 --type hdd --medium asm2.vdi --mtype shareable
VBoxManage storageattach 12102 --storagectl "SATA" --port 3 --device 0 --type hdd --medium asm3.vdi --mtype shareable
VBoxManage storageattach 12102 --storagectl "SATA" --port 4 --device 0 --type hdd --medium asm4.vdi --mtype shareable

12102 is the name of the virtual machine. You can see the virtual machine name in the screenshot below.

(Screenshot: VirtualBox Manager showing the virtual machine name)

Step 3) (optional) Making disk sharable

This step is optional and is required only if you are using an ASM cluster, in which case the ASM disks must be shareable since they are attached to all machines in the cluster. You can use the following commands to make the disks shareable:

VBoxManage modifyhd asm1.vdi --type shareable
VBoxManage modifyhd asm2.vdi --type shareable
VBoxManage modifyhd asm3.vdi --type shareable
VBoxManage modifyhd asm4.vdi --type shareable

Step 4) Check that the disks now show up in VirtualBox

The following screenshot shows that the disks are now attached to the correct virtual machine.

(Screenshot: VirtualBox storage settings showing the attached disks)

Also, after you log in to the virtual machine, you should be able to see the new devices under the /dev directory.

[root@advait ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8, 0 Jul 18 05:36 /dev/sda
brw-rw---- 1 root disk 8, 1 Jul 18 05:36 /dev/sda1
brw-rw---- 1 root disk 8, 2 Jul 18 05:36 /dev/sda2
brw-rw---- 1 root disk 8, 16 Jul 18 05:36 /dev/sdb
brw-rw---- 1 root disk 8, 32 Jul 18 05:36 /dev/sdc
brw-rw---- 1 root disk 8, 48 Jul 18 05:36 /dev/sdd
brw-rw---- 1 root disk 8, 64 Jul 18 05:36 /dev/sde

/dev/sda, /dev/sda1 and /dev/sda2 belong to the main device used for the virtual machine's OS.
/dev/sdb, /dev/sdc, /dev/sdd and /dev/sde are the devices we added above.

Step 5) Create partitions on the new devices

You need to create the partitions using the fdisk utility. I am showing the commands for one disk; you can repeat the same for the remaining disks.

[root@advait ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xbda01838.

Command (m for help): n
Partition type:
 p primary (0 primary, 0 extended, 4 free)
 e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519): 
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set

Command (m for help): p

Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xbda01838

Device Boot Start End Blocks Id System
/dev/sdb1 2048 20971519 10484736 83 Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

This creates a single partition on each of the devices.

[root@advait ~]# ls -rlt /dev/sd*1
brw-rw---- 1 root disk 8, 1 Jul 18 05:36 /dev/sda1
brw-rw---- 1 root disk 8, 17 Jul 18 05:40 /dev/sdb1
brw-rw---- 1 root disk 8, 33 Jul 18 05:40 /dev/sdc1
brw-rw---- 1 root disk 8, 49 Jul 18 05:41 /dev/sdd1
brw-rw---- 1 root disk 8, 65 Jul 18 05:41 /dev/sde1

Step 6) Configure UDev rules

Before creating a udev rule, we need to understand what exactly we want to do. We want to create an alias for each of the disks we created at the OS level so that each disk is always identified the same way, regardless of the device name Linux assigns it. This can be done by recognizing each device by some unique ID and assigning the alias to that device, which is exactly what udev can do.

Each disk has a unique SCSI ID. We can use this unique ID to recognize the disk and assign the required alias to it.
We can get the unique SCSI ID using the following command:

[root@advait ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdb1
1ATA_VBOX_HARDDISK_VB544d069c-abd3901e

The scsi_id command is located in the /usr/lib/udev/ directory on RHEL 7; in previous releases it used to be in /sbin.

Likewise, we can find the SCSI ID of each disk that we added.

Rules are defined in the /etc/udev/rules.d directory. Udev reads these rules and applies them to the devices listed in the /dev directory.

A rule looks like the following:

KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB88ee7178-f28aa887", SYMLINK+="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"

Following is an explanation of each parameter:

  • KERNEL=="sd?1" – This matches the kernel name of the device. In our case all the partitions have names of the form sd?1 (sdb1, sdc1, etc.), so this key matches the kernel name of the devices.
  • SUBSYSTEM=="block" – This key matches the subsystem of the devices. SUBSYSTEM could be block, scsi, usb etc.; ours are all block devices.
  • PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent" – This gets the unique SCSI ID for the device matched by the first two keys (KERNEL and SUBSYSTEM).
  • RESULT=="1ATA_VBOX_HARDDISK_VB544d069c-abd3901e" – This matches the output of the PROGRAM command; if it matches, the actions below are taken.
  • SYMLINK+="asm-disk1" – This is an action key. If the PROGRAM output matches RESULT, a symlink named asm-disk1 is created, pointing to the device in question.
  • OWNER="oracle" – This action changes the ownership of the device to the oracle user.
  • GROUP="dba" – This action changes the group of the device to the dba group.
  • MODE="0660" – This changes the permissions of the device file to 0660.

So the rule above means that the partition "sd?1" on the disk with SCSI ID "1ATA_VBOX_HARDDISK_VB88ee7178-f28aa887" will always be reachable through the symlink /dev/asm-disk1, regardless of the letter Linux assigns when the device is discovered. In addition, the device will have the correct ownership and permissions for ASM.

We can create such a rule for each device, or, if the number of devices is huge, we can use wildcards and matching patterns to create more generic rules, as in the sketch below.
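For instance, a single rule that matches every VirtualBox disk partition through a wildcard on the SCSI ID and only fixes ownership and permissions (a sketch; it does not create per-disk symlinks) could look like this:

KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_*", OWNER="oracle", GROUP="dba", MODE="0660"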

I have created the following rules; put them in a new rule file /etc/udev/rules.d/99-oracle-asm-disks.rules, one rule per device:

KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB544d069c-abd3901e", SYMLINK+="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB9a630cc5-d9697727", SYMLINK+="asm-disk2", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB446fdf92-8640efff", SYMLINK+="asm-disk3", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB3a71b4f2-8c603b78", SYMLINK+="asm-disk4", OWNER="oracle", GROUP="dba", MODE="0660"

Step 7) Load the updated block device partition tables

You can use partprobe to reload the partition tables of the block devices:

/sbin/partprobe /dev/sdb1
/sbin/partprobe /dev/sdc1
/sbin/partprobe /dev/sdd1
/sbin/partprobe /dev/sde1

Step 8) Test the rules

This is an optional step to check whether the rules work as expected. You can run the following commands to test them:

udevadm test /block/sdb/sdb1
udevadm test /block/sdc/sdc1
udevadm test /block/sdd/sdd1
udevadm test /block/sde/sde1

The output of one of the above commands looks like the following:

calling: test
version 219
This program is for debugging only, it does not run any program
specified by a RUN key. It may show incorrect results, because
some values may be different, or not available at a simulation run.

=== trie on-disk ===
tool version: 219
file size: 6984832 bytes
header size 80 bytes
strings 1805856 bytes
nodes 5178896 bytes
Load module index
Created link configuration context.
timestamp of '/etc/udev/rules.d' changed
Reading rules file: /usr/lib/udev/rules.d/10-dm.rules
Reading rules file: /usr/lib/udev/rules.d/100-balloon.rules
Reading rules file: /usr/lib/udev/rules.d/13-dm-disk.rules
Reading rules file: /usr/lib/udev/rules.d/40-redhat.rules
Reading rules file: /usr/lib/udev/rules.d/42-usb-hid-pm.rules
Reading rules file: /usr/lib/udev/rules.d/50-udev-default.rules
Reading rules file: /usr/lib/udev/rules.d/60-alias-kmsg.rules
Reading rules file: /usr/lib/udev/rules.d/60-cdrom_id.rules
Reading rules file: /usr/lib/udev/rules.d/60-drm.rules
Reading rules file: /usr/lib/udev/rules.d/60-keyboard.rules
Reading rules file: /usr/lib/udev/rules.d/60-net.rules
Reading rules file: /usr/lib/udev/rules.d/60-persistent-alsa.rules
Reading rules file: /usr/lib/udev/rules.d/60-persistent-input.rules
Reading rules file: /usr/lib/udev/rules.d/60-persistent-serial.rules
Reading rules file: /usr/lib/udev/rules.d/60-persistent-storage-tape.rules
Reading rules file: /usr/lib/udev/rules.d/60-persistent-storage.rules
Reading rules file: /usr/lib/udev/rules.d/60-persistent-v4l.rules
Reading rules file: /usr/lib/udev/rules.d/60-raw.rules
Reading rules file: /usr/lib/udev/rules.d/61-accelerometer.rules
Reading rules file: /usr/lib/udev/rules.d/64-btrfs.rules
Reading rules file: /usr/lib/udev/rules.d/70-mouse.rules
Reading rules file: /usr/lib/udev/rules.d/70-power-switch.rules
Reading rules file: /usr/lib/udev/rules.d/70-touchpad.rules
Reading rules file: /usr/lib/udev/rules.d/70-uaccess.rules
Reading rules file: /usr/lib/udev/rules.d/71-biosdevname.rules
Reading rules file: /usr/lib/udev/rules.d/71-seat.rules
Reading rules file: /usr/lib/udev/rules.d/73-idrac.rules
Reading rules file: /usr/lib/udev/rules.d/73-seat-late.rules
Reading rules file: /usr/lib/udev/rules.d/75-net-description.rules
Reading rules file: /usr/lib/udev/rules.d/75-probe_mtd.rules
Reading rules file: /usr/lib/udev/rules.d/75-tty-description.rules
Reading rules file: /usr/lib/udev/rules.d/78-sound-card.rules
Reading rules file: /usr/lib/udev/rules.d/80-drivers.rules
Reading rules file: /usr/lib/udev/rules.d/80-net-name-slot.rules
Reading rules file: /usr/lib/udev/rules.d/80-net-setup-link.rules
Reading rules file: /usr/lib/udev/rules.d/81-kvm-rhel.rules
Reading rules file: /usr/lib/udev/rules.d/85-nm-unmanaged.rules
Reading rules file: /usr/lib/udev/rules.d/90-alsa-tools-firmware.rules
Reading rules file: /usr/lib/udev/rules.d/90-iprutils.rules
Reading rules file: /usr/lib/udev/rules.d/90-vconsole.rules
Reading rules file: /usr/lib/udev/rules.d/91-drm-modeset.rules
Reading rules file: /usr/lib/udev/rules.d/95-dm-notify.rules
Reading rules file: /usr/lib/udev/rules.d/95-udev-late.rules
Reading rules file: /usr/lib/udev/rules.d/98-kexec.rules
Reading rules file: /usr/lib/udev/rules.d/98-rdma.rules
Reading rules file: /etc/udev/rules.d/99-oracle-asm-disks.rules
Reading rules file: /usr/lib/udev/rules.d/99-systemd.rules
rules contain 24576 bytes tokens (2048 * 12 bytes), 12216 bytes strings
1803 strings (22584 bytes), 1184 de-duplicated (10988 bytes), 620 trie nodes used
GROUP 6 /usr/lib/udev/rules.d/50-udev-default.rules:52
LINK 'disk/by-id/ata-VBOX_HARDDISK_VB544d069c-abd3901e-part1' /usr/lib/udev/rules.d/60-persistent-storage.rules:43
IMPORT builtin 'blkid' /usr/lib/udev/rules.d/60-persistent-storage.rules:72
probe /dev/sdb1 raid offset=0
PROGRAM '/usr/lib/udev/scsi_id -g -u -d /dev/sdb' /etc/udev/rules.d/99-oracle-asm-disks.rules:1
starting '/usr/lib/udev/scsi_id -g -u -d /dev/sdb'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdb'(out) '1ATA_VBOX_HARDDISK_VB544d069c-abd3901e'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdb' [10122] exit with return code 0
OWNER 54321 /etc/udev/rules.d/99-oracle-asm-disks.rules:1
GROUP 54322 /etc/udev/rules.d/99-oracle-asm-disks.rules:1
MODE 0660 /etc/udev/rules.d/99-oracle-asm-disks.rules:1
LINK 'asm-disk1' /etc/udev/rules.d/99-oracle-asm-disks.rules:1
PROGRAM '/usr/lib/udev/scsi_id -g -u -d /dev/sdb' /etc/udev/rules.d/99-oracle-asm-disks.rules:2
starting '/usr/lib/udev/scsi_id -g -u -d /dev/sdb'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdb'(out) '1ATA_VBOX_HARDDISK_VB544d069c-abd3901e'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdb' [10123] exit with return code 0
PROGRAM '/usr/lib/udev/scsi_id -g -u -d /dev/sdb' /etc/udev/rules.d/99-oracle-asm-disks.rules:3
starting '/usr/lib/udev/scsi_id -g -u -d /dev/sdb'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdb'(out) '1ATA_VBOX_HARDDISK_VB544d069c-abd3901e'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdb' [10124] exit with return code 0
PROGRAM '/usr/lib/udev/scsi_id -g -u -d /dev/sdb' /etc/udev/rules.d/99-oracle-asm-disks.rules:4
starting '/usr/lib/udev/scsi_id -g -u -d /dev/sdb'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdb'(out) '1ATA_VBOX_HARDDISK_VB544d069c-abd3901e'
'/usr/lib/udev/scsi_id -g -u -d /dev/sdb' [10125] exit with return code 0
handling device node '/dev/sdb1', devnum=b8:17, mode=0660, uid=54321, gid=54322
preserve permissions /dev/sdb1, 060660, uid=54321, gid=54322
preserve already existing symlink '/dev/block/8:17' to '../sdb1'
found 'b8:17' claiming '/run/udev/links/\x2fasm-disk1'
creating link '/dev/asm-disk1' to '/dev/sdb1'
preserve already existing symlink '/dev/asm-disk1' to 'sdb1'
found 'b8:17' claiming '/run/udev/links/\x2fdisk\x2fby-id\x2fata-VBOX_HARDDISK_VB544d069c-abd3901e-part1'
creating link '/dev/disk/by-id/ata-VBOX_HARDDISK_VB544d069c-abd3901e-part1' to '/dev/sdb1'
preserve already existing symlink '/dev/disk/by-id/ata-VBOX_HARDDISK_VB544d069c-abd3901e-part1' to '../../sdb1'
created db file '/run/udev/data/b8:17' for '/block/sdb/sdb1'
ACTION=add
DEVLINKS=/dev/asm-disk1 /dev/disk/by-id/ata-VBOX_HARDDISK_VB544d069c-abd3901e-part1
DEVNAME=/dev/sdb1
DEVPATH=/block/sdb/sdb1
DEVTYPE=partition
ID_ATA=1
ID_ATA_FEATURE_SET_PM=1
ID_ATA_FEATURE_SET_PM_ENABLED=1
ID_ATA_SATA=1
ID_ATA_SATA_SIGNAL_RATE_GEN2=1
ID_ATA_WRITE_CACHE=1
ID_ATA_WRITE_CACHE_ENABLED=1
ID_BUS=ata
ID_MODEL=VBOX_HARDDISK
ID_MODEL_ENC=VBOX\x20HARDDISK\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
ID_PART_ENTRY_DISK=8:16
ID_PART_ENTRY_NUMBER=1
ID_PART_ENTRY_OFFSET=2048
ID_PART_ENTRY_SCHEME=dos
ID_PART_ENTRY_SIZE=20969472
ID_PART_ENTRY_TYPE=0x83
ID_PART_TABLE_TYPE=dos
ID_REVISION=1.0
ID_SERIAL=VBOX_HARDDISK_VB544d069c-abd3901e
ID_SERIAL_SHORT=VB544d069c-abd3901e
ID_TYPE=disk
MAJOR=8
MINOR=17
SUBSYSTEM=block
TAGS=:systemd:
USEC_INITIALIZED=404668
Unload module index
Unloaded link configuration context.

Step 9) Reload the udev rules

Use the following command to reload the rules:

udevadm control --reload-rules
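If the symlinks do not appear after the reload, triggering a change event for the block devices usually makes udev re-apply the rules (an optional extra step; it may not be needed in every setup):

udevadm trigger --type=devices --action=change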

Check whether the required symlinks were created and the other actions applied to the devices:

[root@advait ~]# ls -rlt /dev/asm*
lrwxrwxrwx 1 root root 4 Jul 18 06:20 /dev/asm-disk2 -> sdc1
lrwxrwxrwx 1 root root 4 Jul 18 06:20 /dev/asm-disk3 -> sdd1
lrwxrwxrwx 1 root root 4 Jul 18 06:20 /dev/asm-disk4 -> sde1
lrwxrwxrwx 1 root root 4 Jul 18 06:20 /dev/asm-disk1 -> sdb1

The symlinks above are owned by root, but the underlying devices are owned by oracle:dba.

[root@advait ~]# ls -rlt /dev/sd?1
brw-rw---- 1 root disk 8, 1 Jul 18 05:36 /dev/sda1
brw-rw---- 1 oracle dba 8, 33 Jul 18 06:20 /dev/sdc1
brw-rw---- 1 oracle dba 8, 49 Jul 18 06:20 /dev/sdd1
brw-rw---- 1 oracle dba 8, 65 Jul 18 06:20 /dev/sde1
brw-rw---- 1 oracle dba 8, 17 Jul 18 06:20 /dev/sdb1

Now ASM can identify the disks as /dev/asm-disk*, and these symlinks will persist across host reboots.
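As a final illustration (a sketch only, assuming a running ASM instance using an spfile), the ASM discovery string can be pointed at these symlinks and a disk group created from them:

SQL> alter system set asm_diskstring = '/dev/asm-disk*' scope=both;

SQL> create diskgroup DATA external redundancy
  2  disk '/dev/asm-disk1', '/dev/asm-disk2', '/dev/asm-disk3', '/dev/asm-disk4';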

Hope this helps !!

References:

https://www.linux.com/news/udev-introduction-device-management-modern-linux-system
http://www.reactivated.net/writing_udev_rules.html

Oracle 12c Learning Series: Automatic Data Optimization – ADO

I am starting an Oracle 12c learning series, in which I plan to publish multiple articles on Oracle 12c new features. I hope these articles will be helpful to DBAs aspiring to OCP 12c certification, as well as to others who plan to implement these features in their databases.

This is a very long post about ADO (Automatic Data Optimization), which is one of the ILM strategies for managing aging data, so please be patient while reading it. It provides end-to-end details about ADO implementation, command syntax, etc.

Typical lifecycle of data

The lifecycle of data starts with inserts: new data is inserted into the table. At this stage the data is either kept uncompressed or is compressed by the DBA, but compression for such active data is optimized for DMLs (for example OLTP table compression): the compression ratio is lower and the impact on DMLs is minimal.
After a month, activity has subsided, although significant OLTP transactions are still carried out. At this stage the data in the formerly most active partition stays in OLTP-compressed format and a new partition is created automatically.

Two months down the line the data is rarely modified and rarely accessed. At this stage the partition can be moved to a low-cost storage tier and to a higher compression level such as Hybrid Columnar Compression (HCC).

After a year the data is considered dormant and is no longer accessed or updated. At this stage it can be moved to a low-cost storage tier with the highest level of HCC compression and may also be marked read-only.

How Automatic Data Optimization (ADO) works

Automatic Data Optimization (ADO) is an ILM strategy that uses activity tracking, via heat map, to determine whether data is still being accessed or not.

Heat map provides the ability to track and mark data as it goes through lifecycle changes. Heat map can track data at the following levels:

  • Data accesses at segment level
  • Data modification at segment level and block level

The collected heat map statistics are stored in the SYSAUX tablespace. They provide a "heat map" of hot and cold data based on how frequently the data is accessed or modified.

ADO lets you create policies that use heat map statistics to compress and move data when necessary. ADO automatically evaluates and executes the policies, performing the compression and storage tiering actions.

Following are the outline steps to implement and use ADO policies:

1)    Enable heat map at the database level. Heat map tracks data access at the segment level and data modification at the segment and row level.

2)    Create ADO policies.

You can create ADO policies at different scopes/levels:

a)    Tablespace level

b)    Group level

c)     Segment level

d)    Row level

You can also create ADO policies based on different operations to track.

a)    Creation

b)    Access

c)     Modification

You can also provide a condition for when the policy takes effect

a)    Based on time period. Ex: After 3 days, After 1 week, After 1 year

b)    Tablespace usage above threshold

You can also specify different kinds of actions for an ADO policy

a)    Compression

b)    Moving to different storage tier

c)     Both compression + move

3)    Evaluating and executing ADO policies. By default, segment level ADO policies are evaluated every day during the maintenance window. The DBA can also create a custom schedule for evaluating ADO policies. Row level policies are evaluated by MMON every 15 mins.

4)    Verify ADO execution details by using the DBA_ILMEVALUATIONDETAILS and DBA_ILMRESULTS views.

5)    Verify whether the segment was compressed or moved based on the action provided in the executed policy.

Let's go into the details of all the steps above and see how we can enable ADO policies as per our business requirements.

Step 1) Enabling Heat Map

Enabling and disabling heat map

The very first step before you can start using ADO is to enable activity tracking, which you do by setting the new initialization parameter HEAT_MAP

SQL> alter system set heat_map = on;

System altered.

The above parameter enables activity tracking for the entire instance. Enabling heat map enables tracking of both DML and access at segment level, and stores these details in the relevant tables in the SYSAUX tablespace. It does not track any access details for segments in the SYSTEM and SYSAUX tablespaces.

*** Important thing to note here is that heat_map does not work with CDB databases. It works only for non-CDB databases, so you cannot use this feature with CDB databases.

You can also enable heat map at session level using following command

SQL> alter session set heat_map = on;

Session altered.

You can turn off heat map by setting the parameter to off as below

SQL> alter system set heat_map = off;

System altered.

Checking heat map details

Once you enable heat map, you can check various details of the activity tracking it provides. You can check when segments were accessed recently, whether it was a ROWID access or a full scan access, and when the most recent DML was performed on the segment.

V$HEAT_MAP_SEGMENT view

Tracking details of segments are available in real time in the V$HEAT_MAP_SEGMENT view. As soon as a session accesses some object in the database, its heat map information will be available immediately in this view.

For example, I just accessed one of the demo tables (SALES) in my test database after enabling heat map, and I can see the following info


SQL> select object_name, SUBOBJECT_NAME, TRACK_TIME, SEGMENT_WRITE, SEGMENT_READ, FULL_SCAN, LOOKUP_SCAN, con_id from v$heat_map_segment order by 2;

OBJECT_NAME          SUBOBJECT_ TRACK_TIM SEG SEG FUL LOO     CON_ID
-------------------- ---------- --------- --- --- --- --- ----------
SALES                P10        14-JUN-16 NO  NO  YES NO           0
SALES                P11        14-JUN-16 NO  NO  YES NO           0
SALES                P12        14-JUN-16 YES NO  YES NO           0
SALES                P2         14-JUN-16 NO  NO  YES NO           0
SALES                P3         14-JUN-16 NO  NO  YES NO           0
SALES                P4         14-JUN-16 NO  NO  YES NO           0
SALES                P5         14-JUN-16 NO  NO  YES NO           0
SALES                P6         14-JUN-16 NO  NO  YES NO           0
SALES                P7         14-JUN-16 NO  NO  YES NO           0
SALES                P8         14-JUN-16 NO  NO  YES NO           0
SALES                P9         14-JUN-16 NO  NO  YES NO           0
SALES                PM         14-JUN-16 NO  NO  YES NO           0
I_SALES_ITEM_YEAR               14-JUN-16 NO  NO  NO  YES          0
SALES_ITEM                      14-JUN-16 NO  NO  YES NO           0

14 rows selected.

If the FULL_SCAN column is YES, it means the segment was accessed using a full scan.

If the SEGMENT_WRITE column is YES, it means a modification was made to the segment.

TRACK_TIME is the timestamp when the access or modification was done.

You can also see that the LOOKUP_SCAN column is YES for one row, where OBJECT_NAME is the name of an index. So if an index is used in your execution plan, it will show LOOKUP_SCAN as YES for that access.

SYS.HEAT_MAP_STAT$ and DBA_HEAT_MAP_SEG_HISTOGRAM

Data from V$HEAT_MAP_SEGMENT is persisted into the SYS.HEAT_MAP_STAT$ table in the SYSAUX tablespace by a DBMS_SCHEDULER job at regular intervals. This data is then available via the view DBA_HEAT_MAP_SEG_HISTOGRAM. The following shows similar data when we query DBA_HEAT_MAP_SEG_HISTOGRAM

SQL> select object_name, subobject_name, track_time, SEGMENT_WRITE, full_scan, lookup_scan from DBA_HEAT_MAP_SEG_HISTOGRAM;

OBJECT_NAME          SUBOBJECT_ TRACK_TIM SEG FUL LOO
-------------------- ---------- --------- --- --- ---
SALES                P2         14-JUN-16 NO  YES NO
SALES                P3         14-JUN-16 NO  YES NO
SALES                P4         14-JUN-16 NO  YES NO
SALES                P5         14-JUN-16 NO  YES NO
SALES                P6         14-JUN-16 NO  YES NO
SALES                P7         14-JUN-16 NO  YES NO
SALES                P8         14-JUN-16 NO  YES NO
SALES                P9         14-JUN-16 NO  YES NO
SALES                P10        14-JUN-16 NO  YES NO
SALES                P11        14-JUN-16 NO  YES NO
SALES                P12        14-JUN-16 YES YES NO
SALES                PM         14-JUN-16 NO  YES NO
SALES_ITEM                      14-JUN-16 NO  YES NO
I_SALES_ITEM_YEAR               14-JUN-16 NO  NO  YES

14 rows selected.

DBA_HEATMAP_TOP_TABLESPACES

This view shows heat map information for the top 100 tablespaces. It gives the report at tablespace level instead of segment level. You can find MIN, MAX and AVG read time, write time, full table scan time and lookup access time for each tablespace.
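
For example, a quick look at the tablespace-level picture could be a query along the following lines. This is just a sketch; the projected column names (SEGMENT_COUNT, AVG_READTIME, AVG_WRITETIME) are assumed from the MIN/MAX/AVG attributes described above and may differ slightly between versions.

SQL> select tablespace_name, segment_count, avg_readtime, avg_writetime
     from dba_heatmap_top_tablespaces
     order by tablespace_name;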

DBMS_HEAT_MAP package

The DBMS_HEAT_MAP package can be used to get heat map statistics at an even deeper level. The default views only let you see heat map statistics at segment level. But if you want to go deeper, for example to check which blocks were accessed and at what time, you can use the subprograms in the DBMS_HEAT_MAP package.

Following is a summary of the procedures and functions in DBMS_HEAT_MAP package:

  • BLOCK_HEAT_MAP function – Returns the last modification time for each block in a table segment
  • EXTENT_HEAT_MAP function – Returns the extent-level Heat Map statistics for a table segment
  • OBJECT_HEAT_MAP procedure – Shows the minimum, maximum, and average access times for all segments belonging to an object
  • SEGMENT_HEAT_MAP procedure – Shows the Heat Map attributes for a segment
  • TABLESPACE_HEAT_MAP procedure – Shows the minimum, maximum, and average access times for all segments in a tablespace

For example, we can use DBMS_HEAT_MAP.BLOCK_HEAT_MAP to get heat map statistics for the blocks of a segment.

Similarly, the DBMS_HEAT_MAP.EXTENT_HEAT_MAP function provides heat map statistics at extent level.
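
As an illustration, queries against the demo SALES table could look like the following. The owner/segment arguments (DEMO_USER, SALES) are just the demo objects used earlier in this article, and the exact parameter list of these table functions should be verified against your version's documentation:

SQL> select * from table(dbms_heat_map.block_heat_map('DEMO_USER','SALES'));

SQL> select * from table(dbms_heat_map.extent_heat_map('DEMO_USER','SALES'));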

Step 2) Enabling ADO policies

Let's move to the 2nd step of implementing ILM. Now that we have enabled heat map in step 1, we need to create policies at various levels to manage our data. The policies we create will use the heat map statistics and take the required action that we define in the policy. There are 4 sections in creating an ADO policy

Levels at which you can create ADO policies

  • Tablespace level – A default ADO policy can be defined at tablespace level and will be applicable to all segments existing in that tablespace or new segments getting created in that tablespace.
  • Group level – When we define a group level ADO policy, if a table is eligible for the specific ADO action defined in the policy, the same action is performed on all dependent objects. If the action is compression and the table has LOB columns, all SecureFiles LOBs will be compressed as well.
  • Segment level – Applies to tables and table partitions. If no policy is defined at segment level and we have a default policy at tablespace level, the tablespace level policy applies to the segment. But a segment level policy overrides the tablespace level policy.
  • Row level – Can be created only for tracking modifications. We cannot have a row level policy for creation and access operations.

Operations to track

  • Creation – Action will be taken on specified time condition after creation of data
  • Access – Action will be taken on specified time condition after access of data
  • Modification – Action will be taken on specified time condition after modification of data

Condition when this policy takes effect

  • Based on time period. Ex: After 3 days, After 1 week, After 1 year
  • Tablespace usage above threshold

Action to take for ADO policy

  • Compression
  • Data Movement-  Moving to different storage tier
  • Both compression + move

You can create ILM policies in the CREATE TABLE statement, or you can add ILM policies to existing tables using ALTER TABLE statements, as shown below. A segment can have multiple ILM policies defined on it.
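
For instance, the same policy syntax can be attached at table creation time. The table and column names below are placeholders, so treat this as a sketch rather than a definitive example:

CREATE TABLE SALES_ADO
(ORDER_ID NUMBER, ORDER_DATE DATE, AMOUNT NUMBER(10,2))
ILM ADD POLICY ROW STORE COMPRESS ADVANCED
SEGMENT AFTER 30 DAYS OF NO ACCESS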

Before we look into different examples of creating ILM policies, let's take a look at the various compression levels available. Data can be compressed while it's being inserted, updated or loaded into a table via bulk load.

We have the following 4 compression levels available

  • ROW STORE COMPRESS BASIC

This is the basic level of compression. It is applied when data is inserted into a table using direct-path insert, and it does not require the Advanced Compression Option (ACO).

  • ROW STORE COMPRESS ADVANCED

This is a renamed syntax for the previous OLTP table compression feature that was part of ACO. ROW STORE COMPRESS ADVANCED on a heap table maps to LOW compression for SecureFiles LOB segments when a GROUP level ADO policy is implemented.

  • COLUMN STORE COMPRESS FOR QUERY LOW or HIGH

This provides Hybrid Columnar Compression (HCC) and gives a higher level of compression than ROW STORE COMPRESS. It works well when load performance is critical, frequent queries are run against the data, and no DMLs are expected. COLUMN STORE COMPRESS FOR QUERY LOW or HIGH maps to MEDIUM compression for SecureFiles LOB segments when a GROUP level ADO policy is implemented.

  • COLUMN STORE COMPRESS FOR ARCHIVE LOW or HIGH

This provides Hybrid Columnar Compression (HCC) and gives the highest level of compression. It works well when data is accessed very infrequently and no DMLs are run against the data. COLUMN STORE COMPRESS FOR ARCHIVE LOW or HIGH maps to HIGH compression for SecureFiles LOB segments when a GROUP level ADO policy is implemented.

Let's look at creating ADO policies for the compression action and the data movement action.

ADO policies for Compression action

Following are the different examples of creating ADO compression policies

Tablespace level compression policy

ALTER TABLESPACE USERS DEFAULT ILM ADD POLICY
ROW STORE COMPRESS ADVANCED
SEGMENT AFTER 30 DAYS OF NO ACCESS

In the above command, we can see the various sections required in a policy definition. The ALTER TABLESPACE clause indicates that this is the default policy at tablespace level, applicable to the segments in that tablespace, as represented by the SEGMENT keyword.

  • ILM ADD POLICY – This is required to add a policy to a tablespace or segment
  • ROW STORE COMPRESS ADVANCED – This is the compression level we are using, as this is a compression-based ILM policy
  • SEGMENT – This represents that the policy is applicable to all segments in the USERS tablespace
  • AFTER 30 DAYS OF – This represents the condition when the policy becomes eligible to take the required action of compression
  • NO ACCESS – This represents the operation to track. We have 3 different types of operations, as defined above – Access, Modification, Creation

So if we alter tablespace USERS using the above command and add the ILM policy, then if any segment in the tablespace (which has no other ILM policy set) is not accessed for more than 30 days, that segment will be compressed with ROW STORE COMPRESS ADVANCED.

Group level compression policy

ALTER TABLE SALES ILM ADD POLICY
ROW STORE COMPRESS ADVANCED
GROUP AFTER 60 DAYS OF NO MODIFICATION

In the above example, we are implementing a group level ILM policy. This policy will automatically compress the SALES table if no modification is done for 60 days. Since we are using a GROUP level policy, dependent LOBs are compressed with LOW compression after 60 days of no modification.

ALTER TABLE SALES MODIFY PARTITION P1 ILM ADD POLICY
COLUMN STORE COMPRESS FOR ARCHIVE HIGH
GROUP AFTER 3 MONTHS OF CREATION

In this example, we are using a GROUP level compression policy on a partition of a table. This policy will automatically compress partition P1 three months after its creation, and the corresponding LOBs will be compressed with HIGH compression. This is the highest level of compression using HCC. Global indexes are maintained.

Segment level compression policy

ALTER TABLE SALES ILM ADD POLICY
COLUMN STORE COMPRESS FOR QUERY HIGH
SEGMENT AFTER 90 DAYS OF NO MODIFICATION

In this example, we are creating segment level compression policy on SALES table. This policy is going to automatically compress SALES table with COLUMN STORE COMPRESS FOR QUERY HIGH if it’s not modified for 90 days.

ALTER TABLE SALES ILM ADD POLICY
ROW STORE COMPRESS
SEGMENT AFTER 1 YEAR OF CREATION

This example implements the basic level of compression (ROW STORE COMPRESS [BASIC]) on the SALES table. The table will be automatically compressed 1 year after its creation.

Row level compression policy

ALTER TABLE SALES ILM ADD POLICY
ROW STORE COMPRESS ADVANCED
ROW AFTER 30 DAYS OF NO MODIFICATION

The above policy will compress any block of the SALES table in which none of the rows have been modified in the last 30 days. Even if one row in a block gets modified, that block does not qualify for compression.

The important thing to note here is that row level compression policies can be created based on modification time only. They cannot be created based on creation time or access time. Also, the only compression type available for a row level compression policy is ROW STORE COMPRESS ADVANCED/BASIC. We cannot use columnar compression for a row level policy.

ADO policies for Data movement action

This is another type of ADO policy, which we can set at segment level only. We cannot set it at tablespace level, group level or any other level. Also, data movement happens at tablespace level, meaning that we can move the table or partition to another tablespace backed by low cost storage.

A data movement policy takes the following format

ALTER TABLE <TABLE_NAME> ILM ADD POLICY
TIER TO <TABLESPACE_NAME>
[ CUSTOM ILM RULE | SEGMENT <CONDITION> <OPERATION> ]

TIER TO is a new clause with which we specify the tablespace the segment should move to if it satisfies the condition set. We can also provide a custom ILM rule in the last section of the command instead of providing a condition (example: AFTER 30 DAYS, AFTER 1 YEAR etc.) and operation (example: CREATION, NO MODIFICATION, NO ACCESS etc.).

Let’s take an example
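
A policy along the following lines moves the SALES table to the low_cost_storage tablespace 60 days after creation:

ALTER TABLE SALES ILM ADD POLICY
TIER TO low_cost_storage
SEGMENT AFTER 60 DAYS OF CREATION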

In the above example, we replaced the compression clause with the “TIER TO” clause.

TIER TO – This clause provides the tablespace name where the segment should move after it satisfies the condition. In this example, the ILM policy will automatically move the SALES table to the low_cost_storage tablespace 60 days after creation.

If we don't provide the last part (condition and operation), then the policy will still be created, and the data movement action will take place when the tablespace is running out of space. Space thresholds for the tablespace are already set, and we can see those values in the DBA_ILMPARAMETERS view.

Following are the tiering fullness thresholds for a tablespace

  • TBS PERCENT USED (default 85): Objects with a tiering policy will be moved out of the tablespace once it is TBS PERCENT USED full (default 85% full)
  • TBS PERCENT FREE (default 25): Objects with a tiering policy will continue to move out until the source tablespace has TBS PERCENT FREE space (default 25% free)

You can see the current values by querying the DBA_ILMPARAMETERS view. You can customize these parameters using the DBMS_ILM_ADMIN package.


SQL> select * from dba_ilmparameters;

NAME                      VALUE
-------------------- ----------
ENABLED                       1
RETENTION TIME               30
JOB LIMIT                     2
EXECUTION MODE                2
EXECUTION INTERVAL            1
TBS PERCENT USED             90
TBS PERCENT FREE             15
POLICY TIME                   0

8 rows selected.

For example, if we want to change TBS PERCENT USED to 90% from the existing 85%, we can use the following procedure

SQL> exec dbms_ilm_admin.customize_ilm(dbms_ilm_admin.tbs_percent_used,90);

PL/SQL procedure successfully completed.

We will check out other procedures and functions in the DBMS_ILM_ADMIN package in a later section of this chapter.

So if we set the following policy, then the SALES table will be moved out of its current tablespace to the low_cost_storage tablespace when the tablespace containing SALES reaches 90% fullness

ALTER TABLE SALES ILM ADD POLICY
TIER TO low_cost_storage

Also, if we set this policy on many tables in the same tablespace and the tablespace reaches the fullness threshold, then the tables will be moved to the target tablespace starting with the oldest accessed tables first.

The last thing to know about data movement policies is that they are executed only once in their lifetime. Since they apply only at segment level, once the policy is executed on the segment and the required movement takes place, the policy is disabled for that segment. This rule applies only to data movement policies and not to compression policies.

CUSTOM_ILM_RULES

Instead of using the condition and operation in the simplified syntax provided by Oracle, you can implement more complex logic about when the desired action of an ILM policy should take place by specifying a custom ILM rule. The syntax for a custom ILM rule looks like below

ALTER TABLE SALES ILM ADD POLICY
<Compression clause> | <Data movement clause>
ON <CUSTOM_ILM_RULE>

A custom ILM rule should be a function which returns a Boolean. You can implement complex logic inside the function and return the Boolean after evaluating that logic. If the return value is TRUE, the ILM policy takes action. If the return value is FALSE, the ILM policy does not take action. Custom ILM rules can be used for both compression action and data movement action policies. A sketch of such a rule is shown below.
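
As a minimal sketch (the function name and its single object-number argument are illustrative assumptions; check your version's documentation for the exact expected signature), a rule that allows the policy action only in the last few days of a quarter could look like this:

CREATE OR REPLACE FUNCTION end_of_quarter_rule (objn IN NUMBER)
RETURN BOOLEAN
AS
BEGIN
  -- objn (the object number of the segment being evaluated) is assumed here;
  -- the rule returns TRUE only during the last 5 days of the calendar quarter.
  RETURN (ADD_MONTHS(TRUNC(SYSDATE, 'Q'), 3) - TRUNC(SYSDATE)) <= 5;
END;
/

ALTER TABLE SALES ILM ADD POLICY
ROW STORE COMPRESS ADVANCED
ON end_of_quarter_rule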

Implementing multiple policies on segment

A segment can have multiple ADO policies implemented on it. This logically makes sense, as we want different actions taken on data as it ages more and more. For example, we might go for a basic level of compression one week after data insertion, a more advanced compression after a month of no modification, and the highest level of compression together with a move to low cost storage after a year of no access. So to satisfy all these timelines and criteria, it's not unusual to have multiple ILM policies on a segment. But there are rules to follow.

Rule 1) Policies should be based on the same operation or statistic (creation, modification or access)

If you create a segment level policy on a table based on ACCESS, make sure that other policies are also created based on ACCESS operation only.

Example:

I create a policy on SALES table based on ACCESS

SQL> alter table demo_user.sales ilm add policy row store compress advanced segment after 2 days of no access;

Table altered.

Then I create another policy on same table based on ACCESS

SQL> alter table demo_user.sales ilm add policy column store compress for query high segment after 30 days of no access;

Table altered.

Now, if I try to create a 3rd policy based on creation or modification, I will get a conflict

SQL> alter table demo_user.sales ilm add policy column store compress for archive high segment after 60 days of creation;
alter table demo_user.sales ilm add policy column store compress for archive high segment after 60 days of creation
*
ERROR at line 1:
ORA-38323: policy conflicts with policy 21

Rule 2) Compression level should increase with time

You should not reduce the compression level as the time condition increases. In the following example, I tried to set column store compression after 7 days of no access and row store compression (which is a lower level of compression than column store) after 30 days of no access. This causes a conflict.

SQL> alter table demo_user.sales ilm add policy column store compress for query high segment after 7 days of no access;

Table altered.

SQL> alter table demo_user.sales ilm add policy row store compress advanced segment after 30 days of no access;
alter table demo_user.sales ilm add policy row store compress advanced segment after 30 days of no access
*
ERROR at line 1:
ORA-38325: policy conflicts with policy 23

Rule 3) Only 1 row level policy allowed per segment

You can have only 1 row level policy on a table. In the following example, I tried to create 2 row level policies on the same table and it was not allowed.

SQL> alter table demo_user.sales ilm add policy row store compress advanced row after 7 days of no modification;

Table altered.

SQL> alter table demo_user.sales ilm add policy row store compress advanced row after 30 days of no modification;
alter table demo_user.sales ilm add policy row store compress advanced row after 30 days of no modification
*
ERROR at line 1:
ORA-38325: policy conflicts with policy 24

Also remember that you can create only a “no modification” operation policy at row level.

Rule 4) A segment level policy does not conflict with a row level policy

Continuing with the above example, after creating the row level policy, we can still create a segment level policy

SQL> alter table demo_user.sales ilm add policy column store compress for query high segment after 30 days of no modification;

Table altered.

Policy priority and policy Inheritance

As we discussed before, we can create policies at tablespace level (which serves as the default policy for every segment in that tablespace that has no policy of its own), at segment level and even at partition level. The question remains: which policy will actually take effect?

Again, this follows rules. Following are the simple rules to understand this

  • A child level policy always overrides a parent level policy for the same action. By this logic, policies set at table partition level override those set at table level if both have the same action, and policies at table level override the policy at tablespace level if both have the same action
  • If no policies are defined at the child level, inheritance takes place. For example, if no ILM policies are defined at partition level, table level policies are applied at partition level. Also, if no ILM policies are defined at table level, tablespace level policies are inherited at table level
  • Inheritance is additive if the policy actions are different. For example, if you have a compression policy at tablespace level and a data movement policy at segment level, then the total effect on the segment is the sum of both policies, i.e. compression + data movement

SQL> select policy_name, object_name, subobject_name, object_type, inherited_from from dba_ilmobjects;

POLICY_NAM OBJECT_NAME          SUBOBJECT_ OBJECT_TYPE                    INHERITED_FROM
---------- -------------------- ---------- ------------------------------ --------------------
P25        SALES                P1         TABLE PARTITION                TABLE
P25        SALES                P10        TABLE PARTITION                TABLE
P25        SALES                P11        TABLE PARTITION                TABLE
P25        SALES                P12        TABLE PARTITION                TABLE
P25        SALES                P2         TABLE PARTITION                TABLE
P25        SALES                P3         TABLE PARTITION                TABLE
P25        SALES                P4         TABLE PARTITION                TABLE
P25        SALES                P5         TABLE PARTITION                TABLE
P25        SALES                P6         TABLE PARTITION                TABLE
P25        SALES                P7         TABLE PARTITION                TABLE
P25        SALES                P8         TABLE PARTITION                TABLE
P25        SALES                P9         TABLE PARTITION                TABLE
P25        SALES                PM         TABLE PARTITION                TABLE
P25        SALES                           TABLE                          POLICY NOT INHERITED
P41        SALES_ITEM                      TABLE                          TABLESPACE

15 rows selected.

We can check different policies implemented on different objects using DBA_ILMOBJECTS view.

In the above example, policy P41 is applied to the SALES_ITEM table, as it exists in the USERS tablespace and has no ILM policy defined on it directly. Since we have a default ILM policy at tablespace level on USERS, the SALES_ITEM table has inherited this policy from the tablespace. In the INHERITED_FROM column, you can see the value TABLESPACE.

Similarly, policy P25 is applied to the SALES table and all its partitions. But we created the policy at table level only, not at partition level, so all partitions of the table inherit the table level policy.

For object type TABLE and policy P25, we see “POLICY NOT INHERITED” in the INHERITED_FROM column. This is because we created a new policy at table level and it is not inherited from any level.

Checking ILM policy details

We have a couple of views that provide details about the ILM policies that exist at various levels in the database.

DBA_ILMPOLICIES

You can view all the policies configured in the database in this view


SQL> select * from dba_ilmpolicies;

POLICY_NAM POLICY_TYPE   TABLESPACE ENABLE DELETED
---------- ------------- ---------- ------ -------
P24        DATA MOVEMENT            YES    NO
P25        DATA MOVEMENT            YES    NO
P41        DATA MOVEMENT USERS      YES    NO

Even compression policies are represented as data movement policies in this view. This is because data movement happens during compression.

DBA_ILMDATAMOVEMENTPOLICIES


SQL> select policy_name, action_type, compression_level, condition_type, condition_days from DBA_ILMDATAMOVEMENTPOLICIES;

POLICY_NAM ACTION_TYPE COMPRESSION_LEV CONDITION_TYPE         CONDITION_DAYS
---------- ----------- --------------- ---------------------- --------------
P41        COMPRESSION ADVANCED        LAST MODIFICATION TIME             60
P24        COMPRESSION ADVANCED        LAST MODIFICATION TIME              7
P25        COMPRESSION QUERY HIGH      LAST MODIFICATION TIME             30

Step 3) Evaluating and Executing ADO policies

We have 2 ways of evaluating and executing ADO policies: scheduled in the maintenance window, or evaluating and executing them manually. Let's look at both approaches.

Scheduled in maintenance window

Row level policies are evaluated and executed every 15 mins by the MMON background process. Segment level policies are evaluated and executed once daily during the maintenance window using a scheduler job. This scheduler job uses the DBMS_ILM.EXECUTE_ILM_TASK procedure to evaluate and execute ADO policies.

** Note that the name of the package is DBMS_ILM, as opposed to DBMS_ILM_ADMIN, which is used for customizing ILM parameters and other admin related tasks. We will look into the DBMS_ILM_ADMIN package in a short while.

In the DBMS_ILM.EXECUTE_ILM_TASK procedure, TASK_ID is an input parameter which is automatically generated during the evaluation process, which also happens during the scheduled maintenance window.

You can change the interval of evaluation and execution using the DBMS_ILM_ADMIN.CUSTOMIZE_ILM procedure. We have seen this procedure previously when changing the TBS PERCENT FREE and TBS PERCENT USED parameters. We can use the same procedure to customize the execution interval of ILM tasks.

SQL> exec dbms_ilm_admin.customize_ilm(dbms_ilm_admin.execution_interval, 1);

PL/SQL procedure successfully completed.

Manually executing ADO policies

For manually executing ADO policies, we have 2 options

  1. Directly execute ADO policies without doing any customization
  2. Preview the evaluation of ADO policies, make any desired changes to the task and then execute the task

You can directly execute ADO policies without doing any customization using the DBMS_ILM.EXECUTE_ILM procedure. Note that this procedure is different from the one that runs in the scheduled maintenance window (DBMS_ILM.EXECUTE_ILM_TASK). There are multiple overloads of this procedure; in its simplest form, you can just pass the owner and table name, and all policies on that table will be executed, as sketched below.
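
A minimal sketch, assuming the overload that takes the owner and object name and returns the generated task id through an OUT bind variable (the exact parameter names may differ slightly in your version):

var v_task_id number;
exec DBMS_ILM.EXECUTE_ILM(OWNER=>'DEMO_USER', OBJECT_NAME=>'SALES', TASK_ID=>:v_task_id);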

You can also preview the evaluation result before you execute the task. You can follow these steps to customize ILM tasks

  • Evaluate set of ADO policies and preview the result. Check what all tables/policies will be executed
  • Add/remove objects/subobjects from ILM tasks evaluation result.
  • Execute the final customized evaluated ILM task

You can use the following procedures, in the same order, to perform the above steps of customizing ILM tasks

  • Execute DBMS_ILM.PREVIEW_ILM procedure to preview ILM tasks for database or specific schema
  • Execute DBMS_ILM.ADD_TO_ILM or DBMS_ILM.REMOVE_FROM_ILM to add or remove objects/subobjects from evaluated task
  • Execute DBMS_ILM.EXECUTE_ILM_TASK to execute the above customized ILM task

Example:

var v_task_id number;
exec DBMS_ILM.PREVIEW_ILM(TASK_ID=> :v_task_id, ILM_SCOPE=>DBMS_ILM.SCOPE_SCHEMA);

Let's say the above command generated task_id 1234.

exec DBMS_ILM.ADD_TO_ILM(TASK_ID=>1234, OWN=>'DEMO_USER', OBJ_NAME=>'NEW_TABLE');

exec DBMS_ILM.EXECUTE_ILM_TASK(TASK_ID=>1234);

You can also pass the following 2 parameters to DBMS_ILM.EXECUTE_ILM_TASK

EXECUTION_MODE: ILM_EXECUTION_ONLINE value executes the task online and ILM_EXECUTION_OFFLINE executes the task offline.

EXECUTION_SCHEDULE: Only possible value for this parameter is DBMS_ILM.SCHEDULE_IMMEDIATE
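
For example, a manual run with both parameters spelled out could look like the following sketch, reusing task_id 1234 from above and the constant names mentioned in the parameter descriptions:

exec DBMS_ILM.EXECUTE_ILM_TASK(TASK_ID=>1234, EXECUTION_MODE=>DBMS_ILM.ILM_EXECUTION_OFFLINE, EXECUTION_SCHEDULE=>DBMS_ILM.SCHEDULE_IMMEDIATE);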

Step 4) Checking execution details of ADO policies

You can check the details of ADO policies evaluation using following 3 views

  • DBA_ILMEVALUATIONDETAILS
  • DBA_ILMRESULTS
  • DBA_ILMTASKS

DBA_ILMTASKS

Every task that is executed manually or automatically as scheduled is recorded in the DBA_ILMTASKS view. It stores the creation time, start time and completion time of the task, and whether the task is completed, failed or active.


SQL> select TASK_ID, STATE, START_TIME, COMPLETION_TIME from DBA_ILMTASKS order by task_id;

   TASK_ID STATE     START_TIME                     COMPLETION_TIME
---------- --------- ------------------------------ ------------------------------
         1 COMPLETED 15-JUN-16 10.19.24.976731 PM   15-JUN-16 10.19.24.976731 PM
         2 COMPLETED 15-JUN-16 10.34.27.587548 PM   15-JUN-16 10.34.27.587548 PM
         3 COMPLETED 15-JUN-16 10.49.30.158279 PM   15-JUN-16 10.49.30.158279 PM
         4 COMPLETED 15-JUN-16 11.04.32.861197 PM   15-JUN-16 11.04.32.861197 PM
         5 COMPLETED 15-JUN-16 11.19.35.192715 PM   15-JUN-16 11.19.35.192715 PM

DBA_ILMEVALUATIONDETAILS

This view can be used to see details on task execution after the evaluation of the ADO policies is completed. If the SELECTED_FOR_EXECUTION column in this view has the value “selected for execution,” then the policy has been selected for execution indeed and an ADO job will be executed to satisfy the ILM policy. However, a job may not execute if there’s a value other than “selected for execution” under the column SELECTED_FOR_EXECUTION.

The following values all mean that the ADO policy hasn’t successfully passed the evaluation:

  • Precondition not satisfied
  • Policy disabled
  • Policy overruled
  • Inherited policy overruled
  • Job already exists
  • No operation since last ILM action
  • Target compression not higher than current
  • Statistics not available
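
For example, to see why the policies in a particular task did or did not get picked up, a query along these lines can be used. Task id 5 is just the example task from the DBA_ILMTASKS listing above, and the column names other than SELECTED_FOR_EXECUTION are my assumption and may vary by version:

SQL> select task_id, policy_name, object_name, selected_for_execution
     from dba_ilmevaluationdetails
     where task_id = 5;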

DBA_ILMRESULTS

Provides details about completed ILM tasks. It has a STATISTICS column, which is a CLOB and provides job-specific statistics such as space saved via compression.
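
A quick way to check the outcome of a task is a query like the following; TASK_ID and STATISTICS are described above, while the other column names are my assumption and should be verified against your version:

SQL> select task_id, job_name, job_state, statistics
     from dba_ilmresults
     where task_id = 5;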

Enabling and disabling ADO policies

You can enable or disable ADO policies at the individual table level, or at the database level completely.

Enabling/disabling ADO policies

You can enable/disable individual ADO policy on a table

SQL> alter table demo_user.sales ilm disable policy P25;

Table altered.

SQL> alter table demo_user.sales ilm enable policy P25;

Table altered.

You can also enable/disable all policies on a table

SQL> alter table demo_user.sales ilm disable_all;

Table altered.

SQL> alter table demo_user.sales ilm enable_all;

Table altered.

Deleting ADO policies

You can delete individual policy on a table using policy name

SQL> alter table demo_user.sales ilm delete policy P24;

Table altered.

If you want to delete all policies on a table, you can use delete_all

SQL> alter table demo_user.sales ilm delete_all;

Table altered.

Enable/disable ILM

You can disable ILM completely in the database using the DBMS_ILM_ADMIN.DISABLE_ILM procedure. This disables ILM but keeps all policies intact. Those policies will not get executed until we enable ILM again.

SQL> exec DBMS_ILM_ADMIN.DISABLE_ILM;

PL/SQL procedure successfully completed.

SQL> exec DBMS_ILM_ADMIN.ENABLE_ILM;

PL/SQL procedure successfully completed.
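
To confirm the current state, you can check the ENABLED row of DBA_ILMPARAMETERS (shown earlier in this article); how the value is encoded is version specific, so treat this as a quick sanity check rather than a definitive test:

SQL> select name, value from dba_ilmparameters where name = 'ENABLED';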

Hope this helps !!