Book on Oracle 12c New features

A Book on Oracle 12c New features for administrators

Hi All,

It’s our pleasure to announce the availability of our first book – OCP 12c upgrade 1Z0-060 Exam Guide

 

The book covers the new features of Oracle Database 12c for DBAs from the OCP 12c upgrade exam standpoint. The book is available worldwide in Kindle format as well as paperback.

Link to Amazon.com – https://www.amazon.com/OCP-Upgrade-1Z0-060-Exam-Guide-ebook/dp/B01N3XXYYC/ref=sr_1_1?ie=UTF8&qid=1481109922&sr=8-1&keywords=B01N3XXYYC

Link to Amazon.in – https://www.amazon.in/OCP-Upgrade-1Z0-060-Exam-Guide-ebook/dp/B01N3XXYYC/ref=sr_1_1?ie=UTF8&qid=1481221495&sr=8-1&keywords=B01N3XXYYC

Link to Packt Publishing – https://www.packtpub.com/big-data-and-business-intelligence/ocp-12c-upgrade-1z0-060-exam-guide

This book covers all the new features of the Oracle 12.1.0.1 database, so if you want to upgrade your knowledge of Oracle 12c, this book certainly serves the purpose. It is also an ideal book for DBAs aspiring for OCP 12c certification.

Feel free to post your comments about this book on the Amazon.com page, or post them below.

Following are the objectives of the chapters covered in this book; they are completely in line with the Oracle Database 12c exam syllabus.

Section 1 – Oracle 12c new features

Chapter 1 – Getting Started with Oracle 12c – This chapter introduces some of the new features related to installation and configuration of Oracle 12c, including EM Database Express.
Chapter 2 – Multitenant Container Database (CDB) Architecture – This chapter introduces you to the Oracle 12c multitenant architecture. It covers the different components of the multitenant architecture, how to create and configure a multitenant database, and the different ways to create a pluggable database (PDB).
Chapter 3 – Managing CDBs and PDBs – This chapter covers establishing connections to container and pluggable databases, performing general administration tasks, and managing tablespaces, users, roles, and privileges in CDBs and PDBs.
Chapter 4 – Information Life Cycle Management and Storage Enhancements – This chapter introduces you to information life cycle management and how it has been remarkably automated in Oracle 12c.
Chapter 5 – Auditing, Privileges and Data Redaction – As the title says, this chapter covers the new unified auditing feature of Oracle 12c, the new privilege analysis tool, and the new data redaction feature.
Chapter 6 – Database Tuning and Troubleshooting – In this chapter we learn about the new features related to performance tuning. It covers the new optimizer features of Oracle 12c, including adaptive SQL plans, dynamic statistics, and SQL plan directives. We also see some improvements related to statistics gathering, followed by enhancements to histograms.
Chapter 7 – Backup and Flashback – This chapter covers backup and recovery scenarios and how the multitenant architecture affects them. We also look at some of the RMAN enhancements in Oracle 12c, followed by table restore and recovery using a simple RMAN command.
Chapter 8 – Database Administration Enhancements – In this chapter we look at Resource Manager and how it works with an Oracle 12c multitenant database.
Chapter 9 – Miscellaneous New Features – This chapter bundles the other new features introduced in Oracle 12c, including enhancements related to Oracle Data Pump, SQL*Loader, online operations, etc. It also covers the new partitioning enhancements and the new Top-N SQL clauses introduced in Oracle 12c.

Section 2 – General Administration

Chapter 10 – Core Administration – This chapter is more about general administration; we explain the fundamentals of database administration.
Chapter 11 – Performance Management – This chapter includes discussions about designing the database for optimal performance, monitoring database performance in order to improve it, analyzing and identifying performance issues, and performing Real Application Testing to analyze performance.
Chapter 12 – Storage – In this chapter we introduce the logical and physical structures of the database and how they differ. It also covers ASM and how it can improve storage management.
Chapter 13 – Security – This chapter covers developing and implementing a robust security policy for the Oracle database, auditing actions in the database, creating a password file, and authenticating privileged users remotely using the password file.

Regards,
Advait & Indira
Author – OCP 12c upgrade 1Z0-060 Exam Guide

Oracle 12c Learning Series: Automatic Table Recovery Using RMAN

Table recovery was possible in earlier releases as well. Until the previous release, if we wanted to recover a table, we had the following options:

  • Database point in time recovery (DBPITR)
  • Tablespace point in time recovery (TSPITR)
  • Flashback technology

In Oracle 12c, RMAN has been enhanced to recover individual tables. There is a new RMAN command that automates the complete process of recovering a table. The new process does not affect the existing objects in the database and reduces the time and space required for table recovery.

How does table recovery work?

  • RMAN uses previously taken database backups to recover tables or table partitions to a specified point in time. You need to provide the following inputs to RMAN:
    • Table names or partition names to be recovered
    • Point in time until which the recovery should be done
    • Whether the recovered tables should be imported back into the original database
  • RMAN determines the correct backup based on your inputs
  • RMAN creates an auxiliary instance
  • RMAN restores the controlfile
  • RMAN restores the files required for obtaining the older image of the table. This may include the SYSTEM, SYSAUX, and UNDO tablespaces as well as the tablespace that contains the required table.
  • RMAN recovers the table or table partitions in the auxiliary instance until the specified time.
  • RMAN creates a Data Pump export dump file that contains the recovered objects
  • RMAN imports the recovered objects into the original database

Recovery Point-in-Time Options

You can recover a table or table partition to a past point in time by specifying one of the following three clauses:

  • UNTIL SCN – The system change number (SCN)
  • UNTIL TIME – A time, in the format specified by the NLS_DATE_FORMAT environment variable
  • UNTIL SEQUENCE – The log sequence number and thread number
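For illustration, the table recovery shown later in this post could equally be requested with a time or a log sequence instead of an SCN. The commands below are only a sketch; the timestamp, sequence, and thread values are placeholders and not taken from the demo environment:

RMAN> recover table advaitd.t1
      until time "to_date('26-JUL-2016 19:00:00','DD-MON-YYYY HH24:MI:SS')"
      auxiliary destination '/u01/app/oracle/oradata/aux';

RMAN> recover table advaitd.t1
      until sequence 38 thread 1
      auxiliary destination '/u01/app/oracle/oradata/aux';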

Steps for performing table recovery

Let’s take an example. I have a table T1 created in DEO database.

 

[oracle@advait ~]$ sqlplus advaitd/oracle@deo

SQL> select * from T1;

      COL1
----------
         1
         2
         3

We note down the current SCN while the table is still available:


SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    1243251

Following are the datafiles available in my database


SQL> select file#, name, con_id from v$datafile order by 3,1;  

     FILE# NAME                                                                                 CON_ID
---------- -------------------------------------------------------------------------------- ----------
         1 /u01/app/oracle/oradata/orcl/orcl/datafiles/system-01.dbf                                 0
         2 /u01/app/oracle/oradata/orcl/orcl/datafiles/sysaux-01.dbf                                 0
         3 /u01/app/oracle/oradata/orcl/orcl/datafiles/undotbs1-01.dbf                               0
         4 /u01/app/oracle/oradata/orcl/orcl/users.dbf                                               0
         5 /u01/app/oracle/oradata/orcl/orcl/datafiles/mgmt_depot.dbf                                0
         6 /u01/app/oracle/oradata/orcl/orcl/datafiles/mgmt.dbf                                      0
         7 /u01/app/oracle/oradata/orcl/orcl/datafiles/mgmt_ad4j.dbf                                 0

Let's drop the table T1:


SQL> drop table T1;

Table dropped.

We can use the command below to recover this table. Note that the SCN we specify is the SCN we noted down while the table still existed.

RMAN> recover table advaitd.t1 until scn 1243251 auxiliary destination '/u01/app/oracle/oradata/aux';

Let’s see what this command does exactly.

Step 1) The command creates and mounts a new ad hoc dummy instance, as shown below

 


RMAN> recover table advaitd.t1 until scn 1243251 auxiliary destination '/u01/app/oracle/oradata/aux';

Starting recover at 26-JUL-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=107 device type=DISK
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified Point-in-Time

List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1

Creating automatic instance, with SID='jCvj'

It uses the controlfile autobackup to restore the controlfile and mount the dummy instance.

Step 2) It then restores the controlfile for the auxiliary instance

Following memory script shows the commands for restoring controlfile


contents of Memory Script:
{
# set requested point in time
set until  scn 1243251;
# restore the controlfile
restore clone controlfile;
 
# mount the controlfile
sql clone 'alter database mount clone database';
 
# archive current online log 
sql 'alter system archive log current';
}

Step 3) Next it restores the tablespaces required to perform tablespace PITR

Following memory script shows that it restores the required datafiles (until SCN 1243251) into the auxiliary destination and switches the datafiles to the copies.
When we use UNTIL SCN, RMAN restores datafiles from backups that are a little older than this SCN.


contents of Memory Script:
{
# set requested point in time
set until  scn 1243251;
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile  1 to new;
set newname for clone datafile  3 to new;
set newname for clone datafile  2 to new;
set newname for clone tempfile  1 to new;
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  1, 3, 2;
 
switch clone datafile all;
}

Step 4) Recover the datafiles until required SCN specified

The memory script below shows that RMAN recovers these datafiles until the specified SCN. Once recovered, it opens the database in read-only mode.


contents of Memory Script:
{
# set requested point in time
set until  scn 1243251;
# online the datafiles restored or switched
sql clone "alter database datafile  1 online";
sql clone "alter database datafile  3 online";
sql clone "alter database datafile  2 online";
# recover and open database read only
recover clone database tablespace  "SYSTEM", "UNDOTBS1", "SYSAUX";
sql clone 'alter database open read only';
}

Step 5) Create SPFILE for auxiliary instance and mount the DB again

Following memory script tells us that RMAN creates an SPFILE for the auxiliary instance (setting the CONTROL_FILES parameter in the SPFILE) and mounts the auxiliary instance again


contents of Memory Script:
{
   sql clone "create spfile from memory";
   shutdown clone immediate;
   startup clone nomount;
   sql clone "alter system set  control_files = 
  ''/u01/app/oracle/oradata/aux/DEO/controlfile/o1_mf_ct8t8hyq_.ctl'' comment=
 ''RMAN set'' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
# mount database
sql clone 'alter database mount clone database';
}

Step 6) Restore USERS tablespace where the table belongs

Following memory script shows that RMAN restores the USERS tablespace (datafile 4) and switches to the copy.


contents of Memory Script:
{
# set requested point in time
set until  scn 1243251;
# set destinations for recovery set and auxiliary set datafiles
set newname for datafile  4 to new;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  4;
 
switch clone datafile all;
}

Step 7) Recovering the auxiliary database until required SCN

Following memory script shows that RMAN recovers the USERS tablespace, along with the other tablespaces it restored previously, up to the required SCN. It then opens the database with RESETLOGS.


contents of Memory Script:
{
# set requested point in time
set until  scn 1243251;
# online the datafiles restored or switched
sql clone "alter database datafile  4 online";
# recover and open resetlogs
recover clone database tablespace  "USERS", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}

Step 8) Create directory for datapump export and export the required table

Following memory script shows that RMAN creates a directory for the Data Pump export and then exports the table


contents of Memory Script:
{
# create directory for datapump import
sql "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/u01/app/oracle/oradata/aux''";
# create directory for datapump export
sql clone "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/u01/app/oracle/oradata/aux''";
}
executing Memory Script

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/u01/app/oracle/oradata/aux''

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/u01/app/oracle/oradata/aux''

Performing export of tables...
   EXPDP> Starting "SYS"."TSPITR_EXP_jCvj_rjBm":  
   EXPDP> Estimate in progress using BLOCKS method...
   EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   EXPDP> Total estimation using BLOCKS method: 64 KB
   EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   EXPDP> . . exported "ADVAITD"."T1"                              5.062 KB       3 rows
   EXPDP> Master table "SYS"."TSPITR_EXP_jCvj_rjBm" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TSPITR_EXP_jCvj_rjBm is:
   EXPDP>   /u01/app/oracle/oradata/aux/tspitr_jCvj_52000.dmp
   EXPDP> Job "SYS"."TSPITR_EXP_jCvj_rjBm" successfully completed at Tue Jul 26 19:55:13 2016 elapsed 0 00:00:21
Export completed

Step 9) Importing the table to original database

Following memory script shows RMAN importing the table into the original database. You can provide additional options to prevent RMAN from importing the table, to remap the table to a different name, or to append the content to an existing table (see the example after the import output below).


contents of Memory Script:
{
# shutdown clone before import
shutdown clone abort
}
executing Memory Script

Oracle instance shut down

Performing import of tables...
   IMPDP> Master table "SYS"."TSPITR_IMP_jCvj_tCzd" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_jCvj_tCzd":  
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   IMPDP> . . imported "ADVAITD"."T1"                              5.062 KB       3 rows
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   IMPDP> Job "SYS"."TSPITR_IMP_jCvj_tCzd" successfully completed at Tue Jul 26 19:55:24 2016 elapsed 0 00:00:04
Import completed
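
If you do not want the automatic import into the original database, or you want the recovered table under a different name, the RECOVER TABLE command accepts additional options. The commands below are a sketch only; the new table name is a placeholder:

RMAN> recover table advaitd.t1 until scn 1243251
      auxiliary destination '/u01/app/oracle/oradata/aux'
      remap table 'ADVAITD'.'T1':'T1_RECOVERED';

RMAN> recover table advaitd.t1 until scn 1243251
      auxiliary destination '/u01/app/oracle/oradata/aux'
      notableimport;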

Step 10) Finally, RMAN will clean up and remove the auxiliary instance


Removing automatic instance
Automatic instance removed
auxiliary instance file /u01/app/oracle/oradata/aux/DEO/datafile/o1_mf_temp_csgwmp8s_.tmp deleted
auxiliary instance file /u01/app/oracle/oradata/aux/JCVJ_PITR_DEO/onlinelog/o1_mf_2_csgwo8fy_.log deleted
auxiliary instance file /u01/app/oracle/oradata/aux/JCVJ_PITR_DEO/onlinelog/o1_mf_1_csgwo7ht_.log deleted
auxiliary instance file /u01/app/oracle/oradata/aux/JCVJ_PITR_DEO/datafile/o1_mf_users_csgwo53j_.dbf deleted
auxiliary instance file /u01/app/oracle/oradata/aux/DEO/datafile/o1_mf_sysaux_csgwkm4b_.dbf deleted
auxiliary instance file /u01/app/oracle/oradata/aux/DEO/datafile/o1_mf_undotbs1_csgwkm4f_.dbf deleted
auxiliary instance file /u01/app/oracle/oradata/aux/DEO/datafile/o1_mf_system_csgwkm2x_.dbf deleted
auxiliary instance file /u01/app/oracle/oradata/aux/DEO/controlfile/o1_mf_csgwkf6f_.ctl deleted
auxiliary instance file tspitr_jCvj_52000.dmp deleted
Finished recover at 26-JUL-16

RMAN> 

You can verify whether you got the required table back:


SQL> select * from T1;

      COL1
----------
         1
         2
         3

As you can see, all the steps are performed by RMAN automatically once we execute a single RMAN command to recover the table.

This was a small test database, but for production databases we have to make sure that the auxiliary destination has enough space available for restoring the required tablespaces; otherwise the table recovery will fail.

Hope this helps !!

Oracle 12c Learning Series: In-database Archiving and Temporal Validity

One of the major challenges faced by an Oracle DBA is how to effectively deal with historical data. Today, if we consider the database tables of an enterprise, the data in them often goes back several years and most of it is inactive. The challenge is how to archive this data while keeping our queries running efficiently.

If we choose to archive old data outside the database on tape, the cost of storing the data is reduced, but the older data is not available for one-off reports. Also, regulatory compliance often requires that older historical data remain accessible at all times.

If we choose to keep old data inside the database, the size of the database grows, and so do the size of backups and the storage cost. Not only that, queries on bigger tables do not run efficiently, so there is a performance cost associated with queries, DMLs, and DDLs (for example, index creation and rebuild).

In 11g, older archived data is handled using the following two solutions:

  1. Partitioning – Distinguish the data as old or new based on the date of creation or some date attribute in the table and partition the table on that column. The DBA can then manually move older partitions to low-cost storage.
  2. Flashback Data Archive (the Total Recall feature in 11g) – This feature automatically tracks changes to data over a period of time and maintains archived transactional data in a different tablespace based on the RETENTION parameter. Historical data can be queried using the flashback query AS OF clause. Archived transactional data that has aged beyond RETENTION is purged automatically.

Oracle 12c provides additional solutions to the above challenges. We have two ways to handle archived data in Oracle 12c:

  1. In-database archiving
  2. Temporal validity

In-Database Archiving

Until now, it was very difficult to distinguish active data from inactive data at the row level. In Oracle 12c, we can identify active versus inactive data at the row level. This is made possible by attaching a property to every row of a table that marks it as inactive; if a row is not marked inactive, it is an active row. Setting such a property at the row level separates active from inactive data in a table and lets you archive rows by marking them inactive. The key thing to understand here is that the data remains in the table and you can compress it, but to applications this (inactive) part of the data remains invisible. There is also a session-level parameter that lets us see active as well as inactive data if required.

In-database archiving is enabled at the table level, either when the table is created or later by altering the table. Following is an example of creating a table with in-database archiving:


SQL> create table sales_archive
 2 (year    number(4),
 3 product  varchar2(10),
 4 amt      number(10,2))
 5 row archival;

Table created.

When you create a table with the "ROW ARCHIVAL" clause (line 5 in the above example), Oracle automatically adds an extra column, ORA_ARCHIVE_STATE, to the table. The column is added as a hidden column, so if you describe the table you won't see it.


SQL> desc sales_archive
 Name                    Null?    Type
 ----------------------- -------- ----------------
 YEAR                             NUMBER(4)
 PRODUCT                          VARCHAR2(10)
 AMT                              NUMBER(10,2)

But you can check whether the column has been added using the DBA_TAB_COLS view.


SQL> select table_name, column_name, column_id, HIDDEN_COLUMN from dba_tab_cols where table_name = 'SALES_ARCHIVE';

TABLE_NAME           COLUMN_NAME          COLUMN_ID  HID
-------------------- -------------------- ---------- ---
SALES_ARCHIVE        ORA_ARCHIVE_STATE               YES
SALES_ARCHIVE        YEAR                          1 NO
SALES_ARCHIVE        PRODUCT                       2 NO
SALES_ARCHIVE        AMT                           3 NO

Similarly, you can enable an existing table for row archival using alter table as shown below.


SQL> alter table sales row archival;

Table altered.

SQL> select table_name, column_name, column_id, HIDDEN_COLUMN from dba_tab_cols where table_name = 'SALES';

TABLE_NAME           COLUMN_NAME          COLUMN_ID  HID
-------------------- -------------------- ---------- ---
SALES                YEAR                          1 NO
SALES                PRODUCT                       2 NO
SALES                AMT                           3 NO
SALES                ORA_ARCHIVE_STATE               YES

The ORA_ARCHIVE_STATE column is what distinguishes active data from inactive data. By default, a newly inserted row is active, denoted by the value 0 in the ORA_ARCHIVE_STATE column. When rows start being rarely accessed and are no longer updated, they can be considered inactive and marked with the value 1 (or any non-zero value) in the ORA_ARCHIVE_STATE column.

If you want to check the value of ORA_ARCHIVE_STATE column, you have to select the column explicitly in your select statement.


SQL> select year, product, amt, ORA_ARCHIVE_STATE from sales_archive;

      YEAR PRODUCT           AMT ORA_ARCHIV
---------- ---------- ---------- ----------
      2001 A                 100 0
      2002 B                 200 0
      2003 C                 300 0
      2004 D                 400 0
      2005 E                 500 0
      2006 F                 600 0
      2007 G                 700 0

7 rows selected.

As you can see, we have value of 0 in all the selected rows, which means all this data is active.

Note that if rows are rarely accessed, the ORA_ARCHIVE_STATE column is not set to 1 automatically. The DBA or the owner of the data has to decide which rows are inactive and manually update the value of the ORA_ARCHIVE_STATE column.

Following DML shows how to mark the data inactive

SQL> update sales_archive set ORA_ARCHIVE_STATE = DBMS_ILM.ARCHIVESTATENAME(1) where year = 2001;

1 row updated.

SQL> commit;

Commit complete.

In the above DML, we used the DBMS_ILM.ARCHIVESTATENAME function. This function is the official way to update the value of the ORA_ARCHIVE_STATE column. You can also update this column directly to any non-zero value and the result will be the same. In the above DML, we updated one record (year 2001) and marked it as inactive (archived).

If you query the table again, you won’t see the record that we updated above.


SQL> select year, product, amt, ORA_ARCHIVE_STATE from sales_archive;

      YEAR PRODUCT           AMT ORA_ARCHIV
---------- ---------- ---------- ----------
      2002 B                 200 0
      2003 C                 300 0
      2004 D                 400 0
      2005 E                 500 0
      2006 F                 600 0
      2007 G                 700 0

6 rows selected.

This is because, by default, Oracle shows only active records. Any records marked inactive are not shown. Oracle does this by adding a simple filter to your queries, which we can see in the explain plan below.

SQL> explain plan for
 2 select year, product, amt, ORA_ARCHIVE_STATE from sales_archive;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------
Plan hash value: 4043476784

----------------------------------------------------------------------------------
| Id | Operation         | Name          | Rows | Bytes | Cost (%CPU) | Time     |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT   |               | 6    | 12210 | 3 (0)       | 00:00:01 |
|* 1 | TABLE ACCESS FULL | SALES_ARCHIVE | 6    | 12210 | 3 (0)       | 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("SALES_ARCHIVE"."ORA_ARCHIVE_STATE"='0')

Note
-----
 - dynamic statistics used: dynamic sampling (level=2)

17 rows selected.

You can view archived records again by setting the ROW ARCHIVAL VISIBILITY parameter to ALL at the session level. By default, the value of this parameter is ACTIVE, which is why we are seeing only active records. The following shows the same:


SQL> alter session set row archival visibility = ALL;

Session altered.

SQL> select year, product, amt, ORA_ARCHIVE_STATE from sales_archive;

YEAR       PRODUCT           AMT ORA_ARCHIV
---------- ---------- ---------- ----------
      2001 A                 100 1
      2002 B                 200 0
      2003 C                 300 0
      2004 D                 400 0
      2005 E                 500 0
      2006 F                 600 0
      2007 G                 700 0

7 rows selected.

After setting ROW ARCHIVAL VISIBILITY to ALL, we are able to see the archived values. This is one great advantage of in-database archiving: if some ad hoc report needs access to archived records, we can set ROW ARCHIVAL VISIBILITY to ALL at the beginning of the report and get the desired result, then set visibility back to ACTIVE again.

SQL> alter session set row archival visibility = ACTIVE;

Session altered.

You can also restore archived data back to active by running an UPDATE statement on the archived rows using the DBMS_ILM.ARCHIVESTATENAME function, but in this case you pass 0 as the value. Also, you need to set ROW ARCHIVAL VISIBILITY to ALL before you run the update; otherwise the archived data won't be visible.
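
Continuing the earlier example, a minimal sketch of restoring the year 2001 row back to active would look like this:

SQL> alter session set row archival visibility = ALL;

SQL> update sales_archive set ORA_ARCHIVE_STATE = DBMS_ILM.ARCHIVESTATENAME(0) where year = 2001;

SQL> commit;

SQL> alter session set row archival visibility = ACTIVE;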

You can disable row archival for a table using ALTER TABLE … NO ROW ARCHIVAL. If you disable row archival, the ORA_ARCHIVE_STATE column is dropped immediately and all the data becomes visible to your SELECT statements.

SQL> alter table sales no row archival;

Table altered.

Note that if you use CTAS (CREATE TABLE AS SELECT) to create a new table from an existing table, the new table will contain active as well as inactive data. In-database archiving applies only to SELECTs and DMLs, not to DDLs.

Temporal Validity and Temporal history

With in-database archiving, we distinguish active rows from inactive rows using the row archival column (ORA_ARCHIVE_STATE). You can achieve similar functionality using temporal validity. Many applications today have data that depends on various dates relevant to the underlying business. For example, an insurance application records an insurance start date and an insurance end date; these represent the valid time when the insurance policy is active for a customer. Date attributes like these, which denote the valid time of a record, are called temporal validity. Temporal validity lets you keep active and inactive data together in the same table while providing all the benefits of archiving inactive data. Temporal validity support helps in cases where it's critical to know when certain data becomes valid and when it becomes invalid.

Similar to the business-relevant date attributes used by applications, Oracle provides temporal validity by creating its own date attributes (start date and end date) to represent the valid-time dimension during which a record is active. The application user or DBA decides which records are active by updating the start date and end date attributes of each record. Records whose end date has passed are inactive records.

By using the implicit filter on the valid-time dimension, queries can show rows that are currently valid or that will be valid in the future, and can hide rows whose facts are not currently valid.

Temporal History

Before we go further, it's better to understand temporal history and how temporal validity differs from it. Temporal history is what was called Flashback Data Archive (FDA) in the previous release. To understand this, let's consider the following.

Temporal validity dates and times are different from the dates and times when the record is created. The date and time when a record is created in the database is its temporal history. For example, if we have to create a new customer whose insurance policy starts today, it's possible that we create the record tomorrow or a week later but still put today as the insurance start date. In that case the temporal validity date is today, but since the record is created a week later, the temporal history date is a week later.

We can use temporal history to get the past data. We can also use temporal history along with temporal validity to get rows which were valid in the past.

Defining temporal validity

You define a valid-time dimension at table creation time, or by altering a table. In order to create the valid-time dimension, you specify the PERIOD FOR clause in the table creation statement.

The following example creates a table with a valid-time dimension, explicitly defining the two date-time columns USER_TIME_START and USER_TIME_END:


SQL> create table policy
2 (cust_id         number,
3 policy_no        number,
4 policy_type      varchar2(20),
5 user_time_start  date,
6 user_time_end    date,
7 period for user_time (user_time_start, user_time_end));

Table created.

In the above example, we explicitly specified the valid-time dimension columns in the CREATE TABLE statement. But even if we don't specify the column names, valid-time temporal support creates the desired columns:


SQL> create table policy
2 (cust_id    number,
3 policy_no   number,
4 policy_type varchar2(20),
5 period for user_time);

Table created.

A valid-time dimension, represented by the new PERIOD FOR clause, consists of two date-time columns. If you don't specify the columns explicitly, as in the second example, Oracle automatically creates two hidden columns whose names start with the prefix provided in the PERIOD FOR clause and end with "START" and "END". So in the second example, the two hidden columns created would be USER_TIME_START and USER_TIME_END (the same as in the first example).

The only difference between the first example and the second one is the visibility of the valid-time dimension columns. In the first table, the columns are visible because we explicitly created them, whereas in the second example the columns are hidden.

If the columns are created hidden, you need to specify them explicitly in SELECTs, UPDATEs, and INSERTs. The following example shows an insert statement into the policy table where we created implicit valid-time dimension columns:


SQL> insert into policy (cust_id, policy_no, policy_type, user_time_start, user_time_end) values (123, 3424, 'HEALTH',sysdate, null);

1 row created.

SQL> select * from policy;

   CUST_ID POLICY_NO  POLICY_TYPE
---------- ---------- --------------------
       123 3424       HEALTH

SQL> select cust_id, policy_no, policy_type, user_time_start from policy;

   CUST_ID POLICY_NO  POLICY_TYPE          USER_TIME_START
---------- ---------- -------------------- ----------------------------------------
       123 3424       HEALTH               20-JUN-16 12.33.53.000000 AM +05:30

As you can see, using "select *" does not show the hidden valid-time dimension columns. In the second statement we select them explicitly and see the values.

Now we have seen how to create valid-time columns in a table to separate active data from inactive data. Let's check how to filter on these valid-time columns and select only valid data. There are two methods of selecting active data by filtering on the valid-time columns:

  • PERIOD FOR clause
  • Using DBMS_FLASHBACK_ARCHIVE procedure

PERIOD FOR clause

To filter valid data, you can use a SELECT statement with the new PERIOD FOR clause. One set of data in the table is valid based on the start and end dates of its valid-time dimension, and another set of data in the same table is invalid because it falls outside that start and end time. Both sets of rows reside in the same table. However, by restricting visibility to valid rows, you can limit queries and DMLs to see only active data.

For each record that you insert into the table, you specify the valid-time start date and end date. These dates represent how active the data is and are entered manually by end users. The date when the record is actually inserted into the table is the transaction time, which is called temporal history in Oracle 12c.

You can use either the AS OF PERIOD FOR clause or the VERSIONS PERIOD FOR clause to display valid data. AS OF PERIOD FOR is used when we want to see data that was valid on a specific date, whereas VERSIONS PERIOD FOR is used when we want to see data that was valid between two dates (a range).

The following example displays all policies that were active on 01-JAN-2016:

select * from policy AS OF PERIOD FOR user_time to_date('01-JAN-2016','DD-MON-YYYY');

Here is another example, which shows valid data between 01-JAN-2016 and 01-FEB-2016:

select * from policy VERSIONS PERIOD FOR user_time between to_date('01-JAN-2016','DD-MON-YYYY') and to_date('01-FEB-2016','DD-MON-YYYY');

DBMS_FLASHBACK_ARCHIVE

You can also use the DBMS_FLASHBACK_ARCHIVE package to show valid data. Its ENABLE_AT_VALID_TIME procedure sets the visibility of data at the session level. Once visibility is set, SELECT statements on a valid-time temporal table show the data that was valid at the set time.

In the following example, we set the visibility of data as of a given time, 01-JAN-2016.
This means any SELECT statement or DML in the same session will see data that was valid as of 01-JAN-2016.


SQL> exec DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME('ASOF',to_date('01-JAN-2016','DD-MON-YYYY'));

PL/SQL procedure successfully completed.

You can set the visibility to current time. This will show data which is currently valid


SQL> exec DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME('CURRENT');

PL/SQL procedure successfully completed.

Note that this visibility setting affects only SELECTs and DMLs; it does not affect DDL. DDL sees the complete data (valid as well as invalid). Also, in-database archiving and temporal validity are NOT supported in a CDB; they are supported only in non-CDB databases.

Hope this helps !!

Oracle 12c Learning Series: Automatic Data Optimization – ADO

I am starting the Oracle 12c learning series, where I am planning to publish multiple articles on Oracle 12c new features. I hope these articles will be helpful to DBAs aspiring for OCP 12c certification, and also to others who are planning to implement these features in their databases.

This is a very long post about ADO (Automatic Data Optimization), which is one of the ILM strategies for managing aging data, so please be patient while reading it. This article provides end-to-end details about ADO implementation, command syntax, etc.

Typical lifecycle of data

The lifecycle of data starts with insert statements. New data is inserted into the table. At this stage the data is either kept in non-compressed form or is compressed by the DBA, but compression for such active data is optimized for DMLs (for example, OLTP table compression): the compression ratio is lower and the impact on DMLs is minimal.
After a month, activity has subsided, although significant OLTP transactions are still carried out. At this stage, data in the formerly most active partition stays in OLTP compressed format and a new partition is automatically created.

Two months down the line, the data is rarely modified and rarely accessed. At this stage, the partition can be moved to a low-cost storage tier and to a higher compression level like Hybrid Columnar Compression (HCC).

After a year, the data is considered dormant and is no longer accessed or updated. At this stage, it can be moved to a low-cost storage tier with the highest level of HCC compression and may also be marked read-only.

How (ADO) Automatic Data Optimization works

Automatic Data Optimization (ADO) is an ILM strategy which uses activity tracking, by means of the heat map, to determine whether data is still being accessed or not.

The heat map provides the ability to track and mark data as it goes through lifecycle changes. The heat map can track data at the following levels:

  • Data accesses at segment level
  • Data modification at segment level and block level

These collected heat map statistics are stored in the SYSAUX tablespace. They provide a "heat map" of hot and cold data based on how frequently the data is accessed or modified.

ADO allows you to create policies that use heat map statistics to compress and move data when necessary. ADO automatically evaluates and executes policies that perform compression and storage tiering actions.

Following are the outline steps to implement and use ADO policies:

1)    Enable the heat map at the database level. The heat map tracks data access at the segment level and data modification at the segment level and row level.

2)    Create ADO policies.

You can create ADO policies at different scopes/level

a)    Tablespace level

b)    Group level

c)     Segment level

d)    Row level

You can also create ADO policies based on different operations to track.

a)    Creation

b)    Access

c)     Modification

You can also provide condition when this policy takes effect

a)    Based on time period. Ex: After 3 days, After 1 week, After 1 year

b)    Tablespace usage above threshold

You can also specify different kind of action for ADO policy

a)    Compression

b)    Moving to different storage tier

c)     Both compression + move

3)    Evaluating and executing ADO policies. By default, segment-level ADO policies are evaluated every day during the maintenance window. The DBA can also create a custom schedule for evaluating ADO policies. Row-level policies are evaluated by MMON every 15 minutes.

4)    Verify ADO execution details using the DBA_ILMEVALUATIONDETAILS and DBA_ILMRESULTS views (see the sample queries after this list).

5)    Verify if segment is compressed or moved based on the action provided in executed policy.
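
As a sketch of step 4, queries like the following can be run after the maintenance window to see what was evaluated and executed. The column lists are assumptions based on the view names above; check the view definitions in your release:

-- which objects and policies were selected for execution
select policy_name, object_owner, object_name, selected_for_execution
from dba_ilmevaluationdetails;

-- status of the ILM jobs that carried out the actions
select task_id, job_name, job_state
from dba_ilmresults;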

We will now go into the details of all the steps above and see how we can enable ADO policies as per our business requirements.

Step 1) Enabling Heat Map

Enabling and disabling heat map

The very first step before you can start using ADO is to enable activity tracking, which you do by setting the new initialization parameter HEAT_MAP:

SQL> alter system set heat_map = on;

System altered.

The above parameter enables activity tracking for the entire instance. Enabling the heat map enables tracking of both DML and access at the segment level and stores these details in the relevant tables in the SYSAUX tablespace. It does not track any access details for segments in the SYSTEM and SYSAUX tablespaces.

*** An important thing to note here is that HEAT_MAP does not work with CDB databases; it works only for non-CDB databases, so you cannot use this feature with a CDB.

You can also enable heat map at session level using following command

SQL> alter session set heat_map = on;

Session altered.

You can turn off heat map by setting the parameter to off as below

SQL> alter system set heat_map = off;

System altered.

Checking heat map details

Once you enable the heat map, you can check various details of the activity tracking it provides. You can check when segments were accessed recently, whether the access was by ROWID or full scan, and when the most recent DML was performed on the segment.

V$HEAT_MAP_SEGMENT view

Tracking details of segments are available in real time in the V$HEAT_MAP_SEGMENT view. As soon as a session accesses an object in the database, its heat map information is immediately available in this view.

For example, I just accessed one of the demo tables (SALES) in my test database after enabling the heat map, and I can see the following info:


SQL> select object_name, SUBOBJECT_NAME, TRACK_TIME, SEGMENT_WRITE, SEGMENT_READ, FULL_SCAN, LOOKUP_SCAN, con_id from v$heat_map_segment order by 2;

OBJECT_NAME          SUBOBJECT_ TRACK_TIM SEG SEG FUL LOO     CON_ID
-------------------- ---------- --------- --- --- --- --- ----------
SALES                P10        14-JUN-16 NO  NO  YES NO           0
SALES                P11        14-JUN-16 NO  NO  YES NO           0
SALES                P12        14-JUN-16 YES NO  YES NO           0
SALES                P2         14-JUN-16 NO  NO  YES NO           0
SALES                P3         14-JUN-16 NO  NO  YES NO           0
SALES                P4         14-JUN-16 NO  NO  YES NO           0
SALES                P5         14-JUN-16 NO  NO  YES NO           0
SALES                P6         14-JUN-16 NO  NO  YES NO           0
SALES                P7         14-JUN-16 NO  NO  YES NO           0
SALES                P8         14-JUN-16 NO  NO  YES NO           0
SALES                P9         14-JUN-16 NO  NO  YES NO           0
SALES                PM         14-JUN-16 NO  NO  YES NO           0
I_SALES_ITEM_YEAR               14-JUN-16 NO  NO  NO  YES          0
SALES_ITEM                      14-JUN-16 NO  NO  YES NO           0

14 rows selected.

If the FULL_SCAN column is YES, it means the access was done using a full scan.

If the SEGMENT_WRITE column is YES, it means the table was modified.

TRACK_TIME is the timestamp when the access or modification was done.

You can also see that the LOOKUP_SCAN column is YES for one row, where the object_name is the name of an index. So if an index is used in your select plan, such access shows LOOKUP_SCAN as YES.

SYS.HEAT_MAP_STAT$ and DBA_HEAT_MAP_SEG_HISTOGRAM

Data from V$HEAT_MAP_SEGMENT is persisted into the SYS.HEAT_MAP_STAT$ table in the SYSAUX tablespace by a DBMS_SCHEDULER job at regular intervals. This data is then available via the DBA_HEAT_MAP_SEG_HISTOGRAM view. The following shows similar data when we query DBA_HEAT_MAP_SEG_HISTOGRAM:

SQL> select object_name, subobject_name, track_time, SEGMENT_WRITE, full_scan, lookup_scan from DBA_HEAT_MAP_SEG_HISTOGRAM;

OBJECT_NAME          SUBOBJECT_ TRACK_TIM SEG FUL LOO
-------------------- ---------- --------- --- --- ---
SALES                P2         14-JUN-16 NO  YES NO
SALES                P3         14-JUN-16 NO  YES NO
SALES                P4         14-JUN-16 NO  YES NO
SALES                P5         14-JUN-16 NO  YES NO
SALES                P6         14-JUN-16 NO  YES NO
SALES                P7         14-JUN-16 NO  YES NO
SALES                P8         14-JUN-16 NO  YES NO
SALES                P9         14-JUN-16 NO  YES NO
SALES                P10        14-JUN-16 NO  YES NO
SALES                P11        14-JUN-16 NO  YES NO
SALES                P12        14-JUN-16 YES YES NO
SALES                PM         14-JUN-16 NO  YES NO
SALES_ITEM                      14-JUN-16 NO  YES NO
I_SALES_ITEM_YEAR               14-JUN-16 NO  NO  YES

14 rows selected.

DBA_HEATMAP_TOP_TABLESPACES

This view shows heat map information for the top 100 tablespaces. It gives the report at the tablespace level instead of the segment level. You can find the MIN, MAX, and AVG read time, write time, full table scan time, and lookup access time for each tablespace.
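
For example, a query along the following lines lists the write-time statistics per tablespace. The exact column names are an assumption based on the description above; describe the view in your environment to confirm them:

select tablespace_name, min_writetime, max_writetime, avg_writetime
from dba_heatmap_top_tablespaces
order by tablespace_name;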

DBMS_HEAT_MAP package

The DBMS_HEAT_MAP package can be used to get heat map statistics at an even deeper level. The views above only show heat map statistics at the segment level; if you want to go deeper, for example to check which blocks were accessed and at what time, you can use the subprograms in the DBMS_HEAT_MAP package.

Following is a summary of the procedures and functions in DBMS_HEAT_MAP package:

  • BLOCK_HEAT_MAP function Returns the last modification time for each block in a table segment
  • EXTENT_HEAT_MAP function Returns the extent-level Heat Map statistics for a table segment
  • OBJECT_HEAT_MAP procedure Shows the minimum, maximum, and average access times for all segments belonging to an object
  • SEGMENT_HEAT_MAP procedure Shows the Heat Map attributes for a segment
  • TABLESPACE_HEAT_MAP procedure Shows minimum, maximum, and average access times for all segments in a tablespace

For example, we can use DBMS_HEAT_MAP.BLOCK_HEAT_MAP to get heat map statistics for the blocks of a segment.

Similarly, the DBMS_HEAT_MAP.EXTENT_HEAT_MAP function provides heat map statistics at the extent level.
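
As a quick sketch, and assuming the usual pipelined table function call style for this package (the DEMO_USER owner matches the schema used in the policy examples later in this post), block-level statistics for the SALES table could be queried like this:

select * from table(dbms_heat_map.block_heat_map('DEMO_USER','SALES'));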

Step 2) Enabling ADO policies

Let's move to the second step of implementing ILM. Now that we have enabled the heat map in step 1, we need to create policies at various levels to manage our data. The policies we create use the heat map statistics and take the action that we define in the policy. There are four sections to creating an ADO policy.

Level at which you can create ADO policies

  • Tablespace level – A default ADO policy can be defined at the tablespace level and applies to all segments existing in that tablespace or new segments created in it.
  • Group level – When we define a group-level ADO policy, if a table is eligible for the specific ADO action defined in the policy, the same action is performed on all dependent objects. If the action is compression and the table has LOB columns, all SecureFiles LOBs are compressed as well.
  • Segment level – Applies to tables and table partitions. If no policy is defined at the segment level and we have a default policy at the tablespace level, the tablespace-level policy applies to the segment; a segment-level policy overrides the tablespace-level policy.
  • Row level – Can be created only for tracking modifications. We cannot have a row-level policy for creation or access operations.

Operations to track

  • Creation – Action will be taken on specified time condition after creation of data
  • Access – Action will be taken on specified time condition after access of data
  • Modification – Action will be taken on specified time condition after modification of data

Condition when this policy takes effect

  • Based on time period. Ex: After 3 days, After 1 week, After 1 year
  • Tablespace usage above threshold

Action to take for ADO policy

  • Compression
  • Data Movement-  Moving to different storage tier
  • Both compression + move

You can create ILM policies in the CREATE TABLE statement, or you can add ILM policies to existing tables using ALTER TABLE statements. A segment can have multiple ILM policies defined on it.
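
For instance, a policy can be attached directly in the CREATE TABLE statement. The following is a sketch with made-up columns; the ALTER TABLE examples below follow the same policy pattern:

CREATE TABLE orders
( order_id   NUMBER,
  order_date DATE )
ILM ADD POLICY ROW STORE COMPRESS ADVANCED
SEGMENT AFTER 90 DAYS OF NO MODIFICATION;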

Before we look into different examples of creating ILM policies, let's take a look at the various compression levels available. Data can be compressed while it's being inserted, updated, or bulk loaded into a table.

We have following 4 compression levels available

  • ROW STORE COMPRESS BASIC

This is the basic level of compression. It compresses data during direct-path inserts (bulk loads) and does not require the Advanced Compression Option (ACO).

  • ROW STORE COMPRESS ADVANCED

This is a renamed syntax for the previous OLTP table compression feature that was part of ACO; it keeps data compressed during conventional DML. ROW STORE COMPRESS ADVANCED on a heap table maps to LOW compression for SecureFiles LOB segments when a GROUP level ADO policy is implemented.

  • COLUMN STORE COMPRESS FOR QUERY LOW or HIGH

This provides Hybrid Columnar Compression (HCC) at a higher compression level than ROW STORE COMPRESS. It works well when load performance is critical, frequent queries are run against the data, and no DMLs are expected. COLUMN STORE COMPRESS FOR QUERY LOW or HIGH maps to MEDIUM compression for SecureFiles LOB segments when a GROUP level ADO policy is implemented.

  • COLUMN STORE COMPRESS FOR ARCHIVE LOW or HIGH

This provides Hybrid Columnar Compression (HCC) at the highest level of compression. It works well when data is accessed very infrequently and no DMLs are run against it. COLUMN STORE COMPRESS FOR ARCHIVE LOW or HIGH maps to HIGH compression for SecureFiles LOB segments when a GROUP level ADO policy is implemented.

Let's look at creating ADO policies for the compression action and the data movement action.

ADO policies for Compression action

Following are the different examples of creating ADO compression policies

Tablespace level compression policy

ALTER TABLESPACE USERS DEFAULT ILM ADD POLICY
ROW STORE COMPRESS ADVANCED
SEGMENT AFTER 30 DAYS OF NO ACCESS

In the above command, we can see the various sections required in a policy implementation. Because it is an ALTER TABLESPACE ... DEFAULT ILM ADD POLICY statement, this is the default policy at the tablespace level and applies to the segments in that tablespace, as represented by the SEGMENT keyword.

  • ILM ADD POLICY – This is required to add a policy to a tablespace or segment
  • ROW STORE COMPRESS ADVANCED – This is the compression level used as the action, since this is a compression-based ILM policy
  • SEGMENT – This represents that the policy is applicable to all segments in the USERS tablespace
  • AFTER 30 DAYS OF – This represents the condition under which the policy becomes eligible to take the required compression action
  • NO ACCESS – This represents the operation to track. We have three different types of operations, as defined above: access, modification, and creation

So if we alter the USERS tablespace using the above command and add the ILM policy, then any segment in the tablespace (which has no other ILM policy set) that is not accessed for more than 30 days will be compressed with ROW STORE COMPRESS ADVANCED.

Group level compression policy

ALTER TABLE SALES ILM ADD POLICY
ROW STORE COMPRESS ADVANCED
GROUP AFTER 60 DAYS OF NO MODIFICATION

In the above example, we are implementing group level ILM policy. This policy will automatically compress the table SALES if no modification is done for 60 days. Since we are using GROUP level policy, dependent LOBs are compressed with LOW compression after 60 days of no modification.

ALTER TABLE SALES MODIFY PARTITION P1 ILM ADD POLICY
COLUMN STORE COMPRESS FOR ARCHIVE HIGH
GROUP AFTER 3 MONTHS OF CREATION

In this example, we are using a GROUP level compression policy on a partition of a table. This policy will automatically compress partition P1 three months after creation, and the corresponding LOBs are compressed with HIGH compression. This is the highest level of compression, using HCC. Global indexes are maintained.

Segment level compression policy

ALTER TABLE SALES ILM ADD POLICY
COLUMN STORE COMPRESS FOR QUERY HIGH
SEGMENT AFTER 90 DAYS OF NO MODIFICATION

In this example, we are creating segment level compression policy on SALES table. This policy is going to automatically compress SALES table with COLUMN STORE COMPRESS FOR QUERY HIGH if it’s not modified for 90 days.

ALTER TABLE SALES ILM ADD POLICY
ROW STORE COMPRESS
SEGMENT AFTER 1 YEAR OF CREATION

This example implements basic level of compression (ROW STORE COMPRESS [BASIC]) on SALES table after 1 year of its creation. SALES table will be automatically compressed after 1 year of its creation

Row level compression policy

ALTER TABLE SALES ILM ADD POLICY
ROW STORE COMPRESS ADVANCED
ROW AFTER 30 DAYS OF NO MODIFICATION

The above policy will compress any block of the SALES table in which no rows have been modified in the last 30 days. If even one row in a block has been modified, the block does not qualify for compression.

An important thing to note here is that row-level compression policies can be created based on modification time only; they cannot be created based on creation time or access time. Also, the only compression type available for a row-level compression policy is ROW STORE COMPRESS ADVANCED/BASIC; we cannot use columnar compression for a row-level policy.

ADO policies for Data movement action

This is another type of ADO policy, and it can be set at the segment level only; we cannot set it at the tablespace level, group level, or any other level. Also, data movement happens at the tablespace level, meaning we can move a table or partition to another tablespace backed by low-cost storage.

Data movement policy takes following format

ALTER TABLE <TABLE_NAME> ILM ADD POLICY
TIER TO <TABLESPACE_NAME>
[ CUSTOM ILM RULE | SEGMENT <CONDITION> <OPERATION> ]

TIER TO is a new clause in which we specify the tablespace the segment should move to once it satisfies the condition set. Instead of providing a condition (for example AFTER 30 DAYS, AFTER 1 YEAR) and an operation (for example CREATION, NO MODIFICATION, NO ACCESS) in the last section of the command, we can also provide a custom ILM rule.

Let’s take an example
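
ALTER TABLE SALES ILM ADD POLICY
TIER TO low_cost_storage
SEGMENT AFTER 60 DAYS OF CREATION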

In the above example, we replaced the compression clause with the TIER TO clause.

TIER TO – This clause provides the tablespace name where the segment should move once it satisfies the condition. In this example, the ILM policy will automatically move the SALES table to the low_cost_storage tablespace after 60 days of creation.

If we don't provide the last part (condition and operation), then the policy is created and the data movement action takes place when the tablespace is running out of space. The space thresholds for tablespaces are already set, and we can see those values in the DBA_ILMPARAMETERS view.

Following are the tiering fullness thresholds for tablespaces:

  • TBS PERCENT USED (default 85): Objects with a tiering policy are moved out of the tablespace once it is TBS PERCENT USED full (by default 85% full)
  • TBS PERCENT FREE (default 25): Objects with a tiering policy continue to move out until the source tablespace has TBS PERCENT FREE space (by default 25% free)

You can see current values set by querying DBA_ILMPARAMETERS view. You can customize following parameters using DBMS_ILM_ADMIN package.


SQL> select * from dba_ilmparameters;

NAME                      VALUE
-------------------- ----------
ENABLED                       1
RETENTION TIME               30
JOB LIMIT                     2
EXECUTION MODE                2
EXECUTION INTERVAL            1
TBS PERCENT USED             90
TBS PERCENT FREE             15
POLICY TIME                   0

8 rows selected.

For example, if we want to change TBS PERCENT USED from the existing 85% to 90%, we can use the following procedure:

SQL> exec dbms_ilm_admin.customize_ilm(dbms_ilm_admin.tbs_percent_used,90);

PL/SQL procedure successfully completed.

We will check out other procedures and functions in the DBMS_ILM_ADMIN package in a later section of this chapter.

So if we set the following policy, the SALES table will be moved out of its current tablespace to the low_cost_storage tablespace when its tablespace reaches 90% fullness.

ALTER TABLE SALES ILM ADD POLICY
TIER TO low_cost_storage

Also, if we set this policy on many tables in the same tablespace and the tablespace reaches the fullness threshold, tables are moved to the other tablespace starting with the least recently accessed tables first.

The last thing to know about data movement policies is that they are executed only once in their lifetime. Since they apply only at the segment level, once the policy has executed on a segment and the required movement has taken place, the policy is disabled for that segment. This rule applies only to data movement policies, not to compression policies.

CUSTOM_ILM_RULES

Instead of using the condition and operation in the simplified syntax provided by Oracle, you can implement a more complex algorithm for deciding when the desired action of an ILM policy should take place by specifying a custom ILM rule. The syntax for a custom ILM rule looks like this:

ALTER TABLE SALES ILM ADD POLICY
<Compression clause> | <Data movement clause>
ON <CUSTOM_ILM_RULE>

A custom ILM rule should be a function which returns a Boolean. You can implement complex logic inside the function and return the Boolean after evaluating that logic. If the return value is true, the ILM policy takes its action; if the return value is false, it does not. A custom ILM rule can be used for both compression and data movement policies.
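
The following is only a sketch of what such a rule might look like; the function name and logic are hypothetical, and the parameter list expected by the ON clause should be checked against the documentation for your release (here the function is assumed to receive the object number):

CREATE OR REPLACE FUNCTION end_of_quarter_rule (objn IN NUMBER)
RETURN BOOLEAN IS
BEGIN
  -- hypothetical business rule: allow the ILM action only in the
  -- first month of each quarter, regardless of heat map statistics
  RETURN EXTRACT(MONTH FROM SYSDATE) IN (1, 4, 7, 10);
END;
/

ALTER TABLE SALES ILM ADD POLICY
ROW STORE COMPRESS ADVANCED SEGMENT
ON end_of_quarter_rule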

Implementing multiple policies on segment

A segment can have multiple ADO policies implemented on it. This logically makes sense, as we want different actions taken on the data as it ages. For example, we might go for a basic level of compression one week after data insertion, more advanced compression after a month of no modification, and the highest level of compression plus a move to low-cost storage after a month of no access or a year after creation. To satisfy all these timelines and criteria, it's not unusual to have multiple ILM policies on a segment. But there are rules to follow.

Rule 1) Policies should be based on the same operation or statistic (creation, modification or access)

If you create a segment level policy on a table based on ACCESS, make sure that the other policies are also created based on the ACCESS operation only.

Example:

I create a policy on the SALES table based on ACCESS

SQL> alter table demo_user.sales ilm add policy row store compress advanced segment after 2 days of no access;

Table altered.

Then I create another policy on the same table based on ACCESS

SQL> alter table demo_user.sales ilm add policy column store compress for query high segment after 30 days of no access;

Table altered.

Now, if I try to create a third policy based on creation or modification, I get a conflict

SQL> alter table demo_user.sales ilm add policy column store compress for archive high segment after 60 days of creation;
alter table demo_user.sales ilm add policy column store compress for archive high segment after 60 days of creation
*
ERROR at line 1:
ORA-38323: policy conflicts with policy 21

Rule 2) Compression level should increase with time

You should not reduce the compression level as the timeline condition increases. In the following example, I tried to set column store compression after 7 days of no access and row store compression (which is a lower level of compression than column store) after 30 days of no access. This causes a conflict

SQL> alter table demo_user.sales ilm add policy column store compress for query high segment after 7 days of no access;

Table altered.

SQL> alter table demo_user.sales ilm add policy row store compress advanced segment after 30 days of no access;
alter table demo_user.sales ilm add policy row store compress advanced segment after 30 days of no access
*
ERROR at line 1:
ORA-38325: policy conflicts with policy 23

Rule 3) Only 1 row level policy allowed per segment

You can have only 1 row level policy on a table. In the following example, I tried to create 2 row level policies on the same table and it was not allowed

SQL> alter table demo_user.sales ilm add policy row store compress advanced row after 7 days of no modification;

Table altered.

SQL> alter table demo_user.sales ilm add policy row store compress advanced row after 30 days of no modification;
alter table demo_user.sales ilm add policy row store compress advanced row after 30 days of no modification
*
ERROR at line 1:
ORA-38325: policy conflicts with policy 24

Also remember that at row level you can create only a "no modification" policy.

Rule 4) A segment level policy does not override a row level policy

Continuing with the above example, after creating the row level policy, we can still create a segment level policy

SQL> alter table demo_user.sales ilm add policy column store compress for query high segment after 30 days of no modification;

Table altered.

Policy priority and policy inheritance

As we discussed before, we can create policies at the tablespace level (which serve as the default policy for every segment created in that tablespace with no policy of its own), at the segment level, and even at the partition level. The question remains as to which policy will actually take effect.

Again, this follows rules. Following are the simple rules to understand this

  • A child level policy always overrides a parent level policy for the same action. By this logic, policies set at the table partition level override those set at the table level if both have the same actions, and policies at the table level override the policy at the tablespace level if both have the same actions
  • If no policies are defined at the child level, inheritance takes place. For example, if no ILM policies are defined at the partition level, table level policies are applied at the partition level. Similarly, if no ILM policies are defined at the table level, tablespace level policies are inherited at the table level
  • Inheritance is additive if the policy actions are different. For example, if you have a compression policy at the tablespace level after a specified time condition and a data movement policy at the segment level after some other condition, then the total effect on the segment is the sum of both policies – i.e. compression + data movement

SQL> select policy_name, object_name, subobject_name, object_type, inherited_from from dba_ilmobjects;

POLICY_NAM OBJECT_NAME          SUBOBJECT_ OBJECT_TYPE                    INHERITED_FROM
---------- -------------------- ---------- ------------------------------ --------------------
P25        SALES                P1         TABLE PARTITION                TABLE
P25        SALES                P10        TABLE PARTITION                TABLE
P25        SALES                P11        TABLE PARTITION                TABLE
P25        SALES                P12        TABLE PARTITION                TABLE
P25        SALES                P2         TABLE PARTITION                TABLE
P25        SALES                P3         TABLE PARTITION                TABLE
P25        SALES                P4         TABLE PARTITION                TABLE
P25        SALES                P5         TABLE PARTITION                TABLE
P25        SALES                P6         TABLE PARTITION                TABLE
P25        SALES                P7         TABLE PARTITION                TABLE
P25        SALES                P8         TABLE PARTITION                TABLE
P25        SALES                P9         TABLE PARTITION                TABLE
P25        SALES                PM         TABLE PARTITION                TABLE
P25        SALES                           TABLE                          POLICY NOT INHERITED
P41        SALES_ITEM                      TABLE                          TABLESPACE

15 rows selected.

We can check the policies implemented on different objects using the DBA_ILMOBJECTS view, as shown above.

In the above example, policy P41 is applied to the SALES_ITEM table as it resides in the USERS tablespace and has no ILM policy defined on it. Since we have a default ILM policy at the tablespace level on USERS, the SALES_ITEM table has inherited this policy from the tablespace, and the INHERITED_FROM column shows the value TABLESPACE.

Similarly, policy P25 is applied to the SALES table and all its partitions. We created the policy at the table level only, not at the partition level, so all partitions of the table inherit the table level policy.

For object type TABLE and policy P25, we see "POLICY NOT INHERITED" in the INHERITED_FROM column. This is because we created the policy at the table level itself; it is not inherited from any other level.

Checking ILM policy details

We have a couple of views that provide details about the ILM policies that exist at various levels in the database.

DBA_ILMPOLICIES

You can view all the policies configured in the database in this view


SQL> select * from dba_ilmpolicies;

POLICY_NAM POLICY_TYPE   TABLESPACE ENABLE DELETED
---------- ------------- ---------- ------ -------
P24        DATA MOVEMENT            YES    NO
P25        DATA MOVEMENT            YES    NO
P41        DATA MOVEMENT USERS      YES    NO

Even compression policies are represented as data movement policies in this view. This is because data movement happens during compression.

DBA_ILMDATAMOVEMENTPOLICIES


SQL> select policy_name, action_type, compression_level, condition_type, condition_days from DBA_ILMDATAMOVEMENTPOLICIES;

POLICY_NAM ACTION_TYPE COMPRESSION_LEV CONDITION_TYPE         CONDITION_DAYS
---------- ----------- --------------- ---------------------- --------------
P41        COMPRESSION ADVANCED        LAST MODIFICATION TIME             60
P24        COMPRESSION ADVANCED        LAST MODIFICATION TIME              7
P25        COMPRESSION QUERY HIGH      LAST MODIFICATION TIME             30

Step 3) Evaluating and Executing ADO policies

We have 2 ways of evaluating and executing ADO policies – letting them run in the scheduled maintenance window, or evaluating and executing them manually. Let’s look at both approaches

Scheduled in maintenance window

Row level policies are evaluated and executed every 15 minutes by the MMON background process. Segment level policies are evaluated and executed once daily during the maintenance window using a scheduler job. This scheduler job uses the DBMS_ILM.EXECUTE_ILM_TASK procedure to evaluate and execute ADO policies.

** Note that the name of this package is DBMS_ILM, as against DBMS_ILM_ADMIN, which is used for customizing ILM parameters and other admin related tasks. We will look into the DBMS_ILM_ADMIN package in a short while.

In the DBMS_ILM.EXECUTE_ILM_TASK procedure, TASK_ID is an input parameter; the task ID is generated automatically during the evaluation process, which also happens during the scheduled maintenance window.

You can change the interval of evaluation and execution using the DBMS_ILM_ADMIN.CUSTOMIZE_ILM procedure. We have seen this procedure previously when changing the TBS PERCENT FREE and TBS PERCENT USED parameters. We can use the same procedure to customize the execution interval of ILM tasks.

SQL> exec dbms_ilm_admin.customize_ilm(dbms_ilm_admin.execution_interval, 1);

PL/SQL procedure successfully completed.

Manually executing ADO policies

For manually executing ADO policies, we have 2 options

  1. Directly execute ADO policies without doing any customization
  2. Preview the evaluation of ADO policies, make any desired changes to the task and then execute the task

You can directly execute ADO policies without doing any customization using the DBMS_ILM.EXECUTE_ILM procedure. Note that this procedure is different from the one that runs in the scheduled maintenance window (DBMS_ILM.EXECUTE_ILM_TASK). The procedure is overloaded, and in its simplest form you can just pass the owner and table name, and all policies on that table will be executed.
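
For example, a minimal sketch (assuming the overload that takes the owner and table name and returns the generated task ID through an OUT parameter; check the DBMS_ILM reference for the exact signature in your release) would be:

var v_task_id number;
exec DBMS_ILM.EXECUTE_ILM(OWNER=>'DEMO_USER', OBJECT_NAME=>'SALES', TASK_ID=>:v_task_id);

This evaluates and executes all ADO policies defined on DEMO_USER.SALES in one call.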

You can also preview the evaluation result before you execute the task. You can follow these steps to customize ILM tasks

  • Evaluate the set of ADO policies and preview the result to check which tables/policies will be executed
  • Add/remove objects/subobjects from the ILM task evaluation result.
  • Execute the final customized ILM task

You can use the following procedures, in the same order, to perform the above steps of customizing ILM tasks

  • Execute DBMS_ILM.PREVIEW_ILM procedure to preview ILM tasks for database or specific schema
  • Execute DBMS_ILM.ADD_TO_ILM or DBMS_ILM.REMOVE_FROM_ILM to add or remove objects/subobjects from evaluated task
  • Execute DBMS_ILM.EXECUTE_ILM_TASK to execute the above customized ILM task

Example:

var v_task_id number;
exec DBMS_ILM.PREVIEW_ILM(TASK_ID=> :v_task_id, ILM_SCOPE=>DBMS_ILM.SCOPE_SCHEMA);

Let’s say the above command generated task_id 1234.

exec DBMS_ILM.ADD_TO_ILM(TASK_ID=>1234, OWN=>'DEMO_USER', OBJ_NAME=>'NEW_TABLE');

exec DBMS_ILM.EXECUTE_ILM_TASK(TASK_ID=>1234);

You can also pass following 2 parameters to DBMS_ILM.EXECUTE_ILM_TASK

EXECUTION_MODE: ILM_EXECUTION_ONLINE value executes the task online and ILM_EXECUTION_OFFLINE executes the task offline.

EXECUTION_SCHEDULE: The only possible value for this parameter is DBMS_ILM.SCHEDULE_IMMEDIATE
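
Putting these together, a sketch of running the customized task offline and immediately (parameter names as per the DBMS_ILM package reference; verify them in your release) could look like this:

exec DBMS_ILM.EXECUTE_ILM_TASK(TASK_ID=>1234, EXECUTION_MODE=>DBMS_ILM.ILM_EXECUTION_OFFLINE, EXECUTION_SCHEDULE=>DBMS_ILM.SCHEDULE_IMMEDIATE);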

Step 4) Checking execution details of ADO policies

You can check the details of ADO policy evaluation and execution using the following 3 views

  • DBA_ILMEVALUATIONDETAILS
  • DBA_ILMRESULTS
  • DBA_ILMTASKS

DBA_ILMTASKS

Every task that is executed manually or automatically as scheduled is recorded in the DBA_ILMTASKS view. It stores the creation time, start time and completion time of the task, and whether the task is completed, failed or active


SQL> select TASK_ID, STATE, START_TIME, COMPLETION_TIME from DBA_ILMTASKS order by task_id;

   TASK_ID STATE     START_TIME                     COMPLETION_TIME
---------- --------- ------------------------------ ------------------------------
         1 COMPLETED 15-JUN-16 10.19.24.976731 PM   15-JUN-16 10.19.24.976731 PM
         2 COMPLETED 15-JUN-16 10.34.27.587548 PM   15-JUN-16 10.34.27.587548 PM
         3 COMPLETED 15-JUN-16 10.49.30.158279 PM   15-JUN-16 10.49.30.158279 PM
         4 COMPLETED 15-JUN-16 11.04.32.861197 PM   15-JUN-16 11.04.32.861197 PM
         5 COMPLETED 15-JUN-16 11.19.35.192715 PM   15-JUN-16 11.19.35.192715 PM

DBA_ILMEVALUATIONDETAILS

This view can be used to see the details of task execution after the evaluation of the ADO policies is completed. If the SELECTED_FOR_EXECUTION column has the value “selected for execution,” the policy has indeed been selected for execution and an ADO job will be run to satisfy the ILM policy. If the column shows any other value, a job may not be executed.

The following values all mean that the ADO policy hasn’t successfully passed the evaluation:

  • Precondition not satisfied
  • Policy disabled
  • Policy overruled
  • Inherited policy overruled
  • Job already exists
  • No operation since last ILM action
  • Target compression not higher than current
  • Statistics not available
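
A quick way to check the evaluation outcome for a given task (the column names used here are assumed from the 12.1 data dictionary, so verify them against your release) is something like:

SQL> select policy_name, object_name, selected_for_execution, job_name from dba_ilmevaluationdetails where task_id = 1234;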

DBA_ILMRESULTS

This view provides details about completed ILM jobs. It has a STATISTICS column, which is a CLOB that provides job specific statistics, such as the space saved via compression, etc.
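
For example, you could pull the job status and statistics for a task along these lines (again, column names assumed and worth verifying in your release):

SQL> select task_id, job_name, job_state, statistics from dba_ilmresults where task_id = 1234;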

Enabling and disabling ADO policies

You can enable or disable ADO policies at the individual table level, or disable ILM at the database level completely.

Enabling/disabling ADO policies

You can enable/disable an individual ADO policy on a table

SQL> alter table demo_user.sales ilm disable policy P25;

Table altered.

SQL> alter table demo_user.sales ilm enable policy P25;

Table altered.

You can also enable/disable all policies on a table

SQL> alter table demo_user.sales ilm disable_all;

Table altered.

SQL> alter table demo_user.sales ilm enable_all;

Table altered.

Deleting ADO policies

You can delete an individual policy on a table using the policy name

SQL> alter table demo_user.sales ilm delete policy P24;

Table altered.

If you want to delete all policies on a table, you can use delete_all

SQL> alter table demo_user.sales ilm delete_all;

Table altered.

Enable/disable ILM

You can disable ILM completely in the database using the DBMS_ILM_ADMIN.DISABLE_ILM procedure. This disables ILM but keeps all policies intact. Those policies will not be executed until we enable ILM again.

SQL> exec DBMS_ILM_ADMIN.DISABLE_ILM;

PL/SQL procedure successfully completed.

SQL> exec DBMS_ILM_ADMIN.ENABLE_ILM;

PL/SQL procedure successfully completed.

Hope this helps !!

Converting a non-CDB database to a PDB in an existing CDB

This is a short article on converting an existing non-container (non-CDB) 12c database into a pluggable database (PDB) in an existing container database (CDB).

Version of Non-CDB database = 12.1.0.1

Version of CDB database = 12.1.0.2

The following steps will plug the non-CDB database into the CDB as a PDB and also upgrade its version to 12.1.0.2

My non-cdb database name is deo12c

My cdb database name is deocdb

Step 1) Make sure that the non-CDB database is at least version 12.1.0.1. This is the first 12c version available.

Step 2) Make sure COMPATIBLE is set to at least 12.0.0 in the non-CDB database
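
You can quickly verify both prerequisites from the non-CDB, for example:

SYS.DEO12C.PRIMARY>select version from v$instance;

SYS.DEO12C.PRIMARY>show parameter compatible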

Step 3) Open the non-CDB database in read-only mode to create the XML file for the PDB

Shutting down the non-CDB database (deo12c)

SYS.DEO12C.PRIMARY>shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

Mounting the database and opening in read-only

SYS.DEO12C.PRIMARY>startup mount
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.

Total System Global Area 4294967296 bytes
Fixed Size            3719064 bytes
Variable Size         4177526888 bytes
Database Buffers      100663296 bytes
Redo Buffers           13058048 bytes
Database mounted.
SYS.DEO12C.PRIMARY>alter database open read only;

Database altered.

SYS.DEO12C.PRIMARY>

Step 4) Create XML file for PDB

SYS.DEO12C.PRIMARY-RO>exec DBMS_PDB.DESCRIBE('/fs-a01-a/backups/deo12c/deopdb2.xml');

PL/SQL procedure successfully completed.

Step 5) Shut down the non-CDB database

SYS.DEO12C.PRIMARY-RO>SHUTDOWN IMMEDIATE
Database closed.
Database dismounted.
ORACLE instance shut down.
SYS.DEO12C.PRIMARY-RO>

Step 6) Plug the non-CDB database into the CDB as a PDB

Run the following command on the CDB database – deocdb

SYS.DEOCDB.PRIMARY>CREATE PLUGGABLE DATABASE deopdb2 USING '/fs-a01-a/backups/deo12c/deopdb2.xml' NOCOPY TEMPFILE REUSE;

Pluggable database created.

SYS.DEOCDB.PRIMARY>

Step 7) Convert the dictionary of the newly plugged-in database to the PDB type

Change the container to the new pluggable database

SYS.DEOCDB.PRIMARY>ALTER SESSION set container=deopdb2;

Session altered.

SYS.DEOCDB.PRIMARY>@$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql

Step 8) Start new pluggable database

SYS.DEOCDB.PRIMARY>alter pluggable database deopdb2 open;

Warning: PDB altered with errors.

When we opened the pluggable database, we got a warning. You can check the details of the warning by querying the PDB_PLUG_IN_VIOLATIONS view

SYS.DEOCDB.PRIMARY>select name,cause,type,message,status from PDB_PLUG_IN_VIOLATIONS where name = 'DEOPDB2' and status != 'RESOLVED';

NAME                   CAUSE                  TYPE    MESSAGE                        STATUS
------------------------------ ------------------------------ --------- -------------------------------------------------- ---------
DEOPDB2                NATIONAL CHARACTER SET          ERROR    National character set mismatch: PDB national char PENDING
                                    acter set AL16UTF16. CDB national character set UT
                                    F8.

DEOPDB2                OPTION                  WARNING    Database option APS mismatch: PDB installed versio PENDING
                                    n NULL. CDB installed version 12.1.0.2.0.

DEOPDB2                OPTION                  WARNING    Database option CATJAVA mismatch: PDB installed ve PENDING
                                    rsion NULL. CDB installed version 12.1.0.2.0.

DEOPDB2                OPTION                  WARNING    Database option CONTEXT mismatch: PDB installed ve PENDING
                                    rsion NULL. CDB installed version 12.1.0.2.0.

DEOPDB2                OPTION                  WARNING    Database option DV mismatch: PDB installed version PENDING
                                     NULL. CDB installed version 12.1.0.2.0.

DEOPDB2                OPTION                  WARNING    Database option JAVAVM mismatch: PDB installed ver PENDING
                                    sion NULL. CDB installed version 12.1.0.2.0.

DEOPDB2                OPTION                  WARNING    Database option OLS mismatch: PDB installed versio PENDING
                                    n NULL. CDB installed version 12.1.0.2.0.

DEOPDB2                OPTION                  WARNING    Database option ORDIM mismatch: PDB installed vers PENDING
                                    ion NULL. CDB installed version 12.1.0.2.0.

DEOPDB2                OPTION                  WARNING    Database option OWM mismatch: PDB installed versio PENDING
                                    n NULL. CDB installed version 12.1.0.2.0.

DEOPDB2                OPTION                  WARNING    Database option SDO mismatch: PDB installed versio PENDING
                                    n NULL. CDB installed version 12.1.0.2.0.

DEOPDB2                OPTION                  WARNING    Database option XML mismatch: PDB installed versio PENDING
                                    n NULL. CDB installed version 12.1.0.2.0.

DEOPDB2                OPTION                  WARNING    Database option XOQ mismatch: PDB installed versio PENDING
                                    n NULL. CDB installed version 12.1.0.2.0.

DEOPDB2                APEX                  WARNING    APEX mismatch: PDB installed version NULL CDB inst PENDING
                                    alled version 4.2.5.00.08


13 rows selected.

These warnings can be ignored. They basically appear because the DBA_REGISTRY of the container database doesn’t match that of the new pluggable database.

In my case, the container database has the following components in its registry

SYS.DEOCDB.PRIMARY>alter session set container=cdb$root;

Session altered.

SYS.DEOCDB.PRIMARY>col comp_name format a40;
SYS.DEOCDB.PRIMARY>SELECT comp_id,comp_name,version FROM dba_registry;

COMP_ID                COMP_NAME                VERSION
------------------------------ ---------------------------------------- ------------------------------
DV                   Oracle Database Vault            12.1.0.2.0
APEX                   Oracle Application Express        4.2.5.00.08
OLS                   Oracle Label Security            12.1.0.2.0
SDO                   Spatial                    12.1.0.2.0
ORDIM                   Oracle Multimedia            12.1.0.2.0
CONTEXT                Oracle Text                12.1.0.2.0
OWM                   Oracle Workspace Manager         12.1.0.2.0
XDB                   Oracle XML Database            12.1.0.2.0
CATALOG                Oracle Database Catalog Views        12.1.0.2.0
CATPROC                Oracle Database Packages and Types    12.1.0.2.0
JAVAVM                   JServer JAVA Virtual Machine        12.1.0.2.0
XML                   Oracle XDK                12.1.0.2.0
CATJAVA                Oracle Database Java Packages        12.1.0.2.0
APS                   OLAP Analytic Workspace            12.1.0.2.0
XOQ                   Oracle OLAP API                12.1.0.2.0
RAC                   Oracle Real Application Clusters     12.1.0.2.0

16 rows selected.

In my pluggable database, the registry has only the following components

SYS.DEOCDB.PRIMARY>alter session set container=deopdb2;

Session altered.

SYS.DEOCDB.PRIMARY>SELECT comp_id,comp_name,version FROM dba_registry;

COMP_ID                COMP_NAME                VERSION
------------------------------ ---------------------------------------- ------------------------------
XDB                   Oracle XML Database            12.1.0.2.0
CATALOG                Oracle Database Catalog Views        12.1.0.2.0
CATPROC                Oracle Database Packages and Types    12.1.0.2.0
RAC                   Oracle Real Application Clusters     12.1.0.2.0

4 rows selected.

You can get rid of the above warnings by installing the required components in your pluggable database so that it matches the container database.

References:

https://docs.oracle.com/database/121/UPGRD/upgrade.htm#CHDBEDDA

Hope this helps !!

Creating Oracle 12c Multitenant Container Database

Oracle has come up with a new feature called multitenant database. Using this feature, we can have many pluggable databases plugged into a single container database.
This article explains the pluggable database architecture and how to create these databases.
I am not covering the administration part; readers can check the documentation mentioned in the reference section for details on managing pluggable databases.

Multitenant Architecture

One of the high-profile new features of Oracle 12c Enterprise Edition(EE) is the multitenant option – multitenant container database (CDB). This option allows the creation of many pluggable databases (PDBs) inside a CDB. The PDBs share resources provided by the CDB, such as memory, background processes, UNDO, REDO, and control files. This enables more databases to run on a single platform than was possible with the former Oracle 11gR2 architecture.
You can consider all databases up to 11g as non-container databases, or non-CDBs. In Oracle 12c we can still create non-CDB databases using the normal “create database” command, and administration-wise there is no difference in managing those 12c databases compared to an 11g database.

The Oracle Multitenant architecture contains the following:

One root container database
This looks like a normal database, but its purpose is different. It stores Oracle-supplied metadata and common users. It has all the components that a normal database has – controlfile, online redo logs, SYSTEM and SYSAUX tablespaces, pfile and spfile, archive logs, flashback logs, etc. You can also create additional users and tablespaces in this database.

One seed pluggable database
This is a seed database which gets created when we create the container database. It is essentially a set of datafiles in a different location. This seed database can be used to create new pluggable databases.

Zero or more user-created PDBs
We can create multiple pluggable databases in a single container database. There are different methods of creating pluggable databases; using the seed database above is just one of them. We can also clone an existing pluggable database from the same container database or from a different container database, and we can plug other non-CDB Oracle 12c databases into our container database as pluggable databases. You can refer to the documentation mentioned in the reference section for more details on this.

The following figure shows the multitenant architecture:

The container database is the same as a normal database, having all the required components. A pluggable database has its own SYSTEM, SYSAUX and data tablespaces. All other components like the UNDO tablespace, online redo logs, controlfile, archive logs, etc. are common to the container database and its pluggable databases.

When compared to RAC, I feel this architecture is exactly the opposite. In RAC we have multiple instances and one database; in the multitenant architecture we have a single instance and multiple databases.

Creating Multitenant Container database

Creating a container database is similar to creating a non-container database, but with a few changes. The following changes should be made if we want to create a container database

1) Set the enable_pluggable_database parameter to true.
By default this parameter is false, meaning the database will be created as a non-multitenant database. If you want to create a multitenant database, you have to set this parameter to true.
Failing to do so will give you the following error when you try to create a multitenant database

ORA-65093: multitenant container database not set up properly

2) Change the "Create database" command
If you are going to create a multitenant database using the "create database" command, you need to add a couple of clauses to it.
The following clauses need to be added

  • ENABLE PLUGGABLE DATABASE
  • SEED

The following steps can be used to create a multitenant database

Step 1) Create pfile and spfile

You can use the following parameters as a generic template to create a test database. You can alter the memory parameters and include or remove parameters as required, but these are good enough to create a test instance.

cat initdeocdb.ora

db_name='deocdb'
sga_max_size=4G
shared_pool_size=1200M
shared_pool_reserved_size=300M
java_pool_size = 200M
pga_aggregate_target=1G
processes = 300
audit_trail ='db'
db_block_size=8192
db_domain=''
db_recovery_file_dest='/u01/oradata/databases/flashback'
db_recovery_file_dest_size=50G
open_cursors=300
remote_login_passwordfile='EXCLUSIVE'
undo_tablespace='UNDO_T1'
control_files = (/ctl-01/databases/deocdb/control.ctl, /ctl-02/databases/deocdb/control.ctl)
compatible ='12.0.0'
audit_file_dest = /dumps-01/databases/deocdb/adump
core_dump_dest = /dumps-02/databases/deocdb/cdump
diagnostic_dest = /dumps-01
sec_case_sensitive_logon = FALSE
utl_file_dir = /dumps-01/databases/deocdb/output
enable_pluggable_database=true

create spfile from pfile;


*** It’s important to set java_pool_size in the pfile/spfile. If you do not set java_pool_size, you will see the following error in the alert log while creating the container database

ORA-04031: unable to allocate 4096 bytes of shared memory ("java pool","/51835a0f_ReflectionFactoryGet","JOXLE^5cb1fb41",":SGAClass")

Step 2) Create database command

I used the following create database command to create the container database. The ENABLE PLUGGABLE DATABASE and SEED clauses towards the end are the additional clauses needed for a container database.


startup nomount;

CREATE DATABASE deocdb
USER SYS IDENTIFIED BY welcome
USER SYSTEM IDENTIFIED BY welcome
LOGFILE GROUP 1 ('/redo-01-a/databases/deocdb/redo-t01-g01-m1.log',
                 '/redo-03-a/databases/deocdb/redo-t01-g01-m2.log') SIZE 100M BLOCKSIZE 512,
        GROUP 2 ('/redo-02-a/databases/deocdb/redo-t01-g02-m1.log',
                 '/redo-04-a/databases/deocdb/redo-t01-g02-m2.log') SIZE 100M BLOCKSIZE 512,
        GROUP 3 ('/redo-01-a/databases/deocdb/redo-t01-g03-m1.log',
                 '/redo-03-a/databases/deocdb/redo-t01-g03-m2.log') SIZE 100M BLOCKSIZE 512,
        GROUP 4 ('/redo-02-a/databases/deocdb/redo-t01-g04-m1.log',
                 '/redo-04-a/databases/deocdb/redo-t01-g04-m2.log') SIZE 100M BLOCKSIZE 512
MAXLOGHISTORY 1
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 1024
CHARACTER SET UTF8
NATIONAL CHARACTER SET UTF8
EXTENT MANAGEMENT LOCAL
DATAFILE '/u01/oradata/databases/deocdb/system01.dbf'   
    SIZE 700M REUSE AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
SYSAUX DATAFILE '/u01/oradata/databases/deocdb/sysaux01.dbf'   
    SIZE 550M REUSE AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
DEFAULT TABLESPACE admin
DATAFILE '/u01/oradata/databases/deocdb/admin-01.dbf' 
    SIZE 500M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED  
DEFAULT TEMPORARY TABLESPACE TEMP 
TEMPFILE '/u01/oradata/databases/deocdb/temp01.dbf' 
    SIZE 20M REUSE AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED
UNDO TABLESPACE undo_t1
DATAFILE '/u01/oradata/databases/deocdb/undo_t1-01.dbf'
    SIZE 200M REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
ENABLE PLUGGABLE DATABASE
SEED
    FILE_NAME_CONVERT = ('/u01/oradata/databases/deocdb/', 
                         '/u01/oradata/databases/pdbseed/')
    SYSTEM DATAFILES SIZE 125M AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED
    SYSAUX DATAFILES SIZE 100M
USER_DATA TABLESPACE users
DATAFILE '/u01/oradata/databases/pdbseed/users-01.dbf'
    SIZE 200M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;

Step 3) run catcdb.sql

DO NOT RUN catalog.sql and catproc.sql.
If you are creating a container database, you need to run only catcdb.sql.
This script takes care of running all the other dependent scripts internally.

If you run catalog.sql and catproc.sql, catcdb.sql will fail later at many stages with errors like the following

catcon: ALL catcon-related output will be written to dbmsxdbt_catcon_2527.lst
catcon: See dbmsxdbt*.log files for output generated by scripts
catcon: See dbmsxdbt_*.lst files for spool files, if any
catconInit: database is not open on the default instance
Unexpected error encountered in catconInit; exiting

In case of normal database creation, we usually run catalog.sql and catproc.sql as mentioned in http://docs.oracle.com/database/121/ADMIN/create.htm#ADMIN11082
That process remains the same in 12c for a normal (non-CDB) database, but for creating a container database we should be running only catcdb.sql

catcdb.sql will ask for 3 inputs – a new SYS password, a new SYSTEM password and the temporary tablespace name

SQL> @?/rdbms/admin/catcdb.sql

Session altered.

Enter new password for SYS:
Enter new password for SYSTEM:
Enter temporary tablespace name: TEMP

Once you provide these, it will create the root container database with whatever database name you gave. The container name for the root container DB will be CDB$ROOT.
It will also create the seed pluggable database; its container name will be PDB$SEED

Step 4) Run utlrp.sql to compile invalid objects.

There will be a few invalid objects in the database. You can run utlrp.sql to compile them
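
For example, after running the script you can confirm that nothing is left invalid:

SQL> @?/rdbms/admin/utlrp.sql

SQL> select count(*) from dba_objects where status = 'INVALID';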

Validation

You can run a few commands to validate whether your database is a container database and what containers it has.

We have a new column in v$database called CDB. If your database is a container database, it will show YES

SYS.DEOCDB.PRIMARY>select name, cdb from v$database;

NAME    CDB
--------- ---
DEOCDB    YES

1 row selected.

You can also check different containers in your database using following

SYS.DEOCDB.PRIMARY>select CON_ID, NAME, OPEN_MODE, TOTAL_SIZE from v$containers;

    CON_ID NAME         OPEN_MODE  TOTAL_SIZE
---------- ------------------------------ ---------- ----------
   1 CDB$ROOT       READ WRITE        0
   2 PDB$SEED       READ ONLY  1599078400

V$CONTAINERS is a new view available in 12c. When queried from the root container, it shows all containers, the CDB root as well as the PDBs. But if you run the same query from a PDB, it will show only that container.

SYS.DEOCDB.PRIMARY>alter session set container=PDB$SEED;

Session altered.

SYS.DEOCDB.PRIMARY>select CON_ID, NAME, OPEN_MODE, TOTAL_SIZE from v$containers;

    CON_ID NAME         OPEN_MODE  TOTAL_SIZE
---------- ------------------------------ ---------- ----------
   2 PDB$SEED       READ ONLY  1599078400

Creating Pluggable database

The following command can be used to create a pluggable database from the seed pluggable database PDB$SEED

SYS.DEOCDB.PRIMARY>create pluggable database deopdb1 admin user deodba identified by welcome FILE_NAME_CONVERT=('/u01/oradata/databases/pdbseed/','/u01/oradata/databases/deopdb1/');

Pluggable database created.

The above command is in its basic form, and all the parameters/clauses mentioned are mandatory. You need to provide FILE_NAME_CONVERT so that the files from the seed database are copied to the location you provide for your pluggable database. There are many more parameters to this command; you can explore them in the documentation link provided in the reference section.

Initially, when you create a pluggable database, it will be in MOUNTED state. This is because a pluggable database uses the same memory structures and controlfile as the root container database, so these are already allocated.

SYS.DEOCDB.PRIMARY>select CON_ID, NAME, OPEN_MODE, TOTAL_SIZE from v$containers;

    CON_ID NAME         OPEN_MODE  TOTAL_SIZE
---------- ------------------------------ ---------- ----------
   1 CDB$ROOT       READ WRITE        0
   2 PDB$SEED       READ ONLY  1599078400
   3 DEOPDB1        MOUNTED       0

You just need to open the database.
If you are logged into the root container, you can open the pluggable database using the following command

SYS.DEOCDB.PRIMARY>show con_name;

NAME_COL_PLUS_PDB_CONTAINER
-------------------------------------
CDB$ROOT

SYS.DEOCDB.PRIMARY>alter pluggable database deopdb1 open;

Pluggable Database opened.

Or you can change your session’s container and simply start it

SYS.DEOCDB.PRIMARY>alter session set container=DEOPDB1;

Session altered.

SYS.DEOCDB.PRIMARY>show con_name;

NAME_COL_PLUS_PDB_CONTAINER
-------------------------------------
DEOPDB1

SYS.DEOCDB.PRIMARY>startup
Pluggable Database opened.

You can shut down a pluggable database using similar commands

If you are in the root container, you can use the following command

SYS.DEOCDB.PRIMARY>show con_name;

NAME_COL_PLUS_PDB_CONTAINER
-------------------------------------
CDB$ROOT
SYS.DEOCDB.PRIMARY>alter pluggable database deopdb1 close;

Pluggable database altered.

Or you can change the container and shut down the database.

SYS.DEOCDB.PRIMARY>alter session set container=DEOPDB1;

Session altered.

SYS.DEOCDB.PRIMARY>show con_name;

NAME_COL_PLUS_PDB_CONTAINER
--------------------------------------
DEOPDB1
SYS.DEOCDB.PRIMARY>shut immediate;
Pluggable Database closed.

Reference

http://docs.oracle.com/database/121/ADMIN/cdb_intro.htm#ADMIN13508
http://docs.oracle.com/database/121/ADMIN/cdb_plug.htm#ADMIN13556
http://docs.oracle.com/database/121/ADMIN/cdb_create.htm#ADMIN13521
https://docs.oracle.com/database/121/CNCPT/cdbovrvw.htm#CNCPT89234

Hope this helps !!