Oracle Corporate timeline

I was going through some sites and found good information about the events that shaped Oracle into what it is today. It has been a long journey, with Oracle's business spreading from the database to applications and middle-tier architecture.

Here is a brief list of the events.

  • 1977: Larry Ellison and friends founded Software Development Laboratories (SDL).
  • 1979: SDL changed its company-name to “Relational Software, Inc.” (RSI) and introduced its product Oracle V2 as an early commercially-available relational database system. The version did not support transactions, but implemented the basic SQL functionality of queries and joins. (RSI never released a version 1 – instead calling the first version version 2 as a marketing gimmick.)
  • 1983: RSI in its turn changed its name, becoming known as “Oracle Corporation”, to align itself more closely with its flagship-product. The company released Oracle version 3, which it had re-written using the C programming language and which supported COMMIT and ROLLBACK functionality for transactions. Version 3 extended platform support from the existing Digital VAX/VMS systems to include Unix environments.
  • 1984: Oracle Corporation released Oracle version 4, which supported read-consistency.
  • 1985: the Oracle RDBMS began supporting the client-server model, with networks becoming more widely available in the mid-1980s. Oracle version 5.0 supported distributed queries.
  • 1989: Oracle Corporation entered the application products market and developed its ERP product, Oracle E-Business Suite, based on the Oracle relational database. Oracle RDBMS version 6 came out with support for PL/SQL embedded within Oracle Forms v3 (version 6 could not store PL/SQL in the database proper), row-level locking and hot backups.
  • 1992: Oracle version 7 appeared with support for referential integrity, stored procedures and triggers.
  • 1997: Oracle Corporation released version 8, which supported object-oriented development and multimedia applications.
  • 1999: The release of Oracle8i aimed to provide a database inter-operating better with the Internet (the i in the name stands for “Internet”). The Oracle 8i database incorporated a native Java virtual machine (Oracle JVM).
  • 2001: Oracle9i was released with 400 new features, including the ability to read and write XML documents. 9i also provided an option for Oracle RAC, or “Real Application Clusters”, a computer-cluster database, as a replacement for the Oracle Parallel Server (OPS) option.
  • 2003: Oracle Corporation released Oracle Database 10g. (The g stands for “grid”; emphasizing a marketing thrust of presenting 10g as “grid-computing ready”.)
  • 2005: Oracle Database 10.2.0.1 — also known as Oracle Database 10g Release 2 (10gR2) — appeared.
  • 2007: Oracle Corporation released Oracle Database 11g for Linux and Microsoft Windows.

 Oracle Database versions

Since version 7, Oracle’s RDBMS release numbering has used the following codes:

  • Oracle7: 7.0.16 — 7.3.4
  • Oracle8 Database: 8.0.3 — 8.0.6
  • Oracle8i Database Release 1: 8.1.5.0 — 8.1.5.1
  • Oracle8i Database Release 2: 8.1.6.0 — 8.1.6.3
  • Oracle8i Database Release 3: 8.1.7.0 — 8.1.7.4
  • Oracle9i Database Release 1: 9.0.1.0 — 9.0.1.5 (Latest current patchset as of December 2003)
  • Oracle9i Database Release 2: 9.2.0.1 — 9.2.0.8 (Latest current patchset as of April 2007)
  • Oracle Database 10g Release 1: 10.1.0.2 — 10.1.0.5 (Latest current patchset as of February 2006)
  • Oracle Database 10g Release 2: 10.2.0.1 — 10.2.0.3 (Latest current patchset as of November 2006)
  • Oracle Database 11g Release 1: 11.1.0.6 — no patchset available as of October 2007

The version numbering syntax within each release follows the pattern major.maintenance.application-server.component-specific.platform-specific. For example, “10.2.0.1 for 64-bit Solaris” means: 10th major version of Oracle, maintenance level 2, Oracle Application Server (OracleAS) 0, component level 1, for 64-bit Solaris.

Reference: Wikipedia


Upgrading Database to 10g for Oracle Applications 11i

An Oracle Applications 11i installation ships with database version 9.2.0.
This post gives brief, clear steps for upgrading that database to 10.2.0.

Instance Details

Here are the details of the Oracle Applications instance I am working on:

Oracle Application Version: 11.5.10 CU2
Oracle Database Version: 9.2.0

Since this is 11.5.10 CU2, it is AutoConfig-enabled, so we have to make sure the new Oracle Home is also AutoConfig-enabled. For more information on AutoConfig, please refer to Metalink note ID 165195.1.

The steps below are summarized from Metalink note ID 423056.1.

To begin with, I will briefly outline the procedure in 3 major phases:

1) Pre-Upgrade
2) Upgrade
3) Post-Upgrade

Pre-Upgrade Steps:

1) Check the current database version:

Before upgrading, check the exact version of the current database and, based on that, determine the upgrade path.

Below is a brief compatibility matrix (reference: Metalink note ID 316889.1).

COMPATIBILITY MATRIX
====================

+ Minimum Version of the database that can be directly upgraded to Oracle 10g Release 2
8.1.7.4 -> 10.2.X.X.X
9.0.1.4 or 9.0.1.5 -> 10.2.X.X.X
9.2.0.4 or higher -> 10.2.X.X.X
10.1.0.2 or higher -> 10.2.X.X.X

+ The following database versions require an indirect upgrade path.
7.3.3 (or lower) -> 7.3.4 -> 8.1.7 -> 8.1.7.4 -> 10.2.X.X.X
7.3.4 -> 8.1.7 -> 8.1.7.4 -> 10.2.X.X.X
8.0.n -> 8.1.7 -> 8.1.7.4 -> 10.2.X.X.X
8.1.n -> 8.1.7 -> 8.1.7.4 -> 10.2.X.X.X

SQL> select banner from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.5.0 - Production
PL/SQL Release 9.2.0.5.0 - Production
CORE    9.2.0.6.0       Production
TNS for Linux: Version 9.2.0.5.0 - Production
NLSRTL Version 9.2.0.5.0 - Production

Since in my case it is 9.2.0.5, I can upgrade directly to 10.2.0.

2) Check the application version:

The Oracle Applications version should be 11.5.9 or above (11.5.10 CU2 in my case).

SQL> SELECT product_group_id GroupID, product_group_name GroupName,
release_name Release, product_group_type GroupType, argument1
FROM fnd_product_groups;

   GROUPID GROUPNAME             RELEASE GROUPTYPE ARGUMENT1
---------- --------------------- ------- --------- ---------
         1 Default product group 11.5.10 Standard

Also check the Developer 6i patchset level.

3) Check the latest autoconfig patch:

Check whether the latest AutoConfig patch is applied in your environment (Metalink note ID 165195.1).
The latest patch mentioned in that note is 5985992, so if it is not applied, apply it on the application system.

SQL> select count(*) from ad_bugs
  2  where bug_number = '5985992';

  COUNT(*)
----------
         1

4) Apply either

or

Note: If you are using Oracle Demand Planning, your Applications version must be 11.5.10 in order to use Oracle Database 10g Release 2. The 11.5.9 version of Oracle Demand Planning is not certified against Oracle Database 10g Release 2.

Upgrade Steps:

1) Install Oracle Database 10g. (Software Only).

2) Since you are upgrading a 9i database to 10g, you must install the 10g software into a different Oracle Home. So create a new 10.2.0 Oracle Home and install the software in that location.

For example, if the current 9i Oracle Home location is:

9i Oracle Home: /u01/app/oracle/apps11idb/9.2.0
10g Oracle Home: /u01/app/oracle/apps11idb/10.2.0

3) Create nls/data/9idata directory:

This can be done using the $ORACLE_HOME/nls/data/old/cr9idata.pl script, where ORACLE_HOME refers to the 10g ORACLE_HOME.

[oracle@ocvmrh2064 old]$ export ORACLE_HOME=/u01/app/oracle/apps11idb/10.2.0
[oracle@ocvmrh2064 old]$ perl cr9idata.pl
Creating directory //u01/app/oracle/apps11idb/10.2.0/nls/data/9idata …
Copying files to //u01/app/oracle/apps11idb/10.2.0/nls/data/9idata…
Copy finished.
Please reset environment variable ORA_NLS10 to //u01/app/oracle/apps11idb/10.2.0/nls/data/9idata!
[oracle@ocvmrh2064 old]$ export ORA_NLS10=/u01/app/oracle/apps11idb/10.2.0/nls/data/9idata

4) The next step is to perform the database upgrade.

The reference used here is the Oracle Database 10g upgrade documentation.

a) Copy the utlu102i.sql script provided in the new 10g Oracle Home to a temporary location, and run it in your existing 9i database.

[oracle@ocvmrh2064 admin]$ pwd
/u01/app/oracle/apps11idb/10.2.0/rdbms/admin
[oracle@ocvmrh2064 admin]$ cp utlu102i.sql /tmp/
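The warnings below came from running the copied script against the existing 9i database from SQL*Plus. A minimal sketch of the invocation (the spool file name is just illustrative):

```sql
-- Run the pre-upgrade information tool in the existing 9i database as SYSDBA
SQL> spool /tmp/utlu102i.log
SQL> @/tmp/utlu102i.sql
SQL> spool off
```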

**********************************************************************
Update Parameters: [Update Oracle Database 10.2 init.ora or spfile]
**********************************************************************
WARNING: –> “streams_pool_size” is not currently defined and needs a value of
at least 50331648
WARNING: –> “large_pool_size” needs to be increased to at least 8388608
WARNING: –> “session_max_open_files” needs to be increased to at least 20

Copy the init.ora parameter file from the old ORACLE_HOME to the new ORACLE_HOME, and set the parameters flagged above in the copied file.
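For example, the warnings above translate into entries like the following in the copied init.ora (the values are just the minimums reported by the tool; size them higher for your workload):

```
# minimum values flagged by utlu102i.sql for this instance
streams_pool_size = 50331648
large_pool_size = 8388608
session_max_open_files = 20
```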

**********************************************************************
SYSAUX Tablespace:
[Create tablespace in the Oracle Database 10.2 environment]
**********************************************************************
–> New “SYSAUX” tablespace
…. minimum required size for database upgrade: 500 MB

SQL> create tablespace sysaux datafile '/u01/app/oracle/apps11idata/sysaux01.dbf' size 1000M
  2  extent management local segment space management auto;

Tablespace created.

TIMESTAMP WITH TIMEZONE Datatype:

The time zone files that are supplied with Oracle Database 10g have been updated from version 1 to version 2 to reflect changes in transition rules for some time zone regions. The changes may affect existing data of TIMESTAMP WITH TIME ZONE datatype. To preserve this TIMESTAMP data for updating according to the new time zone transition rules, you must run the utltzuv2.sql script on the database before upgrading. This script is located in the new 10.2 ORACLE_HOME/rdbms/admin directory. This script analyzes your database for TIMESTAMP WITH TIME ZONE columns that are affected by the updated time zone transition rules.

If the utltzuv2.sql script identifies columns with time zone data affected by a database upgrade, then back up the data in character format before you upgrade the database. After the upgrade, you must update the tables to ensure that the data is stored based on the new rules. If you export the tables before upgrading and import them after the upgrade, the conversion will happen automatically during the import.

SQL> @?/../10.2.0/rdbms/admin/utltzuv2.sql
DROP TABLE sys.sys_tzuv2_temptab
*
ERROR at line 1:
ORA-00942: table or view does not exist

(The ORA-00942 error on this initial DROP TABLE is harmless – the script simply tries to drop its temporary table, which does not exist on the first run.)

Table created.

Query sys.sys_tzuv2_temptab Table to see if any TIMEZONE data is affected by
version 2 transition rules

PL/SQL procedure successfully completed.

Commit complete.

SQL> select * from sys.sys_tzuv2_temptab;

no rows selected

b) Shut down the database and start it again. Run the pre-upgrade utility once more to check that everything is all right before starting the upgrade.

c) Shut down all the processes on the middle tier and the database tier.

d) Shut down the listener process for the 9i database.

e) Connect to the database using the 10g Oracle Home.

[oracle@ocvmrh2064 9.2.0]$ export ORACLE_HOME=/u01/app/oracle/apps11idb/10.2.0
[oracle@ocvmrh2064 9.2.0]$ sqlplus “/as sysdba”

Message 133 not found; product=SQLPlus; facility=SP2
: Release 9.2.0.5.0 – Production on Sun Oct 14 04:30:52 2007

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

SQL>

You might face the issue shown above. This is because the PATH variable does not yet include the new ORACLE_HOME/bin directory.

[oracle@ocvmrh2064 9.2.0]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@ocvmrh2064 9.2.0]$ sqlplus “/as sysdba”

SQL*Plus: Release 10.2.0.1.0 – Production on Sun Oct 14 04:33:09 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL>

f) Start the upgrade process

SQL> startup upgrade
ORA-32004: obsolete and/or deprecated parameter(s) specified
ORACLE instance started.

Total System Global Area 591396864 bytes
Fixed Size 1220700 bytes
Variable Size 415240100 bytes
Database Buffers 163577856 bytes
Redo Buffers 11358208 bytes
Database mounted.
Database opened.
SQL> SPOOL upgrade.log
SQL> @catupgrd.sql

Oracle Database 10.2 Upgrade Status Utility 10-14-2007 06:32:36
.
Component Status Version HH:MM:SS
Oracle Database Server VALID 10.2.0.1.0 00:55:04
JServer JAVA Virtual Machine VALID 10.2.0.1.0 00:06:53
Oracle XDK VALID 10.2.0.1.0 00:08:38
Oracle Database Java Packages VALID 10.2.0.1.0 00:00:44
Oracle Text VALID 10.2.0.1.0 00:01:20
Oracle XML Database VALID 10.2.0.1.0 00:02:50
Oracle Real Application Clusters INVALID 10.2.0.1.0 00:00:01
Oracle Data Mining VALID 10.2.0.1.0 00:00:30
OLAP Analytic Workspace VALID 10.2.0.1.0 00:00:25
OLAP Catalog VALID 10.2.0.1.0 00:01:32
Oracle OLAP API VALID 10.2.0.1.0 00:01:01
Oracle interMedia INVALID 10.2.0.1.0 00:09:37
Spatial INVALID 10.2.0.1.0 00:09:24
.
Total Upgrade Time: 01:52:15

g) After the upgrade is over, there will be many invalid objects, which we have to compile. Use utlrp.sql to compile the invalid objects.
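A sketch of the recompilation, run as SYSDBA from the 10g Oracle Home, followed by a check for remaining invalid objects:

```sql
SQL> @?/rdbms/admin/utlrp.sql
-- After utlrp.sql finishes, verify that no invalid objects remain
SQL> SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';
```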

Once the invalid objects are compiled, the next step is to make the database AutoConfig-enabled. This is briefly explained in the post-upgrade section.

Post Upgrade Steps:

1) Copy the .env file present in the 9i Oracle Home to the 10g Oracle Home.

2) Change all the 9.2.0 references in it to 10.2.0, then source the .env file.

3) Restart the database again.

4) On the application tier run the below command to generate appsutil.zip file.

perl <AD_TOP>/bin/admkappsutil.pl

5) Copy the appsutil.zip file generated on the application tier to the database tier, inside the 10g Oracle Home, and unzip it there.

6) The next step is to create a context file on the database tier. Run the commands below with appropriate values; this will generate the context file on the database tier.

cd $ORACLE_HOME/appsutil

perl adbldxml.pl tier=db appsuser=<APPSuser> appspasswd=<APPSpwd>

Please note that the Oracle Home we are using above is the 10g Oracle Home.

==========================================================

While running adbldxml.pl you might encounter the following issue.

[oracle@ocvmrh2081 bin]$ perl adbldxml.pl tier=db appsuser=apps appspasswd=apps

Starting context file generation for db tier..
Using JVM from /u01/app/oracle/proddb/10.2.0/jdk/bin/java to execute java programs..

The log file for this adbldxml session is located at:
/u01/app/oracle/proddb/10.2.0/appsutil/log/adbldxml_02172131.log
UnsatisfiedLinkError exception loading native library: njni10

Could not Connect to the Database with the above parameters, Please answer the Questions below

Enter Hostname of Database server: OCVMRH2081.us.oracle.com

Enter Port of Database server: 1521

Enter SID of Database server: PROD

Could not Connect to the Database : ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux Error: 2: No such file or directory

Could not Connect to the Database with the above parameters, Please answer the Questions below

Enter Hostname of Database server[OCVMRH2081.us.oracle.com]:

Enter Port of Database server[1521]:

and it will keep asking the same questions.

The reason for this error is that adbldxml.pl is not able to connect to your database, which is a listener issue.
– If you do “echo $TNS_ADMIN”, it will point to your 9i ORACLE_HOME.
– Copy the <SID_hostname> directory present under the 9i ORACLE_HOME/network/admin to the new 10g ORACLE_HOME/network/admin.
– Change all references to the 9i ORACLE_HOME in listener.ora to the 10g ORACLE_HOME.
– Make sure that the listener started from the 9i ORACLE_HOME is stopped from the 9i ORACLE_HOME only.
– Start the listener from the 10g ORACLE_HOME, then try running adbldxml.pl again. This should work.

==========================================================

7) Once the context file is created on db tier, run autoconfig.

cd $ORACLE_HOME/appsutil/bin

adconfig.sh contextfile=<CONTEXT> appspass=<APPSpwd>

This completes the migration to the 10g database.

Oracle Transaction Management

Transaction Management in Oracle Database

As you know, a basic transaction in an Oracle database is a set of one or more SQL statements that are executed as a single unit. A transaction starts whenever you issue a SQL statement, and it ends with any of the following:

  • Commit
  • Rollback
  • DDL statement (Create, Drop, Rename, Alter etc)
  • Normal program termination – if a program terminates normally with status zero, the whole transaction is committed.
  • Abnormal program termination – if a program terminates abnormally, the transaction is rolled back.
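As a small sketch (the table and column names are hypothetical), a transaction spans all the statements up to the terminating one:

```sql
SQL> UPDATE emp SET sal = sal * 1.10 WHERE deptno = 10;  -- transaction begins with the first statement
SQL> DELETE FROM bonus WHERE year = 2006;                -- still the same transaction
SQL> COMMIT;                                             -- transaction ends here
```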

When a user starts a transaction, the following things happen:

  1. An undo segment is allocated, and the old values of the records are stored in undo buffers in the SGA.
  2. As the transaction executes, Oracle generates redo entries in the redo log buffer.
  3. Oracle modifies the data in the buffer cache as updated by the transaction.

At the end of the transaction, when the user commits, the following things happen:

  1. A new SCN (system change number) is generated and assigned to each record belonging to that transaction in the redo buffer.
  2. The entries in the redo buffer are written to the redo log file.
  3. Any locks held by the transaction (on tables or other resources) are released.

When you commit a transaction, the records in the redo buffer can either be written immediately, or you can allow Oracle to write them at its own preferred time to improve performance. This is controlled by the IMMEDIATE or BATCH option.

You can also ask the Oracle engine to wait until the records are successfully written to the redo log files, or to return from the commit statement without waiting for the committed records to be written. This is controlled by the WAIT or NOWAIT option.

These two settings can be made using the init.ora parameter COMMIT_WRITE. You can set this parameter in init.ora as given below:

COMMIT_WRITE = BATCH, NOWAIT

You can also set particular commit options at the session level in the following way:

SQL> ALTER SESSION SET COMMIT_WRITE = BATCH, NOWAIT

You can directly specify alternate commit options in the COMMIT statement itself, in the following way, without using the COMMIT_WRITE initialization parameter:

SQL> COMMIT WRITE BATCH NOWAIT

With this basic knowledge in place, I presume you are also aware of the various concurrency problems. I have given a brief explanation of those problems below.

1) Dirty-read problem – This happens when user A has updated data in a table but has not yet committed, and user B reads the data from that table. User A then rolls back the change, but user B has already read the data. This means user B has read data that never really existed.

This problem is solved by allowing users to read only data that has been committed.
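A two-session sketch of how reading only committed data avoids the dirty read (table t and its values are hypothetical):

```sql
-- Session A
SQL> UPDATE t SET val = 200 WHERE id = 1;  -- not yet committed

-- Session B
SQL> SELECT val FROM t WHERE id = 1;       -- still sees the old, committed value

-- Session A
SQL> ROLLBACK;                             -- session B never saw data that "never existed"
```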

2) Phantom-read problem – This happens when, for example, user A has opened a session and is querying data from a table. Another user B inserts some rows into the table and commits. When user A re-executes the same query after a time lapse in the same session, he finds the extra rows. This is called the phantom-read problem: within the same session, a user should not see the extra data in subsequent runs.

3) Lost-update problem – In this case user A reads the data for updating. Then user B reads the same data, updates it and commits. User A then changes the previously read data and commits, overwriting B's change. This means the update made by user B is lost even though it was committed. This is called the lost-update problem.
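The classic remedy for lost updates is to lock the row when reading it with the intent to update. A sketch (table t and its columns are hypothetical):

```sql
-- Session A locks the row while reading it; session B's own
-- SELECT ... FOR UPDATE on the same row blocks until A ends its transaction
SQL> SELECT val FROM t WHERE id = 1 FOR UPDATE;
SQL> UPDATE t SET val = val + 10 WHERE id = 1;
SQL> COMMIT;
```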

4) Non-repeatable read – Suppose you access a table’s data at a certain point in time, and then you try to access the same data a little later, only to find that the data values are different the second time around. This inconsistent data during the same transaction causes a nonrepeatable-read problem.

The answer to the above problems is serializable scheduling. So what is serializable scheduling? Let's see.

Every user executing a transaction should be able to think that he is the only user of the database. This is the feeling that serializable scheduling gives the user. Even though many users will be accessing the database, the complexity of keeping individual transactions isolated is left to the database engine. As users, we don't have to worry about any other transaction accessing the same tables we are going to access. Below is a brief explanation of how Oracle internally manages this complexity and gives us the feeling of isolation.

Oracle Isolation Levels:

The simplest way to ensure transaction isolation is to make a resource accessible to only one user at a time: while one user is accessing a resource, the database makes sure no one else can touch it. This may be the simplest approach, but it is definitely not the best one. What is the problem with it? If your answer was something related to concurrency, you are right: it makes everything totally serial, with only one user able to access the database at a time. Definitely not a good option! Oracle handles these situations far more efficiently.

The Oracle engine gives us the flexibility to decide which isolation level to set. Here is brief information about the different types of isolation levels.

1) Serializable – Under this level, a transaction locks all the tables it requires. It won't allow any other user to update those tables until the transaction is committed or rolled back. Note that this lock is at the transaction level and not at the statement level. This avoids all of the above concurrency issues, but the disadvantage, as you know, is that there is no concurrency.

2) Repeatable read – Guarantees read consistency: a transaction that reads the data twice from a table at two different points in time will find the same values each time. You avoid both the dirty-read and the nonrepeatable-read problems.

3) Read uncommitted – The read-uncommitted level, which allows a transaction to read another transaction’s intermediate values before it commits, will result in the occurrence of all the problems of concurrent usage.

4) Read committed – Oracle’s default isolation level is the read-committed level of isolation at the statement level. Oracle queries see only the data that was committed at the beginning of the query. Because the isolation level is at the statement level, each statement is allowed to see only the data that was committed before the commencement of that statement. The read-committed level of isolation guarantees that the row data won’t change while you’re accessing a particular row in an Oracle table.

Note that the read-committed isolation level operates at the query level, not the transaction level. So queries within one transaction can see different data at different times, meaning the nonrepeatable-read problem persists.

The table below summarizes the problems and the protection provided by each isolation level.

Isolation Level     Dirty Read   Lost Update   Nonrepeatable Read   Phantom Read
Read uncommitted    Possible     Possible      Possible             Possible
Read committed      Prevented    Prevented     Possible             Possible
Repeatable read     Prevented    Prevented     Prevented            Possible
Serializable        Prevented    Prevented     Prevented            Prevented

As you can see, the last isolation level, serializable, avoids all concurrency problems, but unfortunately it's not a practical option because it doesn't allow any concurrent use of the database. Oracle's default read-committed isolation level gets rid of the dirty-read and lost-update problems.

You can change the isolation level using a statement such as the one below:

ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE;
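You can also set the isolation level for a single transaction instead of the whole session:

```sql
SQL> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- ...statements in this transaction run serializable...
SQL> COMMIT;  -- the next transaction reverts to the session default
```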

It’s safest to stick with Oracle’s default read-committed level of transaction isolation, although it isn’t perfect. Oracle recommends that you stick with the default read-committed level of isolation, which produces the maximum throughput with only a slight risk of running into the nonrepeatable-read and phantom-read anomalies.

The read-committed transaction level provides a good trade-off between data concurrency and data consistency. Also, the throughput is much higher with this mode of isolation than with the purer serialization mode. If getting a repeatable read is your objective in using a serializable isolation level, you can always use explicit locks in situations where that is necessary.

Implementing the Oracle Concurrency control:

to be continued…