Oracle Password Encryption and Hashing


Oracle stores passwords in the database using a hash algorithm. Encryption is another well-known technique for storing passwords, but it is not really a safe choice, because everything in encryption hinges on the "key" used to encrypt the password.
In this short post we will see the difference between encryption and hashing and how each can be used in an Oracle database.
Oracle supports encryption, but it doesn't use encryption for storing passwords. Instead it uses hashing.

Oracle Encryption:

In an Oracle database, we can encrypt any string using the DBMS_OBFUSCATION_TOOLKIT package. It has a procedure called DESEncrypt which can be used to encrypt a string.

The basic inputs this procedure asks for are the input string you want to encrypt and the encryption key to be used; it returns the encrypted string as output.
You can use the DESDecrypt procedure of the same package to decrypt the password.
You need to decrypt using the same key.

declare
 var1 varchar2(2000) := 'advait';
 l_temp varchar2(100);
 opvar varchar2(2000);
begin
 -- DESEncrypt expects the input length to be a multiple of 8 bytes,
 -- so pad the input with chr(0)
 l_temp := rpad (var1, (trunc(length(var1)/8)+1)*8, chr(0));
 dbms_obfuscation_toolkit.DESEncrypt(input_string => l_temp, key_string => 'oracleencryptionkey', encrypted_string => opvar);
 dbms_output.put_line('Enc Output :'||opvar);
 dbms_obfuscation_toolkit.DESDecrypt(input_string => opvar, key_string => 'oracleencryptionkey', decrypted_string => var1 );
 dbms_output.put_line('Dec Output :'||var1 );
end;
/
Enc Output :I?8??+)i
Dec Output :advait

PL/SQL procedure successfully completed.

The problem with this method is the encryption key. Every time we want to verify the password we need to know the encryption key. Without the encryption key we cannot encrypt or decrypt the password, and hence cannot verify it. And if someone comes to know the encryption key, he knows everything.

Oracle Hashing:

Next we will see how Oracle uses hashing for storing the password in the database. A hash is basically a key generated for a specific input string. For example, we can define a hash table with a finite number of entries. For any input string we apply some logic to that string and get a hash value for it. The logic could be as simple as taking the length of the string, multiplying it by some number, and looking up a hash value in the hash table, or it could be a very complex algorithm. Also, we can hash a string to a hash value, but we cannot get the original string back from the hash value. So how does password verification happen?
When the Oracle database receives a password, it computes a hash value of that password and stores it in the USER$ table. When a user tries to log in with a password, Oracle again computes the hash value of the supplied password and compares it with the hash value stored in USER$. If they match, the user is let in; if not, he is denied access.
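The store-and-compare flow can be sketched in a few lines of Python. This is only an illustration of the idea: the `user_table` dict stands in for the USER$ table, and SHA-256 stands in for Oracle's internal (undocumented) password hash, so none of these names or values match Oracle's actual implementation.

```python
import hashlib

# Toy stand-in for the USER$ table: username -> password hash
user_table = {}

def hash_password(password):
    # SHA-256 is only an illustrative stand-in for Oracle's
    # internal password hashing algorithm
    return hashlib.sha256(password.encode()).hexdigest()

def create_user(username, password):
    # Only the hash is stored; the cleartext password is discarded
    user_table[username] = hash_password(password)

def login(username, password):
    # Hash the supplied password and compare with the stored hash
    return user_table.get(username) == hash_password(password)

create_user('scott', 'tiger')
print(login('scott', 'tiger'))   # True  - hashes match
print(login('scott', 'lion'))    # False - hashes differ
```

Note that at no point does verification need to reverse the hash; only hashes are ever compared.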

Does this mean that two identical passwords (example: welcome) will have the same hash value? The answer is yes.
In that case, if user sys and user scott both have the password tiger, it will hash to the same value, and user scott could log in as sys using his own password. Hence, to avoid this, we should usually hash a bigger string rather than just the password, for example 'username/password'. This combination will be unique, and a user can still log in with his password. In this case, even if scott and sys both have the password tiger, the hash values for scott/tiger and sys/tiger "might" differ.
I said might because even two different strings can have the same hash value. The best hashing algorithm is one that tries to give a different value for each different input string. Basically we are mapping an infinite number of strings onto a finite number of hash values, so different input strings can end up with the same hash value.
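A collision is easy to demonstrate with a deliberately tiny hash function (this toy function is my own illustration, not anything Oracle uses): when infinitely many strings map onto a handful of buckets, some distinct strings must share a value.

```python
# A deliberately tiny hash: sum of character codes modulo a small table size.
# Mapping infinitely many strings onto 16 buckets guarantees collisions.
def tiny_hash(s, table_size=16):
    return sum(ord(c) for c in s) % table_size

# 'ab' and 'ba' are different strings but contain the same characters,
# so under this hash they collide
print(tiny_hash('ab'), tiny_hash('ba'))  # 3 3
```

Real hash functions make collisions astronomically unlikely, but the pigeonhole principle means they can never be impossible.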
Below is a simple hashing function we can use in Oracle. The Oracle database itself uses a more complex function to store the password.

declare
 var1 number;
begin
 var1 := dbms_utility.GET_HASH_VALUE('welcome',1000000000, power(2,30));
 dbms_output.put_line(var1);
end;
/

PL/SQL procedure successfully completed.

So the hash value that we get here is a number. You can convert it to hexadecimal as well.
The GET_HASH_VALUE function takes 3 inputs and returns a number. The 3 inputs are: 1) the input string to be hashed, 2) the base value to be added to the hash number we get, and 3) the size of the hash table to be used.
You can vary the inputs and change the hash value you get. For example, instead of adding a constant base of 1000000000, I can just as well give power(10,length('welcome')). That way the base value to be added depends on the input string, as shown below.

declare
 var1 number;
begin
 var1 := dbms_utility.GET_HASH_VALUE('welcome',power(10,length('welcome')),power(2,30));
 dbms_output.put_line(var1);
end;
/

PL/SQL procedure successfully completed.
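The contract of GET_HASH_VALUE can be mimicked in Python to make the role of the three inputs concrete. Here `zlib.crc32` is just a stand-in hash (this sketch is mine, and the numbers will not match Oracle's output); what matters is that the result always lands in the range [base, base + hash_size - 1].

```python
import zlib

def get_hash_value(name, base, hash_size):
    # Mimics the contract of DBMS_UTILITY.GET_HASH_VALUE: map the input
    # string into one of hash_size buckets, then offset by base.
    # crc32 is only a stand-in for Oracle's internal function.
    return base + (zlib.crc32(name.encode()) % hash_size)

h = get_hash_value('welcome', 1000000000, 2**30)
print(h)
# The same input always produces the same value, inside the range
assert 1000000000 <= h < 1000000000 + 2**30
```

Changing either the base or the table size changes the value you get, which is exactly what the two PL/SQL examples above demonstrate.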

Another approach you can apply for securing your password is to encrypt the password, then compute a hash value for it, and store the hash value, like below:

declare
 var1 varchar2(2000) := 'welcome';
 l_temp varchar2(100);
 opvar varchar2(2000);
 var2 number;
begin
 l_temp := rpad (var1, (trunc(length(var1)/8)+1)*8, chr(0));
 dbms_obfuscation_toolkit.DESEncrypt(input_string => l_temp, key_string => 'oracleencryptionkey', encrypted_string => opvar);
 var2 := dbms_utility.GET_HASH_VALUE(var1,power(10,length(var1)),power(2,30));
 dbms_output.put_line('Enc Output :'||opvar);
 dbms_output.put_line('Hash value :'||var2);
end;
/
Enc Output :?B;FAE
Hash value :301289758

PL/SQL procedure successfully completed.

So in this case, even if a user is able to find the encryption key, he won't know the hashing algorithm and won't be able to crack the password.
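The encrypt-then-hash idea can be sketched as follows. This is purely illustrative: real DES is not in the Python standard library, so a toy XOR cipher stands in for DESEncrypt, and SHA-256 stands in for the hashing step; none of this reflects Oracle's actual internals.

```python
import hashlib

def xor_encrypt(text, key):
    # Toy XOR cipher standing in for DESEncrypt (illustration only,
    # not secure and not the real DES algorithm)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(text.encode()))

def store_value(password, key):
    # Encrypt first, then hash the ciphertext and store only the hash:
    # an attacker who learns the key still has to know the hashing step
    ciphertext = xor_encrypt(password, key)
    return hashlib.sha256(ciphertext).hexdigest()

key = b'oracleencryptionkey'
stored = store_value('welcome', key)
print(stored)
```

Verification repeats the same two steps on the supplied password and compares hashes; the stored value alone reveals neither the password nor the ciphertext.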

Hope this helps !!


Understanding SQL Trace file – Oracle Database 10g


Many times, when the application runs slow, one of the usual approaches for a DBA is to enable tracing and capture the SQL queries the application is running. We then check whether some of the SQL the application runs is becoming a bottleneck. But getting a trace file for different applications can be very tricky. The reason is that when a user connects from a front-end application, it is very difficult for us (the DBAs) to track the session information. When he logs into the application and does some database activity, a new session gets created, and it is difficult to know whether that session belongs to the user we intend to monitor. Also, if the application is written efficiently, a client might get disconnected in between and then reconnect when some database operation needs to be performed. Again the SID changes. So it is very difficult for a DBA to keep track of the sessions an application goes through.

So in order to get around this problem we have something called End-to-End application tracing.

End to End Tracing:

In end-to-end application tracing, instead of tracing a single session, we trace using different attributes:

Client identifier – specifies an end user based on the logon Id, such as HR.HR
Service – specifies a group of applications with common attributes, service level thresholds, and priorities; or a single application, such as ACCTG for an accounting application
Module – specifies a functional block, such as Accounts Receivable or General Ledger, of an application
Action – specifies an action, such as an INSERT or UPDATE operation, in a module
Session – specifies a session based on a given database session identifier (SID), on the local instance
Instance – specifies a given instance based on the instance name

To gather the appropriate statistics using PL/SQL, you need to enable statistics gathering for client identifier, service, module, or action using procedures in the DBMS_MONITOR package.

For example, suppose you want to enable statistics gathering for user OE. You can enable statistics for this client identifier using the following procedure:

EXECUTE DBMS_MONITOR.CLIENT_ID_STAT_ENABLE(client_id => 'OE.OE');

So whenever any client logs into the database using OE as the user ID, all activities for that user get traced in the trace file. You can see the client identifier in the CLIENT_IDENTIFIER column of the v$session view.

EXECUTE DBMS_MONITOR.CLIENT_ID_STAT_DISABLE(client_id => 'OE.OE'); is used to disable statistics gathering for that client ID.

Similarly, you can also enable tracing for any service, module, or required action, etc. For more information on end-to-end tracing, kindly check the link in the reference section of this post.


Coming to the contents of the trace file: when a trace file is first generated, you won't be able to understand it, as it is not in a readable format. You need to use a utility called TKPROF in order to make the trace file readable.
The basic inputs to tkprof are 2 file names. The first is the name of the trace file you want to format, and the second is the name of the output file.
Example: tkprof jdbc_traces.trc jdbc_traces.txt

Contents of Trace file:

After you use the tkprof utility you get the output file in readable format. The complete trace file contains well-formatted blocks. Every block contains the SQL, statistics about that SQL, miss statistics, the query execution plan, and wait statistics.
A sample block from one trace file is shown below:


SQL ID : 74637j0wuhxfx

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        9      0.00       0.00          0          0          0           0
Execute    823      0.02       0.02          0          0          0           0
Fetch      823      0.05       0.05          0       1646          0         823
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1655      0.08       0.08          0       1646          0         823

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 100

Rows     Row Source Operation
-------  ---------------------------------------------------
 1  TABLE ACCESS BY INDEX ROWID SNP_PROJECT (cr=2 pr=0 pw=0 time=0 us cost=1 size=108 card=1)
 1   INDEX UNIQUE SCAN PK_PROJECT (cr=1 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 81820)

Elapsed times include waiting on following events:
 Event waited on                             Times   Max. Wait  Total Waited
 ----------------------------------------   Waited  ----------  ------------
 SQL*Net message to client                     823        0.00          0.00
 SQL*Net message from client                   822      103.76        173.63

Now let's understand the meaning of these statistics. The call statistics table has columns count, cpu, elapsed, disk, query, current and rows. The meaning of these columns is given at the start of your trace file.
COUNT   – Number of times a statement was parsed, executed, or fetched.

CPU     – Total CPU time in seconds for all parse, execute, or fetch calls for the statement. This value is zero (0) if TIMED_STATISTICS is not turned on.

ELAPSED – Total elapsed time in seconds for all parse, execute, or fetch calls for the statement. This value is zero (0) if TIMED_STATISTICS is not turned on.

DISK    – Total number of data blocks physically read from the datafiles on disk for all parse, execute, or fetch calls.

QUERY   – Total number of buffers retrieved in consistent mode for all parse, execute, or fetch calls. Usually, buffers are retrieved in consistent mode for queries.

CURRENT – Total number of buffers retrieved in current mode. Buffers are retrieved in current mode for statements such as INSERT, UPDATE, and DELETE.

ROWS    – Total number of rows processed by the SQL statement. This total does not include rows processed by subqueries of the SQL statement.
Also, the rows give statistics about 3 call types: Parse, Execute and Fetch.
PARSE   – Translates the SQL statement into an execution plan, including checks for proper security authorization and checks for the existence of tables, columns, and other referenced objects.

EXECUTE – Actual execution of the statement by Oracle. For INSERT, UPDATE, and DELETE statements, this modifies the data. For SELECT statements, this identifies the selected rows.

FETCH   – Retrieves rows returned by a query. Fetches are only performed for SELECT statements.
For example, in the above table we have the following statistics:
– SQL query has been parsed 9 times

– SQL query got executed 823 times

– fetch operation has been done 823 times

– CPU time taken for parsing was less than 1/100th of a second

– CPU time for execute was 0.02 sec

– CPU time for fetch was 0.05 sec

– there was no elapsed time involved for parsing

– elapsed time for execute was 0.02 sec

– elapsed time for fetch was 0.05

– No block was fetched from disk, so no physical reads

– 1646 blocks were received in consistent mode during the fetch operation. Since this is a SELECT statement, the blocks show up under the fetch operation; for a DML statement the blocks would show up under the execute operation.

– none of the blocks were received in current mode. You will see this value get updated for DML statements or when DML activity is involved

– Total number of rows processed are 823
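As a quick sanity check, the "total" row in the call-statistics table is the column-wise sum of the parse, execute, and fetch rows (for the integer columns, at least; the displayed timings can differ by rounding). Using the numbers from the block above:

```python
# Call counts from the tkprof block above, one dict per call type
parse   = dict(count=9,   disk=0, query=0,    current=0, rows=0)
execute = dict(count=823, disk=0, query=0,    current=0, rows=0)
fetch   = dict(count=823, disk=0, query=1646, current=0, rows=823)

# The 'total' row is the column-wise sum of the three call rows
total = {k: parse[k] + execute[k] + fetch[k] for k in parse}
print(total)  # {'count': 1655, 'disk': 0, 'query': 1646, 'current': 0, 'rows': 823}
```

This matches the total line in the trace: 1655 calls, 1646 consistent gets, 823 rows.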
Other than this, you can also see the misses in library cache during each call. If there are no misses, they won't be mentioned. The 1 miss for this SQL is quite acceptable, since when a SQL runs for the first time it needs to be parsed and executed and its execution plan gets stored. So the parse and execute calls each show 1 miss.

If you look at the statement, the parse call happened 9 times but the miss count is only 1, meaning the statement was hard parsed only once and then stored in the library cache. For the next 8 parses, the already-parsed statement from the library cache was reused. So we have 1 miss and 8 hits.

Similarly, the execution plan was built only once, and 822 times Oracle reused the same execution plan from the library cache.
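This parse-once, reuse-thereafter behaviour is essentially a cache. A toy model (my own sketch, not Oracle's actual library cache structure) reproduces the 1-miss, 8-hit pattern from the trace:

```python
# Toy model of the library cache: hard parse once on a miss, reuse after
library_cache = {}
misses = hits = 0

def parse(sql):
    global misses, hits
    if sql not in library_cache:
        misses += 1
        library_cache[sql] = 'plan-for:' + sql  # hard parse: build the plan
    else:
        hits += 1                               # soft parse: reuse cached plan
    return library_cache[sql]

for _ in range(9):   # the statement was parsed 9 times, as in the trace
    parse('SELECT * FROM SNP_PROJECT WHERE id = :1')

print(misses, hits)  # 1 8
```
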
Next, it gives the execution plan for the query, as shown below:

Rows     Row Source Operation
-------  ---------------------------------------------------
 1  TABLE ACCESS BY INDEX ROWID SNP_PROJECT (cr=2 pr=0 pw=0 time=0 us cost=1 size=108 card=1)
 1   INDEX UNIQUE SCAN PK_PROJECT (cr=1 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 81820)

Here we have several parameters shown in brackets: (cr=2 pr=0 pw=0 time=0 us cost=1 size=108 card=1)

cr = Consistent Reads

pr = Physical Reads

pw = Physical Writes

time = time taken by this step in microseconds

cost = cost incurred by the step

size = represents the size of data in that step

card = cardinality

Finally, at the end, the SQL trace gives the wait events for the query, the max wait time, and the total time waited.

Concluding from the Traces:

With the values above we need to make conclusions and decide whether to tune the SQL or not. Unless we have a locking issue or badly performing SQLs, we should not worry too much about CPU time or elapsed time, because timings come into consideration only when we have badly performing SQLs. The key is the number of block visits, both query (that is, subject to read consistency) and current (that is, not subject to read consistency). Segment headers and blocks that are going to be updated are acquired in current mode, but all query and subquery processing requests the data in query mode.


Oracle® Database Performance Tuning Guide

Hope this helps !!

Oracle Database Recovery – Basic Concept

It's been a really long time since I wrote a post. These days I seem to be very busy with all the Fusion Middleware floating around me. Anyway, here I start again. This post is a kind of kickoff post. I was going through the Expert One-on-One book by Thomas Kyte and read a wonderful basic recovery-related article in the book. Oh, by the way: everyone should read Expert One-on-One at least once in his/her life. A must-read book, not only for DBAs but especially for developers.

So here the recovery concept goes. This is a very basic account of how Oracle does crash recovery. I will be putting the same story here, but in my words.

We know the basic DML statements are insert, update and delete. We will consider a simple transaction and understand what data gets generated during the transaction, where it is stored, and how the Oracle database recovers in case of a crash.

Let's say we run an insert statement on the EMP table:

insert into EMP (EMPNO, NAME, JOB, SAL) values (10, 'Deo', 'DBA', 100);

The following things will happen:

  • This statement will modify the data blocks in the buffer cache
  • It will also fetch the index blocks into the buffer cache and modify them
  • It will generate undo data in the buffer cache
  • It will generate redo data in the redo log buffer. These entries in the redo log buffer protect the blocks in the buffer cache.

So the picture in the SGA will look as shown below.


So far, the user has executed an insert statement; it is not yet committed. The blocks in the buffer cache are protected by entries in the redo log buffer.

What if the system crashes at this stage?

If the system crashes at this point, everything in memory gets wiped out, and it will be as if nothing happened. There is no record of this transaction in any of the files (datafiles or redo log files). This is OK; we don't need anything that was in the SGA.

What if the buffer cache fills up at this stage?

We have done the insert, but we have not yet committed. The changed blocks are in the buffer cache. The redo records are also not yet flushed to the online redo log files and are still present in the redo log buffer. Suppose at this stage the db buffer cache fills up. Now the DBWR process needs to make room for new blocks to be fetched for some other user session. In that case DBWR will ask the LGWR process to flush the redo log buffer to the online redo log files first. So the redo records generated by our insert transaction get flushed to the online redo log file. Note that we have NOT yet committed the insert.

This behaviour of DBWR asking LGWR to flush the redo data first is essential. Imagine what would happen if the data in the buffer cache were written to the database files before the redo data was flushed to the redo log files. If the system crashed after the buffer cache data was written to the datafiles but before the redo data was flushed to the online redo log files, then when the database came back up there would be no way to tell whether our insert transaction was committed or not. Since our transaction was not committed, how would Oracle know that it needs to roll back the insert? That's why the data in the redo log buffer gets flushed to the online redo log files first. The scenario will look as shown in fig 2 below.


Now the blocks in the database buffer cache are protected by the entries in the redo log files. Some of these blocks might get written to the datafiles for the DBWR process to make room, or none of them may get written if DBWR can make room without touching them. It depends.

What if the system crashes at this stage?

At this stage, if the system crashes, all the data in memory gets wiped out. The data blocks in the buffer cache are gone, but we still have the entries in the online redo log files. So when Oracle performs crash recovery it reads the online redo log files and rolls the transactions forward. The redo log file has the data needed to regenerate the data blocks that were present in the buffer cache before the crash. So Oracle reads the redo log files and regenerates the data block, the index block, and the undo data block as well. Note that an undo block is treated here like any other data block, and changes to undo blocks generate redo entries too. So during crash recovery Oracle regenerates the 3 blocks, and the scenario looks similar to fig 2. On reading further through the redo log file, Oracle finds that this insert transaction was never committed. So Oracle uses the undo data generated for this transaction to roll the change back. This deletes the inserted row.

If, before the system crash, the data in the buffer cache had already been written to the datafiles, Oracle reapplies the redo during roll-forward and then uses the regenerated undo to overwrite the previously inserted data. Now everything is back the way it was.

This scenario covers the rudimentary details of crash recovery. The recovery is a 2-step process: first Oracle rolls forward all the transactions, and then it rolls back the transactions that were not committed (using the undo data it regenerated during the roll-forward).

Similar concept applies to the update and delete statements as well.
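The two-step roll-forward/roll-back process can be sketched in miniature. This is a heavily simplified model of my own (a real redo stream carries block-level change vectors, not row operations), but it shows why replaying everything and then undoing the uncommitted work gives the right end state:

```python
# Minimal sketch of two-step crash recovery: roll forward every redo
# record, then roll back transactions that have no commit record.
redo_log = [
    ('T1', 'insert', ('EMP', 10, 'Deo')),   # uncommitted at crash time
    ('T2', 'insert', ('EMP', 20, 'Sam')),
    ('T2', 'commit', None),
]

def crash_recover(redo_log):
    table, undo, committed = [], [], set()
    # Step 1: roll forward - replay every redo record
    # (note: the undo information is rebuilt during this step too)
    for txn, op, data in redo_log:
        if op == 'insert':
            table.append(data)
            undo.append((txn, 'delete', data))  # undo record for the change
        elif op == 'commit':
            committed.add(txn)
    # Step 2: roll back - apply undo for transactions never committed
    for txn, op, data in reversed(undo):
        if txn not in committed and op == 'delete':
            table.remove(data)
    return table

print(crash_recover(redo_log))  # [('EMP', 20, 'Sam')]
```

T1's insert is replayed and then rolled back because no commit record for it exists in the log, while T2's committed row survives.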

What happens to the undo data?

The undo data generated during a transaction (as in the scenario above) is kept for some time. If there is no space left in memory, this undo data is flushed to undo segments in the undo tablespace, where it resides for at least the period defined by the UNDO_RETENTION init.ora parameter. This undo data is used when rolling back a transaction. Please note that redo log files are never used for rolling back a transaction. Redo logs and archive logs are read only during database recovery; at all other times, data is written to the redo log files but never read from them. So for rolling back a transaction, it is the undo data saved in the undo tablespace that is required.

Hope this helps !!


Expert One-on-One – By Thomas Kyte









Automating browser activities using Selenium

While carrying out the OAM installation as mentioned in my previous post, the challenge was to automate the installation in a single script. I have been automating installations of various components for a long time using silent-install response files. But this automation was different: the setup involves carrying out various activities from the browser, and it involves setup and configuration from the browser at different stages of the installation.

This was a real challenge for me. I started with the automation of the component-level installation. Component-level silent installation was very simple: we get a response file when we install OAM. The response files are present under $install_dir/oblix/config. The name of the response file is install_options.txt. You will see this response file under every component's installation directory.

Automating the browser tasks for the OAM installation involves the following steps:

1) Creating perl scripts for the browser activities.
You have to install OAM manually once and record the browser activities with the Selenium tool. From this recording you create the perl scripts.
2) Modifying the perl scripts for the OAM browser tasks.
You have to modify each script and place the bounce commands for OIS and Apache at the appropriate locations where the browser waits for the services to be bounced.
3) Installing the Selenium perl module and the Selenium server on the Linux server where you want to run these perl scripts.

For the browser automation I used the Selenium tool. I will show here a demo of what Selenium can do.

Selenium IDE:

Selenium is a tool for automating browser activity. The client version of this tool is available on Windows in the form of a Firefox addon.
Download Selenium IDE and install it in your Firefox browser.
Once installed, follow the steps below to understand how Selenium works.

1) Open the Firefox browser and open Google
2) Tools -> Selenium IDE

3) Search for "Selenium IDE" or anything else in the Google window and navigate through some pages. I used the following test case:
Search for "Selenium IDE", click on the first link, click on the Downloads tab on that page, click on the Documentation tab, etc.
4) Once done, turn off the record button on your Selenium screen.

Whatever steps you followed in the browser got recorded by Selenium. Now you can click the play icon in the Selenium tool and it will navigate exactly as you did before.
Selenium also takes care of entering information in the browser if you entered it before. For example, a login ID and password can be recorded and entered by Selenium when we play back the recording.

Creating perl script from Selenium:

The great utility of Selenium is that it can automatically create a perl script for the sequence we recorded. This can be used very well for automating the browser activities of OAM.
We record the sequence and create a perl script, then run that perl script on the Linux server where we are doing the OAM installation.

To create a perl script from the recorded sequence, go to Options -> Format -> Perl – Selenium RC.
Note that you can get the perl script only after recording the sequence. The perl script for my recorded sequence looks as below:

use strict;
use warnings;
use Time::HiRes qw(sleep);
use Test::WWW::Selenium;
use Test::More "no_plan";
use Test::Exception;

my $sel = Test::WWW::Selenium->new( host => "localhost", 
 port => 4444, 
 browser => "*chrome", 
 browser_url => "http://change-this-to-the-site-you-are-testing/" );

$sel->type_ok("q", "selenium IDE");

So, likewise, for each browser-related step in the OAM configuration I created a perl script. You will end up creating 4 perl scripts.

Referring to the OAM install post:

Script 1 – Step 3) Setting up Identity System
Script 2 – Step 5) Setup Policy Manager
Script 3 – Step 6) Installing Oracle Access Manager
Script 4 – Step 7) Installing WebGate (browser part)

You also need to edit the perl scripts at the appropriate places in order to bounce the services for OIS and Apache. I introduced the following commands in the respective perl scripts to take care of that:

system('$HOME/oam/Apache2063/bin/apachectl stop');
system('$HOME/oam/Apache2063/bin/apachectl start');

The next task that remains is installing the Selenium perl module on the Linux server where you want to run these perl scripts for configuring OAM.

Installing Selenium perl module and Selenium server on Linux:

To install the Selenium perl module and the Selenium server on Linux, you need to download Selenium Remote Control.
This is the Selenium server setup. Once downloaded, ftp it to the Linux server and unzip it. You will see the following directories:

-bash-3.1$ cd selenium-remote-control-1.0-beta-2
-bash-3.1$ ls

selenium-perl-client-driver-1.0-beta-2 is the perl module you want to install.
To install the perl module locally, you can use the PREFIX=<path> argument while running Makefile.PL, as shown below. Here <path> represents the path to a local directory where you want to install the perl module.

perl Makefile.PL PREFIX=/slot/ems3170/oracle/oam/perl/install

When you try to install this perl module you may find that some other perl modules are missing; it depends on the perl installation you have on your Linux server. If some dependent perl modules are missing, make sure to download and install them from CPAN.
Once the perl module is installed, set the PERL5LIB variable to include all the library paths created by the different perl modules.
For example, in my case PERL5LIB was as shown below:
export PERL5LIB=$HOME/oam/perl/install/lib/5.6.1:$HOME/oam/perl/install/lib/5.8.1:$HOME/oam/perl/install/lib/site_perl/5.6.1:$HOME/oam/perl/install/lib/site_perl/5.8.0:$HOME/oam/perl/install/local/perl5.8/lib/site_perl/5.8.0

Once the perl module is installed, you can start the Selenium server. You need JDK 1.6 to start the Selenium server, so verify your JDK version.

-bash-3.1$ java -version
java version "1.6.0_07"
Java(TM) SE Runtime Environment (build 1.6.0_07-b06)
Java HotSpot(TM) Server VM (build 10.0-b23, mixed mode)

If your java is at the 1.4 level, install 1.6 and then use the 1.6 version.

Following is the command to start the Selenium server.

Go to the location where you unzipped Selenium Remote Control and go inside the selenium-server directory:

-bash-3.1$ cd selenium-server-1.0-beta-2/
-bash-3.1$ pwd
-bash-3.1$ ls
javadoc                       selenium-server-sources.jar       sslSupport
selenium-server-coreless.jar  selenium-server-tests.jar
-bash-3.1$ java -jar selenium-server.jar &

The & at the end is important: you need to start the server in the background, or else you won't be able to use that session. Once the Selenium server is started, the perl module installed, and PERL5LIB set correctly, try running the perl script that you created earlier with Selenium IDE. You will see that from the Linux session a browser gets launched automatically and all the activities in the browser are performed automatically.
Finally, you can wrap all the installation and configuration steps in a shell script in the correct sequence and have the OAM installation automated.

Hope this helps !!

