Patching history of a file – adfhrept.sql

Oracle Apps 11i and E-Business Suite R12 provide a wonderful utility to get the history of any file applied through patches. When a patch is applied, the higher version of a file gets implemented in the application. This utility gives the complete history of a file, starting from the initial release till now, showing which patch introduced which version and on what date.

The file name is adfhrept.sql and it is present under $AD_TOP/patch/115/sql.

Following is the usage of the file:

adfhrept.sql <filename> \

<latest file version only? (Y/N)> \
<start date(mm/dd/rr or ALL)> \
<end date (mm/dd/rr or ALL)> <patchtype/ALL> <language/ALL> \
<appltop/ALL>

<limit to forms server? (Y/N)> \
<limit to web server?(Y/N)> \
<limit to node server? (Y/N)> \
<limit to admin server?(Y/N)> \
<only patches that change DB? (Y/N)>

For example:

-bash-2.05b$ sqlplus apps/apps @adfhrept.sql adphst.odf N 12/01/00 12/31/08 ALL ALL ALL N N N N N

SQL*Plus: Release 8.0.6.0.0 - Production on Mon Jun 30 12:56:21 2008

(c) Copyright 1999 Oracle Corporation.  All rights reserved.

Connected to:
Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
With the Partitioning option
JServer Release 8.1.7.4.0 - Production

Writing data to report file adfilerep.xml…

Done writing data to report file adfilerep.xml

Note:
To view the XML report from browser
Copy file adfilerep.xml to OA_HTML top directory

Disconnected from Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
With the Partitioning option
JServer Release 8.1.7.4.0 - Production

This sql will create an XML file in the directory from which you run the utility. You need to move the XML file to OA_HTML, where its corresponding XSLT file is present. You can then access this XML file using the URL: http://(hostname):(Apache port)/OA_HTML/adfilerep.xml
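
A minimal sketch of these two steps, assuming the report was generated in your current directory and that $OA_HTML is set in the applications environment:

cp adfilerep.xml $OA_HTML/
# then browse to http://<hostname>:<apache port>/OA_HTML/adfilerep.xml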

The output is a formatted patch-history report for the file.

Hope this helps !!

Reference:

Metalink Note ID: 162498.1


AutoPatch modes, arguments and options

Here are the various options for applying a patch in Oracle Apps 11i and R12. adpatch comes with lots of options that can be used, especially when we are applying a patch in production.

Modes of ADPATCH

If we talk about the modes of applying a patch, there are 3 modes:

1) Pre-Install Mode

Pre-install mode is used to update AD utilities before an upgrade and to apply family consolidated upgrade packs.
AutoPatch Pre-AutoInstall mode allows you to apply patches when your installation is missing database information and/or filesystem information that AutoPatch requires to run in normal mode.

Examples of when you would run AutoPatch in Pre-AutoInstall mode (and cannot run it in normal mode) include:

  • Prior to installing Oracle Applications for the first time
  • Prior to upgrading Oracle Applications to the latest release.
  • During an upgrade (to apply a bug fix that stopped your upgrade)

Applying a patch in pre-install mode performs the following tasks:

  • Version checking
  • File copy actions
  • Relink FND and AD executables
  • Save patch history information to file system

AutoPatch in pre-install mode will NOT:

  • Run SQL or EXEC commands
  • Generate files
  • Read product driver files
  • Apply maintenance pack

To apply a patch in pre-install mode, run adpatch preinstall=y

2) Test Mode

AutoPatch provides a test mode in which it tells you everything it would have done in applying a patch, but doesn’t actually apply the patch.

To run AutoPatch in Test Mode, you must include ‘apply=no’ on the AutoPatch command line. For example:

$ adpatch apply=no

Instead of performing an action, AutoPatch indicates that it is not performing the action because “Apply=No”. In general, AutoPatch lists each file it would have copied, generated, relinked, or executed. This shows you exactly what actions it would have performed.

AutoPatch test mode works the same as normal mode, with the following exceptions:

  • It does not copy any files from your patch directory into your installation area.
  • It does not copy any files from your APPL_TOP to JAVA_TOP or OAH_TOP.
  • It does not archive any object modules into your product libraries.
  • It does not generate any forms or reports.
  • It does not relink any executables.
  • It does not run any ‘sql’ or ‘exec’ commands.
  • It does not update the release version in the database.
  • It does not update the patch history file.

AutoPatch asks you the same initial questions in test mode as in normal mode. It performs the following actions to determine what it would have done if run in normal mode:

  • Reads and validates the patch driver file.
  • Reads product file driver files.
  • Extracts object modules from your product libraries (so it can perform version checking on the object modules it extracts).
  • Performs version checking.
  • Looks in the database to determine what ‘sql’ and ‘exec’ commands it would have run.

It’s a good practice to run the patch in test mode and analyze the outcome before applying the patch in normal mode.

3) Non-Interactive Mode

Starting in Release 11.5, you can run AutoPatch non-interactively.

Creating a defaults file

Before you can run AutoPatch non-interactively, you must first create an AutoPatch defaults file for your current environment.

Here is a simple way to create an AutoPatch defaults file for your current environment:

1. Specify defaultsfile=<New Defaults File Name> on the AutoPatch command line. The defaults file must be located under $APPL_TOP/admin/<SID>.

For example:

adpatch defaultsfile=$APPL_TOP/admin/testdb1/my_def.txt

2. Run AutoPatch up to the point where it asks you for the directory where your Oracle Applications patch has been unloaded. Then type ‘abort’ at this prompt.

3. Verify that your defaults file exists.

Once you have an AutoPatch defaults file for your current environment, you can run AutoPatch non-interactively.

Applying a single patch driver file non-interactively

Before applying any Oracle Applications patch, either interactively or non-interactively, you should read the README file (usually called readme.txt) supplied with the patch. You should also read the documentation supplied with the patch (if any).

It is possible to apply just a single patch driver file non-interactively using AutoPatch. Here is an example:

Assume the following:

  • defaults file is $APPL_TOP/admin/testdb1/def.txt
  • Applying copy driver for patch 123456, which is located in directory $APPL_TOP/patch/123456.
  • Using three parallel workers
  • AutoPatch log file name is cpy123456.log

The AutoPatch command line would be:

adpatch defaultsfile=$APPL_TOP/admin/testdb1/def.txt \
logfile=cpy123456.log \
patchtop=$APPL_TOP/patch/123456 \
driver=c123456.drv \
workers=3 \
interactive=no

If we don’t specify any of the modes mentioned above and apply the patch simply using the adpatch command, then it is a normal-mode patch application.

Having seen the modes of patch application, we will now see the various arguments for applying a patch.

1) defaultsfile
Purpose: This option is used when we are running the patch in non-interactive mode. In that case we create a defaults file and pass that file as an option when running the patch non-interactively.
Default: none. No default file read or written.

2) logfile
Purpose: This is the name of adpatch log file which it will write during patch application.
Default: none. Adpatch prompts for this value.

3) workers
Purpose: Specifies the number of workers to run. This value depends on number of CPU and other factors.
Default: none. Adpatch prompts for this value.

4) patchtop
Purpose: Top-level directory for the current patch. This is the directory created after unzipping the patch; its name is the patch number.
Default: none. Adpatch prompts for this value.

5) driver
Purpose: Name of the patch driver file. This comes with the patch and is present in patch directory.
Default: none. Adpatch prompts for this value.

6) restart
Purpose: To restart an existing session. Only valid when interactive=no is also specified.
Default: No

7) localworkers
Purpose: Used in Distributed AD to specify the number of workers to be run on the current machine. If you have a multi-node instance (for example RAC with a shared APPL_TOP), you can use this parameter to run the patch in parallel on multiple nodes: start a few workers on node 1, a few on node 2, and so on. The way this is done is to start adpatch on one node with localworkers=<some value less than the total workers>, then run adctrl on the other nodes in distributed mode and start some more workers there. This speeds up the process and utilizes resources effectively (a sketch follows the argument list).
Default: Value specified for workers.

8) printdebug
Purpose: To display extra debugging information.
Default: No.
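
As a sketch of the Distributed AD flow described under localworkers (the patch number, driver name and worker counts here are illustrative only):

On node 1, start adpatch with 8 workers in total but only 4 of them local:

adpatch patchtop=$APPL_TOP/patch/123456 driver=u123456.drv \
workers=8 localworkers=4

On node 2, attach the remaining workers using adctrl in distributed mode:

adctrl distributed=y

adctrl will then prompt for the range of workers to start on that node.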

Now let’s consider some common options that can be used with adpatch options=<value> (a combined example follows the list):

1) checkfile
Purpose: To skip running exec, SQL, and exectier commands if they are recorded as already run. Indicates that Autopatch should run the command *only* if a certain file is newer than the version of it that was last run. The idea behind it is to reduce the duration of an Autopatch session by skipping actions that don’t really need to be performed. When used in the right manner, it can dramatically improve Autopatch performance, especially for big patches and/or long running actions.
Default: checkfile (use ‘nocheckfile’ to skip)

2) compiledb
Purpose: To compile invalid objects in the database after running actions in the database driver.
Default: compiledb (use ‘nocompiledb’ to skip)

3) compilejsp
Purpose: To compile out-of-date JSP files, if the patch has copy actions for at least one JSP file.
Default: compilejsp (use ‘nocompilejsp’ to skip)

4) copyportion
Purpose: To run commands found in a copy driver. This will copy the higher version files from patch to product top.
Default: copyportion (Use ‘nocopyportion’ to skip. Use it only when mentioned in readme of patch)

5) databaseportion
Purpose: To run commands found in a database driver. This portion includes applying the files (like sql, pls etc) to database.
Default: databaseportion (use ‘nodatabaseportion’ to skip. Use it only when mentioned in readme of patch)

6) generateportion
Purpose: To run commands found in a generate driver. This portion will generate new executable files from the code copied in by the patch. For example, it will generate new forms files (.fmx) from new .fmb files.
Default: generateportion (use ‘nogenerateportion’ to skip)

7) integrity
Purpose: To perform patch integrity checking. Tells adpatch whether to perform patch integrity checking, which verifies that the version of each file referenced in a copy action matches the version present in the patch.
Default: nointegrity (By default the integrity is not checked)

8) maintainmrc
Purpose: To maintain the MRC schema after running actions found in the database driver.
Default: maintainmrc (use ‘nomaintainmrc’ to skip)

9) autoconfig
Purpose: Tells adpatch to run Autoconfig after patch installation.
Default: autoconfig (use ‘noautoconfig’ to skip)

10) parallel
Purpose: To run in parallel the actions that update the database (like SQL) and the actions that generate files (like genform).
Default: parallel (use ‘noparallel’ to skip)

11) prereq
Purpose: Tells adpatch whether to perform prerequisite patch checking prior to running patch driver files that contain actions normally found in the copy driver.
Default: prereq (use ‘noprereq’ to skip)

12) validate
Purpose: To connect to all registered Oracle Applications schemas at the start of the patch. Adpatch validates the passwords for each schema.
Default: novalidate (use ‘validate’ to validate schema passwords)
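
As an illustrative combination (skip portions only when the patch readme says so), multiple options are passed as a comma-separated list:

adpatch options=nocompiledb,nocompilejsp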

The following flags can be passed to adpatch (an example follows the list):

1) hidepw
Purpose: This argument is used to hide the passwords in log files.
Default: nohidepw

2) trace
Purpose: Tells the adpatch utility whether to log all database operations to a trace file.
Default: notrace

3) logging
Purpose: Tells the adpatch utility whether to create indexes using the logging or nologging mode.
Default: logging
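
Flags are passed comma-separated in the same way as options. For example, to hide passwords and trace database operations in one run:

adpatch flags=hidepw,trace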

Hope this helps !!

Shared Memory and Semaphores – Oracle Database

Introduction:

Sometimes, if the database shutdown is not graceful, shared memory won’t be released. In that case, if you try to log in and start the database, you will get the following error.

-bash-2.05b$ sqlplus /nolog
SQL*Plus: Release 8.1.7.0.0 - Production on Thu Jun 26 12:57:25 2008
(c) Copyright 2000 Oracle Corporation.  All rights reserved.
SQL> connect / as sysdba
ERROR:
ORA-03113: end-of-file on communication channel

To resolve such issues, you can check the shared memory segments and semaphores that are held by the defunct Oracle processes. Before that, here is a brief understanding of shared memory segments and semaphores in UNIX.

Shared Memory Segments

Shared memory allows processes to access common structures and data by placing them in shared memory segments. It’s the fastest form of Interprocess Communication (IPC) available since no kernel involvement occurs when data is passed between the processes. In fact, data does not need to be copied between the processes.

Oracle uses shared memory segments for the System Global Area (SGA), which is an area of memory shared by Oracle processes. The size of the SGA has a significant impact on Oracle’s performance since it holds the database buffer cache and much more.

To see all shared memory settings, execute:

-bash-3.00$ ipcs -lm

------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 4194303
max total shared memory (kbytes) = 8388608
min seg size (bytes) = 1

The maximum size of a shared memory segment is decided by the SHMMAX kernel parameter.
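
On Linux, for example, you can read the current value from /proc and, as root, change it with sysctl; the values below are illustrative:

-bash-3.00$ cat /proc/sys/kernel/shmmax
4294967295

# as root:
# sysctl -w kernel.shmmax=8589934592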

ipcs -m will give you the shared memory segments, their sizes, and which user owns each segment.

-bash-3.00$ ipcs -m

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x0001ffb8 0          root      666        76         3
0x00025990 32769      root      666        8308       3
0x00027cb9 65538      root      666        132256     1
0x00027cba 98307      root      666        132256     1
0xf771930c 1277956    oracle01  660        2166358016 144

Semaphores

You can see the semaphore settings using ipcs -ls

-bash-3.00$ ipcs -ls

------ Semaphore Limits --------
max number of arrays = 1100
max semaphores per array = 1100
max semaphores system wide = 32000
max ops per semop call = 100
semaphore max value = 32767

The SEMMSL parameter determines the maximum number of semaphores per semaphore array (set).
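
On Linux these limits come from a single kernel parameter; a quick sketch, assuming /proc is available (the four values are SEMMSL, SEMMNS, SEMOPM and SEMMNI, matching the ipcs -ls output above):

-bash-3.00$ cat /proc/sys/kernel/sem
1100    32000   100     1100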

You can see the various semaphores assigned to different processes using

-bash-3.00$ ipcs -s

------ Semaphore Arrays --------
key        semid      owner      perms      nsems
0x00000000 393216     root      666        1
0x00000000 425985     root      666        1
0x00000000 458754     root      666        1
0x00000000 491523     root      666        1
0x0001ffb8 524292     root      666        1
0x000251c0 557061     root      666        1
0x000255a8 589830     root      666        1
0x00025990 622599     root      666        1
0x000278d1 655368     root      666        1
0x00027cb9 688137     root      666        1
0x000278d2 720906     root      666        1
0x00027cba 753675     root      666        1
0x26e55b68 3244044    oracle01  660        1004

You can also see the semaphores and shared memory segments assigned to Oracle processes using the Oracle utility sysresv.

-bash-3.00$ sysresv

IPC Resources for ORACLE_SID "tkr12r3m2" :
Shared Memory:
ID              KEY
1277956         0xf771930c
Semaphores:
ID              KEY
3244044         0x26e55b68
Oracle Instance alive for sid "tkr12r3m2"

If the instance is not alive, you can remove the shared memory segments and semaphores using the ipcrm command:

ipcrm -m <Shared Memory ID (shmid)>

ipcrm -s <Semaphore ID (semid)>
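
For the instance shown above, a sketch using the IDs reported by sysresv (run ipcrm only after confirming the instance is not alive):

ipcrm -m 1277956
ipcrm -s 3244044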

Hope this helps !!

“Touch Count” for Buffer Cache

Introduction

I was going through one of the interesting white papers on OraPub Inc.’s site, which talks about the mechanism, or algorithm, the Oracle engine uses to manage the buffers in the database buffer cache. The paper is written by Craig A. Shallahamer (craig@orapub.com). Thanks to the author for sharing such interesting information.

The author has explained the algorithm for managing buffers in the Oracle buffer cache. Obviously, all the buffers cannot stay in the buffer cache because of its limited size, so buffers which are older or less popular should leave the cache. The algorithm defines the criteria for a buffer to stay in or leave the buffer cache.

Initially, in older releases, Oracle started with the LRU (Least Recently Used) algorithm. Conceptually, an LRU list is nothing but a list of pointers to buffer blocks. When a buffer block is touched, its pointer moves towards the most recently used end of the LRU chain. Remember that it is the pointer which moves up the chain; buffer blocks are never moved. The general idea is to keep the more popular blocks in the buffer cache and ask the less popular blocks to leave.

But this algorithm had some issues, for example with a full table scan, which brings all the scanned buffers into the buffer cache and forces other important buffers to leave. If the buffer cache holds 500 blocks and a full table scan reads 600 blocks into it, all the popular blocks will be pushed out.

To overcome this problem, Oracle came up with a modified LRU algorithm. It takes care of full table scans by keeping their buffers at the LRU end of the LRU chain, and it allows only a limited number of such blocks to be placed there. This avoids flooding the buffer cache with a huge number of buffers from a full table scan. Along with this algorithm, Oracle implemented multiple buffer pools: the KEEP BUFFER POOL, RECYCLE BUFFER POOL and DEFAULT BUFFER POOL. As you may know, data which is used frequently and should remain in the buffer cache should be placed in the KEEP buffer pool, buffers that should not stay long in the buffer cache should be placed in the RECYCLE buffer pool, and if we don’t specifically mention any buffer pool during table creation, the object goes in the DEFAULT pool.

With this background, let’s move towards understanding the touch count algorithm.

Buffer Cache Management

Before understanding the complex logic, we should understand the need for the touch count algorithm. Why does Oracle need such a complex algorithm in place? We have already seen that the modified LRU algorithm takes care of full table scans. But think again and we have another issue: a large index range scan. Imagine a large number of index leaf blocks flowing into the buffer cache. This issue won’t be addressed by the modified LRU.

Also, with growing buffer cache sizes, ever higher performance requirements and the need for more control, Oracle introduced the touch count algorithm.

For smooth and effective buffer cache operation, a very fast and very flexible algorithm must be implemented that essentially forces every buffer to seriously earn the right to remain in buffer cache. Touch count algorithm makes it very difficult for buffer to simply remain in buffer cache. It’s almost cruel how many hurdles a buffer must continually jump to simply remain in buffer cache.

At the core of the touch count algorithm are the harsh requirements placed on each buffer not only to remain in the cache, but to remain in the MRU end of the LRU list. Conceptually, the touch count algorithm does this by assigning each buffer a counter. Each time a block is touched, its counter is incremented, and the value of this counter is used to assign popularity to the block. But it’s not straightforward. Keep reading !!
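
The touch count of individual buffers can actually be observed. A hedged sketch, assuming SYS access (the x$ tables are undocumented and their layout varies by release); x$bh.tch holds the touch count and x$bh.obj joins to dba_objects.data_object_id, with 'EMP' as a placeholder table name:

SQL> select o.object_name, b.dbablk, b.tch
  2  from x$bh b, dba_objects o
  3  where b.obj = o.data_object_id
  4  and o.object_name = 'EMP';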

Touch Count Algorithm

Oracle does a lot of nasty things to make it very difficult for a buffer to remain in the buffer cache. Let’s understand each concept.

Mid-point Insertion

Each LRU list is divided into two basic areas or regions: a hot region and a cold region. All buffers in the hot region are called hot buffers, and all buffers in the cold region are called cold buffers. There is a midpoint marker between the hot region and the cold region. This midpoint marker moves in order to make sure that the correct number of buffers is present in each region. The midpoint marker is not associated with any buffer.

By default, Oracle divides the LRU list equally between the hot and cold regions: 50% of the buffers are in the hot region and 50% in the cold region. However, we can change this default by changing the hidden parameter _db_percent_hot_default. If we increase this parameter, the number of buffers in the hot region, that is, the buffers above the midpoint, will increase.

When a server process reads a block from disk into the buffer cache, it is placed in the middle of the LRU chain, that is, between the hot region and the cold region. This is called mid-point insertion and is the fundamental concept in the touch count algorithm. These buffers have to earn their way into the hot region by increasing their touch count: frequent access to these blocks will increase their touch count, and the blocks will move into the hot region.

Touch Count Increment

Now we know where exactly a server process will put a new block when it reads it from disk into the buffer cache. Let’s now consider how the touch count increases for blocks.

Theoretically, whenever a buffer is touched, its touch count should increase, but practically Oracle does not let that happen. Think of a situation where some process needs a block very frequently for a short period, after which the block is no longer required. In that case the Oracle process might access the block so frequently that within a second its touch count grows huge, and the buffer becomes eligible to be placed, and to stay, in the hot region. Since this block won’t be used later, we don’t want it to stay just because it was heavily used for an initial period of time.

To overcome this problem, Oracle only allows a buffer’s touch count to be incremented, at most, once every 3 seconds. Again, this 3-second window is the Oracle default, which can be changed using the hidden parameter _db_aging_touch_time.

When a touch count is incremented, you might expect the buffer pointer to move, but the movement of the buffer pointer is independent of the touch count increment. Also, for any activity in a memory area, Oracle normally needs a latch to assure cache consistency. But there is an exception here !! For updating the touch count, Oracle does not use a latch, and the buffer block can be modified while the touch count is being incremented. More interesting still, two processes may increment the touch count to the same value; when this happens, Oracle assures that the worst that can happen is that the touch count is not actually incremented for every touch, and that no cache corruption will result.

Buffer Movement

As mentioned previously, when a buffer is brought into the buffer cache, it is placed in the middle, between the hot region and the cold region. Unlike the LRU algorithm, the touch count algorithm will not move the block to the hot region just because it has been touched. Yes, its touch count will probably be incremented.

When a server process is looking for a free buffer to place the contents of a datafile block into the buffer cache, or when DBWR is looking for dirty buffers to write to disk, and a buffer’s touch count is observed to be greater than 2, the buffer is moved to the MRU end of the LRU list. This default threshold for block movement is controlled by the hidden parameter _db_aging_hot_criteria.

Oracle’s touch count implementation is tough on buffers !! When a buffer is moved to the MRU end of the LRU list, its touch count is set to zero. So this buffer, which was just brought into the hot region, has its touch count reset to 0 immediately. For the block to remain in the hot region, its touch count must be incremented significantly. So if the block really is accessed frequently, its touch count will increase automatically and it will survive in the hot region.

Hot and Cold Movement

If a buffer is moved from the cold region to the hot region, the midpoint marker has to shift to keep the correct number of blocks in the hot and cold regions. So one block from the hot region, the least recently touched one at the LRU end of the list, will be forced into the cold region. When this happens, the touch count of that block is reset to 1; even if the buffer’s touch count was 250, after moving to the cold region it is reset to 1. This threshold-crossing touch count reset value is controlled by the instance parameter _db_aging_cool_count. This means the buffer must be touched all over again to make its way back to the hot region. This is how the blocks are moved and managed in the buffer cache.
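
If you want to check the current values of the hidden parameters mentioned above, a sketch assuming SYS access (x$ksppi and x$ksppcv are undocumented; the underscores must be escaped in LIKE):

SQL> select a.ksppinm name, b.ksppstvl value
  2  from x$ksppi a, x$ksppcv b
  3  where a.indx = b.indx
  4  and (a.ksppinm like '\_db\_aging%' escape '\'
  5  or a.ksppinm = '_db_percent_hot_default');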

For more information on performance tuning buffer cache using the above parameters check the reference section below.

References:

ALL ABOUT ORACLE’S TOUCH COUNT ALGORITHM – Craig A. Shallahamer (Version 4a, January 5, 2004)

Identifying Locks on Objects using V$LOCKS

Ever wondered what to do when a user comes to you saying that he/she cannot run a DML statement which should normally complete within a second?
We suspect 2 reasons for this:
1) The database is terribly slow (which cannot be the case for processing a simple update)
2) Someone is holding an exclusive lock on the object which the user is trying to update (quite possible)

There can be many more reasons we can find upon further investigation. In this post we will see how to investigate reason 2: whether some other user is holding a lock on the object which this user is trying to modify.

Let’s take a simple scenario.

session 1:

SQL> create table test (col1 number, col2 varchar2(5));

Table created.

SQL> insert into test values (1,'a');

1 row created.

SQL> insert into test values (2,'b');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from test for update;

COL1 COL2
---------- -----
1 a
2 b

Session 2:

SQL> update test set col2='a' where col1 = 1;

The above session 2 will hang !!!


SQL> select * from v$lock;

ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
00000003BFD5D868 00000003BFD5D888         54 CF          0          0          2          0     669613          0
00000003BFD5D900 00000003BFD5D920         54 XR          4          0          1          0     669625          0
00000003BFD5DA30 00000003BFD5DA50         54 RS         25          1          2          0     669613          0
00000003BFD5DAC8 00000003BFD5DAE8         35 TX     196652        882          0          6        344          0
00000003BFD5DB60 00000003BFD5DB80         53 TS          3          1          3          0     669599          0
00000003BFD5DBF8 00000003BFD5DC18         55 RT          1          0          6          0     669613          0
00000003BFD5DDC0 00000003BFD5DDE0         57 MR          1          0          4          0     669601          0
00000003BFD5DE58 00000003BFD5DE78         57 MR          2          0          4          0     669601          0
00000003BFD5DEF0 00000003BFD5DF10         57 MR          3          0          4          0     669601          0
00000003BFD5DF88 00000003BFD5DFA8         57 MR          4          0          4          0     669601          0
00000003BFD5E020 00000003BFD5E040         57 PW          1          0          3          0     669599          0

ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
00000003BFD5E3C8 00000003BFD5E3E8         57 MR         81          0          4          0     669593          0
00000003BE50B9B8 00000003BE50B9E0         49 TM      21837          0          3          0        374          0
00000003BE50BAB8 00000003BE50BAE0         35 TM      21837          0          3          0        344          0
00000003BDC81138 00000003BDC812C0         49 TX     196652        882          6          0        374          1

15 rows selected.

If a session is blocking another session, you will see BLOCK = 1 for that session. So in our case SID=49 is blocking some other session. We can also find the session which got blocked because of SID=49.
There are 2 columns, ID1 and ID2. The values of ID1 and ID2 for SID=49 will match those of some other SID in the v$lock output. If you look carefully, in our case they match SID=35. So SID 35 is the session which got blocked because of SID=49. Also, the session which gets blocked will not be able to get its request processed, so you will see that its REQUEST column has a value > 0.
So from the above analysis we can say that SID 49 is blocking SID 35.

We can directly write a query which will give the required output.

SQL> select a.SID "Blocking Session", b.SID "Blocked Session"
2  from v$lock a, v$lock b
3  where a.SID != b.SID
4  and a.ID1 = b.ID1
5  and a.ID2 = b.ID2
6  and b.request > 0
7  and a.block = 1;

Blocking Session Blocked Session
---------------- ---------------
49              35
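
On Oracle 10g and later, v$session gives the same answer more directly through the blocking_session column; a small sketch:

SQL> select sid, blocking_session
  2  from v$session
  3  where blocking_session is not null;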

Let’s understand the rest of the columns in the v$lock view here.

ID1 and ID2 -> These represent the rollback segment and transaction table entries for that transaction. When session 1 acquired the lock, it got a rollback segment and a transaction table entry. When another session requested the same row, Oracle tried to generate a CR (consistent read) image of it by applying the rollback segment data, but since there was an exclusive lock, it was not able to proceed. Unless the first session releases the lock, the second session cannot proceed. So the second session shows the same values of ID1 and ID2 as session 1, because session 2 is using the same rollback segment information to make a CR copy.

TYPE -> This column gives the type of lock obtained on the table in which the data was locked. For more information on lock types, check the definition of v$lock in the Oracle reference. For a session to change data in a table, it has to acquire a TX lock. This is the transaction enqueue lock.

LMODE -> This is the mode of the lock. It can have values from 0 to 6, 6 being the most restrictive and 0 being the least restrictive. When a session obtains a lock in mode 6, that means it has obtained an exclusive lock and no other session is allowed to update the data. So for SID=49 we can see here that the lock mode is exclusive (6). For more information on lock modes, you can check the v$lock definition in the Oracle reference.

REQUEST -> This column represents the lock mode requested by a blocked session. The value in this column gets updated only when the session is blocked. For example, in our case SID=35 is being blocked, so its LMODE column shows a value of 0, but the REQUEST column shows a value of 6. That means SID 35 has requested lock mode 6, which has not yet been granted to it.

Sometimes, if a session requests an exclusive lock (mode 6), it might not get it because there are no Interested Transaction List (ITL) slots available in the block in which the user wants to change data. In this case the session will wait in mode 4 (share mode) for some time, and as soon as an ITL slot is available, it will obtain the lock in mode 6.

Also in the above rows, you can see that for SID 49 and 35 there are 2 more rows, where the TYPE column shows ‘TM’. These are the DML (table-level) enqueues, acquired in lock mode 3 (row exclusive). This lock prevents any DDL activity on the table.
We can find the object name by taking the ID1 column value from these rows containing the TM lock: 21837 in our case.

SQL> select object_name from dba_objects where object_id=21837;

OBJECT_NAME
--------------
TEST

We can even get the row which is being blocked by the transaction, using v$session.

SQL> select row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#
2  from v$session where sid=35;

ROW_WAIT_OBJ# ROW_WAIT_FILE# ROW_WAIT_BLOCK# ROW_WAIT_ROW#
------------- -------------- --------------- -------------
21837              1           45082             0

The above 4 components are the components of a ROWID, and we can construct the ROWID from these components using the DBMS_ROWID package.

SQL> select dbms_rowid.rowid_create (1, ROW_WAIT_OBJ#, ROW_WAIT_FILE#, ROW_WAIT_BLOCK#, ROW_WAIT_ROW#)
2  from v$session where sid=35;

DBMS_ROWID.ROWID_C
------------------
AAAFVNAABAAALAaAAA

Now we can check if this was the row which the blocked session was trying to update:

SQL> select * from test where rowid = 'AAAFVNAABAAALAaAAA';

COL1 COL2
---------- -----
1 a

This was the row the blocked session was trying to update.

Hope this helps !!

Rolling Back Previous Autoconfig Run

In case your autoconfig run has created some issues in your environment and you want to roll back the autoconfig run, you can do so by running the restore.sh script.

For every autoconfig run, autoconfig creates a directory “MMDDhhmm” under $APPL_TOP/admin/$TWO_TASK/out. Inside this directory are backup copies of the files autoconfig changed, taken before the changes were made, along with a script called restore.sh. This script copies the backup files back to their original locations, which has the effect of rolling back the autoconfig run.

(appmgr02) 06190950 - -bash $ pwd
/SLOTS/slot02/appmgr/PQDC2MS1appl/admin/PQDC2MS1/out/06190950
(appmgr02) 06190950 - -bash $ restore.sh

Hope this helps !!

Running Autoconfig in Test Mode

Before running autoconfig in production to propagate your changes, it’s usually a good idea to run autoconfig in test mode. This reduces the risk and lets you understand the impact better. The AD tools provide a script to run autoconfig in test mode. This script creates a report which lists all the files and profile options that are going to change when you run AutoConfig.

Below is the sample run for the same.

(appmgr02) bin - -bash $ adchkcfg.sh
Enter the full path to the Applications Context file:
/slot02/appmgr/PQDC2MS1appl/admin/PQDC2MS1.xml
Enter the APPS user password:

AutoConfig is running in test mode and building diffs…

AutoConfig will consider the custom templates if present.
Using APPL_TOP location     : /slot02/appmgr/PQDC2MS1appl
Classpath                   : /local/java/jdk1.4.2_04/jre/lib/rt.jar:/local/java/jdk1.4.2_04/lib/dt.jar:/local/java/jdk1.4.2_04/lib/tools.jar:/slot02/appmgr/PQDC2MS1comn/java/appsborg2.zip:/slot02/appmgr/PQDC2MS1comn/java

Using Context file          : /slot02/appmgr/PQDC2MS1appl/admin/PQDC2MS1/out/06190701/PQDC2MS1.xml

Context Value Management will now update the test Context file
Updating test Context file…COMPLETED

[ Test mode ]
No uploading of Context File and its templates to database.

Testing templates from all of the product tops…
Testing AD_TOP……..COMPLETED
Testing FND_TOP…….COMPLETED
Testing ICX_TOP…….COMPLETED
Testing IEO_TOP…….COMPLETED
Testing ABM_TOP…….COMPLETED
Testing EMS_TOP…….COMPLETED
Testing ECX_TOP…….COMPLETED
Testing BIS_TOP…….COMPLETED
Testing GL_TOP……..COMPLETED
Testing AMS_TOP…….COMPLETED
Testing CCT_TOP…….COMPLETED
Testing WSH_TOP…….COMPLETED
Testing CLN_TOP…….COMPLETED
Testing OKE_TOP…….COMPLETED
Testing OKL_TOP…….COMPLETED
Testing OKS_TOP…….COMPLETED
Testing CSF_TOP…….COMPLETED
Testing XNC_TOP…….COMPLETED
Testing IGS_TOP…….COMPLETED
Testing IBY_TOP…….COMPLETED
Testing PA_TOP……..COMPLETED
Testing JTF_TOP…….COMPLETED
Testing MWA_TOP…….COMPLETED
Testing CN_TOP……..COMPLETED
Testing CSI_TOP…….COMPLETED
Testing WIP_TOP…….COMPLETED
Testing CSE_TOP…….COMPLETED
Testing EAM_TOP…….COMPLETED
Testing IMT_TOP…….COMPLETED
Testing FTE_TOP…….COMPLETED
Testing ONT_TOP…….COMPLETED
Testing AR_TOP……..COMPLETED
Testing AHL_TOP…….COMPLETED
Testing OZF_TOP…….COMPLETED
Testing IES_TOP…….COMPLETED
Testing CSD_TOP…….COMPLETED
Testing IGC_TOP…….COMPLETED

Differences text report is located at: /SLOTS/slot02/appmgr/PQDC2MS1appl/admin/PQDC2MS1/out/06190702/cfgcheck.txt

Generating Profile Option differences report…COMPLETED
Generating File System differences report……COMPLETED
Differences html report is located at: /slot02/appmgr/PQDC2MS1appl/admin/PQDC2MS1/out/06190702/cfgcheck.html

Differences Zip report is located at: /slot02/appmgr/PQDC2MS1appl/admin/PQDC2MS1/out/06190702/ADXcfgcheck.zip

AutoConfig completed successfully.
The log file for this session is located at: /slot02/appmgr/PQDC2MS1appl/admin/PQDC2MS1/log/06190701/adconfig.log

Once autoconfig is run in test mode, it generates a zip file (ADXcfgcheck.zip) as shown above. You can copy the zip file to the $OA_HTML location and unzip it there. You can then view the report using the following URL:

http://hostname:port/OA_HTML/cfgcheck.html

Hope this helps !!

Interested Transaction List (ITL)

Introduction

This post gives a brief idea of how Oracle locks rows on behalf of transactions. Before going further on locking, it will be good for you to check the previous post on transaction management to understand the isolation levels of Oracle. Here we will discuss an important area present inside the Oracle block: the ITL (Interested Transaction List).

Imagine the huge amount of data present in an Oracle database, with transactions continuously hitting the database to get at that data. If the locking and concurrency system of Oracle were centralized, imagine how bad the performance of the database would be: at this level of scalability it is impossible to centralize the locking mechanism. That’s the reason Oracle made the intelligent decision to decentralize the locking mechanism. In an Oracle database we don’t have a central lock manager; locks on data are managed at the block level. Every block is responsible for providing the locks on the rows it maintains. This keeps up the performance of the database.

Now let’s consider a database that is up and running, and a transaction that hits one of its data blocks. How does the data block grant the locks? Where will it store the locking information for the rows the transaction is asking for? Here comes the structure: the Interested Transaction List (ITL). This is a small structure in the Oracle block which holds information about the list of transactions interested in some of the rows of that block; that’s why it’s called the Interested Transaction List. The ITL is present in the variable portion of the Oracle block. To understand the exact location of the ITL, let’s have a brief look at the structure of a database block.

An Oracle data block is divided into 3 major portions:

  1. Oracle Fixed size header
  2. Oracle Variable size header
  3. Oracle Data content space

Here, the Oracle fixed size header is at the top of the data block, followed by the Oracle variable size header, and the left-over space is for the Oracle data content, as shown in the following diagram.

At the end we have block consistency checking. Variable header information grows from the top down (just below the fixed header) if necessary, and rows are inserted from the bottom up (just above the tail). The ITL (Interested Transaction List) resides in the variable portion of the data block header. It is this portion of the data block which holds the locking information.

This variable portion of the data block contains slots for transactions to put the locking information. When a row in the block is locked for the first time, the transaction places a lock in one of the slots with the rowid of the row that is locked. In other words, the transaction makes it known that it is interested in the row (hence the name “Interested Transaction List”). When the same transaction or another one locks another row, the information is stored in another slot, and so on.

The next logical question that comes up is: how many slots are typically available? During table creation, the INITRANS parameter defines how many slots are initially created in the ITL. When transactions exhaust all the available slots and a new transaction comes in to lock a row, the ITL grows to create another slot. The ITL can grow up to the number defined by the MAXTRANS parameter of the table, provided there is space in the block. Nevertheless, if there is no more room in the block, even if MAXTRANS is high enough, the ITL cannot grow (see the sketch below).
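
A small sketch of setting these parameters (the table name is illustrative; note that from 10g onwards MAXTRANS is deprecated and Oracle silently uses 255):

SQL> create table test_itl (col1 number) initrans 5 maxtrans 11;

Table created.

SQL> alter table test_itl initrans 10;

Table altered.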

What if there are not enough slots available in the ITL? What will happen? Obviously, the new transaction asking for a lock on the rows has to wait until an existing transaction finishes and frees up a slot. This is called the ITL wait event. Once the existing transaction finishes and frees up the slot, the new transaction can proceed.

ITL Wait in detail

Let’s see this description of the wait in action. Assume our table has INITRANS of one and MAXTRANS 11. A typical data block right after the creation of the table will look like figure 1 below.

Since INITRANS is one, there is only one slot for the ITL. The rest of the block is empty. Now we insert three rows into the table. These will go into this block, and the block will look like figure 2.
Note how the empty space is reduced. At this point, a transaction called Txn1 updates Row1, but does not commit. This locks Row1, and the transaction places the lock in slot number one of the ITL, as shown in figure 3.

Then another transaction, Txn2, updates the row Row2 and wants to lock the row. However, there are no more slots in the ITL available to service the transaction. The MAXTRANS entry is 11, meaning the ITL can grow up to 11 slots and the block has empty space. Therefore, ITL can grow by another slot and Slot number two is created and allocated to Txn2 (refer to figure 4).

Now the empty space in the block is severely limited, and it will not be able to fit another ITL slot. If at this time another transaction comes in to update row three, it must have a free slot in the ITL. MAXTRANS is 11 and currently only two slots have been created, so another one is allowed; but since there is no room in the block to grow, the slot can’t be created. Therefore, Txn3 has to wait until one of the other transactions rolls back or commits and the slot held by it becomes free. During this time the session experiences an ITL wait event, as seen in the view V$SESSION_WAIT.
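
A hedged way to see which segments are suffering from this wait, assuming a release with segment-level statistics (9iR2 or later):

SQL> select owner, object_name, value
  2  from v$segment_statistics
  3  where statistic_name = 'ITL waits'
  4  and value > 0;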

ITL structure in detail

Every ITL slot occupies 24 bytes of free space in the variable header portion of the data block. The maximum number of slots is controlled by the MAXTRANS parameter. However, the size of the variable part of the data block header cannot exceed 50% of the block size. This puts an upper limit on the number of ITL slots allowed: if we set a very high value for MAXTRANS, it cannot be honoured, because the size of the variable header would grow beyond this limit.

An ITL slot contains the unique transaction ID (XID), which is a pointer to an entry in the transaction table of the rollback segment. Any transaction that wants to execute a DML statement on rows present in the data block should get an ITL slot before it can proceed. An ITL entry consists of the transaction ID (XID), the undo block address (UBA), flags showing the status of the transaction, and a lock count which gives the number of rows locked by this transaction. The XID uniquely identifies the transaction and gives the undo information for that transaction. The following figure explains the relation between the ITL entries, the rows locked, and the transaction table in the rollback segment.

When the transaction commits, Oracle completes the bare minimum task it has to do, since Oracle is optimized for maximum throughput. This is called fast commit. It updates the status of the transaction in the transaction table of the rollback segment and does not visit the block again. So a commit just changes the flag in the transaction table, while the actual data values were already updated by the transaction while it was in progress. During the time the transaction is in progress, the ITL entry is called an open ITL. If the instance crashes before the transaction is committed or rolled back, transaction recovery is performed using the data from the rollback segments.

While a transaction is in progress and has an open ITL, if another transaction wants to select the data, then to get a consistent read (CR) the second transaction looks up the transaction table in the rollback segment to find the status of the first transaction. If that transaction is uncommitted, the second transaction creates a copy of the data present in memory (this data might have been changed by the first transaction) and applies the undo data present in the rollback segments to get consistent read data. If the status in the transaction table shows the transaction as committed, then the transaction is deemed committed. In that case the rows are no longer locked by the transaction, but the lock byte in the row header is not cleared until the next DML is performed on the block; the lock byte cleanout is piggybacked onto that DML operation. The block cleanout is thus deferred for some time, a consequence of fast commit. This cleanout operation closes the open ITLs of committed transactions and generates redo information, since block cleanout may involve updating the block with a new SCN. This is why we sometimes see redo generation for a select statement.

So, in short, when we commit a transaction, only the status flag in the transaction table of the rollback segment is updated, marking the transaction as committed. The next time a new DML or select statement checks the header of the data block, it sees that those rows were being updated and also gets the undo block address. When that new transaction checks the undo block address in the transaction table of the rollback segment, it finds that the transaction is committed (or rolled back), and then the new transaction updates the block header and clears the lock byte in the row header. This generates some redo information.

Hope this helps !!

References:

Interested Transaction List (ITL) Waits Demystified

Oracle Real Application Cluster Handbook – By K. Gopalakrishnan

Recovering database after loss of current controlfile

In this scenario we will be recovering the database after the loss of the current control file. We assume that the database is in archivelog mode and that all the datafiles, online redo log files and archive log files are intact.

The same steps hold good even if the database is in noarchivelog mode.

1) The database is in archivelog mode here, but the archivelog mode does not matter. These steps hold good even if the database is not in archivelog mode.

bash-2.05$ sqlplus "/as sysdba"

SQL*Plus: Release 10.2.0.1.0 - Production on Thu Jun 12 05:51:04 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /dy/oracle/product/db10g/archive/htmldb
Oldest online log sequence 71
Next log sequence to archive 73
Current log sequence 73
SQL> Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

2) Remove the control files from the file system.

bash-2.05$ ls
htmldb redo02.log system01.dbf test_control2
htmldb01.dbf redo03.log temp01.dbf undotbs01.dbf
redo01.log sysaux01.dbf test_control1
bash-2.05$ rm test_control*

After doing this, the database will crash after some time: whenever a background process next needs to update the control file, it won’t find it, and the DB will crash.

3) Create a "create control file" script. This can be obtained using the "alter database backup controlfile to trace" command, if it was used some time back. It’s a good practice to back up the control file to trace regularly, whenever the structure is changed (for example, whenever a datafile is added).
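
A minimal sketch of taking such a backup ahead of time (the file name in the AS clause is illustrative; without the AS clause the script is written to a trace file under user_dump_dest):

SQL> alter database backup controlfile to trace as '/tmp/crdb_trace.sql';

Database altered.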

The create control file command is stored in crdb.sql:

bash-2.05$ cat crdb.sql
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "HTMLDB" NORESETLOGS ARCHIVELOG
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/dy/oracle/product/db10g/dbf/redo01.log' SIZE 100M,
GROUP 2 '/dy/oracle/product/db10g/dbf/redo02.log' SIZE 100M,
GROUP 3 '/dy/oracle/product/db10g/dbf/redo03.log' SIZE 100M
-- STANDBY LOGFILE
DATAFILE
'/dy/oracle/product/db10g/dbf/system01.dbf',
'/dy/oracle/product/db10g/dbf/undotbs01.dbf',
'/dy/oracle/product/db10g/dbf/sysaux01.dbf',
'/dy/oracle/product/db10g/dbf/htmldb01.dbf'
CHARACTER SET US7ASCII
;

RECOVER DATABASE
ALTER SYSTEM ARCHIVE LOG ALL;
ALTER DATABASE OPEN;
ALTER TABLESPACE TEMPTS1 ADD TEMPFILE '/dy/oracle/product/db10g/dbf/temp01.dbf'
SIZE 20971520 REUSE AUTOEXTEND OFF;

4) Run the “Create control file” script crdb.sql.

bash-2.05$ sqlplus "/as sysdba"

SQL*Plus: Release 10.2.0.1.0 - Production on Thu Jun 12 05:55:43 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> @crdb.sql
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size 1984184 bytes
Variable Size 243276104 bytes
Database Buffers 822083584 bytes
Redo Buffers 6397952 bytes

Control file created.

Media recovery complete.

System altered.

Database altered.

Tablespace altered.

SQL>

We are done. The database is now up and running. Note that this is a complete recovery: there is no loss of data, because all the datafiles and redo log files were intact.

Hope this helps !!

Recovering database after losing active redo log file after archiving

We are considering the scenario where we have the database in archivelog mode and we have lost one of the active redo log files.

Let’s say that right now the database is working fine and we are just starting it. All the files (control files, datafiles and online redo log files) are intact.

1) Starting the database

SQL> startup
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size                  1984184 bytes
Variable Size             243276104 bytes
Database Buffers          822083584 bytes
Redo Buffers                6397952 bytes
Database mounted.
Database opened.

2) Archive log status

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /dy/oracle/product/db10g/archive/htmldb
Oldest online log sequence     0
Next log sequence to archive   1
Current log sequence           1
SQL> alter system switch logfile;

System altered.

SQL> /

System altered.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /dy/oracle/product/db10g/archive/htmldb
Oldest online log sequence     1
Next log sequence to archive   1
Current log sequence           3

3) Redo log group 2 is the current redo log, and the other two are active, meaning they are still required for crash recovery: their contents have not yet been flushed from the database buffer cache to the datafiles. DBWR is yet to do that. This happens when a checkpoint has just started and DBWR is yet to perform its job.

But at the same time you can see that these active redo log files have been archived by the ARCH process. This is important for us.

SQL> select * from v$log;

GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS
---------- ---------- ---------- ---------- ---------- --- ----------------
FIRST_CHANGE# FIRST_TIM
------------- ---------
1          1          2  104857600          1 YES ACTIVE
4384517 13-JUN-08

2          1          3  104857600          1 NO  CURRENT
4384519 13-JUN-08

3          1          1  104857600          1 YES ACTIVE
4379112 13-JUN-08

SQL> select * from v$logfile;

GROUP# STATUS  TYPE
---------- ------- -------
MEMBER
--------------------------------------------------------------------------------
IS_

1         ONLINE
/dy/oracle/product/db10g/dbf/redo01.log
NO

3         ONLINE
/dy/oracle/product/db10g/dbf/redo03.log
NO

GROUP# STATUS  TYPE
---------- ------- -------
MEMBER
--------------------------------------------------------------------------------
IS_

2         ONLINE
/dy/oracle/product/db10g/dbf/redo02.log
NO

4) Let’s remove one of the active redo log files.

bash-2.05$ rm /dy/oracle/product/db10g/dbf/redo03.log

5) Shut abort

bash-2.05$ sqlplus "/as sysdba"

SQL*Plus: Release 10.2.0.1.0 - Production on Fri Jun 13 02:16:22 2008

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> shut abort
ORACLE instance shut down.

6) Start the database. This will fail, as the instance won’t be able to recover because it cannot access redo log group 3.

SQL> startup
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size                  1984184 bytes
Variable Size             243276104 bytes
Database Buffers          822083584 bytes
Redo Buffers                6397952 bytes
Database mounted.
ORA-00313: open failed for members of log group 3 of thread 1
ORA-00312: online log 3 thread 1: '/dy/oracle/product/db10g/dbf/redo03.log'
ORA-27037: unable to obtain file status
SVR4 Error: 2: No such file or directory
Additional information: 3

7) Now we recover the database using "until cancel", because a plain "recover database" will not be able to recover, since the active redo log file is missing. Even though this file has been archived, the database won’t be aware of it.

SQL> recover database until cancel;
ORA-00279: change 4380738 generated at 06/13/2008 02:12:35 needed for thread 1
ORA-00289: suggestion :
/dy/oracle/product/db10g/archive/htmldb/ARCH_657248900_1_1.ARC
ORA-00280: change 4380738 for thread 1 is in sequence #1

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
ORA-00279: change 4384517 generated at 06/13/2008 02:15:30 needed for thread 1
ORA-00289: suggestion :
/dy/oracle/product/db10g/archive/htmldb/ARCH_657248900_1_2.ARC
ORA-00280: change 4384517 for thread 1 is in sequence #2
ORA-00278: log file
'/dy/oracle/product/db10g/archive/htmldb/ARCH_657248900_1_1.ARC' no longer
needed for this recovery

ORA-00279: change 4384519 generated at 06/13/2008 02:15:31 needed for thread 1
ORA-00289: suggestion :
/dy/oracle/product/db10g/archive/htmldb/ARCH_657248900_1_3.ARC
ORA-00280: change 4384519 for thread 1 is in sequence #3
ORA-00278: log file
'/dy/oracle/product/db10g/archive/htmldb/ARCH_657248900_1_2.ARC' no longer
needed for this recovery

ORA-00308: cannot open archived log
'/dy/oracle/product/db10g/archive/htmldb/ARCH_657248900_1_3.ARC'
ORA-27037: unable to obtain file status
SVR4 Error: 2: No such file or directory
Additional information: 3

ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/dy/oracle/product/db10g/dbf/system01.dbf'

Here it has applied all the archived log files, but the database is still not recovered completely. This is where you should supply the current online redo log file.

8) Recover until cancel again, and supply the current redo log file, which is redo log 2.

SQL> recover database until cancel;
ORA-00279: change 4384519 generated at 06/13/2008 02:15:31 needed for thread 1
ORA-00289: suggestion :
/dy/oracle/product/db10g/archive/htmldb/ARCH_657248900_1_3.ARC
ORA-00280: change 4384519 for thread 1 is in sequence #3

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/dy/oracle/product/db10g/dbf/redo02.log
Log applied.
Media recovery complete.

9) Once media recovery is complete, open the database in resetlogs mode.

SQL> alter database open resetlogs;

Database altered.

SQL>

We were able to recover here after losing the active redo log file because that active redo log file had been archived by the archiver process.

This is a complete recovery and there is no data loss.

Hope this helps !!