Patching history of a file – adfhrept.sql

Oracle Apps 11i and E-Business Suite R12 provide a wonderful utility to get the history of any file applied through patches. When a patch is applied, a higher version of a file gets implemented in the application. This utility gives the complete history of a file from the initial release till now, showing which patch introduced which version and on what date.

The file name is adfhrept.sql and is present under $AD_TOP/patch/115/sql

Following is the usage of the file:

adfhrept.sql <filename> \
<latest file version only? (Y/N)> \
<start date (mm/dd/rr or ALL)> \
<end date (mm/dd/rr or ALL)> \
<patchtype/ALL> <language/ALL> \
<appltop/ALL> \
<limit to forms server? (Y/N)> \
<limit to web server? (Y/N)> \
<limit to node server? (Y/N)> \
<limit to admin server? (Y/N)> \
<only patches that change DB? (Y/N)>

For example:

-bash-2.05b$ sqlplus apps/apps @adfhrept.sql adphst.odf N 12/01/00 12/31/08 ALL ALL ALL N N N N N

SQL*Plus: Release 8.0.6.0.0 – Production on Mon Jun 30 12:56:21 2008

(c) Copyright 1999 Oracle Corporation.  All rights reserved.

Connected to:
Oracle8i Enterprise Edition Release 8.1.7.4.0 – Production
With the Partitioning option
JServer Release 8.1.7.4.0 – Production

Writing data to report file adfilerep.xml…

Done writing data to report file adfilerep.xml

Note:
To view the XML report from browser
Copy file adfilerep.xml to OA_HTML top directory

Disconnected from Oracle8i Enterprise Edition Release 8.1.7.4.0 – Production
With the Partitioning option
JServer Release 8.1.7.4.0 – Production

This SQL creates an XML file in the same location from which you run the utility. You need to move the XML file to OA_HTML, where its corresponding XSLT file is present. You can then access the XML file using the URL: http://(hostname):(Apache port)/OA_HTML/adfilerep.xml
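For example, assuming the report was generated in your current directory and $OA_HTML points to the OA_HTML directory of your environment, the move is simply:

cp adfilerep.xml $OA_HTML/

After that, the report can be opened in a browser using the URL above.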

The rendered report lists each patch that changed the file, along with the file version it delivered and the date it was applied.

Hope this helps !!

Reference:

Metalink Note ID: 162498.1

AutoPatch modes, arguments and options

Here are the various options for applying a patch in Oracle Apps 11i and R12. adpatch comes with a lot of options that can be used, especially when we are applying a patch in production.

Modes of ADPATCH

If we talk about the modes of applying a patch, there are three modes:

1) Pre-Install Mode

Pre-install mode is used to update AD utilities before an upgrade and to apply family consolidated upgrade packs.
AutoPatch Pre-AutoInstall mode allows you to apply patches when your installation is missing database information and/or filesystem information that AutoPatch requires to run in normal mode.

Examples of when you would run AutoPatch in Pre-AutoInstall mode (and cannot run it in normal mode) include:

  • Prior to installing Oracle Applications for the first time
  • Prior to upgrading Oracle Applications to the latest release.
  • During an upgrade (to apply a bug fix that stopped your upgrade)

Applying a patch in pre-install mode performs the following tasks:

  • Version checking
  • File copy actions
  • Relink FND and AD executables
  • Save patch history information to file system

AutoPatch in pre-install mode will NOT:

  • Run SQL or EXEC commands
  • Generate files
  • Read product driver files
  • Apply maintenance pack

To apply a patch in pre-install mode, run adpatch preinstall=y

2) Test Mode

AutoPatch provides a test mode in which it tells you everything it would have done in applying a patch, but doesn’t actually apply the patch.

To run AutoPatch in Test Mode, you must include ‘apply=no’ on the AutoPatch command line. For example:

$ adpatch apply=no

Instead of performing an action, AutoPatch indicates that it is not performing the action because “Apply=No”. In general, AutoPatch lists each file it would have copied, generated, relinked, or executed. This shows you exactly what actions it would have performed.

AutoPatch test mode works the same as normal mode, with the following exceptions:

  • It does not copy any files from your patch directory into your installation area.
  • It does not copy any files from your APPL_TOP to JAVA_TOP or OAH_TOP.
  • It does not archive any object modules into your product libraries.
  • It does not generate any forms or reports.
  • It does not relink any executables.
  • It does not run any ‘sql’ or ‘exec’ commands.
  • It does not update the release version in the database.
  • It does not update the patch history file.

AutoPatch asks you the same initial questions in test mode as in normal mode. It performs the following actions to determine what it would have done if run in normal mode:

  • Reads and validates the patch driver file.
  • Reads product file driver files.
  • Extracts object modules from your product libraries (so it can perform version checking on the object modules it extracts).
  • Performs version checking.
  • Looks in the database to determine what ‘sql’ and ‘exec’ commands it would have run.

It is good practice to run the patch in test mode and analyze the results before applying the patch in normal mode.

3) Non-Interactive Mode

Starting in Release 11.5, you can run AutoPatch non-interactively.

Creating a defaults file

Before you can run AutoPatch non-interactively, you must first create an AutoPatch defaults file for your current environment.

Here is a simple way to create an AutoPatch defaults file for your current environment:

1. Specify defaultsfile=<New Defaults File Name> on the AutoPatch command line. The defaults file must be located under $APPL_TOP/admin/<SID>.

For example:

adpatch defaultsfile=$APPL_TOP/admin/testdb1/my_def.txt

2. Run AutoPatch up to the point where it asks you for the directory where your Oracle Applications patch has been unloaded. Then type ‘abort’ at this prompt.

3. Verify that your defaults file exists.

Once you have an AutoPatch defaults file for your current environment, you can run AutoPatch non-interactively.

Applying a single patch driver file non-interactively

Before applying any Oracle Applications patch, either interactively or non-interactively, you should read the README file (usually called readme.txt) supplied with the patch. You should also read the documentation supplied with the patch (if any).

It is possible to apply just a single patch driver file non-interactively using AutoPatch. Here is an example:

Assume the following:

  • defaults file is $APPL_TOP/admin/testdb1/def.txt
  • Applying copy driver for patch 123456, which is located in directory $APPL_TOP/patch/123456.
  • Using three parallel workers
  • AutoPatch log file name is cpy123456.log

The AutoPatch command line would be:

adpatch defaultsfile=$APPL_TOP/admin/testdb1/def.txt \
logfile=cpy123456.log \
patchtop=$APPL_TOP/patch/123456 \
driver=c123456.drv \
workers=3 \
interactive=no

If we do not specify any of the modes mentioned above and simply apply the patch using the adpatch command, the patch is applied in normal mode.

Having seen the modes of patch application, we will now look at the various arguments for applying a patch.

1) defaultsfile
Purpose: This option is used when running the patch in non-interactive mode. In that case we create a defaults file and pass it on the adpatch command line.
Default: none. No default file read or written.

2) logfile
Purpose: This is the name of the adpatch log file that will be written during patch application.
Default: none. Adpatch prompts for this value.

3) workers
Purpose: Specifies the number of workers to run. This value depends on the number of CPUs and other factors.
Default: none. Adpatch prompts for this value.

4) patchtop
Purpose: Top-level directory for the current patch. This is the directory created after unzipping the patch; its name is the patch number.
Default: none. Adpatch prompts for this value.

5) driver
Purpose: Name of the patch driver file. This comes with the patch and is present in patch directory.
Default – none. Adpatch prompts for this value.

6) restart
Purpose: To restart an existing session. Only valid when interactive=no is also specified.
Default: No

7) localworkers
Purpose: Used in Distributed AD to specify the number of workers to be run on the current machine. If you have a multi-node instance (for example RAC with a shared APPL_TOP), you can use this parameter to run the patch in parallel on multiple nodes: start a few workers on node 1, a few on node 2, and so on. To do this, start adpatch on one node with localworkers=<some value less than the total number of workers>, then run adctrl on the other node in distributed mode and start some more workers there (see the sketch after this list). This speeds up the patch and uses the available resources effectively.
Default: Value specified for workers.

8) printdebug
Purpose: To display extra debugging information.
Default: No.
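As a rough sketch of how Distributed AD might be driven (the worker counts and node split are illustrative; check the Distributed AD documentation for your release), with 8 workers split across two nodes that share an APPL_TOP:

On node 1, start the patch and run workers 1-4 locally:

adpatch workers=8 localworkers=4

On node 2, start AD Controller in distributed mode and assign the remaining workers (5-8) to that node when prompted:

adctrl distributed=y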

Now let's consider some common options that can be used with adpatch options=<value>.

1) checkfile
Purpose: To skip running exec, SQL, and exectier commands if they are recorded as already run. Indicates that Autopatch should run the command *only* if a certain file is newer than the version of it that was last run. The idea behind it is to reduce the duration of an Autopatch session by skipping actions that don’t really need to be performed. When used in the right manner, it can dramatically improve Autopatch performance, especially for big patches and/or long running actions.
Default: checkfile (use ‘nocheckfile’ to skip)

2) compiledb
Purpose: To compile invalid objects in the database after running actions in the database driver.
Default: compiledb (use ‘nocompiledb’ to skip)

3) compilejsp
Purpose: To compile out-of-date JSP files, if the patch has copy actions for at least one JSP file.
Default: compilejsp (use ‘nocompilejsp’ to skip)

4) copyportion
Purpose: To run commands found in a copy driver. This copies the higher-version files from the patch to the product tops.
Default: copyportion (Use ‘nocopyportion’ to skip. Use it only when mentioned in readme of patch)

5) databaseportion
Purpose: To run commands found in a database driver. This portion includes applying the files (like .sql, .pls, etc.) to the database.
Default: databaseportion (use ‘nodatabaseportion’ to skip. Use it only when mentioned in readme of patch)

6) generateportion
Purpose: To run commands found in a generate driver. This portion generates new executable files from the code copied by the patch. For example, it will generate new forms files (.fmx) from new .fmb files.
Default: generateportion (use ‘nogenerateportion’ to skip)

7) integrity
Purpose: To perform patch integrity checking. Tells adpatch whether to perform patch integrity checking, which verifies that the version of each file referenced in a copy action matches the version present in the patch.
Default: nointegrity (By default the integrity is not checked)

8) maintainmrc
Purpose: To maintain the MRC schema after running actions found in the database driver.
Default: maintainmrc (use ‘nomaintainmrc’ to skip)

9) autoconfig
Purpose: Tells adpatch to run Autoconfig after patch installation.
Default: autoconfig (use ‘noautoconfig’ to skip)

10) parallel
Purpose: To run actions that update the database (like SQL) or actions that generate files (like genform) in parallel.
Default: parallel (use ‘noparallel’ to skip)

11) prereq
Purpose: Tells adpatch whether to perform prerequisite patch checking prior to running patch driver files that contain actions normally found in the copy driver.
Default: prereq (use ‘noprereq’ to skip)

12) validate
Purpose: To connect to all registered Oracle Applications schemas at the start of the patch. Adpatch validates the passwords for each schema.
Default: novalidate (use ‘validate’ to validate schema passwords)
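For instance, several options can be combined as a comma-separated list on the adpatch command line (an illustrative combination only; use the options the patch readme calls for):

adpatch options=nocompiledb,nocompilejsp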

The following flags can be passed to adpatch:

1) hidepw
Purpose: This argument is used to hide the passwords in log files
Default: nohidepw

2) trace
Purpose: Tells the adpatch utility whether to log all database operations to a trace file
Default: notrace

3) logging
Purpose: Tells the adpatch utility whether to create indexes using the logging or nologging mode.
Default: logging
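For example, to hide passwords in the log file and trace database operations (again, an illustrative combination), the flags are also passed as a comma-separated list:

adpatch flags=hidepw,trace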

Hope this helps !!

Shared Memory and Semaphores – Oracle Database

Introduction:

Sometimes, if the database shutdown is not graceful, shared memory won't be released. In that case, if you try to log in and start the database, you will get the following error.

-bash-2.05b$ sqlplus /nolog
SQL*Plus: Release 8.1.7.0.0 – Production on Thu Jun 26 12:57:25 2008
(c) Copyright 2000 Oracle Corporation.  All rights reserved.
SQL> connect / as sysdba
ERROR:
ORA-03113: end-of-file on communication channel

To resolve such issues, you can check the shared memory segments and semaphores that are still held by the defunct Oracle processes. Before that, here is a brief overview of shared memory segments and semaphores in UNIX.

Shared Memory Segments

Shared memory allows processes to access common structures and data by placing them in shared memory segments. It’s the fastest form of Interprocess Communication (IPC) available since no kernel involvement occurs when data is passed between the processes. In fact, data does not need to be copied between the processes.

Oracle uses shared memory segments for the Shared Global Area (SGA), which is an area of memory shared by Oracle processes. The size of the SGA has a significant impact on Oracle's performance since it holds the database buffer cache and much more.

To see all shared memory settings, execute:

-bash-3.00$ ipcs -lm

—— Shared Memory Limits ——–
max number of segments = 4096
max seg size (kbytes) = 4194303
max total shared memory (kbytes) = 8388608
min seg size (bytes) = 1

The maximum size of a shared memory segment is decided by the SHMMAX kernel parameter.
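On Linux, SHMMAX is exposed as the kernel parameter kernel.shmmax (a value in bytes). Assuming root access, it can be checked and changed roughly like this; the value below is only an example and should be sized to hold your largest SGA:

cat /proc/sys/kernel/shmmax
sysctl -w kernel.shmmax=4294967295

Add kernel.shmmax to /etc/sysctl.conf to make the change persistent across reboots.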

ipcs -m will list the shared memory segments, their sizes, and which user owns each segment.

-bash-3.00$ ipcs -m

—— Shared Memory Segments ——–
key        shmid      owner      perms      bytes      nattch     status
0x0001ffb8 0          root      666        76         3
0x00025990 32769      root      666        8308       3
0x00027cb9 65538      root      666        132256     1
0x00027cba 98307      root      666        132256     1
0xf771930c 1277956    oracle01  660        2166358016 144

Semaphores

You can see semaphores setting using ipcs -ls

-bash-3.00$ ipcs -ls

—— Semaphore Limits ——–
max number of arrays = 1100
max semaphores per array = 1100
max semaphores system wide = 32000
max ops per semop call = 100
semaphore max value = 32767

The SEMMSL parameter determines the maximum number of semaphores per semaphore set (array).
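On Linux these four limits map to the single kernel parameter kernel.sem, in the order SEMMSL, SEMMNS, SEMOPM, SEMMNI. Assuming root access, they can be viewed and changed roughly like this (the values here simply mirror the output above):

cat /proc/sys/kernel/sem
sysctl -w kernel.sem="1100 32000 100 1100"

Add the same kernel.sem line to /etc/sysctl.conf to make it persistent.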

You can see the various semaphores assigned to different processes using

-bash-3.00$ ipcs -s

—— Semaphore Arrays ——–
key        semid      owner      perms      nsems
0x00000000 393216     root      666        1
0x00000000 425985     root      666        1
0x00000000 458754     root      666        1
0x00000000 491523     root      666        1
0x0001ffb8 524292     root      666        1
0x000251c0 557061     root      666        1
0x000255a8 589830     root      666        1
0x00025990 622599     root      666        1
0x000278d1 655368     root      666        1
0x00027cb9 688137     root      666        1
0x000278d2 720906     root      666        1
0x00027cba 753675     root      666        1
0x26e55b68 3244044    oracle01  660        1004

You can also see the semaphores and shared memory segments assigned to Oracle processes using the Oracle utility sysresv.

-bash-3.00$ sysresv

IPC Resources for ORACLE_SID “tkr12r3m2” :
Shared Memory:
ID              KEY
1277956         0xf771930c
Semaphores:
ID              KEY
3244044         0x26e55b68
Oracle Instance alive for sid “tkr12r3m2”

If the instance is not alive, you can remove the shared memory segments and semaphores using the ipcrm command:

ipcrm -m <Shared Memory ID (shmid)>

ipcrm -s <Semaphore ID (semid)>
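For example, using the IDs reported by sysresv above (run these only after confirming that the instance is really not alive):

ipcrm -m 1277956
ipcrm -s 3244044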

Hope this helps !!

“Touch Count” for Buffer Cache

Introduction

I was going through an interesting white paper from OraPub Inc. which talks about the mechanism, or rather the algorithm, that the Oracle engine uses to manage the buffers in the database buffer cache. The paper is written by Craig A. Shallahamer (criag@orapub.com). Thanks to the author for sharing such interesting information.

The author explains the algorithm for managing buffers in the Oracle buffer cache. Obviously, all buffers cannot stay in the buffer cache because of its limited size, so buffers which are older or less popular have to leave. The algorithm is all about the criteria a buffer must meet to stay in, or leave, the buffer cache.

Initially, in older releases, Oracle started with an LRU (Least Recently Used) algorithm. Conceptually, the LRU list is nothing but a list of pointers to buffer blocks. When a buffer block is touched, its pointer moves towards the most-recently-used end of the LRU chain. Remember that it is the pointer which moves up the chain; buffer blocks are never moved. The general idea is to keep the more popular blocks in the buffer cache and ask the less popular blocks to leave.

But this algorithm had some issues. For example, a full table scan !! This brings all of the scanned blocks into the buffer cache, forcing other important buffers to leave the cache. For example, if the buffer cache holds 500 blocks and a full table scan reads 600 blocks into the buffer cache, all the popular blocks will be pushed out.

To overcome this problem, Oracle came up with a modified LRU algorithm. The modified LRU algorithm takes care of full table scans by keeping their buffers at the LRU end of the LRU chain, and it allows only a limited number of such blocks to be placed there. This avoids flooding the buffer cache with a huge number of buffers from a full table scan. Along with this algorithm, Oracle also implemented multiple buffer pools: the KEEP buffer pool, the RECYCLE buffer pool and the DEFAULT buffer pool. As you may know, data which is used frequently and should remain in the buffer cache is placed in the KEEP pool, buffers that should not stay long in the buffer cache are placed in the RECYCLE pool, and if we do not specifically mention any buffer pool during table creation, the data goes in the DEFAULT pool.
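For reference, a segment is assigned to a pool through its storage clause; a minimal sketch (the table name and columns here are made up):

create table hot_lookup (id number, descr varchar2(30))
storage (buffer_pool keep);

alter table hot_lookup storage (buffer_pool recycle);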

With this background, let's move towards understanding the touch count algorithm.

Buffer cache Management

Before understanding the complex logic, we should understand the need for the touch count algorithm. Why does Oracle need such a complex algorithm in place? We have already seen that the modified LRU algorithm takes care of full table scans. But if you think again, we have another issue: a large index range scan. Imagine a large number of index leaf blocks flowing into the buffer cache. This issue is not addressed by the modified LRU algorithm.

Also, with growing buffer cache sizes, ever tighter performance requirements and the need for more control, Oracle introduced the touch count algorithm.

For smooth and effective buffer cache operation, a very fast and very flexible algorithm must be implemented, one that essentially forces every buffer to seriously earn the right to remain in the buffer cache. The touch count algorithm makes it very difficult for a buffer to simply remain in the buffer cache. It's almost cruel how many hurdles a buffer must continually jump just to remain there.

At the core of the touch count algorithm are the harsh requirements placed on each buffer not only to remain in the cache, but to remain at the MRU end of the LRU list. Conceptually, the touch count algorithm does this by assigning each buffer a counter. Each time a block is touched, its counter is incremented. The value of this counter is used to measure the popularity of the block. But it is not straightforward. Keep reading !!

Touch Count Algorithm

Oracle does a lot of nasty things to make it very difficult for a buffer to remain in the buffer cache. Let's understand each concept.

Mid-point Insertion

Each LRU list is divided into two basic areas or regions: a hot region and a cold region. All buffers in the hot region are called hot buffers and all buffers in the cold region are called cold buffers. There is a midpoint marker between the hot region and the cold region. This midpoint marker moves in order to make sure that the correct number of buffers is present in each region. The midpoint marker is not associated with any particular buffer.

By default, Oracle divides the LRU list equally between the hot and cold regions. That means 50% of the buffers will be in the hot region and 50% in the cold region. However, we can change the default by changing the hidden parameter _db_percent_hot_default. If we increase this parameter, the number of buffers in the hot region, that is, the buffers above the midpoint, will increase.
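If you are curious about the current values of these hidden parameters, they can be read with a query along the following lines when connected as SYS (x$ tables are undocumented, so treat this as a sketch, and do not change hidden parameters without guidance from Oracle Support):

select a.ksppinm parameter, b.ksppstvl value
from x$ksppi a, x$ksppcv b
where a.indx = b.indx
and a.ksppinm in ('_db_percent_hot_default', '_db_aging_touch_time',
                  '_db_aging_hot_criteria', '_db_aging_cool_count');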

When a server process reads a block from disk into the buffer cache, it is placed in the middle of the LRU chain, that is, between the hot region and the cold region. This is called mid-point insertion and is the fundamental concept of the touch count algorithm. These buffers have to earn their way into the hot region by increasing their touch count. Frequent access to such a block will increase its touch count, and the block will move into the hot region.

Touch Count Increment

Now we know where exactly a server process puts a new block when it reads it from disk into the buffer cache. Let's now consider how the touch count increases for blocks.

Theoretically, whenever a buffer is touched, its touch count should increase, but in practice Oracle does not let that happen. Think of a situation where a process needs some blocks very frequently for a short period, after which the blocks are no longer required. The Oracle process might access a block so frequently that within a second its touch count would grow huge, making it eligible to be placed in the hot region, and perhaps even to stay there. Since this block will not be used later, we do not want it to stay just because it was heavily used for an initial period of time.

To overcome this problem, Oracle only allows a buffer's touch count to be incremented, at most, once every 3 seconds. Again, this 3-second interval is the Oracle default, which can be changed using the hidden parameter _db_aging_touch_time.

You might expect the buffer pointer to move when a touch count is incremented, but the movement of the buffer pointer is independent of the touch count increment. Also, for any activity in a memory area, Oracle normally needs a latch to assure cache consistency. But there is an exception here !! For updating the touch count, Oracle does not use a latch, and the buffer block can be modified while the touch count is being incremented. More interesting still, two processes may increment the touch count to the same value; when this happens, Oracle assures that the worst that can happen is that the touch count is not actually incremented every time the buffer is touched, and that no cache corruption will result.

Buffer Movement

As mentioned previously, when a buffer is brought into the buffer cache, it is placed between the hot region and the cold region. Unlike the LRU algorithm, the touch count algorithm will not move the block to the hot region just because it is touched. Yes, its touch count will probably be incremented, but that is all.

When a server process is looking for a free buffer to place the contents of a datafile block into the buffer cache, or when DBWR is looking for dirty buffers to write to disk, and the buffer's touch count is observed to be greater than 2, the buffer is moved to the MRU end of the LRU list. This default threshold for block movement is controlled by the hidden parameter _db_aging_hot_criteria.

Oracle's touch count implementation is tough on buffers !! When a buffer is moved to the MRU end of the LRU list, its touch count is set to zero. So a buffer that is newly brought into the hot region has its touch count reset to 0 immediately. For this block to remain in the hot region, its touch count must be incremented significantly. If the block really is accessed frequently, its touch count will increase automatically and it will survive in the hot region.

Hot and Cold Movement

If a buffer is moved from the cold region to the hot region, the midpoint marker has to shift to keep the correct number of blocks in the hot and cold regions. So one block from the hot region, the least frequently used one at the LRU end of the LRU list, is forced into the cold region. When this happens, the touch count of that block is reset to 1. Even if the buffer's touch count was 250, after moving to the cold region its touch count is reset to 1. This threshold-crossing touch count reset value is controlled by the hidden instance parameter _db_aging_cool_count. This means the buffer must be touched all over again to make its way back to the hot region. This is how blocks are moved and managed in the buffer cache.
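If you want to observe touch counts yourself, the TCH column of x$bh exposes them. A rough sketch, run as SYS (x$bh is undocumented and its columns vary by version; the owner and table name below are only placeholders):

select b.obj data_object_id, b.tch touch_count
from x$bh b
where b.obj = (select data_object_id from dba_objects
               where owner = 'SCOTT' and object_name = 'TEST')
order by b.tch desc;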

For more information on tuning the buffer cache using the above parameters, check the reference section below.

References:

ALL ABOUT ORACLE’S TOUCH COUNT ALGORITHM – Craig A. Shallahamer (Version 4a, January 5, 2004)

Identifying Locks on Objects using V$LOCKS

Ever wondered what to do when a user comes to you saying that he/she cannot run a DML statement, one which should normally complete within a second?
We usually suspect two reasons:
1) The database is terribly slow (which cannot be the case for processing a simple update)
2) Someone is holding an exclusive lock on the object the user is trying to update (quite possible).

There can be many more reasons, which we can find upon further investigation. In this post we will see how to investigate reason 2: whether some other user is holding a lock on the object this user is trying to modify.

Let's take a simple scenario.

session 1:

SQL> create table test (col1 number, col2 varchar2(5));

Table created.

SQL> insert into test values (1,'a');

1 row created.

SQL> insert into test values (2,'b');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from test for update;

COL1 COL2
———- —–
1 a
2 b

Session 2:

SQL> update test set col2='a' where col1 = 1;

The above session 2 will hang !!!


SQL> select * from v$lock;

ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
00000003BFD5D868 00000003BFD5D888         54 CF          0          0          2          0     669613          0
00000003BFD5D900 00000003BFD5D920         54 XR          4          0          1          0     669625          0
00000003BFD5DA30 00000003BFD5DA50         54 RS         25          1          2          0     669613          0
00000003BFD5DAC8 00000003BFD5DAE8         35 TX     196652        882          0          6        344          0
00000003BFD5DB60 00000003BFD5DB80         53 TS          3          1          3          0     669599          0
00000003BFD5DBF8 00000003BFD5DC18         55 RT          1          0          6          0     669613          0
00000003BFD5DDC0 00000003BFD5DDE0         57 MR          1          0          4          0     669601          0
00000003BFD5DE58 00000003BFD5DE78         57 MR          2          0          4          0     669601          0
00000003BFD5DEF0 00000003BFD5DF10         57 MR          3          0          4          0     669601          0
00000003BFD5DF88 00000003BFD5DFA8         57 MR          4          0          4          0     669601          0
00000003BFD5E020 00000003BFD5E040         57 PW          1          0          3          0     669599          0

ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
00000003BFD5E3C8 00000003BFD5E3E8         57 MR         81          0          4          0     669593          0
00000003BE50B9B8 00000003BE50B9E0         49 TM      21837          0          3          0        374          0
00000003BE50BAB8 00000003BE50BAE0         35 TM      21837          0          3          0        344          0
00000003BDC81138 00000003BDC812C0         49 TX     196652        882          6          0        374          1

15 rows selected.

If a session is blocking another session, you will see BLOCK = 1 for that session. So in our case SID 49 is blocking some other session. We can also find the session that got blocked because of SID 49.
There are two columns, ID1 and ID2. The values of ID1 and ID2 for SID 49 will match those of some other SID in v$lock. If you look carefully, in our case they match SID 35. So SID 35 is the session that got blocked by SID 49. Also, the blocked session will not be able to get its request processed, so its REQUEST column will have a value > 0.
From the above analysis we can say that SID 49 is blocking SID 35.

We can directly write a query which will give the required output.

SQL> select a.SID "Blocking Session", b.SID "Blocked Session"
2  from v$lock a, v$lock b
3  where a.SID != b.SID
4  and a.ID1 = b.ID1
5  and a.ID2 = b.ID2
6  and b.request > 0
7  and a.block = 1;

Blocking Session Blocked Session
—————- —————
49              35
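A handy variation (a sketch only; verify the v$session column names on your version) is to join v$lock with v$session so the report also shows who owns each session:

select s1.username || ' (SID ' || s1.sid || ')' "Blocking Session",
       s2.username || ' (SID ' || s2.sid || ')' "Blocked Session"
from v$lock a, v$lock b, v$session s1, v$session s2
where a.sid != b.sid
and a.id1 = b.id1
and a.id2 = b.id2
and b.request > 0
and a.block = 1
and a.sid = s1.sid
and b.sid = s2.sid;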

Let's understand the rest of the columns in the v$lock view.

ID1 and ID2 -> These represent the rollback segment and transaction table entries for that transaction. So when session 1 acquired the lock, it got a rollback segment and a transaction table entry. When another session requested the same block, Oracle tried to generate a CR (consistent read) image for it by applying the rollback segment, but since there was an exclusive lock it was not able to proceed. Unless the first session releases the lock, the second session cannot proceed. That is why the second session shows the same ID1 and ID2 values as session 1: session 2 was using the same rollback segment entries to make a CR copy.

TYPE -> This column gives the type of lock obtained on the table in which the data was locked. For more information on lock types, check the definition of v$lock in the Oracle reference documentation. For a session to change data in a table, it has to acquire a TX lock; this is the transaction enqueue lock.

LMODE -> This is the mode of the lock. It takes values from 0 to 6, 6 being the most restrictive and 0 the least restrictive. When a session obtains a lock in mode 6, it has obtained an exclusive lock and no other session is allowed to update the data. For SID 49 we can see that the lock mode is exclusive (6). For more information on lock modes, you can check the v$lock definition in the Oracle reference documentation.

REQUEST -> This column represents the lock mode requested by a blocked session. The value in this column gets updated only when the session is blocked. For example, in our case SID 35 is being blocked, so its LMODE column shows a value of 0 but its REQUEST column shows a value of 6. That means SID 35 has requested lock mode 6, which has not yet been granted to it.

Sometimes, when a session requests an exclusive lock (mode 6), it might not get it because no Interested Transaction List (ITL) slot is available in the block in which the user wants to change data. In that case the session will wait in mode 4 (share mode) for some time, and as soon as an ITL slot becomes available, it will obtain the lock in mode 6.

Also, in the rows above you can see that for SID 49 and SID 35 there are two more rows where the TYPE column shows ‘TM’. These are the DML (table-level) enqueues and are acquired in lock mode 3 (Row Exclusive). This lock prevents any DDL activity on the table.
We can find the object name by taking the ID1 column value from the rows containing the TM lock, 21837 in our case.

SQL> select object_name from dba_objects where object_id=21837;

OBJECT_NAME
————–
TEST

We can even get the row on which the blocked session is waiting, using v$session.

SQL> select row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#
2  from v$session where sid=35;

ROW_WAIT_OBJ# ROW_WAIT_FILE# ROW_WAIT_BLOCK# ROW_WAIT_ROW#
————- ————– ————— ————-
21837              1           45082             0

The above four values are the components of a ROWID, and we can build the ROWID from these components using the DBMS_ROWID package.

SQL> select dbms_rowid.rowid_create (1, ROW_WAIT_OBJ#, ROW_WAIT_FILE#, ROW_WAIT_BLOCK#, ROW_WAIT_ROW#)
2  from v$session where sid=35;

DBMS_ROWID.ROWID_C
——————
AAAFVNAABAAALAaAAA

Now we can check whether this is the row the blocked session was trying to update:

SQL> select * from test where rowid = 'AAAFVNAABAAALAaAAA';

COL1 COL2
———- —–
1 a

This is the row the blocked session was trying to update.

Hope this helps !!

Rolling Back Previous Autoconfig Run

In case your AutoConfig run has created some issues in your environment and you want to roll back the AutoConfig run, you can do so by running the restore.sh script.

For every run, AutoConfig creates a directory named “MMDDhhmm” under $APPL_TOP/admin/$TWO_TASK/out. Inside this directory are backups of the files AutoConfig changed, taken before the changes were made, along with a script called restore.sh. This script copies the backup files back to their original locations, which has the effect of rolling back the AutoConfig run.

(appmgr02) 06190950 – -bash $ pwd
/SLOTS/slot02/appmgr/PQDC2MS1appl/admin/PQDC2MS1/out/06190950
(appmgr02) 06190950 – -bash $ restore.sh

Hope this helps !!

Running Autoconfig in Test Mode

Before running AutoConfig in production to propagate your changes, it is usually a good idea to run AutoConfig in test mode. This reduces risk and helps you understand the impact better. The AD tools provide a script to run AutoConfig in test mode. This script creates a report listing all the files and profile options that are going to change when you run AutoConfig.

Below is a sample run.

(appmgr02) bin – -bash $ adchkcfg.sh
Enter the full path to the Applications Context file:
/slot02/appmgr/PQDC2MS1appl/admin/PQDC2MS1.xml
Enter the APPS user password:

AutoConfig is running in test mode and building diffs…

AutoConfig will consider the custom templates if present.
Using APPL_TOP location     : /slot02/appmgr/PQDC2MS1appl
Classpath                   : /local/java/jdk1.4.2_04/jre/lib/rt.jar:/local/java/jdk1.4.2_04/lib/dt.jar:/local/java/jdk1.4.2_04/lib/tools.jar:/slot02/appmgr/PQDC2MS1comn/java/appsborg2.zip:/slot02/appmgr/PQDC2MS1comn/java

Using Context file          : /slot02/appmgr/PQDC2MS1appl/admin/PQDC2MS1/out/06190701/PQDC2MS1.xml

Context Value Management will now update the test Context file
Updating test Context file…COMPLETED

[ Test mode ]
No uploading of Context File and its templates to database.

Testing templates from all of the product tops…
Testing AD_TOP……..COMPLETED
Testing FND_TOP…….COMPLETED
Testing ICX_TOP…….COMPLETED
Testing IEO_TOP…….COMPLETED
Testing ABM_TOP…….COMPLETED
Testing EMS_TOP…….COMPLETED
Testing ECX_TOP…….COMPLETED
Testing BIS_TOP…….COMPLETED
Testing GL_TOP……..COMPLETED
Testing AMS_TOP…….COMPLETED
Testing CCT_TOP…….COMPLETED
Testing WSH_TOP…….COMPLETED
Testing CLN_TOP…….COMPLETED
Testing OKE_TOP…….COMPLETED
Testing OKL_TOP…….COMPLETED
Testing OKS_TOP…….COMPLETED
Testing CSF_TOP…….COMPLETED
Testing XNC_TOP…….COMPLETED
Testing IGS_TOP…….COMPLETED
Testing IBY_TOP…….COMPLETED
Testing PA_TOP……..COMPLETED
Testing JTF_TOP…….COMPLETED
Testing MWA_TOP…….COMPLETED
Testing CN_TOP……..COMPLETED
Testing CSI_TOP…….COMPLETED
Testing WIP_TOP…….COMPLETED
Testing CSE_TOP…….COMPLETED
Testing EAM_TOP…….COMPLETED
Testing IMT_TOP…….COMPLETED
Testing FTE_TOP…….COMPLETED
Testing ONT_TOP…….COMPLETED
Testing AR_TOP……..COMPLETED
Testing AHL_TOP…….COMPLETED
Testing OZF_TOP…….COMPLETED
Testing IES_TOP…….COMPLETED
Testing CSD_TOP…….COMPLETED
Testing IGC_TOP…….COMPLETED

Differences text report is located at: /SLOTS/slot02/appmgr/PQDC2MS1appl/admin/PQDC2MS1/out/06190702/cfgcheck.txt

Generating Profile Option differences report…COMPLETED
Generating File System differences report……COMPLETED
Differences html report is located at: /slot02/appmgr/PQDC2MS1appl/admin/PQDC2MS1/out/06190702/cfgcheck.html

Differences Zip report is located at: /slot02/appmgr/PQDC2MS1appl/admin/PQDC2MS1/out/06190702/ADXcfgcheck.zip

AutoConfig completed successfully.
The log file for this session is located at: /slot02/appmgr/PQDC2MS1appl/admin/PQDC2MS1/log/06190701/adconfig.log

Once AutoConfig is run in test mode, it generates a zip file (ADXcfgcheck.zip) as shown above. You can copy the zip file to the $OA_HTML location and unzip it there. You can then view the report using the following URL:

http://hostname:port/OA_HTML/cfgcheck.html
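For example, using the report location from the run above and assuming $OA_HTML is set in the environment:

cp /slot02/appmgr/PQDC2MS1appl/admin/PQDC2MS1/out/06190702/ADXcfgcheck.zip $OA_HTML/
cd $OA_HTML
unzip -o ADXcfgcheck.zip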

Hope this helps !!