WARNING: inbound connection timed out (ORA-3136)

I have faced the issue "WARNING: inbound connection timed out (ORA-3136)" in the alert log many times. If you are using Data Guard, you may also see that the DGMGRL status shows the configuration as failed because the standby is not able to sync up with the primary.

The "WARNING: inbound connection timed out (ORA-3136)" message in the alert log indicates that the client was not able to complete its authentication within the period of time specified by the parameter SQLNET.INBOUND_CONNECT_TIMEOUT.

From 10.2 onwards the default value of this parameter is 60 seconds. If the client is not able to authenticate within 60 seconds, the warning appears in the alert log and the client connection is terminated.

The root cause of this issue could be one of the following:

1. A malicious client is flooding the database server with connect requests, which consumes resources.

2. The server receives a valid client connection request, but the client takes longer than the default 60 seconds to authenticate.

3. The database server is heavily loaded and cannot finish the client logon within the specified timeout.

As a workaround to suppress just these warning messages, you can set SQLNET.INBOUND_CONNECT_TIMEOUT=0 in the $ORACLE_HOME/network/admin/sqlnet.ora file.
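For reference, a sketch of the relevant entries (the listener-side parameter INBOUND_CONNECT_TIMEOUT_<listener_name> is usually adjusted together with the sqlnet.ora one; 0 disables the timeout entirely, while a higher value such as 120 is a less drastic alternative):

# $ORACLE_HOME/network/admin/sqlnet.ora on the database server
SQLNET.INBOUND_CONNECT_TIMEOUT = 0

# $ORACLE_HOME/network/admin/listener.ora (LISTENER is the listener name)
INBOUND_CONNECT_TIMEOUT_LISTENER = 0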

Once you set this parameter the warnings in the alert log will go away. If you are using Data Guard, you may want to re-enable the databases on which you were getting these warnings.
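If the broker shows the standby as failed, a sketch of the DGMGRL commands to check and re-enable it (the database name 'standby_db' is only a placeholder):

DGMGRL> SHOW CONFIGURATION;
DGMGRL> ENABLE DATABASE 'standby_db';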

Hope this helps !!

Checking Database Growth Trend

One of the simplest ways to find the growth of the database is the v$datafile view. Following is a simple query, and its output, which gives the growth trend by month and year.


SQL>select to_char(CREATION_TIME,'RRRR') year, to_char(CREATION_TIME,'MM') month, round(sum(bytes)/1024/1024/1024) GB
 2  from   v$datafile
 3  group by  to_char(CREATION_TIME,'RRRR'),   to_char(CREATION_TIME,'MM')
 4  order by   1, 2;

YEAR MO         GB
---- -- ----------
2004 10         14
2004 11         66
2004 12         47
2005 01         15
2005 02         14
2005 03         34
2005 04         14
2005 05         30
2005 06          5
2005 07          3
2005 08          9

YEAR MO         GB
---- -- ----------
2005 10          4
2005 12         20
2006 01         11
2006 02         15
2006 03         14
2006 04         16
2006 05         13
2006 06          0
2006 07          4
2006 08         17
2006 09          5

YEAR MO         GB
---- -- ----------
2006 10          6
2006 11          0
2006 12         30
2007 01          4
2007 02          4
2007 03          6
2007 04          6
2007 05         12
2007 06         15
2007 07         62
2007 08         29

YEAR MO         GB
---- -- ----------
2007 09         18
2007 11         20
2007 12          8
2008 03          8
2008 05          4
2008 08         11
2008 09         14
2008 10          4
2008 11         36
2008 12         49
2009 01          8

YEAR MO         GB
---- -- ----------
2009 02         38
2009 03         35
2009 04         54
2009 05         73
2009 08          1
2009 09         12
2009 10         78
2009 11        105
2009 12        113
2010 01        109
2010 02         79

YEAR MO         GB
---- -- ----------
2010 03         84
2010 05        243
2010 06        225
2010 07        244
2010 08        152
2010 09         40
2010 10        176
2010 11         76
2010 12        107

64 rows selected.
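If a running total is also of interest, the same data can be rolled up cumulatively with an analytic SUM (a sketch along the same lines as the query above):

select to_char(creation_time,'RRRR-MM') month,
       round(sum(bytes)/1024/1024/1024) gb_added,
       round(sum(sum(bytes)) over (order by to_char(creation_time,'RRRR-MM'))/1024/1024/1024) gb_total
from   v$datafile
group  by to_char(creation_time,'RRRR-MM')
order  by 1;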


Analyze Index Vs. Gather_Index_Stats – Oracle Database 10g

One of the differences I noticed between the ANALYZE INDEX command and the GATHER_INDEX_STATS procedure is that when we use ANALYZE INDEX, it updates the index statistics with the number of leaf blocks equal to the number of leaf blocks below the HWM (high water mark).

However, if we use GATHER_INDEX_STATS, the statistics show the number of leaf blocks equal to the number of leaf blocks that actually hold data.

Here is the illustration for the same.

 

Create a table

SQL> create table t1 pctfree 99 pctused 1  
2  as  
3  select rownum id,  
4  trunc(100 * dbms_random.normal) val,  
5  rpad('X',100) padding  
6  from all_objects where rownum <= 10000;

Table created.

I intentionally created the table with PCTFREE 99 so that each block holds only 1 row, and the table uses 10000 blocks to store 10000 rows.
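The one-row-per-block claim can be verified with a quick sketch using DBMS_ROWID (it should return a value close to 10000 for this table):

select count(distinct dbms_rowid.rowid_block_number(rowid)) blocks_used
from   t1;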

Create an index

SQL> create index t1_i on t1(val) ;
Index created.

Check the number of leaf blocks

SQL> select BLEVEL, LEAF_BLOCKS, DISTINCT_KEYS from dba_indexes where index_name = 'T1_I';

 BLEVEL LEAF_BLOCKS DISTINCT_KEYS
---------- ----------- -------------         
1          21           563

We have around 1600 rows whose val > 100

SQL> select count(*) from t1 where val > 100;
 COUNT(*)
----------      
  1598

Delete rows from the table, which will also delete entries from the index, leaving some of the leaf blocks empty.

SQL> delete from t1 where val > 100;
1598 rows deleted.

Analyze index

SQL> analyze index t1_i validate structure;
Index analyzed.

Check again the number of leaf blocks

SQL> select BLEVEL, LEAF_BLOCKS, DISTINCT_KEYS from dba_indexes where index_name = 'T1_I';
 BLEVEL LEAF_BLOCKS DISTINCT_KEYS
---------- ----------- -------------         
 1          21           563

The number of leaf blocks reported is still 21.
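As a side note, ANALYZE ... VALIDATE STRUCTURE also populates the session-level INDEX_STATS view, which can be queried immediately after the analyze to see how many leaf entries are marked deleted (a sketch, not part of the original illustration):

select lf_blks, lf_rows, del_lf_rows
from   index_stats
where  name = 'T1_I';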

Gather stats on index

SQL> exec dbms_stats.gather_index_stats('ADVAITD_DBA','T1_I');
PL/SQL procedure successfully completed.

Check the number of leaf blocks again

SQL> select BLEVEL, LEAF_BLOCKS, DISTINCT_KEYS from dba_indexes where index_name = 'T1_I';
 BLEVEL LEAF_BLOCKS DISTINCT_KEYS
---------- ----------- -------------         
 1          18           379

Now the number of leaf blocks reported is 18. This plays quite a big role for the optimizer in creating plans for queries. If we don't have correct stats for the index, it may lead to expensive execution plans.

So it is better to use GATHER_INDEX_STATS rather than ANALYZE INDEX.

Hope this helps !!

Row Chaining and Migration

Overview

If you notice poor performance in your Oracle database, row chaining and migration may be one of several reasons, and we can prevent some of it by properly designing and/or diagnosing the database. Row migration and row chaining are two potential problems that can be prevented, and by suitably diagnosing them we can improve database performance. The main considerations are:

  • What is row migration and row chaining?
  • How to identify row migration and row chaining?
  • How to avoid row migration and row chaining?

Migrated rows affect OLTP systems which use indexed reads to read singleton rows. In the worst case, you can add an extra I/O to all reads, which would be really bad. Truly chained rows affect index reads and full table scans.

Oracle Block

The operating system block size is the minimum unit of operation (read/write) by the OS and is a property of the OS file system. While creating an Oracle database we have to choose the «Data Base Block Size» as a multiple of the operating system block size. The minimum unit of operation (read/write) by the Oracle database is this «Oracle block», not the OS block. Once set, the «Data Base Block Size» cannot be changed during the life of the database (except in the case of Oracle 9i). To decide on a suitable block size for the database, we take into consideration factors like the size of the database and the expected number of concurrent transactions. The database block has the following structure (within the whole database structure):

oracle_data_block

Header

The header contains general information about the data, i.e. the block address and the type of segment (table, index, etc.). It also contains information about the table and the address of the actual row that holds the data.

Free Space

Space allocated for future update/insert operations. Generally affected by the values of PCTFREE and PCTUSED parameters.

Data

Actual row data.

FREELIST, PCTFREE and PCTUSED

While creating or altering any table or index, Oracle uses the PCTFREE and PCTUSED storage parameters, together with the FREELIST structure, for space control.

  • PCTFREE – The percentage of space reserved for future update of existing data.
  • PCTUSED – The percentage of minimum space used for insertion of new row data. This value determines when the block gets back into the FREELISTS structure.
  • FREELIST – Structure where Oracle maintains a list of all free available blocks. Oracle will first search for a free block in the FREELIST and then the data is inserted into that block. The availability of the block in the FREELIST is decided by the PCTFREE value. Initially an empty block will be listed in the FREELIST structure, and it will continue to remain there until the free space reaches the PCTFREE value.

When the free space reaches the PCTFREE value the block is removed from the FREELIST, and it is re-listed in the FREELIST when the volume of data in the block falls below the PCTUSED value. Oracle uses the FREELIST to improve performance: for every insert operation, Oracle needs to search for free blocks only in the FREELIST structure instead of searching all blocks.
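For illustration, a hypothetical table created with explicit values for these parameters (the table name and the values are only examples):

create table orders_demo (
  order_id number,
  status   varchar2(10),
  notes    varchar2(400)
) pctfree 20 pctused 40;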

Row Migration

We will migrate a row when an update to that row would cause it to not fit on the block anymore (with all of the other data that exists there currently). A migration means that the entire row will move and we just leave behind the «forwarding address». So, the original block just has the rowid of the new block and the entire row is moved.

ora_row_migration_1.jpg

Full Table Scans are not affected by migrated rows

The forwarding addresses are ignored. We know that as we continue the full scan, we’ll eventually get to that row so we can ignore the forwarding address and just process the row when we get there. Hence, in a full scan migrated rows don’t cause us to really do any extra work — they are meaningless.

Index Read will cause additional IO’s on migrated rows

When we Index Read into a table, then a migrated row will cause additional IO’s. That is because the index will tell us «goto file X, block Y, slot Z to find this row». But when we get there we find a message that says «well, really goto file A, block B, slot C to find this row». We have to do another IO (logical or physical) to find the row.

Row Chaining

A row is chained when it is too large to fit into a single database block. For example, if you use a 4KB block size for your database and you need to insert a row of 8KB into it, Oracle will use 3 blocks and store the row in pieces. Some conditions that will cause row chaining are:

  • Tables whose row size exceeds the block size.
  • Tables with LONG and LONG RAW columns are prone to having chained rows.
  • Tables with more than 255 columns will have chained rows, as Oracle breaks wide tables up into pieces.

So, instead of just having a forwarding address on one block and the data on another, we have data on two or more blocks.

ora_row_chained_1.jpg

Chained rows affect us differently. Here, it depends on the data we need. If we had a row with two columns that was spread over two blocks, the query select column1 from table, where column1 is in Block 1, would not cause any «table fetch continued row». It would not actually have to get column2; it would not follow the chained row all the way out. On the other hand, if we ask for select column2 from table, and column2 is in Block 2 due to row chaining, then you would in fact see a «table fetch continued row».

Example

The following example was published by Tom Kyte; it shows row migration and chaining. We are using a 4k block size:

select name,value from v$parameter where name = 'db_block_size';

NAME   VALUE
————– ——
db_block_size  4096

Create the following table with CHAR fixed columns:

create table row_mig_chain_demo (
x int primary key,
a char(1000),
b char(1000),
c char(1000),
d char(1000),
e char(1000)
);

That is our table. The char(1000)’s will let us easily cause rows to migrate or chain. We used 5 columns a,b,c,d,e so that the total rowsize can grow to about 5K, bigger than one block, ensuring we can truly chain a row.

insert into row_mig_chain_demo (x) values (1);
insert into row_mig_chain_demo (x) values (2);
insert into row_mig_chain_demo (x) values (3);
commit;

We are not interested in seeing a, b, c, d, e – just fetching them. They are really wide, so we'll suppress their display.

column a noprint
column b noprint
column c noprint
column d noprint
column e noprint

select * from row_mig_chain_demo;

X
———-
1
2
3

Check for chained rows:

select a.name, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) = 'table fetch continued row';

NAME                             VALUE
----------------------------     ----------
table fetch continued row        0

Now that is to be expected, the rows came out in the order we put them in (Oracle full scanned this query, it processed the data as it found it). Also expected is the table fetch continued row is zero. This data is so small right now, we know that all three rows fit on a single block. No chaining.

ora_row_example_1.jpg

Demonstration of the Row Migration

Now, lets do some updates in a specific way. We want to demonstrate the row migration issue and how it affects the full scan:

update row_mig_chain_demo set a = 'z1', b = 'z2', c = 'z3' where x = 3;
commit;
update row_mig_chain_demo set a = 'y1', b = 'y2', c = 'y3' where x = 2;
commit;
update row_mig_chain_demo set a = 'w1', b = 'w2', c = 'w3' where x = 1;
commit;

Note the order of updates, we did last row first, first row last.

select * from row_mig_chain_demo;

X
———-
3
2
1

select a.name, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) = 'table fetch continued row';

NAME                                                                  VALUE
—————————————————————- ———-
table fetch continued row                                                 0

Interesting, the rows came out «backwards» now. That is because we updated row 3 first. It did not have to migrate, but it filled up block 1. We then updated row 2. It migrated to block 2 with row 3 hogging all of the space, it had to. We then updated row 1, it migrated to block 3. We migrated rows 2 and 1, leaving 3 where it started.

So, when Oracle full scanned the table, it found row 3 on block 1 first, row 2 on block 2 second and row 1 on block 3 third. It ignored the head rowid piece on block 1 for rows 1 and 2 and just found the rows as it scanned the table. That is why the table fetch continued row is still zero. No chaining.

ora_row_example_2.jpg

So, lets see a migrated row affecting the «table fetch continued row»:

select * from row_mig_chain_demo where x = 3;

X
———-
3

select a.name, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) = 'table fetch continued row';

NAME                                                                  VALUE
—————————————————————- ———-
table fetch continued row                                                 0

This was an index range scan / table access by rowid using the primary key.  We didn’t increment the «table fetch continued row» yet since row 3 isn’t migrated.

select * from row_mig_chain_demo where x = 1;

X
———-
1

select a.name, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) = 'table fetch continued row';

NAME                                      VALUE
————————— ———-
table fetch continued row       1

Row 1 is migrated, using the primary key index, we forced a «table fetch continued row».

Demonstration of the Row Chaining

update row_mig_chain_demo set d = 'z4', e = 'z5' where x = 3;
commit;

Row 3 no longer fits on block 1. With d and e set, the rowsize is about 5k, it is truly chained.

select x,a from row_mig_chain_demo where x = 3;

X
———-
3

select a.name, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) = 'table fetch continued row';

NAME                                                                  VALUE
—————————————————————- ———-
table fetch continued row                                                 1

We fetched column «x» and «a» from row 3 which are located on the «head» of the row, it will not cause a «table fetch continued row». No extra I/O to get it.

ora_row_example_3.jpg

select x,d,e from row_mig_chain_demo where x = 3;

select a.name, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) = 'table fetch continued row';

NAME                                                                  VALUE
—————————————————————- ———-
table fetch continued row                                                 2

Now we fetch from the «tail» of the row via the primary key index. This increments the «table fetch continued row» by one to put the row back together from its head to its tail to get that data.

Now let’s see a full table scan – it is affected as well:

select * from row_mig_chain_demo;

X
———-
3
2
1

select a.name, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) = 'table fetch continued row';

NAME                                                                  VALUE
—————————————————————- ———-
table fetch continued row                                                 3

The «table fetch continued row» was incremented here because of Row 3, we had to assemble it to get the trailing columns.  Rows 1 and 2, even though they are migrated don’t increment the «table fetch continued row» since we full scanned.

select x,a from row_mig_chain_demo;

X
———-
3
2
1

select a.name, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) = 'table fetch continued row';

NAME                                                                  VALUE
—————————————————————- ———-
table fetch continued row                                                 3

No «table fetch continued row» since we didn’t have to assemble Row 3, we just needed the first two columns.

select x,e from row_mig_chain_demo;

X
———-
3
2
1

select a.name, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) = 'table fetch continued row';

NAME                                                                  VALUE
—————————————————————- ———-
table fetch continued row                                                 4

But by fetching d and e, we incremented the «table fetch continued row». Note that this statistic alone does not tell us whether the rows are migrated or truly chained: we most likely have only migrated rows, but even if they were truly chained, a query would not increment the statistic as long as the columns it selects sit at the front of the row.

So, how can you decide if you have migrated or truly chained?

Count the last column in the table. That will force Oracle to construct the entire row.

select count(e) from row_mig_chain_demo;

COUNT(E)
———-
1

select a.name, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) = 'table fetch continued row';

NAME                                                                  VALUE
—————————————————————- ———-
table fetch continued row                                                 5

Analyze the table to verify its chain count:

analyze table row_mig_chain_demo compute statistics;

select chain_cnt
from user_tables
where table_name = 'ROW_MIG_CHAIN_DEMO';

CHAIN_CNT
———-
3

Three rows that are chained. Apparently, 2 of them are migrated (Rows 1 and 2) and one is truly chained (Row 3).

Total Number of «table fetch continued row» since instance startup?

The V$SYSSTAT view tells you how many times, since the system (database) was started you did a «table fetch continued row» over all tables.

sqlplus system/<password>

SELECT 'Chained or Migrated Rows = '||value
FROM v$sysstat
WHERE name = 'table fetch continued row';

Chained or Migrated Rows = 31637

You could have 1 table with 1 chained row that was fetched 31,637 times. You could have 31,637 tables, each with a chained row, each of which was fetched once. You could have any combination of the above.

Also, 31,637 may be good or may be bad; it is a function of:

  • how long the database has been up
  • how many rows this is as a percentage of total fetched rows.
    For example, if only 0.001% of your fetched rows are table fetch continued rows, who cares!

Therefore, always compare the total fetched rows against the continued rows.

select name,value from v$sysstat where name like '%table%';

NAME                                                                  VALUE
—————————————————————- ———-
table scans (short tables)                                           124338
table scans (long tables)                                              1485
table scans (rowid ranges)                                                0
table scans (cache partitions)                                           10
table scans (direct read)                                                 0
table scan rows gotten                                             20164484
table scan blocks gotten                                            1658293
table fetch by rowid                                                1883112
table fetch continued row                                             31637
table lookup prefetch client count                                        0
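To express the continued-row fetches as a ratio of all row fetches in a single query, something along these lines could be used (a sketch based on the statistic names shown above):

select round(100 * cont.value / (rowid_fetch.value + scan_rows.value), 4) pct_continued
from   v$sysstat cont, v$sysstat rowid_fetch, v$sysstat scan_rows
where  cont.name        = 'table fetch continued row'
and    rowid_fetch.name = 'table fetch by rowid'
and    scan_rows.name   = 'table scan rows gotten';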

How many Rows in a Table are chained?

The USER_TABLES view tells you, immediately after an ANALYZE (the column is null otherwise), how many rows in the table are chained.

analyze table row_mig_chain_demo compute statistics;

select chain_cnt,
round(chain_cnt/num_rows*100,2) pct_chained,
avg_row_len, pct_free , pct_used
from user_tables
where table_name = 'ROW_MIG_CHAIN_DEMO';

CHAIN_CNT PCT_CHAINED AVG_ROW_LEN   PCT_FREE   PCT_USED
———- ———– ———– ———- ———-
3         100        3691         10         40

PCT_CHAINED shows 100% which means all rows are chained or migrated.

How to avoid Chained and Migrated Rows?

Increasing PCTFREE can help to avoid migrated rows. If you leave more free space available in the block, then the row has room to grow. You can also reorganize or re-create tables and indexes that have high deletion rates. If tables frequently have rows deleted, then data blocks can have partially free space in them. If rows are inserted and later expanded, then the inserted rows might land in blocks with deleted rows but still not have enough room to expand. Reorganizing the table ensures that the main free space is totally empty blocks.
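For example, PCTFREE can be raised on an existing table with a simple ALTER TABLE (note that this only affects blocks formatted after the change; rows that are already migrated still need a reorganization such as the ALTER TABLE ... MOVE shown below):

alter table row_mig_chain_demo pctfree 20;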

The ALTER TABLE … MOVE statement enables you to relocate data of a nonpartitioned table or of a partition of a partitioned table into a new segment, and optionally into a different tablespace for which you have quota. This statement also lets you modify any of the storage attributes of the table or partition, including those which cannot be modified using ALTER TABLE. You can also use the ALTER TABLE … MOVE statement with the COMPRESS keyword to store the new segment using table compression.

  1. ALTER TABLE MOVE

     First count the number of rows per block before the ALTER TABLE MOVE:

     select dbms_rowid.rowid_block_number(rowid) "Block-Nr", count(*) "Rows"
     from row_mig_chain_demo
     group by dbms_rowid.rowid_block_number(rowid) order by 1;

     Block-Nr        Rows
     ---------- ----------
     2066          3

     Now, de-chain the table. The ALTER TABLE MOVE rebuilds the row_mig_chain_demo table in a new segment, specifying new storage parameters:

     ALTER TABLE row_mig_chain_demo MOVE
     PCTFREE 20
     PCTUSED 40
     STORAGE (INITIAL 20K
     NEXT 40K
     MINEXTENTS 2
     MAXEXTENTS 20
     PCTINCREASE 0);

     Table altered.

     Again count the number of rows per block after the ALTER TABLE MOVE:

     select dbms_rowid.rowid_block_number(rowid) "Block-Nr", count(*) "Rows"
     from row_mig_chain_demo
     group by dbms_rowid.rowid_block_number(rowid) order by 1;

     Block-Nr        Rows
     ---------- ----------
     2322          1
     2324          1
     2325          1

  2. Rebuild the Indexes for the Table

     Moving a table changes the rowids of the rows in the table. This causes indexes on the table to be marked UNUSABLE, and DML accessing the table using these indexes will receive an ORA-01502 error. The indexes on the table must be dropped or rebuilt. Likewise, any statistics for the table become invalid and new statistics should be collected after moving the table.

     analyze table row_mig_chain_demo compute statistics;

     ERROR at line 1:
     ORA-01502: index 'SCOTT.SYS_C003228' or partition of such index is in unusable state

     This is the primary key of the table, which must be rebuilt.

     alter index SYS_C003228 rebuild;
     Index altered.

     analyze table row_mig_chain_demo compute statistics;
     Table analyzed.

     select chain_cnt,
     round(chain_cnt/num_rows*100,2) pct_chained,
     avg_row_len, pct_free , pct_used
     from user_tables
     where table_name = 'ROW_MIG_CHAIN_DEMO';

     CHAIN_CNT PCT_CHAINED AVG_ROW_LEN   PCT_FREE   PCT_USED
     ---------- ----------- ----------- ---------- ----------
     1       33.33        3687         20         40

     If the table includes LOB column(s), this statement can be used to move the table along with the LOB data and LOB index segments that the user explicitly specifies. If not specified, the default is to not move the LOB data and LOB index segments.

Detect all Tables with Chained and Migrated Rows

Using the CHAINED_ROWS table, you can find out the tables with chained or migrated rows.

  1. Create the CHAINED_ROWS table

     cd $ORACLE_HOME/rdbms/admin
     sqlplus scott/tiger
     @utlchain.sql

  2. Analyze all or only your tables

     select 'analyze table '||table_name||' list chained rows into CHAINED_ROWS;'
     from user_tables
     /

     analyze table ROW_MIG_CHAIN_DEMO list chained rows into CHAINED_ROWS;
     analyze table DEPT list chained rows into CHAINED_ROWS;
     analyze table EMP list chained rows into CHAINED_ROWS;
     analyze table BONUS list chained rows into CHAINED_ROWS;
     analyze table SALGRADE list chained rows into CHAINED_ROWS;
     analyze table DUMMY list chained rows into CHAINED_ROWS;

     Table analyzed.

  3. Show the RowIDs for all chained rows

     This will allow you to quickly see how much of a problem chaining is in each table. If chaining is prevalent in a table, then that table should be rebuilt with a higher value for PCTFREE.

     select owner_name,
     table_name,
     count(head_rowid) row_count
     from chained_rows
     group by owner_name,table_name
     /

     OWNER_NAME                     TABLE_NAME                      ROW_COUNT
     ------------------------------ ------------------------------ ----------
     SCOTT                          ROW_MIG_CHAIN_DEMO                      1

Conclusion

Migrated rows affect OLTP systems which use indexed reads to read singleton rows. In the worst case, you can add an extra I/O to all reads which would be really bad. Truly chained rows affect index reads and full table scans.

  • Row migration is typically caused by UPDATE operations.
  • Row chaining is typically caused by INSERT operations.
  • SQL statements which create or query these chained/migrated rows will degrade in performance because of the extra I/O work.
  • To diagnose chained/migrated rows, use the ANALYZE command and query the V$SYSSTAT view.
  • To remove chained/migrated rows, rebuild the table with a higher PCTFREE using ALTER TABLE MOVE.

 

Oracle Database Lite 10g (OLite 10g)

Introduction

Oracle Database Lite 10g extends the power of grid computing to a mobile workforce with a complete, integrated, secure database for building, deploying, and managing mobile applications for a broad range of mobile devices. Oracle Database Lite 10g enables persistent access to critical information and applications without requiring continuous connectivity to back-end enterprise systems. As a result, users increase the efficiency, productivity, and responsiveness of mobile workforces while reducing costs and improving customer satisfaction.

Oracle Database Lite is a complete, integrated, and secure infrastructure to build, manage, and provision mobile database applications to a broad range of wireless devices. It is the leading platform to develop, deploy and manage mobile applications that store data locally on mobile devices (smartphones, PDAs, handheld computers, and traditional laptops) and synchronize data with central servers. Oracle Database Lite includes a data synchronization architecture that scales to thousands of mobile users, centralized management through an easy-to-use graphical user interface, support for Oracle Real Application Clusters for maximum high availability and failover, and revolutionary new mobile database development tools that increase developer productivity. With Oracle Lite, your business cost-effectively extends enterprise data to occasionally connected mobile users. Oracle Lite is a complementary product optimized to work with Oracle Database Standard or Enterprise Edition.

Usability

Employees are working away from their desks and require access to corporate data normally found only on desktop devices connected to enterprise networks. Wireless connections for mobile users offer the promise of remote access of enterprise data but persistent wireless connections are not always possible, practical or desirable.

Wireless connections are not always possible because mobile workers may be in an environment that does not have wireless coverage. Often it is not practical to use a persistent wireless connection because applications do not have a real time data access requirement to justify the costs of communications. There are situations where a wireless connection is not desirable because the very presence of a wireless connection could compromise an application or a user’s security.

Developers require an infrastructure with application services that enable the development, delivery and operation of secure, personalized applications to mobile or embedded devices.

Database Lite Components

Oracle Database Lite 10g is a part of Oracle's database family that extends the grid environment and makes your applications mobile. Oracle Database Lite describes a group of components that includes the Lite database, the Mobile Server for data synchronization, life cycle management, and a rapid application development environment.

Oracle Database Lite is a complete solution and includes more than a small database. The Oracle Lite database included in this solution is not an abbreviated version of the Oracle database but was designed from the ground up to be used with mobile applications in small and embedded devices.

untitled.jpg

Prerequisite Software for Lite

Since Oracle Database Lite stores and retrieves your information from an Oracle database, you must have a back-end Oracle database to run Lite. The Oracle database can be either Standard or Enterprise Edition, running a minimum version of 8.1.7 or higher.

Middle-Tier Option for Mobile Server

Oracle Database Lite uses a middle-tier to communicate between the clients and the back-end database. You must use one of the following as the middle-tier:

  • Stand alone OC4J – for development environments
  • Oracle Application Server – for production environments

Stand Alone OC4J

Stand Alone OC4J is automatically installed with Oracle Database Lite and this provides a standalone Mode for development environments.

Mobile Server in standalone mode uses the standalone version of Oracle Containers for J2EE (OC4J). This is the preferred configuration for development and testing but it will
support only a limited number of concurrent users.

Oracle Application Server

Production environments require Oracle Application Server 10g. The application server is not installed with Oracle Database Lite and must be installed before installing the Oracle Database Lite Mobile Server.

The Oracle Application Server production environment supports large numbers of users and provides caching, load balancing, centralized management, security, and Single Sign-On support.

Oracle Lite Database

The Oracle Lite Database is a small footprint, Java enabled, secure, relational database
management system created specifically for laptop computers, handheld computers, PDAs, and information appliances. Oracle Lite Database runs on Windows 2000/XP, Windows CE/Pocket PC, Symbian OS, and Embedded Linux.

Oracle Lite Database provides JDBC, ODBC, ADO.NET and Simple Object Data Access (SODA) APIs; SODA is an easy-to-use C++ interface that is optimized for the object-oriented and SQL functionality of Oracle Database Lite.

Installation of the Mobile Development Kit (MDK) installs the Oracle Lite Database and all its utilities on your development machine.

Data Synchronization

A user’s data is synchronized between Oracle Lite Database and an Oracle database server.

Synchronization is accomplished by invoking the Mobile sync client, mSync, which interacts with the Mobile Server. The Mobile Server uses synchronization objects such as users, publications, publication items, and subscriptions to process client and server data changes. This technique is referred to as a publish/subscribe model.

Synchronization Process

Oracle Database Lite contains a subset of data stored in the Oracle database. This subset is stored in snapshots in the Oracle Lite database. Unlike a base table, a snapshot keeps track of changes made to it in a change log. Users can make changes in the Oracle Lite database while the device is disconnected, and can synchronize with the Oracle database
server.

There are basically three types of publication items that can be used to define synchronization: fast refresh, complete refresh, and queue based. The most common method of synchronization is a fast refresh publication item, where changes are uploaded by the client and changes for the client are downloaded.

Meanwhile, a background process called the Message Generator and Processor (MGP) periodically collects the changes uploaded by all clients and applies them to database tables. It then composes new data, ready to be downloaded to each client during the next
synchronization, based on predefined subscriptions.

Oracle Lite Sync is a two step asynchronous process:

untitled1.jpg

  1. Sync moves updated rows from the client database to the In Queue and from the Out Queue to the client database.
  2. MGP (Message Generator Processor) applies In Queue changes to the base tables and composes changes from the base tables into the client Out Queues.

Until recently, most business applications have only been accessible from network tethered PCs in offices, away from where a business’ productive assets and employees are at work, its customers are serviced, and its operations performed. A new generation of lightweight and powerful mobile devices combined with inexpensive, widely available wireless data connections promise to redefine the reach of Enterprise applications. Oracle is committed to servicing the mobile computing needs of businesses worldwide with solutions and software platforms that offer the depth of feature, scalability, reliability and performance.

Please note that above discussion is taken from Oracle white paper.

Below I have given the steps for installing Olite 10g in E- Business suite environment.

Installing OLite 10g

For installing Oracle Database Lite 10g for a demo and to check out the product features, you need to download the required software from this link.

Once downloaded, it needs to be installed, very much like a regular database installation. I have given the brief installation steps below. As far as the DBA is concerned, the main responsibility lies in installation and management. Please follow the steps below to install this software successfully. Also, note that these are the installation steps for Linux, although the basic steps are almost the same on any platform.
Note that the discussion below is for installation on an R12 instance.

Once the dump is extracted from the zip file, and before starting the OUI, you need to set the following environment variables:

ORACLE_HOME
JAVA13_HOME
TNS_ADMIN

For this software you need to set the path for ORACLE_HOME. You can create a directory called olite10g and set the path of this directory as ORACLE_HOME.

JAVA13_HOME will be the location of your Java home. Check the path of your Java installation, either 1.3 or 1.4, and set the path accordingly. Usually on Linux the path will be something like /local/java/jdk1.4.2.

TNS_ADMIN should point to $ORACLE_HOME/network/admin
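For example, on Linux these could be exported as follows (the olite10g directory path is only a placeholder for the one you created above):

export ORACLE_HOME=/u01/app/oracle/olite10g
export JAVA13_HOME=/local/java/jdk1.4.2
export TNS_ADMIN=$ORACLE_HOME/network/admin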

Make sure you drop ‘mobileadmin’ schema with cascade option.

Connect to applications as system user and

SQL> Drop user mobileadmin cascade;

User Dropped.

SQL>

Once the above parameters are set and exported, check the DISPLAY environment variable (if installing on any UNIX platform) and use the command below (from the software dump directory) to start the installation.

==========================================================
./runInstaller -invPtrLoc $ORACLE_HOME/oraInst.loc

Starting Oracle Universal Installer …

Checking requirements…

Checking operating system version: must be redhat-2.1, UnitedLinux-1.0 or redhat-3
Passed
All requirements met.

Checking if CPU speed is above 450 MHz.
Actual 3056 MHz Passed
Checking for Kernel version 2.4.21-4.EL Passed
Checking swap space: must be greater than 1536 MB.
Actual 23744MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2007-05-28_03-38-56AM. Please wait ..
sal Installer, Version 2.3.0.10.0
Copyright (C) 1999, 2003, Oracle. All rights reserved.

1.jpg

The above screen is just a welcome screen. Click next.

The next screen is the Inventory screen. Please note that here we are using a local inventory, as specified by the invPtrLoc attribute supplied when we started the installation above. We also specified the location of the oraInst.loc file. If the file doesn't exist, the installer will create it, and the inventory location will be the one we provide in this Specify Inventory Directory screen. So basically the oraInst.loc file will be created with the inventory location given in this screen. Give the location of your ORACLE_HOME, followed by a directory named oraInventory. Even if the directory does not exist, the installer will create it.

1-1.jpg

2.jpg

In the above screen, we just specify the group of the user who is doing installation.

3.jpg

The Specify File Locations page needs the product.jar file, which comes with the software dump, and the name and location of the ORACLE_HOME. Keep the default values and click Next.

4.jpg

Press 'Yes' when you get such a warning. The reason for this warning is that the ORACLE_HOME directory we are using is not empty, because we are creating the oraInventory directory in the same location. So by the time we come to this screen, some components of the inventory have already been created in the ORACLE_HOME directory.

5.jpg

Select Installation Type as Mobile Server. This also includes Mobile Development Kit. Press Next.

6.jpg

On the Mobile Server Repository Database Information page, you need to provide the hostname on which the database resides, the TNS port of the database, and the database SID. Once you have provided the information, click Next.

7.jpg

The Mobile Server Repository screen will confirm whether you want to install the Mobile Server repository. This will create a schema, which will contain schema and code objects specific to mobile administration. If you remember, before starting the installation we dropped the mobileadmin schema for this very purpose. If a repository already exists, the repository creation wizard will fail, so the usual practice is to drop the repository and create a fresh one when installing Olite 10g. Also note that the upgrade repository option does not work with the 10g Olite installation. Select Yes if you want to create the repository; otherwise you can select No.

If you select No at this point and later need to create a repository, you can do so by running just the repository wizard (repwizard) present in ORACLE_HOME/mobile/server/admin. Note, however, that the database information for creating the repository cannot be supplied later while creating the repository, although you can change it by editing the configuration file.

After the installation, I will mention the configuration file for the OLite 10g installation in Oracle Apps R12.

At this point we will be going with repository creation option.

8.jpg

The Port Number for OC4J HTTP Listener is the port we specify for the webtogo administration URL. We can use any unused port number greater than 1024.

9.jpg

Demo applications come with the product and can be used for testing and demos. It is totally up to the user to either install them or skip them. In this installation, we will skip the demo applications.

10.jpg

Once all the parameters are provided, click on Install. The product installation will begin.

11.jpg

You can also see the progress, when the installation proceeds.

12.jpg

Once the product installation completes, Mobile Server Repository creation will start automatically. In this case the wizard just has to create a schema (mobileadmin) and its objects. For that it has to connect as SYSTEM.

Since we already specified the database details the wizard will connect to during the installation, it will just ask for the SYSTEM password. Provide the correct SYSTEM password for your main database and click Next.

13.jpg

When you click Next, it will connect to the database and check for the mobileadmin schema. If the schema is found it will offer the upgrade option; otherwise it will show the message in the screen above and proceed to create a new repository. Click Next to continue.

14.jpg

When you click Next, it will ask for the schema password, which you set in this screen. The schema name will be MOBILEADMIN, and the password will be whatever you set here.

15.jpg

The Administrator password is for administering the webtogo URL. Note that MOBILEADMIN is a database user, whereas ADMINISTRATOR is a user of the webtogo application. You can set the password for the Administrator user on this screen.

16.jpg

As I said before, it is going to install a new repository. Click Next to continue.

17.jpg

You can see the Repository Creation progress.

18.jpg

Once the repository creation task is done, it will show the message seen in the screen. Click the Finish button to finish the wizard.

19.jpg

At the end you can just exit. Our Oracle Database Lite installation with repository creation was successful.

The next task after installation is bringing up the services and sanity-checking the URL.

 

Configuration

At the start of the installation, we set three environment variables related to OLite 10g:

ORACLE_HOME

JAVA13_HOME

TNS_ADMIN

Basically, whenever we have to manage Olite, we need to have these variables configured with correct values. Since we are talking about installing Olite in the E-Business Suite, we will talk about the configuration with respect to R12.

The above three parameters can be set correctly in a configuration file called asgovars_ux.env present in the $ASG_TOP/admin/template directory. Just edit the file and set these environment variables, then source the file. After sourcing, if you check ORACLE_HOME it will point to the Olite ORACLE_HOME (and not the tools ORACLE_HOME).
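A sketch of what this looks like from the shell, assuming ASG_TOP is already set in the environment:

cd $ASG_TOP/admin/template
. ./asgovars_ux.env      # source the edited file
echo $ORACLE_HOME        # should now show the Olite ORACLE_HOME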

You can go to $ORACLE_HOME/mobile/server/bin

and run the following command

(appmgr03) bin – -bash $ ./runmobileserver &
[1] 766
(appmgr03) bin – -bash $ 07/05/29 12:48:28 Warning: Error reading transaction-log file (/slot03/appmgr/olite10g/mobile_oc4j/j2ee/mobileserver/
persistence/transaction.state) for recovery: premature end of file
07/05/29 12:48:28 Forced or abrupt (crash etc) server shutdown detected, starting recovery process…
07/05/29 12:48:28 Recovery completed, 0 connections committed and 0 rolled back…
07/05/29 12:48:31 Tutalii: /SLOTS/slot03/appmgr/olite10g/mobile/server/bin/jzlib.jar archive

(appmgr03) bin – -bash $

This will start the webtogo service. You can verify the same using following command

(appmgr03) bin – -bash $ ps -eaf | grep oc4j.jar
appmgr03   767   766  8 12:48 pts/3    00:00:04 /local/java/jdk1.4.2/bin/java -Xms256m -Xmx512m -jar oc4j.jar

appmgr03   971 32439  0 12:49 pts/3    00:00:00 grep oc4j.jar
(appmgr03) bin – -bash $

You can see, there is 1 oc4j.jar process that is running.

The configuration file for web-to-go is present in $ORACLE_HOME/mobile/server/bin directory, and the name is webtogo.ora.

This file basically contains details like the Olite ORACLE_HOME, the JDBC connection string, the encrypted password, the maximum number of connections allowed, and log and trace file details.

The port number that we selected for Olite (8080) won't be present in webtogo.ora. The port information is in the http-web-site.xml file in the $ORACLE_HOME/mobile_oc4j/j2ee/mobileserver/config directory. You can change the port at any time in http-web-site.xml and bounce the services for the new port to take effect.
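The relevant attribute in http-web-site.xml looks roughly like this (a sketch; other attributes omitted):

<web-site port="8080" display-name="Oracle Database Lite Mobile Server">
    ...
</web-site>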

For shutting down the service there is no special command; we need to kill the running oc4j.jar process. That is the only way currently available to shut down the Olite service.
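A minimal way to do that from the shell (assuming only one oc4j.jar process is running, as shown above):

ps -ef | grep oc4j.jar | grep -v grep | awk '{print $2}' | xargs kill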

The URL for accessing webtogo can be constructed as follows:

http://(hostname):(port)/webtogo.

The port here is the webtogo port (8080) configured during installation.
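For example, with a hypothetical host name appshost01 and the port chosen during this installation, the URL would be http://appshost01:8080/webtogo.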

Once the above installation is done, you can follow Metalink note ID 386682.1 for the post-installation tasks. Note that these post-installation tasks are required only if the installation is done for the E-Business Suite.

 

Multiple Database Block Sizes and the Buffer Cache

In Oracle 10g we can have multiple block sizes at the same time. When a tablespace is created, we can assign a block size for the objects that will be created in that tablespace.

The DB_BLOCK_SIZE parameter in your initialization parameter file determines the size of your standard block size in the database and frequently is the only block size for the entire database.

The DB_CACHE_SIZE parameter in your initialization parameter file specifies the size (in bytes) of the cache of the standard block sized buffers. Notice that you don’t set the number of database buffers; rather, you specify the size of the buffer cache itself in the DB_CACHE_SIZE parameter.

You can have up to five different database block sizes in your databases. That is, you can create your tablespaces with any one of the five allowable database block sizes.

But before you use a non-standard block size, you have to define the cache size for that non-standard block size. There is a parameter called DB_nK_CACHE_SIZE for setting the cache size for non-standard block sizes.

The new init.ora parameters that allow you to use non-default block sizes are:

DB_2K_CACHE_SIZE
DB_4K_CACHE_SIZE
DB_8K_CACHE_SIZE
DB_16K_CACHE_SIZE
DB_32K_CACHE_SIZE

Another classification of the buffer cache depends on the algorithm used to keep contents in the cache. There are basically three types in this category:

1) DB_KEEP_CACHE_SIZE
2) DB_RECYCLE_CACHE_SIZE
3) DB_CACHE_SIZE

DB_KEEP_CACHE_SIZE is where objects are kept once they are loaded. The objects which qualify for this cache are those which are very frequently accessed and have to be retained in memory, for example frequently used small lookup tables. This cache is allocated in addition to the default cache defined by the parameter DB_CACHE_SIZE; for any database we need to have DB_CACHE_SIZE set.

DB_RECYCLE_CACHE_SIZE is where you put objects that you do not want to retain in the cache; you want them aged out of the cache as soon as they are used. You have to be careful while using this, since it may cause a performance hit if you allocate a frequently used object to this cache.

DB_CACHE_SIZE is the size of the default cache.

It is important to note that the init.ora parameters and functionality regarding the keep and recycle buffer pools have changed between Oracle8i and Oracle9i. Those changes are summarized in the table below:

Version of Oracle    Init.ora parameters                  Functionality
Oracle8i             BUFFER_POOL_KEEP = <buffers>         Subsets of the data
                     BUFFER_POOL_RECYCLE = <buffers>      buffer cache

Oracle9i and 10g     DB_KEEP_CACHE_SIZE = <size>          Independent of the
                     DB_RECYCLE_CACHE_SIZE = <size>       data buffer cache

To specify the use of the keep, recycle or default buffer pools, you can use the storage clause of the alter table statement:

alter table <table_name> storage (buffer_pool keep);
alter table <table_name> storage (buffer_pool recycle);
alter table <table_name> storage (buffer_pool default);

Note: The keep and recycle buffer pools are only available for the standard
block size. Non-standard block-size caches have a single default pool.

So, back to non-standard cache sizes. Let's say the default block size is 8K and you want to create one more block size for your future tablespaces. In that case you have to assign a buffer cache for that block size in memory. Remember, when you create a non-standard block size, the memory (cache) allocated for that block size is taken again from physical RAM, and hence the RAM consumption will go up. This memory won't be allocated from the existing db_cache_size.

Example

SQL> show parameters db_cache_size

NAME                                 TYPE        VALUE
———————————— ———– ——————————
db_cache_size                        big integer 200M

Now we have 200M set for DB_CACHE_SIZE and db_keep_cache_size and db_recycle_cache_size is not set.

SQL> show parameter db_keep_cache_size

NAME                                 TYPE        VALUE
———————————— ———– ——————————
db_keep_cache_size                   big integer 0

SQL> show parameter db_recycle_cache_size

NAME                                 TYPE        VALUE
———————————— ———– ——————————
db_recycle_cache_size                big integer 0

Also we can see the size of buffer cache using show sga

SQL> show sga

Total System Global Area 1073741824 bytes
Fixed Size                  1984184 bytes
Variable Size             750786888 bytes
Database Buffers 209715200 bytes
Redo Buffers                6397952 bytes
Let's now try to create a tablespace with a 4K block size.

SQL> create tablespace test_tbs4k datafile '/dy/oracle/product/db10g/dbf/test_tbs4k_1.dbf' size 100M blocksize 4K;
create tablespace test_tbs4k datafile '/dy/oracle/product/db10g/dbf/test_tbs4k_1.dbf' size 100M blocksize 4K
*
ERROR at line 1:
ORA-29339: tablespace block size 4096 does not match configured block sizes

Oracle does not automatically map the 4k block size onto the 8k buffer cache, because it cannot load the contents of a datafile created with a 4k block size into buffers of 8k.

So we need to create a buffer cache for the 4K block size; only after that can we create a tablespace with a 4K block size.

We can see that the parameter db_4k_cache_size is not set.

SQL> show parameters db_4k_cache_size

NAME                                 TYPE        VALUE
———————————— ———– ——————————
db_4k_cache_size                     big integer 0

SQL> alter system set db_4k_cache_size = 100M;

System altered.

SQL> show parameter db_4k_cache_size;

NAME                                 TYPE        VALUE
———————————— ———– ——————————
db_4k_cache_size                     big integer 100M
SQL> show parameter db_cache_size

NAME                                 TYPE        VALUE
———————————— ———– ——————————
db_cache_size                        big integer 200M
SQL> show sga

Total System Global Area 1073741824 bytes
Fixed Size                  1984184 bytes
Variable Size             750786888 bytes
Database Buffers 314572800 bytes
Redo Buffers                6397952 bytes
SQL>

The above stats clearly show that db_cache_size has not reduced, but the database buffer size has increased to 300M. Now we can create a tablespace with a 4K block size.

SQL> create tablespace test_tbs4k datafile '/dy/oracle/product/db10g/dbf/test_tbs4k_1.dbf' size 100M blocksize 4K;

Tablespace created.

SQL>

You can get information about your database buffer caches from the view v$buffer_pool:

SQL> select name, block_size, current_size from v$buffer_pool;

NAME                 BLOCK_SIZE CURRENT_SIZE
——————– ———- ————
DEFAULT                    8192          200
DEFAULT                    4096          100

You can get more stats and information on your buffer pools using the view v$buffer_pool_statistics.
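For instance, a quick sketch that shows per-pool activity (column names assumed from the standard view definition):

select name, block_size, physical_reads, db_block_gets, consistent_gets
from   v$buffer_pool_statistics;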

 

Installing Oracle Database 10g R2 on Solaris 5.9

Introduction

To begin with the installation, we need to follow the prerequisite documentation regarding operating system requirements, checking disk space, RAM, correct OS patches, and the correct Oracle software dump. Here we will see the installation steps for Oracle Database 10.2.0.2 on Solaris 5.9.

Determining Disk and Memory Requirements

For the Solaris OS you can check the OS-level settings as given below. See Metalink note 169706.1 for details.

Checking swap space

bash-2.05$ swap -s
total: 9342392k bytes allocated + 1155400k reserved = 10497792k used, 6616928k available

Checking RAM

bash-2.05$ /usr/sbin/prtconf | grep -i memory
Memory size: 16384 Megabytes
memory (driver not attached)
virtual-memory (driver not attached)

Checking OS version

bash-2.05$ uname -r
5.9

Checking Disk space

bash-2.05$ df -h .
Filesystem             size   used  avail capacity  Mounted on
/dev/vx/dsk/dg01/vol01
615G   258G   335G    44%    /dy

Performing Pre-Installation Tasks

1)  Checking Additional OS packages for Solaris

Some additional packages are required for successful installation of Oracle software. To check whether required packages are installed on your operating system use following command:

bash-2.05$ pkginfo -i SUNWlibms SUNWtoo SUNWi1cs SUNWi15cs SUNWxwfnt SUNWxwplt SUNWmfrun SUNWxwplr SUNWxwdv SUNWgcc SUNWbtool SUNWi1of SUNWhea SUNWlibm SUNWsprot SUNWuiu8
system      SUNWbtool      CCS tools bundled with SunOS
system      SUNWhea        SunOS Header Files
system      SUNWi1of       ISO-8859-1 (Latin-1) Optional Fonts
system      SUNWlibm       Forte Developer Bundled libm
system      SUNWlibms      Forte Developer Bundled shared libm
system      SUNWmfrun      Motif RunTime Kit
system      SUNWsprot      Solaris Bundled tools
system      SUNWtoo        Programming Tools
system      SUNWuiu8       Iconv modules for UTF-8 Locale
system      SUNWxwdv       X Windows System Window Drivers
system      SUNWxwfnt      X Window System platform required fonts
system      SUNWxwplt      X Window System platform software
ERROR: information for “SUNWi1cs” was not found
ERROR: information for “SUNWi15cs” was not found
ERROR: information for “SUNWxwplr” was not found
ERROR: information for “SUNWgcc” was not found

Install the missing packages using the command:

pkgadd -d . SUNWi1cs SUNWi15cs SUNWxwplr SUNWgcc

(Note that above installation of packages must be done as a ROOT user)

2) Setting kernel parameters

Set the following values (to these values or greater) in the /etc/system file as the root user, in the format shown after the list.

noexec_user_stack 1
semsys:seminfo_semmni 100
semsys:seminfo_semmns 1024
semsys:seminfo_semmsl 256
semsys:seminfo_semvmx 32767
shmsys:shminfo_shmmax 4294967295
shmsys:shminfo_shmmin 1
shmsys:shminfo_shmmni 100
shmsys:shminfo_shmseg 10
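For reference, in /etc/system these entries are written with a set prefix, for example:

set noexec_user_stack=1
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
set semsys:seminfo_semvmx=32767
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10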

3) Reboot the system

/usr/sbin/reboot

4) Creating Oracle user

groupadd oinstall
groupadd dba
useradd -d /export/home/oracle -g oinstall -G dba -m -s /bin/ksh oracle
passwd -r files oracle

5) Check DISPLAY settings

You should check the DISPLAY setting if you are not doing the silent installation. You can do this using following command.

Check the IP address from /etc/hosts file.

export DISPLAY=(IP Address):1.0

6) Go to the dump location and execute runInstaller.

bash-2.05$ pwd
/nfs/stage/solaris/oracle/10G-R2-102010/Disk1/database
bash-2.05$ ls
doc           install       response      runInstaller  stage         welcome.html

The first screen that you see is the welcome screen. You can check the Oracle products that are installed using this screen. Also you can uninstall a product.

Once we press next…

The next screen asks about the inventory directory for our database. Specify the correct inventory directory (either the central inventory or a local inventory). The OS group can be either dba or oinstall. Specify the correct value and press Next.

Select the type of installation you want to do. The features for each type of installation are already listed. Press Next when selected. Here we are doing an Enterprise Edition installation.

Select the name and location of your Oracle home. Please note that the Oracle base directory should be created, with the correct permissions, before proceeding further. Here the Oracle base is /u00/oracle/product. The other Oracle home directories will be created by Oracle. Press Next once the correct directory is selected.

Oracle Universal Installer will perform the pre-install checks on the OS version, patch levels, memory parameters, etc.

On this screen you can select whether you want to create a database, install just the software, or configure ASM as well. Let's select the database creation option and press Next.

Select the type of database you want to create. In this case we will create a General purpose database.

Here you have to specify the global database name and SID. You can also select the database character set and choose whether to create the sample schemas. Press Next.

Since you are installing a single database and not a grid installation, the choice of “Use Grid Control for Database Management” is disabled. You have to use Database Control for database management, and you can enable email notifications so that you receive an email whenever there are critical alerts.

On the “Specify Database Storage Option” screen you can choose the storage option: OS file system, ASM, or raw devices. Here we will be using the OS file system.

On the “Specify Backup and Recovery Option” screen you can specify whether you need automated backups for the environment. If you opt for automated backups, you need to specify the location of the flash recovery area and the operating system login credentials.

On this screen you need to specify the admin passwords for SYS, SYSTEM, SYSMAN and DBSNMP. You have the option of using different passwords or the same password for all of these users.

The final summary screen displays a summary of what is about to be installed, including the complete product list. Press the Install button.

Once you press Install, the installation starts and the products are installed automatically. You can also follow the installer's current action via the log file shown at the bottom of the window. After the main software installation, the next screen is the configuration screen.

The configuration step carries out network configuration, database configuration, and installation of the SQL*Plus client tool. Network configuration simply configures the listener so the database can accept remote connections. The Database Configuration Assistant comes into the picture because we chose to create a database on the “Select Configuration Option” screen.

Once network configuration is done, the Universal Installer automatically starts the Database Configuration Assistant, which performs the actual database creation. The database name will be the one we specified on the “Specify Database Configuration Options” screen.

Once the installation completes, an “End of Installation” screen appears. Here OUI provides you with the URLs for logging into Enterprise Manager Database Control; the last URL is the EM URL.

That completes the installation of Oracle Database 10g R2, including creation of a database using DBCA (Database Configuration Assistant).

 

Converting / Migrating the database character set

This article presents a simple example of migrating the default character set of a database using the CSSCAN and CSALTER character set scanner utilities provided by Oracle. The basic steps involved in character set conversion are listed below:

SHUTDOWN IMMEDIATE
BACKUP
STARTUP
CSSCAN
SHUTDOWN IMMEDIATE
STARTUP RESTRICT
CSALTER
SHUTDOWN IMMEDIATE
STARTUP

Before migrating the character set it is very important that we back up the database completely. If some issue occurs during the migration or a file gets corrupted, we can then restore the database.

Once backup is done, start the instance and begin scanning.

Before we migrate to a new character set we have to scan the database to determine the language and character set of any text whose encoding is unknown. For each piece of text, the character set detection engine builds a series of probabilities, each corresponding to a language and character set pair; the most statistically probable pair identifies the dominant language and character set.

We can scan the database using the CSSCAN utility. From the command prompt:

===============================================================
bash-2.05$ csscan system/manager full=y tochar=al32utf8 ARRAY=10240 PROCESS=3

ld.so.1: csscan: fatal: libclntsh.so.10.1: open failed: No such file or directory
Killed
===============================================================

The above error occurs because LD_LIBRARY_PATH is not set. Set LD_LIBRARY_PATH:

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

===============================================================
bash-2.05$ csscan system/manager full=y tochar=al32utf8 ARRAY=10240 PROCESS=3
Character Set Scanner v2.1 : Release 10.2.0.0.0 – Production on Sun May 27 10:52:53 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – 64bit Production
With the Partitioning, OLAP and Data Mining options
CSS-00107: Character set migration utility schema not installed

Scanner terminated unsuccessfully.
===============================================================
The above error is due to:

CSS-00107 Character set migration utility schema not installed
Cause: CSM$VERSION table not found in the database.
Action: Run CSMINST.SQL on the database.
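
To install the migration utility schema, run csminst.sql from $ORACLE_HOME/rdbms/admin as a privileged user; a minimal sketch (review the script before running it on your database):

cd $ORACLE_HOME/rdbms/admin
sqlplus / as sysdba
SQL> @csminst.sql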

===============================================================
bash-2.05$ csscan system/manager full=y
Character Set Scanner v2.1 : Release 10.2.0.0.0 – Production on Sun May 27 11:13:46 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – 64bit Production
With the Partitioning, OLAP and Data Mining options

Current database character set is US7ASCII.

Enter new database character set name: > al32utf8

Enter array fetch buffer size: 1024000 >

Enter number of scan processes to utilize(1..32): 1 > 3

Enumerating tables to scan…

. process 1 scanning SYS.SOURCE$[AAAABIAABAAAAIRAAA]
. process 2 scanning SYS.TYPE$[AAAAC1AABAAAAURAAA]
. process 2 scanning SYS.METHOD$[AAAAC1AABAAAAURAAA]
. process 3 scanning SYS.ATTRIBUTE$[AAAAC1AABAAAAURAAA]
. process 2 scanning SYS.PARAMETER$[AAAAC1AABAAAAURAAA]



. process 2 scanning SYS.LOGMNR_BUILDLOG[AAABXcAABAAADN5AAA]
. process 3 scanning SYSTEM.LOGSTDBY$APPLY_MILESTONE[AAABw/AADAAAAkJAAA]
. process 1 scanning SYSTEM.REPCAT$_CONFLICT[AAAB3IAABAAAEVhAAA]
. process 2 scanning SYS.DBMS_UPG_LOG$[AAACGVAABAAAFHRAAA]
. process 3 scanning SYS.WRH$_DATAFILE[AAACLIAADAAAAs5AAA]
. process 2 scanning SYS.WRH$_MTTR_TARGET_ADVICE[AAACORAADAAABBJAAA]
. process 1 scanning SYS.WRH$_STREAMS_CAPTURE[AAACOkAADAAABDBAAA]
. process 3 scanning DBSNMP.MGMT_CAPTURE_SQL[AAACWHAADAAABSZAAA]
. process 2 scanning SYS.WRH$_ACTIVE_SESSION_HISTORY[AAACVHAADAAABB5AAA]
. process 1 scanning SYSTEM.LOGMNRC_GTCS[AAABajAADAAAAZxAAA]
. process 3 scanning SYS.WRH$_DB_CACHE_ADVICE[AAACubAADAAABjhAAA]

Creating Database Scan Summary Report…

Creating Individual Exception Report…

Scanner terminated successfully.

===============================================================
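
Before shutting down, it is worth reviewing the reports the scanner wrote to the current directory. Assuming the default LOG prefix of scan, the files would be roughly:

ls scan.*
# scan.txt  - Database Scan Summary Report
# scan.err  - Individual Exception Report
# scan.out  - screen output of the scan session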

Once the scanning is completed, shut down the database and start it in restricted mode.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup restrict
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size 1984184 bytes
Variable Size 750786888 bytes
Database Buffers 314572800 bytes
Redo Buffers 6397952 bytes
Database mounted.
Database opened.
SQL>

SQL> @@?/rdbms/admin/csalter.plb

0 rows created.
Function created.
Function created.
Procedure created.

This script will update the content of the Oracle Data Dictionary.
Please ensure you have a full backup before initiating this procedure.
Would you like to proceed (Y/N)?Y
old 6: if (UPPER(‘&conf’) <> ‘Y’) then
new 6: if (UPPER(‘Y’) <> ‘Y’) then
Checking data validility…
begin converting system objects
80 rows in table SYS.METASTYLESHEET are converted
4 rows in table SYS.RULE$ are converted
3369 rows in table SYS.WRH$_SQL_PLAN are converted
408 rows in table SYS.WRH$_SQLTEXT are converted
79 rows in table SYS.WRI$_DBU_FEATURE_METADATA are converted
17 rows in table SYS.WRI$_DBU_HWM_METADATA are converted
21 rows in table SYS.WRI$_DBU_FEATURE_USAGE are converted

PL/SQL procedure successfully completed.

Alter the database character set…
CSALTER operation completed, please restart database

PL/SQL procedure successfully completed.
0 rows deleted.
Function dropped.
Function dropped.
Procedure dropped.

SQL>
===============================================================

If there are possible conversion problems, the process will report the problem and clean itself up without performing the conversion. Here are a couple of the messages I got when trying this process.

# When I tried to convert WE8MSWIN1252 -> AL32UTF8.
Checking data validility…
Unrecognized convertible date found in scanner result

# When I tried to run the CSALTER script without a SHUTDOWN-STARTUP RESTRICT.
Checking data validility…
Sorry only one session is allowed to run this script

Once the conversion is complete, you must restart the instance

===============================================================
SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size 1984184 bytes
Variable Size 750786888 bytes
Database Buffers 314572800 bytes
Redo Buffers 6397952 bytes
Database mounted.
Database opened.
SQL>
===============================================================

For more information on this utility, please check the Oracle documentation.

 

Restoring the statistics – Oracle Database 10g

Here is a short article on how to restore the statistics on a table. Sometimes gathering stats on a table causes the plan of a query accessing that table to flip. This can greatly improve the performance of some queries, but there are situations where query performance can degrade instead.

Certain tables in every database are “hot” tables, and a DBA should not play around with the statistics of those tables, as the impact can be quite large.

If stats have been gathered and you need to restore the previous statistics to bring everything back to normal, here is what you should do.

Using RESTORE_TABLE_STATS

Step 1) Check the history of stats gathered on the table using DBA_TAB_STATS_HISTORY

SQL> select TABLE_NAME, STATS_UPDATE_TIME from dba_tab_stats_history where table_name = 'PICKED_SHIPMENT_BATCHES';
TABLE_NAME                     STATS_UPDATE_TIME
------------------------------ ----------------------------------------
PICKED_SHIPMENT_BATCHES        30-OCT-10 07.50.59.539450 PM +02:00
PICKED_SHIPMENT_BATCHES        31-OCT-10 12.02.58.979300 AM +02:00
PICKED_SHIPMENT_BATCHES        31-OCT-10 12.17.19.315201 AM +02:00

STATS_UPDATE_TIME tells us when the stats were last updated.
Step 2) Check when the table was last analyzed

DCFRA1>select last_analyzed from dba_tables where table_name = 'PICKED_SHIPMENT_BATCHES';
LAST_ANALYZED
----------------
2010-10-31:20:12

 

Step 3) Use DBMS_STATS.RESTORE_TABLE_STATS procedure to restore the stats on the table

SQL> execute dbms_stats.restore_table_stats(<OWNER>,<TABLE_NAME>,<TIMESTAMP WITH TIMEZONE>);

Timestamp could be any timestamp in STATS_UPDATE_TIME column of dba_tab_stats_history table.
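
A concrete sketch, using the table from Step 1 and one of the timestamps above (the owner shown is hypothetical; substitute the real schema name):

begin
  dbms_stats.restore_table_stats(
    ownname         => 'APPS',                       -- hypothetical owner
    tabname         => 'PICKED_SHIPMENT_BATCHES',
    as_of_timestamp => to_timestamp_tz('31-OCT-10 12.02.58.979300 AM +02:00',
                                       'DD-MON-RR HH.MI.SS.FF AM TZH:TZM'));
end;
/

This restores the statistics that were current as of the given timestamp, i.e. the set that was in effect at that point in time.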

After the restore, check the LAST_ANALYZED date in DBA_TABLES; you should see the old date.
Following is a short FAQ on statistics.

Where does Oracle store the statistics history?

The DBA_OPTSTAT_OPERATIONS view contains a history of the statistics-gathering operations that have been performed.

SQL> select OPERATION, START_TIME, END_TIME from dba_optstat_operations;
OPERATION                      START_TIME                               END_TIME
------------------------------ ---------------------------------------- ----------------------------------------
gather_database_stats(auto)    30-OCT-10 06.00.04.042555 AM +02:00      31-OCT-10 11.00.02.114381 PM +01:00

 

How does Oracle maintain the statistics history?

We can check the oldest statistics that can be restored using the DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY function.

SQL> select DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY  from dual;
GET_STATS_HISTORY_AVAILABILITY
---------------------------------------------------------------------------
30-SEP-10 11.22.25.817428000 PM +01:00

Usually Oracle retains stats for 1 month (31 days).

DCFRA1>select DBMS_STATS.GET_STATS_HISTORY_RETENTION from dual;
GET_STATS_HISTORY_RETENTION
---------------------------
                        31

You can change the retention using the ALTER_STATS_HISTORY_RETENTION procedure.
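
For example, to keep 60 days of history instead of the default 31 (the argument is in days):

SQL> exec dbms_stats.alter_stats_history_retention(60);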

Using EXPORT/IMPORT

 

Using export/import of statistics is a six-step process.

Step 1) Create stats table in the database

EXEC DBMS_STATS.create_stat_table('SYSTEM','STATS_TABLE');

Step 2) Export table/schema statistics

EXEC DBMS_STATS.export_schema_stats('DEMO_SCHEMA','STATS_TABLE',NULL,'SYSTEM');

Step 3) Export the table STATS_TABLE using the exp utility.
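
A possible exp command for this step, assuming the stats table was created in the SYSTEM schema as in Step 1 (credentials and file names are placeholders):

exp system/manager tables=STATS_TABLE file=stats_table.dmp log=stats_table_exp.log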

This completes the export part. You can later import the stats using following steps

Step 4) Import the table STATS_TABLE using the imp utility.
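
And the matching imp command on the target database (again, credentials and file names are placeholders):

imp system/manager tables=STATS_TABLE file=stats_table.dmp log=stats_table_imp.log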

Step 5) Import table/schema statistics

EXEC DBMS_STATS.import_schema_stats('DEMO_SCHEMA','STATS_TABLE',NULL,'SYSTEM');

Step 6) Drop the stats table

EXEC DBMS_STATS.drop_stat_table('SYSTEM','STATS_TABLE');

 

Hope this helps !!

Setting Password for database listener

Introduction

This is a simple post that explains how to secure your listener by setting up password authentication.
In Oracle 8i/9i we can remotely administer the listener using the lsnrctl command-line utility by default. In Oracle Database 10g this feature is disabled by default, so the listener cannot be administered remotely out of the box. However, 10g provides a parameter, LOCAL_OS_AUTHENTICATION_<listener_name>, which can be set to OFF in order to enable remote listener administration.
How to administer listener remotely

Following are the activities we can do in remote listener administration
  • Stop the Listener
  • Set a password and prevent others from controlling the Listener
  • Write trace and log files to any file accessible to the process owner of tnslsnr (usually oracle)
  • Obtain detailed information on the Listener, database, and application configuration
Let's say I have a database sitst01, and the listener name is the same as the DB name.
-bash-3.00$ lsnrctl start sitst01
LSNRCTL for Linux: Version 10.2.0.4.0 - Production on 13-MAY-2010 11:09:36

Copyright (c) 1991, 2007, Oracle.  All rights reserved.

Starting /slot/ems3079/oracle/db_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 10.2.0.4.0 - Production

System parameter file is /slot/ems3079/oracle/db_1/network/admin/listener.ora
Log messages written to /slot/ems3079/oracle/db_1/network/log/sitst01.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=adc60006sems.us.oracle.com)(PORT=1600)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROCsitst01)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=adc60006sems.us.oracle.com)(PORT=1600)))

STATUS of the LISTENER
------------------------
Alias                     sitst01
Version                   TNSLSNR for Linux: Version 10.2.0.4.0 - Production
Start Date                13-MAY-2010 11:09:36
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  OFF
SNMP                      ON
Listener Parameter File   /slot/ems3079/oracle/db_1/network/admin/listener.ora
Listener Log File         /slot/ems3079/oracle/db_1/network/log/sitst01.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=adc60006sems.us.oracle.com)(PORT=1600)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROCsitst01)))
Services Summary...
Service "sitst01.us.oracle.com" has 1 instance(s).
Instance "sitst01", status UNKNOWN, has 1 handler(s) for this service...
Service "sitst01_DGMGRL.us.oracle.com" has 1 instance(s).
Instance "sitst01", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
My listener.ora file looks like this

-bash-3.00$ cat listener.ora
# listener.ora Network Configuration File: /slot/ems3079/oracle/db_1/network/admin/listener.ora
# Generated by Oracle configuration tools.
SITST01 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = adc60006sems.us.oracle.com)(PORT = 1600))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROCsitst01))
)
)

SID_LIST_sitst01=
(SID_LIST=
(SID_DESC=
(GLOBAL_DBNAME=sitst01.us.oracle.com)
(SID_NAME=sitst01)
(ORACLE_HOME=/slot/ems3079/oracle/db_1))
(SID_DESC=
(GLOBAL_DBNAME=sitst01_DGMGRL.us.oracle.com)
(SID_NAME=sitst01)
(ORACLE_HOME=/slot/ems3079/oracle/db_1))
)

LOCAL_OS_AUTHENTICATION_SITST01=OFF

Note that I have enabled remote listener administration by setting LOCAL_OS_AUTHENTICATION_SITST01=OFF.
On the second server (sitst02), I add an alias (sitst01) for the first server's listener in listener.ora.
The listener.ora on the second server looks like this:


-bash-3.00$ cat listener.ora
# listener.ora Network Configuration File: /slot/ems6826/oracle/db_1/network/admin/listener.ora
# Generated by Oracle configuration tools.
SITST02 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = adc60011sems.us.oracle.com)(PORT = 1521))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROCsitst02))
)
)

SID_LIST_sitst02=
(SID_LIST=
(SID_DESC=
(GLOBAL_DBNAME=sitst02.us.oracle.com)
(SID_NAME=sitst02)
(ORACLE_HOME=/slot/ems6826/oracle/db_1))
(SID_DESC=
(GLOBAL_DBNAME=sitst02_DGMGRL.us.oracle.com)
(SID_NAME=sitst02)
(ORACLE_HOME=/slot/ems6826/oracle/db_1))
)

SITST01 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = adc60006sems.us.oracle.com)(PORT = 1600))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROCsitst01))
)
)

I added the sitst01 alias entry to the listener.ora on the second server.
Reload the listener on the second server so that it picks up the new alias we have registered:

-bash-3.00$ lsnrctl reload sitst02

LSNRCTL for Linux: Version 10.2.0.4.0 - Production on 13-MAY-2010 11:13:23

Copyright (c) 1991, 2007, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=adc60011sems.us.oracle.com)(PORT=1521)))
The command completed successfully

Now let's administer the listener on the first server from the second server.
On the second server, check the status of the first listener (sitst01):

-bash-3.00$ lsnrctl status sitst01
LSNRCTL for Linux: Version 10.2.0.4.0 - Production on 13-MAY-2010 11:14:11
Copyright (c) 1991, 2007, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=adc60006sems.us.oracle.com)(PORT=1600)))
STATUS of the LISTENER
------------------------
Alias                     sitst01
Version                   TNSLSNR for Linux: Version 10.2.0.4.0 - Production
Start Date                13-MAY-2010 11:09:36
Uptime                    0 days 0 hr. 4 min. 35 sec
Trace Level               off
Security                  OFF
SNMP                      ON
Listener Parameter File   /slot/ems3079/oracle/db_1/network/admin/listener.ora
Listener Log File         /slot/ems3079/oracle/db_1/network/log/sitst01.log

Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=adc60006sems.us.oracle.com)(PORT=1600)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROCsitst01)))
Services Summary...
Service "sitst01.us.oracle.com" has 2 instance(s).
Instance "sitst01", status UNKNOWN, has 1 handler(s) for this service...
Instance "sitst01", status READY, has 1 handler(s) for this service...
Service "sitst01_DGMGRL.us.oracle.com" has 1 instance(s).
Instance "sitst01", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

You can also stop the listener on the first server from the second server:


-bash-3.00$ lsnrctl stop sitst01

LSNRCTL for Linux: Version 10.2.0.4.0 - Production on 13-MAY-2010 11:15:44

Copyright (c) 1991, 2007, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=adc60006sems.us.oracle.com)(PORT=1600)))
The command completed successfully

Password Protecting listener

To avoid the above scenario, you can protect the listener using password authentication. There are two methods of doing this:
1) Setting a clear text password in the listener.ora file
2) Setting an encrypted password using the LSNRCTL prompt
We will look at both methods now.
1) Setting clear text password in listener.ora file

This is not the recommended method for setting a password, but here is how it works.
1) Stop the listener

-bash-3.00$ lsnrctl stop sitst01

LSNRCTL for Linux: Version 10.2.0.4.0 - Production on 13-MAY-2010 11:15:44

Copyright (c) 1991, 2007, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=adc60006sems.us.oracle.com)(PORT=1600)))
The command completed successfully

2) Add the following parameter to listener.ora:
PASSWORDS_<listener_name>=<password>
Example:
PASSWORDS_SITST01=welcome
3) Start the listener

-bash-3.00$ lsnrctl start sitst01

LSNRCTL for Linux: Version 10.2.0.4.0 - Production on 13-MAY-2010 11:20:11

Copyright (c) 1991, 2007, Oracle.  All rights reserved.

Starting /slot/ems3079/oracle/db_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 10.2.0.4.0 - Production
System parameter file is /slot/ems3079/oracle/db_1/network/admin/listener.ora
Log messages written to /slot/ems3079/oracle/db_1/network/log/sitst01.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=adc60006sems.us.oracle.com)(PORT=1600)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROCsitst01)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=adc60006sems.us.oracle.com)(PORT=1600)))
STATUS of the LISTENER
------------------------
Alias                     sitst01
Version                   TNSLSNR for Linux: Version 10.2.0.4.0 - Production
Start Date                13-MAY-2010 11:20:11
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Password
SNMP                      ON
Listener Parameter File   /slot/ems3079/oracle/db_1/network/admin/listener.ora
Listener Log File         /slot/ems3079/oracle/db_1/network/log/sitst01.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=adc60006sems.us.oracle.com)(PORT=1600)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROCsitst01)))
Services Summary...
Service "sitst01.us.oracle.com" has 1 instance(s).
Instance "sitst01", status UNKNOWN, has 1 handler(s) for this service...
Service "sitst01_DGMGRL.us.oracle.com" has 1 instance(s).
Instance "sitst01", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

Now try performing any activity from the remote server, or even from this server; it will give an error:

-bash-3.00$ lsnrctl status sitst01

LSNRCTL for Linux: Version 10.2.0.4.0 - Production on 13-MAY-2010 11:20:54

Copyright (c) 1991, 2007, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=adc60006sems.us.oracle.com)(PORT=1600)))
TNS-01169: The listener has not recognized the password

In order to carry out any activity on the listener, you first need to set the password at the LSNRCTL prompt using the “set password” command, as shown below:

LSNRCTL> set current_listener sitst01
Current Listener is sitst01
LSNRCTL> status
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=adc60006sems.us.oracle.com)(PORT=1600)))
TNS-01169: The listener has not recognized the password
LSNRCTL> set password
Password:
The command completed successfully
LSNRCTL> status
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=adc60006sems.us.oracle.com)(PORT=1600)))
STATUS of the LISTENER
------------------------
Alias                     sitst01
Version                   TNSLSNR for Linux: Version 10.2.0.4.0 - Production
Start Date                13-MAY-2010 11:22:45
Uptime                    0 days 0 hr. 0 min. 34 sec
Trace Level               off
Security                  ON: Password
SNMP                      ON
Listener Parameter File   /slot/ems3079/oracle/db_1/network/admin/listener.ora
Listener Log File         /slot/ems3079/oracle/db_1/network/log/sitst01.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=adc60006sems.us.oracle.com)(PORT=1600)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROCsitst01)))
Services Summary...
Service "sitst01.us.oracle.com" has 2 instance(s).
Instance "sitst01", status UNKNOWN, has 1 handler(s) for this service...
Instance "sitst01", status READY, has 1 handler(s) for this service...
Service "sitst01_DGMGRL.us.oracle.com" has 1 instance(s).
Instance "sitst01", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
LSNRCTL> stop
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=adc60006sems.us.oracle.com)(PORT=1600)))
The command completed successfully

So unless we use “set password” and enter the password we set in listener.ora, we cannot perform any activity.
2) Setting the encrypted password using the LSNRCTL prompt

LSNRCTL> set current_listener sitst01
Current Listener is sitst01
LSNRCTL> change_password
Old password:
New password:
Reenter new password:
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=adc60006sems.us.oracle.com)(PORT=1600)))
Password changed for sitst01
The command completed successfully
LSNRCTL> status
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=adc60006sems.us.oracle.com)(PORT=1600)))
TNS-01169: The listener has not recognized the password
LSNRCTL> set password
Password:
The command completed successfully
LSNRCTL> status
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=adc60006sems.us.oracle.com)(PORT=1600)))
STATUS of the LISTENER
------------------------
Alias                     sitst01
Version                   TNSLSNR for Linux: Version 10.2.0.4.0 - Production
Start Date                13-MAY-2010 11:22:45
Uptime                    0 days 0 hr. 0 min. 34 sec
Trace Level               off
Security                  ON: Password
SNMP                      ON
Listener Parameter File   /slot/ems3079/oracle/db_1/network/admin/listener.ora
Listener Log File         /slot/ems3079/oracle/db_1/network/log/sitst01.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=adc60006sems.us.oracle.com)(PORT=1600)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROCsitst01)))
Services Summary...
Service "sitst01.us.oracle.com" has 2 instance(s).
Instance "sitst01", status UNKNOWN, has 1 handler(s) for this service...
Instance "sitst01", status READY, has 1 handler(s) for this service...
Service "sitst01_DGMGRL.us.oracle.com" has 1 instance(s).
Instance "sitst01", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

Use the “change_password” command to set the password for the listener.
If you now look at the listener.ora file, you will see that the encrypted password has been recorded in it:

#----ADDED BY TNSLSNR 13-MAY-2010 11:27:55---
PASSWORDS_sitst01 = 125916DFCEFE8F08
#--------------------------------------------

Hope this helps !!
Reference: