Deploying an application in OC4J Instance – Oracle AS10g

Introduction:

This post gives a brief idea about deploying a J2EE application in an OC4J instance of Application Server 10g. I am using a sample .ear file for the deployment, a readily available demo HR application.

We can carry out the deployment using two methods:

  1. Using Application server enterprise manager (UI)
  2. Using dcmctl (Command line utility)

We will see both the options here.

Comparing JAR, WAR, and EAR Files

To deploy EJBs and other components in the J2EE application, you must package all the components together. This also includes JSP files, images, utility classes, and other files that are part of this package.

JAR Files

Java provides a utility for creating archives, called the Java Archive (JAR). In addition to using JAR files for archiving and distribution, you can also use them for deployment and encapsulation of libraries, components, plug-ins, and other files (such as image files). The JAR file maintains the file subdirectories, and special files in the JAR, such as manifests and deployment descriptors, instruct how the JAR is to be treated. You can package standard JavaBeans and Enterprise JavaBeans or an entire application into JAR files that can be executed by the JVM.

WAR Files

You add Web components to a J2EE application in a package called a Web Archive (WAR) file. WAR files are similar to JAR files but have a .war extension. You can include HTML documents, servlets, JSPs, and applet class files in WAR files. A WAR file has a specific hierarchical directory structure. The top-level directory of a WAR is the document-root directory of the application. JSP pages, client-side classes and archives, and static Web resources are stored in the document-root directory.

The document-root directory contains a subdirectory called WEB-INF, which contains the following files and directories:

  • Tag library descriptor files
  • classes: Directory that contains server-side classes: servlet, utility classes, and JavaBeans components
  • web.xml: The Web application deployment descriptor
  • lib: Directory that contains JAR archives of libraries
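
For illustration, the standard jar utility that ships with the JDK can be used to package such a directory structure into a WAR file. This is only a sketch; the directory name mywebapp is a hypothetical example.

bash-2.05$ cd mywebapp
bash-2.05$ jar -cvf ../mywebapp.war .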

EAR Files

An Enterprise Archive file or EAR is a JAR file that contains Web modules of a J2EE application. A Web module is an entity consisting of one or more resources such as HTML files, Java class files, and XML files. In other words, an EAR file is a JAR file that can contain JAR and WAR files in addition to other files, and ends with the .ear extension. An EAR file also contains an application descriptor called application.xml that describes its contents.
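
For reference, the application.xml inside the EAR's META-INF directory might look roughly like the sketch below. The module and context-root names are assumptions based on the hrapp.ear used later in this post.

<application>
  <display-name>hrapp</display-name>
  <module>
    <web>
      <web-uri>hrapp-web.war</web-uri>
      <context-root>/hrapp</context-root>
    </web>
  </module>
</application>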

Deploying the application

You can deploy the application either in an existing OC4J instance or you can create a new OC4J instance and deploy the application in that. This totally depends on the situation and business requirement. Creating a separate OC4J instance is just for organizing your application server and the different applications. We can even deploy our application in the OC4J_Portal instance, but in that case your application will get mixed up with the portal application and things won't be organized.

In our case we will create a new OC4J instance and deploy a sample application in it.

Creating OC4J instance – Using Enterprise Manager

1) Log in to your Enterprise Manager on the mid tier node; you will see a button “Create OC4J Instance” as shown below.

2) Next it will ask you for the name of the OC4J instance. You can provide any name which you can identify later. Click on “Create”.

3) Once the OC4J instance gets created, you should be able to see it on the components home page as shown below. By default the instance will be down and you have to start it. You can do so using OPMN or from EM itself. Just select the instance using the checkbox and click on Start.
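
If you prefer the command line, the same start can be done through OPMN. This is only a sketch; the OC4J_AVDEO name matches the instance created later in this post and the relative path assumes the mid tier ORACLE_HOME used here.

bash-2.05$ cd mid/opmn/bin
bash-2.05$ ./opmnctl startproc process-type=OC4J_AVDEO
bash-2.05$ ./opmnctl status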

The above steps are for creating an OC4J instance using Enterprise Manager. You can also create an OC4J instance through the command line using the DCMCTL utility as shown below.

Creating OC4J instance – Using DCMCTL

The same OC4J instance can also be created using the following command.

bash-2.05$ cd mid/dcm/bin
bash-2.05$ ls dcmctl
dcmctl
bash-2.05$ ./dcmctl CreateComponent -ct OC4J -co OC4J_AVDEO

Component Name: OC4J_AVDEO
Component Type: OC4J
Instance: mid.ap101fam.us.oracle.com
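
You can cross-check that the new component is registered with DCM; a hedged way is the getState command (the exact output format may differ across versions).

bash-2.05$ ./dcmctl getState -v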

Deploying the Application in the OC4J instance – Using Enterprise Manager

1) Go to Enterprise Manager mid tier home page.

2) Click on the OC4J instance that you created (OC4J_AVDEO in my case)

3) Click on Applications tab

4) Here you can see the “Deploy EAR file” button. Click on the button and provide the EAR file using the Browse button. You also need to provide an application name.

Click on “Continue”.

5) It will ask for the URL mapping for the Web modules. This is the entry it will put in the mod_oc4j.conf file of Apache. You can keep the default setting.
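
For reference, the URL mapping ends up as Oc4jMount directives in mod_oc4j.conf, roughly like the sketch below (assuming the hrapp mapping used later in this post; the exact destination syntax can vary).

Oc4jMount /hrapp OC4J_AVDEO
Oc4jMount /hrapp/* OC4J_AVDEO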

6) Keep all other settings as default. It will show the confirmation screen. Click on Deploy.

Deploying the Application in the OC4J instance – Using DCMCTL

For deploying the application using the DCMCTL utility you can use the following command:

./dcmctl deployApplication -f /dy/oracle/product/AS10g/hrapp.ear -a hrapp -co OC4J_AVDEO

Application: hrapp
Component Name: OC4J_AVDEO
Component Type: OC4J
Instance: mid.ap101fam.us.oracle.com

Here -f /dy/oracle/product/AS10g/hrapp.ear is the EAR file name, -a hrapp is the application name, and -co OC4J_AVDEO is the instance where we are deploying the application.

Once the application is deployed, you can try to access it using the URL

http://(mid tier hostname):(mid tier HTTP port)/(URL mapping)

In my case the URL becomes http://ap101fam.us.oracle.com:7778/hrapp/

However, if you try to access the application, it might not work. This is because you need to set up a Data Source for this application.

A Data Source is for the connection to the database. An application has to connect to the database for saving application data or retrieving application-specific data from the production database. Each OC4J instance has its own set of data sources. These data sources belong to a JNDI tree (Java Naming and Directory Interface). So the connection between the application and the database is made through a JNDI lookup.

When a developer codes an application he uses a local database name in the form of a data source. A deployer of the application has to map this local data source to the physical database. This can be done using the following steps.

Setting up Data Source

1) You can either create a new Data Source or edit the existing default data source. The default data source will be OracleDS. You can also create/see data sources specific to your application by going to the Applications tab and then clicking on your application. Then on the application home page, you can see the “Data Source” link at the bottom.

But this data source would be empty as we did not create any data source for the application.

We can use the default data source in our case. For this we need to change the database connection details and the username/password it should use to connect to the database in the default data source (OracleDS). You can do this as follows.

2) From the mid tier home page, go to the OC4J instance that you have created (OC4J_AVDEO in my case).

3) Click on Administration Tab

4) You will see a “Data Source” link. The default data source OracleDS is listed here. Click on “Edit”.

Here you need to change the JDBC URL to point to the correct hostname, port and database name. You also need to change the username and password. Other values can be kept at their defaults.
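
Behind the scenes these values live in the data-sources.xml file of the OC4J instance (under the instance's j2ee config directory). A typical thin-driver entry looks roughly like the sketch below; the host, port, SID and credentials are hypothetical, so adjust them to your own database.

<data-source
    class="com.evermind.sql.DriverManagerDataSource"
    name="OracleDS"
    location="jdbc/OracleCoreDS"
    ejb-location="jdbc/OracleDS"
    connection-driver="oracle.jdbc.driver.OracleDriver"
    username="hr"
    password="hr"
    url="jdbc:oracle:thin:@dbhost.example.com:1521:orcl"
/>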

Once this is done, you can save and restart the OC4J instance. After that you should be able to access the application using the URL given above (http://ap101fam.us.oracle.com:7778/hrapp/ in my case).

You might have one doubt in your mind regarding the whole process; at least I had one when I learned this. The question is: how does the application know that it needs to use this data source (OracleDS) for making the connection to the database? Nothing is defined at the application level, and we never planned to go with this name for the data source; we could even create our own data source. So how does the application know which data source it should use?

The answer to this is JNDI. For each OC4J instance, as I explained, there is a JNDI lookup tree, and we configure our application to use a particular data source through this JNDI name.

If you go to your application home page (Mid tier home -> your OC4J Instance -> Applications Tab -> Click on your application) and then click on the “General” link at the bottom, you can see the “Default Data Source (JNDI Name)” field. This field has the value jdbc/OracleDS. This is how your application knows that it needs to use the OracleDS data source. So if we create a new data source, we need to edit this entry as well.

Hope this helps !!

Configuring Virtual Host – Oracle AS10g

Introduction:

Sometimes we need to configure a virtual host setting for our server. This is required when we have two servers, where one is the main server and the other is a standby server. If the main server goes down the standby server can take over and the user does not have to change the URL. That means we can have the same host name but different IPs (or different physical machines).

The other situation is when we want to host multiple web sites on a single host. In this case the IP address remains the same (there is only one physical server) whereas we can have multiple hostnames configured for it.

The first example (same hostname, different IP addresses) is called an IP based virtual host, and the second example (same IP address, multiple hostnames) is called a name based virtual host.

So we have two types of virtual hosts:

  1. IP based (same hostname – multiple IPs; used in backup or standby situations).
  2. Name based (same IP address or physical server – multiple hostnames; used for hosting multiple web sites)

Configuring Virtual Hosts:

We will see here how to configure a name based virtual host. That means we will be able to refer to our server with two URLs or two hostnames.

Configuring virtual hosts needs only a simple change in the httpd.conf file. We can do it using Enterprise Manager or from the backend by directly editing httpd.conf. We will see both options here.

*** The below virtual host configuration is for middle tier

1) Configuration using Enterprise Manager

Before starting the virtual host configuration, we need the new hostname to be recognized by the server. For this we need to make an entry in our /etc/hosts file as shown below.

144.25.76.171   advait.us.oracle.com    advait

Here 144.25.76.171 is the actual IP address of the host, followed by advait.us.oracle.com, which is the hostname plus the domain name, and finally the short hostname. These changes need to be made as the root user.

Step 1) Create a new port

You can create a new port on which your HTTP server can listen. Actually this step can be skipped and you can use an existing port for the new virtual host, but just to avoid confusion we will create a new port for the virtual host.

You can add “Listen 9988” (ignore the double quotes) to the httpd.conf file of the middle tier and bounce the HTTP server of the middle tier. The port number I have selected here is 9988, but you can select any port that is free.

From Enterprise Manager you can go to Mid Tier home page -> HTTP_Server component -> Administration tab -> Server Properties. Here you can see the “Listening Addresses and Ports” section. Just add a row and put in the new port number as shown in the figure below.

Scroll down to the bottom and click on the Apply button. It will ask to restart the HTTP server; click on Yes. If you are making the changes manually, remember to bounce the HTTP server using OPMN.
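
For the manual route, bouncing the HTTP server with OPMN looks roughly like this (run from the mid tier ORACLE_HOME; the relative path is an assumption based on the install used in this post).

bash-2.05$ cd mid/opmn/bin
bash-2.05$ ./opmnctl restartproc ias-component=HTTP_Server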

Step 2) Go to the Virtual Hosts section using Mid Tier Home -> HTTP Server component -> Virtual Hosts tab.

Click on the “Create” button. It will take you to the next screen. Here you have to enter the “DocumentRoot”. Enter a different path (other than htdocs). This way we can keep the content of the virtual host separate from that of our main server, which is useful for hosting 2 different web sites on the same server.

Step 3) Next it will ask you to enter the name of the new virtual host that you want to create. Enter the name that you put in the /etc/hosts file above. The figure below shows the same.

It will also ask which IP addresses to listen on. You can just enable it for all IP addresses, or you can provide a specific IP address and it will listen on that IP address and port only.

Step 4) In this step it will ask for the port number to listen on. If you remember, in step 1) we created a port to be used for the virtual host. You can select the same port from option 2 as shown below.

Step 5) This is the last step in the virtual host creation, where it will ask for the log file location and log level. It is better to give the log file an identifiable name so that you can recognize it later, or better still, create a new directory inside the logs directory for the virtual host's log files.

Finally it will show a summary of the inputs and you can submit it. This will add the entry in the httpd.conf file and then bounce the HTTP server so that the changes take effect.

This will put the entry in httpd.conf file as given below.

Listen 9988
NameVirtualHost advait.us.oracle.com:9988

<VirtualHost advait.us.oracle.com:9988>
ServerAdmin advait@advait.us.oracle.com
ServerName advait.us.oracle.com
DocumentRoot /dy/oracle/product/AS10g/mid/Apache/Apache/advait
ErrorLog "|/dy/oracle/product/AS10g/mid/Apache/Apache/bin/rotatelogs /dy/oracle/product/AS10g/mid/Apache/Apache/logs/error_log_advait 43200" common
</VirtualHost>

The Listen 9988 line was added when you created a new port for the virtual host setup.

After bouncing, the virtual host setting will take effect and you will be able to access the URL http://(Virtual_host_name):(port for virtual host)/

In my case the URL becomes http://advait.oracle.com:9988/

This will display index.html if it is present in your DocumentRoot for the virtual host.

2) Manual configuration

In case of manual configuration we just put the VirtualHost configuration directives in the httpd.conf file of the mid tier Apache and bounce Apache. This will enable the virtual host setup. The step of adding the hostname entry in /etc/hosts holds true for manual configuration as well.

Hope this helps !!

Enabling SSL for Oracle Application Server 10g

Introduction

This short post is to show you how we can enable SSL for application server console.

Concept

The application server console uses the management agents to get real time information about the various components installed on the server. The agents run on the host and continuously monitor the usage. The agents pass the required metric information on to the application server console. Therefore there should be a connection between the management agents running on the host and the application server console (which is a J2EE application deployed on a standalone OC4J instance). This connection is an insecure HTTP connection as shown in the figure below.

As you can see, the management agents talk to the application server instance / components using the insecure HTTP protocol to get the current status and performance metric values, and these values are returned to the application server console. The application server console also uses DCM to configure various components, hence a connection exists between those two as well.

Clients, or the users, use web browsers to connect to the AS console. This again is done using the insecure HTTP protocol.

We can secure the protocol between the web browsers and the AS console and also between the AS console and the management agents. The above figure then gets modified as shown below.

Here you can see that a secure HTTPS connection exists between the web browsers and the AS console and also between the AS console and the management agents. This can be done using the simple commands shown below.

1) Log in to the infra tier of AS 10g and go to ORACLE_HOME/bin

2) Run the following commands

./emctl stop iasconsole
Oracle Enterprise Manager 10g Application Server Control Release 10.1.2.0.2
Copyright (c) 1996, 2005 Oracle Corporation.  All rights reserved.
http://appserv1.acme.com:1811/emd/console/aboutApplication
Stopping Oracle Enterprise Manager 10g Application Server Control …  …
Stopped.

./emctl secure iasconsole
Oracle Enterprise Manager 10g Application Server Control Release 10.1.2.0.2
Copyright (c) 1996, 2005 Oracle Corporation.  All rights reserved.
http://appserv1.acme.com:1811/emd/console/aboutApplication
Generating Standalone Console Root Key (this takes a minute)…   Done.
Fetching Standalone Console Root Certificate…   Done.
Generating Standalone Console Agent Key…   Done.
Storing Standalone Console Agent Key…   Done.
Generating Oracle Wallet for the Standalone Console Agent…   Done.
Configuring Agent for HTTPS…   Done.
EMD_URL set in /dsk02/oracle/appserv1/sysman/config/emd.properties
Generating Standalone Console Java Keystore…   Done.
Configuring the website …   Done.
Updating targets.xml …    Done.

./emctl start iasconsole

3) Execute the above steps on the mid tier as well.

Once the above steps are completed you can access the same AS console page using the HTTPS protocol. The hostname and port remain the same.
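
To cross-check, you can query the console status and then point your browser at the HTTPS URL. This is a hedged example using the hostname and port shown in the output above.

./emctl status iasconsole

https://appserv1.acme.com:1811/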

Hope this helps !!

References:

Oracle Online Docs

Oracle Application Server 10g R2 – Installation

Introduction

Oracle Application Server 10g is middleware software from Oracle. The purpose of an application server is to handle the business logic. Below is the difference between a two-tier architecture and a multi-tier architecture.

In a two-tier architecture, business logic and presentation are both handled by the client. Hence we call it a thick client. Whereas in a multi-tier architecture we have only the presentation layer on the client side and the business logic is handled by the application server. Hence such clients are called thin clients. The three tier architecture is shown below.

Advantages of 3-tier architecture

1) In case of a change in business logic, the change needs to be done only in the application server (mid tier) as it is centralized. Whereas in a two tier architecture every client needs to be changed. So management is easy.

2) It reduces network traffic. In a 2 tier architecture the protocol used is TCP/IP, which is a stateful (continuous) connection. So whether data is being sent or not, the connection is always there. This increases network traffic and reduces bandwidth. Whereas in a 3 tier architecture we have a TCP/IP connection only between the mid tier and the database, and it is set up when the client requests some data. As soon as the data is provided the connection is closed. The connection between the client and the mid tier is HTTP, which is a stateless connection.

3) Performance is another factor which makes a 3 tier architecture a better solution than a 2 tier architecture. In a 2 tier architecture all business logic gets executed on the client side, so performance on the client suffers. This is in addition to database performance issues. In a 3 tier architecture performance is not an issue, as it is handled well in the application server.

Let's start with Application Server 10g.

Application Server 10g Components

Below is a very simple line sketch of the Application Server 10g component architecture. This shows which components are present in AS (Application Server) and why they are required.

All the components shown above come with the AS 10g R2 installation. All these components are categorized as given below.

All the components shown in green color + OHS for the mid tier (on the left) + Web Cache are installed as mid tier components. So the mid tier includes

  1. Web Cache
  2. OHS (Oracle HTTP Server)
  3. OC4J
  4. FORMS
  5. REPORTS
  6. DISCOVERER
  7. PORTAL
  8. WIRELESS

While installing this middle tier, the installation of these components is divided into 3 main install options.

  1. J2EE and Web Cache (This will install Web Cache, OHS and OC4J)
  2. Portal and Wireless (This will install all components of J2EE and Web Cache + Portal and Wireless components)
  3. Business Intelligence and Forms (This will install all components)

If we are installing just option 1 (J2EE and Web Cache) then the infrastructure tier is not required. However we can still install the infrastructure tier, but in this case it won't be used by these 3 components of the mid tier.

If we are installing either option 2 or 3, we have to install the infrastructure tier before installing these mid tier components. This is because these middle tier components, such as Portal and Wireless, will create their own repository in the infrastructure's metadata repository. So it is important that we first install the infrastructure tier before installing the mid tier, in case we want to go for option 2 or 3 for the mid tier.

The other components shown in blue + OHS on the right side are installed as the infrastructure tier. So the infrastructure tier includes the following components

  1. OHS (Oracle HTTP Server)
  2. Meta Data Repository
  3. SSO (Single Sign on)
  4. OID (Oracle Internet Directory)
  5. DAS (Delegated Administration Services)
  6. OCA (Oracle Certificate Authority)

Out of the above components of the infrastructure tier, SSO, OID, DAS and OCA are together called IDM (Identity Management). So the infrastructure tier mainly contains the metadata repository and identity management.

Having this brief knowledge about the AS 10g R2 components, let's start with the installation.

Installing Oracle Application server 10g R2

Before starting the installation, we should complete the pre-installation checks. This includes the following steps.

Pre-Installation Task

1) Check whether the required packages are installed on the OS.

glibc-2.3.4-2.9
glibc-common-2.3.4-2.9
binutils-2.15.92.0.2-13
compat-libstdc++-296-2.96-132.7.2
gcc-3.4.3-22.1
gcc-c++-3.4.3-22.1
libstdc++-3.4.3-22.1
libstdc++-devel-3.4.3-22.1
openmotif21-2.1.30-11.RHEL4.4
pdksh-5.2.14-30
setarch-1.6-1
make-3.80-5
gnome-libs-1.4.1.2.90-44.1
sysstat-5.0.5-1
compat-db-4.1.25-9
control-center-2.8.0-12
xscreensaver-4.18-5.rhel4.2

You can check the status of a package using the rpm -q command as the root user.
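
For example, checking a few of the packages from the list above could look like this (the exact versions will vary with your OS level).

rpm -q glibc binutils pdksh sysstat openmotif21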

2) Add following in /etc/sysctl.conf

kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 256 32000 100 142
fs.file-max = 131072
net.ipv4.ip_local_port_range = 10000 65000
kernel.msgmni = 2878
kernel.msgmax = 8192
kernel.msgmnb = 65535
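
After editing /etc/sysctl.conf, the new kernel parameter values can be loaded without a reboot by running the following as root.

/sbin/sysctl -p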

3) Make an entry for the hostname and domain name in /etc/sysctl.conf
kernel.hostname=advait
kernel.domainname=oracle.com

4) Make following changes in /etc/security/limits.conf
*        soft   nproc         2047
*        hard   nproc         16384
*        soft   nofile        2048
*        hard   nofile        65536

5) Add the following line to the /etc/pam.d/login file
session    required     /lib/security/pam_limits.so

6)  Make the following entry in /etc/profile
if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi

7)  Create a group and user.
Primary Group – oinstall
/usr/sbin/groupadd oinstall
Secondary Group
/usr/sbin/groupadd dba
/usr/sbin/useradd -g oinstall -G dba -md /home/oracle oracle
Give the password
[root@dhcp-hydcampusp1-10-176-107-101 home]# passwd oracle
Changing password for user oracle.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
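
You can quickly confirm the user and group setup with the id command (the uid/gid numbers will differ on your system).

id oracle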

8) As the root user, make the following entry in /etc/hosts.

(IP Address)       (hostname.domain.name)       (hostname)

Example

10.176.106.101      advait.oracle.com       advait

Installing Infrastructure Tier

The installation of AS10g should be done in a specific order only, as explained before, because of the dependencies between the components. Here we are going to install the Portal and Wireless option of the mid tier, which needs a database repository for storing the information of these components. So we will first install the infrastructure tier and then install the mid tier. Please follow the screen shots below.

Run the runInstaller executable present in the Disk1 directory of your software dump.

Screen 1 : Welcome – This is the welcome screen. Click on Next.

Screen 2: Specify Inventory location – In this case you can take the default inventory location. It will then ask you to run the orainstRoot.sh script as the root user. Once you run this script as root on that host, click on Continue.

Click on Next

Screen 3: Specify File Locations – Specify the location of Oracle Home for Infra tier and click next.

Screen 4: Select the product to Install – In this case select “Oracle Application Server Infrastructure 10g”. If you choose “Oracle Application Server 10g”, then it will install the infra tier first and then automatically start installing the mid tier.

Screen 5: Select Installation type – As I said before, the infra tier consists mainly of IDM (Identity Management) and the metadata repository. IDM consists of OID, SSO, DAS and OCA. The metadata repository is a 10g Oracle database. We will choose to install both here.

Screen 6: runInstaller will perform the basic checks on the operating system. If any of the checks generates a warning, you can verify it, select the check box, and then click on Next.

Screen 7: Pre-Installation Requirement – You should have root privileges as you need to run root.sh after installation. Select the checkbox and click on next.

Screen 8: Select Configuration Option – You can select individual components which we are going to install.

Screen 9: Port Configuration Option – For Port Configuration, select Automatic port selection to keep installation simple. Else you can specify the ports in staticports.ini file and upload the same.

Screen 10: Namespace in Internet Directory – Select the default namespace only. If your /etc/hosts file is correct then the default value will be populated correctly. Else it may get populated as localhost. In that case check the entry in /etc/hosts.

Screen 11: OCA Distinguished Name – You can specify only the mandatory fields. This is required for creating the digital certificate for SSL configuration.

Screen 12: OCA Key length – This is the key length to be used for encryption. You can select the default 2048 and click on next.

Screen 13: OCA Administrator Password – This is the password for Oracle Certificate Authority admin. Select any password, confirm the same and click on next.

Screen 14: Database Configuration Options – Here we provide the name of the database. You can accept the default orcl or give any new name.

Screen 15: Database Schema Password – Select any password for database schema, confirm the same and click on next.

Screen 16: Instance Name and ias_admin Password – Here we are creating a new instance. Note that this is different from the database instance. So for the infrastructure tier you can give any name (for example, infra) and also the password you want to use for Enterprise Manager.

Screen 17: Installation Summary – At the end it will show installation summary. Click on Install.

It will show the installation progress. Remember that at this stage it is just copying the files and doing some setup. The actual component configuration comes next.

The installer has now started the component configuration. It will configure each component one by one.

When it comes to Meta Data Repository, it will create a new database using DBCA.

At the end of the installation it will show the URLs and ports. The above URL is the URL for the EM of the infra tier. Remember that we will have 2 EMs (Enterprise Manager), one for the infra tier and one for the mid tier.

With the completion of the infra tier, we can now start with the installation of the mid tier. But note that during the installation of the mid tier your infrastructure tier should be up and running.

Installing Mid Tier components

Run the same runInstaller executable present in the Disk1 directory of the software dump which you used during the infra tier installation.

Screen 1 : Welcome – This is the welcome screen. Click on Next.

Screen 2: Specify File Locations – Specify the location of Oracle Home for mid tier and click next.

Screen 3: Select Product to install – If you remember, for the infra tier installation we selected the second option, i.e. “Oracle Application Server Infrastructure 10g”. Here we have to install the mid tier, which is present under the “Oracle Application Server 10g” option. So select the “Oracle Application Server 10g” option.

Screen 4: Select Installation Type – Here you can select the required installation type depending on the components you want to install. For this installation we will go for Portal and Wireless.

Screen 5: runInstaller will perform the basic checks on the operating system. If any of the checks generates a warning, you can verify it, select the check box, and then click on Next.

Screen 6: Pre-Installation Requirement – You should have root privileges as you need to run root.sh after installation. Select the checkbox and click on next.

Screen 7: Select Configuration Option – You can select individual components which we are going to install.

Screen 8: Port Configuration Option – For Port Configuration, select Automatic port selection to keep installation simple. Else you can specify the ports in staticports.ini file and upload the same.

Screen 9: Register with OID – Here you need to provide the hostname and port number of the OID which you have already installed as a part of the infra tier. By default the port number is 389. You can also check the port numbers of the infra tier by going to the ORACLE_HOME/install directory and checking the file portlist.ini. Here ORACLE_HOME refers to the infra ORACLE_HOME.
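
For example, the OID port can be pulled straight out of portlist.ini on the infra tier (the variable below assumes ORACLE_HOME points to the infra home).

grep -i "Internet Directory" $ORACLE_HOME/install/portlist.ini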

Screen 10: OID Login – Here the OID superuser login should be given; by default it is always orcladmin. You have to give the password for this user. This password will be set for orcladmin; you can give any password.

Screen 11: Select AS10g Meta Data Repository – This will show the name of database that we created as meta data repository as a part of infra installation.

Screen 12: Specify Instance Name – Here you will specify the name of the instance that needs to be created for mid tier and also the admin password for logging into EM (Enterprise Manager).

Screen 13: Summary – At the end it will show summary. Click on install.

You can see the progress of the installation. After this initial copy and setup, it will start configuring each component one by one as shown below.

Progress of each component.

At the end it will give the URL for enterprise manager and port details. You can access enterprise manager to manage Application server.

I don’t want to extend this post more now. This completes the installation. More details will be provided in next posts. Thanks.

Hope this helps !!

References:

Oracle Application Server 10g R2 Installation guide

Oracle 9i Undo Management

Introduction

There is so much to tell about undo management that even this post is not sufficient to explain the concept. Still, here is a small effort to explain the undo concept in Oracle Database 9i. Let's start with the fundamentals and then we will see how automatic undo management and manual undo management differ. We will also see undo segments and the difference between private and public undo segments.

Undo fundamentals

Undo is perhaps the most important functionality present in the database, without which a database cannot work. Undo serves the following purposes in a database:

  • Provides read consistency of data to all users in the database
  • Permits users to roll back or discard the changes they have made in a transaction
  • Provides transaction recovery in case of instance failure while the user is in the middle of a transaction

When a user fires a DML statement (update, delete or insert), Oracle fetches the data from disk into the database buffer cache for the user to change, and at the same time a copy of the original data is placed in the undo segments. This happens for every DML transaction executed in the database. When a user makes changes to the data, he can verify the change before doing a COMMIT. If he thinks that the data change was not correct, he can ROLLBACK the changes back to the original. This functionality is provided by undo: the copy of the data which was placed in the undo segment is copied back and the data change is discarded. This is the very basic principle; the actual transaction handling is a bit more complicated.

When data is fetched into the database buffer cache, a copy of it has to be placed in an undo segment. Now imagine that there are many users working in the database simultaneously, and each DML statement they fire needs to place the original copy in an undo segment. How will the undo segments be managed among so many transactions? To answer this question we need to understand automatic undo management and manual undo management. Let's first look at automatic undo management.

Automatic Undo Management

In case of automatic undo management, we delegate all the burden of undo segment management to Oracle. Oracle will take care of creating the required number of segments as and when required. In return, what Oracle wants is a tablespace in which it can create the segments. So we have to provide an undo tablespace which Oracle can use to create undo segments and store the original copy of the data for users to view. For enabling automatic undo management, we need to set the following parameters.

1) UNDO_MANAGEMENT (either AUTO or MANUAL).

For automatic undo management this parameter should be set to AUTO.

2) UNDO_TABLESPACE

Here we provide the name of the undo tablespace that we want Oracle to use. An undo tablespace is a permanent tablespace that we create (just like any other permanent tablespace) and hand over to Oracle.

3) UNDO_RETENTION

To understand undo retention consider this scenario. User A fetches the data to update; a copy of the data is also placed in an undo segment in the undo tablespace. User B queries the same data and he is given the old copy of the data from the undo segment (because user A has not yet committed the changes). After a few minutes, user A commits the changes.

When the data was copied to an undo segment in the undo tablespace, a lock was also placed on that data, which prevents other undo data from overwriting it. As soon as the user commits, this lock is taken off and the data can be overwritten.

Imagine that this data gets overwritten quickly; when user B re-queries the data in the same transaction he gets the error ORA-01555: snapshot too old.

This happened because the undo data from which user B was querying has been overwritten by some other transaction's undo data. When a user fires a DML statement, Oracle looks for free undo space to put the copy of the data, and as soon as it finds free space it overwrites the previous data. However, it would be good if Oracle retained the data in these segments for some more time even after the user has committed and the lock is removed. But for how long should Oracle keep this data after the commit? This time is decided by the init.ora parameter UNDO_RETENTION. The time is specified in seconds. Usually a value of 900 (seconds) is seen to be fine.
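
Putting the three parameters together, a minimal automatic undo configuration might look like the sketch below; the tablespace name, datafile path and sizes are only examples.

-- init.ora / spfile (UNDO_MANAGEMENT is static, so it needs a restart)
-- undo_management = AUTO
-- undo_tablespace = UNDO_TBS1
-- undo_retention  = 900

CREATE UNDO TABLESPACE undo_tbs1
  DATAFILE '/u01/oradata/orcl/undo_tbs1_01.dbf' SIZE 500M;

ALTER SYSTEM SET undo_tablespace = UNDO_TBS1;
ALTER SYSTEM SET undo_retention = 900;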

In the header of an undo segment there is a transaction table. This table contains records about which transactions are inactive and which transactions are currently using the undo segment. The size of an undo segment is decided automatically by Oracle and is sufficient to support the current number of transactions. An extent is simply a chunk of space within the segment, allocated automatically as the segment fills. When a segment gets full with data from various transactions, additional space is allocated to the undo segment by allocating extents to it. If an extent also gets full, the next extent gets allocated. This continues until Oracle can accommodate the current active transactions.

Manual Undo Management

In case of manual undo management a DBA is supposed to create the undo segments manually. To understand this, let's first understand system and non-system undo segments, and public and private undo segments.

System and non-system undo segments

The system undo segment is the one which resides in the system tablespace and is created by Oracle when you create a database. This undo segment will be used by Oracle while updating the data dictionary. A normal user or a DBA is not supposed to use this segment.

A non-system undo segment is one which is created outside the system tablespace and which will be used for normal transactions. A non-system undo segment can be public or private.

Public undo segment and private undo segment

A private undo segment is one which is brought online by setting the init.ora parameter ROLLBACK_SEGMENTS. You can specify the name(s) of the undo segments which you want to use in this parameter as comma separated names, and Oracle will acquire those undo segments when it starts the database. These are called private undo segments because they are acquired explicitly by the instance after starting up.

Public undo segments are the ones that are available in the database pool from the undo tablespace that you create.

When configuring the undo management manually, you need to define 2 parameters in init.ora

  1. TRANSACTIONS
  2. TRANSACTIONS_PER_ROLLBACK_SEGMENT

Based on the values of these parameters, Oracle will decide the number of rollback segments required for the proper working of the database.

For example, if the number of transactions during normal operation (TRANSACTIONS) is defined as 146 in the init.ora file and TRANSACTIONS_PER_ROLLBACK_SEGMENT is defined as 18, then Oracle needs CEIL(146/18) = 9 rollback segments for proper functioning. Oracle then checks whether the ROLLBACK_SEGMENTS parameter has that many rollback segments defined. If not, the difference is taken from the pool of public undo segments.

So at any given time there are many undo segments and many extents allocated to each undo segment. The extents are used sequentially. For example, for the first transaction extent 1 will be allocated, for transaction 2 extent 2 will be allocated, and so on.

Consider the figure below. In this case 5 extents have currently been allocated.

Now if a new transaction comes in, Oracle will check if any of the extents is free (inactive). Even a single extent can hold more than one transaction. Extents are made up of Oracle blocks, but a single Oracle block can hold data from only one transaction. Now imagine that extent 5 has some space and all the other extents are full. In this case the new transaction will be allocated to extent 5. As the transaction proceeds, the space in the extent starts getting occupied. A stage will come when extent 5 gets full. Now ideally one of the other extents should have been freed (become inactive) because of the completion of some other transaction, but if all the transactions are long running then none of the extents will be freed. In this case Oracle will pull a new extent into this “cycle”: extent 6, as given below.

With that done, we now have 6 extents in the cycle. After some time some of the transactions commit, their extents get freed and can be allocated to other active transactions if required. Again, if the extents are insufficient a new extent will be pulled. This continues until the tablespace is full.

For manual undo management, you need to set UNDO_MANAGEMENT=MANUAL.
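
A minimal manual-mode setup in init.ora could look like the lines below; the segment names and values are only examples and must match rollback segments you have actually created.

undo_management = MANUAL
rollback_segments = (rollseg01, rollseg02)
transactions = 146
transactions_per_rollback_segment = 18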

“Optimal” Clause

From the above scenario you can imagine that if the transactions are big enough then the cycle of extents will grow big and can become unmanageable. To overcome this problem there is a clause defined while creating undo segments: the “OPTIMAL” clause. We can set OPTIMAL to some value, either in KB or in MB. If we take the above example, consider each extent to be of size 1M and assume we have defined OPTIMAL=5M while creating the undo segment. Now until 5 extents are present in the cycle, no action will be taken. But when there is no space in any of the extents a 6th extent will be pulled. As soon as any extent gets freed it will be removed from the cycle and only 5 extents (equal to the size specified by OPTIMAL) will be maintained. If some of the extents amongst the 5 are free, none of them will be removed, because OPTIMAL is specified as 5M and Oracle will always try to maintain the size of the segment at 5M. Following is an example of creating a rollback segment with the OPTIMAL parameter.

CREATE ROLLBACK SEGMENT rollseg01 TABLESPACE UNDO_TBS1 STORAGE ( INITIAL 12k NEXT 12k MINEXTENTS 25 MAXEXTENTS 400 OPTIMAL 300k);

Data Dictionary

You can see the name of rollback segments, tablespace they reside into and status using following data dictionary view.

SQL> select SEGMENT_NAME, TABLESPACE_NAME, SEGMENT_ID, STATUS
2 from dba_rollback_segs
3 order by segment_id;

SEGMENT_NAME                   TABLESPACE SEGMENT_ID STATUS
------------------------------ ---------- ---------- ----------------
SYSTEM                         SYSTEM              0 ONLINE
_SYSSMU1_1207284872$           UNDO_TBS            1 ONLINE
_SYSSMU2_1207309696$           UNDO_TBS            2 ONLINE
_SYSSMU3_1207980887$           UNDO_TBS            3 ONLINE
_SYSSMU4_1207980890$           UNDO_TBS            4 ONLINE
_SYSSMU5_1207980890$           UNDO_TBS            5 ONLINE
_SYSSMU6_1207984687$           UNDO_TBS            6 ONLINE
_SYSSMU7_1207984689$           UNDO_TBS            7 ONLINE
_SYSSMU8_1207984689$           UNDO_TBS            8 ONLINE
_SYSSMU9_1207984689$           UNDO_TBS            9 ONLINE
_SYSSMU10_1207984689$          UNDO_TBS           10 OFFLINE

SEGMENT_NAME                   TABLESPACE SEGMENT_ID STATUS
------------------------------ ---------- ---------- ----------------
_SYSSMU11_1207984689$          UNDO_TBS           11 OFFLINE
_SYSSMU92_1204797035$          UNDO_TBS           92 ONLINE
_SYSSMU93_1204797035$          UNDO_TBS           93 ONLINE
_SYSSMU94_1204797035$          UNDO_TBS           94 ONLINE
_SYSSMU95_1204797035$          UNDO_TBS           95 ONLINE
_SYSSMU96_1204797035$          UNDO_TBS           96 ONLINE

If you look carefully here, the segment SYSTEM belongs to the SYSTEM tablespace and has ID 0. This is the system undo segment.

Hope this helps !!

Oracle Database 9i – Basic Architecture

Introduction

This post is regarding the basic database architecture for 9i. It is quite late to be posting this basic stuff, but I realized that the blog would be incomplete without even a brief architecture overview. So here it is.

Below is the figure which gives an overview of “Inside Oracle”.

An Oracle database is a combination of oracle Instance and data files on the file system.

Oracle Database = Oracle Instance + Datafiles

Again, an Oracle instance is nothing but the memory architecture and the background processes. Let's start the discussion with the memory architecture first.

Memory Architecture

Oracle database uses memory for its operation. The total memory allocated to the Oracle database can be broadly categorized into SGA (System Global Area) and PGA (Program Global Area).

The SGA contains the following data structures

  • Database buffer cache
  • Redo log buffer
  • Shared pool
  • Java pool
  • Large pool (optional)
  • Data dictionary cache
  • Other miscellaneous information

We can also categorize the SGA into fixed SGA and variable SGA. When asked about fixed SGA, AskTom says that “fixed SGA is a component of the SGA that varies in size from platform to platform and release to release. It is “compiled” into the database. The fixed SGA contains a set of variables that point to the other components of the SGA and variables that contain the values of various parameters. The size of the fixed SGA is something over which we have no control and it is generally very small. Think of this area as a “bootstrap” section of the SGA, something Oracle uses internally to find the other bits and pieces of the SGA.”

Variable SGA contains the 4 main components listed above: the Database Buffer Cache, the Redo Log Buffer, the Shared Pool and the Large Pool. We call it variable SGA because we can alter the size of each of these components manually using the ALTER SYSTEM command. The size of each component of the variable SGA is determined by INIT.ORA parameters. Following are the INIT.ORA parameters for each component.

  • Database Buffer Cache – db_block_buffers
  • Redo Log Buffer – log_buffer
  • Shared Pool – shared_pool_size
  • Large Pool – Large_pool_size

We cannot however alter the size of Fixed SGA.

Database Buffer Cache – This is used to hold data in memory. Whenever a user accesses data, it gets fetched into the database buffer cache and is managed according to an LRU (Least Recently Used) algorithm. Advantages – if a user requests data, which gets fetched into the buffer cache, then the next time he asks for the same data within a short period of time, the data will be read from the buffer cache and the Oracle process does not have to fetch it again from disk. Reading data from the buffer cache is a faster operation. Another advantage is that if a user is modifying the data, it can be modified in the buffer cache, which is a faster operation than modifying the data directly on disk.

Redo Log Buffer – This memory area holds the data which is going to be written to the redo log files. Why do we need this data? To redo (recover) the changes if the need arises. Instead of writing the data directly to the redo log files, it is first written to the log buffer, which improves performance, and then on the occurrence of certain events it is written to the redo log file.

Shared Pool – This contains 2 memory sections: 1) Library Cache 2) Dictionary Cache. The library cache holds the parsed SQL statements, execution plans and parsed PL/SQL code. The dictionary cache holds information about user privileges, table and column definitions, passwords etc. These 2 memory components are included in the size of the shared pool.

Large Pool – If defined, it is used for heavy operations such as bulk copying during backup or restore operations.

The total size of SGA is determined by a parameter SGA_MAX_SIZE. Below is the simple calculation of memory sizes.

SQL> show sga

Total System Global Area  577574308 bytes
Fixed Size                   452004 bytes
Variable Size             402653184 bytes
Database Buffers          163840000 bytes
Redo Buffers               10629120 bytes

This shows the fixed and variable size SGA. Fixed size SGA, as I said, is not in our control. However we can verify the size of the variable SGA and the other memory values shown above.

Database Buffers          163840000 bytes

SQL> show parameters db_block_buffer

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_block_buffers                     integer     20000

This value is in terms of blocks. we can find the size of a block using DB_BLOCK_SIZE parameter

SQL> show parameters db_block_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_block_size                        integer     8192

So Database Buffers = db_block_buffers X db_block_size = 20000 X 8192 = 163840000 bytes

Also Variable size = “Shared Pool Size” + “Large Pool Size” + “Java Pool size” (some times defined)

SQL> SELECT pool, sum(bytes) from v$sgastat group by pool;

POOL        SUM(BYTES)
----------- ----------
java pool     50331648
shared pool  352321536
              11069860

Variable size = 352321536 + 50331648 = 402653184 bytes

Program Global Area

The PGA contains information about bind variables, sort areas, and other aspects of cursor handling. It is not a shared area and every user has his own PGA. But why is a PGA required for every user? The reason is that even though the parse information for the SQL or PL/SQL may be available in the library cache of the shared pool, the values with which the user wants to execute the SELECT or UPDATE statement cannot be shared. These values are stored in the PGA. This is also called the Private Global Area.

Going still deeper into the memory structure…

The database buffer cache is again divided into 3 different types of cache.

  1. Default Cache
  2. Keep Cache
  3. Recycle Cache

If we define the cache size using DB_CACHE_SIZE (or DB_BLOCK_BUFFERS along with the block size) then this will be the default cache. The cache has a limited size, so not all the data on disk can fit in the cache. When the cache is full, subsequent cache misses cause Oracle to write dirty data already in the cache to disk to make room for the new data.

You can configure the database buffer cache with separate buffer pools that either keep data in the buffer cache or make the buffers available for new data immediately after using the data blocks, as illustrated in the sketch after the list below.

  • The KEEP buffer pool retains the schema object’s data blocks in memory. This is defined using the INIT.ORA parameter DB_KEEP_CACHE_SIZE
  • The RECYCLE buffer pool eliminates data blocks from memory as soon as they are no longer needed. This is defined using the INIT.ORA parameter DB_RECYCLE_CACHE_SIZE
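
As a hedged sketch, the pools are sized through the parameters above and individual segments are then assigned to them at the object level; the table names and sizes below are hypothetical.

-- init.ora: db_keep_cache_size = 32M, db_recycle_cache_size = 16M

ALTER TABLE hr.lookup_codes STORAGE (BUFFER_POOL KEEP);
ALTER TABLE hr.audit_trail STORAGE (BUFFER_POOL RECYCLE);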

You can also define multiple DB block sizes using the following parameters. For example, if you have defined a standard default block size of 4K, then the following parameters can be used to define caches for 2K, 8K, 16K and 32K block sizes.

DB_2K_CACHE_SIZE
DB_8K_CACHE_SIZE
DB_16K_CACHE_SIZE
DB_32K_CACHE_SIZE

Note that you can define the Keep and Recycle caches only for the standard block size, and the buffer cache size is the sum of the sizes of each of these pools. A quick sketch of using a non-standard block size follows.
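
As a sketch, the nK cache parameter only reserves memory; to actually use a non-standard block size you also create a tablespace with that block size. The names and sizes here are only examples.

ALTER SYSTEM SET db_16k_cache_size = 32M;

CREATE TABLESPACE data_16k
  DATAFILE '/u01/oradata/orcl/data_16k_01.dbf' SIZE 100M
  BLOCKSIZE 16K;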

Shared Pool Reserved Size

Shared Pool, as we have seen previously, contains the parsed SQL statements and execution plans. With continuous use of the database, over a period of time the shared pool gets fragmented. New parsed SQL statements and execution plans come in and old ones get aged out and hence overwritten. This can also lead to larger packages being aged out as new entries go into the shared pool, and subsequent access to such large packages will again take time to parse and create the execution plan. This might cause performance issues.

To avoid such a situation, you can define the parameter SHARED_POOL_RESERVED_SIZE. This sets aside a reserved area within the shared pool for large allocations. If an object (either a parsed SQL statement or an execution plan) is stored in the reserved shared pool area then it will not age out.
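
You can watch how the reserved area is used through v$shared_pool_reserved, and size it via the spfile (it is a static parameter, so SCOPE=SPFILE plus a restart are needed). The 50M value is only an example.

SELECT free_space, request_misses, request_failures FROM v$shared_pool_reserved;

ALTER SYSTEM SET shared_pool_reserved_size = 50M SCOPE=SPFILE;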

For large allocations, the order in which Oracle attempts to allocate space in the shared pool is the following:

  1. From the unreserved part of the shared pool.
  2. If there is not enough space in the unreserved part of the shared pool, and if the allocation is large, then Oracle checks whether the reserved pool has enough space.
  3. If there is not enough space in the unreserved and reserved parts of the shared pool, then Oracle attempts to free enough memory for the allocation. It then retries the unreserved and reserved parts of the shared pool.

Process Architecture

Oracle has several processes running in the background for the proper functioning of the database. Following are the main categories of processes.

  1. Server Process
  2. Background Process

Server Process – handles the requests of user processes connected to the instance. Server processes (or the server portion of combined user/server processes) created on behalf of each user's application can perform one or more of the following:

  • Parse and execute SQL statements issued through the application
  • Read necessary data blocks from datafiles on disk into the shared database buffers of the SGA, if the blocks are not already present in the SGA
  • Return results in such a way that the application can process the information

Background Process – An Oracle instance can have many background processes; not all are always present. The background processes in an Oracle instance include the following:

  • Database Writer (DBW0 or DBWn)
  • Log Writer (LGWR)
  • Checkpoint (CKPT)
  • System Monitor (SMON)
  • Process Monitor (PMON)
  • Archiver (ARCn)
  • Recoverer (RECO)
  • Lock Manager Server (LMS) – Real Application Clusters only
  • Queue Monitor (QMNn)
  • Dispatcher (Dnnn)
  • Server (Snnn)

On many operating systems, background processes are created automatically when an instance is started.

Database writer (DBWn) – The database writer process (DBWn) writes the contents of buffers to datafiles. The DBWn processes are responsible for writing modified (dirty) buffers in the database buffer cache to disk. Although one database writer process (DBW0) is adequate for most systems, you can configure additional processes (DBW1 through DBW9) to improve write performance if your system modifies data heavily. These additional DBWn processes are not useful on uniprocessor systems.

Log Writer (LGWR) – The log writer process (LGWR) is responsible for redo log buffer management–writing the redo log buffer to a redo log file on disk. LGWR writes all redo entries that have been copied into the buffer since the last time it wrote.

Checkpoint (CKPT) – When a checkpoint occurs, Oracle must update the headers of all datafiles to record the details of the checkpoint. This is done by the CKPT process. The CKPT process does not write blocks to disk; DBWn always performs that work.

System Monitor (SMON) – The system monitor process (SMON) performs crash recovery, if necessary, at instance startup. SMON is also responsible for cleaning up temporary segments that are no longer in use and for coalescing contiguous free extents within dictionary-managed tablespaces. If any dead transactions were skipped during crash and instance recovery because of file-read or offline errors, SMON recovers them when the tablespace or file is brought back online. SMON wakes up regularly to check whether it is needed.

Process Monitor (PMON)

The process monitor (PMON) performs process recovery when a user process fails. PMON is responsible for cleaning up the database buffer cache and freeing resources that the user process was using. For example, it resets the status of the active transaction table, releases locks, and removes the process ID from the list of active processes.

PMON periodically checks the status of dispatcher and server processes, and restarts any that have died (but not any that Oracle has terminated intentionally). PMON also registers information about the instance and dispatcher processes with the network listener.

Archiver Process (ARCn)

The archiver process (ARCn) copies online redo log files to a designated storage device after a log switch has occurred. ARCn processes are present only when the database is in ARCHIVELOG mode, and automatic archiving is enabled.

An Oracle instance can have up to 10 ARCn processes (ARC0 to ARC9). The LGWR process starts a new ARCn process whenever the current number of ARCn processes is insufficient to handle the workload. The ALERT file keeps a record of when LGWR starts a new ARCn process.

Recoverer (RECO) – The recoverer process (RECO) is a background process used with the distributed database configuration that automatically resolves failures involving distributed transactions. The RECO process of a node automatically connects to other databases involved in an in-doubt distributed transaction. When the RECO process reestablishes a connection between involved database servers, it automatically resolves all in-doubt transactions, removing from each database’s pending transaction table any rows that correspond to the resolved in-doubt transactions.

The RECO process is present only if the instance permits distributed transactions and if the DISTRIBUTED_TRANSACTIONS parameter is greater than zero. If this initialization parameter is zero, RECO is not created during instance startup.

Lock Manager Server (LMS) – In Oracle9i Real Application Clusters, a Lock Manager Server process (LMS) provides inter-instance resource management.

Queue Monitor (QMNn) – The queue monitor process is an optional background process for Oracle Advanced Queuing, which monitors the message queues. You can configure up to 10 queue monitor processes. These processes, like the Jnnn processes, are different from other Oracle background processes in that process failure does not cause the instance to fail.

The ones explained above are the main background processes. Please refer to the Oracle documentation for the detailed Oracle 9i architecture.

Hope this helps !!

References:

Oracle9i Database Online Documentation (Release 9.0.1)

Tom Kyte

Oracle Database 10g scheduler – Advanced

You must have seen the basic concepts of the Oracle Database scheduler. The basic concepts include programs, jobs and schedules. The Oracle Database 10g scheduler – Basic post covers these concepts and shows how to create programs and jobs and how to schedule them.

This post is regarding the advanced concepts of the Oracle 10g scheduler. This includes job classes, windows, window groups and profiles. Let's see each of these concepts one by one.

Job Classes

A job class allows the DBA to categorize jobs according to similar resource requirements. For each job to be executed, the database needs resources to be allocated to the job. We can define resource consumer groups and limit the resources for each group. You can check the existing resource consumer groups using the query below.

SQL> SELECT consumer_group FROM dba_rsrc_consumer_groups;

CONSUMER_GROUP
------------------------------
SYS_GROUP
LOW_GROUP
OTHER_GROUPS
AUTO_TASK_CONSUMER_GROUP
DEFAULT_CONSUMER_GROUP
ORA$AUTOTASK_URGENT_GROUP
BATCH_GROUP
ORA$DIAGNOSTICS
ORA$AUTOTASK_HEALTH_GROUP
ORA$AUTOTASK_SQL_GROUP
ORA$AUTOTASK_SPACE_GROUP

CONSUMER_GROUP
------------------------------
ORA$AUTOTASK_STATS_GROUP
ORA$AUTOTASK_MEDIUM_GROUP
INTERACTIVE_GROUP

14 rows selected.

Each of these resource consumer groups has some defined level of resource usage. We can create a job class and allocate a resource consumer group to that job class using the RESOURCE_CONSUMER_GROUP parameter as given below.

-- Create a job class.
BEGIN
  DBMS_SCHEDULER.create_job_class (
    job_class_name          => 'test_job_class',
    resource_consumer_group => 'low_group');
END;
/

If we don't provide any value for the RESOURCE_CONSUMER_GROUP parameter, the job class is assigned to DEFAULT_CONSUMER_GROUP.

You can see the different job classes and their association with consumer groups using the following SQL.

SQL> SELECT job_class_name, resource_consumer_group FROM dba_scheduler_job_classes;

JOB_CLASS_NAME                 RESOURCE_CONSUMER_GROUP
------------------------------ ------------------------------
DEFAULT_JOB_CLASS
AUTO_TASKS_JOB_CLASS           AUTO_TASK_CONSUMER_GROUP
DBMS_JOB$
ORA$AT_JCURG_OS                ORA$AUTOTASK_URGENT_GROUP
ORA$AT_JCNRM_OS                ORA$AUTOTASK_STATS_GROUP
ORA$AT_JCMED_OS                ORA$AUTOTASK_MEDIUM_GROUP
ORA$AT_JCURG_SA                ORA$AUTOTASK_URGENT_GROUP
ORA$AT_JCNRM_SA                ORA$AUTOTASK_SPACE_GROUP
ORA$AT_JCMED_SA                ORA$AUTOTASK_MEDIUM_GROUP
ORA$AT_JCURG_SQ                ORA$AUTOTASK_URGENT_GROUP
ORA$AT_JCNRM_SQ                ORA$AUTOTASK_SQL_GROUP

JOB_CLASS_NAME                 RESOURCE_CONSUMER_GROUP
------------------------------ ------------------------------
ORA$AT_JCMED_SQ                ORA$AUTOTASK_MEDIUM_GROUP
AQ$_PROPAGATION_JOB_CLASS
XMLDB_NFS_JOBCLASS

14 rows selected.

Once a job class has been created, you can create jobs and assign them to it. You can even assign existing jobs to the job class using SET_ATTRIBUTE, as shown below.

BEGIN
  -- Job defined by an existing program and schedule and assigned to a job class.
  DBMS_SCHEDULER.create_job (
    job_name      => 'new_job',
    program_name  => 'test_plsql_block_prog',
    schedule_name => 'test_hourly_schedule',
    job_class     => 'test_job_class',
    enabled       => TRUE,
    comments      => 'Job defined by an existing program and schedule and assigned to a job class.');

  -- Assign an existing job to the job class.
  DBMS_SCHEDULER.set_attribute (
    name      => 'existing_job',
    attribute => 'job_class',
    value     => 'test_job_class');
END;
/

You can check the scheduled jobs and the job class to which they belong using the DBA_SCHEDULER_JOBS view.

SQL> SELECT owner, job_name, job_class, enabled FROM dba_scheduler_jobs;

OWNER      JOB_NAME                       JOB_CLASS            ENABL
---------- ------------------------------ -------------------- -----
SYS        PURGE_LOG                      DEFAULT_JOB_CLASS    TRUE
SYS        GATHER_STATS_JOB               AUTO_TASKS_JOB_CLASS FALSE
SYS        XMLDB_NFS_CLEANUP_JOB          XMLDB_NFS_JOBCLASS   FALSE
SYS        AUTO_SPACE_ADVISOR_JOB         AUTO_TASKS_JOB_CLASS FALSE
SYS        FGR$AUTOPURGE_JOB              DEFAULT_JOB_CLASS    FALSE
SYS        ORA$AUTOTASK_CLEAN             DEFAULT_JOB_CLASS    TRUE
SYS        HM_CREATE_OFFLINE_DICTIONARY   DEFAULT_JOB_CLASS    TRUE
SYS        DRA_REEVALUATE_OPEN_FAILURES   DEFAULT_JOB_CLASS    TRUE
ORACLE_OCM MGMT_CONFIG_JOB                DEFAULT_JOB_CLASS    TRUE
ORACLE_OCM MGMT_STATS_CONFIG_JOB          DEFAULT_JOB_CLASS    TRUE
SYS        BSLN_MAINTAIN_STATS_JOB        DEFAULT_JOB_CLASS    TRUE

Window

A window is a time slot created within the database and is associated with a resource plan. A resource plan defines the resource allocation among resource consumer groups. Do not confuse a resource consumer group with a resource plan: a resource consumer group is associated with a job class and determines how much resource the jobs in that class get for execution, whereas a resource plan is the blueprint for resource allocation among consumer groups.

So when a resource plan is associated with a window, that resource plan is in effect for the period during which the window is active. The plan decides the resource allocation to the different consumer groups.

For example, suppose the resource plan assigned to the currently active window is allocated 50% of the total CPU, and two job classes mapped to two consumer groups are running, with one group entitled to 20% of the plan's CPU and the other to 80%. The plan's 50% share of CPU is then split between the two job classes in that 80:20 ratio, and each job class runs its jobs within its share.
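As an illustration of how such a plan might be put together, the sketch below creates a hypothetical plan (TEST_DAY_PLAN) with CPU directives for LOW_GROUP and the mandatory OTHER_GROUPS. The plan name and percentages are assumptions for this example, not part of the original post:

BEGIN
  DBMS_RESOURCE_MANAGER.create_pending_area;

  -- Hypothetical plan used only for illustration.
  DBMS_RESOURCE_MANAGER.create_plan (
    plan    => 'TEST_DAY_PLAN',
    comment => 'Hypothetical plan for scheduler windows');

  -- Give LOW_GROUP 80% of the CPU at level 1 (assumed figure).
  DBMS_RESOURCE_MANAGER.create_plan_directive (
    plan             => 'TEST_DAY_PLAN',
    group_or_subplan => 'LOW_GROUP',
    comment          => 'Scheduler job classes mapped to LOW_GROUP',
    cpu_p1           => 80);

  -- Every plan must include a directive for OTHER_GROUPS.
  DBMS_RESOURCE_MANAGER.create_plan_directive (
    plan             => 'TEST_DAY_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'All remaining sessions',
    cpu_p1           => 20);

  DBMS_RESOURCE_MANAGER.validate_pending_area;
  DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/

Such a plan can then be supplied through the RESOURCE_PLAN parameter of CREATE_WINDOW. The examples below keep RESOURCE_PLAN as NULL, so no plan switch happens when the window opens.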

BEGIN
  -- Window with a predefined schedule.
  DBMS_SCHEDULER.create_window (
    window_name     => 'test_window_1',
    resource_plan   => NULL,
    schedule_name   => 'test_hourly_schedule',
    duration        => INTERVAL '60' MINUTE,
    window_priority => 'LOW',
    comments        => 'Window with a predefined schedule.');

  -- Window with an inline schedule.
  DBMS_SCHEDULER.create_window (
    window_name     => 'test_window_2',
    resource_plan   => NULL,
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'freq=hourly; byminute=0',
    end_date        => NULL,
    duration        => INTERVAL '60' MINUTE,
    window_priority => 'LOW',
    comments        => 'Window with an inline schedule.');
END;
/

You can display window names and resource plans using the following query.

SQL> SELECT window_name, resource_plan, enabled, active FROM   dba_scheduler_windows;

WINDOW_NAME                    RESOURCE_PLAN                  ENABL ACTIV
------------------------------ ------------------------------ ----- -----
WEEKNIGHT_WINDOW               DEFAULT_MAINTENANCE_PLAN       FALSE FALSE
WEEKEND_WINDOW                 DEFAULT_MAINTENANCE_PLAN       FALSE FALSE
MONDAY_WINDOW                  DEFAULT_MAINTENANCE_PLAN       FALSE FALSE
ADS_GATHER_STATS_WINDOW1       SYSTEM_PLAN                    TRUE  FALSE
TUESDAY_WINDOW                 DEFAULT_MAINTENANCE_PLAN       FALSE FALSE
WEDNESDAY_WINDOW               DEFAULT_MAINTENANCE_PLAN       FALSE FALSE
THURSDAY_WINDOW                DEFAULT_MAINTENANCE_PLAN       FALSE FALSE
FRIDAY_WINDOW                  DEFAULT_MAINTENANCE_PLAN       FALSE FALSE
SATURDAY_WINDOW                DEFAULT_MAINTENANCE_PLAN       FALSE FALSE
SUNDAY_WINDOW                  DEFAULT_MAINTENANCE_PLAN       FALSE FALSE

Windows can be opened and closed manually using the OPEN_WINDOW and CLOSE_WINDOW procedures:

BEGIN
  -- Open window.
  DBMS_SCHEDULER.open_window (
    window_name => 'test_window_2',
    duration    => INTERVAL '1' MINUTE,
    force       => TRUE);
END;
/
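Similarly, an open window can be closed before its duration expires with CLOSE_WINDOW. A minimal sketch using the same test window:

BEGIN
  -- Close the window that was opened above.
  DBMS_SCHEDULER.close_window (window_name => 'test_window_2');
END;
/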

Windows can be dropped using the DROP_WINDOW procedure:

BEGIN
  DBMS_SCHEDULER.drop_window (
    window_name => 'test_window_1',
    force       => TRUE);

  DBMS_SCHEDULER.drop_window (
    window_name => 'test_window_2',
    force       => TRUE);
END;
/

Window Group

A window group is just a collection of similar windows. It can be created with zero, one, or many windows as group members using the CREATE_WINDOW_GROUP procedure.

BEGIN
  DBMS_SCHEDULER.create_window_group (
    group_name  => 'test_window_group',
    window_list => 'test_window_1, test_window_2',
    comments    => 'A test window group');
END;
/

You can display the window group details using the following query.

SQL> SELECT window_group_name, enabled, number_of_windows FROM dba_scheduler_window_groups;

WINDOW_GROUP_NAME              ENABL NUMBER_OF_WINDOWS
------------------------------ ----- -----------------
MAINTENANCE_WINDOW_GROUP       TRUE                  9
ORA$AT_WGRP_OS                 TRUE                  9
ORA$AT_WGRP_SA                 TRUE                  9
ORA$AT_WGRP_SQ                 TRUE                  9

Windows can be added to and removed from a group using the ADD_WINDOW_GROUP_MEMBER and REMOVE_WINDOW_GROUP_MEMBER procedures.

BEGIN
  -- Create a new window.
  DBMS_SCHEDULER.create_window (
    window_name     => 'test_window_3',
    resource_plan   => NULL,
    schedule_name   => 'test_hourly_schedule',
    duration        => INTERVAL '60' MINUTE,
    window_priority => 'LOW',
    comments        => 'Window with a predefined schedule.');

  DBMS_SCHEDULER.add_window_group_member (
    group_name  => 'test_window_group',
    window_list => 'test_window_3');
END;
/

BEGIN
  DBMS_SCHEDULER.remove_window_group_member (
    group_name  => 'test_window_group',
    window_list => 'test_window_3');
END;
/
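You can verify which windows currently belong to a group by querying DBA_SCHEDULER_WINGROUP_MEMBERS, for example:

SELECT window_group_name, window_name
FROM   dba_scheduler_wingroup_members
WHERE  window_group_name = 'TEST_WINDOW_GROUP';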

Window groups can be dropped using the DROP_WINDOW_GROUP procedure.

BEGIN
  DBMS_SCHEDULER.drop_window_group (
    group_name => 'test_window_group',
    force      => TRUE);
END;
/

Hope this helps !!

References:

http://www.oracle-base.com/articles/10g/Scheduler10g.php

[ESDS error code – 10004] Invalid Email Id – OCS10g

While working on OCS10g, after creating a new user we could not log in to the webmail client using the new user ID and password, even though checking the user in OID showed that the user had been created successfully.

But if you “View” the user in OID using the View button, you can see the following error at the bottom of the page under E-Mail provisioning.

Status – Failed

[ESDS error code – 10004] Invalid Email Id

This is a known issue with OCS. It happens when you try to register a user that was deleted previously. Deleting a user removes the user's directory information, but it does not delete the user's mail store information from the database, and that is where the problem occurs.

So we need to delete the user's mail store information from the database manually. The steps below remove the mail store information from the database so that the user can be registered successfully.

1) Delete the user from OID.

2) Run the following command to remove the mail store information from database.

oesucr filename -clean_user_mailstore_data

Here filename is a file containing the mail ID of the user to be deleted. The format of the file is:

mail=avdeo@ap6023fems.us.oracle.com

When you run the command, you see output like the following:

-bash-3.00$ oesucr test.txt -clean_user_mailstore_data
users delete from mailstore list size=1
user to delete mailstore =avdeo@ap6023fems.us.oracle.com

3) Register the user again in OID. This time you can see E-Mail provisioning status as successful.

Hope this helps !!

References:

Oracle Forum

Metalink Note ID: 333235.1

An enhancement request has been filed for the same by andras.kecskes@oracle.com (5240052 - MORE ADMINISTRATOR FRIENDLY DEPROVISIONING)

Oracle Database 10g Scheduler – Basics

Introduction and Scope:

The intention of this post is to give you very basic information about the Oracle scheduler utility in the Oracle 10g database.

The Oracle Database 10g Scheduler feature offers the best of all worlds: a job scheduler utility right inside the database that is sufficiently powerful to handle all types of jobs, not just PL/SQL code segments, and that can help you create jobs either with or without associated programs and/or schedules. Best of all, it comes with the database at no additional cost. In this installment, we’ll take a look at how it works.

Three Concepts

Three concepts are involved in Scheduler basic functionality.

  1. Job
  2. Program
  3. Schedule

The purpose of a job is to execute the program or executable provided in the job details as per the schedule provided. An example is given below.

We can specify the OS executable directly without creating it as a program first.

For example, say we want to run a shell script /home/arup/dbtools/move_arcs.sh.

The definition of the job will look as given below.

Method I : Creating Job Directly

begin
  dbms_scheduler.create_job
  (
    job_name        => 'ARC_MOVE_2',
    job_type        => 'EXECUTABLE',
    job_action      => '/home/arup/dbtools/move_arcs.sh',
    repeat_interval => 'FREQ=MINUTELY; INTERVAL=30',
    enabled         => true,
    comments        => 'Move Archived Logs to a Different Directory'
  );
end;
/

Here we are creating the job directly, providing the script to execute (job_action), the job type (job_type), and the schedule (repeat_interval => 'FREQ=MINUTELY; INTERVAL=30') according to which the job has to run.

The above method is one way of creating a job. Remember that you have to create a job for your program to execute as per the defined schedule.

Method II : Creating a program and attaching to a Job

Another method of creating a job is to first create a program and use that program while creating a job as given below.

Creating Program First

begin
  dbms_scheduler.create_program
  (
    program_name   => 'MOVE_ARCS',
    program_type   => 'EXECUTABLE',
    program_action => '/home/arup/dbtools/move_arcs.sh',
    enabled        => TRUE,
    comments       => 'Moving Archived Logs to Staging Directory'
  );
end;
/

Using that program while creating a job

begin
  dbms_scheduler.create_job
  (
    job_name        => 'ARC_MOVE',
    program_name    => 'MOVE_ARCS',
    repeat_interval => 'FREQ=MINUTELY; INTERVAL=30',
    comments        => 'Move Archived Logs to a Different Directory',
    enabled         => TRUE
  );
end;
/

So we just create a separate program and attach it to the job. Note, however, that we have still specified the schedule in the job itself. Naturally, the third method is to create a schedule separately and attach it to the job as well.

Method III : Creating a program, schedule and attaching to a Job

Creating a program first.

begin
  dbms_scheduler.create_program
  (
    program_name   => 'MOVE_ARCS',
    program_type   => 'EXECUTABLE',
    program_action => '/home/arup/dbtools/move_arcs.sh',
    enabled        => TRUE,
    comments       => 'Moving Archived Logs to Staging Directory'
  );
end;
/

Creating a schedule for the job to execute.

begin
  dbms_scheduler.create_schedule
  (
    schedule_name   => 'EVERY_30_MINS',
    repeat_interval => 'FREQ=MINUTELY; INTERVAL=30',
    comments        => 'Every 30-mins'
  );
end;
/

Attaching the program and schedule to the job.

begin
  dbms_scheduler.create_job
  (
    job_name      => 'ARC_MOVE',
    program_name  => 'MOVE_ARCS',
    schedule_name => 'EVERY_30_MINS',
    comments      => 'Move Archived Logs to a Different Directory',
    enabled       => TRUE
  );
end;
/

This is a cleaner way of doing things.

You can use any of the above methods to create your schedule in the database.

Checking the validity of Schedules

We can also check whether our schedules are valid, meaning we have a way to see the dates and times on which our job is going to run in the future. This is possible using a procedure called EVALUATE_CALENDAR_STRING in the DBMS_SCHEDULER package.

Let's see the usage, as given below.

declare
  l_start_date  TIMESTAMP;
  l_next_date   TIMESTAMP;
  l_return_date TIMESTAMP;
begin
  l_start_date  := trunc(SYSTIMESTAMP);
  l_return_date := l_start_date;
  for ctr in 1..10 loop
    dbms_scheduler.evaluate_calendar_string(
      'freq=hourly; byminute=30',
      l_start_date, l_return_date, l_next_date
    );
    dbms_output.put_line('Next Run on: ' ||
      to_char(l_next_date, 'mm/dd/yyyy hh24:mi:ss')
    );
    l_return_date := l_next_date;
  end loop;
end;
/

Next Run on: 04/03/2008 00:30:00
Next Run on: 04/03/2008 01:30:00
Next Run on: 04/03/2008 02:30:00
Next Run on: 04/03/2008 03:30:00
Next Run on: 04/03/2008 04:30:00
Next Run on: 04/03/2008 05:30:00
Next Run on: 04/03/2008 06:30:00
Next Run on: 04/03/2008 07:30:00
Next Run on: 04/03/2008 08:30:00
Next Run on: 04/03/2008 09:30:00

You can change the freq portion to check the resulting run frequency, and then use the same string in the CREATE_SCHEDULE procedure.

Changing Attributes values

You can change the value of any attribute using the SET_ATTRIBUTE procedure of the DBMS_SCHEDULER package, as given below.

BEGIN
  DBMS_SCHEDULER.set_attribute (
    name      => 'ARCHIVE_CHEK_LIST_SCHEDULE',
    attribute => 'repeat_interval',
    value     => 'freq=daily');
END;
/

This will change our schedule to run once daily.
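Apart from changing attributes, a few other DBMS_SCHEDULER calls are handy for day-to-day administration. The sketch below uses the ARC_MOVE job created earlier in this post and shows the standard DISABLE, ENABLE, and RUN_JOB procedures:

BEGIN
  -- Temporarily stop the job from being picked up by the scheduler.
  DBMS_SCHEDULER.disable('ARC_MOVE');

  -- Re-enable it later.
  DBMS_SCHEDULER.enable('ARC_MOVE');

  -- Run the job immediately in the current session for testing.
  DBMS_SCHEDULER.run_job(job_name => 'ARC_MOVE', use_current_session => TRUE);
END;
/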

These are just the basic concepts for scheduling. I will be covering more detailed concepts like windows, job classes, plans, and priorities in my next post.

Monitoring Jobs

You can monitor the status of jobs that were scheduled and ran previously, as well as jobs that are currently running. Below are some of the important dictionary views you can use to monitor jobs.

To get the details of job runs:

You can use the DBA_SCHEDULER_JOB_RUN_DETAILS view.

(Screenshot: sample output of DBA_SCHEDULER_JOB_RUN_DETAILS)
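A typical query against this view might look like the sketch below (the job name is the one created earlier in this post; pick the columns you need):

SELECT job_name, status, actual_start_date, run_duration
FROM   dba_scheduler_job_run_details
WHERE  job_name = 'ARC_MOVE'
ORDER  BY actual_start_date;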

To get the jobs which are currently running:

You can use the DBA_SCHEDULER_RUNNING_JOBS view. This view returns no rows if no jobs are running at that moment.
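For example, a quick check of what is running right now:

SELECT job_name, session_id, elapsed_time
FROM   dba_scheduler_running_jobs;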

To get job history of previous run jobs:

You can use the DBA_SCHEDULER_JOB_LOG view. The output is as shown below.

select log_date
,      job_name
,      status
from   dba_scheduler_job_log
where  job_name = 'ARCHIVE_CHEK_LIST_JOB';

(Screenshot: sample output of DBA_SCHEDULER_JOB_LOG)

To get all the schedules in the database – DBA_SCHEDULER_SCHEDULES

To get all the jobs and their attributes –  DBA_SCHEDULER_JOBS

Note that DBA_JOBS is used by the DBMS_JOB package, the older mechanism for scheduling jobs.

To get all the information about programs  – DBA_SCHEDULER_PROGRAMS

To get all program arguments – DBA_SCHEDULER_PROGRAM_ARGS

Hope this helps !!

References:

Oracle Database 10g: Top 20 DBA Features