Transaction Management in Oracle Database
As you know, a transaction in Oracle Database is a set of one or more SQL statements that are executed as a single logical unit of work. A transaction starts implicitly when you issue the first SQL statement, and it ends with any of the following:
- Commit
- Rollback
- DDL statement (Create, Drop, Rename, Alter etc)
- Normal program termination – when a program exits normally (with status zero), the whole transaction is committed
- Abnormal program termination – when a program terminates abnormally, the transaction is rolled back
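To illustrate the explicit and implicit ways a transaction can end, here is a small sketch (the table name emp_test and the column names are made up for this example):

```sql
-- Transaction 1: starts implicitly with the first DML statement
UPDATE emp_test SET salary = salary * 1.1 WHERE dept_id = 10;
COMMIT;    -- explicit end of transaction 1

-- Transaction 2: again starts with the first DML statement
DELETE FROM emp_test WHERE dept_id = 20;
CREATE INDEX emp_test_idx ON emp_test (dept_id);
-- The DDL statement issues an implicit COMMIT, so the DELETE
-- above is committed along with it, whether you intended that or not.
```

This is worth remembering: you cannot roll back the DELETE once the DDL statement has run.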
When a user starts a transaction, the following things happen:
- An undo segment is allocated, and the old values of the affected records are stored in it (the undo data is buffered in the SGA).
- As the transaction executes, Oracle generates redo entries in the redo log buffer.
- Oracle modifies the data blocks in the buffer cache as dictated by the transaction.
At the end of the transaction, when the user commits, the following things happen:
- A new SCN (system change number) is generated and assigned to the redo records belonging to that transaction in the redo log buffer.
- The entries in the redo log buffer are written to the redo log files.
- Any locks held by the transaction (on tables or other resources) are released.
When you commit a transaction, the records in the redo log buffer can either be written immediately, or you can allow Oracle to write them at its own preferred time to improve performance. This is controlled by the IMMEDIATE or BATCH option.
Similarly, you can ask Oracle to wait until the records are successfully written to the redo log files, or you can ask it to return from the COMMIT statement without waiting for the committed records to reach the redo log files. This is controlled by the WAIT or NOWAIT option.
Both settings can be configured with the init.ora parameter COMMIT_WRITE. You can set this parameter as given below in init.ora:
COMMIT_WRITE = BATCH, NOWAIT
You can also set particular commit options at the session level in the following way:
SQL> ALTER SESSION SET COMMIT_WRITE = 'BATCH,NOWAIT';
You can directly specify alternate commit options in the COMMIT statement itself, in the following way, without using the COMMIT_WRITE initialization parameter:
SQL> COMMIT WRITE BATCH NOWAIT;
With this basic knowledge in place, I presume you are also aware of the various concurrency problems. A brief explanation of each is given below.
1) Dirty-read problem – This happens when user A updates data in a table but has not yet committed, and user B reads that uncommitted data. If user A then rolls back, user B has already read data that never officially existed.
This problem is solved by allowing users to read only data that has been committed.
2) Phantom-read problem – Suppose user A opens a transaction and queries data from a table. Another user B then inserts some rows into the table and commits. When user A re-executes the same query after a time lapse within the same transaction, he finds the extra rows. This is called the phantom-read problem: within a single transaction, a repeated query should not return rows that were not there before.
3) Lost-update problem – User A reads data with the intention of updating it. User B then reads the same data, updates it, and commits. User A now updates the data he read earlier and commits, overwriting user B's change. The update made by user B is lost even though it was committed. This is called the lost-update problem.
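One common way to avoid the lost-update problem is pessimistic locking with SELECT ... FOR UPDATE, which locks the rows you intend to change so a second session must wait. A minimal sketch, where the accounts table and its columns are assumptions for illustration:

```sql
-- Session A: lock the row before reading the value you plan to change
SELECT balance FROM accounts WHERE account_id = 42 FOR UPDATE;

-- Session B: the same SELECT ... FOR UPDATE on account_id = 42 now
-- blocks until session A commits or rolls back, so B can never base
-- its update on a stale value.

-- Session A: update and release the lock
UPDATE accounts SET balance = balance - 100 WHERE account_id = 42;
COMMIT;
```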
4) Non-repeatable-read problem – Suppose you read a table’s data at a certain point in time, and then read the same data a little later within the same transaction, only to find that the values are different the second time around because another session committed a change in between. This inconsistent data within the same transaction causes the non-repeatable-read problem.
The answer to the above problems is serializable scheduling. Now what is serializable scheduling? Let's see the explanation below.
Every user executing a transaction should be able to act as if he were the only user of the database. That is the illusion serializable scheduling provides. Even though many users will be accessing the database at the same time, the complexity of keeping each individual transaction isolated is left to the database engine. As users, we don't have to worry about any other transaction that is accessing the same tables we are going to access. Below is a brief explanation of how Oracle internally manages this complexity and gives us the feeling of isolation.
Oracle Isolation Levels:
Now, the simplest way to ensure transaction isolation is to make each resource accessible to only one user at a time: while one user is accessing a resource, the database prevents everyone else from touching it. OK, let me tell you that while this may be the simplest way of doing it, it is definitely not the best. Confused? Ask yourself what the problem is when we apply this methodology. If your answer had something to do with concurrency, you are right: everything becomes totally serial, and only one user can access the database at a time. Definitely not a good option!!! Oracle handles these situations very efficiently, that's why we say it's Oracle!!
The Oracle engine basically gives us the flexibility to decide which isolation level to set. Here is brief information about the different types of isolation levels.
1) Serializable – Under this level, a transaction locks all the tables it requires and won't allow any other user to update those tables until the transaction is committed or rolled back. Note that this lock is held at the transaction level, not the statement level. This avoids all the concurrency issues above, but the disadvantage, as you know, is that there is no concurrency.
2) Repeatable read – Guarantees read consistency: a transaction that reads the same data twice from a table at two different points in time will find the same values each time. You avoid both the dirty-read problem and the non-repeatable-read problem.
3) Read uncommitted – The read-uncommitted level, which allows a transaction to read another transaction’s intermediate values before it commits, will result in the occurrence of all the problems of concurrent usage.
4) Read committed – Oracle’s default isolation level is the read-committed level of isolation at the statement level. Oracle queries see only the data that was committed at the beginning of the query. Because the isolation level is at the statement level, each statement is allowed to see only the data that was committed before the commencement of that statement. The read-committed level of isolation guarantees that the row data won’t change while you’re accessing a particular row in an Oracle table.
Note here that the read-committed isolation level applies at the query level, not the transaction level. So two queries within the same transaction can see different data, meaning the non-repeatable-read problem persists.
The table below summarizes the problems and the protection provided by the various isolation levels.

Isolation level    | Dirty read | Non-repeatable read | Phantom read
Read uncommitted   | Possible   | Possible            | Possible
Read committed     | Prevented  | Possible            | Possible
Repeatable read    | Prevented  | Prevented           | Possible
Serializable       | Prevented  | Prevented           | Prevented
As you can see, the last isolation level in the table above, serializable, avoids all concurrency problems, but unfortunately it’s not a practical option because it doesn’t allow any concurrent use of the database. Oracle’s default read-committed isolation level will get rid of the dirty-read and the lost-update problems.
You can change the isolation level using a statement such as the one given below:
ALTER SESSION SET ISOLATION LEVEL SERIALIZABLE;
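If you want serializable behavior for a single transaction rather than for the whole session, Oracle also lets you set the level per transaction:

```sql
-- Applies only to the transaction it starts; it must be the
-- first statement of that transaction.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
```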
It’s safest to stick with Oracle’s default read-committed level of transaction isolation, although it isn’t perfect. Oracle recommends that you stick with the default read-committed level of isolation, which produces the maximum throughput with only a slight risk of running into the nonrepeatable-read and phantom-read anomalies.
The read-committed transaction level provides a good trade-off between data concurrency and data consistency. Also, the throughput is much higher with this mode of isolation than with the purer serialization mode. If getting a repeatable read is your objective in using a serializable isolation level, you can always use explicit locks in situations where that is necessary.
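As an example of such an explicit lock, you could lock a whole table for the duration of a transaction (the accounts table is an assumption carried over from the earlier sketch):

```sql
-- Other sessions can still read the table, but cannot modify it
-- until this transaction commits or rolls back.
LOCK TABLE accounts IN EXCLUSIVE MODE;

SELECT SUM(balance) FROM accounts;  -- repeatable within this transaction
-- ... more statements that rely on the data not changing ...
COMMIT;                             -- releases the table lock
```

This gives you repeatable reads on that table under the default read-committed level, at the cost of blocking writers, so it should be used sparingly.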
Implementing Oracle Concurrency Control:
to be continued…