When an application runs slowly, the usual approach for a DBA is to enable tracing and capture the SQL queries the application is running, then check whether any of those statements is the bottleneck. But getting a trace file for an application can be very tricky. When a user connects from a front-end application, it is very difficult for us (the DBAs) to track the session information. Each time the user logs in to the application and performs some database activity, a new session is created, and it is hard to tell whether that session belongs to the user we intend to monitor. Also, if the application manages its connections efficiently, a client might disconnect in between operations and reconnect only when the next database operation needs to be performed; again the SID changes. So it is very difficult for a DBA to keep track of the sessions an application moves through.
To get around this problem, Oracle provides something called End-to-End Application Tracing.
End-to-End Tracing:
In End-to-End Application Tracing, instead of tracing a single session, we trace using one of the following attributes:
Client identifier – specifies an end user based on the logon Id, such as HR.HR
Service – specifies a group of applications with common attributes, service level thresholds, and priorities; or a single application, such as ACCTG for an accounting application
Module – specifies a functional block, such as Accounts Receivable or General Ledger, of an application
Action – specifies an action, such as an INSERT or UPDATE operation, in a module
Session – specifies a session based on a given database session identifier (SID), on the local instance
Instance – specifies a given instance based on the instance name
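As a sketch of how these attributes map onto DBMS_MONITOR calls (the ACCTG service, the module and action names, and the SID/serial# values below are just placeholders, not from the trace discussed in this post):

```sql
BEGIN
  -- Trace everything running under the ACCTG service,
  -- including wait events but not bind values
  DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE(
    service_name => 'ACCTG', waits => TRUE, binds => FALSE);

  -- Or narrow the trace to one module and action within that service
  DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE(
    service_name => 'ACCTG',
    module_name  => 'General Ledger',
    action_name  => 'INSERT');

  -- Or trace a single session by its SID and serial#
  DBMS_MONITOR.SESSION_TRACE_ENABLE(
    session_id => 27, serial_num => 60, waits => TRUE, binds => FALSE);
END;
/
```

Each of these enable procedures has a matching *_TRACE_DISABLE procedure to switch the trace off again.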
To gather the appropriate statistics using PL/SQL, you need to enable statistics gathering for client identifier, service, module, or action using procedures in the DBMS_MONITOR package.
For example, if you want to enable statistics gathering for user OE, you can enable statistics for this client identifier using the following procedure:
EXECUTE DBMS_MONITOR.CLIENT_ID_STAT_ENABLE(client_id => 'OE.OE');
So whenever a session carries the client identifier OE.OE, statistics for that client's activity are aggregated, regardless of which physical session it lands on. You can see the client identifier of a session in the CLIENT_IDENTIFIER column of the V$SESSION view.
To disable statistics gathering for that client identifier, use:
EXECUTE DBMS_MONITOR.CLIENT_ID_STAT_DISABLE(client_id => 'OE.OE');
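Note that statistics gathering and SQL tracing are separate switches in DBMS_MONITOR. A sketch of turning actual SQL tracing (which is what produces the trace files discussed below) on and off for the same client identifier:

```sql
-- The application (or a logon trigger) tags its session with an identifier
BEGIN
  DBMS_SESSION.SET_IDENTIFIER('OE.OE');
END;
/

-- Enable SQL trace for every session carrying that identifier,
-- capturing wait events but not bind values
BEGIN
  DBMS_MONITOR.CLIENT_ID_TRACE_ENABLE(
    client_id => 'OE.OE', waits => TRUE, binds => FALSE);
END;
/

-- ...and disable it once the trace files have been collected
BEGIN
  DBMS_MONITOR.CLIENT_ID_TRACE_DISABLE(client_id => 'OE.OE');
END;
/
```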
Similarly, you can enable tracing for a service, a module, or a particular action. For more information on End-to-End Application Tracing, kindly check the link in the reference section of this post.
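To check which of these switches are currently turned on, you can query the dictionary views DBA_ENABLED_TRACES and DBA_ENABLED_AGGREGATIONS; a quick sketch:

```sql
-- What tracing is currently enabled (client identifier, service,
-- module, action, ...), and whether waits/binds are captured
SELECT trace_type, primary_id, qualifier_id1, qualifier_id2, waits, binds
FROM   dba_enabled_traces;

-- What statistics aggregation is currently enabled
SELECT aggregation_type, primary_id, qualifier_id1, qualifier_id2
FROM   dba_enabled_aggregations;
```

PRIMARY_ID holds the client identifier or service name; for service/module/action tracing, QUALIFIER_ID1 and QUALIFIER_ID2 hold the module and action names.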
Coming to the contents of the trace file: when a trace file is first generated you won't be able to understand it, as it is not in a readable format. You need to use a utility called TKPROF in order to make the trace file readable.
The basic inputs to TKPROF are two file names: the first is the trace file you want to format, and the second is the name of the output file.
Example: tkprof jdbc_traces.trc jdbc_traces.txt
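TKPROF also accepts optional arguments. A commonly used form (the file names here are just carried over from the example above) sorts the statements by parse, execute, and fetch elapsed time so the worst offenders appear first, and suppresses recursive SYS statements:

```
tkprof jdbc_traces.trc jdbc_traces.txt sort=prsela,exeela,fchela sys=no waits=yes
```

Running tkprof with no arguments prints the full list of available options.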
Contents of Trace file:
After you run the TKPROF utility you get the output file in readable format. The formatted file consists of well-formatted blocks; every block contains the SQL statement, the call statistics for that SQL, the library cache miss statistics, the row source (execution) plan, and the wait statistics.
A sample block from one of the trace files is shown below:
********************************************************************************

SQL ID : 74637j0wuhxfx
Select BEG_VERSION, EXT_VERSION, FIRST_DATE, FIRST_USER, IND_CHANGE,
  INT_VERSION, I_PROJECT, LAST_DATE, LAST_USER, PROJECT_CODE, PROJECT_NAME,
  PRV_VERSION, V_LAST_DATE
from SNP_PROJECT
where I_PROJECT = :1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        9      0.00       0.00          0          0          0           0
Execute    823      0.02       0.02          0          0          0           0
Fetch      823      0.05       0.05          0       1646          0         823
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1655      0.08       0.08          0       1646          0         823

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 100

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS BY INDEX ROWID SNP_PROJECT (cr=2 pr=0 pw=0 time=0 us cost=1 size=108 card=1)
      1   INDEX UNIQUE SCAN PK_PROJECT (cr=1 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 81820)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                     823        0.00          0.00
  SQL*Net message from client                   822      103.76        173.63
********************************************************************************
Now let's understand the meaning of these statistics. The call statistics table has the columns count, cpu, elapsed, disk, query, current, and rows. The meaning of these columns is given at the start of the trace file.
COUNT – Number of times a statement was parsed, executed, or fetched.
CPU – Total CPU time in seconds for all parse, execute, or fetch calls for the statement. This value is zero (0) if TIMED_STATISTICS is not turned on.
ELAPSED – Total elapsed time in seconds for all parse, execute, or fetch calls for the statement. This value is zero (0) if TIMED_STATISTICS is not turned on.
DISK – Total number of data blocks physically read from the datafiles on disk for all parse, execute, or fetch calls.
QUERY – Total number of buffers retrieved in consistent mode for all parse, execute, or fetch calls. Usually, buffers are retrieved in consistent mode for queries.
CURRENT – Total number of buffers retrieved in current mode. Buffers are retrieved in current mode for statements such as INSERT, UPDATE, and DELETE.
ROWS – Total number of rows processed by the SQL statement. This total does not include rows processed by subqueries of the SQL statement.
The rows of the table give statistics for three call types: Parse, Execute, and Fetch.
PARSE – Translates the SQL statement into an execution plan, including checks for proper security authorization and checks for the existence of tables, columns, and other referenced objects.
EXECUTE – Actual execution of the statement by Oracle. For INSERT, UPDATE, and DELETE statements, this modifies the data. For SELECT statements, this identifies the selected rows.
FETCH – Retrieves rows returned by a query. Fetches are only performed for SELECT statements.
For example, the table above shows the following statistics:
– the SQL statement was parsed 9 times
– it was executed 823 times
– the fetch operation was performed 823 times
– CPU time taken for parsing was less than 1/100th of a second
– CPU time for execute was 0.02 sec
– CPU time for fetch was 0.05 sec
– elapsed time for parsing was likewise less than 1/100th of a second
– elapsed time for execute was 0.02 sec
– elapsed time for fetch was 0.05 sec
– no blocks were fetched from disk, so there were no physical reads
– 1646 blocks were retrieved in consistent mode during the fetch calls; that works out to 1646/823 = 2 consistent gets per execution, matching the cr=2 shown in the row source plan. Since this is a SELECT statement, the blocks are reported against the Fetch call; for a DML statement they would be reported against the Execute call
– no blocks were retrieved in current mode. You will see this value go up for DML statements or when DML activity is involved
– the total number of rows processed was 823
Other than this, you can also see the misses in library cache for each call. If there were no misses, the line is not shown. The single miss for this SQL is perfectly acceptable: when a SQL statement runs for the first time it has to be parsed and executed, and its execution plan stored in the library cache, so the parse and execute calls each show 1 miss.
If you look at the parse call, it happened 9 times but the miss count is only 1, meaning the statement was actually parsed only once and then stored in the library cache. For the next 8 parse calls, the cached parsed statement was reused. So we have 1 miss and 8 hits.
Similarly, the execution plan was built only once, and for the remaining 822 executions Oracle reused the same plan from the library cache.
Next the output gives the row source (execution) plan for the query, as shown below:
Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS BY INDEX ROWID SNP_PROJECT (cr=2 pr=0 pw=0 time=0 us cost=1 size=108 card=1)
      1   INDEX UNIQUE SCAN PK_PROJECT (cr=1 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 81820)
Here several parameters are shown in brackets, for example (cr=2 pr=0 pw=0 time=0 us cost=1 size=108 card=1):
cr = Consistent Reads
pr = Physical Reads
pw = Physical Writes
time = time taken by this step, in microseconds
cost = optimizer cost incurred by the step
size = estimated size, in bytes, of the data returned by the step
card = cardinality, i.e. the estimated number of rows for the step
Finally, at the end, the trace gives the wait events for the query, along with the number of times each event was waited on, the maximum single wait time, and the total time waited.
Concluding from the Traces:
With the values above we need to draw conclusions and decide whether or not to tune the SQL. Unless we have a locking issue or badly performing SQLs, we should not worry much about the CPU time or the elapsed time; timings come into consideration only when we have badly performing SQLs. The key is the number of block visits, both query (that is, subject to read consistency) and current (that is, not subject to read consistency). Segment headers and blocks that are going to be updated are acquired in current mode, but all query and subquery processing requests the data in query mode.
Hope this helps !!