

Oracle Tools and Utilities TKPROF
(Transient Kernel Profiling)
(Source Oracle Docs)

Understanding TKPROF

You can run the TKPROF program to format the contents of the trace file and place the output into a readable output file. TKPROF can also:

Create a SQL script that stores the statistics in the database
Determine the execution plans of SQL statements


If the cursor for a SQL statement is not closed, TKPROF output does not automatically include the actual execution plan of the SQL statement. In this situation, you can use the EXPLAIN option with TKPROF to generate an execution plan.

TKPROF reports each statement executed along with the resources it has consumed, the number of times it was called, and the number of rows it processed. This information lets you easily locate the statements that are using the greatest resources. With experience or with baselines available, you can assess whether the resources used are reasonable given the work done.

Using the SQL Trace Facility and TKPROF

Step 1: Setting Initialization Parameters for Trace File Management
When the SQL Trace facility is enabled for a session, Oracle generates a trace file containing statistics for traced SQL statements for that session. When the SQL Trace facility is enabled for an instance, Oracle creates a separate trace file for each process. Before enabling the SQL Trace facility:

Check the settings of the TIMED_STATISTICS, MAX_DUMP_FILE_SIZE, and USER_DUMP_DEST initialization parameters.


TIMED_STATISTICS

This parameter enables and disables the collection of timed statistics, such as CPU and elapsed times, by the SQL Trace facility, as well as the collection of various statistics in the dynamic performance tables. The default value of FALSE disables timing. A value of TRUE enables timing. Enabling timing causes extra timing calls for low-level operations. This is a dynamic parameter. It is also a session parameter.
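For example, because TIMED_STATISTICS is dynamic at both levels, it can be toggled without a restart. A sketch (run with a user holding the ALTER SESSION or ALTER SYSTEM privilege, as appropriate):

```sql
-- Enable timed statistics for the current session only
ALTER SESSION SET TIMED_STATISTICS = TRUE;

-- Or enable it instance-wide, without a restart
ALTER SYSTEM SET TIMED_STATISTICS = TRUE;
```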


MAX_DUMP_FILE_SIZE

When the SQL Trace facility is enabled at the instance level, every call to the server produces a text line in a file in the operating system's file format. The maximum size of these files (in operating system blocks) is limited by this initialization parameter. The default is 500. If you find that the trace output is truncated, then increase the value of this parameter before generating another trace file. This is a dynamic parameter. It is also a session parameter.
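Because the limit is a dynamic session parameter, it can be raised just for the tracing session. A sketch (the value 10000 is an arbitrary example; size it to the trace you expect):

```sql
-- Raise the trace file size limit for this session (in OS blocks)
ALTER SESSION SET MAX_DUMP_FILE_SIZE = 10000;

-- Or remove the limit entirely for this session
ALTER SESSION SET MAX_DUMP_FILE_SIZE = UNLIMITED;
```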


USER_DUMP_DEST

This parameter must fully specify the destination for the trace file according to the conventions of the operating system. The default value is the default destination for system dumps on the operating system. This value can be modified with ALTER SYSTEM SET USER_DUMP_DEST = newdir. This is a dynamic parameter. It is also a session parameter.

See Also:

"Interpreting Statistics" for considerations when setting the STATISTICS_LEVEL, DB_CACHE_ADVICE, TIMED_STATISTICS, or TIMED_OS_STATISTICS initialization parameters
"Setting the Level of Statistics Collection" for information about STATISTICS_LEVEL settings
Oracle Database Reference for information on the STATISTICS_LEVEL initialization parameter
Oracle Database Reference for information about the dynamic performance V$STATISTICS_LEVEL view

Devise a way of recognizing the resulting trace file.

Be sure you know how to distinguish the trace files by name. Oracle writes them to the user dump destination specified by USER_DUMP_DEST. However, this directory can soon contain many hundreds of files, usually with generated names. It might be difficult to match trace files back to the session or process that created them. You can tag trace files by including in your programs a statement like SELECT 'program_name' FROM DUAL. You can then trace each file back to the process that created it.
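As a sketch of the tagging idea above (the literal 'orders_nightly_batch' is a made-up marker; any distinctive string works), the program issues a no-op query whose text then appears verbatim in its trace file, where an ordinary text search can find it:

```sql
-- Hypothetical marker query: returns one row and does no work,
-- but its text is recorded in the session's trace file
SELECT 'orders_nightly_batch' FROM DUAL;
```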

You can also set the TRACEFILE_IDENTIFIER initialization parameter to specify a custom identifier that becomes part of the trace file name. For example, you can add my_trace_id to subsequent trace file names for easy identification with the following:

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'my_trace_id';

See Also:
Oracle Database Reference for information on the TRACEFILE_IDENTIFIER initialization parameter

If the operating system retains multiple versions of files, then be sure that the version limit is high enough to accommodate the number of trace files you expect the SQL Trace facility to generate.

The generated trace files can be owned by an operating system user other than yourself. This user must make the trace files available to you before you can use TKPROF to format them.

See Also:

"Setting the Level of Statistics Collection" for information about STATISTICS_LEVEL settings
Oracle Database Reference for information on the STATISTICS_LEVEL initialization parameter

Step 2: Enabling the SQL Trace Facility
Enable the SQL Trace facility for the session by using one of the following:

ALTER SESSION SET SQL_TRACE = TRUE;

EXECUTE DBMS_SESSION.SET_SQL_TRACE(TRUE);

Because running the SQL Trace facility increases system overhead, enable it only when tuning SQL statements, and disable it when you are finished.

You might need to modify an application to contain the ALTER SESSION statement. For example, to issue the ALTER SESSION statement in Oracle Forms, invoke Oracle Forms using the -s option, or invoke Oracle Forms (Design) using the statistics option. For more information on Oracle Forms, see the Oracle Forms Reference.


To disable the SQL Trace facility for the session, enter:

ALTER SESSION SET SQL_TRACE = FALSE;

The SQL Trace facility is automatically disabled for the session when the application disconnects from Oracle.

You can enable the SQL Trace facility for an instance by setting the value of the SQL_TRACE initialization parameter to TRUE in the initialization file:

SQL_TRACE = TRUE

After the instance has been restarted with the updated initialization parameter file, SQL Trace is enabled for the instance and statistics are collected for all sessions. If the SQL Trace facility has been enabled for the instance, you can disable it for the instance by setting the value of the SQL_TRACE parameter to FALSE.

Setting SQL_TRACE to TRUE can have a severe performance impact. For more information, see Oracle Database Reference.


Step 3: Formatting Trace Files with TKPROF
TKPROF accepts as input a trace file produced by the SQL Trace facility, and it produces a formatted output file. TKPROF can also be used to generate execution plans.

After the SQL Trace facility has generated a number of trace files, you can:

Run TKPROF on each individual trace file, producing a number of formatted output files, one for each session.
Concatenate the trace files, and then run TKPROF on the result to produce a formatted output file for the entire instance.
Run the trcsess command-line utility to consolidate tracing information from several trace files, then run TKPROF on the result. See "Using the trcsess Utility".
TKPROF does not report COMMITs and ROLLBACKs that are recorded in the trace file.

Sample TKPROF Output
Sample output from TKPROF is as follows:

SELECT * FROM emp, dept
WHERE emp.deptno = dept.deptno;

call     count    cpu    elapsed   disk   query  current    rows
------- ------  -----  ---------  -----  ------  -------  ------
Parse        1   0.16       0.29      3      13        0       0
Execute      1   0.00       0.00      0       0        0       0
Fetch        1   0.03       0.26      2       2        4      14

Misses in library cache during parse: 1
Parsing user id: (8) SCOTT

Rows Execution Plan

For this statement, TKPROF output includes the following information:

The text of the SQL statement
The SQL Trace statistics in tabular form
The number of library cache misses for the parsing and execution of the statement
The user initially parsing the statement
The execution plan generated by EXPLAIN PLAN
TKPROF also provides a summary of user level statements and recursive SQL calls for the trace file.

Syntax of TKPROF
TKPROF is run from the operating system prompt. The syntax is:

tkprof filename1 filename2 [waits=yes|no] [sort=option] [print=n]
[aggregate=yes|no] [insert=filename3] [sys=yes|no] [table=schema.table]
[explain=user/password] [record=filename4] [width=n]

The input and output files are the only required arguments. If you invoke TKPROF without arguments, then online help is displayed. Use the arguments in Table 20-2 with TKPROF.

Table 20-2 TKPROF Arguments
Argument Description
filename1: Specifies the input file, a trace file containing statistics produced by the SQL Trace facility. This file can be either a trace file produced for a single session, or a file produced by concatenating individual trace files from multiple sessions.

filename2: Specifies the file to which TKPROF writes its formatted output.

waits: Specifies whether to record a summary for any wait events found in the trace file. Values are YES or NO. The default is YES.

sort: Sorts traced SQL statements in descending order of the specified sort option before listing them into the output file. If more than one option is specified, then the output is sorted in descending order by the sum of the values specified in the sort options. If you omit this parameter, then TKPROF lists statements into the output file in order of first use. The sort options are as follows:

prscnt: Number of times parsed
prscpu: CPU time spent parsing
prsela: Elapsed time spent parsing
prsdsk: Number of physical reads from disk during parse
prsqry: Number of consistent mode block reads during parse
prscu: Number of current mode block reads during parse
prsmis: Number of library cache misses during parse
execnt: Number of executes
execpu: CPU time spent executing
exeela: Elapsed time spent executing
exedsk: Number of physical reads from disk during execute
exeqry: Number of consistent mode block reads during execute
execu: Number of current mode block reads during execute
exerow: Number of rows processed during execute
exemis: Number of library cache misses during execute
fchcnt: Number of fetches
fchcpu: CPU time spent fetching
fchela: Elapsed time spent fetching
fchdsk: Number of physical reads from disk during fetch
fchqry: Number of consistent mode block reads during fetch
fchcu: Number of current mode block reads during fetch
fchrow: Number of rows fetched
userid: Userid of the user that parsed the cursor

print: Lists only the first n sorted SQL statements from the output file. If you omit this parameter, then TKPROF lists all traced SQL statements. This parameter does not affect the optional SQL script. The SQL script always generates insert data for all traced SQL statements.

aggregate: If you specify AGGREGATE = NO, then TKPROF does not aggregate multiple users of the same SQL text.

insert: Creates a SQL script that stores the trace file statistics in the database. TKPROF creates this script with the name filename3. This script creates a table and inserts a row of statistics for each traced SQL statement into the table.

sys: Enables and disables the listing of SQL statements issued by the user SYS, or recursive SQL statements, into the output file. The default value of YES causes TKPROF to list these statements. The value of NO causes TKPROF to omit them. This parameter does not affect the optional SQL script. The SQL script always inserts statistics for all traced SQL statements, including recursive SQL statements.

table: Specifies the schema and name of the table into which TKPROF temporarily places execution plans before writing them to the output file. If the specified table already exists, then TKPROF deletes all rows in the table, uses it for the EXPLAIN PLAN statement (which writes more rows into the table), and then deletes those rows. If this table does not exist, then TKPROF creates it, uses it, and then drops it.

The specified user must be able to issue INSERT, SELECT, and DELETE statements against the table. If the table does not already exist, then the user must also be able to issue CREATE TABLE and DROP TABLE statements. For the privileges to issue these statements, see the Oracle Database SQL Reference.

This option allows multiple individuals to run TKPROF concurrently with the same user in the EXPLAIN value. These individuals can specify different TABLE values and avoid destructively interfering with each other's processing on the temporary plan table.

If you use the EXPLAIN parameter without the TABLE parameter, then TKPROF uses the table PROF$PLAN_TABLE in the schema of the user specified by the EXPLAIN parameter. If you use the TABLE parameter without the EXPLAIN parameter, then TKPROF ignores the TABLE parameter.

If no plan table exists, TKPROF creates the table PROF$PLAN_TABLE and then drops it at the end.

explain: Determines the execution plan for each SQL statement in the trace file and writes these execution plans to the output file. TKPROF determines execution plans by issuing the EXPLAIN PLAN statement after connecting to Oracle with the user and password specified in this parameter. The specified user must have CREATE SESSION system privileges. TKPROF takes longer to process a large trace file if the EXPLAIN option is used.

record: Creates a SQL script with the specified filename4 containing all of the nonrecursive SQL in the trace file. This script can be used to replay the user events from the trace file.

width: An integer that controls the output line width of some TKPROF output, such as the explain plan. This parameter is useful for post-processing of TKPROF output.


The syntax for the trcsess utility is:

trcsess [output=output_file_name] [session=session_id] [clientid=client_id]
        [service=service_name] [action=action_name] [module=module_name]
        [trace_file_names]


output specifies the file where the output is generated. If this option is not specified, then standard output is used.
session consolidates the trace information for the specified session. The session identifier is a combination of session index and session serial number, such as 21.2371. You can locate these values in the V$SESSION view.
clientid consolidates the trace information for the given client identifier.
service consolidates the trace information for the given service name.
action consolidates the trace information for the given action name.
module consolidates the trace information for the given module name.
trace_file_names is a list of all the trace file names, separated by spaces, in which trcsess should look for trace information. The wildcard character * can be used to specify the trace file names. If trace files are not specified, all the files in the current directory are taken as input to trcsess.
One of the session, clientid, service, action, or module options must be specified. If more than one of these options is specified, then the trace files that satisfy all the specified criteria are consolidated into the output file.

Explain Plan

Understanding EXPLAIN PLAN

The EXPLAIN PLAN statement displays execution plans chosen by the Oracle optimizer for SELECT, UPDATE, INSERT, and DELETE statements. A statement's execution plan is the sequence of operations Oracle performs to run the statement.

The row source tree is the core of the execution plan. It shows the following information:

An ordering of the tables referenced by the statement
An access method for each table mentioned in the statement
A join method for tables affected by join operations in the statement
Data operations like filter, sort, or aggregation
In addition to the row source tree, the plan table contains information about the following:

Optimization, such as the cost and cardinality of each operation
Partitioning, such as the set of accessed partitions
Parallel execution, such as the distribution method of join inputs
The EXPLAIN PLAN results let you determine whether the optimizer selects a particular execution plan, such as a nested loops join. It also helps you understand the optimizer's decisions, such as why the optimizer chose a nested loops join instead of a hash join, and lets you understand the performance of a query.
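As a minimal sketch using the emp and dept tables from the earlier TKPROF sample (and assuming a standard PLAN_TABLE exists in the schema), a plan can be generated and then displayed with DBMS_XPLAN:

```sql
-- Populate PLAN_TABLE with the optimizer's chosen plan
EXPLAIN PLAN FOR
  SELECT e.ename, d.dname
  FROM   emp e, dept d
  WHERE  e.deptno = d.deptno;

-- Render the plan; DBMS_XPLAN.DISPLAY reads PLAN_TABLE by default
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```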

With the query optimizer, execution plans can and do change as the underlying optimizer inputs change. EXPLAIN PLAN output shows how Oracle runs the SQL statement when the statement was explained. This can differ from the plan during actual execution for a SQL statement, because of differences in the execution environment and explain plan environment.

Execution plans can differ due to the following:

Different Schemas

The execution and explain plan happen on different databases.
The user explaining the statement is different from the user running the statement. Two users might be pointing to different objects in the same database, resulting in different execution plans.
Schema changes (usually changes in indexes) have occurred between the two operations.

Different Costs

Even if the schemas are the same, the optimizer can choose different execution plans if the costs are different. Some factors that affect the costs include the following:

Data volume and statistics
Bind variable types and values
Initialization parameters - set globally or at session level

Looking Beyond Execution Plans

The execution plan operation alone cannot differentiate between well-tuned statements and those that perform poorly. For example, an EXPLAIN PLAN output that shows that a statement uses an index does not necessarily mean that the statement runs efficiently. Sometimes indexes can be extremely inefficient. In this case, you should examine the following:

The columns of the index being used
Their selectivity (fraction of table being accessed)

It is best to use EXPLAIN PLAN to determine an access plan, and then later prove that it is the optimal plan through testing. When evaluating a plan, examine the statement's actual resource consumption.

Using V$SQL_PLAN Views

In addition to running the EXPLAIN PLAN command and displaying the plan, you can use the V$SQL_PLAN views to display the execution plan of a SQL statement:

After the statement has executed, you can display the plan by querying the V$SQL_PLAN view. V$SQL_PLAN contains the execution plan for every statement stored in the cursor cache. Its definition is similar to the PLAN_TABLE. See "PLAN_TABLE Columns".
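A sketch of such a query (address and hash_value identify the cursor in the cursor cache; the bind values shown are placeholders you would take from V$SQL):

```sql
-- Locate the cursor via V$SQL, then list its plan in operation order
SELECT lpad(' ', depth) || operation || ' ' || options AS operation,
       object_name
FROM   v$sql_plan
WHERE  address    = :addr        -- from V$SQL.ADDRESS
AND    hash_value = :hash_value  -- from V$SQL.HASH_VALUE
ORDER  BY id;
```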

The advantage of V$SQL_PLAN over EXPLAIN PLAN is that you do not need to know the compilation environment that was used to execute a particular statement. For EXPLAIN PLAN, you would need to set up an identical environment to get the same plan when executing the statement.

The V$SQL_PLAN_STATISTICS view provides the actual execution statistics for every operation in the plan, such as the number of output rows and elapsed time. All statistics, except the number of output rows, are cumulative. For example, the statistics for a join operation also includes the statistics for its two inputs.

The statistics in V$SQL_PLAN_STATISTICS are available for cursors that have been compiled with the STATISTICS_LEVEL initialization parameter set to ALL.

The V$SQL_PLAN_STATISTICS_ALL view enables side-by-side comparisons of the estimates that the optimizer provides for the number of rows and elapsed time. This view combines both V$SQL_PLAN and V$SQL_PLAN_STATISTICS information for every cursor.
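For example, after enabling the required statistics level, estimated and actual row counts can be compared side by side. A sketch (cardinality holds the optimizer's estimate and last_output_rows the actuals from the most recent execution; the bind placeholders come from V$SQL as before):

```sql
-- Collect row-level execution statistics for statements compiled from now on
ALTER SESSION SET STATISTICS_LEVEL = ALL;

-- Run the statement of interest, then compare estimates with actuals
SELECT id, operation, cardinality AS est_rows,
       last_output_rows           AS actual_rows
FROM   v$sql_plan_statistics_all
WHERE  address    = :addr
AND    hash_value = :hash_value
ORDER  BY id;
```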

Installing the Trace Analyzer

Download the latest version of this tool from TRCA.zip

Unzip into a dedicated directory on either the database server, or a client that can connect to the database server.

Connect to SQL*Plus as the USER that created the raw SQL trace to be analyzed.

If using Oracle Apps, connect as APPS USER.

Execute the script that creates the Staging Repository and the Package used by the Trace Analyzer:

sqlplus scott/tiger


When the raw SQL trace resides under the UDUMP directory where it was created, there is no need to create any other directory alias. But if the raw SQL trace was moved into any other directory on the database server, a new directory alias pointing to this non-UDUMP directory must be created:

sqlplus system/<system_pwd>


If you get PLS-00201 errors while installing the Staging Repository or executing the Trace Analyzer, you may need to create some GRANTs for the USER that will be using the Trace Analyzer. If this is the case, use the example below, connecting to SQL*Plus as SYS or SYSTEM.


If you are on RDBMS 9.0.x and you get 'ORA-00942: table or view does not exist' errors, you need to install the TRCA tool on a dictionary-managed tablespace. Create a dictionary-managed tablespace using the example below, and modify the script TRCAREPO.sql to use this new tablespace.

# sqlplus scott/tiger

SQL> create tablespace TRCA datafile '/oracle/em40db/oradata/em40/trca01.dbf'
size 100M autoextend on next 100M permanent
default storage ( initial 1M next 1M)
extent management dictionary;

Trace Analyzer reports, by default, all SQL commands executed while tracing was active, including recursive SYS commands. If for any reason you want to exclude the recursive SQL executed by user SYS from the Trace Analyzer report, use the TRCAISYS.sql script provided. This script, when executed with a parameter value of NO, provides the same functionality as the TKPROF sys=no parameter. To reset to the default behavior (sys=yes), execute it with a value of YES.

# sqlplus apps/<apps_pwd>


Maintaining the Staging Repository
The Trace Analyzer automatically keeps up to 14 days of data related to analyzed raw SQL traces in the Staging Repository. The space used on the tablespace where the Staging Repository was installed is equivalent to the size of the raw SQL traces analyzed during the last 14 days. If you need to purge the Staging Repository in order to keep fewer than 14 days of data, use the example below (keeping 3 days of data), connecting to SQL*Plus as the USER that installed the Trace Analyzer.
Truncating the Staging Repository provides the best performance when executing the Trace Analyzer, and it recovers all used space on the tablespace where the Staging Repository was installed. Use the command below, connecting to SQL*Plus as the USER that installed the Trace Analyzer.
The Trace Analyzer staging repository includes several objects with prefix trca$.


Copyright © www.techfaq360.com 2019
