This document discusses Oracle database performance tuning. It covers identifying common Oracle performance issues such as CPU bottlenecks, memory issues, and inefficient SQL statements. It also outlines the Oracle performance tuning method and tools such as the Automatic Database Diagnostic Monitor (ADDM) and the Performance page in Oracle Enterprise Manager. These tools help administrators monitor performance, identify bottlenecks, implement ADDM recommendations, and tune SQL statements reactively when issues arise.
4. Common Oracle DBA Tasks
• Installing Oracle software
• Creating an Oracle database
• Upgrading the database and software to new releases
• Starting up and shutting down the database
• Managing the storage structures of the database
• Managing user accounts and security
• Managing schema objects, such as tables, indexes, and views
• Making database backups and performing database recovery, when
necessary
• Proactively monitoring the condition of the database and taking
preventive or corrective actions, as required
• Monitoring and tuning database performance
8. Why Do Performance Problems Occur
• In general, performance problems are caused by the
overuse of a particular resource.
• The overused resource is the bottleneck in the system.
• There are several distinct phases in identifying the
bottleneck and the potential fixes.
• Most of the performance problems are caused by
I/O peak periods
Bad SQL statements
Bad Application design
• To fix the performance problems
Changes in the application, or the way the application is used
Changes in Oracle
Changes in the host hardware configuration
9. What Is Oracle DB
Performance Tuning
• As an Oracle database administrator (DBA), you are
responsible for the performance of your Oracle database.
• Tuning a database to reach a desirable performance level
may be a daunting task.
• Performance tuning includes
Performance Planning
Instance Tuning
SQL Tuning
• Performance tuning requires a different method from the
initial configuration of a system.
• Performance tuning is driven by identifying the most
significant bottleneck and making the appropriate changes
to reduce or eliminate the effect of that bottleneck.
• Usually, performance tuning is performed reactively, either
while the system is in preproduction or after it is live.
12. Tuning by Layer
Application Layer
Applications issue SQL (PL/SQL) requests to the database
Database Code Layer
Oracle DB parses and optimizes SQL statements; manages locks, security,
concurrency, etc.
Memory Layer
Buffer cache (data blocks). Other shared memory caches.
PGA (sorting and hash memory)
Disk Layer
Read/write table/index data, read/write temporary work area, redo and
other log I/O
(Diagram: the application layer sends SQL requests to the database
code layer and receives rows; the code layer exchanges data block
requests and data blocks with the memory layer; the memory layer
issues I/O requests to the disk layer and receives data.)
14. Caused by Environment
• CPU bottlenecks
• Undersized memory structures
System Global Area (SGA)
Program Global Area (PGA)
Buffer cache
• I/O capacity issues
Disk I/O
Network I/O
15. Caused by Database System
• Suboptimal use of Oracle Database by the application
Establishing new database connections repeatedly
Excessive SQL parsing
High levels of contention for a small amount of data (also known
as application-level block contention)
• Concurrency issues
A high degree of concurrent activities might result in contention
for shared resources that can manifest in the forms of locks or
waits for buffer cache.
• Database configuration issues
Incorrect sizing of log files
Archiving issues
Too many checkpoints
Or suboptimal parameter settings
• Short-lived performance problems
• Degradation of database performance over time
16. Caused by SQL Statements
• Inefficient or high-load SQL statements
• Object contention
Are any database objects the source of bottlenecks because
they are continuously accessed?
• Unexpected performance regression after tuning SQL
statements
Tuning SQL statements may cause changes to their
execution plans, resulting in a significant impact on SQL
performance.
In some cases, the changes may cause SQL statements to
regress, resulting in a degradation of SQL performance.
Before making changes on a production system, you can
analyze the impact of SQL tuning on a test system by using
SQL Performance Analyzer.
19. Tools When OEM Is Not
Available
• DBMS_XPLAN
• Cached SQL Statistics
• Wait interface and time model
• SQL Trace and tkprof
• 3rd Party Tools
Quest TOAD for Oracle
…
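When Enterprise Manager is unavailable, these tools can be driven directly from SQL*Plus. A minimal sketch, assuming SELECT privileges on the V$ views; the demo query and table are hypothetical:

```sql
-- Show the execution plan of the last statement run in this session
SELECT COUNT(*) FROM employees WHERE department_id = 10;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'TYPICAL'));

-- Cached SQL statistics: the ten statements with the highest elapsed time
SELECT *
FROM  (SELECT sql_id, executions, elapsed_time, buffer_gets
       FROM   v$sqlstats
       ORDER  BY elapsed_time DESC)
WHERE ROWNUM <= 10;
```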
20. Oracle Performance Method
• Performance tuning using the Oracle performance
method is driven by identifying and eliminating
bottlenecks in the database, and by developing efficient
SQL statements.
• Database tuning is performed in two phases: proactively
and reactively.
• Applying the Oracle performance method involves the
following:
Performing pre-tuning preparations
Tuning the database proactively on a regular basis
Tuning the database reactively when performance problems
are reported by the users
Identifying, tuning, and optimizing high-load SQL
statements
21. Preparing the Database for
Tuning
• Get feedback from users.
Determine the scope of the performance project and subsequent
performance goals, and determine performance goals for the
future. This process is key for future capacity planning.
• Check the operating systems of all systems involved with
user performance.
Check for hardware or operating system resources that are fully
utilized. List any overused resources for possible later analysis.
In addition, ensure that all hardware is functioning properly.
• Ensure that the STATISTICS_LEVEL initialization
parameter is set to TYPICAL (default) or ALL to enable the
automatic performance tuning features of Oracle Database,
including AWR and ADDM.
• Ensure that the
CONTROL_MANAGEMENT_PACK_ACCESS initialization
parameter is set to DIAGNOSTIC+TUNING (default) or
DIAGNOSTIC to enable ADDM.
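Both parameters can be checked and set from SQL*Plus; a sketch, assuming ALTER SYSTEM privileges and an spfile:

```sql
-- Verify the current settings
SHOW PARAMETER statistics_level
SHOW PARAMETER control_management_pack_access

-- Restore the defaults if they have been changed
ALTER SYSTEM SET statistics_level = TYPICAL SCOPE = BOTH;
ALTER SYSTEM SET control_management_pack_access = 'DIAGNOSTIC+TUNING' SCOPE = BOTH;
```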
23. Tuning the Database
Proactively
• Review the ADDM findings
ADDM automatically detects and reports on performance problems
with the database. The results are displayed as ADDM findings on the
Database Home page in Oracle Enterprise Manager.
• Implement the ADDM recommendations
With each ADDM finding, ADDM automatically provides a list of
recommendations for reducing the impact of the performance problem.
• Monitor performance problems with the database in real time
The Performance page in Enterprise Manager enables you to identify
and respond to real-time performance problems.
• Respond to performance-related alerts
The Database Home page in Enterprise Manager displays
performance-related alerts generated by the database.
• Validate that any changes made have produced the desired
effect, and verify that the users experience performance
improvements.
24. Tuning the Database
Proactively
• Automatic Database Performance Monitoring
Automatic Database Diagnostic Monitor (ADDM) automatically
detects and reports performance problems with the database.
• Monitoring Real-Time Database Performance
The Performance page in Oracle Enterprise Manager (Enterprise
Manager) displays information that you can use to assess the
overall performance of the database in real time.
• Monitoring Performance Alerts
Oracle Database includes a built-in alerts infrastructure to notify
you of impending problems with the database.
By default, Oracle Database enables the following alerts:
Tablespace Usage
Snapshot Too Old
Recovery Area Low on Free Space
Resumable Session Suspended
In addition to these default alerts, you can use performance
alerts to detect any unusual changes in database performance.
27. Automatic Database
Diagnostic Monitor (ADDM)
• ADDM is self-diagnostic software built into Oracle
Database.
• ADDM examines and analyzes data captured in
Automatic Workload Repository (AWR) to determine
possible database performance problems.
• ADDM then locates the root causes of the performance
problems, provides recommendations for correcting
them, and quantifies the expected benefits.
• ADDM also identifies areas where no action is necessary.
29. ADDM Analysis
• An ADDM analysis is performed after each AWR snapshot
(every hour by default).
• Review the results of the ADDM analysis before using
other performance tuning methods.
• ADDM uses the DB time statistic to identify performance
problems.
• DB time is the cumulative time spent by the database in
processing user requests, including
Wait time
CPU time of all user sessions that are not idle.
• The goal of database performance tuning is to reduce the
DB time of the system for a given workload.
• By reducing DB time, the database can support more user
requests by using the same or fewer resources.
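DB time can be inspected directly in the time model views; a sketch (values in these views are in microseconds):

```sql
-- Cumulative DB time and DB CPU since instance startup
SELECT stat_name, ROUND(value / 1e6) AS seconds
FROM   v$sys_time_model
WHERE  stat_name IN ('DB time', 'DB CPU');
```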
30. ADDM Recommendations
• Hardware changes
Adding CPUs or changing the I/O subsystem configuration
• Database configuration
Changing initialization parameter settings
• Schema changes
Hash partitioning a table or index, or using automatic
segment space management (ASSM)
• Application changes
Using the cache option for sequences or using bind variables
• Using other advisors
Running SQL Tuning Advisor on high-load SQL statements
or running the Segment Advisor on hot objects
31. ADDM for Oracle RAC
• Considers DB time as the sum of database times for all
database instances
• Reports findings that are significant at the cluster level.
• For example, the DB time of each cluster node may be
insignificant when considered individually, but the
aggregate DB time may be a significant problem for the
cluster as a whole.
33. Monitoring Real-Time
Database Performance
• At first, we should use ADDM to identify performance
problems.
• But ADDM performs its analysis after each Automatic
Workload Repository (AWR) snapshot, which by default
is once every hour.
• The Performance Page in Oracle Enterprise Manager
displays the overall performance of the database in real
time.
• By drilling down into the Performance page, we can identify
database performance problems in real time. Then we
can run ADDM manually to analyze them immediately.
34. Monitoring Real-Time
Database Performance
• Using the Performance Page, we can
Monitoring User Activity
Top SQL, Top Sessions, Top Services, Top Modules, Top
Actions, Top Clients, Top PL/SQL, Top Files, Top Objects
Monitoring Instance Activity
Throughput, I/O, Parallel Execution, Services
Monitoring Host Activity
CPU Utilization, Memory Utilization, Disk I/O Utilization
• Determining the Cause of Spikes in Database Activity
We can access the ASH Analytics page to find out which
sessions are consuming the most database time.
Event, Activity Class, Module/Action, Session, Instance ID,
and PL/SQL function
• Customizing the Database Performance Page
36. Monitoring Performance
Alerts
• Oracle Database includes a built-in alerts infrastructure
to notify you of impending problems with the database
Tablespace Usage
Snapshot Too Old
Recovery Area Low on Free Space
Resumable Session Suspended
• We can use performance alerts to detect any unusual
changes in database performance.
• Using performance alerts, we can
Set Metric Thresholds for Performance Alerts
Respond to Alerts
Clear Alerts
37. Monitoring Performance
Alerts
• Setting Metric Thresholds for Performance Alerts
A metric is the rate of change in a cumulative statistic.
This rate can be measured against a variety of units,
including time, transactions, or database calls.
For example, the number of database calls per second is a
metric.
You can set thresholds on a metric so that an alert is
generated when the threshold is passed.
Performance alerts are based on metrics that are
performance-related.
• Environment-dependent performance alerts
AVERAGE_FILE_READ_TIME metric
• Application-dependent performance alerts
BLOCKED_USERS metric
39. Monitoring Performance
Alerts
• Responding to Alerts
When an alert is generated by Oracle Database, it appears
under Alerts on the Database Home page.
On the Database Home page, under Alerts, locate the alert
that you want to investigate and click the Message link.
Follow the recommendations.
Run Automatic Database Diagnostic Monitor (ADDM) or another
advisor to get more detailed diagnostics of the system or object
behavior.
• Clearing Alerts
On the Database Home page, under Diagnostic Summary,
click the Alert Log link.
Clear alerts
Purge alerts
42. Tuning the Database
Reactively
• Run ADDM manually to diagnose current and historical database
performance when performance problems are reported by the users.
In this way you can analyze current database performance before the
next ADDM analysis, or analyze historical database performance
when you were not proactively monitoring the system.
• Resolve transient performance problems.
The Active Session History (ASH) reports enable you to analyze
transient performance problems with the database that are short-
lived and do not appear in the ADDM analysis.
• Resolve performance degradation over time.
The Automatic Workload Repository (AWR) Compare Periods report
enables you to compare database performance between two periods of
time, and resolve performance degradation that may happen from one
time period to another.
• Validate that the changes made have produced the desired effect,
and verify that the users experience performance improvements.
• Repeat these steps until your performance goals are met or become
impossible to achieve due to other constraints.
44. Tuning the Database
Reactively
• Manual Database Performance Monitoring
We can run the Automatic Database Diagnostic Monitor (ADDM)
manually to monitor current and historical database performance.
• Resolving Transient Performance Problems
Transient performance problems are short-lived and typically do
not appear in the Automatic Database Diagnostic Monitor
(ADDM) analysis.
Use Active Session History (ASH) reports to analyze
transient performance problems with the database that
occur only during specific times.
• Resolving Performance Degradation Over Time
Performance degradation of the database occurs when your
database was performing optimally in the past, but has gradually
degraded to a point where it becomes noticeable to the users.
The Automatic Workload Repository (AWR) Compare Periods
report enables you to compare database performance between two
periods of time.
45. Manual Database
Performance Monitoring
• Manually Running ADDM to Analyze Current Database
Performance
By default, ADDM runs every hour to analyze snapshots taken by
AWR during this period.
We can run ADDM manually to identify and resolve the
performance problem.
When you run ADDM manually, a manual AWR snapshot is
created automatically.
This manual run may affect the ADDM run cycle.
• Manually Running ADDM to Analyze Historical Database
Performance
We can run ADDM manually to analyze historical database
performance by selecting a pair or range of AWR snapshots as the
analysis period.
We can monitor historical performance in the Performance page.
If we identify a problem, then we can run ADDM manually to
analyze a particular time period.
• Accessing Previous ADDM Results
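A manual ADDM run can also be performed from PL/SQL; a sketch, assuming the Diagnostic Pack is licensed, with hypothetical AWR snapshot IDs 101 and 102:

```sql
-- Analyze the period between two existing AWR snapshots
VARIABLE tname VARCHAR2(60)
BEGIN
  :tname := 'manual_addm_demo';
  DBMS_ADDM.ANALYZE_DB(:tname, begin_snapshot => 101, end_snapshot => 102);
END;
/
-- Display the resulting ADDM report
SELECT DBMS_ADDM.GET_REPORT(:tname) FROM dual;
```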
47. Optimizing the Optimizer
• Object statistics
• Database parameters
OPTIMIZER_MODE
OPTIMIZER_INDEX_COST_ADJ
OPTIMIZER_INDEX_CACHING
OPTIMIZER_FEATURES_ENABLE
• System statistics
DBMS_STATS.gather_system_stats
(Diagram: object statistics feed the optimizer's cardinality
estimates; database parameters and configuration feed its I/O and
CPU operation estimates; combined with system statistics, these
produce the final cost estimates.)
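System statistics can be gathered as suggested above; a sketch (the 60-minute interval is an assumption, choose one representative of the workload):

```sql
-- Gather workload system statistics over a representative interval
EXEC DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'INTERVAL', interval => 60);

-- Or gather no-workload statistics immediately
EXEC DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'NOWORKLOAD');
```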
48. Contention
• Types of Contention
Locks
Mostly caused by the application, sometimes by the system.
Latches/Mutexes
Often a side effect of excessive application demand,
but sometimes the final constraint on DB throughput.
Buffers
Buffer caches, redo buffer …
Hot blocks (buffer busy)
Slow writer processes (DBWR, LGWR, RVWR).
• Contention often triggers a chain reaction, causing system
performance to degrade rapidly.
50. Tuning SQL Statements
• Identify high-load SQL statements.
Use the ADDM findings and the Top SQL section to identify high-
load SQL statements that are causing the greatest contention.
• Tune high-load SQL statements.
You can improve the efficiency of high-load SQL statements by
tuning them using SQL Tuning Advisor.
• Optimize data access paths.
You can optimize the performance of data access paths by
creating the proper set of materialized views, materialized view
logs, and indexes for a given workload by using SQL Access
Advisor.
• Analyze the SQL performance impact of SQL tuning and
other system changes by using SQL Performance Analyzer.
• Repeat these steps until all high-load SQL statements are
tuned for greatest efficiency.
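The tuning step above can also be scripted through the DBMS_SQLTUNE API; a sketch, assuming the Tuning Pack is licensed and using a hypothetical SQL ID:

```sql
-- Create and execute a SQL Tuning Advisor task for one cached statement
VARIABLE tname VARCHAR2(60)
BEGIN
  :tname := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => 'abc123xyz0001');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => :tname);
END;
/
-- Review the advisor's findings and recommendations
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK(:tname) FROM dual;
```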
51. Identifying High-Load SQL
Statements
• High-load SQL statements often greatly affect database
performance and must be tuned to optimize their
performance and resource consumption.
• Identification of High-Load SQL Statements Using
ADDM Findings
When a high-load SQL statement is identified, ADDM gives
recommendations, such as running SQL Tuning Advisor on
the SQL statement.
• Identifying High-Load SQL Statements Using Top SQL
The Top SQL section of the Top Activity page in Enterprise
Manager enables you to identify high-load SQL statements
for any 5-minute interval.
• After you have identified the high-load SQL statements,
you can tune them with SQL Tuning Advisor and SQL
Access Advisor.
52. Top SQL Section
• The Top SQL section of the Top Activity page in
Enterprise Manager enables you to identify high-load
SQL statements for any 5-minute interval.
• From the Top SQL section, we can
View SQL Statements by Wait Class
View Details of SQL Statements
54. Viewing SQL Statements by
Wait Class
• The SQL statements that appear in the Top SQL section
of the Top Activity page are categorized into various
wait classes, based on their corresponding class as
described in the legend on the Top Activity chart.
• The Active Sessions Working page for the selected wait
class appears, and the Top SQL section will be
automatically updated to show only the SQL statements
for that wait class.
• The Top SQL section of the Top Activity page displays
the SQL statements executed within the selected 5-
minute interval in descending order based on their
resource consumption.
• We can view the details of the SQL statements by clicking
the SQL ID link directly.
57. SQL Details pages
• Viewing SQL Statistics
SQL Statistics Summary
General SQL Statistics
Activity by Wait Statistics and Activity by Time Statistics
Elapsed Time Breakdown Statistics
Shared Cursors Statistics and Execution Statistics
Other SQL Statistics
• Viewing Session Activity
• Viewing the SQL Execution Plan
• Viewing the Plan Control
• Viewing the Tuning History
61. How Oracle DB Execute SQL
Statements
• When Oracle Database executes the SQL statement, the
query optimizer first determines the best and most
efficient way to retrieve the results.
• The optimizer determines whether it is more efficient to
read all data in the table, called a full table scan, or use
an index.
• It compares the cost of all possible approaches and
chooses the approach with the least cost.
• The access method for physically executing a SQL
statement is called an execution plan, which the
optimizer is responsible for generating.
• The determination of an execution plan is an important
step in the processing of any SQL statement, and can
greatly affect execution time.
64. How Optimizer Help Tuning
• The query optimizer can also help you tune SQL
statements.
• By using SQL Tuning Advisor and SQL Access Advisor,
you can run the query optimizer in advisory mode to
examine a SQL statement or set of statements and
determine how to improve their efficiency.
• SQL Access Advisor is primarily responsible for making
schema modification recommendations, such as adding
or dropping indexes and materialized views.
• SQL Tuning Advisor makes other types of
recommendations, such as creating SQL profiles and
restructuring SQL statements.
68. Managing SQL Tuning Sets
• A SQL tuning set is a database object that includes one
or more SQL statements and their execution statistics
and context.
• You can use the set as an input for advisors such as SQL
Tuning Advisor, SQL Access Advisor, and SQL
Performance Analyzer.
• Under Oracle Enterprise Manager, we can
Create a SQL Tuning Set
Drop a SQL Tuning Set
Transport SQL Tuning Sets
69. SQL Tuning Set
• A set of SQL statements
• Associated execution context such as:
User schema
Application module name and action
List of bind values
Cursor compilation environment
• Associated basic execution statistics such as:
Elapsed time and CPU time
Buffer gets
Disk reads
Rows processed
Cursor fetches
Number of executions and number of complete executions
Optimizer cost
Command type
• Associated execution plans and row source statistics for each SQL
statement (optional)
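A SQL tuning set can also be created and populated outside Enterprise Manager; a sketch (the set name and the HR schema filter are hypothetical):

```sql
BEGIN
  DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'demo_sts');
END;
/
-- Load matching statements from the cursor cache into the set
DECLARE
  cur DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
  OPEN cur FOR
    SELECT VALUE(p)
    FROM   TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE(
                   basic_filter => 'parsing_schema_name = ''HR''')) p;
  DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name => 'demo_sts', populate_cursor => cur);
END;
/
```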
71. Creating a SQL Tuning Set:
Load Method
• Loading Active SQL Statements Incrementally from the
Cursor Cache
• Loading SQL Statements from the Cursor Cache
• Loading SQL Statements from AWR Snapshots
• Loading SQL Statements from AWR Baselines
• Loading SQL Statements from a User-Defined Workload
72. Creating a SQL Tuning Set:
Filter Options
• After the load method is selected, you can apply filters
to reduce the scope of the SQL statements found in the
SQL tuning set.
• By default, the following filter conditions are displayed:
Parsing Schema Name
SQL Text
SQL ID
Elapsed Time (sec)
• We can add more filter conditions.
73. Creating a SQL Tuning Set:
Schedule
• Under Job Parameters, enter a name in the Job Name field
and a description of the job.
• Under Schedule, do one of the following:
Select Immediately to run the job immediately after it is
submitted
Select Later to run the job at a later time, as specified using
the Time Zone, Date, and Time fields
• After the schedule is assigned, we can submit the SQL tuning
set. Then we can use SQL Tuning Advisor to generate
SQL tuning reports.
• Also, we can drop a SQL tuning set, or import/export
(transport) a SQL tuning set.
75. SQL Profiles
• A SQL profile is a set of auxiliary information that is built
during automatic tuning of a SQL statement.
• The database uses the profile to verify and, if necessary,
adjust optimizer estimates.
• During SQL profiling, the optimizer uses the execution
history of the SQL statement to create appropriate settings
for optimizer parameters.
• After SQL profiling completes, the optimizer uses the
information in the SQL profile and regular database
statistics to generate execution plans.
• After running a SQL Tuning Advisor task, a SQL profile
may be recommended.
• If you accept the recommendation, then the database
creates the SQL profile and enables it for the SQL
statement.
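Accepting a recommended profile can also be done from PL/SQL; a sketch (the task and profile names are hypothetical):

```sql
-- Accept the SQL profile recommended by a finished SQL Tuning Advisor task
DECLARE
  pname VARCHAR2(30);
BEGIN
  pname := DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(
             task_name => 'my_tuning_task',
             name      => 'my_sql_profile');
END;
/
```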
77. Manage SQL Profiles
• We can test the performance of a SQL statement
without using a SQL profile to determine if the SQL
profile is actually beneficial.
• If the SQL statement is performing poorly after the SQL
profile is disabled, then we should enable it again to
avoid performance degradation.
• If the SQL statement is performing optimally after
having the SQL profile disabled, then we could remove
the SQL profile from database.
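These tests can be performed with DBMS_SQLTUNE; a sketch (the profile name is hypothetical):

```sql
-- Disable the profile to test the statement without it
EXEC DBMS_SQLTUNE.ALTER_SQL_PROFILE(name => 'my_sql_profile', attribute_name => 'STATUS', value => 'DISABLED');

-- Re-enable it if performance degrades
EXEC DBMS_SQLTUNE.ALTER_SQL_PROFILE(name => 'my_sql_profile', attribute_name => 'STATUS', value => 'ENABLED');

-- Or drop it if the statement performs well without it
EXEC DBMS_SQLTUNE.DROP_SQL_PROFILE(name => 'my_sql_profile');
```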
79. SQL Execution Plans
• SQL plan management is a preventative mechanism
that records and evaluates execution plans of SQL
statements over time.
• The database builds SQL plan baselines consisting of a
set of efficient plans.
• If the same SQL statement runs repeatedly, and if the
optimizer generates a new plan differing from the
baseline, then the database compares the plan with the
baseline and chooses the best one.
• SQL plan management avoids SQL performance
regression.
• SQL plan baselines preserve performance of SQL
statements, regardless of changes in the database.
81. Managing SQL Execution
Plans
• Capture SQL plan baselines automatically
• Load SQL execution plans manually
• Fix the execution plan of a baseline to prevent the
database from using an alternative SQL plan baseline
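The three tasks above map onto an initialization parameter and the DBMS_SPM package; a sketch (the SQL ID and SQL handle are hypothetical):

```sql
-- Capture repeatable plans automatically
ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = TRUE;

-- Load plans manually from the cursor cache for one statement
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'abc123xyz0001');
END;
/
-- Fix a baseline plan so alternatives are not considered
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.ALTER_SQL_PLAN_BASELINE(
         sql_handle      => 'SQL_abcdef1234567890',
         attribute_name  => 'FIXED',
         attribute_value => 'YES');
END;
/
```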
83. SQL Access Advisor
• SQL Access Advisor enables you to optimize query access
paths by recommending materialized views and view logs,
indexes, SQL profiles, and partitions for a specific workload.
• A materialized view provides access to table data by storing
query results in a separate schema object.
• A materialized view contains the rows from a query of one
or more base tables or views.
• A materialized view log is a schema object that records
changes to a master table's data, so that a materialized
view defined on the master table can be refreshed
incrementally.
• SQL Access Advisor recommends how to optimize
materialized views so that they can be rapidly refreshed
and make use of the query rewrite feature.
• SQL Access Advisor also recommends bitmap, function-
based, and B-tree indexes.
85. Bitmap, Function-Based,
and B-Tree Indexes
• A bitmap index reduces response time for many types of
ad hoc queries and can also reduce storage space
compared to other indexes.
• A function-based index derives the indexed value from
the table data.
• For example, to find character data in mixed case, a
function-based index can search for values as if they were
all in uppercase.
• B-tree indexes are commonly used to index unique or
near-unique keys.
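The mixed-case example above can be sketched as a function-based index (the table and column names are hypothetical):

```sql
-- Index the uppercased value so case-insensitive lookups can use it
CREATE INDEX emp_upper_name_idx ON employees (UPPER(last_name));

-- The optimizer can now use the index for this predicate
SELECT employee_id, last_name
FROM   employees
WHERE  UPPER(last_name) = 'SMITH';
```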
86. Using SQL Access Advisor
• Running SQL Access Advisor
Running SQL Access Advisor to make recommendations for a
SQL workload.
• Reviewing the SQL Access Advisor Recommendations
SQL Access Advisor graphically displays the
recommendations and provides hyperlinks so that you can
quickly see which SQL statements benefit from a
recommendation.
• Implementing the SQL Access Advisor
Recommendations
You can select the recommendations for implementation and
schedule when the job should be executed.
Before implementing the SQL Access Advisor
recommendations, review them for cost benefits to determine
which ones should be implemented.
87. Running SQL Access Advisor
• Running SQL Access Advisor: Initial Options, Select the
initial options
• Running SQL Access Advisor: Workload Source, Select
the workload source used for the analysis
• Running SQL Access Advisor: Filter Options, Define the
filters options
• Running SQL Access Advisor: Recommendation Options,
Choose the types of recommendations
• Running SQL Access Advisor: Schedule, Schedule the
SQL Access Advisor task
88. Running SQL Access Advisor:
Initial Options
• The first step in running SQL Access Advisor is to select
the initial options on the SQL Access Advisor: Initial
Options page.
89. Running SQL Access Advisor:
Workload Source
• After initial options are specified, select the workload
source that you want to use for the analysis.
Using SQL Statements from the Cache
Using an Existing SQL Tuning Set
Using a Hypothetical Workload
• Hypothetical Workload
A dimension table stores all or part of the values for a logical
dimension in a star or snowflake schema.
Create a hypothetical workload from dimension tables
containing primary or foreign key constraints.
This option is useful if the workload to be analyzed does not
exist.
90. Running SQL Access Advisor:
Filter Options
• (Optionally) After the workload source is selected, apply
filters to reduce the scope of the SQL statements found
in the workload
Using filters directs SQL Access Advisor to make
recommendations based on a specific subset of SQL
statements from the workload, which may lead to better
recommendations.
Using filters removes extraneous SQL statements from the
workload, which may greatly reduce processing time.
• Define the filters
For Resource Consumption
For Users
For Tables
For SQL Text
For Modules
For Actions
91. Running SQL Access Advisor:
Recommendation Options
• SQL Access Advisor provides recommendations for indexes,
materialized views, and partitioning.
• Balance the benefits of using these access structures against the
cost to maintain them.
• Access Structures to Recommend
Indexes
Materialized Views
Partitioning
• Scope
Select either Limited or Comprehensive.
• Advanced Options.
Workload Categorization
Space Restrictions
Tuning Prioritization
Default Storage Locations
93. Reviewing the SQL Access
Advisor Recommendations
• SQL Access Advisor graphically displays the
recommendations and provides hyperlinks to see which
SQL statements benefit from a recommendation.
• We can review the SQL Access Advisor recommendations
at several levels:
Summary
Recommendations
SQL Statements
Details
98. Implementing the SQL Access
Advisor Recommendations
• A SQL Access Advisor recommendation can range from
a simple suggestion to a complex solution that requires
partitioning a set of existing base tables and
implementing a set of database objects such as indexes,
materialized views, and materialized view logs.
101. Developing Efficient SQL
Statements
• Verifying Optimizer Statistics
• Reviewing the Execution Plan
• Restructuring the SQL Statements
• Restructuring the Indexes
• Modifying or Disabling Triggers and Constraints
• Restructuring the Data
• Maintaining Execution Plans Over Time
• Visiting Data as Few Times as Possible
102. Verifying Optimizer Statistics
• The query optimizer uses statistics gathered on tables
and indexes when determining the optimal execution
plan.
• If these statistics have not been gathered, or if the
statistics are no longer representative of the data stored
within the database, then the optimizer does not have
sufficient information to generate the best plan.
• Things to check:
If you gather statistics for some tables in your database, then
it is probably best to gather statistics for all tables.
If the optimizer statistics in the data dictionary are no longer
representative of the data in the tables and indexes, then
gather new statistics.
One way to check whether the dictionary statistics are stale
is to compare the real cardinality (row count) of a table to the
value of DBA_TABLES.NUM_ROWS.
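The staleness check above can be sketched as follows (the owner and table are hypothetical):

```sql
-- Dictionary statistics recorded for the table
SELECT num_rows, last_analyzed
FROM   dba_tables
WHERE  owner = 'HR' AND table_name = 'EMPLOYEES';

-- Actual cardinality; if it diverges noticeably, re-gather statistics
SELECT COUNT(*) FROM hr.employees;
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'HR', tabname => 'EMPLOYEES');
```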
103. Reviewing the Execution Plan
• When writing a SQL statement in an OLTP
environment, the goal is to drive from the table that has
the most selective filter.
• This means that there are fewer rows passed to the next
step.
• When examining the optimizer execution plan, look for
the following:
The driving table has the best filter.
The join order in each step returns the fewest number of
rows to the next step.
The join method is appropriate for the number of rows being
returned. For example, nested loop joins through indexes
may not be optimal when the statement returns many rows.
The database uses views efficiently.
There are no unintentional Cartesian products.
104. Access Table Efficiently
• Consider the predicates in the SQL statement and the
number of rows in the table. Look for suspicious activity,
such as full table scans on tables with a large number of
rows that have selective predicates in the WHERE clause.
Determine why an index is not used for such a selective
predicate.
• A full table scan does not always mean inefficiency. It might
be more efficient to perform a full table scan on a small
table, or to perform a full table scan to leverage a better
join method (for example, a hash join) for the number of
rows returned.
• If any of these conditions are not optimal, then consider
restructuring the SQL statement or the indexes
available on the tables.
105. Restructuring the SQL
Statements
• Compose Predicates Using AND and =
To improve SQL efficiency, use equijoins whenever possible.
• Avoid Transformed Columns in the WHERE Clause
Use untransformed column values. For example, use:
WHERE a.order_no = b.order_no
rather than:
WHERE TO_NUMBER (SUBSTR(a.order_no, INSTR(b.order_no, '.') - 1))
= TO_NUMBER (SUBSTR(a.order_no, INSTR(b.order_no, '.') - 1))
Do not use SQL functions in predicate or WHERE clauses:
applying a function to a column causes the optimizer to ignore
the possibility of using an index on that column, unless a
function-based index is defined.
Avoid mixed-mode expressions and beware of implicit type
conversions. For example:
AND charcol = numexpr
is implicitly converted to AND TO_NUMBER(charcol) = numexpr,
which disables any index on charcol.
Avoid the following kinds of complex expressions:
col1 = NVL (:b1,col1)
NVL (col1,-999) = ….
TO_DATE(), TO_NUMBER(), and so on
106. Restructuring the SQL
Statements
• Avoid Transformed Columns in the WHERE Clause
Add the predicate instead of using the NVL() technique. For
example, avoid:
WHERE (employee_num = NVL (:b1,employee_num))
Use instead:
WHERE (employee_num = :b1)
For example, if numcol is a column of type NUMBER, then a
WHERE clause containing numcol=TO_NUMBER('5')
enables the database to use the index on numcol.
For example, if the join condition is varcol=numcol, then the
database implicitly converts the condition to
TO_NUMBER(varcol)=numcol. If an index exists on the
varcol column, then explicitly set the type conversion to
varcol=TO_CHAR(numcol), thus enabling the database to use
the index.
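The implicit-conversion rule above can be sketched as follows (varcol is an indexed VARCHAR2 column, numcol a NUMBER column; table names are hypothetical):

```sql
-- Implicit: rewritten by the database as TO_NUMBER(a.varcol) = b.numcol,
-- so the index on varcol cannot be used
SELECT *
FROM   a, b
WHERE  a.varcol = b.numcol;

-- Explicit conversion on the non-indexed side keeps the index usable
SELECT *
FROM   a, b
WHERE  a.varcol = TO_CHAR(b.numcol);
```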
107. Restructuring the SQL
Statements
• Write Separate SQL Statements for Specific Tasks
SELECT info
FROM tables
WHERE ...
AND somecolumn BETWEEN DECODE(:loval, 'ALL', somecolumn, :loval)
AND DECODE(:hival, 'ALL', somecolumn, :hival);
With this single statement, the database cannot use an index on the
somecolumn column. Rewriting it as a UNION ALL lets the selective
branch use the index:
SELECT /* change this half of UNION ALL if other half changes */ info
FROM tables
WHERE ...
AND somecolumn BETWEEN :loval AND :hival
AND (:hival != 'ALL' AND :loval != 'ALL')
UNION ALL
SELECT /* Change this half of UNION ALL if other half changes. */ info
FROM tables
WHERE ...
AND (:hival = 'ALL' OR :loval = 'ALL');
EXPLAIN PLAN shows both execution plans: the desirable indexed
plan for the selective branch and the undesirable full scan for
the 'ALL' branch. At run time, only one branch returns rows.
108. Controlling the Access Path
and Join Order with Hints
• Refer to (E41573-03) Oracle Database Performance
Tuning Guide 11g Release 2 (11.2) Chapter 19 Using
Optimizer Hints
• We can use hints in SQL statements to instruct the
optimizer how the statement should be executed.
• Hints such as /*+ FULL */ control access paths. For
example:
SELECT /*+ FULL(e) */ e.last_name
FROM employees e
WHERE e.job_id = 'CLERK';
109. Hints for Join Order
• Join order can have a significant effect on performance.
• The main objective of SQL tuning is to avoid performing
unnecessary work to access rows that do not affect the
result.
• This leads to three general rules:
Avoid a full table scan if it is more efficient to get the
required rows through an index.
Avoid using an index that fetches 10,000 rows from the
driving table if you could instead use another index that
fetches 100 rows.
Choose the join order so as to join fewer rows to tables later
in the join order.
• Use the ORDERED or LEADING hint to force the join
order.
110. Hints for Join Order
Example
SELECT /*+ LEADING(e2 e1) USE_NL(e1) INDEX(e1 emp_emp_id_pk)
USE_MERGE(j) FULL(j) */
e1.first_name, e1.last_name, j.job_id, sum(e2.salary) total_sal
FROM employees e1, employees e2, job_history j
WHERE e1.employee_id = e2.manager_id
AND e1.employee_id = j.employee_id
AND e1.hire_date = j.start_date
GROUP BY e1.first_name, e1.last_name, j.job_id
ORDER BY total_sal;
111. Restructuring the Indexes
• Often, there is a beneficial impact on performance by
restructuring indexes.
• This can involve the following:
Remove nonselective indexes to speed up DML.
Index performance-critical access paths.
Consider reordering columns in existing concatenated
indexes.
Add columns to the index to improve selectivity.
• Do not use indexes as a panacea. Application developers
sometimes think that performance improves when they
create more indexes.
• If a single programmer creates an appropriate index, that
index may improve the application's performance. But if
50 developers each create an index, application
performance will probably be hampered.
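A sketch of the kinds of changes listed above, with hypothetical table and index names:

```sql
-- Drop a nonselective index that slows DML without helping queries
DROP INDEX ord_status_ix;

-- Reorder a concatenated index so the most selective,
-- most frequently filtered column leads
DROP INDEX ord_status_cust_ix;
CREATE INDEX ord_cust_status_ix ON orders (customer_id, status);

-- Add a column to improve selectivity (and enable index-only access)
CREATE INDEX ord_cust_date_ix ON orders (customer_id, order_date);
```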
112. Modifying or Disabling
Triggers and Constraints
• Triggers and constraints consume system resources.
• If you use too many triggers or constraints, then
performance may be adversely affected.
• In this case, you might need to modify or disable some of
the triggers or constraints.
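For example, a trigger and a check constraint can be disabled around a bulk load and re-enabled afterwards; the object names here are hypothetical:

```sql
ALTER TRIGGER orders_audit_trg DISABLE;
ALTER TABLE orders DISABLE CONSTRAINT orders_amount_ck;

-- ... perform the bulk load ...

ALTER TABLE orders ENABLE CONSTRAINT orders_amount_ck;
ALTER TRIGGER orders_audit_trg ENABLE;
```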
113. Restructuring the Data
• After restructuring the indexes and the statement,
consider restructuring the data:
Introduce derived values.
Avoid GROUP BY in response-critical code.
Review your data design. Change the design of your system if
it can improve performance.
Consider partitioning, if appropriate.
Consider merging data tables.
Review duplicate data.
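As a sketch of the partitioning option, a hypothetical range-partitioned table lets queries on a date range touch only the relevant partitions (partition pruning):

```sql
CREATE TABLE sales_hist (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p2010 VALUES LESS THAN (DATE '2011-01-01'),
  PARTITION p2011 VALUES LESS THAN (DATE '2012-01-01'),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);
```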
114. Maintaining Execution Plans
Over Time
• We can maintain the existing execution plan of SQL
statements over time.
• Stored optimizer statistics for tables apply to all SQL
statements that refer to those tables.
• Storing an execution plan as a SQL plan baseline
maintains the plan for a set of SQL statements.
• If both statistics and a SQL plan baseline are available for
a SQL statement, then the optimizer first uses a cost-based
search method to build a best-cost plan, and then tries to
find a matching plan in the SQL plan baseline.
• If a match is found, then the optimizer proceeds using this
plan.
• Otherwise, it evaluates the cost of each of the accepted
plans in the SQL plan baseline and selects the plan with
the lowest cost.
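One way to capture an existing plan as a SQL plan baseline is the DBMS_SPM package; the SQL_ID below is a placeholder for a real value taken from V$SQL:

```sql
DECLARE
  l_plans PLS_INTEGER;
BEGIN
  -- 'abcd1234efgh5' is a hypothetical SQL_ID
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
               sql_id => 'abcd1234efgh5');
END;
/
```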
115. Visiting Data as Few Times as
Possible
• Applications should try to access each row only once.
• This reduces network traffic and reduces database load.
• Consider doing the following:
Combine Multiple Scans Using CASE Expressions
Use DML with RETURNING Clause
Modify All the Data Needed in One Statement
116. Combine Multiple Scans
Using CASE Expressions
• Consider three separate scans of the same table:
SELECT COUNT (*)
FROM employees
WHERE salary < 2000;
SELECT COUNT (*)
FROM employees
WHERE salary BETWEEN 2000 AND 4000;
SELECT COUNT (*)
FROM employees
WHERE salary>4000;
• However, it is more efficient to run the entire query in a
single statement. For example:
SELECT
COUNT (CASE WHEN salary < 2000 THEN 1 ELSE NULL END) count1,
COUNT (CASE WHEN salary BETWEEN 2000 AND 4000 THEN 1 ELSE NULL END)
count2,
COUNT (CASE WHEN salary > 4000 THEN 1 ELSE NULL END) count3
FROM employees;
117. Use DML with RETURNING
Clause
• Use INSERT, UPDATE, or DELETE... RETURNING to
select and modify data with a single call.
• This technique improves performance by reducing the
number of calls to the database.
• For example:
INSERT INTO t1 VALUES (t1_seq.nextval, 'FOUR')
RETURNING id INTO l_id;
UPDATE t1
SET description = UPPER(description)
WHERE description = 'FOUR'
RETURNING id INTO l_id;
DELETE FROM t1
WHERE description = 'FOUR'
RETURNING id INTO l_id;
118. Modify All the Data Needed in
One Statement
• When possible, use array processing. This means that an array of
bind variable values is passed to Oracle Database for repeated
execution.
• For example, the following loop processes the rows one at a time:
BEGIN
FOR pos_rec IN (SELECT *
FROM order_positions
WHERE order_id = :id) LOOP
DELETE FROM order_positions
WHERE order_id = pos_rec.order_id
AND order_position = pos_rec.order_position;
END LOOP;
DELETE FROM orders
WHERE order_id = :id;
END;
• In the previous example, one SELECT and n DELETEs are executed.
Alternatively, you could define a cascading constraint on orders.
Then, when a user issues DELETE FROM orders WHERE order_id = :id,
the database automatically deletes the dependent positions as
well, with a single additional DELETE statement.
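The cascading-constraint alternative can be sketched as follows (the constraint name is hypothetical; table and column names are taken from the example above):

```sql
ALTER TABLE order_positions
  ADD CONSTRAINT order_positions_ord_fk
  FOREIGN KEY (order_id)
  REFERENCES orders (order_id)
  ON DELETE CASCADE;

-- Now a single statement removes the order and its positions
DELETE FROM orders
WHERE  order_id = :id;
```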