1. Performance Tuning in SQL Server
Antonios Chatzipavlis
Software Architect, Development Evangelist, IT Consultant
MCT, MCITP, MCPD, MCSD, MCDBA, MCSA, MCTS, MCAD, MCP, OCA
2. Objectives
• Why is Performance Tuning Necessary?
• How to Optimize SQL Server for performance
• How to Optimize Database for performance
• How to Optimize Query for performance
• Define and implement monitoring standards for database servers and instances
• How to troubleshoot SQL Server
4. Why is Performance Tuning Necessary?
• Allowing your system to scale
  • Adding more customers
  • Adding more features
• Improving overall system performance
• Saving money by not wasting resources
  • The database is typically one of the most expensive resources in a datacenter
5. General Scaling Options
Scaling SQL Server with Bigger Hardware
• Purchase a larger server and replace the existing system.
• Works well with smaller systems.
• Cost prohibitive for larger systems.
• Can be a temporary solution.
6. General Scaling Options
Scaling SQL Server with More Hardware
• Purchase more hardware and split or partition the database.
• Partitioning can be either horizontal or vertical:
  • Horizontal: Split the database rows based on a specific demographic, such as time zone or zip code.
  • Vertical: Split components out of one database into another.
7. General Scaling Options
Scaling SQL Server without adding hardware
• Adjusting and rewriting queries.
• Adding indexes.
• Removing indexes.
• Re-architecting the database schema.
• Moving things that shouldn’t be in the database.
• Eliminating redundant work on the database.
• Caching data.
• Other performance tuning techniques.
11. CPU and SQL Server
• CPU-Intensive Operations
  • Compression
  • Bulk load operations
  • Compiling or recompiling queries
• Hyper-Threading
  • Yields at best about 1.3 times the throughput of non-hyper-threaded execution
  • The currently accepted best-practice recommendation is to run SQL Server with Hyper-Threading disabled
• L3 Cache
12. CPU and SQL Server
Performance Counters

Counter: Processor: % Processor Time
Description: Monitors the amount of time the CPU spends executing a thread that is not idle.
Guidelines: A consistent state of 80 to 90 percent may indicate the need to upgrade your CPU or add more processors.

Counter: System: % Total Processor Time
Description: Determines the average processor usage across all processors.

Counter: Processor: % Privileged Time
Description: The percentage of time the processor spends executing Microsoft Windows kernel commands, such as processing SQL Server I/O requests.
Guidelines: If this counter is consistently high when the Physical Disk counters are also high, consider installing a faster or more efficient disk subsystem.

Counter: Processor: % User Time
Description: The percentage of time the processor spends executing user processes such as SQL Server.

Counter: System: Processor Queue Length
Description: The number of threads waiting for processor time. A processor bottleneck develops when threads of a process require more processor cycles than are available.
Guidelines: If more than a few processes attempt to utilize the processor's time, install a faster processor or, on a multiprocessor system, add a processor.
13. Memory and SQL Server
Enable Address Windowing Extensions (AWE)
• Tuning 32-bit systems
  • Use /PAE and /3GB together (Windows 2003)
  • Run BCDEDIT /set increaseUserVA 3072 (Windows 2008)
• Tuning 64-bit systems
  • If needed, enable AWE on Enterprise Edition of SQL Server
  • If needed, enable AWE on Standard Edition of SQL Server only after SP1 with Cumulative Update 2 has been applied
• Read more at http://support.microsoft.com/kb/970070
14. Memory and SQL Server
Min and Max Server Memory
• Control the allowable size of SQL Server’s buffer pool.
• Do not control all of SQL Server’s memory usage, just the buffer pool.
• When the SQL Server service starts, it does not acquire all the memory configured in Min Server Memory; it starts with only the minimum required and grows as necessary.
• Once memory usage has increased beyond the Min Server Memory setting, SQL Server won’t release any memory below that figure.
• Max Server Memory is the opposite of Min Server Memory, setting a “ceiling” for the buffer pool.
15. Memory and SQL Server
How to configure Max Server Memory
• Look at the buffer pool’s maximum usage.
  • Set SQL Server to dynamically manage memory.
  • Monitor the SQLServer: Memory Manager\Total Server Memory (KB) counter using Performance Monitor.
• Determine the maximum potential for non-buffer pool usage:
  • 2GB for Windows
  • xGB for SQL Server worker threads
    • Each thread uses 0.5MB on x86, 2MB on x64, and 4MB on Itanium.
  • 1GB for multi-page allocations, linked servers, and other consumers of memory outside the buffer pool
  • 1–3GB for other applications that might be running on the system, such as backup programs
16. Memory and SQL Server
Example of Max Server Memory configuration
• On 8 CPU cores and 16GB of RAM running SQL Server 2008 x64 and a third-party backup utility, you would allow the following:
  • 2GB for Windows
  • 1GB for worker threads (576 × 2MB, rounded down)
  • 1GB for MPAs, etc.
  • 1GB for the backup program
• The total is 5GB, and you would configure Max Server Memory to 11GB.
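Applying the result of the calculation above is done through sp_configure; a minimal sketch (the 11264MB value is simply 11 × 1024 from the 16GB − 5GB example):

```sql
-- Sketch: cap the buffer pool at 11GB (11 x 1024 = 11264MB).
-- 'max server memory (MB)' is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 11264;
RECONFIGURE;
```

The change takes effect immediately; no service restart is required.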
17. Memory and SQL Server
Performance Counters

Counter: Memory: Available Bytes
Description: Indicates how many bytes of memory are currently available for use by processes.
Guidelines: Low values can indicate an overall shortage of memory on the computer or that an application is not releasing memory.

Counter: Memory: Pages/sec
Description: Indicates the number of pages that either were retrieved from disk due to hard page faults or written to disk to free space in the working set due to page faults.
Guidelines: A high rate could indicate excessive paging. Monitor the Memory: Page Faults/sec counter to make sure that the disk activity is not caused by paging.

Counter: Process: Page Faults/sec (SQL Server instance)
Description: The Windows Virtual Memory Manager takes pages from SQL Server and other processes as it trims the working-set sizes of those processes.
Guidelines: A high number indicates excessive paging and disk thrashing. Use this counter to check whether SQL Server or another process is causing the excessive paging.

Counter: SQL Server: Buffer Manager\Buffer Cache Hit Ratio
Description: Monitors the percentage of required pages found in the buffer cache without reading from disk.
Guidelines: Add more memory until the value is consistently greater than 90 percent.

Counter: SQL Server: Buffer Manager\Total Pages
Description: Monitors the total number of pages in the buffer cache, including database, free, and stolen pages.
Guidelines: A low number may indicate frequent disk I/O or thrashing. Consider adding more memory.

Counter: SQL Server: Memory Manager\Total Server Memory (KB)
Description: Monitors the total amount of dynamic memory the server is using.
Guidelines: If this counter is consistently high in comparison to the amount of physical memory available, more memory may be required.
18. IO and SQL Server
Choose the right hard disk subsystem
• RAID 5
  • Loved by storage administrators
  • The dominant choice for non-database applications
  • Cost-effective and cost-efficient
  • Minimizes the space required in the datacenter (fewer drives need fewer bays)
• RAID 10
  • Microsoft's recommendation for log files
• Storage Area Networks (SANs)
  • Performance is not always predictable if two servers share the same drive
• iSCSI Storage Area Networks
  • Need dedicated switches for good performance
19. IO and SQL Server
Choosing Which Files to Place on Which Disks
• Best practices dictate that SQL Server
  • data files,
  • logs,
  • tempdb files, and
  • backup files
  are all written to separate arrays.
• Put log files on RAID 10.
• Put data files on RAID 5 (to save money).
20. IO and SQL Server
Using Compression to Gain Performance
• Increases I/O performance, but with a CPU penalty.
  • The SQL Server engine has to compress the data before writing the page and decompress the data after reading the page.
  • In practice, however, this penalty is far outweighed by the time saved waiting for storage. Read more at http://msdn.microsoft.com/en-us/library/dd894051.aspx
• Example: If a 10GB index is compressed down to 3GB, an index scan completes 70% faster simply because the data takes less time to read off the drives.
• It is an Enterprise Edition feature.
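As a sketch of how this looks in practice (the table and index names are illustrative, not from the slides), you can estimate the savings first and then enable page compression with an index rebuild:

```sql
-- Estimate the savings before changing anything (names are illustrative).
EXEC sp_estimate_data_compression_savings
    @schema_name = 'Sales',
    @object_name = 'SalesOrderDetail',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';

-- Enable page compression via a rebuild (Enterprise Edition only).
ALTER INDEX IX_SalesOrderDetail_ProductID
    ON Sales.SalesOrderDetail
    REBUILD WITH (DATA_COMPRESSION = PAGE);
```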
21. IO and SQL Server
Performance Counters

Counter: % Disk Time
Description: Monitors the percentage of time that the disk is busy with read/write activity.
Guidelines: If this counter is high (more than 90 percent), check the Current Disk Queue Length counter.

Counter: Avg. Disk Queue Length
Description: Monitors the average number of read/write requests that are queued.
Guidelines: This counter should be no more than twice the number of spindles.

Counter: Current Disk Queue Length
Description: Monitors the current number of read/write requests that are queued.
Guidelines: This counter should be no more than twice the number of spindles.

• Monitor the Page Faults/sec counter in the Memory object to make sure that the disk activity is not caused by paging.
• If you have more than one logical partition on the same hard disk, use the Logical Disk counters rather than the Physical Disk counters.
25. Schema Design Optimization
Normalization
• In this process you organize data to minimize redundancy, which eliminates duplicated data and logical ambiguities in the database.

Normal Form: First
Description: Every attribute is atomic, and there are no repeating groups.

Normal Form: Second
Description: Complies with First Normal Form, and all non-key columns depend on the whole key.

Normal Form: Third
Description: Complies with Second Normal Form, and all non-key columns are non-transitively dependent upon the primary key.
26. Schema Design Optimization
Denormalization
• In this process you re-introduce redundancy to the database to optimize performance.
• When to use denormalization:
  • To pre-aggregate data
  • To avoid multiple/complex joins
• When not to use denormalization:
  • To avoid simple joins
  • To provide reporting data
  • To avoid same-row calculations
27. Schema Design Optimization
Generalization
• In this process you group similar entities together into a single entity to reduce the amount of required data access code.
• Use generalization when:
  • A large number of entities appear to be of the same type
  • Multiple entities contain the same attributes
• Do not use generalization when:
  • It results in an overly complex design that is difficult to manage
30. Key Measures for Query Performance
Key factors for query performance:
• Resources used to execute the query
• Time required for query execution
SQL Server tools to measure query performance:
• Performance Monitor
• SQL Server Profiler
32. Logical Execution of Query
Example Data

Customers
customerid  city
ANTON       Athens
CHRIS       Salonica
FANIS       Athens
NASOS       Athens

Orders
orderid  customerid
1        NASOS
2        NASOS
3        FANIS
4        FANIS
5        FANIS
6        CHRIS
7        NULL
33. Logical Execution of Query
Example Query & Results

SELECT C.customerid, COUNT(O.orderid) AS numorders
FROM dbo.Customers AS C
LEFT OUTER JOIN dbo.Orders AS O
ON C.customerid = O.customerid
WHERE C.city = 'Athens'
GROUP BY C.customerid
HAVING COUNT(O.orderid) < 3
ORDER BY numorders;

customerid  numorders
ANTON       0
NASOS       2
34. Logical Execution of Query
1st Step - Cross Join
FROM dbo.Customers AS C ... JOIN dbo.Orders AS O

The cross join pairs each of the 4 customer rows with each of the 7 order rows, producing 4 × 7 = 28 rows:

customerid  city      orderid  customerid
ANTON       Athens    1        NASOS
ANTON       Athens    2        NASOS
...         ...       ...      ...
NASOS       Athens    6        CHRIS
NASOS       Athens    7        NULL

(28 rows in total: ANTON, CHRIS, FANIS, and NASOS each combined with orders 1 through 7.)
35. Logical Execution of Query
2nd Step - Apply Join Condition (ON Filter)
ON C.customerid = O.customerid

The ON predicate is evaluated for each of the 28 rows, yielding TRUE, FALSE, or UNKNOWN (when O.customerid is NULL). For example:

customerid  city      orderid  customerid  ON filter
ANTON       Athens    1        NASOS       FALSE
ANTON       Athens    7        NULL        UNKNOWN
CHRIS       Salonica  6        CHRIS       TRUE
FANIS       Athens    3        FANIS       TRUE
NASOS       Athens    1        NASOS       TRUE

Only the rows for which the filter evaluates to TRUE survive:

customerid  city      orderid  customerid
CHRIS       Salonica  6        CHRIS
FANIS       Athens    3        FANIS
FANIS       Athens    4        FANIS
FANIS       Athens    5        FANIS
NASOS       Athens    1        NASOS
NASOS       Athens    2        NASOS
36. Logical Execution of Query
3rd Step - Apply OUTER Join
FROM dbo.Customers AS C LEFT OUTER JOIN dbo.Orders AS O

The LEFT OUTER JOIN adds back the preserved-side rows that matched no order, with NULLs for the order columns:

customerid  city      orderid  customerid
CHRIS       Salonica  6        CHRIS
FANIS       Athens    3        FANIS
FANIS       Athens    4        FANIS
FANIS       Athens    5        FANIS
NASOS       Athens    1        NASOS
NASOS       Athens    2        NASOS
ANTON       Athens    NULL     NULL
45. Logical Execution of Query
Get the Result

customerid  numorders
ANTON       0
NASOS       2
46. Performance Tuning in SQL Server
How to Optimize Query for performance
Top 10 for Building Efficient Queries
47. Top 10 for Building Efficient Queries
1. Favor set-based logic over procedural or cursor logic
• The most important factor to consider when tuning queries is how to properly express logic in a set-based manner.
• Cursors and other procedural constructs limit the query optimizer’s ability to generate flexible query plans.
• Cursors can therefore reduce the possibility of performance improvements in many situations.
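A minimal sketch of the difference, using an illustrative dbo.Orders table (not from the slides): the cursor version touches rows one at a time, while the set-based statement leaves the optimizer free to choose a plan.

```sql
-- Cursor version: one UPDATE per row, row-by-row through the engine.
DECLARE @id int;
DECLARE c CURSOR FOR
    SELECT orderid FROM dbo.Orders WHERE shipcountry = 'Greece';
OPEN c;
FETCH NEXT FROM c INTO @id;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders SET freight = freight * 1.10 WHERE orderid = @id;
    FETCH NEXT FROM c INTO @id;
END
CLOSE c;
DEALLOCATE c;

-- Set-based version: one statement, one plan, same result.
UPDATE dbo.Orders
SET freight = freight * 1.10
WHERE shipcountry = 'Greece';
```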
48. Top 10 for Building Efficient Queries
2. Test query variations for performance
• The query optimizer can often produce widely different plans for logically equivalent queries.
• Test different techniques, such as joins or subqueries, to find out which perform better in various situations.
49. Top 10 for Building Efficient Queries
3. Avoid query hints
• You must work with the SQL Server query optimizer, rather than against it, to create efficient queries.
• Query hints tell the query optimizer how to behave and therefore override the optimizer’s ability to do its job properly.
• If you eliminate the optimizer’s choices, you might limit yourself to a query plan that is less than ideal.
• Use query hints only when you are absolutely certain that the query optimizer is incorrect.
50. Top 10 for Building Efficient Queries
4. Use correlated subqueries to improve performance
• Since the query optimizer is able to integrate subqueries into the main query flow in a variety of ways, subqueries might help in various query tuning situations.
• Subqueries can be especially useful in situations in which you create a join to a table only to verify the existence of correlated rows. For better performance, replace these kinds of joins with correlated subqueries that use the EXISTS (or NOT EXISTS) operator.

-- Using a LEFT JOIN
SELECT a.parent_key
FROM parent_table a
LEFT JOIN child_table b ON a.parent_key = b.parent_key
WHERE b.parent_key IS NULL;

-- Using NOT EXISTS
SELECT a.parent_key
FROM parent_table a
WHERE NOT EXISTS (SELECT * FROM child_table b WHERE a.parent_key = b.parent_key);
51. Top 10 for Building Efficient Queries
5. Avoid using a scalar user-defined function in the WHERE clause
• Scalar user-defined functions, unlike scalar subqueries, are not optimized into the main query plan.
• Instead, they are called row by row through a hidden cursor.
• This is especially troublesome in the WHERE clause, because the function is called for every input row.
• Using a scalar function in the SELECT list is much less problematic, because the rows have already been filtered by the WHERE clause.
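A sketch of the problem and one possible rewrite. dbo.fn_OrderTotal is a hypothetical scalar UDF, and the rewrite assumes it simply sums the order's line totals; with your own functions the equivalent inline form will differ.

```sql
-- Slow: the hypothetical scalar UDF runs once per input row via a hidden cursor.
SELECT SalesOrderID
FROM Sales.SalesOrderHeader
WHERE dbo.fn_OrderTotal(SalesOrderID) > 1000;

-- Faster: express the same logic inline so the optimizer can plan it
-- (assuming fn_OrderTotal just sums the order's line totals).
SELECT SH.SalesOrderID
FROM Sales.SalesOrderHeader AS SH
JOIN Sales.SalesOrderDetail AS SD ON SD.SalesOrderID = SH.SalesOrderID
GROUP BY SH.SalesOrderID
HAVING SUM(SD.LineTotal) > 1000;
```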
52. Top 10 for Building Efficient Queries
6. Use table-valued user-defined functions as derived tables
• In contrast to scalar user-defined functions, table-valued functions are often helpful from a performance point of view when you use them as derived tables.
• The query processor evaluates a derived table only once per query.
• If you embed the logic in a table-valued user-defined function, you can encapsulate and reuse it for other queries.

CREATE FUNCTION Sales.fn_SalesByStore (@storeid int)
RETURNS TABLE AS RETURN
(
    SELECT P.ProductID, P.Name, SUM(SD.LineTotal) AS 'YTD Total'
    FROM Production.Product AS P
    JOIN Sales.SalesOrderDetail AS SD ON SD.ProductID = P.ProductID
    JOIN Sales.SalesOrderHeader AS SH ON SH.SalesOrderID = SD.SalesOrderID
    WHERE SH.CustomerID = @storeid
    GROUP BY P.ProductID, P.Name
);
53. Top 10 for Building Efficient Queries
7. Avoid unnecessary GROUP BY columns
• Use a subquery instead.
• The process of grouping rows becomes more expensive as you add more columns to the GROUP BY list.
• If your query has few column aggregations but many non-aggregated grouped columns, you might be able to refactor it by using a correlated scalar subquery.
• This results in less grouping work in the query and therefore possibly better overall query performance.

SELECT p1.ProductSubcategoryID, p1.Name
FROM Production.Product p1
WHERE p1.ListPrice >
    (SELECT AVG(p2.ListPrice)
     FROM Production.Product p2
     WHERE p1.ProductSubcategoryID = p2.ProductSubcategoryID);
54. Top 10 for Building Efficient Queries
8. Use CASE expressions to include variable logic in a query
• The CASE expression is one of the most powerful logic tools available to T-SQL programmers.
• Using CASE, you can dynamically change column output on a row-by-row basis.
• This enables your query to return only the data that is absolutely necessary, and therefore reduces the I/O operations and network overhead required to assemble and send large result sets to clients.
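A small sketch against an AdventureWorks-style table (the price bands are illustrative): the classification happens in the query, so the client receives the finished label rather than raw values to decode.

```sql
-- Sketch: classify rows in the query on a row-by-row basis.
SELECT ProductID,
       Name,
       CASE
           WHEN ListPrice = 0   THEN 'Not for resale'
           WHEN ListPrice < 50  THEN 'Budget'
           WHEN ListPrice < 500 THEN 'Mid-range'
           ELSE 'Premium'
       END AS PriceBand
FROM Production.Product;
```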
55. Top 10 for Building Efficient Queries
9. Divide joins into temporary tables when you query very large tables
• The query optimizer’s main strategy is to find query plans that satisfy queries by using single operations.
• Although this strategy works for most cases, it can fail for larger sets of data because the huge joins require so much I/O overhead.
• In some cases, a better option is to reduce the working set by using temporary tables to materialize key parts of the query. You can then join the temporary tables to produce a final result.
• This technique is not favorable in heavily transactional systems because of the overhead of temporary table creation, but it can be very useful in decision support situations.
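A sketch of the technique with illustrative AdventureWorks-style names: the expensive aggregation is materialized once into a temporary table, and the final join then works against a much smaller intermediate result.

```sql
-- Materialize the expensive part of the query into a temp table.
SELECT CustomerID, SUM(TotalDue) AS TotalSpent
INTO #BigSpenders
FROM Sales.SalesOrderHeader
GROUP BY CustomerID
HAVING SUM(TotalDue) > 100000;

-- Join the small intermediate result to produce the final output.
SELECT c.CustomerID, c.AccountNumber, b.TotalSpent
FROM Sales.Customer AS c
JOIN #BigSpenders AS b ON b.CustomerID = c.CustomerID;

DROP TABLE #BigSpenders;
```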
56. Top 10 for Building Efficient Queries
10. Refactor cursors into queries
• Rebuild the logic as multiple queries
• Rebuild the logic as a user-defined function
• Rebuild the logic as a complex query with a CASE expression
59. Stored Procedures
Best Practices
• Avoid using “sp_” as a name prefix
• Avoid stored procedures that accept table names as parameters
• Use the SET NOCOUNT ON option in stored procedures
• Limit the use of temporary tables and table variables in stored procedures
• If a stored procedure performs multiple data modification operations, enlist them in a transaction
• When working with dynamic T-SQL, use sp_executesql instead of the EXEC statement
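The last point can be sketched as follows (the query and parameter value are illustrative): sp_executesql keeps the parameter out of the SQL text, which enables plan reuse and avoids the injection risk of string concatenation with EXEC.

```sql
-- Sketch: parameterized dynamic T-SQL with sp_executesql.
DECLARE @sql nvarchar(200) =
    N'SELECT Name FROM Production.Product WHERE ProductID = @id';
EXEC sp_executesql @sql, N'@id int', @id = 316;
```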
60. Views
Best Practices
• Use views to abstract complex data structures
• Use views to encapsulate aggregate queries
• Use views to provide more user-friendly column names
• Think of reusability when designing views
• Avoid using the ORDER BY clause in views that contain a TOP 100 PERCENT clause
• Utilize indexes on views that include aggregate data
64. Guidelines for designing indexes
• Examine the database characteristics.
  For example, your indexing strategy will differ between an online transaction processing system with frequent data updates and a data warehousing system that contains primarily read-only data.
• Understand the characteristics of the most frequently used queries and the columns used in the queries.
  For example, you might need to create an index on a query that joins tables or that uses a unique column for its search argument.
• Decide on the index options that might enhance the performance of the index.
  Options that can affect the efficiency of an index include FILLFACTOR and ONLINE.
• Determine the optimal storage location for the index.
  You can store a nonclustered index in the same filegroup as the table or in a different filegroup. If you store the index in a filegroup that is on a different disk than the table filegroup, you might find that disk I/O performance improves because multiple disks can be read at the same time.
• Balance read and write performance in the database.
  You can create many nonclustered indexes on a single table, but each new index has an impact on the performance of insert and update operations, because nonclustered indexes maintain copies of the indexed data. Each copy requires I/O operations to maintain it, and writing too many copies can reduce write performance. Balance the needs of both select queries and data updates when you design an indexing strategy.
• Consider the size of tables in the database.
  The query processor might take longer to traverse the index of a small table than to perform a simple table scan. Therefore, if you create an index on a small table, the processor might never use it, yet the database engine must still update the index when the data in the table changes.
• Consider the use of indexed views.
  Indexes on views can provide significant performance gains when the view contains aggregations, table joins, or both.
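Several of these options come together in a single CREATE INDEX statement; a sketch with illustrative names (FILLFACTOR leaves free space in each leaf page, ONLINE keeps the table available during the build on Enterprise Edition, and the trailing ON clause chooses the filegroup):

```sql
-- Sketch: index options and storage location (all names are illustrative).
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
WITH (FILLFACTOR = 80, ONLINE = ON)
ON [SECONDARY];
```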
65. Nonclustered Index
Do’s & Don’ts
• Create a nonclustered index for columns used for:
  • Predicates
  • Joins
  • Aggregation
• Avoid the following when designing nonclustered indexes:
  • Redundant indexes
  • Wide composite indexes
  • Indexes for one query
  • Nonclustered indexes that include the clustered index
66. Clustered Indexes
Do’s & Don’ts
• Use clustered indexes for:
  • Range queries
  • Primary key queries
  • Queries that retrieve data from many columns
• Do not use clustered indexes for:
  • Columns that change frequently
  • Wide keys
68. Performance Tuning in SQL Server
Define and implement monitoring standards for database servers and instances
69. Monitoring Stages
Stage 1: Monitoring the database environment
Stage 2: Narrowing down a performance issue to a particular database environment area
Stage 3: Narrowing down a performance issue to a particular database environment object
Stage 4: Troubleshooting individual problems
Stage 5: Implementing a solution
70. Monitoring the database environment
• You must collect a broad range of performance data.
• The monitoring system must provide you with enough data to solve the current performance issues.
• You must set up a monitoring solution that collects data from a broad range of sources.
• For active data, use active collection tools:
  • System Monitor
  • Error logs
  • SQL Server Profiler
• For inactive data, use sources such as:
  • Database configuration settings
  • Server configuration settings
  • Metadata from the SQL Server installation and databases
71. Guidelines for Auditing and Comparing Test Results
• Scan the outputs gathered for any obvious performance issues.
• Automate the analysis with custom scripts and tools.
• Analyze data soon after it is collected.
  • Performance data has a short life span; if there is a delay, the quality of the analysis will suffer.
• Do not stop analyzing data when you discover the first set of issues.
  • Continue to analyze until all performance issues have been identified.
• Take the entire database environment into account when you analyze performance data.
73. SQL Server Profiler guidelines
• Schedule data tracing for peak and nonpeak hours.
• Use Transact-SQL to create your own SQL Server Profiler traces to minimize the performance impact of SQL Server Profiler.
• Do not collect SQL Server Profiler traces directly into a SQL Server table.
  • After the trace has ended, use the fn_trace_gettable function to load the data into a table.
• Store collected data on a computer other than the instance you are tracing.
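Loading a finished trace file with fn_trace_gettable looks like this (the file path and table name are illustrative; DEFAULT reads all rollover files):

```sql
-- Sketch: import a closed trace file into a table for analysis.
SELECT *
INTO dbo.TraceResults
FROM fn_trace_gettable('C:\Traces\mytrace.trc', DEFAULT);
```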
74. System Monitor guidelines
• Execute System Monitor traces at different times during the week and month.
• Collect data every 36 seconds for a week.
• If the data collection period spans more than a week, set the collection interval in the range of 300 to 600 seconds.
• Collect the data in a comma-delimited text file. You can load this text file into SQL Server Profiler for further analysis.
• Execute System Monitor on one server to collect the performance data of another server.
75. DMVs for Monitoring

DMV: sys.dm_os_threads
Description: Returns a list of all SQL Server Operating System threads that are running under the SQL Server process.

DMV: sys.dm_os_memory_pools
Description: Returns a row for each object store in the instance of SQL Server. You can use this view to monitor cache memory use and to identify bad caching behavior.

DMV: sys.dm_os_memory_cache_counters
Description: Returns a snapshot of the health of a cache, providing run-time information about the cache entries allocated, their use, and the source of memory for the cache entries.

DMV: sys.dm_os_wait_stats
Description: Returns information about all the waits encountered by threads that executed. You can use this aggregated view to diagnose performance issues with SQL Server and also with specific queries and batches.

DMV: sys.dm_os_sys_info
Description: Returns a miscellaneous set of useful information about the computer, and about the resources available to and consumed by SQL Server.
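A common first look at sys.dm_os_wait_stats is to rank waits by total wait time since the last restart; a minimal sketch (production scripts usually also filter out benign system waits):

```sql
-- Sketch: top waits by cumulative wait time.
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       signal_wait_time_ms,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```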
76. Performance Data Collector
Performance Data Collection and Reporting
• Management Data Warehouse
• Performance Data Collection
  • Performance data collection components
  • System collection sets
  • User-defined collection sets
  • Reporting
• Centralized Administration: Bringing it all together
80. Reduce Locking and Blocking
Guidelines
• Keep logical transactions short
• Avoid cursors
• Use efficient and well-indexed queries
• Use the minimum transaction isolation level required
• Keep triggers to a minimum
81. Minimizing Deadlocks
• Access objects in the same order.
• Avoid user interaction in transactions.
• Keep transactions short and in one batch.
• Use a lower isolation level.
• Use a row versioning–based isolation level.
  • Set the READ_COMMITTED_SNAPSHOT database option ON to enable read-committed transactions to use row versioning.
  • Use snapshot isolation.
• Use bound connections.
  • Allow two or more connections to share the same transaction and locks.
  • They can work on the same data without lock conflicts.
  • They can be created from multiple connections within the same application, or from multiple applications with separate connections.
  • They make coordinating actions across multiple connections easier.
  • http://msdn.microsoft.com/en-us/library/aa213063(SQL.80).aspx
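The row-versioning options above are set per database; a minimal sketch (the database name is illustrative, and READ_COMMITTED_SNAPSHOT needs the database free of other active connections to take effect):

```sql
-- Readers under READ COMMITTED use row versions instead of shared locks.
ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON;

-- Additionally permit sessions to request SNAPSHOT isolation explicitly.
ALTER DATABASE SalesDB SET ALLOW_SNAPSHOT_ISOLATION ON;
```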
82. SQLschool.gr
• A dream
• A reliable source of knowledge for SQL Server
• http://www.autoexec.gr/blogs/antonch