
Module 17
SQL Server Concurrency
Contents:
Module Overview
Lesson 1: Concurrency and Transactions
Lesson 2: Locking Internals
Lab: Concurrency and Transactions
Module Review and Takeaways

Module Overview
Concurrency control is a critical feature of multiuser database systems; it allows data to remain consistent
when many users are modifying data at the same time. This module covers the implementation of
concurrency in Microsoft SQL Server®. You will learn about how SQL Server implements concurrency
controls, and the different ways you can configure and work with concurrency settings.

Note: Transactions and locking are closely interrelated; it is difficult to explain either topic
without reference to the other. This module covers transactions and concurrency before covering
locking, but you will find some references to locking in the description of transactions.

Objectives
At the end of this module, you will be able to:

 Describe concurrency and transactions in SQL Server.

 Describe SQL Server locking.



Lesson 1
Concurrency and Transactions
This lesson focuses on how SQL Server implements concurrency and transactions. You will learn about
different concurrency models, and the strengths and weaknesses of each model. You will then learn about
different types of isolation levels and transaction internals.

Lesson Objectives
After completing this lesson, you will be able to:

 Describe different models of concurrency.

 Identify concurrency problems.

 Implement isolation levels.

 Work with row versioning isolation levels.

 Describe how SQL Server implements transactions.


 Explain best practices when working with transactions.

Concurrency Models
Concurrency can be defined as a system’s ability to
allow multiple users to access or change shared
data simultaneously. The greater the number of
active users able to work on shared data, the
greater the level of concurrency. As the level of
concurrency increases, the likelihood of conflicting
data operations (where two or more users attempt
to access or amend the same data at the same
time) also increases.
There are two different approaches to resolving
data conflicts during concurrent operation; these
are pessimistic and optimistic concurrency.

Pessimistic Concurrency
The pessimistic concurrency model assumes that conflicting data operations will occur frequently. In this
model, locks are used to prevent conflicting access to data items. While a data item is locked by one user, other users whose operations conflict with that lock must wait. A pessimistic concurrency model exhibits the following
properties:

 Data being read is locked, so that no other user can modify the data.

 Data being modified is locked, so that no other user can read or modify the data.

 The number of locks acquired is high because every data access operation (read/write) acquires a
lock.

 Writers block readers and other writers. Readers block writers.

The pessimistic concurrency model is suitable for a system where:

 Data contention is high.

 Locks are held for a short period of time.



 The cost of preventing conflicts with locks is lower than the cost of rolling back the change, in the
case of a concurrency conflict.

Optimistic Concurrency
The optimistic concurrency model assumes that conflicting data operations will occur infrequently. In this
model, locks are not used; instead, the state of the affected data is recorded at the start of a data
operation. This state is checked again at the end of the operation, before any changes are written. If the
state has not changed, new changes are written. If the state has changed, the new changes are discarded
and the operation fails. An optimistic concurrency model exhibits the following properties:

 Data being read is not locked; other users may read or modify the data.

 Data being modified is not locked; other users may read or modify the data.

 Before modified data is written, it is checked to confirm that it has not changed since being read; only
if it has not changed will the changes be written.

 The number of locks acquired is low.

The optimistic concurrency model is suitable for a system where:

 Data contention is low.

 Data modifications may take long periods of time.

 The cost of rolling back and then retrying a change is lower than the cost of holding locks.

 Readers should not block writers.

SQL Server supports implementations of both optimistic concurrency and pessimistic concurrency.
Pessimistic concurrency is the default concurrency model for the database engine. The In-Memory OLTP
Engine uses a type of optimistic concurrency called row versioning; it does not implement pessimistic
concurrency.

Concurrency Problems
There are several categories of problems that may
occur when concurrency control is lacking or
insufficient, and multiple sessions attempt to
access or change the same data item.

Dirty Read
A dirty read occurs when one transaction reads a
row that is in the process of being modified by
another transaction. The reading transaction reads
uncommitted data that may be changed later by a
transaction modifying the data.
For example, user A changes a value from x to y,
but does not finalize the change by committing the transaction. A second user, user B, reads the updated
value y and performs processing based on this value. User A later changes the value again, from y to z, and commits the transaction. User B has therefore read and acted on an uncommitted (dirty) value, y, which was never committed.

Lost Update
A lost update occurs when two or more transactions simultaneously update the same row, based on the same initial value. When this happens, the last transaction to update the row overwrites the changes made by the other transaction(s), resulting in lost data.

For example, user C and user D both select value x to update. User C first updates the value from x to y, and then user D updates the value from x to z. The modification made by user C is overwritten by user D, resulting in data loss.

Non-repeatable Read
A non-repeatable read occurs when a transaction reads different values for the same row each time the
row is accessed. This happens when data is changed by another transaction in between two SELECT
statements.

For example, user E begins a transaction that contains two similar SELECT statements, s1 and s2. The
SELECT statement s1 reads value x, and then does some processing. Another user, user F, modifies value x
to y while user E is executing other queries. When user E subsequently executes s2, the value y is returned
instead of the initial x.

Phantom Read
A phantom read is a variation of a non-repeatable read. Phantom reads occur when one transaction
carries out a DELETE operation or an INSERT operation against a row that belongs to the range of rows
being read by another transaction.

For example, user G has two similar SELECT statements, s1 and s2, within a transaction; the SELECT statement s1 reads the count of rows as n, and then does other processing. Meanwhile, another user, user H, deletes a row from the range of rows being read by SELECT statements s1 and s2. When user G subsequently executes s2, the row count is n-1. The row returned by s1 but missing from s2 is a phantom: it does not exist at the end of user G’s transaction.

Double Read
A double read occurs when a transaction reads the same row value twice when reading a range of rows.
This happens when the row value that defines the range is updated by another transaction while the
range is being read.

For example, user I executes a SELECT statement that returns rows with values in a range a to z, that is
implemented as an index scan. After the scan has read rows with value a, but before the scan completes,
another user, user J, updates a row with value a to value z. The updated row is read again when the scan
reaches rows with value z.

Note: It is also possible for this issue to miss a row, but this is still referred to as a double
read problem. In the example, a row could be updated from value z to value a while the scan was
running.

Transaction Isolation Levels


You can use transaction isolation levels to control
the extent to which one transaction is isolated
from another, and to switch between pessimistic
and optimistic concurrency models. Transaction
isolation levels can be defined in terms of which of
the concurrency problems are permitted.

A transaction isolation level controls:

 Whether locks should be acquired when data is being read, and the type of locks to be acquired.

 The duration that the locks are held.

 Whether a read operation accessing rows being modified by another transaction:

o Is blocked until the exclusive lock on the row is released.

o Fetches the committed data present at the time the transaction started.

o Reads the uncommitted data modification.


The transaction isolation level controls only whether locks are acquired for read operations; write operations always acquire an exclusive lock on the data they modify and hold that lock until the transaction finishes, whatever the transaction’s isolation level.
Isolation levels represent a trade-off between concurrency and consistency of data reads. At lower
isolation levels, more concurrent data access is possible, but you experience more concurrency problems.
At higher isolation levels, concurrency is reduced, but you experience fewer concurrency problems.
Five isolation levels are available in SQL Server:

READ UNCOMMITTED
READ UNCOMMITTED is the lowest level of isolation available in SQL Server. The READ UNCOMMITTED
isolation level has the following properties:

 No locks are taken for data being read.


 During read operations, a lock is taken to protect the underlying database schema from being
modified.

 Readers do not block writers, and writers do not block readers; however, writers do block writers.
 All of the concurrency problems (dirty reads, non-repeatable reads, double reads, and phantom
reads) can occur.

 Data consistency is not guaranteed.

 Not supported on FILESTREAM enabled databases.

READ COMMITTED
READ COMMITTED is the SQL Server default isolation level. The READ COMMITTED isolation level has the
following properties when the READ_COMMITTED_SNAPSHOT database option is OFF (the default for
SQL Server installations):

 Read locks are acquired and held until the end of the statement.

 Dirty reads are eliminated by allowing access to committed data only.



 Because read locks are released at the end of each statement, data can be changed by other transactions between individual statements within the current transaction, resulting in non-repeatable reads, double reads, and phantom reads.

When the READ_COMMITTED_SNAPSHOT database option is ON (the default for Azure SQL Database),
the READ COMMITTED isolation level has the following properties:

 Row versioning is used to provide statement-level read consistency. Before each statement in a transaction executes, a snapshot of the committed data is taken and stored in the version store. The snapshot remains consistent until the statement finishes execution.
 Read locks are not held because the data is read from the version store, and not from the underlying
object.

 Dirty reads do not occur because a transaction reads only committed data, but non-repeatable reads
and phantom reads can occur during a transaction.

READ COMMITTED isolation is supported on FILESTREAM enabled databases.

REPEATABLE READ
REPEATABLE READ has the following properties:

 Read locks are acquired and held until the end of the transaction. Therefore, a transaction cannot read uncommitted data, and other transactions cannot modify the data it has read until the transaction completes.

 Eliminates non-repeatable reads. Phantom reads and double reads can still occur, because other transactions can insert or delete rows in the range of data being read.
 Not supported on FILESTREAM enabled databases.

SERIALIZABLE
SERIALIZABLE is the highest level of isolation available in SQL Server. It has the following properties:
 Range locks are acquired on the range of values being read and are held until the end of the
transaction.

 Transactions cannot read uncommitted data; other transactions cannot modify the data being read, and cannot insert or delete rows in the range of rows being read, until the transaction completes.

 Provides the lowest level of concurrency.

 Not supported on FILESTREAM enabled databases.

SNAPSHOT
SNAPSHOT isolation is based on an optimistic concurrency model. SNAPSHOT isolation has the following
properties:

 Uses row versioning to provide transaction-level read consistency. A data snapshot is taken at the
start of the transaction and remains consistent until the end of the transaction.

 Transaction-level read consistency eliminates dirty reads, non-repeatable reads, and phantom reads.

 If update conflicts are detected, a participating transaction will roll back.

 Supported on FILESTREAM enabled databases.


 The ALLOW_SNAPSHOT_ISOLATION database option must be ON before you can use the SNAPSHOT
isolation level (OFF by default in SQL Server installations, ON by default in Azure SQL Database).

For more information on transaction isolation levels, see the topic SET TRANSACTION ISOLATION LEVEL
(Transact-SQL) in Microsoft Docs:

SET TRANSACTION ISOLATION LEVEL (Transact-SQL)


http://aka.ms/faim9a
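As a simple illustration of switching isolation levels, the following sketch sets the REPEATABLE READ isolation level for the current session and reads a row inside a transaction. The table and column names are taken from the AdventureWorks sample database and are used here for illustration only.

Isolation Level Example
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

BEGIN TRANSACTION;

SELECT MaritalStatus
FROM HumanResources.Employee
WHERE BusinessEntityID = 3;

-- Under REPEATABLE READ, the shared lock acquired by the SELECT is held until
-- the transaction ends, so other transactions cannot modify the row in the meantime.

COMMIT TRANSACTION;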

Working with Row Versioning Isolation Levels


Row versioning isolation levels (SNAPSHOT
isolation, and READ COMMITTED isolation with
READ_COMMITTED_SNAPSHOT ON) have costs as
well as benefits. In particular, row versioning
makes use of tempdb to hold versioning data;
you should ensure your storage subsystem can
accommodate the additional load on tempdb
before enabling a row versioning isolation level.

In general, row versioning-based isolation levels have the following benefits:
 Readers do not block writers.

 Fewer locks overall—SELECT statements do not acquire locks:

o Reduced blocking and deadlocks


o Fewer lock escalations

Row versioning-based isolation levels can cause the following issues:

 Read performance may degrade as the set of versioned rows ages, because long version chains must be scanned.

 Increased resource utilization in maintaining row versions in tempdb.


 Versions are maintained even when there is no active transaction using a row versioning isolation
level.

 SNAPSHOT isolation may result in transaction rollback because of update conflict. Applications may
need to be modified to handle update conflicts.
Other points you should consider include:

 SNAPSHOT isolation does not affect queries with lock hints. The lock hints still apply.

 Writers still block writers.


 Setting READ_COMMITTED_SNAPSHOT ON requires that only one connection is open (the
connection issuing the command). This can be challenging in a production database.
 When READ_COMMITTED_SNAPSHOT is ON, row versioning may be bypassed by using the
READCOMMITTEDLOCK table hint, in which case the table will not be row versioned for the purposes
of the statement using the hint.
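As a minimal sketch, the following commands show how the row versioning isolation levels might be enabled; the database name MyDatabase is a placeholder, and (as noted above) setting READ_COMMITTED_SNAPSHOT ON requires that no other connections are open against the database.

Enabling Row Versioning Example
-- Allow sessions to request the SNAPSHOT isolation level.
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Use row versioning for statements running under READ COMMITTED.
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;

-- A session can then opt in to SNAPSHOT isolation:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;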

Transactions
A transaction is considered to be one or more
Transact-SQL statements that are logically
grouped into a single unit of work. A transaction
might be made up of multiple statements;
changes made to data by these statements are
applied to the database only when the transaction
completes successfully. A transaction must adhere
to the ACID principles:

 Atomicity. A transaction must be atomic in nature; that is, either all of the changes are applied or none.

 Consistency. After completion, a transaction must leave all data and related structures in a consistent state.

 Isolation. A transaction must have a view of the data independent of any other concurrent
transaction; a transaction should not see data in an intermediate state.
 Durability. Data changes must be persisted in case of a system failure.

SQL Server Transaction Management Modes

Auto-commit mode
Auto-commit is the default transaction management mode in SQL Server. Each individual statement is treated as a transaction: if a statement completes successfully without any error, it is committed; if it encounters an error, it is rolled back. Auto-commit mode is overridden when a user initiates an explicit transaction or when implicit transaction mode is enabled.

Explicit transaction mode


In explicit transaction mode, you explicitly define the start and end of a transaction.
 BEGIN TRANSACTION. Marks the start of a transaction.

 COMMIT TRANSACTION. Marks the successful completion of a transaction. The modifications made
to a database are made permanent; the resources held by a transaction are released.

 ROLLBACK. Marks the unsuccessful termination of a transaction; the modifications made by a transaction are discarded; the resources held by a transaction are released.

When an explicit transaction completes, the connection returns to the transaction mode it was using
before the start of the explicit transaction.

For more information on transaction control statements in explicit transaction mode, see the topic
Transaction Statements (Transact-SQL) in Microsoft Docs:

Transaction Statements (Transact-SQL)


http://aka.ms/krt2iy

Implicit transaction mode


In implicit transaction mode, SQL Server automatically manages the start of a transaction. You can commit
or roll back an implicit transaction but you cannot control the start of the transaction. SQL Server
automatically starts a new implicit transaction after the current implicit transaction finishes, generating a
continuous chain of transactions.

 Implicit transaction mode is a session-level setting, and can be changed by setting the IMPLICIT_TRANSACTIONS option to ON or OFF.

 SQL Server automatically starts an implicit transaction when any of the following statements are
executed: ALTER TABLE, CREATE, DELETE, DROP, FETCH, GRANT, INSERT, OPEN, REVOKE, SELECT,
TRUNCATE TABLE, UPDATE.

For more information about implicit transaction mode, see the topic SET IMPLICIT_TRANSACTIONS
(Transact-SQL) in Microsoft Docs:

SET IMPLICIT_TRANSACTIONS (Transact-SQL)


http://aka.ms/vima93
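The following sketch illustrates implicit transaction mode; the table name dbo.SomeTable and its columns are placeholders used only to show the pattern.

Implicit Transaction Mode Example
SET IMPLICIT_TRANSACTIONS ON;

-- This UPDATE automatically starts a transaction.
UPDATE dbo.SomeTable SET SomeColumn = 1 WHERE SomeKey = 10;

-- The transaction remains open until it is explicitly ended.
COMMIT TRANSACTION;

SET IMPLICIT_TRANSACTIONS OFF;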

Batch-scoped transaction mode


The batch-scoped transaction is applicable only to multiple active result sets (MARS). A transaction
(whether implicit or explicit) that starts under MARS is converted to a batch-scoped transaction. A batch-
scoped transaction that is neither committed nor rolled back on batch completion is automatically rolled
back by SQL Server.

The transaction mode is set at the connection level. If a connection changes from one transaction mode
to another, other active connections are not affected.

Working with Transactions


You should be aware of some features of
transactions in SQL Server when you start to use
them.

Naming Transactions
Transaction names are optional, and have no
effect on the behavior of SQL Server; they act
purely as labels to assist developers and DBAs in
understanding Transact-SQL code.

Explicit transactions may be named.

Transaction Naming Example


BEGIN TRANSACTION my_tran_name;

COMMIT TRANSACTION my_tran_name;

Nesting Transactions
Explicit transactions can be nested; you can issue a BEGIN TRANSACTION command inside an open
transaction. Only the outermost transaction has any effect; if the outermost transaction is committed, all
the nested transactions are committed. If the outermost transaction is rolled back, all the nested
transactions are rolled back. The level of transaction nesting is tracked by the @@TRANCOUNT function,
which is maintained at connection level:

 Each BEGIN TRANSACTION statement increments @@TRANCOUNT by one.

 Each COMMIT statement decrements @@TRANCOUNT by one. The COMMIT that reduces
@@TRANCOUNT to zero commits the outermost transaction.

 A ROLLBACK statement rolls back the entire outermost transaction, together with any nested transactions, and reduces @@TRANCOUNT to zero, as the example below shows.
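The following sketch illustrates how @@TRANCOUNT changes as transactions are nested; the transaction names are arbitrary, and the values shown in the comments assume no transaction is already open on the connection.

Nested Transaction Example
BEGIN TRANSACTION outer_tran;   -- @@TRANCOUNT = 1
BEGIN TRANSACTION inner_tran;   -- @@TRANCOUNT = 2

PRINT @@TRANCOUNT;              -- prints 2

COMMIT TRANSACTION;             -- @@TRANCOUNT = 1; nothing is committed yet
COMMIT TRANSACTION;             -- @@TRANCOUNT = 0; the outermost transaction commits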

Terminating Transactions
In addition to an explicit COMMIT or ROLLBACK, a transaction can be terminated for the following
reasons:

 Resource error. If a transaction fails because of a resource error such as lack of disk space, SQL Server
automatically rolls back the transaction to maintain data integrity.

 SET XACT_ABORT. When the connection-level SET XACT_ABORT setting is ON, an open transaction
is automatically rolled back in the event of a runtime error. When XACT_ABORT is OFF, a statement
that causes an error is normally rolled back, but any open transaction will remain open.

 Connection closure. If a connection is closed, all open transactions are rolled back.

For more information on SET XACT_ABORT, see the topic SET XACT_ABORT (Transact-SQL) in Microsoft
Docs:

SET XACT_ABORT (Transact-SQL)


http://aka.ms/nrph4c
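A minimal sketch of the effect of SET XACT_ABORT is shown below; the table dbo.Account and its columns are placeholders, and the comments describe the expected behavior rather than a specific error.

SET XACT_ABORT Example
SET XACT_ABORT ON;

BEGIN TRANSACTION;

UPDATE dbo.Account SET Balance = Balance - 100 WHERE AccountId = 1;
UPDATE dbo.Account SET Balance = Balance + 100 WHERE AccountId = 2;

-- With XACT_ABORT ON, a runtime error in either UPDATE rolls back the entire
-- transaction; with XACT_ABORT OFF, only the failing statement is rolled back.
COMMIT TRANSACTION;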

Savepoints
You can also create a savepoint partway through a transaction. This is a named point within a transaction; you can then use a ROLLBACK statement to revert to the savepoint, which means you can retry without having to start the entire transaction again. You set savepoints using the SAVE TRANSACTION keywords. You can specify a name of up to 32 characters, or a variable holding the name of the savepoint. You can then use the ROLLBACK command, together with the name of the savepoint, to return to the savepoint in the event of an error. Typically, the savepoint would be created in the TRY block, and the ROLLBACK to the savepoint would be included in the CATCH block of an error handling construct.
The following code fragments show how to name a savepoint, and how to refer to the named savepoint in the event of a ROLLBACK:

Savepoint Example
SAVE TRANSACTION SavePointName;

ROLLBACK TRANSACTION SavePointName;

For more information about SAVE TRANSACTION, see Microsoft Docs:

SAVE TRANSACTION (Transact-SQL)


https://aka.ms/ypdjku
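The following sketch puts these pieces together, creating a savepoint in a TRY block and rolling back to it in the CATCH block; the savepoint name and the table dbo.SomeTable are placeholders.

Savepoint with Error Handling Example
BEGIN TRANSACTION;

SAVE TRANSACTION BeforeUpdate;

BEGIN TRY
    UPDATE dbo.SomeTable SET SomeColumn = 1 WHERE SomeKey = 10;
END TRY
BEGIN CATCH
    -- Undo only the work done since the savepoint; the transaction remains open.
    ROLLBACK TRANSACTION BeforeUpdate;
END CATCH;

COMMIT TRANSACTION;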

Transaction Best Practices


 Short transactions. Keep transactions as short as possible. The shorter the transaction, the sooner the
locks will be released. This will help reduce unnecessary blocking.

 Avoid user input. Avoid user interaction during a transaction. This might add unnecessary delay,
because a user may open a transaction and go out for a break. The transaction will hold locks until
the user returns to complete the transaction or the transaction is killed. Other transactions requiring
locks on the same resource will be blocked during this time.

 Open a transaction only when necessary. If possible, avoid opening a transaction when browsing
through data. Do the preliminary data analysis and then open a transaction to perform any necessary
data modification.

 Access only relevant data. Access only the relevant data within a transaction. This reduces the
number of locks, so reducing the blocking and deadlocks.

 Use the appropriate transaction isolation level. Not all applications require a high isolation level, such as REPEATABLE READ or SERIALIZABLE. Many applications work well with the default READ COMMITTED isolation level.

 Beware of triggers containing transactions. Triggers containing transactions should be written carefully. Issuing a ROLLBACK command within a trigger will roll back the whole transaction, of which the trigger is a part.

Demonstration: Analyzing Concurrency Problems


In this demonstration, you will see:

 Examples of concurrency problems.

 How changes to transaction isolation levels address concurrency problems.

Demonstration Steps
Preparation

1. Ensure that the MT17B-WS2016-NAT, 20762C-MIA-DC, and 20762C-MIA-SQL virtual machines


are running, and then log on to 20762C-MIA-SQL as ADVENTUREWORKS\Student with the
password Pa55w.rd.

2. Run Setup.cmd in the D:\Demofiles\Mod17 folder as Administrator.

3. In the User Account Control dialog box, click Yes, and then wait for the script to finish.

4. Start SQL Server Management Studio and connect to the Azure SQL Database instance that hosts the AdventureWorksLT database, using SQL Server authentication.

5. In the Login box, type Student, in the Password box, type Pa55w.rd, and then click Connect.

6. Open the Demo1.ssmssln solution in the D:\Demofiles\Mod17\Demo1 folder.

7. Open the Demo 1a.sql and the Demo 1b.sql script files; open these files in different query windows,
because you will be switching between them.

8. In both script files, in the Available databases list, click ADVENTUREWORKSLT.

Dirty Reads

1. In the Demo 1a.sql script file, under the comment that begins Step 1, select the code, and then click
Execute to check the current settings for SNAPSHOT isolation.

2. Under the comment that begins Step 2, select the code, and then click Execute to view the current
state of the row used in this demonstration.

3. In the Demo 1b.sql script file, under the comment that begins Query 1, select the code, and then
click Execute.
4. In the Demo 1a.sql file, under the comment that begins Step 3, select the code, and then click
Execute to demonstrate READ UNCOMMITTED isolation.

5. Under the comment that begins Step 4, select the code, and then click Execute to demonstrate
READ COMMITTED isolation. The query will wait until you complete the next step.

6. In the Demo 1b.sql file, under the comment that begins Query 2, select the code, and then click
Execute.

7. In the Demo 1a.sql script file, note that the query under Step 4 (which was already running) has now
returned a result.

Non-repeatable Reads

1. In the Demo 1a.sql script file, under the comment that begins Step 5, select the first five lines of
code, and then click Execute.
2. In the Demo 1b.sql file, under the comment that begins Query 3, select the code, and then click
Execute.
3. In the Demo 1a.sql file, under the comment that begins Step 5, select the final four lines of code,
and then click Execute to demonstrate a non-repeatable read.

4. Under the comment that begins Step 6, select the first six lines of code, and then click Execute.
5. In the Demo 1b.sql file, under the comment that begins Query 4, select the code, and then click
Execute. Note that this query will not return until you complete the next step, because REPEATABLE
READ holds a lock on the affected row.
6. In the Demo 1a.sql script file, under the comment that begins Step 6, select the final four lines of
code, and then click Execute to demonstrate that REPEATABLE READ isolation prevents a non-
repeatable read.

Phantom Read
1. In the Demo 1a.sql script file, under the comment that begins Step 7, select the first six lines of code, and then click Execute.

2. In the Demo 1b.sql script file, under the comment that begins Query 5, select the code, and then
click Execute.
3. In the Demo 1a.sql file, under the comment that begins Step 7, select the final four lines of code,
and then click Execute to demonstrate that READ COMMITTED isolation allows a phantom read.

Serializable and Phantom Read


1. In the script Demo1a.sql under the comment that begins Step 8, select the first six lines of code, and
then click Execute.

2. In the Demo 1b.sql script file under the comment that begins Query 6, select the code, and then
click Execute. Note that this query will not return until you complete the next step, since
SERIALIZABLE holds a lock on the affected table.

3. In the Demo 1a.sql script file, under the comment that begins Step 8, select the final four lines of
code, and then click Execute to demonstrate that SERIALIZABLE isolation prevents a phantom read.

Snapshot Isolation
1. In Demo 1a.sql, under the comment that begins Step 9, select the first eight lines of code, and then
click Execute. This can take up to 30 minutes to complete.

2. Select the last three lines of code under Step 9, and then click Execute. Note the locks.

3. In the Demo 1b.sql script file, under the comment that begins Step 6, select the code, and then click
Execute. Note that the query does not complete.

4. In the Demo 1a.sql script file, under the comment that begins Query 10, select the code, and then
click Execute. Note the locks.
5. In the Demo 1a.sql script file, under the comment that begins Step 11, select the code, and then
click Execute. Note that the query in Demo 1b aborted, demonstrating the behavior of SNAPSHOT
isolation.
6. Close SQL Server Management Studio without saving any changes.

Check Your Knowledge


Question

User A starts to update a customer record, and while the transaction is still in progress, User B tries to update the same record. User A’s update completes successfully, but User B’s update fails with the error message: “This customer’s record has been updated by another user”. Which concurrency model is the system using?

Select the correct answer.

Pessimistic concurrency

Optimistic concurrency

Lesson 2
Locking Internals
SQL Server uses locks to ensure the consistency of data during a transaction. This lesson discusses the
details of the locking architecture used by SQL Server, how locks are used during the life of a transaction,
and the various methods available to you to influence the default locking behavior.

Lesson Objectives
At the end of this lesson, you will be able to:

 Describe the SQL Server locking architecture.

 Describe lock hierarchy and lock granularity.

 Explain lock escalation.

 Understand lock modes.

 Explain lock compatibility.


 Explain the data modification process.

 Use locking hints.

 Understand deadlocks.
 Explain latches and spinlocks.

Locking Architecture
In a hypothetical database system, the least
sophisticated locking architecture possible is to
allow locks only at database level. Every user
reading or writing data would lock the entire
database, preventing access by any other user
until the change was complete. While this
approach ensures data is consistent, it prohibits
concurrent database activity.

SQL Server’s locking system is designed to ensure


data consistency while still allowing concurrent
activity. Locks are acquired at an appropriate level
of granularity to protect the data that is modified
by a transaction; locks are held until the transaction commits or rolls back. Different objects affected by a
transaction can be locked with different types of lock.

Locks and Latches


SQL Server implements two locking systems. The first system manages locks for database objects (tables,
indexes, and so on) that are accessible directly to users; these locks act at a logical level to ensure data
consistency. This locking system is managed by the lock manager. The second system is used to ensure
the physical consistency of data in memory; for this process, a lightweight locking mechanism, known as a
latch, is employed. This system is managed by the storage engine.

Lock Manager
Internally, locks are automatically managed by the lock manager, a component of the database engine.
When the database engine processes a Transact-SQL statement, the Query Processor subcomponent
determines the resources that will be accessed. The Query Processor also determines the type of locks to
acquire, based on the type of data access (read or write) and the transaction isolation level setting. The
Query Processor then requests these locks from the lock manager. The lock manager grants the locks if no
conflicting locks are being held by other transactions.

Locks are in-memory structures; the lock manager maintains a memory structure for each lock, called a
lock block, which records the lock type and the resource that is locked. Each lock block will be linked to
one or more lock owner blocks; the lock owner block links a lock to the process requesting the lock. The
lock manager also maintains a lock hash table, to track locked resources more efficiently.

When a SQL Server instance starts, the lock manager acquires sufficient memory to support 2,500 locks. If the total number of locks exceeds 2,500, more memory is allocated dynamically to the lock manager.

For more information on locking in SQL Server, see the topic SQL Server Transaction Locking and Row
Versioning Guide on MSDN:

SQL Server Transaction Locking and Row Versioning Guide


http://aka.ms/mc5pmh

Lock Granularity and Hierarchy


Database objects and resources can be locked at
different levels of granularity; to allow more
concurrent activity, SQL Server will attempt to lock
as few resources as possible to efficiently process a
Transact-SQL statement. The efficiency of the
locking strategy is determined by comparing the
overhead of maintaining many fine-grained locks against the increase in concurrency that finer-grained locking allows. Locking at higher granularity
levels—such as at table level—decreases
concurrency because the entire table is
inaccessible to other transactions. However, the
overhead is less, as fewer locks are to be maintained.

SQL Server acquires locks at any of the following levels, ordered here from lowest grain to highest grain.
The first two items (RID and KEY) are of equivalent grain:

 RID. RID stands for row identifier. A RID lock is a lock on a single row in a heap (a table without a clustered index).

 KEY. A key lock applies to a single row in a clustered or nonclustered index.

 PAGE. A lock on an 8 KB page in a database, such as a data or index page. If a page is locked, all of
the data rows contained in the page are locked.

 EXTENT. A lock on a 64 KB extent (a block of eight pages). If an extent is locked, all of the pages in
the extent are locked.

 HoBT. HoBT stands for heap or b-tree. A lock protecting a b-tree (index), or the heap data pages in a
table that does not have a clustered index. All the extents that make up the heap or b-tree are locked.

 OBJECT. Typically, a lock on a table. If a table is locked, all of the associated data and index pages are
also locked.

 FILE. A lock on a database file. If a file is locked, all of the objects it contains are locked.

 APPLICATION. An application-specified lock, created using sp_getapplock.

 METADATA. A lock on catalog views.

 ALLOCATION_UNIT. A lock on an allocation unit such as IN_ROW_DATA.

 DATABASE. A lock on an entire database. All the objects in the database are also locked.
The objects in this list make up a hierarchy; databases are composed of files, files contain tables, and
tables are in turn made up of extents, pages, and rows. To fully protect a resource during the processing
of a command, a process might acquire locks at multiple levels of the resource hierarchy. For example,
when processing a command that affects a single row in a table, locks might be acquired on the affected
row, the page in which the row is stored, the page’s extent, and the table to which the row belongs. This
both fully protects the table and simplifies the detection of locking conflicts with other concurrent
processes that may hold locks on different rows in the same table.

Lock Escalation
Lock escalation occurs when many fine-grained
locks held by a transaction on a single resource
are converted to a single coarser-grained lock on
the same resource. Lock escalation is used to limit
the total number of locks the lock manager must
manage; the cost being that lock escalation might
reduce concurrency.

When lock escalation from row locks occurs, the lock is always escalated to table level; lock escalation does not take place from row level to page level.
In previous versions of SQL Server, the default
conditions for lock escalation were hard-coded; when a transaction held more than a fixed number of row
level or page level locks on a resource, the lock would be escalated. This is no longer true and lock
escalation decisions are now based on multiple factors; there is no fixed threshold for lock escalation.

Lock escalation can also occur when the memory structures maintained by the lock manager consume
more than 40 percent of the available memory.
You can control lock escalation behavior for individual tables by using the ALTER TABLE SET
LOCK_ESCALATION command. LOCK_ESCALATION can be set to one of three values:

 TABLE. The default value. When lock escalation occurs, locks are always escalated to table level,
whether or not the table is partitioned.

 AUTO. If the table is partitioned when lock escalation occurs, locks can be escalated to partition level.
If the table is not partitioned, locks are escalated to table level.

 DISABLE. Prevents lock escalation occurring in most cases. Table locks might still occur, but will be
less frequent.

For more information on controlling lock escalation behavior, see the topic ALTER TABLE (Transact-SQL) in
Microsoft Docs:

ALTER TABLE (Transact-SQL)


http://aka.ms/hb1ub7
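As a sketch, the following commands show how the LOCK_ESCALATION setting might be changed; the table name dbo.SomeTable is a placeholder.

LOCK_ESCALATION Example
-- Allow escalation to partition level on a partitioned table.
ALTER TABLE dbo.SomeTable SET (LOCK_ESCALATION = AUTO);

-- Prevent lock escalation in most cases.
ALTER TABLE dbo.SomeTable SET (LOCK_ESCALATION = DISABLE);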

Lock escalation behavior can also be controlled at session level or instance level by use of trace flags:

 Trace flag 1224 disables lock escalation, based on the number of locks held on a resource. Lock
escalation due to memory pressure can still occur.

 Trace flag 1211 disables lock escalation completely, whether due to the number of locks held on a
resource or due to memory pressure. Disabling lock escalation can have a severe effect on
performance and is not recommended.

For more information on trace flags 1224 and 1211, see the topic Trace Flags (Transact-SQL) in Microsoft
Docs:

Trace Flags (Transact-SQL)


http://aka.ms/hvmsq7

Lock Modes
SQL Server locks resources using different lock
modes. The lock modes determine how accessible
a resource is to other concurrent transactions.

Data Lock Modes


The following lock modes are used to lock
resources:
 Shared lock. Shared locks are acquired when
reading data. The duration for which a shared
lock is held depends on transaction isolation
level or locking hints. Many concurrent
transactions might hold shared locks on the
same data. No other transaction can modify the data until the shared lock is released.
 Exclusive lock. Exclusive locks are acquired when data is modified (by an INSERT, UPDATE, or DELETE
statement). Exclusive locks are always held until the end of the transaction. Only one transaction may
acquire an exclusive lock on a data item at a time; while an exclusive lock is held on a data item, no
other type of lock may be acquired on that data item.
 Update lock. Update locks are acquired when modifying data and are a combination of shared and
exclusive locks. Update locks are held during the searching phase of the update, where the rows to be
modified are identified; they are converted to exclusive locks when actual modification takes place.
Only one transaction may acquire an update lock on a data item at one time; other transactions
might hold or acquire shared locks on the same data item while an update lock is in place.

 Intent lock. An intent lock is not a locking mode in its own right—it acts as a qualifier to other lock
modes. Intent locks are used on a data item to indicate that a subcomponent of the data item is
locked; for instance, if a row in a table is locked with a shared lock, the table to which the row belongs
would be locked with an intent shared lock. Intent locks are discussed in more detail in the next topic.
 Key-range locks. Key-range locks are used by transactions using the SERIALIZABLE isolation level to
lock ranges of rows that are implicitly read by the transaction; they protect against phantom reads.

Special Lock Modes


Special lock modes are used to control stability of the database schema, when locks are converted
between modes, and during bulk update operations:

 Schema lock. Schema locks are used when an operation dependent on the table schema is executed.
There are two types of schema lock:

o Schema modification lock. Schema modification locks are acquired when a data definition
language (DDL) operation is being performed against a table, such as adding or dropping a
column.

o Schema stability lock. Schema stability locks are used during query compilation to prevent
transactions that modify the underlying database schema. Schema stability locks are compatible
with all other lock types (including exclusive locks).
 Conversion lock. A conversion lock is a specialized type of intent lock used to manage the transition
between data lock modes. Conversion locks appear in three types:

o Shared with intent exclusive. Used when a transaction holds a mixture of shared locks and
exclusive locks on subobjects of the locked object.

o Shared with intent update. Used when a transaction holds a mixture of shared locks and update
locks on subobjects of the locked object.

o Update with intent exclusive. Used when a transaction holds a mixture of exclusive locks and
update locks on subobjects of the locked object.
 Bulk update lock. Bulk update locks can optionally be acquired when data is bulk inserted into a
table using a bulk command such as BULK INSERT. A bulk update lock can only be acquired if no
other incompatible lock types are held on the table.
Locks held by active transactions can be viewed by using the sys.dm_tran_locks dynamic management
view (DMV). DMVs use abbreviations for lock mode names, summarized in the following table:

Abbreviation Lock Mode

S Shared

U Update

X Exclusive

IS Intent Shared

IU Intent Update

IX Intent Exclusive

RangeS_S Shared Key-Range and Shared Resource lock

RangeS_U Shared Key-Range and Update Resource lock

RangeI_N Insert Key-Range and Null Resource lock

RangeI_S Key-Range Conversion lock

RangeI_U Key-Range Conversion lock

RangeI_X Key-Range Conversion lock

RangeX_S Key-Range Conversion lock

RangeX_U Key-Range Conversion lock



RangeX_X Exclusive Key-Range and Exclusive Resource lock

Sch-S Schema stability

Sch-M Schema modification

SIU Shared Intent Update

SIX Shared Intent Exclusive

UIX Update Intent Exclusive

BU Bulk Update

For more information on the sys.dm_tran_locks DMV, see the topic sys.dm_tran_locks (Transact-SQL) in
Microsoft Docs:
sys.dm_tran_locks (Transact-SQL)
http://aka.ms/jf98cd
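A simple sketch of querying the DMV is shown below; it lists granted and waiting lock requests for the current database, using columns documented for sys.dm_tran_locks.

sys.dm_tran_locks Example
SELECT request_session_id,
       resource_type,
       request_mode,
       request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID();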

Lock Mode Compatibility


Processes may acquire locks of different modes.
Two lock modes are said to be compatible if a
process can acquire a lock mode on a resource
when another concurrent process already has a
lock on the same resource. If a process attempts to
acquire a lock mode that is incompatible with the
mode of an existing lock, the process must wait for
the existing lock to be released before acquiring
the new lock.
SQL Server uses lock compatibility to ensure
transactional consistency and isolation, while still
permitting concurrent activity; it allows processes
that read data to run concurrently, while ensuring that modification of a resource can only be carried out
by one process at a time.

Note: Lock compatibility gives an insight into the differences in behavior between the different isolation levels you learned about in the previous lesson; the more pessimistic isolation levels acquire and hold locks that are less compatible with other lock types.

When processes wait for incompatible lock types to be released, they wait in a first-in, first-out queue. If
there are already processes queuing for a resource, a process seeking to acquire a lock on the same
resource must join the end of the queue, even if the mode of the lock it seeks to acquire is compatible
with the current lock on the resource. On busy resources, this prevents processes seeking less compatible
lock modes from waiting indefinitely when other, more compatible, lock modes are in use.

Note: When a process is waiting for an incompatible lock on a resource to be released, it is


said to be blocked. Because of the way processes queue when waiting for locks, chains of blocked
processes can develop, slowing—or potentially stopping—system activity.

For a full lock compatibility matrix, see the topic Lock Compatibility (Database Engine) on Microsoft
TechNet (note that this page comes from the SQL Server 2008 R2 documentation; Microsoft has not
published a recent version of this matrix):

Lock Compatibility (Database Engine)


http://aka.ms/t0ia22

The Data Modification Process


To understand how locks are used as Transact-SQL statements are processed, consider the example of an UPDATE statement that modifies data.
The following UPDATE query could be run in the
AdventureWorks database:

Example UPDATE Statement Changing Two Rows
UPDATE HumanResources.Employee
SET MaritalStatus = 'S'
WHERE BusinessEntityId IN (3,289);

The following steps are involved when modifying data in SQL Server:
 A user or an application sends the UPDATE query to SQL Server.

 The database engine receives the update request and locates the data pages to be updated in the
cache—or reads the data pages from the storage subsystem into cache.

 The database engine tries to grant the lock on the necessary data to the user’s session:

o If any transaction already has an incompatible lock on the affected data, the UPDATE query waits
for the existing lock to be released.

o Because this UPDATE statement is highly selective (affecting only two rows) the database engine
uses row level locking to acquire an update lock on each of the two rows being modified.

 The following additional locks are acquired to secure the pages and the table in question:

o Two intent-exclusive page level locks (one for each page, since the rows are in different pages).

o One intent-exclusive table level lock.

o A shared database level lock.


o Additional locks may be required if the data being modified makes up an index. In this example,
no indexes are affected.

 SQL Server starts the data modification. The steps are as follows:

o The data modification is made (in the cache). At the same time, the update lock is converted to
an exclusive lock.

o The changes are logged in transaction log pages (in cache).

o The transaction is committed.

o The locks are released.

 Acknowledgement is sent to the user or application.

Locking Hints
There might be instances where you need to
influence locking behavior; several table hints are
available that help you adjust the locking of
individual tables during a single Transact-SQL
statement. You can use hints to influence:

 The mode of any locks acquired on a table.

 The transaction isolation level applied to a


table.
Table hints are applied by including a WITH
command after the name of the table for which
you want to influence locking in the FROM clause
of a Transact-SQL statement.

Table Hint Example



FROM <table name> WITH (<table hint> [,<table hint>])

You can specify multiple hints for a table—the hints should be comma-separated in the brackets of the
WITH command.

Best Practice: In general, it is best to avoid locking hints and allow the SQL Server Query
Optimizer to select an appropriate locking strategy. Be sure to regularly review any locking hints
you use; confirm that they are still appropriate.

Hints Affecting Lock Mode


The following hints affect the lock mode acquired by a Transact-SQL statement:

 ROWLOCK. Row locks should be acquired where page or table locks would normally be used.

 PAGLOCK. Page locks should be acquired where row or table locks would normally be used.

 TABLOCK. A table lock should be acquired where row or page locks would normally be used.

 TABLOCKX. An exclusive table lock should be acquired.

 UPDLOCK. An update lock should be acquired.

 XLOCK. An exclusive lock should be acquired.

Hints Affecting Table Isolation Level


The following hints affect the isolation level used by a Transact-SQL statement:

 READCOMMITTED. Use the READ COMMITTED isolation level. Locks or row versioning are used,
depending on the value of the READ_COMMITTED_SNAPSHOT database setting.

 READCOMMITTEDLOCK. Use the READ COMMITTED isolation level, acquiring locks. The value of the
READ_COMMITTED_SNAPSHOT database setting is ignored.

 READUNCOMMITTED or NOLOCK. Use the READ UNCOMMITTED isolation level. Both


READUNCOMMITTED and NOLOCK hints have the same effect.

 REPEATABLEREAD. Use the REPEATABLE READ isolation level.


 SERIALIZABLE or HOLDLOCK. Use the SERIALIZABLE isolation level. Both SERIALIZABLE and
HOLDLOCK hints have the same effect.
 READPAST. Rows that are locked by other transactions are skipped, instead of the statement waiting for incompatible locks to be released.

For more information on table hints—including those that control locking—see the topic Table Hints
(Transact-SQL) in Microsoft Docs:

Table Hints (Transact-SQL)


http://aka.ms/fkaztl
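As an illustrative sketch, the following statement combines two of these hints on the AdventureWorks table used earlier in this lesson; the hints are shown for illustration only and, as the best practice note above suggests, should be used sparingly.

Locking Hint Example
SELECT MaritalStatus
FROM HumanResources.Employee WITH (UPDLOCK, ROWLOCK)
WHERE BusinessEntityID = 3;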

Some best practices when using locking hints are:

 Use the TABLOCK hint to speed up bulk insert operations. TABLOCK is only compatible with itself. This allows multiple bulk inserts to be made in parallel into a single table, while preventing other processes from updating or modifying the records. This considerably improves bulk insert performance.

 Avoid using the NOLOCK or READUNCOMMITTED hint to resolve reader-writer blocking; consider
setting READ_COMMITTED_SNAPSHOT to ON or using the SNAPSHOT isolation level. The NOLOCK
and READUNCOMMITTED hints are only suitable in environments where the effects of the READ
UNCOMMITTED isolation level (documented in the previous lesson) are acceptable.

 Use ROWLOCK or UPDLOCK hints to reduce deadlocks in the REPEATABLE READ isolation level.

Deadlock Internals
A deadlock occurs when two or more transactions
block one another by attempting to acquire a lock
on a resource that is already locked by the other
transaction(s) with an incompatible lock mode. For
example:

 Transaction A acquires a shared lock on table T1.

 Transaction B acquires a shared lock on table T2.

 Transaction A requests an exclusive lock on table T2. It waits on transaction B to release the shared lock it holds on table T2.

 Transaction B requests an exclusive lock on table T1. It waits on transaction A to release the shared lock it holds on table T1. A deadlock has occurred.

Without intervention, a deadlock will continue indefinitely.



Deadlock Resolution
The Lock Monitor process is responsible for detecting deadlocks. It periodically searches for the tasks
involved in a deadlock. The search process has the following properties:

 The default interval between deadlock detection searches is five seconds.

 As soon as a deadlock is found, the deadlock detection search will run again immediately.

 When deadlocks are detected, the deadlock detection search interval is reduced to as little as 100
milliseconds, depending on the deadlock frequency.
When a deadlock is detected, the Lock Monitor ends the deadlock by choosing one of the threads as the deadlock victim. The deadlock victim's command is forcibly terminated; the transaction is rolled back, and error 1205 is returned to the application. This releases the locks held by the deadlock victim, allowing the other transactions to continue with their work.

The deadlock victim is selected, based on the following rules:

 If all the deadlocked transactions have the same deadlock priority, the transaction that is estimated to
be the least expensive to roll back is chosen as the deadlock victim.
 If the deadlocked transactions have a different deadlock priority, the transaction with the lowest
deadlock priority is chosen as the deadlock victim.

Deadlock Priority
You can specify the deadlock priority of a transaction by using the SET DEADLOCK_PRIORITY command.
You can set deadlock priority to an integer value between -10 (the lowest priority) and 10 (the highest
priority)—or you can use a text value:
 LOW. Equivalent to the integer value -5.

 NORMAL. Equivalent to the integer value 0.

 HIGH. Equivalent to the integer value 5.

For more information on setting deadlock priority, see the topic SET DEADLOCK_PRIORITY (Transact-SQL)
in Microsoft Docs:

SET DEADLOCK_PRIORITY (Transact-SQL)


http://aka.ms/vaffc7
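A minimal sketch of setting deadlock priority for the current session is shown below.

SET DEADLOCK_PRIORITY Example
-- Prefer this session as the deadlock victim if a deadlock occurs.
SET DEADLOCK_PRIORITY LOW;

-- Equivalent, using an integer value:
SET DEADLOCK_PRIORITY -5;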

Analyze Deadlocks
You can use SQL Server Profiler to analyze deadlocks. The Locks section of SQL Server Profiler includes a
Deadlock graph event class which you can add to your trace. This is a graphical representation of the
deadlock, showing the statements involved in the deadlock. You can also save deadlock data to a deadlock XML file, choosing whether to write all deadlock events to a single file or to separate files. The deadlock data can then be viewed graphically by opening the file in SSMS.

For more information about analyzing deadlocks using SQL Server Profiler, see Microsoft Docs:
Analyze Deadlocks with SQL Server Profiler

https://aka.ms/nn6oel

Latches and Spinlocks


Some database engine operations avoid the cost of managing locks by using lighter-weight locking mechanisms: latches and spinlocks.

Latches
Latches are a lightweight locking mechanism used
by the storage engine to ensure the consistency of
in-memory data structures, such as data pages
and non-leaf pages in a b-tree. Latches are
managed internally by SQL Server and cannot be
controlled by users. Latches are broadly divided
into three types:

 I/O latches. Used to manage outstanding I/O operations against pages in the Buffer Pool, I/O latches
ensure that pages are read only once from I/O into the Buffer Pool.

 Buffer latches. Used to prevent concurrent processes from making conflicting changes to pages in
the Buffer Pool.
 Non-buffer latches. Used to protect shared data structures held outside the Buffer Pool.

When a process waits for a latch, the duration of the wait is recorded in the sys.dm_os_wait_stats DMV:

 I/O latches appear as wait types with names starting PAGEIOLATCH_.

 Buffer latches appear as wait types with names starting PAGELATCH_.

 Non-buffer latches are summarized as wait types with names starting LATCH_. A complete list of all
non-buffer latch types can be found in the sys.dm_os_latch_stats DMV.

For more information on the sys.dm_os_wait_stats DMV, see the topic sys.dm_os_wait_stats (Transact-
SQL) in Microsoft Docs:

sys.dm_os_wait_stats (Transact-SQL)
http://aka.ms/kvkoru

For more information on the sys.dm_os_latch_stats DMV, see the topic sys.dm_os_latch_stats (Transact-
SQL) in Microsoft Docs:

sys.dm_os_latch_stats (Transact-SQL)
http://aka.ms/im1px3
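As a sketch, the following query summarizes the latch-related wait types recorded in sys.dm_os_wait_stats; the LIKE patterns correspond to the naming conventions described above.

Latch Wait Statistics Example
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGEIOLATCH%'
   OR wait_type LIKE 'PAGELATCH%'
   OR wait_type LIKE 'LATCH%'
ORDER BY wait_time_ms DESC;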

Spinlocks
Spinlocks are very lightweight locking structures used when a process needs to lock an object in memory
for a very short time. A process waiting to acquire a spinlock will go into a loop for a period, checking
repeatedly whether the lock is available—as opposed to moving onto a waiter list and yielding the CPU
immediately. SQL Server uses spinlocks to protect objects such as hash buckets in the lock manager’s lock
hash table.

Some contention for spinlocks is expected on busy SQL Server instances; spinlock contention should only
be considered a problem when it causes significant CPU overhead, which is a relatively rare occurrence.

For more information on diagnosing and resolving performance problems caused by spinlock contention,
see the Microsoft paper Diagnosing and Resolving Spinlock Contention on SQL Server. Note that this paper
was written in reference to SQL Server 2008 R2:

Diagnosing and Resolving Spinlock Contention on SQL Server


http://aka.ms/uvpmoe

Demonstration: Applying Locking Hints


In this demonstration, you will see the effects of several locking hints.

Demonstration Steps
1. Start SQL Server Management Studio and connect to the MIA-SQL database engine instance using
Windows authentication.

2. Open the Demo2.ssmssln solution in the D:\Demofiles\Mod17\Demo2 folder.


3. Open the Demo 2a - lock hints 1.sql and Demo 2b - lock hints 2.sql script files. Ensure that both
scripts use the AdventureWorks database.

4. In the Demo 2a - lock hints 1.sql script file, under the comment that begins Step 3, select the code,
and then click Execute to show the current isolation level.

5. Under the comment that begins Step 4, select the first three lines of code, and then click Execute to
demonstrate the locks held by a transaction using READ UNCOMMITTED isolation.

6. Under the comment that begins Step 4, select the remaining five lines of code, and then click
Execute.

7. Under the comment that begins Step 5, select the first three lines of code, and then click Execute to
demonstrate the locks held by a transaction using REPEATABLE READ isolation.
8. Under the comment that begins Step 5, select the remaining five lines of code, and then click
Execute.
9. Under the comment that begins Step 6, select the first three lines of code, and then click Execute to
demonstrate the locks held by a transaction using REPEATABLE READ isolation and a
READCOMMITTED locking hint.

10. Under the comment that begins Step 6, select the remaining five lines of code, and then click
Execute.

11. Under the comment that begins Step 7, select the first three lines of code, and then click Execute to
demonstrate the locks held by a transaction using READ COMMITTED isolation and a TABLOCKX
locking hint.

12. Under the comment that begins Step 7, select the remaining five lines of code, and then click
Execute.
13. Under the comment that begins Step 8, select the first three lines of code, and then click Execute to
demonstrate the locks held by a transaction using REPEATABLE READ isolation and a TABLOCKX
locking hint.
14. Under the comment that begins Step 8, select the remaining five lines of code, and then click
Execute.
15. In the Demo 2b - lock hints 2.sql script file, under the comment that begins Query 1, select the
code, and then click Execute.

16. In the Demo 2a - lock hints 1.sql script file, under the comment that begins Step 9, select the code,
and then click Execute to demonstrate that the statement waits.

17. Allow the query to wait for a few seconds, and then on the Query menu, click Cancel Executing
Query.
18. Under the comment that begins Step 10, select the code, and then click Execute to demonstrate the
behavior of the READPAST hint.

19. In the Demo 2b - lock hints 2.sql script file, under the comment that begins Query 2, select the
code, and then click Execute to close the open transaction.

20. Close SSMS without saving any changes.

Check Your Knowledge

Question

If a process is attempting to acquire an exclusive row lock, what lock mode will it attempt to acquire on
the data page and table that contain the row?

Select the correct answer.

Exclusive (X)

Shared (S)

Intent shared (IS)

Intent exclusive (IX)

Intent update (IU)



Lab: Concurrency and Transactions


Scenario
You have reviewed statistics for the AdventureWorks database and noticed high wait stats for CPU,
memory, IO, blocking, and latching. In this lab, you will address blocking wait stats. You will explore
workloads that can benefit from snapshot isolation and partition level locking. You will then implement
snapshot isolation and partition level locking to reduce overall blocking.

Objectives
After completing this lab, you will be able to:

 Implement the SNAPSHOT isolation level.

 Implement partition level locking.

Estimated Time: 45 minutes

Virtual machine: 20762C-MIA-SQL

User name: ADVENTUREWORKS\Student


Password: Pa55w.rd

Exercise 1: Implement Snapshot Isolation


Scenario
You have reviewed wait statistics for the AdventureWorks database and noticed high wait stats for
locking, amongst others. In this exercise, you will implement SNAPSHOT isolation to reduce blocking
scenarios.

The main tasks for this exercise are as follows:

1. Prepare the Lab Environment

2. Clear Wait Statistics

3. Run the Workload

4. Capture Lock Wait Statistics

5. Enable SNAPSHOT Isolation

6. Implement SNAPSHOT Isolation

7. Rerun the Workload


8. Capture New Lock Wait Statistics

9. Compare Overall Lock Wait Time

 Task 1: Prepare the Lab Environment


1. Ensure that the MT17B-WS2016-NAT, 20762C-MIA-DC, and 20762C-MIA-SQL virtual machines
are running, and then log on to 20762C-MIA-SQL as ADVENTUREWORKS\Student with the
password Pa55w.rd.

2. Run Setup.cmd in the D:\Labfiles\Lab17\Starter folder as Administrator.



 Task 2: Clear Wait Statistics


1. Start SQL Server Management Studio and connect to the MIA-SQL instance using Windows
authentication, then open the project file D:\Labfiles\Lab17\Starter\Project\Project.ssmssln and
the script file Lab Exercise 01 - snapshot isolation.sql.

2. Execute the query under the comment that begins Task 1 to clear wait statistics.

 Task 3: Run the Workload


 In the D:\Labfiles\Lab17\Starter folder, execute start_load_exercise_01.ps1 with PowerShell™. Wait
for the workload to finish before continuing. If a message is displayed asking you to confirm a change
in execution policy, type Y and then press Enter.

 Task 4: Capture Lock Wait Statistics


 In SSMS, amend the query under the comment that begins Task 3 to capture only lock wait statistics
into a temporary table. Hint: lock wait statistics have a wait_type that begins “LCK”.

 Task 5: Enable SNAPSHOT Isolation


 Amend the properties of the AdventureWorks database to allow SNAPSHOT isolation.

 Task 6: Implement SNAPSHOT Isolation


1. In SSMS Solution Explorer, open the script file Lab Exercise 01 - stored procedure.sql.
2. Use the script to modify the stored procedure definition to run under SNAPSHOT isolation.

 Task 7: Rerun the Workload


1. In the SSMS query window for Lab Exercise 01 - snapshot isolation.sql, rerun the query under the
comment that begins Task 1.
2. In the D:\Labfiles\Lab17\Starter folder, execute start_load_exercise_01.ps1 with PowerShell. Wait
for the workload to finish before continuing.

 Task 8: Capture New Lock Wait Statistics


 In SSMS, under the comment that begins Task 8, amend the query to capture lock wait statistics into
a temporary table called #task8.

 Task 9: Compare Overall Lock Wait Time


 In the SSMS query window for Lab Exercise 01 - snapshot isolation.sql, execute the query under
the comment that begins Task 9, to compare the total wait_time_ms you have captured between
the #task3 and #task8 temporary tables.

Results: After this exercise, the AdventureWorks database will be configured to use the SNAPSHOT
isolation level.

Exercise 2: Implement Partition Level Locking


Scenario
You have reviewed statistics for the AdventureWorks database and noticed high wait stats for locking,
amongst others. In this exercise, you will implement partition level locking to reduce blocking.

The main tasks for this exercise are as follows:

1. Open Activity Monitor

2. Clear Wait Statistics

3. View Lock Waits in Activity Monitor

4. Enable Partition Level Locking

5. Rerun the Workload

 Task 1: Open Activity Monitor


1. In SSMS Object Explorer, open Activity Monitor for the MIA-SQL instance.

2. In Activity Monitor, expand the Resource Waits section.

 Task 2: Clear Wait Statistics


1. If it is not already open, open the project file D:\Labfiles\Lab17\Starter\Project\Project.ssmssln,
then open the query file Lab Exercise 02 - partition isolation.sql.

2. Execute the code under Task 2 to clear wait statistics.

 Task 3: View Lock Waits in Activity Monitor


1. In the D:\Labfiles\Lab17\Starter folder, execute start_load_exercise_02.ps1 with PowerShell. Wait
for the workload to finish before continuing (it will take a few minutes to complete).

2. Switch to SSMS and to the MIA-SQL - Activity Monitor tab. In the Resource Waits section, note the
value of Cumulative Wait Time (sec) for the Lock wait type.
3. Close the PowerShell window where the workload was executed.

 Task 4: Enable Partition Level Locking


1. In the Lab Exercise 02 - partition isolation.sql query, under the comment that begins Task 5, write
a query to alter the Proseware.CampaignResponsePartitioned table in the AdventureWorks
database to enable partition level locking.

2. Rerun the query under the comment that begins Task 2 to clear wait statistics.

 Task 5: Rerun the Workload


1. In the D:\Labfiles\Lab17\Starter folder, execute start_load_exercise_02.ps1 with PowerShell. Wait
for the workload to finish before continuing (it will take a few minutes to complete).

2. Return to the MIA-SQL - Activity Monitor tab. In the Resource Waits section, note the value of
Cumulative Wait Time (sec) for the Lock wait type.

3. Compare this value to the value you noted earlier in the exercise.

4. Close the PowerShell window where the workload was executed.

5. Close SQL Server Management Studio without saving any changes.

Results: After this exercise, the AdventureWorks database will use partition level locking.

Check Your Knowledge

Question

When partition level locking is enabled, what combination of locks will be held by an UPDATE statement
that updates all the rows in a single partition? Assume that the partition contains more than 1 million
rows.

Select the correct answer.

Database: Shared (S); Table: Exclusive (X)

Database: Shared (S); Table: Intent Exclusive (IX); Partition: Exclusive (X)

Database: Shared (S); Table: Exclusive (X); Partition: Exclusive (X)

Module Review and Takeaways


In this module, you have learned about SQL Server’s implementation of transactions and concurrency. You
have learned how to use transaction isolation levels to control data consistency within a transaction, and
the concurrency issues you might expect at each isolation level. You have also learned about how locking
is used to implement transaction isolation levels, and how to use lock hints to modify locking behavior.

Review Question(s)

Check Your Knowledge

Question

A transaction is running with the SERIALIZABLE transaction isolation level. The transaction includes a
SELECT statement with a single table in the FROM clause; the table is referenced with the
READCOMMITTED table hint. Which transaction isolation level applies to the SELECT statement?

Select the correct answer.

SERIALIZABLE

READ UNCOMMITTED

REPEATABLE READ

READ COMMITTED

Module 18
Performance and Monitoring
Contents:
Module Overview 18-1
Lesson 1: Extended Events 18-2

Lesson 2: Working with Extended Events 18-11

Lesson 3: Live Query Statistics 18-20


Lesson 4: Optimize Database File Configuration 18-23

Lesson 5: Metrics 18-27

Lab: Monitoring, Tracing, and Baselining 18-37

Module Review and Takeaways 18-40

Module Overview
This module looks at how to measure and monitor the performance of your SQL Server® databases. The
first two lessons look at SQL Server Extended Events, a flexible, lightweight event-handling system built
into the Microsoft® SQL Server Database Engine. These lessons focus on the architectural concepts,
troubleshooting strategies and usage scenarios.

This module also describes tools and techniques you can use to monitor performance data, and to create
baselines.

Objectives
After completing this module, you will be able to:

 Understand Extended Events and how to use them.


 Work with Extended Events.

 Understand Live Query Statistics.

 Optimize the file configuration of your databases.

 Use DMVs and Performance Monitor to create baselines and gather performance metrics.

Lesson 1
Extended Events
SQL Server Extended Events is a flexible, lightweight event-handling system built into the Microsoft SQL
Server Database Engine. This lesson focuses on the architectural concepts, troubleshooting strategies, and
usage scenarios of Extended Events.

Lesson Objectives
When you have completed this lesson, you will be able to:

 Understand Extended Events.

 Use Extended Events to capture events.

Extended Events, SQL Trace, and SQL Server Profiler


Extended Events, SQL Trace, and SQL Server
Profiler are all tools that you can use to monitor
SQL Server events.

SQL Trace
SQL Trace is a server-side, event-driven activity
monitoring tool; it can capture information about
more than 150 event classes. Each event returns
data in one or more columns and you can filter
column values. You configure the range of events
and event data columns in the trace definition.
You can also configure the destination for the
trace data, a file or a database table, in the trace
definition.
SQL Trace is included in SQL Server 7.0 and later versions.

SQL Server Profiler


SQL Server Profiler is a GUI for creating SQL traces and viewing data from them. SQL Server Profiler is
included in SQL Server 7.0 and later versions.

As established parts of the SQL Server platform, SQL Server Profiler and SQL Trace are familiar to many
SQL Server administrators.

Note: SQL Trace and SQL Server Profiler are marked for deprecation and will be removed in
future versions of SQL Server. Extended Events is now the recommended activity tracing tool.
Because SQL Trace is marked for deprecation, it does not include event classes for many features
added in SQL Server 2012 onwards.

Extended Events
Like SQL Trace, Extended Events is an event-driven activity monitoring tool; however, it attempts to
address some of the limitations in the design of SQL Trace by following a loose-coupled design pattern.
Events and their targets are not tightly coupled; any event can be bound to any target. This means that
data processing and filtering can be carried out independently of data capture. In most cases, this results
in Extended Events having a lower performance overhead than an equivalent SQL Trace.

With Extended Events, you can define sophisticated filters on captured data. In addition to using value
filters, you can filter events by sampling, and data can be aggregated at the point it is captured. You can
manage Extended Events either through a GUI in SQL Server Management Studio (SSMS) or by using
Transact-SQL statements.

You can integrate Extended Events with the Event Tracing for Windows (ETW) framework, so that you can
monitor SQL Server activity alongside other Windows® components.

Extended Events was introduced in SQL Server 2008; since the deprecation of SQL Trace and SQL Server
Profiler was announced with the release of SQL Server 2012, many features introduced in SQL Server 2012,
2014, 2016, and 2017 can only be traced using Extended Events.

However, the additional flexibility of Extended Events comes at the cost of greater complexity.

Extended Events Architecture


The Extended Events engine is a collection of
services, running in the database engine, that
provide the resources necessary for events to be
defined and consumed.
You might find it most helpful to think about
Extended Events primarily in terms of the session
object. A session defines the Extended Events data
that you want to collect, how the data will be
filtered, and how the data will be stored for later
analysis. Sessions are the top-level object through
which you will interact with Extended Events:
 User defines session

o Session includes event

 Event triggers action

 Event is filtered by predicate

o Session writes to target

A list of sessions is maintained by the Extended Events engine. You can define and modify sessions using
Transact-SQL or in SSMS. You can view data collected by active sessions using Transact-SQL—in which
case the data is presented in XML format—or using SSMS.

Packages
Packages act as containers for the Extended Events
objects and their definitions; a package can
expose any of the following object types:

 Events

 Predicates

 Actions
 Targets

 Types

 Maps

Packages are contained in a module that exposes them to the Extended Events engine. A module can
contain one or more packages, and can be compiled as an executable or DLL file.

A complete list of packages registered on the server can be viewed using the sys.dm_xe_packages DMV:

sys.dm_xe_packages
SELECT * FROM sys.dm_xe_packages;

For more information about sys.dm_xe_packages, see Microsoft Docs:

sys.dm_xe_packages (Transact-SQL)
http://aka.ms/i4j6vf

Events, Actions, and Predicates


Events
Events are points in the code of a module that are
of interest for logging purposes. When an event
fires, it indicates that the corresponding point in
the code was reached. Each event type returns
information in a well-defined schema when it
occurs.
All available events can be viewed in the
sys.dm_xe_objects DMV under the event
object_type.

sys.dm_xe_objects; events
SELECT * FROM sys.dm_xe_objects
WHERE object_type = 'event';

Events are defined by the Event Tracing for Windows model—this means that SQL Server Extended Events
can be integrated with ETW. Like ETW events, events in Extended Events are categorized by:

 Channel. The event channel identifies the target audience for an event. These channels are common
to all ETW events:

o Admin. Events for administration and support.

o Operational. Events for problem investigation.

o Analytic. High-volume events used in performance investigation.

o Debug. ETW developer debugging events.

 Keyword. An application-specific categorization. In SQL Server, Extended Events event keywords
map closely to the grouping of events in a SQL Trace definition.

A complete list of event keywords can be returned from sys.dm_xe_map_values.

Extended Events Event Keywords


SELECT map_value AS keyword
FROM sys.dm_xe_map_values
WHERE name = 'keyword_map'
ORDER BY keyword;

When you add, amend, or remove an event in a session definition, you must refer to the event by its
two-part name: packagename.eventname.
For more information on sys.dm_xe_objects, see Microsoft Docs:

sys.dm_xe_objects (Transact-SQL)
http://aka.ms/bwkcmu

Actions

Actions are responses to an event; you can use these responses to collect supplementary information
about the context of an event at the time that it occurs. Each event may have a unique set of one or more
actions associated with it. When an event occurs, any associated actions are raised synchronously.

Note: Actions do not allow you to define responses to an event. Instead, actions are
additional steps that occur within the Extended Events engine when an event is triggered. Most
actions provide more data to be collected about an event.

SQL Server defines more than 50 different actions, which include:

o Collect database ID

o Collect T-SQL stack

o Collect session ID

o Collect session's NT username

o Collect client hostname



All available actions can be viewed in the DMV sys.dm_xe_objects under the object_type value action:

sys.dm_xe_objects; actions
SELECT * FROM sys.dm_xe_objects
WHERE object_type = 'action';

Predicates

Predicates are logical rules with which events can be selectively captured, based on criteria you specify.
Predicates divide into two subcategories:
 Predicate comparisons. Comparison operators, such as “equal to”, “greater than”, and “less than”,
which may make up a predicate filter. All predicate comparisons return a Boolean result (true or false).
 Predicate sources. Data items that may be used as inputs to predicate comparisons. These are similar
to the column filters available when defining a SQL trace.

In addition to building logical rules, predicates are capable of storing data in a local context, which means
that predicates based on counters can be constructed; for example, every n events or every n seconds.
Predicates are applied to an event using a WHERE clause—this functions like the WHERE clause in a
Transact-SQL query.

All available predicates can be viewed in the DMV sys.dm_xe_objects under the object_type values
pred_source and pred_compare.

sys.dm_xe_objects; predicates
SELECT * FROM sys.dm_xe_objects
WHERE object_type LIKE 'pred%'
ORDER BY object_type, name;

Targets and Sessions


Targets
Targets are the Extended Events objects that
collect data. When an event is triggered, the
associated data can be written to one or more
targets. A target may be updated synchronously or
asynchronously. The following targets are available
for Extended Events:
 Event counter. The counter is incremented
each time an event associated with a session
occurs—synchronous.
 Event file. Event data is written to a file on
disk—asynchronous.

 Event pairing. Tracks when events that normally occur in pairs (for example, lock acquired and lock
released) do not have a matching pair—asynchronous.

 Event Tracing for Windows. Event data is written to an ETW log—synchronous.

 Histogram. A more complex counter that partitions counts by an event or action value—
asynchronous.

 Ring buffer. A first-in, first-out (FIFO) in-memory buffer of a fixed size—asynchronous.



The design of Extended Events is such that an event will only be written once to a target, even if multiple
sessions are configured to send that event to the same target.

All available targets can be viewed in the DMV sys.dm_xe_objects under the object_type value target:

sys.dm_xe_objects; targets
SELECT * FROM sys.dm_xe_objects
WHERE object_type = 'target';

Sessions

A session links one or more events to one or more targets. You can configure each event in a session to
include one or more actions, and to be filtered with one or more predicates. Once defined, a session can
be started or stopped as required; it is possible to configure a session to start when the database engine
starts.

A session may include events from more than one package. Sessions are isolated from one another;
multiple sessions may use the same events and targets in different ways, without interfering with one
another.
A session is configured with a buffer in which event data is held while a session is running, before it is
dispatched to the session targets. The size of this buffer is configurable, as is a dispatch policy (how long
data will be held in the buffer). You can also configure whether or not to permit data loss from the buffer
if event data arrives faster than it can be processed and dispatched to the session target.
All active Extended Events sessions can be viewed in the DMV sys.dm_xe_sessions:


sys.dm_xe_sessions
SELECT * FROM sys.dm_xe_sessions;

For more information about active Extended Events DMVs, including sys.dm_xe_sessions, see Microsoft
Docs:

Extended Events Dynamic Management Views


http://aka.ms/Imlj06

For more information on the set of DMVs for accessing definitions for all Extended Events sessions,
including sys.server_event_sessions, see Microsoft Docs:

Extended Events Catalog Views (Transact-SQL)


http://aka.ms/Cqon4y

Note: A session can be created without targets, in which case the session data is only visible
using Watch Live Data in SSMS.

Types and Maps


Types and maps are metadata objects that make it
easier to work with Extended Events data. Types
and maps are not directly referenced in an
Extended Events session definition.

Types

Internally, Extended Events data is held in binary.


A type identifies how a binary value should be
interpreted and presented when the data is
queried.

All available types can be viewed in the DMV


sys.dm_xe_objects under the object_type value
type:

sys.dm_xe_objects; types
SELECT * FROM sys.dm_xe_objects
WHERE object_type = 'type';

Maps
A map is a lookup table for integer values. Internally, many event and action data values are stored as
integers; maps link these integer values to text values that are easier to interpret.

All available maps can be viewed in the DMV sys.dm_xe_map_values:

sys.dm_xe_map_values
SELECT * FROM sys.dm_xe_map_values
ORDER BY name, map_key;

Demonstration: Creating an Extended Events Session


In this demonstration, you will learn how to:

 Create an Extended Events session.

Demonstration Steps
1. Start the 20762C-MIA-DC, and 20762C-MIA-SQL virtual machines, and then log on to 20762C-
MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd.

2. In the D:\Demofiles\Mod18 folder, run Setup.cmd as Administrator.

3. In the User Account Control dialog box, click Yes and wait for the script to finish.
4. Start SQL Server Management Studio and connect to the MIA-SQL database engine instance using
Windows authentication.

5. On the File menu, point to Open, and then click Project/Solution.


6. In the Open Project dialog box, navigate to the D:\Demofiles\Mod18 folder, click Demo.ssmssln,
and then click Open.

7. In Solution Explorer, double-click Demo 1 - create xe session.sql.



8. Select code under the comment that begins Step 1, and then click Execute to create an Extended
Events session.

9. Select code under the comment that begins Step 2, and then click Execute to verify that the session
metadata is visible.
10. Select code under the comment that begins Step 3, and then click Execute to start the session and
execute some queries.

11. Select code under the comment that begins Step 4, and then click Execute to query the session data.
12. Select code under the comment that begins Step 5, and then click Execute to refine the session data
query.

13. In Object Explorer, under MIA-SQL, expand Management, expand Extended Events, expand
Sessions, expand SqlStatementCompleted, and then double-click package0.ring_buffer.

14. In the Data column, click the XML value, and note that this is the same data that is returned by the
query under the comment that begins Step 4 (note that additional statements will have been
captured because you ran the code earlier).
15. In Object Explorer, right-click SqlStatementCompleted, and then click Watch Live Data.

16. In the Demo 1 - create xe sessions.sql query pane, select the code under the comment that begins
Step 7, and then click Execute to execute some SQL statements.
17. In the MIA-SQL - SqlStatementCompleted: Live Data pane, wait for the events to be captured and
displayed; this can take a few seconds. Other SQL statements from background processes might be
captured by the session.
18. In the Demo 1 - create xe sessions.sql query pane, select the code under the comment that begins
Step 8, and then click Execute to stop the session.

19. In Object Explorer, right-click SqlStatementCompleted, and then click Properties.

20. In the Session Properties dialog box, review the settings on the General, Events, Data Storage, and
Advanced pages, if necessary referring back to the session definition under the comment that begins
Step 1.

21. In the Session Properties dialog box, click Cancel.

22. In the Demo 1 - create xe sessions.sql query pane, select the code under the comment that begins
Step 10, and then click Execute to drop the session.

23. Keep SQL Server Management Studio open for the next demonstration.

Check Your Knowledge

Question

Which of the following statements about Extended Events is incorrect?

Select the correct answer.

Extended Events can be viewed using Watch Live Data without starting a session.

Extended Events sessions can be created using Transact-SQL commands.

Extended Events is limited in what it can do and will soon be deprecated. Use SQL Trace whenever possible.

Watch Live Data provides a real-time view of query execution statistics.

Lesson 2
Working with Extended Events
This lesson discusses using Extended Events. It covers common scenarios in which you might create
Extended Events sessions for troubleshooting and performance optimization, in addition to the
system_health Extended Events session, which captures several events relevant to performance tuning.

Lesson Objectives
At the end of this lesson, you will be able to:

 Configure Extended Events sessions.

 Configure Extended Events targets.

 Explain the system_health Extended Events session.

 Describe usage scenarios for Extended Events.

 Describe best practices for using Extended Events.

Configuring Sessions
As you have learned, Extended Events sessions are
composed from several other object types,
primarily events and targets. Sessions also have a
number of configuration options that are set at
session level:

 MAX_MEMORY. The amount of memory


allocated to the session for use as event
buffers, in kilobytes. The default value is 4 MB.

 EVENT_RETENTION_MODE. Specifies how


the session will behave when the event buffers
are full and further events occur:

o ALLOW_SINGLE_EVENT_LOSS. An event can be dropped from the session if the buffers are full. A
compromise between performance and data loss, this is the default value.

o ALLOW_MULTIPLE_EVENT_LOSS. Full event buffers containing multiple events can be discarded.


Minimal performance impact, but high data loss.

o NO_EVENT_LOSS. Events are never discarded; tasks that trigger events must wait until event
buffer space is available. Potential for severe performance impact, but no data loss.

 MAX_DISPATCH_LATENCY. The amount of time events will be held in event buffers before being
dispatched to targets—defaults to 30 seconds. You may set this value to INFINITE, in which case the
buffer is only dispatched when it is full, or the session is stopped.

 MAX_EVENT_SIZE. For single events larger than the size of the buffers specified by MAX_MEMORY,
use this setting. If a value is specified (in kilobytes or megabytes), it must be greater than
MAX_MEMORY.

 MEMORY_PARTITION_MODE

o NONE. Memory is not partitioned. A single group of event buffers is created.

o PER_NODE. A group of event buffers is created per NUMA node.



o PER_CPU. A group of event buffers is created per CPU.

 STARTUP_STATE. When set to ON, the session will start when SQL Server starts. The default value is
OFF.

 TRACK_CAUSALITY. When set to ON, an identifier is added to each event identifying the task that
triggered the event. With this, you can determine whether one event is caused by another.
For more information about configuring a session through Transact-SQL, see Microsoft Docs:

CREATE EVENT SESSION (Transact-SQL)


http://aka.ms/b2eo2i
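
The following sketch brings several of these options together in a single session definition; the session
name, the event, and the option values are illustrative assumptions, not recommendations:

CREATE EVENT SESSION with Session-Level Options
CREATE EVENT SESSION TrackSqlStatements
ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (ACTION (sqlserver.session_id, sqlserver.sql_text))
ADD TARGET package0.ring_buffer
WITH (
    MAX_MEMORY = 8192 KB,
    EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
    MAX_DISPATCH_LATENCY = 10 SECONDS,
    STARTUP_STATE = OFF,
    TRACK_CAUSALITY = ON
);

Once created, a session defined in this way can be started with ALTER EVENT SESSION TrackSqlStatements
ON SERVER STATE = START, and stopped by setting STATE = STOP.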

Configuring Targets
Several Extended Events targets take configuration
values when they are added to a session.

Event File
The event file target can be used to write session
data to a file. It takes the following configuration
parameters:
 filename. The file name to write to; this can
be any valid file name. If a full path is not
specified, the file will be created in the
\MSSQL\Log folder of the SQL Server instance
on which the session is created.

 max_file_size. The largest size that the file may grow to; the default value is 1 GB.
 max_rollover_files. The number of files that have reached max_file_size to retain. The oldest file is
deleted when this number of files is reached.
 increment. The file growth increment, in megabytes. The default value is twice the size of the session
buffer.
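
A minimal sketch of a session that writes to an event file target, showing the parameters described above;
the session name, file path, and sizes are placeholders only:

Configuring an Event File Target
CREATE EVENT SESSION TrackBatchesToFile
ON SERVER
ADD EVENT sqlserver.sql_batch_completed
ADD TARGET package0.event_file
    (SET filename = N'D:\XEvents\TrackBatches.xel',
         max_file_size = 5,        -- megabytes
         max_rollover_files = 4)
WITH (MAX_DISPATCH_LATENCY = 30 SECONDS);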

For more information on configuring the event file target, see the SQL Server Technical Documentation:

Event File Target


http://aka.ms/ixau4l

Event Pairing
The event pairing target is used to match events that occur in pairs (for example, statement starting and
statement completing, or lock acquired and lock released), and report on beginning events that have no
matching end event. It takes the following configuration parameters:

 begin_event. The beginning event name of the pair.

 end_event. The end event name of the pair.

 begin_matching_columns. The beginning event columns to use to identify pairs.

 end_matching_columns. The ending event columns to use to identify pairs.

 begin_matching_actions. The beginning event actions to use to identify pairs.

 end_matching_actions. The ending event actions to use to identify pairs.



 respond_to_memory_pressure. Permit the target to discard events (and so reduce memory


consumption) when memory is under pressure.

 max_orphans. The maximum number of unpaired events the target will collect. The default value is
10,000. When this number is reached, events in the target are discarded on a first-in, first-out basis.
For more information on configuring the event pairing target, see the SQL Server Technical
Documentation:

Event Pairing Target


http://aka.ms/lj7gng

The system_health Extended Events Session


The system_health Extended Events session is
created by default when a SQL Server 2008 or later
version database engine instance is installed. The
session is configured to start automatically when
the database engine starts. The system_health
session is configured to capture a range of events
that are relevant for troubleshooting common SQL
Server issues. These include:
 Details of deadlocks that are detected,
including a deadlock graph.

 The sql_text and session_id when an error


that has a severity of 20 (or higher) occurs.
 The sql_text and session_id for sessions that encounter a memory-related error.

 The callstack, sql_text, and session_id for sessions that have waited for more than 15 seconds on
selected resources (including latches).

 The callstack, sql_text, and session_id for any sessions that have waited for 30 seconds or more for
locks.

 The callstack, sql_text, and session_id for any sessions that have waited for a long time for
preemptive waits. (A preemptive wait occurs when SQL Server is waiting for external API calls to
complete; the trigger time varies by wait type.)

 The callstack and session_id for CLR allocation and virtual allocation failures (when insufficient
memory is available).

 A record of any nonyielding scheduler problems.


 The ring_buffer events for the memory broker, scheduler monitor, memory node OOM, security, and
connectivity. This tracks when an event is added to any of these ring buffers.

 System component results from sp_server_diagnostics.

 Instance health collected by scheduler_monitor_system_health_ring_buffer_recorded.

 Connectivity errors using connectivity_ring_buffer_recorded.

 Security errors using security_error_ring_buffer_recorded.



The system_health session writes data to two targets:

 A ring buffer target, configured to hold up to 5,000 events and to occupy no more than 4 MB.

 An event file target, composed of up to four files of 5 MB each.

Note: The details of the system_health session are best understood by looking at its
definition. You can generate a definition from SSMS:

1. Connect SSMS Object Explorer to a SQL Server instance on which you have administrative rights.

2. In the Object Explorer pane, expand Management, expand Extended Events, and then expand
Sessions.

3. Right-click on the system_health node, click Script As, click CREATE TO, and then click New
Query Editor Window. A script to recreate the system_health session will be generated.

Because both targets are configured to roll over and discard the oldest data they contain when they are
full, the system_health session will only contain the most recent issues. On instances of SQL Server where
the system_health session is capturing many events, the targets might roll over before you can examine
specific events.
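
As an example, the following query (a sketch) reads the events currently held in the system_health event
file target; the path is a placeholder for the \MSSQL\Log folder of your own instance. Individual events of
interest, such as xml_deadlock_report, can then be extracted from the XML by using XQuery methods:

Reading the system_health Event File Target
SELECT CAST(event_data AS xml) AS event_xml
FROM sys.fn_xe_file_target_read_file(
         N'C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\Log\system_health*.xel',
         NULL, NULL, NULL);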

Usage Scenarios for Extended Events


Extended Events can be used to troubleshoot
many common performance issues.

Execution Time-outs
When a Transact-SQL statement runs for longer
than the client application’s command time-out
setting, a time-out error will be raised by the client
application. Without detailed client application
logging, it may be difficult to identify the
statement causing a time-out.

This scenario is an ideal use case for the Extended


Events event pairing target, using either of the
following pairs:

 sqlserver.sp_statement_starting and sqlserver.sp_statement_completed (for systems using stored


procedures for database access).
 sqlserver.sql_statement_starting and sqlserver.sql_statement_completed (for systems using ad
hoc SQL for database access).
When a time-out occurs, the starting event will have no corresponding completed event, and will be
returned in the output of the event pairing target.
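
A hedged sketch of such a session for ad hoc SQL follows; the session name is a placeholder, and matching
only on session_id is a simplification (production diagnostics might match on additional columns or
actions):

Pairing Statement Starting and Completed Events
CREATE EVENT SESSION FindStatementTimeouts
ON SERVER
ADD EVENT sqlserver.sql_statement_starting
    (ACTION (sqlserver.session_id, sqlserver.sql_text)),
ADD EVENT sqlserver.sql_statement_completed
    (ACTION (sqlserver.session_id, sqlserver.sql_text))
ADD TARGET package0.pair_matching
    (SET begin_event = N'sqlserver.sql_statement_starting',
         end_event = N'sqlserver.sql_statement_completed',
         begin_matching_actions = N'sqlserver.session_id',
         end_matching_actions = N'sqlserver.session_id')
WITH (MAX_DISPATCH_LATENCY = 10 SECONDS);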

Troubleshooting ASYNC_NETWORK_IO
The ASYNC_NETWORK_IO wait type occurs when the database engine is waiting for a client application to
consume a result set. This can occur because the client application processes a result set row-by-row as it
is returned from the database server.

To troubleshoot this issue with Extended Events, capture the sqlos.wait_info event, filtering on wait_type
value NETWORK_IO. The histogram target might be suitable for this investigation, using either the client
application name or the client host name to define histogram groups.
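
Because the wait_type field of sqlos.wait_info is a map, the predicate must compare it to an integer
map_key; the key for NETWORK_IO can vary between versions, so look it up on your own instance before
building the session:

Finding the map_key for the NETWORK_IO Wait Type
SELECT map_key, map_value
FROM sys.dm_xe_map_values
WHERE name = N'wait_types'
  AND map_value = N'NETWORK_IO';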

Tracking Errors and Error Handling in Transact-SQL


Errors may be handled in Transact-SQL code by using TRY…CATCH blocks. Every error raises an event in
Extended Events; this includes the errors handled by the TRY…CATCH blocks. You might want to capture
all unhandled errors, or track the most commonly occurring errors, whether or not they are handled.

The sqlserver.error_reported event can be used to track errors as they are raised. The is_intercepted
column can be used to identify if an error is handled in a TRY…CATCH block.
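
For example, the following sketch captures errors of severity 16 or higher that are not intercepted by a
TRY…CATCH block; the session name, severity threshold, and predicate are illustrative assumptions:

Capturing Unhandled Errors
CREATE EVENT SESSION TrackUnhandledErrors
ON SERVER
ADD EVENT sqlserver.error_reported
    (ACTION (sqlserver.session_id, sqlserver.sql_text)
     WHERE severity >= 16 AND is_intercepted = 0)
ADD TARGET package0.ring_buffer
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);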

Tracking Recompilations
Query execution plan recompilations occur when a plan in the plan cache is discarded and recompiled.
High numbers of plan recompilations might indicate a performance problem, and may cause CPU
pressure. Windows performance counters can be used to track overall recompilation counts for a SQL
Server instance, but more detail may be needed to investigate further.

In Extended Events, the sqlserver.sql_statement_recompile event can provide detailed information,


including the cause of recompilation.
The histogram target can be used for tracking recompilations. You should group on source_database_id
to identify the database with the highest number of recompilations in an instance; group on
statement/object_id to find the most commonly recompiled statements.
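
A sketch of a session that groups recompilations by database follows; the session name is a placeholder,
and to group by statement instead, the source parameter could be changed as described above:

Grouping Recompilations with the Histogram Target
CREATE EVENT SESSION TrackRecompiles
ON SERVER
ADD EVENT sqlserver.sql_statement_recompile
ADD TARGET package0.histogram
    (SET source = N'source_database_id', source_type = 0)  -- 0 groups on an event column
WITH (MAX_DISPATCH_LATENCY = 10 SECONDS);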

tempdb Latch Contention


Latch contention in tempdb can occur due to contention for the allocation bitmap pages when large
numbers of temporary objects are being created or deleted. This causes tempdb performance problems
because all allocations in tempdb are slowed down.
The latch_suspend_end event tracks the end of latch waits by database_id, file_id, and page_id. With the
predicate divide_evenly_by_int64, you can capture the contention specifically on allocation pages, because
the different allocation bitmap pages occur at regular intervals in a database data file. Grouping the
events using the histogram target should make it easier to identify whether latch waits are caused by
contention for allocation bitmap pages.

Tracking Lock Escalation


Lock escalation occurs when a single statement acquires more than 5,000 locks on a single object, or
under certain memory conditions.

The sqlserver.lock_escalation event provides the lock escalation information.

Tracking Problematic Page Splits


Page splits are of two types:

 Mid-page splits.
 Page splits for new allocations.

Mid-page splits create fragmentation and more transaction log records due to data movement.

Tracking page splits alone using the sqlserver.page_split event is inefficient because this does not
differentiate the problematic mid-page splits and normal allocation splits. The sqlserver.transaction_log
event can be used for tracking LOP_DELETE_SPLIT operations to identify the problematic page splits. A
histogram target might be most suitable for this task, grouping either on database_id to find the
database with the most page splits, or, within a single database, on alloc_unit_id to find the indexes with
the most page splits.

Troubleshooting Orphaned Transactions


Orphaned transactions are open transactions where the transaction is neither committed nor rolled back.
An orphaned transaction may hold locks and lead to more critical problems like log growth and blocking,
potentially leading to a block on the whole SQL Server instance.

The database_transaction_begin and database_transaction_end events can be used with an event


pairing target to identify the orphaned transactions. The tsql_frame action can be used to identify the
line of code where the orphaned transaction started.

Tracking Session-Level Wait Stats


The wait stats available from the sys.dm_os_wait_stats DMV are aggregated at instance level, so it’s not a
fine-grained troubleshooting tool. Although you can track wait stats by session with the additional
sys.dm_exec_session_wait_stats DMV, this may not be suitable for use in a busy system with many
concurrent database sessions.

The sqlos.wait_info event can be used to track waits across multiple concurrent sessions.

Tracking Database and Object Usage


Tracking database and object usage helps to identify the most frequently used database, and the most
frequently used objects within a database. You might use this information to guide your optimization
efforts, or to prioritize objects for migration to faster storage or memory-optimized tables.
The sqlserver.lock_acquired event can help with tracking usage in most cases. For database usage, a
histogram target can group on database_id. Object usage can be tracked by capturing SCH_M or SCH_S
locks at the object resource level and grouping on object_id in a histogram target.
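
For example, the following sketch counts lock_acquired events per database; the session name is a
placeholder, and in practice you would normally add predicates to reduce the volume of events captured:

Counting Lock Acquisitions per Database
CREATE EVENT SESSION TrackDatabaseUsage
ON SERVER
ADD EVENT sqlserver.lock_acquired
ADD TARGET package0.histogram
    (SET source = N'database_id', source_type = 0)  -- group on the event's database_id column
WITH (MAX_DISPATCH_LATENCY = 10 SECONDS);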

Extended Events Best Practices

Run Extended Events Sessions Only


When Needed
Although Extended Events is a lightweight logging
framework, each active session has an overhead of
CPU and memory resources. You should get into
the practice of only running Extended Events
sessions you have created when you have to
troubleshoot specific issues.

Use the SSMS GUI to Browse Available


Events
The Events page of the Extended Events GUI in
SSMS brings all the metadata about individual events together into one view; this makes it easier to
understand the information that Extended Events makes available to you than querying the DMVs
directly.

Understand the Limitations of the Ring Buffer Target


When using a ring buffer target, be aware that you might not always be able to view all the events
contained in the ring buffer. This is due to a limitation of the sys.dm_xe_session_targets DMV; the DMV
is restricted to displaying 4 MB of formatted XML data. Because Extended Events data is stored internally
as unformatted binary, it is possible that the data in a ring buffer will, when converted to formatted XML,
exceed the 4 MB limit of sys.dm_xe_session_targets.

You can test for this effect by comparing the number of events returned from a ring buffer in XML with
the count of events returned in the XML document header—or check the value of the truncated attribute
in the XML header.

A query comparing values for the system_health session:

Ring Buffer: Number of Events in XML Compared with Header


SELECT x.target_data.value('(RingBufferTarget/@eventCount)[1]', 'int') AS event_count,
       x.target_data.value('count(RingBufferTarget/event)', 'int') AS node_count,
       x.target_data.value('(RingBufferTarget/@truncated)[1]', 'bit') AS output_truncated
FROM
(
    SELECT CAST(target_data AS xml) AS target_data
    FROM sys.dm_xe_sessions AS xs
    JOIN sys.dm_xe_session_targets AS xst
        ON xs.address = xst.event_session_address
    WHERE xs.name = N'system_health'
        AND xst.target_name = N'ring_buffer'
) AS x;

To avoid this effect, you can:

 Use a file-based target (event file target or ETW target).

 Reduce the size of the MAX_MEMORY setting for the ring buffer to reduce the likelihood that the
formatted data will exceed 4 MB. No single value is guaranteed to work; you may have to try a setting
and be prepared to adjust it to minimize the truncation effect while still collecting a useful volume of
data in the ring buffer.

Note: This effect is not strictly limited to the ring buffer target; it can occur on any target
that stores output in memory buffers (ring buffer target, histogram target, event pairing target,
and event counter target). However, it is most likely to affect the ring buffer target because it
stores unaggregated raw data. All the other targets using memory buffers contain aggregated
data, and are therefore less likely to exceed 4 MB when formatted.

Performance Impact of Collecting Query Execution Plans


Three events can be used to collect query execution plans as part of an Extended Events session:

 query_post_compilation_showplan. Returns the estimated query execution plan when a query is


compiled.

 query_pre_execution_showplan. Returns the estimated query execution plan when a query is


executed.

 query_post_execution_showplan. Returns the actual query execution plan when a query is executed.
When using any of these events, you should consider that adding them to a session, even when predicates
are used to limit the events captured, can have a significant impact on the performance of the database
engine instance. This effect is most marked with the query_post_execution_showplan event. You should
limit your use of these events to troubleshooting specific issues; they should not be included in an
Extended Events session that is always running.

Deadlock Graph Format


Deadlock graphs collected by the xml_deadlock_report and database_xml_deadlock_report events are
in a different format from the deadlock graphs produced by SQL Server Profiler; unlike the SQL Server
Profiler format, deadlock graphs captured by Extended Events can represent complex deadlock scenarios
involving more than two processes. If saved as an .xdl file, both formats of deadlock graph can be opened
in SSMS.

Demonstration: Tracking Session-Level Waits


In this demonstration, you will see how to use Extended Events to report on wait types by session.

Demonstration Steps
1. In SSMS, in Solution Explorer, double-click Demo 2 - track waits by session.sql.

2. In Object Explorer, expand Management, expand Extended Events, right-click Sessions, and then
click New Session.

3. In the New Session dialog box, on the General page, in the Session name box, type Waits by
Session.
4. On the Events page, in the Event library box, type wait, and then, in the list below, double-click
wait_info to add it to the Selected events list.

5. Click Configure to display the Event configuration options list.


6. In the Event configuration options list, on the Global Fields (Actions) tab, select the session_id
check box.

7. On the Filter (Predicate) tab, click Click here to add a clause.


8. In the Field list, click sqlserver.session_id, in the Operator list, click >, and then in the Value box,
type 50. This filter will exclude most system sessions from the session.

9. On the Data Storage page, click Click here to add a target.

10. In the Type list, click event_file, in the File name on server box, type
D:\Demofiles\Mod18\waitbysession, in the first Maximum file size box, type 5, in the second
Maximum file size box, click MB, and then click OK.
11. In Object Explorer, under Sessions, right-click Waits by Session, and then click Start Session.

12. In File Explorer, in the D:\Demofiles\Mod18 folder, right-click start_load_1.ps1, and then click Run
with PowerShell. If a message is displayed asking you to confirm a change in execution policy, type
Y, and then press Enter. Leave the workload to run for a minute or so before proceeding.

13. In SSMS, in the Demo 2 - track waits by session.sql query pane, select the code under the comment
that begins Step 14, click Execute, and then review the results.

14. Select the code under the comment that begins Step 15, and then click Execute to stop and drop the
session, and to stop the workload.

15. In File Explorer, in the D:\Demofiles\Mod18 folder, note that one (or more) files with a name
matching waitbysession*.xel have been created.

16. Close File Explorer, and then close Windows PowerShell®.

17. Keep SQL Server Management Studio open for the next demonstration.

Categorize Activity
Categorize each Extended Events target type into the appropriate category. Indicate your answer by
writing the category number to the right of each item.

Items

1 Ring buffer target

2 Event file target

3 Histogram target

4 Event Tracing for Windows target

5 Event pairing target

6 Event counter target

Categories

Category 1: Written to Memory Buffers

Category 2: Written to File on Disk

Lesson 3
Live Query Statistics
Live Query Statistics gives immediate insight into the execution of a query, making long-running queries
easier to debug. You can see each step of the plan exactly as it executes.

Lesson Objectives
After completing this lesson, you will be able to:

 Enable Live Query Statistics for one or all sessions.

 Use Live Query Statistics to view execution plans for queries as they are running.

What Is Live Query Statistics?


Developers and DBAs now have the ability to view
a live execution plan for a query while the query is
running. This is helpful when debugging code, as
you can see the flow of operations while the query
executes.
Live Query Statistics will work with SQL Server
2014 and later. Some performance degradation is
likely to occur as a result of running Live Query
Statistics, so it should not be used in a production
environment, unless a query is causing problems.

Live Query Statistics is a feature of SSMS, which is


now available independently of SQL Server. You
can download the latest version of SSMS free of charge here:

Download SQL Server Management Studio (SSMS)


http://aka.ms/o4vgkz

Enabling Live Query Statistics


You can enable Live Query Statistics for the
current session only, returning the live plan for
your current query, or for all sessions.

There are two ways to enable Live Query Statistics


for the current session:
 On the Query menu, click Include Live Query
Statistics while in the query window.

 Right-click in the query window, and then


click Include Live Query Statistics.

When the query executes, you will see a new tab


in the results pane.

Enabling Live Query Statistics for all sessions adds the statistics to Activity Monitor. There are two ways
that you can do this:

 Execute SET STATISTICS PROFILE ON or SET STATISTICS XML ON in the target session. Both return
the same data, but SET STATISTICS XML ON returns the data as XML, enabling another application
to process the results.

 Enable the query_post_execution_showplan extended event. This is a server setting that will affect
all sessions.

To learn more about using Extended Events, see MSDN:

Monitor System Activity Using Extended Events


http://aka.ms/n8n81l

Note: When enabling Live Query Statistics for all sessions, remember that there is a
performance overhead to running it.

Demonstration: Enable Live Query Statistics for a Session


In this demonstration, you will see how to:

 Enable Live Query Statistics.

 View the query execution plan.

 Disable Live Query Statistics.

Demonstration Steps
1. In SSMS, in Solution Explorer, double-click Demo 3 - live query statistics.sql script file.
2. Highlight the script under the Step 1 description, and then click Execute.

3. Highlight the script under the Step 2 description, and then click Execute.

4. On the Query menu, click Include Live Query Statistics.

5. Highlight the script under the Step 4 description, and then click Execute.

6. On the Query menu, click Include Live Query Statistics.

7. Close SQL Server Management Studio, without saving any changes.



Check Your Knowledge

Question

Which of the following statements will enable Live Query Statistics for all sessions in Activity Monitor?

Select the correct answer.

ALTER DATABASE SET STATISTICS ON

SET LIVE QUERY STATISTICS ON

ENABLE LIVE QUERY STATISTICS

SET STATISTICS PROFILE OFF

SET STATISTICS XML ON



Lesson 4
Optimize Database File Configuration
This lesson looks at the options for configuring database files, including tempdb.

Lesson Objectives
At the end of this lesson, you will be able to:

 Explain the importance of tempdb.

 Configure tempdb.

 Look at options for storing data and log files.

Improving Performance with tempdb


tempdb is the temporary database shared by all
the databases on a SQL Server instance. It plays a
critical role in ensuring your databases perform
well. When SQL Server is heavily loaded, tempdb
can become a bottleneck, unless it is properly
configured. To understand why, it is helpful to
consider how it is used.

Understanding tempdb
 tempdb is recreated each time SQL Server
starts, and is dropped when SQL Server is
stopped. Like all databases, it inherits from the
model database.

 tempdb is a shared temporary database for all databases on a SQL Server instance—this means that a
single database can cause performance problems for all the databases on that instance.
 tempdb is a system database. It is not a database you normally work with directly, but a temporary
storage area for objects created by users, internal objects, and version stores. These objects are
created and destroyed during normal operation. User objects include local and global temporary
tables, table variables, temporary indexes, and tables from table-valued functions.

 tempdb is used to hold intermediate results by queries that include GROUP BY, ORDER BY, or UNION
clauses where intermediate results need to be stored when they can’t be cached.

 tempdb holds work tables for hash join and hash aggregate operations, in addition to work tables for
cursor, spool operations, and large object (LOB) storage.

In addition, tempdb acts as a version store for online index operations, and transactions with READ
COMMITTED SNAPSHOT and SNAPSHOT isolation levels.

All the databases on an instance rely on tempdb for fast reads and writes, and most SQL Server databases
generate a great deal of tempdb activity. If tempdb runs out of space, errors will occur causing a
significant amount of disruption in a production environment.
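
The following query is a minimal sketch for monitoring how much space remains in tempdb, so that you can react before it runs out. The DMV reports space in 8-KB pages, so the query converts the totals to megabytes:

Monitoring tempdb Space
-- Run in the context of tempdb; page counts are converted from 8-KB pages to MB.
USE tempdb;
SELECT
    SUM(unallocated_extent_page_count) * 8 / 1024 AS free_space_mb,
    SUM(user_object_reserved_page_count) * 8 / 1024 AS user_objects_mb,
    SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
    SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_mb
FROM sys.dm_db_file_space_usage;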

Also, if there are latch contention issues, performance will suffer. A latch is a lightweight, short-term lock
that SQL Server takes internally on pages such as GAM (Global Allocation Map), SGAM (Shared Global
Allocation Map), and PFS (Page Free Space) pages. These pages are used to determine where tempdb data
can be written. If tempdb has only one data file, it is easy to run into latch contention on these pages. With
several data files, the load is spread across many PFS, GAM, and SGAM pages when determining where
data can be written.

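The following queries are a minimal sketch for investigating this. The first lists the tempdb data files; the second shows tasks that are currently waiting on tempdb pages (database ID 2 is always tempdb), so it only reveals contention that is happening at the moment it runs:

Checking tempdb Data Files and Allocation Contention
-- List the tempdb data files.
SELECT name, physical_name, size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';

-- Tasks currently waiting on tempdb pages; resource_description has the form 2:<file_id>:<page_id>.
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%';
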
Improving Performance with tempdb


There are a number of things you can do to ensure that tempdb does not become a bottleneck:

 Store tempdb on separate storage. You get the best performance when database data files, log files,
and tempdb files are held separately.

 Store tempdb on fast storage. Use Solid State Disks (SSDs) to improve performance of tempdb data.

 Increase the size of tempdb.

 Enable autogrowth for tempdb.

 Use multiple data files for tempdb.

 Enable Database Instant File Initialization to speed up file initialization.

Best Practice: Use fast storage for tempdb. Unlike a log file that writes records to disk
sequentially, tempdb has varied reads and writes, and benefits from fast storage media.

SQL Server Installation


At installation, the default settings for tempdb depend on the number of processors installed on your
server. This means that performance should be improved for many SQL Server instances without needing
to make any changes after installation.
There are different aspects to configuring tempdb. A full discussion of how to configure tempdb is an
involved topic. If you want to find out more, see TechNet’s white paper:

Working with TempDb in SQL Server 2005


http://aka.ms/dzi7g8

Configuring tempdb
You can configure both tempdb data files, and log
files. The options include:
 Number of files. The default value is either eight or the number of logical processor cores on your
server, whichever is lower. Microsoft recommends that if you have fewer than eight logical processors,
the number of tempdb data files should equal the number of processors. If your server has more than
eight processors and you experience contention issues, increase the number of data files in multiples
of four, or change the way your databases work to reduce tempdb contention. For optimal
performance, you should have only the necessary number of tempdb data files.

 Initial size (MB). Make all tempdb data files the same size. This is important because of the
algorithm used for page allocation. If files differ in size, you will not get the intended performance
improvements. The file size required will depend on the database workload—8 MB is the default for
primary and secondary tempdb data files.

 Total initial size (MB). The total size of all tempdb data files.

 Autogrowth (MB). The number of megabytes by which each data file will grow when it runs out
of space. All tempdb data files will grow by the same amount, and at the same time. The
recommended and default size is 64 MB. You should always allow tempdb data files to grow—if they
run out of space, you will get errors instead of the intended operations.

 Total autogrowth (MB). The total number of megabytes when all tempdb data files autogrow.

 Data directories. You can store your tempdb data files in different directories. SQL Server supports
local disk storage, shared storage, and SMB file shares.

 Log file initial size (MB). The log is used to roll back transactions. However, tempdb cannot be
backed up, and uses minimal logging—there is no point-in-time recovery with tempdb.

 Log file autogrowth (MB). The default value is 64 MB. The maximum size is 2 TB. Always allow the
tempdb log file to grow.
 Log directory. Path for the tempdb log file. There can only be one tempdb log file, so there is only
one directory path.

Best Practice: Ensure tempdb files are large enough for normal workload without relying
on autogrowth. For best performance, autogrowth should handle the unexpected, not the
everyday.
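
You can also apply these settings with Transact-SQL. The following is a minimal sketch only: the logical file names tempdev and templog are the installation defaults (confirm yours in sys.master_files), and the sizes and file path are placeholders for values appropriate to your server:

Changing tempdb File Settings with ALTER DATABASE
-- Resize the primary data file and the log file (illustrative sizes).
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 64MB);

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 512MB, FILEGROWTH = 64MB);

-- Add a further data file; the path must be valid on your server.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb\tempdev2.ndf', SIZE = 1024MB, FILEGROWTH = 64MB);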

Performance Improvements
SQL Server performance is improved by tempdb configuration in several ways:

 Allocations in tempdb all use uniform extents. This reduces blocking issues when SQL Server is heavily
loaded, and means that trace flag 1118 is not needed.
 Multiple data files all grow by the same amount, at the same time. If data files are different sizes, you
don’t get the same benefit in reducing contention.

 The primary filegroup has the AUTOGROW_ALL_FILES property set to ON, and this cannot be
changed.

Best Practice: When configuring a SQL Server instance for failover clustering, ensure the
directories are valid for all cluster nodes. If the tempdb directory(s)—including log directory—are
not on the failover target node, SQL Server will not fail over.

For more information about tempdb settings, see Microsoft Docs:

tempdb Database
https://aka.ms/Yntqsx

Optimize Database File Configuration


SQL Server uses both data files and log files. Each
transaction is written to both a data file, and a log
file, to ensure that the database is never in an
inconsistent state. In the event of a system failure,
SQL Server can go to the log file to get the latest
transactions.

Storing data files and log files on separate storage media improves performance.

To help you understand read and write activity on your database, use system DMVs to understand
whether you need to store your data files and log files separately. For example, the
sys.dm_io_virtual_file_stats DMV gives you I/O statistics for data and log files. Use the results in
combination with baseline performance statistics.

Provide a database ID and file ID (or NULL for all databases or all files) to get a variety of I/O statistics:

sys.dm_io_virtual_file_stats
-- Statistics for every file in the current database; pass NULL, NULL for all databases.
SELECT *
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL);

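As a sketch of how you might use these statistics, the following query joins the DMV to sys.master_files to calculate the average read and write latency for every database file; the thresholds you compare the results against should come from your baseline:

Average I/O Latency per File
SELECT
    DB_NAME(vfs.database_id) AS database_name,
    mf.name AS logical_file_name,
    mf.type_desc,
    vfs.num_of_reads,
    vfs.num_of_writes,
    vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_latency_ms,
    vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON vfs.database_id = mf.database_id
    AND vfs.file_id = mf.file_id
ORDER BY avg_write_latency_ms DESC;
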
Check Your Knowledge

Question
Which of the following statements is correct?

Select the correct answer.

Each SQL Server user database has one tempdb.

tempdb is a shared resource between all databases on a SQL Server instance.

The number of tempdb databases depends on the number of logical processors on the server.

You can have any number of tempdb databases on an instance, but the exact number is hidden.

tempdb should be deleted when not in use.

Lesson 5
Metrics
This lesson looks at gathering SQL Server metrics, including creating a baseline, using dynamic
management objects (DMOs) and Windows Performance Monitor.

Lesson Objectives
After completing this lesson, you will be able to:

 Monitor operating system and SQL Server performance metrics.

 Compare baseline metrics to observed metrics.

 Use PerfMon.
 Use Dynamic Management Objects to get performance metrics.

Generating Baseline Metrics

Baselines
A baseline is the quantified normal or usual state
of a SQL Server environment. A SQL Server
baseline tells you how a system typically performs.
Benefits of having a baseline include:

 You can compare the current system state with the baseline to identify changes.

 You can identify unusual activity, and proactively tune or resolve problems.

 Capacity planning is easier.

 Troubleshooting is easier.


To create a baseline, capture SQL Server diagnostic data over time and calculate the average. For example,
resource usage may be high during business peak hours, and lower during off-peak hours. Alternatively, it
might be high during weekends when weekly reports are generated. Multiple baselines may be needed
for different situations.

Benchmarks
A benchmark is a standard point of reference against which subsequent metrics can be compared. A
benchmark is different from a baseline. For example, the benchmark for a stored procedure execution
time might be two seconds. The baseline will give the usual or normal execution time for stored
procedures. If the execution time for the specific stored procedure is less than the baseline, then it is
considered good; otherwise, the stored procedure needs to be optimized.

What to Capture?
Many data points are available to be collected. Consider how you will use the data before capturing a
specific data point—how can it help you diagnose and foresee performance problems? To start with,
consider collecting the following:

 System usage. System usage is described in terms of CPU, I/O, and memory consumption. You can
check these basic Performance Monitor counters against the current values, in case of sudden
performance degradation. These values can also be used to define a system usage trend, which will
help further with capacity planning.

 SQL Server configurations. These are instance-level or database-level configuration settings, such as
max server memory, degree of parallelism, or auto shrink.

 Database size information. The system will stop when it runs out of storage space. Capture
database and file size information to help you react proactively to space issues, thereby preventing
system downtime (a capture sketch follows this list).
 Wait statistics. Wait statistics are helpful when troubleshooting SQL Server performance issues. They
give insight into the root cause of an issue and are helpful when optimizing a slow system.
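
As a sketch of how a capture might work for database size information, the following creates a history table and records the current size of every data and log file. The baseline database and table names are placeholders, and in practice the INSERT would be scheduled, for example as a SQL Server Agent job:

Capturing Database File Sizes for a Baseline
-- One-off setup; assumes a database named baseline already exists.
CREATE TABLE baseline.dbo.FileSizeHistory (
    CaptureDate datetime2 NOT NULL DEFAULT SYSDATETIME(),
    DatabaseName sysname NOT NULL,
    FileName sysname NOT NULL,
    TypeDesc nvarchar(60) NOT NULL,
    SizeMB int NOT NULL
);

-- Run on a schedule to record the current size of every file (size is stored in 8-KB pages).
INSERT INTO baseline.dbo.FileSizeHistory (DatabaseName, FileName, TypeDesc, SizeMB)
SELECT DB_NAME(database_id), name, type_desc, size * 8 / 1024
FROM sys.master_files;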

Data Capture Frequency


After you decide what to capture, the next step is to decide the frequency of the data capture. The
frequency governs the amount of data that is captured. Too much data will result in high storage costs;
too little data will not provide sufficient understanding of the system. The frequency also depends on the
type of data—for example, Performance Monitor data can be captured every 15 seconds or so. However,
SQL instance configuration data would be recorded much less often.

It is also necessary to have a data retention strategy. It is not advisable to keep a year or six months’ worth
of baseline data, because this data is not likely to be useful. In addition, capturing data during off-peak
hours will only add additional data to the storage without being useful.

Best Practice: Capture relevant data and keep it only for as long as it is useful—typically
three to four months.

Collecting Data Using DMVs


DMVs are views onto the internal workings of SQL
Server. As the name implies, they are dynamic in
that they provide internal metadata, including
internal memory structures, as queries are
executed. The data that DMVs expose will help
you to optimize the performance of your SQL
Server instance. You can capture data from DMVs
by using Transact-SQL queries, which can be saved
and used as required.

DMVs are useful in capturing a baseline because


data such as wait statistics, SQLOS information,
and cached query plans, cannot be obtained
through other sources. Data collection through certain DMVs may incur overhead on the SQL Server
instance. For example, obtaining index fragmentation details using the sys.dm_db_index_physical_stats
DMV for all indexes in a large database might take time to return and can negatively affect SQL Server
performance. Index DMVs, which were discussed in Module 6, should be monitored regularly.
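
For example, the following sketch checks fragmentation for a single database in LIMITED mode, which scans the fewest pages and keeps the overhead relatively low; the database name and page count threshold are placeholders:

Checking Index Fragmentation in LIMITED Mode
SELECT
    OBJECT_NAME(ips.object_id, DB_ID('AdventureWorks')) AS table_name,
    ips.index_id,
    ips.avg_fragmentation_in_percent,
    ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'), NULL, NULL, NULL, 'LIMITED') AS ips
WHERE ips.page_count > 1000
ORDER BY ips.avg_fragmentation_in_percent DESC;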

In addition, other important DMVs that you should consider monitoring are:

User Sessions
SQL Server uses session_ids to track users connected to SQL Server. The sys.dm_exec_sessions DMV collects
information about sessions including time connected, CPU usage, memory usage, and more.

For example, to display a list of users connected to SQL Server, plus the total elapsed time of their sessions
in seconds:

sys.dm_exec_sessions
SELECT login_name, SUM(total_elapsed_time)/1000 AS time_connected_seconds
FROM sys.dm_exec_sessions
GROUP BY login_name;

SQL Server Waits


 sys.dm_os_waiting_tasks: displays every task that is currently waiting on a resource.
 sys.dm_os_wait_stats: displays an aggregation of all wait times since the SQL Server instance was
started. A sample query that uses this DMV follows this list.

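For example, the following query is a minimal sketch that returns the top waits by total wait time since the instance started. The list of excluded wait types is illustrative rather than exhaustive; many wait types are benign background waits and can be ignored:

Top Waits by Total Wait Time
SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (
    'SLEEP_TASK', 'LAZYWRITER_SLEEP', 'CHECKPOINT_QUEUE',
    'XE_TIMER_EVENT', 'REQUEST_FOR_DEADLOCK_SEARCH', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
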
Query Performance
Statistics play an important role in ensuring queries are executed in the most efficient way. Use the
following DMOs in combination to identify queries that have missing statistics:

 sys.dm_exec_cached_plans: returns plan handles for cached plans.


 sys.dm_exec_sql_text: returns the Transact-SQL text for a given plan handle.

 sys.dm_exec_query_plan: returns the plan for the specified plan handle. The sketch after this list shows how to combine all three.

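The following query is a minimal sketch that combines the three DMOs with CROSS APPLY to return the text and XML plan for the most frequently reused cached plans; you can then inspect the plans for warnings such as missing statistics:

Combining the Plan Cache DMOs
SELECT TOP (20)
    cp.usecounts,
    cp.objtype,
    st.text AS sql_text,
    qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
ORDER BY cp.usecounts DESC;
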
Performance Monitor
Windows Performance Monitor is a snap-in for
Microsoft Management Console (MMC) that
comes with the Windows Server operating system.
Performance Monitor provides graphical views
onto the performance of the operating system and
the server. Performance Monitor is helpful because
it includes data about the server and its operating
system, in addition to SQL Server. You can view
performance data either from log files or in real
time.

What Does Performance Monitor Do?


You can use Performance Monitor to:

 Display real-time system performance data in three formats—line graph, histogram, and report.

 Monitor overall system health, SQL Server health, or the health of other applications by selecting
counters from the available performance objects.

 Record current performance counter values in text files and databases to analyze later. You can
analyze the performance data by using file manipulation techniques or Transact-SQL queries against
the appropriate database.

 Create custom sets of performance counters known as a “data collector set” that you can then
schedule to run as appropriate.

 Configure and respond to alerts. For example, when a specified threshold is reached, start a particular
data collector set or a particular program.
You can use Performance Monitor for real-time monitoring, and to establish a baseline for SQL Server
performance. You can collect performance data over time and analyze it to calculate workload
characteristics such as peak and off-peak hours, average CPU usage, memory usage, and more. It is often
useful to demonstrate performance gains or losses following a system change. Use Performance Monitor
to compare counter values before and after implementation.

Performance Monitor is a lightweight tool, with minimal performance overhead for sampling intervals of
greater than one second. The default or optimal sampling interval is 15 seconds. The amount of I/O that
Performance Monitor generates will depend on the number of counters, the sampling interval, and the
underlying storage. If there is an I/O issue, consider saving the performance data on separate storage and
only enable the counters that you need.

Starting Performance Monitor


1. To start Performance Monitor, type Performance Monitor from the start screen. The Performance
Monitor screen appears with a navigation tree in the left pane. Alternatively, type perfmon from the
Run command window.

2. There are three views:

a. Monitoring Tools

b. Data Collector Sets

c. Reports

3. The right pane displays associated data.

Performance Monitor Counters


Performance Monitor enables you to create a
baseline, or compare the current system
performance against an established baseline, by
using the captured data.

Getting Started with Performance Monitor
You cannot use the real-time performance data
for historical analysis. It can only be used to
monitor current system state and compare against
the established baseline. To monitor real-time
performance using Performance Monitor, follow
these steps:

1. Start Performance Monitor.

2. In the leftmost pane, expand Monitoring Tools, and then click Performance Monitor. This will open
the Performance Monitor window in the rightmost pane.

3. To add the counters to monitor, click the Plus Sign.

4. When you have added the required counters, click OK to view performance information.

By default, Performance Monitor shows the last 100 seconds of data. This value can be changed from the
Performance Monitor properties window that is opened by right-clicking on Performance Monitor and
selecting properties.

There are three different types of graph available in Performance Monitor: Line, Histogram, and Report.
Performance Monitor provides data collector sets to automate collection of selected counters. There are
two types of data collector sets: system and user defined. The system set includes OS and network specific
counters but does not include SQL Server counters. The data collector sets can also be triggered to start
when a specific event or alert occurs. For example, a data collector set can be started when the available
memory is less than 100 MB. To set up and schedule a data collector set to collect performance counters,
follow these steps:

1. Start Performance Monitor.

2. In the Performance Monitor window, in the leftmost pane, expand Data Collector Sets, right-click
User Defined, click New, and then click Data Collector Set. The Create new Data Collector Set
dialog box will appear.
3. In the Create new Data Collector Set dialog box, type a name, click Create manually, and then click
Next.

4. In the Create data logs section, select Performance counter, and then click Next.
5. Choose the appropriate performance counters.

6. Specify a sampling interval, and then click Finish. You can now schedule the data collector or execute
manually as required.

To configure the data collector set:

1. Right-click the data collector set you created, and then click Properties.

2. On the Schedule tab, click Add, and in the Folder Action dialog box, schedule the collection.

3. On the Stop Condition tab, specify a duration or maximum size for the set.

The data collector set will use your schedule to start collecting data or you can start it manually by using
the Action menu.

Important Performance Counters


This section describes some of the important performance counters you could collect.

CPU Usage
 Processor:

o %Processor Time

o %Privileged Time

 Process (sqlservr.exe):

o %Processor Time

o %Privileged Time

The %Processor Time counter gives information about the total CPU usage and should be monitored for
each available CPU. The Process (sqlservr.exe)/% Processor Time counter details how much CPU the SQL
Server instance is using. If high CPU usage is a result of another application, you should investigate
options for tuning that application. Occasional CPU spikes may occur and should not cause concern, but
you should investigate prolonged values of greater than 80 percent.

Memory Usage
 Memory

o Available Mbytes

 SQL Server: Buffer Manager

o Lazy writes/sec
o Buffer cache hit ratio

o Page life expectancy

o Page reads/sec

o Page writes/sec

 SQL Server: Memory Manager

o Total Server Memory (KB)

o Target Server Memory (KB)

The Memory/Available Mbytes counter shows the amount of physical memory that is immediately
available for allocation to a process or for system use. Ideally, this should be more than 300 MB. When it
drops to less than 64 MB, on most servers, Windows will display low memory notifications. The SQLOS
reacts to these notifications by reducing its memory usage.
The page life expectancy counter shows the amount of time that data pages stay in the buffer pool.
Ideally, this should be more than 300 seconds. If page life expectancy is less than 300 seconds, investigate
other buffer manager counters to get to the root cause. If the Lazy writes/sec counter is consistently non-
zero, along with low page life expectancy, and high page reads/sec and page writes/sec counters, there is
buffer pool contention. The buffer cache hit ratio counter shows how often SQL Server gets a page from
the buffer rather than the disk. This should ideally be close to 100 percent.
The total server memory is the current amount of memory that an instance of SQL Server is using. The
target server memory is the amount of memory that is allocated to a SQL Server instance. Ideally, total
server memory is equal to target server memory on a stable system. If total server memory is less than the
target server memory, it means that SQL Server is still populating the cache and loading the data pages
into memory. A sudden decrease in total server memory indicates a problem and needs further
investigation.
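
Many of these SQL Server counters can also be read from inside the instance through the sys.dm_os_performance_counters DMV, which is useful when you script a baseline instead of using Performance Monitor. A minimal sketch:

Reading Counters from sys.dm_os_performance_counters
-- Ratio counters, such as Buffer cache hit ratio, also need their matching base counter to calculate a percentage.
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy',
                       'Total Server Memory (KB)',
                       'Target Server Memory (KB)');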

Disk Usage
 Physical Disk

o Avg. Disk sec/Read

o Avg. Disk Bytes/Read

o Avg. Disk sec/Write

o Avg. Disk Bytes/Write

 Paging File

o %Usage

 SQL Server: Access Methods

o Forwarded Records/sec
o Full Scans/sec

o Index Searches/sec

o Page splits/sec

The Avg. Disk sec/Read counter shows the average time taken, in seconds, to read data from disk.
Similarly, the Avg. Disk sec/Write counter shows the average time taken, in seconds, to write data to disk.
High values of these counters may not indicate hardware issues. Poorly tuned queries and missing or
unused indexes may result in high I/O usage.

The Forwarded Records/sec value should be less than 10 per 100 batch requests/sec. Consider creating a
clustered index if this counter is consistently high. A high value for the Full Scans/sec counter may cause
high CPU usage. Full scans on smaller tables are acceptable; however, a large number of scans on big
tables should be investigated. The Page splits/sec counter value should ideally be less than 20 per 100
batch requests/sec. A high number of page splits may result in blocking, high I/O, or memory pressure.
Set an appropriate fill factor value to balance out page splits, as shown in the example below.

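For example, the following sketch rebuilds an index with a lower fill factor to leave free space on each page; the table, index, and fill factor value are illustrative and should be chosen based on your own workload:

Setting a Fill Factor on an Index Rebuild
ALTER INDEX IX_SalesOrderDetail_ProductID
ON Sales.SalesOrderDetail
REBUILD WITH (FILLFACTOR = 90);
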
SQL Server Statistics


 SQL Server: SQL Statistics

o Batch requests/sec

o SQL compilations/sec

o SQL recompilations/sec

 SQL Server: General Statistics

o User connections

o Logins/sec

o Logouts/sec

The batch requests/sec value is the number of Transact-SQL batch requests that SQL Server receives per
second. The SQL compilations/sec counter shows the number of times per second that SQL compilations
have occurred. A high number of SQL compilations and recompilations may cause a CPU bottleneck. The
SQL compilations/sec value should ideally be less than 10 percent of the number of batch requests/sec;
SQL recompilations should ideally be less than 10 percent of the total number of SQL compilations/sec.
The user connections counter shows the number of users who are currently connected to a SQL Server
instance. The logins/sec and logouts/sec values should ideally be less than two. If the value is consistently
greater than two, it means that the connection pooling is not being correctly used by the application.

This is not an exhaustive list of counters to be considered, but these are good starting points when
baselining SQL Server.

Analyzing Performance Monitor Data


The data you collect will not be useful until you
can analyze it to get meaningful information. You
can analyze Performance Monitor data by using
Microsoft Excel® or by importing the
performance logs into a database.

Microsoft Excel
The performance data, in csv format, can be
analyzed manually in Excel. If the data is collected
in binary format, the binary log file can be
converted to csv format with the relog command-
line utility. The relog utility ships with Windows
and does not require a separate installation.
The following example shows a typical command to convert a binary log file to csv format by using the
relog utility:

Using Relog to Convert to csv


relog <binary file path> -f csv -o <csv file path>

Analyzing data in Excel can be time-consuming. The column headers need to be formatted, the
performance data requires formatting, and then aggregate columns need to be added to get the
maximum and minimum standard deviation for the counter values. This becomes even more tedious
when more than one file is to be analyzed.

Database
The performance data can be imported into SQL Server and analyzed by using Transact-SQL statements.
The performance data is imported into a database manually, loaded by using SQL Server Integration
Services, or loaded by using the relog utility. The simplest method is probably the use of the relog
command-line utility.

The following example shows the syntax to import a performance log into a database by using the relog
utility:

Using Relog to Import Into a Database


relog <binary file path> -f SQL -o SQL:<ODBC Connection>!<display string>

The relog utility accepts a binary log file and inserts it into the database that the Open Database
Connectivity (ODBC) connection specifies. The display string identifies the binary log file or the data
collector set within the database. The relog utility imports data into three different tables, as follows:

 DisplayToID. This lists each data collector set that is imported into the database. Each data collector
set is identified by a GUID, and by the display string value that is specified when importing the data,
as shown in the preceding relog command syntax. The table also contains the number of records that
are imported, and the log start and stop times.

 CounterDetails. This contains one row for each counter that is present in the performance log file.
Every counter is uniquely identified by a unique counter id. Each counter has an associated machine
name. This is helpful in identifying counter values from different computers.

 CounterData. The CounterData table stores the actual counter values. The important columns are
GUID, CounterID, CounterValue, and CounterDateTime. The GUID column links to the DisplayToID
GUID column, and the CounterID column links to the CounterDetails CounterID column. The
CounterValue column contains the actual counter value, and the CounterDateTime column contains
the time that the value was recorded.

These three tables can be queried to get different counter values, as shown in the following example:

Counter Values
SELECT did.DisplayString, cd.CounterDateTime,
cdt.ObjectName, cdt.CounterName,
cd.CounterValue
FROM dbo.CounterData cd
JOIN dbo.DisplayToID did ON cd.GUID = did.GUID
JOIN dbo.CounterDetails cdt ON cd.CounterID = cdt.CounterID
WHERE did.DisplayString = 'SQLPerf'
ORDER BY cdt.ObjectName, cdt.CounterName, cd.RecordIndex;

The preceding query will list the counter values for the display string “SQLPerf.” To get the data from all
display strings, remove the filter on the DisplayString column in the preceding query.
You can aggregate data to form a baseline, and you can import data from multiple data collector sets
with different display strings, and then compare them to diagnose issues.

A sample query to get aggregate data from a particular display string or data collector set:

Aggregating Data for a Single Display String or Data Collector Set


SELECT CONVERT(VARCHAR(10), cd.CounterDateTime, 101) AS CounterDateTime,
RTRIM(cdt.ObjectName) AS ObjectName,
RTRIM(cdt.CounterName) AS CounterName,
MIN(cd.CounterValue) AS "Minimum value",
MAX(cd.CounterValue) AS "Maximum value",
AVG(cd.CounterValue) AS "Average value"

FROM dbo.CounterData cd
INNER JOIN dbo.DisplayToID did ON cd.GUID = did.GUID
INNER JOIN dbo.CounterDetails cdt ON cd.CounterID = cdt.CounterID

WHERE did.DisplayString = 'SQLPerf'

GROUP BY CONVERT(VARCHAR(10), cd.CounterDateTime, 101),


RTRIM(cdt.ObjectName), RTRIM(cdt.CounterName)

ORDER BY RTRIM(cdt.ObjectName), RTRIM(cdt.CounterName)

A sample query to display aggregate data from more than one display string:

Aggregate Data for More Than One Display String


SELECT did.DisplayString AS DataCollectorSet,
RTRIM(cdt.ObjectName) AS ObjectName,
RTRIM(cdt.CounterName) AS CounterName,
MIN(cd.CounterValue) AS "Minimum value",
MAX(cd.CounterValue) AS "Maximum value",
AVG(cd.CounterValue) AS "Average value"

FROM dbo.CounterData cd
INNER JOIN dbo.DisplayToID did ON cd.GUID = did.GUID
INNER JOIN dbo.CounterDetails cdt ON cd.CounterID = cdt.CounterID
GROUP BY did.DisplayString, RTRIM(cdt.ObjectName), RTRIM(cdt.CounterName)

ORDER BY RTRIM(cdt.ObjectName), RTRIM(cdt.CounterName);



You can use the preceding query to compare counter values from different data collectors. Query results can be
written to tables or flat files for further analysis.

Analyzing data manually is relatively straightforward and gives a lot of flexibility with the type of analysis you can
perform.

Lab: Monitoring, Tracing, and Baselining


Scenario
You are investigating why a new SQL Server instance is running slowly. Users are complaining that their
workloads run particularly slowly during peak business hours. To troubleshoot these performance issues,
and take informed corrective measures, you decide to establish a baseline for SQL Server performance.

In this lab exercise, you will set up data collection for analyzing workloads during peak business hours,
and implement a baseline methodology to collect performance data at frequent intervals. This will enable
comparisons to be made with the baseline.

Objectives
After completing this lab, you will be able to:

 Collect and analyze performance data by using Extended Events.

 Implement a methodology to establish a baseline.

Estimated Time: 60 minutes

Virtual machine: 20762C-MIA-SQL

User name: ADVENTUREWORKS\Student

Password: Pa55w.rd

Exercise 1: Collecting and Analyzing Data Using Extended Events


Scenario
You have been asked to prepare a reusable Extended Events session to collect and analyze workload.

The main tasks for this exercise are as follows:

1. Prepare the Lab Environment


2. Set Up an Extended Event Session

3. Execute Workload

4. Analyze Collected Data

 Task 1: Prepare the Lab Environment


1. Ensure that the 20762C-MIA-DC and 20762C-MIA-SQL virtual machines are both running, and then
log on to 20762C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd.

2. Run Setup.cmd in the D:\Labfiles\Lab18\Starter folder as Administrator.

 Task 2: Set Up an Extended Event Session


1. Create an Extended Events session to capture the sqlserver.error_reported, sqlserver.module_end,
sqlserver.sp_statement_completed, and sqlserver.sql_batch_completed events with a ring_buffer
target. In the D:\Labfiles\Lab18\Starter\20762 folder, the SetupExtendedEvent.sql file has a
possible solution script.

2. Use Watch Live Data for the Extended Events session.



 Task 3: Execute Workload


1. In the D:\Labfiles\Lab18\Starter folder, run the RunWorkload.cmd file multiple times to generate
event data for the Extended Events session.

2. In the AnalyzeSQLEE session live data window, stop the data feed, and then add the duration,
query_hash, and statement columns to the view.

 Task 4: Analyze Collected Data


1. In the AnalyzeSQLEE Extended Events live data window, group the data on query_hash data, and
then aggregate the data on average of duration. Sort the data in descending order of duration so
that statements that take the highest average time are at the top.

2. Review the data in one of the query hash rows.

3. Drop the AnalyzeSQLEE Extended Events session.

Results: At the end of this lab, you will be able to:

Set up an Extended Events session that collects performance data for a workload.

Analyze the data.

Exercise 2: Implementing Baseline Methodology


Scenario
You have been asked to set up a baseline methodology. You must collect data that can be used for
comparison if the instance develops performance issues.
The main tasks for this exercise are as follows:

1. Set Up Data Collection Scripts

2. Execute Workload

3. Analyze Data

 Task 1: Set Up Data Collection Scripts


 Create a database named baseline by using default settings, and then clear the wait statistics for the
instance. In the D:\Labfiles\Lab18\Starter\20762 folder, the PrepareScript.sql Transact-SQL file
has a sample solution script.

 Task 2: Execute Workload


1. Create a job from the WaitsCollectorJob.sql Transact-SQL file in the
D:\Labfiles\Lab18\Starter\20762 folder.

2. Run the waits_collectionsjob to collect statistics before and after running the RunWorkload.cmd
file multiple times.

 Task 3: Analyze Data


1. Using the collected waits data, write and execute a query to find the waits for the workload. In the
D:\Labfiles\Lab18\Starter\20762 folder, the WaitBaselineDelta.sql file has a sample solution
script.

2. Using the collected waits data, write and execute a query to find the percentage of waits. In the
D:\Labfiles\Lab18\Starter\20762 folder, the WaitBaselinePercentage.sql file has a sample
solution script.

3. Using the collected waits data, write and execute a query to find the top 10 waits. In the
D:\Labfiles\Lab18\Starter\20762 folder, the WaitBaselineTop10.sql file has a sample solution
script.

4. Close SQL Server Management Studio without saving any changes.

5. Close File Explorer.

Results: After completing this exercise, you will have implemented a baseline for a workload.

Question: What advantages do you see in using Extended Events to monitor your SQL
Server databases?

Module Review and Takeaways


Frequent monitoring and tracing is the key to identifying performance issues. Benchmarks and baselines
enable you to implement a robust performance troubleshooting methodology. SQL Server provides a
variety of different tools to help you to regularly monitor system performance.

Review Question(s)
Question: Do you use baselines to help you manage your SQL Server systems? If so, what
tools do you use to create baselines? If not, what are your reasons for not creating baselines?

Course Evaluation

Your evaluation of this course will help Microsoft understand the quality of your learning experience.

Please work with your training provider to access the course evaluation form.

Microsoft will keep your answers to this survey private and confidential and will use your responses to
improve your future learning experience. Your open and honest feedback is valuable and appreciated.
