Transact-SQL

This document provides an overview of Transact-SQL statements in SQL Server. It includes categories for backup and restore statements, data definition language statements for creating and managing database structures, data manipulation language statements for inserting, updating and deleting data, permissions statements for controlling access, and session setting statements for runtime options. Specific statements are listed under each category.


Contents

Overview
ADD
SENSITIVITY CLASSIFICATION
ALTER
APPLICATION ROLE
ASSEMBLY
ASYMMETRIC KEY
AUTHORIZATION
AVAILABILITY GROUP
BROKER PRIORITY
CERTIFICATE
COLUMN ENCRYPTION KEY
CREDENTIAL
CRYPTOGRAPHIC PROVIDER
DATABASE
DATABASE AUDIT SPECIFICATION
DATABASE compatibility level
DATABASE database mirroring
DATABASE ENCRYPTION KEY
DATABASE file and filegroup options
DATABASE HADR
DATABASE SCOPED CREDENTIAL
DATABASE SCOPED CONFIGURATION
DATABASE SET Options
ENDPOINT
EVENT SESSION
EXTERNAL DATA SOURCE
EXTERNAL LIBRARY
EXTERNAL RESOURCE POOL
FULLTEXT CATALOG
FULLTEXT INDEX
FULLTEXT STOPLIST
FUNCTION
INDEX
INDEX (Selective XML Indexes)
LOGIN
MASTER KEY
MESSAGE TYPE
PARTITION FUNCTION
PARTITION SCHEME
PROCEDURE
QUEUE
REMOTE SERVICE BINDING
RESOURCE GOVERNOR
RESOURCE POOL
ROLE
ROUTE
SCHEMA
SEARCH PROPERTY LIST
SECURITY POLICY
SEQUENCE
SERVER AUDIT
SERVER AUDIT SPECIFICATION
SERVER CONFIGURATION
SERVER ROLE
SERVICE
SERVICE MASTER KEY
SYMMETRIC KEY
TABLE
TABLE column_constraint
TABLE column_definition
TABLE computed_column_definition
TABLE index_option
TABLE table_constraint
TRIGGER
USER
VIEW
WORKLOAD GROUP
XML SCHEMA COLLECTION
Backup and restore
BACKUP
BACKUP CERTIFICATE
BACKUP MASTER KEY
BACKUP SERVICE MASTER KEY
RESTORE
RESTORE statements
RESTORE arguments
RESTORE FILELISTONLY
RESTORE HEADERONLY
RESTORE LABELONLY
RESTORE MASTER KEY
RESTORE REWINDONLY
RESTORE VERIFYONLY
BULK INSERT
CREATE
AGGREGATE
APPLICATION ROLE
ASSEMBLY
ASYMMETRIC KEY
AVAILABILITY GROUP
BROKER PRIORITY
CERTIFICATE
COLUMNSTORE INDEX
COLUMN ENCRYPTION KEY
COLUMN MASTER KEY
CONTRACT
CREDENTIAL
CRYPTOGRAPHIC PROVIDER
DATABASE
DATABASE AUDIT SPECIFICATION
DATABASE ENCRYPTION KEY
DATABASE SCOPED CREDENTIAL
DEFAULT
ENDPOINT
EVENT NOTIFICATION
EVENT SESSION
EXTERNAL DATA SOURCE
EXTERNAL LIBRARY
EXTERNAL FILE FORMAT
EXTERNAL RESOURCE POOL
EXTERNAL TABLE
EXTERNAL TABLE AS SELECT
FULLTEXT CATALOG
FULLTEXT INDEX
FULLTEXT STOPLIST
FUNCTION
FUNCTION (SQL Data Warehouse)
INDEX
LOGIN
MASTER KEY
MESSAGE TYPE
PARTITION FUNCTION
PARTITION SCHEME
PROCEDURE
QUEUE
REMOTE SERVICE BINDING
REMOTE TABLE AS SELECT (Parallel Data Warehouse)
RESOURCE POOL
ROLE
ROUTE
RULE
SCHEMA
SEARCH PROPERTY LIST
SECURITY POLICY
SELECTIVE XML INDEX
SEQUENCE
SERVER AUDIT
SERVER AUDIT SPECIFICATION
SERVER ROLE
SERVICE
SPATIAL INDEX
STATISTICS
SYMMETRIC KEY
SYNONYM
TABLE
TABLE (Azure SQL Data Warehouse)
TABLE (SQL Graph)
TABLE AS SELECT (Azure SQL Data Warehouse)
TABLE IDENTITY (Property)
TRIGGER
TYPE
USER
VIEW
WORKLOAD GROUP
XML INDEX
XML INDEX (Selective XML Indexes)
XML SCHEMA COLLECTION
Collations
COLLATE clause
SQL Server Collation Name
Windows Collation Name
Collation Precedence
DELETE
DISABLE TRIGGER
DROP
AGGREGATE
APPLICATION ROLE
ASSEMBLY
ASYMMETRIC KEY
AVAILABILITY GROUP
BROKER PRIORITY
CERTIFICATE
COLUMN ENCRYPTION KEY
COLUMN MASTER KEY
CONTRACT
CREDENTIAL
CRYPTOGRAPHIC PROVIDER
DATABASE
DATABASE AUDIT SPECIFICATION
DATABASE ENCRYPTION KEY
DATABASE SCOPED CREDENTIAL
DEFAULT
ENDPOINT
EXTERNAL DATA SOURCE
EXTERNAL FILE FORMAT
EXTERNAL LIBRARY
EXTERNAL RESOURCE POOL
EXTERNAL TABLE
EVENT NOTIFICATION
EVENT SESSION
FULLTEXT CATALOG
FULLTEXT INDEX
FULLTEXT STOPLIST
FUNCTION
INDEX
INDEX (Selective XML Indexes)
LOGIN
MASTER KEY
MESSAGE TYPE
PARTITION FUNCTION
PARTITION SCHEME
PROCEDURE
QUEUE
REMOTE SERVICE BINDING
RESOURCE POOL
ROLE
ROUTE
RULE
SCHEMA
SEARCH PROPERTY LIST
SECURITY POLICY
SENSITIVITY CLASSIFICATION
SEQUENCE
SERVER AUDIT
SERVER AUDIT SPECIFICATION
SERVER ROLE
SERVICE
SIGNATURE
STATISTICS
SYMMETRIC KEY
SYNONYM
TABLE
TRIGGER
TYPE
USER
VIEW
WORKLOAD GROUP
XML SCHEMA COLLECTION
ENABLE TRIGGER
INSERT
INSERT (SQL Graph)
MERGE
RENAME
Permissions
ADD SIGNATURE
CLOSE MASTER KEY
CLOSE SYMMETRIC KEY
DENY
DENY Assembly Permissions
DENY Asymmetric Key Permissions
DENY Availability Group Permissions
DENY Certificate Permissions
DENY Database Permissions
DENY Database Principal Permissions
DENY Database Scoped Credential
DENY Endpoint Permissions
DENY Full-Text Permissions
DENY Object Permissions
DENY Schema Permissions
DENY Search Property List Permissions
DENY Server Permissions
DENY Server Principal Permissions
DENY Service Broker Permissions
DENY Symmetric Key Permissions
DENY System Object Permissions
DENY Type Permissions
DENY XML Schema Collection Permissions
EXECUTE AS
EXECUTE AS Clause
GRANT
GRANT Assembly Permissions
GRANT Asymmetric Key Permissions
GRANT Availability Group Permissions
GRANT Certificate Permissions
GRANT Database Permissions
GRANT Database Principal Permissions
GRANT Database Scoped Credential
GRANT Endpoint Permissions
GRANT Full-Text Permissions
GRANT Object Permissions
GRANT Schema Permissions
GRANT Search Property List Permissions
GRANT Server Permissions
GRANT Server Principal Permissions
GRANT Service Broker Permissions
GRANT Symmetric Key Permissions
GRANT System Object Permissions
GRANT Type Permissions
GRANT XML Schema Collection Permissions
OPEN MASTER KEY
OPEN SYMMETRIC KEY
Permissions: GRANT, DENY, REVOKE (Azure SQL Data Warehouse, Parallel Data Warehouse)
REVERT
REVOKE
REVOKE Assembly Permissions
REVOKE Asymmetric Key Permissions
REVOKE Availability Group Permissions
REVOKE Certificate Permissions
REVOKE Database Permissions
REVOKE Database Principal Permissions
REVOKE Database Scoped Credential
REVOKE Endpoint Permissions
REVOKE Full-Text Permissions
REVOKE Object Permissions
REVOKE Schema Permissions
REVOKE Search Property List Permissions
REVOKE Server Permissions
REVOKE Server Principal Permissions
REVOKE Service Broker Permissions
REVOKE Symmetric Key Permissions
REVOKE System Object Permissions
REVOKE Type Permissions
REVOKE XML Schema Collection Permissions
SETUSER
Service Broker
BEGIN CONVERSATION TIMER
BEGIN DIALOG CONVERSATION
END CONVERSATION
GET CONVERSATION GROUP
GET_TRANSMISSION_STATUS
MOVE CONVERSATION
RECEIVE
SEND
SET
Overview
ANSI_DEFAULTS
ANSI_NULL_DFLT_OFF
ANSI_NULL_DFLT_ON
ANSI_NULLS
ANSI_PADDING
ANSI_WARNINGS
ARITHABORT
ARITHIGNORE
CONCAT_NULL_YIELDS_NULL
CONTEXT_INFO
CURSOR_CLOSE_ON_COMMIT
DATEFIRST
DATEFORMAT
DEADLOCK_PRIORITY
FIPS_FLAGGER
FMTONLY
FORCEPLAN
IDENTITY_INSERT
IMPLICIT_TRANSACTIONS
LANGUAGE
LOCK_TIMEOUT
NOCOUNT
NOEXEC
NUMERIC_ROUNDABORT
OFFSETS
PARSEONLY
QUERY_GOVERNOR_COST_LIMIT
QUOTED_IDENTIFIER
REMOTE_PROC_TRANSACTIONS
ROWCOUNT
SHOWPLAN_ALL
SHOWPLAN_TEXT
SHOWPLAN_XML
STATISTICS IO
STATISTICS PROFILE
STATISTICS TIME
STATISTICS XML
TEXTSIZE
TRANSACTION ISOLATION LEVEL
XACT_ABORT
TRUNCATE TABLE
UPDATE STATISTICS
Transact-SQL statements
10/1/2018 • 2 minutes to read

APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
This reference topic summarizes the categories of statements for use with Transact-SQL (T-SQL). You can find all
of the statements listed in the contents above.

Backup and restore


The backup and restore statements provide ways to create backups and restore from backups. For more
information, see the Backup and restore overview.
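
As a minimal sketch of the pattern, a full database backup followed by verification might look like the following (the database name and file path are hypothetical, not taken from this reference):

```sql
-- Back up a database to a file (names and path are examples only)
BACKUP DATABASE SalesDb
TO DISK = 'C:\Backups\SalesDb.bak'
WITH INIT, NAME = 'SalesDb full backup';

-- Check that the backup set is readable without actually restoring it
RESTORE VERIFYONLY FROM DISK = 'C:\Backups\SalesDb.bak';
```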

Data Definition Language


Data Definition Language (DDL) statements define data structures. Use these statements to create, alter, or drop
data structures in a database.
ALTER
Collations
CREATE
DROP
DISABLE TRIGGER
ENABLE TRIGGER
RENAME
UPDATE STATISTICS

Data Manipulation Language


Data Manipulation Language (DML) statements affect the information stored in the database. Use these statements to
insert, update, and delete rows in the database.
BULK INSERT
DELETE
INSERT
UPDATE
MERGE
TRUNCATE TABLE
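
A brief sketch of the core DML statements against a hypothetical table (the table and columns are illustrative only):

```sql
-- dbo.orders is a hypothetical table used only for illustration
INSERT INTO dbo.orders (order_id, status) VALUES (1, 'open');
UPDATE dbo.orders SET status = 'shipped' WHERE order_id = 1;
DELETE FROM dbo.orders WHERE order_id = 1;
TRUNCATE TABLE dbo.orders;  -- removes all remaining rows with minimal logging
```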

Permissions statements
Permissions statements determine which users and logins can access data and perform operations. For more
information about authentication and access, see the Security center.

Service Broker statements


Service Broker is a feature that provides native support for messaging and queuing applications. For more
information, see Service Broker.
Session settings
SET statements determine how the current session handles run-time settings. For an overview, see SET statements.
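
For instance, a session might combine several SET options at the start of a batch (the values shown are arbitrary examples):

```sql
SET NOCOUNT ON;                    -- suppress the "rows affected" messages
SET LOCK_TIMEOUT 5000;             -- wait at most 5000 ms for a lock before erroring
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
```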
ADD SENSITIVITY CLASSIFICATION (Transact-SQL)
1/2/2019 • 2 minutes to read

APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel Data
Warehouse
Adds metadata about the sensitivity classification to one or more database columns. The classification can include a
sensitivity label and an information type.
Classifying sensitive data in your database environment helps achieve extended visibility and better protection.
For more information, see Getting started with SQL Information Protection.

Syntax
ADD SENSITIVITY CLASSIFICATION TO
<object_name> [, ...n ]
WITH ( <sensitivity_label_option> [, ...n ] )

<object_name> ::=
{
[schema_name.]table_name.column_name
}

<sensitivity_label_option> ::=
{
LABEL = string |
LABEL_ID = guidOrString |
INFORMATION_TYPE = string |
INFORMATION_TYPE_ID = guidOrString
}

Arguments
object_name ([schema_name.]table_name.column_name)
Is the name of the database column to be classified. Currently only column classification is supported.
schema_name (optional) - Is the name of the schema to which the classified column belongs.
table_name - Is the name of the table to which the classified column belongs.
column_name - Is the name of the column being classified.
LABEL
Is the human readable name of the sensitivity label. Sensitivity labels represent the sensitivity of the data stored in
the database column.
LABEL_ID
Is an identifier associated with the sensitivity label. This is often used by centralized information protection
platforms to uniquely identify labels in the system.
INFORMATION_TYPE
Is the human readable name of the information type. Information types are used to describe the type of data being
stored in the database column.
INFORMATION_TYPE_ID
Is an identifier associated with the information type. This is often used by centralized information protection
platforms to uniquely identify information types in the system.

Remarks
Only one classification can be added to a single object. Adding a classification to an object that is already
classified will overwrite the existing classification.
Multiple objects can be classified using a single ADD SENSITIVITY CLASSIFICATION statement.
The system view sys.sensitivity_classifications can be used to retrieve the sensitivity classification information
for a database.
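
As a sketch, the classification metadata can be joined back to column names; this assumes column-level classifications, where major_id and minor_id map to the object and column IDs:

```sql
SELECT s.name AS schema_name,
       t.name AS table_name,
       c.name AS column_name,
       sc.label,
       sc.information_type
FROM sys.sensitivity_classifications AS sc
JOIN sys.columns AS c
    ON sc.major_id = c.object_id
   AND sc.minor_id = c.column_id
JOIN sys.tables AS t
    ON c.object_id = t.object_id
JOIN sys.schemas AS s
    ON t.schema_id = s.schema_id;
```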

Permissions
Requires the ALTER ANY SENSITIVITY CLASSIFICATION permission. ALTER ANY SENSITIVITY
CLASSIFICATION is implied by the database permission ALTER, or by the server permission CONTROL SERVER.

Examples
A. Classifying two columns
The following example classifies the columns dbo.sales.price and dbo.sales.discount with the sensitivity label
Highly Confidential and the Information Type Financial.

ADD SENSITIVITY CLASSIFICATION TO
dbo.sales.price, dbo.sales.discount
WITH ( LABEL='Highly Confidential', INFORMATION_TYPE='Financial' )

B. Classifying only a label


The following example classifies the column dbo.customer.comments with the label Confidential and label ID
643f7acd-776a-438d-890c-79c3f2a520d6. Information type isn't classified for this column.

ADD SENSITIVITY CLASSIFICATION TO
dbo.customer.comments
WITH ( LABEL='Confidential', LABEL_ID='643f7acd-776a-438d-890c-79c3f2a520d6' )

See Also
DROP SENSITIVITY CLASSIFICATION (Transact-SQL)
sys.sensitivity_classifications (Transact-SQL)
Permissions (Database Engine)
Getting started with SQL Information Protection
ALTER APPLICATION ROLE (Transact-SQL)
10/1/2018 • 2 minutes to read

APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Changes the name, password, or default schema of an application role.
Transact-SQL Syntax Conventions

Syntax
ALTER APPLICATION ROLE application_role_name
WITH <set_item> [ ,...n ]

<set_item> ::=
NAME = new_application_role_name
| PASSWORD = 'password'
| DEFAULT_SCHEMA = schema_name

Arguments
application_role_name
Is the name of the application role to be modified.
NAME =new_application_role_name
Specifies the new name of the application role. This name must not already be used to refer to any principal in the
database.
PASSWORD ='password'
Specifies the password for the application role. password must meet the Windows password policy requirements
of the computer that is running the instance of SQL Server. You should always use strong passwords.
DEFAULT_SCHEMA =schema_name
Specifies the first schema that will be searched by the server when it resolves the names of objects. schema_name
can be a schema that does not exist in the database.

Remarks
If the new application role name already exists in the database, the statement fails. When the name, password,
or default schema of an application role is changed, the ID associated with the role is not changed.

IMPORTANT
Password expiration policy is not applied to application role passwords. For this reason, take extra care in selecting strong
passwords. Applications that invoke application roles must store their passwords.

Application roles are visible in the sys.database_principals catalog view.
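
For example, the application roles in the current database can be listed with a query along these lines (type 'A' identifies an application role in sys.database_principals):

```sql
SELECT name, default_schema_name, create_date
FROM sys.database_principals
WHERE type = 'A';  -- 'A' = application role
```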


Caution

In SQL Server 2005 (9.x), the behavior of schemas changed from the behavior in earlier versions of SQL Server.
Code that assumes that schemas are equivalent to database users may not return correct results. Old catalog
views, including sysobjects, should not be used in a database in which any of the following DDL statements has
ever been used: CREATE SCHEMA, ALTER SCHEMA, DROP SCHEMA, CREATE USER, ALTER USER, DROP
USER, CREATE ROLE, ALTER ROLE, DROP ROLE, CREATE APPROLE, ALTER APPROLE, DROP APPROLE,
ALTER AUTHORIZATION. In a database in which any of these statements has ever been used, you must use the
new catalog views. The new catalog views take into account the separation of principals and schemas that is
introduced in SQL Server 2005 (9.x). For more information about catalog views, see Catalog Views (Transact-SQL).

Permissions
Requires ALTER ANY APPLICATION ROLE permission on the database. To change the default schema, the user
also needs ALTER permission on the application role. An application role can alter its own default schema, but not
its name or password.

Examples
A. Changing the name of application role
The following example changes the name of the application role weekly_receipts to receipts_ledger .

USE AdventureWorks2012;
CREATE APPLICATION ROLE weekly_receipts
WITH PASSWORD = '987Gbv8$76sPYY5m23' ,
DEFAULT_SCHEMA = Sales;
GO
ALTER APPLICATION ROLE weekly_receipts
WITH NAME = receipts_ledger;
GO

B. Changing the password of application role


The following example changes the password of the application role receipts_ledger .

ALTER APPLICATION ROLE receipts_ledger
WITH PASSWORD = '897yUUbv867y$200nk2i';
GO

C. Changing the name, password, and default schema


The following example changes the name, password, and default schema of the application role receipts_ledger
all at the same time.

ALTER APPLICATION ROLE receipts_ledger
WITH NAME = weekly_ledger,
PASSWORD = '897yUUbv77bsrEE00nk2i',
DEFAULT_SCHEMA = Production;
GO

See Also
Application Roles
CREATE APPLICATION ROLE (Transact-SQL)
DROP APPLICATION ROLE (Transact-SQL)
EVENTDATA (Transact-SQL)
ALTER ASSEMBLY (Transact-SQL)
12/10/2018 • 8 minutes to read

APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance only) Azure
SQL Data Warehouse Parallel Data Warehouse
Alters an assembly by modifying the SQL Server catalog properties of the assembly. ALTER ASSEMBLY refreshes
the assembly to the latest copy of the Microsoft .NET Framework modules that hold its implementation, and adds or
removes files associated with it. Assemblies are created by using CREATE ASSEMBLY.

WARNING
CLR uses Code Access Security (CAS) in the .NET Framework, which is no longer supported as a security boundary. A CLR
assembly created with PERMISSION_SET = SAFE may be able to access external system resources, call unmanaged code, and
acquire sysadmin privileges. Beginning with SQL Server 2017 (14.x), an sp_configure option called clr strict security
is introduced to enhance the security of CLR assemblies. clr strict security is enabled by default, and treats SAFE and
EXTERNAL_ACCESS assemblies as if they were marked UNSAFE . The clr strict security option can be disabled for
backward compatibility, but this is not recommended. Microsoft recommends that all assemblies be signed by a certificate or
asymmetric key with a corresponding login that has been granted UNSAFE ASSEMBLY permission in the master database. For
more information, see CLR strict security.
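
The current value of the option can be inspected with sp_configure; as a sketch (it is an advanced option, so it must be made visible first):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'clr strict security';  -- a run_value of 1 means enabled
```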

Transact-SQL Syntax Conventions

Syntax
ALTER ASSEMBLY assembly_name
[ FROM <client_assembly_specifier> | <assembly_bits> ]
[ WITH <assembly_option> [ ,...n ] ]
[ DROP FILE { file_name [ ,...n ] | ALL } ]
[ ADD FILE FROM
{
client_file_specifier [ AS file_name ]
| file_bits AS file_name
} [,...n ]
] [ ; ]
<client_assembly_specifier> :: =
'\\computer_name\share-name\[path\]manifest_file_name'
| '[local_path\]manifest_file_name'

<assembly_bits> :: =
{ varbinary_literal | varbinary_expression }

<assembly_option> :: =
PERMISSION_SET = { SAFE | EXTERNAL_ACCESS | UNSAFE }
| VISIBILITY = { ON | OFF }
| UNCHECKED DATA

Arguments
assembly_name
Is the name of the assembly you want to modify. assembly_name must already exist in the database.
FROM <client_assembly_specifier> | <assembly_bits>
Updates an assembly to the latest copy of the .NET Framework modules that hold its implementation. This option
can only be used if there are no associated files with the specified assembly.
<client_assembly_specifier> specifies the network or local location where the assembly being refreshed is located.
The network location includes the computer name, the share name and a path within that share.
manifest_file_name specifies the name of the file that contains the manifest of the assembly.

IMPORTANT
Azure SQL Database does not support referencing a file.

<assembly_bits> is the binary value for the assembly.


Separate ALTER ASSEMBLY statements must be issued for any dependent assemblies that also require updating.
PERMISSION_SET = { SAFE | EXTERNAL_ACCESS | UNSAFE }

IMPORTANT
The PERMISSION_SET option is affected by the clr strict security option, described in the opening warning. When
clr strict security is enabled, all assemblies are treated as UNSAFE .
Specifies the .NET Framework code access permission set property of the assembly. For more information about this
property, see CREATE ASSEMBLY (Transact-SQL).

NOTE
The EXTERNAL_ACCESS and UNSAFE options are not available in a contained database.

VISIBILITY = { ON | OFF }
Indicates whether the assembly is visible for creating common language runtime (CLR ) functions, stored
procedures, triggers, user-defined types, and user-defined aggregate functions against it. If set to OFF, the
assembly is intended to be called only by other assemblies. If there are existing CLR database objects already
created against the assembly, the visibility of the assembly cannot be changed. Any assemblies referenced by
assembly_name are uploaded as not visible by default.
UNCHECKED DATA
By default, ALTER ASSEMBLY fails if it must verify the consistency of individual table rows. This option allows
postponing the checks until a later time by using DBCC CHECKTABLE. If specified, SQL Server executes the
ALTER ASSEMBLY statement even if there are tables in the database that contain the following:
Persisted computed columns that either directly or indirectly reference methods in the assembly, through
Transact-SQL functions or methods.
CHECK constraints that directly or indirectly reference methods in the assembly.
Columns of a CLR user-defined type that depend on the assembly, and the type implements a
UserDefined (non-Native) serialization format.
Columns of a CLR user-defined type that reference views created by using WITH SCHEMABINDING.
If any CHECK constraints are present, they are disabled and marked untrusted. Any tables containing
columns depending on the assembly are marked as containing unchecked data until those tables are
explicitly checked.
Only members of the db_owner and db_ddladmin fixed database roles can specify this option.
Specifying this option requires the ALTER ANY SCHEMA permission.
For more information, see Implementing Assemblies.
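
When ALTER ASSEMBLY has been run with UNCHECKED DATA, the postponed validation can later be performed per table; the table name below is hypothetical:

```sql
-- Re-validate a table that was marked as containing unchecked data
DBCC CHECKTABLE ('dbo.OrdersWithClrColumn');
```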
[ DROP FILE { file_name[ ,...n] | ALL } ]
Removes the file name associated with the assembly, or all files associated with the assembly, from the
database. If used with ADD FILE that follows, DROP FILE executes first. This lets you replace a file with
the same file name.

NOTE
This option is not available in a contained database or Azure SQL Database.

[ ADD FILE FROM { client_file_specifier [ AS file_name ] | file_bits AS file_name } ]


Uploads a file to be associated with the assembly, such as source code, debug files, or other related information,
to the server and makes it visible in the sys.assembly_files catalog view. client_file_specifier specifies the location
from which to upload the file. file_bits can be used instead to specify the list of binary values that make up the file.
file_name specifies the name under which the file should be stored in the instance of SQL Server. file_name must
be specified if file_bits is specified, and is optional if client_file_specifier is specified. If file_name is not specified, the
file_name part of client_file_specifier is used as file_name.

NOTE
This option is not available in a contained database or Azure SQL Database.

Remarks
ALTER ASSEMBLY does not disrupt currently running sessions that are running code in the assembly being
modified. Current sessions complete execution by using the unaltered bits of the assembly.
If the FROM clause is specified, ALTER ASSEMBLY updates the assembly with respect to the latest copies of the
modules provided. Because there might be CLR functions, stored procedures, triggers, data types, and user-
defined aggregate functions in the instance of SQL Server that are already defined against the assembly, the
ALTER ASSEMBLY statement rebinds them to the latest implementation of the assembly. To accomplish this
rebinding, the methods that map to CLR functions, stored procedures, and triggers must still exist in the modified
assembly with the same signatures. The classes that implement CLR user-defined types and user-defined
aggregate functions must still satisfy the requirements for being a user-defined type or aggregate.
Caution

If WITH UNCHECKED DATA is not specified, SQL Server tries to prevent ALTER ASSEMBLY from executing if the
new assembly version affects existing data in tables, indexes, or other persistent sites. However, SQL Server does
not guarantee that computed columns, indexes, indexed views or expressions will be consistent with the underlying
routines and types when the CLR assembly is updated. Use caution when you execute ALTER ASSEMBLY to make
sure that there is not a mismatch between the result of an expression and a value based on that expression stored
in the assembly.
ALTER ASSEMBLY changes the assembly version. The culture and public key token of the assembly remain the
same.
The ALTER ASSEMBLY statement cannot be used to change the following:
The signatures of CLR functions, aggregate functions, stored procedures, and triggers in an instance of SQL
Server that reference the assembly. ALTER ASSEMBLY fails when SQL Server cannot rebind .NET
Framework database objects in SQL Server with the new version of the assembly.
The signatures of methods in the assembly that are called from other assemblies.
The list of assemblies that depend on the assembly, as referenced in the DependentList property of the
assembly.
The indexability of a method, unless there are no indexes or persisted computed columns depending on that
method, either directly or indirectly.
The FillRow method name attribute for CLR table-valued functions.
The Accumulate and Terminate method signature for user-defined aggregates.
System assemblies.
Assembly ownership. Use ALTER AUTHORIZATION (Transact-SQL ) instead.
Additionally, for assemblies that implement user-defined types, ALTER ASSEMBLY can be used for making
only the following changes:
Modifying public methods of the user-defined type class, as long as signatures or attributes are not
changed.
Adding new public methods.
Modifying private methods in any way.
Fields contained within a native-serialized user-defined type, including data members or base classes,
cannot be changed by using ALTER ASSEMBLY. All other changes are unsupported.
If ADD FILE FROM is not specified, ALTER ASSEMBLY drops any files associated with the assembly.
If ALTER ASSEMBLY is executed without the UNCHECKED DATA clause, checks are performed to verify that
the new assembly version does not affect existing data in tables. Depending on the amount of data that
needs to be checked, this may affect performance.

Permissions
Requires ALTER permission on the assembly. Additional requirements are as follows:
To alter an assembly whose existing permission set is EXTERNAL_ACCESS requires EXTERNAL ACCESS
ASSEMBLY permission on the server.
To alter an assembly whose existing permission set is UNSAFE requires UNSAFE ASSEMBLY permission
on the server.
To change the permission set of an assembly to EXTERNAL_ACCESS requires EXTERNAL ACCESS
ASSEMBLY permission on the server.
To change the permission set of an assembly to UNSAFE, requires UNSAFE ASSEMBLY permission on
the server.
Specifying WITH UNCHECKED DATA, requires ALTER ANY SCHEMA permission.
Permissions with CLR strict security
The following permissions are required to alter a CLR assembly when CLR strict security is enabled:
The user must have the ALTER ASSEMBLY permission
And one of the following conditions must also be true:
The assembly is signed with a certificate or asymmetric key that has a corresponding login with the
UNSAFE ASSEMBLY permission on the server. Signing the assembly is recommended.
The database has the TRUSTWORTHY property set to ON , and the database is owned by a login that has the
UNSAFE ASSEMBLY permission on the server. This option is not recommended.
For more information about assembly permission sets, see Designing Assemblies.

Examples
A. Refreshing an assembly
The following example updates assembly ComplexNumber to the latest copy of the .NET Framework modules that
hold its implementation.

NOTE
Assembly ComplexNumber can be created by running the UserDefinedDataType sample scripts. For more information, see
User Defined Type.

ALTER ASSEMBLY ComplexNumber
FROM 'C:\Program Files\Microsoft SQL Server\130\Tools\Samples\1033\Engine\Programmability\CLR\UserDefinedDataType\CS\ComplexNumber\obj\Debug\ComplexNumber.dll'

IMPORTANT
Azure SQL Database does not support referencing a file.

B. Adding a file to associate with an assembly


The following example uploads the source code file Class1.cs to be associated with assembly MyClass . This
example assumes assembly MyClass is already created in the database.

ALTER ASSEMBLY MyClass
ADD FILE FROM 'C:\MyClassProject\Class1.cs';

IMPORTANT
Azure SQL Database does not support referencing a file.

C. Changing the permissions of an assembly


The following example changes the permission set of assembly ComplexNumber from SAFE to EXTERNAL_ACCESS.

ALTER ASSEMBLY ComplexNumber WITH PERMISSION_SET = EXTERNAL_ACCESS;

See Also
CREATE ASSEMBLY (Transact-SQL)
DROP ASSEMBLY (Transact-SQL)
EVENTDATA (Transact-SQL)
ALTER ASYMMETRIC KEY (Transact-SQL)
1/2/2019 • 2 minutes to read

APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Changes the properties of an asymmetric key.
Transact-SQL Syntax Conventions

Syntax
ALTER ASYMMETRIC KEY Asym_Key_Name <alter_option>

<alter_option> ::=
<password_change_option>
| REMOVE PRIVATE KEY

<password_change_option> ::=
WITH PRIVATE KEY ( <password_option> [ , <password_option> ] )

<password_option> ::=
ENCRYPTION BY PASSWORD = 'strongPassword'
| DECRYPTION BY PASSWORD = 'oldPassword'

Arguments
Asym_Key_Name
Is the name by which the asymmetric key is known in the database.
REMOVE PRIVATE KEY
Removes the private key from the asymmetric key. The public key is not removed.
WITH PRIVATE KEY
Changes the protection of the private key.
ENCRYPTION BY PASSWORD ='strongPassword'
Specifies a new password for protecting the private key. password must meet the Windows password policy
requirements of the computer that is running the instance of SQL Server. If this option is omitted, the private key
will be encrypted by the database master key.
DECRYPTION BY PASSWORD ='oldPassword'
Specifies the old password, with which the private key is currently protected. Is not required if the private key is
encrypted with the database master key.

Remarks
If there is no database master key, the ENCRYPTION BY PASSWORD option is required, and the operation fails
if no password is supplied. For information about how to create a database master key, see CREATE MASTER KEY
(Transact-SQL).
You can use ALTER ASYMMETRIC KEY to change the protection of the private key by specifying PRIVATE KEY
options as shown in the following table.
CHANGE PROTECTION FROM ENCRYPTION BY PASSWORD DECRYPTION BY PASSWORD

Old password to new password Required Required

Password to master key Omit Required

Master key to password Required Omit

The database master key must be opened before it can be used to protect a private key. For more information, see
OPEN MASTER KEY (Transact-SQL ).
To change the ownership of an asymmetric key, use ALTER AUTHORIZATION.
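
A minimal sketch of such an ownership transfer, using the key name from the examples in this article and a hypothetical principal name:

```sql
-- SalesAdmin is a hypothetical database principal
ALTER AUTHORIZATION ON ASYMMETRIC KEY::PacificSales09 TO SalesAdmin;
```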

Permissions
Requires CONTROL permission on the asymmetric key if the private key is being removed.

Examples
A. Changing the password of the private key
The following example changes the password used to protect the private key of asymmetric key PacificSales09 .
The new password will be <enterStrongPasswordHere> .

ALTER ASYMMETRIC KEY PacificSales09
WITH PRIVATE KEY (
DECRYPTION BY PASSWORD = '<oldPassword>',
ENCRYPTION BY PASSWORD = '<enterStrongPasswordHere>');
GO

B. Removing the private key from an asymmetric key


The following example removes the private key from PacificSales19 , leaving only the public key.

ALTER ASYMMETRIC KEY PacificSales19 REMOVE PRIVATE KEY;
GO

C. Removing password protection from a private key


The following example removes the password protection from a private key and protects it with the database
master key.

OPEN MASTER KEY;
ALTER ASYMMETRIC KEY PacificSales09 WITH PRIVATE KEY (
DECRYPTION BY PASSWORD = '<enterStrongPasswordHere>' );
GO

See Also
CREATE ASYMMETRIC KEY (Transact-SQL)
DROP ASYMMETRIC KEY (Transact-SQL)
SQL Server and Database Encryption Keys (Database Engine)
Encryption Hierarchy
CREATE MASTER KEY (Transact-SQL)
OPEN MASTER KEY (Transact-SQL)
Extensible Key Management (EKM)
ALTER AUTHORIZATION (Transact-SQL)
10/1/2018 • 11 minutes to read

APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Changes the ownership of a securable.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server
ALTER AUTHORIZATION
ON [ <class_type>:: ] entity_name
TO { principal_name | SCHEMA OWNER }
[;]

<class_type> ::=
{
OBJECT | ASSEMBLY | ASYMMETRIC KEY | AVAILABILITY GROUP | CERTIFICATE
| CONTRACT | TYPE | DATABASE | ENDPOINT | FULLTEXT CATALOG
| FULLTEXT STOPLIST | MESSAGE TYPE | REMOTE SERVICE BINDING
| ROLE | ROUTE | SCHEMA | SEARCH PROPERTY LIST | SERVER ROLE
| SERVICE | SYMMETRIC KEY | XML SCHEMA COLLECTION
}

-- Syntax for SQL Database

ALTER AUTHORIZATION
ON [ <class_type>:: ] entity_name
TO { principal_name | SCHEMA OWNER }
[;]

<class_type> ::=
{
OBJECT | ASSEMBLY | ASYMMETRIC KEY | CERTIFICATE
| TYPE | DATABASE | FULLTEXT CATALOG
| FULLTEXT STOPLIST
| ROLE | SCHEMA | SEARCH PROPERTY LIST
| SYMMETRIC KEY | XML SCHEMA COLLECTION
}
-- Syntax for Azure SQL Data Warehouse

ALTER AUTHORIZATION ON
[ <class_type> :: ] <entity_name>
TO { principal_name | SCHEMA OWNER }
[;]

<class_type> ::= {
SCHEMA
| OBJECT
}

<entity_name> ::=
{
schema_name
| [ schema_name. ] object_name
}

-- Syntax for Parallel Data Warehouse

ALTER AUTHORIZATION ON
[ <class_type> :: ] <entity_name>
TO { principal_name | SCHEMA OWNER }
[;]

<class_type> ::= {
DATABASE
| SCHEMA
| OBJECT
}

<entity_name> ::=
{
database_name
| schema_name
| [ schema_name. ] object_name
}

Arguments
<class_type> Is the securable class of the entity for which the owner is being changed. OBJECT is the default.

OBJECT APPLIES TO: SQL Server 2008 through SQL Server 2017,
Azure SQL Database, Azure SQL Data Warehouse, Parallel
Data Warehouse.

ASSEMBLY APPLIES TO: SQL Server 2008 through SQL Server 2017,
Azure SQL Database.

ASYMMETRIC KEY APPLIES TO: SQL Server 2008 through SQL Server 2017,
Azure SQL Database.

AVAILABILITY GROUP APPLIES TO: SQL Server 2012 through SQL Server 2017.

CERTIFICATE APPLIES TO: SQL Server 2008 through SQL Server 2017,
Azure SQL Database.

CONTRACT APPLIES TO: SQL Server 2008 through SQL Server 2017.
DATABASE APPLIES TO: SQL Server 2008 through SQL Server 2017,
Azure SQL Database. For more information, see the ALTER
AUTHORIZATION for databases section below.

ENDPOINT APPLIES TO: SQL Server 2008 through SQL Server 2017.

FULLTEXT CATALOG APPLIES TO: SQL Server 2008 through SQL Server 2017,
Azure SQL Database.

FULLTEXT STOPLIST APPLIES TO: SQL Server 2008 through SQL Server 2017,
Azure SQL Database.

MESSAGE TYPE APPLIES TO: SQL Server 2008 through SQL Server 2017.

REMOTE SERVICE BINDING APPLIES TO: SQL Server 2008 through SQL Server 2017.

ROLE APPLIES TO: SQL Server 2008 through SQL Server 2017,
Azure SQL Database.

ROUTE APPLIES TO: SQL Server 2008 through SQL Server 2017.

SCHEMA APPLIES TO: SQL Server 2008 through SQL Server 2017,
Azure SQL Database, Azure SQL Data Warehouse, Parallel
Data Warehouse.

SEARCH PROPERTY LIST APPLIES TO: SQL Server 2012 (11.x) through SQL Server
2017, Azure SQL Database.

SERVER ROLE APPLIES TO: SQL Server 2008 through SQL Server 2017.

SERVICE APPLIES TO: SQL Server 2008 through SQL Server 2017.

SYMMETRIC KEY APPLIES TO: SQL Server 2008 through SQL Server 2017,
Azure SQL Database.

TYPE APPLIES TO: SQL Server 2008 through SQL Server 2017,
Azure SQL Database.

XML SCHEMA COLLECTION APPLIES TO: SQL Server 2008 through SQL Server 2017,
Azure SQL Database.

entity_name
Is the name of the entity.
principal_name | SCHEMA OWNER
Name of the security principal that will own the entity. Database objects must be owned by a database principal:
a database user or role. Server objects (such as databases) must be owned by a server principal (a login). Specify
SCHEMA OWNER as the principal_name to indicate that the object should be owned by the principal that
owns the schema of the object.

Remarks
ALTER AUTHORIZATION can be used to change the ownership of any entity that has an owner. Ownership of
database-contained entities can be transferred to any database-level principal. Ownership of server-level entities
can be transferred only to server-level principals.

IMPORTANT
Beginning with SQL Server 2005 (9.x), a user can own an OBJECT or TYPE that is contained by a schema owned by
another database user. This is a change of behavior from earlier versions of SQL Server. For more information, see
OBJECTPROPERTY (Transact-SQL) and TYPEPROPERTY (Transact-SQL).

Ownership of the following schema-contained entities of type "object" can be transferred: tables, views,
functions, procedures, queues, and synonyms.
Ownership of the following entities cannot be transferred: linked servers, statistics, constraints, rules, defaults,
triggers, Service Broker queues, credentials, partition functions, partition schemes, database master keys, service
master key, and event notifications.
Ownership of members of the following securable classes cannot be transferred: server, login, user, application
role, and column.
The SCHEMA OWNER option is only valid when you are transferring ownership of a schema-contained entity.
SCHEMA OWNER will transfer ownership of the entity to the owner of the schema in which it resides. Only
entities of class OBJECT, TYPE, or XML SCHEMA COLLECTION are schema-contained.
If the target entity is not a database and the entity is being transferred to a new owner, all permissions on the
target will be dropped.
Caution

In SQL Server 2005 (9.x), the behavior of schemas changed from the behavior in earlier versions of SQL Server.
Code that assumes that schemas are equivalent to database users may not return correct results. Old catalog
views, including sysobjects, should not be used in a database in which any of the following DDL statements has
ever been used: CREATE SCHEMA, ALTER SCHEMA, DROP SCHEMA, CREATE USER, ALTER USER, DROP
USER, CREATE ROLE, ALTER ROLE, DROP ROLE, CREATE APPROLE, ALTER APPROLE, DROP APPROLE,
ALTER AUTHORIZATION. In a database in which any of these statements has ever been used, you must use the
new catalog views. The new catalog views take into account the separation of principals and schemas that was
introduced in SQL Server 2005 (9.x). For more information about catalog views, see Catalog Views (Transact-
SQL ).
Also, note the following:

IMPORTANT
The only reliable way to find the owner of an object is to query the sys.objects catalog view. The only reliable way to find
the owner of a type is to use the TYPEPROPERTY function.

Special Cases and Conditions


The following table lists special cases, exceptions, and conditions that apply to altering authorization.

CLASS                                      CONDITION

OBJECT                                     Cannot change ownership of triggers, constraints, rules,
                                           defaults, statistics, system objects, queues, indexed views,
                                           or tables with indexed views.

SCHEMA                                     When ownership is transferred, permissions on schema-
                                           contained objects that do not have explicit owners will be
                                           dropped. Cannot change the owner of sys, dbo, or
                                           information_schema.

TYPE                                       Cannot change ownership of a TYPE that belongs to sys or
                                           information_schema.

CONTRACT, MESSAGE TYPE, or SERVICE         Cannot change ownership of system entities.

SYMMETRIC KEY                              Cannot change ownership of global temporary keys.

CERTIFICATE or ASYMMETRIC KEY              Cannot transfer ownership of these entities to a role or
                                           group.

ENDPOINT                                   The principal must be a login.

ALTER AUTHORIZATION for databases


APPLIES TO: SQL Server 2017, Azure SQL Database.
For SQL Server:
Requirements for the new owner:
The new owner principal must be one of the following:
A SQL Server authentication login.
A Windows authentication login representing a Windows user (not a group).
A Windows user that authenticates through a Windows authentication login representing a Windows group.
Requirements for the person executing the ALTER AUTHORIZATION statement:
If you are not a member of the sysadmin fixed server role, you must have at least TAKE OWNERSHIP
permission on the database, and must have IMPERSONATE permission on the new owner login.
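As a sketch, a member of sysadmin could grant a non-sysadmin principal the two permissions listed above before that principal transfers ownership. All names here (Sales, TransferUser, TransferLogin, NewOwnerLogin) are hypothetical:

```sql
-- Run as a member of sysadmin.
USE Sales;
GRANT TAKE OWNERSHIP TO TransferUser;   -- TAKE OWNERSHIP at database scope

USE master;
GRANT IMPERSONATE ON LOGIN::NewOwnerLogin TO TransferLogin;  -- TransferUser's login

-- TransferLogin can now execute, connected to the Sales database:
-- ALTER AUTHORIZATION ON DATABASE::Sales TO NewOwnerLogin;
```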
For Azure SQL Database:
Requirements for the new owner:
The new owner principal must be one of the following:
A SQL Server authentication login.
A federated user (not a group) present in Azure AD.
A managed user (not a group) or an application present in Azure AD.

NOTE
If the new owner is an Azure Active Directory user, it cannot exist as a user in the database where the new owner will
become the new DBO. Such an Azure AD user must first be removed from the database before executing the ALTER
AUTHORIZATION statement changing the database ownership to the new user. For more information about configuring
Azure Active Directory users with SQL Database, see Connecting to SQL Database or SQL Data Warehouse By Using
Azure Active Directory Authentication.

Requirements for the person executing the ALTER AUTHORIZATION statement:


You must connect to the target database to change the owner of that database.
The following types of accounts can change the owner of a database.
The service-level principal login. (The SQL Azure administrator provisioned when the logical server was
created.)
The Azure Active Directory administrator for the Azure SQL Server.
The current owner of the database.
The following table summarizes the requirements:

EXECUTOR TARGET RESULT

SQL Server Authentication login SQL Server Authentication login Success

SQL Server Authentication login Azure AD user Fail

Azure AD user SQL Server Authentication login Success

Azure AD user Azure AD user Success

To verify an Azure AD owner of the database execute the following Transact-SQL command in a user database
(in this example testdb ).

SELECT CAST(owner_sid AS uniqueidentifier) AS Owner_SID
FROM sys.databases
WHERE name = 'testdb';

The output will be an identifier (such as 6D8B81F6-7C79-444C-8858-4AF896C03C67) that corresponds to the
Azure AD object ID assigned to [email protected]
When a SQL Server authentication login user is the database owner, execute the following statement in the
master database to verify the database owner:

SELECT d.name, d.owner_sid, sl.name
FROM sys.databases AS d
JOIN sys.sql_logins AS sl
    ON d.owner_sid = sl.sid;

Best practice
Instead of using Azure AD users as individual owners of the database, use an Azure AD group as a member of
the db_owner fixed database role. The following steps show how to configure a disabled login as the database
owner, and make an Azure Active Directory group ( mydbogroup ) a member of the db_owner role.
1. Log in to SQL Server as the Azure AD admin, and change the owner of the database to a disabled SQL Server
authentication login. For example, from the user database execute:
ALTER AUTHORIZATION ON database::testdb TO DisabledLogin;
2. Create an Azure AD group that should own the database and add it as a user to the user database. For
example:
CREATE USER [mydbogroup] FROM EXTERNAL PROVIDER;
3. In the user database, add the user representing the Azure AD group to the db_owner fixed database role. For
example:
ALTER ROLE db_owner ADD MEMBER mydbogroup;

Now the mydbogroup members can centrally manage the database as members of the db_owner role.
When members of this group are removed from the Azure AD group, they automatically lose the dbo
permissions for this database.
Similarly, if new members are added to the mydbogroup Azure AD group, they automatically gain the dbo access
for this database.
To check if a specific user has the effective dbo permission, have the user execute the following statement:

SELECT IS_MEMBER ('db_owner');

A return value of 1 indicates the user is a member of the role.

Permissions
Requires TAKE OWNERSHIP permission on the entity. If the new owner is not the user that is executing this
statement, it also requires one of the following: 1) IMPERSONATE permission on the new owner if it is a user or
login; 2) if the new owner is a role, membership in the role or ALTER permission on the role; or 3) if the new
owner is an application role, ALTER permission on the application role.

Examples
A. Transfer ownership of a table
The following example transfers ownership of table Sprockets to user MichikoOsada . The table is located inside
schema Parts .

ALTER AUTHORIZATION ON OBJECT::Parts.Sprockets TO MichikoOsada;
GO

The query could also look like the following:

ALTER AUTHORIZATION ON Parts.Sprockets TO MichikoOsada;
GO

If the object's schema is not included as part of the statement, the Database Engine will look for the object in the
user's default schema. For example:

ALTER AUTHORIZATION ON Sprockets TO MichikoOsada;


ALTER AUTHORIZATION ON OBJECT::Sprockets TO MichikoOsada;

B. Transfer ownership of a view to the schema owner


The following example transfers ownership of the view ProductionView06 to the owner of the schema that contains
it. The view is located inside schema Production .

ALTER AUTHORIZATION ON OBJECT::Production.ProductionView06 TO SCHEMA OWNER;
GO

C. Transfer ownership of a schema to a user


The following example transfers ownership of the schema SeattleProduction11 to user SandraAlayo .

ALTER AUTHORIZATION ON SCHEMA::SeattleProduction11 TO SandraAlayo;
GO
D. Transfer ownership of an endpoint to a SQL Server login
The following example transfers ownership of endpoint CantabSalesServer1 to JaePak . Because the endpoint is
a server-level securable, the endpoint can only be transferred to a server-level principal.
Applies to: SQL Server 2008 through SQL Server 2017.

ALTER AUTHORIZATION ON ENDPOINT::CantabSalesServer1 TO JaePak;
GO

E. Changing the owner of a table


Each of the following examples changes the owner of the Sprockets table in the Parts database to the
database user MichikoOsada .

ALTER AUTHORIZATION ON Sprockets TO MichikoOsada;


ALTER AUTHORIZATION ON dbo.Sprockets TO MichikoOsada;
ALTER AUTHORIZATION ON OBJECT::Sprockets TO MichikoOsada;
ALTER AUTHORIZATION ON OBJECT::dbo.Sprockets TO MichikoOsada;

F. Changing the owner of a database


APPLIES TO: SQL Server 2008 through SQL Server 2017, Parallel Data Warehouse, SQL Database.
The following example changes the owner of the Parts database to the login MichikoOsada .

ALTER AUTHORIZATION ON DATABASE::Parts TO MichikoOsada;

G. Changing the owner of a SQL Database to an Azure AD User


In the following example, an Azure Active Directory administrator for SQL Server in an organization with an
Active Directory named cqclinic.onmicrosoft.com changes the current ownership of the database targetDB
and makes an Azure AD user [email protected] the new database owner using the following
command:

ALTER AUTHORIZATION ON database::targetDB TO [[email protected]];

Note that for Azure AD users the brackets around the user name must be used.

See Also
OBJECTPROPERTY (Transact-SQL)
TYPEPROPERTY (Transact-SQL)
EVENTDATA (Transact-SQL)
ALTER AVAILABILITY GROUP (Transact-SQL)
1/2/2019 • 30 minutes to read

APPLIES TO: SQL Server (starting with 2012) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Alters an existing Always On availability group in SQL Server. Most ALTER AVAILABILITY GROUP arguments
are supported only on the current primary replica. However, the JOIN, FAILOVER, and
FORCE_FAILOVER_ALLOW_DATA_LOSS arguments are supported only on secondary replicas.
Transact-SQL Syntax Conventions

Syntax
ALTER AVAILABILITY GROUP group_name
{
SET ( <set_option_spec> )
| ADD DATABASE database_name
| REMOVE DATABASE database_name
| ADD REPLICA ON <add_replica_spec>
| MODIFY REPLICA ON <modify_replica_spec>
| REMOVE REPLICA ON <server_instance>
| JOIN
| JOIN AVAILABILITY GROUP ON <add_availability_group_spec> [ ,...2 ]
| MODIFY AVAILABILITY GROUP ON <modify_availability_group_spec> [ ,...2 ]
| GRANT CREATE ANY DATABASE
| DENY CREATE ANY DATABASE
| FAILOVER
| FORCE_FAILOVER_ALLOW_DATA_LOSS
| ADD LISTENER 'dns_name' ( <add_listener_option> )
| MODIFY LISTENER 'dns_name' ( <modify_listener_option> )
| RESTART LISTENER 'dns_name'
| REMOVE LISTENER 'dns_name'
| OFFLINE
}
[ ; ]

<set_option_spec> ::=
AUTOMATED_BACKUP_PREFERENCE = { PRIMARY | SECONDARY_ONLY| SECONDARY | NONE }
| FAILURE_CONDITION_LEVEL = { 1 | 2 | 3 | 4 | 5 }
| HEALTH_CHECK_TIMEOUT = milliseconds
| DB_FAILOVER = { ON | OFF }
| DTC_SUPPORT = { PER_DB | NONE }
| REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT = { integer }

<server_instance> ::=
{ 'system_name[\instance_name]' | 'FCI_network_name[\instance_name]' }

<add_replica_spec>::=
<server_instance> WITH
(
ENDPOINT_URL = 'TCP://system-address:port',
AVAILABILITY_MODE = { SYNCHRONOUS_COMMIT | ASYNCHRONOUS_COMMIT | CONFIGURATION_ONLY },
FAILOVER_MODE = { AUTOMATIC | MANUAL }
[ , <add_replica_option> [ ,...n ] ]
)

<add_replica_option>::=
SEEDING_MODE = { AUTOMATIC | MANUAL }
| BACKUP_PRIORITY = n
| SECONDARY_ROLE ( {
[ ALLOW_CONNECTIONS = { NO | READ_ONLY | ALL } ]
[,] [ READ_ONLY_ROUTING_URL = 'TCP://system-address:port' ]
} )
| PRIMARY_ROLE ( {
[ ALLOW_CONNECTIONS = { READ_WRITE | ALL } ]
[,] [ READ_ONLY_ROUTING_LIST = { ( '<server_instance>' [ ,...n ] ) | NONE } ]
[,] [ READ_WRITE_ROUTING_URL = '<server_instance>' ]
} )
| SESSION_TIMEOUT = integer

<modify_replica_spec>::=
<server_instance> WITH
(
ENDPOINT_URL = 'TCP://system-address:port'
| AVAILABILITY_MODE = { SYNCHRONOUS_COMMIT | ASYNCHRONOUS_COMMIT }
| FAILOVER_MODE = { AUTOMATIC | MANUAL }
| SEEDING_MODE = { AUTOMATIC | MANUAL }
| BACKUP_PRIORITY = n
| SECONDARY_ROLE ( {
ALLOW_CONNECTIONS = { NO | READ_ONLY | ALL }
| READ_ONLY_ROUTING_URL = 'TCP://system-address:port'
} )
| PRIMARY_ROLE ( {
ALLOW_CONNECTIONS = { READ_WRITE | ALL }
| READ_ONLY_ROUTING_LIST = { ( '<server_instance>' [ ,...n ] ) | NONE }
} )
| SESSION_TIMEOUT = seconds
)

<add_availability_group_spec>::=
<ag_name> WITH
(
LISTENER_URL = 'TCP://system-address:port',
AVAILABILITY_MODE = { SYNCHRONOUS_COMMIT | ASYNCHRONOUS_COMMIT },
FAILOVER_MODE = MANUAL,
SEEDING_MODE = { AUTOMATIC | MANUAL }
)

<modify_availability_group_spec>::=
<ag_name> WITH
(
LISTENER = 'TCP://system-address:port'
| AVAILABILITY_MODE = { SYNCHRONOUS_COMMIT | ASYNCHRONOUS_COMMIT }
| SEEDING_MODE = { AUTOMATIC | MANUAL }
)

<add_listener_option> ::=
{
WITH DHCP [ ON ( <network_subnet_option> ) ]
| WITH IP ( { ( <ip_address_option> ) } [ , ...n ] ) [ , PORT = listener_port ]
}

<network_subnet_option> ::=
'four_part_ipv4_address', 'four_part_ipv4_mask'

<ip_address_option> ::=
{
'four_part_ipv4_address', 'four_part_ipv4_mask'
| 'ipv6_address'
}

<modify_listener_option>::=
{
ADD IP ( <ip_address_option> )
| PORT = listener_port
}
Arguments
group_name
Specifies the name of the availability group to be altered. group_name must be a valid SQL Server identifier, and it
must be unique across all availability groups in the WSFC cluster.
AUTOMATED_BACKUP_PREFERENCE = { PRIMARY | SECONDARY_ONLY| SECONDARY | NONE }
Specifies a preference about how a backup job should evaluate the primary replica when choosing where to
perform backups. You can script a given backup job to take the automated backup preference into account. It is
important to understand that the preference is not enforced by SQL Server, so it has no impact on ad hoc
backups.
Supported only on the primary replica.
The values are as follows:
PRIMARY
Specifies that the backups should always occur on the primary replica. This option is useful if you need backup
features, such as creating differential backups, that are not supported when backup is run on a secondary replica.

IMPORTANT
If you plan to use log shipping to prepare any secondary databases for an availability group, set the automated backup
preference to Primary until all the secondary databases have been prepared and joined to the availability group.

SECONDARY_ONLY
Specifies that backups should never be performed on the primary replica. If the primary replica is the only replica
online, the backup should not occur.
SECONDARY
Specifies that backups should occur on a secondary replica except when the primary replica is the only replica
online. In that case, the backup should occur on the primary replica. This is the default behavior.
NONE
Specifies that you prefer that backup jobs ignore the role of the availability replicas when choosing the replica to
perform backups. Note backup jobs might evaluate other factors such as backup priority of each availability
replica in combination with its operational state and connected state.

IMPORTANT
There is no enforcement of the AUTOMATED_BACKUP_PREFERENCE setting. The interpretation of this preference depends
on the logic, if any, that you script into backup jobs for the databases in a given availability group. The automated backup
preference setting has no impact on ad hoc backups. For more information, see Configure Backup on Availability Replicas
(SQL Server).

NOTE
To view the automated backup preference of an existing availability group, select the automated_backup_preference or
automated_backup_preference_desc column of the sys.availability_groups catalog view. Additionally,
sys.fn_hadr_backup_is_preferred_replica (Transact-SQL) can be used to determine the preferred backup replica. This function
will always return 1 for at least one of the replicas, even when AUTOMATED_BACKUP_PREFERENCE = NONE .
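A backup job can honor the preference by checking sys.fn_hadr_backup_is_preferred_replica before taking a backup. In this sketch the database name Sales and the backup path are hypothetical:

```sql
-- Schedule the same job on every replica; only the preferred replica takes the backup.
IF sys.fn_hadr_backup_is_preferred_replica(N'Sales') = 1
BEGIN
    BACKUP DATABASE Sales
        TO DISK = N'\\backupserver\backups\Sales.bak'
        WITH COPY_ONLY;  -- copy-only full backups are the type supported on secondary replicas
END;
```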

FAILURE_CONDITION_LEVEL = { 1 | 2 | 3 | 4 | 5 }
Specifies what failure conditions will trigger an automatic failover for this availability group.
FAILURE_CONDITION_LEVEL is set at the group level but is relevant only on availability replicas that are
configured for synchronous-commit availability mode (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT).
Furthermore, failure conditions can trigger an automatic failover only if both the primary and secondary replicas
are configured for automatic failover mode (FAILOVER_MODE = AUTOMATIC ) and the secondary replica is
currently synchronized with the primary replica.
Supported only on the primary replica.
The failure-condition levels (1-5) range from the least restrictive, level 1, to the most restrictive, level 5. A given
condition level encompasses all of the less restrictive levels. Thus, the strictest condition level, 5, includes the four
less restrictive condition levels (1-4), level 4 includes levels 1-3, and so forth. The following table describes the
failure-condition that corresponds to each level.

LEVEL   FAILURE CONDITION

1       Specifies that an automatic failover should be initiated when any of the following occurs:
        - The SQL Server service is down.
        - The lease of the availability group for connecting to the WSFC cluster expires because no ACK is
          received from the server instance. For more information, see How It Works: SQL Server Always On
          Lease Timeout.

2       Specifies that an automatic failover should be initiated when any of the following occurs:
        - The instance of SQL Server does not connect to the cluster, and the user-specified
          HEALTH_CHECK_TIMEOUT threshold of the availability group is exceeded.
        - The availability replica is in a failed state.

3       Specifies that an automatic failover should be initiated on critical SQL Server internal errors, such as
        orphaned spinlocks, serious write-access violations, or too much dumping.
        This is the default behavior.

4       Specifies that an automatic failover should be initiated on moderate SQL Server internal errors, such
        as a persistent out-of-memory condition in the SQL Server internal resource pool.

5       Specifies that an automatic failover should be initiated on any qualified failure condition, including:
        - Exhaustion of SQL Engine worker-threads.
        - Detection of an unsolvable deadlock.

NOTE
Lack of response by an instance of SQL Server to client requests is not relevant to availability groups.

Together, the FAILURE_CONDITION_LEVEL and HEALTH_CHECK_TIMEOUT values define a flexible failover policy for a
given group. This flexible failover policy provides you with granular control over which conditions must cause an
automatic failover. For more information, see Flexible Failover Policy for Automatic Failover of an Availability
Group (SQL Server).
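For example, to relax the policy to the least restrictive level, where only a service outage or lease expiration triggers automatic failover (the availability group name AG1 is hypothetical):

```sql
ALTER AVAILABILITY GROUP AG1
    SET (FAILURE_CONDITION_LEVEL = 1);  -- fail over only on service-down or lease expiration
```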
HEALTH_CHECK_TIMEOUT = milliseconds
Specifies the wait time (in milliseconds) for the sp_server_diagnostics system stored procedure to return server-
health information before the WSFC cluster assumes that the server instance is slow or hung.
HEALTH_CHECK_TIMEOUT is set at the group level but is relevant only on availability replicas that are
configured for synchronous-commit availability mode with automatic failover (AVAILABILITY_MODE =
SYNCHRONOUS_COMMIT). Furthermore, a health-check timeout can trigger an automatic failover only if both
the primary and secondary replicas are configured for automatic failover mode (FAILOVER_MODE =
AUTOMATIC ) and the secondary replica is currently synchronized with the primary replica.
The default HEALTH_CHECK_TIMEOUT value is 30000 milliseconds (30 seconds). The minimum value is 15000
milliseconds (15 seconds), and the maximum value is 4294967295 milliseconds.
Supported only on the primary replica.

IMPORTANT
sp_server_diagnostics does not perform health checks at the database level.
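For example, to double the health-check wait time from the default 30 seconds (AG1 is a hypothetical group name):

```sql
ALTER AVAILABILITY GROUP AG1
    SET (HEALTH_CHECK_TIMEOUT = 60000);  -- 60 seconds, expressed in milliseconds
```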

DB_FAILOVER = { ON | OFF }
Specifies the response to take when a database on the primary replica is offline. When set to ON, any status other
than ONLINE for a database in the availability group triggers an automatic failover. When this option is set to
OFF, only the health of the instance is used to trigger automatic failover.
For more information regarding this setting, see Database Level Health Detection Option
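For example, to enable database-level health detection so that any non-ONLINE database status triggers failover (AG1 is hypothetical):

```sql
ALTER AVAILABILITY GROUP AG1
    SET (DB_FAILOVER = ON);
```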
DTC_SUPPORT = { PER_DB | NONE }
Specifies whether distributed transactions are enabled for this Availability Group. Distributed transactions are
only supported for availability group databases beginning in SQL Server 2016 (13.x), and cross-database
transactions are only supported beginning in SQL Server 2016 (13.x) SP2. PER_DB creates the availability group
with support for these transactions and will automatically promote cross-database transactions involving
database(s) in the Availability Group into distributed transactions. NONE prevents the automatic promotion of
cross-database transactions to distributed transactions and does not register the database with a stable RMID in
DTC. Distributed transactions are not prevented when the NONE setting is used, but database failover and
automatic recovery may not succeed under some circumstances. For more information, see Cross-Database
Transactions and Distributed Transactions for Always On Availability Groups and Database Mirroring (SQL
Server).

NOTE
Support for changing the DTC_SUPPORT setting of an Availability Group was introduced in SQL Server 2016 (13.x) Service
Pack 2. This option cannot be used with earlier versions. To change this setting in earlier versions of SQL Server, you must
DROP and CREATE the availability group again.
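For example, on SQL Server 2016 (13.x) SP2 or later (AG1 is hypothetical):

```sql
ALTER AVAILABILITY GROUP AG1
    SET (DTC_SUPPORT = PER_DB);  -- promote cross-database transactions to distributed transactions
```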

REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT
Introduced in SQL Server 2017. Used to set a minimum number of synchronous secondary replicas required to
commit before the primary commits a transaction. Guarantees that SQL Server transactions will wait until the
transaction logs are updated on the minimum number of secondary replicas. The default is 0, which gives the
same behavior as SQL Server 2016. The minimum value is 0. The maximum value is the number of replicas
minus 1. This option relates to replicas in synchronous commit mode. When replicas are in synchronous commit
mode, writes on the primary replica wait until writes on the secondary synchronous replicas are committed to the
replica database transaction log. If a SQL Server that hosts a secondary synchronous replica stops responding,
the SQL Server that hosts the primary replica will mark that secondary replica as NOT SYNCHRONIZED and
proceed. When the unresponsive database comes back online it will be in a "not synced" state and the replica will
be marked as unhealthy until the primary can make it synchronous again. This setting guarantees that the
primary replica will not proceed until the minimum number of replicas have committed each transaction. If the
minimum number of replicas is not available then commits on the primary will fail. For cluster type EXTERNAL the
setting is changed when the availability group is added to a cluster resource. See High availability and data
protection for availability group configurations.
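For example, to require at least one synchronous secondary replica to confirm each commit before the primary proceeds (AG1 is hypothetical):

```sql
ALTER AVAILABILITY GROUP AG1
    SET (REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT = 1);
```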
ADD DATABASE database_name
Specifies a list of one or more user databases that you want to add to the availability group. These databases must
reside on the instance of SQL Server that hosts the current primary replica. You can specify multiple databases for
an availability group, but each database can belong to only one availability group. For information about the type
of databases that an availability group can support, see Prerequisites, Restrictions, and Recommendations for
Always On Availability Groups (SQL Server). To find out which local databases already belong to an availability
group, see the replica_id column in the sys.databases catalog view.
Supported only on the primary replica.

NOTE
After you have created the availability group, you will need to connect to each server instance that hosts a secondary replica
and then prepare each secondary database and join it to the availability group. For more information, see Start Data
Movement on an Always On Secondary Database (SQL Server).
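For example, executed on the primary replica (the group and database names are hypothetical):

```sql
ALTER AVAILABILITY GROUP AG1 ADD DATABASE Sales;
```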

REMOVE DATABASE database_name


Removes the specified primary database and the corresponding secondary databases from the availability group.
Supported only on the primary replica.
For information about the recommended follow up after removing an availability database from an availability
group, see Remove a Primary Database from an Availability Group (SQL Server).
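For example, executed on the primary replica (names hypothetical):

```sql
ALTER AVAILABILITY GROUP AG1 REMOVE DATABASE Sales;
```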
ADD REPLICA ON
Specifies from one to eight SQL Server instances to host secondary replicas in an availability group. Each replica
is specified by its server instance address followed by a WITH (...) clause.
Supported only on the primary replica.
You need to join every new secondary replica to the availability group. For more information, see the description
of the JOIN option, later in this section.
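As a sketch, the following adds an asynchronous-commit replica on a named instance; all names and the endpoint URL are hypothetical:

```sql
ALTER AVAILABILITY GROUP AG1
    ADD REPLICA ON 'SQLSERVER2\Instance1' WITH (
        ENDPOINT_URL = 'TCP://sqlserver2.contoso.com:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL);

-- Then, connected to SQLSERVER2\Instance1, join the new replica to the group:
-- ALTER AVAILABILITY GROUP AG1 JOIN;
```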
<server_instance>
Specifies the address of the instance of SQL Server that is the host for a replica. The address format depends on
whether the instance is the default instance or a named instance and whether it is a standalone instance or a
failover cluster instance (FCI). The syntax is as follows:
{ 'system_name[\instance_name]' | 'FCI_network_name[\instance_name]' }
The components of this address are as follows:
system_name
Is the NetBIOS name of the computer system on which the target instance of SQL Server resides. This computer
must be a WSFC node.
FCI_network_name
Is the network name that is used to access a SQL Server failover cluster. Use this if the server instance participates
as a SQL Server failover partner. Executing SELECT @@SERVERNAME on an FCI server instance returns its
entire 'FCI_network_name[\instance_name]' string (which is the full replica name).
instance_name
Is the name of an instance of a SQL Server that is hosted by system_name or FCI_network_name and that has
Always On enabled. For a default server instance, instance_name is optional. The instance name is case
insensitive. On a stand-alone server instance, this value name is the same as the value returned by executing
SELECT @@SERVERNAME.
\
Is a separator used only when specifying instance_name, in order to separate it from system_name or
FCI_network_name.
For information about the prerequisites for WSFC nodes and server instances, see Prerequisites, Restrictions, and
Recommendations for Always On Availability Groups (SQL Server).
ENDPOINT_URL ='TCP://system -address:port'
Specifies the URL path for the database mirroring endpoint on the instance of SQL Server that will host the
availability replica that you are adding or modifying.
ENDPOINT_URL is required in the ADD REPLICA ON clause and optional in the MODIFY REPLICA ON clause.
For more information, see Specify the Endpoint URL When Adding or Modifying an Availability Replica (SQL
Server).
'TCP://system-address:port'
Specifies a URL for specifying an endpoint URL or read-only routing URL. The URL parameters are as follows:
system-address
Is a string, such as a system name, a fully qualified domain name, or an IP address, that unambiguously identifies
the destination computer system.
port
Is a port number that is associated with the mirroring endpoint of the server instance (for the ENDPOINT_URL
option) or the port number used by the Database Engine of the server instance (for the
READ_ONLY_ROUTING_URL option).
AVAILABILITY_MODE = { SYNCHRONOUS_COMMIT | ASYNCHRONOUS_COMMIT |
CONFIGURATION_ONLY }
Specifies whether the primary replica has to wait for the secondary replica to acknowledge the hardening
(writing) of the log records to disk before the primary replica can commit the transaction on a given primary
database. The transactions on different databases on the same primary replica can commit independently.
SYNCHRONOUS_COMMIT
Specifies that the primary replica will wait to commit transactions until they have been hardened on this
secondary replica (synchronous-commit mode). You can specify SYNCHRONOUS_COMMIT for up to three
replicas, including the primary replica.
ASYNCHRONOUS_COMMIT
Specifies that the primary replica commits transactions without waiting for this secondary replica to harden the
log (asynchronous-commit availability mode). You can specify ASYNCHRONOUS_COMMIT for up to five
availability replicas, including the primary replica.
CONFIGURATION_ONLY
Specifies that the primary replica synchronously commits availability group configuration metadata to the master
database on this replica. The replica will not contain user data. This option:
Can be hosted on any edition of SQL Server, including Express Edition.
Requires the database mirroring endpoint of the CONFIGURATION_ONLY replica to be type WITNESS.
Cannot be altered.
Is not valid when CLUSTER_TYPE = WSFC.
For more information, see Configuration only replica.
AVAILABILITY_MODE is required in the ADD REPLICA ON clause and optional in the MODIFY REPLICA
ON clause. For more information, see Availability Modes (Always On Availability Groups).
FAILOVER_MODE = { AUTOMATIC | MANUAL }
Specifies the failover mode of the availability replica that you are defining.
AUTOMATIC
Enables automatic failover. AUTOMATIC is supported only if you also specify AVAILABILITY_MODE =
SYNCHRONOUS_COMMIT. You can specify AUTOMATIC for up to three availability replicas, including the
primary replica.

NOTE
Prior to SQL Server 2016, you were limited to two automatic failover replicas, including the primary replica.

NOTE
SQL Server Failover Cluster Instances (FCIs) do not support automatic failover by availability groups, so any availability
replica that is hosted by an FCI can only be configured for manual failover.

MANUAL
Enables manual failover or forced manual failover (forced failover) by the database administrator.
FAILOVER_MODE is required in the ADD REPLICA ON clause and optional in the MODIFY REPLICA ON
clause. Two types of manual failover exist, manual failover without data loss and forced failover (with possible
data loss), which are supported under different conditions. For more information, see Failover and Failover Modes
(Always On Availability Groups).
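Taken together, the options above might be combined as in the following sketch, which adds a synchronous-commit, automatic-failover replica. The availability group, server, and domain names here are placeholders, not values from this article.

```sql
-- Placeholder names: AG1, SQLSERVER2\Instance2, sqlserver2.contoso.com
ALTER AVAILABILITY GROUP [AG1]
    ADD REPLICA ON N'SQLSERVER2\Instance2'
    WITH (
        ENDPOINT_URL = N'TCP://sqlserver2.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC
        );
```

ENDPOINT_URL, AVAILABILITY_MODE, and FAILOVER_MODE are all required in an ADD REPLICA ON clause, so a minimal statement includes all three.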
SEEDING_MODE = { AUTOMATIC | MANUAL }
Specifies how the secondary replica will be initially seeded.
AUTOMATIC
Enables direct seeding. This method will seed the secondary replica over the network. This method does not
require you to back up and restore a copy of the primary database on the replica.

NOTE
For direct seeding, you must allow database creation on each secondary replica by calling ALTER AVAILABILITY GROUP
with the GRANT CREATE ANY DATABASE option.

MANUAL
Specifies manual seeding (default). This method requires you to create a backup of the database on the primary
replica and manually restore that backup on the secondary replica.
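For example, a replica added with direct seeding might look like the following sketch (all names are placeholders, and the secondary must later be granted permission to create databases, as the note above describes):

```sql
-- Placeholder names: AG1, SQLSERVER3, sqlserver3.contoso.com
ALTER AVAILABILITY GROUP [AG1]
    ADD REPLICA ON N'SQLSERVER3'
    WITH (
        ENDPOINT_URL = N'TCP://sqlserver3.contoso.com:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL,
        SEEDING_MODE = AUTOMATIC  -- seed over the network; no manual backup/restore
        );
```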
BACKUP_PRIORITY = n
Specifies your priority for performing backups on this replica relative to the other replicas in the same availability
group. The value is an integer in the range of 0..100. These values have the following meanings:
1..100 indicates that the availability replica could be chosen for performing backups. 1 indicates the lowest
priority, and 100 indicates the highest priority. If BACKUP_PRIORITY = 1, the availability replica would be
chosen for performing backups only if no higher priority availability replicas are currently available.
0 indicates that this availability replica will never be chosen for performing backups. This is useful, for
example, for a remote availability replica to which you never want backups to fail over.
For more information, see Active Secondaries: Backup on Secondary Replicas (Always On Availability
Groups).
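For instance, to raise an existing replica's backup priority relative to its peers (placeholder names):

```sql
-- Prefer this replica for backups; 100 is highest, 0 excludes it entirely.
ALTER AVAILABILITY GROUP [AG1]
    MODIFY REPLICA ON N'SQLSERVER2\Instance2'
    WITH (BACKUP_PRIORITY = 60);
```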
SECONDARY_ROLE ( ... )
Specifies role-specific settings that will take effect if this availability replica currently owns the secondary
role (that is, whenever it is a secondary replica). Within the parentheses, specify either or both secondary-
role options. If you specify both, use a comma-separated list.
The secondary role options are as follows:
ALLOW_CONNECTIONS = { NO | READ_ONLY | ALL }
Specifies whether the databases of a given availability replica that is performing the secondary role (that is,
is acting as a secondary replica) can accept connections from clients, one of:
NO
No user connections are allowed to secondary databases of this replica. They are not available for read
access. This is the default behavior.
READ_ONLY
Only connections where the Application Intent connection property is set to ReadOnly are allowed to the
databases in the secondary replica. For more information about this property, see Using Connection String
Keywords with SQL Server Native Client.
ALL
All connections are allowed to the databases in the secondary replica for read-only access.
For more information, see Active Secondaries: Readable Secondary Replicas (Always On Availability
Groups).
READ_ONLY_ROUTING_URL = 'TCP://system-address:port'
Specifies the URL to be used for routing read-intent connection requests to this availability replica. This is
the URL on which the SQL Server Database Engine listens. Typically, the default instance of the SQL
Server Database Engine listens on TCP port 1433.
For a named instance, you can obtain the port number by querying the port and type_desc columns of
the sys.dm_tcp_listener_states dynamic management view. The server instance uses the Transact-SQL
listener (type_desc='TSQL').
For more information about calculating the read-only routing URL for an availability replica, see
Calculating read_only_routing_url for Always On.

NOTE
For a named instance of SQL Server, the Transact-SQL listener should be configured to use a specific port. For more
information, see Configure a Server to Listen on a Specific TCP Port (SQL Server Configuration Manager).
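The two secondary-role options above are often set together, as in this sketch (server names and URL are placeholders):

```sql
-- Allow only read-intent connections and advertise a routing URL
-- for when this replica is in the secondary role.
ALTER AVAILABILITY GROUP [AG1]
    MODIFY REPLICA ON N'SQLSERVER2\Instance2'
    WITH (SECONDARY_ROLE (
        ALLOW_CONNECTIONS = READ_ONLY,
        READ_ONLY_ROUTING_URL = N'TCP://sqlserver2.contoso.com:1433'));
```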

PRIMARY_ROLE ( ... )
Specifies role-specific settings that will take effect if this availability replica currently owns the primary role (that
is, whenever it is the primary replica). Within the parentheses, specify either or both primary-role options. If you
specify both, use a comma-separated list.
The primary role options are as follows:
ALLOW_CONNECTIONS = { READ_WRITE | ALL }
Specifies the type of connection that the databases of a given availability replica that is performing the primary
role (that is, is acting as a primary replica) can accept from clients, one of:
READ_WRITE
Connections where the Application Intent connection property is set to ReadOnly are disallowed. When the
Application Intent property is set to ReadWrite or the Application Intent connection property is not set, the
connection is allowed. For more information about the Application Intent connection property, see Using Connection
String Keywords with SQL Server Native Client.
ALL
All connections are allowed to the databases in the primary replica. This is the default behavior.
READ_ONLY_ROUTING_LIST = { ('<server_instance>' [ ,...n ] ) | NONE }
Specifies a comma-separated list of server instances that host availability replicas for this availability group that
meet the following requirements when running under the secondary role:
Be configured to allow all connections or read-only connections (see the ALLOW_CONNECTIONS
argument of the SECONDARY_ROLE option, above).
Have their read-only routing URL defined (see the READ_ONLY_ROUTING_URL argument of the
SECONDARY_ROLE option, above).
The READ_ONLY_ROUTING_LIST values are as follows:
<server_instance>
Specifies the address of the instance of SQL Server that is the host for an availability replica that is a
readable secondary replica when running under the secondary role.
Use a comma-separated list to specify all of the server instances that might host a readable secondary
replica. Read-only routing will follow the order in which server instances are specified in the list. If you
include a replica's host server instance on the replica's read-only routing list, placing this server instance at
the end of the list is typically a good practice, so that read-intent connections go to a secondary replica, if
one is available.
Beginning with SQL Server 2016 (13.x), you can load-balance read-intent requests across readable
secondary replicas. You specify this by placing the replicas in a nested set of parentheses within the read-
only routing list. For more information and examples, see Configure load-balancing across read-only
replicas.
NONE
Specifies that when this availability replica is the primary replica, read-only routing will not be supported.
This is the default behavior. When used with MODIFY REPLICA ON, this value disables an existing list, if
any.
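The routing-list guidance above might be sketched as follows, load-balancing read-intent requests across two secondaries and placing the host server instance last (all instance names are placeholders):

```sql
-- Round-robin read-intent connections between SQLSERVER2 and SQLSERVER3;
-- fall back to SQLSERVER1 (this replica's host) only if neither is available.
ALTER AVAILABILITY GROUP [AG1]
    MODIFY REPLICA ON N'SQLSERVER1'
    WITH (PRIMARY_ROLE (
        READ_ONLY_ROUTING_LIST = (('SQLSERVER2','SQLSERVER3'), 'SQLSERVER1')));
```

The nested parentheses form requires SQL Server 2016 (13.x) or later, as noted above.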
SESSION_TIMEOUT = seconds
Specifies the session-timeout period in seconds. If you do not specify this option, the default is 10 seconds.
The minimum value is 5 seconds.

IMPORTANT
We recommend that you keep the time-out period at 10 seconds or greater.

For more information about the session-timeout period, see Overview of Always On Availability Groups (SQL
Server).
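For example, to lengthen the timeout for a replica on a slow link (placeholder names):

```sql
-- Raise the session timeout from the 10-second default to 15 seconds.
ALTER AVAILABILITY GROUP [AG1]
    MODIFY REPLICA ON N'SQLSERVER2\Instance2'
    WITH (SESSION_TIMEOUT = 15);
```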
MODIFY REPLICA ON
Modifies any of the replicas of the availability group. The list of replicas to be modified contains the server
instance address and a WITH (...) clause for each replica.
Supported only on the primary replica.
REMOVE REPLICA ON
Removes the specified secondary replica from the availability group. The current primary replica cannot be
removed from an availability group. On being removed, the replica stops receiving data. Its secondary databases
are removed from the availability group and enter the RESTORING state.
Supported only on the primary replica.

NOTE
If you remove a replica while it is unavailable or failed, when it comes back online it will discover that it no longer belongs
to the availability group.

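A removal might look like the following sketch (placeholder names):

```sql
-- Run on the primary replica; the removed replica's databases enter RESTORING.
ALTER AVAILABILITY GROUP [AG1]
    REMOVE REPLICA ON N'SQLSERVER3';
```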
JOIN
Causes the local server instance to host a secondary replica in the specified availability group.
Supported only on a secondary replica that has not yet been joined to the availability group.
For more information, see Join a Secondary Replica to an Availability Group (SQL Server).
FAILOVER
Initiates a manual failover of the availability group without data loss to the secondary replica to which you are
connected. The replica that will host the primary replica is the failover target. The failover target will take over the
primary role and recover its copy of each database and bring them online as the new primary databases. The
former primary replica concurrently transitions to the secondary role, and its databases become secondary
databases and are immediately suspended. Potentially, these roles can be switched back and forth by a series of
failovers.
Supported only on a synchronous-commit secondary replica that is currently synchronized with the primary
replica. Note that for a secondary replica to be synchronized the primary replica must also be running in
synchronous-commit mode.

NOTE
A failover command returns as soon as the failover target has accepted the command. However, database recovery occurs
asynchronously after the availability group has finished failing over.

For information about the limitations, prerequisites and recommendations for performing a planned manual
failover, see Perform a Planned Manual Failover of an Availability Group (SQL Server).
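A planned manual failover is a single statement run on the target secondary (the group name is a placeholder):

```sql
-- Run on the synchronized, synchronous-commit secondary replica
-- that is to become the new primary.
ALTER AVAILABILITY GROUP [AG1] FAILOVER;
```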
FORCE_FAILOVER_ALLOW_DATA_LOSS
Caution

Forcing failover, which might involve some data loss, is strictly a disaster recovery method. Therefore, we strongly
recommend that you force failover only if the primary replica is no longer running, you are willing to risk losing
data, and you must restore service to the availability group immediately.
Supported only on a replica whose role is in the SECONDARY or RESOLVING state. The replica on which you
enter a failover command is known as the failover target.
Forces failover of the availability group, with possible data loss, to the failover target. The failover target will take
over the primary role and recover its copy of each database and bring them online as the new primary databases.
On any remaining secondary replicas, every secondary database is suspended until manually resumed. When the
former primary replica becomes available, it will switch to the secondary role, and its databases will become
suspended secondary databases.
NOTE
A failover command returns as soon as the failover target has accepted the command. However, database recovery occurs
asynchronously after the availability group has finished failing over.

For information about the limitations, prerequisites and recommendations for forcing failover and the effect of a
forced failover on the former primary databases in the availability group, see Perform a Forced Manual Failover of
an Availability Group (SQL Server).
ADD LISTENER 'dns_name'( <add_listener_option> )
Defines a new availability group listener for this availability group. Supported only on the primary replica.

IMPORTANT
Before you create your first listener, we strongly recommend that you read Create or Configure an Availability Group
Listener (SQL Server).
After you create a listener for a given availability group, we strongly recommend that you do the following:
Ask your network administrator to reserve the listener's IP address for its exclusive use.
Give the listener's DNS host name to application developers to use in connection strings when requesting client
connections to this availability group.

dns_name
Specifies the DNS host name of the availability group listener. The DNS name of the listener must be unique in
the domain and in NetBIOS.
dns_name is a string value. This name can contain only alphanumeric characters and hyphens (-), in any order.
DNS host names are case insensitive. The maximum length is 63 characters.
We recommend that you specify a meaningful string. For example, for an availability group named AG1, a
meaningful DNS host name would be ag1-listener.

IMPORTANT
NetBIOS recognizes only the first 15 characters in the dns_name. If you have two WSFC clusters that are controlled by the same
Active Directory and you try to create availability group listeners in both clusters using names with more than 15
characters and an identical 15-character prefix, you will get an error reporting that the Virtual Network Name resource could
not be brought online. For information about prefix naming rules for DNS names, see Assigning Domain Names.

JOIN AVAILABILITY GROUP ON


Joins to a distributed availability group. When you create a distributed availability group, the availability group on
the cluster where it is created is the primary availability group. The availability group that joins the distributed
availability group is the secondary availability group.
<ag_name>
Specifies the name of the availability group that makes up one half of the distributed availability group.
LISTENER = 'TCP://system-address:port'
Specifies the URL path for the listener associated with the availability group.
The LISTENER clause is required.
'TCP://system-address:port'
Specifies a URL for the listener associated with the availability group. The URL parameters are as follows:
system-address
Is a string, such as a system name, a fully qualified domain name, or an IP address, that unambiguously identifies
the listener.
port
Is a port number that is associated with the mirroring endpoint of the availability group. Note that this is not the
port of the listener.
AVAILABILITY_MODE = { SYNCHRONOUS_COMMIT | ASYNCHRONOUS_COMMIT }
Specifies whether the primary replica has to wait for the secondary availability group to acknowledge the
hardening (writing) of the log records to disk before the primary replica can commit the transaction on a given
primary database.
SYNCHRONOUS_COMMIT
Specifies that the primary replica will wait to commit transactions until they have been hardened on the
secondary availability group. You can specify SYNCHRONOUS_COMMIT for up to two availability groups,
including the primary availability group.
ASYNCHRONOUS_COMMIT
Specifies that the primary replica commits transactions without waiting for this secondary availability group to
harden the log. You can specify ASYNCHRONOUS_COMMIT for up to two availability groups, including the
primary availability group.
The AVAILABILITY_MODE clause is required.
FAILOVER_MODE = { MANUAL }
Specifies the failover mode of the distributed availability group.
MANUAL
Enables planned manual failover or forced manual failover (typically called forced failover) by the database
administrator.
Automatic failover to the secondary availability group is not supported.
SEEDING_MODE = { AUTOMATIC | MANUAL }
Specifies how the secondary availability group will be initially seeded.
AUTOMATIC
Enables automatic seeding. This method will seed the secondary availability group over the network. This method
does not require you to back up and restore a copy of the primary database on the replicas of the secondary
availability group.
MANUAL
Specifies manual seeding. This method requires you to create a backup of the database on the primary replica and
manually restore that backup on the replica(s) of the secondary availability group.
MODIFY AVAILABILITY GROUP ON
Modifies any of the availability group settings of a distributed availability group. The list of availability groups to
be modified contains the availability group name and a WITH (...) clause for each availability group.

IMPORTANT
This command must be repeated on both the primary availability group and secondary availability group instances.

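Putting the distributed availability group clauses above together, a join run on the primary replica of the secondary availability group might look like the following sketch. The group names and listener URLs are placeholders, and the clause layout follows the LISTENER, AVAILABILITY_MODE, FAILOVER_MODE, and SEEDING_MODE arguments as described in this section.

```sql
-- Placeholder names: DistAG, AG1, AG2, and the two listener URLs.
ALTER AVAILABILITY GROUP [DistAG]
    JOIN AVAILABILITY GROUP ON
        N'AG1' WITH (
            LISTENER = N'TCP://ag1-listener.contoso.com:5022',
            AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
            FAILOVER_MODE = MANUAL,
            SEEDING_MODE = AUTOMATIC),
        N'AG2' WITH (
            LISTENER = N'TCP://ag2-listener.contoso.com:5022',
            AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
            FAILOVER_MODE = MANUAL,
            SEEDING_MODE = AUTOMATIC);
```

Note that the port in each URL is the availability group's mirroring endpoint port, not the listener port, as stated above.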
GRANT CREATE ANY DATABASE


Permits the availability group to create databases on behalf of the primary replica, which supports direct seeding
(SEEDING_MODE = AUTOMATIC ). This statement should be run on every secondary replica that supports
direct seeding after that secondary joins the availability group. Requires the CREATE ANY DATABASE permission.
DENY CREATE ANY DATABASE
Removes the ability of the availability group to create databases on behalf of the primary replica.
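For example (the group name is a placeholder):

```sql
-- On a secondary replica that has joined the group, allow automatic seeding:
ALTER AVAILABILITY GROUP [AG1] GRANT CREATE ANY DATABASE;

-- Later, to revoke that ability:
ALTER AVAILABILITY GROUP [AG1] DENY CREATE ANY DATABASE;
```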
<add_listener_option>
ADD LISTENER takes one of the following options:
WITH DHCP [ ON { ('four_part_ipv4_address','four_part_ipv4_mask') } ]
Specifies that the availability group listener will use the Dynamic Host Configuration Protocol (DHCP ). Optionally,
use the ON clause to identify the network on which this listener will be created. DHCP is limited to a single subnet
that is used by every server instance that hosts an availability replica in the availability group.

IMPORTANT
We do not recommend DHCP in a production environment. If there is downtime and the DHCP IP lease expires, extra time
is required to register the new DHCP network IP address that is associated with the listener DNS name, which impacts client
connectivity. However, DHCP is convenient for setting up development and testing environments to verify basic functions of
availability groups and for integration with your applications.

For example:
WITH DHCP ON ('10.120.19.0','255.255.254.0')

WITH IP ( { ('four_part_ipv4_address','four_part_ipv4_mask') | ('ipv6_address') } [ , ...n ] ) [ , PORT = listener_port ]
Specifies that, instead of using DHCP, the availability group listener will use one or more static IP addresses. To
create an availability group across multiple subnets, each subnet requires one static IP address in the listener
configuration. For a given subnet, the static IP address can be either an IPv4 address or an IPv6 address. Contact
your network administrator to get a static IP address for each subnet that will host an availability replica for the
new availability group.
For example:
WITH IP ( ('10.120.19.155','255.255.254.0') )

four_part_ipv4_address
Specifies an IPv4 four-part address for an availability group listener. For example, 10.120.19.155.
four_part_ipv4_mask
Specifies an IPv4 four-part mask for an availability group listener. For example, 255.255.254.0.
ipv6_address
Specifies an IPv6 address for an availability group listener. For example, 2001::4898:23:1002:20f:1fff:feff:b3a3.
PORT = listener_port
Specifies the port number, listener_port, to be used by an availability group listener that is specified by a WITH IP
clause. PORT is optional.
The default port number, 1433, is supported. However, if you have security concerns, we recommend using a
different port number.
For example: WITH IP ( ('2001::4898:23:1002:20f:1fff:feff:b3a3') ), PORT = 7777

MODIFY LISTENER 'dns_name'( <modify_listener_option> )


Modifies an existing availability group listener for this availability group. Supported only on the primary replica.
<modify_listener_option>
MODIFY LISTENER takes one of the following options:
ADD IP { ('four_part_ipv4_address','four_part_ipv4_mask') | ('ipv6_address') }
Adds the specified IP address to the availability group listener specified by dns_name.
PORT = listener_port
See the description of this argument earlier in this section.
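For example, to add a static IP on an additional subnet to an existing listener (all values are placeholders):

```sql
-- Extend a listener to a new subnet with a static IPv4 address and mask.
ALTER AVAILABILITY GROUP [AG1]
    MODIFY LISTENER 'ag1-listener'
    (ADD IP (N'10.120.21.44', N'255.255.254.0'));
```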
RESTART LISTENER 'dns_name'
Restarts the listener that is associated with the specified DNS name. Supported only on the primary replica.
REMOVE LISTENER 'dns_name'
Removes the listener that is associated with the specified DNS name. Supported only on the primary replica.
OFFLINE
Takes an online availability group offline. There is no data loss for synchronous-commit databases.
After an availability group goes offline, its databases become unavailable to clients, and you cannot bring the
availability group back online. Therefore, use the OFFLINE option only during a cross-cluster migration of Always
On availability groups, when migrating availability group resources to a new WSFC cluster.
For more information, see Take an Availability Group Offline (SQL Server).

Prerequisites and Restrictions


For information about prerequisites and restrictions on availability replicas and on their host server instances and
computers, see Prerequisites, Restrictions, and Recommendations for Always On Availability Groups (SQL
Server).
For information about restrictions on the AVAILABILITY GROUP Transact-SQL statements, see Overview of
Transact-SQL Statements for Always On Availability Groups (SQL Server).

Security
Permissions
Requires ALTER AVAILABILITY GROUP permission on the availability group, CONTROL AVAILABILITY GROUP
permission, ALTER ANY AVAILABILITY GROUP permission, or CONTROL SERVER permission. Also requires
ALTER ANY DATABASE permission.

Examples
A. Joining a secondary replica to an availability group
The following example joins a secondary replica to which you are connected to the AccountsAG availability group.

ALTER AVAILABILITY GROUP AccountsAG JOIN;
GO

B. Forcing failover of an availability group


The following example forces the AccountsAG availability group to fail over to the secondary replica to which you
are connected.

ALTER AVAILABILITY GROUP AccountsAG FORCE_FAILOVER_ALLOW_DATA_LOSS;
GO
See Also
CREATE AVAILABILITY GROUP (Transact-SQL)
ALTER DATABASE SET HADR (Transact-SQL)
DROP AVAILABILITY GROUP (Transact-SQL)
sys.availability_replicas (Transact-SQL)
sys.availability_groups (Transact-SQL)
Troubleshoot Always On Availability Groups Configuration (SQL Server)
Overview of Always On Availability Groups (SQL Server)
Availability Group Listeners, Client Connectivity, and Application Failover (SQL Server)
ALTER BROKER PRIORITY (Transact-SQL)
10/1/2018 • 3 minutes to read

APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Changes the properties of a Service Broker conversation priority.
Transact-SQL Syntax Conventions

Syntax
ALTER BROKER PRIORITY ConversationPriorityName
FOR CONVERSATION
{ SET ( [ CONTRACT_NAME = {ContractName | ANY } ]
[ [ , ] LOCAL_SERVICE_NAME = {LocalServiceName | ANY } ]
[ [ , ] REMOTE_SERVICE_NAME = {'RemoteServiceName' | ANY } ]
[ [ , ] PRIORITY_LEVEL = { PriorityValue | DEFAULT } ]
)
}
[;]

Arguments
ConversationPriorityName
Specifies the name of the conversation priority to be changed. The name must refer to a conversation priority in
the current database.
SET
Specifies the criteria for determining if the conversation priority applies to a conversation. SET is required and
must contain at least one criterion: CONTRACT_NAME, LOCAL_SERVICE_NAME, REMOTE_SERVICE_NAME, or
PRIORITY_LEVEL.
CONTRACT_NAME = {ContractName | ANY }
Specifies the name of a contract to be used as a criterion for determining if the conversation priority applies to a
conversation. ContractName is a Database Engine identifier, and must specify the name of a contract in the current
database.
ContractName
Specifies that the conversation priority can be applied only to conversations where the BEGIN DIALOG statement
that started the conversation specified ON CONTRACT ContractName.
ANY
Specifies that the conversation priority can be applied to any conversation, regardless of which contract it uses.
If CONTRACT_NAME is not specified, the contract property of the conversation priority is not changed.
LOCAL_SERVICE_NAME = {LocalServiceName | ANY }
Specifies the name of a service to be used as a criterion to determine if the conversation priority applies to a
conversation endpoint.
LocalServiceName is a Database Engine identifier and must specify the name of a service in the current database.
LocalServiceName
Specifies that the conversation priority can be applied to the following:
Any initiator conversation endpoint whose initiator service name matches LocalServiceName.
Any target conversation endpoint whose target service name matches LocalServiceName.
ANY
Specifies that the conversation priority can be applied to any conversation endpoint, regardless of the
name of the local service used by the endpoint.
If LOCAL_SERVICE_NAME is not specified, the local service property of the conversation priority is not
changed.
REMOTE_SERVICE_NAME = {'RemoteServiceName' | ANY }
Specifies the name of a service to be used as a criterion to determine if the conversation priority applies to a
conversation endpoint.
RemoteServiceName is a literal of type nvarchar(256). Service Broker uses a byte-by-byte comparison to
match the RemoteServiceName string. The comparison is case-sensitive and does not consider the current
collation. The target service can be in the current instance of the Database Engine, or a remote instance of
the Database Engine.
'RemoteServiceName'
Specifies that the conversation priority can be applied to the following:
Any initiator conversation endpoint whose associated target service name matches RemoteServiceName.
Any target conversation endpoint whose associated initiator service name matches RemoteServiceName.
ANY
Specifies that the conversation priority applies to any conversation endpoint, regardless of the name of the
remote service associated with the endpoint.
If REMOTE_SERVICE_NAME is not specified, the remote service property of the conversation priority is
not changed.
PRIORITY_LEVEL = { PriorityValue | DEFAULT }
Specifies the priority level to assign to any conversation endpoint that uses the contracts and services that are
specified in the conversation priority. PriorityValue must be an integer literal from 1 (lowest priority) to 10
(highest priority).
If PRIORITY_LEVEL is not specified, the priority level property of the conversation priority is not changed.

Remarks
No properties that are changed by ALTER BROKER PRIORITY are applied to existing conversations. The existing
conversations continue with the priority that was assigned when they were started.
For more information, see CREATE BROKER PRIORITY (Transact-SQL).

Permissions
Permission for creating a conversation priority defaults to members of the db_ddladmin or db_owner fixed
database roles, and to the sysadmin fixed server role. Requires ALTER permission on the database.

Examples
A. Changing only the priority level of an existing conversation priority.
Changes the priority level, but does not change the contract, local service, or remote service properties.

ALTER BROKER PRIORITY SimpleContractDefaultPriority
FOR CONVERSATION
SET (PRIORITY_LEVEL = 3);

B. Changing all of the properties of an existing conversation priority.


Changes the priority level, contract, local service, and remote service properties.

ALTER BROKER PRIORITY SimpleContractPriority
FOR CONVERSATION
SET (CONTRACT_NAME = SimpleContractB,
LOCAL_SERVICE_NAME = TargetServiceB,
REMOTE_SERVICE_NAME = N'InitiatorServiceB',
PRIORITY_LEVEL = 8);

See Also
CREATE BROKER PRIORITY (Transact-SQL)
DROP BROKER PRIORITY (Transact-SQL)
sys.conversation_priorities (Transact-SQL)
ALTER CERTIFICATE (Transact-SQL)
10/1/2018 • 3 minutes to read

APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Changes the private key used to encrypt a certificate, or adds one if none is present. Changes the availability of a
certificate to Service Broker.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

ALTER CERTIFICATE certificate_name
REMOVE PRIVATE KEY
| WITH PRIVATE KEY ( <private_key_spec> [ ,... ] )
| WITH ACTIVE FOR BEGIN_DIALOG = [ ON | OFF ]

<private_key_spec> ::=
FILE = 'path_to_private_key'
| DECRYPTION BY PASSWORD = 'key_password'
| ENCRYPTION BY PASSWORD = 'password'

-- Syntax for Parallel Data Warehouse

ALTER CERTIFICATE certificate_name
{
REMOVE PRIVATE KEY
| WITH PRIVATE KEY (
FILE = '<path_to_private_key>',
DECRYPTION BY PASSWORD = '<key password>' )
}

Arguments
certificate_name
Is the unique name by which the certificate is known in the database.
FILE ='path_to_private_key'
Specifies the complete path, including file name, to the private key. This parameter can be a local path or a UNC
path to a network location. This file will be accessed within the security context of the SQL Server service account.
When you use this option, you must make sure that the service account has access to the specified file.
DECRYPTION BY PASSWORD ='key_password'
Specifies the password that is required to decrypt the private key.
ENCRYPTION BY PASSWORD ='password'
Specifies the password used to encrypt the private key of the certificate in the database. password must meet the
Windows password policy requirements of the computer that is running the instance of SQL Server. For more
information, see Password Policy.
REMOVE PRIVATE KEY
Specifies that the private key should no longer be maintained inside the database.
ACTIVE FOR BEGIN_DIALOG = { ON | OFF }
Makes the certificate available to the initiator of a Service Broker dialog conversation.

Remarks
The private key must correspond to the public key specified by certificate_name.
The DECRYPTION BY PASSWORD clause can be omitted if the password in the file is protected with a null
password.
When the private key of a certificate that already exists in the database is imported from a file, the private key will
be automatically protected by the database master key. To protect the private key with a password, use the
ENCRYPTION BY PASSWORD phrase.
The REMOVE PRIVATE KEY option will delete the private key of the certificate from the database. You can remove
the private key when the certificate will be used to verify signatures or in Service Broker scenarios that do not
require a private key. Do not remove the private key of a certificate that protects a symmetric key.
You do not have to specify a decryption password when the private key is encrypted by using the database master
key.

IMPORTANT
Always make an archival copy of a private key before removing it from a database. For more information, see BACKUP
CERTIFICATE (Transact-SQL).

The WITH PRIVATE KEY option is not available in a contained database.

Permissions
Requires ALTER permission on the certificate.

Examples
A. Changing the password of a certificate

ALTER CERTIFICATE Shipping04
    WITH PRIVATE KEY (DECRYPTION BY PASSWORD = 'pGF$5DGvbd2439587y',
    ENCRYPTION BY PASSWORD = '4-329578thlkajdshglXCSgf');
GO

B. Changing the password that is used to encrypt the private key

ALTER CERTIFICATE Shipping11
    WITH PRIVATE KEY (ENCRYPTION BY PASSWORD = '34958tosdgfkh##38',
    DECRYPTION BY PASSWORD = '95hkjdskghFDGGG4%');
GO

C. Importing a private key for a certificate that is already present in the database
ALTER CERTIFICATE Shipping13
WITH PRIVATE KEY (FILE = 'c:\importedkeys\Shipping13',
DECRYPTION BY PASSWORD = 'GDFLKl8^^GGG4000%');
GO

D. Changing the protection of the private key from a password to the database master key

ALTER CERTIFICATE Shipping15
    WITH PRIVATE KEY (DECRYPTION BY PASSWORD = '95hk000eEnvjkjy#F%');
GO
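
E. Removing the private key from a certificate

The REMOVE PRIVATE KEY option has no example in this article. The following sketch uses a hypothetical certificate named Shipping16 and assumes the key has already been archived with BACKUP CERTIFICATE. After the statement runs, the certificate can still verify signatures but can no longer sign or decrypt.

```sql
-- Hypothetical certificate name; archive the private key with
-- BACKUP CERTIFICATE before running this statement.
ALTER CERTIFICATE Shipping16
REMOVE PRIVATE KEY;
GO
```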

See Also
CREATE CERTIFICATE (Transact-SQL)
DROP CERTIFICATE (Transact-SQL)
BACKUP CERTIFICATE (Transact-SQL)
Encryption Hierarchy
EVENTDATA (Transact-SQL)
ALTER COLUMN ENCRYPTION KEY (Transact-SQL)
10/1/2018 • 3 minutes to read

APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Alters a column encryption key in a database, adding or dropping an encrypted value. A CEK can have up to two
values, which allows for the rotation of the corresponding column master key. A CEK is used when encrypting
columns using the Always Encrypted (Database Engine) feature. Before adding a CEK value, you must define the
column master key that was used to encrypt the value by using SQL Server Management Studio or the CREATE
COLUMN MASTER KEY statement.
Transact-SQL Syntax Conventions

Syntax
ALTER COLUMN ENCRYPTION KEY key_name
[ ADD | DROP ] VALUE
(
COLUMN_MASTER_KEY = column_master_key_name
[, ALGORITHM = 'algorithm_name' , ENCRYPTED_VALUE = varbinary_literal ]
) [;]

Arguments
key_name
The column encryption key that you are changing.
column_master_key_name
Specifies the name of the column master key (CMK) used for encrypting the column encryption key (CEK).
algorithm_name
Name of the encryption algorithm used to encrypt the value. The algorithm for the system providers must be
RSA_OAEP. This argument is not valid when dropping a column encryption key value.
varbinary_literal
The CEK BLOB encrypted with the specified master encryption key. This argument is not valid when dropping a
column encryption key value.

WARNING
Never pass plaintext CEK values in this statement. Doing so will compromise the benefit of this feature.

Remarks
Typically, a column encryption key is created with just one encrypted value. When a column master key needs to
be rotated (the current column master key needs to be replaced with the new column master key), you can add a
new value of the column encryption key, encrypted with the new column master key. This workflow allows you to
ensure client applications can access data encrypted with the column encryption key, while the new column master
key is being made available to client applications. An Always Encrypted enabled driver in a client application that
does not have access to the new master key, will be able to use the column encryption key value encrypted with
the old column master key to access sensitive data. The encryption algorithms that Always Encrypted supports require
the plaintext value to be 256 bits. An encrypted value should be generated using a key store provider that
encapsulates the key store holding the column master key.
Column master keys are rotated for the following reasons:
Compliance regulations may require that keys be periodically rotated.
A column master key is compromised, and it needs to be rotated for security reasons.
To enable or disable sharing column encryption keys with a secure enclave on the server side. For example, if
your current column master key does not support enclave computations (has not been defined with the
ENCLAVE_COMPUTATIONS property) and you want to enable enclave computations on columns protected
with a column encryption key that your column master key encrypts, you need to replace the column master
key with the new key with the ENCLAVE_COMPUTATIONS property. For more information, see Always
Encrypted with secure enclaves.
Use sys.columns (Transact-SQL ), sys.column_encryption_keys (Transact-SQL ) and
sys.column_encryption_key_values (Transact-SQL ) to view information about column encryption keys.
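
As a sketch of using these views together (joining on the documented key-id columns; the key name MyCEK is taken from the examples below), the following query lists each value of a column encryption key alongside the column master key that protects it:

```sql
SELECT cek.name AS cek_name,
       cmk.name AS cmk_name,
       v.encryption_algorithm_name
FROM sys.column_encryption_keys AS cek
JOIN sys.column_encryption_key_values AS v
    ON v.column_encryption_key_id = cek.column_encryption_key_id
JOIN sys.column_master_keys AS cmk
    ON cmk.column_master_key_id = v.column_master_key_id
WHERE cek.name = 'MyCEK';
```

A key that currently holds two values (mid-rotation) returns two rows.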

Permissions
Requires ALTER ANY COLUMN ENCRYPTION KEY permission on the database.

Examples
A. Adding a column encryption key value
The following example alters a column encryption key called MyCEK .

ALTER COLUMN ENCRYPTION KEY MyCEK
ADD VALUE
(
COLUMN_MASTER_KEY = MyCMK2,
ALGORITHM = 'RSA_OAEP',
ENCRYPTED_VALUE =
0x016E000001630075007200720065006E00740075007300650072002F006D0079002F0064006500650063006200660034006100340031
00300038003400620035003300320036006600320063006200620035003000360038006500390062006100300032003000360061003700
3800310066001DDA6134C3B73A90D349C8905782DD819B428162CF5B051639BA46EC69A7C8C8F81591A92C395711493B25DCBCCC57836E
5B9F17A0713E840721D098F3F8E023ABCDFE2F6D8CC4339FC8F88630ED9EBADA5CA8EEAFA84164C1095B12AE161EABC1DF778C07F07D41
3AF1ED900F578FC00894BEE705EAC60F4A5090BBE09885D2EFE1C915F7B4C581D9CE3FDAB78ACF4829F85752E9FC985DEB8773889EE4A1
945BD554724803A6F5DC0A2CD5EFE001ABED8D61E8449E4FAA9E4DD392DA8D292ECC6EB149E843E395CDE0F98D04940A28C4B05F747149
B34A0BAEC04FFF3E304C84AF1FF81225E615B5F94E334378A0A888EF88F4E79F66CB377E3C21964AACB5049C08435FE84EEEF39D20A665
C17E04898914A85B3DE23D56575EBC682D154F4F15C37723E04974DB370180A9A579BC84F6BC9B5E7C223E5CBEE721E57EE07EFDCC0A32
57BBEBF9ADFFB00DBF7EF682EC1C4C47451438F90B4CF8DA709940F72CFDC91C6EB4E37B4ED7E2385B1FF71B28A1D2669FBEB18EA89F9D
391D2FDDEA0ED362E6A591AC64EF4AE31CA8766C259ECB77D01A7F5C36B8418F91C1BEADDD4491C80F0016B66421B4B788C55127135DA2
FA625FB7FD195FB40D90A6C67328602ECAF3EC4F5894BFD84A99EB4753BE0D22E0D4DE6A0ADFEDC80EB1B556749B4A8AD00E73B329C958
27AB91C0256347E85E3C5FD6726D0E1FE82C925D3DF4A9
);
GO

B. Dropping a column encryption key value


The following example alters a column encryption key called MyCEK by dropping a value.

ALTER COLUMN ENCRYPTION KEY MyCEK
DROP VALUE
(
COLUMN_MASTER_KEY = MyCMK
);
GO
See Also
CREATE COLUMN ENCRYPTION KEY (Transact-SQL)
DROP COLUMN ENCRYPTION KEY (Transact-SQL)
CREATE COLUMN MASTER KEY (Transact-SQL)
Always Encrypted (Database Engine)
sys.column_encryption_keys (Transact-SQL)
sys.column_encryption_key_values (Transact-SQL)
sys.columns (Transact-SQL)
ALTER CREDENTIAL (Transact-SQL)
11/27/2018 • 2 minutes to read

APPLIES TO: SQL Server (starting with 2008) Azure SQL Database (Managed Instance only) Azure
SQL Data Warehouse Parallel Data Warehouse
Changes the properties of a credential.

Transact-SQL Syntax Conventions

Syntax
ALTER CREDENTIAL credential_name WITH IDENTITY = 'identity_name'
[ , SECRET = 'secret' ]

Arguments
credential_name
Specifies the name of the credential that is being altered.
IDENTITY ='identity_name'
Specifies the name of the account to be used when connecting outside the server.
SECRET ='secret'
Specifies the secret required for outgoing authentication. secret is optional.

IMPORTANT
Azure SQL Database only supports Azure Key Vault and Shared Access Signature identities. Windows user identities are not
supported.

Remarks
When a credential is changed, the values of both identity_name and secret are reset. If the optional SECRET
argument is not specified, the value of the stored secret will be set to NULL.
The secret is encrypted by using the service master key. If the service master key is regenerated, the secret is
reencrypted by using the new service master key.
Information about credentials is visible in the sys.credentials catalog view.
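
For example, to confirm the identity and modification time of a credential after running ALTER CREDENTIAL (the credential name Saddles is taken from the example below; the secret itself is never exposed by this view):

```sql
SELECT name, credential_identity, create_date, modify_date
FROM sys.credentials
WHERE name = 'Saddles';
```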

Permissions
Requires ALTER ANY CREDENTIAL permission. If the credential is a system credential, requires CONTROL
SERVER permission.

Examples
A. Changing the password of a credential
The following example changes the secret stored in a credential called Saddles . The credential contains the
Windows login RettigB and its password. The new password is added to the credential using the SECRET clause.

ALTER CREDENTIAL Saddles WITH IDENTITY = 'RettigB',
    SECRET = 'sdrlk8$40-dksli87nNN8';
GO

B. Removing the password from a credential


The following example removes the password from a credential named Frames . The credential contains Windows
login Aboulrus8 and a password. After the statement is executed, the credential will have a NULL password
because the SECRET option is not specified.

ALTER CREDENTIAL Frames WITH IDENTITY = 'Aboulrus8';
GO

See Also
Credentials (Database Engine)
CREATE CREDENTIAL (Transact-SQL)
DROP CREDENTIAL (Transact-SQL)
ALTER DATABASE SCOPED CREDENTIAL (Transact-SQL)
CREATE LOGIN (Transact-SQL)
sys.credentials (Transact-SQL)
ALTER CRYPTOGRAPHIC PROVIDER (Transact-SQL)
10/1/2018 • 2 minutes to read

APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Alters a cryptographic provider within SQL Server from an Extensible Key Management (EKM) provider.
Transact-SQL Syntax Conventions

Syntax
ALTER CRYPTOGRAPHIC PROVIDER provider_name
[ FROM FILE = path_of_DLL ]
ENABLE | DISABLE

Arguments
provider_name
Name of the Extensible Key Management provider.
Path_of_DLL
Path of the .dll file that implements the SQL Server Extensible Key Management interface.
ENABLE | DISABLE
Enables or disables a provider.

Remarks
If the provider changes the .dll file that is used to implement Extensible Key Management in SQL Server, you must
use the ALTER CRYPTOGRAPHIC PROVIDER statement.
When the .dll file path is updated by using the ALTER CRYPTOGRAPHIC PROVIDER statement, SQL Server
performs the following actions:
Disables the provider.
Verifies the DLL signature and ensures that the .dll file has the same GUID as the one recorded in the catalog.
Updates the DLL version in the catalog.
When an EKM provider is set to DISABLE, any attempts on new connections to use the provider with encryption
statements will fail.
To disable a provider, all sessions that use the provider must be terminated.
When an EKM provider dll does not implement all of the necessary methods, ALTER CRYPTOGRAPHIC
PROVIDER can return error 33085:
One or more methods cannot be found in cryptographic provider library '%.*ls'.

When the header file used to create the EKM provider dll is out of date, ALTER CRYPTOGRAPHIC PROVIDER can
return error 33032:
SQL Crypto API version '%02d.%02d' implemented by provider is not supported. Supported version is '%02d.%02d'.
Permissions
Requires CONTROL permission on the cryptographic provider.

Examples
The following example alters a cryptographic provider, called SecurityProvider in SQL Server, to a newer version
of a .dll file. This new version is named c:\SecurityProvider\SecurityProvider_v2.dll and is installed on the server.
The provider's certificate must be installed on the server.
1. Disable the provider to perform the upgrade. This will terminate all open cryptographic sessions.

ALTER CRYPTOGRAPHIC PROVIDER SecurityProvider
DISABLE;
GO

2. Upgrade the provider .dll file. The GUID must be the same as the previous version, but the version can be
different.

ALTER CRYPTOGRAPHIC PROVIDER SecurityProvider
FROM FILE = 'c:\SecurityProvider\SecurityProvider_v2.dll';
GO

3. Enable the upgraded provider.

ALTER CRYPTOGRAPHIC PROVIDER SecurityProvider
ENABLE;
GO

See Also
Extensible Key Management (EKM)
CREATE CRYPTOGRAPHIC PROVIDER (Transact-SQL)
DROP CRYPTOGRAPHIC PROVIDER (Transact-SQL)
CREATE SYMMETRIC KEY (Transact-SQL)
Extensible Key Management Using Azure Key Vault (SQL Server)
ALTER DATABASE (Transact-SQL)
1/8/2019 • 31 minutes to read

Modifies certain configuration options of a database.


This article provides the syntax, arguments, remarks, permissions, and examples for whichever SQL product
you choose.
For more information about the syntax conventions, see Transact-SQL Syntax Conventions.


SQL Server
Overview
In SQL Server, this statement modifies a database, or the files and filegroups associated with the database.
Adds or removes files and filegroups from a database, changes the attributes of a database or its files and
filegroups, changes the database collation, and sets database options. Database snapshots cannot be
modified. To modify database options associated with replication, use sp_replicationdboption.
Because of its length, the ALTER DATABASE syntax is separated into multiple articles.
ALTER DATABASE
The current article provides the syntax and related information for changing the name and the collation of a
database.
ALTER DATABASE File and Filegroup Options
Provides the syntax and related information for adding and removing files and filegroups from a database,
and for changing the attributes of the files and filegroups.
ALTER DATABASE SET Options
Provides the syntax and related information for changing the attributes of a database by using the SET
options of ALTER DATABASE.
ALTER DATABASE Database Mirroring
Provides the syntax and related information for the SET options of ALTER DATABASE that are related to
database mirroring.
ALTER DATABASE SET HADR
Provides the syntax and related information for the Always On availability groups options of ALTER
DATABASE for configuring a secondary database on a secondary replica of an Always On availability group.
ALTER DATABASE Compatibility Level
Provides the syntax and related information for the SET options of ALTER DATABASE that are related to
database compatibility levels.

Syntax
-- SQL Server Syntax
ALTER DATABASE { database_name | CURRENT }
{
MODIFY NAME = new_database_name
| COLLATE collation_name
| <file_and_filegroup_options>
| SET <option_spec> [ ,...n ] [ WITH <termination> ]
| SET COMPATIBILITY_LEVEL = { 140 | 130 | 120 | 110 | 100 | 90 }

}
[;]

<file_and_filegroup_options>::=
<add_or_modify_files>::=
<filespec>::=
<add_or_modify_filegroups>::=
<filegroup_updatability_option>::=

<option_spec>::=
<auto_option> ::=
<change_tracking_option> ::=
<cursor_option> ::=
<database_mirroring_option> ::=
<date_correlation_optimization_option> ::=
<db_encryption_option> ::=
<db_state_option> ::=
<db_update_option> ::=
<db_user_access_option> ::=
<delayed_durability_option> ::=
<external_access_option> ::=
<FILESTREAM_options> ::=
<HADR_options> ::=
<parameterization_option> ::=
<query_store_options> ::=
<recovery_option> ::=
<service_broker_option> ::=
<snapshot_option> ::=
<sql_option> ::=
<termination> ::=

<compatibility_level>
{ 140 | 130 | 120 | 110 | 100 | 90 }

Arguments
database_name
Is the name of the database to be modified.

NOTE
This option is not available in a Contained Database.

CURRENT
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Designates that the current database in use should be altered.
MODIFY NAME =new_database_name
Renames the database with the name specified as new_database_name.
COLLATE collation_name
Specifies the collation for the database. collation_name can be either a Windows collation name or a SQL
collation name. If not specified, the database is assigned the collation of the instance of SQL Server.

NOTE
Collation cannot be changed after database has been created on Azure SQL Database.

When creating databases with other than the default collation, the data in the database always respects the
specified collation. For SQL Server, when creating a contained database, the internal catalog information is
maintained using the SQL Server default collation, Latin1_General_100_CI_AS_WS_KS_SC.
For more information about the Windows and SQL collation names, see COLLATE.
<delayed_durability_option> ::=
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
For more information see ALTER DATABASE SET Options and Control Transaction Durability.
<file_and_filegroup_options>::=
For more information, see ALTER DATABASE File and Filegroup Options.

Remarks
To remove a database, use DROP DATABASE.
To decrease the size of a database, use DBCC SHRINKDATABASE.
The ALTER DATABASE statement must run in autocommit mode (the default transaction management
mode) and is not allowed in an explicit or implicit transaction.
The state of a database file (for example, online or offline), is maintained independently from the state of the
database. For more information, see File States. The state of the files within a filegroup determines the
availability of the whole filegroup. For a filegroup to be available, all files within the filegroup must be
online. If a filegroup is offline, any attempt to access the filegroup by an SQL statement will fail with an error.
When you build query plans for SELECT statements, the query optimizer avoids nonclustered indexes and
indexed views that reside in offline filegroups. This enables these statements to succeed. However, if the
offline filegroup contains the heap or clustered index of the target table, the SELECT statements fail.
Additionally, any INSERT, UPDATE, or DELETE statement that modifies a table with any index in an offline
filegroup will fail.
When a database is in the RESTORING state, most ALTER DATABASE statements will fail. The exception is
setting database mirroring options. A database may be in the RESTORING state during an active restore
operation or when a restore operation of a database or log file fails because of a corrupted backup file.
The plan cache for the instance of SQL Server is cleared by setting one of the following options.

OFFLINE
ONLINE
MODIFY_NAME
COLLATE
READ_ONLY
READ_WRITE
MODIFY FILEGROUP DEFAULT
MODIFY FILEGROUP READ_WRITE
MODIFY FILEGROUP READ_ONLY
PAGE_VERIFY

Clearing the plan cache causes a recompilation of all subsequent execution plans and can cause a sudden,
temporary decrease in query performance. For each cleared cachestore in the plan cache, the SQL Server
error log contains the following informational message: " SQL Server has encountered %d occurrence(s) of
cachestore flush for the '%s' cachestore (part of plan cache) due to some database maintenance or
reconfigure operations". This message is logged every five minutes as long as the cache is flushed within
that time interval.
The procedure cache is also flushed in the following scenarios:
A database has the AUTO_CLOSE database option set to ON. When no user connection references or
uses the database, the background task tries to close and shut down the database automatically.
You run several queries against a database that has default options. Then, the database is dropped.
A database snapshot for a source database is dropped.
You successfully rebuild the transaction log for a database.
You restore a database backup.
You detach a database.

Changing the Database Collation


Before you apply a different collation to a database, make sure that the following conditions are in place:
You are the only one currently using the database.
No schema-bound object depends on the collation of the database.
If the following objects, which depend on the database collation, exist in the database, the ALTER
DATABASE database_name COLLATE statement will fail. SQL Server will return an error message for
each object blocking the ALTER action:
User-defined functions and views created with SCHEMABINDING.
Computed columns.
CHECK constraints.
Table-valued functions that return tables with character columns with collations inherited from
the default database collation.
Dependency information for non-schema-bound entities is automatically updated when the
database collation is changed.
Changing the database collation does not create duplicates among any system names for the database
objects. If duplicate names result from the changed collation, the following namespaces may cause the
failure of a database collation change:
Object names such as a procedure, table, trigger, or view.
Schema names.
Principals such as a group, role, or user.
Scalar-type names such as system and user-defined types.
Full-text catalog names.
Column or parameter names within an object.
Index names within a table.
Duplicate names resulting from the new collation will cause the change action to fail, and SQL Server will
return an error message specifying the namespace where the duplicate was found.

Viewing Database Information


You can use catalog views, system functions, and system stored procedures to return information about
databases, files, and filegroups.
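
For example, the following queries (a sketch; the database name is illustrative) return the state, collation, and compatibility level of a database from sys.databases, and its file sizes from sys.master_files, where size is reported in 8-KB pages:

```sql
SELECT name, state_desc, collation_name, compatibility_level
FROM sys.databases
WHERE name = N'AdventureWorks2012';

-- size is in 8-KB pages; convert to MB
SELECT name, type_desc, state_desc, size * 8 / 1024 AS size_mb
FROM sys.master_files
WHERE database_id = DB_ID(N'AdventureWorks2012');
```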

Permissions
Requires ALTER permission on the database.

Examples
A. Changing the name of a database
The following example changes the name of the AdventureWorks2012 database to Northwind .

USE master;
GO
ALTER DATABASE AdventureWorks2012
Modify Name = Northwind ;
GO

B. Changing the collation of a database


The following example creates a database named testdb with the SQL_Latin1_General_CP1_CI_AS collation,
and then changes the collation of the testdb database to French_CI_AI .
Applies to: SQL Server 2008 through SQL Server 2017.

USE master;
GO

CREATE DATABASE testdb


COLLATE SQL_Latin1_General_CP1_CI_AS ;
GO

ALTER DATABASE testDB


COLLATE French_CI_AI ;
GO

See Also
CREATE DATABASE
DATABASEPROPERTYEX
DROP DATABASE
SET TRANSACTION ISOLATION LEVEL
EVENTDATA
sp_configure
sp_spaceused
sys.databases
sys.database_files
sys.database_mirroring_witnesses
sys.data_spaces
sys.filegroups
sys.master_files
System Databases


Azure SQL Database logical server


Overview
In Azure SQL Database, use this statement to modify a database on a logical server. Use this statement to
change the name of a database, change the edition and service objective of the database, join or remove the
database to or from an elastic pool, set database options, add or remove the database as a secondary in a
geo-replication relationship, and set the database compatibility level.
Because of its length, the ALTER DATABASE syntax is separated into multiple articles.
ALTER DATABASE
The current article provides the syntax and related information for changing the name and the collation of a
database.
ALTER DATABASE SET Options
Provides the syntax and related information for changing the attributes of a database by using the SET
options of ALTER DATABASE.
ALTER DATABASE Compatibility Level
Provides the syntax and related information for the SET options of ALTER DATABASE that are related to
database compatibility levels.

Syntax
-- Azure SQL Database Syntax
ALTER DATABASE { database_name | CURRENT }
{
MODIFY NAME = new_database_name
| MODIFY ( <edition_options> [, ... n] )
| SET { <option_spec> [ ,... n ] WITH <termination>}
| SET COMPATIBILITY_LEVEL = { 140 | 130 | 120 | 110 | 100 }
| ADD SECONDARY ON SERVER <partner_server_name>
[WITH ( <add-secondary-option>::= [, ... n] ) ]
| REMOVE SECONDARY ON SERVER <partner_server_name>
| FAILOVER
| FORCE_FAILOVER_ALLOW_DATA_LOSS
}
[;]

<edition_options> ::=
{
    MAXSIZE = { 100 MB | 250 MB | 500 MB | 1 ... 1024 ... 4096 GB }
    | EDITION = { 'basic' | 'standard' | 'premium' | 'GeneralPurpose' | 'BusinessCritical' | 'Hyperscale' }
    | SERVICE_OBJECTIVE =
        { <service-objective>
        | { ELASTIC_POOL (name = <elastic_pool_name>) }
        }
}

<add-secondary-option> ::=
{
ALLOW_CONNECTIONS = { ALL | NO }
| SERVICE_OBJECTIVE =
{ <service-objective>
| { ELASTIC_POOL ( name = <elastic_pool_name>) }
}
}

<service-objective> ::= { 'S0' | 'S1' | 'S2' | 'S3'| 'S4'| 'S6'| 'S7'| 'S9'| 'S12' |
| 'P1' | 'P2' | 'P4'| 'P6' | 'P11' | 'P15'
| 'GP_GEN4_1' | 'GP_GEN4_2' | 'GP_GEN4_4' | 'GP_GEN4_8' | 'GP_GEN4_16' | 'GP_GEN4_24' |
| 'BC_GEN4_1' | 'BC_GEN4_2' | 'BC_GEN4_4' | 'BC_GEN4_8' | 'BC_GEN4_16' | 'BC_GEN4_24' |
| 'HS_GEN4_1' | 'HS_GEN4_2' | 'HS_GEN4_4' | 'HS_GEN4_8' | 'HS_GEN4_16' | 'HS_GEN4_24' |
| 'GP_GEN5_2' | 'GP_GEN5_4' | 'GP_GEN5_8' | 'GP_GEN5_16' | 'GP_GEN5_24' | 'GP_GEN5_32' |
'GP_GEN5_48' | 'GP_GEN5_80' |
| 'BC_GEN5_2' | 'BC_GEN5_4' | 'BC_GEN5_8' | 'BC_GEN5_16' | 'BC_GEN5_24' | 'BC_GEN5_32' |
'BC_GEN5_48' | 'BC_GEN5_80' |
| 'HS_GEN5_2' | 'HS_GEN5_4' | 'HS_GEN5_8' | 'HS_GEN5_16' | 'HS_GEN5_24' | 'HS_GEN5_32' |
'HS_GEN5_48' | 'HS_GEN5_80' |
}

<option_spec> ::=
{
<auto_option>
| <change_tracking_option>
| <cursor_option>
| <db_encryption_option>
| <db_update_option>
| <db_user_access_option>
| <delayed_durability_option>
| <parameterization_option>
| <query_store_options>
| <snapshot_option>
| <sql_option>
| <target_recovery_time_option>
| <termination>
| <temporal_history_retention>
}
Arguments
database_name
Is the name of the database to be modified.
CURRENT
Designates that the current database in use should be altered.
MODIFY NAME =new_database_name
Renames the database with the name specified as new_database_name. The following example changes the
name of a database db1 to db2 :

ALTER DATABASE db1
MODIFY Name = db2 ;

MODIFY (EDITION = ['basic' | 'standard' | 'premium' |'GeneralPurpose' | 'BusinessCritical' | 'Hyperscale'])


Changes the service tier of the database.
The following example changes edition to premium :

ALTER DATABASE current
MODIFY (EDITION = 'premium');

EDITION change fails if the MAXSIZE property for the database is set to a value outside the valid range
supported by that edition.
MODIFY (MAXSIZE = [100 MB | 500 MB | 1 | 1024...4096] GB )
Specifies the maximum size of the database. The maximum size must comply with the valid set of values for
the EDITION property of the database. Changing the maximum size of the database may cause the
database EDITION to be changed.

NOTE
The MAXSIZE argument does not apply to single databases in the Hyperscale service tier. Hyperscale tier databases
grow as needed, up to 100 TB. The SQL Database service adds storage automatically - you do not need to set a
maximum size.

DTU -based model

MAXSIZE BASIC S0-S2 S3-S12 P1-P6 P11-P15

100 MB √ √ √ √ √

250 MB √ √ √ √ √

500 MB √ √ √ √ √

1 GB √ √ √ √ √

2 GB √ (D) √ √ √ √

5 GB N/A √ √ √ √

10 GB N/A √ √ √ √

20 GB N/A √ √ √ √

30 GB N/A √ √ √ √

40 GB N/A √ √ √ √

50 GB N/A √ √ √ √

100 GB N/A √ √ √ √

150 GB N/A √ √ √ √

200 GB N/A √ √ √ √

250 GB N/A √ (D) √ (D) √ √

300 GB N/A √ √ √ √

400 GB N/A √ √ √ √

500 GB N/A √ √ √ (D) √

750 GB N/A √ √ √ √

1024 GB N/A √ √ √ √ (D)

From 1024 GB up to 4096 GB in increments of 256 GB*   N/A   N/A   N/A   N/A   √

* P11 and P15 allow MAXSIZE up to 4 TB with 1024 GB being the default size. P11 and P15 can use up to 4
TB of included storage at no additional charge. In the Premium tier, MAXSIZE greater than 1 TB is currently
available in the following regions: US East2, West US, US Gov Virginia, West Europe, Germany Central,
South East Asia, Japan East, Australia East, Canada Central, and Canada East. For additional details
regarding resource limitations for the DTU -based model, see DTU -based resource limits.
The MAXSIZE value for the DTU -based model, if specified, has to be a valid value shown in the table above
for the service tier specified.
vCore-based model
General Purpose service tier - Generation 4 compute platform

MAXSIZE            GP_GEN4_1   GP_GEN4_2   GP_GEN4_4   GP_GEN4_8   GP_GEN4_16   GP_GEN4_24

Max data size (GB) 1024        1024        1536        3072        4096         4096
General Purpose service tier - Generation 5 compute platform

MAXSIZE            GP_GEN5_2   GP_GEN5_4   GP_GEN5_8   GP_GEN5_16   GP_GEN5_24   GP_GEN5_32   GP_GEN5_48   GP_GEN5_80

Max data size (GB) 1024        1024        1536        3072         4096         4096         4096         4096

Business Critical service tier - Generation 4 compute platform

PERFORMANCE LEVEL  BC_GEN4_1   BC_GEN4_2   BC_GEN4_4   BC_GEN4_8   BC_GEN4_16

Max data size (GB) 1024        1024        1024        1024        1024

Business Critical service tier - Generation 5 compute platform

MAXSIZE            BC_GEN5_2   BC_GEN5_4   BC_GEN5_8   BC_GEN5_16   BC_GEN5_24   BC_GEN5_32   BC_GEN5_48   BC_GEN5_80

Max data size (GB) 1024        1024        1024        1024         2048         4096         4096         4096

If no MAXSIZE value is set when using the vCore model, the default is 32 GB. For additional details regarding
resource limitations for vCore-based model, see vCore-based resource limits.
The following rules apply to MAXSIZE and EDITION arguments:
If EDITION is specified but MAXSIZE is not specified, the default value for the edition is used. For
example, if the EDITION is set to Standard, and the MAXSIZE is not specified, then the MAXSIZE is
automatically set to 500 MB.
If neither MAXSIZE nor EDITION is specified, the EDITION is set to Standard (S0), and MAXSIZE is
set to 250 GB.
MODIFY (SERVICE_OBJECTIVE = <service-objective>)
Specifies the performance level. The following example changes service objective of a premium database to
P6 :

ALTER DATABASE current
MODIFY (SERVICE_OBJECTIVE = 'P6');

Available values for the service objective are: S0 , S1 , S2 , S3 , S4 , S6 , S7 ,
S9 , S12 , P1 , P2 , P4 , P6 , P11 , P15 , GP_GEN4_1 , GP_GEN4_2 , GP_GEN4_4 , GP_GEN4_8 , GP_GEN4_16 ,
GP_GEN4_24 , BC_GEN4_1 , BC_GEN4_2 , BC_GEN4_4 , BC_GEN4_8 , BC_GEN4_16 , BC_GEN4_24 , GP_Gen5_2 , GP_Gen5_4 ,
GP_Gen5_8 , GP_Gen5_16 , GP_Gen5_24 , GP_Gen5_32 , GP_Gen5_48 , GP_Gen5_80 , BC_Gen5_2 , BC_Gen5_4 ,
BC_Gen5_8 , BC_Gen5_16 , BC_Gen5_24 , BC_Gen5_32 , BC_Gen5_48 , BC_Gen5_80 , HS_GEN4_1 , HS_GEN4_2 ,
HS_GEN4_4 , HS_GEN4_8 , HS_GEN4_16 , HS_GEN4_24 , HS_Gen5_2 , HS_Gen5_4 , HS_Gen5_8 , HS_Gen5_16 ,
HS_Gen5_24 , HS_Gen5_32 , HS_Gen5_48 , HS_Gen5_80 .

For service objective descriptions and more information about the size, editions, and the service objectives
combinations, see Azure SQL Database Service Tiers and Performance Levels, DTU -based resource limits
and vCore-based resource limits. Support for PRS service objectives have been removed. For questions, use
this e-mail alias: [email protected].
MODIFY (SERVICE_OBJECTIVE = ELASTIC_POOL (name = <elastic_pool_name>)
To add an existing database to an elastic pool, set the SERVICE_OBJECTIVE of the database to
ELASTIC_POOL and provide the name of the elastic pool. You can also use this option to change the
database to a different elastic pool within the same server. For more information, see Create and manage a
SQL Database elastic pool. To remove a database from an elastic pool, use ALTER DATABASE to set the
SERVICE_OBJECTIVE to a single database performance level.

NOTE
Databases in the Hyperscale service tier cannot be added to an elastic pool.
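
For example, the following sketch moves an existing database into an elastic pool and later back out to a single-database service objective (the pool name MyPool is hypothetical):

```sql
-- Move the database into the pool (pool name is hypothetical).
ALTER DATABASE db1
MODIFY (SERVICE_OBJECTIVE = ELASTIC_POOL (name = MyPool));

-- Remove it from the pool by assigning a single-database objective.
ALTER DATABASE db1
MODIFY (SERVICE_OBJECTIVE = 'S1');
```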

ADD SECONDARY ON SERVER <partner_server_name>


Creates a geo-replication secondary database with the same name on a partner server, making the local
database into a geo-replication primary, and begins asynchronously replicating data from the primary to the
new secondary. If a database with the same name already exists on the secondary, the command fails. The
command is executed on the master database on the server hosting the local database that becomes the
primary.

IMPORTANT
The Hyperscale service tier does not currently support geo-replication.

WITH ALLOW_CONNECTIONS { ALL | NO }


When ALLOW_CONNECTIONS is not specified, it is set to ALL by default. If it is set to ALL, the secondary is a
read-only database that allows all logins with the appropriate permissions to connect.
WITH SERVICE_OBJECTIVE { S0, S1, S2, S3, S4, S6, S7, S9, S12, P1, P2, P4, P6, P11, P15,
GP_GEN4_1, GP_GEN4_2, GP_GEN4_4, GP_GEN4_8, GP_GEN4_16, GP_GEN4_24, BC_GEN4_1, BC_GEN4_2, BC_GEN4_4,
BC_GEN4_8, BC_GEN4_16, BC_GEN4_24, GP_Gen5_2, GP_Gen5_4, GP_Gen5_8, GP_Gen5_16, GP_Gen5_24,
GP_Gen5_32, GP_Gen5_48, GP_Gen5_80, BC_Gen5_2, BC_Gen5_4, BC_Gen5_8, BC_Gen5_16, BC_Gen5_24,
BC_Gen5_32, BC_Gen5_48, BC_Gen5_80 }

When SERVICE_OBJECTIVE is not specified, the secondary database is created at the same service level as
the primary database. When SERVICE_OBJECTIVE is specified, the secondary database is created at the
specified level. This option supports creating geo-replicated secondaries with less expensive service levels.
The SERVICE_OBJECTIVE specified must be within the same edition as the source. For example, you
cannot specify S0 if the edition is Premium.
ELASTIC_POOL (name = <elastic_pool_name>)
When ELASTIC_POOL is not specified, the secondary database is not created in an elastic pool. When
ELASTIC_POOL is specified, the secondary database is created in the specified pool.
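Combining the options above, a secondary can be created at a cheaper service objective or placed directly into an elastic pool. A sketch, assuming a partner server partnersrv and a pool pool2 already exist (both names are illustrative):

```sql
-- Create a geo-replication secondary at a lower (same-edition) service objective.
ALTER DATABASE db1
    ADD SECONDARY ON SERVER partnersrv
    WITH ( ALLOW_CONNECTIONS = ALL, SERVICE_OBJECTIVE = 'S1' );

-- Or create the secondary directly inside an elastic pool on the partner server.
ALTER DATABASE db1
    ADD SECONDARY ON SERVER partnersrv
    WITH ( SERVICE_OBJECTIVE = ELASTIC_POOL ( name = pool2 ) );
```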

IMPORTANT
The user executing the ADD SECONDARY command must be DBManager on the primary server, must have db_owner
membership in the local database, and must be DBManager on the secondary server.

REMOVE SECONDARY ON SERVER <partner_server_name>


Removes the specified geo-replicated secondary database on the specified server. The command is executed
on the master database on the server hosting the primary database.
IMPORTANT
The user executing the REMOVE SECONDARY command must be DBManager on the primary server.

FAILOVER
Promotes the secondary database in geo-replication partnership on which the command is executed to
become the primary and demotes the current primary to become the new secondary. As part of this
process, the geo-replication mode is temporarily switched from asynchronous mode to synchronous mode.
During the failover process:
1. The primary stops taking new transactions.
2. All outstanding transactions are flushed to the secondary.
3. The secondary becomes the primary and begins asynchronous geo-replication with the old primary,
which becomes the new secondary.
This sequence ensures that no data loss occurs. The period during which both databases are unavailable is
on the order of 0-25 seconds while the roles are switched. The total operation should take no longer than
about one minute. If the primary database is unavailable when this command is issued, the command fails
with an error message indicating that the primary database is not available. If the failover process does not
complete and appears stuck, you can use the force failover command and accept data loss; then, if you
need to recover the lost data, call devops (CSS) to recover it.

IMPORTANT
The user executing the FAILOVER command must be DBManager on both the primary server and the secondary
server.

FORCE_FAILOVER_ALLOW_DATA_LOSS
Promotes the secondary database in geo-replication partnership on which the command is executed to
become the primary and demotes the current primary to become the new secondary. Use this command
only when the current primary is no longer available. It is designed for disaster recovery only, when
restoring availability is critical, and some data loss is acceptable.
During a forced failover:
1. The specified secondary database immediately becomes the primary database and begins accepting
new transactions.
2. When the original primary can reconnect with the new primary, an incremental backup is taken on
the original primary, and the original primary becomes a new secondary.
3. To recover data from this incremental backup on the old primary, the user engages devops/CSS.
4. If there are additional secondaries, they are automatically reconfigured to become secondaries of the
new primary. This process is asynchronous and there may be a delay until this process completes.
Until the reconfiguration has completed, the secondaries continue to be secondaries of the old
primary.
IMPORTANT
The user executing the FORCE_FAILOVER_ALLOW_DATA_LOSS command must be DBManager on both the primary
server and the secondary server.
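A sketch of the forced-failover command, executed in the master database of the server hosting the secondary (db1 is an illustrative database name):

```sql
-- Run on the secondary server; promotes db1, accepting possible data loss.
ALTER DATABASE db1 FORCE_FAILOVER_ALLOW_DATA_LOSS;
```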

Remarks
To remove a database, use DROP DATABASE.
To decrease the size of a database, use DBCC SHRINKDATABASE.
The ALTER DATABASE statement must run in autocommit mode (the default transaction management
mode) and is not allowed in an explicit or implicit transaction.
Clearing the plan cache causes a recompilation of all subsequent execution plans and can cause a sudden,
temporary decrease in query performance. For each cleared cachestore in the plan cache, the SQL Server
error log contains the following informational message: " SQL Server has encountered %d occurrence(s) of
cachestore flush for the '%s' cachestore (part of plan cache) due to some database maintenance or
reconfigure operations". This message is logged every five minutes as long as the cache is flushed within
that time interval.
The procedure cache is also flushed in the following scenario: You run several queries against a database
that has default options. Then, the database is dropped.

Viewing Database Information


You can use catalog views, system functions, and system stored procedures to return information about
databases, files, and filegroups.
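For example, a minimal sketch using catalog views and a system function (db1 is an illustrative database name):

```sql
-- Edition and service objective of a database.
SELECT DATABASEPROPERTYEX('db1', 'EDITION') AS edition,
       DATABASEPROPERTYEX('db1', 'ServiceObjective') AS service_objective;

-- Files and filegroups of the current database.
SELECT name, type_desc, size FROM sys.database_files;
SELECT name, type FROM sys.filegroups;
```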

Permissions
Only the server-level principal login (created by the provisioning process) or members of the dbmanager
database role can alter a database.

IMPORTANT
The owner of the database cannot alter the database unless they are a member of the dbmanager role.

Examples
A. Check the edition options and change them:

SELECT Edition = DATABASEPROPERTYEX('db1', 'EDITION'),
       ServiceObjective = DATABASEPROPERTYEX('db1', 'ServiceObjective'),
       MaxSizeInBytes = DATABASEPROPERTYEX('db1', 'MaxSizeInBytes');

ALTER DATABASE [db1] MODIFY (EDITION = 'Premium', MAXSIZE = 1024 GB, SERVICE_OBJECTIVE = 'P15');

B. Moving a database to a different elastic pool


Moves an existing database into a pool named pool1:

ALTER DATABASE db1
MODIFY ( SERVICE_OBJECTIVE = ELASTIC_POOL ( name = pool1 ) ) ;
C. Add a Geo-Replication Secondary
Creates a readable secondary database db1 on server secondaryserver as a geo-replication secondary of db1 on the local server.

ALTER DATABASE db1
ADD SECONDARY ON SERVER secondaryserver
WITH ( ALLOW_CONNECTIONS = ALL )

D. Remove a Geo-Replication Secondary

Removes the secondary database db1 on server secondaryserver.

ALTER DATABASE db1
REMOVE SECONDARY ON SERVER secondaryserver

E. Failover to a Geo-Replication Secondary

Promotes a secondary database db1 on server secondaryserver to become the new primary database when
executed on server secondaryserver.

ALTER DATABASE db1 FAILOVER

See also
CREATE DATABASE - Azure SQL Database
DATABASEPROPERTYEX
DROP DATABASE
SET TRANSACTION ISOLATION LEVEL
EVENTDATA
sp_configure
sp_spaceused
sys.databases
sys.database_files
sys.database_mirroring_witnesses
sys.data_spaces
sys.filegroups
sys.master_files
System Databases

APPLIES TO: Azure SQL Database Managed Instance

Azure SQL Database Managed Instance


Overview
In Azure SQL Database Managed Instance, use this statement to set database options.
Because of its length, the ALTER DATABASE syntax is separated into multiple articles.
ALTER DATABASE
The current article provides the syntax and related information for setting file and filegroup options, for
setting database options, and for setting the database compatibility level.
ALTER DATABASE File and Filegroup Options Provides the syntax and related information for adding and
removing files and filegroups from a database, and for changing the attributes of the files and filegroups.
ALTER DATABASE SET Options Provides the syntax and related information for changing the attributes of a
database by using the SET options of ALTER DATABASE.
ALTER DATABASE Compatibility Level Provides the syntax and related information for the SET options of
ALTER DATABASE that are related to database compatibility levels.

Syntax
-- Azure SQL Database Managed Instance Syntax
ALTER DATABASE { database_name | CURRENT }
{
<file_and_filegroup_options>
| SET <option_spec> [ ,...n ]
| SET COMPATIBILITY_LEVEL = { 140 | 130 | 120 | 110 | 100 }
}
[;]

<file_and_filegroup_options>::=
<add_or_modify_files>::=
<filespec>::=
<add_or_modify_filegroups>::=
<filegroup_updatability_option>::=

<option_spec> ::=
{
<auto_option>
| <change_tracking_option>
| <cursor_option>
| <db_encryption_option>
| <db_update_option>
| <db_user_access_option>
| <delayed_durability_option>
| <parameterization_option>
| <query_store_options>
| <snapshot_option>
| <sql_option>
| <target_recovery_time_option>
| <temporal_history_retention>
}

Arguments
database_name
Is the name of the database to be modified.
CURRENT
Designates that the current database in use should be altered.
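For example, a minimal sketch that alters the session's current database; QUERY_STORE is one of the SET options listed above:

```sql
-- Alter whichever database the current session is using.
ALTER DATABASE CURRENT SET QUERY_STORE = ON;
```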

Remarks
To remove a database, use DROP DATABASE.
To decrease the size of a database, use DBCC SHRINKDATABASE.
The ALTER DATABASE statement must run in autocommit mode (the default transaction management
mode) and is not allowed in an explicit or implicit transaction.
Clearing the plan cache causes a recompilation of all subsequent execution plans and can cause a sudden,
temporary decrease in query performance. For each cleared cachestore in the plan cache, the SQL Server
error log contains the following informational message: " SQL Server has encountered %d occurrence(s) of
cachestore flush for the '%s' cachestore (part of plan cache) due to some database maintenance or
reconfigure operations". This message is logged every five minutes as long as the cache is flushed within
that time interval.
The procedure cache is also flushed in the following scenario: You run several queries against a database
that has default options. Then, the database is dropped.

Viewing Database Information


You can use catalog views, system functions, and system stored procedures to return information about
databases, files, and filegroups.

Permissions
Only the server-level principal login (created by the provisioning process) or members of the dbmanager
database role can alter a database.

IMPORTANT
The owner of the database cannot alter the database unless they are a member of the dbmanager role.

Examples
The following examples show you how to set automatic tuning and how to add a file in a managed instance.

ALTER DATABASE WideWorldImporters
SET AUTOMATIC_TUNING ( FORCE_LAST_GOOD_PLAN = ON );

ALTER DATABASE WideWorldImporters
ADD FILE (NAME = 'data_17');

See also
CREATE DATABASE - Azure SQL Database
DATABASEPROPERTYEX
DROP DATABASE
SET TRANSACTION ISOLATION LEVEL
EVENTDATA
sp_configure
sp_spaceused
sys.databases
sys.database_files
sys.database_mirroring_witnesses
sys.data_spaces
sys.filegroups
sys.master_files
System Databases
APPLIES TO: Azure SQL Data Warehouse

Azure SQL Data Warehouse


Overview
Modifies the name, maximum size, or service objective for a database.

Syntax
ALTER DATABASE database_name
    MODIFY NAME = new_database_name
  | MODIFY ( <edition_option> [, ... n] )

<edition_option> ::=
MAXSIZE = {
250 | 500 | 750 | 1024 | 5120 | 10240 | 20480
| 30720 | 40960 | 51200 | 61440 | 71680 | 81920
| 92160 | 102400 | 153600 | 204800 | 245760
} GB
| SERVICE_OBJECTIVE = {
'DW100' | 'DW200' | 'DW300' | 'DW400' | 'DW500'
| 'DW600' | 'DW1000' | 'DW1200' | 'DW1500' | 'DW2000'
| 'DW3000' | 'DW6000' | 'DW1000c' | 'DW1500c' | 'DW2000c'
| 'DW2500c' | 'DW3000c' | 'DW5000c' | 'DW6000c' | 'DW7500c'
| 'DW10000c' | 'DW15000c' | 'DW30000c'
}

Arguments
database_name
Specifies the name of the database to be modified.
MODIFY NAME = new_database_name
Renames the database with the name specified as new_database_name.
MAXSIZE
The default is 245,760 GB (240 TB).
Applies to: Optimized for Elasticity performance tier
The maximum allowable size for the database. The database cannot grow beyond MAXSIZE.
Applies to: Optimized for Compute performance tier
The maximum allowable size for rowstore data in the database. Data stored in rowstore tables, a
columnstore index's deltastore, or a nonclustered index on a clustered columnstore index cannot grow
beyond MAXSIZE. Data compressed into columnstore format does not have a size limit and is not
constrained by MAXSIZE.
SERVICE_OBJECTIVE
Specifies the performance level. For more information about service objectives for SQL Data Warehouse,
see Performance Tiers.
Permissions
Requires these permissions:
Server-level principal login (the one created by the provisioning process), or
Member of the dbmanager database role.
The owner of the database cannot alter the database unless the owner is a member of the dbmanager role.

General Remarks
The current database must be a different database than the one you are altering, therefore ALTER must be
run while connected to the master database.
SQL Data Warehouse is set to COMPATIBILITY_LEVEL 130 and cannot be changed. For more details, see
Improved Query Performance with Compatibility Level 130 in Azure SQL Database.
To decrease the size of a database, use DBCC SHRINKDATABASE.

Limitations and Restrictions


To run ALTER DATABASE, the database must be online and cannot be in a paused state.
The ALTER DATABASE statement must run in autocommit mode, which is the default transaction
management mode. This is set in the connection settings.
The ALTER DATABASE statement cannot be part of a user-defined transaction.
You cannot change the database collation.

Examples
Before you run these examples, make sure the database you are altering is not the current database. The
current database must be a different database than the one you are altering, therefore ALTER must be run
while connected to the master database.
A. Change the name of the database

ALTER DATABASE AdventureWorks2012
MODIFY NAME = Northwind;

B. Change max size for the database

ALTER DATABASE dw1 MODIFY ( MAXSIZE=10240 GB );

C. Change the performance level

ALTER DATABASE dw1 MODIFY ( SERVICE_OBJECTIVE= 'DW1200' );

D. Change the max size and the performance level

ALTER DATABASE dw1 MODIFY ( MAXSIZE=10240 GB, SERVICE_OBJECTIVE= 'DW1200' );

See Also
CREATE DATABASE (Azure SQL Data Warehouse)
SQL Data Warehouse list of reference articles

APPLIES TO: Parallel Data Warehouse

Parallel Data Warehouse


Overview
Modifies the maximum database size options for replicated tables, distributed tables, and the transaction log
in Parallel Data Warehouse. Use this statement to manage disk space allocations for a database as it grows
or shrinks in size. The article also describes syntax related to setting database options in Parallel Data
Warehouse.

Syntax
-- Parallel Data Warehouse
ALTER DATABASE database_name
SET ( <set_database_options> | <db_encryption_option> )
[;]

<set_database_options> ::=
{
AUTOGROW = { ON | OFF }
| REPLICATED_SIZE = size [GB]
| DISTRIBUTED_SIZE = size [GB]
| LOG_SIZE = size [GB]
| SET AUTO_CREATE_STATISTICS { ON | OFF }
| SET AUTO_UPDATE_STATISTICS { ON | OFF }
| SET AUTO_UPDATE_STATISTICS_ASYNC { ON | OFF }
}

<db_encryption_option> ::=
ENCRYPTION { ON | OFF }

Arguments
database_name
The name of the database to be modified. To display a list of databases on the appliance, use sys.databases
(Transact-SQL ).
AUTOGROW = { ON | OFF }
Updates the AUTOGROW option. When AUTOGROW is ON, Parallel Data Warehouse automatically
increases the allocated space for replicated tables, distributed tables, and the transaction log as necessary to
accommodate growth in storage requirements. When AUTOGROW is OFF, Parallel Data Warehouse
returns an error if replicated tables, distributed tables, or the transaction log exceeds the maximum size
setting.
REPLICATED_SIZE = size [GB]
Specifies the new maximum gigabytes per Compute node for storing all of the replicated tables in the
database being altered. If you are planning for appliance storage space, you will need to multiply
REPLICATED_SIZE by the number of Compute nodes in the appliance.
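The per-node arithmetic above can be sketched as follows; the 8-node appliance size is an illustrative assumption:

```sql
-- Illustrative planning arithmetic: REPLICATED_SIZE is per Compute node.
-- On an 8-node appliance, REPLICATED_SIZE = 2 GB reserves up to
-- 2 GB x 8 = 16 GB of appliance storage for replicated tables.
ALTER DATABASE CustomerSales
SET ( REPLICATED_SIZE = 2 GB );
```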
DISTRIBUTED_SIZE = size [GB]
Specifies the new maximum gigabytes per database for storing all of the distributed tables in the database
being altered. The size is distributed across all of the Compute nodes in the appliance.
LOG_SIZE = size [GB]
Specifies the new maximum gigabytes per database for storing all of the transaction logs in the database
being altered. The size is distributed across all of the Compute nodes in the appliance.
ENCRYPTION { ON | OFF }
Sets the database to be encrypted (ON) or not encrypted (OFF). Encryption can only be configured for
Parallel Data Warehouse when sp_pdw_database_encryption has been set to 1. A database encryption key
must be created before transparent data encryption can be configured. For more information about
database encryption, see Transparent Data Encryption (TDE).
SET AUTO_CREATE_STATISTICS { ON | OFF }
When the automatic create statistics option,
AUTO_CREATE_STATISTICS, is ON, the Query Optimizer creates statistics on individual columns in the
query predicate, as necessary, to improve cardinality estimates for the query plan. These single-column
statistics are created on columns that do not already have a histogram in an existing statistics object.
Default is ON for new databases created after upgrading to AU7. The default is OFF for databases created
prior to the upgrade.
For more information about statistics, see Statistics.
SET AUTO_UPDATE_STATISTICS { ON | OFF }
When the automatic update statistics option,
AUTO_UPDATE_STATISTICS, is ON, the query optimizer determines when statistics might be out-of-date
and then updates them when they are used by a query. Statistics become out-of-date after operations insert,
update, delete, or merge change the data distribution in the table or indexed view. The query optimizer
determines when statistics might be out-of-date by counting the number of data modifications since the last
statistics update and comparing the number of modifications to a threshold. The threshold is based on the
number of rows in the table or indexed view.
Default is ON for new databases created after upgrading to AU7. The default is OFF for databases created
prior to the upgrade.
For more information about statistics, see Statistics.
SET AUTO_UPDATE_STATISTICS_ASYNC { ON | OFF }
The asynchronous statistics update option,
AUTO_UPDATE_STATISTICS_ASYNC, determines whether the Query Optimizer uses synchronous or
asynchronous statistics updates. The AUTO_UPDATE_STATISTICS_ASYNC option applies to statistics
objects created for indexes, single columns in query predicates, and statistics created with the CREATE
STATISTICS statement.
Default is ON for new databases created after upgrading to AU7. The default is OFF for databases created
prior to the upgrade.
For more information about statistics, see Statistics.

Permissions
Requires the ALTER permission on the database.

Error Messages
If auto-stats is disabled and you try to alter the statistics settings, PDW gives the error "This option is not
supported in PDW." The system administrator can enable auto-stats by enabling the feature switch
AutoStatsEnabled.
General Remarks
The values for REPLICATED_SIZE, DISTRIBUTED_SIZE, and LOG_SIZE can be greater than, equal to, or
less than the current values for the database.

Limitations and Restrictions


Grow and shrink operations are approximate. The resulting actual sizes can vary from the size parameters.
Parallel Data Warehouse does not perform the ALTER DATABASE statement as an atomic operation. If the
statement is aborted during execution, changes that have already occurred will remain.
The statistics settings only work if the administrator has enabled auto-stats. If you are an administrator, use
the feature switch AutoStatsEnabled to enable or disable auto-stats.

Locking Behavior
Takes a shared lock on the DATABASE object. You cannot alter a database that is in use by another user for
reading or writing. This includes sessions that have issued a USE statement on the database.

Performance
Shrinking a database can take a large amount of time and system resources, depending on the size of the
actual data within the database, and the amount of fragmentation on disk. For example, shrinking a
database could take several hours or more.

Determining Encryption Progress


Use the following query to determine progress of database transparent data encryption as a percent:
WITH
database_dek AS (
SELECT ISNULL(db_map.database_id, dek.database_id) AS database_id,
dek.encryption_state, dek.percent_complete,
dek.key_algorithm, dek.key_length, dek.encryptor_thumbprint,
type
FROM sys.dm_pdw_nodes_database_encryption_keys AS dek
INNER JOIN sys.pdw_nodes_pdw_physical_databases AS node_db_map
ON dek.database_id = node_db_map.database_id
AND dek.pdw_node_id = node_db_map.pdw_node_id
LEFT JOIN sys.pdw_database_mappings AS db_map
ON node_db_map.physical_name = db_map.physical_name
INNER JOIN sys.dm_pdw_nodes nodes
ON nodes.pdw_node_id = dek.pdw_node_id
WHERE dek.encryptor_thumbprint <> 0x
),
dek_percent_complete AS (
SELECT database_dek.database_id, AVG(database_dek.percent_complete) AS percent_complete
FROM database_dek
WHERE type = 'COMPUTE'
GROUP BY database_dek.database_id
)
SELECT DB_NAME( database_dek.database_id ) AS name,
database_dek.database_id,
ISNULL(
(SELECT TOP 1 dek_encryption_state.encryption_state
FROM database_dek AS dek_encryption_state
WHERE dek_encryption_state.database_id = database_dek.database_id
ORDER BY (CASE encryption_state
WHEN 3 THEN -1
ELSE encryption_state
END) DESC), 0)
AS encryption_state,
dek_percent_complete.percent_complete,
database_dek.key_algorithm, database_dek.key_length, database_dek.encryptor_thumbprint
FROM database_dek
INNER JOIN dek_percent_complete
ON dek_percent_complete.database_id = database_dek.database_id
WHERE type = 'CONTROL';

For a comprehensive example demonstrating all the steps in implementing TDE, see Transparent Data
Encryption (TDE).

Examples: Parallel Data Warehouse


A. Altering the AUTOGROW setting
Set AUTOGROW to ON for database CustomerSales.

ALTER DATABASE CustomerSales
SET ( AUTOGROW = ON );

B. Altering the maximum storage for replicated tables


The following example sets the replicated table storage limit to 1 GB for the database CustomerSales. This is
the storage limit per Compute node.

ALTER DATABASE CustomerSales
SET ( REPLICATED_SIZE = 1 GB );

C. Altering the maximum storage for distributed tables


The following example sets the distributed table storage limit to 1000 GB (one terabyte) for the database
CustomerSales. This is the combined storage limit across the appliance for all of the Compute nodes, not the
storage limit per Compute node.

ALTER DATABASE CustomerSales
SET ( DISTRIBUTED_SIZE = 1000 GB );

D. Altering the maximum storage for the transaction log


The following example updates the database CustomerSales to have a maximum SQL Server transaction log
size of 10 GB for the appliance.

ALTER DATABASE CustomerSales
SET ( LOG_SIZE = 10 GB );

E. Check for current statistics values


The following query returns the current statistics values for all databases. The value 1 means the feature is
on, and a 0 means the feature is off.

SELECT NAME,
is_auto_create_stats_on,
is_auto_update_stats_on,
is_auto_update_stats_async_on
FROM sys.databases;

F. Enable auto-create and auto-update stats for a database


Use the following statement to enable creating and updating statistics automatically and asynchronously for
the database CustomerSales. This creates and updates single-column statistics as necessary to create high-quality
query plans.

ALTER DATABASE CustomerSales
SET AUTO_CREATE_STATISTICS ON;

ALTER DATABASE CustomerSales
SET AUTO_UPDATE_STATISTICS ON;

ALTER DATABASE CustomerSales
SET AUTO_UPDATE_STATISTICS_ASYNC ON;

See Also
CREATE DATABASE (Parallel Data Warehouse)
DROP DATABASE (Transact-SQL )
ALTER DATABASE AUDIT SPECIFICATION (Transact-
SQL)
1/2/2019 • 2 minutes to read

APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Alters a database audit specification object using the SQL Server Audit feature. For more information, see SQL
Server Audit (Database Engine).
Transact-SQL Syntax Conventions

Syntax
ALTER DATABASE AUDIT SPECIFICATION audit_specification_name
{
[ FOR SERVER AUDIT audit_name ]
[ { { ADD | DROP } (
{ <audit_action_specification> | audit_action_group_name }
)
} [, ...n] ]
[ WITH ( STATE = { ON | OFF } ) ]
}
[ ; ]
<audit_action_specification>::=
{
<action_specification>[ ,...n ] ON [ class :: ] securable
BY principal [ ,...n ]
}

Arguments
audit_specification_name
The name of the audit specification.
audit_name
The name of the audit to which this specification is applied.
audit_action_specification
Name of one or more database-level auditable actions. For a list of audit action groups, see SQL Server Audit
Action Groups and Actions.
audit_action_group_name
Name of one or more groups of database-level auditable actions. For a list of audit action groups, see SQL Server
Audit Action Groups and Actions.
class
Class name (if applicable) on the securable.
securable
Table, view, or other securable object in the database on which to apply the audit action or audit action group. For
more information, see Securables.
column
Column name (if applicable) on the securable.
principal
Name of SQL Server principal on which to apply the audit action or audit action group. For more information, see
Principals (Database Engine).
WITH ( STATE = { ON | OFF } )
Enables or disables the audit from collecting records for this audit specification. Audit specification state changes
must be done outside a user transaction and may not have other changes in the same statement when the
transition is ON to OFF.

Remarks
Database audit specifications are non-securable objects that reside in a given database. You must set the state of
an audit specification to the OFF option in order to make changes to a database audit specification. If ALTER
DATABASE AUDIT SPECIFICATION is executed when an audit is enabled with any options other than
STATE=OFF, you will receive an error message. For more information, see tempdb Database.
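The OFF-then-modify-then-ON cycle described above can be sketched as follows, assuming a specification named HIPAA_Audit_DB_Specification already exists:

```sql
-- Disable the specification before changing it.
ALTER DATABASE AUDIT SPECIFICATION HIPAA_Audit_DB_Specification
    WITH (STATE = OFF);

-- Make the change in a separate statement.
ALTER DATABASE AUDIT SPECIFICATION HIPAA_Audit_DB_Specification
    DROP (SELECT ON OBJECT::dbo.Table1 BY dbo);

-- Re-enable collection.
ALTER DATABASE AUDIT SPECIFICATION HIPAA_Audit_DB_Specification
    WITH (STATE = ON);
```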

Permissions
Users with the ALTER ANY DATABASE AUDIT permission can alter database audit specifications and bind them
to any audit.
After a database audit specification is created, it can be viewed by principals with the CONTROL SERVER or
ALTER ANY DATABASE AUDIT permissions, by the sysadmin account, or by principals having explicit access to the
audit.

Examples
The following example alters a database audit specification called HIPAA_Audit_DB_Specification that audits the
SELECT statements by the dbo user, for a SQL Server audit called HIPAA_Audit.

ALTER DATABASE AUDIT SPECIFICATION HIPAA_Audit_DB_Specification
FOR SERVER AUDIT HIPAA_Audit
ADD (SELECT
     ON OBJECT::dbo.Table1
     BY dbo)
WITH (STATE = ON);
GO

For a full example about how to create an audit, see SQL Server Audit (Database Engine).

See Also
CREATE SERVER AUDIT (Transact-SQL )
ALTER SERVER AUDIT (Transact-SQL )
DROP SERVER AUDIT (Transact-SQL )
CREATE SERVER AUDIT SPECIFICATION (Transact-SQL )
ALTER SERVER AUDIT SPECIFICATION (Transact-SQL )
DROP SERVER AUDIT SPECIFICATION (Transact-SQL )
CREATE DATABASE AUDIT SPECIFICATION (Transact-SQL )
DROP DATABASE AUDIT SPECIFICATION (Transact-SQL )
ALTER AUTHORIZATION (Transact-SQL )
sys.fn_get_audit_file (Transact-SQL )
sys.server_audits (Transact-SQL )
sys.server_file_audits (Transact-SQL )
sys.server_audit_specifications (Transact-SQL )
sys.server_audit_specification_details (Transact-SQL )
sys.database_audit_specifications (Transact-SQL )
sys.database_audit_specification_details (Transact-SQL )
sys.dm_server_audit_status (Transact-SQL )
sys.dm_audit_actions (Transact-SQL )
Create a Server Audit and Server Audit Specification
ALTER DATABASE (Transact-SQL) Compatibility
Level
1/8/2019 • 30 minutes to read

APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Sets certain database behaviors to be compatible with the specified version of SQL Server. For other ALTER
DATABASE options, see ALTER DATABASE (Transact-SQL ).
For more information about the syntax conventions, see Transact-SQL Syntax Conventions.

Syntax
ALTER DATABASE database_name
SET COMPATIBILITY_LEVEL = { 150 | 140 | 130 | 120 | 110 | 100 | 90 }

Arguments
database_name
Is the name of the database to be modified.
COMPATIBILITY_LEVEL { 150 | 140 | 130 | 120 | 110 | 100 | 90 | 80 }
Is the version of SQL Server with which the database is to be made compatible. The following compatibility level
values can be configured (not all versions support all of the listed compatibility levels):

| PRODUCT | DATABASE ENGINE VERSION | COMPATIBILITY LEVEL DESIGNATION | SUPPORTED COMPATIBILITY LEVEL VALUES |
|---|---|---|---|
| SQL Server 2019 preview | 15 | 150 | 150, 140, 130, 120, 110, 100 |
| SQL Server 2017 (14.x) | 14 | 140 | 140, 130, 120, 110, 100 |
| Azure SQL Database logical server | 12 | 130 | 150, 140, 130, 120, 110, 100 |
| Azure SQL Database Managed Instance | 12 | 130 | 150, 140, 130, 120, 110, 100 |
| SQL Server 2016 (13.x) | 13 | 130 | 130, 120, 110, 100 |
| SQL Server 2014 (12.x) | 12 | 120 | 120, 110, 100 |
| SQL Server 2012 (11.x) | 11 | 110 | 110, 100, 90 |
| SQL Server 2008 R2 | 10.5 | 100 | 100, 90, 80 |
| SQL Server 2008 | 10 | 100 | 100, 90, 80 |
| SQL Server 2005 (9.x) | 9 | 90 | 90, 80 |
| SQL Server 2000 | 8 | 80 | 80 |

NOTE
As of January 2018, in Azure SQL Database, the default compatibility level is 140 for newly created databases. We do not
update the database compatibility level for existing databases; that is left to customers to do at their own discretion. That
said, we highly recommend that customers plan on moving to the latest compatibility level in order to leverage the latest
improvements.
If you want to leverage database compatibility level 140 for your database overall, but you have reason to prefer the
cardinality estimation model of SQL Server 2012 (11.x), mapping to database compatibility level 110, see ALTER
DATABASE SCOPED CONFIGURATION (Transact-SQL), and in particular its keyword
LEGACY_CARDINALITY_ESTIMATION = ON .

For details about how to assess the performance differences of your most important queries, between two compatibility
levels on Azure SQL Database, see Improved Query Performance with Compatibility Level 130 in Azure SQL Database.
Note that this article refers to compatibility level 130 and SQL Server, but the same methodology applies for moves to 140
for SQL Server and Azure SQL Database.

Execute the following query to determine the version of the Database Engine that you are connected to.

SELECT SERVERPROPERTY('ProductVersion');

NOTE
Not all features that vary by compatibility level are supported on Azure SQL Database.

To determine the current compatibility level, query the compatibility_level column of sys.databases (Transact-
SQL ).

SELECT name, compatibility_level FROM sys.databases;

Remarks
For all installations of SQL Server, the default compatibility level is set to the version of the Database Engine.
Databases are set to this level unless the model database has a lower compatibility level. When a database is
upgraded from any earlier version of SQL Server, the database retains its existing compatibility level, if it is at
least the minimum allowed for that instance of SQL Server. Upgrading a database with a compatibility level lower
than the allowed level automatically sets the database to the lowest compatibility level allowed. This applies to
both system and user databases.
The following behaviors are expected for SQL Server 2017 (14.x) when a database is attached or restored, and after
an in-place upgrade:
If the compatibility level of a user database was 100 or higher before the upgrade, it remains the same after
upgrade.
If the compatibility level of a user database was 90 before upgrade, in the upgraded database, the
compatibility level is set to 100, which is the lowest supported compatibility level in SQL Server 2017 (14.x).
The compatibility levels of the tempdb, model, msdb and Resource databases are set to the current
compatibility level after upgrade.
The master system database retains the compatibility level it had before upgrade.
Use ALTER DATABASE to change the compatibility level of the database. The new compatibility level setting for a
database takes effect when a USE <database> command is issued, or a new login is processed with that database
as the default database context.
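For example, assuming a user database named YourDatabase (a placeholder name), the compatibility level can be changed as follows:

```sql
-- Move the database to compatibility level 140 (SQL Server 2017).
ALTER DATABASE YourDatabase
SET COMPATIBILITY_LEVEL = 140;

-- The new setting takes effect for this session once the database
-- context is (re)established.
USE YourDatabase;
```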
To view the current compatibility level of a database, query the compatibility_level column in the sys.databases
catalog view.

NOTE
A distribution database that was created in an earlier version of SQL Server and is upgraded to SQL Server 2016 (13.x)
RTM or Service Pack 1 has a compatibility level of 90, which is not supported for other databases. This does not have an
impact on the functionality of replication. Upgrading to later service packs and versions of SQL Server results in the
compatibility level of the distribution database being increased to match that of the master database.

Compatibility Levels and SQL Server Upgrades


Database compatibility level is a valuable tool to assist in database modernization: it allows the SQL Server
Database Engine to be upgraded while keeping connected applications functional by maintaining the same
pre-upgrade database compatibility level. As long as the application does not need to leverage
enhancements that are only available in a higher database compatibility level, it is a valid approach to upgrade
the SQL Server Database Engine and maintain the previous database compatibility level. For more information
on using compatibility level for backward compatibility, see Using Compatibility Level for Backward
Compatibility later in this article.
For new development work, or when an existing application requires use of new features, as well as performance
improvements done in the query optimizer space, plan to upgrade the database compatibility level to the latest
available in SQL Server, and certify your application to work with that compatibility level. For more details on
upgrading the database compatibility level, see the Best Practices for upgrading Database Compatibility Level
later in the article.

TIP
If an application was tested and certified on a given SQL Server version, then it was implicitly tested and certified on that
SQL Server version's native database compatibility level.
Database compatibility level therefore provides an easy certification path for an existing application, when using the
database compatibility level corresponding to the tested SQL Server version.
For more information about differences between compatibility levels, see the appropriate sections later in this article.

To upgrade the SQL Server Database Engine to the latest version, while maintaining the database compatibility
level that existed before the upgrade and its supportability status, it is recommended to perform static functional
surface-area validation of the application code in the database by using the Microsoft Data Migration Assistant
tool (DMA). The absence of errors in the DMA tool output about missing or incompatible functionality protects
the application from functional regressions on the new target version. For more information on the DMA tool,
see here.
NOTE
DMA supports database compatibility level 100 and above. SQL Server 2005 (9.x) as source version is excluded.

IMPORTANT
Microsoft recommends that some minimal testing is done to validate the success of an upgrade, while maintaining the
previous database compatibility level. You should determine what minimal testing means for your own application and
scenario.

NOTE
Microsoft provides query plan shape protection when:
The new SQL Server version (target) runs on hardware that is comparable to the hardware where the previous SQL
Server version (source) was running.
The same supported database compatibility level is used both at the target SQL Server and source SQL Server.
Any query plan shape regression (as compared to the source SQL Server) that occurs in the above conditions will be
addressed. Please contact Microsoft Customer Support if this is the case.

Using Compatibility Level for Backward Compatibility


The database compatibility level setting affects behaviors only for the specified database, not for the entire
server. Database compatibility level provides only partial backward compatibility with earlier versions of SQL
Server.

TIP
Because database compatibility level is a database-level setting, an application running on a newer SQL Server Database
Engine while using an older database compatibility level can still leverage server-level enhancements without any
requirement for application changes.
These include rich monitoring and troubleshooting improvements, with new system dynamic management views and
Extended Events, as well as improved scalability, for example with Automatic Soft-NUMA.

Starting with compatibility level 130, any new query-plan-affecting features have been intentionally added only
to the newest compatibility level. This has been done to minimize the risk of performance degradation due to
query plan changes during upgrades.
From an application perspective, the goal should still be to upgrade to the latest compatibility level at some point
in time, in order to inherit new features as well as performance improvements in the query optimizer space, but
to do so in a controlled way. Use the lower compatibility level as a safer migration aid to work around version
differences in the behaviors that are controlled by the relevant compatibility level setting.
For more details, including the recommended workflow for upgrading database compatibility level, see the Best
Practices for upgrading Database Compatibility Level later in the article.
IMPORTANT
Discontinued functionality introduced in a given SQL Server version is not protected by compatibility level. This refers to
functionality that was removed from the SQL Server Database Engine.
For example, the FASTFIRSTROW hint was discontinued in SQL Server 2012 (11.x) and replaced with the
OPTION (FAST n ) hint. Setting the database compatibility level to 110 will not restore the discontinued hint. For more
information on discontinued functionality, see Discontinued Database Engine Functionality in SQL Server 2016,
Discontinued Database Engine Functionality in SQL Server 2014, Discontinued Database Engine Functionality in SQL
Server 2012, and Discontinued Database Engine Functionality in SQL Server 2008.

IMPORTANT
Breaking changes introduced in a given SQL Server version may not be protected by compatibility level. This refers to
behavior changes between versions of the SQL Server Database Engine. Transact-SQL behavior is usually protected by
compatibility level. However, changed or removed system objects are not protected by compatibility level.
An example of a breaking change protected by compatibility level is an implicit conversion from datetime to datetime2
data types. Under database compatibility level 130, these show improved accuracy by accounting for the fractional
milliseconds, resulting in different converted values. To restore previous conversion behavior, set the database compatibility
level to 120 or lower.
Examples of breaking changes not protected by compatibility level are:
Changed column names in system objects. In SQL Server 2012 (11.x) the column single_pages_kb in
sys.dm_os_sys_info was renamed to pages_kb. Regardless of the compatibility level, the query
SELECT single_pages_kb FROM sys.dm_os_sys_info will produce error 207 (Invalid column name).
Removed system objects. In SQL Server 2012 (11.x), the sp_dboption stored procedure was removed. Regardless of the
compatibility level, the statement EXEC sp_dboption 'AdventureWorks2016CTP3', 'autoshrink', 'FALSE'; will produce error
2812 (Could not find stored procedure 'sp_dboption').
For more information on breaking changes, see Breaking Changes to Database Engine Features in SQL Server 2017,
Breaking Changes to Database Engine Features in SQL Server 2016, Breaking Changes to Database Engine Features in SQL
Server 2014, Breaking Changes to Database Engine Features in SQL Server 2012, and Breaking Changes to Database
Engine Features in SQL Server 2008.

Best Practices for upgrading Database Compatibility Level


For the recommended workflow for upgrading the compatibility level, see Change the Database Compatibility
Mode and Use the Query Store.

Compatibility Levels and Stored Procedures


When a stored procedure executes, it uses the current compatibility level of the database in which it is defined.
When the compatibility setting of a database is changed, all of its stored procedures are automatically
recompiled accordingly.

Differences Between Compatibility Level 140 and Level 150


This section describes new behaviors introduced with compatibility level 150.
Database compatibility level 150 is currently in Public Preview for Azure SQL Database and SQL Server 2019
preview. This database compatibility level will be associated with the next generation of query processing
improvements beyond what was introduced in database compatibility level 140.
For more information on query processing features enabled in database compatibility level 150, refer to What's
new in SQL Server 2019 and Intelligent query processing in SQL databases.

Differences Between Compatibility Level 130 and Level 140


This section describes new behaviors introduced with compatibility level 140.

Level 130 or lower: Cardinality estimates for statements referencing multi-statement table-valued functions use a fixed row guess.
Level 140: Cardinality estimates for eligible statements referencing multi-statement table-valued functions use the actual cardinality of the function output. This is enabled via interleaved execution for multi-statement table-valued functions.

Level 130 or lower: Batch-mode queries that request insufficient memory grant sizes that result in spills to disk may continue to have issues on consecutive executions.
Level 140: Such queries may have improved performance on consecutive executions. This is enabled via batch mode memory grant feedback, which updates the memory grant size of a cached plan if spills have occurred for batch-mode operators.

Level 130 or lower: Batch-mode queries that request an excessive memory grant size that results in concurrency issues may continue to have issues on consecutive executions.
Level 140: Such queries may have improved concurrency on consecutive executions. This is enabled via batch mode memory grant feedback, which updates the memory grant size of a cached plan if an excessive amount was originally requested.

Level 130 or lower: Batch-mode queries that contain join operators are eligible for three physical join algorithms: nested loop, hash join, and merge join. If cardinality estimates are incorrect for join inputs, an inappropriate join algorithm may be selected. If this occurs, performance suffers and the inappropriate join algorithm remains in use until the cached plan is recompiled.
Level 140: There is an additional join operator called adaptive join. If cardinality estimates are incorrect for the outer build join input, an inappropriate join algorithm may be selected. If this occurs and the statement is eligible for an adaptive join, a nested loop is used for smaller join inputs and a hash join is used for larger join inputs, dynamically and without requiring recompilation.

Level 130 or lower: Trivial plans referencing Columnstore indexes are not eligible for batch-mode execution.
Level 140: A trivial plan referencing Columnstore indexes is discarded in favor of a plan that is eligible for batch-mode execution.

Level 130 or lower: The sp_execute_external_script UDX operator can only run in row mode.
Level 140: The sp_execute_external_script UDX operator is eligible for batch-mode execution.

Level 130 or lower: Multi-statement table-valued functions (TVFs) do not have interleaved execution.
Level 140: Interleaved execution for multi-statement TVFs improves plan quality.

Fixes that were under trace flag 4199 in versions of SQL Server prior to SQL Server 2017 are now enabled by
default with compatibility level 140. Trace flag 4199 will still be applicable for new query optimizer fixes that are
released after SQL Server 2017. For information about Trace Flag 4199, see Trace Flag 4199.
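At the database scope, the equivalent of trace flag 4199 can also be enabled with a database-scoped configuration (a sketch; run in the context of the target database):

```sql
-- Enables query optimizer hotfixes released after the RTM version
-- corresponding to the database's compatibility level, without
-- turning on the server-wide trace flag 4199.
ALTER DATABASE SCOPED CONFIGURATION
SET QUERY_OPTIMIZER_HOTFIXES = ON;
```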

Differences Between Compatibility Level 120 and Level 130


This section describes new behaviors introduced with compatibility level 130.
Level 120 or lower: The INSERT in an INSERT-SELECT statement is single-threaded.
Level 130: The INSERT in an INSERT-SELECT statement is multi-threaded or can have a parallel plan.

Level 120 or lower: Queries on a memory-optimized table execute single-threaded.
Level 130: Queries on a memory-optimized table can now have parallel plans.

Level 120 or lower: Introduced the SQL Server 2014 cardinality estimator, CardinalityEstimationModelVersion="120".
Level 130: Further cardinality estimation (CE) improvements with the Cardinality Estimation Model 130, visible from a query plan as CardinalityEstimationModelVersion="130".

Level 120 or lower: Batch-mode versus row-mode behavior with Columnstore indexes: sorts on a table with a Columnstore index run in row mode; windowing function aggregates such as LAG or LEAD operate in row mode; queries on Columnstore tables with multiple distinct clauses operate in row mode; queries running under MAXDOP 1 or with a serial plan execute in row mode.
Level 130: Sorts on a table with a Columnstore index are now in batch mode; windowing aggregates such as LAG or LEAD now operate in batch mode; queries on Columnstore tables with multiple distinct clauses operate in batch mode; queries running under MAXDOP 1 or with a serial plan execute in batch mode.

Level 120 or lower: Statistics can be automatically updated. Trace flag 2371 is OFF by default in SQL Server 2014 (12.x).
Level 130: The logic that automatically updates statistics is more aggressive on large tables. In practice, this should reduce cases where customers have seen performance issues on queries where newly inserted rows are queried frequently but where the statistics had not been updated to include those values. Trace flag 2371 is ON by default in SQL Server 2016 (13.x); it tells the auto statistics updater to sample a smaller yet wiser subset of rows in a table that has a great many rows. One improvement is to include in the sample more rows that were inserted recently. Another improvement is to let queries run while the update statistics process is running, rather than blocking the query.

Level 120 or lower: Statistics are sampled by a single-threaded process.
Level 130: Statistics are sampled by a multi-threaded process.

Level 120 or lower: 253 incoming foreign keys is the limit.
Level 130: A given table can be referenced by up to 10,000 incoming foreign keys or similar references. For restrictions, see Create Foreign Key Relationships.

Level 120 or lower: The deprecated MD2, MD4, MD5, SHA, and SHA1 hash algorithms are permitted.
Level 130: Only the SHA2_256 and SHA2_512 hash algorithms are permitted.

Level 130: SQL Server 2016 (13.x) includes improvements in some data type conversions and some (mostly uncommon) operations. For details, see SQL Server 2016 improvements in handling some data types and uncommon operations.

Level 120 or lower: The STRING_SPLIT function is not available.
Level 130: The STRING_SPLIT function is available under compatibility level 130 or above. If your database compatibility level is lower than 130, SQL Server will not be able to find and execute the STRING_SPLIT function.

Fixes that were under trace flag 4199 in versions of SQL Server prior to SQL Server 2016 (13.x) are now enabled
by default with compatibility level 130. Trace flag 4199 will still be applicable for new query optimizer fixes that
are released after SQL Server 2016 (13.x). To use the older query optimizer in SQL Database, you must select
compatibility level 110. For information about Trace Flag 4199, see Trace Flag 4199.
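As an illustration of a level-gated feature, STRING_SPLIT only resolves once the database is at compatibility level 130 or above (the database name below is a placeholder):

```sql
ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 130;

-- Succeeds at compatibility level 130 or above; at a lower level
-- it fails because the function cannot be found.
SELECT value FROM STRING_SPLIT('alpha,beta,gamma', ',');
```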

Differences Between Lower Compatibility Levels and Level 120


This section describes new behaviors introduced with compatibility level 120.

Level 110 or lower: The older query optimizer is used.
Level 120: SQL Server 2014 (12.x) includes substantial improvements to the component that creates and optimizes query plans. This new query optimizer feature depends on use of database compatibility level 120. New database applications should be developed using database compatibility level 120 to take advantage of these improvements. Applications that are migrated from earlier versions of SQL Server should be carefully tested to confirm that good performance is maintained or improved. If performance degrades, you can set the database compatibility level to 110 or earlier to use the older query optimizer methodology. Database compatibility level 120 uses a new cardinality estimator that is tuned for modern data warehousing and OLTP workloads. Before setting database compatibility level to 110 because of performance issues, see the recommendations in the Query Plans section of the SQL Server 2014 (12.x) What's New in Database Engine topic.

Level 110 or lower: The language setting is ignored when converting a date value to a string value. Note that this behavior is specific only to the date type. See example B in the Examples section below.
Level 120: The language setting is not ignored when converting a date value to a string value.

Level 110 or lower: Recursive references on the right-hand side of an EXCEPT clause create an infinite loop. Example C in the Examples section below demonstrates this behavior.
Level 120: Recursive references in an EXCEPT clause generate an error in compliance with the ANSI SQL standard.

Level 110 or lower: A recursive common table expression (CTE) allows duplicate column names.
Level 120: Recursive CTEs do not allow duplicate column names.

Level 110 or lower: Disabled triggers are enabled if the triggers are altered.
Level 120: Altering a trigger does not change the state (enabled or disabled) of the trigger.

Level 110 or lower: The OUTPUT INTO table clause ignores the IDENTITY_INSERT SETTING = OFF and allows explicit values to be inserted.
Level 120: You cannot insert explicit values for an identity column in a table when IDENTITY_INSERT is set to OFF.

Level 110 or lower: When the database containment is set to partial, validating the $action field in the OUTPUT clause of a MERGE statement can return a collation error.
Level 120: The collation of the values returned by the $action clause of a MERGE statement is the database collation instead of the server collation, and a collation conflict error is not returned.

Level 110 or lower: A SELECT INTO statement always creates a single-threaded insert operation.
Level 120: A SELECT INTO statement can create a parallel insert operation. When inserting a large number of rows, the parallel operation can improve performance.
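A minimal sketch of the date-to-string behavior change (the literal values are illustrative; see example B in the Examples section for the authoritative version):

```sql
SET LANGUAGE Deutsch;
DECLARE @d date = '2014-01-31';

-- Under compatibility level 110 or lower, the language setting is
-- ignored for this date-to-string conversion; under 120 or higher,
-- the session language's date format is honored.
SELECT CAST(@d AS varchar(20));
```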

Differences Between Lower Compatibility Levels and Levels 110 and 120


This section describes new behaviors introduced with compatibility level 110. This section also applies to level
120.

Level 100 or lower: Common language runtime (CLR) database objects are executed with version 4 of the CLR. However, some behavior changes introduced in version 4 of the CLR are avoided. For more information, see What's New in CLR Integration.
Level 110 or higher: CLR database objects are executed with version 4 of the CLR.

Level 100 or lower: The XQuery functions string-length and substring count each surrogate as two characters.
Level 110 or higher: The XQuery functions string-length and substring count each surrogate as one character.

Level 100 or lower: PIVOT is allowed in a recursive common table expression (CTE) query. However, the query returns incorrect results when there are multiple rows per grouping.
Level 110 or higher: PIVOT is not allowed in a recursive common table expression (CTE) query. An error is returned.

Level 100 or lower: The RC4 algorithm is only supported for backward compatibility. New material can only be encrypted using RC4 or RC4_128 when the database is in compatibility level 90 or 100. (Not recommended.) In SQL Server 2012 (11.x), material encrypted using RC4 or RC4_128 can be decrypted in any compatibility level.
Level 110 or higher: New material cannot be encrypted using RC4 or RC4_128. Use a newer algorithm, such as one of the AES algorithms, instead. In SQL Server 2012 (11.x), material encrypted using RC4 or RC4_128 can be decrypted in any compatibility level.

Level 100 or lower: The default style for CAST and CONVERT operations on time and datetime2 data types is 121, except when either type is used in a computed column expression. For computed columns, the default style is 0. This behavior impacts computed columns when they are created, used in queries involving auto-parameterization, or used in constraint definitions. Example D in the Examples section below shows the difference between styles 0 and 121. It does not demonstrate the behavior described above. For more information about date and time styles, see CAST and CONVERT (Transact-SQL).
Level 110 or higher: Under compatibility level 110, the default style for CAST and CONVERT operations on time and datetime2 data types is always 121. If your query relies on the old behavior, use a compatibility level less than 110, or explicitly specify the 0 style in the affected query. Upgrading the database to compatibility level 110 will not change user data that has been stored to disk. You must manually correct this data as appropriate. For example, if you used SELECT INTO to create a table from a source that contained a computed column expression described above, the data (using style 0) would be stored rather than the computed column definition itself. You would need to manually update this data to match style 121.

Level 100 or lower: Any columns in remote tables of type smalldatetime that are referenced in a partitioned view are mapped as datetime. Corresponding columns in local tables (in the same ordinal position in the select list) must be of type datetime.
Level 110 or higher: Any columns in remote tables of type smalldatetime that are referenced in a partitioned view are mapped as smalldatetime. Corresponding columns in local tables (in the same ordinal position in the select list) must be of type smalldatetime. After upgrading to 110, the distributed partitioned view will fail because of the data type mismatch. You can resolve this by changing the data type on the remote table to datetime or setting the compatibility level of the local database to 100 or lower.

Level 100 or lower: The SOUNDEX function implements the following rules: 1) Upper-case H or upper-case W are ignored when separating two consonants that have the same number in the SOUNDEX code. 2) If the first 2 characters of character_expression have the same number in the SOUNDEX code, both characters are included. Otherwise, if a set of side-by-side consonants have the same number in the SOUNDEX code, all of them are excluded except the first.
Level 110 or higher: The SOUNDEX function implements the following rules: 1) If upper-case H or upper-case W separate two consonants that have the same number in the SOUNDEX code, the consonant to the right is ignored. 2) If a set of side-by-side consonants have the same number in the SOUNDEX code, all of them are excluded except the first. The additional rules may cause the values computed by the SOUNDEX function to differ from the values computed under earlier compatibility levels. After upgrading to compatibility level 110, you may need to rebuild the indexes, heaps, or CHECK constraints that use the SOUNDEX function. For more information, see SOUNDEX (Transact-SQL).

Differences Between Compatibility Level 90 and Level 100


This section describes new behaviors introduced with compatibility level 100.

Level 90: The QUOTED_IDENTIFIER setting is always set to ON for multistatement table-valued functions when they are created, regardless of the session-level setting.
Level 100: The QUOTED_IDENTIFIER session setting is honored when multistatement table-valued functions are created.
Possibility of impact: Medium.

Level 90: When you create or alter a partition function, datetime and smalldatetime literals in the function are evaluated assuming US_English as the language setting.
Level 100: The current language setting is used to evaluate datetime and smalldatetime literals in the partition function.
Possibility of impact: Medium.

Level 90: The FOR BROWSE clause is allowed (and ignored) in INSERT and SELECT INTO statements.
Level 100: The FOR BROWSE clause is not allowed in INSERT and SELECT INTO statements.
Possibility of impact: Medium.

Level 90: Full-text predicates are allowed in the OUTPUT clause.
Level 100: Full-text predicates are not allowed in the OUTPUT clause.
Possibility of impact: Low.

Level 90: CREATE FULLTEXT STOPLIST , ALTER FULLTEXT STOPLIST , and DROP FULLTEXT STOPLIST are not supported. The system stoplist is automatically associated with new full-text indexes.
Level 100: CREATE FULLTEXT STOPLIST , ALTER FULLTEXT STOPLIST , and DROP FULLTEXT STOPLIST are supported.
Possibility of impact: Low.

Level 90: MERGE is not enforced as a reserved keyword.
Level 100: MERGE is a fully reserved keyword. The MERGE statement is supported under both 100 and 90 compatibility levels.
Possibility of impact: Low.

Level 90: Using the <dml_table_source> argument of the INSERT statement raises a syntax error.
Level 100: You can capture the results of an OUTPUT clause in a nested INSERT, UPDATE, DELETE, or MERGE statement, and insert those results into a target table or view. This is done using the <dml_table_source> argument of the INSERT statement.
Possibility of impact: Low.

Level 90: Unless NOINDEX is specified, DBCC CHECKDB or DBCC CHECKTABLE performs both physical and logical consistency checks on a single table or indexed view and on all its nonclustered and XML indexes. Spatial indexes are not supported.
Level 100: Unless NOINDEX is specified, DBCC CHECKDB or DBCC CHECKTABLE performs both physical and logical consistency checks on a single table and on all its nonclustered indexes. However, on XML indexes, spatial indexes, and indexed views, only physical consistency checks are performed by default. If WITH EXTENDED_LOGICAL_CHECKS is specified, logical checks are performed on indexed views, XML indexes, and spatial indexes, where present. By default, physical consistency checks are performed before the logical consistency checks. If NOINDEX is also specified, only the logical checks are performed.
Possibility of impact: Low.

Level 90: When an OUTPUT clause is used with a data manipulation language (DML) statement and a run-time error occurs during statement execution, the entire transaction is terminated and rolled back.
Level 100: When an OUTPUT clause is used with a data manipulation language (DML) statement and a run-time error occurs during statement execution, the behavior depends on the SET XACT_ABORT setting. If SET XACT_ABORT is OFF, a statement abort error generated by the DML statement using the OUTPUT clause terminates the statement, but the execution of the batch continues and the transaction is not rolled back. If SET XACT_ABORT is ON, all run-time errors generated by the DML statement using the OUTPUT clause terminate the batch, and the transaction is rolled back.
Possibility of impact: Low.

Level 90: CUBE and ROLLUP are not enforced as reserved keywords.
Level 100: CUBE and ROLLUP are reserved keywords within the GROUP BY clause.
Possibility of impact: Low.

Level 90: Strict validation is applied to elements of the XML anyType type.
Level 100: Lax validation is applied to elements of the anyType type. For more information, see Wildcard Components and Content Validation.
Possibility of impact: Low.

Level 90: The special attributes xsi:nil and xsi:type cannot be queried or modified by data manipulation language statements. This means that /e/@xsi:nil fails while /e/@* ignores the xsi:nil and xsi:type attributes. However, /e returns the xsi:nil and xsi:type attributes for consistency with SELECT xmlCol , even if xsi:nil = "false" .
Level 100: The special attributes xsi:nil and xsi:type are stored as regular attributes and can be queried and modified. For example, executing the query SELECT x.query('a/b/@*') returns all attributes, including xsi:nil and xsi:type. To exclude these types in the query, replace @* with @*[namespace-uri(.) != " insert xsi namespace uri " and not (local-name(.) = "type" or local-name(.) = "nil")].
Possibility of impact: Low.

Level 90: A user-defined function that converts an XML constant string value to a SQL Server datetime type is marked as deterministic.
Level 100: A user-defined function that converts an XML constant string value to a SQL Server datetime type is marked as non-deterministic.
Possibility of impact: Low.

Level 90: The XML union and list types are not fully supported.
Level 100: The union and list types are fully supported, including the following functionality: union of list, union of union, list of atomic types, and list of union.
Possibility of impact: Low.

Level 90: The SET options required for an xQuery method are not validated when the method is contained in a view or inline table-valued function.
Level 100: The SET options required for an xQuery method are validated when the method is contained in a view or inline table-valued function. An error is raised if the SET options of the method are set incorrectly.
Possibility of impact: Low.

Level 90: XML attribute values that contain end-of-line characters (carriage return and line feed) are not normalized according to the XML standard. That is, both characters are returned instead of a single line-feed character.
Level 100: XML attribute values that contain end-of-line characters (carriage return and line feed) are normalized according to the XML standard. That is, all line breaks in external parsed entities (including the document entity) are normalized on input by translating both the two-character sequence #xD #xA and any #xD that is not followed by #xA to a single #xA character. Applications that use attributes to transport string values that contain end-of-line characters will not receive these characters back as they are submitted. To avoid the normalization process, use the XML numeric character entities to encode all end-of-line characters.
Possibility of impact: Low.

Level 90: The column properties ROWGUIDCOL and IDENTITY can be incorrectly named as a constraint. For example, the statement CREATE TABLE T (C1 int CONSTRAINT MyConstraint IDENTITY) executes, but the constraint name is not preserved and is not accessible to the user.
Level 100: The column properties ROWGUIDCOL and IDENTITY cannot be named as a constraint. Error 156 is returned.
Possibility of impact: Low.

Level 90: Updating columns by using a two-way assignment such as UPDATE T1 SET @v = column_name = <expression> can produce unexpected results because the live value of the variable can be used in other clauses, such as the WHERE and ON clauses, during statement execution instead of the statement starting value. This can cause the meanings of the predicates to change unpredictably on a per-row basis. This behavior is applicable only when the compatibility level is set to 90. See example E in the Examples section below.
Level 100: Updating columns by using a two-way assignment produces expected results because only the statement starting value of the column is accessed during statement execution. See example F in the Examples section below.
Possibility of impact: Low.

Level 90: The ODBC function {fn CONVERT()} uses the default date format of the language. For some languages, the default format is YDM, which can result in conversion errors when CONVERT() is combined with other functions, such as {fn CURDATE()} , that expect a YMD format.
Level 100: The ODBC function {fn CONVERT()} uses style 121 (a language-independent YMD format) when converting to the ODBC data types SQL_TIMESTAMP, SQL_DATE, SQL_TIME, SQLDATE, SQL_TYPE_TIME, and SQL_TYPE_TIMESTAMP.
Possibility of impact: Low.

Level 90: Datetime intrinsics such as DATEPART do not require string input values to be valid datetime literals. For example, SELECT DATEPART (year, '2007/05-30') compiles successfully.
Level 100: Datetime intrinsics such as DATEPART require string input values to be valid datetime literals. Error 241 is returned when an invalid datetime literal is used.
Possibility of impact: Low.

Reserved Keywords
The compatibility setting also determines the keywords that are reserved by the Database Engine. The following
table shows the reserved keywords that are introduced by each of the compatibility levels.

COMPATIBILITY-LEVEL SETTING RESERVED KEYWORDS

130 To be determined.

120 None.

110 WITHIN GROUP, TRY_CONVERT,


SEMANTICKEYPHRASETABLE,
SEMANTICSIMILARITYDETAILSTABLE,
SEMANTICSIMILARITYTABLE

100 CUBE, MERGE, ROLLUP

90 EXTERNAL, PIVOT, UNPIVOT, REVERT, TABLESAMPLE

At a given compatibility level, the reserved keywords include all of the keywords introduced at or below that
level. Thus, for instance, for applications at level 110, all of the keywords listed in the preceding table are
reserved. At the lower compatibility levels, level-100 keywords remain valid object names, but the level-110
language features corresponding to those keywords are unavailable.
Once introduced, a keyword remains reserved. For example, the reserved keyword PIVOT, which was introduced
in compatibility level 90, is also reserved in levels 100, 110, and 120.
If an application uses an identifier that is reserved as a keyword for its compatibility level, the application will fail.
To work around this, enclose the identifier between either brackets ([]) or quotation marks (""); for example, to upgrade an application that uses the identifier EXTERNAL to compatibility level 90, you could change the
identifier to either [EXTERNAL] or "EXTERNAL".
For more information, see Reserved Keywords (Transact-SQL ).
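As a sketch of this workaround, assume a hypothetical table dbo.LegacyExports that uses the level-90 reserved keyword EXTERNAL as a column name; either delimiter form keeps the query valid at level 90 and above:

```sql
-- Hypothetical table and column names, shown only to illustrate delimiting.
SELECT [EXTERNAL]          -- bracketed delimited identifier
FROM dbo.LegacyExports;

SELECT "EXTERNAL"          -- quoted form; requires SET QUOTED_IDENTIFIER ON
FROM dbo.LegacyExports;
```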

Permissions
Requires ALTER permission on the database.

Examples
A. Changing the compatibility level
The following example changes the compatibility level of the AdventureWorks2012 database to 110, the level corresponding to SQL Server 2012 (11.x).
ALTER DATABASE AdventureWorks2012
SET COMPATIBILITY_LEVEL = 110;
GO

The following example returns the compatibility level of the current database.

SELECT name, compatibility_level
FROM sys.databases
WHERE name = db_name();

B. Ignoring the SET LANGUAGE statement except under compatibility level 120
The following query ignores the SET LANGUAGE statement except under compatibility level 120.

SET DATEFORMAT dmy;
DECLARE @t2 date = '12/5/2011';
SET LANGUAGE dutch;
SELECT CONVERT(varchar(11), @t2, 106);

-- Results when the compatibility level is less than 120.
12 May 2011

-- Results when the compatibility level is set to 120.
12 mei 2011

C. Recursive references in an EXCEPT clause
For compatibility-level setting of 110 or lower, recursive references on the right-hand side of an EXCEPT clause
create an infinite loop.

WITH
cte AS (SELECT * FROM (VALUES (1),(2),(3)) v (a)),
r
AS (SELECT a FROM Table1
UNION ALL
(SELECT a FROM Table1 EXCEPT SELECT a FROM r) )
SELECT a
FROM r;

D. Differences between styles 0 and 121
This example shows the difference between styles 0 and 121. For more information about date and time styles,
see CAST and CONVERT (Transact-SQL ).
CREATE TABLE t1 (c1 time(7), c2 datetime2);

INSERT t1 (c1,c2) VALUES (GETDATE(), GETDATE());

SELECT CONVERT(nvarchar(16),c1,0) AS TimeStyle0
    ,CONVERT(nvarchar(16),c1,121) AS TimeStyle121
    ,CONVERT(nvarchar(32),c2,0) AS Datetime2Style0
    ,CONVERT(nvarchar(32),c2,121) AS Datetime2Style121
FROM t1;

-- Returns values such as the following.

TimeStyle0       TimeStyle121     Datetime2Style0      Datetime2Style121
---------------- ---------------- -------------------- ---------------------------
3:15PM           15:15:35.8100000 Jun  7 2011  3:15PM  2011-06-07 15:15:35.8130000

E. Variable assignment with a top-level UNION operator (unexpected results)
Variable assignment is allowed in a statement containing a top-level UNION operator, but returns unexpected
results. For example, in the following statements, local variable @v is assigned the value of the column
BusinessEntityID from the union of two tables. By definition, when the SELECT statement returns more than
one value, the variable is assigned the last value that is returned. In this case, the variable is correctly assigned the last value; however, the result set of the SELECT UNION statement is also returned.

ALTER DATABASE AdventureWorks2012
SET compatibility_level = 110;
GO
USE AdventureWorks2012;
GO
DECLARE @v int;
SELECT @v = BusinessEntityID FROM HumanResources.Employee
UNION ALL
SELECT @v = BusinessEntityID FROM HumanResources.EmployeeAddress;
SELECT @v;

F. Variable assignment with a top-level UNION operator (error 10734)
Variable assignment is not allowed in a statement containing a top-level UNION operator. Error 10734 is
returned. To resolve the error, rewrite the query as shown in the following example.

DECLARE @v int;
SELECT @v = BusinessEntityID FROM
(SELECT BusinessEntityID FROM HumanResources.Employee
UNION ALL
SELECT BusinessEntityID FROM HumanResources.EmployeeAddress) AS Test;
SELECT @v;

See Also
ALTER DATABASE (Transact-SQL )
Reserved Keywords (Transact-SQL )
CREATE DATABASE (SQL Server Transact-SQL )
DATABASEPROPERTYEX (Transact-SQL )
sys.databases (Transact-SQL )
sys.database_files (Transact-SQL )
View or Change the Compatibility Level of a Database
ALTER DATABASE (Transact-SQL) Database Mirroring
1/8/2019 • 9 minutes to read

APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse

NOTE
This feature is in maintenance mode and may be removed in a future version of Microsoft SQL Server. Avoid using this
feature in new development work, and plan to modify applications that currently use this feature. Use Always On availability
groups instead.

Controls database mirroring for a database. Values specified with the database mirroring options apply to both
copies of the database and to the database mirroring session as a whole. Only one <database_mirroring_option>
is permitted per ALTER DATABASE statement.

NOTE
We recommend that you configure database mirroring during off-peak hours because configuration can affect performance.

For ALTER DATABASE options, see ALTER DATABASE (Transact-SQL ). For ALTER DATABASE SET options, see
ALTER DATABASE SET Options (Transact-SQL ).
Transact-SQL Syntax Conventions

Syntax
ALTER DATABASE database_name
SET { <partner_option> | <witness_option> }
<partner_option> ::=
PARTNER { = 'partner_server'
| FAILOVER
| FORCE_SERVICE_ALLOW_DATA_LOSS
| OFF
| RESUME
| SAFETY { FULL | OFF }
| SUSPEND
| TIMEOUT integer
}
<witness_option> ::=
WITNESS { = 'witness_server'
| OFF
}

Arguments
IMPORTANT
A SET PARTNER or SET WITNESS command can complete successfully when entered, but fail later.
NOTE
ALTER DATABASE database mirroring options are not available for a contained database.

database_name
Is the name of the database to be modified.
PARTNER <partner_option>
Controls the database properties that define the failover partners of a database mirroring session and their
behavior. Some SET PARTNER options can be set on either partner; others are restricted to the principal server or
to the mirror server. For more information, see the individual PARTNER options that follow. A SET PARTNER
clause affects both copies of the database, regardless of the partner on which it is specified.
To execute a SET PARTNER statement, the STATE of the endpoints of both partners must be set to STARTED.
Note, also, that the ROLE of the database mirroring endpoint of each partner server instance must be set to either
PARTNER or ALL. For information about how to specify an endpoint, see Create a Database Mirroring Endpoint
for Windows Authentication (Transact-SQL ). To learn the role and state of the database mirroring endpoint of a
server instance, on that instance, use the following Transact-SQL statement:

SELECT role_desc, state_desc FROM sys.database_mirroring_endpoints

<partner_option> ::=

NOTE
Only one <partner_option> is permitted per SET PARTNER clause.

' partner_server '


Specifies the server network address of an instance of SQL Server to act as a failover partner in a new database
mirroring session. Each session requires two partners: one starts as the principal server, and the other starts as the
mirror server. We recommend that these partners reside on different computers.
This option is specified one time per session on each partner. Initiating a database mirroring session requires two
ALTER DATABASE database SET PARTNER ='partner_server' statements. Their order is significant. First, connect
to the mirror server, and specify the principal server instance as partner_server (SET PARTNER
='principal_server'). Second, connect to the principal server, and specify the mirror server instance as
partner_server (SET PARTNER ='mirror_server'); this starts a database mirroring session between these two
partners. For more information, see Setting Up Database Mirroring (SQL Server).
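The two-statement initiation order described above can be sketched as follows; the server addresses are placeholders, and each statement must run on the instance indicated in the comment:

```sql
-- Step 1: on the MIRROR server instance, name the principal as partner.
ALTER DATABASE AdventureWorks2012
    SET PARTNER = 'TCP://principal.corp.example.com:7022';

-- Step 2: on the PRINCIPAL server instance, name the mirror as partner.
-- This second statement starts the mirroring session.
ALTER DATABASE AdventureWorks2012
    SET PARTNER = 'TCP://mirror.corp.example.com:7022';
```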
The value of partner_server is a server network address. This has the following syntax:
TCP://<system -address>:<port>
where
<system -address> is a string, such as a system name, a fully qualified domain name, or an IP address, that
unambiguously identifies the destination computer system.
<port> is a port number that is associated with the mirroring endpoint of the partner server instance.
For more information, see Specify a Server Network Address (Database Mirroring).
The following example illustrates the SET PARTNER ='partner_server' clause:
'TCP://MYSERVER.mydomain.Adventure-Works.com:7777'

IMPORTANT
If a session is set up by using the ALTER DATABASE statement instead of SQL Server Management Studio, the session is set
to full transaction safety by default (SAFETY is set to FULL) and runs in high-safety mode without automatic failover. To allow
automatic failover, configure a witness; to run in high-performance mode, turn off transaction safety (SAFETY OFF).
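For instance, to turn off transaction safety after such a setup and run in high-performance mode, you could issue the following on the principal (a sketch; the database name is a placeholder):

```sql
-- Run on the principal server; switches the session to asynchronous
-- (high-performance) mode, which disables automatic and manual failover.
ALTER DATABASE AdventureWorks2012 SET PARTNER SAFETY OFF;
```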

FAILOVER
Manually fails over the principal server to the mirror server. You can specify FAILOVER only on the principal
server. This option is valid only when the SAFETY setting is FULL (the default).
The FAILOVER option requires master as the database context.
FORCE_SERVICE_ALLOW_DATA_LOSS
Forces database service to the mirror database after the principal server fails with the database in an
unsynchronized state or in a synchronized state when automatic failover does not occur.
We strongly recommend that you force service only if the principal server is no longer running. Otherwise, some
clients might continue to access the original principal database instead of the new principal database.
FORCE_SERVICE_ALLOW_DATA_LOSS is available only on the mirror server and only under all the following
conditions:
The principal server is down.
WITNESS is set to OFF or the witness is connected to the mirror server.
Force service only if you are willing to risk losing some data in order to restore service to the database
immediately.
Forcing service suspends the session, temporarily preserving all the data in the original principal database.
Once the original principal is in service and able to communicate with the new principal server, the
database administrator can resume service. When the session resumes, any unsent log records and the
corresponding updates are lost.
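Under those conditions, forcing service is a single statement on the mirror server (a sketch; the database name is a placeholder):

```sql
-- Run on the MIRROR server, and only when the principal is down.
-- Unsent log records on the old principal will be lost.
ALTER DATABASE AdventureWorks2012
    SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS;
```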
OFF
Removes a database mirroring session and removes mirroring from the database. You can specify OFF on
either partner. For information about the impact of removing mirroring, see Removing Database
Mirroring (SQL Server).
RESUME
Resumes a suspended database mirroring session. You can specify RESUME only on the principal server.
SAFETY { FULL | OFF }
Sets the level of transaction safety. You can specify SAFETY only on the principal server.
The default is FULL. With full safety, the database mirroring session runs synchronously (in high-safety
mode). If SAFETY is set to OFF, the database mirroring session runs asynchronously (in high-performance
mode).
The behavior of high-safety mode depends partly on the witness, as follows:
When safety is set to FULL and a witness is set for the session, the session runs in high-safety mode with
automatic failover. When the principal server is lost, the session automatically fails over if the database is
synchronized and the mirror server instance and witness are still connected to each other (that is, they have
quorum). For more information, see Quorum: How a Witness Affects Database Availability (Database
Mirroring).
If a witness is set for the session but is currently disconnected, the loss of the mirror server causes the
principal server to go down.
When safety is set to FULL and the witness is set to OFF, the session runs in high-safety mode without
automatic failover. If the mirror server instance goes down, the principal server instance is unaffected. If the
principal server instance goes down, you can force service (with possible data loss) to the mirror server
instance.
If SAFETY is set to OFF, the session runs in high-performance mode, and automatic failover and manual
failover are not supported. However, problems on the mirror do not affect the principal, and if the principal
server instance goes down, you can, if necessary, force service (with possible data loss) to the mirror server
instance, provided WITNESS is set to OFF or the witness is currently connected to the mirror. For more
information on forcing service, see "FORCE_SERVICE_ALLOW_DATA_LOSS" earlier in this section.

IMPORTANT
High-performance mode is not intended to use a witness. However, whenever you set SAFETY to OFF, we strongly
recommend that you ensure that WITNESS is set to OFF.

SUSPEND
Pauses a database mirroring session.
You can specify SUSPEND on either partner.
TIMEOUT integer
Specifies the time-out period in seconds. The time-out period is the maximum time that a server instance waits to
receive a PING message from another instance in the mirroring session before considering that other instance to
be disconnected.
You can specify the TIMEOUT option only on the principal server. If you do not specify this option, by default, the
time period is 10 seconds. If you specify 5 or greater, the time-out period is set to the specified number of seconds.
If you specify a time-out value of 0 to 4 seconds, the time-out period is automatically set to 5 seconds.

IMPORTANT
We recommend that you keep the time-out period at 10 seconds or greater. Setting the value to less than 10 seconds
creates the possibility of a heavily loaded system missing PINGs and declaring a false failure.

For more information, see Possible Failures During Database Mirroring.
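For example, raising the time-out to 90 seconds on a heavily loaded system is one statement on the principal (a sketch; the database name is a placeholder):

```sql
-- Run on the principal server; a partner is declared disconnected
-- only after 90 seconds without a PING.
ALTER DATABASE AdventureWorks2012 SET PARTNER TIMEOUT 90;
```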


WITNESS <witness_option>
Controls the database properties that define a database mirroring witness. A SET WITNESS clause affects both
copies of the database, but you can specify SET WITNESS only on the principal server. If a witness is set for a
session, quorum is required to serve the database, regardless of the SAFETY setting; for more information, see
Quorum: How a Witness Affects Database Availability (Database Mirroring).
We recommend that the witness and failover partners reside on separate computers. For information about the
witness, see Database Mirroring Witness.
To execute a SET WITNESS statement, the STATE of the endpoints of both the principal and witness server
instances must be set to STARTED. Note, also, that the ROLE of the database mirroring endpoint of a witness
server instance must be set to either WITNESS or ALL. For information about specifying an endpoint, see The
Database Mirroring Endpoint (SQL Server).
To learn the role and state of the database mirroring endpoint of a server instance, on that instance, use the
following Transact-SQL statement:

SELECT role_desc, state_desc FROM sys.database_mirroring_endpoints

NOTE
Database properties cannot be set on the witness.

<witness_option> ::=

NOTE
Only one <witness_option> is permitted per SET WITNESS clause.

' witness_server '


Specifies an instance of the Database Engine to act as the witness server for a database mirroring session. You can
specify SET WITNESS statements only on the principal server.
In a SET WITNESS ='witness_server' statement, the syntax of witness_server is the same as the syntax of
partner_server.
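A sketch of adding a witness to an existing session follows; the witness address is a placeholder:

```sql
-- Run on the PRINCIPAL server only.
ALTER DATABASE AdventureWorks2012
    SET WITNESS = 'TCP://witness.corp.example.com:7022';
```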
OFF
Removes the witness from a database mirroring session. Setting the witness to OFF disables automatic failover. If
the database is set to FULL SAFETY and the witness is set to OFF, a failure on the mirror server causes the
principal server to make the database unavailable.

Remarks
Examples
A. Creating a database mirroring session with a witness
Setting up database mirroring with a witness requires configuring security and preparing the mirror database, and
also using ALTER DATABASE to set the partners. For an example of the complete setup process, see Setting Up
Database Mirroring (SQL Server).
B. Manually failing over a database mirroring session
Manual failover can be initiated from either database mirroring partner. Before failing over, you should verify that
the server you believe to be the current principal server actually is the principal server. For example, for the
AdventureWorks2012 database, on the server instance that you think is the current principal server, execute the
following query:

SELECT db.name, m.mirroring_role_desc
FROM sys.database_mirroring m
JOIN sys.databases db
    ON db.database_id = m.database_id
WHERE db.name = N'AdventureWorks2012';
GO

If the server instance is in fact the principal, the value of mirroring_role_desc is Principal . If this server instance
were the mirror server, the SELECT statement would return Mirror .
The following example assumes that the server is the current principal.
1. Manually fail over to the database mirroring partner:

ALTER DATABASE AdventureWorks2012 SET PARTNER FAILOVER;
GO

2. To verify the results of the failover on the new mirror, execute the following query:

SELECT db.name, m.mirroring_role_desc
FROM sys.database_mirroring m
JOIN sys.databases db
    ON db.database_id = m.database_id
WHERE db.name = N'AdventureWorks2012';
GO

The current value of mirroring_role_desc is now Mirror .

See Also
CREATE DATABASE (SQL Server Transact-SQL )
DATABASEPROPERTYEX (Transact-SQL )
sys.database_mirroring_witnesses (Transact-SQL )
ALTER DATABASE ENCRYPTION KEY (Transact-SQL)
9/5/2018 • 2 minutes to read

APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Alters an encryption key and certificate that is used for transparently encrypting a database. For more information
about transparent database encryption, see Transparent Data Encryption (TDE ).
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server

ALTER DATABASE ENCRYPTION KEY
REGENERATE WITH ALGORITHM = { AES_128 | AES_192 | AES_256 | TRIPLE_DES_3KEY }
|
ENCRYPTION BY SERVER
{
CERTIFICATE Encryptor_Name |
ASYMMETRIC KEY Encryptor_Name
}
[ ; ]

-- Syntax for Parallel Data Warehouse

ALTER DATABASE ENCRYPTION KEY
{
{
REGENERATE WITH ALGORITHM = { AES_128 | AES_192 | AES_256 | TRIPLE_DES_3KEY }
[ ENCRYPTION BY SERVER CERTIFICATE Encryptor_Name ]
}
|
ENCRYPTION BY SERVER CERTIFICATE Encryptor_Name
}
[ ; ]

Arguments
REGENERATE WITH ALGORITHM = { AES_128 | AES_192 | AES_256 | TRIPLE_DES_3KEY }
Specifies the encryption algorithm that is used for the encryption key.
ENCRYPTION BY SERVER CERTIFICATE Encryptor_Name
Specifies the name of the certificate used to encrypt the database encryption key.
ENCRYPTION BY SERVER ASYMMETRIC KEY Encryptor_Name
Specifies the name of the asymmetric key used to encrypt the database encryption key.

Remarks
The certificate or asymmetric key that is used to encrypt the database encryption key must be located in the
master system database.
When the database owner (dbo) is changed, the database encryption key does not have to be regenerated.
After a database encryption key has been modified twice, a log backup must be performed before the database
encryption key can be modified again.

Permissions
Requires CONTROL permission on the database and VIEW DEFINITION permission on the certificate or
asymmetric key that is used to encrypt the database encryption key.

Examples
The following example alters the database encryption key to use the AES_256 algorithm.

-- Uses AdventureWorks

ALTER DATABASE ENCRYPTION KEY
REGENERATE WITH ALGORITHM = AES_256;
GO
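A second variant, sketched here with a hypothetical certificate name, re-encrypts the database encryption key with a different server certificate (per the Remarks above, the certificate must already exist in the master database):

```sql
USE AdventureWorks2012;
GO
-- MyNewServerCert is a placeholder; create it in master beforehand.
ALTER DATABASE ENCRYPTION KEY
    ENCRYPTION BY SERVER CERTIFICATE MyNewServerCert;
GO
```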

See Also
Transparent Data Encryption (TDE )
SQL Server Encryption
SQL Server and Database Encryption Keys (Database Engine)
Encryption Hierarchy
ALTER DATABASE SET Options (Transact-SQL )
CREATE DATABASE ENCRYPTION KEY (Transact-SQL )
DROP DATABASE ENCRYPTION KEY (Transact-SQL )
sys.dm_database_encryption_keys (Transact-SQL )
ALTER DATABASE (Transact-SQL) File and Filegroup
Options
12/12/2018 • 28 minutes to read

Modifies the files and filegroups associated with the database. Adds or removes files and filegroups from a
database, and changes the attributes of a database or its files and filegroups. For other ALTER DATABASE options,
see ALTER DATABASE.
For more information about the syntax conventions, see Transact-SQL Syntax Conventions.


SQL Server
Syntax
ALTER DATABASE database_name
{
<add_or_modify_files>
| <add_or_modify_filegroups>
}
[;]

<add_or_modify_files>::=
{
ADD FILE <filespec> [ ,...n ]
[ TO FILEGROUP { filegroup_name } ]
| ADD LOG FILE <filespec> [ ,...n ]
| REMOVE FILE logical_file_name
| MODIFY FILE <filespec>
}

<filespec>::=
(
NAME = logical_file_name
[ , NEWNAME = new_logical_name ]
[ , FILENAME = {'os_file_name' | 'filestream_path' | 'memory_optimized_data_path' } ]
[ , SIZE = size [ KB | MB | GB | TB ] ]
[ , MAXSIZE = { max_size [ KB | MB | GB | TB ] | UNLIMITED } ]
[ , FILEGROWTH = growth_increment [ KB | MB | GB | TB| % ] ]
[ , OFFLINE ]
)

<add_or_modify_filegroups>::=
{
| ADD FILEGROUP filegroup_name
[ CONTAINS FILESTREAM | CONTAINS MEMORY_OPTIMIZED_DATA ]
| REMOVE FILEGROUP filegroup_name
| MODIFY FILEGROUP filegroup_name
{ <filegroup_updatability_option>
| DEFAULT
| NAME = new_filegroup_name
| { AUTOGROW_SINGLE_FILE | AUTOGROW_ALL_FILES }
}
}
<filegroup_updatability_option>::=
{
{ READONLY | READWRITE }
| { READ_ONLY | READ_WRITE }
}

Arguments
<add_or_modify_files>::=
Specifies the file to be added, removed, or modified.
database_name
Is the name of the database to be modified.
ADD FILE
Adds a file to the database.
TO FILEGROUP { filegroup_name }
Specifies the filegroup to which to add the specified file. To display the current filegroups and which filegroup is
the current default, use the sys.filegroups catalog view.
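As a sketch of these two clauses together, the following adds a hypothetical 5-MB data file to an existing filegroup (all names and paths are placeholders):

```sql
-- Assumes filegroup Test1FG1 already exists in the database.
ALTER DATABASE AdventureWorks2012
ADD FILE
(
    NAME = Test1dat2,
    FILENAME = 'C:\SQLData\t1dat2.ndf',
    SIZE = 5MB,
    MAXSIZE = 100MB,
    FILEGROWTH = 5MB
)
TO FILEGROUP Test1FG1;
```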
ADD LOG FILE
Adds a log file to the specified database.
REMOVE FILE logical_file_name
Removes the logical file description from an instance of SQL Server and deletes the physical file. The file cannot
be removed unless it is empty.
logical_file_name
Is the logical name used in SQL Server when referencing the file.

WARNING
Removing a database file that has FILE_SNAPSHOT backups associated with it will succeed, but any associated snapshots
will not be deleted to avoid invalidating the backups referring to the database file. The file will be truncated, but will not be
physically deleted in order to keep the FILE_SNAPSHOT backups intact. For more information, see SQL Server Backup and
Restore with Microsoft Azure Blob Storage Service. Applies to: SQL Server ( SQL Server 2016 (13.x) through SQL Server
2017).

MODIFY FILE
Specifies the file that should be modified. Only one <filespec> property can be changed at a time. NAME must
always be specified in the <filespec> to identify the file to be modified. If SIZE is specified, the new size must be
larger than the current file size.
To modify the logical name of a data file or log file, specify the logical file name to be renamed in the NAME clause,
and specify the new logical name for the file in the NEWNAME clause. For example:

MODIFY FILE ( NAME = logical_file_name, NEWNAME = new_logical_name )

To move a data file or log file to a new location, specify the current logical file name in the NAME clause and specify
the new path and operating system file name in the FILENAME clause. For example:

MODIFY FILE ( NAME = logical_file_name, FILENAME = ' new_path/os_file_name ' )

When you move a full-text catalog, specify only the new path in the FILENAME clause. Do not specify the
operating-system file name.
For more information, see Move Database Files.
For a FILESTREAM filegroup, NAME can be modified online. FILENAME can be modified online; however, the
change does not take effect until after the container is physically relocated and the server is shutdown and then
restarted.
You can set a FILESTREAM file to OFFLINE. When a FILESTREAM file is offline, its parent filegroup will be
internally marked as offline; therefore, all access to FILESTREAM data within that filegroup will fail.

NOTE
<add_or_modify_files> options are not available in a Contained Database.

<filespec>::=
Controls the file properties.
NAME logical_file_name
Specifies the logical name of the file.
logical_file_name
Is the logical name used in an instance of SQL Server when referencing the file.
NEWNAME new_logical_file_name
Specifies a new logical name for the file.
new_logical_file_name
Is the name to replace the existing logical file name. The name must be unique within the database and comply
with the rules for identifiers. The name can be a character or Unicode constant, a regular identifier, or a delimited
identifier.
FILENAME { 'os_file_name' | 'filestream_path' | 'memory_optimized_data_path'}
Specifies the operating system (physical) file name.
' os_file_name '
For a standard (ROWS ) filegroup, this is the path and file name that is used by the operating system when you
create the file. The file must reside on the server on which SQL Server is installed. The specified path must exist
before executing the ALTER DATABASE statement.

NOTE
SIZE, MAXSIZE, and FILEGROWTH parameters cannot be set when a UNC path is specified for the file.

NOTE
System databases cannot reside in UNC share directories.

Data files should not be put on compressed file systems unless the files are read-only secondary files, or if the
database is read-only. Log files should never be put on compressed file systems.
If the file is on a raw partition, os_file_name must specify only the drive letter of an existing raw partition. Only
one file can be put on each raw partition.
' filestream_path '
For a FILESTREAM filegroup, FILENAME refers to a path where FILESTREAM data will be stored. The path up to
the last folder must exist, and the last folder must not exist. For example, if you specify the path
C:\MyFiles\MyFilestreamData , then C:\MyFiles must exist before you run ALTER DATABASE, but the
MyFilestreamData folder must not exist.

NOTE
The SIZE and FILEGROWTH properties do not apply to a FILESTREAM filegroup.

' memory_optimized_data_path '


For a memory-optimized filegroup, FILENAME refers to a path where memory-optimized data will be stored. The
path up to the last folder must exist, and the last folder must not exist. For example, if you specify the path
C:\MyFiles\MyData , then C:\MyFiles must exist before you run ALTER DATABASE, but the MyData folder must
not exist.
The filegroup and file ( <filespec> ) must be created in the same statement.

NOTE
The SIZE and FILEGROWTH properties do not apply to a MEMORY_OPTIMIZED_DATA filegroup.
For more information on memory-optimized filegroups, see The Memory Optimized Filegroup.
SIZE size
Specifies the file size. SIZE does not apply to FILESTREAM filegroups.
size
Is the size of the file.
When specified with ADD FILE, size is the initial size for the file. When specified with MODIFY FILE, size is the
new size for the file, and must be larger than the current file size.
When size is not supplied for the primary file, SQL Server uses the size of the primary file in the model
database. When a secondary data file or log file is specified but size is not specified for the file, the Database
Engine makes the file 1 MB.
The KB, MB, GB, and TB suffixes can be used to specify kilobytes, megabytes, gigabytes, or terabytes. The default
is MB. Specify a whole number and do not include a decimal. To specify a fraction of a megabyte, convert the
value to kilobytes by multiplying the number by 1024. For example, specify 1536 KB instead of 1.5 MB (1.5 x 1024
= 1536).

NOTE
SIZE cannot be set:
When a UNC path is specified for the file
For FILESTREAM and MEMORY_OPTIMIZED_DATA filegroups

MAXSIZE { max_size| UNLIMITED }


Specifies the maximum file size to which the file can grow.
max_size
Is the maximum file size. The KB, MB, GB, and TB suffixes can be used to specify kilobytes, megabytes, gigabytes,
or terabytes. The default is MB. Specify a whole number and do not include a decimal. If max_size is not specified,
the file size will increase until the disk is full.
UNLIMITED
Specifies that the file grows until the disk is full. In SQL Server, a log file specified with unlimited growth has a
maximum size of 2 TB, and a data file has a maximum size of 16 TB. There is no maximum size when this option is
specified for a FILESTREAM container. It continues to grow until the disk is full.

NOTE
MAXSIZE cannot be set when a UNC path is specified for the file.

FILEGROWTH growth_increment
Specifies the automatic growth increment of the file. The FILEGROWTH setting for a file cannot exceed the
MAXSIZE setting. FILEGROWTH does not apply to FILESTREAM filegroups.
growth_increment
Is the amount of space added to the file every time new space is required.
The value can be specified in MB, KB, GB, TB, or percent (%). If a number is specified without an MB, KB, or %
suffix, the default is MB. When % is specified, the growth increment size is the specified percentage of the size of
the file at the time the increment occurs. The size specified is rounded to the nearest 64 KB.
A value of 0 indicates that automatic growth is set to off and no additional space is allowed.
If FILEGROWTH is not specified, the default values are:

VERSION DEFAULT VALUES

Starting with SQL Server 2016 (13.x) Data 64 MB. Log files 64 MB.

Starting with SQL Server 2005 (9.x) Data 1 MB. Log files 10%.

Prior to SQL Server 2005 (9.x) Data 10%. Log files 10%.

NOTE
FILEGROWTH cannot be set:
When a UNC path is specified for the file
For FILESTREAM and MEMORY_OPTIMIZED_DATA filegroups

OFFLINE
Sets the file offline and makes all objects in the filegroup inaccessible.
CAUTION
Use this option only when the file is corrupted and can be restored. A file set to OFFLINE can only be set online
by restoring the file from backup. For more information about restoring a single file, see RESTORE (Transact-
SQL ).
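Per the <filespec> grammar above, taking a damaged file offline is a MODIFY FILE operation (a sketch; the database and logical file names are placeholders):

```sql
-- CAUTION: the file can be brought back online only by restoring it
-- from backup.
ALTER DATABASE AdventureWorks2012
    MODIFY FILE ( NAME = Test1dat2, OFFLINE );
```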

NOTE
<filespec> options are not available in a Contained Database.

<add_or_modify_filegroups>::=
Add, modify, or remove a filegroup from the database.
ADD FILEGROUP filegroup_name
Adds a filegroup to the database.
CONTAINS FILESTREAM
Specifies that the filegroup stores FILESTREAM binary large objects (BLOBs) in the file system.
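For instance, a FILESTREAM filegroup and its container might be added as follows (a sketch; names and path are placeholders, and as described under 'filestream_path', C:\MyFiles must exist while the last folder must not):

```sql
ALTER DATABASE AdventureWorks2012
    ADD FILEGROUP FileStreamFG1 CONTAINS FILESTREAM;
GO
ALTER DATABASE AdventureWorks2012
    ADD FILE ( NAME = FSContainer1,
               FILENAME = 'C:\MyFiles\MyFilestreamData' )
    TO FILEGROUP FileStreamFG1;
```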
CONTAINS MEMORY_OPTIMIZED_DATA
Applies to: SQL Server ( SQL Server 2014 (12.x) through SQL Server 2017)
Specifies that the filegroup stores memory optimized data in the file system. For more information, see In-
Memory OLTP (In-Memory Optimization). Only one MEMORY_OPTIMIZED_DATA filegroup is allowed per database. For
creating memory-optimized tables, the filegroup cannot be empty. There must be at least one file. FILENAME
refers to a path. The path up to the last folder must exist, and the last folder must not exist.
REMOVE FILEGROUP filegroup_name
Removes a filegroup from the database. The filegroup cannot be removed unless it is empty. Remove all files from
the filegroup first. For more information, see "REMOVE FILE logical_file_name," earlier in this topic.
NOTE
Unless the FILESTREAM Garbage Collector has removed all the files from a FILESTREAM container, the
ALTER DATABASE REMOVE FILE operation to remove a FILESTREAM container will fail and return an error. See the Removing
a FILESTREAM Container section later in this topic.

MODIFY FILEGROUP filegroup_name { <filegroup_updatability_option> | DEFAULT | NAME = new_filegroup_name }
Modifies the filegroup by setting the status to READ_ONLY or READ_WRITE, making the filegroup the default
filegroup for the database, or changing the filegroup name.
<filegroup_updatability_option>
Sets the read-only or read/write property to the filegroup.
DEFAULT
Changes the default database filegroup to filegroup_name. Only one filegroup in the database can be the default
filegroup. For more information, see Database Files and Filegroups.
NAME = new_filegroup_name
Changes the filegroup name to the new_filegroup_name.
AUTOGROW_SINGLE_FILE
Applies to: SQL Server ( SQL Server 2016 (13.x) through SQL Server 2017)
When a file in the filegroup meets the autogrow threshold, only that file grows. This is the default.
AUTOGROW_ALL_FILES
Applies to: SQL Server ( SQL Server 2016 (13.x) through SQL Server 2017)
When a file in the filegroup meets the autogrow threshold, all files in the filegroup grow.

NOTE
This is the default value for TempDB.
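A direct form of this filegroup setting, with hypothetical database and filegroup names, is:

```sql
-- Grow all files in the filegroup together when any one of them hits its autogrow threshold.
ALTER DATABASE SalesDb
MODIFY FILEGROUP SalesFG AUTOGROW_ALL_FILES;
GO
```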

<filegroup_updatability_option>::=
Sets the read-only or read/write property to the filegroup.
READ_ONLY | READONLY
Specifies the filegroup is read-only. Updates to objects in it are not allowed. The primary filegroup cannot be
made read-only. To change this state, you must have exclusive access to the database. For more information, see
the SINGLE_USER clause.
Because a read-only database does not allow data modifications:
Automatic recovery is skipped at system startup.
Shrinking the database is not possible.
No locking occurs in read-only databases. This can result in faster query performance.

NOTE
The keyword READONLY will be removed in a future version of Microsoft SQL Server. Avoid using READONLY in new
development work, and plan to modify applications that currently use READONLY. Use READ_ONLY instead.
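Because changing this state requires exclusive access, a sketch of marking a filegroup read-only might look like the following; the database and filegroup names (SalesDb, ArchiveFG) are hypothetical.

```sql
USE master;
GO
-- Take exclusive access before changing the filegroup state.
ALTER DATABASE SalesDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
ALTER DATABASE SalesDb MODIFY FILEGROUP ArchiveFG READ_ONLY;
GO
-- Return the database to normal multi-user access.
ALTER DATABASE SalesDb SET MULTI_USER;
GO
```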

READ_WRITE | READWRITE
Specifies the filegroup is READ_WRITE. Updates are enabled for the objects in the filegroup. To change this state,
you must have exclusive access to the database. For more information, see the SINGLE_USER clause.

NOTE
The keyword READWRITE will be removed in a future version of Microsoft SQL Server. Avoid using READWRITE in new
development work, and plan to modify applications that currently use READWRITE to use READ_WRITE instead.

TIP
The status of these options can be determined by examining the is_read_only column in the sys.databases catalog view or
the Updateability property of the DATABASEPROPERTYEX function.
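For example, either of the following queries reports the current state; the database name is hypothetical.

```sql
-- Check via the catalog view.
SELECT name, is_read_only
FROM sys.databases
WHERE name = N'SalesDb';

-- Or via DATABASEPROPERTYEX; returns READ_ONLY or READ_WRITE.
SELECT DATABASEPROPERTYEX(N'SalesDb', 'Updateability');
```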

Remarks
To decrease the size of a database, use DBCC SHRINKDATABASE.
You cannot add or remove a file while a BACKUP statement is running.
A maximum of 32,767 files and 32,767 filegroups can be specified for each database.
Starting with SQL Server 2005 (9.x), the state of a database file (for example, online or offline), is maintained
independently from the state of the database. For more information, see File States.
The state of the files within a filegroup determines the availability of the whole filegroup. For a filegroup to be
available, all files within the filegroup must be online.
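The state of each file in the current database can be inspected with a query such as:

```sql
-- Run in the database of interest; state_desc shows ONLINE, OFFLINE,
-- RESTORING, RECOVERY_PENDING, SUSPECT, or DEFUNCT.
SELECT name, type_desc, state_desc
FROM sys.database_files;
```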
If a filegroup is offline, any attempt to access the filegroup by a SQL statement fails with an error. When you
build query plans for SELECT statements, the query optimizer avoids nonclustered indexes and indexed views
that reside in offline filegroups. This enables these statements to succeed. However, if the offline filegroup
contains the heap or clustered index of the target table, the SELECT statements fail. Additionally, any INSERT ,
UPDATE , or DELETE statement that modifies a table with any index in an offline filegroup will fail.

SIZE, MAXSIZE, and FILEGROWTH parameters cannot be set when a UNC path is specified for the file.
SIZE and FILEGROWTH parameters cannot be set for memory optimized filegroups.
The keyword READONLY will be removed in a future version of Microsoft SQL Server. Avoid using READONLY in
new development work, and plan to modify applications that currently use READONLY. Use READ_ONLY instead.
The keyword READWRITE will be removed in a future version of Microsoft SQL Server. Avoid using READWRITE in
new development work, and plan to modify applications that currently use READWRITE to use READ_WRITE instead.

Moving Files
You can move system or user-defined data and log files by specifying the new location in FILENAME. This may be
useful in the following scenarios:
Failure recovery. For example, the database is in suspect mode or shutdown caused by hardware failure.
Planned relocation.
Relocation for scheduled disk maintenance.
For more information, see Move Database Files.

Initializing Files
By default, data and log files are initialized by filling the files with zeros when you perform one of the following
operations:
Create a database.
Add files to an existing database.
Increase the size of an existing file.
Restore a database or filegroup.
Data files can be initialized instantaneously. This allows for fast execution of these file operations. For more
information, see Database File Initialization.

Removing a FILESTREAM Container


Even though a FILESTREAM container may have been emptied by using the DBCC SHRINKFILE operation, the
database may still need to maintain references to the deleted files for various system maintenance reasons.
sp_filestream_force_garbage_collection (Transact-SQL) runs the FILESTREAM Garbage Collector to remove
these files when it is safe to do so. Unless the FILESTREAM Garbage Collector has removed all the files from a
FILESTREAM container, the ALTER DATABASE REMOVE FILE operation will fail to remove a FILESTREAM
container and will return an error. The following process is recommended to remove a FILESTREAM container.
1. Run DBCC SHRINKFILE (Transact-SQL) with the EMPTYFILE option to move the active contents of this
container to other containers.
2. Ensure that log backups have been taken, in the FULL or BULK_LOGGED recovery model.
3. Ensure that the replication log reader job has been run, if relevant.
4. Run sp_filestream_force_garbage_collection (Transact-SQL) to force the garbage collector to delete any
files that are no longer needed in this container.
5. Execute ALTER DATABASE with the REMOVE FILE option to remove this container.
6. Repeat steps 2 through 4 once more to complete the garbage collection.
7. Use ALTER DATABASE ... REMOVE FILE to remove this container.
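The numbered steps above might be sketched as follows. The database name (PhotoDB), container name (FSContainer1), and backup path are hypothetical, and the log-backup and log-reader steps depend on your recovery model and replication setup.

```sql
USE PhotoDB;  -- hypothetical database
GO
-- 1. Empty the FILESTREAM container into other containers.
DBCC SHRINKFILE (N'FSContainer1', EMPTYFILE);
GO
-- 2. Take a log backup (FULL or BULK_LOGGED recovery model); path is hypothetical.
BACKUP LOG PhotoDB TO DISK = N'D:\Backups\PhotoDB_log.trn';
GO
-- 4. Force FILESTREAM garbage collection for this database.
EXEC sp_filestream_force_garbage_collection @dbname = N'PhotoDB';
GO
-- 5. Attempt to remove the container; if files remain, repeat steps 2-4 and retry.
ALTER DATABASE PhotoDB REMOVE FILE FSContainer1;
GO
```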

Examples
A. Adding a file to a database
The following example adds a 5-MB data file to the AdventureWorks2012 database.

USE master;
GO
ALTER DATABASE AdventureWorks2012
ADD FILE
(
NAME = Test1dat2,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\t1dat2.ndf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
);
GO

B. Adding a filegroup with two files to a database


The following example creates the filegroup Test1FG1 in the AdventureWorks2012 database and adds two 5-MB
files to the filegroup.
USE master
GO
ALTER DATABASE AdventureWorks2012
ADD FILEGROUP Test1FG1;
GO
ALTER DATABASE AdventureWorks2012
ADD FILE
(
NAME = test1dat3,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\t1dat3.ndf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
),
(
NAME = test1dat4,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\t1dat4.ndf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
)
TO FILEGROUP Test1FG1;
GO

C. Adding two log files to a database


The following example adds two 5-MB log files to the AdventureWorks2012 database.

USE master;
GO
ALTER DATABASE AdventureWorks2012
ADD LOG FILE
(
NAME = test1log2,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\test2log.ldf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
),
(
NAME = test1log3,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\test3log.ldf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
);
GO

D. Removing a file from a database


The following example removes one of the files added in example B.

USE master;
GO
ALTER DATABASE AdventureWorks2012
REMOVE FILE test1dat4;
GO

E. Modifying a file
The following example increases the size of one of the files added in example B.
The ALTER DATABASE with MODIFY FILE command can only make a file size bigger, so if you need to make the
file size smaller you need to use DBCC SHRINKFILE.
USE master;
GO

ALTER DATABASE AdventureWorks2012


MODIFY FILE
(NAME = test1dat3,
SIZE = 200MB);
GO

The following example shrinks a data file to 100 MB, and then sets the file size to that amount.

USE AdventureWorks2012;
GO

DBCC SHRINKFILE (AdventureWorks2012_data, 100);


GO

USE master;
GO

ALTER DATABASE AdventureWorks2012


MODIFY FILE
(NAME = AdventureWorks2012_data,
SIZE = 100MB);
GO

F. Moving a file to a new location


The following example moves the Test1dat2 file created in example A to a new directory.

NOTE
You must physically move the file to the new directory before running this example. Afterward, stop and start the instance
of SQL Server or take the AdventureWorks2012 database OFFLINE and then ONLINE to implement the change.

USE master;
GO
ALTER DATABASE AdventureWorks2012
MODIFY FILE
(
NAME = Test1dat2,
FILENAME = N'c:\t1dat2.ndf'
);
GO

G. Moving tempdb to a new location


The following example moves tempdb from its current location on the disk to another disk location. Because
tempdb is re-created each time the MSSQLSERVER service is started, you do not have to physically move the
data and log files. The files are created when the service is restarted in step 3. Until the service is restarted,
tempdb continues to function in its existing location.

1. Determine the logical file names of the tempdb database and their current location on disk.

SELECT name, physical_name


FROM sys.master_files
WHERE database_id = DB_ID('tempdb');
GO
2. Change the location of each file by using ALTER DATABASE .

USE master;
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLData\tempdb.mdf');
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'E:\SQLData\templog.ldf');
GO

3. Stop and restart the instance of SQL Server.


4. Verify the file change.

SELECT name, physical_name


FROM sys.master_files
WHERE database_id = DB_ID('tempdb');

5. Delete the tempdb.mdf and templog.ldf files from their original location.
H. Making a filegroup the default
The following example makes the Test1FG1 filegroup created in example B the default filegroup. Then, the default
filegroup is reset to the PRIMARY filegroup. Note that PRIMARY must be delimited by brackets or quotation marks.

USE master;
GO
ALTER DATABASE AdventureWorks2012
MODIFY FILEGROUP Test1FG1 DEFAULT;
GO
ALTER DATABASE AdventureWorks2012
MODIFY FILEGROUP [PRIMARY] DEFAULT;
GO

I. Adding a Filegroup Using ALTER DATABASE


The following example adds a FILEGROUP that contains the FILESTREAM clause to the FileStreamPhotoDB database.

--Create and add a FILEGROUP that CONTAINS the FILESTREAM clause.


ALTER DATABASE FileStreamPhotoDB
ADD FILEGROUP TodaysPhotoShoot
CONTAINS FILESTREAM;
GO

--Add a file for storing database photos to FILEGROUP


ALTER DATABASE FileStreamPhotoDB
ADD FILE
(
NAME= 'PhotoShoot1',
FILENAME = 'C:\Users\Administrator\Pictures\TodaysPhotoShoot.ndf'
)
TO FILEGROUP TodaysPhotoShoot;
GO

The following example adds a FILEGROUP that contains the MEMORY_OPTIMIZED_DATA clause to the xtp_db database.
The filegroup stores memory optimized data.
--Create and add a FILEGROUP that CONTAINS the MEMORY_OPTIMIZED_DATA clause.
ALTER DATABASE xtp_db
ADD FILEGROUP xtp_fg
CONTAINS MEMORY_OPTIMIZED_DATA;
GO

--Add a file for storing memory optimized data to FILEGROUP


ALTER DATABASE xtp_db
ADD FILE
(
NAME='xtp_mod',
FILENAME='d:\data\xtp_mod'
)
TO FILEGROUP xtp_fg;
GO

J. Change filegroup so that when a file in the filegroup meets the autogrow threshold, all files in the filegroup
grow
The following example generates the required ALTER DATABASE statements to modify read-write filegroups with
the AUTOGROW_ALL_FILES setting.
--Generate ALTER DATABASE ... MODIFY FILEGROUP statements
--so that all read-write filegroups grow at the same time.
SET NOCOUNT ON;

DROP TABLE IF EXISTS #tmpdbs


CREATE TABLE #tmpdbs (id int IDENTITY(1,1), [dbid] int, [dbname] sysname, isdone bit);

DROP TABLE IF EXISTS #tmpfgs


CREATE TABLE #tmpfgs (id int IDENTITY(1,1), [dbid] int, [dbname] sysname, fgname sysname, isdone bit);

INSERT INTO #tmpdbs ([dbid], [dbname], [isdone])


SELECT database_id, name, 0 FROM master.sys.databases (NOLOCK) WHERE is_read_only = 0 AND state = 0;

DECLARE @dbid int, @query VARCHAR(1000), @dbname sysname, @fgname sysname

WHILE (SELECT COUNT(id) FROM #tmpdbs WHERE isdone = 0) > 0


BEGIN
SELECT TOP 1 @dbname = [dbname], @dbid = [dbid] FROM #tmpdbs WHERE isdone = 0

SET @query = 'SELECT ' + CAST(@dbid AS NVARCHAR) + ', ''' + @dbname + ''', [name], 0 FROM [' + @dbname +
'].sys.filegroups WHERE [type] = ''FG'' AND is_read_only = 0;'
INSERT INTO #tmpfgs
EXEC (@query)

UPDATE #tmpdbs
SET isdone = 1
WHERE [dbid] = @dbid
END;

IF (SELECT COUNT(ID) FROM #tmpfgs) > 0


BEGIN
WHILE (SELECT COUNT(id) FROM #tmpfgs WHERE isdone = 0) > 0
BEGIN
SELECT TOP 1 @dbname = [dbname], @dbid = [dbid], @fgname = fgname FROM #tmpfgs WHERE isdone = 0

SET @query = 'ALTER DATABASE [' + @dbname + '] MODIFY FILEGROUP [' + @fgname + '] AUTOGROW_ALL_FILES;'

PRINT @query

UPDATE #tmpfgs
SET isdone = 1
WHERE [dbid] = @dbid AND fgname = @fgname
END
END;
GO

See Also
CREATE DATABASE
DATABASEPROPERTYEX
DROP DATABASE
sp_spaceused
sys.databases
sys.database_files
sys.data_spaces
sys.filegroups
sys.master_files
Binary Large Objects
DBCC SHRINKFILE
sp_filestream_force_garbage_collection
Database File Initialization

Azure SQL Database Managed Instance


Use this statement with a database in Azure SQL Database Managed Instance.

Syntax for databases in a Managed Instance


ALTER DATABASE database_name
{
<add_or_modify_files>
| <add_or_modify_filegroups>
}
[;]

<add_or_modify_files>::=
{
ADD FILE <filespec> [ ,...n ]
[ TO FILEGROUP { filegroup_name } ]
| REMOVE FILE logical_file_name
| MODIFY FILE <filespec>
}

<filespec>::=
(
NAME = logical_file_name
[ , SIZE = size [ KB | MB | GB | TB ] ]
[ , MAXSIZE = { max_size [ KB | MB | GB | TB ] | UNLIMITED } ]
[ , FILEGROWTH = growth_increment [ KB | MB | GB | TB| % ] ]
)

<add_or_modify_filegroups>::=
{
ADD FILEGROUP filegroup_name
| REMOVE FILEGROUP filegroup_name
| MODIFY FILEGROUP filegroup_name
{ <filegroup_updatability_option>
| DEFAULT
| NAME = new_filegroup_name
| { AUTOGROW_SINGLE_FILE | AUTOGROW_ALL_FILES }
}
}
<filegroup_updatability_option>::=
{
{ READONLY | READWRITE }
| { READ_ONLY | READ_WRITE }
}

Arguments
<add_or_modify_files>::=
Specifies the file to be added, removed, or modified.
database_name
Is the name of the database to be modified.
ADD FILE
Adds a file to the database.
TO FILEGROUP { filegroup_name }
Specifies the filegroup to which to add the specified file. To display the current filegroups and which filegroup is
the current default, use the sys.filegroups catalog view.
REMOVE FILE logical_file_name
Removes the logical file description from an instance of SQL Server and deletes the physical file. The file cannot
be removed unless it is empty.
logical_file_name
Is the logical name used in SQL Server when referencing the file.
MODIFY FILE
Specifies the file that should be modified. Only one <filespec> property can be changed at a time. NAME must
always be specified in the <filespec> to identify the file to be modified. If SIZE is specified, the new size must be
larger than the current file size.
<filespec>::=
Controls the file properties.
NAME logical_file_name
Specifies the logical name of the file.
logical_file_name
Is the logical name used in an instance of SQL Server when referencing the file.
NEWNAME new_logical_file_name
Specifies a new logical name for the file.
new_logical_file_name
Is the name to replace the existing logical file name. The name must be unique within the database and comply
with the rules for identifiers. The name can be a character or Unicode constant, a regular identifier, or a delimited
identifier.
SIZE size
Specifies the file size.
size
Is the size of the file.
When specified with ADD FILE, size is the initial size for the file. When specified with MODIFY FILE, size is the
new size for the file, and must be larger than the current file size.
When size is not supplied for the primary file, SQL Server uses the size of the primary file in the model
database. When a secondary data file or log file is specified but size is not specified for the file, the Database
Engine makes the file 1 MB.
The KB, MB, GB, and TB suffixes can be used to specify kilobytes, megabytes, gigabytes, or terabytes. The default
is MB. Specify a whole number and do not include a decimal. To specify a fraction of a megabyte, convert the
value to kilobytes by multiplying the number by 1024. For example, specify 1536 KB instead of 1.5 MB (1.5 x 1024
= 1536).
MAXSIZE { max_size| UNLIMITED }
Specifies the maximum file size to which the file can grow.
max_size
Is the maximum file size. The KB, MB, GB, and TB suffixes can be used to specify kilobytes, megabytes, gigabytes,
or terabytes. The default is MB. Specify a whole number and do not include a decimal. If max_size is not specified,
the file size will increase until the disk is full.
UNLIMITED
Specifies that the file grows until the disk is full. In SQL Server, a log file specified with unlimited growth has a
maximum size of 2 TB, and a data file has a maximum size of 16 TB.
FILEGROWTH growth_increment
Specifies the automatic growth increment of the file. The FILEGROWTH setting for a file cannot exceed the
MAXSIZE setting.
growth_increment
Is the amount of space added to the file every time new space is required.
The value can be specified in MB, KB, GB, TB, or percent (%). If a number is specified without an MB, KB, or %
suffix, the default is MB. When % is specified, the growth increment size is the specified percentage of the size of
the file at the time the increment occurs. The size specified is rounded to the nearest 64 KB.
A value of 0 indicates that automatic growth is set to off and no additional space is allowed.
If FILEGROWTH is not specified, the default values are:
Data 64 MB
Log files 64 MB
<add_or_modify_filegroups>::=
Add, modify, or remove a filegroup from the database.
ADD FILEGROUP filegroup_name
Adds a filegroup to the database.
The following example creates a filegroup that is added to a database named sql_db_mi, and adds a file to the
filegroup.

ALTER DATABASE sql_db_mi ADD FILEGROUP sql_db_mi_fg;


GO
ALTER DATABASE sql_db_mi ADD FILE (NAME='sql_db_mi_mod') TO FILEGROUP sql_db_mi_fg;

REMOVE FILEGROUP filegroup_name


Removes a filegroup from the database. The filegroup cannot be removed unless it is empty. Remove all files from
the filegroup first. For more information, see "REMOVE FILE logical_file_name," earlier in this topic.
MODIFY FILEGROUP filegroup_name { <filegroup_updatability_option> | DEFAULT | NAME
=new_filegroup_name } Modifies the filegroup by setting the status to READ_ONLY or READ_WRITE, making
the filegroup the default filegroup for the database, or changing the filegroup name.
<filegroup_updatability_option>
Sets the read-only or read/write property to the filegroup.
DEFAULT
Changes the default database filegroup to filegroup_name. Only one filegroup in the database can be the default
filegroup. For more information, see Database Files and Filegroups.
NAME = new_filegroup_name
Changes the filegroup name to the new_filegroup_name.
AUTOGROW_SINGLE_FILE
When a file in the filegroup meets the autogrow threshold, only that file grows. This is the default.
AUTOGROW_ALL_FILES
When a file in the filegroup meets the autogrow threshold, all files in the filegroup grow.
<filegroup_updatability_option>::=
Sets the read-only or read/write property to the filegroup.
READ_ONLY | READONLY
Specifies the filegroup is read-only. Updates to objects in it are not allowed. The primary filegroup cannot be
made read-only. To change this state, you must have exclusive access to the database. For more information, see
the SINGLE_USER clause.
Because a read-only database does not allow data modifications:
Automatic recovery is skipped at system startup.
Shrinking the database is not possible.
No locking occurs in read-only databases. This can result in faster query performance.

NOTE
The keyword READONLY will be removed in a future version of Microsoft SQL Server. Avoid using READONLY in new
development work, and plan to modify applications that currently use READONLY. Use READ_ONLY instead.

READ_WRITE | READWRITE
Specifies the filegroup is READ_WRITE. Updates are enabled for the objects in the filegroup. To change this state,
you must have exclusive access to the database. For more information, see the SINGLE_USER clause.

NOTE
The keyword READWRITE will be removed in a future version of Microsoft SQL Server. Avoid using READWRITE in new
development work, and plan to modify applications that currently use READWRITE to use READ_WRITE instead.

The status of these options can be determined by examining the is_read_only column in the sys.databases
catalog view or the Updateability property of the DATABASEPROPERTYEX function.

Remarks
To decrease the size of a database, use DBCC SHRINKDATABASE.
You cannot add or remove a file while a BACKUP statement is running.
A maximum of 32,767 files and 32,767 filegroups can be specified for each database.

Examples
A. Adding a file to a database
The following example adds a 5-MB data file to the AdventureWorks2012 database.
USE master;
GO
ALTER DATABASE AdventureWorks2012
ADD FILE
(
NAME = Test1dat2,
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
);
GO

B. Adding a filegroup with two files to a database


The following example creates the filegroup Test1FG1 in the AdventureWorks2012 database and adds two 5-MB
files to the filegroup.

USE master
GO
ALTER DATABASE AdventureWorks2012
ADD FILEGROUP Test1FG1;
GO
ALTER DATABASE AdventureWorks2012
ADD FILE
(
NAME = test1dat3,
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
),
(
NAME = test1dat4,
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
)
TO FILEGROUP Test1FG1;
GO

C. Removing a file from a database


The following example removes one of the files added in example B.

USE master;
GO
ALTER DATABASE AdventureWorks2012
REMOVE FILE test1dat4;
GO

D. Modifying a file
The following example increases the size of one of the files added in example B.
The ALTER DATABASE with MODIFY FILE command can only make a file size bigger, so if you need to make the
file size smaller you need to use DBCC SHRINKFILE.
USE master;
GO

ALTER DATABASE AdventureWorks2012


MODIFY FILE
(NAME = test1dat3,
SIZE = 200MB);
GO

The following example shrinks a data file to 100 MB, and then sets the file size to that amount.

USE AdventureWorks2012;
GO

DBCC SHRINKFILE (AdventureWorks2012_data, 100);


GO

USE master;
GO

ALTER DATABASE AdventureWorks2012


MODIFY FILE
(NAME = AdventureWorks2012_data,
SIZE = 100MB);
GO

E. Making a filegroup the default


The following example makes the Test1FG1 filegroup created in example B the default filegroup. Then, the default
filegroup is reset to the PRIMARY filegroup. Note that PRIMARY must be delimited by brackets or quotation marks.

USE master;
GO
ALTER DATABASE AdventureWorks2012
MODIFY FILEGROUP Test1FG1 DEFAULT;
GO
ALTER DATABASE AdventureWorks2012
MODIFY FILEGROUP [PRIMARY] DEFAULT;
GO

F. Adding a Filegroup Using ALTER DATABASE


The following example adds a FILEGROUP to the MyDB database.

--Create and add a FILEGROUP.


ALTER DATABASE MyDB
ADD FILEGROUP NewFG;
GO

--Add a file to FILEGROUP


ALTER DATABASE MyDB
ADD FILE
(
NAME = 'MyFile'
)
TO FILEGROUP NewFG;
GO

G. Change filegroup so that when a file in the filegroup meets the autogrow threshold, all files in the filegroup
grow
The following example generates the required ALTER DATABASE statements to modify read-write filegroups with
the AUTOGROW_ALL_FILES setting.

--Generate ALTER DATABASE ... MODIFY FILEGROUP statements


--so that all read-write filegroups grow at the same time.
SET NOCOUNT ON;

DROP TABLE IF EXISTS #tmpdbs


CREATE TABLE #tmpdbs (id int IDENTITY(1,1), [dbid] int, [dbname] sysname, isdone bit);

DROP TABLE IF EXISTS #tmpfgs


CREATE TABLE #tmpfgs (id int IDENTITY(1,1), [dbid] int, [dbname] sysname, fgname sysname, isdone bit);

INSERT INTO #tmpdbs ([dbid], [dbname], [isdone])


SELECT database_id, name, 0 FROM master.sys.databases (NOLOCK) WHERE is_read_only = 0 AND state = 0;

DECLARE @dbid int, @query VARCHAR(1000), @dbname sysname, @fgname sysname

WHILE (SELECT COUNT(id) FROM #tmpdbs WHERE isdone = 0) > 0


BEGIN
SELECT TOP 1 @dbname = [dbname], @dbid = [dbid] FROM #tmpdbs WHERE isdone = 0

SET @query = 'SELECT ' + CAST(@dbid AS NVARCHAR) + ', ''' + @dbname + ''', [name], 0 FROM [' + @dbname +
'].sys.filegroups WHERE [type] = ''FG'' AND is_read_only = 0;'
INSERT INTO #tmpfgs
EXEC (@query)

UPDATE #tmpdbs
SET isdone = 1
WHERE [dbid] = @dbid
END;

IF (SELECT COUNT(ID) FROM #tmpfgs) > 0


BEGIN
WHILE (SELECT COUNT(id) FROM #tmpfgs WHERE isdone = 0) > 0
BEGIN
SELECT TOP 1 @dbname = [dbname], @dbid = [dbid], @fgname = fgname FROM #tmpfgs WHERE isdone = 0

SET @query = 'ALTER DATABASE [' + @dbname + '] MODIFY FILEGROUP [' + @fgname + '] AUTOGROW_ALL_FILES;'

PRINT @query

UPDATE #tmpfgs
SET isdone = 1
WHERE [dbid] = @dbid AND fgname = @fgname
END
END;
GO

See Also
CREATE DATABASE
DATABASEPROPERTYEX
DROP DATABASE
sp_spaceused
sys.databases
sys.database_files
sys.data_spaces
sys.filegroups
sys.master_files
DBCC SHRINKFILE
The Memory-Optimized Filegroup
ALTER DATABASE (Transact-SQL) SET HADR
11/28/2018

APPLIES TO: SQL Server (starting with 2012)
This topic contains the ALTER DATABASE syntax for setting Always On availability groups options on a secondary
database. Only one SET HADR option is permitted per ALTER DATABASE statement. These options are supported
only on secondary replicas.
Transact-SQL Syntax Conventions

Syntax
ALTER DATABASE database_name
SET HADR
{
{ AVAILABILITY GROUP = group_name | OFF }
| { SUSPEND | RESUME }
}
[;]

Arguments
database_name
Is the name of the secondary database to be modified.
SET HADR
Executes the specified Always On availability groups command on the specified database.
{ AVAILABILITY GROUP =group_name | OFF }
Joins or removes the availability database from the specified availability group, as follows:
group_name
Joins the specified database on the secondary replica that is hosted by the server instance on which you execute
the command to the availability group specified by group_name.
The prerequisites for this operation are as follows:
The database must already have been added to the availability group on the primary replica.
The primary replica must be active. For information about how to troubleshoot an inactive primary replica,
see Troubleshooting Always On Availability Groups Configuration (SQL Server).
The primary replica must be online, and the secondary replica must be connected to the primary replica.
The secondary database must have been restored using WITH NORECOVERY from recent database and
log backups of the primary database, ending with a log backup that is recent enough to permit the
secondary database to catch up to the primary database.
NOTE
To add a database to the availability group, connect to the server instance that hosts the primary replica, and use
the ALTER AVAILABILITY GROUP group_name ADD DATABASE database_name statement.

For more information, see Join a Secondary Database to an Availability Group (SQL Server).
OFF
Removes the specified secondary database from the availability group.
Removing a secondary database can be useful if it has fallen far behind the primary database, and you do
not want to wait for the secondary database to catch up. After removing the secondary database, you can
update it by restoring a sequence of backups ending with a recent log backup (using RESTORE ... WITH
NORECOVERY ).

IMPORTANT
To completely remove an availability database from an availability group, connect to the server instance that hosts the
primary replica, and use the ALTER AVAILABILITY GROUP group_name REMOVE DATABASE availability_database_name
statement. For more information, see Remove a Primary Database from an Availability Group (SQL Server).

SUSPEND
Suspends data movement on a secondary database. A SUSPEND command returns as soon as it has been
accepted by the replica that hosts the target database, but actually suspending the database occurs
asynchronously.
The scope of the impact depends on where you execute the ALTER DATABASE statement:
If you suspend a secondary database on a secondary replica, only the local secondary database is
suspended. Existing connections on the readable secondary remain usable. New connections to the
suspended database on the readable secondary are not allowed until data movement is resumed.
If you suspend a database on the primary replica, data movement is suspended to the corresponding
secondary databases on every secondary replica. Existing connections on a readable secondary remain
usable, and new read-intent connections will not connect to readable secondary replicas.
When data movement is suspended due to a forced manual failover, connections to the new secondary
replica are not allowed while data movement is suspended.
When a database on a secondary replica is suspended, both the database and replica become
unsynchronized and are marked as NOT SYNCHRONIZED.

IMPORTANT
While a secondary database is suspended, the send queue of the corresponding primary database will accumulate unsent
transaction log records. Connections to the secondary replica return data that was available at the time the data movement
was suspended.
NOTE
Suspending and resuming an Always On secondary database does not directly affect the availability of the primary
database, though suspending a secondary database can impact redundancy and failover capabilities for the primary
database, until the suspended secondary database is resumed. This is in contrast to database mirroring, where the mirroring
state is suspended on both the mirror database and the principal database until mirroring is resumed. Suspending an
Always On primary database suspends data movement on all the corresponding secondary databases, and redundancy and
failover capabilities cease for that database until the primary database is resumed.

For more information, see Suspend an Availability Database (SQL Server).


RESUME
Resumes suspended data movement on the specified secondary database. A RESUME command returns as soon
as it has been accepted by the replica that hosts the target database, but actually resuming the database occurs
asynchronously.
The scope of the impact depends on where you execute the ALTER DATABASE statement:
If you resume a secondary database on a secondary replica, only the local secondary database is resumed.
Data movement is resumed unless the database has also been suspended on the primary replica.
If you resume a database on the primary replica, data movement is resumed to every secondary replica on
which the corresponding secondary database has not also been suspended locally. To resume a secondary
database that was individually suspended on a secondary replica, connect to the server instance that hosts
the secondary replica and resume the database there.
Under synchronous-commit mode, the database state changes to SYNCHRONIZING. If no other database
is currently suspended, the replica state also changes to SYNCHRONIZING.
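As a sketch, assuming a secondary database named AccountsDb1 (the same hypothetical name used in the example later in this article), data movement can be suspended and later resumed with the following statements, each run on the server instance that hosts the target replica:

ALTER DATABASE AccountsDb1 SET HADR SUSPEND;

ALTER DATABASE AccountsDb1 SET HADR RESUME;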
For more information, see Resume an Availability Database (SQL Server).

Database States
When a secondary database is joined to an availability group, the local secondary replica changes the state of that
secondary database from RESTORING to ONLINE. If a secondary database is removed from the availability
group, it is set back to the RESTORING state by the local secondary replica. This allows you to apply subsequent
log backups from the primary database to that secondary database.

Restrictions
Execute ALTER DATABASE statements outside of both transactions and batches.

Security
Permissions
Requires ALTER permission on the database. Joining a database to an availability group requires membership in
the db_owner fixed database role.

Examples
The following example joins the secondary database, AccountsDb1 , to the local secondary replica of the
AccountsAG availability group.

ALTER DATABASE AccountsDb1 SET HADR AVAILABILITY GROUP = AccountsAG;


NOTE
To see this Transact-SQL statement used in context, see Create an Availability Group (Transact-SQL).

See Also
ALTER DATABASE (Transact-SQL)
ALTER AVAILABILITY GROUP (Transact-SQL)
CREATE AVAILABILITY GROUP (Transact-SQL)
Overview of Always On Availability Groups (SQL Server)
Troubleshoot Always On Availability Groups Configuration (SQL Server)
ALTER DATABASE SCOPED CREDENTIAL (Transact-SQL)
12/10/2018

APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Changes the properties of a database scoped credential.
Transact-SQL Syntax Conventions

Syntax
ALTER DATABASE SCOPED CREDENTIAL credential_name WITH IDENTITY = 'identity_name'
[ , SECRET = 'secret' ]

Arguments
credential_name
Specifies the name of the database scoped credential that is being altered.
IDENTITY ='identity_name'
Specifies the name of the account to be used when connecting outside the server. To import a file from Azure Blob
storage, the identity name must be SHARED ACCESS SIGNATURE . For more information about shared access
signatures, see Using Shared Access Signatures (SAS).
SECRET ='secret'
Specifies the secret required for outgoing authentication. secret is required to import a file from Azure Blob
storage. secret may be optional for other purposes.

WARNING
The SAS key value might begin with a '?' (question mark). When you use the SAS key as the secret, you must remove the
leading '?'. Otherwise, authentication against Azure Blob storage may fail.
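For example, a database scoped credential for Azure Blob storage might be altered as follows; the credential name is hypothetical and the SAS token is a truncated illustration, supplied without the leading '?':

ALTER DATABASE SCOPED CREDENTIAL MyBlobCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=2018-03-28&ss=b&srt=sco&sig=<signature>';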

Remarks
When a database scoped credential is changed, the values of both identity_name and secret are reset. If the
optional SECRET argument is not specified, the value of the stored secret will be set to NULL.
The secret is encrypted by using the service master key. If the service master key is regenerated, the secret is
reencrypted by using the new service master key.
Information about database scoped credentials is visible in the sys.database_scoped_credentials catalog view.

Permissions
Requires ALTER permission on the credential.
Examples
A. Changing the password of a database scoped credential
The following example changes the secret stored in a database scoped credential called Saddles . The database
scoped credential contains the Windows login RettigB and its password. The new password is added to the
database scoped credential using the SECRET clause.

ALTER DATABASE SCOPED CREDENTIAL AppCred WITH IDENTITY = 'RettigB',
    SECRET = 'sdrlk8$40-dksli87nNN8';
GO

B. Removing the password from a credential


The following example removes the password from a database scoped credential named Frames . The database
scoped credential contains Windows login Aboulrus8 and a password. After the statement is executed, the
database scoped credential will have a NULL password because the SECRET option is not specified.

ALTER DATABASE SCOPED CREDENTIAL Frames WITH IDENTITY = 'Aboulrus8';
GO

See Also
Credentials (Database Engine)
CREATE DATABASE SCOPED CREDENTIAL (Transact-SQL)
DROP DATABASE SCOPED CREDENTIAL (Transact-SQL)
sys.database_scoped_credentials
CREATE CREDENTIAL (Transact-SQL)
sys.credentials (Transact-SQL)
ALTER DATABASE SCOPED CONFIGURATION (Transact-SQL)
1/7/2019

APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
This statement enables several database configuration settings at the individual database level. This statement is
available in Azure SQL Database and in SQL Server beginning with SQL Server 2016 (13.x). Those settings are:
Clear procedure cache.
Set the MAXDOP parameter to an arbitrary value (1, 2, ...) for the primary database based on what works best
for that particular database, and set a different value (such as 0) for all secondary databases used (such as for
reporting queries).
Set the query optimizer cardinality estimation model independent of the database compatibility level.
Enable or disable parameter sniffing at the database level.
Enable or disable query optimization hotfixes at the database level.
Enable or disable the identity cache at the database level.
Enable or disable a compiled plan stub to be stored in cache when a batch is compiled for the first time.
Enable or disable collection of execution statistics for natively compiled T-SQL modules.
Enable or disable online by default options for DDL statements that support the ONLINE= syntax.
Enable or disable resumable by default options for DDL statements that support the RESUMABLE= syntax.
Enable or disable the auto-drop functionality of global temporary tables
Transact-SQL Syntax Conventions

Syntax
ALTER DATABASE SCOPED CONFIGURATION
{
{ [ FOR SECONDARY ] SET <set_options> }
}
| CLEAR PROCEDURE_CACHE
| SET < set_options >
[;]

< set_options > ::=
{
MAXDOP = { <value> | PRIMARY}
| LEGACY_CARDINALITY_ESTIMATION = { ON | OFF | PRIMARY}
| PARAMETER_SNIFFING = { ON | OFF | PRIMARY}
| QUERY_OPTIMIZER_HOTFIXES = { ON | OFF | PRIMARY}
| IDENTITY_CACHE = { ON | OFF }
| OPTIMIZE_FOR_AD_HOC_WORKLOADS = { ON | OFF }
| XTP_PROCEDURE_EXECUTION_STATISTICS = { ON | OFF }
| XTP_QUERY_EXECUTION_STATISTICS = { ON | OFF }
| ELEVATE_ONLINE = { OFF | WHEN_SUPPORTED | FAIL_UNSUPPORTED }
| ELEVATE_RESUMABLE = { OFF | WHEN_SUPPORTED | FAIL_UNSUPPORTED }
| GLOBAL_TEMPORARY_TABLE_AUTODROP = { ON | OFF }
}
Arguments
FOR SECONDARY
Specifies the settings for secondary databases (all secondary databases must have the identical values).
MAXDOP = { <value> | PRIMARY }
<value>
Specifies the default MAXDOP setting that should be used for statements. 0 is the default value and indicates that
the server configuration will be used instead. The MAXDOP at the database scope overrides (unless it is set to 0)
the max degree of parallelism set at the server level by sp_configure. Query hints can still override the DB
scoped MAXDOP in order to tune specific queries that need different setting. All these settings are limited by the
MAXDOP set for the Workload Group.
You can use the max degree of parallelism option to limit the number of processors to use in parallel plan
execution. SQL Server considers parallel execution plans for queries, index data definition language (DDL )
operations, parallel insert, online alter column, parallel stats collection, and static and keyset-driven cursor
population.
To set this option at the instance level, see Configure the max degree of parallelism Server Configuration Option.

TIP
To accomplish this at the query level, add the MAXDOP query hint.

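For example, the MAXDOP query hint can cap a single query regardless of the database scoped setting; the table name below is from the AdventureWorks sample database and is illustrative only:

SELECT ProductID, COUNT(*) AS OrderLines
FROM Sales.SalesOrderDetail
GROUP BY ProductID
OPTION (MAXDOP 2);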
PRIMARY
Can only be set for the secondaries, while the database is on the primary, and indicates that the configuration will
be the one set for the primary. If the configuration for the primary changes, the value on the secondaries will
change accordingly without the need to set the secondaries value explicitly. PRIMARY is the default setting for the
secondaries.
LEGACY_CARDINALITY_ESTIMATION = { ON | OFF | PRIMARY }
Enables you to set the query optimizer cardinality estimation model to that of SQL Server 2012 (11.x) and earlier
versions, independent of the compatibility level of the database. The default is OFF, which sets the query optimizer
cardinality estimation model based on the compatibility level of the database. Setting
LEGACY_CARDINALITY_ESTIMATION to ON is equivalent to enabling Trace Flag 9481.

TIP
To accomplish this at the query level, add the QUERYTRACEON query hint. Starting with SQL Server 2016 (13.x) SP1, to
accomplish this at the query level, add the USE HINT query hint instead of using the trace flag.
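For example, on SQL Server 2016 (13.x) SP1 and later, the legacy cardinality estimation model can be requested for a single query with USE HINT; the table name is an illustrative sample-database name:

SELECT ProductID
FROM Sales.SalesOrderDetail
WHERE OrderQty > 10
OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));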

PRIMARY
This value is only valid on secondaries while the database is on the primary, and specifies that the query optimizer
cardinality estimation model setting on all secondaries will be the value set for the primary. If the configuration on
the primary for the query optimizer cardinality estimation model changes, the value on the secondaries will change
accordingly. PRIMARY is the default setting for the secondaries.
PARAMETER_SNIFFING = { ON | OFF | PRIMARY }
Enables or disables parameter sniffing. The default is ON. Setting PARAMETER_SNIFFING to OFF is equivalent to
enabling Trace Flag 4136.
TIP
To accomplish this at the query level, see the OPTIMIZE FOR UNKNOWN query hint. Starting with SQL Server 2016 (13.x)
SP1, to accomplish this at the query level, the USE HINT query hint is also available.
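For example, parameter sniffing can be disabled for a single statement inside a parameterized module with USE HINT; the table and parameter names are illustrative:

SELECT *
FROM Sales.SalesOrderDetail
WHERE ProductID = @ProductID
OPTION (USE HINT ('DISABLE_PARAMETER_SNIFFING'));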

PRIMARY
This value is only valid on secondaries while the database is on the primary, and specifies that the value for this
setting on all secondaries will be the value set for the primary. If the configuration on the primary for using
parameter sniffing changes, the value on the secondaries will change accordingly without the need to set the
secondaries value explicitly. PRIMARY is the default setting for the secondaries.
QUERY_OPTIMIZER_HOTFIXES = { ON | OFF | PRIMARY }
Enables or disables query optimization hotfixes regardless of the compatibility level of the database. The default is
OFF, which disables query optimization hotfixes that were released after the highest available compatibility level
was introduced for a specific version (post-RTM). Setting this to ON is equivalent to enabling Trace Flag 4199.

TIP
To accomplish this at the query level, add the QUERYTRACEON query hint. Starting with SQL Server 2016 (13.x) SP1, to
accomplish this at the query level, add the USE HINT query hint instead of using the trace flag.

PRIMARY
This value is only valid on secondaries while the database is on the primary, and specifies that the value for this
setting on all secondaries is the value set for the primary. If the configuration for the primary changes, the value on
the secondaries changes accordingly without the need to set the secondaries value explicitly. PRIMARY is the
default setting for the secondaries.
CLEAR PROCEDURE_CACHE
Clears the procedure (plan) cache for the database, and can be executed both on the primary and the secondaries.
IDENTITY_CACHE = { ON | OFF }
Applies to: SQL Server 2017 (14.x) and Azure SQL Database
Enables or disables identity cache at the database level. The default is ON. Identity caching is used to improve
INSERT performance on tables with identity columns. To avoid gaps in the values of an identity column in cases
where the server restarts unexpectedly or fails over to a secondary server, disable the IDENTITY_CACHE option.
This option is similar to the existing Trace Flag 272, except that it can be set at the database level rather than only at
the server level.

NOTE
This option can only be set for the PRIMARY. For more information, see identity columns.

OPTIMIZE_FOR_AD_HOC_WORKLOADS = { ON | OFF }
Applies to: Azure SQL Database
Enables or disables a compiled plan stub to be stored in cache when a batch is compiled for the first time. The
default is OFF. Once the database scoped configuration OPTIMIZE_FOR_AD_HOC_WORKLOADS is enabled for
a database, a compiled plan stub will be stored in cache when a batch is compiled for the first time. Plan stubs have
a smaller memory footprint compared to the size of the full compiled plan. If a batch is compiled or executed again,
the compiled plan stub will be removed and replaced with a full compiled plan.
XTP_PROCEDURE_EXECUTION_STATISTICS = { ON | OFF }
Applies to: Azure SQL Database
Enables or disables collection of execution statistics at the module-level for natively compiled T-SQL modules in
the current database. The default is OFF. The execution statistics are reflected in sys.dm_exec_procedure_stats.
Module-level execution statistics for natively compiled T-SQL modules are collected if either this option is ON, or
if statistics collection is enabled through sp_xtp_control_proc_exec_stats.
XTP_QUERY_EXECUTION_STATISTICS = { ON | OFF }
Applies to: Azure SQL Database
Enables or disables collection of execution statistics at the statement-level for natively compiled T-SQL modules in
the current database. The default is OFF. The execution statistics are reflected in sys.dm_exec_query_stats and in
Query Store.
Statement-level execution statistics for natively compiled T-SQL modules are collected if either this option is ON,
or if statistics collection is enabled through sp_xtp_control_query_exec_stats.
For more information about performance monitoring of natively compiled Transact-SQL modules, see Monitoring
Performance of Natively Compiled Stored Procedures.
ELEVATE_ONLINE = { OFF | WHEN_SUPPORTED | FAIL_UNSUPPORTED }
Applies to: Azure SQL Database (feature is in public preview)
Allows you to select options to cause the engine to automatically elevate supported operations to online. The
default is OFF, which means operations will not be elevated to online unless specified in the statement.
sys.database_scoped_configurations reflects the current value of ELEVATE_ONLINE. These options will only apply
to operations that are supported for online.
FAIL_UNSUPPORTED
This value elevates all supported DDL operations to ONLINE. Operations that do not support online execution will
fail and throw a warning.
WHEN_SUPPORTED
This value elevates operations that support ONLINE. Operations that do not support online will be run offline.

NOTE
You can override the default setting by submitting a statement with the ONLINE option specified.
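For instance, with ELEVATE_ONLINE set to WHEN_SUPPORTED or FAIL_UNSUPPORTED, an individual operation can still be run offline by stating the option explicitly; the index and table names below are placeholders:

ALTER INDEX IX_Example ON dbo.ExampleTable REBUILD WITH (ONLINE = OFF);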

ELEVATE_RESUMABLE= { OFF | WHEN_SUPPORTED | FAIL_UNSUPPORTED }


Applies to: SQL Database and SQL Server 2019 preview as a public preview feature
Allows you to select options to cause the engine to automatically elevate supported operations to resumable. The
default is OFF, which means operations are not elevated to resumable unless specified in the statement.
sys.database_scoped_configurations reflects the current value of ELEVATE_RESUMABLE. These options only apply
to operations that are supported for resumable.
FAIL_UNSUPPORTED
This value elevates all supported DDL operations to RESUMABLE. Operations that do not support resumable
execution fail and throw a warning.
WHEN_SUPPORTED
This value elevates operations that support RESUMABLE. Operations that do not support resumable are run
non-resumably.

NOTE
You can override the default setting by submitting a statement with the RESUMABLE option specified.
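For instance, an individual index rebuild can opt out of resumable execution by stating the option explicitly; the index and table names below are placeholders:

ALTER INDEX IX_Example ON dbo.ExampleTable REBUILD WITH (RESUMABLE = OFF, ONLINE = ON);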

GLOBAL_TEMPORARY_TABLE_AUTODROP = { ON | OFF }
Applies to: Azure SQL Database (feature is in public preview)
Allows setting the auto drop functionality for global temporary tables. The default is ON, which means that the
global temporary tables are automatically dropped when not in use by any session. When set to OFF, global
temporary tables need to be explicitly dropped using a DROP TABLE statement or will be automatically dropped
on server restart.
In Azure SQL Database logical server, this option can be set in the individual user databases of the logical
server.
In SQL Server and Azure SQL Database Managed Instance, this option is set in TempDB and the setting of the
individual user databases has no effect.
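As a sketch, the auto-drop behavior can be turned off in a user database (or, for SQL Server and Azure SQL Database Managed Instance, in TempDB) like this:

ALTER DATABASE SCOPED CONFIGURATION SET GLOBAL_TEMPORARY_TABLE_AUTODROP = OFF;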
DISABLE_INTERLEAVED_EXECUTION_TVF = { ON | OFF }
Applies to: SQL Server 2017 (14.x) and Azure SQL Database
Allows you to enable or disable Interleaved execution for multi-statement table valued functions at the database or
statement scope while still maintaining database compatibility level 140 and higher. Interleaved execution is a
feature that is part of Adaptive query processing in Azure SQL Database. For more information, see
Adaptive query processing.
DISABLE_BATCH_MODE_ADAPTIVE_JOINS = { ON | OFF }
Applies to: SQL Server 2017 (14.x) and Azure SQL Database
Allows you to enable or disable Adaptive joins at the database or statement scope while still maintaining database
compatibility level 140 and higher. Adaptive Joins is a feature that is part of Adaptive query processing introduced
in SQL Server 2017 (14.x).
ROW_MODE_MEMORY_GRANT_FEEDBACK = { ON | OFF }
Applies to: SQL Database and SQL Server 2019 preview as a public preview feature
Allows you to enable or disable Row mode memory grant feedback at the database or statement scope while still
maintaining database compatibility level 150 and higher. Row mode memory grant feedback is a feature that is part
of Adaptive query processing introduced in SQL Server 2019.

Permissions
Requires ALTER ANY DATABASE SCOPED CONFIGURATION on the database. This permission can be granted by a user with
CONTROL permission on a database.

General Remarks
While you can configure secondary databases to have different scoped configuration settings from their primary,
all secondary databases use the same configuration. Different settings cannot be configured for individual
secondaries.
Executing this statement clears the procedure cache in the current database, which means that all queries have to
recompile.
For 3-part name queries, the settings of the current database connection for the query are honored, except for
SQL modules (such as procedures, functions, and triggers) that are compiled in the current database context and
therefore use the options of the database in which they reside.
The ALTER_DATABASE_SCOPED_CONFIGURATION event is added as a DDL event that can be used to fire a DDL trigger, and
is a child of the ALTER_DATABASE_EVENTS trigger group.
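For example, assuming the event name above, a DDL trigger (with a hypothetical name) could audit these configuration changes:

CREATE TRIGGER trg_AuditScopedConfiguration
ON DATABASE
FOR ALTER_DATABASE_SCOPED_CONFIGURATION
AS
PRINT 'A database scoped configuration was changed.';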
Database scoped configuration settings will be carried over with the database, which means that when a given
database is restored or attached, the existing configuration settings remain.

Limitations and Restrictions


MAXDOP
Granular settings can override the global ones, and Resource Governor can cap all other MAXDOP settings.
The logic for MAXDOP setting is the following:
Query hint overrides both the sp_configure and the database scoped configuration. If the resource group
MAXDOP is set for the workload group:
If the query hint is set to zero (0), it is overridden by the resource governor setting.
If the query hint is not zero (0), it is capped by the resource governor setting.
The database scoped configuration (unless it's zero) overrides the sp_configure setting unless there is a
query hint and is capped by the resource governor setting.
The sp_configure setting is overridden by the resource governor setting.
QUERY_OPTIMIZER_HOTFIXES
When the QUERYTRACEON hint is used to enable the legacy query optimizer (the SQL Server 7.0 through SQL Server
2012 (11.x) versions) or query optimizer hotfixes, it is an OR condition between the query hint and the
database scoped configuration setting, meaning if either is enabled, the database scoped configurations apply.
Geo DR
Readable secondary databases (Always On Availability Groups and Azure SQL Database geo-replicated
databases) use the secondary value by checking the state of the database. Even though recompile does not occur
on failover, and technically the new primary has queries that are using the secondary settings, the idea is that the
settings between primary and secondary only vary when the workloads are different. The cached queries therefore
use the optimal settings, whereas new queries pick up the new settings that are appropriate for them.
DacFx
Since ALTER DATABASE SCOPED CONFIGURATION is a new feature in Azure SQL Database and SQL Server (starting with
SQL Server 2016 (13.x)) that affects the database schema, exports of the schema (with or without data) are not
able to be imported into an older version of SQL Server, such as SQL Server 2012 (11.x) or SQL Server 2014
(12.x). For example, an export to a DACPAC or a BACPAC from an SQL Database or SQL Server 2016 (13.x)
database that used this new feature would not be able to be imported into a down-level server.
ELEVATE_ONLINE
This option only applies to DDL statements that support the WITH (ONLINE = ...) syntax. XML indexes are not
affected.
ELEVATE_RESUMABLE
This option only applies to DDL statements that support the WITH (RESUMABLE = ...) syntax. XML indexes are not
affected.

Metadata

The sys.database_scoped_configurations (Transact-SQL) system view provides information about scoped
configurations within a database. Database-scoped configuration options only show up in
sys.database_scoped_configurations as they are overrides to server-wide default settings. The sys.configurations
(Transact-SQL) system view only shows server-wide settings.
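For example, the current settings for a database can be inspected with a query such as the following (column names per the sys.database_scoped_configurations view):

SELECT configuration_id, name, value, value_for_secondary
FROM sys.database_scoped_configurations;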

Examples
These examples demonstrate the use of ALTER DATABASE SCOPED CONFIGURATION
A. Grant Permission
This example grants the permission required to execute ALTER DATABASE SCOPED CONFIGURATION to user [Joe].

GRANT ALTER ANY DATABASE SCOPED CONFIGURATION TO [Joe];

B. Set MAXDOP
This example sets MAXDOP = 1 for a primary database and MAXDOP = 4 for a secondary database in a geo-
replication scenario.

ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 1;

ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET MAXDOP = 4;

This example sets MAXDOP for a secondary database to be the same as it is set for its primary database in a geo-
replication scenario.

ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET MAXDOP=PRIMARY ;

C. Set LEGACY_CARDINALITY_ESTIMATION
This example sets LEGACY_CARDINALITY_ESTIMATION to ON for a secondary database in a geo-replication
scenario.

ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET LEGACY_CARDINALITY_ESTIMATION=ON ;

This example sets LEGACY_CARDINALITY_ESTIMATION for a secondary database as it is for its primary
database in a geo-replication scenario.

ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET LEGACY_CARDINALITY_ESTIMATION=PRIMARY ;

D. Set PARAMETER_SNIFFING
This example sets PARAMETER_SNIFFING to OFF for a primary database in a geo-replication scenario.

ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SNIFFING =OFF ;

This example sets PARAMETER_SNIFFING to OFF for a secondary database in a geo-replication scenario.
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET PARAMETER_SNIFFING=OFF ;

This example sets PARAMETER_SNIFFING for secondary database as it is on primary database in a geo-
replication scenario.

ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET PARAMETER_SNIFFING=PRIMARY ;

E. Set QUERY_OPTIMIZER_HOTFIXES
Set QUERY_OPTIMIZER_HOTFIXES to ON for a primary database in a geo-replication scenario.

ALTER DATABASE SCOPED CONFIGURATION SET QUERY_OPTIMIZER_HOTFIXES=ON ;

F. Clear Procedure Cache


This example clears the procedure cache (possible only for a primary database).

ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE ;

G. Set IDENTITY_CACHE
Applies to: SQL Server (starting with SQL Server 2017 (14.x)) and Azure SQL Database (feature is in public
preview)
This example disables the identity cache.

ALTER DATABASE SCOPED CONFIGURATION SET IDENTITY_CACHE=OFF ;

H. Set OPTIMIZE_FOR_AD_HOC_WORKLOADS
Applies to: Azure SQL Database
This example enables a compiled plan stub to be stored in cache when a batch is compiled for the first time.

ALTER DATABASE SCOPED CONFIGURATION SET OPTIMIZE_FOR_AD_HOC_WORKLOADS = ON;

I. Set ELEVATE_ONLINE
Applies to: Azure SQL Database, as a public preview feature
This example sets ELEVATE_ONLINE to FAIL_UNSUPPORTED.

ALTER DATABASE SCOPED CONFIGURATION SET ELEVATE_ONLINE=FAIL_UNSUPPORTED ;

J. Set ELEVATE_RESUMABLE
Applies to: Azure SQL Database and SQL Server 2019 preview as a public preview feature
This example sets ELEVATE_RESUMABLE to WHEN_SUPPORTED.

ALTER DATABASE SCOPED CONFIGURATION SET ELEVATE_RESUMABLE=WHEN_SUPPORTED ;

Additional Resources
MAXDOP Resources
Degree of Parallelism
Recommendations and guidelines for the "max degree of parallelism" configuration option in SQL Server
LEGACY_CARDINALITY_ESTIMATION Resources
Cardinality Estimation (SQL Server)
Optimizing Your Query Plans with the SQL Server 2014 Cardinality Estimator
PARAMETER_SNIFFING Resources
Parameter Sniffing
"I smell a parameter!"
QUERY_OPTIMIZER_HOTFIXES Resources
Trace Flags
SQL Server query optimizer hotfix trace flag 4199 servicing model
ELEVATE_ONLINE Resources
Guidelines for Online Index Operations
ELEVATE_RESUMABLE Resources
Guidelines for Online Index Operations

More information
sys.database_scoped_configurations
sys.configurations
Databases and Files Catalog Views
Server Configuration Options
How Online Index Operations Work
Perform Index Operations Online
ALTER INDEX (Transact-SQL)
CREATE INDEX (Transact-SQL)
ALTER DATABASE SET Options (Transact-SQL)
12/10/2018

Sets database options in SQL Server and Azure SQL Database. For other ALTER DATABASE options, see
ALTER DATABASE.
This article provides the syntax, arguments, remarks, permissions, and examples for the particular
SQL product version with which you are working.
For more information about the syntax conventions, see Transact-SQL Syntax Conventions.

SQL Server
Database mirroring, Always On availability groups, and compatibility levels are SET options but are described
in separate articles because of their length. For more information, see ALTER DATABASE Database Mirroring,
ALTER DATABASE SET HADR, and ALTER DATABASE Compatibility Level.

NOTE
Many database set options can be configured for the current session by using SET Statements and are often configured
by applications when they connect. Session level set options override the ALTER DATABASE SET values. The database
options described below are values that can be set for sessions that do not explicitly provide other set option values.
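For example, a connection can override the database default for a single session with a session-level SET statement; subsequent statements in that session then use the session value:

SET ANSI_NULLS ON;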

Syntax
ALTER DATABASE { database_name | CURRENT }
SET
{
<option_spec> [ ,...n ] [ WITH <termination> ]
}

<option_spec> ::=
{
<auto_option>
| <automatic_tuning_option>
| <change_tracking_option>
| <containment_option>
| <cursor_option>
| <database_mirroring_option>
| <date_correlation_optimization_option>
| <db_encryption_option>
| <db_state_option>
| <db_update_option>
| <db_user_access_option>
| <delayed_durability_option>
| <external_access_option>
| FILESTREAM ( <FILESTREAM_option> )
| <HADR_options>
| <mixed_page_allocation_option>
| <parameterization_option>
| <query_store_options>
| <recovery_option>
| <remote_data_archive_option>
| <service_broker_option>
| <snapshot_option>
| <sql_option>
| <target_recovery_time_option>
| <termination>
}
;

<auto_option> ::=
{
AUTO_CLOSE { ON | OFF }
| AUTO_CREATE_STATISTICS { OFF | ON [ ( INCREMENTAL = { ON | OFF } ) ] }
| AUTO_SHRINK { ON | OFF }
| AUTO_UPDATE_STATISTICS { ON | OFF }
| AUTO_UPDATE_STATISTICS_ASYNC { ON | OFF }
}

<automatic_tuning_option> ::=
{
AUTOMATIC_TUNING ( FORCE_LAST_GOOD_PLAN = { ON | OFF } )
}

<change_tracking_option> ::=
{
CHANGE_TRACKING
{
= OFF
| = ON [ ( <change_tracking_option_list > [,...n] ) ]
| ( <change_tracking_option_list> [,...n] )
}
}

<change_tracking_option_list> ::=
{
AUTO_CLEANUP = { ON | OFF }
| CHANGE_RETENTION = retention_period { DAYS | HOURS | MINUTES }
}

<containment_option> ::=
CONTAINMENT = { NONE | PARTIAL }

<cursor_option> ::=
{
CURSOR_CLOSE_ON_COMMIT { ON | OFF }
| CURSOR_DEFAULT { LOCAL | GLOBAL }
}

<database_mirroring_option>
ALTER DATABASE Database Mirroring

<date_correlation_optimization_option> ::=
DATE_CORRELATION_OPTIMIZATION { ON | OFF }

<db_encryption_option> ::=
ENCRYPTION { ON | OFF }

<db_state_option> ::=
{ ONLINE | OFFLINE | EMERGENCY }
<db_update_option> ::=
{ READ_ONLY | READ_WRITE }

<db_user_access_option> ::=
{ SINGLE_USER | RESTRICTED_USER | MULTI_USER }

<delayed_durability_option> ::=
DELAYED_DURABILITY = { DISABLED | ALLOWED | FORCED }

<external_access_option> ::=
{
DB_CHAINING { ON | OFF }
| TRUSTWORTHY { ON | OFF }
| DEFAULT_FULLTEXT_LANGUAGE = { <lcid> | <language name> | <language alias> }
| DEFAULT_LANGUAGE = { <lcid> | <language name> | <language alias> }
| NESTED_TRIGGERS = { OFF | ON }
| TRANSFORM_NOISE_WORDS = { OFF | ON }
| TWO_DIGIT_YEAR_CUTOFF = { 1753, ..., 2049, ..., 9999 }
}
<FILESTREAM_option> ::=
{
NON_TRANSACTED_ACCESS = { OFF | READ_ONLY | FULL }
| DIRECTORY_NAME = <directory_name>
}
<HADR_options> ::=
ALTER DATABASE SET HADR

<mixed_page_allocation_option> ::=
MIXED_PAGE_ALLOCATION { OFF | ON }

<parameterization_option> ::=
PARAMETERIZATION { SIMPLE | FORCED }

<query_store_options> ::=
{
QUERY_STORE
{
= OFF
| = ON [ ( <query_store_option_list> [,...n] ) ]
| ( < query_store_option_list> [,...n] )
| CLEAR [ ALL ]
}
}

<query_store_option_list> ::=
{
OPERATION_MODE = { READ_WRITE | READ_ONLY }
| CLEANUP_POLICY = ( STALE_QUERY_THRESHOLD_DAYS = number )
| DATA_FLUSH_INTERVAL_SECONDS = number
| MAX_STORAGE_SIZE_MB = number
| INTERVAL_LENGTH_MINUTES = number
| SIZE_BASED_CLEANUP_MODE = [ AUTO | OFF ]
| QUERY_CAPTURE_MODE = [ ALL | AUTO | NONE ]
| MAX_PLANS_PER_QUERY = number
| WAIT_STATS_CAPTURE_MODE = [ ON | OFF ]
}

<recovery_option> ::=
{
RECOVERY { FULL | BULK_LOGGED | SIMPLE }
| TORN_PAGE_DETECTION { ON | OFF }
| PAGE_VERIFY { CHECKSUM | TORN_PAGE_DETECTION | NONE }
}

<remote_data_archive_option> ::=
{
REMOTE_DATA_ARCHIVE =
{
ON ( SERVER = <server_name> ,
{ CREDENTIAL = <db_scoped_credential_name>
| FEDERATED_SERVICE_ACCOUNT = ON | OFF
}
)
| OFF
}
}

<service_broker_option> ::=
{
ENABLE_BROKER
| DISABLE_BROKER
| NEW_BROKER
| ERROR_BROKER_CONVERSATIONS
| HONOR_BROKER_PRIORITY { ON | OFF}
}

<snapshot_option> ::=
{
ALLOW_SNAPSHOT_ISOLATION { ON | OFF }
| READ_COMMITTED_SNAPSHOT {ON | OFF }
| MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = {ON | OFF }
}
<sql_option> ::=
{
ANSI_NULL_DEFAULT { ON | OFF }
| ANSI_NULLS { ON | OFF }
| ANSI_PADDING { ON | OFF }
| ANSI_WARNINGS { ON | OFF }
| ARITHABORT { ON | OFF }
| COMPATIBILITY_LEVEL = { 140 | 130 | 120 | 110 | 100 | 90 }
| CONCAT_NULL_YIELDS_NULL { ON | OFF }
| NUMERIC_ROUNDABORT { ON | OFF }
| QUOTED_IDENTIFIER { ON | OFF }
| RECURSIVE_TRIGGERS { ON | OFF }
}

<target_recovery_time_option> ::=
TARGET_RECOVERY_TIME = target_recovery_time { SECONDS | MINUTES }

<termination> ::=
{
ROLLBACK AFTER integer [ SECONDS ]
| ROLLBACK IMMEDIATE
| NO_WAIT
}

Arguments
database_name
Is the name of the database to be modified.
CURRENT
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
CURRENT performs the action in the current database. CURRENT is not supported for all options in all contexts. If
CURRENT fails, provide the database name.
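For example, the following statement (a sketch using an option covered later in this article) modifies whichever database the session is currently using:

ALTER DATABASE CURRENT SET AUTO_SHRINK OFF;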
<auto_option> ::=
Controls automatic options.
AUTO_CLOSE { ON | OFF }
ON
The database is shut down cleanly and its resources are freed after the last user exits.
The database automatically reopens when a user tries to use the database again. For example, by issuing a
USE database_name statement. If the database is shut down cleanly while AUTO_CLOSE is set to ON, the
database is not reopened until a user tries to use the database the next time the Database Engine is restarted.
OFF
The database remains open after the last user exits.
The AUTO_CLOSE option is useful for desktop databases because it allows for database files to be managed as
regular files. They can be moved, copied to make backups, or even e-mailed to other users. The AUTO_CLOSE
process is asynchronous; repeatedly opening and closing the database does not reduce performance.

NOTE
The AUTO_CLOSE option is not available in a Contained Database or on SQL Database.

The status of this option can be determined by examining the is_auto_close_on column in the sys.databases
catalog view or the IsAutoClose property of the DATABASEPROPERTYEX function.

NOTE
When AUTO_CLOSE is ON, some columns in the sys.databases catalog view and DATABASEPROPERTYEX function will
return NULL because the database is unavailable to retrieve the data. To resolve this, execute a USE statement to open
the database.

NOTE
Database mirroring requires AUTO_CLOSE OFF.

When the database is set to AUTO_CLOSE ON, an operation that initiates an automatic database shutdown
clears the plan cache for the instance of SQL Server. Clearing the plan cache causes a recompilation of all
subsequent execution plans and can cause a sudden, temporary decrease in query performance. In SQL Server
2005 (9.x) Service Pack 2 and higher, for each cleared cachestore in the plan cache, the SQL Server error log
contains the following informational message: " SQL Server has encountered %d occurrence(s) of cachestore
flush for the '%s' cachestore (part of plan cache) due to some database maintenance or reconfigure operations".
This message is logged every five minutes as long as the cache is flushed within that time interval.
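For example, the following statements disable AUTO_CLOSE, as recommended for server workloads and required for database mirroring (the database name is illustrative):

```sql
-- Keep the database open after the last user exits.
ALTER DATABASE AdventureWorks2012 SET AUTO_CLOSE OFF WITH NO_WAIT;

-- Verify the setting.
SELECT name, is_auto_close_on
FROM sys.databases
WHERE name = N'AdventureWorks2012';
```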
AUTO_CREATE_STATISTICS { ON | OFF }
ON
The query optimizer creates statistics on single columns in query predicates, as necessary, to improve query
plans and query performance. These single-column statistics are created when the query optimizer compiles
queries. The single-column statistics are created only on columns that are not already the first column of an
existing statistics object.
The default is ON. We recommend that you use the default setting for most databases.
OFF
The query optimizer does not create statistics on single columns in query predicates when it is compiling
queries. Setting this option to OFF can cause suboptimal query plans and degraded query performance.
The status of this option can be determined by examining the is_auto_create_stats_on column in the
sys.databases catalog view or the IsAutoCreateStatistics property of the DATABASEPROPERTYEX function.
For more information, see the section "Using the Database-Wide Statistics Options" in Statistics.
INCREMENTAL = ON | OFF
When AUTO_CREATE_STATISTICS is ON, and INCREMENTAL is set to ON, automatically created stats are
created as incremental whenever incremental stats is supported. The default value is OFF. For more
information, see CREATE STATISTICS.
Applies to: SQL Server 2014 (12.x) through SQL Server 2017, SQL Database.
AUTO_SHRINK { ON | OFF }
ON
The database files are candidates for periodic shrinking.
Both data file and log files can be automatically shrunk. AUTO_SHRINK reduces the size of the transaction log
only if the database is set to SIMPLE recovery model or if the log is backed up. When set to OFF, the database
files are not automatically shrunk during periodic checks for unused space.
The AUTO_SHRINK option causes files to be shrunk when more than 25 percent of the file contains unused
space. The file is shrunk to a size where 25 percent of the file is unused space, or to the size of the file when it
was created, whichever is larger.
You cannot shrink a read-only database.
OFF
The database files are not automatically shrunk during periodic checks for unused space.
The status of this option can be determined by examining the is_auto_shrink_on column in the sys.databases
catalog view or the IsAutoShrink property of the DATABASEPROPERTYEX function.

NOTE
The AUTO_SHRINK option is not available in a Contained Database.

AUTO_UPDATE_STATISTICS { ON | OFF }
ON
Specifies that the query optimizer updates statistics when they are used by a query and when they might be
out-of-date. Statistics become out-of-date after insert, update, delete, or merge operations change the data
distribution in the table or indexed view. The query optimizer determines when statistics might be out-of-date
by counting the number of data modifications since the last statistics update and comparing the number of
modifications to a threshold. The threshold is based on the number of rows in the table or indexed view.
The query optimizer checks for out-of-date statistics before compiling a query and before executing a cached
query plan. Before compiling a query, the query optimizer uses the columns, tables, and indexed views in the
query predicate to determine which statistics might be out-of-date. Before executing a cached query plan, the
Database Engine verifies that the query plan references up-to-date statistics.
The AUTO_UPDATE_STATISTICS option applies to statistics created for indexes, single-columns in query
predicates, and statistics that are created by using the CREATE STATISTICS statement. This option also applies
to filtered statistics.
The default is ON. We recommend that you use the default setting for most databases.
Use the AUTO_UPDATE_STATISTICS_ASYNC option to specify whether the statistics are updated
synchronously or asynchronously.
OFF
Specifies that the query optimizer does not update statistics when they are used by a query and when they
might be out-of-date. Setting this option to OFF can cause suboptimal query plans and degraded query
performance.
The status of this option can be determined by examining the is_auto_update_stats_on column in the
sys.databases catalog view or the IsAutoUpdateStatistics property of the DATABASEPROPERTYEX function.
For more information, see the section "Using the Database-Wide Statistics Options" in Statistics.
AUTO_UPDATE_STATISTICS_ASYNC { ON | OFF }
ON
Specifies that statistics updates for the AUTO_UPDATE_STATISTICS option are asynchronous. The query
optimizer does not wait for statistics updates to complete before it compiles queries.
Setting this option to ON has no effect unless AUTO_UPDATE_STATISTICS is set to ON.
By default, the AUTO_UPDATE_STATISTICS_ASYNC option is set to OFF, and the query optimizer updates
statistics synchronously.
OFF
Specifies that statistics updates for the AUTO_UPDATE_STATISTICS option are synchronous. The query
optimizer waits for statistics updates to complete before it compiles queries.
Setting this option to OFF has no effect unless AUTO_UPDATE_STATISTICS is set to ON.
The status of this option can be determined by examining the is_auto_update_stats_async_on column in the
sys.databases catalog view.
For more information that describes when to use synchronous or asynchronous statistics updates, see the
section "Using the Database-Wide Statistics Options" in Statistics.
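A sketch that enables the recommended statistics settings together with asynchronous updates (the database name is illustrative):

```sql
-- Enable automatic creation and automatic (asynchronous) update of statistics.
ALTER DATABASE AdventureWorks2012 SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE AdventureWorks2012 SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE AdventureWorks2012 SET AUTO_UPDATE_STATISTICS_ASYNC ON;

-- Verify the settings in the catalog view.
SELECT is_auto_create_stats_on,
       is_auto_update_stats_on,
       is_auto_update_stats_async_on
FROM sys.databases
WHERE name = N'AdventureWorks2012';
```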
<automatic_tuning_option> ::=
Applies to: SQL Server 2017 (14.x).
Enables or disables FORCE_LAST_GOOD_PLAN automatic tuning option.
FORCE_LAST_GOOD_PLAN = { ON | OFF }
ON
The Database Engine automatically forces the last known good plan on the Transact-SQL queries where new
SQL plan causes performance regressions. The Database Engine continuously monitors query performance of
the Transact-SQL query with the forced plan. If there are performance gains, the Database Engine will keep
using last known good plan. If performance gains are not detected, the Database Engine will produce a new
SQL plan. The statement will fail if the Query Store is not enabled or if it is not in Read-Write mode.
OFF
The Database Engine reports potential query performance regressions caused by SQL plan changes in
sys.dm_db_tuning_recommendations view. However, these recommendations are not automatically applied.
User can monitor active recommendations and fix identified problems by applying Transact-SQL scripts that are
shown in the view. This is the default value.
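For example, the following statement enables plan forcing for the current database (the Query Store must already be enabled in READ_WRITE mode, or the statement fails):

```sql
-- Automatically force the last known good plan when a plan change regresses performance.
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);
```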
<change_tracking_option> ::=
Applies to: SQL Server and SQL Database.
Controls change tracking options. You can enable change tracking, set options, change options, and disable
change tracking. For examples, see the Examples section later in this article.
ON
Enables change tracking for the database. When you enable change tracking, you can also set the
AUTO_CLEANUP and CHANGE_RETENTION options.
AUTO_CLEANUP = { ON | OFF }
ON
Change tracking information is automatically removed after the specified retention period.
OFF
Change tracking data is not removed from the database.
CHANGE_RETENTION = retention_period { DAYS | HOURS | MINUTES }
Specifies the minimum period for keeping change tracking information in the database. Data is removed only
when the AUTO_CLEANUP value is ON.
retention_period is an integer that specifies the numerical component of the retention period.
The default retention period is 2 days. The minimum retention period is 1 minute. The default retention type is
DAYS.
OFF
Disables change tracking for the database. You must disable change tracking on all tables before you can
disable change tracking for the database.
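A sketch of enabling and then disabling change tracking (the database name is illustrative):

```sql
-- Enable change tracking with a 3-day retention period and automatic cleanup.
ALTER DATABASE AdventureWorks2012
SET CHANGE_TRACKING = ON
    (CHANGE_RETENTION = 3 DAYS, AUTO_CLEANUP = ON);

-- Disable change tracking; all tables must have it disabled first.
ALTER DATABASE AdventureWorks2012 SET CHANGE_TRACKING = OFF;
```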
<containment_option> ::=
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Controls database containment options.
CONTAINMENT = { NONE | PARTIAL }
NONE
The database is not a contained database.
PARTIAL
The database is a contained database. Setting database containment to partial will fail if the database has
replication, change data capture, or change tracking enabled. Error checking stops after one failure. For more
information about contained databases, see Contained Databases.
<cursor_option> ::=
Controls cursor options.
CURSOR_CLOSE_ON_COMMIT { ON | OFF }
ON
Any cursors open when a transaction is committed or rolled back are closed.
OFF
Cursors remain open when a transaction is committed; rolling back a transaction closes any cursors except
those defined as INSENSITIVE or STATIC.
Connection-level settings that are set by using the SET statement override the default database setting for
CURSOR_CLOSE_ON_COMMIT. By default, ODBC, and OLE DB clients issue a connection-level SET
statement setting CURSOR_CLOSE_ON_COMMIT to OFF for the session when connecting to an instance of
SQL Server. For more information, see SET CURSOR_CLOSE_ON_COMMIT.
The status of this option can be determined by examining the is_cursor_close_on_commit_on column in the
sys.databases catalog view or the IsCloseCursorsOnCommitEnabled property of the DATABASEPROPERTYEX
function.
CURSOR_DEFAULT { LOCAL | GLOBAL }
Applies to: SQL Server.
Controls whether cursor scope uses LOCAL or GLOBAL.
LOCAL
When LOCAL is specified and a cursor is not defined as GLOBAL when created, the scope of the cursor is local
to the batch, stored procedure, or trigger in which the cursor was created. The cursor name is valid only within
this scope. The cursor can be referenced by local cursor variables in the batch, stored procedure, or trigger, or a
stored procedure OUTPUT parameter. The cursor is implicitly deallocated when the batch, stored procedure, or
trigger ends, unless it was passed back in an OUTPUT parameter. If the cursor is passed back in an OUTPUT
parameter, the cursor is deallocated when the last variable that references it is deallocated or goes out of scope.
GLOBAL
When GLOBAL is specified, and a cursor is not defined as LOCAL when created, the scope of the cursor is
global to the connection. The cursor name can be referenced in any stored procedure or batch executed by the
connection.
The cursor is implicitly deallocated only at disconnect. For more information, see DECLARE CURSOR.
The status of this option can be determined by examining the is_local_cursor_default column in the
sys.databases catalog view or the IsLocalCursorsDefault property of the DATABASEPROPERTYEX function.
<database_mirroring>
Applies to: SQL Server.
For the argument descriptions, see ALTER DATABASE Database Mirroring.
<date_correlation_optimization_option> ::=
Applies to: SQL Server.
Controls the date_correlation_optimization option.
DATE_CORRELATION_OPTIMIZATION { ON | OFF }
ON
SQL Server maintains correlation statistics between any two tables in the database that are linked by a
FOREIGN KEY constraint and have datetime columns.
OFF
Correlation statistics are not maintained.
To set DATE_CORRELATION_OPTIMIZATION to ON, there must be no active connections to the database
except for the connection that is executing the ALTER DATABASE statement. Afterwards, multiple connections
are supported.
The current setting of this option can be determined by examining the is_date_correlation_on column in the
sys.databases catalog view.
<db_encryption_option> ::=
Controls the database encryption state.
ENCRYPTION { ON | OFF }
Sets the database to be encrypted (ON) or not encrypted (OFF). For more information about database
encryption, see Transparent Data Encryption, and Transparent Data Encryption with Azure SQL Database.
When encryption is enabled at the database level all filegroups will be encrypted. Any new filegroups will
inherit the encrypted property. If any filegroups in the database are set to READ ONLY, the database
encryption operation will fail.
You can see the encryption state of the database by using the sys.dm_database_encryption_keys dynamic
management view.
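As a sketch, enabling Transparent Data Encryption and checking its progress (a database encryption key must already exist; the database name is illustrative):

```sql
-- A database encryption key must already have been created with
-- CREATE DATABASE ENCRYPTION KEY before encryption can be enabled.
ALTER DATABASE AdventureWorks2012 SET ENCRYPTION ON;

-- Check the encryption state (3 = encrypted).
SELECT DB_NAME(database_id) AS database_name, encryption_state
FROM sys.dm_database_encryption_keys;
```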
<db_state_option> ::=
Applies to: SQL Server.
Controls the state of the database.
OFFLINE
The database is closed, shut down cleanly, and marked offline. The database cannot be modified while it is
offline.
ONLINE
The database is open and available for use.
EMERGENCY
The database is marked READ_ONLY, logging is disabled, and access is limited to members of the sysadmin
fixed server role. EMERGENCY is primarily used for troubleshooting purposes. For example, a database marked
as suspect due to a corrupted log file can be set to the EMERGENCY state. This could enable the system
administrator read-only access to the database. Only members of the sysadmin fixed server role can set a
database to the EMERGENCY state.

NOTE
Permissions: ALTER DATABASE permission for the subject database is required to change a database to the offline or
emergency state. The server level ALTER ANY DATABASE permission is required to move a database from offline to online.

The status of this option can be determined by examining the state and state_desc columns in the sys.databases
catalog view or the Status property of the DATABASEPROPERTYEX function. For more information, see
Database States.
A database marked as RESTORING cannot be set to OFFLINE, ONLINE, or EMERGENCY. A database may be
in the RESTORING state during an active restore operation or when a restore operation of a database or log file
fails because of a corrupted backup file.
<db_update_option> ::=
Controls whether updates are allowed on the database.
READ_ONLY
Users can read data from the database but not modify it.

NOTE
To improve query performance, update statistics before setting a database to READ_ONLY. If additional statistics are
needed after a database is set to READ_ONLY, the Database Engine will create statistics in tempdb. For more information
about statistics for a read-only database, see Statistics.

READ_WRITE
The database is available for read and write operations.
To change this state, you must have exclusive access to the database. For more information, see the
SINGLE_USER clause.

NOTE
On SQL Database federated databases, SET { READ_ONLY | READ_WRITE } is disabled.

<db_user_access_option> ::=
Controls user access to the database.
SINGLE_USER
Applies to: SQL Server.
Specifies that only one user at a time can access the database. If SINGLE_USER is specified and there are other
users connected to the database, the ALTER DATABASE statement will be blocked until all users disconnect
from the specified database. To override this behavior, see the WITH <termination> clause.
The database remains in SINGLE_USER mode even if the user that set the option logs off. At that point, a
different user, but only one, can connect to the database.
Before you set the database to SINGLE_USER, verify the AUTO_UPDATE_STATISTICS_ASYNC option is set to
OFF. When set to ON, the background thread used to update statistics takes a connection against the database,
and you will be unable to access the database in single-user mode. To view the status of this option, query the
is_auto_update_stats_async_on column in the sys.databases catalog view. If the option is set to ON, perform the
following tasks:
1. Set AUTO_UPDATE_STATISTICS_ASYNC to OFF.
2. Check for active asynchronous statistics jobs by querying the sys.dm_exec_background_job_queue
dynamic management view.
If there are active jobs, either allow the jobs to complete or manually terminate them by using KILL STATS JOB.
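The steps above can be sketched as follows (the database name is illustrative):

```sql
-- Take exclusive access, rolling back other connections immediately.
ALTER DATABASE AdventureWorks2012 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- ... perform maintenance here ...

-- Return the database to multi-user mode.
ALTER DATABASE AdventureWorks2012 SET MULTI_USER;
```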
RESTRICTED_USER
RESTRICTED_USER allows for only members of the db_owner fixed database role and dbcreator and sysadmin
fixed server roles to connect to the database, but does not limit their number. All connections to the database
are disconnected in the timeframe specified by the termination clause of the ALTER DATABASE statement. After
the database has transitioned to the RESTRICTED_USER state, connection attempts by unqualified users are
refused.
MULTI_USER
All users that have the appropriate permissions to connect to the database are allowed.
The status of this option can be determined by examining the user_access column in the sys.databases catalog
view or the UserAccess property of the DATABASEPROPERTYEX function.
<delayed_durability_option> ::=
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
Controls whether transactions commit fully durable or delayed durable.
DISABLED
All transactions following SET DISABLED are fully durable. Any durability options set in an atomic block or
commit statement are ignored.
ALLOWED
All transactions following SET ALLOWED are either fully durable or delayed durable, depending upon the
durability option set in the atomic block or commit statement.
FORCED
All transactions following SET FORCED are delayed durable. Any durability options set in an atomic block or
commit statement are ignored.
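A sketch of the ALLOWED setting, which lets individual transactions opt in to delayed durability (the database and table names are illustrative):

```sql
ALTER DATABASE AdventureWorks2012 SET DELAYED_DURABILITY = ALLOWED;

-- A transaction can then request a delayed durable commit.
BEGIN TRANSACTION;
    UPDATE dbo.SomeTable SET Col1 = 1 WHERE Id = 42;  -- hypothetical table
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
```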
<external_access_option> ::=
Applies to: SQL Server.
Controls whether the database can be accessed by external resources, such as objects from another database.
DB_CHAINING { ON | OFF }
ON
Database can be the source or target of a cross-database ownership chain.
OFF
Database cannot participate in cross-database ownership chaining.

IMPORTANT
The instance of SQL Server will recognize this setting when the cross db ownership chaining server option is 0 (OFF).
When cross db ownership chaining is 1 (ON), all user databases can participate in cross-database ownership chains,
regardless of the value of this option. This option is set by using sp_configure.

Setting this option requires CONTROL SERVER permission on the database.

The DB_CHAINING option cannot be set on these system databases: master, model, and tempdb.
The status of this option can be determined by examining the is_db_chaining_on column in the sys.databases
catalog view.
TRUSTWORTHY { ON | OFF }
ON
Database modules (for example, user-defined functions or stored procedures) that use an impersonation
context can access resources outside the database.
OFF
Database modules in an impersonation context cannot access resources outside the database.
TRUSTWORTHY is set to OFF whenever the database is attached.
By default, all system databases except the msdb database have TRUSTWORTHY set to OFF. The value cannot
be changed for the model and tempdb databases. We recommend that you never set the TRUSTWORTHY
option to ON for the master database.
Setting this option requires CONTROL SERVER permission on the database.
The status of this option can be determined by examining the is_trustworthy_on column in the sys.databases
catalog view.
DEFAULT_FULLTEXT_LANGUAGE
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the default language value for full-text indexed columns.

IMPORTANT
This option is allowable only when CONTAINMENT has been set to PARTIAL. If CONTAINMENT is set to NONE, errors will
occur.

DEFAULT_LANGUAGE
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the default language for all newly created logins. Language can be specified by providing the locale ID
(lcid), the language name, or the language alias. For a list of acceptable language names and aliases, see
sys.syslanguages. This option is allowable only when CONTAINMENT has been set to PARTIAL. If
CONTAINMENT is set to NONE, errors will occur.
NESTED_TRIGGERS
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies whether an AFTER trigger can cascade; that is, perform an action that initiates another trigger, which
initiates another trigger, and so on. This option is allowable only when CONTAINMENT has been set to
PARTIAL. If CONTAINMENT is set to NONE, errors will occur.
TRANSFORM_NOISE_WORDS
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Used to suppress an error message if noise words, or stopwords, cause a Boolean operation on a full-text query
to fail. This option is allowable only when CONTAINMENT has been set to PARTIAL. If CONTAINMENT is set
to NONE, errors will occur.
TWO_DIGIT_YEAR_CUTOFF
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies an integer from 1753 to 9999 that represents the cutoff year for interpreting two-digit years as four-
digit years. This option is allowable only when CONTAINMENT has been set to PARTIAL. If CONTAINMENT is
set to NONE, errors will occur.
<FILESTREAM_option> ::=
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Controls the settings for FileTables.
NON_TRANSACTED_ACCESS = { OFF | READ_ONLY | FULL }
OFF
Non-transactional access to FileTable data is disabled.
READ_ONLY
FILESTREAM data in FileTables in this database can be read by non-transactional processes.
FULL
Full non-transactional access to FILESTREAM data in FileTables is enabled.
DIRECTORY_NAME = <directory_name>
A Windows-compatible directory name. This name should be unique among all the database-level directory
names in the SQL Server instance. Uniqueness comparison is case-insensitive, regardless of collation settings.
This option must be set before creating a FileTable in this database.
<HADR_options> ::=
Applies to: SQL Server.
See ALTER DATABASE SET HADR.
<mixed_page_allocation_option> ::=
Applies to: SQL Server (SQL Server 2016 (13.x) through current version).
MIXED_PAGE_ALLOCATION { OFF | ON } controls whether the database can create initial pages using a mixed
extent for the first eight pages of a table or index.
OFF
The database always creates initial pages using uniform extents. This is the default value.
ON
The database can create initial pages using mixed extents.
This setting is ON for all system databases. tempdb is the only system database that supports OFF.
<PARAMETERIZATION_option> ::=
Controls the parameterization option. For more information on parameterization, see the Query Processing
Architecture Guide.
PARAMETERIZATION { SIMPLE | FORCED }
SIMPLE
Queries are parameterized based on the default behavior of the database.
FORCED
SQL Server parameterizes all queries in the database.
The current setting of this option can be determined by examining the is_parameterization_forced column in the
sys.databases catalog view.
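For example, forcing parameterization and verifying the setting (the database name is illustrative):

```sql
ALTER DATABASE AdventureWorks2012 SET PARAMETERIZATION FORCED;

SELECT name, is_parameterization_forced
FROM sys.databases
WHERE name = N'AdventureWorks2012';
```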
<query_store_options> ::=
Applies to: SQL Server (SQL Server 2016 (13.x) through SQL Server 2017).
ON | OFF | CLEAR [ ALL ]
Controls if the query store is enabled in this database, and also controls removing the contents of the query
store. For more information, see Query Store Usage Scenarios.
ON
Enables the query store.
OFF
Disables the query store. This is the default value.
CLEAR
Remove the contents of the query store.
OPERATION_MODE
Describes the operation mode of the query store. Valid values are READ_ONLY and READ_WRITE. In
READ_WRITE mode, the query store collects and persists query plan and runtime execution statistics
information. In READ_ONLY mode, information can be read from the query store, but new information is not
added. If the maximum allocated space of the query store has been exhausted, the query store will change its
operation mode to READ_ONLY.
CLEANUP_POLICY
Describes the data retention policy of the query store. STALE_QUERY_THRESHOLD_DAYS determines the
number of days for which the information for a query is retained in the query store.
STALE_QUERY_THRESHOLD_DAYS is type bigint.
DATA_FLUSH_INTERVAL_SECONDS
Determines the frequency at which data written to the query store is persisted to disk. To optimize for
performance, data collected by the query store is asynchronously written to the disk. The frequency at which
this asynchronous transfer occurs is configured by using the DATA_FLUSH_INTERVAL_SECONDS argument.
DATA_FLUSH_INTERVAL_SECONDS is type bigint.
MAX_STORAGE_SIZE_MB
Determines the space allocated to the query store. MAX_STORAGE_SIZE_MB is type bigint.
INTERVAL_LENGTH_MINUTES
Determines the time interval at which runtime execution statistics data is aggregated into the query store. To
optimize for space usage, the runtime execution statistics in the runtime stats store are aggregated over a fixed
time window. This fixed time window is configured by using the INTERVAL_LENGTH_MINUTES argument.
INTERVAL_LENGTH_MINUTES is type bigint.
SIZE_BASED_CLEANUP_MODE
Controls whether cleanup will be automatically activated when total amount of data gets close to maximum
size:
OFF
Size based cleanup won't be automatically activated.
AUTO
Size based cleanup will be automatically activated when size on disk reaches 90% of max_storage_size_mb.
Size based cleanup removes the least expensive and oldest queries first. It stops at approximately 80% of
max_storage_size_mb. This is the default configuration value.
SIZE_BASED_CLEANUP_MODE is type nvarchar.
QUERY_CAPTURE_MODE
Designates the currently active query capture mode:
ALL All queries are captured. This is the default configuration value.
AUTO Capture relevant queries based on execution count and resource consumption.
NONE Stop capturing new queries. Query Store will continue to collect compile and runtime statistics for
queries that were captured already. Use this configuration with caution, since you may fail to capture
important queries.
QUERY_CAPTURE_MODE is type nvarchar.
MAX_PLANS_PER_QUERY
An integer representing the maximum number of plans maintained for each query. Default is 200.
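The options above can be combined in a single statement; a sketch with illustrative values (the database name is illustrative):

```sql
ALTER DATABASE AdventureWorks2012
SET QUERY_STORE = ON
    (OPERATION_MODE = READ_WRITE,
     CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30),
     DATA_FLUSH_INTERVAL_SECONDS = 900,
     MAX_STORAGE_SIZE_MB = 1024,
     INTERVAL_LENGTH_MINUTES = 60,
     SIZE_BASED_CLEANUP_MODE = AUTO,
     QUERY_CAPTURE_MODE = AUTO,
     MAX_PLANS_PER_QUERY = 200);

-- Remove the contents of the query store.
ALTER DATABASE AdventureWorks2012 SET QUERY_STORE CLEAR ALL;
```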
<recovery_option> ::=
Applies to: SQL Server.
Controls database recovery options and disk I/O error checking.
FULL
Provides full recovery after media failure by using transaction log backups. If a data file is damaged, media
recovery can restore all committed transactions. For more information, see Recovery Models.
BULK_LOGGED
Provides recovery after media failure by combining the best performance and least amount of log-space use for
certain large-scale or bulk operations. For information about what operations can be minimally logged, see The
Transaction Log. Under the BULK_LOGGED recovery model, logging for these operations is minimal. For more
information, see Recovery Models.
SIMPLE
A simple backup strategy that uses minimal log space is provided. Log space can be automatically reused when
it is no longer required for server failure recovery. For more information, see Recovery Models.

IMPORTANT
The simple recovery model is easier to manage than the other two models but at the expense of greater data loss
exposure if a data file is damaged. All changes since the most recent database or differential database backup are lost and
must be manually reentered.

The default recovery model is determined by the recovery model of the model database. For more information
about selecting the appropriate recovery model, see Recovery Models.
The status of this option can be determined by examining the recovery_model and recovery_model_desc
columns in the sys.databases catalog view or the Recovery property of the DATABASEPROPERTYEX function.
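For example, switching a database to the FULL recovery model (the database name is illustrative; take a full backup afterwards so the log backup chain can begin):

```sql
ALTER DATABASE AdventureWorks2012 SET RECOVERY FULL;

-- Verify the recovery model.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'AdventureWorks2012';
```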
TORN_PAGE_DETECTION { ON | OFF }
ON
Incomplete pages can be detected by the Database Engine.
OFF
Incomplete pages cannot be detected by the Database Engine.

IMPORTANT
The syntax structure TORN_PAGE_DETECTION ON | OFF will be removed in a future version of SQL Server. Avoid using
this syntax structure in new development work, and plan to modify applications that currently use the syntax structure.
Use the PAGE_VERIFY option instead.

PAGE_VERIFY { CHECKSUM | TORN_PAGE_DETECTION | NONE }
Discovers damaged database pages caused by disk I/O path errors. Disk I/O path errors can be the cause of
database corruption problems and are generally caused by power failures or disk hardware failures that occur
at the time the page is being written to disk.
CHECKSUM
Calculates a checksum over the contents of the whole page and stores the value in the page header when a
page is written to disk. When the page is read from disk, the checksum is recomputed and compared to the
checksum value stored in the page header. If the values do not match, error message 824 (indicating a
checksum failure) is reported to both the SQL Server error log and the Windows event log. A checksum failure
indicates an I/O path problem. To determine the root cause requires investigation of the hardware, firmware
drivers, BIOS, filter drivers (such as virus software), and other I/O path components.
TORN_PAGE_DETECTION
Saves a specific 2-bit pattern for each 512-byte sector in the 8-kilobyte (KB) database page and stores it in the
database page header when the page is written to disk. When the page is read from disk, the torn bits stored in
the page header are compared to the actual page sector information. Unmatched values indicate that only part
of the page was written to disk. In this situation, error message 824 (indicating a torn page error) is reported to
both the SQL Server error log and the Windows event log. Torn pages are typically detected by database
recovery if it is truly an incomplete write of a page. However, other I/O path failures can cause a torn page at
any time.
NONE
Database page writes will not generate a CHECKSUM or TORN_PAGE_DETECTION value. SQL Server will not
verify a checksum or torn page during a read even if a CHECKSUM or TORN_PAGE_DETECTION value is
present in the page header.
Consider the following important points when you use the PAGE_VERIFY option:
The default is CHECKSUM.
When a user or system database is upgraded to SQL Server 2005 (9.x) or a later version, the
PAGE_VERIFY value (NONE or TORN_PAGE_DETECTION ) is retained. We recommend that you use
CHECKSUM.
NOTE
In earlier versions of SQL Server, the PAGE_VERIFY database option is set to NONE for the tempdb database and
cannot be modified. In SQL Server 2008 and later versions, the default value for the tempdb database is
CHECKSUM for new installations of SQL Server. When upgrading an installation of SQL Server, the default value
remains NONE. The option can be modified. We recommend that you use CHECKSUM for the tempdb database.

TORN_PAGE_DETECTION may use fewer resources but provides a minimal subset of the CHECKSUM
protection.
PAGE_VERIFY can be set without taking the database offline, locking the database, or otherwise impeding
concurrency on that database.
CHECKSUM is mutually exclusive with TORN_PAGE_DETECTION; both options cannot be enabled at the
same time.
When a torn page or checksum failure is detected, you can recover by restoring the data or potentially
rebuilding the index if the failure is limited only to index pages. If you encounter a checksum failure, to
determine the type of database page or pages affected, run DBCC CHECKDB. For more information about
restore options, see RESTORE Arguments. Although restoring the data will resolve the data corruption
problem, the root cause, for example, disk hardware failure, should be diagnosed and corrected as soon as
possible to prevent continuing errors.
SQL Server will retry any read that fails with a checksum, torn page, or other I/O error four times. If the read is
successful in any one of the retry attempts, a message will be written to the error log and the command that
triggered the read will continue. If the retry attempts fail, the command will fail with error message 824.
For more information about error messages 823, 824 and 825, see How to troubleshoot a Msg 823 error in
SQL Server, How to troubleshoot Msg 824 in SQL Server and How to troubleshoot Msg 825 (read retry) in
SQL Server.
The current setting of this option can be determined by examining the page_verify_option column in the
sys.databases catalog view or the IsTornPageDetectionEnabled property of the DATABASEPROPERTYEX
function.
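As a sketch, the following statements set the recommended CHECKSUM option and then confirm the setting by querying sys.databases (the AdventureWorks2012 database name is a placeholder):

```sql
ALTER DATABASE AdventureWorks2012
SET PAGE_VERIFY CHECKSUM;
GO
SELECT name, page_verify_option_desc
FROM sys.databases
WHERE name = N'AdventureWorks2012';
GO
```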
<remote_data_archive_option> ::=
Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
Enables or disables Stretch Database for the database. For more info, see Stretch Database.
REMOTE_DATA_ARCHIVE = { ON ( SERVER = <server_name> , { CREDENTIAL =
<db_scoped_credential_name> | FEDERATED_SERVICE_ACCOUNT = ON | OFF } ) | OFF }
ON
Enables Stretch Database for the database. For more info, including additional prerequisites, see Enable Stretch
Database for a database.
Permissions. Enabling Stretch Database for a database or a table requires db_owner permissions. Enabling
Stretch Database for a database also requires CONTROL DATABASE permissions.
SERVER = <server_name>
Specifies the address of the Azure server. Include the .database.windows.net portion of the name. For example,
MyStretchDatabaseServer.database.windows.net .

CREDENTIAL = <db_scoped_credential_name>
Specifies the database scoped credential that the instance of SQL Server uses to connect to the Azure server.
Make sure the credential exists before you run this command. For more info, see CREATE DATABASE SCOPED
CREDENTIAL.
FEDERATED_SERVICE_ACCOUNT = ON | OFF
You can use a federated service account for the on-premises SQL Server to communicate with the remote
Azure server when the following conditions are all true.
The service account under which the instance of SQL Server is running is a domain account.
The domain account belongs to a domain whose Active Directory is federated with Azure Active Directory.
The remote Azure server is configured to support Azure Active Directory authentication.
The service account under which the instance of SQL Server is running must be configured as a dbmanager
or sysadmin account on the remote Azure server.
If you specify ON, you can't also specify the CREDENTIAL argument. If you specify OFF, you have to provide
the CREDENTIAL argument.
OFF
Disables Stretch Database for the database. For more info, see Disable Stretch Database and bring back remote
data.
You can only disable Stretch Database for a database after the database no longer contains any tables that are
enabled for Stretch Database. After you disable Stretch Database, data migration stops and query results no
longer include results from remote tables.
Disabling Stretch does not remove the remote database. If you want to delete the remote database, you have to
drop it by using the Azure portal.
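A minimal sketch of enabling Stretch Database with a database scoped credential follows. The server name and the credential name MyStretchCredential are placeholders, and the credential must already exist:

```sql
ALTER DATABASE AdventureWorks2012
SET REMOTE_DATA_ARCHIVE = ON
    (
        SERVER = N'MyStretchDatabaseServer.database.windows.net',
        CREDENTIAL = [MyStretchCredential]
    );
GO
```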
<service_broker_option> ::=
Applies to: SQL Server.
Controls the following Service Broker options: enables or disables message delivery, sets a new Service Broker
identifier, or sets conversation priorities to ON or OFF.
ENABLE_BROKER
Specifies that Service Broker is enabled for the specified database. Message delivery is started, and the
is_broker_enabled flag is set to true in the sys.databases catalog view. The database retains the existing Service
Broker identifier. Service Broker cannot be enabled while the database is the principal in a database mirroring
configuration.

NOTE
ENABLE_BROKER requires an exclusive database lock. If other sessions have locked resources in the database,
ENABLE_BROKER will wait until the other sessions release their locks. To enable Service Broker in a user database, ensure
that no other sessions are using the database before you run the ALTER DATABASE SET ENABLE_BROKER statement, such
as by putting the database in single user mode. To enable Service Broker in the msdb database, first stop SQL Server
Agent so that Service Broker can obtain the necessary lock.

DISABLE_BROKER
Specifies that Service Broker is disabled for the specified database. Message delivery is stopped, and the
is_broker_enabled flag is set to false in the sys.databases catalog view. The database retains the existing Service
Broker identifier.
NEW_BROKER
Specifies that the database should receive a new broker identifier. Because the database is considered to be a
new Service Broker, all existing conversations in the database are immediately removed without producing end
dialog messages. Any route that references the old Service Broker identifier must be re-created with the new
identifier.
ERROR_BROKER_CONVERSATIONS
Specifies that Service Broker message delivery is enabled. This preserves the existing Service Broker identifier
for the database. Service Broker ends all conversations in the database with an error. This enables applications
to perform regular cleanup for existing conversations.
HONOR_BROKER_PRIORITY {ON | OFF }
ON
Send operations take into consideration the priority levels that are assigned to conversations. Messages from
conversations that have high priority levels are sent before messages from conversations that are assigned low
priority levels.
OFF
Send operations run as if all conversations have the default priority level.
Changes to the HONOR_BROKER_PRIORITY option take effect immediately for new dialogs or dialogs that
have no messages waiting to be sent. Dialogs that have messages waiting to be sent when ALTER DATABASE is
run will not pick up the new setting until some of the messages for the dialog have been sent. The amount of
time before all dialogs start using the new setting can vary considerably.
The current setting of this property is reported in the is_broker_priority_honored column in the sys.databases
catalog view.
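The broker options above can be applied with ALTER DATABASE; the following sketch enables Service Broker and verifies the result (ROLLBACK IMMEDIATE disconnects other sessions, and AdventureWorks2012 is a placeholder database name):

```sql
USE master;
GO
ALTER DATABASE AdventureWorks2012
SET ENABLE_BROKER
WITH ROLLBACK IMMEDIATE;
GO
SELECT name, is_broker_enabled
FROM sys.databases
WHERE name = N'AdventureWorks2012';
GO
```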
<snapshot_option> ::=
Controls the snapshot isolation options at the database level.
ALLOW_SNAPSHOT_ISOLATION { ON | OFF }
ON
Enables Snapshot option at the database level. When it is enabled, DML statements start generating row
versions even when no transaction uses Snapshot Isolation. Once this option is enabled, transactions can
specify the SNAPSHOT transaction isolation level. When a transaction runs at the SNAPSHOT isolation level,
all statements see a snapshot of data as it exists at the start of the transaction. If a transaction running at the
SNAPSHOT isolation level accesses data in multiple databases, either ALLOW_SNAPSHOT_ISOLATION must
be set to ON in all the databases, or each statement in the transaction must use locking hints on any reference
in a FROM clause to a table in a database where ALLOW_SNAPSHOT_ISOLATION is OFF.
OFF
Turns off the Snapshot option at the database level. Transactions cannot specify the SNAPSHOT transaction
isolation level.
When you set ALLOW_SNAPSHOT_ISOLATION to a new state (from ON to OFF, or from OFF to ON ), ALTER
DATABASE does not return control to the caller until all existing transactions in the database are committed. If
the database is already in the state specified in the ALTER DATABASE statement, control is returned to the caller
immediately. If the ALTER DATABASE statement does not return quickly, use
sys.dm_tran_active_snapshot_database_transactions to determine whether there are long-running transactions.
If the ALTER DATABASE statement is canceled, the database remains in the state it was in when ALTER
DATABASE was started. The sys.databases catalog view indicates the state of snapshot-isolation transactions in
the database. If snapshot_isolation_state_desc = IN_TRANSITION_TO_ON, ALTER DATABASE
ALLOW_SNAPSHOT_ISOLATION OFF will pause six seconds and retry the operation.
You cannot change the state of ALLOW_SNAPSHOT_ISOLATION if the database is OFFLINE.
If you set ALLOW_SNAPSHOT_ISOLATION in a READ_ONLY database, the setting will be retained if the
database is later set to READ_WRITE.
You can change the ALLOW_SNAPSHOT_ISOLATION settings for the master, model, msdb, and tempdb
databases. If you change the setting for tempdb, the setting is retained every time the instance of the Database
Engine is stopped and restarted. If you change the setting for model, that setting becomes the default for any
new databases that are created, except for tempdb.
The option is ON, by default, for the master and msdb databases.
The current setting of this option can be determined by examining the snapshot_isolation_state column in the
sys.databases catalog view.
READ_COMMITTED_SNAPSHOT { ON | OFF }
ON
Enables Read-Committed Snapshot option at the database level. When it is enabled, DML statements start
generating row versions even when no transaction uses Snapshot Isolation. Once this option is enabled, the
transactions specifying the read committed isolation level use row versioning instead of locking. When a
transaction runs at the read committed isolation level, all statements see a snapshot of data as it exists at the
start of the statement.
OFF
Turns off Read-Committed Snapshot option at the database level. Transactions specifying the READ
COMMITTED isolation level use locking.
To set READ_COMMITTED_SNAPSHOT ON or OFF, there must be no active connections to the database
except for the connection executing the ALTER DATABASE command. However, the database does not have to
be in single-user mode. You cannot change the state of this option when the database is OFFLINE.
If you set READ_COMMITTED_SNAPSHOT in a READ_ONLY database, the setting will be retained when the
database is later set to READ_WRITE.
READ_COMMITTED_SNAPSHOT cannot be turned ON for the master, tempdb, or msdb system databases. If
you change the setting for model, that setting becomes the default for any new databases created, except for
tempdb.
The current setting of this option can be determined by examining the is_read_committed_snapshot_on column
in the sys.databases catalog view.

WARNING
When a table is created with DURABILITY = SCHEMA_ONLY, and READ_COMMITTED_SNAPSHOT is subsequently
changed using ALTER DATABASE, data in the table will be lost.
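The READ_COMMITTED_SNAPSHOT option described above can be enabled as in the following sketch; WITH ROLLBACK IMMEDIATE terminates other connections so the statement does not wait (AdventureWorks2012 is a placeholder database name):

```sql
USE master;
GO
ALTER DATABASE AdventureWorks2012
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;
GO
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = N'AdventureWorks2012';
GO
```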

MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT { ON | OFF }
Applies to: SQL Server 2014 (12.x) through SQL Server 2017.
ON
When the transaction isolation level is set to any isolation level lower than SNAPSHOT (for example, READ
COMMITTED or READ UNCOMMITTED ), all interpreted Transact-SQL operations on memory-optimized
tables are performed under SNAPSHOT isolation. This is done regardless of whether the transaction isolation
level is set explicitly at the session level, or the default is used implicitly.
OFF
Does not elevate the transaction isolation level for interpreted Transact-SQL operations on memory-optimized
tables.
You cannot change the state of MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT if the database is OFFLINE.
The option is OFF, by default.
The current setting of this option can be determined by examining the
is_memory_optimized_elevate_to_snapshot_on column in the sys.databases catalog view.
<sql_option> ::=
Controls the ANSI compliance options at the database level.
ANSI_NULL_DEFAULT { ON | OFF }
Determines the default value, NULL or NOT NULL, of a column or CLR user-defined type for which the
nullability is not explicitly defined in CREATE TABLE or ALTER TABLE statements. Columns that are defined
with constraints follow constraint rules regardless of this setting.
ON
The default value is NULL.
OFF
The default value is NOT NULL.
Connection-level settings that are set by using the SET statement override the default database-level setting for
ANSI_NULL_DEFAULT. By default, ODBC and OLE DB clients issue a connection-level SET statement setting
ANSI_NULL_DEFAULT to ON for the session when connecting to an instance of SQL Server. For more
information, see SET ANSI_NULL_DFLT_ON.
For ANSI compatibility, setting the database option ANSI_NULL_DEFAULT to ON changes the database
default to NULL.
The status of this option can be determined by examining the is_ansi_null_default_on column in the
sys.databases catalog view or the IsAnsiNullDefault property of the DATABASEPROPERTYEX function.
ANSI_NULLS { ON | OFF }
ON
All comparisons to a null value evaluate to UNKNOWN.
OFF
Comparisons of non-UNICODE values to a null value evaluate to TRUE if both values are NULL.

IMPORTANT
In a future version of SQL Server, ANSI_NULLS will always be ON and any applications that explicitly set the option to OFF
will produce an error. Avoid using this feature in new development work, and plan to modify applications that currently
use this feature.

Connection-level settings that are set by using the SET statement override the default database setting for
ANSI_NULLS. By default, ODBC and OLE DB clients issue a connection-level SET statement setting
ANSI_NULLS to ON for the session when connecting to an instance of SQL Server. For more information, see
SET ANSI_NULLS.
SET ANSI_NULLS also must be set to ON when you create or make changes to indexes on computed columns
or indexed views.
The status of this option can be determined by examining the is_ansi_nulls_on column in the sys.databases
catalog view or the IsAnsiNullsEnabled property of the DATABASEPROPERTYEX function.
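The difference between the two settings can be seen with a session-level demonstration (SET ANSI_NULLS overrides the database default for the session):

```sql
SET ANSI_NULLS ON;
-- NULL = NULL evaluates to UNKNOWN, so the ELSE branch is taken.
SELECT CASE WHEN NULL = NULL THEN 'TRUE' ELSE 'UNKNOWN' END;

SET ANSI_NULLS OFF;
-- NULL = NULL evaluates to TRUE when both values are NULL.
SELECT CASE WHEN NULL = NULL THEN 'TRUE' ELSE 'UNKNOWN' END;
```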
ANSI_PADDING { ON | OFF }
ON
Strings are padded to the same length before conversion or inserting to a varchar or nvarchar data type.
Trailing blanks in character values inserted into varchar or nvarchar columns and trailing zeros in binary
values inserted into varbinary columns are not trimmed. Values are not padded to the length of the column.
OFF
Trailing blanks for varchar or nvarchar and zeros for varbinary are trimmed.
When OFF is specified, this setting affects only the definition of new columns.

IMPORTANT
In a future version of SQL Server, ANSI_PADDING will always be ON and any applications that explicitly set the option to
OFF will produce an error. Avoid using this feature in new development work, and plan to modify applications that
currently use this feature. We recommend that you always set ANSI_PADDING to ON. ANSI_PADDING must be ON when
you create or manipulate indexes on computed columns or indexed views.

char(n ) and binary(n ) columns that allow for nulls are padded to the length of the column when
ANSI_PADDING is set to ON, but trailing blanks and zeros are trimmed when ANSI_PADDING is OFF.
char(n ) and binary(n ) columns that do not allow nulls are always padded to the length of the column.
Connection-level settings that are set by using the SET statement override the default database-level setting for
ANSI_PADDING. By default, ODBC and OLE DB clients issue a connection-level SET statement setting
ANSI_PADDING to ON for the session when connecting to an instance of SQL Server. For more information,
see SET ANSI_PADDING.
The status of this option can be determined by examining the is_ansi_padding_on column in the sys.databases
catalog view or the IsAnsiPaddingEnabled property of the DATABASEPROPERTYEX function.
ANSI_WARNINGS { ON | OFF }
ON
Errors or warnings are issued when conditions such as divide-by-zero occur or null values appear in aggregate
functions.
OFF
No warnings are raised and null values are returned when conditions such as divide-by-zero occur.
SET ANSI_WARNINGS must be set to ON when you create or make changes to indexes on computed columns
or indexed views.
Connection-level settings that are set by using the SET statement override the default database setting for
ANSI_WARNINGS. By default, ODBC and OLE DB clients issue a connection-level SET statement setting
ANSI_WARNINGS to ON for the session when connecting to an instance of SQL Server. For more
information, see SET ANSI_WARNINGS.
The status of this option can be determined by examining the is_ansi_warnings_on column in the sys.databases
catalog view or the IsAnsiWarningsEnabled property of the DATABASEPROPERTYEX function.
ARITHABORT { ON | OFF }
ON
A query is ended when an overflow or divide-by-zero error occurs during query execution.
OFF
A warning message is displayed when one of these errors occurs, but the query, batch, or transaction continues
to process as if no error occurred.
SET ARITHABORT must be set to ON when you create or make changes to indexes on computed columns or
indexed views.
The status of this option can be determined by examining the is_arithabort_on column in the sys.databases
catalog view or the IsArithmeticAbortEnabled property of the DATABASEPROPERTYEX function.
COMPATIBILITY_LEVEL = { 140 | 130 | 120 | 110 | 100 | 90 }
For more information, see ALTER DATABASE Compatibility Level.
CONCAT_NULL_YIELDS_NULL { ON | OFF }
ON
The result of a concatenation operation is NULL when either operand is NULL. For example, concatenating the
character string "This is" and NULL causes the value NULL, instead of the value "This is".
OFF
The null value is treated as an empty character string.
CONCAT_NULL_YIELDS_NULL must be set to ON when you create or make changes to indexes on computed
columns or indexed views.

IMPORTANT
In a future version of SQL Server, CONCAT_NULL_YIELDS_NULL will always be ON and any applications that explicitly set
the option to OFF will produce an error. Avoid using this feature in new development work, and plan to modify
applications that currently use this feature.

Connection-level settings that are set by using the SET statement override the default database setting for
CONCAT_NULL_YIELDS_NULL. By default, ODBC and OLE DB clients issue a connection-level SET statement
setting CONCAT_NULL_YIELDS_NULL to ON for the session when connecting to an instance of SQL Server.
For more information, see SET CONCAT_NULL_YIELDS_NULL.
The status of this option can be determined by examining the is_concat_null_yields_null_on column in the
sys.databases catalog view or the IsNullConcat property of the DATABASEPROPERTYEX function.
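A short session-level demonstration of the behavior described above (SET CONCAT_NULL_YIELDS_NULL overrides the database default for the session):

```sql
SET CONCAT_NULL_YIELDS_NULL ON;
SELECT 'This is ' + NULL;   -- returns NULL

SET CONCAT_NULL_YIELDS_NULL OFF;
SELECT 'This is ' + NULL;   -- NULL is treated as an empty string; returns 'This is '
```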
QUOTED_IDENTIFIER { ON | OFF }
ON
Double quotation marks can be used to enclose delimited identifiers.
All strings delimited by double quotation marks are interpreted as object identifiers. Quoted identifiers do not
have to follow the Transact-SQL rules for identifiers. They can be keywords and can include characters not
generally allowed in Transact-SQL identifiers. If a single quotation mark (') is part of the literal string, it can be
represented by double quotation marks (").
OFF
Identifiers cannot be in quotation marks and must follow all Transact-SQL rules for identifiers. Literals can be
delimited by either single or double quotation marks.
SQL Server also allows for identifiers to be delimited by square brackets ([ ]). Bracketed identifiers can always
be used, regardless of the setting of QUOTED_IDENTIFIER. For more information, see Database Identifiers.
When a table is created, the QUOTED_IDENTIFIER option is always stored as ON in the metadata of the table,
even if the option is set to OFF when the table is created.
Connection-level settings that are set by using the SET statement override the default database setting for
QUOTED_IDENTIFIER. By default, ODBC and OLE DB clients issue a connection-level SET statement setting
QUOTED_IDENTIFIER to ON when connecting to an instance of SQL Server. For more information, see SET
QUOTED_IDENTIFIER.
The status of this option can be determined by examining the is_quoted_identifier_on column in the
sys.databases catalog view or the IsQuotedIdentifiersEnabled property of the DATABASEPROPERTYEX
function.
NUMERIC_ROUNDABORT { ON | OFF }
ON
An error is generated when loss of precision occurs in an expression.
OFF
Losses of precision do not generate error messages and the result is rounded to the precision of the column or
variable storing the result.
NUMERIC_ROUNDABORT must be set to OFF when you create or make changes to indexes on computed
columns or indexed views.
The status of this option can be determined by examining the is_numeric_roundabort_on column in the
sys.databases catalog view or the IsNumericRoundAbortEnabled property of the DATABASEPROPERTYEX
function.
RECURSIVE_TRIGGERS { ON | OFF }
ON
Recursive firing of AFTER triggers is allowed.
OFF
Direct recursive firing of AFTER triggers is not allowed. To also disable indirect recursion of AFTER
triggers, set the nested triggers server option to 0 by using sp_configure.

NOTE
Only direct recursion is prevented when RECURSIVE_TRIGGERS is set to OFF. To disable indirect recursion, you must also
set the nested triggers server option to 0.

The status of this option can be determined by examining the is_recursive_triggers_on column in the
sys.databases catalog view or the IsRecursiveTriggersEnabled property of the DATABASEPROPERTYEX
function.
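To disable both forms of recursion described above, combine the database option with the nested triggers server option, as in this sketch (AdventureWorks2012 is a placeholder database name):

```sql
-- Disable direct recursion of AFTER triggers for the database.
ALTER DATABASE AdventureWorks2012
SET RECURSIVE_TRIGGERS OFF;
GO
-- Disable indirect recursion server-wide.
EXEC sp_configure 'nested triggers', 0;
RECONFIGURE;
GO
```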
<target_recovery_time_option> ::=
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Specifies the frequency of indirect checkpoints on a per-database basis. Beginning with SQL Server 2016 (13.x),
the default value for new databases is 1 minute, which indicates that the database will use indirect checkpoints.
For older versions the default is 0, which indicates that the database will use automatic checkpoints, whose
frequency depends on the recovery interval setting of the server instance. Microsoft recommends 1 minute for
most systems.
TARGET_RECOVERY_TIME =target_recovery_time { SECONDS | MINUTES }
target_recovery_time
Specifies the maximum bound on the time to recover the specified database in the event of a crash.
SECONDS
Indicates that target_recovery_time is expressed as the number of seconds.
MINUTES
Indicates that target_recovery_time is expressed as the number of minutes.
For more information about indirect checkpoints, see Database Checkpoints.
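For example, the recommended 1-minute target can be set and verified as follows (AdventureWorks2012 is a placeholder database name):

```sql
ALTER DATABASE AdventureWorks2012
SET TARGET_RECOVERY_TIME = 60 SECONDS;
GO
SELECT name, target_recovery_time_in_seconds
FROM sys.databases
WHERE name = N'AdventureWorks2012';
GO
```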
WITH <termination> ::=
Specifies when to roll back incomplete transactions when the database is transitioned from one state to another.
If the termination clause is omitted, the ALTER DATABASE statement waits indefinitely if there is any lock on the
database. Only one termination clause can be specified, and it follows the SET clauses.
NOTE
Not all database options use the WITH <termination> clause. For more information, see the table in the
"Setting Options" section of this article.

ROLLBACK AFTER integer [ SECONDS ] | ROLLBACK IMMEDIATE
Specifies whether to roll back after the specified number of seconds or immediately.
NO_WAIT
Specifies that if the requested database state or option change cannot complete immediately without waiting
for transactions to commit or roll back on their own, the request will fail.
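For example, the following sketch attempts to set the database read-only and fails immediately if other transactions hold locks, rather than waiting (AdventureWorks2012 is a placeholder database name):

```sql
ALTER DATABASE AdventureWorks2012
SET READ_ONLY
WITH NO_WAIT;
GO
```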

Setting Options
To retrieve current settings for database options, use the sys.databases catalog view or
DATABASEPROPERTYEX.
After you set a database option, the modification takes effect immediately.
To change the default values for any one of the database options for all newly created databases, change the
appropriate database option in the model database.
Not all database options use the WITH <termination> clause or can be specified in combination with other
options. The following table lists these options and their option and termination status.

OPTIONS CATEGORY                        CAN BE SPECIFIED WITH OTHER OPTIONS   CAN USE THE WITH <TERMINATION> CLAUSE

<db_state_option>                       Yes                                   Yes
<db_user_access_option>                 Yes                                   Yes
<db_update_option>                      Yes                                   Yes
<delayed_durability_option>             Yes                                   Yes
<external_access_option>                Yes                                   No
<cursor_option>                         Yes                                   No
<auto_option>                           Yes                                   No
<sql_option>                            Yes                                   No
<recovery_option>                       Yes                                   No
<target_recovery_time_option>           No                                    Yes
<database_mirroring_option>             No                                    No
ALLOW_SNAPSHOT_ISOLATION                No                                    No
READ_COMMITTED_SNAPSHOT                 No                                    Yes
MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT    Yes                                   Yes
<service_broker_option>                 Yes                                   No
DATE_CORRELATION_OPTIMIZATION           Yes                                   Yes
<parameterization_option>               Yes                                   Yes
<change_tracking_option>                Yes                                   Yes
<db_encryption_option>                  Yes                                   No
The plan cache for the instance of SQL Server is cleared by setting one of the following options:

OFFLINE
ONLINE
MODIFY_NAME
COLLATE
READ_ONLY
READ_WRITE
MODIFY FILEGROUP DEFAULT
MODIFY FILEGROUP READ_WRITE
MODIFY FILEGROUP READ_ONLY

The procedure cache is also flushed in the following scenarios.


A database has the AUTO_CLOSE database option set to ON. When no user connection references or uses
the database, the background task tries to close and shut down the database automatically.
You run several queries against a database that has default options. Then, the database is dropped.
A database snapshot for a source database is dropped.
You successfully rebuild the transaction log for a database.
You restore a database backup.
You detach a database.
Clearing the plan cache causes a recompilation of all subsequent execution plans and can cause a sudden,
temporary decrease in query performance. For each cleared cachestore in the plan cache, the SQL Server error
log contains the following informational message: " SQL Server has encountered %d occurrence(s) of
cachestore flush for the '%s' cachestore (part of plan cache) due to some database maintenance or reconfigure
operations". This message is logged every five minutes as long as the cache is flushed within that time interval.

Examples
A. Setting options on a database
The following example sets the recovery model and data page verification options for the
AdventureWorks2012 sample database.
USE master;
GO
ALTER DATABASE AdventureWorks2012
SET RECOVERY FULL, PAGE_VERIFY CHECKSUM;
GO

B. Setting the database to READ_ONLY


Changing the state of a database or filegroup to READ_ONLY or READ_WRITE requires exclusive access to the
database. The following example sets the database to SINGLE_USER mode to obtain exclusive access. The
example then sets the state of the AdventureWorks2012 database to READ_ONLY and returns access to the
database to all users.

NOTE
This example uses the termination option WITH ROLLBACK IMMEDIATE in the first ALTER DATABASE statement. All
incomplete transactions will be rolled back and any other connections to the AdventureWorks2012 database will be
immediately disconnected.

USE master;
GO
ALTER DATABASE AdventureWorks2012
SET SINGLE_USER
WITH ROLLBACK IMMEDIATE;
GO
ALTER DATABASE AdventureWorks2012
SET READ_ONLY;
GO
ALTER DATABASE AdventureWorks2012
SET MULTI_USER;
GO

C. Enabling snapshot isolation on a database


The following example enables the snapshot isolation framework option for the AdventureWorks2012
database.

USE master;
GO
ALTER DATABASE AdventureWorks2012
SET ALLOW_SNAPSHOT_ISOLATION ON;
GO
-- Check the state of the snapshot_isolation_framework
-- in the database.
SELECT name, snapshot_isolation_state,
snapshot_isolation_state_desc AS description
FROM sys.databases
WHERE name = N'AdventureWorks2012';
GO

The result set shows that the snapshot isolation framework is enabled.

NAME SNAPSHOT_ISOLATION_STATE DESCRIPTION

AdventureWorks2012 1 ON

D. Enabling, modifying, and disabling change tracking


The following example enables change tracking for the AdventureWorks2012 database and sets the retention
period to 2 days.

ALTER DATABASE AdventureWorks2012
SET CHANGE_TRACKING = ON
(AUTO_CLEANUP = ON, CHANGE_RETENTION = 2 DAYS);

The following example shows how to change the retention period to 3 days.

ALTER DATABASE AdventureWorks2012
SET CHANGE_TRACKING (CHANGE_RETENTION = 3 DAYS);

The following example shows how to disable change tracking for the AdventureWorks2012 database.

ALTER DATABASE AdventureWorks2012
SET CHANGE_TRACKING = OFF;

E. Enabling the query store


Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
The following example enables the query store and configures query store parameters.

ALTER DATABASE AdventureWorks2012
SET QUERY_STORE = ON
(
OPERATION_MODE = READ_WRITE
, CLEANUP_POLICY = ( STALE_QUERY_THRESHOLD_DAYS = 90 )
, DATA_FLUSH_INTERVAL_SECONDS = 900
, MAX_STORAGE_SIZE_MB = 1024
, INTERVAL_LENGTH_MINUTES = 60
);

See Also
ALTER DATABASE Compatibility Level
ALTER DATABASE Database Mirroring
ALTER DATABASE SET HADR
Statistics
CREATE DATABASE
Enable and Disable Change Tracking
DATABASEPROPERTYEX
DROP DATABASE
SET TRANSACTION ISOLATION LEVEL
sp_configure
sys.databases
sys.data_spaces
Best Practice with the Query Store

Azure SQL Database logical server
Compatibility levels are SET options but are described in ALTER DATABASE Compatibility Level.

NOTE
Many database set options can be configured for the current session by using SET Statements and are often configured
by applications when they connect. Session level set options override the ALTER DATABASE SET values. The database
options described below are values that can be set for sessions that do not explicitly provide other set option values.

Syntax
ALTER DATABASE { database_name | Current }
SET
{
<option_spec> [ ,...n ] [ WITH <termination> ]
}
;

<option_spec> ::=
{
<auto_option>
| <automatic_tuning_option>
| <change_tracking_option>
| <cursor_option>
| <db_encryption_option>
| <db_update_option>
| <db_user_access_option>
| <delayed_durability_option>
| <parameterization_option>
| <query_store_options>
| <snapshot_option>
| <sql_option>
| <target_recovery_time_option>
| <termination>
| <temporal_history_retention>
}
;
<auto_option> ::=
{
AUTO_CREATE_STATISTICS { OFF | ON [ ( INCREMENTAL = { ON | OFF } ) ] }
| AUTO_SHRINK { ON | OFF }
| AUTO_UPDATE_STATISTICS { ON | OFF }
| AUTO_UPDATE_STATISTICS_ASYNC { ON | OFF }
}

<automatic_tuning_option> ::=
{ AUTOMATIC_TUNING = { AUTO | INHERIT | CUSTOM }
| AUTOMATIC_TUNING ( CREATE_INDEX = { DEFAULT | ON | OFF } )
| AUTOMATIC_TUNING ( DROP_INDEX = { DEFAULT | ON | OFF } )
| AUTOMATIC_TUNING ( FORCE_LAST_GOOD_PLAN = { DEFAULT | ON | OFF } )
}

<change_tracking_option> ::=
{
CHANGE_TRACKING
{
= OFF
| = ON [ ( <change_tracking_option_list > [,...n] ) ]
| ( <change_tracking_option_list> [,...n] )
}
}

<change_tracking_option_list> ::=
{
AUTO_CLEANUP = { ON | OFF }
| CHANGE_RETENTION = retention_period { DAYS | HOURS | MINUTES }
}

<cursor_option> ::=
{
CURSOR_CLOSE_ON_COMMIT { ON | OFF }
}

<db_encryption_option> ::=
ENCRYPTION { ON | OFF }

<db_update_option> ::=
{ READ_ONLY | READ_WRITE }

<db_user_access_option> ::=
{ RESTRICTED_USER | MULTI_USER }

<delayed_durability_option> ::= DELAYED_DURABILITY = { DISABLED | ALLOWED | FORCED }

<parameterization_option> ::=
PARAMETERIZATION { SIMPLE | FORCED }

<query_store_options> ::=
{
QUERY_STORE
{
= OFF
| = ON [ ( <query_store_option_list> [,...n] ) ]
| ( <query_store_option_list> [,...n] )
| CLEAR [ ALL ]
}
}

<query_store_option_list> ::=
{
OPERATION_MODE = { READ_WRITE | READ_ONLY }
| CLEANUP_POLICY = ( STALE_QUERY_THRESHOLD_DAYS = number )
| DATA_FLUSH_INTERVAL_SECONDS = number
| MAX_STORAGE_SIZE_MB = number
| INTERVAL_LENGTH_MINUTES = number
| SIZE_BASED_CLEANUP_MODE = [ AUTO | OFF ]
| QUERY_CAPTURE_MODE = [ ALL | AUTO | NONE ]
| MAX_PLANS_PER_QUERY = number
}

<snapshot_option> ::=
{
ALLOW_SNAPSHOT_ISOLATION { ON | OFF }
| READ_COMMITTED_SNAPSHOT {ON | OFF }
| MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT {ON | OFF }
}
<sql_option> ::=
{
ANSI_NULL_DEFAULT { ON | OFF }
| ANSI_NULLS { ON | OFF }
| ANSI_PADDING { ON | OFF }
| ANSI_WARNINGS { ON | OFF }
| ARITHABORT { ON | OFF }
| COMPATIBILITY_LEVEL = { 140 | 130 | 120 | 110 | 100 }
| CONCAT_NULL_YIELDS_NULL { ON | OFF }
| NUMERIC_ROUNDABORT { ON | OFF }
| QUOTED_IDENTIFIER { ON | OFF }
| RECURSIVE_TRIGGERS { ON | OFF }
}

<termination> ::=
{
ROLLBACK AFTER integer [ SECONDS ]
| ROLLBACK IMMEDIATE
| NO_WAIT
}

<temporal_history_retention> ::= TEMPORAL_HISTORY_RETENTION { ON | OFF }

Arguments
database_name
Is the name of the database to be modified.
CURRENT
CURRENT performs the action in the current database. CURRENT is not supported for all options in all contexts. If
CURRENT fails, provide the database name.

<auto_option> ::=
Controls automatic options.
AUTO_CREATE_STATISTICS { ON | OFF }
ON
The query optimizer creates statistics on single columns in query predicates, as necessary, to improve query
plans and query performance. These single-column statistics are created when the query optimizer compiles
queries. The single-column statistics are created only on columns that are not already the first column of an
existing statistics object.
The default is ON. We recommend that you use the default setting for most databases.
OFF
The query optimizer does not create statistics on single columns in query predicates when it is compiling
queries. Setting this option to OFF can cause suboptimal query plans and degraded query performance.
The status of this option can be determined by examining the is_auto_create_stats_on column in the
sys.databases catalog view or the IsAutoCreateStatistics property of the DATABASEPROPERTYEX function.
For more information, see the section "Using the Database-Wide Statistics Options" in Statistics.
INCREMENTAL = ON | OFF
When AUTO_CREATE_STATISTICS is ON, and INCREMENTAL is set to ON, automatically created statistics are
created as incremental whenever incremental statistics are supported. The default value is OFF. For more
information, see CREATE STATISTICS.
AUTO_SHRINK { ON | OFF }
ON
The database files are candidates for periodic shrinking.
Both data file and log files can be automatically shrunk. AUTO_SHRINK reduces the size of the transaction log
only if the database is set to SIMPLE recovery model or if the log is backed up. When set to OFF, the database
files are not automatically shrunk during periodic checks for unused space.
The AUTO_SHRINK option causes files to be shrunk when more than 25 percent of the file contains unused
space. The file is shrunk to a size where 25 percent of the file is unused space, or to the size of the file when it
was created, whichever is larger.
You cannot shrink a read-only database.
OFF
The database files are not automatically shrunk during periodic checks for unused space.
The status of this option can be determined by examining the is_auto_shrink_on column in the sys.databases
catalog view or the IsAutoShrink property of the DATABASEPROPERTYEX function.

NOTE
The AUTO_SHRINK option is not available in a Contained Database.

AUTO_UPDATE_STATISTICS { ON | OFF }
ON
Specifies that the query optimizer updates statistics when they are used by a query and when they might be
out-of-date. Statistics become out-of-date after insert, update, delete, or merge operations change the data
distribution in the table or indexed view. The query optimizer determines when statistics might be out-of-date
by counting the number of data modifications since the last statistics update and comparing the number of
modifications to a threshold. The threshold is based on the number of rows in the table or indexed view.
The query optimizer checks for out-of-date statistics before compiling a query and before executing a cached
query plan. Before compiling a query, the query optimizer uses the columns, tables, and indexed views in the
query predicate to determine which statistics might be out-of-date. Before executing a cached query plan, the
Database Engine verifies that the query plan references up-to-date statistics.
The AUTO_UPDATE_STATISTICS option applies to statistics created for indexes, single-columns in query
predicates, and statistics that are created by using the CREATE STATISTICS statement. This option also applies
to filtered statistics.
The default is ON. We recommend that you use the default setting for most databases.
Use the AUTO_UPDATE_STATISTICS_ASYNC option to specify whether the statistics are updated
synchronously or asynchronously.
OFF
Specifies that the query optimizer does not update statistics when they are used by a query and when they
might be out-of-date. Setting this option to OFF can cause suboptimal query plans and degraded query
performance.
The status of this option can be determined by examining the is_auto_update_stats_on column in the
sys.databases catalog view or the IsAutoUpdateStatistics property of the DATABASEPROPERTYEX function.
For more information, see the section "Using the Database-Wide Statistics Options" in Statistics.
AUTO_UPDATE_STATISTICS_ASYNC { ON | OFF }
ON
Specifies that statistics updates for the AUTO_UPDATE_STATISTICS option are asynchronous. The query
optimizer does not wait for statistics updates to complete before it compiles queries.
Setting this option to ON has no effect unless AUTO_UPDATE_STATISTICS is set to ON.
By default, the AUTO_UPDATE_STATISTICS_ASYNC option is set to OFF, and the query optimizer updates
statistics synchronously.
OFF
Specifies that statistics updates for the AUTO_UPDATE_STATISTICS option are synchronous. The query
optimizer waits for statistics updates to complete before it compiles queries.
Setting this option to OFF has no effect unless AUTO_UPDATE_STATISTICS is set to ON.
The status of this option can be determined by examining the is_auto_update_stats_async_on column in the
sys.databases catalog view.
For more information that describes when to use synchronous or asynchronous statistics updates, see the
section "Using the Database-Wide Statistics Options" in Statistics.
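The automatic statistics options above can be combined in ALTER DATABASE statements, sketched here with a placeholder database name MyDatabase:

```sql
-- Enable automatic creation of single-column statistics, incrementally where supported.
ALTER DATABASE MyDatabase
SET AUTO_CREATE_STATISTICS ON (INCREMENTAL = ON);

-- Keep automatic updates on, but make them asynchronous so query compilation
-- does not wait for statistics refreshes.
ALTER DATABASE MyDatabase
SET AUTO_UPDATE_STATISTICS ON,
    AUTO_UPDATE_STATISTICS_ASYNC ON;
```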
<automatic_tuning_option> ::=
Applies to: SQL Server 2017 (14.x).
Controls automatic options for automatic tuning.
AUTOMATIC_TUNING = { AUTO | INHERIT | CUSTOM }
AUTO
Setting the automatic tuning value to AUTO will apply Azure configuration defaults for automatic tuning.
INHERIT
Using the value INHERIT inherits the default configuration from the parent server. This is especially useful if
you want to customize the automatic tuning configuration on a parent server and have all the databases on
that server INHERIT these custom settings. For the inheritance to work, the three individual tuning options
FORCE_LAST_GOOD_PLAN, CREATE_INDEX, and DROP_INDEX must be set to DEFAULT on the databases.
CUSTOM
Using the value CUSTOM, you must manually configure each of the automatic tuning options available on the
database.
Enables or disables the CREATE_INDEX automatic index management option of automatic tuning.
CREATE_INDEX = { DEFAULT | ON | OFF }
DEFAULT
Inherits default settings from the server. In this case, options of enabling or disabling individual Automatic
tuning features are defined at the server level.
ON
When enabled, missing indexes are automatically created on the database. After index creation, performance
gains for the workload are verified. If a created index no longer benefits workload performance, it is
automatically reverted. Automatically created indexes are flagged as system-generated indexes.
OFF
Does not automatically generate missing indexes on the database.
Enables or disables the DROP_INDEX automatic index management option of automatic tuning.
DROP_INDEX = { DEFAULT | ON | OFF }
DEFAULT
Inherits default settings from the server. In this case, options of enabling or disabling individual Automatic
tuning features are defined at the server level.
ON
Automatically drops duplicate indexes and indexes that are no longer useful to workload performance.
OFF
Does not automatically drop indexes on the database.
Enables or disables the FORCE_LAST_GOOD_PLAN automatic plan correction option of automatic tuning.
FORCE_LAST_GOOD_PLAN = { DEFAULT | ON | OFF }
DEFAULT
Inherits default settings from the server. In this case, options of enabling or disabling individual Automatic
tuning features are defined at the server level.
ON
The Database Engine automatically forces the last known good plan on Transact-SQL queries where a new
SQL plan causes performance regressions. The Database Engine continuously monitors the query performance
of the Transact-SQL query with the forced plan. If there are performance gains, the Database Engine keeps
using the last known good plan. If no performance gains are detected, the Database Engine produces a new
SQL plan. The statement fails if Query Store is not enabled or if it is not in Read-Write mode.
OFF
The Database Engine reports potential query performance regressions caused by SQL plan changes in
sys.dm_db_tuning_recommendations view. However, these recommendations are not automatically applied.
User can monitor active recommendations and fix identified problems by applying Transact-SQL scripts that are
shown in the view. This is the default value.
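For example, automatic plan correction can be enabled on its own; this sketch assumes a database named MyDatabase (a placeholder) with Query Store enabled in Read-Write mode:

```sql
-- Turn on FORCE_LAST_GOOD_PLAN; the other tuning options keep their current settings.
ALTER DATABASE MyDatabase
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

-- Inspect the recommendations the engine has produced.
SELECT reason, score, details
FROM sys.dm_db_tuning_recommendations;
```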
<change_tracking_option> ::=
Controls change tracking options. You can enable change tracking, set options, change options, and disable
change tracking. For examples, see the Examples section later in this article.
ON
Enables change tracking for the database. When you enable change tracking, you can also set the
AUTO_CLEANUP and CHANGE_RETENTION options.
AUTO_CLEANUP = { ON | OFF }
ON
Change tracking information is automatically removed after the specified retention period.
OFF
Change tracking data is not removed from the database.
CHANGE_RETENTION = retention_period { DAYS | HOURS | MINUTES }
Specifies the minimum period for keeping change tracking information in the database. Data is removed only
when the AUTO_CLEANUP value is ON.
retention_period is an integer that specifies the numerical component of the retention period.
The default retention period is 2 days. The minimum retention period is 1 minute. The default retention type is
DAYS.
OFF
Disables change tracking for the database. You must disable change tracking on all tables before you can
disable change tracking for the database.
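The change tracking options above might be applied as follows (MyDatabase is a placeholder name):

```sql
-- Enable change tracking with a 2-day retention period and automatic cleanup.
ALTER DATABASE MyDatabase
SET CHANGE_TRACKING = ON (AUTO_CLEANUP = ON, CHANGE_RETENTION = 2 DAYS);

-- Later, disable it; change tracking must already be disabled on all tables.
ALTER DATABASE MyDatabase
SET CHANGE_TRACKING = OFF;
```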
<cursor_option> ::=
Controls cursor options.
CURSOR_CLOSE_ON_COMMIT { ON | OFF }
ON
Any cursors open when a transaction is committed or rolled back are closed.
OFF
Cursors remain open when a transaction is committed; rolling back a transaction closes any cursors except
those defined as INSENSITIVE or STATIC.
Connection-level settings that are set by using the SET statement override the default database setting for
CURSOR_CLOSE_ON_COMMIT. By default, ODBC, and OLE DB clients issue a connection-level SET
statement setting CURSOR_CLOSE_ON_COMMIT to OFF for the session when connecting to an instance of
SQL Server. For more information, see SET CURSOR_CLOSE_ON_COMMIT.
The status of this option can be determined by examining the is_cursor_close_on_commit_on column in the
sys.databases catalog view or the IsCloseCursorsOnCommitEnabled property of the DATABASEPROPERTYEX
function. The cursor is implicitly deallocated only at disconnect. For more information, see DECLARE CURSOR.
<db_encryption_option> ::=
Controls the database encryption state.
ENCRYPTION { ON | OFF }
Sets the database to be encrypted (ON) or not encrypted (OFF). For more information about database
encryption, see Transparent Data Encryption, and Transparent Data Encryption with Azure SQL Database.
When encryption is enabled at the database level, all filegroups are encrypted. Any new filegroups inherit
the encrypted property. If any filegroups in the database are set to READ ONLY, the database encryption
operation fails.
You can see the encryption state of the database by using the sys.dm_database_encryption_keys dynamic
management view.
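A minimal sketch of enabling transparent data encryption; it assumes a database encryption key has already been created in MyDatabase (a placeholder name):

```sql
ALTER DATABASE MyDatabase SET ENCRYPTION ON;

-- Check progress; encryption_state 3 means the database is encrypted.
SELECT DB_NAME(database_id) AS database_name, encryption_state
FROM sys.dm_database_encryption_keys;
```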
<db_update_option> ::=
Controls whether updates are allowed on the database.
READ_ONLY
Users can read data from the database but not modify it.

NOTE
To improve query performance, update statistics before setting a database to READ_ONLY. If additional statistics are
needed after a database is set to READ_ONLY, the Database Engine will create statistics in tempdb. For more information
about statistics for a read-only database, see Statistics.

READ_WRITE
The database is available for read and write operations.
To change this state, you must have exclusive access to the database. For more information, see the
SINGLE_USER clause.

NOTE
On SQL Database federated databases, SET { READ_ONLY | READ_WRITE } is disabled.

<db_user_access_option> ::=
Controls user access to the database.
RESTRICTED_USER
RESTRICTED_USER allows only members of the db_owner fixed database role and the dbcreator and sysadmin
fixed server roles to connect to the database, but does not limit their number. All connections to the database
are disconnected in the timeframe specified by the termination clause of the ALTER DATABASE statement. After
the database has transitioned to the RESTRICTED_USER state, connection attempts by unqualified users are
refused. RESTRICTED_USER cannot be modified in SQL Database managed instance.
MULTI_USER
All users that have the appropriate permissions to connect to the database are allowed.
The status of this option can be determined by examining the user_access column in the sys.databases catalog
view or the UserAccess property of the DATABASEPROPERTYEX function.
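For example, to take MyDatabase (a placeholder name) to RESTRICTED_USER and roll back open transactions immediately:

```sql
ALTER DATABASE MyDatabase
SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE;

-- Return to normal multi-user access.
ALTER DATABASE MyDatabase
SET MULTI_USER;
```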
<delayed_durability_option> ::=
Controls whether transactions commit fully durable or delayed durable.
DISABLED
All transactions following SET DISABLED are fully durable. Any durability options set in an atomic block or
commit statement are ignored.
ALLOWED
All transactions following SET ALLOWED are either fully durable or delayed durable, depending upon the
durability option set in the atomic block or commit statement.
FORCED
All transactions following SET FORCED are delayed durable. Any durability options set in an atomic block or
commit statement are ignored.
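A sketch of how the database-level setting interacts with the commit-level option (MyDatabase is a placeholder name):

```sql
-- Let individual transactions opt in to delayed durability.
ALTER DATABASE MyDatabase SET DELAYED_DURABILITY = ALLOWED;

-- A transaction that opts in at commit time.
BEGIN TRANSACTION;
    -- ... data modifications ...
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
```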
<parameterization_option> ::=
Controls the parameterization option.
PARAMETERIZATION { SIMPLE | FORCED }
SIMPLE
Queries are parameterized based on the default behavior of the database.
FORCED
SQL Server parameterizes all queries in the database.
The current setting of this option can be determined by examining the is_parameterization_forced column in the
sys.databases catalog view.
<query_store_options> ::=
ON | OFF | CLEAR [ ALL ]
Controls whether the query store is enabled in this database, and also controls removing the contents of the
query store.
ON
Enables the query store.
OFF
Disables the query store. This is the default value.
CLEAR
Remove the contents of the query store.
OPERATION_MODE
Describes the operation mode of the query store. Valid values are READ_ONLY and READ_WRITE. In
READ_WRITE mode, the query store collects and persists query plan and runtime execution statistics
information. In READ_ONLY mode, information can be read from the query store, but new information is not
added. If the maximum allocated space of the query store has been exhausted, the query store changes its
operation mode to READ_ONLY.
CLEANUP_POLICY
Describes the data retention policy of the query store. STALE_QUERY_THRESHOLD_DAYS determines the
number of days for which the information for a query is retained in the query store.
STALE_QUERY_THRESHOLD_DAYS is type bigint.
DATA_FLUSH_INTERVAL_SECONDS
Determines the frequency at which data written to the query store is persisted to disk. To optimize for
performance, data collected by the query store is asynchronously written to the disk. The frequency at which
this asynchronous transfer occurs is configured by using the DATA_FLUSH_INTERVAL_SECONDS argument.
DATA_FLUSH_INTERVAL_SECONDS is type bigint.
MAX_STORAGE_SIZE_MB
Determines the space allocated to the query store. MAX_STORAGE_SIZE_MB is type bigint.
INTERVAL_LENGTH_MINUTES
Determines the time interval at which runtime execution statistics data is aggregated into the query store. To
optimize for space usage, the runtime execution statistics in the runtime stats store are aggregated over a fixed
time window. This fixed time window is configured by using the INTERVAL_LENGTH_MINUTES argument.
INTERVAL_LENGTH_MINUTES is type bigint.
SIZE_BASED_CLEANUP_MODE
Controls whether cleanup is automatically activated when the total amount of data gets close to the maximum
size:
OFF
Size based cleanup won't be automatically activated.
AUTO
Size based cleanup will be automatically activated when size on disk reaches 90% of max_storage_size_mb.
Size based cleanup removes the least expensive and oldest queries first. It stops at approximately 80% of
max_storage_size_mb. This is the default configuration value.
SIZE_BASED_CLEANUP_MODE is type nvarchar.
QUERY_CAPTURE_MODE
Designates the currently active query capture mode:
ALL All queries are captured. This is the default configuration value for SQL Server.
AUTO Capture relevant queries based on execution count and resource consumption. This is the default
configuration value for SQL Database.
NONE Stop capturing new queries. Query Store continues to collect compile and runtime statistics for
queries that were captured already. Use this configuration with caution since you may fail to capture
important queries.
QUERY_CAPTURE_MODE is type nvarchar.
MAX_PLANS_PER_QUERY
An integer representing the maximum number of plans maintained for each query. Default is 200.
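Combining several of the options above in one statement (MyDatabase is a placeholder name):

```sql
-- Enable the query store with explicit sizing and capture settings.
ALTER DATABASE MyDatabase
SET QUERY_STORE = ON
    (
        OPERATION_MODE = READ_WRITE,
        MAX_STORAGE_SIZE_MB = 512,
        INTERVAL_LENGTH_MINUTES = 60,
        QUERY_CAPTURE_MODE = AUTO
    );

-- Remove all collected data without disabling the feature.
ALTER DATABASE MyDatabase SET QUERY_STORE CLEAR;
```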
<snapshot_option> ::=
Determines the transaction isolation level.
ALLOW_SNAPSHOT_ISOLATION { ON | OFF }
ON
Enables Snapshot option at the database level. When it is enabled, DML statements start generating row
versions even when no transaction uses Snapshot Isolation. Once this option is enabled, transactions can
specify the SNAPSHOT transaction isolation level. When a transaction runs at the SNAPSHOT isolation level,
all statements see a snapshot of data as it exists at the start of the transaction. If a transaction running at the
SNAPSHOT isolation level accesses data in multiple databases, either ALLOW_SNAPSHOT_ISOLATION must
be set to ON in all the databases, or each statement in the transaction must use locking hints on any reference
in a FROM clause to a table in a database where ALLOW_SNAPSHOT_ISOLATION is OFF.
OFF
Turns off the Snapshot option at the database level. Transactions cannot specify the SNAPSHOT transaction
isolation level.
When you set ALLOW_SNAPSHOT_ISOLATION to a new state (from ON to OFF, or from OFF to ON), ALTER
DATABASE does not return control to the caller until all existing transactions in the database are committed. If
the database is already in the state specified in the ALTER DATABASE statement, control is returned to the caller
immediately. If the ALTER DATABASE statement does not return quickly, use
sys.dm_tran_active_snapshot_database_transactions to determine whether there are long-running transactions.
If the ALTER DATABASE statement is canceled, the database remains in the state it was in when ALTER
DATABASE was started. The sys.databases catalog view indicates the state of snapshot-isolation transactions in
the database. If snapshot_isolation_state_desc = IN_TRANSITION_TO_ON, ALTER DATABASE
ALLOW_SNAPSHOT_ISOLATION OFF will pause six seconds and retry the operation.
You cannot change the state of ALLOW_SNAPSHOT_ISOLATION if the database is OFFLINE.
If you set ALLOW_SNAPSHOT_ISOLATION in a READ_ONLY database, the setting will be retained if the
database is later set to READ_WRITE.
You can change the ALLOW_SNAPSHOT_ISOLATION settings for the master, model, msdb, and tempdb
databases. If you change the setting for tempdb, the setting is retained every time the instance of the Database
Engine is stopped and restarted. If you change the setting for model, that setting becomes the default for any
new databases that are created, except for tempdb.
The option is ON, by default, for the master and msdb databases.
The current setting of this option can be determined by examining the snapshot_isolation_state column in the
sys.databases catalog view.
READ_COMMITTED_SNAPSHOT { ON | OFF }
ON
Enables Read-Committed Snapshot option at the database level. When it is enabled, DML statements start
generating row versions even when no transaction uses Snapshot Isolation. Once this option is enabled, the
transactions specifying the read committed isolation level use row versioning instead of locking. When a
transaction runs at the read committed isolation level, all statements see a snapshot of data as it exists at the
start of the statement.
OFF
Turns off Read-Committed Snapshot option at the database level. Transactions specifying the READ
COMMITTED isolation level use locking.
To set READ_COMMITTED_SNAPSHOT ON or OFF, there must be no active connections to the database
except for the connection executing the ALTER DATABASE command. However, the database does not have to
be in single-user mode. You cannot change the state of this option when the database is OFFLINE.
If you set READ_COMMITTED_SNAPSHOT in a READ_ONLY database, the setting will be retained when the
database is later set to READ_WRITE.
READ_COMMITTED_SNAPSHOT cannot be turned ON for the master, tempdb, or msdb system databases. If
you change the setting for model, that setting becomes the default for any new databases created, except for
tempdb.
The current setting of this option can be determined by examining the is_read_committed_snapshot_on column
in the sys.databases catalog view.
WARNING
When a table is created with DURABILITY = SCHEMA_ONLY, and READ_COMMITTED_SNAPSHOT is subsequently
changed using ALTER DATABASE, data in the table will be lost.
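Because READ_COMMITTED_SNAPSHOT requires that no other connections be active, a termination clause is commonly added (MyDatabase is a placeholder name):

```sql
-- Roll back other sessions' open transactions so the option change can proceed.
ALTER DATABASE MyDatabase
SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
```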

MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT { ON | OFF }
ON
When the transaction isolation level is set to any isolation level lower than SNAPSHOT (for example, READ
COMMITTED or READ UNCOMMITTED ), all interpreted Transact-SQL operations on memory-optimized
tables are performed under SNAPSHOT isolation. This is done regardless of whether the transaction isolation
level is set explicitly at the session level, or the default is used implicitly.
OFF
Does not elevate the transaction isolation level for interpreted Transact-SQL operations on memory-optimized
tables.
You cannot change the state of MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT if the database is OFFLINE.
The option is OFF, by default.
The current setting of this option can be determined by examining the
is_memory_optimized_elevate_to_snapshot_on column in the sys.databases catalog view.
<sql_option> ::=
Controls the ANSI compliance options at the database level.
ANSI_NULL_DEFAULT { ON | OFF }
Determines the default value, NULL or NOT NULL, of a column or CLR user-defined type for which the
nullability is not explicitly defined in CREATE TABLE or ALTER TABLE statements. Columns that are defined
with constraints follow constraint rules regardless of this setting.
ON
The default value is NULL.
OFF
The default value is NOT NULL.
Connection-level settings that are set by using the SET statement override the default database-level setting for
ANSI_NULL_DEFAULT. By default, ODBC and OLE DB clients issue a connection-level SET statement setting
ANSI_NULL_DEFAULT to ON for the session when connecting to an instance of SQL Server. For more
information, see SET ANSI_NULL_DFLT_ON.
For ANSI compatibility, setting the database option ANSI_NULL_DEFAULT to ON changes the database
default to NULL.
The status of this option can be determined by examining the is_ansi_null_default_on column in the
sys.databases catalog view or the IsAnsiNullDefault property of the DATABASEPROPERTYEX function.
ANSI_NULLS { ON | OFF }
ON
All comparisons to a null value evaluate to UNKNOWN.
OFF
Comparisons of non-UNICODE values to a null value evaluate to TRUE if both values are NULL.
IMPORTANT
In a future version of SQL Server, ANSI_NULLS will always be ON and any applications that explicitly set the option to OFF
will produce an error. Avoid using this feature in new development work, and plan to modify applications that currently
use this feature.

Connection-level settings that are set by using the SET statement override the default database setting for
ANSI_NULLS. By default, ODBC and OLE DB clients issue a connection-level SET statement setting
ANSI_NULLS to ON for the session when connecting to an instance of SQL Server. For more information, see
SET ANSI_NULLS.
SET ANSI_NULLS also must be set to ON when you create or make changes to indexes on computed columns
or indexed views.
The status of this option can be determined by examining the is_ansi_nulls_on column in the sys.databases
catalog view or the IsAnsiNullsEnabled property of the DATABASEPROPERTYEX function.
ANSI_PADDING { ON | OFF }
ON
Strings are padded to the same length before conversion or inserting to a varchar or nvarchar data type.
Trailing blanks in character values inserted into varchar or nvarchar columns and trailing zeros in binary
values inserted into varbinary columns are not trimmed. Values are not padded to the length of the column.
OFF
Trailing blanks for varchar or nvarchar and zeros for varbinary are trimmed.
When OFF is specified, this setting affects only the definition of new columns.

IMPORTANT
In a future version of SQL Server, ANSI_PADDING will always be ON and any applications that explicitly set the option to
OFF will produce an error. Avoid using this feature in new development work, and plan to modify applications that
currently use this feature. We recommend that you always set ANSI_PADDING to ON. ANSI_PADDING must be ON when
you create or manipulate indexes on computed columns or indexed views.

char(n) and binary(n) columns that allow for nulls are padded to the length of the column when
ANSI_PADDING is set to ON, but trailing blanks and zeros are trimmed when ANSI_PADDING is OFF.
char(n) and binary(n) columns that do not allow nulls are always padded to the length of the column.
Connection-level settings that are set by using the SET statement override the default database-level setting for
ANSI_PADDING. By default, ODBC and OLE DB clients issue a connection-level SET statement setting
ANSI_PADDING to ON for the session when connecting to an instance of SQL Server. For more information,
see SET ANSI_PADDING.
The status of this option can be determined by examining the is_ansi_padding_on column in the sys.databases
catalog view or the IsAnsiPaddingEnabled property of the DATABASEPROPERTYEX function.
ANSI_WARNINGS { ON | OFF }
ON
Errors or warnings are issued when conditions such as divide-by-zero occur or null values appear in aggregate
functions.
OFF
No warnings are raised and null values are returned when conditions such as divide-by-zero occur.
SET ANSI_WARNINGS must be set to ON when you create or make changes to indexes on computed columns
or indexed views.
Connection-level settings that are set by using the SET statement override the default database setting for
ANSI_WARNINGS. By default, ODBC and OLE DB clients issue a connection-level SET statement setting
ANSI_WARNINGS to ON for the session when connecting to an instance of SQL Server. For more
information, see SET ANSI_WARNINGS.
The status of this option can be determined by examining the is_ansi_warnings_on column in the sys.databases
catalog view or the IsAnsiWarningsEnabled property of the DATABASEPROPERTYEX function.
ARITHABORT { ON | OFF }
ON
A query is ended when an overflow or divide-by-zero error occurs during query execution.
OFF
A warning message is displayed when one of these errors occurs, but the query, batch, or transaction continues
to process as if no error occurred.
SET ARITHABORT must be set to ON when you create or make changes to indexes on computed columns or
indexed views.
The status of this option can be determined by examining the is_arithabort_on column in the sys.databases
catalog view or the IsArithmeticAbortEnabled property of the DATABASEPROPERTYEX function.
COMPATIBILITY_LEVEL = { 140 | 130 | 120 | 110 | 100 }
For more information, see ALTER DATABASE Compatibility Level.
CONCAT_NULL_YIELDS_NULL { ON | OFF }
ON
The result of a concatenation operation is NULL when either operand is NULL. For example, concatenating the
character string "This is" and NULL causes the value NULL, instead of the value "This is".
OFF
The null value is treated as an empty character string.
CONCAT_NULL_YIELDS_NULL must be set to ON when you create or make changes to indexes on computed
columns or indexed views.

IMPORTANT
In a future version of SQL Server, CONCAT_NULL_YIELDS_NULL will always be ON and any applications that explicitly set
the option to OFF will produce an error. Avoid using this feature in new development work, and plan to modify
applications that currently use this feature.

Connection-level settings that are set by using the SET statement override the default database setting for
CONCAT_NULL_YIELDS_NULL. By default, ODBC and OLE DB clients issue a connection-level SET statement
setting CONCAT_NULL_YIELDS_NULL to ON for the session when connecting to an instance of SQL Server.
For more information, see SET CONCAT_NULL_YIELDS_NULL.
The status of this option can be determined by examining the is_concat_null_yields_null_on column in the
sys.databases catalog view or the IsNullConcat property of the DATABASEPROPERTYEX function.
QUOTED_IDENTIFIER { ON | OFF }
ON
Double quotation marks can be used to enclose delimited identifiers.
All strings delimited by double quotation marks are interpreted as object identifiers. Quoted identifiers do not
have to follow the Transact-SQL rules for identifiers. They can be keywords and can include characters not
generally allowed in Transact-SQL identifiers. If a single quotation mark (') is part of the literal string, it can be
represented by double quotation marks (").
OFF
Identifiers cannot be in quotation marks and must follow all Transact-SQL rules for identifiers. Literals can be
delimited by either single or double quotation marks.
SQL Server also allows for identifiers to be delimited by square brackets ([ ]). Bracketed identifiers can always
be used, regardless of the setting of QUOTED_IDENTIFIER. For more information, see Database Identifiers.
When a table is created, the QUOTED_IDENTIFIER option is always stored as ON in the metadata of the table,
even if the option is set to OFF when the table is created.
Connection-level settings that are set by using the SET statement override the default database setting for
QUOTED_IDENTIFIER. By default, ODBC and OLE DB clients issue a connection-level SET statement setting
QUOTED_IDENTIFIER to ON when connecting to an instance of SQL Server. For more information, see SET
QUOTED_IDENTIFIER.
The status of this option can be determined by examining the is_quoted_identifier_on column in the
sys.databases catalog view or the IsQuotedIdentifiersEnabled property of the DATABASEPROPERTYEX
function.
NUMERIC_ROUNDABORT { ON | OFF }
ON
An error is generated when loss of precision occurs in an expression.
OFF
Losses of precision do not generate error messages and the result is rounded to the precision of the column or
variable storing the result.
NUMERIC_ROUNDABORT must be set to OFF when you create or make changes to indexes on computed
columns or indexed views.
The status of this option can be determined by examining the is_numeric_roundabort_on column in the
sys.databases catalog view or the IsNumericRoundAbortEnabled property of the DATABASEPROPERTYEX
function.
RECURSIVE_TRIGGERS { ON | OFF }
ON
Recursive firing of AFTER triggers is allowed.
OFF
Direct recursive firing of AFTER triggers is not allowed. To also disable indirect recursion of AFTER
triggers, set the nested triggers server option to 0 by using sp_configure.

NOTE
Only direct recursion is prevented when RECURSIVE_TRIGGERS is set to OFF. To disable indirect recursion, you must also
set the nested triggers server option to 0.

The status of this option can be determined by examining the is_recursive_triggers_on column in the
sys.databases catalog view or the IsRecursiveTriggersEnabled property of the DATABASEPROPERTYEX
function.
<target_recovery_time_option> ::=
Specifies the frequency of indirect checkpoints on a per-database basis. Beginning with SQL Server 2016 (13.x),
the default value for new databases is 1 minute, which indicates that the database will use indirect checkpoints.
For older versions, the default is 0, which indicates that the database will use automatic checkpoints, whose
frequency depends on the recovery interval setting of the server instance. Microsoft recommends 1 minute for
most systems.
TARGET_RECOVERY_TIME = target_recovery_time { SECONDS | MINUTES }
target_recovery_time
Specifies the maximum bound on the time to recover the specified database in the event of a crash.
SECONDS
Indicates that target_recovery_time is expressed as the number of seconds.
MINUTES
Indicates that target_recovery_time is expressed as the number of minutes.
For more information about indirect checkpoints, see Database Checkpoints.
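
For example, the following statements (AdventureWorks2012 is a placeholder database name) set a 60-second indirect-checkpoint target and then verify it:

```sql
ALTER DATABASE AdventureWorks2012
SET TARGET_RECOVERY_TIME = 60 SECONDS;
GO
-- A value of 0 in target_recovery_time_in_seconds means automatic checkpoints.
SELECT name, target_recovery_time_in_seconds
FROM sys.databases
WHERE name = N'AdventureWorks2012';
```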
WITH <termination> ::=
Specifies when to roll back incomplete transactions when the database is transitioned from one state to another.
If the termination clause is omitted, the ALTER DATABASE statement waits indefinitely if there is any lock on the
database. Only one termination clause can be specified, and it follows the SET clauses.

NOTE
Not all database options use the WITH <termination> clause. For more information, see the table in the
"Setting Options" section of this article.

ROLLBACK AFTER integer [ SECONDS ] | ROLLBACK IMMEDIATE
Specifies whether to roll back after the specified number of seconds or immediately.
NO_WAIT
Specifies that if the requested database state or option change cannot complete immediately without waiting
for transactions to commit or roll back on their own, the request will fail.
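
The termination variants can be sketched as follows (AdventureWorks2012 is a placeholder database name):

```sql
-- Give open transactions 10 seconds, then roll them back and restrict access.
ALTER DATABASE AdventureWorks2012
SET RESTRICTED_USER
WITH ROLLBACK AFTER 10 SECONDS;
GO
-- Fail immediately if the change would have to wait on open transactions.
ALTER DATABASE AdventureWorks2012
SET MULTI_USER
WITH NO_WAIT;
```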

Setting Options
To retrieve current settings for database options, use the sys.databases catalog view or the
DATABASEPROPERTYEX function.
After you set a database option, the modification takes effect immediately.
To change the default values for any one of the database options for all newly created databases, change the
appropriate database option in the model database.
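
For example, a setting applied to the model database is inherited by databases created afterward (a sketch; the option shown is illustrative):

```sql
-- New databases created after this statement default to ANSI_NULL_DEFAULT ON.
ALTER DATABASE model
SET ANSI_NULL_DEFAULT ON;
```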
Not all database options use the WITH <termination> clause or can be specified in combination with other
options. The following table lists these options and their option and termination status.

OPTIONS CATEGORY                        CAN BE SPECIFIED WITH OTHER OPTIONS   CAN USE THE WITH <TERMINATION> CLAUSE

<auto_option>                           Yes                                   No
<change_tracking_option>                Yes                                   Yes
<cursor_option>                         Yes                                   No
<db_encryption_option>                  Yes                                   No
<db_update_option>                      Yes                                   Yes
<db_user_access_option>                 Yes                                   Yes
<delayed_durability_option>             Yes                                   Yes
<parameterization_option>               Yes                                   Yes
ALLOW_SNAPSHOT_ISOLATION                No                                    No
READ_COMMITTED_SNAPSHOT                 No                                    Yes
MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT    Yes                                   Yes
DATE_CORRELATION_OPTIMIZATION           Yes                                   Yes
<sql_option>                            Yes                                   No
<target_recovery_time_option>           No                                    Yes

Examples
A. Setting the database to READ_ONLY
Changing the state of a database or filegroup to READ_ONLY or READ_WRITE requires exclusive access to the
database. The following example sets the database to RESTRICTED_USER mode to limit access. The example then
sets the state of the AdventureWorks2012 database to READ_ONLY and returns access to the database to all
users.

USE master;
GO
ALTER DATABASE AdventureWorks2012
SET RESTRICTED_USER;
GO
ALTER DATABASE AdventureWorks2012
SET READ_ONLY;
GO
ALTER DATABASE AdventureWorks2012
SET MULTI_USER;
GO

B. Enabling snapshot isolation on a database


The following example enables the snapshot isolation framework option for the AdventureWorks2012
database.
USE master;
GO
ALTER DATABASE AdventureWorks2012
SET ALLOW_SNAPSHOT_ISOLATION ON;
GO
-- Check the state of the snapshot_isolation_framework
-- in the database.
SELECT name, snapshot_isolation_state,
snapshot_isolation_state_desc AS description
FROM sys.databases
WHERE name = N'AdventureWorks2012';
GO

The result set shows that the snapshot isolation framework is enabled.

NAME                 SNAPSHOT_ISOLATION_STATE   DESCRIPTION
AdventureWorks2012   1                          ON

C. Enabling, modifying, and disabling change tracking


The following example enables change tracking for the AdventureWorks2012 database and sets the retention
period to 2 days.

ALTER DATABASE AdventureWorks2012
SET CHANGE_TRACKING = ON
(AUTO_CLEANUP = ON, CHANGE_RETENTION = 2 DAYS);

The following example shows how to change the retention period to 3 days.

ALTER DATABASE AdventureWorks2012
SET CHANGE_TRACKING (CHANGE_RETENTION = 3 DAYS);

The following example shows how to disable change tracking for the AdventureWorks2012 database.

ALTER DATABASE AdventureWorks2012
SET CHANGE_TRACKING = OFF;

D. Enabling the query store


The following example enables the query store and configures query store parameters.

ALTER DATABASE AdventureWorks2012
SET QUERY_STORE = ON
(
OPERATION_MODE = READ_WRITE
, CLEANUP_POLICY = ( STALE_QUERY_THRESHOLD_DAYS = 90 )
, DATA_FLUSH_INTERVAL_SECONDS = 900
, MAX_STORAGE_SIZE_MB = 1024
, INTERVAL_LENGTH_MINUTES = 60
);

See Also
ALTER DATABASE Compatibility Level
ALTER DATABASE Database Mirroring
Statistics
CREATE DATABASE
Enable and Disable Change Tracking
DATABASEPROPERTYEX
DROP DATABASE
SET TRANSACTION ISOLATION LEVEL
sp_configure
sys.databases
sys.data_spaces
Best Practice with the Query Store

APPLIES TO: SQL Server, Azure SQL Database (logical server), Azure SQL Database Managed Instance

Compatibility levels are SET options but are described in ALTER DATABASE Compatibility Level.

NOTE
Many database set options can be configured for the current session by using SET Statements and are often configured
by applications when they connect. Session level set options override the ALTER DATABASE SET values. The database
options described below are values that can be set for sessions that do not explicitly provide other set option values.

Syntax
ALTER DATABASE { database_name | Current }
SET
{
<optionspec> [ ,...n ]
}
;

<optionspec> ::=
{
<auto_option>
| <change_tracking_option>
| <cursor_option>
| <db_encryption_option>
| <delayed_durability_option>
| <parameterization_option>
| <query_store_options>
| <snapshot_option>
| <sql_option>
| <target_recovery_time_option>
| <termination>
| <temporal_history_retention>
}
;
<auto_option> ::=
{
AUTO_CREATE_STATISTICS { OFF | ON [ ( INCREMENTAL = { ON | OFF } ) ] }
| AUTO_SHRINK { ON | OFF }
| AUTO_UPDATE_STATISTICS { ON | OFF }
| AUTO_UPDATE_STATISTICS_ASYNC { ON | OFF }
}

<automatic_tuning_option> ::=
{
AUTOMATIC_TUNING ( FORCE_LAST_GOOD_PLAN = { ON | OFF } )
}

<change_tracking_option> ::=
{
CHANGE_TRACKING
{
= OFF
| = ON [ ( <change_tracking_option_list > [,...n] ) ]
| ( <change_tracking_option_list> [,...n] )
}
}

<change_tracking_option_list> ::=
{
AUTO_CLEANUP = { ON | OFF }
| CHANGE_RETENTION = retention_period { DAYS | HOURS | MINUTES }
}

<cursor_option> ::=
{
CURSOR_CLOSE_ON_COMMIT { ON | OFF }
}

<db_encryption_option> ::=
ENCRYPTION { ON | OFF }

<delayed_durability_option> ::= DELAYED_DURABILITY = { DISABLED | ALLOWED | FORCED }

<parameterization_option> ::=
PARAMETERIZATION { SIMPLE | FORCED }

<query_store_options> ::=
{
QUERY_STORE
{
= OFF
| = ON [ ( <query_store_option_list> [,...n] ) ]
| ( <query_store_option_list> [,...n] )
| CLEAR [ ALL ]
}
}

<query_store_option_list> ::=
{
OPERATION_MODE = { READ_WRITE | READ_ONLY }
| CLEANUP_POLICY = ( STALE_QUERY_THRESHOLD_DAYS = number )
| DATA_FLUSH_INTERVAL_SECONDS = number
| MAX_STORAGE_SIZE_MB = number
| INTERVAL_LENGTH_MINUTES = number
| SIZE_BASED_CLEANUP_MODE = [ AUTO | OFF ]
| QUERY_CAPTURE_MODE = [ ALL | AUTO | NONE ]
| MAX_PLANS_PER_QUERY = number
}

<snapshot_option> ::=
{
ALLOW_SNAPSHOT_ISOLATION { ON | OFF }
| READ_COMMITTED_SNAPSHOT {ON | OFF }
| MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT {ON | OFF }
}
<sql_option> ::=
{
ANSI_NULL_DEFAULT { ON | OFF }
| ANSI_NULLS { ON | OFF }
| ANSI_PADDING { ON | OFF }
| ANSI_WARNINGS { ON | OFF }
| ARITHABORT { ON | OFF }
| COMPATIBILITY_LEVEL = { 140 | 130 | 120 | 110 | 100 }
| CONCAT_NULL_YIELDS_NULL { ON | OFF }
| NUMERIC_ROUNDABORT { ON | OFF }
| QUOTED_IDENTIFIER { ON | OFF }
| RECURSIVE_TRIGGERS { ON | OFF }
}

<temporal_history_retention> ::= TEMPORAL_HISTORY_RETENTION { ON | OFF }

Arguments
database_name
Is the name of the database to be modified.
CURRENT
CURRENT performs the action in the current database. CURRENT is not supported for all options in all contexts. If
CURRENT fails, provide the database name.

<auto_option> ::=
Controls automatic options.
AUTO_CREATE_STATISTICS { ON | OFF }
ON
The query optimizer creates statistics on single columns in query predicates, as necessary, to improve query
plans and query performance. These single-column statistics are created when the query optimizer compiles
queries. The single-column statistics are created only on columns that are not already the first column of an
existing statistics object.
The default is ON. We recommend that you use the default setting for most databases.
OFF
The query optimizer does not create statistics on single columns in query predicates when it is compiling
queries. Setting this option to OFF can cause suboptimal query plans and degraded query performance.
The status of this option can be determined by examining the is_auto_create_stats_on column in the
sys.databases catalog view or the IsAutoCreateStatistics property of the DATABASEPROPERTYEX function.
For more information, see the section "Using the Database-Wide Statistics Options" in Statistics.
INCREMENTAL = ON | OFF
When AUTO_CREATE_STATISTICS is ON, and INCREMENTAL is set to ON, automatically created stats are
created as incremental whenever incremental stats is supported. The default value is OFF. For more
information, see CREATE STATISTICS.
AUTO_SHRINK { ON | OFF }
ON
The database files are candidates for periodic shrinking.
Both data file and log files can be automatically shrunk. AUTO_SHRINK reduces the size of the transaction log
only if the database is set to SIMPLE recovery model or if the log is backed up. When set to OFF, the database
files are not automatically shrunk during periodic checks for unused space.
The AUTO_SHRINK option causes files to be shrunk when more than 25 percent of the file contains unused
space. The file is shrunk to a size where 25 percent of the file is unused space, or to the size of the file when it
was created, whichever is larger.
You cannot shrink a read-only database.
OFF
The database files are not automatically shrunk during periodic checks for unused space.
The status of this option can be determined by examining the is_auto_shrink_on column in the sys.databases
catalog view or the IsAutoShrink property of the DATABASEPROPERTYEX function.

NOTE
The AUTO_SHRINK option is not available in a Contained Database.

AUTO_UPDATE_STATISTICS { ON | OFF }
ON
Specifies that the query optimizer updates statistics when they are used by a query and when they might be
out-of-date. Statistics become out-of-date after insert, update, delete, or merge operations change the data
distribution in the table or indexed view. The query optimizer determines when statistics might be out-of-date
by counting the number of data modifications since the last statistics update and comparing the number of
modifications to a threshold. The threshold is based on the number of rows in the table or indexed view.
The query optimizer checks for out-of-date statistics before compiling a query and before executing a cached
query plan. Before compiling a query, the query optimizer uses the columns, tables, and indexed views in the
query predicate to determine which statistics might be out-of-date. Before executing a cached query plan, the
Database Engine verifies that the query plan references up-to-date statistics.
The AUTO_UPDATE_STATISTICS option applies to statistics created for indexes, single-columns in query
predicates, and statistics that are created by using the CREATE STATISTICS statement. This option also applies
to filtered statistics.
The default is ON. We recommend that you use the default setting for most databases.
Use the AUTO_UPDATE_STATISTICS_ASYNC option to specify whether the statistics are updated
synchronously or asynchronously.
OFF
Specifies that the query optimizer does not update statistics when they are used by a query and when they
might be out-of-date. Setting this option to OFF can cause suboptimal query plans and degraded query
performance.
The status of this option can be determined by examining the is_auto_update_stats_on column in the
sys.databases catalog view or the IsAutoUpdateStatistics property of the DATABASEPROPERTYEX function.
For more information, see the section "Using the Database-Wide Statistics Options" in Statistics.
AUTO_UPDATE_STATISTICS_ASYNC { ON | OFF }
ON
Specifies that statistics updates for the AUTO_UPDATE_STATISTICS option are asynchronous. The query
optimizer does not wait for statistics updates to complete before it compiles queries.
Setting this option to ON has no effect unless AUTO_UPDATE_STATISTICS is set to ON.
By default, the AUTO_UPDATE_STATISTICS_ASYNC option is set to OFF, and the query optimizer updates
statistics synchronously.
OFF
Specifies that statistics updates for the AUTO_UPDATE_STATISTICS option are synchronous. The query
optimizer waits for statistics updates to complete before it compiles queries.
Setting this option to OFF has no effect unless AUTO_UPDATE_STATISTICS is set to ON.
The status of this option can be determined by examining the is_auto_update_stats_async_on column in the
sys.databases catalog view.
For more information that describes when to use synchronous or asynchronous statistics updates, see the
section "Using the Database-Wide Statistics Options" in Statistics.
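
As a sketch (AdventureWorks2012 is a placeholder database name), asynchronous statistics updates take effect only when both options are ON:

```sql
ALTER DATABASE AdventureWorks2012
SET AUTO_UPDATE_STATISTICS ON;
GO
ALTER DATABASE AdventureWorks2012
SET AUTO_UPDATE_STATISTICS_ASYNC ON;
GO
-- Confirm both flags.
SELECT name, is_auto_update_stats_on, is_auto_update_stats_async_on
FROM sys.databases
WHERE name = N'AdventureWorks2012';
```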
<automatic_tuning_option> ::=
Applies to: SQL Server 2017 (14.x).
Enables or disables the FORCE_LAST_GOOD_PLAN automatic tuning option.
FORCE_LAST_GOOD_PLAN = { ON | OFF }
ON
The Database Engine automatically forces the last known good plan on Transact-SQL queries where a new
SQL plan causes performance regressions. The Database Engine continuously monitors query performance of
the Transact-SQL query with the forced plan. If there are performance gains, the Database Engine keeps
using the last known good plan. If no performance gains are detected, the Database Engine produces a new
SQL plan. The statement fails if Query Store is not enabled or is not in Read-Write mode.
OFF
The Database Engine reports potential query performance regressions caused by SQL plan changes in the
sys.dm_db_tuning_recommendations view. However, these recommendations are not automatically applied.
Users can monitor active recommendations and fix identified problems by applying the Transact-SQL scripts
that are shown in the view. This is the default value.
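
The option can be enabled and its recommendations inspected as follows (a sketch; it assumes Query Store is already ON and in READ_WRITE mode, and AdventureWorks2012 is a placeholder database name):

```sql
ALTER DATABASE AdventureWorks2012
SET AUTOMATIC_TUNING ( FORCE_LAST_GOOD_PLAN = ON );
GO
-- Review plan-regression recommendations produced by the Database Engine.
SELECT name, reason, score, state
FROM sys.dm_db_tuning_recommendations;
```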
<change_tracking_option> ::=
Controls change tracking options. You can enable change tracking, set options, change options, and disable
change tracking. For examples, see the Examples section later in this article.
ON
Enables change tracking for the database. When you enable change tracking, you can also set the
AUTO_CLEANUP and CHANGE_RETENTION options.
AUTO_CLEANUP = { ON | OFF }
ON
Change tracking information is automatically removed after the specified retention period.
OFF
Change tracking data is not removed from the database.
CHANGE_RETENTION = retention_period { DAYS | HOURS | MINUTES }
Specifies the minimum period for keeping change tracking information in the database. Data is removed only
when the AUTO_CLEANUP value is ON.
retention_period is an integer that specifies the numerical component of the retention period.
The default retention period is 2 days. The minimum retention period is 1 minute. The default retention type is
DAYS.
OFF
Disables change tracking for the database. You must disable change tracking on all tables before you can
disable change tracking for the database.
<cursor_option> ::=
Controls cursor options.
CURSOR_CLOSE_ON_COMMIT { ON | OFF }
ON
Any cursors open when a transaction is committed or rolled back are closed.
OFF
Cursors remain open when a transaction is committed; rolling back a transaction closes any cursors except
those defined as INSENSITIVE or STATIC.
Connection-level settings that are set by using the SET statement override the default database setting for
CURSOR_CLOSE_ON_COMMIT. By default, ODBC and OLE DB clients issue a connection-level SET
statement setting CURSOR_CLOSE_ON_COMMIT to OFF for the session when connecting to an instance of
SQL Server. For more information, see SET CURSOR_CLOSE_ON_COMMIT.
The status of this option can be determined by examining the is_cursor_close_on_commit_on column in the
sys.databases catalog view or the IsCloseCursorsOnCommitEnabled property of the DATABASEPROPERTYEX
function. The cursor is implicitly deallocated only at disconnect. For more information, see DECLARE CURSOR.
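
A minimal sketch (AdventureWorks2012 is a placeholder database name):

```sql
ALTER DATABASE AdventureWorks2012
SET CURSOR_CLOSE_ON_COMMIT ON;
GO
-- Returns 1 when the option is enabled.
SELECT DATABASEPROPERTYEX(N'AdventureWorks2012',
       'IsCloseCursorsOnCommitEnabled') AS cursor_close_on_commit;
```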
<db_encryption_option> ::=
Controls the database encryption state.
ENCRYPTION { ON | OFF }
Sets the database to be encrypted (ON) or not encrypted (OFF). For more information about database
encryption, see Transparent Data Encryption and Transparent Data Encryption with Azure SQL Database.
When encryption is enabled at the database level, all filegroups will be encrypted. Any new filegroups will
inherit the encrypted property. If any filegroups in the database are set to READ ONLY, the database
encryption operation will fail.
You can see the encryption state of the database by using the sys.dm_database_encryption_keys dynamic
management view.
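
As a sketch, assuming a database encryption key has already been created for the database (a prerequisite for turning encryption on; AdventureWorks2012 is a placeholder database name):

```sql
ALTER DATABASE AdventureWorks2012
SET ENCRYPTION ON;
GO
-- encryption_state 2 = encryption in progress, 3 = encrypted.
SELECT DB_NAME(database_id) AS database_name, encryption_state
FROM sys.dm_database_encryption_keys;
```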
<db_update_option> ::=
Controls whether updates are allowed on the database.
READ_ONLY
Users can read data from the database but not modify it.

NOTE
To improve query performance, update statistics before setting a database to READ_ONLY. If additional statistics are
needed after a database is set to READ_ONLY, the Database Engine will create statistics in tempdb. For more information
about statistics for a read-only database, see Statistics.

READ_WRITE
The database is available for read and write operations.
To change this state, you must have exclusive access to the database.
<db_user_access_option> ::=
Controls user access to the database.
RESTRICTED_USER
RESTRICTED_USER allows only members of the db_owner fixed database role and the dbcreator and sysadmin
fixed server roles to connect to the database, but does not limit their number. All connections to the database
are disconnected in the timeframe specified by the termination clause of the ALTER DATABASE statement. After
the database has transitioned to the RESTRICTED_USER state, connection attempts by unqualified users are
refused. RESTRICTED_USER cannot be modified with SQL Database Managed Instance.
MULTI_USER
All users that have the appropriate permissions to connect to the database are allowed.
The status of this option can be determined by examining the user_access column in the sys.databases catalog
view or the UserAccess property of the DATABASEPROPERTYEX function.
<delayed_durability_option> ::=
Controls whether transactions commit fully durable or delayed durable.
DISABLED
All transactions following SET DISABLED are fully durable. Any durability options set in an atomic block or
commit statement are ignored.
ALLOWED
All transactions following SET ALLOWED are either fully durable or delayed durable, depending upon the
durability option set in the atomic block or commit statement.
FORCED
All transactions following SET FORCED are delayed durable. Any durability options set in an atomic block or
commit statement are ignored.
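
For example (a sketch; AdventureWorks2012 is a placeholder database name), ALLOWED lets each transaction opt in to delayed durability at commit time:

```sql
ALTER DATABASE AdventureWorks2012
SET DELAYED_DURABILITY = ALLOWED;
GO
BEGIN TRANSACTION;
-- ... data modifications ...
COMMIT TRANSACTION WITH ( DELAYED_DURABILITY = ON );
```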
<parameterization_option> ::=
Controls the parameterization option.
PARAMETERIZATION { SIMPLE | FORCED }
SIMPLE
Queries are parameterized based on the default behavior of the database.
FORCED
SQL Server parameterizes all queries in the database.
The current setting of this option can be determined by examining the is_parameterization_forced column in the
sys.databases catalog view.
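
A minimal sketch (AdventureWorks2012 is a placeholder database name):

```sql
ALTER DATABASE AdventureWorks2012
SET PARAMETERIZATION FORCED;
GO
SELECT name, is_parameterization_forced
FROM sys.databases
WHERE name = N'AdventureWorks2012';
```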
<query_store_options> ::=
ON | OFF | CLEAR [ ALL ]
Controls whether the query store is enabled in this database, and also controls removing the contents of the
query store.
ON
Enables the query store.
OFF
Disables the query store. This is the default value.
CLEAR
Removes the contents of the query store.
OPERATION_MODE
Describes the operation mode of the query store. Valid values are READ_ONLY and READ_WRITE. In
READ_WRITE mode, the query store collects and persists query plan and runtime execution statistics
information. In READ_ONLY mode, information can be read from the query store, but new information is not
added. If the maximum allocated space of the query store has been exhausted, the query store will change its
operation mode to READ_ONLY.
CLEANUP_POLICY
Describes the data retention policy of the query store. STALE_QUERY_THRESHOLD_DAYS determines the
number of days for which the information for a query is retained in the query store.
STALE_QUERY_THRESHOLD_DAYS is type bigint.
DATA_FLUSH_INTERVAL_SECONDS
Determines the frequency at which data written to the query store is persisted to disk. To optimize for
performance, data collected by the query store is asynchronously written to the disk. The frequency at which
this asynchronous transfer occurs is configured by using the DATA_FLUSH_INTERVAL_SECONDS argument.
DATA_FLUSH_INTERVAL_SECONDS is type bigint.
MAX_STORAGE_SIZE_MB
Determines the space allocated to the query store. MAX_STORAGE_SIZE_MB is type bigint.
INTERVAL_LENGTH_MINUTES
Determines the time interval at which runtime execution statistics data is aggregated into the query store. To
optimize for space usage, the runtime execution statistics in the runtime stats store are aggregated over a fixed
time window. This fixed time window is configured by using the INTERVAL_LENGTH_MINUTES argument.
INTERVAL_LENGTH_MINUTES is type bigint.
SIZE_BASED_CLEANUP_MODE
Controls whether cleanup will be automatically activated when total amount of data gets close to maximum
size:
OFF
Size based cleanup won't be automatically activated.
AUTO
Size based cleanup will be automatically activated when size on disk reaches 90% of max_storage_size_mb.
Size based cleanup removes the least expensive and oldest queries first. It stops at approximately 80% of
max_storage_size_mb. This is the default configuration value.
SIZE_BASED_CLEANUP_MODE is type nvarchar.
QUERY_CAPTURE_MODE
Designates the currently active query capture mode:
ALL - All queries are captured. This is the default configuration value for SQL Server.
AUTO - Capture relevant queries based on execution count and resource consumption. This is the default
configuration value for SQL Database.
NONE - Stop capturing new queries. Query Store continues to collect compile and runtime statistics for
queries that were captured already. Use this configuration with caution, since you may miss capturing
important queries.
QUERY_CAPTURE_MODE is type nvarchar.
MAX_PLANS_PER_QUERY
An integer representing the maximum number of plans maintained for each query. Default is 200.
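
The CLEAR form, not shown in the examples later in this article, discards the collected data (a sketch; AdventureWorks2012 is a placeholder database name):

```sql
-- Discard all query texts, plans, and runtime statistics in the query store.
ALTER DATABASE AdventureWorks2012
SET QUERY_STORE CLEAR;
```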
<snapshot_option> ::=
Determines the transaction isolation level.
ALLOW_SNAPSHOT_ISOLATION { ON | OFF }
ON
Enables Snapshot option at the database level. When it is enabled, DML statements start generating row
versions even when no transaction uses Snapshot Isolation. Once this option is enabled, transactions can
specify the SNAPSHOT transaction isolation level. When a transaction runs at the SNAPSHOT isolation level,
all statements see a snapshot of data as it exists at the start of the transaction. If a transaction running at the
SNAPSHOT isolation level accesses data in multiple databases, either ALLOW_SNAPSHOT_ISOLATION must
be set to ON in all the databases, or each statement in the transaction must use locking hints on any reference
in a FROM clause to a table in a database where ALLOW_SNAPSHOT_ISOLATION is OFF.
OFF
Turns off the Snapshot option at the database level. Transactions cannot specify the SNAPSHOT transaction
isolation level.
When you set ALLOW_SNAPSHOT_ISOLATION to a new state (from ON to OFF, or from OFF to ON ), ALTER
DATABASE does not return control to the caller until all existing transactions in the database are committed. If
the database is already in the state specified in the ALTER DATABASE statement, control is returned to the caller
immediately. If the ALTER DATABASE statement does not return quickly, use
sys.dm_tran_active_snapshot_database_transactions to determine whether there are long-running transactions.
If the ALTER DATABASE statement is canceled, the database remains in the state it was in when ALTER
DATABASE was started. The sys.databases catalog view indicates the state of snapshot-isolation transactions in
the database. If snapshot_isolation_state_desc = IN_TRANSITION_TO_ON, ALTER DATABASE
ALLOW_SNAPSHOT_ISOLATION OFF will pause six seconds and retry the operation.
You cannot change the state of ALLOW_SNAPSHOT_ISOLATION if the database is OFFLINE.
If you set ALLOW_SNAPSHOT_ISOLATION in a READ_ONLY database, the setting will be retained if the
database is later set to READ_WRITE.
You can change the ALLOW_SNAPSHOT_ISOLATION settings for the master, model, msdb, and tempdb
databases. If you change the setting for tempdb, the setting is retained every time the instance of the Database
Engine is stopped and restarted. If you change the setting for model, that setting becomes the default for any
new databases that are created, except for tempdb.
The option is ON, by default, for the master and msdb databases.
The current setting of this option can be determined by examining the snapshot_isolation_state column in the
sys.databases catalog view.
READ_COMMITTED_SNAPSHOT { ON | OFF }
ON
Enables Read-Committed Snapshot option at the database level. When it is enabled, DML statements start
generating row versions even when no transaction uses Snapshot Isolation. Once this option is enabled, the
transactions specifying the read committed isolation level use row versioning instead of locking. When a
transaction runs at the read committed isolation level, all statements see a snapshot of data as it exists at the
start of the statement.
OFF
Turns off Read-Committed Snapshot option at the database level. Transactions specifying the READ
COMMITTED isolation level use locking.
To set READ_COMMITTED_SNAPSHOT ON or OFF, there must be no active connections to the database
except for the connection executing the ALTER DATABASE command. However, the database does not have to
be in single-user mode. You cannot change the state of this option when the database is OFFLINE.
If you set READ_COMMITTED_SNAPSHOT in a READ_ONLY database, the setting will be retained when the
database is later set to READ_WRITE.
READ_COMMITTED_SNAPSHOT cannot be turned ON for the master, tempdb, or msdb system databases. If
you change the setting for model, that setting becomes the default for any new databases created, except for
tempdb.
The current setting of this option can be determined by examining the is_read_committed_snapshot_on column
in the sys.databases catalog view.
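
Because the switch requires that no other connections be active, it is commonly combined with a termination clause (a sketch; AdventureWorks2012 is a placeholder database name):

```sql
-- Terminate other connections' transactions rather than waiting for them.
ALTER DATABASE AdventureWorks2012
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;
GO
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = N'AdventureWorks2012';
```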

WARNING
When a table is created with DURABILITY = SCHEMA_ONLY, and READ_COMMITTED_SNAPSHOT is subsequently
changed using ALTER DATABASE, data in the table will be lost.

MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT { ON | OFF }
ON
When the transaction isolation level is set to any isolation level lower than SNAPSHOT (for example, READ
COMMITTED or READ UNCOMMITTED ), all interpreted Transact-SQL operations on memory-optimized
tables are performed under SNAPSHOT isolation. This is done regardless of whether the transaction isolation
level is set explicitly at the session level, or the default is used implicitly.
OFF
Does not elevate the transaction isolation level for interpreted Transact-SQL operations on memory-optimized
tables.
You cannot change the state of MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT if the database is OFFLINE.
The option is OFF, by default.
The current setting of this option can be determined by examining the
is_memory_optimized_elevate_to_snapshot_on column in the sys.databases catalog view.
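
A minimal sketch (AdventureWorks2012 is a placeholder database name):

```sql
ALTER DATABASE AdventureWorks2012
SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT ON;
GO
SELECT name, is_memory_optimized_elevate_to_snapshot_on
FROM sys.databases
WHERE name = N'AdventureWorks2012';
```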
<sql_option> ::=
Controls the ANSI compliance options at the database level.
ANSI_NULL_DEFAULT { ON | OFF }
Determines the default value, NULL or NOT NULL, of a column or CLR user-defined type for which the
nullability is not explicitly defined in CREATE TABLE or ALTER TABLE statements. Columns that are defined
with constraints follow constraint rules regardless of this setting.
ON
The default value is NULL.
OFF
The default value is NOT NULL.
Connection-level settings that are set by using the SET statement override the default database-level setting for
ANSI_NULL_DEFAULT. By default, ODBC and OLE DB clients issue a connection-level SET statement setting
ANSI_NULL_DEFAULT to ON for the session when connecting to an instance of SQL Server. For more
information, see SET ANSI_NULL_DFLT_ON.
For ANSI compatibility, setting the database option ANSI_NULL_DEFAULT to ON changes the database
default to NULL.
The status of this option can be determined by examining the is_ansi_null_default_on column in the
sys.databases catalog view or the IsAnsiNullDefault property of the DATABASEPROPERTYEX function.
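
To illustrate (a sketch; AdventureWorks2012 and the table name are placeholders):

```sql
ALTER DATABASE AdventureWorks2012
SET ANSI_NULL_DEFAULT ON;
GO
-- Column c1 has no explicit nullability; with the database default in effect
-- (and no overriding session-level SET), c1 allows NULL.
CREATE TABLE dbo.NullDefaultDemo (c1 int);
```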
ANSI_NULLS { ON | OFF }
ON
All comparisons to a null value evaluate to UNKNOWN.
OFF
Comparisons of non-UNICODE values to a null value evaluate to TRUE if both values are NULL.

IMPORTANT
In a future version of SQL Server, ANSI_NULLS will always be ON and any applications that explicitly set the option to OFF
will produce an error. Avoid using this feature in new development work, and plan to modify applications that currently
use this feature.

Connection-level settings that are set by using the SET statement override the default database setting for
ANSI_NULLS. By default, ODBC and OLE DB clients issue a connection-level SET statement setting
ANSI_NULLS to ON for the session when connecting to an instance of SQL Server. For more information, see
SET ANSI_NULLS.
SET ANSI_NULLS also must be set to ON when you create or make changes to indexes on computed columns
or indexed views.
The status of this option can be determined by examining the is_ansi_nulls_on column in the sys.databases
catalog view or the IsAnsiNullsEnabled property of the DATABASEPROPERTYEX function.
ANSI_PADDING { ON | OFF }
ON
Strings are padded to the same length before conversion or inserting to a varchar or nvarchar data type.
Trailing blanks in character values inserted into varchar or nvarchar columns and trailing zeros in binary
values inserted into varbinary columns are not trimmed. Values are not padded to the length of the column.
OFF
Trailing blanks for varchar or nvarchar and zeros for varbinary are trimmed.
When OFF is specified, this setting affects only the definition of new columns.

IMPORTANT
In a future version of SQL Server, ANSI_PADDING will always be ON and any applications that explicitly set the option to
OFF will produce an error. Avoid using this feature in new development work, and plan to modify applications that
currently use this feature. We recommend that you always set ANSI_PADDING to ON. ANSI_PADDING must be ON when
you create or manipulate indexes on computed columns or indexed views.

char(n) and binary(n) columns that allow for nulls are padded to the length of the column when
ANSI_PADDING is set to ON, but trailing blanks and zeros are trimmed when ANSI_PADDING is OFF.
char(n) and binary(n) columns that do not allow nulls are always padded to the length of the column.
Connection-level settings that are set by using the SET statement override the default database-level setting for
ANSI_PADDING. By default, ODBC and OLE DB clients issue a connection-level SET statement setting
ANSI_PADDING to ON for the session when connecting to an instance of SQL Server. For more information,
see SET ANSI_PADDING.
The status of this option can be determined by examining the is_ansi_padding_on column in the sys.databases
catalog view or the IsAnsiPaddingEnabled property of the DATABASEPROPERTYEX function.
ANSI_WARNINGS { ON | OFF }
ON
Errors or warnings are issued when conditions such as divide-by-zero occur or null values appear in aggregate
functions.
OFF
No warnings are raised and null values are returned when conditions such as divide-by-zero occur.
SET ANSI_WARNINGS must be set to ON when you create or make changes to indexes on computed columns
or indexed views.
Connection-level settings that are set by using the SET statement override the default database setting for
ANSI_WARNINGS. By default, ODBC and OLE DB clients issue a connection-level SET statement setting
ANSI_WARNINGS to ON for the session when connecting to an instance of SQL Server. For more
information, see SET ANSI_WARNINGS.
The status of this option can be determined by examining the is_ansi_warnings_on column in the sys.databases
catalog view or the IsAnsiWarningsEnabled property of the DATABASEPROPERTYEX function.
ARITHABORT { ON | OFF }
ON
A query is ended when an overflow or divide-by-zero error occurs during query execution.
OFF
A warning message is displayed when one of these errors occurs, but the query, batch, or transaction continues
to process as if no error occurred.
SET ARITHABORT must be set to ON when you create or make changes to indexes on computed columns or
indexed views.
The status of this option can be determined by examining the is_arithabort_on column in the sys.databases
catalog view or the IsArithmeticAbortEnabled property of the DATABASEPROPERTYEX function.
COMPATIBILITY_LEVEL = { 140 | 130 | 120 | 110 | 100 }
For more information, see ALTER DATABASE Compatibility Level.
CONCAT_NULL_YIELDS_NULL { ON | OFF }
ON
The result of a concatenation operation is NULL when either operand is NULL. For example, concatenating the
character string "This is" and NULL causes the value NULL, instead of the value "This is".
OFF
The null value is treated as an empty character string.
CONCAT_NULL_YIELDS_NULL must be set to ON when you create or make changes to indexes on computed
columns or indexed views.

IMPORTANT
In a future version of SQL Server, CONCAT_NULL_YIELDS_NULL will always be ON and any applications that explicitly set
the option to OFF will produce an error. Avoid using this feature in new development work, and plan to modify
applications that currently use this feature.

Connection-level settings that are set by using the SET statement override the default database setting for
CONCAT_NULL_YIELDS_NULL. By default, ODBC and OLE DB clients issue a connection-level SET statement
setting CONCAT_NULL_YIELDS_NULL to ON for the session when connecting to an instance of SQL Server.
For more information, see SET CONCAT_NULL_YIELDS_NULL.
The status of this option can be determined by examining the is_concat_null_yields_null_on column in the
sys.databases catalog view or the IsNullConcat property of the DATABASEPROPERTYEX function.
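The difference between the two settings can be seen with the equivalent session-level option, which behaves the same way (a sketch; run in any database):

SET CONCAT_NULL_YIELDS_NULL ON;
SELECT 'This is' + NULL;  -- returns NULL

SET CONCAT_NULL_YIELDS_NULL OFF;
SELECT 'This is' + NULL;  -- NULL is treated as an empty string; returns 'This is'
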
QUOTED_IDENTIFIER { ON | OFF }
ON
Double quotation marks can be used to enclose delimited identifiers.
All strings delimited by double quotation marks are interpreted as object identifiers. Quoted identifiers do not
have to follow the Transact-SQL rules for identifiers. They can be keywords and can include characters not
generally allowed in Transact-SQL identifiers. If a single quotation mark (') is part of the literal string, it can be
represented by double quotation marks (").
OFF
Identifiers cannot be in quotation marks and must follow all Transact-SQL rules for identifiers. Literals can be
delimited by either single or double quotation marks.
SQL Server also allows for identifiers to be delimited by square brackets ([ ]). Bracketed identifiers can always
be used, regardless of the setting of QUOTED_IDENTIFIER. For more information, see Database Identifiers.
When a table is created, the QUOTED_IDENTIFIER option is always stored as ON in the metadata of the table,
even if the option is set to OFF when the table is created.
Connection-level settings that are set by using the SET statement override the default database setting for
QUOTED_IDENTIFIER. By default, ODBC and OLE DB clients issue a connection-level SET statement setting
QUOTED_IDENTIFIER to ON when connecting to an instance of SQL Server. For more information, see SET
QUOTED_IDENTIFIER.
The status of this option can be determined by examining the is_quoted_identifier_on column in the
sys.databases catalog view or the IsQuotedIdentifiersEnabled property of the DATABASEPROPERTYEX
function.
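For example, the session-level setting (which overrides the database option) changes how double quotation marks are interpreted; the table and column names below are illustrative:

SET QUOTED_IDENTIFIER ON;
CREATE TABLE "select" ("order" INT);  -- reserved keywords allowed as delimited identifiers
DROP TABLE "select";

SET QUOTED_IDENTIFIER OFF;
SELECT "This is a string literal";  -- double quotation marks now delimit a literal
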
NUMERIC_ROUNDABORT { ON | OFF }
ON
An error is generated when loss of precision occurs in an expression.
OFF
Losses of precision do not generate error messages and the result is rounded to the precision of the column or
variable storing the result.
NUMERIC_ROUNDABORT must be set to OFF when you create or make changes to indexes on computed
columns or indexed views.
The status of this option can be determined by examining the is_numeric_roundabort_on column in the
sys.databases catalog view or the IsNumericRoundAbortEnabled property of the DATABASEPROPERTYEX
function.
RECURSIVE_TRIGGERS { ON | OFF }
ON
Recursive firing of AFTER triggers is allowed.
OFF
Direct recursive firing of AFTER triggers is not allowed. To also disable indirect recursion of AFTER
triggers, set the nested triggers server option to 0 by using sp_configure.

NOTE
Only direct recursion is prevented when RECURSIVE_TRIGGERS is set to OFF. To disable indirect recursion, you must also
set the nested triggers server option to 0.

The status of this option can be determined by examining the is_recursive_triggers_on column in the
sys.databases catalog view or the IsRecursiveTriggersEnabled property of the DATABASEPROPERTYEX
function.
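To disable indirect recursion as well, the server-level nested triggers option can be turned off as follows (a sketch; requires the appropriate server-level permissions):

EXEC sp_configure 'nested triggers', 0;
RECONFIGURE;
GO
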
<target_recovery_time_option> ::=
Specifies the frequency of indirect checkpoints on a per-database basis. Beginning with SQL Server 2016 (13.x),
the default value for new databases is 1 minute, which indicates that the database will use indirect checkpoints.
For older versions, the default is 0, which indicates that the database will use automatic checkpoints, whose
frequency depends on the recovery interval setting of the server instance. Microsoft recommends 1 minute for
most systems.
TARGET_RECOVERY_TIME =target_recovery_time { SECONDS | MINUTES }
target_recovery_time
Specifies the maximum bound on the time to recover the specified database in the event of a crash.
SECONDS
Indicates that target_recovery_time is expressed as the number of seconds.
MINUTES
Indicates that target_recovery_time is expressed as the number of minutes.
For more information about indirect checkpoints, see Database Checkpoints.
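For example, the following sets a 60-second indirect-checkpoint recovery target (the database name is illustrative):

ALTER DATABASE AdventureWorks2012
SET TARGET_RECOVERY_TIME = 60 SECONDS;
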
ROLLBACK AFTER integer [ SECONDS ] | ROLLBACK IMMEDIATE
Specifies whether to roll back after the specified number of seconds or immediately.
NO_WAIT
Specifies that if the requested database state or option change cannot complete immediately without waiting
for transactions to commit or roll back on their own, the request will fail.
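These termination options are given in a WITH clause on the state change. For example, the following rolls back incomplete transactions after 10 seconds while switching the database to restricted-user mode (illustrative database name):

ALTER DATABASE AdventureWorks2012
SET RESTRICTED_USER WITH ROLLBACK AFTER 10 SECONDS;
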

Setting Options
To retrieve the current settings for database options, use the sys.databases catalog view or
DATABASEPROPERTYEX.
After you set a database option, the modification takes effect immediately.
To change the default values for any one of the database options for all newly created databases, change the
appropriate database option in the model database.
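For example, the following makes RECURSIVE_TRIGGERS default to ON for all databases created afterward (a sketch; changing model affects only subsequently created databases):

ALTER DATABASE model
SET RECURSIVE_TRIGGERS ON;
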

Examples
A. Setting the database to READ_ONLY
Changing the state of a database or filegroup to READ_ONLY or READ_WRITE requires exclusive access to the
database. The following example sets the database to RESTRICTED_USER mode to restrict access. The example
then sets the state of the AdventureWorks2012 database to READ_ONLY and returns access to the database to
all users.

USE master;
GO
ALTER DATABASE AdventureWorks2012
SET RESTRICTED_USER;
GO
ALTER DATABASE AdventureWorks2012
SET READ_ONLY;
GO
ALTER DATABASE AdventureWorks2012
SET MULTI_USER;
GO

B. Enabling snapshot isolation on a database


The following example enables the snapshot isolation framework option for the AdventureWorks2012
database.

USE AdventureWorks2012;
GO
ALTER DATABASE AdventureWorks2012
SET ALLOW_SNAPSHOT_ISOLATION ON;
GO
-- Check the state of the snapshot_isolation_framework
-- in the database.
SELECT name, snapshot_isolation_state,
snapshot_isolation_state_desc AS description
FROM sys.databases
WHERE name = N'AdventureWorks2012';
GO

The result set shows that the snapshot isolation framework is enabled.

name                 snapshot_isolation_state  description
-------------------- ------------------------  -----------
AdventureWorks2012   1                         ON

C. Enabling, modifying, and disabling change tracking


The following example enables change tracking for the AdventureWorks2012 database and sets the retention
period to 2 days.

ALTER DATABASE AdventureWorks2012
SET CHANGE_TRACKING = ON
(AUTO_CLEANUP = ON, CHANGE_RETENTION = 2 DAYS);

The following example shows how to change the retention period to 3 days.

ALTER DATABASE AdventureWorks2012
SET CHANGE_TRACKING (CHANGE_RETENTION = 3 DAYS);

The following example shows how to disable change tracking for the AdventureWorks2012 database.

ALTER DATABASE AdventureWorks2012
SET CHANGE_TRACKING = OFF;

D. Enabling the query store


The following example enables the query store and configures query store parameters.

ALTER DATABASE AdventureWorks2012
SET QUERY_STORE = ON
(
OPERATION_MODE = READ_WRITE
, CLEANUP_POLICY = ( STALE_QUERY_THRESHOLD_DAYS = 90 )
, DATA_FLUSH_INTERVAL_SECONDS = 900
, MAX_STORAGE_SIZE_MB = 1024
, INTERVAL_LENGTH_MINUTES = 60
);

See Also
ALTER DATABASE Compatibility Level
ALTER DATABASE Database Mirroring
Statistics
CREATE DATABASE
Enable and Disable Change Tracking
DATABASEPROPERTYEX
DROP DATABASE
SET TRANSACTION ISOLATION LEVEL
sp_configure
sys.databases
sys.data_spaces
Best Practice with the Query Store
ALTER ENDPOINT (Transact-SQL)
10/1/2018 • 2 minutes to read

APPLIES TO: SQL Server (starting with 2014) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Enables modifying an existing endpoint in the following ways:
By adding a new method to an existing endpoint.
By modifying or dropping an existing method from the endpoint.
By changing the properties of an endpoint.

NOTE
This topic describes the syntax and arguments that are specific to ALTER ENDPOINT. For descriptions of the arguments that
are common to both CREATE ENDPOINT and ALTER ENDPOINT, see CREATE ENDPOINT (Transact-SQL).

Native XML Web Services (SOAP/HTTP endpoints) is removed beginning in SQL Server 2012 (11.x).
Transact-SQL Syntax Conventions

Syntax
ALTER ENDPOINT endPointName [ AUTHORIZATION login ]
[ STATE = { STARTED | STOPPED | DISABLED } ]
[ AS { TCP } ( <protocol_specific_items> ) ]
[ FOR { TSQL | SERVICE_BROKER | DATABASE_MIRRORING } (
<language_specific_items>
) ]

<AS TCP_protocol_specific_arguments> ::=
AS TCP (
LISTENER_PORT = listenerPort
[ [ , ] LISTENER_IP = ALL | ( 4-part-ip ) | ( "ip_address_v6" ) ]
)
<FOR SERVICE_BROKER_language_specific_arguments> ::=
FOR SERVICE_BROKER (
[ AUTHENTICATION = {
WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ]
| CERTIFICATE certificate_name
| WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ] CERTIFICATE certificate_name
| CERTIFICATE certificate_name WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ]
} ]
[ , ENCRYPTION = { DISABLED
|
{ { SUPPORTED | REQUIRED }
[ ALGORITHM { RC4 | AES | AES RC4 | RC4 AES } ] }
} ]

[ , MESSAGE_FORWARDING = {ENABLED | DISABLED} ]


[ , MESSAGE_FORWARD_SIZE = forwardSize ]
)

<FOR DATABASE_MIRRORING_language_specific_arguments> ::=
FOR DATABASE_MIRRORING (
[ AUTHENTICATION = {
WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ]
| CERTIFICATE certificate_name
| WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ] CERTIFICATE certificate_name
| CERTIFICATE certificate_name WINDOWS [ { NTLM | KERBEROS | NEGOTIATE } ]
} ]
[ , ENCRYPTION = { DISABLED
|
{ { SUPPORTED | REQUIRED }
[ ALGORITHM { RC4 | AES | AES RC4 | RC4 AES } ] }
} ]
[ , ] ROLE = { WITNESS | PARTNER | ALL }
)

Arguments
NOTE
The following arguments are specific to ALTER ENDPOINT. For descriptions of the remaining arguments, see CREATE
ENDPOINT (Transact-SQL).

AS { TCP }
You cannot change the transport protocol with ALTER ENDPOINT.
AUTHORIZATION login
The AUTHORIZATION option is not available in ALTER ENDPOINT. Ownership can only be assigned when
the endpoint is created.
FOR { TSQL | SERVICE_BROKER | DATABASE_MIRRORING }
You cannot change the payload type with ALTER ENDPOINT.

Remarks
When you use ALTER ENDPOINT, specify only those parameters that you want to update. All properties of an
existing endpoint remain the same unless you explicitly change them.
The ENDPOINT DDL statements cannot be executed inside a user transaction.
For information on choosing an encryption algorithm for use with an endpoint, see Choose an Encryption
Algorithm.

NOTE
The RC4 algorithm is only supported for backward compatibility. New material can only be encrypted using RC4 or RC4_128
when the database is in compatibility level 90 or 100. (Not recommended.) Use a newer algorithm such as one of the AES
algorithms instead. In SQL Server 2012 (11.x) and later versions, material encrypted using RC4 or RC4_128 can be decrypted
in any compatibility level.
RC4 is a relatively weak algorithm, and AES is a relatively strong algorithm. But AES is considerably slower than RC4. If
security is a higher priority for you than speed, we recommend you use AES.
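For example, the following starts an existing endpoint without changing any of its other properties (the endpoint name Mirroring_Endpoint is hypothetical):

ALTER ENDPOINT Mirroring_Endpoint
STATE = STARTED;
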

Permissions
User must be a member of the sysadmin fixed server role, the owner of the endpoint, or have been granted
ALTER ANY ENDPOINT permission.
To change ownership of an existing endpoint, you must use the ALTER AUTHORIZATION statement. For more
information, see ALTER AUTHORIZATION (Transact-SQL).
For more information, see GRANT Endpoint Permissions (Transact-SQL).

See Also
DROP ENDPOINT (Transact-SQL)
EVENTDATA (Transact-SQL)
ALTER EVENT SESSION (Transact-SQL)
11/27/2018 • 6 minutes to read

APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Starts or stops an event session or changes an event session configuration.
Transact-SQL Syntax Conventions

Syntax
ALTER EVENT SESSION event_session_name
ON SERVER
{
[ [ { <add_drop_event> [ ,...n] }
| { <add_drop_event_target> [ ,...n ] } ]
[ WITH ( <event_session_options> [ ,...n ] ) ]
]
| [ STATE = { START | STOP } ]
}

<add_drop_event>::=
{
[ ADD EVENT <event_specifier>
[ ( {
[ SET { event_customizable_attribute = <value> [ ,...n ] } ]
[ ACTION ( { [event_module_guid].event_package_name.action_name [ ,...n ] } ) ]
[ WHERE <predicate_expression> ]
} ) ]
]
| DROP EVENT <event_specifier> }

<event_specifier> ::=
{
[event_module_guid].event_package_name.event_name
}

<predicate_expression> ::=
{
[ NOT ] <predicate_factor> | {( <predicate_expression> ) }
[ { AND | OR } [ NOT ] { <predicate_factor> | ( <predicate_expression> ) } ]
[ ,...n ]
}

<predicate_factor>::=
{
<predicate_leaf> | ( <predicate_expression> )
}

<predicate_leaf>::=
{
<predicate_source_declaration> { = | <> | != | > | >= | < | <= } <value>
| [event_module_guid].event_package_name.predicate_compare_name ( <predicate_source_declaration>, <value>
)
}

<predicate_source_declaration>::=
{
event_field_name | ( [event_module_guid].event_package_name.predicate_source_name )
}
<value>::=
{
number | 'string'
}

<add_drop_event_target>::=
{
ADD TARGET <event_target_specifier>
[ ( SET { target_parameter_name = <value> [ ,...n] } ) ]
| DROP TARGET <event_target_specifier>
}

<event_target_specifier>::=
{
[event_module_guid].event_package_name.target_name
}

<event_session_options>::=
{
[ MAX_MEMORY = size [ KB | MB] ]
[ [,] EVENT_RETENTION_MODE = { ALLOW_SINGLE_EVENT_LOSS | ALLOW_MULTIPLE_EVENT_LOSS | NO_EVENT_LOSS } ]
[ [,] MAX_DISPATCH_LATENCY = { seconds SECONDS | INFINITE } ]
[ [,] MAX_EVENT_SIZE = size [ KB | MB ] ]
[ [,] MEMORY_PARTITION_MODE = { NONE | PER_NODE | PER_CPU } ]
[ [,] TRACK_CAUSALITY = { ON | OFF } ]
[ [,] STARTUP_STATE = { ON | OFF } ]
}

Arguments

Term Definition

event_session_name Is the name of an existing event session.

STATE = START | STOP Starts or stops the event session. This argument is only valid
when ALTER EVENT SESSION is applied to an event session
object.

ADD EVENT <event_specifier> Associates the event identified by <event_specifier> with the event
session.

[event_module_guid].event_package_name.event_name Is the name of an event in an event package, where:

- event_module_guid is the GUID for the module that contains the event.
- event_package_name is the package that contains the event object.
- event_name is the event object.

Events appear in the sys.dm_xe_objects view as object_type 'event'.

SET { event_customizable_attribute = <value> [ ,...n] } Specifies customizable attributes for the event.
Customizable attributes appear in the sys.dm_xe_object_columns view as column_type 'customizable' and
object_name = event_name.
ACTION ( { [event_module_guid].event_package_name.action_name [ ,...n] } ) Is the action to associate
with the event session, where:

- event_module_guid is the GUID for the module that contains the event.
- event_package_name is the package that contains the action object.
- action_name is the action object.

Actions appear in the sys.dm_xe_objects view as object_type 'action'.

WHERE <predicate_expression> Specifies the predicate expression used to determine if an event should be
processed. If <predicate_expression> is true, the event is processed further by the actions and targets for
the session. If <predicate_expression> is false, the event is dropped by the session before being processed
by the actions and targets for the session. Predicate expressions are limited to 3000 characters, which limits
string arguments.

event_field_name Is the name of the event field that identifies the predicate
source.

[event_module_guid].event_package_name.predicate_source_name Is the name of the global predicate
source, where:

- event_module_guid is the GUID for the module that contains the event.
- event_package_name is the package that contains the predicate object.
- predicate_source_name is defined in the sys.dm_xe_objects view as object_type 'pred_source'.

[event_module_guid].event_package_name.predicate_compare_name Is the name of the predicate object to
associate with the event, where:

- event_module_guid is the GUID for the module that contains the event.
- event_package_name is the package that contains the predicate object.
- predicate_compare_name is a global source defined in the sys.dm_xe_objects view as object_type
'pred_compare'.

DROP EVENT <event_specifier> Drops the event identified by <event_specifier>. <event_specifier> must
be valid in the event session.

ADD TARGET <event_target_specifier> Associates the target identified by <event_target_specifier> with
the event session.

[event_module_guid].event_package_name.target_name Is the name of a target in the event session, where:

- event_module_guid is the GUID for the module that contains the event.
- event_package_name is the package that contains the target object.
- target_name is the target object. Targets appear in the sys.dm_xe_objects view as object_type 'target'.
SET { target_parameter_name = <value> [, ...n] } Sets a target parameter. Target parameters appear in the
sys.dm_xe_object_columns view as column_type 'customizable' and object_name = target_name.

NOTE
If you are using the ring buffer target, we recommend that you set the max_memory target parameter to
2048 kilobytes (KB) to help avoid possible data truncation of the XML output. For more information about
when to use the different target types, see SQL Server Extended Events Targets.

DROP TARGET <event_target_specifier> Drops the target identified by <event_target_specifier>.
<event_target_specifier> must be valid in the event session.

EVENT_RETENTION_MODE = { ALLOW_SINGLE_EVENT_LOSS Specifies the event retention mode to use for handling event
| ALLOW_MULTIPLE_EVENT_LOSS | NO_EVENT_LOSS } loss.

ALLOW_SINGLE_EVENT_LOSS
An event can be lost from the session. A single event is only
dropped when all the event buffers are full. Losing a single
event when event buffers are full allows for acceptable SQL
Server performance characteristics, while minimizing the loss
of data in the processed event stream.

ALLOW_MULTIPLE_EVENT_LOSS
Full event buffers containing multiple events can be lost from
the session. The number of events lost is dependent upon the
memory size allocated to the session, the partitioning of the
memory, and the size of the events in the buffer. This option
minimizes performance impact on the server when event
buffers are quickly filled, but large numbers of events can be
lost from the session.

NO_EVENT_LOSS
No event loss is allowed. This option ensures that all events
raised will be retained. Using this option forces all tasks that
fire events to wait until space is available in an event buffer.
This may cause detectable performance issues while the event
session is active. User connections may stall while waiting for
events to be flushed from the buffer.

MAX_DISPATCH_LATENCY = { seconds SECONDS | INFINITE } Specifies the amount of time that events are buffered in
memory before being dispatched to event session targets. The
minimum latency value is 1 second. However, 0 can be used
to specify INFINITE latency. By default, this value is set to 30
seconds.

seconds SECONDS
The time, in seconds, to wait before starting to flush buffers to
targets. seconds is a whole number.

INFINITE
Flush buffers to targets only when the buffers are full, or when
the event session closes.

NOTE
MAX_DISPATCH_LATENCY = 0 SECONDS is equivalent to MAX_DISPATCH_LATENCY = INFINITE.
MAX_EVENT_SIZE =size [ KB | MB ] Specifies the maximum allowable size for events.
MAX_EVENT_SIZE should only be set to allow single events
larger than MAX_MEMORY; setting it to less than
MAX_MEMORY will raise an error. size is a whole number and
can be a kilobyte (KB) or a megabyte (MB) value. If size is
specified in kilobytes, the minimum allowable size is 64 KB.
When MAX_EVENT_SIZE is set, two buffers of size are created
in addition to MAX_MEMORY. This means that the total
memory used for event buffering is MAX_MEMORY + 2 *
MAX_EVENT_SIZE.

MEMORY_PARTITION_MODE = { NONE | PER_NODE | Specifies the location where event buffers are created.
PER_CPU }
NONE
A single set of buffers is created within the SQL Server
instance.

PER_NODE
A set of buffers is created for each NUMA node.

PER_CPU
A set of buffers is created for each CPU.

TRACK_CAUSALITY = { ON | OFF } Specifies whether or not causality is tracked. If enabled, causality allows
related events on different server connections to be correlated together.

STARTUP_STATE = { ON | OFF } Specifies whether or not to start this event session automatically when SQL
Server starts.

If STARTUP_STATE = ON, the event session will only start if SQL Server is stopped and then restarted.

ON = Event session is started at startup.

OFF = Event session is NOT started at startup.

Remarks
The ADD and DROP arguments cannot be used in the same statement.

Permissions
Requires the ALTER ANY EVENT SESSION permission.

Examples
The following example starts an event session, obtains some live session statistics, and then adds two events to the
existing session.
-- Start the event session
ALTER EVENT SESSION test_session ON SERVER
STATE = start;
GO

-- Obtain live session statistics
SELECT * FROM sys.dm_xe_sessions;
SELECT * FROM sys.dm_xe_session_events;
GO

-- Add new events to the session
ALTER EVENT SESSION test_session ON SERVER
ADD EVENT sqlserver.database_transaction_begin,
ADD EVENT sqlserver.database_transaction_end;
GO
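
The same session can later be stopped and, while stopped, have a session option changed (continuing with the hypothetical test_session):

-- Stop the event session
ALTER EVENT SESSION test_session ON SERVER
STATE = stop;
GO

-- Change a session option
ALTER EVENT SESSION test_session ON SERVER
WITH (MAX_DISPATCH_LATENCY = 10 SECONDS);
GO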

See Also
CREATE EVENT SESSION (Transact-SQL)
DROP EVENT SESSION (Transact-SQL)
SQL Server Extended Events Targets
sys.server_event_sessions (Transact-SQL)
sys.dm_xe_objects (Transact-SQL)
sys.dm_xe_object_columns (Transact-SQL)
ALTER EXTERNAL DATA SOURCE (Transact-SQL)
11/27/2018 • 2 minutes to read

APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Modifies an external data source used to create an external table. The external data source can be Hadoop or
Azure blob storage (WASB).

Syntax
-- Modify an external data source
-- Applies to: SQL Server (2016 or later)
ALTER EXTERNAL DATA SOURCE data_source_name SET
{
LOCATION = 'server_name_or_IP' [,] |
RESOURCE_MANAGER_LOCATION = <'IP address;Port'> [,] |
CREDENTIAL = credential_name
}
[;]

-- Modify an external data source pointing to Azure Blob storage
-- Applies to: SQL Server (starting with 2017)
ALTER EXTERNAL DATA SOURCE data_source_name
WITH (
TYPE = BLOB_STORAGE,
LOCATION = 'https://storage_account_name.blob.core.windows.net'
[, CREDENTIAL = credential_name ]
)

Arguments
data_source_name Specifies the user-defined name for the data source. The name must be unique.
LOCATION = 'server_name_or_IP' Specifies the name of the server or an IP address.
RESOURCE_MANAGER_LOCATION = '<IP address;Port>' Specifies the Hadoop Resource Manager location.
When specified, the query optimizer might choose to pre-process data for a PolyBase query by using Hadoop's
computation capabilities. This is a cost-based decision. Called predicate pushdown, this can significantly reduce the
volume of data transferred between Hadoop and SQL, and therefore improve query performance.
CREDENTIAL = credential_name Specifies the named credential. See CREATE DATABASE SCOPED
CREDENTIAL (Transact-SQL).
TYPE = BLOB_STORAGE
Applies to: SQL Server 2017 (14.x). For bulk operations only, LOCATION must be a valid URL to Azure Blob
storage. Do not put /, a file name, or shared access signature parameters at the end of the LOCATION URL. The
credential used must be created using SHARED ACCESS SIGNATURE as the identity. For more information on
shared access signatures, see Using Shared Access Signatures (SAS).
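For example, a bulk-operations data source could be pointed at a different storage account as follows (the data source, storage account, and credential names are hypothetical):

ALTER EXTERNAL DATA SOURCE my_blob_eds
WITH (
    TYPE = BLOB_STORAGE,
    LOCATION = 'https://mystorageaccount.blob.core.windows.net',
    CREDENTIAL = my_sas_credential
);
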

Remarks
Only a single source can be modified at a time. Concurrent requests to modify the same source cause one statement
to wait. However, different sources can be modified at the same time. This statement can run concurrently with
other statements.

Permissions
Requires ALTER ANY EXTERNAL DATA SOURCE permission.

IMPORTANT
The ALTER ANY EXTERNAL DATA SOURCE permission grants any principal the ability to create and modify any external data
source object, and therefore, it also grants the ability to access all database scoped credentials on the database. This
permission must be considered as highly privileged, and therefore must be granted only to trusted principals in the system.

Examples
The following example alters the location and resource manager location of an existing data source.

ALTER EXTERNAL DATA SOURCE hadoop_eds SET
LOCATION = 'hdfs://10.10.10.10:8020',
RESOURCE_MANAGER_LOCATION = '10.10.10.10:8032'
;

The following example alters the credential to connect to an existing data source.

ALTER EXTERNAL DATA SOURCE hadoop_eds SET
CREDENTIAL = new_hadoop_user
;
ALTER EXTERNAL LIBRARY (Transact-SQL)
10/1/2018 • 2 minutes to read

APPLIES TO: SQL Server (starting with 2017) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Modifies the content of an existing external package library.

Syntax
ALTER EXTERNAL LIBRARY library_name
[ AUTHORIZATION owner_name ]
SET <file_spec>
WITH ( LANGUAGE = 'R' )
[ ; ]

<file_spec> ::=
{
(CONTENT = { <client_library_specifier> | <library_bits> | NONE }
[, PLATFORM = WINDOWS ] )
}

<client_library_specifier> :: =
'[\\computer_name\]share_name\[path\]manifest_file_name'
| '[local_path\]manifest_file_name'
| '<relative_path_in_external_data_source>'

<library_bits> :: =
{ varbinary_literal | varbinary_expression }

Arguments
library_name
Specifies the name of an existing package library. Libraries are scoped to the user. Library names must be
unique within the context of a specific user or owner.
The library name cannot be arbitrarily assigned. That is, you must use the name that the calling runtime expects
when it loads the package.
owner_name
Specifies the name of the user or role that owns the external library.
file_spec
Specifies the content of the package for a specific platform. Only one file artifact per platform is supported.
The file can be specified in the form of a local path or network path. If the data source option is specified, the file
name can be a relative path with respect to the container referenced in the EXTERNAL DATA SOURCE .
Optionally, an OS platform for the file can be specified. Only one file artifact or content is permitted for each OS
platform for a specific language or runtime.
library_bits
Specifies the content of the package as a hex literal, similar to assemblies.
This option is useful if you have the required permission to alter a library, but file access on the server is restricted
and you cannot save the contents to a path the server can access.
Instead, you can pass the package contents as a variable in binary format.
PLATFORM = WINDOWS
Specifies the platform for the content of the library. This value is required when modifying an existing library to
add a different platform. Windows is the only supported platform.

Remarks
For the R language, packages must be prepared in the form of zipped archive files with the .ZIP extension for
Windows. Currently, only the Windows platform is supported.
The ALTER EXTERNAL LIBRARY statement only uploads the library bits to the database. The modified library is
installed when a user runs code in sp_execute_external_script (Transact-SQL ) that calls the library.

Permissions
By default, the dbo user or any member of the role db_owner has permission to run ALTER EXTERNAL
LIBRARY. Additionally, the user who created the external library can alter that external library.

Examples
The following examples change an external library called customPackage .
A. Replace the contents of a library using a file
The following example modifies an external library called customPackage , using a zipped file containing the
updated bits.

ALTER EXTERNAL LIBRARY customPackage
SET
(CONTENT = 'C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\customPackage.zip')
WITH (LANGUAGE = 'R');

To install the updated library, execute the stored procedure sp_execute_external_script .

EXEC sp_execute_external_script
@language =N'R',
@script=N'library(customPackage)'
;

B. Alter an existing library using a byte stream

The following example alters the existing library by passing the new bits as a hexadecimal literal.

ALTER EXTERNAL LIBRARY customLibrary
SET (CONTENT = 0xabc123) WITH (LANGUAGE = 'R');

NOTE
This code sample only demonstrates the syntax; the binary value in CONTENT = has been truncated for readability and does
not create a working library. The actual contents of the binary variable would be much longer.
See also
CREATE EXTERNAL LIBRARY (Transact-SQL)
DROP EXTERNAL LIBRARY (Transact-SQL)
sys.external_library_files
sys.external_libraries
ALTER EXTERNAL RESOURCE POOL (Transact-SQL)
10/1/2018 • 2 minutes to read

APPLIES TO: SQL Server (starting with 2016) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Applies to: SQL Server 2016 (13.x) R Services (In-Database) and SQL Server 2017 (14.x) Machine Learning
Services (In-Database)
Changes a Resource Governor external pool that specifies resources that can be used by external processes.
For R Services (In-Database) in SQL Server 2016 (13.x), the external pool governs rterm.exe ,
BxlServer.exe , and other processes spawned by them.

For Machine Learning Services (In-Database) in SQL Server 2017, the external pool governs the R
processes listed for the previous version, as well as python.exe , BxlServer.exe , and other processes
spawned by them.
Transact-SQL Syntax Conventions.

Syntax
ALTER EXTERNAL RESOURCE POOL { pool_name | "default" }
[ WITH (
[ MAX_CPU_PERCENT = value ]
[ [ , ] AFFINITY CPU =
{
AUTO
| ( <cpu_range_spec> )
| NUMANODE = ( <NUMA_node_id> )
} ]
[ [ , ] MAX_MEMORY_PERCENT = value ]
[ [ , ] MAX_PROCESSES = value ]
)
]
[ ; ]

<CPU_range_spec> ::=
{ CPU_ID | CPU_ID TO CPU_ID } [ ,...n ]

Arguments
{ pool_name | "default" }
Is the name of an existing user-defined external resource pool or the default external resource pool that is created
when SQL Server is installed. "default" must be enclosed by quotation marks ("") or brackets ([]) when used with
ALTER EXTERNAL RESOURCE POOL to avoid conflict with DEFAULT , which is a system reserved word.

MAX_CPU_PERCENT =value
Specifies the maximum average CPU bandwidth that all requests in the external resource pool can receive when
there is CPU contention. value is an integer with a default setting of 100. The allowed range for value is from 1
through 100.
AFFINITY {CPU = AUTO | ( <CPU_range_spec> ) | NUMANODE = (<NUMA_node_range_spec>)}
Attach the external resource pool to specific CPUs. The default value is AUTO.
AFFINITY CPU = ( <CPU_range_spec> ) maps the external resource pool to the SQL Server CPUs identified by
the given CPU_IDs. When you use AFFINITY NUMANODE = ( <NUMA_node_range_spec> ), the external
resource pool is affinitized to the SQL Server physical CPUs that correspond to the given NUMA node or range of
nodes.
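For example, the following statement (using a hypothetical user-defined pool named ep_1 ) affinitizes the pool to a range of CPUs; this is a sketch of the CPU_range_spec form, not a required configuration:

```sql
-- Hypothetical pool name ep_1; pin the external pool to CPUs 0 through 3.
ALTER EXTERNAL RESOURCE POOL ep_1
WITH ( AFFINITY CPU = (0 TO 3) );
GO
-- The change takes effect only after Resource Governor is reconfigured.
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
```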
MAX_MEMORY_PERCENT =value
Specifies the total server memory that can be used by requests in this external resource pool. value is an integer
with a default setting of 100. The allowed range for value is from 1 through 100.
MAX_PROCESSES =value
Specifies the maximum number of processes allowed for the external resource pool. Specify 0 to set an unlimited
threshold for the pool, which is thereafter bound only by computer resources. The default is 0.

Remarks
The Database Engine implements the resource pool when you execute the ALTER RESOURCE GOVERNOR
RECONFIGURE statement.
For general information about resource pools, see Resource Governor Resource Pool,
sys.resource_governor_external_resource_pools (Transact-SQL), and
sys.dm_resource_governor_external_resource_pool_affinity (Transact-SQL).
For information specific to the use of external resource pools to govern machine learning jobs, see Resource
governance for machine learning in SQL Server.

Permissions
Requires CONTROL SERVER permission.

Examples
The following statement changes an external pool, restricting the CPU usage to 50 percent and the maximum
memory to 25 percent of the available memory on the computer.

ALTER EXTERNAL RESOURCE POOL ep_1
WITH (
    MAX_CPU_PERCENT = 50
    , AFFINITY CPU = AUTO
    , MAX_MEMORY_PERCENT = 25
);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

See also
Resource governance for machine learning in SQL Server
external scripts enabled Server Configuration Option
CREATE EXTERNAL RESOURCE POOL (Transact-SQL)
DROP EXTERNAL RESOURCE POOL (Transact-SQL)
ALTER RESOURCE POOL (Transact-SQL)
CREATE WORKLOAD GROUP (Transact-SQL)
Resource Governor Resource Pool
ALTER RESOURCE GOVERNOR (Transact-SQL)
ALTER FULLTEXT CATALOG (Transact-SQL)
10/1/2018

APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Changes the properties of a full-text catalog.
Transact-SQL Syntax Conventions

Syntax
ALTER FULLTEXT CATALOG catalog_name
{ REBUILD [ WITH ACCENT_SENSITIVITY = { ON | OFF } ]
| REORGANIZE
| AS DEFAULT
}

Arguments
catalog_name
Specifies the name of the catalog to be modified. If a catalog with the specified name does not exist, Microsoft SQL
Server returns an error and does not perform the ALTER operation.
REBUILD
Tells SQL Server to rebuild the entire catalog. When a catalog is rebuilt, the existing catalog is deleted and a new
catalog is created in its place. All the tables that have full-text indexing references are associated with the new
catalog. Rebuilding resets the full-text metadata in the database system tables.
WITH ACCENT_SENSITIVITY = {ON|OFF }
Specifies if the catalog to be altered is accent-sensitive or accent-insensitive for full-text indexing and querying.
To determine the current accent-sensitivity property setting of a full-text catalog, use the
FULLTEXTCATALOGPROPERTY function with the accentsensitivity property value against catalog_name. If the
function returns '1', the full-text catalog is accent sensitive; if the function returns '0', the catalog is not accent
sensitive.
The catalog and database default accent sensitivity are the same.
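For instance, the following query (against the ftCatalog catalog used in the example later in this topic) returns the current setting:

```sql
-- Returns 1 if ftCatalog is accent sensitive, 0 if it is not.
SELECT FULLTEXTCATALOGPROPERTY('ftCatalog', 'accentsensitivity');
```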
REORGANIZE
Tells SQL Server to perform a master merge, which involves merging the smaller indexes created in the process of
indexing into one large index. Merging the full-text index fragments can improve performance and free up disk and
memory resources. If there are frequent changes to the full-text catalog, use this command periodically to
reorganize the full-text catalog.
REORGANIZE also optimizes internal index and catalog structures.
Keep in mind that, depending on the amount of indexed data, a master merge may take some time to complete.
Master merging a large amount of data can create a long running transaction, delaying truncation of the
transaction log during checkpoint. In this case, the transaction log might grow significantly under the full recovery
model. As a best practice, ensure that your transaction log contains sufficient space for a long-running transaction
before reorganizing a large full-text index in a database that uses the full recovery model. For more information,
see Manage the Size of the Transaction Log File.
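A reorganization is a single statement; this sketch uses the hypothetical catalog name ftCatalog :

```sql
-- Perform a master merge: combine the smaller index fragments of the
-- catalog into one larger index and optimize internal structures.
ALTER FULLTEXT CATALOG ftCatalog REORGANIZE;
```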
AS DEFAULT
Specifies that this catalog is the default catalog. When full-text indexes are created with no specified catalogs, the
default catalog is used. If there is an existing default full-text catalog, setting this catalog AS DEFAULT will override
the existing default.
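As a sketch, again using the hypothetical catalog name ftCatalog :

```sql
-- Make ftCatalog the default catalog for new full-text indexes that do
-- not specify a catalog explicitly.
ALTER FULLTEXT CATALOG ftCatalog AS DEFAULT;
```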

Permissions
User must have ALTER permission on the full-text catalog, or be a member of the db_owner, db_ddladmin fixed
database roles, or sysadmin fixed server role.

NOTE
To use ALTER FULLTEXT CATALOG AS DEFAULT, the user must have ALTER permission on the full-text catalog and CREATE
FULLTEXT CATALOG permission on the database.

Examples
The following example changes the accentsensitivity property of the default full-text catalog ftCatalog , which is
accent sensitive.

--Change to accent insensitive
USE AdventureWorks2012;
GO
ALTER FULLTEXT CATALOG ftCatalog
REBUILD WITH ACCENT_SENSITIVITY=OFF;
GO
-- Check Accentsensitivity
SELECT FULLTEXTCATALOGPROPERTY('ftCatalog', 'accentsensitivity');
GO
--Returned 0, which means the catalog is not accent sensitive.

See Also
sys.fulltext_catalogs (Transact-SQL)
CREATE FULLTEXT CATALOG (Transact-SQL)
DROP FULLTEXT CATALOG (Transact-SQL)
Full-Text Search
ALTER FULLTEXT INDEX (Transact-SQL)
11/28/2018

APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Changes the properties of a full-text index in SQL Server.
Transact-SQL Syntax Conventions

Syntax
ALTER FULLTEXT INDEX ON table_name
{ ENABLE
| DISABLE
| SET CHANGE_TRACKING [ = ] { MANUAL | AUTO | OFF }
| ADD ( column_name
[ TYPE COLUMN type_column_name ]
[ LANGUAGE language_term ]
[ STATISTICAL_SEMANTICS ]
[,...n]
)
[ WITH NO POPULATION ]
| ALTER COLUMN column_name
{ ADD | DROP } STATISTICAL_SEMANTICS
[ WITH NO POPULATION ]
| DROP ( column_name [,...n] )
[ WITH NO POPULATION ]
| START { FULL | INCREMENTAL | UPDATE } POPULATION
| {STOP | PAUSE | RESUME } POPULATION
| SET STOPLIST [ = ] { OFF| SYSTEM | stoplist_name }
[ WITH NO POPULATION ]
| SET SEARCH PROPERTY LIST [ = ] { OFF | property_list_name }
[ WITH NO POPULATION ]
}
[;]

Arguments
table_name
Is the name of the table or indexed view that contains the column or columns included in the full-text index.
Specifying database and table owner names is optional.
ENABLE | DISABLE
Tells SQL Server whether to gather full-text index data for table_name. ENABLE activates the full-text index;
DISABLE turns off the full-text index. The table will not support full-text queries while the index is disabled.
Disabling a full-text index allows you to turn off change tracking but keep the full-text index, which you can
reactivate at any time using ENABLE. When the full-text index is disabled, the full-text index metadata remains in
the system tables. If CHANGE_TRACKING is in the enabled state (automatic or manual update) when the full-text
index is disabled, the state of the index freezes, any ongoing crawl stops, and new changes to the table data are
not tracked or propagated to the index.
SET CHANGE_TRACKING {MANUAL | AUTO | OFF }
Specifies whether changes (updates, deletes, or inserts) made to table columns that are covered by the full-text
index will be propagated by SQL Server to the full-text index. Data changes through WRITETEXT and
UPDATETEXT are not reflected in the full-text index, and are not picked up with change tracking.

NOTE
For information about the interaction of change tracking and WITH NO POPULATION, see "Remarks," later in this topic.

MANUAL
Specifies that the tracked changes will be propagated manually by calling the ALTER FULLTEXT INDEX ... START
UPDATE POPULATION Transact-SQL statement (manual population). You can use SQL Server Agent to call this
Transact-SQL statement periodically.
AUTO
Specifies that the tracked changes will be propagated automatically as data is modified in the base table
(automatic population). Although changes are propagated automatically, these changes might not be reflected
immediately in the full-text index. AUTO is the default.
OFF
Specifies that SQL Server will not keep a list of changes to the indexed data.
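As a sketch of the AUTO form (Example A later in this topic shows MANUAL), using the same AdventureWorks table as the examples:

```sql
-- Resume automatic propagation of data changes to the full-text index.
ALTER FULLTEXT INDEX ON HumanResources.JobCandidate
SET CHANGE_TRACKING AUTO;
```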
ADD | DROP column_name
Specifies the columns to be added or deleted from a full-text index. The column or columns must be of type char,
varchar, nchar, nvarchar, text, ntext, image, xml, varbinary, or varbinary(max).
Use the DROP clause only on columns that have been enabled previously for full-text indexing.
Use TYPE COLUMN and LANGUAGE with the ADD clause to set these properties on the column_name. When a
column is added, the full-text index on the table must be repopulated in order for full-text queries against this
column to work.

NOTE
Whether the full-text index is populated after a column is added or dropped from a full-text index depends on whether
change-tracking is enabled and whether WITH NO POPULATION is specified. For more information, see "Remarks," later in
this topic.

TYPE COLUMN type_column_name


Specifies the name of a table column, type_column_name, that is used to hold the document type for a varbinary,
varbinary(max), or image document. This column, known as the type column, contains a user-supplied file
extension (.doc, .pdf, .xls, and so forth). The type column must be of type char, nchar, varchar, or nvarchar.
Specify TYPE COLUMN type_column_name only if column_name specifies a varbinary, varbinary(max) or
image column, in which data is stored as binary data; otherwise, SQL Server returns an error.

NOTE
At indexing time, the Full-Text Engine uses the abbreviation in the type column of each table row to identify which full-text
search filter to use for the document in column_name. The filter loads the document as a binary stream, removes the
formatting information, and sends the text from the document to the word-breaker component. For more information, see
Configure and Manage Filters for Search.

LANGUAGE language_term
Is the language of the data stored in column_name.
language_term is optional and can be specified as a string, integer, or hexadecimal value corresponding to the
locale identifier (LCID ) of a language. If language_term is specified, the language it represents will be applied to
all elements of the search condition. If no value is specified, the default full-text language of the SQL Server
instance is used.
Use the sp_configure stored procedure to access information about the default full-text language of the SQL
Server instance.
When specified as a string, language_term corresponds to the alias column value in the syslanguages system
table. The string must be enclosed in single quotation marks, as in 'language_term'. When specified as an integer,
language_term is the actual LCID that identifies the language. When specified as a hexadecimal value,
language_term is 0x followed by the hex value of the LCID. The hex value must not exceed eight digits, including
leading zeros.
If the value is in double-byte character set (DBCS ) format, SQL Server will convert it to Unicode.
Resources, such as word breakers and stemmers, must be enabled for the language specified as language_term. If
such resources do not support the specified language, SQL Server returns an error.
For non-BLOB and non-XML columns containing text data in multiple languages, or for cases when the language
of the text stored in the column is unknown, use the neutral (0x0) language resource. For documents stored in
XML - or BLOB -type columns, the language encoding within the document will be used at indexing time. For
example, in XML columns, the xml:lang attribute in XML documents will identify the language. At query time, the
value previously specified in language_term becomes the default language used for full-text queries unless
language_term is specified as part of a full-text query.
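Putting TYPE COLUMN and LANGUAGE together, the following sketch adds a binary document column to an existing full-text index; the table and column names ( dbo.Documents , DocContent , DocExtension ) are hypothetical:

```sql
-- DocContent is a varbinary(max) column holding document data;
-- DocExtension holds the file extension ('.docx', '.pdf', and so forth).
-- 1033 is the LCID for English.
ALTER FULLTEXT INDEX ON dbo.Documents
ADD ( DocContent TYPE COLUMN DocExtension LANGUAGE 1033 );
```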
STATISTICAL_SEMANTICS
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Creates the additional key phrase and document similarity indexes that are part of statistical semantic indexing.
For more information, see Semantic Search (SQL Server).
[ ,...n]
Indicates that multiple columns may be specified for the ADD, ALTER, or DROP clauses. When multiple columns
are specified, separate these columns with commas.
WITH NO POPULATION
Specifies that the full-text index will not be populated after an ADD or DROP column operation or a SET
STOPLIST operation. The index will only be populated if the user executes a START...POPULATION command.
When NO POPULATION is specified, SQL Server does not populate an index. The index is populated only after
the user gives an ALTER FULLTEXT INDEX...START POPULATION command. When NO POPULATION is not
specified, SQL Server populates the index.
If CHANGE_TRACKING is enabled and WITH NO POPULATION is specified, SQL Server returns an error. If
CHANGE_TRACKING is enabled and WITH NO POPULATION is not specified, SQL Server performs a full
population on the index.

NOTE
For more information about the interaction of change tracking and WITH NO POPULATION, see "Remarks," later in this
topic.

{ADD | DROP } STATISTICAL_SEMANTICS


Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Enables or disables statistical semantic indexing for the specified columns. For more information, see Semantic
Search (SQL Server).
START {FULL|INCREMENTAL|UPDATE } POPULATION
Tells SQL Server to begin population of the full-text index of table_name. If a full-text index population is already
in progress, SQL Server returns a warning and does not start a new population.
FULL
Specifies that every row of the table be retrieved for full-text indexing even if the rows have already been indexed.
INCREMENTAL
Specifies that only the modified rows since the last population be retrieved for full-text indexing. INCREMENTAL
can be applied only if the table has a column of the type timestamp. If a table in the full-text catalog does not
contain a column of the type timestamp, the table undergoes a FULL population.
UPDATE
Specifies the processing of all insertions, updates, or deletions since the last time the change-tracking index was
updated. Change-tracking population must be enabled on a table, but the background update index or the auto
change tracking should not be turned on.
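For instance, an update population (processing tracked changes since the last population) is issued as follows, using the same table as the examples later in this topic:

```sql
-- Process insertions, updates, and deletions tracked since the last
-- population. Requires change tracking to be enabled on the table.
ALTER FULLTEXT INDEX ON HumanResources.JobCandidate
START UPDATE POPULATION;
```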
{STOP | PAUSE | RESUME } POPULATION
Stops, or pauses any population in progress; or stops or resumes any paused population.
STOP POPULATION does not stop auto change tracking or background update index. To stop change tracking,
use SET CHANGE_TRACKING OFF.
PAUSE POPULATION and RESUME POPULATION can only be used for full populations. They are not relevant
to other population types because the other populations resume crawls from where the crawl stopped.
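A pause/resume cycle for a full population looks like this sketch:

```sql
-- Pause a full population that is in progress...
ALTER FULLTEXT INDEX ON HumanResources.JobCandidate PAUSE POPULATION;
GO
-- ...and later pick it up where it left off.
ALTER FULLTEXT INDEX ON HumanResources.JobCandidate RESUME POPULATION;
GO
```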
SET STOPLIST { OFF| SYSTEM | stoplist_name }
Changes the full-text stoplist that is associated with the index, if any.
OFF
Specifies that no stoplist be associated with the full-text index.
SYSTEM
Specifies that the default full-text system STOPLIST should be used for this full-text index.
stoplist_name
Specifies the name of the stoplist to be associated with the full-text index.
For more information, see Configure and Manage Stopwords and Stoplists for Full-Text Search.
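For example, the following sketch associates a hypothetical stoplist named myStoplist with the index and defers the repopulation:

```sql
-- Associate a user-defined stoplist; WITH NO POPULATION defers the
-- repopulation until a START ... POPULATION command is issued.
ALTER FULLTEXT INDEX ON HumanResources.JobCandidate
SET STOPLIST myStoplist WITH NO POPULATION;
```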
SET SEARCH PROPERTY LIST { OFF | property_list_name } [ WITH NO POPULATION ]
Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
Changes the search property list that is associated with the index, if any.
OFF
Specifies that no property list be associated with the full-text index. When you turn off the search property list of a
full-text index (ALTER FULLTEXT INDEX ... SET SEARCH PROPERTY LIST OFF ), property searching on the base
table is no longer possible.
By default, when you turn off an existing search property list, the full-text index automatically repopulates. If you
specify WITH NO POPULATION when you turn off the search property list, automatic repopulation does not
occur. However, we recommend that you eventually run a full population on this full-text index at your
convenience. Repopulating the full-text index removes the property-specific metadata of each dropped search
property, making the full-text index smaller and more efficient.
property_list_name
Specifies the name of the search property list to be associated with the full-text index.
Adding a search property list to a full-text index requires repopulating the index to index the search properties
that are registered for the associated search property list. If you specify WITH NO POPULATION when adding
the search property list, you will need to run a population on the index, at an appropriate time.

IMPORTANT
If the full-text index was previously associated with a different search property list, the index must be rebuilt in
order to bring the index into a consistent state. The index is truncated immediately and is empty until the full
population runs. For more information about when changing the search property list causes rebuilding, see
"Remarks," later in this topic.

NOTE
You can associate a given search property list with more than one full-text index in the same database.

To find the search property lists on the current database, query
sys.registered_search_property_lists.
For more information about search property lists, see Search Document Properties with Search Property
Lists.

Interactions of Change Tracking and NO POPULATION Parameter


Whether the full-text index is populated depends on whether change-tracking is enabled and whether WITH NO
POPULATION is specified in the ALTER FULLTEXT INDEX statement. The following table summarizes the result
of their interaction.

CHANGE TRACKING    WITH NO POPULATION    RESULT

Not Enabled        Not specified         A full population is performed on the index.

Not Enabled        Specified             No population of the index occurs until an ALTER FULLTEXT
                                         INDEX...START POPULATION statement is issued.

Enabled            Specified             An error is raised, and the index is not altered.

Enabled            Not specified         A full population is performed on the index.
For more information about populating full-text indexes, see Populate Full-Text Indexes.
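As an illustration of the deferred path (change tracking not enabled, WITH NO POPULATION specified), the table and column names here are hypothetical:

```sql
-- Drop a column from the full-text index without repopulating...
ALTER FULLTEXT INDEX ON dbo.Documents
DROP (LegacyColumn) WITH NO POPULATION;
GO
-- ...then run the population explicitly at a convenient time.
ALTER FULLTEXT INDEX ON dbo.Documents START FULL POPULATION;
GO
```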

Changing the Search Property List Causes Rebuilding the Index


The first time that a full-text index is associated with a search property list, the index must be repopulated to index
property-specific search terms. The existing index data is not truncated.
However, if you associate the full-text index with a different property list, the index is rebuilt. Rebuilding
immediately truncates the full-text index, removing all existing data, and the index must be repopulated. While the
population progresses, full-text queries on the base table search only on the table rows that have already been
indexed by the population. The repopulated index data will include metadata from the registered properties of the
newly added search property list.
Scenarios that cause rebuilding include:
Switching directly to a different search property list (see "Scenario A," later in this section).
Turning off the search property list and later associating the index with any search property list (see
"Scenario B," later in this section)

NOTE
For more information about how full-text search works with search property lists, see Search Document Properties with
Search Property Lists. For information about full populations, see Populate Full-Text Indexes.

Scenario A: Switching Directly to a Different Search Property List


1. A full-text index is created on table_1 with a search property list spl_1 :

CREATE FULLTEXT INDEX ON table_1 (column_name) KEY INDEX unique_key_index
WITH SEARCH PROPERTY LIST=spl_1,
CHANGE_TRACKING OFF, NO POPULATION;

2. A full population is run on the full-text index:

ALTER FULLTEXT INDEX ON table_1 START FULL POPULATION;

3. The full-text index is later associated with a different search property list, spl_2 , using the following statement:

ALTER FULLTEXT INDEX ON table_1 SET SEARCH PROPERTY LIST spl_2;

This statement causes a full population, the default behavior. However, before beginning this population,
the Full-Text Engine automatically truncates the index.
Scenario B: Turning Off the Search Property List and Later Associating the Index with Any Search Property List
1. A full-text index is created on table_1 with a search property list spl_1 , followed by an automatic full
population (the default behavior):

CREATE FULLTEXT INDEX ON table_1 (column_name) KEY INDEX unique_key_index
WITH SEARCH PROPERTY LIST=spl_1;

2. The search property list is turned off, as follows:

ALTER FULLTEXT INDEX ON table_1
SET SEARCH PROPERTY LIST OFF WITH NO POPULATION;

3. The full-text index is once more associated with either the same search property list or a different one.
For example, the following statement re-associates the full-text index with the original search property list,
spl_1 :

ALTER FULLTEXT INDEX ON table_1 SET SEARCH PROPERTY LIST spl_1;

This statement starts a full population, the default behavior.


NOTE
The rebuild would also be required for a different search property list, such as spl_2 .

Permissions
The user must have ALTER permission on the table or indexed view, or be a member of the sysadmin fixed
server role, or the db_ddladmin or db_owner fixed database roles.
If SET STOPLIST is specified, the user must have REFERENCES permission on the stoplist. If SET SEARCH
PROPERTY LIST is specified, the user must have REFERENCES permission on the search property list. The
owner of the specified stoplist or search property list can grant REFERENCES permission, if the owner has ALTER
FULLTEXT CATALOG permissions.

NOTE
The public is granted REFERENCES permission to the default stoplist that is shipped with SQL Server.

Examples
A. Setting manual change tracking
The following example sets manual change tracking on the full-text index on the JobCandidate table.

USE AdventureWorks2012;
GO
ALTER FULLTEXT INDEX ON HumanResources.JobCandidate
SET CHANGE_TRACKING MANUAL;
GO

B. Associating a property list with a full-text index


Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
The following example associates the DocumentPropertyList property list with the full-text index on the
Production.Document table. This ALTER FULLTEXT INDEX statement starts a full population, which is the default
behavior of the SET SEARCH PROPERTY LIST clause.

NOTE
For an example that creates the DocumentPropertyList property list, see CREATE SEARCH PROPERTY LIST (Transact-SQL).

USE AdventureWorks2012;
GO
ALTER FULLTEXT INDEX ON Production.Document
SET SEARCH PROPERTY LIST DocumentPropertyList;
GO

C. Removing a search property list


Applies to: SQL Server 2012 (11.x) through SQL Server 2017.
The following example removes the DocumentPropertyList property list from the full-text index on
Production.Document . In this example, there is no hurry to remove the properties from the index, so the WITH
NO POPULATION option is specified. However, property-level searching is no longer allowed against this full-text
index.

USE AdventureWorks2012;
GO
ALTER FULLTEXT INDEX ON Production.Document
SET SEARCH PROPERTY LIST OFF WITH NO POPULATION;
GO

D. Starting a full population


The following example starts a full population on the full-text index on the JobCandidate table.

USE AdventureWorks2012;
GO
ALTER FULLTEXT INDEX ON HumanResources.JobCandidate
START FULL POPULATION;
GO

See Also
sys.fulltext_indexes (Transact-SQL)
CREATE FULLTEXT INDEX (Transact-SQL)
DROP FULLTEXT INDEX (Transact-SQL)
Full-Text Search
Populate Full-Text Indexes
ALTER FULLTEXT STOPLIST (Transact-SQL)
10/1/2018

APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Inserts or deletes a stop word in the default full-text stoplist of the current database.
Transact-SQL Syntax Conventions

Syntax
ALTER FULLTEXT STOPLIST stoplist_name
{
ADD [N] 'stopword' LANGUAGE language_term
| DROP
{
'stopword' LANGUAGE language_term
| ALL LANGUAGE language_term
| ALL
    }
}
;

Arguments
stoplist_name
Is the name of the stoplist being altered. stoplist_name can be a maximum of 128 characters.
' stopword '
Is a string that could be a word with linguistic meaning in the specified language or a token that does not have a
linguistic meaning. stopword is limited to the maximum token length (64 characters). A stopword can be specified
as a Unicode string.
LANGUAGE language_term
Specifies the language to associate with the stopword being added or dropped.
language_term can be specified as a string, integer, or hexadecimal value corresponding to the locale identifier
(LCID ) of the language, as follows:

FORMAT DESCRIPTION

String language_term corresponds to the alias column value in the


sys.syslanguages (Transact-SQL) compatibility view. The string
must be enclosed in single quotation marks, as in
'language_term'.

Integer language_term is the LCID of the language.

Hexadecimal language_term is 0x followed by the hexadecimal value of the


LCID. The hexadecimal value must not exceed eight digits,
including leading zeros. If the value is in double-byte character
set (DBCS) format, SQL Server converts it to Unicode.
ADD 'stopword' LANGUAGE language_term
Adds a stop word to the stoplist for the language specified by LANGUAGE language_term.
If the specified combination of keyword and the LCID value of the language is not unique in the STOPLIST, an
error is returned. If the LCID value does not correspond to a registered language, an error is generated.
DROP { 'stopword' LANGUAGE language_term | ALL LANGUAGE language_term | ALL }
Drops a stop word from the stop list.
' stopword ' LANGUAGE language_term
Drops the specified stop word for the language specified by language_term.
ALL LANGUAGE language_term
Drops all of the stop words for the language specified by language_term.
ALL
Drops all of the stop words in the stoplist.
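The DROP forms can be sketched as follows, reusing the CombinedFunctionWordList stoplist from the example later in this topic:

```sql
-- Drop one stop word for one language...
ALTER FULLTEXT STOPLIST CombinedFunctionWordList DROP 'en' LANGUAGE 'Spanish';
-- ...or every stop word for a language at once.
ALTER FULLTEXT STOPLIST CombinedFunctionWordList DROP ALL LANGUAGE 'French';
```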

Remarks
CREATE FULLTEXT STOPLIST is supported only for compatibility level 100 and higher. For compatibility levels 80
and 90, the system stoplist is always assigned to the database.

Permissions
To designate a stoplist as the default stoplist of the database requires ALTER DATABASE permission. To otherwise
alter a stoplist requires being the stoplist owner or membership in the db_owner or db_ddladmin fixed database
roles.

Examples
The following example alters a stoplist named CombinedFunctionWordList , adding the word 'en', first for Spanish
and then for French.

ALTER FULLTEXT STOPLIST CombinedFunctionWordList ADD 'en' LANGUAGE 'Spanish';
ALTER FULLTEXT STOPLIST CombinedFunctionWordList ADD 'en' LANGUAGE 'French';

See Also
CREATE FULLTEXT STOPLIST (Transact-SQL)
DROP FULLTEXT STOPLIST (Transact-SQL)
Configure and Manage Stopwords and Stoplists for Full-Text Search
sys.fulltext_stoplists (Transact-SQL)
sys.fulltext_stopwords (Transact-SQL)
ALTER FUNCTION (Transact-SQL)
1/8/2019

APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Alters an existing Transact-SQL or CLR function that was previously created by executing the CREATE FUNCTION
statement, without changing permissions and without affecting any dependent functions, stored procedures, or
triggers.
Transact-SQL Syntax Conventions

Syntax
-- Transact-SQL Scalar Function Syntax
ALTER FUNCTION [ schema_name. ] function_name
( [ { @parameter_name [ AS ][ type_schema_name. ] parameter_data_type
[ = default ] }
[ ,...n ]
]
)
RETURNS return_data_type
[ WITH <function_option> [ ,...n ] ]
[ AS ]
BEGIN
function_body
RETURN scalar_expression
END
[ ; ]

-- Transact-SQL Inline Table-Valued Function Syntax


ALTER FUNCTION [ schema_name. ] function_name
( [ { @parameter_name [ AS ] [ type_schema_name. ] parameter_data_type
[ = default ] }
[ ,...n ]
]
)
RETURNS TABLE
[ WITH <function_option> [ ,...n ] ]
[ AS ]
RETURN [ ( ] select_stmt [ ) ]
[ ; ]
-- Transact-SQL Multistatement Table-valued Function Syntax
ALTER FUNCTION [ schema_name. ] function_name
( [ { @parameter_name [ AS ] [ type_schema_name. ] parameter_data_type
[ = default ] }
[ ,...n ]
]
)
RETURNS @return_variable TABLE <table_type_definition>
[ WITH <function_option> [ ,...n ] ]
[ AS ]
BEGIN
function_body
RETURN
END
[ ; ]
-- Transact-SQL Function Clauses
<function_option>::=
{
[ ENCRYPTION ]
| [ SCHEMABINDING ]
| [ RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT ]
| [ EXECUTE_AS_Clause ]
}

<table_type_definition>:: =
( { <column_definition> <column_constraint>
| <computed_column_definition> }
[ <table_constraint> ] [ ,...n ]
)
<column_definition>::=
{
{ column_name data_type }
[ [ DEFAULT constant_expression ]
[ COLLATE collation_name ] | [ ROWGUIDCOL ]
]
| [ IDENTITY [ (seed , increment ) ] ]
[ <column_constraint> [ ...n ] ]
}

<column_constraint>::=
{
[ NULL | NOT NULL ]
{ PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
[ WITH FILLFACTOR = fillfactor
| WITH ( < index_option > [ , ...n ] )
[ ON { filegroup | "default" } ]
| [ CHECK ( logical_expression ) ] [ ,...n ]
}

<computed_column_definition>::=
column_name AS computed_column_expression

<table_constraint>::=
{
{ PRIMARY KEY | UNIQUE }
[ CLUSTERED | NONCLUSTERED ]
( column_name [ ASC | DESC ] [ ,...n ] )
[ WITH FILLFACTOR = fillfactor
| WITH ( <index_option> [ , ...n ] )
| [ CHECK ( logical_expression ) ] [ ,...n ]
}

<index_option>::=
{
PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| IGNORE_DUP_KEY = { ON | OFF }
| STATISTICS_NORECOMPUTE = { ON | OFF }
| ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS ={ ON | OFF }
}
-- CLR Scalar and Table-Valued Function Syntax
ALTER FUNCTION [ schema_name. ] function_name
( { @parameter_name [AS] [ type_schema_name. ] parameter_data_type
[ = default ] }
[ ,...n ]
)
RETURNS { return_data_type | TABLE <clr_table_type_definition> }
[ WITH <clr_function_option> [ ,...n ] ]
[ AS ] EXTERNAL NAME <method_specifier>
[ ; ]

-- CLR Function Clauses


<method_specifier>::=
assembly_name.class_name.method_name

<clr_function_option>::=
{
[ RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT ]
| [ EXECUTE_AS_Clause ]
}

<clr_table_type_definition>::=
( { column_name data_type } [ ,...n ] )

-- Syntax for In-Memory OLTP: Natively compiled, scalar user-defined function


ALTER FUNCTION [ schema_name. ] function_name
( [ { @parameter_name [ AS ][ type_schema_name. ] parameter_data_type
[ NULL | NOT NULL ] [ = default ] }
[ ,...n ]
]
)
RETURNS return_data_type
[ WITH <function_option> [ ,...n ] ]
[ AS ]
BEGIN ATOMIC WITH (set_option [ ,... n ])
function_body
RETURN scalar_expression
END

<function_option>::=
{ | NATIVE_COMPILATION
| SCHEMABINDING
| [ EXECUTE_AS_Clause ]
| [ RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT ]
}

Arguments
schema_name
Is the name of the schema to which the user-defined function belongs.
function_name
Is the user-defined function to be changed.

NOTE
Parentheses are required after the function name even if a parameter is not specified.
@ parameter_name
Is a parameter in the user-defined function. One or more parameters can be declared.
A function can have a maximum of 2,100 parameters. The value of each declared parameter must be supplied by
the user when the function is executed, unless a default for the parameter is defined.
Specify a parameter name by using an at sign (@) as the first character. The parameter name must comply with the
rules for identifiers. Parameters are local to the function; the same parameter names can be used in other
functions. Parameters can take the place only of constants; they cannot be used instead of table names, column
names, or the names of other database objects.

NOTE
ANSI_WARNINGS is not honored when passing parameters in a stored procedure, user-defined function, or when declaring
and setting variables in a batch statement. For example, if a variable is defined as char(3), and then set to a value larger than
three characters, the data is truncated to the defined size and the INSERT or UPDATE statement succeeds.

[ type_schema_name. ] parameter_data_type
Is the parameter data type and optionally, the schema to which it belongs. For Transact-SQL functions, all data
types, including CLR user-defined types, are allowed except the timestamp data type. For CLR functions, all data
types, including CLR user-defined types, are allowed except text, ntext, image, and timestamp data types. The
nonscalar types cursor and table cannot be specified as a parameter data type in either Transact-SQL or CLR
functions.
If type_schema_name is not specified, the SQL Server 2005 Database Engine looks for the parameter_data_type
in the following order:
The schema that contains the names of SQL Server system data types.
The default schema of the current user in the current database.
The dbo schema in the current database.
[ =default ]
Is a default value for the parameter. If a default value is defined, the function can be executed without
specifying a value for that parameter.

NOTE
Default parameter values can be specified for CLR functions except for varchar(max) and varbinary(max) data types.

When a parameter of the function has a default value, the keyword DEFAULT must be specified when calling the
function to retrieve the default value. This behavior is different from using parameters with default values in stored
procedures in which omitting the parameter also implies the default value.
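As a sketch of this behavior (the function name, body, and values are illustrative assumptions, not part of this reference):

```sql
-- Hypothetical scalar function with a parameter default (illustrative only).
ALTER FUNCTION dbo.ufn_ApplyRate (@Amount money, @Rate decimal(4,2) = 0.05)
RETURNS money
AS
BEGIN
    RETURN @Amount * @Rate;
END;
GO

-- Unlike a stored procedure call, the parameter cannot simply be omitted;
-- the DEFAULT keyword must be passed explicitly to use the default value.
SELECT dbo.ufn_ApplyRate($100.00, DEFAULT);
```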
return_data_type
Is the return value of a scalar user-defined function. For Transact-SQL functions, all data types, including CLR
user-defined types, are allowed except the timestamp data type. For CLR functions, all data types, including CLR
user-defined types, are allowed except text, ntext, image, and timestamp data types. The nonscalar types cursor
and table cannot be specified as a return data type in either Transact-SQL or CLR functions.
function_body
Specifies that a series of Transact-SQL statements, which together do not produce a side effect such as modifying
a table, define the value of the function. function_body is used only in scalar functions and multistatement table-
valued functions.
In scalar functions, function_body is a series of Transact-SQL statements that together evaluate to a scalar value.
In multistatement table-valued functions, function_body is a series of Transact-SQL statements that populate a
TABLE return variable.
scalar_expression
Specifies that the scalar function returns a scalar value.
TABLE
Specifies that the return value of the table-valued function is a table. Only constants and @local_variables can be
passed to table-valued functions.
In inline table-valued functions, the TABLE return value is defined through a single SELECT statement. Inline
functions do not have associated return variables.
In multistatement table-valued functions, @return_variable is a TABLE variable used to store and accumulate the
rows that should be returned as the value of the function. @return_variable can be specified only for Transact-SQL
functions and not for CLR functions.
select-stmt
Is the single SELECT statement that defines the return value of an inline table-valued function.
EXTERNAL NAME <method_specifier>assembly_name.class_name.method_name
Applies to: SQL Server 2008 through SQL Server 2017.
Specifies the method of an assembly to bind with the function. assembly_name must match an existing assembly
in SQL Server in the current database with visibility on. class_name must be a valid SQL Server identifier and
must exist as a class in the assembly. If the class has a namespace-qualified name that uses a period (.) to separate
namespace parts, the class name must be delimited by using brackets ([]) or quotation marks (""). method_name
must be a valid SQL Server identifier and must exist as a static method in the specified class.
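A minimal sketch of the binding syntax follows; the assembly, class, and method names are assumptions used only to show the delimiting of a namespace-qualified class name:

```sql
-- Hypothetical CLR binding (assembly, class, and method names are assumptions).
-- The class name is bracket-delimited because it contains namespace periods.
ALTER FUNCTION dbo.ufn_StringDistance (@a nvarchar(200), @b nvarchar(200))
RETURNS int
AS EXTERNAL NAME StringUtilities.[Contoso.Text.Comparers].LevenshteinDistance;
```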

NOTE
By default, SQL Server cannot execute CLR code. You can create, modify, and drop database objects that reference common
language runtime modules; however, you cannot execute these references in SQL Server until you enable the clr enabled
option. To enable the option, use sp_configure.
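For example, CLR execution can be enabled at the instance level with sp_configure:

```sql
-- Enable CLR execution for the instance (documented sp_configure option).
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
```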

NOTE
This option is not available in a contained database.

< table_type_definition > ( { <column_definition> <column_constraint> | <computed_column_definition> } [ <table_constraint> ] [ ,...n ] )
Defines the table data type for a Transact-SQL function. The table declaration includes column definitions and
column or table constraints.
< clr_table_type_definition > ( { column_name data_type } [ ,...n ] )
Applies to: SQL Server 2008 through SQL
Server 2017, SQL Database (Preview in some regions).
Defines the table data types for a CLR function. The table declaration includes only column names and data types.
NULL|NOT NULL
Supported only for natively compiled, scalar user-defined functions. For more information, see Scalar User-
Defined Functions for In-Memory OLTP.
NATIVE_COMPILATION
Indicates whether a user-defined function is natively compiled. This argument is required for natively compiled,
scalar user-defined functions.
The NATIVE_COMPILATION argument is required when you ALTER the function, and can only be used, if the
function was created with the NATIVE_COMPILATION argument.
BEGIN ATOMIC WITH
Supported only for natively compiled, scalar user-defined functions, and is required. For more information, see
Atomic Blocks.
SCHEMABINDING
The SCHEMABINDING argument is required for natively compiled, scalar user-defined functions.
<function_option>::= and <clr_function_option>::=
Specifies that the function will have one or more of the following options.
ENCRYPTION
Applies to: SQL Server 2008 through SQL Server 2017.
Indicates that the Database Engine encrypts the catalog view columns that contain the text of the ALTER
FUNCTION statement. Using ENCRYPTION prevents the function from being published as part of SQL Server
replication. ENCRYPTION cannot be specified for CLR functions.
SCHEMABINDING
Specifies that the function is bound to the database objects that it references. This condition will prevent changes
to the function if other schema bound objects are referencing it.
The binding of the function to the objects it references is removed only when one of the following actions occurs:
The function is dropped.
The function is modified by using the ALTER statement with the SCHEMABINDING option not specified.
For a list of conditions that must be met before a function can be schema bound, see CREATE FUNCTION
(Transact-SQL ).
RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT
Specifies the OnNULLCall attribute of a scalar-valued function. If not specified, CALLED ON NULL INPUT is
implied by default. This means that the function body executes even if NULL is passed as an argument.
If RETURNS NULL ON NULL INPUT is specified in a CLR function, it indicates that SQL Server can return NULL
when any of the arguments it receives is NULL, without actually invoking the body of the function. If the method
specified in <method_specifier> already has a custom attribute that indicates RETURNS NULL ON NULL INPUT,
but the ALTER FUNCTION statement indicates CALLED ON NULL INPUT, the ALTER FUNCTION statement
takes precedence. The OnNULLCall attribute cannot be specified for CLR table-valued functions.
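The option described above can be sketched as follows (the function name and body are assumptions for illustration):

```sql
-- Hypothetical function: with RETURNS NULL ON NULL INPUT, the body is
-- skipped and NULL is returned whenever @s is NULL.
ALTER FUNCTION dbo.ufn_SafeUpper (@s nvarchar(50))
RETURNS nvarchar(50)
WITH RETURNS NULL ON NULL INPUT
AS
BEGIN
    RETURN UPPER(@s);
END;
```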
EXECUTE AS Clause
Specifies the security context under which the user-defined function is executed. Therefore, you can control which
user account SQL Server uses to validate permissions on any database objects referenced by the function.

NOTE
EXECUTE AS cannot be specified for inline user-defined functions.

For more information, see EXECUTE AS Clause (Transact-SQL ).


< column_definition >::=
Defines the table data type. The table declaration includes column definitions and constraints. For CLR functions,
only column_name and data_type can be specified.
column_name
Is the name of a column in the table. Column names must comply with the rules for identifiers and must be unique
in the table. column_name can consist of 1 through 128 characters.
data_type
Specifies the column data type. For Transact-SQL functions, all data types, including CLR user-defined types, are
allowed except timestamp. For CLR functions, all data types, including CLR user-defined types, are allowed except
text, ntext, image, char, varchar, varchar(max), and timestamp. The nonscalar type cursor cannot be specified
as a column data type in either Transact-SQL or CLR functions.
DEFAULT constant_expression
Specifies the value provided for the column when a value is not explicitly supplied during an insert.
constant_expression is a constant, NULL, or a system function value. DEFAULT definitions can be applied to any
column except those that have the IDENTITY property. DEFAULT cannot be specified for CLR table-valued
functions.
COLLATE collation_name
Specifies the collation for the column. If not specified, the column is assigned the default collation of the database.
Collation name can be either a Windows collation name or a SQL collation name. For a list of and more
information, see Windows Collation Name (Transact-SQL ) and SQL Server Collation Name (Transact-SQL ).
The COLLATE clause can be used to change the collations only of columns of the char, varchar, nchar, and
nvarchar data types.
COLLATE cannot be specified for CLR table-valued functions.
ROWGUIDCOL
Indicates that the new column is a row global unique identifier column. Only one uniqueidentifier column per
table can be designated as the ROWGUIDCOL column. The ROWGUIDCOL property can be assigned only to a
uniqueidentifier column.
The ROWGUIDCOL property does not enforce uniqueness of the values stored in the column. It also does not
automatically generate values for new rows inserted into the table. To generate unique values for each column, use
the NEWID function on INSERT statements. A default value can be specified; however, NEWID cannot be specified
as the default.
IDENTITY
Indicates that the new column is an identity column. When a new row is added to the table, SQL Server provides a
unique, incremental value for the column. Identity columns are typically used together with PRIMARY KEY
constraints to serve as the unique row identifier for the table. The IDENTITY property can be assigned to tinyint,
smallint, int, bigint, decimal(p,0), or numeric(p,0) columns. Only one identity column can be created per table.
Bound defaults and DEFAULT constraints cannot be used with an identity column. You must specify both the seed
and increment or neither. If neither is specified, the default is (1,1).
IDENTITY cannot be specified for CLR table-valued functions.
seed
Is the integer value to be assigned to the first row in the table.
increment
Is the integer value to add to the seed value for successive rows in the table.
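As a sketch, a multistatement table-valued function can declare an IDENTITY column with an explicit seed and increment in its return table (all names and the body here are illustrative assumptions):

```sql
-- Hypothetical multistatement table-valued function whose return table
-- uses an IDENTITY column with explicit seed and increment.
ALTER FUNCTION dbo.ufn_FirstNSquares (@n int)
RETURNS @result TABLE (RowID int IDENTITY(1,1) PRIMARY KEY, Squared int)
AS
BEGIN
    DECLARE @i int = 1;
    WHILE @i <= @n
    BEGIN
        INSERT INTO @result (Squared) VALUES (@i * @i);
        SET @i += 1;
    END;
    RETURN;
END;
```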
< column_constraint >::= and < table_constraint>::=
Defines the constraint for a specified column or table. For CLR functions, the only constraint type allowed is NULL.
Named constraints are not allowed.
NULL | NOT NULL
Determines whether null values are allowed in the column. NULL is not strictly a constraint but can be specified
just like NOT NULL. NOT NULL cannot be specified for CLR table-valued functions.
PRIMARY KEY
Is a constraint that enforces entity integrity for a specified column through a unique index. In table-valued user-
defined functions, the PRIMARY KEY constraint can be created on only one column per table. PRIMARY KEY
cannot be specified for CLR table-valued functions.
UNIQUE
Is a constraint that provides entity integrity for a specified column or columns through a unique index. A table can
have multiple UNIQUE constraints. UNIQUE cannot be specified for CLR table-valued functions.
CLUSTERED | NONCLUSTERED
Indicate that a clustered or a nonclustered index is created for the PRIMARY KEY or UNIQUE constraint.
PRIMARY KEY constraints use CLUSTERED, and UNIQUE constraints use NONCLUSTERED.
CLUSTERED can be specified for only one constraint. If CLUSTERED is specified for a UNIQUE constraint and a
PRIMARY KEY constraint is also specified, the PRIMARY KEY uses NONCLUSTERED.
CLUSTERED and NONCLUSTERED cannot be specified for CLR table-valued functions.
CHECK
Is a constraint that enforces domain integrity by limiting the possible values that can be entered into a column or
columns. CHECK constraints cannot be specified for CLR table-valued functions.
logical_expression
Is a logical expression that returns TRUE or FALSE.
<computed_column_definition>::=
Specifies a computed column. For more information about computed columns, see CREATE TABLE (Transact-
SQL ).
column_name
Is the name of the computed column.
computed_column_expression
Is an expression that defines the value of a computed column.
<index_option>::=
Specifies the index options for the PRIMARY KEY or UNIQUE index. For more information about index options,
see CREATE INDEX (Transact-SQL ).
PAD_INDEX = { ON | OFF }
Specifies index padding. The default is OFF.
FILLFACTOR = fillfactor
Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index page
during index creation or change. fillfactor must be an integer value from 1 to 100. The default is 0.
IGNORE_DUP_KEY = { ON | OFF }
Specifies the error response when an insert operation attempts to insert duplicate key values into a unique index.
The IGNORE_DUP_KEY option applies only to insert operations after the index is created or rebuilt. The default is
OFF.
STATISTICS_NORECOMPUTE = { ON | OFF }
Specifies whether distribution statistics are recomputed. The default is OFF.
ALLOW_ROW_LOCKS = { ON | OFF }
Specifies whether row locks are allowed. The default is ON.
ALLOW_PAGE_LOCKS = { ON | OFF }
Specifies whether page locks are allowed. The default is ON.

Remarks
ALTER FUNCTION cannot be used to change a scalar-valued function to a table-valued function, or vice versa.
Also, ALTER FUNCTION cannot be used to change an inline function to a multistatement function, or vice versa.
ALTER FUNCTION cannot be used to change a Transact-SQL function to a CLR function or vice-versa.
The following Service Broker statements cannot be included in the definition of a Transact-SQL user-defined
function:
BEGIN DIALOG CONVERSATION
END CONVERSATION
GET CONVERSATION GROUP
MOVE CONVERSATION
RECEIVE
SEND

Permissions
Requires ALTER permission on the function or on the schema. If the function specifies a user-defined type,
requires EXECUTE permission on the type.
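As a sketch (the function and principal names are assumptions), ALTER permission on a specific function can be granted like this:

```sql
-- Hypothetical grant: allow a principal to run ALTER FUNCTION on one object.
GRANT ALTER ON OBJECT::dbo.ufn_ApplyRate TO SqlDevUser;
```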

See Also
CREATE FUNCTION (Transact-SQL )
DROP FUNCTION (Transact-SQL )
Make Schema Changes on Publication Databases
EVENTDATA (Transact-SQL )
ALTER INDEX (Transact-SQL)

APPLIES TO: SQL Server (starting with 2008) Azure SQL Database Azure SQL Data Warehouse
Parallel Data Warehouse
Modifies an existing table or view index (relational or XML ) by disabling, rebuilding, or reorganizing the index;
or by setting options on the index.
Transact-SQL Syntax Conventions

Syntax
-- Syntax for SQL Server and Azure SQL Database

ALTER INDEX { index_name | ALL } ON <object>


{
REBUILD {
[ PARTITION = ALL ] [ WITH ( <rebuild_index_option> [ ,...n ] ) ]
| [ PARTITION = partition_number [ WITH ( <single_partition_rebuild_index_option> ) [ ,...n ] ] ]
}
| DISABLE
| REORGANIZE [ PARTITION = partition_number ] [ WITH ( <reorganize_option> ) ]
| SET ( <set_index_option> [ ,...n ] )
| RESUME [WITH (<resumable_index_options>,[...n])]
| PAUSE
| ABORT
}
[ ; ]

<object> ::=
{
[ database_name. [ schema_name ] . | schema_name. ]
table_or_view_name
}

<rebuild_index_option > ::=


{
PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| SORT_IN_TEMPDB = { ON | OFF }
| IGNORE_DUP_KEY = { ON | OFF }
| STATISTICS_NORECOMPUTE = { ON | OFF }
| STATISTICS_INCREMENTAL = { ON | OFF }
| ONLINE = {
ON [ ( <low_priority_lock_wait> ) ]
| OFF }
| RESUMABLE = { ON | OFF }
| MAX_DURATION = <time> [MINUTES]
| ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS = { ON | OFF }
| MAXDOP = max_degree_of_parallelism
| COMPRESSION_DELAY = {0 | delay [Minutes]}
| DATA_COMPRESSION = { NONE | ROW | PAGE | COLUMNSTORE | COLUMNSTORE_ARCHIVE }
[ ON PARTITIONS ( {<partition_number> [ TO <partition_number>] } [ , ...n ] ) ]
}

<single_partition_rebuild_index_option> ::=
{
SORT_IN_TEMPDB = { ON | OFF }
| MAXDOP = max_degree_of_parallelism
| RESUMABLE = { ON | OFF }
| MAX_DURATION = <time> [MINUTES]
| DATA_COMPRESSION = { NONE | ROW | PAGE | COLUMNSTORE | COLUMNSTORE_ARCHIVE }
| ONLINE = { ON [ ( <low_priority_lock_wait> ) ] | OFF }
}

<reorganize_option>::=
{
LOB_COMPACTION = { ON | OFF }
| COMPRESS_ALL_ROW_GROUPS = { ON | OFF}
}

<set_index_option>::=
{
ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS = { ON | OFF }
| IGNORE_DUP_KEY = { ON | OFF }
| STATISTICS_NORECOMPUTE = { ON | OFF }
| COMPRESSION_DELAY= {0 | delay [Minutes]}
}

<resumable_index_options> ::=
{
MAXDOP = max_degree_of_parallelism
| MAX_DURATION =<time> [MINUTES]
| <low_priority_lock_wait>
}

<low_priority_lock_wait>::=
{
WAIT_AT_LOW_PRIORITY ( MAX_DURATION = <time> [ MINUTES ] ,
ABORT_AFTER_WAIT = { NONE | SELF | BLOCKERS } )
}

-- Syntax for SQL Data Warehouse and Parallel Data Warehouse

ALTER INDEX { index_name | ALL }


ON [ schema_name. ] table_name
{
REBUILD {
[ PARTITION = ALL [ WITH ( <rebuild_index_option> ) ] ]
| [ PARTITION = partition_number [ WITH ( <single_partition_rebuild_index_option> )] ]
}
| DISABLE
| REORGANIZE [ PARTITION = partition_number ]
}
[;]

<rebuild_index_option > ::=


{
DATA_COMPRESSION = { COLUMNSTORE | COLUMNSTORE_ARCHIVE }
[ ON PARTITIONS ( {<partition_number> [ TO <partition_number>] } [ , ...n ] ) ]
}

<single_partition_rebuild_index_option > ::=


{
DATA_COMPRESSION = { COLUMNSTORE | COLUMNSTORE_ARCHIVE }
}

Arguments
index_name
Is the name of the index. Index names must be unique within a table or view but do not have to be unique within
a database. Index names must follow the rules of identifiers.
ALL
Specifies all indexes associated with the table or view regardless of the index type. Specifying ALL causes the
statement to fail if one or more indexes are in an offline or read-only filegroup or the specified operation is not
allowed on one or more index types. The following table lists the index operations and disallowed index types.

USING THE KEYWORD ALL WITH THIS OPERATION    FAILS IF THE TABLE HAS ONE OR MORE

REBUILD WITH ONLINE = ON                     XML index
                                             Spatial index
                                             Columnstore index: Applies to: SQL Server (starting with
                                             SQL Server 2012 (11.x)) and SQL Database.

REBUILD PARTITION = partition_number         Nonpartitioned index, XML index, spatial index, or
                                             disabled index

REORGANIZE                                   Indexes with ALLOW_PAGE_LOCKS set to OFF

REORGANIZE PARTITION = partition_number      Nonpartitioned index, XML index, spatial index, or
                                             disabled index

IGNORE_DUP_KEY = ON                          XML index
                                             Spatial index
                                             Columnstore index: Applies to: SQL Server (starting with
                                             SQL Server 2012 (11.x)) and SQL Database.

ONLINE = ON                                  XML index
                                             Spatial index
                                             Columnstore index: Applies to: SQL Server (starting with
                                             SQL Server 2012 (11.x)) and SQL Database.

RESUMABLE = ON                               Resumable indexes are not supported with the ALL
                                             keyword. Applies to: SQL Server (starting with
                                             SQL Server 2017 (14.x)) and SQL Database.

WARNING
For more detailed information about index operations that can be performed online, see Guidelines for Online Index
Operations.

If ALL is specified with PARTITION = partition_number, all indexes must be aligned. This means that they are
partitioned based on equivalent partition functions. Using ALL with PARTITION causes all index partitions with
the same partition_number to be rebuilt or reorganized. For more information about partitioned indexes, see
Partitioned Tables and Indexes.
database_name
Is the name of the database.
schema_name
Is the name of the schema to which the table or view belongs.
table_or_view_name
Is the name of the table or view associated with the index. To display a report of the indexes on an object, use the
sys.indexes catalog view.
SQL Database supports the three-part name format database_name.[schema_name].table_or_view_name when
the database_name is the current database or the database_name is tempdb and the table_or_view_name starts
with #.
REBUILD [ WITH (<rebuild_index_option> [ ,... n]) ]
Specifies the index will be rebuilt using the same columns, index type, uniqueness attribute, and sort order. This
clause is equivalent to DBCC DBREINDEX. REBUILD enables a disabled index. Rebuilding a clustered index
does not rebuild associated nonclustered indexes unless the keyword ALL is specified. If index options are not
specified, the existing index option values stored in sys.indexes are applied. For any index option whose value is
not stored in sys.indexes, the default indicated in the argument definition of the option applies.
If ALL is specified and the underlying table is a heap, the rebuild operation has no effect on the table. Any
nonclustered indexes associated with the table are rebuilt.
The rebuild operation can be minimally logged if the database recovery model is set to either bulk-logged or
simple.

NOTE
When you rebuild a primary XML index, the underlying user table is unavailable for the duration of the index operation.

Applies to: SQL Server (Starting with SQL Server 2012 (11.x)) and SQL Database.
For columnstore indexes, the rebuild operation:
1. Does not use the sort order.
2. Acquires an exclusive lock on the table or partition while the rebuild occurs. The data is "offline" and
unavailable during the rebuild, even when using NOLOCK, RCSI, or SI.
3. Re-compresses all data into the columnstore. Two copies of the columnstore index exist while the rebuild
is taking place. When the rebuild is finished, SQL Server deletes the original columnstore index.
For more information about rebuilding columnstore indexes, see Columnstore indexes - defragmentation
PARTITION
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Specifies that only one partition of an index will be rebuilt or reorganized. PARTITION cannot be specified if
index_name is not a partitioned index.
PARTITION = ALL rebuilds all partitions.

WARNING
Creating and rebuilding nonaligned indexes on a table with more than 1,000 partitions is possible, but is not supported.
Doing so may cause degraded performance or excessive memory consumption during these operations. We recommend
using only aligned indexes when the number of partitions exceeds 1,000.

partition_number
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Is the partition number of a partitioned index that is to be rebuilt or reorganized. partition_number is a constant
expression that can reference variables. These include user-defined type variables or functions and user-defined
functions, but cannot reference a Transact-SQL statement. partition_number must exist or the statement fails.
WITH (<single_partition_rebuild_index_option>)
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
SORT_IN_TEMPDB, MAXDOP, and DATA_COMPRESSION are the options that can be specified when you
rebuild a single partition (PARTITION = n). XML indexes cannot be specified in a single partition rebuild
operation.
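A sketch of a single-partition rebuild using the options allowed at partition granularity (the index, table, and partition number are assumptions):

```sql
-- Hypothetical single-partition rebuild with partition-level options.
ALTER INDEX IX_Sales_OrderDate ON dbo.Sales
REBUILD PARTITION = 5
WITH (SORT_IN_TEMPDB = ON, MAXDOP = 4, DATA_COMPRESSION = PAGE);
```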
DISABLE
Marks the index as disabled and unavailable for use by the Database Engine. Any index can be disabled. The
index definition of a disabled index remains in the system catalog with no underlying index data. Disabling a
clustered index prevents user access to the underlying table data. To enable an index, use ALTER INDEX
REBUILD or CREATE INDEX WITH DROP_EXISTING. For more information, see Disable Indexes and
Constraints and Enable Indexes and Constraints.
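The disable/re-enable cycle can be sketched as follows (index and table names are assumptions):

```sql
-- Hypothetical names: disable an index, then re-enable it by rebuilding.
ALTER INDEX IX_Customer_LastName ON dbo.Customer DISABLE;

ALTER INDEX IX_Customer_LastName ON dbo.Customer REBUILD;
```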
REORGANIZE a rowstore index
For rowstore indexes, REORGANIZE specifies to reorganize the index leaf level. The REORGANIZE operation is:
Always performed online. This means long-term blocking table locks are not held and queries or updates to
the underlying table can continue during the ALTER INDEX REORGANIZE transaction.
Not allowed for a disabled index
Not allowed when ALLOW_PAGE_LOCKS is set to OFF
Not rolled back when it is performed within a transaction and the transaction is rolled back.
REORGANIZE WITH ( LOB_COMPACTION = { ON | OFF } )
Applies to rowstore indexes.
LOB_COMPACTION = ON
Specifies to compact all pages that contain data of these large object (LOB ) data types: image, text, ntext,
varchar(max), nvarchar(max), varbinary(max), and xml. Compacting this data can reduce the data size on
disk.
For a clustered index, this compacts all LOB columns that are contained in the table.
For a nonclustered index, this compacts all LOB columns that are nonkey (included) columns in the index.
REORGANIZE ALL performs LOB_COMPACTION on all indexes. For each index, this compacts all LOB
columns in the clustered index, underlying table, or included columns in a nonclustered index.
LOB_COMPACTION = OFF
Pages that contain large object data are not compacted.
OFF has no effect on a heap.
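For example, a reorganize that also compacts LOB pages might look like this (index and table names are assumptions):

```sql
-- Hypothetical reorganize that compacts LOB pages in the index leaf level.
ALTER INDEX IX_Document_Content ON dbo.Document
REORGANIZE WITH (LOB_COMPACTION = ON);
```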
REORGANIZE a columnstore index
For columnstore indexes, REORGANIZE compresses each CLOSED delta rowgroup into the columnstore
as a compressed rowgroup. The REORGANIZE operation is always performed online. This means long-
term blocking table locks are not held and queries or updates to the underlying table can continue during
the ALTER INDEX REORGANIZE transaction.
REORGANIZE is not required in order to move CLOSED delta rowgroups into compressed rowgroups.
The background tuple-mover (TM ) process wakes up periodically to compress CLOSED delta rowgroups.
We recommend using REORGANIZE when tuple-mover is falling behind. REORGANIZE can compress
rowgroups more aggressively.
To compress all OPEN and CLOSED rowgroups, see the REORGANIZE WITH
(COMPRESS_ALL_ROW_GROUPS ) option in this section.

For columnstore indexes in SQL Server (Starting with 2016) and SQL Database, REORGANIZE performs the
following additional defragmentation optimizations online:
Physically removes rows from a rowgroup when 10% or more of the rows have been logically deleted.
The deleted bytes are reclaimed on the physical media. For example, if a compressed row group of 1
million rows has 100K rows deleted, SQL Server will remove the deleted rows and recompress the
rowgroup with 900k rows. It saves on the storage by removing deleted rows.
Combines one or more compressed rowgroups to increase rows per rowgroup up to the maximum of
1,048,576 rows. For example, if you bulk import 5 batches of 102,400 rows you will get 5 compressed
rowgroups. If you run REORGANIZE, these rowgroups will get merged into 1 compressed rowgroup of
size 512,000 rows. This assumes there were no dictionary size or memory limitations.
For rowgroups in which 10% or more of the rows have been logically deleted, SQL Server will try to
combine this rowgroup with one or more rowgroups. For example, rowgroup 1 is compressed with
500,000 rows and rowgroup 21 is compressed with the maximum of 1,048,576 rows. Rowgroup 21 has
60% of the rows deleted which leaves 409,830 rows. SQL Server favors combining these two rowgroups
to compress a new rowgroup that has 909,830 rows.
REORGANIZE WITH ( COMPRESS_ALL_ROW_GROUPS = { ON | OFF } )
Applies to: SQL Server (Starting with SQL Server 2016 (13.x)) and SQL Database
COMPRESS_ALL_ROW_GROUPS provides a way to force OPEN or CLOSED delta rowgroups into the
columnstore. With this option, it is not necessary to rebuild the columnstore index to empty the delta
rowgroups. This, combined with the other remove and merge defragmentation features makes it no longer
necessary to rebuild the index in most situations.
ON forces all rowgroups into the columnstore, regardless of size and state (CLOSED or OPEN ).
OFF forces all CLOSED rowgroups into the columnstore.
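A sketch of forcing all delta rowgroups into the columnstore without a rebuild (index and table names are assumptions):

```sql
-- Hypothetical columnstore maintenance: compress all OPEN and CLOSED delta
-- rowgroups into columnstore format without rebuilding the index.
ALTER INDEX CCI_FactSales ON dbo.FactSales
REORGANIZE WITH (COMPRESS_ALL_ROW_GROUPS = ON);
```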
SET ( <set_index option> [ ,... n] )
Specifies index options without rebuilding or reorganizing the index. SET cannot be specified for a disabled
index.
PAD_INDEX = { ON | OFF }
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Specifies index padding. The default is OFF.
ON
The percentage of free space that is specified by FILLFACTOR is applied to the intermediate-level pages of the
index. If FILLFACTOR is not specified at the same time PAD_INDEX is set to ON, the fill factor value stored in
sys.indexes is used.
OFF or fillfactor is not specified
The intermediate-level pages are filled to near capacity. This leaves sufficient space for at least one row of the
maximum size that the index can have, based on the set of keys on the intermediate pages.
For more information, see CREATE INDEX (Transact-SQL ).
FILLFACTOR = fillfactor
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index page
during index creation or alteration. fillfactor must be an integer value from 1 to 100. The default is 0. Fill factor
values 0 and 100 are the same in all respects.
An explicit FILLFACTOR setting applies only when the index is first created or rebuilt. The Database Engine does
not dynamically keep the specified percentage of empty space in the pages. For more information, see CREATE
INDEX (Transact-SQL ).
To view the fill factor setting, use sys.indexes.

IMPORTANT
Creating or altering a clustered index with a FILLFACTOR value affects the amount of storage space the data occupies,
because the Database Engine redistributes the data when it creates the clustered index.

SORT_IN_TEMPDB = { ON | OFF }
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Specifies whether to store the sort results in tempdb. The default is OFF.
ON
The intermediate sort results that are used to build the index are stored in tempdb. If tempdb is on a different
set of disks than the user database, this may reduce the time needed to create an index. However, this increases
the amount of disk space that is used during the index build.
OFF
The intermediate sort results are stored in the same database as the index.
If a sort operation is not required, or if the sort can be performed in memory, the SORT_IN_TEMPDB option is
ignored.
For more information, see SORT_IN_TEMPDB Option For Indexes.
IGNORE_DUP_KEY = { ON | OFF }
Specifies the error response when an insert operation attempts to insert duplicate key values into a unique
index. The IGNORE_DUP_KEY option applies only to insert operations after the index is created or rebuilt. The
default is OFF.
ON
A warning message will occur when duplicate key values are inserted into a unique index. Only the rows
violating the uniqueness constraint will fail.
OFF
An error message will occur when duplicate key values are inserted into a unique index. The entire INSERT
operation will be rolled back.
IGNORE_DUP_KEY cannot be set to ON for indexes created on a view, non-unique indexes, XML indexes,
spatial indexes, and filtered indexes.
To view IGNORE_DUP_KEY, use sys.indexes.
In backward compatible syntax, WITH IGNORE_DUP_KEY is equivalent to WITH IGNORE_DUP_KEY = ON.
STATISTICS_NORECOMPUTE = { ON | OFF }
Specifies whether distribution statistics are recomputed. The default is OFF.
ON
Out-of-date statistics are not automatically recomputed.
OFF
Automatic statistics updating is enabled.
To restore automatic statistics updating, set the STATISTICS_NORECOMPUTE to OFF, or execute UPDATE
STATISTICS without the NORECOMPUTE clause.
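For example, automatic statistics updates can be restored through the SET clause without rebuilding or reorganizing the index (index and table names are assumptions):

```sql
-- Hypothetical names: re-enable automatic statistics updates on an index.
ALTER INDEX IX_Sales_OrderDate ON dbo.Sales
SET (STATISTICS_NORECOMPUTE = OFF);
```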

IMPORTANT
Disabling automatic recomputation of distribution statistics may prevent the query optimizer from picking optimal
execution plans for queries that involve the table.

STATISTICS_INCREMENTAL = { ON | OFF }
When ON, the statistics created are per partition statistics. When OFF, the statistics tree is dropped and SQL
Server re-computes the statistics. The default is OFF.
If per partition statistics are not supported, the option is ignored and a warning is generated. Incremental stats
are not supported for the following statistics types:
Statistics created with indexes that are not partition-aligned with the base table.
Statistics created on Always On readable secondary databases.
Statistics created on read-only databases.
Statistics created on filtered indexes.
Statistics created on views.
Statistics created on internal tables.
Statistics created with spatial indexes or XML indexes.
Applies to: SQL Server (Starting with SQL Server 2014 (12.x)) and SQL Database.
ONLINE = { ON | OFF } <as applies to rebuild_index_option>
Specifies whether underlying tables and associated indexes are available for queries and data modification
during the index operation. The default is OFF.
For an XML index or spatial index, only ONLINE = OFF is supported, and if ONLINE is set to ON an error is
raised.

NOTE
Online index operations are not available in every edition of Microsoft SQL Server. For a list of features that are supported
by the editions of SQL Server, see Editions and Supported Features for SQL Server 2016 (13.x) and Editions and Supported
Features for SQL Server 2017.

ON
Long-term table locks are not held for the duration of the index operation. During the main phase of the index
operation, only an Intent Share (IS) lock is held on the source table. This allows queries or updates to the
underlying table and indexes to continue. At the start of the operation, a Shared (S) lock is very briefly held on
the source object. At the end of the operation, an S lock is very briefly held on the source if a nonclustered index
is being created, or a Sch-M (Schema Modification) lock is acquired when a clustered index is created or
dropped online, or when a clustered or nonclustered index is being rebuilt. ONLINE cannot be set to ON when
an index is being created on a local temporary table.
OFF
Table locks are applied for the duration of the index operation. An offline index operation that creates, rebuilds,
or drops a clustered, spatial, or XML index, or rebuilds or drops a nonclustered index, acquires a Schema
modification (Sch-M) lock on the table. This prevents all user access to the underlying table for the duration of
the operation. An offline index operation that creates a nonclustered index acquires a Shared (S) lock on the
table. This prevents updates to the underlying table but allows read operations, such as SELECT statements.
For more information, see How Online Index Operations Work.
Indexes, including indexes on global temp tables, can be rebuilt online with the following exceptions:
XML indexes
Indexes on local temp tables
A subset of a partitioned index (An entire partitioned index can be rebuilt online.)
SQL Database prior to V12, and SQL Server prior to SQL Server 2012 (11.x), do not permit the ONLINE
option for clustered index build or rebuild operations when the base table contains varchar(max) or
varbinary(max) columns.
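A minimal rebuild with the option enabled, again using the hypothetical names from above:

```sql
-- Rebuild an index online so the underlying table stays available
-- for queries and data modification during the operation.
ALTER INDEX IX_Sales_CustomerID ON dbo.Sales
REBUILD WITH (ONLINE = ON);
```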
RESUMABLE = { ON | OFF}
Applies to: SQL Server (Starting with SQL Server 2017 (14.x)) and SQL Database
Specifies whether an online index operation is resumable.
ON
Index operation is resumable.
OFF
Index operation is not resumable.
MAX_DURATION = time [MINUTES] used with RESUMABLE = ON (requires ONLINE = ON).
Applies to: SQL Server (Starting with SQL Server 2017 (14.x)) and SQL Database
Indicates time (an integer value specified in minutes) that a resumable online index operation is executed before
being paused.
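For example, the following statement (using the hypothetical index and table names from earlier examples) starts a resumable online rebuild that pauses itself after 60 minutes if it has not finished:

```sql
-- Start a resumable online rebuild; if it runs longer than
-- 60 minutes, the operation is paused rather than rolled back.
ALTER INDEX IX_Sales_CustomerID ON dbo.Sales
REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60 MINUTES);
```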
ALLOW_ROW_LOCKS = { ON | OFF }
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Specifies whether row locks are allowed. The default is ON.
ON
Row locks are allowed when accessing the index. The Database Engine determines when row locks are used.
OFF
Row locks are not used.
ALLOW_PAGE_LOCKS = { ON | OFF }
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Specifies whether page locks are allowed. The default is ON.
ON
Page locks are allowed when you access the index. The Database Engine determines when page locks are used.
OFF
Page locks are not used.
NOTE
An index cannot be reorganized when ALLOW_PAGE_LOCKS is set to OFF.

MAXDOP = max_degree_of_parallelism
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Overrides the max degree of parallelism configuration option for the duration of the index operation. For
more information, see Configure the max degree of parallelism Server Configuration Option. Use MAXDOP to
limit the number of processors used in a parallel plan execution. The maximum is 64 processors.

IMPORTANT
Although the MAXDOP option is syntactically supported for all XML indexes, for a spatial index or a primary XML index,
ALTER INDEX currently uses only a single processor.

max_degree_of_parallelism can be:


1
Suppresses parallel plan generation.
>1
Restricts the maximum number of processors used in a parallel index operation to the specified number.
0 (default)
Uses the actual number of processors or fewer based on the current system workload.
For more information, see Configure Parallel Index Operations.

NOTE
Parallel index operations are not available in every edition of Microsoft SQL Server. For a list of features that are
supported by the editions of SQL Server, see Editions and Supported Features for SQL Server 2016 (13.x).

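A brief example, again with hypothetical object names, capping the rebuild at four processors:

```sql
-- Limit the parallel index rebuild to four processors,
-- overriding the server-wide max degree of parallelism setting.
ALTER INDEX IX_Sales_CustomerID ON dbo.Sales
REBUILD WITH (MAXDOP = 4);
```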
COMPRESSION_DELAY = { 0 | duration [MINUTES] }


Applies to: SQL Server (Starting with SQL Server 2016 (13.x)).
For a disk-based table, delay specifies the minimum number of minutes a delta rowgroup in the CLOSED state
must remain in the delta rowgroup before SQL Server can compress it into the compressed rowgroup. Since
disk-based tables don't track insert and update times on individual rows, SQL Server applies the delay to delta
rowgroups in the CLOSED state.
The default is 0 minutes.
For recommendations on when to use COMPRESSION_DELAY, see Columnstore Indexes for Real-Time
Operational Analytics.
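As a sketch, assuming a hypothetical clustered columnstore index CCI_Sales on dbo.Sales, the delay can be changed with a SET operation:

```sql
-- Keep CLOSED delta rowgroups uncompressed for at least 10 minutes
-- before they are compressed into the columnstore.
ALTER INDEX CCI_Sales ON dbo.Sales
SET (COMPRESSION_DELAY = 10 MINUTES);
```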
DATA_COMPRESSION
Specifies the data compression option for the specified index, partition number, or range of partitions. The
options are as follows:
NONE
Index or specified partitions are not compressed. This does not apply to columnstore indexes.
ROW
Index or specified partitions are compressed by using row compression. This does not apply to columnstore
indexes.
PAGE
Index or specified partitions are compressed by using page compression. This does not apply to columnstore
indexes.
COLUMNSTORE
Applies to: SQL Server (Starting with SQL Server 2014 (12.x)) and SQL Database.
Applies only to columnstore indexes, including both nonclustered columnstore and clustered columnstore
indexes. COLUMNSTORE specifies to decompress the index or specified partitions that are compressed with the
COLUMNSTORE_ARCHIVE option. When the data is restored, it will continue to be compressed with the
columnstore compression that is used for all columnstore indexes.
COLUMNSTORE_ARCHIVE
Applies to: SQL Server (Starting with SQL Server 2014 (12.x)) and SQL Database.
Applies only to columnstore indexes, including both nonclustered columnstore and clustered columnstore
indexes. COLUMNSTORE_ARCHIVE will further compress the specified partition to a smaller size. This can be
used for archival, or for other situations that require a smaller storage size and can afford more time for storage
and retrieval.
For more information about compression, see Data Compression.
ON PARTITIONS ( { <partition_number_expression> | <range> } [ ,...n] )
Applies to: SQL Server (Starting with SQL Server 2008) and SQL Database.
Specifies the partitions to which the DATA_COMPRESSION setting applies. If the index is not partitioned, the
ON PARTITIONS argument will generate an error. If the ON PARTITIONS clause is not provided, the
DATA_COMPRESSION option applies to all partitions of a partitioned index.
<partition_number_expression> can be specified in the following ways:
Provide the number for a partition, for example: ON PARTITIONS (2).
Provide the partition numbers for several individual partitions separated by commas, for example: ON
PARTITIONS (1, 5).
Provide both ranges and individual partitions: ON PARTITIONS (2, 4, 6 TO 8).
<range> can be specified as partition numbers separated by the word TO, for example: ON PARTITIONS
(6 TO 8).
To set different types of data compression for different partitions, specify the DATA_COMPRESSION
option more than once, for example:

REBUILD WITH
(
DATA_COMPRESSION = NONE ON PARTITIONS (1),
DATA_COMPRESSION = ROW ON PARTITIONS (2, 4, 6 TO 8),
DATA_COMPRESSION = PAGE ON PARTITIONS (3, 5)
);

ONLINE = { ON | OFF } <as applies to single_partition_rebuild_index_option>


Specifies whether an index or an index partition of an underlying table can be rebuilt online or offline. If
REBUILD is performed online (ON ) the data in this table is available for queries and data modification during
the index operation. The default is OFF.
ON
Long-term table locks are not held for the duration of the index operation. During the main phase of the index
operation, only an Intent Share (IS) lock is held on the source table. An S lock on the table is required at the
start of the index rebuild and a Sch-M lock on the table at the end of the online index rebuild. Although both
locks are short metadata locks, the Sch-M lock in particular must wait for all blocking transactions to be
completed. During the wait time the Sch-M lock blocks all other transactions that wait behind this lock when
accessing the same table.

NOTE
Online index rebuild can set the low_priority_lock_wait options described later in this section.

OFF
Table locks are applied for the duration of the index operation. This prevents all user access to the underlying
table for the duration of the operation.
WAIT_AT_LOW_PRIORITY used with ONLINE=ON only.
Applies to: SQL Server (Starting with SQL Server 2014 (12.x)) and SQL Database.
An online index rebuild has to wait for blocking operations on this table. WAIT_AT_LOW_PRIORITY indicates
that the online index rebuild operation will wait for low priority locks, allowing other operations to proceed while
the online index build operation is waiting. Omitting the WAIT AT LOW PRIORITY option is equivalent to
WAIT_AT_LOW_PRIORITY (MAX_DURATION = 0 minutes, ABORT_AFTER_WAIT = NONE). For more
information, see WAIT_AT_LOW_PRIORITY.
MAX_DURATION = time [MINUTES]
Applies to: SQL Server (Starting with SQL Server 2014 (12.x)) and SQL Database.
The wait time (an integer value specified in minutes) that the online index rebuild locks will wait with low priority
when executing the DDL command. If the operation is blocked for the MAX_DURATION time, one of the
ABORT_AFTER_WAIT actions will be executed. MAX_DURATION time is always in minutes, and the word
MINUTES can be omitted.
ABORT_AFTER_WAIT = { NONE | SELF | BLOCKERS }
Applies to: SQL Server (Starting with SQL Server 2014 (12.x)) and SQL Database.
NONE
Continue waiting for the lock with normal (regular) priority.
SELF
Exit the online index rebuild DDL operation currently being executed without taking any action.
BLOCKERS
Kill all user transactions that block the online index rebuild DDL operation so that the operation can continue.
The BLOCKERS option requires the login to have ALTER ANY CONNECTION permission.
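Putting these options together, the following sketch (hypothetical object names) rebuilds an index online, waits at low priority for up to five minutes, and then kills the blocking user transactions if still blocked:

```sql
-- Wait at low priority for up to 5 minutes for the locks needed
-- by the online rebuild; if still blocked after that, kill the
-- blocking user transactions (requires ALTER ANY CONNECTION).
ALTER INDEX IX_Sales_CustomerID ON dbo.Sales
REBUILD WITH (ONLINE = ON (
    WAIT_AT_LOW_PRIORITY (MAX_DURATION = 5 MINUTES,
                          ABORT_AFTER_WAIT = BLOCKERS)));
```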
RESUME
Applies to: Starting with SQL Server 2017 (14.x)
Resume an index operation that is paused manually or due to a failure.
MAX_DURATION used with RESUMABLE=ON
Applies to: SQL Server (Starting with SQL Server 2017 (14.x)) and SQL Database
The time (an integer value specified in minutes) the resumable online index operation is executed after being
resumed. Once the time expires, the resumable operation is paused if it is still running.
WAIT_AT_LOW_PRIORITY used with RESUMABLE=ON and ONLINE = ON.
Applies to: SQL Server (Starting with SQL Server 2017 (14.x)) and SQL Database
Resuming an online index rebuild after a pause has to wait for blocking operations on this table.
WAIT_AT_LOW_PRIORITY indicates that the online index rebuild operation will wait for low priority locks,
allowing other operations to proceed while the online index build operation is waiting. Omitting the WAIT AT
LOW PRIORITY option is equivalent to WAIT_AT_LOW_PRIORITY (MAX_DURATION = 0 minutes,
ABORT_AFTER_WAIT = NONE). For more information, see WAIT_AT_LOW_PRIORITY.
PAUSE
Applies to: SQL Server (Starting with SQL Server 2017 (14.x)) and SQL Database
Pause a resumable online index rebuild operation.
ABORT
Applies to: SQL Server (Starting with SQL Server 2017 (14.x)) and SQL Database
Abort a running or paused index operation that was declared as resumable. You have to explicitly execute an
ABORT command to terminate a resumable index rebuild operation. The failure or pausing of a resumable
index operation does not terminate its execution; rather, it leaves the operation in an indefinite pause state.
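The three commands together, using the hypothetical names from earlier examples:

```sql
-- Pause a resumable online rebuild currently in progress...
ALTER INDEX IX_Sales_CustomerID ON dbo.Sales PAUSE;

-- ...resume it later from where it left off...
ALTER INDEX IX_Sales_CustomerID ON dbo.Sales RESUME;

-- ...or abandon the operation entirely.
ALTER INDEX IX_Sales_CustomerID ON dbo.Sales ABORT;
```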

Remarks
ALTER INDEX cannot be used to repartition an index or move it to a different filegroup. This statement cannot
be used to modify the index definition, such as adding or deleting columns or changing the column order. Use
CREATE INDEX with the DROP_EXISTING clause to perform these operations.
When an option is not explicitly specified, the current setting is applied. For example, if a FILLFACTOR setting is
not specified in the REBUILD clause, the fill factor value stored in the system catalog will be used during the
rebuild process. To view the current index option settings, use sys.indexes.
The values for ONLINE, MAXDOP, and SORT_IN_TEMPDB are not stored in the system catalog. Unless
specified in the index statement, the default value for the option is used.
On multiprocessor computers, just like other queries do, ALTER INDEX ... REBUILD automatically uses more
processors to perform the scan and sort operations that are associated with modifying the index. In contrast,
ALTER INDEX ... REORGANIZE, with or without LOB_COMPACTION, is a single-threaded operation regardless
of the max degree of parallelism value. For more information, see Configure Parallel Index Operations.

IMPORTANT
An index cannot be reorganized or rebuilt if the filegroup in which it is located is offline or set to read-only. When the
keyword ALL is specified and one or more indexes are in an offline or read-only filegroup, the statement fails.

Rebuilding Indexes
Rebuilding an index drops and re-creates the index. This removes fragmentation, reclaims disk space by
compacting the pages based on the specified or existing fill factor setting, and reorders the index rows in
contiguous pages. When ALL is specified, all indexes on the table are dropped and rebuilt in a single transaction.
Foreign key constraints do not have to be dropped in advance. When indexes with 128 extents or more are
rebuilt, the Database Engine defers the actual page deallocations, and their associated locks, until after the
transaction commits.
For more information, see Reorganize and Rebuild Indexes.
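The ALL keyword described above can be combined with rebuild options; this sketch (hypothetical table name) rebuilds every index on a table in a single transaction:

```sql
-- Rebuild all indexes on the table in one transaction,
-- applying a fill factor and sorting in tempdb.
ALTER INDEX ALL ON dbo.Sales
REBUILD WITH (FILLFACTOR = 80, SORT_IN_TEMPDB = ON);
```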

NOTE
Rebuilding or reorganizing small indexes often does not reduce fragmentation. The pages of small indexes are sometimes
stored on mixed extents. Mixed extents are shared by up to eight objects, so the fragmentation in a small index might not
be reduced after reorganizing or rebuilding it.

IMPORTANT
When an index is created or rebuilt in SQL Server, statistics are created or updated by scanning all the rows in the table.
However, starting with SQL Server 2012 (11.x), statistics are not created by scanning all the rows in the table when a
partitioned index is created or rebuilt. Instead, the query optimizer uses the default sampling algorithm to generate these
statistics. To obtain statistics on partitioned indexes by scanning all the rows in the table, use CREATE STATISTICS or
UPDATE STATISTICS with the FULLSCAN clause.

In earlier versions of SQL Server, you could sometimes rebuild a nonclustered index to correct inconsistencies
caused by hardware failures.
In SQL Server 2008 and later, you may still be able to repair such inconsistencies between the index and the
clustered index by rebuilding a nonclustered index offline. However, you cannot repair nonclustered index
inconsistencies by rebuilding the index online, because the online rebuild mechanism will use the existing
nonclustered index as the basis for the rebuild and thus persist the inconsistency. Rebuilding the index offline
can sometimes force a scan of the clustered index (or heap) and so remove the inconsistency. To assure a rebuild
from the clustered index, drop and recreate the non-clustered index. As with earlier versions, we recommend
recovering from inconsistencies by restoring the affected data from a backup; however, you may be able to
repair the index inconsistencies by rebuilding the nonclustered index offline. For more information, see DBCC
CHECKDB (Transact-SQL ).
To rebuild a clustered columnstore index, SQL Server:
1. Acquires an exclusive lock on the table or partition while the rebuild occurs. The data is "offline" and
unavailable during the rebuild.
2. Defragments the columnstore by physically deleting rows that have been logically deleted from the table;
the deleted bytes are reclaimed on the physical media.
3. Reads all data from the original columnstore index, including the deltastore. It combines the data into
new rowgroups, and compresses the rowgroups into the columnstore.
4. Requires space on the physical media to store two copies of the columnstore index while the rebuild is
taking place. When the rebuild is finished, SQL Server deletes the original clustered columnstore index.

Reorganizing Indexes
Reorganizing an index uses minimal system resources. It defragments the leaf level of clustered and
nonclustered indexes on tables and views by physically reordering the leaf-level pages to match the logical, left
to right, order of the leaf nodes. Reorganizing also compacts the index pages. Compaction is based on the
existing fill factor value. To view the fill factor setting, use sys.indexes.
When ALL is specified, relational indexes, both clustered and nonclustered, and XML indexes on the table are
reorganized. Some restrictions apply when specifying ALL, refer to the definition for ALL in the Arguments
section of this article.
For more information, see Reorganize and Rebuild Indexes.
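A minimal reorganize, again with hypothetical names, that also compacts large object data:

```sql
-- Defragment the leaf level of one index online,
-- compacting LOB data as well.
ALTER INDEX IX_Sales_CustomerID ON dbo.Sales
REORGANIZE WITH (LOB_COMPACTION = ON);
```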

IMPORTANT
When an index is reorganized in SQL Server, statistics are not updated.

Disabling Indexes
Disabling an index prevents user access to the index, and for clustered indexes, to the underlying table data. The
index definition remains in the system catalog. Disabling a nonclustered index or clustered index on a view
physically deletes the index data. Disabling a clustered index prevents access to the data, but the data remains
unmaintained in the B-tree until the index is dropped or rebuilt. To view the status of an enabled or disabled
index, query the is_disabled column in the sys.indexes catalog view.
If a table is in a transactional replication publication, you cannot disable any indexes that are associated with
primary key columns. These indexes are required by replication. To disable an index, you must first drop the
table from the publication. For more information, see Publish Data and Database Objects.
Use the ALTER INDEX REBUILD statement or the CREATE INDEX WITH DROP_EXISTING statement to
enable the index. Rebuilding a disabled clustered index cannot be performed with the ONLINE option set to ON.
For more information, see Disable Indexes and Constraints.
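Disabling and re-enabling look like this (hypothetical object names):

```sql
-- Disable a nonclustered index; the definition stays in the
-- catalog but the index data is deleted.
ALTER INDEX IX_Sales_CustomerID ON dbo.Sales DISABLE;

-- Re-enable the index later by rebuilding it.
ALTER INDEX IX_Sales_CustomerID ON dbo.Sales REBUILD;
```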

Setting Options
You can set the options ALLOW_ROW_LOCKS, ALLOW_PAGE_LOCKS, IGNORE_DUP_KEY and
STATISTICS_NORECOMPUTE for a specified index without rebuilding or reorganizing that index. The modified
values are immediately applied to the index. To view these settings, use sys.indexes. For more information, see
Set Index Options.
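For example, the following statement (hypothetical names) changes two of these options in place, without a rebuild or reorganize:

```sql
-- Change index options without rebuilding; the new settings
-- take effect immediately.
ALTER INDEX IX_Sales_CustomerID ON dbo.Sales
SET (ALLOW_PAGE_LOCKS = OFF, STATISTICS_NORECOMPUTE = ON);
```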
Row and Page Locks Options
When ALLOW_ROW_LOCKS = ON and ALLOW_PAGE_LOCKS = ON, row-level, page-level, and table-level
locks are allowed when you access the index. The Database Engine chooses the appropriate lock and can
escalate the lock from a row or page lock to a table lock.
When ALLOW_ROW_LOCKS = OFF and ALLOW_PAGE_LOCKS = OFF, only a table-level lock is allowed when
you access the index.
If ALL is specified when the row or page lock options are set, the settings are applied to all indexes. When the
underlying table is a heap, the settings are applied in the following ways:

ALLOW_ROW_LOCKS = ON or OFF
Applies to the heap and any associated nonclustered indexes.
ALLOW_PAGE_LOCKS = ON
Applies to the heap and any associated nonclustered indexes.
ALLOW_PAGE_LOCKS = OFF
Applies fully to the nonclustered indexes. This means that all page locks are not allowed on the
nonclustered indexes. On the heap, only the shared (S), update (U), and exclusive (X) locks for the page
are not allowed. The Database Engine can still acquire an intent page lock (IS, IU, or IX) for internal
purposes.
Online Index Operations
When rebuilding an index and the ONLINE option is set to ON, the underlying objects, the tables and
associated indexes, are available for queries and data modification. You can also rebuild online a portion of an
index residing on a single partition. Exclusive table locks are held only for a very short amount of time during
the alteration process.
Reorganizing an index is always performed online. The process does not hold locks long term and, therefore,
does not block queries or updates that are running.
You can perform concurrent online index operations on the same table or table partition only when doing the
following:
Creating multiple nonclustered indexes.
Reorganizing different indexes on the same table.
Reorganizing different indexes while rebuilding nonoverlapping indexes on the same table.
All other online index operations performed at the same time fail. For example, you cannot rebuild two or more
indexes on the same table concurrently, or create a new index while rebuilding an existing index on the same
table.
Resumable index operations
Applies to: SQL Server (Starting with SQL Server 2017 (14.x)) and SQL Database
Online index rebuild is specified as resumable using the RESUMABLE = ON option.
The RESUMABLE option is not persisted in the metadata for a given index and applies only to the duration
of a current DDL statement. Therefore, the RESUMABLE = ON clause must be specified explicitly to enable
resumability.
The MAX_DURATION option is supported with the RESUMABLE = ON option or the low_priority_lock_wait
argument option.
MAX_DURATION for the RESUMABLE option specifies the time interval for an index being rebuilt. Once
this time expires, the index rebuild is either paused or completes its execution. The user decides when a
rebuild for a paused index can be resumed. The time in minutes for MAX_DURATION must be
greater than 0 minutes and less than or equal to one week (7 * 24 * 60 = 10080 minutes). Having a long
pause for an index operation may impact the DML performance on a specific table as well as the
database disk capacity, since both indexes, the original one and the newly created one, require disk
space and need to be updated during DML operations. If the MAX_DURATION option is omitted, the
index operation will continue until it completes or until a failure occurs.
The <low_priority_lock_wait> argument option allows you to decide how the index operation can
proceed when blocked on the Sch-M lock.
Re-executing the original ALTER INDEX REBUILD statement with the same parameters resumes a
paused index rebuild operation. You can also resume a paused index rebuild operation by executing the
ALTER INDEX RESUME statement.
The SORT_IN_TEMPDB = ON option is not supported for resumable index rebuild.
The DDL command with RESUMABLE = ON cannot be executed inside an explicit transaction (cannot be part
of a begin tran ... commit block).
Only index operations that are paused are resumable.
When resuming an index operation that is paused, you can change the MAXDOP value to a new value. If
MAXDOP is not specified when resuming an index operation that is paused, the last MAXDOP value is
taken. If the MAXDOP option is not specified at all for the index rebuild operation, the default value is taken.
To pause the index operation immediately, you can stop the ongoing command (Ctrl-C), execute the
ALTER INDEX PAUSE command, or execute the KILL session_id command. Once the command is paused,
it can be resumed using the RESUME option.
The ABORT command kills the session that hosted the original index rebuild and aborts the index operation.
No extra resources are required for resumable index rebuild except for:
Additional space required to keep the index being built, including the time when the index operation is paused.
A DDL state preventing any DDL modification.
Ghost cleanup runs during the index pause phase, but it is paused while the index operation runs.
The following functionality is disabled for resumable index rebuild operations
Rebuilding an index that is disabled is not supported with RESUMABLE = ON.
The ALTER INDEX REBUILD ALL command.
ALTER TABLE using index rebuild.
A DDL command with RESUMABLE = ON cannot be executed inside an explicit transaction (cannot
be part of a begin tran ... commit block).
Rebuilding an index that has computed or TIMESTAMP column(s) as key columns.
If the base table contains LOB column(s), a resumable clustered index rebuild requires a Sch-M lock at
the start of this operation.
The SORT_IN_TEMPDB = ON option is not supported for resumable index rebuild.

NOTE
The DDL command runs until it completes, pauses, or fails. In case the command pauses, an error will be issued indicating
that the operation was paused and that the index creation did not complete. More information about the current index
status can be obtained from sys.index_resumable_operations. As before, in case of a failure an error will be issued as well.

For more information, see Perform Index Operations Online.
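The catalog view mentioned in the note above can be queried directly to monitor paused or running resumable operations, for example:

```sql
-- Inspect the progress and state of resumable index operations
-- (state_desc shows RUNNING or PAUSED).
SELECT name, sql_text, state_desc, percent_complete
FROM sys.index_resumable_operations;
```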


WAIT_AT_LOW_PRIORITY with online index operations
In order to execute the DDL statement for an online index rebuild, all active blocking transactions running on a
particular table must be completed. When the online index rebuild executes, it blocks all new transactions that
are ready to start execution on this table. Although the duration of the lock for online index rebuild is very short,
waiting for all open transactions on a given table to complete and blocking the new transactions to start, might
significantly affect the throughput, causing a workload slow down or timeout, and significantly limit access to
the underlying table. The WAIT_AT_LOW_PRIORITY option allows DBAs to manage the S lock and Sch-M
locks required for online index rebuilds and allows them to select one of three options. In all three cases, if
during the wait time (MAX_DURATION = n [minutes]) there are no blocking activities, the online index rebuild is
executed immediately without waiting and the DDL statement is completed.

Spatial Index Restrictions


When you rebuild a spatial index, the underlying user table is unavailable for the duration of the index operation
because the spatial index holds a schema lock.
The PRIMARY KEY constraint in the user table cannot be modified while a spatial index is defined on a column
of that table. To change the PRIMARY KEY constraint, first drop every spatial index of the table. After modifying
the PRIMARY KEY constraint, you can re-create each of the spatial indexes.
In a single partition rebuild opera