Best Practices SQL Server For OpenText Content Server 2x
Contents
Audience
Executive Summary
General Overview of Content Server Database Connections
Configuration Recommendations
Installation Options
SQL Server Setup Best Practices
Antivirus Software
Backup Compression
Instant Database File Initialization
Lock Pages in Memory
Maximum Degree of Parallelism (MaxDOP)
Min and Max Memory
ODBC Native Client Driver
Perform Volume Maintenance Task
Storage Best Practices
tempDB Configuration
SQL Server Configuration Settings
Allocate Full Extent and Autogrow
AlwaysOn Availability Groups
Cost Threshold for Parallelism
Locks
Optimize for Ad hoc Workloads
Content Server Database Settings
Clustered Indexes
Collation
Compatibility Level
Data Compression
Database Data, Log File Size, and AutoGrowth
DBrowseAncestorsCore Indexes
Recovery Model
Statistics
Table and Index Fragmentation, Fill factor
Target Recovery Time
Monitoring and Benchmarking
Benchmark
SQL Server Performance Monitoring Tools
Activity Monitor
Azure Monitoring
Management Data Warehouse
Query Store
Audience
This document is intended for a technical audience that is planning an
implementation of OpenText™ products. OpenText recommends consulting with
OpenText Professional Services who can assist with the specific details of individual
implementation architectures.
Disclaimer
The tests and results described in this document apply only to the OpenText
configuration described herein. For testing or certification of other configurations,
contact OpenText Corporation for more information.
All tests described in this document were run on equipment located in the OpenText
Performance Laboratory and were performed by the OpenText Performance
Engineering Group. Note that using a configuration similar to that described in this
document, or any other certified configuration, does not guarantee the results
documented herein. There may be parameters or variables that were not
contemplated during these performance tests that could affect results in other test
environments.
For any OpenText production deployment, OpenText recommends a rigorous
performance evaluation of the specific environment and applications to ensure that
there are no configuration or custom development bottlenecks present that hinder
overall performance.
Executive Summary
This white paper explores aspects of Microsoft® SQL Server that may be of value
when configuring and scaling OpenText Content Server™ 2x.x. It applies to Azure
SQL Database and SQL Server 2019, and is based on customer experiences,
performance lab tests with a typical document management workload, and technical
guidance from Microsoft.
Most common performance issues can be solved by ensuring that the hardware used
to deploy SQL Server has sufficient CPU, RAM, and fast I/O devices, properly
balanced.
These topics explore non-default options for situations where simple expansion of
resources is ineffective, and discuss best practices for administering Content
Server’s database. The focus is on non-default options because, as a
recommended starting point, Content Server on SQL Server installations use
Microsoft’s default deployment options. Usage profiles vary widely, so any actions
taken based on the topics discussed in this paper must be verified in your own
environment prior to production deployment, and a rollback plan must be available
in case adverse effects are detected.
These recommendations are not intended to replace the services of an experienced
and trained SQL Server database administrator (DBA), and do not cover standard
operational procedures for SQL Server database maintenance, but rather offer advice
specific to Content Server on the SQL Server platform.
Configuration Recommendations
This section provides recommendations for configuring an SQL Server instance to host
a Content Server ECM repository.
Installation Options
Consider the following tips and guidelines when initially installing Content Server for
use with SQL Server:
• Use the configuration options in SQL Server Configuration Settings and Content
Server Database Settings
• For Microsoft Azure SQL Databases
o The database must be created in the Microsoft Azure Portal (see article
Azure Create a SQL Database)
o The database can be managed in SQL Server Management Studio (SSMS)
2016 and in the Microsoft Azure Portal
The user account for the Azure database should be created using SQL Server
Management Studio (SSMS). Please note: database user, database name and
password should be altered.
CREATE USER dbuser WITH PASSWORD = 'pwd',
DEFAULT_SCHEMA = dbuser
CREATE SCHEMA dbuser AUTHORIZATION dbuser
ALTER ROLE db_owner ADD MEMBER dbuser
GRANT CONNECT TO dbuser
Apply database configurations.
ALTER DATABASE dbname SET COMPATIBILITY_LEVEL = 110
The core Content Server database is installed using the Create Tables In Existing
Database option in Content Server, with the Azure user account.
• For SQL Server 2019
The core Content Server database is installed using the sa SQL Server account.
For SQL Server DBAs who are uncomfortable allowing an application to log into
the SQL Server database with the sa account, the DBA can create the database
and user account instead.
To create the database, the DBA can use the SQL command below, or can create
the database with the SQL Server Management Studio (SSMS) GUI.
Please note: database name, name, filename, size, and password should be
altered.
CREATE DATABASE dbname
ON PRIMARY (NAME = 'livelink_Data',
FILENAME = 'C:\databases\dbname.mdf', SIZE = 5MB,
MAXSIZE = UNLIMITED)
LOG ON (NAME = 'livelink_Log',
FILENAME = 'C:\databases\dbname.ldf', SIZE = 1MB,
MAXSIZE = UNLIMITED)
Antivirus Software
Description Antivirus software scans files and monitors activity to prevent, detect,
and remove malicious software. Guidelines for antivirus software
configuration are provided in the Microsoft support article: File
Locations for Default Named Instances of SQL Server.
Default Depends on vendor.
Recommendation Exclude all database data and log files from scanning (including
tempDB). Exclude SQL Server engine process from active
monitoring.
Notes Follow the Microsoft support article for SQL Server version-specific
details.
Backup Compression
Description A compressed backup takes up less space and uses less I/O, which
increases backup speed. See Backup Compression.
Default Compression is not set
Recommendation Microsoft recommends that backups be configured individually and
compressed. SQL Server editions from 2008 and higher can restore
a compressed backup.
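As a minimal sketch, backup compression can be requested for an individual backup or set as the instance-wide default. The database name and backup path below are placeholders.

```sql
-- Compress a single backup (database name and path are placeholders)
BACKUP DATABASE dbname
TO DISK = 'C:\backups\dbname.bak'
WITH COMPRESSION;

-- Or make compression the instance-wide default for all backups
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
```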
tempDB Configuration
Versions SQL Server 2019 and higher
Description The tempDB is a global resource that stores user objects (such as
temp tables) and internal objects (such as work tables, work files, and
intermediate results for large sorts and index builds). When snapshot
isolation is used, the tempDB stores the before images of blocks that
are being modified, to allow for row versioning and consistent
committed read access.
In SQL Server 2019 and higher, heavily used system objects are
moved to in-memory tables instead of using the tempDB.
Default Single data file.
Recommendation The tempDB has a large impact on Content Server performance.
Follow these guidelines for best results:
• Create one data file per physical core, up to a maximum of
eight. If the tempDB continues to experience allocation
contention, add four files at a time (up to the total number of
logical processors).
• Make each data file the same size. Each one should be
large enough to accommodate a typical workload. (As a
general rule, set it to one-and-a-half times the size of the
largest single table in any database used by the instance.)
Allow auto-growth to accommodate usage spikes.
• Place these files on your fastest available storage.
• Ensure that the tempDB has the same collation as the
Content Server database.
• Ensure that each tempDB is the same size
• If using autogrowth, set the growth to MB not percentage
Notes Be mindful of factors that can limit the number of processors SQL
Server will utilize. Appropriately set the number of tempDB files.
Monitoring Monitor latch waits related to pages in tempDB. PAGELATCH_XX wait
types can indicate tempDB contention. Appendix B – Dynamic
Management Views (DMVs) provides sample queries that you can
use to monitor waits and identify active tasks that are blocked on
tempDB.
Monitor the space used in and the growth of tempDB and adjust
tempDB size as needed.
Permissions Adding or modifying tempDB data files requires the ALTER
permission on the tempDB database.
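The guidelines above can be sketched as follows. This is an example only: the file sizes, path, and logical file name of the added file are assumptions to be adapted to your storage layout ('tempdev' is the default logical name of the first tempDB data file).

```sql
-- Resize the existing tempDB data file and set growth in MB, not percent
ALTER DATABASE tempdb
MODIFY FILE (NAME = 'tempdev', SIZE = 4096MB, FILEGROWTH = 512MB);

-- Add an equally sized second data file (path and name are placeholders)
ALTER DATABASE tempdb
ADD FILE (NAME = 'tempdev2',
          FILENAME = 'T:\tempdb\tempdev2.ndf',
          SIZE = 4096MB, FILEGROWTH = 512MB);
```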
Locks
Description The locks option sets the maximum number of locks available.
Locks use memory from the Database Engine memory pool and
can allocate memory from the server, if available, when more
memory is required.
Default 0
Recommendation Microsoft recommends the default setting which allows the Database
Engine to allocate/deallocate locks dynamically.
For more details, see Configure the locks.
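To confirm that the instance is using the recommended default, the locks setting can be inspected with sp_configure; it is an advanced option, so advanced options must be shown first.

```sql
-- Expose advanced options so the locks setting is visible
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- config_value should be 0 (dynamic lock allocation)
EXEC sp_configure 'locks';
```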
Clustered Indexes
Description Clustered indexes store data rows for the index columns in sorted
order. In general, the primary key or the most frequently used index
on each table is a good candidate for a clustered index. This is
especially important for key highly-active core tables. Only one
clustered index can be defined per table.
Default In Content Server 10.5 and later, many tables in the Content Server
database have a clustered index.
Recommendation OpenText does not recommend making schema changes such as
adding clustered indexes to tables in the Content Server database.
Additional clustered indexes may be added in future releases of
Content Server.
Notes One benefit of clustered indexes is to avoid potential blocking by the
ghost record cleanup process when there is a high volume of deletes
(such as with Records Management Disposition). Without clustered
indexes, the cleanup process may require a table lock to scan for
ghost records, blocking other operations.
There are other situations where the ghost record cleanup process
might fall behind the rate of row deletions. You can monitor this using
the sys.dm_db_index_physical_stats (see Appendix B –
Dynamic Management Views (DMVs)). Review columns
Ghost_Record_Count (ghost records ready for cleanup) and
Version_Ghost_Record_Count (ghost records retained by an
outstanding snapshot isolation transaction).
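A hedged example of the monitoring described above, using sys.dm_db_index_physical_stats in SAMPLED mode (ghost record columns are only populated in SAMPLED or DETAILED mode). The table name here is a placeholder; substitute a table with heavy delete activity.

```sql
-- Check ghost record counts for one table (table name is a placeholder)
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       ips.index_id,
       ips.ghost_record_count,
       ips.version_ghost_record_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.SomeTable'), NULL, NULL, 'SAMPLED') AS ips;
```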
Collation
Description The collation for a database defines the language and character set
used to store data and sets the rules for sorting and comparing
characters. It also determines case sensitivity, accent sensitivity, and
kana sensitivity.
Default When SQL Server is installed, it derives its default server-level
collation from the Windows system locale. See article, Collation
and Unicode Support.
The default collation for a new database is the same as the SQL
Server default setting. It is inherited from the MODEL database,
where it cannot be changed.
Databases restored from a backup originating from another server
will retain their original collation.
Recommendation To avoid potential issues, provide best performance, and ensure
compatibility with other products on the same server, OpenText
recommends the following:
• For new SQL Server installations, select a collation that is
case-sensitive and accent-sensitive for compatibility with
other suite products (such as Archive Server).
• Ensure that the Content Server database has the same
collation as the server (and hence the same as system
databases like tempDB).
• Ensure that the collation for all objects in the Content
Server database are the same as the database collation.
The exception to this is the WebNodesMeta_XX tables,
which may derive a different collation from settings on the
Configure Multilingual Metadata administration page.
• Contact customer support for assistance if you have an
existing deployment that has a database collation that is
different from the server’s, or that has tables or columns
(other than WebNodesMeta_XX tables) that have a
collation that is different from the database collation.
Notes Refer to the scripts in SQL Server and Database Information Queries
for database collation and for column collations that differ from the
database collation.
Refer to Appendix C – Collation Fix on suggestions to fix collation
issues.
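As a sketch of the checks described above, the server, database, and column collations can be compared with a few queries (the database name is a placeholder):

```sql
-- Compare server and database collation
SELECT SERVERPROPERTY('Collation')               AS server_collation,
       DATABASEPROPERTYEX('dbname', 'Collation') AS database_collation;

-- List columns whose collation differs from the database collation
SELECT t.name AS table_name, c.name AS column_name, c.collation_name
FROM sys.columns c
JOIN sys.tables t ON t.object_id = c.object_id
WHERE c.collation_name IS NOT NULL
  AND c.collation_name <>
      CAST(DATABASEPROPERTYEX(DB_NAME(), 'Collation') AS sysname);
```

Remember that WebNodesMeta_XX tables are expected to differ and can be excluded from the results.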
Compatibility Level
Description The database compatibility level sets certain database behaviors to
be compatible with the specified version of SQL Server.
Default The compatibility level for newly created databases is the same as
the MODEL database which, by default, is the same as the installed
version of SQL Server.
When upgrading the database engine, compatibility level for user
databases is not altered, unless it is lower than the minimum
supported. Restoring a database backup to a newer version also
does not change its compatibility level.
Recommendation In general, using the latest compatibility mode allows the Content
Server database to benefit from all performance improvements in the
installed SQL Server version.
However, because of performance issues with SQL Server 2014
and later, set the compatibility level to SQL Server 2012 (110); see
article Alter Database Compatibility Level.
ALTER DATABASE <dbname> SET COMPATIBILITY_LEVEL = 110
When you change the compatibility level of the Content Server
database, be sure to update statistics on the database after making
the change.
Please note that, regardless of the implementation method, this
compatibility level is required for this version of Content Server;
how the DBA achieves it is not relevant.
As an alternative, you can use trace flag 9481 and leave the
compatibility level unchanged. Using trace flag 9481 affects all the
databases on the SQL Server instance. For more details, see
Technical Alert.
Data Compression
Description SQL Server offers data compression at the row and page level, but
only in the Enterprise Edition. Compression reduces I/O and the
amount of storage and memory used by SQL Server. It only adds a
small amount of overhead in the form of additional CPU usage.
Default Not compressed.
Recommendation When storage space, available memory, or disk I/O are under
pressure, and the database server is not CPU-bound, consider using
compression on selected tables and indexes.
Microsoft recommends compressing large objects that have either a
low ratio of update operations, or a high ratio of scan operations. See
articles Data Compression, Enable Compression on a Table or Index,
and sp_estimate_data_compression_savings.
You can automate the process using a script. (An example of this
type of approach and a sample script, which was used for internal
testing, is covered in the Practical Data Compression article.) The
script analyzes the usage of Content Server tables and indexes that
have more than 100 pages and selects candidates for compression.
It estimates the savings from row or page compression and
generates a command to implement the recommended compression.
The script relies on usage data from the DMVs, so it should be run
after a period of representative usage.
Overall impact from compression on performance, storage, memory,
and CPU will depend on many factors related to the environment and
product usage. Testing in the OpenText performance lab has
demonstrated the following:
Performance: For load tests involving a mix of document-
management operations, with a small set of indexes compressed
based on only high-read-ratio indexes, there was minimal
performance impact. When a larger set of tables and indexes was
compressed, performance was less consistent, and degraded by up
to 20%. For high-volume ingestion of documents with metadata,
there was no impact on ingestion throughput.
CPU: CPU usage increased by up to 8% in relative terms.
MDF File Storage: Reduced by up to 40% depending on what was
compressed. Specific large tables like LLAttrData were reduced by
as much as 82%.
I/O: Read I/O on MDF files reduced up to 30%; write I/O up to 18%.
Memory Usage: SQL Buffer memory usage reduced up to 25%. As
with any configuration change, test the performance impact of any
compression changes on a test system prior to deploying on
production systems.
Notes It can take longer to rebuild indexes when they are compressed.
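A hedged example of the estimate-then-compress workflow described above. The schema, table, and index names are placeholders; LLAttrData is used here only because it is cited above as a large table that compressed well in lab testing.

```sql
-- Estimate savings before compressing (object names are placeholders)
EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo', @object_name = 'LLAttrData',
     @index_id = NULL, @partition_number = NULL,
     @data_compression = 'PAGE';

-- If the estimate justifies it, apply page compression to a chosen index
ALTER INDEX IX_example ON dbo.LLAttrData
REBUILD WITH (DATA_COMPRESSION = PAGE);
```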
DBrowseAncestorsCore Indexes
Versions Content Server 21.2 and higher
Description The indexes for the table DBrowseAncestorsCore have changed. The
new indexes should help improve Content Server performance.
Default The indexes should be as follows:
UK_DBC_DID_AID - Unique (DataID, AncestorID) INCLUDE
(Deleted)
IDX_DBC_Del – (Deleted) INCLUDE (AncestorID, DataID)
IDX_DBC_AID_Del – (AncestorID, Deleted) INCLUDE (DataID)
Recommendation Rebuilding the DBrowseAncestorsCore table will recreate the
indexes.
Another option is to have the DBA drop the existing indexes and
create them as listed in the default. Please make sure you contact
OpenText support before changing indexes.
Recovery Model
Versions SQL Server 2019 and higher
Description The recovery model controls how SQL Server maintains transaction
logs for each database. For more information, see article
Recovery Models.
Default SIMPLE model is the default. This does not cover backups of the
transaction logs.
Recommendation Content Server supports both SIMPLE and FULL recovery models.
SIMPLE recovery is set initially when the database is created with
Content Server. FULL recovery requires setting the recovery
directories and can only be configured manually by a SQL Server
DBA. The DBA can change the recovery setting once the database
has been created, or can create the database using the steps in
Installation Options.
The Content Server database should be configured to FULL. This
requires the DBA to set up transaction log backups to prevent the
log file from growing too large. BULK LOGGED is not recommended,
as it is not compatible with many of the operations that Content
Server can perform.
Notes SIMPLE Recovery Model is typically used in test systems or in
databases that are stagnant. The transaction log space is reclaimed
once the transaction has completed.
FULL Recovery Model is typically used for critical data. Transaction
logs will need to be backed up on a regular basis - controlled by the
DBA.
BULK LOGGED Recovery Model is typically used to avoid logging
irregular bulk insert, create index, and select into statements. It
works the same as Full with that exception.
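The switch to FULL recovery, and the log backups it then requires, can be sketched as follows. The database name and backup path are placeholders; in practice the log backup would run on a regular schedule set by the DBA.

```sql
-- Switch the Content Server database to the FULL recovery model
ALTER DATABASE dbname SET RECOVERY FULL;

-- FULL recovery requires regular transaction log backups
-- to keep the log file from growing unchecked (path is a placeholder)
BACKUP LOG dbname TO DISK = 'C:\backups\dbname.trn';
```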
Statistics
Description The query optimizer uses statistics to aid in creating high-quality
query plans that improve performance. Statistics contain information
about the distribution of values in one or more columns of a table or
view. They are used to estimate the number of rows in a query result.
See article, Statistics.
The following settings control the automation of the statistics:
AUTO_CREATE_STATISTICS: When set to TRUE, automatically
build any missing statistics needed by a query for optimization.
AUTO_UPDATE_STATISTICS: When set to TRUE, automatically
build any out-of-date statistics needed by a query for optimization.
AUTO_UPDATE_STATISTICS_ASYNC: When set to TRUE and
AUTO_UPDATE_STATISTICS is set to TRUE, queries that initiate
an automatic update of out-of-date statistics will not wait for the
statistics to be updated before compiling.
Default The first two settings above are on by default, and the third is off. All
settings can be changed in the MODEL database.
Recommendation Maintaining up-to-date statistics is a key factor in allowing SQL
Server to generate plans with optimal performance.
OpenText recommends using the default values for these settings.
Depending on the load profile, schedule daily or weekly statistics
updates, or allow the query optimizer to automatically create and
update statistics as needed. Please note: this is generalized
guidance and should be refined by the DBA based on load and
usage profiles.
In SQL Server 2016 and higher, the behavior of trace flag 2371 is
applied automatically and the flag does not need to be set if the
database is at compatibility level 130 or higher.
You can update statistics with the UPDATE STATISTICS statement
or the sp_updatestats stored procedure. This is useful when
your site has a large amount of update activity, or if you want to
update statistics on a more predictable schedule.
Note that the sp_updatestats stored procedure updates statistics
on all tables that have one or more rows modified. It is normally
preferable to use an UPDATE STATISTICS statement to update
statistics on specific tables, as needed.
Monitoring Use the sys.dm_db_stats_properties DMV to track the
number of rows updated in each table, and which statistics may need
to be updated during maintenance windows. See the article, Object
Statistics.
Notes Updating statistics causes queries to recompile, which can be time-
consuming. Any strategy to update statistics must balance the
benefit of generating improved query plans with the need to avoid
overly frequent query recompilation.
A rebuild of an index will also update statistics.
Permissions Update Statistics requires the ALTER permission on the table or
view.
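Both the targeted update and the monitoring described above can be sketched as follows; the table name is a placeholder, and FULLSCAN is shown only as one sampling option.

```sql
-- Update statistics on one specific table (table name is a placeholder)
UPDATE STATISTICS dbo.SomeTable WITH FULLSCAN;

-- Find statistics with many row modifications since their last update
SELECT OBJECT_NAME(sp.object_id) AS table_name,
       s.name                    AS stats_name,
       sp.last_updated,
       sp.modification_counter
FROM sys.stats s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE sp.modification_counter > 0
ORDER BY sp.modification_counter DESC;
```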
Benchmark
Collect the following as the basis for a benchmark and further analysis of worst-
performing aspects:
1. For Azure, the server is not directly available to you. Review Microsoft
Documentation for Azure SQL Database Benchmark Overview.
a. Note any Windows event log errors present after or during the monitored
period.
Note that connect logging requires substantial space. Depending on the activity
level of the site, the connect log files may be 5 to 10 GB, so adequate disk space
should be planned. Content Server logs can be redirected to a different file system.
4. Use Trace Capture/Replay to capture trace files for periods of three to four hours
during core usage hours that fall within the monitored period.
You can use SQL Server Extended Events to monitor activity. For more
information, see articles Extended Events and Convert an Existing SQL Trace
Script.
6. Obtain the results of a Content Server Level 5 database verification report (run
from the Content Server Administration page, Maintain Database section). To
speed up the queries involved in this verification, ensure there is an index present
on DVersData.ProviderID. Note that for a large site this may take days to run.
If there is a period of lower activity during the night or weekends, that would be an
ideal time to run this verification.
7. Gather feedback from Content Server business users that summarizes any current
performance issues or operational failures that might be database related.
Activity Monitor
The Activity Monitor offers real-time views of system activity and wait states. It lets you
drill down on specific slow or blocking queries. It also provides historical information
on Resource Waits, Data File I/O, and Expensive Queries.
The Activity Monitor is available in Microsoft SQL Server Management Studio (SSMS)
when you right click on the connected server.
Azure Monitoring
To monitor Azure databases, you will need to log onto the Azure portal. From there,
you have many options available under Monitoring and Support + Troubleshooting.
For information on Azure Monitoring tools, see the articles: Microsoft Azure Monitor a
Cloud Service or Microsoft Azure Web Application Performance.
Query Store
The Query Store is a performance monitoring tool that can help find performance
differences caused by query plan changes. It can also force a query to use a plan
that already exists in the cache.
For the best information on how to use the Query Store, see article Best Practices with
Query Store.
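Enabling the Query Store on the Content Server database can be sketched as below (database name is a placeholder); the defaults for capture and retention can then be tuned per the Microsoft best-practices article.

```sql
-- Enable the Query Store and set it to capture plans and runtime stats
ALTER DATABASE dbname SET QUERY_STORE = ON;
ALTER DATABASE dbname SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
```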
Performance Analyzer
OpenText Performance Analyzer is a tool that works with Content Server logs. This
tool allows you to open connect logs and see the percentage mix and relative
importance of the operations in your site’s usage profile.
The Raw Data tab allows you to sort the transactions by overall execution time and
SQL time. This will show you which Content Server transactions are taking the most
SQL time (not just the individual statements that they issue).
To see the individual SQL statements (from the Raw Data tab), right-click on the line
and select Show SQL. Performance Analyzer will display each SQL statement that the
transaction issues, how long each one took to execute, and how many rows were
affected.
Description Check that index key sizes comply with the SQL Server 900-byte limit.
Sample SELECT ss.name schema_name,
st.name table_name, si.name index_name,
IIF(si.type_desc = 'CLUSTERED',
si.type_desc + IIF(si.is_unique = 0, '',
' - Unique'), IIF(si.is_unique = 0, '',
'Unique')) index_type,
IIF(si.is_primary_key = 1, 'Primary Key', '')
primary_key,
SUM(max_length) row_length,
COUNT(ic.index_id) column_count
FROM sys.tables st INNER JOIN (SELECT *
FROM sys.indexes WHERE type_desc != 'HEAP') si
ON st.object_id = si.object_id
INNER JOIN sys.schemas ss
ON st.schema_id = ss.schema_id
INNER JOIN sys.index_columns ic
ON si.object_id = ic.object_id
AND si.index_id = ic.index_id
INNER JOIN sys.columns sc
ON ic.object_id = sc.object_id
AND ic.column_id = sc.column_id
WHERE ss.name = '<schemaname>'
AND ic.is_included_column = 0
GROUP BY ss.name, st.name, si.name, si.type_desc,
si.is_unique, si.is_primary_key
HAVING SUM(max_length) > 900
ORDER BY ss.name, st.name, si.name
Integrity Check
Corruption in the database can cause performance issues. It is suggested that an
integrity check be performed on the database before and after each backup.
The command DBCC CHECKDB will check the logical and physical integrity of all of the
objects in the specified database. Follow the suggestions made by Microsoft.
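A minimal example of the check, with NO_INFOMSGS limiting the output to errors only (the database name is a placeholder):

```sql
-- Check logical and physical integrity of all objects in the database
DBCC CHECKDB ('dbname') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```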
Locking
Lock Manager
The Lock Manager is responsible for deciding which locks are appropriate depending
on the type of operation and the amount of data being affected. It also manages
compatibility between lock types attempting to access the same resources.
SQL Server uses the Lock Manager to provide consistency and to assign locks as
needed (a top-down approach: database, table, page/row). Lock requests are
granted first in, first out, with the first incompatible lock waiting (which can
cause blocking).
The default setting for the Lock Manager is 24% of allocated memory from the
database engine pool. The Lock Manager decides if enough memory is available for
all of the locks required.
A lock escalation will trigger when more than 40% of the value set for Locks is used.
A row or page lock escalates to a table lock when 5,000 locks are reached.
A table with billions of rows will automatically use a table lock as it consumes fewer
resources.
For more information on the Lock Manager, see Transaction Locking.
Lock Escalation
Description Some bulk operations, such as copying or moving a large subtree, or
changing permissions on a tree, can cause SQL Server resource
thresholds to be exceeded. Lock escalation is triggered when one of
the following conditions exists:
• A single Transact-SQL statement acquires at least 5,000
locks on a single non-partitioned table or index.
• A single Transact-SQL statement acquires at least 5,000
locks on a single partition of a partitioned table and the
ALTER TABLE SET LOCK_ESCALATION option is set to
AUTO.
• The number of locks in an instance of the Database Engine
exceeds memory or configuration thresholds. (The
thresholds vary depending on memory usage and the Locks
server setting.)
Although escalating locks to a coarser granularity can free
resources, it also affects concurrency. This means that other
sessions accessing the same tables and indexes can be put in a
wait state, degrading performance.
Default Locks defaults to 0. This means that lock escalation occurs when
memory used by lock objects is 24% of the memory used by the
database engine.
All objects have a default lock escalation value of TABLE. That
means when lock escalation is triggered, it is done at the table level.
Recommendation Use the lock escalation DMV example in Appendix B – Dynamic
Management Views (DMVs) to monitor for lock escalation attempts
and successes.
For objects experiencing frequent lock escalations, consider using
the SET LOCK_ESCALATION clause in the ALTER TABLE statement
to change the escalation algorithm from the default TABLE to either
AUTO or DISABLE.
• AUTO means that escalation can happen at the partition
level of a partitioned table, and thus not affect concurrency
on other partitions. (This introduces the potential of
deadlocks - when transactions locking different partitions
each want to expand an exclusive lock to the other
partitions.)
• DISABLE prevents lock escalation in most cases, but does
not guarantee that escalation will never occur: table-level
locks are not completely disallowed (for example, scanning
a heap under the SERIALIZABLE isolation level still
requires a table lock).
Notes For a description of the Lock Escalation process in SQL Server, see
the article, Lock Escalation (Database Engine).
Permissions Changing this setting requires the ALTER permission on the table.
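The recommendation above can be sketched as follows; the table name is a
placeholder, not an actual Content Server object:
-- Hypothetical example: replace <table_name> with an object that
-- experiences frequent lock escalations.
ALTER TABLE <table_name> SET (LOCK_ESCALATION = AUTO);
-- Afterwards, verify escalation activity per index:
SELECT OBJECT_NAME(object_id) AS table_name, index_id,
    index_lock_promotion_attempt_count,
    index_lock_promotion_count
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL)
ORDER BY index_lock_promotion_attempt_count DESC;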
Transaction Isolation
Description When READ_COMMITTED_SNAPSHOT is enabled, each statement sees a
snapshot of the data as it existed at the start of that statement;
transactions running under the SNAPSHOT isolation level see the data
as it existed at the start of the transaction. This reduces blocking
contention and improves concurrency, since readers do not block
writers and vice-versa. It also reduces the potential for
deadlocks. See the article, Snapshot Isolation in SQL Server.
Default ALLOW_SNAPSHOT_ISOLATION and
READ_COMMITTED_SNAPSHOT are both automatically set to ON for
the Content Server database.
Recommendation To manually set ALLOW_SNAPSHOT_ISOLATION and
READ_COMMITTED_SNAPSHOT, use the following commands in SQL Server
Management Studio. Note that SET READ_COMMITTED_SNAPSHOT ON waits
until there are no other active connections in the database.
ALTER DATABASE <Content_Server_DB>
SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE <Content_Server_DB>
SET READ_COMMITTED_SNAPSHOT ON;
Permissions Changing this setting requires the ALTER permission on the
database.
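To confirm that both options are in effect, query sys.databases (a sketch; the
database name below is a placeholder):
SELECT name, snapshot_isolation_state_desc,
    is_read_committed_snapshot_on
FROM sys.databases
WHERE name = '<Content_Server_DB>';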
Appendices
Appendix A – References
Buffer Manager Object
https://msdn.microsoft.com/en-us/library/ms189628.aspx
Blocking sessions
Description Methods for viewing blocking sessions and their cause.
There are also SSMS reports available:
Reports–Standard Reports–Activity–All Blocking Transactions
Management–Extended Events–Sessions–Watch Live Data
You can also create your own Blocked Process Report – see below.
More information can be found in the article, Identify the cause of
SQL Server blocking.
Sample --view any pending changes
SELECT * FROM sys.configurations WHERE value <>
value_in_use;
GO
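The sample above surfaces pending server configuration changes; to list the
blocked sessions themselves, a minimal sketch using sys.dm_exec_requests:
-- Sessions that are currently blocked, and who is blocking them
SELECT session_id, blocking_session_id, wait_type,
    wait_time, wait_resource
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;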
PAGELATCH_XX Waits
Description The following script can help you identify active tasks that are
blocked on tempDB PAGELATCH_XX waits.
Sample
SELECT session_id, wait_type, wait_duration_ms,
    blocking_session_id, resource_description,
    ResourceType = CASE
        WHEN pg.page_id = 1 OR pg.page_id % 8088 = 0
            THEN 'Is PFS Page'
        WHEN (pg.page_id - 2) % 511232 = 0
            THEN 'Is GAM Page'
        WHEN (pg.page_id - 3) % 511232 = 0
            THEN 'Is SGAM Page'
        ELSE 'Is Not PFS, GAM, or SGAM page'
    END
FROM sys.dm_os_waiting_tasks
CROSS APPLY (SELECT TRY_CAST(RIGHT(resource_description,
        LEN(resource_description) -
        CHARINDEX(':', resource_description, 3)) AS int)
    ) AS pg(page_id)
WHERE wait_type LIKE 'PAGE%LATCH_%'
    AND resource_description LIKE '2:%'
IF @partitioncount > 1
SET @command = @command + N' PARTITION=' +
CAST(@partitionnum AS nvarchar(10));
-- EXEC (@command);
IF LEN( @command ) > 0
PRINT @command;
END;
-- Close and deallocate the cursor
CLOSE partitions;
DEALLOCATE partitions;
-- Drop the temporary table
DROP TABLE #work_to_do;
--GO
Waits (sys.dm_os_wait_stats)
Description Shows aggregate time spent on different wait categories.
Sample
SELECT wait_type, wait_time_ms, waiting_tasks_count,
    max_wait_time_ms, signal_wait_time_ms,
    wait_time_ms / waiting_tasks_count AS AvgWaitTimems
FROM sys.dm_os_wait_stats
WHERE waiting_tasks_count > 0
ORDER BY wait_time_ms DESC
Notes Consider excluding wait types that do not impact user query
performance. See details about wait statistics in article
sys.dm_os_wait_stats and in the SQL Skills blog post, SQL Server
Wait Statistics.
Statistics (sys.dm_db_stats_properties)
Description Get a list of objects whose statistics may need to be updated.
The query targets large data tables, where updating statistics is
suggested once 20% of the rows have been modified – this threshold
can be changed. The query also returns an UPDATE STATISTICS
statement for each object so that you can update the statistics
directly.
Sample
SELECT rowmodcounter.modPercent, ss.name schema_name,
    st.name table_name, sst.name stats_name,
    sst.auto_created, sst.no_recompute,
    sp.last_updated, sp.rows, sp.rows_sampled, sp.steps,
    sp.unfiltered_rows, sp.modification_counter,
    sampleRate = (1.0 * sp.rows_sampled /
        NULLIF(sp.rows, 0)) * 100,
    'UPDATE STATISTICS ' + ss.name + '.' + st.name +
    ' (' + sst.name + ')'
FROM sys.stats sst
CROSS APPLY sys.dm_db_stats_properties(sst.object_id,
sst.stats_id) sp
INNER JOIN sys.tables st
ON sst.object_id = st.object_id
INNER JOIN sys.schemas ss
ON st.schema_id = ss.schema_id
CROSS APPLY (SELECT (1.0 * sp.modification_counter /
NULLIF(sp.rows, 0)) * 100) AS
rowmodcounter(modPercent)
WHERE ss.name = '<schema>'
AND rowmodcounter.modPercent >= 20
ORDER BY rowmodcounter.modPercent DESC;
• Remove the ad hoc and prepared plan cache for the entire instance:
DBCC FREESYSTEMCACHE ('SQL Plans');
• Flush the ad hoc and prepared plan cache for a single resource pool.
Find all the resource pools on the SQL Server (query above), then
free the cache for that pool:
DBCC FREESYSTEMCACHE ('SQL Plans', '<name>');
• Remove all query plan caches from the Compute nodes (applies to
Azure Synapse Analytics and Analytics Platform System). This can be
done with or without the regular completion message (WITH
NO_INFOMSGS):
USE <databasename>
DBCC FREEPROCCACHE (COMPUTE);
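If needed, the resource-pool names used above can be listed with the Resource
Governor DMV (a sketch; pool names vary per server):
SELECT pool_id, name
FROM sys.dm_resource_governor_resource_pools;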
About OpenText
OpenText enables the digital world, creating a better way for organizations to work with
information, on premises or in the cloud. For more information about OpenText (NASDAQ:
OTEX, TSX: OTC) visit opentext.com.
Copyright © 2021 Open Text. All rights reserved. Trademarks owned by Open Text.