Query Store is SQL Server’s built-in query performance tracking system. It captures a history of queries, their execution plans, and runtime statistics, storing everything in the database itself. It constantly records what’s happening so you can analyze performance issues after the fact.
Query performance issues can be notoriously hard to debug. A query runs fine for weeks, then suddenly it’s slow, and by the time you check, the problem has vanished or the execution plan is no longer in cache. SQL Server 2016 introduced Query Store to address this. Once enabled on a database, it continuously records query execution history, giving you the data you need to investigate performance problems after they happen. It won’t tell you what’s wrong or how to fix it, but at least you’ll have evidence to work with instead of flying blind.
What Query Store Captures
Query Store tracks three main types of information:
- Query text and execution plans: It stores the actual SQL statements and the execution plans the optimizer chose for them. This means you can see exactly how a query was executed at any point in time.
- Runtime statistics: It captures metrics like execution count, total duration, CPU time, logical reads, and physical reads. These stats are aggregated over configurable time intervals, so you can see trends rather than just point-in-time snapshots.
- Wait statistics: Query Store records what queries are waiting on (locks, I/O, CPU, memory), helping you understand why queries are slow, not just that they're slow. These can be found in the sys.query_store_wait_stats view (SQL Server 2017+).
When you have Query Store enabled, all of this data lives in your database in a set of system views that you can query directly, or you can use the built-in Query Store reports in SQL Server Management Studio.
Scenarios for Using Query Store
The most common use case for Query Store is troubleshooting performance regressions. A query that ran fine yesterday is suddenly slow today. What changed? Query Store lets you compare execution plans over time and identify when the optimizer started choosing a different (worse) plan.
You can also identify your most resource-intensive queries. Which queries consume the most CPU? Which ones do the most reads? Query Store gives you ranked lists based on actual production workload, not guesses.
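As a sketch of what such a ranked list might look like, the following query aggregates CPU time across the standard Query Store catalog views (the TOP 10 cutoff and the CPU ranking are just illustrative choices; you could order by reads or duration instead):

```sql
-- Top 10 queries by total CPU time, aggregated across all plans and intervals.
-- avg_cpu_time is reported in microseconds, so divide by 1000 for milliseconds.
SELECT TOP 10
    q.query_id,
    SUBSTRING(qt.query_sql_text, 1, 100) AS query_text_preview,
    SUM(rs.count_executions) AS total_executions,
    SUM(rs.avg_cpu_time * rs.count_executions) / 1000.0 AS total_cpu_ms
FROM sys.query_store_query q
JOIN sys.query_store_query_text qt ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats rs ON p.plan_id = rs.plan_id
GROUP BY q.query_id, qt.query_sql_text
ORDER BY total_cpu_ms DESC;
```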
Another valuable feature is plan forcing. If the optimizer picks a bad plan, you can force it to use a previous good plan without changing any code or adding query hints. This can be a lifesaver when you need a quick fix while you investigate the root cause.
Enabling Query Store
Starting with SQL Server 2022, Query Store is enabled by default on new databases, but in earlier versions it's disabled by default. Either way, you can enable it per database with a simple ALTER DATABASE command:
ALTER DATABASE YourDatabaseName
SET QUERY_STORE = ON;
This turns on Query Store with default settings. Once enabled, it starts capturing data immediately.
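You can confirm that Query Store is actually capturing data by checking the sys.database_query_store_options view in the target database:

```sql
-- actual_state_desc should report READ_WRITE once Query Store is capturing data
SELECT actual_state_desc, desired_state_desc
FROM sys.database_query_store_options;
```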
You can also configure various settings like how much disk space to allocate, how long to retain data, and how often to aggregate statistics. A basic configuration might look something like this:
ALTER DATABASE YourDatabaseName
SET QUERY_STORE = ON
(
OPERATION_MODE = READ_WRITE,
CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30),
DATA_FLUSH_INTERVAL_SECONDS = 900,
MAX_STORAGE_SIZE_MB = 1000,
INTERVAL_LENGTH_MINUTES = 60
);
This keeps 30 days of history, flushes data to disk every 900 seconds (15 minutes), caps storage at 1GB, and aggregates statistics hourly.
By default, Query Store only captures relevant queries based on execution count and resource consumption. You can change this with the QUERY_CAPTURE_MODE parameter, which accepts ALL, AUTO, NONE, or CUSTOM (which allows you to specify your own capture policy). The default depends on your SQL Server version, as discussed in the example below.
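As a sketch of the CUSTOM mode (available from SQL Server 2019), you can pair it with a QUERY_CAPTURE_POLICY block. The threshold values below are illustrative, not recommendations:

```sql
ALTER DATABASE YourDatabaseName
SET QUERY_STORE = ON
(
    QUERY_CAPTURE_MODE = CUSTOM,
    QUERY_CAPTURE_POLICY = (
        STALE_CAPTURE_POLICY_THRESHOLD = 24 HOURS, -- evaluation window for the thresholds below
        EXECUTION_COUNT = 30,                      -- capture if executed at least 30 times
        TOTAL_COMPILE_CPU_TIME_MS = 1000,          -- or if compile CPU exceeds 1 second
        TOTAL_EXECUTION_CPU_TIME_MS = 100          -- or if execution CPU exceeds 100 ms
    )
);
```

A query is captured once it crosses any one of the thresholds within the evaluation window, which lets you filter out one-off, cheap queries on busy systems.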
Example
Let’s enable Query Store on the WideWorldImporters database:
ALTER DATABASE WideWorldImporters SET QUERY_STORE = ON
(
OPERATION_MODE = READ_WRITE,
QUERY_CAPTURE_MODE = ALL,
CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30),
DATA_FLUSH_INTERVAL_SECONDS = 900,
MAX_STORAGE_SIZE_MB = 1000,
INTERVAL_LENGTH_MINUTES = 60
);
Here, I used QUERY_CAPTURE_MODE = ALL to capture all queries. The default value depends on the SQL Server version:
- SQL Server 2016 and SQL Server 2017 default to ALL
- SQL Server 2019 and Azure SQL Database default to AUTO
If you use AUTO, SQL Server only captures relevant queries based on execution count and resource consumption. So if you want to test this out, setting it to ALL will ensure that all queries will be captured.
Anyway, let’s run a query:
DECLARE @StartDate DATE = '2016-01-01';
DECLARE @EndDate DATE = '2016-01-31';
SELECT
c.CustomerName,
o.OrderID,
o.OrderDate,
SUM(ol.Quantity * ol.UnitPrice) AS OrderTotal,
COUNT(ol.OrderLineID) AS ItemCount
FROM Sales.Customers c
JOIN Sales.Orders o ON c.CustomerID = o.CustomerID
JOIN Sales.OrderLines ol ON o.OrderID = ol.OrderID
WHERE o.OrderDate >= @StartDate
AND o.OrderDate < @EndDate
GROUP BY c.CustomerName, o.OrderID, o.OrderDate
HAVING SUM(ol.Quantity * ol.UnitPrice) > 1000
ORDER BY OrderTotal DESC;
GO
Now let’s check the query store:
SELECT
q.query_id,
SUBSTRING(qt.query_sql_text, 1, 100) AS query_text_preview,
COUNT(DISTINCT p.plan_id) AS plan_count,
MAX(rs.last_execution_time) AS last_execution
FROM sys.query_store_query q
JOIN sys.query_store_query_text qt ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats rs ON p.plan_id = rs.plan_id
WHERE qt.query_sql_text LIKE '%Sales.Customers%Sales.Orders%'
AND qt.query_sql_text NOT LIKE '%sys.query_store%'
GROUP BY q.query_id, qt.query_sql_text;
Output:
query_id query_text_preview plan_count last_execution
-------- ------------------------------------- ---------- ------------------------
246 (@StartDate date,@EndDate date)SELECT
c.CustomerName,
o.OrderID,
o.OrderDate,
SUM(o 1 2025-12-08T00:46:55.813Z
1 row(s) returned
In this case I used the SUBSTRING() function to return just the first 100 characters of the query text, but you can increase that as required.
After running the initial query multiple times over several days, you can check Query Store to see if there are multiple execution plans:
SELECT
q.query_id,
qt.query_sql_text,
COUNT(DISTINCT p.plan_id) AS plan_count,
MAX(rs.last_execution_time) AS last_execution
FROM sys.query_store_query q
JOIN sys.query_store_query_text qt ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats rs ON p.plan_id = rs.plan_id
WHERE qt.query_sql_text LIKE '%Sales.Customers%Sales.Orders%'
AND qt.query_sql_text NOT LIKE '%sys.query_store%'
GROUP BY q.query_id, qt.query_sql_text
HAVING COUNT(DISTINCT p.plan_id) > 1;
This query narrows the results down to just those cases where the optimizer generated different plans for the same query, via the HAVING clause at the end. The plan_count column tells you how many distinct plans were generated.
You can then compare the performance of different plans:
SELECT
p.plan_id,
rs.avg_duration / 1000.0 AS avg_duration_ms,
rs.avg_cpu_time / 1000.0 AS avg_cpu_ms,
rs.avg_logical_io_reads,
rs.count_executions,
p.is_forced_plan
FROM sys.query_store_plan p
JOIN sys.query_store_runtime_stats rs ON p.plan_id = rs.plan_id
WHERE p.query_id = 246 -- Use the query_id from previous query
ORDER BY rs.avg_duration DESC;
Example output:
plan_id avg_duration_ms avg_cpu_ms avg_logical_io_reads count_executions is_forced_plan
------- --------------- ---------- -------------------- ---------------- --------------
31 111.796 123.328 1309 1 false
35 65.383 62.89 89 1 false
34 55.594 82.831 402 1 false
33 51.616 73.669 367 1 false
32 45.708 44.593 132 1 false
30 41.56 40.709 161 1 false
36 38.947 48.742 1309 1 false
29 28.039 26.784 84 1 false
If you identify that, say, plan ID 29 was fast and plan ID 31 is slow, you can force the optimizer to use the good plan:
EXEC sp_query_store_force_plan @query_id = 246, @plan_id = 29;
Now every execution of that query will use plan 29, regardless of what the optimizer thinks. This buys you time to figure out why the optimizer started choosing the wrong plan. Maybe statistics are stale, maybe parameter sniffing is causing issues, or maybe the data distribution changed.
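Once you've resolved the root cause, you can remove the forced plan with the companion procedure, sp_query_store_unforce_plan, and let the optimizer choose freely again:

```sql
-- Stop forcing plan 29 for query 246; the optimizer resumes normal plan selection
EXEC sp_query_store_unforce_plan @query_id = 246, @plan_id = 29;
```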
When Query Store Fills Up
Query Store has a maximum size limit. When it reaches that limit, it switches to read-only mode and stops capturing new data. You'll see this reflected in the actual state reported by the sys.database_query_store_options view (the desired state remains READ_WRITE while the actual state reports READ_ONLY).
You have a few options when this happens. You can increase MAX_STORAGE_SIZE_MB to allocate more space. You can reduce STALE_QUERY_THRESHOLD_DAYS to purge older data more aggressively. Or you can manually clean up data for specific queries that are no longer relevant.
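To see how close Query Store is to its limit, and whether it has already gone read-only, you can check sys.database_query_store_options:

```sql
SELECT actual_state_desc,
       readonly_reason,          -- non-zero when Query Store has switched to read-only
       current_storage_size_mb,
       max_storage_size_mb
FROM sys.database_query_store_options;
```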
To manually clear all Query Store data:
ALTER DATABASE YourDatabaseName
SET QUERY_STORE CLEAR ALL;
This wipes everything and starts fresh, which you’d only do if you’re comfortable losing the historical data.
I should point out that Microsoft advises that MAX_STORAGE_SIZE_MB isn’t strictly enforced:
MAX_STORAGE_SIZE_MB limit isn’t strictly enforced. Storage size is checked only when Query Store writes data to disk. This interval is set by the DATA_FLUSH_INTERVAL_SECONDS option or the Management Studio Query Store dialog option Data Flush Interval. The interval default value is 900 seconds (or 15 minutes). If the Query Store has breached the MAX_STORAGE_SIZE_MB limit between storage size checks, it transitions to read-only mode. If SIZE_BASED_CLEANUP_MODE is enabled, the cleanup mechanism to enforce the MAX_STORAGE_SIZE_MB limit is also triggered. Once enough space has been cleared, the Query Store mode automatically switches back to read-write.
Query Store Overhead
Query Store does add some overhead. It’s tracking every query execution and persisting that data to disk. In most cases, the performance impact is minimal (usually less than 5%), but it’s not zero.
The overhead comes from two places: capturing runtime stats during query execution and flushing data to disk periodically.
The DATA_FLUSH_INTERVAL_SECONDS setting controls how often data gets written, with longer intervals reducing overhead but increasing the risk of data loss if SQL Server crashes.
For high-throughput OLTP systems, you might want to tune the capture settings more carefully. You can reduce overhead by increasing the statistics aggregation interval or by using Query Store in read-only mode during peak hours (although you’ll lose real-time capture when you do this).
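As a sketch, switching Query Store to read-only mode is a single option change. Existing data stays queryable, but no new executions are captured until you switch back:

```sql
-- Pause data capture (e.g. during peak hours); historical data remains available
ALTER DATABASE YourDatabaseName
SET QUERY_STORE (OPERATION_MODE = READ_ONLY);

-- Resume capture afterwards
ALTER DATABASE YourDatabaseName
SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
```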
Disabling Query Store
If you need to turn off Query Store, it’s a simple ALTER DATABASE command:
ALTER DATABASE YourDatabaseName
SET QUERY_STORE = OFF;
This stops Query Store from capturing new data immediately. However, it doesn’t delete anything. All your historical query data and configuration settings remain in the database. If you re-enable Query Store later with just SET QUERY_STORE = ON, it picks up where it left off with the same settings you had configured.
If you want to clear out all the captured data while keeping Query Store enabled:
ALTER DATABASE YourDatabaseName
SET QUERY_STORE CLEAR ALL;
This wipes all queries, plans, and statistics but preserves your configuration settings. It’s useful when you want to start fresh without disabling the feature entirely.
To completely reset Query Store (clearing data and reverting to default settings), you'd need to clear the data and then explicitly reconfigure each option back to its default value, or simply disable it and remove any custom configuration when you re-enable it later.
Quick Wrap Up
Query Store can be a handy tool for diagnosing performance problems, especially when issues are intermittent or you need to understand how query performance has changed over time.
Whether you enable it depends on your needs and tolerance for the overhead. For databases where performance troubleshooting is difficult or time-consuming, the historical data Query Store provides can save you significant investigative effort. For simpler systems or those with tight performance margins, the added overhead might not be worth it.
If you do enable it, configure reasonable retention policies based on your available storage, and check in occasionally to make sure it hasn’t filled up or switched to read-only mode. When something does go wrong and you need to understand why a query suddenly got slower, you’ll appreciate having the data available.