What's new in Storage in Windows Server
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016
This topic explains the new and changed functionality in storage in Windows Server
2019, Windows Server 2016, and Windows Server Semi-Annual Channel releases.
When using this version of Windows Server to orchestrate migrations, Storage Migration Service adds several new abilities. For more info about Storage Migration Service, see Storage Migration Service overview.
Disk anomaly detection is a new capability that highlights when disks are behaving
differently than usual. While different isn't necessarily a bad thing, seeing these
anomalous moments can be helpful when troubleshooting issues on your systems.
This capability is also available for servers running Windows Server 2019.
To be fair, Windows Admin Center is a separate download that runs on Windows Server
2019 and other versions of Windows, but it's new and we didn't want you to miss it...
Store up to ten times more data on the same volume with deduplication and
compression for the ReFS filesystem. (It's just one click to turn on with Windows
Admin Center.) The variable-size chunk store with optional compression maximizes
savings rates, while the multi-threaded post-processing architecture keeps
performance impact minimal. Supports volumes up to 64 TB and will deduplicate
the first 4 TB of each file.
Unlock unprecedented performance with native Storage Spaces Direct support for
persistent memory modules, including Intel® Optane™ DC PM and NVDIMM-N.
Use persistent memory as cache to accelerate the active working set, or as capacity
to guarantee consistent low latency on the order of microseconds. Manage
persistent memory just as you would any other drive in PowerShell or Windows
Admin Center.
Survive two hardware failures at once with an all-new software resiliency option
inspired by RAID 5+1. With nested resiliency, a two-node Storage Spaces Direct
cluster can provide continuously accessible storage for apps and virtual machines
even if one server node goes down and a drive fails in the other server node.
Use a low-cost USB flash drive plugged into your router to act as a witness in two-server clusters. If a server goes down and then comes back up, the cluster uses the USB drive witness to know which server has the most up-to-date data. For more info, see the Storage at Microsoft blog and documentation on how to deploy a file share witness.
Manage and monitor Storage Spaces Direct with the new purpose-built Dashboard
and experience in Windows Admin Center. Create, open, expand, or delete
volumes with just a few clicks. Monitor performance like IOPS and IO latency from
the overall cluster down to the individual SSD or HDD. Available at no additional
cost for Windows Server 2016 and Windows Server 2019.
Performance history
Get effortless visibility into resource utilization and performance with built-in
history. Over 50 essential counters spanning compute, memory, network, and
storage are automatically collected and stored on the cluster for up to one year.
Best of all, there's nothing to install, configure, or start – it just works. Visualize in
Windows Admin Center or query and process in PowerShell.
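A minimal sketch of querying the built-in performance history from PowerShell, assuming a Windows Server 2019 cluster with Storage Spaces Direct enabled; run it on a cluster node:
PowerShell
# List the performance history series collected for the cluster
Get-ClusterPerformanceHistory
# Objects such as physical disks, volumes, or VMs can be piped in to scope the query
Get-PhysicalDisk | Get-ClusterPerformanceHistory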
Achieve multi-petabyte scale – great for media, backup, and archival use cases. In
Windows Server 2019, Storage Spaces Direct supports up to 4 petabytes (PB) =
4,000 terabytes of raw capacity per storage pool. Related capacity guidelines are
increased as well: for example, you can create twice as many volumes (64 instead
of 32), each twice as large as before (64 TB instead of 32 TB). Stitch multiple
clusters together into a cluster set for even greater scale within one storage
namespace. For more info, see the Storage at Microsoft blog .
With mirror-accelerated parity you can create Storage Spaces Direct volumes that
are part mirror and part parity, like mixing RAID-1 and RAID-5/6 to get the best of
both. (It's easier than you think in Windows Admin Center.) In Windows Server
2019, the performance of mirror-accelerated parity is more than doubled relative
to Windows Server 2016 thanks to optimizations.
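As a rough sketch, a mirror-accelerated parity volume can be created from PowerShell by specifying both a mirror tier and a parity tier; the pool, volume, tier names, and sizes below are assumptions and depend on how the pool's tiers were defined:
PowerShell
# Create a volume that is part mirror, part parity (mirror-accelerated parity)
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume01" -FileSystem CSVFS_ReFS -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 100GB, 900GB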
Easily identify drives with abnormal latency with proactive monitoring and built-in
outlier detection, inspired by Microsoft Azure's long-standing and successful
approach. Whether it's average latency or something more subtle like 99th
percentile latency that stands out, slow drives are automatically labeled in
PowerShell and Windows Admin Center with 'Abnormal Latency' status.
Storage Replica
There are a number of improvements to Storage Replica in this release:
Storage Replica in Windows Server, Standard Edition
You can now use Storage Replica with Windows Server, Standard Edition in addition to
Datacenter Edition. Storage Replica running on Windows Server, Standard Edition, has
limitations compared to running it on Datacenter Edition.
Storage Replica log performance improvements
Log performance is improved in this release. To gain the increased performance, all members of the replication group must run Windows Server 2019.
Test failover
You can now temporarily mount a snapshot of the replicated storage on a destination
server for testing or backup purposes. For more information, see Frequently Asked
Questions about Storage Replica.
Miscellaneous improvements
Storage Replica also contains a number of other miscellaneous improvements in this release.
SMB
SMB1 and guest authentication removal: Windows Server no longer installs the
SMB1 client and server by default. Additionally, the ability to authenticate as a
guest in SMB2 and later is off by default. For more information, review SMBv1 is
not installed by default in Windows 10, version 1709 and Windows Server, version
1709 .
Data Deduplication
Data Deduplication now supports ReFS: You no longer have to choose between the advantages of a modern file system with ReFS and Data Deduplication: now, you can enable Data Deduplication wherever you can enable ReFS. Increase storage efficiency by upwards of 95% with ReFS.
DataPort API for optimized ingress/egress to deduplicated volumes: Developers
can now take advantage of the knowledge Data Deduplication has about how to
store data efficiently to move data between volumes, servers, and clusters
efficiently.
Storage Replica
The disaster recovery protection added by Storage Replica is now expanded to include:
Test failover: the option to mount the destination storage is now possible through
the test failover feature. You can mount a snapshot of the replicated storage on
destination nodes temporarily for testing or backup purposes. For more
information, see Frequently Asked Questions about Storage Replica.
Windows Admin Center support: Support for graphical management of
replication is now available in Windows Admin Center via the Server Manager tool.
This includes server-to-server replication, cluster-to-cluster, as well as stretch
cluster replication.
What value does this change add? Storage Spaces Direct enables service providers and
enterprises to use industry standard servers with local storage to build highly available
and scalable software defined storage. Using servers with local storage decreases
complexity, increases scalability, and enables use of storage devices that were not
previously possible, such as SATA solid state disks to lower cost of flash storage, or
NVMe solid state disks for better performance.
Storage Spaces Direct removes the need for a shared SAS fabric, simplifying deployment
and configuration. Instead it uses the network as a storage fabric, leveraging SMB3 and
SMB Direct (RDMA) for high-speed, low-latency CPU efficient storage. To scale out,
simply add more servers to increase storage capacity and I/O performance. For more information, see Storage Spaces Direct.
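To illustrate how little configuration this requires, enabling Storage Spaces Direct on an existing failover cluster is a single cmdlet; this is a sketch and assumes the cluster and its local drives are already prepared:
PowerShell
# Claim the eligible local drives on every node and create the software-defined storage pool
Enable-ClusterStorageSpacesDirect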
What value does this change add? Storage Replication enables you to do the following:
Provide a single vendor disaster recovery solution for planned and unplanned
outages of mission critical workloads.
Use SMB3 transport with proven reliability, scalability, and performance.
Stretch Windows failover clusters to metropolitan distances.
Use Microsoft software end to end for storage and clustering, such as Hyper-V,
Storage Replica, Storage Spaces, Cluster, Scale-Out File Server, SMB3,
Deduplication, and ReFS/NTFS.
Help reduce cost and complexity as follows:
Is hardware agnostic, with no requirement for a specific storage configuration
like DAS or SAN.
Allows commodity storage and networking technologies.
Features ease of graphical management for individual nodes and clusters
through Failover Cluster Manager.
Includes comprehensive, large-scale scripting options through Windows
PowerShell.
Help reduce downtime, and increase reliability and productivity intrinsic to
Windows.
Provide supportability, performance metrics, and diagnostic capabilities.
For more information, see Storage Replica in Windows Server 2016.
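For context, a server-to-server replication partnership is created with the New-SRPartnership cmdlet; the server, volume, and replication group names below are hypothetical and shown only as a sketch:
PowerShell
# Replicate volume D: on SRV01 to volume D: on SRV02, using E: on each server for the replication logs
New-SRPartnership -SourceComputerName "SRV01" -SourceRGName "RG01" -SourceVolumeName "D:" -SourceLogVolumeName "E:" -DestinationComputerName "SRV02" -DestinationRGName "RG02" -DestinationVolumeName "D:" -DestinationLogVolumeName "E:"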
What value does this change add? You can now create storage QoS policies on a CSV
cluster and assign them to one or more virtual disks on Hyper-V virtual machines.
Storage performance is automatically readjusted to meet policies as the workloads and
storage loads fluctuate.
What works differently? This capability is new in Windows Server 2016. Managing
minimum reserves, monitoring flows of all virtual disks across the cluster via a single
command, and centralized policy based management were not possible in previous
releases of Windows Server.
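A minimal sketch of creating and assigning a Storage QoS policy from PowerShell; the policy name, IOPS values, and VM name are assumptions:
PowerShell
# Create a policy on the CSV cluster that reserves 100 IOPS and caps at 1,000 IOPS
New-StorageQosPolicy -Name "Silver" -MinimumIops 100 -MaximumIops 1000
# Assign the policy to the virtual disks of a VM
$policy = Get-StorageQosPolicy -Name "Silver"
Get-VM -Name "VM01" | Get-VMHardDiskDrive | Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId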
Data Deduplication
Support for large volumes (updated): Prior to Windows Server 2016, volumes had to be specifically sized for the expected churn, with volume sizes above 10 TB not being good candidates for deduplication. In Windows Server 2016, Data Deduplication supports volume sizes up to 64 TB.
Support for large files (updated): Prior to Windows Server 2016, files approaching 1 TB in size were not good candidates for deduplication. In Windows Server 2016, files up to 1 TB are fully supported.
Support for Nano Server (new): Data Deduplication is available and fully supported in the new Nano Server deployment option for Windows Server 2016.
Simplified backup support (new): In Windows Server 2012 R2, virtualized backup applications, such as Microsoft's Data Protection Manager, were supported through a series of manual configuration steps. In Windows Server 2016, a new default Usage Type, "Backup", has been added for seamless deployment of Data Deduplication for virtualized backup applications.
Support for Cluster OS Rolling Upgrades (new): Data Deduplication fully supports the new Cluster OS Rolling Upgrade feature of Windows Server 2016.
What value does this change add? This change reduces the likelihood of man-in-the-
middle attacks.
What works differently? If SMB signing and mutual authentication are unavailable, a
Windows 10 or Windows Server 2016 computer won't process domain-based Group
Policy and scripts.
Note
The registry values for these settings aren't present by default, but the hardening
rules still apply until overridden by Group Policy or other registry values.
Work Folders
Improved change notification when the Work Folders server is running Windows Server
2016 and the Work Folders client is Windows 10.
What value does this change add?
For Windows Server 2012 R2, when file changes are synced to the Work Folders server,
clients are not notified of the change and wait up to 10 minutes to get the update.
When using Windows Server 2016, the Work Folders server immediately notifies
Windows 10 clients and the file changes are synced immediately.
If you're using an older client or the Work Folders server is Windows Server 2012 R2, the
client will continue to poll every 10 minutes for changes.
ReFS
The next iteration of ReFS provides support for large-scale storage deployments with
diverse workloads, delivering reliability, resiliency, and scalability for your data.
Additional References
What's New in Windows Server 2016
Data Deduplication Overview
Article • 02/18/2022
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Azure Stack HCI, versions 21H2 and 20H2
Data Deduplication helps reduce storage costs by finding and optimizing duplicated portions of data on a volume. Duplication is common in many datasets, for example:
User file shares may have many copies of the same or similar files.
Virtualization guests might be almost identical from VM-to-VM.
Backup snapshots might have minor differences from day to day.
The space savings that you can gain from Data Deduplication depend on the dataset or
workload on the volume. Datasets that have high duplication could see optimization
rates of up to 95%, or a 20x reduction in storage utilization. The following table
highlights typical deduplication savings for various content types:
Note
If you're just looking to free up space on a volume, consider using Azure File Sync
with cloud tiering enabled. This allows you to cache your most frequently accessed
files locally and tier your least frequently accessed files to the cloud, saving local
storage space while maintaining performance. For details, see Planning for an
Azure File Sync deployment.
General purpose file servers: General purpose file servers are general use file
servers that might contain any of the following types of shares:
Team shares
User home folders
Work folders
Software development shares
General purpose file servers are a good candidate for Data Deduplication because
multiple users tend to have many copies or versions of the same file. Software
development shares benefit from Data Deduplication because many binaries remain
essentially unchanged from build to build.
VDI deployments are great candidates for Data Deduplication because the virtual
hard disks that drive the remote desktops for users are essentially identical.
Additionally, Data Deduplication can help with the so-called VDI boot storm, which
is the drop in storage performance when many users simultaneously sign in to their
desktops to start the day.
Other workloads: Other workloads may also be excellent candidates for Data
Deduplication.
What's New in Data Deduplication
Article • 02/18/2022
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Azure Stack HCI, versions 21H2 and 20H2
ReFS support (new): Store up to 10X more data on the same volume with deduplication and compression for the ReFS filesystem. (It's just one click to turn on with Windows Admin Center.) The variable-size chunk store with optional compression maximizes savings rates, while the multi-threaded post-processing architecture keeps performance impact minimal. Supports volumes up to 64 TB and will deduplicate the first 4 TB of each file.
Support for large volumes (updated): Prior to Windows Server 2016, volumes had to be specifically sized for the expected churn, with volume sizes above 10 TB not being good candidates for deduplication. In Windows Server 2016, Data Deduplication supports volume sizes up to 64 TB.
Support for large files (updated): Prior to Windows Server 2016, files approaching 1 TB in size were not good candidates for deduplication. In Windows Server 2016, files up to 1 TB are fully supported.
Support for Nano Server (new): Data Deduplication is available and fully supported in the new Nano Server deployment option for Windows Server 2016.
Support for Cluster OS Rolling Upgrade (new): Data Deduplication fully supports the new Cluster OS Rolling Upgrade feature of Windows Server 2016.
Starting with Windows Server 2016, Data Deduplication is highly performant on volumes
up to 64 TB.
Starting with Windows Server 2016, the Data Deduplication Job pipeline has been
redesigned to run multiple threads in parallel using multiple I/O queues for each
volume. This results in performance that was previously only possible by dividing up
data into multiple smaller volumes.
These optimizations apply to all Data Deduplication Jobs, not just the Optimization Job.
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Azure Stack HCI, versions 21H2 and 20H2
1. Optimization should not get in the way of writes to the disk Data Deduplication
optimizes data by using a post-processing model. All data is written unoptimized
to the disk and then optimized later by Data Deduplication.
2. Optimization should not change access semantics Users and applications that
access data on an optimized volume are completely unaware that the files they are
accessing have been deduplicated.
Once enabled for a volume, Data Deduplication runs in the background to:
1. Scan the file system for files meeting the optimization policy.
2. Break files into variable-size chunks.
3. Identify unique chunks.
4. Place chunks in the chunk store and optionally compress them.
5. Replace the original file stream of now optimized files with a reparse point to the chunk store.
When optimized files are read, the file system sends the files with a reparse point to the
Data Deduplication file system filter (Dedup.sys). The filter redirects the read operation
to the appropriate chunks that constitute the stream for that file in the chunk store.
Modifications to ranges of a deduplicated file get written unoptimized to the disk and
are optimized by the Optimization job the next time it runs.
Usage Types
The following Usage Types provide reasonable Data Deduplication configuration for
common workloads:
Jobs
Data Deduplication uses a post-processing strategy to optimize and maintain a volume's
space efficiency.
Optimization: The Optimization job deduplicates data on the volume per the volume's policy settings by chunking it, (optionally) compressing the chunks, and storing the chunks uniquely in the chunk store. Default schedule: once every hour.
Garbage Collection: The Garbage Collection job reclaims disk space by removing unnecessary chunks that are no longer being referenced by files that have been recently modified or deleted. Default schedule: every Saturday at 2:35 AM.
Integrity Scrubbing: The Integrity Scrubbing job identifies corruption in the chunk store due to disk failures or bad sectors. When possible, Data Deduplication can automatically use volume features (such as mirror or parity on a Storage Spaces volume) to reconstruct the corrupted data. Additionally, Data Deduplication keeps backup copies of popular chunks when they are referenced more than 100 times in an area called the hotspot. Default schedule: every Saturday at 3:35 AM.
Unoptimization: The Unoptimization job, which is a special job that should only be run manually, undoes the optimization done by deduplication and disables Data Deduplication for that volume. Schedule: on-demand only.
Chunk: A chunk is a section of a file that has been selected by the Data Deduplication chunking algorithm as likely to occur in other, similar files.
Chunk store: The chunk store is an organized series of container files in the System Volume Information folder that Data Deduplication uses to uniquely store chunks.
File metadata: Every file contains metadata that describes interesting properties about the file that are not related to the main content of the file. For instance, Date Created, Last Read Date, Author, etc.
File stream: The file stream is the main content of the file. This is the part of the file that Data Deduplication optimizes.
File system: The file system is the software and on-disk data structure that the operating system uses to store files on storage media. Data Deduplication is supported on NTFS formatted volumes.
File system filter: A file system filter is a plugin that modifies the default behavior of the file system. To preserve access semantics, Data Deduplication uses a file system filter (Dedup.sys) to redirect reads to optimized content completely transparently to the user or application that makes the read request.
Optimization policy: The optimization policy specifies the files that should be considered for Data Deduplication. For example, files may be considered out-of-policy if they are brand new, open, in a certain path on the volume, or a certain file type.
Reparse point: A reparse point is a special tag that notifies the file system to pass off I/O to a specified file system filter. When a file's file stream has been optimized, Data Deduplication replaces the file stream with a reparse point, which enables Data Deduplication to preserve the access semantics for that file.
Volume: A volume is a Windows construct for a logical storage drive that may span multiple physical storage devices across one or more servers. Deduplication is enabled on a volume-by-volume basis.
Frequently asked questions (FAQ)
How does Data Deduplication differ from Single Instance Store? Single Instance
Store, or SIS, is a technology that preceded Data Deduplication and was first
introduced in Windows Storage Server 2008 R2. To optimize a volume, Single
Instance Store identified files that were completely identical and replaced them
with logical links to a single copy of a file that's stored in the SIS common store.
Unlike Single Instance Store, Data Deduplication can get space savings from files
that are not identical but share many common patterns and from files that
themselves contain many repeated patterns. Single Instance Store was deprecated
in Windows Server 2012 R2 and removed in Windows Server 2016 in favor of Data
Deduplication.
How does Data Deduplication differ from NTFS compression? NTFS compression is a
feature of NTFS that you can optionally enable at the volume level. With NTFS
compression, each file is optimized individually via compression at write-time.
Unlike NTFS compression, Data Deduplication can get space savings across all
the files on a volume. This is better than NTFS compression because files may have
both internal duplication (which is addressed by NTFS compression) and have
similarities with other files on the volume (which is not addressed by NTFS
compression). Additionally, Data Deduplication has a post-processing model,
which means that new or modified files will be written to disk unoptimized and will
be optimized later by Data Deduplication.
How does Data Deduplication differ from archive file formats like zip, rar, 7z, cab,
etc.? Archive file formats, like zip, rar, 7z, cab, etc., perform compression over a
specified set of files. Like Data Deduplication, duplicated patterns within files and
duplicated patterns across files are optimized. However, you have to choose the
files that you want to include in the archive. Access semantics are different, too. To
access a specific file within the archive, you have to open the archive, select a
specific file, and decompress that file for use. Data Deduplication operates
transparently to users and administrators and requires no manual kick-off.
Additionally, Data Deduplication preserves access semantics: optimized files
appear unchanged after optimization.
Can I change the Data Deduplication settings for my selected Usage Type? Yes.
Although Data Deduplication provides reasonable defaults for Recommended
workloads, you might still want to tweak Data Deduplication settings to get the most
out of your storage. Additionally, other workloads will require some tweaking to ensure
that Data Deduplication does not interfere with the workload.
Can I manually run a Data Deduplication job? Yes, all Data Deduplication jobs may be
run manually. This may be desirable if scheduled jobs did not run due to insufficient
system resources or because of an error. Additionally, the Unoptimization job can only
be run manually.
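For example, a manual run might look like the following sketch (the drive letter is an assumption):
PowerShell
# Start an Optimization job on volume E: and wait for it to complete
Start-DedupJob -Volume "E:" -Type Optimization -Wait
# Check currently running or queued deduplication jobs
Get-DedupJob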
Can I monitor the historical outcomes of Data Deduplication jobs? Yes, all Data
Deduplication jobs make entries in the Windows Event Log.
Can I change the default schedules for the Data Deduplication jobs on my system?
Yes, all schedules are configurable. Modifying the default Data Deduplication schedules
is particularly desirable to ensure that the Data Deduplication jobs have time to finish
and do not compete for resources with the workload.
Install and enable Data Deduplication
Article • 07/05/2022
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Azure Stack HCI, versions 21H2 and 20H2
This topic explains how to install Data Deduplication, evaluate workloads for
deduplication, and enable Data Deduplication on specific volumes.
Note
If you're planning to run Data Deduplication in a Failover Cluster, every node in the
cluster must have the Data Deduplication server role installed.
PowerShell
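# A minimal sketch: install the Data Deduplication role service, the PowerShell
# equivalent of adding it in Server Manager.
Install-WindowsFeature -Name FS-Data-Deduplication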
Or
Connect remotely to the server instance with PowerShell remoting and install Data
Deduplication by using DISM:
PowerShell
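# Sketch, assuming PowerShell remoting to a server named MyServer (name hypothetical);
# the DISM feature name dedup-core is assumed to be the Data Deduplication feature.
Invoke-Command -ComputerName "MyServer" -ScriptBlock {
    dism /online /enable-feature /featurename:dedup-core /all
}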
Recommended workloads that have been proven to have both datasets that benefit
highly from deduplication and have resource consumption patterns that are
compatible with Data Deduplication's post-processing model. We recommend that
you always enable Data Deduplication on these workloads:
General purpose file servers (GPFS) serving shares such as team shares, user
home folders, work folders, and software development shares.
Virtualized desktop infrastructure (VDI) servers.
Virtualized backup applications, such as Microsoft Data Protection Manager
(DPM).
Workloads that might benefit from deduplication, but aren't always good
candidates for deduplication. For example, the following workloads could work
well with deduplication, but you should evaluate the benefits of deduplication first:
General purpose Hyper-V hosts
SQL servers
Line-of-business (LOB) servers
Important
If you are running a recommended workload, you can skip this section and go to
Enable Data Deduplication for your workload.
To determine whether a workload works well with deduplication, answer the following
questions. If you're unsure about a workload, consider doing a pilot deployment of Data
Deduplication on a test dataset for your workload to see how it performs.
2. What do my workload's I/O patterns to its dataset look like? What performance
do I have for my workload? Data Deduplication optimizes files as a periodic job,
rather than when the file is written to disk. As a result, it is important to examine
a workload's expected read patterns to the deduplicated volume. Because Data
Deduplication moves file content into the Chunk Store and attempts to organize
the Chunk Store by file as much as possible, read operations perform best when
they are applied to sequential ranges of a file.
Database-like workloads typically have more random read patterns than sequential
read patterns because databases do not typically guarantee that the database
layout will be optimal for all possible queries that may be run. Because the sections
of the Chunk Store may exist all over the volume, accessing data ranges in the
Chunk Store for database queries may introduce additional latency. High
performance workloads are particularly sensitive to this extra latency, but other
database-like workloads might not be.
3. What are the resource requirements of my workload on the server? Because Data
Deduplication uses a post-processing model, Data Deduplication periodically
needs to have sufficient system resources to complete its optimization and other
jobs. This means that workloads that have idle time, such as in the evening or on
weekends, are excellent candidates for deduplication, and workloads that run all
day, every day may not be. Workloads that have no idle time may still be good
candidates for deduplication if the workload does not have high resource
requirements on the server.
5. If you are running a recommended workload, you're done. For other workloads,
see Other considerations.
Note
You can find more information on excluding file extensions or folders and selecting
the deduplication schedule, including why you would want to do this, in
Configuring Data Deduplication.
PowerShell
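# Sketch: enable Data Deduplication on a volume with the Usage Type that matches the workload.
# The drive letter is an assumption; valid usage types are Default, HyperV, and Backup.
Enable-DedupVolume -Volume "E:" -UsageType Default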
2. If you are running a recommended workload, you're done. For other workloads,
see Other considerations.
Other considerations
Important
If you are running a recommended workload, you can skip this section.
What are the volume sizing requirements for deduplicated volumes? In Windows
Server 2012 and Windows Server 2012 R2, volumes had to be carefully sized to ensure
that Data Deduplication could keep up with the churn on the volume. This typically
meant that the average maximum size of a deduplicated volume for a high-churn
workload was 1-2 TB, and the absolute maximum recommended size was 10 TB. In
Windows Server 2016, these limitations were removed. For more information, see What's
new in Data Deduplication.
What are the memory requirements for Data Deduplication? At a minimum, Data
Deduplication should have 300 MB + 50 MB for each TB of logical data. For instance, if
you are optimizing a 10 TB volume, you would need a minimum of 800 MB of memory
allocated for deduplication ( 300 MB + 50 MB * 10 = 300 MB + 500 MB = 800 MB ). While
Data Deduplication can optimize a volume with this low amount of memory, having
such constrained resources will slow down Data Deduplication's jobs.
What are the storage requirements for Data Deduplication? In Windows Server 2016,
Data Deduplication can support volume sizes up to 64 TB. For more information, view
What's new in Data Deduplication.
Running Data Deduplication
Article • 02/18/2022
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Azure Stack HCI, versions 21H2 and 20H2
All settings that are available when you schedule a Data Deduplication job are also
available when you start a job manually except for the scheduling-specific settings. For
example, to start an Optimization job manually with high priority, maximum CPU usage,
and maximum memory usage, execute the following PowerShell command with
administrator privilege:
PowerShell
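# Sketch: run an Optimization job with high priority and up to 100% of the system's
# CPU cores and memory. The drive letter is an assumption.
Start-DedupJob -Type Optimization -Volume "E:" -Memory 100 -Cores 100 -Priority High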
Job successes
Because Data Deduplication uses a post-processing model, it is important that Data
Deduplication jobs succeed. An easy way to check the status of the most recent job is to
use the Get-DedupStatus PowerShell cmdlet. Periodically check the following fields:
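A hedged sketch of such a check; the last-result property names below may vary by version, so confirm them on your system with Get-DedupStatus | Format-List *:
PowerShell
Get-DedupStatus | Format-List Volume, LastOptimizationResult, LastOptimizationResultMessage, LastGarbageCollectionResult, LastScrubbingResult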
Note
More detail on job successes and failures can be found in the Windows Event
Viewer under \Applications and Services
Logs\Windows\Deduplication\Operational .
Optimization rates
One indicator of Optimization job failure is a downward-trending optimization rate,
which might indicate that the Optimization jobs are not keeping up with the rate of
changes, or churn. You can check the optimization rate by using the Get-DedupStatus
PowerShell cmdlet.
Important
Get-DedupStatus has two fields that are relevant to the optimization rate:
The Unoptimization job will fail if the volume does not have sufficient space to hold
the unoptimized data.
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Azure Stack HCI, versions 21H2 and 20H2
This document describes how to modify advanced Data Deduplication settings. For
recommended workloads, the default settings should be sufficient. The main reason to
modify these settings is to improve Data Deduplication's performance with other kinds
of workloads.
The most common reason for changing when Data Deduplication jobs run is to ensure
that jobs run during off hours. The following step-by-step example shows how to
modify the Data Deduplication schedule for a sunny day scenario: a hyper-converged
Hyper-V host that is idle on weekends and after 7:00 PM on weeknights. To change the
schedule, run the following PowerShell cmdlets in an Administrator context.
1. Disable the scheduled hourly Optimization jobs.
PowerShell
Set-DedupSchedule -Name BackgroundOptimization -Enabled $false
Set-DedupSchedule -Name PriorityOptimization -Enabled $false
2. Remove the currently scheduled Garbage Collection and Integrity Scrubbing jobs.
PowerShell
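# Sketch: remove the default Garbage Collection and Integrity Scrubbing schedules.
Get-DedupSchedule -Type GarbageCollection | Remove-DedupSchedule
Get-DedupSchedule -Type Scrubbing | Remove-DedupSchedule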
3. Create a nightly Optimization job that runs at 7:00 PM with high priority and all the
CPUs and memory available on the system.
PowerShell
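# Sketch: a weeknight Optimization job starting at 7:00 PM with high priority and all
# CPU and memory available; the schedule name, duration, and start date are assumptions.
New-DedupSchedule -Name "NightlyOptimization" -Type Optimization -DurationHours 11 -Memory 100 -Cores 100 -Priority High -Days Monday,Tuesday,Wednesday,Thursday,Friday -Start (Get-Date "2023-01-02 19:00:00")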
Note
The date part of the System.Datetime provided to -Start is irrelevant (as long
as it's in the past), but the time part specifies when the job should start.
4. Create a weekly Garbage Collection job that runs on Saturday starting at 7:00 AM
with high priority and all the CPUs and memory available on the system.
PowerShell
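# Sketch: a weekly Garbage Collection job on Saturdays at 7:00 AM with high priority
# and all CPU and memory available; the name, duration, and start date are assumptions.
New-DedupSchedule -Name "WeeklyGarbageCollection" -Type GarbageCollection -DurationHours 23 -Memory 100 -Cores 100 -Priority High -Days Saturday -Start (Get-Date "2023-01-07 07:00:00")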
5. Create a weekly Integrity Scrubbing job that runs on Sunday starting at 7 AM with
high priority and all the CPUs and memory available on the system.
PowerShell
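# Sketch: a weekly Integrity Scrubbing job on Sundays at 7:00 AM with high priority
# and all CPU and memory available; the name, duration, and start date are assumptions.
New-DedupSchedule -Name "WeeklyIntegrityScrubbing" -Type Scrubbing -DurationHours 23 -Memory 100 -Cores 100 -Priority High -Days Sunday -Start (Get-Date "2023-01-08 07:00:00")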
Days: The days that the job is scheduled. Accepted values: an array of integers 0-6 representing the days of the week (0 = Sunday, 1 = Monday, 2 = Tuesday, 3 = Wednesday, 4 = Thursday, 5 = Friday, 6 = Saturday). Scheduled tasks have to run on at least one day.
Get-DedupVolume
Set-DedupVolume
The main reasons to modify the volume settings from the selected usage type are to
improve read performance for specific files (such as multimedia or other file types that
are already compressed) or to fine-tune Data Deduplication for better optimization for
your specific workload. The following example shows how to modify the Data
Deduplication volume settings for a workload that most closely resembles a general
purpose file server workload, but uses large files that change frequently.
PowerShell
PowerShell
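# Sketch: start from the Default usage type, then relax the file-age policy and optimize
# open or partially changed files; the drive letter and values are assumptions.
Enable-DedupVolume -Volume "E:" -UsageType Default
Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 1 -OptimizeInUseFiles -OptimizePartialFiles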
MinimumFileAgeDays: Number of days after the file is created before the file is considered to be in-policy for optimization. Accepted values: positive integers (inclusive of zero). Why modify: the Default and Hyper-V usage types set this value to 3 to maximize performance on hot or recently created files. You may want to modify this if you want Data Deduplication to be more aggressive or if you do not care about the extra latency associated with deduplication.
OptimizeInUseFiles: When enabled, files that have active handles against them will be considered as in-policy for optimization. Accepted values: true/false. Why modify: enable this setting if your workload keeps files open for extended periods of time. If this setting is not enabled, a file would never get optimized if the workload has an open handle to it, even if it's only occasionally appending data at the end.
For example, you may want to disable full Garbage Collection. More information about
why this may be useful for your scenario can be found in Frequently asked questions. To
edit the registry with PowerShell:
PowerShell
Set-ItemProperty -Path HKLM:\System\CurrentControlSet\Services\ddpsvc\Settings -Name DeepGCInterval -Type DWord -Value 0xFFFFFFFF
Set-ItemProperty -Path HKLM:\CLUSTER\Dedup -Name DeepGCInterval -Type DWord -Value 0xFFFFFFFF
I want to run a Data Deduplication job right now, but I don't want to create a new
schedule--can I do this? Yes, all jobs can be run manually.
What is the difference between full and regular Garbage Collection? There are two types of Garbage Collection: regular and full. Compared with regular Garbage Collection, full Garbage Collection could adversely affect the volume's lifetime shadow copies and the size of incremental backups. High-churn or I/O-intensive workloads may see a degradation in performance from full Garbage Collection jobs.
You can manually run a full Garbage Collection job from PowerShell to clean up
leaks if you know your system crashed.
Data Deduplication interoperability
Article • 03/29/2022
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Azure Stack HCI, versions 21H2 and 20H2
Supported
ReFS
Data Deduplication is supported starting with Windows Server 2019.
Failover Clustering
Failover Clustering is fully supported, if every node in the cluster has the Data
Deduplication feature installed. Other important notes:
Manually started Data Deduplication jobs must be run on the Owner node for the
Cluster Shared Volume.
Scheduled Data Deduplication jobs are stored in the cluster task scheduler so that
if a deduplicated volume is taken over by another node, the scheduled job will be
applied on the next scheduled interval.
Data Deduplication fully interoperates with the Cluster OS Rolling Upgrade feature.
Data Deduplication is fully supported on Storage Spaces Direct with ReFS or NTFS-
formatted volumes (mirror or parity). ReFS-formatted volumes are supported
starting with Windows Server 2019. Deduplication is not supported on volumes
with multiple tiers.
Storage Replica
Storage Replica is fully supported. Data Deduplication should be configured to not run
on the secondary copy.
BranchCache
You can optimize data access over the network by enabling BranchCache on servers and
clients. When a BranchCache-enabled system communicates over a WAN with a remote
file server that is running data deduplication, all of the deduplicated files are already
indexed and hashed. Therefore, requests for data from a branch office are quickly
computed. This is similar to preindexing or prehashing a BranchCache-enabled server.
DFS Replication
Data Deduplication works with Distributed File System (DFS) Replication. Optimizing or
unoptimizing a file will not trigger a replication because the file does not change. DFS
Replication uses Remote Differential Compression (RDC), not the chunks in the chunk
store, for over-the-wire savings. The files on the replica can also be optimized by using
deduplication if the replica is using Data Deduplication.
Quotas
Data Deduplication does not support creating a hard quota on a volume root folder that
also has deduplication enabled. When a hard quota is present on a volume root, the
actual free space on the volume and the quota-restricted space on the volume are not
the same. This may cause deduplication optimization jobs to fail. It is possible, however, to create a soft quota on a volume root that has deduplication enabled.
When quota is enabled on a deduplicated volume, quota uses the logical size of the file
rather than the physical size of the file. Quota usage (including any quota thresholds)
does not change when a file is processed by deduplication. All other quota functionality,
including volume-root soft quotas and quotas on subfolders, works normally when
using deduplication.
Windows Server Backup can back up an optimized (deduplicated) volume. For example, to back up volume E: to a backup target on volume F::
PowerShell
wbadmin start backup -include:E: -backuptarget:F: -quiet
The version ID reported for the backup will be a date and time string, for example: 08/18/2016-06:22.
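To locate the version ID and restore, a hedged sketch using wbadmin (the paths and version ID are examples only):
PowerShell
# List available backup versions on the backup target to find the version ID
wbadmin get versions -backupTarget:F:
# Recover the deduplicated volume E: from that backup version
wbadmin start recovery -version:08/18/2016-06:22 -itemType:Volume -items:E: -backupTarget:F: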
Unsupported
Windows Search
Windows Search doesn't support Data Deduplication. Data Deduplication uses reparse
points, which Windows Search can't index, so Windows Search skips all deduplicated
files, excluding them from the index. As a result, search results might be incomplete for
deduplicated volumes.
Robocopy
Running Robocopy with Data Deduplication is not recommended because certain
Robocopy commands can corrupt the Chunk Store. The Chunk Store is stored in the
System Volume Information folder for a volume. If the folder is deleted, the optimized
files (reparse points) that are copied from the source volume become corrupted because
the data chunks are not copied to the destination volume.
DFS Namespaces overview
Article • 01/05/2023
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008
DFS (Distributed File System) Namespaces is a role service in Windows Server that
enables you to group shared folders located on different servers into one or more
logically structured namespaces. This makes it possible to give users a virtual view of
shared folders, where a single path leads to files located on multiple servers, as shown in
the following figure:
This article discusses how to install DFS, what's new, and where to find evaluation and
deployment information.
You can administer namespaces by using DFS Management, the DFS Namespace (DFSN)
Cmdlets in Windows PowerShell, the DfsUtil command, or scripts that call WMI.
Servers that are running the following operating systems can host multiple domain-
based namespaces in addition to a single stand-alone namespace.
Servers that are running the following operating systems can host a single stand-alone
namespace:
The following table describes additional factors to consider when choosing servers to
host a namespace.
Server hosting stand-alone namespaces: Must contain an NTFS volume to host the namespace. Can be a member server or domain controller.
Server hosting domain-based namespaces: Must contain an NTFS volume to host the namespace. Must be a member server or domain controller in the domain in which the namespace is configured. (This requirement applies to every namespace server that hosts a given domain-based namespace.)
2. On the Server Selection page, select the server or virtual hard disk (VHD) of an
offline virtual machine on which you want to install DFS.
3. Select the role services and features that you want to install.
To install the DFS Namespaces service, on the Server Roles page, select DFS
Namespaces.
To install only the DFS Management Tools, on the Features page, expand
Remote Server Administration Tools, Role Administration Tools, expand File
Services Tools, and then select DFS Management Tools.
DFS Management Tools installs the DFS Management snap-in, the DFS
Namespaces module for Windows PowerShell, and command-line tools, but
it does not install any DFS services on the server.
PowerShell
Install-WindowsFeature <name>
For example, to install the Distributed File System Tools portion of the Remote Server
Administration Tools feature, type:
PowerShell
Install-WindowsFeature "RSAT-DFS-Mgmt-Con"
To install the Distributed File System Tools portion for a client device, type:
PowerShell
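# Sketch, assuming a Windows 10/11 client where the DFS Management tools ship as an RSAT
# optional capability; the capability name below is an assumption and can be confirmed
# with Get-WindowsCapability -Online -Name "Rsat*".
Add-WindowsCapability -Online -Name "Rsat.FileServices.Tools~~~~0.0.1.0"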
To install the DFS Namespaces, and the Distributed File System Tools portions of the
Remote Server Administration Tools feature, type:
PowerShell
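# Sketch: install the DFS Namespaces role service together with the DFS Management tools.
Install-WindowsFeature "FS-DFS-Namespace", "RSAT-DFS-Mgmt-Con"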
To learn about how to get started with Azure virtual machines, see Azure virtual
machines documentation.
Additional References
For additional related information, see the following resources.
Product evaluation: What's New in DFS Namespaces and DFS Replication in Windows Server
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008
Distributed File System (DFS) Namespaces and DFS Replication can be used to publish
documents, software, and line-of-business data to users throughout an organization.
Although DFS Replication alone is sufficient to distribute data, you can use
DFS Namespaces to configure the namespace so that a folder is hosted by multiple
servers, each of which holds an updated copy of the folder. This increases data
availability and distributes the client load across servers.
When browsing a folder in the namespace, users are not aware that the folder is hosted
by multiple servers. When a user opens the folder, the client computer is automatically
referred to a server on its site. If no same-site servers are available, you can configure
the namespace to refer the client to a server that has the lowest connection cost as
defined in Active Directory Domain Services (AD DS).
Additional References
Namespaces
Checklist: Tune a DFS Namespace
Replication
Checklist: Tune a DFS namespace
Article • 07/29/2021
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008
After creating a namespace and adding folders and targets, use the following checklist
to tune or optimize the way the DFS namespace handles referrals and polls Active
Directory Domain Services (AD DS) for updated namespace data.
Prevent users from seeing folders in a namespace that they do not have
permissions to access. Enable Access-Based Enumeration on a Namespace
Enable or prevent users from being referred to a namespace or folder target when
they access a folder in the namespace. Enable or Disable Referrals and Client
Failback
Adjust how long clients cache a referral before requesting a new one. Change the
Amount of Time That Clients Cache Referrals
Optimize how namespace servers poll AD DS to obtain the most current
namespace data. Optimize Namespace Polling
Use inherited permissions to control which users can view folders in a namespace
for which access-based enumeration is enabled. Using Inherited Permissions with
Access-Based Enumeration
In addition, by using a DFS Namespaces enhancement known as target priority, you can
specify the priority of servers so that a specific server is always placed first or last in the
list of servers (known as a referral) that the client receives when it accesses a folder with
targets in the namespace.
Specify in what order users should be referred to folder targets. Set the Ordering
Method for Targets in Referrals
Override referral ordering for a specific namespace server or folder target. Set
Target Priority to Override Referral Ordering
Additional References
Namespaces
Checklist: Deploy DFS Namespaces
Tuning DFS Namespaces
Deploying DFS Namespaces
Article • 07/29/2021
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008
Choose a namespace type
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows Server 2008
When creating a namespace, you must choose one of two namespace types: a stand-alone
namespace or a domain-based namespace. In addition, if you choose a domain-based
namespace, you must choose a namespace mode: Windows 2000 Server mode or Windows
Server 2008 mode.
Choose a stand-alone namespace if any of the following conditions apply to your environment:
Your organization does not use Active Directory Domain Services (AD DS).
You want to increase the availability of the namespace by using a failover cluster.
You need to create a single namespace with more than 5,000 DFS folders in a domain that
does not meet the requirements for a domain-based namespace (Windows Server 2008
mode) as described later in this topic.
Note
To check the size of a namespace, right-click the namespace in the DFS Management
console tree, click Properties, and then view the namespace size in the Namespace
Properties dialog box. For more information about DFS Namespace scalability, see the
Microsoft website File Services.
Choose a domain-based namespace if any of the following conditions apply to your environment:
You want to ensure the availability of the namespace by using multiple namespace servers.
You want to hide the name of the namespace server from users. This makes it easier to
replace the namespace server or migrate the namespace to another server.
To use domain-based namespaces in Windows Server 2008 mode, the domain and namespace must meet the following minimum requirements:
The forest uses the Windows Server 2003 or higher forest functional level.
The domain uses the Windows Server 2008 or higher domain functional level.
All namespace servers are running Windows Server 2012 R2, Windows Server 2012,
Windows Server 2008 R2, or Windows Server 2008.
If your environment supports it, choose the Windows Server 2008 mode when you create new
domain-based namespaces. This mode provides additional features and scalability, and also
eliminates the possible need to migrate a namespace from the Windows 2000 Server mode.
For information about migrating a namespace to Windows Server 2008 mode, see Migrate a
Domain-based Namespace to Windows Server 2008 Mode.
If your environment does not support domain-based namespaces in Windows Server 2008 mode,
use the existing Windows 2000 Server mode for the namespace.
Path to namespace:
Stand-alone namespace: \\ServerName\RootName
Domain-based namespace (Windows 2000 Server mode): \\NetBIOSDomainName\RootName or \\DNSDomainName\RootName
Domain-based namespace (Windows Server 2008 mode): \\NetBIOSDomainName\RootName or \\DNSDomainName\RootName
Namespace information storage location:
Stand-alone namespace: In the registry and in a memory cache on the namespace server
Domain-based namespace (Windows 2000 Server mode): In AD DS and in a memory cache on each namespace server
Domain-based namespace (Windows Server 2008 mode): In AD DS and in a memory cache on each namespace server
Namespace size recommendations:
Stand-alone namespace: The namespace can contain more than 5,000 folders with targets; the recommended limit is 50,000 folders with targets
Domain-based namespace (Windows 2000 Server mode): The size of the namespace object in AD DS should be less than 5 megabytes (MB) to maintain compatibility with domain controllers that are not running Windows Server 2008. This means no more than approximately 5,000 folders with targets
Domain-based namespace (Windows Server 2008 mode): The namespace can contain more than 5,000 folders with targets; the recommended limit is 50,000 folders with targets
Minimum supported namespace servers:
Stand-alone namespace: Windows 2000 Server
Domain-based namespace (Windows 2000 Server mode): Windows 2000 Server
Domain-based namespace (Windows Server 2008 mode): Windows Server 2008
Supported methods to ensure namespace availability:
Stand-alone namespace: Create a stand-alone namespace on a failover cluster
Domain-based namespace (Windows 2000 Server mode): Use multiple namespace servers to host the namespace. (The namespace servers must be in the same domain.)
Domain-based namespace (Windows Server 2008 mode): Use multiple namespace servers to host the namespace. (The namespace servers must be in the same domain.)
Additional References
Deploying DFS Namespaces
Migrate a Domain-based Namespace to Windows Server 2008 Mode
Create a DFS namespace
Article • 07/29/2021
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008
To create a new namespace, you can use Server Manager to create the namespace when
you install the DFS Namespaces role service. You can also use the New-DfsnRoot cmdlet
from a Windows PowerShell session.
The DFSN Windows PowerShell module was introduced in Windows Server 2012.
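As a sketch of the PowerShell route, a domain-based namespace in Windows Server 2008 mode could be created as follows; the server, share, and domain names are hypothetical:
PowerShell
# Create the namespace \\contoso.com\Public hosted on Server1's Public share
New-DfsnRoot -Path "\\contoso.com\Public" -TargetPath "\\Server1\Public" -Type DomainV2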
Alternatively, you can use the following procedure to create a namespace after installing
the role service.
To create a namespace
1. Click Start, point to Administrative Tools, and then click DFS Management.
2. In the console tree, right-click the Namespaces node, and then click New
Namespace.
Additional References
Deploying DFS Namespaces
Choose a Namespace Type
Add Namespace Servers to a Domain-based DFS Namespace
Delegate Management Permissions for DFS Namespaces.
Migrate a domain-based namespace to
Windows Server 2008 Mode
Article • 07/29/2021
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008
The Windows Server 2008 mode for domain-based namespaces includes support for
access-based enumeration and increased scalability.
1. Open a Command Prompt window and type the following command to export the
namespace to a file, where \\domain\namespace is the name of the appropriate
domain, and namespace and path\filename is the path and file name of the file for
export:
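A sketch of the export step, with example domain, namespace, and file path:
dfsutil root export \\contoso.com\Public C:\dfstools\Public.xml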
2. Write down the path (\\server\share) for each namespace server. You must
manually add namespace servers to the re-created namespace because Dfsutil
cannot import namespace servers.
3. In DFS Management, right-click the namespace and then click Delete, or type the
following command at a command prompt,
where \\domain\namespace is the name of the appropriate domain and
namespace:
Dfsutil root remove \\domain\namespace
4. In DFS Management, re-create the namespace with the same name, but use the
Windows Server 2008 mode, or type the following command at a command
prompt, where
\\server\namespace is the name of the appropriate server and share for the
namespace root:
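A sketch of re-creating the root from the command line; the v2 switch is assumed to select Windows Server 2008 mode, and the server and share names are examples (verify the exact dfsutil syntax for your build):
dfsutil root adddom \\Server1\Public v2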
5. To import the namespace from the export file, type the following command at a
command prompt, where
\\domain\namespace is the name of the appropriate domain and namespace and
path\filename is the path and file name of the file to import:
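A sketch of the import step with example names; run it locally on a namespace server as noted below:
dfsutil root import merge C:\dfstools\Public.xml \\contoso.com\Public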
Note
To minimize the time required to import a large namespace, run the Dfsutil
root import command locally on a namespace server.
Note
You can add namespace servers before importing the namespace, but doing
so causes the namespace servers to incrementally download the metadata for
the namespace instead of immediately downloading the entire namespace
after being added as a namespace server.
Additional References
Deploying DFS Namespaces
Choose a Namespace Type
Add namespace servers to a domain-
based DFS namespace
Article • 07/29/2021
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008
1. Click Start, point to Administrative Tools, and then click DFS Management.
Note
This procedure is not applicable for stand-alone namespaces because they support
only a single namespace server. To increase the availability of a stand-alone
namespace, specify a failover cluster as the namespace server in the New
Namespace Wizard.
Tip
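For example, a namespace server can also be added from PowerShell with the New-DfsnRootTarget cmdlet; this is a sketch and the namespace and server names are hypothetical:
PowerShell
# Add Server2 as an additional namespace server for \\contoso.com\Public
New-DfsnRootTarget -Path "\\contoso.com\Public" -TargetPath "\\Server2\Public"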
Create a folder in a namespace
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008
You can use folders to create additional levels of hierarchy in a namespace. You can also
create folders with folder targets to add shared folders to the namespace. DFS folders
with folder targets cannot contain other DFS folders, so if you want to add a level of
hierarchy to the namespace, do not add folder targets to the folder.
Use the following procedure to create a folder in a namespace using DFS Management:
3. In the Name text box, type the name of the new folder.
4. To add one or more folder targets to the folder, click Add and specify the Universal
Naming Convention (UNC) path of the folder target, and then click OK .
Tip
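Folders can also be created from PowerShell with the New-DfsnFolder cmdlet; the following is a sketch with hypothetical names:
PowerShell
# Create the folder Tools in the namespace and point it at a shared folder
New-DfsnFolder -Path "\\contoso.com\Public\Tools" -TargetPath "\\FileServer1\Tools"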
Additional References
Deploying DFS Namespaces
Delegate Management Permissions for DFS Namespaces
Add folder targets
Article • 05/11/2023
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008
A folder target is the Universal Naming Convention (UNC) path of a shared folder or
another namespace that is associated with a folder in a namespace. Adding multiple
folder targets increases the availability of the folder in the namespace.
Prerequisites
The following must be installed to use this feature:
A Windows Server with the DFS Namespaces role service installed as part of the
File and Storage Services server role. To learn more, see Install or Uninstall Roles,
Role Services, or Features.
An account with Administrative privileges.
A server to host the DFS folder target.
1. Click Start > Windows Administrative Tools > select DFS Management.
Alternatively, click Start > type dfsmgmt.msc > hit Enter .
2. In the console tree, under the Namespaces node, right-click on the namespace
where you want to add the folder, then select New Folder.
3. In the popup box, provide a name for this folder in the Name field, then click Add.
4. Type the path to the folder target, or click Browse to locate the folder target, click
OK, then click OK again.
The following animation demonstrates the steps to add a folder target by using DFS
Management.
If the folder is replicated using DFS Replication, you can specify whether to add the new
folder target to the replication group.
Tip
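Folder targets can also be added from PowerShell with the New-DfsnFolderTarget cmdlet; this sketch uses hypothetical names:
PowerShell
# Add a second folder target for the existing DFS folder
New-DfsnFolderTarget -Path "\\contoso.com\Public\Tools" -TargetPath "\\FileServer2\Tools"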
Note
Folders can contain folder targets or other DFS folders, but not both, at the same
level in the folder hierarchy.
Additional references
Deploying DFS Namespaces
Delegate Management Permissions for DFS Namespaces
Replicate Folder Targets Using DFS Replication
Replicate folder targets using DFS
Replication
Article • 07/29/2021
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, and
Windows Server 2008
You can use DFS Replication to keep the contents of folder targets in sync so that users
see the same files regardless of which folder target the client computer is referred to.
2. In the console tree, under the Namespaces node, right-click a folder that has two
or more folder targets, and then click Replicate Folder.
Note
Configuration changes are not applied immediately to all members except when
using the Suspend-DfsReplicationGroup and Sync-DfsReplicationGroup cmdlets.
The new configuration must be replicated to all domain controllers, and each
member in the replication group must poll its closest domain controller to obtain
the changes. The amount of time this takes depends on the Active Directory
Directory Services (AD DS) replication latency and the long polling interval (60
minutes) on each member. To poll immediately for configuration changes, open a
Command Prompt window and then type the following command once for each
member of the replication group:
dfsrdiag.exe PollAD /Member:DOMAIN\Server1
To do so from a Windows PowerShell session, use the Update-
DfsrConfigurationFromAD cmdlet, which was introduced in Windows
Server 2012 R2.
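A sketch of the PowerShell equivalent, with hypothetical member names:
PowerShell
# Poll AD DS immediately for DFS Replication configuration changes on each member
Update-DfsrConfigurationFromAD -ComputerName "SRV01","SRV02"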
Additional References
Deploying DFS Namespaces
Delegate Management Permissions for DFS Namespaces
DFS Replication
Delegate management permissions for
DFS Namespaces
Article • 07/29/2021
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008
The following table describes the groups that can perform basic namespace tasks by
default, and the method for delegating the ability to perform these tasks.

Task: Create a domain-based namespace
Groups that can perform this task by default: Domain Admins group in the domain where the namespace is configured
Delegation method: Right-click the Namespaces node in the console tree, and then click Delegate Management Permissions. Or use the Set-DfsnRoot GrantAdminAccounts and Set-DfsnRoot RevokeAdminAccounts Windows PowerShell cmdlets (introduced in Windows Server 2012). You must also add the user to the local Administrators group on the namespace server.

Task: Add a namespace server to a domain-based namespace
Groups that can perform this task by default: Domain Admins group in the domain where the namespace is configured
Delegation method: Right-click the domain-based namespace in the console tree, and then click Delegate Management Permissions. Or use the Set-DfsnRoot GrantAdminAccounts and Set-DfsnRoot RevokeAdminAccounts Windows PowerShell cmdlets (introduced in Windows Server 2012). You must also add the user to the local Administrators group on the namespace server to be added.

Task: Manage a domain-based namespace
Groups that can perform this task by default: Local Administrators group on each namespace server
Delegation method: Right-click the domain-based namespace in the console tree, and then click Delegate Management Permissions.

Task: Create a stand-alone namespace
Groups that can perform this task by default: Local Administrators group on the namespace server
Delegation method: Add the user to the local Administrators group on the namespace server.

Task: Manage a stand-alone namespace*
Groups that can perform this task by default: Local Administrators group on the namespace server
Delegation method: Right-click the stand-alone namespace in the console tree, and then click Delegate Management Permissions. Or use the Set-DfsnRoot GrantAdminAccounts and Set-DfsnRoot RevokeAdminAccounts Windows PowerShell cmdlets (introduced in Windows Server 2012).

Task: Create a replication group or enable DFS Replication on a folder
Groups that can perform this task by default: Domain Admins group in the domain where the namespace is configured
Delegation method: Right-click the Replication node in the console tree, and then click Delegate Management Permissions.
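As a rough PowerShell illustration of the delegation entries above (a sketch; the
namespace path and account are placeholders, and it assumes the GrantAdminAccounts
and RevokeAdminAccounts parameters of the Set-DfsnRoot cmdlet):
PowerShell
# Delegate management of a domain-based namespace to a group (placeholder values).
Set-DfsnRoot -Path "\\contoso.office\Public" -GrantAdminAccounts "CONTOSO\DfsAdmins"
# Remove the delegation later if it is no longer needed.
Set-DfsnRoot -Path "\\contoso.office\Public" -RevokeAdminAccounts "CONTOSO\DfsAdmins"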
Tuning DFS Namespaces
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008
After creating a namespace and adding folders and targets, refer to the following
sections to tune or optimize the way DFS Namespaces handles referrals and polls Active
Directory Domain Services (AD DS) for updated namespace data:
Note
To search for folders or folder targets, select a namespace, click the Search tab,
type your search string in the text box, and then click Search.
Enable or Disable Referrals and Client
Failback
Article • 07/29/2021
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008
A referral is an ordered list of servers that a client computer receives from a domain
controller or namespace server when the user accesses a namespace root or DFS folder
with targets. After the computer receives the referral, the computer attempts to access
the first server in the list. If the server is not available, the client computer attempts to
access the next server. If a server becomes unavailable, you can configure clients to fail
back to the preferred server after it becomes available.
The following sections provide information about how to enable or disable referrals or
enable client failback:
1. In the DFS Management console tree, under the Namespaces node, click a
folder containing targets, and then click the Folder Targets tab in the Details
pane.
2. Right-click the folder target, and then click either Disable Folder Target or
Enable Folder Target.
1. In the DFS Management console tree, select the appropriate namespace and
then click the Namespace Servers tab.
2. Right-click the appropriate namespace server and then click either Disable
Namespace Server or Enable Namespace Server.
Tip
To enable or disable referrals by using Windows PowerShell, use the Set-
DfsnRootTarget –State or Set-DfsnServerConfiguration cmdlets, which were
introduced in Windows Server 2012.
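For instance, a sketch of taking a namespace root target offline and bringing it back
online (the namespace and server names are placeholders):
PowerShell
# Stop referring clients to this root target (placeholder paths).
Set-DfsnRootTarget -Path "\\contoso.office\Public" -TargetPath "\\SRV01\Public" -State Offline
# Resume referrals to the target.
Set-DfsnRootTarget -Path "\\contoso.office\Public" -TargetPath "\\SRV01\Public" -State Online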
2. In the console tree, under the Namespaces node, right-click a namespace, and
then click Properties.
3. On the Referrals tab, select the Clients fail back to preferred targets check box.
Folders with targets inherit client failback settings from the namespace root. If client
failback is disabled on the namespace root, you can use the following procedure to
enable the client to fail back on a folder with targets.
2. In the console tree, under the Namespaces node, right-click a folder with targets,
and then click Properties.
3. On the Referrals tab, click the Clients fail back to preferred targets check box.
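Client failback can also be scripted; the following sketch uses placeholder paths and
assumes the EnableTargetFailback parameter on the Set-DfsnRoot and Set-DfsnFolder
cmdlets:
PowerShell
# Enable client failback on the namespace root (placeholder path).
Set-DfsnRoot -Path "\\contoso.office\Public" -EnableTargetFailback $true
# Enable client failback on a folder with targets (placeholder path).
Set-DfsnFolder -Path "\\contoso.office\Public\Training" -EnableTargetFailback $true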
Additional References
Tuning DFS Namespaces
Review DFS Namespaces Client Requirements
Delegate Management Permissions for DFS Namespaces
Change the amount of time that clients
cache referrals
Article • 07/29/2021
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008
A referral is an ordered list of targets that a client computer receives from a domain
controller or namespace server when the user accesses a namespace root or folder with
targets in the namespace. You can adjust how long clients cache a referral before
requesting a new one.
2. In the console tree, under the Namespaces node, right-click a namespace, and
then click Properties.
3. On the Referrals tab, in the Cache duration (in seconds) text box, type the amount
of time (in seconds) that clients cache namespace root referrals. The default setting
is 300 seconds (five minutes).
Tip
To change the amount of time that clients cache namespace root referrals by using
Windows PowerShell, use the Set-DfsnRoot cmdlet with the TimeToLiveSec parameter,
which was introduced in Windows Server 2012.
3. On the Referrals tab, in the Cache duration (in seconds) text box, type the amount
of time (in seconds) that clients cache folder referrals. The default setting is 1800
seconds (30 minutes).
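A PowerShell sketch of the same settings, using placeholder paths and the default
durations mentioned above:
PowerShell
# Cache namespace root referrals for 300 seconds (placeholder path).
Set-DfsnRoot -Path "\\contoso.office\Public" -TimeToLiveSec 300
# Cache folder referrals for 1800 seconds (placeholder path).
Set-DfsnFolder -Path "\\contoso.office\Public\Training" -TimeToLiveSec 1800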
Additional References
Tuning DFS Namespaces
Delegate Management Permissions for DFS Namespaces
Set the Ordering Method for Targets in
Referrals
Article • 07/29/2021
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008
A referral is an ordered list of targets that a client computer receives from a domain
controller or namespace server when the user accesses a namespace root or folder with
targets. After the client receives the referral, the client attempts to access the first target
in the list. If the target is not available, the client attempts to access the next target.
Targets on the client's site are always listed first in a referral. Targets outside of the
client's site are listed according to the ordering method.
Use the following sections to specify in what order targets should be referred to clients
and to understand the different methods of ordering target referrals:
1. Click Start, point to Administrative Tools, and then click DFS Management.
2. In the console tree, under the Namespaces node, right-click a namespace, and
then click Properties.
Note
To use Windows PowerShell to set the ordering method for targets in namespace
root referrals, use the Set-DfsnRoot cmdlet with one of the following parameters:
The DFSN Windows PowerShell module was introduced in Windows Server 2012.
1. Click Start, point to Administrative Tools, and then click DFS Management.
2. In the console tree, under the Namespaces node, right-click a folder with targets,
and then click Properties.
3. On the Referrals tab, select the Exclude targets outside of the client's site check
box.
Note
To use Windows PowerShell to exclude folder targets outside of the client's site, use
the Set-DfsnFolder –EnableInsiteReferrals cmdlet.
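As a sketch with placeholder paths (the parameter names assume the DFSN module), the
ordering behavior can also be set from PowerShell:
PowerShell
# Use the lowest-cost ordering method for targets outside the client's site (placeholder path).
Set-DfsnRoot -Path "\\contoso.office\Public" -EnableSiteCosting $true
# Exclude folder targets outside of the client's site (placeholder path).
Set-DfsnFolder -Path "\\contoso.office\Public\Training" -EnableInsiteReferrals $true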
Random order
Lowest cost
Exclude targets outside of the client's site
Random order
In this method, targets are ordered as follows:
1. Targets in the same Active Directory Domain Services (AD DS) site as the client
are listed in random order at the top of the referral.
2. Targets outside of the client's site are listed in random order.
If no same-site target servers are available, the client computer is referred to a random
target server regardless of how expensive the connection is or how distant the target.
Lowest cost
In this method, targets are ordered as follows:
1. Targets in the same site as the client are listed in random order at the top of the
referral.
2. Targets outside of the client's site are listed in order of lowest cost to highest cost.
Referrals with the same cost are grouped together, and the targets are listed in
random order within each group.
Note
Site link costs are not shown in the DFS Management snap-in. To view site link
costs, use the Active Directory Sites and Services snap-in.
Note
Targets that have target priority set to "First among all targets" or "Last among all
targets" are still listed in the referral, even if the ordering method is set to Exclude
targets outside of the client's site.
Additional References
Tuning DFS Namespaces
Delegate Management Permissions for DFS Namespaces
Set target priority to override referral
ordering
Article • 07/29/2021
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008
A referral is an ordered list of targets that a client computer receives from a domain
controller or namespace server when the user accesses a namespace root or folder with
targets in the namespace. Each target in a referral is ordered according to the ordering
method for the namespace root or folder.
To refine how targets are ordered, you can set priority on individual targets. For
example, you can specify that the target is first among all targets, last among all targets,
or first or last among all targets of equal cost.
1. Click Start, point to Administrative Tools, and then click DFS Management.
2. In the console tree, under the Namespaces node, click the domain-based
namespace for the root targets for which you want to set priority.
3. In the Details pane, on the Namespace Servers tab, right-click the root target with
the priority that you want to change, and then click Properties.
4. On the Advanced tab, click Override referral ordering, and then click the priority
you want.
First among all targets: Specifies that users should always be referred to this
target if the target is available.
Last among all targets: Specifies that users should never be referred to this
target unless all other targets are unavailable.
First among targets of equal cost: Specifies that users should be referred to
this target before other targets of equal cost (which usually means other
targets in the same site).
Last among targets of equal cost: Specifies that users should never be
referred to this target if there are other targets of equal cost available (which
usually means other targets in the same site).
1. Click Start, point to Administrative Tools, and then click DFS Management.
2. In the console tree, under the Namespaces node, click the folder of the targets for
which you want to set priority.
3. In the Details pane, on the Folder Targets tab, right-click the folder target with the
priority that you want to change, and then click Properties .
4. On the Advanced tab, click Override referral ordering and then click the priority
that you want.
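If you script these settings, a rough PowerShell equivalent looks like the following
sketch; the paths are placeholders, and the priority values assume the
ReferralPriorityClass parameter of the Set-DfsnRootTarget and Set-DfsnFolderTarget
cmdlets:
PowerShell
# Make a root target first among all targets (placeholder paths).
Set-DfsnRootTarget -Path "\\contoso.office\Public" -TargetPath "\\SRV01\Public" -ReferralPriorityClass GlobalHigh
# Make a folder target last among targets of equal cost (placeholder paths).
Set-DfsnFolderTarget -Path "\\contoso.office\Public\Training" -TargetPath "\\SRV02\Training" -ReferralPriorityClass SiteCostLow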
Additional References
Tuning DFS Namespaces
Delegate Management Permissions for DFS Namespaces
Optimize Namespace Polling
Article • 07/29/2021
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008
1. Click Start, point to Administrative Tools, and then click DFS Management.
3. On the Advanced tab, select whether you want the namespace optimized for
consistency or scalability.
Note
To set the namespace polling mode by using Windows PowerShell, use the Set-
DfsnRoot EnableRootScalability cmdlet, which was introduced in Windows
Server 2012.
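For example, a sketch with a placeholder namespace path:
PowerShell
# Optimize the namespace for scalability (namespace servers poll the closest domain controller).
Set-DfsnRoot -Path "\\contoso.office\Public" -EnableRootScalability $true
# Set the value to $false to optimize for consistency instead.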
Additional References
Tuning DFS Namespaces
Delegate Management Permissions for DFS Namespaces
Enable access-based enumeration on a
namespace
Article • 07/29/2021
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008
Access-based enumeration hides files and folders that users do not have permissions to
access. By default, this feature is not enabled for DFS namespaces. You can enable
access-based enumeration of DFS folders by using DFS Management. To control access-
based enumeration of files and folders in folder targets, you must enable access-based
enumeration on each shared folder by using Share and Storage Management.
Note
If you upgrade the domain functional level to Windows Server 2008 while there are
existing domain-based namespaces, DFS Management will allow you to enable
access-based enumeration on these namespaces. However, you will not be able to
edit permissions to hide folders from any groups or users unless you migrate the
namespaces to the Windows Server 2008 mode. For more information, see Migrate
a Domain-based Namespace to Windows Server 2008 Mode.
To use access-based enumeration with DFS Namespaces, you must follow these steps:
Warning
Access-based enumeration does not prevent users from getting a referral to a
folder target if they already know the DFS path. Only the share permissions or the
NTFS file system permissions of the folder target (shared folder) itself can prevent
users from accessing a folder target. DFS folder permissions are used only for
displaying or hiding DFS folders, not for controlling access, making Read access the
only relevant permission at the DFS folder level. For more information, see Using
Inherited Permissions with Access-Based Enumeration
You can enable access-based enumeration on a namespace either by using the Windows
interface or by using a command line.
2. Click the Advanced tab and then select the Enable access-based enumeration for
this namespace check box.
3. Click Set explicit view permissions on the DFS folder and then Configure view
permissions.
5. To allow users to see the DFS folder, select the group or user, and then select the
Allow check box.
To hide the folder from a group or user, select the group or user, and then select
the Deny check box.
2. Type the following command, where <DFSPath> is the path of the DFS folder
(link), <DOMAIN\Account> is the name of the group or user account, and (...) is
replaced with additional Access Control Entries (ACEs):
dfsutil property sd grant <DFSPath> <DOMAIN\Account>:R (...) Protect Replace
For example, to replace existing permissions with permissions that allow the
Domain Admins and CONTOSO\Trainers groups Read (R) access to the
\contoso.office\public\training folder, type the following command:
dfsutil property sd grant \\contoso.office\public\training
"CONTOSO\Domain Admins":R CONTOSO\Trainers:R Protect Replace
3. To perform additional tasks from the command prompt, use the following
commands:
Dfsutil property sd deny: Denies a group or user the ability to view the folder.
Dfsutil property sd revoke: Removes a group or user ACE from the folder.
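A PowerShell alternative for the same tasks is sketched below; it reuses the example
namespace and accounts above as placeholders and assumes the Set-DfsnRoot,
Grant-DfsnAccess, and Revoke-DfsnAccess cmdlets:
PowerShell
# Enable access-based enumeration on the namespace (placeholder path).
Set-DfsnRoot -Path "\\contoso.office\public" -EnableAccessBasedEnumeration $true
# Allow a group to see the training DFS folder (placeholder account).
Grant-DfsnAccess -Path "\\contoso.office\public\training" -AccountName "CONTOSO\Trainers"
# Remove that view permission later if needed.
Revoke-DfsnAccess -Path "\\contoso.office\public\training" -AccountName "CONTOSO\Trainers"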
Additional References
Create a DFS Namespace
Delegate Management Permissions for DFS Namespaces
Installing DFS
Using Inherited Permissions with Access-Based Enumeration
Using inherited permissions with
Access-based Enumeration
Article • 07/29/2021
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008
By default, the permissions used for a DFS folder are inherited from the local file system
of the namespace server. The permissions are inherited from the root directory of the
system drive and grant the DOMAIN\Users group Read permissions. As a result, even
after enabling access-based enumeration, all folders in the namespace remain visible to
all domain users.
You can quickly apply inherited permissions to many folders without having to use
scripts.
You can apply inherited permissions to namespace roots and folders without
targets.
Despite the benefits, inherited permissions in DFS Namespaces have many limitations
that make them inappropriate for most environments:
Set explicit permissions for the folder, disabling inheritance. To set explicit
permissions on a folder with targets (a link) using DFS Management or the Dfsutil
command, see Enable Access-Based Enumeration on a Namespace.
Modify inherited permissions on the parent in the local file system. To modify the
permissions inherited by a folder with targets, if you have already set explicit
permissions on the folder, switch to inherited permissions from explicit
permissions, as discussed in the following procedure. Then use Windows Explorer
or the Icacls command to modify the permissions of the folder from which the
folder with targets inherits its permissions.
3. Click Use inherited permissions from the local file system and then click OK in the
Confirm Use of Inherited Permissions dialog box. Doing this removes all explicitly
set permissions on this folder, restoring inherited NTFS permissions from the local
file system of the namespace server.
Additional References
Create a DFS Namespace
DFS Replication overview
Article • 03/28/2023
Applies To: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008
DFS Replication is an efficient, multiple-master replication engine that you can use to
keep folders synchronized between servers across limited bandwidth network
connections. The service replaces the File Replication Service (FRS) as the replication
engine for DFS namespaces.
Tip
Consider using Azure File Sync to reduce your on-premises storage footprint.
Azure File Sync can keep multiple Windows file servers in sync. Each server needs to
only keep a cache on-premises while the full copy of the data is in the cloud. Azure
File Sync also provides the benefit of cloud backups with integrated snapshots. For
more information, see Planning for an Azure File Sync deployment.
Active Directory Domain Services (AD DS) uses DFS Replication to replicate the sysvol
folder in domains that use the Windows Server 2008 or later domain functional level. For
more information about replicating the sysvol folder by using DFS Replication, see
Migrate the sysvol folder replication to DFS Replication.
A replicated folder stays synchronized on each member in a group. In the figure, there
are two replicated folders: Projects and Proposals. As the data changes in each
replicated folder, the changes are replicated across connections between the members
of the replication group. The connections between all members form the replication
topology.
Creating multiple replicated folders in a single replication group simplifies the process
of deploying replicated folders. The topology, schedule, and bandwidth throttling for
the replication group are applied to each replicated folder. To deploy more replicated
folders, you can run the Dfsradmin.exe tool or use a wizard to define the local path and
permissions for the new replicated folder.
Each replicated folder has unique settings, such as file and subfolder filters. The settings
let you filter out different files and subfolders for each replicated folder.
The replicated folders stored on each member can be located on different volumes in
the member, and the replicated folders don't need to be shared folders or part of a
namespace. However, the DFS Management snap-in simplifies sharing replicated folders
and optionally publishing them in an existing namespace.
You can install DFS Replication by using Server Manager, Windows PowerShell, or
Windows Admin Center.
You can administer DFS Replication by using DFS Management, the dfsradmin and
dfsrdiag commands, or scripts that call WMI.
Deployment requirements
Before you can deploy DFS Replication, you must configure your servers as follows:
Confirm file system and volume format. Determine the folders that you want to
replicate, and identify any folders located on volumes that are formatted with the
NTFS file system. DFS Replication doesn't support the Resilient File System (ReFS)
or the FAT file system. DFS Replication also doesn't support replicating content
stored on Cluster Shared Volumes.
Prepare replication group servers. Install DFS Replication on all servers that you
plan to use as members of a replication group.
Check forest location. Ensure all servers in a replication group are located in the
same forest. You can't enable replication across servers in different forests.
Snapshots and saved states. Don't use snapshots or saved states to restore a
server that's running DFS Replication for any replicated folder other than the
sysvol folder. If you attempt this type of restore, DFS Replication fails, and
recovering requires special database recovery steps. Also, don't export, clone, or
copy the virtual machines. For more information, see DFSR no longer replicates files
after restoring a virtualized server's snapshot and Safely virtualizing DFSR.
VPN connection. DFS Replication requires a VPN connection between your on-
premises replication group members and any members hosted in Azure virtual
machines. You also need to configure the on-premises router (such as Forefront
Threat Management Gateway) to allow the RPC Endpoint Mapper (port 135) and a
randomly assigned port between 49152 and 65535 to pass over the VPN
connection. You can use the Set-DfsrMachineConfiguration cmdlet or the dfsrdiag
command-line tool to specify a static port instead of the random port. For more
information about how to specify a static port for DFS Replication, see Set-
DfsrServiceConfiguration. For information about related ports to open for
managing Windows Server, see Service overview and network port requirements
for Windows.
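For example, a sketch of pinning the service to a static RPC port; the port number and
server name are placeholders, and it assumes the RPCPort parameter of the
Set-DfsrServiceConfiguration cmdlet:
PowerShell
# Configure a fixed RPC port for DFS Replication on a member server (placeholder values).
Set-DfsrServiceConfiguration -ComputerName "SRV01" -RPCPort 49152
# A rough command-line equivalent:
# dfsrdiag.exe StaticRPC /Port:49152 /Member:SRV01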
To learn how to get started with Azure virtual machines, visit the Microsoft Azure
website.
2. Select Manage, and then select Add Roles and Features. The Add Roles and
Features wizard opens.
3. Under Server Selection, select the server where you want to install DFS Replication,
or select the virtual hard disk (VHD) of an offline virtual machine.
5. Expand File and Storage Services > File and iSCSI Services, and then select DFS
Replication.
2. Enter the following command to install the role services or features that support
DFS Replication.
For the <name> parameter, enter the names of the role services or features that
you want to install. You can install one or multiple services and features in a single
command. The table lists the names of the relevant role services and features.
PowerShell
Install-WindowsFeature <name>
To install only the DFS Management Tools (the Distributed File System Tools
portion of Remote Server Administration Tools), enter the following command:
PowerShell
Install-WindowsFeature "RSAT-DFS-Mgmt-Con"
To install both the DFS Replication service and the DFS Management Tools,
enter the following command:
PowerShell
Install-WindowsFeature "FS-DFS-Replication", "RSAT-DFS-Mgmt-Con"
Related links
DFS Namespaces and DFS Replication overview
Checklist: Deploy DFS Replication
Checklist: Manage DFS Replication
Deploying DFS Replication
Troubleshooting DFS Replication
Migrate SYSVOL replication to DFS
Replication
Article • 04/25/2023
Applies To: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, and
Windows Server 2008
Domain controllers use a special shared folder named SYSVOL to replicate sign-in
scripts and Group Policy object files to other domain controllers. Windows 2000 Server
and Windows Server 2003 use the File Replication Service (FRS) to replicate SYSVOL.
Windows Server 2008 uses the newer Distributed File System Replication (DFS
Replication) service for domains that use the Windows Server 2008 domain functional
level. Windows Server 2008 uses FRS for domains that run older domain functional
levels.
To use DFS Replication to replicate the SYSVOL folder, you can create a new domain that
uses the Windows Server 2008 domain functional level. You can also use the procedure
described in this article to upgrade an existing domain and migrate replication to DFS
Replication.
Prerequisites
This article assumes you have a basic knowledge of Active Directory Domain Services
(AD DS), FRS, and Distributed File System Replication (DFS Replication). For more
information, see:
Printable download
To download a printable version of this guide, go to SYSVOL Replication Migration
Guide: FRS to DFS Replication .
Migration topics
The SYSVOL migration guide provides topics that describe a range of concepts and
procedures from the use of FRS to the use of DFS Replication. Use the following list to access articles
about migrating the SYSVOL folder to use DFS Replication.
Concepts
Review these concepts about SYSVOL migration states for a basic understanding of the
migration tasks.
Procedures
Follow these SYSVOL migration procedures for a basic understanding of the migration
states.
Troubleshooting
Access these articles that describe known issues and provide troubleshooting guidance.
References
The following resources offer supplemental reference information:
Use Robocopy to pre-seed files for DFS Replication
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008
This topic explains how to use the command-line tool, Robocopy.exe, to pre-seed files
when setting up replication for Distributed File System (DFS) Replication (also known as
DFSR or DFS-R) in Windows Server. By pre-seeding files before you set up DFS
Replication, add a new replication partner, or replace a server, you can speed up initial
synchronization and enable cloning of the DFS Replication database in Windows Server
2012 R2. The Robocopy method is one of several pre-seeding methods; for an overview,
see Step 1: pre-seed files for DFS Replication.
The Robocopy (Robust File Copy) command-line utility is included with Windows Server.
The utility provides extensive options that include copying security, backup API support,
retry capabilities, and logging. Later versions include multi-threading and un-buffered
I/O support.
Important
Robocopy does not copy exclusively locked files. If users tend to lock many files for
long periods on your file servers, consider using a different pre-seeding method.
Pre-seeding does not require a perfect match between file lists on the source and
destination servers, but the more files that do not exist when initial synchronization
is performed for DFS Replication, the less effective pre-seeding is. To minimize lock
conflicts, use Robocopy during non-peak hours for your organization. Always
examine the Robocopy logs after pre-seeding to ensure that you understand which
files were skipped because of exclusive locks.
To use Robocopy to pre-seed files for DFS Replication, follow these steps:
You need an account that's a member of the local Administrators group on both
the source and destination servers.
Install the most recent version of Robocopy on the server that you will use to copy
the files—either the source server or the destination server; you will need to install
the most recent version for the operating system version. For instructions, see Step
2: Stabilize files that will be replicated. Unless you are pre-seeding files from a
server running Windows Server 2003 R2, you can run Robocopy on either the
source or destination server. The destination server, which typically has the more
recent operating system version, gives you access to the most recent version of
Robocopy.
Ensure that sufficient storage space is available on the destination drive. Do not
create a folder on the path that you plan to copy to: Robocopy must create the
root folder.
Note
When you decide how much space to allocate for the pre-seeded files,
consider expected data growth over time and storage requirements for DFS
Replication. For planning help, see Edit the Quota Size of the Staging Folder
and Conflict and Deleted Folder in Managing DFS Replication.
On the source server, optionally install Process Monitor or Process Explorer, which
you can use to check for applications that are locking files. For download
information, see Process Monitor and Process Explorer.
The source for the latest compatible Robocopy version depends on the version of
Windows Server that is running on the server. For information about downloading the
hotfix with the most recent version of Robocopy for Windows Server 2008 R2 or
Windows Server 2008, see List of currently available hotfixes for Distributed File System
(DFS) technologies in Windows Server 2008 and in Windows Server 2008 R2 .
Alternatively, you can locate and install the latest hotfix for an operating system by
taking the following steps.
3. Locate and download the hotfix with the highest ID number (that is, the latest
version).
Applications open files locally: Application workloads running on a file server
sometimes lock files. To stabilize these files, temporarily disable or uninstall the
applications that are locking files. You can use Process Monitor or Process Explorer
to determine which applications are locking files. To download Process Monitor or
Process Explorer, visit the Process Monitor and Process Explorer pages.
Note
You can run Robocopy on either the source computer or the destination computer.
The following procedure describes running Robocopy on the destination server,
which typically is running a more recent operating system, to take advantage of any
additional Robocopy capabilities that the more recent operating system might
provide.
Pre-seed the replicated files onto the destination server
with Robocopy
1. Sign in to the destination server with an account that's a member of the local
Administrators group on both the source and destination servers.
3. To pre-seed the files from the source to destination server, run the following
command, substituting your own source, destination, and log file paths for the
bracketed values:
PowerShell
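# A representative pre-seeding command (a sketch; the retry, wait, and thread-count
# values shown are reasonable assumptions rather than required settings).
# /xd DfsrPrivate excludes the DFS Replication private folder from the copy.
robocopy.exe "<source replicated folder path>" "<destination replicated folder path>" /e /b /copyall /r:6 /w:5 /MT:64 /xd DfsrPrivate /tee /log:"<log file path>"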
This command copies all contents of the source folder to the destination folder,
with the following parameters:
"<destination replicated folder path>": Specifies the path to the folder that will store
the pre-seeded files. The destination folder must not already exist on the destination
server. To get matching file hashes, Robocopy must create the root folder when it
pre-seeds the files.
/copyall: Copies all file information, including data, attributes, time stamps, the
NTFS access control list (ACL), owner information, and auditing information.
/tee: Writes status output to the console window, as well as to the log file.
/log <log file path>: Specifies the log file to write. Overwrites the file's existing
contents. (To append the entries to the existing log file, use /log+ <log file path>.)
For example, the following command replicates files from the source replicated
folder, E:\RF01, to data drive D on the destination server:
PowerShell
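# Sketch only: the destination server name (SRV02) and the log path are hypothetical values.
robocopy.exe "E:\RF01" "\\SRV02\D$\RF01" /e /b /copyall /r:6 /w:5 /MT:64 /xd DfsrPrivate /tee /log:"C:\temp\preseed-SRV02.log"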
Note
We recommend that you use the parameters described above when you use
Robocopy to pre-seed files for DFS Replication. However, you can change
some of their values or add additional parameters. For example, you might
find out through testing that you have the capacity to set a higher value
(thread count) for the /MT parameter. Also, if you'll primarily replicate larger
files, you might be able to increase copy performance by adding the /j option
for unbuffered I/O. For more information about Robocopy parameters, see
the Robocopy command-line reference.
Warning
To avoid potential data loss when you use Robocopy to pre-seed files for DFS
Replication, do not make the following changes to the recommended
parameters:
Do not use the /mir parameter (that mirrors a directory tree) or the /mov
parameter (that moves the files, then deletes them from the source).
Do not remove the /e, /b, and /copyall options.
4. After copying completes, examine the log for any errors or skipped files. Use
Robocopy to copy any skipped files individually instead of recopying the entire set
of files. If files were skipped because of exclusive locks, either try copying
individual files with Robocopy later, or accept that those files will require over-the-
wire replication by DFS Replication during initial synchronization.
Next step
After you complete the initial copy, and use Robocopy to resolve issues with as many
skipped files as possible, you will use the Get-DfsrFileHash cmdlet in Windows
PowerShell or the Dfsrdiag command to validate the pre-seeded files by comparing file
hashes on the source and destination servers. For detailed instructions, see Step 2:
Validate pre-seeded Files for DFS Replication.
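For instance, a quick spot check might compare the hash of the same file on both
servers; this is a sketch, and the UNC paths are placeholders:
PowerShell
# Compare pre-seeded file hashes on the source and destination servers (placeholder paths).
Get-DfsrFileHash -Path "\\SRV01\E$\RF01\Docs\LargeFile.vhdx"
Get-DfsrFileHash -Path "\\SRV02\D$\RF01\Docs\LargeFile.vhdx"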
DFS Replication FAQ
FAQ
Applies To: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008
This FAQ answers questions about Distributed File System (DFS) Replication (also known
as DFS-R or DFSR) for Windows Server.
For information about DFS Namespaces, see DFS Namespaces: Frequently Asked
Questions.
For information about what's new in DFS Replication, see the following topics:
DFS Namespaces and DFS Replication Overview (in Windows Server 2012)
For a list of recent changes to this topic, see the Change history section of this topic.
Interoperability
Can DFS Replication communicate with FRS?
No. DFS Replication does not communicate with File Replication Service (FRS). DFS
Replication and FRS can run on the same server at the same time, but they must never
be configured to replicate the same folders or subfolders because doing so can cause
data loss.
For more information about replicating SYSVOL by using DFS Replication, see the
Migrate SYSVOL replication to DFS Replication.
To migrate replication of folders other than the SYSVOL folder, see DFS Operations
Guide: Migrating from FRS to DFS Replication and FRS2DFSR – An FRS to DFSR
Migration Utility (https://go.microsoft.com/fwlink/?LinkID=195437 ).
You can also use the SMB/CIFS client functionality included in many UNIX clients to
directly access the Windows file shares, although this functionality is often limited or
requires modifications to the Windows environment (such as disabling SMB Signing by
using Group Policy).
DFS Replication interoperates with NFS on a server running a Windows Server operating
system, but you can't replicate an NFS mount point.
To back up files that are stored in a replicated folder, use Windows Server Backup or
Microsoft® System Center Data Protection Manager. For information about Backup and
Recovery functionality in Windows Server 2008 R2 and Windows Server 2008, see
Backup and Recovery. For more information, see System Center Data Protection
Manager (https://go.microsoft.com/fwlink/?LinkId=182261 ).
The following are best practices for implementing file screens or quotas:
The hidden DfsrPrivate folder must not be subject to quotas or file screens.
Screened files must not exist in any replicated folder before screening is enabled.
Note
Don't use DFS Replication with Offline Files in a multi-user environment because DFS
Replication doesn't provide any distributed locking mechanism or file checkout
capability. If two users modify the same file at the same time on different servers, DFS
Replication moves the older file to the DfsrPrivate\ConflictandDeleted folder (located
under the local path of the replicated folder) during the next replication.
Opening .pst files over network connections could lead to data corruption in the
.pst files. For more information about why .pst files cannot be safely accessed from
across a network, see article 297019 in the Microsoft Knowledge Base
(https://go.microsoft.com/fwlink/?LinkId=125363 ).
.pst and Access files tend to stay open for long periods of time while being
accessed by a client such as Outlook or Office Access. This prevents DFS
Replication from replicating these files until they are closed.
If the volume contains a Windows paging file, replication fails and logs DFSR event
4312 in the system event log.
DFS Replication sets the System and Hidden attributes on the replicated folder on
the destination server(s). This occurs because Windows applies the System and
Hidden attributes to the volume root folder by default. If the local path of the
replicated folder on the destination server(s) is also a volume root, no further
changes are made to the folder attributes.
When replicating a volume that contains the Windows system folder, DFS
Replication recognizes the %WINDIR% folder and does not replicate it. However,
DFS Replication does replicate folders used by non-Microsoft applications, which
might cause the applications to fail on the destination server(s) if the applications
have interoperability issues with DFS Replication.
For more information, see "DFS Replication security requirements and delegation" in the
Delegate the Ability to Manage DFS Replication (https://go.microsoft.com/fwlink/?
LinkId=182294 ).
Note
For a list of scalability guidelines that have been tested by Microsoft for Windows
Server 2003 R2, see DFS Replication scalability guidelines
(https://go.microsoft.com/fwlink/?LinkId=75043 ).
When multiple users need to modify the same files at the same time on different
servers, use the file check-out feature of Windows SharePoint Services to ensure that
only one user is working on a file. Windows SharePoint Services 2.0 with Service Pack 2
is available as part of Windows Server 2003 R2. Windows SharePoint Services can be
downloaded from the Microsoft Web site; it is not included in newer versions of
Windows Server.
Use the DFSR Windows PowerShell module included in Windows Server 2012 R2 or
DfsrAdmin.exe in conjunction with Scheduled Tasks to regularly generate health
reports. For more information, see Automating DFS Replication Health Reports
(https://go.microsoft.com/fwlink/?LinkId=74010 ).
Use the DFS Replication Management Pack for System Center Operations Manager
to create alerts that are based on specified conditions.
DFS Management is included with Windows Server 2012 R2, Windows Server 2012,
Windows Server 2008 R2, Windows Server 2008, and Windows Server 2003 R2. To
manage DFS Replication from other versions of Windows, use Remote Desktop or the
Remote Server Administration Tools for Windows 7.
Important
DFS Replication has a management pack for System Center Operations Manager
that provides proactive monitoring.
DFS Management has an in-box diagnostic report for the replication backlog,
replication efficiency, and the number of files and folders in a given replication
group.
Performance
Does DFS Replication support dial-up
connections?
Although DFS Replication will work at dial-up speeds, it can get backlogged if there are
large numbers of changes to replicate. If small changes are made to existing files, DFS
Replication with Remote Differential Compression (RDC) will provide a much higher
performance than copying the file directly.
Nonetheless, the bandwidth throttling is not 100% accurate and DFS Replication can
saturate the link for short periods of time. This is because DFS Replication throttles
bandwidth by throttling RPC calls. Because this process relies on various buffers in lower
levels of the network stack, including RPC, the replication traffic tends to travel in bursts
which may at times saturate the network links.
RDC is not used on files smaller than 64 KB and might not be beneficial on high-speed
LANs where network bandwidth is not contended. RDC can be disabled on a per-
connection basis using DFS Management.
The replication group schedule may be set to Coordinated Universal Time (UTC) while the
connection schedule is set to the local time of the receiving member. Take this into
account when the replication group spans multiple time zones. Local time means the
time of the member hosting the inbound connection. The displayed schedule of the
inbound connection and the corresponding outbound connection reflect time zone
differences when the schedule is set to local time.
* You can optionally disable cross-file RDC on Windows Server 2012 R2.
You can change the RDC size threshold by using the Dfsradmin Connection Set
command, the DFS Replication WMI Provider, or by manually editing the configuration
XML file.
To use cross-file RDC, one member of the replication connection must be running an
edition of Windows that supports cross-file RDC. For a list of editions that support
cross-file RDC, see Which editions of the Windows operating system support cross-file
RDC?
Replication details
Can I change the path for a replicated folder
after it is created?
No. If you need to change the path of a replicated folder, you must delete it in DFS
Management and add it back as a new replicated folder. DFS Replication then uses
Remote Differential Compression (RDC) to perform a synchronization that determines
whether the data is the same on the sending and receiving members. It does not
replicate all the data in the folder again.
For a list of attribute values and their descriptions, see File Attributes on MSDN
(https://go.microsoft.com/fwlink/?LinkId=182268 ).
The following attribute values are set by using the SetFileAttributes dwFileAttributes
function, and they are replicated by DFS Replication. Changes to these attribute values
trigger replication of the attributes. The contents of the file are not replicated unless the
contents change as well. For more information, see SetFileAttributes Function in the
MSDN library (https://go.microsoft.com/fwlink/?LinkId=182269 ).
FILE_ATTRIBUTE_HIDDEN
FILE_ATTRIBUTE_READONLY
FILE_ATTRIBUTE_SYSTEM
FILE_ATTRIBUTE_NOT_CONTENT_INDEXED
FILE_ATTRIBUTE_OFFLINE
The following attribute values are replicated by DFS Replication, but they do not trigger
replication.
FILE_ATTRIBUTE_ARCHIVE
FILE_ATTRIBUTE_NORMAL
The following file attribute values also trigger replication, although they cannot be set
by using the SetFileAttributes function (use the GetFileAttributes function to view
the attribute values).
FILE_ATTRIBUTE_REPARSE_POINT
Note
DFS Replication does not replicate reparse point attribute values unless the reparse
tag is IO_REPARSE_TAG_SYMLINK. Files with the IO_REPARSE_TAG_DEDUP,
IO_REPARSE_TAG_SIS or IO_REPARSE_TAG_HSM reparse tags are replicated as
normal files. However, the reparse tag and reparse data buffers are not replicated
to other servers because the reparse point only works on the local system.
FILE_ATTRIBUTE_COMPRESSED
FILE_ATTRIBUTE_ENCRYPTED
Note
DFS Replication does not replicate files that are encrypted by using the Encrypting
File System (EFS). DFS Replication does replicate files that are encrypted by using
non-Microsoft software, but only if it does not set the FILE_ATTRIBUTE_ENCRYPTED
attribute value on the file.
FILE_ATTRIBUTE_SPARSE_FILE
FILE_ATTRIBUTE_DIRECTORY
The initial replication does not need to replicate contents when files differ only by real
attributes or time stamps. A real attribute is an attribute that can be set by the Win32
function SetFileAttributes . For more information, see SetFileAttributes Function in the
MSDN library (https://go.microsoft.com/fwlink/?LinkId=182269 ). If two files differ by
other attributes, such as compression, then the contents of the file are replicated.
To prestage a replication group member, copy the files to the appropriate folder on the
destination server(s), create the replication group, and then choose a primary member.
Choose the member that has the most up-to-date files that you want to replicate
because the primary member's content is considered "authoritative." This means that
during initial replication, the primary member's files will always overwrite other versions
of the files on other members of the replication group.
For information about pre-seeding and cloning the DFSR database, see DFS Replication
Initial Sync in Windows Server 2012 R2: Attack of the Clones .
For more information about the initial replication, see Create a Replication Group.
Journal wraps: DFS Replication recovers from journal wraps on the fly. Each existing
file or folder will be marked as journalWrap and verified against the file system
before replication is enabled again. During the recovery, this volume is not
available for replication in either direction.
Microsoft does not support creating NTFS hard links to or from files in a replicated
folder – doing so can cause replication issues with the affected files. Hard link files
are ignored by DFS Replication and are not replicated. Junction points also are not
replicated, and DFS Replication logs event 4406 for each junction point it
encounters.
The only reparse points replicated by DFS Replication are those that use the
IO_REPARSE_TAG_SYMLINK tag; however, DFS Replication does not guarantee that
the target of a symlink is also replicated. For more information, see the Ask the
Directory Services Team blog.
You can force replication by using the Sync-DfsReplicationGroup cmdlet, introduced in
Windows Server 2012 R2, or the Dfsrdiag SyncNow command. You can force polling by
using the Update-DfsrConfigurationFromAD cmdlet, or the Dfsrdiag PollAD command.
If you are using Windows Server 2008 or Windows Server 2003 R2, you can simulate a
one-way connection by performing the following actions:
Train administrators to make changes only on the server(s) that you want to
designate as primary servers. Then let the changes replicate to the destination
servers.
Configure the share permissions on the destination servers so that end users do
not have Write permissions. If no changes are allowed on the branch servers, then
there is nothing to replicate back, simulating a one-way connection and keeping
WAN utilization low.
If the initial replication fails or the DFS Replication service restarts during the replication,
the primary member sees the primary member designation in the local DFS Replication
database and retries the initial replication. If the primary member's DFS Replication
database is lost after clearing the primary designation in Active Directory Domain
Services, but before all members of the replication group complete the initial replication,
all members of the replication group fail to replicate the folder because no server is
designated as the primary member. If this happens, use the Dfsradmin membership
/set /isprimary:true command on the primary member server to restore the primary
member designation manually.
For more information about initial replication, see Create a Replication Group.
Warning
The primary member designation is used only during the initial replication process.
If you use the Dfsradmin command to specify a primary member for a replicated
folder after replication is complete, DFS Replication does not designate the server
as a primary member in Active Directory Domain Services. However, if the DFS
Replication database on the server subsequently suffers irreversible corruption or
data loss, the server attempts to perform an initial replication as the primary
member instead of recovering its data from another member of the replication
group. Essentially, the server becomes a rogue primary server, which can cause
conflicts. For this reason, specify the primary member manually only if you are
certain that the initial replication has irretrievably failed.
If RDC is turned off, DFS Replication completely restarts the file transfer. This can delay
when the file is available on the receiving member.
When a conflict occurs, DFS Replication logs an informational event to the DFS
Replication event log. This event does not require user action for the following reasons:
DFS Replication treats the Conflict and Deleted folder as a cache. When a quota
threshold is reached, it cleans out some of those files. There is no guarantee that
conflicting files will be saved.
The conflict could reside on a server different from the origin of the conflict.
Staging
Does DFS Replication continue staging files
when replication is disabled by a schedule or
bandwidth throttling quota, or when a
connection is manually disabled?
No. DFS Replication does not continue to stage files outside of scheduled replication
times, if the bandwidth throttling quota has been exceeded, or when connections are
disabled.
Change history
October 9, 2013: Updated the "What are the supported limits of DFS Replication?" section with results from tests on Windows Server 2012 R2. (Reason: updates for the latest version of Windows Server.)
October 31, 2012: Edited the "What are the supported limits of DFS Replication?" entry to increase the tested number of replicated files on a volume. (Reason: customer feedback.)
August 15, 2012: Edited the "Does DFS Replication replicate NTFS file permissions, alternate data streams, hard links, and reparse points?" entry to further clarify how DFS Replication handles hard links and reparse points. (Reason: feedback from Customer Support Services.)
June 13, 2012: Edited the "Does DFS Replication work on ReFS or FAT volumes?" entry to add discussion of ReFS. (Reason: customer feedback.)
April 25, 2012: Edited the "Does DFS Replication replicate NTFS file permissions, alternate data streams, hard links, and reparse points?" entry to clarify how DFS Replication handles hard links. (Reason: reduce potential confusion.)
March 30, 2011: Edited the "Can DFS Replication replicate Outlook .pst or Microsoft Office Access database files?" entry to correct the potential impact of using DFS Replication with .pst and Access files. Added "How can I improve replication performance?" (Reason: customer questions about the previous entry, which incorrectly indicated that replicating .pst or Access files could corrupt the DFS Replication database.)
January 26, 2011: Added "How can files be recovered from the ConflictAndDeleted or PreExisting folders?" (Reason: customer feedback.)
How to determine the minimum staging area DFSR needs for a replicated folder
This article is a quick reference guide on how to calculate the minimum staging area
needed for DFSR to function properly. Values lower than these may cause replication to
go slowly or stop altogether.
Keep in mind these are minimums only. When considering staging area size, the bigger
the staging area the better, up to the size of the Replicated Folder. See the section "How
to determine if you have a staging area problem" and the blog posts linked at the end
of this article for more details on why it is important to have a properly sized staging
area.
General guidance
The staging area quota must be as large as the 32 largest files in the Replicated Folder.
Initial Replication will make much more use of the staging area than day-to-day
replication. Setting the staging area higher than the minimum during initial replication is
strongly encouraged if you have the drive space available.
PowerShell
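# Sketch of command 1 (the replicated folder path is a placeholder):
# list the 32 largest files in the Replicated Folder with their sizes in bytes.
Get-ChildItem "<ReplicatedFolderPath>" -Recurse |
    Sort-Object -Property Length -Descending |
    Select-Object -First 32 |
    Format-Table -Property Name, Length -AutoSize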
This command will return the file names and the size of the files in bytes. Useful if
you want to know what 32 files are the largest in the Replicated Folder so you can
“visit” their owners.
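A sketch of the second command (again with a placeholder path):
PowerShell
# Sum the sizes, in bytes, of the 32 largest files in the replicated folder (placeholder path).
Get-ChildItem "<ReplicatedFolderPath>" -Recurse |
    Sort-Object -Property Length -Descending |
    Select-Object -First 32 |
    Measure-Object -Property Length -Sum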
This command will return the total number of bytes of the 32 largest files in the
folder without listing the file names.
PowerShell
# Placeholder path; run both lines together to get the size of the 32 largest files in gigabytes.
$big32 = Get-ChildItem "<ReplicatedFolderPath>" -Recurse | Sort-Object -Property Length -Descending | Select-Object -First 32 | Measure-Object -Property Length -Sum
$big32.Sum / 1GB
This command will get the total number of bytes of 32 largest files in the folder
and do the math to convert bytes to gigabytes for you. This command is two
separate lines. You can paste both them into the PowerShell command shell at
once or run them back to back.
Manual Walkthrough
Running command 1 will return results similar to the output below. This example only
uses 16 files for brevity. Always use 32 for Windows 2008 and later operating systems.
Name Length
File5.zip 10286089216
archive.zip 6029853696
BACKUP.zip 5751522304
file9.zip 5472683008
MENTOS.zip 5241586688
File7.zip 4321264640
file2.zip 4176765952
frd2.zip 4176765952
BACKUP.zip 4078994432
File44.zip 4058424320
file11.zip 3858056192
Backup2.zip 3815138304
BACKUP3.zip 3815138304
Current.zip 3576931328
Backup8.zip 3307488256
File999.zip 3274982400
First, sum the total number of bytes. Next divide the total by 1073741824. Microsoft
Excel is an easy way to do this.
Example
From the example above the total number of bytes = 75241684992. To get the
minimum staging area quota needed you need to divide 75241684992 by 1073741824.
Based on this data you would set the staging area to 71 GB if you round up to the
nearest whole number.
Event ID: 4202 Severity: Warning
The DFS Replication service has detected that the staging space in use for the
replicated folder at local path (path) is above the high watermark. The service will
attempt to delete the oldest staging files. Performance may be affected.
Event ID: 4204 Severity: Informational
The DFS Replication service has successfully deleted old staging files for the
replicated folder at local path (path). The staging space is now below the high
watermark.
Event ID: 4206 Severity: Warning
The DFS Replication service failed to clean up old staging files for the replicated
folder at local path (path). The service might fail to replicate some large files and the
replicated folder might get out of sync. The service will automatically retry staging
space cleanup in (x) minutes. The service may start cleanup earlier if it detects some
staging files have been unlocked.
Event ID: 4208 Severity: Warning
The DFS Replication service detected that the staging space usage is above the
staging quota for the replicated folder at local path (path). The service might fail to
replicate some large files and the replicated folder might get out of sync. The
service will attempt to clean up staging space automatically.
Event ID: 4212 Severity: Error
The DFS Replication service could not replicate the replicated folder at local path
(path) because the staging path is invalid or inaccessible.
1. Is the Replicated Folder (RF) logging 4202 performing initial replication? If so, it is
normal to log 4202 and 4204 events. You will want to keep these down to as few
as possible during Initial Replication by providing as much staging area as possible.
2. Simply checking the total number of 4202 events is not sufficient. You have to
know how many were logged per RF. If you log twenty 4202 events for one RF in a
24-hour period, that is high. However, if you have 20 Replicated Folders and there is
one event per folder, you are doing well.
3. You should examine several days of data to establish trends.
We usually counsel customers to allow no more than one 4202 event per Replicated
Folder per day under normal operating conditions. “Normal” meaning no Initial
Replication is occurring. We base this on the reasoning that:
1. Time spent cleaning up the staging area is time spent not replicating files.
Replication is paused while the staging area is cleared.
2. DFSR benefits from a full staging area, using it for RDC and cross-file RDC and for
replicating the same files to other members.
3. The more 4202 and 4204 events you log the greater the odds you will run into the
condition where DFSR cannot clean up the staging area or will have to prematurely
purge files from the staging area.
4. 4206, 4208 and 4212 events are, in my experience, always preceded and followed
by a high number of 4202 and 4204 events.
While allowing for only one 4202 event per RF per day is conservative, it greatly
decreases your odds of running into staging area problems and better utilizes your
DFSR server’s resources for the intended purpose of replicating files.
Understanding (the Lack of) Distributed
File Locking in DFSR
Article • 06/21/2022
This article discusses the absence of a multi-host distributed file locking mechanism
within Windows, and specifically within folders replicated by DFSR.
Some Background
Distributed File Locking – this refers to the concept of having multiple copies of a
file on several computers and when one file is opened for writing, all other copies
are locked. This prevents a file from being modified on multiple servers at the
same time by several users.
Distributed File System Replication – DFSR operates in a multi-master, state-based
design. In state-based replication, each server in the multi-master system applies
updates to its replica as they arrive, without exchanging log files (it instead uses
version vectors to maintain “up-to-dateness” information). No one server is ever
arbitrarily authoritative after initial sync, so it is highly available and very flexible on
various network topologies.
Server Message Block - SMB is the common protocol used in Windows for
accessing files over the network. In simplified terms, it's a client-server protocol
that makes use of a redirector to have remote file systems appear to be local file
systems. It is not specific to Windows and is quite common – a well known non-
Microsoft example is Samba, which allows Linux, Mac, and other operating systems
to act as SMB clients/servers and participate in Windows networks.
It's important to make a clear delineation of where DFSR and SMB live in your replicated
data environment. SMB allows users to access their files, and it has no awareness of
DFSR. Likewise, DFSR (using the RPC protocol) keeps files in sync between servers and
has no awareness of SMB. Don't confuse distributed locking as defined in this post with
Opportunistic Locking.
Since users can modify data on multiple servers, and since each Windows server only
knows about a file lock on itself, and since DFSR doesn't know anything about those
locks on other servers, it becomes possible for users to overwrite each other's changes.
DFSR uses a “last writer wins” conflict algorithm, so someone has to lose and the person
to save last gets to keep their changes. The losing file copy is chucked into the
ConflictAndDeleted folder.
Now, this is far less common than people like to believe. Typically, true shared files are
modified in a local environment; in the branch office or in the same row of cubicles.
They are usually worked on by people on the same team, so people are generally aware
of colleagues modifying data. And since they are usually in the same site, the odds are
much higher that all the users working on a shared doc will be using the same server.
Windows SMB handles the situation here. When a user has a file locked for modification
and his coworker tries to edit it, the other user gets a sharing violation error. And if the
application opening the file is really clever, like Word 2007, it might instead tell you who
has the file locked and offer to open a read-only copy or notify you when the file becomes
available.
DFSR does have a mechanism for locked files, but it is only within the server's own
context. DFSR will not replicate a file in or out if its local copy has an exclusive lock. But
this doesn't prevent anyone on another server from modifying the file.
Back on topic, the issue of shared data being modified geographically does exist, and for
some folks it's pretty gnarly. We're occasionally asked why DFSR doesn't handle this
locking and take care of everything with a wave of the magic wand. It turns out this is an
interesting and difficult scenario to solve for a multi-master replication system. Let's
explore.
Third-Party Solutions
There are some vendor solutions that take on this problem, which they typically tackle
through one or more of the following methods*:
Use of a broker mechanism
Having a central ‘traffic cop' allows one server to be aware of all the other servers
and which files they have locked by users. Unfortunately this also means that there
is often a single point of failure in the distributed locking system.
Since a central broker must be able to talk to all servers participating in file
replication, this removes the ability to handle complex network topologies. Ring
topologies and multi hub-and-spoke topologies are not usually possible. In a non-
fully routed network, some servers may not be able to directly contact each other or
a broker, and can only talk to a partner who himself can talk to another server – and
so on. This is fine in a multi-master environment, but not with a brokering
mechanism.
Are limited to a pair of servers
Some solutions limit the topology to a pair of servers in order to simplify their
distributed locking mechanism. For larger environments this may not be feasible.
* Note that I say typically! Please do not post death threats because you have a solution
that does/does not implement one or more of those methods!
Deeper Thoughts
As you think further about this issue, some fundamental issues start to crop up. For
example, if we have four servers with data that can be modified by users in four sites,
and the WAN connection to one of them goes offline, what do we do? The users can still
access their individual servers – but should we let them? We don't want them to make
changes that conflict, but we definitely want them to keep working and making our
company money. If we arbitrarily block changes at that point, no users can work even
though there may not actually be any conflicts happening! There's no way to tell the
other servers that the file is in use and you're back at square one.
Then there's SMB itself and the error handling of reporting locks. We can't really change
how SMB reports sharing violations as we'd break a ton of applications and clients
wouldn't understand new extended error messages anyways. Applications like Word
2007 do some undercover trickery to figure out who is locking files, but the vast
majority of applications don't know who has a file in use (or even that SMB exists.
Really.). So when a user gets the message ‘This file is in use' it's not particularly
actionable – should they all call the help desk? Does the help desk have access to all the
file servers to see which users are accessing files? Messy.
Since we want multi-master for high availability, a broker system is less desirable; we
might need to have something running on all servers that allows them all to
communicate even through non-fully routed networks. This will require very complex
synchronization techniques. It will add some overhead on the network (although
probably not much) and it will need to be lightning fast to make sure that we are not
holding up the user in their work; it needs to outrun file replication itself - in fact, it
might need to actually be tied to replication somehow. It will also have to account for
server outages that are network related and not server crashes, somehow.
And then we're back to special client software for this scenario that better understands
the locks and can give the user some useful info (“Go call Susie in accounting and tell
her to release that doc”, “Sorry, the file locking topology is broken and your
administrator is preventing you from opening this file until it's fixed”, etc). Getting this to
play nicely with the millions of applications running in Windows will definitely be
interesting. There are plenty of OS's that would not be supported or get the software –
Windows 2000 is out of mainstream support and XP soon will be. Linux and Mac clients
wouldn't have this software until they felt it was important, so the customer would have
to hope their vendors made something analogous.
More information
Right now the easiest way to control this situation in DFSR is to use DFS Namespaces to
guide users to predictable locations, with a consistent namespace. By correctly
configuring your DFSN site topology and server links, you force users to all share the
same local server and only allow them to access remote computers when their ‘main'
server is down. For most environments, this works quite well. Alternative to DFSR,
SharePoint is an option because of its check-out/check-in system. BranchCache (coming
in Windows Server 2008 R2 and Windows 7) may be an option for you as it is designed
for easing the reading of files in a branch scenario, but in the end the authoritative data
will still live on one server only – more on this here. And again, those vendors have their
solutions.
Overview of Disk Management
Article • 03/22/2023
Applies To: Windows 11, Windows 10, Windows Server 2022, Windows Server 2019,
Windows Server 2016
Disk Management is a system utility in Windows for advanced storage operations. Here
are some tasks you can complete with Disk Management:
Set up a new drive. For more information, see Initialize new disks.
Extend a volume into space that's not already part of a volume on the same drive.
For more information, see Extend a basic volume.
Change a drive letter or assign a new drive letter. For more information, see
Change a drive letter.
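The same kinds of tasks can also be scripted with the in-box Storage module cmdlets. A
minimal sketch follows; the disk number and drive letters are placeholders, so check
Get-Disk before running anything destructive:
PowerShell
# Set up a new drive: initialize it, create a partition, and format it
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS

# Change an existing partition's drive letter
Set-Partition -DriveLetter E -NewDriveLetter F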
The following image shows the Disk Management overview for several drives. Disk 0 has
three partitions, and Disk 1 has two partitions. On Disk 0, the C: drive for Windows uses
the most disk space. Two other partitions for system operations and recovery use a
smaller amount of disk space.
Windows typically includes three partitions on your main drive (usually the C:\ drive).
These partitions include the EFI System Partition, the Local Disk (C:) Partition, and a
Recovery Partition.
The Windows operating system is installed on the Local Disk (C:) Partition. This
partition is the common storage location for your other apps and files.
Modern PCs use the EFI System Partition to start (boot) your PC and your
operating system.
The Recovery Partition stores special tools to help you recover Windows, in case
there's a problem starting the PC or other serious issues.
Important
Disk Management might show the EFI System Partition and Recovery Partition as
100 percent free. However, these partitions store critical files that your PC needs to
operate properly, and the partitions are generally nearly full. It's recommended to
not modify these partitions in any way.
Troubleshoot issues
Sometimes a Disk Management task reports an error, or a procedure doesn't work as
expected. There are several options available to help you resolve the issue.
If you don't find an answer on the site, you can post a question for input from
Microsoft or other members of the community. You can also Contact Microsoft
Support .
Free up disk space. For more information, see Free up drive space in Windows .
Defragment or optimize your drives. For more information, see Ways to improve
your computer's performance .
Related links
Manage disks
Manage basic volumes
Troubleshooting Disk Management
Recovery options in Windows
Find lost files after the upgrade to Windows
Backup and Restore in Windows
Create a recovery drive
Create a system restore point
Where to look for your BitLocker recovery key
Overview of file sharing using the SMB
3 protocol in Windows Server
Article • 01/26/2023
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012
This topic describes the SMB 3 feature in Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, and Windows Server 2012—practical uses for the feature, the
most significant new or updated functionality in this version compared to previous
versions, and the hardware requirements. SMB is also a fabric protocol used by
software-defined data center (SDDC) solutions such as Storage Spaces Direct, Storage
Replica, and others. SMB version 3.0 was introduced with Windows Server 2012 and has
been incrementally improved in subsequent releases.
Feature description
The Server Message Block (SMB) protocol is a network file sharing protocol that allows
applications on a computer to read and write to files and to request services from server
programs in a computer network. The SMB protocol can be used on top of the TCP/IP
protocol or other network protocols. Using the SMB protocol, an application (or the user
of an application) can access files or other resources at a remote server. This allows
applications to read, create, and update files on the remote server. SMB can also
communicate with any server program that is set up to receive an SMB client request.
SMB is a fabric protocol that is used by Software-defined Data Center (SDDC)
computing technologies, such as Storage Spaces Direct and Storage Replica. For more
information, see Windows Server software-defined datacenter.
Practical applications
This section discusses some new practical ways to use the new SMB 3.0 protocol.
File storage for virtualization (Hyper-V™ over SMB). Hyper-V can store virtual
machine files, such as configuration, Virtual hard disk (VHD) files, and snapshots, in
file shares over the SMB 3.0 protocol. This can be used for both stand-alone file
servers and clustered file servers that use Hyper-V together with shared file
storage for the cluster.
Microsoft SQL Server over SMB. SQL Server can store user database files on SMB
file shares. Currently, this is supported with SQL Server 2008 R2 for stand-alone
SQL servers. Upcoming versions of SQL Server will add support for clustered SQL
servers and system databases.
Traditional storage for end-user data. The SMB 3.0 protocol provides
enhancements to the Information Worker (or client) workloads. These
enhancements include reducing the application latencies experienced by branch
office users when accessing data over wide area networks (WAN) and protecting
data from eavesdropping attacks.
Note
If you need to conserve storage space on an SMB file share, consider using Azure
File Sync with cloud tiering enabled. This allows you to cache your most frequently
accessed files locally and tier your least frequently accessed files to the cloud,
saving local storage space while maintaining performance. For details, see Planning
for an Azure File Sync deployment.
Ability to require write-through to disk on file shares that aren't continuously available
(New): To provide some added assurance that writes to a file share make it all the way
through the software and hardware stack to the physical disk prior to the write operation
returning as completed, you can enable write-through on the file share using either the
NET USE /WRITETHROUGH command or the New-SMBMapping -UseWriteThrough PowerShell cmdlet.
There's some amount of performance hit to using write-through; see the blog post
Controlling write-through behaviors in SMB for further discussion.
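For example, a hedged sketch of mapping a drive with write-through from PowerShell; the
server, share, and drive letter are placeholders:
PowerShell
New-SmbMapping -LocalPath 'Z:' -RemotePath '\\fs1.contoso.com\data' -UseWriteThrough $true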
Features added in Windows Server, version
1709, and Windows 10, version 1709
The following features and functionality were added or updated:

Guest access to file shares is disabled (New): The SMB client no longer allows the
following actions: Guest account access to a remote server; Fallback to the Guest account
after invalid credentials are provided. For details, see Guest access in SMB2 disabled by
default in Windows.

SMB global mapping (New): Maps a remote SMB share to a drive letter that is accessible
to all users on the local host, including containers. This is required to enable container
I/O on the data volume to traverse the remote mount point. Be aware that when using SMB
global mapping for containers, all users on the container host can access the remote
share. Any application running on the container host also has access to the mapped remote
share. For details, see Container Storage Support with Cluster Shared Volumes (CSV),
Storage Spaces Direct, SMB Global Mapping.

SMB dialect control (New): You can now set registry values to control the minimum SMB
version (dialect) and maximum SMB version used. For details, see Controlling SMB Dialects.

SMB Encryption Improvements (New): SMB 3.1.1 offers a mechanism to negotiate the crypto
algorithm per connection, with options for AES-128-CCM and AES-128-GCM. AES-128-GCM is the
default for new Windows versions, while older versions will continue to use AES-128-CCM.

Rolling cluster upgrade support (New): Enables rolling cluster upgrades by letting SMB
appear to support different max versions of SMB for clusters in the process of being
upgraded. For more details on letting SMB communicate using different versions (dialects)
of the protocol, see the blog post Controlling SMB Dialects.

Native support for FileNormalizedNameInformation API calls (New): Adds native support for
querying the normalized name of a file. For details, see FileNormalizedNameInformation.
For additional details, see the blog post What’s new in SMB 3.1.1 in the Windows Server
2016 Technical Preview 2.
Automatic rebalancing of Scale-Out File Server clients (New): Improves scalability and
manageability for Scale-Out File Servers. SMB client connections are tracked per file
share (instead of per server), and clients are then redirected to the cluster node with
the best access to the volume used by the file share. This improves efficiency by reducing
redirection traffic between file server nodes. Clients are redirected following an initial
connection and when cluster storage is reconfigured.

Performance over WAN (Updated): Windows 8.1 and Windows 10 provide improved CopyFile
SRV_COPYCHUNK over SMB support when you use File Explorer for remote copies from one
location on a remote machine to another copy on the same server. You will copy only a
small amount of metadata over the network (1/2 KiB per 16 MiB of file data is
transmitted). This results in a significant performance improvement. This is an OS-level
and File Explorer-level distinction for SMB.

SMB Direct (Updated): Improves performance for small I/O workloads by increasing
efficiency when hosting workloads with small I/Os (such as an online transaction
processing (OLTP) database in a virtual machine). These improvements are evident when
using higher speed network interfaces, such as 40 Gbps Ethernet and 56 Gbps InfiniBand.

SMB bandwidth limits (New): You can now use Set-SmbBandwidthLimit to set bandwidth limits
in three categories: VirtualMachine (Hyper-V over SMB traffic), LiveMigration (Hyper-V
Live Migration traffic over SMB), or Default (all other types of SMB traffic).
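For example, a hedged sketch of capping Hyper-V Live Migration traffic over SMB; it
assumes the SMB Bandwidth Limit feature (FS-SMBBW) is installed, and the limit value is
arbitrary:
PowerShell
Set-SmbBandwidthLimit -Category LiveMigration -BytesPerSecond 2GB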
For more information on new and changed SMB functionality in Windows Server 2012
R2, see What's New in SMB in Windows Server.
SMB Scale Out (New): Support for multiple SMB instances on a Scale-Out File Server. Using
Cluster Shared Volumes (CSV) version 2, administrators can create file shares that provide
simultaneous access to data files, with direct I/O, through all nodes in a file server
cluster. This provides better utilization of network bandwidth and load balancing of the
file server clients, and optimizes performance for server applications.

SMB Multichannel (New): Enables aggregation of network bandwidth and network fault
tolerance if multiple paths are available between the SMB client and server. This enables
server applications to take full advantage of all available network bandwidth and be
resilient to a network failure.

SMB Direct (New): Supports the use of network adapters that have RDMA capability and can
function at full speed with very low latency, while using very little CPU. For workloads
such as Hyper-V or Microsoft SQL Server, this enables a remote file server to resemble
local storage.

Performance Counters for server applications (New): The new SMB performance counters
provide detailed, per-share information about throughput, latency, and I/O per second
(IOPS), allowing administrators to analyze the performance of SMB file shares where their
data is stored. These counters are specifically designed for server applications, such as
Hyper-V and SQL Server, which store files on remote file shares.
Performance optimizations (Updated): Both the SMB client and server have been optimized
for small random read/write I/O, which is common in server applications such as SQL Server
OLTP. In addition, large Maximum Transmission Unit (MTU) is turned on by default, which
significantly enhances performance in large sequential transfers, such as SQL Server data
warehouse, database backup or restore, deploying or copying virtual hard disks.

SMB-specific Windows PowerShell cmdlets (New): With Windows PowerShell cmdlets for SMB, an
administrator can manage file shares on the file server, end to end, from the command
line.

SMB Encryption (New): Provides end-to-end encryption of SMB data and protects data from
eavesdropping occurrences on untrusted networks. Requires no new deployment costs, and no
need for Internet Protocol security (IPsec), specialized hardware, or WAN accelerators. It
may be configured on a per share basis, or for the entire file server, and may be enabled
for a variety of scenarios where data traverses untrusted networks.

SMB Directory Leasing (New): Improves application response times in branch offices. With
the use of directory leases, roundtrips from client to server are reduced since metadata
is retrieved from a longer living directory cache. Cache coherency is maintained because
clients are notified when directory information on the server changes. Directory leases
work with scenarios for HomeFolder (read/write with no sharing) and Publication (read-only
with sharing).

Performance over WAN (New): Directory opportunistic locks (oplocks) and oplock leases were
introduced in SMB 3.0. For typical office/client workloads, oplocks/leases are shown to
reduce network round trips by approximately 15%.
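As a small illustration of the SMB-specific PowerShell cmdlets mentioned above, here's a
hedged sketch of creating and inspecting a share from the command line; the share name,
path, and group are placeholders:
PowerShell
# Create a share, review its settings, and list active sessions
New-SmbShare -Name 'Projects' -Path 'D:\Projects' -FullAccess 'CONTOSO\ProjectAdmins'
Get-SmbShare -Name 'Projects'
Get-SmbSession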
Hardware requirements
SMB Transparent Failover has the following requirements:
A failover cluster running Windows Server 2012 or Windows Server 2016 with at
least two nodes configured. The cluster must pass the cluster validation tests
included in the validation wizard.
File shares must be created with the Continuous Availability (CA) property, which is
the default.
File shares must be created on CSV volume paths to attain SMB Scale-Out.
Client computers must be running Windows® 8 or Windows Server 2012, both of
which include the updated SMB client that supports continuous availability.
Note
Down-level clients can connect to file shares that have the CA property, but
transparent failover will not be supported for these clients.
SMB Multichannel has the following requirements:
At least two computers running Windows Server 2012 are required. No extra
features need to be installed—the technology is on by default.
For information on recommended network configurations, see the See Also section
at the end of this overview topic.
SMB Direct has the following requirements:
At least two computers running Windows Server 2012 are required. No extra
features need to be installed—the technology is on by default.
Network adapters with RDMA capability are required. Currently, these adapters are
available in three different types: iWARP, Infiniband, or RoCE (RDMA over
Converged Ethernet).
More information
The following list provides additional resources on the web about SMB and related
technologies in Windows Server 2012 R2, Windows Server 2012, and Windows Server
2016.
SMB Direct
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Azure Stack HCI, version 21H2
Windows Server includes a feature called SMB Direct, which supports the use of network
adapters that have Remote Direct Memory Access (RDMA) capability. Network adapters
that have RDMA can function at full speed with very low latency, while using very little
CPU. For workloads such as Hyper-V or Microsoft SQL Server, this enables a remote file
server to resemble local storage. SMB Direct provides increased throughput, low latency,
and low CPU utilization.
With SMB Multichannel, SMB detects whether a network adapter has the RDMA
capability, and then creates multiple RDMA connections for that single session (two per
interface). This allows SMB to use the high throughput, low latency, and low CPU
utilization offered by RDMA-capable network adapters. It also offers fault tolerance if
you are using multiple RDMA interfaces.
You can team RDMA-capable network adapters using Switch Embedded Teaming (SET)
starting with Windows Server 2016. After at least one RDMA network connection is
created, the TCP/IP connection used for the original protocol negotiation is no longer
used. However, the TCP/IP connection is retained in case the RDMA network
connections fail.
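You can check from PowerShell whether your adapters report RDMA capability and whether SMB
is actually using it. A minimal sketch with the in-box cmdlets; run the last command while
SMB traffic is flowing to see active connections:
PowerShell
# Adapters with RDMA (Network Direct) enabled
Get-NetAdapterRdma | Where-Object Enabled

# What the SMB client sees for each interface, including RDMA capability
Get-SmbClientNetworkInterface

# Active multichannel connections; RDMA-capable interfaces appear here during transfers
Get-SmbMultichannelConnection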
SMB Encryption with SMB Direct
Beginning in Windows Server 2022 and Windows 11, SMB Direct now supports
encryption. Previously, enabling SMB encryption disabled direct data placement, making
RDMA performance as slow as TCP. Now data is encrypted before placement, leading to
relatively minor performance degradation while adding AES-128 and AES-256 protected
packet privacy. For more information on configuring SMB encryption, review SMB
security enhancements.
Requirements
SMB Direct requires network adapters with RDMA capability on both the SMB client and
the SMB server.
To disable RDMA for a specific network interface, run the following from an elevated
PowerShell prompt:
PowerShell
Disable-NetAdapterRdma <name>
When you disable RDMA on either the client or the server, the systems cannot use it.
Network Direct is the internal name for Windows Server basic networking support for
RDMA interfaces.
To re-enable RDMA for the interface, run:
PowerShell
Enable-NetAdapterRDMA <name>
You need to enable RDMA on both the client and the server to start using it again.
Note
To avoid failures of a workload that does not use SMB Direct, make sure there are
no other workloads using the disconnected network path.
More information
Server Message Block overview
Increasing Server, Storage, and Network Availability: Scenario Overview
Deploy Hyper-V over SMB
SMB over QUIC
Article • 05/18/2023
SMB over QUIC introduces an alternative to the TCP network transport, providing
secure, reliable connectivity to edge file servers over untrusted networks like the
Internet. QUIC is an IETF-standardized protocol with many benefits when compared with
TCP:
All packets are always encrypted and the handshake is authenticated with TLS 1.3
Parallel streams of reliable and unreliable application data
Exchanges application data in the first round trip (0-RTT)
Improved congestion control and loss recovery
Survives a change in the client's IP address or port
SMB over QUIC offers an "SMB VPN" for telecommuters, mobile device users, and high
security organizations. The server certificate creates a TLS 1.3-encrypted tunnel over the
internet-friendly UDP port 443 instead of the legacy TCP port 445. All SMB traffic,
including authentication and authorization within the tunnel, is never exposed to the
underlying network. SMB behaves normally within the QUIC tunnel, meaning the user
experience doesn't change. SMB features like multichannel, signing, compression,
continuous availability, directory leasing, and so on, work normally.
A file server administrator must opt in to enabling SMB over QUIC. It isn't on by default
and a client can't force a file server to enable SMB over QUIC. Windows SMB clients still
use TCP by default and will only attempt SMB over QUIC if the TCP attempt first fails or
if intentionally requiring QUIC using NET USE /TRANSPORT:QUIC or New-SmbMapping -
TransportType QUIC .
Prerequisites
To use SMB over QUIC, you need the following things:
A file server running Windows Server 2022 Datacenter: Azure Edition (Microsoft
Server Operating Systems )
A Windows 11 computer (Windows for business )
Windows Admin Center (Homepage )
A Public Key Infrastructure to issue certificates like Active Directory Certificate
Server or access to a trusted third party certificate issuer like Verisign, Digicert,
Let's Encrypt, and so on.
If you're using a certificate file issued by a third party certificate authority, you can
use the Certificates snap-in or Windows Admin Center to import it.
2. Install the latest version of Windows Admin Center on a management PC or the file
server. You need the latest version of the Files & File Sharing extension. It's
installed automatically by Windows Admin Center if Automatically update
extensions is enabled in Settings > Extensions.
3. Join your Windows Server 2022 Datacenter: Azure Edition file server to your Active
Directory domain and make it accessible to Windows Insider clients on the Azure
public interface by adding a firewall allow rule for UDP/443 inbound. Do not allow
TCP/445 inbound to the file server. The file server must have access to at least one
domain controller for authentication, but no domain controller requires any
internet access.
4. Connect to the server with Windows Admin Center and click the Settings icon in
the lower left. In the File shares (SMB server) section, under File sharing across
the internet with SMB over QUIC, click Configure.
5. Click a certificate under Select a computer certificate for this file server, click the
server addresses clients can connect to or click Select all, and click Enable.
6. Ensure that the certificate and SMB over QUIC report are healthy.
7. Click on the Files and File Sharing menu option. Note your existing SMB shares or
create a new one.
For a demonstration of configuring and using SMB over QUIC, watch this video:
https://www.youtube-nocookie.com/embed/OslBSB8IkUw
2. Move your Windows 11 device to an external network where it no longer has any
network access to domain controllers or the file server's internal IP addresses.
3. In Windows File Explorer, in the Address Bar, type the UNC path to a share on the
file server and confirm you can access data in the share. Alternatively, you can use
NET USE /TRANSPORT:QUIC or New-SmbMapping -TransportType QUIC with a UNC
path. Examples:
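PowerShell
# Server and share names are placeholders for your own edge file server
NET USE * \\fs1.contoso.com\sales /TRANSPORT:QUIC
New-SmbMapping -LocalPath 'Z:' -RemotePath '\\fs1.contoso.com\sales' -TransportType QUIC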
Note
You cannot configure the Windows Admin Center (WAC) in gateway mode using
TCP port 443 on a file server where you are configuring KDC Proxy. When
configuring WAC on the file server, change the port to one that is not in use and is
not 443. If you have already configured WAC on port 443, re-run the WAC setup
MSI and choose a different port when prompted.
2. Configure SMB over QUIC normally. Starting in Windows Admin Center 2110, the
option to configure KDC proxy in SMB over QUIC is automatically enabled and you
don't need to perform extra steps on the file servers. The default KDC proxy port is
443 and assigned automatically by Windows Admin Center.
Note
You cannot configure an SMB over QUIC server joined to a Workgroup using
Windows Admin Center. You must join the server to an Active Directory
domain or use the step in Manual Method section.
3. Configure the following group policy setting to apply to the Windows 11 device:
Computers > Administrative templates > System > Kerberos > Specify KDC
proxy servers for Kerberos clients
The format of this group policy setting is a value name of your fully qualified
Active Directory domain name and the value will be the external name you
specified for the QUIC server. For example, where the Active Directory domain is
named corp.contoso.com and the external DNS domain is named contoso.com:
4. Ensure that edge firewalls allow HTTPS on port 443 inbound to the file server.
Get-SmbServerCertificateMapping
2. Copy the thumbprint value from the certificate associated with SMB over QUIC
(there may be multiple lines, but they will all have the same thumbprint) and paste it
as the Certhash value for the following command:
$guid = [Guid]::NewGuid()
3. Add the file server's SMB over QUIC names as SPNs in Active Directory for
Kerberos. For example:
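PowerShell
# Hypothetical names: fs1.contoso.com is the SMB over QUIC name, fs1 is the file server's computer account
SETSPN -S cifs/fs1.contoso.com fs1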
Computers > Administrative templates > System > Kerberos > Specify KDC
proxy servers for Kerberos clients
The format of this group policy setting is a value name of your fully qualified
Active Directory domain name and the value will be the external name you
specified for the QUIC server. For example, where the Active Directory domain is
named "corp.contoso.com" and the external DNS domain is named "contoso.com":
The KDC proxy receives the client's Kerberos authentication traffic over HTTPS
on port 443, and user credentials aren't directly exposed on the client-file server
network.
6. Create a Windows Defender Firewall rule that inbound-enables TCP port 443 for
the KDC Proxy service to receive authentication requests.
7. Ensure that edge firewalls allow HTTPS on port 443 inbound to the file server.
Note
Automatic configuration of the KDC Proxy will come to SMB over QUIC at a later date,
and these server steps will then no longer be necessary.
Notes
Windows Server 2022 Datacenter: Azure Edition will also eventually be available on
Azure Stack HCI 21H2, for customers not using Azure public cloud.
We recommend making read-only domain controllers available to the file server,
configured to cache only the passwords of mobile users.
Users should have strong passwords or, ideally, be configured using a
passwordless strategy with Windows Hello for Business MFA or smart cards.
Configure an account lockout policy for mobile users through fine-grained
password policy, and deploy intrusion protection software to detect brute force or
password spray attacks.
You can't configure SMB over QUIC using WAC when the SMB server is in a
workgroup (that is, not AD domain joined). In that scenario you must use the New-
SMBServerCertificateMapping cmdlet and the Manual Method steps for KDC proxy
configuration.
More references
Storage at Microsoft blog
QUIC Wikipedia
Taking Transport Layer Security (TLS) to the next level with TLS 1.3
SMB compression
Article • 05/18/2023
Requirements
To use SMB compression in a traditional client-file server workload, you need the
following:
Using PowerShell
1. Open an elevated PowerShell command prompt as an administrator.
2. Create a new share with compression using New-SMBShare with the -CompressData
$true parameter and argument. For example:
PowerShell
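# Share name and path are placeholders; -CompressData $true requests compression for the share
New-SmbShare -Name 'Sales' -Path 'C:\Sales' -CompressData $true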
Note
If you want File Explorer, third party copy tools, or applications to use compression,
map drives with compression, enable compression on shares, or set SMB clients to
always compress.
Robocopy
1. Open a CMD prompt or PowerShell command prompt.
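2. Run robocopy with the /COMPRESS flag to request SMB compression for the copy. For
example (the source and destination paths are placeholders):
PowerShell
robocopy C:\Data \\fs1.contoso.com\backup /COMPRESS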
Group Policy
1. Run the Group Policy Management Console for your Active Directory domain
and create or navigate to a group policy.
2. Expand policy Computer Configuration\Policies\Administrative
Templates\Network\Lanman Workstation.
3. Enable policy Use SMB Compression by Default.
4. Close the policy editor.
Group Policy
1. Run the Group Policy Management Console for your Active Directory domain
and create or navigate to a group policy.
2. Expand policy Computer Configuration\Policies\Administrative
Templates\Network\Lanman Workstation.
3. Enable policy Disable SMB Compression.
4. Close the policy editor.
Group Policy
1. Run the Group Policy Management Console for your Active Directory domain
and create or navigate to a group policy.
2. Expand policy Computer Configuration\Policies\Administrative
Templates\Network\Lanman Server.
3. Enable policy Request traffic compression for all shares.
4. Close the policy editor.
Group Policy
1. Run the Group Policy Management Console for your Active Directory domain
and create or navigate to a group policy.
2. Expand policy Computer Configuration\Policies\Administrative
Templates\Network\Lanman Server.
3. Enable policy Disable SMB Compression.
4. Close the policy editor.
In the original release of Windows Server 2022 and Windows 11, SMB compression
defaulted to use of an algorithm where it attempted to compress the first
524,288,000 bytes (500 MiB) of a file during transfer and track that at least
104,857,600 bytes (100 MiB) compressed within that 500 MiB range. If fewer than
100 MiB was compressible, SMB compression stopped trying to compress the rest
of the file. If at least 100 MiB compressed, SMB compression attempted to
compress the rest of the file. With this behavior change, sampling is now disabled
by default and SMB always attempts to compress the entire file when a client or
server requests it.
1. Start Diskmgmt.msc.
4. In Diskmgmt, right-click your VHDX now shown as "Not initialized" and click
Initialize disk and click OK. Right-click on the disks Unallocated section and click
New Simple Volume, then Next for all menu prompts, then click Finish.
5. Specify a file path, set the size to "25 GB", select VHDX and Fixed size, then click
OK.
6. Right-click on the disk and click Detach VHD, then click OK.
7. In File Explorer, double-click that VHDX file to mount it. Copy a few MB of
uncompressible files, such as JPG format, then right-click the mounted disk and
click Eject.
Testing SMB compression between a pair of VMs running on the same Hyper-V host
may not show time savings because the virtual switch is 10 Gbps and has no congestion,
plus modern hypervisors often use flash storage. Test your compression over the real
networks you plan to use. You can also reduce the network bandwidth on Hyper-V VMs
for testing purposes using Set-VMNetworkAdapter with -MaximumBandwidth set to 1Gb ,
for example.
To see how well compression is working, you can robocopy the same file to a server
twice, once with the /compress flag and again without compression, deleting the server
file between each test. If the file is compressing, you should see less network utilization
in Task Manager and a lower copy time. You can also observe the SMB server's
Performance Monitor object "SMB Server Shares" for its "Compressed Requests/sec" and
"Compressed Responses/sec" counters.
RDMA and SMB Direct
SMB compression doesn't support SMB Direct and RDMA. This means that even if the
client requests compression and the server supports it, compression will not be
attempted with SMB Direct and RDMA. Support for SMB compression with SMB Direct
and RDMA will come after the Windows Server 2022 and Windows 11 public previews.
SMB security enhancements
Article • 05/18/2023
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Azure Stack HCI version 21H2,
Windows 11, Windows 10
This article explains the SMB security enhancements in Windows Server and Windows.
SMB Encryption
SMB Encryption provides SMB data end-to-end encryption and protects data from
eavesdropping occurrences on untrusted networks. You can deploy SMB Encryption with
minimal effort, but it might require other costs for specialized hardware or software. It
has no requirements for Internet Protocol security (IPsec) or WAN accelerators. SMB
Encryption can be configured on a per share basis, for the entire file server, or when
mapping drives.
Note
SMB Encryption does not cover security at rest, which is typically handled by
BitLocker Drive Encryption.
You can consider SMB Encryption for any scenario in which sensitive data needs to be
protected from interception attacks. Possible scenarios include:
You move an information worker’s sensitive data by using the SMB protocol. SMB
Encryption offers an end-to-end privacy and integrity assurance between the file
server and the client. It provides this security regardless of the networks traversed,
such as wide area network (WAN) connections maintained by non-Microsoft
providers.
SMB 3.0 enables file servers to provide continuously available storage for server
applications, such as SQL Server or Hyper-V. Enabling SMB Encryption provides an
opportunity to protect that information from snooping attacks. SMB Encryption is
simpler to use than the dedicated hardware solutions that are required for most
storage area networks (SANs).
Windows Server 2022 and Windows 11 SMB Direct now support encryption. Previously,
enabling SMB encryption disabled direct data placement, making RDMA performance as
slow as TCP. Now data is encrypted before placement, leading to relatively minor
performance degradation while adding AES-128 and AES-256 protected packet privacy.
You can enable encryption using Windows Admin Center, Set-SmbServerConfiguration,
or UNC Hardening group policy .
Important
2. To enable SMB Encryption for an individual file share, run the following command.
PowerShell
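# The share name is a placeholder for an existing share on this server
Set-SmbShare -Name 'SalesShare' -EncryptData $true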
3. To enable SMB Encryption for the entire file server, run the following command.
PowerShell
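# Applies encryption to every share on this file server
Set-SmbServerConfiguration -EncryptData $true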
4. To create a new SMB file share with SMB Encryption enabled, run the following
command.
PowerShell
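# The share name and path are placeholders
New-SmbShare -Name 'SalesShare' -Path 'C:\SalesData' -EncryptData $true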
Note
To guarantee that SMB 3.1.1 clients always use SMB Encryption to access encrypted
shares, you must disable the SMB 1.0 server. For instructions, connect to the server with
Windows Admin Center and open the Files & File Sharing extension, and then select the
File shares tab to be prompted to uninstall. For more information, see How to detect,
enable and disable SMBv1, SMBv2, and SMBv3 in Windows.
SMB Encryption uses the Advanced Encryption Standard (AES)-GCM and CCM
algorithm to encrypt and decrypt the data. AES-CMAC and AES-GMAC also
provide data integrity validation (signing) for encrypted file shares, regardless of
the SMB signing settings. If you want to enable SMB signing without encryption,
you can continue to do so. For more information, see Configure SMB Signing with
Confidence .
You might encounter issues when you attempt to access the file share or server if
your organization uses wide area network (WAN) acceleration appliances.
With a default configuration (where there's no unencrypted access allowed to
encrypted file shares), if clients that don't support SMB 3.x attempt to access an
encrypted file share, Event ID 1003 is logged to the Microsoft-Windows-
SmbServer/Operational event log, and the client receives an Access denied error
message.
SMB Encryption and the Encrypting File System (EFS) in the NTFS file system are
unrelated, and SMB Encryption doesn't require or depend on using EFS.
SMB Encryption and the BitLocker Drive Encryption are unrelated, and SMB
Encryption doesn't require or depend on using BitLocker Drive Encryption.
Preauthentication integrity
SMB 3.1.1 is capable of detecting interception attacks that attempt to downgrade the
protocol or the capabilities that the client and server negotiate by use of
preauthentication integrity. Preauthentication integrity is a mandatory feature in SMB
3.1.1. It protects against any tampering with Negotiate and Session Setup messages by
using cryptographic hashing. The resulting hash is used as input to derive the session’s
cryptographic keys, including its signing key. This process enables the client and server
to mutually trust the connection and session properties. When the client or the server
detects such an attack, the connection is disconnected, and event ID 1005 is logged in
the Microsoft-Windows-SmbServer/Operational event log.
Because of this protection, and to take advantage of the full capabilities of SMB
Encryption, we strongly recommend that you disable the SMB 1.0 server. For
instructions, connect to the server with Windows Admin Center and open the Files &
File Sharing extension, and then select the File shares tab to be prompted to uninstall.
For more information, see How to detect, enable and disable SMBv1, SMBv2, and SMBv3
in Windows.
New signing algorithm
SMB 3.0 and 3.02 use a more recent signing algorithm: Advanced
Encryption Standard (AES) cipher-based message authentication code (CMAC). SMB 2.0
used the older HMAC-SHA256 algorithm. AES-CMAC and AES-CCM can
significantly accelerate data encryption on most modern CPUs that have AES instruction
support.
Windows Server 2022 and Windows 11 introduce AES-128-GMAC for SMB 3.1.1 signing.
Windows automatically negotiates this better-performing cipher method when
connecting to another computer that supports it. Windows still supports AES-128-
CMAC. For more information, see Configure SMB Signing with Confidence .
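To review how signing and SMB1 are currently configured on a server, and to require
signing, a minimal PowerShell sketch with the in-box SMB cmdlets:
PowerShell
# Review the relevant server-side settings
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol, RequireSecuritySignature, EncryptData

# Require signing for all sessions on this server
Set-SmbServerConfiguration -RequireSecuritySignature $true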
If it's still installed, you should disable SMB1 immediately. For more information on
detecting and disabling SMB 1.0 usage, see Stop using SMB1 . For a clearinghouse of
software that previously or currently requires SMB 1.0, see SMB1 Product
Clearinghouse .
Related links
Overview of file sharing using the SMB 3 protocol in Windows Server
Windows Server Storage documentation
Scale-Out File Server for application data overview
SMB: File and printer sharing ports
should be open
Article • 03/22/2023
Applies To: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2
When a Best Practices Analyzer scan for Server Message Block (SMB)-based network
services identifies that firewall ports for file and printer sharing aren't open, follow the
steps in this article to resolve the issue.
Note
This article addresses a specific issue identified by a Best Practices Analyzer scan.
Apply the information in this article only to computers that have a File Services Best
Practices Analyzer scan that reports the specific port issue. For more information
about best practices and scans, see Best Practices Analyzer.
The issue prevents computer access to shared folders and other SMB-based network
services on the server.
To open the firewall ports and enable file and printer sharing, complete the following
steps:
1. Open Control Panel, select System and Security, and then select Windows
Defender Firewall.
2. On the left, select Advanced settings. The Windows Defender Firewall console
opens and shows the advanced settings.
3. In the Windows Defender Firewall console on the left, select Inbound Rules.
5. For each rule, select and hold (or right-click) the rule, and then select Enable Rule.
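If you prefer PowerShell, the same built-in rules can be enabled by display group. A
minimal sketch, assuming the default English-language rule group name:
PowerShell
Enable-NetFirewallRule -DisplayGroup 'File and Printer Sharing'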
Related links
Understanding shared folders and the Windows Firewall
Secure SMB Traffic in Windows Server
Article • 04/01/2022
As a defense in depth measure, you can use segmentation and isolation techniques to
secure SMB traffic and reduce threats between devices on your network.
SMB is used for file sharing, printing, and inter-process communication such as named
pipes and RPC. It's also used as a network data fabric for technologies such as Storage
Spaces Direct, Storage Replica, Hyper-V Live Migration, and Cluster Shared Volumes. Use
the following sections to configure SMB traffic segmentation and endpoint isolation to
help prevent outbound and lateral network communications.
If you want users to access their files inbound at the edge of your network, you can use
SMB over QUIC. This uses UDP port 443 by default and provides a TLS 1.3-encrypted
security tunnel like a VPN for SMB traffic. The solution requires Windows 11 and
Windows Server 2022 Datacenter: Azure Edition file servers running on Azure Stack HCI.
For more information, see SMB over QUIC .
It is unlikely you need to allow any outbound SMB using TCP port 445 to the internet
unless you require it as part of a public cloud offering. The primary scenarios include
Azure Files and Office 365.
If you are using Azure Files SMB, use a VPN for the outbound SMB traffic. By using a VPN,
you restrict the outbound traffic to the required service IP ranges. For more information
about Azure Cloud and Office 365 IP address ranges, see:
Azure IP ranges and service tags: public cloud ,US government cloud , Germany
cloud , or China cloud . The JSON files are updated weekly and include
versioning both for the full file and each individual service tag. The AzureCloud tag
provides the IP ranges for the cloud (Public, US government, Germany, or China)
and is grouped by region within that cloud. Service tags in the file will increase as
Azure services are added.
Office 365 URLs and IP address ranges.
With Windows 11 and Windows Server 2022 Datacenter: Azure Edition, you can use SMB
over QUIC to connect to file servers in Azure. This uses UDP port 443 by default and
provides a TLS 1.3-encrypted security tunnel like a VPN for the SMB traffic. For more
information, see SMB over QUIC .
1. Which server endpoints require inbound SMB access to do their role? Do they
need inbound access from all clients, certain networks, or certain nodes?
2. Of the remaining server endpoints, is inbound SMB access necessary?
1. Which client endpoints (for example, Windows 10) require inbound SMB access?
Do they need inbound access from all clients, certain networks, or certain nodes?
2. Of the remaining client endpoints, is inbound SMB access necessary?
3. Of the remaining client endpoints, do they need to run the SMB server service?
For all endpoints, determine if you allow outbound SMB in the safest and most minimal
fashion.
Review server built-in roles and features that require SMB inbound. For example, file
servers and domain controllers require SMB inbound to do their role. For more
information on built-in roles and feature network port requirements, see Service
overview and network port requirements for Windows.
Review servers that need to be accessed from inside the network. For example, domain
controllers and file servers likely need to be accessed anywhere in the network.
However, application server access may be limited to a set of other application servers
on the same subnet. You can use the following tools and features to help you inventory
SMB access:
Examining SMB logs lets you know which nodes are communicating with endpoints over
SMB. You can decide if an endpoint's shares are in use and understand which need to exist.
For information on the SMB firewall rules you need to set for inbound and outbound
connections, see the support article Preventing SMB traffic from lateral connections and
entering or leaving the network .
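As a simple illustration of the kind of rule that article describes, the sketch below
blocks outbound SMB over TCP port 445 on the Public firewall profile only; treat it as a
starting point rather than the full template set, and the rule name is arbitrary:
PowerShell
New-NetFirewallRule -DisplayName 'Block outbound SMB (Public profile)' -Direction Outbound -Protocol TCP -RemotePort 445 -Profile Public -Action Block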
To use the null encapsulation IPSEC authentication, you must create a Security
Connection rule on all computers in your network that are participating in the rules.
Otherwise, the firewall exceptions won't work and you'll only be arbitrarily blocking.
Caution
You should test the Security Connection rule before broad deployment. An
incorrect rule could prevent users from accessing their data.
To create a Connection Security rule, use Windows Defender Firewall with Advanced
Security control panel or snap-in:
1. In Windows Defender Firewall, select Connection Security Rules and choose a New
rule.
2. In Rule Type, select Isolation then select Next.
3. In Requirements, select Request authentication for inbound and outbound
connections then select Next.
4. In Authentication Method, select Computer and User (Kerberos V5) then select
Next.
5. In Profile, check all profiles (Domain, Private, Public) then select Next.
6. Enter a name for your rule, then select Finish.
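A rough PowerShell equivalent of the steps above is sketched below; the rule name is
arbitrary, and the rule uses the default authentication sets, so verify they match the
Computer and User (Kerberos V5) requirement before broad deployment:
PowerShell
# Isolation-style rule that requests (not requires) authentication inbound and outbound
New-NetIPsecRule -DisplayName 'Request Auth - SMB isolation' -InboundSecurity Request -OutboundSecurity Request -Profile Any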
Remember, the Connection Security rule must be created on all clients and servers
participating in your inbound and outbound rules or they will be blocked from
connecting SMB outbound. These rules may already be in place from other security
efforts in your environment and like the firewall inbound/outbound rules, can be
deployed via group policy.
When configuring rules based on the templates in the Preventing SMB traffic from
lateral connections and entering or leaving the network support article, set the
following to customize the Allow the connection if secure action:
1. In the Action step, select Allow the connection if it is secure then select
Customize.
2. In Customize Allow if Secure Settings, select Allow the connection to use null
encapsulation.
The Allow the connection if it is secure option allows override of a global block rule. You
can use the easy but least secure Allow the connection to use null encapsulation option
together with Override block rules, which relies on Kerberos and domain membership for
authentication. Windows Defender Firewall also allows for more secure options like IPsec.
For more information about configuring the firewall, see Windows Defender Firewall
with Advanced Security deployment overview.
Next steps
Watch Jessica Payne's Ignite conference session Demystifying the Windows Firewall
Protect SMB traffic from interception
Article • 12/13/2022
In this article, you'll learn about some of the ways an attacker might use interception
techniques against the SMB protocol and how you might mitigate an attack. The
concepts will support you with developing your own defense-in-depth strategy for the
SMB protocol.
Many organizations rely on SMB to share files between users and to support other
applications or technologies like Active Directory Domain Services. With such broad
adoption, SMB is both a popular target for attackers and has the potential for business-
wide impact.
For example, an adversary-in-the-middle (AiTM) attack might be used for industrial or state-level espionage,
extortion, or finding sensitive security data stored in files. It could also be used as part of
a wider attack to enable the attacker to move laterally within your network or to target
multiple endpoints.
Attacks are constantly evolving, with attackers often using a combination of established
and new techniques. When protecting your system against SMB interception, there are
two main goals:
Due to the diversity of technology and clients within many organizations, a well-
rounded defense will combine multiple methods and will follow the Zero Trust
principles. Learn more about Zero Trust in the What is Zero Trust? article.
Now you'll learn about some of the typical good practice configurations to reduce the
risk of SMB interception.
In the following sections, we'll discuss some of the basic steps you should take to reduce
the attack surface.
Install updates
Regularly install all available security updates on both your Windows Server and client
systems as close to their release as your organization allows. Installing the latest security
updates is the quickest and easiest way to protect your systems from the current known
security vulnerabilities affecting not just SMB, but all Microsoft products and services.
You can install security updates using a few different methods depending on your
organization's requirements. Typical methods are:
Tip
Windows 10 Home and Windows 10 Pro still contain the SMB 1.0 client by default
after a clean installation or in-place upgrade. This behavior is changing in Windows
11; you can read more in the article SMB1 now disabled by default for Windows 11
Home Insiders builds .
Removing SMB 1.0 protects your systems by eliminating several well known security
vulnerabilities. SMB 1.0 lacks the security features of SMB 2.0 and later that help protect
against interception. For example, to prevent a compromised connection SMB 3.0 or
later uses pre-authentication integrity, encryption, and signing. Learn more in the SMB
security enhancements article.
Before removing the SMB 1.0 feature, be sure no applications and processes on the
computer require it. For more information on how to detect and disable SMB 1.0, see
the article How to detect, enable and disable SMBv1, SMBv2, and SMBv3 in Windows.
You can also use the Windows Admin Center Files and file sharing tool to both quickly
enable the auditing of SMB1 client connections and to uninstall SMB 1.
Tip
Windows 11 Home and Pro editions are unchanged from their previous default
behavior; they allow guest authentication by default.
When guest access is disabled, it prevents a malicious actor from creating a server and
tricking users into accessing it using guest access. For example, when a user accesses
the spoofed share, their credentials would fail and SMB 1.0 would fall back to using
guest access. Disabling guest access stops the SMB session from connecting, preventing
the user from accessing the share and any malicious files.
To prevent the use of guest fallback on Windows SMB clients where guest access isn't
disabled by default (including Windows Server):
Group Policy
To learn more about guest access default behavior, read the article Guest access in
SMB2 and SMB3 disabled by default in Windows.
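If you'd rather script the client setting than use Group Policy, a minimal sketch with the
in-box SMB cmdlet (run on each client, including Windows Server acting as an SMB client):
PowerShell
# Disable insecure guest logons on the SMB client
Set-SmbClientConfiguration -EnableInsecureGuestLogons $false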
When your users are accessing files using WebDAV, there's no method to force a TLS-
based connection over HTTPS. For example, your server may be configured to require
SMB signing or encryption; however, the WebClient service could connect over HTTP/80 if
WebDAV has been enabled. Any resulting connection would be unencrypted, regardless
of your SMB configuration.
You can use Group Policy Preferences to disable the service on a large number of
machines when you're ready to implement. For more information about configuring
Group Policy Preferences, see Configure a Service Item.
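For a one-off machine, a minimal sketch of disabling the service directly; the service
name WebClient applies where the service is present, and fleet-wide changes should still
go through Group Policy Preferences as described above:
PowerShell
Stop-Service -Name WebClient -ErrorAction SilentlyContinue
Set-Service -Name WebClient -StartupType Disabled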
In the following sections, we'll discuss some of the basic steps you should take to secure
the SMB protocol.
SMB 3.1.1 is available beginning with Windows 10 and Windows Server 2016. SMB 3.1.1
includes a new mandatory security feature called pre-authentication integrity. Pre-
authentication integrity signs or encrypts the early phases of SMB connections to
prevent the tampering of Negotiate and Session Setup messages by using
cryptographic hashing.
Cryptographic hashing means the client and server can mutually trust the connection
and session properties. Pre-authentication integrity supersedes secure dialect
negotiation introduced in SMB 3.0. You can’t turn off pre-authentication integrity, but if
a client uses an older dialect, it won’t be used.
You can enhance your security posture further by forcing the use of SMB 3.1.1 as a
minimum. To set the minimum SMB dialect to 3.1.1, from an elevated PowerShell
prompt, run the following commands:
PowerShell
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" -Name "MinSMB2Dialect" -Value 0x000000311
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" -Name "MaxSMB2Dialect" -Value 0x000000311
To learn more about how to set the minimum SMB dialect used, see Controlling SMB
dialects .
Use UNC hardening to require signing, encryption, and
mutual authentication
Enable UNC hardening for all SMB shares by requiring at least mutual authentication
(Kerberos) and integrity (SMB signing). You should also consider evaluating privacy (SMB
encryption) instead of SMB signing. There's no need to configure both SMB signing and
encryption because encryption implicitly includes the signatures used by signing.
Caution
SMB encryption was introduced with SMB 3 in Windows 8 and Windows Server
2012. You shouldn't require encryption unless all your machines support SMB 3.0 or
later, or are third parties with SMB 3 and encryption support. If you configure SMB
encryption on clients or UNC paths hosted by servers that do not support SMB
Encryption, the SMB client will be unable to access the specified path. Also, if you
configure your server for SMB encryption and it is accessed by clients that don't
support it, those clients will again be unable to access the path.
UNC hardening gives the SMB client the ability to check UNC paths for mandated security
settings and to refuse to connect if a server can't meet them. Beginning with Windows Server
2016 and Windows 10, UNC Hardening is enabled by default for SYSVOL and
NETLOGON shares on domain controllers. It's a highly effective tool against spoofing
and tampering because the client can authenticate the identity of the server and
validate the integrity of the SMB payloads.
When configuring UNC hardening, you can specify various UNC path patterns. For
example:
\\<Server>\<Share> - The configuration entry applies to the share that has the
specified name on the specified server.
\\*\<Share> - The configuration entry applies to the share that has the specified
name on any server.
\\<Server>\* - The configuration entry applies to any share on the specified
server.
You can use Group Policy to apply the UNC hardening feature to a large number of
machines when you're ready to implement it. For more information about configuring
UNC hardening through Group Policy, see the security bulletin MS15-011.
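As an illustration of what the policy writes, the following sketch sets UNC hardening for the SYSVOL pattern directly in the policy registry location. The path pattern and flags follow the format described above; in production you would normally deploy this through Group Policy rather than per machine.
PowerShell
# Require Kerberos mutual authentication and SMB signing for \\*\SYSVOL.
$key = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths"
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name "\\*\SYSVOL" -Value "RequireMutualAuthentication=1, RequireIntegrity=1"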
Map drives on demand with mandated signing or
encryption
In addition to UNC hardening, you can use signing or encryption when mapping
network drives. Beginning in Windows version 1709 and later, you can create encrypted
or signed mapped drives on demand using Windows PowerShell or Command Prompt.
You can use the NET USE command or the PowerShell New-SmbMapping command to map
drives by specifying RequireIntegrity (signing) or RequirePrivacy (encryption)
parameters.
The parameters don't change how signing or encryption work, or the dialect
requirements. If you try to map a drive and the server refuses to honor your requirement
for signing or encryption, the drive mapping will fail rather than connect unsafely.
Learn about the syntax and parameters for the New-SmbMapping command in New-
SmbMapping reference article.
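For example, a minimal sketch; the drive letter and share path are illustrative, and -RequirePrivacy mandates SMB encryption for this mapping (use -RequireIntegrity instead if you only need signing).
PowerShell
# Map Z: to a share and refuse the connection unless the server agrees to encrypt it.
New-SmbMapping -LocalPath 'Z:' -RemotePath '\\fileserver01\finance' -RequirePrivacy $true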
Beyond SMB
Stop using NTLM and increase your Kerberos security. You can start by enabling
auditing for NTLM usage, then reviewing the logs to find where NTLM is used.
Removing NTLM helps to protect you against common attacks such as pass-the-hash, brute
force, and rainbow table attacks, which exploit NTLM's use of the older MD4/MD5
cryptographic hash functions. NTLM also isn't able to verify the server identity, unlike more recent protocols
like Kerberos, making it vulnerable to NTLM relay attacks as well. Many of these
common attacks are easily mitigated with Kerberos.
To learn how to audit NTLM as part of your effort to begin the transition to Kerberos,
see the Assessing NTLM usage article. You can also learn about detecting insecure
protocols using Azure Sentinel in the Azure Sentinel Insecure Protocols Workbook
Implementation Guide blog article.
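Once NTLM auditing is enabled, a quick way to review where NTLM is still used is to read the NTLM operational event log; the sketch below simply lists recent entries and is only a starting point for the assessment described above.
PowerShell
# List recent NTLM audit events recorded on this computer.
Get-WinEvent -LogName 'Microsoft-Windows-NTLM/Operational' -MaxEvents 50 |
    Select-Object TimeCreated, Id, Message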
In parallel to removing NTLM, you should consider adding more layers of protection for
offline and ticket passing attacks. Use the following items as a guide when enhancing
Kerberos security.
Next steps
Now that you've learned about some of the security controls and mitigations to prevent SMB
interception, you'll understand that there's no single step to prevent all interception attacks.
The goal is to create a thoughtful, holistic, and prioritized combination of risk
mitigations spanning multiple technologies through layered defenses.
You can continue to learn more about these concepts in the articles below.
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012
This article describes the Network File System role service and features included with the
File and Storage Services server role in Windows Server. Network File System (NFS)
provides a file sharing solution for enterprises that have heterogeneous environments
that include both Windows and non-Windows computers.
Feature description
Using the NFS protocol, you can transfer files between computers running Windows and
other non-Windows operating systems, such as Linux or UNIX.
NFS in Windows Server includes Server for NFS and Client for NFS. A computer running
Windows Server can use Server for NFS to act as a NFS file server for other non-
Windows client computers. Client for NFS allows a Windows-based computer running
Windows Server to access files stored on a non-Windows NFS server.
On Windows Server 2012, Windows Server 2012 R2, Windows Server 2016, Windows Server 2019,
and Windows Server 2022, Server for NFS supports NFSv2, NFSv3, and NFSv4.1, while Client for
NFS supports NFSv2 and NFSv3.
Practical applications
Here are some ways you can use NFS:
Use a Windows NFS file server to provide multi-protocol access to the same file
share over both SMB and NFS protocols from multi-platform clients.
Deploy a Windows NFS file server in a predominantly non-Windows operating
system environment to provide non-Windows client computers access to NFS file
shares.
Migrate applications from one operating system to another by storing the data on
file shares accessible through both SMB and NFS protocols.
Deployment and manageability improvements (updated): Enables you to easily deploy and
manage NFS with new Windows PowerShell cmdlets and a new WMI provider.
Pseudo file system, a file system that separates physical and logical namespace
and is compatible with NFS version 3 and NFS version 2. An alias is provided for
the exported file system, which is part of the pseudo file system.
Compound RPCs combine relevant operations and reduce chattiness.
Sessions and session trunking enables just one semantic and allows continuous
availability and better performance while utilizing multiple networks between NFS
4.1 clients and the NFS Server.
NFS infrastructure
Improvements to the overall NFS infrastructure in Windows Server 2012 are detailed
below:
The clustering infrastructure now allows one resource per network name instead of
one resource per share, which significantly improves resources' failover time.
Failover paths within an NFS server are tuned for better performance.
Wildcard registration in an NFS server is no longer required, and the failovers are
more fine-tuned.
Network Status Monitor (NSM) notifications are sent out after a failover process,
and clients no longer need to wait for TCP timeouts to reconnect to the failed over
server.
Note that Server for NFS supports transparent failover only when manually initiated,
typically during planned maintenance. If an unplanned failover occurs, NFS clients lose
their connections. Server for NFS also doesn't have any integration with the Resume Key
filter. This means that if a local app or SMB session attempts to access the same file that
an NFS client is accessing immediately after a planned failover, the NFS client might lose
its connections (transparent failover wouldn't succeed).
Over forty new Windows PowerShell cmdlets make it easier to configure and
manage NFS file shares. For more information, see NFS Cmdlets in Windows
PowerShell.
Identity mapping is improved with a local flat file mapping store and new Windows
PowerShell cmdlets for configuring identity mapping.
The Server Manager graphical user interface is easier to use.
The new WMI version 2 provider is available for easier management.
The RPC port multiplexer (port 2049) is firewall-friendly and simplifies deployment
of NFS.
Mount mounts a remote NFS share (also known as an export) locally and maps it
to a local drive letter on the Windows client computer.
Nfsadmin manages configuration settings of the Server for NFS and Client for NFS
components.
Nfsshare configures NFS share settings for folders that are shared using Server for
NFS.
Nfsstat displays or resets statistics of calls received by Server for NFS.
Showmount displays mounted file systems exported by Server for NFS.
Umount removes NFS-mounted drives.
NFS in Windows Server 2012 introduces the NFS module for Windows PowerShell with
several new cmdlets specifically for NFS. These cmdlets provide an easy way to
automate NFS management tasks. For more information, see NFS cmdlets in Windows
PowerShell.
Additional information
The following table provides additional resources for evaluating NFS.
Applies to: Windows Server 2022, Windows Server 2019, and Windows Server 2016.
Network File System (NFS) provides a file sharing solution that lets you transfer files
between computers running Windows Server and UNIX operating systems by using the
NFS protocol. This article describes the steps you should follow to deploy NFS.
Support for NFS version 4.1: This protocol version includes the following
enhancements.
Makes navigating firewalls easier, which improves accessibility.
Supports the RPCSEC_GSS protocol, providing stronger security and allowing
clients and servers to negotiate security.
Supports UNIX and Windows file semantics.
Takes advantage of clustered file server deployments.
Supports WAN-friendly compound procedures.
NFS module for Windows PowerShell: The availability of built-in NFS cmdlets
makes it easier to automate various operations. The cmdlet names are consistent
with other Windows PowerShell cmdlets (with verbs such as "Get" and "Set"),
making it easier for users familiar with Windows PowerShell to learn to use new
cmdlets.
Integration with Resume Key Manager: The Resume Key Manager tracks file
server and file system state. The tool enables the Windows SMB and NFS protocol
servers to fail over without disrupting clients or server applications that store their
data on the file server. This improvement is a key component of the continuous
availability capability of the file server running Windows Server.
For this scenario, you must have a valid identity mapping source configuration (a brief
PowerShell sketch follows this list). Windows Server supports the following identity mapping stores:
Mapping File
Active Directory Domain Services (AD DS)
RFC 2307-compliant LDAP stores such as Active Directory Lightweight Directory
Services (AD LDS)
User Name Mapping (UNM) server
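As a minimal sketch (assuming AD DS is the mapping store and using an illustrative domain name), the NFS PowerShell module can point Server for NFS at the store.
PowerShell
# Use Active Directory Domain Services for UNIX-to-Windows identity mapping.
Set-NfsMappingStore -EnableADLookup $true -ADDomainName "contoso.com"
# Confirm the current mapping store configuration.
Get-NfsMappingStore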
System requirements
Server for NFS can be installed on any version of Windows Server. You can use NFS with
UNIX-based computers that are running an NFS server or NFS client, if these NFS server
and client implementations comply with one of the following protocol specifications:
One or more computers running Windows Server on which you'll install the two
main Services for NFS components: Server for NFS and Client for NFS. You can
install these components on the same computer or on different computers.
One or more UNIX-based computers that are running NFS server and NFS client
software. The UNIX-based computer that is running NFS server hosts an NFS file
share or export, which is accessed by a computer that is running Windows Server
as a client by using Client for NFS. You can install NFS server and client software
either in the same UNIX-based computer or on different UNIX-based computers,
as desired.
A domain controller running at the Windows Server 2008 R2 functional level. The
domain controller provides user authentication information and mapping for the
Windows environment.
When a domain controller isn't deployed, you can use a Network Information
Service (NIS) server to provide user authentication information for the UNIX
environment. Or, if you prefer, you can use password and group files that are
stored on the computer that's running the User Name Mapping service.
3. A dialog box lets you know which other tools are required for the selected feature.
Check the box for the required features, and then select Add Features.
4. Select Next, and then choose any other preferred features. When you're ready,
select Next.
PowerShell
Import-Module ServerManager
Add-WindowsFeature FS-NFS-Service
Import-Module NFS
Krb5: Uses the Kerberos version 5 protocol to authenticate users before granting
them access to the file share.
Krb5i: Uses the Kerberos version 5 protocol to authenticate with integrity checking
(checksums), which verifies that the data hasn't been altered.
Krb5p: Uses the Kerberos version 5 protocol, which authenticates NFS traffic with
encryption for privacy. This option is the most secure Kerberos option.
Note
You can also choose not to use the preceding Kerberos authentication
methods and instead enable unmapped user access through AUTH_SYS. We
strongly discourage using this option as it removes all authentication
protections and allows any user with access to the NFS server to access data.
When you use unmapped user access, you can specify to allow unmapped
user access by UID or GID, which is the default. You can also allow anonymous
access.
Instructions for configuring NFS authentication are discussed in the following section.
3. On the left, select File and Storage Services, then select Shares.
4. Under the Shares column, select the link To create a file share, start the New Share Wizard.
5. On the Select Profile page, select either NFS Share - Quick or NFS Share -
Advanced, then select Next.
6. On the Share Location page, select a server and a volume, then select Next.
7. On the Share Name page, enter a name for the new share, then select Next.
8. On the Authentication page, specify the authentication method you want to use,
then select Next.
9. On the Share Permissions page, select Add. The Add Permissions dialog opens.
a. Choose the level of user permissions to grant: Host, Netgroup, Client group, or
All Machines.
b. For the selected user level, enter the name for the user(s) to grant permission to
the share.
e. (Optional) Select the Allow root access checkbox. This option isn't
recommended.
10. On the Permissions page, configure access control for your selected users. When
you're ready, select Next.
11. On the Confirmation page, review your configuration, and select Create to create
the NFS file share.
PowerShell
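# The cmdlet-based equivalent isn't shown in this excerpt; a representative sketch follows.
# The share name, path, client name, and Kerberos options are illustrative values.
New-NfsShare -Name "NfsExport" -Path "D:\Shares\NfsExport" -Authentication Krb5,Krb5i -EnableAnonymousAccess $false
Grant-NfsSharePermission -Name "NfsExport" -ClientName "nfsclient01" -ClientType Host -Permission ReadWrite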
Known issue
NFS version 4.1 allows file names to be created or copied with illegal characters. If you
attempt to open such files with the vi editor, it shows them as corrupt. You can't save the
file from vi, rename it, move it, or change its permissions. Avoid using illegal characters
in file names.
NTFS overview
Article • 03/24/2023
Applies to: Windows Server 2022, Windows 10, Windows Server 2019, Windows
Server 2016, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008
R2, Windows Server 2008
NTFS, the primary file system for recent versions of Windows and Windows Server,
provides a full set of features including security descriptors, encryption, disk quotas, and
rich metadata. It can be used with Cluster Shared Volumes (CSV) to provide continuously
available volumes that can be accessed simultaneously from multiple nodes of a failover
cluster.
For more feature information, see the Related links section of this article. To learn about
the newer Resilient File System (ReFS), see Resilient File System (ReFS) overview.
Increased reliability
NTFS uses its log file and checkpoint information to restore the consistency of the file
system when the computer is restarted after a system failure. After a bad-sector error,
NTFS dynamically remaps the cluster that contains the bad sector, and allocates a new
cluster for the data. It also marks the original cluster as bad, and no longer uses the old
cluster. For example, after a server crash, NTFS can recover data by replaying its log files.
NTFS continuously monitors and corrects transient corruption issues in the background
without taking the volume offline. This feature is known as self-healing NTFS, which was
introduced in Windows Server 2008.
For larger corruption issues, the Chkdsk utility, in Windows Server 2012 and later, scans
and analyzes the drive while the volume is online, limiting time offline to the time
required to restore data consistency on the volume. When you use NTFS with Cluster
Shared Volumes, no downtime is required. For more information, see NTFS Health and
Chkdsk.
Increased security
Access Control List (ACL)-based security for files and folders: NTFS lets you set
permissions on a file or folder, specify the groups and users whose access you
want to restrict or allow, and select access type.
Support for BitLocker Drive Encryption: BitLocker Drive Encryption provides more
security for critical system information and other data stored on NTFS volumes.
Beginning in Windows Server 2012 R2 and Windows 8.1, BitLocker provides
support for device encryption on x86 and x64-based computers with a Trusted
Platform Module (TPM) that supports connected stand-by (previously available
only on Windows RT devices). Device encryption helps protect data on Windows-
based computers, and it helps block malicious users from accessing the system
files they rely on to discover the user's password. It also prevents malicious users
from accessing a drive by physically removing it from the PC and installing it on a
different one. For more information, see What's new in BitLocker.
Cluster size           Largest volume
4 KB (default size)    16 TB
8 KB                   32 TB
16 KB                  64 TB
32 KB                  128 TB
128 KB                 512 TB
256 KB                 1 PB
512 KB                 2 PB
1024 KB                4 PB
If you try to mount a volume with a cluster size larger than the supported maximum of
the Windows version you're using, you get the error STATUS_UNRECOGNIZED_VOLUME.
Important
Services and apps might impose additional limits on file and volume sizes. For
example, the volume size limit is 64 TB if you're using the Previous Versions feature
or a backup app that makes use of Volume Shadow Copy Service (VSS) snapshots
(and you're not using a SAN or RAID enclosure). However, you might need to use
smaller volume sizes depending on your workload and the performance of your
storage.
Parameter       Description
-UseLargeFRS    Enables support for large file record segments (FRS). Using this parameter increases the number of extents allowed per file on the volume. For large FRS records, the limit increases from about 1.5 million extents to about 6 million extents.
For example, the following cmdlet formats drive D as an NTFS volume, with FRS enabled
and an allocation unit size of 64 KB.
PowerShell
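# The command itself isn't shown in this excerpt; a sketch matching the description above,
# using the built-in Storage module, would be (64 KB = 65536 bytes):
Format-Volume -DriveLetter D -FileSystem NTFS -AllocationUnitSize 65536 -UseLargeFRS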
You also can use the format command. At a system command prompt, enter the
following command, where /L formats a large FRS volume and /A:64k sets a 64-KB
allocation unit size:
PowerShell
format /L /A:64k
Maximum file name and path
NTFS supports long file names and extended-length paths, with the following maximum
values:
Support for long file names, with backward compatibility: NTFS supports long file
names, storing an 8.3 alias on disk (in Unicode) to provide compatibility with file
systems that impose an 8.3 limit on file names and extensions. If needed (for
performance reasons), you can selectively disable 8.3 aliasing on individual NTFS
volumes in Windows Server 2008 R2, Windows 8, and more recent versions of the
Windows operating system. In Windows Server 2008 R2 and later systems, short
names are disabled by default when a volume is formatted using the operating
system. For application compatibility, short names still are enabled on the system
volume.
Support for extended-length paths: Many Windows API functions have Unicode
versions that allow an extended-length path of approximately 32,767 characters.
That total is well beyond the 260-character path limit defined by the MAX_PATH
setting. For detailed file name and path format requirements, and guidance for
implementing extended-length paths, see Naming files, paths, and namespaces.
Use disk quotas to track and control disk space usage on NTFS volumes for
individual users.
Use file system compression to maximize the amount of data that can be stored.
Increase the size of an NTFS volume by adding unallocated space from the same
disk or from a different disk.
Mount a volume at any empty folder on a local NTFS volume if you run out of
drive letters or need to create extra space that is accessible from an existing folder.
Related links
Cluster size recommendations for ReFS and NTFS
Resilient File System (ReFS) overview
What's New in NTFS for Windows Server (Windows Server 2012 R2)
What's New in NTFS (Windows Server 2008 R2, Windows 7)
NTFS Health and Chkdsk
Self-Healing NTFS (introduced in Windows Server 2008)
Transactional NTFS (introduced in Windows Server 2008)
Windows Server Storage documentation
Volume Shadow Copy Service
Article • 12/07/2022
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2,
Windows Server 2008, Windows 10, Windows 8.1, Windows 8, Windows 7
Backing up and restoring critical business data can be very complex due to the following
issues:
The data usually needs to be backed up while the applications that produce the
data are still running. This means that some of the data files might be open or they
might be in an inconsistent state.
If the data set is large, it can be difficult to back up all of it at one time.
VSS coordinates the actions that are required to create a consistent shadow copy (also
known as a snapshot or a point-in-time copy) of the data that is to be backed up. The
shadow copy can be used as-is, or it can be used in scenarios such as the following:
You want to back up application data and system state information, including
archiving data to another hard disk drive, to tape, or to other removable media.
You need a fast recovery from data loss by restoring data to the original Logical
Unit Number (LUN) or to an entirely new LUN that replaces an original LUN that
failed.
Windows features and applications that use VSS include the following:
VSS service Part of the Windows operating system that ensures the other components
can communicate with each other properly and work together.
VSS requester The software that requests the actual creation of shadow copies (or
other high-level operations like importing or deleting them). Typically, this is the backup
application. The Windows Server Backup utility and the System Center Data Protection
Manager application are VSS requesters. Non-Microsoft® VSS requesters include nearly
all backup software that runs on Windows.
VSS writer The component that guarantees we have a consistent data set to back up.
This is typically provided as part of a line-of-business application, such as SQL Server®
or Exchange Server. VSS writers for various Windows components, such as the registry,
are included with the Windows operating system. Non-Microsoft VSS writers are
included with many applications for Windows that need to guarantee data consistency
during back up.
VSS provider The component that creates and maintains the shadow copies. This can
occur in the software or in the hardware. The Windows operating system includes a VSS
provider that uses copy-on-write. If you use a storage area network (SAN), it is
important that you install the VSS hardware provider for the SAN, if one is provided. A
hardware provider offloads the task of creating and maintaining a shadow copy from
the host operating system.
The following diagram illustrates how the VSS service coordinates with requesters,
writers, and providers to create a shadow copy of a volume.
Figure 1 Architectural diagram of Volume Shadow Copy Service
To create a shadow copy, the requester, writer, and provider perform the following
actions:
1. The requester asks the Volume Shadow Copy Service to enumerate the writers,
gather the writer metadata, and prepare for shadow copy creation.
2. Each writer creates an XML description of the components and data stores that
need to be backed up and provides it to the Volume Shadow Copy Service. The
writer also defines a restore method, which is used for all components. The Volume
Shadow Copy Service provides the writer's description to the requester, which
selects the components that will be backed up.
3. The Volume Shadow Copy Service notifies all the writers to prepare their data for
making a shadow copy.
4. Each writer prepares the data as appropriate, such as completing all open
transactions, rolling transaction logs, and flushing caches. When the data is ready
to be shadow-copied, the writer notifies the Volume Shadow Copy Service.
5. The Volume Shadow Copy Service tells the writers to temporarily freeze application
write I/O requests (read I/O requests are still possible) for the few seconds that are
required to create the shadow copy of the volume or volumes. The application
freeze is not allowed to take longer than 60 seconds. The Volume Shadow Copy
Service flushes the file system buffers and then freezes the file system, which
ensures that the file system metadata is recorded correctly and the data to be
shadow-copied is written in a consistent order.
6. The Volume Shadow Copy Service tells the provider to create the shadow copy.
The shadow copy creation period lasts no more than 10 seconds, during which all
write I/O requests to the file system remain frozen.
7. The Volume Shadow Copy Service releases file system write I/O requests.
8. VSS tells the writers to thaw application write I/O requests. At this point
applications are free to resume writing data to the disk that is being shadow-
copied.
Note
The shadow copy creation can be aborted if the writers are kept in the freeze state
for longer than 60 seconds or if the providers take longer than 10 seconds to
commit the shadow copy.
9. The requester can retry the process (go back to step 1) or notify the administrator
to retry at a later time.
10. If the shadow copy is successfully created, the Volume Shadow Copy Service
returns the location information for the shadow copy to the requester. In some
cases, the shadow copy can be temporarily made available as a read-write volume
so that VSS and one or more applications can alter the contents of the shadow
copy before the shadow copy is finished. After VSS and the applications make their
alterations, the shadow copy is made read-only. This phase is called Auto-recovery,
and it is used to undo any file-system or application transactions on the shadow
copy volume that were not completed before the shadow copy was created.
Complete copy This method makes a complete copy (called a "full copy" or "clone") of
the original volume at a given point in time. This copy is read-only.
Copy-on-write This method does not copy the original volume. Instead, it makes a
differential copy by copying all changes (completed write I/O requests) that are made to
the volume after a given point in time.
Redirect-on-write This method does not copy the original volume, and it does not
make any changes to the original volume after a given point in time. Instead, it makes a
differential copy by redirecting all changes to a different volume.
Complete copy
A complete copy is usually created by making a "split mirror" as follows:
1. The original volume and the shadow copy volume are a mirrored volume set.
2. The shadow copy volume is separated from the original volume. This breaks the
mirror connection.
After the mirror connection is broken, the original volume and the shadow copy volume
are independent. The original volume continues to accept all changes (write I/O
requests), while the shadow copy volume remains an exact read-only copy of the
original data at the time of the break.
Copy-on-write method
In the copy-on-write method, when a change to the original volume occurs (but before
the write I/O request is completed), each block to be modified is read and then written
to the volume's shadow copy storage area (also called its "diff area"). The shadow copy
storage area can be on the same volume or a different volume. This preserves a copy of
the data block on the original volume before the change overwrites it.
Time    Source data (status and data)            Shadow copy (status and data)
T2      Original data overwritten: 1 2 3' 4 5    Differences and index stored on shadow copy: 3
The copy-on-write method is a quick method for creating a shadow copy, because it
copies only data that is changed. The copied blocks in the diff area can be combined
with the changed data on the original volume to restore the volume to its state before
any of the changes were made. If there are many changes, the copy-on-write method
can become expensive.
Redirect-on-write method
In the redirect-on-write method, whenever the original volume receives a change (write
I/O request), the change is not applied to the original volume. Instead, the change is
written to another volume's shadow copy storage area.
Time    Source data (status and data)         Shadow copy (status and data)
T1      Data changed in cache: 3 to 3'        Shadow copy created (differences only): 3'
T2      Original data unchanged: 1 2 3 4 5    Differences and index stored on shadow copy: 3'
Like the copy-on-write method, the redirect-on-write method is a quick method for
creating a shadow copy, because it copies only changes to the data. The copied blocks
in the diff area can be combined with the unchanged data on the original volume to
create a complete, up-to-date copy of the data. If there are many read I/O requests, the
redirect-on-write method can become expensive.
Shadow Copy Providers
There are two types of shadow copy providers: hardware-based providers and software-
based providers. There is also a system provider, which is a software provider that is
built in to the Windows operating system.
Hardware-based providers
Hardware-based shadow copy providers act as an interface between the Volume
Shadow Copy Service and the hardware level by working in conjunction with a hardware
storage adapter or controller. The work of creating and maintaining the shadow copy is
performed by the storage array.
Hardware providers always take the shadow copy of an entire LUN, but the Volume
Shadow Copy Service only exposes the shadow copy of the volume or volumes that
were requested.
A hardware-based shadow copy provider makes use of the Volume Shadow Copy
Service functionality that defines the point in time, allows data synchronization,
manages the shadow copy, and provides a common interface with backup applications.
However, the Volume Shadow Copy Service does not specify the underlying mechanism
by which the hardware-based provider produces and maintains shadow copies.
Software-based providers
Software-based shadow copy providers typically intercept and process read and write
I/O requests in a software layer between the file system and the volume manager
software.
These providers are implemented as a user-mode DLL component and at least one
kernel-mode device driver, typically a storage filter driver. Unlike hardware-based
providers, software-based providers create shadow copies at the software level, not the
hardware level.
For more information about basic disks, see What Are Basic Disks and Volumes?.
System provider
One shadow copy provider, the system provider, is supplied in the Windows operating
system. Although a default provider is supplied in Windows, other vendors are free to
supply implementations that are optimized for their storage hardware and software
applications.
To maintain the "point-in-time" view of a volume that is contained in a shadow copy, the
system provider uses a copy-on-write technique. Copies of the blocks on the volume that
have been modified since the beginning of the shadow copy creation are stored in a
shadow copy storage area.
The system provider can expose the production volume, which can be written to and
read from normally. When the shadow copy is needed, it logically applies the differences
to data on the production volume to expose the complete shadow copy.
For the system provider, the shadow copy storage area must be on an NTFS volume. The
volume to be shadow copied does not need to be an NTFS volume, but at least one
volume mounted on the system must be an NTFS volume.
The component files that make up the system provider are swprv.dll and volsnap.sys.
For more information about these writers, see In-Box VSS Writers.
The shadow copy can be a full clone or a differential shadow copy. In either case, at the
end of the resync operation, the destination LUN will have the same contents as the
shadow copy LUN. During the resync operation, the array performs a block-level copy
from the shadow copy to the destination LUN.
Note
Most arrays allow production I/O operations to resume shortly after the resync
operation begins. While the resync operation is in progress, read requests are redirected
to the shadow copy LUN, and write requests to the destination LUN. This allows arrays
to recover very large data sets and resume normal operations in several seconds.
LUN resynchronization is different from LUN swapping. A LUN swap is a fast recovery
scenario that VSS has supported since Windows Server 2003 SP1. In a LUN swap, the
shadow copy is imported and then converted into a read-write volume. The conversion
is an irreversible operation, and the volume and underlying LUN cannot be controlled
with the VSS APIs after that. The following list describes how LUN resynchronization
compares with LUN swapping:
In LUN resynchronization, the shadow copy is not altered, so it can be used several
times. In LUN swapping, the shadow copy can be used only once for a recovery.
For the most safety-conscious administrators, this is important. When LUN
resynchronization is used, the requester can retry the entire restore operation if
something goes wrong the first time.
At the end of a LUN swap, the shadow copy LUN is used for production I/O
requests. For this reason, the shadow copy LUN must use the same quality of
storage as the original production LUN to ensure that performance is not impacted
after the recovery operation. If LUN resynchronization is used instead, the
hardware provider can maintain the shadow copy on storage that is less expensive
than production-quality storage.
If the destination LUN is unusable and needs to be recreated, LUN swapping may
be more economical because it doesn't require a destination LUN.
Warning
All of the operations listed are LUN-level operations. If you attempt to recover a
specific volume by using LUN resynchronization, you are unwittingly going to revert
all the other volumes that are sharing the LUN.
For more information about Shadow Copies for Shared Folders, see Shadow Copies for
Shared Folders (https://go.microsoft.com/fwlink/?LinkId=180898) on TechNet.
With the Volume Shadow Copy Service and a storage array with a hardware provider
that is designed for use with the Volume Shadow Copy Service, it is possible to create a
shadow copy of the source data volume on one server, and then import the shadow
copy onto another server (or back to the same server). This process is accomplished in a
few minutes, regardless of the size of the data. The transport process is accomplished
through a series of steps that use a shadow copy requester (a storage-management
application) that supports transportable shadow copies.
To transport a shadow copy
1. Create a transportable shadow copy of the source data on a server.
2. Import the shadow copy to a server that is connected to the SAN (you can import
to a different server or the same server).
Note
A transportable shadow copy that is created on Windows Server 2003 cannot be
imported onto a server that is running Windows Server 2008 or Windows Server
2008 R2. A transportable shadow copy that was created on Windows Server 2008
or Windows Server 2008 R2 cannot be imported onto a server that is running
Windows Server 2003. However, a shadow copy that is created on Windows Server
2008 can be imported onto a server that is running Windows Server 2008 R2 and
vice versa.
Shadow copies are read-only. If you want to convert a shadow copy to a read/write LUN,
you can use a Virtual Disk Service-based storage-management application (including
some requesters) in addition to the Volume Shadow Copy Service. By using this
application, you can remove the shadow copy from Volume Shadow Copy Service
management and convert it to a read/write LUN.
Shadow copies that are created on either of these versions of Windows can be used on
the other.
For more information, see the following Microsoft TechNet Web sites:
To exclude specific files from shadow copies, use the following registry key:
FilesNotToSnapshot.
Note
It cannot delete files from a shadow copy that was created on a Windows
Server by using the Previous Versions feature.
It can delete files from a shadow copy that was created by using the
Diskshadow utility, but it cannot delete files from a shadow copy that was
created by using the Vssadmin utility.
Files are deleted from a shadow copy on a best-effort basis. This means that
they are not guaranteed to be deleted.
System administrators can use the VSS troubleshooting information on the following
Microsoft TechNet Library Web site to gather diagnostic information about VSS-related
issues.
If there is a preconfigured manual association between the original volume and the
shadow copy volume location, then that location is used.
If the previous two criteria do not provide a location, the shadow copy service
chooses a location based on available free space. If more than one volume is being
shadow copied, the shadow copy service creates a list of possible snapshot
locations based on the size of free space, in descending order. The number of
locations provided is equal to the number of volumes being shadow copied.
If the volume being shadow copied is one of the possible locations, then a local
association is created. Otherwise an association with the volume with the most
available space is created.
How can I control the space that is used for shadow copy
storage space?
Type the vssadmin resize shadowstorage command.
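For example, a hedged sketch run from an elevated prompt; the drive letters and maximum size are illustrative.
PowerShell
# Cap the shadow copy storage for volume C: at 10 GB.
vssadmin resize shadowstorage /For=C: /On=C: /MaxSize=10GB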
DiskShadow (https://go.microsoft.com/fwlink/?LinkId=180907)
VssAdmin (https://go.microsoft.com/fwlink/?LinkId=84008)
DiskShadow
DiskShadow is a VSS requester that you can use to manage all the hardware and
software snapshots that you can have on a system. DiskShadow includes commands
such as the following:
expose: Exposes a persistent shadow copy (as a drive letter, for example)
revert: Reverts a volume back to a specified shadow copy
This tool is intended for use by IT professionals, but developers might also find it useful
when testing a VSS writer or VSS provider.
VssAdmin
VssAdmin is used to create, delete, and list information about shadow copies. It can also
be used to resize the shadow copy storage area ("diff area").
resize shadowstorage: Changes the maximum size of the shadow copy storage
area
VssAdmin can only be used to administer shadow copies that are created by the system
software provider.
VssAccessControl
MaxShadowCopies
MinDiffAreaFileSize
VssAccessControl
This key is used to specify which users have access to shadow copies.
For more information, see the following entries on the MSDN Web site:
MaxShadowCopies
This key specifies the maximum number of client-accessible shadow copies that can be
stored on each volume of the computer. Client-accessible shadow copies are used by
Shadow Copies for Shared Folders.
For more information, see the following entry on the MSDN Web site:
MinDiffAreaFileSize
This key specifies the minimum initial size, in MB, of the shadow copy storage area.
For more information, see the following entry on the MSDN Web site:
Note
Additional References
Volume Shadow Copy Service in Windows Developer Center
Use Disk Cleanup on Windows Server
Article • 03/13/2023
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016, Windows
Server 2012 R2, Windows Server 2012, Windows Server 2008 R2
The Disk Cleanup tool clears unnecessary files in a Windows Server environment. This tool is
available by default on Windows Server 2019 and Windows Server 2016, but you might have to
take a few manual steps to enable it on earlier versions of Windows Server.
To start the Disk Cleanup tool, either run the Cleanmgr.exe file, or select Start > Windows
Administrative Tools > Disk Cleanup.
You can also run Disk Cleanup by using the cleanmgr Windows command, and use command-
line options to direct Disk Cleanup to clean certain files.
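For example, a minimal sketch using two well-known options: /sageset stores a reusable selection of cleanup categories under the given profile number, and /sagerun runs that stored selection.
PowerShell
# Choose the categories to clean and save them as profile 1, then run that profile.
cleanmgr /sageset:1
cleanmgr /sagerun:1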
Note
If you're just looking to free up disk space, consider using Azure File Sync with cloud tiering
enabled. This method lets you cache your most frequently accessed files locally and tier
your least frequently accessed files to the cloud, saving local storage space while
maintaining performance. For more information, see Planning for an Azure File Sync
deployment.
1. If Server Manager is already open, go to the next step. If Server Manager isn't open yet,
launch it by using one of the following options.
3. On the Before you begin page, verify that your destination server and network
environment are prepared for the feature that you want to install. Select Next.
4. On the Select installation type page, select Role-based or feature-based installation to
install all parts of the feature on a single server. Select Next.
5. On the Select destination server page, select a server from the server pool, or select an
offline VHD. Select Next.
7. On the Select features page, select User Interface and Infrastructure, and then select
Desktop Experience.
8. In Add features that are required for Desktop Experience?, select Add Features.
10. Verify that the Disk Cleanup button appears in the Properties dialog box.
To use cleanmgr.exe, install the Desktop Experience as described earlier, or copy two files that
are already present on the server, cleanmgr.exe and cleanmgr.exe.mui. Use the following table
to locate the files for your operating system.
You can launch the Disk Cleanup tool by running Cleanmgr.exe from a Command Prompt
window, or by selecting Start and entering Cleanmgr in the search field.
To set up the Disk Cleanup button to appear on a disk's Properties dialog, you need to install
the Desktop Experience feature, as shown in the previous section.
Related links
Free up drive space in Windows 10
cleanmgr
Advanced Troubleshooting Server
Message Block (SMB)
Article • 12/13/2022
Server Message Block (SMB) is a network transport protocol for file system operations
to enable a client to access resources on a server. The primary purpose of the SMB
protocol is to enable remote file system access between two systems over TCP/IP.
Note
The SMB Server (SRV) refers to the system that is hosting the file system, also
known as the file server. The SMB Client (CLI) refers to the system that is trying to
access the file system, regardless of the OS version or edition.
For example, if you use Windows Server 2016 to reach an SMB share that is hosted on
Windows 10, Windows Server 2016 is the SMB Client and Windows 10 is the SMB Server.
Collect data
Before you troubleshoot SMB issues, we recommend that you first collect a network
trace on both the client and server sides. The following guidelines apply:
On Windows systems, you can use netshell (netsh), Network Monitor, Message
Analyzer, or Wireshark to collect a network trace.
Third-party devices generally have an in-box packet capture tool, such as tcpdump
(Linux/FreeBSD/Unix), or pktt (NetApp). For example, if the SMB client or SMB
server is a Unix host, you can collect data by running the following command:
To discover the source of the issue, you can check the two-sided traces: CLI, SRV, or
somewhere in between.
This section provides the steps for using netshell to collect a network trace.
Important
The Microsoft Message Analyzer tool has been retired and we recommend
Wireshark to analyze ETL files. For those who have downloaded the tool
previously and are looking for more information, see Installing and upgrading
Message Analyzer.
Note
A Netsh trace creates an ETL file. ETL files can be opened in Message Analyzer (MA),
Network Monitor 3.4 (set the parser to Network Monitor Parsers > Windows), and
Wireshark.
1. On both the SMB server and SMB client, create a Temp folder on drive C. Then, run
the following command:
PowerShell
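# Assumed setup that isn't shown in this excerpt: the capture session named "trace" must be
# created and given a packet capture provider before it can be started. For example:
New-NetEventSession -Name trace -CaptureMode SaveToFile -LocalFilePath "C:\Temp\smbtrace.etl"
Add-NetEventPacketCaptureProvider -SessionName trace -TruncationLength 1500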
Start-NetEventSession trace
PowerShell
Stop-NetEventSession trace
Remove-NetEventSession trace
Note
You should trace only a minimum amount of the data that's transferred. For
performance issues, always take both a good and bad trace, if the situation allows
it.
1. The TCP three-way handshake does not finish. This typically indicates that there is
a firewall block, or that the Server service is not running.
2. Retransmits are occurring. These can cause slow file transfers because of
compound TCP congestion throttling.
3. Five retransmits followed by a TCP reset could mean that the connection between
systems was lost, or that one of the SMB services crashed or stopped responding.
4. The TCP receive window is diminishing. This can be caused by slow storage or
some other issue that prevents data from being retrieved from the Ancillary
Function Driver (AFD) Winsock buffer.
If there is no noticeable TCP/IP issue, look for SMB errors. To do this, follow these steps:
1. Always check SMB errors against the MS-SMB2 protocol specification. Many SMB
errors are benign (not harmful). Refer to the following information to determine
why SMB returned the error before you conclude that the error is related to any of
the following issues:
The MS-SMB2 Message Syntax article details each SMB command and its
options.
The MS-SMB2 Client Processing article details how the SMB client creates
requests and responds to server messages.
The MS-SMB2 Server Processing article details how the SMB server creates
requests and responds to client requests.
The SMB session must be terminated (TCP reset) when the Validate Negotiate
process fails on either the client or the server.
This process might fail because a WAN optimizer is modifying the SMB
Negotiate packet.
You can learn a lot about what the application is trying to do by examining the
SMB commands.
Compare the commands and operations to the protocol specification to make sure that
everything is operating correctly. If it is not, collect data that is closer to or at a lower
level to look for more information about the root cause. To do this, follow these steps:
2. Run the netsh command to trace and gather details about whether there are issues
in the network stack or drops in Windows Filtering Platform (WFP) applications,
such as a firewall or antivirus program.
3. If all other options fail, collect a t.cmd if you suspect that the issue occurs within
SMB itself, or if none of the other data is sufficient to identify a root cause.
For example:
The two-sided traces show that the SRV responds slowly to a READ request.
Note
Optionally, you might also temporarily uninstall the antivirus program during
troubleshooting.
Event logs
Both SMB Client and SMB Server have a detailed event log structure, as shown in the
following screenshot. Collect the event logs to help find the root cause of the issue.
RDBSS.sys
MRXSMB.sys
MRXSMB10.sys
MRXSMB20.sys
MUP.sys
SMBdirect.sys
SRVNET.sys
SRV.sys
SRV2.sys
SMBdirect.sys
Under %windir%\system32
srvsvc.dll
Update suggestions
We recommend that you update the following components before you troubleshoot
SMB issues:
A file server requires file storage. If your storage has iSCSI component, update
those components.
Reference
Microsoft SMB Protocol Packet Exchange Scenario
How to detect, enable and disable
SMBv1, SMBv2, and SMBv3 in Windows
Article • 05/18/2023
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows 11, Windows 10,
Windows 8.1, Windows 8
This article describes how to enable and disable Server Message Block (SMB) version 1
(SMBv1), SMB version 2 (SMBv2), and SMB version 3 (SMBv3) on the SMB client and
server components.
While disabling or removing SMBv1 might cause some compatibility issues with old
computers or software, SMBv1 has significant security vulnerabilities, and we strongly
encourage you not to use it. SMB 1.0 isn't installed by default in any edition of
Windows 11 or Windows Server 2019 and later. SMB 1.0 also isn't installed by default in
Windows 10, except Home and Pro editions. We recommend that instead of reinstalling
SMB 1.0, you update the SMB server that still requires it. For a list of third parties that
require SMB 1.0 and their updates that remove the requirement, review the SMB1
Product Clearinghouse.
In Windows 10, Windows 8.1, Windows Server 2019, Windows Server 2016, Windows
Server 2012 R2, and Windows Server 2012, disabling SMBv3 deactivates the following
functionality:
In Windows 7 and Windows Server 2008 R2, disabling SMBv2 deactivates the following
functionality:
The SMBv2 protocol was introduced in Windows Vista and Windows Server 2008, while
the SMBv3 protocol was introduced in Windows 8 and Windows Server 2012. For more
information about SMBv2 and SMBv3 capabilities, see the following articles:
Note
The computer will restart after you run the PowerShell commands to disable or
enable SMBv1.
Detect:
PowerShell
Disable:
PowerShell
Enable:
PowerShell
Tip
1. On the Server Manager Dashboard of the server where you want to remove
SMBv1, under Configure this local server, select Add roles and features.
2. On the Before you begin page, select Start the Remove Roles and Features
Wizard, and then on the following page, select Next.
3. On the Select destination server page under Server Pool, ensure that the server
you want to remove the feature from is selected, and then select Next.
4. On the Remove server roles page, select Next.
5. On the Remove features page, clear the check box for SMB 1.0/CIFS File Sharing
Support and select Next.
6. On the Confirm removal selections page, confirm that the feature is listed, and
then select Remove.
Note
When you enable or disable SMBv2 in Windows 8 or Windows Server 2012, SMBv3
is also enabled or disabled. This behavior occurs because these protocols share the
same stack.
Server
SMBv1
Detect:
PowerShell
Disable:
PowerShell
Enable:
PowerShell
SMB v2/v3
Detect:
PowerShell
Disable:
PowerShell
Enable:
PowerShell
Set-SmbServerConfiguration -EnableSMB2Protocol $true
Note
Detect:
PowerShell
Get-Item HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters | ForEach-Object {Get-ItemProperty $_.pspath}
Disable:
PowerShell
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" SMB1 -Type DWORD -Value 0 -Force
Enable:
PowerShell
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" SMB1 -Type DWORD -Value 1 -Force
Note You must restart the computer after you make these changes. For more
information, see Server storage at Microsoft .
Detect:
PowerShell
Get-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters | ForEach-Object {Get-ItemProperty $_.pspath}
Disable:
PowerShell
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" SMB2 -Type DWORD -Value 0 -Force
Enable:
PowerShell
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" SMB2 -Type DWORD -Value 1 -Force
Note
You must restart the computer after you make these changes.
Registry Editor
Important
Follow the steps in this section carefully. Serious problems might occur if you
modify the registry incorrectly. Before you modify it, back up the registry for
restoration in case problems occur.
To enable or disable SMBv1 on the SMB server, configure the following registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Para
meters
To enable or disable SMBv2 on the SMB server, configure the following registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Para
meters
Note
You must restart the computer after you make these changes.
Server
SMBv1
This procedure configures the following new item in the registry:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Para
meters
1. Open the Group Policy Management Console. Right-click the Group Policy
object (GPO) that should contain the new preference item, and then click Edit.
3. Right-click the Registry node, point to New, and select Registry Item.
Action: Create
Hive: HKEY_LOCAL_MACHINE
Key Path: SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters
Value name: SMB1
Value type: REG_DWORD
Value data: 0
This procedure disables the SMBv1 Server components. This Group Policy must be
applied to all necessary workstations, servers, and domain controllers in the
domain.
Note
Important
Enable:
PowerShell
Disable:
PowerShell
Detect:
PowerShell
Summary
If all the settings are in the same GPO, Group Policy Management displays the following
settings.
Testing and validation
After completing the configuration steps in this article, allow the policy to replicate and
update. As necessary for testing, run gpupdate /force at a command prompt, and then
review the target computers to make sure that the registry settings are applied correctly.
Make sure SMBv2 and SMBv3 are functioning for all other systems in the environment.
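A quick spot-check on a target computer might look like the following sketch: the first command reads the SMB1 policy value written by the GPO, and the second lists active SMB sessions along with the dialect each one negotiated.
PowerShell
# Confirm the SMB1 registry value applied by Group Policy (expected value: 0).
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" -Name SMB1 -ErrorAction SilentlyContinue
# On a file server, check which SMB dialects clients are actually using.
Get-SmbSession | Select-Object ClientComputerName, ClientUserName, Dialect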
Note
Summary
Since Windows 10 Fall Creators Update and Windows Server, version 1709 (RS3), the
Server Message Block version 1 (SMBv1) network protocol is no longer installed by
default. It was superseded by SMBv2 and later protocols starting in 2007. Microsoft
publicly deprecated the SMBv1 protocol in 2014.
SMBv1 has the following behavior in Windows 10 and Windows Server 2019 and later
versions:
SMBv1 now has both client and server sub-features that can be uninstalled
separately.
Windows 10 Enterprise, Windows 10 Education, and Windows 10 Pro for
Workstations no longer contain the SMBv1 client or server by default after a clean
installation.
Windows Server 2019 and later no longer contains the SMBv1 client or server by
default after a clean installation.
Windows 10 Home and Windows 10 Pro no longer contain the SMBv1 server by
default after a clean installation.
Windows 11 doesn't contain the SMBv1 server or client by default after a clean
installation.
Windows 10 Home and Windows 10 Pro still contain the SMBv1 client by default
after a clean installation. If the SMBv1 client isn't used for 15 days in total
(excluding the computer being turned off), it automatically uninstalls itself.
In-place upgrades and Insider flights of Windows 10 Home and Windows 10 Pro
don't automatically remove SMBv1 initially. Windows evaluates the usage of the SMBv1
client and server, and if either of them isn't used for 15 days in total (excluding the
time during which the computer is off), Windows will automatically uninstall it.
In-place upgrades and Insider flights of the Windows 10 Enterprise, Windows 10
Education, and Windows 10 Pro for Workstations editions don't automatically
remove SMBv1. An administrator must decide to uninstall SMBv1 in these
managed environments.
Automatic removal of SMBv1 after 15 days is a one-time operation. If an
administrator re-installs SMBv1, no further attempts will be made to uninstall it.
The SMB version 2.02, 2.1, 3.0, 3.02, and 3.1.1 features are still fully supported and
included by default as part of the SMBv2 binaries.
Because the Computer Browser service relies on SMBv1, the service is uninstalled if
the SMBv1 client or server is uninstalled. This means that Explorer Network can no
longer display Windows computers through the legacy NetBIOS datagram
browsing method.
SMBv1 can still be reinstalled in all editions of Windows 10 and Windows Server
2016.
Windows Server virtual machines created by Microsoft for the Azure Marketplace
don't contain the SMB1 binaries, and you can't enable SMB1. Third-party Azure
Marketplace VMs might contain SMB1; contact their vendor for information.
Starting in Windows 10, version 1809 (RS5), Windows 10 Pro no longer contains the
SMBv1 client by default after a clean installation. All other behaviors from version 1709
still apply.
Note
Windows 10, version 1803 (RS4) Pro handles SMBv1 in the same manner
as Windows 10, version 1703 (RS2) and Windows 10, version 1607 (RS1). This issue
was fixed in Windows 10, version 1809 (RS5). You can still uninstall SMBv1
manually. However, Windows will not automatically uninstall SMBv1 after 15 days in
the following scenarios:
If you try to connect to devices that support only SMBv1, or if these devices try to
connect to you, you may receive one of the following error messages:
Output
You can't connect to the file share because it's not secure. This share
requires the obsolete SMB1 protocol, which is unsafe and could expose your
system to attack.
Your system requires SMB2 or higher. For more info on resolving this issue,
see: https://go.microsoft.com/fwlink/?linkid=852747
Output
System Error 64
Output
Error 58
When a remote server required an SMBv1 connection from this client, and the SMBv1
client is installed, the following event is logged. This mechanism audits the use of
SMBv1 and is also used by the automatic uninstaller to start the 15-day timer for
removing SMBv1 because of lack of use.
Output
Dialect:
SecurityMode
Server name:
Guidance:
SMB1 is deprecated and should not be installed nor enabled. For more
information, see https://go.microsoft.com/fwlink/?linkid=852747.
When a remote server required an SMBv1 connection from this client, and the SMBv1
client isn't installed, the following event is logged. This event is to show why the
connection fails.
Output
Guidance:
The client has SMB1 disabled or uninstalled. For more information:
https://go.microsoft.com/fwlink/?linkid=852747.
These devices aren't likely running Windows. They are more likely running older versions
of Linux, Samba, or other types of third-party software to provide SMB services. Often,
these versions of Linux and Samba are, themselves, no longer supported.
More Information
To work around this issue, contact the manufacturer of the product that supports only
SMBv1, and request a software or firmware update that supports SMBv2.02 or a later
version. For a current list of known vendors and their SMBv1 requirements, see the
following Windows and Windows Server Storage Engineering Team Blog article:
Leasing mode
If SMBv1 is required to provide application compatibility for legacy software behavior,
such as a requirement to disable oplocks, Windows provides a new SMB share flag that's
known as Leasing mode. This flag specifies whether a share disables modern SMB
semantics such as leases and oplocks.
You can specify a share without using oplocks or leasing to allow a legacy application to
work with SMBv2 or a later version. To do this, use the New-SmbShare or Set-SmbShare
PowerShell cmdlets together with the -LeasingMode None parameter.
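For example, the following commands create a new share with leasing and oplocks
disabled, and reconfigure an existing share the same way. The share name and path are
examples only:
PowerShell
New-SmbShare -Name 'LegacyApp' -Path 'D:\LegacyApp' -LeasingMode None
Set-SmbShare -Name 'LegacyApp' -LeasingMode None -Force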
7 Note
You should use this option only on shares that are required by a third-party
application for legacy support if the vendor states that it is required. Do not specify
Leasing mode on user data shares or CA shares that are used by Scale-Out File
Servers. This is because the removal of oplocks and leases causes instability and
data corruption in most applications. Leasing mode works only in Share mode. It
can be used by any client operating system.
However, if you still have to use the Explorer Network in home and small business
workgroup environments to locate Windows-based computers, you can follow these
steps on your Windows-based computers that no longer use SMBv1:
1. Start the "Function Discovery Provider Host" and "Function Discovery Resource
Publication" services, and then set them to Automatic (Delayed Start). (A PowerShell
example follows these steps.)
2. When you open Explorer Network, enable network discovery when you're
prompted.
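As a sketch of step 1, assuming the standard service names fdPHost (Function Discovery
Provider Host) and FDResPub (Function Discovery Resource Publication), you can run the
following from an elevated PowerShell session:
PowerShell
# Set both Function Discovery services to delayed automatic start, then start them.
sc.exe config fdPHost start= delayed-auto
sc.exe config FDResPub start= delayed-auto
Start-Service -Name fdPHost, FDResPub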
All Windows devices within that subnet that have these settings will now appear in
Network for browsing. This uses the WS-DISCOVERY protocol. Contact your other
vendors and manufacturers if their devices still don't appear in this browse list after the
Windows devices appear. It's possible they have this protocol disabled or that they
support only SMBv1.
7 Note
We recommend that you map drives and printers instead of enabling this feature,
which still requires searching and browsing for their devices. Mapped resources are
easier to locate, require less training, and are safer to use. This is especially true if
these resources are provided automatically through Group Policy. An administrator
can configure printers for location by methods other than the legacy Computer
Browser service by using IP addresses, Active Directory Domain Services (AD DS),
Bonjour, mDNS, UPnP, and so on.
) Important
We strongly recommend that you don't reinstall SMBv1. This is because this older
protocol has known security issues regarding ransomware and other malware.
) Important
You should ignore this specific BPA rule's guidance; it's deprecated. The false error
was first corrected for Windows Server 2022 and Windows Server 2019 in the April 2022
cumulative update. We repeat: don't enable SMB 1.0.
Additional references
Stop using SMB1
SMB known issues
Article • 05/22/2020
The following topics describe some common troubleshooting issues that can occur
when you use Server Message Block (SMB). These topics also provide possible solutions
to those issues.
When you analyze a network trace, you notice that there is a Transmission Control
Protocol (TCP) three-way handshake failure that causes the SMB issue to occur. This
article describes how to troubleshoot this situation.
Troubleshooting
Generally, the cause is a local or infrastructure firewall that blocks the traffic. This issue
can occur in either of the following scenarios.
Scenario 1
The TCP SYN packet arrives on the SMB server, but the SMB server does not return a
TCP SYN-ACK packet.
Step 1
Run netstat or Get-NetTcpConnection to make sure that there is a listener on TCP port
445, which should be owned by the SYSTEM process.
PowerShell
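# The original command isn't preserved in this copy; one way to check for a
# listener on TCP port 445 is:
Get-NetTcpConnection -LocalPort 445 | Format-Table LocalAddress, LocalPort, State, OwningProcess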
Step 2
Make sure that the Server service is started and running.
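For example, you can confirm the state of the Server service (service name
LanmanServer) with:
PowerShell
Get-Service -Name LanmanServer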
Step 3
Take a Windows Filtering Platform (WFP) capture to determine which rule or program is
dropping the traffic. To do this, run the following command in a Command Prompt
window:
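(The exact command isn't preserved in this copy. As a sketch, a WFP diagnostic capture
can typically be collected with netsh: start the capture, reproduce the failing SMB
connection, and then stop the capture.)
Console
netsh wfp capture start
netsh wfp capture stop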
Run a scenario trace, and look for WFP drops in SMB traffic (on TCP port 445).
Optionally, you could remove the anti-virus programs because they are not always WFP-
based.
Step 4
If Windows Firewall is enabled, enable firewall logging to determine whether it records a
drop in traffic.
Make sure that the appropriate "File and Printer Sharing (SMB-In)" rules are enabled in
Windows Firewall with Advanced Security > Inbound Rules.
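For example, the following sketch turns on logging of dropped packets for all profiles
and lists the inbound SMB rules together with the profiles they apply to:
PowerShell
Set-NetFirewallProfile -Name Domain,Private,Public -LogBlocked True
Get-NetFirewallRule -DisplayGroup 'File and Printer Sharing' |
    Where-Object DisplayName -Like '*SMB-In*' |
    Format-Table DisplayName, Enabled, Profile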
7 Note
Depending on how your computer is set up, "Windows Firewall" might be called
"Windows Defender Firewall."
Scenario 2
The TCP SYN packet never arrives at the SMB server.
In this scenario, you have to investigate the devices along the network path. You may
analyze network traces that are captured on each device to determine which device is
blocking the traffic.
Negotiate, Session Setup, and Tree
Connect Failures
Article • 12/07/2020
This article describes how to troubleshoot the failures that occur during an SMB
Negotiate, Session Setup, and Tree Connect request.
Negotiate fails
The SMB server receives an SMB NEGOTIATE request from an SMB client. The
connection times out and is reset after 60 seconds. There may be an ACK message after
about 200 microseconds.
If you are using Windows Server 2008 R2, there are hotfixes for this problem. Make sure
that the SMB client and the SMB server are up to date.
If the fully qualified domain name (FQDN) or Network Basic Input/Output System
(NetBIOS) name of the server is used in the Universal Naming Convention (UNC) path,
Windows will use Kerberos for authentication.
After the Negotiate response, there will be an attempt to get a Kerberos ticket for the
Common Internet File System (CIFS) service principal name (SPN) of the server. Look at
the Kerberos traffic on TCP port 88 to make sure that there are no Kerberos errors when
the SMB client is acquiring the token.
7 Note
The errors that occur during Kerberos pre-authentication are OK. The errors
that occur after Kerberos pre-authentication (instances in which authentication
doesn't work) are the errors that cause the SMB problem.
Make sure that the SMB server has an SPN when it is accessed through a CNAME
DNS record.
Make sure that SMB signing is working. (This is especially important for older,
third-party devices.)
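The following commands show one way to check these two items. The alias name is an
example only:
PowerShell
# Query Active Directory for a CIFS SPN that matches the CNAME alias.
setspn -Q cifs/fs-alias.contoso.com
# Review the signing configuration on the SMB server.
Get-SmbServerConfiguration | Select-Object EnableSecuritySignature, RequireSecuritySignature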
The cause of common Tree Connect errors can be found in 3.3.5.7 Receiving an SMB2
TREE_CONNECT Request. The following are the solutions for two common status codes.
[STATUS_BAD_NETWORK_NAME]
Make sure that the share exists on the server, and that it is spelled correctly in the SMB
client request.
[STATUS_ACCESS_DENIED]
Verify that the disk and folder that are used by the share exist and are accessible.
If you are using SMBv3 or later, check whether the server and the share require
encryption, but the client doesn't support encryption. To do this, take the following
actions:
PowerShell
Get-SmbShare | Select-Object Name, EncryptData
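The server-wide encryption settings can be checked in a similar way:
PowerShell
Get-SmbServerConfiguration | Select-Object EncryptData, RejectUnencryptedAccess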
Windows 8, Windows Server 2012, and later versions of Windows support client-
side encryption (SMBv3 and later).
Samba and third-party devices may not support encryption. You may have to
consult product documentation for more information.
References
For more information, see the following articles.
In the network trace for the SMB issue, you notice that a TCP Reset abort occurred
during the Validate Negotiate process. This article describes how to troubleshoot the
situation.
Cause
This issue can be caused by a failed negotiation validation. This typically occurs because
a WAN accelerator modifies the original SMB NEGOTIATE packet.
Microsoft no longer allows modification of the Validate Negotiate packet for any reason.
This is because this behavior creates a serious security risk.
Workaround
You can temporarily disable the Validate Negotiate process. To do this, locate the
following registry subkey:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters
In Windows PowerShell, you can run the following command to set this value:
PowerShell
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" -Name RequireSecureNegotiate -Value 0 -Force
7 Note
The Validate Negotiate process cannot be disabled in Windows 10, Windows Server
2016, or later versions of Windows.
If either the client or server cannot support the Validate Negotiate command, you can
work around this issue by setting SMB signing to be required. SMB signing is considered
more secure than Validate Negotiate. However, there can also be performance
degradation if signing is required.
Reference
For more information, see the following articles:
Server Message Block (SMB) file transfer speeds can slow down depending on the size
and quantity of your files, your connection type, and the version of apps you use. This
article provides troubleshooting procedures for slow file transfer speeds through SMB.
Slow transfer
You can troubleshoot slow file transfers by checking your current storage use. If you
observe slow transfers of files, consider the following steps:
Test the storage speed. Copy speeds are limited by storage speed.
File copies sometimes start fast and then slow down. A change in copy speed
occurs when the initial copy is cached or buffered, in memory or in the RAID
controller's memory cache, and the cache runs out. This change forces data to be
written directly to disk (write-through).
Look for packet loss in the trace. Packet loss can cause throttling by the TCP
congestion provider.
For SMBv3 and later versions, verify that SMB Multichannel is enabled and
working.
On the SMB client, enable large MTU in SMB, and disable bandwidth throttling by
running the following command:
PowerShell
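# The command isn't preserved in this copy; the typical form is:
Set-SmbClientConfiguration -EnableBandwidthThrottling $false -EnableLargeMtu $true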
During file transfer, file creation causes both high protocol overhead and high file
system overhead. For large file transfers, these costs occur only one time. When a large
number of small files are transferred, the cost is repetitive and causes slow transfers.
Issue details
Network latency, create commands, and antivirus programs contribute to a slower
transfer of small files. The following are technical details about this problem:
SMB issues a create command to request that the file be created. Code checks
whether the file exists and then creates the file, or some variation of the
create command creates the actual file.
The process can suffer from network latency and SMB server latency. This latency
occurs because the SMB request is first translated to a file system command and
then to the actual file system to complete the operation.
The transfer continues to slow while an antivirus program is running. This change
happens because the data is typically scanned once by the packet sniffer and a
second time when the data is written to disk. In some scenarios, these actions are
repeated thousands of times. You can potentially observe speeds of less than 1
MB/s.
You should verify that the Office and SMB binaries are up to date, and then test with
leasing disabled on the SMB server. To do this, follow these steps:
1. Run the following PowerShell command in Windows 8 and Windows Server 2012
or later versions of Windows:
PowerShell
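# The command isn't preserved in this copy; leasing can be disabled like this:
Set-SmbServerConfiguration -EnableLeasing $false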
You can also run the following command in an elevated Command Prompt
window:
REG ADD HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters /v DisableLeasing /t REG_DWORD /d 1 /f
7 Note
After you set this registry key, SMB2 leases are no longer granted, but oplocks
are still available. This setting is used primarily for troubleshooting.
2. Restart the file server or restart the Server service. To restart the service, run the
following commands:
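For example, one way to restart the Server service from PowerShell (this briefly
interrupts all SMB sessions on the server):
PowerShell
Stop-Service -Name LanmanServer -Force
Start-Service -Name LanmanServer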
To avoid this issue, you can also replicate the file to a local file server. For more
information, see saving Office documents to a network server is slow when using EFS.
High CPU usage issue on the SMB
server
Article • 05/22/2020
This article discusses how to troubleshoot the high CPU usage issue on the SMB server.
In most cases, you will notice the issue of high CPU usage in the system process. Before
you proceed, use Process Explorer to make sure that srv2.sys or ntfs.sys is consuming
excessive CPU resources.
Generally, this issue can be caused by some form of command queuing in the SAN. You
can use Perfmon to capture a Microsoft-Windows-StorPort trace, and analyze it to
accurately determine storage responsiveness.
Disk IO latency
Disk IO latency is a measure of the delay between the time that a disk IO request is
created and completed.
The IO latency that is measured in Perfmon includes all the time that is spent in the
hardware layers plus the time that is spent in the Microsoft Port Driver queue
(Storport.sys for SCSI). If the running processes generate a large StorPort queue, the
measured latency increases. This is because IO must wait before it is dispatched to the
hardware layers.
"Physical disk performance object" -> "Avg. Disk sec/Write counter" – This shows
the average write latency.
"Physical disk performance object" -> "Avg. Disk sec/Transfer counter" – This
shows the combined averages for both reads and writes.
The "_Total" instance is an average of the latencies for all physical disks in the computer.
Each of other instances represents an individual Physical Disk.
7 Note
Do not confuse these counters with Avg. Disk Transfers/sec. These are completely
different counters.
For the "physical disk performance object," the data is captured at the "Partition
manager" level in the storage stack.
When we measure the counters that are mentioned in the previous section, we are
measuring all the time that is spent by the request below the "Partition manager" level.
When the IO request is sent by the partition manager down the stack, we time stamp it.
When it returns, we time stamp it again and calculate the time difference. The time
difference is the latency.
By doing this, we are accounting for the time that is spent in the following components:
Class Driver - This manages the device type, such as disks, tapes, and so on.
Port Driver - This manages the transport protocol, such as SCSI, FC, SATA, and so
on.
Device Miniport Driver - This is the device driver for the Storage Adapter. It is
supplied by the manufacturer of the devices, such as Raid Controller, and FC HBA.
Disk Subsystem - This includes everything that is below the Device Miniport Driver.
This could be as simple as a cable that is connected to a single physical hard disk,
or as complex as a Storage Area Network. If the issue is determined to be caused
by this component, you can contact the hardware vendor for more information
about troubleshooting.
Disk queuing
There is a limited amount of IO that a disk subsystem can accept at a given time. The
excess IO gets queued until the disk can accept IO again. The time that IO spends in the
queues below the "Partition manager" level is accounted for in the Perfmon physical disk
latency measurements. As queues grow larger and IO must wait longer, the measured
latency also grows.
There are multiple queues below the "Partition manager" level, as follows:
Hardware Queues – such as disk controller queue, SAN switches queue, array
controller queue, and hard disk queue
We also account for the time that the hard disk spends actively servicing the IO and the
travel time that is taken for the request to return to the "Partition manager" level to be
marked as completed.
Finally, we have to pay special attention to the Port Driver Queue (for SCSI Storport.sys).
The Port Driver is the last Microsoft component to touch an IO before we hand it off to
the manufacturer-supplied Device Miniport Driver.
If the Device Miniport Driver can't accept any more IO because its queue or the
hardware queues below it are saturated, we will start accumulating IO on the Port Driver
Queue. The size of the Microsoft Port Driver queue is limited only by the available
system memory (RAM), and it can grow very large. This causes large measured latency.
To determine which SMB shares have ABE enabled, run the following PowerShell
command:
PowerShell
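# The command isn't preserved in this copy; one way to list the enumeration
# mode for each share is:
Get-SmbShare | Select-Object Name, FolderEnumerationMode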
You can enable ABE in Server Manager. Navigate to File and Storage Services >
Shares, right-click the share, select Properties, go to Settings, and then select Enable
access-based enumeration.
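Alternatively, you can enable ABE from PowerShell. The share name is an example only:
PowerShell
Set-SmbShare -Name 'Data' -FolderEnumerationMode AccessBased -Force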
You can check disk performance when enumeration is slow by opening the folder locally
through a console or an RDP session.
Troubleshoot the Event ID 50 Error
Message
Article • 11/03/2020
Symptoms
When information is being written to the physical disk, the following two event
messages may be logged in the system event log:
Event ID: 50
Event Type: Warning
Event Source: Ftdisk
Description: {Lost Delayed-Write Data} The system was attempting to transfer
file data from buffers to \Device\HarddiskVolume4. The write operation
failed, and only some of the data may have been written to the file.
Data:
0000: 00 00 04 00 02 00 56 00
0008: 00 00 00 00 32 00 04 80
0010: 00 00 00 00 00 00 00 00
0018: 00 00 00 00 00 00 00 00
0020: 00 00 00 00 00 00 00 00
0028: 11 00 00 80
Event ID: 26
Event Type: Information
Event Source: Application Popup
Description: Windows - Delayed Write Failed : Windows was unable to save all
the data for the file \Device\HarddiskVolume4\Program Files\Microsoft SQL
Server\MSSQL$INSTANCETWO\LOG\ERRORLOG. The data has been lost. This error
may be caused by a failure of your computer hardware or network connection.
These event ID messages mean exactly the same thing and are generated for the same
reasons. For the purposes of this article, only the event ID 50 message is described.
7 Note
The device and path in the description and the specific hexadecimal data will vary.
More Information
An event ID 50 message is logged if a generic error occurs when Windows is trying to
write information to the disk. This error occurs when Windows is trying to commit data
from the file system Cache Manager (not hardware level cache) to the physical disk. This
behavior is part of the memory management of Windows. For example, if a program
sends a write request, the write request is cached by Cache Manager and the program is
told the write is completed successfully. At a later point in time, Cache Manager tries to
lazy write the data to the physical disk. When Cache Manager tries to commit the data
to disk, an error occurs writing the data, and the data is flushed from the cache and
discarded. Write-back caching improves system performance, but data loss and volume
integrity loss can occur as a result of lost delayed-write failures.
It is important to remember that not all I/O is buffered I/O by Cache Manager. Programs
can set a FILE_FLAG_NO_BUFFERING flag that bypasses Cache Manager. When SQL
performs critical writes to a database, this flag is set to guarantee that the transaction is
completed directly to disk. For example, non-critical writes to log files perform buffered
I/O to improve overall performance. An event ID 50 message never results from non-
buffered I/O.
There are several different sources for an event ID 50 message. For example, an event ID
50 message logged from a MRxSmb source occurs if there is a network connectivity
problem with the redirector. To avoid performing incorrect troubleshooting steps, make
sure to review the event ID 50 message to confirm that it refers to a disk I/O issue and
that this article applies.
You can use the binary data that is associated with any accompanying "DISK" error
(indicated by an event ID 9, 11, 51 error message or other messages) to help you in
identifying the problem.
The following table describes what each offset of this message represents:
In the example in the "Symptoms" section, the error code is listed in the second line of
the data. That line starts with "0008:" and the error code is contained in its last four
bytes:
0008: 00 00 00 00 32 00 04 80
In this case, the error code is 0x80040032 (IO_LOST_DELAYED_WRITE, a warning). This
code is the same for all event ID 50 messages. When you are converting the hexadecimal
data in the event ID message to the status code, remember that the values are
represented in the little-endian format.
You can identify the disk to which the write was attempted by using the symbolic link to
the drive that is listed in the "Description" section of the event ID message, for example:
\Device\HarddiskVolume4.
0028: 11 00 00 80
In this case, the final status equals 0x80000011. This status code maps to
STATUS_DEVICE_BUSY and implies that the device is currently busy.
7 Note
When you are converting the hexadecimal data in the event ID 50 message to the
status code, remember that the values are represented in the little-endian format.
Because the status code is the only piece of information that you are interested in,
it may be easier to view the data in WORDS format instead of BYTES. If you do so,
the bytes will be in the correct format and the data may be easier to interpret
quickly.
To do so, click Words in the Event Properties window. In the Data Words view, the
example in the "Symptoms" section would read as follows:
Data:
0000: 00040000 00560002 00000000 80040032
0010: 00000000 00000000 00000000 00000000
0020: 00000000 00000000 80000011
To obtain a list of Windows NT status codes, see NTSTATUS.H in the Windows Software
Development Kit (SDK).
SMB Multichannel troubleshooting
Article • 07/22/2020
This article describes how to troubleshoot issues that are related to SMB Multichannel.
PowerShell
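# The first check isn't preserved in this copy; a common starting point is to
# confirm that multichannel is enabled on both the server and the client:
Get-SmbServerConfiguration | Select-Object EnableMultiChannel
Get-SmbClientConfiguration | Select-Object EnableMultiChannel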
After that, make sure the network interface is listed in the output of the following
commands:
PowerShell
Get-SmbServerNetworkInterface
PowerShell
Get-SmbClientNetworkInterface
You can also run the Get-NetAdapter command to view the interface index and verify the
result. The interface index shows all the active SMB adapters that are actively bound to
the appropriate interface.
The following command reveals which connection profile is being used. You can also use
the Network and Sharing Center to retrieve this information.
PowerShell
Get-NetConnectionProfile
Under the File and Printer Sharing group, check the firewall inbound rules to make sure
that "SMB-In" is enabled for the correct profile.
You can also enable File and Printer Sharing in the Network and Sharing Center
window. To do this, select Change advanced sharing settings in the menu on the left,
and then select Turn on file and printer sharing for the profile. This option enables the
File and Printer Sharing firewall rules.
Make sure that the SMBv3.x connection is being negotiated, and that nothing in
between the server and the client is affecting dialect negotiation. SMBv2 and earlier
versions don't support multichannel.
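For example, you can confirm the negotiated dialect for existing connections by running
the following on the SMB client:
PowerShell
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect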
Look for the NETWORK_INTERFACE_INFO packets. This is where the SMB client requests
a list of adapters from the SMB server. If these packets aren't exchanged, multichannel
doesn't work.
The server responds by returning a list of valid network interfaces. Then, the SMB client
adds those to the list of available adapters for multichannel. At this point, multichannel
should start and, at least, try to start the connection.
If multichannel doesn't start, check for the following possible causes (the commands
after this list can help):
Multichannel constraints have been set. For more information, see New-SmbMultichannelConstraint.
The client and server can't communicate over the extra network interface. For
example, the TCP three-way handshake failed, the connection is blocked by a
firewall, session setup failed, and so on.
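The following commands can help you check for these causes on the SMB client:
PowerShell
# Lists the connections SMB is using for each client/server interface pair.
Get-SmbMultichannelConnection
# Lists any constraints that limit which interfaces SMB may use.
Get-SmbMultichannelConstraint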
If the adapter and its IPv6 address are on the list that is sent by the server, the next step
is to see whether communications are tried over that interface. Filter the trace by the
link-local address and SMB traffic, and look for a connection attempt. If this is a
NetConnection trace, you can also examine Windows Filtering Platform (WFP) events to
see whether the connection is being blocked.
File Server Resource Manager (FSRM)
overview
Article • 03/21/2023
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2
File Server Resource Manager (FSRM) is a role service in Windows Server that enables
you to manage and classify data stored on file servers. You can use FSRM to
automatically classify files, perform tasks based on these classifications, set quotas on
folders, and create reports monitoring storage usage. In Windows Server version 1803,
FSRM adds the ability to prevent the creation of change journals.
7 Note
For new features on older versions of Windows Server, see What's New in File
Server Resource Manager.
Features
FSRM includes the following features:
Quota management: Limit the space that is allowed for a volume or folder. These
limits can be automatically applied to new folders that are created on a volume.
You can also define quota templates that can be applied to new volumes or
folders.
File Classification Infrastructure: Gain insight into your data by automating
classification processes so that you can manage your data more effectively. You
can classify files and apply policies based on this classification. Example policies
include dynamic access control for restricting access to files, file encryption, and
file expiration. Files can be classified automatically by using file classification rules
or manually by modifying the properties of a selected file or folder.
File Management Tasks: Gain the ability to apply a conditional policy or action to
files based on their classification. The conditions of a file management task include
the file location, the classification properties, the date the file was created, the last
modified date of the file, or the last time the file was accessed. The actions that a
file management task can take include the ability to expire files, encrypt files, or
run a custom command.
File screening management: Control the types of files that users can store on a
file server. You can limit the file name extensions that can be stored on your shares. For
example, you can create a file screen that doesn't allow files with an MP3 extension
to be stored in personal shared folders on a file server.
Storage reports: Use these reports to help you identify trends in disk usage and
how your data is classified. You can also monitor a selected group of users for
attempts to save unauthorized files.
You can configure and manage the FSRM features by using the FSRM app or by using
Windows PowerShell.
) Important
FSRM supports volumes formatted with the NTFS file system only. The Resilient File
System isn't supported.
Practical applications
The following list outlines some practical applications for FSRM:
Use File Classification Infrastructure with the Dynamic Access Control scenario.
Create a policy that grants access to files and folders based on the way files are
classified on the file server.
Create a file classification rule that tags any file that contains at least 10 social
security numbers as having customer content.
Expire any file that hasn't been modified in the last 10 years.
Create a 200-MB quota for each user's home directory and notify them when
they're using 180 MB. (A PowerShell sketch follows this list.)
Schedule a report that runs every Sunday night at midnight that generates a list of
the most recently accessed files from the previous two days. This report can help
you determine the weekend storage activity and plan your server downtime
accordingly.
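As a sketch of the quota example above, using the FileServerResourceManager
PowerShell module (the template name, path, and email settings are examples only):
PowerShell
$action = New-FsrmAction -Type Email -MailTo '[Source Io Owner Email]' -Subject 'Quota warning' -Body 'You have used 90 percent of your 200 MB quota.'
$threshold = New-FsrmQuotaThreshold -Percentage 90 -Action $action
New-FsrmQuotaTemplate -Name '200 MB home quota' -Size 200MB -Threshold $threshold
New-FsrmAutoQuota -Path 'D:\Home' -Template '200 MB home quota'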
To prevent FSRM from creating a change journal on some or all volumes when the
service starts, complete the following steps:
1. Stop the SRMSVC service. Open a PowerShell session as an administrator and enter
Stop-Service SrmSvc .
2. Delete the USN journal for the volumes you want to conserve space on by using
the fsutil command:
PowerShell
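# The command isn't preserved in this copy; the USN journal is typically
# deleted per volume like this (the drive letter is an example):
fsutil usn deletejournal /d D: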
5. To prevent change journal creation for the entire server, complete the following
steps:
) Important
If you want to disable journal creation for specific volumes only, continue to
the next step.
a. Right-click the Settings key, and then select New > DWORD (32-bit) Value.
b. Name the value SkipUSNCreationForSystem .
c. Set the value to 1 (in hexadecimal).
6. To prevent change journal creation for specific volumes, complete the following
steps:
a. Identify the volume paths you want to skip. You can use the fsutil volume list
command or the following PowerShell command:
PowerShell
Get-Volume | Format-Table DriveLetter,FileSystemLabel,Path
Console
d. Enter the path for each volume that you want to skip. Place each path on a
separate line. For example:
PowerShell
\\?\Volume{8d3c9e8a-0000-0000-0000-100000000000}\
\\?\Volume{8d3c9e8a-0000-0000-0000-501f00000000}\
7 Note
If Registry Editor displays a warning about removed empty strings, you can
safely disregard the message. Here’s an example of the message you might
see: Data of type REG_MULTI_SZ cannot contain empty strings. Registry
Editor will remove all empty strings found.
7. Start the SRMSVC service. For example, in a PowerShell session enter Start-
Service SrmSvc .
Related links
Dynamic Access Control overview
Checklist: Apply a Quota to a volume or
folder
Article • 07/29/2021
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2
2. Assess storage requirements on the volume or folder. You can use reports on the
Storage Reports Management node to provide data. (For example, run a Files by
Owner report on demand to identify users who use large amounts of disk space.)
Generate Reports on Demand
4. Create a quota based on the template on the volume or folder. Create a Quota
-Or-
Create an auto apply quota to automatically generate quotas for subfolders on the
volume or folder. Create an Auto Apply Quota
5. Schedule a report task that contains a Quota Usage report to monitor quota usage
periodically. Schedule a Set of Reports
7 Note
If you want to screen files on a volume or folder, see Checklist: Apply a File Screen
to a Volume or Folder.
Checklist - Apply a file screen to a
volume or folder
Article • 07/29/2021
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2
1. Configure e-mail settings if you plan to send file screening notifications or storage
reports by e-mail by following the instructions in Configure E-Mail Notifications.
2. Enable recording of file screening events in the auditing database if you plan to
generate File Screening Audit reports. Configure File Screen Audit
3. Assess stored file types that are candidates for screening rules. You can use reports
at the Storage Reports Management node to provide data. (For example, run a
Files by File Group report or a Large Files report on demand to identify files that
occupy large amounts of disk space.) Generate Reports on Demand
4. Review the preconfigured file groups, or create a new file group to enforce a
specific screening policy in your organization. Define File Groups for Screening
5. Review the properties of available file screen templates. (In File Screening
Management, click the File Screen Templates node.) Edit File Screen Template
Properties
-Or-
Create a new file screen template to enforce a storage policy in your organization.
Create a File Screen Template
6. Create a file screen based on the template on a volume or folder. Create a File
Screen
7. Configure file screen exceptions in subfolders of the volume or folder. Create a File
Screen Exception
7 Note
To limit storage on a volume or folder, see Checklist: Apply a Quota to a Volume
or Folder
Setting File Server Resource Manager
Options
Article • 07/29/2021
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2
The general File Server Resource Manager options can be set in the File Server Resource
Manager Options dialog box. These settings are used throughout the nodes, and some
of them can be modified when you work with quotas, screen files, or generate storage
reports.
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2
When you create quotas and file screens, you have the option of sending e-mail
notifications to users when their quota limit is approaching or after they have attempted
to save files that have been blocked. When you generate storage reports, you have the
option of sending the reports to specific recipients by e-mail. If you want to routinely
notify certain administrators about quota and file screening events, or send storage
reports, you can configure one or more default recipients.
To send these notifications and storage reports, you must specify the SMTP server to be
used for forwarding the e-mail messages.
2. On the E-mail Notifications tab, under SMTP server name or IP address, type the
host name or the IP address of the SMTP server that will forward e-mail
notifications and storage reports.
3. If you want to routinely notify certain administrators about quota or file screening
events or e-mail storage reports, under Default administrator recipients, type
each e-mail address.
4. To specify a different "From" address for e-mail notifications and storage reports
sent from File Server Resource Manager, under the Default "From" e-mail address,
type the e-mail address that you want to appear in your message.
6. Click OK.
PowerShell
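# The command isn't preserved in this copy; the same settings can be
# configured with Set-FsrmSetting (the values are examples):
Set-FsrmSetting -SmtpServer 'smtp.contoso.com' -AdminEmailAddress '[email protected]' -FromEmailAddress '[email protected]'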
Additional References
Setting File Server Resource Manager Options
Configure Notification Limits
Article • 07/29/2021
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2
To reduce the number of notifications that accumulate for repeatedly exceeding a quota
threshold or attempting to save an unauthorized file, File Server Resource Manager
applies time limits to the following notification types:
E-mail
Event log
Command
Report
Each limit specifies a period of time before another configured notification of the same
type is generated for an identical issue.
A default 60-minute limit is set for each notification type, but you can change these
limits. The limit applies to all the notifications of a given type, whether they are
generated by quota thresholds or by file screening events.
2. On the Notification Limits tab, enter a value in minutes for each notification type
that is shown.
3. Click OK.
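You can also set these limits with the FileServerResourceManager PowerShell module;
the values below are the 60-minute defaults:
PowerShell
Set-FsrmSetting -EmailNotificationLimit 60 -EventNotificationLimit 60 -CommandNotificationLimit 60 -ReportNotificationLimit 60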
7 Note
To customize time limits that are associated with