
Windows Server Storage documentation


Storage in Windows Server provides new and improved features for software-defined
datacenter (SDDC) customers focusing on virtualized workloads. Windows Server also
provides extensive support for enterprise customers using file servers with existing
workloads. Applies to: Windows Server 2019, Windows Server 2016.

About Windows Server Storage

WHAT'S NEW

What's new?

Software-defined storage for virtualized workloads

OVERVIEW

Storage Spaces Direct

Storage Replica

Storage Quality of Service (QoS)

Data Deduplication

General-purpose file servers

OVERVIEW

Storage Migration Service

Work Folders

Offline Files and Folder Redirection

Roaming User Profiles

DFS Namespaces

DFS Replication

File Server Resource Manager

iSCSI Target Server

iSCSI target boot

File systems, protocols, etc

OVERVIEW

SMB over QUIC

ReFS

Server Message Block (SMB) protocol

Storage-class memory

BitLocker Drive Encryption

NTFS

Network File System (NFS)

In Azure

OVERVIEW

Azure Storage

Azure StorSimple

Azure Shared Disk

Older versions of Windows Server

OVERVIEW

Windows previous versions documentation

Search for specific information


What's new in Storage in Windows Server
Article • 03/29/2022

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016

This topic explains the new and changed functionality in storage in Windows Server
2019, Windows Server 2016, and Windows Server Semi-Annual Channel releases.

What's new in storage in Windows Server, version 1903
This release of Windows Server adds the following changes and technologies.

Storage Migration Service now migrates local accounts, clusters, and Linux servers
Storage Migration Service makes it easier to migrate servers to a newer version of
Windows Server. It provides a graphical tool that inventories data on servers and then
transfers the data and configuration to newer servers—all without apps or users having
to change anything.

When using this version of Windows Server to orchestrate migrations, we've added the
following abilities:

Migrate local users and groups to the new server
Migrate storage from failover clusters
Migrate storage from a Linux server that uses Samba
More easily sync migrated shares into Azure by using Azure File Sync
Migrate to new networks such as Azure

For more info about Storage Migration Service, see Storage Migration Service overview.

System Insights disk anomaly detection


System Insights is a predictive analytics feature that locally analyzes Windows Server
system data and provides insight into the functioning of the server. It comes with a
number of built-in capabilities, but we've added the ability to install additional
capabilities via Windows Admin Center, starting with disk anomaly detection.

Disk anomaly detection is a new capability that highlights when disks are behaving
differently than usual. While different isn't necessarily a bad thing, seeing these
anomalous moments can be helpful when troubleshooting issues on your systems.

This capability is also available for servers running Windows Server 2019.

Windows Admin Center enhancements


A new release of Windows Admin Center is out, adding new functionality to Windows
Server. For info on the latest features, see Windows Admin Center.

What's new in storage in Windows Server 2019 and Windows Server, version 1809
This release of Windows Server adds the following changes and technologies.

Manage storage with Windows Admin Center


Windows Admin Center is a new locally deployed, browser-based app for managing
servers, clusters, hyper-converged infrastructure with Storage Spaces Direct, and
Windows 10 PCs. It comes at no additional cost beyond Windows and is ready for
production use.

To be fair, Windows Admin Center is a separate download that runs on Windows Server
2019 and other versions of Windows, but it's new and we didn't want you to miss it...

Storage Migration Service


Storage Migration Service is a new technology that makes it easier to migrate servers to
a newer version of Windows Server. It provides a graphical tool that inventories data on
servers, transfers the data and configuration to newer servers, and then optionally
moves the identities of the old servers to the new servers so that apps and users don't
have to change anything. For more info, see Storage Migration Service.

Storage Spaces Direct (Windows Server 2019 only)


There are a number of improvements to Storage Spaces Direct in Windows Server 2019
(Storage Spaces Direct isn't included in Windows Server, Semi-Annual Channel):

Deduplication and compression for ReFS volumes

Store up to ten times more data on the same volume with deduplication and
compression for the ReFS filesystem. (It's just one click to turn on with Windows
Admin Center.) The variable-size chunk store with optional compression maximizes
savings rates, while the multi-threaded post-processing architecture keeps
performance impact minimal. Supports volumes up to 64 TB and will deduplicate
the first 4 TB of each file.

Native support for persistent memory

Unlock unprecedented performance with native Storage Spaces Direct support for
persistent memory modules, including Intel® Optane™ DC PM and NVDIMM-N.
Use persistent memory as cache to accelerate the active working set, or as capacity
to guarantee consistent low latency on the order of microseconds. Manage
persistent memory just as you would any other drive in PowerShell or Windows
Admin Center.
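
As the text above notes, persistent memory is managed like any other drive in PowerShell. A minimal sketch using the standard Storage module cmdlets (the SCM media-type filter assumes persistent memory modules are actually installed):

```powershell
# List all drives, including persistent memory, with their reported media type
Get-PhysicalDisk | Select-Object FriendlyName, MediaType, BusType, Size

# Filter to storage-class memory only; MediaType 'SCM' denotes persistent memory
Get-PhysicalDisk | Where-Object MediaType -Eq SCM
```

From there, a persistent memory device can be initialized and partitioned with the same cmdlets used for any other disk.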

Nested resiliency for two-node hyper-converged infrastructure at the edge

Survive two hardware failures at once with an all-new software resiliency option
inspired by RAID 5+1. With nested resiliency, a two-node Storage Spaces Direct
cluster can provide continuously accessible storage for apps and virtual machines
even if one server node goes down and a drive fails in the other server node.

Two-server clusters using a USB flash drive as a witness

Use a low-cost USB flash drive plugged into your router to act as a witness in two-
server clusters. If a server goes down and then back up, the USB drive cluster
knows which server has the most up-to-date data. For more info, see the Storage
at Microsoft blog and documentation on how to deploy a file share witness.

Windows Admin Center

Manage and monitor Storage Spaces Direct with the new purpose-built Dashboard
and experience in Windows Admin Center. Create, open, expand, or delete
volumes with just a few clicks. Monitor performance like IOPS and IO latency from
the overall cluster down to the individual SSD or HDD. Available at no additional
cost for Windows Server 2016 and Windows Server 2019.

Performance history
Get effortless visibility into resource utilization and performance with built-in
history. Over 50 essential counters spanning compute, memory, network, and
storage are automatically collected and stored on the cluster for up to one year.
Best of all, there's nothing to install, configure, or start – it just works. Visualize in
Windows Admin Center or query and process in PowerShell.
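
The collected history can be queried from PowerShell with the `Get-ClusterPerformanceHistory` cmdlet (alias `Get-ClusterPerf`). A sketch, where the volume name is a hypothetical example:

```powershell
# Summarize the cluster-level series collected by performance history
Get-ClusterPerformanceHistory

# Drill into one volume's total IOPS series by piping the volume object
Get-Volume -FriendlyName "Volume1" |
    Get-ClusterPerformanceHistory -VolumeSeriesName Volume.IOPS.Total
```

Because the data is stored on the cluster itself, these queries work from any node with no collector to configure.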

Scale up to 4 PB per cluster

Achieve multi-petabyte scale – great for media, backup, and archival use cases. In
Windows Server 2019, Storage Spaces Direct supports up to 4 petabytes (PB) =
4,000 terabytes of raw capacity per storage pool. Related capacity guidelines are
increased as well: for example, you can create twice as many volumes (64 instead
of 32), each twice as large as before (64 TB instead of 32 TB). Stitch multiple
clusters together into a cluster set for even greater scale within one storage
namespace. For more info, see the Storage at Microsoft blog .

Mirror-accelerated parity is 2X faster

With mirror-accelerated parity you can create Storage Spaces Direct volumes that
are part mirror and part parity, like mixing RAID-1 and RAID-5/6 to get the best of
both. (It's easier than you think in Windows Admin Center.) In Windows Server
2019, the performance of mirror-accelerated parity is more than doubled relative
to Windows Server 2016 thanks to optimizations.

Drive latency outlier detection

Easily identify drives with abnormal latency with proactive monitoring and built-in
outlier detection, inspired by Microsoft Azure's long-standing and successful
approach. Whether it's average latency or something more subtle like 99th
percentile latency that stands out, slow drives are automatically labeled in
PowerShell and Windows Admin Center with an 'Abnormal Latency' status.

Manually delimit the allocation of volumes to increase fault tolerance

This enables admins to manually delimit the allocation of volumes in Storage Spaces Direct. Doing so can significantly increase fault tolerance under certain conditions, but imposes some added management considerations and complexity. For more info, see Delimit the allocation of volumes.

Storage Replica

There are a number of improvements to Storage Replica in this release:

Storage Replica in Windows Server, Standard Edition

You can now use Storage Replica with Windows Server, Standard Edition in addition to Datacenter Edition. Storage Replica running on Windows Server, Standard Edition, has the following limitations:

Storage Replica replicates a single volume instead of an unlimited number of volumes.
Volumes can have a size of up to 2 TB instead of an unlimited size.

Storage Replica log performance improvements


We also made improvements to how the Storage Replica log tracks replication,
improving replication throughput and latency, especially on all-flash storage as well as
Storage Spaces Direct clusters that replicate between each other.

To gain the increased performance, all members of the replication group must run
Windows Server 2019.

Test failover

You can now temporarily mount a snapshot of the replicated storage on a destination
server for testing or backup purposes. For more information, see Frequently Asked
Questions about Storage Replica.
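
Test failover is driven by the `Mount-SRDestination` cmdlet, which mounts a writable snapshot of the destination volume while replication continues. A sketch; the server name, replication group name, and path are hypothetical examples:

```powershell
# Mount a snapshot of the replicated destination volume for testing or backup
Mount-SRDestination -ComputerName "SRV2" -Name "ReplicationGroup02" -TemporaryPath "T:\"

# ...run tests or backups against the mounted snapshot...

# Discard the snapshot and return the destination to normal operation
Dismount-SRDestination -ComputerName "SRV2" -Name "ReplicationGroup02" -TemporaryPath "T:\"
```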

Windows Admin Center support


Support for graphical management of replication is now available in Windows Admin
Center via the Server Manager tool. This includes server-to-server replication, cluster-to-
cluster, as well as stretch cluster replication.

Miscellaneous improvements
Storage Replica also contains the following improvements:

Alters asynchronous stretch cluster behaviors so that automatic failovers now occur
Multiple bug fixes

SMB
SMB1 and guest authentication removal: Windows Server no longer installs the SMB1 client and server by default. Additionally, the ability to authenticate as a guest in SMB2 and later is off by default. For more information, review SMBv1 is not installed by default in Windows 10, version 1709 and Windows Server, version 1709.

SMB2/SMB3 security and compatibility: Additional options for security and application compatibility were added, including the ability to disable oplocks in SMB2+ for legacy applications, as well as to require signing or encryption on a per-connection basis from a client. For more information, review the SMBShare PowerShell module help.
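
These settings can be checked and applied from PowerShell. A sketch, assuming the built-in SMB cmdlets; the drive letter and UNC path are hypothetical examples:

```powershell
# Verify the SMB1 feature is not installed (no longer installed by default)
Get-WindowsFeature FS-SMB1

# Confirm insecure guest logons are disabled on the SMB client
Get-SmbClientConfiguration | Select-Object EnableInsecureGuestLogons

# Require signing and encryption for a single mapping, from the client side
New-SmbMapping -LocalPath "Z:" -RemotePath "\\server\share" `
    -RequireSigning $true -RequirePrivacy $true
```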

Data Deduplication
Data Deduplication now supports ReFS: You no longer have to choose between the advantages of a modern file system with ReFS and Data Deduplication: now you can enable Data Deduplication wherever you can enable ReFS. Increase storage efficiency by upwards of 95% with ReFS.
DataPort API for optimized ingress/egress to deduplicated volumes: Developers
can now take advantage of the knowledge Data Deduplication has about how to
store data efficiently to move data between volumes, servers, and clusters
efficiently.
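
Enabling deduplication on a ReFS volume uses the same Deduplication cmdlets as NTFS. A minimal sketch; the volume letter is a hypothetical example:

```powershell
# Enable Data Deduplication on a ReFS volume with the general-purpose usage type
Enable-DedupVolume -Volume "E:" -UsageType Default

# Kick off an optimization job immediately instead of waiting for the schedule
Start-DedupJob -Volume "E:" -Type Optimization

# Check savings once optimization has processed the volume
Get-DedupStatus -Volume "E:" | Select-Object SavedSpace, SavingsRate
```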

File Server Resource Manager


Windows Server 2019 includes the ability to prevent the File Server Resource Manager
service from creating a change journal (also known as a USN journal) on all volumes
when the service starts. This can conserve space on each volume, but will disable real-
time file classification. For more information, see File Server Resource Manager overview.

What's new in storage in Windows Server, version 1803

File Server Resource Manager


Windows Server, version 1803 includes the ability to prevent the File Server Resource
Manager service from creating a change journal (also known as a USN journal) on all
volumes when the service starts. This can conserve space on each volume, but will
disable real-time file classification. For more information, see File Server Resource
Manager overview.

What's new in storage in Windows Server, version 1709
Windows Server, version 1709 is the first Windows Server release in the Semi-Annual
Channel. The Semi-Annual Channel is a Software Assurance benefit and is fully
supported in production for 18 months, with a new version every six months.

For more information, see Windows Server Semi-annual Channel Overview.

Storage Replica
The disaster recovery protection added by Storage Replica is now expanded to include:

Test failover: the option to mount the destination storage is now possible through
the test failover feature. You can mount a snapshot of the replicated storage on
destination nodes temporarily for testing or backup purposes. For more
information, see Frequently Asked Questions about Storage Replica.
Windows Admin Center support: Support for graphical management of
replication is now available in Windows Admin Center via the Server Manager tool.
This includes server-to-server replication, cluster-to-cluster, as well as stretch
cluster replication.

Storage Replica also contains the following improvements:

Alters asynchronous stretch cluster behaviors so that automatic failovers now occur
Multiple bug fixes

SMB
SMB1 and guest authentication removal: Windows Server, version 1709 no longer installs the SMB1 client and server by default. Additionally, the ability to authenticate as a guest in SMB2 and later is off by default. For more information, review SMBv1 is not installed by default in Windows 10, version 1709 and Windows Server, version 1709.

SMB2/SMB3 security and compatibility: Additional options for security and application compatibility were added, including the ability to disable oplocks in SMB2+ for legacy applications, as well as to require signing or encryption on a per-connection basis from a client. For more information, review the SMBShare PowerShell module help.

Data Deduplication
Data Deduplication now supports ReFS: You no longer have to choose between the advantages of a modern file system with ReFS and Data Deduplication: now you can enable Data Deduplication wherever you can enable ReFS. Increase storage efficiency by upwards of 95% with ReFS.
DataPort API for optimized ingress/egress to deduplicated volumes: Developers
can now take advantage of the knowledge Data Deduplication has about how to
store data efficiently to move data between volumes, servers, and clusters
efficiently.

What's new in storage in Windows Server 2016

Storage Spaces Direct


Storage Spaces Direct enables building highly available and scalable storage using
servers with local storage. It simplifies the deployment and management of software-
defined storage systems and unlocks use of new classes of disk devices, such as SATA
SSD and NVMe disk devices, that were previously not possible with clustered Storage
Spaces with shared disks.

What value does this change add? Storage Spaces Direct enables service providers and
enterprises to use industry standard servers with local storage to build highly available
and scalable software defined storage. Using servers with local storage decreases
complexity, increases scalability, and enables use of storage devices that were not
previously possible, such as SATA solid state disks to lower cost of flash storage, or
NVMe solid state disks for better performance.

Storage Spaces Direct removes the need for a shared SAS fabric, simplifying deployment and configuration. Instead it uses the network as a storage fabric, leveraging SMB3 and SMB Direct (RDMA) for high-speed, low-latency, CPU-efficient storage. To scale out, simply add more servers to increase storage capacity and I/O performance. For more information, see Storage Spaces Direct.

What works differently? This capability is new in Windows Server 2016.


Storage Replica
Storage Replica enables storage-agnostic, block-level, synchronous replication between
servers or clusters for disaster recovery, as well as stretching of a failover cluster
between sites. Synchronous replication enables mirroring of data in physical sites with
crash-consistent volumes to ensure zero data loss at the file-system level. Asynchronous
replication allows site extension beyond metropolitan ranges with the possibility of data
loss.

What value does this change add? Storage Replica enables you to do the following:

Provide a single vendor disaster recovery solution for planned and unplanned
outages of mission critical workloads.
Use SMB3 transport with proven reliability, scalability, and performance.
Stretch Windows failover clusters to metropolitan distances.
Use Microsoft software end to end for storage and clustering, such as Hyper-V,
Storage Replica, Storage Spaces, Cluster, Scale-Out File Server, SMB3,
Deduplication, and ReFS/NTFS.
Help reduce cost and complexity as follows:
Is hardware agnostic, with no requirement for a specific storage configuration
like DAS or SAN.
Allows commodity storage and networking technologies.
Features ease of graphical management for individual nodes and clusters
through Failover Cluster Manager.
Includes comprehensive, large-scale scripting options through Windows
PowerShell.
Help reduce downtime, and increase reliability and productivity intrinsic to
Windows.
Provide supportability, performance metrics, and diagnostic capabilities.

For more information, see Storage Replica in Windows Server 2016.

What works differently? This capability is new in Windows Server 2016.
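
A server-to-server partnership is created with the Storage Replica cmdlets. A sketch; server names, replication group names, and volume letters are hypothetical examples, and each volume needs a separate log volume:

```powershell
# Create a server-to-server replication partnership (synchronous by default)
New-SRPartnership -SourceComputerName "SRV1" -SourceRGName "RG01" `
    -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
    -DestinationComputerName "SRV2" -DestinationRGName "RG02" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "L:"

# Check the replication groups and their replica state
Get-SRGroup | Select-Object Name, Replicas
```

Running `Test-SRTopology` first is the documented way to validate that the servers, volumes, and network meet Storage Replica's requirements.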

Storage Quality of Service


You can now use storage quality of service (QoS) to centrally monitor end-to-end
storage performance and create management policies using Hyper-V and CSV clusters
in Windows Server 2016.

What value does this change add? You can now create storage QoS policies on a CSV
cluster and assign them to one or more virtual disks on Hyper-V virtual machines.
Storage performance is automatically readjusted to meet policies as the workloads and
storage loads fluctuate.

Each policy can specify a reserve (minimum) and/or a limit (maximum) to be applied to a collection of data flows, such as a virtual hard disk, a single virtual machine or a group of virtual machines, a service, or a tenant.
Using Windows PowerShell or WMI, you can perform the following tasks:
Create policies on a CSV cluster.
Enumerate policies available on a CSV cluster.
Assign a policy to a virtual hard disk of a Hyper-V virtual machine.
Monitor the performance of each flow and status within the policy.
If multiple virtual hard disks share the same policy, performance is fairly distributed
to meet demand within the policy's minimum and maximum settings. Therefore, a
policy can be used to manage a virtual hard disk, a virtual machine, multiple virtual
machines comprising a service, or all virtual machines owned by a tenant.

What works differently? This capability is new in Windows Server 2016. Managing
minimum reserves, monitoring flows of all virtual disks across the cluster via a single
command, and centralized policy based management were not possible in previous
releases of Windows Server.

For more information, see Storage Quality of Service
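
The PowerShell tasks listed above can be sketched as follows; the policy name, IOPS values, and VM name are hypothetical examples:

```powershell
# On the CSV cluster: create a policy with a reserve (minimum) and limit (maximum)
$policy = New-StorageQosPolicy -Name "Gold" -PolicyType Dedicated `
    -MinimumIops 100 -MaximumIops 500

# Assign the policy to a Hyper-V virtual machine's virtual hard disk
Get-VM -Name "VM01" | Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId

# Monitor every flow across the cluster and its status against policy
Get-StorageQosFlow | Sort-Object InitiatorName
```

A `Dedicated` policy applies its minimum and maximum to each flow individually, while an `Aggregated` policy shares them across all flows assigned to it.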

Data Deduplication

| Functionality | New or updated | Description |
|---|---|---|
| Support for large volumes | Updated | Prior to Windows Server 2016, volumes had to be specifically sized for the expected churn, with volume sizes above 10 TB not being good candidates for deduplication. In Windows Server 2016, Data Deduplication supports volume sizes up to 64 TB. |
| Support for large files | Updated | Prior to Windows Server 2016, files approaching 1 TB in size were not good candidates for deduplication. In Windows Server 2016, files up to 1 TB are fully supported. |
| Support for Nano Server | New | Data Deduplication is available and fully supported in the new Nano Server deployment option for Windows Server 2016. |
| Simplified backup support | New | In Windows Server 2012 R2, Virtualized Backup Applications, such as Microsoft's Data Protection Manager, were supported through a series of manual configuration steps. In Windows Server 2016, a new default Usage Type, "Backup", has been added for seamless deployment of Data Deduplication for Virtualized Backup Applications. |
| Support for Cluster OS Rolling Upgrades | New | Data Deduplication fully supports the new Cluster OS Rolling Upgrade feature of Windows Server 2016. |

SMB hardening improvements for SYSVOL and NETLOGON connections
In Windows 10 and Windows Server 2016, client connections to the Active Directory Domain Services default SYSVOL and NETLOGON shares on domain controllers now require SMB signing and mutual authentication (such as Kerberos).

What value does this change add? This change reduces the likelihood of man-in-the-
middle attacks.

What works differently? If SMB signing and mutual authentication are unavailable, a
Windows 10 or Windows Server 2016 computer won't process domain-based Group
Policy and scripts.

Note

The registry values for these settings aren't present by default, but the hardening
rules still apply until overridden by Group Policy or other registry values.

For more information on these security improvements, also referred to as UNC hardening, see Microsoft Knowledge Base article 3000483 and MS15-011 & MS15-014: Hardening Group Policy.

Work Folders
Improved change notification when the Work Folders server is running Windows Server
2016 and the Work Folders client is Windows 10.
What value does this change add?
For Windows Server 2012 R2, when file changes are synced to the Work Folders server,
clients are not notified of the change and wait up to 10 minutes to get the update.
When using Windows Server 2016, the Work Folders server immediately notifies
Windows 10 clients and the file changes are synced immediately.

What works differently?

This capability is new in Windows Server 2016. It requires a Windows Server 2016 Work Folders server, and the client must be running Windows 10.

If you're using an older client or the Work Folders server is Windows Server 2012 R2, the
client will continue to poll every 10 minutes for changes.

ReFS
The next iteration of ReFS provides support for large-scale storage deployments with
diverse workloads, delivering reliability, resiliency, and scalability for your data.

What value does this change add?


ReFS introduces the following improvements:

ReFS implements new storage tiers functionality, helping deliver faster performance and increased storage capacity. This new functionality enables:
Multiple resiliency types on the same virtual disk (using mirroring in the performance tier and parity in the capacity tier, for example).
Increased responsiveness to drifting working sets.
The introduction of block cloning substantially improves the performance of VM operations, such as .vhdx checkpoint merge operations.
The new ReFS scan tool enables the recovery of leaked storage and helps salvage data from critical corruptions.

What works differently?


These capabilities are new in Windows Server 2016.

Additional References
What's New in Windows Server 2016
Data Deduplication Overview
Article • 02/18/2022

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Azure Stack HCI, versions 21H2 and 20H2

What is Data Deduplication?


Data Deduplication, often called Dedup for short, is a feature that can help reduce the impact of redundant data on storage costs. When enabled, Data Deduplication optimizes free space on a volume by examining the data on the volume for duplicated portions. Duplicated portions of the volume's dataset are stored once and are (optionally) compressed for additional savings. Data Deduplication optimizes redundancies without compromising data fidelity or integrity. More information about how Data Deduplication works can be found in the 'How does Data Deduplication work?' section of the Understanding Data Deduplication page.

Important

KB4025334 contains a rollup of fixes for Data Deduplication, including important reliability fixes, and we strongly recommend installing it when using Data Deduplication with Windows Server 2016 and Windows Server 2019.

Why is Data Deduplication useful?


Data Deduplication helps storage administrators reduce costs that are associated with
duplicated data. Large datasets often have a lot of duplication, which increases the costs
of storing the data. For example:

User file shares may have many copies of the same or similar files.
Virtualization guests might be almost identical from VM-to-VM.
Backup snapshots might have minor differences from day to day.

The space savings that you can gain from Data Deduplication depend on the dataset or
workload on the volume. Datasets that have high duplication could see optimization
rates of up to 95%, or a 20x reduction in storage utilization. The following table
highlights typical deduplication savings for various content types:

| Scenario | Content | Typical space savings |
|---|---|---|
| User documents | Office documents, photos, music, videos, etc. | 30-50% |
| Deployment shares | Software binaries, cab files, symbols, etc. | 70-80% |
| Virtualization libraries | ISOs, virtual hard disk files, etc. | 80-95% |
| General file share | All of the above | 50-60% |
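
Before enabling deduplication, potential savings for an existing dataset can be estimated with the Deduplication Evaluation Tool (DDPEval.exe), which ships with the feature. A sketch; the share path is a hypothetical example:

```powershell
# Install the Data Deduplication role service
Install-WindowsFeature -Name FS-Data-Deduplication

# Estimate potential savings for a path before enabling deduplication
& "$env:SystemRoot\System32\ddpeval.exe" \\fileserver\share
```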

Note

If you're just looking to free up space on a volume, consider using Azure File Sync
with cloud tiering enabled. This allows you to cache your most frequently accessed
files locally and tier your least frequently accessed files to the cloud, saving local
storage space while maintaining performance. For details, see Planning for an
Azure File Sync deployment.

When can Data Deduplication be used?



General purpose file servers: General purpose file servers are general use file
servers that might contain any of the following types of shares:

Team shares
User home folders
Work folders
Software development shares

General purpose file servers are a good candidate for Data Deduplication because
multiple users tend to have many copies or versions of the same file. Software
development shares benefit from Data Deduplication because many binaries remain
essentially unchanged from build to build.

Virtual Desktop Infrastructure (VDI) deployments: VDI servers, such as Remote Desktop Services, provide a lightweight option for organizations to provision desktops to users. There are many reasons for an organization to rely on such technology:

Application deployment: You can quickly deploy applications across your enterprise. This is especially useful when you have applications that are frequently updated, infrequently used, or difficult to manage.
Application consolidation: When you install and run applications from a set
of centrally managed virtual machines, you eliminate the need to update
applications on client computers. This option also reduces the amount of
network bandwidth that is required to access applications.
Remote Access: Users can access enterprise applications from devices such as
home computers, kiosks, low-powered hardware, and operating systems
other than Windows.
Branch office access: VDI deployments can provide better application
performance for branch office workers who need access to centralized data
stores. Data-intensive applications sometimes do not have client/server
protocols that are optimized for low-speed connections.

VDI deployments are great candidates for Data Deduplication because the virtual
hard disks that drive the remote desktops for users are essentially identical.
Additionally, Data Deduplication can help with the so-called VDI boot storm, which
is the drop in storage performance when many users simultaneously sign in to their
desktops to start the day.

Backup targets, such as virtualized backup applications: Backup applications, such as Microsoft Data Protection Manager (DPM), are excellent candidates for Data Deduplication because of the significant duplication between backup snapshots.

Other workloads: Other workloads may also be excellent candidates for Data
Deduplication.
What's New in Data Deduplication
Article • 02/18/2022

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Azure Stack HCI, versions 21H2 and 20H2

Data Deduplication in Windows Server has been optimized to be highly performant, flexible, and manageable at private cloud scale. For more information about the software-defined storage stack in Windows Server, please see What's New in Storage in Windows Server.

Windows Server 2022


Data Deduplication has no extra enhancements in Windows Server 2022.

Windows Server 2019


Data Deduplication has the following enhancements in Windows Server 2019:

| Functionality | New or updated | Description |
|---|---|---|
| ReFS support | New | Store up to 10X more data on the same volume with deduplication and compression for the ReFS filesystem. (It's just one click to turn on with Windows Admin Center.) The variable-size chunk store with optional compression maximizes savings rates, while the multi-threaded post-processing architecture keeps performance impact minimal. Supports volumes up to 64 TB and will deduplicate the first 4 TB of each file. |

Windows Server 2016


Data Deduplication has the following enhancements starting in Windows Server 2016:

| Functionality | New or updated | Description |
|---|---|---|
| Support for large volumes | Updated | Prior to Windows Server 2016, volumes had to be specifically sized for the expected churn, with volume sizes above 10 TB not being good candidates for deduplication. In Windows Server 2016, Data Deduplication supports volume sizes up to 64 TB. |
| Support for large files | Updated | Prior to Windows Server 2016, files approaching 1 TB in size were not good candidates for deduplication. In Windows Server 2016, files up to 1 TB are fully supported. |
| Support for Nano Server | New | Data Deduplication is available and fully supported in the new Nano Server deployment option for Windows Server 2016. |
| Simplified backup support | New | Windows Server 2012 R2 supported Virtualized Backup Applications, such as Microsoft's Data Protection Manager, through a series of manual configuration steps. Windows Server 2016 has added a new default Usage Type (Backup) for seamless deployment of Data Deduplication for Virtualized Backup Applications. |
| Support for Cluster OS Rolling Upgrade | New | Data Deduplication fully supports the new Cluster OS Rolling Upgrade feature of Windows Server 2016. |

Support for large volumes


What value does this change add?
To get the best performance out of Data Deduplication in Windows Server 2012 R2,
volumes must be sized properly to ensure that the Optimization job can keep up with
the rate of data changes, or churn. Typically, this means that Data Deduplication is only
performant on volumes of 10 TB or less, depending on the workload's write patterns.

Starting with Windows Server 2016, Data Deduplication is highly performant on volumes
up to 64 TB.

What works differently?


In Windows Server 2012 R2, the Data Deduplication Job Pipeline uses a single-thread
and I/O queue for each volume. To ensure that the Optimization jobs do not fall behind,
which would cause the overall savings rate for the volume to decrease, large datasets
must be broken up into smaller volumes. The appropriate volume size depends on the
expected churn for that volume. On average, the maximum is ~6-7 TB for high churn
volumes and ~9-10 TB for low churn volumes.

Starting with Windows Server 2016, the Data Deduplication Job pipeline has been
redesigned to run multiple threads in parallel using multiple I/O queues for each
volume. This results in performance that was previously only possible by dividing up
data into multiple smaller volumes. These optimizations apply to all Data Deduplication
jobs, not just the Optimization job.

Support for large files


What value does this change add?
In Windows Server 2012 R2, very large files are not good candidates for Data
Deduplication due to decreased performance of the Deduplication Processing Pipeline.
In Windows Server 2016, deduplication of files up to 1 TB is very performant, enabling
administrators to apply deduplication savings to a larger range of workloads. For
example, you can deduplicate very large files normally associated with backup
workloads.

What works differently?


Starting with Windows Server 2016, Data Deduplication makes use of new stream map
structures and other "under-the-hood" improvements to increase optimization
throughput and access performance. Additionally, the Deduplication Processing Pipeline
can now resume optimization after a failover rather than restarting. These changes make
deduplication on files up to 1 TB highly performant.

Support for Nano Server


What value does this change add?
Nano Server is a new headless deployment option in Windows Server 2016 that requires
a far smaller system resource footprint, starts up significantly faster, and requires fewer
updates and restarts than the Windows Server Core deployment option. Data
Deduplication is fully supported on Nano Server. For more information about Nano
Server, see Getting Started with Nano Server.

Simplified configuration for Virtualized Backup Applications

What value does this change add?
Data Deduplication for Virtualized Backup Applications is a supported scenario in
Windows Server 2012 R2, but it requires manual tuning of the deduplication settings.
Starting with Windows Server 2016, the configuration of Deduplication for Virtualized
Backup Applications is drastically simplified. It uses a predefined Usage Type option
when enabling Deduplication for a volume, just like our options for General Purpose File
Server and VDI.

Support for Cluster OS Rolling Upgrade


What value does this change add?
Windows Server Failover Clusters running Data Deduplication can have a mix of nodes
running Windows Server 2012 R2 versions of Data Deduplication alongside nodes
running Windows Server 2016 versions of Data Deduplication. This enhancement
provides full data access to all deduplicated volumes during a cluster rolling upgrade,
allowing for the gradual rollout of the new version of Data Deduplication on an existing
Windows Server 2012 R2 cluster without incurring downtime to upgrade all nodes at
once.

What works differently?


With previous versions of Windows Server, a Windows Server Failover Cluster required
all nodes in the cluster to have the same Windows Server version. Starting with
Windows Server 2016, the cluster rolling upgrade functionality allows a cluster to run in
mixed mode. Data Deduplication supports this new mixed-mode cluster configuration
to enable full data access during a cluster rolling upgrade.
Understanding Data Deduplication
Article • 02/18/2022

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Azure Stack HCI, versions 21H2 and 20H2

This document describes how Data Deduplication works.

How does Data Deduplication work?


Data Deduplication in Windows Server was created with the following two principles:

1. Optimization should not get in the way of writes to the disk: Data Deduplication
optimizes data by using a post-processing model. All data is written unoptimized
to the disk and then optimized later by Data Deduplication.

2. Optimization should not change access semantics: Users and applications that
access data on an optimized volume are completely unaware that the files they are
accessing have been deduplicated.

Once enabled for a volume, Data Deduplication runs in the background to:

Identify repeated patterns across files on that volume.


Seamlessly move those portions, or chunks, with special pointers called reparse
points that point to a unique copy of that chunk.

This occurs in the following five steps:

1. Scan the file system for files meeting the optimization policy.
2. Break files into variable-size chunks.

3. Identify unique chunks.

4. Place chunks in the chunk store and optionally compress.

5. Replace the original file stream of now optimized files with a reparse point to the
chunk store.
When optimized files are read, the file system sends the files with a reparse point to the
Data Deduplication file system filter (Dedup.sys). The filter redirects the read operation
to the appropriate chunks that constitute the stream for that file in the chunk store.
Modifications to ranges of a deduplicated file get written unoptimized to the disk and
are optimized by the Optimization job the next time it runs.
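The chunk-and-redirect model described above can be sketched in a few lines of Python. This is a toy illustration only: it uses fixed-size chunks and an in-memory dictionary standing in for the chunk store and reparse points, whereas the real engine uses variable-size chunking and on-disk container files managed by Dedup.sys.

```python
import hashlib

CHUNK_SIZE = 4  # toy fixed size; the real engine uses variable-size chunks

chunk_store = {}  # hash -> chunk bytes (stands in for the chunk store)

def optimize(data: bytes) -> list:
    """Post-process a file: store unique chunks once, return a 'reparse
    point' (here, just the ordered list of chunk hashes)."""
    stream_map = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        h = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(h, chunk)  # duplicate chunks stored only once
        stream_map.append(h)
    return stream_map

def read(stream_map: list) -> bytes:
    """Redirect a read through the 'reparse point' to the chunk store,
    reconstructing the original file stream transparently."""
    return b"".join(chunk_store[h] for h in stream_map)

file_a = b"AAAABBBBCCCC"
file_b = b"AAAABBBBDDDD"  # shares two chunks with file_a
map_a, map_b = optimize(file_a), optimize(file_b)
assert read(map_a) == file_a and read(map_b) == file_b
print(len(chunk_store))  # 4 unique chunks stored, not 6
```

Note how reads reconstruct the original bytes exactly, which is the access-semantics guarantee: callers never see the chunked form.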

Usage Types
The following Usage Types provide reasonable Data Deduplication configuration for
common workloads:

| Usage Type | Ideal workloads | What's different |
| --- | --- | --- |
| Default | General purpose file server:<br>Team shares<br>Work Folders<br>Folder redirection<br>Software development shares | Background optimization<br>Default optimization policy:<br>Minimum file age = 3 days<br>Optimize in-use files = No<br>Optimize partial files = No |
| Hyper-V | Virtualized Desktop Infrastructure (VDI) servers | Background optimization<br>Default optimization policy:<br>Minimum file age = 3 days<br>Optimize in-use files = Yes<br>Optimize partial files = Yes<br>"Under-the-hood" tweaks for Hyper-V interop |
| Backup | Virtualized backup applications, such as Microsoft Data Protection Manager (DPM) | Priority optimization<br>Default optimization policy:<br>Minimum file age = 0 days<br>Optimize in-use files = Yes<br>Optimize partial files = No<br>"Under-the-hood" tweaks for interop with DPM/DPM-like solutions |

Jobs
Data Deduplication uses a post-processing strategy to optimize and maintain a volume's
space efficiency.

| Job name | Job descriptions | Default schedule |
| --- | --- | --- |
| Optimization | The Optimization job deduplicates by chunking data on a volume per the volume policy settings, (optionally) compressing those chunks, and storing chunks uniquely in the chunk store. The optimization process that Data Deduplication uses is described in detail in How does Data Deduplication work?. | Once every hour |
| Garbage Collection | The Garbage Collection job reclaims disk space by removing unnecessary chunks that are no longer being referenced by files that have been recently modified or deleted. | Every Saturday at 2:35 AM |
| Integrity Scrubbing | The Integrity Scrubbing job identifies corruption in the chunk store due to disk failures or bad sectors. When possible, Data Deduplication can automatically use volume features (such as mirror or parity on a Storage Spaces volume) to reconstruct the corrupted data. Additionally, Data Deduplication keeps backup copies of popular chunks when they are referenced more than 100 times in an area called the hotspot. | Every Saturday at 3:35 AM |
| Unoptimization | The Unoptimization job, which is a special job that should only be run manually, undoes the optimization done by deduplication and disables Data Deduplication for that volume. | On-demand only |
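As a rough mental model, the Garbage Collection job behaves like a mark-and-sweep pass over the chunk store: chunks still referenced by some file's stream map survive, and orphaned chunks are reclaimed. The Python below is an illustrative sketch using hypothetical in-memory structures, not the actual on-disk algorithm, which works against container files.

```python
# Toy model: the chunk store maps chunk hashes to chunk data, and each
# live (non-deleted) file contributes a stream map of chunk hashes.
chunk_store = {"h1": b"AAAA", "h2": b"BBBB", "h3": b"CCCC"}
live_stream_maps = [["h1", "h2"], ["h2"]]  # "h3" belonged to a deleted file

def garbage_collect(store, stream_maps):
    """Remove chunks no longer referenced by any live file."""
    referenced = {h for m in stream_maps for h in m}
    for h in list(store):
        if h not in referenced:
            del store[h]  # reclaim space held by the orphaned chunk
    return store

garbage_collect(chunk_store, live_stream_maps)
print(sorted(chunk_store))  # ['h1', 'h2'] -- 'h3' was reclaimed
```

This also explains why deleting files on a deduplicated volume does not free space immediately: the space comes back only after the next Garbage Collection run.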

Data Deduplication terminology


| Term | Definition |
| --- | --- |
| Chunk | A chunk is a section of a file that has been selected by the Data Deduplication chunking algorithm as likely to occur in other, similar files. |
| Chunk store | The chunk store is an organized series of container files in the System Volume Information folder that Data Deduplication uses to uniquely store chunks. |
| Dedup | An abbreviation for Data Deduplication that's commonly used in PowerShell, Windows Server APIs and components, and the Windows Server community. |
| File metadata | Every file contains metadata that describes interesting properties about the file that are not related to the main content of the file. For instance, Date Created, Last Read Date, Author, etc. |
| File stream | The file stream is the main content of the file. This is the part of the file that Data Deduplication optimizes. |
| File system | The file system is the software and on-disk data structure that the operating system uses to store files on storage media. Data Deduplication is supported on NTFS formatted volumes. |
| File system filter | A file system filter is a plugin that modifies the default behavior of the file system. To preserve access semantics, Data Deduplication uses a file system filter (Dedup.sys) to redirect reads to optimized content completely transparently to the user or application that makes the read request. |
| Optimization | A file is considered optimized (or deduplicated) by Data Deduplication if it has been chunked, and its unique chunks have been stored in the chunk store. |
| Optimization policy | The optimization policy specifies the files that should be considered for Data Deduplication. For example, files may be considered out-of-policy if they are brand new, open, in a certain path on the volume, or a certain file type. |
| Reparse point | A reparse point is a special tag that notifies the file system to pass off I/O to a specified file system filter. When a file's file stream has been optimized, Data Deduplication replaces the file stream with a reparse point, which enables Data Deduplication to preserve the access semantics for that file. |
| Volume | A volume is a Windows construct for a logical storage drive that may span multiple physical storage devices across one or more servers. Deduplication is enabled on a volume-by-volume basis. |
| Workload | A workload is an application that runs on Windows Server. Example workloads include general purpose file server, Hyper-V, and SQL Server. |

Warning

Unless instructed by authorized Microsoft Support Personnel, do not attempt to
manually modify the chunk store. Doing so may result in data corruption or loss.
Frequently asked questions
How does Data Deduplication differ from other optimization products? There are
several important differences between Data Deduplication and other common storage
optimization products:

How does Data Deduplication differ from Single Instance Store? Single Instance
Store, or SIS, is a technology that preceded Data Deduplication and was first
introduced in Windows Storage Server 2008 R2. To optimize a volume, Single
Instance Store identified files that were completely identical and replaced them
with logical links to a single copy of a file that's stored in the SIS common store.
Unlike Single Instance Store, Data Deduplication can get space savings from files
that are not identical but share many common patterns and from files that
themselves contain many repeated patterns. Single Instance Store was deprecated
in Windows Server 2012 R2 and removed in Windows Server 2016 in favor of Data
Deduplication.

How does Data Deduplication differ from NTFS compression? NTFS compression is a
feature of NTFS that you can optionally enable at the volume level. With NTFS
compression, each file is optimized individually via compression at write-time.
Unlike NTFS compression, Data Deduplication can get space savings across all
the files on a volume. This is better than NTFS compression because files may have
both internal duplication (which is addressed by NTFS compression) and have
similarities with other files on the volume (which is not addressed by NTFS
compression). Additionally, Data Deduplication has a post-processing model,
which means that new or modified files will be written to disk unoptimized and will
be optimized later by Data Deduplication.

How does Data Deduplication differ from archive file formats like zip, rar, 7z, cab,
etc.? Archive file formats, like zip, rar, 7z, cab, etc., perform compression over a
specified set of files. Like Data Deduplication, duplicated patterns within files and
duplicated patterns across files are optimized. However, you have to choose the
files that you want to include in the archive. Access semantics are different, too. To
access a specific file within the archive, you have to open the archive, select a
specific file, and decompress that file for use. Data Deduplication operates
transparently to users and administrators and requires no manual kick-off.
Additionally, Data Deduplication preserves access semantics: optimized files
appear unchanged after optimization.
Can I change the Data Deduplication settings for my selected Usage Type? Yes.
Although Data Deduplication provides reasonable defaults for Recommended
workloads, you might still want to tweak Data Deduplication settings to get the most
out of your storage. Additionally, other workloads will require some tweaking to ensure
that Data Deduplication does not interfere with the workload.

Can I manually run a Data Deduplication job? Yes, all Data Deduplication jobs may be
run manually. This may be desirable if scheduled jobs did not run due to insufficient
system resources or because of an error. Additionally, the Unoptimization job can only
be run manually.

Can I monitor the historical outcomes of Data Deduplication jobs? Yes, all Data
Deduplication jobs make entries in the Windows Event Log.

Can I change the default schedules for the Data Deduplication jobs on my system?
Yes, all schedules are configurable. Modifying the default Data Deduplication schedules
is particularly desirable to ensure that the Data Deduplication jobs have time to finish
and do not compete for resources with the workload.
Install and enable Data Deduplication
Article • 07/05/2022

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Azure Stack HCI, versions 21H2 and 20H2

This topic explains how to install Data Deduplication, evaluate workloads for
deduplication, and enable Data Deduplication on specific volumes.

Note

If you're planning to run Data Deduplication in a Failover Cluster, every node in the
cluster must have the Data Deduplication server role installed.

Install Data Deduplication

Important

KB4025334 contains a rollup of fixes for Data Deduplication, including
important reliability fixes, and we strongly recommend installing it when using Data
Deduplication with Windows Server 2016.

Install Data Deduplication by using Server Manager


1. In the Add Roles and Features Wizard, select Server Roles, and then select Data
Deduplication.
2. Click Next until the Install button is active, and then click Install.

Install Data Deduplication by using PowerShell


To install Data Deduplication, run the following PowerShell command as an
administrator: Install-WindowsFeature -Name FS-Data-Deduplication

To install Data Deduplication on a remote server:


From a server running Windows Server 2016 or later, or from a Windows PC with
the Remote Server Administration Tools (RSAT) installed, install Data
Deduplication with an explicit reference to the server name (replace 'MyServer'
with the real name of the server instance):

PowerShell

Install-WindowsFeature -ComputerName <MyServer> -Name FS-Data-Deduplication

Or

Connect remotely to the server instance with PowerShell remoting and install Data
Deduplication by using DISM:

PowerShell

Enter-PSSession -ComputerName MyServer
dism /online /enable-feature /featurename:dedup-core /all

Enable Data Deduplication

Determine which workloads are candidates for Data Deduplication
Data Deduplication can effectively minimize the costs of a server application's data
consumption by reducing the amount of disk space consumed by redundant data.
Before enabling deduplication, it is important that you understand the characteristics of
your workload to ensure that you get the maximum performance out of your storage.
There are two classes of workloads to consider:

Recommended workloads that have been proven to have both datasets that benefit
highly from deduplication and have resource consumption patterns that are
compatible with Data Deduplication's post-processing model. We recommend that
you always enable Data Deduplication on these workloads:
General purpose file servers (GPFS) serving shares such as team shares, user
home folders, work folders, and software development shares.
Virtualized desktop infrastructure (VDI) servers.
Virtualized backup applications, such as Microsoft Data Protection Manager
(DPM).
Workloads that might benefit from deduplication, but aren't always good
candidates for deduplication. For example, the following workloads could work
well with deduplication, but you should evaluate the benefits of deduplication first:
General purpose Hyper-V hosts
SQL servers
Line-of-business (LOB) servers

Evaluate workloads for Data Deduplication

Important

If you are running a recommended workload, you can skip this section and go to
Enable Data Deduplication for your workload.

To determine whether a workload works well with deduplication, answer the following
questions. If you're unsure about a workload, consider doing a pilot deployment of Data
Deduplication on a test dataset for your workload to see how it performs.

1. Does my workload's dataset have enough duplication to benefit from enabling
deduplication? Before enabling Data Deduplication for a workload, investigate
how much duplication your workload's dataset has by using the Data
Deduplication Savings Evaluation tool, or DDPEval. After installing Data
Deduplication, you can find this tool at C:\Windows\System32\DDPEval.exe . DDPEval
can evaluate the potential for optimization against directly connected volumes
(including local drives or Cluster Shared Volumes) and mapped or unmapped
network shares.

Running DDPEval.exe will return an output similar to the following:

Data Deduplication Savings Evaluation Tool
Copyright 2011-2012 Microsoft Corporation. All Rights Reserved.

Evaluated folder: E:\Test
Processed files: 34
Processed files size: 12.03MB
Optimized files size: 4.02MB
Space savings: 8.01MB
Space savings percent: 66
Optimized files size (no compression): 11.47MB
Space savings (no compression): 571.53KB
Space savings percent (no compression): 4
Files with duplication: 2
Files excluded by policy: 20
Files excluded by error: 0

2. What do my workload's I/O patterns to its dataset look like? What performance
do I have for my workload? Data Deduplication optimizes files as a periodic job,
rather than when the file is written to disk. As a result, it is important to examine
a workload's expected read patterns to the deduplicated volume. Because Data
Deduplication moves file content into the Chunk Store and attempts to organize
the Chunk Store by file as much as possible, read operations perform best when
they are applied to sequential ranges of a file.

Database-like workloads typically have more random read patterns than sequential
read patterns because databases do not typically guarantee that the database
layout will be optimal for all possible queries that may be run. Because the sections
of the Chunk Store may exist all over the volume, accessing data ranges in the
Chunk Store for database queries may introduce additional latency. High
performance workloads are particularly sensitive to this extra latency, but other
database-like workloads might not be.

Note

These concerns primarily apply to storage workloads on volumes made up of
traditional rotational storage media (also known as Hard Disk drives, or
HDDs). All-flash storage infrastructure (also known as Solid State Disk drives,
or SSDs), is less affected by random I/O patterns because one of the
properties of flash media is equal access time to all locations on the media.
Therefore, deduplication will not introduce the same amount of latency for
reads to a workload's datasets stored on all-flash media as it would on
traditional rotational storage media.

3. What are the resource requirements of my workload on the server? Because Data
Deduplication uses a post-processing model, Data Deduplication periodically
needs to have sufficient system resources to complete its optimization and other
jobs. This means that workloads that have idle time, such as in the evening or on
weekends, are excellent candidates for deduplication, and workloads that run all
day, every day may not be. Workloads that have no idle time may still be good
candidates for deduplication if the workload does not have high resource
requirements on the server.
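As a side note on question 1, the savings figures in the sample DDPEval output above follow from simple arithmetic, which the snippet below reproduces (assuming, based on the sample numbers, that DDPEval truncates rather than rounds the percentage):

```python
# Reproduce the arithmetic behind the sample DDPEval output above.
processed_mb = 12.03   # Processed files size
optimized_mb = 4.02    # Optimized files size

savings_mb = processed_mb - optimized_mb
savings_pct = int(100 * savings_mb / processed_mb)  # truncated, per the sample

print(f"Space savings: {savings_mb:.2f}MB")     # Space savings: 8.01MB
print(f"Space savings percent: {savings_pct}")  # Space savings percent: 66
```

In other words, the reported percentage is the fraction of the processed data that no longer needs to be stored after optimization.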

Enable Data Deduplication


Before enabling Data Deduplication, you must choose the Usage Type that most closely
resembles your workload. There are three Usage Types included with Data
Deduplication.

Default - tuned specifically for general purpose file servers


Hyper-V - tuned specifically for VDI servers
Backup - tuned specifically for virtualized backup applications, such as Microsoft
DPM

Enable Data Deduplication by using Server Manager

1. Select File and Storage Services in Server Manager.


2. Select Volumes from File and Storage Services.

3. Right-click the desired volume and select Configure Data Deduplication.


4. Select the desired Usage Type from the drop-down box and select OK.

5. If you are running a recommended workload, you're done. For other workloads,
see Other considerations.

Note

You can find more information on excluding file extensions or folders and selecting
the deduplication schedule, including why you would want to do this, in
Configuring Data Deduplication.

Enable Data Deduplication by using PowerShell

1. In an administrator context, run the following PowerShell command:

PowerShell

Enable-DedupVolume -Volume <Volume-Path> -UsageType <Selected-Usage-Type>

2. If you are running a recommended workload, you're done. For other workloads,
see Other considerations.
Note

The Data Deduplication PowerShell cmdlets, including Enable-DedupVolume, can


be run remotely by appending the -CimSession parameter with a CIM Session. This
is particularly useful for running the Data Deduplication PowerShell cmdlets
remotely against a server instance. To create a new CIM Session run New-
CimSession.

Other considerations

Important

If you are running a recommended workload, you can skip this section.

Data Deduplication's Usage Types give sensible defaults for recommended
workloads, but they also provide a good starting point for all workloads. For
workloads other than the recommended workloads, it is possible to modify Data
Deduplication's advanced settings to improve deduplication performance.
If your workload has high resource requirements on your server, the Data
Deduplication jobs should be scheduled to run during the expected idle times for
that workload. This is particularly important when running deduplication on a
hyper-converged host, because running Data Deduplication during expected
working hours can starve VMs.
If your workload does not have high resource requirements, or if it is more
important that optimization jobs complete than workload requests be served, the
memory, CPU, and priority of the Data Deduplication jobs can be adjusted.

Frequently asked questions (FAQ)


I want to run Data Deduplication on the dataset for X workload. Is this supported?
Aside from workloads that are known not to interoperate with Data Deduplication, we
fully support the data integrity of Data Deduplication with any workload. Recommended
workloads are supported by Microsoft for performance as well. The performance of
other workloads depends greatly on what they are doing on your server. You must
determine what performance impacts Data Deduplication has on your workload, and if
this is acceptable for this workload.

What are the volume sizing requirements for deduplicated volumes? In Windows
Server 2012 and Windows Server 2012 R2, volumes had to be carefully sized to ensure
that Data Deduplication could keep up with the churn on the volume. This typically
meant that the average maximum size of a deduplicated volume for a high-churn
workload was 1-2 TB, and the absolute maximum recommended size was 10 TB. In
Windows Server 2016, these limitations were removed. For more information, see What's
new in Data Deduplication.

Do I need to modify the schedule or other Data Deduplication settings for


recommended workloads? No, the provided Usage Types were created to provide
reasonable defaults for recommended workloads.

What are the memory requirements for Data Deduplication? At a minimum, Data
Deduplication should have 300 MB + 50 MB for each TB of logical data. For instance, if
you are optimizing a 10 TB volume, you would need a minimum of 800 MB of memory
allocated for deduplication ( 300 MB + 50 MB * 10 = 300 MB + 500 MB = 800 MB ). While
Data Deduplication can optimize a volume with this low amount of memory, having
such constrained resources will slow down Data Deduplication's jobs.

Optimally, Data Deduplication should have 1 GB of memory for every 1 TB of logical


data. For instance, if you are optimizing a 10 TB volume, you would optimally need 10
GB of memory allocated for Data Deduplication ( 1 GB * 10 ). This ratio will ensure the
maximum performance for Data Deduplication jobs.
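These two rules of thumb are easy to encode. The helper below is an illustrative convenience for capacity planning, not part of any Microsoft tooling; it returns both figures in MB for a given amount of logical data:

```python
def dedup_memory_mb(logical_tb):
    """Memory guidance for Data Deduplication on a volume holding
    `logical_tb` TB of logical data: the minimum is 300 MB + 50 MB per TB,
    and the optimum is 1 GB (1024 MB) per TB."""
    minimum = 300 + 50 * logical_tb
    optimal = 1024 * logical_tb
    return minimum, optimal

print(dedup_memory_mb(10))  # (800, 10240) -> 800 MB minimum, ~10 GB optimal
```

For the 10 TB example in the text, this reproduces the 800 MB minimum and 10 GB optimal figures.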

What are the storage requirements for Data Deduplication? In Windows Server 2016,
Data Deduplication can support volume sizes up to 64 TB. For more information, view
What's new in Data Deduplication.
Running Data Deduplication
Article • 02/18/2022

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Azure Stack HCI, versions 21H2 and 20H2

Running Data Deduplication jobs manually


You can run every scheduled Data Deduplication job manually by using the following
PowerShell cmdlets:

Start-DedupJob: Starts a new Data Deduplication job


Stop-DedupJob: Stops a Data Deduplication job already in progress (or removes it
from the queue)
Get-DedupJob: Shows all the active and queued Data Deduplication jobs

All settings that are available when you schedule a Data Deduplication job are also
available when you start a job manually except for the scheduling-specific settings. For
example, to start an Optimization job manually with high priority, maximum CPU usage,
and maximum memory usage, execute the following PowerShell command with
administrator privilege:

PowerShell

Start-DedupJob -Type Optimization -Volume <Your-Volume-Here> -Memory 100 -Cores 100 -Priority High

Monitoring Data Deduplication

Job successes
Because Data Deduplication uses a post-processing model, it is important that Data
Deduplication jobs succeed. An easy way to check the status of the most recent job is to
use the Get-DedupStatus PowerShell cmdlet. Periodically check the following fields:

For the Optimization job, look at LastOptimizationResult (0 = Success),
LastOptimizationResultMessage, and LastOptimizationTime (should be recent).
For the Garbage Collection job, look at LastGarbageCollectionResult (0 = Success),
LastGarbageCollectionResultMessage, and LastGarbageCollectionTime (should be recent).
For the Integrity Scrubbing job, look at LastScrubbingResult (0 = Success),
LastScrubbingResultMessage, and LastScrubbingTime (should be recent).

Note

More detail on job successes and failures can be found in the Windows Event
Viewer under \Applications and Services
Logs\Windows\Deduplication\Operational .

Optimization rates
One indicator of Optimization job failure is a downward-trending optimization rate
which might indicate that the Optimization jobs are not keeping up with the rate of
changes, or churn. You can check the optimization rate by using the Get-DedupStatus
PowerShell cmdlet.

Important

Get-DedupStatus has two fields that are relevant to the optimization rate:
OptimizedFilesSavingsRate and SavingsRate. These are both important values to
track, but each has a unique meaning.

OptimizedFilesSavingsRate applies only to the files that are 'in-policy' for
optimization (space used by optimized files after optimization / logical
size of optimized files).

SavingsRate applies to the entire volume (space used by optimized files
after optimization / total logical size of the volume).
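To make the distinction concrete, here is a small worked example with made-up numbers. It interprets each rate as the fraction of logical space saved (one minus the ratio given in the definitions); the exact figures Get-DedupStatus reports on a real system may be formatted differently:

```python
# Illustrative calculation of the two Get-DedupStatus rates.
volume_logical = 1000    # GB: logical size of all files on the volume
optimized_logical = 800  # GB: logical size of the in-policy (optimized) files
optimized_physical = 200 # GB: space those files occupy after optimization

# Rate over in-policy files only vs. rate over the whole volume.
optimized_files_savings_rate = 1 - optimized_physical / optimized_logical
savings_rate = 1 - optimized_physical / volume_logical

print(f"{optimized_files_savings_rate:.0%}")  # 75%
print(f"{savings_rate:.0%}")                  # 80%
```

A shrinking OptimizedFilesSavingsRate on stable data, or a SavingsRate that trends down as unoptimized churn accumulates, are the signals to watch for.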

Disabling Data Deduplication


To turn off Data Deduplication, run the Unoptimization job. To undo volume
optimization, run the following command:

PowerShell

Start-DedupJob -Type Unoptimization -Volume <Desired-Volume>


Important

The Unoptimization job will fail if the volume does not have sufficient space to hold
the unoptimized data.

Frequently Asked Questions


Is there a System Center Operations Manager Management Pack available to monitor
Data Deduplication? Yes. Data Deduplication can be monitored through the System
Center Management Pack for File Server. For more information, see the Guide for
System Center Management Pack for File Server 2012 R2 document.
Advanced Data Deduplication settings
Article • 02/18/2022

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Azure Stack HCI, versions 21H2 and 20H2

This document describes how to modify advanced Data Deduplication settings. For
recommended workloads, the default settings should be sufficient. The main reason to
modify these settings is to improve Data Deduplication's performance with other kinds
of workloads.

Modifying Data Deduplication job schedules


The default Data Deduplication job schedules are designed to work well for
recommended workloads and be as non-intrusive as possible (excluding the Priority
Optimization job that is enabled for the Backup usage type). When workloads have
large resource requirements, it is possible to ensure that jobs run only during idle hours,
or to reduce or increase the amount of system resources that a Data Deduplication job
is allowed to consume.

Changing a Data Deduplication schedule


Data Deduplication jobs are scheduled via Windows Task Scheduler and can be viewed
and edited there under the path Microsoft\Windows\Deduplication. Data Deduplication
includes several cmdlets that make scheduling easy.

Get-DedupSchedule shows the current scheduled jobs.


New-DedupSchedule creates a new scheduled job.
Set-DedupSchedule modifies an existing scheduled job.
Remove-DedupSchedule removes a scheduled job.

The most common reason for changing when Data Deduplication jobs run is to ensure
that jobs run during off hours. The following step-by-step example shows how to
modify the Data Deduplication schedule for a sunny day scenario: a hyper-converged
Hyper-V host that is idle on weekends and after 7:00 PM on week nights. To change the
schedule, run the following PowerShell cmdlets in an Administrator context.

1. Disable the scheduled hourly Optimization jobs.

PowerShell
Set-DedupSchedule -Name BackgroundOptimization -Enabled $false
Set-DedupSchedule -Name PriorityOptimization -Enabled $false

2. Remove the currently scheduled Garbage Collection and Integrity Scrubbing jobs.

PowerShell

Get-DedupSchedule -Type GarbageCollection | ForEach-Object { Remove-DedupSchedule -InputObject $_ }
Get-DedupSchedule -Type Scrubbing | ForEach-Object { Remove-DedupSchedule -InputObject $_ }

3. Create a nightly Optimization job that runs at 7:00 PM with high priority and all the
CPUs and memory available on the system.

PowerShell

New-DedupSchedule -Name "NightlyOptimization" -Type Optimization -DurationHours 11 -Memory 100 -Cores 100 -Priority High -Days @(1,2,3,4,5) -Start (Get-Date "2016-08-08 19:00:00")

Note

The date part of the System.Datetime provided to -Start is irrelevant (as long
as it's in the past), but the time part specifies when the job should start.

4. Create a weekly Garbage Collection job that runs on Saturday starting at 7:00 AM
with high priority and all the CPUs and memory available on the system.

PowerShell

New-DedupSchedule -Name "WeeklyGarbageCollection" -Type GarbageCollection -DurationHours 23 -Memory 100 -Cores 100 -Priority High -Days @(6) -Start (Get-Date "2016-08-13 07:00:00")

5. Create a weekly Integrity Scrubbing job that runs on Sunday starting at 7 AM with
high priority and all the CPUs and memory available on the system.

PowerShell

New-DedupSchedule -Name "WeeklyIntegrityScrubbing" -Type Scrubbing -DurationHours 23 -Memory 100 -Cores 100 -Priority High -Days @(0) -Start (Get-Date "2016-08-14 07:00:00")

Available job-wide settings
You can toggle the following settings for new or scheduled Data Deduplication jobs:

Type: The type of the job that should be scheduled. Accepted values: Optimization, GarbageCollection, Scrubbing. This value is required because it is the type of job that you want to schedule. It cannot be changed after the task has been scheduled.

Priority: The system priority of the scheduled job. Accepted values: High, Medium, Low. This value helps the system determine how to allocate CPU time. High uses more CPU time, Low uses less.

Days: The days that the job is scheduled. Accepted values: an array of integers 0-6 representing the days of the week (0 = Sunday, 1 = Monday, 2 = Tuesday, 3 = Wednesday, 4 = Thursday, 5 = Friday, 6 = Saturday). Scheduled tasks have to run on at least one day.

Cores: The percentage of cores on the system that a job should use. Accepted values: integers 0-100 (indicates a percentage). Set this to control what level of impact a job will have on the compute resources of the system.

DurationHours: The maximum number of hours a job should be allowed to run. Accepted values: positive integers. Set this to prevent a job from running into a workload's non-idle hours.

Enabled: Whether the job will run. Accepted values: true/false. Set this to disable a job without removing it.

Full: Schedules a full Garbage Collection job. Accepted values: switch (true/false). By default, every fourth job is a full Garbage Collection job. With this switch, you can schedule full Garbage Collection to run more frequently.

InputOutputThrottle: Specifies the amount of input/output throttling applied to the job. Accepted values: integers 0-100 (indicates a percentage). Throttling ensures that jobs don't interfere with other I/O-intensive processes.

Memory: The percentage of memory on the system that a job should use. Accepted values: integers 0-100 (indicates a percentage). Set this to control what level of impact the job will have on the memory resources of the system.

Name: The name of the scheduled job. Accepted values: string. A job must have a uniquely identifiable name.

ReadOnly: Indicates that the scrubbing job processes and reports on corruptions that it finds, but does not run any repair actions. Accepted values: switch (true/false). Use this when you want to manually restore files that sit on bad sections of the disk.

Start: Specifies the time a job should start. Accepted values: System.DateTime. The date part of the System.DateTime provided to Start is irrelevant (as long as it's in the past), but the time part specifies when the job should start.

StopWhenSystemBusy: Specifies whether Data Deduplication should stop if the system is busy. Accepted values: switch (true/false). This switch gives you the ability to control the behavior of Data Deduplication, which is especially important if you want to run Data Deduplication while your workload is not idle.
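
As a hedged sketch of how several of these job-wide settings combine in one schedule (the schedule name and values here are illustrative, not defaults):

PowerShell

New-DedupSchedule -Name "ThrottledOptimization" -Type Optimization -Priority Low -InputOutputThrottle 50 -StopWhenSystemBusy -Days @(1,2,3,4,5) -DurationHours 8 -Start (Get-Date "2016-08-08 21:00:00")

This sketch creates a low-priority weeknight Optimization job that throttles its I/O and backs off whenever the system is busy.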

Modifying Data Deduplication volume-wide settings

Toggling volume settings


You can set the volume-wide default settings for Data Deduplication via the usage type
that you select when you enable deduplication for a volume. Data Deduplication
includes cmdlets that make editing volume-wide settings easy:

Get-DedupVolume
Set-DedupVolume

The main reasons to modify the volume settings from the selected usage type are to
improve read performance for specific files (such as multimedia or other file types that
are already compressed) or to fine-tune Data Deduplication for better optimization for
your specific workload. The following example shows how to modify the Data
Deduplication volume settings for a workload that most closely resembles a general
purpose file server workload, but uses large files that change frequently.

1. See the current volume settings for Cluster Shared Volume 1.

PowerShell

Get-DedupVolume -Volume C:\ClusterStorage\Volume1 | Select *

2. Enable OptimizePartialFiles on Cluster Shared Volume 1 so that the
MinimumFileAge policy applies to sections of the file rather than the whole file.
This ensures that the majority of the file gets optimized even though sections of
the file change regularly.

PowerShell

Set-DedupVolume -Volume C:\ClusterStorage\Volume1 -OptimizePartialFiles

Available volume-wide settings

ChunkRedundancyThreshold: The number of times that a chunk is referenced before the chunk is duplicated into the hotspot section of the Chunk Store. The value of the hotspot section is that so-called "hot" chunks that are referenced frequently have multiple access paths to improve access time. Accepted values: positive integers. The main reason to modify this number is to increase the savings rate for volumes with high duplication. In general, the default value (100) is the recommended setting, and you shouldn't need to modify this.

ExcludeFileType: File types that are excluded from optimization. Accepted values: array of file extensions. Some file types, particularly multimedia or files that are already compressed, do not benefit very much from being optimized. This setting allows you to configure which types are excluded.

ExcludeFolder: Specifies folder paths that should not be considered for optimization. Accepted values: array of folder paths. If you want to improve performance or keep content in particular paths from being optimized, you can exclude certain paths on the volume from consideration for optimization.

InputOutputScale: Specifies the level of IO parallelization (IO queues) for Data Deduplication to use on a volume during a post-processing job. Accepted values: positive integers ranging 1-36. The main reason to modify this value is to decrease the impact on the performance of a high-IO workload by restricting the number of IO queues that Data Deduplication is allowed to use on a volume. Note that modifying this setting from the default may cause Data Deduplication's post-processing jobs to run slowly.

MinimumFileAgeDays: Number of days after the file is created before the file is considered to be in-policy for optimization. Accepted values: positive integers (inclusive of zero). The Default and Hyper-V usage types set this value to 3 to maximize performance on hot or recently created files. You may want to modify this if you want Data Deduplication to be more aggressive or if you do not care about the extra latency associated with deduplication.

MinimumFileSize: Minimum file size that a file must have to be considered in-policy for optimization. Accepted values: positive integers (bytes) greater than 32 KB. The main reason to change this value is to exclude small files that may have limited optimization value, conserving compute time.

NoCompress: Whether the chunks should be compressed before being put into the Chunk Store. Accepted values: true/false. Some types of files, particularly multimedia files and already compressed file types, may not compress well. This setting allows you to turn off compression for all files on the volume. This would be ideal if you are optimizing a dataset that has a lot of files that are already compressed.

NoCompressionFileType: File types whose chunks should not be compressed before going into the Chunk Store. Accepted values: array of file extensions. Some types of files, particularly multimedia files and already compressed file types, may not compress well. This setting allows compression to be turned off for those files, saving CPU resources.

OptimizeInUseFiles: When enabled, files that have active handles against them will be considered as in-policy for optimization. Accepted values: true/false. Enable this setting if your workload keeps files open for extended periods of time. If this setting is not enabled, a file would never get optimized if the workload has an open handle to it, even if it's only occasionally appending data at the end.

OptimizePartialFiles: When enabled, the MinimumFileAge value applies to segments of a file rather than to the whole file. Accepted values: true/false. Enable this setting if your workload works with large, often edited files where most of the file content is untouched. If this setting is not enabled, these files would never get optimized because they keep getting changed, even though most of the file content is ready to be optimized.

Verify: When enabled, if the hash of a chunk matches a chunk already in the Chunk Store, the chunks are compared byte-by-byte to ensure they are identical. Accepted values: true/false. This is an integrity feature that ensures that the hashing algorithm that compares chunks does not make a mistake by comparing two chunks of data that are actually different but have the same hash. In practice, it is extremely improbable that this would ever happen. Enabling the verification feature adds significant overhead to the optimization job.
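
As a hedged sketch combining several of these volume-wide settings in one call (the drive letter, extensions, and size are illustrative; adjust them for your dataset):

PowerShell

Set-DedupVolume -Volume D: -ExcludeFileType mp4,zip -NoCompressionFileType jpg,png -MinimumFileSize 65536

This sketch skips optimization for already-compressed video and archive files, stores image chunks uncompressed, and ignores files smaller than 64 KB.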

Modifying Data Deduplication system-wide settings
Data Deduplication has additional system-wide settings that can be configured via the
registry. These settings apply to all of the jobs and volumes that run on the system. Extra
care must be given whenever editing the registry.

For example, you may want to disable full Garbage Collection. More information about
why this may be useful for your scenario can be found in Frequently asked questions. To
edit the registry with PowerShell:

If Data Deduplication is running in a cluster:


PowerShell

Set-ItemProperty -Path
HKLM:\System\CurrentControlSet\Services\ddpsvc\Settings -Name
DeepGCInterval -Type DWord -Value 0xFFFFFFFF
Set-ItemProperty -Path HKLM:\CLUSTER\Dedup -Name DeepGCInterval -Type
DWord -Value 0xFFFFFFFF

If Data Deduplication is not running in a cluster:

PowerShell

Set-ItemProperty -Path
HKLM:\System\CurrentControlSet\Services\ddpsvc\Settings -Name
DeepGCInterval -Type DWord -Value 0xFFFFFFFF

Available system-wide settings

WlmMemoryOverPercentThreshold: This setting allows jobs to use more memory than Data Deduplication judges to actually be available. For example, a setting of 300 would mean that the job would have to use three times the assigned memory to get canceled. Accepted values: positive integers (a value of 300 means 300%, or 3 times). Change this if you have another task that will stop if Data Deduplication takes more memory.

DeepGCInterval: This setting configures the interval at which regular Garbage Collection jobs become full Garbage Collection jobs. A setting of n would mean that every nth job was a full Garbage Collection job. Note that full Garbage Collection is always disabled (regardless of the registry value) for volumes with the Backup usage type. Start-DedupJob -Type GarbageCollection -Full may be used if full Garbage Collection is desired on a Backup volume. Accepted values: integers (-1 indicates disabled). For reasons to change this, see the frequently asked questions below.

Frequently asked questions


I changed a Data Deduplication setting, and now jobs are slow or don't finish, or my
workload performance has decreased. Why?
These settings give you a lot of power to control how Data Deduplication runs. Use
them responsibly, and monitor performance.

I want to run a Data Deduplication job right now, but I don't want to create a new
schedule. Can I do this?
Yes, all jobs can be run manually.
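
For instance, a one-off job can be started with Start-DedupJob and then monitored with Get-DedupJob (a sketch; the volume letter is illustrative):

PowerShell

Start-DedupJob -Volume D: -Type Optimization
Get-DedupJob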

What is the difference between full and regular Garbage Collection? There are two
types of Garbage Collection:

Regular Garbage Collection uses a statistical algorithm to find large unreferenced


chunks that meet a certain criteria (low in memory and IOPs). Regular Garbage
Collection compacts a chunk store container only if a minimum percentage of the
chunks is unreferenced. This type of Garbage Collection runs much faster and uses
fewer resources than full Garbage Collection. The default schedule of the regular
Garbage Collection job is to run once a week.
Full Garbage Collection does a much more thorough job of finding unreferenced
chunks and freeing more disk space. Full Garbage Collection compacts every
container even if just a single chunk in the container is unreferenced. Full Garbage
Collection will also free space that may have been in use if there was a crash or
power failure during an Optimization job. Full Garbage Collection jobs will recover
100 percent of the available space that can be recovered on a deduplicated
volume at the cost of requiring more time and system resources compared to a
regular Garbage Collection job. The full Garbage Collection job will typically find
and release up to 5 percent more of the unreferenced data than a regular Garbage
Collection job. The default schedule of the full Garbage Collection job is to run
every fourth time Garbage Collection is scheduled.

Why would I want to disable full Garbage Collection?

Full Garbage Collection could adversely affect the volume's lifetime shadow copies
and the size of incremental backups. High-churn or I/O-intensive workloads may see
a degradation in performance caused by full Garbage Collection jobs.
You can manually run a full Garbage Collection job from PowerShell to clean up
leaks if you know your system crashed.

Data Deduplication interoperability
Article • 03/29/2022

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Azure Stack HCI, versions 21H2 and 20H2

Supported

ReFS
Data Deduplication is supported starting with Windows Server 2019.

Failover Clustering
Failover Clustering is fully supported, if every node in the cluster has the Data
Deduplication feature installed. Other important notes:

Manually started Data Deduplication jobs must be run on the Owner node for the
Cluster Shared Volume.
Scheduled Data Deduplication jobs are stored in the cluster task scheduler so that
if a deduplicated volume is taken over by another node, the scheduled job will be
applied on the next scheduled interval.
Data Deduplication fully interoperates with the Cluster OS Rolling Upgrade feature.
Data Deduplication is fully supported on Storage Spaces Direct with ReFS or NTFS-
formatted volumes (mirror or parity). ReFS-formatted volumes are supported
starting with Windows Server 2019. Deduplication is not supported on volumes
with multiple tiers.

Storage Replica
Storage Replica is fully supported. Data Deduplication should be configured to not run
on the secondary copy.

BranchCache
You can optimize data access over the network by enabling BranchCache on servers and
clients. When a BranchCache-enabled system communicates over a WAN with a remote
file server that is running data deduplication, all of the deduplicated files are already
indexed and hashed. Therefore, requests for data from a branch office are quickly
computed. This is similar to preindexing or prehashing a BranchCache-enabled server.

DFS Replication
Data Deduplication works with Distributed File System (DFS) Replication. Optimizing or
unoptimizing a file will not trigger a replication because the file does not change. DFS
Replication uses Remote Differential Compression (RDC), not the chunks in the chunk
store, for over-the-wire savings. The files on the replica can also be optimized by using
deduplication if the replica is using Data Deduplication.

Quotas
Data Deduplication does not support creating a hard quota on a volume root folder that
also has deduplication enabled. When a hard quota is present on a volume root, the
actual free space on the volume and the quota-restricted space on the volume are not
the same. This may cause deduplication optimization jobs to fail. It is possible,
however, to create a soft quota on a volume root that has deduplication enabled.

When quota is enabled on a deduplicated volume, quota uses the logical size of the file
rather than the physical size of the file. Quota usage (including any quota thresholds)
does not change when a file is processed by deduplication. All other quota functionality,
including volume-root soft quotas and quotas on subfolders, works normally when
using deduplication.
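
For example, a soft quota on a deduplicated volume root could be created with the File Server Resource Manager cmdlets (a sketch; the path and size are illustrative):

PowerShell

New-FsrmQuota -Path "E:\" -Size 500GB -SoftLimit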

Windows Server Backup


Windows Server Backup can back up an optimized volume as-is (that is, without
removing deduplicated data). The following steps show how to back up a volume and
how to restore a volume or selected files from a volume.

1. Install Windows Server Backup.

PowerShell

Install-WindowsFeature -Name Windows-Server-Backup

2. Back up the E: volume to another volume by running the following command,
substituting the correct volume names for your situation.

PowerShell
wbadmin start backup -include:E: -backuptarget:F: -quiet

3. Get the version ID of the backup you just created.

PowerShell

wbadmin get versions

This output version ID will be a date and time string, for example: 08/18/2016-
06:22.

4. Restore the entire volume.

PowerShell

wbadmin start recovery -version:08/18/2016-06:22 -itemtype:Volume -items:E: -recoveryTarget:E:

--OR--

Restore a particular folder (in this case, the E:\Docs folder):

PowerShell

wbadmin start recovery -version:08/18/2016-06:22 -itemtype:File -items:E:\Docs -recursive

Unsupported

Windows 10 (client OS)


Data Deduplication is not supported on Windows 10. There are several popular blog
posts in the Windows community describing how to remove the binaries from Windows
Server 2016 and install on Windows 10, but this scenario has not been validated as part
of the development of Data Deduplication.

Windows Search
Windows Search doesn't support Data Deduplication. Data Deduplication uses reparse
points, which Windows Search can't index, so Windows Search skips all deduplicated
files, excluding them from the index. As a result, search results might be incomplete for
deduplicated volumes.

Robocopy
Running Robocopy with Data Deduplication is not recommended because certain
Robocopy commands can corrupt the Chunk Store. The Chunk Store is stored in the
System Volume Information folder for a volume. If the folder is deleted, the optimized
files (reparse points) that are copied from the source volume become corrupted because
the data chunks are not copied to the destination volume.

DFS Namespaces overview
Article • 01/05/2023

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008

DFS (Distributed File System) Namespaces is a role service in Windows Server that
enables you to group shared folders located on different servers into one or more
logically structured namespaces. This makes it possible to give users a virtual view of
shared folders, where a single path leads to files located on multiple servers, as shown in
the following figure:

Here's a description of the elements that make up a DFS namespace:

Namespace server - A namespace server hosts a namespace. The namespace
server can be a member server or a domain controller.
Namespace root - The namespace root is the starting point of the namespace. In
the previous figure, the name of the root is Public, and the namespace path is
\\Contoso\Public. This type of namespace is a domain-based namespace because
it begins with a domain name (for example, Contoso) and its metadata is stored in
Active Directory Domain Services (AD DS). Although a single namespace server is
shown in the previous figure, a domain-based namespace can be hosted on
multiple namespace servers to increase the availability of the namespace.
Folder - Folders without folder targets add structure and hierarchy to the
namespace, and folders with folder targets provide users with actual content.
When users browse a folder that has folder targets in the namespace, the client
computer receives a referral that transparently redirects the client computer to one
of the folder targets.
Folder targets - A folder target is the UNC path of a shared folder or another
namespace that is associated with a folder in a namespace. The folder target is
where data and content is stored. In the previous figure, the folder named Tools
has two folder targets, one in London and one in New York, and the folder named
Training Guides has a single folder target in New York. A user who browses to
\\Contoso\Public\Software\Tools is transparently redirected to the shared folder
\\LDN-SVR-01\Tools or \\NYC-SVR-01\Tools, depending on which site the user is
currently located in.

This article discusses how to install DFS, what's new, and where to find evaluation and
deployment information.

You can administer namespaces by using DFS Management, the DFS Namespace (DFSN)
Cmdlets in Windows PowerShell, the DfsUtil command, or scripts that call WMI.
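
As a hedged sketch using the DFSN PowerShell cmdlets to build the namespace from the figure above (the server and share names are taken from that example and are hypothetical):

PowerShell

New-DfsnRoot -Path "\\Contoso\Public" -TargetPath "\\NYC-SVR-01\Public" -Type DomainV2
New-DfsnFolder -Path "\\Contoso\Public\Software\Tools" -TargetPath "\\NYC-SVR-01\Tools"

Here DomainV2 corresponds to a domain-based namespace in Windows Server 2008 mode.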

Server requirements and limits


There are no additional hardware or software requirements for running DFS
Management or using DFS Namespaces.

A namespace server is a domain controller or member server that hosts a namespace.


The number of namespaces you can host on a server is determined by the operating
system running on the namespace server.

Servers that are running the following operating systems can host multiple domain-
based namespaces in addition to a single stand-alone namespace.

Windows Server 2022
Windows Server 2019
Windows Server 2016
Windows Server 2012 R2
Windows Server 2012
Windows Server 2008 R2 Datacenter and Enterprise Editions
Windows Server (Semi-Annual Channel)

Servers that are running the following operating systems can host a single stand-alone
namespace:

Windows Server 2008 R2 Standard

The following table describes additional factors to consider when choosing servers to
host a namespace.
Server hosting stand-alone namespaces:

Must contain an NTFS volume to host the namespace.
Can be a member server or domain controller.
Can be hosted by a failover cluster to increase the availability of the namespace.

Server hosting domain-based namespaces:

Must contain an NTFS volume to host the namespace.
Must be a member server or domain controller in the domain in which the namespace
is configured. (This requirement applies to every namespace server that hosts a
given domain-based namespace.)
The namespace cannot be a clustered resource in a failover cluster. However, you
can locate the namespace on a server that also functions as a node in a failover
cluster if you configure the namespace to use only local resources on that server.

Installing DFS Namespaces


DFS Namespaces and DFS Replication are a part of the File and Storage Services role.
The management tools for DFS (DFS Management, the DFS Namespaces module for
Windows PowerShell, and command-line tools) are installed separately as part of the
Remote Server Administration Tools.

Install DFS Namespaces by using Windows Admin Center, Server Manager, or
PowerShell, as described in the next sections.

To install DFS by using Server Manager


1. Open Server Manager, click Manage, and then click Add Roles and Features. The
Add Roles and Features Wizard appears.

2. On the Server Selection page, select the server or virtual hard disk (VHD) of an
offline virtual machine on which you want to install DFS.

3. Select the role services and features that you want to install.

To install the DFS Namespaces service, on the Server Roles page, select DFS
Namespaces.

To install only the DFS Management Tools, on the Features page, expand
Remote Server Administration Tools, expand Role Administration Tools, expand
File Services Tools, and then select DFS Management Tools.
DFS Management Tools installs the DFS Management snap-in, the DFS
Namespaces module for Windows PowerShell, and command-line tools, but
it does not install any DFS services on the server.

To install DFS by using Windows PowerShell


Open a Windows PowerShell session with elevated user rights, and then type the
following command, where <name> is the role service or feature that you want to install
(see the following table for a list of relevant role service or feature names):

PowerShell

Install-WindowsFeature <name>

Role service or feature Name

DFS Namespaces FS-DFS-Namespace

DFS Management Tools RSAT-DFS-Mgmt-Con

For example, to install the Distributed File System Tools portion of the Remote Server
Administration Tools feature, type:

PowerShell

Install-WindowsFeature "RSAT-DFS-Mgmt-Con"

To install the Distributed File System Tools portion for a client device, type:

PowerShell

Add-WindowsCapability -Name Rsat.FileServices.Tools~~~~0.0.1.0 -Online

To install the DFS Namespaces, and the Distributed File System Tools portions of the
Remote Server Administration Tools feature, type:

PowerShell

Install-WindowsFeature "FS-DFS-Namespace", "RSAT-DFS-Mgmt-Con"

Interoperability with Azure virtual machines


Using DFS Namespaces on a virtual machine in Microsoft Azure has been tested.

You can host domain-based namespaces in Azure virtual machines, including
environments with Azure Active Directory.
You can cluster stand-alone namespaces in Azure virtual machines using failover
clusters that use Shared Disk or Ultra Disks.

To learn about how to get started with Azure virtual machines, see Azure virtual
machines documentation.

Additional References
For additional related information, see the following resources.

Product evaluation: What's New in DFS Namespaces and DFS Replication in Windows Server
Deployment: DFS Namespace Scalability Considerations
Operations: DFS Namespaces: Frequently Asked Questions
Community resources: The File Services and Storage TechNet Forum
Protocols: File Services Protocols in Windows Server (Deprecated)
Related technologies: Failover Clustering
Support: Windows IT Pro Support


Checklist: Deploy DFS Namespaces
Article • 07/29/2021

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008

Distributed File System (DFS) Namespaces and DFS Replication can be used to publish
documents, software, and line-of-business data to users throughout an organization.
Although DFS Replication alone is sufficient to distribute data, you can use
DFS Namespaces to configure the namespace so that a folder is hosted by multiple
servers, each of which holds an updated copy of the folder. This increases data
availability and distributes the client load across servers.

When browsing a folder in the namespace, users are not aware that the folder is hosted
by multiple servers. When a user opens the folder, the client computer is automatically
referred to a server on its site. If no same-site servers are available, you can configure
the namespace to refer the client to a server that has the lowest connection cost as
defined in Active Directory Domain Services (AD DS).

To deploy DFS Namespaces, perform the following tasks:

Review the concepts and requirements of DFS Namespaces. Overview of DFS
Namespaces
Choose a namespace type
Create a DFS namespace
Migrate existing domain-based namespaces to Windows Server 2008 mode
domain-based namespaces. Migrate a Domain-based Namespace to Windows
Server 2008 mode
Increase availability by adding namespace servers to a domain-based namespace.
Add Namespace Servers to a Domain-based DFS Namespace
Add folders to a namespace. Create a Folder in a DFS Namespace
Add folder targets to folders in a namespace. Add Folder Targets
Replicate content between folder targets using DFS Replication (optional).
Replicate Folder Targets Using DFS Replication

Additional References
Namespaces
Checklist: Tune a DFS Namespace
Replication
Checklist: Tune a DFS namespace
Article • 07/29/2021

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008

After creating a namespace and adding folders and targets, use the following checklist
to tune or optimize the way the DFS namespace handles referrals and polls Active
Directory Domain Services (AD DS) for updated namespace data.

Prevent users from seeing folders in a namespace that they do not have
permissions to access. Enable Access-Based Enumeration on a Namespace
Enable or prevent users from being referred to a namespace or folder target when
they access a folder in the namespace. Enable or Disable Referrals and Client
Failback
Adjust how long clients cache a referral before requesting a new one. Change the
Amount of Time That Clients Cache Referrals
Optimize how namespace servers poll AD DS to obtain the most current
namespace data. Optimize Namespace Polling
Use inherited permissions to control which users can view folders in a namespace
for which access-based enumeration is enabled. Using Inherited Permissions with
Access-Based Enumeration

In addition, by using a DFS Namespaces enhancement known as target priority, you can
specify the priority of servers so that a specific server is always placed first or last in the
list of servers (known as a referral) that the client receives when it accesses a folder with
targets in the namespace.

Specify in what order users should be referred to folder targets. Set the Ordering
Method for Targets in Referrals
Override referral ordering for a specific namespace server or folder target. Set
Target Priority to Override Referral Ordering
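
The referral-ordering overrides above can also be scripted with the DFSN module; as a hedged sketch (the namespace and server paths are hypothetical):

PowerShell

Set-DfsnRootTarget -Path "\\Contoso\Public" -TargetPath "\\LDN-SVR-01\Public" -ReferralPriorityClass GlobalHigh

This sketch places the London namespace server first in every referral, regardless of site cost.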

Additional References
Namespaces
Checklist: Deploy DFS Namespaces
Tuning DFS Namespaces
Deploying DFS Namespaces
Article • 07/29/2021

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008

To deploy DFS Namespaces, refer to the following topics:

Choose a Namespace Type


Create a DFS Namespace
Migrate a Domain-based Namespace to Windows Server 2008 Mode
Add Namespace Servers to a Domain-based DFS Namespace
Create a Folder in a DFS Namespace
Add Folder Targets
Replicate Folder Targets Using DFS Replication
Delegate Management Permissions for DFS Namespaces
Choose a namespace type
Article • 07/29/2021

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016, Windows
Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows Server 2008

When creating a namespace, you must choose one of two namespace types: a stand-alone
namespace or a domain-based namespace. In addition, if you choose a domain-based
namespace, you must choose a namespace mode: Windows 2000 Server mode or Windows
Server 2008 mode.

Choosing a namespace type


Choose a stand-alone namespace if any of the following conditions apply to your environment:

Your organization does not use Active Directory Domain Services (AD DS).
You want to increase the availability of the namespace by using a failover cluster.
You need to create a single namespace with more than 5,000 DFS folders in a domain that
does not meet the requirements for a domain-based namespace (Windows Server 2008
mode) as described later in this topic.

Note

To check the size of a namespace, right-click the namespace in the DFS Management
console tree, click Properties, and then view the namespace size in the Namespace
Properties dialog box. For more information about DFS Namespace scalability, see the
Microsoft website File Services.

Choose a domain-based namespace if any of the following conditions apply to your environment:

You want to ensure the availability of the namespace by using multiple namespace servers.
You want to hide the name of the namespace server from users. This makes it easier to
replace the namespace server or migrate the namespace to another server.

Choosing a domain-based namespace mode


If you choose a domain-based namespace, you must choose whether to use the
Windows 2000 Server mode or the Windows Server 2008 mode. The Windows Server 2008 mode
includes support for access-based enumeration and increased scalability. The domain-based
namespace introduced in Windows 2000 Server is now referred to as "domain-based namespace
(Windows 2000 Server mode)."
To use the Windows Server 2008 mode, the domain and namespace must meet the following
minimum requirements:

The forest uses the Windows Server 2003 or higher forest functional level.
The domain uses the Windows Server 2008 or higher domain functional level.
All namespace servers are running Windows Server 2012 R2, Windows Server 2012,
Windows Server 2008 R2, or Windows Server 2008.

If your environment supports it, choose the Windows Server 2008 mode when you create new
domain-based namespaces. This mode provides additional features and scalability, and also
eliminates the possible need to migrate a namespace from the Windows 2000 Server mode.

For information about migrating a namespace to Windows Server 2008 mode, see Migrate a
Domain-based Namespace to Windows Server 2008 Mode.

If your environment does not support domain-based namespaces in Windows Server 2008 mode,
use the existing Windows 2000 Server mode for the namespace.

Comparing namespace types and modes


The characteristics of each namespace type and mode are described in the following table.

| Characteristic | Stand-Alone Namespace | Domain-based Namespace (Windows 2000 Server Mode) | Domain-based Namespace (Windows Server 2008 Mode) |
| --- | --- | --- | --- |
| Path to namespace | \\ServerName\RootName | \\NetBIOSDomainName\RootName or \\DNSDomainName\RootName | \\NetBIOSDomainName\RootName or \\DNSDomainName\RootName |
| Namespace information storage location | In the registry and in a memory cache on the namespace server | In AD DS and in a memory cache on each namespace server | In AD DS and in a memory cache on each namespace server |
| Namespace size recommendations | The namespace can contain more than 5,000 folders with targets; the recommended limit is 50,000 folders with targets | The size of the namespace object in AD DS should be less than 5 megabytes (MB) to maintain compatibility with domain controllers that are not running Windows Server 2008. This means no more than approximately 5,000 folders with targets. | The namespace can contain more than 5,000 folders with targets; the recommended limit is 50,000 folders with targets |
| Minimum AD DS forest functional level | AD DS is not required | Windows 2000 | Windows Server 2003 |
| Minimum AD DS domain functional level | AD DS is not required | Windows 2000 mixed | Windows Server 2008 |
| Minimum supported namespace servers | Windows 2000 Server | Windows 2000 Server | Windows Server 2008 |
| Support for access-based enumeration (if enabled) | Yes, requires Windows Server 2008 namespace server | No | Yes |
| Supported methods to ensure namespace availability | Create a stand-alone namespace on a failover cluster. | Use multiple namespace servers to host the namespace. (The namespace servers must be in the same domain.) | Use multiple namespace servers to host the namespace. (The namespace servers must be in the same domain.) |
| Support for using DFS Replication to replicate folder targets | Supported when joined to an AD DS domain | Supported | Supported |

Additional References
Deploying DFS Namespaces
Migrate a Domain-based Namespace to Windows Server 2008 Mode
Create a DFS namespace
Article • 07/29/2021

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008

To create a new namespace, you can use Server Manager to create the namespace when
you install the DFS Namespaces role service. You can also use the New-DfsnRoot cmdlet
from a Windows PowerShell session.

The DFSN Windows PowerShell module was introduced in Windows Server 2012.
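As a sketch of the PowerShell approach, the following creates a domain-based namespace in Windows Server 2008 mode (DomainV2). The server name FS01, the share Public, and the domain contoso.com are placeholder values for your environment:

```powershell
# Create and share the folder that will back the namespace root
# (local path, share name, and permissions are placeholders).
New-Item -Path "C:\DFSRoots\Public" -ItemType Directory -Force
New-SmbShare -Name "Public" -Path "C:\DFSRoots\Public" -FullAccess "Everyone"

# Create a domain-based namespace in Windows Server 2008 mode.
New-DfsnRoot -Path "\\contoso.com\Public" -TargetPath "\\FS01\Public" -Type DomainV2
```

Use -Type Standalone instead of DomainV2 to create a stand-alone namespace.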

Alternatively, you can use the following procedure to create a namespace after installing
the role service.

To create a namespace
1. Click Start, point to Administrative Tools, and then click DFS Management.

2. In the console tree, right-click the Namespaces node, and then click New
Namespace.

3. Follow the instructions in the New Namespace Wizard.

To create a stand-alone namespace on a failover cluster, specify the name of a clustered file server instance on the Namespace Server page of the New Namespace Wizard.

Important

Do not attempt to create a domain-based namespace using the Windows Server 2008 mode unless the forest functional level is Windows Server 2003 or higher. Doing so can result in a namespace for which you cannot delete DFS folders, yielding the following error message: "The folder cannot be deleted. Cannot complete this function".

Additional References
Deploying DFS Namespaces
Choose a Namespace Type
Add Namespace Servers to a Domain-based DFS Namespace
Delegate Management Permissions for DFS Namespaces
Migrate a domain-based namespace to
Windows Server 2008 Mode
Article • 07/29/2021

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008

The Windows Server 2008 mode for domain-based namespaces includes support for
access-based enumeration and increased scalability.

To migrate a domain-based namespace to Windows Server 2008 mode
To migrate a domain-based namespace from Windows 2000 Server mode to Windows
Server 2008 mode, you must export the namespace to a file, delete the namespace, re-
create it in Windows Server 2008 mode, and then import the namespace settings. To do
so, use the following procedure:

1. Open a Command Prompt window and type the following command to export the namespace to a file, where \\domain\namespace is the name of the appropriate domain and namespace, and path\filename is the path and file name of the file for export:

Dfsutil root export \\domain\namespace path\filename.xml

2. Write down the path (\\server\share) for each namespace server. You must manually add namespace servers to the re-created namespace because Dfsutil cannot import namespace servers.

3. In DFS Management, right-click the namespace and then click Delete, or type the
following command at a command prompt,
where \\domain\namespace is the name of the appropriate domain and
namespace:
Dfsutil root remove \\domain\namespace

4. In DFS Management, re-create the namespace with the same name, but use the
Windows Server 2008 mode, or type the following command at a command
prompt, where
\\server\namespace is the name of the appropriate server and share for the
namespace root:

Dfsutil root adddom \\server\namespace v2

5. To import the namespace from the export file, type the following command at a
command prompt, where
\\domain\namespace is the name of the appropriate domain and namespace and
path\filename is the path and file name of the file to import:

Dfsutil root import merge path\filename.xml \\domain\namespace

Note

To minimize the time required to import a large namespace, run the Dfsutil
root import command locally on a namespace server.

6. Add any remaining namespace servers to the re-created namespace by right-clicking the namespace in DFS Management and then clicking Add Namespace Server, or by typing the following command at a command prompt, where \\server\share is the name of the appropriate server and share for the namespace root:

Dfsutil target add \\server\share

Note

You can add namespace servers before importing the namespace, but doing
so causes the namespace servers to incrementally download the metadata for
the namespace instead of immediately downloading the entire namespace
after being added as a namespace server.
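Taken together, the steps above might look like the following sequence at an elevated prompt. The domain contoso.com, the namespace Public, the servers FS01 and FS02, and the export file path are all placeholder values:

```powershell
# 1. Export the existing namespace settings to a file.
dfsutil root export \\contoso.com\Public C:\Temp\Public.xml

# 2. Record each namespace server path (\\server\share) before deleting,
#    because Dfsutil cannot import namespace servers.

# 3. Delete the Windows 2000 Server mode namespace.
dfsutil root remove \\contoso.com\Public

# 4. Re-create the namespace in Windows Server 2008 mode (v2).
dfsutil root adddom \\FS01\Public v2

# 5. Import the saved settings into the re-created namespace
#    (run locally on a namespace server for large namespaces).
dfsutil root import merge C:\Temp\Public.xml \\contoso.com\Public

# 6. Re-add each remaining namespace server recorded in step 2.
dfsutil target add \\FS02\Public
```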

Additional References
Deploying DFS Namespaces
Choose a Namespace Type
Add namespace servers to a domain-
based DFS namespace
Article • 07/29/2021

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008

You can increase the availability of a domain-based namespace by specifying additional namespace servers to host the namespace.

To add a namespace server to a domain-based namespace
To add a namespace server to a domain-based namespace using DFS Management, use
the following procedure:

1. Click Start, point to Administrative Tools, and then click DFS Management.

2. In the console tree, under the Namespaces node, right-click a domain-based namespace, and then click Add Namespace Server.

3. Enter the path to another server, or click Browse to locate a server.

Note

This procedure is not applicable for stand-alone namespaces because they support
only a single namespace server. To increase the availability of a stand-alone
namespace, specify a failover cluster as the namespace server in the New
Namespace Wizard.

 Tip

To add a namespace server by using Windows PowerShell, use the New-DfsnRootTarget cmdlet. The DFSN Windows PowerShell module was introduced in Windows Server 2012.
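For example, a second namespace server could be added from PowerShell as follows; the namespace path and the server share on FS02 are placeholder values, and the share must already exist on the target server:

```powershell
# Add FS02 as an additional namespace server (root target)
# for an existing domain-based namespace.
New-DfsnRootTarget -Path "\\contoso.com\Public" -TargetPath "\\FS02\Public"
```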
Additional References
Deploying DFS Namespaces
Review DFS Namespaces Server Requirements
Create a DFS Namespace
Delegate Management Permissions for DFS Namespaces
Create a folder in a DFS namespace
Article • 07/29/2021

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008

You can use folders to create additional levels of hierarchy in a namespace. You can also
create folders with folder targets to add shared folders to the namespace. DFS folders
with folder targets cannot contain other DFS folders, so if you want to add a level of
hierarchy to the namespace, do not add folder targets to the folder.

Use the following procedure to create a folder in a namespace using DFS Management:

To create a folder in a DFS namespace


1. Click Start, point to Administrative Tools, and then click DFS Management.

2. In the console tree, under the Namespaces node, right-click a namespace or a folder within a namespace, and then click New Folder.

3. In the Name text box, type the name of the new folder.

4. To add one or more folder targets to the folder, click Add and specify the Universal
Naming Convention (UNC) path of the folder target, and then click OK .

 Tip

To create a folder in a namespace by using Windows PowerShell, use the New-DfsnFolder cmdlet. The DFSN Windows PowerShell module was introduced in Windows Server 2012.
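As a sketch of the PowerShell approach, the following creates a folder named Tools with one folder target; the namespace and server paths are placeholder values:

```powershell
# Create a DFS folder with a folder target. The shared folder
# \\FS01\Tools must already exist on the target server.
New-DfsnFolder -Path "\\contoso.com\Public\Tools" -TargetPath "\\FS01\Tools"
```

To create a folder that only adds a level of hierarchy, omit folder targets when creating it in DFS Management.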

Additional References
Deploying DFS Namespaces
Delegate Management Permissions for DFS Namespaces
Add folder targets
Article • 05/11/2023

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008

A folder target is the Universal Naming Convention (UNC) path of a shared folder or
another namespace that is associated with a folder in a namespace. Adding multiple
folder targets increases the availability of the folder in the namespace.

Prerequisites
The following must be installed to use this feature:

A Windows Server with the DFS Namespaces role service installed as part of the
File and Storage Services server role. To learn more, see Install or Uninstall Roles,
Role Services, or Features.
An account with Administrative privileges.
A server to host the DFS folder target.

Add a folder target


To add a folder target by using DFS Management, perform the following:

1. Click Start > Windows Administrative Tools > select DFS Management. Alternatively, click Start, type dfsmgmt.msc, and then press Enter.
2. In the console tree, under the Namespaces node, right-click on the namespace
where you want to add the folder, then select New Folder.
3. In the popup box, provide a name for this folder in the Name field, then click Add.
4. Type the path to the folder target, or click Browse to locate the folder target, click
OK, then click OK again.

If the folder is replicated using DFS Replication, you can specify whether to add the new
folder target to the replication group.

 Tip

To add a folder target by using Windows PowerShell, use the New-DfsnFolderTarget cmdlet. The DFSN Windows PowerShell module was introduced in Windows Server 2012.
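For example, an additional folder target could be added from PowerShell as follows; the namespace folder path and the share on FS02 are placeholder values:

```powershell
# Add a second folder target to an existing DFS folder to
# increase its availability. \\FS02\Tools must already be shared.
New-DfsnFolderTarget -Path "\\contoso.com\Public\Tools" -TargetPath "\\FS02\Tools"
```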

Note

Folders can contain folder targets or other DFS folders, but not both, at the same
level in the folder hierarchy.

Additional references
Deploying DFS Namespaces
Delegate Management Permissions for DFS Namespaces
Replicate Folder Targets Using DFS Replication
Replicate folder targets using DFS
Replication
Article • 07/29/2021

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, and
Windows Server 2008

You can use DFS Replication to keep the contents of folder targets in sync so that users
see the same files regardless of which folder target the client computer is referred to.

To replicate folder targets using DFS Replication
1. Click Start, point to Administrative Tools, and then click DFS Management.

2. In the console tree, under the Namespaces node, right-click a folder that has two
or more folder targets, and then click Replicate Folder.

3. Follow the instructions in the Replicate Folder Wizard.

Note

Configuration changes are not applied immediately to all members except when
using the Suspend-DfsReplicationGroup and Sync-DfsReplicationGroup cmdlets.
The new configuration must be replicated to all domain controllers, and each
member in the replication group must poll its closest domain controller to obtain
the changes. The amount of time this takes depends on the Active Directory Domain Services (AD DS) replication latency and the long polling interval (60
minutes) on each member. To poll immediately for configuration changes, open a
Command Prompt window and then type the following command once for each
member of the replication group:
dfsrdiag.exe PollAD /Member:DOMAIN\Server1
To do so from a Windows PowerShell session, use the Update-
DfsrConfigurationFromAD cmdlet, which was introduced in Windows
Server 2012 R2.
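To illustrate the note above, polling could be triggered with either tool as follows; the domain, member, and computer names are placeholder values:

```powershell
# Trigger an immediate poll of AD DS for DFS Replication
# configuration changes (run once per replication group member).
dfsrdiag.exe PollAD /Member:CONTOSO\FS01

# Equivalent cmdlet, available in Windows Server 2012 R2 and later;
# accepts multiple computer names.
Update-DfsrConfigurationFromAD -ComputerName "FS01","FS02"
```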
Additional References
Deploying DFS Namespaces
Delegate Management Permissions for DFS Namespaces
DFS Replication
Delegate management permissions for
DFS Namespaces
Article • 07/29/2021

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008

The following table describes the groups that can perform basic namespace tasks by
default, and the method for delegating the ability to perform these tasks:

| Task | Groups that can perform this task by default | Delegation method |
| --- | --- | --- |
| Create a domain-based namespace | Domain Admins group in the domain where the namespace is configured | Right-click the Namespaces node in the console tree, and then click Delegate Management Permissions. Or use the Set-DfsnRoot GrantAdminAccounts and Set-DfsnRoot RevokeAdminAccounts Windows PowerShell cmdlets (introduced in Windows Server 2012). You must also add the user to the local Administrators group on the namespace server. |
| Add a namespace server to a domain-based namespace | Domain Admins group in the domain where the namespace is configured | Right-click the domain-based namespace in the console tree, and then click Delegate Management Permissions. Or use the Set-DfsnRoot GrantAdminAccounts and Set-DfsnRoot RevokeAdminAccounts Windows PowerShell cmdlets (introduced in Windows Server 2012). You must also add the user to the local Administrators group on the namespace server to be added. |
| Manage a domain-based namespace | Local Administrators group on each namespace server | Right-click the domain-based namespace in the console tree, and then click Delegate Management Permissions. |
| Create a stand-alone namespace | Local Administrators group on the namespace server | Add the user to the local Administrators group on the namespace server. |
| Manage a stand-alone namespace* | Local Administrators group on the namespace server | Right-click the stand-alone namespace in the console tree, and then click Delegate Management Permissions. Or use the Set-DfsnRoot GrantAdminAccounts and Set-DfsnRoot RevokeAdminAccounts Windows PowerShell cmdlets (introduced in Windows Server 2012). |
| Create a replication group or enable DFS Replication on a folder | Domain Admins group in the domain where the namespace is configured | Right-click the Replication node in the console tree, and then click Delegate Management Permissions. |
*Delegating management permissions to manage a stand-alone namespace does not grant the user the ability to view and manage security by using the Delegation tab unless the user is a member of the local Administrators group on the namespace server. This issue occurs because the DFS Management snap-in cannot retrieve the discretionary access control lists (DACLs) for the stand-alone namespace from the registry. To enable the snap-in to display delegation information, you must follow the steps in the Microsoft Knowledge Base article KB314837: How to Manage Remote Access to the Registry.
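As a sketch of the PowerShell delegation method from the table, the following grants and later revokes management permissions on a namespace; the namespace path and the account name are placeholder values:

```powershell
# Grant a group permission to manage a domain-based namespace.
Set-DfsnRoot -Path "\\contoso.com\Public" -GrantAdminAccounts "CONTOSO\NamespaceAdmins"

# Revoke the same permission later if needed.
Set-DfsnRoot -Path "\\contoso.com\Public" -RevokeAdminAccounts "CONTOSO\NamespaceAdmins"
```

As the table notes, the delegated account must also be added to the local Administrators group on each namespace server.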
Tuning DFS Namespaces
Article • 07/29/2021

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008

After creating a namespace and adding folders and targets, refer to the following
sections to tune or optimize the way DFS Namespace handles referrals and polls Active
Directory Domain Services (AD DS) for updated namespace data:

Enable Access-Based Enumeration on a Namespace
Enable or Disable Referrals and Client Failback
Change the Amount of Time That Clients Cache Referrals
Set the Ordering Method for Targets in Referrals
Set Target Priority to Override Referral Ordering
Optimize Namespace Polling
Using Inherited Permissions with Access-Based Enumeration

Note

To search for folders or folder targets, select a namespace, click the Search tab,
type your search string in the text box, and then click Search.
Enable or Disable Referrals and Client
Failback
Article • 07/29/2021

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008

A referral is an ordered list of servers that a client computer receives from a domain
controller or namespace server when the user accesses a namespace root or DFS folder
with targets. After the computer receives the referral, the computer attempts to access
the first server in the list. If the server is not available, the client computer attempts to
access the next server. If a server becomes unavailable, you can configure clients to fail
back to the preferred server after it becomes available.

The following sections provide information about how to enable or disable referrals or
enable client failback:

Enable or disable referrals


By disabling a namespace server's or folder target's referral, you can prevent users from
being directed to that namespace server or folder target. This is useful if you need to
temporarily take a server offline for maintenance.

To enable or disable referrals to a folder target, use the following steps:

1. In the DFS Management console tree, under the Namespaces node, click a
folder containing targets, and then click the Folder Targets tab in the Details
pane.
2. Right-click the folder target, and then click either Disable Folder Target or
Enable Folder Target.

To enable or disable referrals to a namespace server, use the following steps:

1. In the DFS Management console tree, select the appropriate namespace and
then click the Namespace Servers tab.
2. Right-click the appropriate namespace server and then click either Disable
Namespace Server or Enable Namespace Server.

 Tip
To enable or disable referrals by using Windows PowerShell, use the Set-DfsnRootTarget -State or Set-DfsnServerConfiguration cmdlets, which were introduced in Windows Server 2012.
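For example, referrals to a namespace server could be disabled before maintenance and re-enabled afterward; the namespace and server paths are placeholder values:

```powershell
# Disable referrals to a namespace server before taking it offline.
Set-DfsnRootTarget -Path "\\contoso.com\Public" -TargetPath "\\FS01\Public" -State Offline

# ...perform maintenance on FS01...

# Re-enable referrals after maintenance completes.
Set-DfsnRootTarget -Path "\\contoso.com\Public" -TargetPath "\\FS01\Public" -State Online
```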

Enable client failback


If a target becomes unavailable, you can configure clients to fail back to the target after
it is restored. For failback to work, client computers must meet the requirements listed
in the following topic: Review DFS Namespaces Client Requirements.

Note

To enable client failback on a namespace root by using Windows PowerShell, use the Set-DfsnRoot cmdlet. To enable client failback on a DFS folder, use the Set-DfsnFolder cmdlet.
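As a sketch of the cmdlets named in the note, client failback could be enabled as follows; the namespace and folder paths are placeholder values:

```powershell
# Enable client failback to preferred targets on the namespace root.
Set-DfsnRoot -Path "\\contoso.com\Public" -EnableTargetFailback $true

# Enable client failback on a specific DFS folder with targets.
Set-DfsnFolder -Path "\\contoso.com\Public\Tools" -EnableTargetFailback $true
```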

To enable client failback for a namespace root


1. Click Start, point to Administrative Tools, and then click DFS Management.

2. In the console tree, under the Namespaces node, right-click a namespace, and
then click Properties.

3. On the Referrals tab, select the Clients fail back to preferred targets check box.

Folders with targets inherit client failback settings from the namespace root. If client
failback is disabled on the namespace root, you can use the following procedure to
enable the client to fail back on a folder with targets.

To enable client failback for a folder with targets
1. Click Start, point to Administrative Tools, and then click DFS Management.

2. In the console tree, under the Namespaces node, right-click a folder with targets,
and then click Properties.

3. On the Referrals tab, click the Clients fail back to preferred targets check box.
Additional References
Tuning DFS Namespaces
Review DFS Namespaces Client Requirements
Delegate Management Permissions for DFS Namespaces
Change the amount of time that clients
cache referrals
Article • 07/29/2021

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008

A referral is an ordered list of targets that a client computer receives from a domain
controller or namespace server when the user accesses a namespace root or folder with
targets in the namespace. You can adjust how long clients cache a referral before
requesting a new one.

To change the amount of time that clients cache namespace root referrals
1. Click Start, point to Administrative Tools, and then click DFS Management.

2. In the console tree, under the Namespaces node, right-click a namespace, and
then click Properties.

3. On the Referrals tab, in the Cache duration (in seconds) text box, type the amount
of time (in seconds) that clients cache namespace root referrals. The default setting
is 300 seconds (five minutes).

Tip

To change the amount of time that clients cache namespace root referrals by using Windows PowerShell, use the Set-DfsnRoot cmdlet with the TimeToLiveSec parameter. This cmdlet was introduced in Windows Server 2012.

To change the amount of time that clients cache folder referrals
1. Click Start , point to Administrative Tools, and then click DFS Management.
2. In the console tree, under the Namespaces node, right-click a folder that has
targets, and then click Properties.

3. On the Referrals tab, in the Cache duration (in seconds) text box, type the amount
of time (in seconds) that clients cache folder referrals. The default setting is 1800
seconds (30 minutes).

Additional References
Tuning DFS Namespaces
Delegate Management Permissions for DFS Namespaces
Set the Ordering Method for Targets in
Referrals
Article • 07/29/2021

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008

A referral is an ordered list of targets that a client computer receives from a domain
controller or namespace server when the user accesses a namespace root or folder with
targets. After the client receives the referral, the client attempts to access the first target
in the list. If the target is not available, the client attempts to access the next target.
Targets on the client's site are always listed first in a referral. Targets outside of the
client's site are listed according to the ordering method.

Use the following sections to specify in what order targets should be referred to clients
and to understand the different methods of ordering target referrals:

To set the ordering method for targets in namespace root referrals
Use the following procedure to set the ordering method on the namespace root:

1. Click Start, point to Administrative Tools, and then click DFS Management.

2. In the console tree, under the Namespaces node, right-click a namespace, and
then click Properties.

3. On the Referrals tab, select an ordering method.

Note

To use Windows PowerShell to set the ordering method for targets in namespace
root referrals, use the Set-DfsnRoot cmdlet with one of the following parameters:

EnableSiteCosting specifies the Lowest cost ordering method.
EnableInsiteReferrals specifies the Exclude targets outside of the client's site ordering method.
Omitting either parameter specifies the Random order referral ordering method.

The DFSN Windows PowerShell module was introduced in Windows Server 2012.
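As a sketch of the parameters listed in the note, the ordering method could be set as follows; the namespace and folder paths are placeholder values:

```powershell
# Use the "Lowest cost" ordering method for targets outside the
# client's site on the namespace root.
Set-DfsnRoot -Path "\\contoso.com\Public" -EnableSiteCosting $true

# On a folder, exclude targets outside of the client's site entirely.
Set-DfsnFolder -Path "\\contoso.com\Public\Tools" -EnableInsiteReferrals $true
```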

To set the ordering method for targets in folder referrals
Folders with targets inherit the ordering method from the namespace root. You can
override the ordering method by using the following procedure:

1. Click Start, point to Administrative Tools, and then click DFS Management.

2. In the console tree, under the Namespaces node, right-click a folder with targets,
and then click Properties.

3. On the Referrals tab, select the Exclude targets outside of the client's site check
box.

Note

To use Windows PowerShell to exclude folder targets outside of the client's site, use the Set-DfsnFolder cmdlet with the EnableInsiteReferrals parameter.

Target referral ordering methods


The three ordering methods are:

Random order
Lowest cost
Exclude targets outside of the client's site

Random order
In this method, targets are ordered as follows:

1. Targets in the same Active Directory Domain Services (AD DS) site as the client are listed in random order at the top of the referral.
2. Targets outside of the client's site are listed in random order.
If no same-site target servers are available, the client computer is referred to a random
target server regardless of how expensive the connection is or how distant the target.

Lowest cost
In this method, targets are ordered as follows:

1. Targets in the same site as the client are listed in random order at the top of the
referral.
2. Targets outside of the client's site are listed in order of lowest cost to highest cost.
Referrals with the same cost are grouped together, and the targets are listed in
random order within each group.

Note

Site link costs are not shown in the DFS Management snap-in. To view site link
costs, use the Active Directory Sites and Services snap-in.

Exclude targets outside of the client's site


In this method, the referral contains only the targets that are in the same site as the
client. These same-site targets are listed in random order. If no same-site targets exist,
the client does not receive a referral and cannot access that portion of the namespace.

Note

Targets that have target priority set to "First among all targets" or "Last among all
targets" are still listed in the referral, even if the ordering method is set to Exclude
targets outside of the client's site.

Additional References
Tuning DFS Namespaces
Delegate Management Permissions for DFS Namespaces
Set target priority to override referral
ordering
Article • 07/29/2021

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008

A referral is an ordered list of targets that a client computer receives from a domain
controller or namespace server when the user accesses a namespace root or folder with
targets in the namespace. Each target in a referral is ordered according to the ordering
method for the namespace root or folder.

To refine how targets are ordered, you can set priority on individual targets. For
example, you can specify that the target is first among all targets, last among all targets,
or first or last among all targets of equal cost.

To set target priority on a root target for a domain-based namespace
To set target priority on a root target for a domain-based namespace, use the following
procedure:

1. Click Start, point to Administrative Tools, and then click DFS Management.

2. In the console tree, under the Namespaces node, click the domain-based
namespace for the root targets for which you want to set priority.

3. In the Details pane, on the Namespace Servers tab, right-click the root target with
the priority that you want to change, and then click Properties.

4. On the Advanced tab, click Override referral ordering, and then click the priority
you want.

First among all targets: Specifies that users should always be referred to this target if the target is available.
Last among all targets: Specifies that users should never be referred to this target unless all other targets are unavailable.
First among targets of equal cost: Specifies that users should be referred to this target before other targets of equal cost (which usually means other targets in the same site).
Last among targets of equal cost: Specifies that users should never be referred to this target if there are other targets of equal cost available (which usually means other targets in the same site).

To set target priority on a folder target


To set target priority on a folder target, use the following procedure:

1. Click Start, point to Administrative Tools, and then click DFS Management.

2. In the console tree, under the Namespaces node, click the folder of the targets for
which you want to set priority.

3. In the Details pane, on the Folder Targets tab, right-click the folder target with the
priority that you want to change, and then click Properties .

4. On the Advanced tab, click Override referral ordering and then click the priority
that you want.

Note

To set target priorities by using Windows PowerShell, use the Set-DfsnRootTarget and Set-DfsnFolderTarget cmdlets with the ReferralPriorityClass and ReferralPriorityRank parameters. These cmdlets were introduced in Windows Server 2012.
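As a sketch of the parameters named in the note, target priority could be set as follows; the namespace, server, and folder paths are placeholder values:

```powershell
# Make one root target "First among all targets" for every client.
Set-DfsnRootTarget -Path "\\contoso.com\Public" -TargetPath "\\FS01\Public" `
    -ReferralPriorityClass GlobalHigh

# Rank a folder target within its group of equal-cost targets
# (lower rank values are listed earlier in the referral).
Set-DfsnFolderTarget -Path "\\contoso.com\Public\Tools" -TargetPath "\\FS02\Tools" `
    -ReferralPriorityClass SiteCostNormal -ReferralPriorityRank 0
```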

Additional References
Tuning DFS Namespaces
Delegate Management Permissions for DFS Namespaces
Optimize Namespace Polling
Article • 07/29/2021

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008

To maintain a consistent domain-based namespace across namespace servers, it is necessary for namespace servers to periodically poll Active Directory Domain Services (AD DS) to obtain the most current namespace data.

To optimize namespace polling


Use the following procedure to optimize how namespace polling occurs:

1. Click Start, point to Administrative Tools, and then click DFS Management.

2. In the console tree, under the Namespaces node, right-click a domain-based


namespace, and then click Properties .

3. On the Advanced tab, select whether you want the namespace optimized for
consistency or scalability.

Choose Optimize for consistency if there are 16 or fewer namespace servers hosting the namespace.
Choose Optimize for scalability if there are more than 16 namespace servers. This reduces the load on the Primary Domain Controller (PDC) Emulator, but increases the time required for changes to the namespace to replicate to all namespace servers. Until changes replicate to all servers, users might have an inconsistent view of the namespace.

Note

To set the namespace polling mode by using Windows PowerShell, use the
Set-DfsnRoot cmdlet with the EnableRootScalability parameter, which was
introduced in Windows Server 2012.
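For example, a minimal sketch of switching a namespace to the scalability mode (the namespace root path is hypothetical):

```powershell
# $true corresponds to "Optimize for scalability";
# $false corresponds to "Optimize for consistency".
Set-DfsnRoot -Path '\\contoso.office\Public' -EnableRootScalability $true
```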

Additional References
Tuning DFS Namespaces
Delegate Management Permissions for DFS Namespaces
Enable access-based enumeration on a namespace
Article • 07/29/2021

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008

Access-based enumeration hides files and folders that users do not have permissions to
access. By default, this feature is not enabled for DFS namespaces. You can enable
access-based enumeration of DFS folders by using DFS Management. To control access-
based enumeration of files and folders in folder targets, you must enable access-based
enumeration on each shared folder by using Share and Storage Management.

To enable access-based enumeration on a namespace, all namespace servers must be
running Windows Server 2008 or newer. Additionally, domain-based namespaces must
use the Windows Server 2008 mode. For information about the requirements of the
Windows Server 2008 mode, see Choose a Namespace Type.

In some environments, enabling access-based enumeration can cause high CPU
utilization on the server and slow response times for users.

Note

If you upgrade the domain functional level to Windows Server 2008 while there are
existing domain-based namespaces, DFS Management will allow you to enable
access-based enumeration on these namespaces. However, you will not be able to
edit permissions to hide folders from any groups or users unless you migrate the
namespaces to the Windows Server 2008 mode. For more information, see Migrate
a Domain-based Namespace to Windows Server 2008 Mode.

To use access-based enumeration with DFS Namespaces, you must follow these steps:

Enable access-based enumeration on a namespace
Control which users and groups can view individual DFS folders

Warning

Access-based enumeration does not prevent users from getting a referral to a
folder target if they already know the DFS path. Only the share permissions or the
NTFS file system permissions of the folder target (shared folder) itself can prevent
users from accessing a folder target. DFS folder permissions are used only for
displaying or hiding DFS folders, not for controlling access, making Read access the
only relevant permission at the DFS folder level. For more information, see Using
Inherited Permissions with Access-Based Enumeration.

You can enable access-based enumeration on a namespace either by using the Windows
interface or by using a command line.

To enable access-based enumeration by using the Windows interface
1. In the console tree, under the Namespaces node, right-click the appropriate
namespace and then click Properties.

2. Click the Advanced tab and then select the Enable access-based enumeration for
this namespace check box.

To enable access-based enumeration by using a command line
1. Open a command prompt window on a server that has the Distributed File System
role service or Distributed File System Tools feature installed.

2. Type the following command, where <namespace_root> is the root of the
namespace:

dfsutil property abe enable \\<namespace_root>

Tip

To manage access-based enumeration on a namespace by using Windows
PowerShell, use the Set-DfsnRoot, Grant-DfsnAccess, and Revoke-DfsnAccess
cmdlets. The DFSN Windows PowerShell module was introduced in Windows Server
2012.
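For example, a minimal sketch of enabling access-based enumeration with Set-DfsnRoot (the namespace root path is hypothetical):

```powershell
# Enable ABE for the whole namespace.
Set-DfsnRoot -Path '\\contoso.office\Public' -EnableAccessBasedEnumeration $true
```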
You can control which users and groups can view individual DFS folders either by using
the Windows interface or by using a command line.

To control folder visibility by using the Windows interface
1. In the console tree, under the Namespaces node, locate the folder with targets for
which you want to control visibility, right-click it and then click Properties.

2. Click the Advanced tab.

3. Click Set explicit view permissions on the DFS folder and then Configure view
permissions.

4. Add or remove groups or users by clicking Add or Remove.

5. To allow users to see the DFS folder, select the group or user, and then select the
Allow check box.

To hide the folder from a group or user, select the group or user, and then select
the Deny check box.

To control folder visibility by using a command line
1. Open a Command Prompt window on a server that has the Distributed File
System role service or Distributed File System Tools feature installed.

2. Type the following command, where <DFSPath> is the path of the DFS folder
(link), <DOMAIN\Account> is the name of the group or user account, and (...) is
replaced with additional Access Control Entries (ACEs):

dfsutil property sd grant <DFSPath> DOMAIN\Account:R (...) Protect Replace

For example, to replace existing permissions with permissions that allow the
Domain Admins and CONTOSO\Trainers groups Read (R) access to the
\\contoso.office\public\training folder, type the following command:

dfsutil property sd grant \\contoso.office\public\training
"CONTOSO\Domain Admins":R CONTOSO\Trainers:R Protect Replace

3. To perform additional tasks from the command prompt, use the following
commands:

Command Description

Dfsutil property sd deny Denies a group or user the ability to view the folder.

Dfsutil property sd reset Removes all permissions from the folder.

Dfsutil property sd revoke Removes a group or user ACE from the folder.
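The DFSN PowerShell module offers rough equivalents of these dfsutil operations; a sketch with hypothetical paths and account names:

```powershell
# Grant view access to a group, list the resulting view permissions,
# then revoke access for another group (all names are examples).
Grant-DfsnAccess  -Path '\\contoso.office\public\training' -AccountName 'CONTOSO\Trainers'
Get-DfsnAccess    -Path '\\contoso.office\public\training'
Revoke-DfsnAccess -Path '\\contoso.office\public\training' -AccountName 'CONTOSO\Interns'
```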

Additional References
Create a DFS Namespace
Delegate Management Permissions for DFS Namespaces
Installing DFS
Using Inherited Permissions with Access-Based Enumeration
Using inherited permissions with Access-based Enumeration
Article • 07/29/2021

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008

By default, the permissions used for a DFS folder are inherited from the local file system
of the namespace server. The permissions are inherited from the root directory of the
system drive and grant the DOMAIN\Users group Read permissions. As a result, even
after enabling access-based enumeration, all folders in the namespace remain visible to
all domain users.

Advantages and limitations of inherited permissions
There are two primary benefits to using inherited permissions to control which users can
view folders in a DFS namespace:

You can quickly apply inherited permissions to many folders without having to use
scripts.
You can apply inherited permissions to namespace roots and folders without
targets.

Despite the benefits, inherited permissions in DFS Namespaces have many limitations
that make them inappropriate for most environments:

Modifications to inherited permissions are not replicated to other namespace
servers. Therefore, use inherited permissions only on stand-alone namespaces or in
environments where you can implement a third-party replication system to keep
the Access Control Lists (ACLs) on all namespace servers synchronized.
DFS Management and Dfsutil cannot view or modify inherited permissions.
Therefore, you must use Windows Explorer or the Icacls command in addition to
DFS Management or Dfsutil to manage the namespace.
When using inherited permissions, you cannot modify the permissions of a folder
with targets except by using the Dfsutil command. DFS Namespaces automatically
removes permissions from folders with targets set using other tools or methods.
If you set permissions on a folder with targets while you are using inherited
permissions, the ACL that you set on the folder with targets combines with
inherited permissions from the folder's parent in the file system. You must examine
both sets of permissions to determine what the net permissions are.

Note

When using inherited permissions, it is simplest to set permissions on namespace
roots and folders without targets. Then use inherited permissions on folders with
targets so that they inherit all permissions from their parents.

Using inherited permissions

To limit which users can view a DFS folder, you must perform one of the following tasks:

Set explicit permissions for the folder, disabling inheritance. To set explicit
permissions on a folder with targets (a link) using DFS Management or the Dfsutil
command, see Enable Access-Based Enumeration on a Namespace.
Modify inherited permissions on the parent in the local file system. To modify the
permissions inherited by a folder with targets, if you have already set explicit
permissions on the folder, switch to inherited permissions from explicit
permissions, as discussed in the following procedure. Then use Windows Explorer
or the Icacls command to modify the permissions of the folder from which the
folder with targets inherits its permissions.

Note

Access-based enumeration does not prevent users from obtaining a referral to a
folder target if they already know the DFS path of the folder with targets.
Permissions set using Windows Explorer or the Icacls command on namespace
roots or folders without targets control whether users can access the DFS folder or
namespace root. However, they do not prevent users from directly accessing a
folder with targets. Only the share permissions or the NTFS file system permissions
of the shared folder itself can prevent users from accessing folder targets.

To switch from explicit permissions to inherited permissions
1. In the console tree, under the Namespaces node, locate the folder with targets
whose visibility you want to control, right-click the folder and then click Properties.

2. Click the Advanced tab.

3. Click Use inherited permissions from the local file system and then click OK in the
Confirm Use of Inherited Permissions dialog box. Doing this removes all explicitly
set permissions on this folder, restoring inherited NTFS permissions from the local
file system of the namespace server.

4. To change the inherited permissions for folders or namespace roots in a DFS
namespace, use Windows Explorer or the ICacls command.

Additional References
Create a DFS Namespace
DFS Replication overview
Article • 03/28/2023

Applies To: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008

Distributed File System Replication, or DFS Replication, is a role service in Windows
Server that enables you to efficiently replicate folders across multiple servers and sites.
You can replicate all types of folders, including folders referred to by a DFS namespace
path.

DFS Replication is an efficient, multiple-master replication engine that you can use to
keep folders synchronized between servers across limited bandwidth network
connections. The service replaces the File Replication Service (FRS) as the replication
engine for DFS namespaces.

Tip

Consider using Azure File Sync to reduce your on-premises storage footprint.
Azure File Sync can keep multiple Windows file servers in sync. Each server needs to
only keep a cache on-premises while the full copy of the data is in the cloud. Azure
File Sync also provides the benefit of cloud backups with integrated snapshots. For
more information, see Planning for an Azure File Sync deployment.

DFS Replication uses a compression algorithm known as remote differential compression,
or RDC. RDC detects changes to the data in a file and enables DFS Replication to
replicate only the changed file blocks instead of the entire file.

Active Directory Domain Services (AD DS) uses DFS Replication to replicate the sysvol
folder in domains that use the Windows Server 2008 or later domain functional level. For
more information about replicating the sysvol folder by using DFS Replication, see
Migrate the sysvol folder replication to DFS Replication.

Understand replication groups


To use DFS Replication, you create replication groups and add replicated folders to the
groups. Replicated folders are stored on servers in the group, which are referred to as
members. DFS Replication establishes connections between the members of a group.
The following figure illustrates the relationship between a replication group, the
members in the group, and the replicated folders.

A replicated folder stays synchronized on each member in a group. In the figure, there
are two replicated folders: Projects and Proposals. As the data changes in each
replicated folder, the changes are replicated across connections between the members
of the replication group. The connections between all members form the replication
topology.

Creating multiple replicated folders in a single replication group simplifies the process
of deploying replicated folders. The topology, schedule, and bandwidth throttling for
the replication group are applied to each replicated folder. To deploy more replicated
folders, you can run the Dfsradmin.exe tool or use a wizard to define the local path and
permissions for the new replicated folder.

Each replicated folder has unique settings, such as file and subfolder filters. The settings
let you filter out different files and subfolders for each replicated folder.

The replicated folders stored on each member can be located on different volumes in
the member, and the replicated folders don't need to be shared folders or part of a
namespace. However, the DFS Management snap-in simplifies sharing replicated folders
and optionally publishing them in an existing namespace.
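The concepts above (a group, its members, a replicated folder, and connections) map directly onto the DFSR PowerShell module introduced in Windows Server 2012 R2. A minimal sketch, assuming hypothetical server names and content paths:

```powershell
# Create a replication group with one replicated folder and two members.
New-DfsReplicationGroup -GroupName 'RG01'
New-DfsReplicatedFolder -GroupName 'RG01' -FolderName 'Projects'
Add-DfsrMember     -GroupName 'RG01' -ComputerName 'SRV01', 'SRV02'
Add-DfsrConnection -GroupName 'RG01' -SourceComputerName 'SRV01' `
                   -DestinationComputerName 'SRV02'

# Set the local content path on each member; SRV01 is the primary member
# whose content wins during initial synchronization.
Set-DfsrMembership -GroupName 'RG01' -FolderName 'Projects' -ComputerName 'SRV01' `
                   -ContentPath 'E:\Projects' -PrimaryMember $true -Force
Set-DfsrMembership -GroupName 'RG01' -FolderName 'Projects' -ComputerName 'SRV02' `
                   -ContentPath 'E:\Projects' -Force
```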

Deploy and manage DFS Replication


DFS Replication is a part of the File and Storage Services role for Windows Server. The
management tools for DFS (DFS Management, the DFS Replication module for Windows
PowerShell, and command-line tools) are installed separately as part of the Remote
Server Administration Tools (RSAT).

You can install DFS Replication by using Server Manager, Windows PowerShell, or
Windows Admin Center.
You can administer DFS Replication by using DFS Management, the dfsradmin and
dfsrdiag commands, or scripts that call WMI.

Deployment requirements
Before you can deploy DFS Replication, you must configure your servers as follows:

Confirm file system and volume format. Determine the folders that you want to
replicate, and identify any folders located on volumes that are formatted with the
NTFS file system. DFS Replication doesn't support the Resilient File System (ReFS)
or the FAT file system. DFS Replication also doesn't support replicating content
stored on Cluster Shared Volumes.

Verify antivirus compatibility. Contact your antivirus software vendor to confirm
your antivirus software is compatible with DFS Replication.

Update AD DS schema. Update the AD DS schema to include Windows Server
2003 R2 or later schema additions. You can't use read-only replicated folders with
the Windows Server 2003 R2 or older schema versions.

Prepare replication group servers. Install DFS Replication on all servers that you
plan to use as members of a replication group.

Check forest location. Ensure all servers in a replication group are located in the
same forest. You can't enable replication across servers in different forests.

Interoperability with Azure virtual machines


DFS Replication on an Azure virtual machine is a verified scenario for Windows Server.
However, there are some limitations and requirements for this implementation.

Snapshots and saved states. Don't use snapshots or saved states to restore a
server that's running DFS Replication for anything other than the sysvol
folder. If you attempt this type of restore, DFS Replication fails, and the
restoration requires special database recovery steps. Also, don't export, clone, or
copy the virtual machines. For more information, see DFSR no longer replicates files
after restoring a virtualized server's snapshot and Safely virtualizing DFSR.

DFS Replication backups. To back up data in a replicated folder that's stored in a
virtual machine, use backup software that's located on the guest virtual machine.
Don't back up or restore a virtualized DFS Replication server from the host virtual
machine.
Domain controller access. DFS Replication requires access to physical or
virtualized domain controllers. The DFS Replication service can't communicate
directly with Azure Active Directory.

VPN connection. DFS Replication requires a VPN connection between your on-
premises replication group members and any members hosted in Azure virtual
machines. You also need to configure the on-premises router (such as Forefront
Threat Management Gateway) to allow the RPC Endpoint Mapper (port 135) and a
randomly assigned port between 49152 and 65535 to pass over the VPN
connection. You can use the Set-DfsrServiceConfiguration cmdlet or the dfsrdiag
command-line tool to specify a static port instead of the random port. For more
information about how to specify a static port for DFS Replication, see Set-
DfsrServiceConfiguration. For information about related ports to open for
managing Windows Server, see Service overview and network port requirements
for Windows.
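For example, a sketch of pinning DFS Replication to a static RPC port so only two known inbound ports need to traverse the VPN (the port number is an arbitrary example):

```powershell
# Pin the DFS Replication service to a fixed RPC port instead of a dynamic one.
Set-DfsrServiceConfiguration -RPCPort 5722

# Restart the service so the new port takes effect.
Restart-Service -Name DFSR
```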

To learn how to get started with Azure virtual machines, visit the Microsoft Azure
website.

Install DFS Replication from Server Manager


To install DFS Replication by using Server Manager, follow these steps:

1. Open Server Manager.

2. Select Manage, and then select Add Roles and Features. The Add Roles and
Features wizard opens.

3. Under Server Selection, select the server, or the virtual hard disk (VHD) of an
offline virtual machine, where you want to install DFS Replication.

4. To install the DFS Replication service, go to Server Roles.

5. Expand File and Storage Services > File and iSCSI Services, and then select DFS
Replication.

6. To install the DFS Management Tools, go to Features.

a. Expand Remote Server Administration Tools, Role Administration Tools, and
then expand File Services Tools.

b. Select DFS Management Tools.


The DFS Management Tools option installs the DFS Management snap-in, the DFS
Replication and DFS Namespaces modules for Windows PowerShell, and
command-line tools. The option doesn't install any DFS services on the server.

Install DFS Replication from PowerShell


To install DFS Replication by using Windows PowerShell, follow these steps:

1. Open a Windows PowerShell session with elevated user rights.

2. Enter the following command to install the desired RSAT role services or features
to support DFS Replication.

For the <name> parameter, enter the names of the RSAT role services or
features that you want to install. You can install one or multiple services and
features in a single command. The table lists the names of the relevant RSAT role
services and features.

PowerShell

Install-WindowsFeature <name>

RSAT role service/feature Value for <name> parameter

DFS Replication FS-DFS-Replication

DFS Management Tools RSAT-DFS-Mgmt-Con

To install the DFS Replication service only, enter the following command:

PowerShell

Install-WindowsFeature "FS-DFS-Replication"

To install both the DFS Replication service and the DFS Management Tools,
enter the following command:

PowerShell

Install-WindowsFeature "FS-DFS-Replication", "RSAT-DFS-Mgmt-Con"
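After installation, you can confirm the state of both components with a quick check:

```powershell
# Verify that the role service and management tools are installed.
Get-WindowsFeature -Name 'FS-DFS-Replication', 'RSAT-DFS-Mgmt-Con' |
    Format-Table Name, InstallState
```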

Related links
DFS Namespaces and DFS Replication overview
Checklist: Deploy DFS Replication
Checklist: Manage DFS Replication
Deploying DFS Replication
Troubleshooting DFS Replication
Migrate SYSVOL replication to DFS Replication
Article • 04/25/2023

Applies To: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, and
Windows Server 2008

Domain controllers use a special shared folder named SYSVOL to replicate sign-in
scripts and Group Policy object files to other domain controllers. Windows 2000 Server
and Windows Server 2003 use the File Replication Service (FRS) to replicate SYSVOL.
Windows Server 2008 uses the newer Distributed File System Replication (DFS
Replication) service for domains that use the Windows Server 2008 domain functional
level. Windows Server 2008 uses FRS for domains that run older domain functional
levels.

To use DFS Replication to replicate the SYSVOL folder, you can create a new domain that
uses the Windows Server 2008 domain functional level. You can also use the procedure
described in this article to upgrade an existing domain and migrate replication to DFS
Replication.

Prerequisites
This article assumes you have a basic knowledge of Active Directory Domain Services
(AD DS), FRS, and Distributed File System Replication (DFS Replication). For more
information, see:

Active Directory Domain Services overview


FRS overview
Overview of DFS Replication

Printable download
To download a printable version of this guide, go to SYSVOL Replication Migration
Guide: FRS to DFS Replication .

Migration topics
The SYSVOL migration guide provides topics that describe a range of concepts and
procedures for moving from the use of FRS to the use of DFS Replication. Use the
following list to access articles about migrating the SYSVOL folder to use DFS
Replication.

Concepts
Review these concepts about SYSVOL migration states for a basic understanding of the
migration tasks.

SYSVOL migration conceptual information


SYSVOL migration states
Overview of the SYSVOL migration procedure

Procedures
Follow these SYSVOL migration procedures to move the SYSVOL folder through each of
the migration states.

SYSVOL migration procedure


Migrating to the Prepared state
Migrating to the Redirected state
Migrating to the Eliminated state

Troubleshooting
Access these articles that describe known issues and provide troubleshooting guidance.

Troubleshooting SYSVOL migration


Troubleshooting SYSVOL migration issues
Rolling back SYSVOL migration to a previous stable state

References
The following resources offer supplemental reference information:

SYSVOL migration reference information


Supported SYSVOL migration scenarios
Verifying the state of SYSVOL migration
Dfsrmig
SYSVOL migration tool actions
Related links
SYSVOL Migration Series: Part 1 – Introduction to the SYSVOL migration process
SYSVOL Migration Series: Part 2 – Dfsrmig.exe: The SYSVOL migration tool
SYSVOL Migration Series: Part 3 - Migrating to the 'PREPARED' state
SYSVOL Migration Series: Part 4 – Migrating to the 'REDIRECTED' state
SYSVOL Migration Series: Part 5 – Migrating to the 'ELIMINATED' state
Distributed File System step-by-step guide for Windows Server 2008
FRS technical reference
Use Robocopy to pre-seed files for DFS Replication
Article • 12/23/2021

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008

This topic explains how to use the command-line tool, Robocopy.exe, to pre-seed files
when setting up replication for Distributed File System (DFS) Replication (also known as
DFSR or DFS-R) in Windows Server. By pre-seeding files before you set up DFS
Replication, add a new replication partner, or replace a server, you can speed up initial
synchronization and enable cloning of the DFS Replication database in Windows Server
2012 R2. The Robocopy method is one of several pre-seeding methods; for an overview,
see Step 1: Pre-seed files for DFS Replication.

The Robocopy (Robust File Copy) command-line utility is included with Windows Server.
The utility provides extensive options that include copying security, backup API support,
retry capabilities, and logging. Later versions include multi-threading and un-buffered
I/O support.

Important

Robocopy does not copy exclusively locked files. If users tend to lock many files for
long periods on your file servers, consider using a different pre-seeding method.
Pre-seeding does not require a perfect match between file lists on the source and
destination servers, but the more files that do not exist when initial synchronization
is performed for DFS Replication, the less effective pre-seeding is. To minimize lock
conflicts, use Robocopy during non-peak hours for your organization. Always
examine the Robocopy logs after pre-seeding to ensure that you understand which
files were skipped because of exclusive locks.

To use Robocopy to pre-seed files for DFS Replication, follow these steps:

1. Download and install the latest version of Robocopy.
2. Stabilize files that will be replicated.
3. Copy the replicated files to the destination server.
Prerequisites
Because pre-seeding does not directly involve DFS Replication, you only need to meet
the requirements for performing a file copy with Robocopy.

You need an account that's a member of the local Administrators group on both
the source and destination servers.

Install the most recent version of Robocopy on the server that you will use to copy
the files (either the source server or the destination server); you will need to install
the most recent version for that operating system version. For instructions, see Step
1: Download and install the latest version of Robocopy. Unless you are pre-seeding
files from a server running Windows Server 2003 R2, you can run Robocopy on
either the source or destination server. The destination server, which typically has
the more recent operating system version, gives you access to the most recent
version of Robocopy.

Ensure that sufficient storage space is available on the destination drive. Do not
create a folder on the path that you plan to copy to: Robocopy must create the
root folder.

Note

When you decide how much space to allocate for the pre-seeded files,
consider expected data growth over time and storage requirements for DFS
Replication. For planning help, see Edit the Quota Size of the Staging Folder
and Conflict and Deleted Folder in Managing DFS Replication.

On the source server, optionally install Process Monitor or Process Explorer, which
you can use to check for applications that are locking files. For download
information, see Process Monitor and Process Explorer.

Step 1: Download and install the latest version of Robocopy
Before you use Robocopy to pre-seed files, you should download and install the latest
version of Robocopy.exe. This ensures that DFS Replication doesn't skip files because of
issues within Robocopy's shipping versions.

The source for the latest compatible Robocopy version depends on the version of
Windows Server that is running on the server. For information about downloading the
hotfix with the most recent version of Robocopy for Windows Server 2008 R2 or
Windows Server 2008, see List of currently available hotfixes for Distributed File System
(DFS) technologies in Windows Server 2008 and in Windows Server 2008 R2 .

Alternatively, you can locate and install the latest hotfix for an operating system by
taking the following steps.

Locate and install the latest version of Robocopy for a specific version of Windows
Server
1. In a web browser, open https://support.microsoft.com .

2. In Search Support, enter the following string, replacing <operating system
version> with the appropriate operating system, then press the Enter key:

robocopy.exe kbqfe "<operating system version>"

For example, enter robocopy.exe kbqfe "Windows Server 2008 R2".

3. Locate and download the hotfix with the highest ID number (that is, the latest
version).

4. Install the hotfix on the server.

Step 2: Stabilize files that will be replicated

After you install the latest version of Robocopy on the server, you should prevent locked
files from blocking copying by using the methods described in the following table. Most
applications do not exclusively lock files. However, during normal operations, a small
percentage of files might be locked on file servers.

Source of the lock: Users remotely open files on shares.
Explanation: Employees connect to a standard file server and edit documents,
multimedia content, or other files. Sometimes referred to as the traditional home
folder or shared data workloads.
Mitigation: Only perform Robocopy operations during off-peak, non-business hours.
This minimizes the number of files that Robocopy must skip during pre-seeding.

Consider temporarily setting Read-only access on the file shares that will be
replicated by using the Windows PowerShell Grant-SmbShareAccess and
Close-SmbSession cmdlets. If you set permissions for a common group such as
Everyone or Authenticated Users to READ, standard users might be less likely to
open files with exclusive locks (if their applications detect the Read-only access
when files are opened).

You might also consider setting a temporary firewall rule for SMB port 445 inbound
to that server to block access to files, or use the Block-SmbShareAccess cmdlet.
However, both of these methods are very disruptive to user operations.

Source of the lock: Applications open files locally.
Explanation: Application workloads running on a file server sometimes lock files.
Mitigation: Temporarily disable or uninstall the applications that are locking files.
You can use Process Monitor or Process Explorer to determine which applications
are locking files. To download Process Monitor or Process Explorer, visit the Process
Monitor and Process Explorer pages.
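A sketch of the Read-only mitigation using the SMB cmdlets mentioned above (the share name is hypothetical; this disrupts users who are writing to the share, so test it first):

```powershell
# Temporarily reduce a share to Read access during pre-seeding.
Grant-SmbShareAccess -Name 'RF01' -AccountName 'Authenticated Users' `
                     -AccessRight Read -Force

# Disconnect existing sessions so clients reconnect with the reduced access.
Get-SmbSession | Close-SmbSession -Force
```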

Step 3: Copy the replicated files to the destination server

After you minimize locks on the files that will be replicated, you can pre-seed the files
from the source server to the destination server.

Note

You can run Robocopy on either the source computer or the destination computer.
The following procedure describes running Robocopy on the destination server,
which typically is running a more recent operating system, to take advantage of any
additional Robocopy capabilities that the more recent operating system might
provide.

To pre-seed the replicated files onto the destination server with Robocopy
1. Sign in to the destination server with an account that's a member of the local
Administrators group on both the source and destination servers.

2. Open an elevated command prompt.

3. To pre-seed the files from the source to destination server, run the following
command, substituting your own source, destination, and log file paths for the
bracketed values:

PowerShell

robocopy "<source replicated folder path>" "<destination replicated
folder path>" /e /b /copyall /r:6 /w:5 /MT:64 /xd DfsrPrivate /tee
/log:<log file path> /v

This command copies all contents of the source folder to the destination folder,
with the following parameters:

Parameter Description

"<source replicated folder path>" Specifies the source folder to pre-seed on the
destination server.

"<destination replicated folder path>" Specifies the path to the folder that will store
the pre-seeded files. The destination folder must not already exist on the
destination server. To get matching file hashes, Robocopy must create the root
folder when it pre-seeds the files.

/e Copies subdirectories and their files, as well as empty subdirectories.

/b Copies files in Backup mode.

/copyall Copies all file information, including data, attributes, time stamps, the
NTFS access control list (ACL), owner information, and auditing information.

/r:6 Retries the operation six times when an error occurs.

/w:5 Waits 5 seconds between retries.

/MT:64 Copies 64 files simultaneously.

/xd DfsrPrivate Excludes the DfsrPrivate folder.

/tee Writes status output to the console window, as well as to the log file.

/log:<log file path> Specifies the log file to write. Overwrites the file's existing
contents. (To append the entries to the existing log file, use /log+:<log file path>.)

/v Produces verbose output that includes skipped files.

For example, the following command replicates files from the source replicated
folder, E:\RF01, to data drive D on the destination server:

PowerShell

robocopy.exe "\\srv01\e$\rf01" "d:\rf01" /e /b /copyall /r:6 /w:5 /MT:64 /xd DfsrPrivate /tee /log:c:\temp\pre-seedsrv02.log

Note

We recommend that you use the parameters described above when you use
Robocopy to pre-seed files for DFS Replication. However, you can change
some of their values or add additional parameters. For example, you might
find out through testing that you have the capacity to set a higher value
(thread count) for the /MT parameter. Also, if you'll primarily replicate larger
files, you might be able to increase copy performance by adding the /j option
for unbuffered I/O. For more information about Robocopy parameters, see
the Robocopy command-line reference.

Warning

To avoid potential data loss when you use Robocopy to pre-seed files for DFS
Replication, do not make the following changes to the recommended
parameters:

Do not use the /mir parameter (that mirrors a directory tree) or the /mov
parameter (that moves the files, then deletes them from the source).
Do not remove the /e, /b, and /copyall options.

4. After copying completes, examine the log for any errors or skipped files. Use
Robocopy to copy any skipped files individually instead of recopying the entire set
of files. If files were skipped because of exclusive locks, either try copying
individual files with Robocopy later, or accept that those files will require over-the-
wire replication by DFS Replication during initial synchronization.

Next step
After you complete the initial copy, and use Robocopy to resolve issues with as many
skipped files as possible, you will use the Get-DfsrFileHash cmdlet in Windows
PowerShell or the Dfsrdiag command to validate the pre-seeded files by comparing file
hashes on the source and destination servers. For detailed instructions, see Step 2:
Validate pre-seeded Files for DFS Replication.
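As an illustration of that validation step, the DFSR PowerShell module's Get-DfsrFileHash cmdlet can be used to gather hashes from both servers and compare them. This is a sketch only: the server names, folder paths, and the FileHash property used for comparison are placeholder assumptions for your environment.

```powershell
# Collect DFSR-compatible hashes from the source and destination replicated
# folders (UNC paths are placeholders for your own servers).
$sourceHashes      = Get-DfsrFileHash -Path "\\SRV01\E$\RF01\*"
$destinationHashes = Get-DfsrFileHash -Path "\\SRV02\D$\RF01\*"

# Any hash present on only one side points to a file that did not pre-seed
# correctly and will need over-the-wire replication.
Compare-Object -ReferenceObject $sourceHashes `
               -DifferenceObject $destinationHashes `
               -Property FileHash
```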
DFS Replication FAQ

Updated: January 30, 2019

Applies To: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows
Server 2008


This FAQ answers questions about Distributed File System (DFS) Replication (also known
as DFS-R or DFSR) for Windows Server.

For information about DFS Namespaces, see DFS Namespaces: Frequently Asked
Questions.

For information about what's new in DFS Replication, see the following topics:

DFS Namespaces and DFS Replication Overview (in Windows Server 2012)

What's New in Distributed File System topic in Changes in Functionality from
Windows Server 2008 to Windows Server 2008 R2

Distributed File System topic in Changes in Functionality from Windows Server
2003 with SP1 to Windows Server 2008

For a list of recent changes to this topic, see the Change history section of this topic.

Interoperability
Can DFS Replication communicate with FRS?
No. DFS Replication does not communicate with File Replication Service (FRS). DFS
Replication and FRS can run on the same server at the same time, but they must never
be configured to replicate the same folders or subfolders because doing so can cause
data loss.

Can DFS Replication replace FRS for SYSVOL replication?
Yes, DFS Replication can replace FRS for SYSVOL replication on servers running Windows
Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, or Windows Server
2008. Servers running Windows Server 2003 R2 don't support using DFS Replication to
replicate the SYSVOL folder.

For more information about replicating SYSVOL by using DFS Replication, see
Migrate SYSVOL replication to DFS Replication.

Can I upgrade from FRS to DFS Replication without losing configuration settings?
Yes. To migrate replication from FRS to DFS Replication, see the following documents:

To migrate replication of folders other than the SYSVOL folder, see DFS Operations
Guide: Migrating from FRS to DFS Replication and FRS2DFSR – An FRS to DFSR
Migration Utility (https://go.microsoft.com/fwlink/?LinkID=195437 ).

To migrate replication of the SYSVOL folder to DFS Replication, see Migrate
SYSVOL replication to DFS Replication.

Can I use DFS Replication in a mixed Windows/UNIX environment?
Yes. Although DFS Replication only supports replicating content between servers
running Windows Server, UNIX clients can access file shares on the Windows servers. To
do so, install Services for Network File Systems (NFS) on the DFS Replication server.

You can also use the SMB/CIFS client functionality included in many UNIX clients to
directly access the Windows file shares, although this functionality is often limited or
requires modifications to the Windows environment (such as disabling SMB Signing by
using Group Policy).

DFS Replication interoperates with NFS on a server running a Windows Server operating
system, but you can't replicate an NFS mount point.

Can I use the Volume Shadow Copy Service with DFS Replication?
Yes. DFS Replication is supported on Volume Shadow Copy Service (VSS) volumes and
previous snapshots can be restored successfully with the Previous Versions Client.
Can I use Windows Backup (Ntbackup.exe) to
remotely back up a replicated folder?
No, using Windows Backup (Ntbackup.exe) on a computer running Windows
Server 2003 or earlier to back up the contents of a replicated folder on a computer
running Windows Server 2012, Windows Server 2008 R2, or Windows Server 2008 isn't
supported.

To back up files that are stored in a replicated folder, use Windows Server Backup or
Microsoft® System Center Data Protection Manager. For information about Backup and
Recovery functionality in Windows Server 2008 R2 and Windows Server 2008, see
Backup and Recovery. For more information, see System Center Data Protection
Manager (https://go.microsoft.com/fwlink/?LinkId=182261 ).

Do file system policies impact DFS Replication?


Yes. Don't configure file system policies on replicated folders. The file system policy
reapplies NTFS permissions at every Group Policy refresh interval. This can result in
sharing violations because an open file isn't replicated until the file is closed.

Does DFS Replication replicate mailboxes hosted on Microsoft Exchange Server?
No. DFS Replication can't be used to replicate mailboxes hosted on Microsoft Exchange
Server.

Does DFS Replication support file screens created by File Server Resource Manager?
Yes. However, the File Server Resource Manager (FSRM) file screening settings must
match on both ends of the replication. In addition, DFS Replication has its own filter
mechanism for files and folders that you can use to exclude certain files and file types
from replication.

The following are best practices for implementing file screens or quotas:

The hidden DfsrPrivate folder must not be subject to quotas or file screens.

Screened files must not exist in any replicated folder before screening is enabled.

No folders may exceed the quota before the quota is enabled.


You must use hard quotas with caution. It's possible for individual members of a
replication group to stay within a quota before replication, but exceed it when files
are replicated. For example, if a user copies a 10 megabyte (MB) file onto server A
(which is then at the hard limit) and another user copies a 5 MB file onto server B,
when the next replication occurs, both servers will exceed the quota by 5
megabytes. This can cause DFS Replication to continually retry replicating the files,
causing holes in the version vector and possible performance problems.

Is DFS Replication cluster aware?


Yes, DFS Replication in Windows Server 2012 R2, Windows Server 2012 and Windows
Server 2008 R2 includes the ability to add a failover cluster as a member of a replication
group. For more information, see Add a Failover Cluster to a Replication Group
(https://go.microsoft.com/fwlink/?LinkId=155085 ). The DFS Replication service on
versions of Windows prior to Windows Server 2008 R2 isn't designed to coordinate with
a failover cluster, and the service won't fail over to another node.

Note

DFS Replication doesn't support replicating files on Cluster Shared Volumes.

Is DFS Replication compatible with Data Deduplication?
Yes, DFS Replication can replicate folders on volumes that use Data Deduplication in
Windows Server.

Is DFS Replication compatible with RIS and WDS?
Yes. DFS Replication replicates volumes on which Single Instance Storage (SIS) is
enabled. SIS is used by Remote Installation Services (RIS), Windows Deployment Services
(WDS), and Windows Storage Server.

Is it possible to use DFS Replication with Offline Files?
You can safely use DFS Replication and Offline Files together in scenarios when there's
only one user at a time who writes to the files. This is useful for users who travel
between two branch offices and want to be able to access their files at either branch or
while offline. Offline Files caches the files locally for offline use and DFS Replication
replicates the data between each branch office.

Don't use DFS Replication with Offline Files in a multi-user environment because DFS
Replication doesn't provide any distributed locking mechanism or file checkout
capability. If two users modify the same file at the same time on different servers, DFS
Replication moves the older file to the DfsrPrivate\ConflictandDeleted folder (located
under the local path of the replicated folder) during the next replication.

What antivirus applications are compatible with DFS Replication?

Antivirus applications can cause excessive replication if their scanning activities alter the
files in a replicated folder. For more information, see Testing Antivirus Application
Interoperability with DFS Replication (https://go.microsoft.com/fwlink/?LinkId=73990 ).

What are the benefits of using DFS Replication instead of Windows SharePoint Services?
Windows® SharePoint® Services provides tight coherency in the form of file check-out
functionality that DFS Replication doesn't. If you're concerned about multiple people
editing the same file, we recommend using Windows SharePoint Services. Windows
SharePoint Services 2.0 with Service Pack 2 is available as part of Windows
Server 2003 R2. Windows SharePoint Services can be downloaded from the Microsoft
Web site; it isn't included in newer versions of Windows Server. However, if you're
replicating data across multiple sites and users won't edit the same files at the same
time, DFS Replication provides greater bandwidth and simpler management.

Limitations and requirements

Can DFS Replication replicate between branch offices without a VPN connection?
Yes—assuming that there's a private Wide Area Network (WAN) link (not the Internet)
connecting the branch offices. However, you must open the proper ports in external
firewalls. DFS Replication uses the RPC Endpoint Mapper (port 135) and a randomly
assigned ephemeral port above 1024. You can use the Dfsrdiag command line tool to
specify a static port instead of the ephemeral port. For more information about how to
specify the RPC Endpoint Mapper, see article 154596 in the Microsoft Knowledge Base
(https://go.microsoft.com/fwlink/?LinkId=73991 ).
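For example, pinning DFS Replication to a static port looks roughly like this; the port number 5722 and the member name are placeholders, and the commands assume an elevated prompt on a server with the DFSR role:

```powershell
# Assign a static RPC port for DFS Replication on member SRV01, then restart
# the DFSR service so the setting takes effect. Repeat on each member and open
# the chosen port in the firewalls between sites.
dfsrdiag StaticRPC /Port:5722 /Member:SRV01
Restart-Service -Name DFSR
```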

Can DFS Replication replicate files encrypted with the Encrypting File System?
No. DFS Replication won't replicate files or folders that are encrypted using the
Encrypting File System (EFS). If a user encrypts a file that was previously replicated, DFS
Replication deletes the file from all other members of the replication group. This ensures
that the only available copy of the file is the encrypted version on the server.

Can DFS Replication replicate Outlook .pst or Microsoft Office Access database files?
DFS Replication can safely replicate Microsoft Outlook personal folder files (.pst) and
Microsoft Access files only if they are stored for archival purposes and are not accessed
across the network by using a client such as Outlook or Access (to open .pst or Access
files, first copy the files to a local storage device). The reasons for this are as follows:

Opening .pst files over network connections could lead to data corruption in the
.pst files. For more information about why .pst files cannot be safely accessed from
across a network, see article 297019 in the Microsoft Knowledge Base
(https://go.microsoft.com/fwlink/?LinkId=125363 ).

.pst and Access files tend to stay open for long periods of time while being
accessed by a client such as Outlook or Office Access. This prevents DFS
Replication from replicating these files until they are closed.

Can I use DFS Replication in a workgroup?


No. DFS Replication relies on Active Directory® Domain Services for configuration. It will
only work in a domain.

Can more than one folder be replicated on a single server?
Yes. DFS Replication can replicate numerous folders between servers. Ensure that each of
the replicated folders has a unique root path and that they do not overlap. For example,
D:\Sales and D:\Accounting can be the root paths for two replicated folders, but D:\Sales
and D:\Sales\Reports cannot be the root paths for two replicated folders.
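A quick way to sanity-check two candidate roots for nesting is a prefix comparison; the helper below is hypothetical (not part of any DFSR tooling) and the paths mirror the example above:

```powershell
# Returns $true when $child is nested under $root. The trailing backslash
# avoids false matches such as D:\Sales2 appearing to be under D:\Sales.
function Test-NestedRoot([string]$root, [string]$child) {
    $normalized = $root.TrimEnd('\') + '\'
    $child.TrimEnd('\').StartsWith($normalized, [System.StringComparison]::OrdinalIgnoreCase)
}

Test-NestedRoot "D:\Sales" "D:\Accounting"     # valid pair of roots
Test-NestedRoot "D:\Sales" "D:\Sales\Reports"  # invalid: overlapping roots
```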

Does DFS Replication require DFS Namespaces?


No. DFS Replication and DFS Namespaces can be used separately or together. In
addition, DFS Replication can be used to replicate standalone DFS namespaces, which
was not possible with FRS.

Does DFS Replication require time synchronization between servers?
No. DFS Replication does not explicitly require time synchronization between servers.
However, DFS Replication does require that the server clocks match closely. The server
clocks must be set within five minutes of each other (by default) for Kerberos
authentication to function properly. For example, DFS Replication uses time stamps to
determine which file takes precedence in the event of a conflict. Accurate times are also
important for garbage collection, schedules, and other features.

Does DFS Replication support replicating an entire volume?
Yes. However, replicating an entire volume can cause the following problems:

If the volume contains a Windows paging file, replication fails and logs DFSR event
4312 in the system event log.

DFS Replication sets the System and Hidden attributes on the replicated folder on
the destination server(s). This occurs because Windows applies the System and
Hidden attributes to the volume root folder by default. If the local path of the
replicated folder on the destination server(s) is also a volume root, no further
changes are made to the folder attributes.

When replicating a volume that contains the Windows system folder, DFS
Replication recognizes the %WINDIR% folder and does not replicate it. However,
DFS Replication does replicate folders used by non-Microsoft applications, which
might cause the applications to fail on the destination server(s) if the applications
have interoperability issues with DFS Replication.

Does DFS Replication support RPC over HTTP?


No.

Does DFS Replication work across wireless networks?
Yes. DFS Replication is independent of the connection type.

Does DFS Replication work on ReFS or FAT volumes?
No. DFS Replication supports volumes formatted with the NTFS file system only; the
Resilient File System (ReFS) and the FAT file system are not supported. DFS Replication
requires NTFS because it uses the NTFS change journal and other features of the NTFS
file system.

Does DFS Replication work with sparse files?


Yes. You can replicate sparse files. The Sparse attribute is preserved on the receiving
member.

Do I need to log in as administrator to replicate files?
No. DFS Replication is a service that runs under the local system account, so you do not
need to log in as administrator to replicate. However, you must be a domain
administrator or local administrator of the affected file servers to make changes to the
DFS Replication configuration.

For more information, see "DFS Replication security requirements and delegation" in
Delegate the Ability to Manage DFS Replication (https://go.microsoft.com/fwlink/?LinkId=182294 ).

How can I upgrade or replace a DFS Replication member?
To upgrade or replace a DFS Replication member, see this blog post on the Ask the
Directory Services Team blog: Replacing DFSR Member Hardware or OS.
Is DFS Replication suitable for replicating roaming profiles?
Yes. Certain scenarios are supported when replicating roaming user profiles. For
information about the supported scenarios, see Microsoft's Support Statement Around
Replicated User Profile Data (https://go.microsoft.com/fwlink/?LinkId=201282 ).

Is there a file character limit or limit to the folder depth?
Windows and DFS Replication support folder paths with up to 32 thousand characters.
DFS Replication is not limited to folder paths of 260 characters.

Must members of a replication group reside in the same domain?
No. Replication groups can span across domains within a single forest but not across
different forests.

What are the supported limits of DFS Replication?

The following list provides a set of scalability guidelines that have been tested by
Microsoft and apply to Windows Server 2012 R2, Windows Server 2016, and Windows
Server 2019:

Size of all replicated files on a server: 100 terabytes.

Number of replicated files on a volume: 70 million.

Maximum file size: 250 gigabytes.

Important

When creating replication groups with a large number or size of files, we
recommend exporting a database clone and using pre-seeding techniques to
minimize the duration of initial replication. For more information, see DFS
Replication Initial Sync in Windows Server 2012 R2: Attack of the Clones.
The following list provides a set of scalability guidelines that have been tested by
Microsoft on Windows Server 2012, Windows Server 2008 R2, and Windows
Server 2008:

Size of all replicated files on a server: 10 terabytes.

Number of replicated files on a volume: 11 million.

Maximum file size: 64 gigabytes.

Note

There is no longer a limit to the number of replication groups, replicated folders,
connections, or replication group members.

For a list of scalability guidelines that have been tested by Microsoft for Windows
Server 2003 R2, see DFS Replication scalability guidelines
(https://go.microsoft.com/fwlink/?LinkId=75043 ).

When should I not use DFS Replication?


Do not use DFS Replication in an environment where multiple users update or modify
the same files simultaneously on different servers. Doing so can cause DFS Replication
to move conflicting copies of the files to the hidden DfsrPrivate\ConflictandDeleted
folder.

When multiple users need to modify the same files at the same time on different
servers, use the file check-out feature of Windows SharePoint Services to ensure that
only one user is working on a file. Windows SharePoint Services 2.0 with Service Pack 2
is available as part of Windows Server 2003 R2. Windows SharePoint Services can be
downloaded from the Microsoft Web site; it is not included in newer versions of
Windows Server.

Why is a schema update required for DFS Replication?
DFS Replication uses new objects in the domain-naming context of Active Directory
Domain Services to store configuration information. These objects are created when you
update the Active Directory Domain Services schema. For more information, see Review
Requirements for DFS Replication (https://go.microsoft.com/fwlink/?LinkId=182264 ).
Monitoring and management tools

Can I automate the health report to receive warnings?
Yes. There are three ways to automate health reports:

Use the DFSR Windows PowerShell module included in Windows Server 2012 R2 or
DfsrAdmin.exe in conjunction with Scheduled Tasks to regularly generate health
reports. For more information, see Automating DFS Replication Health Reports
(https://go.microsoft.com/fwlink/?LinkId=74010 ).

Use the DFS Replication Management Pack for System Center Operations Manager
to create alerts that are based on specified conditions.

Use the DFS Replication WMI provider to script alerts.
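As a sketch of the first option, the DFSR module's Write-DfsrHealthReport cmdlet can be run from a scheduled task; the replication group name, server names, and output folder below are placeholders:

```powershell
# Generate an HTML health report for replication group RG01, comparing the
# listed members against SRV01 as the reference member. Schedule this script
# with Task Scheduler to produce reports on a regular cadence.
Write-DfsrHealthReport -GroupName "RG01" `
                       -ReferenceComputerName "SRV01" `
                       -MemberComputerName "SRV01","SRV02" `
                       -Path "C:\DfsrReports"
```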

Can I use Microsoft System Center Operations Manager to monitor DFS Replication?
Yes. For more information, see the DFS Replication Management Pack for System Center
Operations Manager 2007 in the Microsoft Download Center
(https://go.microsoft.com/fwlink/?LinkId=182265 ).

Does DFS Replication support remote management?
Yes. DFS Replication supports remote management using the DFS Management console
and the Add Replication Group command. For example, on server A, you can connect to
a replication group defined in the forest with servers A and B as members.

DFS Management is included with Windows Server 2012 R2, Windows Server 2012,
Windows Server 2008 R2, Windows Server 2008, and Windows Server 2003 R2. To
manage DFS Replication from other versions of Windows, use Remote Desktop or the
Remote Server Administration Tools for Windows 7.

Important

To view or manage replication groups that contain read-only replicated folders or
members that are failover clusters, you must use the version of DFS Management
that is included with Windows Server 2012 R2, Windows Server 2012, Windows
Server 2008 R2, the Remote Server Administration Tools for Windows 8, or the
Remote Server Administration Tools for Windows 7.

Do Ultrasound and Sonar work with DFS Replication?
No. DFS Replication has its own set of monitoring and diagnostics tools. Ultrasound and
Sonar are only capable of monitoring FRS.

How can files be recovered from the ConflictAndDeleted or PreExisting folders?
To recover lost files, restore the files from the file system folder or shared folder using
File History, the Restore previous versions command in File Explorer, or by restoring the
files from backup. To recover files directly from the ConflictAndDeleted or PreExisting
folder, use the Get-DfsrPreservedFiles and Restore-DfsrPreservedFiles Windows
PowerShell cmdlets (included with the DFSR module in Windows Server 2012 R2), or the
RestoreDFSR sample script from the MSDN Code Gallery. This script is intended only
for disaster recovery and is provided AS-IS, without warranty.
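As a rough sketch of the cmdlet-based recovery, with a placeholder manifest path and recovery folder (restoring to a separate folder rather than the original location avoids overwriting live data):

```powershell
# Inspect what DFS Replication preserved in the ConflictAndDeleted folder by
# reading its manifest (the path is a placeholder for your replicated folder).
$manifest = "D:\RF01\DfsrPrivate\ConflictAndDeletedManifest.xml"
Get-DfsrPreservedFiles -Path $manifest

# Copy (rather than move) the preserved files into a separate recovery folder.
Restore-DfsrPreservedFiles -Path $manifest -RestoreToPath "D:\Recovered" -CopyFiles
```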

Is there a way to know the state of replication?


Yes. There are a number of ways to monitor replication:

DFS Replication has a management pack for System Center Operations Manager
that provides proactive monitoring.

DFS Management has an in-box diagnostic report for the replication backlog,
replication efficiency, and the number of files and folders in a given replication
group.

The DFSR Windows PowerShell module in Windows Server 2012 R2 contains
cmdlets for starting propagation tests and writing propagation and health reports.
For more information, see Distributed File System Replication Cmdlets in Windows
PowerShell.

Dfsrdiag.exe is a command-line tool that can generate a backlog count or trigger a
propagation test. Both show the state of replication. Propagation shows you if files
are being replicated to all nodes. Backlog shows you how many files still need to
replicate before two computers are in sync. The backlog count is the number of
updates that a replication group member has not processed. On computers
running Windows Server 2012 R2, Windows Server 2012 or Windows
Server 2008 R2, Dfsrdiag.exe can also display the updates that DFS Replication is
currently replicating.

Scripts can use WMI to collect backlog information—manually or through MOM.
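For example, a backlog count for one direction of a single connection can be requested with Dfsrdiag; the replication group, folder, and server names below are placeholders:

```powershell
# Count the updates SRV01 still needs to send to SRV02 for replicated folder
# RF01 in replication group RG01. A steadily shrinking backlog means the two
# members are converging.
dfsrdiag Backlog /RGName:"RG01" /RFName:"RF01" /SMem:SRV01 /RMem:SRV02
```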

Performance

Does DFS Replication support dial-up connections?
Although DFS Replication will work at dial-up speeds, it can get backlogged if there are
large numbers of changes to replicate. If small changes are made to existing files, DFS
Replication with Remote Differential Compression (RDC) will provide a much higher
performance than copying the file directly.

Does DFS Replication perform bandwidth sensing?
No. DFS Replication does not perform bandwidth sensing. You can configure DFS
Replication to use a limited amount of bandwidth on a per-connection basis (bandwidth
throttling). However, DFS Replication does not further reduce bandwidth utilization if
the network interface becomes saturated, and DFS Replication can saturate the link for
short periods. Bandwidth throttling with DFS Replication is not completely accurate
because DFS Replication throttles bandwidth by throttling RPC calls. As a result, various
buffers in lower levels of the network stack (including RPC) may interfere, causing bursts
of network traffic.

Does DFS Replication throttle bandwidth per schedule, per server, or per connection?
If you configure bandwidth throttling when specifying the schedule, all connections for
that replication group will use that setting for bandwidth throttling. Bandwidth
throttling can be also set as a connection-level setting using DFS Management.
Does DFS Replication use Active Directory Domain Services to calculate site links and connection costs?
No. DFS Replication uses the topology defined by the administrator, which is
independent of Active Directory Domain Services site costing.

How can I improve replication performance?


To learn about different methods of tuning replication performance, see Tuning
Replication Performance in DFSR on the Ask the Directory Services Team blog.

How does DFS Replication avoid saturating a connection?
In DFS Replication you set the maximum bandwidth you want to use on a connection,
and the service maintains that level of network usage. This is different from the
Background Intelligent Transfer Service (BITS), and DFS Replication does not saturate the
connection if you set it appropriately.

Nonetheless, the bandwidth throttling is not 100% accurate and DFS Replication can
saturate the link for short periods of time. This is because DFS Replication throttles
bandwidth by throttling RPC calls. Because this process relies on various buffers in lower
levels of the network stack, including RPC, the replication traffic tends to travel in bursts
which may at times saturate the network links.

DFS Replication in Windows Server 2008 includes several performance enhancements, as
discussed in Distributed File System, a topic in Changes in Functionality from Windows
Server 2003 with SP1 to Windows Server 2008.

How does DFS Replication performance compare with FRS?
DFS Replication is much faster than FRS, particularly when small changes are made to
large files and RDC is enabled. For example, with RDC, a small change to a 2 MB
PowerPoint® presentation can result in only 60 kilobytes (KB) being sent across the
network—a 97 percent savings in bytes transferred.

RDC is not used on files smaller than 64 KB and might not be beneficial on high-speed
LANs where network bandwidth is not contended. RDC can be disabled on a per-
connection basis using DFS Management.

How frequently does DFS Replication replicate data?
Data replicates according to the schedule you set. For example, you can set the schedule
to 15-minute intervals, seven days a week. During these intervals, replication is enabled.
Replication starts soon after a file change is detected (generally within seconds).

The replication group schedule may be set to Coordinated Universal Time (UTC) while the
connection schedule is set to the local time of the receiving member. Take this into
account when the replication group spans multiple time zones. Local time means the
time of the member hosting the inbound connection. The displayed schedule of the
inbound connection and the corresponding outbound connection reflect time zone
differences when the schedule is set to local time.

How much of my server's system resources will DFS Replication consume?
DFS Replication consume?
The disk, memory, and CPU resources used by DFS Replication depend on a number of
factors, including the number and size of the files, rate of change, number of replication
group members, and number of replicated folders. In addition, some resources are
harder to estimate. For example, the Extensible Storage Engine (ESE) technology used
for the DFS Replication database can consume a large percentage of available memory,
which it releases on demand. Applications other than DFS Replication can be hosted on
the same server depending on the server configuration. However, when hosting multiple
applications or server roles on a single server, it is important that you test this
configuration before implementing it in a production environment.

What happens if a WAN link fails during replication?
replication?
If the connection goes down, DFS Replication will keep trying to replicate while the
schedule is open. There will also be connectivity errors noted in the DFS Replication
event log that can be harvested using MOM (proactively through alerts) and the DFS
Replication Health Report (reactively, such as when an administrator runs it).

Remote Differential Compression details


What is RDC?
Remote differential compression (RDC) is a client-server protocol that can be used to
efficiently update files over a limited-bandwidth network. RDC detects insertions,
removals, and rearrangements of data in files, enabling DFS Replication to replicate only
the changes when files are updated. RDC is used only for files that are 64 KB or larger by
default. RDC can use an older version of a file with the same name in the replicated
folder or in the DfsrPrivate\ConflictandDeleted folder (located under the local path of
the replicated folder).

When is RDC used for replication?


RDC is used when the file exceeds a minimum size threshold. This size threshold is 64 KB
by default. After a file exceeding that threshold has been replicated, updated versions of
the file always use RDC, unless a large portion of the file is changed or RDC is disabled.

Which editions of the Windows operating system support cross-file RDC?
To use cross-file RDC, one member of the replication connection must be running an
edition of the Windows operating system that supports cross-file RDC. The following
table shows which editions of the Windows operating system support cross-file RDC.

Cross-file RDC availability in editions of the Windows operating system

Operating System Version    Standard Edition    Enterprise Edition    Datacenter Edition
Windows Server 2012 R2      Yes*                Not available         Yes*
Windows Server 2012         Yes                 Not available         Yes
Windows Server 2008 R2      No                  Yes                   Yes
Windows Server 2008         No                  Yes                   No
Windows Server 2003 R2      No                  Yes                   No

* You can optionally disable cross-file RDC on Windows Server 2012 R2.

Are changes compressed before being replicated?
replicated?
Yes. Changed portions of files are compressed before being sent for all file types except
the following (which are already compressed): .wma, .wmv, .zip, .jpg, .mpg, .mpeg, .m1v,
.mp2, .mp3, .mpa, .cab, .wav, .snd, .au, .asf, .wm, .avi, .z, .gz, .tgz, and .frx. Compression
settings for these file types are not configurable in Windows Server 2003 R2.

Can an administrator turn off RDC or change the threshold?
threshold?
Yes. You can turn off RDC through the property page of a given connection. Disabling
RDC can reduce CPU utilization and replication latency on fast local area network (LAN)
links that have no bandwidth constraints or for replication groups that consist primarily
of files smaller than 64 KB. If you choose to disable RDC on a connection, test the
replication efficiency before and after the change to verify that you have improved
replication performance.

You can change the RDC size threshold by using the Dfsradmin Connection Set
command, the DFS Replication WMI Provider, or by manually editing the configuration
XML file.

Does RDC work on all file types?


Yes. RDC computes differences at the block level irrespective of file data type. However,
RDC works more efficiently on certain file types such as Word docs, PST files, and VHD
images.

How does RDC work on a compressed file?


DFS Replication uses RDC, which computes the blocks in the file that have changed and
sends only those blocks over the network. DFS Replication does not need to know
anything about the contents of the file—only which blocks have changed.

Is cross-file RDC enabled when upgrading to Windows Server Enterprise Edition or Datacenter Edition?
Not every edition of Windows Server supports cross-file RDC; for example, the Standard
editions of Windows Server 2008 R2 and earlier do not. However, cross-file RDC is
automatically enabled when you upgrade to an edition that supports it, or if a member
of the replication connection is running a supported edition. For a list of editions that
support cross-file RDC, see Which editions of the Windows operating system support
cross-file RDC?

Is RDC true block-level replication?


No. RDC is a general-purpose protocol for compressing file transfer. DFS Replication
uses RDC on blocks at the file level, not at the disk block level. RDC divides a file into
blocks. For each block in a file, it calculates a signature, which is a small number of bytes
that can represent the larger block. The set of signatures is transferred from server to
client. The client compares the server signatures to its own. The client then requests the
server send only the data for signatures that are not already on the client.
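The signature-exchange flow described above can be sketched in Python. This is an illustrative approximation only: real RDC computes recursive signatures over content-defined block boundaries, whereas this sketch assumes fixed-size blocks and a SHA-256 hash.

```python
import hashlib

BLOCK_SIZE = 2048  # fixed size is an assumption; RDC chooses boundaries dynamically


def signatures(data: bytes) -> list[str]:
    """Compute a small signature for each block of the file."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]


def blocks_to_request(server_sigs: list[str], client_sigs: list[str]) -> list[int]:
    """The client compares the server's signatures to its own and requests
    only the block indexes whose data it does not already have."""
    have = set(client_sigs)
    return [i for i, sig in enumerate(server_sigs) if sig not in have]


old = b"A" * 4096 + b"B" * 4096          # client's existing copy
new = b"A" * 4096 + b"C" * 4096          # server's updated copy
needed = blocks_to_request(signatures(new), signatures(old))
# only the changed second half of the file needs to be transferred
```

Because only the signatures and the missing blocks cross the wire, an unchanged file costs little more than its signature list.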

What happens if I rename a file?


DFS Replication renames the file on all other members of the replication group during
the next replication. Files are tracked using a unique ID, so renaming a file and moving
the file within the replica has no effect on the ability of DFS Replication to replicate a
file.

What is cross-file RDC?


Cross-file RDC allows DFS Replication to use RDC even when a file with the same name
does not exist at the client end. Cross-file RDC uses a heuristic to determine files that
are similar to the file that needs to be replicated, and uses blocks of the similar files that
are identical to the replicating file to minimize the amount of data transferred over the
WAN. Cross-file RDC can use blocks of up to five similar files in this process.

To use cross-file RDC, one member of the replication connection must be running an
edition of Windows that supports cross-file RDC. For a list of editions that support
cross-file RDC, see Which editions of the Windows operating system support cross-file
RDC?
Replication details
Can I change the path for a replicated folder after it is created?
No. If you need to change the path of a replicated folder, you must delete it in DFS
Management and add it back as a new replicated folder. DFS Replication then uses
Remote Differential Compression (RDC) to perform a synchronization that determines
whether the data is the same on the sending and receiving members. It does not
replicate all the data in the folder again.

Can I configure which file attributes are replicated?
No, you cannot configure which file attributes DFS Replication replicates.

For a list of attribute values and their descriptions, see File Attributes on MSDN
(https://go.microsoft.com/fwlink/?LinkId=182268 ).

The following attribute values are set by using the dwFileAttributes parameter of the
SetFileAttributes function, and they are replicated by DFS Replication. Changes to these attribute values
trigger replication of the attributes. The contents of the file are not replicated unless the
contents change as well. For more information, see SetFileAttributes Function in the
MSDN library (https://go.microsoft.com/fwlink/?LinkId=182269 ).

FILE_ATTRIBUTE_HIDDEN

FILE_ATTRIBUTE_READONLY

FILE_ATTRIBUTE_SYSTEM

FILE_ATTRIBUTE_NOT_CONTENT_INDEXED

FILE_ATTRIBUTE_OFFLINE

The following attribute values are replicated by DFS Replication, but they do not trigger
replication.

FILE_ATTRIBUTE_ARCHIVE

FILE_ATTRIBUTE_NORMAL
The following file attribute values also trigger replication, although they cannot be set
by using the SetFileAttributes function (use the GetFileAttributes function to view
the attribute values).

FILE_ATTRIBUTE_REPARSE_POINT

Note

DFS Replication does not replicate reparse point attribute values unless the reparse
tag is IO_REPARSE_TAG_SYMLINK. Files with the IO_REPARSE_TAG_DEDUP,
IO_REPARSE_TAG_SIS or IO_REPARSE_TAG_HSM reparse tags are replicated as
normal files. However, the reparse tag and reparse data buffers are not replicated
to other servers because the reparse point only works on the local system.

FILE_ATTRIBUTE_COMPRESSED

FILE_ATTRIBUTE_ENCRYPTED

Note

DFS Replication does not replicate files that are encrypted by using the Encrypting
File System (EFS). DFS Replication does replicate files that are encrypted by using
non-Microsoft software, but only if it does not set the FILE_ATTRIBUTE_ENCRYPTED
attribute value on the file.

FILE_ATTRIBUTE_SPARSE_FILE

FILE_ATTRIBUTE_DIRECTORY

DFS Replication does not replicate the FILE_ATTRIBUTE_TEMPORARY value.
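The behavior described in the lists above can be summarized as a small lookup. This Python sketch is purely illustrative; the attribute names are the Win32 constants from the lists, and the three-way classification simply restates the text:

```python
# Replicated, and a change to the attribute triggers replication
TRIGGERS = {
    "FILE_ATTRIBUTE_HIDDEN", "FILE_ATTRIBUTE_READONLY",
    "FILE_ATTRIBUTE_SYSTEM", "FILE_ATTRIBUTE_NOT_CONTENT_INDEXED",
    "FILE_ATTRIBUTE_OFFLINE", "FILE_ATTRIBUTE_REPARSE_POINT",
    "FILE_ATTRIBUTE_COMPRESSED", "FILE_ATTRIBUTE_ENCRYPTED",
    "FILE_ATTRIBUTE_SPARSE_FILE", "FILE_ATTRIBUTE_DIRECTORY",
}
# Replicated, but a change does not by itself trigger replication
REPLICATED_ONLY = {"FILE_ATTRIBUTE_ARCHIVE", "FILE_ATTRIBUTE_NORMAL"}
# Never replicated
IGNORED = {"FILE_ATTRIBUTE_TEMPORARY"}


def replication_effect(attribute: str) -> str:
    """Classify how DFS Replication treats a change to one attribute."""
    if attribute in TRIGGERS:
        return "replicated, triggers replication"
    if attribute in REPLICATED_ONLY:
        return "replicated, does not trigger"
    if attribute in IGNORED:
        return "not replicated"
    return "unknown"
```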

Can I control which member is replicated?


Yes. You can choose a topology when you create a replication group. Or you can select
No topology and manually configure connections after the replication group has been
created.

Can I seed a replication group member with data prior to the initial replication?
Yes. DFS Replication supports copying files to a replication group member before the
initial replication. This "prestaging" can dramatically reduce the amount of data
replicated during the initial replication.

The initial replication does not need to replicate contents when files differ only by real
attributes or time stamps. A real attribute is an attribute that can be set by the Win32
function SetFileAttributes . For more information, see SetFileAttributes Function in the
MSDN library (https://go.microsoft.com/fwlink/?LinkId=182269 ). If two files differ by
other attributes, such as compression, then the contents of the file are replicated.
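The rule in this paragraph can be expressed as a small predicate. The Python sketch below is an illustration under stated assumptions: the attribute-name strings and the helper function are hypothetical, not DFSR APIs.

```python
# Attributes settable through SetFileAttributes ("real" attributes)
REAL_ATTRIBUTES = {
    "FILE_ATTRIBUTE_HIDDEN", "FILE_ATTRIBUTE_READONLY",
    "FILE_ATTRIBUTE_SYSTEM", "FILE_ATTRIBUTE_NOT_CONTENT_INDEXED",
    "FILE_ATTRIBUTE_OFFLINE", "FILE_ATTRIBUTE_ARCHIVE",
    "FILE_ATTRIBUTE_NORMAL", "FILE_ATTRIBUTE_TEMPORARY",
}


def initial_replication_sends_content(differences: set[str]) -> bool:
    """Content transfer is skipped when two files differ only by real
    attributes or time stamps; any other difference (such as compression)
    forces the file contents to be replicated."""
    return not differences <= (REAL_ATTRIBUTES | {"timestamp"})
```

This is why careful prestaging pays off: copies that differ only in the ways the function ignores cost nothing during initial replication.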

To prestage a replication group member, copy the files to the appropriate folder on the
destination server(s), create the replication group, and then choose a primary member.
Choose the member that has the most up-to-date files that you want to replicate
because the primary member's content is considered "authoritative." This means that
during initial replication, the primary member's files will always overwrite other versions
of the files on other members of the replication group.

For information about pre-seeding and cloning the DFSR database, see DFS Replication
Initial Sync in Windows Server 2012 R2: Attack of the Clones .

For more information about the initial replication, see Create a Replication Group.

Does DFS Replication overcome common File Replication Service issues?
Yes. DFS Replication overcomes three common FRS issues:

Journal wraps: DFS Replication recovers from journal wraps on the fly. Each existing
file or folder will be marked as journalWrap and verified against the file system
before replication is enabled again. During the recovery, this volume is not
available for replication in either direction.

Excessive replication: To prevent excessive replication, DFS Replication uses a
system of credits.

Morphed folders: To prevent morphed folder names, DFS Replication stores
conflicting data in a hidden DfsrPrivate\ConflictandDeleted folder (located under
the local path of the replicated folder). For example, creating multiple folders
simultaneously with identical names on different servers replicated using FRS
causes FRS to rename the older folder(s). DFS Replication instead moves the older
folder(s) to the local Conflict and Deleted folder.
Does DFS Replication replicate files in chronological order?
No. Files may be replicated out of order.

Does DFS Replication replicate files that are being used by another application?
If an application opens a file and creates a file lock on it (preventing it from being used
by other applications while it is open), DFS Replication will not replicate the file until it is
closed. If the application opens the file with read-share access, the file can still be
replicated.

Does DFS Replication replicate NTFS file permissions, alternate data streams, hard links, and reparse points?
DFS Replication replicates NTFS file permissions and alternate data streams.

Microsoft does not support creating NTFS hard links to or from files in a replicated
folder – doing so can cause replication issues with the affected files. Hard link files
are ignored by DFS Replication and are not replicated. Junction points also are not
replicated, and DFS Replication logs event 4406 for each junction point it
encounters.

The only reparse points replicated by DFS Replication are those that use the
IO_REPARSE_TAG_SYMLINK tag; however, DFS Replication does not guarantee that
the target of a symlink is also replicated. For more information, see the Ask the
Directory Services Team blog.

Files with the IO_REPARSE_TAG_DEDUP, IO_REPARSE_TAG_SIS, or
IO_REPARSE_TAG_HSM reparse tags are replicated as normal files. The reparse tag
and reparse data buffers are not replicated to other servers because the reparse
point only works on the local system. As such, DFS Replication can replicate folders
on volumes that use Data Deduplication in Windows Server 2012, or Single
Instance Storage (SIS); however, data deduplication information is maintained
separately by each server on which the role service is enabled.
Does DFS Replication replicate timestamp changes if no other changes are made to the file?
No, DFS Replication does not replicate files for which the only change is a change to the
timestamp. Additionally, the changed timestamp is not replicated to other members of
the replication group unless other changes are made to the file.

Does DFS Replication replicate updated permissions on a file or folder?
Yes. DFS Replication replicates permission changes for files and folders. Only the part of
the file associated with the Access Control List (ACL) is replicated, although DFS
Replication must still read the entire file into the staging area.

Note

Changing ACLs on a large number of files can have an impact on replication
performance. However, when using RDC, the amount of data transferred is
proportionate to the size of the ACLs, not the size of the entire file. The amount of
disk traffic is still proportional to the size of the files because the files must be read
to and from the staging folder.

Does DFS Replication support merging text files in the event of a conflict?
DFS Replication does not merge files when there is a conflict. However, it does attempt
to preserve the older version of the file in the hidden DfsrPrivate\ConflictandDeleted
folder on the computer where the conflict was detected.

Does DFS Replication use encryption when transmitting data?
Yes. DFS Replication uses Remote Procedure Call (RPC) connections with encryption.

Is it possible to disable the use of encrypted RPC?
No. The DFS Replication service uses remote procedure calls (RPC) over TCP to replicate
data. To secure data transfers across the Internet, the DFS Replication service is designed
to always use the authentication-level constant, RPC_C_AUTHN_LEVEL_PKT_PRIVACY . This
ensures that the RPC communication across the Internet is always encrypted. Therefore,
it is not possible to disable the use of encrypted RPC by the DFS Replication service.

For more information, see the following Microsoft Web sites:

RPC Technical Reference

About Remote Differential Compression

Authentication-Level Constants

How are simultaneous replications handled?


There is one update manager per replicated folder. Update managers work
independently of one another.

By default, a maximum of 16 (four in Windows Server 2003 R2) concurrent downloads
are shared among all connections and replication groups. Because connections and
replication group updates are not serialized, there is no specific order in which updates
are received. If two schedules are opened, updates are generally received and installed
from both connections at the same time.

How do I force replication or polling?


You can force replication immediately by using DFS Management, as described in Edit
Replication Schedules. You can also force replication by using the
Sync-DfsReplicationGroup cmdlet, included in the DFSR PowerShell module introduced with
Windows Server 2012 R2, or the Dfsrdiag SyncNow command. You can force polling by
using the Update-DfsrConfigurationFromAD cmdlet, or the Dfsrdiag PollAD command.

Is it possible to configure a quiet time between replications for files that change frequently?
No. If the schedule is open, DFS Replication will replicate changes as it notices them.
There is no way to configure a quiet time for files.

Is it possible to configure one-way replication with DFS Replication?
Yes. If you are using Windows Server 2012 or Windows Server 2008 R2, you can create a
read-only replicated folder that replicates content through a one-way connection. For
more information, see Make a Replicated Folder Read-Only on a Particular Member
(https://go.microsoft.com/fwlink/?LinkId=156740 ).

We do not support creating a one-way replication connection with DFS Replication in
Windows Server 2008 or Windows Server 2003 R2. Doing so can cause numerous
problems including health-check topology errors, staging issues, and problems with the
DFS Replication database.

If you are using Windows Server 2008 or Windows Server 2003 R2, you can simulate a
one-way connection by performing the following actions:

Train administrators to make changes only on the server(s) that you want to
designate as primary servers. Then let the changes replicate to the destination
servers.

Configure the share permissions on the destination servers so that end users do
not have Write permissions. If no changes are allowed on the branch servers, then
there is nothing to replicate back, simulating a one-way connection and keeping
WAN utilization low.

Is there a way to force a complete replication of all files including unchanged files?
No. If DFS Replication considers the files identical, it will not replicate them. If changed
files have not been replicated, DFS Replication will automatically replicate them when
configured to do so. To override the configured schedule, use the WMI method
ForceReplicate(). However, this is only a schedule override, and it does not force
replication of unchanged or identical files.

What happens if the primary member suffers a database loss during initial replication?
During initial replication, the primary member's files always take precedence in the
conflict resolution that occurs when receiving members have versions of files that
differ from those on the primary member. The primary member designation is stored in Active Directory
Domain Services, and the designation is cleared after the primary member is ready to
replicate, but before all members of the replication group replicate.

If the initial replication fails or the DFS Replication service restarts during the replication,
the primary member sees the primary member designation in the local DFS Replication
database and retries the initial replication. If the primary member's DFS Replication
database is lost after clearing the primary designation in Active Directory Domain
Services, but before all members of the replication group complete the initial replication,
all members of the replication group fail to replicate the folder because no server is
designated as the primary member. If this happens, use the Dfsradmin membership
/set /isprimary:true command on the primary member server to restore the primary
member designation manually.

For more information about initial replication, see Create a Replication Group.

Warning

The primary member designation is used only during the initial replication process.
If you use the Dfsradmin command to specify a primary member for a replicated
folder after replication is complete, DFS Replication does not designate the server
as a primary member in Active Directory Domain Services. However, if the DFS
Replication database on the server subsequently suffers irreversible corruption or
data loss, the server attempts to perform an initial replication as the primary
member instead of recovering its data from another member of the replication
group. Essentially, the server becomes a rogue primary server, which can cause
conflicts. For this reason, specify the primary member manually only if you are
certain that the initial replication has irretrievably failed.

What happens if the replication schedule closes while a file is being replicated?
If remote differential compression (RDC) is enabled on the connection, inbound
replication of a file larger than 64 KB that began replicating immediately prior to the
schedule closing (or changing to No bandwidth) continues when the schedule opens (or
changes to something other than No bandwidth). The replication continues from the
state it was in when replication stopped.

If RDC is turned off, DFS Replication completely restarts the file transfer. This can delay
when the file is available on the receiving member.

What happens when two users simultaneously update the same file on different servers?
When DFS Replication detects a conflict, it uses the version of the file that was saved
last. It moves the other file into the DfsrPrivate\ConflictandDeleted folder (under the
local path of the replicated folder on the computer that resolved the conflict). It remains
there until Conflict and Deleted folder cleanup, which occurs when the Conflict and
Deleted folder exceeds the configured size or DFS Replication encounters an Out of disk
space error. The Conflict and Deleted folder is not replicated, and this method of conflict
resolution avoids the problem of morphed directories that was possible in FRS.
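A minimal sketch of the last-writer-wins resolution described above, in Python. The FileVersion record and the in-memory "Conflict and Deleted" list are illustrative assumptions, not DFS Replication's actual data structures:

```python
from dataclasses import dataclass


@dataclass
class FileVersion:
    server: str
    content: bytes
    saved_at: float  # timestamp of the last save


def resolve_conflict(a: FileVersion, b: FileVersion):
    """Keep the version saved last; the losing copy is moved aside
    (in DFSR terms, into DfsrPrivate\\ConflictandDeleted) rather than merged."""
    winner, loser = (a, b) if a.saved_at >= b.saved_at else (b, a)
    conflict_and_deleted = [loser]  # retained only until quota-driven cleanup
    return winner, conflict_and_deleted


v1 = FileVersion("SRV1", b"draft 1", saved_at=100.0)
v2 = FileVersion("SRV2", b"draft 2", saved_at=105.0)
winner, conflict_bin = resolve_conflict(v1, v2)
# SRV2 saved last, so its version wins; SRV1's copy goes to the conflict folder
```

Note the design trade-off this models: no merging, no user prompt, and no guarantee the losing copy survives cleanup.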

When a conflict occurs, DFS Replication logs an informational event to the DFS
Replication event log. This event does not require user action for the following reasons:

It is not visible to users (it is visible only to server administrators).

DFS Replication treats the Conflict and Deleted folder as a cache. When a quota
threshold is reached, it cleans out some of those files. There is no guarantee that
conflicting files will be saved.

The conflict could reside on a server different from the origin of the conflict.

Staging
Does DFS Replication continue staging files when replication is disabled by a schedule or bandwidth throttling quota, or when a connection is manually disabled?
No. DFS Replication does not continue to stage files outside of scheduled replication
times, if the bandwidth throttling quota has been exceeded, or when connections are
disabled.

Does DFS Replication prevent other applications from accessing a file during staging?
No. DFS Replication opens files in a way that does not block users or applications from
opening files in the replication folder. This method is known as "opportunistic locking."

Is it possible to change the location of the staging folder with the DFS Management Tool?
Yes. The staging folder location is configured on the Advanced tab of the Properties
dialog box for each member of a replication group.
When are files staged?
Files are staged on the sending member when the receiving member requests the file
(unless the file is 64 KB or smaller) as shown in the following table. If Remote Differential
Compression (RDC) is disabled on the connection, the file is staged unless it is 256 KB or
smaller. Files are also staged on the receiving member as they are transferred if they are
less than 64 KB in size, although you can configure this setting between 16 KB and 1 MB.
If the schedule is closed, files are not staged.

The minimum file sizes for staging files

                   RDC enabled        RDC disabled
Sending member     64 KB              256 KB
Receiving member   64 KB by default   64 KB by default
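The thresholds above can be expressed as a simple predicate. This Python sketch is illustrative only; the function names are invented here, and the receiving-member threshold is the configurable 16 KB to 1 MB setting mentioned in the text:

```python
KB = 1024


def staged_on_sending_member(file_size: int, rdc_enabled: bool) -> bool:
    """The sending member stages a requested file unless it is at or below
    the threshold: 64 KB with RDC enabled, 256 KB with RDC disabled."""
    threshold = 64 * KB if rdc_enabled else 256 * KB
    return file_size > threshold


def staged_on_receiving_member(file_size: int, threshold: int = 64 * KB) -> bool:
    # Files under the (configurable, 16 KB to 1 MB) threshold are staged
    # on the receiving member as they are transferred.
    return file_size < threshold
```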

What happens if a file is changed after it is staged but before it is completely transmitted to the remote site?
If any part of the file is already being transmitted, DFS Replication continues the
transmission. If the file is changed before DFS Replication begins transmitting the file,
then the newer version of the file is sent.

Change history

November 15, 2018 - Updated for Windows Server 2019. (Reason: new operating
system.)

October 9, 2013 - Updated the "What are the supported limits of DFS Replication?"
section with results from tests on Windows Server 2012 R2. (Reason: updates for the
latest version of Windows Server.)

January 30, 2013 - Added the "Does DFS Replication continue staging files when
replication is disabled by a schedule or bandwidth throttling quota, or when a
connection is manually disabled?" entry. (Reason: customer questions.)

October 31, 2012 - Edited the "What are the supported limits of DFS Replication?"
entry to increase the tested number of replicated files on a volume. (Reason:
customer feedback.)

August 15, 2012 - Edited the "Does DFS Replication replicate NTFS file permissions,
alternate data streams, hard links, and reparse points?" entry to further clarify how
DFS Replication handles hard links and reparse points. (Reason: feedback from
Customer Support Services.)

June 13, 2012 - Edited the "Does DFS Replication work on ReFS or FAT volumes?"
entry to add discussion of ReFS. (Reason: customer feedback.)

April 25, 2012 - Edited the "Does DFS Replication replicate NTFS file permissions,
alternate data streams, hard links, and reparse points?" entry to clarify how DFS
Replication handles hard links. (Reason: reduce potential confusion.)

March 30, 2011 - Edited the "Can DFS Replication replicate Outlook .pst or Microsoft
Office Access database files?" entry to correct the potential impact of using DFS
Replication with .pst and Access files. Added "How can I improve replication
performance?" (Reason: customer questions about the previous entry, which
incorrectly indicated that replicating .pst or Access files could corrupt the DFS
Replication database.)

January 26, 2011 - Added "How can files be recovered from the ConflictAndDeleted
or PreExisting folders?" (Reason: customer feedback.)

October 20, 2010 - Added "How can I upgrade or replace a DFS Replication member?"
(Reason: customer feedback.)
How to determine the minimum staging area DFSR needs for a replicated folder
Article • 04/28/2023

This article is a quick reference guide on how to calculate the minimum staging area
needed for DFSR to function properly. Values lower than these may cause replication to
go slowly or stop altogether.

Keep in mind these are minimums only. When considering staging area size, the bigger
the staging area the better, up to the size of the Replicated Folder. See the section "How
to determine if you have a staging area problem" and the blog posts linked at the end
of this article for more details on why it is important to have a properly sized staging
area.

General guidance
The staging area quota must be as large as the 32 largest files in the Replicated Folder.

Initial Replication will make much more use of the staging area than day-to-day
replication. Setting the staging area higher than the minimum during initial replication is
strongly encouraged if you have the drive space available.

How do you find these X largest files?


Use a PowerShell script to find the 32 largest files (9 on Windows Server 2003 R2) and
determine how many gigabytes they add up to. Before beginning, enable long path
support, first added in Windows Server 2016; see Maximum Path Length Limitation.

1. Run the following command:

PowerShell

Get-ChildItem c:\temp -Recurse | Sort-Object Length -Descending |
    Select-Object -First 32 | Format-Table Name,Length -Wrap -AutoSize

This command will return the file names and the size of the files in bytes. Useful if
you want to know what 32 files are the largest in the Replicated Folder so you can
“visit” their owners.

2. Run the following command:


PowerShell

Get-ChildItem c:\temp -Recurse | Sort-Object Length -Descending |
    Select-Object -First 32 | Measure-Object -Property Length -Sum

This command will return the total number of bytes of the 32 largest files in the
folder without listing the file names.

3. Run the following command:

PowerShell

$big32 = Get-ChildItem c:\temp -Recurse | Sort-Object Length -Descending |
    Select-Object -First 32 | Measure-Object -Property Length -Sum
$big32.Sum / 1GB

This command will get the total number of bytes of the 32 largest files in the folder
and do the math to convert bytes to gigabytes for you. The command is two separate
lines; you can paste both of them into the PowerShell command shell at once or run
them back to back.

Manual Walkthrough
Running command 1 will return results similar to the output below. This example only
uses 16 files for brevity. Always use 32 for Windows Server 2008 and later operating systems.

Example Data returned by PowerShell

Name Length

File5.zip 10286089216

archive.zip 6029853696

BACKUP.zip 5751522304

file9.zip 5472683008

MENTOS.zip 5241586688

File7.zip 4321264640

file2.zip 4176765952
frd2.zip 4176765952

BACKUP.zip 4078994432

File44.zip 4058424320

file11.zip 3858056192

Backup2.zip 3815138304

BACKUP3.zip 3815138304

Current.zip 3576931328

Backup8.zip 3307488256

File999.zip 3274982400

How to use this data to determine the minimum staging area size:
Name = Name of the file.
Length = bytes
One Gigabyte = 1073741824 Bytes

First, sum the total number of bytes. Next, divide the total by 1073741824. Microsoft
Excel is an easy way to do this.

Example
From the example above, the total number of bytes = 75241684992. To get the
minimum staging area quota needed, divide 75241684992 by 1073741824:

75241684992 / 1073741824 = 70.07 GB

Based on this data, you would set your staging area to 71 GB after rounding up to the
nearest whole number.
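The same arithmetic, as a quick Python check; the byte total is the sum of the 16 example files above, and `math.ceil` does the rounding up:

```python
import math

GB = 1073741824            # bytes in one gigabyte (2**30)
total_bytes = 75241684992  # sum of the example file sizes above

# 75241684992 / 1073741824 is roughly 70.07, so round up to the next whole GB
minimum_staging_gb = math.ceil(total_bytes / GB)
```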

Real World Scenario:

While a manual walkthrough is interesting, it is likely not the best use of your time to
do the math yourself. To automate the process, use command 3 from the examples
above. Using command 3 without any extra effort except for rounding to the nearest
whole number, you can determine, for example, that you need a 6 GB staging area
quota for d:\docs.

Do you need to reboot or restart the DFSR service for the changes to be picked up?
Changes to the staging area quota do not require a reboot or restart of the service to
take effect. You will need to wait on AD replication and DFSR’s AD polling cycle for the
changes to be applied.

How to determine if you have a staging area problem
You detect staging area problems by monitoring for specific event IDs on your DFSR
servers. The list of events is 4202, 4204, 4206, 4208 and 4212. The text of each event
is listed below. It is important to distinguish between 4202 and 4204 and the other
events. It is possible to log a high number of 4202 and 4204 events under normal
operating conditions.

Staging Area Events


Event ID: 4202 Severity: Warning

The DFS Replication service has detected that the staging space in use for the
replicated folder at local path (path) is above the high watermark. The service will
attempt to delete the oldest staging files. Performance may be affected.

Event ID: 4204 Severity: Informational

The DFS Replication service has successfully deleted old staging files for the
replicated folder at local path (path). The staging space is now below the high
watermark.

Event ID: 4206 Severity: Warning

The DFS Replication service failed to clean up old staging files for the replicated
folder at local path (path). The service might fail to replicate some large files and the
replicated folder might get out of sync. The service will automatically retry staging
space cleanup in (x) minutes. The service may start cleanup earlier if it detects some
staging files have been unlocked.
Event ID: 4208 Severity: Warning

The DFS Replication service detected that the staging space usage is above the
staging quota for the replicated folder at local path (path). The service might fail to
replicate some large files and the replicated folder might get out of sync. The
service will attempt to clean up staging space automatically.

Event ID: 4212 Severity: Error

The DFS Replication service could not replicate the replicated folder at local path
(path) because the staging path is invalid or inaccessible.

What is the difference between 4202 and 4208?


Events 4202 and 4208 have similar text; that is, DFSR detected that the staging area
usage exceeds the high watermark. The difference is that 4208 is logged after staging
area cleanup has run and the staging quota is still exceeded. 4202 is a normal and
expected event, whereas 4208 is abnormal and requires intervention.

How many 4202, 4204 events are too many?


There is no single answer to this question. Unlike 4206, 4208 or 4212 events, which are
always bad and indicate action is needed, 4202 and 4204 events occur under normal
operating conditions. Seeing many 4202 and 4204 events may indicate a problem.
Things to consider:

1. Is the Replicated Folder (RF) logging 4202 performing initial replication? If so, it is
   normal to log 4202 and 4204 events. Keep these down to as few as possible during
   initial replication by providing as much staging area as possible.
2. Simply checking the total number of 4202 events is not sufficient. You have to
   know how many were logged per RF. If you log twenty 4202 events for one RF in a
   24-hour period, that is high. However, if you have 20 Replicated Folders and there
   is one event per folder, you are doing well.
3. You should examine several days of data to establish trends.

We usually counsel customers to allow no more than one 4202 event per Replicated
Folder per day under normal operating conditions. “Normal” meaning no Initial
Replication is occurring. We base this on the reasoning that:

1. Time spent cleaning up the staging area is time spent not replicating files.
Replication is paused while the staging area is cleared.
2. DFSR benefits from a full staging area, using it for RDC and cross-file RDC and for
   replicating the same files to other members.
3. The more 4202 and 4204 events you log the greater the odds you will run into the
condition where DFSR cannot clean up the staging area or will have to prematurely
purge files from the staging area.
4. 4206, 4208 and 4212 events are, in my experience, always preceded and followed
by a high number of 4202 and 4204 events.

While allowing for only one 4202 event per RF per day is conservative, it greatly
decreases your odds of running into staging area problems and better utilizes your
DFSR server’s resources for the intended purpose of replicating files.
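The one-4202-per-replicated-folder-per-day rule of thumb is easy to check with a short script. This Python sketch assumes you have already exported events as (replicated_folder, event_id, date) tuples (for example, from Get-WinEvent); the data format and function name are illustrative assumptions, not a DFSR interface:

```python
from collections import Counter


def flag_staging_pressure(events, max_per_day=1):
    """Count 4202 events per (replicated folder, day) and return the
    folder-days that exceed the allowed rate."""
    counts = Counter((rf, day) for rf, event_id, day in events if event_id == 4202)
    return sorted(key for key, n in counts.items() if n > max_per_day)


events = [
    ("D:\\Docs", 4202, "2023-05-01"),
    ("D:\\Docs", 4202, "2023-05-01"),
    ("D:\\Docs", 4204, "2023-05-01"),
    ("E:\\Builds", 4202, "2023-05-01"),
]
hot = flag_staging_pressure(events)
# D:\Docs logged two 4202 events on 2023-05-01, so that folder-day is flagged
```

Run something like this over several days of exported data to establish the trend, as recommended above.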
Understanding (the Lack of) Distributed File Locking in DFSR
Article • 06/21/2022

This article discusses the absence of a multi-host distributed file locking mechanism
within Windows, and specifically within folders replicated by DFSR.

Some Background
Distributed File Locking – this refers to the concept of having multiple copies of a
file on several computers and when one file is opened for writing, all other copies
are locked. This prevents a file from being modified on multiple servers at the
same time by several users.
Distributed File System Replication – DFSR operates in a multi-master, state-based
design. In state-based replication, each server in the multi-master system applies
updates to its replica as they arrive, without exchanging log files (it instead uses
version vectors to maintain “up-to-dateness” information). No one server is ever
arbitrarily authoritative after initial sync, so it is highly available and very flexible on
various network topologies.
Server Message Block - SMB is the common protocol used in Windows for
accessing files over the network. In simplified terms, it's a client-server protocol
that makes use of a redirector to have remote file systems appear to be local file
systems. It is not specific to Windows and is quite common – a well known non-
Microsoft example is Samba, which allows Linux, Mac, and other operating systems
to act as SMB clients/servers and participate in Windows networks.
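The version-vector bookkeeping mentioned above can be sketched in a few lines of Python. This is a toy model only; the server names, merge rule, and comparison here are illustrative simplifications, not DFSR's actual format.

```python
# A toy version vector: a per-server counter of the updates a replica knows.
def merge(vv_a, vv_b):
    """Combine two version vectors: the union of what both replicas know."""
    return {s: max(vv_a.get(s, 0), vv_b.get(s, 0))
            for s in sorted(set(vv_a) | set(vv_b))}

def knows_everything(local, remote):
    """True if the local replica already has every update the remote has."""
    return all(local.get(s, 0) >= v for s, v in remote.items())

a = {"SRV01": 5, "SRV02": 2}
b = {"SRV01": 3, "SRV02": 4}
merged = merge(a, b)
print(merged)                       # {'SRV01': 5, 'SRV02': 4}
print(knows_everything(a, b))       # False
print(knows_everything(merged, b))  # True
```

The point of the model: each server can decide what it still needs from a partner by comparing vectors, without any single server being authoritative.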

It's important to make a clear delineation of where DFSR and SMB live in your replicated
data environment. SMB allows users to access their files, and it has no awareness of
DFSR. Likewise, DFSR (using the RPC protocol) keeps files in sync between servers and
has no awareness of SMB. Don't confuse distributed locking as defined in this post and
Opportunistic Locking.

So here's where things can go pear-shaped, as the Brits say.

Since users can modify data on multiple servers, and since each Windows server only
knows about a file lock on itself, and since DFSR doesn't know anything about those
locks on other servers, it becomes possible for users to overwrite each other's changes.
DFSR uses a “last writer wins” conflict algorithm, so someone has to lose and the person
to save last gets to keep their changes. The losing file copy is chucked into the
ConflictAndDeleted folder.
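A toy model of that "last writer wins" behavior, in Python. The timestamp-only comparison and the server names are illustrative simplifications; DFSR's real conflict handling tracks more state than a bare save time.

```python
from dataclasses import dataclass

@dataclass
class FileVersion:
    server: str
    content: str
    saved_at: float  # when this copy was last saved (simplified)

def last_writer_wins(a, b, conflict_and_deleted):
    """Keep the most recent save; move the losing copy aside,
    as DFSR does with its ConflictAndDeleted folder."""
    winner, loser = (a, b) if a.saved_at >= b.saved_at else (b, a)
    conflict_and_deleted.append(loser)
    return winner

lost = []
v1 = FileVersion("SRV01", "branch-office edits", saved_at=100.0)
v2 = FileVersion("SRV02", "head-office edits", saved_at=105.0)
print(last_writer_wins(v1, v2, lost).server)  # SRV02
print(lost[0].server)                         # SRV01
```

Whoever saved last keeps their changes; the other user's work survives only as the copy set aside in ConflictAndDeleted.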
Now, this is far less common than people like to believe. Typically, true shared files are
modified in a local environment; in the branch office or in the same row of cubicles.
They are usually worked on by people on the same team, so people are generally aware
of colleagues modifying data. And since they are usually in the same site, the odds are
much higher that all the users working on a shared doc will be using the same server.
Windows SMB handles the situation here. When a user has a file locked for modification
and a coworker tries to edit it, the other user will get a sharing violation error. And if
the application opening the file is really clever, like Word 2007, it might offer more
helpful options, such as opening a read-only copy or being notified when the file
becomes available.

DFSR does have a mechanism for locked files, but it is only within the server's own
context. DFSR will not replicate a file in or out if its local copy has an exclusive lock. But
this doesn't prevent anyone on another server from modifying the file.

Back on topic, the issue of shared data being modified geographically does exist, and for
some folks it's pretty gnarly. We're occasionally asked why DFSR doesn't handle this
locking and take care of everything with a wave of the magic wand. It turns out this is an
interesting and difficult scenario to solve for a multi-master replication system. Let's
explore.

Third-Party Solutions
There are some vendor solutions that take on this problem, which they typically tackle
through one or more of the following methods*:
Use of a broker mechanism

Having a central ‘traffic cop' allows one server to be aware of all the other servers
and which files they have locked by users. Unfortunately this also means that there
is often a single point of failure in the distributed locking system.

Requirement for a fully routed network

Since a central broker must be able to talk to all servers participating in file
replication, this removes the ability to handle complex network topologies. Ring
topologies and multi hub-and-spoke topologies are not usually possible. In a non-
fully routed network, some servers may not be able to directly contact each other or
a broker, and can only talk to a partner who himself can talk to another server – and
so on. This is fine in a multi-master environment, but not with a brokering
mechanism.
Are limited to a pair of servers

Some solutions limit the topology to a pair of servers in order to simplify their
distributed locking mechanism. For larger environments this may not be feasible.

Make use of agents on clients and servers


Do not use multi-master replication
Do not make use of MS clustering
Make use of specialty appliances

* Note that I say typically! Please do not post death threats because you have a solution
that does/does not implement one or more of those methods!

Deeper Thoughts
As you think further about this issue, some fundamental issues start to crop up. For
example, if we have four servers with data that can be modified by users in four sites,
and the WAN connection to one of them goes offline, what do we do? The users can still
access their individual servers – but should we let them? We don't want them to make
changes that conflict, but we definitely want them to keep working and making our
company money. If we arbitrarily block changes at that point, no users can work even
though there may not actually be any conflicts happening! There's no way to tell the
other servers that the file is in use and you're back at square one.

Then there's SMB itself and the error handling of reporting locks. We can't really change
how SMB reports sharing violations as we'd break a ton of applications and clients
wouldn't understand new extended error messages anyway. Applications like Word
2007 do some undercover trickery to figure out who is locking files, but the vast
majority of applications don't know who has a file in use (or even that SMB exists.
Really.). So when a user gets the message ‘This file is in use' it's not particularly
actionable – should they all call the help desk? Does the help desk have access to all the
file servers to see which users are accessing files? Messy.

Since we want multi-master for high availability, a broker system is less desirable; we
might need to have something running on all servers that allows them all to
communicate even through non-fully routed networks. This will require very complex
synchronization techniques. It will add some overhead on the network (although
probably not much) and it will need to be lightning fast to make sure that we are not
holding up the user in their work; it needs to outrun file replication itself - in fact, it
might need to actually be tied to replication somehow. It will also have to account for
server outages that are network related and not server crashes, somehow.
And then we're back to special client software for this scenario that better understands
the locks and can give the user some useful info (“Go call Susie in accounting and tell
her to release that doc”, “Sorry, the file locking topology is broken and your
administrator is preventing you from opening this file until it's fixed”, etc). Getting this to
play nicely with the millions of applications running in Windows will definitely be
interesting. There are plenty of OS's that would not be supported or get the software –
Windows 2000 is out of mainstream support and XP soon will be. Linux and Mac clients
wouldn't have this software until they felt it was important, so the customer would have
to hope their vendors made something analogous.

More information
Right now the easiest way to control this situation in DFSR is to use DFS Namespaces to
guide users to predictable locations, with a consistent namespace. By correctly
configuring your DFSN site topology and server links, you force users to all share the
same local server and only allow them to access remote computers when their ‘main'
server is down. For most environments, this works quite well. As an alternative to DFSR,
SharePoint is an option because of its check-out/check-in system.
in Windows Server 2008 R2 and Windows 7) may be an option for you as it is designed
for easing the reading of files in a branch scenario, but in the end the authoritative data
will still live on one server only – more on this here. And again, those vendors have their
solutions.
Overview of Disk Management
Article • 03/22/2023

Applies To: Windows 11, Windows 10, Windows Server 2022, Windows Server 2019,
Windows Server 2016

Disk Management is a system utility in Windows for advanced storage operations. Here
are some tasks you can complete with Disk Management:

Set up a new drive. For more information, see Initialize new disks.

Extend a volume into space that's not already part of a volume on the same drive.
For more information, see Extend a basic volume.

Shrink a partition, such as to enable extending into a neighboring partition. For
more information, see Shrink a basic volume.

Change a drive letter or assign a new drive letter. For more information, see
Change a drive letter.

Review drives and partitions


Disk Management shows the details for each drive on your PC and all partitions for each
drive. The details include statistics about the partitions, including the amount of space
allocated or used.

The following image shows the Disk Management overview for several drives. Disk 0 has
three partitions, and Disk 1 has two partitions. On Disk 0, the C: drive for Windows uses
the most disk space. Two other partitions for system operations and recovery use a
smaller amount of disk space.
Windows typically includes three partitions on your main drive (usually the C:\ drive).
These partitions include the EFI System Partition, the Local Disk (C:) Partition, and a
Recovery Partition.

The Windows operating system is installed on the Local Disk (C:) Partition. This
partition is the common storage location for your other apps and files.

Modern PCs use the EFI System Partition to start (boot) your PC and your
operating system.

The Recovery Partition stores special tools to help you recover Windows, in case
there's a problem starting the PC or other serious issues.

Important

Disk Management might show the EFI System Partition and Recovery Partition as
100 percent free. However, these partitions store critical files that your PC needs to
operate properly, and the partitions are generally nearly full. It's recommended to
not modify these partitions in any way.

Troubleshoot issues
Sometimes a Disk Management task reports an error, or a procedure doesn't work as
expected. There are several options available to help you resolve the issue.

Review suggestions in the Troubleshooting Disk Management article.


Search the Microsoft Community website for posts about files, folders, and
storage.

If you don't find an answer on the site, you can post a question for input from
Microsoft or other members of the community. You can also Contact Microsoft
Support .

Complete related tasks


Disk Management supports a wide range of drive tasks, but some tasks need to be
completed by using a different tool. Here are some common disk management tasks to
complete with other tools in Windows:

Free up disk space. For more information, see Free up drive space in Windows .

Defragment or optimize your drives. For more information, see Ways to improve
your computer's performance .

Pool multiple hard drives together, similar to a RAID (redundant array of
independent disks). For more information, see Storage Spaces in Windows.

Related links
Manage disks
Manage basic volumes
Troubleshooting Disk Management
Recovery options in Windows
Find lost files after the upgrade to Windows
Backup and Restore in Windows
Create a recovery drive
Create a system restore point
Where to look for your BitLocker recovery key
Overview of file sharing using the SMB
3 protocol in Windows Server
Article • 01/26/2023

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012

This topic describes the SMB 3 feature in Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, and Windows Server 2012—practical uses for the feature, the
most significant new or updated functionality in this version compared to previous
versions, and the hardware requirements. SMB is also a fabric protocol used by
software-defined data center (SDDC) solutions such as Storage Spaces Direct, Storage
Replica, and others. SMB version 3.0 was introduced with Windows Server 2012 and has
been incrementally improved in subsequent releases.

Feature description
The Server Message Block (SMB) protocol is a network file sharing protocol that allows
applications on a computer to read and write to files and to request services from server
programs in a computer network. The SMB protocol can be used on top of the TCP/IP
protocol or other network protocols. Using the SMB protocol, an application (or the user
of an application) can access files or other resources at a remote server. This allows
applications to read, create, and update files on the remote server. SMB can also
communicate with any server program that is set up to receive an SMB client request.
SMB is a fabric protocol that is used by Software-defined Data Center (SDDC)
computing technologies, such as Storage Spaces Direct, Storage Replica. For more
information, see Windows Server software-defined datacenter.

Practical applications
This section discusses some practical ways to use the new SMB 3.0 protocol.

File storage for virtualization (Hyper-V™ over SMB). Hyper-V can store virtual
machine files, such as configuration, Virtual hard disk (VHD) files, and snapshots, in
file shares over the SMB 3.0 protocol. This can be used for both stand-alone file
servers and clustered file servers that use Hyper-V together with shared file
storage for the cluster.
Microsoft SQL Server over SMB. SQL Server can store user database files on SMB
file shares. Currently, this is supported with SQL Server 2008 R2 for stand-alone
SQL servers. Upcoming versions of SQL Server will add support for clustered SQL
servers and system databases.
Traditional storage for end-user data. The SMB 3.0 protocol provides
enhancements to the Information Worker (or client) workloads. These
enhancements include reducing the application latencies experienced by branch
office users when accessing data over wide area networks (WAN) and protecting
data from eavesdropping attacks.

Note

If you need to conserve storage space on an SMB file share, consider using Azure
File Sync with cloud tiering enabled. This allows you to cache your most frequently
accessed files locally and tier your least frequently accessed files to the cloud,
saving local storage space while maintaining performance. For details, see Planning
for an Azure File Sync deployment.

New and changed functionality


The following sections describe functionality that was added in SMB 3 and subsequent
updates.

Features added in Windows Server 2019 and Windows 10, version 1809
Ability to require write-through to disk on file shares that aren't continuously
available (New): To provide some added assurance that writes to a file share make it
all the way through the software and hardware stack to the physical disk prior to the
write operation returning as completed, you can enable write-through on the file share
using either the NET USE /WRITETHROUGH command or the New-SMBMapping
-UseWriteThrough PowerShell cmdlet. There's some amount of performance hit to using
write-through; see the blog post Controlling write-through behaviors in SMB for
further discussion.
Features added in Windows Server, version
1709, and Windows 10, version 1709
Guest access to file shares is disabled (New): The SMB client no longer allows the
following actions: guest account access to a remote server; fallback to the Guest
account after invalid credentials are provided. For details, see Guest access in SMB2
disabled by default in Windows.

SMB global mapping (New): Maps a remote SMB share to a drive letter that is accessible
to all users on the local host, including containers. This is required to enable
container I/O on the data volume to traverse the remote mount point. Be aware that
when using SMB global mapping for containers, all users on the container host can
access the remote share. Any application running on the container host also has access
to the mapped remote share. For details, see Container Storage Support with Cluster
Shared Volumes (CSV), Storage Spaces Direct, SMB Global Mapping.

SMB dialect control (New): You can now set registry values to control the minimum SMB
version (dialect) and maximum SMB version used. For details, see Controlling SMB
Dialects.

Features added in SMB 3.1.1 with Windows Server 2016 and Windows 10, version 1607
SMB Encryption (Updated): SMB 3.1.1 encryption with Advanced Encryption
Standard-Galois/Counter Mode (AES-GCM) is faster than SMB Signing or previous SMB
encryption using AES-CCM.

Directory Caching (New): SMB 3.1.1 includes enhancements to directory caching.
Windows clients can now cache much larger directories, approximately 500K entries.
Windows clients will attempt directory queries with 1 MB buffers to reduce round trips
and improve performance.

Pre-Authentication Integrity (New): In SMB 3.1.1, pre-authentication integrity
provides improved protection from a man-in-the-middle attacker tampering with SMB's
connection establishment and authentication messages. For details, see SMB 3.1.1
Pre-authentication integrity in Windows 10.

SMB Encryption Improvements (New): SMB 3.1.1 offers a mechanism to negotiate the
crypto algorithm per connection, with options for AES-128-CCM and AES-128-GCM.
AES-128-GCM is the default for new Windows versions, while older versions will
continue to use AES-128-CCM.

Rolling cluster upgrade support (New): Enables rolling cluster upgrades by letting SMB
appear to support different max versions of SMB for clusters in the process of being
upgraded. For more details on letting SMB communicate using different versions
(dialects) of the protocol, see the blog post Controlling SMB Dialects.

SMB Direct client support in Windows 10 (New): Windows 10 Enterprise, Windows 10
Education, and Windows 10 Pro for Workstations now include SMB Direct client support.

Native support for FileNormalizedNameInformation API calls (New): Adds native support
for querying the normalized name of a file. For details, see
FileNormalizedNameInformation.

For additional details, see the blog post What’s new in SMB 3.1.1 in the Windows Server
2016 Technical Preview 2.

Features added in SMB 3.02 with Windows Server 2012 R2 and Windows 8.1
Automatic rebalancing of Scale-Out File Server clients (New): Improves scalability and
manageability for Scale-Out File Servers. SMB client connections are tracked per file
share (instead of per server), and clients are then redirected to the cluster node
with the best access to the volume used by the file share. This improves efficiency by
reducing redirection traffic between file server nodes. Clients are redirected
following an initial connection and when cluster storage is reconfigured.

Performance over WAN (Updated): Windows 8.1 and Windows 10 provide improved CopyFile
SRV_COPYCHUNK over SMB support when you use File Explorer for remote copies from one
location on a remote machine to another copy on the same server. You will copy only a
small amount of metadata over the network (1/2 KiB per 16 MiB of file data is
transmitted). This results in a significant performance improvement. This is an
OS-level and File Explorer-level distinction for SMB.

SMB Direct (Updated): Improves performance for small I/O workloads by increasing
efficiency when hosting workloads with small I/Os (such as an online transaction
processing (OLTP) database in a virtual machine). These improvements are evident when
using higher speed network interfaces, such as 40 Gbps Ethernet and 56 Gbps
InfiniBand.

SMB bandwidth limits (New): You can now use Set-SmbBandwidthLimit to set bandwidth
limits in three categories: VirtualMachine (Hyper-V over SMB traffic), LiveMigration
(Hyper-V Live Migration traffic over SMB), or Default (all other types of SMB
traffic).

For more information on new and changed SMB functionality in Windows Server 2012
R2, see What's New in SMB in Windows Server.
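As a sanity check on the CopyChunk metadata figure quoted above (roughly 1/2 KiB of metadata per 16 MiB of file data), here is a small back-of-envelope Python calculation. The chunk and metadata sizes are taken from the text; the function itself is just arithmetic, not an SMB implementation.

```python
# Approximate metadata transmitted for a server-side CopyChunk copy,
# using the figures quoted in the article: 512 bytes per 16 MiB chunk.
def copychunk_metadata_bytes(file_bytes, chunk=16 * 1024 * 1024, meta=512):
    """Return the approximate bytes of metadata sent over the network."""
    chunks = -(-file_bytes // chunk)  # ceiling division
    return chunks * meta

one_gib = 1024 ** 3
print(copychunk_metadata_bytes(one_gib))  # 32768 -> about 32 KiB for 1 GiB
```

So copying a 1 GiB file between two locations on the same remote server moves only about 32 KiB across the WAN, which is where the performance improvement comes from.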

Features added in SMB 3.0 with Windows Server 2012 and Windows 8
SMB Transparent Failover (New): Enables administrators to perform hardware or software
maintenance of nodes in a clustered file server without interrupting server
applications storing data on these file shares. Also, if a hardware or software
failure occurs on a cluster node, SMB clients transparently reconnect to another
cluster node without interrupting server applications that are storing data on these
file shares.

SMB Scale Out (New): Support for multiple SMB instances on a Scale-Out File Server.
Using Cluster Shared Volumes (CSV) version 2, administrators can create file shares
that provide simultaneous access to data files, with direct I/O, through all nodes in
a file server cluster. This provides better utilization of network bandwidth and load
balancing of the file server clients, and optimizes performance for server
applications.

SMB Multichannel (New): Enables aggregation of network bandwidth and network fault
tolerance if multiple paths are available between the SMB client and server. This
enables server applications to take full advantage of all available network bandwidth
and be resilient to a network failure. SMB Multichannel in SMB 3 contributes to a
substantial increase in performance compared to previous versions of SMB.

SMB Direct (New): Supports the use of network adapters that have RDMA capability and
can function at full speed with very low latency, while using very little CPU. For
workloads such as Hyper-V or Microsoft SQL Server, this enables a remote file server
to resemble local storage. SMB Direct in SMB 3 contributes to a substantial increase
in performance compared to previous versions of SMB.

Performance Counters for server applications (New): The new SMB performance counters
provide detailed, per-share information about throughput, latency, and I/O per second
(IOPS), allowing administrators to analyze the performance of SMB file shares where
their data is stored. These counters are specifically designed for server
applications, such as Hyper-V and SQL Server, which store files on remote file shares.

Performance optimizations (Updated): Both the SMB client and server have been
optimized for small random read/write I/O, which is common in server applications such
as SQL Server OLTP. In addition, large Maximum Transmission Unit (MTU) is turned on by
default, which significantly enhances performance in large sequential transfers, such
as SQL Server data warehouse, database backup or restore, deploying or copying virtual
hard disks.

SMB-specific Windows PowerShell cmdlets (New): With Windows PowerShell cmdlets for
SMB, an administrator can manage file shares on the file server, end to end, from the
command line.

SMB Encryption (New): Provides end-to-end encryption of SMB data and protects data
from eavesdropping occurrences on untrusted networks. Requires no new deployment
costs, and no need for Internet Protocol security (IPsec), specialized hardware, or
WAN accelerators. It may be configured on a per-share basis, or for the entire file
server, and may be enabled for a variety of scenarios where data traverses untrusted
networks.

SMB Directory Leasing (New): Improves application response times in branch offices.
With the use of directory leases, roundtrips from client to server are reduced since
metadata is retrieved from a longer-living directory cache. Cache coherency is
maintained because clients are notified when directory information on the server
changes. Directory leases work with scenarios for HomeFolder (read/write with no
sharing) and Publication (read-only with sharing).

Performance over WAN (New): Directory opportunistic locks (oplocks) and oplock leases
were introduced in SMB 3.0. For typical office/client workloads, oplocks/leases are
shown to reduce network round trips by approximately 15%. In SMB 3, the Windows
implementation of SMB has been refined to improve the caching behavior on the client
as well as the ability to push higher throughputs. SMB 3 features improvements to the
CopyFile() API, as well as to associated tools such as Robocopy, to push significantly
more data over the network.

Secure dialect negotiation (New): Helps protect against man-in-the-middle attempts to
downgrade dialect negotiation. The idea is to prevent an eavesdropper from downgrading
the initially negotiated dialect and capabilities between the client and the server.
For details, see SMB3 Secure Dialect Negotiation. Note that this has been superseded
by the SMB 3.1.1 Pre-authentication integrity in Windows 10 feature in SMB 3.1.1.

Hardware requirements
SMB Transparent Failover has the following requirements:

A failover cluster running Windows Server 2012 or Windows Server 2016 with at
least two nodes configured. The cluster must pass the cluster validation tests
included in the validation wizard.
File shares must be created with the Continuous Availability (CA) property, which is
the default.
File shares must be created on CSV volume paths to attain SMB Scale-Out.
Client computers must be running Windows® 8 or Windows Server 2012, both of
which include the updated SMB client that supports continuous availability.

Note

Down-level clients can connect to file shares that have the CA property, but
transparent failover will not be supported for these clients.

SMB Multichannel has the following requirements:

At least two computers running Windows Server 2012 are required. No extra
features need to be installed—the technology is on by default.
For information on recommended network configurations, see the More information
section at the end of this overview topic.

SMB Direct has the following requirements:

At least two computers running Windows Server 2012 are required. No extra
features need to be installed—the technology is on by default.
Network adapters with RDMA capability are required. Currently, these adapters are
available in three different types: iWARP, Infiniband, or RoCE (RDMA over
Converged Ethernet).
More information
The following list provides additional resources on the web about SMB and related
technologies in Windows Server 2012 R2, Windows Server 2012, and Windows Server
2016.

Storage in Windows Server


Scale-Out File Server for Application Data
Improve Performance of a File Server with SMB Direct
Deploy Hyper-V over SMB
Deploy SMB Multichannel
Deploying Fast and Efficient File Servers for Server Applications
SMB: Troubleshooting Guide
SMB Direct
Article • 01/25/2022

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Azure Stack HCI, version 21H2

Windows Server includes a feature called SMB Direct, which supports the use of network
adapters that have Remote Direct Memory Access (RDMA) capability. Network adapters
that have RDMA can function at full speed with very low latency, while using very little
CPU. For workloads such as Hyper-V or Microsoft SQL Server, this enables a remote file
server to resemble local storage. SMB Direct includes:

Increased throughput: Leverages the full throughput of high speed networks
where the network adapters coordinate the transfer of large amounts of data at
line speed.
Low latency: Provides extremely fast responses to network requests, and, as a
result, makes remote file storage feel as if it is directly attached block storage.
Low CPU utilization: Uses fewer CPU cycles when transferring data over the
network, which leaves more power available to server applications.

SMB Direct is automatically configured by Windows Server.

SMB Multichannel and SMB Direct


SMB Multichannel is the feature responsible for detecting the RDMA capabilities of
network adapters to enable SMB Direct. Without SMB Multichannel, SMB uses regular
TCP/IP with the RDMA-capable network adapters (all network adapters provide a TCP/IP
stack along with the new RDMA stack).

With SMB Multichannel, SMB detects whether a network adapter has the RDMA
capability, and then creates multiple RDMA connections for that single session (two per
interface). This allows SMB to use the high throughput, low latency, and low CPU
utilization offered by RDMA-capable network adapters. It also offers fault tolerance if
you are using multiple RDMA interfaces.
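Conceptually, that interface-selection logic works something like the following Python sketch. The data structures and function are hypothetical illustrations of the behavior described above (prefer RDMA, two connections per RDMA interface, fall back to TCP/IP), not how the SMB client is actually implemented.

```python
# Illustrative sketch of SMB Multichannel's preference for RDMA-capable
# interfaces; "two connections per interface" comes from the article.
def plan_connections(interfaces, per_rdma_interface=2):
    """Return a hypothetical connection plan: RDMA interfaces get multiple
    RDMA connections; otherwise each interface gets one TCP/IP connection."""
    rdma = [i["name"] for i in interfaces if i["rdma"]]
    if rdma:
        return {name: ("RDMA", per_rdma_interface) for name in rdma}
    return {i["name"]: ("TCP", 1) for i in interfaces}

nics = [{"name": "NIC1", "rdma": True}, {"name": "NIC2", "rdma": False}]
print(plan_connections(nics))  # {'NIC1': ('RDMA', 2)}
```

With more than one RDMA-capable interface in the list, the plan would include multiple RDMA connections per interface, which is what gives SMB Direct its fault tolerance.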

You can team RDMA-capable network adapters using Switch Embedded Teaming (SET)
starting with Windows Server 2016. After at least one RDMA network connection is
created, the TCP/IP connection used for the original protocol negotiation is no longer
used. However, the TCP/IP connection is retained in case the RDMA network
connections fail.
SMB Encryption with SMB Direct
Beginning in Windows Server 2022 and Windows 11, SMB Direct now supports
encryption. Previously, enabling SMB encryption disabled direct data placement, making
RDMA performance as slow as TCP. Now data is encrypted before placement, leading to
relatively minor performance degradation while adding AES-128 and AES-256 protected
packet privacy. For more information on configuring SMB encryption, review SMB
security enhancements.

Furthermore, Windows Server failover clusters now support granular control of
encrypting intra-node storage communications for Cluster Shared Volumes (CSV) and
the storage bus layer (SBL). This means that when using Storage Spaces Direct and SMB
Direct, you can decide to encrypt east-west communications within the cluster itself for
higher security.

Requirements
SMB Direct requires the following:

At least two computers running Windows Server 2012 or newer.


One or more network adapters with RDMA capability.

Considerations when using SMB Direct


You can use SMB Direct in a failover cluster; however, you need to make sure that
the cluster networks used for client access are adequate for SMB Direct. Failover
clustering supports using multiple networks for client access, along with network
adapters that are RSS (Receive Side Scaling)-capable and RDMA-capable.
You can use SMB Direct on the Hyper-V management operating system to support
using Hyper-V over SMB, and to provide storage to a virtual machine that uses the
Hyper-V storage stack. However, RDMA-capable network adapters are not directly
exposed to a Hyper-V client. If you connect an RDMA-capable network adapter to
a virtual switch, the virtual network adapters from the switch will not be RDMA-
capable.
If you disable SMB Multichannel, SMB Direct is also disabled. Since SMB
Multichannel detects network adapter capabilities and determines whether a
network adapter is RDMA-capable, SMB Direct cannot be used by the client if SMB
Multichannel is disabled.
SMB Direct is not supported on Windows RT. SMB Direct requires support for
RDMA-capable network adapters, which is available starting with Windows Server
2012.
SMB Direct is not supported on down-level versions of Windows Server. It is
supported starting with Windows Server 2012.

Enabling and disabling SMB Direct


SMB Direct is enabled by default starting with Windows Server 2012. The SMB client
automatically detects and uses multiple network connections if an appropriate
configuration is identified.

Disable SMB Direct


Typically, you will not need to disable SMB Direct. However, you can disable it by
running one of the following Windows PowerShell commands.

To disable RDMA for a specific interface, type:

PowerShell

Disable-NetAdapterRdma <name>

To disable RDMA for all interfaces, type:

PowerShell

Set-NetOffloadGlobalSetting -NetworkDirect Disabled

When you disable RDMA on either the client or the server, the systems cannot use it.
Network Direct is the internal name for Windows Server basic networking support for
RDMA interfaces.

Re-enable SMB Direct


After disabling RDMA, you can re-enable it by running one of the following Windows
PowerShell commands.

To re-enable RDMA for a specific interface, type:

PowerShell

Enable-NetAdapterRdma -Name "<name>"

To re-enable RDMA for all interfaces, type:


PowerShell

Set-NetOffloadGlobalSetting -NetworkDirect Enabled

You need to enable RDMA on both the client and the server to start using it again.

Test performance of SMB Direct


You can test SMB Direct performance by using one of the following procedures.

Compare a file copy with and without using SMB Direct


Here's how to measure the increased throughput of SMB Direct:

1. Configure SMB Direct


2. Measure the amount of time to run a large file copy using SMB Direct.
3. Disable RDMA on the network adapter, see Enabling and disabling SMB Direct.
4. Measure the amount of time to run a large file copy without using SMB Direct.
5. Re-enable RDMA on the network adapter, and then compare the two results.
6. To avoid the impact of caching, you should do the following:
a. Copy a large amount of data (more data than memory is capable of handling).
b. Copy the data twice, with the first copy as practice and then timing the second
copy.
c. Restart both the server and the client before each test to make sure they
operate under similar conditions.
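
The timing comparison in steps 2-5 can be scripted. This is a sketch only: the file paths, share name, and adapter name below are placeholders for your own environment:

PowerShell

# Time the copy with SMB Direct (RDMA) enabled
$withRdma = Measure-Command { Copy-Item -Path 'D:\testdata\large.vhdx' -Destination '\\fs1\share\large1.vhdx' }

# Disable RDMA on the client adapter, then repeat the copy
Disable-NetAdapterRdma -Name 'SLOT 2 Port 1'
$withoutRdma = Measure-Command { Copy-Item -Path 'D:\testdata\large.vhdx' -Destination '\\fs1\share\large2.vhdx' }

# Re-enable RDMA and compare the two durations
Enable-NetAdapterRdma -Name 'SLOT 2 Port 1'
"With SMB Direct: $($withRdma.TotalSeconds)s; without: $($withoutRdma.TotalSeconds)s"

Remember to apply the caching precautions from step 6 (large files, a practice copy, and restarts) so the two timings are comparable.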

Fail one of multiple network adapters during a file copy with SMB Direct


Here's how to confirm the failover capability of SMB Direct:

1. Ensure that SMB Direct is functioning in a multiple network adapter configuration.


2. Run a large file copy. While the copy is running, simulate a failure of one of the
network paths by disconnecting one of the cables (or by disabling one of the
network adapters).
3. Confirm that the file copying continues using one of the remaining network
adapters, and that there are no file copy errors.

Note
To avoid failures of a workload that does not use SMB Direct, make sure there are
no other workloads using the disconnected network path.

More information
Server Message Block overview
Increasing Server, Storage, and Network Availability: Scenario Overview
Deploy Hyper-V over SMB
SMB over QUIC
Article • 05/18/2023

Applies to: Windows Server 2022 Datacenter: Azure Edition, Windows 11

SMB over QUIC introduces an alternative to the TCP network transport, providing
secure, reliable connectivity to edge file servers over untrusted networks like the
Internet. QUIC is an IETF-standardized protocol with many benefits when compared with
TCP:

- All packets are always encrypted and the handshake is authenticated with TLS 1.3
- Parallel streams of reliable and unreliable application data
- Exchanges application data in the first round trip (0-RTT)
- Improved congestion control and loss recovery
- Survives a change in the client's IP address or port

SMB over QUIC offers an "SMB VPN" for telecommuters, mobile device users, and high-security organizations. The server certificate creates a TLS 1.3-encrypted tunnel over the internet-friendly UDP port 443 instead of the legacy TCP port 445. All SMB traffic, including authentication and authorization, stays within the tunnel and is never exposed to the underlying network. SMB behaves normally within the QUIC tunnel, meaning the user experience doesn't change. SMB features like multichannel, signing, compression, continuous availability, directory leasing, and so on, work normally.

A file server administrator must opt in to enabling SMB over QUIC. It isn't on by default,
and a client can't force a file server to enable SMB over QUIC. Windows SMB clients still
use TCP by default and will only attempt SMB over QUIC if the TCP attempt first fails or
if QUIC is intentionally required using NET USE /TRANSPORT:QUIC or New-SmbMapping -
TransportType QUIC.

Prerequisites
To use SMB over QUIC, you need the following things:

- A file server running Windows Server 2022 Datacenter: Azure Edition (Microsoft Server Operating Systems)
- A Windows 11 computer (Windows for business)
- Windows Admin Center (Homepage)
- A Public Key Infrastructure to issue certificates, like Active Directory Certificate Services, or access to a trusted third-party certificate issuer like Verisign, Digicert, Let's Encrypt, and so on.

Deploy SMB over QUIC

Step 1: Install a server certificate


1. Create a Certificate Authority-issued certificate with the following properties:

- Key usage: digital signature
- Purpose: Server Authentication (EKU 1.3.6.1.5.5.7.3.1)
- Signature algorithm: SHA256RSA (or greater)
- Signature hash: SHA256 (or greater)
- Public key algorithm: ECDSA_P256 (or greater; can also use RSA with at least 2048 length)
- Subject Alternative Name (SAN): a DNS name entry for each fully qualified DNS name used to reach the SMB server
- Subject: CN= anything, but must exist
- Private key included: yes
If using a Microsoft Enterprise Certificate Authority, you can create a certificate
template and allow the file server administrator to supply the DNS names when
requesting it. For more information on creating a certificate template, review
Designing and Implementing a PKI: Part III Certificate Templates . For a
demonstration of creating a certificate for SMB over QUIC using a Microsoft
Enterprise Certificate Authority, watch this video:
https://www.youtube-nocookie.com/embed/L0yl5Z5wToA

For requesting a third-party certificate, consult your vendor documentation.

2. If using a Microsoft Enterprise Certificate Authority:


a. Start MMC.EXE on the file server.
b. Add the Certificates snap-in, and select the Computer account.
c. Expand Certificates (Local Computer), Personal, then right-click Certificates
and click Request New Certificate.
d. Click Next
e. Select Active Directory Enrollment Policy
f. Click Next
g. Select the certificate template for SMB over QUIC that was published in Active
Directory.
h. Click More information is required to enroll for this certificate. Click here to
configure settings.
i. So users can locate the file server, fill in the Subject value with a common
name and the Subject Alternative Name value with one or more DNS names.
j. Click OK and click Enroll.
Note

If you're using a certificate file issued by a third party certificate authority, you can
use the Certificates snap-in or Windows Admin Center to import it.
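
For a quick lab test (not production), you could instead generate a certificate locally that satisfies most of the properties listed above. New-SelfSignedCertificate is an inbox cmdlet; the DNS name below is a placeholder for your own server name:

PowerShell

# Lab-only sketch: create a machine certificate with an ECDSA P-256 key and a SAN DNS entry
New-SelfSignedCertificate -DnsName 'fsedge1.contoso.com' `
    -KeyAlgorithm ECDSA_nistP256 `
    -CertStoreLocation 'Cert:\LocalMachine\My' `
    -FriendlyName 'SMB over QUIC (lab)'

Clients must also trust a self-signed certificate before they will connect, so for any real deployment use a CA-issued certificate as described above.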

Step 2: Configure SMB over QUIC


1. Deploy a Windows Server 2022 Datacenter: Azure Edition server.

2. Install the latest version of Windows Admin Center on a management PC or the file
server. You need the latest version of the Files & File Sharing extension. It's
installed automatically by Windows Admin Center if Automatically update
extensions is enabled in Settings > Extensions.

3. Join your Windows Server 2022 Datacenter: Azure Edition file server to your Active
Directory domain and make it accessible to Windows 11 clients on the Azure
public interface by adding a firewall allow rule for UDP/443 inbound. Do not allow
TCP/445 inbound to the file server. The file server must have access to at least one
domain controller for authentication, but no domain controller requires any
internet access.

4. Connect to the server with Windows Admin Center and click the Settings icon in
the lower left. In the File shares (SMB server) section, under File sharing across
the internet with SMB over QUIC, click Configure.

5. Click a certificate under Select a computer certificate for this file server, click the
server addresses clients can connect to or click Select all, and click Enable.
6. Ensure that the certificate and SMB over QUIC report are healthy.

7. Click on the Files and File Sharing menu option. Note your existing SMB shares or
create a new one.
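
If you prefer PowerShell over Windows Admin Center, the certificate-to-name mapping that step 5 performs can also be created directly with the SMB cmdlets. In this sketch, the server name and thumbprint are placeholders for your own values:

PowerShell

# Bind the certificate (by thumbprint) to the DNS name clients will use to reach the server
New-SmbServerCertificateMapping -Name 'fsedge1.contoso.com' -Thumbprint '<thumbprint>' -StoreName 'My'

# Verify the resulting mapping
Get-SmbServerCertificateMapping

Run New-SmbServerCertificateMapping once per DNS name that clients will use, matching the subject alternative names on the certificate.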

For a demonstration of configuring and using SMB over QUIC, watch this video:
https://www.youtube-nocookie.com/embed/OslBSB8IkUw

Step 3: Connect to SMB shares


1. Join your Windows 11 device to your domain. Ensure that the DNS names in the
SMB over QUIC file server certificate's subject alternative names are fully qualified
and published to DNS, or added to the HOSTS file on your Windows 11 device.

2. Move your Windows 11 device to an external network where it no longer has any
network access to domain controllers or the file server's internal IP addresses.

3. In Windows File Explorer, in the Address Bar, type the UNC path to a share on the
file server and confirm you can access data in the share. Alternatively, you can use
NET USE /TRANSPORT:QUIC or New-SmbMapping -TransportType QUIC with a UNC
path. Examples:

NET USE * \\fsedge1.contoso.com\sales (automatically tries TCP then QUIC)

NET USE * \\fsedge1.contoso.com\sales /TRANSPORT:QUIC (tries only QUIC)

New-SmbMapping -LocalPath 'Z:' -RemotePath '\\fsedge1.contoso.com\sales' -TransportType QUIC (tries only QUIC)

Configure the KDC Proxy (Optional, but recommended)


By default, a Windows 11 device won't have access to an Active Directory domain
controller when connecting to an SMB over QUIC file server. This means authentication
uses NTLMv2, where the file server authenticates on behalf of the client. No NTLMv2
authentication or authorization occurs outside the TLS 1.3-encrypted QUIC tunnel.
However, we still recommend using Kerberos as a general security best practice and
don't recommend creating new NTLMv2 dependencies in deployments. To enable
Kerberos, you can configure the KDC Proxy to forward ticket requests on the user's
behalf, all while using an internet-friendly HTTPS-encrypted communication channel.
The KDC Proxy is fully supported by SMB over QUIC and highly recommended.

Note

You cannot configure the Windows Admin Center (WAC) in gateway mode using
TCP port 443 on a file server where you are configuring KDC Proxy. When
configuring WAC on the file server, change the port to one that is not in use and is
not 443. If you have already configured WAC on port 443, re-run the WAC setup
MSI and choose a different port when prompted.

Windows Admin Center method

1. Ensure you're using at least Windows Admin Center version 2110.

2. Configure SMB over QUIC normally. Starting in Windows Admin Center 2110, the
option to configure KDC proxy in SMB over QUIC is automatically enabled and you
don't need to perform extra steps on the file servers. The default KDC proxy port is
443 and assigned automatically by Windows Admin Center.

Note

You cannot configure an SMB over QUIC server joined to a Workgroup using
Windows Admin Center. You must join the server to an Active Directory
domain or use the step in Manual Method section.

3. Configure the following group policy setting to apply to the Windows 11 device:

Computers > Administrative templates > System > Kerberos > Specify KDC
proxy servers for Kerberos clients

The format of this group policy setting is a value name of your fully qualified
Active Directory domain name and the value will be the external name you
specified for the QUIC server. For example, where the Active Directory domain is
named corp.contoso.com and the external DNS domain is named contoso.com:

value name: corp.contoso.com

value: <https fsedge1.contoso.com:443:kdcproxy />

This Kerberos realm mapping means that if user [email protected] tried to
connect to a file server named fsedge1.contoso.com, the KDC Proxy will know to
forward the Kerberos tickets to a domain controller in the internal corp.contoso.com
domain. The communication with the client will be over HTTPS on port 443, and
user credentials aren't directly exposed on the client-file server network.

4. Ensure that edge firewalls allow HTTPS on port 443 inbound to the file server.

5. Apply the group policy and restart the Windows 11 device.


Manual Method
1. On the file server, in an elevated PowerShell prompt, run:

NETSH http add urlacl url=https://+:443/KdcProxy user="NT authority\Network Service"

REG ADD "HKLM\SYSTEM\CurrentControlSet\Services\KPSSVC\Settings" /v HttpsClientAuth /t REG_DWORD /d 0x0 /f

REG ADD "HKLM\SYSTEM\CurrentControlSet\Services\KPSSVC\Settings" /v DisallowUnprotectedPasswordAuth /t REG_DWORD /d 0x0 /f

Get-SmbServerCertificateMapping

2. Copy the thumbprint value from the certificate associated with the SMB over QUIC
certificate mapping (there may be multiple lines, but they will all have the same
thumbprint) and paste it as the CertificateHash value for the following commands:

$guid = [Guid]::NewGuid()

Add-NetIPHttpsCertBinding -ipport 0.0.0.0:443 -CertificateHash <thumbprint> -CertificateStoreName "my" -ApplicationId "{$guid}" -NullEncryption $false

3. Add the file server's SMB over QUIC names as SPNs in Active Directory for
Kerberos. For example:

NETDOM computername ws2022-quic.corp.contoso.com /add fsedge1.contoso.com

4. Set the KDC Proxy service to automatic and start it:

Set-Service -Name kpssvc -StartupType Automatic

Start-Service -Name kpssvc

5. Configure the following group policy to apply to the Windows 11 device:

Computers > Administrative templates > System > Kerberos > Specify KDC
proxy servers for Kerberos clients

The format of this group policy setting is a value name of your fully qualified
Active Directory domain name and the value will be the external name you
specified for the QUIC server. For example, where the Active Directory domain is
named "corp.contoso.com" and the external DNS domain is named "contoso.com":

value name: corp.contoso.com


value: <https fsedge1.contoso.com:443:kdcproxy />

This Kerberos realm mapping means that if user [email protected] tried to
connect to a file server named fsedge1.contoso.com, the KDC Proxy will know to
forward the Kerberos tickets to a domain controller in the internal
corp.contoso.com domain. The communication with the client will be over HTTPS
on port 443, and user credentials aren't directly exposed on the client-file server
network.

6. Create a Windows Defender Firewall rule that inbound-enables TCP port 443 for
the KDC Proxy service to receive authentication requests.

7. Ensure that edge firewalls allow HTTPS on port 443 inbound to the file server.

8. Apply the group policy and restart the Windows 11 device.

Note

Automatic configuration of the KDC Proxy will come in a later release of SMB over
QUIC, and these server steps will no longer be necessary.

Certificate expiration and renewal


An expired SMB over QUIC certificate that you replace with a new certificate from the
issuer will contain a new thumbprint. While you can automatically renew SMB over QUIC
certificates when they expire using Active Directory Certificate Services, a renewed
certificate gets a new thumbprint as well. This effectively means that SMB over QUIC
must be reconfigured when the certificate expires, as a new thumbprint must be
mapped. You can simply select your new certificate in Windows Admin Center for the
existing SMB over QUIC configuration or use the Set-SMBServerCertificateMapping
PowerShell command to update the mapping for the new certificate. You can use Azure
Automanage for Windows Server to detect impending certificate expiration and prevent
an outage. For more information, review Azure Automanage for Windows Server.
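
As a sketch, updating the existing mapping for a renewed certificate might look like the following; the name and thumbprint are placeholders for your own values:

PowerShell

# Point the existing name mapping at the renewed certificate's thumbprint
Set-SmbServerCertificateMapping -Name 'fsedge1.contoso.com' -Thumbprint '<new thumbprint>' -StoreName 'My'

Confirm the change afterward with Get-SmbServerCertificateMapping; every mapped name should report the new thumbprint.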

Notes
- Windows Server 2022 Datacenter: Azure Edition will also eventually be available on Azure Stack HCI 21H2, for customers not using Azure public cloud.
- We recommend making read-only domain controllers, configured only with the passwords of mobile users, available to the file server.
- Users should have strong passwords or, ideally, be configured with a passwordless strategy using Windows Hello for Business MFA or smart cards.
- Configure an account lockout policy for mobile users through fine-grained password policy, and deploy intrusion protection software to detect brute force or password spray attacks.
- You can't configure SMB over QUIC using WAC when the SMB server is in a workgroup (that is, not AD domain joined). In that scenario, you must use the New-SmbServerCertificateMapping cmdlet and the Manual Method steps for KDC Proxy configuration.

More references
Storage at Microsoft blog

QUIC Working Group homepage

Microsoft MsQuic GitHub homepage

QUIC Wikipedia

TLS 1.3 Working Group homepage

Taking Transport Layer Security (TLS) to the next level with TLS 1.3
SMB compression
Article • 05/18/2023

Applies to: Windows Server 2022, Windows 11

SMB compression allows an administrator, user, or application to request compression of
files as they transfer over the network. This removes the need to first manually deflate a
file with an application, copy it, and then inflate it on the destination computer.
Compressed files consume less network bandwidth and take less time to transfer, at the
cost of slightly increased CPU usage during transfers. SMB compression is most effective
on networks with less bandwidth, such as a client's 1 Gbps ethernet or Wi-Fi network; a
file transfer over an uncongested 100 Gbps ethernet network between two servers with
flash storage may be as fast without SMB compression in practice, but will still create
less congestion for other applications.

SMB compression in Windows has the following characteristics:

- Supports the compression algorithms XPRESS (LZ77), XPRESS Huffman (LZ77+Huffman), LZNT1, or PATTERN_V1. XPRESS is used automatically.
- Supports SMB signing and SMB encryption
- Supports SMB over QUIC
- Supports SMB Multichannel
- Doesn't support SMB Direct over RDMA

For a demonstration of SMB compression, watch this video:


https://www.youtube-nocookie.com/embed/zpMS6w33H7U

Requirements
To use SMB compression in a traditional client-file server workload, you need the
following:

- A file server running Windows Server 2022 on-premises or in Azure
- A Windows 11 (Windows for business) computer
- Windows Admin Center (Homepage)

Configuring SMB compression


You can configure SMB compression from both a client and server perspective. Client
and server don't refer to a particular edition like Windows Server 2022 or Windows 11,
but instead to the architecture of a file transfer between two computers. Both Windows
Server 2022 and Windows 11 support being a client or server of SMB compression.

Requesting SMB compression on file shares


You can configure shares to always request compression when connected to by clients.
You can use Windows Admin Center or PowerShell.

Using Windows Admin Center


1. Install Windows Admin Center and connect to a Windows Server 2022 file server.
2. Click on the Files and file sharing menu item.
3. Click on File shares.
4. Edit an existing share or create a new share.
5. Select Compress data and then click Add or Edit.

Using PowerShell
1. Open an elevated PowerShell command prompt as an administrator.

2. Create a new share with compression using New-SMBShare with the -CompressData
$true parameter and argument. For example:

PowerShell

New-SmbShare -Name "Sales" -Path "C:\sales" -CompressData $true


3. Edit an existing share with compression using Set-SMBShare with the -CompressData
$true parameter and argument. For example:

PowerShell

Set-SmbShare -Name "Sales" -CompressData $true

Requesting SMB compression on mapped drives


You can request that all data copied over a mapped drive be compressed. This can be
done as part of a logon script or when run manually.

PowerShell

1. Open a PowerShell command prompt.

2. Map a drive using New-SMBMapping with the -CompressNetworkTraffic $true
parameter and argument. For example:

PowerShell

New-SmbMapping -LocalPath "Z:" -RemotePath "\\fs1.corp.contoso.com\sales" -CompressNetworkTraffic $true

Requesting SMB compression with copy tools


You can request that SMB compression be attempted for particular files using robocopy
or xcopy.

Note

If you want File Explorer, third party copy tools, or applications to use compression,
map drives with compression, enable compression on shares, or set SMB clients to
always compress.

Robocopy
1. Open a CMD prompt or PowerShell command prompt.

2. Copy with the /COMPRESS flag. For example:

ROBOCOPY c:\hypervdisks \\hypervcluster21.corp.contoso.com\disks$ *.vhdx /COMPRESS

Always require or always reject compression requests


Starting in Windows Server 2022 with update KB5016693 (OS Build 20348.946) and
Windows 11 with update KB5016691 (OS Build 22000.918) you can configure an SMB
client or SMB server to always request compression and to always reject requests for
compression. You can now use Group Policy or PowerShell; in the initial release of
Windows 11 and Windows Server 2022, you could only use registry settings to control
most of these behaviors and you could not configure an SMB server to always request
compression despite its share settings. An SMB client and SMB server refers to the SMB
services, not to a Windows edition or SKU. All of these SMB changes take effect
immediately without a reboot.

Always attempt compression (SMB client)

Group Policy

1. Run the Group Policy Management Console for your Active Directory domain
and create or navigate to a group policy.
2. Expand policy Computer Configuration\Policies\Administrative
Templates\Network\Lanman Workstation.
3. Enable policy Use SMB Compression by Default.
4. Close the policy editor.

Never compress (SMB client)

Group Policy

1. Run the Group Policy Management Console for your Active Directory domain
and create or navigate to a group policy.
2. Expand policy Computer Configuration\Policies\Administrative
Templates\Network\Lanman Workstation.
3. Enable policy Disable SMB Compression.
4. Close the policy editor.

Always attempt compression (SMB server)

Group Policy

1. Run the Group Policy Management Console for your Active Directory domain
and create or navigate to a group policy.
2. Expand policy Computer Configuration\Policies\Administrative
Templates\Network\Lanman Server.
3. Enable policy Request traffic compression for all shares.
4. Close the policy editor.

Never compress (SMB server)

Group Policy

1. Run the Group Policy Management Console for your Active Directory domain
and create or navigate to a group policy.
2. Expand policy Computer Configuration\Policies\Administrative
Templates\Network\Lanman Server.
3. Enable policy Disable SMB Compression.
4. Close the policy editor.
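
As noted above, these behaviors can also be set with PowerShell instead of Group Policy on builds that include updates KB5016693/KB5016691. Treat the parameter names below as a sketch and verify them with Get-Help on your system:

PowerShell

# SMB client: always request compression, or never compress
Set-SmbClientConfiguration -RequestCompression $true
Set-SmbClientConfiguration -DisableCompression $true

# SMB server: always request compression (regardless of share settings), or never compress
Set-SmbServerConfiguration -RequestCompression $true
Set-SmbServerConfiguration -DisableCompression $true

Like the Group Policy settings, these changes take effect immediately without a reboot.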

Understanding and controlling compression behaviors

Starting in Windows Server 2022 with update KB5016693 (OS Build 20348.946) and
Windows 11 with update KB5016691 (OS Build 22000.918), SMB by default always
attempts to compress a file when a client or server requests it, without using
compression sampling.
Note

In the original release of Windows Server 2022 and Windows 11, SMB compression
defaulted to an algorithm where it attempted to compress the first 524,288,000
bytes (500 MiB) of a file during transfer and tracked whether at least 104,857,600
bytes (100 MiB) compressed within that 500 MiB range. If less than 100 MiB was
compressible, SMB compression stopped trying to compress the rest of the file. If
at least 100 MiB compressed, SMB compression attempted to compress the rest of
the file. With this behavior change, sampling is now disabled by default and SMB
always attempts to compress the entire file when a client or server requests it.

Testing SMB compression


A simple way to test your compression configuration is using VHDX files. You can create
and mount a VHDX, add some files to it, then dismount the VHDX and copy it as a file.
Alternatively, you can just copy an existing dismounted virtual machine VHDX file, as
much of its file contents will compress. For an example of creating a VHDX test file:

1. Start Diskmgmt.msc.

2. Select Local Disk (C:) by clicking on it.

3. Click Action, then Create VHD.

4. Specify a file path, set the size to "25 GB", select VHDX and Fixed size, then click
OK.

5. In Diskmgmt, right-click your VHDX now shown as "Not initialized", click
Initialize disk, and click OK. Right-click on the disk's Unallocated section and click
New Simple Volume, then Next for all menu prompts, then click Finish.

6. Right-click on the disk and click Detach VHD, then click OK.

7. In File Explorer, double-click that VHDX file to mount it. Copy a few MB of
uncompressible files, such as JPG format, then right-click the mounted disk and
click Eject.

You now have a large test file with compressed contents.

Testing SMB compression between a pair of VMs running on the same Hyper-V host
may not show time savings because the virtual switch is 10 Gbps and has no congestion,
plus modern hypervisors often use flash storage. Test your compression over the real
networks you plan to use. You can also reduce the network bandwidth on Hyper-V VMs
for testing purposes using Set-VMNetworkAdapter with -MaximumBandwidth set to 1Gb ,
for example.

To see how well compression is working, you can robocopy the same file to a server
twice, once with the /compress flag and again without compression, deleting the server
file between each test. If the file is compressing, you should see less network utilization
in Task Manager and a lower copy time. You can also observe the SMB server's
Performance Monitor object "SMB Server Shares" for its "Compressed Requests/sec" and
"Compressed Responses/sec" counters.
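
The comparison described above can be scripted. This is a sketch only; the source path and share name are placeholders for your own environment:

PowerShell

# Time the same copy twice, with and without the /COMPRESS flag
$withCompress = Measure-Command { robocopy C:\hypervdisks \\fs1.corp.contoso.com\disks test.vhdx /COMPRESS }

# Delete the server copy between tests so both runs do the same work
Remove-Item \\fs1.corp.contoso.com\disks\test.vhdx
$withoutCompress = Measure-Command { robocopy C:\hypervdisks \\fs1.corp.contoso.com\disks test.vhdx }

"With /COMPRESS: $($withCompress.TotalSeconds)s; without: $($withoutCompress.TotalSeconds)s"

If the file compresses well, the /COMPRESS run should finish faster and show lower network utilization in Task Manager.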
RDMA and SMB Direct

SMB compression doesn't support SMB Direct and RDMA. This means that even if the
client requests compression and the server supports it, compression will not be
attempted with SMB Direct and RDMA. Support for SMB compression with SMB Direct
and RDMA will come after the Windows Server 2022 and Windows 11 public previews.
SMB security enhancements
Article • 05/18/2023

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Azure Stack HCI version 21H2,
Windows 11, Windows 10

This article explains the SMB security enhancements in Windows Server and Windows.

SMB Encryption
SMB Encryption provides SMB data end-to-end encryption and protects data from
eavesdropping occurrences on untrusted networks. You can deploy SMB Encryption with
minimal effort, but it might require other costs for specialized hardware or software. It
has no requirements for Internet Protocol security (IPsec) or WAN accelerators. SMB
Encryption can be configured on a per share basis, for the entire file server, or when
mapping drives.

Note

SMB Encryption does not cover security at rest, which is typically handled by
BitLocker Drive Encryption.

You can consider SMB Encryption for any scenario in which sensitive data needs to be
protected from interception attacks. Possible scenarios include:

- You move an information worker’s sensitive data by using the SMB protocol. SMB Encryption offers an end-to-end privacy and integrity assurance between the file server and the client. It provides this security regardless of the networks traversed, such as wide area network (WAN) connections maintained by non-Microsoft providers.
- SMB 3.0 enables file servers to provide continuously available storage for server applications, such as SQL Server or Hyper-V. Enabling SMB Encryption provides an opportunity to protect that information from snooping attacks. SMB Encryption is simpler to use than the dedicated hardware solutions that are required for most storage area networks (SANs).

Windows Server 2022 and Windows 11 introduce AES-256-GCM and AES-256-CCM
cryptographic suites for SMB 3.1.1 encryption. Windows automatically negotiates this
more advanced cipher method when connecting to another computer that supports it.
You can also mandate this method through Group Policy. Windows still supports AES-
128-GCM and AES-128-CCM. By default, AES-128-GCM is negotiated with SMB 3.1.1,
bringing the best balance of security and performance.

Windows Server 2022 and Windows 11 SMB Direct now support encryption. Previously,
enabling SMB encryption disabled direct data placement, making RDMA performance as
slow as TCP. Now data is encrypted before placement, leading to relatively minor
performance degradation while adding AES-128 and AES-256 protected packet privacy.
You can enable encryption using Windows Admin Center, Set-SmbServerConfiguration,
or UNC Hardening group policy .

Furthermore, Windows Server failover clusters now support granular control of
encrypting intra-node storage communications for Cluster Shared Volumes (CSV) and
the storage bus layer (SBL). This support means that when using Storage Spaces Direct
and SMB Direct, you can encrypt east-west communications within the cluster itself for
higher security.

Important

There is a notable performance operating cost with any end-to-end encryption
protection when compared to non-encrypted.

Enable SMB Encryption


You can enable SMB Encryption for the entire file server or only for specific file shares.
Use one of the following procedures to enable SMB Encryption.

Enable SMB Encryption with Windows Admin Center


1. Download and install Windows Admin Center.
2. Connect to the file server.
3. Select Files & file sharing.
4. Select the File shares tab.
5. To require encryption on a share, select the share name and choose Enable SMB
encryption.
6. To require encryption on the server, select File server settings.
7. Under SMB 3 encryption, select Required from all clients (others are rejected),
and then choose Save.
Enable SMB Encryption with UNC Hardening
UNC Hardening lets you configure SMB clients to require encryption regardless of server
encryption settings. This feature helps prevent interception attacks. To configure UNC
Hardening, see MS15-011: Vulnerability in Group Policy could allow remote code
execution . For more information on interception attack defenses, see How to Defend
Users from Interception Attacks via SMB Client Defense .

Enable SMB Encryption with Windows PowerShell


1. Sign into your server and run PowerShell on your computer in an elevated session.

2. To enable SMB Encryption for an individual file share, run the following command.

PowerShell

Set-SmbShare -Name <sharename> -EncryptData $true

3. To enable SMB Encryption for the entire file server, run the following command.

PowerShell

Set-SmbServerConfiguration -EncryptData $true

4. To create a new SMB file share with SMB Encryption enabled, run the following
command.

PowerShell

New-SmbShare -Name <sharename> -Path <pathname> -EncryptData $true

Map drives with encryption


1. To enable SMB Encryption when mapping a drive using PowerShell, run the
following command.

PowerShell

New-SMBMapping -LocalPath <drive letter> -RemotePath <UNC path> -RequirePrivacy $TRUE
2. To enable SMB Encryption when mapping a drive using CMD, run the following
command.

Windows Command Prompt

NET USE <drive letter> <UNC path> /REQUIREPRIVACY

Considerations for deploying SMB Encryption


By default, when SMB Encryption is enabled for a file share or server, only SMB 3.0, 3.02,
and 3.1.1 clients are allowed to access the specified file shares. This limit enforces the
administrator's intent of safeguarding the data for all clients that access the shares.

However, in some circumstances, an administrator might want to allow unencrypted
access for clients that don't support SMB 3.x. This situation could occur during a
transition period when mixed client operating system versions are being used. To allow
unencrypted access for clients that don't support SMB 3.x, run the following command
in Windows PowerShell:

PowerShell

Set-SmbServerConfiguration -RejectUnencryptedAccess $false

Note

We do not recommend allowing unencrypted access when you have deployed
encryption. Update the clients to support encryption instead.

The preauthentication integrity capability described in the next section prevents an
interception attack from downgrading a connection from SMB 3.1.1 to SMB 2.x (which
would use unencrypted access). However, it doesn't prevent a downgrade to SMB 1.0,
which would also result in unencrypted access.

To guarantee that SMB 3.1.1 clients always use SMB Encryption to access encrypted
shares, you must disable the SMB 1.0 server. For instructions, connect to the server with
Windows Admin Center and open the Files & File Sharing extension, and then select the
File shares tab to be prompted to uninstall. For more information, see How to detect,
enable and disable SMBv1, SMBv2, and SMBv3 in Windows.
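As an alternative to Windows Admin Center, the SMB 1.0 server can be turned off from an elevated PowerShell session. This is a sketch; Uninstall-WindowsFeature applies to Windows Server only, and you should confirm nothing depends on SMB 1.0 first.

PowerShell

# Stop the server from accepting SMB 1.0 connections
Set-SmbServerConfiguration -EnableSMB1Protocol $false

# Remove the SMB 1.0 feature entirely (Windows Server)
Uninstall-WindowsFeature -Name FS-SMB1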

If the -RejectUnencryptedAccess setting is left at its default setting of $true, only encryption-capable SMB 3.x clients are allowed to access the file shares (SMB 1.0 clients are also rejected).

Consider the following issues as you deploy SMB Encryption:

SMB Encryption uses the Advanced Encryption Standard (AES)-GCM and AES-CCM
algorithms to encrypt and decrypt the data. AES-CMAC and AES-GMAC also
provide data integrity validation (signing) for encrypted file shares, regardless of
the SMB signing settings. If you want to enable SMB signing without encryption,
you can continue to do so. For more information, see Configure SMB Signing with
Confidence .
You might encounter issues when you attempt to access the file share or server if
your organization uses wide area network (WAN) acceleration appliances.
With a default configuration (where there's no unencrypted access allowed to
encrypted file shares), if clients that don't support SMB 3.x attempt to access an
encrypted file share, Event ID 1003 is logged to the Microsoft-Windows-
SmbServer/Operational event log, and the client receives an Access denied error
message.
SMB Encryption and the Encrypting File System (EFS) in the NTFS file system are
unrelated, and SMB Encryption doesn't require or depend on using EFS.
SMB Encryption and the BitLocker Drive Encryption are unrelated, and SMB
Encryption doesn't require or depend on using BitLocker Drive Encryption.
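For example, to review the Event ID 1003 entries mentioned above, you can query the operational log from an elevated PowerShell session (a sketch):

PowerShell

# List recent access-denied events for clients that can't negotiate encryption
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-SMBServer/Operational'
    Id      = 1003
} -MaxEvents 10 | Format-List TimeCreated, Message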

Preauthentication integrity
SMB 3.1.1 is capable of detecting interception attacks that attempt to downgrade the
protocol or the capabilities that the client and server negotiate by use of
preauthentication integrity. Preauthentication integrity is a mandatory feature in SMB
3.1.1. It protects against any tampering with Negotiate and Session Setup messages by
using cryptographic hashing. The resulting hash is used as input to derive the session’s
cryptographic keys, including its signing key. This process enables the client and server
to mutually trust the connection and session properties. When the client or the server
detects such an attack, the connection is disconnected, and event ID 1005 is logged in
the Microsoft-Windows-SmbServer/Operational event log.

Because of this protection, and to take advantage of the full capabilities of SMB
Encryption, we strongly recommend that you disable the SMB 1.0 server. For
instructions, connect to the server with Windows Admin Center and open the Files &
File Sharing extension, and then select the File shares tab to be prompted to uninstall.
For more information, see How to detect, enable and disable SMBv1, SMBv2, and SMBv3
in Windows.
New signing algorithm
SMB 3.0 and 3.02 use a more recent algorithm for signing: Advanced Encryption
Standard (AES)-cipher-based message authentication code (CMAC). SMB 2.0 used the
older HMAC-SHA256 signing algorithm. AES-CMAC and AES-CCM can significantly
accelerate data signing and encryption on most modern CPUs that have AES instruction
support.

Windows Server 2022 and Windows 11 introduce AES-128-GMAC for SMB 3.1.1 signing.
Windows automatically negotiates this better-performing cipher method when
connecting to another computer that supports it. Windows still supports AES-128-
CMAC. For more information, see Configure SMB Signing with Confidence .
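To check which dialect a client connection negotiated, you can inspect active connections from PowerShell. As a sketch (the Encrypted property may not be present on older versions):

PowerShell

# Show active outbound SMB connections and their negotiated dialect
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect, Encrypted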

Disabling SMB 1.0


SMB 1.0 isn't installed by default starting in Windows Server version 1709 and Windows
10 version 1709. For instructions on removing SMB1, connect to the server with
Windows Admin Center, open the Files & File Sharing extension, and then select the File
shares tab to be prompted to uninstall. For more information, see How to detect, enable
and disable SMBv1, SMBv2, and SMBv3 in Windows.

If it's still installed, you should disable SMB1 immediately. For more information on
detecting and disabling SMB 1.0 usage, see Stop using SMB1 . For a clearinghouse of
software that previously or currently requires SMB 1.0, see SMB1 Product
Clearinghouse .

Related links
Overview of file sharing using the SMB 3 protocol in Windows Server
Windows Server Storage documentation
Scale-Out File Server for application data overview
SMB: File and printer sharing ports should be open
Article • 03/22/2023

Applies To: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2

When a Best Practices Analyzer scan for Server Message Block (SMB)-based network
services identifies that firewall ports for file and printer sharing aren't open, follow the
steps in this article to resolve the issue.

| Operating system | Product/Feature | Severity | Category |
| --- | --- | --- | --- |
| Windows Server | File Services | Error | Configuration |

Note

This article addresses a specific issue identified by a Best Practices Analyzer scan.
Apply the information in this article only to computers that have a File Services Best
Practices Analyzer scan that reports the specific port issue. For more information
about best practices and scans, see Best Practices Analyzer.

Identify the issue


A File Services Best Practices Analyzer scan reports that firewall ports necessary for file
and printer sharing aren't open (ports 445 and 139).

The issue prevents computer access to shared folders and other SMB-based network
services on the server.

Resolve the issue


To resolve the issue, enable file and printer sharing to communicate through the
computer's firewall. To complete the procedure, you must be a member of the
Administrators group (or equivalent), at a minimum.

To open the firewall ports and enable file and printer sharing, complete the following
steps:
1. Open Control Panel, select System and Security, and then select Windows
Defender Firewall.

2. On the left, select Advanced settings. The Windows Defender Firewall console
opens and shows the advanced settings.

3. In the Windows Defender Firewall console on the left, select Inbound Rules.

4. Under Inbound Rules, locate the following two rules:

File and Printer Sharing (NB-Session-In)

File and Printer Sharing (SMB-In)

5. For each rule, select and hold (or right-click) the rule, and then select Enable Rule.
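The same two rules can also be enabled from an elevated PowerShell prompt. This is a sketch; the display names below match the built-in rules on English-language systems and may differ in other locales.

PowerShell

# Enable the built-in inbound rules for SMB (TCP 445) and NetBIOS session (TCP 139)
Enable-NetFirewallRule -DisplayName "File and Printer Sharing (SMB-In)"
Enable-NetFirewallRule -DisplayName "File and Printer Sharing (NB-Session-In)"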

Related links
Understanding shared folders and the Windows Firewall
Secure SMB Traffic in Windows Server
Article • 04/01/2022

As a defense-in-depth measure, you can use segmentation and isolation techniques to
secure SMB traffic and reduce threats between devices on your network.

SMB is used for file sharing, printing, and inter-process communication such as named
pipes and RPC. It's also used as a network data fabric for technologies such as Storage
Spaces Direct, Storage Replica, Hyper-V Live Migration, and Cluster Shared Volumes. Use
the following sections to configure SMB traffic segmentation and endpoint isolation to
help prevent outbound and lateral network communications.

Block inbound SMB access


Block TCP port 445 inbound from the internet at your corporate hardware firewalls.
Blocking inbound SMB traffic protects devices inside your network by preventing access
from the internet.

If you want users to access their files inbound at the edge of your network, you can use
SMB over QUIC. This uses UDP port 443 by default and provides a TLS 1.3-encrypted
security tunnel like a VPN for SMB traffic. The solution requires Windows 11 and
Windows Server 2022 Datacenter: Azure Edition file servers running on Azure Stack HCI.
For more information, see SMB over QUIC .

Block outbound SMB access


Block TCP port 445 outbound to the internet at your corporate firewall. Blocking
outbound SMB traffic prevents devices inside your network from sending data using
SMB to the internet.
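On an individual Windows device, a comparable host-based block can be sketched with a Windows Defender Firewall rule (the rule name here is illustrative; scope it with -RemoteAddress before deploying broadly):

PowerShell

# Block outbound SMB (TCP 445) from this host to any destination
New-NetFirewallRule -DisplayName "Block outbound SMB 445" -Direction Outbound -Protocol TCP -RemotePort 445 -Action Block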

It is unlikely you need to allow any outbound SMB using TCP port 445 to the internet
unless you require it as part of a public cloud offering. The primary scenarios include
Azure Files and Office 365.

If you are using Azure Files SMB, use a VPN for outbound traffic. By using a VPN, you restrict the outbound traffic to the required service IP ranges. For more information about Azure Cloud and Office 365 IP address ranges, see:

Azure IP ranges and service tags: public cloud ,US government cloud , Germany
cloud , or China cloud . The JSON files are updated weekly and include
versioning both for the full file and each individual service tag. The AzureCloud tag
provides the IP ranges for the cloud (Public, US government, Germany, or China)
and is grouped by region within that cloud. Service tags in the file will increase as
Azure services are added.
Office 365 URLs and IP address ranges.

With Windows 11 and Windows Server 2022 Datacenter: Azure Edition, you can use SMB
over QUIC to connect to file servers in Azure. This uses UDP port 443 by default and
provides a TLS 1.3-encrypted security tunnel like a VPN for the SMB traffic. For more
information, see SMB over QUIC .

Inventory SMB usage and shares


By inventorying your network's SMB traffic, you get an understanding of traffic that is
occurring and can determine if it's necessary. Use the following checklist of questions to
help identify unnecessary SMB traffic.

For server endpoints:

1. Which server endpoints require inbound SMB access to do their role? Do they
need inbound access from all clients, certain networks, or certain nodes?
2. Of the remaining server endpoints, is inbound SMB access necessary?

For client endpoints:

1. Which client endpoints (for example, Windows 10) require inbound SMB access?
Do they need inbound access from all clients, certain networks, or certain nodes?
2. Of the remaining client endpoints, is inbound SMB access necessary?
3. Of the remaining client endpoints, do they need to run the SMB server service?

For all endpoints, determine if you allow outbound SMB in the safest and most minimal
fashion.

Review server built-in roles and features that require SMB inbound. For example, file
servers and domain controllers require SMB inbound to do their role. For more
information on built-in roles and feature network port requirements, see Service
overview and network port requirements for Windows.

Review servers that need to be accessed from inside the network. For example, domain
controllers and file servers likely need to be accessed anywhere in the network.
However, application server access may be limited to a set of other application servers
on the same subnet. You can use the following tools and features to help you inventory
SMB access:

Use Get-FileShares script to examine shares on servers and clients.


Enable an audit trail of SMB inbound access using the Group Policy setting Security
Settings\Advanced Audit Policy Configuration\Audit Policies\Object Access\File
Share . Since the number of events may be large, consider enabling for a specified
amount of time or use Azure Monitor .

Examining SMB logs lets you know which nodes are communicating with endpoints over
SMB. You can decide if an endpoint's shares are in use and understand which shares need to exist.
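As a sketch, the File Share audit subcategory can also be enabled from an elevated PowerShell or command prompt with auditpol:

PowerShell

auditpol /set /subcategory:"File Share" /success:enable /failure:enable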

Configure Windows Defender Firewall


Use firewall rules to add extra connection security. Configure rules to block both
inbound and outbound communications that include exceptions. An outbound firewall
policy that prevents use of SMB connections both outside and inside your managed
network while allowing access to the minimum set of servers and no other devices is a
lateral defense-in-depth measure.

For information on the SMB firewall rules you need to set for inbound and outbound
connections, see the support article Preventing SMB traffic from lateral connections and
entering or leaving the network .

The support article includes templates for:

Inbound rules that are based on any kind of network profile.


Outbound rules for private/domain (trusted) networks.
Outbound rules for guest/public (untrusted) networks. This template is important
to enforce on mobile devices and home-based telecommuters that are not behind
your firewall that is blocking outbound traffic. Enforcing these rules on laptops
reduces the odds of phishing attacks that send users to malicious servers to
harvest credentials or run attack code.
Outbound rules that contain an override allowlist for domain controllers and file
servers called Allow the connection if secure.

To use the null encapsulation IPSEC authentication, you must create a Connection
Security rule on all computers in your network that are participating in the rules.
Otherwise, the firewall exceptions won't work and you'll only be arbitrarily blocking traffic.

Caution

You should test the Security Connection rule before broad deployment. An
incorrect rule could prevent users from accessing their data.
To create a Connection Security rule, use Windows Defender Firewall with Advanced
Security control panel or snap-in:

1. In Windows Defender Firewall, select Connection Security Rules and choose a New
rule.
2. In Rule Type, select Isolation then select Next.
3. In Requirements, select Request authentication for inbound and outbound
connections then select Next.
4. In Authentication Method, select Computer and User (Kerberos V5) then select
Next.
5. In Profile, check all profiles (Domain, Private, Public) then select Next.
6. Enter a name for your rule, then select Finish.

Remember, the Connection Security rule must be created on all clients and servers
participating in your inbound and outbound rules or they will be blocked from
connecting SMB outbound. These rules may already be in place from other security
efforts in your environment and like the firewall inbound/outbound rules, can be
deployed via group policy.

When configuring rules based on the templates in the Preventing SMB traffic from
lateral connections and entering or leaving the network support article, set the
following to customize the Allow the connection if secure action:

1. In the Action step, select Allow the connection if it is secure then select
Customize.
2. In Customize Allow if Secure Settings, select Allow the connection to use null
encapsulation.

The Allow the connection if it is secure option allows override of a global block rule. You
can use the easy but least secure Allow the connection to use null encapsulation option
with override block rules, which relies on Kerberos and domain membership for
authentication. Windows Defender Firewall allows for more secure options like IPSEC.

For more information about configuring the firewall, see Windows Defender Firewall
with Advanced Security deployment overview.

Disable SMB Server if unused


Windows clients and some of your Windows Servers on your network may not require
the SMB Server service to be running. If the SMB Server service isn't required, you can
disable the service. Before disabling SMB Server service, be sure no applications and
processes on the computer require the service.
You can use Group Policy Preferences to disable the service on a large number of
machines when you are ready to implement. For more information about configuring
Group Policy Preferences, see Configure a Service Item.
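For an individual machine, the service can be stopped and disabled from an elevated PowerShell session. This is a minimal sketch; confirm nothing requires the service first.

PowerShell

# Stop the SMB server service (LanmanServer) and prevent it from starting again
Stop-Service -Name LanmanServer -Force
Set-Service -Name LanmanServer -StartupType Disabled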

Test and deploy using policy


Begin by testing using small-scale, hand-made deployments on select servers and
clients. Use phased group policy rollouts to make these changes. For example, start with
the heaviest user of SMB such as your own IT team. If your team's laptops and apps and
file share access work well after deploying your inbound and outbound firewall rules,
create test group policy within your broad test and QA environments. Based on results,
start sampling some departmental machines, then expand out.

Next steps
Watch Jessica Payne's Ignite conference session Demystifying the Windows Firewall
Protect SMB traffic from interception
Article • 12/13/2022

In this article, you'll learn about some of the ways an attacker might use interception
techniques against the SMB protocol and how you might mitigate an attack. The
concepts will support you with developing your own defense-in-depth strategy for the
SMB protocol.

What is an interception attack?


An adversary-in-the-middle (AITM) attack intends to modify the network
communication between a client and server, allowing a threat actor to intercept traffic.
After interception, a malicious actor may have the ability to spoof, tamper with, disclose, or
deny access to your organization's data or account credentials.

Many organizations rely on SMB to share files between users and to support other
applications or technologies like Active Directory Domain Services. With such broad
adoption, SMB is a popular target for attackers and has the potential for business-wide
impact.

For example, an AITM attack might be used for industrial or state-level espionage,
extortion, or finding sensitive security data stored in files. It could also be used as part of
a wider attack to enable the attacker to move laterally within your network or to target
multiple endpoints.

Attacks are constantly evolving, with attackers often using a combination of established
and new techniques. When protecting your system against SMB interception, there are
two main goals:

Reduce the number of attack methods available.


Secure the pathways you present to your users.

Due to the diversity of technology and clients within many organizations, a well-rounded
defense will combine multiple methods and will follow the Zero Trust
principles. Learn more about Zero Trust in the What is Zero Trust? article.

Now you'll learn about some of the typical good practice configurations to reduce the
risk of SMB interception.

Reducing the attack methods available


To protect your system against SMB interception attacks, your first step should be to
reduce the attack surface. Attack surfaces are places where your system is vulnerable to
cyberthreats and compromise.

In the following sections, we'll discuss some of the basic steps you should take to reduce
the attack surface.

Install updates
Regularly install all available security updates on both your Windows Server and client
systems as close to their release as your organization allows. Installing the latest security
updates is the quickest and easiest way to protect your systems from the current known
security vulnerabilities affecting not just SMB, but all Microsoft products and services.

You can install security updates using a few different methods depending on your
organization's requirements. Typical methods are:

Azure Update Management


Windows Update
Windows Server Update Services (WSUS)
Software updates in Endpoint Configuration Manager

Consider subscribing to notifications in the Microsoft Security Response Center (MSRC)
Security Update Guide . The Security Update Guide Notification System will let you
know when software updates are published to address new and existing Common
Vulnerabilities and Exposures (CVEs).

Remove SMB 1.0


You should remove or disable the SMB 1.0 feature from all Windows Servers and clients
that don't require it. For systems that do require SMB 1.0, you should move to SMB 2.0
or higher as soon as possible. Starting in the Windows 10 Fall Creators Update and
Windows Server 2019, SMB 1.0 is no longer installed by default.

Tip

Windows 10 Home and Windows 10 Pro still contain the SMB 1.0 client by default
after a clean installation or in-place upgrade. This behavior is changing in Windows
11; you can read more in the article SMB1 now disabled by default for Windows 11
Home Insiders builds .
Removing SMB 1.0 protects your systems by eliminating several well known security
vulnerabilities. SMB 1.0 lacks the security features of SMB 2.0 and later that help protect
against interception. For example, to prevent a compromised connection SMB 3.0 or
later uses pre-authentication integrity, encryption, and signing. Learn more in the SMB
security enhancements article.

Before removing the SMB 1.0 feature, be sure no applications and processes on the
computer require it. For more information on how to detect and disable SMB 1.0, see
the article How to detect, enable and disable SMBv1, SMBv2, and SMBv3 in Windows.

You can also use the Windows Admin Center Files and file sharing tool to both quickly
enable the auditing of SMB1 client connections and to uninstall SMB 1.

Disable guest authentication and fallback


In SMB 1.0 when a user's credentials fail, the SMB client will try guest access. Starting in
Windows 10, version 1709, and Windows Server 2019, SMB2 and SMB3 clients no longer
allow guest account access or fallback to the guest account by default. You should use
SMB 2.0 or higher and disable the use of SMB guest access on any systems where guest
access isn't disabled by default.

 Tip

Windows 11 Home and Pro editions are unchanged from their previous default
behavior; they allow guest authentication by default.

When guest access is disabled, it prevents a malicious actor from creating a server and
tricking users into accessing it using guest access. For example, when a user accesses
the spoofed share, their credentials would fail and SMB 1.0 would fall back to using
guest access. Disabling guest access stops the SMB session from connecting, preventing
the user from accessing the share and any malicious files.

To prevent the use of guest fallback on Windows SMB clients where guest access isn't
disabled by default (including Windows Server):

Group Policy

1. Open the Group Policy Management Console.


2. In the console tree, select Computer Configuration > Administrative
Templates > Network > Lanman Workstation.
3. For the setting, right-click Enable insecure guest logons and select Edit.
4. Select Enabled and select OK.

To learn more about guest access default behavior, read the article Guest access in
SMB2 and SMB3 disabled by default in Windows.
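The Group Policy setting above corresponds to the AllowInsecureGuestAuth value on the SMB client. As a sketch, it can be set directly from an elevated PowerShell session (Group Policy remains the preferred method in managed environments):

PowerShell

# 0 = disallow insecure guest logons on the SMB client
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" -Name AllowInsecureGuestAuth -Value 0 -Type DWord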

Disable the WebDAV protocol


Windows clients may not require the WebClient service to be running. The service
provides the Web Distributed Authoring and Versioning (WebDAV) protocol. If your
clients aren't accessing SMB shares over HTTP or HTTPS using WebDAV, you can disable
the service.

When your users are accessing files using WebDAV, there's no method to force a TLS-based
connection over HTTPS. For example, your server may be configured to require
SMB signing or encryption; however, the WebClient service could connect to HTTP/80 if
WebDAV has been enabled. Any resulting connection would be unencrypted, regardless
of your SMB configuration.

You can use Group Policy Preferences to disable the service on a large number of
machines when you're ready to implement. For more information about configuring
Group Policy Preferences, see Configure a Service Item.
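For an individual machine, a minimal sketch from an elevated PowerShell session:

PowerShell

# Stop the WebDAV (WebClient) service and prevent it from starting again
Stop-Service -Name WebClient -ErrorAction SilentlyContinue
Set-Service -Name WebClient -StartupType Disabled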

Restrict outbound SMB destinations


Block outbound SMB traffic to devices outside your network as a minimum. Blocking
outbound SMB prevents data being sent to external endpoints. Malicious actors often
try spoofing, tampering or phishing attacks that attempt to send users to malicious
endpoints disguised as friendly links or shortcuts within emails or other files. To learn
more about blocking outbound SMB access, read the Secure SMB Traffic in Windows
Server article.

Take this principle further by introducing micro-perimeters and micro-segmentation into
your architecture. Blocking outbound SMB traffic to external networks helps to prevent
the direct exfiltration of data to the internet; however, modern attacks use advanced
techniques to indirectly gain access by attacking other systems, then moving laterally
within your network. Micro-perimeters and micro-segmentation aim to reduce the
number of systems and users being able to directly connect to your SMB share unless
they explicitly need to. Learn more about Network segmentation as part of our Zero
Trust Guidance.
Secure the protocol
Your second goal is to secure the pathways between your users and their data, known as
data-in-transit protection. Data-in-transit protection typically involves the use of
encryption, interface hardening, and removing insecure protocols to improve your
resistance to attack.

In the following sections, we'll discuss some of the basic steps you should take to secure
the SMB protocol.

Use SMB 3.1.1


Windows always negotiates to the highest available protocol, so ensure your devices and
machines support SMB 3.1.1.

SMB 3.1.1 is available beginning with Windows 10 and Windows Server 2016. SMB 3.1.1
includes a new mandatory security feature called pre-authentication integrity. Pre-
authentication integrity signs or encrypts the early phases of SMB connections to
prevent the tampering of Negotiate and Session Setup messages by using
cryptographic hashing.

Cryptographic hashing means the client and server can mutually trust the connection
and session properties. Pre-authentication integrity supersedes secure dialect
negotiation introduced in SMB 3.0. You can’t turn off pre-authentication integrity, but if
a client uses an older dialect, it won’t be used.

You can enhance your security posture further by forcing the use of SMB 3.1.1 as a
minimum. To set the minimum SMB dialect to 3.1.1, from an elevated PowerShell
prompt, run the following commands:

PowerShell

Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" -Name "MinSMB2Dialect" -Value 0x000000311
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" -Name "MaxSMB2Dialect" -Value 0x000000311

To learn more about how to set the minimum SMB dialect used, see Controlling SMB
dialects .
Use UNC hardening to require signing, encryption, and
mutual authentication
Enable UNC hardening for all SMB shares by requiring at least mutual authentication
(Kerberos) and integrity (SMB signing). You should also consider evaluating privacy (SMB
encryption) instead of SMB signing. There's no need to configure both SMB signing and
encryption because encryption implicitly includes the signatures used by signing.

Caution

SMB encryption was introduced with SMB 3 in Windows 8 and Windows Server
2012. You shouldn't require encryption unless all your machines support SMB 3.0 or
later, or are third parties with SMB 3 and encryption support. If you configure SMB
encryption on clients or UNC paths hosted by servers that do not support SMB
Encryption, the SMB client will be unable to access the specified path. Also, if you
configure your server for SMB encryption and it is accessed by clients that don't
support it, those clients will again be unable to access the path.

UNC hardening gives the ability to check UNC paths for mandated security settings and
refuses to connect if a server can't meet them. Beginning with Windows Server
2016 and Windows 10, UNC hardening is enabled by default for SYSVOL and
NETLOGON shares on domain controllers. It's a highly effective tool against spoofing
and tampering because the client can authenticate the identity of the server and
validate the integrity of the SMB payloads.

When configuring UNC hardening, you can specify various UNC path patterns. For
example:

\\<Server>\<Share> - The configuration entry applies to the share that has the
specified name on the specified server.
\\*\<Share> - The configuration entry applies to the share that has the specified
name on any server.
\\<Server>\* - The configuration entry applies to any share on the specified

server.

You can use Group Policy to apply the UNC hardening feature to a large number of
machines when you're ready to implement it. For more information about configuring
UNC hardening through Group Policy, see the security bulletin MS15-011 .
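As a sketch, a hardened path entry maps to a value under the HardenedPaths registry key described in MS15-011; the share name below is illustrative:

PowerShell

# Require Kerberos mutual authentication and SMB signing for any share named "Share"
New-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths" -Name "\\*\Share" -Value "RequireMutualAuthentication=1, RequireIntegrity=1" -PropertyType String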
Map drives on demand with mandated signing or
encryption
In addition to UNC hardening, you can use signing or encryption when mapping
network drives. Beginning in Windows version 1709 and later, you can create encrypted
or signed mapped drives on demand using Windows PowerShell or Command Prompt.
You can use the NET USE command or the PowerShell New-SmbMapping command to map
drives by specifying RequireIntegrity (signing) or RequirePrivacy (encryption)
parameters.

The commands can be used by administrators or included in scripts to automate the mapping of drives that require encryption or integrity checks.

The parameters don't change how signing or encryption work, or the dialect
requirements. If you try to map a drive and the server refuses to honor your requirement
for signing or encryption, the drive mapping will fail rather than connect unsafely.

Learn about the syntax and parameters for the New-SmbMapping command in the New-SmbMapping reference article.
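For example, to map a drive that refuses to connect without SMB signing (the server and share names are placeholders):

PowerShell

# The mapping fails rather than connect unsigned
New-SmbMapping -LocalPath "S:" -RemotePath "\\server\share" -RequireIntegrity $true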

Beyond SMB
Stop using NTLM and increase your Kerberos security. You can start by enabling
auditing for NTLM usage, then reviewing the logs to find where NTLM is used.
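As a sketch, the audit policy behind Network security: Restrict NTLM: Audit incoming NTLM traffic can be set directly in the registry (Group Policy is the preferred method in managed environments):

PowerShell

# 2 = audit incoming NTLM traffic for all accounts; events appear in the
# Microsoft-Windows-NTLM/Operational log
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" -Name AuditReceivingNTLMTraffic -Value 2 -Type DWord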

Removing NTLM helps to protect you against common attacks like pass-the-hash,
brute-force, or rainbow table attacks, which NTLM is exposed to because of its use of the
older MD4/MD5 cryptographic hash functions. NTLM also isn't able to verify the server
identity, unlike more recent protocols like Kerberos, making it vulnerable to NTLM relay
attacks as well. Many of these common attacks are easily mitigated with Kerberos.

To learn how to audit NTLM as part of your effort to begin the transition to Kerberos,
see the Assessing NTLM usage article. You can also learn about detecting insecure
protocols using Azure Sentinel in the Azure Sentinel Insecure Protocols Workbook
Implementation Guide blog article.

In parallel to removing NTLM, you should consider adding more layers of protection for
offline and ticket passing attacks. Use the following items as a guide when enhancing
Kerberos security.

1. Deploy Windows Hello for Business or smart cards - Two-factor authentication
with Windows Hello for Business adds an entire new layer of protection. Learn
about Windows Hello for Business.
2. Enforce long passwords and phrases - We encourage using longer password
lengths such as 15 characters or more to increase your resistance to brute force
attacks. You should also avoid common words or phrases to make your password
even stronger.
3. Deploy Azure AD Password Protection for Active Directory Domain Services -
Use Azure AD Password Protect to block known weak passwords and terms that
are specific to your organization. To learn more, review Enforce on-premises Azure
AD Password Protection for Active Directory Domain Services.
4. Use group Managed Service Accounts (gMSA) - Services that use gMSAs, with
their randomly generated 127-character passwords, make brute force and
dictionary attacks to crack passwords incredibly time consuming. Read about what
gMSAs are in the article Group Managed Service Accounts Overview.
5. Kerberos Armoring, known as Flexible Authentication Secure Tunneling (FAST) -
FAST prevents Kerberoasting because the user's pre-authentication data is
protected and no longer subject to offline brute force or dictionary attacks. It also
prevents downgrade attacks from spoofed KDCs; to learn more, review Kerberos
Armoring.
6. Use Windows Defender Credential Guard - Credential Guard makes the local
compromise of tickets harder by preventing ticket-granting and cached service
tickets from being stolen. Learn more in the How Windows Defender Credential
Guard works article.
7. Consider requiring SCRIL: Smart Card Required for Interactive Logon - When
deploying SCRIL, AD changes the user's password to a random 128-bit value, which
users can no longer use to sign in interactively. SCRIL is typically only suitable for
environments with specific security requirements. To learn more about
passwordless strategies, review Configuring user accounts to disallow password
authentication.

Next steps
Now that you've learned about some of the security controls and mitigations to prevent SMB
interception, you'll understand there's no single step to prevent all interception attacks.
The goal is to create a thoughtful, holistic, and prioritized combination of risk
mitigations spanning multiple technologies through layered defenses.

You can continue to learn more about these concepts in the articles below.

Secure SMB traffic in Windows Server.


SMB security enhancements
SMB 3.1.1 Pre-authentication integrity in Windows 10
SMB 2 and SMB 3 security in Windows 10: the anatomy of signing and
cryptographic keys
Zero Trust Guidance Center
Network File System overview
Article • 12/07/2022

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012

This article describes the Network File System role service and features included with the
File and Storage Services server role in Windows Server. Network File System (NFS)
provides a file sharing solution for enterprises that have heterogeneous environments
that include both Windows and non-Windows computers.

Feature description
Using the NFS protocol, you can transfer files between computers running Windows and
other non-Windows operating systems, such as Linux or UNIX.

NFS in Windows Server includes Server for NFS and Client for NFS. A computer running
Windows Server can use Server for NFS to act as a NFS file server for other non-
Windows client computers. Client for NFS allows a Windows-based computer running
Windows Server to access files stored on a non-Windows NFS server.

Windows and Windows Server versions


Windows supports multiple versions of the NFS client and server, depending on
operating system version and family.

| Operating systems | NFS server versions | NFS client versions |
| --- | --- | --- |
| Windows 7, Windows 8.1, Windows 10, Windows 11 | N/A | NFSv2, NFSv3 |
| Windows Server 2008, Windows Server 2008 R2 | NFSv2, NFSv3 | NFSv2, NFSv3 |
| Windows Server 2012, Windows Server 2012 R2, Windows Server 2016, Windows Server 2019, Windows Server 2022 | NFSv2, NFSv3, NFSv4.1 | NFSv2, NFSv3 |

Practical applications
Here are some ways you can use NFS:

Use a Windows NFS file server to provide multi-protocol access to the same file
share over both SMB and NFS protocols from multi-platform clients.
Deploy a Windows NFS file server in a predominantly non-Windows operating
system environment to provide non-Windows client computers access to NFS file
shares.
Migrate applications from one operating system to another by storing the data on
file shares accessible through both SMB and NFS protocols.

New and changed functionality


New and changed functionality in Network File System includes support for the NFS
version 4.1 and improved deployment and manageability. For information about
functionality that is new or changed in Windows Server 2012, review the following table:

| Feature/functionality | New or updated | Description |
| --- | --- | --- |
| NFS version 4.1 | New | Increased security, performance, and interoperability compared to NFS version 3. |
| NFS infrastructure | Updated | Improves deployment and manageability, and increases security. |
| NFS version 3 continuous availability | Updated | Improves continuous availability on NFS version 3 clients. |
| Deployment and manageability improvements | Updated | Enables you to easily deploy and manage NFS with new Windows PowerShell cmdlets and a new WMI provider. |

NFS version 4.1


NFS version 4.1 implements all of the required aspects, in addition to some of the
optional aspects, of RFC 5661:

Pseudo file system, a file system that separates physical and logical namespace
and is compatible with NFS version 3 and NFS version 2. An alias is provided for
the exported file system, which is part of the pseudo file system.
Compound RPCs combine relevant operations and reduce chattiness.
Sessions and session trunking enable exactly-once semantics and allow continuous
availability and better performance while using multiple networks between NFS
4.1 clients and the NFS server.
NFS infrastructure
Improvements to the overall NFS infrastructure in Windows Server 2012 are detailed
below:

The Remote Procedure Call (RPC)/External Data Representation (XDR) transport
infrastructure, powered by WinSock, is available for both Server for NFS and
Client for NFS. It replaces the Transport Device Interface (TDI), offers better
support, and provides better scalability and Receive Side Scaling (RSS).
The RPC port multiplexer feature is firewall-friendly (fewer ports to manage) and
simplifies deployment of NFS.
Auto-tuned caches and thread pools are resource management capabilities of the
new RPC/XDR infrastructure that are dynamic, automatically tuning caches and
thread pools based on workload. This completely removes the guesswork involved
when tuning parameters, providing optimal performance as soon as NFS is
deployed.
New Kerberos privacy implementation and authentication options with the
addition of Kerberos privacy (Krb5p) support along with the existing krb5 and krb5i
authentication options.
Identity Mapping Windows PowerShell module cmdlets make it easier to manage
identity mapping, configure Active Directory Lightweight Directory Services (AD
LDS), and set up UNIX and Linux passwd and flat files.
Volume mount point lets you access volumes mounted under an NFS share with
NFS version 4.1.
The Port Multiplexing feature supports the RPC port multiplexer (port 2049),
which is firewall-friendly and simplifies NFS deployment.

NFS version 3 continuous availability


NFS version 3 clients can have fast and transparent planned failovers with more
availability and reduced downtime. The failover process is faster for NFS version 3 clients
because:

The clustering infrastructure now allows one resource per network name instead of
one resource per share, which significantly improves resources' failover time.
Failover paths within an NFS server are tuned for better performance.
Wildcard registration in an NFS server is no longer required, and the failovers are
more fine-tuned.
Network Status Monitor (NSM) notifications are sent out after a failover process,
and clients no longer need to wait for TCP timeouts to reconnect to the failed over
server.

Note that Server for NFS supports transparent failover only when manually initiated,
typically during planned maintenance. If an unplanned failover occurs, NFS clients lose
their connections. Server for NFS also doesn't have any integration with the Resume Key
filter. This means that if a local app or SMB session attempts to access the same file that
an NFS client is accessing immediately after a planned failover, the NFS client might lose
its connections (transparent failover wouldn't succeed).

Deployment and manageability improvements


Deploying and managing NFS has improved in the following ways:

Over forty new Windows PowerShell cmdlets make it easier to configure and
manage NFS file shares. For more information, see NFS Cmdlets in Windows
PowerShell.
Identity mapping is improved with a local flat file mapping store and new Windows
PowerShell cmdlets for configuring identity mapping.
The Server Manager graphical user interface is easier to use.
The new WMI version 2 provider is available for easier management.
The RPC port multiplexer (port 2049) is firewall-friendly and simplifies deployment
of NFS.

Server Manager information


In Server Manager - or the newer Windows Admin Center - use the Add Roles and
Features Wizard to add the Server for NFS role service (under the File and iSCSI Services
role). For general information about installing features, see Install or Uninstall Roles, Role
Services, or Features. Server for NFS tools includes the Services for Network File System
MMC snap-in to manage the Server for NFS and Client for NFS components. Using the
snap-in, you can manage the Server for NFS components installed on the computer.
Server for NFS also contains several Windows command-line administration tools:

Mount mounts a remote NFS share (also known as an export) locally and maps it
to a local drive letter on the Windows client computer.
Nfsadmin manages configuration settings of the Server for NFS and Client for NFS
components.
Nfsshare configures NFS share settings for folders that are shared using Server for
NFS.
Nfsstat displays or resets statistics of calls received by Server for NFS.
Showmount displays mounted file systems exported by Server for NFS.
Umount removes NFS-mounted drives.
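
As an illustrative sketch of the tools above (the server name nfsserver and the export name are hypothetical):

```PowerShell
# List the exports offered by an NFS server
showmount -e nfsserver

# Mount an export to drive Z: using anonymous (unmapped) access
mount -o anon \\nfsserver\export Z:

# Display Server for NFS call statistics, then unmount the drive
nfsstat
umount Z:
```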

NFS in Windows Server 2012 introduces the NFS module for Windows PowerShell with
several new cmdlets specifically for NFS. These cmdlets provide an easy way to
automate NFS management tasks. For more information, see NFS cmdlets in Windows
PowerShell.

Additional information
The following table provides additional resources for evaluating NFS.

| Content type | References |
| --- | --- |
| Deployment | Deploy Network File System |
| Operations | NFS cmdlets in Windows PowerShell |
| Related technologies | Storage in Windows Server |


Deploy Network File System
Article • 03/29/2023

Applies to: Windows Server 2022, Windows Server 2019, and Windows Server 2016.

Network File System (NFS) provides a file sharing solution that lets you transfer files
between computers running Windows Server and UNIX operating systems by using the
NFS protocol. This article describes the steps you should follow to deploy NFS.

What's new in Network File System


Here's what's changed for NFS in Windows Server:

Support for NFS version 4.1: This protocol version includes the following
enhancements.
Makes navigating firewalls easier, which improves accessibility.
Supports the RPCSEC_GSS protocol, providing stronger security and allowing
clients and servers to negotiate security.
Supports UNIX and Windows file semantics.
Takes advantage of clustered file server deployments.
Supports WAN-friendly compound procedures.

NFS module for Windows PowerShell: The availability of built-in NFS cmdlets
makes it easier to automate various operations. The cmdlet names are consistent
with other Windows PowerShell cmdlets (with verbs such as "Get" and "Set"),
making it easier for users familiar with Windows PowerShell to learn to use new
cmdlets.

NFS management improvements: A new centralized UI-based management


console simplifies configuration and management of SMB and NFS shares, quotas,
file screens, and classification, in addition to managing clustered file servers.

Identity Mapping improvements: This improvement includes new UI support and


task-based Windows PowerShell cmdlets for configuring identity mapping.
Administrators can quickly configure an identity mapping source, and then create
individual mapped identities for users. Improvements make it easy for
administrators to set up a share for multi-protocol access over both NFS and SMB.

Cluster resource model restructure: This improvement brings consistency between


the cluster resource model for the Windows NFS and SMB protocol servers and
simplifies administration. For NFS servers that have many shares, the network
resource dependencies and the number of WMI calls required to fail over a volume
containing a large number of NFS shares are reduced.

Integration with Resume Key Manager: The Resume Key Manager tracks file
server and file system state. The tool enables the Windows SMB and NFS protocol
servers to fail over without disrupting clients or server applications that store their
data on the file server. This improvement is a key component of the continuous
availability capability of the file server running Windows Server.

Scenarios for using Network File System


NFS supports a mixed environment of Windows-based and UNIX-based operating
systems. The following deployment scenarios are examples of how you can deploy a
continuously available Windows Server file server by using NFS.

Provision file shares in heterogeneous environments


This scenario applies to organizations with heterogeneous environments that consist of
both Windows and other operating systems, such as UNIX or Linux-based client
computers. With this scenario, you can provide multi-protocol access to the same file
share over both the SMB and NFS protocols. Typically, when you deploy a Windows file
server in this scenario, you want to facilitate collaboration between users on Windows
and UNIX-based computers. When a file share is configured, it's shared with both the
SMB and NFS protocols. Windows users access their files over the SMB protocol, and
users on UNIX-based computers typically access their files over the NFS protocol.

For this scenario, you must have a valid identity mapping source configuration. Windows
Server supports the following identity mapping stores:

Mapping File
Active Directory Domain Services (AD DS)
RFC 2307-compliant LDAP stores such as Active Directory Lightweight Directory
Services (AD LDS)
User Name Mapping (UNM) server
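
As a sketch of configuring one of these stores with the NFS PowerShell module (cmdlet and parameter names are assumed from the module's identity-mapping cmdlets; the user name and UID/GID values are hypothetical):

```PowerShell
# Point NFS identity mapping at Active Directory Domain Services
Set-NfsMappingStore -EnableADLookup $true

# Map a Windows user account to a UNIX UID/GID pair
New-NfsMappedIdentity -UserName "alice" -UserIdentifier 1001 -GroupIdentifier 1001
```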

Provision file shares in UNIX-based environments


In this scenario, Windows file servers are deployed in a predominantly UNIX-based
environment to provide access to NFS file shares for UNIX-based client computers. An
Unmapped UNIX User Access (UUUA) option was initially implemented for NFS shares in
Windows Server 2008 R2. This option enables Windows servers to store NFS data
without creating UNIX-to-Windows account mapping. UUUA allows administrators to
quickly provision and deploy NFS without having to configure account mapping. When
it is enabled for NFS, UUUA creates custom security identifiers (SIDs) to represent
unmapped users. Mapped user accounts use standard Windows SIDs, and unmapped
user accounts use custom NFS SIDs.

System requirements
Server for NFS can be installed on any version of Windows Server. You can use NFS with
UNIX-based computers that are running an NFS server or NFS client, if these NFS server
and client implementations comply with one of the following protocol specifications:

NFS Version 4.1 Protocol Specification (as defined in RFC 5661 )


NFS Version 3 Protocol Specification (as defined in RFC 1813 )
NFS Version 2 Protocol Specification (as defined in RFC 1094 )

Deploy NFS infrastructure


You need to deploy the following computers and connect them on a local area network
(LAN):

One or more computers running Windows Server on which you'll install the two
main Services for NFS components: Server for NFS and Client for NFS. You can
install these components on the same computer or on different computers.
One or more UNIX-based computers that are running NFS server and NFS client
software. The UNIX-based computer that is running NFS server hosts an NFS file
share or export, which is accessed by a computer that is running Windows Server
as a client by using Client for NFS. You can install NFS server and client software
either in the same UNIX-based computer or on different UNIX-based computers,
as desired.
A domain controller running at the Windows Server 2008 R2 functional level. The
domain controller provides user authentication information and mapping for the
Windows environment.
When a domain controller isn't deployed, you can use a Network Information
Service (NIS) server to provide user authentication information for the UNIX
environment. Or, if you prefer, you can use password and group files that are
stored on the computer that's running the User Name Mapping service.

Install Network File System on the server with Server


Manager
1. From the Add Roles and Features Wizard, under Server Roles, expand File and
Storage Services > expand File and iSCSI Services.

2. Select File Server and Server for NFS, select Next.

3. A dialog box lets you know what other tools are required for the selected feature.
Check the box for the required features, then select Add Features.

4. Select Next, and then choose any other preferred features. When you're ready,
select Next.

5. To install the NFS components on the server, select Install.

Install Network File System on the server with Windows


PowerShell
1. Start Windows PowerShell. Select and hold (or right-click) the PowerShell icon on
the taskbar, and select Run as Administrator.

2. Run the following Windows PowerShell commands:

PowerShell

Import-Module ServerManager
Add-WindowsFeature FS-NFS-Service
Import-Module NFS

Configure NFS authentication


For the NFS version 4.1 and NFS version 3.0 protocols, we recommend that you use
Kerberos (RPCSEC_GSS). There are three options with increasing levels of security
protection:

Krb5: Uses the Kerberos version 5 protocol to authenticate users before granting
them access to the file share.

Krb5i: Uses the Kerberos version 5 protocol to authenticate with integrity checking
(checksums), which verifies that the data hasn't been altered.

Krb5p: Uses the Kerberos version 5 protocol, which authenticates NFS traffic with
encryption for privacy. This option is the most secure Kerberos option.
Note

You can also choose not to use the preceding Kerberos authentication
methods and instead enable unmapped user access through AUTH_SYS. We
strongly discourage using this option as it removes all authentication
protections and allows any user with access to the NFS server to access data.
When you use unmapped user access, you can specify to allow unmapped
user access by UID or GID, which is the default. You can also allow anonymous
access.

Instructions for configuring NFS authentication are discussed in the following section.

Create an NFS file share


You can create an NFS file share by using either Server Manager or Windows PowerShell
NFS cmdlets.

Create an NFS file share with Server Manager


1. Sign in to the server as a member of the local Administrators group.

2. Server Manager starts automatically.

If the tool doesn't automatically start, select Start. Enter servermanager.exe,


and then select Server Manager.

3. On the left, select File and Storage Services, then select Shares.

4. Under the Shares column, select the To create a file share, start the New Share
Wizard link.

5. On the Select Profile page, select either NFS Share - Quick or NFS Share -
Advanced, then select Next.

6. On the Share Location page, select a server and a volume, then select Next.

7. On the Share Name page, enter a name for the new share, then select Next.

8. On the Authentication page, specify the authentication method you want to use,
then select Next.

9. On the Share Permissions page, select Add. The Add Permissions dialog opens.
a. Choose the level of user permissions to grant: Host, Netgroup, Client group, or
All Machines.

b. For the selected user level, enter the name for the user(s) to grant permission to
the share.

c. Use the drop-down menu to select the preferred Language encoding.

d. Use the drop-down menu to select the preferred Share permissions.

e. (Optional) Select the Allow root access checkbox. This option isn't
recommended.

f. Select Add. The dialog closes. Select Next.

10. On the Permissions page, configure access control for your selected users. When
you're ready, select Next.

11. On the Confirmation page, review your configuration, and select Create to create
the NFS file share.

Windows PowerShell equivalent commands


The following Windows PowerShell cmdlet can also create an NFS file share (where nfs1
is the name of the share and C:\\shares\\nfsfolder is the file path):

PowerShell

New-NfsShare -Name nfs1 -Path C:\shares\nfsfolder
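
The wizard's authentication and permission pages have cmdlet counterparts as well. This sketch assumes the NFS module's -Authentication parameter and the Grant-NfsSharePermission cmdlet; the host name is hypothetical:

```PowerShell
# Create the share requiring Kerberos with privacy (krb5p)
New-NfsShare -Name "nfs1" -Path "C:\shares\nfsfolder" -Authentication krb5p

# Grant one UNIX host read/write access, without root access
Grant-NfsSharePermission -Name "nfs1" -ClientName "unixhost01" -ClientType host `
    -Permission readwrite -AllowRootAccess:$false
```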

Known issue
NFS version 4.1 allows file names that contain illegal characters to be created or copied.
If you attempt to open such files with the vi editor, it shows them as corrupt. You can't
save the file from vi, rename it, move it, or change its permissions. Avoid using illegal
characters.
NTFS overview
Article • 03/24/2023

Applies to: Windows Server 2022, Windows 10, Windows Server 2019, Windows
Server 2016, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008
R2, Windows Server 2008

NTFS, the primary file system for recent versions of Windows and Windows Server,
provides a full set of features including security descriptors, encryption, disk quotas, and
rich metadata. It can be used with Cluster Shared Volumes (CSV) to provide continuously
available volumes that can be accessed simultaneously from multiple nodes of a failover
cluster.

For more feature information, see the Related links section of this article. To learn about
the newer Resilient File System (ReFS), see Resilient File System (ReFS) overview.

Increased reliability
NTFS uses its log file and checkpoint information to restore the consistency of the file
system when the computer is restarted after a system failure. After a bad-sector error,
NTFS dynamically remaps the cluster that contains the bad sector, and allocates a new
cluster for the data. It also marks the original cluster as bad, and no longer uses the old
cluster. For example, after a server crash, NTFS can recover data by replaying its log files.

NTFS continuously monitors and corrects transient corruption issues in the background
without taking the volume offline. This feature is known as self-healing NTFS, which was
introduced in Windows Server 2008.

For larger corruption issues, the Chkdsk utility, in Windows Server 2012 and later, scans
and analyzes the drive while the volume is online, limiting time offline to the time
required to restore data consistency on the volume. When you use NTFS with Cluster
Shared Volumes, no downtime is required. For more information, see NTFS Health and
Chkdsk.

Increased security
Access Control List (ACL)-based security for files and folders: NTFS lets you set
permissions on a file or folder, specify the groups and users whose access you
want to restrict or allow, and select access type.
Support for BitLocker Drive Encryption: BitLocker Drive Encryption provides more
security for critical system information and other data stored on NTFS volumes.
Beginning in Windows Server 2012 R2 and Windows 8.1, BitLocker provides
support for device encryption on x86 and x64-based computers with a Trusted
Platform Module (TPM) that supports connected stand-by (previously available
only on Windows RT devices). Device encryption helps protect data on Windows-
based computers, and it helps block malicious users from accessing the system
files they rely on to discover the user's password. It also prevents malicious users
from accessing a drive by physically removing it from the PC and installing it on a
different one. For more information, see What's new in BitLocker.

Support for large volumes


NTFS can support volumes as large as 8 petabytes on Windows Server 2019 and newer
and Windows 10, version 1709 and newer (older versions support up to 256 TB).
Supported volume sizes are affected by the cluster size and the number of clusters. With
(2^32 – 1) clusters (the maximum number of clusters that NTFS supports), the following
volume and file sizes are supported.

| Cluster size | Largest volume and file |
| --- | --- |
| 4 KB (default size) | 16 TB |
| 8 KB | 32 TB |
| 16 KB | 64 TB |
| 32 KB | 128 TB |
| 64 KB (earlier max) | 256 TB |
| 128 KB | 512 TB |
| 256 KB | 1 PB |
| 512 KB | 2 PB |
| 1024 KB | 4 PB |
| 2048 KB (max size) | 8 PB |

If you try to mount a volume with a cluster size larger than the supported maximum of
the Windows version you're using, you get the error STATUS_UNRECOGNIZED_VOLUME.
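
The sizes in the table follow directly from multiplying the cluster size by the (2^32 – 1) cluster limit, which you can verify with a quick calculation:

```PowerShell
# Largest NTFS volume = cluster size x maximum cluster count
$clusterSize = 64KB                     # 65,536 bytes
$maxClusters = [math]::Pow(2, 32) - 1   # NTFS supports at most 2^32 - 1 clusters
$maxBytes    = $clusterSize * $maxClusters
"{0:N0} bytes = {1} TB" -f $maxBytes, [math]::Round($maxBytes / 1TB)   # ~256 TB
```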

Important
Services and apps might impose additional limits on file and volume sizes. For
example, the volume size limit is 64 TB if you're using the Previous Versions feature
or a backup app that makes use of Volume Shadow Copy Service (VSS) snapshots
(and you're not using a SAN or RAID enclosure). However, you might need to use
smaller volume sizes depending on your workload and the performance of your
storage.

Formatting requirements for large files


To allow proper extension of large VHDX files, there are new recommendations for
formatting volumes. When formatting volumes that you use with Data Deduplication or
that host very large files, such as VHDX files larger than 1 TB, use the Format-Volume
cmdlet in Windows PowerShell with the following parameters.

| Parameter | Description |
| --- | --- |
| -AllocationUnitSize 64KB | Sets a 64-KB NTFS allocation unit size. |
| -UseLargeFRS | Enables support for large file record segments (FRS). Using this parameter increases the number of extents allowed per file on the volume. For large FRS records, the limit increases from about 1.5 million extents to about 6 million extents. |

For example, the following cmdlet formats drive D as an NTFS volume, with FRS enabled
and an allocation unit size of 64 KB.

PowerShell

Format-Volume -DriveLetter D -FileSystem NTFS -AllocationUnitSize 64KB -


UseLargeFRS

You also can use the format command. At a system command prompt, enter the
following command, where /L formats a large FRS volume and /A:64k sets a 64-KB
allocation unit size:

Command prompt

format /L /A:64k
Maximum file name and path
NTFS supports long file names and extended-length paths, with the following maximum
values:

Support for long file names, with backward compatibility: NTFS supports long file
names, storing an 8.3 alias on disk (in Unicode) to provide compatibility with file
systems that impose an 8.3 limit on file names and extensions. If needed (for
performance reasons), you can selectively disable 8.3 aliasing on individual NTFS
volumes in Windows Server 2008 R2, Windows 8, and more recent versions of the
Windows operating system. In Windows Server 2008 R2 and later systems, short
names are disabled by default when a volume is formatted using the operating
system. For application compatibility, short names still are enabled on the system
volume.

Support for extended-length paths: Many Windows API functions have Unicode
versions that allow an extended-length path of approximately 32,767 characters.
That total is well beyond the 260-character path limit defined by the MAX_PATH
setting. For detailed file name and path format requirements, and guidance for
implementing extended-length paths, see Naming files, paths, and namespaces.

Clustered storage: When used in failover clusters, NTFS supports continuously


available volumes that can be accessed by multiple cluster nodes simultaneously
when used with the Cluster Shared Volumes (CSV) file system. For more
information, see Use Cluster Shared Volumes in a failover cluster.
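
For instance, the 8.3 alias behavior described above can be checked and disabled per volume with the built-in fsutil tool:

```PowerShell
# Query the 8.3 short-name-creation setting for volume C:
fsutil 8dot3name query C:

# Disable 8.3 short-name creation on C: (1 = disabled)
fsutil 8dot3name set C: 1
```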

Flexible allocation of capacity


If the space on a volume is limited, NTFS provides the following ways to work with the
storage capacity of a server:

Use disk quotas to track and control disk space usage on NTFS volumes for
individual users.
Use file system compression to maximize the amount of data that can be stored.
Increase the size of an NTFS volume by adding unallocated space from the same
disk or from a different disk.
Mount a volume at any empty folder on a local NTFS volume if you run out of
drive letters or need to create extra space that is accessible from an existing folder.

Related links
Cluster size recommendations for ReFS and NTFS
Resilient File System (ReFS) overview
What's New in NTFS for Windows Server (Windows Server 2012 R2)
What's New in NTFS (Windows Server 2008 R2, Windows 7)
NTFS Health and Chkdsk
Self-Healing NTFS (introduced in Windows Server 2008)
Transactional NTFS (introduced in Windows Server 2008)
Windows Server Storage documentation
Volume Shadow Copy Service
Article • 12/07/2022

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, and Windows Server 2008 R2,
Windows Server 2008, Windows 10, Windows 8.1, Windows 8, Windows 7

Backing up and restoring critical business data can be very complex due to the following
issues:

The data usually needs to be backed up while the applications that produce the
data are still running. This means that some of the data files might be open or they
might be in an inconsistent state.

If the data set is large, it can be difficult to back up all of it at one time.

Correctly performing backup and restore operations requires close coordination


between the backup applications, the line-of-business applications that are being
backed up, and the storage management hardware and software. The Volume Shadow
Copy Service (VSS), which was introduced in Windows Server® 2003, facilitates the
conversation between these components to allow them to work better together. When
all the components support VSS, you can use them to back up your application data
without taking the applications offline.

VSS coordinates the actions that are required to create a consistent shadow copy (also
known as a snapshot or a point-in-time copy) of the data that is to be backed up. The
shadow copy can be used as-is, or it can be used in scenarios such as the following:

You want to back up application data and system state information, including
archiving data to another hard disk drive, to tape, or to other removable media.

You are data mining.

You are performing disk-to-disk backups.

You need a fast recovery from data loss by restoring data to the original Logical
Unit Number (LUN) or to an entirely new LUN that replaces an original LUN that
failed.

Windows features and applications that use VSS include the following:

Windows Server Backup (https://go.microsoft.com/fwlink/?LinkId=180891 )


Shadow Copies of Shared Folders (https://go.microsoft.com/fwlink/?
LinkId=142874 )

System Center Data Protection Manager (https://go.microsoft.com/fwlink/?


LinkId=180892 )

System Restore (https://support.microsoft.com/windows/use-system-restore-


a5ae3ed9-07c4-fd56-45ee-096777ecd14e )
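
On a running system, the built-in vssadmin tool shows these VSS components in action. This sketch requires an elevated prompt, and the create shadow command is available on Windows Server editions:

```PowerShell
# List the registered VSS writers and providers on this computer
vssadmin list writers
vssadmin list providers

# Create a shadow copy of volume C: (Windows Server only)
vssadmin create shadow /for=C:
```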

How Volume Shadow Copy Service Works


A complete VSS solution requires all of the following basic parts:

VSS service Part of the Windows operating system that ensures the other components
can communicate with each other properly and work together.

VSS requester The software that requests the actual creation of shadow copies (or
other high-level operations like importing or deleting them). Typically, this is the backup
application. The Windows Server Backup utility and the System Center Data Protection
Manager application are VSS requesters. Non-Microsoft® VSS requesters include nearly
all backup software that runs on Windows.

VSS writer The component that guarantees we have a consistent data set to back up.
This is typically provided as part of a line-of-business application, such as SQL Server®
or Exchange Server. VSS writers for various Windows components, such as the registry,
are included with the Windows operating system. Non-Microsoft VSS writers are
included with many applications for Windows that need to guarantee data consistency
during back up.

VSS provider The component that creates and maintains the shadow copies. This can
occur in the software or in the hardware. The Windows operating system includes a VSS
provider that uses copy-on-write. If you use a storage area network (SAN), it is
important that you install the VSS hardware provider for the SAN, if one is provided. A
hardware provider offloads the task of creating and maintaining a shadow copy from
the host operating system.

The following diagram illustrates how the VSS service coordinates with requesters,
writers, and providers to create a shadow copy of a volume.
Figure 1 Architectural diagram of Volume Shadow Copy Service

How a Shadow Copy Is Created


This section puts the various roles of the requester, writer, and provider into context by
listing the steps that need to be taken to create a shadow copy. The following diagram
shows how the Volume Shadow Copy Service controls the overall coordination of the
requester, writer, and provider.

Figure 2 Shadow copy creation process

To create a shadow copy, the requester, writer, and provider perform the following
actions:
1. The requester asks the Volume Shadow Copy Service to enumerate the writers,
gather the writer metadata, and prepare for shadow copy creation.

2. Each writer creates an XML description of the components and data stores that
need to be backed up and provides it to the Volume Shadow Copy Service. The
writer also defines a restore method, which is used for all components. The Volume
Shadow Copy Service provides the writer's description to the requester, which
selects the components that will be backed up.

3. The Volume Shadow Copy Service notifies all the writers to prepare their data for
making a shadow copy.

4. Each writer prepares the data as appropriate, such as completing all open
transactions, rolling transaction logs, and flushing caches. When the data is ready
to be shadow-copied, the writer notifies the Volume Shadow Copy Service.

5. The Volume Shadow Copy Service tells the writers to temporarily freeze application
write I/O requests (read I/O requests are still possible) for the few seconds that are
required to create the shadow copy of the volume or volumes. The application
freeze is not allowed to take longer than 60 seconds. The Volume Shadow Copy
Service flushes the file system buffers and then freezes the file system, which
ensures that the file system metadata is recorded correctly and the data to be
shadow-copied is written in a consistent order.

6. The Volume Shadow Copy Service tells the provider to create the shadow copy.
The shadow copy creation period lasts no more than 10 seconds, during which all
write I/O requests to the file system remain frozen.

7. The Volume Shadow Copy Service releases file system write I/O requests.

8. VSS tells the writers to thaw application write I/O requests. At this point
applications are free to resume writing data to the disk that is being shadow-
copied.

7 Note

The shadow copy creation can be aborted if the writers are kept in the freeze state
for longer than 60 seconds or if the providers take longer than 10 seconds to
commit the shadow copy.

9. The requester can retry the process (go back to step 1) or notify the administrator
to retry at a later time.
10. If the shadow copy is successfully created, the Volume Shadow Copy Service
returns the location information for the shadow copy to the requester. In some
cases, the shadow copy can be temporarily made available as a read-write volume
so that VSS and one or more applications can alter the contents of the shadow
copy before the shadow copy is finished. After VSS and the applications make their
alterations, the shadow copy is made read-only. This phase is called Auto-recovery,
and it is used to undo any file-system or application transactions on the shadow
copy volume that were not completed before the shadow copy was created.
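
The coordination in steps 1 through 10 can be sketched as a toy simulation. The code below is illustrative only (the function and event names are invented, not part of the VSS API); it models the ordering of the steps and the two abort conditions from the note: a writer freeze longer than 60 seconds, or a provider commit longer than 10 seconds.

```python
# Toy model of the VSS shadow copy sequence (illustrative names, not the VSS API).
FREEZE_LIMIT_S = 60   # writers may stay frozen for at most 60 seconds
COMMIT_LIMIT_S = 10   # the provider must commit within 10 seconds

def create_shadow_copy(freeze_seconds, commit_seconds):
    """Return (succeeded, event_log) for simulated freeze/commit durations."""
    log = [
        "requester: gather writer metadata",            # steps 1-2
        "writers: flush caches, complete open transactions",  # steps 3-4
        "vss: freeze application write I/O",            # step 5
    ]
    if freeze_seconds > FREEZE_LIMIT_S:
        log.append("vss: abort (writers frozen longer than 60 s)")
        return False, log
    if commit_seconds > COMMIT_LIMIT_S:
        log.append("vss: abort (provider commit longer than 10 s)")
        return False, log
    log += [
        "provider: commit shadow copy",                 # step 6
        "vss: release file system write I/O",           # step 7
        "vss: thaw application write I/O",              # step 8
    ]
    return True, log
```

A requester that receives a failure would retry the whole sequence from step 1, as described in step 9.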

How the Provider Creates a Shadow Copy


A hardware or software shadow copy provider uses one of the following methods for
creating a shadow copy:

Complete copy This method makes a complete copy (called a "full copy" or "clone") of
the original volume at a given point in time. This copy is read-only.

Copy-on-write This method does not copy the original volume. Instead, it makes a
differential copy by copying all changes (completed write I/O requests) that are made to
the volume after a given point in time.

Redirect-on-write This method does not copy the original volume, and it does not
make any changes to the original volume after a given point in time. Instead, it makes a
differential copy by redirecting all changes to a different volume.

Complete copy
A complete copy is usually created by making a "split mirror" as follows:

1. The original volume and the shadow copy volume are a mirrored volume set.

2. The shadow copy volume is separated from the original volume. This breaks the
mirror connection.

After the mirror connection is broken, the original volume and the shadow copy volume
are independent. The original volume continues to accept all changes (write I/O
requests), while the shadow copy volume remains an exact read-only copy of the
original data at the time of the break.

Copy-on-write method
In the copy-on-write method, when a change to the original volume occurs (but before
the write I/O request is completed), each block to be modified is read and then written
to the volume's shadow copy storage area (also called its "diff area"). The shadow copy
storage area can be on the same volume or a different volume. This preserves a copy of
the data block on the original volume before the change overwrites it.

| Time | Source data (status and data) | Shadow copy (status and data) |
| --- | --- | --- |
| T0 | Original data: 1 2 3 4 5 | No copy: — |
| T1 | Data changed in cache: 3 to 3' | Shadow copy created (differences only): 3 |
| T2 | Original data overwritten: 1 2 3' 4 5 | Differences and index stored on shadow copy: 3 |

Table 1 The copy-on-write method of creating shadow copies

The copy-on-write method is a quick method for creating a shadow copy, because it
copies only data that is changed. The copied blocks in the diff area can be combined
with the changed data on the original volume to restore the volume to its state before
any of the changes were made. If there are many changes, the copy-on-write method
can become expensive.
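
The bookkeeping described above can be sketched in a few lines of Python. This is a purely illustrative model (the class and method names are invented, not a real provider interface): writes land on the live volume after the pre-change block is saved to the diff area, and the shadow copy is rebuilt by merging the saved blocks back over the live data.

```python
class CopyOnWriteSnapshot:
    """Toy model of the copy-on-write diff area (not a real VSS provider)."""

    def __init__(self, volume):
        self.volume = volume      # the live volume: writes land here
        self.diff_area = {}       # block index -> pre-change block value

    def write(self, index, value):
        # Before the first overwrite of a block, preserve the original copy.
        if index not in self.diff_area:
            self.diff_area[index] = self.volume[index]
        self.volume[index] = value

    def shadow_copy(self):
        # Merge preserved blocks back over the live data to rebuild the
        # point-in-time view.
        blocks = list(self.volume)
        for index, original in self.diff_area.items():
            blocks[index] = original
        return blocks

snap = CopyOnWriteSnapshot([1, 2, 3, 4, 5])
snap.write(2, 33)               # T1/T2 in Table 1: block 3 changes
print(snap.volume)              # live data shows the change: [1, 2, 33, 4, 5]
print(snap.shadow_copy())       # snapshot is unchanged: [1, 2, 3, 4, 5]
```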

Redirect-on-write method
In the redirect-on-write method, whenever the original volume receives a change (write
I/O request), the change is not applied to the original volume. Instead, the change is
written to another volume's shadow copy storage area.

| Time | Source data (status and data) | Shadow copy (status and data) |
| --- | --- | --- |
| T0 | Original data: 1 2 3 4 5 | No copy: — |
| T1 | Data changed in cache: 3 to 3' | Shadow copy created (differences only): 3' |
| T2 | Original data unchanged: 1 2 3 4 5 | Differences and index stored on shadow copy: 3' |

Table 2 The redirect-on-write method of creating shadow copies

Like the copy-on-write method, the redirect-on-write method is a quick method for
creating a shadow copy, because it copies only changes to the data. The copied blocks
in the diff area can be combined with the unchanged data on the original volume to
create a complete, up-to-date copy of the data. If there are many read I/O requests, the
redirect-on-write method can become expensive.
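
For contrast, here is a matching sketch of redirect-on-write (again purely illustrative, with invented names): the original volume is never touched, changes are redirected into the diff area, and reading the current data requires merging the two.

```python
class RedirectOnWriteSnapshot:
    """Toy model of redirect-on-write (not a real VSS provider)."""

    def __init__(self, volume):
        self.volume = list(volume)   # original blocks: never modified again
        self.diff_area = {}          # block index -> redirected new value

    def write(self, index, value):
        self.diff_area[index] = value  # the change bypasses the original

    def current(self):
        # The up-to-date view merges redirected writes over the original.
        blocks = list(self.volume)
        for index, value in self.diff_area.items():
            blocks[index] = value
        return blocks

    def shadow_copy(self):
        return list(self.volume)     # the snapshot is the untouched original

snap = RedirectOnWriteSnapshot([1, 2, 3, 4, 5])
snap.write(2, 33)               # T1/T2 in Table 2: block 3 changes
print(snap.shadow_copy())       # snapshot is the original: [1, 2, 3, 4, 5]
print(snap.current())           # current view needs the merge: [1, 2, 33, 4, 5]
```

Comparing the two sketches shows why reads are the expensive case here: `current()` must consult the diff area on every read, whereas copy-on-write pays that cost only when reading the snapshot.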
Shadow Copy Providers
There are two types of shadow copy providers: hardware-based providers and software-
based providers. There is also a system provider, which is a software provider that is
built in to the Windows operating system.

Hardware-based providers
Hardware-based shadow copy providers act as an interface between the Volume
Shadow Copy Service and the hardware level by working in conjunction with a hardware
storage adapter or controller. The work of creating and maintaining the shadow copy is
performed by the storage array.

Hardware providers always take the shadow copy of an entire LUN, but the Volume
Shadow Copy Service only exposes the shadow copy of the volume or volumes that
were requested.

A hardware-based shadow copy provider makes use of the Volume Shadow Copy
Service functionality that defines the point in time, allows data synchronization,
manages the shadow copy, and provides a common interface with backup applications.
However, the Volume Shadow Copy Service does not specify the underlying mechanism
by which the hardware-based provider produces and maintains shadow copies.

Software-based providers
Software-based shadow copy providers typically intercept and process read and write
I/O requests in a software layer between the file system and the volume manager
software.

These providers are implemented as a user-mode DLL component and at least one
kernel-mode device driver, typically a storage filter driver. Unlike hardware-based
providers, software-based providers create shadow copies at the software level, not the
hardware level.

A software-based shadow copy provider must maintain a "point-in-time" view of a
volume by having access to a data set that can be used to re-create volume status
before the shadow copy creation time. An example is the copy-on-write technique of
the system provider. However, the Volume Shadow Copy Service places no restrictions
on what technique the software-based providers use to create and maintain shadow
copies.

A software provider is applicable to a wider range of storage platforms than a hardware-
based provider, and it should work with basic disks or logical volumes equally well. (A
logical volume is a volume that is created by combining free space from two or more
disks.) In contrast to hardware shadow copies, software providers consume operating
system resources to maintain the shadow copy.

For more information about basic disks, see What Are Basic Disks and Volumes?.

System provider
One shadow copy provider, the system provider, is supplied in the Windows operating
system. Although a default provider is supplied in Windows, other vendors are free to
supply implementations that are optimized for their storage hardware and software
applications.

To maintain the "point-in-time" view of a volume that is contained in a shadow copy, the
system provider uses a copy-on-write technique. Copies of the blocks on the volume that
have been modified since the beginning of the shadow copy creation are stored in a
shadow copy storage area.

The system provider can expose the production volume, which can be written to and
read from normally. When the shadow copy is needed, it logically applies the differences
to data on the production volume to expose the complete shadow copy.

For the system provider, the shadow copy storage area must be on an NTFS volume. The
volume to be shadow copied does not need to be an NTFS volume, but at least one
volume mounted on the system must be an NTFS volume.

The component files that make up the system provider are swprv.dll and volsnap.sys.

In-Box VSS Writers


The Windows operating system includes a set of VSS writers that are responsible for
enumerating the data that is required by various Windows features.

For more information about these writers, see In-Box VSS Writers.

How Shadow Copies Are Used


In addition to backing up application data and system state information, shadow copies
can be used for a number of purposes, including the following:

Restoring LUNs (LUN resynchronization and LUN swapping)


Restoring individual files (Shadow Copies for Shared Folders)

Data mining by using transportable shadow copies

Restoring LUNs (LUN resynchronization and LUN swapping)
In Windows Server 2008 R2 and Windows 7, VSS requesters can use a hardware shadow
copy provider feature called LUN resynchronization (or "LUN resync"). This is a fast-
recovery scheme that allows an application administrator to restore data from a shadow
copy to the original LUN or to a new LUN.

The shadow copy can be a full clone or a differential shadow copy. In either case, at the
end of the resync operation, the destination LUN will have the same contents as the
shadow copy LUN. During the resync operation, the array performs a block-level copy
from the shadow copy to the destination LUN.

7 Note

The shadow copy must be a transportable hardware shadow copy.

Most arrays allow production I/O operations to resume shortly after the resync
operation begins. While the resync operation is in progress, read requests are redirected
to the shadow copy LUN, and write requests to the destination LUN. This allows arrays
to recover very large data sets and resume normal operations in several seconds.

LUN resynchronization is different from LUN swapping. A LUN swap is a fast recovery
scenario that VSS has supported since Windows Server 2003 SP1. In a LUN swap, the
shadow copy is imported and then converted into a read-write volume. The conversion
is an irreversible operation, and the volume and underlying LUN cannot be controlled
with the VSS APIs after that. The following list describes how LUN resynchronization
compares with LUN swapping:

In LUN resynchronization, the shadow copy is not altered, so it can be used several
times. In LUN swapping, the shadow copy can be used only once for a recovery.
For the most safety-conscious administrators, this is important. When LUN
resynchronization is used, the requester can retry the entire restore operation if
something goes wrong the first time.

At the end of a LUN swap, the shadow copy LUN is used for production I/O
requests. For this reason, the shadow copy LUN must use the same quality of
storage as the original production LUN to ensure that performance is not impacted
after the recovery operation. If LUN resynchronization is used instead, the
hardware provider can maintain the shadow copy on storage that is less expensive
than production-quality storage.

If the destination LUN is unusable and needs to be recreated, LUN swapping may
be more economical because it doesn't require a destination LUN.

2 Warning

All of the operations listed are LUN-level operations. If you attempt to recover a
specific volume by using LUN resynchronization, you are unwittingly going to revert
all the other volumes that are sharing the LUN.

Restoring individual files (Shadow Copies for Shared Folders)
Shadow Copies for Shared Folders uses the Volume Shadow Copy Service to provide
point-in-time copies of files that are located on a shared network resource, such as a file
server. With Shadow Copies for Shared Folders, users can quickly recover deleted or
changed files that are stored on the network. Because they can do so without
administrator assistance, Shadow Copies for Shared Folders can increase productivity
and reduce administrative costs.

For more information about Shadow Copies for Shared Folders, see Shadow Copies for
Shared Folders (https://go.microsoft.com/fwlink/?LinkId=180898 ) on TechNet.

Data mining by using transportable shadow copies


With a hardware provider that is designed for use with the Volume Shadow Copy
Service, you can create transportable shadow copies that can be imported onto servers
within the same subsystem (for example, a SAN). These shadow copies can be used to
seed a production or test installation with read-only data for data mining.

With the Volume Shadow Copy Service and a storage array with a hardware provider
that is designed for use with the Volume Shadow Copy Service, it is possible to create a
shadow copy of the source data volume on one server, and then import the shadow
copy onto another server (or back to the same server). This process is accomplished in a
few minutes, regardless of the size of the data. The transport process is accomplished
through a series of steps that use a shadow copy requester (a storage-management
application) that supports transportable shadow copies.
To transport a shadow copy
1. Create a transportable shadow copy of the source data on a server.

2. Import the shadow copy to a server that is connected to the SAN (you can import
to a different server or the same server).

3. The data is now ready to be used.

Figure 3 Shadow copy creation and transport between two servers

7 Note
A transportable shadow copy that is created on Windows Server 2003 cannot be
imported onto a server that is running Windows Server 2008 or Windows Server
2008 R2. A transportable shadow copy that was created on Windows Server 2008
or Windows Server 2008 R2 cannot be imported onto a server that is running
Windows Server 2003. However, a shadow copy that is created on Windows Server
2008 can be imported onto a server that is running Windows Server 2008 R2 and
vice versa.

Shadow copies are read-only. If you want to convert a shadow copy to a read/write LUN,
you can use a Virtual Disk Service-based storage-management application (including
some requesters) in addition to the Volume Shadow Copy Service. By using this
application, you can remove the shadow copy from Volume Shadow Copy Service
management and convert it to a read/write LUN.

Volume Shadow Copy Service transport is an advanced solution on computers running
Windows Server 2003 Enterprise Edition, Windows Server 2003 Datacenter Edition,
Windows Server 2008, or Windows Server 2008 R2. It works only if there is a hardware
provider on the storage array. Shadow copy transport can be used for a number of
purposes, including tape backups, data mining, and testing.

Frequently Asked Questions


This FAQ answers questions about Volume Shadow Copy Service (VSS) for system
administrators. For information about VSS application programming interfaces, see
Volume Shadow Copy Service (https://go.microsoft.com/fwlink/?LinkId=180899 ) in the
Windows Developer Center Library.

When was Volume Shadow Copy Service introduced? On which Windows operating system versions is it available?
VSS was introduced in Windows XP. It is available on Windows XP, Windows
Server 2003, Windows Vista®, Windows Server 2008, Windows 7, and Windows
Server 2008 R2.

What is the difference between a shadow copy and a backup?
In the case of a hard disk drive backup, the shadow copy created is also the backup.
Data can be copied off the shadow copy for a restore or the shadow copy can be used
for a fast recovery scenario—for example, LUN resynchronization or LUN swapping.
When data is copied from the shadow copy to tape or other removable media, the
content that is stored on the media constitutes the backup. The shadow copy itself can
be deleted after the data is copied from it.

What is the largest size volume that Volume Shadow Copy Service supports?
Volume Shadow Copy Service supports a volume size of up to 64 TB.

I made a backup on Windows Server 2008. Can I restore it on Windows Server 2008 R2?
It depends on the backup software that you used. Most backup programs support this
scenario for data but not for system state backups.

Shadow copies that are created on either of these versions of Windows can be used on
the other.

I made a backup on Windows Server 2003. Can I restore it on Windows Server 2008?
It depends on the backup software you used. If you create a shadow copy on Windows
Server 2003, you cannot use it on Windows Server 2008. Also, if you create a shadow
copy on Windows Server 2008, you cannot restore it on Windows Server 2003.

How can I disable VSS?


It is possible to disable the Volume Shadow Copy Service by using the Microsoft
Management Console. However, you should not do this. Disabling VSS adversely affects
any software you use that depends on it, such as System Restore and Windows Server
Backup.

For more information, see the following Microsoft TechNet Web sites:

System Restore (https://go.microsoft.com/fwlink/?LinkID=157113 )

Windows Server Backup (https://go.microsoft.com/fwlink/?LinkID=180891 )

Can I exclude files from a shadow copy to save space?


VSS is designed to create shadow copies of entire volumes. Temporary files, such as
paging files, are automatically omitted from shadow copies to save space.

To exclude specific files from shadow copies, use the following registry key:
FilesNotToSnapshot.

7 Note

The FilesNotToSnapshot registry key is intended to be used only by applications.
Users who attempt to use it will encounter limitations such as the following:

It cannot delete files from a shadow copy that was created on a Windows
Server by using the Previous Versions feature.

It cannot delete files from shadow copies for shared folders.

It can delete files from a shadow copy that was created by using the
Diskshadow utility, but it cannot delete files from a shadow copy that was
created by using the Vssadmin utility.

Files are deleted from a shadow copy on a best-effort basis. This means that
they are not guaranteed to be deleted.

For more information, see Excluding Files from Shadow Copies
(https://go.microsoft.com/fwlink/?LinkId=180904 ) on MSDN.

My non-Microsoft backup program failed with a VSS error. What can I do?
Check the product support section of the Web site of the company that created the
backup program. There may be a product update that you can download and install to
fix the problem. If not, contact the company's product support department.

System administrators can use the VSS troubleshooting information on the following
Microsoft TechNet Library Web site to gather diagnostic information about VSS-related
issues.

For more information, see Volume Shadow Copy Service
(https://go.microsoft.com/fwlink/?LinkId=180905 ) on TechNet.
What is the "diff area"?
The shadow copy storage area (or "diff area") is the location where the data for the
shadow copy that is created by the system software provider is stored.

Where is the diff area located?


The diff area can be located on any local volume. However, it must be located on an
NTFS volume that has enough space to store it.

How is the diff area location determined?


The following criteria are evaluated, in this order, to determine the diff area location:

If a volume already has an existing shadow copy, that location is used.

If there is a preconfigured manual association between the original volume and the
shadow copy volume location, then that location is used.

If the previous two criteria do not provide a location, the shadow copy service
chooses a location based on available free space. If more than one volume is being
shadow copied, the shadow copy service creates a list of possible snapshot
locations based on the size of free space, in descending order. The number of
locations provided is equal to the number of volumes being shadow copied.

If the volume being shadow copied is one of the possible locations, then a local
association is created. Otherwise an association with the volume with the most
available space is created.
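
That selection order can be approximated with a short function. This is a simplified illustration of the documented precedence, not the actual VSS algorithm (real VSS also enforces size minimums and other constraints not modeled here), and all names are invented.

```python
def choose_diff_area(source, free_space, existing=None, manual=None,
                     volumes_in_set=1):
    """Pick a diff-area volume for `source`, following the documented order.

    free_space: dict of volume -> free bytes.
    existing / manual: dicts of source volume -> associated diff-area volume.
    volumes_in_set: number of volumes being shadow copied, which sets the
    size of the candidate list.
    """
    # 1. Reuse the location of an existing shadow copy, if any.
    if existing and source in existing:
        return existing[source]
    # 2. Honor a preconfigured manual association.
    if manual and source in manual:
        return manual[source]
    # 3. Rank volumes by free space, descending; keep one candidate per
    #    volume in the shadow copy set.
    ranked = sorted(free_space, key=free_space.get, reverse=True)
    candidates = ranked[:volumes_in_set]
    # Prefer a local association when the source volume qualifies;
    # otherwise use the candidate with the most free space.
    return source if source in candidates else candidates[0]
```

For example, with free space C: 100, D: 500, E: 300, a single-volume shadow copy of C lands its diff area on D (most free space), unless an existing or manual association overrides it.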

Can VSS create shadow copies of non-NTFS volumes?


Yes. However, persistent shadow copies can be made only for NTFS volumes. In addition,
at least one volume mounted on the system must be an NTFS volume.

What's the maximum number of shadow copies I can create at one time?
The maximum number of shadow copied volumes in a single shadow copy set is 64.
Note that this is not the same as the number of shadow copies.
What's the maximum number of software shadow copies created by the system provider that I can maintain for a volume?

The maximum number of software shadow copies for each volume is 512. However, by
default you can only maintain 64 shadow copies that are used by the Shadow Copies of
Shared Folders feature. To change the limit for the Shadow Copies of Shared Folders
feature, use the following registry key: MaxShadowCopies.

How can I control the space that is used for shadow copy storage?
Type the vssadmin resize shadowstorage command.

For more information, see Vssadmin resize shadowstorage
(https://go.microsoft.com/fwlink/?LinkId=180906 ) on TechNet.

What happens when I run out of space?


Shadow copies for the volume are deleted, beginning with the oldest shadow copy.
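
This oldest-first eviction can be sketched in a few lines. The function below is an illustrative model, not the actual VSS implementation; timestamps and sizes are invented example data.

```python
def reclaim_space(shadow_copies, bytes_needed):
    """Free at least bytes_needed by deleting the oldest copies first.

    shadow_copies: list of (created_at, size_bytes) tuples, in any order.
    Returns (survivors, deleted).
    """
    remaining = sorted(shadow_copies)   # oldest first, by timestamp
    deleted, freed = [], 0
    while remaining and freed < bytes_needed:
        victim = remaining.pop(0)       # the oldest copy is deleted first
        deleted.append(victim)
        freed += victim[1]
    return remaining, deleted
```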

Volume Shadow Copy Service Tools


The Windows operating system provides the following tools for working with VSS:

DiskShadow (https://go.microsoft.com/fwlink/?LinkId=180907 )

VssAdmin (https://go.microsoft.com/fwlink/?LinkId=84008 )

DiskShadow
DiskShadow is a VSS requester that you can use to manage all the hardware and
software snapshots that you can have on a system. DiskShadow includes commands
such as the following:

list: Lists VSS writers, VSS providers, and shadow copies

create: Creates a new shadow copy

import: Imports a transportable shadow copy

expose: Exposes a persistent shadow copy (as a drive letter, for example)

revert: Reverts a volume back to a specified shadow copy

This tool is intended for use by IT professionals, but developers might also find it useful
when testing a VSS writer or VSS provider.

DiskShadow is available only on Windows Server operating systems. It is not available
on Windows client operating systems.

VssAdmin
VssAdmin is used to create, delete, and list information about shadow copies. It can also
be used to resize the shadow copy storage area ("diff area").

VssAdmin includes commands such as the following:

create shadow: Creates a new shadow copy

delete shadows: Deletes shadow copies

list providers: Lists all registered VSS providers

list writers: Lists all subscribed VSS writers

resize shadowstorage: Changes the maximum size of the shadow copy storage
area

VssAdmin can only be used to administer shadow copies that are created by the system
software provider.

VssAdmin is available on Windows client and Windows Server operating system
versions.

Volume Shadow Copy Service Registry Keys


The following registry keys are available for use with VSS:

VssAccessControl

MaxShadowCopies

MinDiffAreaFileSize

VssAccessControl
This key is used to specify which users have access to shadow copies.
For more information, see the following entries on the MSDN Web site:

Security Considerations for Writers (https://go.microsoft.com/fwlink/?LinkId=157739 )

Security Considerations for Requesters (https://go.microsoft.com/fwlink/?LinkId=180908 )

MaxShadowCopies
This key specifies the maximum number of client-accessible shadow copies that can be
stored on each volume of the computer. Client-accessible shadow copies are used by
Shadow Copies for Shared Folders.

For more information, see the following entry on the MSDN Web site:

MaxShadowCopies under Registry Keys for Backup and Restore
(https://go.microsoft.com/fwlink/?LinkId=180909 )

MinDiffAreaFileSize
This key specifies the minimum initial size, in MB, of the shadow copy storage area.

For more information, see the following entry on the MSDN Web site:

MinDiffAreaFileSize under Registry Keys for Backup and Restore
(https://go.microsoft.com/fwlink/?LinkId=180910 )

Supported Operating System Versions


The following table lists the minimum supported operating system versions for VSS
features.

| VSS feature | Minimum supported client | Minimum supported server |
| --- | --- | --- |
| LUN resynchronization | None supported | Windows Server 2008 R2 |
| FilesNotToSnapshot registry key | Windows Vista | Windows Server 2008 |
| Transportable shadow copies | None supported | Windows Server 2003 with SP1 |
| Hardware shadow copies | None supported | Windows Server 2003 |
| Previous versions of Windows Server | Windows Vista | Windows Server 2003 |
| Fast recovery using LUN swap | None supported | Windows Server 2003 with SP1 |
| Multiple imports of hardware shadow copies (Note: this is the ability to import a shadow copy more than once. Only one import operation can be performed at a time.) | None supported | Windows Server 2008 |
| Shadow Copies for Shared Folders | None supported | Windows Server 2003 |
| Transportable auto-recovered shadow copies | None supported | Windows Server 2008 |
| Concurrent backup sessions (up to 64) | Windows XP | Windows Server 2003 |
| Single restore session concurrent with backups | Windows Vista | Windows Server 2003 with SP2 |
| Up to 8 restore sessions concurrent with backups | Windows 7 | Windows Server 2003 R2 |

Additional References
Volume Shadow Copy Service in Windows Developer Center
Use Disk Cleanup on Windows Server
Article • 03/13/2023

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016, Windows
Server 2012 R2, Windows Server 2012, Windows Server 2008 R2

The Disk Cleanup tool clears unnecessary files in a Windows Server environment. This tool is
available by default on Windows Server 2019 and Windows Server 2016, but you might have to
take a few manual steps to enable it on earlier versions of Windows Server.

To start the Disk Cleanup tool, either run the Cleanmgr.exe file, or select Start > Windows
Administrative Tools > Disk Cleanup.

You can also run Disk Cleanup by using the cleanmgr Windows command, and use command-
line options to direct Disk Cleanup to clean certain files.

7 Note

If you're just looking to free up disk space, consider using Azure File Sync with cloud tiering
enabled. This method lets you cache your most frequently accessed files locally and tier
your least frequently accessed files to the cloud, saving local storage space while
maintaining performance. For more information, see Planning for an Azure File Sync
deployment.

Enable Disk Cleanup on an earlier version of Windows Server
Follow these steps to use the Add Roles and Features Wizard to install the Desktop Experience
on a server running Windows Server 2012 R2 or earlier. This process also installs the Disk
Cleanup tool.

1. If Server Manager is already open, go to the next step. If Server Manager isn't open yet,
launch it by doing one of the following options.

On the Windows desktop, select Server Manager in the Windows taskbar.

On the Windows Start menu, select the Server Manager tile.

2. On the Manage menu, select Add Roles and Features.

3. On the Before you begin page, verify that your destination server and network
environment are prepared for the feature that you want to install. Select Next.
4. On the Select installation type page, select Role-based or feature-based installation to
install all parts of the feature on a single server. Select Next.

5. On the Select destination server page, select a server from the server pool, or select an
offline VHD. Select Next.

6. On the Select server roles page, select Next.

7. On the Select features page, select User Interface and Infrastructure, and then select
Desktop Experience.

8. In Add features that are required for Desktop Experience?, select Add Features.

9. Finish the installation, and then reboot the system.

10. Verify that the Disk Cleanup button appears in the Properties dialog box.

Manually add Disk Cleanup to an earlier version of Windows Server
The Disk Cleanup tool (cleanmgr.exe) isn't present on Windows Server 2012 R2 or earlier unless
you have the Desktop Experience feature installed.

To use cleanmgr.exe, install the Desktop Experience as described earlier, or copy two files that
are already present on the server, cleanmgr.exe and cleanmgr.exe.mui. Use the following table
to locate the files for your operating system.

| Operating System | Architecture | File Location |
| --- | --- | --- |
| Windows Server 2008 R2 | 64-bit | C:\Windows\winsxs\amd64_microsoft-windows-cleanmgr_31bf3856ad364e35_6.1.7600.16385_none_c9392808773cd7da\cleanmgr.exe |
| Windows Server 2008 R2 | 64-bit | C:\Windows\winsxs\amd64_microsoft-windows-cleanmgr.resources_31bf3856ad364e35_6.1.7600.16385_en-us_b9cb6194b257cc63\cleanmgr.exe.mui |

Locate cleanmgr.exe and move the file to %systemroot%\System32.

Locate cleanmgr.exe.mui and move the files to %systemroot%\System32\en-US.

You can launch the Disk Cleanup tool by running Cleanmgr.exe from a Command Prompt
window, or by selecting Start and entering Cleanmgr in the search field.

To set up the Disk Cleanup button to appear on a disk's Properties dialog, you need to install
the Desktop Experience feature, as shown in the previous section.

Related links
Free up drive space in Windows 10
cleanmgr
Advanced Troubleshooting Server
Message Block (SMB)
Article • 12/13/2022

Try our Virtual Agent - It can help you quickly identify and fix common SMB issues.

Server Message Block (SMB) is a network transport protocol for file systems operations
to enable a client to access resources on a server. The primary purpose of the SMB
protocol is to enable remote file system access between two systems over TCP/IP.

SMB troubleshooting can be extremely complex. This article isn't an exhaustive
troubleshooting guide. Instead, it's a short primer to understand the basics of how to
effectively troubleshoot SMB.

Tools and data collection


One key aspect of quality SMB troubleshooting is communicating the correct
terminology. Therefore, this article introduces basic SMB terminology to ensure accuracy
of data collection and analysis.

7 Note

The SMB Server (SRV) refers to the system that is hosting the file system, also
known as the file server. The SMB Client (CLI) refers to the system that is trying to
access the file system, regardless of the OS version or edition.

For example, if you use Windows Server 2016 to reach an SMB share that is hosted on
Windows 10, Windows Server 2016 is the SMB Client and Windows 10 is the SMB Server.

Collect data
Before you troubleshoot SMB issues, we recommend that you first collect a network
trace on both the client and server sides. The following guidelines apply:

On Windows systems, you can use netshell (netsh), Network Monitor, Message
Analyzer, or Wireshark to collect a network trace.

Third-party devices generally have an in-box packet capture tool, such as tcpdump
(Linux/FreeBSD/Unix), or pktt (NetApp). For example, if the SMB client or SMB
server is a Unix host, you can collect data by running the following command:

Bash

# tcpdump -s0 -n -i any -w /tmp/$(hostname)-smbtrace.pcap

Stop collecting data by using Ctrl+C from keyboard.

To discover the source of the issue, you can check the two-sided traces: CLI, SRV, or
somewhere in between.

Using netshell to collect data

This section provides the steps for using netshell to collect network trace.

) Important

The Microsoft Message Analyzer tool has been retired and we recommend
Wireshark to analyze ETL files. For those who have downloaded the tool
previously and are looking for more information, see Installing and upgrading
Message Analyzer.

7 Note

A Netsh trace creates an ETL file. ETL files can be opened in Message Analyzer (MA), Network Monitor 3.4 (set the parser to Network Monitor Parsers > Windows), and Wireshark.

1. On both the SMB server and SMB client, create a Temp folder on drive C. Then, run
the following command:

Windows Command Prompt

netsh trace start capture=yes report=yes scenario=NetConnection level=5 maxsize=1024 tracefile=c:\Temp\%computername%_nettrace.etl

If you are using PowerShell, run the following cmdlets:

PowerShell

New-NetEventSession -Name trace -LocalFilePath "C:\Temp\$env:computername`_netCap.etl" -MaxFileSize 1024
Add-NetEventPacketCaptureProvider -SessionName trace -TruncationLength 1500
Start-NetEventSession trace

2. Reproduce the issue.

3. Stop the trace by running the following command:

Windows Command Prompt

netsh trace stop

If you are using PowerShell, run the following cmdlets:

PowerShell

Stop-NetEventSession trace
Remove-NetEventSession trace

7 Note

You should trace only a minimum amount of the data that's transferred. For
performance issues, always take both a good and bad trace, if the situation allows
it.

Analyze the traffic


SMB is an application-level protocol that uses TCP/IP as the network transport protocol.
Therefore, an SMB issue can also be caused by TCP/IP issues.

Check whether TCP/IP experiences any of these issues:

1. The TCP three-way handshake does not finish. This typically indicates that there is
a firewall block, or that the Server service is not running.

2. Retransmits are occurring. These can cause slow file transfers because of
compound TCP congestion throttling.

3. Five retransmits followed by a TCP reset could mean that the connection between
systems was lost, or that one of the SMB services crashed or stopped responding.
4. The TCP receive window is diminishing. This can be caused by slow storage or
some other issue that prevents data from being retrieved from the Ancillary
Function Driver (AFD) Winsock buffer.
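Before analyzing traces in depth, a quick reachability check of TCP port 445 from the SMB client can rule out the most common blockers. A minimal sketch (the server name is a placeholder):

PowerShell

# TcpTestSucceeded = False points to a firewall block or a stopped Server service
Test-NetConnection -ComputerName fileserver01 -Port 445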

If there is no noticeable TCP/IP issue, look for SMB errors. To do this, follow these steps:

1. Always check SMB errors against the MS-SMB2 protocol specification. Many SMB
errors are benign (not harmful). Refer to the following information to determine
why SMB returned the error before you conclude that the error is related to any of
the following issues:

The MS-SMB2 Message Syntax article details each SMB command and its
options.

The MS-SMB2 Client Processing article details how the SMB client creates
requests and responds to server messages.

The MS-SMB2 Server Processing article details how the SMB server creates
requests and responds to client requests.

2. Check whether a TCP reset command is sent immediately after an FSCTL_VALIDATE_NEGOTIATE_INFO (validate negotiate) command. If so, refer to the following information:

The SMB session must be terminated (TCP reset) when the Validate Negotiate
process fails on either the client or the server.

This process might fail because a WAN optimizer is modifying the SMB
Negotiate packet.

If the connection ended prematurely, identify the last exchanged communication between the client and server.

Analyze the protocol


Look at the actual SMB protocol details in the network trace to understand the exact
commands and options that are used.

Remember that SMB does only what it is told to do.

You can learn a lot about what the application is trying to do by examining the
SMB commands.
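On a live system, you can also confirm what each client connection actually negotiated before digging into the trace. A hedged sketch using the in-box SmbShare module:

PowerShell

# Shows the negotiated dialect and credentials for each active connection
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect, UserName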

Compare the commands and operations to the protocol specification to make sure that
everything is operating correctly. If it is not, collect data that is closer to or at a lower
level to look for more information about the root cause. To do this, follow these steps:

1. Collect a standard packet capture.

2. Run the netsh command to trace and gather details about whether there are issues
in the network stack or drops in Windows Filtering Platform (WFP) applications,
such as firewall or antivirus program.

3. If all other options fail, collect a t.cmd if you suspect that the issue occurs within
SMB itself, or if none of the other data is sufficient to identify a root cause.

For example:

You experience slow file transfers to a single file server.

The two-sided traces show that the SRV responds slowly to a READ request.

Removing an antivirus program resolves the slow file transfers.

You contact the antivirus program manufacturer to resolve the issue.

7 Note

Optionally, you might also temporarily uninstall the antivirus program during
troubleshooting.

Event logs
Both the SMB Client and the SMB Server have a detailed event log structure. Collect the event logs to help find the root cause of the issue.
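For example, the following sketch pulls recent entries from the detailed SMB logs (channel names as shipped in current Windows versions; adjust if your build differs):

PowerShell

# SMB Client logs
Get-WinEvent -LogName "Microsoft-Windows-SmbClient/Connectivity","Microsoft-Windows-SmbClient/Security" -MaxEvents 100

# SMB Server logs
Get-WinEvent -LogName "Microsoft-Windows-SMBServer/Operational","Microsoft-Windows-SMBServer/Security" -MaxEvents 100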

SMB-related system files


This section lists the SMB-related system files. To keep the system files updated, make
sure that the latest update rollup is installed.

SMB Client binaries that are listed under %windir%\system32\Drivers:

RDBSS.sys

MRXSMB.sys

MRXSMB10.sys

MRXSMB20.sys

MUP.sys

SMBdirect.sys

SMB Server binaries that are listed under %windir%\system32\Drivers:

SRVNET.sys

SRV.sys

SRV2.sys

SMBdirect.sys

Binaries that are listed under %windir%\system32:

srvsvc.dll
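As a hedged sketch, you can list the versions of these binaries to compare them against the latest update rollup:

PowerShell

# List name, timestamp, and file version of the SMB client and server binaries
Get-Item "$env:windir\System32\Drivers\mrxsmb*.sys",
         "$env:windir\System32\Drivers\srv*.sys",
         "$env:windir\System32\srvsvc.dll" |
    Select-Object Name, LastWriteTime, @{Name='Version'; Expression={$_.VersionInfo.FileVersion}}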

Update suggestions
We recommend that you update the following components before you troubleshoot
SMB issues:

A file server requires file storage. If your storage uses iSCSI components, update those components.

Update the network components.

For better performance and stability, update Windows Core.

Reference
Microsoft SMB Protocol Packet Exchange Scenario
How to detect, enable and disable
SMBv1, SMBv2, and SMBv3 in Windows
Article • 05/18/2023

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows 11, Windows 10,
Windows 8.1, Windows 8

This article describes how to enable and disable Server Message Block (SMB) version 1
(SMBv1), SMB version 2 (SMBv2), and SMB version 3 (SMBv3) on the SMB client and
server components.

While disabling or removing SMBv1 might cause some compatibility issues with old computers or software, SMBv1 has significant security vulnerabilities, and we strongly encourage you not to use it. SMB 1.0 isn't installed by default in any edition of Windows 11 or Windows Server 2019 and later. SMB 1.0 also isn't installed by default in Windows 10, except Home and Pro editions. We recommend that instead of reinstalling SMB 1.0, you update the SMB server that still requires it. For a list of third parties that require SMB 1.0 and their updates that remove the requirement, review the SMB1 Product Clearinghouse.

Disabling SMBv2 or SMBv3 for troubleshooting


We recommend keeping SMBv2 and SMBv3 enabled, but you might find it useful to
disable one temporarily for troubleshooting. For more information, see How to detect
status, enable, and disable SMB protocols on the SMB Server.

In Windows 10, Windows 8.1, Windows Server 2019, Windows Server 2016, Windows
Server 2012 R2, and Windows Server 2012, disabling SMBv3 deactivates the following
functionality:

Transparent Failover - clients reconnect without interruption to cluster nodes


during maintenance or failover
Scale Out - concurrent access to shared data on all file cluster nodes
Multichannel - aggregation of network bandwidth and fault tolerance if multiple
paths are available between client and server
SMB Direct - adds RDMA networking support for high performance, with low
latency and low CPU use
Encryption - Provides end-to-end encryption and protects from eavesdropping on
untrustworthy networks
Directory Leasing - Improves application response times in branch offices through
caching
Performance Optimizations - optimizations for small random read/write I/O

In Windows 7 and Windows Server 2008 R2, disabling SMBv2 deactivates the following
functionality:

Request compounding - allows for sending multiple SMBv2 requests as a single


network request
Larger reads and writes - better use of faster networks
Caching of folder and file properties - clients keep local copies of folders and files
Durable handles - allow for connection to transparently reconnect to the server if
there's a temporary disconnection
Improved message signing - HMAC SHA-256 replaces MD5 as hashing algorithm
Improved scalability for file sharing - number of users, shares, and open files per
server greatly increased
Support for symbolic links
Client oplock leasing model - limits the data transferred between the client and
server, improving performance on high-latency networks and increasing SMB
server scalability
Large MTU support - for full use of 10 Gigabit Ethernet (GbE)
Improved energy efficiency - clients that have open files to a server can sleep

The SMBv2 protocol was introduced in Windows Vista and Windows Server 2008, while
the SMBv3 protocol was introduced in Windows 8 and Windows Server 2012. For more
information about SMBv2 and SMBv3 capabilities, see the following articles:

Server Message Block overview


What's New in SMB

How to remove SMBv1 via PowerShell


Here are the steps to detect, disable and enable SMBv1 client and server by using
PowerShell commands with elevation.

7 Note

The computer will restart after you run the PowerShell commands to disable or
enable SMBv1.
Detect:

PowerShell

Get-WindowsOptionalFeature -Online -FeatureName SMB1Protocol

Disable:

PowerShell

Disable-WindowsOptionalFeature -Online -FeatureName SMB1Protocol

Enable:

PowerShell

Enable-WindowsOptionalFeature -Online -FeatureName SMB1Protocol

 Tip

You can detect SMBv1 status, without elevation, by running: Get-SmbServerConfiguration | Format-List EnableSMB1Protocol.

Windows Server 2012, Windows Server 2012 R2, Windows Server 2016, Windows Server 2019: Server Manager method
To remove SMBv1 from Windows Server:

1. On the Server Manager Dashboard of the server where you want to remove
SMBv1, under Configure this local server, select Add roles and features.
2. On the Before you begin page, select Start the Remove Roles and Features
Wizard, and then on the following page, select Next.
3. On the Select destination server page under Server Pool, ensure that the server
you want to remove the feature from is selected, and then select Next.
4. On the Remove server roles page, select Next.
5. On the Remove features page, clear the check box for SMB 1.0/CIFS File Sharing
Support and select Next.
6. On the Confirm removal selections page, confirm that the feature is listed, and
then select Remove.

Windows 8.1, Windows 10, and Windows 11: Add or


Remove Programs method
To disable SMBv1 for the mentioned operating systems:

1. In Control Panel, select Programs and Features.


2. Under Control Panel Home, select Turn Windows features on or off to open the
Windows Features box.
3. In the Windows Features box, scroll down the list, clear the check box for SMB
1.0/CIFS File Sharing Support and select OK.
4. After Windows applies the change, on the confirmation page, select Restart now.

How to detect status, enable, and disable SMB


protocols

7 Note

When you enable or disable SMBv2 in Windows 8 or Windows Server 2012, SMBv3
is also enabled or disabled. This behavior occurs because these protocols share the
same stack.

Server

Windows 8 and Windows Server 2012 introduced the new Set-SmbServerConfiguration Windows PowerShell cmdlet. The cmdlet enables you to enable or disable the SMBv1, SMBv2, and SMBv3 protocols on the server component.

You don't have to restart the computer after you run the Set-SmbServerConfiguration cmdlet.

SMBv1
Detect:

PowerShell

Get-SmbServerConfiguration | Select EnableSMB1Protocol

Disable:

PowerShell

Set-SmbServerConfiguration -EnableSMB1Protocol $false

Enable:

PowerShell

Set-SmbServerConfiguration -EnableSMB1Protocol $true

For more information, see Server storage at Microsoft.

SMB v2/v3
Detect:

PowerShell

Get-SmbServerConfiguration | Select EnableSMB2Protocol

Disable:

PowerShell

Set-SmbServerConfiguration -EnableSMB2Protocol $false

Enable:

PowerShell
Set-SmbServerConfiguration -EnableSMB2Protocol $true

For Windows 7, Windows Server 2008 R2, Windows Vista, and Windows Server 2008
To enable or disable SMB protocols on an SMB Server that is running Windows 7,
Windows Server 2008 R2, Windows Vista, or Windows Server 2008, use Windows
PowerShell or Registry Editor.

Additional PowerShell methods

7 Note

This method requires PowerShell 2.0 or later.

SMBv1 on SMB Server

Detect:

PowerShell

Get-Item HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters
| ForEach-Object {Get-ItemProperty $_.pspath}

Default configuration = Enabled (no registry named value is created), so no SMB1 value is returned.

Disable:

PowerShell

Set-ItemProperty -Path
"HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" SMB1 -
Type DWORD -Value 0 -Force

Enable:

PowerShell

Set-ItemProperty -Path
"HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" SMB1 -
Type DWORD -Value 1 -Force

Note: You must restart the computer after you make these changes. For more information, see Server storage at Microsoft.

SMBv2/v3 on SMB Server

Detect:

PowerShell

Get-ItemProperty
HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters |
ForEach-Object {Get-ItemProperty $_.pspath}

Disable:

PowerShell

Set-ItemProperty -Path
"HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" SMB2 -
Type DWORD -Value 0 -Force

Enable:

PowerShell

Set-ItemProperty -Path
"HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" SMB2 -
Type DWORD -Value 1 -Force

7 Note

You must restart the computer after you make these changes.

Registry Editor

) Important

Follow the steps in this section carefully. Serious problems might occur if you
modify the registry incorrectly. Before you modify it, back up the registry for
restoration in case problems occur.
To enable or disable SMBv1 on the SMB server, configure the following registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters

Registry entry: SMB1


REG_DWORD: 0 = Disabled
REG_DWORD: 1 = Enabled
Default: 1 = Enabled (No registry key is created)

To enable or disable SMBv2 on the SMB server, configure the following registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters

Registry entry: SMB2


REG_DWORD: 0 = Disabled
REG_DWORD: 1 = Enabled
Default: 1 = Enabled (No registry key is created)

7 Note

You must restart the computer after you make these changes.

Disable SMBv1 by using Group Policy


This section introduces how to use Group Policy to disable SMBv1. You can use this
method on different versions of Windows.

Server

SMBv1
This procedure configures the following new item in the registry:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters

Registry entry: SMB1


REG_DWORD: 0 = Disabled

To use Group Policy to configure this, follow these steps:

1. Open the Group Policy Management Console. Right-click the Group Policy
object (GPO) that should contain the new preference item, and then click Edit.

2. In the console tree under Computer Configuration, expand the Preferences folder, and then expand the Windows Settings folder.

3. Right-click the Registry node, point to New, and select Registry Item.

In the New Registry Properties dialog box, select the following:

Action: Create
Hive: HKEY_LOCAL_MACHINE
Key Path: SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters
Value name: SMB1
Value type: REG_DWORD
Value data: 0
This procedure disables the SMBv1 Server components. This Group Policy must be
applied to all necessary workstations, servers, and domain controllers in the
domain.

7 Note

WMI filters can also be set to exclude unsupported operating systems or selected exclusions, such as Windows XP.

) Important

Be careful when you make these changes on domain controllers on which legacy Windows XP or older Linux and third-party systems (that don't support SMBv2 or SMBv3) require access to SYSVOL or other file shares where SMBv1 is being disabled.

Auditing SMBv1 usage


To determine which clients are attempting to connect to an SMB server with SMBv1, you
can enable auditing on Windows Server 2016, Windows 10, and Windows Server 2019.
You can also audit on Windows 7 and Windows Server 2008 R2 if the May 2018 monthly
update is installed, and on Windows 8.1 and Windows Server 2012 R2 if the July 2017
monthly update is installed.

Enable:

PowerShell

Set-SmbServerConfiguration -AuditSmb1Access $true

Disable:

PowerShell

Set-SmbServerConfiguration -AuditSmb1Access $false

Detect:

PowerShell

Get-SmbServerConfiguration | Select AuditSmb1Access

When SMBv1 auditing is enabled, event 3000 appears in the "Microsoft-Windows-SMBServer\Audit" event log, identifying each client that attempts to connect with SMBv1.
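After auditing has run for a while, you can summarize the clients that attempted SMBv1 connections. A minimal sketch:

PowerShell

# List each audited SMBv1 connection attempt (event 3000)
Get-WinEvent -LogName "Microsoft-Windows-SMBServer/Audit" |
    Where-Object Id -eq 3000 |
    Select-Object TimeCreated, Message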

Summary
If all the settings are in the same GPO, Group Policy Management displays the configured registry settings together.
Testing and validation
After completing the configuration steps in this article, allow the policy to replicate and
update. As necessary for testing, run gpupdate /force at a command prompt, and then
review the target computers to make sure that the registry settings are applied correctly.
Make sure SMBv2 and SMBv3 are functioning for all other systems in the environment.

7 Note

Don't forget to restart the target systems.


SMBv1 is not installed by default in
Windows 10 version 1709, Windows
Server version 1709 and later versions
Article • 05/18/2023

Summary
Since Windows 10 Fall Creators Update and Windows Server, version 1709 (RS3), the
Server Message Block version 1 (SMBv1) network protocol is no longer installed by
default. It was superseded by SMBv2 and later protocols starting in 2007. Microsoft
publicly deprecated the SMBv1 protocol in 2014.

SMBv1 has the following behavior in Windows 10 and Windows Server 2019 and later
versions:

SMBv1 now has both client and server sub-features that can be uninstalled
separately.
Windows 10 Enterprise, Windows 10 Education, and Windows 10 Pro for
Workstations no longer contain the SMBv1 client or server by default after a clean
installation.
Windows Server 2019 and later no longer contains the SMBv1 client or server by
default after a clean installation.
Windows 10 Home and Windows 10 Pro no longer contain the SMBv1 server by
default after a clean installation.
Windows 11 doesn't contain the SMBv1 server or client by default after a clean
installation.
Windows 10 Home and Windows 10 Pro still contain the SMBv1 client by default
after a clean installation. If the SMBv1 client isn't used for 15 days in total
(excluding the computer being turned off), it automatically uninstalls itself.
In-place upgrades and Insider flights of Windows 10 Home and Windows 10 Pro don't automatically remove SMBv1 initially. Windows evaluates the usage of the SMBv1 client and server, and if either of them isn't used for 15 days in total (excluding the time during which the computer is off), Windows automatically uninstalls it.
In-place upgrades and Insider flights of the Windows 10 Enterprise, Windows 10
Education, and Windows 10 Pro for Workstations editions don't automatically
remove SMBv1. An administrator must decide to uninstall SMBv1 in these
managed environments.
Automatic removal of SMBv1 after 15 days is a one-time operation. If an
administrator re-installs SMBv1, no further attempts will be made to uninstall it.
The SMB version 2.02, 2.1, 3.0, 3.02, and 3.1.1 features are still fully supported and
included by default as part of the SMBv2 binaries.
Because the Computer Browser service relies on SMBv1, the service is uninstalled if
the SMBv1 client or server is uninstalled. This means that Explorer Network can no
longer display Windows computers through the legacy NetBIOS datagram
browsing method.
SMBv1 can still be reinstalled in all editions of Windows 10 and Windows Server
2016.
Windows Server virtual machines created by Microsoft for the Azure Marketplace don't contain the SMB1 binaries, and you can't enable SMB1. Third-party Azure Marketplace VMs may contain SMB1; contact their vendor for information.

Starting in Windows 10, version 1809 (RS5), Windows 10 Pro no longer contains the
SMBv1 client by default after a clean installation. All other behaviors from version 1709
still apply.

7 Note

Windows 10, version 1803 (RS4) Pro handles SMBv1 in the same manner
as Windows 10, version 1703 (RS2) and Windows 10, version 1607 (RS1). This issue
was fixed in Windows 10, version 1809 (RS5). You can still uninstall SMBv1
manually. However, Windows will not automatically uninstall SMBv1 after 15 days in
the following scenarios:

You do a clean install of Windows 10, version 1803.


You upgrade Windows 10, version 1607 or Windows 10, version 1703 to
Windows 10, version 1803 directly without first upgrading to Windows 10,
version 1709.

If you try to connect to devices that support only SMBv1, or if these devices try to
connect to you, you may receive one of the following error messages:

Output

You can't connect to the file share because it's not secure. This share
requires the obsolete SMB1 protocol, which is unsafe and could expose your
system to attack.
Your system requires SMB2 or higher. For more info on resolving this issue,
see: https://go.microsoft.com/fwlink/?linkid=852747
Output

The specified network name is no longer available.

Output

Unspecified error 0x80004005

Output

System Error 64

Output

The specified server cannot perform the requested operation.

Output

Error 58

When a remote server requires an SMBv1 connection from this client, and the SMBv1 client is installed, the following event is logged. This mechanism audits the use of SMBv1, and is also used by the automatic uninstaller to set the 15-day timer for removing SMBv1 because of lack of use.

Output

Log Name: Microsoft-Windows-SmbClient/Security


Source: Microsoft-Windows-SMBClient
Date: Date/Time
Event ID: 32002
Task Category: None
Level: Info
Keywords: (128)
User: NETWORK SERVICE
Computer: junkle.contoso.com
Description:
The local computer received an SMB1 negotiate response.

Dialect:
SecurityMode
Server name:

Guidance:
SMB1 is deprecated and should not be installed nor enabled. For more
information, see https://go.microsoft.com/fwlink/?linkid=852747.
When a remote server requires an SMBv1 connection from this client, and the SMBv1 client isn't installed, the following event is logged. This event shows why the connection fails.

Output

Log Name: Microsoft-Windows-SmbClient/Security


Source: Microsoft-Windows-SMBClient
Date: Date/Time
Event ID: 32000
Task Category: None
Level: Info
Keywords: (128)
User: NETWORK SERVICE
Computer: junkle.contoso.com
Description:
SMB1 negotiate response received from remote device when SMB1 cannot be
negotiated by the local computer.
Dialect:
Server name:

Guidance:
The client has SMB1 disabled or uninstalled. For more information:
https://go.microsoft.com/fwlink/?linkid=852747.

These devices aren't likely running Windows. They are more likely running older versions
of Linux, Samba, or other types of third-party software to provide SMB services. Often,
these versions of Linux and Samba are, themselves, no longer supported.

7 Note

Windows 10, version 1709 is also known as "Fall Creators Update."

More Information
To work around this issue, contact the manufacturer of the product that supports only SMBv1, and request a software or firmware update that supports SMBv2.02 or a later version. For a current list of known vendors and their SMBv1 requirements, see the following Windows and Windows Server Storage Engineering Team Blog article:

SMBv1 Product Clearinghouse

Leasing mode
If SMBv1 is required to provide application compatibility for legacy software behavior,
such as a requirement to disable oplocks, Windows provides a new SMB share flag that's
known as Leasing mode. This flag specifies whether a share disables modern SMB
semantics such as leases and oplocks.

You can specify a share without using oplocks or leasing to allow a legacy application to
work with SMBv2 or a later version. To do this, use the New-SmbShare or Set-SmbShare
PowerShell cmdlets together with the -LeasingMode None parameter.
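For example, the following sketch creates a share with leases and oplocks disabled (the share name and path are placeholders, not values from this article):

PowerShell

# Create a new share for the legacy application without leases or oplocks
New-SmbShare -Name "LegacyApp" -Path "C:\Shares\LegacyApp" -LeasingMode None

# Or change an existing share
Set-SmbShare -Name "LegacyApp" -LeasingMode None -Force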

7 Note

You should use this option only on shares that are required by a third-party
application for legacy support if the vendor states that it is required. Do not specify
Leasing mode on user data shares or CA shares that are used by Scale-Out File
Servers. This is because the removal of oplocks and leases causes instability and
data corruption in most applications. Leasing mode works only in Share mode. It
can be used by any client operating system.

Explorer Network Browsing


The Computer Browser service relies on the SMBv1 protocol to populate the Windows
Explorer Network node (also known as "Network Neighborhood"). This legacy protocol
is long deprecated, doesn't route, and has limited security. Because the service can't
function without SMBv1, it's removed at the same time.

However, if you still have to use the Explorer Network in home and small business
workgroup environments to locate Windows-based computers, you can follow these
steps on your Windows-based computers that no longer use SMBv1:

1. Start the "Function Discovery Provider Host" and "Function Discovery Resource
Publication" services, and then set them to Automatic (Delayed Start).

2. When you open Explorer Network, enable network discovery when you're
prompted.
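Step 1 can also be scripted. A hedged sketch, assuming the default service short names fdPHost (Function Discovery Provider Host) and FDResPub (Function Discovery Resource Publication); run from an elevated Command Prompt window:

Windows Command Prompt

sc.exe config fdPHost start= delayed-auto
sc.exe config FDResPub start= delayed-auto
net start fdPHost
net start FDResPub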

All Windows devices within that subnet that have these settings will now appear in
Network for browsing. This uses the WS-DISCOVERY protocol. Contact your other
vendors and manufacturers if their devices still don't appear in this browse list after the
Windows devices appear. It's possible they have this protocol disabled or that they
support only SMBv1.
7 Note

We recommend that you map drives and printers instead of enabling this feature,
which still requires searching and browsing for their devices. Mapped resources are
easier to locate, require less training, and are safer to use. This is especially true if
these resources are provided automatically through Group Policy. An administrator
can configure printers for location by methods other than the legacy Computer
Browser service by using IP addresses, Active Directory Domain Services (AD DS),
Bonjour, mDNS, uPnP, and so on.

If you can't use any of these workarounds, or if the application manufacturer can't provide supported versions of SMB, you can re-enable SMBv1 manually by following the steps in How to detect, enable and disable SMBv1, SMBv2, and SMBv3 in Windows.

) Important

We strongly recommend that you don't reinstall SMBv1. This is because this older
protocol has known security issues regarding ransomware and other malware.

Windows Server best practices analyzer messaging


Windows Server 2012 and later server operating systems contain a best practices analyzer (BPA) for file servers. If you've followed the correct online guidance to uninstall SMBv1, running this BPA returns a contradictory warning message:

Output

Title: The SMB 1.0 file sharing protocol should be enabled


Severity: Warning
Date: 3/25/2020 12:38:47 PM
Category: Configuration
Problem: The Server Message Block 1.0 (SMB 1.0) file sharing protocol is
disabled on this file server.
Impact: SMB not in a default configuration, which could lead to less than
optimal behavior.
Resolution: Use Registry Editor to enable the SMB 1.0 protocol.

) Important
You should ignore this specific BPA rule's guidance; it's deprecated. The false error was first corrected in Windows Server 2022 and Windows Server 2019 in the April 2022 cumulative update. We repeat: don't enable SMB 1.0.

Additional references
Stop using SMB1
SMB known issues
Article • 05/22/2020

The following topics describe some common troubleshooting issues that can occur
when you use Server Message Block (SMB). These topics also provide possible solutions
to those issues.

TCP three-way handshake failure

Negotiate, Session Setup, and Tree Connect Failures

TCP connection is aborted during Validate Negotiate

Slow files transfer speed

High CPU usage issue on the SMB server

Troubleshoot the Event ID 50 Error Message

SMB Multichannel troubleshooting


TCP three-way handshake failure during
SMB connection
Article • 05/22/2020

When you analyze a network trace, you notice that there is a Transmission Control
Protocol (TCP) three-way handshake failure that causes the SMB issue to occur. This
article describes how to troubleshoot this situation.

Troubleshooting
Generally, the cause is a local or infrastructure firewall that blocks the traffic. This issue
can occur in either of the following scenarios.

Scenario 1
The TCP SYN packet arrives on the SMB server, but the SMB server does not return a
TCP SYN-ACK packet.

To troubleshoot this scenario, follow these steps.

Step 1
Run netstat or Get-NetTcpConnection to make sure that there is a listener on TCP port
445 that should be owned by the SYSTEM process.

Windows Command Prompt

netstat -ano | findstr :445

PowerShell

Get-NetTcpConnection -LocalPort 445

Step 2
Make sure that the Server service is started and running.

Step 3
Take a Windows Filtering Platform (WFP) capture to determine which rule or program is
dropping the traffic. To do this, run the following command in a Command Prompt
window:

Windows Command Prompt

netsh wfp capture start

Reproduce the issue, and then, run the following command:

Windows Command Prompt

netsh wfp capture stop

Run a scenario trace, and look for WFP drops in SMB traffic (on TCP port 445).

Optionally, you could remove the anti-virus programs because they are not always WFP-
based.

Step 4
If Windows Firewall is enabled, enable firewall logging to determine whether it records a
drop in traffic.

Make sure that the appropriate "File and Printer Sharing (SMB-In)" rules are enabled in
Windows Firewall with Advanced Security > Inbound Rules.

7 Note

Depending on how your computer is set up, "Windows Firewall" might be called
"Windows Defender Firewall."

Scenario 2
The TCP SYN packet never arrives at the SMB server.

In this scenario, you have to investigate the devices along the network path. You may
analyze network traces that are captured on each device to determine which device is
blocking the traffic.
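Before inspecting each device, it can help to confirm the failure from the client side. One possible sketch (the server name is a placeholder):

```powershell
# Sketch: test whether a TCP connection to the SMB port succeeds.
# Replace "fileserver" with the name or IP address of the SMB server.
Test-NetConnection -ComputerName fileserver -Port 445
```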
Negotiate, Session Setup, and Tree
Connect Failures
Article • 12/07/2020

This article describes how to troubleshoot the failures that occur during an SMB
Negotiate, Session Setup, and Tree Connect request.

Negotiate fails
The SMB server receives an SMB NEGOTIATE request from an SMB client. The
connection times out and is reset after 60 seconds. There may be an ACK message after
about 200 microseconds.

This problem is most often caused by an antivirus program.

If you are using Windows Server 2008 R2, there are hotfixes for this problem. Make sure
that the SMB client and the SMB server are up to date.

Session Setup fails


The SMB server receives an SMB SESSION_SETUP request from an SMB client but fails to respond.

If the fully qualified domain name (FQDN) or Network Basic Input/Output System
(NetBIOS) name of the server is used in the Universal Naming Convention (UNC) path,
Windows will use Kerberos for authentication.

After the Negotiate response, there will be an attempt to get a Kerberos ticket for the
Common Internet File System (CIFS) service principal name (SPN) of the server. Look at
the Kerberos traffic on TCP port 88 to make sure that there are no Kerberos errors when
the SMB client obtains the ticket.

7 Note

Errors that occur during the Kerberos Pre-Authentication are expected. The errors
that occur after the Kerberos Pre-Authentication (instances in which authentication
does not work) are the errors that cause the SMB problem.

Additionally, make the following checks:


Look at the security blob in the SMB SESSION_SETUP request to make sure the
correct credentials are sent.

Try to disable SMB server name hardening (SmbServerNameHardeningLevel = 0).

Make sure that the SMB server has an SPN when it is accessed through a CNAME
DNS record.

Make sure that SMB signing is working. (This is especially important for older,
third-party devices.)

Tree Connect fails


Make sure that the user account credentials have both share and NT file system (NTFS)
permissions to the folder.

The cause of common Tree Connect errors can be found in 3.3.5.7 Receiving an SMB2
TREE_CONNECT Request. The following are the solutions for two common status codes.

[STATUS_BAD_NETWORK_NAME]

Make sure that the share exists on the server, and that it is spelled correctly in the SMB
client request.

[STATUS_ACCESS_DENIED]

Verify that the disk and folder that are used by the share exist and are accessible.

If you are using SMBv3 or later, check whether the server and the share require
encryption, but the client doesn't support encryption. To do this, take the following
actions:

Check the server by running the following command.

PowerShell

Get-SmbServerConfiguration | select Encrypt*

If EncryptData and RejectUnencryptedAccess are true, the server requires encryption.

Check the share by running the following command:

PowerShell
Get-SmbShare | select name, EncryptData

If EncryptData is true on the share, and RejectUnencryptedAccess is true on the
server, encryption is required by the share.

Follow these guidelines as you troubleshoot:

Windows 8, Windows Server 2012, and later versions of Windows support client-
side encryption (SMBv3 and later).

Windows 7, Windows Server 2008 R2, and earlier versions of Windows do not
support client-side encryption.

Samba and third-party devices may not support encryption. You may have to
consult the product documentation for more information.

References
For more information, see the following articles.

3.3.5.4 Receiving an SMB2 NEGOTIATE Request

3.3.5.5 Receiving an SMB2 SESSION_SETUP Request

3.3.5.7 Receiving an SMB2 TREE_CONNECT Request


TCP connection is aborted during
Validate Negotiate
Article • 07/22/2020

In the network trace for the SMB issue, you notice that a TCP Reset abort occurred
during the Validate Negotiate process. This article describes how to troubleshoot the
situation.

Cause
This issue can be caused by a failed negotiation validation. This typically occurs because
a WAN accelerator modifies the original SMB NEGOTIATE packet.

Microsoft no longer allows modification of the Validate Negotiate packet for any reason.
This is because this behavior creates a serious security risk.

The following requirements apply to the Validate Negotiate packet:

The Validate Negotiate process uses the FSCTL_VALIDATE_NEGOTIATE_INFO
command.

The Validate Negotiate response must be signed. Otherwise, the connection is
aborted.

You should compare the FSCTL_VALIDATE_NEGOTIATE_INFO messages to the
Negotiate messages to make sure that nothing was changed.

Workaround
You can temporarily disable the Validate Negotiate process. To do this, locate the
following registry subkey:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Par
ameters

Under the Parameters key, set RequireSecureNegotiate to 0.

In Windows PowerShell, you can run the following command to set this value:

PowerShell
Set-ItemProperty -Path
"HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" -Name
RequireSecureNegotiate -Value 0 -Force

7 Note

The Validate Negotiate process cannot be disabled in Windows 10, Windows Server
2016, or later versions of Windows.

If either the client or server cannot support the Validate Negotiate command, you can
work around this issue by setting SMB signing to be required. SMB signing is considered
more secure than Validate Negotiate. However, there can also be performance
degradation if signing is required.
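As a hedged sketch, signing can be required on both sides with the built-in SMB configuration cmdlets (available on Windows 8, Windows Server 2012, and later):

```powershell
# Sketch: require SMB signing on the client and on the server.
# Note: requiring signing can reduce throughput on busy servers.
Set-SmbClientConfiguration -RequireSecuritySignature $true -Force
Set-SmbServerConfiguration -RequireSecuritySignature $true -Force
```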

Reference
For more information, see the following articles:

3.3.5.15.12 Handling a Validate Negotiate Info Request

3.2.5.14.12 Handling a Validate Negotiate Info Response


Slow SMB file transfer speed
Article • 03/28/2023

Server Message Block (SMB) file transfer speeds can slow down depending on the size
and quantity of your files, your connection type, and the version of apps you use. This
article provides troubleshooting procedures for slow file transfer speeds through SMB.

Slow transfer
You can troubleshoot slow file transfers by checking your current storage use. If you
observe slow transfers of files, consider the following steps:

Try a file copy command that uses unbuffered I/O:

xcopy /J
robocopy /J

Test the storage speed. Copy speeds are limited by storage speed.

File copies sometimes start fast and then slow down. A change in copy speed
occurs when the initial copy is cached or buffered, in memory or in the RAID
controller's memory cache, and the cache runs out. This change forces data to be
written directly to disk (write-through).

To verify this situation, use storage performance monitor counters to determine
whether storage performance degrades over time. For more information, see
Performance tuning for SMB file servers.

Use RAMMap (SysInternals) to determine whether "Mapped File" usage in memory
stops growing because of free memory exhaustion.

Look for packet loss in the trace. Packet loss can cause throttling by the TCP
congestion provider.

For SMBv3 and later versions, verify that SMB Multichannel is enabled and
working.

On the SMB client, enable large MTU in SMB, and disable bandwidth throttling by
running the following command:

PowerShell

Set-SmbClientConfiguration -EnableBandwidthThrottling 0 -EnableLargeMtu 1
Slow transfer of small files
A slow transfer of small files occurs most commonly when there are many files. This
occurrence is an expected behavior.

During file transfer, file creation causes both high protocol overhead and high file
system overhead. For large file transfers, these costs occur only one time. When a large
number of small files are transferred, the cost is repetitive and causes slow transfers.

Issue details
Network latency, create commands, and antivirus programs contribute to a slower
transfer of small files. The following are technical details about this problem:

SMB calls a create command to request that the file be created. Code checks
whether the file already exists and, if it does not, some variation of the
create command creates the actual file.

Each create command generates activity on the file system.


After the data is written, the file is closed.

The process can suffer from network latency and SMB server latency. This latency
occurs because the SMB request is first translated to a file system command and
then to the actual file system to complete the operation.

The transfer slows further while an antivirus program is running. This change
happens because the data is typically scanned once when it is read and a
second time when the data is written to disk. In some scenarios, these actions are
repeated thousands of times. You can potentially observe speeds of less than 1
MB/s.

Slow open of Office documents


Office documents can open slowly, which generally occurs on a WAN connection. The
manner in which Office apps (Microsoft Excel, in particular) access and read data is
typically what causes the documents to open slowly.

Verify that the Office and SMB binaries are up to date, and then test with leasing
disabled on the SMB server. To do this, follow these steps:
1. Run the following PowerShell command in Windows 8 and Windows Server 2012
or later versions of Windows:

PowerShell

Set-SmbServerConfiguration -EnableLeasing $false

You can also run the following command in an elevated Command Prompt
window:

Windows Command Prompt

REG ADD
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\param
eters /v DisableLeasing /t REG_DWORD /d 1 /f

7 Note

After you set this registry key, SMB2 leases are no longer granted, but oplocks
are still available. This setting is used primarily for troubleshooting.

2. Restart the file server or restart the Server service. To restart the service, run the
following commands:

Windows Command Prompt

NET STOP SERVER
NET START SERVER

To avoid this issue, you can also replicate the file to a local file server. For more
information, see Saving Office documents to a network server is slow when using EFS.
High CPU usage issue on the SMB
server
Article • 05/22/2020

This article discusses how to troubleshoot the high CPU usage issue on the SMB server.

High CPU usage because of storage performance issues
Storage performance issues can cause high CPU usage on SMB servers. Before you
troubleshoot, make sure that the latest update rollup is installed on the SMB server to
eliminate any known issues in srv2.sys.

In most cases, you will notice the issue of high CPU usage in the system process. Before
you proceed, use Process Explorer to make sure that srv2.sys or ntfs.sys is consuming
excessive CPU resources.

Storage area network (SAN) scenario


At the aggregate level, overall SAN performance may appear to be fine. However, when
you work with SMB issues, the response time of individual requests is what matters
most.

Generally, this issue can be caused by some form of command queuing in the SAN. You
can use Perfmon to capture a Microsoft-Windows-StorPort tracing, and analyze it to
accurately determine storage responsiveness.

Disk IO latency
Disk IO latency is a measure of the delay between the time that a disk IO request is
created and completed.

The IO latency that is measured in Perfmon includes all the time that is spent in the
hardware layers plus the time that is spent in the Microsoft Port Driver queue
(Storport.sys for SCSI). If the running processes generate a large StorPort queue, the
measured latency increases. This is because IO must wait before it is dispatched to the
hardware layers.

In Perfmon, the following counters show physical disk latency:


"Physical disk performance object" -> "Avg. Disk sec/Read counter" – This shows
the average read latency.

"Physical disk performance object" -> "Avg. Disk sec/Write counter" – This shows
the average write latency.

"Physical disk performance object" -> "Avg. Disk sec/Transfer counter" – This
shows the combined averages for both reads and writes.

The "_Total" instance is an average of the latencies for all physical disks in the computer.
Each of other instances represents an individual Physical Disk.

7 Note

Do not confuse these counters with Avg. Disk Transfers/sec. These are completely
different counters.
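These counters can be sampled from PowerShell as well as from the Perfmon UI. A minimal sketch:

```powershell
# Sketch: sample combined read/write latency for all physical disks,
# five samples at two-second intervals.
Get-Counter -Counter "\PhysicalDisk(*)\Avg. Disk sec/Transfer" `
    -SampleInterval 2 -MaxSamples 5
```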

The Windows storage stack


This section gives a brief explanation of how the Windows storage stack handles an IO
request.

When an application creates an IO request, it sends the request to the Windows IO
subsystem at the top of the stack. The IO then travels all the way down the stack to the
hardware "Disk" subsystem. Then, the response travels all the way back up. During this
process, each layer performs its function and then hands the IO to the next layer.
Perfmon does not generate any performance data itself. Instead, it consumes data
that is provided by other subsystems within Windows.

For the "physical disk performance object," the data is captured at the "Partition
manager" level in the storage stack.

When we measure the counters that are mentioned in the previous section, we are
measuring all the time that is spent by the request below the "Partition manager" level.
When the IO request is sent by the partition manager down the stack, we time stamp it.
When it returns, we time stamp it again and calculate the time difference. The time
difference is the latency.

By doing this, we are accounting for the time that is spent in the following components:

Class Driver - This manages the device type, such as disks, tapes, and so on.

Port Driver - This manages the transport protocol, such as SCSI, FC, SATA, and so
on.

Device Miniport Driver - This is the device driver for the Storage Adapter. It is
supplied by the manufacturer of the devices, such as Raid Controller, and FC HBA.

Disk Subsystem - This includes everything that is below the Device Miniport Driver.
This could be as simple as a cable that is connected to a single physical hard disk,
or as complex as a Storage Area Network. If the issue is determined to be caused
by this component, you can contact the hardware vendor for more information
about troubleshooting.

Disk queuing
There is a limited amount of IO that a disk subsystem can accept at a given time. The
excess IO gets queued until the disk can accept IO again. The time that IO spends in the
queues below the "Partition manager" level is accounted for in the Perfmon physical disk
latency measurements. As queues grow larger and IO must wait longer, the measured
latency also grows.

There are multiple queues below the "Partition manager" level, as follows:

Microsoft Port Driver Queue - SCSIport or Storport queue

Manufacturer Supplied Device Driver Queue - OEM Device driver

Hardware Queues – such as disk controller queue, SAN switches queue, array
controller queue, and hard disk queue

We also account for the time that the hard disk spends actively servicing the IO and the
travel time that is taken for the request to return to the "Partition manager" level to be
marked as completed.

Finally, we have to pay special attention to the Port Driver Queue (for SCSI Storport.sys).
The Port Driver is the last Microsoft component to touch an IO before we hand it off to
the manufacturer-supplied Device Miniport Driver.

If the Device Miniport Driver can't accept any more IO because its queue or the
hardware queues below it are saturated, we will start accumulating IO on the Port Driver
Queue. The size of the Microsoft Port Driver queue is limited only by the available
system memory (RAM), and it can grow very large. This causes large measured latency.

High CPU caused by enumerating folders


To troubleshoot high CPU usage that occurs when clients enumerate folders on shares,
disable the Access Based Enumeration (ABE) feature.

To determine which SMB shares have ABE enabled, run the following PowerShell
command:
PowerShell

Get-SmbShare | Select Name, FolderEnumerationMode

Unrestricted = ABE disabled.

AccessBased = ABE enabled.

You can enable or disable ABE in Server Manager. Navigate to File and Storage
Services > Shares, right-click the share, select Properties, go to Settings, and then
select or clear Enable access-based enumeration.

Also, you can reduce ABELevel to a lower level (1 or 2) to improve performance.
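As a hedged sketch of the ABELevel change (the value name and its location under the LanmanServer Parameters key are assumptions based on common ABE tuning guidance):

```powershell
# Sketch: limit ABE filtering depth to the first directory level.
# Assumption: ABELevel lives under the LanmanServer Parameters key.
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" `
    -Name ABELevel -Value 1 -Type DWord
# Restart the Server service for the change to take effect.
```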

You can check disk performance when enumeration is slow by opening the folder locally
through a console or an RDP session.
Troubleshoot the Event ID 50 Error
Message
Article • 11/03/2020

Symptoms
When information is being written to the physical disk, the following two event
messages may be logged in the system event log:

Event ID: 50
Event Type: Warning
Event Source: Ftdisk
Description: {Lost Delayed-Write Data} The system was attempting to transfer
file data from buffers to \Device\HarddiskVolume4. The write operation
failed, and only some of the data may have been written to the file.
Data:
0000: 00 00 04 00 02 00 56 00
0008: 00 00 00 00 32 00 04 80
0010: 00 00 00 00 00 00 00 00
0018: 00 00 00 00 00 00 00 00
0020: 00 00 00 00 00 00 00 00
0028: 11 00 00 80

Event ID: 26
Event Type: Information
Event Source: Application Popup
Description: Windows - Delayed Write Failed : Windows was unable to save all
the data for the file \Device\HarddiskVolume4\Program Files\Microsoft SQL
Server\MSSQL$INSTANCETWO\LOG\ERRORLOG. The data has been lost. This error
may be caused by a failure of your computer hardware or network connection.

Please try to save this file elsewhere.

These event ID messages mean exactly the same thing and are generated for the same
reasons. For the purposes of this article, only the event ID 50 message is described.

7 Note

The device and path in the description and the specific hexadecimal data will vary.
More Information
An event ID 50 message is logged if a generic error occurs when Windows is trying to
write information to the disk. This error occurs when Windows is trying to commit data
from the file system Cache Manager (not hardware level cache) to the physical disk. This
behavior is part of the memory management of Windows. For example, if a program
sends a write request, the write request is cached by Cache Manager and the program is
told the write is completed successfully. At a later point in time, Cache Manager tries to
lazy write the data to the physical disk. When Cache Manager tries to commit the data
to disk, an error occurs writing the data, and the data is flushed from the cache and
discarded. Write-back caching improves system performance, but data loss and volume
integrity loss can occur as a result of lost delayed-write failures.

It is important to remember that not all I/O is buffered I/O by Cache Manager. Programs
can set a FILE_FLAG_NO_BUFFERING flag that bypasses Cache Manager. When SQL
performs critical writes to a database, this flag is set to guarantee that the transaction is
completed directly to disk. For example, non-critical writes to log files perform buffered
I/O to improve overall performance. An event ID 50 message never results from non-
buffered I/O.

There are several different sources for an event ID 50 message. For example, an event ID
50 message logged from a MRxSmb source occurs if there is a network connectivity
problem with the redirector. To avoid performing incorrect troubleshooting steps, make
sure to review the event ID 50 message to confirm that it refers to a disk I/O issue and
that this article applies.

An event ID 50 message is similar to an event ID 9 and an event ID 11 message.
Although the error is not as serious as the error indicated by the event ID 9 and event
ID 11 messages, you can use the same troubleshooting techniques for an event ID 50
message as you do for an event ID 9 and an event ID 11 message. However, remember
that anything in the stack can cause lost delayed writes, such as filter drivers and mini-port
drivers.

You can use the binary data that is associated with any accompanying "DISK" error
(indicated by an event ID 9, 11, 51 error message or other messages) to help you in
identifying the problem.

How to Decode the Data Section of an Event ID 50 Event Message
When you decode the data section in the example of an event ID 50 message that is
included in the "Symptoms" section, you see that the attempt to perform a write
operation failed because the device was busy and the data was lost. This section
describes how to decode this event ID 50 message.

The following table describes what each offset of this message represents:

Offset Length Values

0x00 2 Not Used

0x02 2 Dump Data Size = 0x0004

0x04 2 Number of Strings = 0x0002

0x06 2 Offset to the strings

0x08 2 Event Category

0x0c 4 NTSTATUS Error Code = 0x80040032 = IO_LOST_DELAYED_WRITE

0x10 8 Not Used

0x18 8 Not Used

0x20 8 Not Used

0x28 4 NT Status error code

Key Sections to Decode


The Error Code

In the example in the "Summary" section, the error code is listed in the second line. This
line starts with "0008:" and it includes the last four bytes in this line:0008: 00 00 00 00 32
00 04 80 In this case, the error code is 0x80040032. The following code is the code for
error 50, and it is the same for all event ID 50 messages:
IO_LOST_DELAYED_WRITEWARNINGNote When you are converting the hexadecimal
data in the event ID message to the status code, remember that the values are
represented in the little-endian format.

The Target Disk

You can identify the disk that the write was attempted to by using the symbolic link to
the drive that is listed in the "Description" section of the event ID message, for example:
\Device\HarddiskVolume4.

The Final Status Code


The final status code is the most important piece of information in an event ID 50
message. This is the error code that is returned when the I/O request was made, and it is
the key source of information. In the example in the "Symptoms" section, the final status
code is listed at offset 0x28, in the sixth line, which starts with "0028:" and includes the
only four octets in this line:

0028: 11 00 00 80

In this case, the final status equals 0x80000011. This status code maps to
STATUS_DEVICE_BUSY and implies that the device is currently busy.

7 Note

When you are converting the hexadecimal data in the event ID 50 message to the
status code, remember that the values are represented in the little-endian format.
Because the status code is the only piece of information that you are interested in,
it may be easier to view the data in WORDS format instead of BYTES. If you do so,
the bytes will be in the correct format and the data may be easier to interpret
quickly.

To do so, click Words in the Event Properties window. In the Data Words view, the
example in the "Symptoms" section would read as follows:

Data:
0000: 00040000 00560002 00000000 80040032
0010: 00000000 00000000 00000000 00000000
0020: 00000000 00000000 80000011
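The little-endian conversion described above can also be checked programmatically. A minimal PowerShell sketch:

```powershell
# Sketch: decode the final status code from the four data bytes at offset 0x28.
# [BitConverter] interprets the byte array in little-endian order on x86/x64.
$bytes  = [byte[]](0x11, 0x00, 0x00, 0x80)
$status = [System.BitConverter]::ToUInt32($bytes, 0)
'0x{0:X8}' -f $status    # yields 0x80000011 (STATUS_DEVICE_BUSY)
```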

To obtain a list of Windows NT status codes, see NTSTATUS.H in the Windows Software
Developers Kit (SDK).
SMB Multichannel troubleshooting
Article • 07/22/2020

This article describes how to troubleshoot issues that are related to SMB Multichannel.

Check the network interface status


Make sure that the binding for the network interface is set to True on the SMB client
(MS_client) and SMB server (MS_server). When you run the following command, the
output should show True under Enabled for both network interfaces:

PowerShell

Get-NetAdapterBinding -ComponentID ms_server,ms_msclient

After that, make sure the network interface is listed in the output of the following
commands:

PowerShell

Get-SmbServerNetworkInterface

PowerShell

Get-SmbClientNetworkInterface

You can also run the Get-NetAdapter command to view the interface index and verify
the result. The interface index shows all the active SMB adapters that are bound to the
appropriate interface.

Check the firewall


If there is only a link-local IP address, and no publicly routable address, the network
profile is likely set to Public. This means that SMB is blocked at the firewall by default.

The following command reveals which connection profile is being used. You can also use
the Network and Sharing Center to retrieve this information.

PowerShell

Get-NetConnectionProfile
Under the File and Printer Sharing group, check the firewall inbound rules to make sure
that "SMB-In" is enabled for the correct profile.

You can also enable File and Printer Sharing in the Network and Sharing Center
window. To do this, select Change advanced sharing settings in the menu on the left,
and then select Turn on file and printer sharing for the profile. This option enables the
File and Printer Sharing firewall rules.

Capture client-side and server-side traffic for troubleshooting
You need the SMB connection tracing information that starts from the TCP three-way
handshake. We recommend that you close all applications (especially Windows Explorer)
before you start the capture. Restart the Workstation service on the SMB client, start the
packet capture, and then reproduce the issue.

Make sure that the SMBv3.x connection is being negotiated, and that nothing in
between the server and the client is affecting dialect negotiation. SMBv2 and earlier
versions don't support multichannel.
Look for the NETWORK_INTERFACE_INFO packets. This is where the SMB client requests
a list of adapters from the SMB server. If these packets aren't exchanged, multichannel
doesn't work.

The server responds by returning a list of valid network interfaces. Then, the SMB client
adds those to the list of available adapters for multichannel. At this point, multichannel
should start and, at least, try to start the connection.

For more information, see the following articles:

3.2.4.20.10 Application Requests Querying Server's Network Interfaces

2.2.32.5 NETWORK_INTERFACE_INFO Response

3.2.5.14.11 Handling a Network Interfaces Response

In the following scenarios, an adapter cannot be used:

There is a routing issue on the client. This is typically caused by an incorrect
routing table that forces traffic over the wrong interface.

Multichannel constraints have been set. For more information, see
New-SmbMultichannelConstraint.

Something blocked the network interface request and response packets.

The client and server can't communicate over the extra network interface. For
example, the TCP three-way handshake failed, the connection is blocked by a
firewall, session setup failed, and so on.

If the adapter and its IPv6 address are on the list that is sent by the server, the next step
is to see whether communications are tried over that interface. Filter the trace by the
link-local address and SMB traffic, and look for a connection attempt. If this is a
NetConnection trace, you can also examine Windows Filtering Platform (WFP) events to
see whether the connection is being blocked.
File Server Resource Manager (FSRM)
overview
Article • 03/21/2023

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2

File Server Resource Manager (FSRM) is a role service in Windows Server that enables
you to manage and classify data stored on file servers. You can use FSRM to
automatically classify files, perform tasks based on these classifications, set quotas on
folders, and create reports monitoring storage usage. In Windows Server version 1803,
FSRM adds the ability to prevent the creation of change journals.

7 Note

For new features on older versions of Windows Server, see What's New in File
Server Resource Manager.

Features
FSRM includes the following features:

Quota management: Limit the space that is allowed for a volume or folder. These
limits can be automatically applied to new folders that are created on a volume.
You can also define quota templates that can be applied to new volumes or
folders.
File Classification Infrastructure: Gain insight into your data by automating
classification processes so that you can manage your data more effectively. You
can classify files and apply policies based on this classification. Example policies
include dynamic access control for restricting access to files, file encryption, and
file expiration. Files can be classified automatically by using file classification rules
or manually by modifying the properties of a selected file or folder.
File Management Tasks: Gain the ability to apply a conditional policy or action to
files based on their classification. The conditions of a file management task include
the file location, the classification properties, the date the file was created, the last
modified date of the file, or the last time the file was accessed. The actions that a
file management task can take include the ability to expire files, encrypt files, or
run a custom command.
File screening management: Control the types of files that users can store on a
file server. You can limit the file name extensions that can be stored on your file
shares. For example, you can create a file screen that doesn't allow files with an
MP3 extension to be stored in personal shared folders on a file server.
Storage reports: Use these reports to help you identify trends in disk usage and
how your data is classified. You can also monitor a selected group of users for
attempts to save unauthorized files.

You can configure and manage the FSRM features by using the FSRM app or by using
Windows PowerShell.

) Important

FSRM supports volumes formatted with the NTFS file system only. The Resilient File
System isn't supported.

Practical applications
The following list outlines some practical applications for FSRM:

Use File Classification Infrastructure with the Dynamic Access Control scenario.
Create a policy that grants access to files and folders based on the way files are
classified on the file server.

Create a file classification rule that tags any file that contains at least 10 social
security numbers as having customer content.

Expire any file that hasn't been modified in the last 10 years.

Create a 200-MB quota for each user's home directory and notify them when
they're using 180 MB.

Disallow any music files to be stored in personal shared folders.

Schedule a report that runs every Sunday night at midnight that generates a list of
the most recently accessed files from the previous two days. This report can help
you determine the weekend storage activity and plan your server downtime
accordingly.
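The quota scenario above can be sketched with the FSRM PowerShell cmdlets. This assumes the FSRM role service and its module are installed; the template name and path are illustrative:

```powershell
# Sketch: 200-MB quota with a notification threshold at 90% (180 MB).
# Assumptions: FSRM role service installed; names and paths are examples.
$threshold = New-FsrmQuotaThreshold -Percentage 90
New-FsrmQuotaTemplate -Name "200 MB Home Quota" -Size 200MB -Threshold $threshold
New-FsrmQuota -Path "D:\Home\User1" -Template "200 MB Home Quota"
```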

What's new - prevent FSRM from creating change journals
Starting with Windows Server, version 1803, you can now prevent the FSRM service from
creating a change journal (also known as a USN journal) on volumes when the service
starts. This feature can conserve some space on each volume, but disables real-time file
classification.

To prevent FSRM from creating a change journal on some or all volumes when the
service starts, complete the following steps:

1. Stop the SRMSVC service. Open a PowerShell session as an administrator and enter
Stop-Service SrmSvc .

2. Delete the USN journal for the volumes you want to conserve space on by using
the fsutil command:

PowerShell

fsutil usn deletejournal /d <VolumeName>

For example: fsutil usn deletejournal /d c:

3. Open Registry Editor by typing regedit in the same PowerShell session.

4. Go to the following key:


HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SrmSvc\Settings.

5. To prevent change journal creation for the entire server, complete the following
steps:

) Important

If you want to disable journal creation for specific volumes only, continue to
the next step.

a. Right-click the Settings key, and then select New > DWORD (32-bit) Value.
b. Name the value SkipUSNCreationForSystem .
c. Set the value to 1 (in hexadecimal).

6. To prevent change journal creation for specific volumes, complete the following
steps:

a. Identify the volume paths you want to skip. You can use the fsutil volume list
command or the following PowerShell command:

PowerShell

Get-Volume | Format-Table DriveLetter,FileSystemLabel,Path

Here's an example output:

Console

DriveLetter FileSystemLabel Path
----------- --------------- ----
            System Reserved \\?\Volume{8d3c9e8a-0000-0000-0000-100000000000}\
C                           \\?\Volume{8d3c9e8a-0000-0000-0000-501f00000000}\

b. Return to your Registry Editor session. Right-click the
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SrmSvc\Settings key,
and then select New > Multi-String Value.

c. Name the value SkipUSNCreationForVolumes .

d. Enter the path for each volume that you want to skip. Place each path on a
separate line. For example:

Console

\\?\Volume{8d3c9e8a-0000-0000-0000-100000000000}\
\\?\Volume{8d3c9e8a-0000-0000-0000-501f00000000}\

Note

If Registry Editor displays a warning about removed empty strings, you can
safely disregard the message. Here’s an example of the message you might
see: Data of type REG_MULTI_SZ cannot contain empty strings. Registry
Editor will remove all empty strings found.

7. Start the SRMSVC service. For example, in a PowerShell session enter
Start-Service SrmSvc .
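The steps above can also be scripted end to end with the registry provider instead of Registry Editor. This is a hedged sketch, assuming an elevated session; the volume GUID is the illustrative value from the example output, not a real one:

```PowerShell
# Run elevated on the file server. Values shown are illustrative.
Stop-Service SrmSvc

$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\SrmSvc\Settings'

# Server-wide: skip change-journal creation on every volume.
New-ItemProperty -Path $key -Name 'SkipUSNCreationForSystem' `
    -PropertyType DWord -Value 1

# Or per-volume: list each volume path to skip (one string per volume).
New-ItemProperty -Path $key -Name 'SkipUSNCreationForVolumes' `
    -PropertyType MultiString `
    -Value '\\?\Volume{8d3c9e8a-0000-0000-0000-100000000000}\'

Start-Service SrmSvc
```

Use one value or the other: the DWORD disables journal creation everywhere, while the multi-string value limits the behavior to the volumes you list.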

Related links
Dynamic Access Control overview
Checklist: Apply a Quota to a volume or folder
Article • 07/29/2021

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2

1. Configure e-mail settings if you plan to send threshold notifications or storage
reports by e-mail. Configure E-Mail Notifications

2. Assess storage requirements on the volume or folder. You can use reports on the
Storage Reports Management node to provide data. (For example, run a Files by
Owner report on demand to identify users who use large amounts of disk space.)
Generate Reports on Demand

3. Review available pre-configured quota templates. (In Quota Management, click
the Quota Templates node.) Edit Quota Template Properties
-Or-
Create a new quota template to enforce a storage policy in your organization.
Create a Quota Template

4. Create a quota based on the template on the volume or folder. Create a Quota
-Or-
Create an auto apply quota to automatically generate quotas for subfolders on the
volume or folder. Create an Auto Apply Quota

5. Schedule a report task that contains a Quota Usage report to monitor quota usage
periodically. Schedule a Set of Reports
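As a sketch, steps 3 and 4 map onto the FSRM cmdlets as follows. The folder paths are hypothetical; the template names are the built-in ones shipped with FSRM:

```PowerShell
# Review the available quota templates (step 3).
Get-FsrmQuotaTemplate | Format-Table Name, Size

# Apply a template-based quota to a folder (step 4)...
New-FsrmQuota -Path 'D:\Shares\Projects' -Template '100 MB Limit'

# ...or auto-apply quotas to every subfolder of a parent folder.
New-FsrmAutoQuota -Path 'D:\Users' -Template '200 MB Limit Reports to User'
```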

Note

If you want to screen files on a volume or folder, see Checklist: Apply a File Screen
to a Volume or Folder.
Checklist: Apply a file screen to a volume or folder
Article • 07/29/2021

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2

To apply a file screen to a volume or folder, use the following list:

1. Configure e-mail settings if you plan to send file screening notifications or storage
reports by e-mail by following the instructions in Configure E-Mail Notifications.

2. Enable recording of file screening events in the auditing database if you plan to
generate File Screening Audit reports. Configure File Screen Audit

3. Assess stored file types that are candidates for screening rules. You can use reports
at the Storage Reports Management node to provide data. (For example, run a
Files by File Group report or a Large Files report on demand to identify files that
occupy large amounts of disk space.) Generate Reports on Demand

4. Review the preconfigured file groups, or create a new file group to enforce a
specific screening policy in your organization. Define File Groups for Screening

5. Review the properties of available file screen templates. (In File Screening
Management, click the File Screen Templates node.) Edit File Screen Template
Properties
-Or-
Create a new file screen template to enforce a storage policy in your organization.
Create a File Screen Template

6. Create a file screen based on the template on a volume or folder. Create a File
Screen

7. Configure file screen exceptions in subfolders of the volume or folder. Create a File
Screen Exception

8. Schedule a report task containing a File Screening Audit report to monitor
screening activity periodically. Schedule a Set of Reports
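Steps 4 through 7 can be sketched in PowerShell like this. The paths are hypothetical examples; the file group and template names are the built-in ones:

```PowerShell
# Review the built-in file groups and screen templates (steps 4-5).
Get-FsrmFileGroup | Format-Table Name
Get-FsrmFileScreenTemplate | Format-Table Name

# Create a file screen from a built-in template (step 6).
New-FsrmFileScreen -Path 'D:\Shares\Users' -Template 'Block Audio and Video Files'

# Allow an exception in one subfolder (step 7).
New-FsrmFileScreenException -Path 'D:\Shares\Users\Media' `
    -IncludeGroup 'Audio and Video Files'
```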

Note

To limit storage on a volume or folder, see Checklist: Apply a Quota to a Volume
or Folder.
Setting File Server Resource Manager Options
Article • 07/29/2021

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2

The general File Server Resource Manager options can be set in the File Server Resource
Manager Options dialog box. These settings are used throughout the nodes, and some
of them can be modified when you work with quotas, screen files, or generate storage
reports.
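You can inspect the current global options from PowerShell with the Get-FsrmSetting cmdlet, for example:

```PowerShell
# Show the global FSRM options (SMTP server, recipients, notification limits).
Get-FsrmSetting | Format-List SmtpServer, AdminEmailAddress, FromEmailAddress,
    EmailNotificationLimit, EventNotificationLimit
```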

This section includes the following topics:

Configure E-Mail Notifications
Configure Notification Limits
Configure Storage Reports
Configure File Screen Audit
Configure E-Mail Notifications
Article • 07/29/2021

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2

When you create quotas and file screens, you have the option of sending e-mail
notifications to users when their quota limit is approaching or after they have attempted
to save files that have been blocked. When you generate storage reports, you have the
option of sending the reports to specific recipients by e-mail. If you want to routinely
notify certain administrators about quota and file screening events, or send storage
reports, you can configure one or more default recipients.

To send these notifications and storage reports, you must specify the SMTP server to be
used for forwarding the e-mail messages.

To configure e-mail options
1. In the console tree, right-click File Server Resource Manager, and then click
Configure Options. The File Server Resource Manager Options dialog box opens.

2. On the E-mail Notifications tab, under SMTP server name or IP address, type the
host name or the IP address of the SMTP server that will forward e-mail
notifications and storage reports.

3. If you want to routinely notify certain administrators about quota or file screening
events or e-mail storage reports, under Default administrator recipients, type
each e-mail address.

Use the format account@domain. Use semicolons to separate multiple accounts.

4. To specify a different "From" address for e-mail notifications and storage reports
sent from File Server Resource Manager, under the Default "From" e-mail address,
type the e-mail address that you want to appear in your message.

5. To test your settings, click Send Test E-mail.

6. Click OK.

To configure e-mail options using PowerShell

You can use the Set-FsrmSetting cmdlet to set the e-mail configuration and the
Send-FsrmTestEmail cmdlet to send a test email as shown in the following example:

PowerShell

# Setting FSRM email options


$MHT = @{
SmtpServer = 'SMTP.Contoso.Com'
FromEmailAddress = '[email protected]'
AdminEmailAddress = '[email protected]'
}
Set-FsrmSetting @MHT

# Sending a test email to check the setup


$MHT = @{
ToEmailAddress = '[email protected]'
Confirm = $false
}
Send-FsrmTestEmail @MHT

Additional References
Setting File Server Resource Manager Options
Configure Notification Limits
Article • 07/29/2021

Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2

To reduce the number of notifications that accumulate for repeatedly exceeding a quota
threshold or attempting to save an unauthorized file, File Server Resource Manager
applies time limits to the following notification types:

E-mail
Event log
Command
Report

Each limit specifies a period of time before another configured notification of the same
type is generated for an identical issue.

A default 60-minute limit is set for each notification type, but you can change these
limits. The limit applies to all the notifications of a given type, whether they are
generated by quota thresholds or by file screening events.

To specify a standard notification limit for each notification type
1. In the console tree, right-click File Server Resource Manager, and then click
Configure Options. The File Server Resource Manager Options dialog box opens.

2. On the Notification Limits tab, enter a value in minutes for each notification type
that is shown.

3. Click OK.
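The same limits can be set from PowerShell with Set-FsrmSetting; a minimal sketch, with values in minutes (120 is an arbitrary example):

```PowerShell
# Raise each notification limit from the 60-minute default to 120 minutes.
Set-FsrmSetting -EmailNotificationLimit 120 `
                -EventNotificationLimit 120 `
                -CommandNotificationLimit 120 `
                -ReportNotificationLimit 120
```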

Note

To customize time limits that are associated with