Azure Stack HCI documentation


Azure Stack HCI is a hyperconverged clustering solution that uses validated hardware to
run virtualized workloads on-premises, making it easy for customers to consolidate
aging infrastructure and connect to Azure for cloud services.

About Azure Stack HCI

OVERVIEW

What is Azure Stack HCI?

WHAT'S NEW

What's new in Azure Stack HCI, version 22H2?

GET STARTED

Use Azure Stack HCI with Windows Admin Center

Create an Azure Stack HCI cluster

Deploy Azure Stack HCI

DEPLOY

1. Deploy the Azure Stack HCI OS

2. Create a cluster

3. Register with Azure

4. Validate a cluster

TUTORIAL

Create a VM-based lab for Azure Stack HCI

Manage storage

HOW-TO GUIDE
Replace drives

Use the CSV cache

Manage SMB Multichannel

Storage thin provisioning

Adjustable storage repair speed

Manage volumes

HOW-TO GUIDE

Create volumes

Create stretched volumes

Protect volumes

Manage volumes

Manage the cluster

HOW-TO GUIDE

Manage the cluster with Windows Admin Center

Add or remove servers

Take a server offline for maintenance

Monitor the cluster

Update the cluster

Change languages

Azure Stack HCI concepts

CONCEPT

Stretched clusters

Billing
Firewall requirements

Updates and upgrades

Choose drives

Plan volumes

Fault tolerance and storage efficiency

Security considerations

Manage VMs

HOW-TO GUIDE

Manage VMs with Windows Admin Center

Manage VMs with PowerShell

Set up VM affinity rules

VM load balancing

Use GPUs with clustered VMs

Connect to Azure

HOW-TO GUIDE

Register cluster with Azure

Manage cluster registration with Azure

Register Windows Admin Center with Azure

Software Defined Networking

OVERVIEW

Software Defined Networking (SDN) overview

Plan SDN infrastructure

Network Controller overview


Plan Network Controller

Datacenter Firewall overview

HOW-TO GUIDE

Deploy an SDN infrastructure

Azure Stack HCI learning paths and modules

TRAINING

Azure Stack HCI foundations

Operate and maintain Azure Stack HCI

Implement Datacenter Firewall and Software Load Balancer

Plan for and deploy SDN infrastructure on Azure Stack HCI


Azure Stack HCI solution overview
Article • 04/17/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2

Azure Stack HCI is a hyperconverged infrastructure (HCI) cluster solution that hosts
virtualized Windows and Linux workloads and their storage in a hybrid environment that
combines on-premises infrastructure with Azure cloud services.

Azure Stack HCI is available for download with a free 60-day trial. You can either
purchase integrated systems from a Microsoft hardware partner with the Azure Stack
HCI operating system pre-installed, or buy validated nodes and install the operating
system yourself. See the Azure Stack HCI Catalog for hardware options. Use the Azure
Stack HCI sizing tool to estimate the hardware requirements for your Azure Stack HCI
solution. This sizing tool is currently in public preview and requires your personal
Microsoft account (MSA) credentials (not a corporate account) to sign in.

Azure Stack HCI is intended as a virtualization host, so most apps and server roles must
run inside of virtual machines (VMs). Exceptions include Hyper-V, Network Controller,
and other components required for Software Defined Networking (SDN) or for the
management and health of hosted VMs.

Azure Stack HCI is delivered as an Azure service and billed to an Azure subscription.
Azure hybrid services enhance the cluster with capabilities such as cloud-based
monitoring, Site Recovery, and VM backups, as well as a central view of all of your Azure
Stack HCI deployments in the Azure portal. You can manage the cluster with your
existing tools, including Windows Admin Center and PowerShell.
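For example, routine inventory checks can be scripted against the cluster with the built-in FailoverClusters and Hyper-V PowerShell modules. The following is a minimal sketch; the cluster name is a hypothetical placeholder.

PowerShell

# Hedged sketch: basic inventory from a management PC with the RSAT tools installed.
# "hci-cluster01" is a hypothetical cluster name.
$nodes = Get-ClusterNode -Cluster "hci-cluster01"      # list the clustered servers
$nodes | Format-Table Name, State

Get-VM -ComputerName $nodes.Name |                     # list VMs across all nodes
    Format-Table Name, State, ComputerName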

Azure Stack HCI features and architecture


Azure Stack HCI is a world-class, integrated virtualization stack built on proven
technologies that have already been deployed at scale, including Hyper-V, Storage
Spaces Direct, and Azure-inspired SDN. It's part of the Azure Stack family, using the
same software-defined compute, storage, and networking software as Azure Stack
Hub.

Each Azure Stack HCI cluster consists of between 1 and 16 physical, validated servers. All
clustered servers, including single-server deployments, share common configurations and resources by leveraging the Windows Server Failover Clustering feature.

Azure Stack HCI combines the following:


Azure Stack HCI operating system
Validated hardware from an OEM partner
Azure hybrid services
Windows Admin Center
Hyper-V-based compute resources
Storage Spaces Direct-based virtualized storage
SDN-based virtualized networking using Network Controller (optional)

Using Azure Stack HCI and Windows Admin Center, you can create a hyperconverged
cluster that's easy to manage and uses Storage Spaces Direct for superior storage price-
performance. This includes the option to stretch the cluster across sites and use
automatic failover. See What's new in Azure Stack HCI, version 22H2 for details on the
latest functionality enhancements.

Why Azure Stack HCI?


There are many reasons customers choose Azure Stack HCI, including:

It's familiar for Hyper-V and server admins, allowing them to leverage existing
virtualization and storage concepts and skills.
It works with existing data center processes and tools such as Microsoft System
Center, Active Directory, Group Policy, and PowerShell scripting.
It works with popular third-party backup, security, and monitoring tools.
Flexible hardware choices allow customers to choose the vendor with the best
service and support in their geography.
Joint support between Microsoft and the hardware vendor improves the customer
experience.
Seamless, full-stack updates make it easy to stay current.
A flexible and broad ecosystem gives IT professionals the flexibility they need to
build a solution that best meets their needs.

Common use cases for Azure Stack HCI


Customers often choose Azure Stack HCI in the following scenarios.

Branch office and edge: For branch office and edge workloads, you can minimize infrastructure costs by deploying two-node clusters with inexpensive witness options, such as Cloud Witness or a USB drive-based file share witness. Another factor that contributes to the lower cost of two-node clusters is support for switchless networking, which relies on crossover cable between cluster nodes instead of more expensive high-speed switches. Customers can also centrally view remote Azure Stack HCI deployments in the Azure portal. To learn more about this workload, see Deploy branch office and edge on Azure Stack HCI.

Virtual desktop infrastructure (VDI): Azure Stack HCI clusters are well suited for large-scale VDI deployments with RDS or equivalent third-party offerings as the virtual desktop broker. Azure Stack HCI provides additional benefits by including centralized storage and enhanced security, which simplifies protecting user data and minimizes the risk of accidental or intentional data leaks. To learn more about this workload, see Deploy virtual desktop infrastructure (VDI) on Azure Stack HCI.

Highly performant SQL Server: Azure Stack HCI provides an additional layer of resiliency to highly available, mission-critical Always On availability groups-based deployments of SQL Server. This approach also offers extra benefits associated with the single-vendor approach, including simplified support and performance optimizations built into the underlying platform. To learn more about this workload, see Deploy SQL Server on Azure Stack HCI.

Trusted enterprise virtualization: Azure Stack HCI satisfies the trusted enterprise virtualization requirements through its built-in support for Virtualization-based Security (VBS). VBS relies on Hyper-V to implement the mechanism referred to as virtual secure mode, which forms a dedicated, isolated memory region within its guest VMs. By using programming techniques, it's possible to perform designated, security-sensitive operations in this dedicated memory region while blocking access to it from the host OS. This considerably limits potential vulnerability to kernel-based exploits. To learn more about this workload, see Deploy Trusted Enterprise Virtualization on Azure Stack HCI.

Azure Kubernetes Service (AKS): You can leverage Azure Stack HCI to host container-based deployments, which increases workload density and resource usage efficiency. Azure Stack HCI also further enhances the agility and resiliency inherent to Azure Kubernetes deployments. Azure Stack HCI manages automatic failover of VMs serving as Kubernetes cluster nodes in case of a localized failure of the underlying physical components. This supplements the high availability built into Kubernetes, which automatically restarts failed containers on either the same or another VM. To learn more about this workload, see What is Azure Kubernetes Service on Azure Stack HCI and Windows Server?

Scale-out storage: Storage Spaces Direct is a core technology of Azure Stack HCI that uses industry-standard servers with locally attached drives to offer high availability, performance, and scalability. Using Storage Spaces Direct results in significant cost reductions compared with competing offers based on storage area network (SAN) or network-attached storage (NAS) technologies. These benefits result from an innovative design and a wide range of enhancements, such as persistent read/write cache drives, mirror-accelerated parity, nested resiliency, and deduplication.

Disaster recovery for virtualized workloads: An Azure Stack HCI stretched cluster provides automatic failover of virtualized workloads to a secondary site following a primary site failure. Synchronous replication ensures crash consistency of VM disks.

Data center consolidation and modernization: Refreshing and consolidating aging virtualization hosts with Azure Stack HCI can improve scalability and make your environment easier to manage and secure. It's also an opportunity to retire legacy SAN storage to reduce footprint and total cost of ownership. Operations and systems administration are simplified with unified tools and interfaces and a single point of support.

Run Azure services on-premises: Azure Arc allows you to run Azure services anywhere. This allows you to build consistent hybrid and multicloud application architectures by using Azure services that can run in Azure, on-premises, at the edge, or at other cloud providers. Azure Arc enabled services allow you to run Azure data services and Azure application services such as Azure App Service, Functions, Logic Apps, Event Grid, and API Management anywhere to support hybrid workloads. To learn more, see Azure Arc overview.
Demo of using Microsoft Azure with Azure Stack HCI
For an end-to-end example of using Microsoft Azure to manage apps and infrastructure
at the Edge using Azure Arc, Azure Kubernetes Service, and Azure Stack HCI, see the
Retail edge transformation with Azure hybrid demo.

Using a fictional customer, inspired directly by real customers, you will see how to
deploy Kubernetes, set up GitOps, deploy VMs, use Azure Monitor and drill into a
hardware failure, all without leaving the Azure portal.
https://www.youtube-nocookie.com/embed/2gKx3IySlAY

This video includes preview functionality. It shows real product functionality, but in a closely controlled environment.

Azure integration benefits


Azure Stack HCI allows you to take advantage of cloud and on-premises resources
working together and natively monitor, secure, and back up to the cloud.

After you register your Azure Stack HCI cluster with Azure, you can use the Azure portal
initially for:

Monitoring: View all of your Azure Stack HCI clusters in a single, global view where
you can group them by resource group and tag them.
Billing: Pay for Azure Stack HCI through your Azure subscription.
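Registration itself is typically a one-time PowerShell step run from one of the clustered servers. The following is a minimal sketch, assuming the Az.StackHCI module; the subscription ID and region values are placeholders.

PowerShell

# Hedged sketch: register the cluster with Azure using the Az.StackHCI module.
Install-Module -Name Az.StackHCI
Register-AzStackHCI -SubscriptionId "<your-subscription-id>" -Region "eastus"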

You can also subscribe to additional Azure hybrid services.

For more details on the cloud service components of Azure Stack HCI, see Azure Stack
HCI hybrid capabilities with Azure services.

What you need for Azure Stack HCI


To get started, you'll need:

One or more servers from the Azure Stack HCI Catalog, purchased from your
preferred Microsoft hardware partner.
An Azure subscription.
Operating system licenses for your workload VMs – for example, Windows Server.
See Activate Windows Server VMs.
An internet connection for each server in the cluster that can connect via HTTPS
outbound traffic to well-known Azure endpoints at least every 30 days. See Azure
connectivity requirements for more information.
For clusters stretched across sites:
At least four servers (two in each site)
At least one 1 Gb connection between sites (a 25 Gb RDMA connection is
preferred)
An average latency of 5 ms round trip between sites if you want to do
synchronous replication where writes occur simultaneously in both sites.
If you plan to use SDN, you'll need a virtual hard disk (VHD) for the Azure Stack
HCI operating system to create Network Controller VMs (see Plan to deploy
Network Controller).

Make sure your hardware meets the System requirements and that your network meets
the physical network and host network requirements for Azure Stack HCI.

For Azure Kubernetes Service on Azure Stack HCI and Windows Server requirements, see
AKS requirements on Azure Stack HCI.

Azure Stack HCI is priced on a per-core basis for your on-premises servers. For current pricing, see Azure Stack HCI pricing.

Hardware and software partners


Microsoft recommends purchasing Integrated Systems built by our hardware partners
and validated by Microsoft to provide the best experience running Azure Stack HCI. You
can also run Azure Stack HCI on Validated Nodes, which offer a basic building block for
HCI clusters to give customers more hardware choices. Microsoft partners also offer a
single point of contact for implementation and support services.

Visit the Azure Stack HCI solutions page or browse the Azure Stack HCI Catalog to
view Azure Stack HCI solutions from Microsoft partners such as ASUS, Blue Chip,
DataON, Dell EMC, Fujitsu, HPE, Hitachi, Lenovo, NEC, primeLine Solutions, QCT,
SecureGUARD, and Supermicro.

Some Microsoft partners are developing software that extends the capabilities of Azure
Stack HCI while allowing IT admins to use familiar tools. To learn more, see Utility
applications for Azure Stack HCI.

Next steps
Download Azure Stack HCI
Create an Azure Stack HCI cluster and register it with Azure
Use Azure Stack HCI with Windows Admin Center
Compare Azure Stack HCI to Windows Server
Compare Azure Stack HCI to Azure Stack Hub
Azure Stack HCI foundations learning path
What's new in Azure Stack HCI, version
22H2
Article • 05/30/2023

Applies to: Azure Stack HCI, version 22H2 and Supplemental Package

This article lists the various features and improvements that are available in Azure Stack
HCI, version 22H2. This article also describes the Azure Stack HCI, Supplemental Package
that can be deployed in conjunction with Azure Stack HCI, version 22H2 OS.

Azure Stack HCI, version 22H2 is the latest version of the operating system available for
the Azure Stack HCI solution and focuses on Network ATC v2 improvements, storage
replication compression, Hyper-V live migration, and more. Additionally, a preview
version of Azure Stack HCI, Supplemental Package, is now available that can be
deployed on servers running the English version of the Azure Stack HCI, version 22H2
OS.

You can also join the Azure Stack HCI preview channel to test out features for future
versions of the Azure Stack HCI operating system. For more information, see Join the
Azure Stack HCI preview channel.

The following sections briefly describe the various features and enhancements in Azure
Stack HCI, Supplemental Package and in Azure Stack HCI, version 22H2.

Important

This feature is currently in PREVIEW. See the Supplemental Terms of Use for Microsoft Azure Previews for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.

Azure Stack HCI, Supplemental Package (preview)
Azure Stack HCI, Supplemental Package is now available to be deployed on servers
running Azure Stack HCI, version 22H2 OS. This package contains a brand new
deployment tool that allows for an interactive deployment, new security capabilities, an
Azure Stack HCI Environment Checker tool that will validate connectivity, hardware,
identity and networking prior to deployment, and a unified log collection experience.
New deployment tool (preview)
For servers running Azure Stack HCI, version 22H2 OS, you can perform new
deployments using the Azure Stack HCI, Supplemental Package (preview). You can
deploy an Azure Stack HCI cluster via a brand-new deployment tool in one of three ways: interactively, using an existing configuration file, or via PowerShell.

Important

When you try out this new deployment tool, make sure that you do not run production workloads on systems deployed with the Supplemental Package while it's in preview, even though the core Azure Stack HCI 22H2 operating system is generally available. Microsoft Customer Support will supply support services while in preview, but the service level agreements available at GA do not apply.

Follow these steps to download the Supplemental Package files:

1. Go to Download Azure Stack HCI 22H2 and fill out and submit a trial form.

2. On the Azure Stack HCI software download page, go to Supplemental package for Azure Stack HCI 22H2 (public preview).

3. Download the following files:


Azure Stack HCI Supplemental Package component | Description
BootstrapCloudDeploymentTool.ps1 | Script to extract content and launch the deployment tool. When this script is run with the -ExtractOnly parameter, it will extract the zip file but not launch the deployment tool.
CloudDeployment.zip | Azure Stack HCI, version 22H2 content, such as images and agents.
Verify-CloudDeployment.ps1 | Hash used to validate the integrity of the zip file.
To learn more about the new deployment methods, see Deployment overview.
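As a quick illustration, the bootstrap script is run from an elevated PowerShell session on the staging server; the -ExtractOnly switch described above unpacks the content without starting the tool. The folder path below is a hypothetical placeholder.

PowerShell

# Hedged sketch: extract the deployment content without launching the tool.
# C:\SupplementalPackage is a hypothetical folder containing the downloaded files.
Set-Location -Path "C:\SupplementalPackage"
.\BootstrapCloudDeploymentTool.ps1 -ExtractOnly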

New security capabilities (preview)


New installations with the Azure Stack HCI, Supplemental Package release start with a
secure-by-default strategy. The new version has a tailored security baseline coupled with
a security drift control mechanism and a set of well-known security features enabled by
default.

To summarize, this release provides:

A tailored security baseline with over 200 security settings configured and
enforced with a security drift control mechanism that ensures the cluster always
starts and remains in a known good security state.

The security baseline enables you to closely meet the Center for Internet Security
(CIS) Benchmark, Defense Information Systems Agency Security Technical
Implementation Guides (DISA STIG), Common Criteria, and Federal Information
Processing Standards (FIPS) requirements for the OS and Azure Compute Security
baselines.

For more information, see Security baseline settings for Azure Stack HCI.

Improved security posture achieved through a stronger set of protocols and cipher
suites enabled by default.

Secured-Core Server that achieves higher protection by advancing a combination of hardware, firmware, and driver capabilities. For more information, see What is Secured-core server?

Out-of-box protection for data and network with SMB signing and BitLocker
encryption for OS and Cluster Shared Volumes. For more information, see
BitLocker encryption for Azure Stack HCI.

Reduced attack surface as Windows Defender Application Control is enabled by default and limits the applications and the code that you can run on the core platform. For more information, see Windows Defender Application Control for Azure Stack HCI.

New Azure Stack HCI Environment Checker tool (preview)


Azure Stack HCI Environment Checker is a standalone PowerShell tool that you can use, even before ordering hardware, to validate connectivity readiness.

For new deployments using the supplemental package, the environment checker
automatically validates internet connectivity, hardware, identity and networking on all
the nodes of your Azure Stack HCI cluster. The tool also returns a Pass/Fail status for
each test, and saves a log file and a detailed report file.

To get started, you can download this free tool here. For more information, see Assess
your environment for deployment readiness.
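As a rough sketch of standalone use, the tool installs from the PowerShell Gallery. The module and cmdlet names below are assumptions based on the public preview; confirm them in the linked readiness article.

PowerShell

# Assumed module and cmdlet names; verify against the readiness documentation.
Install-Module -Name AzStackHci.EnvironmentChecker -Repository PSGallery
Invoke-AzStackHciConnectivityValidation    # returns a Pass/Fail result per connectivity test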

Azure Stack HCI, version 22H2


The following sections briefly describe the various features and enhancements in Azure
Stack HCI, version 22H2.

Network ATC v2 improvements


In this release, Network ATC has several new features and improvements:

Network symmetry. Network ATC automatically checks for and validates network
symmetry across all adapters (on each node) in the same intent - specifically the
make, model, speed, and configuration of your selected adapters.

Storage automatic IP assignment. Network ATC automatically identifies available IPs in our default subnets and assigns those addresses to your storage adapters.

Scope detection. Network ATC automatically detects if you're configuring a cluster node, so no need to add the -ClusterName or -ComputerName parameter in your commands.

Contextual cluster network naming. Network ATC understands how you'll use
cluster networks and names them more appropriately.
Live Migration optimization. Network ATC intelligently manages:
Maximum simultaneous live migrations - Network ATC ensures that the
maximum recommended value is configured and maintained across all cluster
nodes.
Best live migration network - Network ATC determines the best network for live
migration and automatically configures your system.
Best live migration transport - Network ATC selects the best algorithm for SMB,
compression, and TCP given your network configuration.
Maximum SMB (RDMA) bandwidth - If SMB (RDMA) is used, Network ATC
determines the maximum bandwidth reserved for live migration to ensure that
there's enough bandwidth for Storage Spaces Direct.

Proxy configuration. Network ATC can configure all server nodes with the same
proxy information as needed for your environment. This action provides one-time
configuration for all current and future server nodes.

Stretched cluster support. Network ATC configures all storage adapters used by
Storage Replica in stretched cluster environments. However, since such adapters
need to route across subnets, Network ATC can't assign any IP addresses to them,
so you’ll still need to assign these addresses yourself.

Post-deployment VLAN modification. You can use the new Set-NetIntent cmdlet
in Network ATC to modify VLAN settings just as you would if you were using the
Add-NetIntent cmdlet. No need to remove and then add the intents again when
changing VLANs.
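For example, a post-deployment VLAN change might look like the following sketch. The intent name and VLAN ID are hypothetical, and the parameter is assumed to mirror the corresponding Add-NetIntent parameter.

PowerShell

# Hedged sketch: update the management VLAN on an existing intent.
# "Mgmt_Compute" and VLAN 200 are hypothetical values.
Set-NetIntent -Name "Mgmt_Compute" -ManagementVlan 200
Get-NetIntentStatus -Name "Mgmt_Compute"    # confirm the intent re-applies successfully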

For more information, see the blog on Network ATC v2 improvements.

Storage Replica compression


This release includes the Storage Replica compression feature for data transferred
between the source and destination servers. This new functionality compresses the
replication data from the source system, which is transferred over the network,
decompressed, and then saved on the destination. The compression results in fewer
network packets to transfer the same amount of data, allowing for higher throughput
and lower network utilization, which in turn results in lower costs for metered networks.

There are no changes to the way you create replica groups and partnerships. The only
change is a new parameter that can be used with the existing Storage Replica cmdlets.

You specify compression when the group and the partnership are created. Use the
following cmdlets to specify compression:
PowerShell

New-SRGroup -EnableCompression

New-SRPartnership -EnableCompression

If the parameter isn't specified, the default is set to Disabled.

To modify this setting later, use the following cmdlets:

PowerShell

Set-SRGroup -Compression <Boolean>

Set-SRPartnership -Compression <Boolean>

where $False is Disabled and $True is Enabled.
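For context, a full partnership creation with compression enabled might look like the following sketch; server names, replication group names, and drive letters are hypothetical placeholders.

PowerShell

# Hedged sketch: create a replication partnership with compression enabled.
New-SRPartnership -SourceComputerName "Server1" -SourceRGName "RG01" `
    -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
    -DestinationComputerName "Server2" -DestinationRGName "RG02" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "L:" `
    -EnableCompression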

All the other commands and steps remain the same. These changes aren't in Windows
Admin Center at this time and will be added in a subsequent release.

For more information, see Storage Replica overview.

Partition and share GPU with virtual machines on Azure Stack HCI

With this release, GPU partitioning is now supported on NVIDIA A2, A10, A16, and A40 GPUs in Azure Stack HCI, enabled with NVIDIA RTX Virtual Workstation (vWS)
and NVIDIA Virtual PC (vPC) software. GPU partitioning is implemented using single root
I/O virtualization (SR-IOV), which provides a strong, hardware-backed security boundary
with predictable performance for each virtual machine.

For more information, see Partition and share GPU with virtual machines on Azure Stack
HCI.
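While the linked article covers the supported workflow in detail, the underlying Hyper-V cmdlets give a feel for the model. This is only a sketch, not the full procedure, and the VM name is a hypothetical placeholder.

PowerShell

# Hedged sketch using built-in Hyper-V cmdlets; "VM01" is a hypothetical VM name.
Get-VMHostPartitionableGpu                  # list host GPUs that can be partitioned
Add-VMGpuPartitionAdapter -VMName "VM01"    # attach a GPU partition to the VM
Get-VMGpuPartitionAdapter -VMName "VM01"    # verify the assignment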

Hyper-V live migration improvements


In Azure Stack HCI, version 22H2, Hyper-V live migration is faster and more reliable for switchless 2-node and 3-node clusters. Switchless interconnects can cause live migration delays, and this release addresses these issues.

Cluster-Aware Updating (CAU) improvements


With this release, Cluster-Aware Updating is more reliable due to smarter retry and mitigation logic that reduces errors when pausing and draining cluster nodes. Cluster-Aware Updating also supports single-server deployments.

For more information, see What is Cluster-Aware Updating?
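As a brief illustration, an updating run can be started with the ClusterAwareUpdating PowerShell module; the cluster name below is a hypothetical placeholder.

PowerShell

# Hedged sketch: start a Cluster-Aware Updating run against a cluster.
Invoke-CauRun -ClusterName "hci-cluster01" `
    -CauPluginName "Microsoft.WindowsUpdatePlugin" -Force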

Thin provisioning conversion


With this release, you can now convert existing fixed-provisioned volumes to thin-provisioned volumes using PowerShell. Thin provisioning improves storage efficiency and simplifies management.

For more information, see Convert fixed to thin provisioned volumes on your Azure
Stack HCI.
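The conversion itself is a single cmdlet call per volume. The sketch below assumes the -ProvisioningType parameter described in the linked article and uses a hypothetical volume name; verify parameter support on your build.

PowerShell

# Hedged sketch: check and convert a volume's provisioning type.
Get-VirtualDisk -FriendlyName "Volume01" | Select-Object FriendlyName, ProvisioningType
Set-VirtualDisk -FriendlyName "Volume01" -ProvisioningType Thin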

Single server scale-out


This release supports inline fault domain and resiliency changes to scale out a single
server. Azure Stack HCI, version 22H2 provides easy scaling options to go from a single
server to a two-node cluster, and from a two-node cluster to a three-node cluster.

For more information, see Scale out single server on your Azure Stack HCI.

Tag-based segmentation
In this release, you can secure your application workload virtual machines (VMs) from
external and lateral threats with custom tags of your choice. Assign custom tags to
classify your VMs, and then apply Network Security Groups (NSGs) based on those tags
to restrict communication to and from external and internal sources. For example, to
prevent your SQL Server VMs from communicating with your web server VMs, simply
tag the corresponding VMs with SQL and Web tags. You can then create an NSG to
prevent the Web tag from communicating with the SQL tag.

For more information, see Configure network security groups with Windows Admin
Center.

Azure Hybrid Benefit for Azure Stack HCI


The Azure Hybrid Benefit program enables customers to significantly reduce the costs of running workloads in the cloud. With Windows Server Software Assurance (SA), we are further expanding Azure Hybrid Benefit to reduce the costs of running workloads on-premises and at edge locations.

If you have Windows Server Datacenter licenses with active Software Assurance, use Azure Hybrid Benefit to waive the host service fee for Azure Stack HCI and get unlimited virtualization with Windows Server subscription at no additional cost. You can then modernize your existing datacenter and edge infrastructure to run VM and container-based applications.

For more information, see Azure Hybrid Benefit for Azure Stack HCI.

Azure Arc VM changes and Azure Marketplace


Another feature available with this release is Azure Marketplace integration for
Azure Arc-enabled Azure Stack HCI. With this integration, you'll be able to access the
latest fully updated images from Microsoft, including Windows Server 2022 Datacenter:
Azure Edition and Windows 10/11 Enterprise multi-session for Azure Virtual Desktop.

You can now use the Azure portal or the Azure CLI to easily add and manage VM images
and then use those images to create Azure Arc VMs. This feature works with your
existing cluster running Azure Stack HCI, version 21H2 or later.

For more information, see:

Create VM image using an Azure Marketplace image.
Create VM image using an image in an Azure Storage account.
Create VM image using an image in a local share.

Windows Server 2022 Datacenter: Azure Edition VMs on Azure Stack HCI

Beginning with this release, you can run Windows Server 2022 Datacenter: Azure Edition on
Azure Stack HCI. The preview of Marketplace VM images lets customers deploy
Windows Server 2022 Datacenter: Azure Edition (already generally available in Azure
IaaS) on Azure Stack HCI. This enables unique features like Hotpatch and SMB over
QUIC on Windows Server 2022 Datacenter: Azure Edition VMs on Azure Stack HCI.
Through future guest management extensions, the full Azure Automanage experience
will also become available in upcoming releases.

Automatic renewal of Network Controller certificates


You can now renew your Network Controller certificates automatically, in addition to
manual renewal. For information on how to renew the Network Controller certificates
automatically, see Automatic renewal.
Next steps
Read the blog about What’s new for Azure Stack HCI at Microsoft Ignite 2022 .
For existing Azure Stack HCI deployments, Update Azure Stack HCI.
For new Azure Stack HCI deployments:
Read the Deployment overview.
Learn how to Deploy interactively using the Azure Stack HCI, Supplemental
Package.
Compare Azure Stack HCI to Windows Server
Article • 01/20/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2; Windows Server 2022

This article explains key differences between Azure Stack HCI and Windows Server and
provides guidance about when to use each. Both products are actively supported and
maintained by Microsoft. Many organizations choose to deploy both as they are
intended for different and complementary purposes.

When to use Azure Stack HCI


Azure Stack HCI is Microsoft's premier hyperconverged infrastructure platform for
running VMs or virtual desktops on-premises with connections to Azure hybrid services.
Azure Stack HCI can help to modernize and secure your datacenters and branch offices,
and achieve industry-best performance with low latency and data sovereignty.

Use Azure Stack HCI for:

The best virtualization host to modernize your infrastructure, either for existing
workloads in your core datacenter or emerging requirements for branch office and
edge locations.

Easy extensibility to the cloud, with a regular stream of innovations from your
Azure subscription and a consistent set of tools and experiences.

All the benefits of hyperconverged infrastructure: a simpler, more consolidated datacenter architecture with high-speed storage and networking.
Note

When using Azure Stack HCI, run all of your workloads inside virtual machines
or containers, not directly on the cluster. Azure Stack HCI isn't licensed for
clients to connect directly to it using Client Access Licenses (CALs).

For information about licensing Windows Server VMs running on an Azure Stack HCI
cluster, see Activate Windows Server VMs.

When to use Windows Server


Windows Server is a highly versatile, multi-purpose operating system with dozens of
roles and hundreds of features and includes the right for clients to connect directly with
appropriate CALs. Windows Server machines can be in the cloud or on-premises,
including virtualized on top of Azure Stack HCI.

Use Windows Server for:

A guest operating system inside of virtual machines (VMs) or containers
As the runtime server for a Windows application
To use one or more of the built-in server roles such as Active Directory, file services, DNS, DHCP, or Internet Information Services (IIS)
As a traditional server, such as a bare-metal domain controller or SQL Server installation
For traditional infrastructure, such as VMs connected to Fibre Channel SAN storage

Compare product positioning


The following table shows the high-level product packaging for Azure Stack HCI and
Windows Server.
Attribute | Azure Stack HCI | Windows Server
Product type | Cloud service that includes an operating system and more | Operating system
Legal | Covered under your Microsoft customer agreement or online subscription agreement | Has its own end-user license agreement
Licensing | Billed to your Azure subscription | Has its own paid license
Support | Covered under Azure support | Can be covered by different support agreements, including Microsoft Premier Support
Where to get it | Download from Azure.com/HCI or comes preinstalled on integrated systems | Microsoft Volume Licensing Service Center or Evaluation Center
Runs in VMs | For evaluation only; intended as a host operating system | Yes, in the cloud or on premises
Hardware | Runs on any of more than 200 pre-validated solutions from the Azure Stack HCI Catalog | Runs on any hardware with the "Certified for Windows Server" logo. See the Windows Server Catalog
Sizing | Azure Stack HCI sizing tool | None
Lifecycle policy | Always up to date with the latest features. You have up to six months to install updates. | Use this option of the Windows Server servicing channels: Long-Term Servicing Channel (LTSC)

Compare workloads and benefits


The following table compares the workloads and benefits of Azure Stack HCI and
Windows Server.

Attribute | Azure Stack HCI | Windows Server
Azure Kubernetes Service (AKS) | Yes | Yes
Azure Arc-Enabled PaaS Services | Yes | Yes
Windows Server 2022 Azure Edition | Yes | No
Windows Server subscription add-on (Dec. 2021) | Yes | No
Free Extended Security Updates (ESUs) for Windows Server and SQL 2008/R2 and 2012/R2 | Yes | No [1]

[1] Requires purchasing an Extended Security Updates (ESU) license key and manually applying it to every VM.

Compare technical features


The following table compares the technical features of Azure Stack HCI and Windows
Server 2022.

Attribute | Azure Stack HCI | Windows Server 2022
Hyper-V | Yes | Yes
Storage Spaces Direct | Yes | Yes
Software-Defined Networking | Yes | Yes
Adjustable storage repair speed | Yes | Yes
Secured-core Server | Yes | Yes
Stronger, faster network encryption | Yes | Yes
4-5x faster Storage Spaces repairs | Yes | Yes
Stretch clustering for disaster recovery with Storage Spaces Direct | Yes | No
High availability for GPU workload | Yes | No
Restart up to 10x faster with kernel-only restarts | Yes | No
Simplified host networking with Network ATC | Yes | No
Storage Spaces Direct on a single server | Yes | No
Storage Spaces Direct thin provisioning | Yes | No
Dynamic processor compatibility mode | Yes | No
Cluster-Aware OS feature update | Yes | No
Integrated driver and firmware updates | Yes (Integrated Systems only) | No
For more information, see What's New in Azure Stack HCI, version 22H2 and Using
Azure Stack HCI on a single server.

Compare management options


The following table compares the management options for Azure Stack HCI and
Windows Server. Both products are designed for remote management and can be
managed with many of the same tools.

Attribute | Azure Stack HCI | Windows Server
Windows Admin Center | Yes | Yes
Microsoft System Center | Yes (sold separately) | Yes (sold separately)
Third-party tools | Yes | Yes
Azure Backup and Azure Site Recovery support | Yes | Yes
Azure portal | Yes (natively) | Requires Azure Arc agent
Azure portal > Extensions and Arc-enabled host | Yes | Manual [1]
Azure portal > Windows Admin Center integration (preview) | Yes | Azure VMs only [1]
Azure portal > Multi-cluster monitoring for Azure Stack HCI (preview) | Yes | No
Azure portal > Azure Resource Manager integration for clusters | Yes | No
Azure portal > Arc VM management (preview) | Yes | No
Desktop experience | No | Yes

[1] Requires manually installing the Arc Connected Machine agent on every machine.

Compare product pricing


The table below compares the product pricing for Azure Stack HCI and Windows Server.
For details, see Azure Stack HCI pricing .
Attribute | Azure Stack HCI | Windows Server
Price type | Subscription service | Varies: most often a one-time license
Price structure | Per core, per month | Varies: usually per core
Price | Per core, per month | See Pricing and licensing for Windows Server 2022
Evaluation/trial period | 60-day free trial once registered | 180-day evaluation copy
Channels | Enterprise agreement, cloud service provider, or direct | Enterprise agreement/volume licensing, OEM, services provider license agreement (SPLA)

Next steps
Compare Azure Stack HCI to Azure Stack Hub
Compare Azure Stack HCI to Azure Stack Hub
Article • 04/20/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2; Azure Stack Hub

As your organization digitally transforms, you may find you can move faster by using
public cloud services to build on modern architectures and refresh legacy apps.
However, for reasons that include technological and regulatory obstacles, many
workloads must remain on-premises. Use this table to help determine which Microsoft
hybrid cloud strategy provides what you need where you need it, delivering cloud
innovation for workloads wherever they are.

Azure Stack HCI | Azure Stack Hub
Same skills, familiar processes | New skills, innovative processes
Azure services in your datacenter using Azure Arc | Azure services in your datacenter in disconnected scenarios
Connect your datacenter to Azure services and Azure control plane | Run your own instance of Azure Resource Manager
Flexible platform using integrated systems or validated nodes from OEMs | Delivered as an integrated system from OEMs

When to use Azure Stack HCI versus Azure Stack Hub
The following table compares Azure Stack HCI to Azure Stack Hub and explains why one
may be better than the other for your use case:

Lower server footprint
Azure Stack HCI: Use Azure Stack HCI for the minimum footprint for remote offices and branches. Start with just two servers and switchless back-to-back networking for peak simplicity and affordability.
Azure Stack Hub: Azure Stack Hub requires a minimum of four servers and its own network switches.

Hyper-V support
Azure Stack HCI: Use Azure Stack HCI to virtualize classic enterprise apps like Exchange, SharePoint, and SQL Server, and to virtualize Windows Server roles like File Server, DNS, DHCP, IIS, and Active Directory. It provides unrestricted access to Hyper-V features.
Azure Stack Hub: Azure Stack Hub constrains Hyper-V configurability and feature set for consistency with Azure.

Software-defined infrastructure
Azure Stack HCI: Use Azure Stack HCI to use software-defined infrastructure in place of aging storage arrays or network appliances, without major stack rearchitecture. Built-in Hyper-V, Storage Spaces Direct, and Software-Defined Networking (SDN) are directly accessible and manageable.
Azure Stack Hub: Azure Stack Hub doesn't expose these infrastructural technologies.

Platform-as-a-Service (PaaS)
Azure Stack HCI: Azure Stack HCI runs Platform-as-a-Service (PaaS) services on-premises with Azure Arc, and offers the ability to host Azure Kubernetes Service. You can also run Azure Virtual Desktop, Azure Arc-enabled data services, including SQL Managed Instance and PostgreSQL Hyperscale (preview), and App Service, Functions, and Logic Apps on Azure Arc (preview) on Azure Stack HCI.
Azure Stack Hub: Use Azure Stack Hub to develop and run apps that rely on PaaS services like Web Apps, Functions, or Event Hubs on-premises in a disconnected scenario. These services run on Azure Stack Hub exactly like they do in Azure, providing a consistent hybrid development and runtime environment.

Multi-tenancy support
Azure Stack HCI: Azure Stack HCI doesn't natively enforce or provide for multi-tenancy.
Azure Stack Hub: Use Azure Stack Hub for self-service Infrastructure-as-a-Service (IaaS), with strong isolation and precise usage tracking and chargeback for multiple colocated tenants. Ideal for service providers and enterprise private clouds. Templates from the Azure Marketplace.

DevOps tools
Azure Stack HCI: Azure Stack HCI doesn't natively include any DevOps tooling.
Azure Stack Hub: Use Azure Stack Hub to modernize app deployment and operation with DevOps practices like infrastructure as code, continuous integration and continuous deployment (CI/CD), and convenient features like Azure-consistent VM extensions. Ideal for Dev and DevOps teams.
Next steps
Compare Azure Stack HCI and Windows Server
Azure Hybrid Benefit for Azure Stack HCI
Article • 04/17/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2

This article describes Azure Hybrid Benefit and how to use it for Azure Stack HCI.

Azure Hybrid Benefit is a program that helps you reduce the costs of running
workloads in the cloud. With Azure Hybrid Benefit for Azure Stack HCI, you can
maximize the value of your on-premises licenses and modernize your existing
infrastructure to Azure Stack HCI at no additional cost.

What is Azure Hybrid Benefit for Azure Stack HCI?
If you have Windows Server Datacenter licenses with active Software Assurance, you
are eligible to activate Azure Hybrid Benefit for your Azure Stack HCI cluster. To activate
this benefit, you'll need to exchange one core license of Software Assurance-enabled Windows Server Datacenter for one physical core of Azure Stack HCI. For detailed licensing
requirements, see Azure Hybrid Benefit for Windows Server.

This benefit waives the Azure Stack HCI host service fee and Windows Server guest
subscription on your cluster. Other costs associated with Azure Stack HCI, such as Azure
services, are billed as per normal. For details about pricing with Azure Hybrid Benefit,
see Azure Stack HCI pricing .

Tip

You can maximize cost savings by also using Azure Hybrid Benefit for AKS. For
more information, see Azure Hybrid Benefits for AKS.

Activate Azure Hybrid Benefit for Azure Stack HCI
You can activate Azure Hybrid Benefit for your Azure Stack HCI cluster using the Azure
portal.
Prerequisites
The following prerequisites are required to activate Azure Hybrid Benefit for your Azure
Stack HCI cluster:

Make sure your Azure Stack HCI cluster is installed with the following:
Version 22H2 or later; or
Version 21H2 with at least the September 13, 2022 security update
KB5017316 or later

Make sure that all servers in your cluster are online and registered with Azure

Make sure that your cluster has Windows Server Datacenter licenses with active
Software Assurance. For other licensing prerequisites, see Licensing prerequisites

Make sure you have permission to write to the Azure Stack HCI resource. This is
included if you're assigned the contributor or owner role on your subscription

Activate Azure Hybrid Benefit


Follow these steps to activate Azure Hybrid Benefit for your Azure Stack HCI cluster via
the Azure portal:

1. Use your Microsoft Azure credentials to sign in to the Azure portal at this URL:
https://portal.azure.com .

2. Go to your Azure Stack HCI cluster resource page.

3. Under Settings, select Configuration.

4. Under Azure Hybrid Benefit, select the Activate link.

5. In the Activate Azure Hybrid Benefit pane on the right-hand side, confirm the
designated cluster and the number of core licenses you wish to allocate, and select
Activate again to confirm.

Note

You can't deactivate Azure Hybrid Benefit for your cluster after activation.
Proceed after you have confirmed the changes.

6. When Azure Hybrid Benefit successfully activates for your cluster, the Azure Stack
HCI host fee is waived for the cluster.

Important

Windows Server subscription is a way to get unlimited virtualization rights on your cluster through Azure. Now that you have Azure Hybrid Benefit enabled, you have the option of turning on Windows Server subscription at no additional cost.

7. To enable Windows Server subscription at no additional cost, under the Windows Server subscription add-on feature in the same Configuration pane, select Activate benefit.

8. In the Activate Azure Hybrid Benefit pane on the right-hand side, check the
details and then select Activate to confirm. Upon activation, licenses take a few
minutes to apply and set up automatic VM activation (AVMA) on the cluster.

Maintain compliance for Azure Hybrid Benefit


After you activate your Azure Stack HCI cluster with Azure Hybrid Benefit, you must
regularly check status and maintain compliance for Azure Hybrid Benefit. An Azure Stack
HCI cluster using Azure Hybrid Benefit can run only during the Software Assurance term.
When the Software Assurance term is nearing expiry, you need to either renew your
agreement with Software Assurance, disable the Azure Hybrid Benefit functionality, or
de-provision the clusters that are using Azure Hybrid Benefit.

You can perform an inventory of your clusters through the Azure portal and Azure
Resource Graph as described in the following section.

Verify that your cluster is using Azure Hybrid Benefit


You can verify if your cluster is using Azure Hybrid Benefit via Azure portal, PowerShell,
or Azure CLI.

Azure portal

1. In your Azure Stack HCI cluster resource page, under Settings, select
Configuration.
2. Under Azure Hybrid Benefit, the status shows as:

Activated - indicates Azure Hybrid Benefit is activated
Not activated - indicates Azure Hybrid Benefit isn't activated
You can also navigate to Cost Analysis > Cost by Resource > Cost by Resource.
Expand your Azure Stack HCI resource to check that the meter is under Software
Assurance.

List all Azure Stack HCI clusters with Azure Hybrid Benefit in a subscription
You can list all Azure Stack HCI clusters with Azure Hybrid Benefit in a subscription using PowerShell or Azure CLI.
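One hedged approach with Az PowerShell is to enumerate the cluster resources in the subscription and inspect their properties; the exact field that surfaces the benefit status isn't shown here and is left for you to confirm in the Properties payload. The subscription ID is a placeholder.

PowerShell

# Hedged sketch: enumerate Azure Stack HCI cluster resources in a subscription.
Connect-AzAccount
Set-AzContext -Subscription "<your-subscription-id>"
Get-AzResource -ResourceType "Microsoft.AzureStackHCI/clusters" -ExpandProperties |
    Select-Object Name, ResourceGroupName, Properties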

Troubleshoot Azure Hybrid Benefit for Azure Stack HCI
This section describes the errors that you may get when activating Azure Hybrid Benefit
for Azure Stack HCI.

Error

Failed to activate Azure Hybrid Benefit. We couldn’t find your Software Assurance contract.

Suggested solution

This error can occur if you have a new Software Assurance contract or if you have set up
this Azure subscription recently, but your information isn't updated in the portal yet. If
you get this error, reach out to us at [email protected] and share the following
information:

Customer/organization name - the name registered on your Software Assurance contract.
Azure subscription ID – to which your Azure Stack HCI cluster is registered.
Agreement number for Software Assurance – this can be found on your purchase order, and is the number you would use to install software from the Volume Licensing Service Center (VLSC).
FAQs
This section answers questions you may have about Azure Hybrid Benefit for Azure
Stack HCI.

How does licensing work for Azure Hybrid Benefit?


For more information about licensing, see Azure Hybrid Benefit for Windows Server.

Can I opt in to Azure Hybrid Benefit for an existing cluster?
Yes.

Is there any additional cost incurred by opting in to Azure Hybrid Benefit for Azure Stack HCI?
No additional costs are incurred, as Azure Hybrid Benefit is included as part of your
Software Assurance benefit.

How do I find out if my organization has Software Assurance?
Consult your Account Manager or licensing partner.

When would the new pricing benefit for Azure Hybrid Benefit take effect?
The pricing benefit for Azure Stack HCI host fees takes effect immediately upon
activation of Azure Hybrid Benefit for your cluster. The pricing benefit for Windows
Server subscription takes effect immediately after you activate both Azure Hybrid
Benefit and Windows Server subscription.

Next steps
For related information, see also:

Azure Hybrid Benefit for Windows Server


Azure Stack HCI FAQ
FAQ

The Azure Stack HCI FAQ provides information on Azure Stack HCI connectivity with the
cloud, and how Azure Stack HCI relates to Windows Server and Azure Stack Hub.

How does Azure Stack HCI use the cloud?
Azure Stack HCI is an on-premises hyperconverged infrastructure stack delivered as an
Azure hybrid service. You install the Azure Stack HCI software on physical servers that
you control on your premises. Then you connect to Azure for cloud-based monitoring,
support, billing, and optional management and security features. This FAQ section
clarifies how Azure Stack HCI uses the cloud by addressing frequently asked questions
about connectivity requirements and behavior.

Does my data stored on Azure Stack HCI get sent to the cloud?
No. Customer data, including the names, metadata, configuration, and contents of your
on-premises virtual machines (VMs), is never sent to the cloud unless you turn on
additional services, like Azure Backup or Azure Site Recovery, or unless you enroll those
VMs individually into cloud management services like Azure Arc.

Because Azure Stack HCI doesn't store customer data in the cloud, business continuity
disaster recovery (BCDR) for the customer's on-premises data is defined and controlled
by the customer. To set up your own site-to-site replication using a stretched cluster, see
Stretched clusters overview.

To learn more about the diagnostic data we collect to keep Azure Stack HCI secure, up
to date, and working as expected, see Azure Stack HCI data collection and Data
residency in Azure .

Does the control plane for Azure Stack HCI go through the cloud?
No. You can use edge-local tools, such as Windows Admin Center, PowerShell, or System Center, to directly manage the host infrastructure and VMs even if your network
connection to the cloud is down or severely limited. Common everyday operations, such
as moving a VM between hosts, replacing a failed drive, or configuring IP addresses
don’t rely on the cloud. However, cloud connectivity is required to obtain over-the-air
software updates, change your Azure registration, or use features that directly rely on
cloud services for backup, monitoring, and more.

Are there bandwidth or latency requirements between Azure Stack HCI and the cloud?
No. Limited-bandwidth connections like rural T1 lines or satellite/cellular connections
are adequate for Azure Stack HCI to sync. The minimum required connectivity is just
several kilobytes per day. Additional services may require additional bandwidth,
especially to replicate or back up whole VMs, download large software updates, or
upload verbose logs for analysis and monitoring in the cloud.

Does Azure Stack HCI require continuous connectivity to the cloud?
No. Azure Stack HCI is designed to handle periods of limited or zero connectivity.

What happens if my network connection to the cloud temporarily goes down?
While your connection is down, all host infrastructure and VMs continue to run
normally, and you can use edge-local tools for management. You would not be able to
use features that directly rely on cloud services. Information in the Azure portal also
would become out-of-date until Azure Stack HCI is able to sync again.

How long can Azure Stack HCI run with the connection down?
At the minimum, Azure Stack HCI needs to sync successfully with Azure once per 30
consecutive days.

What happens if the 30-day limit is exceeded?
If Azure Stack HCI hasn’t synced with Azure in more than 30 consecutive days, the
cluster’s connection status will show Out of policy in the Azure portal and other tools,
and the cluster will enter a reduced functionality mode. In this mode, the host
infrastructure stays up and all current VMs continue to run normally. However, new VMs
can’t be created until Azure Stack HCI is able to sync again. The internal technical reason
is that the cluster’s cloud-generated license has expired and needs to be renewed by
syncing with Azure.

What content does Azure Stack HCI sync with the cloud?
This depends on which features you’re using. At the minimum, Azure Stack HCI syncs
basic cluster information to display in the Azure portal (like the list of clustered nodes,
hardware model, and software version); billing information that summarizes accrued
core-days since the last sync; and minimal required diagnostic information that helps
Microsoft keep your Azure Stack HCI secure, up-to-date, and working properly. The total
size is small – a few kilobytes. If you turn on additional services, they may upload more:
for example, Azure Log Analytics would upload logs and performance counters for
monitoring.

How often does Azure Stack HCI sync with the cloud?
This depends on which features you’re using. At the minimum, Azure Stack HCI will try
to sync every 12 hours. If sync doesn’t succeed, the content is retained locally and sent
with the next successful sync. In addition to this regular timer, you can manually sync
any time, using either the Sync-AzureStackHCI PowerShell cmdlet or from Windows
Admin Center. If you turn on additional services, they may upload more frequently: for
example, Azure Log Analytics would upload every 5 minutes for monitoring.
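For example, to sync on demand from PowerShell, run the cmdlet mentioned above on one of the clustered servers:

PowerShell

# Trigger an on-demand sync with Azure (run on a clustered server)
Sync-AzureStackHCI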
Where does the synced information actually go?
Azure Stack HCI syncs with Azure and stores data in a secure, Microsoft-operated
datacenter. To learn more about the diagnostic data we collect to keep Azure Stack HCI
secure, up to date, and working as expected, see Azure Stack HCI data collection and
Data residency in Azure .

Can I use Azure Stack HCI and never connect to Azure?
No. Azure Stack HCI needs to sync successfully with Azure once per 30 consecutive days.

Can I transfer data offline between an "air-gapped" Azure Stack HCI and Azure?
No. There is currently no mechanism to register and sync between on-premises and
Azure without network connectivity. For example, you can't transport certificates or
billing data using removable storage. If there is sufficient customer demand, we're open
to exploring such a feature in the future. Let us know in the Azure Stack HCI feedback
forum .

How does Azure Stack HCI relate to Windows Server?
Windows Server is the foundation of nearly every Azure product, and all the features
you value continue to release with support in Windows Server. The initial offering of
Azure Stack HCI was based on Windows Server 2019 and used the traditional Windows
Server licensing model. Today, Azure Stack HCI has its own operating system and
subscription-based licensing model. Azure Stack HCI is the recommended way to deploy
HCI on-premises, using Microsoft-validated hardware from our partners.
Which guest operating systems are supported on Azure Stack HCI?
Azure Stack HCI supports several guest operating systems. For more information, see
Supported Windows guest operating systems for Hyper-V on Windows Server.

Can I upgrade from Windows Server 2019 to Azure Stack HCI?
There is no in-place upgrade from Windows Server to Azure Stack HCI at this time. Stay
tuned for specific migration guidance for customers running hyperconverged clusters
based on Windows Server 2019 and 2016.

What Azure services can I connect to Azure Stack HCI?
For an updated list of Azure services that you can connect Azure Stack HCI to, see
Connecting Windows Server to Azure hybrid services.

What do the Azure Stack Hub and Azure Stack HCI solutions have in common?
Azure Stack HCI features the same Hyper-V-based software-defined compute, storage,
and networking technologies as Azure Stack Hub. Both offerings meet rigorous testing
and validation criteria to ensure reliability and compatibility with the underlying
hardware platform.

How are the Azure Stack Hub and Azure Stack HCI solutions different?
With Azure Stack Hub, you run cloud services on-premises. You can run Azure IaaS and
PaaS services on-premises to consistently build and run cloud apps anywhere, managed
with the Azure portal on-premises.

With Azure Stack HCI, you run virtualized workloads on-premises, managed with
Windows Admin Center and familiar Windows Server tools. You can also connect to
Azure for hybrid scenarios like cloud-based Site Recovery, monitoring, and others.

Can I upgrade from Azure Stack HCI to Azure Stack Hub?
No, but customers can migrate their workloads from Azure Stack HCI to Azure Stack
Hub or Azure.

How do I identify an Azure Stack HCI server?
Windows Admin Center lists the operating system in the All Connections list and various
other places, or you can use the following PowerShell command to query for the
operating system name and version.

PowerShell

Get-ComputerInfo -Property 'osName', 'osDisplayVersion'

Here’s some example output:

OsName OSDisplayVersion
------ ----------------
Microsoft Azure Stack HCI 20H2
Azure Stack HCI release information
Article • 07/11/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2

Feature updates for Azure Stack HCI are released periodically to enhance the customer
experience. To keep your Azure Stack HCI service in a supported state, you have up to
six months to install updates, but we recommend installing updates as they are released.
Microsoft provides monthly quality and security updates for each supported version of
Azure Stack HCI and also provides yearly feature updates.

This article provides a list of the available updates for each version of Azure Stack HCI.

Azure Stack HCI release summary


The following table provides the release summary of each version of Azure Stack HCI
and the date by when you must apply updates to keep your Azure Stack HCI service in a
supported state.

All dates are listed in ISO 8601 format: YYYY-MM-DD

Version Availability date OS build Apply update by

22H2 2022-10-25 20349.1194 2023-04-25

21H2 2021-10-19 20348.288 2022-04-19

20H2 2020-12-10 17784.1408 Out of support

Azure Stack HCI, version 22H2 (OS build 20349)


The following are the available updates for Azure Stack HCI, version 22H2.

All dates are listed in ISO 8601 format: YYYY-MM-DD

OS build Availability date KB article

20349.1850 2023-07-11 KB 5028171

20349.1787 2023-06-13 KB 5027225

20349.1726 2023-05-09 KB 5026370



20349.1668 2023-04-11 KB 5025230

20349.1607 2023-03-14 KB 5023705

20349.1547 2023-02-14 KB 5022842

20349.1487 2023-01-10 KB 5022291

20349.1368 2022-12-20 KB 5022553

20349.1366 2022-12-13 KB 5021249

20349.1311 2022-11-22 KB 5020032

20349.1251 2022-11-17 KB 5021656

20349.1249 2022-11-08 KB 5019081

20349.1194 2022-10-25 KB 5018485

Azure Stack HCI, version 21H2 (OS build 20348)


The following are the available updates for Azure Stack HCI, version 21H2.

All dates are listed in ISO 8601 format: YYYY-MM-DD

OS build Availability date KB article

20348.1850 2023-07-11 KB 5028171

20348.1787 2023-06-13 KB 5027225

20348.1726 2023-05-09 KB 5026370

20348.1668 2023-04-11 KB 5025230

20348.1607 2023-03-14 KB 5023705

20348.1547 2023-02-14 KB 5022842

20348.1487 2023-01-10 KB 5022291

20348.1368 2022-12-20 KB 5022553

20348.1366 2022-12-13 KB 5021249

20348.1311 2022-11-22 KB 5020032

20348.1251 2022-11-17 KB 5021656



20348.1249 2022-11-08 KB 5019081

20348.1194 2022-10-25 KB 5018485

20348.1131 2022-10-17 KB 5020436

20348.1129 2022-10-11 KB 5018421

20348.1070 2022-09-20 KB 5017381

20348.1006 2022-09-13 KB 5017316

20348.946 2022-08-16 KB 5016693

20348.887 2022-08-09 KB 5016627

20348.859 2022-07-19 KB 5015879

20348.825 2022-07-12 KB 5015827

20348.803 2022-06-23 KB 5014665

20348.768 2022-06-14 KB 5014678

20348.740 2022-05-24 KB 5014021

20348.707 2022-05-10 KB 5013944

20348.681 2022-04-25 KB 5012637

20348.643 2022-04-12 KB 5012604

20348.617 2022-03-22 KB 5011558

20348.587 2022-03-08 KB 5011497

20348.558 2022-02-15 KB 5010421

20348.524 2022-02-08 KB 5010354

20348.502 2022-01-25 KB 5009608

20348.473 2022-01-17 KB 5010796

20348.469 2022-01-11 KB 5009555

20348.407 2022-01-05 KB 5010197

20348.405 2021-12-14 KB 5008223

20348.380 2021-11-22 KB 5007254



20348.350 2021-11-09 KB 5007205

20348.320 2021-10-26 KB 5006745

Azure Stack HCI, version 20H2 (OS build 17784)


The following are the available updates for Azure Stack HCI, version 20H2.

All dates are listed in ISO 8601 format: YYYY-MM-DD

OS build Availability date KB article

17784.3092 2022-12-13 KB 5021236

17784.2977 2022-11-08 KB 5019962

17784.2968 2022-11-08 KB 5020804

17784.2868 2022-10-17 KB 5020446

17784.2866 2022-10-11 KB 5018415

17784.2791 2022-09-20 KB 5017382

17784.2780 2022-09-20 KB 5017395

17784.2725 2022-09-13 KB 5017311

17784.2665 2022-08-23 KB 5016692

17784.2605 2022-08-09 KB 5016620

17784.2576 2022-07-21 KB 5015881

17784.2576 2022-07-21 KB 5015899

17784.2545 2022-07-12 KB 5015809

17784.2540 2022-07-12 KB 5015894

17784.2524 2022-06-23 KB 5014667

17784.2515 2022-06-23 KB 5014798

17784.2486 2022-06-14 KB 5014698

17784.2462 2022-05-24 KB 5014020

17784.2430 2022-05-10 KB 5013951



17784.2424 2022-05-10 KB 5014033

17784.2398 2022-04-21 KB 5012660

17784.2398 2022-04-21 KB 5012676

17784.2364 2022-04-12 KB 5012589

17784.2337 2022-03-22 KB 5011566

17784.2331 2022-03-22 KB 5011575

17784.2306 2022-03-08 KB 5011490

17784.2279 2022-02-15 KB 5010428

17784.2245 2022-02-08 KB 5010343

17784.2244 2022-02-08 KB 5011353

17784.2219 2022-01-27 KB 5009625

17784.2219 2022-01-27 KB 5009640

17784.2190 2022-01-11 KB 5009542

17784.2135 2021-12-14 KB 5008210

17784.2100 2021-11-22 KB 5007264

17784.2067 2021-11-09 KB 5007187

17784.2060 2021-11-09 KB 5007349

17784.2038 2021-10-19 KB 5006741

17784.2036 2021-10-19 KB 5006751

17784.2005 2021-10-12 KB 5006679

17784.1979 2021-09-21 KB 5005620

17784.1950 2021-09-14 KB 5005567

17784.1941 2021-09-14 KB 5005942

17784.1915 2021-08-26 KB 5005105

17784.1884 2021-08-10 KB 5005042

17784.1881 2021-08-10 KB 5005410



17784.1861 2021-07-20 KB 5004311

17784.1861 2021-07-20 KB 5004425

17784.1827 2021-07-13 KB 5004235

17784.1797 2021-06-15 KB 5003704

17784.1768 2021-06-08 KB 5003643

17784.1762 2021-06-08 KB 5004179

17784.1737 2021-05-20 KB 5003237

17784.1705 2021-05-11 KB 5003188

17784.1700 2021-05-11 KB 5003282

17784.1681 2021-04-22 KB 5001395

17784.1645 2021-04-13 KB 5001343

17784.1640 2021-04-13 KB 5001449

17784.1619 2021-03-25 KB 5000849

17784.1589 2021-03-09 KB 5000801

17784.1580 2021-03-09 KB 5001158

17784.1557 2021-02-16 KB 4601381

17784.1526 2021-02-09 KB 4601317

17784.1497 2021-01-21 KB 4598294

17784.1466 2021-01-12 KB 4598232

17784.1408 2020-12-10 KB 4592441

Release notes
For information about what's included in each version of Azure Stack HCI, see the
release notes:

Release notes for Azure Stack HCI, version 22H2
Release notes for Azure Stack HCI, version 21H2
Release notes for Azure Stack HCI, version 20H2
Release notes for Azure Stack HCI, version 20H2 preview releases

Next steps
Azure Stack HCI Lifecycle
Get started with Azure Stack HCI and
Windows Admin Center
Article • 04/17/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2

This topic provides instructions for connecting to an Azure Stack HCI cluster, and for
monitoring cluster and storage performance. If you haven't set up a cluster yet,
download Azure Stack HCI and see Quickstart: Create an Azure Stack HCI cluster and
register it with Azure for instructions.

Install Windows Admin Center


Windows Admin Center is a locally deployed, browser-based app for managing Azure
Stack HCI. The simplest way to install Windows Admin Center is on a local management
PC (desktop mode), although you can also install it on a server (service mode).

7 Note

For Azure AD authentication, install Windows Admin Center on a server.

If you install Windows Admin Center on a server, tasks that require CredSSP, such as
cluster creation and installing updates and extensions, require using an account that's a
member of the Gateway Administrators group on the Windows Admin Center server. For
more information, see the first two sections of Configure User Access Control and
Permissions.

Add and connect to an Azure Stack HCI cluster


After you have completed the installation of Windows Admin Center, you can add a
cluster to manage from the main overview page.

1. Click + Add under All Connections.


2. Choose to add a Windows Server cluster:

3. Type the name of the cluster to manage and click Add. The cluster will be added to
your connection list on the overview page.

4. Under All Connections, click the name of the cluster you just added. Windows
Admin Center will start Cluster Manager and take you directly to the Windows
Admin Center dashboard for that cluster.

Monitor cluster performance with the Windows Admin Center dashboard
The Windows Admin Center dashboard provides alerts and health information about
servers, drives, and volumes, as well as details about CPU, memory, and storage usage.
The bottom of the dashboard displays cluster performance information such as IOPS
and latency by hour, day, week, month, or year.
Monitor performance of individual
components
The Tools menu to the left of the dashboard allows you to drill down on any component
of the cluster to view summaries and inventories of virtual machines, servers, volumes,
and drives.

Virtual machines
To view a summary of virtual machines that are running on the cluster, click Virtual
machines from the Tools menu at the left.
For a complete inventory of virtual machines running on the cluster along with their
state, host server, CPU usage, memory pressure, memory demand, assigned memory,
and uptime, click Inventory at the top of the page.

Servers
To view a summary of the servers in the cluster, click Servers from the Tools menu at the
left.
For a complete inventory of servers in the cluster including their status, uptime,
manufacturer, model, and serial number, click Inventory at the top of the page.

Volumes
To view a summary of volumes on the cluster, click Volumes from the Tools menu at the
left.
For a complete inventory of volumes on the cluster including their status, file system,
resiliency, size, storage usage, and IOPS, click Inventory at the top of the page.

Drives
To view a summary of drives in the cluster, click Drives from the Tools menu at the left.
For a complete inventory of drives in the cluster along with their serial number, status,
model, size, type, use, location, server, and capacity, click Inventory at the top of the
page.

Virtual switches
To view the settings for a virtual switch in the cluster, click Virtual switches from the
Tools menu at the left, then click the name of the virtual switch you want to display the
settings for. Windows Admin Center will display the network adapters associated with
the virtual switch, including their IP addresses, connection state, link speed, and MAC
address.
Add counters with the Performance Monitor
tool
Use the Performance Monitor tool to view and compare performance counters for
Windows, apps, or devices in real-time.

1. Select Performance Monitor from the Tools menu on the left.


2. Click blank workspace to start a new workspace, or restore previous to restore a
previous workspace.

3. If creating a new workspace, click the Add counter button and select one or more
source servers to monitor, or select the entire cluster.
4. Select the object and instance you wish to monitor, as well as the counter and
graph type to view dynamic performance information.
5. Save the workspace by choosing Save > Save As from the top menu.

Collect diagnostics information


Select Diagnostics from the Tools menu to collect information for troubleshooting
problems with your cluster. If you call Microsoft Support, they may ask for this
information.

Next steps
To monitor performance history on your Azure Stack HCI clusters, see also:

Performance history for Storage Spaces Direct


Quickstart: Create an Azure Stack HCI
cluster and register it with Azure
Article • 04/17/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2

In this quickstart, you'll learn how to deploy a two-server, single-site Azure Stack HCI
cluster and register it with Azure. For multisite deployments, see the Stretched clusters
overview.

Before you start


Before creating a cluster, do the following:

Purchase two servers from the Azure Stack HCI Catalog through your preferred
Microsoft hardware partner with the Azure Stack HCI operating system pre-
installed. Review the system requirements to make sure the hardware you select
will support the workloads you plan to run on the cluster. We recommend using a
system with high-speed network adapters that use iWARP for simple configuration.

Create a user account that’s a member of the local Administrators group on each
server.

Create an Active Directory domain controller on-premises, if you don't already have one.

) Important

On-premises Active Directory is required. Using Azure Active Directory alone will not enable you to join the cluster to Azure Active Directory Domain Services.

Get an Azure subscription, if you don't already have one.

Install Windows Admin Center on a management PC and register Windows Admin Center with Azure. Note that your management computer must be joined to the same Active Directory domain in which you'll create the cluster, or a fully trusted domain.
Take note of the server names, domain names, IP addresses, and VLAN ID for your
deployment.

Create the cluster


Follow these steps to create a simple two-node, single-site cluster. For more details or
to create a stretched cluster, see Create an Azure Stack HCI cluster using Windows
Admin Center.

1. In Windows Admin Center, under All connections, select Add.


2. In the Add resources panel, under Windows Server cluster, select Create new.
3. Under Choose cluster type, select Azure Stack HCI.
4. Under Select server locations, select All servers in one site.
5. Select Create. You will now see the Create Cluster wizard. If the Credential Security
Service Provider (CredSSP) pop-up appears, select Yes to temporarily enable it.

The Create Cluster wizard has five sections, each with several steps.

1. Get started. In this section, you'll check the prerequisites, add servers, join a
domain, install required features and updates, and restart the servers.
2. Networking. This section of the wizard verifies that the correct networking
adapters are enabled and disables any you're not using. You'll select management
adapters, set up a virtual switch configuration, and define your network by
supplying IP addresses.
3. Clustering. This section validates that your servers have a consistent configuration
and are suitable for clustering, and creates the actual cluster.
4. Storage. Next, you'll clean and check drives, validate your storage, and enable
Storage Spaces Direct.
5. SDN. You can skip Section 5 because we won't be using Software Defined
Networking (SDN) for this cluster.

If you enabled the CredSSP protocol in the wizard, you'll want to disable it on each
server for security purposes.

1. In Windows Admin Center, under All connections, select the cluster you just
created.
2. Under Tools, select Servers.
3. In the right pane, select the first server in the cluster.
4. Under Overview, select Disable CredSSP. You will see that the red CredSSP
ENABLED banner at the top disappears.
5. Repeat steps 3 and 4 for the second server in the cluster.
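
If you want to confirm from PowerShell that CredSSP is no longer accepting delegated
credentials, a quick check like the following works; the node names are placeholders, and
this WinRM-level check is separate from the Windows Admin Center toggle itself.

PowerShell

# Verify that CredSSP delegation is disabled on each node (node names are placeholders).
Invoke-Command -ComputerName 'Server1','Server2' -ScriptBlock {
    Get-WSManCredSSP
}
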
Set up a cluster witness
Setting up a witness resource is required so that if one of the servers in the cluster goes
offline, it does not cause the other node to become unavailable as well. For this
quickstart, we'll use an SMB file share located on another server as a witness. You may
prefer to use an Azure cloud witness, provided all server nodes in the cluster have a
reliable internet connection. For more information about witness options, see Set up a
cluster witness.

1. In Windows Admin Center, select Cluster Manager from the top drop-down arrow.
2. Under Cluster connections, select the cluster.
3. Under Tools, select Settings.
4. In the right pane, select Witness.
5. For Witness type, select File share witness.
6. Specify a file share path such as \\servername.domain.com\Witness$ and supply credentials if needed.
7. Select Save.
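
If you prefer PowerShell over the Windows Admin Center UI, a rough equivalent uses the
Set-ClusterQuorum cmdlet from the FailoverClusters module; the cluster name and share
path below are placeholders.

PowerShell

# Configure a file share witness for the cluster (cluster name and share path are placeholders).
Set-ClusterQuorum -Cluster 'Cluster1' -FileShareWitness '\\servername.domain.com\Witness$'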

Register with Azure


Azure Stack HCI requires a connection to Azure, and you'll need Azure Active Directory
permissions to complete the registration. If you don't already have them, ask your Azure
AD administrator to either grant permissions or delegate them to you. See Connect
Azure Stack HCI to Azure for more information. Once registered, the cluster connects
automatically in the background.
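
As a sketch, you can also register from PowerShell with the Register-AzStackHCI cmdlet in
the Az.StackHCI module; the subscription ID and server name below are placeholders.

PowerShell

# Register the cluster with Azure (run from a management PC; values are placeholders).
Register-AzStackHCI -SubscriptionId '<your-subscription-id>' -ComputerName 'Server1'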

Next steps
In this quickstart, you created an Azure Stack HCI cluster and registered it with Azure.
You are now ready to Create volumes and then Create virtual machines.
Tutorial: Create a VM-based lab for
Azure Stack HCI
Article • 07/11/2022

Applies to: Azure Stack HCI, version 21H2

In this tutorial, you use MSLab PowerShell scripts to automate the process of creating a
private forest to run Azure Stack HCI on virtual machines (VMs) using nested
virtualization.

) Important

Because Azure Stack HCI is intended as a virtualization host where you run all of
your workloads in VMs, nested virtualization is not supported in production
environments. Use nested virtualization for testing and evaluation purposes only.

You'll learn how to:

" Create a private forest with a domain controller and a Windows Admin Center
server
" Deploy multiple VMs running Azure Stack HCI for clustering

Once completed, you'll be able to create an Azure Stack HCI cluster using the VMs
you've deployed and use the private lab for prototyping, testing, troubleshooting, or
evaluation.

Prerequisites
To complete this tutorial, you need:

Admin privileges on a Hyper-V host server running Windows Server 2022, Windows Server 2019, or Windows Server 2016
At least 8 GB RAM
CPU with nested virtualization support
Solid state drives (SSD)
40 GB of free space on the Hyper-V host server
An Azure account to register Windows Admin Center and your cluster
Prepare the lab
Carefully prepare the lab environment following these instructions.

Connect to the virtualization host


Connect to the physical server on which you'll create the VM-based lab. If you're using a
remote server, connect via Remote Desktop.

Download Azure Stack HCI


Launch a web browser on the server and visit the Azure Stack HCI product page .
Select "Register for a free trial" and complete the short registration form. Select the box
indicating that you agree with the licensing and privacy terms, and then select submit.

Select Download Azure Stack HCI, which will trigger an ISO download.

Download Windows Server


You'll also need a copy of Windows Server 2022, Windows Server 2019, or Windows
Server 2016 for the domain controller and Windows Admin Center VMs. You can use
evaluation media, or if you have access to either a VL or Visual Studio Subscription, you
can use those. For this tutorial, we'll download an evaluation copy .

Create a folder for the lab files


Create a Lab folder at the root of your C drive (or wherever you prefer), and use File
Explorer to copy the OS files you downloaded to the C:\Lab\Isos folder.

Download MSLab scripts


Using the web browser on your server, download MSLab scripts . The zip file
wslab_vxx.xx.x.zip should automatically download to your hard drive. Copy the zip file
to the hard drive location (C:\Lab) and extract the scripts.

Edit the LabConfig script


MSLab VMs are defined in the LabConfig.ps1 PowerShell script as a simple hash table.
You'll need to customize the script to create a private forest with Azure Stack HCI VMs.
To edit the script, use File Explorer to navigate to C:\Lab\wslab_xxx\ and then right-click
on LabConfig.ps1. Select Edit, which will open the file using Windows PowerShell ISE.

 Tip

Save the original version of LabConfig.ps1 as Original_LabConfig.ps1, so it's easy to start over if you need to.

Notice that most of the script is commented out; you will only need to execute a few
lines. Follow these steps to customize the script so it produces the desired output.
Alternatively, you can simply copy the code block at the end of this section and replace
the appropriate lines in LabConfig.

To customize the script:

1. Add the following to the first uncommented line of LabConfig.ps1 to tell the script
where to find the ISOs, enable the guest service interface, and enable DNS
forwarding on the host: ServerISOFolder="C:\lab\isos" ;
EnableGuestServiceInterface=$true ; UseHostDnsAsForwarder=$true

2. Change the admin name and password, if desired.

3. If you plan to create multiple labs on the same server, change Prefix = 'MSLab-' to
use a new Prefix name, such as Lab1-. We'll stick with the default MSLab- prefix for
this tutorial.

4. Comment out the default ForEach-Object line for Windows Server and remove the
hashtag before the ForEach-Object line for Azure Stack HCI so that the script will
create Azure Stack HCI VMs instead of Windows Server VMs for the cluster nodes.

5. By default, the script creates a four-node cluster. If you want to change the number
of VMs in the cluster, replace 1..4 with 1..2 or 1..8, for example. Remember, the
more VMs in your cluster, the greater the memory requirements on your host
server.

6. Add NestedVirt=$true ; AdditionalNetworks=$True to the ForEach-Object command and set MemoryStartupBytes to 4GB.

7. Add an AdditionalNetworksConfig line: $LabConfig.AdditionalNetworksConfig += @{ NetName = 'Converged'; NetAddress='10.0.1.'; NetVLAN='0'; Subnet='255.255.255.0'}

8. Add the following line to configure a Windows Admin Center management server
running the Windows Server Core operating system to add a second NIC so you
can connect to Windows Admin Center from outside the private network:
$LabConfig.VMs += @{ VMName = 'AdminCenter' ; ParentVHD =
'Win2019Core_G2.vhdx'; MGMTNICs=2}

9. Be sure to save your changes to LabConfig.ps1.

The changes to LabConfig.ps1 made in the steps above are reflected in this code block:

PowerShell

$LabConfig=@{ DomainAdminName='LabAdmin'; AdminPassword='LS1setup!'; Prefix = 'MSLab-' ; DCEdition='4'; Internet=$true ; AdditionalNetworksConfig=@(); VMs=@() ; ServerISOFolder="C:\lab\isos" ; EnableGuestServiceInterface=$true ; UseHostDnsAsForwarder=$true }

# Windows Server 2019
#1..4 | ForEach-Object {$VMNames="S2D"; $LABConfig.VMs += @{ VMName = "$VMNames$_" ; Configuration = 'S2D' ; ParentVHD = 'Win2019Core_G2.vhdx'; SSDNumber = 0; SSDSize=800GB ; HDDNumber = 12; HDDSize= 4TB ; MemoryStartupBytes= 512MB }}

# Or Azure Stack HCI
1..4 | ForEach-Object {$VMNames="AzSHCI"; $LABConfig.VMs += @{ VMName = "$VMNames$_" ; Configuration = 'S2D' ; ParentVHD = 'AzSHCI21H2_G2.vhdx'; SSDNumber = 0; SSDSize=800GB ; HDDNumber = 12; HDDSize= 4TB ; MemoryStartupBytes= 4GB ; NestedVirt=$true ; AdditionalNetworks=$true }}

# Or Windows Server 2022
#1..4 | ForEach-Object {$VMNames="S2D"; $LABConfig.VMs += @{ VMName = "$VMNames$_" ; Configuration = 'S2D' ; ParentVHD = 'Win2022Core_G2.vhdx'; SSDNumber = 0; SSDSize=800GB ; HDDNumber = 12; HDDSize= 4TB ; MemoryStartupBytes= 512MB }}

$LabConfig.AdditionalNetworksConfig += @{ NetName = 'Converged'; NetAddress='10.0.1.'; NetVLAN='0'; Subnet='255.255.255.0'}

$LabConfig.VMs += @{ VMName = 'AdminCenter' ; ParentVHD = 'Win2019Core_G2.vhdx'; MGMTNICs=2}

Run MSLab scripts and create parent disks


MSLab scripts automate much of the lab setup process and convert ISO images of the
operating systems to VHD files.

Run the Prereq script


Navigate to C:\Lab\wslab_xxx\ and run the 1_Prereq.ps1 script by right-clicking on the
file and selecting Run With PowerShell. The script will download necessary files. Some
example files will be placed into the ToolsDisk folder, and some scripts will be added to
the ParentDisks folder. When the script is finished, it will ask you to press Enter to
continue.

7 Note

You might need to change the script execution policy on your system to allow
unsigned scripts by running this PowerShell cmdlet as administrator: Set-
ExecutionPolicy -ExecutionPolicy Unrestricted

Create the Windows Server parent disks


The 2_CreateParentDisks.ps1 script prepares virtual hard disks (VHDs) for Windows
Server and Server Core from the operating system ISO file, and also prepares a domain
controller for deployment with all required roles configured. Run
2_CreateParentDisks.ps1 by right-clicking on the file and selecting Run with PowerShell.

You'll be asked to select telemetry levels; choose B for Basic or F for Full. The script will
also ask for the ISO file for Windows Server 2019. Point it to the location you copied the
file to (C:\Lab\Isos). If there are multiple ISO files in the folder, you'll be asked to select
the ISO that you want to use. Select the Windows Server ISO. If you're asked to format a
drive, select N.

2 Warning

Don't select the Azure Stack HCI ISO - you'll create the Azure Stack HCI parent disk
(VHD) in the next section.

Creating the parent disks can take as long as 1-2 hours, although it can take much less
time. When complete, the script will ask you if unnecessary files should be removed. If
you select Y, it will remove the first two scripts because they're no longer needed. Press
Enter to continue.

Create the Azure Stack HCI parent disk


Download the Convert-WindowsImage.ps1 function to the
C:\Lab\wslab_xxx\ParentDisks folder as Convert-WindowsImage.ps1. Then run
CreateParentDisk.ps1 as administrator. Choose the Azure Stack HCI ISO from
C:\Lab\Isos, and accept the default name and size.
Creating the parent disk will take a while. When the operation is complete, you'll be
prompted to start the VMs. Don't start them yet - type N.

Deploy the VMs


Run Deploy.ps1 by right-clicking and selecting Run with PowerShell. The script will take
10-15 minutes to complete.

Install operating system updates and software


Now that the VMs are deployed, you'll need to install security updates and the software
needed to manage your lab.

Update the domain controller and Windows Admin


Center VMs
Log on to your virtualization host and launch Hyper-V Manager. The domain controller
in your private forest should already be running (MSLab-DC). Go to Virtual Machines,
select the domain controller, and connect to it. Sign in with the username and password
you specified, or if you didn't change them, use the defaults: LabAdmin/LS1setup!

Install any required security updates and restart the domain controller VM if needed.
This may take a while, and you may need to restart the VM multiple times.

In Hyper-V Manager, start the Windows Admin Center VM (MSLab-AdminCenter), which


is running Server Core. Connect to it, log in, and type sconfig. Select download and
install security updates, and reboot if needed. This may take a while, and you may need
to restart the VM and type sconfig multiple times.

Install Microsoft Edge on the domain controller


You'll need a web browser on the domain controller VM in order to use Windows Admin
Center in your private forest. It's likely that Internet Explorer will be blocked for security
reasons, so use Microsoft Edge instead. If Edge isn't already installed on the domain
controller VM, you'll need to install it.

To install Microsoft Edge, connect to the domain controller VM from Hyper-V Manager
and launch a PowerShell session as administrator. Then run the following code to install
and start Microsoft Edge.

PowerShell
#Install Edge
Start-BitsTransfer -Source "https://aka.ms/edge-msi" -Destination "$env:USERPROFILE\Downloads\MicrosoftEdgeEnterpriseX64.msi"

#Start install
Start-Process -Wait -Filepath msiexec.exe -Argumentlist "/i $env:UserProfile\Downloads\MicrosoftEdgeEnterpriseX64.msi /q"

#Start Edge
start microsoft-edge:

Install Windows Admin Center in gateway mode


Using Microsoft Edge on the domain controller VM, download this script to the
domain controller VM and save it with a .ps1 file extension.

Right-click on the file, choose Edit with PowerShell, and change the value of
$GatewayServerName in the first line to match the name of your AdminCenter VM
without the prefix (for example, AdminCenter). Save the script and run it by right-
clicking on the file and selecting Run with PowerShell.

Log on to Windows Admin Center


You should now be able to access Windows Admin Center from Edge on the DC:
http://admincenter

Your browser may warn you that it's an unsafe or insecure connection, but it's OK to
proceed.

Add an externally accessible network adapter (optional)


If your lab is on a private network, you might want to add an externally accessible NIC to
the AdminCenter VM so that you can connect to it and manage your lab from outside
the private forest. To do this, use Windows Admin Center to connect to your
virtualization host (not the domain controller) and go to Virtual machines > MSLab-
AdminCenter > Settings > Networks. Make sure that you have a virtual switch
connected to the appropriate network. Look for Switch Type = External (such as MSLab-
LabSwitch-External). Then add/bind a VM NIC to this external virtual switch. Be sure to
select the "Allow management OS to share these network adapters" checkbox.

Take note of the IP addresses of the network adapters on the AdminCenter VM. Append
:443 to the IP address of the externally accessible NIC, and you should be able to log on
to Windows Admin Center and create and manage your cluster from an external web
browser, such as: https://10.217.XX.XXX:443
Install operating system updates on the Azure Stack HCI
VMs
Start the Azure Stack HCI VMs using Hyper-V Manager on the virtualization host.
Connect to each VM, and download and install security updates using Sconfig on each
of them. You may need to restart the VMs multiple times. (You can skip this step if you'd
rather install the OS updates later as part of the cluster creation wizard).

Enable the Hyper-V role on the Azure Stack HCI VMs


If your cluster VMs are running Azure Stack HCI 20H2, you'll need to run a script to
enable the Hyper-V role on the VMs. Save this script to C:\Lab on your virtualization
host as PreviewWorkaround.ps1.

Right-click on the PreviewWorkaround.ps1 file and select Edit with PowerShell. Change
the $domainName, $domainAdmin, and $nodeName variables if they don't match,
such as:

PowerShell

$domainName = "corp.contoso.com"

$domainAdmin = "$domainName\labadmin"

$nodeName = "MSLab-AzSHCI1","MSLab-AzSHCI2","MSLab-AzSHCI3","MSLab-AzSHCI4"

Save your changes, then open a PowerShell session as administrator and run the script:

PowerShell

PS C:\Lab> ./PreviewWorkaround.ps1

The script will take some time to run, especially if you've created lots of VMs. You should
see the message "MSLab-AzSHCI1 MSLab-AzSHCI2 is now online. Proceeding to install
Hyper-V PowerShell." If the script appears to freeze after displaying the message, press
Enter to wake it up. When it's done, you should see: "MSLab-AzSHCI1 MSLab-AzSHCI2 is
now online. Proceed to the next step ..."

Add additional network adapters (optional)


Depending on how you intend to use the cluster, you may want to add a couple more
network adapters to each Azure Stack HCI VM for more versatile testing. To do this,
connect to your host server using Windows Admin Center and go to Virtual machines >
MSLab-(node) > Settings > Networks. Make sure to select Advanced > Enable MAC
Address Spoofing. If this setting isn't enabled, you may encounter failed connectivity
tests when trying to create a cluster.
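
If you'd rather do this from PowerShell on the virtualization host, a minimal sketch looks
like the following; the VM and switch names are examples from this lab and may differ in
yours.

PowerShell

# Add an extra network adapter to a lab VM and allow MAC address spoofing,
# which nested Azure Stack HCI VMs need so their guest traffic can reach the network.
Add-VMNetworkAdapter -VMName 'MSLab-AzSHCI1' -SwitchName 'MSLab-LabSwitch'
Set-VMNetworkAdapter -VMName 'MSLab-AzSHCI1' -MacAddressSpoofing On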

Register Windows Admin Center with Azure


Connect to Windows Admin Center in your private forest using either the external URL
or using Edge on the domain controller, and Register Windows Admin Center with
Azure.

Clean up resources
If you selected Y to clean up unnecessary files and folders, then cleanup is already done.
If you prefer to do it manually, navigate to C:\Lab and delete any unneeded files.

Next steps
You're now ready to proceed to the Cluster Creation Wizard.

Create an Azure Stack HCI cluster


System requirements for Azure Stack
HCI
Article • 05/31/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2

This article discusses the system requirements for servers, storage, and networking for
Azure Stack HCI. Note that if you purchase Azure Stack HCI Integrated System solution
hardware from the Azure Stack HCI Catalog , you can skip to the Networking
requirements since the hardware already adheres to server and storage requirements.

Azure requirements
Here are the Azure requirements for your Azure Stack HCI cluster:

Azure subscription: If you don't already have an Azure account, create one . You
can use an existing subscription of any type:
Free account with Azure credits for students or Visual Studio subscribers .
Pay-as-you-go subscription with credit card.
Subscription obtained through an Enterprise Agreement (EA).
Subscription obtained through the Cloud Solution Provider (CSP) program.

Azure permissions: Make sure that you're assigned the following roles in your
Azure subscription: User Access Administrator and Contributor. For information on
how to assign permissions, see Assign Azure permissions for registration.

Azure regions

The Azure Stack HCI service is used for registration, billing, and management. It is
currently supported in the following regions:

Azure public

These public regions support geographic locations worldwide, for clusters deployed anywhere in the world:
East US
South Central US
Canada Central
West Europe
Southeast Asia
Central India
Japan East
Australia East

Regions supported for additional features of Azure Stack HCI:

Currently, Azure Arc Resource Bridge supports only the following regions for Azure
Stack HCI registration:

East US
West Europe

Server requirements
A standard Azure Stack HCI cluster requires a minimum of one server and a maximum of
16 servers.

Keep the following in mind for various types of Azure Stack HCI deployments:

It's required that all servers be the same manufacturer and model, using 64-bit
Intel Nehalem grade, AMD EPYC grade or later compatible processors with
second-level address translation (SLAT). A second-generation Intel Xeon Scalable
processor is required to support Intel Optane DC persistent memory. Processors
must be at least 1.4 GHz and compatible with the x64 instruction set.

Make sure that the servers are equipped with at least 32 GB of RAM per node to
accommodate the server operating system, VMs, and other apps or workloads. In
addition, allow 4 GB of RAM per terabyte (TB) of cache drive capacity on each
server for Storage Spaces Direct metadata.

Verify that virtualization support is turned on in the BIOS or UEFI:


Hardware-assisted virtualization. This is available in processors that include a
virtualization option, specifically processors with Intel Virtualization Technology
(Intel VT) or AMD Virtualization (AMD-V) technology.
Hardware-enforced Data Execution Prevention (DEP) must be available and
enabled. For Intel systems, this is the XD bit (execute disable bit). For AMD
systems, this is the NX bit (no execute bit).

Ensure all the servers are in the same time zone as your local domain controller.

You can use any boot device supported by Windows Server, which now includes
SATADOM . RAID 1 mirror is not required but is supported for boot. A 200 GB
minimum size is recommended.

For additional feature-specific requirements for Hyper-V, see System requirements for Hyper-V on Windows Server.
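
To spot-check a few of these requirements on each server, a minimal sketch like the
following can help; note that the Hyper-V requirement properties from Get-ComputerInfo
only report values while the Hyper-V role isn't yet installed.

PowerShell

# Verify virtualization, SLAT, and DEP support, plus the configured time zone.
Get-ComputerInfo -Property 'HyperVRequirementVirtualizationFirmwareEnabled',
    'HyperVRequirementSecondLevelAddressTranslation',
    'HyperVRequirementDataExecutionPreventionAvailable'
Get-TimeZone | Select-Object Id, DisplayName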

Storage requirements
Azure Stack HCI works with direct-attached SATA, SAS, NVMe, or persistent memory
drives that are physically attached to just one server each.

For best results, adhere to the following:

Every server in the cluster should have the same types of drives and the same
number of each type. It's also recommended (but not required) that the drives be
the same size and model. Drives can be internal to the server or in an external
enclosure that is connected to just one server. To learn more, see Drive symmetry
considerations.

Each server in the cluster should have dedicated volumes for logs, with log storage
at least as fast as data storage. Stretched clusters require at least two volumes: one
for replicated data and one for log data.

SCSI Enclosure Services (SES) is required for slot mapping and identification. Each
external enclosure must present a unique identifier (Unique ID).

) Important

NOT SUPPORTED: RAID controller cards or SAN (Fibre Channel, iSCSI, FCoE)
storage, shared SAS enclosures connected to multiple servers, or any form of
multi-path IO (MPIO) where drives are accessible by multiple paths. Host-bus
adapter (HBA) cards must implement simple pass-through mode for any
storage devices used for Storage Spaces Direct.
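
As a quick sanity check on each server, the following sketch lists locally attached drives
with their bus type and whether they're eligible for pooling; drives sitting behind RAID
controllers or multi-path configurations typically can't be pooled.

PowerShell

# List physical drives with bus type, media type, and pooling eligibility.
Get-PhysicalDisk | Select-Object FriendlyName, BusType, MediaType, CanPool, CannotPoolReason, Size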

Networking requirements
An Azure Stack HCI cluster requires a reliable high-bandwidth, low-latency network
connection between each server node.

Verify at least one network adapter is available and dedicated for cluster
management.
Verify that physical switches in your network are configured to allow traffic on any
VLANs you will use.

For physical networking considerations and requirements, see Physical network


requirements.

For host networking considerations and requirements, see Host network requirements.

Stretched clusters require servers be deployed at two separate sites. The sites can be in
different countries/regions, different cities, different floors, or different rooms. For
synchronous replication, you must have a network between servers with enough
bandwidth to contain your IO write workload and an average of 5 ms round trip latency
or lower. Asynchronous replication doesn't have a latency recommendation.
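
To get a rough feel for the round-trip latency between sites before you deploy, you could
run something like the following from a server in one site against a server in the other;
the computer name is a placeholder, and the average ResponseTime should come in at 5 ms or
lower for synchronous replication.

PowerShell

# Measure average round-trip time to a server in the other site (name is a placeholder).
Test-Connection -ComputerName 'site2-node1' -Count 20 |
    Measure-Object -Property ResponseTime -Average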

A stretched cluster requires a minimum of 4 servers (2 per site) and a maximum of 16 servers (8 per site). You can’t create a stretched cluster with two single servers.
Each site must have the same number of servers and drives.
SDN isn’t supported on stretched clusters.

For additional discussion of stretched cluster networking requirements, see Host network requirements.

Software Defined Networking (SDN) requirements


When you create an Azure Stack HCI cluster using Windows Admin Center, you have the
option to deploy Network Controller to enable Software Defined Networking (SDN). If
you intend to use SDN on Azure Stack HCI:

Make sure the host servers have at least 50-100 GB of free space to create the
Network Controller VMs.

You must download a virtual hard disk of the Azure Stack HCI operating system to
use for the SDN infrastructure VMs (Network Controller, Software Load Balancer,
Gateway). For download instructions, see Download the VHDX file.

For more information about preparing for using SDN in Azure Stack HCI, see Plan a
Software Defined Network infrastructure and Plan to deploy Network Controller.

7 Note

SDN is not supported on stretched (multi-site) clusters.


Active Directory Domain requirements
You must have an Active Directory Domain Services (AD DS) domain available for the
Azure Stack HCI system to join. There are no special domain functional-level
requirements. We do recommend turning on the Active Directory Recycle Bin feature as
a general best practice, if you haven't already. To learn more, see Active Directory
Domain Services Overview.

Windows Admin Center requirements


If you use Windows Admin Center to create or manage your Azure Stack HCI cluster,
make sure to complete the following requirements:

Install the latest version of Windows Admin Center on a PC or server for management. See Install Windows Admin Center.

Ensure that Windows Admin Center and your domain controller are not installed
on the same instance. Also, ensure that the domain controller is not hosted on the
Azure Stack HCI cluster or one of the nodes in the cluster.

If you're running Windows Admin Center on a server (instead of a local PC), use an
account that's a member of the Gateway Administrators group, or the local
Administrators group on the Windows Admin Center server.

Verify that your Windows Admin Center management computer is joined to the
same Active Directory domain in which you'll create the cluster, or joined to a fully
trusted domain. The servers that you'll cluster don't need to belong to the domain
yet; they can be added to the domain during cluster creation.

Maximum supported hardware specifications


Azure Stack HCI deployments that exceed the following specifications are not
supported:

Resource Maximum

Physical servers per cluster 16

VMs per host 1,024

Disks per VM (SCSI) 256

Storage per cluster 4 PB



Storage per server 400 TB

Volumes per cluster 64

Volume size 64 TB

Logical processors per host 512

RAM per host 24 TB

RAM per VM 12 TB (generation 2 VM) or 1 TB (generation 1)

Virtual processors per host 2,048

Virtual processors per VM 240 (generation 2 VM) or 64 (generation 1)

Next steps
For related information, see also:

Choose drives
Storage Spaces Direct hardware requirements
Physical network requirements for
Azure Stack HCI
Article • 04/19/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2

This article discusses physical (fabric) network considerations and requirements for
Azure Stack HCI, particularly for network switches.

7 Note

Requirements for future Azure Stack HCI versions may change.

Network switches for Azure Stack HCI


Microsoft tests Azure Stack HCI to the standards and protocols identified in the
Network switch requirements section below. While Microsoft doesn't certify network
switches, we do work with vendors to identify devices that support Azure Stack HCI
requirements.

) Important

While other network switches using technologies and protocols not listed here may
work, Microsoft cannot guarantee they will work with Azure Stack HCI and may be
unable to assist in troubleshooting issues that occur.

When purchasing network switches, contact your switch vendor and ensure that the
devices meet the Azure Stack HCI requirements for your specified role types. The
following vendors (in alphabetical order) have confirmed that their switches support
Azure Stack HCI requirements:

Overview

Click on a vendor tab to see validated switches for each of the Azure Stack HCI
traffic types. These network classifications can be found here.

) Important
We update these lists as we're informed of changes by network switch vendors.

If your switch isn't included, contact your switch vendor to ensure that your switch
model and the version of the switch's operating system supports the requirements
in the next section.

Network switch requirements


This section lists industry standards that are mandatory for the specific roles of network
switches used in Azure Stack HCI deployments. These standards help ensure reliable
communications between nodes in Azure Stack HCI cluster deployments.

7 Note

Network adapters used for compute, storage, and management traffic require
Ethernet. For more information, see Host network requirements.

Here are the mandatory IEEE standards and specifications:

22H2

22H2 Role Requirements

Requirement   Management   Storage   Compute (Standard)   Compute (SDN)

Virtual LANS X X X X

Priority Flow Control X

Enhanced Transmission X
Selection

LLDP Port VLAN ID X

LLDP VLAN Name X X X

LLDP Link Aggregation X X X X

LLDP ETS Configuration X

LLDP ETS Recommendation X



LLDP PFC Configuration X

LLDP Maximum Frame Size X X X X

Maximum Transmission X
Unit

Border Gateway Protocol X

DHCP Relay Agent X

7 Note

Guest RDMA requires both Compute (Standard) and Storage.

Standard: IEEE 802.1Q


Ethernet switches must comply with the IEEE 802.1Q specification that defines
VLANs. VLANs are required for several aspects of Azure Stack HCI and are required
in all scenarios.

Standard: IEEE 802.1Qbb


Ethernet switches used for Azure Stack HCI storage traffic must comply with the
IEEE 802.1Qbb specification that defines Priority Flow Control (PFC). PFC is required
where Data Center Bridging (DCB) is used. Since DCB can be used in both RoCE and
iWARP RDMA scenarios, 802.1Qbb is required in all scenarios. A minimum of three
Class of Service (CoS) priorities are required without downgrading the switch
capabilities or port speeds. At least one of these traffic classes must provide lossless
communication.

Standard: IEEE 802.1Qaz


Ethernet switches used for Azure Stack HCI storage traffic must comply with the
IEEE 802.1Qaz specification that defines Enhanced Transmission Selection (ETS). ETS is
required where DCB is used. Since DCB can be used in both RoCE and iWARP RDMA
scenarios, 802.1Qaz is required in all scenarios.
A minimum of three CoS priorities are required without downgrading the switch
capabilities or port speed. Additionally, if your device allows ingress QoS rates to be
defined, we recommend that you do not configure ingress rates or configure them
to the exact same value as the egress (ETS) rates.

7 Note

Hyper-converged infrastructure has a high reliance on East-West Layer-2 communication within the same rack and therefore requires ETS. Microsoft doesn't test Azure Stack HCI with Differentiated Services Code Point (DSCP).

Standard: IEEE 802.1AB


Ethernet switches must comply with the IEEE 802.1AB specification that defines the
Link Layer Discovery Protocol (LLDP). LLDP is required for Azure Stack HCI and
enables troubleshooting of physical networking configurations.

Configuration of the LLDP Type-Length-Values (TLVs) must be dynamically enabled.


Switches must not require additional configuration beyond enablement of a specific
TLV. For example, enabling 802.1 Subtype 3 should automatically advertise all
VLANs available on switch ports.

Custom TLV requirements


LLDP allows organizations to define and encode their own custom TLVs. These are
called Organizationally Specific TLVs. All Organizationally Specific TLVs start with an
LLDP TLV Type value of 127. The table below shows which Organizationally Specific
Custom TLV (TLV Type 127) subtypes are required.

Organization TLV Subtype

IEEE 802.1 Port VLAN ID (Subtype = 1)

IEEE 802.1 VLAN Name (Subtype = 3)

Minimum of 10 VLANS

IEEE 802.1 Link Aggregation (Subtype = 7)

IEEE 802.1 ETS Configuration (Subtype = 9)

IEEE 802.1 ETS Recommendation (Subtype = A)

IEEE 802.1 PFC Configuration (Subtype = B)



IEEE 802.3 Maximum Frame Size (Subtype = 4)

Maximum Transmission Unit


New Requirement in 22H2

The maximum transmission unit (MTU) is the largest size frame or packet that can
be transmitted across a data link. A range of 1514 - 9174 is required for SDN
encapsulation.

Border Gateway Protocol


New Requirement in 22H2

Ethernet switches used for Azure Stack HCI SDN compute traffic must support
Border Gateway Protocol (BGP). BGP is a standard routing protocol used to
exchange routing and reachability information between two or more networks.
Routes are automatically added to the route table of all subnets with BGP
propagation enabled. This is required to enable tenant workloads with SDN and
dynamic peering. RFC 4271: Border Gateway Protocol 4

DHCP Relay Agent


New Requirement in 22H2

Ethernet switches used for Azure Stack HCI management traffic must support DHCP
relay agent. The DHCP relay agent is any TCP/IP host which is used to forward
requests and replies between the DHCP server and client when the server is present
on a different network. It is required for PXE boot services. RFC 3046: DHCPv4 or
RFC 6148: DHCPv4

Network traffic and architecture


This section is predominantly for network administrators.

Azure Stack HCI can function in various data center architectures including 2-tier (Spine-
Leaf) and 3-tier (Core-Aggregation-Access). This section refers more to concepts from
the Spine-Leaf topology that is commonly used with workloads in hyper-converged
infrastructure such as Azure Stack HCI.
Network models
Network traffic can be classified by its direction. Traditional Storage Area Network (SAN)
environments are heavily North-South where traffic flows from a compute tier to a
storage tier across a Layer-3 (IP) boundary. Hyperconverged infrastructure is more
heavily East-West where a substantial portion of traffic stays within a Layer-2 (VLAN)
boundary.

) Important

We highly recommend that all cluster nodes in a site are physically located in the
same rack and connected to the same top-of-rack (ToR) switches.

North-South traffic for Azure Stack HCI


North-South traffic has the following characteristics:

Traffic flows out of a ToR switch to the spine or in from the spine to a ToR switch.
Traffic leaves the physical rack or crosses a Layer-3 boundary (IP).
Includes management (PowerShell, Windows Admin Center), compute (VM), and
inter-site stretched cluster traffic.
Uses an Ethernet switch for connectivity to the physical network.

East-West traffic for Azure Stack HCI


East-West traffic has the following characteristics:

Traffic remains within the ToR switches and Layer-2 boundary (VLAN).
Includes storage traffic or Live Migration traffic between nodes in the same cluster
and (if using a stretched cluster) within the same site.
May use an Ethernet switch (switched) or a direct (switchless) connection, as
described in the next two sections.

Using switches
North-South traffic requires the use of switches. Besides using an Ethernet switch that
supports the required protocols for Azure Stack HCI, the most important aspect is the
proper sizing of the network fabric.
It is imperative to understand the "non-blocking" fabric bandwidth that your Ethernet
switches can support and that you minimize (or preferably eliminate) oversubscription of
the network.

Common congestion points and oversubscription, such as the Multi-Chassis Link Aggregation Group used for path redundancy, can be eliminated through proper use of subnets and VLANs. Also see Host network requirements.

Work with your network vendor or network support team to ensure your network
switches have been properly sized for the workload you are intending to run.

Using switchless
Azure Stack HCI supports switchless (direct) connections for East-West traffic for all
cluster sizes so long as each node in the cluster has a redundant connection to every
node in the cluster. This is called a "full-mesh" connection.

For example, a switchless deployment might use an addressing scheme like the following for its host interfaces:

Interface pair Subnet VLAN

Mgmt host vNIC Customer-specific Customer-specific

SMB01 192.168.71.x/24 711

SMB02 192.168.72.x/24 712

SMB03 192.168.73.x/24 713

7 Note

The benefits of switchless deployments diminish with clusters larger than three-
nodes due to the number of network adapters required.
Advantages of switchless connections
No switch purchase is necessary for East-West traffic. A switch is required for
North-South traffic. This may result in lower capital expenditure (CAPEX) costs but
is dependent on the number of nodes in the cluster.
Because there is no switch, configuration is limited to the host, which may reduce
the potential number of configuration steps needed. This value diminishes as the
cluster size increases.

Disadvantages of switchless connections


More planning is required for IP and subnet addressing schemes.
Provides only local storage access. Management traffic, VM traffic, and other traffic
requiring North-South access cannot use these adapters.
As the number of nodes in the cluster grows, the cost of network adapters could
exceed the cost of using network switches.
Doesn't scale well beyond three-node clusters. More nodes incur additional
cabling and configuration that can surpass the complexity of using a switch.
Cluster expansion is complex, requiring hardware and software configuration
changes.

Next steps
Learn about network adapter and host requirements. See Host network
requirements.
Brush up on failover clustering basics. See Failover Clustering Networking Basics .
Brush up on using SET. See Remote Direct Memory Access (RDMA) and Switch
Embedded Teaming (SET).
For deployment, see Create a cluster using Windows Admin Center.
For deployment, see Create a cluster using Windows PowerShell.
Host network requirements for Azure Stack
HCI
Article • 04/17/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2

This topic discusses host networking considerations and requirements for Azure Stack HCI. For
information on datacenter architectures and the physical connections between servers, see Physical
network requirements.

For information on how to simplify host networking using Network ATC, see Simplify host
networking with Network ATC.

Network traffic types


Azure Stack HCI network traffic can be classified by its intended purpose:

Management traffic: Traffic to or from outside the local cluster. For example, storage replica
traffic or traffic used by the administrator for management of the cluster like Remote
Desktop, Windows Admin Center, Active Directory, etc.
Compute traffic: Traffic originating from or destined to a virtual machine (VM).
Storage traffic: Traffic using Server Message Block (SMB), for example Storage Spaces Direct
or SMB-based live migration. This traffic is layer-2 traffic and is not routable.

) Important

Storage replica uses non-RDMA based SMB traffic. This and the directional nature of the
traffic (North-South) makes it closely aligned to that of "management" traffic listed above,
similar to that of a traditional file share.

Select a network adapter


Network adapters are qualified by the network traffic types (see above) they are supported for use
with. As you review the Windows Server Catalog , the Windows Server 2022 certification now
indicates one or more of the following roles. Before purchasing a server for Azure Stack HCI, you
must have at least one adapter that is qualified for management, compute, and storage,
as all three traffic types are required on Azure Stack HCI. You can then use Network ATC to
configure your adapters for the appropriate traffic types.

For more information about this role-based NIC qualification, please see this link .

) Important
Using an adapter outside of its qualified traffic type is not supported.

Level                    Management Role   Compute Role       Storage Role

Role-based distinction   Management        Compute Standard   Storage Standard

Maximum Award            Not Applicable    Compute Premium    Storage Premium

7 Note

The highest qualification for any adapter in our ecosystem will contain the Management,
Compute Premium, and Storage Premium qualifications.

Driver Requirements
Inbox drivers are not supported for use with Azure Stack HCI. To identify if your adapter is using an
inbox driver, run the following cmdlet. An adapter is using an inbox driver if the DriverProvider
property is Microsoft.

PowerShell

Get-NetAdapter -Name <AdapterName> | Select *Driver*

Overview of key network adapter capabilities


Important network adapter capabilities used by Azure Stack HCI include:

Dynamic Virtual Machine Multi-Queue (Dynamic VMMQ or d.VMMQ)


Remote Direct Memory Access (RDMA)
Guest RDMA
Switch Embedded Teaming (SET)

Dynamic VMMQ
All network adapters with the Compute (Premium) qualification support Dynamic VMMQ. Dynamic
VMMQ requires the use of Switch Embedded Teaming.

Applicable traffic types: compute


Certifications required: Compute (Premium)

Dynamic VMMQ is an intelligent, receive-side technology. It builds upon its predecessors of Virtual
Machine Queue (VMQ), Virtual Receive Side Scaling (vRSS), and VMMQ, to provide three primary
improvements:

Optimizes host efficiency by using fewer CPU cores.


Automatic tuning of network traffic processing to CPU cores, thus enabling VMs to meet and
maintain expected throughput.
Enables “bursty” workloads to receive the expected amount of traffic.

For more information on Dynamic VMMQ, see the blog post Synthetic accelerations .

RDMA
RDMA is a network stack offload to the network adapter. It allows SMB storage traffic to bypass
the operating system for processing.

RDMA enables high-throughput, low-latency networking, using minimal host CPU resources. These
host CPU resources can then be used to run additional VMs or containers.

Applicable traffic types: host storage

Certifications required: Storage (Standard)

All adapters with Storage (Standard) or Storage (Premium) qualification support host-side RDMA.
For more information on using RDMA with guest workloads, see the "Guest RDMA" section later in
this article.

Azure Stack HCI supports RDMA with either the Internet Wide Area RDMA Protocol (iWARP) or
RDMA over Converged Ethernet (RoCE) protocol implementations.
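
To spot-check whether RDMA is currently enabled on a host's adapters, a minimal sketch using the
built-in NetAdapter and SMB cmdlets:

PowerShell

# Show the RDMA state per adapter
Get-NetAdapterRdma | Select-Object Name, Enabled

# Confirm which interfaces the SMB client considers RDMA-capable
Get-SmbClientNetworkInterface | Select-Object FriendlyName, RdmaCapable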

) Important

RDMA adapters only work with other RDMA adapters that implement the same RDMA
protocol (iWARP or RoCE).

Not all network adapters from all vendors support RDMA. The following table lists those vendors (in
alphabetical order) that offer certified RDMA adapters. However, there are hardware vendors not
included in this list that also support RDMA. See the Windows Server Catalog to find adapters
with the Storage (Standard) or Storage (Premium) qualification, which require RDMA support.

7 Note

InfiniBand (IB) is not supported with Azure Stack HCI.

NIC vendor iWARP RoCE



Broadcom No Yes

Intel Yes Yes (some models)

Marvell (Qlogic) Yes Yes

Nvidia No Yes

To deploy RDMA for the host, we highly recommend that you use Network ATC. For information on
manual deployment, see the SDN GitHub repo .

iWARP

iWARP uses Transmission Control Protocol (TCP), and can optionally be enhanced with Priority-
based Flow Control (PFC) and Enhanced Transmission Selection (ETS).

Use iWARP if:

You don't have experience managing RDMA networks.


You don't manage or are uncomfortable managing your top-of-rack (ToR) switches.
You won't be managing the solution after deployment.
You already have deployments that use iWARP.
You're unsure which option to choose.

RoCE

RoCE uses User Datagram Protocol (UDP), and requires PFC and ETS to provide reliability.

Use RoCE if:

You already have deployments with RoCE in your datacenter.


You're comfortable managing the DCB network requirements.

Guest RDMA
Guest RDMA enables SMB workloads for VMs to gain the same benefits of using RDMA on hosts.

Applicable traffic types: Guest-based storage

Certifications required: Compute (Premium)

The primary benefits of using Guest RDMA are:

CPU offload to the NIC for network traffic processing.


Extremely low latency.
High throughput.

For more information, download the document from the SDN GitHub repo .
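
If your environment meets these requirements, the host-side step is a single setting on the VM's
network adapter. A minimal sketch, assuming a placeholder VM name (the guest OS and its virtual
NIC driver must also support RDMA):

PowerShell

# Expose RDMA to the VM's network adapter; requires SET, host RDMA, and guest driver support
Set-VMNetworkAdapterRdma -VMName "VM01" -RdmaWeight 100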
Switch Embedded Teaming (SET)
SET is a software-based teaming technology that has been included in the Windows Server
operating system since Windows Server 2016. SET is the only teaming technology supported by
Azure Stack HCI. SET works well with compute, storage, and management traffic and is supported
with up to eight adapters in the same team.

Applicable traffic types: compute, storage, and management

Certifications required: Compute (Standard) or Compute (Premium)


) Important

Azure Stack HCI doesn’t support NIC teaming with the older Load Balancing/Failover (LBFO).
See the blog post Teaming in Azure Stack HCI for more information on LBFO in Azure Stack
HCI.

SET is important for Azure Stack HCI because it's the only teaming technology that enables:

Teaming of RDMA adapters (if needed).


Guest RDMA.
Dynamic VMMQ.
Other key Azure Stack HCI features (see Teaming in Azure Stack HCI ).

SET requires the use of symmetric (identical) adapters. Symmetric network adapters are those that
have the same:

make (vendor)
model (version)
speed (throughput)
configuration

In 22H2, Network ATC automatically detects and informs you if the adapters you've chosen are
asymmetric. The easiest way to manually check whether adapters are symmetric is to confirm that
the speeds and interface descriptions match exactly; the descriptions can deviate only in the
numeral listed. Use the Get-NetAdapterAdvancedProperty cmdlet to ensure that the reported
configuration lists the same property values.
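
For example, a minimal sketch that compares link speed and advanced properties across candidate
adapters (adapter names are placeholders):

PowerShell

# Compare speed and interface description
Get-NetAdapter -Name "NIC1","NIC2" | Select-Object Name, InterfaceDescription, LinkSpeed

# Compare advanced property values; differences indicate an asymmetric configuration
Get-NetAdapterAdvancedProperty -Name "NIC1","NIC2" |
    Sort-Object RegistryKeyword |
    Format-Table Name, DisplayName, RegistryKeyword, DisplayValue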

See the following table for an example of the interface descriptions deviating only by numeral (#):

Name Interface description Link speed

NIC1 Network Adapter #1 25 Gbps

NIC2 Network Adapter #2 25 Gbps



NIC3 Network Adapter #3 25 Gbps

NIC4 Network Adapter #4 25 Gbps

7 Note

SET supports only switch-independent configuration by using either Dynamic or Hyper-V Port
load-balancing algorithms. For best performance, Hyper-V Port is recommended for use on all
NICs that operate at or above 10 Gbps. Network ATC makes all the required configurations for
SET.
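
Network ATC creates and configures SET for you. If you need to create a SET team manually in a
lab, a minimal sketch using placeholder switch and adapter names:

PowerShell

# Create a SET-enabled virtual switch from two symmetric adapters
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true -AllowManagementOS $true

# Use the Hyper-V Port load-balancing algorithm (recommended at 10 Gbps or above)
Set-VMSwitchTeam -Name "ConvergedSwitch" -LoadBalancingAlgorithm HyperVPort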

RDMA traffic considerations


If you implement DCB, you must ensure that the PFC and ETS configuration is implemented
properly across every network port, including network switches. DCB is required for RoCE and
optional for iWARP.

For detailed information on how to deploy RDMA, download the document from the SDN GitHub
repo .

RoCE-based Azure Stack HCI implementations require the configuration of three PFC traffic classes,
including the default traffic class, across the fabric and all hosts.

Cluster traffic class

This traffic class ensures that there's enough bandwidth reserved for cluster heartbeats:

Required: Yes
PFC-enabled: No
Recommended traffic priority: Priority 7
Recommended bandwidth reservation:
10 GbE or lower RDMA networks = 2 percent
25 GbE or higher RDMA networks = 1 percent

RDMA traffic class


This traffic class ensures that there's enough bandwidth reserved for lossless RDMA
communications by using SMB Direct:

Required: Yes
PFC-enabled: Yes
Recommended traffic priority: Priority 3 or 4
Recommended bandwidth reservation: 50 percent
Default traffic class
This traffic class carries all other traffic not defined in the cluster or RDMA traffic classes, including
VM traffic and management traffic:

Required: By default (no configuration necessary on the host)


Flow control (PFC)-enabled: No
Recommended traffic class: By default (Priority 0)
Recommended bandwidth reservation: By default (no host configuration required)
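
Network ATC applies these traffic classes to the hosts for you. For reference, the following is a
minimal sketch of the equivalent manual host-side DCB configuration for a RoCE deployment,
following the priorities and reservations listed above (adapter names are placeholders, and the
physical switches must be configured to match):

PowerShell

# Tag SMB Direct (RDMA) traffic with priority 3 and cluster heartbeat traffic with priority 7
New-NetQosPolicy -Name "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
New-NetQosPolicy -Name "Cluster" -Cluster -PriorityValue8021Action 7

# Enable PFC only for the RDMA priority
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Reserve bandwidth: 50% for RDMA, 1% for cluster heartbeats (2% on 10 GbE or lower networks)
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
New-NetQosTrafficClass -Name "Cluster" -Priority 7 -BandwidthPercentage 1 -Algorithm ETS

# Apply DCB to the storage adapters
Enable-NetAdapterQos -Name "NIC1","NIC2"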

Storage traffic models


SMB provides many benefits as the storage protocol for Azure Stack HCI, including SMB
Multichannel. SMB Multichannel isn't covered in this article, but it's important to understand that
traffic is multiplexed across every possible link that SMB Multichannel can use.

7 Note

We recommend using multiple subnets and VLANs to separate storage traffic in Azure Stack
HCI.

Consider the following example of a four-node cluster. Each server has two storage ports (left and
right side). Because each adapter is on the same subnet and VLAN, SMB Multichannel spreads
connections across all available links. Therefore, the left-side port on the first server (192.168.1.1)
makes a connection to the left-side port on the second server (192.168.1.2). The right-side port
on the first server (192.168.1.12) connects to the right-side port on the second server. Similar
connections are established for the third and fourth servers.

However, this creates unnecessary connections and causes congestion at the interlink (multi-
chassis link aggregation group or MC-LAG) that connects the ToR switches (marked with Xs). See
the following diagram:

The recommended approach is to use separate subnets and VLANs for each set of adapters. In the
following diagram, the right-hand ports now use subnet 192.168.2.x /24 and VLAN2. This allows
traffic on the left-side ports to remain on TOR1 and the traffic on the right-side ports to remain on
TOR2.
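
If you configure host networking manually instead of using Network ATC, a minimal sketch of
placing two storage host vNICs on separate VLANs and subnets; the vNIC names are placeholders,
the VLAN IDs 711 and 712 are the Network ATC defaults, and the addresses follow this example:

PowerShell

# Tag each storage vNIC with its own VLAN
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "vSMB01" -Access -VlanId 711
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "vSMB02" -Access -VlanId 712

# Assign each storage vNIC an address on its own subnet
New-NetIPAddress -InterfaceAlias "vEthernet (vSMB01)" -IPAddress 192.168.1.1 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (vSMB02)" -IPAddress 192.168.2.1 -PrefixLength 24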

Traffic bandwidth allocation


The following table shows example bandwidth allocations of various traffic types, using common
adapter speeds, in Azure Stack HCI. Note that this is an example of a converged solution, where all
traffic types (compute, storage, and management) run over the same physical adapters, and are
teamed by using SET.

Because this use case poses the most constraints, it represents a good baseline. However,
considering the permutations for the number of adapters and speeds, this should be considered an
example and not a support requirement.

The following assumptions are made for this example:

There are two adapters per team.

Storage Bus Layer (SBL), Cluster Shared Volume (CSV), and Hyper-V (Live Migration) traffic:
Use the same physical adapters.
Use SMB.

SMB is given a 50 percent bandwidth allocation by using DCB.


SBL/CSV is the highest priority traffic, and receives 70 percent of the SMB bandwidth
reservation.
Live Migration (LM) is limited by using the Set-SMBBandwidthLimit cmdlet, and receives 29
percent of the SMB bandwidth reservation.

If the available bandwidth for Live Migration is >= 5 Gbps, and the network adapters
are capable, use RDMA. Use the following cmdlet to do so:

Powershell

Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

If the available bandwidth for Live Migration is < 5 Gbps, use compression to reduce
blackout times. Use the following cmdlet to do so:

Powershell

Set-VMHost -VirtualMachineMigrationPerformanceOption Compression

If you're using RDMA for Live Migration traffic, ensure that Live Migration traffic can't
consume the entire bandwidth allocated to the RDMA traffic class by using an SMB
bandwidth limit. Be careful, because this cmdlet takes entry in bytes per second (Bps),
whereas network adapters are listed in bits per second (bps). Use the following cmdlet to set
a bandwidth limit of 6 Gbps, for example:

Powershell

Set-SMBBandwidthLimit -Category LiveMigration -BytesPerSecond 750MB

7 Note

750 MBps in this example equates to 6 Gbps.

Here is the example bandwidth allocation table:

| NIC speed | Teamed bandwidth | SMB bandwidth reservation** | SBL/CSV % | SBL/CSV bandwidth | Live Migration % | Max Live Migration bandwidth | Heartbeat % | Heartbeat bandwidth |
| 10 Gbps | 20 Gbps | 10 Gbps | 70% | 7 Gbps | * | * | 2% | 200 Mbps |
| 25 Gbps | 50 Gbps | 25 Gbps | 70% | 17.5 Gbps | 29% | 7.25 Gbps | 1% | 250 Mbps |
| 40 Gbps | 80 Gbps | 40 Gbps | 70% | 28 Gbps | 29% | 11.6 Gbps | 1% | 400 Mbps |
| 50 Gbps | 100 Gbps | 50 Gbps | 70% | 35 Gbps | 29% | 14.5 Gbps | 1% | 500 Mbps |
| 100 Gbps | 200 Gbps | 100 Gbps | 70% | 70 Gbps | 29% | 29 Gbps | 1% | 1 Gbps |
| 200 Gbps | 400 Gbps | 200 Gbps | 70% | 140 Gbps | 29% | 58 Gbps | 1% | 2 Gbps |

* Use compression rather than RDMA, because the bandwidth allocation for Live Migration traffic is
<5 Gbps.

** 50 percent is an example bandwidth reservation.

Stretched clusters
Stretched clusters provide disaster recovery that spans multiple datacenters. In its simplest form, a
stretched Azure Stack HCI cluster network looks like this:

Stretched cluster requirements


Stretched clusters have the following requirements and characteristics:

RDMA is limited to a single site, and isn't supported across different sites or subnets.

Servers in the same site must reside in the same rack and Layer-2 boundary.
Host communication between sites must cross a Layer-3 boundary; stretched Layer-2
topologies aren't supported.

Have enough bandwidth to run the workloads at the other site. In the event of a failover, the
alternate site will need to run all traffic. We recommend that you provision sites at 50 percent
of their available network capacity. This isn't a requirement, however, if you are able to
tolerate lower performance during a failover.

Replication between sites (north/south traffic) can use the same physical NICs as the local
storage (east/west traffic). If you're using the same physical adapters, these adapters must be
teamed with SET. The adapters must also have additional virtual NICs provisioned for
routable traffic between sites.

Adapters used for communication between sites:

Can be physical or virtual (host vNIC). If adapters are virtual, you must provision one vNIC
in its own subnet and VLAN per physical NIC.

Must be on their own subnet and VLAN that can route between sites.

Must have RDMA disabled by using the Disable-NetAdapterRdma cmdlet. We also recommend
that you explicitly require Storage Replica to use specific interfaces by using the
Set-SRNetworkConstraint cmdlet (see the sketch after this list).

Must meet any additional requirements for Storage Replica.
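
A minimal sketch of these two steps, using the vNIC names from the example that follows; the
Set-SRNetworkConstraint values (computer names, replication group names, and interfaces) are
placeholders that you'd adjust to your Storage Replica configuration:

PowerShell

# Disable RDMA on the routable cross-site vNICs (names match the example below)
Disable-NetAdapterRdma -Name "Stretch1","Stretch2"

# Optionally constrain Storage Replica to the cross-site interfaces; group names and interface
# values are placeholders - see the Storage Replica documentation for the values it expects
Set-SRNetworkConstraint -SourceComputerName "NodeA1" -SourceRGName "RG01" -SourceNWInterface "Stretch1","Stretch2" -DestinationComputerName "NodeB1" -DestinationRGName "RG02" -DestinationNWInterface "Stretch1","Stretch2"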

Stretched cluster example


The following example illustrates a stretched cluster configuration. To ensure that a specific virtual
NIC is mapped to a specific physical adapter, use the Set-VMNetworkAdapterTeamMapping cmdlet.

The following shows the details for the example stretched cluster configuration.

7 Note

Your exact configuration, including NIC names, IP addresses, and VLANs, might be different
than what is shown. This is used only as a reference configuration that can be adapted to your
environment.
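
A minimal sketch of mapping the storage vNICs from this example to their physical adapters:

PowerShell

# Map each storage vNIC to its intended physical adapter
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "vSMB01" -PhysicalNetAdapterName "pNIC01"
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "vSMB02" -PhysicalNetAdapterName "pNIC02"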

SiteA – Local replication, RDMA enabled, non-routable between sites

Node name vNIC name Physical NIC (mapped) VLAN IP and subnet Traffic scope

NodeA1 vSMB01 pNIC01 711 192.168.1.1/24 Local Site Only

NodeA2 vSMB01 pNIC01 711 192.168.1.2/24 Local Site Only

NodeA1 vSMB02 pNIC02 712 192.168.2.1/24 Local Site Only

NodeA2 vSMB02 pNIC02 712 192.168.2.2/24 Local Site Only


SiteB – Local replication, RDMA enabled, non-routable between sites

Node name vNIC name Physical NIC (mapped) VLAN IP and subnet Traffic scope

NodeB1 vSMB01 pNIC01 711 192.168.1.1/24 Local Site Only

NodeB2 vSMB01 pNIC01 711 192.168.1.2/24 Local Site Only

NodeB1 vSMB02 pNIC02 712 192.168.2.1/24 Local Site Only

NodeB2 vSMB02 pNIC02 712 192.168.2.2/24 Local Site Only

SiteA – Stretched replication, RDMA disabled, routable between sites

Node name vNIC name Physical NIC (mapped) IP and subnet Traffic scope

NodeA1 Stretch1 pNIC01 173.0.0.1/8 Cross-Site Routable

NodeA2 Stretch1 pNIC01 173.0.0.2/8 Cross-Site Routable

NodeA1 Stretch2 pNIC02 174.0.0.1/8 Cross-Site Routable

NodeA2 Stretch2 pNIC02 174.0.0.2/8 Cross-Site Routable

SiteB – Stretched replication, RDMA disabled, routable between sites

Node name vNIC name Physical NIC (mapped) IP and subnet Traffic scope

NodeB1 Stretch1 pNIC01 175.0.0.1/8 Cross-Site Routable

NodeB2 Stretch1 pNIC01 175.0.0.2/8 Cross-Site Routable

NodeB1 Stretch2 pNIC02 176.0.0.1/8 Cross-Site Routable

NodeB2 Stretch2 pNIC02 176.0.0.2/8 Cross-Site Routable

Next steps
Learn about network switch and physical network requirements. See Physical network
requirements.
Learn how to simplify host networking using Network ATC. See Simplify host networking with
Network ATC.
Brush up on failover clustering networking basics .
For deployment, see Create a cluster using Windows Admin Center.
For deployment, see Create a cluster using Windows PowerShell.
Firewall requirements for Azure Stack
HCI
Article • 06/28/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2

This article provides guidance on how to configure firewalls for the Azure Stack HCI
operating system. It includes firewall requirements for outbound endpoints and internal
rules and ports. The article also provides information on how to use Azure service tags
with Microsoft Defender firewall.

If your network uses a proxy server for internet access, see Configure proxy settings for
Azure Stack HCI.

Firewall requirements for outbound endpoints


Opening port 443 for outbound network traffic on your organization's firewall meets the
connectivity requirements for the operating system to connect with Azure and Microsoft
Update. If your outbound firewall is restricted, then we recommend including the URLs
and ports described in the Recommended firewall URLs section of this article.

Azure Stack HCI needs to periodically connect to Azure. Access is limited only to:

Well-known Azure IPs


Outbound direction
Port 443 (HTTPS)

) Important

Azure Stack HCI doesn’t support HTTPS inspection. Make sure that HTTPS
inspection is disabled along your networking path for Azure Stack HCI to prevent
any connectivity errors.

As shown in the following diagram, Azure Stack HCI potentially accesses Azure through more than
one firewall.

This article describes how to optionally use a highly locked-down firewall configuration
to block all traffic to all destinations except those included in your allowlist.

Required firewall URLs


The following table provides a list of required firewall URLs. Make sure to include these
URLs in your allowlist.

7 Note

The Azure Stack HCI firewall rules are the minimum endpoints required for HciSvc
connectivity, and don't contain wildcards. However, the following table currently
contains wildcard URLs, which may be updated into precise endpoints in the future.

| Service | URL | Port | Notes |
| Azure Stack HCI | login.microsoftonline.com | 443 | For Active Directory Authority and used for authentication, token fetch, and validation. |
| Azure Stack HCI | graph.windows.net | 443 | For Graph and used for authentication, token fetch, and validation. |
| Azure Stack HCI | management.azure.com | 443 | For Resource Manager and used during initial bootstrapping of the cluster to Azure for registration purposes and to unregister the cluster. |
| Azure Stack HCI | dp.stackhci.azure.com | 443 | For Dataplane that pushes up diagnostics data; used in the Portal pipeline and pushes billing data. |
| Azure Stack HCI | azurestackhci.azurefd.net | 443 | Previous URL for Dataplane. This URL was recently changed; customers who registered their cluster using this old URL must allowlist it as well. |
| Arc For Servers | aka.ms | 443 | For resolving the download script during installation. |
| Arc For Servers | download.microsoft.com | 443 | For downloading the Windows installation package. |
| Arc For Servers | login.windows.net | 443 | For Azure Active Directory. |
| Arc For Servers | login.microsoftonline.com | 443 | For Azure Active Directory. |
| Arc For Servers | pas.windows.net | 443 | For Azure Active Directory. |
| Arc For Servers | management.azure.com | 443 | For Azure Resource Manager to create or delete the Arc Server resource. |
| Arc For Servers | guestnotificationservice.azure.com | 443 | For the notification service for extension and connectivity scenarios. |
| Arc For Servers | *.his.arc.azure.com | 443 | For metadata and hybrid identity services. |
| Arc For Servers | *.guestconfiguration.azure.com | 443 | For extension management and guest configuration services. |
| Arc For Servers | *.guestnotificationservice.azure.com | 443 | For the notification service for extension and connectivity scenarios. |
| Arc For Servers | azgn*.servicebus.windows.net | 443 | For the notification service for extension and connectivity scenarios. |
| Arc For Servers | *.servicebus.windows.net | 443 | For Windows Admin Center and SSH scenarios. |
| Arc For Servers | *.waconazure.com | 443 | For Windows Admin Center connectivity. |
| Arc For Servers | *.blob.core.windows.net | 443 | For the download source for Azure Arc-enabled servers extensions. |
For a comprehensive list of all the firewall URLs, download the firewall URLs
spreadsheet .
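
As a quick spot check from a cluster node, a minimal sketch that tests outbound HTTPS reachability
to a sample of the required endpoints (extend the list to match your allowlist):

PowerShell

# Test outbound 443 connectivity to a sample of required endpoints
$endpoints = "login.microsoftonline.com", "management.azure.com", "dp.stackhci.azure.com"
foreach ($endpoint in $endpoints) {
    Test-NetConnection -ComputerName $endpoint -Port 443 |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}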

Recommended firewall URLs


The following table provides a list of recommended firewall URLs. If your outbound
firewall is restricted, we recommend including the URLs and ports described in this
section in your allowlist.

7 Note

The Azure Stack HCI firewall rules are the minimum endpoints required for HciSvc
connectivity, and don't contain wildcards. However, the following table currently
contains wildcard URLs, which may be updated into precise endpoints in the future.

| Service | URL | Port | Notes |
| Azure Benefits on Azure Stack HCI | crl3.digicert.com | 80 | Enables the platform attestation service on Azure Stack HCI to perform a certificate revocation list check to provide assurance that VMs are indeed running on Azure environments. |
| Azure Benefits on Azure Stack HCI | crl4.digicert.com | 80 | Enables the platform attestation service on Azure Stack HCI to perform a certificate revocation list check to provide assurance that VMs are indeed running on Azure environments. |
| Azure Stack HCI | *.powershellgallery.com | 443 | To obtain the Az.StackHCI PowerShell module, which is required for cluster registration. Alternatively, you can download and install the Az.StackHCI PowerShell module manually from PowerShell Gallery . |
| Cluster Cloud Witness | *.blob.core.windows.net | 443 | For firewall access to the Azure blob container, if choosing to use a cloud witness as the cluster witness, which is optional. |
| Microsoft Update | windowsupdate.microsoft.com | 80 | For Microsoft Update, which allows the OS to receive updates. |
| Microsoft Update | download.windowsupdate.com | 80 | For Microsoft Update, which allows the OS to receive updates. |
| Microsoft Update | *.download.windowsupdate.com | 80 | For Microsoft Update, which allows the OS to receive updates. |
| Microsoft Update | download.microsoft.com | 443 | For Microsoft Update, which allows the OS to receive updates. |
| Microsoft Update | wustat.windows.com | 80 | For Microsoft Update, which allows the OS to receive updates. |
| Microsoft Update | ntservicepack.microsoft.com | 80 | For Microsoft Update, which allows the OS to receive updates. |
| Microsoft Update | go.microsoft.com | 80 | For Microsoft Update, which allows the OS to receive updates. |
| Microsoft Update | dl.delivery.mp.microsoft.com | 80, 443 | For Microsoft Update, which allows the OS to receive updates. |
| Microsoft Update | *.delivery.mp.microsoft.com | 80, 443 | For Microsoft Update, which allows the OS to receive updates. |
| Microsoft Update | *.windowsupdate.microsoft.com | 80, 443 | For Microsoft Update, which allows the OS to receive updates. |
| Microsoft Update | *.windowsupdate.com | 80 | For Microsoft Update, which allows the OS to receive updates. |
| Microsoft Update | *.update.microsoft.com | 80, 443 | For Microsoft Update, which allows the OS to receive updates. |

Firewall requirements for additional Azure services

Depending on additional Azure services you enable on HCI, you may need to make additional
firewall configuration changes. Refer to the following links for information on firewall
requirements for each Azure service:
AKS on Azure Stack HCI
Azure Arc-enabled servers
Azure Arc VM management
Azure Monitor Agent
Azure portal
Azure Site Recovery
Azure Virtual Desktop
Microsoft Defender
Microsoft Monitoring Agent (MMA) and Log Analytics Agent
Qualys
Remote support
Windows Admin Center
Windows Admin Center in Azure portal

Firewall requirements for internal rules and ports

Ensure that the proper network ports are open between all server nodes both within a
site and between sites (for stretched clusters). You'll need appropriate firewall rules to
allow ICMP, SMB (port 445, plus port 5445 for SMB Direct if using iWARP RDMA), and
WS-MAN (port 5985) bi-directional traffic between all servers in the cluster.
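
A minimal sketch to verify the TCP portion of this connectivity from one cluster node to another
(the node name is a placeholder; ICMP and UDP checks require other tooling, and port 5445 applies
only when using iWARP SMB Direct):

PowerShell

# From one node, verify SMB, SMB Direct (iWARP), and WinRM reachability to another node
$ports = 445, 5445, 5985
foreach ($port in $ports) {
    Test-NetConnection -ComputerName "Node02" -Port $port |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}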

When using the Cluster Creation wizard in Windows Admin Center to create the cluster,
the wizard automatically opens the appropriate firewall ports on each server in the
cluster for Failover Clustering, Hyper-V, and Storage Replica. If you're using a different
firewall on each server, open the ports as described in the following sections:

Azure Stack HCI OS management


Ensure that the following firewall rules are configured in your on-premises firewall for
Azure Stack HCI OS management, including licensing and billing.

| Rule | Action | Source | Destination | Service | Ports |
| Allow inbound/outbound traffic to and from the Azure Stack HCI service on cluster servers | Allow | Cluster servers | Cluster servers | TCP | 30301 |

Windows Admin Center


Ensure that the following firewall rules are configured in your on-premises firewall for
Windows Admin Center.

| Rule | Action | Source | Destination | Service | Ports |
| Provide access to Azure and Microsoft Update | Allow | Windows Admin Center | Azure Stack HCI | TCP | 445 |
| Use Windows Remote Management (WinRM) 2.0 for HTTP connections to run commands on remote Windows servers | Allow | Windows Admin Center | Azure Stack HCI | TCP | 5985 |
| Use WinRM 2.0 for HTTPS connections to run commands on remote Windows servers | Allow | Windows Admin Center | Azure Stack HCI | TCP | 5986 |

7 Note

While installing Windows Admin Center, if you select the Use WinRM over HTTPS
only setting, then port 5986 is required.

Failover Clustering
Ensure that the following firewall rules are configured in your on-premises firewall for
Failover Clustering.

| Rule | Action | Source | Destination | Service | Ports |
| Allow Failover Cluster validation | Allow | Management system | Cluster servers | TCP | 445 |
| Allow RPC dynamic port allocation | Allow | Management system | Cluster servers | TCP | Minimum of 100 ports above port 5000 |
| Allow Remote Procedure Call (RPC) | Allow | Management system | Cluster servers | TCP | 135 |
| Allow Cluster Administrator | Allow | Management system | Cluster servers | UDP | 137 |
| Allow Cluster Service | Allow | Management system | Cluster servers | UDP | 3343 |
| Allow Cluster Service (required during a server join operation) | Allow | Management system | Cluster servers | TCP | 3343 |
| Allow ICMPv4 and ICMPv6 for Failover Cluster validation | Allow | Management system | Cluster servers | n/a | n/a |

7 Note

The management system includes any computer from which you plan to administer
the cluster, using tools such as Windows Admin Center, Windows PowerShell or
System Center Virtual Machine Manager.

Hyper-V
Ensure that the following firewall rules are configured in your on-premises firewall for
Hyper-V.

| Rule | Action | Source | Destination | Service | Ports |
| Allow cluster communication | Allow | Management system | Hyper-V server | TCP | 445 |
| Allow RPC Endpoint Mapper and WMI | Allow | Management system | Hyper-V server | TCP | 135 |
| Allow HTTP connectivity | Allow | Management system | Hyper-V server | TCP | 80 |
| Allow HTTPS connectivity | Allow | Management system | Hyper-V server | TCP | 443 |
| Allow Live Migration | Allow | Management system | Hyper-V server | TCP | 6600 |
| Allow VM Management Service | Allow | Management system | Hyper-V server | TCP | 2179 |
| Allow RPC dynamic port allocation | Allow | Management system | Hyper-V server | TCP | Minimum of 100 ports above port 5000 |
7 Note

Open up a range of ports above port 5000 to allow RPC dynamic port allocation.
Ports below 5000 may already be in use by other applications and could cause
conflicts with DCOM applications. Previous experience shows that a minimum of
100 ports should be opened, because several system services rely on these RPC
ports to communicate with each other. For more information, see How to
configure RPC dynamic port allocation to work with firewalls.

Storage Replica (stretched cluster)


Ensure that the following firewall rules are configured in your on-premises firewall for
Storage Replica (stretched cluster).

| Rule | Action | Source | Destination | Service | Ports |
| Allow Server Message Block (SMB) protocol | Allow | Stretched cluster servers | Stretched cluster servers | TCP | 445 |
| Allow Web Services-Management (WS-MAN) | Allow | Stretched cluster servers | Stretched cluster servers | TCP | 5985 |
| Allow ICMPv4 and ICMPv6 (if using the Test-SRTopology PowerShell cmdlet) | Allow | Stretched cluster servers | Stretched cluster servers | n/a | n/a |

Update Microsoft Defender firewall


This section shows how to configure Microsoft Defender firewall to allow IP addresses
associated with a service tag to connect with the operating system. A service tag
represents a group of IP addresses from a given Azure service. Microsoft manages the IP
addresses included in the service tag, and automatically updates the service tag as IP
addresses change to keep updates to a minimum. To learn more, see Virtual network
service tags.

1. Download the JSON file from the following resource to the target computer
running the operating system: Azure IP Ranges and Service Tags – Public Cloud .

2. Use the following PowerShell command to open the JSON file:


PowerShell

$json = Get-Content -Path .\ServiceTags_Public_20201012.json | ConvertFrom-Json

3. Get the list of IP address ranges for a given service tag, such as the
"AzureResourceManager" service tag:

PowerShell

$IpList = ($json.values | where Name -Eq "AzureResourceManager").properties.addressPrefixes

4. Import the list of IP addresses to your external corporate firewall, if you're using an
allowlist with it.

5. Create a firewall rule for each server in the cluster to allow outbound 443 (HTTPS)
traffic to the list of IP address ranges:

PowerShell

New-NetFirewallRule -DisplayName "Allow Azure Resource Manager" -RemoteAddress $IpList -Direction Outbound -RemotePort 443 -Protocol TCP -Action Allow -Profile Any -Enabled True

Next steps
For more information, see also:

The Windows Firewall and WinRM 2.0 ports section of Installation and
configuration for Windows Remote Management
Network reference patterns overview
for Azure Stack HCI
Article • 12/12/2022

Applies to: Azure Stack HCI, versions 22H2 and 21H2

In this article, you'll gain an overview of deploying network reference patterns on Azure Stack HCI.

A deployment consists of single-server or two-node systems that connect to one or two
Top of Rack (TOR) switches. This environment has the following characteristics:

Two storage ports dedicated for storage traffic intent. The RDMA NIC is optional
for single-server deployments.

One or two ports dedicated to management and compute traffic intents.

One optional Baseboard Management Controller (BMC) for OOB management.

A single-server deployment features a single TOR switch for northbound/southbound
(internal-external) traffic. Two-node deployments consist of either a storage switchless
configuration using one or two TOR switches, or a storage switched configuration using
two TOR switches with either non-converged or fully converged host network adapters.

Switchless advantages and disadvantages


The following highlights some advantages and disadvantages of using switchless
configurations:

No switch is necessary for in-cluster (East-West) traffic; however, a switch is required for
traffic outside the cluster (North-South). This may result in lower capital expenditure
(CAPEX) costs, but is dependent on the number of nodes in the cluster.

If switchless is used, configuration is limited to the host, which may reduce the potential
number of configuration steps needed. However, this value diminishes as the cluster size
increases.

Switchless has the lowest level of resiliency, and it comes with extra complexity and
planning if it needs to be scaled up after the initial deployment. Storage connectivity must
be enabled when adding the second node, which requires you to define the physical
connectivity needed between nodes.

More planning is required for IP and subnet addressing schemes.

Storage adapters are single-purpose interfaces. Management, compute, stretched cluster,
and other traffic requiring North-South communication can't use these adapters.

As the number of nodes in the cluster grows beyond two nodes, the cost of network
adapters could exceed the cost of using network switches.

Beyond a three-node cluster, cable management complexity grows.

Cluster expansion beyond two nodes is complex, potentially requiring per-node hardware
and software configuration changes.

For more information, see Physical network requirements for Azure Stack HCI.

Firewall requirements
Azure Stack HCI requires periodic connectivity to Azure. If your organization's outbound
firewall is restricted, you would need to include firewall requirements for outbound
endpoints and internal rules and ports. There are required and recommended endpoints
for the Azure Stack HCI core components, which include cluster creation, registration
and billing, Microsoft Update, and cloud cluster witness.

See the firewall requirements for a complete list of endpoints. Make sure to include
these URLs in your allowlist. Proper network ports need to be opened between all
server nodes both within a site and between sites (for stretched clusters).

With Azure Stack HCI, the connectivity validator of the Environment Checker tool checks
the outbound connectivity requirements by default during deployment. Additionally, you
can run the Environment Checker tool standalone before, during, or after deployment to
evaluate the outbound connectivity of your environment.

A best practice is to have all relevant endpoints in a data file that can be accessed by the
environment checker tool. The same file can also be shared with your firewall
administrator to open up the necessary ports and URLs.

For more information, see Firewall requirements.

Next steps
Choose a network pattern to review.
Azure Stack HCI network deployment
patterns
Article • 05/31/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2

This article describes a set of network reference patterns to architect, deploy, and
configure Azure Stack HCI using either one or two physical hosts. Depending on your
needs or scenarios, you can go directly to your pattern of interest. Each pattern is
described as a standalone entity and includes all the network components for specific
scenarios.

Choose a network reference pattern


Use the following table to directly go to a pattern and its content.

Single-server deployment pattern


Go to single server deployment

Two-node deployment patterns

Go to storage switchless, single TOR switch
Go to storage switchless, two TOR switches
Go to storage switched, non-converged, two TOR switches
Go to storage switched, fully converged, two TOR switches

Next steps
Download Azure Stack HCI
Review single-server storage
deployment network reference pattern
for Azure Stack HCI
Article • 12/12/2022

Applies to: Azure Stack HCI, versions 22H2 and 21H2

In this article, you'll learn about the single-server storage network reference pattern that
you can use to deploy your Azure Stack HCI solution. The information in this article will
also help you determine if this configuration is viable for your deployment planning
needs. This article is targeted towards the IT administrators who deploy and manage
Azure Stack HCI in their datacenters.

For information on other network patterns, see Azure Stack HCI network deployment
patterns.

Introduction
Single-server deployments provide cost and space benefits while helping to modernize
your infrastructure and bring Azure hybrid computing to locations that can tolerate the
resiliency of a single server. Azure Stack HCI running on a single-server behaves similarly
to Azure Stack HCI on a multi-node cluster: it brings native Azure Arc integration, the
ability to add servers to scale out the cluster, and it includes the same Azure benefits.

It also supports the same workloads, such as Azure Virtual Desktop (AVD) and AKS
hybrid, and is supported and billed the same way.

Scenarios
Use the single-server storage pattern in the following scenarios:

Facilities that can tolerate lower level of resiliency. Consider implementing this
pattern whenever your location or service provided by this pattern can tolerate a
lower level of resiliency without impacting your business.

Food, healthcare, finance, retail, government facilities. Some food, healthcare,


finance, and retail scenarios can apply this option to minimize their costs without
impacting core operations and business transactions.
Although Software Defined Networking (SDN) Layer 3 (L3) services are fully supported
on this pattern, routing services such as Border Gateway Protocol (BGP) may need to be
configured for the firewall device on the top-of-rack (TOR) switch.

Network security features such as microsegmentation and Quality of Service (QoS) don't
require extra configuration for the firewall device, as they're implemented at the virtual
network adapter layer. For more information, see Microsegmentation with Azure Stack
HCI .

Physical connectivity components


As illustrated in the diagram below, this pattern has the following physical network
components:

For northbound/southbound traffic, the Azure Stack HCI cluster is implemented


using a single TOR L2 or L3 switch.
Two teamed network ports to handle the management and compute traffic
connected to the switch.
Two disconnected RDMA NICs that are only used if you add a second server to your
cluster for scale-out. This means no increased costs for cabling or physical switch
ports.
(Optional) A BMC card can be used to enable remote management of your
environment. For security purposes, some solutions might use a headless
configuration without the BMC card.

The following table lists some guidelines for a single-server deployment:

| Network | Management & compute | Storage | BMC |
| Link speed | At least 1 Gbps; 10 Gbps recommended. | At least 1 Gbps; 10 Gbps recommended. | Check with hardware manufacturer. |
| Interface type | RJ45, SFP+, or SFP28 | SFP+ or SFP28 | RJ45 |
| Ports and aggregation | Two teamed ports | Optional to allow adding a second server; disconnected ports. | One port |
| RDMA | Optional. Depends on requirements for guest RDMA and NIC support. | N/A | N/A |

Network ATC intents


The single-server pattern uses only one Network ATC intent for management and
compute traffic. The RDMA network interfaces are optional and disconnected.

Management and compute intent


The management and compute intent has the following characteristics:

Intent type: Management and compute


Intent mode: Cluster mode
Teaming: Yes - pNIC01 and pNIC02 are teamed
Default management VLAN: Configured VLAN for management adapters is unmodified
PA VLAN and vNICs: Network ATC is transparent to PA vNICs and VLANs
Compute VLANs and vNICs: Network ATC is transparent to compute VM vNICs and
VLANs

Storage intent
The storage intent has the following characteristics:

Intent type: None


Intent mode: None
Teaming: pNIC03 and pNIC04 are disconnected
Default VLANs: None
Default subnets: None

Follow these steps to create a network intent for this reference pattern:

1. Run PowerShell as Administrator.

2. Run the following command:

PowerShell

Add-NetIntent -Name <management_compute> -Management -Compute -ClusterName <HCI01> -AdapterName <pNIC01, pNIC02>

For more information, see Deploy host networking: Compute and management intent.

Logical network components


As illustrated in the diagram below, this pattern has the following logical network
components:

Storage network VLANs


Optional - this pattern doesn't require a storage network.

OOB network
The Out of Band (OOB) network is dedicated to supporting the "lights-out" server
management interface also known as the baseboard management controller (BMC).
Each BMC interface connects to a customer-supplied switch. The BMC is used to
automate PXE boot scenarios.
The management network requires access to the BMC interface using Intelligent
Platform Management Interface (IPMI) User Datagram Protocol (UDP) port 623.

The OOB network is isolated from compute workloads and is optional for non-solution-
based deployments.

Management VLAN
All physical compute hosts require access to the management logical network. For IP
address planning, each physical compute host must have at least one IP address
assigned from the management logical network.

A DHCP server can automatically assign IP addresses for the management network, or
you can manually assign static IP addresses. When DHCP is the preferred IP assignment
method, we recommend that you use DHCP reservations without expiration.

The management network supports the following VLAN configurations:

Native VLAN - you aren't required to supply VLAN IDs. This is required for
solution-based installations.

Tagged VLAN - you supply VLAN IDs at the time of deployment.



The management network supports all traffic used for management of the cluster,
including Remote Desktop, Windows Admin Center, and Active Directory.

For more information, see Plan an SDN infrastructure: Management and HNV Provider.

Compute VLANs
In some scenarios, you don’t need to use SDN Virtual Networks with Virtual Extensible
LAN (VXLAN) encapsulation. Instead, you can use traditional VLANs to isolate your
tenant workloads. Those VLANs are configured on the TOR switch's port in trunk mode.
When connecting new VMs to these VLANs, the corresponding VLAN tag is defined on
the virtual network adapter.
HNV Provider Address (PA) network
The Hyper-V Network Virtualization (HNV) Provider Address (PA) network serves as the
underlying physical network for East/West (internal-internal) tenant traffic, North/South
(external-internal) tenant traffic, and to exchange BGP peering information with the
physical network. This network is only required when there's a need for deploying virtual
networks using VXLAN encapsulation for another layer of isolation and for network
multitenancy.

For more information, see Plan an SDN infrastructure: Management and HNV Provider.

Network isolation options


The following network isolation options are supported:

VLANs (IEEE 802.1Q)


VLANs allow devices that must be kept separate to share the cabling of a physical
network and yet be prevented from directly interacting with one another. This managed
sharing yields gains in simplicity, security, traffic management, and economy. For
example, a VLAN can be used to separate traffic within a business based on individual
users or groups of users or their roles, or based on traffic characteristics. Many internet
hosting services use VLANs to separate private zones from one another, allowing each
customer's servers to be grouped in a single network segment no matter where the
individual servers are located in the data center. Some precautions are needed to
prevent traffic "escaping" from a given VLAN, an exploit known as VLAN hopping.

For more information, see Understand the usage of virtual networks and VLANs.

Default network access policies and microsegmentation


Default network access policies ensure that all virtual machines (VMs) in your Azure
Stack HCI cluster are secure by default from external threats. With these policies, we'll
block inbound access to a VM by default, while giving the option to enable selective
inbound ports and thus securing the VMs from external attacks. This enforcement is
available through management tools like Windows Admin Center.

Microsegmentation involves creating granular network policies between applications


and services. This essentially reduces the security perimeter to a fence around each
application or VM. This fence permits only necessary communication between
application tiers or other logical boundaries, thus making it exceedingly difficult for
cyberthreats to spread laterally from one system to another. Microsegmentation
securely isolates networks from each other and reduces the total attack surface of a
network security incident.

Default network access policies and microsegmentation are realized as five-tuple


stateful (source address prefix, source port, destination address prefix, destination port,
and protocol) firewall rules on Azure Stack HCI clusters. Firewall rules are also known as
Network Security Groups (NSGs). These policies are enforced at the vSwitch port of each
VM. The policies are pushed through the management layer, and the SDN Network
Controller distributes them to all applicable hosts. These policies are available for VMs
on traditional VLAN networks and on SDN overlay networks.

For more information, see What is Datacenter Firewall?.


QoS for VM network adapters


You can configure Quality of Service (QoS) for a VM network adapter to limit bandwidth
on a virtual interface to prevent a high-traffic VM from contending with other VM
network traffic. You can also configure QoS to reserve a specific amount of bandwidth
for a VM to ensure that the VM can send traffic regardless of other traffic on the
network. This can be applied to VMs attached to traditional VLAN networks as well as
VMs attached to SDN overlay networks.

For more information, see Configure QoS for a VM network adapter.
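
A minimal sketch of both options for VMs attached to the virtual switch, using placeholder VM
names; bandwidth values are in bits per second, and the absolute reservation requires the switch's
minimum bandwidth mode to be Absolute:

PowerShell

# Cap a high-traffic VM at roughly 1 Gbps
Set-VMNetworkAdapter -VMName "VM01" -MaximumBandwidth 1000000000

# Guarantee another VM roughly 500 Mbps regardless of other traffic on the switch
Set-VMNetworkAdapter -VMName "VM02" -MinimumBandwidthAbsolute 500000000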

Virtual networks
Network virtualization provides virtual networks to VMs similar to how server
virtualization (hypervisor) provides VMs to the operating system. Network virtualization
decouples virtual networks from the physical network infrastructure and removes the
constraints of VLAN and hierarchical IP address assignment from VM provisioning. Such
flexibility makes it easy for you to move to Infrastructure-as-a-Service (IaaS) clouds, and
enables hosters and datacenter administrators to manage their infrastructure efficiently
while maintaining the necessary multi-tenant isolation and security requirements, and
supporting overlapping VM IP addresses.

For more information, see Hyper-V Network Virtualization.

L3 networking services options


The following L3 networking service options are available:
Virtual network peering
Virtual network peering lets you connect two virtual networks seamlessly. Once peered,
for connectivity purposes, the virtual networks appear as one. The benefits of using
virtual network peering include:

Traffic between VMs in the peered virtual networks gets routed through the
backbone infrastructure through private IP addresses only. The communication
between the virtual networks doesn't require public Internet or gateways.
A low-latency, high-bandwidth connection between resources in different virtual
networks.
The ability for resources in one virtual network to communicate with resources in a
different virtual network.
No downtime to resources in either virtual network when creating the peering.

For more information, see Virtual network peering.

SDN software load balancer


Cloud Service Providers (CSPs) and enterprises that deploy Software Defined
Networking (SDN) can use Software Load Balancer (SLB) to evenly distribute customer
network traffic among virtual network resources. SLB enables multiple servers to host
the same workload, providing high availability and scalability. It's also used to provide
inbound Network Address Translation (NAT) services for inbound access to VMs, and
outbound NAT services for outbound connectivity.

Using SLB, you can scale out your load balancing capabilities using SLB VMs on the
same Hyper-V compute servers that you use for your other VM workloads. SLB supports
rapid creation and deletion of load balancing endpoints as required for CSP operations.
In addition, SLB supports tens of gigabytes per cluster, provides a simple provisioning
model, and is easy to scale out and in. SLB uses Border Gateway Protocol to advertise
virtual IP addresses to the physical network.

For more information, see What is SLB for SDN?

SDN VPN gateways


SDN Gateway is a software-based Border Gateway Protocol (BGP) capable router
designed for CSPs and enterprises that host multi-tenant virtual networks using Hyper-V
Network Virtualization (HNV). You can use RAS Gateway to route network traffic
between a virtual network and another network, either local or remote.
SDN Gateway can be used to:

Create secure site-to-site IPsec connections between SDN virtual networks and
external customer networks over the internet.

Create Generic Routing Encapsulation (GRE) connections between SDN virtual


networks and external networks. The difference between site-to-site connections
and GRE connections is that the latter isn't an encrypted connection.

For more information about GRE connectivity scenarios, see GRE Tunneling in
Windows Server.

Create Layer 3 (L3) connections between SDN virtual networks and external
networks. In this case, the SDN gateway simply acts as a router between your
virtual network and the external network.

SDN Gateway requires SDN Network Controller. Network Controller performs the
deployment of gateway pools, configures tenant connections on each gateway, and
switches network traffic flows to a standby gateway if a gateway fails.

Gateways use Border Gateway Protocol to advertise GRE endpoints and establish point-
to-point connections. SDN deployment creates a default gateway pool that supports all
connection types. Within this pool, you can specify how many gateways are reserved on
standby in case an active gateway fails.

For more information, see What is RAS Gateway for SDN?

Next steps
Learn about two-node patterns - Azure Stack HCI network deployment patterns.
Review single-server storage reference
pattern components for Azure Stack HCI
Article • 12/12/2022

Applies to: Azure Stack HCI, versions 22H2 and 21H2

In this article, you'll learn about which network components are deployed for the single-
server reference pattern, as shown in the following diagram:

Optional components
The following are optional components. For more information on Software Defined
Networking (SDN), see Plan a Software Defined Network infrastructure.

SDN Network Controller VM


The Network Controller VM is optionally deployed. If a Network Controller VM is not
deployed, default network access policies are not available. A Network Controller VM is
needed if you have any of the following requirements:

Create and manage virtual networks or connect VMs to virtual network subnets.

Configure and manage microsegmentation for VMs connected to virtual networks


or traditional VLAN-based networks.

Attach virtual appliances to your virtual networks.

Configure Quality of Service (QoS) policies for VMs attached to virtual networks or
traditional VLAN-based networks.

SDN Software Load Balancer VM

The SDN Software Load Balancer (SLB) VM is used to evenly distribute network traffic
among multiple VMs. It enables multiple servers to host the same workload, providing
high availability and scalability. It is also used to provide inbound Network Address
Translation (NAT) services for inbound access to VMs, and outbound NAT services for
outbound connectivity.

SDN Gateway VM
The SDN Gateway VM is used to route network traffic between a virtual network and
another network, either local or remote. SDN Gateways can be used to:

Create secure site-to-site IPsec connections between SDN virtual networks and
external networks over the internet.

Create Generic Routing Encapsulation (GRE) connections between SDN virtual


networks and external networks. The difference between site-to-site connections
and GRE connections is that the latter is not an encrypted connection. For more
information about GRE connectivity, see GRE Tunneling in Windows Server.

Create Layer 3 (L3) connections between SDN virtual networks and external
networks. In this case, SDN Gateway simply acts as a router between your virtual
network and the external network.

Host agents
The following components run as services or agents on the host server:
Arc host agent: Enables you to manage your Windows and Linux computers hosted
outside of Azure on your corporate network or other cloud providers.

Network Controller host agent: Allows Network Controller to manage the goal state of
the data plane, and to receive notification of events as the configuration of the data
plane changes.

Monitor host agent: Orchestrator-managed agent used for emitting observability
(telemetry and diagnostics) pipeline data that uploads to Geneva (Azure Storage).

Software Load Balancer host agent: Listens for policy updates from the Network
Controller. In addition, this agent programs agent rules into the SDN-enabled Hyper-V
virtual switches that are configured on the local computer.

Components running on VMs


The following table lists the various components running on virtual machines (VMs) for a
single-server network pattern:

| Component | Number of VMs | OS disk size | Data disk size | vCPUs | Memory |
| Network Controller | 1 | 100 GB | 30 GB | 4 | 4 GB |
| SDN Software Load Balancers | 1 | 60 GB | 30 GB | 16 | 8 GB |
| SDN Gateways | 1 | 60 GB | 30 GB | 8 | 8 GB |
| OEM Management | OEM defined | OEM defined | OEM defined | OEM defined | OEM defined |
| Total | 3 + OEM | 270 GB + OEM | 90 GB + OEM | 32 + OEM | 28 GB + OEM |

Next steps
Learn about single-server IP requirements.
Review single-server storage reference
pattern IP requirements for Azure Stack
HCI
Article • 12/12/2022

Applies to: Azure Stack HCI, versions 22H2 and 21H2

In this article, learn about the IP requirements for deploying a single-server network
reference pattern in your environment.

Deployments without microsegmentation and QoS enabled

The following table lists network attributes for deployments without microsegmentation
and Quality of Service (QoS) enabled. This is the default scenario and is deployed
automatically.

| Network component | IP | Network ATC intent | Network routing | Subnet properties | Required IPs |
| Storage 1 | 1 IP for each host | Storage | No defined gateway. IP-less L2 VLAN. | Network ATC managed subnet. Default VLAN tag 711. | 1 optional if connected to switch. |
| Storage 2 | 1 IP for each host | Storage | No defined gateway. IP-less L2 VLAN. | Network ATC managed subnet. Default VLAN tag 712. | 1 optional if connected to switch. |
| Management | 1 IP for each host, 1 IP for Failover Cluster, 1 IP for OEM VM (optional) | Management | Outbound connected (internet access required). Disconnected (Arc autonomous controller). | Customer-defined management VLAN. (Native VLAN preferred but trunk mode supported.) | 2 required, 1 optional. |
| Total | | | | | 2 required. 2 optional for storage, 1 optional for OEM VM. |

(Optional) Deployments with microsegmentation and QoS enabled

The following table lists network attributes for deployments with microsegmentation
and QoS enabled. This scenario is optional and deployed only with Network Controller.

| Network component | IP | Network ATC intent | Network routing | Subnet properties | Required IPs |
| Storage 1 | 1 IP for each host | Storage | No defined gateway. IP-less L2 VLAN. | Network ATC managed subnet. Default VLAN tag 711. | 1 optional if connected to switch. |
| Storage 2 | 1 IP for each host | Storage | No defined gateway. IP-less L2 VLAN. | Network ATC managed subnet. Default VLAN tag 712. | 1 optional if connected to switch. |
| Management | 1 IP for each host, 1 IP for Failover Cluster, 1 IP for Network Controller VM, 1 IP for Arc VM management stack VM, 1 IP for OEM VM (new) | Management | Outbound connected (internet access required). Disconnected (Arc autonomous controller). | Customer-defined management VLAN. (Native VLAN preferred but trunk mode supported.) | 4 required, 1 optional. |
| Total | | | | | 4 required. 2 optional for storage, 1 optional for OEM VM. |

Deployments with SDN optional services


The following table lists network attributes for deployments with SDN optional services:

| Network component | IP | Network ATC intent | Network routing | Subnet properties | Required IPs |
| Storage 1 | 1 IP for each host | Storage | No defined gateway. IP-less L2 VLAN. | Network ATC managed subnet. Default VLAN tag 711. | 1 optional if connected to switch. |
| Storage 2 | 1 IP for each host | Storage | No defined gateway. IP-less L2 VLAN. | Network ATC managed subnet. Default VLAN tag 712. | 1 optional if connected to switch. |
| Tenant compute | Tenant VM IPs connected to corresponding VLANs | Compute | Tenant VLAN routing/access | Customer-defined, customer-managed. VLAN trunk configuration on physical switches required. | |
| Management | 1 IP for each host, 1 IP for Failover Cluster, 1 IP for Network Controller VM, 1 IP for Arc VM management stack VM, 1 IP for OEM VM (new). Single node: 1 Network Controller VM IP, 1 Software Load Balancer (SLB) VM IP, 1 gateway VM IP. | Management | Outbound connected (internet access required). Disconnected (Arc autonomous controller). | Customer-defined management VLAN. (Native VLAN preferred but trunk mode supported.) | 6 required, 1 optional. |
| HNV (also known as PA network) | 2 IPs for each host. Single node: 1 SLB VM IP, 1 gateway VM IP. | N/A | Requires default gateway to route packets externally. | Provider Address network VLAN. Subnet needs to allocate hosts and SLB VMs. Potential subnet growth consideration. | IPs automatically assigned out of the subnet by Network Controller. |
| Public VIPs | LB and GWs, Public VIPs | N/A | Advertised through BGP | | Network Controller-managed IPs |
| Private VIPs | LB Private VIPs | N/A | Advertised through BGP | | Network Controller-managed IPs |
| GRE VIPs | GRE connections, gateway VIPs | N/A | Advertised through BGP | | Network Controller-managed IPs |
| L3 Forwarding | N/A | | | Separate physical subnet to communicate with virtual network | |
| Total | | | | | 6 required. 2 optional for storage, 1 optional for OEM VM. |

Next steps
Download Azure Stack HCI
Review two-node storage switchless,
single switch deployment network
reference pattern for Azure Stack HCI
Article • 12/16/2022

Applies to: Azure Stack HCI, versions 22H2 and 21H2

In this article, you'll learn about the two-node storage switchless with single TOR switch
network reference pattern that you can use to deploy your Azure Stack HCI solution. The
information in this article will also help you determine if this configuration is viable for
your deployment planning needs. This article is targeted towards the IT administrators
who deploy and manage Azure Stack HCI in their datacenters.

For information on other network patterns, see Azure Stack HCI network deployment
patterns.

Scenarios
Scenarios for this network pattern include laboratories, factories, retail stores, and
government facilities.

Consider this pattern for a cost-effective solution that includes fault-tolerance at the
cluster level, but can tolerate northbound connectivity interruptions if the single physical
switch fails or requires maintenance.

You can scale out this pattern, but doing so requires workload downtime to reconfigure the
physical storage connectivity and the storage network. Although SDN L3
services are fully supported for this pattern, the routing services such as BGP will need to
be configured on the firewall device on top of the TOR switch if it doesn't support L3
services. Network security features such as microsegmentation and QoS don't require
extra configuration on the firewall device, as they're implemented on the virtual switch.

Physical connectivity components


As illustrated in the diagram below, this pattern has the following physical network
components:

Single TOR switch for north-south traffic communication.


Two teamed network ports to handle management and compute traffic, connected
to the L2 switch on each host

Two RDMA NICs in a full-mesh configuration for east-west traffic for storage. Each
node in the cluster has a redundant connection to the other node in the cluster.

As an option, some solutions might use a headless configuration without a BMC


card for security purposes.

| Networks | Management and compute | Storage | BMC |
|---|---|---|---|
| Link speed | At least 1 Gbps. 10 Gbps recommended | At least 10 Gbps | Check with hardware manufacturer |
| Interface type | RJ45, SFP+ or SFP28 | SFP+ or SFP28 | RJ45 |
| Ports and aggregation | Two teamed ports | Two standalone ports | One port |

Network ATC intents


For two-node storage switchless patterns, two Network ATC intents are created. The first
for management and compute network traffic, and the second for storage traffic.

Management and compute intent


Intent Type: Management and compute
Intent Mode: Cluster mode
Teaming: Yes. pNIC01 and pNIC02 are teamed
Default Management VLAN: Configured VLAN for management adapters isn’t
modified
PA & Compute VLANs and vNICs: Network ATC is transparent to PA vNICs and
VLAN or compute VM vNICs and VLANs
Storage intent
Intent type: Storage
Intent mode: Cluster mode
Teaming: pNIC03 and pNIC04 use SMB Multichannel to provide resiliency and
bandwidth aggregation
Default VLANs:
711 for storage network 1
712 for storage network 2
Default subnets:
10.71.1.0/24 for storage network 1
10.71.2.0/24 for storage network 2

For more information, see Deploy host networking.

Follow these steps to create network intents for this reference pattern:

1. Run PowerShell as administrator.

2. Run the following command:

PowerShell

Add-NetIntent -Name <Management_Compute> -Management -Compute -ClusterName <HCI01> -AdapterName <pNIC01, pNIC02>

Add-NetIntent -Name <Storage> -Storage -ClusterName <HCI01> -AdapterName <pNIC03, pNIC04>
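
As a concrete sketch, the following example creates the same two intents with illustrative values (the cluster name HCI01 and adapter names pNIC01 through pNIC04 are assumptions, not requirements) and then verifies them with the Network ATC cmdlets Get-NetIntent and Get-NetIntentStatus:

PowerShell

# Example values only; substitute your own cluster and adapter names.
Add-NetIntent -Name Management_Compute -Management -Compute -ClusterName HCI01 -AdapterName "pNIC01", "pNIC02"
Add-NetIntent -Name Storage -Storage -ClusterName HCI01 -AdapterName "pNIC03", "pNIC04"

# Confirm that both intents exist and check their provisioning status.
Get-NetIntent -ClusterName HCI01
Get-NetIntentStatus -ClusterName HCI01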

Logical connectivity components


As illustrated in the diagram below, this pattern has the following logical network
components:

Storage Network VLANs


The storage intent-based traffic consists of two individual networks supporting RDMA
traffic. Each interface will be dedicated to a separate storage network, and both may
utilize the same VLAN tag. This traffic is only intended to travel between the two nodes.
Storage traffic is a private network without connectivity to other resources.

The storage adapters operate on different IP subnets. To enable a switchless
configuration, each connected node uses a subnet that matches that of its directly connected neighbor. Each
storage network uses the Network ATC predefined VLANs by default (711 and 712).
However, these VLANs can be customized if necessary. In addition, if the default subnets
defined by Network ATC (10.71.1.0/24 and 10.71.2.0/24) aren't usable, you're responsible
for assigning all storage IP addresses in the cluster.
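
If you do take over storage addressing, one possible approach is to assign static addresses to the storage adapters on each node with the in-box NetTCPIP cmdlets. This is only a sketch; the adapter names and custom subnets shown are assumptions for illustration:

PowerShell

# Run on node 1; repeat on node 2 with different host addresses (for example, .12).
# "pNIC03" and "pNIC04" are example names for the two storage adapters.
New-NetIPAddress -InterfaceAlias "pNIC03" -IPAddress 10.80.1.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "pNIC04" -IPAddress 10.80.2.11 -PrefixLength 24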

For more information, see Network ATC overview.

OOB network
The Out of Band (OOB) network is dedicated to supporting the "lights-out" server
management interface also known as the baseboard management controller (BMC).
Each BMC interface connects to a customer-supplied switch. The BMC is used to
automate PXE boot scenarios.

The management network requires access to the BMC interface using Intelligent
Platform Management Interface (IPMI) User Datagram Protocol (UDP) port 623.

The OOB network is isolated from compute workloads and is optional for non-solution-
based deployments.

Management VLAN
All physical compute hosts require access to the management logical network. For IP
address planning, each physical compute host must have at least one IP address
assigned from the management logical network.

A DHCP server can automatically assign IP addresses for the management network, or
you can manually assign static IP addresses. When DHCP is the preferred IP assignment
method, we recommend that you use DHCP reservations without expiration.

The management network supports the following VLAN configurations:

Native VLAN - you aren't required to supply VLAN IDs. This is required for
solution-based installations.

Tagged VLAN - you supply VLAN IDs at the time of deployment.

The management network supports all traffic used for management of the cluster,
including Remote Desktop, Windows Admin Center, and Active Directory.

For more information, see Plan an SDN infrastructure: Management and HNV Provider.

Compute VLANs
In some scenarios, you don’t need to use SDN Virtual Networks with Virtual Extensible
LAN (VXLAN) encapsulation. Instead, you can use traditional VLANs to isolate your
tenant workloads. Those VLANs are configured on the TOR switch's port in trunk mode.
When connecting new VMs to these VLANs, the corresponding VLAN tag is defined on
the virtual network adapter.
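
For example, assuming a VM named VM01 and a tenant VLAN ID of 201 (both placeholders), the tag is set on the VM network adapter as follows:

PowerShell

# Put the VM's network adapter in access mode on the example tenant VLAN.
Set-VMNetworkAdapterVlan -VMName "VM01" -Access -VlanId 201

# Verify the VLAN configuration of the VM's adapters.
Get-VMNetworkAdapterVlan -VMName "VM01"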

HNV Provider Address (PA) network


The Hyper-V Network Virtualization (HNV) Provider Address (PA) network serves as the
underlying physical network for East/West (internal-internal) tenant traffic, North/South
(external-internal) tenant traffic, and to exchange BGP peering information with the
physical network. This network is only required when there's a need for deploying virtual
networks using VXLAN encapsulation for another layer of isolation and for network
multitenancy.

For more information, see Plan an SDN infrastructure: Management and HNV Provider.

Network isolation options


The following network isolation options are supported:

VLANs (IEEE 802.1Q)


VLANs allow devices that must be kept separate to share the cabling of a physical
network and yet be prevented from directly interacting with one another. This managed
sharing yields gains in simplicity, security, traffic management, and economy. For
example, a VLAN can be used to separate traffic within a business based on individual
users or groups of users or their roles, or based on traffic characteristics. Many internet
hosting services use VLANs to separate private zones from one another, allowing each
customer's servers to be grouped in a single network segment no matter where the
individual servers are located in the data center. Some precautions are needed to
prevent traffic "escaping" from a given VLAN, an exploit known as VLAN hopping.

For more information, see Understand the usage of virtual networks and VLANs.

Default network access policies and microsegmentation


Default network access policies ensure that all virtual machines (VMs) in your Azure
Stack HCI cluster are secure by default from external threats. With these policies, we'll
block inbound access to a VM by default, while giving the option to enable selective
inbound ports and thus securing the VMs from external attacks. This enforcement is
available through management tools like Windows Admin Center.
Microsegmentation involves creating granular network policies between applications
and services. This essentially reduces the security perimeter to a fence around each
application or VM. This fence permits only necessary communication between
application tiers or other logical boundaries, thus making it exceedingly difficult for
cyberthreats to spread laterally from one system to another. Microsegmentation
securely isolates networks from each other and reduces the total attack surface of a
network security incident.

Default network access policies and microsegmentation are realized as five-tuple


stateful (source address prefix, source port, destination address prefix, destination port,
and protocol) firewall rules on Azure Stack HCI clusters. Firewall rules are also known as
Network Security Groups (NSGs). These policies are enforced at the vSwitch port of each
VM. The policies are pushed through the management layer, and the SDN Network
Controller distributes them to all applicable hosts. These policies are available for VMs
on traditional VLAN networks and on SDN overlay networks.

For more information, see What is Datacenter Firewall?.


QoS for VM network adapters


You can configure Quality of Service (QoS) for a VM network adapter to limit bandwidth
on a virtual interface to prevent a high-traffic VM from contending with other VM
network traffic. You can also configure QoS to reserve a specific amount of bandwidth
for a VM to ensure that the VM can send traffic regardless of other traffic on the
network. This can be applied to VMs attached to traditional VLAN networks as well as
VMs attached to SDN overlay networks.
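
As a rough sketch, the Hyper-V cmdlets below cap or reserve bandwidth for a VM network adapter. The VM name and values are placeholders; check the cmdlet help for how the values are interpreted, and note that absolute reservations depend on the virtual switch's bandwidth reservation mode:

PowerShell

# Cap the bandwidth of the VM's virtual network adapter (example value).
Set-VMNetworkAdapter -VMName "VM01" -MaximumBandwidth 1GB

# Or reserve bandwidth for the VM instead of capping it (example value).
Set-VMNetworkAdapter -VMName "VM01" -MinimumBandwidthAbsolute 500MB

# Review the bandwidth-related settings now in effect.
Get-VMNetworkAdapter -VMName "VM01" | Format-List Name, *Bandwidth*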

For more information, see Configure QoS for a VM network adapter.

Virtual networks
Network virtualization provides virtual networks to VMs similar to how server
virtualization (hypervisor) provides VMs to the operating system. Network virtualization
decouples virtual networks from the physical network infrastructure and removes the
constraints of VLAN and hierarchical IP address assignment from VM provisioning. Such
flexibility makes it easy for you to move to Infrastructure-as-a-Service (IaaS) clouds, and lets
hosters and datacenter administrators manage their infrastructure more efficiently while
maintaining the necessary multi-tenant isolation and security requirements, and supporting
overlapping VM IP addresses.

For more information, see Hyper-V Network Virtualization.


L3 networking services options
The following L3 networking service options are available:

Virtual network peering


Virtual network peering lets you connect two virtual networks seamlessly. Once peered,
for connectivity purposes, the virtual networks appear as one. The benefits of using
virtual network peering include:

Traffic between VMs in the peered virtual networks gets routed through the
backbone infrastructure through private IP addresses only. The communication
between the virtual networks doesn't require public Internet or gateways.
A low-latency, high-bandwidth connection between resources in different virtual
networks.
The ability for resources in one virtual network to communicate with resources in a
different virtual network.
No downtime to resources in either virtual network when creating the peering.

For more information, see Virtual network peering.

SDN software load balancer


Cloud Service Providers (CSPs) and enterprises that deploy Software Defined
Networking (SDN) can use Software Load Balancer (SLB) to evenly distribute customer
network traffic among virtual network resources. SLB enables multiple servers to host
the same workload, providing high availability and scalability. It's also used to provide
inbound Network Address Translation (NAT) services for inbound access to VMs, and
outbound NAT services for outbound connectivity.

Using SLB, you can scale out your load balancing capabilities using SLB VMs on the
same Hyper-V compute servers that you use for your other VM workloads. SLB supports
rapid creation and deletion of load balancing endpoints as required for CSP operations.
In addition, SLB supports tens of gigabytes per cluster, provides a simple provisioning
model, and is easy to scale out and in. SLB uses Border Gateway Protocol to advertise
virtual IP addresses to the physical network.

For more information, see What is SLB for SDN?

SDN VPN gateways


SDN Gateway is a software-based Border Gateway Protocol (BGP) capable router
designed for CSPs and enterprises that host multi-tenant virtual networks using Hyper-V
Network Virtualization (HNV). You can use RAS Gateway to route network traffic
between a virtual network and another network, either local or remote.

SDN Gateway can be used to:

Create secure site-to-site IPsec connections between SDN virtual networks and
external customer networks over the internet.

Create Generic Routing Encapsulation (GRE) connections between SDN virtual


networks and external networks. The difference between site-to-site connections
and GRE connections is that the latter isn't an encrypted connection.

For more information about GRE connectivity scenarios, see GRE Tunneling in
Windows Server.

Create Layer 3 (L3) connections between SDN virtual networks and external
networks. In this case, the SDN gateway simply acts as a router between your
virtual network and the external network.

SDN Gateway requires SDN Network Controller. Network Controller performs the
deployment of gateway pools, configures tenant connections on each gateway, and
switches network traffic flows to a standby gateway if a gateway fails.

Gateways use Border Gateway Protocol to advertise GRE endpoints and establish point-
to-point connections. SDN deployment creates a default gateway pool that supports all
connection types. Within this pool, you can specify how many gateways are reserved on
standby in case an active gateway fails.

For more information, see What is RAS Gateway for SDN?

Next steps
Learn about the two-node storage switchless, two switches network pattern.
Review two-node storage switchless,
two switches deployment network
reference pattern for Azure Stack HCI
Article • 12/16/2022

Applies to: Azure Stack HCI, versions 22H2 and 21H2

In this article, you'll learn about the two-node storage switchless with two TOR L3
switches network reference pattern that you can use to deploy your Azure Stack HCI
solution. The information in this article will also help you determine if this configuration
is viable for your deployment planning needs. This article is targeted towards the IT
administrators who deploy and manage Azure Stack HCI in their datacenters.

For information on other network patterns, see Azure Stack HCI network deployment
patterns.

Scenarios
Scenarios for this network pattern include laboratories, branch offices, and datacenter
facilities.

Consider implementing this pattern when looking for a cost-efficient solution that has
fault tolerance across all the network components. It is possible to scale out the pattern,
but doing so requires workload downtime to reconfigure the physical storage connectivity and
the storage network. SDN L3 services are fully supported on this pattern.
Routing services such as BGP can be configured directly on the TOR switches if they
support L3 services. Network security features such as micro-segmentation and QoS do
not require additional configuration for the firewall device as they are implemented at
the virtual network adapter layer.

Physical connectivity components


As illustrated in the diagram below, this pattern has the following physical network
components:

For northbound/southbound traffic, the cluster requires two TOR switches in MLAG
configuration.
Two teamed network cards to handle management and compute traffic, connected
to the TOR switches. Each NIC is connected to a different TOR switch.

Two RDMA NICs in a full-mesh configuration for East-West storage traffic. Each
node in the cluster has a redundant connection to the other node in the cluster.

As an option, some solutions might use a headless configuration without a BMC


card for security purposes.

| Networks | Management and compute | Storage | BMC |
|---|---|---|---|
| Link speed | At least 1 Gbps. 10 Gbps recommended | At least 10 Gbps | Check with hardware manufacturer |
| Interface type | RJ45, SFP+ or SFP28 | SFP+ or SFP28 | RJ45 |
| Ports and aggregation | Two teamed ports | Two standalone ports | One port |

Network ATC intents


For two-node storage switchless patterns, two Network ATC intents are created. The first
for management and compute network traffic, and the second for storage traffic.

Management and compute intent


Intent Type: Management and Compute
Intent Mode: Cluster mode
Teaming: Yes. pNIC01 and pNIC02 are teamed
Default Management VLAN: Configured VLAN for management adapters isn’t
modified
PA & Compute VLANs and vNICs: Network ATC is transparent to PA vNICs and
VLAN or compute VM vNICs and VLANs

Storage intent
Intent type: Storage
Intent mode: Cluster mode
Teaming: pNIC03 and pNIC04 use SMB Multichannel to provide resiliency and
bandwidth aggregation
Default VLANs:
711 for storage network 1
712 for storage network 2
Default subnets:
10.71.1.0/24 for storage network 1
10.71.2.0/24 for storage network 2

For more information, see Deploy host networking.

Follow these steps to create network intents for this reference pattern:

1. Run PowerShell as administrator.

2. Run the following command:

PowerShell

Add-NetIntent -Name <Management_Compute> -Management -Compute -ClusterName <HCI01> -AdapterName <pNIC01, pNIC02>

Add-NetIntent -Name <Storage> -Storage -ClusterName <HCI01> -AdapterName <pNIC03, pNIC04>

Logical connectivity components


As illustrated in the diagram below, this pattern has the following logical network
components:

Storage Network VLANs


The storage intent-based traffic consists of two individual networks supporting RDMA
traffic. Each interface is dedicated to a separate storage network, and both may share
the same VLAN tag. This traffic is only intended to travel between the two nodes.
Storage traffic is a private network without connectivity to other resources.

The storage adapters operate in different IP subnets. To enable a switchless
configuration, each connected node uses a subnet that matches that of its directly connected
neighbor. Each storage network uses the Network ATC predefined VLANs by default (711 and
712). These VLANs can be customized if required. In addition, if the default subnets defined by
Network ATC aren't usable, you're responsible for assigning all storage IP addresses in the
cluster.
For more information, see Network ATC overview.

OOB network
The Out of Band (OOB) network is dedicated to supporting the "lights-out" server
management interface also known as the baseboard management controller (BMC).
Each BMC interface connects to a customer-supplied switch. The BMC is used to
automate PXE boot scenarios.

The management network requires access to the BMC interface using Intelligent
Platform Management Interface (IPMI) User Datagram Protocol (UDP) port 623.

The OOB network is isolated from compute workloads and is optional for non-solution-
based deployments.

Management VLAN
All physical compute hosts require access to the management logical network. For IP
address planning, each physical compute host must have at least one IP address
assigned from the management logical network.

A DHCP server can automatically assign IP addresses for the management network, or
you can manually assign static IP addresses. When DHCP is the preferred IP assignment
method, we recommend that you use DHCP reservations without expiration.
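
If the management addresses come from a Windows Server DHCP server, a reservation can be created with the DhcpServer module as sketched below; the scope, IP address, and MAC address are placeholders:

PowerShell

# Reserve a management IP for the first host's management adapter (example values).
Add-DhcpServerv4Reservation -ScopeId 192.168.1.0 -IPAddress 192.168.1.11 `
    -ClientId "00-11-22-33-44-55" -Description "Azure Stack HCI node 1 management"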

The management network supports the following VLAN configurations:

Native VLAN - you aren't required to supply VLAN IDs. This is required for
solution-based installations.

Tagged VLAN - you supply VLAN IDs at the time of deployment.

The management network supports all traffic used for management of the cluster,
including Remote Desktop, Windows Admin Center, and Active Directory.

For more information, see Plan an SDN infrastructure: Management and HNV Provider.

Compute VLANs
In some scenarios, you don’t need to use SDN Virtual Networks with Virtual Extensible
LAN (VXLAN) encapsulation. Instead, you can use traditional VLANs to isolate your
tenant workloads. Those VLANs are configured on the TOR switch's port in trunk mode.
When connecting new VMs to these VLANs, the corresponding VLAN tag is defined on
the virtual network adapter.

HNV Provider Address (PA) network


The Hyper-V Network Virtualization (HNV) Provider Address (PA) network serves as the
underlying physical network for East/West (internal-internal) tenant traffic, North/South
(external-internal) tenant traffic, and to exchange BGP peering information with the
physical network. This network is only required when there's a need for deploying virtual
networks using VXLAN encapsulation for another layer of isolation and for network
multitenancy.

For more information, see Plan an SDN infrastructure: Management and HNV Provider.

Network isolation options


The following network isolation options are supported:

VLANs (IEEE 802.1Q)


VLANs allow devices that must be kept separate to share the cabling of a physical
network and yet be prevented from directly interacting with one another. This managed
sharing yields gains in simplicity, security, traffic management, and economy. For
example, a VLAN can be used to separate traffic within a business based on individual
users or groups of users or their roles, or based on traffic characteristics. Many internet
hosting services use VLANs to separate private zones from one another, allowing each
customer's servers to be grouped in a single network segment no matter where the
individual servers are located in the data center. Some precautions are needed to
prevent traffic "escaping" from a given VLAN, an exploit known as VLAN hopping.

For more information, see Understand the usage of virtual networks and VLANs.

Default network access policies and microsegmentation


Default network access policies ensure that all virtual machines (VMs) in your Azure
Stack HCI cluster are secure by default from external threats. With these policies, we'll
block inbound access to a VM by default, while giving the option to enable selective
inbound ports and thus securing the VMs from external attacks. This enforcement is
available through management tools like Windows Admin Center.
Microsegmentation involves creating granular network policies between applications
and services. This essentially reduces the security perimeter to a fence around each
application or VM. This fence permits only necessary communication between
application tiers or other logical boundaries, thus making it exceedingly difficult for
cyberthreats to spread laterally from one system to another. Microsegmentation
securely isolates networks from each other and reduces the total attack surface of a
network security incident.

Default network access policies and microsegmentation are realized as five-tuple


stateful (source address prefix, source port, destination address prefix, destination port,
and protocol) firewall rules on Azure Stack HCI clusters. Firewall rules are also known as
Network Security Groups (NSGs). These policies are enforced at the vSwitch port of each
VM. The policies are pushed through the management layer, and the SDN Network
Controller distributes them to all applicable hosts. These policies are available for VMs
on traditional VLAN networks and on SDN overlay networks.

For more information, see What is Datacenter Firewall?.


QoS for VM network adapters


You can configure Quality of Service (QoS) for a VM network adapter to limit bandwidth
on a virtual interface to prevent a high-traffic VM from contending with other VM
network traffic. You can also configure QoS to reserve a specific amount of bandwidth
for a VM to ensure that the VM can send traffic regardless of other traffic on the
network. This can be applied to VMs attached to traditional VLAN networks as well as
VMs attached to SDN overlay networks.

For more information, see Configure QoS for a VM network adapter.

Virtual networks
Network virtualization provides virtual networks to VMs similar to how server
virtualization (hypervisor) provides VMs to the operating system. Network virtualization
decouples virtual networks from the physical network infrastructure and removes the
constraints of VLAN and hierarchical IP address assignment from VM provisioning. Such
flexibility makes it easy for you to move to Infrastructure-as-a-Service (IaaS) clouds, and lets
hosters and datacenter administrators manage their infrastructure more efficiently while
maintaining the necessary multi-tenant isolation and security requirements, and supporting
overlapping VM IP addresses.

For more information, see Hyper-V Network Virtualization.


L3 networking services options
The following L3 networking service options are available:

Virtual network peering


Virtual network peering lets you connect two virtual networks seamlessly. Once peered,
for connectivity purposes, the virtual networks appear as one. The benefits of using
virtual network peering include:

Traffic between VMs in the peered virtual networks gets routed through the
backbone infrastructure through private IP addresses only. The communication
between the virtual networks doesn't require public Internet or gateways.
A low-latency, high-bandwidth connection between resources in different virtual
networks.
The ability for resources in one virtual network to communicate with resources in a
different virtual network.
No downtime to resources in either virtual network when creating the peering.

For more information, see Virtual network peering.

SDN software load balancer


Cloud Service Providers (CSPs) and enterprises that deploy Software Defined
Networking (SDN) can use Software Load Balancer (SLB) to evenly distribute customer
network traffic among virtual network resources. SLB enables multiple servers to host
the same workload, providing high availability and scalability. It's also used to provide
inbound Network Address Translation (NAT) services for inbound access to VMs, and
outbound NAT services for outbound connectivity.

Using SLB, you can scale out your load balancing capabilities using SLB VMs on the
same Hyper-V compute servers that you use for your other VM workloads. SLB supports
rapid creation and deletion of load balancing endpoints as required for CSP operations.
In addition, SLB supports tens of gigabytes per cluster, provides a simple provisioning
model, and is easy to scale out and in. SLB uses Border Gateway Protocol to advertise
virtual IP addresses to the physical network.

For more information, see What is SLB for SDN?

SDN VPN gateways


SDN Gateway is a software-based Border Gateway Protocol (BGP) capable router
designed for CSPs and enterprises that host multi-tenant virtual networks using Hyper-V
Network Virtualization (HNV). You can use RAS Gateway to route network traffic
between a virtual network and another network, either local or remote.

SDN Gateway can be used to:

Create secure site-to-site IPsec connections between SDN virtual networks and
external customer networks over the internet.

Create Generic Routing Encapsulation (GRE) connections between SDN virtual


networks and external networks. The difference between site-to-site connections
and GRE connections is that the latter isn't an encrypted connection.

For more information about GRE connectivity scenarios, see GRE Tunneling in
Windows Server.

Create Layer 3 (L3) connections between SDN virtual networks and external
networks. In this case, the SDN gateway simply acts as a router between your
virtual network and the external network.

SDN Gateway requires SDN Network Controller. Network Controller performs the
deployment of gateway pools, configures tenant connections on each gateway, and
switches network traffic flows to a standby gateway if a gateway fails.

Gateways use Border Gateway Protocol to advertise GRE endpoints and establish point-
to-point connections. SDN deployment creates a default gateway pool that supports all
connection types. Within this pool, you can specify how many gateways are reserved on
standby in case an active gateway fails.

For more information, see What is RAS Gateway for SDN?

Next steps
Learn about the two-node storage switchless, one switch network pattern.
Review two-node storage switched,
non-converged deployment network
reference pattern for Azure Stack HCI
Article • 12/16/2022

Applies to: Azure Stack HCI, versions 22H2 and 21H2

In this article, you'll learn about the two-node storage switched, non-converged, two-
TOR-switch network reference pattern that you can use to deploy your Azure Stack HCI
solution. The information in this article will also help you determine if this configuration
is viable for your deployment planning needs. This article is targeted towards the IT
administrators who deploy and manage Azure Stack HCI in their datacenters.

For information on other network patterns, see Azure Stack HCI network deployment
patterns.

Scenarios
Scenarios for this network pattern include laboratories, factories, branch offices, and
datacenter facilities.

Deploy this pattern for enhanced network performance of your system, or if you plan
to add more nodes. East-West storage replication traffic won't interfere with or compete
against north-south traffic dedicated to management and compute. The logical network
configuration is ready for additional nodes without requiring workload downtime or
physical connection changes. SDN L3 services are fully supported on this
pattern.

Routing services such as BGP can be configured directly on the TOR switches if they
support L3 services. Network security features such as microsegmentation and QoS
don't require extra configuration on the firewall device as they're implemented at the
virtual network adapter layer.

Physical connectivity components


As described in the diagram below, this pattern has the following physical network
components:
For northbound/southbound traffic, the cluster in this pattern is implemented with
two TOR switches in MLAG configuration.

Two teamed network cards to handle management and compute traffic connected
to two TOR switches. Each NIC is connected to a different TOR switch.

Two RDMA NICs in standalone configuration. Each NIC is connected to a different


TOR switch. SMB multichannel capability provides path aggregation and fault
tolerance.

As an option, deployments can include a BMC card to enable remote management


of the environment. Some solutions might use a headless configuration without a
BMC card for security purposes.


| Networks | Management and compute | Storage | BMC |
|---|---|---|---|
| Link speed | At least 1 Gbps. 10 Gbps recommended | At least 10 Gbps | Check with hardware manufacturer |
| Interface type | RJ45, SFP+ or SFP28 | SFP+ or SFP28 | RJ45 |
| Ports and aggregation | Two teamed ports | Two standalone ports | One port |

Network ATC intents

Management and compute intent


Intent type: Management and compute
Intent mode: Cluster mode
Teaming: Yes. pNIC01 and pNIC02 are teamed
Default management VLAN: Configured VLAN for management adapters isn’t
modified
PA & compute VLANs and vNICs: Network ATC is transparent to PA vNICs and
VLAN or compute VM vNICs and VLANs
Storage intent
Intent Type: Storage
Intent Mode: Cluster mode
Teaming: pNIC03 and pNIC04 use SMB Multichannel to provide resiliency and
bandwidth aggregation
Default VLANs:
711 for storage network 1
712 for storage network 2
Default subnets:
10.71.1.0/24 for storage network 1
10.71.2.0/24 for storage network 2

Follow these steps to create network intents for this reference pattern:

1. Run PowerShell as administrator.

2. Run the following commands:

PowerShell

Add-NetIntent -Name <Management_Compute> -Management -Compute -ClusterName <HCI01> -AdapterName <pNIC01, pNIC02>

Add-NetIntent -Name <Storage> -Storage -ClusterName <HCI01> -AdapterName <pNIC03, pNIC04>

Logical connectivity components


As illustrated in the diagram below, this pattern has the following logical network
components:

Storage Network VLANs


The storage intent-based traffic consists of two individual networks supporting RDMA
traffic. Each interface is dedicated to a separate storage network, and both can use the
same VLAN tag.

The storage adapters operate in different IP subnets. Each storage network uses the ATC
predefined VLANs by default (711 and 712). However, these VLANs can be customized if
necessary. In addition, if the default subnet defined by ATC isn't usable, you're
responsible for assigning all storage IP addresses in the cluster.
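
After the storage intent is applied, you can check that both standalone RDMA adapters are operational and that SMB Multichannel has a path over each of them. A minimal verification sketch using the in-box SMB and network adapter cmdlets:

PowerShell

# Confirm RDMA is enabled on the storage adapters.
Get-NetAdapterRdma | Where-Object Enabled

# List the interfaces SMB can use and whether they're RDMA capable.
Get-SmbClientNetworkInterface

# After storage traffic starts flowing, show the active SMB Multichannel connections.
Get-SmbMultichannelConnection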

For more information, see Network ATC overview.


OOB network
The Out of Band (OOB) network is dedicated to supporting the "lights-out" server
management interface also known as the baseboard management controller (BMC).
Each BMC interface connects to a customer-supplied switch. The BMC is used to
automate PXE boot scenarios.

The management network requires access to the BMC interface using Intelligent
Platform Management Interface (IPMI) User Datagram Protocol (UDP) port 623.

The OOB network is isolated from compute workloads and is optional for non-solution-
based deployments.

Management VLAN
All physical compute hosts require access to the management logical network. For IP
address planning, each physical compute host must have at least one IP address
assigned from the management logical network.

A DHCP server can automatically assign IP addresses for the management network, or
you can manually assign static IP addresses. When DHCP is the preferred IP assignment
method, we recommend that you use DHCP reservations without expiration.

The management network supports the following VLAN configurations:

Native VLAN - you aren't required to supply VLAN IDs. This is required for
solution-based installations.

Tagged VLAN - you supply VLAN IDs at the time of deployment.

The management network supports all traffic used for management of the cluster,
including Remote Desktop, Windows Admin Center, and Active Directory.

For more information, see Plan an SDN infrastructure: Management and HNV Provider.

Compute VLANs
In some scenarios, you don’t need to use SDN Virtual Networks with Virtual Extensible
LAN (VXLAN) encapsulation. Instead, you can use traditional VLANs to isolate your
tenant workloads. Those VLANs are configured on the TOR switch's port in trunk mode.
When connecting new VMs to these VLANs, the corresponding VLAN tag is defined on
the virtual network adapter.
HNV Provider Address (PA) network
The Hyper-V Network Virtualization (HNV) Provider Address (PA) network serves as the
underlying physical network for East/West (internal-internal) tenant traffic, North/South
(external-internal) tenant traffic, and to exchange BGP peering information with the
physical network. This network is only required when there's a need for deploying virtual
networks using VXLAN encapsulation for another layer of isolation and for network
multitenancy.

For more information, see Plan an SDN infrastructure: Management and HNV Provider.

Network isolation options


The following network isolation options are supported:

VLANs (IEEE 802.1Q)


VLANs allow devices that must be kept separate to share the cabling of a physical
network and yet be prevented from directly interacting with one another. This managed
sharing yields gains in simplicity, security, traffic management, and economy. For
example, a VLAN can be used to separate traffic within a business based on individual
users or groups of users or their roles, or based on traffic characteristics. Many internet
hosting services use VLANs to separate private zones from one another, allowing each
customer's servers to be grouped in a single network segment no matter where the
individual servers are located in the data center. Some precautions are needed to
prevent traffic "escaping" from a given VLAN, an exploit known as VLAN hopping.

For more information, see Understand the usage of virtual networks and VLANs.

Default network access policies and microsegmentation


Default network access policies ensure that all virtual machines (VMs) in your Azure
Stack HCI cluster are secure by default from external threats. With these policies, we'll
block inbound access to a VM by default, while giving the option to enable selective
inbound ports and thus securing the VMs from external attacks. This enforcement is
available through management tools like Windows Admin Center.

Microsegmentation involves creating granular network policies between applications


and services. This essentially reduces the security perimeter to a fence around each
application or VM. This fence permits only necessary communication between
application tiers or other logical boundaries, thus making it exceedingly difficult for
cyberthreats to spread laterally from one system to another. Microsegmentation
securely isolates networks from each other and reduces the total attack surface of a
network security incident.

Default network access policies and microsegmentation are realized as five-tuple


stateful (source address prefix, source port, destination address prefix, destination port,
and protocol) firewall rules on Azure Stack HCI clusters. Firewall rules are also known as
Network Security Groups (NSGs). These policies are enforced at the vSwitch port of each
VM. The policies are pushed through the management layer, and the SDN Network
Controller distributes them to all applicable hosts. These policies are available for VMs
on traditional VLAN networks and on SDN overlay networks.

For more information, see What is Datacenter Firewall?.


QoS for VM network adapters


You can configure Quality of Service (QoS) for a VM network adapter to limit bandwidth
on a virtual interface to prevent a high-traffic VM from contending with other VM
network traffic. You can also configure QoS to reserve a specific amount of bandwidth
for a VM to ensure that the VM can send traffic regardless of other traffic on the
network. This can be applied to VMs attached to traditional VLAN networks as well as
VMs attached to SDN overlay networks.

For more information, see Configure QoS for a VM network adapter.

Virtual networks
Network virtualization provides virtual networks to VMs similar to how server
virtualization (hypervisor) provides VMs to the operating system. Network virtualization
decouples virtual networks from the physical network infrastructure and removes the
constraints of VLAN and hierarchical IP address assignment from VM provisioning. Such
flexibility makes it easy for you to move to Infrastructure-as-a-Service (IaaS) clouds, and lets
hosters and datacenter administrators manage their infrastructure more efficiently while
maintaining the necessary multi-tenant isolation and security requirements, and supporting
overlapping VM IP addresses.

For more information, see Hyper-V Network Virtualization.

L3 networking services options


The following L3 networking service options are available:
Virtual network peering
Virtual network peering lets you connect two virtual networks seamlessly. Once peered,
for connectivity purposes, the virtual networks appear as one. The benefits of using
virtual network peering include:

Traffic between VMs in the peered virtual networks gets routed through the
backbone infrastructure through private IP addresses only. The communication
between the virtual networks doesn't require public Internet or gateways.
A low-latency, high-bandwidth connection between resources in different virtual
networks.
The ability for resources in one virtual network to communicate with resources in a
different virtual network.
No downtime to resources in either virtual network when creating the peering.

For more information, see Virtual network peering.

SDN software load balancer


Cloud Service Providers (CSPs) and enterprises that deploy Software Defined
Networking (SDN) can use Software Load Balancer (SLB) to evenly distribute customer
network traffic among virtual network resources. SLB enables multiple servers to host
the same workload, providing high availability and scalability. It's also used to provide
inbound Network Address Translation (NAT) services for inbound access to VMs, and
outbound NAT services for outbound connectivity.

Using SLB, you can scale out your load balancing capabilities using SLB VMs on the
same Hyper-V compute servers that you use for your other VM workloads. SLB supports
rapid creation and deletion of load balancing endpoints as required for CSP operations.
In addition, SLB supports tens of gigabytes per cluster, provides a simple provisioning
model, and is easy to scale out and in. SLB uses Border Gateway Protocol to advertise
virtual IP addresses to the physical network.

For more information, see What is SLB for SDN?

SDN VPN gateways


SDN Gateway is a software-based Border Gateway Protocol (BGP) capable router
designed for CSPs and enterprises that host multi-tenant virtual networks using Hyper-V
Network Virtualization (HNV). You can use RAS Gateway to route network traffic
between a virtual network and another network, either local or remote.
SDN Gateway can be used to:

Create secure site-to-site IPsec connections between SDN virtual networks and
external customer networks over the internet.

Create Generic Routing Encapsulation (GRE) connections between SDN virtual


networks and external networks. The difference between site-to-site connections
and GRE connections is that the latter isn't an encrypted connection.

For more information about GRE connectivity scenarios, see GRE Tunneling in
Windows Server.

Create Layer 3 (L3) connections between SDN virtual networks and external
networks. In this case, the SDN gateway simply acts as a router between your
virtual network and the external network.

SDN Gateway requires SDN Network Controller. Network Controller performs the
deployment of gateway pools, configures tenant connections on each gateway, and
switches network traffic flows to a standby gateway if a gateway fails.

Gateways use Border Gateway Protocol to advertise GRE endpoints and establish point-
to-point connections. SDN deployment creates a default gateway pool that supports all
connection types. Within this pool, you can specify how many gateways are reserved on
standby in case an active gateway fails.

For more information, see What is RAS Gateway for SDN?

Next steps
Learn about the two-node storage switched, fully converged network pattern.
Review two-node storage switched, fully
converged deployment network
reference pattern for Azure Stack HCI
Article • 12/12/2022

Applies to: Azure Stack HCI, versions 22H2 and 21H2

In this article, you'll learn about the two-node storage switched, fully converged with
two TOR switches network reference pattern that you can use to deploy your Azure
Stack HCI solution. The information in this article will also help you determine if this
configuration is viable for your deployment planning needs. This article is targeted
towards the IT administrators who deploy and manage Azure Stack HCI in their
datacenters.

For information on other network patterns, see Azure Stack HCI network deployment
patterns.

Scenarios
Scenarios for this network pattern include laboratories, branch offices, and datacenter
facilities.

Consider this pattern if you plan to add additional nodes and your bandwidth
requirements for north-south traffic don't require dedicated adapters. This solution
might be a good option when physical switch ports are scarce and you're looking for
cost reductions for your solution. This pattern requires additional operational costs to
fine-tune the shared host network adapters QoS policies to protect storage traffic from
workload and management traffic. SDN L3 services are fully supported on this pattern.

Routing services such as BGP can be configured directly on the TOR switches if they
support L3 services. Network security features such as microsegmentation and QoS
don't require extra configuration on the firewall device as they're implemented at the
virtual network adapter layer.

Physical connectivity components


As described in the diagram below, this pattern has the following physical network
components:
For northbound/southbound traffic, the cluster in this pattern is implemented with
two TOR switches in MLAG configuration.

Two teamed network cards handle the management, compute, and RDMA storage
traffic connected to the TOR switches. Each NIC is connected to a different TOR
switch. SMB multichannel capability provides path aggregation and fault tolerance.

As an option, deployments can include a BMC card to enable remote management


of the environment. Some solutions might use a headless configuration without a
BMC card for security purposes.

| Networks | Management, compute, storage | BMC |
|---|---|---|
| Link speed | At least 10 Gbps | Check with hardware manufacturer |
| Interface type | SFP+ or SFP28 | RJ45 |
| Ports and aggregation | Two teamed ports | One port |

Network ATC intents

Management, compute, and storage intent


Intent Type: Management, compute, and storage
Intent Mode: Cluster mode
Teaming: Yes. pNIC01 and pNIC02 are teamed
Default Management VLAN: Configured VLAN for management adapters isn’t
modified
Storage vNIC 1:
VLAN 711
Subnet 10.71.1.0/24 for storage network 1
Storage vNIC 2:
VLAN 712
Subnet 10.71.2.0/24 for storage network 2
Storage vNIC1 and storage vNIC2 use SMB Multichannel to provide resiliency and
bandwidth aggregation
PA VLAN and vNICs: Network ATC is transparent to PA vNICs and VLAN
Compute VLANs and vNICs: Network ATC is transparent to compute VM vNICs and
VLANs

For more information, see Deploy host networking.

Follow these steps to create network intents for this reference pattern:

1. Run PowerShell as administrator.

2. Run the following command:

PowerShell

Add-NetIntent -Name <Management_Compute> -Management -Compute -Storage -ClusterName <HCI01> -AdapterName <pNIC01, pNIC02>
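
Because storage shares the teamed adapters with management and compute in this pattern, it's worth reviewing the QoS and Data Center Bridging settings that Network ATC applies before fine-tuning them. The sketch below only inspects the current configuration with in-box cmdlets; it doesn't change anything:

PowerShell

# Review the QoS traffic classes and policies configured on the host.
Get-NetQosTrafficClass
Get-NetQosPolicy

# Check which physical adapters have DCB/QoS enabled.
Get-NetAdapterQos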

Logical connectivity components


As illustrated in the diagram below, this pattern has the following logical network
components:

Storage Network VLANs


The storage intent-based traffic in this pattern shares the physical network adapters with
management and compute.

The storage networks operate in different IP subnets. Each storage network uses the ATC
predefined VLANs by default (711 and 712). However, these VLANs can be customized if
necessary. In addition, if the default subnet defined by ATC isn't usable, you're
responsible for assigning all storage IP addresses in the cluster.

For more information, see Network ATC overview.


OOB network
The Out of Band (OOB) network is dedicated to supporting the "lights-out" server
management interface also known as the baseboard management controller (BMC).
Each BMC interface connects to a customer-supplied switch. The BMC is used to
automate PXE boot scenarios.

The management network requires access to the BMC interface using Intelligent
Platform Management Interface (IPMI) User Datagram Protocol (UDP) port 623.

The OOB network is isolated from compute workloads and is optional for non-solution-
based deployments.

Management VLAN
All physical compute hosts require access to the management logical network. For IP
address planning, each physical compute host must have at least one IP address
assigned from the management logical network.

A DHCP server can automatically assign IP addresses for the management network, or
you can manually assign static IP addresses. When DHCP is the preferred IP assignment
method, we recommend that you use DHCP reservations without expiration.

The management network supports the following VLAN configurations:

Native VLAN - you aren't required to supply VLAN IDs. This is required for
solution-based installations.

Tagged VLAN - you supply VLAN IDs at the time of deployment.

The management network supports all traffic used for management of the cluster,
including Remote Desktop, Windows Admin Center, and Active Directory.

For more information, see Plan an SDN infrastructure: Management and HNV Provider.

Compute VLANs
In some scenarios, you don’t need to use SDN Virtual Networks with Virtual Extensible
LAN (VXLAN) encapsulation. Instead, you can use traditional VLANs to isolate your
tenant workloads. Those VLANs are configured on the TOR switch's port in trunk mode.
When connecting new VMs to these VLANs, the corresponding VLAN tag is defined on
the virtual network adapter.
HNV Provider Address (PA) network
The Hyper-V Network Virtualization (HNV) Provider Address (PA) network serves as the
underlying physical network for East/West (internal-internal) tenant traffic, North/South
(external-internal) tenant traffic, and to exchange BGP peering information with the
physical network. This network is only required when there's a need for deploying virtual
networks using VXLAN encapsulation for another layer of isolation and for network
multitenancy.

For more information, see Plan an SDN infrastructure: Management and HNV Provider.

Network isolation options


The following network isolation options are supported:

VLANs (IEEE 802.1Q)


VLANs allow devices that must be kept separate to share the cabling of a physical
network and yet be prevented from directly interacting with one another. This managed
sharing yields gains in simplicity, security, traffic management, and economy. For
example, a VLAN can be used to separate traffic within a business based on individual
users or groups of users or their roles, or based on traffic characteristics. Many internet
hosting services use VLANs to separate private zones from one another, allowing each
customer's servers to be grouped in a single network segment no matter where the
individual servers are located in the data center. Some precautions are needed to
prevent traffic "escaping" from a given VLAN, an exploit known as VLAN hopping.

For more information, see Understand the usage of virtual networks and VLANs.

Default network access policies and microsegmentation


Default network access policies ensure that all virtual machines (VMs) in your Azure
Stack HCI cluster are secure by default from external threats. With these policies, we'll
block inbound access to a VM by default, while giving the option to enable selective
inbound ports and thus securing the VMs from external attacks. This enforcement is
available through management tools like Windows Admin Center.

Microsegmentation involves creating granular network policies between applications


and services. This essentially reduces the security perimeter to a fence around each
application or VM. This fence permits only necessary communication between
application tiers or other logical boundaries, thus making it exceedingly difficult for
cyberthreats to spread laterally from one system to another. Microsegmentation
securely isolates networks from each other and reduces the total attack surface of a
network security incident.

Default network access policies and microsegmentation are realized as five-tuple


stateful (source address prefix, source port, destination address prefix, destination port,
and protocol) firewall rules on Azure Stack HCI clusters. Firewall rules are also known as
Network Security Groups (NSGs). These policies are enforced at the vSwitch port of each
VM. The policies are pushed through the management layer, and the SDN Network
Controller distributes them to all applicable hosts. These policies are available for VMs
on traditional VLAN networks and on SDN overlay networks.

For more information, see What is Datacenter Firewall?.


QoS for VM network adapters


You can configure Quality of Service (QoS) for a VM network adapter to limit bandwidth
on a virtual interface to prevent a high-traffic VM from contending with other VM
network traffic. You can also configure QoS to reserve a specific amount of bandwidth
for a VM to ensure that the VM can send traffic regardless of other traffic on the
network. This can be applied to VMs attached to traditional VLAN networks as well as
VMs attached to SDN overlay networks.

For more information, see Configure QoS for a VM network adapter.

Virtual networks
Network virtualization provides virtual networks to VMs similar to how server
virtualization (hypervisor) provides VMs to the operating system. Network virtualization
decouples virtual networks from the physical network infrastructure and removes the
constraints of VLAN and hierarchical IP address assignment from VM provisioning. Such
flexibility makes it easy for you to move to Infrastructure-as-a-Service (IaaS) clouds, and lets
hosters and datacenter administrators manage their infrastructure more efficiently while
maintaining the necessary multi-tenant isolation and security requirements, and supporting
overlapping VM IP addresses.

For more information, see Hyper-V Network Virtualization.

L3 networking services options


The following L3 networking service options are available:
Virtual network peering
Virtual network peering lets you connect two virtual networks seamlessly. Once peered,
for connectivity purposes, the virtual networks appear as one. The benefits of using
virtual network peering include:

Traffic between VMs in the peered virtual networks gets routed through the
backbone infrastructure through private IP addresses only. The communication
between the virtual networks doesn't require public Internet or gateways.
A low-latency, high-bandwidth connection between resources in different virtual
networks.
The ability for resources in one virtual network to communicate with resources in a
different virtual network.
No downtime to resources in either virtual network when creating the peering.

For more information, see Virtual network peering.

SDN software load balancer


Cloud Service Providers (CSPs) and enterprises that deploy Software Defined
Networking (SDN) can use Software Load Balancer (SLB) to evenly distribute customer
network traffic among virtual network resources. SLB enables multiple servers to host
the same workload, providing high availability and scalability. It's also used to provide
inbound Network Address Translation (NAT) services for inbound access to VMs, and
outbound NAT services for outbound connectivity.

Using SLB, you can scale out your load balancing capabilities using SLB VMs on the
same Hyper-V compute servers that you use for your other VM workloads. SLB supports
rapid creation and deletion of load balancing endpoints as required for CSP operations.
In addition, SLB supports tens of gigabytes per cluster, provides a simple provisioning
model, and is easy to scale out and in. SLB uses Border Gateway Protocol to advertise
virtual IP addresses to the physical network.

For more information, see What is SLB for SDN?

SDN VPN gateways


SDN Gateway is a software-based Border Gateway Protocol (BGP) capable router
designed for CSPs and enterprises that host multi-tenant virtual networks using Hyper-V
Network Virtualization (HNV). You can use RAS Gateway to route network traffic
between a virtual network and another network, either local or remote.
SDN Gateway can be used to:

Create secure site-to-site IPsec connections between SDN virtual networks and
external customer networks over the internet.

Create Generic Routing Encapsulation (GRE) connections between SDN virtual


networks and external networks. The difference between site-to-site connections
and GRE connections is that the latter isn't an encrypted connection.

For more information about GRE connectivity scenarios, see GRE Tunneling in
Windows Server.

Create Layer 3 (L3) connections between SDN virtual networks and external
networks. In this case, the SDN gateway simply acts as a router between your
virtual network and the external network.

SDN Gateway requires SDN Network Controller. Network Controller performs the
deployment of gateway pools, configures tenant connections on each gateway, and
switches network traffic flows to a standby gateway if a gateway fails.

Gateways use Border Gateway Protocol to advertise GRE endpoints and establish point-
to-point connections. SDN deployment creates a default gateway pool that supports all
connection types. Within this pool, you can specify how many gateways are reserved on
standby in case an active gateway fails.

For more information, see What is RAS Gateway for SDN?

Next steps
Learn about the two-node storage switched, non-converged network pattern.
Review two-node storage reference
pattern components for Azure Stack HCI
Article • 12/12/2022

Applies to: Azure Stack HCI, versions 22H2 and 21H2

In this article, you'll learn about which network components get deployed for two-node
reference patterns, as shown below:

VM components
The following table lists all the components running on VMs for two-node network
patterns:

| Component | Number of VMs | OS disk size | Data disk size | vCPUs | Memory |
|---|---|---|---|---|---|
| Network Controller | 1 | 100 GB | 30 GB | 4 | 4 GB |
| SDN Software Load Balancers (SLB) | 1 | 60 GB | 30 GB | 16 | 8 GB |
| SDN Gateways | 1 | 60 GB | 30 GB | 8 | 8 GB |
| OEM Management | OEM defined | OEM defined | OEM defined | OEM defined | OEM defined |
| Total | 3 + OEM | 270 GB + OEM | 90 GB + OEM | 32 + OEM | 28 GB + OEM |

Default components

Network Controller VM
The Network Controller VM is optional. If the Network Controller VM isn't deployed, the default network access policies won't be available. The Network Controller VM is also needed if you have any of the following requirements:

Create and manage virtual networks. Connect virtual machines (VMs) to virtual
network subnets.

Configure and manage micro-segmentation for VMs connected to virtual networks


or traditional VLAN-based networks.

Attach virtual appliances to your virtual networks.

Configure Quality of Service (QoS) policies for VMs attached to virtual networks or
traditional VLAN-based networks.

Optional components
The following are optional components. For more information on Software Defined
Networking (SDN), see Plan a Software Defined Network infrastructure.

SDN Load Balancer VM

The SDN Software Load Balancer (SLB) VM is used to evenly distribute customer
network traffic among multiple VMs. It enables multiple servers to host the same
workload, providing high availability and scalability. It's also used to provide inbound
Network Address Translation (NAT) services for inbound access to virtual machines, and
outbound NAT services for outbound connectivity.

SDN Gateway VM
The SDN Gateway VM is used for routing network traffic between a virtual network and
another network, either local or remote. Gateways can be used to:

Create secure site-to-site IPsec connections between SDN virtual networks and
external customer networks over the internet.

Create Generic Routing Encapsulation (GRE) connections between SDN virtual networks and external networks. The difference between site-to-site connections and GRE connections is that the latter isn't an encrypted connection. For more information about GRE connectivity scenarios, see GRE Tunneling in Windows Server.

Create Layer 3 connections between SDN virtual networks and external networks.
In this case, the SDN gateway simply acts as a router between your virtual network
and the external network.

Host service and agent components


The following components run as services or agents on the host server:

Arc host agent: Enables you to manage your Windows and Linux computers hosted
outside of Azure on your corporate network or other cloud providers.

Network Controller host agent: Allows Network Controller to manage the goal state of
the data plane, and to receive notification of events as the configuration of the data
plane changes.

Monitor host agent: Orchestrator-managed agent used for emitting observability (telemetry and diagnostics) pipeline data that uploads to Geneva (Azure Storage).

Software Load Balancer host agent: Listens for policy updates from the Network Controller. In addition, this agent programs these rules into the SDN-enabled Hyper-V virtual switches that are configured on the local computer.
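
If you want to spot-check that these host components are present and running on a node, you can query their Windows services from PowerShell. This is a minimal sketch; the service names shown (himds for the Arc agent, NcHostAgent for the Network Controller host agent, and SlbHostAgent for the Software Load Balancer host agent) are the typical defaults and are an assumption for your environment.

PowerShell

# Check the state of the host-side agents on the local server.
# Service names are assumed defaults: himds (Arc), NcHostAgent, SlbHostAgent.
Get-Service -Name himds, NcHostAgent, SlbHostAgent -ErrorAction SilentlyContinue |
    Select-Object Name, Status, StartType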

Next steps
Learn about Two-node deployment IP requirements.
Review two-node storage reference pattern IP requirements for Azure Stack HCI
Article • 11/10/2022

Applies to: Azure Stack HCI, versions 22H2 and 21H2

In this article, learn about the IP requirements for deploying a two-node network
reference pattern in your environment.

Deployments without microsegmentation and QoS enabled

| Network component | IP component | Network ATC intent | Network routing | Subnet properties | Required IPs |
| --- | --- | --- | --- | --- | --- |
| Storage 1 | 1 IP for each host | Storage | No defined gateway. IP-less L2 VLAN. | Network ATC managed subnet. Default VLAN tag 711. | 2 |
| Storage 2 | 1 IP for each host | Storage | No defined gateway. IP-less L2 VLAN. | Network ATC managed subnet. Default VLAN tag 712. | 2 |
| Management | 1 IP for each host, 1 IP for Failover Cluster, 1 IP for OEM VM (optional) | Management | Connected (outbound internet access required). Disconnected (Arc autonomous controller). | Customer-defined management VLAN. (Native VLAN preferred but trunk mode supported.) | 2 required, 1 optional |
| Total | | | | | 6 required, 1 optional for OEM VM |
Deployments with microsegmentation and QoS enabled

| Network component | IP component | Network ATC intent | Network routing | Subnet properties | Required IPs |
| --- | --- | --- | --- | --- | --- |
| Storage 1 | 1 IP for each host | Storage | No defined gateway. IP-less L2 VLAN. | Network ATC managed subnet. Default VLAN tag 711. | 2 |
| Storage 2 | 1 IP for each host | Storage | No defined gateway. IP-less L2 VLAN. | Network ATC managed subnet. Default VLAN tag 712. | 2 |
| Management | 1 IP for each host, 1 IP for Failover Cluster, 1 IP for Network Controller VM, 1 IP for Arc VM management stack VM, 1 IP for OEM VM (new) | Management | Connected (outbound internet access required). Disconnected (Arc autonomous controller). | Customer-defined management VLAN. (Native VLAN preferred but trunk mode supported.) | 5 required, 1 optional |
| Total | | | | | 9 minimum, 10 maximum |

Deployments with SDN optional services

| Network component | IP component | Network ATC intent | Network routing | Subnet properties | Required IPs |
| --- | --- | --- | --- | --- | --- |
| Storage 1 | 1 IP for each host | Storage | No defined gateway. IP-less L2 VLAN. | Network ATC managed subnet. Default VLAN tag 711. | 2 |
| Storage 2 | 1 IP for each host | Storage | No defined gateway. IP-less L2 VLAN. | Network ATC managed subnet. Default VLAN tag 712. | 2 |
| Tenant compute | Tenant VM IPs connected to corresponding VLANs | Compute | Tenant VLAN routing/access | Customer-defined, customer-managed. VLAN trunk configuration on the physical switches required. | |
| Management | 1 IP for each host, 1 IP for Failover Cluster, 1 IP for Network Controller VM, 1 IP for Arc VM management stack VM, 1 IP for OEM VM (new). Two-node: 1 Network Controller VM IP, 1 Software Load Balancer (SLB) VM IP, 1 gateway VM IP | Management | Connected (outbound internet access required). Disconnected (Arc autonomous controller). | Customer-defined management VLAN. (Native VLAN preferred but trunk mode supported.) | 7 required, 1 optional |
| HNV | 2 IPs for each host. Two-node: 1 SLB VM IP, 1 gateway VM IP | N/A | Requires default gateway to route the packets externally. | Provider Address Network VLAN. Subnet size needs to allocate hosts and SLB VMs. Potential subnet growth to be considered. | NC-managed IPs |
| Public VIPs | SLB and gateway public VIPs | N/A | Advertised through BGP | | Network Controller-managed IPs |
| Private VIPs | SLB private VIPs | N/A | Advertised through BGP | | Network Controller-managed IPs |
| GRE VIPs | GRE connections for gateway VIPs | N/A | Advertised through BGP | | Network Controller-managed IPs |
| L3 Forwarding | N/A | | Separate physical network subnet to communicate with virtual network | | |
| Total | | | | | 11 minimum, 12 maximum |

Next steps
Choose a reference pattern.
Review two-node storage reference pattern decision matrix for Azure Stack HCI
Article • 11/10/2022

Applies to: Azure Stack HCI, versions 22H2 and 21H2

Study the two-node storage reference pattern decision matrix to help decide which
reference pattern is best suited for your deployment needs:

| Feature | Storage switchless, single switch | Storage switchless, two switches | Storage switched, non-converged | Storage switched, fully converged |
| --- | --- | --- | --- | --- |
| Scalable pattern | unsuitable | unsuitable | suitable | suitable |
| HA solution | unsuitable | suitable | suitable | suitable |
| VLAN-based tenants | suitable | suitable | suitable | suitable |
| SDN L3 integration | neutral | suitable | suitable | suitable |
| Total cost of ownership (TCO) | suitable | neutral | neutral | neutral |
| Compacted/portable solution | suitable | neutral | unsuitable | unsuitable |
| RDMA performance | neutral | neutral | suitable | neutral |
| Physical switch operational costs | suitable | neutral | neutral | unsuitable |
| Physical switch routing and ACLs | neutral | neutral | neutral | neutral |

Next steps
Download Azure Stack HCI
Review SDN considerations for network reference patterns
Article • 12/12/2022

Applies to: Azure Stack HCI, versions 22H2 and 21H2

In this article, you'll review considerations when deploying Software Defined Networking
(SDN) in your Azure Stack HCI cluster.

SDN hardware requirements


When using SDN, you must ensure that the physical switches used in your Azure Stack
HCI cluster support a set of capabilities that are documented at Plan a Software Defined
Network infrastructure.

If you are using SDN Software Load Balancers (SLBs) or Generic Routing Encapsulation (GRE) gateways, you must also configure Border Gateway Protocol (BGP) peering with the top-of-rack (ToR) switches so that the SLB and GRE virtual IP addresses (VIPs) can be advertised. For more information, see Switches and routers.

SDN Network Controller


SDN Network Controller is the centralized control plane to provision and manage
networking services for your Azure Stack HCI workloads. It provides virtual network
management, microsegmentation through Network Security Groups (NSGs),
management of Quality of Service (QoS) policies, virtual appliance chaining to allow you
to bring in third-party appliances, and is also responsible for managing SLB and GRE.
SLBs use virtual first-party appliances to provide high availability to applications, while gateways are used to provide external network connectivity to workloads.

For more information about Network Controller, see What is Network Controller.

SDN configuration options


Based on your requirements, you may need to deploy a subset of the SDN
infrastructure. For example, if you want to only host customer workloads in your
datacenter, and external communication is not required, you can deploy Network
Controller and skip deploying SLB/MUX and gateway VMs. The following describes
networking feature infrastructure requirements for a phased deployment of the SDN
infrastructure.

| Feature | Deployment requirements | Network requirements |
| --- | --- | --- |
| Logical network management; NSGs for VLAN-based networks; QoS for VLAN-based networks | Network Controller | None |
| Virtual networking; User Defined Routing; ACLs for virtual networks; Encrypted subnets; QoS for virtual networks; Virtual network peering | Network Controller | HNV PA VLAN, subnet, router |
| Inbound/Outbound NAT; Load Balancing | Network Controller; SLB/MUX | BGP on HNV PA network; Private and public VIP subnets |
| GRE gateway connections | Network Controller; SLB/MUX; Gateway | BGP on HNV PA network; Private and public VIP subnets; GRE VIP subnet |
| IPSec gateway connections | Network Controller; SLB/MUX; Gateway | BGP on HNV PA network; Private and public VIP subnets |
| L3 gateway connections | Network Controller; SLB/MUX; Gateway | BGP on HNV PA network; Private and public VIP subnets; Tenant VLAN, subnet, router; BGP on tenant VLAN (optional) |

Next steps
Choose a network pattern to review.
Deploy the Azure Stack HCI operating system
Article • 04/17/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2

The first step in deploying Azure Stack HCI is to download Azure Stack HCI and install
the operating system on each server that you want to cluster. This article discusses
different ways to deploy the operating system, and using Windows Admin Center to
connect to the servers.

Note

If you've purchased Azure Stack HCI Integrated System solution hardware from the
Azure Stack HCI Catalog through your preferred Microsoft hardware partner, the
Azure Stack HCI operating system should be pre-installed. In that case, you can skip
this step and move on to Create an Azure Stack HCI cluster.

Determine hardware and network requirements


Microsoft recommends purchasing a validated Azure Stack HCI hardware/software
solution from our partners. These solutions are designed, assembled, and validated
against our reference architecture to ensure compatibility and reliability, so you get up
and running quickly. Check that the systems, components, devices, and drivers you are
using are certified for use with Azure Stack HCI. Visit the Azure Stack HCI solutions
website for validated solutions.

At minimum, you need one server, a reliable high-bandwidth, low-latency network connection between servers, and SATA, SAS, NVMe, or persistent memory drives that are
physically attached to just one server each. However, your hardware requirements may
vary depending on the size and configuration of the cluster(s) you wish to deploy. To
make sure your deployment is successful, review the Azure Stack HCI system
requirements.

Before you deploy the Azure Stack HCI operating system:

Plan your physical network requirements and host network requirements.


If your deployment will stretch across multiple sites, determine how many servers
you will need at each site, and whether the cluster configuration will be
active/passive or active/active. For more information, see Stretched clusters
overview.
Carefully choose drives and plan volumes to meet your storage performance and
capacity requirements.

For Azure Kubernetes Service on Azure Stack HCI and Windows Server requirements, see
AKS requirements on Azure Stack HCI.

Gather information
To prepare for deployment, you'll need to take note of the server names, domain names,
computer account names, RDMA protocols and versions, and VLAN ID for your
deployment. Gather the following details about your environment:

Server name: Get familiar with your organization's naming policies for computers,
files, paths, and other resources. If you need to provision several servers, each
should have a unique name.

Domain name: Get familiar with your organization's policies for domain naming
and domain joining. You'll be joining the servers to your domain, and you'll need
to specify the domain name.

Computer account names: Servers that you want to add as cluster nodes have
computer accounts. These computer accounts need to be moved into their own
dedicated organizational unit (OU).

Organizational unit (OU): If not already done so, create a dedicated OU for your
computer accounts. Consult your domain administrator about creating an OU. For
detailed information, see Create a failover cluster.

Static IP addresses: Azure Stack HCI requires static IP addresses for storage and
workload (VM) traffic and doesn't support dynamic IP address assignment through
DHCP for this high-speed network. You can use DHCP for the management
network adapter unless you're using two in a team, in which case again you need
to use static IPs. Consult your network administrator about the IP address you
should use for each server in the cluster.

RDMA networking: There are two types of RDMA protocols: iWarp and RoCE. Note
which one your network adapters use, and if RoCE, also note the version (v1 or v2).
For RoCE, also note the model of your top-of-rack switch.

VLAN ID: Note the VLAN ID to be used for the network adapters on the servers, if
any. You should be able to obtain this from your network administrator.
Site names: For stretched clusters, two sites are used for disaster recovery. You can
set up sites using Active Directory Domain Services, or the Create cluster wizard
can automatically set them up for you. Consult your domain administrator about
setting up sites.

Install Windows Admin Center


Windows Admin Center is a locally deployed, browser-based app for managing Azure
Stack HCI. The simplest way to install Windows Admin Center is on a local management
PC (desktop mode), although you can also install it on a server (service mode).

If you install Windows Admin Center on a server, tasks that require CredSSP, such as
cluster creation and installing updates and extensions, require using an account that's a
member of the Gateway Administrators group on the Windows Admin Center server. For
more information, see the first two sections of Configure User Access Control and
Permissions.

Prepare hardware for deployment


After you've acquired the server hardware for your Azure Stack HCI solution, it's time to
rack and cable it. Use the following steps to prepare the server hardware for deployment
of the operating system.

1. Rack all server nodes that you want to use in your server cluster.
2. Connect the server nodes to your network switches.
3. Configure the BIOS or the Unified Extensible Firmware Interface (UEFI) of your
servers as recommended by your Azure Stack HCI hardware vendor to maximize
performance and reliability.

Note

If you are preparing a single server deployment, see the Azure Stack HCI OS single
server overview

Operating system deployment options


You can deploy the Azure Stack HCI operating system in the same ways that you're used
to deploying other Microsoft operating systems:

Server manufacturer pre-installation.


Headless deployment using an answer file.
System Center Virtual Machine Manager (VMM).
Network deployment.
Manual deployment by connecting either a keyboard and monitor directly to the
server hardware in your datacenter, or by connecting a KVM hardware device to
the server hardware.

Server manufacturer pre-installation


For enterprise deployment of the Azure Stack HCI operating system, we recommend
Azure Stack HCI Integrated System solution hardware from your preferred hardware
partner. The solution hardware arrives with the operating system preinstalled, and
supports using Windows Admin Center to deploy and update drivers and firmware from
the hardware manufacturer.

Solution hardware ranges from 1 to 16 nodes and is tested and validated by Microsoft and partner vendors. To find Azure Stack HCI solution hardware from your preferred hardware partner, see the Azure Stack HCI Catalog.

Headless deployment
You can use an answer file to do a headless deployment of the operating system. The
answer file uses an XML format to define configuration settings and values during an
unattended installation of the operating system.

For this deployment option, you can use Windows System Image Manager to create an
unattend.xml answer file to deploy the operating system on your servers. Windows
System Image Manager creates your unattend answer file through a graphical tool with
component sections to define the "answers" to the configuration questions, and then
ensure the correct format and syntax in the file.
The Windows System Image Manager
tool is available in the Windows Assessment and Deployment Kit (Windows ADK). To get
started: Download and install the Windows ADK.

System Center Virtual Machine Manager (VMM)


deployment
You can use System Center 2022 to deploy the Azure Stack HCI, version 21H2 operating
system on bare-metal hardware, as well as to cluster and manage the servers. For more
information about using VMM to do a bare-metal deployment of the operating system,
see Provision a Hyper-V host or cluster from bare metal computers.
Important

You can't use Microsoft System Center Virtual Machine Manager 2019 to deploy or
manage clusters running Azure Stack HCI, version 21H2. If you're using VMM 2019
to manage your Azure Stack HCI, version 20H2 cluster, don't attempt to upgrade
the cluster to version 21H2 without first installing System Center 2022.

Network deployment
Another option is to install the Azure Stack HCI operating system over the network
using Windows Deployment Services.

Manual deployment
To manually deploy the Azure Stack HCI operating system on the system drive of each
server to be clustered, install the operating system via your preferred method, such as
booting from a DVD or USB drive. Complete the installation process using the Server
Configuration tool (SConfig) to prepare the server or servers for clustering. To learn
more about the tool, see Configure a Server Core installation with SConfig.

To manually install the Azure Stack HCI operating system:

1. Start the Install Azure Stack HCI wizard on the system drive of the server where you
want to install the operating system.

2. Choose the language to install or accept the default language settings, select Next,
and then on next page of the wizard, select Install now.
3. On the Applicable notices and license terms page, review the license terms, select
the I accept the license terms checkbox, and then select Next.

4. On the Which type of installation do you want? page, select Custom: Install the
newer version of Azure Stack HCI only (advanced).

Note

Upgrade installations are not supported in this release of the operating system.
5. On the Where do you want to install Azure Stack HCI? page, either confirm the
drive location where you want to install the operating system or update it, and
then select Next.

6. The Installing Azure Stack HCI page displays to show status on the process.
Note

The installation process restarts the operating system twice to complete the
process, and displays notices on starting services before opening an
Administrator command prompt.

7. At the Administrator command prompt, select Ok to change the user's password before signing in to the operating system, and press Enter.

8. At the Enter new credential for Administrator prompt, enter a new password, enter
it again to confirm it, and then press Enter.

9. At the Your password has been changed confirmation prompt, press Enter.
Configure the server using SConfig
Now you're ready to use the Server Configuration tool (SConfig) to perform important
tasks. To use SConfig, log on to the server running the Azure Stack HCI operating
system. This could be locally via a keyboard and monitor, or using a remote
management (headless or BMC) controller, or Remote Desktop. The SConfig tool opens
automatically when you log on to the server.

From the Welcome to Azure Stack HCI window (SConfig tool), you can perform these
initial configuration tasks on each server:

Configure networking or confirm that the network was configured automatically using Dynamic Host Configuration Protocol (DHCP).
Rename the server if the default automatically generated server name does not
suit you.
Join the server to an Active Directory domain.
Add your domain user account or designated domain group to local
administrators.
Enable access to Windows Remote Management (WinRM) if you plan to manage
the server from outside the local subnet and decided not to join domain yet. (The
default Firewall rules allow management both from local subnet and from any
subnet within your Active Directory domain services.)

For more detail, see Server Configuration Tool (SConfig).
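
If you prefer to script these initial settings instead of stepping through the SConfig menu, the equivalent built-in cmdlets can be run from a PowerShell session on each server. This is a minimal sketch only; the computer name, domain, and group names are placeholders for your environment.

PowerShell

# Allow remote management (WinRM) through the firewall; the default rules
# already permit it from the local subnet and from the AD domain after joining.
Enable-NetFirewallRule -DisplayGroup "Windows Remote Management"

# Rename the server and join it to the domain in one step, then restart.
# The computer name and domain shown here are examples.
Add-Computer -DomainName "contoso.com" -NewName "HCI-Node1" -Credential (Get-Credential) -Restart

# After the restart, add your admin group to the local Administrators group (example group).
Add-LocalGroupMember -Group "Administrators" -Member "contoso\HCI-Admins"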

After configuring the operating system as needed with SConfig on each server, you're
ready to use the Cluster Creation wizard in Windows Admin Center to cluster the
servers.

Note

If you're installing Azure Stack HCI on a single server, you must use PowerShell to
create the cluster.

Next steps
To perform the next management task related to this article, see:

Create an Azure Stack HCI cluster


Create an Azure Stack HCI cluster using Windows Admin Center
Article • 04/17/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2

Now that you've deployed the Azure Stack HCI operating system, you'll learn how to use
Windows Admin Center to create an Azure Stack HCI cluster that uses Storage Spaces
Direct, and, optionally, Software Defined Networking. The Create Cluster wizard in
Windows Admin Center will do most of the heavy lifting for you. If you'd rather do it
yourself with PowerShell, see Create an Azure Stack HCI cluster using PowerShell. The
PowerShell article is also a good source of information for what is going on under the
hood of the wizard and for troubleshooting purposes.
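
For orientation, the PowerShell path boils down to validating the nodes, creating the cluster, and then enabling Storage Spaces Direct. The following is a minimal sketch only; the server names, cluster name, and static address are examples, and the linked PowerShell article covers the full procedure, including host networking.

PowerShell

$nodes = "Server1", "Server2"

# Validate the servers (names are examples).
Test-Cluster -Node $nodes -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"

# Create the cluster without adding storage yet; the static address is an example.
New-Cluster -Name "HCI-Cluster1" -Node $nodes -StaticAddress 10.0.0.200 -NoStorage

# Enable Storage Spaces Direct on the new cluster.
Enable-ClusterStorageSpacesDirect -CimSession "HCI-Cluster1"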

Note

If you are doing a single server installation of Azure Stack HCI 21H2, use
PowerShell to create the cluster.

If you're interested in testing Azure Stack HCI but have limited or no spare hardware, see
the Azure Stack HCI Evaluation Guide, where we'll walk you through experiencing Azure
Stack HCI using nested virtualization inside an Azure VM. Or try the Create a VM-based
lab for Azure Stack HCI tutorial to create your own private lab environment using nested
virtualization on a server of your choice to deploy VMs running Azure Stack HCI for
clustering.

Cluster creation workflow


Here's the workflow for creating a cluster in Windows Admin Center:

1. Complete the prerequisites.


2. Start the Create Cluster wizard.
3. Complete the following steps in the Create Cluster wizard:
a. Step 1: Get Started. Ensures that each server meets the prerequisites and
features needed for cluster join.
b. Step 2: Networking. Assigns and configures network adapters and creates the
virtual switches for each server.
c. Step 3: Clustering. Validates the cluster is set up correctly. For stretched clusters,
also sets up the two sites.
d. Step 4: Storage. Configures Storage Spaces Direct.
e. Step 5: SDN. (Optional) Sets up a Network Controller for SDN deployment.

After you're done creating a cluster in the Create Cluster wizard, complete these post-
cluster creation steps:

Set up a cluster witness. This is highly recommended for all clusters with at least
two nodes.
Register with Azure. Your cluster is not fully supported until your registration is
active.
Validate an Azure Stack HCI cluster. Your cluster is ready to work in a production
environment after completing this step.

Prerequisites
Before you run the Create Cluster wizard in Windows Admin Center, you must complete
the following prerequisites.

Warning

Running the wizard before completing the prerequisites can result in a failure to
create the cluster.

Review the hardware and related requirements in System requirements.

Consult with your networking team to identify and understand Physical network
requirements, Host network requirements, and Firewall requirements. Especially
review the Network Reference patterns, which provide example network designs.
Also, determine how you'd like to configure host networking, using Network ATC
or manually.

Install the Azure Stack HCI operating system on each server in the cluster. See
Deploy the Azure Stack HCI operating system.

Obtain an account that's a member of the local Administrators group on each server.

Have at least two servers to cluster; four if creating a stretched cluster (two in each
site). To instead deploy Azure Stack HCI on a single server, see Deploy Azure Stack
HCI on a single server.
Ensure all servers are in the same time zone as your local domain controller.

Install the latest version of Windows Admin Center on a PC or server for management. See Install Windows Admin Center.

Ensure that Windows Admin Center and your domain controller are not installed
on the same system. Also, ensure that the domain controller is not hosted on the
Azure Stack HCI cluster or one of the nodes in the cluster.

If you're running Windows Admin Center on a server (instead of a local PC), use an
account that's a member of the Gateway Administrators group, or the local
Administrators group on the Windows Admin Center server.

Verify that your Windows Admin Center management computer is joined to the
same Active Directory domain in which you'll create the cluster, or joined to a fully
trusted domain. The servers that you'll cluster don't need to belong to the domain
yet; they can be added to the domain during cluster creation.

If you're using an integrated system from a Microsoft hardware partner, install the
latest version of vendor extensions on Windows Admin Center to help keep the
integrated hardware and firmware up to date. To install them, open Windows
Admin Center and click Settings (gear icon) at the upper right. Select any
applicable hardware vendor extensions, and click Install.

For stretched clusters, set up your two sites beforehand in Active Directory.
Alternatively, the wizard can set them up for you too. For more information about
stretched clusters, see the Stretched clusters overview.

Start the Create Cluster wizard


To start the Create Cluster wizard in Windows Admin Center:

1. Log in to Windows Admin Center.

2. Under All connections, click Add.

3. In the Add or create resources panel, under Server clusters, select Create new.

4. Under Choose the cluster type, select Azure Stack HCI.


5. Under Select server locations, select one the following:

All servers in one site


Servers in two sites (for stretched cluster)

6. When finished, click Create. You'll see the Create Cluster wizard.

Proceed to the next step in the cluster creation workflow, Step 1: Get started.

Step 1: Get started


Step 1 of the wizard walks you through making sure all prerequisites are met, adding
the server nodes, installing needed features, and then restarting each server if needed.

1. Review 1.1 Check the prerequisites listed in the wizard to ensure each server node
is cluster-ready. When finished, click Next.
2. On 1.2 Add servers, enter your account username using the format
domain\username. Enter your password, then click Next. This account must be a
member of the local Administrators group on each server.

3. Enter the name of the first server you want to add, then click Add. When you add
servers, make sure to use a fully qualified domain name.

4. Repeat Step 3 for each server that will be part of the cluster. When you're finished,
select Next.

5. If needed, on 1.3 Join a domain​, specify the domain to join the servers to and the
account to use. You can optionally rename the servers if you want. Then click Next.

6. On 1.4 Install features, review and add features as needed. When finished, click
Next.

The wizard lists and installs required features for you, including the following
options:

Data Deduplication
Hyper-V
BitLocker Drive Encryption
Data Center Bridging (for RoCEv2 network adapters)
Failover Clustering
Network ATC
Active Directory module for Windows PowerShell
Hyper-V module for Windows PowerShell

7. On 1.5 Install updates, click Install updates as needed to install any operating
system updates. When complete, click Next.

8. On 1.6 Install hardware updates, click Get updates as needed to get available
vendor hardware updates. If you don't install the updates now, we recommend
manually installing the latest networking drivers before continuing. Updated
drivers are required if you want to use Network ATC to configure host networking.

Note

Some extensions require extra configuration on the servers or your network, such as configuring the baseboard management controller (BMC). Consult your vendor's documentation for details.
9. Follow the vendor-specific steps to install the updates on your hardware. These
steps include performing symmetry and compliance checks on your hardware to
ensure a successful update. You may need to re-run some steps.

10. On 1.7 Restart servers, click Restart servers if required. Verify that each server has
successfully started.

11. On 1.8 Choose host networking, select one of the following:

Use Network ATC to deploy and manage networking (Recommended). We recommend using this option for configuring host networking. Network ATC
provides an intent-based approach to host network deployment and helps
simplify the deployment and network configuration management for Azure
Stack HCI clusters. For more information about using Network ATC, see
Network ATC.
Manually configure host networking. Select this option to manually
configure host networking. For more information about configuring RDMA
and Hyper-V host networking for Azure Stack HCI, see Host network
requirements.

12. Select Next: Networking to proceed to Step 2: Networking.

Step 2: Networking
Step 2 of the wizard walks you through configuring the host networking elements for
your cluster. RDMA (both iWARP and RoCE) network adapters are supported.

Depending on the option you selected in 1.8 Choose host networking of Step 1: Get
started above, refer to one of the following tabs to configure host networking for your
cluster:

Use Network ATC to deploy and manage networking (Recommended)

This is the recommended option for configuring host networking. For more
information about Network ATC, see Network ATC overview.
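
The intents you define in step 2.2 below map to Network ATC's Add-NetIntent cmdlet, which you can also run directly if you prefer scripting. A minimal sketch, assuming two example adapter names carrying a fully converged (management, compute, and storage) intent:

PowerShell

# Define a single converged intent across two adapters (adapter names are examples).
Add-NetIntent -Name "ConvergedIntent" -Management -Compute -Storage -AdapterName "pNIC01", "pNIC02"

# Review the provisioning status of the intent.
Get-NetIntentStatus -Name "ConvergedIntent"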

1. On 2.1 Verify network adapters, review the list displayed, and exclude or add
any adapters you want to cluster. Wait for a couple of minutes for the
adapters to show up. Only adapters with matching names, interface
descriptions, and link speed on each server are displayed. All other adapters
are hidden.

2. If you don't see your adapters in the list, click Show hidden adapters to see all
the available adapters and then select the missing adapters.

3. On the Select the cluster network adapters page, select the checkbox for any
adapters listed that you want to cluster. The adapters must have matching
names, interface descriptions, and link speeds on each server. You can rename
the adapters to match, or just select the matching adapters. When finished,
click Close.

4. The selected adapters will now display under Adapters available on all
servers. When finished selecting and verifying adapters, click Next.

5. On 2.2 Define intents, under Intent 1, do the following:

For Traffic types, select a traffic type from the dropdown list. You can
add the Management and Storage intent types to exactly one intent
while the Compute intent type can be added to one or more intents. For
more information, see Network ATC traffic types.
For Intent name, enter a friendly name for the intent.
For Network adapters, select an adapter from the dropdown list.
(Optional) Click Select another adapter for this traffic if needed.

For recommended intent configurations, see the network reference pattern that matches your deployment:

Storage switchless, single switch


Storage switchless, two switches
Storage switched, non-converged
Storage switched, fully converged

6. (Optional) After an intent is added, select Customize network settings to modify its network settings. When finished, select Save.

7. (Optional) To add another intent, select Add an intent, and repeat step 5 and
optionally step 6.

8. When finished defining network intents, select Next.

9. On 2.3: Provide network details, for each storage traffic adapter listed, enter
the following or use the default values (recommended):

Subnet mask/CIDR
VLAN ID
IP address (this is usually on a private subnet such as 10.71.1.x and
10.71.2.x)

10. Select Next: Clustering to proceed to Step 3: Clustering.


Step 3: Clustering
Step 3 of the wizard makes sure everything thus far is set up correctly, automatically sets
up two sites in the case of stretched cluster deployments, and then actually creates the
cluster. You can also set up your sites beforehand in Active Directory.

1. On 3.1 Create the cluster, specify a unique name for the cluster.

2. Under IP address, do one of the following:

Specify one or more static addresses. The IP address must be entered in the
following format: IP address/current subnet length. For example:
10.0.0.200/24.
Assign address dynamically with DHCP.

3. When finished, select Create cluster. This can take a while to complete.

If you get the error "Failed to reach cluster through DNS," select the Retry
connectivity checks button. You might have to wait several hours before it
succeeds on larger networks due to DNS propagation delays.

Important

If cluster creation fails, click the Retry connectivity checks button rather than the Back button. If you select Back, the Cluster Creation wizard exits prematurely and can potentially reset the entire process.

If you encounter issues with deployment after the cluster is created and you want
to restart the Cluster Creation wizard, first remove (destroy) the cluster. To do so,
see Remove a cluster.

4. The next step appears only if you selected Use Network ATC to deploy and
manage networking (Recommended) for step 1.8 Choose host networking.

In Deploy host networking settings, select Deploy to apply the Network ATC
intents you defined earlier. If you chose to manually deploy host networking in
step 1.8 of the Cluster Creation wizard, you won't see this page.

5. On 3.2 Deploy host networking settings, select Deploy to apply the Network ATC intents you defined earlier. This can take a few minutes to complete. When finished, select Next.

6. On 3.3 Validate cluster, select Validate. Validation can take several minutes. Note
that the in-wizard validation is not the same as the post-cluster creation validation
step, which performs additional checks to catch any hardware or configuration
problems before the cluster goes into production. If you experience issues with
cluster validation, see Troubleshoot cluster validation reporting.

If the Credential Security Service Provider (CredSSP) pop-up appears, select Yes
to temporarily enable CredSSP for the wizard to continue. Once your cluster is
created and the wizard has completed, you'll disable CredSSP to increase security.
If you experience issues with CredSSP, see Troubleshoot CredSSP.

7. Review all validation statuses, download the report to get detailed information on
any failures, make changes, then click Validate again as needed. You can
Download report as well. Repeat again as necessary until all validation checks
pass. When all is OK, click Next.

8. Select Advanced. You have a couple of options here:

Register the cluster with DNS and Active Directory


Add eligible storage to the cluster (recommended)

9. Under Networks, select whether to Use all networks (recommended) or Specify one or more networks not to use.

10. When finished, click Create cluster.

11. For stretched clusters, on 3.3 Assign servers to sites, name the two sites that will
be used.

12. Next assign each server to a site. You'll set up replication across sites later. When
finished, click Apply changes.

13. Select Next: Storage to proceed to Step 4: Storage.

Step 4: Storage
Complete these steps after finishing the Create Cluster wizard.
Step 4 walks you through
setting up Storage Spaces Direct for your cluster.

1. On 4.1 Clean drives, you can optionally select Erase drives if it makes sense for
your deployment.

2. On 4.2 Check drives, click the > icon next to each server to verify that the disks are
working and connected. If all is OK, click Next.

3. On 4.3 Validate storage, click Next.


4. Download and review the validation report. If all is good, click Next. If not, run
Validate again.

5. On 4.4 Enable Storage Spaces Direct, click Enable.

6. Download and review the report. When all is good, click Finish.

7. Select Go to connections list.

8. After a few minutes, you should see your cluster in the list. Select it to view the
cluster overview page.

It can take some time for the cluster name to be replicated across your domain,
especially if workgroup servers have been newly added to Active Directory.
Although the cluster might be displayed in Windows Admin Center, it might not be
available to connect to yet.

If resolving the cluster isn't successful after some time, in most cases you can
substitute a server name instead of the cluster name.

9. (Optional) Select Next: SDN to proceed to Step 5: SDN.

Step 5: SDN (optional)


This optional step walks you through setting up the Network Controller component of
Software Defined Networking (SDN). Once the Network Controller is set up, you can
configure other SDN components such as Software Load Balancer (SLB) and RAS
Gateway as per your requirements. See the Phased deployment section of the planning
article to understand what other SDN components you might need.

You can also deploy Network Controller using SDN Express scripts. See Deploy an SDN
infrastructure using SDN Express.

Note

The Create Cluster wizard does not currently support configuring SLB And RAS
gateway. You can use SDN Express scripts to configure these components. Also,
SDN is not supported or available for stretched clusters.

1. Under Host, enter a name for the Network Controller. This is the DNS name used
by management clients (such as Windows Admin Center) to communicate with
Network Controller. You can also use the default populated name.
2. Download the Azure Stack HCI VHDX file. For more information, see Download the
VHDX file.
3. Specify the path where you downloaded the VHDX file. Use Browse to find it
quicker.
4. Specify the number of VMs to be dedicated for Network Controller. Three VMs are
strongly recommended for production deployments.
5. Under Network, enter the VLAN ID of the management network. Network
Controller needs connectivity to same management network as the Hyper-V hosts
so that it can communicate and configure the hosts.
6. For VM network addressing, select either DHCP or Static.
7. If you selected DHCP, enter the name for the Network Controller VMs. You can
also use the default populated names.
8. If you selected Static, do the following:

Specify an IP address.
Specify a subnet prefix.
Specify the default gateway.
Specify one or more DNS servers. Click Add to add additional DNS servers.

9. Under Credentials, enter the username and password used to join the Network
Controller VMs to the cluster domain.
10. Enter the local administrative password for these VMs.
11. Under Advanced, enter the path to the VMs. You can also use the default
populated path.
12. Enter values for MAC address pool start and MAC address pool end. You can also
use the default populated values.
13. When finished, click Next.
14. Wait until the wizard completes its job. Stay on this page until all progress tasks
are complete. Then click Finish.

Note

After Network Controller VM(s) are created, you must configure dynamic DNS
updates for the Network Controller cluster name on the DNS server.

If Network Controller deployment fails, do the following before you try this again:

Stop and delete any Network Controller VMs that the wizard created.

Clean up any VHD mount points that the wizard created.

Ensure you have at least 50-100GB of free space on your Hyper-V hosts.
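
The cleanup itself can be done with standard Hyper-V and storage cmdlets on the affected host. A minimal sketch, assuming the wizard-created VM names start with the prefix you chose for Network Controller and that the VHDX path is one you know; both values below are examples:

PowerShell

# Stop and remove the Network Controller VMs the wizard created (name filter is an example).
Get-VM -Name "NC-*" | Stop-VM -Force -Passthru | Remove-VM -Force

# Dismount any VHDX the wizard left mounted (path is an example).
Dismount-DiskImage -ImagePath "C:\ClusterStorage\Volume1\NC\AzureStackHCI.vhdx"

# Confirm free space on the host volumes.
Get-Volume | Select-Object DriveLetter, FileSystemLabel, SizeRemaining, Size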

Step 6: Remove a Cluster (optional)


There are situations in which you may need to actually remove the cluster which you
created in Step 3. If so, choose the Remove the Cluster option in the Cluster Creation
wizard.

For more information about removing a cluster, see Remove a cluster.

Next steps
To perform the next management task related to this article, see:

Set up a cluster witness


Set up a cluster witness
Article • 06/28/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2; Windows Server 2022,
Windows Server 2019

This article describes how to set up an Azure Stack HCI or Windows Server cluster with a
cluster witness in Azure (known as a cloud witness).

We recommend setting up a cluster witness for clusters with two, three, or four nodes.
The witness helps the cluster determine which nodes have the most up-to-date cluster
data if some nodes can't communicate with the rest of the cluster. You can host the
cluster witness on a file share located on another server, or use a cloud witness.

To learn more about cluster witnesses and quorum, see Understanding cluster and pool
quorum on Azure Stack HCI. To manage the witness, including setting a file share
witness, see Change cluster settings.

Before you begin


Before you can create a cloud witness, you must have an Azure account and
subscription, and register your Azure Stack HCI cluster with Azure. See the following
articles for more information:

Make sure that port 443 is open in your firewalls and that *.core.windows.net is
included in any firewall allow lists you're using between the cluster and Azure
Storage. For details, see Recommended firewall URLs.
If your network uses a proxy server for internet access, you must configure proxy
settings for Azure Stack HCI.
Create an Azure account.
If applicable, create an additional Azure subscription.
Connect Azure Stack HCI to Azure.
Make sure DNS is available for the cluster.
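
You can quickly check the outbound HTTPS path from a cluster node before configuring the witness. A minimal sketch; the storage account name is a placeholder for the one you create in the next section:

PowerShell

# Verify that TCP 443 is reachable for the Azure blob endpoint (account name is an example).
Test-NetConnection -ComputerName "mycloudwitness.blob.core.windows.net" -Port 443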

Create an Azure storage account


This section describes how to create an Azure storage account. This account is used to
store an Azure blob file used for arbitration for a specific cluster. You can use the same
Azure storage account to configure a cloud witness for multiple clusters.
1. Sign in to the Azure portal .

2. On the Azure portal home menu, under Azure services, select Storage accounts. If
this icon is missing, select Create a resource to create a Storage accounts resource
first.

3. On the Storage accounts page, select New.

4. On the Create storage account page, complete the following:


a. Select the Azure Subscription to apply the storage account to.
b. Select the Azure Resource group to apply the storage account to.
c. Enter a Storage account name.

Storage account names must be between 3 and 24 characters in length and may
contain numbers and lowercase letters only. This name must also be unique
within Azure.
d. Select a Location that is closest to you physically.
e. For Performance, select Standard.
f. For Account kind, select Storage general purpose.
g. For Replication, select Locally-redundant storage (LRS).
h. When finished, click Review + create.


5. Ensure that the storage account passes validation and then review account
settings. When finished, click Create.

6. It may take a few seconds for account deployment to occur in Azure. When
deployment is complete, click Go to resource.
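
If you prefer to script this step, the same kind of storage account can be created with the Az PowerShell module. This is a sketch under assumptions: the resource group, account name, and region are examples, the Kind value shown is one common choice, and you need to be signed in with Connect-AzAccount first.

PowerShell

# Sign in, then create a general-purpose, locally redundant storage account (names are examples).
Connect-AzAccount
New-AzStorageAccount -ResourceGroupName "hci-rg" -Name "mycloudwitness" -Location "eastus" -SkuName Standard_LRS -Kind StorageV2

# Retrieve the primary access key (key1) used when configuring the cloud witness.
(Get-AzStorageAccountKey -ResourceGroupName "hci-rg" -Name "mycloudwitness")[0].Value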

Copy the access key and endpoint URL


When you create an Azure storage account, the process automatically generates two
access keys, a primary key (key1) and a secondary key (key2). For the first time creation
of a cloud witness, key1 is used. The endpoint URL is also generated automatically.

An Azure cloud witness uses a blob file for storage, with an endpoint automatically generated in the form storage_account_name.blob.core.windows.net.

Note

An Azure cloud witness uses HTTPS (default port 443) to establish communication
with the Azure blob service. Ensure that the HTTPS port is accessible.
Copy the account name and access key
1. In the Azure portal, under Settings, select Access keys.

2. Select Show keys to display key information.

3. Click the copy-and-paste icon to the right of the Storage account name and key1
fields and paste each text string to Notepad or other text editor.

Copy the endpoint URL (optional)


The endpoint URL is optional and may not be needed for a cloud witness.

1. In the Azure portal, select Properties.

2. Select Show keys to display endpoint information.

3. Under Blob service, click the copy-and-paste icon to the right of the Blob service
field and paste the text string to Notepad or other text editor.


Create a cloud witness using Windows Admin
Center
Now you are ready to create a witness instance for your cluster using Windows Admin
Center.

1. In Windows Admin Center, select Cluster Manager from the top drop-down arrow.

2. Under Cluster connections, select the cluster.

3. Under Tools, select Settings.

4. In the right pane, select Witness.

5. For Witness type, select one of the following:

Cloud witness - enter your Azure storage account name, access key, and
endpoint URL, as described previously
File share witness - enter the file share path "(//server/share)"

6. For a cloud witness, for the following fields, paste the text strings you copied
previously for:
a. Azure storage account name
b. Azure storage access key
c. Azure service endpoint

7. When finished, click Save. It might take a bit for the information to propagate to
Azure.

Note
The third option, Disk witness, is not suitable for use in stretched clusters.

Create a cloud witness using Windows


PowerShell
Alternatively, you can create a witness instance for your cluster using PowerShell.

Use the following cmdlet to create an Azure cloud witness. Enter the Azure storage
account name and access key information as described previously:

PowerShell

Set-ClusterQuorum -Cluster "Cluster1" -CloudWitness -AccountName "AzureStorageAccountName" -AccessKey "AzureStorageAccountAccessKey"

Use the following cmdlet to create a file share witness. Enter the path to the file server
share:

PowerShell

Set-ClusterQuorum -FileShareWitness "\\fileserver\share" -Credential (Get-Credential)
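
To confirm the witness took effect, you can query the cluster's quorum configuration afterward. A quick check; the cluster name is an example:

PowerShell

Get-ClusterQuorum -Cluster "Cluster1"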

Next steps
To perform the next management task related to this article, see:

Connect Azure Stack HCI to Azure

For more information on cluster quorum, see Understanding cluster and pool
quorum on Azure Stack HCI.

For more information about creating and managing Azure Storage Accounts, see
Create a storage account.
Register Azure Stack HCI with Azure
Article • 05/12/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2

Now that you've deployed the Azure Stack HCI operating system and created a cluster,
you must register it with Azure.

This article describes how to register Azure Stack HCI with Azure via Windows Admin
Center or PowerShell. For information on how to manage cluster registration, see
Manage cluster registration.

About Azure Stack HCI registration


Azure Stack HCI is delivered as an Azure service. As per the Azure online services terms,
you must register your cluster within 30 days of installation. Your cluster isn't fully
supported until your registration is active. If you don't register your cluster with Azure
upon deployment, or if your cluster is registered but hasn't connected to Azure for more
than 30 days, the system won't allow new virtual machines (VMs) to be created or
added. For more information, see Job failure when attempting to create VM.

After registration, an Azure Resource Manager resource is created to represent the on-
premises Azure Stack HCI cluster. Starting with Azure Stack HCI, version 21H2,
registering a cluster automatically creates an Azure Arc-enabled server resource for each server in the Azure Stack HCI cluster. This Azure Arc integration extends the Azure
management plane to Azure Stack HCI. The Azure Arc integration enables periodic
syncing of information between the Azure resource and the on-premises clusters.

Prerequisites
Before you begin cluster registration, make sure the following prerequisites are in place:

Azure Stack HCI system deployed and online. Make sure the system is deployed
and all servers are online.

Network connectivity. Azure Stack HCI needs to periodically connect to the Azure
public cloud. For information on how to prepare your firewalls and set up a proxy
server, see Firewall requirements for Azure Stack HCI.
Azure subscription and permissions. Make sure you have an Azure subscription
and you know the Azure region where the cluster resources should be created. For
more information about Azure subscription and supported Azure regions, see
Azure requirements.

Management computer. Make sure you have access to a management computer


with internet access. Your management computer must be joined to the same
Active Directory domain in which you've created your Azure Stack HCI cluster.

Windows Admin Center. If you're using Windows Admin Center to register the
cluster, make sure you:

Install Windows Admin Center on a management computer and register Windows Admin Center with Azure. For registration, use the same Azure Active
Directory (tenant) ID that you plan to use for the cluster registration. To get your
Azure subscription ID, visit the Azure portal, navigate to Subscriptions, and
copy/paste your ID from the list. To get your tenant ID, visit the Azure portal,
navigate to Azure Active Directory, and copy/paste your tenant ID.

To register your cluster in Azure China, install Windows Admin Center version
2103.2 or later.

Azure policies. Make sure you don't have any conflicting Azure policies that might
interfere with cluster registration. Some of the common conflicting policies can be:

Resource group naming: Azure Stack HCI registration provides two configuration parameters for naming resource groups: -ResourceGroupName and -ArcServerResourceGroupName. See Register-AzStackHCI for details on resource group naming. Make sure that the naming doesn't conflict with the existing policies.

Resource group tags: Currently Azure Stack HCI does not support adding tags
to resource groups during cluster registration. Make sure your policy accounts
for this behavior.

.msi download: Azure Stack HCI downloads the Arc agent on the cluster nodes
during cluster registration. Make sure you don't restrict these downloads.

Credentials lifetime: By default, the Azure Stack HCI service requests two years
of credential lifetime. Make sure your Azure policy doesn't have any
configuration conflicts.

Note
If you have a separate resource group for Arc-for-Server resources, we recommend using a resource group that contains only the Arc-for-Server resources related to Azure Stack HCI. The Azure Stack HCI resource provider has permissions to manage any other Arc-for-Server resources in the ArcServer resource group.

Assign Azure permissions for registration


This section describes how to assign Azure permissions for registration from the Azure
portal or using PowerShell.

Assign Azure permissions from the Azure portal


If your Azure subscription is through an EA or CSP, ask your Azure subscription admin to
assign Azure subscription level privileges of:

User Access Administrator role: Required to Arc-enable each server of an Azure Stack HCI cluster.

Contributor role: Required to register and unregister the Azure Stack HCI cluster.

Assign Azure permissions using PowerShell


Some admins may prefer a more restrictive option. In this case, it's possible to create a
custom Azure role specific for Azure Stack HCI registration. The following procedure
provides a typical set of permissions to the custom role; to set more restrictive
permissions, see How do I use a more restricted custom permissions role?
1. Create a JSON file called customHCIRole.json with the following content. Make sure to
   change <subscriptionId> to your Azure subscription ID. To get your subscription ID, visit
   the Azure portal, navigate to Subscriptions, and copy/paste your ID from the list.

   JSON

   {
       "Name": "Azure Stack HCI registration role",
       "Id": null,
       "IsCustom": true,
       "Description": "Custom Azure role to allow subscription-level access to register Azure Stack HCI",
       "Actions": [
           "Microsoft.Resources/subscriptions/resourceGroups/read",
           "Microsoft.Resources/subscriptions/resourceGroups/write",
           "Microsoft.Resources/subscriptions/resourceGroups/delete",
           "Microsoft.AzureStackHCI/register/action",
           "Microsoft.AzureStackHCI/Unregister/Action",
           "Microsoft.AzureStackHCI/clusters/*",
           "Microsoft.Authorization/roleAssignments/write",
           "Microsoft.Authorization/roleAssignments/read",
           "Microsoft.HybridCompute/register/action",
           "Microsoft.GuestConfiguration/register/action",
           "Microsoft.HybridConnectivity/register/action"
       ],
       "NotActions": [
       ],
       "AssignableScopes": [
           "/subscriptions/<subscriptionId>"
       ]
   }

2. Create the custom role:

PowerShell

New-AzRoleDefinition -InputFile <path to customHCIRole.json>

3. Assign the custom role to the user:

PowerShell

$user = Get-AzADUser -DisplayName <userdisplayname>
$role = Get-AzRoleDefinition -Name "Azure Stack HCI registration role"
New-AzRoleAssignment -ObjectId $user.Id -RoleDefinitionId $role.Id -Scope /subscriptions/<subscriptionId>

The following table explains why these permissions are required:


| Permissions | Reason |
| --- | --- |
| "Microsoft.Resources/subscriptions/resourceGroups/read", "Microsoft.Resources/subscriptions/resourceGroups/write", "Microsoft.Resources/subscriptions/resourceGroups/delete", "Microsoft.AzureStackHCI/register/action", "Microsoft.AzureStackHCI/Unregister/Action", "Microsoft.AzureStackHCI/clusters/*" | To register and unregister the Azure Stack HCI cluster. |
| "Microsoft.Authorization/roleAssignments/read", "Microsoft.Authorization/roleAssignments/write", "Microsoft.HybridCompute/register/action", "Microsoft.GuestConfiguration/register/action", "Microsoft.HybridConnectivity/register/action" | To register and unregister the Arc-for-Server resources. |

Register a cluster
You can register your Azure Stack HCI cluster using Windows Admin Center or
PowerShell.

Windows Admin Center

Follow these steps to register Azure Stack HCI with Azure via Windows Admin
Center:

1. Make sure all the prerequisites are met.

2. Launch Windows Admin center and sign in to your Azure account. Go to


Settings > Account, and then select Sign in under Azure Account.

3. In Windows Admin Center, select Cluster Manager from the top drop-down
arrow.

4. Under Cluster connections, select the cluster you want to register.

5. On Dashboard, under Azure Arc, check the status of Azure Stack HCI
registration and Arc-enabled servers.

Not configured means your cluster isn't registered.


Connected means your cluster is registered with Azure and has successfully synced to the cloud within the last day. Skip the rest of the registration steps and see Manage the cluster to manage your cluster.

6. If your cluster isn't registered, under Azure Stack HCI registration, select
Register to proceed.
Note

If you didn't register Windows Admin Center with Azure earlier, you are
asked to do so now. Instead of the cluster registration wizard, you'll see
the Windows Admin Center registration wizard.

7. Specify the Azure subscription ID to which you want to register the cluster. To
get your Azure subscription ID, visit the Azure portal, navigate to
Subscriptions, and copy/paste your ID from the list.

8. Select the Azure region from the drop-down menu.

9. Select one of the following options to select the Azure Stack HCI resource
group:

Select Use existing to create the Azure Stack HCI cluster and Arc for
Server resources in an existing resource group.

Select Create new to create a new resource group. Enter a name for the
new resource group.

10. Select Register. It takes a few minutes to complete the registration.

Additional registration options


You have other options to register your cluster:

Register a cluster using ArmAccessToken/SPN


Register a cluster using SPN for Arc onboarding
Manage cluster registration
After you've registered your cluster with Azure, you can manage its registration through
Windows Admin Center, PowerShell, or the Azure portal.

Depending on your cluster configuration and requirements, you may need to take the
following actions to manage the cluster registration:

View status of registration and Arc-enabled servers


Enable Azure Arc integration
Upgrade Arc agent on cluster servers
Unregister the cluster
Review FAQs

For information on how to manage your cluster registration, see Manage cluster
registration.

Next steps
To perform the next management task related to this article, see:

Validate an Azure Stack HCI cluster


Validate an Azure Stack HCI cluster
Article • 04/17/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2; Windows Server 2022,
Windows Server 2019

Although the Create cluster wizard in Windows Admin Center performs certain
validations to create a working cluster with the selected hardware, cluster validation
performs additional checks to make sure the cluster will work in a production
environment. This how-to article focuses on why cluster validation is important, and
when to run it on an Azure Stack HCI cluster.

We recommend performing cluster validation for the following primary scenarios:

After deploying a server cluster, run the Validate-DCB tool to test networking.
After updating a server cluster, depending on your scenario, run both validation
options to troubleshoot cluster issues.
After setting up replication with Storage Replica, validate that the replication is
proceeding normally by checking some specific events and running a couple
commands.
After creating a server cluster, run the Validate-DCB tool before placing it into
production.

What is cluster validation?


Cluster validation is intended to catch hardware or configuration problems before a
cluster goes into production. Cluster validation helps to ensure that the Azure Stack HCI
solution that you're about to deploy is truly dependable. You can also use cluster
validation on configured failover clusters as a diagnostic tool.

Specific validation scenarios


This section describes scenarios in which validation is also needed or useful.

Validation before the cluster is configured:

A set of servers ready to become a failover cluster: This is the most


straightforward validation scenario. The hardware components (systems,
networks, and storage) are connected, but the systems aren't yet functioning as
a cluster. Running tests in this situation has no effect on availability.
Server VMs: For virtualized servers in a cluster, run cluster validation as you
would on any other new cluster. The requirement to run the feature is the same
whether you have:
A "host cluster" where failover occurs between two physical computers.
A "guest cluster" where failover occurs between guest operating systems on
the same physical computer.

Validation after the cluster is configured and in use:

Before adding a server to the cluster: When you add a server to a cluster, we
strongly recommend validating the cluster. Specify both the existing cluster
members and the new server when you run cluster validation.

When adding drives: When you add additional drives to the cluster, which is
different from replacing failed drives or creating virtual disks or volumes that
rely on the existing drives, run cluster validation to confirm that the new storage
will function correctly.

When making changes that affect firmware or drivers: If you upgrade or make
changes to the cluster that affect firmware or drivers, you must run cluster
validation to confirm that the new combination of hardware, firmware, drivers,
and software supports failover cluster functionality.

After restoring a system from backup: After you restore a system from backup,
run cluster validation to confirm that the system functions correctly as part of a
cluster.

Validate networking
The Microsoft Validate-DCB tool is designed to validate the Data Center Bridging (DCB)
configuration on the cluster. To do this, the tool takes an expected configuration as
input, and then tests each server in the cluster. This section covers how to install and run
the Validate-DCB tool, review results, and resolve networking errors that the tool
identifies.

7 Note

Microsoft recommends deploying and managing your configuration with Network


ATC, which eliminates most of the configuration challenges that the Validate-DCB
tool checks for. To learn more about Network ATC, which provides an intent-based
approach to host network deployment, see Simplify host networking with
Network ATC.
On the network, remote direct memory access (RDMA) over Converged Ethernet (RoCE)
requires DCB technologies to make the network fabric lossless. With iWARP, DCB is
optional. However, configuring DCB can be complex, with exact configuration required
across:

Each server in the cluster


Each network port that RDMA traffic passes through on the fabric

Prerequisites
Network setup information of the server cluster that you want to validate,
including:
Host or server cluster name
Virtual switch name
Network adapter names
Priority Flow Control (PFC) and Enhanced Transmission Selection (ETS) settings
An internet connection to download the tool module in Windows PowerShell from
Microsoft.

Install and run the Validate-DCB tool


To install and run the Validate-DCB tool:

1. On your management PC, open a Windows PowerShell session as an


Administrator, and then use the following command to install the tool.

PowerShell

Install-Module Validate-DCB

2. Accept the requests to use the NuGet provider and access the repository to install
the tool.

3. After PowerShell connects to the Microsoft network to download the tool, type
Validate-DCB and press Enter to start the tool wizard.

7 Note

If you cannot run the Validate-DCB tool script, you might need to adjust your
PowerShell execution policies. Use the Get-ExecutionPolicy cmdlet to view
your current script execution policy settings. For information on setting
execution policies in PowerShell, see About Execution Policies.
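
For example, one low-impact adjustment is to allow remotely signed scripts for the current user only. This is an illustrative sketch, not a required setting:

PowerShell

# Review the effective execution policy at each scope.
Get-ExecutionPolicy -List

# Allow local scripts and signed remote scripts for the current user only.
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser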

4. On the Welcome to the Validate-DCB configuration wizard page, select Next.

5. On the Clusters and Nodes page, type the name of the server cluster that you want
to validate, select Resolve to list it on the page, and then select Next.

6. On the Adapters page:


a. Select the vSwitch attached checkbox and type the name of the vSwitch.
b. Under Adapter Name, type the name of each physical NIC, under Host vNIC
Name, the name of each virtual NIC (vNIC), and under VLAN, the VLAN ID in
use for each adapter.
c. Expand the RDMA Type drop-down list box and select the appropriate protocol:
RoCE or iWARP. Also set Jumbo Frames to the appropriate value for your
network, and then select Next.

7 Note

To learn more about how SR-IOV improves network performance, see


Overview of Single Root I/O Virtualization (SR-IOV).

7. On the Data Center Bridging page, modify the values to match your organization's
settings for Priority, Policy Name, and Bandwidth Reservation, and then select
Next.

7 Note

Selecting RDMA over RoCE on the previous wizard page requires DCB for
network reliability on all NICs and switchports.

8. On the Save and Deploy page, in the Configuration File Path box, save the
configuration file using .ps1 extension to a location where you can use it again
later if needed, and then select Export to start running the Validate-DCB tool.

You can optionally deploy your configuration file by completing the Deploy
Configuration to Nodes section of the page, which includes the ability to use
an Azure Automation account to deploy the configuration and then validate
it. See Create an Azure Automation account to get started with Azure
Automation.
Review results and fix errors
The Validate-DCB tool produces results in two units:

1. [Global Unit] results list prerequisites and requirements to run the modal tests.
2. [Modal Unit] results provide feedback on each cluster host configuration and best
practices.

This example shows successful scan results of a single server for all prerequisites and
modal unit tests by indicating a Failed Count of 0.
The following steps show how to identify a Jumbo Packet error from vNIC SMB02 and fix
it:

1. The results of the Validate-DCB tool scans show a Failed Count error of 1.

2. Scrolling back through the results shows an error in red indicating that the Jumbo
Packet for vNIC SMB02 on Host S046036 is set at the default size of 1514, but
should be set to 9014.
3. Reviewing the Advanced properties of vNIC SMB02 on Host S046036 shows that
the Jumbo Packet is set to the default of Disabled.

4. Fixing the error requires enabling the Jumbo Packet feature and changing its size
to 9014 bytes. Running the scan again on host S046036 confirms this change by
returning a Failed Count of 0.
To learn more about resolving errors that the Validate-DCB tool identifies, see the
following video.
https://www.youtube-nocookie.com/embed/cC1uACvhPBs

You can also install the tool offline. For disconnected systems, use Save-Module -Name
Validate-DCB -Path c:\temp\Validate-DCB and then move the modules in

c:\temp\Validate-DCB to your disconnected system. For more information, see the


following video.
https://www.youtube-nocookie.com/embed/T_VzGte3KJc
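
For example, the offline flow might look like the following sketch. The drive letter and destination path are placeholders; the destination must be a folder listed in $env:PSModulePath on the disconnected system:

PowerShell

# On an internet-connected machine, save the module to a folder you can transfer.
Save-Module -Name Validate-DCB -Path C:\temp\Validate-DCB

# On the disconnected system, copy the saved module folders into a module path,
# then import and run the tool.
Copy-Item -Path 'E:\Validate-DCB\*' -Destination 'C:\Program Files\WindowsPowerShell\Modules' -Recurse
Import-Module Validate-DCB
Validate-DCB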

Validate the cluster


Use the following steps to validate the servers in an existing cluster in Windows Admin
Center.

1. In Windows Admin Center, under All connections, select the Azure Stack HCI
cluster that you want to validate, and then select Connect.

The Cluster Manager Dashboard displays overview information about the cluster.

2. On the Cluster Manager Dashboard, under Tools, select Servers.

3. On the Inventory page, select the servers in the cluster, then expand the More
submenu and select Validate cluster.

4. On the Validate Cluster pop-up window, select Yes.


5. On the Credential Security Service Provider (CredSSP) pop-up window, select Yes.

6. Provide your credentials to enable CredSSP and then select Continue.

Cluster validation runs in the background and gives you a notification when it's
complete, at which point you can view the validation report, as described in the
next section.

7 Note

After your cluster servers have been validated, you will need to disable CredSSP for
security reasons.

Disable CredSSP
After your server cluster is successfully validated, you'll need to disable the Credential
Security Support Provider (CredSSP) protocol on each server for security purposes. For
more information, see CVE-2018-0886 .

1. In Windows Admin Center, under All connections, select the first server in your
cluster, and then select Connect.

2. On the Overview page, select Disable CredSSP, and then on the Disable CredSSP
pop-up window, select Yes.

The result of Step 2 removes the red CredSSP ENABLED banner at the top of the
server's Overview page, and disables CredSSP on the other servers.

View validation reports


Now you're ready to view your cluster validation report.

There are a couple ways to access validation reports:

On the Inventory page, expand the More submenu, and then select View
validation reports.

At the top right of Windows Admin Center, select the Notifications bell icon to
display the Notifications pane.
Select the Successfully validated cluster notice,
and then select Go to Failover Cluster validation report.

7 Note

The server cluster validation process may take some time to complete. Don't switch
to another tool in Windows Admin Center while the process is running. In the
Notifications pane, a status bar below your Validate cluster notice indicates when
the process is done.

Validate the cluster using PowerShell


You can also use Windows PowerShell to run validation tests on your server cluster and
view the results. You can run tests both before and after a cluster is set up.

To run a validation test on a server cluster, issue the Get-Cluster and Test-Cluster
<server clustername> PowerShell cmdlets from your management PC, or run only the
Test-Cluster cmdlet directly on the cluster:

PowerShell

$Cluster = Get-Cluster -Name 'server-cluster1'

Test-Cluster -InputObject $Cluster -Verbose

For more examples and usage information, see the Test-Cluster reference
documentation.

Test-NetStack is a PowerShell-based testing tool available from GitHub that you can
use to perform ICMP, TCP, and RDMA traffic testing of networks and identify potential
network fabric and host misconfigurations or operational instability. Use Test-NetStack
to validate network data paths by testing native, synthetic, and hardware offloaded
(RDMA) network data paths for issues with connectivity, packet fragmentation, low
throughput, and congestion.
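
A minimal sketch of running Test-NetStack is shown below. Install-Module pulls the module from the PowerShell Gallery, and the node names are placeholders; check the Test-NetStack GitHub README for the exact parameters supported by your version:

PowerShell

# Install the Test-NetStack module on the management PC (requires internet access).
Install-Module -Name Test-NetStack

# Test connectivity and the native, synthetic, and RDMA data paths between cluster nodes.
Test-NetStack -Nodes 'Server1','Server2','Server3','Server4'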
Validate replication for Storage Replica
If you're using Storage Replica to replicate volumes in a stretched cluster or cluster-to-
cluster, there are several events and cmdlets that you can use to get the state of
replication.

In the following scenario, we configured Storage Replica by creating replication groups


(RGs) for two sites, and then specified the data volumes and log volumes for both the
source server nodes in Site1 (Server1, Server2), and the destination (replicated) server
nodes in Site2 (Server3, Server4).

To determine the replication progress for Server1 in Site1, run the Get-WinEvent
command and examine events 5015, 5002, 5004, 1237, 5001, and 2200:

PowerShell

Get-WinEvent -ComputerName Server1 -ProviderName Microsoft-Windows-StorageReplica -max 20

For Server3 in Site2, run the following Get-WinEvent command to see the Storage
Replica events that show creation of the partnership. This event states the number of
copied bytes and the time taken. For example:

PowerShell

Get-WinEvent -ComputerName Server3 -ProviderName Microsoft-Windows-StorageReplica | Where-Object {$_.ID -eq "1215"} | FL

For Server3 in Site2, run the Get-WinEvent command and examine events 5009, 1237,
5001, 5015, 5005, and 2200 to understand the processing progress. There should be no
warnings or errors in this sequence. There will be many 1237 events; these indicate
progress.

PowerShell

Get-WinEvent -ComputerName Server3 -ProviderName Microsoft-Windows-StorageReplica | FL

Alternatively, the destination server group for the replica states the number of bytes
remaining to copy at all times, and can be queried through PowerShell with Get-SRGroup.
For example:

PowerShell
(Get-SRGroup).Replicas | Select-Object numofbytesremaining

For node Server3 in Site2, run the following command and examine events 5009, 1237,
5001, 5015, 5005, and 2200 to understand the replication progress. There should be no
warnings or errors. However, there will be many "1237" events; these simply indicate
progress.

PowerShell

Get-WinEvent -ComputerName Server3 -ProviderName Microsoft-Windows-


StorageReplica | FL

As a progress script that will not terminate:

PowerShell

while($true) {
    $v = (Get-SRGroup -Name "Replication2").Replicas | Select-Object NumOfBytesRemaining
    [System.Console]::Write("Number of bytes remaining: {0}`r", $v.NumOfBytesRemaining)
    Start-Sleep -s 5
}

To get replication state within the stretched cluster, use Get-SRGroup and Get-
SRPartnership :

PowerShell

Get-SRGroup -Cluster ClusterS1

PowerShell

Get-SRPartnership -Cluster ClusterS1

PowerShell

(Get-SRGroup -Cluster ClusterS1).Replicas

Once successful data replication is confirmed between sites, you can create your VMs
and other workloads.
See also
Performance testing against synthetic workloads in a newly created storage space
using DiskSpd.exe. To learn more, see Test Storage Spaces Performance Using
Synthetic Workloads in Windows Server.
Windows Server Assessment is a Premier Service available for customers who want
Microsoft to review their installations. For more information, contact Microsoft
Premier Support. To learn more, see Getting Started with the Windows Server On-
Demand Assessment (Server, Security, Hyper-V, Failover Cluster, IIS).
Migrate to Azure Stack HCI on new
hardware
Article • 04/17/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2; Windows Server 2022,
Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, Windows
Server 2008 R2

This topic describes how to migrate virtual machine (VM) files on Windows Server 2012
R2, Windows Server 2016, or Windows Server 2019 to new Azure Stack HCI server
hardware using Windows PowerShell and Robocopy. Robocopy is a robust method for
copying files from one server to another. It resumes if disconnected and continues to
work from its last known state. Robocopy also supports multi-threaded file copy over
Server Message Block (SMB). For more information, see Robocopy.

7 Note

Hyper-V Live Migration and Hyper-V Replica from Windows Server to Azure Stack
HCI is not supported. However, Hyper-V replica is valid and supported between HCI
systems. You can't replicate a VM to another volume in the same cluster, only to
another HCI system.

If you have VMs on Windows 2012 R2 or older that you want to migrate, see Migrating
older VMs.

To migrate to Azure Stack HCI using the same hardware, see Migrate to Azure Stack HCI
on the same hardware.

The following diagram shows a Windows Server source cluster and an Azure Stack HCI
destination cluster as an example. You can also migrate VMs on stand-alone servers.


To estimate expected downtime: using a single NIC on a dual 40 Gb RDMA East-West
network between clusters, with Robocopy configured for 32 threads, you can realize
transfer speeds of about 1.9 TB per hour.

7 Note

Migrating VMs for stretched clusters is not covered in this article.

Before you begin


There are several requirements and things to consider before you begin migration:

All Windows PowerShell commands must be run As Administrator.

You must have domain credentials with administrator permissions for both source
and destination clusters, with full rights to the source and destination
Organizational Unit (OU) that contains both clusters.

Both clusters must be in the same Active Directory forest and domain to facilitate
Kerberos authentication between clusters for migration of VMs.

Both clusters must reside in an Active Directory OU with Group Policy Object (GPO)
Block inheritance set on this OU. This ensures no domain-level GPOs and security
policies can impact the migration.

Both clusters must be connected to the same time source to support consistent
Kerberos authentication between clusters.

Make note of the Hyper-V virtual switch name used by the VMs on the source
cluster. You must use the same virtual switch name for the Azure Stack HCI
destination cluster "virtual machine network" prior to importing VMs.

Remove any ISO image files for your source VMs. This is done using Hyper-V
Manager in VM Properties in the Hardware section. Select Remove for any virtual
CD/DVD drives.

Shut down all VMs on the source cluster. This is required to ensure version control
and state are maintained throughout the migration process.

Check if Azure Stack HCI supports your version of the VMs to import and update
your VMs as needed. See the VM version support and update section on how to
do this.
Back up all VMs on your source cluster. Complete a crash-consistent backup of all
applications and data, and an application-consistent backup of all databases. To
back up to Azure, see Use Azure Backup.

Make a checkpoint of your source cluster VMs and domain controller in case you
have to roll back to a prior state. This is not applicable for physical servers.

Ensure the maximum Jumbo frame sizes are the same between source and
destination cluster storage networks, specifically the RDMA network adapters and
their respective switch network ports to provide the most efficient end-to-end
transfer packet size.

Make note of the Hyper-V virtual switch name on the source cluster. You will reuse
it on the destination cluster (a sketch for capturing the switch names follows this list).

The Azure Stack HCI hardware should have at least equal capacity and
configuration as the source hardware.

Minimize the number of network hops or physical distance between the source
and destination clusters to facilitate the fastest file transfer.
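
As referenced above, here's one quick way to record the virtual switch names in use on the source cluster. This is an illustrative sketch; run it from a node of the source cluster with the Hyper-V and Failover Clustering PowerShell modules installed:

PowerShell

# List the virtual switches on every node of the source cluster so the same names
# can be used on the Azure Stack HCI destination cluster.
Get-VMSwitch -ComputerName (Get-ClusterNode).Name | Select-Object ComputerName, Name, SwitchType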

VM version support and update


This table lists the Windows Server OS versions and their VM versions.

Regardless of the OS version a VM may be running on, the minimum VM version


supported for direct migration to Azure Stack HCI is version 5.0. This represents the
default version for VMs on Windows Server 2012 R2. So any VMs running at version 2.0,
3.0, or 4.0 for example must be updated to version 5.0 before migration.

OS version VM version

Windows Server 2008 SP1 2.0

Windows Server 2008 R2 3.0

Windows Server 2012 4.0

Windows Server 2012 R2 5.0

Windows Server 2016 8.0

Windows Server 2019 9.0

Azure Stack HCI 9.0


For VMs on Windows Server 2012 R2, Windows Server 2016, and Windows Server 2019,
update all VMs to the latest VM version supported on the source hardware first before
running the Robocopy migration script. This ensures all VMs are at least at version 5.0
for a successful VM import.

For VMs on Windows Server 2008 SP1, Windows Server 2008 R2-SP1, and Windows Server
2012, the VM version will be less than version 5.0. These VMs also use an .xml file for
configuration instead of a .vmcx file. As such, a direct import of the VM to Azure Stack
HCI is not supported. In these cases, you have two options, as detailed in Migrating
older VMs.

Updating the VM version


The following commands apply to Windows Server 2012 R2 and later. Use the following
command to show all VM versions on a single server:

PowerShell

Get-VM * | Format-Table Name,Version

To show all VM versions across all servers on a cluster:

PowerShell

Get-VM -ComputerName (Get-ClusterNode)

To update all VMs to the latest supported version on all servers:

PowerShell

Get-VM -ComputerName (Get-ClusterNode) | Update-VMVersion -Force

RDMA recommendations
If you are using Remote Direct Memory Access (RDMA), Robocopy can leverage it for
copying your VMs between clusters. Here are some recommendations for using RDMA:

Connect both clusters to the same top of rack (ToR) switch to use the fastest
network path between source and destination clusters. For the storage network
path this typically supports 10GbE/25GbE or higher speeds and leverages RDMA.
If the RDMA adapter or standard is different between the source and destination
clusters (RoCE vs. iWARP), Robocopy will instead leverage SMB over TCP/IP via the
fastest available network. This is typically a dual 10 GbE/25 GbE or higher
speed East-West network, which provides the most optimal way to copy VM
VHDX files between clusters.

To ensure Robocopy can leverage RDMA between clusters (East-West network),


configure RDMA storage networks so they are routeable between the source and
destination clusters.
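
To confirm that the copy is actually using RDMA while Robocopy runs, you can check the SMB client on the source server. This is an optional verification sketch; the output columns vary by OS version and adapter:

PowerShell

# Shows whether RDMA-capable interfaces are available to the SMB client.
Get-SmbClientNetworkInterface | Select-Object FriendlyName, RdmaCapable, LinkSpeed

# Shows the active SMB multichannel connections to the destination and their RDMA capability.
Get-SmbMultichannelConnection | Select-Object ServerName, ClientIpAddress, ClientRdmaCapable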

Create the new cluster


Before you can create the Azure Stack HCI cluster, you need to install the Azure Stack
HCI OS on each new server that will be in the cluster. For information on how to do this,
see Deploy the Azure Stack HCI operating system.

Use Windows Admin Center or Windows PowerShell to create the new cluster. For
detailed information on how to do this, see Create an Azure Stack HCI cluster using
Windows Admin Center and Create an Azure Stack HCI cluster using Windows
PowerShell.

) Important

Hyper-V virtual switch ( VMSwitch ) names between clusters must be the same. Make
sure that the virtual switch names created on the destination cluster match those used
on the source cluster across all servers. Verify that the switch names match before
you import the VMs.

7 Note

You must register the Azure Stack HCI cluster with Azure before you can create new
VMs on it. For more information, see Register with Azure.

Run the migration script


The following PowerShell script Robocopy_Remote_Server_.ps1 uses Robocopy to copy
VM files and their dependent directories and metadata from the source to the
destination cluster. This script has been modified from the original script on TechNet at:
Robocopy Files to Remote Server Using PowerShell and RoboCopy .
The script copies all VM VHD, VHDX, and VMCX files to your destination cluster for a
given Cluster Shared Volume (CSV). One CSV is migrated at a time.

The migration script is run locally on each source server to leverage the benefit of
RDMA and fast network transfer. To do this:

1. Make sure each destination cluster node is set to the CSV owner for the
destination CSV.

2. To determine the location of all VM VHD and VHDX files to be copied, use the
following cmdlet. Review the C:\vmpaths.txt file to determine the topmost source
file path for Robocopy to start from for step 4:

PowerShell

Get-ChildItem -Path "C:\Clusterstorage\Volume01\*.vhd*" -Recurse >


c:\vmpaths.txt

7 Note

If your VHD and VHDX files are located in different paths on the same volume,
you will need to run the migration script for each different path to copy them
all.

3. Change the following three variables to match the source cluster VM path with the
destination cluster VM path:

$Dest_Server = "Node01"
$source = "C:\Clusterstorage\Volume01"

$dest = "\\$Dest_Server\C$\Clusterstorage\Volume01"

4. Run the following script on each Windows Server source server:

PowerShell

<#
#===========================================================================
# Script: Robocopy_Remote_Server_.ps1
#===========================================================================
.DESCRIPTION:
Change the following variables to match your source cluster VM path with the
destination cluster VM path. Then run this script on each source cluster
node CSV owner and make sure the destination cluster node is set to the CSV
owner for the destination CSV.

Change $Dest_Server = "Node01"
Change $source = "C:\Clusterstorage\Volume01"
Change $dest = "\\$Dest_Server\C$\Clusterstorage\Volume01"
#>

## Variables to change for your environment
$Dest_Server = "Node01"
$source = "C:\Clusterstorage\Volume01"
$dest = "\\$Dest_Server\C$\Clusterstorage\Volume01"

## Log file named with today's date
$date = Get-Date -UFormat "%Y%m%d"
$Logfile = "c:\temp\Robocopy1-$date.txt"

## Robocopy arguments: copy all file info, recurse, 32 threads, no retries, log to file
$what = @("/COPYALL")
$options = @("/E","/MT:32","/R:0","/W:1","/NFL","/NDL","/LOG:$Logfile","/xf")
$cmdArgs = @("$source","$dest",$what,$options)

## Get start time
$startDTM = (Get-Date)

Clear-Host

## Provide information
Write-Host ".....Copying Virtual Machines FROM $source TO $dest ....................." -ForegroundColor Green -BackgroundColor Black
Write-Host "........................................." -ForegroundColor Green

## Kick off the copy with the options defined above
robocopy @cmdArgs

## Get end time and echo the time elapsed
$endDTM = (Get-Date)
$Time = "Elapsed Time: = $(($endDTM-$startDTM).totalminutes) minutes"

Write-Host ""
Write-Host " Copy Virtual Machines to $Dest_Server has been completed......" -ForegroundColor Green -BackgroundColor Black
Write-Host " Copy Virtual Machines to $Dest_Server took $Time ......" -ForegroundColor Cyan

Import the VMs


A best practice is to create at least one Cluster Shared Volume (CSV) per cluster node to
enable an even balance of VMs across CSV owners for increased resiliency, performance,
and scale of VM workloads. By default, this balancing occurs automatically every five
minutes. Keep it in mind when using Robocopy between a source cluster node and the
destination cluster node: the source and destination CSV owners should match to provide
the most optimal transfer path and speed.

Perform the following steps on your Azure Stack HCI cluster to import the VMs, make
them highly available, and start them:

1. Run the following cmdlet to show all CSV owner nodes:

PowerShell

Get-ClusterSharedVolume

2. For each server node, go to C:\Clusterstorage\Volume and set the path for all VMs
- for example C:\Clusterstorage\volume01 .

3. Run the following cmdlet on each CSV owner node to display the path to all VM
VMCX files per volume prior to VM import. Modify the path to match your
environment:

PowerShell

Get-ChildItem -Path "C:\Clusterstorage\Volume01\*.vmcx" -Recurse

7 Note

Windows Server 2012 R2 and older VMs use an XML file instead of a VMCX
file. For more information, see the section Migrating older VMs.

4. Run the following cmdlet for each server node to import, register, and make the
VMs highly available on each CSV owner node. This ensures an even distribution of
VMs for optimal processor and memory allocation:

PowerShell

Get-ChildItem -Path "C:\Clusterstorage\Volume01\*.vmcx" -Recurse |


Import-VM -Register | Get-VM | Add-ClusterVirtualMachineRole

5. Start each destination VM on each node:

PowerShell

Start-VM -Name <VM-name>

6. Log on and verify that all VMs are running and that all your apps and data are
there:

PowerShell

Get-VM -ComputerName Server01 | Where-Object {$_.State -eq 'Running'}

7. Update your VMs to the latest version for Azure Stack HCI to take advantage of all
the advancements:

PowerShell

Get-VM | Update-VMVersion -Force

8. After the script has completed, check the Robocopy log file for any errors listed
and to verify that all VMs are copied successfully.

Migrating older VMs


If you have Windows Server 2008 SP1, Windows Server 2008 R2-SP1, Windows Server
2012, or Windows Server 2012 R2 VMs, this section applies to you. You have two options
for handling these VMs:

Migrate these VMs to Windows Server 2012 R2, Windows Server 2016, or Windows
Server 2019 first, update the VM version, then begin the migration process.

Use Robocopy to copy all VM VHDs to Azure Stack HCI. Then create new VMs and
attach the copied VHDs to the VMs in Azure Stack HCI. This bypasses the VM
version limitation for these older VMs.

Windows Server 2012 R2 and older Hyper-V hosts use an XML file format for their VM
configuration, which is different from the VMCX file format used by Windows Server
2016 and later Hyper-V hosts. This requires a different Robocopy command to copy
these VMs to Azure Stack HCI.

Option 1: Staged migration


This is a two-stage migration used for VMs hosted on Windows Server 2008 SP1,
Windows Server 2008 R2-SP1, and Windows Server 2012. Here is the process you use:

1. Discover the location of all VM VHD and VHDX files to be copied, then review the
vmpaths.txt file to determine the topmost source file path for Robocopy to start
from. Use the following cmdlet:

PowerShell

Get-ChildItem -Path "C:\Clusterstorage\Volume01\*.vhd*" -Recurse >


c:\vmpaths.txt

2. Use the following example Robocopy command to copy VMs to Windows Server
2012 R2 first using the topmost path determined in step 1:

Robocopy \\2012R2-Clus01\c$\clusterstorage\volume01\Hyper-V\ \\20H2-Clus01\c$\clusterstorage\volume01\Hyper-V\ /E /MT:32 /R:0 /w:1 /NFL /NDL /copyall /log:c:\log.txt /xf

3. Verify that the virtual switch ( VMSwitch ) name used on the Windows Server 2012 R2
cluster is the same as the switch name used on the Windows Server 2008 R2 or Windows
Server 2008 R2-SP1 source. To display the switch names used across all servers in a
cluster, use this:

PowerShell

Get-VMSwitch -CimSession $Servers | Select-Object Name

Rename the switch name on Windows Server 2012 R2 as needed. To rename the
switch across all servers in the cluster, use this:

PowerShell

Invoke-Command -ComputerName $Servers -ScriptBlock {Rename-VMSwitch -Name $using:vSwitcholdName -NewName $using:vSwitchnewname}

4. Copy and import the VMs to Windows Server 2012 R2:

PowerShell

Get-ChildItem -Path "c:\clusterstorage\volume01\Hyper-V\*.xml"-Recurse

PowerShell

Get-ChildItem -Path "c:\clusterstorage\volume01\image\*.xml" -Recurse


| Import-VM -Register | Get-VM | Add-ClusterVirtualMachineRole

5. On Windows Server 2012 R2, update the VM version to 5.0 for all VMs:
PowerShell

Get-VM | Update-VMVersion -Force

6. Run the migration script to copy VMs to Azure Stack HCI.

7. Follow the process in Import the VMs, replacing Step 3 and Step 4 with the
following to handle the XML files and to import the VMs to Azure Stack HCI:

PowerShell

Get-ChildItem -Path "c:\clusterstorage\volume01\Hyper-V\*.xml"-Recurse

PowerShell

Get-ChildItem -Path "c:\clusterstorage\volume01\image\*.xml" -Recurse


| Import-VM -Register | Get-VM | Add-ClusterVirtualMachineRole

8. Complete the remaining steps in Import the VMs.

Option 2: Direct VHD copy


This method uses Robocopy to copy VM VHDs that are hosted on Windows Server 2008 SP1,
Windows Server 2008 R2-SP1, and Windows Server 2012 to Azure Stack HCI. This bypasses
the minimum supported VM version limitation for these older VMs. We recommend this
option for VMs hosted on Windows Server 2008 SP1 and Windows Server 2008 R2-SP1.

VMs hosted on Windows 2008 SP1 and Windows 2008 R2-SP1 support only Generation
1 VMs with Generation 1 VHDs. As such, corresponding Generation 1 VMs need to be
created on Azure Stack HCI so that the copied VHDs can be attached to the new VMs.
Note that these VHDs cannot be upgraded to Generation 2 VHDs.

7 Note

Windows Server 2012 supports both Generation 1 and Generation 2 VMs.

Here is the process you use:

1. Use the following example Robocopy command to copy VM VHDs directly to Azure Stack HCI:

Robocopy \\2012R2-Clus01\c$\clusterstorage\volume01\Hyper-V\ \\20H2-Clus01\c$\clusterstorage\volume01\Hyper-V\ /E /MT:32 /R:0 /w:1 /NFL /NDL /copyall /log:c:\log.txt /xf

2. Create new Generation 1 VMs. For detailed information on how to do this, see
Manage VMs.

3. Attach the copied VHD files to the new VMs. For detailed information, see Manage
Virtual Hard Disks (VHD)

For reference, the following guest operating systems support Generation 2
VMs:

Windows Server 2019


Windows Server 2016
Windows Server 2012 R2
Windows Server 2012
Windows 10
Windows 8.1 (64-bit)
Windows 8 (64-bit)
Linux (See Supported Linux and FreeBSD VMs)

Next steps
Validate the cluster after migration. See Validate an Azure Stack HCI cluster.

To migrate to Azure Stack HCI in-place using the same hardware, see Migrate to
Azure Stack HCI on the same hardware.
Migrate to Azure Stack HCI on same
hardware
Article • 04/17/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2; Windows Server 2022,
Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, Windows
Server 2008 R2

This topic describes how to migrate a Windows Server failover cluster to Azure Stack HCI
using your existing server hardware. This process installs the new Azure Stack HCI
operating system and retains your existing cluster settings and storage, and imports
your VMs.

The following diagram depicts migrating your Windows Server cluster in-place using the
same server hardware. After shutting your cluster down, Azure Stack HCI is installed,
storage is reattached, and your VMs are imported and made highly available (HA).

To migrate your VMs to new Azure Stack HCI hardware, see Migrate to Azure Stack HCI
on new hardware.

7 Note

Migrating stretched clusters is not covered in this article.

Before you begin


There are several requirements and things to consider before you begin migration:

All Windows PowerShell commands must be run As Administrator.


You must have domain credentials with administrator permissions for Azure Stack
HCI.

Backup all VMs on your source cluster. Complete a crash-consistent backup of all
applications and data and an application-consistent backup of all databases. To
backup to Azure, see Use Azure Backup.

Collect an inventory of the configuration of all cluster nodes, including the cluster
name, network configuration, Cluster Shared Volume (CSV) resiliency and capacity, and
the quorum witness (a sketch of useful inventory cmdlets follows this list).

Shut down your cluster VMs, take the CSVs and storage pools offline, and stop the
cluster service.

Disable the Cluster Name Object (CNO) (it is reused later) and:
Check that the CNO has Create Object rights to its own Organizational Unit
(OU)
Check that the block inherited policy has been set on the OU
Set the required policy for Azure Stack HCI on this OU
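
As referenced in the inventory item above, the following is an illustrative sketch of cmdlets you might use to capture the existing configuration before shutting anything down; adjust what you capture to your environment:

PowerShell

# Cluster and node inventory.
Get-Cluster | Format-List Name, Domain
Get-ClusterNode | Format-Table Name, State

# Cluster networks and CSVs.
Get-ClusterNetwork | Format-Table Name, Role, Address
Get-ClusterSharedVolume | Format-Table Name, OwnerNode, State

# Volume resiliency and capacity, and the current quorum configuration.
Get-VirtualDisk | Format-Table FriendlyName, ResiliencySettingName, Size
Get-ClusterQuorum | Format-List *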

VM version support and update


The following table lists supported Windows Server OS versions and their VM versions
for in-place migration on the same hardware.

Regardless of the OS version a VM may be running on, the minimum VM version


supported for migration to Azure Stack HCI is version 5.0. So any VMs running at version
2.0, 3.0, or 4.0 on your Windows Server 2016 or Windows Server 2019 cluster must be
updated to version 5.0 before migration.

OS version VM version

Windows Server 2008 SP1 2.0

Windows Server 2008 R2 3.0

Windows Server 2012 4.0

Windows Server 2012 R2 5.0

Windows Server 2016 8.0

Windows Server 2019 9.0

Azure Stack HCI 9.0


For VMs on Windows Server 2008 SP1, Windows Server 2008 R2-SP1, and Windows
2012 clusters, direct migration to Azure Stack HCI is not supported. In these cases, you
have two options:

Migrate these VMs to Windows Server 2012 R2, Windows Server 2016, or Windows
Server 2019 first, update the VM version, then begin the migration process.

Use Robocopy to copy all VM VHDs to Azure Stack HCI. Then create new VMs and
attach the copied VHDs to their respective VMs in Azure Stack HCI. This bypasses
the VM version limitation for these older VMs.

Updating the VM version


Use the following command to show all VM versions on a single server:

PowerShell

Get-VM * | Format-Table Name,Version

To show all VM versions across all nodes on your Windows Server cluster:

PowerShell

Get-VM -ComputerName (Get-ClusterNode)

To update all VMs to the latest version on all Windows Server nodes:

PowerShell

Get-VM -ComputerName (Get-ClusterNode) | Update-VMVersion -Force

Updating the servers and cluster


Migration consists of running Azure Stack HCI setup on your Windows Server
deployment for a clean OS install with your VMs and storage intact. This replaces the
current operating system with Azure Stack HCI. For detailed information, see Deploy the
Azure Stack HCI operating system. Afterwards, you create a new Azure Stack HCI cluster,
reattach your storage and import the VMs over.

1. Shut down your existing cluster VMs, take the CSVs and storage pools offline, and
stop the cluster service (a PowerShell sketch follows these steps).
2. Go to the location where you downloaded the Azure Stack HCI bits, then run Azure
Stack HCI setup on each Windows Server node.

3. During setup, select Custom: Install the newer version of Azure Stack HCI only
(Advanced). Repeat for each server.

4. Create the new Azure Stack HCI cluster. You can use Windows Admin Center or
Windows PowerShell to do this, as described below.
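
As referenced in step 1, the shutdown can be scripted. This is an illustrative sketch only; run it from one cluster node and adjust it to your environment:

PowerShell

# Shut down all clustered VMs across the nodes.
Get-VM -ComputerName (Get-ClusterNode).Name | Stop-VM -Force

# Stopping the cluster takes the CSVs, storage pools, and cluster service offline on every node.
Stop-Cluster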

) Important

Hyper-V virtual switch ( VMSwitch ) name must be the same name captured in the
cluster configuration inventory. Make sure the virtual switch name used on the
Azure Stack HCI cluster matches the original source virtual switch name before you
import the VMs.

7 Note

You must register the Azure Stack HCI cluster with Azure before you can create new
VMs on it. For more information, see Register with Azure.

Using Windows Admin Center


If using Windows Admin Center to create the Azure Stack HCI cluster, the Create Cluster
wizard automatically installs all required roles and features on each server node.

For detailed information on how to create the cluster, see Create an Azure Stack HCI
cluster using Windows Admin Center.

) Important

Skip step 4.1 Clean drives in the Create cluster wizard. Otherwise you will delete
your existing VMs and storage.

1. Start the Create Cluster wizard. When you get to Step 4: Storage:

2. Skip step 4.1 Clean drives. Do not do this.

3. Step away from the wizard.


4. Open PowerShell, and run the following cmdlet to create the new
Storagesubsystem Object ID, rediscover all storage enclosures, and assign SES drive
numbers:

PowerShell

Enable-ClusterS2D -Verbose

If migrating from Windows Server 2016, this also creates a new


ClusterperformanceHistory ReFS volume and assigns it to the SDDC Cluster

Resource Group.

If migrating from Windows Server 2019, this also adds the existing
ClusterperformanceHistory ReFS volume and assigns it to the SDDC Cluster

Resource Group.

5. Go back to the wizard. In step 4.2 Verify drives, verify that all drives are listed
without warnings or errors.

6. Complete the wizard.

Using Windows PowerShell


If using PowerShell to create the Azure Stack HCI cluster, the following roles and
features must be installed on each Azure Stack HCI cluster node using the following
cmdlet:

PowerShell

Install-WindowsFeature -Name Hyper-V, Failover-Clustering, FS-Data-Deduplication, Bitlocker, Data-Center-Bridging, RSAT-AD-PowerShell -IncludeAllSubFeature -IncludeManagementTools -Verbose

For more information on how to create the cluster using PowerShell, see Create an
Azure Stack HCI cluster using Windows PowerShell.

7 Note

Re-use the same name for the previously disabled Cluster Name Object.

1. Run the following cmdlet to create the cluster:

PowerShell
New-Cluster -Name "clustername" -Node Server01,Server02 -StaticAddress xx.xx.xx.xx -NoStorage

2. Run the following cmdlet to create the new Storagesubsystem Object ID,
rediscover all storage enclosures, and assign SES drive numbers:

PowerShell

Enable-ClusterS2D -Verbose

3. If migrating from Windows Server 2016, this also creates a new


ClusterperformanceHistory ReFS volume and assigns it to the SDDC Cluster

Resource Group.

7 Note

If a storage pool shows Minority Disk errors (viewable in Cluster Manager), re-
run the Enable-ClusterS2D -verbose cmdlet.

4. Using Cluster Manager, enable every CSV except the ClusterPerformanceHistory
volume, which is a ReFS volume (make sure it isn't added as a ReFS CSV).

5. If migrating from Windows Server 2019, re-run the Enable-ClusterS2D -verbose


cmdlet. This will associate the ClusterperformanceHistory ReFS volume with the
SDDC Cluster Resource Group.

6. Determine your current storage pool name and version by running the following:

PowerShell

Get-StoragePool | ? IsPrimordial -eq $false | ft FriendlyName,Version

7. Now determine your new storage pool name and version:

PowerShell

Get-StoragePool | ? IsPrimordial -eq $false | ft FriendlyName,Version

8. Create the quorum witness. For information on how, see Set up a cluster witness.

9. Verify that storage repair jobs are completed using the following:
PowerShell

Get-StorageJob

7 Note

This could take considerable time depending on the number of VMs running
during the upgrade.

10. Verify that all disks are healthy:

PowerShell

Get-VirtualDisk

11. Determine the cluster node version, which displays ClusterFunctionalLevel and
ClusterUpgradeVersion . Run the following to get this:

PowerShell

Get-ClusterNodeSupportedVersion

7 Note

ClusterFunctionalLevel is automatically set to 10 and does not require
updating, because of the new operating system and cluster creation.

12. Update your storage pool as follows:

PowerShell

Get-StoragePool | Update-StoragePool

ReFS volumes
If migrating from Windows Server 2016, Resilient File System (ReFS) volumes are
supported, but such volumes don't benefit from the performance enhancements that Azure
Stack HCI brings to mirror-accelerated parity (MAP) volumes. Getting this enhancement
requires creating a new ReFS volume using the PowerShell New-Volume cmdlet.
For Windows Server 2016 MAP volumes, ReFS compaction wasn't available, so reattaching
these volumes is fine, but they will be less performant than a new MAP volume created
in an Azure Stack HCI cluster.
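
For example, a new mirror-accelerated parity volume could be created with something like the following sketch; the volume name, pool name, tier names, and tier sizes are placeholders that depend on the storage tiers defined in your pool:

PowerShell

# Creates a CSVFS-formatted ReFS volume with a mirror tier and a parity tier
# (mirror-accelerated parity). Adjust names and sizes to your environment.
New-Volume -FriendlyName "Volume02" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName "S2D*" -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 300GB, 700GB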

Import the VMs


A best practice is to create at least one Cluster Shared Volume (CSV) per cluster node to
enable an even balance of VMs across CSV owners for increased resiliency, performance,
and scale of VM workloads. By default, this balancing occurs automatically every five
minutes. Keep it in mind when using Robocopy between a source cluster node and the
destination cluster node: the source and destination CSV owners should match to provide
the most optimal transfer path and speed.

Perform the following steps on your Azure Stack HCI cluster to import the VMs, make
them highly available, and start them:

1. Run the following cmdlet to show all CSV owner nodes:

PowerShell

Get-ClusterSharedVolume

2. For each server node, go to C:\Clusterstorage\Volume and set the path for all VMs
- for example C:\Clusterstorage\volume01 .

3. Run the following cmdlet on each CSV owner node to display the path to all VM
VMCX files per volume prior to VM import. Modify the path to match your
environment:

PowerShell

Get-ChildItem -Path "C:\Clusterstorage\Volume01\*.vmcx" -Recurse

4. Run the following cmdlet for each server node to import and register all VMs and
make them highly available on each CSV owner node. This ensures an even
distribution of VMs for optimal processor and memory allocation:

PowerShell

Get-ChildItem -Path "C:\Clusterstorage\Volume01\*.vmcx" -Recurse |


Import-VM -Register | Get-VM | Add-ClusterVirtualMachineRole

5. Start each destination VM on each node:


PowerShell

Start-VM -Name <VM-name>

6. Log in and verify that all VMs are running and that all your apps and data are there:

PowerShell

Get-VM -ComputerName Server01 | Where-Object {$_.State -eq 'Running'}

7. Lastly, update your VMs to the latest Azure Stack HCI version to take advantage of
all the advancements:

PowerShell

Get-VM | Update-VMVersion -Force

Next steps
Validate the cluster after migration. See Validate an Azure Stack HCI cluster.
To migrate Windows Server VMs to new Azure Stack HCI hardware, see Migrate to
Azure Stack HCI on new hardware.
Deploy Azure Stack HCI on a single
server
Article • 05/12/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2

This article describes how to use PowerShell to deploy Azure Stack HCI on a single
server that contains all NVMe or SSD drives, creating a single-node cluster. It also
describes how to add servers to the cluster (scale-out) later.

Currently you can't use Windows Admin Center to deploy Azure Stack HCI on a single
server. For more info, see Using Azure Stack HCI on a single server.

Prerequisites
A server from the Azure Stack HCI Catalog that's certified for use as a single-
node cluster and configured with all NVMe or all SSD drives.
For network, hardware and other requirements, see Azure Stack HCI network and
domain requirements.
Optionally, install Windows Admin Center to register and manage the server once
it has been deployed.

Deploy on a single server


Here are the steps to install the Azure Stack HCI OS on a single server, create the single-
node cluster, register the cluster with Azure, and create volumes.

1. Install the Azure Stack HCI OS on your server. For more information, see Deploy
the Azure Stack HCI OS onto your server.

2. Configure the server utilizing the Server Configuration Tool (SConfig).

3. Install the required roles and features using the following command, then reboot
before continuing.

PowerShell

Install-WindowsFeature -Name "BitLocker", "Data-Center-Bridging",


"Failover-Clustering", "FS-FileServer", "FS-Data-Deduplication",
"Hyper-V", "Hyper-V-PowerShell", "RSAT-AD-Powershell", "RSAT-
Clustering-PowerShell", "NetworkATC", "Storage-Replica" -
IncludeAllSubFeature -IncludeManagementTools

4. Use PowerShell to create a cluster, skipping creating a cluster witness.

Here's an example of creating the cluster and then enabling Storage Spaces Direct
while disabling the storage cache:

PowerShell

New-Cluster -Name <cluster-name> -Node <node-name> -NOSTORAGE

PowerShell

Enable-ClusterStorageSpacesDirect -CacheState Disabled

7 Note

The New-Cluster command will also require the StaticAddress parameter if


the node is not using DHCP for its IP address assignment. This parameter
should be supplied with a new, available IP address on the node's subnet.

5. Use PowerShell or Windows Admin Center to register the cluster (a PowerShell sketch follows this list).

6. Create volumes.
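
As referenced in step 5, registration can be scripted from a management PC using the Az.StackHCI module. This is an illustrative sketch; the subscription ID, region, and node name are placeholders, and additional parameters (such as a resource group) may apply in your environment:

PowerShell

# Install the registration module, then register the single-node cluster with Azure.
Install-Module -Name Az.StackHCI
Register-AzStackHCI -SubscriptionId "<subscription-ID>" -Region "<region>" -ComputerName "<node-name>"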

Updating single-node clusters


To install updates for Azure Stack HCI version 21H2, use Windows Admin Center (Server
Manager > Updates), PowerShell, or connect via Remote Desktop and use Server
Configuration tool (SConfig).

To install updates for Azure Stack HCI version 22H2, use Windows Admin Center (Cluster
Manager > Updates). Cluster Aware Updating (CAU) is supported beginning with this
version. To use PowerShell or connect via Remote Desktop and use Server Configuration
Tool (SConfig), see Update Azure Stack HCI clusters.

For solution updates (such as driver and firmware updates), see your solution vendor.
Change a single-node to a multi-node cluster
(optional)
You can add servers to your single-node cluster, also known as scaling out, though there
are some manual steps you must take to properly configure Storage Spaces Direct fault
domains ( FaultDomainAwarenessDefault ) in the process. These steps aren't present when
adding servers to clusters with two or more servers.

1. Validate the cluster by specifying the existing server and the new server: Validate
an Azure Stack HCI cluster - Azure Stack HCI | Microsoft Docs.
2. If cluster validation was successful, add the new server to the cluster: Add or
remove servers for an Azure Stack HCI cluster - Azure Stack HCI | Microsoft Docs.
3. Once the server is added, change the cluster's fault domain awareness from
PhysicalDisk to StorageScaleUnit: Inline fault domain changes.
4. Optionally, if more resiliency is needed, adjust the volume resiliency type from a 2-
way mirror to a Nested 2-way mirror: Single-server to two-node cluster.
5. Set up a cluster witness.

Next steps
Deploy workload – AVD
Deploy workload – AKS-HCI
Deploy workload – Azure Arc-enabled data services
Single server scale-out for your Azure
Stack HCI
Article • 07/10/2023

Applies to: Azure Stack HCI, version 22H2

Azure Stack HCI version 22H2 supports inline fault domain and resiliency changes for
single-server cluster scale-out. This article describes how you can scale out your Azure
Stack HCI cluster.

About single server cluster scale-out


Azure Stack HCI version 22H2 provides easy scaling options to go from a single-server
cluster to a two-node cluster, and from a two-node cluster to a three-node cluster. The
following diagram shows how a single server can be scaled out to a multi-node cluster
on your Azure Stack HCI.

Inline fault domain changes


When scaling up from a single-server cluster to a two-node cluster, the storage fault
domain first needs to be changed from type PhysicalDisk to StorageScaleUnit . The
change needs to be applied to all virtual disks and storage tiers. Extra nodes can be
created and the data is evenly balanced across all nodes in the cluster.

Complete the following steps to correctly set fault domains after adding a node:

1. Run PowerShell as Administrator.

2. Change the fault domain type of the storage pool:

PowerShell

Get-StoragePool -FriendlyName <s2d*> | Set-StoragePool -FaultDomainAwarenessDefault StorageScaleUnit

3. Remove the Cluster Performance History volume:

PowerShell

Remove-VirtualDisk -FriendlyName ClusterPerformanceHistory

4. Generate new storage tiers and recreate the cluster performance history volume by
running the following command:

PowerShell

Enable-ClusterStorageSpacesDirect -Verbose

5. Remove storage tiers that are no longer applicable by running the following
command. See the Storage tier summary table for more information.

PowerShell

Remove-StorageTier -FriendlyName <tier_name>

6. Change the fault domain type of existing volumes:

For a non-tiered volume, run the following command:

PowerShell

Set-VirtualDisk -FriendlyName <name> -FaultDomainAwareness StorageScaleUnit

To check the progress of this change, run the following commands:


PowerShell

Get-VirtualDisk -FriendlyName <volume_name> | FL FaultDomainAwareness


Get-StorageJob

Here is sample output from the previous commands:

PowerShell

PS C:\> Get-VirtualDisk -FriendlyName DemoVol | FL FaultDomainAwareness

FaultDomainAwareness : StorageScaleUnit

PS C:\> Get-StorageJob

Name              IsBackgroundTask ElapsedTime JobState PercentComplete BytesProcessed BytesTotal
----              ---------------- ----------- -------- --------------- -------------- ----------
S2DPool-Rebalance True             00:00:10    Running  0               0 B            512 MB

For a tiered volume, run the following command:

PowerShell

Get-StorageTier -FriendlyName <volume_name*> | Set-StorageTier -FaultDomainAwareness StorageScaleUnit

To check the fault domain awareness of storage tiers, run the following command:

PowerShell

Get-StorageTier -FriendlyName <volume_name*> | FL FriendlyName, FaultDomainAwareness

7 Note

The prior commands don't work for changing from StorageScaleUnit to


PhysicalDisk , or from StorageScaleUnit to Node or Chassis types.

Inline resiliency changes


Once the inline fault domain changes are made, volume resiliency can be increased to
handle node scale-out in the following scenarios.

Run the following command to check the progress of the resiliency changes. The repair
operation should be observed for all volumes in the cluster.

PowerShell

Get-StorageJob

This command displays only ongoing jobs.

Single-server to two-node cluster


To remain as a two-way mirror, no action is required. To convert a two-way mirror to a
nested two-way mirror, do the following:

For a non-tiered volume, run the following commands to first set the virtual disk:

PowerShell

Set-VirtualDisk -FriendlyName <name> -NumberOfDataCopies 4

For a tiered volume, run the following command:

PowerShell

Get-StorageTier -FriendlyName <volume_name*> | Set-StorageTier -NumberOfDataCopies 4

Then, move the volume to a different node to remount the volume. A remount is
needed as ReFS only recognizes provisioning type at mount time.

PowerShell

Move-ClusterSharedVolume -Name <name> -Node <node>

Two-node to three-node+ cluster


To remain as a two-way mirror, no action is required. To convert a two-way mirror to a
three-way or larger mirror, the following procedure is recommended.
Existing two-way mirror volumes can also take advantage of this using the following
PowerShell commands. For example, for a single-server cluster or a three-node or larger
cluster, you convert your two-way mirror volume into a three-way mirror volume.

The following scenarios are not supported:

Scaling down, such as from a three-way mirror to a two-way mirror.


Scaling to or from mirror-accelerated parity volumes.
Scaling from nested two-way mirror or nested mirror-accelerated parity volumes.

For a non-tiered volume, run the following command:

PowerShell

Set-VirtualDisk -FriendlyName <name> -NumberOfDataCopies 3

For a tiered volume, run the following command:

PowerShell

Get-StorageTier -FriendlyName <volume_name*> | Set-StorageTier -NumberOfDataCopies 3

Then, move the volume to a different node to remount the volume. A remount is
needed as ReFS only recognizes provisioning type at mount time.

PowerShell

Move-ClusterSharedVolume -Name <name> -Node <node>

Next steps
See ReFS for more information.
Create an Azure Stack HCI cluster using
Windows PowerShell
Article • 04/17/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2

In this article, you learn how to use Windows PowerShell to create an Azure Stack HCI
hyperconverged cluster that uses Storage Spaces Direct. If you'd rather use the Cluster
Creation wizard in Windows Admin Center to create the cluster, see Create the cluster
with Windows Admin Center.

7 Note

If you're doing a single server installation of Azure Stack HCI 21H2, use PowerShell
to create the cluster.

You have a choice between two cluster types:

Standard cluster with one or two server nodes, all residing in a single site.
Stretched cluster with at least four server nodes that span across two sites, with
two nodes per site.

For the single server scenario, complete the same instructions for the one server.

7 Note

Stretch clusters are not supported in a single server configuration.

In this article, we create an example cluster named Cluster1 that is composed of four
server nodes named Server1, Server2, Server3, and Server4.

For the stretched cluster scenario, we will use ClusterS1 as the name and use the same
four server nodes stretched across sites Site1 and Site2.

For more information about stretched clusters, see Stretched clusters overview.

If you're interested in testing Azure Stack HCI, but have limited or no spare hardware,
check out the Azure Stack HCI Evaluation Guide , where we'll walk you through
experiencing Azure Stack HCI using nested virtualization inside an Azure VM. Or try the
Create a VM-based lab for Azure Stack HCI tutorial to create your own private lab
environment using nested virtualization on a server of your choice to deploy VMs
running Azure Stack HCI for clustering.

Before you begin


Before you begin, make sure you:

Have read the Azure Stack HCI system requirements.


Have read the Physical network requirements and Host network requirements for
Azure Stack HCI.
Install the Azure Stack HCI OS on each server in the cluster. See Deploy the Azure
Stack HCI operating system.
Ensure all servers are in the correct time zone.
Have an account that's a member of the local Administrators group on each server.
Have rights in Active Directory to create objects.
For stretched clusters, set up your two sites beforehand in Active Directory.

Using Windows PowerShell


You can either run PowerShell locally in an RDP session on a host server, or you can run
PowerShell remotely from a management computer. This article will cover the remote
option.

When running PowerShell from a management computer, include the -Name or -Cluster parameter with the name of the server or cluster you are managing. In addition, you may need to specify the fully qualified domain name (FQDN) when using the -ComputerName parameter for a server node.

You will also need the Remote Server Administration Tools (RSAT) cmdlets and
PowerShell modules for Hyper-V and Failover Clustering. If these aren't already available
in your PowerShell session on your management computer, you can add them using the
following command: Add-WindowsFeature RSAT-Clustering-PowerShell .
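For example, on a Windows Server management computer (this assumes Windows Server; on a Windows client, add the equivalent RSAT optional features instead), you could install both modules at once:

PowerShell

Add-WindowsFeature RSAT-Clustering-PowerShell, RSAT-Hyper-V-Tools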

Step 1: Provision the servers


First we will connect to each of the servers, join them to a domain (the same domain the
management computer is in), and install required roles and features.

Step 1.1: Connect to the servers


To connect to the servers, you must first have network connectivity, be joined to the
same domain or a fully trusted domain, and have local administrative permissions to the
servers.

Open PowerShell and use either the fully-qualified domain name or the IP address of
the server you want to connect to. You'll be prompted for a password after you run the
following command on each server.

For this example, we assume that the servers have been named Server1, Server2,
Server3, and Server4:

PowerShell

Enter-PSSession -ComputerName "Server1" -Credential "Server1\Administrator"

Here's another example of doing the same thing:

PowerShell

$myServer1 = "Server1"

$user = "$myServer1\Administrator"

Enter-PSSession -ComputerName $myServer1 -Credential $user

 Tip

When running PowerShell commands from your management PC, you might get an
error like WinRM cannot process the request. To fix this, use PowerShell to add each
server to the Trusted Hosts list on your management computer. This list supports
wildcards, like Server* for example.

Set-Item WSMAN:\Localhost\Client\TrustedHosts -Value Server1 -Force

To view your Trusted Hosts list, type Get-Item WSMAN:\Localhost\Client\TrustedHosts .

To empty the list, type Clear-Item WSMAN:\Localhost\Client\TrustedHosts .
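For example, to trust all four example servers at once with a wildcard (this overwrites the existing list; the value is an assumption based on the example server names in this article):

PowerShell

Set-Item WSMAN:\Localhost\Client\TrustedHosts -Value "Server*" -Force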

Step 1.2: Join the domain and add domain accounts


So far you've connected to each server node with the local administrator account
<ServerName>\Administrator .
To proceed, you'll need to join the servers to a domain and use the domain account that
is in the local Administrators group on every server.

Use the Enter-PSSession cmdlet to connect to each server and run the following cmdlet,
substituting the server name, domain name, and domain credentials:

PowerShell

Add-Computer -NewName "Server1" -DomainName "contoso.com" -Credential "Contoso\User" -Restart -Force

If your administrator account isn't a member of the Domain Admins group, add your
administrator account to the local Administrators group on each server - or better yet,
add the group you use for administrators. You can use the following command to do so:

PowerShell

Add-LocalGroupMember -Group "Administrators" -Member "<Domain\AccountName>"
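If you prefer to do this on every server in one pass, here's a minimal sketch; the group name Contoso\HciAdmins is a placeholder, not from the original article:

PowerShell

Invoke-Command -ComputerName "Server1", "Server2", "Server3", "Server4" -ScriptBlock {
    # Placeholder group name - replace with the admin account or group you actually use
    Add-LocalGroupMember -Group "Administrators" -Member "Contoso\HciAdmins"
}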

Step 1.3: Install roles and features


The next step is to install required Windows roles and features on every server for the
cluster. Here are the roles to install:

BitLocker
Data Center Bridging
Failover Clustering
File Server
FS-Data-Deduplication module
Hyper-V
Hyper-V PowerShell
RSAT-AD-Clustering-PowerShell module
RSAT-AD-PowerShell module
NetworkATC
NetworkHUD
SMB Bandwidth Limit
Storage Replica (for stretched clusters)

Use the following command for each server (if you're connected via Remote Desktop
omit the -ComputerName parameter here and in subsequent commands):

PowerShell
Install-WindowsFeature -ComputerName "Server1" -Name "BitLocker", "Data-
Center-Bridging", "Failover-Clustering", "FS-FileServer", "FS-Data-
Deduplication", "FS-SMBBW", "Hyper-V", "Hyper-V-PowerShell", "RSAT-AD-
Powershell", "RSAT-Clustering-PowerShell", "NetworkATC", "NetworkHUD",
"Storage-Replica" -IncludeAllSubFeature -IncludeManagementTools

To run the command on all servers in the cluster at the same time, use the following
script, modifying the list of variables at the beginning to fit your environment:

PowerShell

# Fill in these variables with your values
$ServerList = "Server1", "Server2", "Server3", "Server4"
$FeatureList = "BitLocker", "Data-Center-Bridging", "Failover-Clustering",
"FS-FileServer", "FS-Data-Deduplication", "Hyper-V", "Hyper-V-PowerShell",
"RSAT-AD-Powershell", "RSAT-Clustering-PowerShell", "NetworkATC",
"NetworkHUD", "FS-SMBBW", "Storage-Replica"

# This part runs the Install-WindowsFeature cmdlet on all servers in
# $ServerList, passing the list of features in $FeatureList.
Invoke-Command ($ServerList) {
    Install-WindowsFeature -Name $Using:FeatureList -IncludeAllSubFeature -IncludeManagementTools
}

Next, restart all the servers:

PowerShell

$ServerList = "Server1", "Server2", "Server3", "Server4"

Restart-Computer -ComputerName $ServerList -WSManAuthentication Kerberos
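If you want the script to block until the servers are reachable over PowerShell remoting again, a hedged variant (the timeout value is an assumption) is:

PowerShell

Restart-Computer -ComputerName $ServerList -WSManAuthentication Kerberos -Wait -For PowerShell -Timeout 900 -Force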

Step 2: Prep for cluster setup


Next, verify that your servers are ready for clustering.

As a sanity check first, consider running the following commands to make sure that your
servers don't already belong to a cluster:

Use Get-ClusterNode to show all nodes:

PowerShell

Get-ClusterNode

Use Get-ClusterResource to show all cluster resources:


PowerShell

Get-ClusterResource

Use Get-ClusterNetwork to show all cluster networks:

PowerShell

Get-ClusterNetwork

Step 2.1: Prepare drives


Before you enable Storage Spaces Direct, ensure your permanent drives are empty. Run
the following script to remove any old partitions and other data.

7 Note

Exclude any removable drives attached to a server node from the script. If you are
running this script locally from a server node for example, you don't want to wipe
the removable drive you might be using to deploy the cluster.

PowerShell

# Fill in these variables with your values
$ServerList = "Server1", "Server2", "Server3", "Server4"

Invoke-Command ($ServerList) {
    Update-StorageProviderCache
    # Remove any existing (non-primordial) storage pools and their virtual disks
    Get-StoragePool | ? IsPrimordial -eq $false | Set-StoragePool -IsReadOnly:$false -ErrorAction SilentlyContinue
    Get-StoragePool | ? IsPrimordial -eq $false | Get-VirtualDisk | Remove-VirtualDisk -Confirm:$false -ErrorAction SilentlyContinue
    Get-StoragePool | ? IsPrimordial -eq $false | Remove-StoragePool -Confirm:$false -ErrorAction SilentlyContinue
    Get-PhysicalDisk | Reset-PhysicalDisk -ErrorAction SilentlyContinue
    # Wipe every non-boot, non-system disk that still has a partition table
    Get-Disk | ? Number -ne $null | ? IsBoot -ne $true | ? IsSystem -ne $true | ? PartitionStyle -ne RAW | % {
        $_ | Set-Disk -isoffline:$false
        $_ | Set-Disk -isreadonly:$false
        $_ | Clear-Disk -RemoveData -RemoveOEM -Confirm:$false
        $_ | Set-Disk -isreadonly:$true
        $_ | Set-Disk -isoffline:$true
    }
    # List the disks that are now RAW and ready for Storage Spaces Direct
    Get-Disk | Where Number -Ne $Null | Where IsBoot -Ne $True | Where IsSystem -Ne $True | Where PartitionStyle -Eq RAW | Group -NoElement -Property FriendlyName
} | Sort -Property PsComputerName, Count

Step 2.2: Test cluster configuration


In this step, you'll ensure that the server nodes are configured correctly to create a
cluster. The Test-Cluster cmdlet is used to run tests to verify your configuration is
suitable to function as a hyperconverged cluster. The example below uses the -Include
parameter, with the specific categories of tests specified. This ensures that the correct
tests are included in the validation.

PowerShell

Test-Cluster -Node $ServerList -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"

Step 3: Create the cluster


You are now ready to create a cluster with the server nodes that you have validated in
the preceding steps.

When creating the cluster, you'll get a warning that states: "There were issues while creating the clustered role that may prevent it from starting. For more information, view the report file below." You can safely ignore this warning. It's due to no disks being available for the cluster witness that you will create later.

7 Note

If the servers are using static IP addresses, modify the following command to reflect the static IP address by adding the -StaticAddress <X.X.X.X> parameter.

PowerShell

$ClusterName="cluster1"

New-Cluster -Name $ClusterName -Node $ServerList -NoStorage
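For example, with a static cluster IP address (the address shown is a placeholder for an unused address on your management subnet):

PowerShell

New-Cluster -Name $ClusterName -Node $ServerList -NoStorage -StaticAddress "192.168.1.100"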

After the cluster is created, it can take some time for the cluster name to be replicated
via DNS across your domain, especially if workgroup servers have been newly added to
Active Directory. Although the cluster might be displayed in Windows Admin Center, it
might not be available to connect to yet.

A good check to ensure all cluster resources are online:

PowerShell

Get-Cluster -Name $ClusterName | Get-ClusterResource

If resolving the cluster isn't successful after some time, in most cases you can connect by
using the name of one of the clustered servers instead of the cluster name.

Step 4: Configure host networking


Microsoft recommends using Network ATC to deploy host networking if you're running
Azure Stack HCI version 21H2 or newer. Otherwise, see Host network requirements for
specific requirements and information.

Network ATC can automate the deployment of your intended networking configuration
if you specify one or more intent types for your adapters. For more information on
specific intent types, please see: Network Traffic Types.

Step 4.1: Review physical adapters


On one of the cluster nodes, run Get-NetAdapter to review the physical adapters. Ensure
that each node in the cluster has the same named physical adapters and that they
report status as 'Up'.

PowerShell

Get-NetAdapter -Name pNIC01, pNIC02 -CimSession (Get-ClusterNode).Name | Select Name, PSComputerName

If a physical adapter name varies across nodes in your cluster, you can rename it using
Rename-NetAdapter .

PowerShell

Rename-NetAdapter -Name oldName -NewName newName
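For example, to rename a mismatched adapter on a specific node (the old adapter name and node name are assumptions):

PowerShell

Rename-NetAdapter -Name "Ethernet 2" -NewName "pNIC02" -CimSession "Server2"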

Step 4.2: Configure an intent


In this example, an intent is created that specifies the compute and storage intent. See
Simplify host networking with Network ATC for more intent examples.

Run the following command to add the storage and compute intent types to pNIC01
and pNIC02. Note that we specify the -ClusterName parameter.

PowerShell

Add-NetIntent -Name Cluster_ComputeStorage -Compute -Storage -ClusterName $ClusterName -AdapterName pNIC01, pNIC02

The command should immediately return after some initial verification.

Step 4.3: Validate intent deployment


Run the Get-NetIntent cmdlet to see the cluster intent. If you have more than one
intent, you can specify the Name parameter to see details of only a specific intent.

PowerShell

Get-NetIntent -ClusterName $ClusterName

To see the provisioning status of the intent, run the Get-NetIntentStatus command:

PowerShell

Get-NetIntentStatus -ClusterName $ClusterName -Name Cluster_ComputeStorage

Note the Status parameter, which shows Provisioning, Validating, Success, or Failure.

The status should display Success in a few minutes. If this doesn't occur, or if the status shows Failure, check the event viewer for issues.
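If you'd rather wait in script instead of rerunning the command manually, a minimal polling sketch is shown below; it assumes the status is surfaced through a ConfigurationStatus property and uses the status values named above.

PowerShell

do {
    Start-Sleep -Seconds 30
    # ConfigurationStatus is an assumption about the property name; inspect the cmdlet output on your cluster
    $status = Get-NetIntentStatus -ClusterName $ClusterName -Name Cluster_ComputeStorage
} until ($status.ConfigurationStatus -contains 'Success' -or $status.ConfigurationStatus -contains 'Failure')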

7 Note

At this time, Network ATC does not configure IP addresses for any of its managed
adapters. Once Get-NetIntentStatus reports status completed, you should add IP
addresses to the adapters.
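For example, a hedged sketch of adding an address on one node; the interface alias and IP values are illustrative assumptions, so check Get-NetAdapter for the actual adapter or vNIC names in your deployment:

PowerShell

New-NetIPAddress -InterfaceAlias "<storage adapter or vNIC name>" -IPAddress 10.71.1.1 -PrefixLength 24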

Step 5: Set up sites (stretched cluster)


This task only applies if you are creating a stretched cluster between two sites with at
least two servers in each site.

7 Note

If you have set up Active Directory Sites and Services beforehand, you do not need
to create the sites manually as described below.

Step 5.1: Create sites


In the cmdlet below, FaultDomain is simply another name for a site. This example uses
"ClusterS1" as the name of the stretched cluster.

PowerShell

New-ClusterFaultDomain -CimSession "ClusterS1" -FaultDomainType Site -Name "Site1"

PowerShell

New-ClusterFaultDomain -CimSession "ClusterS1" -FaultDomainType Site -Name "Site2"

Use the Get-ClusterFaultDomain cmdlet to verify that both sites have been created for
the cluster.

PowerShell

Get-ClusterFaultDomain -CimSession "ClusterS1"

Step 5.2: Assign server nodes


Next, we will assign the four server nodes to their respective sites. In the example below,
Server1 and Server2 are assigned to Site1, while Server3 and Server4 are assigned to
Site2.

PowerShell

Set-ClusterFaultDomain -CimSession "ClusterS1" -Name "Server1", "Server2" -Parent "Site1"

PowerShell

Set-ClusterFaultDomain -CimSession "ClusterS1" -Name "Server3", "Server4" -Parent "Site2"

Using the Get-ClusterFaultDomain cmdlet, verify the nodes are in the correct sites.

PowerShell

Get-ClusterFaultDomain -CimSession "ClusterS1"

Step 5.3: Set a preferred site


You can also define a global preferred site, which means that specified resources and
groups must run on the preferred site. This setting can be defined at the site level using
the following command:

PowerShell

(Get-Cluster).PreferredSite = "Site1"

Specifying a preferred Site for stretched clusters has the following benefits:

Cold start - during a cold start, virtual machines are placed in the preferred site.

Quorum voting:

Using a dynamic quorum, weighting is decreased from the passive (replicated) site first to ensure that the preferred site survives if all other things are equal. In addition, server nodes are pruned from the passive site first during regrouping after events such as asymmetric network connectivity failures.

During a quorum split of two sites, if the cluster witness cannot be contacted,
the preferred site is automatically elected to win. The server nodes in the
passive site then drop out of cluster membership. This allows the cluster to
survive a simultaneous 50% loss of votes.

The preferred site can also be configured at the cluster role or group level. In this case, a
different preferred site can be configured for each virtual machine group. This enables a
site to be active and preferred for specific virtual machines.
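For example, a sketch that sets a preferred site for a single VM's cluster group (the group name is a placeholder):

PowerShell

(Get-ClusterGroup -Name "VM01").PreferredSite = "Site2"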

Step 5.4: Set up Stretch Clustering with Network ATC


In version 22H2 and later, Network ATC adds Stretch as an intent type, so you can use Network ATC to set up stretch clustering. To deploy an intent with stretch clustering through Network ATC, run the following command:

PowerShell

Add-NetIntent -Name StretchIntent -Stretch -AdapterName "pNIC01", "pNIC02"

A stretch intent can also be combined with other intents, when deploying with Network
ATC.

SiteOverrides
Based on steps 5.1-5.3 you can add your pre-created sites to your stretch intent
deployed with Network ATC. Network ATC will handle this using SiteOverrides. To create
a SiteOverride, run:

PowerShell

$siteOverride = New-NetIntentSiteOverrides

Once you have created a siteOverride, you can set any property for it. Make sure that the Name property of the siteOverride exactly matches the name of your site in the ClusterFaultDomain. A mismatch of names between the ClusterFaultDomain and the siteOverride results in the siteOverride not being applied.

The properties you can set for a particular siteOverride are Name, StorageVLAN, and StretchVLAN. For example, this is how you create two siteOverrides for your two sites, site1 and site2:

PowerShell

$siteOverride1 = New-NetIntentSiteOverrides

$siteoverride1.Name = "site1"

$siteOverride1.StorageVLAN = 711

$siteOverride1.StretchVLAN = 25

$siteOverride2 = New-NetIntentSiteOverrides

$siteOverride2.Name = "site2"

$siteOverride2.StorageVLAN = 712

$siteOverride2.StretchVLAN = 26

You can run $siteOverride1 and $siteOverride2 in your PowerShell window to make sure all the properties are set as desired.

Finally, to add one or more siteOverrides to your intent, run:

PowerShell

Add-NetIntent -Name StretchIntent -Stretch -AdapterName "pNIC01", "pNIC02" -SiteOverrides $siteOverride1, $siteOverride2

Step 6: Enable Storage Spaces Direct


After creating the cluster, use the Enable-ClusterStorageSpacesDirect cmdlet, which will
enable Storage Spaces Direct and do the following automatically:

Create a storage pool: Creates a storage pool for the cluster that has a name like
"Cluster1 Storage Pool".

Create a Cluster Performance History disk: Creates a Cluster Performance History


virtual disk in the storage pool.

Create data and log volumes: Creates a data volume and a log volume in the
storage pool.

Configure Storage Spaces Direct caches: If there is more than one media (drive)
type available for Storage Spaces Direct, it enables the fastest as cache devices
(read and write in most cases).

Create tiers: Creates two tiers as default tiers. One is called "Capacity" and the
other called "Performance". The cmdlet analyzes the devices and configures each
tier with the mix of device types and resiliency.

For the single-server scenario, the only FaultDomainAwarenessDefault is PhysicalDisk. The Enable-ClusterStorageSpacesDirect cmdlet detects a single server and automatically configures FaultDomainAwarenessDefault as PhysicalDisk during enablement.

For stretched clusters, the Enable-ClusterStorageSpacesDirect cmdlet will also do the


following:

Checks to see if sites have been set up
Determines which nodes are in which sites
Determines what storage each node has available
Checks to see if the Storage Replica feature is installed on each node
Creates a storage pool for each site and identifies it with the name of the site
Creates data and log volumes in each storage pool - one per site

The following command enables Storage Spaces Direct on a multi-node cluster. You can
also specify a friendly name for a storage pool, as shown here:

PowerShell

Enable-ClusterStorageSpacesDirect -PoolFriendlyName "$ClusterName Storage Pool" -CimSession $ClusterName

Here's an example on a single-node cluster, disabling the storage cache:

PowerShell

Enable-ClusterStorageSpacesDirect -CacheState Disabled

To see the storage pools, use this:

PowerShell

Get-StoragePool -CimSession $ClusterName
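As an optional check (not in the original article), you can also confirm the fault domain awareness and health of the new pool, for example that a single-server cluster shows PhysicalDisk:

PowerShell

Get-StoragePool -CimSession $ClusterName -IsPrimordial $false | Select-Object FriendlyName, FaultDomainAwarenessDefault, HealthStatus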

After you create the cluster


Now that you are done, there are still some important tasks you need to complete:

Set up a cluster witness if you're using a two-node or larger cluster. See Set up a cluster witness. (A cloud witness sketch follows this list.)
Create your volumes. See Create volumes.
When creating volumes on a single-node cluster, you must use PowerShell. See Create volumes using PowerShell.
For stretched clusters, create volumes and set up replication using Storage Replica.
See Create volumes and set up replication for stretched clusters.
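For example, a hedged sketch of configuring an Azure cloud witness (the storage account name and access key are placeholders; see Set up a cluster witness for the full procedure and other witness types):

PowerShell

Set-ClusterQuorum -Cluster $ClusterName -CloudWitness -AccountName "<storage account name>" -AccessKey "<storage account access key>"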

Next steps
Register your cluster with Azure. See Connect Azure Stack HCI to Azure.
Do a final validation of the cluster. See Validate an Azure Stack HCI cluster
Manage host networking. See Manage host networking using Network ATC.
Deploy host networking with Network ATC
Article • 05/22/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2

This article guides you through the requirements, best practices, and deployment of
Network ATC. Network ATC simplifies the deployment and network configuration
management for Azure Stack HCI clusters. Network ATC provides an intent-based
approach to host network deployment. By specifying one or more intents (management,
compute, or storage) for a network adapter, you can automate the deployment of the
intended configuration. For more information on Network ATC, including an overview
and definitions, please see Network ATC overview.

If you have feedback or encounter any issues, review the Requirements and best
practices section, check the Network ATC event log, and work with your Microsoft
support team.

Requirements and best practices


22H2

The following are requirements and best practices for using Network ATC in Azure
Stack HCI:

All servers in the cluster must be running Azure Stack HCI, version 22H2 with
the November update (or later).

Must use physical hosts that are Azure Stack HCI certified.

Adapters in the same Network ATC intent must be symmetric (of the same
make, model, speed, and configuration) and available on each cluster node.

Asymmetric adapters lead to a failure in deploying any intent.

With Azure Stack HCI 22H2, Network ATC will automatically confirm adapter
symmetry for all nodes in the cluster before deploying an intent.

For more information on adapter symmetry, see Switch Embedded Teaming (SET).
Each physical adapter specified in an intent must use the same name on all
nodes in the cluster.

Ensure each network adapter has an "Up" status, as verified by the PowerShell
Get-NetAdapter cmdlet.

Ensure all hosts have the November Azure Stack HCI update or later.

Each node must have the following Azure Stack HCI features installed:
Network ATC
Network HUD
Hyper-V
Failover Clustering
Data Center Bridging

Here's an example of installing the required features via PowerShell:

PowerShell

Install-WindowsFeature -Name NetworkATC, NetworkHUD, Hyper-V, 'Failover-Clustering', 'Data-Center-Bridging' -IncludeManagementTools

Best practice: Insert each adapter in the same PCI slot(s) in each host. This
practice leads to ease in automated naming conventions by imaging systems.

Best practice: Configure the physical network (switches) prior to Network ATC
including VLANs, MTU, and DCB configuration. For more information, please
see Physical Network Requirements.

) Important

Updated: Deploying Network ATC in virtual machines may be used for test and
validation purposes only. VM-based deployment requires an override to the default
adapter settings to disable the NetworkDirect property. For more information on
submission of an override, please see: Override default network settings.

Deploying Network ATC in standalone mode may be used for test and validation
purposes only.

Common Network ATC commands


There are several new PowerShell commands included with Network ATC. Run the Get-Command -ModuleName NetworkATC cmdlet to identify them. Ensure PowerShell is run as an administrator.

The Remove-NetIntent cmdlet removes an intent from the local node or cluster. This
command doesn't destroy the invoked configuration.
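For example, a sketch that removes a cluster-scoped intent by name (the intent name here is an assumption):

PowerShell

Remove-NetIntent -Name Cluster_ComputeStorage -ClusterName (Get-Cluster).Name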

Example intents
Network ATC modifies how you deploy host networking, not what you deploy. You can
deploy multiple scenarios so long as each scenario is supported by Microsoft. Here are
some examples of common deployment options, and the PowerShell commands
needed. These aren't the only combinations available but they should give you an idea
of the possibilities.

For simplicity we only demonstrate two physical adapters per SET team, however it's
possible to add more. For more information, please see Plan Host Networking.

Fully converged intent


For this intent, compute, storage, and management networks are deployed and
managed across all cluster nodes.

22H2

PowerShell
Add-NetIntent -Name ConvergedIntent -Management -Compute -Storage -AdapterName pNIC01, pNIC02

Converged compute and storage intent; separate management intent

Two intents are managed across cluster nodes. Management uses pNIC01 and pNIC02; compute and storage are on different adapters.

22H2

PowerShell

Add-NetIntent -Name Mgmt -Management -AdapterName pNIC01, pNIC02

Add-NetIntent -Name Compute_Storage -Compute -Storage -AdapterName pNIC03, pNIC04

Fully disaggregated intent


For this intent, compute, storage, and management networks are all managed on
different adapters across all cluster nodes.

22H2

PowerShell

Add-NetIntent -Name Mgmt -Management -AdapterName pNIC01, pNIC02

Add-NetIntent -Name Compute -Compute -AdapterName pNIC03, pNIC04

Add-NetIntent -Name Storage -Storage -AdapterName pNIC05, pNIC06

Storage-only intent
For this intent, only storage is managed. Management and compute adapters aren't
managed by Network ATC.

22H2

PowerShell

Add-NetIntent -Name Storage -Storage -AdapterName pNIC05, pNIC06

Compute and management intent


For this intent, compute and management networks are managed, but not storage.

22H2

PowerShell

Add-NetIntent -Name Management_Compute -Management -Compute -AdapterName pNIC01, pNIC02

Multiple compute (switch) intents


For this intent, multiple compute switches are managed.

22H2
PowerShell

Add-NetIntent -Name Compute1 -Compute -AdapterName pNIC03, pNIC04

Add-NetIntent -Name Compute2 -Compute -AdapterName pNIC05, pNIC06

Default Network ATC values


This section lists some of the key default values used by Network ATC.

Default values
This section covers additional default values that Network ATC sets in versions 22H2 and later.

Default VLANs

Applies to: Azure Stack HCI 21H2, 22H2

Network ATC uses the following VLANs by default for adapters with the storage intent
type. If the adapters are connected to a physical switch, these VLANs must be allowed
on the physical network. If the adapters are switchless, no additional configuration is
required.

Adapter Intent       Default Value
-----------------    ------------------------------------------------------
Management           Configured VLAN for management adapters isn't modified
Storage Adapter 1    711
Storage Adapter 2    712
Storage Adapter 3    713
Storage Adapter 4    714
Storage Adapter 5    715
Storage Adapter 6    716
Storage Adapter 7    717
Storage Adapter 8    718
Future Use           719


Consider the following command:

22H2

PowerShell

Add-NetIntent -Name MyIntent -Storage -AdapterName pNIC01, pNIC02, pNIC03, pNIC04

The physical NIC (or virtual NIC if necessary) is configured to use VLANs 711, 712, 713,
and 714 respectively.

7 Note

Network ATC allows you to change the VLANs used with the StorageVlans
parameter on Add-NetIntent .
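For example, a sketch that overrides the defaults with your own VLAN IDs when creating the intent (the VLAN IDs shown are assumptions):

PowerShell

Add-NetIntent -Name MyIntent -Storage -AdapterName pNIC01, pNIC02 -StorageVlans 201, 202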

Automatic storage IP addressing

Applies to: Azure Stack HCI 22H2

Network ATC will automatically configure valid IP Addresses for adapters with the
storage intent type. Network ATC does this in a uniform manner across all nodes in your
cluster and verifies that the address chosen isn't already in use on the network.

The default IP Address for each adapter on each node in the storage intent will be set
up as follows:

Adapter   IP Address and Subnet   VLAN
-------   ---------------------   ----
pNIC1     10.71.1.X               711
pNIC2     10.71.2.X               712
pNIC3     10.71.3.X               713

To override Automatic Storage IP Addressing, create a storage override and pass the
override when creating an intent:

PowerShell

$StorageOverride = New-NetIntentStorageOverrides

$StorageOverride.EnableAutomaticIPGeneration = $false

PowerShell

Add-NetIntent -Name MyIntent -Storage -Compute -AdapterName 'pNIC01', 'pNIC02' -StorageOverrides $StorageOverride

Cluster network settings

Applies to: Azure Stack HCI 22H2

Network ATC configures a set of Cluster Network Features by default. The defaults are
listed below:

Property                                     Default
-----------------------------------------    ----------------------------------------------
EnableNetworkNaming                          $true
EnableLiveMigrationNetworkSelection          $true
EnableVirtualMachineMigrationPerformance     $true
VirtualMachineMigrationPerformanceOption     Default is calculated: SMB, TCP or Compression
MaximumVirtualMachineMigrations              1
MaximumSMBMigrationBandwidthInGbps           Default is calculated based on set-up

Default Data Center Bridging (DCB) configuration


Network ATC establishes the following priorities and bandwidth reservations. This
configuration should also be configured on the physical network.

Policy       Use                           Default Priority   Default Bandwidth Reservation
----------   ---------------------------   ----------------   -----------------------------------------------------------------------
Cluster      Cluster Heartbeat             7                  2% if the adapter(s) are <= 10 Gbps; 1% if the adapter(s) are > 10 Gbps
             reservation
SMB_Direct   RDMA Storage Traffic          3                  50%
Default      All other traffic types       0                  Remainder

7 Note
Network ATC allows you to override default settings like default bandwidth
reservation. For examples, see Update or override network settings.

Common Error Messages


With the new event logs in 22H2, there are some simple troubleshooting methods to identify intent deployment failures. This section outlines some of the common fixes when an issue is encountered.

Error: AdapterBindingConflict

Scenario 1: An adapter is actually bound to an existing vSwitch that conflicts with the
new vSwitch that is being deployed by Network ATC.

Solution: Remove the conflicting vSwitch, then run Set-NetIntentRetryState

Scenario 2: An adapter is bound to the component, but not necessarily a vSwitch.

Solution: Disable the vms_pp component (unbind the adapter from the vSwitch) then
run Set-NetIntentRetryState.

Error: ConflictingTrafficClass

This issue occurs because a traffic class is already configured. This preconfigured traffic class conflicts with the traffic classes being deployed by Network ATC. For example, you may have already deployed a traffic class called SMB, while Network ATC deploys a similar traffic class with a different name.

Solution:

Clear the existing DCB configuration on the system, then run Set-NetIntentRetryState:

PowerShell

Get-NetQosTrafficClass | Remove-NetQosTrafficClass

Get-NetQosPolicy | Remove-NetQosPolicy -Confirm:$false

Get-NetQosFlowControl | Disable-NetQosFlowControl

Error: RDMANotOperational

You may see this message:

1. If the network adapter uses an inbox driver. Inbox drivers aren't supported and
must be updated.

Solution: Upgrade the driver for the adapter.

2. If SR-IOV is disabled in the BIOS.

Solution: Enable SR-IOV for the adapter in the system BIOS

3. If RDMA is disabled in the BIOS

Solution: Enable RDMA for the adapter in the system BIOS

Error: InvalidIsolationID


This message will occur when RoCE RDMA is in use and you have overridden the default
VLAN with a value that can't be used with that protocol. For example, RoCE RDMA
requires a non-zero VLAN so that Priority Flow Control (PFC) markings can be added to
the frame. A VLAN value between 1 - 4094 must be used. Network ATC won't override
the value you specified without administrator intervention for several reasons. To resolve
this issue:

1. Choose iWARP as the RDMA (NetworkDirect) protocol

Solution: If supported by the adapter, Network ATC automatically chooses iWARP as its RDMA protocol which may use a VLAN ID of 0. Remove the override that enforces RoCE as the chosen protocol.

2. Use the default VLANs

Solution: We highly recommend using the Network ATC Default VLANs

3. Use a valid VLAN

When specifying a VLAN, use the -StorageVLANs parameter and specify comma-separated values between 1 and 4094.

Next steps
Manage your Network ATC deployment. See Manage Network ATC.
Learn more about Stretched clusters.
Deploy SDN using Windows Admin
Center
Article • 06/28/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2; Windows Server 2022
Datacenter, Windows Server 2019 Datacenter

This article describes how to deploy Software Defined Networking (SDN) through
Windows Admin Center after you configured your Azure Stack HCI cluster. Windows
Admin Center enables you to deploy all the SDN infrastructure components on your
existing Azure Stack HCI cluster, in the following deployment order:

Network Controller
Software Load Balancer (SLB)
Gateway

To deploy SDN Network Controller during cluster creation, see Step 5: SDN (optional) of
the Create cluster wizard.

Alternatively, you can deploy the entire SDN infrastructure through the SDN Express
scripts.

You can also deploy an SDN infrastructure using System Center Virtual Machine
Manager (VMM). For more information, see Manage SDN resources in the VMM fabric.

) Important

You can't use Microsoft System Center VMM 2019 to manage clusters running
Azure Stack HCI, version 21H2 or Windows Server 2022. Instead, you can use
Microsoft System Center VMM 2022.

) Important

You can't use Microsoft System Center VMM 2019 and Windows Admin Center to
manage SDN at the same time.

) Important
You can’t manage SDN on the Standard edition of Windows Server 2022 or
Windows Server 2019. This is due to the limitations in the Remote Server
Administration Tools (RSAT) installation on Windows Admin Center. However, you
can manage SDN on the Datacenter edition of Windows Server 2022 and Windows
Server 2019 and also on the Datacenter: Azure Edition of Windows Server 2022.

Before you begin


Before you begin an SDN deployment, plan out and configure your physical and host
network infrastructure. Reference the following articles:

Physical network requirements


Host network requirements
Create a cluster using Windows Admin Center
Create a cluster using Windows PowerShell
Plan a Software Defined Network infrastructure
The Phased deployment section of Plan a Software Defined Network infrastructure
to determine the capabilities enabled by deploying Network Controller

Requirements
The following requirements must be met for a successful SDN deployment:

All server nodes must have Hyper-V enabled.


All server nodes must be joined to Active Directory.
A virtual switch must be created.
The physical network must be configured.

Download the VHDX file


SDN uses a VHDX file containing either the Azure Stack HCI or Windows Server
operating system (OS) as a source for creating the SDN virtual machines (VMs).

7 Note

The version of the OS in your VHDX must match the version used by the Azure
Stack HCI Hyper-V hosts. This VHDX file is used by all SDN infrastructure
components.
Follow these steps to download an English version of the VHDX file:

1. Go to Azure Stack HCI software download site .

2. Complete the download form and select Submit to display the Azure Stack HCI
software download page.

3. Under Azure Stack HCI, select English – VHDX from the Choose language
dropdown menu, and then select Download Azure Stack HCI.

Currently, a non-English VHDX file is not available for download. If you require a non-
English version, download the corresponding ISO file and convert it to VHDX using the
Convert-WindowsImage cmdlet. You must run this script from a Windows client computer.

You will probably need to run this as Administrator and modify the execution policy for
scripts using the Set-ExecutionPolicy command.

The following is an example of using Convert-WindowsImage :

PowerShell

Install-Module -Name Convert-WindowsImage

Import-Module Convert-WindowsImage

$wimpath = "E:\sources\install.wim"

$vhdpath = "D:\temp\AzureStackHCI.vhdx"

$edition=1

Convert-WindowsImage -SourcePath $wimpath -Edition $edition -VHDPath $vhdpath -SizeBytes 500GB -DiskLayout UEFI

Deploy SDN Network Controller


SDN Network Controller deployment is a functionality of the SDN Infrastructure
extension in Windows Admin Center. Complete the following steps to deploy Network
Controller on your existing Azure Stack HCI cluster.

1. In Windows Admin Center, under Tools, select Settings, and then select
Extensions.

2. On the Installed Extensions tab, verify that the SDN Infrastructure extension is
installed. If not, install it.

3. In Windows Admin Center, under Tools, select SDN Infrastructure, then click Get
Started.

4. Under Cluster settings, under Host, enter a name for the Network Controller. This
is the DNS name used by management clients (such as Windows Admin Center) to
communicate with Network Controller. You can also use the default populated
name.

5. Specify a path to the Azure Stack HCI VHD file. Use Browse to find it quicker.

6. Specify the number of VMs to be dedicated for Network Controller. We strongly recommend three VMs for production deployments.

7. Under Network, enter the VLAN ID of the management network. Network Controller needs connectivity to the same management network as the Hyper-V hosts so that it can communicate with and configure the hosts.

8. For VM network addressing, select either DHCP or Static.


For DHCP, enter the name for the Network Controller VMs. You can also use
the default populated names.

For Static, do the following:


a. Specify an IP address.
b. Specify a subnet prefix.
c. Specify the default gateway.
d. Specify one or more DNS servers. Click Add to add additional DNS servers.

9. Under Credentials, enter the username and password used to join the Network
Controller VMs to the cluster domain.

10. Enter the local administrative password for these VMs.

11. Under Advanced, enter the path to the VMs. You can also use the default
populated path.

7 Note

Universal Naming Convention (UNC) paths aren't supported. For cluster storage-based paths, use a format like C:\ClusterStorage\... .

12. Enter values for MAC address pool start and MAC address pool end. You can also
use the default populated values. This is the MAC pool used to assign MAC
addresses to VMs attached to SDN networks.

13. When finished, click Next: Deploy.

14. Wait until the wizard completes its job. Stay on this page until all progress tasks
are complete, and then click Finish.

15. After the Network Controller VMs are created, configure dynamic DNS updates for
the Network Controller cluster name on the DNS server. For more information, see
Dynamic DNS updates.

Redeploy SDN Network Controller


If the Network Controller deployment fails or you want to deploy it again, do the
following:

1. Delete all Network Controller VMs and their VHDs from all server nodes.

2. Remove the following registry key from all hosts by running this command:
PowerShell

Remove-ItemProperty -path
'HKLM:\SYSTEM\CurrentControlSet\Services\NcHostAgent\Parameters\' -Name
Connections

3. After removing the registry key, remove the cluster from the Windows Admin
Center management, and then add it back.

7 Note

If you don't do this step, you may not see the SDN deployment wizard in
Windows Admin Center.

4. (Additional step only if you plan to uninstall Network Controller and not deploy it
again) Run the following cmdlet on all the servers in your Azure Stack HCI cluster,
and then skip the last step.

PowerShell

Disable-VMSwitchExtension -VMSwitchName "<Compute vmswitch name>" -Name "Microsoft Azure VFP Switch Extension"

5. Run the deployment wizard again.

Deploy SDN Software Load Balancer


SDN SLB deployment is a functionality of the SDN Infrastructure extension in Windows
Admin Center. Complete the following steps to deploy SLB on your existing Azure Stack
HCI cluster.

7 Note

Network Controller must be set up before you configure SLB.

1. In Windows Admin Center, under Tools, select Settings, and then select
Extensions.

2. On the Installed Extensions tab, verify that the SDN Infrastructure extension is
installed. If not, install it.
3. In Windows Admin Center, under Tools, select SDN Infrastructure, then click Get
Started on the Load Balancer tab.

4. Under Load Balancer Settings, under Front-End subnets, provide the following:

Public VIP subnet prefix. This could be public Internet subnets. They serve as
the front end IP addresses for accessing workloads behind the load balancer,
which use IP addresses from a private backend network.

Private VIP subnet prefix. These don’t need to be routable on the public
Internet because they are used for internal load balancing.

5. Under BGP Router Settings, enter the SDN ASN for the SLB. This ASN is used to
peer the SLB infrastructure with the Top of the Rack switches to advertise the
Public VIP and Private VIP IP addresses.

6. Under BGP Router Settings, enter the IP Address and ASN of the Top of Rack
switch. SLB infrastructure needs these settings to create a BGP peer with the
switch. If you have an additional Top of Rack switch that you want to peer the SLB
infrastructure with, add IP Address and ASN for that switch as well.

7. Under VM Settings, specify a path to the Azure Stack HCI VHDX file. Use Browse to
find it quicker.

8. Specify the number of VMs to be dedicated for software load balancing. We strongly recommend at least two VMs for production deployments.

9. Under Network, enter the VLAN ID of the management network. SLB needs connectivity to the same management network as the Hyper-V hosts so that it can communicate with and configure the hosts.

10. For VM network addressing, select either DHCP or Static.

For DHCP, enter the name for the Network Controller VMs. You can also use
the default populated names.

For Static, do the following:


a. Specify an IP address.
b. Specify a subnet prefix.
c. Specify the default gateway.
d. Specify one or more DNS servers. Click Add to add additional DNS servers.

11. Under Credentials, enter the username and password that you used to join the
Software Load Balancer VMs to the cluster domain.
12. Enter the local administrative password for these VMs.

13. Under Advanced, enter the path to the VMs. You can also use the default
populated path.

7 Note

Universal Naming Convention (UNC) paths aren't supported. For cluster storage-based paths, use a format like C:\ClusterStorage\... .

14. When finished, click Next: Deploy.

15. Wait until the wizard completes its job. Stay on this page until all progress tasks
are complete, and then click Finish.

Deploy SDN Gateway


SDN Gateway deployment is a functionality of the SDN Infrastructure extension in
Windows Admin Center. Complete the following steps to deploy SDN Gateways on your
existing Azure Stack HCI cluster.

7 Note

Network Controller and SLB must be set up before you configure Gateways.

1. In Windows Admin Center, under Tools, select Settings, then select Extensions.

2. On the Installed Extensions tab, verify that the SDN Infrastructure extension is
installed. If not, install it.

3. In Windows Admin Center, under Tools, select SDN Infrastructure, then click Get
Started on the Gateway tab.

4. Under Define the Gateway Settings, under Tunnel subnets, provide the GRE
Tunnel Subnets. IP addresses from this subnet are used for provisioning on the
SDN gateway VMs for GRE tunnels. If you don't plan to use GRE tunnels, put any
placeholder subnets in this field.

5. Under BGP Router Settings, enter the SDN ASN for the Gateway. This ASN is used
to peer the gateway VMs with the Top of the Rack switches to advertise the GRE IP
addresses. This field is auto populated to the SDN ASN used by SLB.
6. Under BGP Router Settings, enter the IP Address and ASN of the Top of Rack
switch. Gateway VMs need these settings to create a BGP peer with the switch.
These fields are auto populated from the SLB deployment wizard. If you have an
additional Top of Rack switch that you want to peer the gateway VMs with, add IP
Address and ASN for that switch as well.

7. Under Define the Gateway VM Settings, specify a path to the Azure Stack HCI
VHDX file. Use Browse to find it quicker.

8. Specify the number of VMs to be dedicated for gateways. We strongly recommend at least two VMs for production deployments.

9. Enter the value for Redundant Gateways. Redundant gateways don't host any gateway connections. In the event of a failure or restart of an active gateway VM, gateway connections from the active VM are moved to the redundant gateway, and the redundant gateway is then marked as active. In a production deployment, we strongly recommend having at least one redundant gateway.

7 Note

Ensure that the total number of gateway VMs is at least one more than the
number of redundant gateways. Otherwise, you won't have any active
gateways to host gateway connections.

10. Under Network, enter the VLAN ID of the management network. Gateways need connectivity to the same management network as the Hyper-V hosts and Network Controller VMs.

11. For VM network addressing, select either DHCP or Static.

For DHCP, enter the name for the Gateway VMs. You can also use the default
populated names.

For Static, do the following:


a. Specify an IP address.
b. Specify a subnet prefix.
c. Specify the default gateway.
d. Specify one or more DNS servers. Click Add to add additional DNS servers.

12. Under Credentials, enter the username and password used to join the Gateway
VMs to the cluster domain.

13. Enter the local administrative password for these VMs.


14. Under Advanced, provide the Gateway Capacity. It's auto-populated to 10 Gbps. Ideally, set this value to the approximate throughput available to the gateway VM. This value may depend on various factors, such as the physical NIC speed on the host machine and the throughput requirements of other VMs running on that host.

7 Note

Universal Naming Convention (UNC) paths aren't supported. For cluster storage-based paths, use a format like C:\ClusterStorage\... .

15. Enter the path to the VMs. You can also use the default populated path.

16. When finished, click Next: Deploy the Gateway.

17. Wait until the wizard completes its job. Stay on this page until all progress tasks
are complete, and then click Finish.

Next steps
Manage SDN logical networks. See Manage tenant logical networks.
Manage SDN virtual networks. See Manage tenant virtual networks.
Manage microsegmentation with datacenter firewall. See Use Datacenter Firewall
to configure ACLs.
Manage your VMs. See Manage VMs.
Manage Software Load Balancers. See Manage Software Load Balancers.
Manage Gateway connections. See Manage Gateway Connections.
Troubleshoot SDN deployment. See Troubleshoot Software Defined Networking
deployment via Windows Admin Center.
Deploy an SDN infrastructure using SDN
Express
Article • 06/28/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2; Windows Server 2022,
Windows Server 2019, Windows Server 2016

In this topic, you deploy an end-to-end Software Defined Network (SDN) infrastructure
using SDN Express PowerShell scripts. The infrastructure includes a highly available (HA)
Network Controller (NC), and optionally, a highly available Software Load Balancer (SLB),
and a highly available Gateway (GW). The scripts support a phased deployment, where
you can deploy just the Network Controller component to achieve a core set of
functionality with minimal network requirements.

You can also deploy an SDN infrastructure using Windows Admin Center or using
System Center Virtual Machine Manager (VMM). For more information, see Create a
cluster - Step 5: SDN and see Manage SDN resources in the VMM fabric.

) Important

You can't use Microsoft System Center Virtual Machine Manager 2019 to manage
clusters running Azure Stack HCI, version 21H2 or Windows Server 2022.

Before you begin


Before you begin an SDN deployment, plan out and configure your physical and host
network infrastructure. Reference the following articles:

Physical network requirements


Host network requirements
Create a cluster using Windows Admin Center
Create a cluster using Windows PowerShell
Plan a Software Defined Network infrastructure

You do not have to deploy all SDN components. See the Phased deployment section of
Plan a Software Defined Network infrastructure to determine which infrastructure
components you need, and then run the scripts accordingly.
Make sure all host servers have the Azure Stack HCI operating system installed. See
Deploy the Azure Stack HCI operating system on how to do this.

Requirements
The following requirements must be met for a successful SDN deployment:

All host servers must have Hyper-V enabled.


All host servers must be joined to Active Directory.
A virtual switch must be created.
The physical network must be configured for the subnets and VLANs defined in
the configuration file.
The SDN Express script needs to be run from a Windows Server 2016 or later
computer.
The VHDX file specified in the configuration file must be reachable from the
computer where the SDN Express script is run.

Download the VHDX file


SDN uses a VHDX file containing either the Azure Stack HCI or Windows Server
operating system (OS) as a source for creating the SDN virtual machines (VMs).

7 Note

The version of the OS in your VHDX must match the version used by the Azure
Stack HCI Hyper-V hosts. This VHDX file is used by all SDN infrastructure
components.

Follow these steps to download an English version of the VHDX file:

1. Go to Azure Stack HCI software download site .

2. Complete the download form and select Submit to display the Azure Stack HCI
software download page.

3. Under Azure Stack HCI, select English – VHDX from the Choose language
dropdown menu, and then select Download Azure Stack HCI.

Currently, a non-English VHDX file is not available for download. If you require a non-
English version, download the corresponding ISO file and convert it to VHDX using the
Convert-WindowsImage cmdlet. You must run this script from a Windows client computer.
You will probably need to run this as Administrator and modify the execution policy for
scripts using the Set-ExecutionPolicy command.

The following is an example of using Convert-WindowsImage :

PowerShell

Install-Module -Name Convert-WindowsImage

Import-Module Convert-WindowsImage

$wimpath = "E:\sources\install.wim"

$vhdpath = "D:\temp\AzureStackHCI.vhdx"

$edition=1

Convert-WindowsImage -SourcePath $wimpath -Edition $edition -VHDPath $vhdpath -SizeBytes 500GB -DiskLayout UEFI

Download the GitHub repository


The SDN Express script files live in GitHub. The first step is to get the necessary files and
folders onto your deployment computer.

1. Go to the Microsoft SDN GitHub repository.


2. In the repository, expand the Code drop-down list, and then choose either Clone
or Download ZIP to download the SDN files to your designated deployment
computer.

7 Note

The designated deployment computer must be running Windows Server 2016


or later.

3. Extract the ZIP file and copy the SDNExpress folder to your deployment computer's
C:\ folder.

Edit the configuration file


The PowerShell MultiNodeSampleConfig.psd1 configuration data file contains all the parameters and configuration settings that the SDN Express script needs as input. This file has specific information about what needs to be filled out based on whether you are deploying only the Network Controller component, or the Software Load Balancer and Gateway components as well. For detailed information, see the Plan a Software Defined Network infrastructure topic.

Navigate to the C:\SDNExpress\scripts folder and open the MultiNodeSampleConfig.psd1 file in your favorite text editor. Change specific parameter values to fit your infrastructure and deployment:

General settings and parameters


The settings and parameters are used by SDN in general for all deployments. For
specific recommendations, see SDN infrastructure VM role requirements.

VHDPath - VHD file path used by all SDN infrastructure VMs (NC, SLB, GW)
VHDFile - VHDX file name used by all SDN infrastructure VMs
VMLocation - file path to SDN infrastructure VMs. Note that Universal Naming
Convention (UNC) paths aren't supported. For cluster storage-based paths, use a
format like C:\ClusterStorage\...
JoinDomain - domain to which SDN infrastructure VMs are joined to
SDNMacPoolStart - beginning MAC pool address for client workload VMs
SDNMacPoolEnd - end MAC pool address for client workload VMs
ManagementSubnet - management network subnet used by NC to manage
Hyper-V hosts, SLB, and GW components
ManagementGateway - Gateway address for the management network
ManagementDNS - DNS server for the management network
ManagementVLANID - VLAN ID for the management network
DomainJoinUsername - administrator user name
LocalAdminDomainUser - local administrator user name
RestName - DNS name used by management clients (such as Windows Admin
Center) to communicate with NC
HyperVHosts - host servers to be managed by Network Controller
NCUsername - Network Controller account user name
ProductKey - product key for SDN infrastructure VMs
SwitchName - only required if more than one virtual switch exists on the Hyper-V
hosts
VMMemory - memory (in GB) assigned to infrastructure VMs. Default is 4 GB
VMProcessorCount - number of processors assigned to infrastructure VMs.
Default is 8
Locale - if not specified, locale of deployment computer is used
TimeZone - if not specified, local time zone of deployment computer is used

Passwords can optionally be included if they're stored encrypted as text-encoded secure strings. Passwords will only be used if the SDN Express scripts are run on the same computer where the passwords were encrypted; otherwise, the script prompts for these passwords:

DomainJoinSecurePassword - for domain account


LocalAdminSecurePassword - for local administrator account
NCSecurePassword - for Network Controller account
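For example, a sketch of producing one of these text-encoded secure strings on the deployment computer (the output file name is an assumption); because the string is encrypted with the current user's keys, it can only be decrypted on the same computer by the same user:

PowerShell

Read-Host -AsSecureString -Prompt "Domain join password" | ConvertFrom-SecureString | Out-File .\DomainJoinSecurePassword.txt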

Network Controller VM section


A minimum of three Network Controller VMs are recommended for SDN.

The NCs = @() section is used for the Network Controller VMs. Make sure that the MAC
address of each NC VM is outside the SDNMACPool range listed in the General settings.

ComputerName - name of NC VM
HostName - host name of server where the NC VM is located
ManagementIP - management network IP address for the NC VM
MACAddress - MAC address for the NC VM

Software Load Balancer VM section


A minimum of two Software Load Balancer VMs are recommended for SDN.
The Muxes = @() section is used for the SLB VMs. Make sure that the MACAddress and
PAMACAddress parameters of each SLB VM are outside the SDNMACPool range listed in the
General settings. Ensure that you get the PAIPAddress parameter from outside the PA
Pool specified in the configuration file, but part of the PASubnet specified in the
configuration file.

Leave this section empty ( Muxes = @() ) if not deploying the SLB component:

ComputerName - name of SLB VM


HostName - host name of server where the SLB VM is located
ManagementIP - management network IP address for the SLB VM
MACAddress - MAC address for the SLB VM
PAIPAddress - Provider network IP address (PA) for the SLB VM
PAMACAddress - Provider network MAC address (PA) for the SLB VM

Gateway VM section
A minimum of two Gateway VMs (one active and one redundant) are recommended for
SDN.

The Gateways = @() section is used for the Gateway VMs. Make sure that the
MACAddress parameter of each Gateway VM is outside the SDNMACPool range listed in the
General settings. The FrontEndMac and BackendMac must be from within the SDNMACPool
range. Ensure that you get the FrontEndMac and the BackendMac parameters from the
end of the SDNMACPool range.

Leave this section empty ( Gateways = @() ) if not deploying the Gateway component:

ComputerName - name of Gateway VM


HostName - host name of server where the Gateway VM is located
ManagementIP - management network IP address for the Gateway VM
MACAddress - MAC address for the Gateway VM
FrontEndMac - Provider network front end MAC address for the Gateway VM
BackEndMac - Provider network back end MAC address for the Gateway VM

Additional settings for SLB and Gateway


The following additional parameters are used by SLB and Gateway VMs. Leave these
values blank if you are not deploying SLB or Gateway VMs:

SDNASN - Autonomous System Number (ASN) used by SDN to peer with network
switches
RouterASN - Gateway router ASN
RouterIPAddress - Gateway router IP address
PrivateVIPSubnet - virtual IP address (VIP) for the private subnet
PublicVIPSubnet - virtual IP address for the public subnet

The following additional parameters are used by Gateway VMs only. Leave these values
blank if you are not deploying Gateway VMs:

PoolName - pool name used by all Gateway VMs

GRESubnet - VIP subnet for GRE (if using GRE connections)

Capacity - capacity in Kbps for each Gateway VM in the pool

RedundantCount - number of gateways in redundant mode. The default value is 1.


Redundant gateways don't have any active connections. Once an active gateway goes down, the connections from that gateway move to the redundant gateway and the redundant gateway becomes active.

7 Note

If you fill in a value for RedundantCount, ensure that the total number of
gateway VMs is at least one more than the RedundantCount. By default, the
RedundantCount is 1, so you must have at least 2 gateway VMs to ensure
that there is at least 1 active gateway to host gateway connections.

Settings for tenant overlay networks


The following parameters are used if you are deploying and managing overlay
virtualized networks for tenants. If you are using Network Controller to manage
traditional VLAN networks instead, these values can be left blank.

PASubnet - subnet for the Provider Address (PA) network
PAVLANID - VLAN ID for the PA network
PAGateway - IP address for the PA network Gateway
PAPoolStart - beginning IP address for the PA network pool
PAPoolEnd - end IP address for the PA network pool

Here's how the Hyper-V Network Virtualization (HNV) Provider logical network allocates IP
addresses. Use this to plan your address space for the HNV Provider network:

Allocates two IP addresses to each physical server
Allocates one IP address to each SLB MUX VM
Allocates one IP address to each gateway VM
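For example, a four-node cluster running two SLB MUX VMs and two gateway VMs consumes at least (4 x 2) + 2 + 2 = 12 provider addresses, so size the PA pool with headroom beyond that. A hypothetical set of values (placeholders only) might look like this:

PowerShell

# Hypothetical example values - not a complete configuration file
PASubnet    = '10.10.56.0/23'
PAVLANID    = '11'
PAGateway   = '10.10.56.1'
PAPoolStart = '10.10.56.4'
PAPoolEnd   = '10.10.57.200'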

Run the deployment script


The SDN Express script deploys your specified SDN infrastructure. When the script is
complete, your SDN infrastructure is ready to be used for VM workload deployments.

1. Review the README.md file for late-breaking information on how to run the
deployment script.

2. Run the following command from a user account with administrative credentials
for the cluster host servers:

PowerShell

SDNExpress\scripts\SDNExpress.ps1 -ConfigurationDataFile MultiNodeSampleConfig.psd1 -Verbose

3. After the NC VMs are created, configure dynamic DNS updates for the Network
Controller cluster name on the DNS server. For more information, see Dynamic
DNS updates.

Configuration sample files


The following configuration sample files for deploying SDN are available on the
Microsoft SDN GitHub repository:

Traditional VLAN networks.psd1 - Deploy Network Controller for managing network policies like microsegmentation and Quality of Service on traditional VLAN networks.

Virtualized networks.psd1 - Deploy Network Controller for managing virtual networks and network policies on virtual networks.

Software Load Balancer.psd1 - Deploy Network Controller and Software Load Balancer for load balancing on virtual networks.

SDN Gateways.psd1 - Deploy Network Controller, Software Load Balancer, and Gateway for connectivity to external networks.

Next steps
Manage VMs
Learn module: Plan for and deploy SDN infrastructure on Azure Stack HCI
Create an Azure Stack HCI cluster using
Windows Admin Center
Article • 04/17/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2

Now that you've deployed the Azure Stack HCI operating system, you'll learn how to use
Windows Admin Center to create an Azure Stack HCI cluster that uses Storage Spaces
Direct, and, optionally, Software Defined Networking. The Create Cluster wizard in
Windows Admin Center will do most of the heavy lifting for you. If you'd rather do it
yourself with PowerShell, see Create an Azure Stack HCI cluster using PowerShell. The
PowerShell article is also a good source of information for what is going on under the
hood of the wizard and for troubleshooting purposes.
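For orientation only, here is a heavily condensed sketch of the kind of PowerShell the wizard automates; the server names, cluster name, and IP address are hypothetical, and the PowerShell article linked above remains the authoritative procedure.

PowerShell

# Hypothetical names; run from a management machine with the failover clustering tools installed
$servers = 'Server1', 'Server2'

# Install the roles and features the wizard would install for you
Invoke-Command -ComputerName $servers -ScriptBlock {
    Install-WindowsFeature -Name Hyper-V, Failover-Clustering, Data-Center-Bridging, FS-Data-Deduplication -IncludeAllSubFeature -IncludeManagementTools
}

# Validate the servers, create the cluster, then enable Storage Spaces Direct
Test-Cluster -Node $servers -Include 'Storage Spaces Direct', 'Inventory', 'Network', 'System Configuration'
New-Cluster -Name 'HCICluster1' -Node $servers -StaticAddress '10.0.0.200' -NoStorage
Enable-ClusterStorageSpacesDirect -CimSession 'HCICluster1'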

Note

If you are doing a single server installation of Azure Stack HCI 21H2, use
PowerShell to create the cluster.

If you're interested in testing Azure Stack HCI but have limited or no spare hardware, see
the Azure Stack HCI Evaluation Guide, where we'll walk you through experiencing Azure
Stack HCI using nested virtualization inside an Azure VM. Or try the Create a VM-based
lab for Azure Stack HCI tutorial to create your own private lab environment using nested
virtualization on a server of your choice to deploy VMs running Azure Stack HCI for
clustering.

Cluster creation workflow


Here's the workflow for creating a cluster in Windows Admin Center:

1. Complete the prerequisites.


2. Start the Create Cluster wizard.
3. Complete the following steps in the Create Cluster wizard:
a. Step 1: Get started. Ensures that each server meets the prerequisites for cluster
join and installs the needed features.
b. Step 2: Networking. Assigns and configures network adapters and creates the
virtual switches for each server.
c. Step 3: Clustering. Validates that the cluster is set up correctly. For stretched clusters,
also sets up the two sites.
d. Step 4: Storage. Configures Storage Spaces Direct.
e. Step 5: SDN. (Optional) Sets up a Network Controller for SDN deployment.

After you're done creating a cluster in the Create Cluster wizard, complete these post-
cluster creation steps:

Set up a cluster witness. This is highly recommended for all clusters with at least
two nodes.
Register with Azure. Your cluster is not fully supported until your registration is
active.
Validate an Azure Stack HCI cluster. Your cluster is ready to work in a production
environment after completing this step.

Prerequisites
Before you run the Create Cluster wizard in Windows Admin Center, you must complete
the following prerequisites.

Warning

Running the wizard before completing the prerequisites can result in a failure to
create the cluster.

Review the hardware and related requirements in System requirements.

Consult with your networking team to identify and understand Physical network
requirements, Host network requirements, and Firewall requirements. Especially
review the Network Reference patterns, which provide example network designs.
Also, determine how you'd like to configure host networking, using Network ATC
or manually.

Install the Azure Stack HCI operating system on each server in the cluster. See
Deploy the Azure Stack HCI operating system.

Obtain an account that's a member of the local Administrators group on each server.

Have at least two servers to cluster; four if creating a stretched cluster (two in each
site). To instead deploy Azure Stack HCI on a single server, see Deploy Azure Stack
HCI on a single server.
Ensure all servers are in the same time zone as your local domain controller (a quick remote spot-check is sketched after this list).

Install the latest version of Windows Admin Center on a PC or server for management. See Install Windows Admin Center.

Ensure that Windows Admin Center and your domain controller are not installed
on the same system. Also, ensure that the domain controller is not hosted on the
Azure Stack HCI cluster or one of the nodes in the cluster.

If you're running Windows Admin Center on a server (instead of a local PC), use an
account that's a member of the Gateway Administrators group, or the local
Administrators group on the Windows Admin Center server.

Verify that your Windows Admin Center management computer is joined to the
same Active Directory domain in which you'll create the cluster, or joined to a fully
trusted domain. The servers that you'll cluster don't need to belong to the domain
yet; they can be added to the domain during cluster creation.

If you're using an integrated system from a Microsoft hardware partner, install the
latest version of vendor extensions on Windows Admin Center to help keep the
integrated hardware and firmware up to date. To install them, open Windows
Admin Center and click Settings (gear icon) at the upper right. Select any
applicable hardware vendor extensions, and click Install.

For stretched clusters, set up your two sites beforehand in Active Directory.
Alternatively, the wizard can set them up for you too. For more information about
stretched clusters, see the Stretched clusters overview.
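Several of these prerequisites can be spot-checked remotely before you launch the wizard. A minimal sketch, assuming two servers named Server1 and Server2:

PowerShell

# Report the OS and time zone of each server you plan to cluster
Invoke-Command -ComputerName 'Server1', 'Server2' -ScriptBlock {
    [pscustomobject]@{
        Server   = $env:COMPUTERNAME
        OS       = (Get-CimInstance -ClassName Win32_OperatingSystem).Caption
        TimeZone = (Get-TimeZone).Id
    }
} | Format-Table Server, OS, TimeZone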

Start the Create Cluster wizard


To start the Create Cluster wizard in Windows Admin Center:

1. Log in to Windows Admin Center.

2. Under All connections, click Add.

3. In the Add or create resources panel, under Server clusters, select Create new.

4. Under Choose the cluster type, select Azure Stack HCI.


5. Under Select server locations, select one of the following:

All servers in one site
Servers in two sites (for stretched cluster)

6. When finished, click Create. The Create Cluster wizard opens.

Proceed to the next step in the cluster creation workflow, Step 1: Get started.

Step 1: Get started


Step 1 of the wizard walks you through making sure all prerequisites are met, adding
the server nodes, installing needed features, and then restarting each server if needed.

1. Review 1.1 Check the prerequisites listed in the wizard to ensure each server node
is cluster-ready. When finished, click Next.
2. On 1.2 Add servers, enter your account username using the format
domain\username. Enter your password, then click Next. This account must be a
member of the local Administrators group on each server.

3. Enter the name of the first server you want to add, then click Add. When you add
servers, make sure to use a fully qualified domain name.

4. Repeat Step 3 for each server that will be part of the cluster. When you're finished,
select Next.

5. If needed, on 1.3 Join a domain, specify the domain to join the servers to and the
account to use. You can optionally rename the servers. Then click Next.

6. On 1.4 Install features, review and add features as needed. When finished, click
Next.

The wizard lists and installs required features for you, including the following
options:

Data Deduplication
Hyper-V
BitLocker Drive Encryption
Data Center Bridging (for RoCEv2 network adapters)
Failover Clustering
Network ATC
Active Directory module for Windows PowerShell
Hyper-V module for Windows PowerShell

7. On 1.5 Install updates, click Install updates as needed to install any operating
system updates. When complete, click Next.

8. On 1.6 Install hardware updates, click Get updates as needed to get available
vendor hardware updates. If you don't install the updates now, we recommend
manually installing the latest networking drivers before continuing. Updated
drivers are required if you want to use Network ATC to configure host networking.

Note

Some extensions require extra configuration on the servers or your network, such as configuring the baseboard management controller (BMC). Consult your vendor's documentation for details.
9. Follow the vendor-specific steps to install the updates on your hardware. These
steps include performing symmetry and compliance checks on your hardware to
ensure a successful update. You may need to re-run some steps.

10. On 1.7 Restart servers, click Restart servers if required. Verify that each server has
successfully started.

11. On 1.8 Choose host networking, select one of the following:

Use Network ATC to deploy and manage networking (Recommended). We recommend using this option for configuring host networking. Network ATC provides an intent-based approach to host network deployment and helps simplify the deployment and network configuration management for Azure Stack HCI clusters. For more information about using Network ATC, see Network ATC.
Manually configure host networking. Select this option to manually
configure host networking. For more information about configuring RDMA
and Hyper-V host networking for Azure Stack HCI, see Host network
requirements.

12. Select Next: Networking to proceed to Step 2: Networking.

Step 2: Networking
Step 2 of the wizard walks you through configuring the host networking elements for
your cluster. RDMA (both iWARP and RoCE) network adapters are supported.

Depending on the option you selected in 1.8 Choose host networking of Step 1: Get
started above, refer to one of the following tabs to configure host networking for your
cluster:

Use Network ATC to deploy and manage networking (Recommended)

This is the recommended option for configuring host networking. For more
information about Network ATC, see Network ATC overview.

1. On 2.1 Verify network adapters, review the list displayed and exclude or add
any adapters that you want to cluster. It can take a couple of minutes for the
adapters to show up. Only adapters with matching names, interface
descriptions, and link speeds on each server are displayed. All other adapters
are hidden.

2. If you don't see your adapters in the list, click Show hidden adapters to see all
the available adapters and then select the missing adapters.

3. On the Select the cluster network adapters page, select the checkbox for any
adapters listed that you want to cluster. The adapters must have matching
names, interface descriptions, and link speeds on each server. You can rename
the adapters to match, or just select the matching adapters. When finished,
click Close.

4. The selected adapters will now display under Adapters available on all
servers. When finished selecting and verifying adapters, click Next.

5. On 2.2 Define intents, under Intent 1, do the following:

For Traffic types, select a traffic type from the dropdown list. You can
add the Management and Storage intent types to exactly one intent
while the Compute intent type can be added to one or more intents. For
more information, see Network ATC traffic types.
For Intent name, enter a friendly name for the intent.
For Network adapters, select an adapter from the dropdown list.
(Optional) Click Select another adapter for this traffic if needed.

For recommended intent configurations, see the network reference pattern that matches your deployment:

Storage switchless, single switch
Storage switchless, two switches
Storage switched, non-converged
Storage switched, fully converged

6. (Optional) After an intent is added, select Customize network settings to modify its network settings. When finished, select Save.

7. (Optional) To add another intent, select Add an intent, and repeat step 5 and
optionally step 6.

8. When finished defining network intents, select Next.

9. On 2.3: Provide network details, for each storage traffic adapter listed, enter
the following or use the default values (recommended):

Subnet mask/CIDR
VLAN ID
IP address (this is usually on a private subnet such as 10.71.1.x and
10.71.2.x)

10. Select Next: Clustering to proceed to Step 3: Clustering.


Step 3: Clustering
Step 3 of the wizard makes sure everything thus far is set up correctly, automatically sets
up two sites in the case of stretched cluster deployments, and then actually creates the
cluster. You can also set up your sites beforehand in Active Directory.

1. On 3.1 Create the cluster, specify a unique name for the cluster.

2. Under IP address, do one of the following:

Specify one or more static addresses. The IP address must be entered in the
following format: IP address/current subnet length. For example:
10.0.0.200/24.
Assign address dynamically with DHCP.

3. When finished, select Create cluster. This can take a while to complete.

If you get the error "Failed to reach cluster through DNS," select the Retry
connectivity checks button. You might have to wait several hours before it
succeeds on larger networks due to DNS propagation delays.

Important

If cluster creation fails, select the Retry connectivity checks button rather than the
Back button. If you select Back, the Cluster Creation wizard exits prematurely and
can potentially reset the entire process.

If you encounter issues with deployment after the cluster is created and you want
to restart the Cluster Creation wizard, first remove (destroy) the cluster. To do so,
see Remove a cluster.

4. The next step appears only if you selected Use Network ATC to deploy and
manage networking (Recommended) for step 1.8 Choose host networking. If you
chose to manually configure host networking in step 1.8 of the Cluster Creation
wizard, you won't see this page.

5. On 3.2 Deploy host networking settings, select Deploy to apply the Network ATC
intents you defined earlier. This can take a few minutes to complete. When
finished, select Next.

6. On 3.3 Validate cluster, select Validate. Validation can take several minutes. Note
that the in-wizard validation is not the same as the post-cluster creation validation
step, which performs additional checks to catch any hardware or configuration
problems before the cluster goes into production. If you experience issues with
cluster validation, see Troubleshoot cluster validation reporting.

If the Credential Security Service Provider (CredSSP) pop-up appears, select Yes
to temporarily enable CredSSP for the wizard to continue. Once your cluster is
created and the wizard has completed, you'll disable CredSSP to increase security.
If you experience issues with CredSSP, see Troubleshoot CredSSP.

7. Review all validation statuses and download the report to get detailed information on
any failures. Make changes, then click Validate again as needed. Repeat as necessary
until all validation checks pass. When all is OK, click Next.

8. Select Advanced. You have a couple of options here:

Register the cluster with DNS and Active Directory
Add eligible storage to the cluster (recommended)

9. Under Networks, select whether to Use all networks (recommended) or Specify one or more networks not to use.

10. When finished, click Create cluster.

11. For stretched clusters, on 3.3 Assign servers to sites, name the two sites that will
be used.

12. Next assign each server to a site. You'll set up replication across sites later. When
finished, click Apply changes.

13. Select Next: Storage to proceed to Step 4: Storage.

Step 4: Storage
Step 4 of the wizard walks you through setting up Storage Spaces Direct for your cluster.

1. On 4.1 Clean drives, you can optionally select Erase drives if it makes sense for
your deployment.

2. On 4.2 Check drives, click the > icon next to each server to verify that the disks are
working and connected. If all is OK, click Next.

3. On 4.3 Validate storage, click Next.


4. Download and review the validation report. If all is good, click Next. If not, run
Validate again.

5. On 4.4 Enable Storage Spaces Direct, click Enable.

6. Download and review the report. When all is good, click Finish.

7. Select Go to connections list.

8. After a few minutes, you should see your cluster in the list. Select it to view the
cluster overview page.

It can take some time for the cluster name to be replicated across your domain,
especially if workgroup servers have been newly added to Active Directory.
Although the cluster might be displayed in Windows Admin Center, it might not be
available to connect to yet.

If resolving the cluster isn't successful after some time, in most cases you can
connect by using a server name instead of the cluster name.

9. (Optional) Select Next: SDN to proceed to Step 5: SDN.

Step 5: SDN (optional)


This optional step walks you through setting up the Network Controller component of
Software Defined Networking (SDN). Once the Network Controller is set up, you can
configure other SDN components such as Software Load Balancer (SLB) and RAS
Gateway as per your requirements. See the Phased deployment section of the planning
article to understand what other SDN components you might need.

You can also deploy Network Controller using SDN Express scripts. See Deploy an SDN
infrastructure using SDN Express.

Note

The Create Cluster wizard does not currently support configuring SLB and RAS
Gateway. You can use SDN Express scripts to configure these components. Also,
SDN is not supported or available for stretched clusters.

1. Under Host, enter a name for the Network Controller. This is the DNS name used
by management clients (such as Windows Admin Center) to communicate with
Network Controller. You can also use the default populated name.
2. Download the Azure Stack HCI VHDX file. For more information, see Download the
VHDX file.
3. Specify the path where you downloaded the VHDX file. Use Browse to find it
quicker.
4. Specify the number of VMs to be dedicated for Network Controller. Three VMs are
strongly recommended for production deployments.
5. Under Network, enter the VLAN ID of the management network. Network
Controller needs connectivity to the same management network as the Hyper-V hosts
so that it can communicate with and configure the hosts.
6. For VM network addressing, select either DHCP or Static.
7. If you selected DHCP, enter the name for the Network Controller VMs. You can
also use the default populated names.
8. If you selected Static, do the following:

Specify an IP address.
Specify a subnet prefix.
Specify the default gateway.
Specify one or more DNS servers. Click Add to add additional DNS servers.

9. Under Credentials, enter the username and password used to join the Network
Controller VMs to the cluster domain.
10. Enter the local administrative password for these VMs.
11. Under Advanced, enter the path to the VMs. You can also use the default
populated path.
12. Enter values for MAC address pool start and MAC address pool end. You can also
use the default populated values.
13. When finished, click Next.
14. Wait until the wizard completes its job. Stay on this page until all progress tasks
are complete. Then click Finish.

Note

After Network Controller VM(s) are created, you must configure dynamic DNS
updates for the Network Controller cluster name on the DNS server.

If Network Controller deployment fails, do the following before you try this again:

Stop and delete any Network Controller VMs that the wizard created.

Clean up any VHD mount points that the wizard created.

Ensure you have at least 50-100 GB of free space on your Hyper-V hosts.

Step 6: Remove a Cluster (optional)


There are situations in which you may need to remove the cluster that you created in
Step 3. If so, choose the Remove the Cluster option in the Cluster Creation wizard.

For more information about removing a cluster, see Remove a cluster.

Next steps
To perform the next management task related to this article, see:

Set up a cluster witness


Azure Benefits on Azure Stack HCI
Article • 05/12/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2

Microsoft Azure offers a range of differentiated workloads and capabilities that are
designed to run only on Azure. Azure Stack HCI extends many of the same benefits you
get from Azure, while running on the same familiar and high-performance on-premises
or edge environments.

Azure Benefits makes it possible for supported Azure-exclusive workloads to work outside of the cloud. You can enable Azure Benefits on Azure Stack HCI at no extra cost. If you have Windows Server workloads, we recommend turning it on.

Take a few minutes to watch the introductory video on Azure Benefits:


https://www.youtube-nocookie.com/embed/s3CE9ob3hDo

Azure Benefits available on Azure Stack HCI


Turning on Azure Benefits enables you to use these Azure-exclusive workloads on Azure
Stack HCI:

Workload: Windows Server Datacenter: Azure Edition
Versions supported: 2022 edition or later
What it is: An Azure-only guest operating system that includes all the latest Windows Server innovations and other exclusive features. Learn more: Automanage for Windows Server

Workload: Extended Security Updates (ESUs)
Versions supported: October 12th, 2021 security updates or later
What it is: A program that allows customers to continue to get security updates for End-of-Support SQL Server and Windows Server VMs, now free when running on Azure Stack HCI. For more information, see Extended security updates (ESU) on Azure Stack HCI.

Workload: Azure Policy guest configuration
Versions supported: Arc agent version 1.13 or later
What it is: A feature that can audit or configure OS settings as code, for both host and guest machines. Learn more: Understand the guest configuration feature of Azure Policy

Workload: Azure Virtual Desktop
Versions supported: For multi-session editions only. Windows 10 Enterprise multi-session or later.
What it is: A service that enables you to deploy Azure Virtual Desktop session hosts on your Azure Stack HCI infrastructure. For more information, see the Azure Virtual Desktop for Azure Stack HCI overview.

How it works
This section is optional reading, and explains more about how Azure Benefits on HCI
works "under the hood."

Azure Benefits relies on a built-in platform attestation service on Azure Stack HCI, and
helps to provide assurance that VMs are indeed running on Azure environments.

This service is modeled after the same IMDS Attestation service that runs in Azure, in
order to enable some of the same workloads and benefits available to customers in
Azure. Azure Benefits returns an almost identical payload. The main difference is that it
runs on-premises, and therefore guarantees that VMs are running on Azure Stack HCI
instead of Azure.

Turning on Azure Benefits starts the service running on your Azure Stack HCI cluster:

1. On every server, HciSvc obtains a certificate from Azure, and securely stores it
within an enclave on the server.
Note

Certificates are renewed every time the Azure Stack HCI cluster syncs with
Azure, and each renewal is valid for 30 days. As long as you maintain the usual
30 day connectivity requirements for Azure Stack HCI, no user action is
required.

2. HciSvc exposes a private and non-routable REST endpoint, accessible only to VMs
on the same server. To enable this endpoint, an internal vSwitch named
AZSHCI_HOST-IMDS_DO_NOT_MODIFY is configured on the Azure Stack HCI host. VMs
must then have a NIC named AZSHCI_GUEST-IMDS_DO_NOT_MODIFY configured and
attached to that vSwitch. (A quick way to verify both exist is sketched after this list.)

Note

Modifying or deleting this switch and NIC prevents Azure Benefits from
working properly. If errors occur, disable Azure Benefits using Windows
Admin Center or the PowerShell instructions that follow, and then try again.

3. Consumer workloads (for example, Windows Server Azure Edition guests) request
attestation. HciSvc then signs the response with an Azure certificate.

Note

You must manually enable access for each VM that needs Azure Benefits.
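If you suspect that the attestation vSwitch or a VM's attestation NIC has been modified, a quick hedged check from the host might look like this (the VM name is hypothetical):

PowerShell

# Confirm the attestation vSwitch exists on the host
Get-VMSwitch -Name 'AZSHCI_HOST-IMDS_DO_NOT_MODIFY'

# Confirm a given VM still has the attestation NIC attached
Get-VMNetworkAdapter -VMName 'MyVM01' | Where-Object Name -eq 'AZSHCI_GUEST-IMDS_DO_NOT_MODIFY'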

Enable Azure Benefits


Before you begin, you'll need the following prerequisites:

An Azure Stack HCI cluster:


Install updates: Version 21H2, with at least the December 14, 2021 security
update KB5008223 or later.
Register Azure Stack HCI: All servers must be online and registered to Azure.
Install Hyper-V and RSAT-Hyper-V-Tools.

If you are using Windows Admin Center:


Windows Admin Center (version 2103 or later) with Cluster Manager extension
(version 2.41.0 or later).
You can enable Azure Benefits on Azure Stack HCI using either Windows Admin Center
or PowerShell. The following sections describe each option.

Option 1: Turn on Azure Benefits using Windows Admin Center

1. In Windows Admin Center, select Cluster Manager from the top drop-down menu,
navigate to the cluster that you want to activate, then under Settings, select Azure
Benefits.

2. In the Azure Benefits pane, select Turn on. By default, the checkbox to turn on for
all existing VMs is selected. You can deselect it and manually add VMs later.

3. Select Turn on again to confirm setup. It may take a few minutes for servers to
reflect the changes.

4. When Azure Benefits setup is successful, the page updates to show the Azure
Benefits dashboard. To check Azure Benefits for the host:
a. Check that Azure Benefits cluster status appears as On.
b. Under the Cluster tab in the dashboard, check that Azure Benefits for every
server shows as Active in the table.

5. To check access to Azure Benefits for VMs: Check the status for VMs with Azure
Benefits turned on. It's recommended that all of your existing VMs have Azure
Benefits turned on; for example, 3 out of 3 VMs.
Manage access to Azure Benefits for your VMs - WAC

To turn on Azure Benefits for VMs, select the VMs tab, then select the VM(s) in the top
table VMs without Azure Benefits, and then select Turn on Azure Benefits for VMs.

Troubleshooting - WAC
To turn off and reset Azure Benefits on your cluster:
Under the Cluster tab, click Turn off Azure Benefits.
To remove access to Azure Benefits for VMs:
Under the VM tab, select the VM(s) in the top table VMs with Azure Benefits,
and then click Turn off Azure Benefits for VMs.
Under the Cluster tab, one or more servers appear as Expired:
If Azure Benefits for one or more servers has not synced with Azure for more
than 30 days, it appears as Expired or Inactive. Select Sync with Azure to
schedule a manual sync.
Under the VM tab, host server benefits appear as Unknown or Inactive:
You will not be able to add or remove Azure Benefits for VMs on these host
servers. Go to the Cluster tab to fix Azure Benefits for host servers with errors,
then try and manage VMs again.

Option 2: Turn on Azure Benefits using PowerShell
1. To set up Azure Benefits, run the following command from an elevated PowerShell
window on your Azure Stack HCI cluster:

PowerShell

Enable-AzStackHCIAttestation

Or, if you want to add all existing VMs on setup, you can run the following
command:

PowerShell

Enable-AzStackHCIAttestation -AddVM

2. When Azure Benefits setup is successful, you can view the Azure Benefits status.
Check the cluster property IMDS Attestation by running the following command:

PowerShell

Get-AzureStackHCI

Or, to view Azure Benefits status for servers, run the following command:

PowerShell

Get-AzureStackHCIAttestation [[-ComputerName] <string>]

3. To check access to Azure Benefits for VMs, run the following command:

PowerShell

Get-AzStackHCIVMAttestation

Manage access to Azure Benefits for your VMs - PowerShell
To turn on benefits for selected VMs, run the following command on your Azure
Stack HCI cluster:

PowerShell

Add-AzStackHCIVMAttestation [-VMName]

Or, to add all existing VMs, run the following command:

PowerShell

Add-AzStackHCIVMAttestation -AddAll

Troubleshooting - PowerShell
To turn off and reset Azure Benefits on your cluster, run the following command:

PowerShell

Disable-AzStackHCIAttestation -RemoveVM

To remove access to Azure Benefits for selected VMs:

PowerShell

Remove-AzStackHCIVMAttestation -VMName <string>

Or, to remove access for all existing VMs:

PowerShell

Remove-AzStackHCIVMAttestation -RemoveAll

If Azure Benefits for one or more servers is not yet synced and renewed with Azure,
it may appear as Expired or Inactive. Schedule a manual sync:

PowerShell

Sync-AzureStackHCI

If a server is newly added and has not yet been set up with Azure Benefits, it may
appear as Inactive. To add the new server, run setup again:

PowerShell

Enable-AzStackHCIAttestation

(Optional) View Azure Benefits using the Azure portal
1. In your Azure Stack HCI cluster resource page, navigate to the Configuration tab.

2. Under the feature Enable Azure Benefits, view the host attestation status:

(Optional) Access Azure Benefits from VMs


To check that VMs can properly access Azure Benefits on the host, you can run this
command from the VM and confirm that there is a response:

PowerShell

Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -Uri "http://169.254.169.253:80/metadata/attested/document?api-version=2018-10-01"

FAQ
This FAQ provides answers to some questions about using Azure Benefits.

What Azure-exclusive workloads can I enable with Azure Benefits?
See the full list here.

Does it cost anything to turn on Azure Benefits?


No, turning on Azure Benefits incurs no extra fees.

Can I use Azure Benefits on environments other than Azure Stack HCI?
No, Azure Benefits is a feature built into the Azure Stack HCI OS, and can only be used
on Azure Stack HCI.

I have just set up Azure Benefits on my cluster. How do I ensure that Azure Benefits stays active?
In most cases, there is no user action required. Azure Stack HCI automatically
renews Azure Benefits when it syncs with Azure.
However, if the cluster disconnects for more than 30 days and Azure Benefits
shows as Expired, you can manually sync using PowerShell and Windows Admin
Center. For more information, see syncing Azure Stack HCI.

What happens when I deploy new VMs, or delete VMs?


When you deploy new VMs that require Azure Benefits, you can manually add new
VMs to access Azure Benefits using Windows Admin Center or PowerShell, using
the preceding instructions.
You can still delete and migrate VMs as usual. The NIC AZSHCI_GUEST-
IMDS_DO_NOT_MODIFY will still exist on the VM after migration. To clean up the
NIC before migration, you can remove VMs from Azure Benefits using Windows
Admin Center or PowerShell using the preceding instructions, or you can migrate
first and manually delete NICs afterwards.

What happens when I add or remove servers?


When you add a server, you can navigate to the Azure Benefits page in Windows
Admin Center, and a banner will appear with a link to Enable inactive server.
Or, you can run Enable-AzStackHCIAttestation [[-ComputerName] <String>] in
PowerShell.
You can still delete servers or remove them from the cluster as usual. The vSwitch
AZSHCI_HOST-IMDS_DO_NOT_MODIFY will still exist on the server after removal
from the cluster. You can leave it if you are planning to add the server back to the
cluster later, or you can remove it manually.

Next steps
Extended security updates (ESU) on Azure Stack HCI
Azure Stack HCI overview
Azure Stack HCI FAQ
Deploy branch office and edge on Azure
Stack HCI
Article • 04/17/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2

This topic provides guidance on how to plan, configure, and deploy branch office and
edge scenarios on the Azure Stack HCI operating system. The guidance positions your
organization to run complex, highly available workloads in virtual machines (VMs) and
containers in remote branch office and edge deployments. Computing at the edge shifts
most data processing from a centralized system to the edge of the network, closer to a
device or system that requires data quickly.

Use Azure Stack HCI to run virtualized applications and workloads with high availability
on recommended hardware. The hardware supports clusters consisting of two servers
configured with nested resiliency for storage, a simple, low-cost USB thumb drive cluster
witness, and administration via the browser-based Windows Admin Center. For details
on creating a USB device cluster witness, see Deploy a file share witness.

Azure IoT Edge moves cloud analytics and custom business logic to devices so that you
can focus on business insights instead of data management. Azure IoT Edge combines
AI, cloud, and edge computing in containerized cloud workloads, such as Azure
Cognitive Services, Machine Learning, Stream Analytics, and Functions. Workloads can
run on devices ranging from a Raspberry Pi to a converged edge server. You use Azure
IoT Hub to manage your edge applications and devices.

Adding Azure IoT Edge to your Azure Stack HCI branch office and edge deployments
modernizes your environment to support the CI/CD pipeline application deployment
framework. DevOps personnel in your organization can deploy and iterate containerized
applications that IT builds and supports via traditional VM management processes and
tools.

Primary features of Azure IoT Edge:

Open source software from Microsoft
Runs on either Windows or Linux
Runs “on the edge” to provide near-real-time responses
Secure software and hardware mechanisms
Available in the AI Toolkit for Azure IoT Edge
Open programmability support for: Java, .NET Core 2.0, Node.js, C, and Python
Offline and intermittent connectivity support
Native management from Azure IoT Hub

To learn more, see What is Azure IoT Edge.

Deploy branch office and edge


This section describes at a high level how to acquire hardware for branch office and
edge deployments on Azure Stack HCI and use Windows Admin Center for
management. It also covers deploying Azure IoT Edge to manage containers in the
cloud.

Acquire hardware from the Azure Stack HCI Catalog


First, you'll need to procure hardware. The easiest way to do that is to locate your
preferred Microsoft hardware partner in the Azure Stack HCI Catalog and purchase an
integrated system with the Azure Stack HCI operating system preinstalled. In the
catalog, you can filter to see vendor hardware that is optimized for this type of
workload.

Otherwise, you'll need to deploy the Azure Stack HCI operating system on your own
hardware. For details on Azure Stack HCI deployment options and installing Windows
Admin Center, see Deploy the Azure Stack HCI operating system.

Next, use Windows Admin Center to create an Azure Stack HCI cluster.

Use container-based apps and IoT data processing


Now you’re ready to use modern container-based application development and IoT
data processing. Use Windows Admin Center for the steps in this section to deploy a VM
running Azure IoT Edge.

To learn more, see What is Azure IoT Edge.

To deploy Azure IoT Edge on Azure Stack HCI:

1. Use Windows Admin Center to create a new VM in Azure Stack HCI.

For information on supported operating system versions, VM types, processor architectures, and system requirements, see Azure IoT Edge supported systems.

2. If you don’t already have an Azure account, start a free account .


3. In the Azure portal, create an Azure IoT hub.

4. In the Azure portal, register an IoT Edge device.

Note

The IoT Edge device is on a VM running either Windows or Linux on Azure Stack HCI.

5. On the VM that you created in Step 1, install and start the IoT Edge runtime.

Important

You need the device string that you created in Step 4 to connect the runtime
to Azure IoT Hub.

6. Deploy a module to Azure IoT Edge.

You can source and deploy pre-built modules from the IoT Edge Modules
section of Azure Marketplace.

Next steps
For more information about branch office and edge, and Azure IoT Edge, see:

Quickstart: Deploy your first IoT Edge module to a virtual Linux device
Quickstart: Deploy your first IoT Edge module to a Windows device
Deploy virtual desktop infrastructure
(VDI) on Azure Stack HCI
Article • 04/17/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2

This topic provides guidance on how to plan, configure, and deploy virtual desktop
infrastructure (VDI) on the Azure Stack HCI operating system. Leverage your Azure Stack
HCI investment to deliver centralized, highly available, simplified, and secure
management for the users in your organization. Use this guidance to enable scenarios
like bring-your-own-device (BYOD) for your users, while providing them with a
consistent and reliable experience for business-critical applications without sacrificing
security.

Note

This article focuses on deploying Remote Desktop Services (RDS) to Azure Stack
HCI. You can also support VDI workloads using Azure Virtual Desktop for Azure
Stack HCI. Learn more at Azure Virtual Desktop for Azure Stack HCI (preview).

Overview
VDI uses server hardware to run desktop operating systems and software programs on a
virtual machine (VM). In this way, VDI lets you run traditional desktop workloads on
centralized servers. VDI advantages in a business setting include keeping sensitive
company applications and data in a secure datacenter, and accommodating a BYOD
policy without worrying about mixing personal data with corporate assets. VDI has also
become the standard to support remote and branch office workers and provide access
to contractors and partners.

Azure Stack HCI offers the optimal platform for VDI. A validated Azure Stack HCI
solution combined with Microsoft Remote Desktop Services (RDS) lets you achieve a
highly available and highly scalable architecture.

In addition, Azure Stack HCI VDI provides the following unique cloud-based capabilities
to protect VDI workloads and clients:

Centrally managed updates using Azure Update Management
Unified security management and advanced threat protection for VDI clients
Deploy VDI
This section describes at a high level how to acquire hardware to deploy VDI on Azure
Stack HCI and use Windows Admin Center for management. It also covers deploying
RDS to support VDI.

Step 1: Acquire hardware for VDI on Azure Stack HCI


First, you'll need to procure hardware. The easiest way to do that is to locate your
preferred Microsoft hardware partner in the Azure Stack HCI Catalog and purchase an
integrated system with the Azure Stack HCI operating system preinstalled. In the
catalog, you can filter to see vendor hardware that is optimized for this type of
workload. Be sure to consult your hardware partner to make sure the hardware can
support the number of virtual desktops that you want to host on your cluster.

Otherwise, you'll need to deploy the Azure Stack HCI operating system on your own
hardware. For details on Azure Stack HCI deployment options and installing Windows
Admin Center, see Deploy the Azure Stack HCI operating system.

Next, use Windows Admin Center to create an Azure Stack HCI cluster.

Step 2: Set up Azure Update Management in Windows Admin Center
In Windows Admin Center, set up Azure Update Management to quickly assess the
status of available updates, schedule required updates, and review deployment results
to verify applied updates.

To get started with Azure Update Management, you need a subscription to Microsoft
Azure. If you don’t have a subscription, you can sign up for a free trial .

You can also use Windows Admin Center to set up additional Azure hybrid services, such
as Backup, File Sync, Site Recovery, Point-to-Site VPN, and Azure Security Center.

Step 3: Deploy Remote Desktop Services (RDS) for VDI support
After completing your Azure Stack HCI deployment and registering with Azure to use
Update Management, you’re ready to use the guidance in this section to build and
deploy RDS to support VDI.

To build and deploy RDS:


1. Deploy your Remote Desktop environment
2. Create an RDS session collection to share apps and resources
3. License your RDS deployment
4. Direct your users to install a Remote Desktop client to access apps and resources
5. Add Connection Brokers and Session Hosts to enable high availability:

Scale out an existing RDS collection with an RD Session Host farm
Add high availability to the RD Connection Broker infrastructure
Add high availability to the RD Web and RD Gateway web front end
Deploy a two-node Storage Spaces Direct file system for UPD storage

Next steps
For more information related to VDI, see Supported configurations for Remote Desktop Services.
Deploy SQL Server on Azure Stack HCI
Article • 04/17/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2; SQL Server (all supported
versions)

This topic provides guidance on how to plan, configure, and deploy SQL Server on the
Azure Stack HCI operating system. The operating system is a hyperconverged
infrastructure (HCI) cluster solution that hosts virtualized Windows and Linux workloads
and their storage in a hybrid on-premises environment.

Solution overview
Azure Stack HCI provides a highly available, cost efficient, flexible platform to run SQL
Server and Storage Spaces Direct. Azure Stack HCI can run Online Transaction
Processing (OLTP) workloads, data warehouse and BI, and AI and advanced analytics
over big data.

The platform’s flexibility is especially important for mission critical databases. You can
run SQL Server on virtual machines (VMs) that use either Windows Server or Linux,
which allows you to consolidate multiple database workloads and add more VMs to
your Azure Stack HCI environment as needed. Azure Stack HCI also enables you to
integrate SQL Server with Azure Site Recovery to provide a cloud-based migration,
restoration, and protection solution for your organization’s data that is reliable and
secure.

Deploy SQL Server


This section describes at a high level how to acquire hardware for SQL Server on Azure
Stack HCI, and use Windows Admin Center to manage the operating system on your
servers. Information on setting up SQL Server, monitoring and performance tuning, and
using High Availability (HA) and Azure hybrid services is included.

Step 1: Acquire hardware from the Azure Stack HCI Catalog
First, you'll need to procure hardware. The easiest way to do that is to locate your
preferred Microsoft hardware partner in the Azure Stack HCI Catalog and purchase an
integrated system with the Azure Stack HCI operating system preinstalled. In the
catalog, you can filter to see vendor hardware that is optimized for this type of
workload.

Otherwise, you'll need to deploy the Azure Stack HCI operating system on your own
hardware. For details on Azure Stack HCI deployment options and installing Windows
Admin Center, see Deploy the Azure Stack HCI operating system.

Next, use Windows Admin Center to create an Azure Stack HCI cluster.

Step 2: Install SQL Server on Azure Stack HCI


You can install SQL Server on VMs running either Windows Server or Linux depending
on your requirements.

For instructions on installing SQL Server, see:

SQL Server installation guide for Windows.


Installation guidance for SQL Server on Linux.

Step 3: Monitor and performance tune SQL Server


Microsoft provides a comprehensive set of tools for monitoring events in SQL Server
and for tuning the physical database design. Tool choice depends on the type of
monitoring or tuning that you want to perform.

To ensure the performance and health of your SQL Server instances on Azure Stack HCI,
see Performance Monitoring and Tuning Tools.

For tuning SQL Server 2017 and SQL Server 2016, see Recommended updates and
configuration options for SQL Server 2017 and 2016 with high-performance
workloads .

Step 4: Use SQL Server high availability features


Azure Stack HCI leverages Windows Server Failover Clustering (WSFC) with SQL Server
to support SQL Server running in VMs in the event of a hardware failure. SQL Server also
offers Always On availability groups (AG) to provide database-level high availability that
is designed to help with application and software faults. In addition to WSFC and AG,
Azure Stack HCI can use Always On Failover Cluster Instance (FCI), which is based on
Storage Spaces Direct technology for shared storage.

These options all work with the Microsoft Azure Cloud Witness for quorum control. When
you configure Always On availability groups, we recommend using cluster anti-affinity
rules in WSFC to place the VMs on different physical nodes, which helps maintain uptime
for SQL Server in the event of a host failure.
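As a minimal sketch of that anti-affinity recommendation (the cluster group names and the class name are hypothetical), WSFC keeps groups that share an AntiAffinityClassNames value on different nodes when possible:

PowerShell

# Assign the same anti-affinity class to the clustered SQL Server VMs
$class = New-Object System.Collections.Specialized.StringCollection
$class.Add('SQLServerAGVMs') | Out-Null
(Get-ClusterGroup -Name 'SQL-VM-01').AntiAffinityClassNames = $class
(Get-ClusterGroup -Name 'SQL-VM-02').AntiAffinityClassNames = $class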

Step 5: Set up Azure hybrid services


There are several Azure hybrid services that you can use to help keep your SQL Server
data and applications secure. Azure Site Recovery is a disaster recovery as a service
(DRaaS). For more information about using this service to protect the SQL Server back
end of an application to help keep workloads online, see Set up disaster recovery for
SQL Server.

Azure Backup lets you define backup policies to protect enterprise workloads and
supports backing up and restoring SQL Server consistently. For more information about
how to back up your on-premises SQL data, see Install Azure Backup Server.

Alternatively, you can use the SQL Server Managed Backup feature in SQL Server to
manage Azure Blob Storage backups.

For more information about using this option, which is suitable for off-site archiving, see:

Tutorial: Use Azure Blob storage service with SQL Server 2016
Quickstart: SQL backup and restore to Azure Blob storage service

In addition to these backup scenarios, you can set up other database services that SQL
Server offers, including Azure Data Factory and Azure Feature Pack for Integration
Services (SSIS).

Next steps
For more information about working with SQL Server, see:

Tutorial: Getting Started with the Database Engine


Deploy trusted enterprise virtualization
on Azure Stack HCI
Article • 04/17/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2

This topic provides guidance on how to plan, configure, and deploy a highly secure
infrastructure that uses trusted enterprise virtualization on the Azure Stack HCI
operating system. Leverage your Azure Stack HCI investment to run secure workloads on
hardware that uses virtualization-based security (VBS) and hybrid cloud services through
Windows Admin Center and the Azure portal.

Overview
VBS is a key component of the security investments in Azure Stack HCI to protect hosts
and virtual machines (VMs) from security threats. For example, the Security Technical
Implementation Guide (STIG) , which is published as a tool to improve the security of
Department of Defense (DoD) information systems, lists VBS and Hypervisor-Protected
Code Integrity (HVCI) as general security requirements. It is imperative to use host
hardware that is enabled for VBS and HVCI to protect workloads on VMs, because a
compromised host cannot guarantee VM protection.

VBS uses hardware virtualization features to create and isolate a secure region of
memory from the operating system. You can use Virtual Secure Mode (VSM) in Windows
to host a number of security solutions to greatly increase protection from operating
system vulnerabilities and malicious exploits.

VBS uses the Windows hypervisor to create and manage security boundaries in
operating system software, enforce restrictions to protect vital system resources, and
protect security assets, such as authenticated user credentials. With VBS, even if malware
gains access to the operating system kernel, you can greatly limit and contain possible
exploits, because the hypervisor prevents malware from executing code or accessing
platform secrets.

The hypervisor, the most privileged level of system software, sets and enforces page
permissions across all system memory. While in VSM, pages can only execute after
passing code integrity checks. Even if a vulnerability occurs, such as a buffer overflow that
could allow malware to attempt to modify memory, code pages cannot be modified,
and modified memory cannot be executed.
integrity policy enforcement. All kernel mode drivers and binaries are checked before
they can start, and unsigned drivers or system files are prevented from loading into
system memory.

Deploy trusted enterprise virtualization


This section describes at a high level how to acquire hardware to deploy a highly secure
infrastructure that uses trusted enterprise virtualization on Azure Stack HCI and
Windows Admin Center for management.

Step 1: Acquire hardware for trusted enterprise virtualization on Azure Stack HCI
First, you'll need to procure hardware. The easiest way to do that is to locate your
preferred Microsoft hardware partner in the Azure Stack HCI Catalog and purchase an
integrated system with the Azure Stack HCI operating system preinstalled. In the
catalog, you can filter to see vendor hardware that is optimized for this type of
workload.

Otherwise, you'll need to deploy the Azure Stack HCI operating system on your own
hardware. For details on Azure Stack HCI deployment options and installing Windows
Admin Center, see Deploy the Azure Stack HCI operating system.

Next, use Windows Admin Center to create an Azure Stack HCI cluster.

All partner hardware for Azure Stack HCI is certified with the Hardware Assurance
Additional Qualification. The qualification process tests for all required VBS functionality.
However, VBS and HVCI are not automatically enabled in Azure Stack HCI. For more
information about the Hardware Assurance Additional Qualification, see "Hardware Assurance" under Systems in the Windows Server Catalog.

Warning

HVCI may be incompatible with hardware devices not listed in the Azure Stack HCI
Catalog. We strongly recommend using Azure Stack HCI validated hardware from
our partners for trusted enterprise virtualization infrastructure.

Step 2: Enable HVCI


Enable HVCI on your server hardware and VMs. For details, see Enable virtualization-
based protection of code integrity.
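You can enable HVCI through Windows Admin Center, Group Policy, or the registry keys described in the linked article. The sketch below uses the registry approach; it assumes you run it in an elevated session on each host and restart afterwards.

PowerShell

# Turn on VBS with Secure Boot, then enable HVCI (restart required)
$dg = 'HKLM:\SYSTEM\CurrentControlSet\Control\DeviceGuard'
New-Item -Path $dg -Force | Out-Null
New-ItemProperty -Path $dg -Name 'EnableVirtualizationBasedSecurity' -Value 1 -PropertyType DWord -Force | Out-Null
New-ItemProperty -Path $dg -Name 'RequirePlatformSecurityFeatures' -Value 1 -PropertyType DWord -Force | Out-Null   # 1 = Secure Boot

$hvci = "$dg\Scenarios\HypervisorEnforcedCodeIntegrity"
New-Item -Path $hvci -Force | Out-Null
New-ItemProperty -Path $hvci -Name 'Enabled' -Value 1 -PropertyType DWord -Force | Out-Null

# After restarting, SecurityServicesRunning should include 2 (HVCI)
Get-CimInstance -ClassName Win32_DeviceGuard -Namespace root\Microsoft\Windows\DeviceGuard |
    Select-Object SecurityServicesConfigured, SecurityServicesRunning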

Step 3: Set up Azure Security Center in Windows Admin Center
In Windows Admin Center, set up Azure Security Center to add threat protection and
quickly assess the security posture of your workloads.

To learn more, see Protect Windows Admin Center resources with Security Center.

To get started with Security Center:

You need a subscription to Microsoft Azure. If you don’t have a subscription, you
can sign up for a free trial .
Security Center's free pricing tier is enabled on all your current Azure subscriptions
once you either visit the Azure Security Center dashboard in the Azure portal, or
enable it programmatically via API.
To take advantage of advanced security
management and threat detection capabilities, you must enable Azure Defender.
You can use Azure Defender free for 30 days. For more information, see Security
Center pricing .
If you're ready to enable Azure Defender, see Quickstart: Setting up Azure Security
Center to walk through the steps.

You can also use Windows Admin Center to set up additional Azure hybrid services, such
as Backup, File Sync, Site Recovery, Point-to-Site VPN, and Update Management.

Next steps
For more information related to trusted enterprise virtualization, see:

Hyper-V on Windows Server


Deploy and manage Azure Stack HCI
clusters in VMM
Article • 03/24/2023

This article provides information about how to set up an Azure Stack HCI cluster in
System Center - Virtual Machine Manager (VMM). You can deploy an Azure Stack HCI
cluster by provisioning from bare-metal servers or by adding existing hosts. Learn
more about the new Azure Stack HCI.

VMM 2019 Update Rollup 3 (UR3) supports Azure Stack HCI, version 20H2. The current
product is Azure Stack HCI, version 21H2. Starting with System Center 2022, VMM
supports Azure Stack HCI, version 20H2; Azure Stack HCI, version 21H2; and Azure Stack
HCI, version 22H2 (supported from VMM 2022 UR1).

Important

Azure Stack HCI clusters that are managed by Virtual Machine Manager shouldn’t
join the preview channel yet. System Center (including Virtual Machine Manager,
Operations Manager, and other components) does not currently support Azure
Stack preview versions. For the latest updates, see the System Center blog .

Before you start


Ensure that you're running VMM 2019 UR3 or later.

What’s supported?

Addition, creation, and management of Azure Stack HCI clusters. See detailed
steps to create and manage HCI clusters.

Ability to provision & deploy VMs on the Azure Stack HCI clusters and perform VM
life cycle operations. VMs can be provisioned using VHD(x) files, templates, or from
an existing VM. Learn more.

Set up VLAN based network on Azure Stack HCI clusters.

Deployment and management of SDN network controller on Azure Stack HCI clusters.
Management of storage pool settings, creation of virtual disks, creation of cluster
shared volumes (CSVs), and application of QoS settings.

Moving VMs between Windows Server and Azure Stack HCI clusters works via
Network Migration and migrating an offline (shut down) VM. In this scenario, VMM
does export and import under the hood, even though it's performed as a single
operation.

The PowerShell cmdlets used to manage Windows Server clusters can be used to
manage Azure Stack HCI clusters as well.

Register and unregister Azure Stack HCI clusters

With VMM 2022, we're introducing VMM PowerShell cmdlets to register and unregister
Azure Stack HCI clusters.

Use the following cmdlets to register an HCI cluster:

PowerShell

Register-SCAzStackHCI -VMHostCluster <HostCluster> -SubscriptionID <string>

Use the following command to unregister a cluster:

PowerShell

Unregister-SCAzStackHCI -VMHostCluster <HostCluster> -SubscriptionID <string>

For detailed information on the supported parameters, see Register-SCAzStackHCI and Unregister-SCAzStackHCI.

What’s not supported?

Management of Azure Stack HCI stretched clusters is currently not supported in VMM.

Azure Stack HCI is intended as a virtualization host where you run all your
workloads in virtual machines. The Azure Stack HCI terms allow you to run only
what's necessary for hosting virtual machines. Azure Stack HCI clusters shouldn't
be used for other purposes like WSUS servers, WDS servers, or library servers. Refer
to Use cases for Azure Stack HCI, When to use Azure Stack HCI, and Roles you can
run without virtualizing.
Live migration between any version of Windows Server and Azure Stack HCI
clusters isn't supported.

Note

Live migration between Azure Stack HCI clusters works, as well as between
Windows Server clusters.

The only storage type available for Azure Stack HCI is Storage Spaces Direct (S2D).
Creation or management of non-S2D cluster with Azure Stack HCI nodes isn't
supported. If you need to use any other type of storage, for example SANs, use
Windows Server as the virtualization host.

Note

You must enable S2D when creating an Azure Stack HCI cluster. To enable S2D, in
the cluster creation wizard, go to General Configuration. Under Specify the cluster
name and host group, select Enable Storage Spaces Direct.

After you enable a cluster with S2D, VMM does the following:
The Failover Clustering feature is enabled.
Storage replica and data deduplication are enabled.
The cluster is optionally validated and created.
S2D is enabled, and a storage array object is created in VMM with the same name
as you provided in the wizard.

When you use VMM to create a hyper-converged cluster, the pool and the storage tiers
are automatically created by running Enable-ClusterStorageSpacesDirect -Autoconfig
$True.
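
If you want to confirm from one of the cluster nodes that S2D and the automatically
created pool are in place, a minimal verification sketch (not part of the VMM workflow)
looks like this:

PowerShell

# Run on a cluster node: confirm S2D is enabled
Get-ClusterStorageSpacesDirect

# List the non-primordial pool that S2D created
Get-StoragePool | Where-Object IsPrimordial -eq $false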

After these prerequisites are in place, you provision a cluster, and set up storage
resources on it. You can then deploy VMs on the cluster.

Follow these steps:

Step 1: Provision the cluster


You can provision a cluster from Hyper-V hosts or from bare-metal machines:

Provision a cluster from Hyper-V hosts


If you need to add the Azure Stack HCI hosts to the VMM fabric, follow these steps. If
they’re already in the VMM fabric, skip to the next step.

7 Note

When you set up the cluster, select the Enable Storage Spaces Direct option
on the General Configuration page of the Create Hyper-V Cluster wizard.
In Resource Type, select Existing servers running a Windows Server
operating system, and select the Hyper-V hosts to add to the cluster.
All the selected hosts should have Azure Stack HCI installed.
Since S2D is enabled, the cluster must be validated.

Provision a cluster from bare metal machines

7 Note

Typically, an S2D node requires RDMA, QoS, and SET settings. To configure these
settings for a node deployed from bare-metal computers, you can use the post-
deployment script capability in the physical computer profile (PCP). Here's the sample
PCP post-deployment script. You can also use this script to configure RDMA, QoS, and
SET while adding a new node to an existing S2D deployment from bare-metal
computers.

1. Read the prerequisites for bare-metal cluster deployment.

7 Note

The generalized VHD or VHDX in the VMM library should be running Azure
Stack HCI with the latest updates. The Operating system and Virtualization
platform values for the hard disk should be set.
For bare-metal deployment, you need to add a pre-boot execution
environment (PXE) server to the VMM fabric. The PXE server is provided
through Windows Deployment Services. VMM uses its own WinPE image, and
you need to ensure that it’s the latest. To do this, select Fabric >
Infrastructure > Update WinPE image, and ensure that the job finishes.

2. Follow the instructions for provisioning a cluster from bare-metal computers.

Step 2: Set up networking for the cluster


After the cluster is provisioned and managed in the VMM fabric, you need to set up
networking for cluster nodes.

1. Start by creating a logical network to mirror your physical management network.


2. You need to set up a logical switch with Switch Embedded Teaming (SET) enabled
so that the switch is aware of virtualization. This switch is connected to the
management logical network and has all of the host virtual adapters that are
required to provide access to the management network or to configure storage
networking. S2D relies on the network to communicate between hosts; RDMA-
capable adapters are recommended. A host-level sketch of a SET-enabled switch
is shown after this list.
3. Create VM networks.
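
For reference, this sketch shows what an equivalent SET-enabled switch looks like when
created directly on a host with Hyper-V PowerShell; the adapter and switch names are
placeholders, and in the VMM workflow the logical switch deployment handles this for you:

PowerShell

# Placeholder adapter and switch names; VMM's logical switch normally creates this for you
New-VMSwitch -Name "SETswitch" -NetAdapterName "pNIC1","pNIC2" -EnableEmbeddedTeaming $true -AllowManagementOS $true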

Step 3: Configure DCB settings on the Azure Stack HCI cluster

7 Note
Configuration of DCB settings is an optional step to achieve high performance during
the S2D cluster creation workflow. Skip to step 4 if you don't wish to configure DCB
settings.

Recommendations
If you have vNICs deployed, for optimal performance we recommend that you map
each of your vNICs to a corresponding pNIC. Affinities between vNICs and pNICs are
set randomly by the operating system, and there could be scenarios where
multiple vNICs are mapped to the same pNIC. To avoid such scenarios, we
recommend that you manually set the affinity between each vNIC and pNIC by
following the steps listed here (a sample command is shown after the IEEE priority
commands below).

When you create a network adapter port profile, we recommend that you allow IEEE
priority. Learn more.

You can also set the IEEE Priority by using the following PowerShell commands:

PowerShell

Set-VMNetworkAdapterVlan -VMNetworkAdapterName 'SMB2' -VlanId '101' -Access -ManagementOS

Set-VMNetworkAdapter -ManagementOS -Name 'SMB2' -IeeePriorityTag on
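
The vNIC-to-pNIC affinity mentioned in the recommendations can be set on each host with
the following sketch; the vNIC and pNIC names are placeholders for your environment:

PowerShell

# Placeholder names; map each storage vNIC to a specific physical adapter
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName 'SMB2' -PhysicalNetAdapterName 'pNIC1'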

Use the following steps to configure DCB settings:

1. Create a new Hyper-V cluster and select Enable Storage Spaces Direct. The DCB
Configuration option is added to the Hyper-V cluster creation workflow.
2. In DCB configuration, select Configure Data Center Bridging.

3. Provide Priority and Bandwidth values for SMB-Direct and Cluster Heartbeat
traffic.

7 Note

Default values are assigned to Priority and Bandwidth. Customize these values
based on your organization's environment needs.
Default values:

Traffic Class Priority Bandwidth (%)

Cluster Heartbeat 7 1

SMB-Direct 3 50

4. Select the network adapters used for storage traffic. RDMA is enabled on these
network adapters.

7 Note

In a converged NIC scenario, select the storage vNICs. The underlying pNICs
should be RDMA capable for vNICs to be displayed and available for
selection.
5. Review the summary and select Finish.

An Azure Stack HCI cluster will be created and the DCB parameters are configured
on all the S2D nodes.

7 Note

DCB settings can be configured on the existing Hyper-V S2D clusters by
visiting the Cluster Properties page and navigating to the DCB
configuration page.

Any out-of-band changes to DCB settings on any of the nodes will cause
the S2D cluster to be non-compliant in VMM. A Remediate option will
be provided in the DCB configuration page of cluster properties, which
you can use to enforce the DCB settings configured in VMM on the
cluster nodes.

Step 4: Register Azure Stack HCI cluster with Azure

After creating an Azure Stack HCI cluster, it must be registered with Azure within 30 days
of installation per the Azure Online Services terms. If you're using System Center 2022, use
the Register-SCAzStackHCI cmdlet in VMM to register the Azure Stack HCI cluster with
Azure. Alternatively, follow these steps to register the Azure Stack HCI cluster with Azure.

The registration status is reflected in VMM after a successful cluster refresh.

Step 5: View the registration status of Azure Stack HCI clusters
1. In the VMM console, you can view the registration status and last connected date
of Azure Stack HCI clusters.

2. Select Fabric, right-click the Azure Stack HCI cluster, and select Properties.

3. Alternatively, run Get-SCVMHost and observe the properties of the returned object to
check the registration status.
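
For example, this sketch lists all properties of a host so you can locate the Azure
registration fields; the host name is a placeholder, and the exact property names can vary
by VMM version:

PowerShell

# Placeholder host name; inspect the output for the Azure registration properties
Get-SCVMHost -ComputerName "HCINode01" | Format-List *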

Step 6: Manage the pool and create CSVs


You can now modify the storage pool settings and create virtual disks and CSVs.
1. Select Fabric > Storage > Arrays.

2. Right-click the cluster > Manage Pool, and select the storage pool that was
created by default. You can change the default name and add a classification.

3. To create a CSV, right-click the cluster > Properties > Shared Volumes.

4. In the Create Volume Wizard > Storage Type, specify the volume name and select
the storage pool.

5. In Capacity, you can specify the volume size, file system, and resiliency (Failures to
tolerate) settings.

6. Select Configure advanced storage and tiering settings to set up these options.
7. In Summary, verify settings and finish the wizard. A virtual disk will be created
automatically when you create the volume.

Step 7: Deploy VMs on the cluster


In a hyper-converged topology, VMs can be directly deployed on the cluster. Their
virtual hard disks are placed on the volumes you created using S2D. You create and
deploy these VMs just as you would create any other VM.

) Important

If the Azure Stack HCI cluster isn't registered with Azure, or hasn't connected to Azure
for more than 30 days post registration, high-availability virtual machine (HAVM)
creation is blocked on the cluster. Refer to steps 4 and 5 for cluster registration.

Step 8: Migrate VMs from Windows Server to Azure Stack HCI cluster

Use the network migration functionality in VMM to migrate workloads from Hyper-V
(Windows Server 2019 and later) to Azure Stack HCI.
7 Note

Live migration between Windows Server and Azure Stack HCI isn’t supported.
Network migration from Azure Stack HCI to Windows Server isn’t supported.

1. Temporarily disable the live migration at the destination Azure Stack HCI host.
2. Select VMs and Services > All Hosts, and then select the source Hyper-V host from
which you want to migrate.
3. Select the VM that you want to migrate. The VM must be in a turned off state.
4. Select Migrate Virtual Machine.
5. In Select Host, review and select the destination Azure Stack HCI host.
6. Select Next to initiate network migration. VMM will perform imports and exports at
the back end.
7. To verify that the virtual machine is successfully migrated, check the VMs list on the
destination host. Turn on the VM and re-enable live migration on the Azure Stack
HCI host.

Step 9: Migrate VMware workloads to Azure Stack HCI cluster using SCVMM

VMM offers a simple wizard-based experience for V2V (Virtual to Virtual) conversion.
You can use the conversion tool to migrate workloads at scale from VMware
infrastructure to Hyper-V infrastructure. For the list of supported VMware servers, see
System requirements.

For prerequisites and limitations for the conversion, see Convert a VMware VM to
Hyper-V in the VMM fabric.

1. Create a Run As account for the vCenter Server administrator role in VMM. These
administrator credentials are used to manage the vCenter Server and ESXi hosts.
2. In the VMM console, under Fabric, select Servers > Add VMware vCenter Server.

3. In the Add VMware vCenter Server page, do the following:


a. Computer name: Specify the vCenter server name.
b. Run As account: Select the Run As account created for vSphere administrator.

4. Select Finish.

5. In the Import Certificate page, select Import.

6. After the vCenter Server is added successfully, the ESXi hosts under that vCenter
can be added to VMM, as described in the next section.

Add Hosts
1. In the VMM console, under Fabric, select Servers > Add VMware ESX Hosts and
Clusters.

2. In the Add Resource Wizard,

a. Under Credentials, select the Run As account that is used for the port and select
Next.

b. Under Target Resources, select all the ESX clusters that need to be added to
VMM and select Next.

c. Under Host Settings, select the location where you want to add the VMs and
select Next.

d. Under Summary, review the settings and select Finish. Along with the hosts,
associated VMs will also get added.

Verify the status of ESXi host


1. If the ESXi host status shows as OK (Limited), right-click the host, select Properties >
Management, select the Run As account that is used for the port, and import the
certificates for the host.

Repeat the same process for all the ESXi hosts.


After you add the ESXi clusters, all the virtual machines running on the ESXi
clusters are auto discovered in VMM.

View VMs
1. Go to VMs and Services to view the virtual machines. You can also manage the
primary lifecycle operations of these virtual machines from VMM.

2. Right-click the VM that needs to be migrated, select Power Off (online migrations
aren't supported), and uninstall VMware Tools from the guest operating system.

3. Select Home > Create Virtual Machines > Convert Virtual Machine.

4. In the Convert Virtual Machine Wizard,


a. Under Select Source, select the VM running in ESXi server and select Next.

b. Under Specify Virtual Machine Identity, enter a new name for the virtual
machine if you wish, and select Next.

5. Under Select Host, select the target Azure Stack HCI node, specify the location
on the host for the VM storage files, and select Next.

6. Select a virtual network for the virtual machine and select Create to complete the
migration.

The virtual machine running on the ESXi cluster is migrated to the Azure Stack
HCI cluster. For automation, use PowerShell commands for the conversion.
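
As a rough automation sketch, the VMM V2V conversion cmdlet can be scripted as shown
below; the VM, host, and path names are placeholders, and the exact parameters you need
depend on your environment and VMM version:

PowerShell

# Placeholder names; assumes the VMware VM is already visible in VMM
$vm = Get-SCVirtualMachine -Name "vmware-vm01"
$vmHost = Get-SCVMHost -ComputerName "HCINode01"
New-SCV2V -VM $vm -VMHost $vmHost -Name "vmware-vm01" -Path "C:\ClusterStorage\Volume1"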

Next steps
Provision VMs
Manage the cluster
License Windows Server VMs on Azure Stack HCI
Article • 04/17/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2; Windows Server 2022,
Windows Server 2019 Datacenter Edition and later

Windows Server virtual machines (VMs) must be licensed and activated before you can
use them on Azure Stack HCI. You can use any existing Windows Server licenses and
activation methods that you already have. Optionally, Azure Stack HCI offers new
licensing models and tools to help simplify this process. This article describes general
licensing concepts and the new options that are available on Azure Stack HCI.

Summary
The following figure shows the different Windows Server VM licensing options:

First, choose one of two licensing options:

Windows Server subscription: Subscribe to Windows Server guest licenses
through Azure. Available for Azure Stack HCI only.

Bring your own license (BYOL): Apply your existing Windows Server licenses.

For more information, see Compare licensing options.

Next, activate your Windows Server VMs:

If you are using Windows Server subscription, AVMA is automatically enabled on
the host. You can immediately activate VMs against the cluster using generic
AVMA client keys.

If you are using BYOL, you must use the corresponding keys associated with your
license and apply them using your chosen activation method. One of the most
convenient ways is to use Automatic VM Activation (AVMA).

To use other methods to activate VMs, see Key Management Services (KMS)
activation planning.

Compare licensing options


Choose the licensing option that best suits your needs:

Question | Windows Server subscription | Bring your own license (BYOL)
--- | --- | ---
Where do I want to deploy my Windows Server (WS) VMs? | Azure Stack HCI only. | Can be applied anywhere.
What versions of WS VMs do you want to use? | Evergreen – all versions up to the latest version. | Version-specific.
Does this option also allow me to use Windows Server: Azure edition? | Yes. | Need to have both Software Assurance (SA) and WS volume license keys.
How do I activate my WS VMs? | No host-side keys – AVMA is automatically enabled. After it's enabled, you can then apply the generic AVMA keys on the client side. | Key based – for example, KMS/AVMA/enter keys in VM.
What are the CAL requirements? | No CAL required – included in WS subscription. | Windows Server CAL.
What is the pricing model? | Per physical core/per month pricing, purchased and billed through Azure (free trial within the first 60 days of registering your Azure Stack HCI). For details, see Pricing for Windows Server subscription. | Core licenses. For details, see Licensing Windows Server and Pricing for Windows Server licenses.

Guest versions
The following table shows the guest operating systems that the different licensing
methods can activate:

Version | BYO Windows Server 2019 license | BYO Windows Server 2022 license | Windows Server subscription
--- | --- | --- | ---
Windows Server 2012/R2 | X | X | X
Windows Server 2016 | X | X | X
Windows Server 2019 | X | X | X
Windows Server 2022 | | X | X
Windows Server 2022: Azure Edition | Requires Software Assurance | Requires Software Assurance | X
Future editions (Evergreen) | | | X

Tutorial: Windows Server subscription


Windows Server subscription enables you to subscribe to Windows Server guest
licensing on Azure Stack HCI through Azure.

How does Windows Server subscription work?


When Windows Server subscription is purchased, Azure Stack HCI servers retrieve
licenses from the cloud and automatically set up AVMA on the cluster. After setting up
AVMA, you can then apply the generic AVMA keys on the client side.
Prerequisites
An Azure Stack HCI cluster
Install updates: Version 21H2, with at least the December 14, 2021 security
update KB5008210 or later.
Register Azure Stack HCI: All servers must be online and registered to Azure.

If using Windows Admin Center:


Windows Admin Center (version 2103 or later) with the Cluster Manager
extension (version 2.41.0 or later).

Enable Windows Server subscription using the Azure portal
1. In your Azure Stack HCI cluster resource page, navigate to the Configuration
screen.

2. Under the feature Windows Server subscription add-on, select Purchase. In the
context pane, select Purchase again to confirm.

3. When Windows Server subscription has been successfully purchased, you can start
using Windows Server VMs on your cluster. Licenses will take a few minutes to be
applied on your cluster.

Troubleshooting - Windows Server subscription


Error: One or more servers in the cluster does not have the latest changes to this setting.
We'll apply the changes as soon as the servers sync again.

Remediation: Your cluster does not yet have the latest status on Windows Server
subscription (for example, you just enrolled or just canceled), and therefore might not
have retrieved the licenses to set up AVMA. In most cases, the next cloud sync will
resolve this discrepancy, or you can sync manually. See Syncing Azure Stack HCI.
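
To trigger a manual sync instead of waiting for the next scheduled one, a minimal sketch
run on one of the cluster servers looks like this:

PowerShell

# Force an immediate sync of registration and licensing data with Azure
Sync-AzureStackHCI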

Enable Windows Server subscription using Windows Admin Center
1. Select Cluster Manager from the top drop-down, navigate to the cluster that you
want to activate, then under Settings, select Activate Windows Server VMs.

2. In the Automatically activate VMs pane, select Set up, and then select Purchase
Windows Server subscription. Select Next and confirm details, then select
Purchase.

3. When you complete the purchase successfully, the cluster retrieves licenses from
the cloud, and sets up AVMA on your cluster.
Enable Windows Server subscription using PowerShell
Purchase Windows Server subscription: From your cluster, run the following
command:

PowerShell

Set-AzStackHCI -EnableWSsubscription $true

Check status: To check subscription status for each server, run the following
command on each server:

PowerShell

Get-AzureStackHCISubscriptionStatus

To check that AVMA has been set up with Windows Server subscription, run the
following command on each server:

PowerShell

Get-VMAutomaticActivation

Activate VMs against a host server


Now that AVMA has been enabled through Windows Server subscription, you can
activate VMs against the host server by following the steps in Automatic Virtual Machine
Activation in Windows Server.
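
Inside each guest VM, activation typically amounts to applying the generic AVMA client
key for that guest operating system version; this is a sketch, and the key value is a
placeholder for the published generic AVMA key:

PowerShell

# Run inside the guest VM; replace <AVMA client key> with the generic AVMA key for your guest OS version
slmgr /ipk <AVMA client key>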

Tutorial: Bring your own license (BYOL) activation through AVMA
You can use any existing method to activate VMs on Azure Stack HCI. Optionally, you
can use AVMA, which enables activated host servers to automatically activate VMs
running on them. For more information, see AVMA in Windows Server.

Benefits of AVMA
VM activation through host servers presents several benefits:

Individual VMs don't have to be connected to the internet. Only licensed host
servers with internet connectivity are required.
License management is simplified. Instead of having to true-up key usage counts
for individual VMs, you can activate any number of VMs with just a properly
licensed server.
AVMA acts as a proof-of-purchase mechanism. This capability helps to ensure that
Windows products are used in accordance with product use rights and Microsoft
software license terms.

Take a few minutes to watch the video on using Automatic Virtual Machine Activation in
Windows Admin Center:
https://www.microsoft.com/en-us/videoplayer/embed/RWFdsF?postJsllMsg=true

Prerequisites - activation
Before you begin:

Get the required Windows Server Datacenter key(s):


Key editions: Windows Server 2019 Datacenter or later. For information about
what guest versions your key activates, see Guest versions.
Number of keys: One unique key for each host server you are activating, unless
you have a valid volume license key.
Consistency across cluster: All servers in a cluster need to use the same edition
of keys, so that VMs stay activated regardless of which server they run on.

An Azure Stack HCI cluster (version 20H2 with the June 8, 2021 cumulative update
or later).

Windows Admin Center (version 2103 or later).

The Cluster Manager extension for Windows Admin Center (version 1.523.0 or
later).

7 Note

For VMs to stay activated regardless of which server they run on, AVMA must be set
up for each server in the cluster.

AVMA using Windows Admin Center


You can use Windows Admin Center to set up and manage product keys for your Azure
Stack HCI cluster.

Apply activation keys

To use AVMA in Windows Admin Center:

1. Select Cluster Manager from the top drop-down arrow, navigate to the cluster
that you want to activate, then under Settings, select Activate Windows Server
VMs.
2. In the Automatically activate VMs pane, select Set up and then select Use existing
Windows Server licenses. In the Apply activation keys to each server pane, enter
your Windows Server Datacenter keys.

When you have finished entering keys for each host server in the cluster, select
Apply. The process then takes a few minutes to complete.

7 Note

Each server requires a unique key, unless you have a valid volume license key.

3. Now that AVMA has been enabled, you can activate VMs against the host server
by following the steps in Automatic Virtual Machine Activation.

Change or add keys later (optional)


You might want to either change or add keys later; for example, when you add a server
to the cluster, or use new Windows Server VM versions.

To change or add keys:

1. In the Activate Windows Server VMs pane, select the servers that you want to
manage, and then select Manage activation keys.

2. In the Manage activation keys pane, enter the new keys for the selected host
servers, and then select Apply.

7 Note

Overwriting keys does not reset the activation count for used keys. Ensure
that you're using the right keys before applying them to the servers.

Troubleshooting - Windows Admin Center


If you receive the following AVMA error messages, try using the verification steps in this
section to resolve them.

Error 1: "The key you entered didn't work"


This error might be due to one of the following issues:

A key submitted to activate a server in the cluster was not accepted.


A disruption of the activation process prevented a server in the cluster from being
activated.
A valid key hasn't yet been applied to a server that was added to the cluster.

To resolve such issues, in the Activate Windows Server VMs window, select the server
with the warning, and then select Manage activation keys to enter a new key.

Error 2: "Some servers use keys for an older version of Windows


Server"
All servers must use the same version of keys. Update the keys to the same version to
ensure that the VMs stay activated regardless of which server they run on.

Error 3: "Server is down"

Your server is offline and cannot be reached. Bring all servers online and then refresh the
page.

Error 4: "Couldn't check the status on this server" or "To use this
feature, install the latest update"

One or more of your servers is not updated and does not have the required packages to
set up AVMA. Ensure that your cluster is updated, and then refresh the page. For more
information, see Update Azure Stack HCI clusters.

AVMA using PowerShell


You can also use PowerShell to set up and manage key-based AVMA for your Azure
Stack HCI cluster.

Open PowerShell as an administrator, and run the following commands:

1. Apply Windows Server Datacenter keys to each server:

PowerShell

Set-VMAutomaticActivation <product key>

2. View and confirm Automatic Virtual Machine Activation status:

PowerShell

Get-VMAutomaticActivation

3. Repeat these steps on each of the other servers in your Azure Stack HCI cluster.

Now that you have set up AVMA through BYOL, you can activate VMs against the host
server by following the steps here.

FAQ
This FAQ provides answers to some questions about licensing Windows Server.
Will my Windows Server Datacenter Azure Edition guests
activate on Azure Stack HCI?
Yes, but you must use either Windows Server subscription-based AVMA, or else bring
Windows Server Datacenter keys with Software Assurance. For BYOL, you can use either:

AVMA client keys


KMS client keys

Do I still need Windows Server CALs?


Yes, you still need Windows Server CALs for BYOL, but not for Windows Server
subscription.

Do I need to be connected to the internet?


You do need internet connectivity:

To sync host servers to Azure at least once every 30 days, in order to maintain
Azure Stack HCI 30-day connectivity requirements and to sync host licenses for
AVMA.
When purchasing or canceling Windows Server subscription.

You do not need internet connectivity:

For VMs to activate via Windows Server subscription or BYOL-based AVMA. For
connectivity requirements for other forms of activation, see the Windows Server
documentation.

When does Windows Server subscription start/end billing?
Windows Server subscription starts billing and activating Windows Server VMs
immediately upon purchase. If you enable Windows Server subscription within the first
60 days of activating Azure Stack HCI, you automatically have a free trial for the duration
of that period.

You can sign up or cancel your Windows Server subscription at any time. Upon
cancellation, billing and activation via Azure stops immediately. Make sure you have an
alternate form of licensing if you continue to run Windows Server VMs on your cluster.
I have a license for Windows Server, can I run Windows
Server 2016 VMs on Azure Stack HCI?
Yes. Although you cannot use Windows Server 2016 keys to set up AVMA on Azure
Stack HCI, they can still be applied using other activation methods. For example, you can
enter a Windows Server 2016 key into your Windows Server 2016 VM directly.
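
For example, applying such a key directly inside the guest with the built-in slmgr tool is a
common approach; this is a sketch, and the key shown is a placeholder:

PowerShell

# Run inside the Windows Server 2016 guest VM; replace with your own product key
slmgr /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
slmgr /ato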

Where can I get BYOL keys for AVMA?


To get a product key, choose from the following options:

OEM provider: Find a Certificate of Authenticity (COA) key label on the outside of
the OEM hardware. You can use this key once per server in the cluster.
Volume Licensing Service Center (VLSC): From the VLSC, you can download a
Multiple Activation Key (MAK) that you can reuse up to a predetermined number
of allowed activations. For more information, see MAK keys.
Retail channels: You can also find a retail key on a retail box label. You can only use
this key once per server in the cluster. For more information, see Packaged
Software .

I want to change an existing key. What happens to the previous key if the overwrite is
successful/unsuccessful?
Once a product key is associated with a device, that association is permanent.
Overwriting keys does not reduce the activation count for used keys. If you successfully
apply another key, both keys would be considered to have been "used" once. If you
unsuccessfully apply a key, your host server activation state remains unchanged and
defaults to the last successfully added key.

I want to change to another key of a different version. Is it possible to switch keys
between versions?
You can update to newer versions of keys, or replace existing keys with the same
version, but you cannot downgrade to a previous version.

What happens if I add or remove a server?


You'll need to add activation keys for each new server, so that the Windows Server VMs
can be activated against the new server. Removing a server does not impact how AVMA
is set up for the remaining servers in the cluster.
I previously purchased a Windows Server Software-
Defined Datacenter (WSSD) solution with a Windows
Server 2019 key. Can I use that key for Azure Stack HCI?
Yes, but you'll need to use keys for Windows Server 2022 or later, which will be available
after the general availability of Windows Server 2022.

Next steps
Automatic virtual machine activation
Key Management Services (KMS) activation planning for Windows Server
Manage VMs with Windows Admin Center
Article • 04/17/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2; Windows Server 2022,
Windows Server 2019

Windows Admin Center can be used to create and manage your virtual machines (VMs)
on Azure Stack HCI.

Create a new VM
You can easily create a new VM using Windows Admin Center.


1. On the Windows Admin Center home screen, under All connections, select the
server or cluster you want to create the VM on.

2. Under Tools, scroll down and select Virtual machines.

3. Under Virtual machines, select the Inventory tab, then select Add and New.

4. Under New virtual machine, enter a name for your VM.

5. Select Generation 2 (Recommended).

6. Under Host, select the server you want the VM to reside on.

7. Under Path, select a preassigned file path from the dropdown list or click Browse
to choose the folder to save the VM configuration and virtual hard disk (VHD) files
to. You can browse to any available SMB share on the network by entering the
path as \\server\share.

8. Under Virtual processors, select the number of virtual processors and whether you
want nested virtualization enabled for the VM. If the cluster is running Azure Stack
HCI, version 21H2, you'll also see a checkbox to enable processor compatibility
mode on the VM.

9. Under Memory, select the amount of startup memory (4 GB is recommended as a
minimum), and a min and max range of dynamic memory as applicable to be
allocated to the VM.

10. Under Network, select a virtual switch from the dropdown list.

11. Under Network, select one of the following for the isolation mode from the
dropdown list:

Set to Default (None) if the VM is connected to the virtual switch in access
mode.
Set to VLAN if the VM is connected to the virtual switch over a VLAN. Specify
the VLAN identifier as well.
Set to Virtual Network (SDN) if the VM is part of an SDN virtual network.
Select a virtual network name, subnet, and specify the IP Address. Optionally,
select a network security group that can be applied to the VM.
Set to Logical Network (SDN) if the VM is part of an SDN logical network.
Select the logical network name, subnet, and specify the IP Address.
Optionally, select a network security group that can be applied to the VM.

12. Under Storage, click Add and select whether to create a new empty virtual hard
disk or to use an existing virtual hard disk. If you're using an existing virtual hard
disk, click Browse and select the applicable file path.

13. Under Operating system, do one of the following:

Select Install an operating system later if you want to install an operating
system for the VM after the VM is created.
Select Install an operating system from an image file (*.iso), click Browse,
then select the applicable .iso image file to use.

14. When finished, click Create to create the VM.

15. To start the VM, in the Virtual Machines list, hover over the new VM, enable the
checkbox for it on the left, and select Start.

16. Under State, verify that the VM state is Running.

Get a list of VMs


You can easily see all VMs on a server or in your cluster.

1. In Windows Admin Center, under Tools, scroll down and select Virtual Machines.
2. The Inventory tab on the right lists all VMs available on the current server or the
cluster, and provides commands to manage individual VMs. You can:

View a list of the VMs running on the current server or cluster.


View the VM's state and host server if you are viewing VMs for a cluster. Also
view CPU and memory usage from the host perspective, including memory
pressure, memory demand and assigned memory, and the VM's uptime,
heartbeat status, and protection status (using Azure Site Recovery).
Create a new VM.
Delete, start, turn off, shut down, pause, resume, reset or rename a VM. Also
save the VM, delete a saved state, or create a checkpoint.
Change settings for a VM.
Connect to a VM console via the Hyper-V host.
Replicate a VM using Azure Site Recovery.
For operations that can be run on multiple VMs, such as Start, Shut down,
Save, Pause, Delete, or Reset, you can select multiple VMs and run the
operation once.

View VM details
You can view detailed information and performance charts for a specific VM from its
dedicated page.

1. Under Tools, scroll down and select Virtual machines.

2. Click the Inventory tab on the right, then select the VM. On the subsequent page,
you can do the following:

View live and historical data line charts for CPU, memory, network, IOPS and
IO throughput (historical data is only available for hyperconverged clusters)
View, create, apply, rename, and delete checkpoints.
View details for the virtual hard disk (.vhd) files, network adapters, and host
server.
View the state of the VM.
Save the VM, delete a saved state, export, or clone the VM.
Change settings for the VM.
Connect to the VM console using VMConnect via the Hyper-V host.
Replicate the VM using Azure Site Recovery.

View aggregate VM metrics


You can view resources usage and performance metrics for all VMs in your cluster.

1. Under Tools, scroll down and select Virtual machines.


2. The Summary tab on the right provides a holistic view of Hyper-V host resources
and performance for a selected server or cluster, including the following:

The number of VMs that are running, stopped, paused, and saved
Recent health alerts or Hyper-V event log events for clusters
CPU and memory usage with host vs guest breakdown
Live and historical data line charts for IOPS and I/O throughput for clusters

Change VM settings
There are a variety of settings that you can change for a VM.

7 Note

Some settings cannot be changed for a VM that is running and you will need to
stop the VM first.

1. Under Tools, scroll down and select Virtual machines.

2. Click the Inventory tab on the right, select the VM, then click Settings.
3. To change VM start/stop actions and general settings, select General and do the
following:

To change the VM name, enter it in the Name field.

To change default VM start/stop actions, select the appropriate settings from
the dropdown boxes.

To change time intervals for pausing or starting a VM, enter the appropriate
values in the fields shown.

4. Select Memory to change VM startup memory, dynamic memory range, memory
buffer percentage, and memory weight.

5. Select Processors to change the number of virtual processors, to enable nested
virtualization, or to enable simultaneous multithreading (SMT).

6. To change the size of an existing disk, modify the value in Size (GB). To add a new
virtual disk, select Disks and then select whether to create an empty virtual disk or
to use an existing virtual disk or ISO (.iso) image file. Click Browse and select the
path to the virtual disk or image file.

7. To add, remove, or change network adapter settings, select Networks and do the
following:

Select a virtual switch from the dropdown list.

Select one of the following for the isolation mode from the dropdown list:
Set to Default (None) if the VM is connected to the virtual switch in access
mode.
Set to VLAN if the VM is connected to the virtual switch over a VLAN.
Specify the VLAN identifier as well.
Set to Virtual Network (SDN) if the VM is part of an SDN virtual network.
Select a virtual network name, subnet, and specify the IP Address.
Optionally, select a network security group that can be applied to the VM.
Set to Logical Network (SDN) if the VM is part of an SDN logical network.
Select the logical network name, subnet, and specify the IP Address.
Optionally, select a network security group that can be applied to the VM.

To change additional settings for a network adapter, click Advanced to be
able to:
Select between dynamic or static MAC address type
Enable MAC address spoofing
Enable bandwidth management and specify the max/min range

8. Select Boot order to add boot devices or change the VM boot sequence.

9. Select Checkpoints to enable VM checkpoints, select checkpoint type, and specify
checkpoint file location.

7 Note
The Production checkpoint setting is recommended and uses backup
technology in the guest operating system to create data-consistent
checkpoints. The Standard setting uses VHD snapshots to create checkpoints
with application and service state.

10. Select Affinity rules to create an affinity rule for a VM. For more information on
creating affinity rules, see Create server and site affinity rules for VMs.

11. To change VM security settings, select Security and do the following:

Select Enable Secure Boot to help prevent unauthorized code from running
at boot time (recommended). Also select a Microsoft or open-source
template from the drop-down box.

For Template, select a security template to use.

Under Encryption Support, you can:

Select Enable Trusted Platform Module to be able to use a hardware
cryptographic service module.

Enable encryption of state and virtual machine migration traffic.

7 Note

Encryption support requires a key protector (KP) for the VM. If not
already present, selecting one of these options will generate a KP that
allows running the VM on this host.

Under Security Policy, select Enable Shielding for additional protection
options for the VM.

Move a VM to another server or cluster


You can easily move a virtual machine to another server or another cluster as follows:

1. Under Tools, scroll down and select Virtual machines.

2. Under the Inventory tab, select a VM from the list and select Manage > Move.

3. Choose a server from the list and select Move.

4. If you want to move both the VM and its storage, choose whether to move it to
another cluster or to another server in the same cluster.

5. If you want to move just the VM's storage, select either to move it to the same
path or select different paths for configuration, checkpoint, or smart paging.

Join a VM to a domain
You can easily join a VM to a domain as follows:

1. Under Tools, scroll down and select Virtual machines.


2. Under the Inventory tab, select a VM from the list and select Manage > Domain
join.
3. Enter the name of the domain to join to, along with the domain user name and
password.
4. Enter the VM user name and password.
5. When finished, click Join.

Clone a VM
You can easily clone a VM as follows:

1. Under Tools, scroll down and select Virtual machines.


2. Select the Inventory tab on the right. Choose a VM from the list and select
Manage > Clone.
3. Specify a name and path to the cloned VM.
4. Run Sysprep on your VM if you haven't already done so.

Import or export a VM
You can easily import or export a VM. The following procedure describes the import
process.

1. Under Tools, scroll down and select Virtual machines.


2. On the Inventory tab, select Add > Import.
3. Enter the folder name containing the VM or click Browse and select a folder.
4. Select the VM you want to import.
5. Create a unique ID for the VM if needed.
6. When finished, select Import.

For exporting a VM, the process is similar:

1. Under Tools, scroll down and select Virtual machines.


2. On the Inventory tab, select the VM to export in the list.
3. Select Manage > Export.
4. Enter the path to export the VM to.

View VM event logs


You can view VM event logs as follows:

1. Under Tools, scroll down and select Virtual machines.


2. On the Summary tab on the right, select View all events.
3. Select an event category and expand the view.

Connect to a VM by using Remote Desktop


Instead of using Windows Admin Center, you can also manage your VMs through a
Hyper-V host using a Remote Desktop Protocol (RDP) connection.

1. Under Tools, scroll down and select Virtual machines.

2. On the Inventory tab, choose a virtual machine from the list and select the
Connect > Connect or Connect > Download RDP file option. Both options use the
VMConnect tool to connect to the guest VM through the Hyper-V host and
require you to enter your administrator username and password credentials for
the Hyper-V host.

The Connect option connects to the VM using Remote Desktop in your web
browser.
The Download RDP file option downloads an .rdp file that you can open to
connect with the Remote Desktop Connection app (mstsc.exe).

Protect VMs with Azure Site Recovery


You can use Windows Admin Center to configure Azure Site Recovery and replicate your
on-premises VMs to Azure. This is an optional value-add service. To get started, see
Protect VMs using Azure Site Recovery.

Remove a VM and resources


To remove a VM and its resources, see Remove a VM.

Next steps
You can also create and manage VMs using Windows PowerShell. For more information,
see Manage VMs on Azure Stack HCI using Windows PowerShell.

See Create and manage Azure virtual networks for Windows virtual machines.

See Configure User Access Control and Permissions.


Manage VMs on Azure Stack HCI using Windows PowerShell
Article • 04/17/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2; Windows Server 2022,
Windows Server 2019

Windows PowerShell can be used to create and manage your virtual machines (VMs) on
Azure Stack HCI.

Typically, you manage VMs from a remote computer, rather than on a host server in a
cluster. This remote computer is called the management computer.

7 Note

When running PowerShell commands from a management computer, include the
-ComputerName parameter with the name of the host server you are managing.
NetBIOS names, IP addresses, and fully qualified domain names are allowable.

For the complete reference documentation for managing VMs using PowerShell, see
Hyper-V reference.

Create a VM
The New-VM cmdlet is used to create a new VM. For detailed usage, see the New-VM
reference documentation.

Here are the settings that you can specify when creating a new VM with an existing
virtual hard disk, where:

-Name is the name that you provide for the virtual machine that you're creating.

-MemoryStartupBytes is the amount of memory that is available to the virtual
machine at start up.

-BootDevice is the device that the virtual machine boots to when it starts. Typically
this is a virtual hard disk (VHD), an .iso file for DVD-based boot, or a network
adapter (NetworkAdapter) for network boot.

-VHDPath is the path to the virtual machine disk that you want to use.

-Path is the path to store the virtual machine configuration files.

-Generation is the virtual machine generation. Use generation 1 for VHD and
generation 2 for VHDX.

-SwitchName is the name of the virtual switch that you want the virtual machine to
use to connect to other virtual machines or the network. Get the name of the
virtual switch by using Get-VMSwitch. For example:
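
The following sketch lists the virtual switch names on Server1 (the server name simply
follows the article's earlier examples):

PowerShell

Get-VMSwitch -ComputerName Server1 | Format-Table Name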

The full command for creating a VM called VM1 is as follows:

PowerShell

New-VM -ComputerName Server1 -Name VM1 -MemoryStartupBytes <Memory> -BootDevice <BootDevice> -VHDPath <VHDPath> -Path <Path> -Generation <Generation> -SwitchName <Switch name>

The next example creates a Generation 2 virtual machine with 4 GB of memory. It boots
from the file VMs\Win10.vhdx in the current directory and uses the virtual switch
named ExternalSwitch. The virtual machine configuration files are stored in the folder
VMData.

PowerShell

New-VM -ComputerName Server1 -Name VM1 -MemoryStartupBytes 4GB -BootDevice VHD -VHDPath .\VMs\Win10.vhdx -Path .\VMData -Generation 2 -SwitchName ExternalSwitch

The following parameters are used to specify virtual hard disks.

To create a virtual machine with a new virtual hard disk, replace the -VHDPath
parameter from the example above with -NewVHDPath and add the
-NewVHDSizeBytes parameter as shown here:

PowerShell

New-VM -ComputerName Server1 -Name VM1 -MemoryStartupBytes 4GB -BootDevice VHD -NewVHDPath .\VMs\Win10.vhdx -Path .\VMData -NewVHDSizeBytes 20GB -Generation 2 -SwitchName ExternalSwitch

To create a virtual machine with a new virtual disk that boots to an operating system
image, see the PowerShell example in Create virtual machine walkthrough for Hyper-V
on Windows 10.
Get a list of VMs
The following example returns a list of all VMs on Server1.

PowerShell

Get-VM -ComputerName Server1

The following example returns a list of all running VMs on a server by adding a filter
using the Where-Object command. For more information, see Using the Where-Object
documentation.

PowerShell

Get-VM -ComputerName Server1 | Where-Object -Property State -eq "Running"

The next example returns a list of all shut-down VMs on the server.

PowerShell

Get-VM -ComputerName Server1 | Where-Object -Property State -eq "Off"

Start and stop a VM


Use the Start-VM and Stop-VM commands to start or stop a VM. For detailed
information, see the Start-VM and Stop-VM reference documentation.

The following example shows how to start a VM named VM1:

PowerShell

Start-VM -Name VM1 -ComputerName Server1

The following example shows how to shut down a VM named VM1:

PowerShell

Stop-VM -Name VM1 -ComputerName Server1

Move a VM
The Move-VM cmdlet moves a VM to a different server. For more information, see the
Move-VM reference documentation.

The following example shows how to move a VM to Server2 when the VM is stored on
an SMB share on Server1:

PowerShell

Move-VM -ComputerName Server1 -Name VM1 -DestinationHost Server2

The following example shows how to move a VM to Server2 from Server1 and move all
files associated with the VM to D:\VM_name on the remote computer:

PowerShell

Move-VM -ComputerName Server1 -Name VM1 -DestinationHost Server2 -IncludeStorage -DestinationStoragePath D:\VM_name

Import or export a VM
The Import-VM and Export-VM cmdlets import and export a VM. The following shows a
couple of examples. For more information, see the Import-VM and Export-VM reference
documentation.

The following example shows how to import a VM from its configuration file. The VM is
registered in-place, so its files are not copied:

PowerShell

Import-VM -ComputerName Server1 -Path 'C:\<vm export path>\2B91FEB3-F1E0-4FFF-B8BE-29CED892A95A.vmcx'

The following example exports a VM to the root of the D drive:

PowerShell

Export-VM -ComputerName Server1 -Name VM1 -Path D:\

Rename a VM
The Rename-VM cmdlet is used to rename a VM. For detailed information, see the
Rename-VM reference documentation.
The following example renames VM1 to VM2 and displays the renamed virtual machine:

PowerShell

Rename-VM -ComputerName Server1 -Name VM1 -NewName VM2

Create a VM checkpoint
The Checkpoint-VM cmdlet is used to create a checkpoint for a VM. For detailed
information, see the Checkpoint-VM reference documentation.

The following example creates a checkpoint named BeforeInstallingUpdates for the VM
named VM1:

PowerShell

Checkpoint-VM -ComputerName Server1 -Name VM1 -SnapshotName BeforeInstallingUpdates

Create a VHD for a VM


The New-VHD cmdlet is used to create a new VHD for a VM. For detailed information on
how to use it, see the New-VHD reference documentation.

The following example creates a dynamic virtual hard disk in VHDX format that is 10 GB
in size. The file name extension determines the format and the default type of dynamic
is used because no type is specified.

PowerShell

New-VHD -ComputerName Server1 -Path .\VMs\Data1.vhdx -SizeBytes 10GB
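
To attach the new disk to an existing VM, you could follow up with Add-VMHardDiskDrive;
this brief sketch reuses the example names from above:

PowerShell

# Attach the newly created virtual hard disk to VM1
Add-VMHardDiskDrive -ComputerName Server1 -VMName VM1 -Path .\VMs\Data1.vhdx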

Add a network adapter to a VM


The Add-VMNetworkAdapter cmdlet is used to add a virtual network adapter to a VM. The
following shows a couple of examples. For detailed information on how to use it, see the
Add-VMNetworkAdapter reference documentation.

The following example adds a virtual network adapter named Redmond NIC1 to a virtual
machine named VM1:
PowerShell

Add-VMNetworkAdapter -ComputerName Server1 -VMName VM1 -Name "Redmond NIC1"

This example adds a virtual network adapter to a virtual machine named VM1 and
connects it to a virtual switch named Network:

PowerShell

Add-VMNetworkAdapter -ComputerName Server1 -VMName VM1 -SwitchName Network

Create a virtual switch for a VM


The New-VMSwitch cmdlet is used to create a new virtual switch on a VM host. For detailed
information on how to use it, see the New-VMSwitch reference documentation.

The following example creates a new switch called "QoS switch", which binds to a
network adapter called Wired Ethernet Connection 3 and supports weight-based
minimum bandwidth.

PowerShell

New-VMSwitch "QoS Switch" -NetAdapterName "Wired Ethernet Connection 3" -


MinimumBandwidthMode Weight

Set memory for a VM


The Set-VMMemory cmdlet is