Azure Stack HCI
Azure Stack HCI is a hyperconverged infrastructure (HCI) cluster solution that hosts
virtualized Windows and Linux workloads and their storage in a hybrid environment that
combines on-premises infrastructure with Azure cloud services.
Azure Stack HCI is available for download with a free 60-day trial . You can either
purchase integrated systems from a Microsoft hardware partner with the Azure Stack
HCI operating system pre-installed, or buy validated nodes and install the operating
system yourself. See the Azure Stack HCI Catalog for hardware options. Use the Azure
Stack HCI sizing tool to estimate the hardware requirements for your Azure Stack HCI
solution. This sizing tool is currently in public preview and requires your personal
Microsoft account (MSA) credentials (not a corporate account) to sign in.
Azure Stack HCI is intended as a virtualization host, so most apps and server roles must
run inside of virtual machines (VMs). Exceptions include Hyper-V, Network Controller,
and other components required for Software Defined Networking (SDN) or for the
management and health of hosted VMs.
Azure Stack HCI is delivered as an Azure service and billed to an Azure subscription.
Azure hybrid services enhance the cluster with capabilities such as cloud-based
monitoring, Site Recovery, and VM backups, as well as a central view of all of your Azure
Stack HCI deployments in the Azure portal. You can manage the cluster with your
existing tools, including Windows Admin Center and PowerShell.
Each Azure Stack HCI cluster consists of between 1 and 16 physical, validated servers. All
clustered servers, even in a single-server cluster, share common configurations and resources
by leveraging the Windows Server Failover Clustering feature.
Using Azure Stack HCI and Windows Admin Center, you can create a hyperconverged
cluster that's easy to manage and uses Storage Spaces Direct for superior storage price-
performance. This includes the option to stretch the cluster across sites and use
automatic failover. See What's new in Azure Stack HCI, version 22H2 for details on the
latest functionality enhancements.
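The same kind of hyperconverged cluster can also be stood up with PowerShell. The following is a minimal sketch only; the cluster and node names are placeholders, and cluster validation, networking, and witness configuration are omitted.
PowerShell
# Sketch: create a failover cluster from validated nodes, then enable Storage Spaces Direct.
# "HCI-Cluster01", "Node1", and "Node2" are placeholder names.
New-Cluster -Name "HCI-Cluster01" -Node "Node1", "Node2" -NoStorage
Enable-ClusterStorageSpacesDirect -CimSession "HCI-Cluster01"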
It's familiar for Hyper-V and server admins, allowing them to leverage existing
virtualization and storage concepts and skills.
It works with existing data center processes and tools such as Microsoft System
Center, Active Directory, Group Policy, and PowerShell scripting.
It works with popular third-party backup, security, and monitoring tools.
Flexible hardware choices allow customers to choose the vendor with the best
service and support in their geography.
Joint support between Microsoft and the hardware vendor improves the customer
experience.
Seamless, full-stack updates make it easy to stay current.
A flexible and broad ecosystem gives IT professionals the flexibility they need to
build a solution that best meets their needs.
Use cases and descriptions:

Branch office and edge: For branch office and edge workloads, you can minimize infrastructure costs by deploying two-node clusters with inexpensive witness options, such as Cloud Witness or a USB drive-based file share witness. Another factor that contributes to the lower cost of two-node clusters is support for switchless networking, which relies on crossover cables between cluster nodes instead of more expensive high-speed switches. Customers can also centrally view remote Azure Stack HCI deployments in the Azure portal. To learn more about this workload, see Deploy branch office and edge on Azure Stack HCI.

Virtual desktop infrastructure (VDI): Azure Stack HCI clusters are well suited for large-scale VDI deployments with RDS or equivalent third-party offerings as the virtual desktop broker. Azure Stack HCI provides additional benefits by including centralized storage and enhanced security, which simplifies protecting user data and minimizes the risk of accidental or intentional data leaks. To learn more about this workload, see Deploy virtual desktop infrastructure (VDI) on Azure Stack HCI.

Highly performant SQL Server: Azure Stack HCI provides an additional layer of resiliency to highly available, mission-critical Always On availability groups-based deployments of SQL Server. This approach also offers extra benefits associated with the single-vendor approach, including simplified support and performance optimizations built into the underlying platform. To learn more about this workload, see Deploy SQL Server on Azure Stack HCI.

Trusted enterprise virtualization: Azure Stack HCI satisfies the trusted enterprise virtualization requirements through its built-in support for Virtualization-based Security (VBS). VBS relies on Hyper-V to implement the mechanism referred to as virtual secure mode, which forms a dedicated, isolated memory region within its guest VMs. By using programming techniques, it's possible to perform designated, security-sensitive operations in this dedicated memory region while blocking access to it from the host OS. This considerably limits potential vulnerability to kernel-based exploits. To learn more about this workload, see Deploy Trusted Enterprise Virtualization on Azure Stack HCI.

Azure Kubernetes Service (AKS): You can leverage Azure Stack HCI to host container-based deployments, which increases workload density and resource usage efficiency. Azure Stack HCI also further enhances the agility and resiliency inherent to Azure Kubernetes deployments. Azure Stack HCI manages automatic failover of VMs serving as Kubernetes cluster nodes in case of a localized failure of the underlying physical components. This supplements the high availability built into Kubernetes, which automatically restarts failed containers on either the same or another VM. To learn more about this workload, see What is Azure Kubernetes Service on Azure Stack HCI and Windows Server?.

Scale-out storage: Storage Spaces Direct is a core technology of Azure Stack HCI that uses industry-standard servers with locally attached drives to offer high availability, performance, and scalability. Using Storage Spaces Direct results in significant cost reductions compared with competing offers based on storage area network (SAN) or network-attached storage (NAS) technologies. These benefits result from an innovative design and a wide range of enhancements, such as persistent read/write cache drives, mirror-accelerated parity, nested resiliency, and deduplication.

Disaster recovery for virtualized workloads: An Azure Stack HCI stretched cluster provides automatic failover of virtualized workloads to a secondary site following a primary site failure. Synchronous replication ensures crash consistency of VM disks.

Data center consolidation and modernization: Refreshing and consolidating aging virtualization hosts with Azure Stack HCI can improve scalability and make your environment easier to manage and secure. It's also an opportunity to retire legacy SAN storage to reduce footprint and total cost of ownership. Operations and systems administration are simplified with unified tools and interfaces and a single point of support.

Run Azure services on-premises: Azure Arc allows you to run Azure services anywhere. This allows you to build consistent hybrid and multicloud application architectures by using Azure services that can run in Azure, on-premises, at the edge, or at other cloud providers. Azure Arc enabled services allow you to run Azure data services and Azure application services such as Azure App Service, Functions, Logic Apps, Event Grid, and API Management anywhere to support hybrid workloads. To learn more, see Azure Arc overview.
Demo of using Microsoft Azure with Azure Stack HCI
For an end-to-end example of using Microsoft Azure to manage apps and infrastructure
at the Edge using Azure Arc, Azure Kubernetes Service, and Azure Stack HCI, see the
Retail edge transformation with Azure hybrid demo.
Using a fictional customer, inspired directly by real customers, you will see how to
deploy Kubernetes, set up GitOps, deploy VMs, use Azure Monitor and drill into a
hardware failure, all without leaving the Azure portal.
https://www.youtube-nocookie.com/embed/2gKx3IySlAY
This video includes preview functionality; it shows real product functionality, but in a
closely controlled environment.
After you register your Azure Stack HCI cluster with Azure, you can use the Azure portal
initially for:
Monitoring: View all of your Azure Stack HCI clusters in a single, global view where
you can group them by resource group and tag them.
Billing: Pay for Azure Stack HCI through your Azure subscription.
For more details on the cloud service components of Azure Stack HCI, see Azure Stack
HCI hybrid capabilities with Azure services.
One or more servers from the Azure Stack HCI Catalog , purchased from your
preferred Microsoft hardware partner.
An Azure subscription .
Operating system licenses for your workload VMs – for example, Windows Server.
See Activate Windows Server VMs.
An internet connection for each server in the cluster that can connect via HTTPS
outbound traffic to well-known Azure endpoints at least every 30 days. See Azure
connectivity requirements for more information.
For clusters stretched across sites:
At least four servers (two in each site)
At least one 1 Gb connection between sites (a 25 Gb RDMA connection is
preferred)
An average latency of 5 ms round trip between sites if you want to do
synchronous replication where writes occur simultaneously in both sites.
If you plan to use SDN, you'll need a virtual hard disk (VHD) for the Azure Stack
HCI operating system to create Network Controller VMs (see Plan to deploy
Network Controller).
Make sure your hardware meets the System requirements and that your network meets
the physical network and host network requirements for Azure Stack HCI.
For Azure Kubernetes Service on Azure Stack HCI and Windows Server requirements, see
AKS requirements on Azure Stack HCI.
Azure Stack HCI is priced on a per core basis on your on-premises servers. For current
pricing, see Azure Stack HCI pricing .
Visit the Azure Stack HCI solutions page or browse the Azure Stack HCI Catalog to
view Azure Stack HCI solutions from Microsoft partners such as ASUS, Blue Chip,
DataON, Dell EMC, Fujitsu, HPE, Hitachi, Lenovo, NEC, primeLine Solutions, QCT,
SecureGUARD, and Supermicro.
Some Microsoft partners are developing software that extends the capabilities of Azure
Stack HCI while allowing IT admins to use familiar tools. To learn more, see Utility
applications for Azure Stack HCI.
Next steps
Download Azure Stack HCI
Create an Azure Stack HCI cluster and register it with Azure
Use Azure Stack HCI with Windows Admin Center
Compare Azure Stack HCI to Windows Server
Compare Azure Stack HCI to Azure Stack Hub
Azure Stack HCI foundations learning path
What's new in Azure Stack HCI, version
22H2
Article • 05/30/2023
Applies to: Azure Stack HCI, version 22H2 and Supplemental Package
This article lists the various features and improvements that are available in Azure Stack
HCI, version 22H2. This article also describes the Azure Stack HCI, Supplemental Package
that can be deployed in conjunction with Azure Stack HCI, version 22H2 OS.
Azure Stack HCI, version 22H2 is the latest version of the operating system available for
the Azure Stack HCI solution and focuses on Network ATC v2 improvements, storage
replication compression, Hyper-V live migration, and more. Additionally, a preview
version of Azure Stack HCI, Supplemental Package, is now available that can be
deployed on servers running the English version of the Azure Stack HCI, version 22H2
OS.
You can also join the Azure Stack HCI preview channel to test out features for future
versions of the Azure Stack HCI operating system. For more information, see Join the
Azure Stack HCI preview channel.
The following sections briefly describe the various features and enhancements in Azure
Stack HCI, Supplemental Package and in Azure Stack HCI, version 22H2.
Important
When you try out this new deployment tool, make sure that you do not run
production workloads on systems deployed with the Supplemental Package while
it's in preview, even though the core Azure Stack HCI 22H2 operating system is
generally available. Microsoft Customer Support will supply support services during
the preview, but the service level agreements available at GA do not apply.
1. Go to Download Azure Stack HCI 22H2 and fill out and submit a trial form.
To learn more about the new deployment methods, see Deployment overview.
A tailored security baseline with over 200 security settings configured and
enforced with a security drift control mechanism that ensures the cluster always
starts and remains in a known good security state.
The security baseline enables you to closely meet the Center for Internet Security
(CIS) Benchmark, Defense Information Systems Agency Security Technical
Implementation Guides (DISA STIG), Common Criteria, and Federal Information
Processing Standards (FIPS) requirements for the OS and Azure Compute Security
baselines.
For more information, see Security baseline settings for Azure Stack HCI.
Improved security posture achieved through a stronger set of protocols and cipher
suites enabled by default.
Out-of-box protection for data and network with SMB signing and BitLocker
encryption for OS and Cluster Shared Volumes. For more information, see
BitLocker encryption for Azure Stack HCI.
For new deployments using the supplemental package, the environment checker
automatically validates internet connectivity, hardware, identity and networking on all
the nodes of your Azure Stack HCI cluster. The tool also returns a Pass/Fail status for
each test, and saves a log file and a detailed report file.
To get started, you can download this free tool here . For more information, see Assess
your environment for deployment readiness.
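As a rough illustration, the standalone checks can also be run from PowerShell. The module and cmdlet names below are assumptions based on the published Environment Checker tooling; verify them against the linked article.
PowerShell
# Sketch: install the Environment Checker module and run the connectivity validation on this node.
Install-Module -Name AzStackHci.EnvironmentChecker -Repository PSGallery
Invoke-AzStackHciConnectivityValidation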
Network symmetry. Network ATC automatically checks for and validates network
symmetry across all adapters (on each node) in the same intent - specifically the
make, model, speed, and configuration of your selected adapters.
Contextual cluster network naming. Network ATC understands how you'll use
cluster networks and names them more appropriately.
Live Migration optimization. Network ATC intelligently manages:
Maximum simultaneous live migrations - Network ATC ensures that the
maximum recommended value is configured and maintained across all cluster
nodes.
Best live migration network - Network ATC determines the best network for live
migration and automatically configures your system.
Best live migration transport - Network ATC selects the best algorithm for SMB,
compression, and TCP given your network configuration.
Maximum SMB (RDMA) bandwidth - If SMB (RDMA) is used, Network ATC
determines the maximum bandwidth reserved for live migration to ensure that
there's enough bandwidth for Storage Spaces Direct.
Proxy configuration. Network ATC can configure all server nodes with the same
proxy information as needed for your environment. This action provides one-time
configuration for all current and future server nodes.
Stretched cluster support. Network ATC configures all storage adapters used by
Storage Replica in stretched cluster environments. However, since such adapters
need to route across subnets, Network ATC can't assign any IP addresses to them,
so you'll still need to assign these addresses yourself.
Post-deployment VLAN modification. You can use the new Set-NetIntent cmdlet
in Network ATC to modify VLAN settings just as you would if you were using the
Add-NetIntent cmdlet. No need to remove and then add the intents again when
changing VLANs.
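For illustration, a minimal Network ATC flow that ties these capabilities together might look like the following sketch. The intent name, adapter names, and VLAN IDs are placeholders, and the exact Set-NetIntent parameters should be confirmed against the Network ATC documentation.
PowerShell
# Sketch: create a converged intent for management, compute, and storage on two adapters,
# then change the storage VLANs later without removing and re-adding the intent.
Add-NetIntent -Name Converged -Management -Compute -Storage -AdapterName "pNIC01", "pNIC02"
Set-NetIntent -Name Converged -StorageVlans 711, 712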
There are no changes to the way you create replica groups and partnerships. The only
change is a new parameter that can be used with the existing Storage Replica cmdlets.
You specify compression when the group and the partnership are created. Use the
following cmdlets to specify compression:
PowerShell
New-SRGroup -EnableCompression
New-SRPartnership -EnableCompression
All the other commands and steps remain the same. These changes aren't in Windows
Admin Center at this time and will be added in a subsequent release.
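For context, a fuller hypothetical example of creating a compressed replication group and partnership might look like the following sketch; the server names, volume letters, and group names are placeholders.
PowerShell
# Sketch: create source and destination replication groups with compression, then pair them.
New-SRGroup -ComputerName "Server1" -Name "RG01" -VolumeName "D:" -LogVolumeName "E:" -EnableCompression
New-SRGroup -ComputerName "Server2" -Name "RG02" -VolumeName "D:" -LogVolumeName "E:" -EnableCompression
New-SRPartnership -SourceComputerName "Server1" -SourceRGName "RG01" `
    -DestinationComputerName "Server2" -DestinationRGName "RG02" -EnableCompression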
For more information, see Partition and share GPU with virtual machines on Azure Stack
HCI.
For more information, see Convert fixed to thin provisioned volumes on your Azure
Stack HCI.
For more information, see Scale out single server on your Azure Stack HCI.
Tag-based segmentation
In this release, you can secure your application workload virtual machines (VMs) from
external and lateral threats with custom tags of your choice. Assign custom tags to
classify your VMs, and then apply Network Security Groups (NSGs) based on those tags
to restrict communication to and from external and internal sources. For example, to
prevent your SQL Server VMs from communicating with your web server VMs, simply
tag the corresponding VMs with SQL and Web tags. You can then create an NSG to
prevent Web tag from communicating with SQL tag.
For more information, see Configure network security groups with Windows Admin
Center.
For more information, see Azure Hybrid Benefit for Azure Stack HCI.
You can now use the Azure portal or the Azure CLI to easily add and manage VM images
and then use those images to create Azure Arc VMs. This feature works with your
existing cluster running Azure Stack HCI, version 21H2 or later.
Applies to: Azure Stack HCI, versions 22H2 and 21H2; Windows Server 2022
This article explains key differences between Azure Stack HCI and Windows Server and
provides guidance about when to use each. Both products are actively supported and
maintained by Microsoft. Many organizations choose to deploy both as they are
intended for different and complementary purposes.
The best virtualization host to modernize your infrastructure, either for existing
workloads in your core datacenter or emerging requirements for branch office and
edge locations.
Easy extensibility to the cloud, with a regular stream of innovations from your
Azure subscription and a consistent set of tools and experiences.
When using Azure Stack HCI, run all of your workloads inside virtual machines
or containers, not directly on the cluster. Azure Stack HCI isn't licensed for
clients to connect directly to it using Client Access Licenses (CALs).
For information about licensing Windows Server VMs running on an Azure Stack HCI
cluster, see Activate Windows Server VMs.
How they compare (Azure Stack HCI vs. Windows Server):

Legal: Azure Stack HCI is covered under your Microsoft customer agreement or online subscription agreement. Windows Server has its own end-user license agreement.

Licensing: Azure Stack HCI is billed to your Azure subscription. Windows Server has its own paid license.

Runs in VMs: Azure Stack HCI runs in a VM for evaluation only; it's intended as a host operating system. Windows Server runs in VMs in the cloud or on premises.

Hardware: Azure Stack HCI runs on any of more than 200 pre-validated solutions from the Azure Stack HCI Catalog. Windows Server runs on any hardware with the "Certified for Windows Server" logo; see the Windows Server Catalog.

Lifecycle policy: Azure Stack HCI is always up to date with the latest features; you have up to six months to install updates. Windows Server uses the Long-Term Servicing Channel (LTSC) of the Windows Server servicing channels.

Free Extended Security Updates (ESUs) for Windows Server and SQL 2008/R2 and 2012/R2: Azure Stack HCI: Yes. Windows Server: No (requires purchasing an Extended Security Updates (ESU) license key and manually applying it to every VM).

Azure portal > Windows Admin Center integration (preview): Azure Stack HCI: Yes. Windows Server: Azure VMs only (requires manually installing the Azure Arc Connected Machine agent on every machine).

Price structure: Azure Stack HCI: per core, per month. Windows Server: varies, usually per core.

Price: Azure Stack HCI: per core, per month. Windows Server: see Pricing and licensing for Windows Server 2022.
Next steps
Compare Azure Stack HCI to Azure Stack Hub
Compare Azure Stack HCI to Azure Stack
Hub
Article • 04/20/2023
Applies to: Azure Stack HCI, versions 22H2 and 21H2; Azure Stack Hub
As your organization digitally transforms, you may find you can move faster by using
public cloud services to build on modern architectures and refresh legacy apps.
However, for reasons that include technological and regulatory obstacles, many
workloads must remain on-premises. Use this table to help determine which Microsoft
hybrid cloud strategy provides what you need where you need it, delivering cloud
innovation for workloads wherever they are.
Azure Stack HCI: Azure services in your datacenter using Azure Arc. Connect your datacenter to Azure services and the Azure control plane.
Azure Stack Hub: Azure services in your datacenter in disconnected scenarios. Run your own instance of Azure Resource Manager.

Use cases and features:

Lower server footprint: Use Azure Stack HCI for the minimum footprint for remote offices and branches. Start with just two servers and switchless back-to-back networking for peak simplicity and affordability. Azure Stack Hub requires a minimum of four servers and its own network switches.

Hyper-V support: Use Azure Stack HCI to virtualize classic enterprise apps like Exchange, SharePoint, and SQL Server, and to virtualize Windows Server roles like File Server, DNS, DHCP, IIS, and Active Directory. It provides unrestricted access to Hyper-V features. Azure Stack Hub constrains Hyper-V configurability and feature set for consistency with Azure.

Software-defined infrastructure stack: Use Azure Stack HCI to use software-defined infrastructure in place of aging storage arrays or network appliances, without major rearchitecture. Built-in Hyper-V, Storage Spaces Direct, and Software-Defined Networking (SDN) are directly accessible and manageable. Azure Stack Hub doesn't expose these infrastructural technologies.

Platform-as-a-Service (PaaS): Azure Stack HCI runs Platform-as-a-Service (PaaS) services on-premises with Azure Arc, and offers the ability to host Azure Kubernetes Service. You can also run Azure Virtual Desktop, Azure Arc-enabled data services, including SQL Managed Instance and PostgreSQL Hyperscale (preview), and App Service, Functions, and Logic Apps on Azure Arc (preview) on Azure Stack HCI. Use Azure Stack Hub to develop and run apps that rely on PaaS services like Web Apps, Functions, or Event Hubs on-premises in a disconnected scenario. These services run on Azure Stack Hub exactly like they do in Azure, providing a consistent hybrid development and runtime environment.

Multi-tenancy support: Azure Stack HCI doesn't natively enforce or provide for multi-tenancy. Use Azure Stack Hub for self-service Infrastructure-as-a-Service (IaaS), with strong isolation and precise usage tracking and chargeback for multiple colocated tenants. Ideal for service providers and enterprise private clouds. Templates from the Azure Marketplace.

DevOps tools: Azure Stack HCI doesn't natively include any DevOps tooling. Use Azure Stack Hub to modernize app deployment and operation with DevOps practices like infrastructure as code, continuous integration and continuous deployment (CI/CD), and convenient features like Azure-consistent VM extensions. Ideal for Dev and DevOps teams.
Next steps
Compare Azure Stack HCI and Windows Server
Azure Hybrid Benefit for Azure Stack
HCI
Article • 04/17/2023
This article describes Azure Hybrid Benefit and how to use it for Azure Stack HCI.
Azure Hybrid Benefit is a program that helps you reduce the costs of running
workloads in the cloud. With Azure Hybrid Benefit for Azure Stack HCI, you can
maximize the value of your on-premises licenses and modernize your existing
infrastructure to Azure Stack HCI at no additional cost.
This benefit waives the Azure Stack HCI host service fee and Windows Server guest
subscription on your cluster. Other costs associated with Azure Stack HCI, such as Azure
services, are billed as per normal. For details about pricing with Azure Hybrid Benefit,
see Azure Stack HCI pricing .
Tip
You can maximize cost savings by also using Azure Hybrid Benefit for AKS. For
more information, see Azure Hybrid Benefits for AKS.
Make sure your Azure Stack HCI cluster is installed with the following:
Version 22H2 or later; or
Version 21H2 with at least the September 13, 2022 security update
KB5017316 or later
Make sure that all servers in your cluster are online and registered with Azure
Make sure that your cluster has Windows Server Datacenter licenses with active
Software Assurance. For other licensing prerequisites, see Licensing prerequisites
Make sure you have permission to write to the Azure Stack HCI resource. This is
included if you're assigned the contributor or owner role on your subscription
1. Use your Microsoft Azure credentials to sign in to the Azure portal at this URL:
https://portal.azure.com .
5. In the Activate Azure Hybrid Benefit pane on the right-hand side, confirm the
designated cluster and the number of core licenses you wish to allocate, and select
Activate again to confirm.
Note
You can't deactivate Azure Hybrid Benefit for your cluster after activation.
Proceed after you have confirmed the changes.
6. When Azure Hybrid Benefit successfully activates for your cluster, the Azure Stack
HCI host fee is waived for the cluster.
Important
8. In the Activate Azure Hybrid Benefit pane on the right-hand side, check the
details and then select Activate to confirm. Upon activation, licenses take a few
minutes to apply and set up automatic VM activation (AVMA) on the cluster.
You can perform an inventory of your clusters through the Azure portal and Azure
Resource Graph as described in the following section.
Azure portal
1. In your Azure Stack HCI cluster resource page, under Settings, select
Configuration.
2. Under Azure Hybrid Benefit, the status shows as:
List all Azure Stack HCI clusters with Azure Hybrid Benefit
in a subscription
You can list all Azure Stack HCI clusters with Azure Hybrid Benefit in a subscription using
PowerShell and Azure CLI.
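One possible approach, not shown in this article, is to query Azure Resource Graph from PowerShell. The property path used for the benefit status below is an assumption and may differ; treat this as a sketch.
PowerShell
# Sketch: list Azure Stack HCI cluster resources with Azure Resource Graph (requires the Az.ResourceGraph module).
# The softwareAssuranceProperties path is an assumed property name.
Search-AzGraph -Query @"
resources
| where type =~ 'microsoft.azurestackhci/clusters'
| project name, resourceGroup, status = properties.softwareAssuranceProperties.softwareAssuranceStatus
"@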
Error
Failed to activate Azure Hybrid Benefit. We couldn’t find your Software Assurance contract.
Suggested solution
This error can occur if you have a new Software Assurance contract or if you have set up
this Azure subscription recently, but your information isn't updated in the portal yet. If
you get this error, reach out to us at [email protected] and share the following
information:
Next steps
For related information, see also:
The Azure Stack HCI FAQ provides information on Azure Stack HCI connectivity with the
cloud, and how Azure Stack HCI relates to Windows Server and Azure Stack Hub.
Because Azure Stack HCI doesn't store customer data in the cloud, business continuity
disaster recovery (BCDR) for the customer's on-premises data is defined and controlled
by the customer. To set up your own site-to-site replication using a stretched cluster, see
Stretched clusters overview.
To learn more about the diagnostic data we collect to keep Azure Stack HCI secure, up
to date, and working as expected, see Azure Stack HCI data collection and Data
residency in Azure .
With Azure Stack HCI, you run virtualized workloads on-premises, managed with
Windows Admin Center and familiar Windows Server tools. You can also connect to
Azure for hybrid scenarios like cloud-based Site Recovery, monitoring, and others.
PowerShell
# Sketch: the OS name and version can be retrieved with Get-ComputerInfo, for example:
Get-ComputerInfo -Property "OsName", "OSDisplayVersion"

OsName                    OSDisplayVersion
------                    ----------------
Microsoft Azure Stack HCI 20H2
Azure Stack HCI release information
Article • 07/11/2023
Feature updates for Azure Stack HCI are released periodically to enhance the customer
experience. To keep your Azure Stack HCI service in a supported state, you have up to
six months to install updates, but we recommend installing updates as they are released.
Microsoft provides monthly quality and security updates for each supported version of
Azure Stack HCI and also provides yearly feature updates.
This article provides a list of the available updates for each version of Azure Stack HCI.
Release notes
For information about what's included in each version of Azure Stack HCI, see the
release notes:
Next steps
Azure Stack HCI Lifecycle
Get started with Azure Stack HCI and
Windows Admin Center
Article • 04/17/2023
This topic provides instructions for connecting to an Azure Stack HCI cluster, and for
monitoring cluster and storage performance. If you haven't set up a cluster yet,
download Azure Stack HCI and see Quickstart: Create an Azure Stack HCI cluster and
register it with Azure for instructions.
Note
If you install Windows Admin Center on a server, tasks that require CredSSP, such as
cluster creation and installing updates and extensions, require using an account that's a
member of the Gateway Administrators group on the Windows Admin Center server. For
more information, see the first two sections of Configure User Access Control and
Permissions.
3. Type the name of the cluster to manage and click Add. The cluster will be added to
your connection list on the overview page.
4. Under All Connections, click the name of the cluster you just added. Windows
Admin Center will start Cluster Manager and take you directly to the Windows
Admin Center dashboard for that cluster.
Virtual machines
To view a summary of virtual machines that are running on the cluster, click Virtual
machines from the Tools menu at the left.
For a complete inventory of virtual machines running on the cluster along with their
state, host server, CPU usage, memory pressure, memory demand, assigned memory,
and uptime, click Inventory at the top of the page.
Servers
To view a summary of the servers in the cluster, click Servers from the Tools menu at the
left.
For a complete inventory of servers in the cluster including their status, uptime,
manufacturer, model, and serial number, click Inventory at the top of the page.
Volumes
To view a summary of volumes on the cluster, click Volumes from the Tools menu at the
left.
For a complete inventory of volumes on the cluster including their status, file system,
resiliency, size, storage usage, and IOPS, click Inventory at the top of the page.
Drives
To view a summary of drives in the cluster, click Drives from the Tools menu at the left.
For a complete inventory of drives in the cluster along with their serial number, status,
model, size, type, use, location, server, and capacity, click Inventory at the top of the
page.
Virtual switches
To view the settings for a virtual switch in the cluster, click Virtual switches from the
Tools menu at the left, then click the name of the virtual switch you want to display the
settings for. Windows Admin Center will display the network adapters associated with
the virtual switch, including their IP addresses, connection state, link speed, and MAC
address.
Add counters with the Performance Monitor
tool
Use the Performance Monitor tool to view and compare performance counters for
Windows, apps, or devices in real-time.
3. If creating a new workspace, click the Add counter button and select one or more
source servers to monitor, or select the entire cluster.
4. Select the object and instance you wish to monitor, as well as the counter and
graph type to view dynamic performance information.
5. Save the workspace by choosing Save > Save As from the top menu.
Next steps
To monitor performance history on your Azure Stack HCI clusters, see also:
In this quickstart, you'll learn how to deploy a two-server, single-site Azure Stack HCI
cluster and register it with Azure. For multisite deployments, see the Stretched clusters
overview.
Purchase two servers from the Azure Stack HCI Catalog through your preferred
Microsoft hardware partner with the Azure Stack HCI operating system pre-
installed. Review the system requirements to make sure the hardware you select
will support the workloads you plan to run on the cluster. We recommend using a
system with high-speed network adapters that use iWARP for simple configuration.
Create a user account that’s a member of the local Administrators group on each
server.
Important
The Create Cluster wizard has five sections, each with several steps.
1. Get started. In this section, you'll check the prerequisites, add servers, join a
domain, install required features and updates, and restart the servers.
2. Networking. This section of the wizard verifies that the correct networking
adapters are enabled and disables any you're not using. You'll select management
adapters, set up a virtual switch configuration, and define your network by
supplying IP addresses.
3. Clustering. This section validates that your servers have a consistent configuration
and are suitable for clustering, and creates the actual cluster.
4. Storage. Next, you'll clean and check drives, validate your storage, and enable
Storage Spaces Direct.
5. SDN. You can skip Section 5 because we won't be using Software Defined
Networking (SDN) for this cluster.
If you enabled the CredSSP protocol in the wizard, you'll want to disable it on each
server for security purposes.
1. In Windows Admin Center, under All connections, select the cluster you just
created.
2. Under Tools, select Servers.
3. In the right pane, select the first server in the cluster.
4. Under Overview, select Disable CredSSP. You will see that the red CredSSP
ENABLED banner at the top disappears.
5. Repeat steps 3 and 4 for the second server in the cluster.
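If you prefer PowerShell, the rough equivalent is to disable the CredSSP server role on each node, as in the sketch below; the node names are placeholders, and the wizard's Disable CredSSP button may clean up additional settings.
PowerShell
# Sketch: disable CredSSP on each cluster node (node names are placeholders).
Invoke-Command -ComputerName "Server1", "Server2" -ScriptBlock {
    Disable-WSManCredSSP -Role Server
}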
Set up a cluster witness
Setting up a witness resource is required so that if one of the servers in the cluster goes
offline, it does not cause the other node to become unavailable as well. For this
quickstart, we'll use an SMB file share located on another server as a witness. You may
prefer to use an Azure cloud witness, provided all server nodes in the cluster have a
reliable internet connection. For more information about witness options, see Set up a
cluster witness.
1. In Windows Admin Center, select Cluster Manager from the top drop-down arrow.
2. Under Cluster connections, select the cluster.
3. Under Tools, select Settings.
4. In the right pane, select Witness.
5. For Witness type, select File share witness.
6. Specify a file share path such as \\servername.domain.com\Witness$ and supply
credentials if needed.
7. Select Save.
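Alternatively, a file share witness can be configured with PowerShell, roughly as follows; the cluster name and share path are placeholders.
PowerShell
# Sketch: configure a file share witness for the cluster (names are placeholders).
Set-ClusterQuorum -Cluster "HCI-Cluster01" -FileShareWitness "\\servername.domain.com\Witness$"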
Next steps
In this quickstart, you created an Azure Stack HCI cluster and registered it with Azure.
You are now ready to Create volumes and then Create virtual machines.
Tutorial: Create a VM-based lab for
Azure Stack HCI
Article • 07/11/2022
In this tutorial, you use MSLab PowerShell scripts to automate the process of creating a
private forest to run Azure Stack HCI on virtual machines (VMs) using nested
virtualization.
) Important
Because Azure Stack HCI is intended as a virtualization host where you run all of
your workloads in VMs, nested virtualization is not supported in production
environments. Use nested virtualization for testing and evaluation purposes only.
" Create a private forest with a domain controller and a Windows Admin Center
server
" Deploy multiple VMs running Azure Stack HCI for clustering
Once completed, you'll be able to create an Azure Stack HCI cluster using the VMs
you've deployed and use the private lab for prototyping, testing, troubleshooting, or
evaluation.
Prerequisites
To complete this tutorial, you need:
Select Download Azure Stack HCI, which will trigger an ISO download.
Tip
Notice that most of the script is commented out; you will only need to execute a few
lines. Follow these steps to customize the script so it produces the desired output.
Alternatively, you can simply copy the code block at the end of this section and replace
the appropriate lines in LabConfig.
1. Add the following to the first uncommented line of LabConfig.ps1 to tell the script
where to find the ISOs, enable the guest service interface, and enable DNS
forwarding on the host: ServerISOFolder="C:\lab\isos" ;
EnableGuestServiceInterface=$true ; UseHostDnsAsForwarder=$true
3. If you plan to create multiple labs on the same server, change Prefix = 'MSLab-' to
use a new Prefix name, such as Lab1-. We'll stick with the default MSLab- prefix for
this tutorial.
4. Comment out the default ForEach-Object line for Windows Server and remove the
hashtag before the ForEach-Object line for Azure Stack HCI so that the script will
create Azure Stack HCI VMs instead of Windows Server VMs for the cluster nodes.
5. By default, the script creates a four-node cluster. If you want to change the number
of VMs in the cluster, replace 1..4 with 1..2 or 1..8, for example. Remember, the
more VMs in your cluster, the greater the memory requirements on your host
server.
8. Add the following line to configure a Windows Admin Center management server
running the Windows Server Core operating system to add a second NIC so you
can connect to Windows Admin Center from outside the private network:
$LabConfig.VMs += @{ VMName = 'AdminCenter' ; ParentVHD =
'Win2019Core_G2.vhdx'; MGMTNICs=2}
The changes to LabConfig.ps1 made in the steps above are reflected in a code block like the following sketch.
PowerShell
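This is a sketch based on the steps above and on MSLab's default LabConfig structure; values such as the admin account, password, and parent VHD file names are assumptions and may differ in your copy of the script.
PowerShell
# Sketch of LabConfig.ps1 after the edits described above (defaults are assumptions from MSLab).
$LabConfig = @{
    DomainAdminName='LabAdmin'; AdminPassword='LS1setup!'; Prefix='MSLab-'; DCEdition='4';
    Internet=$true; AdditionalNetworksConfig=@(); VMs=@();
    ServerISOFolder="C:\lab\isos"; EnableGuestServiceInterface=$true; UseHostDnsAsForwarder=$true
}

# Azure Stack HCI VMs for the cluster nodes (four by default; adjust 1..4 as described above).
1..4 | ForEach-Object {
    $LabConfig.VMs += @{ VMName="AzSHCI$_"; Configuration='S2D'; ParentVHD='AzSHCI_G2.vhdx';
        HDDNumber=4; HDDSize=1TB; MemoryStartupBytes=1GB; VMProcessorCount=4; vTPM=$true }
}

# Windows Admin Center management server with a second NIC.
$LabConfig.VMs += @{ VMName = 'AdminCenter' ; ParentVHD = 'Win2019Core_G2.vhdx'; MGMTNICs=2}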
Note
You might need to change the script execution policy on your system to allow
unsigned scripts by running this PowerShell cmdlet as administrator:
Set-ExecutionPolicy -ExecutionPolicy Unrestricted
You'll be asked to select telemetry levels; choose B for Basic or F for Full. The script will
also ask for the ISO file for Windows Server 2019. Point it to the location you copied the
file to (C:\Labs\Isos). If there are multiple ISO files in the folder, you'll be asked to select
the ISO that you want to use. Select the Windows Server ISO. If you're asked to format a
drive, select N.
Warning
Don't select the Azure Stack HCI ISO - you'll create the Azure Stack HCI parent disk
(VHD) in the next section.
Creating the parent disks can take as long as 1-2 hours, although it can take much less
time. When complete, the script will ask you if unnecessary files should be removed. If
you select Y, it will remove the first two scripts because they're no longer needed. Press
Enter to continue.
Install any required security updates and restart the domain controller VM if needed.
This may take a while, and you may need to restart the VM multiple times.
To install Microsoft Edge, connect to the domain controller VM from Hyper-V Manager
and launch a PowerShell session as administrator. Then run the following code to install
and start Microsoft Edge.
PowerShell
#Install Edge (the download URL is an example placeholder; substitute the current Microsoft Edge Enterprise MSI link)
Start-BitsTransfer -Source "https://aka.ms/edge-msi" -Destination "$env:USERPROFILE\Downloads\MicrosoftEdge.msi"
#Start install
Start-Process -FilePath msiexec.exe -ArgumentList "/i $env:USERPROFILE\Downloads\MicrosoftEdge.msi /quiet" -Wait
#Start Edge
start microsoft-edge:
Right-click on the file, choose Edit with PowerShell, and change the value of
$GatewayServerName in the first line to match the name of your AdminCenter VM
without the prefix (for example, AdminCenter). Save the script and run it by right-
clicking on the file and selecting Run with PowerShell.
Your browser may warn you that it's an unsafe or insecure connection, but it's OK to
proceed.
Take note of the IP addresses of the network adapters on the AdminCenter VM. Append
:443 to the IP address of the externally accessible NIC, and you should be able to log on
to Windows Admin Center and create and manage your cluster from an external web
browser, such as: https://10.217.XX.XXX:443
Install operating system updates on the Azure Stack HCI
VMs
Start the Azure Stack HCI VMs using Hyper-V Manager on the virtualization host.
Connect to each VM, and download and install security updates using Sconfig on each
of them. You may need to restart the VMs multiple times. (You can skip this step if you'd
rather install the OS updates later as part of the cluster creation wizard).
Right-click on the PreviewWorkaround.ps1 file and select Edit with PowerShell. Change
the $domainName, $domainAdmin, and $nodeName variables if they don't match,
such as:
PowerShell
$domainName = "corp.contoso.com"
$domainAdmin = "$domainName\labadmin"
$nodeName = "MSLab-AzSHCI1","MSLab-AzSHCI2","MSLab-AzSHCI3","MSLab-AzSHCI4"
Save your changes, then open a PowerShell session as administrator and run the script:
PowerShell
PS C:\Lab> ./PreviewWorkaround.ps1
The script will take some time to run, especially if you've created lots of VMs. You should
see the message "MSLab-AzSHCI1 MSLab-AzSHCI2 is now online. Proceeding to install
Hyper-V PowerShell." If the script appears to freeze after displaying the message, press
Enter to wake it up. When it's done, you should see: "MSLab-AzSHCI1 MSLab-AzSHCI2 is
now online. Proceed to the next step ..."
Clean up resources
If you selected Y to clean up unnecessary files and folders, then cleanup is already done.
If you prefer to do it manually, navigate to C:\Labs and delete any unneeded files.
Next steps
You're now ready to proceed to the Cluster Creation Wizard.
This article discusses the system requirements for servers, storage, and networking for
Azure Stack HCI. Note that if you purchase Azure Stack HCI Integrated System solution
hardware from the Azure Stack HCI Catalog , you can skip to the Networking
requirements since the hardware already adheres to server and storage requirements.
Azure requirements
Here are the Azure requirements for your Azure Stack HCI cluster:
Azure subscription: If you don't already have an Azure account, create one . You
can use an existing subscription of any type:
Free account with Azure credits for students or Visual Studio subscribers .
Pay-as-you-go subscription with credit card.
Subscription obtained through an Enterprise Agreement (EA).
Subscription obtained through the Cloud Solution Provider (CSP) program.
Azure permissions: Make sure that you're assigned the following roles in your
Azure subscription: User Access Administrator and Contributor. For information on
how to assign permissions, see Assign Azure permissions for registration.
Azure regions
The Azure Stack HCI service is used for registration, billing, and management. It is
currently supported in the following regions:
Azure public
Currently, Azure Arc Resource Bridge supports only the following regions for Azure
Stack HCI registration:
East US
West Europe
Server requirements
A standard Azure Stack HCI cluster requires a minimum of one server and a maximum of
16 servers.
Keep the following in mind for various types of Azure Stack HCI deployments:
It's required that all servers be the same manufacturer and model, using 64-bit
Intel Nehalem grade, AMD EPYC grade or later compatible processors with
second-level address translation (SLAT). A second-generation Intel Xeon Scalable
processor is required to support Intel Optane DC persistent memory. Processors
must be at least 1.4 GHz and compatible with the x64 instruction set.
Make sure that the servers are equipped with at least 32 GB of RAM per node to
accommodate the server operating system, VMs, and other apps or workloads. In
addition, allow 4 GB of RAM per terabyte (TB) of cache drive capacity on each
server for Storage Spaces Direct metadata.
Ensure all the servers are in the same time zone as your local domain controller.
You can use any boot device supported by Windows Server, which now includes
SATADOM . RAID 1 mirror is not required but is supported for boot. A 200 GB
minimum size is recommended.
Storage requirements
Azure Stack HCI works with direct-attached SATA, SAS, NVMe, or persistent memory
drives that are physically attached to just one server each.
Every server in the cluster should have the same types of drives and the same
number of each type. It's also recommended (but not required) that the drives be
the same size and model. Drives can be internal to the server or in an external
enclosure that is connected to just one server. To learn more, see Drive symmetry
considerations.
Each server in the cluster should have dedicated volumes for logs, with log storage
at least as fast as data storage. Stretched clusters require at least two volumes: one
for replicated data and one for log data.
SCSI Enclosure Services (SES) is required for slot mapping and identification. Each
external enclosure must present a unique identifier (Unique ID).
Important
NOT SUPPORTED: RAID controller cards or SAN (Fibre Channel, iSCSI, FCoE)
storage, shared SAS enclosures connected to multiple servers, or any form of
multi-path IO (MPIO) where drives are accessible by multiple paths. Host-bus
adapter (HBA) cards must implement simple pass-through mode for any
storage devices used for Storage Spaces Direct.
Networking requirements
An Azure Stack HCI cluster requires a reliable high-bandwidth, low-latency network
connection between each server node.
Verify at least one network adapter is available and dedicated for cluster
management.
Verify that physical switches in your network are configured to allow traffic on any
VLANs you will use.
For host networking considerations and requirements, see Host network requirements.
Stretched clusters require servers be deployed at two separate sites. The sites can be in
different countries/regions, different cities, different floors, or different rooms. For
synchronous replication, you must have a network between servers with enough
bandwidth to contain your IO write workload and an average of 5 ms round trip latency
or lower. Asynchronous replication doesn't have a latency recommendation.
Make sure the host servers have at least 50-100 GB of free space to create the
Network Controller VMs.
You must download a virtual hard disk of the Azure Stack HCI operating system to
use for the SDN infrastructure VMs (Network Controller, Software Load Balancer,
Gateway). For download instructions, see Download the VHDX file.
For more information about preparing for using SDN in Azure Stack HCI, see Plan a
Software Defined Network infrastructure and Plan to deploy Network Controller.
Note
Ensure that Windows Admin Center and your domain controller are not installed
on the same instance. Also, ensure that the domain controller is not hosted on the
Azure Stack HCI cluster or one of the nodes in the cluster.
If you're running Windows Admin Center on a server (instead of a local PC), use an
account that's a member of the Gateway Administrators group, or the local
Administrators group on the Windows Admin Center server.
Verify that your Windows Admin Center management computer is joined to the
same Active Directory domain in which you'll create the cluster, or joined to a fully
trusted domain. The servers that you'll cluster don't need to belong to the domain
yet; they can be added to the domain during cluster creation.
Resource limits: maximum volume size is 64 TB.
Next steps
For related information, see also:
Choose drives
Storage Spaces Direct hardware requirements
Physical network requirements for
Azure Stack HCI
Article • 04/19/2023
This article discusses physical (fabric) network considerations and requirements for
Azure Stack HCI, particularly for network switches.
Important
While other network switches using technologies and protocols not listed here may
work, Microsoft cannot guarantee they will work with Azure Stack HCI and may be
unable to assist in troubleshooting issues that occur.
When purchasing network switches, contact your switch vendor and ensure that the
devices meet the Azure Stack HCI requirements for your specified role types. The
following vendors (in alphabetical order) have confirmed that their switches support
Azure Stack HCI requirements:
Overview
Click on a vendor tab to see validated switches for each of the Azure Stack HCI
traffic types. These network classifications can be found here.
Important
We update these lists as we're informed of changes by network switch vendors.
If your switch isn't included, contact your switch vendor to ensure that your switch
model and the version of the switch's operating system supports the requirements
in the next section.
Note
Network adapters used for compute, storage, and management traffic require
Ethernet. For more information, see Host network requirements.
For 22H2, the switch requirements include Virtual LANs for all traffic types, along with Enhanced Transmission Selection and Maximum Transmission Unit support for the applicable traffic types.
Note
A minimum of 10 VLANs is required.
Note
The maximum transmission unit (MTU) is the largest size frame or packet that can
be transmitted across a data link. A range of 1514 - 9174 is required for SDN
encapsulation.
Ethernet switches used for Azure Stack HCI SDN compute traffic must support
Border Gateway Protocol (BGP). BGP is a standard routing protocol used to
exchange routing and reachability information between two or more networks.
Routes are automatically added to the route table of all subnets with BGP
propagation enabled. This is required to enable tenant workloads with SDN and
dynamic peering. RFC 4271: Border Gateway Protocol 4
Ethernet switches used for Azure Stack HCI management traffic must support DHCP
relay agent. The DHCP relay agent is any TCP/IP host which is used to forward
requests and replies between the DHCP server and client when the server is present
on a different network. It is required for PXE boot services. RFC 3046: DHCPv4 or
RFC 6148: DHCPv4
Azure Stack HCI can function in various data center architectures including 2-tier (Spine-
Leaf) and 3-tier (Core-Aggregation-Access). This section refers more to concepts from
the Spine-Leaf topology that is commonly used with workloads in hyper-converged
infrastructure such as Azure Stack HCI.
Network models
Network traffic can be classified by its direction. Traditional Storage Area Network (SAN)
environments are heavily North-South where traffic flows from a compute tier to a
storage tier across a Layer-3 (IP) boundary. Hyperconverged infrastructure is more
heavily East-West where a substantial portion of traffic stays within a Layer-2 (VLAN)
boundary.
Important
We highly recommend that all cluster nodes in a site are physically located in the
same rack and connected to the same top-of-rack (ToR) switches.
North-South traffic:
Traffic flows out of a ToR switch to the spine or in from the spine to a ToR switch.
Traffic leaves the physical rack or crosses a Layer-3 boundary (IP).
Includes management (PowerShell, Windows Admin Center), compute (VM), and
inter-site stretched cluster traffic.
Uses an Ethernet switch for connectivity to the physical network.
East-West traffic:
Traffic remains within the ToR switches and Layer-2 boundary (VLAN).
Includes storage traffic or Live Migration traffic between nodes in the same cluster
and (if using a stretched cluster) within the same site.
May use an Ethernet switch (switched) or a direct (switchless) connection, as
described in the next two sections.
Using switches
North-South traffic requires the use of switches. Besides using an Ethernet switch that
supports the required protocols for Azure Stack HCI, the most important aspect is the
proper sizing of the network fabric.
It is imperative to understand the "non-blocking" fabric bandwidth that your Ethernet
switches can support and that you minimize (or preferably eliminate) oversubscription of
the network.
Work with your network vendor or network support team to ensure your network
switches have been properly sized for the workload you are intending to run.
Using switchless
Azure Stack HCI supports switchless (direct) connections for East-West traffic for all
cluster sizes so long as each node in the cluster has a redundant connection to every
node in the cluster. This is called a "full-mesh" connection.
Note
The benefits of switchless deployments diminish with clusters larger than three
nodes due to the number of network adapters required.
Advantages of switchless connections
No switch purchase is necessary for East-West traffic. A switch is required for
North-South traffic. This may result in lower capital expenditure (CAPEX) costs but
is dependent on the number of nodes in the cluster.
Because there is no switch, configuration is limited to the host, which may reduce
the potential number of configuration steps needed. This value diminishes as the
cluster size increases.
Next steps
Learn about network adapter and host requirements. See Host network
requirements.
Brush up on failover clustering basics. See Failover Clustering Networking Basics .
Brush up on using SET. See Remote Direct Memory Access (RDMA) and Switch
Embedded Teaming (SET).
For deployment, see Create a cluster using Windows Admin Center.
For deployment, see Create a cluster using Windows PowerShell.
Host network requirements for Azure Stack
HCI
Article • 04/17/2023
This topic discusses host networking considerations and requirements for Azure Stack HCI. For
information on datacenter architectures and the physical connections between servers, see Physical
network requirements.
For information on how to simplify host networking using Network ATC, see Simplify host
networking with Network ATC.
Management traffic: Traffic to or from outside the local cluster. For example, storage replica
traffic or traffic used by the administrator for management of the cluster like Remote
Desktop, Windows Admin Center, Active Directory, etc.
Compute traffic: Traffic originating from or destined to a virtual machine (VM).
Storage traffic: Traffic using Server Message Block (SMB), for example Storage Spaces Direct
or SMB-based live migration. This traffic is layer-2 traffic and is not routable.
Important
Storage replica uses non-RDMA based SMB traffic. This and the directional nature of the
traffic (North-South) makes it closely aligned to that of "management" traffic listed above,
similar to that of a traditional file share.
For more information about this role-based NIC qualification, please see this link .
Important
Using an adapter outside of its qualified traffic type is not supported.
Note
The highest qualification for any adapter in our ecosystem will contain the Management,
Compute Premium, and Storage Premium qualifications.
Driver Requirements
Inbox drivers are not supported for use with Azure Stack HCI. To identify whether your adapter is using an
inbox driver, check its DriverProvider property. An adapter is using an inbox driver if the DriverProvider
property is Microsoft.
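A minimal sketch of that check, relying on the DriverProvider property that Get-NetAdapter exposes:
PowerShell
# Sketch: list adapters and their driver provider; "Microsoft" indicates an inbox driver.
Get-NetAdapter | Select-Object Name, InterfaceDescription, DriverProvider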
Dynamic VMMQ
All network adapters with the Compute (Premium) qualification support Dynamic VMMQ. Dynamic
VMMQ requires the use of Switch Embedded Teaming.
Dynamic VMMQ is an intelligent, receive-side technology. It builds upon its predecessors of Virtual
Machine Queue (VMQ), Virtual Receive Side Scaling (vRSS), and VMMQ, to provide three primary
improvements:
For more information on Dynamic VMMQ, see the blog post Synthetic accelerations .
RDMA
RDMA is a network stack offload to the network adapter. It allows SMB storage traffic to bypass
the operating system for processing.
RDMA enables high-throughput, low-latency networking, using minimal host CPU resources. These
host CPU resources can then be used to run additional VMs or containers.
All adapters with Storage (Standard) or Storage (Premium) qualification support host-side RDMA.
For more information on using RDMA with guest workloads, see the "Guest RDMA" section later in
this article.
Azure Stack HCI supports RDMA with either the Internet Wide Area RDMA Protocol (iWARP) or
RDMA over Converged Ethernet (RoCE) protocol implementations.
Important
RDMA adapters only work with other RDMA adapters that implement the same RDMA
protocol (iWARP or RoCE).
Not all network adapters from vendors support RDMA. The following table lists those vendors (in
alphabetical order) that offer certified RDMA adapters. However, there are hardware vendors not
included in this list that also support RDMA. See the Windows Server Catalog to find adapters
with the Storage (Standard) or Storage (Premium) qualification which require RDMA support.
Vendor (iWARP / RoCE support):
Broadcom: iWARP No, RoCE Yes
Nvidia: iWARP No, RoCE Yes
For more information on deploying RDMA for the host, we highly recommend you use Network
ATC. For information on manual deployment see the SDN GitHub repo .
iWARP
iWARP uses Transmission Control Protocol (TCP), and can be optionally enhanced with Priority-
based Flow Control (PFC) and Enhanced Transmission Service (ETS).
RoCE
RoCE uses User Datagram Protocol (UDP), and requires PFC and ETS to provide reliability.
Guest RDMA
Guest RDMA enables SMB workloads for VMs to gain the same benefits of using RDMA on hosts.
For more information, download the document from the SDN GitHub repo .
Switch Embedded Teaming (SET)
SET is a software-based teaming technology that has been included in the Windows Server
operating system since Windows Server 2016. SET is the only teaming technology supported by
Azure Stack HCI. SET works well with compute, storage, and management traffic and is supported
with up to eight adapters in the same team.
) Important
Azure Stack HCI doesn’t support NIC teaming with the older Load Balancing/Failover (LBFO).
See the blog post Teaming in Azure Stack HCI for more information on LBFO in Azure Stack
HCI.
SET is important for Azure Stack HCI because it's the only teaming technology that enables:
SET requires the use of symmetric (identical) adapters. Symmetric network adapters are those that
have the same:
make (vendor)
model (version)
speed (throughput)
configuration
In 22H2, Network ATC will automatically detect and inform you if the adapters you've chosen are
asymmetric. The easiest way to manually identify if adapters are symmetric is if the speeds and
interface descriptions are exact matches. They can deviate only in the numeral listed in the
description. Use the Get-NetAdapterAdvancedProperty cmdlet to ensure the configuration
reported lists the same property values.
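For example, a comparison along these lines (adapter names are placeholders) surfaces any
advanced-property differences between two candidate adapters:

PowerShell

# Compare the advanced properties reported by two adapters that you intend to team with SET.
Get-NetAdapterAdvancedProperty -Name "pNIC01", "pNIC02" |
    Sort-Object RegistryKeyword |
    Format-Table Name, DisplayName, DisplayValue -AutoSize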
See the following table for an example of the interface descriptions deviating only by numeral (#):
7 Note
SET supports only switch-independent configuration by using either Dynamic or Hyper-V Port
load-balancing algorithms. For best performance, Hyper-V Port is recommended for use on all
NICs that operate at or above 10 Gbps. Network ATC makes all the required configurations for
SET.
For detailed information on how to deploy RDMA, download the document from the SDN GitHub
repo .
RoCE-based Azure Stack HCI implementations require the configuration of three PFC traffic classes,
including the default traffic class, across the fabric and all hosts.
Cluster traffic class
This traffic class ensures that there's enough bandwidth reserved for cluster heartbeats:
Required: Yes
PFC-enabled: No
Recommended traffic priority: Priority 7
Recommended bandwidth reservation:
10 GbE or lower RDMA networks = 2 percent
25 GbE or higher RDMA networks = 1 percent
RDMA traffic class
Required: Yes
PFC-enabled: Yes
Recommended traffic priority: Priority 3 or 4
Recommended bandwidth reservation: 50 percent
Default traffic class
This traffic class carries all other traffic not defined in the cluster or RDMA traffic classes, including
VM traffic and management traffic:
7 Note
We recommend using multiple subnets and VLANs to separate storage traffic in Azure Stack
HCI.
Consider the following example of a four-node cluster. Each server has two storage ports (left and
right side). Because each adapter is on the same subnet and VLAN, SMB Multichannel will spread
connections across all available links. Therefore, the left-side port on the first server (192.168.1.1)
will make a connection to the left-side port on the second server (192.168.1.2). The right-side port
on the first server (192.168.1.12) will connect to the right-side port on the second server. Similar
connections are established for the third and fourth servers.
However, this creates unnecessary connections and causes congestion at the interlink (multi-
chassis link aggregation group or MC-LAG) that connects the ToR switches (marked with Xs). See
the following diagram:
The recommended approach is to use separate subnets and VLANs for each set of adapters. In the
following diagram, the right-hand ports now use subnet 192.168.2.x /24 and VLAN2. This allows
traffic on the left-side ports to remain on TOR1 and the traffic on the right-side ports to remain on
TOR2.
Because this use case poses the most constraints, it represents a good baseline. However,
considering the permutations for the number of adapters and speeds, this should be considered an
example and not a support requirement.
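After deployment, you can confirm how SMB Multichannel is spreading storage connections with a
check along these lines (a sketch):

PowerShell

# Show the active SMB Multichannel connections used by the Storage Bus Layer (SBL).
Get-SmbMultichannelConnection -SmbInstance SBL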
Storage Bus Layer (SBL), Cluster Shared Volume (CSV), and Hyper-V (Live Migration) traffic:
Use the same physical adapters.
Use SMB.
If the available bandwidth for Live Migration is >= 5 Gbps, and the network adapters
are capable, use RDMA. Use the following cmdlet to do so:
Powershell
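# Sketch: configure Live Migration to use SMB (and therefore RDMA, when the adapters support it).
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB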
If the available bandwidth for Live Migration is < 5 Gbps, use compression to reduce
blackout times. Use the following cmdlet to do so:
Powershell
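# Sketch: configure Live Migration to use compression when less than 5 Gbps is available.
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression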
If you're using RDMA for Live Migration traffic, use an SMB bandwidth limit to ensure that Live
Migration traffic can't consume the entire bandwidth allocated to the RDMA traffic class. Be
careful, because this cmdlet takes its value in bytes per second (Bps), whereas network adapters
are listed in bits per second (bps). Use the following cmdlet to set a bandwidth limit of 6 Gbps,
for example:
Powershell
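# Sketch: cap Live Migration SMB traffic at 6 Gbps. The value is given in bytes per second,
# so 6 Gbps is expressed as 750 MB per second. The SMB Bandwidth Limit feature must be installed
# for this cmdlet to be available.
Set-SmbBandwidthLimit -Category LiveMigration -BytesPerSecond 750MB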
7 Note
NIC speed | Teamed bandwidth | SMB bandwidth reservation** | SBL/CSV % | SBL/CSV bandwidth | Live Migration % | Max Live Migration bandwidth | Heartbeat % | Heartbeat bandwidth
25 Gbps | 50 Gbps | 25 Gbps | 70% | 17.5 Gbps | 29% | 7.25 Gbps | 1% | 250 Mbps
50 Gbps | 100 Gbps | 50 Gbps | 70% | 35 Gbps | 29% | 14.5 Gbps | 1% | 500 Mbps
100 Gbps | 200 Gbps | 100 Gbps | 70% | 70 Gbps | 29% | 29 Gbps | 1% | 1 Gbps
200 Gbps | 400 Gbps | 200 Gbps | 70% | 140 Gbps | 29% | 58 Gbps | 1% | 2 Gbps
* Use compression rather than RDMA, because the bandwidth allocation for Live Migration traffic is
<5 Gbps.
Stretched clusters
Stretched clusters provide disaster recovery that spans multiple datacenters. In its simplest form, a
stretched Azure Stack HCI cluster network looks like this:
RDMA is limited to a single site, and isn't supported across different sites or subnets.
Servers in the same site must reside in the same rack and Layer-2 boundary.
Host communication between sites must cross a Layer-3 boundary; stretched Layer-2
topologies aren't supported.
Have enough bandwidth to run the workloads at the other site. In the event of a failover, the
alternate site will need to run all traffic. We recommend that you provision sites at 50 percent
of their available network capacity. This isn't a requirement, however, if you are able to
tolerate lower performance during a failover.
Replication between sites (north/south traffic) can use the same physical NICs as the local
storage (east/west traffic). If you're using the same physical adapters, these adapters must be
teamed with SET. The adapters must also have additional virtual NICs provisioned for
routable traffic between sites.
Can be physical or virtual (host vNIC). If adapters are virtual, you must provision one vNIC
in its own subnet and VLAN per physical NIC.
Must be on their own subnet and VLAN that can route between sites.
The following shows the details for the example stretched cluster configuration.
7 Note
Your exact configuration, including NIC names, IP addresses, and VLANs, might be different
than what is shown. This is used only as a reference configuration that can be adapted to your
environment.
Node name vNIC name Physical NIC (mapped) VLAN IP and subnet Traffic scope
Node name vNIC name Physical NIC (mapped) IP and subnet Traffic scope
Next steps
Learn about network switch and physical network requirements. See Physical network
requirements.
Learn how to simplify host networking using Network ATC. See Simplify host networking with
Network ATC.
Brush up on failover clustering networking basics .
For deployment, see Create a cluster using Windows Admin Center.
For deployment, see Create a cluster using Windows PowerShell.
Firewall requirements for Azure Stack
HCI
Article • 06/28/2023
This article provides guidance on how to configure firewalls for the Azure Stack HCI
operating system. It includes firewall requirements for outbound endpoints and internal
rules and ports. The article also provides information on how to use Azure service tags
with Microsoft Defender firewall.
If your network uses a proxy server for internet access, see Configure proxy settings for
Azure Stack HCI.
Azure Stack HCI needs to periodically connect to Azure. Access is limited only to:
) Important
Azure Stack HCI doesn’t support HTTPS inspection. Make sure that HTTPS
inspection is disabled along your networking path for Azure Stack HCI to prevent
any connectivity errors.
As shown in the following diagram, Azure Stack HCI can potentially access Azure through more than
one firewall.
This article describes how to optionally use a highly locked-down firewall configuration
to block all traffic to all destinations except those included in your allowlist.
7 Note
The Azure Stack HCI firewall rules are the minimum endpoints required for HciSvc
connectivity, and don't contain wildcards. However, the following table currently
contains wildcard URLs, which may be updated into precise endpoints in the future.
Service | URL | Port | Notes
Azure Stack HCI | azurestackhci.azurefd.net | 443 | Previous URL for Dataplane. This URL was recently changed; customers who registered their cluster using this old URL must allowlist it as well.
Arc For Servers | aka.ms | 443 | For resolving the download script during installation.
Arc For Servers | guestnotificationservice.azure.com | 443 | For the notification service for extension and connectivity scenarios.
Arc For Servers | *.his.arc.azure.com | 443 | For metadata and hybrid identity services.
Arc For Servers | *.guestnotificationservice.azure.com | 443 | For the notification service for extension and connectivity scenarios.
Arc For Servers | azgn*.servicebus.windows.net | 443 | For the notification service for extension and connectivity scenarios.
Arc For Servers | *.servicebus.windows.net | 443 | For Windows Admin Center and SSH scenarios.
Arc For Servers | *.blob.core.windows.net | 443 | For the download source for Azure Arc-enabled servers extensions.
For a comprehensive list of all the firewall URLs, download the firewall URLs
spreadsheet .
When using the Cluster Creation wizard in Windows Admin Center to create the cluster,
the wizard automatically opens the appropriate firewall ports on each server in the
cluster for Failover Clustering, Hyper-V, and Storage Replica. If you're using a different
firewall on each server, open the ports as described in the following sections:
Description | Action | Source | Destination | Protocol | Port
Allow inbound/outbound traffic to and from the Azure Stack HCI service on cluster servers | Allow | Cluster servers | Cluster servers | TCP | 30301
Provide access to Azure and Microsoft Update | Allow | Windows Admin Center | Azure Stack HCI | TCP | 445
Use Windows Remote Management (WinRM) 2.0 for HTTP connections to run commands | Allow | Windows Admin Center | Azure Stack HCI | TCP | 5985
Use WinRM 2.0 for HTTPS connections to run commands on remote Windows servers | Allow | Windows Admin Center | Azure Stack HCI | TCP | 5986
7 Note
While installing Windows Admin Center, if you select the Use WinRM over HTTPS
only setting, then port 5986 is required.
Failover Clustering
Ensure that the following firewall rules are configured in your on-premises firewall for
Failover Clustering.
These rules include a dynamic port range above port 5000.
7 Note
The management system includes any computer from which you plan to administer
the cluster, using tools such as Windows Admin Center, Windows PowerShell or
System Center Virtual Machine Manager.
Hyper-V
Ensure that the following firewall rules are configured in your on-premises firewall for
Hyper-V.
These rules include a dynamic port range above port 5000.
7 Note
Open up a range of ports above port 5000 to allow RPC dynamic port allocation.
Ports below 5000 may already be in use by other applications and could cause
conflicts with DCOM applications. Previous experience shows that a minimum of
100 ports should be opened, because several system services rely on these RPC
ports to communicate with each other. For more information, see How to
configure RPC dynamic port allocation to work with firewalls.
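For example, a rule along these lines (a sketch; the rule name is a placeholder and the exact
range is up to you) opens 100 TCP ports above port 5000:

PowerShell

# Open an inbound range of 100 ports above port 5000 for RPC dynamic port allocation.
New-NetFirewallRule -DisplayName "RPC dynamic ports above 5000" `
    -Direction Inbound -Action Allow -Protocol TCP -LocalPort 5001-5100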
Description | Action | Source | Destination | Protocol | Port
Allow Server Message Block (SMB) protocol | Allow | Stretched cluster servers | Stretched cluster servers | TCP | 445
Allow Web Services-Management (WS-MAN) | Allow | Stretched cluster servers | Stretched cluster servers | TCP | 5985
Update Microsoft Defender Firewall with Azure service tags (PowerShell cmdlet)
1. Download the JSON file from the following resource to the target computer
running the operating system: Azure IP Ranges and Service Tags – Public Cloud .
3. Get the list of IP address ranges for a given service tag, such as the
"AzureResourceManager" service tag:
PowerShell
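# Sketch, assuming the JSON file from step 1 was saved as ServiceTags_Public.json in the current folder.
$serviceTags = Get-Content -Path .\ServiceTags_Public.json -Raw | ConvertFrom-Json
$ipRanges = ($serviceTags.values |
    Where-Object { $_.name -eq 'AzureResourceManager' }).properties.addressPrefixes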
4. Import the list of IP addresses to your external corporate firewall, if you're using an
allowlist with it.
5. Create a firewall rule for each server in the cluster to allow outbound 443 (HTTPS)
traffic to the list of IP address ranges:
PowerShell
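# Sketch: allow outbound HTTPS (TCP 443) to the address ranges collected in the previous step.
# The rule name is a placeholder; run this on each server in the cluster.
New-NetFirewallRule -DisplayName "Allow outbound to AzureResourceManager" `
    -Direction Outbound -Action Allow -Protocol TCP -RemotePort 443 `
    -RemoteAddress $ipRanges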
Next steps
For more information, see also:
The Windows Firewall and WinRM 2.0 ports section of Installation and
configuration for Windows Remote Management
Network reference patterns overview
for Azure Stack HCI
Article • 12/12/2022
In this article, you'll get an overview of deploying network reference patterns on Azure Stack
HCI.
Two storage ports dedicated for storage traffic intent. The RDMA NIC is optional
for single-server deployments.
If switchless is used, configuration is limited to the host, which might reduce the number of
configuration steps needed. However, this benefit diminishes as the cluster size increases.
Switchless has the lowest level of resiliency, and it comes with extra complexity and
planning if it needs to be scaled up after the initial deployment. Storage connectivity needs
to be enabled when adding the second node, which requires defining the physical connectivity
needed between nodes.
As the number of nodes in the cluster grows beyond two nodes, the cost of
network adapters could exceed the cost of using network switches.
For more information, see Physical network requirements for Azure Stack HCI.
Firewall requirements
Azure Stack HCI requires periodic connectivity to Azure. If your organization's outbound
firewall is restricted, you need to allow the required outbound endpoints and open the required
internal rules and ports. There are required and recommended endpoints
for the Azure Stack HCI core components, which include cluster creation, registration
and billing, Microsoft Update, and cloud cluster witness.
See the firewall requirements for a complete list of endpoints. Make sure to include
these URLs in your allowlist. Proper network ports need to be opened between all
server nodes, both within a site and between sites (for stretched clusters).
With Azure Stack HCI, the connectivity validator of the Environment Checker tool checks the
outbound connectivity requirements by default during deployment.
Additionally, you can run the Environment Checker tool standalone before, during, or
after deployment to evaluate the outbound connectivity of your environment.
A best practice is to have all relevant endpoints in a data file that can be accessed by the
environment checker tool. The same file can also be shared with your firewall
administrator to open up the necessary ports and URLs.
Next steps
Choose a network pattern to review.
Azure Stack HCI network deployment
patterns
Article • 05/31/2023
This article describes a set of network reference patterns to architect, deploy, and
configure Azure Stack HCI using either one or two physical hosts. Depending on your
needs or scenarios, you can go directly to your pattern of interest. Each pattern is
described as a standalone entity and includes all the network components for specific
scenarios.
Go to storage switchless, single TOR switch
Go to storage switchless, two TOR switches
Next steps
Download Azure Stack HCI
Review single-server storage
deployment network reference pattern
for Azure Stack HCI
Article • 12/12/2022
In this article, you'll learn about the single-server storage network reference pattern that
you can use to deploy your Azure Stack HCI solution. The information in this article will
also help you determine if this configuration is viable for your deployment planning
needs. This article is targeted towards the IT administrators who deploy and manage
Azure Stack HCI in their datacenters.
For information on other network patterns, see Azure Stack HCI network deployment
patterns.
Introduction
Single-server deployments provide cost and space benefits while helping to modernize
your infrastructure and bring Azure hybrid computing to locations that can tolerate the
resiliency of a single server. Azure Stack HCI running on a single server behaves similarly
to Azure Stack HCI on a multi-node cluster: it brings native Azure Arc integration and the
ability to add servers to scale out the cluster, and it includes the same Azure benefits.
It also supports the same workloads, such as Azure Virtual Desktop (AVD) and AKS
hybrid, and is supported and billed the same way.
Scenarios
Use the single-server storage pattern in the following scenarios:
Facilities that can tolerate a lower level of resiliency. Consider implementing this
pattern whenever your location, or the service provided by this pattern, can tolerate a
lower level of resiliency without impacting your business.
Network security features such as microsegmentation and Quality of Service (QoS) don't
require extra configuration for the firewall device, as they're implemented at the virtual
network adapter layer. For more information, see Microsegmentation with Azure Stack
HCI .
Network | Management & compute | Storage | BMC
Link speed | At least 1 Gbps; 10 Gbps recommended | At least 1 Gbps; 10 Gbps recommended | Check with hardware manufacturer
Ports and aggregation | Two teamed ports | Optional to allow adding a second server; disconnected ports | One port
Storage intent
The storage intent has the following characteristics:
Follow these steps to create a network intent for this reference pattern:
PowerShell
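# Sketch using the Network ATC Add-NetIntent cmdlet; adapter names are placeholders.
# Management and compute share the two teamed ports.
Add-NetIntent -Name MgmtCompute -Management -Compute -AdapterName "pNIC01", "pNIC02"
# Optional storage intent on the dedicated storage ports (the RDMA NICs are optional on a single server).
Add-NetIntent -Name Storage -Storage -AdapterName "pNIC03", "pNIC04"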
For more information, see Deploy host networking: Compute and management intent.
OOB network
The Out of Band (OOB) network is dedicated to supporting the "lights-out" server
management interface also known as the baseboard management controller (BMC).
Each BMC interface connects to a customer-supplied switch. The BMC is used to
automate PXE boot scenarios.
The management network requires access to the BMC interface using Intelligent
Platform Management Interface (IPMI) User Datagram Protocol (UDP) port 623.
The OOB network is isolated from compute workloads and is optional for non-solution-
based deployments.
Management VLAN
All physical compute hosts require access to the management logical network. For IP
address planning, each physical compute host must have at least one IP address
assigned from the management logical network.
A DHCP server can automatically assign IP addresses for the management network, or
you can manually assign static IP addresses. When DHCP is the preferred IP assignment
method, we recommend that you use DHCP reservations without expiration.
Native VLAN - you aren't required to supply VLAN IDs. This is required for
solution-based installations.
Gateways use Border Gateway Protocol to advertise GRE endpoints and establish point-
to-point connections. SDN deployment creates a default gateway pool that supports all
connection types. Within this pool, you can specify how many gateways are reserved on
standby in case an active gateway fails.
The management network supports all traffic used for management of the cluster,
including Remote Desktop, Windows Admin Center, and Active Directory.
For more information, see Plan an SDN infrastructure: Management and HNV Provider.
Compute VLANs
In some scenarios, you don’t need to use SDN Virtual Networks with Virtual Extensible
LAN (VXLAN) encapsulation. Instead, you can use traditional VLANs to isolate your
tenant workloads. Those VLANs are configured on the TOR switch's port in trunk mode.
When connecting new VMs to these VLANs, the corresponding VLAN tag is defined on
the virtual network adapter.
HNV Provider Address (PA) network
The Hyper-V Network Virtualization (HNV) Provider Address (PA) network serves as the
underlying physical network for East/West (internal-internal) tenant traffic, North/South
(external-internal) tenant traffic, and to exchange BGP peering information with the
physical network. This network is only required when there's a need for deploying virtual
networks using VXLAN encapsulation for another layer of isolation and for network
multitenancy.
For more information, see Plan an SDN infrastructure: Management and HNV Provider.
For more information, see Understand the usage of virtual networks and VLANs.
Virtual networks
Network virtualization provides virtual networks to VMs similar to how server
virtualization (hypervisor) provides VMs to the operating system. Network virtualization
decouples virtual networks from the physical network infrastructure and removes the
constraints of VLAN and hierarchical IP address assignment from VM provisioning. Such
flexibility makes it easy for you to move to Infrastructure as a Service (IaaS) clouds, and makes
it efficient for hosters and datacenter administrators to manage their infrastructure while
maintaining the necessary multi-tenant isolation, security requirements, and overlapping
VM IP addresses.
Traffic between VMs in the peered virtual networks gets routed through the
backbone infrastructure through private IP addresses only. The communication
between the virtual networks doesn't require public Internet or gateways.
A low-latency, high-bandwidth connection between resources in different virtual
networks.
The ability for resources in one virtual network to communicate with resources in a
different virtual network.
No downtime to resources in either virtual network when creating the peering.
Using SLB, you can scale out your load balancing capabilities using SLB VMs on the
same Hyper-V compute servers that you use for your other VM workloads. SLB supports
rapid creation and deletion of load balancing endpoints as required for CSP operations.
In addition, SLB supports tens of gigabytes per cluster, provides a simple provisioning
model, and is easy to scale out and in. SLB uses Border Gateway Protocol to advertise
virtual IP addresses to the physical network.
Create secure site-to-site IPsec connections between SDN virtual networks and
external customer networks over the internet.
For more information about GRE connectivity scenarios, see GRE Tunneling in
Windows Server.
Create Layer 3 (L3) connections between SDN virtual networks and external
networks. In this case, the SDN gateway simply acts as a router between your
virtual network and the external network.
SDN Gateway requires SDN Network Controller. Network Controller performs the
deployment of gateway pools, configures tenant connections on each gateway, and
switches network traffic flows to a standby gateway if a gateway fails.
Next steps
Learn about two-node patterns - Azure Stack HCI network deployment patterns.
Review single-server storage reference
pattern components for Azure Stack HCI
Article • 12/12/2022
In this article, you'll learn about which network components are deployed for the single-
server reference pattern, as shown in the following diagram:
Optional components
The following are optional components. For more information on Software Defined
Networking (SDN), see Plan a Software Defined Network infrastructure.
Create and manage virtual networks or connect VMs to virtual network subnets.
Configure Quality of Service (QoS) policies for VMs attached to virtual networks or
traditional VLAN-based networks.
The SDN Software Load Balancer (SLB) VM is used to evenly distribute network traffic
among multiple VMs. It enables multiple servers to host the same workload, providing
high availability and scalability. It is also used to provide inbound Network Address
Translation (NAT) services for inbound access to VMs, and outbound NAT services for
outbound connectivity.
SDN Gateway VM
The SDN Gateway VM is used to route network traffic between a virtual network and
another network, either local or remote. SDN Gateways can be used to:
Create secure site-to-site IPsec connections between SDN virtual networks and
external networks over the internet.
Create Layer 3 (L3) connections between SDN virtual networks and external
networks. In this case, SDN Gateway simply acts as a router between your virtual
network and the external network.
Host agents
The following components run as services or agents on the host server:
Arc host agent: Enables you to manage your Windows and Linux computers hosted
outside of Azure on your corporate network or other cloud providers.
Network Controller host agent: Allows Network Controller to manage the goal state of
the data plane, and to receive notification of events as the configuration of the data
plane changes.
Software Load Balancer host agent: Listens for policy updates from the Network
Controller. In addition, this agent programs agent rules into the SDN-enabled Hyper-V
virtual switches that are configured on the local computer.
SDN Gateways 1 60 GB 30 GB 8 8 GB
Next steps
Learn about single-server IP requirements.
Review single-server storage reference
pattern IP requirements for Azure Stack
HCI
Article • 12/12/2022
In this article, learn about the IP requirements for deploying a single-server network
reference pattern in your environment.
The tables in this section list each network component with its Network ATC intent, network
routing, subnet properties, and required IPs. The details that apply to this pattern include:

Management: 1 IP for each host (outbound internet access required), 1 IP for Failover Cluster,
1 IP for the Arc VM management stack VM, 1 IP for the Network Controller VM, and 1 IP for the
OEM VM (optional; Arc autonomous controller, disconnected). Native VLAN is preferred, but trunk
mode is supported, and 1 management VLAN is optional.

Storage: 2 optional IPs for storage; VLAN trunk configuration on the physical switches is
required.

SDN (single node): 1 Network Controller VM IP, 1 Software Load Balancer (SLB) VM IP, and
1 gateway VM IP. Consider potential subnet growth.

L3 forwarding: no Network ATC intent; a separate physical subnet to communicate with the
virtual network.

Totals: 2, 4, or 6 required IPs, depending on the deployment options, plus 2 optional IPs for
storage and 1 optional IP for the OEM VM.
Next steps
Download Azure Stack HCI
Review two-node storage switchless,
single switch deployment network
reference pattern for Azure Stack HCI
Article • 12/16/2022
In this article, you'll learn about the two-node storage switchless with single TOR switch
network reference pattern that you can use to deploy your Azure Stack HCI solution. The
information in this article will also help you determine if this configuration is viable for
your deployment planning needs. This article is targeted towards the IT administrators
who deploy and manage Azure Stack HCI in their datacenters.
For information on other network patterns, see Azure Stack HCI network deployment
patterns.
Scenarios
Scenarios for this network pattern include laboratories, factories, retail stores, and
government facilities.
Consider this pattern for a cost-effective solution that includes fault-tolerance at the
cluster level, but can tolerate northbound connectivity interruptions if the single physical
switch fails or requires maintenance.
You can scale out this pattern, but doing so requires workload downtime to reconfigure the
physical storage connectivity and the storage network. Although SDN L3 services are fully
supported for this pattern, routing services such as BGP will need to be configured on the
firewall device on top of the TOR switch if it doesn't support L3 services. Network security
features such as microsegmentation and QoS don't require extra configuration on the firewall
device, as they're implemented on the virtual switch.
Two RDMA NICs in a full-mesh configuration for east-west traffic for storage. Each
node in the cluster has a redundant connection to the other node in the cluster.
Follow these steps to create network intents for this reference pattern:
PowerShell
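# Sketch using Network ATC; adapter names are placeholders.
# Management and compute traffic uses the teamed ports connected to the TOR switch.
Add-NetIntent -Name MgmtCompute -Management -Compute -AdapterName "pNIC01", "pNIC02"
# Storage traffic uses the full-mesh RDMA NICs that connect the two nodes directly.
Add-NetIntent -Name Storage -Storage -AdapterName "pNIC03", "pNIC04"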
OOB network
The Out of Band (OOB) network is dedicated to supporting the "lights-out" server
management interface also known as the baseboard management controller (BMC).
Each BMC interface connects to a customer-supplied switch. The BMC is used to
automate PXE boot scenarios.
The management network requires access to the BMC interface using Intelligent
Platform Management Interface (IPMI) User Datagram Protocol (UDP) port 623.
The OOB network is isolated from compute workloads and is optional for non-solution-
based deployments.
Management VLAN
All physical compute hosts require access to the management logical network. For IP
address planning, each physical compute host must have at least one IP address
assigned from the management logical network.
A DHCP server can automatically assign IP addresses for the management network, or
you can manually assign static IP addresses. When DHCP is the preferred IP assignment
method, we recommend that you use DHCP reservations without expiration.
Native VLAN - you aren't required to supply VLAN IDs. This is required for
solution-based installations.
The management network supports all traffic used for management of the cluster,
including Remote Desktop, Windows Admin Center, and Active Directory.
For more information, see Plan an SDN infrastructure: Management and HNV Provider.
Compute VLANs
In some scenarios, you don’t need to use SDN Virtual Networks with Virtual Extensible
LAN (VXLAN) encapsulation. Instead, you can use traditional VLANs to isolate your
tenant workloads. Those VLANs are configured on the TOR switch's port in trunk mode.
When connecting new VMs to these VLANs, the corresponding VLAN tag is defined on
the virtual network adapter.
For more information, see Plan an SDN infrastructure: Management and HNV Provider.
For more information, see Understand the usage of virtual networks and VLANs.
Virtual networks
Network virtualization provides virtual networks to VMs similar to how server
virtualization (hypervisor) provides VMs to the operating system. Network virtualization
decouples virtual networks from the physical network infrastructure and removes the
constraints of VLAN and hierarchical IP address assignment from VM provisioning. Such
flexibility makes it easy for you to move to Infrastructure as a Service (IaaS) clouds, and makes
it efficient for hosters and datacenter administrators to manage their infrastructure while
maintaining the necessary multi-tenant isolation, security requirements, and overlapping
VM IP addresses.
Traffic between VMs in the peered virtual networks gets routed through the
backbone infrastructure through private IP addresses only. The communication
between the virtual networks doesn't require public Internet or gateways.
A low-latency, high-bandwidth connection between resources in different virtual
networks.
The ability for resources in one virtual network to communicate with resources in a
different virtual network.
No downtime to resources in either virtual network when creating the peering.
Using SLB, you can scale out your load balancing capabilities using SLB VMs on the
same Hyper-V compute servers that you use for your other VM workloads. SLB supports
rapid creation and deletion of load balancing endpoints as required for CSP operations.
In addition, SLB supports tens of gigabytes per cluster, provides a simple provisioning
model, and is easy to scale out and in. SLB uses Border Gateway Protocol to advertise
virtual IP addresses to the physical network.
Create secure site-to-site IPsec connections between SDN virtual networks and
external customer networks over the internet.
For more information about GRE connectivity scenarios, see GRE Tunneling in
Windows Server.
Create Layer 3 (L3) connections between SDN virtual networks and external
networks. In this case, the SDN gateway simply acts as a router between your
virtual network and the external network.
SDN Gateway requires SDN Network Controller. Network Controller performs the
deployment of gateway pools, configures tenant connections on each gateway, and
switches network traffic flows to a standby gateway if a gateway fails.
Gateways use Border Gateway Protocol to advertise GRE endpoints and establish point-
to-point connections. SDN deployment creates a default gateway pool that supports all
connection types. Within this pool, you can specify how many gateways are reserved on
standby in case an active gateway fails.
Next steps
Learn about the two-node storage switchless, two switches network pattern.
Review two-node storage switchless,
two switches deployment network
reference pattern for Azure Stack HCI
Article • 12/16/2022
In this article, you'll learn about the two-node storage switchless with two TOR L3
switches network reference pattern that you can use to deploy your Azure Stack HCI
solution. The information in this article will also help you determine if this configuration
is viable for your deployment planning needs. This article is targeted towards the IT
administrators who deploy and manage Azure Stack HCI in their datacenters.
For information on other network patterns, see Azure Stack HCI network deployment
patterns.
Scenarios
Scenarios for this network pattern include laboratories, branch offices, and datacenter
facilities.
Consider implementing this pattern when looking for a cost-efficient solution that has
fault tolerance across all the network components. It's possible to scale out the pattern,
but doing so requires workload downtime to reconfigure the physical storage connectivity and
the storage network. SDN L3 services are fully supported on this pattern.
Routing services such as BGP can be configured directly on the TOR switches if they
support L3 services. Network security features such as micro-segmentation and QoS do
not require additional configuration for the firewall device as they are implemented at
the virtual network adapter layer.
For northbound/southbound traffic, the cluster requires two TOR switches in MLAG
configuration.
Two teamed network cards to handle management and compute traffic, and
connected to the TOR switches. Each NIC will be connected to a different TOR
switch.
Two RDMA NICs in a full-mesh configuration for East-West storage traffic. Each
node in the cluster has a redundant connection to the other node in the cluster.
Storage intent
Intent type: Storage
Intent mode: Cluster mode
Teaming: pNIC03 and pNIC04 use SMB Multichannel to provide resiliency and
bandwidth aggregation
Default VLANs:
711 for storage network 1
712 for storage network 2
Default subnets:
10.71.1.0/24 for storage network 1
10.71.2.0/24 for storage network 2
Follow these steps to create network intents for this reference pattern:
PowerShell
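# Sketch using Network ATC. pNIC01/pNIC02 (teamed, one per TOR switch) are placeholders;
# pNIC03/pNIC04 match the storage intent described above.
Add-NetIntent -Name MgmtCompute -Management -Compute -AdapterName "pNIC01", "pNIC02"
Add-NetIntent -Name Storage -Storage -AdapterName "pNIC03", "pNIC04"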
OOB network
The Out of Band (OOB) network is dedicated to supporting the "lights-out" server
management interface also known as the baseboard management controller (BMC).
Each BMC interface connects to a customer-supplied switch. The BMC is used to
automate PXE boot scenarios.
The management network requires access to the BMC interface using Intelligent
Platform Management Interface (IPMI) User Datagram Protocol (UDP) port 623.
The OOB network is isolated from compute workloads and is optional for non-solution-
based deployments.
Management VLAN
All physical compute hosts require access to the management logical network. For IP
address planning, each physical compute host must have at least one IP address
assigned from the management logical network.
A DHCP server can automatically assign IP addresses for the management network, or
you can manually assign static IP addresses. When DHCP is the preferred IP assignment
method, we recommend that you use DHCP reservations without expiration.
Native VLAN - you aren't required to supply VLAN IDs. This is required for
solution-based installations.
The management network supports all traffic used for management of the cluster,
including Remote Desktop, Windows Admin Center, and Active Directory.
For more information, see Plan an SDN infrastructure: Management and HNV Provider.
Compute VLANs
In some scenarios, you don’t need to use SDN Virtual Networks with Virtual Extensible
LAN (VXLAN) encapsulation. Instead, you can use traditional VLANs to isolate your
tenant workloads. Those VLANs are configured on the TOR switch's port in trunk mode.
When connecting new VMs to these VLANs, the corresponding VLAN tag is defined on
the virtual network adapter.
For more information, see Plan an SDN infrastructure: Management and HNV Provider.
For more information, see Understand the usage of virtual networks and VLANs.
Virtual networks
Network virtualization provides virtual networks to VMs similar to how server
virtualization (hypervisor) provides VMs to the operating system. Network virtualization
decouples virtual networks from the physical network infrastructure and removes the
constraints of VLAN and hierarchical IP address assignment from VM provisioning. Such
flexibility makes it easy for you to move to Infrastructure as a Service (IaaS) clouds, and makes
it efficient for hosters and datacenter administrators to manage their infrastructure while
maintaining the necessary multi-tenant isolation, security requirements, and overlapping
VM IP addresses.
Traffic between VMs in the peered virtual networks gets routed through the
backbone infrastructure through private IP addresses only. The communication
between the virtual networks doesn't require public Internet or gateways.
A low-latency, high-bandwidth connection between resources in different virtual
networks.
The ability for resources in one virtual network to communicate with resources in a
different virtual network.
No downtime to resources in either virtual network when creating the peering.
Using SLB, you can scale out your load balancing capabilities using SLB VMs on the
same Hyper-V compute servers that you use for your other VM workloads. SLB supports
rapid creation and deletion of load balancing endpoints as required for CSP operations.
In addition, SLB supports tens of gigabytes per cluster, provides a simple provisioning
model, and is easy to scale out and in. SLB uses Border Gateway Protocol to advertise
virtual IP addresses to the physical network.
Create secure site-to-site IPsec connections between SDN virtual networks and
external customer networks over the internet.
For more information about GRE connectivity scenarios, see GRE Tunneling in
Windows Server.
Create Layer 3 (L3) connections between SDN virtual networks and external
networks. In this case, the SDN gateway simply acts as a router between your
virtual network and the external network.
SDN Gateway requires SDN Network Controller. Network Controller performs the
deployment of gateway pools, configures tenant connections on each gateway, and
switches network traffic flows to a standby gateway if a gateway fails.
Gateways use Border Gateway Protocol to advertise GRE endpoints and establish point-
to-point connections. SDN deployment creates a default gateway pool that supports all
connection types. Within this pool, you can specify how many gateways are reserved on
standby in case an active gateway fails.
Next steps
Learn about the two-node storage switchless, one switch network pattern.
Review two-node storage switched,
non-converged deployment network
reference pattern for Azure Stack HCI
Article • 12/16/2022
In this article, you'll learn about the two-node storage switched, non-converged, two-
TOR-switch network reference pattern that you can use to deploy your Azure Stack HCI
solution. The information in this article will also help you determine if this configuration
is viable for your deployment planning needs. This article is targeted towards the IT
administrators who deploy and manage Azure Stack HCI in their datacenters.
For information on other network patterns, see Azure Stack HCI network deployment
patterns.
Scenarios
Scenarios for this network pattern include laboratories, factories, branch offices, and
datacenter facilities.
Deploy this pattern for enhanced network performance of your system and if you plan
to add additional nodes. East-West storage replication traffic won't interfere with or compete
with north-south traffic dedicated to management and compute. The logical network
configuration for additional nodes is ready without requiring workload
downtime or physical connection changes. SDN L3 services are fully supported on this
pattern.
Routing services such as BGP can be configured directly on the TOR switches if they
support L3 services. Network security features such as microsegmentation and QoS
don't require extra configuration on the firewall device as they're implemented at the
virtual network adapter layer.
Two teamed network cards to handle management and compute traffic connected
to two TOR switches. Each NIC is connected to a different TOR switch.
Networks: management and compute, storage, and BMC.
Follow these steps to create network intents for this reference pattern:
PowerShell
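# Sketch using Network ATC; adapter names are placeholders.
# The teamed NICs carry management and compute; the RDMA NICs carry storage to the TOR switches.
Add-NetIntent -Name MgmtCompute -Management -Compute -AdapterName "pNIC01", "pNIC02"
Add-NetIntent -Name Storage -Storage -AdapterName "pNIC03", "pNIC04"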
The storage adapters operate in different IP subnets. Each storage network uses the ATC
predefined VLANs by default (711 and 712). However, these VLANs can be customized if
necessary. In addition, if the default subnet defined by ATC isn't usable, you're
responsible for assigning all storage IP addresses in the cluster.
The management network requires access to the BMC interface using Intelligent
Platform Management Interface (IPMI) User Datagram Protocol (UDP) port 623.
The OOB network is isolated from compute workloads and is optional for non-solution-
based deployments.
Management VLAN
All physical compute hosts require access to the management logical network. For IP
address planning, each physical compute host must have at least one IP address
assigned from the management logical network.
A DHCP server can automatically assign IP addresses for the management network, or
you can manually assign static IP addresses. When DHCP is the preferred IP assignment
method, we recommend that you use DHCP reservations without expiration.
Native VLAN - you aren't required to supply VLAN IDs. This is required for
solution-based installations.
The management network supports all traffic used for management of the cluster,
including Remote Desktop, Windows Admin Center, and Active Directory.
For more information, see Plan an SDN infrastructure: Management and HNV Provider.
Compute VLANs
In some scenarios, you don’t need to use SDN Virtual Networks with Virtual Extensible
LAN (VXLAN) encapsulation. Instead, you can use traditional VLANs to isolate your
tenant workloads. Those VLANs are configured on the TOR switch's port in trunk mode.
When connecting new VMs to these VLANs, the corresponding VLAN tag is defined on
the virtual network adapter.
HNV Provider Address (PA) network
The Hyper-V Network Virtualization (HNV) Provider Address (PA) network serves as the
underlying physical network for East/West (internal-internal) tenant traffic, North/South
(external-internal) tenant traffic, and to exchange BGP peering information with the
physical network. This network is only required when there's a need for deploying virtual
networks using VXLAN encapsulation for another layer of isolation and for network
multitenancy.
For more information, see Plan an SDN infrastructure: Management and HNV Provider.
For more information, see Understand the usage of virtual networks and VLANs.
Virtual networks
Network virtualization provides virtual networks to VMs similar to how server
virtualization (hypervisor) provides VMs to the operating system. Network virtualization
decouples virtual networks from the physical network infrastructure and removes the
constraints of VLAN and hierarchical IP address assignment from VM provisioning. Such
flexibility makes it easy for you to move to Infrastructure as a Service (IaaS) clouds, and makes
it efficient for hosters and datacenter administrators to manage their infrastructure while
maintaining the necessary multi-tenant isolation, security requirements, and overlapping
VM IP addresses.
Traffic between VMs in the peered virtual networks gets routed through the
backbone infrastructure through private IP addresses only. The communication
between the virtual networks doesn't require public Internet or gateways.
A low-latency, high-bandwidth connection between resources in different virtual
networks.
The ability for resources in one virtual network to communicate with resources in a
different virtual network.
No downtime to resources in either virtual network when creating the peering.
Using SLB, you can scale out your load balancing capabilities using SLB VMs on the
same Hyper-V compute servers that you use for your other VM workloads. SLB supports
rapid creation and deletion of load balancing endpoints as required for CSP operations.
In addition, SLB supports tens of gigabytes per cluster, provides a simple provisioning
model, and is easy to scale out and in. SLB uses Border Gateway Protocol to advertise
virtual IP addresses to the physical network.
Create secure site-to-site IPsec connections between SDN virtual networks and
external customer networks over the internet.
For more information about GRE connectivity scenarios, see GRE Tunneling in
Windows Server.
Create Layer 3 (L3) connections between SDN virtual networks and external
networks. In this case, the SDN gateway simply acts as a router between your
virtual network and the external network.
SDN Gateway requires SDN Network Controller. Network Controller performs the
deployment of gateway pools, configures tenant connections on each gateway, and
switches network traffic flows to a standby gateway if a gateway fails.
Gateways use Border Gateway Protocol to advertise GRE endpoints and establish point-
to-point connections. SDN deployment creates a default gateway pool that supports all
connection types. Within this pool, you can specify how many gateways are reserved on
standby in case an active gateway fails.
Next steps
Learn about the two-node storage switched, fully converged network pattern.
Review two-node storage switched, fully
converged deployment network
reference pattern for Azure Stack HCI
Article • 12/12/2022
In this article, you'll learn about the two-node storage switched, fully converged with
two TOR switches network reference pattern that you can use to deploy your Azure
Stack HCI solution. The information in this article will also help you determine if this
configuration is viable for your deployment planning needs. This article is targeted
towards the IT administrators who deploy and manage Azure Stack HCI in their
datacenters.
For information on other network patterns, see Azure Stack HCI network deployment
patterns.
Scenarios
Scenarios for this network pattern include laboratories, branch offices, and datacenter
facilities.
Consider this pattern if you plan to add additional nodes and your bandwidth
requirements for north-south traffic don't require dedicated adapters. This solution
might be a good option when physical switch ports are scarce and you're looking for
cost reductions for your solution. This pattern requires additional operational costs to
fine-tune the shared host network adapters QoS policies to protect storage traffic from
workload and management traffic. SDN L3 services are fully supported on this pattern.
Routing services such as BGP can be configured directly on the TOR switches if they
support L3 services. Network security features such as microsegmentation and QoS
don't require extra configuration on the firewall device as they're implemented at the
virtual network adapter layer.
Two teamed network cards handle the management, compute, and RDMA storage
traffic connected to the TOR switches. Each NIC is connected to a different TOR
switch. SMB multichannel capability provides path aggregation and fault tolerance.
Follow these steps to create network intents for this reference pattern:
PowerShell
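# Sketch using Network ATC; adapter names are placeholders.
# A single converged intent carries management, compute, and RDMA storage traffic on the teamed NICs.
Add-NetIntent -Name Converged -Management -Compute -Storage -AdapterName "pNIC01", "pNIC02"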
The storage networks operate in different IP subnets. Each storage network uses the ATC
predefined VLANs by default (711 and 712). However, these VLANs can be customized if
necessary. In addition, if the default subnet defined by ATC isn't usable, you're
responsible for assigning all storage IP addresses in the cluster.
The management network requires access to the BMC interface using Intelligent
Platform Management Interface (IPMI) User Datagram Protocol (UDP) port 623.
The OOB network is isolated from compute workloads and is optional for non-solution-
based deployments.
Management VLAN
All physical compute hosts require access to the management logical network. For IP
address planning, each physical compute host must have at least one IP address
assigned from the management logical network.
A DHCP server can automatically assign IP addresses for the management network, or
you can manually assign static IP addresses. When DHCP is the preferred IP assignment
method, we recommend that you use DHCP reservations without expiration.
Native VLAN - you aren't required to supply VLAN IDs. This is required for
solution-based installations.
The management network supports all traffic used for management of the cluster,
including Remote Desktop, Windows Admin Center, and Active Directory.
For more information, see Plan an SDN infrastructure: Management and HNV Provider.
Compute VLANs
In some scenarios, you don’t need to use SDN Virtual Networks with Virtual Extensible
LAN (VXLAN) encapsulation. Instead, you can use traditional VLANs to isolate your
tenant workloads. Those VLANs are configured on the TOR switch's port in trunk mode.
When connecting new VMs to these VLANs, the corresponding VLAN tag is defined on
the virtual network adapter.
HNV Provider Address (PA) network
The Hyper-V Network Virtualization (HNV) Provider Address (PA) network serves as the
underlying physical network for East/West (internal-internal) tenant traffic, North/South
(external-internal) tenant traffic, and to exchange BGP peering information with the
physical network. This network is only required when there's a need for deploying virtual
networks using VXLAN encapsulation for another layer of isolation and for network
multitenancy.
For more information, see Plan an SDN infrastructure: Management and HNV Provider.
For more information, see Understand the usage of virtual networks and VLANs.
Virtual networks
Network virtualization provides virtual networks to VMs similar to how server
virtualization (hypervisor) provides VMs to the operating system. Network virtualization
decouples virtual networks from the physical network infrastructure and removes the
constraints of VLAN and hierarchical IP address assignment from VM provisioning. Such
flexibility makes it easy for you to move to Infrastructure as a Service (IaaS) clouds, and makes
it efficient for hosters and datacenter administrators to manage their infrastructure while
maintaining the necessary multi-tenant isolation, security requirements, and overlapping
VM IP addresses.
Traffic between VMs in the peered virtual networks gets routed through the
backbone infrastructure through private IP addresses only. The communication
between the virtual networks doesn't require public Internet or gateways.
A low-latency, high-bandwidth connection between resources in different virtual
networks.
The ability for resources in one virtual network to communicate with resources in a
different virtual network.
No downtime to resources in either virtual network when creating the peering.
Using SLB, you can scale out your load balancing capabilities using SLB VMs on the
same Hyper-V compute servers that you use for your other VM workloads. SLB supports
rapid creation and deletion of load balancing endpoints as required for CSP operations.
In addition, SLB supports tens of gigabytes per cluster, provides a simple provisioning
model, and is easy to scale out and in. SLB uses Border Gateway Protocol to advertise
virtual IP addresses to the physical network.
Create secure site-to-site IPsec connections between SDN virtual networks and
external customer networks over the internet.
For more information about GRE connectivity scenarios, see GRE Tunneling in
Windows Server.
Create Layer 3 (L3) connections between SDN virtual networks and external
networks. In this case, the SDN gateway simply acts as a router between your
virtual network and the external network.
SDN Gateway requires SDN Network Controller. Network Controller performs the
deployment of gateway pools, configures tenant connections on each gateway, and
switches network traffic flows to a standby gateway if a gateway fails.
Gateways use Border Gateway Protocol to advertise GRE endpoints and establish point-
to-point connections. SDN deployment creates a default gateway pool that supports all
connection types. Within this pool, you can specify how many gateways are reserved on
standby in case an active gateway fails.
Next steps
Learn about the two-node storage switched, non-converged network pattern.
Review two-node storage reference
pattern components for Azure Stack HCI
Article • 12/12/2022
In this article, you'll learn about which network components get deployed for two-node
reference patterns, as shown below:
VM components
The following table lists all the components running on VMs for two-node network
patterns:
SDN Gateways 1 60 GB 30 GB 8 8 GB
Network Controller VM
The Network Controller VM is deployed optionally. If the Network Controller VM isn't
deployed, default network access policies won't be available. Additionally, it's
needed if you have any of the following requirements:
Create and manage virtual networks. Connect virtual machines (VMs) to virtual
network subnets.
Configure Quality of Service (QoS) policies for VMs attached to virtual networks or
traditional VLAN-based networks.
Optional components
The following are optional components. For more information on Software Defined
Networking (SDN), see Plan a Software Defined Network infrastructure.
The SDN Software Load Balancer (SLB) VM is used to evenly distribute customer
network traffic among multiple VMs. It enables multiple servers to host the same
workload, providing high availability and scalability. It's also used to provide inbound
Network Address Translation (NAT) services for inbound access to virtual machines, and
outbound NAT services for outbound connectivity.
SDN Gateway VM
The SDN Gateway VM is used for routing network traffic between a virtual network and
another network, either local or remote. Gateways can be used to:
Create secure site-to-site IPsec connections between SDN virtual networks and
external customer networks over the internet.
Create Layer 3 connections between SDN virtual networks and external networks.
In this case, the SDN gateway simply acts as a router between your virtual network
and the external network.
Arc host agent: Enables you to manage your Windows and Linux computers hosted
outside of Azure on your corporate network or other cloud providers.
Network Controller host agent: Allows Network Controller to manage the goal state of
the data plane, and to receive notification of events as the configuration of the data
plane changes.
Software Load Balancer host agent: Listens for policy updates from the Network
Controller. In addition, this agent programs agent rules into the SDN-enabled Hyper-V
virtual switches that are configured on the local computer.
Next steps
Learn about Two-node deployment IP requirements.
Review two-node storage reference
pattern IP requirements for Azure Stack
HCI
Article • 11/10/2022
In this article, learn about the IP requirements for deploying a two-node network
reference pattern in your environment.
Total: 6 IPs required, plus 1 optional IP for the OEM VM.

Deployments with microsegmentation and QoS enabled

The tables for this scenario list each network component with its Network ATC intent, network
routing, subnet properties, and required IPs. The details that apply to this pattern include:

Management: 1 IP for each host (outbound internet access required), 1 IP for Failover Cluster,
1 IP for the Arc VM management stack VM, 1 IP for the Network Controller VM, and 1 IP for the
OEM VM (optional; Arc autonomous controller, disconnected). Native VLAN is preferred, but trunk
mode is supported, and 1 management VLAN is optional.

Storage: default VLAN tags 711 and 712; VLAN trunk configuration on the physical switches is
required.

Total: 9 IPs minimum, 10 maximum.

SDN (two-node): 1 Network Controller VM IP, 1 Software Load Balancer (SLB) VM IP, and
1 gateway VM IP. The subnet size needs to allocate the hosts and the SLB VMs; consider
potential subnet growth.

Total: 11 IPs minimum, 12 maximum.
Next steps
Choose a reference pattern.
Review two-node storage reference pattern decision matrix for Azure Stack HCI
Article • 11/10/2022
Study the two-node storage reference pattern decision matrix to help decide which
reference pattern is best suited for your deployment needs:
Next steps
Download Azure Stack HCI
Review SDN considerations for network reference patterns
Article • 12/12/2022
In this article, you'll review considerations when deploying Software Defined Networking
(SDN) in your Azure Stack HCI cluster.
If you are using SDN Software Load Balancers (SLB) or Gateway Generic Routing
Encapsulation (GRE) gateways, you must also configure Border Gateway Protocol (BGP)
peering with the top of rack (ToR) switches so that the SLB and GRE Virtual IP addresses
(VIPs) can be advertised. For more information, see Switches and routers.
For more information about Network Controller, see What is Network Controller.
Virtual networking: requires Network Controller; the network requirement is the HNV
Provider Address (PA) VLAN, subnet, and router.
User Defined Routing, encrypted subnets, and inbound/outbound NAT: require Network
Controller (inbound/outbound NAT also requires SLB/MUX).
SLB/MUX: requires BGP on the HNV PA network and private and public VIP subnets.
Gateway (with SLB/MUX): requires private and public VIP subnets.
Next steps
Choose a network pattern to review.
Deploy the Azure Stack HCI operating system
Article • 04/17/2023
The first step in deploying Azure Stack HCI is to download Azure Stack HCI and install
the operating system on each server that you want to cluster. This article discusses
different ways to deploy the operating system, and using Windows Admin Center to
connect to the servers.
Note
If you've purchased Azure Stack HCI Integrated System solution hardware from the
Azure Stack HCI Catalog through your preferred Microsoft hardware partner, the
Azure Stack HCI operating system should be pre-installed. In that case, you can skip
this step and move on to Create an Azure Stack HCI cluster.
For Azure Kubernetes Service on Azure Stack HCI and Windows Server requirements, see
AKS requirements on Azure Stack HCI.
Gather information
To prepare for deployment, you'll need to take note of the server names, domain names,
computer account names, RDMA protocols and versions, and VLAN ID for your
deployment. Gather the following details about your environment:
Server name: Get familiar with your organization's naming policies for computers,
files, paths, and other resources. If you need to provision several servers, each
should have a unique name.
Domain name: Get familiar with your organization's policies for domain naming
and domain joining. You'll be joining the servers to your domain, and you'll need
to specify the domain name.
Computer account names: Servers that you want to add as cluster nodes have
computer accounts. These computer accounts need to be moved into their own
dedicated organizational unit (OU).
Organizational unit (OU): If not already done so, create a dedicated OU for your
computer accounts. Consult your domain administrator about creating an OU. For
detailed information, see Create a failover cluster.
Static IP addresses: Azure Stack HCI requires static IP addresses for storage and
workload (VM) traffic and doesn't support dynamic IP address assignment through
DHCP for this high-speed network. You can use DHCP for the management
network adapter unless you're using two adapters in a team, in which case you need
to use static IPs. Consult your network administrator about the IP address you
should use for each server in the cluster.
RDMA networking: There are two types of RDMA protocols: iWarp and RoCE. Note
which one your network adapters use, and if RoCE, also note the version (v1 or v2).
For RoCE, also note the model of your top-of-rack switch.
VLAN ID: Note the VLAN ID to be used for the network adapters on the servers, if
any. You should be able to obtain this from your network administrator.
Site names: For stretched clusters, two sites are used for disaster recovery. You can
set up sites using Active Directory Domain Services, or the Create cluster wizard
can automatically set them up for you. Consult your domain administrator about
setting up sites.
If you install Windows Admin Center on a server, tasks that require CredSSP, such as
cluster creation and installing updates and extensions, require using an account that's a
member of the Gateway Administrators group on the Windows Admin Center server. For
more information, see the first two sections of Configure User Access Control and
Permissions.
1. Rack all server nodes that you want to use in your server cluster.
2. Connect the server nodes to your network switches.
3. Configure the BIOS or the Unified Extensible Firmware Interface (UEFI) of your
servers as recommended by your Azure Stack HCI hardware vendor to maximize
performance and reliability.
Note
If you are preparing a single server deployment, see the Azure Stack HCI OS single
server overview
Solution hardware ranges from 1 to 16 nodes and is tested and validated by Microsoft
and partner vendors. To find Azure Stack HCI solution hardware from your preferred
hardware partner, see the Azure Stack HCI Catalog.
Headless deployment
You can use an answer file to do a headless deployment of the operating system. The
answer file uses an XML format to define configuration settings and values during an
unattended installation of the operating system.
For this deployment option, you can use Windows System Image Manager to create an
unattend.xml answer file to deploy the operating system on your servers. Windows
System Image Manager creates your unattend answer file through a graphical tool with
component sections to define the "answers" to the configuration questions, and then
ensure the correct format and syntax in the file.
The Windows System Image Manager
tool is available in the Windows Assessment and Deployment Kit (Windows ADK). To get
started: Download and install the Windows ADK.
You can't use Microsoft System Center Virtual Machine Manager 2019 to deploy or
manage clusters running Azure Stack HCI, version 21H2. If you're using VMM 2019
to manage your Azure Stack HCI, version 20H2 cluster, don't attempt to upgrade
the cluster to version 21H2 without first installing System Center 2022.
Network deployment
Another option is to install the Azure Stack HCI operating system over the network
using Windows Deployment Services.
Manual deployment
To manually deploy the Azure Stack HCI operating system on the system drive of each
server to be clustered, install the operating system via your preferred method, such as
booting from a DVD or USB drive. Complete the installation process using the Server
Configuration tool (SConfig) to prepare the server or servers for clustering. To learn
more about the tool, see Configure a Server Core installation with SConfig.
1. Start the Install Azure Stack HCI wizard on the system drive of the server where you
want to install the operating system.
2. Choose the language to install or accept the default language settings, select Next,
and then on next page of the wizard, select Install now.
3. On the Applicable notices and license terms page, review the license terms, select
the I accept the license terms checkbox, and then select Next.
4. On the Which type of installation do you want? page, select Custom: Install the
newer version of Azure Stack HCI only (advanced).
6. The Installing Azure Stack HCI page displays to show status on the process.
Note
The installation process restarts the operating system twice to complete the
process, and displays notices on starting services before opening an
Administrator command prompt.
8. At the Enter new credential for Administrator prompt, enter a new password, enter
it again to confirm it, and then press Enter.
9. At the Your password has been changed confirmation prompt, press Enter.
Configure the server using SConfig
Now you're ready to use the Server Configuration tool (SConfig) to perform important
tasks. To use SConfig, log on to the server running the Azure Stack HCI operating
system. This could be locally via a keyboard and monitor, or using a remote
management (headless or BMC) controller, or Remote Desktop. The SConfig tool opens
automatically when you log on to the server.
From the Welcome to Azure Stack HCI window (SConfig tool), you can perform these
initial configuration tasks on each server:
After configuring the operating system as needed with SConfig on each server, you're
ready to use the Cluster Creation wizard in Windows Admin Center to cluster the
servers.
Note
If you're installing Azure Stack HCI on a single server, you must use PowerShell to
create the cluster.
Next steps
To perform the next management task related to this article, see:
Create an Azure Stack HCI cluster using Windows Admin Center
Now that you've deployed the Azure Stack HCI operating system, you'll learn how to use
Windows Admin Center to create an Azure Stack HCI cluster that uses Storage Spaces
Direct, and, optionally, Software Defined Networking. The Create Cluster wizard in
Windows Admin Center will do most of the heavy lifting for you. If you'd rather do it
yourself with PowerShell, see Create an Azure Stack HCI cluster using PowerShell. The
PowerShell article is also a good source of information for what is going on under the
hood of the wizard and for troubleshooting purposes.
Note
If you are doing a single server installation of Azure Stack HCI 21H2, use
PowerShell to create the cluster.
If you're interested in testing Azure Stack HCI but have limited or no spare hardware, see
the Azure Stack HCI Evaluation Guide, where we'll walk you through experiencing Azure
Stack HCI using nested virtualization inside an Azure VM. Or try the Create a VM-based
lab for Azure Stack HCI tutorial to create your own private lab environment using nested
virtualization on a server of your choice to deploy VMs running Azure Stack HCI for
clustering.
After you're done creating a cluster in the Create Cluster wizard, complete these post-
cluster creation steps:
Set up a cluster witness. This is highly recommended for all clusters with at least
two nodes.
Register with Azure. Your cluster is not fully supported until your registration is
active.
Validate an Azure Stack HCI cluster. Your cluster is ready to work in a production
environment after completing this step.
Prerequisites
Before you run the Create Cluster wizard in Windows Admin Center, you must complete
the following prerequisites.
Warning
Running the wizard before completing the prerequisites can result in a failure to
create the cluster.
Consult with your networking team to identify and understand Physical network
requirements, Host network requirements, and Firewall requirements. Especially
review the Network Reference patterns, which provide example network designs.
Also, determine how you'd like to configure host networking, using Network ATC
or manually.
Install the Azure Stack HCI operating system on each server in the cluster. See
Deploy the Azure Stack HCI operating system.
Have at least two servers to cluster; four if creating a stretched cluster (two in each
site). To instead deploy Azure Stack HCI on a single server, see Deploy Azure Stack
HCI on a single server.
Ensure all servers are in the same time zone as your local domain controller.
Ensure that Windows Admin Center and your domain controller are not installed
on the same system. Also, ensure that the domain controller is not hosted on the
Azure Stack HCI cluster or one of the nodes in the cluster.
If you're running Windows Admin Center on a server (instead of a local PC), use an
account that's a member of the Gateway Administrators group, or the local
Administrators group on the Windows Admin Center server.
Verify that your Windows Admin Center management computer is joined to the
same Active Directory domain in which you'll create the cluster, or joined to a fully
trusted domain. The servers that you'll cluster don't need to belong to the domain
yet; they can be added to the domain during cluster creation.
If you're using an integrated system from a Microsoft hardware partner, install the
latest version of vendor extensions on Windows Admin Center to help keep the
integrated hardware and firmware up to date. To install them, open Windows
Admin Center and click Settings (gear icon) at the upper right. Select any
applicable hardware vendor extensions, and click Install.
For stretched clusters, set up your two sites beforehand in Active Directory.
Alternatively, the wizard can set them up for you too. For more information about
stretched clusters, see the Stretched clusters overview.
3. In the Add or create resources panel, under Server clusters, select Create new.
6. When finished, click Create. You'll see the Create Cluster wizard, as shown below.
Proceed to the next step in the cluster creation workflow, Step 1: Get started.
1. Review 1.1 Check the prerequisites listed in the wizard to ensure each server node
is cluster-ready. When finished, click Next.
2. On 1.2 Add servers, enter your account username using the format
domain\username. Enter your password, then click Next. This account must be a
member of the local Administrators group on each server.
3. Enter the name of the first server you want to add, then click Add. When you add
servers, make sure to use a fully qualified domain name.
4. Repeat Step 3 for each server that will be part of the cluster. When you're finished,
select Next.
5. If needed, on 1.3 Join a domain, specify the domain to join the servers to and the
account to use. You can optionally rename the servers if you want. Then click Next.
6. On 1.4 Install features, review and add features as needed. When finished, click
Next.
The wizard lists and installs required features for you, including the following
options:
Data Deduplication
Hyper-V
BitLocker Drive Encryption
Data Center Bridging (for RoCEv2 network adapters)
Failover Clustering
Network ATC
Active Directory module for Windows PowerShell
Hyper-V module for Windows PowerShell
7. On 1.5 Install updates, click Install updates as needed to install any operating
system updates. When complete, click Next.
8. On 1.6 Install hardware updates, click Get updates as needed to get available
vendor hardware updates. If you don't install the updates now, we recommend
manually installing the latest networking drivers before continuing. Updated
drivers are required if you want to use Network ATC to configure host networking.
10. On 1.7 Restart servers, click Restart servers if required. Verify that each server has
successfully started.
Step 2: Networking
Step 2 of the wizard walks you through configuring the host networking elements for
your cluster. RDMA (both iWARP and RoCE) network adapters are supported.
Depending on the option you selected in 1.8 Choose host networking of Step 1: Get
started above, refer to one of the following tabs to configure host networking for your
cluster:
This is the recommended option for configuring host networking. For more
information about Network ATC, see Network ATC overview.
1. On 2.1 Verify network adapters, review the list displayed, and exclude or add
any adapters you want to cluster. Wait for a couple of minutes for the
adapters to show up. Only adapters with matching names, interface
descriptions, and link speed on each server are displayed. All other adapters
are hidden.
2. If you don't see your adapters in the list, click Show hidden adapters to see all
the available adapters and then select the missing adapters.
3. On the Select the cluster network adapters page, select the checkbox for any
adapters listed that you want to cluster. The adapters must have matching
names, interface descriptions, and link speeds on each server. You can rename
the adapters to match, or just select the matching adapters. When finished,
click Close.
4. The selected adapters will now display under Adapters available on all
servers. When finished selecting and verifying adapters, click Next.
For Traffic types, select a traffic type from the dropdown list. You can
add the Management and Storage intent types to exactly one intent
while the Compute intent type can be added to one or more intents. For
more information, see Network ATC traffic types.
For Intent name, enter a friendly name for the intent.
For Network adapters, select an adapter from the dropdown list.
(Optional) Click Select another adapter for this traffic if needed.
7. (Optional) To add another intent, select Add an intent, and repeat step 5 and
optionally step 6.
9. On 2.3: Provide network details, for each storage traffic adapter listed, enter
the following or use the default values (recommended):
Subnet mask/CIDR
VLAN ID
IP address (this is usually on a private subnet such as 10.71.1.x and
10.71.2.x)
1. On 3.1 Create the cluster, specify a unique name for the cluster.
Specify one or more static addresses. The IP address must be entered in the
following format: IP address/current subnet length. For example:
10.0.0.200/24.
Assign address dynamically with DHCP.
3. When finished, select Create cluster. This can take a while to complete.
If you get the error "Failed to reach cluster through DNS," select the Retry
connectivity checks button. You might have to wait several hours before it
succeeds on larger networks due to DNS propagation delays.
Important
If cluster creation fails, select the Retry connectivity checks button; do not select
the Back button. If you select Back, the Create Cluster wizard exits prematurely and
can potentially reset the entire process.
If you encounter issues with deployment after the cluster is created and you want
to restart the Cluster Creation wizard, first remove (destroy) the cluster. To do so,
see Remove a cluster.
4. The next step appears only if you selected Use Network ATC to deploy and
manage networking (Recommended) for step 1.8 Choose host networking.
In Deploy host networking settings, select Deploy to apply the Network ATC
intents you defined earlier. If you chose to manually deploy host networking in
step 1.8 of the Cluster Creation wizard, you won't see this page.
6. On 3.3 Validate cluster, select Validate. Validation can take several minutes. Note
that the in-wizard validation is not the same as the post-cluster creation validation
step, which performs additional checks to catch any hardware or configuration
problems before the cluster goes into production. If you experience issues with
cluster validation, see Troubleshoot cluster validation reporting.
If the Credential Security Service Provider (CredSSP) pop-up appears, select Yes
to temporarily enable CredSSP for the wizard to continue. Once your cluster is
created and the wizard has completed, you'll disable CredSSP to increase security.
If you experience issues with CredSSP, see Troubleshoot CredSSP.
7. Review all validation statuses, download the report to get detailed information on
any failures, make changes, then click Validate again as needed. You can
Download report as well. Repeat again as necessary until all validation checks
pass. When all is OK, click Next.
11. For stretched clusters, on 3.3 Assign servers to sites, name the two sites that will
be used.
12. Next assign each server to a site. You'll set up replication across sites later. When
finished, click Apply changes.
Step 4: Storage
Complete these steps after finishing the Create Cluster wizard.
Step 4 walks you through
setting up Storage Spaces Direct for your cluster.
1. On 4.1 Clean drives, you can optionally select Erase drives if it makes sense for
your deployment.
2. On 4.2 Check drives, click the > icon next to each server to verify that the disks are
working and connected. If all is OK, click Next.
6. Download and review the report. When all is good, click Finish.
8. After a few minutes, you should see your cluster in the list. Select it to view the
cluster overview page.
It can take some time for the cluster name to be replicated across your domain,
especially if workgroup servers have been newly added to Active Directory.
Although the cluster might be displayed in Windows Admin Center, it might not be
available to connect to yet.
If resolving the cluster isn't successful after some time, in most cases you can
substitute a server name instead of the cluster name.
You can also deploy Network Controller using SDN Express scripts. See Deploy an SDN
infrastructure using SDN Express.
Note
The Create Cluster wizard does not currently support configuring SLB and RAS
Gateway. You can use SDN Express scripts to configure these components. Also,
SDN is not supported or available for stretched clusters.
1. Under Host, enter a name for the Network Controller. This is the DNS name used
by management clients (such as Windows Admin Center) to communicate with
Network Controller. You can also use the default populated name.
2. Download the Azure Stack HCI VHDX file. For more information, see Download the
VHDX file.
3. Specify the path where you downloaded the VHDX file. Use Browse to find it
quicker.
4. Specify the number of VMs to be dedicated for Network Controller. Three VMs are
strongly recommended for production deployments.
5. Under Network, enter the VLAN ID of the management network. Network
Controller needs connectivity to same management network as the Hyper-V hosts
so that it can communicate and configure the hosts.
6. For VM network addressing, select either DHCP or Static.
7. If you selected DHCP, enter the name for the Network Controller VMs. You can
also use the default populated names.
8. If you selected Static, do the following:
Specify an IP address.
Specify a subnet prefix.
Specify the default gateway.
Specify one or more DNS servers. Click Add to add additional DNS servers.
9. Under Credentials, enter the username and password used to join the Network
Controller VMs to the cluster domain.
10. Enter the local administrative password for these VMs.
11. Under Advanced, enter the path to the VMs. You can also use the default
populated path.
12. Enter values for MAC address pool start and MAC address pool end. You can also
use the default populated values.
13. When finished, click Next.
14. Wait until the wizard completes its job. Stay on this page until all progress tasks
are complete. Then click Finish.
Note
After Network Controller VM(s) are created, you must configure dynamic DNS
updates for the Network Controller cluster name on the DNS server.
If Network Controller deployment fails, do the following before you try this again:
Stop and delete any Network Controller VMs that the wizard created.
Next steps
To perform the next management task related to this article, see:
Set up a cluster witness
Applies to: Azure Stack HCI, versions 22H2 and 21H2; Windows Server 2022,
Windows Server 2019
This article describes how to set up an Azure Stack HCI or Windows Server cluster with a
cluster witness in Azure (known as a cloud witness).
We recommend setting up a cluster witness for clusters with two, three, or four nodes.
The witness helps the cluster determine which nodes have the most up-to-date cluster
data if some nodes can't communicate with the rest of the cluster. You can host the
cluster witness on a file share located on another server, or use a cloud witness.
To learn more about cluster witnesses and quorum, see Understanding cluster and pool
quorum on Azure Stack HCI. To manage the witness, including setting a file share
witness, see Change cluster settings.
Make sure that port 443 is open in your firewalls and that *.core.windows.net is
included in any firewall allow lists you're using between the cluster and Azure
Storage. For details, see Recommended firewall URLs.
If your network uses a proxy server for internet access, you must configure proxy
settings for Azure Stack HCI.
Create an Azure account.
If applicable, create an additional Azure subscription.
Connect Azure Stack HCI to Azure.
Make sure DNS is available for the cluster.
2. On the Azure portal home menu, under Azure services, select Storage accounts. If
this icon is missing, select Create a resource to create a Storage accounts resource
first.
Storage account names must be between 3 and 24 characters in length and may
contain numbers and lowercase letters only. This name must also be unique
within Azure.
d. Select a Location that is closest to you physically.
e. For Performance, select Standard.
f. For Account kind, select Storage general purpose.
g. For Replication, select Locally-redundant storage (LRS).
h. When finished, click Review + create.
5. Ensure that the storage account passes validation and then review account
settings. When finished, click Create.
6. It may take a few seconds for account deployment to occur in Azure. When
deployment is complete, click Go to resource.
An Azure cloud witness uses a blob file for storage, with an endpoint of the form
storage_account_name.blob.core.windows.net.
Note
An Azure cloud witness uses HTTPS (default port 443) to establish communication
with the Azure blob service. Ensure that the HTTPS port is accessible.
Copy the account name and access key
1. In the Azure portal, under Settings, select Access keys.
3. Click the copy-and-paste icon to the right of the Storage account name and key1
fields and paste each text string to Notepad or other text editor.
3. Under Blob service, click the copy-and-paste icon to the right of the Blob service
field and paste the text string to Notepad or other text editor.
Create a cloud witness using Windows Admin
Center
Now you are ready to create a witness instance for your cluster using Windows Admin
Center.
1. In Windows Admin Center, select Cluster Manager from the top drop-down arrow.
Cloud witness - enter your Azure storage account name, access key, and
endpoint URL, as described previously
File share witness - enter the file share path (for example, \\server\share)
6. For a cloud witness, for the following fields, paste the text strings you copied
previously for:
a. Azure storage account name
b. Azure storage access key
c. Azure service endpoint
7. When finished, click Save. It might take a bit for the information to propagate to
Azure.
Note
The third option, Disk witness, is not suitable for use in stretched clusters.
Use the following cmdlet to create an Azure cloud witness. Enter the Azure storage
account name and access key information as described previously:
PowerShell
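# A minimal sketch; the storage account name and access key values are placeholders
Set-ClusterQuorum -CloudWitness -AccountName "<StorageAccountName>" -AccessKey "<StorageAccountAccessKey>"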
Use the following cmdlet to create a file share witness. Enter the path to the file server
share:
PowerShell
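# A minimal sketch; the file share path is a placeholder
Set-ClusterQuorum -FileShareWitness "\\fileserver\share"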
Next steps
To perform the next management task related to this article, see:
For more information on cluster quorum, see Understanding cluster and pool
quorum on Azure Stack HCI.
For more information about creating and managing Azure Storage Accounts, see
Create a storage account.
Register Azure Stack HCI with Azure
Article • 05/12/2023
Now that you've deployed the Azure Stack HCI operating system and created a cluster,
you must register it with Azure.
This article describes how to register Azure Stack HCI with Azure via Windows Admin
Center or PowerShell. For information on how to manage cluster registration, see
Manage cluster registration.
After registration, an Azure Resource Manager resource is created to represent the on-
premises Azure Stack HCI cluster. Starting with Azure Stack HCI, version 21H2,
registering a cluster automatically creates an Azure Arc-enabled server resource for each
server in the Azure Stack HCI cluster. This Azure Arc integration extends the Azure
management plane to Azure Stack HCI. The Azure Arc integration enables periodic
syncing of information between the Azure resource and the on-premises clusters.
Prerequisites
Before you begin cluster registration, make sure the following prerequisites are in place:
Azure Stack HCI system deployed and online. Make sure the system is deployed
and all servers are online.
Network connectivity. Azure Stack HCI needs to periodically connect to the Azure
public cloud. For information on how to prepare your firewalls and set up a proxy
server, see Firewall requirements for Azure Stack HCI.
Azure subscription and permissions. Make sure you have an Azure subscription
and you know the Azure region where the cluster resources should be created. For
more information about Azure subscription and supported Azure regions, see
Azure requirements.
Windows Admin Center. If you're using Windows Admin Center to register the
cluster, make sure you:
To register your cluster in Azure China, install Windows Admin Center version
2103.2 or later.
Azure policies. Make sure you don't have any conflicting Azure policies that might
interfere with cluster registration. Some of the common conflicting policies can be:
Resource group naming: Make sure that the naming does not conflict with the
existing policies.
Resource group tags: Currently Azure Stack HCI does not support adding tags
to resource groups during cluster registration. Make sure your policy accounts
for this behavior.
.msi download: Azure Stack HCI downloads the Arc agent on the cluster nodes
during cluster registration. Make sure you don't restrict these downloads.
Credentials lifetime: By default, the Azure Stack HCI service requests two years
of credential lifetime. Make sure your Azure policy doesn't have any
configuration conflicts.
Note
If you have a separate resource group for Arc-for-Server resources, we
recommend using a resource group that contains only the Arc-for-Server
resources related to Azure Stack HCI. The Azure Stack HCI resource provider has
permissions to manage any other Arc-for-Server resources in that resource
group.
Contributor role: Required to register and unregister the Azure Stack HCI cluster.
JSON
{
  "Id": null,
  "IsCustom": true,
  "Actions": [
    "Microsoft.Resources/subscriptions/resourceGroups/read",
    "Microsoft.Resources/subscriptions/resourceGroups/write",
    "Microsoft.Resources/subscriptions/resourceGroups/delete",
    "Microsoft.AzureStackHCI/register/action",
    "Microsoft.AzureStackHCI/Unregister/Action",
    "Microsoft.AzureStackHCI/clusters/*",
    "Microsoft.Authorization/roleAssignments/write",
    "Microsoft.Authorization/roleAssignments/read",
    "Microsoft.HybridCompute/register/action",
    "Microsoft.GuestConfiguration/register/action",
    "Microsoft.HybridConnectivity/register/action"
  ],
  "NotActions": [
  ],
  "AssignableScopes": [
    "/subscriptions/<subscriptionId>"
  ]
}
The following permissions are required to register and unregister the Azure Stack HCI
cluster:
"Microsoft.Resources/subscriptions/resourceGroups/read",
"Microsoft.Resources/subscriptions/resourceGroups/write",
"Microsoft.Resources/subscriptions/resourceGroups/delete",
"Microsoft.AzureStackHCI/register/action",
"Microsoft.AzureStackHCI/Unregister/Action",
"Microsoft.AzureStackHCI/clusters/*",
"Microsoft.Authorization/roleAssignments/read",
"Microsoft.Authorization/roleAssignments/write"
The following permissions are required to register and unregister the Arc for server
resources:
"Microsoft.HybridCompute/register/action",
"Microsoft.GuestConfiguration/register/action",
"Microsoft.HybridConnectivity/register/action"
Register a cluster
You can register your Azure Stack HCI cluster using Windows Admin Center or
PowerShell.
Follow these steps to register Azure Stack HCI with Azure via Windows Admin
Center:
3. In Windows Admin Center, select Cluster Manager from the top drop-down
arrow.
5. On Dashboard, under Azure Arc, check the status of Azure Stack HCI
registration and Arc-enabled servers.
6. If your cluster isn't registered, under Azure Stack HCI registration, select
Register to proceed.
Note
If you didn't register Windows Admin Center with Azure earlier, you are
asked to do so now. Instead of the cluster registration wizard, you'll see
the Windows Admin Center registration wizard.
7. Specify the Azure subscription ID to which you want to register the cluster. To
get your Azure subscription ID, visit the Azure portal, navigate to
Subscriptions, and copy/paste your ID from the list.
9. Select one of the following options to select the Azure Stack HCI resource
group:
Select Use existing to create the Azure Stack HCI cluster and Arc for
Server resources in an existing resource group.
Select Create new to create a new resource group. Enter a name for the
new resource group.
Depending on your cluster configuration and requirements, you may need to take the
following actions to manage the cluster registration:
For information on how to manage your cluster registration, see Manage cluster
registration.
Next steps
To perform the next management task related to this article, see:
Validate an Azure Stack HCI cluster
Applies to: Azure Stack HCI, versions 22H2 and 21H2; Windows Server 2022,
Windows Server 2019
Although the Create cluster wizard in Windows Admin Center performs certain
validations to create a working cluster with the selected hardware, cluster validation
performs additional checks to make sure the cluster will work in a production
environment. This how-to article focuses on why cluster validation is important, and
when to run it on an Azure Stack HCI cluster.
After deploying a server cluster, run the Validate-DCB tool to test networking.
After updating a server cluster, depending on your scenario, run both validation
options to troubleshoot cluster issues.
After setting up replication with Storage Replica, validate that the replication is
proceeding normally by checking some specific events and running a couple
commands.
After creating a server cluster, run the Validate-DCB tool before placing it into
production.
Before adding a server to the cluster: When you add a server to a cluster, we
strongly recommend validating the cluster. Specify both the existing cluster
members and the new server when you run cluster validation.
When adding drives: When you add additional drives to the cluster, which is
different from replacing failed drives or creating virtual disks or volumes that
rely on the existing drives, run cluster validation to confirm that the new storage
will function correctly.
When making changes that affect firmware or drivers: If you upgrade or make
changes to the cluster that affect firmware or drivers, you must run cluster
validation to confirm that the new combination of hardware, firmware, drivers,
and software supports failover cluster functionality.
After restoring a system from backup: After you restore a system from backup,
run cluster validation to confirm that the system functions correctly as part of a
cluster.
Validate networking
The Microsoft Validate-DCB tool is designed to validate the Data Center Bridging (DCB)
configuration on the cluster. To do this, the tool takes an expected configuration as
input, and then tests each server in the cluster. This section covers how to install and run
the Validate-DCB tool, review results, and resolve networking errors that the tool
identifies.
Prerequisites
Network setup information of the server cluster that you want to validate,
including:
Host or server cluster name
Virtual switch name
Network adapter names
Priority Flow Control (PFC) and Enhanced Transmission Selection (ETS) settings
An internet connection to download the tool module in Windows PowerShell from
Microsoft.
PowerShell
Install-Module Validate-DCB
2. Accept the requests to use the NuGet provider and access the repository to install
the tool.
3. After PowerShell connects to the Microsoft network to download the tool, type
Validate-DCB and press Enter to start the tool wizard.
Note
If you cannot run the Validate-DCB tool script, you might need to adjust your
PowerShell execution policies. Use the Get-ExecutionPolicy cmdlet to view
your current script execution policy settings. For information on setting
execution policies in PowerShell, see About Execution Policies.
5. On the Clusters and Nodes page, type the name of the server cluster that you want
to validate, select Resolve to list it on the page, and then select Next.
7. On the Data Center Bridging page, modify the values to match your organization's
settings for Priority, Policy Name, and Bandwidth Reservation, and then select
Next.
Note
Selecting RDMA over RoCE on the previous wizard page requires DCB for
network reliability on all NICs and switchports.
8. On the Save and Deploy page, in the Configuration File Path box, save the
configuration file using .ps1 extension to a location where you can use it again
later if needed, and then select Export to start running the Validate-DCB tool.
You can optionally deploy your configuration file by completing the Deploy
Configuration to Nodes section of the page, which includes the ability to use
an Azure Automation account to deploy the configuration and then validate
it. See Create an Azure Automation account to get started with Azure
Automation.
Review results and fix errors
The Validate-DCB tool produces results in two units:
1. [Global Unit] results list prerequisites and requirements to run the modal tests.
2. [Modal Unit] results provide feedback on each cluster host configuration and best
practices.
This example shows successful scan results of a single server for all prerequisites and
modal unit tests by indicating a Failed Count of 0.
The following steps show how to identify a Jumbo Packet error from vNIC SMB02 and fix
it:
1. The results of the Validate-DCB tool scans show a Failed Count error of 1.
2. Scrolling back through the results shows an error in red indicating that the Jumbo
Packet for vNIC SMB02 on Host S046036 is set at the default size of 1514, but
should be set to 9014.
3. Reviewing the Advanced properties of vNIC SMB02 on Host S046036 shows that
the Jumbo Packet is set to the default of Disabled.
4. Fixing the error requires enabling the Jumbo Packet feature and changing its size
to 9014 bytes. Running the scan again on host S046036 confirms this change by
returning a Failed Count of 0.
To learn more about resolving errors that the Validate-DCB tool identifies, see the
following video.
https://www.youtube-nocookie.com/embed/cC1uACvhPBs
You can also install the tool offline. For disconnected systems, use Save-Module -Name
Validate-DCB -Path c:\temp\Validate-DCB and then move the modules in
c:\temp\Validate-DCB to the disconnected system.
1. In Windows Admin Center, under All connections, select the Azure Stack HCI
cluster that you want to validate, and then select Connect.
The Cluster Manager Dashboard displays overview information about the cluster.
3. On the Inventory page, select the servers in the cluster, then expand the More
submenu and select Validate cluster.
Cluster validation runs in the background and gives you a notification when it's
complete, at which point you can view the validation report, as described in the
next section.
Note
After your cluster servers have been validated, you will need to disable CredSSP for
security reasons.
Disable CredSSP
After your server cluster is successfully validated, you'll need to disable the Credential
Security Support Provider (CredSSP) protocol on each server for security purposes. For
more information, see CVE-2018-0886 .
1. In Windows Admin Center, under All connections, select the first server in your
cluster, and then select Connect.
2. On the Overview page, select Disable CredSSP, and then on the Disable CredSSP
pop-up window, select Yes.
The result of Step 2 removes the red CredSSP ENABLED banner at the top of the
server's Overview page, and disables CredSSP on the other servers.
On the Inventory page, expand the More submenu, and then select View
validation reports.
At the top right of Windows Admin Center, select the Notifications bell icon to
display the Notifications pane.
Select the Successfully validated cluster notice,
and then select Go to Failover Cluster validation report.
Note
The server cluster validation process may take some time to complete. Don't switch
to another tool in Windows Admin Center while the process is running. In the
Notifications pane, a status bar below your Validate cluster notice indicates when
the process is done.
To run a validation test on a server cluster, issue the Get-Cluster and Test-Cluster
<server clustername> PowerShell cmdlets from your management PC, or run only the
Test-Cluster cmdlet directly on the cluster:
PowerShell
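# Sketch: the node names are placeholders; run from a management PC or directly on a cluster node
Test-Cluster -Node "Server1", "Server2"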
For more examples and usage information, see the Test-Cluster reference
documentation.
Test-NetStack is a PowerShell-based testing tool available from GitHub that you can
use to perform ICMP, TCP, and RDMA traffic testing of networks and identify potential
network fabric and host misconfigurations or operational instability. Use Test-NetStack
to validate network data paths by testing native, synthetic, and hardware offloaded
(RDMA) network data paths for issues with connectivity, packet fragmentation, low
throughput, and congestion.
Validate replication for Storage Replica
If you're using Storage Replica to replicate volumes in a stretched cluster or cluster-to-
cluster, there are several events and cmdlets that you can use to get the state of
replication.
To determine the replication progress for Server1 in Site1, run the Get-WinEvent
command and examine events 5015, 5002, 5004, 1237, 5001, and 2200:
PowerShell
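# Sketch: the server name is a placeholder and the Storage Replica provider name is assumed;
# filters to the event IDs called out above
Get-WinEvent -ComputerName "Server1" -ProviderName Microsoft-Windows-StorageReplica |
    Where-Object { $_.Id -in 5015, 5002, 5004, 1237, 5001, 2200 } |
    Format-Table TimeCreated, Id, Message -AutoSize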
For Server3 in Site2, run the following Get-WinEvent command to see the Storage
Replica events that show creation of the partnership. This event states the number of
copied bytes and the time taken. For example:
PowerShell
For Server3 in Site2, run the Get-WinEvent command and examine events 5009, 1237,
5001, 5015, 5005, and 2200 to understand the processing progress. There should be no
warnings or errors in this sequence. There will be many 1237 events; these indicate
progress.
PowerShell
Alternatively, the destination server group for the replica states the number of bytes
remaining to copy at all times, and can be queried through PowerShell with Get-
SRGroup . For example:
PowerShell
(Get-SRGroup).Replicas | Select-Object numofbytesremaining
For node Server3 in Site2, run the following command and examine events 5009, 1237,
5001, 5015, 5005, and 2200 to understand the replication progress. There should be no
warnings or errors. However, there will be many 1237 events; these simply indicate
progress.
PowerShell
# Polls every 5 seconds and shows how many bytes remain to be replicated
while($true) {
    (Get-SRGroup).Replicas | Select-Object numofbytesremaining
    Start-Sleep -s 5
}
To get replication state within the stretched cluster, use Get-SRGroup and Get-
SRPartnership :
PowerShell
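# Sketch: run on a cluster node; lists the replication groups and the partnership between the sites
Get-SRGroup
Get-SRPartnership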
Once successful data replication is confirmed between sites, you can create your VMs
and other workloads.
See also
Performance testing against synthetic workloads in a newly created storage space
using DiskSpd.exe. To learn more, see Test Storage Spaces Performance Using
Synthetic Workloads in Windows Server.
Windows Server Assessment is a Premier Service available for customers who want
Microsoft to review their installations. For more information, contact Microsoft
Premier Support. To learn more, see Getting Started with the Windows Server On-
Demand Assessment (Server, Security, Hyper-V, Failover Cluster, IIS).
Migrate to Azure Stack HCI on new hardware
Article • 04/17/2023
Applies to: Azure Stack HCI, versions 22H2 and 21H2; Windows Server 2022,
Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, Windows
Server 2008 R2
This topic describes how to migrate virtual machine (VM) files on Windows Server 2012
R2, Windows Server 2016, or Windows Server 2019 to new Azure Stack HCI server
hardware using Windows PowerShell and Robocopy. Robocopy is a robust method for
copying files from one server to another. It resumes if disconnected and continues to
work from its last known state. Robocopy also supports multi-threaded file copy over
Server Message Block (SMB). For more information, see Robocopy.
Note
Hyper-V Live Migration and Hyper-V Replica from Windows Server to Azure Stack
HCI is not supported. However, Hyper-V replica is valid and supported between HCI
systems. You can't replicate a VM to another volume in the same cluster, only to
another HCI system.
If you have VMs on Windows 2012 R2 or older that you want to migrate, see Migrating
older VMs.
To migrate to Azure Stack HCI using the same hardware, see Migrate to Azure Stack HCI
on the same hardware.
The following diagram shows a Windows Server source cluster and an Azure Stack HCI
destination cluster as an example. You can also migrate VMs on stand-alone servers.
In terms of expected downtime, using a single NIC with a dual 40 GB RDMA East-West
network between clusters, and Robocopy configured for 32 multithreads, you can realize
transfer speeds of 1.9 TB per hour.
Note
You must have domain credentials with administrator permissions for both source
and destination clusters, with full rights to the source and destination
Organizational Unit (OU) that contains both clusters.
Both clusters must be in the same Active Directory forest and domain to facilitate
Kerberos authentication between clusters for migration of VMs.
Both clusters must reside in an Active Directory OU with Group Policy Object (GPO)
Block inheritance set on this OU. This ensures no domain-level GPOs and security
policies can impact the migration.
Both clusters must be connected to the same time source to support consistent
Kerberos authentication between clusters.
Make note of the Hyper-V virtual switch name used by the VMs on the source
cluster. You must use the same virtual switch name for the Azure Stack HCI
destination cluster "virtual machine network" prior to importing VMs.
Remove any ISO image files for your source VMs. This is done using Hyper-V
Manager in VM Properties in the Hardware section. Select Remove for any virtual
CD/DVD drives.
Shutdown all VMs on the source cluster. This is required to ensure version control
and state are maintained throughout the migration process.
Check if Azure Stack HCI supports your version of the VMs to import and update
your VMs as needed. See the VM version support and update section on how to
do this.
Backup all VMs on your source cluster. Complete a crash-consistent backup of all
applications and data and an application-consistent backup of all databases. To
backup to Azure, see Use Azure Backup.
Make a checkpoint of your source cluster VMs and domain controller in case you
have to roll back to a prior state. This is not applicable for physical servers.
Ensure the maximum Jumbo frame sizes are the same between source and
destination cluster storage networks, specifically the RDMA network adapters and
their respective switch network ports to provide the most efficient end-to-end
transfer packet size.
Make note of the Hyper-V virtual switch name on the source cluster. You will reuse
it on the destination cluster.
The Azure Stack HCI hardware should have at least equal capacity and
configuration as the source hardware.
Minimize the number of network hops or physical distance between the source
and destination clusters to facilitate the fastest file transfer.
(Table: OS version and corresponding VM version.)
For VMs on Windows Server 2008 SP1, Windows Server 2008 R2-SP1, and Windows
Server 2012, the VM version will be less than version 5.0. These VMs also use an .xml file
for configuration instead of a .vmcx file. As such, a direct import of the VM to Azure Stack
HCI is not supported. In these cases, you have two options, as detailed in Migrating
older VMs.
RDMA recommendations
If you are using Remote Direct Memory Access (RDMA), Robocopy can leverage it for
copying your VMs between clusters. Here are some recommendations for using RDMA:
Connect both clusters to the same top of rack (ToR) switch to use the fastest
network path between source and destination clusters. For the storage network
path this typically supports 10GbE/25GbE or higher speeds and leverages RDMA.
If the RDMA adapter or standard is different between source and destination
clusters (RoCE vs. iWARP), Robocopy will instead leverage SMB over TCP/IP via the
fastest available network. This will typically be a dual 10 GbE/25 GbE or higher-speed
East-West network, providing the most optimal way to copy VM VHDX files between
clusters.
Use Windows Admin Center or Windows PowerShell to create the new cluster. For
detailed information on how to do this, see Create an Azure Stack HCI cluster using
Windows Admin Center and Create an Azure Stack HCI cluster using Windows
PowerShell.
Important
Hyper-V virtual switch ( VMSwitch ) names between clusters must be the same. Make
sure that virtual switch names created on the destination cluster match those used
on the source cluster across all servers. Verify that the switch names are the same
before you import the VMs.
Note
You must register the Azure Stack HCI cluster with Azure before you can create new
VMs on it. For more information, see Register with Azure.
The migration script is run locally on each source server to leverage the benefit of
RDMA and fast network transfer. To do this:
1. Make sure each destination cluster node is set to the CSV owner for the
destination CSV.
2. To determine the location of all VM VHD and VHDX files to be copied, use the
following cmdlet. Review the C:\vmpaths.txt file to determine the topmost source
file path for Robocopy to start from for step 4:
PowerShell
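# Sketch: the search root is a placeholder; writes the folders that contain VHD/VHDX files to C:\vmpaths.txt
Get-ChildItem -Path "C:\ClusterStorage" -Recurse -Include *.vhd, *.vhdx |
    Select-Object -ExpandProperty DirectoryName -Unique |
    Out-File -FilePath "C:\vmpaths.txt"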
Note
If your VHD and VHDX files are located in different paths on the same volume,
you will need to run the migration script for each different path to copy them
all.
3. Change the following three variables to match the source cluster VM path with the
destination cluster VM path:
$Dest_Server = "Node01"
$source = "C:\Clusterstorage\Volume01"
$dest = "\\$Dest_Server\C$\Clusterstorage\Volume01"
PowerShell
<#
#===========================================================================
# Script: Robocopy_Remote_Server_.ps1
#===========================================================================
.DESCRIPTION
Change the following variables to match your source cluster VM path with the
destination cluster VM path. Then run this script on each source cluster
node CSV owner and make sure the destination cluster node is set to the CSV
owner for the destination CSV.
#>
$Dest_Server = "Node01"
$source = "C:\Clusterstorage\Volume01"
$dest = "\\$Dest_Server\C$\Clusterstorage\Volume01"
$date = Get-Date -Format "yyyyMMdd-HHmmss"
$Logfile = "c:\temp\Robocopy1-$date.txt"
$what = @("/COPYALL")
$options = @("/E","/MT:32","/R:0","/W:1","/NFL","/NDL","/LOG:$Logfile")
$cmdArgs = @("$source","$dest",$what,$options)

# Copy the files and report the elapsed time
$startDTM = (Get-Date)
robocopy @cmdArgs
$endDTM = (Get-Date)
Write-Host "Elapsed time: $(($endDTM - $startDTM).TotalMinutes) minutes"
Perform the following steps on your Azure Stack HCI cluster to import the VMs, make
them highly available, and start them:
PowerShell
Get-ClusterSharedVolume
2. For each server node, go to C:\Clusterstorage\Volume and set the path for all VMs
- for example C:\Clusterstorage\volume01 .
3. Run the following cmdlet on each CSV owner node to display the path to all VM
VMCX files per volume prior to VM import. Modify the path to match your
environment:
PowerShell
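# Sketch: the volume path is a placeholder; lists every VM configuration (.vmcx) file on the volume
Get-ChildItem -Path "C:\ClusterStorage\Volume01" -Recurse -Include *.vmcx |
    Select-Object -ExpandProperty FullName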
Note
Windows Server 2012 R2 and older VMs use an XML file instead of a VMCX
file. For more information, see the section Migrating older VMs.
4. Run the following cmdlet for each server node to import, register, and make the
VMs highly available on each CSV owner node. This ensures an even distribution of
VMs for optimal processor and memory allocation:
PowerShell
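# Sketch: the volume path is a placeholder; imports each VM in place from its .vmcx file
# and then makes it highly available in the cluster
Get-ChildItem -Path "C:\ClusterStorage\Volume01" -Recurse -Include *.vmcx | ForEach-Object {
    $vm = Import-VM -Path $_.FullName
    Add-ClusterVirtualMachineRole -VMName $vm.Name
}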
PowerShell
Start-VM -Name <VMName>
6. Log on and verify that all VMs are running and that all your apps and data are
there:
PowerShell
7. Update your VMs to the latest version for Azure Stack HCI to take advantage of all
the advancements:
PowerShell
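# Sketch: the VMs must be shut down before their configuration version can be updated
Get-VM | Update-VMVersion -Confirm:$false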
8. After the script has completed, check the Robocopy log file for any errors and verify
that all VMs were copied successfully.
Migrating older VMs
Migrate these VMs to Windows Server 2012 R2, Windows Server 2016, or Windows
Server 2019 first, update the VM version, then begin the migration process.
Use Robocopy to copy all VM VHDs to Azure Stack HCI. Then create new VMs and
attach the copied VHDs to the VMs in Azure Stack HCI. This bypasses the VM
version limitation for these older VMs.
Windows Server 2012 R2 and older Hyper-V hosts use an XML file format for their VM
configuration, which is different from the VMCX file format used for Windows Server
2016 and later Hyper-V hosts. This requires a different Robocopy command to copy
these VMs to Azure Stack HCI.
1. Discover the location of all VM VHD and VHDX files to be copied, then review the
vmpaths.txt file to determine the topmost source file path for Robocopy to start
from. Use the following cmdlet:
PowerShell
2. Use the following example Robocopy command to copy VMs to Windows Server
2012 R2 first using the topmost path determined in step 1:
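A sketch of such a command; the source, destination, and log paths are placeholders:
PowerShell
# /E copies subfolders, /COPYALL copies all file info and permissions, /MT:32 uses 32 copy threads
robocopy "C:\VMs" "\\WS2012R2-Node01\C$\ClusterStorage\Volume01" /E /COPYALL /MT:32 /R:0 /W:1 /LOG:C:\temp\robocopy.txt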
3. Verify that the virtual switch ( VMSwitch ) name used on the Windows Server 2012 R2
cluster is the same as the switch name used on the Windows Server 2008 R2 or Windows
Server 2008 R2-SP1 source. To display the switch names used across all servers in a
cluster, use this:
PowerShell
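# Sketch: lists the virtual switch name on every node in the cluster
Get-VMSwitch -ComputerName (Get-ClusterNode).Name | Select-Object ComputerName, Name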
Rename the switch name on Windows Server 2012 R2 as needed. To rename the
switch name across all servers in the cluster, use this:
PowerShell
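# Sketch: the switch names are placeholders; renames the switch on every node in the cluster
Rename-VMSwitch -ComputerName (Get-ClusterNode).Name -Name "OldSwitchName" -NewName "NewSwitchName"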
5. On Windows Server 2012 R2, update the VM version to 5.0 for all VMs:
PowerShell
7. Follow the process in Import the VMs, replacing Step 3 and Step 4 with the
following to handle the XML files and to import the VMs to Azure Stack HCI:
PowerShell
PowerShell
VMs hosted on Windows 2008 SP1 and Windows 2008 R2-SP1 support only Generation
1 VMs with Generation 1 VHDs. As such, corresponding Generation 1 VMs need to be
created on Azure Stack HCI so that the copied VHDs can be attached to the new VMs.
Note that these VHDs cannot be upgraded to Generation 2 VHDs.
1. Use the example Robocopy command to copy VM VHDs directly to Azure Stack HCI:
2. Create new Generation 1 VMs. For detailed information on how to do this, see
Manage VMs.
3. Attach the copied VHD files to the new VMs. For detailed information, see Manage
Virtual Hard Disks (VHD)
For reference, the following Windows Server guest operating systems support Generation 2
VMs:
Next steps
Validate the cluster after migration. See Validate an Azure Stack HCI cluster.
To migrate to Azure Stack HCI in-place using the same hardware, see Migrate to
Azure Stack HCI on the same hardware.
Migrate to Azure Stack HCI on same hardware
Article • 04/17/2023
Applies to: Azure Stack HCI, versions 22H2 and 21H2; Windows Server 2022,
Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, Windows
Server 2008 R2
This topic describes how to migrate a Windows Server failover cluster to Azure Stack HCI
using your existing server hardware. This process installs the new Azure Stack HCI
operating system and retains your existing cluster settings and storage, and imports
your VMs.
The following diagram depicts migrating your Windows Server cluster in-place using the
same server hardware. After shutting your cluster down, Azure Stack HCI is installed,
storage is reattached, and your VMs are imported and made highly available (HA).
To migrate your VMs to new Azure Stack HCI hardware, see Migrate to Azure Stack HCI
on new hardware.
Note
Backup all VMs on your source cluster. Complete a crash-consistent backup of all
applications and data and an application-consistent backup of all databases. To
backup to Azure, see Use Azure Backup.
Collect inventory and configuration of all cluster nodes and cluster naming,
network configuration, Cluster Shared Volume (CSV) resiliency and capacity, and
quorum witness.
Shutdown your cluster VMs, offline CSVs, offline storage pools, and the cluster
service.
Disable the Cluster Name Object (CNO) (it is reused later) and:
Check that the CNO has Create Object rights to its own Organizational Unit
(OU)
Check that the block inherited policy has been set on the OU
Set the required policy for Azure Stack HCI on this OU
(Table: OS version and corresponding VM version.)
Migrate these VMs to Windows Server 2012 R2, Windows Server 2016, or Windows
Server 2019 first, update the VM version, then begin the migration process.
Use Robocopy to copy all VM VHDs to Azure Stack HCI. Then create new VMs and
attach the copied VHDs to their respective VMs in Azure Stack HCI. This bypasses
the VM version limitation for these older VMs.
PowerShell
To show all VM versions across all nodes on your Windows Server cluster:
PowerShell
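# Sketch: lists the configuration version of every VM on every node in the cluster
Get-VM -ComputerName (Get-ClusterNode).Name | Format-Table Name, ComputerName, Version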
To update all VMs to the latest version on all Windows Server nodes:
PowerShell
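# Sketch: the VMs must be shut down first; updates every VM on every node to the latest version
Get-VM -ComputerName (Get-ClusterNode).Name | Update-VMVersion -Confirm:$false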
1. Shutdown your existing cluster VMs, offline CSVs, offline storage pools, and the
cluster service.
2. Go to the location where you downloaded the Azure Stack HCI bits, then run Azure
Stack HCI setup on each Windows Server node.
3. During setup, select Custom: Install the newer version of Azure Stack HCI only
(Advanced). Repeat for each server.
4. Create the new Azure Stack HCI cluster. You can use Windows Admin Center or
Windows PowerShell to do this, as described below.
) Important
The Hyper-V virtual switch ( VMSwitch ) name must be the same name captured in the
cluster configuration inventory. Make sure the virtual switch name used on the
Azure Stack HCI cluster matches the original source virtual switch name before you
import the VMs.
7 Note
You must register the Azure Stack HCI cluster with Azure before you can create new
VMs on it. For more information, see Register with Azure.
For detailed information on how to create the cluster, see Create an Azure Stack HCI
cluster using Windows Admin Center.
) Important
Skip step 4.1 Clean drives in the Create cluster wizard. Otherwise you will delete
your existing VMs and storage.
1. Start the Create Cluster wizard. When you get to Step 4: Storage:
PowerShell
Enable-ClusterS2D -Verbose
This imports the existing storage pool and assigns it to the SDDC Cluster Resource Group.
If migrating from Windows Server 2019, this also adds the existing
ClusterperformanceHistory ReFS volume and assigns it to the SDDC Cluster
Resource Group.
5. Go back to the wizard. In step 4.2 Verify drives, verify that all drives are listed
without warnings or errors.
PowerShell
For more information on how to create the cluster using PowerShell, see Create an
Azure Stack HCI cluster using Windows PowerShell.
7 Note
Re-use the same name for the previously disabled Cluster Name Object.
PowerShell
New-Cluster -Name "clustername" -Node Server01,Server02 -StaticAddress xx.xx.xx.xx -NoStorage
2. Run the following cmdlet to create the new Storagesubsystem Object ID,
rediscover all storage enclosures, and assign SES drive numbers:
PowerShell
Enable-ClusterS2D -Verbose
This imports the existing storage pool and assigns it to the SDDC Cluster Resource Group.
7 Note
If a storage pool shows Minority Disk errors (viewable in Cluster Manager), re-
run the Enable-ClusterS2D -verbose cmdlet.
6. Determine your current storage pool name and version by running the following:
PowerShell
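# Sketch: show the non-primordial pool's friendly name and version
Get-StoragePool -IsPrimordial $false | Select-Object FriendlyName, Version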
PowerShell
8. Create the quorum witness. For information on how, see Set up a cluster witness.
9. Verify that storage repair jobs are completed using the following:
PowerShell
Get-StorageJob
7 Note
This could take considerable time depending on the number of VMs running
during the upgrade.
10. Verify that all virtual disks are healthy:
PowerShell
Get-VirtualDisk
11. Determine the cluster node version, which displays ClusterFunctionalLevel and
ClusterUpgradeVersion . Run the following to get this:
PowerShell
Get-ClusterNodeSupportedVersion
7 Note
PowerShell
Get-StoragePool | Update-StoragePool
ReFS volumes
If you're migrating from Windows Server 2016, Resilient File System (ReFS) volumes are
supported, but they don't benefit from the mirror-accelerated parity (MAP) performance
enhancements in Azure Stack HCI. To get this enhancement, create a new ReFS volume using
the PowerShell New-Volume cmdlet.
ReFS compaction wasn't available for Windows Server 2016 MAP volumes, so reattaching these
volumes works, but they'll be less performant than a new MAP volume created on an Azure
Stack HCI cluster.
Perform the following steps on your Azure Stack HCI cluster to import the VMs, make
them highly available, and start them:
1. List the Cluster Shared Volumes (CSVs) and their owner nodes:
PowerShell
Get-ClusterSharedVolume
2. For each server node, go to C:\ClusterStorage\Volume and set the path for all VMs
- for example, C:\ClusterStorage\Volume01.
3. Run the following cmdlet on each CSV owner node to display the path to all VM
VMCX files per volume prior to VM import. Modify the path to match your
environment:
PowerShell
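# Sketch (volume path assumed): list the VM configuration (.vmcx) files on this volume
Get-ChildItem -Path "C:\ClusterStorage\Volume01" -Recurse -Include *.vmcx | Select-Object FullName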
4. Run the following cmdlet for each server node to import and register all VMs and
make them highly available on each CSV owner node. This ensures an even
distribution of VMs for optimal processor and memory allocation:
PowerShell
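# Sketch (volume path assumed): import each VM in place from its .vmcx file, then make it highly available
Get-ChildItem -Path "C:\ClusterStorage\Volume01" -Recurse -Include *.vmcx | ForEach-Object {
    $vm = Import-VM -Path $_.FullName -Register
    Add-ClusterVirtualMachineRole -VMName $vm.Name
}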
5. Start the VMs:
PowerShell
Start-VM -Name <VMName>
6. Log in and verify that all VMs are running and that all your apps and data are there:
PowerShell
7. Lastly, update your VMs to the latest Azure Stack HCI version to take advantage of
all the advancements:
PowerShell
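# Sketch: VMs must be off before their configuration version can be updated
Get-VM | Stop-VM -Force
Get-VM | Update-VMVersion -Force
Get-VM | Start-VM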
Next steps
Validate the cluster after migration. See Validate an Azure Stack HCI cluster.
To migrate Windows Server VMs to new Azure Stack HCI hardware, see Migrate to
Azure Stack HCI on new hardware.
Deploy Azure Stack HCI on a single
server
Article • 05/12/2023
This article describes how to use PowerShell to deploy Azure Stack HCI on a single
server that contains all NVMe or SSD drives, creating a single-node cluster. It also
describes how to add servers to the cluster (scale-out) later.
Currently you can't use Windows Admin Center to deploy Azure Stack HCI on a single
server. For more info, see Using Azure Stack HCI on a single server.
Prerequisites
A server from the Azure Stack HCI Catalog that's certified for use as a single-
node cluster and configured with all NVMe or all SSD drives.
For network, hardware and other requirements, see Azure Stack HCI network and
domain requirements.
Optionally, install Windows Admin Center to register and manage the server once
it has been deployed.
1. Install the Azure Stack HCI OS on your server. For more information, see Deploy
the Azure Stack HCI OS onto your server.
3. Install the required roles and features using the following command, then reboot
before continuing.
PowerShell
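# Sketch based on the feature list shown later in this document; adjust the list for your deployment
Install-WindowsFeature -Name "BitLocker", "Data-Center-Bridging", "Failover-Clustering", "FS-FileServer", "FS-Data-Deduplication", "Hyper-V", "Hyper-V-PowerShell", "RSAT-AD-Powershell", "RSAT-Clustering-PowerShell", "NetworkATC", "NetworkHUD" -IncludeAllSubFeature -IncludeManagementTools
Restart-Computer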
Here's an example of creating the cluster and then enabling Storage Spaces Direct
while disabling the storage cache:
PowerShell
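# Sketch (cluster and server names assumed): single-node cluster with the storage cache disabled
New-Cluster -Name "Cluster1" -Node "Server1" -NoStorage
Enable-ClusterStorageSpacesDirect -CacheState Disabled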
PowerShell
7 Note
6. Create volumes.
To install updates for Azure Stack HCI version 22H2, use Windows Admin Center (Cluster
Manager > Updates). Cluster Aware Updating (CAU) is supported beginning with this
version. To use PowerShell or connect via Remote Desktop and use Server Configuration
Tool (SConfig), see Update Azure Stack HCI clusters.
For solution updates (such as driver and firmware updates), see your solution vendor.
Change a single-node to a multi-node cluster
(optional)
You can add servers to your single-node cluster, also known as scaling out, though there
are some manual steps you must take to properly configure Storage Spaces Direct fault
domains ( FaultDomainAwarenessDefault ) in the process. These steps aren't required when
adding servers to clusters that already have two or more servers.
1. Validate the cluster by specifying the existing server and the new server. See Validate
an Azure Stack HCI cluster.
2. If cluster validation was successful, add the new server to the cluster. See Add or
remove servers for an Azure Stack HCI cluster.
3. Once the server is added, change the cluster's fault domain awareness from
PhysicalDisk to StorageScaleUnit: Inline fault domain changes.
4. Optionally, if more resiliency is needed, adjust the volume resiliency type from a 2-
way mirror to a Nested 2-way mirror: Single-server to two-node cluster.
5. Set up a cluster witness.
Next steps
Deploy workload – AVD
Deploy workload – AKS-HCI
Deploy workload – Azure Arc-enabled data services
Single server scale-out for your Azure
Stack HCI
Article • 07/10/2023
Azure Stack HCI version 22H2 supports inline fault domain and resiliency changes for
single-server cluster scale-out. This article describes how you can scale out your Azure
Stack HCI cluster.
Complete the following steps to correctly set fault domains after adding a node:
PowerShell
PowerShell
4. Generate new storage tiers and recreate the cluster performance history volume by
running the following command:
PowerShell
Enable-ClusterStorageSpacesDirect -Verbose
5. Remove storage tiers that are no longer applicable by running the following
command. See the Storage tier summary table for more information.
PowerShell
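# Sketch (the tier name shown is hypothetical): review the current tiers, then remove any that no longer apply
Get-StorageTier | Select-Object FriendlyName, ResiliencySettingName, FaultDomainAwareness
Remove-StorageTier -FriendlyName "OldCapacityTier"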
PowerShell
PowerShell
FaultDomainAwareness : StorageScaleUnit
PS C:\> Get-StorageJob
PowerShell
To check the fault domain awareness of storage tiers, run the following command:
PowerShell
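# List each storage tier and its fault domain awareness
Get-StorageTier | Select-Object FriendlyName, FaultDomainAwareness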
7 Note
Run the following command to check the progress of the resiliency changes. The repair
operation should be observed for all volumes in the cluster.
PowerShell
Get-StorageJob
For a non-tiered volume, run the following commands to first set the virtual disk:
PowerShell
PowerShell
Then, move the volume to a different node to remount the volume. A remount is
needed as ReFS only recognizes provisioning type at mount time.
PowerShell
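# Sketch (volume and node names assumed): move the CSV to another node so the volume remounts
Get-ClusterSharedVolume -Name "Cluster Virtual Disk (Volume01)" | Move-ClusterSharedVolume -Node "Server2"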
PowerShell
PowerShell
Then, move the volume to a different node to remount the volume. A remount is
needed as ReFS only recognizes provisioning type at mount time.
PowerShell
Next steps
See ReFS for more information.
Create an Azure Stack HCI cluster using
Windows PowerShell
Article • 04/17/2023
In this article, you learn how to use Windows PowerShell to create an Azure Stack HCI
hyperconverged cluster that uses Storage Spaces Direct. If you'd rather use the Cluster
Creation wizard in Windows Admin Center to create the cluster, see Create the cluster
with Windows Admin Center.
7 Note
If you're doing a single server installation of Azure Stack HCI 21H2, use PowerShell
to create the cluster.
Standard cluster with one or two server nodes, all residing in a single site.
Stretched cluster with at least four server nodes that span across two sites, with
two nodes per site.
For the single server scenario, complete the same instructions for the one server.
7 Note
In this article, we create an example cluster named Cluster1 that is composed of four
server nodes named Server1, Server2, Server3, and Server4.
For the stretched cluster scenario, we will use ClusterS1 as the name and use the same
four server nodes stretched across sites Site1 and Site2.
For more information about stretched clusters, see Stretched clusters overview.
If you're interested in testing Azure Stack HCI, but have limited or no spare hardware,
check out the Azure Stack HCI Evaluation Guide , where we'll walk you through
experiencing Azure Stack HCI using nested virtualization inside an Azure VM. Or try the
Create a VM-based lab for Azure Stack HCI tutorial to create your own private lab
environment using nested virtualization on a server of your choice to deploy VMs
running Azure Stack HCI for clustering.
You may need to specify the fully qualified domain name (FQDN) when using the -
ComputerName parameter for a server node.
You will also need the Remote Server Administration Tools (RSAT) cmdlets and
PowerShell modules for Hyper-V and Failover Clustering. If these aren't already available
in your PowerShell session on your management computer, you can add them using the
following command: Add-WindowsFeature RSAT-Clustering-PowerShell .
Open PowerShell and use either the fully-qualified domain name or the IP address of
the server you want to connect to. You'll be prompted for a password after you run the
following command on each server.
For this example, we assume that the servers have been named Server1, Server2,
Server3, and Server4:
PowerShell
PowerShell
$myServer1 = "Server1"
$user = "$myServer1\Administrator"
Tip
When running PowerShell commands from your management PC, you might get an
error like WinRM cannot process the request. To fix this, use PowerShell to add each
server to the Trusted Hosts list on your management computer. This list supports
wildcards, like Server* for example.
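For example, a sketch that adds a wildcard entry (the value shown is an assumption for your environment, and this overwrites any existing Trusted Hosts list):
PowerShell
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "Server*" -Force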
Use the Enter-PSSession cmdlet to connect to each server and run the following cmdlet,
substituting the server name, domain name, and domain credentials:
PowerShell
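# Sketch (server, domain, and account names assumed): rename the server and join it to the domain
Add-Computer -NewName "Server1" -DomainName "contoso.com" -Credential "contoso\Administrator" -Restart -Force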
If your administrator account isn't a member of the Domain Admins group, add your
administrator account to the local Administrators group on each server - or better yet,
add the group you use for administrators. You can use the following command to do so:
PowerShell
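# Sketch (account and group names assumed)
Add-LocalGroupMember -Group "Administrators" -Member "contoso\HciAdmins"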
BitLocker
Data Center Bridging
Failover Clustering
File Server
FS-Data-Deduplication module
Hyper-V
Hyper-V PowerShell
RSAT-AD-Clustering-PowerShell module
RSAT-AD-PowerShell module
NetworkATC
NetworkHUD
SMB Bandwidth Limit
Storage Replica (for stretched clusters)
Use the following command for each server (if you're connected via Remote Desktop
omit the -ComputerName parameter here and in subsequent commands):
PowerShell
Install-WindowsFeature -ComputerName "Server1" -Name "BitLocker", "Data-Center-Bridging", "Failover-Clustering", "FS-FileServer", "FS-Data-Deduplication", "FS-SMBBW", "Hyper-V", "Hyper-V-PowerShell", "RSAT-AD-Powershell", "RSAT-Clustering-PowerShell", "NetworkATC", "NetworkHUD", "Storage-Replica" -IncludeAllSubFeature -IncludeManagementTools
To run the command on all servers in the cluster at the same time, use the following
script, modifying the list of variables at the beginning to fit your environment:
PowerShell
Invoke-Command ($ServerList) {
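    # Continuation sketch: $ServerList is assumed to be defined above, for example:
    # $ServerList = "Server1", "Server2", "Server3", "Server4"
    Install-WindowsFeature -Name "BitLocker", "Data-Center-Bridging", "Failover-Clustering", "FS-FileServer", "FS-Data-Deduplication", "FS-SMBBW", "Hyper-V", "Hyper-V-PowerShell", "RSAT-AD-Powershell", "RSAT-Clustering-PowerShell", "NetworkATC", "NetworkHUD", "Storage-Replica" -IncludeAllSubFeature -IncludeManagementTools
}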
PowerShell
As a first sanity check, consider running the following commands to make sure that your
servers don't already belong to a cluster:
PowerShell
Get-ClusterNode
Get-ClusterResource
PowerShell
Get-ClusterNetwork
7 Note
Exclude any removable drives attached to a server node from the script. For example, if
you're running this script locally from a server node, you don't want to wipe the
removable drive you might be using to deploy the cluster.
PowerShell
Invoke-Command ($ServerList) {
    Update-StorageProviderCache
    # Clean every data disk (boot and system disks are skipped) and leave it offline and read-only
    Get-Disk | Where-Object Number -Ne $Null | Where-Object IsBoot -Ne $True | Where-Object IsSystem -Ne $True | Where-Object PartitionStyle -Ne RAW | ForEach-Object {
        $_ | Set-Disk -IsOffline:$false
        $_ | Set-Disk -IsReadOnly:$false
        $_ | Clear-Disk -RemoveData -RemoveOEM -Confirm:$false
        $_ | Set-Disk -IsReadOnly:$true
        $_ | Set-Disk -IsOffline:$true
    }
    # Report how many RAW (cleaned) disks each server now has, grouped by disk model
    Get-Disk | Where-Object Number -Ne $Null | Where-Object IsBoot -Ne $True | Where-Object IsSystem -Ne $True | Where-Object PartitionStyle -Eq RAW | Group-Object -NoElement -Property FriendlyName
}
PowerShell
When creating the cluster, you'll get a warning that states - "There were issues while
creating the clustered role that may prevent it from starting. For more
information, view the report file below." You can safely ignore this warning. It's due
to no disks being available for the cluster witness that you will create later.
7 Note
If the servers are using static IP addresses, modify the following command to reflect
the static IP address by adding the following parameter and specifying the IP
address: -StaticAddress <X.X.X.X>.
PowerShell
$ClusterName="cluster1"
After the cluster is created, it can take some time for the cluster name to be replicated
via DNS across your domain, especially if workgroup servers have been newly added to
Active Directory. Although the cluster might be displayed in Windows Admin Center, it
might not be available to connect to yet.
PowerShell
If resolving the cluster isn't successful after some time, in most cases you can connect by
using the name of one of the clustered servers instead of the cluster name.
Network ATC can automate the deployment of your intended networking configuration
if you specify one or more intent types for your adapters. For more information on
specific intent types, please see: Network Traffic Types.
PowerShell
If a physical adapter name varies across nodes in your cluster, you can rename it using
Rename-NetAdapter .
PowerShell
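# Sketch (adapter names assumed)
Rename-NetAdapter -Name "Ethernet 2" -NewName "pNIC01"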
Run the following command to add the storage and compute intent types to pNIC01
and pNIC02. Note that we specify the -ClusterName parameter.
PowerShell
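# Sketch (intent, adapter, and cluster names assumed)
Add-NetIntent -Name Compute_Storage -Compute -Storage -AdapterName pNIC01, pNIC02 -ClusterName Cluster1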
PowerShell
To see the provisioning status of the intent, run the Get-NetIntentStatus command:
PowerShell
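# Sketch (intent and cluster names assumed)
Get-NetIntentStatus -Name Compute_Storage -ClusterName Cluster1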
Note the Status parameter, which shows Provisioning, Validating, Success, or Failure.
The status should show Success within a few minutes. If it doesn't, or if you see a
Failure status, check Event Viewer for issues.
7 Note
At this time, Network ATC does not configure IP addresses for any of its managed
adapters. Once Get-NetIntentStatus reports status completed, you should add IP
addresses to the adapters.
7 Note
If you have set up Active Directory Sites and Services beforehand, you do not need
to create the sites manually as described below.
PowerShell
PowerShell
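# Sketch (site names assumed): create the two sites as cluster fault domains
New-ClusterFaultDomain -Name "Site1" -FaultDomainType Site -Description "Primary site"
New-ClusterFaultDomain -Name "Site2" -FaultDomainType Site -Description "Secondary site"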
Use the Get-ClusterFaultDomain cmdlet to verify that both sites have been created for
the cluster.
PowerShell
PowerShell
PowerShell
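# Sketch (server and site names assumed; -Parent is assumed to be the correct parameter): assign each node to its site
Set-ClusterFaultDomain -Name "Server1", "Server2" -Parent "Site1"
Set-ClusterFaultDomain -Name "Server3", "Server4" -Parent "Site2"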
Using the Get-ClusterFaultDomain cmdlet, verify the nodes are in the correct sites.
PowerShell
PowerShell
(Get-Cluster).PreferredSite = "Site1"
Specifying a preferred Site for stretched clusters has the following benefits:
Cold start - during a cold start, virtual machines are placed in the preferred site
Quorum voting
During a quorum split of two sites, if the cluster witness cannot be contacted,
the preferred site is automatically elected to win. The server nodes in the
passive site then drop out of cluster membership. This allows the cluster to
survive a simultaneous 50% loss of votes.
The preferred site can also be configured at the cluster role or group level. In this case, a
different preferred site can be configured for each virtual machine group. This enables a
site to be active and preferred for specific virtual machines.
PowerShell
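# Sketch (VM group name assumed): set a preferred site for a specific VM group
(Get-ClusterGroup -Name "VM1").PreferredSite = "Site2"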
A stretch intent can also be combined with other intents when deploying with Network
ATC.
SiteOverrides
Based on steps 5.1-5.3 you can add your pre-created sites to your stretch intent
deployed with Network ATC. Network ATC will handle this using SiteOverrides. To create
a SiteOverride, run:
PowerShell
$siteOverride = New-NetIntentSiteOverrides
Once you have created a siteOverride, you can set any of its properties. Make sure that
the Name property of the siteOverride exactly matches the name of your site in the
ClusterFaultDomain; if the names don't match, the siteOverride isn't applied.
The properties you can set for a particular siteOverride are Name, StorageVLAN, and
StretchVLAN. For example, here's how to create two siteOverrides for your two sites,
site1 and site2:
PowerShell
$siteOverride1 = New-NetIntentSiteOverrides
$siteoverride1.Name = "site1"
$siteOverride1.StorageVLAN = 711
$siteOverride1.StretchVLAN = 25
$siteOverride2 = New-NetIntentSiteOverrides
$siteOverride2.Name = "site2"
$siteOverride2.StorageVLAN = 712
$siteOverride2.StretchVLAN = 26
You can run $siteOverride1 and $siteOverride2 in your PowerShell window to verify that
all properties are set as desired.
PowerShell
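# Sketch only: the -Stretch and -SiteOverrides parameters are assumptions; check Get-Help Add-NetIntent for the exact syntax
Add-NetIntent -Name Stretch -Stretch -AdapterName pNIC03, pNIC04 -ClusterName ClusterS1 -SiteOverrides $siteOverride1, $siteOverride2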
Create a storage pool: Creates a storage pool for the cluster that has a name like
"Cluster1 Storage Pool".
Create data and log volumes: Creates a data volume and a log volume in the
storage pool.
Configure Storage Spaces Direct caches: If there is more than one media (drive)
type available for Storage Spaces Direct, it enables the fastest as cache devices
(read and write in most cases).
Create tiers: Creates two tiers as default tiers. One is called "Capacity" and the
other called "Performance". The cmdlet analyzes the devices and configures each
tier with the mix of device types and resiliency.
The following command enables Storage Spaces Direct on a multi-node cluster. You can
also specify a friendly name for a storage pool, as shown here:
PowerShell
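# Sketch (pool name assumed)
Enable-ClusterStorageSpacesDirect -PoolFriendlyName "Cluster1 Storage Pool"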
PowerShell
PowerShell
Set up a cluster witness if you're using a two-node or larger cluster. See Set up a
cluster witness.
Create your volumes. See Create volumes.
When creating volumes on a single-
node cluster, you must use PowerShell. See Create volumes using PowerShell.
For stretched clusters, create volumes and set up replication using Storage Replica.
See Create volumes and set up replication for stretched clusters.
Next steps
Register your cluster with Azure. See Connect Azure Stack HCI to Azure.
Do a final validation of the cluster. See Validate an Azure Stack HCI cluster
Manage host networking. See Manage host networking using Network ATC.
Deploy host networking with Network
ATC
Article • 05/22/2023
This article guides you through the requirements, best practices, and deployment of
Network ATC. Network ATC simplifies the deployment and network configuration
management for Azure Stack HCI clusters. Network ATC provides an intent-based
approach to host network deployment. By specifying one or more intents (management,
compute, or storage) for a network adapter, you can automate the deployment of the
intended configuration. For more information on Network ATC, including an overview
and definitions, please see Network ATC overview.
If you have feedback or encounter any issues, review the Requirements and best
practices section, check the Network ATC event log, and work with your Microsoft
support team.
The following are requirements and best practices for using Network ATC in Azure
Stack HCI:
All servers in the cluster must be running Azure Stack HCI, version 22H2 with
the November update (or later).
Must use physical hosts that are Azure Stack HCI certified.
Adapters in the same Network ATC intent must be symmetric (of the same
make, model, speed, and configuration) and available on each cluster node.
With Azure Stack HCI 22H2, Network ATC will automatically confirm adapter
symmetry for all nodes in the cluster before deploying an intent.
Ensure each network adapter has an "Up" status, as verified by the PowerShell
Get-NetAdapter cmdlet.
Ensure all hosts have the November Azure Stack HCI update or later.
Each node must have the following Azure Stack HCI features installed:
Network ATC
Network HUD
Hyper-V
Failover Clustering
Data Center Bridging
PowerShell
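# Sketch: install the features listed above on each node
Install-WindowsFeature -Name "NetworkATC", "NetworkHUD", "Hyper-V", "Failover-Clustering", "Data-Center-Bridging" -IncludeManagementTools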
Best practice: Insert each adapter in the same PCI slot(s) in each host. This
makes automated naming conventions from imaging systems easier to apply.
Best practice: Configure the physical network (switches) prior to Network ATC
including VLANs, MTU, and DCB configuration. For more information, please
see Physical Network Requirements.
) Important
Updated: Deploying Network ATC in virtual machines may be used for test and
validation purposes only. VM-based deployment requires an override to the default
adapter settings to disable the NetworkDirect property. For more information on
submission of an override, please see: Override default network settings.
Deploying Network ATC in standalone mode may be used for test and validation
purposes only.
The Remove-NetIntent cmdlet removes an intent from the local node or cluster. This
command doesn't destroy the invoked configuration.
Example intents
Network ATC modifies how you deploy host networking, not what you deploy. You can
deploy multiple scenarios so long as each scenario is supported by Microsoft. Here are
some examples of common deployment options, and the PowerShell commands
needed. These aren't the only combinations available but they should give you an idea
of the possibilities.
For simplicity we only demonstrate two physical adapters per SET team, however it's
possible to add more. For more information, please see Plan Host Networking.
22H2
PowerShell
Add-NetIntent -Name ConvergedIntent -Management -Compute -Storage -AdapterName pNIC01, pNIC02
22H2
PowerShell
22H2
PowerShell
Storage-only intent
For this intent, only storage is managed. Management and compute adapters aren't
managed by Network ATC.
22H2
PowerShell
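# Sketch (adapter names assumed): storage-only intent
Add-NetIntent -Name Storage -Storage -AdapterName pNIC03, pNIC04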
22H2
PowerShell
22H2
PowerShell
Default values
This section covers additional default values that Network ATC sets in versions 22H2
and later.
Default VLANs
Network ATC uses the following VLANs by default for adapters with the storage intent
type. If the adapters are connected to a physical switch, these VLANs must be allowed
on the physical network. If the adapters are switchless, no additional configuration is
required.
22H2
PowerShell
The physical NIC (or virtual NIC if necessary) is configured to use VLANs 711, 712, 713,
and 714 respectively.
7 Note
Network ATC allows you to change the VLANs used with the StorageVlans
parameter on Add-NetIntent .
Network ATC will automatically configure valid IP Addresses for adapters with the
storage intent type. Network ATC does this in a uniform manner across all nodes in your
cluster and verifies that the address chosen isn't already in use on the network.
The default IP Address for each adapter on each node in the storage intent will be set
up as follows:
To override Automatic Storage IP Addressing, create a storage override and pass the
override when creating an intent:
PowerShell
$StorageOverride = New-NetIntentStorageOverrides
$StorageOverride.EnableAutomaticIPGeneration = $false
PowerShell
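# Sketch (intent and adapter names assumed; -StorageOverrides is assumed to be the correct parameter name)
Add-NetIntent -Name Storage -Storage -AdapterName pNIC03, pNIC04 -StorageOverrides $StorageOverride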
Network ATC configures a set of Cluster Network Features by default. The defaults are
listed below:
Property Default
EnableNetworkNaming $true
EnableLiveMigrationNetworkSelection $true
EnableVirtualMachineMigrationPerformance $true
MaximumVirtualMachineMigrations 1
7 Note
Network ATC allows you to override default settings like default bandwidth
reservation. For examples, see Update or override network settings.
Error: AdapterBindingConflict
Scenario 1: An adapter is actually bound to an existing vSwitch that conflicts with the
new vSwitch that is being deployed by Network ATC.
Solution: Disable the vms_pp component (unbind the adapter from the vSwitch) then
run Set-NetIntentRetryState.
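For example, a minimal sketch of this solution (the adapter, cluster, and intent names are assumptions for your environment):
PowerShell
# Unbind the adapter from the existing vSwitch by disabling the Hyper-V Extensible Virtual Switch component
Disable-NetAdapterBinding -Name "pNIC01" -ComponentID vms_pp
# Then ask Network ATC to retry the intent
Set-NetIntentRetryState -ClusterName "Cluster1" -Name "ConvergedIntent"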
Error: ConflictingTrafficClass
This issue occurs because a traffic class is already configured. This preconfigured traffic
class conflicts with the traffic classes being deployed by Network ATC. For example, the
customer may have already deployed a traffic class called SMB when Network ATC will
deploy a similar traffic class with a different name.
Solution:
Clear the existing DCB configuration on the system then run Set-NetIntentRetryState
PowerShell
Get-NetQosTrafficClass | Remove-NetQosTrafficClass
Get-NetQosFlowControl | Disable-NetQosFlowControl
Error: RDMANotOperational
1. The network adapter uses an inbox driver. Inbox drivers aren't supported and
must be updated.
Error: InvalidIsolationID
This message will occur when RoCE RDMA is in use and you have overridden the default
VLAN with a value that can't be used with that protocol. For example, RoCE RDMA
requires a non-zero VLAN so that Priority Flow Control (PFC) markings can be added to
the frame. A VLAN value between 1 - 4094 must be used. Network ATC won't override
the value you specified without administrator intervention for several reasons. To resolve
this issue:
When specifying a VLAN, use the -StorageVLANs parameter and specify comma-separated
values between 1 and 4094.
Next steps
Manage your Network ATC deployment. See Manage Network ATC.
Learn more about Stretched clusters.
Deploy SDN using Windows Admin
Center
Article • 06/28/2023
Applies to: Azure Stack HCI, versions 22H2 and 21H2; Windows Server 2022
Datacenter, Windows Server 2019 Datacenter
This article describes how to deploy Software Defined Networking (SDN) through
Windows Admin Center after you configured your Azure Stack HCI cluster. Windows
Admin Center enables you to deploy all the SDN infrastructure components on your
existing Azure Stack HCI cluster, in the following deployment order:
Network Controller
Software Load Balancer (SLB)
Gateway
To deploy SDN Network Controller during cluster creation, see Step 5: SDN (optional) of
the Create cluster wizard.
Alternatively, you can deploy the entire SDN infrastructure through the SDN Express
scripts.
You can also deploy an SDN infrastructure using System Center Virtual Machine
Manager (VMM). For more information, see Manage SDN resources in the VMM fabric.
) Important
You can't use Microsoft System Center VMM 2019 to manage clusters running
Azure Stack HCI, version 21H2 or Windows Server 2022. Instead, you can use
Microsoft System Center VMM 2022.
) Important
You can't use Microsoft System Center VMM 2019 and Windows Admin Center to
manage SDN at the same time.
) Important
You can’t manage SDN on the Standard edition of Windows Server 2022 or
Windows Server 2019. This is due to the limitations in the Remote Server
Administration Tools (RSAT) installation on Windows Admin Center. However, you
can manage SDN on the Datacenter edition of Windows Server 2022 and Windows
Server 2019 and also on the Datacenter: Azure Edition of Windows Server 2022.
Requirements
The following requirements must be met for a successful SDN deployment:
7 Note
The version of the OS in your VHDX must match the version used by the Azure
Stack HCI Hyper-V hosts. This VHDX file is used by all SDN infrastructure
components.
Follow these steps to download an English version of the VHDX file:
2. Complete the download form and select Submit to display the Azure Stack HCI
software download page.
3. Under Azure Stack HCI, select English – VHDX from the Choose language
dropdown menu, and then select Download Azure Stack HCI.
Currently, a non-English VHDX file is not available for download. If you require a non-
English version, download the corresponding ISO file and convert it to VHDX using the
Convert-WindowsImage cmdlet. You must run this script from a Windows client computer.
You will probably need to run this as Administrator and modify the execution policy for
scripts using the Set-ExecutionPolicy command.
PowerShell
Import-Module Convert-WindowsImage
$wimpath = "E:\sources\install.wim"
$vhdpath = "D:\temp\AzureStackHCI.vhdx"
$edition=1
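# Continuation sketch (parameter values assumed): convert the WIM image to a bootable VHDX
Convert-WindowsImage -SourcePath $wimpath -Edition $edition -VHDPath $vhdpath -SizeBytes 500GB -DiskLayout UEFI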
1. In Windows Admin Center, under Tools, select Settings, and then select
Extensions.
2. On the Installed Extensions tab, verify that the SDN Infrastructure extension is
installed. If not, install it.
3. In Windows Admin Center, under Tools, select SDN Infrastructure, then click Get
Started.
4. Under Cluster settings, under Host, enter a name for the Network Controller. This
is the DNS name used by management clients (such as Windows Admin Center) to
communicate with Network Controller. You can also use the default populated
name.
5. Specify a path to the Azure Stack HCI VHD file. Use Browse to find it quicker.
9. Under Credentials, enter the username and password used to join the Network
Controller VMs to the cluster domain.
11. Under Advanced, enter the path to the VMs. You can also use the default
populated path.
7 Note
12. Enter values for MAC address pool start and MAC address pool end. You can also
use the default populated values. This is the MAC pool used to assign MAC
addresses to VMs attached to SDN networks.
14. Wait until the wizard completes its job. Stay on this page until all progress tasks
are complete, and then click Finish.
15. After the Network Controller VMs are created, configure dynamic DNS updates for
the Network Controller cluster name on the DNS server. For more information, see
Dynamic DNS updates.
1. Delete all Network Controller VMs and their VHDs from all server nodes.
2. Remove the following registry key from all hosts by running this command:
PowerShell
Remove-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\NcHostAgent\Parameters\' -Name Connections
3. After removing the registry key, remove the cluster from the Windows Admin
Center management, and then add it back.
7 Note
If you don't do this step, you may not see the SDN deployment wizard in
Windows Admin Center.
4. (Additional step only if you plan to uninstall Network Controller and not deploy it
again) Run the following cmdlet on all the servers in your Azure Stack HCI cluster,
and then skip the last step.
PowerShell
7 Note
1. In Windows Admin Center, under Tools, select Settings, and then select
Extensions.
2. On the Installed Extensions tab, verify that the SDN Infrastructure extension is
installed. If not, install it.
3. In Windows Admin Center, under Tools, select SDN Infrastructure, then click Get
Started on the Load Balancer tab.
4. Under Load Balancer Settings, under Front-End subnets, provide the following:
Public VIP subnet prefix. These can be public Internet subnets. They serve as
the front-end IP addresses for accessing workloads behind the load balancer,
which use IP addresses from a private backend network.
Private VIP subnet prefix. These don’t need to be routable on the public
Internet because they are used for internal load balancing.
5. Under BGP Router Settings, enter the SDN ASN for the SLB. This ASN is used to
peer the SLB infrastructure with the Top of the Rack switches to advertise the
Public VIP and Private VIP IP addresses.
6. Under BGP Router Settings, enter the IP Address and ASN of the Top of Rack
switch. SLB infrastructure needs these settings to create a BGP peer with the
switch. If you have an additional Top of Rack switch that you want to peer the SLB
infrastructure with, add IP Address and ASN for that switch as well.
7. Under VM Settings, specify a path to the Azure Stack HCI VHDX file. Use Browse to
find it quicker.
9. Under Network, enter the VLAN ID of the management network. SLB needs
connectivity to the same management network as the Hyper-V hosts so that it can
communicate and configure the hosts.
For DHCP, enter the name for the Network Controller VMs. You can also use
the default populated names.
11. Under Credentials, enter the username and password that you used to join the
Software Load Balancer VMs to the cluster domain.
12. Enter the local administrative password for these VMs.
13. Under Advanced, enter the path to the VMs. You can also use the default
populated path.
7 Note
15. Wait until the wizard completes its job. Stay on this page until all progress tasks
are complete, and then click Finish.
7 Note
Network Controller and SLB must be set up before you configure Gateways.
1. In Windows Admin Center, under Tools, select Settings, then select Extensions.
2. On the Installed Extensions tab, verify that the SDN Infrastructure extension is
installed. If not, install it.
3. In Windows Admin Center, under Tools, select SDN Infrastructure, then click Get
Started on the Gateway tab.
4. Under Define the Gateway Settings, under Tunnel subnets, provide the GRE
Tunnel Subnets. IP addresses from this subnet are used for provisioning on the
SDN gateway VMs for GRE tunnels. If you don't plan to use GRE tunnels, put any
placeholder subnets in this field.
5. Under BGP Router Settings, enter the SDN ASN for the Gateway. This ASN is used
to peer the gateway VMs with the Top of the Rack switches to advertise the GRE IP
addresses. This field is auto populated to the SDN ASN used by SLB.
6. Under BGP Router Settings, enter the IP Address and ASN of the Top of Rack
switch. Gateway VMs need these settings to create a BGP peer with the switch.
These fields are auto populated from the SLB deployment wizard. If you have an
additional Top of Rack switch that you want to peer the gateway VMs with, add IP
Address and ASN for that switch as well.
7. Under Define the Gateway VM Settings, specify a path to the Azure Stack HCI
VHDX file. Use Browse to find it quicker.
9. Enter the value for Redundant Gateways. Redundant gateways don't host any
gateway connections. In the event of a failure or restart of an active gateway VM,
gateway connections from the active VM are moved to the redundant gateway, which is
then marked as active. In a production deployment, we strongly recommend having at
least one redundant gateway.
7 Note
Ensure that the total number of gateway VMs is at least one more than the
number of redundant gateways. Otherwise, you won't have any active
gateways to host gateway connections.
10. Under Network, enter the VLAN ID of the management network. Gateways need
connectivity to the same management network as the Hyper-V hosts and Network
Controller VMs.
For DHCP, enter the name for the Gateway VMs. You can also use the default
populated names.
12. Under Credentials, enter the username and password used to join the Gateway
VMs to the cluster domain.
7 Note
15. Enter the path to the VMs. You can also use the default populated path.
17. Wait until the wizard completes its job. Stay on this page until all progress tasks
are complete, and then click Finish.
Next steps
Manage SDN logical networks. See Manage tenant logical networks.
Manage SDN virtual networks. See Manage tenant virtual networks.
Manage microsegmentation with datacenter firewall. See Use Datacenter Firewall
to configure ACLs.
Manage your VMs. See Manage VMs.
Manage Software Load Balancers. See Manage Software Load Balancers.
Manage Gateway connections. See Manage Gateway Connections.
Troubleshoot SDN deployment. See Troubleshoot Software Defined Networking
deployment via Windows Admin Center.
Deploy an SDN infrastructure using SDN
Express
Article • 06/28/2023
Applies to: Azure Stack HCI, versions 22H2 and 21H2; Windows Server 2022,
Windows Server 2019, Windows Server 2016
In this topic, you deploy an end-to-end Software Defined Network (SDN) infrastructure
using SDN Express PowerShell scripts. The infrastructure includes a highly available (HA)
Network Controller (NC), and optionally, a highly available Software Load Balancer (SLB),
and a highly available Gateway (GW). The scripts support a phased deployment, where
you can deploy just the Network Controller component to achieve a core set of
functionality with minimal network requirements.
You can also deploy an SDN infrastructure using Windows Admin Center or using
System Center Virtual Machine Manager (VMM). For more information, see Create a
cluster - Step 5: SDN and see Manage SDN resources in the VMM fabric.
) Important
You can't use Microsoft System Center Virtual Machine Manager 2019 to manage
clusters running Azure Stack HCI, version 21H2 or Windows Server 2022.
You do not have to deploy all SDN components. See the Phased deployment section of
Plan a Software Defined Network infrastructure to determine which infrastructure
components you need, and then run the scripts accordingly.
Make sure all host servers have the Azure Stack HCI operating system installed. See
Deploy the Azure Stack HCI operating system on how to do this.
Requirements
The following requirements must be met for a successful SDN deployment:
7 Note
The version of the OS in your VHDX must match the version used by the Azure
Stack HCI Hyper-V hosts. This VHDX file is used by all SDN infrastructure
components.
2. Complete the download form and select Submit to display the Azure Stack HCI
software download page.
3. Under Azure Stack HCI, select English – VHDX from the Choose language
dropdown menu, and then select Download Azure Stack HCI.
Currently, a non-English VHDX file is not available for download. If you require a non-
English version, download the corresponding ISO file and convert it to VHDX using the
Convert-WindowsImage cmdlet. You must run this script from a Windows client computer.
You will probably need to run this as Administrator and modify the execution policy for
scripts using the Set-ExecutionPolicy command.
PowerShell
Import-Module Convert-WindowsImage
$wimpath = "E:\sources\install.wim"
$vhdpath = "D:\temp\AzureStackHCI.vhdx"
$edition=1
7 Note
3. Extract the ZIP file and copy the SDNExpress folder to your deployment computer's
C:\ folder.
VHDPath - VHD file path used by all SDN infrastructure VMs (NC, SLB, GW)
VHDFile - VHDX file name used by all SDN infrastructure VMs
VMLocation - file path to SDN infrastructure VMs. Note that Universal Naming
Convention (UNC) paths aren't supported. For cluster storage-based paths, use a
format like C:\ClusterStorage\...
JoinDomain - domain to which SDN infrastructure VMs are joined to
SDNMacPoolStart - beginning MAC pool address for client workload VMs
SDNMacPoolEnd - end MAC pool address for client workload VMs
ManagementSubnet - management network subnet used by NC to manage
Hyper-V hosts, SLB, and GW components
ManagementGateway - Gateway address for the management network
ManagementDNS - DNS server for the management network
ManagementVLANID - VLAN ID for the management network
DomainJoinUsername - administrator user name
LocalAdminDomainUser - local administrator user name
RestName - DNS name used by management clients (such as Windows Admin
Center) to communicate with NC
HyperVHosts - host servers to be managed by Network Controller
NCUsername - Network Controller account user name
ProductKey - product key for SDN infrastructure VMs
SwitchName - only required if more than one virtual switch exists on the Hyper-V
hosts
VMMemory - memory (in GB) assigned to infrastructure VMs. Default is 4 GB
VMProcessorCount - number of processors assigned to infrastructure VMs.
Default is 8
Locale - if not specified, locale of deployment computer is used
TimeZone - if not specified, local time zone of deployment computer is used
The NCs = @() section is used for the Network Controller VMs. Make sure that the MAC
address of each NC VM is outside the SDNMACPool range listed in the General settings.
ComputerName - name of NC VM
HostName - host name of server where the NC VM is located
ManagementIP - management network IP address for the NC VM
MACAddress - MAC address for the NC VM
Leave this section empty ( Muxes = @() ) if not deploying the SLB component:
Gateway VM section
A minimum of two Gateway VMs (one active and one redundant) are recommended for
SDN.
The Gateways = @() section is used for the Gateway VMs. Make sure that the
MACAddress parameter of each Gateway VM is outside the SDNMACPool range listed in the
General settings. The FrontEndMac and BackendMac must be from within the SDNMACPool
range. Ensure that you get the FrontEndMac and the BackendMac parameters from the
end of the SDNMACPool range.
Leave this section empty ( Gateways = @() ) if not deploying the Gateway component:
SDNASN - Autonomous System Number (ASN) used by SDN to peer with network
switches
RouterASN - Gateway router ASN
RouterIPAddress - Gateway router IP address
PrivateVIPSubnet - virtual IP address (VIP) for the private subnet
PublicVIPSubnet - virtual IP address for the public subnet
The following additional parameters are used by Gateway VMs only. Leave these values
blank if you are not deploying Gateway VMs:
7 Note
If you fill in a value for RedundantCount, ensure that the total number of
gateway VMs is at least one more than the RedundantCount. By default, the
RedundantCount is 1, so you must have at least 2 gateway VMs to ensure
that there is at least 1 active gateway to host gateway connections.
Here's how Hyper-V Network Virtualization (HNV) Provider logical network allocates IP
addresses. Use this to plan your address space for the HNV Provider network.
1. Review the README.md file for late-breaking information on how to run the
deployment script.
2. Run the following command from a user account with administrative credentials
for the cluster host servers:
PowerShell
SDNExpress\scripts\SDNExpress.ps1 -ConfigurationDataFile MultiNodeSampleConfig.psd1 -Verbose
3. After the NC VMs are created, configure dynamic DNS updates for the Network
Controller cluster name on the DNS server. For more information, see Dynamic
DNS updates.
Next steps
Manage VMs
Learn module: Plan for and deploy SDN infrastructure on Azure Stack HCI
Create an Azure Stack HCI cluster using
Windows Admin Center
Article • 04/17/2023
Now that you've deployed the Azure Stack HCI operating system, you'll learn how to use
Windows Admin Center to create an Azure Stack HCI cluster that uses Storage Spaces
Direct, and, optionally, Software Defined Networking. The Create Cluster wizard in
Windows Admin Center will do most of the heavy lifting for you. If you'd rather do it
yourself with PowerShell, see Create an Azure Stack HCI cluster using PowerShell. The
PowerShell article is also a good source of information for what is going on under the
hood of the wizard and for troubleshooting purposes.
7 Note
If you are doing a single server installation of Azure Stack HCI 21H2, use
PowerShell to create the cluster.
If you're interested in testing Azure Stack HCI but have limited or no spare hardware, see
the Azure Stack HCI Evaluation Guide, where we'll walk you through experiencing Azure
Stack HCI using nested virtualization inside an Azure VM. Or try the Create a VM-based
lab for Azure Stack HCI tutorial to create your own private lab environment using nested
virtualization on a server of your choice to deploy VMs running Azure Stack HCI for
clustering.
After you're done creating a cluster in the Create Cluster wizard, complete these post-
cluster creation steps:
Set up a cluster witness. This is highly recommended for all clusters with at least
two nodes.
Register with Azure. Your cluster is not fully supported until your registration is
active.
Validate an Azure Stack HCI cluster. Your cluster is ready to work in a production
environment after completing this step.
Prerequisites
Before you run the Create Cluster wizard in Windows Admin Center, you must complete
the following prerequisites.
2 Warning
Running the wizard before completing the prerequisites can result in a failure to
create the cluster.
Consult with your networking team to identify and understand Physical network
requirements, Host network requirements, and Firewall requirements. Especially
review the Network Reference patterns, which provide example network designs.
Also, determine how you'd like to configure host networking, using Network ATC
or manually.
Install the Azure Stack HCI operating system on each server in the cluster. See
Deploy the Azure Stack HCI operating system.
Have at least two servers to cluster; four if creating a stretched cluster (two in each
site). To instead deploy Azure Stack HCI on a single server, see Deploy Azure Stack
HCI on a single server.
Ensure all servers are in the same time zone as your local domain controller.
Ensure that Windows Admin Center and your domain controller are not installed
on the same system. Also, ensure that the domain controller is not hosted on the
Azure Stack HCI cluster or one of the nodes in the cluster.
If you're running Windows Admin Center on a server (instead of a local PC), use an
account that's a member of the Gateway Administrators group, or the local
Administrators group on the Windows Admin Center server.
Verify that your Windows Admin Center management computer is joined to the
same Active Directory domain in which you'll create the cluster, or joined to a fully
trusted domain. The servers that you'll cluster don't need to belong to the domain
yet; they can be added to the domain during cluster creation.
If you're using an integrated system from a Microsoft hardware partner, install the
latest version of vendor extensions on Windows Admin Center to help keep the
integrated hardware and firmware up to date. To install them, open Windows
Admin Center and click Settings (gear icon) at the upper right. Select any
applicable hardware vendor extensions, and click Install.
For stretched clusters, set up your two sites beforehand in Active Directory.
Alternatively, the wizard can set them up for you too. For more information about
stretched clusters, see the Stretched clusters overview.
3. In the Add or create resources panel, under Server clusters, select Create new.
6. When finished, click Create. You'll see the Create Cluster wizard, as shown below.
Proceed to the next step in the cluster creation workflow, Step 1: Get started.
1. Review 1.1 Check the prerequisites listed in the wizard to ensure each server node
is cluster-ready. When finished, click Next.
2. On 1.2 Add servers, enter your account username using the format
domain\username. Enter your password, then click Next. This account must be a
member of the local Administrators group on each server.
3. Enter the name of the first server you want to add, then click Add. When you add
servers, make sure to use a fully qualified domain name.
4. Repeat Step 3 for each server that will be part of the cluster. When you're finished,
select Next.
5. If needed, on 1.3 Join a domain, specify the domain to join the servers to and the
account to use. You can optionally rename the servers if you want. Then click Next.
6. On 1.4 Install features, review and add features as needed. When finished, click
Next.
The wizard lists and installs required features for you, including the following
options:
Data Deduplication
Hyper-V
BitLocker Drive Encryption
Data Center Bridging (for RoCEv2 network adapters)
Failover Clustering
Network ATC
Active Directory module for Windows PowerShell
Hyper-V module for Windows PowerShell
7. On 1.5 Install updates, click Install updates as needed to install any operating
system updates. When complete, click Next.
8. On 1.6 Install hardware updates, click Get updates as needed to get available
vendor hardware updates. If you don't install the updates now, we recommend
manually installing the latest networking drivers before continuing. Updated
drivers are required if you want to use Network ATC to configure host networking.
7 Note
10. On 1.7 Restart servers, click Restart servers if required. Verify that each server has
successfully started.
Step 2: Networking
Step 2 of the wizard walks you through configuring the host networking elements for
your cluster. RDMA (both iWARP and RoCE) network adapters are supported.
Depending on the option you selected in 1.8 Choose host networking of Step 1: Get
started above, refer to one of the following tabs to configure host networking for your
cluster:
This is the recommended option for configuring host networking. For more
information about Network ATC, see Network ATC overview.
1. On 2.1 Verify network adapters, review the list displayed, and exclude or add
any adapters you want to cluster. Wait for a couple of minutes for the
adapters to show up. Only adapters with matching names, interface
descriptions, and link speed on each server are displayed. All other adapters
are hidden.
2. If you don't see your adapters in the list, click Show hidden adapters to see all
the available adapters and then select the missing adapters.
3. On the Select the cluster network adapters page, select the checkbox for any
adapters listed that you want to cluster. The adapters must have matching
names, interface descriptions, and link speeds on each server. You can rename
the adapters to match, or just select the matching adapters. When finished,
click Close.
4. The selected adapters will now display under Adapters available on all
servers. When finished selecting and verifying adapters, click Next.
For Traffic types, select a traffic type from the dropdown list. You can
add the Management and Storage intent types to exactly one intent
while the Compute intent type can be added to one or more intents. For
more information, see Network ATC traffic types.
For Intent name, enter a friendly name for the intent.
For Network adapters, select an adapter from the dropdown list.
(Optional) Click Select another adapter for this traffic if needed.
7. (Optional) To add another intent, select Add an intent, and repeat step 5 and
optionally step 6.
9. On 2.3: Provide network details, for each storage traffic adapter listed, enter
the following or use the default values (recommended):
Subnet mask/CIDR
VLAN ID
IP address (this is usually on a private subnet such as 10.71.1.x and
10.71.2.x)
1. On 3.1 Create the cluster, specify a unique name for the cluster.
Specify one or more static addresses. The IP address must be entered in the
following format: IP address/current subnet length. For example:
10.0.0.200/24.
Assign address dynamically with DHCP.
3. When finished, select Create cluster. This can take a while to complete.
If you get the error "Failed to reach cluster through DNS," select the Retry
connectivity checks button. You might have to wait several hours before it
succeeds on larger networks due to DNS propagation delays.
) Important
If cluster creation fails, select the Retry connectivity checks button rather than
the Back button. If you select Back, the Cluster Creation wizard exits prematurely
and can potentially reset the entire process.
If you encounter issues with deployment after the cluster is created and you want
to restart the Cluster Creation wizard, first remove (destroy) the cluster. To do so,
see Remove a cluster.
4. The next step appears only if you selected Use Network ATC to deploy and
manage networking (Recommended) for step 1.8 Choose host networking.
In Deploy host networking settings, select Deploy to apply the Network ATC
intents you defined earlier. If you chose to manually deploy host networking in
step 1.8 of the Cluster Creation wizard, you won't see this page.
6. On 3.3 Validate cluster, select Validate. Validation can take several minutes. Note
that the in-wizard validation is not the same as the post-cluster creation validation
step, which performs additional checks to catch any hardware or configuration
problems before the cluster goes into production. If you experience issues with
cluster validation, see Troubleshoot cluster validation reporting.
If the Credential Security Service Provider (CredSSP) pop-up appears, select Yes
to temporarily enable CredSSP for the wizard to continue. Once your cluster is
created and the wizard has completed, you'll disable CredSSP to increase security.
If you experience issues with CredSSP, see Troubleshoot CredSSP.
7. Review all validation statuses, download the report to get detailed information on
any failures, make changes, then click Validate again as needed. You can
Download report as well. Repeat again as necessary until all validation checks
pass. When all is OK, click Next.
11. For stretched clusters, on 3.3 Assign servers to sites, name the two sites that will
be used.
12. Next assign each server to a site. You'll set up replication across sites later. When
finished, click Apply changes.
Step 4: Storage
Complete these steps after finishing the Create Cluster wizard.
Step 4 walks you through
setting up Storage Spaces Direct for your cluster.
1. On 4.1 Clean drives, you can optionally select Erase drives if it makes sense for
your deployment.
2. On 4.2 Check drives, click the > icon next to each server to verify that the disks are
working and connected. If all is OK, click Next.
6. Download and review the report. When all is good, click Finish.
8. After a few minutes, you should see your cluster in the list. Select it to view the
cluster overview page.
It can take some time for the cluster name to be replicated across your domain,
especially if workgroup servers have been newly added to Active Directory.
Although the cluster might be displayed in Windows Admin Center, it might not be
available to connect to yet.
If resolving the cluster isn't successful after some time, in most cases you can
substitute a server name instead of the cluster name.
You can also deploy Network Controller using SDN Express scripts. See Deploy an SDN
infrastructure using SDN Express.
7 Note
The Create Cluster wizard does not currently support configuring SLB And RAS
gateway. You can use SDN Express scripts to configure these components. Also,
SDN is not supported or available for stretched clusters.
1. Under Host, enter a name for the Network Controller. This is the DNS name used
by management clients (such as Windows Admin Center) to communicate with
Network Controller. You can also use the default populated name.
2. Download the Azure Stack HCI VHDX file. For more information, see Download the
VHDX file.
3. Specify the path where you downloaded the VHDX file. Use Browse to find it
quicker.
4. Specify the number of VMs to be dedicated for Network Controller. Three VMs are
strongly recommended for production deployments.
5. Under Network, enter the VLAN ID of the management network. Network
Controller needs connectivity to the same management network as the Hyper-V hosts
so that it can communicate and configure the hosts.
6. For VM network addressing, select either DHCP or Static.
7. If you selected DHCP, enter the name for the Network Controller VMs. You can
also use the default populated names.
8. If you selected Static, do the following:
Specify an IP address.
Specify a subnet prefix.
Specify the default gateway.
Specify one or more DNS servers. Click Add to add additional DNS servers.
9. Under Credentials, enter the username and password used to join the Network
Controller VMs to the cluster domain.
10. Enter the local administrative password for these VMs.
11. Under Advanced, enter the path to the VMs. You can also use the default
populated path.
12. Enter values for MAC address pool start and MAC address pool end. You can also
use the default populated values.
13. When finished, click Next.
14. Wait until the wizard completes its job. Stay on this page until all progress tasks
are complete. Then click Finish.
7 Note
After Network Controller VM(s) are created, you must configure dynamic DNS
updates for the Network Controller cluster name on the DNS server.
If Network Controller deployment fails, do the following before you try this again:
Stop and delete any Network Controller VMs that the wizard created.
Next steps
To perform the next management task related to this article, see:
Microsoft Azure offers a range of differentiated workloads and capabilities that are
designed to run only on Azure. Azure Stack HCI extends many of the same benefits you
get from Azure, while running on the same familiar and high-performance on-premises
or edge environments.
Windows Server Datacenter: Azure Edition (2022 edition or later) - An Azure-only guest operating system that includes all the latest Windows Server innovations and other exclusive features.
Extended Security Updates (ESUs) (October 12th, 2021 security updates or later) - A program that allows customers to continue to get security updates for End-of-Support SQL Server and Windows Server VMs, now free when running on Azure Stack HCI.
Azure Policy guest configuration (Arc agent version 1.13 or later) - A feature that can audit or configure OS settings as code, for both host and guest machines.
Azure Virtual Desktop (for multi-session editions only; Windows 10 Enterprise multi-session or later) - A service that enables you to deploy Azure Virtual Desktop session hosts on your Azure Stack HCI infrastructure. For more information, see the Azure Virtual Desktop for Azure Stack HCI overview.
How it works
This section is optional reading, and explains more about how Azure Benefits on HCI
works "under the hood."
Azure Benefits relies on a built-in platform attestation service on Azure Stack HCI, and
helps to provide assurance that VMs are indeed running on Azure environments.
This service is modeled after the same IMDS Attestation service that runs in Azure, in
order to enable some of the same workloads and benefits available to customers in
Azure. Azure Benefits returns an almost identical payload. The main difference is that it
runs on-premises, and therefore guarantees that VMs are running on Azure Stack HCI
instead of Azure.
Turning on Azure Benefits starts the service running on your Azure Stack HCI cluster:
1. On every server, HciSvc obtains a certificate from Azure, and securely stores it
within an enclave on the server.
7 Note
Certificates are renewed every time the Azure Stack HCI cluster syncs with
Azure, and each renewal is valid for 30 days. As long as you maintain the usual
30 day connectivity requirements for Azure Stack HCI, no user action is
required.
2. HciSvc exposes a private and non-routable REST endpoint, accessible only to VMs
on the same server. To enable this endpoint, an internal vSwitch is configured on
the Azure Stack HCI host (named AZSHCI_HOST-IMDS_DO_NOT_MODIFY). VMs
then must have a NIC configured and attached to the same vSwitch
(AZSHCI_GUEST-IMDS_DO_NOT_MODIFY).
7 Note
Modifying or deleting this switch and NIC prevents Azure Benefits from
working properly. If errors occur, disable Azure Benefits using Windows
Admin Center or the PowerShell instructions that follow, and then try again.
3. Consumer workloads (for example, Windows Server Azure Edition guests) request
attestation. HciSvc then signs the response with an Azure certificate.
7 Note
You must manually enable access for each VM that needs Azure Benefits.
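If you want to verify the internal switch and the VM NIC attachment described above from PowerShell, a minimal read-only check on a host might look like this sketch:
PowerShell
# Run on an Azure Stack HCI host; the switch and NIC names come from the text above
Get-VMSwitch -Name "AZSHCI_HOST-IMDS_DO_NOT_MODIFY"
# List network adapters attached to either of the IMDS switches/NICs named above
Get-VMNetworkAdapter -All | Where-Object { $_.SwitchName -like "AZSHCI_*IMDS_DO_NOT_MODIFY" }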
1. In Windows Admin Center, select Cluster Manager from the top drop-down menu,
navigate to the cluster that you want to activate, then under Settings, select Azure
Benefits.
2. In the Azure Benefits pane, select Turn on. By default, the checkbox to turn on for
all existing VMs is selected. You can deselect it and manually add VMs later.
3. Select Turn on again to confirm setup. It may take a few minutes for servers to
reflect the changes.
4. When Azure Benefits setup is successful, the page updates to show the Azure
Benefits dashboard. To check Azure Benefits for the host:
a. Check that Azure Benefits cluster status appears as On.
b. Under the Cluster tab in the dashboard, check that Azure Benefits for every
server shows as Active in the table.
5. To check access to Azure Benefits for VMs: Check the status for VMs with Azure
Benefits turned on. It's recommended that all of your existing VMs have Azure
Benefits turned on; for example, 3 out of 3 VMs.
Manage access to Azure Benefits for your VMs - WAC
To turn on Azure Benefits for VMs, select the VMs tab, then select the VM(s) in the top
table VMs without Azure Benefits, and then select Turn on Azure Benefits for VMs.
Troubleshooting - WAC
To turn off and reset Azure Benefits on your cluster:
Under the Cluster tab, click Turn off Azure Benefits.
To remove access to Azure Benefits for VMs:
Under the VM tab, select the VM(s) in the table VMs with Azure
Benefits, and then click Turn off Azure Benefits for VMs.
Under the Cluster tab, one or more servers appear as Expired:
If Azure Benefits for one or more servers has not synced with Azure for more
than 30 days, it appears as Expired or Inactive. Select Sync with Azure to
schedule a manual sync.
Under the VM tab, host server benefits appear as Unknown or Inactive:
You will not be able to add or remove Azure Benefits for VMs on these host
servers. Go to the Cluster tab to fix Azure Benefits for host servers with errors,
then try to manage VMs again.
1. To turn on Azure Benefits for your cluster, run the following command:
PowerShell
Enable-AzStackHCIAttestation
Or, if you want to add all existing VMs on setup, you can run the following
command:
PowerShell
Enable-AzStackHCIAttestation -AddVM
2. When Azure Benefits setup is successful, you can view the Azure Benefits status.
Check the cluster property IMDS Attestation by running the following command:
PowerShell
Get-AzureStackHCI
Or, to view Azure Benefits status for servers, run the following command:
PowerShell
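# The original command isn't preserved in this copy; Get-AzureStackHCIAttestation
# (from the built-in AzureStackHCI module) is assumed here for per-server status
Get-AzureStackHCIAttestation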
3. To check access to Azure Benefits for VMs, run the following command:
PowerShell
Get-AzStackHCIVMAttestation
To add access to Azure Benefits for selected VMs, run the following command:
PowerShell
Add-AzStackHCIVMAttestation [-VMName]
Or, to add access for all existing VMs on the cluster:
PowerShell
Add-AzStackHCIVMAttestation -AddAll
Troubleshooting - PowerShell
To turn off and reset Azure Benefits on your cluster, run the following command:
PowerShell
Disable-AzStackHCIAttestation -RemoveVM
To remove access to Azure Benefits for VMs, run the following command:
PowerShell
Remove-AzStackHCIVMAttestation -RemoveAll
If Azure Benefits for one or more servers is not yet synced and renewed with Azure,
it may appear as Expired or Inactive. Schedule a manual sync:
PowerShell
Sync-AzureStackHCI
If a server is newly added and has not yet been set up with Azure Benefits, it may
appear as Inactive. To add the new server, run setup again:
PowerShell
Enable-AzStackHCIAttestation
2. Under the feature Enable Azure Benefits, view the host attestation status:
PowerShell
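# The original command isn't preserved here; Get-AzureStackHCIAttestation is assumed
# as a way to view host attestation status
Get-AzureStackHCIAttestation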
FAQ
This FAQ provides answers to some questions about using Azure Benefits.
Next steps
Extended security updates (ESU) on Azure Stack HCI
Azure Stack HCI overview
Azure Stack HCI FAQ
Deploy branch office and edge on Azure
Stack HCI
Article • 04/17/2023
This topic provides guidance on how to plan, configure, and deploy branch office and
edge scenarios on the Azure Stack HCI operating system. The guidance positions your
organization to run complex, highly available workloads in virtual machines (VMs) and
containers in remote branch office and edge deployments. Computing at the edge shifts
most data processing from a centralized system to the edge of the network, closer to a
device or system that requires data quickly.
Use Azure Stack HCI to run virtualized applications and workloads with high availability
on recommended hardware. The hardware supports clusters consisting of two servers
configured with nested resiliency for storage, a simple, low-cost USB thumb drive cluster
witness, and administration via the browser-based Windows Admin Center. For details
on creating a USB device cluster witness, see Deploy a file share witness.
Azure IoT Edge moves cloud analytics and custom business logic to devices so that you
can focus on business insights instead of data management. Azure IoT Edge combines
AI, cloud, and edge computing in containerized cloud workloads, such as Azure
Cognitive Services, Machine Learning, Stream Analytics, and Functions. Workloads can
run on devices ranging from a Raspberry Pi to a converged edge server. You use Azure
IoT Hub to manage your edge applications and devices.
Adding Azure IoT Edge to your Azure Stack HCI branch office and edge deployments
modernizes your environment to support the CI/CD pipeline application deployment
framework. DevOps personnel in your organization can deploy and iterate containerized
applications that IT builds and supports via traditional VM management processes and
tools.
Otherwise, you'll need to deploy the Azure Stack HCI operating system on your own
hardware. For details on Azure Stack HCI deployment options and installing Windows
Admin Center, see Deploy the Azure Stack HCI operating system.
Next, use Windows Admin Center to create an Azure Stack HCI cluster.
7 Note
5. On the VM that you created in Step 1, install and start the IoT Edge runtime.
) Important
You need the device string that you created in Step 4 to connect the runtime
to Azure IoT Hub.
You can source and deploy pre-built modules from the IoT Edge Modules
section of Azure Marketplace.
Next steps
For more information about branch office and edge, and Azure IoT Edge, see:
Quickstart: Deploy your first IoT Edge module to a virtual Linux device
Quickstart: Deploy your first IoT Edge module to a Windows device
Deploy virtual desktop infrastructure
(VDI) on Azure Stack HCI
Article • 04/17/2023
This topic provides guidance on how to plan, configure, and deploy virtual desktop
infrastructure (VDI) on the Azure Stack HCI operating system. Leverage your Azure Stack
HCI investment to deliver centralized, highly available, simplified, and secure
management for the users in your organization. Use this guidance to enable scenarios
like bring-your-own-device (BYOD) for your users, while providing them with a
consistent and reliable experience for business-critical applications without sacrificing
security.
7 Note
This article focuses on deploying Remote Desktop Services (RDS) to Azure Stack
HCI. You can also support VDI workloads using Azure Virtual Desktop for Azure
Stack HCI. Learn more at Azure Virtual Desktop for Azure Stack HCI (preview).
Overview
VDI uses server hardware to run desktop operating systems and software programs on a
virtual machine (VM). In this way, VDI lets you run traditional desktop workloads on
centralized servers. VDI advantages in a business setting include keeping sensitive
company applications and data in a secure datacenter, and accommodating a BYOD
policy without worrying about mixing personal data with corporate assets. VDI has also
become the standard to support remote and branch office workers and provide access
to contractors and partners.
Azure Stack HCI offers the optimal platform for VDI. A validated Azure Stack HCI
solution combined with Microsoft Remote Desktop Services (RDS) lets you achieve a
highly available and highly scalable architecture.
In addition, Azure Stack HCI VDI provides the following unique cloud-based capabilities
to protect VDI workloads and clients:
Otherwise, you'll need to deploy the Azure Stack HCI operating system on your own
hardware. For details on Azure Stack HCI deployment options and installing Windows
Admin Center, see Deploy the Azure Stack HCI operating system.
Next, use Windows Admin Center to create an Azure Stack HCI cluster.
To get started with Azure Update Management, you need a subscription to Microsoft
Azure. If you don’t have a subscription, you can sign up for a free trial .
You can also use Windows Admin Center to set up additional Azure hybrid services, such
as Backup, File Sync, Site Recovery, Point-to-Site VPN, and Azure Security Center.
Next steps
For more information related to VDI, see Supported configurations for Remote Desktop
Services
Deploy SQL Server on Azure Stack HCI
Article • 04/17/2023
Applies to: Azure Stack HCI, versions 22H2 and 21H2; SQL Server (all supported
versions)
This topic provides guidance on how to plan, configure, and deploy SQL Server on the
Azure Stack HCI operating system. The operating system is a hyperconverged
infrastructure (HCI) cluster solution that hosts virtualized Windows and Linux workloads
and their storage in a hybrid on-premises environment.
Solution overview
Azure Stack HCI provides a highly available, cost efficient, flexible platform to run SQL
Server and Storage Spaces Direct. Azure Stack HCI can run Online Transaction
Processing (OLTP) workloads, data warehouse and BI, and AI and advanced analytics
over big data.
The platform’s flexibility is especially important for mission critical databases. You can
run SQL Server on virtual machines (VMs) that use either Windows Server or Linux,
which allows you to consolidate multiple database workloads and add more VMs to
your Azure Stack HCI environment as needed. Azure Stack HCI also enables you to
integrate SQL Server with Azure Site Recovery to provide a cloud-based migration,
restoration, and protection solution for your organization’s data that is reliable and
secure.
Otherwise, you'll need to deploy the Azure Stack HCI operating system on your own
hardware. For details on Azure Stack HCI deployment options and installing Windows
Admin Center, see Deploy the Azure Stack HCI operating system.
Next, use Windows Admin Center to create an Azure Stack HCI cluster.
To ensure the performance and health of your SQL Server instances on Azure Stack HCI,
see Performance Monitoring and Tuning Tools.
For tuning SQL Server 2017 and SQL Server 2016, see Recommended updates and
configuration options for SQL Server 2017 and 2016 with high-performance
workloads .
These options all work with the Microsoft Azure Cloud Witness for quorum control. When
you configure Always On availability groups, we recommend using cluster anti-affinity
rules in WSFC to place the VMs on different physical nodes, which helps maintain uptime
for SQL Server in the event of host failures. A sketch of such a rule follows.
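The following is a minimal, illustrative sketch of keeping two SQL Server VMs on different nodes by giving their cluster groups the same anti-affinity class name; the VM group and class names are placeholders.
PowerShell
# Run on one of the cluster nodes; group and class names are placeholders
$class = New-Object System.Collections.Specialized.StringCollection
$class.Add("SQLServerVMs") | Out-Null
(Get-ClusterGroup -Name "SQLVM1").AntiAffinityClassNames = $class
(Get-ClusterGroup -Name "SQLVM2").AntiAffinityClassNames = $class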
Azure Backup lets you define backup policies to protect enterprise workloads, and
supports application-consistent backup and restore of SQL Server. For more information
about how to back up your on-premises SQL data, see Install Azure Backup Server.
Alternatively, you can use the SQL Server Managed Backup feature in SQL Server to
manage Azure Blob Storage backups.
For more information about using this option that is suitable for off-site archiving, see:
Tutorial: Use Azure Blob storage service with SQL Server 2016
Quickstart: SQL backup and restore to Azure Blob storage service
In addition to these backup scenarios, you can set up other database services that SQL
Server offers, including Azure Data Factory and Azure Feature Pack for Integration
Services (SSIS).
Next steps
For more information about working with SQL Server, see:
This topic provides guidance on how to plan, configure, and deploy a highly secure
infrastructure that uses trusted enterprise virtualization on the Azure Stack HCI
operating system. Leverage your Azure Stack HCI investment to run secure workloads on
hardware that uses virtualization-based security (VBS) and hybrid cloud services through
Windows Admin Center and the Azure portal.
Overview
VBS is a key component of the security investments in Azure Stack HCI to protect hosts
and virtual machines (VMs) from security threats. For example, the Security Technical
Implementation Guide (STIG) , which is published as a tool to improve the security of
Department of Defense (DoD) information systems, lists VBS and Hypervisor-Protected
Code Integrity (HVCI) as general security requirements. It is imperative to use host
hardware that is enabled for VBS and HVCI to protect workloads on VMs, because a
compromised host cannot guarantee VM protection.
VBS uses hardware virtualization features to create and isolate a secure region of
memory from the operating system. You can use Virtual Secure Mode (VSM) in Windows
to host a number of security solutions to greatly increase protection from operating
system vulnerabilities and malicious exploits.
VBS uses the Windows hypervisor to create and manage security boundaries in
operating system software, enforce restrictions to protect vital system resources, and
protect security assets, such as authenticated user credentials. With VBS, even if malware
gains access to the operating system kernel, you can greatly limit and contain possible
exploits, because the hypervisor prevents malware from executing code or accessing
platform secrets.
The hypervisor, the most privileged level of system software, sets and enforces page
permissions across all system memory. While in VSM, pages can only execute after
passing code integrity checks. Even if a vulnerability occurs, such as a buffer overflow that
allows malware to attempt to modify memory, code pages cannot be modified,
and modified memory cannot be executed. VBS and HVCI significantly strengthen code
integrity policy enforcement. All kernel mode drivers and binaries are checked before
they can start, and unsigned drivers or system files are prevented from loading into
system memory.
Otherwise, you'll need to deploy the Azure Stack HCI operating system on your own
hardware. For details on Azure Stack HCI deployment options and installing Windows
Admin Center, see Deploy the Azure Stack HCI operating system.
Next, use Windows Admin Center to create an Azure Stack HCI cluster.
All partner hardware for Azure Stack HCI is certified with the Hardware Assurance
Additional Qualification. The qualification process tests for all required VBS functionality.
However, VBS and HVCI are not automatically enabled in Azure Stack HCI. For more
information about the Hardware Assurance Additional Qualification, see "Hardware
Assurance" under Systems in the Windows Server Catalog .
2 Warning
HVCI may be incompatible with hardware devices not listed in the Azure Stack HCI
Catalog. We strongly recommend using Azure Stack HCI validated hardware from
our partners for trusted enterprise virtualization infrastructure.
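To confirm whether VBS and HVCI are actually running on a host, you can query the Device Guard WMI class. This read-only check is a sketch and assumes you run it locally on the host:
PowerShell
# VirtualizationBasedSecurityStatus 2 means VBS is running; a value of 2 in
# SecurityServicesRunning indicates HVCI
Get-CimInstance -ClassName Win32_DeviceGuard -Namespace root\Microsoft\Windows\DeviceGuard |
    Select-Object VirtualizationBasedSecurityStatus, SecurityServicesRunning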
To learn more, see Protect Windows Admin Center resources with Security Center.
You need a subscription to Microsoft Azure. If you don’t have a subscription, you
can sign up for a free trial .
Security Center's free pricing tier is enabled on all your current Azure subscriptions
once you either visit the Azure Security Center dashboard in the Azure portal, or
enable it programmatically via API.
To take advantage of advanced security
management and threat detection capabilities, you must enable Azure Defender.
You can use Azure Defender free for 30 days. For more information, see Security
Center pricing .
If you're ready to enable Azure Defender, see Quickstart: Setting up Azure Security
Center to walk through the steps.
You can also use Windows Admin Center to set up additional Azure hybrid services, such
as Backup, File Sync, Site Recovery, Point-to-Site VPN, and Update Management.
Next steps
For more information related to trusted enterprise virtualization, see:
This article provides information about how to set up an Azure Stack HCI cluster in
System Center - Virtual Machine Manager (VMM). You can deploy an Azure Stack HCI
cluster by provisioning from bare-metal servers or by adding existing hosts. Learn
more about the new Azure Stack HCI.
VMM 2019 Update Rollup 3 (UR3) supports Azure Stack HCI, version 20H2. The current
product is Azure Stack HCI, version 21H2. Starting with System Center 2022, VMM
supports Azure Stack HCI, version 20H2; Azure Stack HCI, version 21H2; and Azure Stack
HCI, version 22H2 (supported from VMM 2022 UR1).
) Important
Azure Stack HCI clusters that are managed by Virtual Machine Manager shouldn’t
join the preview channel yet. System Center (including Virtual Machine Manager,
Operations Manager, and other components) does not currently support Azure
Stack preview versions. For the latest updates, see the System Center blog .
What’s supported?
Addition, creation, and management of Azure Stack HCI clusters. See detailed
steps to create and manage HCI clusters.
Ability to provision & deploy VMs on the Azure Stack HCI clusters and perform VM
life cycle operations. VMs can be provisioned using VHD(x) files, templates, or from
an existing VM. Learn more.
Moving VMs between Windows Server and Azure Stack HCI clusters works via
network migration of an offline (shut-down) VM. In this scenario, VMM
performs an export and import under the hood, even though it appears as a single
operation.
The PowerShell cmdlets used to manage Windows Server clusters can be used to
manage Azure Stack HCI clusters as well.
With VMM 2022, we're introducing VMM PowerShell cmdlets to register and unregister
Azure Stack HCI clusters.
PowerShell
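# Assumed register cmdlet (the original command isn't preserved in this copy); verify the
# exact name and parameters in your VMM 2022 build
Register-SCAzStackHCI -VMHostCluster (Get-SCVMHostCluster -Name "HCICluster01")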
PowerShell
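# Assumed unregister cmdlet (the original command isn't preserved in this copy); verify the
# exact name and parameters in your VMM 2022 build
Unregister-SCAzStackHCI -VMHostCluster (Get-SCVMHostCluster -Name "HCICluster01")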
Azure Stack HCI is intended as a virtualization host where you run all your
workloads in virtual machines. The Azure Stack HCI terms allow you to run only
what's necessary for hosting virtual machines. Azure Stack HCI clusters shouldn't
be used for other purposes like WSUS servers, WDS servers, or library servers. Refer
to Use cases for Azure Stack HCI, When to use Azure Stack HCI, and Roles you can
run without virtualizing.
Live migration between any version of Windows Server and Azure Stack HCI
clusters isn't supported.
7 Note
Live migration between Azure Stack HCI clusters works, as well as between
Windows Server clusters.
The only storage type available for Azure Stack HCI is Storage Spaces Direct (S2D).
Creation or management of non-S2D cluster with Azure Stack HCI nodes isn't
supported. If you need to use any other type of storage, for example SANs, use
Windows Server as the virtualization host.
7 Note
You must enable S2D when creating an Azure Stack HCI cluster.
To enable S2D, in
the cluster creation wizard, go to General Configuration. Under Specify the cluster
name and host group, select Enable Storage Spaces Direct as shown below:
After you enable a cluster with S2D, VMM does the following:
The Failover Clustering feature is enabled.
Storage replica and data deduplication are enabled.
The cluster is optionally validated and created.
S2D is enabled, and a storage array object is created in VMM with the same name
as you provided in the wizard.
When you use VMM to create a hyper-converged cluster, the pool and the storage tiers
are automatically created by running Enable-ClusterStorageSpacesDirect -Autoconfig
$True .
After these prerequisites are in place, you provision a cluster, and set up storage
resources on it. You can then deploy VMs on the cluster.
7 Note
When you set up the cluster, select the Enable Storage Spaces Direct option
on the General Configuration page of the Create Hyper-V Cluster wizard.
In Resource Type, select Existing servers running a Windows Server
operating system, and select the Hyper-V hosts to add to the cluster.
All the selected hosts should have Azure Stack HCI installed.
Since S2D is enabled, the cluster must be validated.
7 Note
Typically, an S2D node requires RDMA, QoS, and SET settings. To configure these
settings for a node deployed from bare-metal computers, you can use the post-deployment
script capability in the physical computer profile (PCP). Here's the sample PCP
post-deployment script. You can also use this script to configure RDMA, QoS, and SET
while adding a new node to an existing S2D deployment from bare-metal computers.
7 Note
The generalized VHD or VHDX in the VMM library should be running Azure
Stack HCI with the latest updates. The Operating system and Virtualization
platform values for the hard disk should be set.
For bare-metal deployment, you need to add a pre-boot execution
environment (PXE) server to the VMM fabric. The PXE server is provided
through Windows Deployment Services. VMM uses its own WinPE image, and
you need to ensure that it’s the latest. To do this, select Fabric >
Infrastructure > Update WinPE image, and ensure that the job finishes.
7 Note
Configuration of DCB settings is an optional step to achieve high performance
during S2D cluster creation workflow. Skip to step 4 if you do not wish to configure
DCB settings.
Recommendations
If you have vNICs deployed, for optimal performance we recommend that you map
each vNIC to a corresponding pNIC. Affinities between vNICs and pNICs are
set randomly by the operating system, and there could be scenarios where
multiple vNICs are mapped to the same pNIC. To avoid such scenarios, we
recommend that you manually set the affinity between each vNIC and pNIC by
following the steps listed here.
When you create a network adapter port profile, we recommend that you allow IEEE
priority. Learn more.
You can also set the IEEE Priority by using the following PowerShell commands:
PowerShell
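# The original commands aren't preserved in this copy; tagging IEEE 802.1p priority on a
# host vNIC with Set-VMNetworkAdapter is assumed here ("SMB1" is a placeholder vNIC name)
Set-VMNetworkAdapter -ManagementOS -Name "SMB1" -IeeePriorityTag On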
3. Provide Priority and Bandwidth values for SMB-Direct and Cluster Heartbeat
traffic.
7 Note
Default values are assigned to Priority and Bandwidth. Customize these values
based on your organization's environment needs.
Default values:
Cluster Heartbeat: Priority 7, Bandwidth 1
SMB-Direct: Priority 3, Bandwidth 50
4. Select the network adapters used for storage traffic. RDMA is enabled on these
network adapters.
7 Note
In a converged NIC scenario, select the storage vNICs. The underlying pNICs
should be RDMA capable for vNICs to be displayed and available for
selection.
5. Review the summary and select Finish.
An Azure Stack HCI cluster will be created and the DCB parameters are configured
on all the S2D nodes.
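For reference, a roughly equivalent manual DCB configuration on a node uses the NetQos cmdlets. The following sketch mirrors the default priority and bandwidth values above; the adapter name is a placeholder:
PowerShell
# Tag SMB Direct (port 445) and cluster heartbeat traffic with the default priorities
New-NetQosPolicy -Name "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
New-NetQosPolicy -Name "ClusterHB" -Cluster -PriorityValue8021Action 7
# Enable PFC for the SMB priority and reserve bandwidth with ETS
Enable-NetQosFlowControl -Priority 3
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
New-NetQosTrafficClass -Name "ClusterHB" -Priority 7 -BandwidthPercentage 1 -Algorithm ETS
# Apply DCB/QoS on the physical storage adapter (placeholder name)
Enable-NetAdapterQos -Name "pNIC1"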
7 Note
You can register the Azure Stack HCI cluster with Azure from VMM. Alternatively, follow
these steps to register the Azure Stack HCI cluster with Azure.
The registration status will be reflected in VMM after a successful cluster refresh.
2. Select Fabric, right-click the Azure Stack HCI cluster, and select Properties.
2. Right-click the cluster > Manage Pool, and select the storage pool that was
created by default. You can change the default name and add a classification.
3. To create a CSV, right-click the cluster > Properties > Shared Volumes.
4. In the Create Volume Wizard > Storage Type, specify the volume name and select
the storage pool.
5. In Capacity, you can specify the volume size, file system, and resiliency (Failures to
tolerate) settings.
6. Select Configure advanced storage and tiering settings to set up these options.
7. In Summary, verify settings and finish the wizard. A virtual disk will be created
automatically when you create the volume.
) Important
If the Azure Stack HCI cluster isn't registered with Azure or not connected to Azure
for more than 30 days post registration, high availability virtual machine (HAVM)
creation will be blocked on the cluster. Refer to steps 4 and 5 for cluster registration.
Live migration between Windows Server and Azure Stack HCI isn’t supported.
Network migration from Azure Stack HCI to Windows Server isn’t supported.
1. Temporarily disable the live migration at the destination Azure Stack HCI host.
2. Select VMs and Services > All Hosts, and then select the source Hyper-V host from
which you want to migrate.
3. Select the VM that you want to migrate. The VM must be in a turned off state.
4. Select Migrate Virtual Machine.
5. In Select Host, review and select the destination Azure Stack HCI host.
6. Select Next to initiate network migration. VMM will perform imports and exports at
the back end.
7. To verify that the virtual machine is successfully migrated, check the VMs list on the
destination host. Turn on the VM and re-enable live migration on the Azure Stack
HCI host.
For prerequisites and limitations for the conversion, see Convert a VMware VM to
Hyper-V in the VMM fabric.
1. Create Run as account for vCenter Server Administrator role in VMM. These
administrator credentials are used to manage vCenter server and ESXi hosts.
2. In the VMM console, under Fabric, select Servers > Add VMware vCenter Server.
4. Select Finish.
6. After the successful addition of the vCenter server, all the ESXi hosts under the
vCenter are migrated to VMM.
Add Hosts
1. In the VMM console, under Fabric, select Servers > Add VMware ESX Hosts and
Clusters.
a. Under Credentials, select the Run as account that is used for the port and select
Next.
b. Under Target Resources, select all the ESX clusters that need to be added to
VMM and select Next.
c. Under Host Settings, select the location where you want to add the VMs and
select Next.
d. Under Summary, review the settings and select Finish. Along with the hosts,
associated VMs will also get added.
View VMs
1. Go to VMs and Services to view the virtual machines.
You can also manage the
primary lifecycle operations of these virtual machines from VMM.
2. Right-click the VM that you want to migrate and select Power Off (online migrations are
not supported), and then uninstall VMware Tools from the guest operating system.
3. Select Home > Create Virtual Machines > Convert Virtual Machine.
b. Under Specify Virtual Machine Identity, enter a new name for the virtual
machine if you wish, and then select Next.
5. Under Select Host, select the target Azure Stack HCI node and specify the location
on the host for VM storage files and select Next.
6. Select a virtual network for the virtual machine and select Create to complete the
migration.
The virtual machine running on the ESXi cluster is successfully migrated to the Azure
Stack HCI cluster. For automation, use PowerShell commands for the conversion.
Next steps
Provision VMs
Manage the cluster
License Windows Server VMs on Azure
Stack HCI
Article • 04/17/2023
Applies to: Azure Stack HCI, versions 22H2 and 21H2; Windows Server 2022,
Windows Server 2019 Datacenter Edition and later
Windows Server virtual machines (VMs) must be licensed and activated before you can
use them on Azure Stack HCI. You can use any existing Windows Server licenses and
activation methods that you already have. Optionally, Azure Stack HCI offers new
licensing models and tools to help simplify this process. This article describes general
licensing concepts and the new options that are available on Azure Stack HCI.
Summary
The following table shows the different Windows Server VM licensing options:
Question: What are the CAL requirements?
Windows Server subscription: No CAL required – included in WS subscription.
Bring your own license (BYOL): Windows Server CAL.
Question: What is the pricing model?
Windows Server subscription: Per physical core/per month pricing, purchased and billed
through Azure (free trial within the first 60 days of registering your Azure Stack HCI).
For details, see Pricing for Windows Server subscription.
Bring your own license (BYOL): Core licenses. For details, see Licensing Windows Server
and Pricing for Windows Server licenses.
Guest versions
The following table shows the guest operating systems that the different licensing
methods can activate:
Windows Server 2012/R2: can be activated by all of the licensing methods.
Future editions (Evergreen): can be activated by Windows Server subscription only.
2. Under the feature Windows Server subscription add-on, select Purchase. In the
context pane, select Purchase again to confirm.
3. When Windows Server subscription has been successfully purchased, you can start
using Windows Server VMs on your cluster. Licenses will take a few minutes to be
applied on your cluster.
Remediation: Your cluster does not yet have the latest status on Windows Server
subscription (for example, you just enrolled or just canceled), and therefore might not
have retrieved the licenses to set up AVMA. In most cases, the next cloud sync will
resolve this discrepancy, or you can sync manually. See Syncing Azure Stack HCI.
2. In the Automatically activate VMs pane, select Set up, and then select Purchase
Windows Server subscription. Select Next and confirm details, then select
Purchase.
3. When you complete the purchase successfully, the cluster retrieves licenses from
the cloud, and sets up AVMA on your cluster.
Enable Windows Server subscription using PowerShell
Purchase Windows Server subscription: From your cluster, run the following
command:
PowerShell
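# The original command isn't preserved here; Set-AzureStackHCI with the
# -EnableWSSubscription parameter is assumed
Set-AzureStackHCI -EnableWSSubscription $true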
Check status: To check subscription status for each server, run the following
command on each server:
PowerShell
Get-AzureStackHCISubscriptionStatus
To check that AVMA has been set up with Windows Server subscription, run the
following command on each server:
PowerShell
Get-VMAutomaticActivation
Benefits of AVMA
VM activation through host servers presents several benefits:
Individual VMs don't have to be connected to the internet. Only licensed host
servers with internet connectivity are required.
License management is simplified. Instead of having to true-up key usage counts
for individual VMs, you can activate any number of VMs with just a properly
licensed server.
AVMA acts as a proof-of-purchase mechanism. This capability helps to ensure that
Windows products are used in accordance with product use rights and Microsoft
software license terms.
Take a few minutes to watch the video on using Automatic Virtual Machine Activation in
Windows Admin Center:
https://www.microsoft.com/en-us/videoplayer/embed/RWFdsF?postJsllMsg=true
Prerequisites - activation
Before you begin:
An Azure Stack HCI cluster (version 20H2 with the June 8, 2021 cumulative update
or later).
The Cluster Manager extension for Windows Admin Center (version 1.523.0 or
later).
7 Note
For VMs to stay activated regardless of which server they run on, AVMA must be set
up for each server in the cluster.
1. Select Cluster Manager from the top drop-down arrow, navigate to the cluster
that you want to activate, then under Settings, select Activate Windows Server
VMs.
2. In the Automatically activate VMs pane, select Set up and then select Use existing
Windows Server licenses. In the Apply activation keys to each server pane, enter
your Windows Server Datacenter keys.
When you have finished entering keys for each host server in the cluster, select
Apply. The process then takes a few minutes to complete.
7 Note
Each server requires a unique key, unless you have a valid volume license key.
3. Now that AVMA has been enabled, you can activate VMs against the host server
by following the steps in Automatic Virtual Machine Activation.
1. In the Activate Windows Server VMs pane, select the servers that you want to
manage, and then select Manage activation keys.
2. In the Manage activation keys pane, enter the new keys for the selected host
servers, and then select Apply.
7 Note
Overwriting keys does not reset the activation count for used keys. Ensure
that you're using the right keys before applying them to the servers.
To resolve such issues, in the Activate Windows Server VMs window, select the server
with the warning, and then select Manage activation keys to enter a new key.
Your server is offline and cannot be reached. Bring all servers online and then refresh the
page.
Error 4: "Couldn't check the status on this server" or "To use this
feature, install the latest update"
One or more of your servers is not updated and does not have the required packages to
set up AVMA. Ensure that your cluster is updated, and then refresh the page. For more
information, see Update Azure Stack HCI clusters.
1. To apply a Windows Server Datacenter key on the server, run the following command:
PowerShell
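# The original command isn't preserved here; Set-VMAutomaticActivation with a
# Windows Server Datacenter key is assumed (the key shown is a placeholder)
Set-VMAutomaticActivation XXXXX-XXXXX-XXXXX-XXXXX-XXXXX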
2. To verify that AVMA has been set up with your key, run the following command:
PowerShell
Get-VMAutomaticActivation
3. Repeat these steps on each of the other servers in your Azure Stack HCI cluster.
Now that you have set up AVMA through BYOL, you can activate VMs against the host
server by following the steps here.
FAQ
This FAQ provides answers to some questions about licensing Windows Server.
Will my Windows Server Datacenter Azure Edition guests
activate on Azure Stack HCI?
Yes, but you must use either Windows Server subscription-based AVMA, or else bring
Windows Server Datacenter keys with Software Assurance. For BYOL, you can use either:
To sync host servers to Azure at least once every 30 days, in order to maintain
Azure Stack HCI 30-day connectivity requirements and to sync host licenses for
AVMA.
When purchasing or canceling Windows Server subscription.
For VMs to activate via Windows Server subscription or BYOL-based AVMA. For
connectivity requirements for other forms of activation, see the Windows Server
documentation.
You can sign up or cancel your Windows Server subscription at any time. Upon
cancellation, billing and activation via Azure stops immediately. Make sure you have an
alternate form of licensing if you continue to run Windows Server VMs on your cluster.
I have a license for Windows Server, can I run Windows
Server 2016 VMs on Azure Stack HCI?
Yes. Although you cannot use Windows Server 2016 keys to set up AVMA on Azure
Stack HCI, they can still be applied using other activation methods. For example, you can
enter a Windows Server 2016 key into your Windows Server 2016 VM directly.
OEM provider: Find a Certificate of Authenticity (COA) key label on the outside of
the OEM hardware. You can use this key once per server in the cluster.
Volume Licensing Service Center (VLSC): From the VLSC, you can download a
Multiple Activation Key (MAK) that you can reuse up to a predetermined number
of allowed activations. For more information, see MAK keys.
Retail channels: You can also find a retail key on a retail box label. You can only use
this key once per server in the cluster. For more information, see Packaged
Software .
Next steps
Automatic virtual machine activation
Key Management Services (KMS) activation planning for Windows Server
Manage VMs with Windows Admin
Center
Article • 04/17/2023
Applies to: Azure Stack HCI, versions 22H2 and 21H2; Windows Server 2022,
Windows Server 2019
Windows Admin Center can be used to create and manage your virtual machines (VMs)
on Azure Stack HCI.
Create a new VM
You can easily create a new VM using Windows Admin Center.
1. On the Windows Admin Center home screen, under All connections, select the
server or cluster you want to create the VM on.
3. Under Virtual machines, select the Inventory tab, then select Add and New.
6. Under Host, select the server you want the VM to reside on.
7. Under Path, select a preassigned file path from the dropdown list or click Browse
to choose the folder to save the VM configuration and virtual hard disk (VHD) files
to. You can browse to any available SMB share on the network by entering the
path as \\server\share.
8. Under Virtual processors, select the number of virtual processors and whether you
want nested virtualization enabled for the VM. If the cluster is running Azure Stack
HCI, version 21H2, you'll also see a checkbox to enable processor compatibility
mode on the VM.
10. Under Network, select a virtual switch from the dropdown list.
11. Under Network, select one of the following for the isolation mode from the
dropdown list:
12. Under Storage, click Add and select whether to create a new empty virtual hard
disk or to use an existing virtual hard disk. If you're using an existing virtual hard
disk, click Browse and select the applicable file path.
15. To start the VM, in the Virtual Machines list, hover over the new VM, enable the
checkbox for it on the left, and select Start.
1. In Windows Admin Center, under Tools, scroll down and select Virtual Machines.
2. The Inventory tab on the right lists all VMs available on the current server or the
cluster, and provides commands to manage individual VMs. You can:
View VM details
You can view detailed information and performance charts for a specific VM from its
dedicated page.
2. Click the Inventory tab on the right, then select the VM. On the subsequent page,
you can do the following:
View live and historical data line charts for CPU, memory, network, IOPS and
IO throughput (historical data is only available for hyperconverged clusters)
View, create, apply, rename, and delete checkpoints.
View details for the virtual hard disk (.vhd) files, network adapters, and host
server.
View the state of the VM.
Save the VM, delete a saved state, export, or clone the VM.
Change settings for the VM.
Connect to the VM console using VMConnect via the Hyper-V host.
Replicate the VM using Azure Site Recovery.
The number of VMs that are running, stopped, paused, and saved
Recent health alerts or Hyper-V event log events for clusters
CPU and memory usage with host vs guest breakdown
Live and historical data line charts for IOPS and I/O throughput for clusters
Change VM settings
There are a variety of settings that you can change for a VM.
7 Note
Some settings cannot be changed while a VM is running; you will need to stop the
VM first.
2. Click the Inventory tab on the right, select the VM, then click Settings.
3. To change VM start/stop actions and general settings, select General and do the
following:
To change time intervals for pausing or starting a VM, enter the appropriate
values in the fields shown
6. To change the size of an existing disk, modify the value in Size (GB). To add a new
virtual disk, select Disks and then select whether to create an empty virtual disk or
to use an existing virtual disk or ISO (.iso) image file. Click Browse and select the
path to the virtual disk or image file.
7. To add, remove, or change network adapter settings, select Networks and do the
following:
Select one of the following for the isolation mode from the dropdown list:
Set to Default (None) if the VM is connected to the virtual switch in access
mode.
Set to VLAN if the VM is connected to the virtual switch over a VLAN.
Specify the VLAN identifier as well.
Set to Virtual Network (SDN) if the VM is part of an SDN virtual network.
Select a virtual network name, subnet, and specify the IP Address.
Optionally, select a network security group that can be applied to the VM.
Set to Logical Network (SDN) if the VM is part of an SDN logical network.
Select the logical network name, subnet, and specify the IP Address.
Optionally, select a network security group that can be applied to the VM.
8. Select Boot order to add boot devices or change the VM boot sequence.
7 Note
The Production checkpoint setting is recommended and uses backup
technology in the guest operating system to create data-consistent
checkpoints. The Standard setting uses VHD snapshots to create checkpoints
with application and service state.
10. Select Affinity rules to create an affinity rule for a VM. For more information on
creating affinity rules, see Create server and site affinity rules for VMs.
Select Enable Secure Boot to help prevent unauthorized code from running
at boot time (recommended). Also select a Microsoft or open-source
template from the drop-down box
7 Note
2. Under the Inventory tab, select a VM from the list and select Manage > Move.
4. If you want to move both the VM and its storage, choose whether to move it to
another cluster or to another server in the same cluster.
5. If you want to move just the VM's storage, select either to move it to the same
path or select different paths for configuration, checkpoint, or smart paging.
Join a VM to a domain
You can easily join a VM to a domain as follows:
Clone a VM
You can easily clone a VM as follows:
Import or export a VM
You can easily import or export a VM. The following procedure describes the import
process.
2. On the Inventory tab, choose a virtual machine from the list and select either the
Connect > Connect or the Connect > Download RDP file option. Both options use the
VMConnect tool to connect to the guest VM through the Hyper-V host and
require you to enter your administrator username and password credentials for
the Hyper-V host.
The Connect option connects to the VM using Remote Desktop in your web
browser.
The Download RDP file option downloads an .rdp file that you can open to
connect with the Remote Desktop Connection app (mstsc.exe).
Next steps
You can also create and manage VMs using Windows PowerShell. For more information,
see Manage VMs on Azure Stack HCI using Windows PowerShell.
See Create and manage Azure virtual networks for Windows virtual machines.
Applies to: Azure Stack HCI, versions 22H2 and 21H2; Windows Server 2022,
Windows Server 2019
Windows PowerShell can be used to create and manage your virtual machines (VMs) on
Azure Stack HCI.
Typically, you manage VMs from a remote computer, rather than on a host server in a
cluster. This remote computer is called the management computer.
7 Note
For the complete reference documentation for managing VMs using PowerShell, see
Hyper-V reference.
Create a VM
The New-VM cmdlet is used to create a new VM. For detailed usage, see the New-VM
reference documentation.
Here are the settings that you can specify when creating a new VM with an existing
virtual hard disk, where:
-Name is the name that you provide for the virtual machine that you're creating.
-BootDevice is the device that the virtual machine boots to when it starts.
Typically
this is a virtual hard disk (VHD), an .iso file for DVD-based boot, or a network
adapter (NetworkAdapter) for network boot.
-VHDPath is the path to the virtual machine disk that you want to use.
-Path is the path to store the virtual machine configuration files.
-Generation is the virtual machine generation. Use generation 1 for VHD and
generation 2 for VHDX.
-SwitchName is the name of the virtual switch that you want the virtual machine to
use to connect to other virtual machines or the network. Get the name of the
virtual switch by using Get-VMSwitch. For example:
PowerShell
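# Reconstructed syntax sketch; values in angle brackets are placeholders
New-VM -Name <Name> -MemoryStartupBytes <Memory> -BootDevice <BootDevice> -VHDPath <VHDPath> -Path <Path> -Generation <Generation> -SwitchName <SwitchName>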
The next example creates a Generation 2 virtual machine with 4GB of memory. It boots
from the folder VMs\Win10.vhdx in the current directory and uses the virtual switch
named ExternalSwitch. The virtual machine configuration files are stored in the folder
VMData.
PowerShell
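# Reconstructed example matching the description above
New-VM -Name Win10VM -MemoryStartupBytes 4GB -BootDevice VHD -VHDPath .\VMs\Win10.vhdx -Path .\VMData -Generation 2 -SwitchName ExternalSwitch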
To create a virtual machine with a new virtual hard disk, replace the -VHDPath
parameter from the example above with -NewVHDPath and add the -
NewVHDSizeBytes parameter as shown here:
PowerShell
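# Reconstructed example; the 20 GB disk size is illustrative
New-VM -Name Win10VM -MemoryStartupBytes 4GB -BootDevice VHD -NewVHDPath .\VMs\Win10.vhdx -NewVHDSizeBytes 20GB -Path .\VMData -Generation 2 -SwitchName ExternalSwitch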
To create a virtual machine with a new virtual disk that boots to an operating system
image, see the PowerShell example in Create virtual machine walkthrough for Hyper-V
on Windows 10.
Get a list of VMs
The following example returns a list of all VMs on Server1.
PowerShell
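# Reconstructed example
Get-VM -ComputerName Server1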
The following example returns a list of all running VMs on a server by adding a filter
using the Where-Object command. For more information, see Using the Where-Object
documentation.
PowerShell
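# Reconstructed example: filter to running VMs
Get-VM -ComputerName Server1 | Where-Object { $_.State -eq 'Running' }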
The next example returns a list of all shut-down VMs on the server.
PowerShell
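# Reconstructed example: filter to VMs that are turned off
Get-VM -ComputerName Server1 | Where-Object { $_.State -eq 'Off' }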
PowerShell
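# The lead-in text for this block isn't preserved; starting a VM is assumed here
Start-VM -Name VM1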
PowerShell
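# The lead-in text for this block isn't preserved; stopping a VM is assumed here
Stop-VM -Name VM1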
Move a VM
The Move-VM cmdlet moves a VM to a different server. For more information, see the
Move-VM reference documentation.
The following example shows how to move a VM to Server2 when the VM is stored on
an SMB share on Server1:
PowerShell
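# Reconstructed example: move only the VM; its files stay on the SMB share
Move-VM "VM1" -DestinationHost Server2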
The following example shows how to move a VM to Server2 from Server1 and move all
files associated with the VM to D:\VM_name on the remote computer:
PowerShell
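# Reconstructed example: move the VM and all of its files to D:\VM_name on Server2
Move-VM "VM1" -DestinationHost Server2 -IncludeStorage -DestinationStoragePath D:\VM_name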
Import or export a VM
The Import-VM and Export-VM cmdlets import and export a VM. The following shows a
couple of examples. For more information, see the Import-VM and Export-VM reference
documentation.
The following example shows how to import a VM from its configuration file. The VM is
registered in-place, so its files are not copied:
PowerShell
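# Reconstructed example: register the VM in place from its configuration file
# (the path and GUID are placeholders)
Import-VM -Path 'C:\VMExport\Virtual Machines\5AE40946-3A98-428E-8C83-081A3C68D18C.vmcx'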
PowerShell
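# The lead-in for this block isn't preserved; an export example is assumed here
Export-VM -Name VM1 -Path D:\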
Rename a VM
The Rename-VM cmdlet is used to rename a VM. For detailed information, see the
Rename-VM reference documentation.
The following example renames VM1 to VM2 and displays the renamed virtual machine:
PowerShell
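# Reconstructed example; -Passthru displays the renamed virtual machine
Rename-VM VM1 -NewName VM2 -Passthru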
Create a VM checkpoint
The Checkpoint-VM cmdlet is used to create a checkpoint for a VM. For detailed
information, see the Checkpoint-VM reference documentation.
PowerShell
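# Illustrative example (the checkpoint name is a placeholder)
Checkpoint-VM -Name VM1 -SnapshotName BeforeUpdate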
The following example creates a dynamic virtual hard disk in VHDX format that is 10 GB
in size. The file name extension determines the format and the default type of dynamic
is used because no type is specified.
PowerShell
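# Reconstructed example: a 10 GB VHDX; dynamic is the default type when none is specified
New-VHD -Path C:\VMs\Disk1.vhdx -SizeBytes 10GB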
Get-ClusterGroup
The following example adds a virtual network adapter named Redmond NIC1 to a virtual
machine named VM1:
PowerShell
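# Reconstructed example
Add-VMNetworkAdapter -VMName VM1 -Name "Redmond NIC1"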
This example adds a virtual network adapter to a virtual machine named VM1 and
connects it to a virtual switch named Network:
PowerShell
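# Reconstructed example
Add-VMNetworkAdapter -VMName VM1 -SwitchName Network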
The following example creates a new switch called "QoS switch", which binds to a
network adapter called Wired Ethernet Connection 3 and supports weight-based
minimum bandwidth.
PowerShell
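# Reconstructed example: weight-based minimum bandwidth mode
New-VMSwitch "QoS switch" -NetAdapterName "Wired Ethernet Connection 3" -MinimumBandwidthMode Weight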