
Azure Stack HCI documentation


Azure Stack HCI is a hyperconverged clustering solution that uses validated hardware to
run virtualized workloads on-premises, making it easy for customers to consolidate
aging infrastructure and connect to Azure for cloud services.

About Azure Stack HCI

OVERVIEW

What is Azure Stack HCI?

WHAT'S NEW

What's new in Azure Stack HCI, version 23H2?

Deploy the cluster

DEPLOY

1. Download the software

2. Install the OS

3. Register with Arc

4. Deploy via the portal

Update the cluster

OVERVIEW

About updates

HOW-TO GUIDE

Update via Azure portal

Update via PowerShell


Deploy workloads

DEPLOY

Deploy Azure Kubernetes Service clusters

Run Azure Virtual Desktop

Run Azure Arc Virtual Machines

Manage the cluster

HOW-TO GUIDE

Add server

Repair server

Manage via PowerShell

Manage Arc extensions

Monitor the cluster

HOW-TO GUIDE

Monitor via Insights

Azure Monitor Metrics

Health alerts

Log alerts

Metric alerts

Try the preview

OVERVIEW

Migrate (preview)

Microsoft Defender for Cloud (preview)


Azure Site Recovery (preview)

Azure Stack HCI concepts

CONCEPT

Network reference patterns overview

Security features

Observability

TRAINING

Azure Stack HCI foundations

Troubleshoot

HOW-TO GUIDE

Collect logs

Deployment issues

Remote support
Azure Stack HCI solution overview
Article • 02/21/2024

Applies to: Azure Stack HCI, versions 23H2 and 22H2

Azure Stack HCI is a hyperconverged infrastructure (HCI) solution that hosts Windows
and Linux VM or containerized workloads and their storage. It's a hybrid product that
connects the on-premises system to Azure for cloud-based services, monitoring, and
management.

Overview
An Azure Stack HCI system consists of a server or a cluster of servers running the Azure
Stack HCI operating system and connected to Azure. You can use the Azure portal to
monitor and manage individual Azure Stack HCI systems as well as view all of your
Azure Stack HCI deployments. You can also manage with your existing tools, including
Windows Admin Center and PowerShell.
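If you manage with PowerShell, a quick way to confirm that a system is clustered and registered is to query the failover cluster and the registration status reported by the operating system. The following is a minimal sketch, assuming an elevated PowerShell session on a cluster node with the FailoverClusters and AzureStackHCI modules present:

# List the nodes in the failover cluster and their current state.
Get-ClusterNode | Format-Table Name, State

# Show the Azure registration and connection status reported by the OS.
Get-AzureStackHCI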

Azure Stack HCI is available for download from the Azure portal with a free 60-day trial
(Download Azure Stack HCI).

To acquire the servers to run Azure Stack HCI, you can purchase Azure Stack HCI
integrated systems from a Microsoft hardware partner with the operating system pre-
installed, or buy validated nodes and install the operating system yourself. See the Azure
Stack HCI Catalog for hardware options and use the Azure Stack HCI sizing tool to
estimate hardware requirements.

Azure Stack HCI features and architecture


Azure Stack HCI is built on proven technologies including Hyper-V, Storage Spaces
Direct, and core Azure management services.

Each Azure Stack HCI system consists of between 1 and 16 physical servers. All servers
share common configurations and resources by leveraging the Windows Server Failover
Clustering feature.

Azure Stack HCI combines the following:

Validated hardware from a hardware partner


Azure Stack HCI operating system
Hyper-V-based compute resources
Storage Spaces Direct-based virtualized storage
Windows and Linux virtual machines as Arc-enabled servers
Azure Virtual Desktop
Azure Kubernetes Service (AKS) enabled by Azure Arc
Azure services including monitoring, backup, site recovery, and more
Azure portal, ARM and Bicep templates, Azure CLI and tools

See What's new in Azure Stack HCI for details on the latest enhancements.

Why Azure Stack HCI?


There are many reasons customers choose Azure Stack HCI, including:

It provides industry-leading virtualization performance and value.


You pay for the software monthly via an Azure subscription instead of up front when
you buy the hardware.
It's familiar for Hyper-V and server admins, allowing them to leverage existing
virtualization and storage concepts and skills.
It can be monitored and managed from the Azure portal or using on-premises
tools such as Microsoft System Center, Active Directory, Group Policy, and
PowerShell scripting.
It works with popular third-party backup, security, and monitoring tools.
Flexible hardware choices allow customers to choose the vendor with the best
service and support in their geography.
Joint support between Microsoft and the hardware vendor improves the customer
experience.
Solution updates make it easy to keep the entire solution up-to-date.

Common use cases for Azure Stack HCI


Customers often choose Azure Stack HCI in the following scenarios.

Azure Virtual Desktop (AVD): Azure Virtual Desktop for Azure Stack HCI lets you deploy Azure Virtual Desktop session hosts on your on-premises Azure Stack HCI infrastructure. You manage your session hosts from the Azure portal. To learn more, see Azure Virtual Desktop for Azure Stack HCI.

Azure Kubernetes Service (AKS) hybrid: You can leverage Azure Stack HCI to host container-based deployments, which increases workload density and resource usage efficiency. Azure Stack HCI also further enhances the agility and resiliency inherent to Azure Kubernetes deployments. Azure Stack HCI manages automatic failover of VMs serving as Kubernetes cluster nodes in case of a localized failure of the underlying physical components. This supplements the high availability built into Kubernetes, which automatically restarts failed containers on either the same or another VM. To learn more, see Azure Kubernetes Service on Azure Stack HCI and Windows Server.

Run Azure Arc services on-premises: Azure Arc allows you to run Azure services anywhere. This allows you to build consistent hybrid and multicloud application architectures by using Azure services that can run in Azure, on-premises, at the edge, or at other cloud providers. Azure Arc enabled services allow you to run Arc VMs, Azure data services, and Azure application services such as Azure App Service, Functions, Logic Apps, Event Grid, and API Management anywhere to support hybrid workloads. To learn more, see Azure Arc overview.

Highly performant SQL Server: Azure Stack HCI provides an additional layer of resiliency to highly available, mission-critical Always On availability groups-based deployments of SQL Server. This approach also offers extra benefits associated with the single-vendor approach, including simplified support and performance optimizations built into the underlying platform. To learn more, see Deploy SQL Server on Azure Stack HCI.

Trusted enterprise virtualization: Azure Stack HCI satisfies the trusted enterprise virtualization requirements through its built-in support for Virtualization-based Security (VBS). VBS relies on Hyper-V to implement the mechanism referred to as virtual secure mode, which forms a dedicated, isolated memory region within its guest VMs. By using programming techniques, it's possible to perform designated, security-sensitive operations in this dedicated memory region while blocking access to it from the host OS. This considerably limits potential vulnerability to kernel-based exploits. To learn more, see Deploy Trusted Enterprise Virtualization on Azure Stack HCI.

Scale-out storage: Storage Spaces Direct is a core technology of Azure Stack HCI that uses industry-standard servers with locally attached drives to offer high availability, performance, and scalability. Using Storage Spaces Direct results in significant cost reductions compared with competing offers based on storage area network (SAN) or network-attached storage (NAS) technologies. These benefits result from an innovative design and a wide range of enhancements, such as persistent read/write cache drives, mirror-accelerated parity, nested resiliency, and deduplication.

Disaster recovery for virtualized workloads: An Azure Stack HCI stretched cluster provides automatic failover of virtualized workloads to a secondary site following a primary site failure. Synchronous replication ensures crash consistency of VM disks.

Data center consolidation and modernization: Refreshing and consolidating aging virtualization hosts with Azure Stack HCI can improve scalability and make your environment easier to manage and secure. It's also an opportunity to retire legacy SAN storage to reduce footprint and total cost of ownership. Operations and systems administration are simplified with unified tools and interfaces and a single point of support.

Branch office and edge: For branch office and edge workloads, you can minimize infrastructure costs by deploying two-node clusters with inexpensive witness options, such as Cloud Witness or a USB drive-based file share witness. Another factor that contributes to the lower cost of two-node clusters is support for switchless networking, which relies on crossover cable between cluster nodes instead of more expensive high-speed switches. Customers can also centrally view remote Azure Stack HCI deployments in the Azure portal. To learn more, see Deploy branch office and edge on Azure Stack HCI.

Demo of using Microsoft Azure with Azure Stack HCI


For an end-to-end example of using Microsoft Azure to manage apps and infrastructure
at the Edge using Azure Arc, Azure Kubernetes Service, and Azure Stack HCI, see the
Retail edge transformation with Azure hybrid demo.

Using a fictional customer inspired directly by real customers, you'll see how to
deploy Kubernetes, set up GitOps, deploy VMs, use Azure Monitor, and drill into a
hardware failure, all without leaving the Azure portal.
https://www.youtube-nocookie.com/embed/t81MNUjAnEQ
This video includes preview functionality and shows real product capabilities, but in a
closely controlled environment.

Azure integration benefits


Azure Stack HCI allows you to take advantage of cloud and on-premises resources
working together and natively monitor, secure, and back up to the cloud.

You can use the Azure portal for an increasing number of tasks including:

Monitoring: View all of your Azure Stack HCI systems in a single, global view
where you can group them by resource group and tag them.
Billing: Pay for Azure Stack HCI through your Azure subscription.

You can also subscribe to additional Azure hybrid services.

For more details on the cloud service components of Azure Stack HCI, see Azure Stack
HCI hybrid capabilities with Azure services.

What you need for Azure Stack HCI


To get started, you'll need:

One or more servers from the Azure Stack HCI Catalog, purchased from your
preferred Microsoft hardware partner.
An Azure subscription.
Operating system licenses for your workload VMs – for example, Windows Server.
See Activate Windows Server VMs.
An internet connection for each server in the cluster that can connect via HTTPS
outbound traffic to well-known Azure endpoints at least every 30 days. See Azure
connectivity requirements for more information.
For clusters stretched across sites:
At least four servers (two in each site)
At least one 1 Gb connection between sites (a 25 Gb RDMA connection is
preferred)
An average latency of 5 ms round trip between sites if you want to do
synchronous replication where writes occur simultaneously in both sites.
If you plan to use SDN, you'll need a virtual hard disk (VHD) for the Azure Stack
HCI operating system to create Network Controller VMs (see Plan to deploy
Network Controller).
Make sure your hardware meets the System requirements and that your network meets
the physical network and host network requirements for Azure Stack HCI.
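To spot-check the outbound connectivity requirement listed above before deployment, you can test HTTPS reachability from each server. This is a minimal sketch; the two endpoints shown are common Azure endpoints used only as examples, and the authoritative list is in the Azure connectivity requirements article.

# Spot-check outbound HTTPS (port 443) reachability from a server.
# Example endpoints only; see the Azure connectivity requirements for the full list.
$endpoints = @('login.microsoftonline.com', 'management.azure.com')
foreach ($endpoint in $endpoints) {
    Test-NetConnection -ComputerName $endpoint -Port 443 |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}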

For Azure Kubernetes Service on Azure Stack HCI and Windows Server requirements, see
AKS requirements on Azure Stack HCI.

Azure Stack HCI is priced on a per core basis on your on-premises servers. For current
pricing, see Azure Stack HCI pricing .

Hardware and software partners


Microsoft recommends purchasing Integrated Systems built by our hardware partners
and validated by Microsoft to provide the best experience running Azure Stack HCI. You
can also run Azure Stack HCI on Validated Nodes, which offer a basic building block for
HCI systems to give customers more hardware choices. Microsoft partners also offer a
single point of contact for implementation and support services.

Browse the Azure Stack HCI Catalog to view Azure Stack HCI solutions from Microsoft
partners such as ASUS, Blue Chip, DataON, Dell EMC, Fujitsu, HPE, Hitachi, Lenovo, NEC,
primeLine Solutions, QCT, SecureGUARD, and Supermicro.

Some Microsoft partners are developing software that extends the capabilities of Azure
Stack HCI while allowing IT admins to use familiar tools. To learn more, see Utility
applications for Azure Stack HCI.

Next steps
Learn more about Azure Stack HCI, version 23H2 deployment.
What's new in Azure Stack HCI, version
23H2
Article • 03/04/2024

Applies to: Azure Stack HCI, version 23H2

This article lists the various features and improvements that are available in Azure Stack
HCI, version 23H2.

Azure Stack HCI, version 23H2 is the latest version of the Azure Stack HCI solution. This
version focuses on cloud-based deployment and updates, cloud-based monitoring, new
and simplified experience for Arc VM management, security, and more. For an earlier
version of Azure Stack HCI, see What's new in Azure Stack HCI, version 22H2.

The following sections briefly describe the various features and enhancements in Azure
Stack HCI, version 23H2 releases.

Features and improvements in 2402


This section lists the new features and improvements in the 2402 release of Azure Stack
HCI, version 23H2.

New built-in security role


This release introduces a new Azure built-in role called Azure Resource Bridge
Deployment Role, to harden the security posture for Azure Stack HCI, version 23H2. If
you provisioned a cluster before January 2024, then you must assign the Azure
Resource Bridge Deployment User role to the Arc Resource Bridge principal.

The role applies the concept of least privilege and must be assigned to the service
principal clustername.arb before you update the cluster.

To take advantage of the constrained permissions, remove the permissions that were
applied before. Follow the steps to Assign an Azure RBAC role via the portal. Search for
and assign the Azure Resource Bridge Deployment role to the member: <deployment-
cluster-name>-cl.arb .
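If you prefer scripting over the portal, the same assignment can be sketched with the Az PowerShell module. This is a sketch only: it assumes the Az.Resources module, an account with permission to create role assignments, and a service principal whose display name follows the <deployment-cluster-name>-cl.arb pattern described above; the cluster name, subscription, and resource group values are placeholders.

# Sketch: assign the deployment role to the Arc Resource Bridge service principal.
Connect-AzAccount

# Display name follows the <deployment-cluster-name>-cl.arb pattern; 'mycluster-cl.arb' is illustrative.
$sp = Get-AzADServicePrincipal -DisplayName 'mycluster-cl.arb'

New-AzRoleAssignment -ObjectId $sp.Id `
    -RoleDefinitionName 'Azure Resource Bridge Deployment Role' `
    -Scope '/subscriptions/<subscription-id>/resourceGroups/<resource-group>'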

An update health check is also included in this release that confirms that the new role is
assigned before you apply the update.
Changes to Active Directory preparation
Beginning this release, the Active Directory preparation process is simplified. You can
use your own existing process to create an Organizational Unit (OU), a user account with
appropriate permissions, and with Group policy inheritance blocked for the Group Policy
Object (GPO). You can also use the Microsoft provided script to create the OU. For more
information, see Prepare Active Directory.
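For the script-based option, a minimal sketch of the Microsoft-provided path looks like the following. The module and cmdlet names reflect the Prepare Active Directory article at the time of writing and should be verified there; the OU path and credential are placeholders.

# Install the Microsoft-provided AD preparation module from the PowerShell Gallery.
Install-Module AsHciADArtifactsPreCreationTool -Repository PSGallery

# Create the OU and the deployment user account; the OU path shown is illustrative.
New-HciAdObjectsPreCreation -AzureStackLCMUserCredential (Get-Credential) -AsHciOUName "OU=HCI01,DC=contoso,DC=com"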

Region expansion
The Azure Stack HCI, version 23H2 solution is now supported in Australia. For more
information, see Azure Stack HCI supported regions.

New documentation for network considerations


We're also releasing new documentation that provides guidance on network
considerations for the cloud deployment of Azure Stack HCI, version 23H2. For more
information, see Network considerations for Azure Stack HCI.

Features and improvements in 2311.3


A new Azure built-in role called Azure Resource Bridge Deployment Role is available to
harden the security posture for Azure Stack HCI, version 23H2. If you provisioned a
cluster before January 2024, then you must assign the Azure Resource Bridge
Deployment User role to the Arc Resource Bridge service principal.

The role applies the concept of least privilege and must be assigned to the Azure
Resource Bridge service principal, clustername.arb, before you update the cluster.

You must remove the previously assigned permissions to take advantage of the
constrained permissions. Follow the steps to Assign an Azure RBAC role via the portal.
Search for and assign the Azure Resource Bridge Deployment role to the member:
<deployment-cluster-name>-cl.arb .

Additionally, this release includes an update health check that confirms the assignment
of the new role before applying the update.

Features and improvements in 2311.2 GA


This section lists the new features and improvements in the 2311.2 General Availability
(GA) release for Azure Stack HCI, version 23H2.

Important

Production workloads are only supported on Azure Stack HCI systems running the
generally available 2311.2 release. To run the GA version, start with a new 2311
deployment and then update to 2311.2.

In this generally available release of Azure Stack HCI, version 23H2, all the features
that were available with the 2311 preview releases are now generally available. In
addition, the following improvements and enhancements are available:

Deployment changes
With this release:

Deployment is supported using existing storage accounts and existing Azure Key
Vaults.
A failed deployment can be run using the Rerun deployment option that becomes
available in the cluster Overview page.
Network settings such as storage traffic priority, cluster traffic priority, storage
traffic bandwidth reservation, jumbo frames, and RDMA protocol can all be
customized.
Validation must be started explicitly via the Start validation button.

For more information, see Deploy via Azure portal.

Add server and repair server changes


Bug fixes in the add server and repair server scenarios. For more information, see
the Fixed issues in 2311.2.

Arc VM management changes


In this release:

Guest management is available via Azure CLI. For more information, see Enable
guest management.
Proxy is supported for Arc VMs. For more information, see Set up proxy for Arc
VMs on Azure Stack HCI.
Storage path selection is available during the VM image creation via the Azure
portal. For more information, see Create a VM image from Azure Marketplace via
the Azure portal.

Migration of Hyper-V VMs to Azure Stack HCI (preview)


You can now migrate Hyper-V VMs to Azure Stack HCI using Azure Migrate. This feature
is currently in Preview. For more information, see Migration of Hyper-V VMs using Azure
Migrate to Azure Stack HCI (preview).

Monitoring changes
In the Azure portal, you can now monitor platform metrics of your cluster by navigating
to the Monitoring tab on your cluster’s Overview page. This tab offers a quick way to
view graphs for different platform metrics. You can select any graph to open it in Metrics
Explorer for a more in-depth analysis. For more information, see Monitor Azure Stack
HCI through the Monitoring tab.

Security via Microsoft Defender for Cloud (preview)


You can now use Microsoft Defender for Cloud to help improve the security posture of
your Azure Stack HCI environment and protect against existing and evolving threats.
This feature is currently in Preview. For more information, see Microsoft Defender for
Cloud for Azure Stack HCI (Preview).

Supported workloads
Starting with this release, the following workloads are generally available on Azure Stack
HCI:

Azure Kubernetes Service (AKS) on Azure Stack HCI. For more information, see Set
up an Azure Kubernetes Service host on Azure Stack HCI and deploy a workload
cluster using PowerShell.

In addition, AKS on HCI has a new CLI extension and Azure portal experience,
support for logical networks, support for taints and labels, support for upgrade via
Azure CLI, support for Nvidia A2, and more. For details, see What's new in AKS on
Azure Stack HCI, version 23H2.

Azure Virtual Desktop (AVD) on Azure Stack HCI. For more information, see
Deploy AVD on Azure Stack HCI.
Features and improvements in 2311
This section lists the new features and improvements in the 2311 release of Azure Stack
HCI, version 23H2. Additionally, this section includes features and improvements that
were originally released for 2310 starting with cloud-based deployment.

AKS on Azure Stack HCI, version 23H2


Starting with this release, you can run Azure Kubernetes Service (AKS) workloads on
your Azure Stack HCI system. AKS on Azure Stack HCI, version 23H2 uses Azure Arc to
create new Kubernetes clusters on Azure Stack HCI directly from Azure. For more
information, see What's new in AKS on Azure Stack HCI, version 23H2.

The following Kubernetes cluster deployment and management capabilities are available:

Simplified infrastructure deployment on Azure Stack HCI. In this release, the
infrastructure components of AKS on Azure Stack HCI 23H2, including the Arc
Resource Bridge, Custom Location, and the Kubernetes Extension for the AKS Arc
operator, are all deployed as part of the Azure Stack HCI deployment. For more
information, see Deploy Azure Stack HCI cluster using the Azure portal (preview).
Integrated infrastructure upgrade on Azure Stack HCI. The whole lifecycle
management of AKS Arc infrastructure follows the same approach as the other
components on Azure Stack HCI 23H2. For more information, see Infrastructure
component updates for AKS on Azure Stack HCI (preview).
New Azure consistent CLI. Starting with this preview release, a new consistent
command line experience is available to create and manage Kubernetes clusters.
Cloud-based management. You can now create and manage Kubernetes clusters
on Azure Stack HCI with familiar tools such as Azure portal and Azure CLI. For more
information, see Create Kubernetes clusters using Azure CLI.
Support upgrading a Kubernetes cluster using Azure CLI. You can use the Azure
CLI to upgrade the Kubernetes cluster to a newer version and apply the OS version
updates. For more information, see Upgrade an Azure Kubernetes Service (AKS)
cluster (preview).
Support Azure Container Registry to deploy container images. In this release, you
can deploy container images from a private container registry using Azure
Container Registry to your Kubernetes clusters running on Azure Stack HCI. For
more information, see Deploy from private container registry to on-premises
Kubernetes using Azure Container Registry and AKS Arc.
Support for managing and scaling the node pools. For more information, see
Manage multiple node pools in AKS Arc.
Support for Linux and Windows Server containers. For more information, see
Create Windows Server containers.

Support for web proxy


This release supports configuring a web proxy for your Azure Stack HCI system. You
perform this optional configuration if your network uses a proxy server for internet
access. For more information, see Configure web proxy for Azure Stack HCI.
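As a rough illustration of what a proxy configuration involves, the following sketch points the server's WinHTTP stack at a proxy. The Configure web proxy for Azure Stack HCI article is the authoritative procedure and covers all of the components that need proxy settings; the proxy address and bypass list here are placeholders.

# Illustrative only: set and verify the WinHTTP proxy on a server.
netsh winhttp set proxy proxy-server="proxy.contoso.com:3128" bypass-list="localhost;127.0.0.1;*.contoso.com"
netsh winhttp show proxy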

Removal of GMSA accounts


In this release, the Group Managed Service Accounts (gMSA) created during the Active
Directory preparation are removed. For more information, see Prepare Active Directory.

Cloud-based deployment
For servers running Azure Stack HCI, version 23H2, you can perform new deployments
via the cloud. You can deploy an Azure Stack HCI cluster in one of two ways: via the
Azure portal or via an Azure Resource Manager deployment template.

For more information, see Deploy Azure Stack HCI cluster using the Azure portal and
Deploy Azure Stack HCI via the Azure Resource Manager deployment template.

Cloud-based updates
This new release has the infrastructure to consolidate all the relevant updates for the OS,
software agents, Azure Arc infrastructure, and OEM drivers and firmware into a unified
monthly update package. This comprehensive update package is identified and applied
from the cloud through the Azure Update Manager tool. Alternatively, you can apply the
updates using PowerShell.

For more information, see Update your Azure Stack HCI cluster via the Azure Update
Manager and Update your Azure Stack HCI via PowerShell.
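For the PowerShell path, the flow looks like the following minimal sketch, using the cmdlets referenced in the update documentation; run it from a PowerShell session on one of the cluster nodes.

# Discover the updates known to the update service.
Get-SolutionUpdate

# Start installing the available update.
Get-SolutionUpdate | Start-SolutionUpdate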

Cloud-based monitoring

Respond to health alerts


This release integrates the Azure Monitor alerts with Azure Stack HCI so that any health
alerts generated within your on-premises Azure Stack HCI system are automatically
forwarded to Azure Monitor alerts. You can link these alerts with your automated
incident management systems, ensuring timely and efficient response.

For more information, see Respond to Azure Stack HCI health alerts using Azure Monitor
alerts.

Monitor metrics

This release also integrates the Azure Monitor metrics with Azure Stack HCI so that you
can monitor the health of your Azure Stack HCI system via the metrics collected for
compute, storage, and network resources. This integration enables you to store cluster
data in a dedicated time-series database that you can use to analyze data from your
Azure Stack HCI system.

For more information, see Monitor Azure Stack HCI with Azure Monitor metrics.
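If you want to pull these platform metrics programmatically rather than through the portal, a sketch with the Az PowerShell module (Az.Monitor) looks like this; the resource ID is a placeholder for your cluster resource, and the metric name should be taken from the list that the first command returns.

# Sketch: explore platform metrics for an Azure Stack HCI cluster resource.
Connect-AzAccount
$resourceId = '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.AzureStackHCI/clusters/<cluster-name>'

# Discover which metric names the cluster resource emits...
Get-AzMetricDefinition -ResourceId $resourceId | Select-Object -ExpandProperty Name

# ...then retrieve the last day of data for one of them (substitute a name from the list above).
Get-AzMetric -ResourceId $resourceId -MetricName '<metric-name>' -StartTime (Get-Date).AddDays(-1) -EndTime (Get-Date)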

Enhanced monitoring capabilities with Insights

With Insights for Azure Stack HCI, you can now monitor and analyze performance,
savings, and usage insights about key Azure Stack HCI features, such as ReFS
deduplication and compression. To use these enhanced monitoring capabilities, ensure
that your cluster is deployed, registered, and connected to Azure, and enrolled in
monitoring. For more information, see Monitor Azure Stack HCI features with Insights.

Azure Arc VM management


Beginning this release, the following Azure Arc VM management capabilities are
available:

Simplified Arc Resource Bridge deployment. The Arc Resource Bridge is now
deployed as part of the Azure Stack HCI deployment. For more information, see
Deploy Azure Stack HCI cluster using the Azure portal.
New RBAC roles for Arc VMs. This release introduces new RBAC roles for Arc VMs.
For more information, see Manage RBAC roles for Arc VMs.
New Azure consistent CLI. Beginning this preview release, a new consistent
command line experience is available to create VM and VM resources such as VM
images, storage paths, logical networks, and network interfaces. For more
information, see Create Arc VMs on Azure Stack HCI.
Support for static IPs. This release has the support for static IPs. For more
information, see Create static logical networks on Azure Stack HCI.
Support for storage paths. While default storage paths are created during the
deployment, you can also specify custom storage paths for your Arc VMs. For more
information, see Create storage paths on Azure Stack HCI.
Support for Azure VM extensions on Arc VMs on Azure Stack HCI. Starting with
this preview release, you can also enable and manage the Azure VM extensions
that are supported on Azure Arc, on Azure Stack HCI Arc VMs created via the Azure
CLI. You can manage these VM extensions using the Azure CLI or the Azure portal.
For more information, see Manage VM extensions for Azure Stack HCI VMs.
Trusted launch for Azure Arc VMs. Azure Trusted Launch protects VMs against
boot kits, rootkits, and kernel-level malware. Starting this preview release, some of
those Trusted Launch capabilities are available for Arc VMs on Azure Stack HCI. For
more information, see Trusted launch for Arc VMs.

Security capabilities
New installations with this release of Azure Stack HCI start with a secure-by-default
strategy. The new version has a tailored security baseline coupled with a security drift
control mechanism and a set of well-known security features enabled by default. This
release provides the following (a quick verification sketch follows the list):

A tailored security baseline with over 300 security settings configured and
enforced with a security drift control mechanism. For more information, see
Security baseline settings for Azure Stack HCI.
Out-of-box protection for data and network with SMB signing and BitLocker
encryption for OS and Cluster Shared Volumes. For more information, see
BitLocker encryption for Azure Stack HCI.
Reduced attack surface as Windows Defender Application Control is enabled by
default and limits the applications and the code that you can run on the core
platform. For more information, see Windows Defender Application Control for
Azure Stack HCI.
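As a quick, read-only check of a few of these defaults, you can use standard Windows cmdlets on a cluster node; this is a verification sketch only, not HCI-specific tooling.

# BitLocker status for the OS volume and any Cluster Shared Volumes.
Get-BitLockerVolume | Format-Table MountPoint, VolumeType, ProtectionStatus

# SMB signing requirement on the server.
Get-SmbServerConfiguration | Select-Object RequireSecuritySignature

# UEFI Secure Boot state (returns True when enabled).
Confirm-SecureBootUEFI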

Capacity management
In this release, you can add and remove servers, or repair servers, from your Azure Stack
HCI system via PowerShell.

For more information, see Add server and Repair server.
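As a rough sketch of the PowerShell path for adding a node (the Add server article is the authoritative procedure; the Add-Server parameter names shown here are assumptions taken from that article at the time of writing and should be confirmed there, and the node name and IP address are placeholders):

# Local administrator credential for the incoming server.
$cred = Get-Credential

# Add the new server to the system (parameter names per the Add server article; verify before use).
Add-Server -Name "Node05" -HostIpv4 "192.168.1.15" -LocalAdminCredential $cred

# Afterward, confirm that the new node joined the cluster.
Get-ClusterNode | Format-Table Name, State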

ReFS deduplication and compression


This release introduces the Resilient File System (ReFS) deduplication and compression
feature designed specifically for active workloads, such as Azure Virtual Desktop (AVD)
on Azure Stack HCI. Enable this feature using Windows Admin Center or PowerShell to
optimize storage usage and reduce cost.

For more information, see Optimize storage with ReFS deduplication and compression
in Azure Stack HCI.
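For the PowerShell route, a minimal sketch of enabling the feature on a Cluster Shared Volume follows; the cmdlet and parameter names reflect the ReFS deduplication documentation at the time of writing and should be confirmed there, and the volume path is illustrative.

# Enable ReFS deduplication and compression on a volume (verify cmdlet names against the linked article).
Enable-ReFSDedup -Volume "C:\ClusterStorage\UserStorage_1" -Type DedupAndCompress

# Review savings after a deduplication job has run.
Get-ReFSDedupStatus -Volume "C:\ClusterStorage\UserStorage_1"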

Next steps
Read the blog announcing the general availability of Azure Stack HCI, version
23H2 .
Read the blog about What’s new for Azure Stack HCI at Microsoft Ignite 2023 .
For Azure Stack HCI, version 23H2 deployments:
Read the Deployment overview.
Learn how to Deploy Azure Stack HCI, version 23H2 via the Azure portal.
View known issues in Azure Stack HCI 2402 release
Article • 03/11/2024

Applies to: Azure Stack HCI, version 23H2

This article identifies the critical known issues and their workarounds in the Azure Stack HCI 2402 release.

The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy
your Azure Stack HCI, carefully review the information contained in the release notes.

Important

This release supports both new deployments and updates. You must be running version 2311.3 to update to this release.

For more information about the new features in this release, see What's new in 23H2.

Issues for version 2402


This software release maps to software version number 10.2402.0.23.

Release notes for this version include the issues fixed in this release, known issues in this release, and known issues carried over from
previous versions.

Fixed issues
Here are the issues fixed in this release:

Deployment: The first deployment step, Before Cloud Deployment, when deploying via the Azure portal can take from 45 minutes to an hour to complete.

Deployment: There's a sporadic heartbeat reliability issue in this release due to which the registration encounters the error: HCI registration failed. Error: Arc integration failed. Workaround/Comments: This issue is intermittent. Try rerunning the deployment. For more information, see Rerun the deployment.

Deployment: There's an intermittent issue in this release where the Arc integration validation fails with this error: Validator failed. Can't retrieve the dynamic parameters for the cmdlet. PowerShell Gallery is currently unavailable. Please try again later. Workaround/Comments: This issue is intermittent. Try rerunning the deployment. For more information, see Rerun the deployment.

Deployment: In some instances, running the Arc registration script doesn't install the mandatory extensions, Azure Edge device Management or Azure Edge Lifecycle Manager. Workaround/Comments: The issue was fixed in this release. The extensions remediate themselves and get into a successful deployment state.

Known issues in this release


Here are the known issues in this release:

Deployment: If you prepare the Active Directory on your own (not using the script and procedure provided by Microsoft), your Active Directory validation could fail with missing Generic All permission. This is due to an issue in the validation check that checks for a dedicated permission entry for msFVE-RecoverInformationobjects – General – Permissions Full control, which is required for BitLocker recovery. Workaround/Comments: Use the Prepare AD script method or, if using your own method, make sure to assign the specific permission msFVE-RecoverInformationobjects – General – Permissions Full control.

Deployment: There's a rare issue in this release where the DNS record is deleted during the Azure Stack HCI deployment. When that occurs, the following exception is seen: Type 'PropagatePublicRootCertificate' of Role 'ASCA' raised an exception: The operation on computer 'ASB88RQ22U09' failed: WinRM cannot process the request. The following error occurred while using Kerberos authentication: Cannot find the computer ASB88RQ22U09.local. Verify that the computer exists on the network and that the name provided is spelled correctly at PropagatePublicRootCertificate, C:\NugetStore\Microsoft.AzureStack, at Orchestration.Roles.CertificateAuthority.10.2402.0.14\content\Classes\ASCA\ASCA.psm1: line 38, at C:\CloudDeployment\ECEngine\InvokeInterfaceInternal.psm1: line 127, at Invoke-EceInterfaceInternal, C:\CloudDeployment\ECEngine\InvokeInterfaceInternal.psm1: line 123. Workaround/Comments: Check the DNS server to see if any DNS records of the cluster nodes are missing. Apply the following mitigation on the nodes where the DNS record is missing: restart the DNS client service. Open a PowerShell session and run the following cmdlet on the affected node: Taskkill /f /fi "SERVICES eq dnscache"

Deployment: In this release, there's a remote task failure on a multi-node deployment that results in the following exception: ECE RemoteTask orchestration failure with ASRR1N42R01U31 (node pingable - True): A WebException occurred while sending a RestRequest. WebException.Status: ConnectFailure on https://<URL>. Workaround/Comments: The mitigation is to restart the ECE agent on the affected node. On your server, open a PowerShell session and run the following command: Restart-Service ECEAgent

Updates: In this release, there's a health check issue owing to which a single server Azure Stack HCI can't be updated from the Azure portal. Workaround/Comments: Update your Azure Stack HCI via PowerShell.

Add/Repair server: In this release, when adding or repairing a server, a failure is seen when the software load balancer or network controller VM certificates are being copied from the existing nodes. The failure is because these certificates weren't generated during the deployment/update. Workaround/Comments: There's no workaround in this release. If you encounter this issue, contact Microsoft Support to determine next steps.

Deployment: In this release, there's a transient issue resulting in the deployment failure with the following exception: Type 'SyncDiagnosticLevel' of Role 'ObservabilityConfig' raised an exception: Syncing Diagnostic Level failed with error: The Diagnostic Level does not match. Portal was not set to Enhanced, instead is Basic. Workaround/Comments: As this is a transient issue, retrying the deployment should fix this. For more information, see how to Rerun the deployment.

Deployment: In this release, there's an issue with the Secrets URI/location field. This is a required field that is marked Not mandatory and results in ARM template deployment failures. Workaround/Comments: Use the sample parameters file in the Deploy Azure Stack HCI, version 23H2 via ARM template to ensure that all the inputs are provided in the required format and then try the deployment. If there's a failed deployment, you must also clean up the following resources before you Rerun the deployment: 1. Delete C:\EceStore. 2. Delete C:\CloudDeployment. 3. Delete C:\nugetstore. 4. Remove-Item HKLM:\Software\Microsoft\LCMAzureStackStampInformation.

Security: For new deployments, Secured-core capable devices won't have Dynamic Root of Measurement (DRTM) enabled by default. If you try to enable DRTM using the Enable-AzSSecurity cmdlet, you'll see an error that the DRTM setting isn't supported in the current release. Microsoft recommends defense in depth, and UEFI Secure Boot still protects the components in the Static Root of Trust (SRT) boot chain by ensuring that they are loaded only when they are signed and verified. Workaround/Comments: DRTM is not supported in this release.

Known issues from previous releases


Here are the known issues from previous releases:

Arc VM management: Deployment or update of Arc Resource Bridge could fail when the automatically generated temporary SPN secret during this operation starts with a hyphen. Workaround: Retry the deployment/update. The retry should regenerate the SPN secret and the operation will likely succeed.

Arc VM management: Arc Extensions on Arc VMs stay in "Creating" state indefinitely. Workaround: Sign in to the VM, open a command prompt, and type the following. Windows: notepad C:\ProgramData\AzureConnectedMachineAgent\Config\agentconfig.json Linux: sudo vi /var/opt/azcmagent/agentconfig.json Next, find the resourcename property. Delete the GUID that is appended to the end of the resource name, so this property matches the name of the VM. Then restart the VM.

Arc VM management: When a new server is added to an Azure Stack HCI cluster, storage path isn't created automatically for the newly created volume. Workaround: You can manually create a storage path for any new volumes. For more information, see Create a storage path.

Arc VM management: Restart of Arc VM operation completes after approximately 20 minutes although the VM itself restarts in about a minute. Workaround: There's no known workaround in this release.

Arc VM management: In some instances, the status of the logical network shows as Failed in Azure portal. This occurs when you try to delete the logical network without first deleting any resources such as network interfaces associated with that logical network. You should still be able to create resources on this logical network. The status is misleading in this instance. Workaround: If the status of this logical network was Succeeded at the time when this network was provisioned, then you can continue to create resources on this network.

Arc VM management: In this release, when you update a VM with a data disk attached to it using the Azure CLI, the operation fails with the following error message: Couldn't find a virtual hard disk with the name. Workaround: Use the Azure portal for all the VM update operations. For more information, see Manage Arc VMs and Manage Arc VM resources.

Update: In rare instances, you may encounter this error while updating your Azure Stack HCI: Type 'UpdateArbAndExtensions' of Role 'MocArb' raised an exception: Exception Upgrading ARB and Extension in step [UpgradeArbAndExtensions :Get-ArcHciConfig] UpgradeArb: Invalid applianceyaml = [C:\AksHci\hci-appliance.yaml]. Workaround: If you see this issue, contact Microsoft Support to assist you with the next steps.

Update: When you try to change your AzureStackLCMUserPassword using the command Set-AzureStackLCMUserPassword, you might encounter this error: Can't find an object with identity: 'object id'. Workaround: There's no known workaround in this release.

Networking: There is an infrequent DNS client issue in this release that causes the deployment to fail on a two-node cluster with a DNS resolution error: A WebException occurred while sending a RestRequest. WebException.Status: NameResolutionFailure. As a result of the bug, the DNS record of the second node is deleted soon after it's created, resulting in a DNS error. Workaround: Restart the server. This operation registers the DNS record, which prevents it from getting deleted.

Azure portal: In some instances, the Azure portal might take a while to update and the view might not be current. Workaround: You might need to wait for 30 minutes or more to see the updated view.

Arc VM management: Deleting a network interface on an Arc VM from Azure portal doesn't work in this release. Workaround: Use the Azure CLI to first remove the network interface and then delete it. For more information, see Remove the network interface and Delete the network interface.

Arc VM management: When you create a disk or a network interface in this release with underscore in the name, the operation fails. Workaround: Make sure to not use underscore in the names for disks or network interfaces.

Deployment: Providing the OU name in an incorrect syntax isn't detected in the Azure portal. The incorrect syntax is however detected at a later step during cluster validation. Workaround: There's no known workaround in this release.

Deployment: Deployments via Azure Resource Manager time out after 2 hours. Deployments that exceed 2 hours show up as failed in the resource group though the cluster is successfully created. Workaround: To monitor the deployment in the Azure portal, go to the Azure Stack HCI cluster resource and then go to the new Deployments entry.

Azure Site Recovery: Azure Site Recovery can't be installed on an Azure Stack HCI cluster in this release. Workaround: There's no known workaround in this release.

Update: When updating the Azure Stack HCI cluster via the Azure Update Manager, the update progress and results may not be visible in the Azure portal. Workaround: On each cluster node, add the following registry key (no value needed): New-Item -Path "HKLM:\SYSTEM\CurrentControlSet\Services\HciCloudManagementSvc\Parameters" -force. Then on one of the cluster nodes, restart the Cloud Management cluster group: Stop-ClusterGroup "Cloud Management" followed by Start-ClusterGroup "Cloud Management". This won't fully remediate the issue as the progress details may still not be displayed for a duration of the update process. To get the latest update details, you can Retrieve the update progress with PowerShell.

Update: In this release, if you run the Test-CauRun cmdlet prior to actually applying the 2311.2 update, you see an error message regarding a missing firewall rule to remotely shut down the Azure Stack HCI system. Workaround: No action is required on your part as the missing rule is automatically created when 2311.2 updates are applied. When applying future updates, make sure to run the Test-EnvironmentReadiness cmdlet instead of Test-CauRun. For more information, see Step 2: Optionally validate system health.

Updates: In rare instances, if a failed update is stuck in an In progress state in Azure Update Manager, the Try again button is disabled. Workaround: To resume the update, run the following PowerShell command: Get-SolutionUpdate | Start-SolutionUpdate.

Updates: In some cases, SolutionUpdate commands could fail if run after the Send-DiagnosticData command. Workaround: Make sure to close the PowerShell session used for Send-DiagnosticData. Open a new PowerShell session and use it for SolutionUpdate commands.

Updates: In rare instances, when applying an update from 2311.0.24 to 2311.2.4, cluster status reports In Progress instead of expected Failed to update. Workaround: Retry the update. If the issue persists, contact Microsoft Support.

Arc VM management: If the resource group used to deploy an Arc VM on your Azure Stack HCI has an underscore in the name, the guest agent installation fails. As a result, you won't be able to enable guest management. Workaround: Make sure that there are no underscores in the resource groups used to deploy Arc VMs.

Cluster aware updating: Resume node operation failed to resume node. Workaround: This is a transient issue and could resolve on its own. Wait for a few minutes and retry the operation. If the issue persists, contact Microsoft Support.

Cluster aware updating: Suspend node operation was stuck for greater than 90 minutes. Workaround: This is a transient issue and could resolve on its own. Wait for a few minutes and retry the operation. If the issue persists, contact Microsoft Support.

Next steps
Read the Deployment overview.



View known issues in Azure Stack HCI 2311.3
release
Article • 02/28/2024

Applies to: Azure Stack HCI, version 23H2

This article identifies the critical known issues and their workarounds in the Azure Stack HCI 2311.3 release.

The release notes are continuously updated, and as critical issues requiring a workaround are discovered,
they're added. Before you deploy your Azure Stack HCI, carefully review the information contained in the
release notes.

Important

Production workloads are only supported on Azure Stack HCI systems running the generally
available 2311.3 release. To run the GA version, you need to start with a new 2311 deployment and
then update to 2311.3.

For more information about the new features in this release, see What's new in 23H2.

Issues for version 2311.3


This software release maps to software version number 10.2311.3.12. This release only supports updates
from the 2311 release.

Release notes for this version include the issues fixed in this release, known issues in this release, and
known issues carried over from previous versions.

Fixed issues
Microsoft isn't currently aware of any fixed issues with this release.

Known issues in this release


Here are the known issues in this release:

Security: In this release, if you enable Dynamic Root of Measurement (DRTM) using the Enable-AzSSecurity cmdlet, you receive the following error: DRTM setting is not supported on current release at C:\ProgramFiles\WindowsPowerShell\Modules\AzureStackOSConfigAgent\AzureStackOSConfigAgent.psm1:4307 char:17 + ...throw "DRTM setting is not supported on current release". Workaround: DRTM is not supported in this release.
Known issues from previous releases
Here are the known issues from previous releases:

Arc VM management: Deployment or update of Arc Resource Bridge could fail when the automatically generated temporary SPN secret during this operation starts with a hyphen. Workaround: Retry the deployment/update. The retry should regenerate the SPN secret and the operation will likely succeed.

Arc VM management: Arc Extensions on Arc VMs stay in "Creating" state indefinitely. Workaround: Sign in to the VM, open a command prompt, and type the following. Windows: notepad C:\ProgramData\AzureConnectedMachineAgent\Config\agentconfig.json Linux: sudo vi /var/opt/azcmagent/agentconfig.json Next, find the resourcename property. Delete the GUID that is appended to the end of the resource name, so this property matches the name of the VM. Then restart the VM.

Arc VM management: When a new server is added to an Azure Stack HCI cluster, storage path isn't created automatically for the newly created volume. Workaround: You can manually create a storage path for any new volumes. For more information, see Create a storage path.

Arc VM management: Restart of Arc VM operation completes after approximately 20 minutes although the VM itself restarts in about a minute. Workaround: There's no known workaround in this release.

Arc VM management: In some instances, the status of the logical network shows as Failed in Azure portal. This occurs when you try to delete the logical network without first deleting any resources such as network interfaces associated with that logical network. You should still be able to create resources on this logical network. The status is misleading in this instance. Workaround: If the status of this logical network was Succeeded at the time when this network was provisioned, then you can continue to create resources on this network.

Arc VM management: In this release, when you update a VM with a data disk attached to it using the Azure CLI, the operation fails with the following error message: Couldn't find a virtual hard disk with the name. Workaround: Use the Azure portal for all the VM update operations. For more information, see Manage Arc VMs and Manage Arc VM resources.

Deployment: There's a sporadic heartbeat reliability issue in this release due to which the registration encounters the error: HCI registration failed. Error: Arc integration failed. Workaround: This issue is intermittent. Try rerunning the deployment. For more information, see Rerun the deployment.

Deployment: There's an intermittent issue in this release where the Arc integration validation fails with this error: Validator failed. Can't retrieve the dynamic parameters for the cmdlet. PowerShell Gallery is currently unavailable. Please try again later. Workaround: This issue is intermittent. Try rerunning the deployment. For more information, see Rerun the deployment.

Update: In rare instances, you may encounter this error while updating your Azure Stack HCI: Type 'UpdateArbAndExtensions' of Role 'MocArb' raised an exception: Exception Upgrading ARB and Extension in step [UpgradeArbAndExtensions :Get-ArcHciConfig] UpgradeArb: Invalid applianceyaml = [C:\AksHci\hci-appliance.yaml]. Workaround: If you see this issue, contact Microsoft Support to assist you with the next steps.

Update: When you try to change your AzureStackLCMUserPassword using the command Set-AzureStackLCMUserPassword, you might encounter this error: Can't find an object with identity: 'object id'. Workaround: There's no known workaround in this release.

Networking: There's an infrequent DNS client issue in this release that causes the deployment to fail on a two-node cluster with a DNS resolution error: A WebException occurred while sending a RestRequest. WebException.Status: NameResolutionFailure. As a result of the bug, the DNS record of the second node is deleted soon after it's created, resulting in a DNS error. Workaround: Restart the server. This operation registers the DNS record, which prevents it from getting deleted.

Azure portal: In some instances, the Azure portal might take a while to update and the view might not be current. Workaround: You might need to wait for 30 minutes or more to see the updated view.

Arc VM management: Deleting a network interface on an Arc VM from Azure portal doesn't work in this release. Workaround: Use the Azure CLI to first remove the network interface and then delete it. For more information, see Remove the network interface and Delete the network interface.

Arc VM management: When you create a disk or a network interface in this release with underscore in the name, the operation fails. Workaround: Make sure to not use underscore in the names for disks or network interfaces.

Deployment: Providing the OU name in an incorrect syntax isn't detected in the Azure portal. The incorrect syntax is however detected at a later step during cluster validation. Workaround: There's no known workaround in this release.

Deployment: In some instances, running the Arc registration script doesn't install the mandatory extensions, Azure Edge device Management or Azure Edge Lifecycle Manager. Workaround: Run the script again and make sure that all the mandatory extensions are installed before you Deploy via Azure portal.

Deployment: The first deployment step, Before Cloud Deployment, when deploying via the Azure portal can take from 45 minutes to an hour to complete.

Deployment: Deployments via Azure Resource Manager time out after 2 hours. Deployments that exceed 2 hours show up as failed in the resource group though the cluster is successfully created. Workaround: To monitor the deployment in the Azure portal, go to the Azure Stack HCI cluster resource and then go to the new Deployments entry.

Azure Site Recovery: Azure Site Recovery can't be installed on an Azure Stack HCI cluster in this release. Workaround: There's no known workaround in this release.

Update: When updating the Azure Stack HCI cluster via the Azure Update Manager, the update progress and results may not be visible in the Azure portal. Workaround: On each cluster node, add the following registry key (no value needed): New-Item -Path "HKLM:\SYSTEM\CurrentControlSet\Services\HciCloudManagementSvc\Parameters" -force. Then on one of the cluster nodes, restart the Cloud Management cluster group: Stop-ClusterGroup "Cloud Management" followed by Start-ClusterGroup "Cloud Management". This won't fully remediate the issue as the progress details may still not be displayed for a duration of the update process. To get the latest update details, you can Retrieve the update progress with PowerShell.

Update: In this release, if you run the Test-CauRun cmdlet prior to actually applying the 2311.2 update, you see an error message regarding a missing firewall rule to remotely shut down the Azure Stack HCI system. Workaround: No action is required on your part as the missing rule is automatically created when 2311.2 updates are applied. When applying future updates, make sure to run the Test-EnvironmentReadiness cmdlet instead of Test-CauRun. For more information, see Step 2: Optionally validate system health.

Updates: In rare instances, if a failed update is stuck in an In progress state in Azure Update Manager, the Try again button is disabled. Workaround: To resume the update, run the following PowerShell command: Get-SolutionUpdate | Start-SolutionUpdate.

Updates: In some cases, SolutionUpdate commands could fail if run after the Send-DiagnosticData command. Workaround: Make sure to close the PowerShell session used for Send-DiagnosticData. Open a new PowerShell session and use it for SolutionUpdate commands.

Updates: In rare instances, when applying an update from 2311.0.24 to 2311.2.4, cluster status reports In Progress instead of expected Failed to update. Workaround: Retry the update. If the issue persists, contact Microsoft Support.

Arc VM management: If the resource group used to deploy an Arc VM on your Azure Stack HCI has an underscore in the name, the guest agent installation fails. As a result, you won't be able to enable guest management. Workaround: Make sure that there are no underscores in the resource groups used to deploy Arc VMs.

Cluster aware updating: Resume node operation failed to resume node. Workaround: This is a transient issue and could resolve on its own. Wait for a few minutes and retry the operation. If the issue persists, contact Microsoft Support.

Cluster aware updating: Suspend node operation was stuck for greater than 90 minutes. Workaround: This is a transient issue and could resolve on its own. Wait for a few minutes and retry the operation. If the issue persists, contact Microsoft Support.
Next steps
Read the Deployment overview.
View known issues in Azure Stack HCI 2311.2
General Availability release
Article • 02/26/2024

Applies to: Azure Stack HCI, version 23H2

This article identifies the critical known issues and their workarounds in the Azure Stack HCI 2311.2 General
Availability (GA) release.

The release notes are continuously updated, and as critical issues requiring a workaround are discovered,
they're added. Before you deploy your Azure Stack HCI, carefully review the information contained in the
release notes.

Important

Production workloads are only supported on Azure Stack HCI systems running the generally
available 2311.2 release. To run the GA version, you need to start with a new 2311 deployment and then
update to 2311.2.

For more information about the new features in this release, see What's new in 23H2.

Issues for version 2311.2


This software release maps to software version number 10.2311.2.7. This release only supports updates from
the 2311 release.

Release notes for this version include the issues fixed in this release, known issues in this release, and known
issues carried over from previous versions.

Fixed issues
Here are the issues fixed in this release:

Add server and repair server: In this release, add server and repair server scenarios might fail with the following error: CloudEngine.Actions.InterfaceInvocationFailedException: Type 'AddNewNodeConfiguration' of Role 'BareMetal' raised an exception: The term 'Trace-Execution' is not recognized as the name of a cmdlet, function, script file, or operable program. Workaround/Comments: Follow these steps to work around this error: 1. Create a copy of the required PowerShell modules on the new node. 2. Connect to a node on your Azure Stack HCI system. 3. Run the following PowerShell cmdlet: Copy-Item "C:\Program Files\WindowsPowerShell\Modules\CloudCommon" "\\newserver\c$\Program Files\WindowsPowerShell\Modules\CloudCommon" -recursive. For more information, see Prerequisite for add and repair server scenarios.

Deployment: When you update 2310 to 2311 software, the service principal doesn't migrate. Workaround/Comments: If you encounter an issue with the software, use PowerShell to migrate the service principal.

Deployment: If you select Review + Create and you haven't filled out all the tabs, the deployment begins and then eventually fails. Workaround/Comments: There's no known workaround in this release.

Deployment: This issue is seen if an incorrect subscription or resource group was used during registration. When you register the server a second time with Arc, the Azure Edge Lifecycle Manager extension fails during the registration, but the extension state is reported as Ready. Workaround/Comments: Before you run the registration the second time: Make sure to delete the following folders from your servers: C:\ecestore, C:\CloudDeployment, and C:\nugetstore. Delete the registry key using the PowerShell cmdlet: Remove-Item HKLM:\Software\Microsoft\LCMAzureStackStampInformation

Deployment: A new storage account is created for each run of the deployment. Existing storage accounts aren't supported in this release.

Deployment: A new key vault is created for each run of the deployment. Existing key vaults aren't supported in this release.

Deployment: On server hardware, a USB network adapter is created to access the Baseboard Management Controller (BMC). This adapter can cause the cluster validation to fail during the deployment. Workaround/Comments: Make sure to disable the BMC network adapter before you begin cloud deployment.

Deployment: The network direct intent overrides defined on the template aren't working in this release. Workaround/Comments: Use the ARM template to override this parameter and disable RDMA for the intents.

Known issues in this release


Here are the known issues in this release:

ノ Expand table

Update
Issue: In this release, if you run the Test-CauRun cmdlet before actually applying the 2311.2 update, you see an error message regarding a missing firewall rule to remotely shut down the Azure Stack HCI system.
Workaround: No action is required on your part, as the missing rule is automatically created when 2311.2 updates are applied. When applying future updates, make sure to run the Test-EnvironmentReadiness cmdlet instead of Test-CauRun. For more information, see Step 2: Optionally validate system health.

Updates
Issue: In rare instances, if a failed update is stuck in an In progress state in Azure Update Manager, the Try again button is disabled.
Workaround: To resume the update, run the following PowerShell command: Get-SolutionUpdate | Start-SolutionUpdate.

Updates
Issue: In some cases, SolutionUpdate commands could fail if run after the Send-DiagnosticData command.
Workaround: Make sure to close the PowerShell session used for Send-DiagnosticData. Open a new PowerShell session and use it for SolutionUpdate commands.

Updates
Issue: In very rare instances, when applying an update from 2311.0.24 to 2311.2.4, cluster status reports In Progress instead of the expected Failed to update.
Workaround: Retry the update. If the issue persists, contact Microsoft Support.

Arc VM management
Issue: If the resource group used to deploy an Arc VM on your Azure Stack HCI has an underscore in the name, the guest agent installation will fail. As a result, you won't be able to enable guest management.
Workaround: Make sure that there are no underscores in the resource groups used to deploy Arc VMs.

Cluster aware updating
Issue: Resume node operation failed to resume node.
Workaround: This is a transient issue and could resolve on its own. Wait for a few minutes and retry the operation. If the issue persists, contact Microsoft Support.

Cluster aware updating
Issue: Suspend node operation was stuck for greater than 90 minutes.
Workaround: This is a transient issue and could resolve on its own. Wait for a few minutes and retry the operation. If the issue persists, contact Microsoft Support.

Known issues from previous releases


Here are the known issues from previous releases:

ノ Expand table

Arc VM management
Issue: Deployment or update of Arc Resource Bridge could fail when the automatically generated temporary SPN secret during this operation starts with a hyphen.
Workaround: Retry the deployment/update. The retry should regenerate the SPN secret and the operation will likely succeed.

Arc VM management
Issue: Arc Extensions on Arc VMs stay in "Creating" state indefinitely.
Workaround: Log in to the VM, open a command prompt, and type the following. Windows: notepad C:\ProgramData\AzureConnectedMachineAgent\Config\agentconfig.json Linux: sudo vi /var/opt/azcmagent/agentconfig.json Next, find the resourcename property. Delete the GUID that is appended to the end of the resource name, so this property matches the name of the VM. Then restart the VM.

Arc VM management
Issue: When a new server is added to an Azure Stack HCI cluster, a storage path isn't created automatically for the newly created volume.
Workaround: You can manually create a storage path for any new volumes. For more information, see Create a storage path.

Arc VM management
Issue: Restart of Arc VM operation completes after approximately 20 minutes although the VM itself restarts in about a minute.
Workaround: There's no known workaround in this release.

Arc VM management
Issue: In some instances, the status of the logical network shows as Failed in Azure portal. This occurs when you try to delete the logical network without first deleting any resources such as network interfaces associated with that logical network. You should still be able to create resources on this logical network. The status is misleading in this instance.
Workaround: If the status of this logical network was Succeeded at the time when this network was provisioned, then you can continue to create resources on this network.

Arc VM management
Issue: In this release, when you update a VM with a data disk attached to it using the Azure CLI, the operation fails with the following error message: Couldn't find a virtual hard disk with the name.
Workaround: Use the Azure portal for all the VM update operations. For more information, see Manage Arc VMs and Manage Arc VM resources.

Deployment
Issue: There's a sporadic heartbeat reliability issue in this release due to which the registration encounters the error: HCI registration failed. Error: Arc integration failed.
Workaround: This issue is intermittent. Try rerunning the deployment. For more information, see Rerun the deployment.

Deployment
Issue: There's an intermittent issue in this release where the Arc integration validation fails with this error: Validator failed. Cannot retrieve the dynamic parameters for the cmdlet. PowerShell Gallery is currently unavailable. Please try again later.
Workaround: This issue is intermittent. Try rerunning the deployment. For more information, see Rerun the deployment.

Update
Issue: In rare instances, you may encounter this error while updating your Azure Stack HCI: Type 'UpdateArbAndExtensions' of Role 'MocArb' raised an exception: Exception Upgrading ARB and Extension in step [UpgradeArbAndExtensions :Get-ArcHciConfig] UpgradeArb: Invalid applianceyaml = [C:\AksHci\hci-appliance.yaml].
Workaround: If you see this issue, contact Microsoft Support to assist you with the next steps.

Update
Issue: When you try to change your AzureStackLCMUserPassword using the command Set-AzureStackLCMUserPassword, you might encounter this error: Cannot find an object with identity: 'object id'.
Workaround: There's no known workaround in this release.

Networking
Issue: There's an infrequent DNS client issue in this release that causes the deployment to fail on a two-node cluster with a DNS resolution error: A WebException occurred while sending a RestRequest. WebException.Status: NameResolutionFailure. As a result of the bug, the DNS record of the second node is deleted soon after it's created, resulting in a DNS error.
Workaround: Restart the server. This operation registers the DNS record, which prevents it from getting deleted.

Azure portal
Issue: In some instances, the Azure portal might take a while to update and the view might not be current.
Workaround: You might need to wait for 30 minutes or more to see the updated view.

Arc VM management
Issue: Deleting a network interface on an Arc VM from Azure portal doesn't work in this release.
Workaround: Use the Azure CLI to first remove the network interface and then delete it. For more information, see Remove the network interface and Delete the network interface.

Arc VM management
Issue: When you create a disk or a network interface in this release with an underscore in the name, the operation fails.
Workaround: Make sure to not use underscores in the names for disks or network interfaces.

Deployment
Issue: Providing the OU name in an incorrect syntax isn't detected in the Azure portal. The incorrect syntax is however detected at a later step during cluster validation.
Workaround: There's no known workaround in this release.

Deployment
Issue: In some instances, running the Arc registration script doesn't install the mandatory extensions, Azure Edge device Management or Azure Edge Lifecycle Manager.
Workaround: Run the script again and make sure that all the mandatory extensions are installed before you Deploy via Azure portal.

Deployment
Issue: The first deployment step, Before Cloud Deployment, when Deploying via Azure portal can take from 45 minutes to an hour to complete.

Deployment
Issue: Deployments via Azure Resource Manager time out after 2 hours. Deployments that exceed 2 hours show up as failed in the resource group though the cluster is successfully created.
Workaround: To monitor the deployment in the Azure portal, go to the Azure Stack HCI cluster resource and then go to the new Deployments entry.

Azure Site Recovery
Issue: Azure Site Recovery can't be installed on an Azure Stack HCI cluster in this release.
Workaround: There's no known workaround in this release.

Update
Issue: When updating the Azure Stack HCI cluster via the Azure Update Manager, the update progress and results may not be visible in the Azure portal.
Workaround: To work around this issue, on each cluster node, add the following registry key (no value needed):
New-Item -Path "HKLM:\SYSTEM\CurrentControlSet\Services\HciCloudManagementSvc\Parameters" -force
Then on one of the cluster nodes, restart the Cloud Management cluster group:
Stop-ClusterGroup "Cloud Management"
Start-ClusterGroup "Cloud Management"
This won't fully remediate the issue, as the progress details may still not be displayed for a duration of the update process. To get the latest update details, you can Retrieve the update progress with PowerShell (see the example after this table).
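For the last workaround in the table, the following is a minimal sketch of retrieving the update state directly from a cluster node. It assumes the Get-SolutionUpdate cmdlet referenced elsewhere in these release notes is available in the node's PowerShell session:

PowerShell

# List solution updates known to the cluster and their current state
Get-SolutionUpdate | Format-Table Version, State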

Next steps
Read the Deployment overview.
View known issues in Azure Stack HCI 2311 release
(preview)
Article • 02/06/2024

Applies to: Azure Stack HCI, version 23H2

This article identifies the critical known issues and their workarounds in Azure Stack HCI 2311 release.

The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added.
Before you deploy your Azure Stack HCI, carefully review the information contained in the release notes.

For more information about the new features in this release, see What's new in 23H2.

) Important

This feature is currently in PREVIEW. See the Supplemental Terms of Use for Microsoft Azure Previews for legal
terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.

Issues for version 2311


This software release maps to software version number 10.2311.0.26. This release supports new deployments and updates
from 2310.

Release notes for this version include the issues fixed in this release, the known issues in this release, and the known issues carried over from previous releases.

Fixed issues
Here are the issues fixed in this release:

ノ Expand table

Feature Issue

Networking Use of proxy isn't supported in this release.

Security When using the Get-AzsSyslogForwarder cmdlet with -PerNode parameter, an exception is thrown. You aren't able to
retrieve the SyslogForwarder configuration information of multiple nodes.

Deployment During the deployment, Microsoft Open Cloud (MOC) Arc Resource Bridge installation fails with this error: Unable to
find a resource that satisfies the requirement Size [0] Location [MocLocation].: OutOfCapacity"\n".

Deployment Entering an incorrect DNS updates the DNS configuration in hosts during the validation and the hosts can lose internet
connectivity.

Deployment Password for deployment user (also referred to as AzureStackLCMUserCredential during Active Directory prep) and local
administrator can't include a : (colon).

Arc VM Detaching a disk via the Azure CLI results in an error in this release.
management

Arc VM A resource group with multiple clusters only shows storage paths of one cluster.
management

Arc VM When you create the Azure Marketplace image on Azure Stack HCI, sometimes the download provisioning state
management doesn't match the download percentage on Azure Stack HCI cluster. The provisioning state is returned as succeeded
while the download percentage is reported as less than 100.

Arc VM In this release, depending on your environment, the VM deployments on Azure Stack HCI system can take 30 to 45
management minutes.

Arc VM While creating Arc VMs via the Azure CLI on Azure Stack HCI, if you provide the friendly name of marketplace image,
management an incorrect Azure Resource Manager ID is built and the VM creation errors out.

Known issues in this release


Here are the known issues in this release:

ノ Expand table

Feature Issue Workaround/Comments

Arc VM Deployment or update of Arc Resource Bridge could fail Retry the deployment/update. The retry should regenerate the SPN
management when the automatically generated SPN secret during this secret and the operation will likely succeed.
operation, starts with a hyphen.

Arc VM Arc Extensions on Arc VMs stay in "Creating" state Sign into the VM, open a command prompt, and type the following:
management indefinitely. Windows:
notepad
C:\ProgramData\AzureConnectedMachineAgent\Config\agentconfig.json
Linux:
sudo vi /var/opt/azcmagent/agentconfig.json
Next, find the resourcename property. Delete the GUID that is
appended to the end of the resource name, so this property matches
the name of the VM. Then restart the VM.

Arc VM When a new server is added to an Azure Stack HCI You can manually create a storage path for any new volumes. For
management cluster, storage path isn't created automatically for the more information, see Create a storage path.
newly created volume.

Arc VM Restart of Arc VM operation completes after There's no known workaround in this release.
management approximately 20 minutes although the VM itself restarts
in about a minute.

Arc VM In some instances, the status of the logical network If the status of this logical network was Succeeded at the time when
management shows as Failed in Azure portal. This occurs when you try this network was provisioned, then you can continue to create
to delete the logical network without first deleting any resources on this network.
resources such as network interfaces associated with that
logical network.
You should still be able to create resources on this logical
network. The status is misleading in this instance.

Arc VM In this release, when you update a VM with a data disk Use the Azure portal for all the VM update operations. For more
management attached to it using the Azure CLI, the operation fails with information, see Manage Arc VMs and Manage Arc VM resources.
the following error message:
Couldn't find a virtual hard disk with the name.

Deployment
Issue: Before you update 2310 to 2311 software, make sure to run the following cmdlets on one of your Azure Stack HCI nodes:
Import-module C:\CloudDeployment\CloudDeployment.psd1
Import-module C:\CloudDeployment\ECEngine\EnterpriseCloudEngine.psd1
$Parameters = Get-EceInterfaceParameters -RolePath 'MocArb' -InterfaceName 'DeployPreRequisites'
$cloudRole = $Parameters.Roles["Cloud"].PublicConfiguration
$domainRole = $Parameters.Roles["Domain"].PublicConfiguration
$securityInfo = $cloudRole.PublicInfo.SecurityInfo
$cloudSpUser = $securityInfo.AADUsers.User | Where-Object Role -EQ "DefaultARBApplication"
$cloudSpCred = $Parameters.GetCredential($cloudSpUser.Credential)
Set-ECEServiceSecret -ContainerName "DefaultARBApplication" -Credential $cloudSpCred
Workaround/Comments: This script helps migrate the service principal.

Deployment There's a sporadic heartbeat reliability issue in this This issue is intermittent. Try rerunning the deployment. For more
release due to which the registration encounters the information, see Rerun the deployment.
error: HCI registration failed. Error: Arc integration failed.

Deployment There's an intermittent issue in this release where the Arc This issue is intermittent. Try rerunning the deployment. For more
integration validation fails with this error: Validator failed. information, see Rerun the deployment.
Can't retrieve the dynamic parameters for the cmdlet.
PowerShell Gallery is currently unavailable. Please try
again later.

Update In rare instances, you may encounter this error while If you see this issue, contact Microsoft Support to assist you with the
updating your Azure Stack HCI: Type next steps.
'UpdateArbAndExtensions' of Role 'MocArb' raised an
exception: Exception Upgrading ARB and Extension in
step [UpgradeArbAndExtensions :Get-ArcHciConfig]
UpgradeArb: Invalid applianceyaml = [C:\AksHci\hci-
appliance.yaml].

Update When you try to change your There's no known workaround in this release.
AzureStackLCMUserPassword using command: Set-
AzureStackLCMUserPassword , you might encounter this
error:

Cannot find an object with identity: 'object id'.

Update When you update from the 2311 build to Azure Stack HCI Microsoft is actively working to resolve this issue, and there's no
23H2, the update health checks stop reporting in the action required on your part. Although the health checks aren't
Azure portal after the update reaches the Install step. visible in the portal, they're still running and completing as expected.

Add server and repair server
Issue: In this release, add server and repair server scenarios might fail with the following error: CloudEngine.Actions.InterfaceInvocationFailedException: Type 'AddNewNodeConfiguration' of Role 'BareMetal' raised an exception: The term 'Trace-Execution' isn't recognized as the name of a cmdlet, function, script file, or operable program.
Workaround: Follow these steps to work around this error:
1. Create a copy of the required PowerShell modules on the new node.
2. Connect to a node on your Azure Stack HCI system.
3. Run the following PowerShell cmdlet:
Copy-Item "C:\Program Files\WindowsPowerShell\Modules\CloudCommon" "\\newserver\c$\Program Files\WindowsPowerShell\Modules\CloudCommon" -Recurse
For more information, see Prerequisite for add and repair server scenarios.

Known issues from previous releases


Here are the known issues from previous releases:
ノ Expand table

Feature Issue Workaround

Networking There's an infrequent DNS client Restart the server. This operation registers the DNS record, which prevents it from
issue in this release that causes the getting deleted.
deployment to fail on a two-node
cluster with a DNS resolution error:
A WebException occurred while
sending a RestRequest.
WebException.Status:
NameResolutionFailure. As a result
of the bug, the DNS record of the
second node is deleted soon after
it's created resulting in a DNS
error.

Azure portal In some instances, the Azure portal You might need to wait for 30 minutes or more to see the updated view.
might take a while to update and
the view might not be current.

Arc VM Deleting a network interface on an Use the Azure CLI to first remove the network interface and then delete it. For more
management Arc VM from Azure portal doesn't information, see Remove the network interface and see Delete the network
work in this release. interface.

Arc VM When you create a disk or a Make sure to not use underscore in the names for disks or network interfaces.
management network interface in this release
with underscore in the name, the
operation fails.

Deployment Providing the OU name in an There's no known workaround in this release.


incorrect syntax isn't detected in
the Azure portal. The incorrect
syntax is however detected at a
later step during cluster validation.

Deployment On server hardware, a USB Make sure to disable the BMC network adapter before you begin cloud
network adapter is created to deployment.
access the Baseboard
Management Controller (BMC).
This adapter can cause the cluster
validation to fail during the
deployment.

Deployment A new storage account is created


for each run of the deployment.
Existing storage accounts aren't
supported in this release.

Deployment A new key vault is created for each


run of the deployment. Existing
key vaults aren't supported in this
release.

Deployment In some instances, running the Arc The workaround is to run the script again and make sure that all the mandatory
registration script doesn't install extensions are installed before you Deploy via Azure portal.
the mandatory extensions, Azure
Edge device Management or Azure
Edge Lifecycle Manager.

Deployment The first deployment step: Before


Cloud Deployment when
Deploying via Azure portal can
take from 45 minutes to an hour to
complete.

Deployment The network direct intent overrides defined on the template aren't working in this release. Use the ARM template to override this parameter and disable RDMA for the intents.

Deployment Deployments via Azure Resource To monitor the deployment in the Azure portal, go to the Azure Stack HCI cluster
Manager time out after 2 hours. resource and then go to new Deployments entry.
Deployments that exceed 2 hours
show up as failed in the resource
group though the cluster is
successfully created.

Deployment If you select Review + Create and There's no known workaround in this release.
you haven't filled out all the tabs,
the deployment begins and then
eventually fails.

Deployment This issue is seen if an incorrect Before you run the registration the second time:
subscription or resource group was
used during registration. When Make sure to delete the following folders from your server(s): C:\ecestore ,
you register the server a second C:\CloudDeployment , and C:\nugetstore .
time with Arc, the Azure Edge Delete the registry key using the PowerShell cmdlet:
Lifecycle Manager extension fails Remove-Item HKLM:\Software\Microsoft\LCMAzureStackStampInformation
during the registration, but the
extension state is reported as
Ready.

Azure Site Azure Site Recovery can't be There's no known workaround in this release.
Recovery installed on an Azure Stack HCI
cluster in this release.

Update When updating the Azure Stack To work around this issue, on each cluster node, add the following registry key (no
HCI cluster via the Azure Update value needed):
Manager, the update progress and
results might not be visible in the New-Item -Path
Azure portal. "HKLM:\SYSTEM\CurrentControlSet\Services\HciCloudManagementSvc\Parameters" -
force

Then on one of the cluster nodes, restart the Cloud Management cluster group.

Stop-ClusterGroup "Cloud Management"

Start-ClusterGroup "Cloud Management"

This won't fully remediate the issue as the progress details might still not be
displayed for a duration of the update process. To get the latest update details, you
can Retrieve the update progress with PowerShell.

Next steps
Read the Deployment overview.
Azure Stack HCI, version 23H2 release
information
Article • 02/28/2024

Applies to: Azure Stack HCI, version 23H2

Feature updates for Azure Stack HCI are released periodically to enhance customer
experience. To keep your Azure Stack HCI service in a supported state, you have up to
six months to install updates, but we recommend installing updates as they are released.

Azure Stack HCI also releases monthly quality and security updates. These releases are
cumulative, containing all previous updates to keep devices protected and productive.

This article presents the release information for Azure Stack HCI, version 23H2, including
the release build and OS build information.

Azure Stack HCI, version 23H2 release


information summary
The following table provides a summary of Azure Stack HCI, version 23H2 release
information.

All dates are listed in ISO 8601 format: YYYY-MM-DD

ノ Expand table

Release build | OS build | Baseline/Update (see note 1) | What's new | Known issues

10.2402.0.23 (Availability date: 2024-02-13) | 25398.709 | Baseline | Features and improvements, KB 5034769 | Known issues

10.2311.3.12 (Availability date: 2024-02-13) | 25398.709 | Update | Features and improvements, KB 5034769 | Known issues

10.2311.2.7 (Availability date: 2024-01-09) | 25398.643 | Update | Features and improvements, KB 5034130 | Known issues

10.2311.0.26 (Availability date: 2023-11-14) | 25398.531 | Baseline | Features and improvements, KB 5032202 | Known issues

10.2310.0.30 | 25398.469 | Baseline | Features and improvements | Known issues

1
A Baseline build is the initial version of the software that you must deploy before
upgrading to the next version. An Update build includes incremental updates from the
most recent Baseline build. To deploy an Update build, it's necessary to first deploy the
previous Baseline build.
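To check which of these release builds a system is currently running, one option is to query the update service on a cluster node. This is a minimal sketch and assumes the Get-SolutionUpdateEnvironment cmdlet exposed by the 23H2 update service is available in the node's PowerShell session:

PowerShell

# Show the solution version currently installed on the cluster and its update state
Get-SolutionUpdateEnvironment | Select-Object State, CurrentVersion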

Next steps
Release Notes for Azure Stack HCI, version 23H2
Compare Azure Stack HCI to Windows
Server
Article • 02/01/2024

Applies to: Azure Stack HCI, versions 23H2 and 22H2; Windows Server 2022

This article explains key differences between Azure Stack HCI and Windows Server and
provides guidance about when to use each. Both products are actively supported and
maintained by Microsoft. Many organizations choose to deploy both as they are
intended for different and complementary purposes.

When to use Azure Stack HCI


Azure Stack HCI is Microsoft's premier hyperconverged infrastructure platform for
running VMs or virtual desktops on-premises with connections to Azure hybrid services.
Azure Stack HCI can help to modernize and secure your datacenters and branch offices,
and achieve industry-best performance with low latency and data sovereignty.

Use Azure Stack HCI for:

The best virtualization host to modernize your infrastructure, either for existing
workloads in your core datacenter or emerging requirements for branch office and
edge locations.

Easy extensibility to the cloud, with a regular stream of innovations from your
Azure subscription and a consistent set of tools and experiences.

All the benefits of hyperconverged infrastructure: a simpler, more consolidated


datacenter architecture with high-speed storage and networking.
7 Note

When using Azure Stack HCI, run all of your workloads inside virtual machines
or containers, not directly on the cluster. Azure Stack HCI isn't licensed for
clients to connect directly to it using Client Access Licenses (CALs).

For information about licensing Windows Server VMs running on an Azure Stack HCI
cluster, see Activate Windows Server VMs.

When to use Windows Server


Windows Server is a highly versatile, multi-purpose operating system with dozens of
roles and hundreds of features and includes the right for clients to connect directly with
appropriate CALs. Windows Server machines can be in the cloud or on-premises,
including virtualized on top of Azure Stack HCI.

Use Windows Server for:

A guest operating system inside of virtual machines (VMs) or containers


As the runtime server for a Windows application
To use one or more of the built-in server roles such as Active Directory, file
services, DNS, DHCP, or Internet Information Services (IIS)
As a traditional server, such as a bare-metal domain controller or SQL Server
installation
For traditional infrastructure, such as VMs connected to Fibre Channel SAN storage

Compare product positioning


The following table shows the high-level product packaging for Azure Stack HCI and
Windows Server.
ノ Expand table

Attribute Azure Stack HCI Windows Server

Product Cloud service that includes an Operating system


type operating system and more

Legal Covered under your Microsoft Has its own end-user license agreement
customer agreement or online
subscription agreement

Licensing Billed to your Azure subscription Has its own paid license

Support Covered under Azure support Can be covered by different support


agreements, including Microsoft Premier
Support

Where to Download from the Azure portal Microsoft Volume Licensing Service Center or
get it or comes preinstalled on Evaluation Center
integrated systems

Runs in For evaluation only; intended as a Yes, in the cloud or on premises


VMs host operating system

Hardware Runs on any of more than 200 Runs on any hardware with the "Certified for
pre-validated solutions from the Windows Server" logo. See the
Azure Stack HCI Catalog Windows Server Catalog

Sizing Azure Stack HCI sizing tool None

Lifecycle Always up to date with the latest Use this option of the Windows Server
policy features. You have up to six servicing channels: Long-Term Servicing
months to install updates. Channel (LTSC)

Compare workloads and benefits


The following table compares the workloads and benefits of Azure Stack HCI and
Windows Server.

ノ Expand table

Attribute Azure Stack Windows


HCI Server

Azure Kubernetes Service (AKS) Yes Yes

Azure Arc-Enabled PaaS Services Yes Yes

Windows Server 2022 Azure Edition Yes No



Windows Server subscription add-on (Dec. 2021) Yes No

Free Extended Security Updates (ESUs) for Windows Server and Yes No 1
SQL 2008/R2 and 2012/R2

1
Requires purchasing an Extended Security Updates (ESU) license key and manually
applying it to every VM.

Compare technical features


The following table compares the technical features of Azure Stack HCI and Windows
Server 2022.

ノ Expand table

Attribute Azure Stack HCI Windows Server


2022

Hyper-V Yes Yes

Storage Spaces Direct Yes Yes

Software-Defined Networking Yes Yes

Adjustable storage repair speed Yes Yes

Secured-core Server Yes Yes

Stronger, faster network encryption Yes Yes

4-5x faster Storage Spaces repairs Yes Yes

Stretch clustering for disaster recovery with Yes No


Storage Spaces Direct

High availability for GPU workload Yes No

Restart up to 10x faster with kernel-only restarts Yes No

Simplified host networking with Network ATC Yes No

Storage Spaces Direct on a single server Yes No

Storage Spaces Direct thin provisioning Yes No

Dynamic processor compatibility mode Yes No



Cluster-Aware OS feature update Yes No

Integrated driver and firmware updates Yes (Integrated Systems No


only)

For more information, see What's New in Azure Stack HCI, version 23H2 and Using
Azure Stack HCI on a single server.

Compare management options


The following table compares the management options for Azure Stack HCI and
Windows Server. Both products are designed for remote management and can be
managed with many of the same tools.

ノ Expand table

Attribute Azure Stack HCI Windows Server

Windows Admin Center Yes Yes

Microsoft System Center Yes (sold Yes (sold separately)


separately)

Third-party tools Yes Yes

Azure Backup and Azure Site Recovery support Yes Yes

Azure portal Yes (natively) Requires Azure Arc


agent

Azure portal > Extensions and Arc-enabled host Yes Manual 1

Azure portal > Windows Admin Center integration Yes Azure VMs only 1
(preview)

Azure portal > Multi-cluster monitoring for Azure Yes No


Stack HCI

Azure portal > Azure Resource Manager integration Yes No


for clusters

Azure portal > Arc VM management Yes No

Desktop experience No Yes


1
Requires manually installing the Arc Connected Machine agent on every machine.

Compare product pricing


The table below compares the product pricing for Azure Stack HCI and Windows Server.
For details, see Azure Stack HCI pricing .

ノ Expand table

Attribute Azure Stack HCI Windows Server

Price type Subscription service Varies: most often a one-time license

Price structure Per core, per month Varies: usually per core

Price Per core, per month See Pricing and licensing for Windows Server
2022

Evaluation/trial 60-day free trial once 180-day evaluation copy


period registered

Channels Enterprise agreement, cloud Enterprise agreement/volume licensing, OEM,


service provider, or direct services provider license agreement (SPLA)

Next steps
Compare Azure Stack HCI to Azure Stack Hub
Azure Stack HCI FAQ
FAQ

The Azure Stack HCI FAQ provides information on Azure Stack HCI connectivity with the
cloud, and how Azure Stack HCI relates to Windows Server and Azure Stack Hub.

How does Azure Stack HCI use the


cloud?
Azure Stack HCI is an on-premises hyperconverged infrastructure stack delivered as an
Azure hybrid service. You install the Azure Stack HCI software on physical servers that
you control on your premises. Then you connect to Azure for cloud-based monitoring,
support, billing, and optional management and security features. This FAQ section
clarifies how Azure Stack HCI uses the cloud by addressing frequently asked questions
about connectivity requirements and behavior.

Does my data stored on Azure Stack HCI


get sent to the cloud?
No. Customer data, including the names, metadata, configuration, and contents of your
on-premises virtual machines (VMs) is never sent to the cloud unless you turn on
additional services, like Azure Backup or Azure Site Recovery, or unless you enroll those
VMs individually into cloud management services like Azure Arc.

Because Azure Stack HCI doesn't store customer data in the cloud, business continuity
disaster recovery (BCDR) for the customer's on-premises data is defined and controlled
by the customer. To set up your own site-to-site replication using a stretched cluster, see
Stretched clusters overview.

To learn more about the diagnostic data we collect to keep Azure Stack HCI secure, up
to date, and working as expected, see Azure Stack HCI data collection and Data
residency in Azure .

Does the control plane for Azure Stack


HCI go through the cloud?
No. You can use edge-local tools, such as Windows Admin Center, PowerShell, or System
Center, to manage directly the host infrastructure and VMs even if your network
connection to the cloud is down or severely limited. Common everyday operations, such
as moving a VM between hosts, replacing a failed drive, or configuring IP addresses
don’t rely on the cloud. However, cloud connectivity is required to obtain over-the-air
software updates, change your Azure registration, or use features that directly rely on
cloud services for backup, monitoring, and more.

Are there bandwidth or latency


requirements between Azure Stack HCI
and the cloud?
No. Limited-bandwidth connections like rural T1 lines or satellite/cellular connections
are adequate for Azure Stack HCI to sync. The minimum required connectivity is just
several kilobytes per day. Additional services may require additional bandwidth,
especially to replicate or back up whole VMs, download large software updates, or
upload verbose logs for analysis and monitoring in the cloud.

Does Azure Stack HCI require


continuous connectivity to the cloud?
No. Azure Stack HCI is designed to handle periods of limited or zero connectivity.

What happens if my network


connection to the cloud temporarily
goes down?
While your connection is down, all host infrastructure and VMs continue to run
normally, and you can use edge-local tools for management. You would not be able to
use features that directly rely on cloud services. Information in the Azure portal also
would become out-of-date until Azure Stack HCI is able to sync again.

How long can Azure Stack HCI run with


the connection down?
At the minimum, Azure Stack HCI needs to sync successfully with Azure once per 30
consecutive days.

What happens if the 30-day limit is


exceeded?
If Azure Stack HCI hasn’t synced with Azure in more than 30 consecutive days, the
cluster’s connection status will show Out of policy in the Azure portal and other tools,
and the cluster will enter a reduced functionality mode. In this mode, the host
infrastructure stays up and all current VMs continue to run normally. However, new VMs
can’t be created until Azure Stack HCI is able to sync again. The internal technical reason
is that the cluster’s cloud-generated license has expired and needs to be renewed by
syncing with Azure.

What content does Azure Stack HCI


sync with the cloud?
This depends on which features you’re using. At the minimum, Azure Stack HCI syncs
basic cluster information to display in the Azure portal (like the list of clustered nodes,
hardware model, and software version); billing information that summarizes accrued
core-days since the last sync; and minimal required diagnostic information that helps
Microsoft keep your Azure Stack HCI secure, up-to-date, and working properly. The total
size is small – a few kilobytes. If you turn on additional services, they may upload more:
for example, Azure Log Analytics would upload logs and performance counters for
monitoring.

How often does Azure Stack HCI sync


with the cloud?
This depends on which features you’re using. At the minimum, Azure Stack HCI will try
to sync every 12 hours. If sync doesn’t succeed, the content is retained locally and sent
with the next successful sync. In addition to this regular timer, you can manually sync
any time, using either the Sync-AzureStackHCI PowerShell cmdlet or from Windows
Admin Center. If you turn on additional services, they may upload more frequently: for
example, Azure Log Analytics would upload every 5 minutes for monitoring.
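For example, to trigger an on-demand sync rather than wait for the 12-hour timer, a minimal sketch run in an elevated PowerShell session on one of the cluster nodes (using the built-in AzureStackHCI cmdlets mentioned above) looks like this:

PowerShell

# Check the current registration and connection status of the cluster
Get-AzureStackHCI

# Trigger an immediate sync with Azure instead of waiting for the next scheduled sync
Sync-AzureStackHCI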
Where does the synced information
actually go?
Azure Stack HCI syncs with Azure and stores data in a secure, Microsoft-operated
datacenter. To learn more about the diagnostic data we collect to keep Azure Stack HCI
secure, up to date, and working as expected, see Azure Stack HCI data collection and
Data residency in Azure .

Can I use Azure Stack HCI and never


connect to Azure?
No. Azure Stack HCI needs to sync successfully with Azure once per 30 consecutive days.

Can I transfer data offline between an


"air-gapped" Azure Stack HCI and
Azure?
No. There is currently no mechanism to register and sync between on-premises and
Azure without network connectivity. For example, you can't transport certificates or
billing data using removable storage. If there is sufficient customer demand, we're open
to exploring such a feature in the future. Let us know in the Azure Stack HCI feedback
forum .

How does Azure Stack HCI relate to


Windows Server?
Windows Server is the foundation of nearly every Azure product, and all the features
you value continue to release with support in Windows Server. The initial offering of
Azure Stack HCI was based on Windows Server 2019 and used the traditional Windows
Server licensing model. Today, Azure Stack HCI has its own operating system and
subscription-based licensing model. Azure Stack HCI is the recommended way to deploy
HCI on-premises, using Microsoft-validated hardware from our partners.
Which guest operating systems are
supported on Azure Stack HCI?
Azure Stack HCI supports several guest operating systems. For more information, see
Supported Windows guest operating systems for Hyper-V on Windows Server.

Can I upgrade from Windows Server


2019 to Azure Stack HCI?
There is no in-place upgrade from Windows Server to Azure Stack HCI at this time. Stay
tuned for specific migration guidance for customers running hyperconverged clusters
based on Windows Server 2019 and 2016.

What Azure services can I connect to


Azure Stack HCI?
For an updated list of Azure services that you can connect Azure Stack HCI to, see
Connecting Windows Server to Azure hybrid services.

What do the Azure Stack Hub and Azure


Stack HCI solutions have in common?
Azure Stack HCI features the same Hyper-V-based software-defined compute, storage,
and networking technologies as Azure Stack Hub. Both offerings meet rigorous testing
and validation criteria to ensure reliability and compatibility with the underlying
hardware platform.

How are the Azure Stack Hub and Azure


Stack HCI solutions different?
With Azure Stack Hub, you run cloud services on-premises. You can run Azure IaaS and
PaaS services on-premises to consistently build and run cloud apps anywhere, managed
with the Azure portal on-premises.

With Azure Stack HCI, you run virtualized workloads on-premises, managed with
Windows Admin Center and familiar Windows Server tools. You can also connect to
Azure for hybrid scenarios like cloud-based Site Recovery, monitoring, and others.

Can I upgrade from Azure Stack HCI to


Azure Stack Hub?
No, but customers can migrate their workloads from Azure Stack HCI to Azure Stack
Hub or Azure.

How do I identify an Azure Stack HCI


server?
Windows Admin Center lists the operating system in the All Connections list and various
other places, or you can use the following PowerShell command to query for the
operating system name and version.

PowerShell

Get-ComputerInfo -Property 'osName', 'osDisplayVersion'

Here’s some example output:

OsName OSDisplayVersion
------ ----------------
Microsoft Azure Stack HCI 20H2


Deploy a virtual Azure Stack HCI,
version 23H2 system
Article • 02/20/2024

Applies to: Azure Stack HCI, version 23H2

This article describes how to deploy a virtualized single server or a multi-node Azure
Stack HCI, version 23H2, on a host system running Hyper-V on the Windows Server
2022, Windows 11, or later operating system (OS).

You need administrator privileges for the Azure Stack HCI virtual deployment and should
be familiar with the existing Azure Stack HCI solution. The deployment can take around
2.5 hours to complete.

) Important

A virtual deployment of Azure Stack HCI, version 23H2 is intended for educational
and demonstration purposes only. Microsoft Support doesn't support virtual
deployments.

Prerequisites
Here are the hardware, networking, and other prerequisites for the virtual deployment:

Physical host requirements


Before you begin, make sure that:

You have access to a physical host system that is running Hyper-V on Windows
Server 2022, Windows 11, or later. This host is used to provision a virtual Azure
Stack HCI deployment.

The physical hardware used for the virtual deployment meets the following
requirements:

ノ Expand table
Component Minimum

Processor Intel VT-x or AMD-V, with support for nested virtualization. For more
information, see Does My Processor Support Intel® virtualization
technology? .

Memory The physical host must have a minimum of 32 GB RAM for single virtual
node deployments. The virtual host VM should have at least 24 GB RAM.

The physical host must have a minimum of 64 GB RAM for two virtual
node deployments. Each virtual host VM should have at least 24 GB
RAM.

Host network adapters A single network adapter.

Storage 1 TB Solid state drive (SSD).

Virtual host requirements


Before you begin, make sure that each virtual host system can dedicate the following
resources to provision your virtualized Azure Stack HCI system:

ノ Expand table

Component Requirement

Virtual machine (VM) type Secure Boot and Trusted Platform Module (TPM) enabled.

vCPUs Four cores.

Memory A minimum of 24 GB.

Networking At least two network adapters connected to internal network.


MAC spoofing must be enabled.

Boot disk One disk to install the Azure Stack HCI operating system from ISO.

Hard disks for Storage Spaces Direct Six dynamically expanding disks. Maximum disk size is 1024 GB.

Data disk At least 127 GB.

Time synchronization in integration Disabled.

Set up the virtual switch


When deploying Azure Stack HCI in a virtual environment, you can use your existing
networks and use IP addresses from that network if they're available. In such a case, you
just need to create an external switch and connect all the virtual network adapters to
that virtual switch. Virtual hosts will have connectivity to your physical network without
any extra configuration.

However, if your physical network where you're planning to deploy the Azure Stack HCI
virtual environment is scarce on IPs, you can create an internal virtual switch with NAT
enabled, to isolate the virtual hosts from your physical network while keeping outbound
connectivity to the internet.

The following lists the steps for the two options:

Deploy with external virtual switch


On your physical host computer, run the following PowerShell command to create an
external virtual switch:

PowerShell

New-VMSwitch -Name "external_switch_name" -SwitchType External -


NetAdapterName "network_adapter_name" -AllowManagementOS $true

Deploy with internal virtual switch and NAT enabled


On your physical host computer, run the following PowerShell command to create an
internal virtual switch. The use of this switch ensures that the Azure Stack HCI
deployment is isolated.

PowerShell

New-VMSwitch -Name "internal_switch_name" -SwitchType Internal -


NetAdapterName "network_adapter_name"

Once the internal virtual switch is created, a new network adapter is created on the host. You must assign an IP address to this network adapter; it becomes the default gateway for your virtual hosts once they're connected to this internal switch network. You also need to define the NAT network subnet where the virtual hosts are connected.

The following example script creates a NAT network named HCINAT with the prefix 192.168.44.0/24 and defines the 192.168.44.1 IP address as the default gateway for the network, using the new interface on the host:
PowerShell

#Check the interface index of the new network adapter on the host connected to InternalSwitch:
Get-NetAdapter -Name "vEthernet (InternalSwitch)"

#Create the NAT default gateway IP on top of the InternalSwitch network adapter:
New-NetIPAddress -IPAddress 192.168.44.1 -PrefixLength 24 -InterfaceAlias "vEthernet (InternalSwitch)"

#Create the NAT network:
New-NetNat -Name "HCINAT" -InternalIPInterfaceAddressPrefix 192.168.44.0/24
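After running these commands, you can optionally verify the NAT configuration from the host before moving on. This is a quick check using built-in cmdlets; the HCINAT and InternalSwitch names match the example above:

PowerShell

#Confirm the NAT network exists
Get-NetNat -Name "HCINAT"

#Confirm the gateway IP is bound to the internal switch adapter
Get-NetIPAddress -InterfaceAlias "vEthernet (InternalSwitch)" -AddressFamily IPv4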

Create the virtual host


Create a VM to serve as the virtual host with the following configuration. You can create
this VM using either Hyper-V Manager or PowerShell:

Hyper-V Manager. For more information, see Create a virtual machine using
Hyper-V Manager to mirror your physical management network.

PowerShell cmdlets. Make sure to adjust the VM configuration parameters


referenced in the Virtual host requirements before you run the PowerShell cmdlets.

Follow these steps to create an example VM named Node1 using PowerShell cmdlets:

1. Create the VM:

PowerShell

New-VHD -Path "your_VHDX_path" -SizeBytes 127GB
New-VM -Name Node1 -MemoryStartupBytes 24GB -VHDPath "your_VHDX_path" -Generation 2 -Path "VM_config_files_path"

2. Disable dynamic memory:

PowerShell

Set-VMMemory -VMName "Node1" -DynamicMemoryEnabled $false

3. Disable VM checkpoints:

PowerShell
Set-VM -VMName "Node1" -CheckpointType Disabled

4. Remove the default network adapter created during VM creation in the previous
step:

PowerShell

Get-VMNetworkAdapter -VMName "Node1" | Remove-VMNetworkAdapter

5. Add new network adapters to the VM using custom names. This example adds four
NICs, but you can add just two if needed. Having four NICs allows you to test two
network intents ( Mgmt_Compute and Storage for example) with two NICs each:

PowerShell

Add-VmNetworkAdapter -VmName "Node1" -Name "NIC1"


Add-VmNetworkAdapter -VmName "Node1" -Name "NIC2"
Add-VmNetworkAdapter -VmName "Node1" -Name "NIC3"
Add-VmNetworkAdapter -VmName "Node1" -Name "NIC4"

6. Attach all network adapters to the virtual switch. Specify the name of the virtual
switch you created, whether it was external without NAT, or internal with NAT:

PowerShell

Get-VmNetworkAdapter -VmName "Node1" | Connect-VmNetworkAdapter -SwitchName "virtual_switch_name"

7. Enable MAC spoofing on all network adapters on VM Node1 . MAC address


spoofing is a technique that allows a network adapter to masquerade as another
by changing its Media Access Control (MAC) address. This is required in scenarios
where you're planning to use nested virtualization:

PowerShell

Get-VmNetworkAdapter -VmName "Node1" | Set-VmNetworkAdapter -MacAddressSpoofing On

8. Enable trunk port (for multi-node deployments only) for all network adapters on
VM Node1 . This script configures the network adapter of a specific VM to operate
in trunk mode. This is typically used in multi-node deployments where you want to
allow multiple Virtual Local Area Networks (VLANs) to communicate through a
single network adapter:

PowerShell

Get-VmNetworkAdapter -VmName "Node1" | Set-VMNetworkAdapterVlan -Trunk -NativeVlanId 0 -AllowedVlanIdList 0-1000

9. Create a new key protector and assign it to Node1 . This is typically done in the
context of setting up a guarded fabric in Hyper-V, a security feature that protects
VMs from unauthorized access or tampering.

After the following script is executed, Node1 will have a new key protector assigned
to it. This key protector protects the VM's keys, helping to secure the VM against
unauthorized access or tampering:

PowerShell

$owner = Get-HgsGuardian UntrustedGuardian


$kp = New-HgsKeyProtector -Owner $owner -AllowUntrustedRoot
Set-VMKeyProtector -VMName "Node1" -KeyProtector $kp.RawData

10. Enable the vTPM for Node1 . By enabling vTPM on a VM, you can use BitLocker and
other features that require TPM on the VM. After this command is executed, Node1
will have a vTPM enabled, assuming the host machine's hardware and the VM's
configuration support this feature.

PowerShell

Enable-VmTpm -VMName "Node1"

11. Change virtual processors to 8 :

PowerShell

Set-VmProcessor -VMName "Node1" -Count 8

12. Create extra drives to be used as the boot disk and hard disks for Storage Spaces
Direct. After these commands are executed, six new VHDXs will be created in the
C:\vms\Node1 directory as shown in this example:

PowerShell
new-VHD -Path "C:\vms\Node1\s2d1.vhdx" -SizeBytes 1024GB
new-VHD -Path "C:\vms\Node1\s2d2.vhdx" -SizeBytes 1024GB
new-VHD -Path "C:\vms\Node1\s2d3.vhdx" -SizeBytes 1024GB
new-VHD -Path "C:\vms\Node1\s2d4.vhdx" -SizeBytes 1024GB
new-VHD -Path "C:\vms\Node1\s2d5.vhdx" -SizeBytes 1024GB
new-VHD -Path "C:\vms\Node1\s2d6.vhdx" -SizeBytes 1024GB

13. Attach drives to the newly created VHDXs for the VM. In these commands, six
VHDs located in the C:\vms\Node1 directory and named s2d1.vhdx through
s2d6.vhdx are added to Node1 . Each Add-VMHardDiskDrive command adds one

VHD to the VM, so the command is repeated six times with different -Path
parameter values.

Afterwards, the Node1 VM has six VHDs attached to it. These VHDXs are used to
enable Storage Spaces Direct on the VM, which are required for Azure Stack HCI
deployments:

PowerShell

Add-VMHardDiskDrive -VMName "Node1" -Path "C:\vms\Node1\s2d1.vhdx"


Add-VMHardDiskDrive -VMName "Node1" -Path "C:\vms\Node1\s2d2.vhdx"
Add-VMHardDiskDrive -VMName "Node1" -Path "C:\vms\Node1\s2d3.vhdx"
Add-VMHardDiskDrive -VMName "Node1" -Path "C:\vms\Node1\s2d4.vhdx"
Add-VMHardDiskDrive -VMName "Node1" -Path "C:\vms\Node1\s2d5.vhdx"
Add-VMHardDiskDrive -VMName "Node1" -Path "C:\vms\Node1\s2d6.vhdx"

14. Disable time synchronization:

PowerShell

Get-VMIntegrationService -VMName "Node1" | Where-Object {$_.name -like "T*"} | Disable-VMIntegrationService

15. Enable nested virtualization:

PowerShell

Set-VMProcessor -VMName "Node1" -ExposeVirtualizationExtensions $true

16. Start the VM:

PowerShell

Start-VM "Node1"
Install the OS on the virtual host VMs
Complete the following steps to install and configure the Azure Stack HCI OS on the
virtual host VMs:

1. Download Azure Stack HCI 23H2 ISO and Install the Azure Stack HCI operating
system.

2. Update the password since this is the first VM startup. Make sure the password meets the Azure complexity requirements: at least 12 characters, including 1 uppercase character, 1 lowercase character, 1 number, and 1 special character.

3. After the password is changed, the Server Configuration Tool (SConfig) is


automatically loaded. Select option 15 to exit to the command line and run the
next steps from there.

4. Launch SConfig by running the following command:

PowerShell

SConfig

For information on how to use SConfig, see Configure with the Server
Configuration tool (SConfig).

5. Change hostname to Node1 . Use option 2 for Computer name in SConfig to do this.

The hostname change results in a restart. When prompted for a restart, enter Yes
and wait for the restart to complete. SConfig is launched again automatically.
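If you prefer to script the rename instead of using the SConfig menu, a minimal sketch using the built-in Rename-Computer cmdlet (run inside the VM) is shown here; it triggers the same restart described above:

PowerShell

# Rename the guest OS to Node1 and restart to apply the change
Rename-Computer -NewName "Node1" -Restart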

6. From the physical host, run the Get-VMNetworkAdapter and ForEach-Object cmdlets
to configure the four network adapter names for VM Node1 by mapping the
assigned MAC addresses to the corresponding network adapters on the guest OS.
a. The Get-VMNetworkAdapter cmdlet is used to retrieve the network adapter object
for each NIC on the VM, where the -VMName parameter specifies the name of the
VM, and the -Name parameter specifies the name of the network adapter. The
MacAddress property of the network adapter object is then accessed to get the

MAC address:

PowerShell

Get-VMNetworkAdapter -VMName "Node1" -Name "NIC1"


b. The MAC address is a string of hexadecimal numbers. The ForEach-Object
cmdlet is used to format this string by inserting hyphens at specific intervals.
Specifically, the Insert method of the string object is used to insert a hyphen at
the 2nd, 5th, 8th, 11th, and 14th positions in the string. The join operator is
then used to concatenate the resulting array of strings into a single string with
spaces between each element.

c. The commands are repeated for each of the four NICs on the VM, and the final formatted MAC address for each NIC is stored in a separate variable: $Node1finalmacNIC1, $Node1finalmacNIC2, $Node1finalmacNIC3, and $Node1finalmacNIC4.

d. The following script outputs the final formatted MAC address for each NIC:

PowerShell

$Node1macNIC1 = Get-VMNetworkAdapter -VMName "Node1" -Name "NIC1"
$Node1macNIC1.MacAddress
$Node1finalmacNIC1 = $Node1macNIC1.MacAddress | ForEach-Object {($_.Insert(2,"-").Insert(5,"-").Insert(8,"-").Insert(11,"-").Insert(14,"-")) -join " "}
$Node1finalmacNIC1

$Node1macNIC2 = Get-VMNetworkAdapter -VMName "Node1" -Name "NIC2"
$Node1macNIC2.MacAddress
$Node1finalmacNIC2 = $Node1macNIC2.MacAddress | ForEach-Object {($_.Insert(2,"-").Insert(5,"-").Insert(8,"-").Insert(11,"-").Insert(14,"-")) -join " "}
$Node1finalmacNIC2

$Node1macNIC3 = Get-VMNetworkAdapter -VMName "Node1" -Name "NIC3"
$Node1macNIC3.MacAddress
$Node1finalmacNIC3 = $Node1macNIC3.MacAddress | ForEach-Object {($_.Insert(2,"-").Insert(5,"-").Insert(8,"-").Insert(11,"-").Insert(14,"-")) -join " "}
$Node1finalmacNIC3

$Node1macNIC4 = Get-VMNetworkAdapter -VMName "Node1" -Name "NIC4"
$Node1macNIC4.MacAddress
$Node1finalmacNIC4 = $Node1macNIC4.MacAddress | ForEach-Object {($_.Insert(2,"-").Insert(5,"-").Insert(8,"-").Insert(11,"-").Insert(14,"-")) -join " "}
$Node1finalmacNIC4

7. Obtain the Node1 VM local admin credentials and then rename Node1 :
PowerShell

$cred = get-credential

8. Rename and map the NICs on Node1 . The renaming is based on the MAC
addresses of the NICs assigned by Hyper-V when the VM is started the first time.
These commands should be run directly from the host:

Use the Get-NetAdapter command to retrieve the physical network adapters on the
VM, filter them based on their MAC address, and then rename them to the
matching adapter using the Rename-NetAdapter cmdlet.

This is repeated for each of the four NICs on the VM, with the MAC address and
new name of each NIC specified separately. This establishes a mapping between
the name of the NICs in Hyper-V Manager and the name of the NICs in the VM OS:

PowerShell

Invoke-Command -VMName "Node1" -Credential $cred -ScriptBlock {param($Node1finalmacNIC1) Get-NetAdapter -Physical | Where-Object {$_.MacAddress -eq $Node1finalmacNIC1} | Rename-NetAdapter -NewName "NIC1"} -ArgumentList $Node1finalmacNIC1

Invoke-Command -VMName "Node1" -Credential $cred -ScriptBlock {param($Node1finalmacNIC2) Get-NetAdapter -Physical | Where-Object {$_.MacAddress -eq $Node1finalmacNIC2} | Rename-NetAdapter -NewName "NIC2"} -ArgumentList $Node1finalmacNIC2

Invoke-Command -VMName "Node1" -Credential $cred -ScriptBlock {param($Node1finalmacNIC3) Get-NetAdapter -Physical | Where-Object {$_.MacAddress -eq $Node1finalmacNIC3} | Rename-NetAdapter -NewName "NIC3"} -ArgumentList $Node1finalmacNIC3

Invoke-Command -VMName "Node1" -Credential $cred -ScriptBlock {param($Node1finalmacNIC4) Get-NetAdapter -Physical | Where-Object {$_.MacAddress -eq $Node1finalmacNIC4} | Rename-NetAdapter -NewName "NIC4"} -ArgumentList $Node1finalmacNIC4

9. Disable the Dynamic Host Configuration Protocol (DHCP) on the four NICs for VM
Node1 by running the following commands.

Note

The interfaces won't automatically obtain IP addresses from a DHCP server
and instead need to have IP addresses manually assigned to them:
PowerShell

Invoke-Command -VMName "Node1" -Credential $cred -ScriptBlock {Set-


NetIPInterface -InterfaceAlias "NIC1" -Dhcp Disabled}

Invoke-Command -VMName "Node1" -Credential $cred -ScriptBlock {Set-


NetIPInterface -InterfaceAlias "NIC2" -Dhcp Disabled}

Invoke-Command -VMName "Node1" -Credential $cred -ScriptBlock {Set-


NetIPInterface -InterfaceAlias "NIC3" -Dhcp Disabled}

Invoke-Command -VMName "Node1" -Credential $cred -ScriptBlock {Set-


NetIPInterface -InterfaceAlias "NIC4" -Dhcp Disabled}

10. Set management IP, gateway, and DNS. After the following commands are
executed, Node1 will have the NIC1 network interface configured with the specified
IP address, subnet mask, default gateway, and DNS server address. Ensure that the
management IP address can resolve Active Directory and has outbound
connectivity to the internet:

PowerShell

Invoke-Command -VMName "Node1" -Credential $cred -ScriptBlock {New-


NetIPAddress -InterfaceAlias "NIC1" -IPAddress "192.168.44.201" -
PrefixLength 24 -AddressFamily IPv4 -DefaultGateway "192.168.44.1"}

Invoke-Command -VMName "Node1" -Credential $cred -ScriptBlock {Set-


DnsClientServerAddress -InterfaceAlias "NIC1" -ServerAddresses
"192.168.1.254"}

11. Enable the Hyper-V role. This command restarts the VM Node1 :

PowerShell

Invoke-Command -VMName "Node1"


-Credential $cred -ScriptBlock {Enable-WindowsOptionalFeature -Online -
FeatureName Microsoft-Hyper-V -All }

12. Once Node1 is restarted and the Hyper-V role is installed, install the Hyper-V
Management Tools:

PowerShell

Invoke-Command -VMName "Node1" -Credential $cred -ScriptBlock {Install-


WindowsFeature -Name Hyper-V -IncludeManagementTools}
13. Once the virtual host server is ready, you must register it and assign permissions in
Azure as an Arc resource.

14. Once the server is registered in Azure as an Arc resource and all the mandatory
extensions are installed, choose one of the following methods to deploy Azure
Stack HCI from Azure.

Deploy Azure Stack HCI using Azure portal.

Deploy Azure Stack HCI using an ARM template.

Repeat the process above for extra nodes if you plan to test multi-node deployments.
Ensure virtual host names and management IPs are unique and on the same subnet.

Next steps
Register to Arc and assign permissions for deployment.
System requirements for Azure Stack HCI, version 23H2
Article • 02/01/2024

Applies to: Azure Stack HCI, versions 23H2 and 22H2

This article discusses Azure, server and storage, networking, and other requirements for
Azure Stack HCI. If you purchase Azure Stack HCI Integrated System solution hardware
from the Azure Stack HCI Catalog , you can skip to the Networking requirements since
the hardware already adheres to server and storage requirements.

Azure requirements
Here are the Azure requirements for your Azure Stack HCI cluster:

Azure subscription: If you don't already have an Azure account, create one . You
can use an existing subscription of any type:
Free account with Azure credits for students or Visual Studio subscribers .
Pay-as-you-go subscription with credit card.
Subscription obtained through an Enterprise Agreement (EA).
Subscription obtained through the Cloud Solution Provider (CSP) program.

Azure permissions: Make sure that you're assigned the required roles and
permissions for registration and deployment. For information on how to assign
permissions, see Assign Azure permissions for registration.

Azure regions: Azure Stack HCI is supported for the following regions:
East US
West Europe

Server and storage requirements


Before you begin, make sure that the physical server and storage hardware used to
deploy an Azure Stack HCI cluster meets the following requirements:

Number of servers: 1 to 16 servers are supported. Each server must be of the same model and manufacturer, have the same network adapters, and have the same number and type of storage drives.

CPU: A 64-bit Intel Nehalem grade or AMD EPYC or later compatible processor with second-level address translation (SLAT).

Memory: A minimum of 32 GB RAM per node.

Host network adapters: At least two network adapters listed in the Windows Server Catalog, or dedicated network adapters per intent, which requires two separate adapters for the storage intent. For more information, see Windows Server Catalog.

BIOS: Intel VT or AMD-V must be turned on.

Boot drive: A minimum size of 200 GB.

Data drives: At least 2 disks with a minimum capacity of 500 GB (SSD or HDD).

Trusted Platform Module (TPM): TPM version 2.0 hardware must be present and turned on.

Secure boot: Secure Boot must be present and turned on.
The servers should also meet these extra requirements:

Each server should have dedicated volumes for logs, with log storage at least as
fast as data storage.

Have direct-attached drives that are physically attached to one server each. RAID
controller cards or SAN (Fibre Channel, iSCSI, FCoE) storage, shared SAS enclosures
connected to multiple servers, or any form of multi-path IO (MPIO) where drives
are accessible by multiple paths, aren't supported.

7 Note

Host-bus adapter (HBA) cards must implement simple pass-through mode for
any storage devices used for Storage Spaces Direct.

For more feature-specific requirements for Hyper-V, see System requirements for Hyper-
V on Windows Server.
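
As a quick sanity check before deployment, you can verify several of these requirements from an elevated PowerShell session on each server. This is a minimal sketch rather than official validation tooling, and it assumes local, Storage Spaces Direct-eligible drives:

PowerShell

# Confirm TPM 2.0 hardware is present and ready
Get-Tpm

# Confirm Secure Boot is turned on (returns True when enabled)
Confirm-SecureBootUEFI

# Review local drives, their media types (SSD or HDD), and sizes
Get-PhysicalDisk | Sort-Object MediaType | Format-Table FriendlyName, MediaType, CanPool, @{n='SizeGB';e={[math]::Round($_.Size/1GB)}}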

Networking requirements
An Azure Stack HCI cluster requires a reliable high-bandwidth, low-latency network
connection between each server node.

Verify that physical switches in your network are configured to allow traffic on any
VLANs you use. For more information, see Physical network requirements for Azure
Stack HCI.

Hardware requirements
In addition to Microsoft Azure Stack HCI updates, many OEMs also release regular
updates for your Azure Stack HCI hardware, such as driver and firmware updates. To
ensure that OEM package update notifications reach your organization, check with your
OEM about their specific notification process.

Before deploying Azure Stack HCI, version 23H2, ensure that your hardware is up to date
by:

Determining the current version of your Solution Builder Extension (SBE) package.
Finding the best method to download, install, and update your SBE package.

OEM information
This section contains OEM contact information and links to OEM Azure Stack HCI,
version 23H2 reference material.

For each HCI solution provider, the following lists the solution platform and where to find guidance for configuring BIOS settings, updating firmware, updating drivers, and updating the cluster after it's running:

Bluechip: SERVERline R42203a Certified for ASHCI. BIOS settings, firmware updates, driver updates, and cluster updates are handled through bluechip Service & Support.

DataON: AZS-XXXX platforms. See DataON's AZS-XXXX BIOS, driver, and update links for BIOS settings, firmware and driver updates, and cluster updates.

primeLine: All models. Contact primeLine service for BIOS settings, firmware updates, and driver updates.

Supermicro: BigTwin 2U 2-Node. See Supermicro's Configure BIOS settings, firmware update process, and driver update process documentation.

Thomas-krenn: All models. See Thomas-krenn's Configure BIOS settings, firmware update process, and driver update process documentation.

For a comprehensive list of all OEM contact information, download the Azure Stack HCI
OEM Contact spreadsheet.

BIOS setting
Check with your OEM regarding the necessary generic BIOS settings for Azure Stack HCI,
version 23H2. These settings may include hardware virtualization, TPM enabled, and
secure core.

Driver
Check with your OEM regarding the necessary drivers that need to be installed for Azure
Stack HCI, version 23H2. Additionally, your OEM can provide you with their preferred
installation steps.

Driver installation steps


You should always follow the OEM's recommended installation steps. If the OEM's
guidance isn't available, see the following steps:

1. Identify the Ethernet adapters by using this command:

PowerShell

Get-NetAdapter

Here's a sample output:

Console
PS C:\Windows\system32> Get-NetAdapter

Name                     InterfaceDescription                    ifIndex Status MacAddress        LinkSpeed
vSMB(compute_managemen…  Hyper-V Virtual Ethernet Adapter #2     20      Up     00-15-5D-20-40-00 25 Gbps
vSMB(compute_managemen…  Hyper-V Virtual Ethernet Adapter #3     24      Up     00-15-5D-20-40-01 25 Gbps
ethernet                 HPE Ethernet 10/25Gb 2-port 640FLR…#2   7       Up     B8-83-03-58-91-88 25 Gbps
ethernet 2               HPE Ethernet 10/25Gb 2-port 640FLR-S…   5       Up     B8-83-03-58-91-89 25 Gbps
vManagement(compute_ma…  Hyper-V Virtual Ethernet Adapter        14      Up     B8-83-03-58-91-88 25 Gbps

2. Identify the DriverFileName, DriverVersion, DriverDate, DriverDescription, and the
DriverProvider using this command:

PowerShell

Get-NetAdapter -name ethernet | select *driver*

Here's a sample output:

Console

PS C:\Windows\system32> Get-NetAdapter -name ethernet | select *driver*

DriverInformation : Driver Date 2021-07-08 Version 2.70.24728.0 NDIS 6.85
DriverFileName : mlx5.sys
DriverVersion : 2.70.24728.0
DriverDate : 2021-07-08
DriverDateData : 132701760000000000
DriverDescription : HPE Ethernet 10/25Gb 2-port 640FLR-SFP28 Adapter
DriverMajorNdisVersion : 6
DriverMinorNdisVersion : 85
DriverName : \SystemRoot\System32\drivers\mlx5.sys
DriverProvider : Mellanox Technologies Ltd.
DriverVersionString : 2.70.24728.0
MajorDriverVersion : 2
MinorDriverVersion : 0

3. Search for your driver and the recommended installation steps.

4. Download your driver.


5. Install the driver identified in Step #2 by DriverFileName on all servers of the
cluster. For more information, see PnPUtil Examples - Windows Drivers.

Here's an example:

PowerShell

pnputil /add-driver mlx5.inf /install

6. Check to be sure the drivers are updated by reviewing DriverVersion and DriverDate.

PowerShell

Get-NetAdapter -name ethernet | select *driver*

Here are some sample outputs:

Console

PS C:\Windows\system32> Get-NetAdapter -name ethernet | select *driver*

DriverInformation : Driver Date 2023-05-03 Version 23.4.26054.0 NDIS 6.85
DriverFileName : mlx5.sys
DriverVersion : 23.4.26054.0
DriverDate : 2023-05-03
DriverDateData : 133275456000000000
DriverDescription : HPE Ethernet 10/25Gb 2-port 640FLR-SFP28 Adapter
DriverMajorNdisVersion : 6
DriverMinorNdisVersion : 85
DriverName : \SystemRoot\System32\drivers\mlx5.sys
DriverProvider : Mellanox Technologies Ltd.
DriverVersionString : 23.4.26054.0
MajorDriverVersion : 2
MinorDriverVersion : 0

Console

PS C:\Windows\system32> Get-NetAdapter "ethernet 2" | select *driver*

DriverInformation : Driver Date 2023-05-03 Version 23.4.26054.0 NDIS 6.85
DriverFileName : mlx5.sys
DriverVersion : 23.4.26054.0
DriverDate : 2023-05-03
DriverDateData : 133275456000000000
DriverDescription : HPE Ethernet 10/25Gb 2-port 640FLR-SFP28 Adapter
DriverMajorNdisVersion : 6
DriverMinorNdisVersion : 85
DriverName : \SystemRoot\System32\drivers\mlx5.sys
DriverProvider : Mellanox Technologies Ltd.
DriverVersionString : 23.4.26054.0
MajorDriverVersion : 2
MinorDriverVersion : 0

Firmware
Check with your OEM regarding the necessary firmware that needs to be installed for
Azure Stack HCI, version 23H2. Additionally, your OEM can provide you with their
preferred installation steps.

Drivers and firmware via the Windows Admin Center extension
You should always follow the OEM's recommended installation steps. With Azure Stack
HCI, version 23H2, Windows Admin Center plugins can be used to install drivers and
firmware. For a comprehensive list of all OEM contact information, download the Azure
Stack HCI OEM Contact spreadsheet.

Next steps
Review firewall, physical network, and host network requirements:

Firewall requirements.
Physical network requirements.
Host network requirements.
Physical network requirements for Azure Stack HCI
Article • 02/22/2024

Applies to: Azure Stack HCI, versions 23H2 and 22H2

This article discusses physical (fabric) network considerations and requirements for
Azure Stack HCI, particularly for network switches.

7 Note

Requirements for future Azure Stack HCI versions may change.

Network switches for Azure Stack HCI


Microsoft tests Azure Stack HCI to the standards and protocols identified in the
Network switch requirements section below. While Microsoft doesn't certify network
switches, we do work with vendors to identify devices that support Azure Stack HCI
requirements.

) Important

While other network switches using technologies and protocols not listed here may
work, Microsoft cannot guarantee they will work with Azure Stack HCI and may be
unable to assist in troubleshooting issues that occur.

When purchasing network switches, contact your switch vendor and ensure that the
devices meet the Azure Stack HCI requirements for your specified role types. The
following vendors (in alphabetical order) have confirmed that their switches support
Azure Stack HCI requirements:

Refer to each vendor's documentation for their validated switches for the Azure Stack HCI
traffic types. The network traffic classifications are described later in this article.

) Important
We update these lists as we're informed of changes by network switch vendors.

If your switch isn't included, contact your switch vendor to ensure that your switch
model and the version of the switch's operating system supports the requirements
in the next section.

Network switch requirements


This section lists industry standards that are mandatory for the specific roles of network
switches used in Azure Stack HCI deployments. These standards help ensure reliable
communications between nodes in Azure Stack HCI cluster deployments.

7 Note

Network adapters used for compute, storage, and management traffic require
Ethernet. For more information, see Host network requirements.

Here are the mandatory IEEE standards and specifications:

23H2 role requirements

Requirement          Management   Storage   Compute (Standard)   Compute (SDN)

Virtual LANS ✓ ✓ ✓ ✓

Priority Flow Control ✓

Enhanced Transmission ✓
Selection

LLDP Port VLAN ID ✓

LLDP VLAN Name ✓ ✓ ✓

LLDP Link Aggregation ✓ ✓ ✓ ✓

LLDP ETS Configuration ✓



LLDP ETS Recommendation ✓

LLDP PFC Configuration ✓

LLDP Maximum Frame Size ✓ ✓ ✓ ✓

Maximum Transmission ✓
Unit

Border Gateway Protocol ✓

DHCP Relay Agent ✓

7 Note

Guest RDMA requires both Compute (Standard) and Storage.

Standard: IEEE 802.1Q


Ethernet switches must comply with the IEEE 802.1Q specification that defines
VLANs. VLANs are required for several aspects of Azure Stack HCI and are required
in all scenarios.

Standard: IEEE 802.1Qbb


Ethernet switches used for Azure Stack HCI storage traffic must comply with the
IEEE 802.1Qbb specification that defines Priority Flow Control (PFC). PFC is required
where Data Center Bridging (DCB) is used. Since DCB can be used in both RoCE and
iWARP RDMA scenarios, 802.1Qbb is required in all scenarios. A minimum of three
Class of Service (CoS) priorities are required without downgrading the switch
capabilities or port speeds. At least one of these traffic classes must provide lossless
communication.

Standard: IEEE 802.1Qaz


Ethernet switches used for Azure Stack HCI storage traffic must comply with the
IEEE 802.1Qaz specification that defines Enhanced Transmission Selection (ETS). ETS is
required where DCB is used. Since DCB can be used in both RoCE and iWARP RDMA
scenarios, 802.1Qaz is required in all scenarios.
A minimum of three CoS priorities are required without downgrading the switch
capabilities or port speed. Additionally, if your device allows ingress QoS rates to be
defined, we recommend that you do not configure ingress rates or configure them
to the exact same value as the egress (ETS) rates.

7 Note

Hyper-converged infrastructure has a high reliance on East-West Layer-2


communication within the same rack and therefore requires ETS. Microsoft
doesn't test Azure Stack HCI with Differentiated Services Code Point (DSCP).

Standard: IEEE 802.1AB


Ethernet switches must comply with the IEEE 802.1AB specification that defines the
Link Layer Discovery Protocol (LLDP). LLDP is required for Azure Stack HCI and
enables troubleshooting of physical networking configurations.

Configuration of the LLDP Type-Length-Values (TLVs) must be dynamically enabled.


Switches must not require additional configuration beyond enablement of a specific
TLV. For example, enabling 802.1 Subtype 3 should automatically advertise all
VLANs available on switch ports.

Custom TLV requirements


LLDP allows organizations to define and encode their own custom TLVs. These are
called Organizationally Specific TLVs. All Organizationally Specific TLVs start with an
LLDP TLV Type value of 127. The following Organizationally Specific Custom TLV (TLV Type 127) subtypes are required:

IEEE 802.1: Port VLAN ID (Subtype = 1)
IEEE 802.1: VLAN Name (Subtype = 3), with a minimum of 10 VLANs
IEEE 802.1: Link Aggregation (Subtype = 7)
IEEE 802.1: ETS Configuration (Subtype = 9)
IEEE 802.1: ETS Recommendation (Subtype = A)
IEEE 802.1: PFC Configuration (Subtype = B)
IEEE 802.3: Maximum Frame Size (Subtype = 4)

Maximum Transmission Unit


The maximum transmission unit (MTU) is the largest size frame or packet that can
be transmitted across a data link. A range of 1514 - 9174 is required for SDN
encapsulation.
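
Switch MTU is configured according to your switch vendor's documentation. On the host side, you can check and adjust the adapters' jumbo packet setting. This is a minimal sketch that assumes adapter names NIC1 and NIC2; the available values vary by NIC vendor:

PowerShell

# Check the current jumbo packet setting on the physical adapters
Get-NetAdapterAdvancedProperty -Name "NIC1","NIC2" -RegistryKeyword "*JumboPacket"

# Example: allow 9014-byte frames (supported values differ by vendor)
Set-NetAdapterAdvancedProperty -Name "NIC1","NIC2" -RegistryKeyword "*JumboPacket" -RegistryValue 9014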

Border Gateway Protocol


Ethernet switches used for Azure Stack HCI SDN compute traffic must support
Border Gateway Protocol (BGP). BGP is a standard routing protocol used to
exchange routing and reachability information between two or more networks.
Routes are automatically added to the route table of all subnets with BGP
propagation enabled. This is required to enable tenant workloads with SDN and
dynamic peering. RFC 4271: Border Gateway Protocol 4

DHCP Relay Agent


Ethernet switches used for Azure Stack HCI management traffic must support DHCP
relay agent. The DHCP relay agent is any TCP/IP host which is used to forward
requests and replies between the DHCP server and client when the server is present
on a different network. It is required for PXE boot services. RFC 3046: DHCPv4 or
RFC 6148: DHCPv4

Network traffic and architecture


This section is predominantly for network administrators.

Azure Stack HCI can function in various data center architectures including 2-tier (Spine-
Leaf) and 3-tier (Core-Aggregation-Access). This section refers more to concepts from
the Spine-Leaf topology that is commonly used with workloads in hyper-converged
infrastructure such as Azure Stack HCI.

Network models
Network traffic can be classified by its direction. Traditional Storage Area Network (SAN)
environments are heavily North-South where traffic flows from a compute tier to a
storage tier across a Layer-3 (IP) boundary. Hyperconverged infrastructure is more
heavily East-West where a substantial portion of traffic stays within a Layer-2 (VLAN)
boundary.

) Important

We highly recommend that all cluster nodes in a site are physically located in the
same rack and connected to the same top-of-rack (ToR) switches.

North-South traffic for Azure Stack HCI


North-South traffic has the following characteristics:

Traffic flows out of a ToR switch to the spine or in from the spine to a ToR switch.
Traffic leaves the physical rack or crosses a Layer-3 boundary (IP).
Includes management (PowerShell, Windows Admin Center), compute (VM), and
inter-site stretched cluster traffic.
Uses an Ethernet switch for connectivity to the physical network.

East-West traffic for Azure Stack HCI


East-West traffic has the following characteristics:

Traffic remains within the ToR switches and Layer-2 boundary (VLAN).
Includes storage traffic or Live Migration traffic between nodes in the same cluster
and (if using a stretched cluster) within the same site.
May use an Ethernet switch (switched) or a direct (switchless) connection, as
described in the next two sections.

Using switches
North-South traffic requires the use of switches. Besides using an Ethernet switch that
supports the required protocols for Azure Stack HCI, the most important aspect is the
proper sizing of the network fabric.

It is imperative to understand the "non-blocking" fabric bandwidth that your Ethernet


switches can support and that you minimize (or preferably eliminate) oversubscription of
the network.
Common congestion points and oversubscription, such as the Multi-Chassis Link
Aggregation Group used for path redundancy, can be eliminated through proper use
of subnets and VLANs. Also see Host network requirements.

Work with your network vendor or network support team to ensure your network
switches have been properly sized for the workload you are intending to run.

Using switchless
Azure Stack HCI supports switchless (direct) connections for East-West traffic for all
cluster sizes so long as each node in the cluster has a redundant connection to every
node in the cluster. This is called a "full-mesh" connection.

The following example shows the interface pairs with their subnets and VLANs:

Mgmt host vNIC: customer-specific subnet and VLAN
SMB01: subnet 192.168.71.x/24, VLAN 711
SMB02: subnet 192.168.72.x/24, VLAN 712
SMB03: subnet 192.168.73.x/24, VLAN 713

Note

The benefits of switchless deployments diminish with clusters larger than three
nodes due to the number of network adapters required.

Advantages of switchless connections


No switch purchase is necessary for East-West traffic. A switch is required for
North-South traffic. This may result in lower capital expenditure (CAPEX) costs but
is dependent on the number of nodes in the cluster.
Because there is no switch, configuration is limited to the host, which may reduce
the potential number of configuration steps needed. This value diminishes as the
cluster size increases.

Disadvantages of switchless connections


More planning is required for IP and subnet addressing schemes.
Provides only local storage access. Management traffic, VM traffic, and other traffic
requiring North-South access cannot use these adapters.
As the number of nodes in the cluster grows, the cost of network adapters could
exceed the cost of using network switches.
Doesn't scale well beyond three-node clusters. More nodes incur additional
cabling and configuration that can surpass the complexity of using a switch.
Cluster expansion is complex, requiring hardware and software configuration
changes.

Next steps
Learn about network adapter and host requirements. See Host network
requirements.
Brush up on failover clustering basics. See Failover Clustering Networking Basics .
Brush up on using SET. See Remote Direct Memory Access (RDMA) and Switch
Embedded Teaming (SET).
For deployment, see Create a cluster using Windows Admin Center.
For deployment, see Create a cluster using Windows PowerShell.
Host network requirements for Azure Stack HCI
Article • 03/11/2024

Applies to: Azure Stack HCI, versions 23H2 and 22H2

This topic discusses host networking considerations and requirements for Azure Stack HCI. For
information on datacenter architectures and the physical connections between servers, see
Physical network requirements.

For information on how to simplify host networking using Network ATC, see Simplify host
networking with Network ATC.
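
As a preview of what that looks like, here's a minimal Network ATC sketch for a fully converged configuration. The adapter names and the intent name are assumptions for illustration:

PowerShell

# Create a single converged intent that carries management, compute, and storage traffic
Add-NetIntent -Name ConvergedIntent -Management -Compute -Storage -AdapterName "NIC1","NIC2"

# Check the provisioning status of the intent
Get-NetIntentStatus -Name ConvergedIntent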

Network traffic types


Azure Stack HCI network traffic can be classified by its intended purpose:

Management traffic: Traffic to or from outside the local cluster. For example, storage replica
traffic or traffic used by the administrator for management of the cluster like Remote
Desktop, Windows Admin Center, Active Directory, etc.
Compute traffic: Traffic originating from or destined to a virtual machine (VM).
Storage traffic: Traffic using Server Message Block (SMB), for example Storage Spaces Direct
or SMB-based live migration. This traffic is layer-2 traffic and is not routable.

) Important

Storage Replica uses non-RDMA based SMB traffic. This, and the directional (North-South)
nature of the traffic, makes it closely aligned to the "management" traffic listed above,
similar to a traditional file share.

Select a network adapter


Network adapters are qualified by the network traffic types (see above) they are supported for
use with. As you review the Windows Server Catalog , the Windows Server 2022 certification now
indicates one or more of the following roles. Before purchasing a server for Azure Stack HCI, you
must minimally have at least one adapter that is qualified for management, compute, and storage
as all three traffic types are required on Azure Stack HCI. You can then use Network ATC to
configure your adapters for the appropriate traffic types.

For more information about this role-based NIC qualification, please see this link .

) Important
Using an adapter outside of its qualified traffic type is not supported.

Level                   Management Role   Compute Role       Storage Role
Role-based distinction  Management        Compute Standard   Storage Standard
Maximum Award           Not Applicable    Compute Premium    Storage Premium

7 Note

The highest qualification for any adapter in our ecosystem will contain the Management,
Compute Premium, and Storage Premium qualifications.

Driver Requirements
Inbox drivers are not supported for use with Azure Stack HCI. To identify if your adapter is using an
inbox driver, run the following cmdlet. An adapter is using an inbox driver if the DriverProvider
property is Microsoft.

Powershell

Get-NetAdapter -Name <AdapterName> | Select *Driver*

Overview of key network adapter capabilities


Important network adapter capabilities used by Azure Stack HCI include:

Dynamic Virtual Machine Multi-Queue (Dynamic VMMQ or d.VMMQ)


Remote Direct Memory Access (RDMA)
Guest RDMA
Switch Embedded Teaming (SET)

Dynamic VMMQ
All network adapters with the Compute (Premium) qualification support Dynamic VMMQ. Dynamic
VMMQ requires the use of Switch Embedded Teaming.
Applicable traffic types: compute

Certifications required: Compute (Premium)

Dynamic VMMQ is an intelligent, receive-side technology. It builds upon its predecessors of Virtual
Machine Queue (VMQ), Virtual Receive Side Scaling (vRSS), and VMMQ, to provide three primary
improvements:

Optimizes host efficiency by using fewer CPU cores.


Automatic tuning of network traffic processing to CPU cores, thus enabling VMs to meet and
maintain expected throughput.
Enables “bursty” workloads to receive the expected amount of traffic.

For more information on Dynamic VMMQ, see the blog post Synthetic accelerations .

RDMA
RDMA is a network stack offload to the network adapter. It allows SMB storage traffic to bypass
the operating system for processing.

RDMA enables high-throughput, low-latency networking, using minimal host CPU resources.
These host CPU resources can then be used to run additional VMs or containers.

Applicable traffic types: host storage

Certifications required: Storage (Standard)

All adapters with Storage (Standard) or Storage (Premium) qualification support host-side RDMA.
For more information on using RDMA with guest workloads, see the "Guest RDMA" section later in
this article.

Azure Stack HCI supports RDMA with either the Internet Wide Area RDMA Protocol (iWARP) or
RDMA over Converged Ethernet (RoCE) protocol implementations.

) Important

RDMA adapters only work with other RDMA adapters that implement the same RDMA
protocol (iWARP or RoCE).

Not all network adapters from vendors support RDMA. The following table lists those vendors (in
alphabetical order) that offer certified RDMA adapters. However, there are hardware vendors not
included in this list that also support RDMA. See the Windows Server Catalog to find adapters
with the Storage (Standard) or Storage (Premium) qualification which require RDMA support.

7 Note

InfiniBand (IB) is not supported with Azure Stack HCI.


NIC vendor         iWARP   RoCE
Broadcom           No      Yes
Intel              Yes     Yes (some models)
Marvell (Qlogic)   Yes     Yes
Nvidia             No      Yes

For more information on deploying RDMA for the host, we highly recommend you use Network
ATC. For information on manual deployment see the SDN GitHub repo .
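
Once the hosts are configured, you can confirm that RDMA is enabled and actually being used for SMB. This is a quick spot check rather than an exhaustive validation:

PowerShell

# Confirm RDMA is enabled on the relevant adapters
Get-NetAdapterRdma | Format-Table Name, Enabled

# Confirm SMB sees RDMA-capable interfaces
Get-SmbClientNetworkInterface | Format-Table FriendlyName, RdmaCapable

# Review active SMB Multichannel connections between nodes
Get-SmbMultichannelConnection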

iWARP
iWARP uses Transmission Control Protocol (TCP), and can be optionally enhanced with Priority-
based Flow Control (PFC) and Enhanced Transmission Service (ETS).

Use iWARP if:

You don't have experience managing RDMA networks.


You don't manage or are uncomfortable managing your top-of-rack (ToR) switches.
You won't be managing the solution after deployment.
You already have deployments that use iWARP.
You're unsure which option to choose.

RoCE
RoCE uses User Datagram Protocol (UDP), and requires PFC and ETS to provide reliability.

Use RoCE if:

You already have deployments with RoCE in your datacenter.


You're comfortable managing the DCB network requirements.

Guest RDMA
Guest RDMA enables SMB workloads for VMs to gain the same benefits of using RDMA on hosts.

Applicable traffic types: Guest-based storage

Certifications required: Compute (Premium)

The primary benefits of using Guest RDMA are:

CPU offload to the NIC for network traffic processing.


Extremely low latency.
High throughput.
For more information, download the document from the SDN GitHub repo .

Switch Embedded Teaming (SET)


SET is a software-based teaming technology that has been included in the Windows Server
operating system since Windows Server 2016. SET is the only teaming technology supported by
Azure Stack HCI. SET works well with compute, storage, and management traffic and is supported
with up to eight adapters in the same team.

Applicable traffic types: compute, storage, and management

Certifications required: Compute (Standard) or Compute (Premium)


) Important

Azure Stack HCI doesn’t support NIC teaming with the older Load Balancing/Failover (LBFO).
See the blog post Teaming in Azure Stack HCI for more information on LBFO in Azure
Stack HCI.

SET is important for Azure Stack HCI because it's the only teaming technology that enables:

Teaming of RDMA adapters (if needed).


Guest RDMA.
Dynamic VMMQ.
Other key Azure Stack HCI features (see Teaming in Azure Stack HCI ).

SET requires the use of symmetric (identical) adapters. Symmetric network adapters are those that
have the same:

make (vendor)
model (version)
speed (throughput)
configuration

In 22H2, Network ATC will automatically detect and inform you if the adapters you've chosen are
asymmetric. The easiest way to manually identify if adapters are symmetric is if the speeds and
interface descriptions are exact matches. They can deviate only in the numeral listed in the
description. Use the Get-NetAdapterAdvancedProperty cmdlet to ensure the configuration
reported lists the same property values.

See the following table for an example of the interface descriptions deviating only by numeral (#):

Name Interface description Link speed

NIC1 Network Adapter #1 25 Gbps

NIC2 Network Adapter #2 25 Gbps

NIC3 Network Adapter #3 25 Gbps

NIC4 Network Adapter #4 25 Gbps

7 Note

SET supports only switch-independent configuration by using either Dynamic or Hyper-V Port
load-balancing algorithms. For best performance, Hyper-V Port is recommended for use on
all NICs that operate at or above 10 Gbps. Network ATC makes all the required configurations
for SET.
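
Network ATC creates and configures the SET team for you. If you want to examine or reproduce that configuration manually in a lab, a minimal sketch (assuming two symmetric adapters named NIC1 and NIC2, and a switch name chosen here only for illustration) might look like this:

PowerShell

# Create a SET-enabled virtual switch across two symmetric adapters
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true -AllowManagementOS $true

# Use the Hyper-V Port load-balancing algorithm (recommended at 10 Gbps and above)
Set-VMSwitchTeam -Name "ConvergedSwitch" -LoadBalancingAlgorithm HyperVPort

# Verify the team members and the load-balancing algorithm
Get-VMSwitchTeam -Name "ConvergedSwitch"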

RDMA traffic considerations


If you implement DCB, you must ensure that the PFC and ETS configuration is implemented
properly across every network port, including network switches. DCB is required for RoCE and
optional for iWARP.

For detailed information on how to deploy RDMA, download the document from the SDN GitHub
repo .

RoCE-based Azure Stack HCI implementations require the configuration of three PFC traffic
classes, including the default traffic class, across the fabric and all hosts. A host-side sketch follows the traffic class descriptions below.

Cluster traffic class

This traffic class ensures that there's enough bandwidth reserved for cluster heartbeats:

Required: Yes
PFC-enabled: No
Recommended traffic priority: Priority 7
Recommended bandwidth reservation:
10 GbE or lower RDMA networks = 2 percent
25 GbE or higher RDMA networks = 1 percent

RDMA traffic class


This traffic class ensures that there's enough bandwidth reserved for lossless RDMA
communications by using SMB Direct:

Required: Yes
PFC-enabled: Yes
Recommended traffic priority: Priority 3 or 4
Recommended bandwidth reservation: 50 percent

Default traffic class

This traffic class carries all other traffic not defined in the cluster or RDMA traffic classes, including
VM traffic and management traffic:

Required: By default (no configuration necessary on the host)


Flow control (PFC)-enabled: No
Recommended traffic class: By default (Priority 0)
Recommended bandwidth reservation: By default (no host configuration required)

Storage traffic models


SMB provides many benefits as the storage protocol for Azure Stack HCI, including SMB
Multichannel. SMB Multichannel isn't covered in this article, but it's important to understand that
traffic is multiplexed across every possible link that SMB Multichannel can use.

7 Note

We recommend using multiple subnets and VLANs to separate storage traffic in Azure Stack
HCI.

Consider the following example of a four node cluster. Each server has two storage ports (left and
right side). Because each adapter is on the same subnet and VLAN, SMB Multichannel will spread
connections across all available links. Therefore, the left-side port on the first server (192.168.1.1)
will make a connection to the left-side port on the second server (192.168.1.2). The right-side port
on the first server (192.168.1.12) will connect to the right-side port on the second server. Similar
connections are established for the third and fourth servers.

However, this creates unnecessary connections and causes congestion at the interlink (multi-
chassis link aggregation group or MC-LAG) that connects the ToR switches (marked with Xs). See
the following diagram:

The recommended approach is to use separate subnets and VLANs for each set of adapters. In the
following diagram, the right-hand ports now use subnet 192.168.2.x /24 and VLAN2. This allows
traffic on the left-side ports to remain on TOR1 and the traffic on the right-side ports to remain on
TOR2.

Traffic bandwidth allocation


The following table shows example bandwidth allocations of various traffic types, using common
adapter speeds, in Azure Stack HCI. Note that this is an example of a converged solution, where all
traffic types (compute, storage, and management) run over the same physical adapters, and are
teamed by using SET.

Because this use case poses the most constraints, it represents a good baseline. However,
considering the permutations for the number of adapters and speeds, this should be considered
an example and not a support requirement.

The following assumptions are made for this example:

There are two adapters per team.

Storage Bus Layer (SBL), Cluster Shared Volume (CSV), and Hyper-V (Live Migration) traffic:
Use the same physical adapters.
Use SMB.

SMB is given a 50 percent bandwidth allocation by using DCB.


SBL/CSV is the highest priority traffic, and receives 70 percent of the SMB bandwidth
reservation.
Live Migration (LM) is limited by using the Set-SMBBandwidthLimit cmdlet, and receives 29
percent of the remaining bandwidth.

If the available bandwidth for Live Migration is >= 5 Gbps, and the network adapters
are capable, use RDMA. Use the following cmdlet to do so:

Powershell

Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

If the available bandwidth for Live Migration is < 5 Gbps, use compression to reduce
blackout times. Use the following cmdlet to do so:

Powershell

Set-VMHost -VirtualMachineMigrationPerformanceOption Compression

If you're using RDMA for Live Migration traffic, ensure that Live Migration traffic can't
consume the entire bandwidth allocated to the RDMA traffic class by using an SMB
bandwidth limit. Be careful, because this cmdlet takes entry in bytes per second (Bps),
whereas network adapters are listed in bits per second (bps). Use the following cmdlet to set
a bandwidth limit of 6 Gbps, for example:

Powershell

Set-SMBBandwidthLimit -Category LiveMigration -BytesPerSecond 750MB

7 Note

750 MBps in this example equates to 6 Gbps.

Here is the example bandwidth allocation:

10 Gbps NICs: 20 Gbps teamed; 10 Gbps SMB bandwidth reservation**; SBL/CSV 70% = 7 Gbps; Live Migration uses compression*; heartbeat bandwidth 200 Mbps.

25 Gbps NICs: 50 Gbps teamed; 25 Gbps SMB bandwidth reservation**; SBL/CSV 70% = 17.5 Gbps; Live Migration 29% = 7.25 Gbps; heartbeat 1% = 250 Mbps.

40 Gbps NICs: 80 Gbps teamed; 40 Gbps SMB bandwidth reservation**; SBL/CSV 70% = 28 Gbps; Live Migration 29% = 11.6 Gbps; heartbeat 1% = 400 Mbps.

50 Gbps NICs: 100 Gbps teamed; 50 Gbps SMB bandwidth reservation**; SBL/CSV 70% = 35 Gbps; Live Migration 29% = 14.5 Gbps; heartbeat 1% = 500 Mbps.

100 Gbps NICs: 200 Gbps teamed; 100 Gbps SMB bandwidth reservation**; SBL/CSV 70% = 70 Gbps; Live Migration 29% = 29 Gbps; heartbeat 1% = 1 Gbps.

200 Gbps NICs: 400 Gbps teamed; 200 Gbps SMB bandwidth reservation**; SBL/CSV 70% = 140 Gbps; Live Migration 29% = 58 Gbps; heartbeat 1% = 2 Gbps.

* Use compression rather than RDMA, because the bandwidth allocation for Live Migration traffic
is <5 Gbps.

** 50 percent is an example bandwidth reservation.

Stretched clusters
Stretched clusters provide disaster recovery that spans multiple datacenters. In its simplest form, a
stretched Azure Stack HCI cluster network looks like this:

Stretched cluster requirements


Stretched clusters have the following requirements and characteristics:

RDMA is limited to a single site, and isn't supported across different sites or subnets.
Servers in the same site must reside in the same rack and Layer-2 boundary.

Host communication between sites must cross a Layer-3 boundary; stretched Layer-2
topologies aren't supported.

Have enough bandwidth to run the workloads at the other site. In the event of a failover, the
alternate site will need to run all traffic. We recommend that you provision sites at 50 percent
of their available network capacity. This isn't a requirement, however, if you are able to
tolerate lower performance during a failover.

Replication between sites (north/south traffic) can use the same physical NICs as the local
storage (east/west traffic). If you're using the same physical adapters, these adapters must be
teamed with SET. The adapters must also have additional virtual NICs provisioned for
routable traffic between sites.

Adapters used for communication between sites:

Can be physical or virtual (host vNIC). If adapters are virtual, you must provision one vNIC
in its own subnet and VLAN per physical NIC.

Must be on their own subnet and VLAN that can route between sites.

RDMA must be disabled by using the Disable-NetAdapterRdma cmdlet. We recommend that you explicitly require Storage Replica to use specific interfaces by using the Set-SRNetworkConstraint cmdlet (see the sketch after this list).

Must meet any additional requirements for Storage Replica.
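
A minimal sketch of those two steps, assuming the cross-site vNICs are named Stretch1 and Stretch2 and a Storage Replica replication group named RG01 on each site (all names and interface indexes here are illustrative):

PowerShell

# Disable RDMA on the routable cross-site interfaces
Disable-NetAdapterRdma -Name "Stretch1","Stretch2"

# Constrain Storage Replica to the cross-site interfaces; the NWInterface values are the
# interface indexes of the Stretch adapters (find them with Get-NetAdapter)
Set-SRNetworkConstraint -SourceComputerName "NodeA1" -SourceRGName "RG01" -SourceNWInterface 5,6 -DestinationComputerName "NodeB1" -DestinationRGName "RG01" -DestinationNWInterface 5,6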

Stretched cluster example


The following example illustrates a stretched cluster configuration. To ensure that a specific virtual
NIC is mapped to a specific physical adapter, use the Set-VMNetworkAdapterTeamMapping cmdlet, as shown in the sketch below.
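
A minimal sketch of that mapping, using the vNIC and pNIC names from the tables that follow (the adapter names are examples):

PowerShell

# Pin each SMB host vNIC to its physical adapter; repeat on every node
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "vSMB01" -PhysicalNetAdapterName "pNIC01"
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "vSMB02" -PhysicalNetAdapterName "pNIC02"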

The following shows the details for the example stretched cluster configuration.

7 Note

Your exact configuration, including NIC names, IP addresses, and VLANs, might be different
than what is shown. This is used only as a reference configuration that can be adapted to your
environment.

SiteA – Local replication, RDMA enabled, non-routable between sites


Node name vNIC name Physical NIC (mapped) VLAN IP and subnet Traffic scope

NodeA1 vSMB01 pNIC01 711 192.168.1.1/24 Local Site Only

NodeA2 vSMB01 pNIC01 711 192.168.1.2/24 Local Site Only

NodeA1 vSMB02 pNIC02 712 192.168.2.1/24 Local Site Only

NodeA2 vSMB02 pNIC02 712 192.168.2.2/24 Local Site Only


SiteB – Local replication, RDMA enabled, non-routable between sites


Node name vNIC name Physical NIC (mapped) VLAN IP and subnet Traffic scope

NodeB1 vSMB01 pNIC01 711 192.168.1.1/24 Local Site Only

NodeB2 vSMB01 pNIC01 711 192.168.1.2/24 Local Site Only

NodeB1 vSMB02 pNIC02 712 192.168.2.1/24 Local Site Only

NodeB2 vSMB02 pNIC02 712 192.168.2.2/24 Local Site Only

SiteA – Stretched replication, RDMA disabled, routable between sites


Node name vNIC name Physical NIC (mapped) IP and subnet Traffic scope

NodeA1 Stretch1 pNIC01 173.0.0.1/8 Cross-Site Routable

NodeA2 Stretch1 pNIC01 173.0.0.2/8 Cross-Site Routable

NodeA1 Stretch2 pNIC02 174.0.0.1/8 Cross-Site Routable

NodeA2 Stretch2 pNIC02 174.0.0.2/8 Cross-Site Routable

SiteB – Stretched replication, RDMA disabled, routable between sites


Node name vNIC name Physical NIC (mapped) IP and subnet Traffic scope

NodeB1 Stretch1 pNIC01 175.0.0.1/8 Cross-Site Routable

NodeB2 Stretch1 pNIC01 175.0.0.2/8 Cross-Site Routable

NodeB1 Stretch2 pNIC02 176.0.0.1/8 Cross-Site Routable

NodeB2 Stretch2 pNIC02 176.0.0.2/8 Cross-Site Routable

Next steps
Learn about network switch and physical network requirements. See Physical network
requirements.
Learn how to simplify host networking using Network ATC. See Simplify host networking with
Network ATC.
Brush up on failover clustering networking basics .
For deployment, see Create a cluster using Windows Admin Center.
For deployment, see Create a cluster using Windows PowerShell.



Firewall requirements for Azure Stack HCI
Article • 03/13/2024

Applies to: Azure Stack HCI, versions 23H2 and 22H2

This article provides guidance on how to configure firewalls for the Azure Stack HCI
operating system. It includes firewall requirements for outbound endpoints and internal
rules and ports. The article also provides information on how to use Azure service tags
with Microsoft Defender firewall.

If your network uses a proxy server for internet access, see Configure proxy settings for
Azure Stack HCI.

Firewall requirements for outbound endpoints


Opening port 443 for outbound network traffic on your organization's firewall meets the
connectivity requirements for the operating system to connect with Azure and Microsoft
Update. If your outbound firewall is restricted, then we recommend including the URLs
and ports described in the Recommended firewall URLs section of this article.

Azure Stack HCI needs to periodically connect to Azure. Access is limited only to:

Well-known Azure IPs


Outbound direction
Port 443 (HTTPS)

) Important

Azure Stack HCI doesn’t support HTTPS inspection. Make sure that HTTPS
inspection is disabled along your networking path for Azure Stack HCI to prevent
any connectivity errors.

As shown in the following diagram, Azure Stack HCI may access Azure through more than
one firewall.

This article describes how to optionally use a highly locked-down firewall configuration
to block all traffic to all destinations except those included in your allowlist.

Required firewall URLs


The following list provides the required firewall URLs. Make sure to include these
URLs in your allowlist.

Also review the firewall requirements for AKS on Azure Stack HCI.

7 Note

The Azure Stack HCI firewall rules are the minimum endpoints required for HciSvc
connectivity, and don't contain wildcards. However, the following table currently
contains wildcard URLs, which may be updated into precise endpoints in the future.

Azure Stack HCI Updates download: fe3.delivery.mp.microsoft.com, port 443. For updating Azure Stack HCI, version 23H2.

Azure Stack HCI Updates download: tlu.dl.delivery.mp.microsoft.com, port 80. For updating Azure Stack HCI, version 23H2.

Azure Stack HCI: login.microsoftonline.com, port 443. For Active Directory Authority and used for authentication, token fetch, and validation.

Azure Stack HCI: graph.windows.net, port 443. For Graph and used for authentication, token fetch, and validation.

Azure Stack HCI: management.azure.com, port 443. For Resource Manager and used during initial bootstrapping of the cluster to Azure for registration purposes and to unregister the cluster.

Azure Stack HCI: dp.stackhci.azure.com, port 443. For the data plane that pushes up diagnostics data; used in the Azure portal pipeline and pushes billing data.

Azure Stack HCI: *.platform.edge.azure.com, port 443. For the data plane used in licensing and in pushing alerting and billing data. Required only for Azure Stack HCI, version 23H2.

Azure Stack HCI: azurestackhci.azurefd.net, port 443. Previous URL for the data plane. This URL was recently changed; customers who registered their cluster using this old URL must allowlist it as well.

Arc For Servers: aka.ms, port 443. For resolving the download script during installation.

Arc For Servers: download.microsoft.com, port 443. For downloading the Windows installation package.

Arc For Servers: login.windows.net, port 443. For Microsoft Entra ID.

Arc For Servers: login.microsoftonline.com, port 443. For Microsoft Entra ID.

Arc For Servers: pas.windows.net, port 443. For Microsoft Entra ID.

Arc For Servers: management.azure.com, port 443. For Azure Resource Manager to create or delete the Arc server resource.

Arc For Servers: guestnotificationservice.azure.com, port 443. For the notification service for extension and connectivity scenarios.

Arc For Servers: *.his.arc.azure.com, port 443. For metadata and hybrid identity services.

Arc For Servers: *.guestconfiguration.azure.com, port 443. For extension management and guest configuration services.

Arc For Servers: *.guestnotificationservice.azure.com, port 443. For the notification service for extension and connectivity scenarios.

Arc For Servers: azgn*.servicebus.windows.net, port 443. For the notification service for extension and connectivity scenarios.

Arc For Servers: *.servicebus.windows.net, port 443. For Windows Admin Center and SSH scenarios.

Arc For Servers: *.waconazure.com, port 443. For Windows Admin Center connectivity.

Arc For Servers: *.blob.core.windows.net, port 443. For the download source for Azure Arc-enabled servers extensions.

For a comprehensive list of all the firewall URLs, download the firewall URLs
spreadsheet .

Recommended firewall URLs


The following list provides the recommended firewall URLs. If your outbound
firewall is restricted, we recommend adding the URLs and ports described in this
section to your allowlist.

7 Note
The Azure Stack HCI firewall rules are the minimum endpoints required for HciSvc
connectivity, and don't contain wildcards. However, the following table currently
contains wildcard URLs, which may be updated into precise endpoints in the future.

Azure Benefits on Azure Stack HCI: crl3.digicert.com, port 80. Enables the platform attestation service on Azure Stack HCI to perform a certificate revocation list check to provide assurance that VMs are indeed running on Azure environments.

Azure Benefits on Azure Stack HCI: crl4.digicert.com, port 80. Enables the platform attestation service on Azure Stack HCI to perform a certificate revocation list check to provide assurance that VMs are indeed running on Azure environments.

Azure Stack HCI: *.powershellgallery.com, port 443. To obtain the Az.StackHCI PowerShell module, which is required for cluster registration. Alternatively, you can download and install the Az.StackHCI PowerShell module manually from PowerShell Gallery.

Cluster Cloud Witness: *.blob.core.windows.net, port 443. For firewall access to the Azure blob container, if choosing to use a cloud witness as the cluster witness, which is optional.

Microsoft Update: windowsupdate.microsoft.com, port 80. For Microsoft Update, which allows the OS to receive updates.

Microsoft Update: download.windowsupdate.com, port 80. For Microsoft Update, which allows the OS to receive updates.

Microsoft Update: *.download.windowsupdate.com, port 80. For Microsoft Update, which allows the OS to receive updates.

Microsoft Update: download.microsoft.com, port 443. For Microsoft Update, which allows the OS to receive updates.

Microsoft Update: wustat.windows.com, port 80. For Microsoft Update, which allows the OS to receive updates.

Microsoft Update: ntservicepack.microsoft.com, port 80. For Microsoft Update, which allows the OS to receive updates.

Microsoft Update: go.microsoft.com, port 80. For Microsoft Update, which allows the OS to receive updates.

Microsoft Update: dl.delivery.mp.microsoft.com, ports 80 and 443. For Microsoft Update, which allows the OS to receive updates.

Microsoft Update: *.delivery.mp.microsoft.com, ports 80 and 443. For Microsoft Update, which allows the OS to receive updates.

Microsoft Update: *.windowsupdate.microsoft.com, ports 80 and 443. For Microsoft Update, which allows the OS to receive updates.

Microsoft Update: *.windowsupdate.com, port 80. For Microsoft Update, which allows the OS to receive updates.

Microsoft Update: *.update.microsoft.com, ports 80 and 443. For Microsoft Update, which allows the OS to receive updates.

Firewall requirements for additional Azure services
Depending on additional Azure services you enable on HCI, you may need to make
additional firewall configuration changes. Refer to the following links for information on
firewall requirements for each Azure service:

AKS on Azure Stack HCI


Azure Arc-enabled servers
Azure Arc VM management
Azure Monitor Agent
Azure portal
Azure Site Recovery
Azure Virtual Desktop
Microsoft Defender
Microsoft Monitoring Agent (MMA) and Log Analytics Agent
Qualys
Remote support
Windows Admin Center
Windows Admin Center in Azure portal

Firewall requirements for internal rules and ports
Ensure that the proper network ports are open between all server nodes both within a
site and between sites (for stretched clusters). You'll need appropriate firewall rules to
allow ICMP, SMB (port 445, plus port 5445 for SMB Direct if using iWARP RDMA), and
WS-MAN (port 5985) bi-directional traffic between all servers in the cluster.
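
To spot-check that these ports are reachable between two nodes, a quick sketch (assuming node names Node1 and Node2) might be:

PowerShell

# From Node1, verify SMB and WS-MAN reachability to Node2
Test-NetConnection -ComputerName "Node2" -Port 445
Test-NetConnection -ComputerName "Node2" -Port 5985

# Basic ICMP reachability check
Test-Connection -ComputerName "Node2" -Count 2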

When using the Cluster Creation wizard in Windows Admin Center to create the cluster,
the wizard automatically opens the appropriate firewall ports on each server in the
cluster for Failover Clustering, Hyper-V, and Storage Replica. If you're using a different
firewall on each server, open the ports as described in the following sections:

Azure Stack HCI OS management


Ensure that the following firewall rules are configured in your on-premises firewall for
Azure Stack HCI OS management, including licensing and billing.

Rule: Allow inbound/outbound traffic to and from the Azure Stack HCI service on cluster servers. Action: Allow. Source: cluster servers. Destination: cluster servers. Service: TCP. Port: 30301.

Windows Admin Center


Ensure that the following firewall rules are configured in your on-premises firewall for
Windows Admin Center.

Provide access to Azure and Microsoft Update: Allow, from Windows Admin Center to Azure Stack HCI, TCP port 445.

Use Windows Remote Management (WinRM) 2.0 for HTTP connections to run commands on remote Windows servers: Allow, from Windows Admin Center to Azure Stack HCI, TCP port 5985.

Use WinRM 2.0 for HTTPS connections to run commands on remote Windows servers: Allow, from Windows Admin Center to Azure Stack HCI, TCP port 5986.
7 Note

While installing Windows Admin Center, if you select the Use WinRM over HTTPS
only setting, then port 5986 is required.

Failover Clustering
Ensure that the following firewall rules are configured in your on-premises firewall for
Failover Clustering.

Allow Failover Cluster validation: Allow, from the management system to cluster servers, TCP port 445.

Allow RPC dynamic port allocation: Allow, from the management system to cluster servers, TCP, a minimum of 100 ports above port 5000.

Allow Remote Procedure Call (RPC): Allow, from the management system to cluster servers, TCP port 135.

Allow Cluster Administrator: Allow, from the management system to cluster servers, UDP port 137.

Allow Cluster Service: Allow, from the management system to cluster servers, UDP port 3343.

Allow Cluster Service (required during a server join operation): Allow, from the management system to cluster servers, TCP port 3343.

Allow ICMPv4 and ICMPv6 for Failover Cluster validation: Allow, from the management system to cluster servers, n/a.

7 Note

The management system includes any computer from which you plan to administer
the cluster, using tools such as Windows Admin Center, Windows PowerShell or
System Center Virtual Machine Manager.
Hyper-V
Ensure that the following firewall rules are configured in your on-premises firewall for
Hyper-V.

Allow cluster communication: Allow, from the management system to the Hyper-V server, TCP port 445.

Allow RPC Endpoint Mapper and WMI: Allow, from the management system to the Hyper-V server, TCP port 135.

Allow HTTP connectivity: Allow, from the management system to the Hyper-V server, TCP port 80.

Allow HTTPS connectivity: Allow, from the management system to the Hyper-V server, TCP port 443.

Allow Live Migration: Allow, from the management system to the Hyper-V server, TCP port 6600.

Allow VM Management Service: Allow, from the management system to the Hyper-V server, TCP port 2179.

Allow RPC dynamic port allocation: Allow, from the management system to the Hyper-V server, TCP, a minimum of 100 ports above port 5000.

7 Note

Open up a range of ports above port 5000 to allow RPC dynamic port allocation.
Ports below 5000 may already be in use by other applications and could cause
conflicts with DCOM applications. Previous experience shows that a minimum of
100 ports should be opened, because several system services rely on these RPC
ports to communicate with each other. For more information, see How to
configure RPC dynamic port allocation to work with firewalls.

Storage Replica (stretched cluster)


Ensure that the following firewall rules are configured in your on-premises firewall for
Storage Replica (stretched cluster).
Allow Server Message Block (SMB) protocol: Allow, between stretched cluster servers, TCP port 445.

Allow Web Services-Management (WS-MAN): Allow, between stretched cluster servers, TCP port 5985.

Allow ICMPv4 and ICMPv6 (if using the Test-SRTopology PowerShell cmdlet): Allow, between stretched cluster servers, n/a.

Update Microsoft Defender firewall


This section shows how to configure Microsoft Defender firewall to allow IP addresses
associated with a service tag to connect with the operating system. A service tag
represents a group of IP addresses from a given Azure service. Microsoft manages the IP
addresses included in the service tag, and automatically updates the service tag as IP
addresses change to keep updates to a minimum. To learn more, see Virtual network
service tags.

1. Download the JSON file from the following resource to the target computer
running the operating system: Azure IP Ranges and Service Tags – Public Cloud .

2. Use the following PowerShell command to open the JSON file:

PowerShell

$json = Get-Content -Path .\ServiceTags_Public_20201012.json |


ConvertFrom-Json

3. Get the list of IP address ranges for a given service tag, such as the
"AzureResourceManager" service tag:

PowerShell

$IpList = ($json.values | where Name -Eq


"AzureResourceManager").properties.addressPrefixes
4. Import the list of IP addresses to your external corporate firewall, if you're using an
allowlist with it.

5. Create a firewall rule for each server in the cluster to allow outbound 443 (HTTPS)
traffic to the list of IP address ranges:

PowerShell

New-NetFirewallRule -DisplayName "Allow Azure Resource Manager" -


RemoteAddress $IpList -Direction Outbound -LocalPort 443 -Protocol TCP
-Action Allow -Profile Any -Enabled True

Next steps
For more information, see also:

The Windows Firewall and WinRM 2.0 ports section of Installation and
configuration for Windows Remote Management



Network reference patterns overview for Azure Stack HCI
Article • 12/12/2022

Applies to: Azure Stack HCI, versions 22H2 and 21H2

This article provides an overview of deploying network reference patterns on Azure Stack HCI.

A deployment consists of single-server or two-node systems that connect to one or two Top of Rack (TOR) switches. This environment has the following characteristics:

Two storage ports dedicated for storage traffic intent. The RDMA NIC is optional
for single-server deployments.

One or two ports dedicated to management and compute traffic intents.

One optional Baseboard Management Controller (BMC) for OOB management.

A single-server deployment features a single TOR switch for northbound/southbound (internal-external) traffic. Two-node deployments consist of either a storage switchless configuration using one or two TOR switches, or a storage switched configuration using two TOR switches with either non-converged or fully converged host network adapters.

Switchless advantages and disadvantages


The following highlights some advantages and disadvantages of using switchless configurations:

No switch is necessary for in-cluster (East-West) traffic; however, a switch is required for traffic outside the cluster (North-South). This may result in lower capital expenditure (CAPEX) costs, but it depends on the number of nodes in the cluster.

If switchless is used, configuration is limited to the host, which may reduce the potential number of configuration steps needed. However, this value diminishes as the cluster size increases.

Switchless has the lowest level of resiliency, and it comes with extra complexity and planning if it needs to be scaled up after the initial deployment. Storage connectivity needs to be enabled when adding the second node, which requires defining the physical connectivity needed between the nodes.

More planning is required for IP and subnet addressing schemes.

Storage adapters are single-purpose interfaces. Management, compute, stretched cluster, and other traffic requiring North-South communication can't use these adapters.

As the number of nodes in the cluster grows beyond two, the cost of network adapters could exceed the cost of using network switches.

Beyond a three-node cluster, cable management complexity grows.

Cluster expansion beyond two nodes is complex, potentially requiring per-node hardware and software configuration changes.

For more information, see Physical network requirements for Azure Stack HCI.

Firewall requirements
Azure Stack HCI requires periodic connectivity to Azure. If your organization's outbound firewall is restricted, you need to include firewall requirements for outbound endpoints and internal rules and ports. There are required and recommended endpoints for the Azure Stack HCI core components, which include cluster creation, registration and billing, Microsoft Update, and cloud cluster witness.

See the firewall requirements for a complete list of endpoints. Make sure to include these URLs in your allowed list. Proper network ports need to be opened between all server nodes, both within a site and between sites (for stretched clusters).

With Azure Stack HCI, the connectivity validator of the Environment Checker tool checks the outbound connectivity requirements by default during deployment. Additionally, you can run the Environment Checker tool standalone before, during, or after deployment to evaluate the outbound connectivity of your environment.

A best practice is to keep all relevant endpoints in a data file that the Environment Checker tool can access. The same file can also be shared with your firewall administrator to open the necessary ports and URLs.
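As a sketch, assuming the Environment Checker is distributed as the AzStackHci.EnvironmentChecker module on the PowerShell Gallery, a standalone connectivity check might look like the following; confirm the module and cmdlet names against the current Environment Checker documentation.

PowerShell

# Assumes internet access to the PowerShell Gallery and that the module name matches current documentation.
Install-Module -Name AzStackHci.EnvironmentChecker -Repository PSGallery -Force

# Runs the outbound connectivity validator standalone and reports per-endpoint results.
Invoke-AzStackHciConnectivityValidation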

For more information, see Firewall requirements.

Next steps
Choose a network pattern to review.
Azure Stack HCI network deployment
patterns
Article • 07/20/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2

This article describes a set of network reference patterns to architect, deploy, and
configure Azure Stack HCI using either one or two physical hosts. Depending on your
needs or scenarios, you can go directly to your pattern of interest. Each pattern is
described as a standalone entity and includes all the network components for specific
scenarios.

Choose a network reference pattern


Use the following table to directly go to a pattern and its content.

Single-server deployment pattern


Go to single server deployment

Two-node deployment patterns


Go to storage switchless, single TOR switch
Go to storage switchless, two TOR switches
Go to storage switched, non-converged, two TOR switches
Go to storage switched, fully converged, two TOR switches

Next steps
Download Azure Stack HCI
Review single-server storage
deployment network reference pattern
for Azure Stack HCI
Article • 03/04/2024

Applies to: Azure Stack HCI, versions 22H2 and 21H2

In this article, you'll learn about the single-server storage network reference pattern that
you can use to deploy your Azure Stack HCI solution. The information in this article will
also help you determine if this configuration is viable for your deployment planning
needs. This article is targeted towards the IT administrators who deploy and manage
Azure Stack HCI in their datacenters.

For information on other network patterns, see Azure Stack HCI network deployment
patterns.

Introduction
Single-server deployments provide cost and space benefits while helping to modernize
your infrastructure and bring Azure hybrid computing to locations that can tolerate the
resiliency of a single server. Azure Stack HCI running on a single-server behaves similarly
to Azure Stack HCI on a multi-node cluster: it brings native Azure Arc integration, the
ability to add servers to scale out the cluster, and it includes the same Azure benefits.

It also supports the same workloads, such as Azure Virtual Desktop (AVD) and AKS
hybrid, and is supported and billed the same way.

Scenarios
Use the single-server storage pattern in the following scenarios:

Facilities that can tolerate a lower level of resiliency. Consider implementing this
pattern whenever your location or the service provided by this pattern can tolerate a
lower level of resiliency without impacting your business.

Food, healthcare, finance, retail, government facilities. Some food, healthcare,


finance, and retail scenarios can apply this option to minimize their costs without
impacting core operations and business transactions.
Although Software Defined Networking (SDN) Layer 3 (L3) services are fully supported
on this pattern, routing services such as Border Gateway Protocol (BGP) may need to be
configured for the firewall device on the top-of-rack (TOR) switch.

Network security features such as microsegmentation and Quality of Service (QoS) don't
require extra configuration for the firewall device, as they're implemented at the virtual
network adapter layer. For more information, see Microsegmentation with Azure Stack
HCI .

Physical connectivity components


As illustrated in the diagram below, this pattern has the following physical network
components:

For northbound/southbound traffic, the Azure Stack HCI cluster is implemented using a single TOR L2 or L3 switch.
Two teamed network ports to handle the management and compute traffic
connected to the switch.
Two disconnected RDMA NICs that are only used if you add a second server to your cluster for scale-out. This means no increased costs for cabling or physical switch ports.
(Optional) A BMC card can be used to enable remote management of your
environment. For security purposes, some solutions might use a headless
configuration without the BMC card.

The following table lists some guidelines for a single-server deployment:

Network | Management & compute | Storage | BMC
Link speed | At least 1 Gbps if RDMA is disabled; 10 Gbps recommended. | At least 10 Gbps. | Check with hardware manufacturer.
Interface type | RJ45, SFP+, or SFP28 | SFP+ or SFP28 | RJ45
Ports and aggregation | Two teamed ports | Optional to allow adding a second server; disconnected ports. | One port
RDMA | Optional. Depends on requirements for guest RDMA and NIC support. | N/A | N/A

Network ATC intents


The single-server pattern uses only one Network ATC intent for management and
compute traffic. The RDMA network interfaces are optional and disconnected.

Management and compute intent


The management and compute intent has the following characteristics:

Intent type: Management and compute


Intent mode: Cluster mode
Teaming: Yes - pNIC01 and pNIC02 are teamed
Default management VLAN: Configured VLAN for management adapters is unmodified
PA VLAN and vNICs: Network ATC is transparent to PA vNICs and VLANs
Compute VLANs and vNICs: Network ATC is transparent to compute VM vNICs and
VLANs

Storage intent
The storage intent has the following characteristics:

Intent type: None


Intent mode: None
Teaming: pNIC03 and pNIC04 are disconnected
Default VLANs: None
Default subnets: None

Follow these steps to create a network intent for this reference pattern:

1. Run PowerShell as Administrator.

2. Run the following command:

PowerShell

Add-NetIntent -Name <management_compute> -Management -Compute -ClusterName <HCI01> -AdapterName <pNIC01, pNIC02>

For more information, see Deploy host networking: Compute and management intent.
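After the intent is created, you can confirm that Network ATC provisioned it successfully. The following is a minimal sketch that reuses the placeholder cluster and intent names from the previous step.

PowerShell

# Lists the intents defined for the cluster.
Get-NetIntent -ClusterName HCI01

# Shows per-node provisioning status; a ConfigurationStatus of 'Success' indicates the intent was applied.
Get-NetIntentStatus -ClusterName HCI01 -Name management_compute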

Logical network components


As illustrated in the diagram below, this pattern has the following logical network
components:

Storage network VLANs


Optional - this pattern doesn't require a storage network.

OOB network
The Out of Band (OOB) network is dedicated to supporting the "lights-out" server
management interface also known as the baseboard management controller (BMC).
Each BMC interface connects to a customer-supplied switch. The BMC is used to
automate PXE boot scenarios.
The management network requires access to the BMC interface using Intelligent
Platform Management Interface (IPMI) User Datagram Protocol (UDP) port 623.

The OOB network is isolated from compute workloads and is optional for non-solution-
based deployments.
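If host or perimeter firewalls restrict the management network, UDP port 623 to the BMC interfaces must be allowed. The following is an illustrative sketch only; the BMC subnet shown is an assumed example.

PowerShell

# Example only: allow outbound IPMI (UDP 623) from the management host to an assumed BMC subnet.
New-NetFirewallRule -DisplayName "Allow IPMI to BMC" -Direction Outbound -Protocol UDP -RemotePort 623 -RemoteAddress "192.168.100.0/24" -Action Allow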

Management VLAN
All physical compute hosts require access to the management logical network. For IP
address planning, each physical compute host must have at least one IP address
assigned from the management logical network.

A DHCP server can automatically assign IP addresses for the management network, or
you can manually assign static IP addresses. When DHCP is the preferred IP assignment
method, we recommend that you use DHCP reservations without expiration.
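If you assign static addresses instead, a minimal sketch of configuring a management IP and DNS servers on a host looks like the following; the adapter alias, addresses, and prefix length are placeholder assumptions.

PowerShell

# Placeholder values: replace the adapter alias, IP address, gateway, and DNS servers with your own.
New-NetIPAddress -InterfaceAlias "vManagement" -IPAddress 192.168.1.10 -PrefixLength 24 -DefaultGateway 192.168.1.1
Set-DnsClientServerAddress -InterfaceAlias "vManagement" -ServerAddresses "192.168.1.2","192.168.1.3"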

The management network supports the following VLAN configurations:

Native VLAN - you aren't required to supply VLAN IDs. This is required for
solution-based installations.

Tagged VLAN - you supply VLAN IDs at the time of deployment.

The management network supports all traffic used for management of the cluster,
including Remote Desktop, Windows Admin Center, and Active Directory.

For more information, see Plan an SDN infrastructure: Management and HNV Provider.

Compute VLANs
In some scenarios, you don’t need to use SDN Virtual Networks with Virtual Extensible
LAN (VXLAN) encapsulation. Instead, you can use traditional VLANs to isolate your
tenant workloads. Those VLANs are configured on the TOR switch's port in trunk mode.
When connecting new VMs to these VLANs, the corresponding VLAN tag is defined on
the virtual network adapter.
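For example, assuming a VM named VM01 and VLAN ID 250 (both placeholders), the tag can be set with Hyper-V PowerShell as follows.

PowerShell

# Placeholder VM name and VLAN ID; the same VLAN must be trunked on the TOR switch port.
Set-VMNetworkAdapterVlan -VMName "VM01" -Access -VlanId 250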
HNV Provider Address (PA) network
The Hyper-V Network Virtualization (HNV) Provider Address (PA) network serves as the
underlying physical network for East/West (internal-internal) tenant traffic, North/South
(external-internal) tenant traffic, and to exchange BGP peering information with the
physical network. This network is only required when there's a need for deploying virtual
networks using VXLAN encapsulation for another layer of isolation and for network
multitenancy.

For more information, see Plan an SDN infrastructure: Management and HNV Provider.

Network isolation options


The following network isolation options are supported:

VLANs (IEEE 802.1Q)


VLANs allow devices that must be kept separate to share the cabling of a physical
network and yet be prevented from directly interacting with one another. This managed
sharing yields gains in simplicity, security, traffic management, and economy. For
example, a VLAN can be used to separate traffic within a business based on individual
users or groups of users or their roles, or based on traffic characteristics. Many internet
hosting services use VLANs to separate private zones from one another, allowing each
customer's servers to be grouped in a single network segment no matter where the
individual servers are located in the data center. Some precautions are needed to
prevent traffic "escaping" from a given VLAN, an exploit known as VLAN hopping.

For more information, see Understand the usage of virtual networks and VLANs.

Default network access policies and microsegmentation


Default network access policies ensure that all virtual machines (VMs) in your Azure
Stack HCI cluster are secure by default from external threats. With these policies, we'll
block inbound access to a VM by default, while giving the option to enable selective
inbound ports and thus securing the VMs from external attacks. This enforcement is
available through management tools like Windows Admin Center.

Microsegmentation involves creating granular network policies between applications


and services. This essentially reduces the security perimeter to a fence around each
application or VM. This fence permits only necessary communication between
application tiers or other logical boundaries, thus making it exceedingly difficult for
cyberthreats to spread laterally from one system to another. Microsegmentation
securely isolates networks from each other and reduces the total attack surface of a
network security incident.

Default network access policies and microsegmentation are realized as five-tuple


stateful (source address prefix, source port, destination address prefix, destination port,
and protocol) firewall rules on Azure Stack HCI clusters. Firewall rules are also known as
Network Security Groups (NSGs). These policies are enforced at the vSwitch port of each
VM. The policies are pushed through the management layer, and the SDN Network
Controller distributes them to all applicable hosts. These policies are available for VMs
on traditional VLAN networks and on SDN overlay networks.

For more information, see What is Datacenter Firewall?.

QoS for VM network adapters


You can configure Quality of Service (QoS) for a VM network adapter to limit bandwidth
on a virtual interface to prevent a high-traffic VM from contending with other VM
network traffic. You can also configure QoS to reserve a specific amount of bandwidth
for a VM to ensure that the VM can send traffic regardless of other traffic on the
network. This can be applied to VMs attached to traditional VLAN networks as well as
VMs attached to SDN overlay networks.

For more information, see Configure QoS for a VM network adapter.
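As a minimal sketch using Hyper-V PowerShell (the VM name and bandwidth values are illustrative assumptions):

PowerShell

# Example values only: cap the VM's virtual NIC at about 1 Gbps and reserve 100 Mbps for it.
# The reservation takes effect only when the virtual switch uses Absolute minimum bandwidth mode.
Set-VMNetworkAdapter -VMName "VM01" -MaximumBandwidth 1GB -MinimumBandwidthAbsolute 100MB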

Virtual networks
Network virtualization provides virtual networks to VMs similar to how server
virtualization (hypervisor) provides VMs to the operating system. Network virtualization
decouples virtual networks from the physical network infrastructure and removes the
constraints of VLAN and hierarchical IP address assignment from VM provisioning. Such
flexibility makes it easy for you to move to Infrastructure-as-a-Service (IaaS) clouds, and is efficient for hosters and datacenter administrators to manage their infrastructure while maintaining the necessary multi-tenant isolation, security requirements, and overlapping VM IP addresses.

For more information, see Hyper-V Network Virtualization.

L3 networking services options


The following L3 networking service options are available:
Virtual network peering
Virtual network peering lets you connect two virtual networks seamlessly. Once peered,
for connectivity purposes, the virtual networks appear as one. The benefits of using
virtual network peering include:

Traffic between VMs in the peered virtual networks gets routed through the
backbone infrastructure through private IP addresses only. The communication
between the virtual networks doesn't require public Internet or gateways.
A low-latency, high-bandwidth connection between resources in different virtual
networks.
The ability for resources in one virtual network to communicate with resources in a
different virtual network.
No downtime to resources in either virtual network when creating the peering.

For more information, see Virtual network peering.

SDN software load balancer


Cloud Service Providers (CSPs) and enterprises that deploy Software Defined
Networking (SDN) can use Software Load Balancer (SLB) to evenly distribute customer
network traffic among virtual network resources. SLB enables multiple servers to host
the same workload, providing high availability and scalability. It's also used to provide
inbound Network Address Translation (NAT) services for inbound access to VMs, and
outbound NAT services for outbound connectivity.

Using SLB, you can scale out your load balancing capabilities using SLB VMs on the
same Hyper-V compute servers that you use for your other VM workloads. SLB supports
rapid creation and deletion of load balancing endpoints as required for CSP operations.
In addition, SLB supports tens of gigabytes per cluster, provides a simple provisioning
model, and is easy to scale out and in. SLB uses Border Gateway Protocol to advertise
virtual IP addresses to the physical network.

For more information, see What is SLB for SDN?

SDN VPN gateways


SDN Gateway is a software-based Border Gateway Protocol (BGP) capable router
designed for CSPs and enterprises that host multi-tenant virtual networks using Hyper-V
Network Virtualization (HNV). You can use RAS Gateway to route network traffic
between a virtual network and another network, either local or remote.
SDN Gateway can be used to:

Create secure site-to-site IPsec connections between SDN virtual networks and
external customer networks over the internet.

Create Generic Routing Encapsulation (GRE) connections between SDN virtual


networks and external networks. The difference between site-to-site connections
and GRE connections is that the latter isn't an encrypted connection.

For more information about GRE connectivity scenarios, see GRE Tunneling in
Windows Server.

Create Layer 3 (L3) connections between SDN virtual networks and external
networks. In this case, the SDN gateway simply acts as a router between your
virtual network and the external network.

SDN Gateway requires SDN Network Controller. Network Controller performs the deployment of gateway pools, configures tenant connections on each gateway, and switches network traffic flows to a standby gateway if a gateway fails.

Gateways use Border Gateway Protocol to advertise GRE endpoints and establish point-to-point connections. SDN deployment creates a default gateway pool that supports all connection types. Within this pool, you can specify how many gateways are reserved on standby in case an active gateway fails.

For more information, see What is RAS Gateway for SDN?

Next steps
Learn about two-node patterns - Azure Stack HCI network deployment patterns.
Review single-server storage reference
pattern components for Azure Stack HCI
Article • 12/12/2022

Applies to: Azure Stack HCI, versions 22H2 and 21H2

In this article, you'll learn about which network components are deployed for the single-
server reference pattern, as shown in the following diagram:

Optional components
The following are optional components. For more information on Software Defined
Networking (SDN), see Plan a Software Defined Network infrastructure.

SDN Network Controller VM


The Network Controller VM is optionally deployed. If a Network Controller VM is not
deployed, default network access policies are not available. A Network Controller VM is
needed if you have any of the following requirements:

Create and manage virtual networks or connect VMs to virtual network subnets.

Configure and manage microsegmentation for VMs connected to virtual networks


or traditional VLAN-based networks.

Attach virtual appliances to your virtual networks.

Configure Quality of Service (QoS) policies for VMs attached to virtual networks or
traditional VLAN-based networks.

SDN Software Load Balancer VM

The SDN Software Load Balancer (SLB) VM is used to evenly distribute network traffic
among multiple VMs. It enables multiple servers to host the same workload, providing
high availability and scalability. It is also used to provide inbound Network Address
Translation (NAT) services for inbound access to VMs, and outbound NAT services for
outbound connectivity.

SDN Gateway VM
The SDN Gateway VM is used to route network traffic between a virtual network and
another network, either local or remote. SDN Gateways can be used to:

Create secure site-to-site IPsec connections between SDN virtual networks and
external networks over the internet.

Create Generic Routing Encapsulation (GRE) connections between SDN virtual


networks and external networks. The difference between site-to-site connections
and GRE connections is that the latter is not an encrypted connection. For more
information about GRE connectivity, see GRE Tunneling in Windows Server.

Create Layer 3 (L3) connections between SDN virtual networks and external
networks. In this case, SDN Gateway simply acts as a router between your virtual
network and the external network.

Host agents
The following components run as services or agents on the host server:
Arc host agent: Enables you to manage your Windows and Linux computers hosted
outside of Azure on your corporate network or other cloud providers.

Network Controller host agent: Allows Network Controller to manage the goal state of
the data plane, and to receive notification of events as the configuration of the data
plane changes.

Monitor host agent: Orchestrator-managed agent used for emitting observability


(telemetry and diagnostics) pipeline data that upload to Geneva (Azure Storage).

Software Load Balancer host agent: Listens for policy updates from the Network
Controller. In addition, this agent programs agent rules into the SDN-enabled Hyper-V
virtual switches that are configured on the local computer.

Components running on VMs


The following table lists the various components running on virtual machines (VMs) for a
single-server network pattern:

Component | Number of VMs | OS disk size | Data disk size | vCPUs | Memory
Network Controller | 1 | 100 GB | 30 GB | 4 | 4 GB
SDN Software Load Balancers | 1 | 60 GB | 30 GB | 16 | 8 GB
SDN Gateways | 1 | 60 GB | 30 GB | 8 | 8 GB
OEM Management | OEM defined | OEM defined | OEM defined | OEM defined | OEM defined
Total | 3 + OEM | 270 GB + OEM | 90 GB + OEM | 32 + OEM | 28 GB + OEM

Next steps
Learn about single-server IP requirements.
Review single-server storage reference
pattern IP requirements for Azure Stack
HCI
Article • 07/20/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2

In this article, learn about the IP requirements for deploying a single-server network
reference pattern in your environment.

Deployments without microsegmentation and QoS enabled

The following table lists network attributes for deployments without microsegmentation and Quality of Service (QoS) enabled. This is the default scenario and is deployed automatically.

Network component | IP | Network ATC intent | Network routing | Subnet properties | Required IPs
Storage 1 | 1 IP for each host | Storage | No defined gateway. IP-less L2 VLAN. | Network ATC managed subnet. Default VLAN tag 711. | 1 optional if connected to switch.
Storage 2 | 1 IP for each host | Storage | No defined gateway. IP-less L2 VLAN. | Network ATC managed subnet. Default VLAN tag 712. | 1 optional if connected to switch.
Management | 1 IP for each host, 1 IP for Failover Cluster, 1 IP for OEM VM (optional) | Management | Outbound connected (internet access required). Disconnected (Arc autonomous controller). | Customer-defined management VLAN (Native VLAN preferred but trunk mode supported). | 2 required, 1 optional.
Total | | | | | 2 required. 2 optional for storage, 1 optional for OEM VM.

(Optional) Deployments with microsegmentation and QoS enabled

The following table lists network attributes for deployments with microsegmentation and QoS enabled. This scenario is optional and deployed only with Network Controller.

Network component | IP | Network ATC intent | Network routing | Subnet properties | Required IPs
Storage 1 | 1 IP for each host | Storage | No defined gateway. IP-less L2 VLAN. | Network ATC managed subnet. Default VLAN tag 711. | 1 optional if connected to switch.
Storage 2 | 1 IP for each host | Storage | No defined gateway. IP-less L2 VLAN. | Network ATC managed subnet. Default VLAN tag 712. | 1 optional if connected to switch.
Management | 1 IP for each host, 1 IP for Failover Cluster, 1 IP for Network Controller VM, 1 IP for Arc VM management stack VM, 1 IP for OEM VM (new) | Management | Outbound connected (internet access required). Disconnected (Arc autonomous controller). | Customer-defined management VLAN (Native VLAN preferred but trunk mode supported). | 4 required, 1 optional.
Total | | | | | 4 required. 2 optional for storage, 1 optional for OEM VM.

Deployments with SDN optional services


The following table lists network attributes for deployments with SDN optional services:

Network component | IP | Network ATC intent | Network routing | Subnet properties | Required IPs
Storage 1 | 1 IP for each host | Storage | No defined gateway. IP-less L2 VLAN. | Network ATC managed subnet. Default VLAN tag 711. | 1 optional if connected to switch.
Storage 2 | 1 IP for each host | Storage | No defined gateway. IP-less L2 VLAN. | Network ATC managed subnet. Default VLAN tag 712. | 1 optional if connected to switch.
Tenant compute | Tenant VM IPs connected to corresponding VLANs | Compute | Tenant VLAN routing/access | Customer-defined, customer-managed. VLAN trunk configuration on physical switches required. |
Management | 1 IP for each host, 1 IP for Failover Cluster, 1 IP for Network Controller VM, 1 IP for Arc VM management stack VM, 1 IP for OEM VM (new). Single node: 1 Network Controller VM IP, 1 Software Load Balancer (SLB) VM IP, 1 gateway VM IP | Management | Outbound connected (internet access required). Disconnected (Arc autonomous controller). | Customer-defined management VLAN (Native VLAN preferred but trunk mode supported). | 6 required, 1 optional.
HNV (AKA PA network) | 2 IPs for each host. Single node: 1 SLB VM IP, 1 gateway VM IP | N/A | Requires default gateway to route packets externally. | Provider Address Network VLAN. Subnet needs to allocate hosts and SLB VMs. Potential subnet growth consideration. | IPs automatically assigned out of the subnet by Network Controller.
Public VIPs | LB and GWs public VIPs | N/A | Advertised through BGP | | Network Controller-managed IPs
Private VIPs | LB private VIPs | N/A | Advertised through BGP | | Network Controller-managed IPs
GRE VIPs | GRE connections, gateway VIPs | N/A | Advertised through BGP | | Network Controller-managed IPs
L3 Forwarding | N/A | | | Separate physical subnet to communicate with the virtual network. |
Total | | | | | 6 required. 2 optional for storage, 1 optional for OEM VM.

Next steps
Download Azure Stack HCI
Review two-node storage switchless,
single switch deployment network
reference pattern for Azure Stack HCI
Article • 12/16/2022

Applies to: Azure Stack HCI, versions 22H2 and 21H2

In this article, you'll learn about the two-node storage switchless with single TOR switch
network reference pattern that you can use to deploy your Azure Stack HCI solution. The
information in this article will also help you determine if this configuration is viable for
your deployment planning needs. This article is targeted towards the IT administrators
who deploy and manage Azure Stack HCI in their datacenters.

For information on other network patterns, see Azure Stack HCI network deployment
patterns.

Scenarios
Scenarios for this network pattern include laboratories, factories, retail stores, and
government facilities.

Consider this pattern for a cost-effective solution that includes fault-tolerance at the
cluster level, but can tolerate northbound connectivity interruptions if the single physical
switch fails or requires maintenance.

You can scale out this pattern, but it requires workload downtime to reconfigure the physical storage connectivity and the storage networks. Although SDN L3 services are fully supported for this pattern, routing services such as BGP need to be configured on the firewall device on top of the TOR switch if the switch doesn't support L3 services. Network security features such as microsegmentation and QoS don't require extra configuration on the firewall device, as they're implemented on the virtual switch.

Physical connectivity components


As illustrated in the diagram below, this pattern has the following physical network
components:

Single TOR switch for north-south traffic communication.


Two teamed network ports to handle management and compute traffic, connected
to the L2 switch on each host

Two RDMA NICs in a full-mesh configuration for east-west traffic for storage. Each
node in the cluster has a redundant connection to the other node in the cluster.

As an option, some solutions might use a headless configuration without a BMC


card for security purposes.

Networks | Management and compute | Storage | BMC
Link speed | At least 1 Gbps; 10 Gbps recommended | At least 10 Gbps | Check with hardware manufacturer
Interface type | RJ45, SFP+ or SFP28 | SFP+ or SFP28 | RJ45
Ports and aggregation | Two teamed ports | Two standalone ports | One port

Network ATC intents


For two-node storage switchless patterns, two Network ATC intents are created. The first
for management and compute network traffic, and the second for storage traffic.

Management and compute intent


Intent Type: Management and compute
Intent Mode: Cluster mode
Teaming: Yes. pNIC01 and pNIC02 are teamed
Default Management VLAN: Configured VLAN for management adapters isn’t
modified
PA & Compute VLANs and vNICs: Network ATC is transparent to PA vNICs and
VLAN or compute VM vNICs and VLANs
Storage intent
Intent type: Storage
Intent mode: Cluster mode
Teaming: pNIC03 and pNIC04 use SMB Multichannel to provide resiliency and
bandwidth aggregation
Default VLANs:
711 for storage network 1
712 for storage network 2
Default subnets:
10.71.1.0/24 for storage network 1
10.71.2.0/24 for storage network 2

For more information, see Deploy host networking.

Follow these steps to create network intents for this reference pattern:

1. Run PowerShell as administrator.

2. Run the following command:

PowerShell

Add-NetIntent -Name <Management_Compute> -Management -Compute -ClusterName <HCI01> -AdapterName <pNIC01, pNIC02>
Add-NetIntent -Name <Storage> -Storage -ClusterName <HCI01> -AdapterName <pNIC03, pNIC04>
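Once the intents are applied, you can sanity-check the switchless storage paths from either node with the built-in SMB cmdlets; this is a quick verification sketch rather than a required step.

PowerShell

# Confirms which client network interfaces SMB sees and whether they are RDMA capable.
Get-SmbClientNetworkInterface | Where-Object RdmaCapable

# Lists active SMB Multichannel connections carrying the storage traffic between the nodes.
Get-SmbMultichannelConnection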

Logical connectivity components


As illustrated in the diagram below, this pattern has the following logical network
components:

Storage Network VLANs


The storage intent-based traffic consists of two individual networks supporting RDMA
traffic. Each interface will be dedicated to a separate storage network, and both may
utilize the same VLAN tag. This traffic is only intended to travel between the two nodes.
Storage traffic is a private network without connectivity to other resources.

The storage adapters operate on different IP subnets. To enable a switchless


configuration, each connected node supports a matching subnet of its neighbor. Each
storage network uses the Network ATC predefined VLANs by default (711 and 712).
However, these VLANs can be customized if necessary. In addition, if the default subnets
defined by Network ATC (10.71.1.0/24 and 10.71.2.0/24) aren't usable, you're responsible
for assigning all storage IP addresses in the cluster.

For more information, see Network ATC overview.
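If the default subnets can't be used, a sketch of assigning the storage IPs manually on node 1 could look like the following; the adapter aliases and the 172.16.x.x addressing are assumptions, and node 2 would mirror the same scheme.

PowerShell

# Example addressing only: storage network 1 on pNIC03 and storage network 2 on pNIC04.
New-NetIPAddress -InterfaceAlias "pNIC03" -IPAddress 172.16.1.1 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "pNIC04" -IPAddress 172.16.2.1 -PrefixLength 24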

OOB network
The Out of Band (OOB) network is dedicated to supporting the "lights-out" server
management interface also known as the baseboard management controller (BMC).
Each BMC interface connects to a customer-supplied switch. The BMC is used to
automate PXE boot scenarios.

The management network requires access to the BMC interface using Intelligent
Platform Management Interface (IPMI) User Datagram Protocol (UDP) port 623.

The OOB network is isolated from compute workloads and is optional for non-solution-
based deployments.

Management VLAN
All physical compute hosts require access to the management logical network. For IP
address planning, each physical compute host must have at least one IP address
assigned from the management logical network.

A DHCP server can automatically assign IP addresses for the management network, or
you can manually assign static IP addresses. When DHCP is the preferred IP assignment
method, we recommend that you use DHCP reservations without expiration.

The management network supports the following VLAN configurations:

Native VLAN - you aren't required to supply VLAN IDs. This is required for
solution-based installations.

Tagged VLAN - you supply VLAN IDs at the time of deployment.

The management network supports all traffic used for management of the cluster,
including Remote Desktop, Windows Admin Center, and Active Directory.

For more information, see Plan an SDN infrastructure: Management and HNV Provider.

Compute VLANs
In some scenarios, you don’t need to use SDN Virtual Networks with Virtual Extensible
LAN (VXLAN) encapsulation. Instead, you can use traditional VLANs to isolate your
tenant workloads. Those VLANs are configured on the TOR switch's port in trunk mode.
When connecting new VMs to these VLANs, the corresponding VLAN tag is defined on
the virtual network adapter.

HNV Provider Address (PA) network


The Hyper-V Network Virtualization (HNV) Provider Address (PA) network serves as the
underlying physical network for East/West (internal-internal) tenant traffic, North/South
(external-internal) tenant traffic, and to exchange BGP peering information with the
physical network. This network is only required when there's a need for deploying virtual
networks using VXLAN encapsulation for another layer of isolation and for network
multitenancy.

For more information, see Plan an SDN infrastructure: Management and HNV Provider.

Network isolation options


The following network isolation options are supported:

VLANs (IEEE 802.1Q)


VLANs allow devices that must be kept separate to share the cabling of a physical
network and yet be prevented from directly interacting with one another. This managed
sharing yields gains in simplicity, security, traffic management, and economy. For
example, a VLAN can be used to separate traffic within a business based on individual
users or groups of users or their roles, or based on traffic characteristics. Many internet
hosting services use VLANs to separate private zones from one another, allowing each
customer's servers to be grouped in a single network segment no matter where the
individual servers are located in the data center. Some precautions are needed to
prevent traffic "escaping" from a given VLAN, an exploit known as VLAN hopping.

For more information, see Understand the usage of virtual networks and VLANs.

Default network access policies and microsegmentation


Default network access policies ensure that all virtual machines (VMs) in your Azure
Stack HCI cluster are secure by default from external threats. With these policies, we'll
block inbound access to a VM by default, while giving the option to enable selective
inbound ports and thus securing the VMs from external attacks. This enforcement is
available through management tools like Windows Admin Center.
Microsegmentation involves creating granular network policies between applications
and services. This essentially reduces the security perimeter to a fence around each
application or VM. This fence permits only necessary communication between
application tiers or other logical boundaries, thus making it exceedingly difficult for
cyberthreats to spread laterally from one system to another. Microsegmentation
securely isolates networks from each other and reduces the total attack surface of a
network security incident.

Default network access policies and microsegmentation are realized as five-tuple


stateful (source address prefix, source port, destination address prefix, destination port,
and protocol) firewall rules on Azure Stack HCI clusters. Firewall rules are also known as
Network Security Groups (NSGs). These policies are enforced at the vSwitch port of each
VM. The policies are pushed through the management layer, and the SDN Network
Controller distributes them to all applicable hosts. These policies are available for VMs
on traditional VLAN networks and on SDN overlay networks.

For more information, see What is Datacenter Firewall?.

QoS for VM network adapters


You can configure Quality of Service (QoS) for a VM network adapter to limit bandwidth
on a virtual interface to prevent a high-traffic VM from contending with other VM
network traffic. You can also configure QoS to reserve a specific amount of bandwidth
for a VM to ensure that the VM can send traffic regardless of other traffic on the
network. This can be applied to VMs attached to traditional VLAN networks as well as
VMs attached to SDN overlay networks.

For more information, see Configure QoS for a VM network adapter.

Virtual networks
Network virtualization provides virtual networks to VMs similar to how server
virtualization (hypervisor) provides VMs to the operating system. Network virtualization
decouples virtual networks from the physical network infrastructure and removes the
constraints of VLAN and hierarchical IP address assignment from VM provisioning. Such
flexibility makes it easy for you to move to Infrastructure-as-a-Service (IaaS) clouds, and is efficient for hosters and datacenter administrators to manage their infrastructure while maintaining the necessary multi-tenant isolation, security requirements, and overlapping VM IP addresses.

For more information, see Hyper-V Network Virtualization.


L3 networking services options
The following L3 networking service options are available:

Virtual network peering


Virtual network peering lets you connect two virtual networks seamlessly. Once peered,
for connectivity purposes, the virtual networks appear as one. The benefits of using
virtual network peering include:

Traffic between VMs in the peered virtual networks gets routed through the
backbone infrastructure through private IP addresses only. The communication
between the virtual networks doesn't require public Internet or gateways.
A low-latency, high-bandwidth connection between resources in different virtual
networks.
The ability for resources in one virtual network to communicate with resources in a
different virtual network.
No downtime to resources in either virtual network when creating the peering.

For more information, see Virtual network peering.

SDN software load balancer


Cloud Service Providers (CSPs) and enterprises that deploy Software Defined
Networking (SDN) can use Software Load Balancer (SLB) to evenly distribute customer
network traffic among virtual network resources. SLB enables multiple servers to host
the same workload, providing high availability and scalability. It's also used to provide
inbound Network Address Translation (NAT) services for inbound access to VMs, and
outbound NAT services for outbound connectivity.

Using SLB, you can scale out your load balancing capabilities using SLB VMs on the
same Hyper-V compute servers that you use for your other VM workloads. SLB supports
rapid creation and deletion of load balancing endpoints as required for CSP operations.
In addition, SLB supports tens of gigabytes per cluster, provides a simple provisioning
model, and is easy to scale out and in. SLB uses Border Gateway Protocol to advertise
virtual IP addresses to the physical network.

For more information, see What is SLB for SDN?

SDN VPN gateways


SDN Gateway is a software-based Border Gateway Protocol (BGP) capable router
designed for CSPs and enterprises that host multi-tenant virtual networks using Hyper-V
Network Virtualization (HNV). You can use RAS Gateway to route network traffic
between a virtual network and another network, either local or remote.

SDN Gateway can be used to:

Create secure site-to-site IPsec connections between SDN virtual networks and
external customer networks over the internet.

Create Generic Routing Encapsulation (GRE) connections between SDN virtual


networks and external networks. The difference between site-to-site connections
and GRE connections is that the latter isn't an encrypted connection.

For more information about GRE connectivity scenarios, see GRE Tunneling in
Windows Server.

Create Layer 3 (L3) connections between SDN virtual networks and external
networks. In this case, the SDN gateway simply acts as a router between your
virtual network and the external network.

SDN Gateway requires SDN Network Controller. Network Controller performs the
deployment of gateway pools, configures tenant connections on each gateway, and
switches network traffic flows to a standby gateway if a gateway fails.

Gateways use Border Gateway Protocol to advertise GRE endpoints and establish point-
to-point connections. SDN deployment creates a default gateway pool that supports all
connection types. Within this pool, you can specify how many gateways are reserved on
standby in case an active gateway fails.

For more information, see What is RAS Gateway for SDN?

Next steps
Learn about the two-node storage switchless, two switches network pattern.
Review two-node storage switchless,
two switches deployment network
reference pattern for Azure Stack HCI
Article • 12/16/2022

Applies to: Azure Stack HCI, versions 22H2 and 21H2

In this article, you'll learn about the two-node storage switchless with two TOR L3
switches network reference pattern that you can use to deploy your Azure Stack HCI
solution. The information in this article will also help you determine if this configuration
is viable for your deployment planning needs. This article is targeted towards the IT
administrators who deploy and manage Azure Stack HCI in their datacenters.

For information on other network patterns, see Azure Stack HCI network deployment
patterns.

Scenarios
Scenarios for this network pattern include laboratories, branch offices, and datacenter
facilities.

Consider implementing this pattern when looking for a cost-efficient solution that has fault tolerance across all the network components. It's possible to scale out the pattern, but doing so requires workload downtime to reconfigure the physical storage connectivity and the storage networks. SDN L3 services are fully supported on this pattern. Routing services such as BGP can be configured directly on the TOR switches if they support L3 services. Network security features such as microsegmentation and QoS don't require additional configuration for the firewall device, as they're implemented at the virtual network adapter layer.

Physical connectivity components


As illustrated in the diagram below, this pattern has the following physical network
components:

For northbound/southbound traffic, the cluster requires two TOR switches in MLAG
configuration.
Two teamed network cards to handle management and compute traffic, and
connected to the TOR switches. Each NIC will be connected to a different TOR
switch.

Two RDMA NICs in a full-mesh configuration for East-West storage traffic. Each
node in the cluster has a redundant connection to the other node in the cluster.

As an option, some solutions might use a headless configuration without a BMC


card for security purposes.

Networks | Management and compute | Storage | BMC
Link speed | At least 1 Gbps; 10 Gbps recommended | At least 10 Gbps | Check with hardware manufacturer
Interface type | RJ45, SFP+ or SFP28 | SFP+ or SFP28 | RJ45
Ports and aggregation | Two teamed ports | Two standalone ports | One port

Network ATC intents


For two-node storage switchless patterns, two Network ATC intents are created. The first
for management and compute network traffic, and the second for storage traffic.

Management and compute intent


Intent Type: Management and Compute
Intent Mode: Cluster mode
Teaming: Yes. pNIC01 and pNIC02 are teamed
Default Management VLAN: Configured VLAN for management adapters isn’t
modified
PA & Compute VLANs and vNICs: Network ATC is transparent to PA vNICs and
VLAN or compute VM vNICs and VLANs

Storage intent
Intent type: Storage
Intent mode: Cluster mode
Teaming: pNIC03 and pNIC04 use SMB Multichannel to provide resiliency and
bandwidth aggregation
Default VLANs:
711 for storage network 1
712 for storage network 2
Default subnets:
10.71.1.0/24 for storage network 1
10.71.2.0/24 for storage network 2

For more information, see Deploy host networking.

Follow these steps to create network intents for this reference pattern:

1. Run PowerShell as administrator.

2. Run the following command:

PowerShell

Add-NetIntent -Name <Management_Compute> -Management -Compute -ClusterName <HCI01> -AdapterName <pNIC01, pNIC02>
Add-NetIntent -Name <Storage> -Storage -ClusterName <HCI01> -AdapterName <pNIC03, pNIC04>

Logical connectivity components


As illustrated in the diagram below, this pattern has the following logical network
components:

Storage Network VLANs


The storage intent-based traffic consists of two individual networks supporting RDMA
traffic. Each interface is dedicated to a separate storage network, and both may share
the same VLAN tag. This traffic is only intended to travel between the two nodes.
Storage traffic is a private network without connectivity to other resources.

The storage adapters operate in different IP subnets. To enable a switchless configuration, each connected node supports a matching subnet of its neighbor. Each storage network uses the Network ATC predefined VLANs by default (711 and 712). These VLANs can be customized if required. In addition, if the default subnets defined by Network ATC aren't usable, you're responsible for assigning all storage IP addresses in the cluster.
For more information, see Network ATC overview.

OOB network
The Out of Band (OOB) network is dedicated to supporting the "lights-out" server
management interface also known as the baseboard management controller (BMC).
Each BMC interface connects to a customer-supplied switch. The BMC is used to
automate PXE boot scenarios.

The management network requires access to the BMC interface using Intelligent
Platform Management Interface (IPMI) User Datagram Protocol (UDP) port 623.

The OOB network is isolated from compute workloads and is optional for non-solution-
based deployments.

Management VLAN
All physical compute hosts require access to the management logical network. For IP
address planning, each physical compute host must have at least one IP address
assigned from the management logical network.

A DHCP server can automatically assign IP addresses for the management network, or
you can manually assign static IP addresses. When DHCP is the preferred IP assignment
method, we recommend that you use DHCP reservations without expiration.

The management network supports the following VLAN configurations:

Native VLAN - you aren't required to supply VLAN IDs. This is required for
solution-based installations.

Tagged VLAN - you supply VLAN IDs at the time of deployment.

The management network supports all traffic used for management of the cluster,
including Remote Desktop, Windows Admin Center, and Active Directory.

For more information, see Plan an SDN infrastructure: Management and HNV Provider.

Compute VLANs
In some scenarios, you don’t need to use SDN Virtual Networks with Virtual Extensible
LAN (VXLAN) encapsulation. Instead, you can use traditional VLANs to isolate your
tenant workloads. Those VLANs are configured on the TOR switch's port in trunk mode.
When connecting new VMs to these VLANs, the corresponding VLAN tag is defined on
the virtual network adapter.

HNV Provider Address (PA) network


The Hyper-V Network Virtualization (HNV) Provider Address (PA) network serves as the
underlying physical network for East/West (internal-internal) tenant traffic, North/South
(external-internal) tenant traffic, and to exchange BGP peering information with the
physical network. This network is only required when there's a need for deploying virtual
networks using VXLAN encapsulation for another layer of isolation and for network
multitenancy.

For more information, see Plan an SDN infrastructure: Management and HNV Provider.

Network isolation options


The following network isolation options are supported:

VLANs (IEEE 802.1Q)


VLANs allow devices that must be kept separate to share the cabling of a physical
network and yet be prevented from directly interacting with one another. This managed
sharing yields gains in simplicity, security, traffic management, and economy. For
example, a VLAN can be used to separate traffic within a business based on individual
users or groups of users or their roles, or based on traffic characteristics. Many internet
hosting services use VLANs to separate private zones from one another, allowing each
customer's servers to be grouped in a single network segment no matter where the
individual servers are located in the data center. Some precautions are needed to
prevent traffic "escaping" from a given VLAN, an exploit known as VLAN hopping.

For more information, see Understand the usage of virtual networks and VLANs.

Default network access policies and microsegmentation


Default network access policies ensure that all virtual machines (VMs) in your Azure
Stack HCI cluster are secure by default from external threats. With these policies, we'll
block inbound access to a VM by default, while giving the option to enable selective
inbound ports and thus securing the VMs from external attacks. This enforcement is
available through management tools like Windows Admin Center.
Microsegmentation involves creating granular network policies between applications
and services. This essentially reduces the security perimeter to a fence around each
application or VM. This fence permits only necessary communication between
application tiers or other logical boundaries, thus making it exceedingly difficult for
cyberthreats to spread laterally from one system to another. Microsegmentation
securely isolates networks from each other and reduces the total attack surface of a
network security incident.

Default network access policies and microsegmentation are realized as five-tuple


stateful (source address prefix, source port, destination address prefix, destination port,
and protocol) firewall rules on Azure Stack HCI clusters. Firewall rules are also known as
Network Security Groups (NSGs). These policies are enforced at the vSwitch port of each
VM. The policies are pushed through the management layer, and the SDN Network
Controller distributes them to all applicable hosts. These policies are available for VMs
on traditional VLAN networks and on SDN overlay networks.

For more information, see What is Datacenter Firewall?.

QoS for VM network adapters


You can configure Quality of Service (QoS) for a VM network adapter to limit bandwidth
on a virtual interface to prevent a high-traffic VM from contending with other VM
network traffic. You can also configure QoS to reserve a specific amount of bandwidth
for a VM to ensure that the VM can send traffic regardless of other traffic on the
network. This can be applied to VMs attached to traditional VLAN networks as well as
VMs attached to SDN overlay networks.

For more information, see Configure QoS for a VM network adapter.

Virtual networks
Network virtualization provides virtual networks to VMs similar to how server
virtualization (hypervisor) provides VMs to the operating system. Network virtualization
decouples virtual networks from the physical network infrastructure and removes the
constraints of VLAN and hierarchical IP address assignment from VM provisioning. Such
flexibility makes it easy for you to move to Infrastructure-as-a-Service (IaaS) clouds, and is efficient for hosters and datacenter administrators to manage their infrastructure while maintaining the necessary multi-tenant isolation, security requirements, and overlapping VM IP addresses.

For more information, see Hyper-V Network Virtualization.


L3 networking services options
The following L3 networking service options are available:

Virtual network peering


Virtual network peering lets you connect two virtual networks seamlessly. Once peered,
for connectivity purposes, the virtual networks appear as one. The benefits of using
virtual network peering include:

Traffic between VMs in the peered virtual networks gets routed through the
backbone infrastructure through private IP addresses only. The communication
between the virtual networks doesn't require public Internet or gateways.
A low-latency, high-bandwidth connection between resources in different virtual
networks.
The ability for resources in one virtual network to communicate with resources in a
different virtual network.
No downtime to resources in either virtual network when creating the peering.

For more information, see Virtual network peering.

SDN software load balancer


Cloud Service Providers (CSPs) and enterprises that deploy Software Defined
Networking (SDN) can use Software Load Balancer (SLB) to evenly distribute customer
network traffic among virtual network resources. SLB enables multiple servers to host
the same workload, providing high availability and scalability. It's also used to provide
inbound Network Address Translation (NAT) services for inbound access to VMs, and
outbound NAT services for outbound connectivity.

Using SLB, you can scale out your load balancing capabilities using SLB VMs on the
same Hyper-V compute servers that you use for your other VM workloads. SLB supports
rapid creation and deletion of load balancing endpoints as required for CSP operations.
In addition, SLB supports tens of gigabytes per cluster, provides a simple provisioning
model, and is easy to scale out and in. SLB uses Border Gateway Protocol to advertise
virtual IP addresses to the physical network.

For more information, see What is SLB for SDN?

SDN VPN gateways


SDN Gateway is a software-based Border Gateway Protocol (BGP) capable router
designed for CSPs and enterprises that host multi-tenant virtual networks using Hyper-V
Network Virtualization (HNV). You can use RAS Gateway to route network traffic
between a virtual network and another network, either local or remote.

SDN Gateway can be used to:

Create secure site-to-site IPsec connections between SDN virtual networks and
external customer networks over the internet.

Create Generic Routing Encapsulation (GRE) connections between SDN virtual


networks and external networks. The difference between site-to-site connections
and GRE connections is that the latter isn't an encrypted connection.

For more information about GRE connectivity scenarios, see GRE Tunneling in
Windows Server.

Create Layer 3 (L3) connections between SDN virtual networks and external
networks. In this case, the SDN gateway simply acts as a router between your
virtual network and the external network.

SDN Gateway requires SDN Network Controller. Network Controller performs the
deployment of gateway pools, configures tenant connections on each gateway, and
switches network traffic flows to a standby gateway if a gateway fails.

Gateways use Border Gateway Protocol to advertise GRE endpoints and establish point-
to-point connections. SDN deployment creates a default gateway pool that supports all
connection types. Within this pool, you can specify how many gateways are reserved on
standby in case an active gateway fails.

For more information, see What is RAS Gateway for SDN?

Next steps
Learn about the two-node storage switchless, one switch network pattern.
Review two-node storage switched,
non-converged deployment network
reference pattern for Azure Stack HCI
Article • 12/16/2022

Applies to: Azure Stack HCI, versions 22H2 and 21H2

In this article, you'll learn about the two-node storage switched, non-converged, two-
TOR-switch network reference pattern that you can use to deploy your Azure Stack HCI
solution. The information in this article will also help you determine if this configuration
is viable for your deployment planning needs. This article is targeted towards the IT
administrators who deploy and manage Azure Stack HCI in their datacenters.

For information on other network patterns, see Azure Stack HCI network deployment
patterns.

Scenarios
Scenarios for this network pattern include laboratories, factories, branch offices, and
datacenter facilities.

Deploy this pattern for enhanced network performance of your system and if you plan
to add additional nodes. East-West storage replication traffic won't interfere or compete
with north-south traffic dedicated for management and compute. The logical network
configuration for additional nodes is ready in advance, without requiring workload
downtime or physical connection changes. SDN L3 services are fully supported on this
pattern.

Routing services such as BGP can be configured directly on the TOR switches if they
support L3 services. Network security features such as microsegmentation and QoS
don't require extra configuration on the firewall device as they're implemented at the
virtual network adapter layer.

Physical connectivity components


As described in the diagram below, this pattern has the following physical network
components:
For northbound/southbound traffic, the cluster in this pattern is implemented with
two TOR switches in MLAG configuration.

Two teamed network cards to handle management and compute traffic connected
to two TOR switches. Each NIC is connected to a different TOR switch.

Two RDMA NICs in standalone configuration. Each NIC is connected to a different
TOR switch. SMB multichannel capability provides path aggregation and fault
tolerance.

As an option, deployments can include a BMC card to enable remote management
of the environment. Some solutions might use a headless configuration without a
BMC card for security purposes.


| Networks | Management and compute | Storage | BMC |
| --- | --- | --- | --- |
| Link speed | At least 1 Gbps; 10 Gbps recommended | At least 10 Gbps | Check with hardware manufacturer |
| Interface type | RJ45, SFP+ or SFP28 | SFP+ or SFP28 | RJ45 |
| Ports and aggregation | Two teamed ports | Two standalone ports | One port |

Network ATC intents

Management and compute intent


Intent type: Management and compute
Intent mode: Cluster mode
Teaming: Yes. pNIC01 and pNIC02 are teamed
Default management VLAN: Configured VLAN for management adapters isn’t
modified
PA and compute VLANs and vNICs: Network ATC is transparent to PA vNICs and
VLANs, and to compute VM vNICs and VLANs
Storage intent
Intent Type: Storage
Intent Mode: Cluster mode
Teaming: pNIC03 and pNIC04 use SMB Multichannel to provide resiliency and
bandwidth aggregation
Default VLANs:
711 for storage network 1
712 for storage network 2
Default subnets:
10.71.1.0/24 for storage network 1
10.71.2.0/24 for storage network 2

Follow these steps to create network intents for this reference pattern:

1. Run PowerShell as administrator.

2. Run the following commands:

PowerShell

Add-NetIntent -Name <Management_Compute> -Management -Compute -ClusterName <HCI01> -AdapterName <pNIC01, pNIC02>
Add-NetIntent -Name <Storage> -Storage -ClusterName <HCI01> -AdapterName <pNIC03, pNIC04>
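
After running these commands, you can optionally confirm that Network ATC has finished applying the intents on every node. The following is a minimal verification sketch; it assumes the NetworkATC PowerShell module is available where you run it and that <HCI01> is the same cluster name used above:

PowerShell

# List each intent and its provisioning state per node.
# ConfigurationStatus should report 'Success' on every host once provisioning completes.
Get-NetIntentStatus -ClusterName <HCI01> | Select-Object Host, IntentName, ConfigurationStatus, ProvisioningStatus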

Logical connectivity components


As illustrated in the diagram below, this pattern has the following logical network
components:

Storage Network VLANs


The storage intent-based traffic consists of two individual networks supporting RDMA
traffic. Each interface is dedicated to a separate storage network, and both can use the
same VLAN tag.

The storage adapters operate in different IP subnets. Each storage network uses the ATC
predefined VLANs by default (711 and 712). However, these VLANs can be customized if
necessary. In addition, if the default subnet defined by ATC isn't usable, you're
responsible for assigning all storage IP addresses in the cluster.

For more information, see Network ATC overview.
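
If the default VLANs don't fit your environment, you can specify custom values when you create the storage intent. The following sketch assumes that your Network ATC version supports the StorageVlans parameter of Add-NetIntent; the VLAN IDs 811 and 812 are hypothetical examples:

PowerShell

# Create the storage intent with custom VLAN IDs instead of the defaults (711 and 712).
Add-NetIntent -Name <Storage> -Storage -ClusterName <HCI01> -AdapterName <pNIC03, pNIC04> -StorageVlans 811, 812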


OOB network
The Out of Band (OOB) network is dedicated to supporting the "lights-out" server
management interface also known as the baseboard management controller (BMC).
Each BMC interface connects to a customer-supplied switch. The BMC is used to
automate PXE boot scenarios.

The management network requires access to the BMC interface using Intelligent
Platform Management Interface (IPMI) User Datagram Protocol (UDP) port 623.

The OOB network is isolated from compute workloads and is optional for non-solution-
based deployments.

Management VLAN
All physical compute hosts require access to the management logical network. For IP
address planning, each physical compute host must have at least one IP address
assigned from the management logical network.

A DHCP server can automatically assign IP addresses for the management network, or
you can manually assign static IP addresses. When DHCP is the preferred IP assignment
method, we recommend that you use DHCP reservations without expiration.

The management network supports the following VLAN configurations:

Native VLAN - you aren't required to supply VLAN IDs. This is required for
solution-based installations.

Tagged VLAN - you supply VLAN IDs at the time of deployment.

The management network supports all traffic used for management of the cluster,
including Remote Desktop, Windows Admin Center, and Active Directory.

For more information, see Plan an SDN infrastructure: Management and HNV Provider.

Compute VLANs
In some scenarios, you don’t need to use SDN Virtual Networks with Virtual Extensible
LAN (VXLAN) encapsulation. Instead, you can use traditional VLANs to isolate your
tenant workloads. Those VLANs are configured on the TOR switch's port in trunk mode.
When connecting new VMs to these VLANs, the corresponding VLAN tag is defined on
the virtual network adapter.
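
As an illustration of that last step, the following sketch tags a VM's virtual network adapter with a hypothetical VLAN ID 201 using the Hyper-V PowerShell module; replace the VM name and VLAN ID with your own values:

PowerShell

# Tag the VM's virtual network adapter with the compute VLAN (access mode).
Set-VMNetworkAdapterVlan -VMName "TenantVM01" -Access -VlanId 201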
HNV Provider Address (PA) network
The Hyper-V Network Virtualization (HNV) Provider Address (PA) network serves as the
underlying physical network for East/West (internal-internal) tenant traffic, North/South
(external-internal) tenant traffic, and to exchange BGP peering information with the
physical network. This network is only required when there's a need for deploying virtual
networks using VXLAN encapsulation for another layer of isolation and for network
multitenancy.

For more information, see Plan an SDN infrastructure: Management and HNV Provider.

Network isolation options


The following network isolation options are supported:

VLANs (IEEE 802.1Q)


VLANs allow devices that must be kept separate to share the cabling of a physical
network and yet be prevented from directly interacting with one another. This managed
sharing yields gains in simplicity, security, traffic management, and economy. For
example, a VLAN can be used to separate traffic within a business based on individual
users or groups of users or their roles, or based on traffic characteristics. Many internet
hosting services use VLANs to separate private zones from one another, allowing each
customer's servers to be grouped in a single network segment no matter where the
individual servers are located in the data center. Some precautions are needed to
prevent traffic "escaping" from a given VLAN, an exploit known as VLAN hopping.

For more information, see Understand the usage of virtual networks and VLANs.

Default network access policies and microsegmentation


Default network access policies ensure that all virtual machines (VMs) in your Azure
Stack HCI cluster are secure by default from external threats. These policies block
inbound access to a VM by default, while giving you the option to enable selected
inbound ports, securing the VMs from external attacks. This enforcement is
available through management tools like Windows Admin Center.

Microsegmentation involves creating granular network policies between applications
and services. This essentially reduces the security perimeter to a fence around each
application or VM. This fence permits only necessary communication between
application tiers or other logical boundaries, thus making it exceedingly difficult for
cyberthreats to spread laterally from one system to another. Microsegmentation
securely isolates networks from each other and reduces the total attack surface of a
network security incident.

Default network access policies and microsegmentation are realized as five-tuple
stateful (source address prefix, source port, destination address prefix, destination port,
and protocol) firewall rules on Azure Stack HCI clusters. Firewall rules are also known as
Network Security Groups (NSGs). These policies are enforced at the vSwitch port of each
VM. The policies are pushed through the management layer, and the SDN Network
Controller distributes them to all applicable hosts. These policies are available for VMs
on traditional VLAN networks and on SDN overlay networks.

For more information, see What is Datacenter Firewall?.

QoS for VM network adapters


You can configure Quality of Service (QoS) for a VM network adapter to limit bandwidth
on a virtual interface to prevent a high-traffic VM from contending with other VM
network traffic. You can also configure QoS to reserve a specific amount of bandwidth
for a VM to ensure that the VM can send traffic regardless of other traffic on the
network. This can be applied to VMs attached to traditional VLAN networks as well as
VMs attached to SDN overlay networks.

For more information, see Configure QoS for a VM network adapter.
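
For illustration, the following sketch applies both techniques with hypothetical VM names and values; it assumes the Hyper-V PowerShell module and, for the weight-based reservation, a virtual switch that uses the default Weight bandwidth reservation mode:

PowerShell

# Cap a high-traffic VM's adapter at roughly 1 Gbps (the value is in bits per second).
Set-VMNetworkAdapter -VMName "TenantVM01" -MaximumBandwidth 1000000000

# Give another VM a relative bandwidth reservation so it can always send traffic.
Set-VMNetworkAdapter -VMName "TenantVM02" -MinimumBandwidthWeight 20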

Virtual networks
Network virtualization provides virtual networks to VMs similar to how server
virtualization (hypervisor) provides VMs to the operating system. Network virtualization
decouples virtual networks from the physical network infrastructure and removes the
constraints of VLAN and hierarchical IP address assignment from VM provisioning. Such
flexibility makes it easy for you to move to Infrastructure-as-a-Service (IaaS) clouds, and
lets hosters and datacenter administrators manage their infrastructure efficiently while
maintaining the necessary multi-tenant isolation, security requirements, and overlapping
VM IP addresses.

For more information, see Hyper-V Network Virtualization.

L3 networking services options


The following L3 networking service options are available:
Virtual network peering
Virtual network peering lets you connect two virtual networks seamlessly. Once peered,
for connectivity purposes, the virtual networks appear as one. The benefits of using
virtual network peering include:

Traffic between VMs in the peered virtual networks gets routed through the
backbone infrastructure through private IP addresses only. The communication
between the virtual networks doesn't require public Internet or gateways.
A low-latency, high-bandwidth connection between resources in different virtual
networks.
The ability for resources in one virtual network to communicate with resources in a
different virtual network.
No downtime to resources in either virtual network when creating the peering.

For more information, see Virtual network peering.

SDN software load balancer


Cloud Service Providers (CSPs) and enterprises that deploy Software Defined
Networking (SDN) can use Software Load Balancer (SLB) to evenly distribute customer
network traffic among virtual network resources. SLB enables multiple servers to host
the same workload, providing high availability and scalability. It's also used to provide
inbound Network Address Translation (NAT) services for inbound access to VMs, and
outbound NAT services for outbound connectivity.

Using SLB, you can scale out your load balancing capabilities using SLB VMs on the
same Hyper-V compute servers that you use for your other VM workloads. SLB supports
rapid creation and deletion of load balancing endpoints as required for CSP operations.
In addition, SLB supports tens of gigabytes per cluster, provides a simple provisioning
model, and is easy to scale out and in. SLB uses Border Gateway Protocol to advertise
virtual IP addresses to the physical network.

For more information, see What is SLB for SDN?

SDN VPN gateways


SDN Gateway is a software-based Border Gateway Protocol (BGP) capable router
designed for CSPs and enterprises that host multi-tenant virtual networks using Hyper-V
Network Virtualization (HNV). You can use RAS Gateway to route network traffic
between a virtual network and another network, either local or remote.
SDN Gateway can be used to:

Create secure site-to-site IPsec connections between SDN virtual networks and
external customer networks over the internet.

Create Generic Routing Encapsulation (GRE) connections between SDN virtual
networks and external networks. The difference between site-to-site connections
and GRE connections is that the latter isn't an encrypted connection.

For more information about GRE connectivity scenarios, see GRE Tunneling in
Windows Server.

Create Layer 3 (L3) connections between SDN virtual networks and external
networks. In this case, the SDN gateway simply acts as a router between your
virtual network and the external network.

SDN Gateway requires SDN Network Controller. Network Controller performs the
deployment of gateway pools, configures tenant connections on each gateway, and
switches network traffic flows to a standby gateway if a gateway fails.

Gateways use Border Gateway Protocol to advertise GRE endpoints and establish point-
to-point connections. SDN deployment creates a default gateway pool that supports all
connection types. Within this pool, you can specify how many gateways are reserved on
standby in case an active gateway fails.

For more information, see What is RAS Gateway for SDN?

Next steps
Learn about the two-node storage switched, fully converged network pattern.
Review two-node storage switched, fully
converged deployment network
reference pattern for Azure Stack HCI
Article • 12/12/2022

Applies to: Azure Stack HCI, versions 22H2 and 21H2

In this article, you'll learn about the two-node storage switched, fully converged with
two TOR switches network reference pattern that you can use to deploy your Azure
Stack HCI solution. The information in this article will also help you determine if this
configuration is viable for your deployment planning needs. This article is targeted
towards the IT administrators who deploy and manage Azure Stack HCI in their
datacenters.

For information on other network patterns, see Azure Stack HCI network deployment
patterns.

Scenarios
Scenarios for this network pattern include laboratories, branch offices, and datacenter
facilities.

Consider this pattern if you plan to add additional nodes and your bandwidth
requirements for north-south traffic don't require dedicated adapters. This solution
might be a good option when physical switch ports are scarce and you're looking for
cost reductions for your solution. This pattern involves additional operational cost to
fine-tune the QoS policies of the shared host network adapters so that storage traffic is
protected from workload and management traffic. SDN L3 services are fully supported on this pattern.

Routing services such as BGP can be configured directly on the TOR switches if they
support L3 services. Network security features such as microsegmentation and QoS
don't require extra configuration on the firewall device as they're implemented at the
virtual network adapter layer.

Physical connectivity components


As described in the diagram below, this pattern has the following physical network
components:
For northbound/southbound traffic, the cluster in this pattern is implemented with
two TOR switches in MLAG configuration.

Two teamed network cards handle the management, compute, and RDMA storage
traffic connected to the TOR switches. Each NIC is connected to a different TOR
switch. SMB multichannel capability provides path aggregation and fault tolerance.

As an option, deployments can include a BMC card to enable remote management
of the environment. Some solutions might use a headless configuration without a
BMC card for security purposes.

| Networks | Management, compute, storage | BMC |
| --- | --- | --- |
| Link speed | At least 10 Gbps | Check with hardware manufacturer |
| Interface type | SFP+ or SFP28 | RJ45 |
| Ports and aggregation | Two teamed ports | One port |

Network ATC intents

Management, compute, and storage intent


Intent Type: Management, compute, and storage
Intent Mode: Cluster mode
Teaming: Yes. pNIC01 and pNIC02 are teamed
Default Management VLAN: Configured VLAN for management adapters isn’t
modified
Storage vNIC 1:
VLAN 711
Subnet 10.71.1.0/24 for storage network 1
Storage vNIC 2:
VLAN 712
Subnet 10.71.2.0/24 for storage network 2
Storage vNIC1 and storage vNIC2 use SMB Multichannel to provide resiliency and
bandwidth aggregation
PA VLANs and vNICs: Network ATC is transparent to PA vNICs and VLANs
Compute VLANs and vNICs: Network ATC is transparent to compute VM vNICs and
VLANs

For more information, see Deploy host networking.

Follow these steps to create network intents for this reference pattern:

1. Run PowerShell as administrator.

2. Run the following command:

PowerShell

Add-NetIntent -Name <Management_Compute> -Management -Compute -Storage -ClusterName <HCI01> -AdapterName <pNIC01, pNIC02>
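
Because this single intent carries management, compute, and storage traffic, you can optionally verify the intent and the SMB Multichannel paths once the cluster is up. This is a minimal sketch and assumes the NetworkATC module is present on the node where you run it:

PowerShell

# List the intents configured on the cluster and the traffic types each one carries.
Get-NetIntent -ClusterName <HCI01>

# Confirm that SMB Multichannel has established multiple paths for storage traffic.
Get-SmbMultichannelConnection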

Logical connectivity components


As illustrated in the diagram below, this pattern has the following logical network
components:

Storage Network VLANs


The storage intent-based traffic in this pattern shares the physical network adapters with
management and compute.

The storage network operates in different IP subnets. Each storage network uses the ATC
predefined VLANs by default (711 and 712). However, these VLANs can be customized if
necessary. In addition, if the default subnet defined by ATC isn't usable, you're
responsible for assigning all storage IP addresses in the cluster.

For more information, see Network ATC overview.


OOB network
The Out of Band (OOB) network is dedicated to supporting the "lights-out" server
management interface also known as the baseboard management controller (BMC).
Each BMC interface connects to a customer-supplied switch. The BMC is used to
automate PXE boot scenarios.

The management network requires access to the BMC interface using Intelligent
Platform Management Interface (IPMI) User Datagram Protocol (UDP) port 623.

The OOB network is isolated from compute workloads and is optional for non-solution-
based deployments.

Management VLAN
All physical compute hosts require access to the management logical network. For IP
address planning, each physical compute host must have at least one IP address
assigned from the management logical network.

A DHCP server can automatically assign IP addresses for the management network, or
you can manually assign static IP addresses. When DHCP is the preferred IP assignment
method, we recommend that you use DHCP reservations without expiration.

The management network supports the following VLAN configurations:

Native VLAN - you aren't required to supply VLAN IDs. This is required for
solution-based installations.

Tagged VLAN - you supply VLAN IDs at the time of deployment.

The management network supports all traffic used for management of the cluster,
including Remote Desktop, Windows Admin Center, and Active Directory.

For more information, see Plan an SDN infrastructure: Management and HNV Provider.

Compute VLANs
In some scenarios, you don’t need to use SDN Virtual Networks with Virtual Extensible
LAN (VXLAN) encapsulation. Instead, you can use traditional VLANs to isolate your
tenant workloads. Those VLANs are configured on the TOR switch's port in trunk mode.
When connecting new VMs to these VLANs, the corresponding VLAN tag is defined on
the virtual network adapter.
HNV Provider Address (PA) network
The Hyper-V Network Virtualization (HNV) Provider Address (PA) network serves as the
underlying physical network for East/West (internal-internal) tenant traffic, North/South
(external-internal) tenant traffic, and to exchange BGP peering information with the
physical network. This network is only required when there's a need for deploying virtual
networks using VXLAN encapsulation for another layer of isolation and for network
multitenancy.

For more information, see Plan an SDN infrastructure: Management and HNV Provider.

Network isolation options


The following network isolation options are supported:

VLANs (IEEE 802.1Q)


VLANs allow devices that must be kept separate to share the cabling of a physical
network and yet be prevented from directly interacting with one another. This managed
sharing yields gains in simplicity, security, traffic management, and economy. For
example, a VLAN can be used to separate traffic within a business based on individual
users or groups of users or their roles, or based on traffic characteristics. Many internet
hosting services use VLANs to separate private zones from one another, allowing each
customer's servers to be grouped in a single network segment no matter where the
individual servers are located in the data center. Some precautions are needed to
prevent traffic "escaping" from a given VLAN, an exploit known as VLAN hopping.

For more information, see Understand the usage of virtual networks and VLANs.

Default network access policies and microsegmentation


Default network access policies ensure that all virtual machines (VMs) in your Azure
Stack HCI cluster are secure by default from external threats. These policies block
inbound access to a VM by default, while giving you the option to enable selected
inbound ports, securing the VMs from external attacks. This enforcement is
available through management tools like Windows Admin Center.

Microsegmentation involves creating granular network policies between applications
and services. This essentially reduces the security perimeter to a fence around each
application or VM. This fence permits only necessary communication between
application tiers or other logical boundaries, thus making it exceedingly difficult for
cyberthreats to spread laterally from one system to another. Microsegmentation
securely isolates networks from each other and reduces the total attack surface of a
network security incident.

Default network access policies and microsegmentation are realized as five-tuple
stateful (source address prefix, source port, destination address prefix, destination port,
and protocol) firewall rules on Azure Stack HCI clusters. Firewall rules are also known as
Network Security Groups (NSGs). These policies are enforced at the vSwitch port of each
VM. The policies are pushed through the management layer, and the SDN Network
Controller distributes them to all applicable hosts. These policies are available for VMs
on traditional VLAN networks and on SDN overlay networks.

For more information, see What is Datacenter Firewall?.

QoS for VM network adapters


You can configure Quality of Service (QoS) for a VM network adapter to limit bandwidth
on a virtual interface to prevent a high-traffic VM from contending with other VM
network traffic. You can also configure QoS to reserve a specific amount of bandwidth
for a VM to ensure that the VM can send traffic regardless of other traffic on the
network. This can be applied to VMs attached to traditional VLAN networks as well as
VMs attached to SDN overlay networks.

For more information, see Configure QoS for a VM network adapter.

Virtual networks
Network virtualization provides virtual networks to VMs similar to how server
virtualization (hypervisor) provides VMs to the operating system. Network virtualization
decouples virtual networks from the physical network infrastructure and removes the
constraints of VLAN and hierarchical IP address assignment from VM provisioning. Such
flexibility makes it easy for you to move to Infrastructure-as-a-Service (IaaS) clouds, and
lets hosters and datacenter administrators manage their infrastructure efficiently while
maintaining the necessary multi-tenant isolation, security requirements, and overlapping
VM IP addresses.

For more information, see Hyper-V Network Virtualization.

L3 networking services options


The following L3 networking service options are available:
Virtual network peering
Virtual network peering lets you connect two virtual networks seamlessly. Once peered,
for connectivity purposes, the virtual networks appear as one. The benefits of using
virtual network peering include:

Traffic between VMs in the peered virtual networks gets routed through the
backbone infrastructure through private IP addresses only. The communication
between the virtual networks doesn't require public Internet or gateways.
A low-latency, high-bandwidth connection between resources in different virtual
networks.
The ability for resources in one virtual network to communicate with resources in a
different virtual network.
No downtime to resources in either virtual network when creating the peering.

For more information, see Virtual network peering.

SDN software load balancer


Cloud Service Providers (CSPs) and enterprises that deploy Software Defined
Networking (SDN) can use Software Load Balancer (SLB) to evenly distribute customer
network traffic among virtual network resources. SLB enables multiple servers to host
the same workload, providing high availability and scalability. It's also used to provide
inbound Network Address Translation (NAT) services for inbound access to VMs, and
outbound NAT services for outbound connectivity.

Using SLB, you can scale out your load balancing capabilities using SLB VMs on the
same Hyper-V compute servers that you use for your other VM workloads. SLB supports
rapid creation and deletion of load balancing endpoints as required for CSP operations.
In addition, SLB supports tens of gigabytes per cluster, provides a simple provisioning
model, and is easy to scale out and in. SLB uses Border Gateway Protocol to advertise
virtual IP addresses to the physical network.

For more information, see What is SLB for SDN?

SDN VPN gateways


SDN Gateway is a software-based Border Gateway Protocol (BGP) capable router
designed for CSPs and enterprises that host multi-tenant virtual networks using Hyper-V
Network Virtualization (HNV). You can use RAS Gateway to route network traffic
between a virtual network and another network, either local or remote.
SDN Gateway can be used to:

Create secure site-to-site IPsec connections between SDN virtual networks and
external customer networks over the internet.

Create Generic Routing Encapsulation (GRE) connections between SDN virtual
networks and external networks. The difference between site-to-site connections
and GRE connections is that the latter isn't an encrypted connection.

For more information about GRE connectivity scenarios, see GRE Tunneling in
Windows Server.

Create Layer 3 (L3) connections between SDN virtual networks and external
networks. In this case, the SDN gateway simply acts as a router between your
virtual network and the external network.

SDN Gateway requires SDN Network Controller. Network Controller performs the
deployment of gateway pools, configures tenant connections on each gateway, and
switches network traffic flows to a standby gateway if a gateway fails.

Gateways use Border Gateway Protocol to advertise GRE endpoints and establish point-
to-point connections. SDN deployment creates a default gateway pool that supports all
connection types. Within this pool, you can specify how many gateways are reserved on
standby in case an active gateway fails.

For more information, see What is RAS Gateway for SDN?

Next steps
Learn about the two-node storage switched, non-converged network pattern.
Review two-node storage reference
pattern components for Azure Stack HCI
Article • 12/12/2022

Applies to: Azure Stack HCI, versions 22H2 and 21H2

In this article, you'll learn about which network components get deployed for two-node
reference patterns, as shown below:

VM components
The following table lists all the components running on VMs for two-node network
patterns:

| Component | Number of VMs | OS disk size | Data disk size | vCPUs | Memory |
| --- | --- | --- | --- | --- | --- |
| Network Controller | 1 | 100 GB | 30 GB | 4 | 4 GB |
| SDN Software Load Balancers (SLB) | 1 | 60 GB | 30 GB | 16 | 8 GB |
| SDN Gateways | 1 | 60 GB | 30 GB | 8 | 8 GB |
| OEM Management | OEM defined | OEM defined | OEM defined | OEM defined | OEM defined |
| Total | 3 + OEM | 270 GB + OEM | 90 GB + OEM | 32 + OEM | 28 GB + OEM |
Default components

Network Controller VM
The Network Controller VM is optional. If the Network Controller VM isn't
deployed, the default network access policies won't be available. The VM is also
needed if you have any of the following requirements:

Create and manage virtual networks. Connect virtual machines (VMs) to virtual
network subnets.

Configure and manage micro-segmentation for VMs connected to virtual networks
or traditional VLAN-based networks.

Attach virtual appliances to your virtual networks.

Configure Quality of Service (QoS) policies for VMs attached to virtual networks or
traditional VLAN-based networks.

Optional components
The following are optional components. For more information on Software Defined
Networking (SDN), see Plan a Software Defined Network infrastructure.

SDN Load Balancer VM

The SDN Software Load Balancer (SLB) VM is used to evenly distribute customer
network traffic among multiple VMs. It enables multiple servers to host the same
workload, providing high availability and scalability. It's also used to provide inbound
Network Address Translation (NAT) services for inbound access to virtual machines, and
outbound NAT services for outbound connectivity.

SDN Gateway VM
The SDN Gateway VM is used for routing network traffic between a virtual network and
another network, either local or remote. Gateways can be used to:

Create secure site-to-site IPsec connections between SDN virtual networks and
external customer networks over the internet.

Create Generic Routing Encapsulation (GRE) connections between SDN virtual
networks and external networks. The difference between site-to-site connections
and GRE connections is that the latter isn't an encrypted connection. For more
information about GRE connectivity scenarios, see GRE Tunneling in Windows
Server.

Create Layer 3 connections between SDN virtual networks and external networks.
In this case, the SDN gateway simply acts as a router between your virtual network
and the external network.

Host service and agent components


The following components run as services or agents on the host server:

Arc host agent: Enables you to manage your Windows and Linux computers hosted
outside of Azure on your corporate network or other cloud providers.

Network Controller host agent: Allows Network Controller to manage the goal state of
the data plane, and to receive notification of events as the configuration of the data
plane changes.

Monitor host agent: Orchestrator-managed agent used for emitting observability
(telemetry and diagnostics) pipeline data that uploads to Geneva (Azure Storage).

Software Load Balancer host agent: Listens for policy updates from the Network
Controller. In addition, this agent programs agent rules into the SDN-enabled Hyper-V
virtual switches that are configured on the local computer.

Next steps
Learn about Two-node deployment IP requirements.
Review two-node storage reference
pattern IP requirements for Azure Stack
HCI
Article • 11/10/2022

Applies to: Azure Stack HCI, versions 22H2 and 21H2

In this article, learn about the IP requirements for deploying a two-node network
reference pattern in your environment.

Deployments without microsegmentation and QoS enabled
| Network component | IP | Network ATC intent | Network routing | Subnet properties | Required IPs |
| --- | --- | --- | --- | --- | --- |
| Storage 1 | 1 IP for each host | Storage | No defined gateway. IP-less L2 VLAN. | Network ATC managed subnet. Default VLAN tag 711. | 2 |
| Storage 2 | 1 IP for each host | Storage | No defined gateway. IP-less L2 VLAN. | Network ATC managed subnet. Default VLAN tag 712. | 2 |
| Management | 1 IP for each host, 1 IP for Failover Cluster, 1 IP for OEM VM (optional) | Management | Connected (outbound internet access required). Disconnected (Arc autonomous controller). | Customer-defined management VLAN. (Native VLAN preferred but trunk mode supported.) | 2 required, 1 optional |
| Total | | | | | 6 required, 1 optional for OEM VM |
Deployments with microsegmentation and QoS enabled

| Network component | IP | Network ATC intent | Network routing | Subnet properties | Required IPs |
| --- | --- | --- | --- | --- | --- |
| Storage 1 | 1 IP for each host | Storage | No defined gateway. IP-less L2 VLAN. | Network ATC managed subnet. Default VLAN tag 711. | 2 |
| Storage 2 | 1 IP for each host | Storage | No defined gateway. IP-less L2 VLAN. | Network ATC managed subnet. Default VLAN tag 712. | 2 |
| Management | 1 IP for each host, 1 IP for Failover Cluster, 1 IP for Network Controller VM, 1 IP for Arc VM management stack VM, 1 IP for OEM VM (new) | Management | Connected (outbound internet access required). Disconnected (Arc autonomous controller). | Customer-defined management VLAN. (Native VLAN preferred but trunk mode supported.) | 5 required, 1 optional |
| Total | | | | | 9 minimum, 10 maximum |

Deployments with SDN optional services


| Network component | IP component | Network ATC intent | Network routing | Subnet properties | Required IPs |
| --- | --- | --- | --- | --- | --- |
| Storage 1 | 1 IP for each host | Storage | No defined gateway. IP-less L2 VLAN. | Network ATC managed subnet. Default VLAN tag 711. | 2 |
| Storage 2 | 1 IP for each host | Storage | No defined gateway. IP-less L2 VLAN. | Network ATC managed subnet. Default VLAN tag 712. | 2 |
| Tenant compute | Tenant VM IPs connected to corresponding VLANs | Compute | Tenant VLAN routing/access | Customer-defined, customer-managed. VLAN trunk configuration on the physical switches required. | |
| Management | 1 IP for each host, 1 IP for Failover Cluster, 1 IP for Network Controller VM, 1 IP for Arc VM management stack VM, 1 IP for OEM VM (new). Two-node: 1 Network Controller VM IP, 1 Software Load Balancer (SLB) VM IP, 1 gateway VM IP. | Management | Connected (outbound internet access required). Disconnected (Arc autonomous controller). | Customer-defined management VLAN. (Native VLAN preferred but trunk mode supported.) | 7 required, 1 optional |
| HNV | 2 IPs for each host. Two-node: 1 SLB VM IP, 1 gateway VM IP. | N/A | Requires default gateway to route the packets externally. | Provider Address Network VLAN. Subnet size needs to allocate hosts and SLB VMs; potential subnet growth to be considered. | NC-managed IPs |
| Public VIPs | SLB and gateway public VIPs | N/A | Advertised through BGP | | Network Controller-managed IPs |
| Private VIPs | SLB private VIPs | N/A | Advertised through BGP | | Network Controller-managed IPs |
| GRE VIPs | GRE connections for gateway VIPs | N/A | Advertised through BGP | | Network Controller-managed IPs |
| L3 Forwarding | N/A | | Separate physical network subnet to communicate with virtual network | | |
| Total | | | | | 11 minimum, 12 maximum |

Next steps
Choose a reference pattern.
Review two-node storage reference
pattern decision matrix for Azure Stack
HCI
Article • 07/20/2023

Applies to: Azure Stack HCI, versions 22H2 and 21H2

Study the two-node storage reference pattern decision matrix to help decide which
reference pattern is best suited for your deployment needs:

| Feature | Storage switchless, single switch | Storage switchless, two switches | Storage switched, non-converged | Storage switched, fully converged |
| --- | --- | --- | --- | --- |
| Scalable pattern | unsuitable | unsuitable | suitable | suitable |
| HA solution | unsuitable | suitable | suitable | suitable |
| VLAN-based tenants | suitable | suitable | suitable | suitable |
| SDN L3 integration | neutral | suitable | suitable | suitable |
| Total cost of ownership (TCO) | suitable | neutral | neutral | neutral |
| Compacted/portable solution | suitable | neutral | unsuitable | unsuitable |
| RDMA performance | neutral | neutral | suitable | neutral |
| Physical switch operational costs | suitable | neutral | neutral | unsuitable |
| Physical switch routing and ACLs | neutral | neutral | neutral | neutral |

Next steps
Download Azure Stack HCI
Review SDN considerations for network
reference patterns
Article • 12/12/2022

Applies to: Azure Stack HCI, versions 22H2 and 21H2

In this article, you'll review considerations when deploying Software Defined Networking
(SDN) in your Azure Stack HCI cluster.

SDN hardware requirements


When using SDN, you must ensure that the physical switches used in your Azure Stack
HCI cluster support a set of capabilities that are documented at Plan a Software Defined
Network infrastructure.

If you are using SDN Software Load Balancers (SLB) or Gateway Generic Routing
Encapsulation (GRE) gateways, you must also configure Border Gateway Protocol (BGP)
peering with the top of rack (ToR) switches so that the SLB and GRE Virtual IP addresses
(VIPs) can be advertised. For more information, see Switches and routers.

SDN Network Controller


SDN Network Controller is the centralized control plane to provision and manage
networking services for your Azure Stack HCI workloads. It provides virtual network
management, microsegmentation through Network Security Groups (NSGs),
management of Quality of Service (QoS) policies, virtual appliance chaining to allow you
to bring in third-party appliances, and is also responsible for managing SLB and GRE.
SLBs leverage first-party virtual appliances to provide high availability to applications,
while gateways are used to provide external network connectivity to workloads.

For more information about Network Controller, see What is Network Controller.

SDN configuration options


Based on your requirements, you may need to deploy a subset of the SDN
infrastructure. For example, if you want to only host customer workloads in your
datacenter, and external communication is not required, you can deploy Network
Controller and skip deploying SLB/MUX and gateway VMs. The following describes
networking feature infrastructure requirements for a phased deployment of the SDN
infrastructure.

| Feature | Deployment requirements | Network requirements |
| --- | --- | --- |
| Logical network management, NSGs for VLAN-based networks, QoS for VLAN-based networks | Network Controller | None |
| Virtual networking, User Defined Routing, ACLs for virtual networks, encrypted subnets, QoS for virtual networks, virtual network peering | Network Controller | HNV PA VLAN, subnet, router |
| Inbound/outbound NAT, load balancing | Network Controller, SLB/MUX | BGP on HNV PA network; private and public VIP subnets |
| GRE gateway connections | Network Controller, SLB/MUX, Gateway | BGP on HNV PA network; private and public VIP subnets; GRE VIP subnet |
| IPsec gateway connections | Network Controller, SLB/MUX, Gateway | BGP on HNV PA network; private and public VIP subnets |
| L3 gateway connections | Network Controller, SLB/MUX, Gateway | BGP on HNV PA network; private and public VIP subnets; tenant VLAN, subnet, router; BGP on tenant VLAN optional |

Next steps
Choose a network pattern to review.
Network considerations for cloud
deployments of Azure Stack HCI, version
23H2
Article • 03/01/2024

Applies to: Azure Stack HCI, version 23H2

This article discusses how to design and plan an Azure Stack HCI, version 23H2 system
network for cloud deployment. Before you continue, familiarize yourself with the various
Azure Stack HCI networking patterns and available configurations.

Network design framework


The following diagram shows the various decisions and steps that define the network
design framework for your Azure Stack HCI system - cluster size, cluster storage
connectivity, network traffic intents, management connectivity, and network adapter
configuration. Each design decision enables or permits the design options available in
subsequent steps:

Step 1: Determine cluster size



To help determine the size of your Azure Stack HCI system, use the Azure Stack HCI sizer
tool , where you can define your profile such as number of virtual machines (VMs), size
of the VMs, and the workload use of the VMs such as Azure Virtual Desktop, SQL Server,
or AKS.

As described in the Azure Stack HCI system server requirements article, the maximum
number of servers supported on Azure Stack HCI system is 16. Once you complete your
workload capacity planning, you should have a good understanding of the number of
server nodes required to run workloads on your infrastructure.

If your workloads require four or more nodes: You can't deploy and use a
switchless configuration for storage network traffic. You need to include a physical
switch with support for Remote Direct Memory Access (RDMA) to handle storage
traffic. For more information on Azure Stack HCI cluster network architecture, see
Network reference patterns overview.

If your workloads require three or fewer nodes: You can choose either switchless or
switched configurations for storage connectivity.

If you plan to scale out later to more than three nodes: You need to use a
physical switch for storage network traffic. Any scale out operation for switchless
deployments requires manual configuration of your network cabling between the
nodes that Microsoft isn't actively validating as part of its software development
cycle for Azure Stack HCI.

Here are the summarized considerations for the cluster size decision:

| Decision | Consideration |
| --- | --- |
| Cluster size (number of nodes per cluster) | Switchless configuration via the Azure portal or ARM templates is only available for 1, 2, or 3 node clusters. Clusters with 4 or more nodes require a physical switch for the storage network traffic. |
| Scale out requirements | If you intend to scale out your cluster using the orchestrator, you need to use a physical switch for the storage network traffic. |

Step 2: Determine cluster storage connectivity



As described in Physical network requirements, Azure Stack HCI supports two types of
connectivity for storage network traffic:

Use a physical network switch to handle the traffic.


Directly connect the nodes between them with crossover network or fiber cables
for the storage traffic.

The advantages and disadvantages of each option are documented in the article linked
above.

As stated previously, you can only decide between the two options when your cluster
has three or fewer nodes. Any cluster with four or more nodes is automatically
deployed using a network switch for storage.

If your cluster has three or fewer nodes, the storage connectivity decision influences the
number and type of network intents you can define in the next step.

For example, for switchless configurations, you need to define two network traffic
intents. Storage traffic for east-west communications using the crossover cables doesn't
have north-south connectivity and it is completely isolated from the rest of your
network infrastructure. That means you need to define a second network intent for
management outbound connectivity and for your compute workloads.

Although it is possible to define each network intent with only one physical network
adapter port, that doesn't provide any fault tolerance. As such, we always
recommend using at least two physical network ports for each network intent.
decide to use a network switch for storage, you can group all network traffic including
storage in a single network intent, which is also known as a hyperconverged or fully
converged host network configuration.

Here are the summarized considerations for the cluster storage connectivity decision:

| Decision | Consideration |
| --- | --- |
| No switch for storage | Switchless configuration via Azure portal or ARM template deployment is only supported for 1, 2, or 3 node clusters. 1 or 2 node storage switchless clusters can be deployed using the Azure portal or ARM templates. 3 node storage switchless clusters can be deployed only using ARM templates. Scale out operations aren't supported with switchless deployments; any change to the number of nodes after the deployment requires a manual configuration. At least 2 network intents are required when using the storage switchless configuration. |
| Network switch for storage | If you intend to scale out your cluster using the orchestrator, you need to use a physical switch for the storage network traffic. You can use this architecture with any number of nodes between 1 and 16. Although it isn't enforced, you can use a single intent for all your network traffic types (Management, Compute, and Storage). |

The following diagram summarizes storage connectivity options available to you for
various deployments:

Step 3: Determine network traffic intents


For Azure Stack HCI, all deployments rely on Network ATC for the host network
configuration. The network intents are automatically configured when deploying Azure
Stack HCI via the Azure portal. To understand more about the network intents and how
to troubleshoot those, see Common network ATC commands.
This section explains the implications of your design decision for network traffic intents,
and how they influence the next step of the framework. For cloud deployments, you can
select between four options to group your network traffic into one or more intents. The
options available depend on the number of nodes in your cluster and the storage
connectivity type used.

The available network intent options are discussed in the following sections.

Network intent: Group all traffic


Network ATC configures a unique intent that includes management, compute, and
storage network traffic. The network adapters assigned to this intent share bandwidth
and throughput for all network traffic.

This option requires a physical switch for storage traffic. If you require a switchless
architecture, you can't use this type of intent. Azure portal automatically filters out
this option if you select a switchless configuration for storage connectivity.

At least two network adapter ports are recommended to ensure High Availability.

At least 10 Gbps network interfaces are required to support RDMA traffic for
storage.

Network intent: Group management and compute traffic


Network ATC configures two intents. The first intent includes management and compute
network traffic, and the second intent includes only storage network traffic. Each intent
must have a different set of network adapter ports.

You can use this option for both switched and switchless storage connectivity, if:

At least two network adapter ports are available for each intent to ensure high
availability.

A physical switch is used for RDMA if you use the network switch for storage.

At least 10 Gbps network interfaces are required to support RDMA traffic for
storage.

Network intent: Group compute and storage traffic


Network ATC configures two intents. The first intent includes compute and storage
network traffic, and the second intent includes only management network traffic. Each
intent must use a different set of network adapter ports.

This option requires a physical switch for storage traffic as the same ports are
shared with compute traffic, which require north-south communication. If you
require a switchless configuration, you can't use this type of intent. Azure portal
automatically filters out this option if you select a switchless configuration for
storage connectivity.

This option requires a physical switch for RDMA.

At least two network adapter ports are recommended to ensure high availability.

At least 10 Gbps network interfaces are recommended for the compute and
storage intent to support RDMA traffic.

Even when the management intent is declared without a compute intent, Network
ATC creates a Switch Embedded Teaming (SET) virtual switch to provide high
availability to the management network.

Network intent: Custom configuration


Define up to three intents using your own configuration as long as at least one of the
intents includes management traffic. We recommend that you use this option when you
need a second compute intent. Scenarios for this second compute intent requirement
include remote storage traffic, VMs backup traffic, or a separate compute intent for
distinct types of workloads.

Use this option for both switched and switchless storage connectivity if the storage
intent is different from the other intents.

Use this option when another compute intent is required or when you want to fully
separate the distinct types of traffic over different network adapters.

Use at least two network adapter ports for each intent to ensure high availability.

At least 10 Gbps network interfaces are recommended for the compute and
storage intent to support RDMA traffic.
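
As a sketch of a custom configuration, the following hedged example defines three intents: a management and compute intent, a second compute intent for backup or remote storage traffic, and a dedicated storage intent. The cluster, intent, and adapter names are placeholders to adjust for your environment:

PowerShell

# Management and compute traffic share one pair of adapters.
Add-NetIntent -Name MgmtCompute -Management -Compute -ClusterName HCI01 -AdapterName pNIC01, pNIC02

# A second compute intent dedicated to backup or remote storage traffic.
Add-NetIntent -Name Compute2 -Compute -ClusterName HCI01 -AdapterName pNIC03, pNIC04

# Storage traffic gets its own pair of adapters.
Add-NetIntent -Name Storage -Storage -ClusterName HCI01 -AdapterName pNIC05, pNIC06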

The following diagram summarizes network intent options available to you for various
deployments:

Step 4: Determine management network connectivity

In this step, you define the infrastructure subnet address space, how these addresses are
assigned to your cluster, and if there is any proxy or VLAN ID requirement for the nodes
for outbound connectivity to the internet and other intranet services such as Domain
Name System (DNS) or Active Directory Services.

The following infrastructure subnet components must be planned and defined before
you start deployment so you can anticipate any routing, firewall, or subnet
requirements.

Network adapter drivers


Once you install the operating system, and before configuring networking on your
nodes, you must ensure that your network adapters have the latest driver provided by
your OEM or network interface vendor. Important capabilities of the network adapters
might not surface when using the default Microsoft drivers.
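
One quick way to review the installed drivers on each node is with Get-NetAdapter; this is an illustrative check only, using standard properties of the NetAdapter module output:

PowerShell

# Review the driver provider, version, and date for each physical network adapter.
Get-NetAdapter -Physical | Select-Object Name, InterfaceDescription, DriverProvider, DriverVersion, DriverDate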

Management IP pool
When doing the initial deployment of your Azure Stack HCI system, you must define an
IP range of consecutive IPs for infrastructure services deployed by default.
To ensure the range has enough IPs for current and future infrastructure services, you
must use a range of at least six consecutive available IP addresses. These addresses are
used for the cluster IP, the Azure Resource Bridge VM, and its components.

If you anticipate running other services in the infrastructure network, we recommend
that you assign an extra buffer of infrastructure IPs to the pool. It is possible to add
other IP pools after deployment for the infrastructure network using PowerShell if the
size of the pool you planned originally gets exhausted.

The following conditions must be met when defining your IP pool for the infrastructure
subnet during deployment:

| # | Condition |
| --- | --- |
| 1 | The IP range must use consecutive IPs, and all IPs must be available within that range. |
| 2 | The range of IPs must not include the cluster node management IPs, but must be on the same subnet as your nodes. |
| 3 | The default gateway defined for the management IP pool must provide outbound connectivity to the internet. |
| 4 | The DNS servers must ensure name resolution with Active Directory and the internet. |

Management VLAN ID
We recommend that the management subnet of your Azure HCI cluster use the default
VLAN, which in most cases is declared as VLAN ID 0. However, if your network
requirements are to use a specific management VLAN for your infrastructure network, it
must be configured on your physical network adapters that you plan to use for
management traffic.

If you plan to use two physical network adapters for management, you need to set the
VLAN on both adapters. This must be done as part of the bootstrap configuration of
your servers, and before they're registered to Azure Arc, to ensure you successfully
register the nodes using this VLAN.

To set the VLAN ID on the physical network adapters, use the following PowerShell
command:

This example configures VLAN ID 44 on physical network adapter NIC1 .

PowerShell
Set-NetAdapter -Name "NIC1" -VlanID 44

Once the VLAN ID is set and the IPs of your nodes are configured on the physical
network adapters, the orchestrator reads this VLAN ID value from the physical network
adapter used for management and stores it, so it can be used for the Azure Resource
Bridge VM or any other infrastructure VM required during deployment. It isn't possible
to set the management VLAN ID during cloud deployment from Azure portal as this
carries the risk of breaking the connectivity between the nodes and Azure if the physical
switch VLANs aren't routed properly.

Management VLAN ID with a virtual switch


In some scenarios, there is a requirement to create a virtual switch before deployment
starts.

7 Note

Before you create a virtual switch, make sure to enable the Hyper-V role. For more
information, see Install required Windows role.
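
For example, on a node where the role isn't enabled yet, a command along these lines installs it; this is a sketch that assumes the ServerManager module is available, and the node typically needs a restart afterward:

PowerShell

# Install the Hyper-V role and its PowerShell management tools, then restart the node.
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart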

If a virtual switch configuration is required and you must use a specific VLAN ID, follow
these steps:

1. Create virtual switch with recommended naming convention

Azure Stack HCI deployments rely on Network ATC to create and configure the virtual
switches and virtual network adapters for management, compute, and storage intents.
By default, when Network ATC creates the virtual switch for the intents, it uses a specific
name for the virtual switch.

Although it isn't required, we recommend naming your virtual switches with the same
naming convention. The recommended name for the virtual switches is as follows:

Name of the virtual switch: "ConvergedSwitch($IntentName)", where $IntentName can be
any string. This string should match the name of the virtual network adapter for
management as described in the next step.

The following example shows how to create the virtual switch with PowerShell using the
recommended naming convention with $IntentName describing the purpose of the
virtual switch. The list of network adapter names is a list of the physical network
adapters you plan to use for management and compute network traffic:

PowerShell

$IntentName = "MgmtCompute"
New-VMSwitch -Name "ConvergedSwitch($IntentName)" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true -AllowManagementOS $false

2. Configure management virtual network adapter using required Network ATC naming convention for all nodes

Once the virtual switch is configured, the management virtual network adapter needs to
be created. The name of the virtual network adapter used for management traffic must
use the following naming convention:

Name of the network adapter and the virtual network adapter: vManagement($intentname).
Name is case sensitive.
$Intentname can be any string, but must be the same name used for the virtual switch.

To update the management virtual network adapter name, use the following command:

PowerShell

$IntentName = "MgmtCompute"
Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch($IntentName)" -Name "vManagement($IntentName)"

# The network adapter needs to be renamed because during creation, Hyper-V adds the string "vEthernet " to the beginning of the name.
Rename-NetAdapter -Name "vEthernet (vManagement($IntentName))" -NewName "vManagement($IntentName)"

3. Configure VLAN ID to management virtual network adapter for all nodes
Once the virtual switch and the management virtual network adapter are created, you
can specify the required VLAN ID for this adapter. Although there are different options
to assign a VLAN ID to a virtual network adapter, the only supported option is to use the
Set-VMNetworkAdapterIsolation command.

Once the required VLAN ID is configured, you can assign the IP address and gateways to
the management virtual network adapter to validate that it has connectivity with other
nodes, DNS, Active Directory, and the internet.
The following example shows how to configure the management virtual network
adapter to use VLAN ID 8 instead of the default:

PowerShell

Set-VMNetworkAdapterIsolation -ManagementOS -VMNetworkAdapterName "vManagement($IntentName)" -AllowUntaggedTraffic $true -IsolationMode Vlan -DefaultIsolationID 8
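
Once the adapter is tagged, you can assign its IP configuration and confirm connectivity before continuing. The following is a minimal sketch with hypothetical IP, gateway, and DNS values; replace them with addresses from your management subnet:

PowerShell

# Assign a static management IP, default gateway, and DNS server to the management
# virtual network adapter (example values).
New-NetIPAddress -InterfaceAlias "vManagement($IntentName)" -IPAddress 192.168.1.10 -PrefixLength 24 -DefaultGateway 192.168.1.1
Set-DnsClientServerAddress -InterfaceAlias "vManagement($IntentName)" -ServerAddresses 192.168.1.254

# Validate name resolution and outbound connectivity to an internet endpoint.
Test-NetConnection -ComputerName "login.microsoftonline.com" -Port 443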

4. Reference physical network adapters for the management intent during deployment
Although the newly created virtual network adapter shows as available when deploying
via Azure portal, it is important to remember that the network configuration is based on
Network ATC. This means that when configuring the management, or the management
and compute intent, we still need to select the physical network adapters used for that
intent.

7 Note

Do not select the virtual network adapter for the network intent.

The same logic applies to the Azure Resource Manager (ARM) templates. You must
specify the physical network adapters that you want to use for the network intents and
never the virtual network adapters.

Here are the summarized considerations for the VLAN ID:

| # | Considerations |
| --- | --- |
| 1 | VLAN ID must be specified on the physical network adapter for management before registering the servers with Azure Arc. |
| 2 | Use specific steps when a virtual switch is required before registering the servers to Azure Arc. |
| 3 | The management VLAN ID is carried over from the host configuration to the infrastructure VMs during deployment. |
| 4 | There is no VLAN ID input parameter for Azure portal deployment or for ARM template deployment. |
Node and cluster IP assignment
For an Azure Stack HCI system, you have two options to assign IPs to the server nodes and to the cluster IP.

Both static IP assignment and Dynamic Host Configuration Protocol (DHCP) assignment are supported.

Proper node IP assignment is key for cluster lifecycle management. Decide between the static and DHCP options before you register the nodes in Azure Arc.

Infrastructure VMs and services such as Arc Resource Bridge and Network
Controller keep using static IPs from the management IP pool. That implies that
even if you decide to use DHCP to assign the IPs to your nodes and your cluster IP,
the management IP pool is still required.

The following sections discuss the implications of each option.

Static IP assignment

If static IPs are used for the nodes, the management IP pool is used to obtain an
available IP and assign it to the cluster IP automatically during deployment.

It is important to use management IPs for the nodes that aren't part of the IP range defined for the management IP pool. Server node IPs must be on the same subnet as the defined IP range.

We recommend that you assign only one management IP for the default gateway and
for the configured DNS servers for all the physical network adapters of the node. This
ensures that the IP doesn't change once the management network intent is created. This
also ensures that the nodes keep their outbound connectivity during the deployment
process, including during the Azure Arc registration.

To avoid routing issues and to identify which IP will be used for outbound connectivity
and Arc registration, Azure portal validates if there is more than one default gateway
configured.
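
As a quick local check before registration, you can confirm that only one default gateway is present on each node. This is a simple sketch using built-in cmdlets and isn't part of the official validation:

PowerShell

# List IPv4 default routes; ideally only one default gateway is returned per node.
Get-NetRoute -AddressFamily IPv4 -DestinationPrefix "0.0.0.0/0" | Select-Object InterfaceAlias, NextHop, RouteMetric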

If a virtual switch and a management virtual network adapter were created during the
OS configuration, the management IP for the node must be assigned to that virtual
network adapter.

DHCP IP assignment
If IPs for the nodes are acquired from a DHCP server, a dynamic IP is also used for the
cluster IP. Infrastructure VMs and services still require static IPs, and that implies that the
management IP pool address range must be excluded from the DHCP scope used for
the nodes and the cluster IP.

For example, if the management IP range is defined as 192.168.1.20/24 to 192.168.1.30/24 for the infrastructure static IPs, the DHCP scope defined for subnet 192.168.1.0/24 must have an exclusion equivalent to the management IP pool to avoid IP conflicts with the infrastructure services. We also recommend that you use DHCP reservations for node IPs.

The process of defining the management IP after creating the management intent
involves using the MAC address of the first physical network adapter that is selected for
the network intent. This MAC address is then assigned to the virtual network adapter
that is created for management purposes. This means that the IP address that the first
physical network adapter obtains from the DHCP server is the same IP address that the
virtual network adapter uses as the management IP. Therefore, it is important to create a DHCP reservation for each node IP.
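
If your DHCP server runs on Windows Server, the following sketch shows how the exclusion and a node reservation might be created with the DhcpServer module. The scope, IP addresses, MAC address, and names are illustrative placeholders; the first command runs on the node and the remaining commands run on the DHCP server:

PowerShell

# On the node: get the MAC address of the first physical adapter selected for the management intent.
Get-NetAdapter -Name "NIC1" | Select-Object Name, MacAddress

# On the DHCP server: exclude the management IP pool from the scope used by the nodes,
# then reserve the node's management IP against that MAC address.
Add-DhcpServerv4ExclusionRange -ScopeId 192.168.1.0 -StartRange 192.168.1.20 -EndRange 192.168.1.30
Add-DhcpServerv4Reservation -ScopeId 192.168.1.0 -IPAddress 192.168.1.11 -ClientId "00-11-22-33-44-55" -Name "Node01-Management"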

Cluster node IP considerations


Here are the summarized considerations for the IP addresses:

| # | Considerations |
|---|---|
| 1 | Node IPs must be on the same subnet as the defined management IP pool range, regardless of whether they're static or dynamic addresses. |
| 2 | The management IP pool must not include node IPs. Use DHCP exclusions when dynamic IP assignment is used. |
| 3 | Use DHCP reservations for the nodes as much as possible. |
| 4 | DHCP addresses are only supported for node IPs and the cluster IP. Infrastructure services use static IPs from the management pool. |
| 5 | The MAC address from the first physical network adapter is assigned to the management virtual network adapter once the management network intent is created. |

Proxy requirements
A proxy is most likely required to access the internet within your on-premises
infrastructure. Azure Stack HCI supports only non-authenticated proxy configurations.
Given that internet access is required to register the nodes in Azure Arc, the proxy
configuration must be set as part of the OS configuration before server nodes are
registered. For more information, see Configure proxy settings.

The Azure Stack HCI OS has three different proxy settings (WinINET, WinHTTP, and environment variables) that require the same proxy configuration to ensure all OS
components can access the internet. The same proxy configuration used for the nodes is
automatically carried over to the Arc Resource Bridge VM and AKS, ensuring that they
have internet access during deployment.
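
As an illustration, the following sketch sets the WinHTTP proxy and the machine-wide environment variables on a node; WinINET is configured separately as described in Configure proxy settings. The proxy address and bypass list are placeholders for your own values:

PowerShell

$ProxyServer = "http://proxy.contoso.com:8080"       # placeholder proxy address
$BypassList  = "localhost;127.0.0.1;*.contoso.com"   # placeholder bypass list

# WinHTTP proxy
netsh winhttp set proxy proxy-server="$ProxyServer" bypass-list="$BypassList"

# Machine-wide environment variables read by some agents and services
[Environment]::SetEnvironmentVariable("HTTP_PROXY", $ProxyServer, "Machine")
[Environment]::SetEnvironmentVariable("HTTPS_PROXY", $ProxyServer, "Machine")
[Environment]::SetEnvironmentVariable("NO_PROXY", "localhost,127.0.0.1,.contoso.com", "Machine")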

Here are the summarized considerations for proxy configuration:

| # | Consideration |
|---|---|
| 1 | Proxy configuration must be completed before registering the nodes in Azure Arc. |
| 2 | The same proxy configuration must be applied for WinINET, WinHTTP, and environment variables. |
| 3 | The Environment Checker ensures that the proxy configuration is consistent across all proxy components. |
| 4 | Proxy configuration of the Arc Resource Bridge VM and AKS is automatically done by the orchestrator during deployment. |
| 5 | Only non-authenticated proxies are supported. |

Firewall requirements
You are currently required to allow outbound access to several internet endpoints through your firewalls to ensure that Azure Stack HCI and its components can successfully connect to them. For a detailed list of the required endpoints, see Firewall requirements.

Firewall configuration must be done prior to registering the nodes in Azure Arc. You can
use the standalone version of the environment checker to validate that your firewalls
aren't blocking traffic sent to these endpoints. For more information, see Azure Stack
HCI Environment Checker to assess deployment readiness for Azure Stack HCI.
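
The Environment Checker is the supported way to validate these endpoints. As a quick ad hoc spot check from a node, you can also test outbound TCP 443 reachability to a known endpoint; the endpoint below is only an example:

PowerShell

# Quick spot check of outbound HTTPS reachability (example endpoint only).
Test-NetConnection -ComputerName login.microsoftonline.com -Port 443 | Select-Object ComputerName, RemotePort, TcpTestSucceeded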

Here are the summarized considerations for firewall:

| # | Consideration |
|---|---|
| 1 | Firewall configuration must be done before registering the nodes in Azure Arc. |
| 2 | The Environment Checker in standalone mode can be used to validate the firewall configuration. |

Step 5: Determine network adapter configuration

Network adapters are qualified by the network traffic types (management, compute, and storage) they're used with. As you review the Windows Server Catalog, the Windows Server 2022 certification indicates the network traffic types for which the adapters are qualified.

Before purchasing a server for Azure Stack HCI, you must have at least one adapter that
is qualified for management, compute, and storage as all three traffic types are required
on Azure Stack HCI. Cloud deployment relies on Network ATC to configure the network
adapters for the appropriate traffic types, so it is important to use supported network
adapters.

The default values used by Network ATC are documented in Cluster network settings.
We recommend that you use the default values. With that said, the following options
can be overridden using Azure portal or ARM templates if needed:

Storage VLANs: Set this value to the required VLANs for storage.
Jumbo Packets: Defines the size of the jumbo packets.
Network Direct: Set this value to false if you want to disable RDMA for your
network adapters.
Network Direct Technology: Set this value to RoCEv2 or iWarp .
Traffic Priorities Datacenter Bridging (DCB): Set the priorities that fit your
requirements. We highly recommend that you use the default DCB values as these
are validated by Microsoft and customers.
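
For version 23H2 cloud deployments, you override these values in the Azure portal or in the ARM template. For reference only, on systems where Network ATC is managed directly with PowerShell, an adapter property override looks roughly like the following sketch; the cmdlet and property names come from the Network ATC module and the values shown are illustrative:

PowerShell

# Reference sketch only - for 23H2 cloud deployment, set these overrides in the portal or ARM template.
$adapterOverrides = New-NetIntentAdapterPropertyOverrides
$adapterOverrides.NetworkDirect = 0      # disable RDMA on the intent's adapters
$adapterOverrides.JumboPacket   = 9014   # jumbo packet size

# The override object is passed to Add-NetIntent or Set-NetIntent via -AdapterPropertyOverrides.
# Review existing intents with:
Get-NetIntent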

Here are the summarized considerations for network adapter configuration:

| # | Consideration |
|---|---|
| 1 | Use the default configurations as much as possible. |
| 2 | Physical switches must be configured according to the network adapter configuration. See Physical network requirements for Azure Stack HCI. |
| 3 | Ensure that your network adapters are supported for Azure Stack HCI using the Windows Server Catalog. |
| 4 | When accepting the defaults, Network ATC automatically configures the storage network adapter IPs and VLANs. This is known as Storage Auto IP configuration. In some instances, Storage Auto IP isn't supported and you need to declare each storage network adapter IP using ARM templates. |

Next steps
About Azure Stack HCI, version 23H2 deployment.
Security features for Azure Stack HCI,
version 23H2
Article • 02/22/2024

Applies to: Azure Stack HCI, version 23H2

Azure Stack HCI is a secure-by-default product that has more than 300 security settings
enabled right from the start. Default security settings provide a consistent security
baseline to ensure that devices start in a known good state.

This article provides a brief conceptual overview of the various security features
associated with your Azure Stack HCI cluster. This includes security defaults, Windows Defender Application Control (WDAC), volume encryption via BitLocker, secret rotation, local built-in user accounts, Microsoft Defender for Cloud, and more.

Windows Defender Application Control


Application control (WDAC) is a software-based security layer that reduces attack
surface by enforcing an explicit list of software that is allowed to run. WDAC is enabled
by default and limits the applications and the code that you can run on the core
platform. For more information, see Manage Windows Defender Application Control for
Azure Stack HCI, version 23H2.

Security defaults
Your Azure Stack HCI has more than 300 security settings enabled by default that
provide a consistent security baseline, a baseline management system, and an
associated drift control mechanism.

You can monitor the security baseline and secured-core settings during both
deployment and runtime. You can also disable drift control during deployment when
you configure security settings.

With drift control applied, security settings are refreshed every 90 minutes. This refresh
interval ensures remediation of any changes from the desired state. Continuous
monitoring and auto-remediation allow you to have a consistent and reliable security
posture throughout the lifecycle of the device.

Secure baseline on Azure Stack HCI:


Improves the security posture by disabling legacy protocols and ciphers.
Reduces OPEX with a built-in drift protection mechanism and enables consistent
at-scale monitoring via the Azure Arc Hybrid Edge baseline.
Enables you to closely meet Center for Internet Security (CIS) benchmark and
Defense Information System Agency (DISA) Security Technical Implementation
Guide (STIG) requirements for the OS and recommended security baseline.

For more information, see Manage security defaults on Azure Stack HCI.

BitLocker encryption
Data-at-rest encryption is enabled on data volumes created during deployment. These
data volumes include both infrastructure volumes and workload volumes. When you
deploy your cluster, you have the option to modify security settings.

By default, data-at-rest encryption is enabled during deployment. We recommend that you accept the default setting.

You must store BitLocker recovery keys in a secure location outside of the system. Once
Azure Stack HCI is successfully deployed, you can retrieve BitLocker recovery keys.
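
As a quick local check after deployment, the built-in BitLocker module can show which volumes are protected and list their recovery key protectors; this is a sketch only, and the mount point is a placeholder. Follow the linked articles below for the supported way to retrieve and store recovery keys:

PowerShell

# Check which volumes are encrypted and protected.
Get-BitLockerVolume | Select-Object MountPoint, VolumeStatus, ProtectionStatus

# List recovery password protectors for a specific volume (mount point is a placeholder).
(Get-BitLockerVolume -MountPoint "<volume mount point>").KeyProtector | Where-Object KeyProtectorType -eq "RecoveryPassword"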

For more information about BitLocker encryption, see:

Use BitLocker with Cluster Shared Volumes (CSV).


Manage BitLocker encryption on Azure Stack HCI.

Local built-in user accounts


In this release, the following local built-in users, associated with RID 500 and RID 501, are available on your Azure Stack HCI system:

| Name in initial OS image | Name after deployment | Enabled by default | Description |
|---|---|---|---|
| Administrator | ASBuiltInAdmin | True | Built-in account for administering the computer/domain. |
| Guest | ASBuiltInGuest | False | Built-in account for guest access to the computer/domain, protected by the security baseline drift control mechanism. |

Important

We recommend that you create your own local administrator account, and that you disable the well-known RID 500 user account.
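
A minimal sketch of that recommendation using the built-in LocalAccounts cmdlets follows. The new account name is a placeholder, and the ASBuiltInAdmin name comes from the table above; create and verify the new administrator before disabling the built-in account:

PowerShell

# Create a new local administrator account (name is a placeholder).
$password = Read-Host -AsSecureString -Prompt "Enter a password for the new local admin"
New-LocalUser -Name "HciLocalAdmin" -Password $password -PasswordNeverExpires
Add-LocalGroupMember -Group "Administrators" -Member "HciLocalAdmin"

# Disable the well-known RID 500 account (named ASBuiltInAdmin after deployment).
Disable-LocalUser -Name "ASBuiltInAdmin"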

Secret creation and rotation


The orchestrator in Azure Stack HCI requires multiple components to maintain secure
communications with other infrastructure resources and services. All the services
running on the cluster have authentication and encryption certificates associated with
them.

To ensure security, we have implemented internal secret creation and rotation capabilities. When you review your cluster nodes, you see several certificates created under the LocalMachine\Personal certificate store path (Cert:\LocalMachine\My).
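
To review those certificates on a node, a quick check with the built-in certificate provider looks like this sketch:

PowerShell

# List certificates in the local machine Personal store with their expiration dates.
Get-ChildItem -Path Cert:\LocalMachine\My | Sort-Object NotAfter | Format-Table Subject, NotBefore, NotAfter -AutoSize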

In this release, the following capabilities are enabled:

The ability to create certificates during deployment and after cluster scale
operations.
An automatic rotation mechanism before certificates expire, and an option to rotate certificates during the lifetime of the cluster.
The ability to monitor and alert whether certificates are still valid.

7 Note

This action takes about ten minutes, depending on the size of the cluster.

For more information, see Manage secrets rotation.

Syslog forwarding of security events


For customers and organizations that require their own local SIEM, Azure Stack HCI
version 23H2 includes an integrated mechanism that enables you to forward security-
related events to a SIEM.

Azure Stack HCI has an integrated syslog forwarder that, once configured, generates
syslog messages defined in RFC3164, with the payload in Common Event Format (CEF).
The following diagram illustrates integration of Azure Stack HCI with an SIEM. All audits,
security logs, and alerts are collected on each host and exposed via syslog with the CEF
payload.

Syslog forwarding agents are deployed on every Azure Stack HCI host to forward syslog
messages to the customer-configured syslog server. Syslog forwarding agents work
independently from each other but can be managed together on any one of the hosts.

The syslog forwarder in Azure Stack HCI supports various configurations based on
whether syslog forwarding is with TCP or UDP, whether the encryption is enabled or not,
and whether there is unidirectional or bidirectional authentication.

For more information, see Manage syslog forwarding.

Microsoft Defender for Cloud (preview)


Microsoft Defender for Cloud is a security posture management solution with advanced
threat protection capabilities. It provides you with tools to assess the security status of
your infrastructure, protect workloads, raise security alerts, and follow specific
recommendations to remediate attacks and address future threats. It performs all these
services at high speed in the cloud with no deployment overhead through auto-
provisioning and protection with Azure services.

With the basic Defender for Cloud plan, you get recommendations on how to improve
the security posture of your Azure Stack HCI system at no extra cost. With the paid
Defender for Servers plan, you get enhanced security features including security alerts
for individual servers and Arc VMs.

For more information, see Manage system security with Microsoft Defender for Cloud
(preview).

Next steps
Assess deployment readiness via the Environment Checker.
Read the Azure Stack HCI security book .
View the Azure Stack HCI security standards.
Evaluate the deployment readiness of
your environment for Azure Stack HCI,
version 23H2
Article • 03/07/2024

Applies to: Azure Stack HCI, version 23H2

This article describes how to use the Azure Stack HCI Environment Checker in a
standalone mode to assess how ready your environment is for deploying the Azure
Stack HCI solution.

For a smooth deployment of the Azure Stack HCI solution, your IT environment must
meet certain requirements for connectivity, hardware, networking, and Active Directory.
The Azure Stack HCI Environment Checker is a readiness assessment tool that checks
these minimum requirements and helps determine if your IT environment is deployment
ready.

About the Environment Checker tool


The Environment Checker tool runs a series of tests on each server in your Azure Stack
HCI cluster, reports the result for each test, provides remediation guidance when
available, and saves a log file and a detailed report file.

The Environment Checker tool consists of the following validators:

Connectivity validator. Checks whether each server in the cluster meets the
connectivity requirements. For example, each server in the cluster has internet
connection and can connect via HTTPS outbound traffic to well-known Azure
endpoints through all firewalls and proxy servers.
Hardware validator. Checks whether your hardware meets the system
requirements. For example, all the servers in the cluster have the same
manufacturer and model.
Active Directory validator. Checks whether the Active Directory preparation tool is
run prior to running the deployment.
Network validator. Validates your network infrastructure for valid IP ranges
provided by customers for deployment. For example, it checks there are no active
hosts on the network using the reserved IP range.
Arc integration validator. Checks if the Azure Stack HCI cluster meets all the
prerequisites for successful Arc onboarding.

Why use Environment Checker?


You can run the Environment Checker to:

Ensure that your Azure Stack HCI infrastructure is ready before deploying any
future updates or upgrades.
Identify the issues that could potentially block the deployment, such as not
running a pre-deployment Active Directory script.
Confirm that the minimum requirements are met.
Identify and remediate small issues early and quickly, such as a misconfigured
firewall URL or a wrong DNS.
Identify and remediate discrepancies on your own and ensure that your current
environment configuration complies with the Azure Stack HCI system
requirements.
Collect diagnostic logs and get remote support to troubleshoot any validation
issues.

Environment Checker modes


You can run the Environment Checker in two modes:

Integrated tool: The Environment Checker functionality is integrated into the deployment process. By default, all validators are run during deployment to perform pre-deployment readiness checks.

Standalone tool: This light-weight PowerShell tool is available for free download
from the Windows PowerShell gallery. You can run the standalone tool anytime,
outside of the deployment process. For example, you can run it even before
receiving the actual hardware to check if all the connectivity requirements are met.

This article describes how to run the Environment Checker in a standalone mode.

Prerequisites
Before you begin, complete the following tasks:

Review Azure Stack HCI system requirements.


Review Firewall requirements for Azure Stack HCI.
Make sure you have access to a client computer that is running on the network
where you'll deploy the Azure Stack HCI cluster.
Make sure that the client computer used is running PowerShell 5.1 or later.
Make sure you have permission to verify the Active Directory preparation tool is
run.

Install Environment Checker


The Environment Checker works with PowerShell 5.1, which is built into Windows.

You can install the Environment Checker on a client computer, staging server, or Azure
Stack HCI cluster node. However, if installed on an Azure Stack HCI cluster node, make
sure to uninstall it before you begin the deployment to avoid any potential conflicts.

To install the Environment Checker, follow these steps:

1. Run PowerShell as administrator (5.1 or later). If you need to install PowerShell, see
Installing PowerShell on Windows.

2. Enter the following cmdlet to install the latest version of the PowerShellGet
module:

PowerShell

Install-Module PowerShellGet -AllowClobber -Force

3. After the installation completes, close the PowerShell window and open a new
PowerShell session as administrator.

4. In the new PowerShell session, register PowerShell gallery as a trusted repo:

PowerShell

Set-PSRepository -Name PSGallery -InstallationPolicy Trusted

5. Enter the following cmdlet to install the Environment Checker module:

PowerShell

Install-Module -Name AzStackHci.EnvironmentChecker

6. If prompted, press Y (Yes) or A (Yes to All) to install the module.


Run readiness checks
Each validator in the Environment Checker tool checks specific settings and
requirements. You can run these validators by invoking their respective PowerShell
cmdlet on each server in your Azure Stack HCI cluster or from any computer on the
network where you'll deploy Azure Stack HCI.

You can run the validators from the following locations:

Remotely via PowerShell session.

Locally from a workstation or a staging server.

Locally from the Azure Stack HCI cluster node. However, make sure to uninstall the
Environment Checker before you begin the deployment to avoid any potential
conflicts.

Select each of the following tabs to learn more about the corresponding validator.

Connectivity

Use the connectivity validator to check if all the servers in your cluster have internet
connectivity and meet the minimum connectivity requirements. For connectivity
prerequisites, see Firewall requirements for Azure Stack HCI.

You can use the connectivity validator to:

Check the connectivity of your servers before receiving the actual hardware.
You can run the connectivity validator from any client computer on the
network where you'll deploy the Azure Stack HCI cluster.
Check the connectivity of all the servers in your cluster after you've deployed
the cluster. You can check the connectivity of each server by running the
validator cmdlet locally on each server. Or, you can remotely connect from a
staging server to check the connectivity of one or more servers.

Run the connectivity validator


To run the connectivity validator, follow these steps.

1. Open PowerShell locally on the workstation, staging server, or Azure Stack HCI
cluster node.

2. Run a connectivity validation by entering the following cmdlet:


PowerShell

Invoke-AzStackHciConnectivityValidation

Note

Using the Invoke-AzStackHciConnectivityValidation cmdlet without any parameters checks connectivity for all the service endpoints that are enabled from your device. You can also pass parameters to run readiness checks for specific scenarios. See the examples below.

Here are some examples of running the connectivity validator cmdlet with
parameters.

Example 1: Check connectivity of a remote computer


In this example, you remotely connect from your workstation or a staging server to
check the connectivity of one or more remote systems.

PowerShell

$session = New-PSSession -ComputerName remotesystem.contoso.com -Credential $credential
Invoke-AzStackHciConnectivityValidation -PsSession $session

Example 2: Check connectivity for a specific service

You can check connectivity for a specific service endpoint by passing the Service
parameter. In the following example, the validator checks connectivity for Azure Arc
service endpoints.

PowerShell

Invoke-AzStackHciConnectivityValidation -Service "Arc For Servers"

Example 3: Check connectivity if you're using a proxy


If you're using a proxy server, you can direct the connectivity validator to go through the specified proxy with credentials, as shown in the following example:

PowerShell

Invoke-AzStackHciConnectivityValidation -Proxy http://proxy.contoso.com:8080 -ProxyCredential $proxyCredential

Note

The connectivity validator validates general proxy connectivity; it doesn't check whether your Azure Stack HCI is configured correctly to use a proxy. For information about how to configure firewalls for Azure Stack HCI, see Firewall requirements for Azure Stack HCI.

Example 4: Check connectivity and create a PowerShell output object

You can view the output of the connectivity checker as an object by using the -PassThru parameter:

PowerShell

Invoke-AzStackHciConnectivityValidation -PassThru

Here's a sample screenshot of the output:


Connectivity validator attributes


You can filter any of the following attributes and display the connectivity validator
result in your desired format:

| Attribute name | Description |
|---|---|
| EndPoint | The endpoint being validated. |
| Protocol | The protocol used, for example, https. |
| Service | The service endpoint being validated. |
| Operation Type | The type of operation: deployment or update. |
| Group | Readiness Checks. |
| System | For internal use. |
| Name | Name of the individual service. |
| Title | Service title; the user-facing name. |
| Severity | Critical, Warning, Informational, or Hidden. |
| Description | Description of the service name. |
| Tags | Internal key-value pairs to group or filter tests. |
| Status | Succeeded, Failed, or In Progress. |
| Remediation | URL link to documentation for remediation. |
| TargetResourceID | Unique identifier for the affected resource (node or drive). |
| TargetResourceName | Name of the affected resource. |
| TargetResourceType | Type of the affected resource. |
| Timestamp | The time at which the test was called. |
| AdditionalData | Property bag of key-value pairs for additional information. |
| HealthCheckSource | The name of the services called for the health check. |

Connectivity validator output


The following samples are the output from successful and unsuccessful runs of the
connectivity validator.

To learn more about different sections in the readiness check report, see
Understand readiness check report.

Sample output: Successful test

The following sample output is from a successful run of the connectivity validator.
The output indicates a healthy connection to all the endpoints, including well-
known Azure services and observability services. Under Diagnostics, you can see
the validator checks if a DNS server is present and healthy. It collects WinHttp, IE
proxy, and environment variable proxy settings for diagnostics and data collection.
It also checks if a transparent proxy is used in the outbound path and displays the
output.

Sample output: Failed test

If a test fails, the connectivity validator returns information to help you resolve the
issue, as shown in the sample output below. The Needs Remediation section
displays the issue that caused the failure. The Remediation section lists the relevant
article to help remediate the issue.

Potential failure scenario for connectivity validator


The connectivity validator checks for SSL inspection before testing connectivity of
any required endpoints. If SSL inspection is turned on in your Azure Stack HCI
system, you get the following error:

Workaround

Work with your network team to turn off SSL inspection for your Azure Stack HCI
system. To confirm your SSL inspection is turned off, you can use the following
examples. After SSL inspection is turned off, you can run the tool again to check
connectivity to all the endpoints.

If you receive the certificate validation error message, run the following commands
individually for each endpoint to manually check the certificate information:

PowerShell

C:\> Import-Module AzStackHci.EnvironmentChecker
C:\> Get-SigningRootChain -Uri <Endpoint-URI> | ft subject

For example, if you want to verify the certificate information for two endpoints, say https://login.microsoftonline.com and https://portal.azure.com, run the following commands individually for each endpoint:

For https://login.microsoftonline.com:

PowerShell

C:\> Import-Module AzStackHci.EnvironmentChecker
C:\> Get-SigningRootChain -Uri https://login.microsoftonline.com | ft subject

Here's a sample output:

PowerShell

Subject
-------
CN=portal.office.com, O=Microsoft Corporation, L=Redmond, S=WA,
C=US
CN=Microsoft Azure TLS Issuing CA 02, O=Microsoft Corporation, C=US
CN=DigiCert Global Root G2, OU=www.digicert.com, O=DigiCert Inc,
C=US

For https://portal.azure.com:

PowerShell

C:\> Import-Module AzStackHci.EnvironmentChecker
C:\> Get-SigningRootChain -Uri https://portal.azure.com | ft Subject

Here's a sample output:

PowerShell

Subject
-------
CN=portal.azure.com, O=Microsoft Corporation, L=Redmond, S=WA, C=US
CN=Microsoft Azure TLS Issuing CA 01, O=Microsoft Corporation, C=US
CN=DigiCert Global Root G2, OU=www.digicert.com, O=DigiCert Inc,
C=US

Understand readiness check report


Each validator generates a readiness check report after completing the check. Make sure
to review the report and correct any issues before starting the actual deployment.

The information displayed on each readiness check report varies depending on the
checks the validators perform. The following table summarizes the different sections in
the readiness check reports for each validator:
| Section | Description | Available in |
|---|---|---|
| Services | Displays the health status of each service endpoint that the connectivity validator checks. Any service endpoint that fails the check is highlighted with the Needs Remediation tag. | Connectivity validator report |
| Diagnostics | Displays the result of the diagnostic tests, for example, the health and availability of a DNS server. It also shows what information the validator collects for diagnostic purposes, such as WinHttp, IE proxy, and environment variable proxy settings. | Connectivity validator report |
| Hardware | Displays the health status of all the physical servers and their hardware components. For information on the tests performed on each hardware component, see the table under the "Hardware" tab in the Run readiness checks section. | Hardware validator report |
| AD OU Diagnostics | Displays the result of the Active Directory organizational unit test: whether the specified organizational unit exists and contains proper sub-organizational units. | Active Directory validator report |
| Network range test | Displays the result of the network range test. If the test fails, it displays the IP addresses that belong to the reserved IP range. | Network validator report |
| Summary | Lists the count of successful and failed tests. Failed test results are expanded to show the failure details under Needs Remediation. | All reports |
| Remediation | Displays only if a test fails. Provides a link to the article that provides guidance on how to remediate the issue. | All reports |
| Log location (contains PII) | Provides the path where the log file is saved. The default path is $HOME\.AzStackHci\AzStackHciEnvironmentChecker.log when you run the Environment Checker in standalone mode, and C:\CloudDeployment\Logs when the Environment Checker runs as part of the deployment process. Each run of the validator overwrites the existing file. | All reports |
| Report location (contains PII) | Provides the path where the completed readiness check report is saved in JSON format. The default path is $HOME\.AzStackHci\AzStackHciEnvironmentReport.json when you run the Environment Checker in standalone mode, and C:\CloudDeployment\Logs when the Environment Checker runs as part of the deployment process. The report provides detailed diagnostics collected during each test. This information can be helpful for system integrators or when you need to contact the support team to troubleshoot the issue. Each run of the validator overwrites the existing file. | All reports |
| Completion message | At the end of the report, displays a message that the validation check is completed. | All reports |

Environment Checker results

7 Note

The results reported by the Environment Checker tool reflect the status of your
settings only at the time that you ran it. If you make changes later, for example to
your Active Directory or network settings, items that passed successfully earlier can
become critical issues.

For each test, the validator provides a summary of the unique issues and classifies them
into: success, critical issues, warning issues, and informational issues. Critical issues are
the blocking issues that you must fix before proceeding with the deployment.

Uninstall environment checker


The Environment Checker is shipped with Azure Stack HCI. Make sure to uninstall it from all Azure Stack HCI cluster nodes before you begin the deployment to avoid any potential conflicts.

PowerShell

Remove-Module AzStackHci.EnvironmentChecker -Force
Get-Module AzStackHci.EnvironmentChecker -ListAvailable | Where-Object {$_.Path -like "*$($_.Version)*"} | Uninstall-Module -Force

Troubleshoot environment validation issues


For information about how to get support from Microsoft to troubleshoot any validation
issues that may arise during cluster deployment or pre-registration, see Get support for
deployment issues.
Next steps
Complete the prerequisites and deployment checklist.
Contact Microsoft Support.
About Azure Stack HCI, version 23H2
deployment
Article • 01/31/2024

Applies to: Azure Stack HCI, version 23H2

This article is the first in the series of deployment articles that describe how to deploy
Azure Stack HCI, version 23H2. This article applies to both single and multi-node
deployments. The target audience for this article is IT administrators who are
responsible for deploying Azure Stack HCI in their organization.

) Important

Azure Stack HCI, version 23H2 is the latest GA version, which doesn't support
upgrade from version 22H2. Begin with a new 2311 deployment, update to 2311.2,
and strictly follow version 23H2 deployment instructions. Don't mix steps from
version 22H2 and version 23H2.

About deployment methods


In this release, you can deploy Azure Stack HCI using one of the following methods:

Deploy from Azure portal: Select this option to deploy an Azure Stack HCI cluster
using Azure portal. You can choose from three deployment methods: New
configuration, Template spec, and QuickStart template. The deployment flow
guides you through the steps to deploy your Azure Stack HCI cluster.

For more information, see Deploy via Azure portal.

Deploy from an Azure Resource Manager (ARM) template: Select this option to


deploy an Azure Stack HCI cluster using an ARM Deployment Template and the
corresponding Parameters file. An ARM template is a JSON file containing
customized template expressions where you can define dynamic values and logic
that determine the Azure resources to deploy.

For more information, see Deploy via ARM template.

Deployment sequence
Follow this sequence to deploy Azure Stack HCI in your environment:

| Step # | Description |
|---|---|
| Select validated network topology | Identify the network reference pattern that corresponds to the way your servers are cabled. You will define the network settings based on this topology. |
| Read the requirements and complete the prerequisites | Review the requirements and complete all the prerequisites and a deployment checklist before you begin the deployment. |
| Step 1: Prepare Active Directory | Prepare your Active Directory (AD) environment for Azure Stack HCI deployment. |
| Step 2: Download Azure Stack HCI, version 23H2 OS | Download the Azure Stack HCI, version 23H2 OS ISO from the Azure portal. |
| Step 3: Install OS | Install the Azure Stack HCI operating system locally on each server in your cluster. |
| (Optional) Configure the proxy | Optionally configure proxy settings for Azure Stack HCI if your network uses a proxy server for internet access. |
| Step 4: Register servers with Arc and assign permissions | Install and run the Azure Arc registration script on each of the servers that you intend to cluster. Assign the required permissions for the deployment. |
| Step 5A: Deploy the cluster via Azure portal | Use the Azure portal to select Arc servers and create the Azure Stack HCI cluster. Use one of the three deployment methods described previously. |
| Step 5B: Deploy the cluster via ARM template | Use the ARM deployment template and the parameter file to deploy the Azure Stack HCI cluster. |

Validated network topologies


When you deploy Azure Stack HCI from Azure portal, the network configuration options
vary depending on the number of servers and the type of storage connectivity. Azure
portal guides you through the supported options for each configuration.

Before starting the deployment, we recommend you check the following table that
shows the supported and available options.

Supported network topologies


| Network topology | Azure portal | ARM template |
|---|---|---|
| One node - no switch for storage | By default | Supported |
| One node - with network switch for storage | Not applicable | Supported |
| Two nodes - no switch for storage | Supported | Supported |
| Two nodes - with network switch for storage | Supported | Supported |
| Three nodes - with no switch for storage | Not supported | Test only. No update or repair support. |
| Three nodes - with network switch for storage | Supported | Supported |
| Four to 16 nodes - with no network switch for storage | Not supported | Not supported |
| Four to 16 nodes - with network switch for storage | Supported | Supported |

The two network topology options are:

No switch for storage. When you select this option, your Azure Stack HCI system
uses crossover network cables directly connected to your network interfaces for
storage communication. The current supported switchless deployments from the
portal are one or two nodes.

Network switch for storage. When you select this option, your Azure Stack HCI
system uses network switches connected to your network interfaces for storage
communication. You can deploy up to 16 nodes using this configuration.

You can then select the network reference pattern corresponding to a validated network
topology that you intend to deploy.

Next steps
Read the prerequisites for Azure Stack HCI.
Review deployment prerequisites for
Azure Stack HCI, version 23H2
Article • 01/31/2024

Applies to: Azure Stack HCI, version 23H2

This article discusses the security, software, hardware, and networking prerequisites, and
the deployment checklist in order to deploy Azure Stack HCI, version 23H2.

Review requirements and complete prerequisites

| Requirements | Links |
|---|---|
| Security features | Link |
| Environment readiness | Link |
| System requirements | Link |
| Firewall requirements | Link |
| Physical network requirements | Link |
| Host network requirements | Link |

Complete deployment checklist


Use the following checklist to gather the required information ahead of the actual
deployment of your Azure Stack HCI, version 23H2 cluster.

| Component | What is needed |
|---|---|
| Server names | Unique name for each server you wish to deploy. |
| Active Directory cluster name | The name for the new cluster AD object during the Active Directory preparation. This name is also used for the name of the cluster during deployment. |
| Active Directory prefix | The prefix used for all AD objects created for the Azure Stack HCI deployment. The prefix is used during the Active Directory preparation and must not exceed 8 characters. |
| Active Directory OU | A new organizational unit (OU) to store all the objects for the Azure Stack HCI deployment. The OU is created during the Active Directory preparation. |
| Active Directory Domain | Fully qualified domain name (FQDN) for the Active Directory Domain Services prepared for deployment. |
| Active Directory Lifecycle Manager credential | A new username and password that is created with the appropriate permissions for deployment. This account is the same as the user account used by the Azure Stack HCI deployment. The password must conform to the Azure length and complexity requirements: use a password that is at least 12 characters long and contains a lowercase character, an uppercase character, a numeral, and a special character. The name must be unique for each deployment and you can't use admin as the username. |
| IPv4 network range subnet for management network intent | A subnet used for the management network intent. You need an address range for the management network with a minimum of 6 available, contiguous IPs in this subnet. These IPs are used for infrastructure services, with the first IP assigned to failover clustering. For more information, see the Specify network settings page in Deploy via Azure portal. |
| Storage VLAN ID | Two unique VLAN IDs to be used for the storage networks, from your IT network administrator. We recommend using the default VLANs from Network ATC for the storage subnets. If you plan to have two storage subnets, Network ATC uses VLANs 711 and 712. For more information, see the Specify network settings page in Deploy via Azure portal. |
| DNS Server | A DNS server that is used in your environment. The DNS server used must resolve the Active Directory domain. For more information, see the Specify network settings page in Deploy via Azure portal. |
| Local administrator credentials | Username and password for the local administrator for all the servers in your cluster. The credentials are identical for all the servers in your system. For more information, see the Specify management settings page in Deploy via Azure portal. |
| Custom location | (Optional) A name for the custom location created for your cluster. This name is used for Azure Arc VM management. For more information, see the Specify management settings page in Deploy via Azure portal. |
| Azure subscription ID | ID for the Azure subscription used to register the cluster. Make sure that you are a User Access Administrator and a Contributor on this subscription. This allows you to manage access to Azure resources, specifically to Arc-enable each server of an Azure Stack HCI cluster. For more information, see Assign Azure permissions for deployment. |
| Azure Storage account | For two-node clusters, a witness is required. For a cloud witness, an Azure Storage account is needed. In this release, you can't use the same storage account for multiple clusters. For more information, see Specify management settings in Deploy via Azure portal. |
| Azure Key Vault | A key vault is required to securely store secrets for this system, such as cryptographic keys, local admin credentials, and BitLocker recovery keys. For more information, see Basics in Deploy via Azure portal. |
| Outbound connectivity | Run the Environment Checker to ensure that your environment meets the outbound network connectivity requirements for firewall rules. |

Next steps
Prepare your Active Directory environment.
Prepare Active Directory for Azure Stack
HCI, version 23H2 deployment
Article • 03/11/2024

Applies to: Azure Stack HCI, version 23H2

This article describes how to prepare your Active Directory environment before you
deploy Azure Stack HCI, version 23H2.

Active Directory requirements for Azure Stack HCI include:

A dedicated Organization Unit (OU).


Group policy inheritance that is blocked for the applicable Group Policy Object
(GPO).
A user account that has all rights to the OU in the Active Directory.

7 Note

You can use your existing process to meet the above requirements. The script
used in this article is optional and is provided to simplify the preparation.
When group policy inheritance is blocked at the OU level, enforced GPOs aren't blocked. Ensure that any applicable GPOs that are enforced are also blocked using other methods, for example, WMI filters or security groups.

Prerequisites
Before you begin, make sure you've done the following:

Satisfy the prerequisites for new deployments of Azure Stack HCI.

Download and install the version 2402 module from the PowerShell Gallery by running the following command:

PowerShell

Install-Module AsHciADArtifactsPreCreationTool -Repository PSGallery -Force
7 Note

Make sure to uninstall any previous versions of the module before installing
the new version.

You have obtained permissions to create an OU. If you don't have permissions,
contact your Active Directory administrator.

Active Directory preparation module


The AsHciADArtifactsPreCreationTool.ps1 module is used to prepare Active Directory.
Here are the required parameters associated with the cmdlet:

| Parameter | Description |
|---|---|
| -AzureStackLCMUserCredential | A new user object that is created with the appropriate permissions for deployment. This account is the same as the user account used by the Azure Stack HCI deployment. Make sure that only the username is provided; the name should not include the domain name, for example, contoso\username. The password must conform to the length and complexity requirements: use a password that is at least 12 characters long and that contains three out of the four requirements: a lowercase character, an uppercase character, a numeral, and a special character. For more information, see password complexity requirements. The name must be unique for each deployment and you can't use admin as the username. |
| -AsHciOUName | A new organizational unit (OU) to store all the objects for the Azure Stack HCI deployment. Existing group policies and inheritance are blocked in this OU to ensure there's no conflict of settings. The OU must be specified as the distinguished name (DN). For more information, see the format of distinguished names. |

Prepare Active Directory


When you prepare Active Directory, you create a dedicated Organizational Unit (OU) to
place the Azure Stack HCI related objects such as deployment user.

To create a dedicated OU, follow these steps:


1. Sign in to a computer that is joined to your Active Directory domain.

2. Run PowerShell as administrator.

3. Run the following command to create the dedicated OU.

PowerShell

New-HciAdObjectsPreCreation -AzureStackLCMUserCredential (Get-Credential) -AsHciOUName "<OU name or distinguished name including the domain components>"

4. When prompted, provide the username and password for the deployment.
a. Make sure that only the username is provided. The name should not include the domain name, for example, contoso\username. The username must be between 1 and 64 characters, contain only letters, numbers, hyphens, and underscores,
b. Make sure that the password meets complexity and length requirements. Use a
password that is at least 12 characters long and contains: a lowercase
character, an uppercase character, a numeral, and a special character.

Here is a sample output from a successful completion of the script:

PS C:\work> $password = ConvertTo-SecureString '<password>' -AsPlainText -Force
PS C:\work> $user = "ms309deployuser"
PS C:\work> $credential = New-Object System.Management.Automation.PSCredential ($user, $password)
PS C:\work> New-HciAdObjectsPreCreation -AzureStackLCMUserCredential $credential -AsHciOUName "OU=ms309,DC=PLab8,DC=nttest,DC=microsoft,DC=com"
PS C:\work>

5. Verify that the OU is created. If using a Windows Server client, go to Server Manager > Tools > Active Directory Users and Computers.

6. An OU with the specified name should be created and within that OU, you'll see
the deployment user.

7 Note

If you are repairing a single server, do not delete the existing OU. If the server
volumes are encrypted, deleting the OU removes the BitLocker recovery keys.

Next steps
Download Azure Stack HCI, version 23H2 software on each server in your cluster.



Download Azure Stack HCI, version
23H2 software
Article • 01/31/2024

Applies to: Azure Stack HCI, version 23H2

This article describes how to download the Azure Stack HCI, version 23H2 software from
the Azure portal.

The first step in deploying Azure Stack HCI, version 23H2 is to download Azure Stack
HCI software from the Azure portal. The software download includes a free 60-day trial.
However, if you've purchased Azure Stack HCI Integrated System solution hardware
from the Azure Stack HCI Catalog through your preferred Microsoft hardware partner,
the Azure Stack HCI software should be pre-installed. In that case, you can skip this step
and move on to Register your servers and assign permissions for Azure Stack HCI
deployment.

Prerequisites
Before you begin the download of Azure Stack HCI, version 23H2 software, ensure that
you have the following prerequisites:

An Azure account. If you don’t already have an Azure account, first create an
account .

An Azure subscription. You can use an existing subscription of any type:


Free account with Azure credits for students or Visual Studio subscribers .
Pay-as-you-go subscription with credit card.
Subscription obtained through an Enterprise Agreement (EA).
Subscription obtained through the Cloud Solution Provider (CSP) program.

Download the Azure Stack HCI software from the Azure portal

Follow these steps to download the Azure Stack HCI software:

1. If not already signed in, sign in to the Azure portal with your Azure account
credentials.
2. In the Azure portal search bar at the top, enter Azure Stack HCI. As you type, the
portal starts suggesting related resources and services based on your input. Select
Azure Stack HCI under the Services category.

After you select Azure Stack HCI, you're directed to the Azure Stack HCI Get
started page, with the Get started tab selected by default.

3. On the Get started tab, under the Download software tile, select Download Azure
Stack HCI.

4. On the Download Azure Stack HCI page on the right, do the following:

a. Choose software version. By default, the latest generally available version of Azure Stack HCI is selected.

b. Choose language from the dropdown list. Select English to download the
English version of the software.
We recommend that you use the ISO for the language you wish to install in. You
should download a VHDX only if you are performing virtual deployments. To
download the VHDX in English, select English VHDX from the dropdown list.

c. Select the Azure Stack HCI, version 23H2 option.

d. Review service terms and privacy notice.

e. Select the license terms and privacy notice checkbox.

f. Select the Download Azure Stack HCI button. This action begins the download.
Use the downloaded ISO file to install the software on each server that you want
to cluster.

Next steps
Install the Azure Stack HCI, version 23H2 operating system.
Install the Azure Stack HCI, version
23H2 operating system
Article • 02/27/2024

Applies to: Azure Stack HCI, version 23H2

This article describes the steps needed to install the Azure Stack HCI, version 23H2
operating system locally on each server in your cluster.

Prerequisites
Before you begin, make sure you do the following steps:

Satisfy the prerequisites.


Prepare your Active Directory environment.
Make sure to keep a password handy to use to sign in to the operating system.
This password must conform to the length and complexity requirements. Use a
password that is at least 12 characters long and contains a lowercase character, an
uppercase character, a numeral, and a special character.

Boot and install the operating system


To install the Azure Stack HCI, version 23H2 operating system, follow these steps:

1. Download the Azure Stack HCI operating system from the Azure portal.

2. Start the Install Azure Stack HCI wizard on the system drive of the server where
you want to install the operating system.

3. Choose the language to install or accept the default language settings, select Next,
and then on next page of the wizard, select Install now.

4. On the Applicable notices and license terms page, review the license terms, select
the I accept the license terms checkbox, and then select Next.

5. On the Which type of installation do you want? page, select Custom: Install the
newer version of Azure Stack HCI only (advanced).

Note

Upgrade installations are not supported in this release of the operating system.

6. On the Where do you want to install Azure Stack HCI? page, confirm the drive on
which the operating system is installed, and then select Next.

7 Note
If the hardware was used before, run diskpart to clean the OS drive. For more
information, see how to use diskpart. Also see the instructions in Clean
drives.

7. The Installing Azure Stack HCI page displays to show status on the process.

7 Note

The installation process restarts the operating system twice to complete the
process, and displays notices on starting services before opening an
Administrator command prompt.

8. At the Administrator command prompt, select Ok to change the user's password before signing in to the operating system, then press Enter.

9. At the Enter new credential for Administrator prompt, enter a new password.
) Important

Make sure that the local administrator password follows Azure password
length and complexity requirements. Use a password that is at least 12
characters long and contains a lowercase character, an uppercase character, a
numeral, and a special character.

Enter the password again to confirm it, then press Enter.

10. At the Your password has been changed confirmation prompt, press Enter.

Now you're ready to use the Server Configuration tool (SConfig) to perform important
tasks.

Configure the operating system using SConfig


You can use SConfig to configure Azure Stack HCI version 23H2 after installation.

To use SConfig, sign in to the server running the Azure Stack HCI operating system. This
could be locally via a keyboard and monitor, or using a remote management (headless
or BMC) controller, or Remote Desktop. The SConfig tool opens automatically when you
sign in to the server.


Follow these steps to configure the operating system using SConfig:

1. Install the latest drivers and firmware as per the instructions provided by your
hardware manufacturer. You can use SConfig to run driver installation apps. After
the installation is complete, restart your servers.

2. Configure networking as per your environment. You can configure the following
optional settings:

Configure VLAN IDs for the management network. For more information, see
Management VLAN ID and Management VLAN ID with a virtual switch.
Configure DHCP for the management network. For more information, see
DHCP IP assignment.
Configure a proxy server. For more information, see Configure proxy settings
for Azure Stack HCI, version 23H2.

3. Use the Network Settings option in SConfig to configure a valid default gateway and a DNS server. Set DNS to the DNS of the domain you're joining.

4. Configure a valid time server on each server. Validate that your server is not using
the local CMOS clock as a time source, using the following command:

Windows Command Prompt

w32tm /query /status

To configure a valid time source, run the following command:

Windows Command Prompt

w32tm /config /manualpeerlist:"ntpserver.contoso.com" /syncfromflags:manual /update

Confirm that the time is successfully synchronizing using the new time server:

Windows Command Prompt

w32tm /query /status

Once the server is domain joined, it synchronizes its time from the PDC emulator.

5. Rename all the servers using option 2 in SConfig to match what you used when
preparing Active Directory, as you won't rename the servers later.
6. (Optional) At this point, you can enable Remote Desktop Protocol (RDP) and then
RDP to each server rather than use the virtual console. This action should simplify
performing the remainder of the configuration.

7. Clean all the non-OS drives for each server that you intend to deploy. Remove any virtual media that was used when installing the OS. Also validate that no other root drives exist. A PowerShell sketch for clearing data drives follows these steps.

8. Restart the servers.

9. Set the local administrator credentials to be identical across all servers.

7 Note

Make sure that the local administrator password follows Azure password
length and complexity requirements. Use a password that is at least 12
characters long and contains a lowercase character, an uppercase character, a
numeral, and a special character.
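
As referenced in step 7, the following is a minimal sketch for clearing the non-OS data drives with built-in storage cmdlets. It assumes that every initialized disk other than the boot and system disks can be wiped; review the disk list before running anything like this, because the data on those disks is destroyed:

PowerShell

# Review the disks first.
Get-Disk | Format-Table Number, FriendlyName, IsBoot, IsSystem, PartitionStyle, Size

# Clear every initialized disk that isn't the boot or system disk. This destroys the data on those disks.
Get-Disk | Where-Object { -not $_.IsBoot -and -not $_.IsSystem -and $_.PartitionStyle -ne "RAW" } | ForEach-Object {
    $_ | Set-Disk -IsOffline:$false
    $_ | Set-Disk -IsReadOnly:$false
    $_ | Clear-Disk -RemoveData -RemoveOEM -Confirm:$false
}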

Install required Windows roles


1. Install the Hyper-V role. Run the following command on each server of the cluster:

PowerShell

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

Your servers will restart; this takes a few minutes.

You are now ready to register the Azure Stack HCI server with Azure Arc and assign
permissions for deployment.

Next steps
(Optional) Configure proxy settings for Azure Stack HCI, version 23H2.
Register Azure Stack HCI servers in your system with Azure Arc and assign
permissions.
Register your servers and assign
permissions for Azure Stack HCI, version
23H2 deployment
Article • 03/15/2024

Applies to: Azure Stack HCI, version 23H2

This article describes how to register your Azure Stack HCI servers and then set up the
required permissions to deploy an Azure Stack HCI, version 23H2 cluster.

Prerequisites
Before you begin, make sure you've completed the following prerequisites:

Satisfy the prerequisites and complete deployment checklist.

Prepare your Active Directory environment.

Install the Azure Stack HCI, version 23H2 operating system on each server.

Register your subscription with the required resource providers (RPs). You can use
either the Azure portal or Azure PowerShell to register (see the PowerShell sketch
after this list). You need to be an owner or contributor on your subscription to
register the following RPs:
Microsoft.HybridCompute
Microsoft.GuestConfiguration
Microsoft.HybridConnectivity
Microsoft.AzureStackHCI

7 Note

The assumption is that the person registering the Azure subscription with the
resource providers is a different person than the one who is registering the
Azure Stack HCI servers with Arc.

If you're registering the servers as Arc resources, make sure that you have the
following permissions on the resource group where the servers were provisioned:
Azure Connected Machine Onboarding
Azure Connected Machine Resource Administrator
To verify that you have these roles, follow these steps in the Azure portal:

1. Go to the subscription that you use for the Azure Stack HCI deployment.
2. Go to the resource group where you're planning to register the servers.
3. In the left-pane, go to Access Control (IAM).
4. In the right pane, go to Role assignments. Verify that you have the Azure
Connected Machine Onboarding and Azure Connected Machine Resource
Administrator roles assigned.
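If you prefer to register the resource providers with PowerShell rather than the Azure
portal, here's a minimal sketch. It assumes the Az PowerShell module is installed on your
management computer and that the signed-in account is an owner or contributor on the
subscription; the subscription ID is a placeholder.

PowerShell

#Connect to the subscription you use for the Azure Stack HCI deployment
Connect-AzAccount -SubscriptionId "<Subscription ID>"

#Register the resource providers required for deployment
$providers = "Microsoft.HybridCompute", "Microsoft.GuestConfiguration",
             "Microsoft.HybridConnectivity", "Microsoft.AzureStackHCI"

foreach ($rp in $providers) {
    Register-AzResourceProvider -ProviderNamespace $rp
}

#Registration is asynchronous; repeat this check until each provider reports Registered
foreach ($rp in $providers) {
    Get-AzResourceProvider -ProviderNamespace $rp |
        Select-Object -First 1 ProviderNamespace, RegistrationState
}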

Register servers with Azure Arc

) Important

Run these steps on every Azure Stack HCI server that you intend to cluster.

1. Install the Arc registration script from PSGallery.

PowerShell

#Register PSGallery as a trusted repo


Register-PSRepository -Default -InstallationPolicy Trusted

#Install Arc registration script from PSGallery


Install-Module AzsHCI.ARCinstaller

#Install required PowerShell modules in your node for registration


Install-Module Az.Accounts -Force
Install-Module Az.ConnectedMachine -Force
Install-Module Az.Resources -Force

Here's a sample output of the installation:

Output

PS C:\Users\SetupUser> Install-Module -Name AzSHCI.ARCInstaller


NuGet provider is required to continue
PowerShellGet requires NuGet provider version '2.8.5.201' or newer to
interact with NuGet-based repositories. The NuGet provider must be
available in 'C:\Program Files\PackageManagement\ProviderAssemblies' or
'C:\Users\SetupUser\AppData\Local\PackageManagement\ProviderAssemblies'
. You can also install the NuGet provider by
running 'Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201
-Force'. Do you want PowerShellGet to install
and import the NuGet provider now?
[Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): Y
PS C:\Users\SetupUser>
PS C:\Users\SetupUser> Install-Module Az.Accounts -Force
PS C:\Users\SetupUser> Install-Module Az.ConnectedMachine -Force
PS C:\Users\SetupUser> Install-Module Az.Resources -Force

2. Set the parameters. The script takes in the following parameters:

SubscriptionID - The ID of the subscription used to register your servers with Azure Arc.

TenantID - The tenant ID used to register your servers with Azure Arc. Go to your
Microsoft Entra ID and copy the tenant ID property.

ResourceGroup - The resource group precreated for Arc registration of the servers. A
resource group is created if one doesn't exist.

Region - The Azure region used for registration. See the supported regions that can be used.

AccountID - The user who registers and deploys the cluster.

DeviceCode - The device code displayed in the console at
https://microsoft.com/devicelogin and used to sign in to the device.

PowerShell

#Define the subscription where you want to register your server as Arc
device
$Subscription = "YourSubscriptionID"

#Define the resource group where you want to register your server as
Arc device
$RG = "YourResourceGroupName"

#Define the region you will use to register your server as Arc device
$Region = "eastus"

#Define the tenant you will use to register your server as Arc device
$Tenant = "YourTenantID"

Here's a sample output of the parameters:

Output

PS C:\Users\SetupUser> $Subscription = "<Subscription ID>"


PS C:\Users\SetupUser> $RG = "myashcirg"
PS C:\Users\SetupUser> $Tenant = "<Tenant ID>"
PS C:\Users\SetupUser> $Region = "eastus"

3. Connect to your Azure account and set the subscription. Open a browser on the
client that you're using to connect to the server, go to
https://microsoft.com/devicelogin, and enter the code shown in the PowerShell
output to authenticate. Then get the access token and account ID for the
registration.

PowerShell

#Connect to your Azure account and Subscription


Connect-AzAccount -SubscriptionId $Subscription -TenantId $Tenant -
DeviceCode

#Get the Access Token for the registration


$ARMtoken = (Get-AzAccessToken).Token

#Get the Account ID for the registration


$id = (Get-AzContext).Account.Id

Here's a sample output of setting the subscription and authentication:

Output

PS C:\Users\SetupUser> Connect-AzAccount -SubscriptionId $Subscription


-TenantId $Tenant -DeviceCode
WARNING: To sign in, use a web browser to open the page
https://microsoft.com/devicelogin and enter the code A44KHK5B5
to authenticate.

Account SubscriptionName TenantId


Environment
------- ---------------- -------- ---
--------
[email protected] AzureStackHCI_Content <Tenant ID>
AzureCloud

PS C:\Users\SetupUser> $ARMtoken = (Get-AzAccessToken).Token


PS C:\Users\SetupUser> $id = (Get-AzContext).Account.Id

4. Finally run the Arc registration script. The script takes a few minutes to run.

PowerShell

#Invoke the registration script. Use a supported region.


Invoke-AzStackHciArcInitialization -SubscriptionID $Subscription -
ResourceGroup $RG -TenantID $Tenant -Region $Region -Cloud "AzureCloud"
-ArmAccessToken $ARMtoken -AccountID $id

If you're accessing the internet via a proxy server, you need to pass the -proxy
parameter and provide the proxy server as http://<Proxy server FQDN or IP
address>:Port when running the script.

Here's a sample output of a successful registration of your servers:

Output

PS C:\DeploymentPackage> Invoke-AzStackHciArcInitialization -
SubscriptionID $Subscription -ResourceGroup $RG -TenantID $Tenant -
Region $Region -Cloud "AzureCloud" -ArmAccessToken $ARMtoken -AccountID
$id -Force
Installing and Running Azure Stack HCI Environment Checker
All the environment validation checks succeeded
Installing Hyper-V Management Tools
Starting AzStackHci ArcIntegration Initialization
Installing Azure Connected Machine Agent
Total Physical Memory: 588,419 MB
PowerShell version: 5.1.25398.469
.NET Framework version: 4.8.9032
Downloading agent package from
https://aka.ms/AzureConnectedMachineAgent to
C:\Users\AzureConnectedMachineAgent.msi
Installing agent package
Installation of azcmagent completed successfully
0
Connecting to Azure using ARM Access Token
Connected to Azure successfully
Microsoft.HybridCompute RP already registered, skipping registration
Microsoft.GuestConfiguration RP already registered, skipping
registration
Microsoft.HybridConnectivity RP already registered, skipping
registration
Microsoft.AzureStackHCI RP already registered, skipping registration
INFO Connecting machine to Azure... This might take a few minutes.
INFO Testing connectivity to endpoints that are needed to connect to
Azure... This might take a few minutes.
20% [==> ]
30% [===> ]
INFO Creating resource in Azure...
Correlation ID=<Correlation ID>=/subscriptions/<Subscription
ID>/resourceGroups/myashci-
rg/providers/Microsoft.HybridCompute/machines/ms309
60% [========> ]
80% [===========> ]
100% [===============]
INFO Connected machine to Azure
INFO Machine overview page: https://portal.azure.com/
Connected Azure ARC agent successfully
Successfully got the content from IMDS endpoint
Successfully got Object Id for Arc Installation <Object ID>
$Checking if Azure Stack HCI Device Management Role is assigned already
for SPN with Object ID: <Object ID>
Assigning Azure Stack HCI Device Management Role to Object : <Object
ID>
$Successfully assigned Azure Stack HCI Device Management Role to Object
Id <Object ID>
Successfully assigned permission Azure Stack HCI Device Management
Service Role to create or update Edge Devices on the resource group
$Checking if Azure Connected Machine Resource Manager is assigned
already for SPN with Object ID: <Object ID>
Assigning Azure Connected Machine Resource Manager to Object : <Object
ID>
$Successfully assigned Azure Connected Machine Resource Manager to
Object Id <Object ID>
Successfully assigned the Azure Connected Machine Resource Manager role
on the resource group
$Checking if Reader is assigned already for SPN with Object ID: <Object
ID>
Assigning Reader to Object : <Object ID>
$Successfully assigned Reader to Object Id <Object ID>
Successfully assigned the reader Resource Manager role on the resource
group
Installing TelemetryAndDiagnostics Extension
Successfully triggered TelemetryAndDiagnostics Extension installation
Installing DeviceManagement Extension
Successfully triggered DeviceManagementExtension installation
Installing LcmController Extension
Successfully triggered LCMController Extension installation
Please verify that the extensions are successfully installed before
continuing...

Log location:
C:\Users\Administrator\.AzStackHci\AzStackHciEnvironmentChecker.log
Report location:
C:\Users\Administrator\.AzStackHci\AzStackHciEnvironmentReport.json
Use -Passthru parameter to return results as a PSObject.

5. After the script completes successfully on all the servers, verify that:

a. Your servers are registered with Arc. Go to the Azure portal and then go to the
resource group associated with the registration. The servers appear within the
specified resource group as Machine - Azure Arc type resources.

b. The mandatory Azure Stack HCI extensions are installed on your servers. From
the resource group, select the registered server. Go to Extensions. The
mandatory extensions show up in the right pane.
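You can also spot-check the registration from PowerShell. Here's a minimal sketch that
uses the Az.ConnectedMachine module installed earlier in this article; the resource group
name is a placeholder.

PowerShell

#List the Arc machines registered in the resource group
Get-AzConnectedMachine -ResourceGroupName "<resource group used for registration>" |
    Format-Table Name, Status, AgentVersion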

Assign required permissions for deployment


This section describes how to assign Azure permissions for deployment from the Azure
portal.

1. In the Azure portal, go to the subscription used to register the servers. In the left
pane, select Access control (IAM). In the right pane, select + Add and from the
dropdown list, select Add role assignment.


2. Go through the tabs and assign the following role permissions to the user who
deploys the cluster:

Azure Stack HCI Administrator


Reader

3. In the Azure portal, go to the resource group used to register the servers on your
subscription. In the left pane, select Access control (IAM). In the right pane, select
+ Add and from the dropdown list, select Add role assignment.

4. Go through the tabs and assign the following permissions to the user who deploys
the cluster:

Key Vault Data Access Administrator: This permission is required to manage


data plane permissions to the key vault used for deployment.
Key Vault Secrets Officer: This permission is required to read and write
secrets in the key vault used for deployment.
Key Vault Contributor: This permission is required to create the key vault
used for deployment.
Storage Account Contributor: This permission is required to create the
storage account used for deployment.

5. In the right pane, go to Role assignments. Verify that the deployment user has all
the configured roles.
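If you prefer to assign these roles with PowerShell instead of the portal, here's a
minimal sketch. It assumes the Az.Resources module and an account that's allowed to
create role assignments; the user principal name, subscription ID, and resource group
name are placeholders.

PowerShell

#Placeholders for your environment
$user    = "<deployment user UPN>"
$sub     = "<Subscription ID>"
$rgScope = "/subscriptions/$sub/resourceGroups/<resource group used for registration>"

#Subscription-scope roles for the deployment user
New-AzRoleAssignment -SignInName $user -RoleDefinitionName "Azure Stack HCI Administrator" -Scope "/subscriptions/$sub"
New-AzRoleAssignment -SignInName $user -RoleDefinitionName "Reader" -Scope "/subscriptions/$sub"

#Resource group-scope roles for the deployment user
"Key Vault Data Access Administrator", "Key Vault Secrets Officer",
"Key Vault Contributor", "Storage Account Contributor" | ForEach-Object {
    New-AzRoleAssignment -SignInName $user -RoleDefinitionName $_ -Scope $rgScope
}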

Next steps
After setting up the first server in your cluster, you're ready to deploy using Azure portal:

Deploy using Azure portal.

Deploy an Azure Stack HCI, version
23H2 system using the Azure portal
Article • 02/28/2024

Applies to: Azure Stack HCI, version 23H2

This article helps you deploy an Azure Stack HCI, version 23H2 system using the Azure
portal.

Prerequisites
Completion of Register your servers with Azure Arc and assign deployment
permissions.
For three-node clusters, the network adapters that carry the in-cluster storage
traffic must be connected to a network switch. Deploying three-node clusters with
storage network adapters that are directly connected to each server without a
switch isn't supported in this preview.

Start the wizard and fill out the basics


1. Open a web browser, navigate to Azure portal . Search for Azure Arc. Select
Azure Arc and then go to Infrastructure | Azure Stack HCI. On the Get started tab,
select Deploy cluster.

2. Select the Subscription and Resource group in which to store this system's
resources.

All resources in the Azure subscription are billed together.

3. Enter the Cluster name used for this Azure Stack HCI system when Active Directory
Domain Services (AD DS) was prepared for this deployment.

4. Select the Region to store this system's Azure resources. See System requirements
for a list of supported regions.

We don't transfer a lot of data so it's OK if the region isn't close.

5. Select or create an empty Key vault to securely store secrets for this system, such
as cryptographic keys, local admin credentials, and BitLocker recovery keys.
Key Vault adds cost in addition to the Azure Stack HCI subscription. For details, see
Key Vault Pricing .

6. Select the server or servers that make up this Azure Stack HCI system.

7. Select Validate, wait for the green validation checkbox to appear, and then select
Next: Configuration.

The validation process checks that each server is running the same exact version of
the OS, has the correct Azure extensions, and has matching (symmetrical) network
adapters.

Specify the deployment settings


Choose whether to create a new configuration for this system or to load deployment
settings from a template–either way you'll be able to review the settings before you
deploy:

1. Choose the source of the deployment settings:

New configuration - Specify all of the settings to deploy this system.


Template spec - Load the settings to deploy this system from a template
spec stored in your Azure subscription.
Quickstart template - This setting isn't available in this release.

2. Select Next: Networking.

Specify network settings


1. For multi-node clusters, select whether the cluster is cabled to use a network
switch for the storage network traffic:

No switch for storage - For two-node clusters with storage network adapters
that connect the two servers directly without going through a switch.
Network switch for storage traffic - For clusters with storage network
adapters connected to a network switch. This also applies to clusters that use
converged network adapters that carry all traffic types including storage.

2. Choose traffic types to group together on a set of network adapters–and which


types to keep physically isolated on their own adapters.

There are three types of traffic we're configuring:

Management traffic between this system, your management PC, and Azure;
also Storage Replica traffic
Compute traffic to or from VMs and containers on this system
Storage (SMB) traffic between servers in a multi-node cluster

Select how you intend to group the traffic:

Group all traffic - If you're using network switches for storage traffic you can
group all traffic types together on a set of network adapters.

Group management and compute traffic - This groups management and


compute traffic together on one set of adapters while keeping storage traffic
isolated on dedicated high-speed adapters.

Group compute and storage traffic - If you're using network switches for
storage traffic, you can group compute and storage traffic together on your
high-speed adapters while keeping management traffic isolated on another
set of adapters.

This is commonly used for private multi-access edge compute (MEC) systems.

Custom configuration - This lets you group traffic differently, such as


carrying each traffic type on its own set of adapters.

 Tip

If you're deploying a single server that you plan to add servers to later, select
the network traffic groupings you want for the eventual cluster. Then when
you add servers they automatically get the appropriate settings.

3. For each group of traffic types (known as an intent), select at least one unused
network adapter (we recommend at least two matching adapters for redundancy).

Make sure to use high-speed adapters for the intent that includes storage traffic.

4. For the storage intent, enter the VLAN ID set on the network switches used for
each storage network.

5. To customize network settings for an intent, select Customize network settings


and provide the following information:

Storage traffic priority. This specifies the Priority Flow Control where Data
Center Bridging (DCB) is used.
Cluster traffic priority.
Storage traffic bandwidth reservation. This parameter defines the bandwidth
allocation in % for the storage traffic.
Adapter properties such as Jumbo frame size (in bytes) and RDMA protocol
(which can now be disabled).

6. Using the Starting IP and Ending IP (and related) fields, allocate a contiguous
block of at least six static IP addresses on your management network's subnet,
omitting addresses already used by the servers.

These IPs are used by Azure Stack HCI and internal infrastructure (Arc Resource
Bridge) that's required for Arc VM management and AKS Hybrid.

7. Select Next: Management.

Specify management settings


1. Optionally edit the suggested Custom location name that helps users identify this
system when creating resources such as VMs on it.

2. Select an existing Storage account or create a new Storage account to store the
cluster witness file.

When selecting an existing account, the dropdown list filters to display only the
storage accounts contained in the specified resource group for deployment. You
can use the same storage account with multiple clusters; each witness uses less
than a kilobyte of storage.

3. Enter the Active Directory Domain you're deploying this system into.

This must be the same fully qualified domain name (FQDN) used when the Active
Directory Domain Services (AD DS) domain was prepared for deployment.

4. Enter the Computer name prefix used by the AsHciDeploymentPrefix parameter


when the domain was prepared for deployment (some use the same name as the
OU name).

5. Enter the OU created for this deployment. For example:


OU=HCI01,DC=contoso,DC=com

6. Enter the Deployment account credentials.


This domain user account was created when the domain was prepared for
deployment.

7. Enter the Local administrator credentials for the servers.

The credentials must be identical on all servers in the system. If the current
password doesn't meet the complexity requirements (12+ characters long, a
lowercase and uppercase character, a numeral, and a special character), you must
change it on all servers before proceeding.

8. Select Next: Security.

Set the security level


1. Select the security level for your system's infrastructure:
Recommended security settings - Sets the highest security settings.
Customized security settings - Lets you turn off security settings.

2. Select Next: Advanced.

Optionally change advanced settings and apply tags
1. Choose whether to create volumes for workloads now, saving time creating
volumes and storage paths for VM images. You can create more volumes later.

Create workload volumes and required infrastructure volumes


(Recommended) - Creates one thinly provisioned volume and storage path
per server for workloads to use. This is in addition to the required one
infrastructure volume per server.

Create required infrastructure volumes only - Creates only the required one
infrastructure volume per server. You'll need to later create workload volumes
and storage paths.

Use existing data drives (single servers only) - Preserves existing data drives
that contain a Storage Spaces pool and volumes.

To use this option you must be using a single server and have already created
a Storage Spaces pool on the data drives. You also might need to later create
an infrastructure volume and a workload volume and storage path if you
don't already have them.

) Important

Don't delete the infrastructure volumes created during deployment.

Here's a summary of the volumes that are created based on the number of servers
in your system. To change the resiliency setting of the workload volumes, delete
them and recreate them, being careful not to delete the infrastructure volumes.

# Servers         Volume resiliency    # Infrastructure volumes    # Workload volumes

Single server     Two-way mirror       1                           1

Two servers       Two-way mirror       1                           2

Three servers +   Three-way mirror     1                           1 per server

2. Select Next: Tags.

3. Optionally add a tag to the Azure Stack HCI resource in Azure.

Tags are name/value pairs you can use to categorize resources. You can then view
consolidated billing for all resources with a given tag.

4. Select Next: Validation. Select Start validation.

5. The validation takes about 15 minutes for a one- or two-server deployment and
longer for bigger deployments. Monitor the validation progress.

Validate and deploy the system


1. After the validation is complete, review the validation results.

If the validation has errors, resolve any actionable issues, and then select Next:
Review + create.

Don't select Try again while validation tasks are running as doing so can provide
inaccurate results in this release.

2. Review the settings that will be used for deployment and then select Review +
create to deploy the system.

The Deployments page then appears, which you can use to monitor the deployment
progress.

If the progress doesn't appear, wait for a few minutes and then select Refresh. This page
may show up as blank for an extended period of time owing to an issue in this release,
but the deployment is still running if no errors show up.

Once the deployment starts, the first step in the deployment: Begin cloud deployment
can take 45-60 minutes to complete. The total deployment time for a single server is
around 1.5-2 hours while a two-node cluster takes about 2.5 hours to deploy.
Verify a successful deployment
To confirm that the system and all of its Azure resources were successfully deployed

1. In the Azure portal, navigate to the resource group into which you deployed the
system.

2. On the Overview > Resources, you should see the following:

Number of resources       Resource type

1 per server              Machine - Azure Arc

1                         Azure Stack HCI

1                         Arc Resource Bridge

1                         Key vault

1                         Custom location

2*                        Storage account

1 per workload volume     Azure Stack HCI storage path - Azure Arc

* One storage account is created for the cloud witness and one for key vault audit
logs. These accounts are locally redundant storage (LRS) account with a lock
placed on them.
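To spot-check the same resources from PowerShell, here's a minimal sketch that assumes
the Az.Resources module; the resource group name is a placeholder.

PowerShell

#List the resources created by the deployment
Get-AzResource -ResourceGroupName "<deployment resource group>" |
    Sort-Object ResourceType |
    Format-Table Name, ResourceType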

Rerun deployment
If your deployment fails, you can rerun the deployment. In your cluster, go to
Deployments and in the right-pane, select Rerun deployment.

Post deployment tasks


For security reasons, Remote Desktop Protocol (RDP) is disabled and the local
administrator renamed after the deployment completes on Azure Stack HCI systems. For
more information on the renamed administrator, go to Local builtin user accounts.

You may need to connect to the system via RDP to deploy workloads. Follow these steps
to connect to your cluster via a remote PowerShell session and then enable RDP:

1. Run PowerShell as administrator on your management PC.

2. Connect to your Azure Stack HCI system via a remote PowerShell session.

PowerShell

$ip="<IP address of the Azure Stack HCI server>"


Enter-PSSession -ComputerName $ip -Credential get-Credential

3. Enable RDP.

PowerShell
Enable-ASRemoteDesktop

7 Note

As per the security best practices, keep the RDP access disabled when not
needed.

4. Disable RDP.

PowerShell

Disable-ASRemoteDesktop

Next steps
If you didn't create workload volumes during deployment, create workload
volumes and storage paths for each volume. For details, see Create volumes on
Azure Stack HCI and Windows Server clusters and Create storage path for Azure
Stack HCI.
Get support for Azure Stack HCI deployment issues.
Deploy an Azure Stack HCI, version
23H2 via Azure Resource Manager
deployment template
Article • 01/31/2024

Applies to: Azure Stack HCI, version 23H2

This article details how to use an Azure Resource Manager template (ARM template) in
the Azure portal to deploy an Azure Stack HCI system in your environment. The article
also contains the prerequisites and the preparation steps required to begin the deployment.

) Important

ARM template deployment of Azure Stack HCI, version 23H2 systems is targeted for
deployments-at-scale. The intended audience for this deployment are IT
Administrators who have experience deploying Azure Stack HCI clusters. We
recommend that you deploy a version 23H2 system via the Azure portal first and
then perform subsequent deployments via the ARM template.

Prerequisites
Completion of Register your servers with Azure Arc and assign deployment
permissions. Make sure that:
All the mandatory extensions have installed successfully. The mandatory
extensions include: Azure Edge Lifecycle Manager, Azure Edge Device
Management, and Telemetry and Diagnostics.
All servers are running the same version of OS.
All the servers have the same network adapter configuration.

Step 1: Prepare Azure resources


Follow these steps to prepare the Azure resources you need for the deployment:

Create a service principal and client secret


To authenticate your cluster, you need to create a service principal and a corresponding
Client secret. You must also assign Azure Resource Bridge Deployment Role to the service
principal.

Create a service principal


Follow the steps in Create a Microsoft Entra application and service principal that can
access resources via Azure portal to create the service principal and assign the roles.
Alternatively, use the PowerShell procedure to Create an Azure service principal with
Azure PowerShell.

The steps are also summarized here:

1. Sign in to the Microsoft Entra admin center as at least a Cloud Application


Administrator. Browse to Identity > Applications > App registrations then select
New registration.

2. Provide a Name for the application, select a Supported account type and then
select Register.

3. Once the service principal is created, go to the Overview page. Copy the
Application (client) ID for this service principal. You encode and use this value
later.

Create a client secret


1. Go to the service principal that you created and browse to Certificates & secrets >
Client secrets.

2. Select + New client secret.

3. Add a Description for the client secret and provide a timeframe when it Expires.
Select Add.

4. Copy the client secret value as you encode and use it later.

7 Note

For the application client ID, you will need its secret value. Client secret values
can't be viewed except immediately after creation. Be sure to save this value
before leaving the page.
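If you prefer to create the service principal and client secret with PowerShell, here's a
minimal sketch that assumes the Az.Resources module. The display name and the role
assignment scope are placeholders, and the secret is only shown at creation time, so
capture it immediately.

PowerShell

#Create the app registration, service principal, and a client secret in one call
$sp = New-AzADServicePrincipal -DisplayName "<service principal display name>"

$clientId          = $sp.AppId
$clientSecretValue = $sp.PasswordCredentials.SecretText

#Assign the Azure Resource Bridge Deployment Role to the service principal
New-AzRoleAssignment -ApplicationId $clientId `
    -RoleDefinitionName "Azure Resource Bridge Deployment Role" `
    -Scope "/subscriptions/<Subscription ID>/resourceGroups/<deployment resource group>"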

Create a cloud witness storage account


First, create a storage account to serve as a cloud witness. You then need to get the
access key for this storage account, and then use it in an encoded format with the ARM
deployment template.

Follow these steps to get and encode the access key for the ARM deployment template:

1. In the Azure portal, create a storage account in the same resource group that you
would use for deployment.

2. Once the Storage account is created, verify that you can see the account in the
resource group.

3. Go to the storage account that you created and then go to Access keys.

4. For key1, Key, select Show. Select the Copy to clipboard button at the right side of
the Key field.

After you copy the key, select Hide.
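If you'd rather create the cloud witness storage account and retrieve its key with
PowerShell, here's a minimal sketch that assumes the Az.Storage module; the resource
group, account name, and region are placeholders.

PowerShell

#Placeholders for your environment
$rg = "<deployment resource group>"
$sa = "<cloud witness storage account name>"

#Create a locally redundant StorageV2 account
New-AzStorageAccount -ResourceGroupName $rg -Name $sa -Location "<region>" `
    -SkuName Standard_LRS -Kind StorageV2

#Retrieve key1 to encode in the next step
$key = (Get-AzStorageAccountKey -ResourceGroupName $rg -Name $sa)[0].Value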

Encode parameter values


1. On a management computer, run PowerShell as administrator. Encode the copied
Key value string with the following script:

PowerShell

$secret="<Key value string coped earlier>"


[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($secret
))
The encoded output looks similar to this, and is based on the secret value for the
cloud witness storage account for your environment:

PowerShell

ZXhhbXBsZXNlY3JldGtleXZhbHVldGhhdHdpbGxiZWxvbmdlcnRoYW50aGlzYW5kb3V0cHV
0d2lsbGxvb2tkaWZmZXJlbnQ=

2. The encoded output value you generate is what the ARM deployment template
expects. Make a note of this value and the name of the storage account. You'll use
these values later in the deployment process.

In addition to the storage witness access key, you also need to similarly encode the
values for the following parameters.

localaccountname, localaccountpassword - Username and password for the local
administrator for all the servers in your cluster. The credentials are identical for all
the servers in your system.

domainaccountname, domainaccountpassword - The new username and password that were
created with the appropriate permissions for deployment during the Active Directory
preparation step for the AzureStackLCMUserCredential object. This account is the same
as the user account used by the Azure Stack HCI deployment. For more information, see
Prepare the Active Directory to get these credentials.

clientId, clientSecretValue - The application (client) ID for the SPN that you created
as a prerequisite to this deployment and the corresponding client secret value for the
application ID.

Run the PowerShell script used in the earlier step to encode these values:

Local account password. This corresponds to the localAdminSecretValue in the
parameters JSON. Encode localaccountname:localaccountpassword to get this value
for the template.
Domain account password. This corresponds to the domainAdminSecretValue in the
parameters JSON. Encode domainaccountname:domainaccountpassword to get this
value for the template.
Application client ID secret value. This corresponds to the arbDeploymentSpnValue
in the parameters JSON. Encode clientId:clientSecretValue to get this value for
the template.
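Here's a minimal sketch of encoding these three values with the same Base64 approach
shown earlier; the account names, passwords, client ID, and secret shown are placeholders
for your environment.

PowerShell

#Helper that Base64-encodes a string, matching the earlier example
function ConvertTo-Base64String {
    param([string]$Value)
    [Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($Value))
}

#localAdminSecretValue in the parameters JSON
$localAdminSecretValue  = ConvertTo-Base64String "localaccountname:localaccountpassword"

#domainAdminSecretValue in the parameters JSON
$domainAdminSecretValue = ConvertTo-Base64String "domainaccountname:domainaccountpassword"

#arbDeploymentSpnValue in the parameters JSON
$arbDeploymentSpnValue  = ConvertTo-Base64String "clientId:clientSecretValue"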

Step 2: Assign resource permissions


You need to create and assign the needed resource permissions before you use the
deployment template.

Verify access to the resource group

Verify access to the resource group for your registered Azure Stack HCI servers as
follows:

1. In Azure portal, go to the appropriate resource group.

2. Select Access control (IAM) from the left-hand side of the screen and then select
Check access.

3. In the Check access, input or select the following:

a. Select Managed identity.

b. Select the appropriate subscription from the drop-down list.

c. Select All system-assigned managed identities.

d. Filter the list by typing the prefix and name of the registered server(s) for this
deployment. Select one of the servers in your Azure Stack HCI cluster.

e. Under Current role assignments, verify the selected server has the following
roles enabled:
Azure Connected Machine Resource Manager.

Azure Stack HCI Device Management Role.

Reader.

f. Select the X on the upper right to go back to the server selection screen.

4. Select another server in your Azure Stack HCI cluster. Verify the selected server has
the same roles enabled as you verified on the earlier server.

Add access to the resource group


Add access to the resource group for your registered Azure Stack HCI servers as follows:

1. Go to the appropriate resource group for your Azure Stack HCI environment.

2. Select Access control (IAM) from the left-hand side of the screen.
3. Select + Add and then select Add role assignment.

4. Search for and select Azure Connected Machine Resource Manager. Select Next.

5. Leave the selection on User, group, or service principal. Select + Select members.

6. Filter the list by typing Microsoft.AzureStackHCI Resource Provider . Select the


Microsoft.AzureStackHCI Resource Provider option.

7. Select Select.

8. Select Review + assign, then select this again.


9. Once the role assignment is added, you are able to see it in the Notifications
activity log:

Add the Key Vault Secrets User


1. Go to the appropriate resource group for Azure Stack HCI environment.

2. Select Access control (IAM) from the left-hand side of the screen.

3. In the right-pane, select + Add and then select Add role assignment.

4. Search for and select Key Vault Secrets User and select Next.

5. Select Managed identity.

6. Select + Select members and input the following:

a. Select the appropriate subscription.

b. Select All system-assigned managed identities.

c. Filter the list by typing the prefix and name of the registered servers for your
deployment.

d. Select both servers for your environment and choose Select.


7. Select Review + assign, then select this again.

8. Once the roles are assigned as Key Vault Secrets User, you are able to see them in
the Notifications activity log.


Verify new role assignments
Optionally verify the role assignments you created.

1. Select Access Control (IAM) Check Access to verify the role assignments you
created.

2. Go to Azure Connected Machine Resource Manager > Microsoft.AzureStackHCI


Resource Provider for the appropriate resource group for your environment.

3. Go to Key Vault Secrets User for the appropriate resource group for the first server
in your environment.

4. Go to Key Vault Secrets User for the appropriate resource group for the second
server in your environment.

Step 3: Deploy using ARM template


With all the prerequisite and preparation steps complete, you're ready to deploy using a
known good and tested ARM deployment template and corresponding parameters
JSON file. Use the parameters contained in the JSON file to fill out all values, including
the encoded values generated previously.

) Important

In this release, make sure that all the parameters contained in the JSON value are
filled out including the ones that have a null value. If there are null values, then
those need to be populated or the validation fails.

1. In Azure portal, go to Home and select + Create a resource.

2. Select Create under Template deployment (deploy using custom templates).


3. Near the bottom of the page, find the Start with a quickstart template or template
spec section. Select the Quickstart template option.


4. Use the Quickstart template (disclaimer) field to filter for the appropriate
template. Type azurestackhci/create-cluster for the filter.

5. When finished, Select template.

6. On the Basics tab, you see the Custom deployment page. You can select the
various parameters through the dropdown list or select Edit parameters.

7. Edit parameters such as network intent or storage network intent. Once the
parameters are all filled out, Save the parameters file.

 Tip

Download a sample parameters file to understand the format in which you


must provide the inputs.

8. Select the appropriate resource group for your environment.

9. Scroll to the bottom, and confirm that Deployment Mode = Validate.


10. Select Review + create.

11. On the Review + Create tab, select Create. This will create the remaining
prerequisite resources and validate the deployment. Validation takes about 10
minutes to complete.

12. Once validation is complete, select Redeploy.

13. On the Custom deployment screen, select Edit parameters. Load up the previously
saved parameters and select Save.

14. At the bottom of the workspace, change the final value in the JSON from Validate
to Deploy, where Deployment Mode = Deploy.

15. Verify that all the fields for the ARM deployment template have been filled in by
the Parameters JSON.

16. Select the appropriate resource group for your environment.

17. Scroll to the bottom, and confirm that Deployment Mode = Deploy.

18. Select Review + create.

19. Select Create. This begins deployment, using the existing prerequisite resources
that were created during the Validate step.

The Deployment screen cycles on the Cluster resource during deployment.

Once deployment initiates, there's a limited Environment Checker run, then a full
Environment Checker run, and then cloud deployment starts. After a few minutes, you
can monitor deployment in the portal.

20. In a new browser window, navigate to the resource group for your environment.
Select the cluster resource.

21. Select Deployments.

22. Refresh and watch the deployment progress from the first server (also known as
the seed server; this is the first server where you deployed the cluster). Deployment
takes between 2.5 and 3 hours. Several steps take 40-50 minutes or more.
7 Note

If you check back on the template deployment, you will see that it eventually
times out. This is a known issue, so watching Deployments is the best way to
monitor the progress of deployment.

23. The step in deployment that takes the longest is Deploy Moc and ARB Stack. This
step takes 40-45 minutes.

Once complete, the task at the top updates with status and end time.
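If you'd rather run the same template deployment from PowerShell instead of the portal's
custom deployment experience, here's a minimal sketch that assumes the Az.Resources
module; the resource group name and the template and parameter file paths are
placeholders for the quickstart template and the parameters JSON you filled out.

PowerShell

#Deploy the template with your saved parameters file
New-AzResourceGroupDeployment -ResourceGroupName "<deployment resource group>" `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterFile ".\azuredeploy.parameters.json" `
    -Verbose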

Next steps
Learn more:

About Arc VM management


About how to Deploy Azure Arc VMs on Azure Stack HCI.
Deploy an SDN infrastructure using SDN
Express for Azure Stack HCI
Article • 03/15/2024

Applies to: Azure Stack HCI, version 23H2; Windows Server 2022, Windows Server
2019, Windows Server 2016

In this article, you deploy an end-to-end Software Defined Network (SDN) infrastructure
for Azure Stack HCI, version 23H2 using SDN Express PowerShell scripts. The
infrastructure includes a highly available (HA) Network Controller (NC), and optionally, a
highly available Software Load Balancer (SLB), and a highly available Gateway (GW). The
scripts support a phased deployment, where you can deploy just the Network Controller
component to achieve a core set of functionality with minimal network requirements.

You can also deploy an SDN infrastructure using System Center Virtual Machine Manager
(VMM). For more information, see Manage SDN resources in the VMM fabric.

Before you begin


Before you begin an SDN deployment, plan out and configure your physical and host
network infrastructure. Reference the following articles:

Physical network requirements


Host network requirements
Plan a Software Defined Network infrastructure

You don't have to deploy all SDN components. See the Phased deployment section of
Plan a Software Defined Network infrastructure to determine which infrastructure
components you need, and then run the scripts accordingly.

Make sure all host servers have the Azure Stack HCI operating system installed. See
Deploy the Azure Stack HCI operating system to learn how to do this.

Requirements
The following requirements must be met for a successful SDN deployment:

All host servers must have Hyper-V enabled.


All host servers must be joined to Active Directory.
Active Directory must be prepared. For more information, see Prepare Active
Directory.
A virtual switch must be created. You can use the default switch created for Azure
Stack HCI, version 23H2. You may need to create separate switches for compute
traffic and management traffic, for example.
The physical network must be configured for the subnets and VLANs defined in
the configuration file.
The SDN Express script needs to be run from a Windows Server 2016 or later
computer.
The VHDX file specified in the configuration file must be reachable from the
computer where the SDN Express script is run.

Download the VHDX file


SDN uses a VHDX file containing either the Azure Stack HCI or Windows Server
operating system (OS) as a source for creating the SDN virtual machines (VMs).

7 Note

The version of the OS in your VHDX must match the version used by the Azure
Stack HCI Hyper-V hosts. This VHDX file is used by all SDN infrastructure
components.

To download an English-language version of the VHDX file, see Download the Azure
Stack HCI operating system from the Azure portal. Make sure to select English VHDX
from the Choose language dropdown list.

Currently, a non-English VHDX file isn't available for download. If you require a non-
English version, download the corresponding ISO file and convert it to VHDX using the
Convert-WindowsImage cmdlet. You must run this script from a Windows client computer.

You'll probably need to run this script as Administrator and modify the execution policy
for scripts using the Set-ExecutionPolicy command.

The following syntax shows an example of using Convert-WindowsImage :

PowerShell

Install-Module -Name Convert-WindowsImage


Import-Module Convert-WindowsImage

$wimpath = "E:\sources\install.wim"
$vhdpath = "D:\temp\AzureStackHCI.vhdx"
$edition=1
Convert-WindowsImage -SourcePath $wimpath -Edition $edition -VHDPath
$vhdpath -SizeBytes 500GB -DiskLayout UEFI

Download the GitHub repository


The SDN Express script files live in GitHub. The first step is to get the necessary files and
folders onto your deployment computer.

1. Go to the Microsoft SDN GitHub repository.

2. In the repository, expand the Code drop-down list, and then choose either Clone
or Download ZIP to download the SDN files to your designated deployment
computer.

7 Note

The designated deployment computer must be running Windows Server 2016


or later.

3. Extract the ZIP file and copy the SDNExpress folder to your deployment computer's
C:\ folder.

Edit the configuration file


The PowerShell MultiNodeSampleConfig.psd1 configuration data file contains all the
parameters and settings that are needed for the SDN Express script as input for the
various parameters and configuration settings. This file has specific information about
what needs to be filled out based on whether you're deploying only the network
controller component, or the software load balancer and gateway components as well.
For detailed information, see Plan a Software Defined Network infrastructure article.

Navigate to the C:\SDNExpress\scripts folder and open the


MultiNodeSampleConfig.psd1 file in your favorite text editor. Change specific parameter

values to fit your infrastructure and deployment:

General settings and parameters


The settings and parameters are used by SDN in general for all deployments. For
specific recommendations, see SDN infrastructure VM role requirements.
VHDPath - VHD file path used by all SDN infrastructure VMs (NC, SLB, GW)
VHDFile - VHDX file name used by all SDN infrastructure VMs
VMLocation - file path to SDN infrastructure VMs. Universal Naming Convention
(UNC) paths aren't supported. For cluster storage-based paths, use a format like
C:\ClusterStorage\...

JoinDomain - domain to which SDN infrastructure VMs are joined


SDNMacPoolStart - beginning MAC pool address for client workload VMs
SDNMacPoolEnd - end MAC pool address for client workload VMs
ManagementSubnet - management network subnet used by NC to manage
Hyper-V hosts, SLB, and GW components
ManagementGateway - Gateway address for the management network
ManagementDNS - DNS server for the management network
ManagementVLANID - VLAN ID for the management network
DomainJoinUsername - administrator username. The username should be in the
following format: domainname\username . For example, if the domain is contoso.com ,
enter the username as contoso\<username> . Don't use formats like
contoso.com\<username> or <username>@contoso.com .

LocalAdminDomainUser - local administrator username. The username should be
in the following format: domainname\username . For example, if the domain is
contoso.com , enter the username as contoso\<username> . Don't use formats like
contoso.com\<username> or <username>@contoso.com .
RestName - DNS name used by management clients (such as Windows Admin
Center) to communicate with NC
RestIpAddress - Static IP address for your REST API, which is allocated from your
management network. It can be used for DNS resolution or REST IP-based
deployments
HyperVHosts - host servers to be managed by Network Controller
NCUsername - Network Controller account username
ProductKey - product key for SDN infrastructure VMs
SwitchName - only required if more than one virtual switch exists on the Hyper-V
hosts
VMMemory - memory (in GB) assigned to infrastructure VMs. Default is 4 GB
VMProcessorCount - number of processors assigned to infrastructure VMs.
Default is 8
Locale - if not specified, locale of deployment computer is used
TimeZone - if not specified, local time zone of deployment computer is used

Passwords can optionally be included if they're stored encrypted as text-encoded secure
strings. Passwords are only used if the SDN Express script runs on the same computer
where the passwords were encrypted; otherwise, the script prompts for these passwords:
DomainJoinSecurePassword - for domain account
LocalAdminSecurePassword - for local administrator account
NCSecurePassword - for Network Controller account
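To illustrate the overall shape of these general settings, here's a hypothetical fragment
only; the paths, domain, subnet, and host names are placeholders and not part of the
sample file. Use the MultiNodeSampleConfig.psd1 file from the repository as the
authoritative reference for the full schema.

PowerShell

#An illustrative fragment only, not the complete configuration file.
#All values are placeholders for your environment.
@{
    VHDPath           = "C:\SDNExpress\Images"
    VHDFile           = "AzureStackHCI.vhdx"
    VMLocation        = "C:\ClusterStorage\Volume1\SDN"
    JoinDomain        = "contoso.com"
    ManagementSubnet  = "10.127.132.0/24"
    ManagementGateway = "10.127.132.1"
    ManagementVLANID  = 7
    RestName          = "nc.contoso.com"
    HyperVHosts       = @("host01.contoso.com", "host02.contoso.com")
    #...remaining general settings, plus the NCs, Muxes, and Gateways sections
}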

Network Controller VM section


A minimum of three Network Controller VMs are recommended for SDN.

The NCs = @() section is used for the Network Controller VMs. Make sure that the MAC
address of each NC VM is outside the SDNMACPool range listed in the General settings.

ComputerName - name of NC VM
HostName - host name of server where the NC VM is located
ManagementIP - management network IP address for the NC VM
MACAddress - MAC address for the NC VM

Software Load Balancer VM section


A minimum of two Software Load Balancer VMs are recommended for SDN.

The Muxes = @() section is used for the SLB VMs. Make sure that the MACAddress and
PAMACAddress parameters of each SLB VM are outside the SDNMACPool range listed in the

General settings. Ensure that you get the PAIPAddress parameter from outside the PA
Pool specified in the configuration file, but part of the PASubnet specified in the
configuration file.

Leave this section empty ( Muxes = @() ) if not deploying the SLB component:

ComputerName - name of SLB VM


HostName - host name of server where the SLB VM is located
ManagementIP - management network IP address for the SLB VM
MACAddress - MAC address for the SLB VM
PAIPAddress - Provider network IP address (PA) for the SLB VM
PAMACAddress - Provider network IP address (PA) for the SLB VM

Gateway VM section
A minimum of two Gateway VMs (one active and one redundant) are recommended for
SDN.

The Gateways = @() section is used for the Gateway VMs. Make sure that the
MACAddress parameter of each Gateway VM is outside the SDNMACPool range listed in the
General settings. The FrontEndMac and BackendMac must be from within the SDNMACPool
range. Ensure that you get the FrontEndMac and the BackendMac parameters from the
end of the SDNMACPool range.

Leave this section empty ( Gateways = @() ) if not deploying the Gateway component:

ComputerName - name of Gateway VM


HostName - host name of server where the Gateway VM is located
ManagementIP - management network IP address for the Gateway VM
MACAddress - MAC address for the Gateway VM
FrontEndMac - Provider network front end MAC address for the Gateway VM
BackEndMac - Provider network back end MAC address for the Gateway VM

Additional settings for SLB and Gateway


The following other parameters are used by SLB and Gateway VMs. Leave these values
blank if you aren't deploying SLB or Gateway VMs:

SDNASN - Autonomous System Number (ASN) used by SDN to peer with network
switches
RouterASN - Gateway router ASN
RouterIPAddress - Gateway router IP address
PrivateVIPSubnet - virtual IP address (VIP) for the private subnet
PublicVIPSubnet - virtual IP address for the public subnet

The following other parameters are used by Gateway VMs only. Leave these values blank
if you aren't deploying Gateway VMs:

PoolName - pool name used by all Gateway VMs

GRESubnet - VIP subnet for GRE (if using GRE connections)

Capacity - capacity in Kbps for each Gateway VM in the pool

RedundantCount - number of gateways in redundant mode. The default value is 1.


Redundant gateways don't have any active connections. Once an active gateway
goes down, the connections from that gateway move to the redundant gateway
and the redundant gateway becomes active.

7 Note

If you fill in a value for RedundantCount, ensure that the total number of
gateway VMs is at least one more than the RedundantCount. By default, the
RedundantCount is 1, so you must have at least 2 gateway VMs to ensure
that there is at least 1 active gateway to host gateway connections.

Settings for tenant overlay networks


The following parameters are used if you are deploying and managing overlay
virtualized networks for tenants. If you're using Network Controller to manage
traditional VLAN networks instead, these values can be left blank.

PASubnet - subnet for the Provider Address (PA) network


PAVLANID - VLAN ID for the PA network
PAGateway - IP address for the PA network Gateway
PAPoolStart - beginning IP address for the PA network pool
PAPoolEnd - end IP address for the PA network pool

Here's how Hyper-V Network Virtualization (HNV) Provider logical network allocates IP
addresses. Use this to plan your address space for the HNV Provider network.

Allocates two IP addresses to each physical server


Allocates one IP address to each SLB MUX VM
Allocates one IP address to each gateway VM
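Based on these allocation rules, you can estimate the minimum size of the PA pool to
reserve between PAPoolStart and PAPoolEnd. The counts below are placeholders for your
environment; this is a sizing sketch, not part of the SDN Express scripts.

PowerShell

#A sizing sketch based on the allocation rules above; counts are placeholders.
$physicalServers = 4   # each consumes 2 PA addresses
$slbMuxVMs       = 2   # each consumes 1 PA address
$gatewayVMs      = 2   # each consumes 1 PA address

$minPaAddresses = (2 * $physicalServers) + $slbMuxVMs + $gatewayVMs
"Reserve at least $minPaAddresses addresses between PAPoolStart and PAPoolEnd"
#With the counts above, that's 12 addresses.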

Run the deployment script


The SDN Express script deploys your specified SDN infrastructure. When the script is
complete, your SDN infrastructure is ready to be used for VM workload deployments.

1. Review the README.md file for late-breaking information on how to run the
deployment script.

2. Run the following command from a user account with administrative credentials
for the cluster host servers:

PowerShell

SDNExpress\scripts\SDNExpress.ps1 -ConfigurationDataFile
MultiNodeSampleConfig.psd1 -Verbose

3. After the NC VMs are created, configure dynamic DNS updates for the Network
Controller cluster name on the DNS server. For more information, see Dynamic
DNS updates.
Configuration sample files
The following configuration sample files for deploying SDN are available on the
Microsoft SDN GitHub repository:

Traditional VLAN networks.psd1 - Deploy Network Controller for managing network
policies like microsegmentation and Quality of Service on traditional VLAN networks.

Virtualized networks.psd1 - Deploy Network Controller for managing virtual networks
and network policies on virtual networks.

Software Load Balancer.psd1 - Deploy Network Controller and Software Load Balancer
for load balancing on virtual networks.

SDN Gateways.psd1 - Deploy Network Controller, Software Load Balancer and Gateway
for connectivity to external networks.

Next steps
Manage VMs



Deploy SDN using Windows Admin
Center for Azure Stack HCI
Article • 03/15/2024

Applies to: Azure Stack HCI, version 23H2

This article describes how to deploy Software Defined Networking (SDN) through
Windows Admin Center after you deployed your Azure Stack HCI, version 23H2 cluster
via the Azure portal.

Windows Admin Center enables you to deploy all the SDN infrastructure components
on your existing Azure Stack HCI cluster, in the following deployment order:

Network Controller
Software Load Balancer (SLB)
Gateway

Alternatively, you can deploy the entire SDN infrastructure through the SDN Express
scripts.

You can also deploy an SDN infrastructure using System Center Virtual Machine
Manager (VMM). For more information, see Manage SDN resources in the VMM fabric.

Before you begin


Before you begin an SDN deployment, plan out and configure your physical and host
network infrastructure. Reference the following articles:

Physical network requirements.


Host network requirements.
Deploy a cluster using Azure portal.
Plan a Software Defined Network infrastructure.
The Phased deployment section of Plan a Software Defined Network infrastructure
to determine the capabilities enabled by deploying Network Controller.

Requirements
The following requirements must be met for a successful SDN deployment:

All server nodes must have Hyper-V enabled.


Active Directory must be prepared. For more information, see Prepare Active
Directory.
All server nodes must be joined to Active Directory.
A virtual switch must be created. You can use the default switch created for Azure
Stack HCI. You may need to create separate switches for compute traffic and
management traffic, for example.
The physical network must be configured.

Download the VHDX file


SDN uses a VHDX file containing either the Azure Stack HCI or Windows Server
operating system (OS) as a source for creating the SDN virtual machines (VMs).

7 Note

The version of the OS in your VHDX must match the version used by the Azure
Stack HCI Hyper-V hosts. This VHDX file is used by all SDN infrastructure
components.

To download an English-language version of the VHDX file, see Download the Azure
Stack HCI operating system from the Azure portal. Make sure to select English VHDX
from the Choose language dropdown list.

Currently, a non-English VHDX file isn't available for download. If you require a non-
English version, download the corresponding ISO file and convert it to VHDX using the
Convert-WindowsImage cmdlet. You must run this script from a Windows client computer.

You'll probably need to run this script as Administrator and modify the execution policy
for scripts using the Set-ExecutionPolicy command.

The following syntax shows an example of using Convert-WindowsImage :

PowerShell

Install-Module -Name Convert-WindowsImage


Import-Module Convert-WindowsImage

$wimpath = "E:\sources\install.wim"
$vhdpath = "D:\temp\AzureStackHCI.vhdx"
$edition=1
Convert-WindowsImage -SourcePath $wimpath -Edition $edition -VHDPath
$vhdpath -SizeBytes 500GB -DiskLayout UEFI
Deploy SDN Network Controller
SDN Network Controller deployment is a functionality of the SDN Infrastructure
extension in Windows Admin Center. Complete the following steps to deploy Network
Controller on your existing Azure Stack HCI cluster.

1. In Windows Admin Center, under Tools, select Settings, and then select
Extensions.

2. On the Installed Extensions tab, verify that the SDN Infrastructure extension is
installed. If not, install it.

3. In Windows Admin Center, under Tools, select SDN Infrastructure, then select Get
Started.

4. Under Cluster settings, under Host, enter a name for the Network Controller. This
is the DNS name used by management clients (such as Windows Admin Center) to
communicate with Network Controller. You can also use the default populated
name.

5. Specify a path to the Azure Stack HCI VHD file. Use Browse to find it quicker.

6. Specify the number of VMs to be dedicated for Network Controller. We strongly


recommend three VMs for production deployments.

7. Under Network, enter the VLAN ID of the management network. Network


Controller needs connectivity to same management network as the Hyper-V hosts
so that it can communicate and configure the hosts.

8. For VM network addressing, select either DHCP or Static.


For DHCP, enter the name for the Network Controller VMs. You can also use
the default populated names.

For Static, do the following:


a. Specify an IP address.
b. Specify a subnet prefix.
c. Specify the default gateway.
d. Specify one or more DNS servers. Select Add to add additional DNS
servers.

9. Under Credentials, enter the username and password used to join the Network
Controller VMs to the cluster domain.

7 Note

You must enter the username in the following format: domainname\username .
For example, if the domain is contoso.com , enter the username as contoso\<username> .
Don't use formats like contoso.com\<username> or <username>@contoso.com .

10. Enter the local administrator password for these VMs.

11. Under Advanced, enter the path to the VMs. You can also use the default
populated path.

7 Note

Universal Naming Convention (UNC) paths aren't supported. For cluster


storage-based paths, use a format like C:\ClusterStorage\... .

12. Enter values for MAC address pool start and MAC address pool end. You can also
use the default populated values. This is the MAC pool used to assign MAC
addresses to VMs attached to SDN networks.

13. When finished, select Next: Deploy.

14. Wait until the wizard completes its job. Stay on this page until all progress tasks
are complete, and then select Finish.

15. After the Network Controller VMs are created, configure dynamic DNS updates for
the Network Controller cluster name on the DNS server. For more information, see
Dynamic DNS updates.
Redeploy SDN Network Controller
If the Network Controller deployment fails or you want to deploy it again, do the
following:

1. Delete all Network Controller VMs and their VHDs from all server nodes.

2. Remove the following registry key from all hosts by running this command:

PowerShell

Remove-ItemProperty -path
'HKLM:\SYSTEM\CurrentControlSet\Services\NcHostAgent\Parameters\' -Name
Connections

3. After removing the registry key, remove the cluster from the Windows Admin
Center management, and then add it back.

7 Note

If you don't do this step, you may not see the SDN deployment wizard in
Windows Admin Center.

4. (Additional step only if you plan to uninstall Network Controller and not deploy it
again) Run the following cmdlet on all the servers in your Azure Stack HCI cluster,
and then skip the last step.

PowerShell

Disable-VMSwitchExtension -VMSwitchName "<Compute vmswitch name>" -Name "Microsoft Azure VFP Switch Extension"

5. Run the deployment wizard again.

Deploy SDN Software Load Balancer


SDN SLB deployment is a functionality of the SDN Infrastructure extension in Windows
Admin Center. Complete the following steps to deploy SLB on your existing Azure Stack
HCI cluster.

7 Note
Network Controller must be set up before you configure SLB.

1. In Windows Admin Center, under Tools, select Settings, and then select
Extensions.

2. On the Installed Extensions tab, verify that the SDN Infrastructure extension is
installed. If not, install it.

3. In Windows Admin Center, under Tools, select SDN Infrastructure, then select Get
Started on the Load Balancer tab.

4. Under Load Balancer Settings, under Front-End subnets, provide the following:

Public VIP subnet prefix. These can be public internet-routable subnets. They serve as
the front end IP addresses for accessing workloads behind the load balancer,
which use IP addresses from a private backend network.

Private VIP subnet prefix. These don’t need to be routable on the public
Internet because they are used for internal load balancing.

5. Under BGP Router Settings, enter the SDN ASN for the SLB. This ASN is used to
peer the SLB infrastructure with the Top of the Rack switches to advertise the
Public VIP and Private VIP IP addresses.

6. Under BGP Router Settings, enter the IP Address and ASN of the Top of Rack
switch. SLB infrastructure needs these settings to create a BGP peer with the
switch. If you have an additional Top of Rack switch that you want to peer the SLB
infrastructure with, add IP Address and ASN for that switch as well.

7. Under VM Settings, specify a path to the Azure Stack HCI VHDX file. Use Browse to
find it quicker.

8. Specify the number of VMs to be dedicated for software load balancing. We strongly recommend at least two VMs for production deployments.

9. Under Network, enter the VLAN ID of the management network. SLB needs connectivity to the same management network as the Hyper-V hosts so that it can communicate with and configure the hosts.

10. For VM network addressing, select either DHCP or Static.

For DHCP, enter the name for the Software Load Balancer VMs. You can also use
the default populated names.

For Static, do the following:


a. Specify an IP address.
b. Specify a subnet prefix.
c. Specify the default gateway.
d. Specify one or more DNS servers. Select Add to add additional DNS
servers.

11. Under Credentials, enter the username and password that you used to join the
Software Load Balancer VMs to the cluster domain.

7 Note

You must enter the username in the following format: domainname\username .


For example, if the domain is contoso.com , enter the username as contoso\<username> . Don't use formats like contoso.com\<username> or [email protected] .

12. Enter the local administrative password for these VMs.

13. Under Advanced, enter the path to the VMs. You can also use the default
populated path.

7 Note

Universal Naming Convention (UNC) paths aren't supported. For cluster storage-based paths, use a format like C:\ClusterStorage\... .

14. When finished, select Next: Deploy.

15. Wait until the wizard completes its job. Stay on this page until all progress tasks
are complete, and then select Finish.

Deploy SDN Gateway


SDN Gateway deployment is a functionality of the SDN Infrastructure extension in
Windows Admin Center. Complete the following steps to deploy SDN Gateways on your
existing Azure Stack HCI cluster.

7 Note

Network Controller and SLB must be set up before you configure Gateways.
1. In Windows Admin Center, under Tools, select Settings, then select Extensions.

2. On the Installed Extensions tab, verify that the SDN Infrastructure extension is
installed. If not, install it.

3. In Windows Admin Center, under Tools, select SDN Infrastructure, then select Get
Started on the Gateway tab.

4. Under Define the Gateway Settings, under Tunnel subnets, provide the GRE
Tunnel Subnets. IP addresses from this subnet are used for provisioning on the
SDN gateway VMs for GRE tunnels. If you don't plan to use GRE tunnels, put any
placeholder subnets in this field.

5. Under BGP Router Settings, enter the SDN ASN for the Gateway. This ASN is used
to peer the gateway VMs with the Top of the Rack switches to advertise the GRE IP
addresses. This field is auto populated to the SDN ASN used by SLB.

6. Under BGP Router Settings, enter the IP Address and ASN of the Top of Rack
switch. Gateway VMs need these settings to create a BGP peer with the switch.
These fields are auto populated from the SLB deployment wizard. If you have an
additional Top of Rack switch that you want to peer the gateway VMs with, add IP
Address and ASN for that switch as well.

7. Under Define the Gateway VM Settings, specify a path to the Azure Stack HCI
VHDX file. Use Browse to find it quicker.

8. Specify the number of VMs to be dedicated for gateways. We strongly recommend at least two VMs for production deployments.

9. Enter the value for Redundant Gateways. Redundant gateways don't host any
gateway connections. In event of failure or restart of an active gateway VM,
gateway connections from the active VM are moved to the redundant gateway and
the redundant gateway is then marked as active. In a production deployment, we
strongly recommend that you have at least one redundant gateway.

7 Note

Ensure that the total number of gateway VMs is at least one more than the
number of redundant gateways. Otherwise, you won't have any active
gateways to host gateway connections.

10. Under Network, enter the VLAN ID of the management network. Gateways need connectivity to the same management network as the Hyper-V hosts and Network Controller VMs.

11. For VM network addressing, select either DHCP or Static.

For DHCP, enter the name for the Gateway VMs. You can also use the default
populated names.

For Static, do the following:


a. Specify an IP address.
b. Specify a subnet prefix.
c. Specify the default gateway.
d. Specify one or more DNS servers. Select Add to add additional DNS
servers.

12. Under Credentials, enter the username and password used to join the Gateway
VMs to the cluster domain.

7 Note

You must enter the username in the following format: domainname\username .


For example, if the domain is contoso.com , enter the username as contoso\<username> . Don't use formats like contoso.com\<username> or [email protected] .

13. Enter the local administrative password for these VMs.

14. Under Advanced, provide the Gateway Capacity. It is auto populated to 10 Gbps.
Ideally, set this value to the approximate throughput available to the gateway VM.
This value can depend on various factors, such as the physical NIC speed on the
host machine and the throughput requirements of other VMs on the host.

7 Note

Universal Naming Convention (UNC) paths aren't supported. For cluster storage-based paths, use a format like C:\ClusterStorage\... .

15. Enter the path to the VMs. You can also use the default populated path.

16. When finished, select Next: Deploy the Gateway.


17. Wait until the wizard completes its job. Stay on this page until all progress tasks
are complete, and then select Finish.

Next steps
Manage your VMs. See Manage VMs.
Manage Software Load Balancers. See Manage Software Load Balancers.
Manage Gateway connections. See Manage Gateway Connections.
Troubleshoot SDN deployment. See Troubleshoot Software Defined Networking
deployment via Windows Admin Center.



Set up an Azure Kubernetes Service host
on Azure Stack HCI and Windows Server
and deploy a workload cluster using
PowerShell
Article • 12/21/2023

Applies to: Azure Stack HCI or Windows Server Datacenter

This quickstart guides you through setting up an Azure Kubernetes Service (AKS) host.
You create Kubernetes clusters on Azure Stack HCI and Windows Server using
PowerShell. To use Windows Admin Center instead, see Set up with Windows Admin
Center.

7 Note

If you have pre-staged cluster service objects and DNS records, see Deploy an
AKS host with prestaged cluster service objects and DNS records using
PowerShell.
If you have a proxy server, see Set up an AKS host and deploy a workload
cluster using PowerShell and a proxy server.
Installing AKS on Azure Stack HCI after setting up Arc VMs is not supported.
For more information, see known issues with Arc VMs. If you want to install
AKS on Azure Stack HCI, you must uninstall Arc Resource Bridge and then
install AKS on Azure Stack HCI. You can deploy a new Arc Resource Bridge
again after you clean up and install AKS, but it won't remember the VM
entities you created previously.

Before you begin


Make sure you have satisfied all the prerequisites in system requirements.
Use an Azure account to register your AKS host for billing. For more information,
see Azure requirements.

Install the AksHci PowerShell module


Follow these steps on all nodes in your Azure Stack HCI cluster or Windows Server
cluster:

7 Note

If you are using remote PowerShell, you must use CredSSP.

1. Close all open PowerShell windows, open a new PowerShell session as administrator, and run the following command on all nodes in your Azure Stack HCI or Windows Server cluster:

PowerShell

Install-PackageProvider -Name NuGet -Force


Install-Module -Name PowershellGet -Force -Confirm:$false

You must close all existing PowerShell windows again to ensure that loaded
modules are refreshed. Don't continue to the next step until you close all open
PowerShell windows.

2. Install the AKS-HCI PowerShell module by running the following command on all
nodes in your Azure Stack HCI or Windows Server cluster:

PowerShell

Install-Module -Name AksHci -Repository PSGallery -Force -AcceptLicense

You must close all existing PowerShell windows again to ensure that loaded
modules are refreshed. Don't continue to the next step until you close all open
PowerShell windows.

You can use a helper script to delete old AKS-HCI PowerShell modules to avoid any
PowerShell version-related issues in your AKS deployment.

Validate your installation


PowerShell

Get-Command -Module AksHci

To view the complete list of AksHci PowerShell commands, see AksHci PowerShell.
Register the resource provider to your
subscription
Before the registration process, enable the appropriate resource provider in Azure for
AKS enabled by Arc registration. To do that, run the following PowerShell commands:

To sign in to Azure, run the Connect-AzAccount PowerShell command:

PowerShell

Connect-AzAccount

If you want to switch to a different subscription, run the Set-AzContext PowerShell


command:

PowerShell

Set-AzContext -Subscription "xxxx-xxxx-xxxx-xxxx"

Run the following commands to register your Azure subscription to Azure Arc enabled
Kubernetes resource providers. This registration process can take up to 10 minutes, but
it only needs to be performed once on a specific subscription:

PowerShell

Register-AzResourceProvider -ProviderNamespace Microsoft.Kubernetes


Register-AzResourceProvider -ProviderNamespace
Microsoft.KubernetesConfiguration
Register-AzResourceProvider -ProviderNamespace Microsoft.ExtendedLocation

To validate the registration process, run the following PowerShell commands:

PowerShell

Get-AzResourceProvider -ProviderNamespace Microsoft.Kubernetes


Get-AzResourceProvider -ProviderNamespace Microsoft.KubernetesConfiguration
Get-AzResourceProvider -ProviderNamespace Microsoft.ExtendedLocation

Step 1: Prepare your machine(s) for deployment


Run checks on every physical node to see if all the requirements to install AKS enabled
by Arc are satisfied. Open PowerShell as an administrator and run the following
Initialize-AksHciNode command on all nodes in your Azure Stack HCI or Windows
Server cluster:

PowerShell

Initialize-AksHciNode

Step 2: Create a virtual network


Run the following commands on any one node in your Azure Stack HCI or Windows
Server cluster.

To get the names of your available switches, run the following command. Make sure the
SwitchType of your VM switch is "External":

PowerShell

Get-VMSwitch

Sample output:

Output

Name SwitchType NetAdapterInterfaceDescription


---- ---------- ------------------------------
extSwitch External Mellanox ConnectX-3 Pro Ethernet Adapter

To create a virtual network for the nodes in your deployment to use, create an
environment variable with the New-AksHciNetworkSetting PowerShell command. This
virtual network is used later to configure a deployment that uses static IP. If you want to
configure your AKS deployment with DHCP, see New-AksHciNetworkSetting for
examples. You can also review some networking node concepts.

PowerShell

#static IP
$vnet = New-AksHciNetworkSetting -name myvnet -vSwitchName "extSwitch" -
k8sNodeIpPoolStart "172.16.10.1" -k8sNodeIpPoolEnd "172.16.10.255" -
vipPoolStart "172.16.255.0" -vipPoolEnd "172.16.255.254" -ipAddressPrefix
"172.16.0.0/16" -gateway "172.16.0.1" -dnsServers "172.16.0.1" -vlanId 9

7 Note
You must customize the values shown in this example command for your
environment.

Step 3: Configure your deployment


Run the following commands on any one node in your Azure Stack HCI or Windows
Server cluster.

To create the configuration settings for the AKS host, use the Set-AksHciConfig
command. You must specify the imageDir , workingDir , and cloudConfigLocation
parameters. If you want to reset your configuration details, run the command again with
new parameters.

Configure your deployment with the following command:

PowerShell

$csvPath = 'C:\clusterstorage\volume01' # Specify your preferred CSV path


Set-AksHciConfig -imageDir $csvPath\Images -workingDir $csvPath\ImageStore -
cloudConfigLocation $csvPath\Config -vnet $vnet

7 Note

You must customize the values shown in this example command for your
environment.

Step 4: Sign in to Azure and configure


registration settings

Option 1: Use your Microsoft Entra account if you have


"Owner" permissions
Run the following Set-AksHciRegistration PowerShell command with your subscription
and resource group name to sign in to Azure. You must have an Azure subscription, and
an existing Azure resource group in the Australia East, East US, Southeast Asia, or West
Europe Azure regions:

PowerShell
Set-AksHciRegistration -subscriptionId "<subscriptionId>" -resourceGroupName
"<resourceGroupName>"

Option 2: Use an Azure service principal


If you don't have access to a subscription on which you're an "Owner", you can register
your AKS host to Azure for billing using a service principal. For more information about
how to use a service principal, see register AKS on Azure Stack HCI and Windows Server
using a service principal.
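
As an illustration, service principal registration typically passes a credential object to Set-AksHciRegistration. The following is a hedged sketch; the subscription ID and resource group are placeholders, and the exact parameters are described in the linked article:

PowerShell

# Enter the service principal application (client) ID as the username and its secret as the password.
$credential = Get-Credential

Set-AksHciRegistration -subscriptionId "<subscriptionId>" -resourceGroupName "<resourceGroupName>" -credential $credential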

Step 5: Start a new deployment


Run the following command on any one node in your Azure Stack HCI or Windows
Server cluster.

After you configure your deployment, you must start it in order to install the AKS
agents/services and the AKS host. To begin deployment, run the following command:

 Tip

To see additional status details during installation, set $VerbosePreference =


"Continue" before proceeding.

PowerShell

Install-AksHci

2 Warning

During installation of your AKS host, a Kubernetes - Azure Arc resource type is
created in the resource group that's set during registration. Do not delete this
resource, as it represents your AKS host. You can identify the resource by checking
its distribution field for a value of aks_management . If you delete this resource, it
results in an out-of-policy deployment.

Step 6: Create a Kubernetes cluster


After you install your AKS host, you can deploy a Kubernetes cluster. Open PowerShell
as an administrator and run the following New-AksHciCluster command. This example
command creates a new Kubernetes cluster with one Linux node pool named
linuxnodepool with a node count of 1.

For more information about node pools, see Use node pools in AKS.

PowerShell

New-AksHciCluster -name mycluster -nodePoolName linuxnodepool -nodeCount 1 -osType Linux

Check your deployed clusters


To get a list of your deployed Kubernetes clusters, run the following Get-AksHciCluster
PowerShell command:

PowerShell

Get-AksHciCluster

Output

ProvisioningState : provisioned
KubernetesVersion : v1.20.7
NodePools : linuxnodepool
WindowsNodeCount : 0
LinuxNodeCount : 0
ControlPlaneNodeCount : 1
Name : mycluster

To get a list of the node pools in the cluster, run the following Get-AksHciNodePool
PowerShell command:

PowerShell

Get-AksHciNodePool -clusterName mycluster

Output

ClusterName : mycluster
NodePoolName : linuxnodepool
Version : v1.20.7
OsType : Linux
NodeCount : 1
VmSize : Standard_K8S3_v1
Phase : Deployed

Step 7: Connect your cluster to Arc-enabled


Kubernetes
Connect your cluster to Arc-enabled Kubernetes by running the Enable-
AksHciArcConnection command. The following example connects your Kubernetes
cluster to Arc using the subscription and resource group details you passed in the Set-
AksHciRegistration command:

PowerShell

Connect-AzAccount
Enable-AksHciArcConnection -name mycluster

7 Note

If you encounter issues or error messages during the installation process, see
installation known issues and errors for more information.

Scale a Kubernetes cluster


If you need to scale your cluster up or down, you can change the number of control
plane nodes by using the Set-AksHciCluster command. To change the number of Linux
or Windows worker nodes in your node pool, use the Set-AksHciNodePool command.

To scale control plane nodes, run the following command:

PowerShell

Set-AksHciCluster -name mycluster -controlPlaneNodeCount 3

To scale the worker nodes in your node pool, run the following command:

PowerShell

Set-AksHciNodePool -clusterName mycluster -name linuxnodepool -count 3


7 Note

In previous versions of AKS on Azure Stack HCI and Windows Server, the Set-
AksHciCluster command was also used to scale worker nodes. Now that AKS is
introducing node pools in workload clusters, you can only use this command to
scale worker nodes if your cluster was created with the old parameter set in New-
AksHciCluster.

To scale worker nodes in a node pool, use the Set-AksHciNodePool command.

Access your clusters using kubectl


To access your Kubernetes clusters using kubectl, run the Get-AksHciCredential
PowerShell command. This will use the specified cluster's kubeconfig file as the default
kubeconfig file for kubectl. You can also use kubectl to deploy applications using Helm:

PowerShell

Get-AksHciCredential -name mycluster
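
For example, after retrieving the credentials you can run a quick check that kubectl can reach the cluster; the exact output depends on your deployment:

PowerShell

# Uses the kubeconfig written by Get-AksHciCredential.
kubectl get nodes
kubectl get pods --all-namespaces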

Delete a Kubernetes cluster


To delete a Kubernetes cluster, run the following command:

PowerShell

Remove-AksHciCluster -name mycluster

7 Note

Make sure that your cluster is deleted by looking at the existing VMs in the Hyper-
V Manager. If they are not deleted, then you can manually delete the VMs. Then,
run the command Restart-Service wssdagent . Run this command on each node in
the failover cluster.

Get logs
To get logs from all your pods, run the Get-AksHciLogs command. This command
creates an output zipped folder called akshcilogs.zip in your working directory. The full
path to the akshcilogs.zip folder is the output after running the following command:

PowerShell

Get-AksHciLogs

In this quickstart, you learned how to set up an AKS host and create Kubernetes clusters
using PowerShell. You also learned how to use PowerShell to scale a Kubernetes cluster
and to access clusters with kubectl .

Next steps
Prepare an application
Deploy a Windows application on your Kubernetes cluster
Set up multiple administrators
Deploy Azure Virtual Desktop
Article • 01/24/2024

) Important

Azure Virtual Desktop for Azure Stack HCI is currently in preview for Azure
Government and Azure China. See the Supplemental Terms of Use for Microsoft
Azure Previews for legal terms that apply to Azure features that are in beta,
preview, or otherwise not yet released into general availability.

This article shows you how to deploy Azure Virtual Desktop on Azure or Azure Stack HCI
by using the Azure portal, Azure CLI, or Azure PowerShell. To deploy Azure Virtual
Desktop you:

Create a host pool.


Create a workspace.
Create an application group.
Create session host virtual machines.
Enable diagnostics settings (optional).
Assign users or groups to the application group for users to get access to desktops
and applications.

You can do all these tasks in a single process when using the Azure portal, but you can
also do them separately.

For more information on the terminology used in this article, see Azure Virtual Desktop
terminology, and to learn about the service architecture and resilience of the Azure
Virtual Desktop service, see Azure Virtual Desktop service architecture and resilience.

 Tip

The process covered in this article is an in-depth and adaptable approach to deploying Azure Virtual Desktop. If you want to try Azure Virtual Desktop with a simpler approach to deploy a sample Windows 11 desktop in Azure Virtual Desktop, see Tutorial: Deploy a sample Azure Virtual Desktop infrastructure with a Windows 11 desktop or use the getting started feature.

Prerequisites
Review the Prerequisites for Azure Virtual Desktop for a general idea of what's required
and supported, such as operating systems (OS), virtual networks, and identity providers.
It also includes a list of the supported Azure regions in which you can deploy host pools,
workspaces, and application groups. This list of regions is where the metadata for the
host pool can be stored. However, session hosts can be located in any Azure region, and
on-premises with Azure Stack HCI. For more information about the types of data and
locations, see Data locations for Azure Virtual Desktop.

Select the relevant tab for your scenario for more prerequisites.

Portal

In addition, you need:

The Azure account you use must be assigned the following built-in role-based
access control (RBAC) roles as a minimum on a resource group or subscription
to create the following resource types. If you want to assign the roles to a
resource group, you need to create this first.

ノ Expand table

Resource type RBAC role

Host pool, workspace, and application group Desktop Virtualization Contributor

Session hosts (Azure) Virtual Machine Contributor

Session hosts (Azure Stack HCI) Azure Stack HCI VM Contributor

Alternatively you can assign the Contributor RBAC role to create all of these
resource types.

For ongoing management of host pools, workspaces, and application groups,


you can use more granular roles for each resource type. For more information,
see Built-in Azure RBAC roles for Azure Virtual Desktop.

To assign users to the application group, you'll also need


Microsoft.Authorization/roleAssignments/write permissions on the

application group. Built-in RBAC roles that include this permission are User
Access Administrator and Owner.

Don't disable Windows Remote Management (WinRM) when creating session


hosts using the Azure portal, as PowerShell DSC requires it.

To add session hosts on Azure Stack HCI, you'll also need:


An Azure Stack HCI cluster registered with Azure. Your Azure Stack HCI
clusters need to be running a minimum of version 23H2. For more
information, see Azure Stack HCI, version 23H2 deployment overview.
Azure Arc virtual machine (VM) management is installed automatically.

A stable connection to Azure from your on-premises network.

At least one Windows OS image available on the cluster. For more


information, see how to create VM images using Azure Marketplace
images, use images in Azure Storage account, and use images in local
share.

Create a host pool


To create a host pool, select the relevant tab for your scenario and follow the steps.

Portal

Here's how to create a host pool using the Azure portal.

1. Sign in to the Azure portal .

2. In the search bar, enter Azure Virtual Desktop and select the matching service
entry.

3. Select Host pools, then select Create.

4. On the Basics tab, complete the following information:

ノ Expand table

Parameter Value/Description

Subscription Select the subscription you want to create the host pool in from the
drop-down list.

Resource Select an existing resource group or select Create new and enter a
group name.

Host pool Enter a name for the host pool, for example hp01.
name

Location Select the Azure region where you want to create your host pool.

Validation environment Select Yes to create a host pool that is used as a validation environment. Select No (default) to create a host pool that isn't used as a validation environment.

Preferred app Select the preferred application group type for this host pool from
group type Desktop or RemoteApp. A Desktop application group is created
automatically when using the Azure portal, with whichever
application group type you set as the preferred.

Host pool type Select whether you want your host pool to be Personal or Pooled.

If you select Personal, a new option appears for Assignment type.


Select either Automatic or Direct.

If you select Pooled, two new options appear for Load balancing
algorithm and Max session limit.

- For Load balancing algorithm, choose either breadth-first or


depth-first, based on your usage pattern.

- For Max session limit, enter the maximum number of users you
want load-balanced to a single session host.

 Tip

Once you've completed this tab, you can continue to optionally create
session hosts, a workspace, register the default desktop application
group from this host pool, and enable diagnostics settings by selecting
Next: Virtual Machines. Alternatively, if you want to create and configure
these separately, select Next: Review + create and go to step 9.

5. Optional: On the Virtual machines tab, if you want to add session hosts,
complete the following information, depending on if you want to create
session hosts on Azure or Azure Stack HCI:

a. To add session hosts on Azure:

ノ Expand table
Parameter Value/Description

Add virtual Select Yes. This shows several new options.


machines

Resource group This automatically defaults to the same resource group you
chose your host pool to be in on the Basics tab, but you can
also select an alternative.

Name prefix Enter a name for your session hosts, for example hp01-sh.

This value is used as the prefix for your session hosts. Each
session host has a suffix of a hyphen and then a sequential
number added to the end, for example hp01-sh-0.

This name prefix can be a maximum of 11 characters and is


used in the computer name in the operating system. The
prefix and the suffix combined can be a maximum of 15
characters. Session host names must be unique.

Virtual machine Select Azure virtual machine.


type

Virtual machine Select the Azure region where you want to deploy your
location session hosts. This must be the same region that your virtual
network is in.

Availability Select from availability zones, availability set, or No


options infrastructure dependency required. If you select availability
zones or availability set, complete the extra parameters that
appear.

Security type Select from Standard, Trusted launch virtual machines, or


Confidential virtual machines.

- If you select Trusted launch virtual machines, options for


secure boot and vTPM are automatically selected.

- If you select Confidential virtual machines, options for


secure boot, vTPM, and integrity monitoring are
automatically selected. You can't opt out of vTPM when using
a confidential VM.

Image Select the OS image you want to use from the list, or select
See all images to see more, including any images you've
created and stored as an Azure Compute Gallery shared image
or a managed image.

Virtual machine size Select a SKU. If you want to use a different SKU, select Change size, then select from the list.

Hibernate Check the box to enable hibernate. Hibernate is only available


(preview) for personal host pools. You will need to self-register your
subscription to use the hibernation feature. For more
information, see Hibernation in virtual machines. If you're
using Teams media optimizations you should update the
WebRTC redirector service to 1.45.2310.13001.

Number of VMs Enter the number of virtual machines you want to deploy. You
can deploy up to 400 session hosts at this point if you wish
(depending on your subscription quota), or you can add more
later.

For more information, see Azure Virtual Desktop service limits


and Virtual Machines limits.

OS disk type Select the disk type to use for your session hosts. We
recommend only Premium SSD is used for production
workloads.

OS disk size Select a size for the OS disk.

If you enable hibernate, ensure the OS disk is large enough to


store the contents of the memory in addition to the OS and
other applications.

Confidential If you're using a confidential VM, you must select the


computing Confidential compute encryption check box to enable OS
encryption disk encryption.

This check box only appears if you selected Confidential


virtual machines as your security type.

Boot Diagnostics Select whether you want to enable boot diagnostics.

Network and
security

Virtual network Select your virtual network. An option to select a subnet


appears.

Subnet Select a subnet from your virtual network.

Network security Select whether you want to use a network security group
group (NSG).

- None doesn't create a new NSG.

- Basic creates a new NSG for the VM NIC.



- Advanced enables you to select an existing NSG.

We recommend that you don't create an NSG here, but create


an NSG on the subnet instead.

Public inbound You can select a port to allow from the list. Azure Virtual
ports Desktop doesn't require public inbound ports, so we
recommend you select No.

Domain to join

Select which Select from Microsoft Entra ID or Active Directory and


directory you complete the relevant parameters for the option you select.
would like to join

Virtual Machine
Administrator
account

Username Enter a name to use as the local administrator account for the
new session hosts.

Password Enter a password for the local administrator account.

Confirm password Reenter the password.

Custom
configuration

Custom If you want to run a PowerShell script during deployment you


configuration can enter the URL here.
script URL

b. To add session hosts on Azure Stack HCI:

ノ Expand table

Parameter Value/Description

Add virtual Select Yes. This shows several new options.


machines

Resource group This automatically defaults to the resource group you chose
your host pool to be in on the Basics tab, but you can also
select an alternative.

Name prefix Enter a name for your session hosts, for example hp01-sh.

This value is used as the prefix for your session hosts. Each session host has a suffix of a hyphen and then a sequential number added to the end, for example hp01-sh-0.

number added to the end, for example hp01-sh-0.

This name prefix can be a maximum of 11 characters and is


used in the computer name in the operating system. The
prefix and the suffix combined can be a maximum of 15
characters. Session host names must be unique.

Virtual machine Select Azure Stack HCI virtual machine.


type

Custom location Select the Azure Stack HCI cluster where you want to deploy
your session hosts from the drop-down list.

Images Select the OS image you want to use from the list, or select
Manage VM images to manage the images available on the
cluster you selected.

Number of VMs Enter the number of virtual machines you want to deploy.
You can add more later.

Virtual processor Enter the number of virtual processors you want to assign to
count each session host. This value isn't validated against the
resources available in the cluster.

Memory type Select Static for a fixed memory allocation, or Dynamic for a
dynamic memory allocation.

Memory (GB) Enter a number for the amount of memory in GB you want
to assign to each session host. This value isn't validated
against the resources available in the cluster.

Maximum memory If you selected dynamic memory allocation, enter a number


for the maximum amount of memory in GB you want your
session host to be able to use.

Minimum memory If you selected dynamic memory allocation, enter a number


for the minimum amount of memory in GB you want your
session host to be able to use.

Network and
security

Network dropdown Select an existing network to connect each session to.

Domain to join

Select which directory you would like to join Active Directory is the only available option.

AD domain join Enter the User Principal Name (UPN) of an Active Directory
UPN user that has permission to join the session hosts to your
domain.

Password Enter the password for the Active Directory user.

Specify domain or Select yes if you want to join session hosts to a specific
unit domain or be placed in a specific organizational unit (OU). If
you select no, the suffix of the UPN will be used as the
domain.

Virtual Machine
Administrator
account

Username Enter a name to use as the local administrator account for


the new session hosts.

Password Enter a password for the local administrator account.

Confirm password Reenter the password.

Once you've completed this tab, select Next: Workspace.

6. Optional: On the Workspace tab, if you want to create a workspace and


register the default desktop application group from this host pool, complete
the following information:

ノ Expand table

Parameter Value/Description

Register desktop Select Yes. This registers the default desktop application group
app group to the selected workspace.

To this workspace Select an existing workspace from the list, or select Create new
and enter a name, for example ws01.

Once you've completed this tab, select Next: Advanced.

7. Optional: On the Advanced tab, if you want to enable diagnostics settings,


complete the following information:

ノ Expand table
Parameter Value/Description

Enable diagnostics settings Check the box.

Choosing destination details to send logs to Select one of the following destinations:

- Send to Log Analytics workspace

- Archive to storage account

- Stream to an event hub

Once you've completed this tab, select Next: Tags.

8. Optional: On the Tags tab, you can enter any name/value pairs you need, then
select Next: Review + create.

9. On the Review + create tab, ensure validation passes and review the
information that is used during deployment.

10. Select Create to create the host pool.

11. Once the host pool has been created, select Go to resource to go to the
overview of your new host pool, then select Properties to view its properties.
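
This article notes that host pools can also be created with Azure CLI or Azure PowerShell. As a rough illustration, a minimal Azure PowerShell sketch, assuming the Az.DesktopVirtualization module is installed and you're signed in with Connect-AzAccount, might look like the following. The resource group, names, location, and session limit are placeholders; adjust them for your environment:

PowerShell

# Creates a pooled host pool with breadth-first load balancing.
New-AzWvdHostPool -ResourceGroupName "rg-avd-01" `
    -Name "hp01" `
    -Location "eastus" `
    -HostPoolType Pooled `
    -LoadBalancerType BreadthFirst `
    -PreferredAppGroupType Desktop `
    -MaxSessionLimit 5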

Post deployment
If you also added session hosts to your host pool, there's some extra configuration
you need to do, which is covered in the following sections.

Licensing
To ensure your session hosts have licenses applied correctly, you'll need to do the
following tasks:

If you have the correct licenses to run Azure Virtual Desktop workloads, you
can apply a Windows or Windows Server license to your session hosts as part
of Azure Virtual Desktop and run them without paying for a separate license.
This is automatically applied when creating session hosts with the Azure
Virtual Desktop service, but you may have to apply the license separately if
you create session hosts outside of Azure Virtual Desktop. For more
information, see Apply a Windows license to session host virtual machines.

If your session hosts are running a Windows Server OS, you'll also need to
issue them a Remote Desktop Services (RDS) Client Access License (CAL) from
a Remote Desktop Licensing Server. For more information, see License your
RDS deployment with client access licenses (CALs).

For session hosts on Azure Stack HCI, you must license and activate the virtual
machines you use before you use them with Azure Virtual Desktop. For
activating Windows 10 and Windows 11 Enterprise multi-session, and
Windows Server 2022 Datacenter: Azure Edition, use Azure verification for
VMs. For all other OS images (such as Windows 10 and Windows 11
Enterprise, and other editions of Windows Server), you should continue to use
existing activation methods. For more information, see Activate Windows
Server VMs on Azure Stack HCI.

Microsoft Entra joined session hosts


If your users are going to connect to session hosts joined to Microsoft Entra ID,
you'll also need to enable single sign-on or legacy authentication protocols, assign
an RBAC role to users, and review your multifactor authentication policies so they
can sign in to the VMs.

For more information about using Microsoft Entra joined session hosts, see
Microsoft Entra joined session hosts.

7 Note

If you created a host pool, workspace, and registered the default desktop
application group from this host pool in the same process, go to the
section Assign users to an application group and complete the rest of
the article. A Desktop application group is created automatically when
using the Azure portal, with whichever application group type you set as
the preferred.

If you created a host pool and workspace in the same process, but didn't
register the default desktop application group from this host pool, go to
the section Create an application group and complete the rest of the
article.

If you didn't create a workspace, continue to the next section and


complete the rest of the article.
Create a workspace
Next, to create a workspace, select the relevant tab for your scenario and follow the
steps.

Portal

Here's how to create a workspace using the Azure portal.

1. From the Azure Virtual Desktop overview, select Workspaces, then select
Create.

2. On the Basics tab, complete the following information:

ノ Expand table

Parameter Value/Description

Subscription Select the subscription you want to create the workspace in from
the drop-down list.

Resource group Select an existing resource group or select Create new and enter a
name.

Workspace Enter a name for the workspace, for example workspace01.


name

Friendly name Optional: Enter a friendly name for the workspace.

Description Optional: Enter a description for the workspace.

Location Select the Azure region where you want to deploy your workspace.

 Tip

Once you've completed this tab, you can continue to optionally register
an existing application group to this workspace, if you have one, and
enable diagnostics settings by selecting Next: Application groups.
Alternatively, if you want to create and configure these separately, select
Review + create and go to step 9.

3. Optional: On the Application groups tab, if you want to register an existing


application group to this workspace, complete the following information:
ノ Expand table

Parameter Value/Description

Register Select Yes, then select + Register application groups. In the new
application pane that opens, select the Add icon for the application group(s)
groups you want to add, then select Select.

Once you've completed this tab, select Next: Advanced.

4. Optional: On the Advanced tab, if you want to enable diagnostics settings,


complete the following information:

ノ Expand table

Parameter Value/Description

Enable diagnostics settings Check the box.

Choosing destination details to send logs to Select one of the following destinations:

- Send to Log Analytics workspace

- Archive to storage account

- Stream to an event hub

Once you've completed this tab, select Next: Tags.

5. Optional: On the Tags tab, you can enter any name/value pairs you need, then
select Next: Review + create.

6. On the Review + create tab, ensure validation passes and review the
information that is used during deployment.

7. Select Create to create the workspace.

8. Once the workspace has been created, select Go to resource to go to the


overview of your new workspace, then select Properties to view its properties.
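
As with host pools, a workspace can also be created with Azure PowerShell. This is a minimal sketch with placeholder names, assuming the Az.DesktopVirtualization module and an authenticated session:

PowerShell

# Creates an empty workspace; application groups can be registered to it later.
New-AzWvdWorkspace -ResourceGroupName "rg-avd-01" `
    -Name "ws01" `
    -Location "eastus" `
    -FriendlyName "Workspace 01"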

7 Note

If you added an application group to this workspace, go to the section


Assign users to an application group and complete the rest of the
article.
If you didn't add an application group to this workspace, continue to the
next section and complete the rest of the article.

Create an application group


To create an application group, select the relevant tab for your scenario and follow the
steps.

Portal

Here's how to create an application group using the Azure portal.

1. From the Azure Virtual Desktop overview, select Application groups, then
select Create.

2. On the Basics tab, complete the following information:

ノ Expand table

Parameter Value/Description

Subscription Select the subscription you want to create the application group
in from the drop-down list.

Resource group Select an existing resource group or select Create new and
enter a name.

Host pool Select the host pool for the application group.

Location Metadata is stored in the same location as the host pool.

Application group Select the application group type for the host pool you selected
type from Desktop or RemoteApp.

Application group Enter a name for the application group, for example Session
name Desktop.

 Tip

Once you've completed this tab, select Next: Review + create. You don't
need to complete the other tabs to create an application group, but you'll
need to create a workspace, add an application group to a workspace
and assign users to the application group before users can access the
resources.

If you created an application group for RemoteApp, you will also need to
add applications to it. For more information, see Publish applications.

3. Optional: If you selected to create a RemoteApp application group, you can


add applications to this application group. On the Application groups tab,
select + Add applications, then select an application. For more information on
the application parameters, see Publish applications with RemoteApp. At least
one session host in the host pool must be powered on and available in Azure
Virtual Desktop.

Once you've completed this tab, or if you're creating a desktop application


group, select Next: Assignments.

4. Optional: On the Assignments tab, if you want to assign users or groups to


this application group, select + Add Microsoft Entra users or user groups. In
the new pane that opens, check the box next to the users or groups you want
to add, then select Select.

Once you've completed this tab, select Next: Workspace.

5. Optional: On the Workspace tab, if you're creating a desktop application


group, you can register the default desktop application group from the host
pool you selected by completing the following information:

ノ Expand table

Parameter Value/Description

Register application Select Yes. This registers the default desktop application group
group to the selected workspace.

Register application Select an existing workspace from the list.


group

Once you've completed this tab, select Next: Advanced.

6. Optional: If you want to enable diagnostics settings, on the Advanced tab,


complete the following information:

ノ Expand table
Parameter Value/Description

Enable diagnostics settings Check the box.

Choosing destination details to send logs to Select one of the following destinations:

- Send to Log Analytics workspace

- Archive to storage account

- Stream to an event hub

Once you've completed this tab, select Next: Tags.

7. Optional: On the Tags tab, you can enter any name/value pairs you need, then
select Next: Review + create.

8. On the Review + create tab, ensure validation passes and review the
information that is used during deployment.

9. Select Create to create the application group.

10. Once the application group has been created, select Go to resource to go to
the overview of your new application group, then select Properties to view its
properties.
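
As a scripted alternative, an application group can be created with Azure PowerShell by referencing the resource ID of an existing host pool. This is a hedged sketch; the resource group, host pool, and application group names are placeholders:

PowerShell

# Look up the full resource ID of the host pool to attach the application group to.
$hostPoolArmPath = (Get-AzWvdHostPool -ResourceGroupName "rg-avd-01" -Name "hp01").Id

# Creates a desktop application group for that host pool.
New-AzWvdApplicationGroup -ResourceGroupName "rg-avd-01" `
    -Name "SessionDesktop" `
    -Location "eastus" `
    -HostPoolArmPath $hostPoolArmPath `
    -ApplicationGroupType Desktop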

7 Note

If you created a desktop application group, assigned users or groups, and


registered the default desktop application group to a workspace, your
assigned users can connect to the desktop and you don't need to
complete the rest of the article.

If you created a RemoteApp application group, added applications, and


assigned users or groups, go to the section Add an application group to
a workspace and complete the rest of the article.

If you didn't add applications, assign users or groups, or register the


application group to a workspace continue to the next section and
complete the rest of the article.

Add an application group to a workspace


Next, to add an application group to a workspace, select the relevant tab for your
scenario and follow the steps.

Portal

Here's how to add an application group to a workspace using the Azure portal.

1. From the Azure Virtual Desktop overview, select Workspaces, then select the
name of the workspace you want to assign an application group to.

2. From the workspace overview, select Application groups, then select + Add.

3. Select the plus icon (+) next to an application group from the list. Only
application groups that aren't already assigned to a workspace are listed.

4. Select Select. The application group is added to the workspace.
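
The same association can be made with Azure PowerShell by registering the application group's resource ID with the workspace. This is a minimal sketch with placeholder names:

PowerShell

# The application group path is its full resource ID.
$appGroupPath = (Get-AzWvdApplicationGroup -ResourceGroupName "rg-avd-01" -Name "SessionDesktop").Id

Register-AzWvdApplicationGroup -ResourceGroupName "rg-avd-01" `
    -WorkspaceName "ws01" `
    -ApplicationGroupPath $appGroupPath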

Assign users to an application group


Finally, to assign users or user groups to an application group, select the relevant tab for
your scenario and follow the steps. We recommend you assign user groups to
application groups to make ongoing management simpler.

Portal

Here's how to assign users or user groups to an application group using the Azure portal.

1. From the Azure Virtual Desktop overview, select Application groups.

2. Select the application group from the list.

3. From the application group overview, select Assignments.

4. Select + Add, then search for and select the user account or user group you
want to assign to this application group.

5. Finish by selecting Select.
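
Assignments can also be scripted by granting the Desktop Virtualization User role on the application group with Azure PowerShell. This is a hedged sketch; the user, resource group, and application group names are placeholders, and for a user group you would pass -ObjectId instead of -SignInName:

PowerShell

# Grants a user access to the desktops and applications published by this application group.
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Desktop Virtualization User" `
    -ResourceName "SessionDesktop" `
    -ResourceGroupName "rg-avd-01" `
    -ResourceType "Microsoft.DesktopVirtualization/applicationGroups"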

Next steps
Portal

Once you've deployed Azure Virtual Desktop, your users can connect. There are
several platforms you can connect from, including from a web browser. For more
information, see Remote Desktop clients for Azure Virtual Desktop and Connect to
Azure Virtual Desktop with the Remote Desktop Web client.

Here are some extra tasks you might want to do:

Configure profile management with FSLogix. To learn more, see FSLogix


profile containers.

Add session hosts to a host pool.

Enable diagnostics settings.


Create Arc virtual machines on Azure
Stack HCI
Article • 03/04/2024

Applies to: Azure Stack HCI, version 23H2

This article describes how to create an Arc VM starting with the VM images that you've
created on your Azure Stack HCI cluster. You can create Arc VMs using the Azure CLI,
Azure portal, or Azure Resource Manager (ARM) template.

About Azure Stack HCI cluster resource


Use the Azure Stack HCI cluster resource page for the following operations:

Create and manage Arc VM resources such as VM images, disks, network


interfaces.
View and access Azure Arc Resource Bridge and Custom Location associated with
the Azure Stack HCI cluster.
Provision and manage Arc VMs.

The procedure to create Arc VMs is described in the next section.

Prerequisites
Before you create an Azure Arc-enabled VM, make sure that the following prerequisites
are completed.

Azure CLI

Access to an Azure subscription with Owner or Contributor access.


Access to a resource group where you want to provision the VM.
Access to one or more VM images on your Azure Stack HCI cluster. These VM
images could be created by one of the following procedures:
VM image starting from an image in Azure Marketplace.
VM image starting from an image in Azure Storage account.
VM image starting from an image in local share on your cluster.
A custom location for your Azure Stack HCI cluster that you'll use to provision
VMs. The custom location will also show up in the Overview page for Azure
Stack HCI cluster.
If using a client to connect to your Azure Stack HCI cluster, see Connect to
Azure Stack HCI via Azure CLI client.

Access to a network interface that you have created on a logical network


associated with your Azure Stack HCI cluster. You can choose a network
interface with static IP or one with a dynamic IP allocation. For more
information, see how to Create network interfaces.

Create Arc VMs


Follow these steps to create an Arc VM on your Azure Stack HCI cluster.

Azure CLI

Follow these steps on the client running az CLI that is connected to your Azure
Stack HCI cluster.

Sign in and set subscription


1. Connect to a server on your Azure Stack HCI system.

2. Sign in. Type:

Azure CLI

az login --use-device-code

3. Set your subscription.

Azure CLI

az account set --subscription <Subscription ID>

Create a Windows VM
Depending on the type of the network interface that you created, you can create a
VM that has network interface with static IP or one with a dynamic IP allocation.

7 Note
If you need more than one network interface with static IPs for your VM, create
the interfaces now, before you create the VM. Adding a network interface with a
static IP after the VM is provisioned isn't supported.

Here we'll create a VM that uses specific memory and processor counts on a
specified storage path.

1. Set some parameters.

Azure CLI

$vmName = "myhci-vm"
$subscription = "<Subscription ID>"
$resource_group = "myhci-rg"
$customLocationName = "myhci-cl"
$customLocationID = "/subscriptions/$subscription/resourceGroups/$resource_group/providers/Microsoft.ExtendedLocation/customLocations/$customLocationName"
$location = "eastus"
$computerName = "mycomputer"
$userName = "myhci-user"
$password = "<Password for the VM>"
$imageName = "ws22server"
$nicName = "myhci-vnic"
$storagePathName = "myhci-sp"
$storagePathId = "/subscriptions/<Subscription ID>/resourceGroups/myhci-rg/providers/Microsoft.AzureStackHCI/storagecontainers/myhci-sp"

The parameters for VM creation are tabulated as follows:

ノ Expand table

Parameters Description

name Name for the VM that you create for your Azure Stack HCI cluster.
Make sure to provide a name that follows the Rules for Azure
resources.

admin- Username for the user on the VM you're deploying on your Azure
username Stack HCI cluster.

admin- Password for the user on the VM you're deploying on your Azure
password Stack HCI cluster.

image-name Name of the VM image used to provision the VM.



location Azure regions as specified by az locations . For example, this


could be eastus , westeurope .

resource-group Name of the resource group where you create the VM. For ease of
management, we recommend that you use the same resource
group as your Azure Stack HCI cluster.

subscription Name or ID of the subscription where your Azure Stack HCI is


deployed. This could be another subscription you use for VM on
your Azure Stack HCI cluster.

custom-location Use this to provide the custom location associated with your Azure
Stack HCI cluster where you're creating this VM.

authentication- Type of authentication to use with the VM. The accepted values are
type all , password , and ssh . Default is password for Windows and SSH
public key for Linux. Use all to enable both ssh and password
authentication.

nics Names or the IDs of the network interfaces associated with your
VM. You must have at least one network interface when you create
a VM to enable guest management.

memory-mb Memory in Megabytes allocated to your VM. If not specified,


defaults are used.

processors The number of processors allocated to your VM. If not specified,


defaults are used.

storage-path-id The associated storage path where the VM configuration and the
data are saved.

proxy- Use this optional parameter to configure a proxy server for your
configuration VM. For more information, see Create a VM with proxy configured.

2. Run the following command to create a VM.

Azure CLI

az stack-hci-vm create --name $vmName --resource-group


$resource_group --admin-username $userName --admin-password
$password --computer-name $computerName --image $imageName --
location $location --authentication-type all --nics $nicName --
custom-location $customLocationID --hardware-profile memory-
mb="8192" processors="4" --storage-path-id $storagePathId

The VM is successfully created when the provisioningState shows as succeeded in


the output.
7 Note

The VM created has guest management enabled by default. If for any reason
guest management fails during VM creation, you can follow the steps in
Enable guest management on Arc VM to enable it after the VM creation.

In this example, the storage path was specified using the --storage-path-id flag
and that ensured that the workload data (including the VM, VM image, non-OS
data disk) is placed in the specified storage path.

If the flag isn't specified, the workload (VM, VM image, non-OS data disk) is
automatically placed in a high availability storage path.

Create a Linux VM
To create a Linux VM, use the same command that you used to create the Windows
VM.

The gallery image specified should be a Linux image.


The username and password work with the authentication-type all
parameter.
For SSH keys, you need to pass the ssh-key-values parameter along with
authentication-type all .

) Important

Setting the proxy server during VM creation is not supported for Linux VMs.
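
For example, a Linux VM creation command might look like the following sketch. The image name and SSH public key path are placeholders, and the remaining variables are the ones set earlier in this article:

Azure CLI

az stack-hci-vm create --name $vmName --resource-group $resource_group --admin-username $userName --admin-password $password --computer-name $computerName --image "<Linux VM image name>" --location $location --authentication-type all --ssh-key-values "<path to SSH public key file>" --nics $nicName --custom-location $customLocationID --hardware-profile memory-mb="8192" processors="4" --storage-path-id $storagePathId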

Create a VM with proxy configured


Use this optional parameter proxy-configuration to configure a proxy server for
your VM.

If creating a VM behind a proxy server, run the following command:

Azure CLI

az stack-hci-vm create --name $vmName --resource-group $resource_group -


-admin-username $userName --admin-password $password --computer-name
$computerName --image $imageName --location $location --authentication-
type all --nics $nicName --custom-location $customLocationID --hardware-
profile memory-mb="8192" processors="4" --storage-path-id $storagePathId
--proxy-configuration http_proxy="<Http URL of proxy server>"
https_proxy="<Https URL of proxy server>" no_proxy="<URLs which bypass
proxy>" cert_file_path="<Certificate file path for your server>"

You can input the following parameters for proxy-server-configuration :

ノ Expand table

Parameters Description

http_proxy HTTP URLs for proxy server. An example URL is: http://proxy.example.com:3128 .

https_proxy HTTPS URLs for proxy server. The server may still use an HTTP address as shown
in this example: http://proxy.example.com:3128 .

no_proxy URLs, which can bypass proxy. Typical examples would be


localhost,127.0.0.1,.svc,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,100.0.0.0/8 .

cert_file_path Name of the certificate file path for your proxy server. An example is:
C:\Users\Palomino\proxycert.crt .

Here is a sample command:

Azure CLI

az stack-hci-vm create --name $vmName --resource-group $resource_group -


-admin-username $userName --admin-password $password --computer-name
$computerName --image $imageName --location $location --authentication-
type all --nics $nicName --custom-location $customLocationID --hardware-
profile memory-mb="8192" processors="4" --storage-path-id $storagePathId
--proxy-configuration
http_proxy="http://ubuntu:[email protected]:3128"
https_proxy="http://ubuntu:[email protected]:3128"
no_proxy="localhost,127.0.0.1,.svc,10.0.0.0/8,172.16.0.0/12,192.168.0.0/
16,100.0.0.0/8,s-cluster.test.contoso.com"
cert_file_path="C:\ClusterStorage\UserStorage_1\server.crt"

For proxy authentication, you can pass the username and password combined in a URL as follows: "http://username:[email protected]:3128".

Depending on the PowerShell version you are running on your VM, you may need
to enable the proxy settings for your VM.

For Windows VMs running PowerShell version 5.1 or earlier, sign in to the VM
after the creation. Run the following command to enable proxy:

PowerShell
netsh winhttp set proxy proxy-server="http=myproxy;https=sproxy:88"
bypass-list="*.foo.com"

After the proxy is enabled, you can then Enable guest management.

For Windows VMs running PowerShell version later than 5.1, proxy settings
passed during VM creation are only used for enabling Arc guest management.
After the VM is created, sign in to the VM and run the above command to
enable proxy for other applications.

Use managed identity to authenticate Arc VMs


When the Arc VMs are created on your Azure Stack HCI system via Azure CLI or Azure
portal, a system-assigned managed identity is also created that lasts for the lifetime of
the Arc VMs.

The Arc VMs on Azure Stack HCI are extended from Arc-enabled servers and can use
system-assigned managed identity to access other Azure resources that support
Microsoft Entra ID-based authentication. For example, the Arc VMs can use a system-
assigned managed identity to access the Azure Key Vault.

For more information, see System-assigned managed identities and Authenticate against Azure resources with Azure Arc-enabled servers.
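
As an illustrative sketch, the following PowerShell, run from inside the Arc VM, shows the general challenge-and-response pattern that Arc-enabled servers use to request a Microsoft Entra token from the local identity endpoint. It assumes the Connected Machine agent is running in the VM and exposes the IDENTITY_ENDPOINT environment variable; https://vault.azure.net is used as the example target resource.

PowerShell

# Sketch: request a token for Azure Key Vault from inside the Arc VM (assumes the
# Connected Machine agent exposes the IDENTITY_ENDPOINT environment variable).
$resource = "https://vault.azure.net"
$endpoint = "{0}?resource={1}&api-version=2020-06-01" -f $env:IDENTITY_ENDPOINT, $resource
$secretFile = ""
try {
    # The first, unauthenticated call is expected to fail with a challenge.
    Invoke-WebRequest -Method GET -Uri $endpoint -Headers @{ Metadata = 'True' } -UseBasicParsing
}
catch {
    # The WWW-Authenticate header points to a local file that holds the challenge secret.
    $wwwAuthHeader = $_.Exception.Response.Headers["WWW-Authenticate"]
    if ($wwwAuthHeader -match "Basic realm=.+") {
        $secretFile = ($wwwAuthHeader -split "Basic realm=")[1]
    }
}
$secret = Get-Content -Raw $secretFile
$response = Invoke-WebRequest -Method GET -Uri $endpoint `
    -Headers @{ Metadata = 'True'; Authorization = "Basic $secret" } -UseBasicParsing
$token = (ConvertFrom-Json -InputObject $response.Content).access_token

The resulting access token can then be presented as a bearer token to the target service, for example in calls to the Key Vault REST API.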

Next steps
Install and manage VM extensions.
Troubleshoot Arc VMs.
Frequently Asked Questions for Arc VM management.
About updates for Azure Stack HCI,
version 23H2
Article • 02/02/2024

Applies to: Azure Stack HCI, version 23H2

This article describes the new update feature for this release, benefits of the feature, and
how to keep various pieces of your Azure Stack HCI solution up to date.

About the updates


Staying up to date with recent security fixes and feature improvements is important for
all pieces of the Azure Stack HCI solution. The latest release introduces new features and
components in addition to the OS, including the orchestrator (Lifecycle Manager).

The approach in this release provides a flexible foundation to integrate and manage
various aspects of the Azure Stack HCI solution in one place. The orchestrator for
updates is first installed during deployment and enables the new deployment
experience including the management of the OS, core agents and services, and the
solution extension.

Here's an example of a new cluster deployment using the updates in this release. In this solution, the Azure Stack HCI OS, agents and services, drivers, and firmware are automatically updated.

Some new agents and services can't be updated outside the orchestrator and availability
of those updates depends on the specific feature. You might need to follow different
processes to apply updates depending on the services you use.
Benefits
This new approach:

Simplifies update management by consolidating update workflows for various


components into a single experience.

Keeps the system in a well-tested and optimal configuration.

Helps avoid downtime and effects on workloads with comprehensive health checks
before and during an update.

Improves reliability with automatic retry and the remediation of known issues.

Whether managed locally or via the Azure portal, the common back-end drives a
consistent experience.

Lifecycle cadence
The Azure Stack HCI platform follows the Modern Lifecycle policy. The Modern Lifecycle
policy defines the products and services that are continuously serviced and supported.
To stay current with this policy, you must stay within six months of the most recent
release. To learn more about the support windows, see Azure Stack HCI release
information.

Microsoft might release the following types of updates for the Azure Stack HCI platform:

| Update type | Typical cadence | Description |
|---|---|---|
| Monthly Updates | Monthly | Monthly updates primarily contain quality and reliability improvements. They might include OS Latest Cumulative Updates¹. Some updates require host system reboots, while others don't. |
| Baseline Updates | Quarterly | Baseline updates include new features and improvements. They typically require host system reboots and might take longer. |
| Hotfixes | As needed | Hotfixes address blocking issues that could prevent regular monthly or baseline updates. To fix critical or security issues, hotfixes might be released sooner than monthly. |
| Solution Builder Extension | As needed | Solution Builder Extension² provides driver, firmware, and other partner content specific to the system solution used. They might require host system reboots. |

¹ Quality updates released based on packages that contain monthly updates. These updates supersede the previous month's updates and contain both security and non-security changes.

² The Original Equipment Manufacturer determines the frequency of Solution Builder Extension updates.

Sometimes you might see updates to the latest patch level of your current baseline. If a
new baseline is available, you might see the baseline update itself or the latest patch
level of the baseline. Your cluster must stay within six months of the most recent
baseline to consider it supported.

The next sections provide an overview of components, along with methods and
interfaces for updating your solution.

What's in the update package?


Solution updates managed by this feature contain new versions of the Azure Stack HCI
operating system (OS), core agents and services, and the solution extension (depending
on your cluster's hardware). Microsoft bundles these components into an update release
and validates the combination of versions to ensure interoperability.

Operating System: These updates help you stay productive and protected. They
provide users and IT administrators with the security fixes they need and protect
devices so that unpatched vulnerabilities can't be exploited.

Agents and services: The orchestrator updates its own agents to ensure it has the recent fixes corresponding to the update. The Azure Connected Machine agent, Arc Resource Bridge, and their dependencies get updated automatically to the latest validated version when the Azure Stack HCI system is updated.

Solution Builder Extension: Hardware vendors might choose to integrate with this
feature to enhance the update management experience for their customers.
If a hardware vendor integrates with our update validation and release platform,
the solution extension content includes the drivers and firmware, and the
orchestrator manages the necessary system reboots within the same
maintenance window. You can spend less time searching for updates and
experience fewer maintenance windows.

This solution is the recommended way to update your Azure Stack HCI cluster.

Note
Customer workloads aren't covered by this update solution.

User interfaces for updates


There are two interfaces you can use to apply available updates.

PowerShell (Command line)


The Azure portal

PowerShell
The PowerShell procedures apply to a single server and multi-server cluster that runs
with the orchestrator installed. For more information, see Update your Azure Stack HCI
solution via PowerShell.
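
For example, once an update has been discovered, applying it from a PowerShell session on the cluster can be as simple as the following pipeline (described in detail in the linked article):

PowerShell

Get-SolutionUpdate | Start-SolutionUpdate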

The Azure portal


You can install available Azure Stack HCI cluster updates via the Azure portal using the
Azure Update Manager. For more information, see Use Azure Update Manager to
update your Azure Stack HCI, version 23H2.

Next steps
Learn to Understand update phases.

Learn how to Troubleshoot updates.


Review update phases of Azure Stack
HCI, version 23H2
Article • 01/31/2024

Applies to: Azure Stack HCI, version 23H2

This article describes the various phases of solution updates that are applied to your
Azure Stack HCI cluster to keep it up-to-date. This information is applicable to Azure
Stack HCI, version 23H2.

The procedure in this article applies to both a single server and a multi-server cluster
that is running the latest version including the orchestrator.

About update phases


The Azure Stack HCI solution updates can consist of OS, agents and services, and solution extension updates. For more information on these solution updates, see About updates for Azure Stack HCI, version 23H2.

The new update feature automates the update process for agents, services, operating
system content, and Solution Extension content, with the goal of maintaining availability
by shifting workloads around throughout the update process when needed.

The updates can be of the following types:

Updates not requiring reboots - The updates that can be applied to your Azure
Stack HCI cluster without any server reboots in the cluster.

Updates that require reboots - The updates that might need a server reboot in
your Azure Stack HCI cluster. Cluster-Aware Updating is used to reboot servers in
the cluster one by one, ensuring the availability of the cluster during the update
process.

The updates consist of several phases: discovering the update, staging the content,
deploying the update, and reviewing the installation. Each phase might not require your
input but distinct actions occur in each phase.

You can apply these updates via PowerShell or the Azure portal. Regardless of the
interface you choose, the subsequent sections summarize what happens within each
phase of an update. The following diagram shows what actions you might need to take
during each phase and what actions Azure Stack HCI takes through the update
operation.

Phase 1: Discovery and acquisition


Before Microsoft releases a new update package, the package is validated as a collection
of components. After the validation is complete, the content is released along with the
release notes.

The release notes include the update contents, changes, known issues, and links to any
external downloads that might be required (for example, drivers and firmware). For
more information, see the Latest release notes.

After Microsoft releases the update, your Azure Stack HCI update platform will
automatically detect the update. Though you don't need to scan for updates, you must
go to the Updates page in your management surface to see the new update’s details.

Depending on the hardware in your cluster and the scope of an update bundle, you
might need to acquire and sideload extra content to proceed with an update. The
operating system and agents and services content are provided by Microsoft, while
depending on your specific solution and the OEM, the Solution Extension might require
an extra download from the hardware OEM. If more is required, the installation flow
prompts you for the content.

Phase 2: Readiness checks and staging


There are a series of prechecks before installing a solution update. The prechecks are related to the storage systems, failover cluster requirements, remote management of the cluster, and solution extensions. These prechecks help confirm that your Azure Stack HCI cluster is safe to update and ensure that updates go more smoothly.

A subset of these checks can be initiated outside the update process. Because new
checks can be included in each update, these readiness checks are executed after the
update content is downloaded and before it begins installing.
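
For example, you can run the Environment Checker manually from a PowerShell session on a cluster server to validate system health ahead of an update. This sketch uses the cmdlet shown in the PowerShell update article:

PowerShell

$result = Test-EnvironmentReadiness
$result | ft Name,Status,Severity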

Readiness checks can also result in blocking conditions or warnings.

If the readiness checks detect a blocking condition, the issues must be remediated
before the update can proceed.

If the readiness checks result in warnings, the update could take longer or affect your workloads. You might need to acknowledge the potential impact and bypass the warning before the update can proceed.

Note

In this release, you can only initiate immediate install of the updates. Scheduling of
updates is not supported.

Phase 3: Installation progress and monitoring


While the update installs, you can monitor the progress via your chosen interface. Steps
within the update are shown within a hierarchy and correspond to the actions taken
throughout the workflow. Steps might be dynamically generated throughout the
workflow, so the list of steps could change. For more information, see examples of
Monitoring progress via PowerShell.

The new update solution includes retry and remediation logic. It attempts to fix update
issues automatically and in a non-disruptive way, but sometimes manual intervention is
required. For more information, see Troubleshooting updates.

Note

Once you remediate the issue, you need to rerun the checks to confirm the update
readiness before proceeding.

Next step
Learn more about how to Troubleshoot updates.
Update your Azure Stack HCI, version
23H2 via PowerShell
Article • 01/31/2024

Applies to: Azure Stack HCI, version 23H2

Important

The procedure described here applies only when updating from one version of
Azure Stack HCI, version 23H2 to another higher version. For information on
updates for older versions, see Update clusters for Azure Stack HCI, version 22H2.


This article describes how to apply a solution update to your Azure Stack HCI cluster via
PowerShell.

The procedure in this article applies to both a single server and multi-server cluster that
is running the latest version with the orchestrator (Lifecycle Manager) installed. If your
cluster was created via a new deployment of Azure Stack HCI, version 23H2, then the
orchestrator was automatically installed as part of the deployment.

For information on how to apply solution updates to clusters created with older versions of Azure Stack HCI that didn't have the orchestrator installed, see Update Azure Stack HCI clusters, version 22H2.

About solution updates


The Azure Stack HCI solution updates can consist of platform, service, and solution
extension updates. For more information on each of these types of updates, see About
updates for Azure Stack HCI, version 23H2.

When you apply a solution update, here are the high-level steps that you take:

1. Make sure that all the prerequisites are completed.


2. Identify the software version running on your cluster.
3. Connect to your Azure Stack HCI cluster via remote PowerShell.
4. Use the Environment Checker to verify that your cluster is in good health.
5. Discover the updates that are available and filter the ones that you can apply to
your cluster.
6. Download the updates, assess the update readiness of your cluster and once
ready, install the updates on your cluster. Track the progress of the updates. If
needed, you can also monitor the detailed progress.
7. Verify the version of the updates installed.

The time taken to install the updates might vary based on the following factors:

Content of the update.


Load on your cluster.
Number of servers in your cluster.
Type of the hardware used.
Solution Builder Extension used.

The approximate time estimates for a typical single server and 4-server cluster are
summarized in the following table:

| Cluster | Time for health check (hh:mm:ss) | Time to install update (hh:mm:ss) |
|---|---|---|
| Single server | 0:01:44 | 1:25:42 |
| 4-server cluster | 0:01:58 | 3:53:09 |

Prerequisites
Before you begin, make sure that:

You have access to an Azure Stack HCI, version 23H2 cluster that is running 2310 or
higher. The cluster should be registered in Azure.
You have access to a client that can connect to your Azure Stack HCI cluster. This
client should be running PowerShell 5.0 or later.
You have access to the solution update package over the network. You sideload or
copy these updates to the servers of your cluster.

Connect to your Azure Stack HCI cluster


Follow these steps on your client to connect to one of the servers of your Azure Stack
HCI cluster.
1. Run PowerShell as administrator on the client that you're using to connect to your
cluster.

2. Open a remote PowerShell session to a server on your Azure Stack HCI cluster. Run
the following command and provide the credentials of your server when
prompted:

PowerShell

$cred = Get-Credential
Enter-PSSession -ComputerName "<Computer IP>" -Credential $cred

Note

You should sign in using your deployment user account credentials. This is the account you created when preparing Active Directory and used during the deployment of the Azure Stack HCI system.

Here's an example output:

Console

PS C:\Users\Administrator> $cred = Get-Credential

cmdlet Get-Credential at command pipeline position 1


Supply values for the following parameters:
Credential
PS C:\Users\Administrator> Enter-PSSession -ComputerName
"100.100.100.10" -Credential $cred
[100.100.100.10]: PS C:\Users\Administrator\Documents>

Step 1: Identify the stamp version on your


cluster
Before you discover the updates, make sure that the cluster was deployed using the
Azure Stack HCI, version 23H2, software version 2310.

1. Make sure that you're connected to the cluster server using the deployment user
account. Run the following command:

PowerShell
whoami

2. To ensure that the cluster was deployed running Azure Stack HCI, version 23H2,
run the following command on one of the servers of your cluster:

PowerShell

Get-StampInformation

Here's a sample output:

Console

PS C:\Users\lcmuser> Get-StampInformation
Deployment ID : b4457f25-6681-4e0e-b197-a7a433d621d6
OemVersion : 2.1.0.0
PackageHash :
StampVersion : 10.2303.0.31
InitialDeployedVersion : 10.2303.0.26
PS C:\Users\lcmuser>

3. Make a note of the StampVersion on your cluster. The stamp version reflects the
solution version that your cluster is running.

Step 2: Optionally validate system health


Before you discover the updates, you can manually validate the system health. This step
is optional as the orchestrator always assesses update readiness prior to applying
updates.

Note

Any faults that have a severity of critical will block the updates from being applied.

1. Connect to a server on your Azure Stack HCI cluster using the deployment user
account.

2. Run the following command to validate system health via the Environment
Checker.

PowerShell
$result = Test-EnvironmentReadiness
$result | ft Name,Status,Severity

Here's a sample output:

Console

PS C:\Users\lcmuser> whoami
rq2205\lcmuser
PS C:\Users\lcmuser> $result=Test-EnvironmentReadiness
VERBOSE: Looking up shared vhd product drive letter.
WARNING: Unable to find volume with label Deployment
VERBOSE: Get-Package returned with Success:True
VERBOSE: Found package
Microsoft.AzureStack.Solution.Deploy.EnterpriseCloudEngine.Client.Deplo
yment with version 10.2303.0.31 at
C:\NugetStore\Microsoft.AzureStack.Solution.Deploy.EnterpriseCloudEngin
e.Client.Deployment.10.2303.0.31\Microsoft.Azure
Stack.Solution.Deploy.EnterpriseCloudEngine.Client.Deployment.nuspec.
03/29/2023 15:45:58 : Launching StoragePools
03/29/2023 15:45:58 : Launching StoragePhysicalDisks
03/29/2023 15:45:58 : Launching StorageMapping
03/29/2023 15:45:58 : Launching StorageSubSystems
03/29/2023 15:45:58 : Launching TestCauSetup
03/29/2023 15:45:58 : Launching StorageVolumes
03/29/2023 15:45:58 : Launching StorageVirtualDisks
03/29/2023 15:46:05 : Launching OneNodeEnvironment
03/29/2023 15:46:05 : Launching NonMigratableWorkload
03/29/2023 15:46:05 : Launching FaultSummary
03/29/2023 15:46:06 : Launching SBEHealthStatusOnNode
03/29/2023 15:46:06 : Launching StorageJobStatus
03/29/2023 15:46:07 : Launching StorageCsv
WARNING: There aren't any faults right now.
03/29/2023 15:46:09 : Launching SBEPrecheckStatus
WARNING: rq2205-cl: There aren't any faults right now.
VERBOSE: Looking up shared vhd product drive letter.
WARNING: Unable to find volume with label Deployment
VERBOSE: Get-Package returned with Success:True
VERBOSE: Found package Microsoft.AzureStack.Role.SBE with version
4.0.2303.66 at
C:\NugetStore\Microsoft.AzureStack.Role.SBE.4.0.2303.66\Microsoft.Azure
Stack.Role.SBE.nuspec.
VERBOSE: SolutionExtension module supports Tag
'HealthServiceIntegration'.
VERBOSE: SolutionExtension module SolutionExtension at
C:\ClusterStorage\Infrastructure_1\Shares\SU1_Infrastructure_1\CloudMed
ia\SBE\Installed\Content\Configuration\SolutionExtension is valid.
VERBOSE: Looking up shared vhd product drive letter.
WARNING: Unable to find volume with label Deployment
VERBOSE: Get-Package returned with Success:True
VERBOSE: Found package Microsoft.AzureStack.Role.SBE with version
4.0.2303.66 at
C:\NugetStore\Microsoft.AzureStack.Role.SBE.4.0.2303.66\Microsoft.Azure
Stack.Role.SBE.nuspec.
VERBOSE: SolutionExtension module supports Tag
'HealthServiceIntegration'.
VERBOSE: SolutionExtension module SolutionExtension at
C:\ClusterStorage\Infrastructure_1\Shares\SU1_Infrastructure_1\CloudMed
ia\SBE\Installed\Content\Configuration\SolutionExtension is valid.
PS C:\Users\lcmuser> $result|ft Name,Status,Severity

Name Status Severity


---- ------ --------
Storage Pool Summary SUCCESS CRITICAL
Storage Services Physical Disks Summary SUCCESS CRITICAL
Storage Services Physical Disks Summary SUCCESS CRITICAL
Storage Services Physical Disks Summary SUCCESS CRITICAL
Storage Services Physical Disks Summary SUCCESS CRITICAL
Storage Services Physical Disks Summary SUCCESS CRITICAL
Storage Services Physical Disks Summary SUCCESS CRITICAL
Storage Services Physical Disks Summary SUCCESS CRITICAL
Storage Services Physical Disks Summary SUCCESS CRITICAL
Storage Services Physical Disks Summary SUCCESS CRITICAL
Storage Services Physical Disks Summary SUCCESS CRITICAL
Storage Services Physical Disks Summary SUCCESS CRITICAL
Storage Services Physical Disks Summary SUCCESS CRITICAL
Storage Services Physical Disks Summary SUCCESS CRITICAL
Storage Services Physical Disks Summary SUCCESS CRITICAL
Storage Services Physical Disks Summary SUCCESS CRITICAL
Storage Services Physical Disks Summary SUCCESS CRITICAL
Storage Services Physical Disks Summary SUCCESS CRITICAL
Storage Services Physical Disks Summary SUCCESS CRITICAL
Storage Services Physical Disks Summary SUCCESS CRITICAL
Storage Services Physical Disks Summary SUCCESS CRITICAL
Storage Services Summary SUCCESS CRITICAL
Storage Services Summary SUCCESS CRITICAL
Storage Services Summary SUCCESS CRITICAL
Storage Subsystem Summary SUCCESS CRITICAL
Test-CauSetup SUCCESS INFORMATIONAL
Test-CauSetup SUCCESS INFORMATIONAL
Test-CauSetup SUCCESS INFORMATIONAL
Test-CauSetup SUCCESS INFORMATIONAL
Test-CauSetup SUCCESS CRITICAL
Test-CauSetup SUCCESS INFORMATIONAL
Test-CauSetup SUCCESS INFORMATIONAL
Test-CauSetup SUCCESS INFORMATIONAL
Test-CauSetup FAILURE INFORMATIONAL
Test-CauSetup FAILURE INFORMATIONAL
Test-CauSetup FAILURE INFORMATIONAL
Storage Volume Summary SUCCESS CRITICAL
Storage Volume Summary SUCCESS CRITICAL
Storage Volume Summary SUCCESS CRITICAL
Storage Volume Summary SUCCESS CRITICAL
Storage Virtual Disk Summary SUCCESS CRITICAL
Storage Virtual Disk Summary SUCCESS CRITICAL
Storage Virtual Disk Summary SUCCESS CRITICAL
Storage Virtual Disk Summary SUCCESS CRITICAL
Get-OneNodeRebootRequired SUCCESS WARNING
Test-NonMigratableVMs SUCCESS WARNING
Faults SUCCESS INFORMATIONAL
Test-SBEHealthStatusOnNode Success Informational
Test-SBEHealthStatusOnNode Success Informational
Storage Job Summary SUCCESS CRITICAL
Storage Cluster Shared Volume Summary SUCCESS CRITICAL
Storage Cluster Shared Volume Summary SUCCESS CRITICAL
Storage Cluster Shared Volume Summary SUCCESS CRITICAL
Test-SBEPrecheckStatus Success Informational

PS C:\Users\lcmuser>

Note

In this release, the informational failures for Test-CauSetup are expected and
will not impact the updates.

3. Review any failures and resolve them before you proceed to the discovery step.

Step 3: Discover the updates


You can discover updates in one of the following two ways:

Discover updates online - The recommended option when your cluster has good
internet connectivity. The solution updates are discovered via the online update
catalog.
Sideload and discover updates - An alternative to discovering updates online and
should be used for scenarios with unreliable or slow internet connectivity, or when
using solution extension updates provided by your hardware vendor. In these
instances, you download the solution updates to a central location. You then
sideload the updates to an Azure Stack HCI cluster and discover the updates
locally.

Discover solution updates online (recommended)


Discovering solution updates using the online catalog is the recommended method.
Follow these steps to discover solution updates online:

1. Connect to a server on your Azure Stack HCI cluster using the deployment user
account.

2. Verify that the Update service discovers the update package.


PowerShell

Get-SolutionUpdate | ft DisplayName, State

3. Optionally review the versions of the update package components.

PowerShell

$Update = Get-SolutionUpdate
$Update.ComponentVersions

Here's an example output:

Console

PS C:\Users\lcmuser> $Update = Get-SolutionUpdate


PS C:\Users\lcmuser> $Update.ComponentVersions

PackageType Version LastUpdated


----------- ------- -----------
Services 10.2303.0.31
Platform 10.2303.0.31
SBE 4.1.2.3
PS C:\Users\lcmuser>

You can now proceed to Download and install the updates.

Sideload and discover solution updates


If you're using solution extension updates from your hardware vendor, you need to sideload those updates. Follow these steps to sideload and discover your solution updates.

1. Connect to a server on your Azure Stack HCI cluster using the deployment user
account.

2. Go to the network share and acquire the update package that you use. Verify that
the update package you sideload contains the following files:

SolutionUpdate.xml
SolutionUpdate.zip
AS_Update_10.2303.4.1.zip

If a solution builder extension is part of the update package, you should also see
the following files:
SBE_Content_4.1.2.3.xml
SBE_Content_4.1.2.3.zip
SBE_Discovery_Contoso.xml

3. Create a folder for discovery by the update service at the following location in the
infrastructure volume of your cluster.

PowerShell

New-Item
C:\ClusterStorage\Infrastructure_1\Shares\SU1_Infrastructure_1\sideload
-ItemType Directory

4. Copy the update package to the folder you created in the previous step.

5. Manually discover the update package using the Update service. Run the following
command:

PowerShell

Add-SolutionUpdate -SourceFolder
C:\ClusterStorage\Infrastructure_1\Shares\SU1_Infrastructure_1\sideload

6. Verify that the Update service discovers the update package and that it's available
to start preparation and installation.

PowerShell

Get-SolutionUpdate | ft DisplayName, Version, State

Here's an example output:

Console

PS C:\Users\lcmuser> Get-SolutionUpdate | ft DisplayName, Version,


State

DisplayName Version State


----------- ------- -----
Azure Stack HCI 2303 bundle 10.2303.0.31 Ready

PS C:\Users\lcmuser>

7. Optionally check the version of the update package components. Run the
following command:
PowerShell

$Update = Get-SolutionUpdate
$Update.ComponentVersions

Here's an example output:

Console

PS C:\Users\lcmuser> $Update = Get-SolutionUpdate


PS C:\Users\lcmuser> $Update.ComponentVersions

PackageType Version LastUpdated


----------- ------- -----------
Services 10.2303.0.31
Platform 10.2303.0.31
SBE 4.1.2.3
PS C:\Users\lcmuser>

Step 4: Download, check readiness, and install


updates
You can download the updates, perform a set of checks to verify your cluster's update
readiness, and start installing the updates.

1. You can either download the update without starting the installation, or download and install the update.

To download and install the update, run the following command:

PowerShell

Get-SolutionUpdate | Start-SolutionUpdate

To only download the updates without starting the installation, use the -
PrepareOnly flag with Start-SolutionUpdate .
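
For example, a download-only run might look like this:

PowerShell

Get-SolutionUpdate | Start-SolutionUpdate -PrepareOnly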

2. To track the update progress, monitor the update state. Run the following
command:

PowerShell

Get-SolutionUpdate | ft Version,State,UpdateStateProperties,HealthState
When the update starts, the following actions occur:

Download of the updates begins. Depending on the size of the download


package and the network bandwidth, the download might take several
minutes.

Here's an example output when the updates are being downloaded:

Console

PS C:\Users\lcmuser> Get-SolutionUpdate|ft
Version,State,UpdateStateProperties,HealthState

Version State UpdateStateProperties HealthState


------- ----- --------------------- -----------
10.2303.4.1 Downloading InProgress

Once the package is downloaded, readiness checks are performed to assess


the update readiness of your cluster. For more information about the
readiness checks, see Update phases. During this phase, the State of the
update shows as HealthChecking .

Console

PS C:\Users\lcmuser> Get-SolutionUpdate|ft
Version,State,UpdateStateProperties,HealthState

Version State UpdateStateProperties HealthState


------- ----- --------------------- -----------
10.2303.4.1 HealthChecking InProgress

When the system is ready, updates are installed. During this phase, the State
of the updates shows as Installing and UpdateStateProperties shows the
percentage of the installation that was completed.

Important

During the install, the cluster servers may reboot and you may need to
establish the remote PowerShell session again to monitor the updates. If
updating a single server, your Azure Stack HCI will experience downtime.

Here's a sample output while the updates are being installed.


Console

PS C:\Users\lcmuser> Get-SolutionUpdate|ft
Version,State,UpdateStateProperties,HealthState

Version State UpdateStateProperties HealthState


------- ----- --------------------- -----------
10.2303.4.1 Installing 6% complete. Success

PS C:\Users\lcmuser> Get-SolutionUpdate|ft
Version,State,UpdateStateProperties,HealthState

Version State UpdateStateProperties HealthState


------- ----- --------------------- -----------
10.2303.4.1 Installing 25% complete. Success

PS C:\Users\lcmuser> Get-SolutionUpdate|ft
Version,State,UpdateStateProperties,HealthState

Version State UpdateStateProperties HealthState


------- ----- --------------------- -----------
10.2303.4.1 Installing 40% complete. Success

PS C:\Users\lcmuser> Get-SolutionUpdate|ft
Version,State,UpdateStateProperties,HealthState

Version State UpdateStateProperties HealthState


------- ----- --------------------- -----------
10.2303.4.1 Installing 89% complete. Success

Once the installation is complete, the State changes to Installed . For more information
on the various states of the updates, see Installation progress and monitoring.

Step 5: Verify the installation


After the updates are installed, verify the solution version of the environment and the
operating system version.

1. After the update is in Installed state, check the environment solution version. Run
the following command:

PowerShell

Get-SolutionUpdateEnvironment | ft State, CurrentVersion

Here's a sample output:


Console

PS C:\Users\lcmuser> Get-SolutionUpdateEnvironment | ft State,


CurrentVersion

State CurrentVersion
----- --------------
AppliedSuccessfully 10.2303.0.31

2. Check the operating system version to confirm it matches the recipe you installed.
Run the following command:

PowerShell

cmd /c ver

Here's a sample output:

Console

PS C:\Users\lcmuser> cmd /c ver

Microsoft Windows [Version 10.0.20349.1547]


PS C:\Users\lcmuser>

Troubleshoot updates
To resume a previously failed update run via PowerShell, use the following command:

PowerShell

get-solutionupdate | start-solutionupdate

To resume a previously failed update due to update health checks in a Warning state,
use the following command:

PowerShell

get-solutionUpdate | start-solutionUpdate -IgnoreWarnings

To troubleshoot other update run issues, see Troubleshoot updates.


Next step
Learn more about how to Update Azure Stack HCI clusters, version 22H2 when the
orchestrator isn't installed.
Use Azure Update Manager to update
your Azure Stack HCI, version 23H2
Article • 01/31/2024

Applies to: Azure Stack HCI, version 23H2

Important

The procedure described here applies only when updating from one version of
Azure Stack HCI, version 23H2 to another higher version. For information on
updates for older versions, see Update clusters for Azure Stack HCI, version 22H2.

This article describes how to use Azure Update Manager to find and install available
cluster updates on selected Azure Stack HCI clusters. Additionally, we provide guidance
on how to review cluster updates, track progress, and browse cluster updates history.

About Azure Update Manager


Azure Update Manager is an Azure service that allows you to apply, view, and manage
updates for each of your Azure Stack HCI cluster's nodes. You can view Azure Stack HCI
clusters across your entire infrastructure, or in remote/branch offices and update at
scale.

Here are some benefits of the Azure Update Manager:

The update agent checks Azure Stack HCI clusters for update health and available
updates daily.
You can view the update status and readiness for each cluster.
You can update multiple clusters at the same time.
You can view the status of updates while they're in progress.
Once complete, you can view the results and history of updates.

Prerequisites
An Azure Stack HCI, version 23H2 cluster deployed and registered with Azure.

Browse for cluster updates


To browse for available cluster updates using Azure Update Manager, follow these steps:

1. Sign into the Azure portal and go to Azure Update Manager.

2. Under Manage Azure Stack HCI, select Azure Stack HCI.

Filter by Subscription, Resource group, Location, Status, Update readiness,


Current OS version, and/or Tags to view a list of clusters.

3. In the cluster list, view each cluster's update status, update readiness, current OS version, and the date and time of the last successful update.

Install cluster updates


To install cluster updates using Azure Update Manager, follow these steps:

1. Sign into the Azure portal and go to Azure Update Manager.

2. Under Manage Azure Stack HCI, select Azure Stack HCI.

3. Select one or more clusters from the list, then select One-time Update.

4. On the Check readiness page, review the list of readiness checks and their results.
You can select the links under Affected systems to view more details and
individual cluster results.

5. Select Next.

6. On the Select updates page, specify the updates you want to include in the
deployment.
a. Select Systems to update to view cluster updates to install or remove from the
update installation.
b. Select the Version link to view the update components, versions, and update
release notes.

7. Select Next.

8. On the Review + install page, verify your update deployment options, and then
select Install.

You should see a notification that confirms the installation of updates. If you don’t
see the notification, select the notification icon in the top right taskbar.


Track cluster update progress
When you install cluster updates via Azure Update Manager, you can check the progress
of those updates.

Note

After you trigger an update, it can take up to 5 minutes for the update run to show
up in the Azure portal.

To view the progress of your clusters, update installation, and completion results, follow
these steps:

1. Sign into the Azure portal and go to Azure Update Manager.

2. Under Manage Azure Stack HCI, select History.

3. Select an update run from the list with a status of In Progress.

4. On the Download updates page, review the progress of the download and
preparation, and then select Next.

5. On the Check readiness page, review the progress of the checks, and then select
Next.

6. On the Install page, review the progress of the update installation.


Browse cluster update job history


To browse for your clusters update history, follow these steps:

1. Sign into the Azure portal and go to Azure Update Manager.

2. Under Manage Azure Stack HCI, select History.

3. Select an update run with a status of “Failed to update” or “Successfully updated”.

4. On the Download updates page, review the results of the download and
preparation and then select Next.

5. On the Check readiness page, review the results and then select Next.
Under the Affected systems column, if you have an error, select View Details
for more information.

6. On the Install page, review the results of the installation.

Under the Result column, if you have an error, select View Details for more
information.

Update a cluster via the Azure Stack HCI cluster


resource page
In addition to using Azure Update Manager, you can update individual Azure Stack HCI
clusters from the Azure Stack HCI cluster resource page.

To install updates on a single cluster from the Azure Stack HCI cluster resource page,
follow these steps:

1. Sign into the Azure portal and go to Azure Update Manager.

2. Under Manage Azure Stack HCI, select Azure Stack HCI.

3. Select the cluster name from the list.

4. Select the update and then select One-time update.

5. On the Check readiness page, review the list of readiness checks and their results.
You can select the links under Affected systems to view more details and
individual cluster results.

6. Select Next.

7. On the Select updates page, specify the updates you want to include in the
deployment.
a. Select Systems to update to view cluster updates to install or remove from the
update installation.
b. Select the Version link to view the update components and their versions.
c. Select the Details, View details link, to view the update release notes.

8. Select Next.

9. On the Review + install page, verify your update deployment options, and select
Install.

You should see a notification that confirms the installation of updates. If you don’t
see the notification, select the notification icon in the top right taskbar.

Update your hardware via Windows Admin


Center
In addition to cluster updates using Azure Update Manager or the Azure Stack HCI
cluster resource page, you can use Windows Admin Center to check for and install
available hardware (firmware and driver) updates for your Azure Stack HCI system.

Here's an example of the Windows Admin Center updates tool for systems running
Azure Stack HCI, version 23H2.

Troubleshoot updates
To resume a previously failed update run, browse to the failed update and select the Try
again button. This functionality is available at the Download updates, Check readiness,
and Install stages of an update run.

If you're unable to successfully rerun a failed update or need to troubleshoot an error


further, follow these steps:

1. Select the View details of an error.

2. When the details box opens, you can download error logs by selecting the
Download logs button. This prompts the download of a JSON file.

3. Additionally, you can select the Open a support ticket button, fill in the
appropriate information, and attach your downloaded logs so that they're available
to Microsoft Support.

For more information on creating a support ticket, see Create a support request.

To troubleshoot other update run issues, see Troubleshoot updates.

Next steps
Learn to Understand update phases.

Learn more about how to Troubleshoot updates.


Troubleshoot solution updates for Azure
Stack HCI, version 23H2
Article • 01/31/2024

Applies to: Azure Stack HCI, version 23H2

This article describes how to troubleshoot solution updates that are applied to your
Azure Stack HCI cluster to keep it up-to-date.

About troubleshooting updates


If your cluster was created via a new deployment of Azure Stack HCI, version 23H2, then
an orchestrator was installed during the deployment. The orchestrator manages all of
the updates for the platform - OS, drivers and firmware, agents and services, and
updates for the workloads.

The new update solution includes retry and remediation logic. This logic attempts to fix update issues in a non-disruptive way, such as retrying a CAU run. If an update run can't be remediated automatically, it fails. When an update fails, you can retry the update.

Collect update logs


You can also collect diagnostic logs to help Microsoft identify and fix the issues.

To collect logs for updates using the Azure portal, see Use Azure Update Manager to
update your Azure Stack HCI, version 23H2.

To collect logs for the update failures using PowerShell, follow these steps on the client
that you're using to access your cluster:

1. Establish a remote PowerShell session with the server node. Run PowerShell as
administrator and run the following command:

PowerShell

Enter-PSSession -ComputerName <server_IP_address> -Credential <username\password for the server>
2. Get all the solutions updates and then filter the solution updates corresponding to
a specific version. The version used corresponds to the version of solution update
that failed to install.

PowerShell

$Update = Get-SolutionUpdate | ? version -eq "<Version string>" -verbose

3. Identify the action plan for the failed solution update run.

PowerShell

$Failure = $update | Get-SolutionUpdateRun

4. Identify the ResourceID for the Update.

PowerShell

$Failure

Here's a sample output:

Output

PS C:\Users\lcmuser> $Update = Get-SolutionUpdate| ? version -eq


"10.2303.1.7" -verbose
PS C:\Users\lcmuser> $Failure = $Update|Get-SolutionUpdateRun
PS C:\Users\lcmuser> $Failure

ResourceId : redmond/Solution10.2303.1.7/2c21b859-e063-4f24-a4db-
bc1d6be82c4e
Progress :
Microsoft.AzureStack.Services.Update.ResourceProvider.UpdateService.Mod
els.Step
TimeStarted : 4/21/2023 10:02:54 PM
LastUpdatedTime : 4/21/2023 3:19:05 PM
Duration : 00:16:37.9688878
State : Failed

Note the ResourceID GUID. This GUID corresponds to the ActionPlanInstanceID .

5. Copy the logs for the ActionPlanInstanceID that you noted earlier, to a text file
named log.txt. Use Notepad to open the text file.

PowerShell
Get-ActionplanInstance -ActionplanInstanceId <Action Plan Instance ID>
>log.txt
notepad log.txt

Here's sample output:

Output

PS C:\Users\lcmuser> Get-ActionplanInstance -actionplaninstanceid


2c21b859-e063-4f24-a4db-bc1d6be82c4e >log.txt

PS C:\Users\lcmuser>notepad log.txt

Resume an update
To resume a previously failed update run, you can retry the update run via the Azure
portal or PowerShell.

The Azure portal


We highly recommend using the Azure portal: browse to your failed update and select the Try again button. This functionality is available at the Download updates, Check readiness, and Install stages of an update run.

PowerShell
If you're using PowerShell and need to resume a previously failed update run, use the
following command:

PowerShell

get-solutionupdate | start-solutionupdate

To resume a previously failed update due to update health checks in a Warning state,
use the following command:

PowerShell

get-solutionUpdate | start-solutionUpdate -IgnoreWarnings

Next steps
Learn more about how to Run updates via PowerShell.

Learn more about how to Run updates via the Azure portal.
What is Azure Arc VM management?
Article • 02/02/2024

Applies to: Azure Stack HCI, version 23H2

This article provides a brief overview of the Azure Arc VM management feature on Azure
Stack HCI including the benefits, its components, and high-level workflow.

About Azure Arc VM management


Azure Arc VM management lets you provision and manage Windows and Linux VMs hosted in an on-premises Azure Stack HCI environment. This feature enables IT admins to create, modify, and delete VMs, and to assign permissions and roles to app owners, thereby enabling self-service VM management.

Administrators can manage Arc VMs on their Azure Stack HCI clusters by using Azure
management tools, including Azure portal, Azure CLI, Azure PowerShell, and Azure
Resource Manager (ARM) templates. Using Azure Resource Manager templates, you can
also automate VM provisioning in a secure cloud environment.
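
For example, once the stack-hci-vm Azure CLI extension is available, you could enumerate the Arc VMs in a resource group with a command along the lines of the following sketch (this assumes the extension exposes a list operation alongside create; the resource group name is a placeholder):

Azure CLI

az stack-hci-vm list --resource-group <resource-group-name> --output table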

To find answers to frequently asked questions about Arc VM management on Azure


Stack HCI, see the FAQ.

Benefits of Azure Arc VM management


While Hyper-V provides capabilities to manage your on-premises VMs, Azure Arc VMs
offer many benefits over traditional on-premises tools including:

Role-based access control via builtin Azure Stack HCI roles ensures that only
authorized users can perform VM management operations thereby enhancing
security. For more information, see Azure Stack HCI Arc VM management roles.

Arc VM management provides the ability to deploy with ARM templates, Bicep,
and Terraform.

The Azure portal acts as a single pane of glass to manage VMs on Azure Stack HCI
clusters and Azure VMs. With Azure Arc VM management, you can perform various
operations from the Azure portal or Azure CLI including:
Create, manage, update, and delete VMs. For more information, see Create Arc VMs.
Create, manage, and delete VM resources such as virtual disks, logical networks,
network interfaces, and VM images.

The self-service capabilities of Arc VM management reduce the administrative


overhead.

Components of Azure Arc VM management


Arc VM Management comprises several components including the Arc Resource Bridge,
Custom Location, and the Kubernetes Extension for the VM operator.

Arc Resource Bridge: This lightweight Kubernetes VM connects your on-premises


Azure Stack HCI cluster to the Azure Cloud. The Arc Resource Bridge is created
automatically when you deploy the Azure Stack HCI cluster.

For more information, see the Arc Resource Bridge overview.

Custom Location: Just like the Arc Resource Bridge, a custom location is created
automatically when you deploy your Azure Stack HCI cluster. You can use this
custom location to deploy Azure services. You can also deploy VMs in these user-
defined custom locations, integrating your on-premises setup more closely with
Azure.

Kubernetes Extension for VM Operator: The VM operator is the on-premises


counterpart of the Azure Resource Manager resource provider. It is a Kubernetes
controller that uses custom resources to manage your VMs.

By integrating these components, Azure Arc offers a unified and efficient VM


management solution, seamlessly bridging the gap between on-premises and cloud
infrastructures.

Azure Arc VM management workflow


In this release, the Arc VM management workflow is as follows:

1. During the deployment of Azure Stack HCI cluster, one Arc Resource Bridge is
installed per cluster and a custom location is also created.
2. Assign builtin RBAC roles for Arc VM management.
3. You can then create VM resources such as:
a. Storage paths for VM disks.
b. VM images starting with an Image in Azure Marketplace, in Azure Storage
account, or in Local share. These images are then used with other VM resources
to create VMs.
c. Logical networks.
d. VM network interfaces.
4. Use the VM resources to Create VMs.

To troubleshoot issues with your Arc VMs or to learn about existing known issues and
limitations, see Troubleshoot Arc virtual machines.

Next steps
Review Azure Arc VM management prerequisites
Azure Arc VM management
prerequisites
Article • 02/28/2024

Applies to: Azure Stack HCI, version 23H2

This article lists the requirements and prerequisites for Azure Arc VM management. We
recommend that you review the requirements and complete the prerequisites before
you manage your Arc VMs.

Azure requirements
The Azure requirements include:

To provision Arc VMs and VM resources such as virtual disks, logical networks, network interfaces, and VM images through the Azure portal, you must have Contributor level access at the subscription level.

Arc VM management infrastructure is supported in the regions documented in the


Azure requirements. For Arc VM management on Azure Stack HCI, all entities must
be registered, enabled, or created in the same region.

The entities include Azure Stack HCI cluster, Arc Resource Bridge, Custom Location,
VM operator, virtual machines created from Arc and Azure Arc for Servers guest
management. These entities can be in different or same resource groups as long as
all resource groups are in the same region.

Azure Command-Line Interface (CLI)


requirements
You can connect to Azure Stack HCI system directly or you can access the cluster
remotely. Depending on whether you're connecting to the cluster directly or remotely,
the steps are different.

For information on Azure CLI commands for Azure Stack HCI VMs, see az stack-hci-vm.

Connect to the cluster directly


If you're accessing the Azure Stack HCI cluster directly, no steps are needed on your
part.

During the cluster deployment, an Arc Resource Bridge is created and the Azure CLI
extension stack-hci-vm is installed on the cluster. You can connect to and manage the
cluster using the Azure CLI extension.
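
If you want to confirm the extension is present, you can list the installed Azure CLI extensions from one of the cluster servers. This is a sketch using the standard az extension list command:

Azure CLI

az extension list --output table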

Connect to the cluster remotely


If you're accessing the Azure Stack HCI system remotely, the following requirements
must be met:

The latest version of Azure Command-Line Interface (CLI). You must install this
version on the client that you're using to connect to your Azure Stack HCI cluster.

For installation instructions, see Install Azure CLI. Once you have installed az
CLI, make sure to restart the system.

If you're using a local installation, sign in to the Azure CLI by using the az
login command. To finish the authentication process, follow the steps
displayed in your terminal. For other sign-in options, see Sign in with the
Azure CLI.

Run az version to find the version and dependent libraries that are installed.
To upgrade to the latest version, run az upgrade.

The Azure Stack HCI extension stack-hci-vm. Run PowerShell as an administrator on your client and run the following command:

PowerShell

az extension add --name "stack-hci-vm"

Next steps
Assign RBAC role for Arc VM management.
Use Role-based Access Control to
manage Azure Stack HCI Virtual
Machines
Article • 02/14/2024

Applies to: Azure Stack HCI, version 23H2

This article describes how to use the Role-based Access Control (RBAC) to control access
to Arc virtual machines (VMs) running on your Azure Stack HCI cluster.

You can use the builtin RBAC roles to control access to VMs and VM resources such as
virtual disks, network interfaces, VM images, logical networks and storage paths. You can
assign these roles to users, groups, service principals and managed identities.

Important

This feature is currently in PREVIEW. See the Supplemental Terms of Use for
Microsoft Azure Previews for legal terms that apply to Azure features that are in
beta, preview, or otherwise not yet released into general availability.

About builtin RBAC roles


To control access to VMs and VM resources on your Azure Stack HCI, you can use the
following RBAC roles:

Azure Stack HCI Administrator - This role grants full access to your Azure Stack
HCI cluster and its resources. An Azure Stack HCI administrator can register the
cluster as well as assign Azure Stack HCI VM contributor and Azure Stack HCI VM
reader roles to other users. They can also create cluster-shared resources such as
logical networks, VM images, and storage paths.
Azure Stack HCI VM Contributor - This role grants permissions to perform all VM
actions such as start, stop, restart the VMs. An Azure Stack HCI VM Contributor can
create and delete VMs, as well as the resources and extensions attached to VMs.
An Azure Stack HCI VM Contributor can't register the cluster or assign roles to
other users, nor create cluster-shared resources such as logical networks, VM
images, and storage paths.
Azure Stack HCI VM Reader - This role grants permissions to only view the VMs. A
VM reader can't perform any actions on the VMs or VM resources and extensions.
Here's a table that describes the VM actions granted by each role for the VMs and the various VM resources. The VM resources refer to the resources required to create a VM and include virtual disks, network interfaces, VM images, logical networks, and storage paths:

| Builtin role | VMs | VM resources |
|---|---|---|
| Azure Stack HCI Administrator | Create, list, delete VMs. Start, stop, restart VMs. | Create, list, delete all VM resources including logical networks, VM images, and storage paths. |
| Azure Stack HCI VM Contributor | Create, list, delete VMs. Start, stop, restart VMs. | Create, list, delete all VM resources except logical networks, VM images, and storage paths. |
| Azure Stack HCI VM Reader | List all VMs. | List all VM resources. |

Prerequisites
Before you begin, make sure to complete the following prerequisites:

1. Make sure that you have access to an Azure Stack HCI cluster that is deployed and
registered. During the deployment, an Arc Resource Bridge and a custom location
are also created.

Go to the resource group in Azure. You can see the custom location and Azure Arc
Resource Bridge created for the Azure Stack HCI cluster. Make a note of the
subscription, resource group, and the custom location as you use these later in this
scenario.

2. Make sure that you have access to Azure subscription as an Owner or User Access
Administrator to assign roles to others.

Assign RBAC roles to users


You can assign RBAC roles to users via the Azure portal. Follow these steps to assign RBAC roles to users:
1. In the Azure Portal, search for the scope to grant access to, for example, search for
subscriptions, resource groups, or a specific resource. In this example, we use the
subscription in which the Azure Stack HCI cluster is deployed.

2. Go to your subscription and then go to Access control (IAM) > Role assignments.
From the top command bar, select + Add and then select Add role assignment.

If you don't have permissions to assign roles, the Add role assignment option is
disabled.

3. On the Role tab, select an RBAC role to assign and choose from one of the
following builtin roles:

Azure Stack HCI Administrator


Azure Stack HCI VM Contributor
Azure Stack HCI VM Reader

4. On the Members tab, select the User, group, or service principal. Also select a
member to assign the role.

5. Review the role and assign it.

6. Verify the role assignment. Go to Access control (IAM) > Check access > View my
access. You should see the role assignment.

For more information on role assignment, see Assign Azure roles using the Azure portal.
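
If you prefer the command line, the same assignment can also be made with the generic az role assignment create command. The following is a sketch; the assignee object ID, subscription ID, and resource group name are placeholders for your own values:

Azure CLI

az role assignment create --assignee "<user-or-group-object-id>" --role "Azure Stack HCI VM Contributor" --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"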

Next steps
Create a storage path for Azure Stack HCI VM.
Create storage path for Azure Stack HCI
Article • 01/24/2024

Applies to: Azure Stack HCI, version 23H2

This article describes how to create a storage path for VM images used on your Azure Stack HCI cluster. Storage paths are an Azure resource and provide a path to store VM configuration files, VM images, and VHDs on your cluster. You can create a storage path using the Azure CLI.

About storage path


When the Azure Stack HCI cluster is deployed, storage paths are created as part of the
deployment. The default option automatically selects a storage path with high
availability. You might however decide to use a specific storage path. In this case, ensure
that the specified storage path has sufficient storage space.

The storage paths on your Azure Stack HCI cluster should point to cluster shared
volumes that can be accessed by all the servers on your cluster. In order to be highly
available, we strongly recommend that you create storage paths under cluster shared
volumes.

The available space in the cluster shared volume determines the size of the store
available at the storage path. For example, if the storage path is
C:\ClusterStorage\UserStorage_1\Volume01 and the Volume01 is 4 TB, then the size of

the storage path is the available space (out of the 4 TB) on Volume01 .

Prerequisites
Before you begin, make sure to complete the following prerequisites:

1. Make sure that you have access to an Azure Stack HCI cluster that is deployed and
registered. During the deployment, an Arc Resource Bridge and a custom location
are also created.

Go to the resource group in Azure. You can see the custom location and Azure Arc
Resource Bridge created for the Azure Stack HCI cluster. Make a note of the
subscription, resource group, and the custom location as you use these later in this
scenario.
2. Make sure that a cluster shared volume exists on your Azure Stack HCI cluster that
is accessible from all the servers in the cluster. The storage path that you intend to
provide on a cluster shared volume should have sufficient space for storing VM
images. By default, cluster shared volumes are created during the deployment of
Azure Stack HCI cluster.

You can create storage paths only within cluster shared volumes that are available
in the cluster. For more information, see Create a cluster shared volume.

Create a storage path on your cluster


You can use the Azure CLI or the Azure portal to create a storage path on your cluster.

Azure CLI

You can use the stack-hci-vm storagepath cmdlets to create, show, and list the
storage paths on your Azure Stack HCI cluster.

Review parameters used to create a storage path


The following parameters are required when you create a storage path:

Parameter | Description
name | Name of the storage path that you create for your Azure Stack HCI cluster. Make sure to provide a name that follows the Rules for Azure resources. You can't rename a storage path after it's created.
resource-group | Name of the resource group where you create the storage path. For ease of management, we recommend that you use the same resource group as your Azure Stack HCI cluster.
subscription | Name or ID of the subscription where your Azure Stack HCI is deployed. This could also be another subscription you use for the storage path on your Azure Stack HCI cluster.
custom-location | Name or ID of the custom location associated with your Azure Stack HCI cluster where you're creating this storage path.
path | Path on a disk to create the storage path. The selected path should have sufficient space available for storing your VM images.

You could also use the following optional parameters:


Parameter | Description
location | Azure region, as listed by az account list-locations.

Create a storage path


Follow these steps on one of the servers of your Azure Stack HCI cluster to create a
storage path:

Sign in and set subscription


1. Connect to a server on your Azure Stack HCI system.

2. Sign in. Type:

Azure CLI

az login --use-device-code

3. Set your subscription.

Azure CLI

az account set --subscription <Subscription ID>

Set parameters
1. Set parameters for your storage path name, path, subscription, resource group,
custom location, and Azure region. Replace the < > with the appropriate values.

Azure CLI

$storagepathname="<Storage path name>"
$path="<Path on the disk to cluster shared volume>"
$subscription="<Subscription ID>"
$resource_group="<Resource group name>"
$customLocName="<Custom location of your Azure Stack HCI cluster>"
$customLocationID="/subscriptions/<Subscription ID>/resourceGroups/$resource_group/providers/Microsoft.ExtendedLocation/customLocations/$customLocName"
$location="<Azure region where the cluster is deployed>"
2. Create a storage path test-storagepath at the following path:
C:\ClusterStorage\test-storagepath . Run the following cmdlet:

Azure CLI

az stack-hci-vm storagepath create --resource-group $resource_group --custom-location $customLocationID --name $storagepathname --path $path

For more information on this cmdlet, see az stack-hci-vm storagepath create.

Here's a sample output:

Console

PS C:\windows\system32> $storagepathname="test-storagepath"
PS C:\windows\system32>
$path="C:\ClusterStorage\UserStorage_1\mypath"
PS C:\windows\system32> $subscription="<Subscription ID>"
PS C:\windows\system32> $resource_group="myhci-rg"
PS C:\windows\system32>
$customLocationID="/subscriptions/<Subscription
ID>/resourceGroups/myhci-
rg/providers/Microsoft.ExtendedLocation/customLocations/myhci-cl"

PS C:\windows\system32> az stack-hci-vm storagepath create --name


$storagepathname --resource-group $resource_group --custom-location
$customLocationID --path $path
Command group 'stack-hci-vm' is experimental and under development.
Reference and support levels: https://aka.ms/CLI_refstatus
{
"extendedLocation": {
"name": "/subscriptions/<Subscription ID>/resourceGroups/myhci-
rg/providers/Microsoft.ExtendedLocation/customLocations/myhci-cl",
"type": "CustomLocation"
},
"id": "/subscriptions/<Subscription ID>/resourceGroups/myhci-
rg/providers/Microsoft.AzureStackHCI/storagecontainers/test-
storagepath",
"location": "eastus",
"name": "test-storagepath",
"properties": {
"path": "C:\\ClusterStorage\\UserStorage_1\\mypath",
"provisioningState": "Succeeded",
"status": {
"availableSizeMB": 36761,
"containerSizeMB": 243097
}
},
"resourceGroup": "myhci-rg",
"systemData": {
"createdAt": "2023-10-06T04:45:30.458242+00:00",
"createdBy": "[email protected]",
"createdByType": "User",
"lastModifiedAt": "2023-10-06T04:45:57.386895+00:00",
"lastModifiedBy": "319f651f-7ddb-4fc6-9857-7aef9250bd05",
"lastModifiedByType": "Application"
},
"tags": null,
"type": "microsoft.azurestackhci/storagecontainers"
}

Once the storage path creation is complete, you're ready to create virtual machine
images.

Delete a storage path


If a storage path isn't required, you can delete it. To delete a storage path, first
remove the associated workloads and then run the following command to delete
the storage path:

Azure CLI

az stack-hci-vm storagepath delete --resource-group "<resource group name>" --name "<storagepath name>" --yes

To verify that a storage path is deleted, run the following command:

Azure CLI

az stack-hci-vm storagepath show --resource-group "<resource group name>" --name "<storagepath name>"

You receive a notification that the storage path doesn't exist.

To delete a volume, first remove the associated workloads, then remove the storage
paths, and then delete the volume. For more information, see Delete a volume.

If there's insufficient space at the storage path, then the VM provisioning using that
storage path would fail. You might need to expand the volume associated with the
storage path. For more information, see Expand the volume.
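
Before provisioning a VM against a specific storage path, you can check how much space is left on it. A minimal sketch, using the availableSizeMB field returned by the show command (the same field that appears in the sample output earlier in this article):

Azure CLI

az stack-hci-vm storagepath show --resource-group "<resource group name>" --name "<storagepath name>" --query "properties.status.availableSizeMB" -o tsv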

Next steps
Create a VM image using one of the following methods:
Using the image in Azure Marketplace.
Using an image in Azure Storage account.
Using an image in local file share.
Create Azure Stack HCI VM image using
Azure Marketplace images
Article • 02/29/2024

Applies to: Azure Stack HCI, version 23H2

This article describes how to create virtual machine (VM) images for your Azure Stack
HCI using source images from Azure Marketplace. You can create VM images using the
Azure portal or Azure CLI and then use these VM images to create Arc VMs on your
Azure Stack HCI.

Prerequisites
Before you begin, make sure that the following prerequisites are completed.

Azure CLI

Make sure to review and Complete the prerequisites.

You have access to an Azure Stack HCI system that is deployed and has an Arc
Resource Bridge and a custom location.

Go to the Overview > Server page in the Azure Stack HCI system resource.
Verify that Azure Arc shows as Connected. You should also see a custom
location and an Arc Resource Bridge for your cluster.


If using a client to connect to your Azure Stack HCI cluster, see Connect to
Azure Stack HCI via Azure CLI client.

Add VM image from Azure Marketplace


You create a VM image starting from an Azure Marketplace image and then use this
image to deploy VMs on your Azure Stack HCI cluster.

Azure CLI

Follow these steps to create a VM image using the Azure CLI.

Sign in and set subscription


1. Connect to a server on your Azure Stack HCI system.

2. Sign in. Type:

Azure CLI

az login --use-device-code

3. Set your subscription.

Azure CLI

az account set --subscription <Subscription ID>

Set some parameters


1. Set parameters for your subscription, resource group, location, OS type for the
image. Replace the parameters in < > with the appropriate values.

Azure CLI

$subscription = "<Subscription ID>"


$resource_group = "<Resource group>"
$customLocationName = "<Custom location name>"
$customLocationID
/subscriptions/<Subscription
ID>/resourcegroups/$resource_group/providers/microsoft.extendedloca
tion/customlocations/$customLocationName
$location = "<Location for your Azure Stack HCI cluster>"
$osType = "<OS of source image>"

The parameters are described in the following table:

Parameter | Description
subscription | Subscription associated with your Azure Stack HCI cluster.
resource-group | Resource group for the Azure Stack HCI cluster that you associate with this image.
location | Location for your Azure Stack HCI cluster. For example, this could be eastus.
os-type | Operating system associated with the source image. This can be Windows or Linux.

Here's a sample output:

PS C:\Users\azcli> $subscription = "<Subscription ID>"


PS C:\Users\azcli> $resource_group = "myhci-rg"
PS C:\Users\azcli> $customLocationName = "myhci-cl"
PS C:\Users\azcli> $location = "eastus"
PS C:\Users\azcli> $ostype = "Windows"

Create VM image from marketplace image


1. Select a custom location to deploy your VM image. The custom location
should correspond to the custom location for your Azure Stack HCI cluster. Get
the custom location ID for your Azure Stack HCI cluster. Run the following
command:

Azure CLI

$customLocationID=(az customlocation show --resource-group $resource_group --name "<custom location name for Azure Stack HCI cluster>" --query id -o tsv)

2. Create the VM image starting with a specified marketplace image. Make sure
to specify the offer, publisher, SKU, and version for the marketplace image. Use
the following table to find the available marketplace images and their attribute
values:

Name | Publisher | Offer | SKU | Version number
Windows 10 Enterprise multi-session + Microsoft 365 Apps, version 21H2 - Gen2 | microsoftwindowsdesktop | office-365 | win10-21h2-avd-m365-g2 | 19044.3570.231010
Windows 11 Enterprise multi-session, version 21H2 + Microsoft 365 Apps - Gen2 | microsoftwindowsdesktop | office-365 | win11-21h2-avd-m365 | 22000.2538.231010
Windows 10 Enterprise multi-session, version 21H2 - Gen2 | microsoftwindowsdesktop | windows-10 | win10-21h2-avd-g2 | 19044.3570.231001
Windows 11 Enterprise multi-session, version 21H2 - Gen2 | microsoftwindowsdesktop | windows-11 | win11-21h2-avd | 22000.2538.231001
Windows 11 Enterprise multi-session, version 22H2 - Gen2 | microsoftwindowsdesktop | windows-11 | win11-22h2-avd | 22621.2428.231001
Windows 11, version 22H2 Enterprise multi-session + Microsoft 365 Apps (Preview) - Gen2 | microsoftwindowsdesktop | windows11preview | win11-22h2-avd-m365 | 22621.382.220810
Windows Server 2022 Datacenter: Azure Edition - Gen2 | microsoftwindowsserver | windowsserver | 2022-datacenter-azure-edition | 20348.2031.231006
Windows Server 2022 Datacenter: Azure Edition Core - Gen2 | microsoftwindowsserver | windowsserver | 2022-datacenter-azure-edition-core | 20348.2031.231006
Windows Server 2022 Datacenter: Azure Edition Hotpatch - Gen2 | microsoftwindowsserver | windowsserver | 2022-datacenter-azure-edition-hotpatch | 20348.2031.231006

Azure CLI

az stack-hci-vm image create --subscription $subscription --resource-group $resource_group --custom-location $customLocationID --location $location --name "<VM image name>" --os-type $ostype --offer "windowsserver" --publisher "<Publisher name>" --sku "<SKU>" --version "<Version number>" --storage-path-id $storagepathid

A deployment job starts for the VM image.

In this example, the storage path was specified using the --storage-path-id
flag and that ensured that the workload data (including the VM, VM image,
non-OS data disk) is placed in the specified storage path.

If the flag is not specified, the workload data is automatically placed in a high
availability storage path.

The image deployment takes a few minutes to complete. The time taken to
download the image depends on the size of the Marketplace image and the
network bandwidth available for the download.

Here's a sample output:

PS C:\Users\azcli> $customLocationID=(az customlocation show --resource-


group $resource_group --name "myhci-cl" --query id -o tsv)
PS C:\Users\azcli> $customLocationID
/subscriptions/<Subscription ID>/resourcegroups/myhci-
rg/providers/microsoft.extendedlocation/customlocations/myhci-cl
PS C:\Users\azcli> az stack-hci-vm image create --subscription
$subscription --resource-group $resource_group --custom-location
$customLocationID --location $location --name "myhci-marketplaceimage" -
-os-type $ostype --offer "windowsserver" --publisher
"microsoftwindowsserver" --sku "2022-datacenter-azure-edition-core" --
version "20348.2031.231006" --storage-path-id $storagepathid
{
"extendedLocation": {
"name": "/subscriptions/<Subscription ID>/resourceGroups/myhci-
rg/providers/Microsoft.ExtendedLocation/customLocations/myhci-cl",
"type": "CustomLocation"
},
"id": "/subscriptions/<Subscription ID>/resourceGroups/myhci-
rg/providers/Microsoft.AzureStackHCI/marketplacegalleryimages/myhci-
marketplaceimage",
"location": "eastus",
"name": "myhci-marketplaceimage",
"properties": {
"identifier": {
"offer": "windowsserver",
"publisher": "microsoftwindowsserver",
"sku": "2022-datacenter-azure-edition-core"
},
"imagePath": null,
"osType": "Windows",
"provisioningState": "Succeeded",
"status": {
"downloadStatus": {
"downloadSizeInMB": 6750
},
"progressPercentage": 98,
"provisioningStatus": {
"operationId": "13be90e0-a780-45bf-a84a-
ae91b6e5e468*A380D53083FF6B0A3A157ED7DFD00D33F6B3D40D5559D11AEAED6AD68F7
F1A4A",
"status": "Succeeded"
}
},
"storagepathId": "/subscriptions/<Subscription
ID>/resourceGroups/myhci-
rg/providers/Microsoft.AzureStackHCI/storagecontainers/myhci-
storagepath",
"version": {
"name": "20348.2031.231006",
"properties": {
"storageProfile": {
"osDiskImage": {
"sizeInMB": 130050
}
}
}
}
},
"resourceGroup": "myhci-rg",
"systemData": {
"createdAt": "2023-10-27T21:43:15.920502+00:00",
"createdBy": "[email protected]",
"createdByType": "User",
"lastModifiedAt": "2023-10-27T22:06:15.092321+00:00",
"lastModifiedBy": "319f651f-7ddb-4fc6-9857-7aef9250bd05",
"lastModifiedByType": "Application"
},
"tags": null,
"type": "microsoft.azurestackhci/marketplacegalleryimages"
}

PS C:\Users\azcli>
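
Because the marketplace download can take a while, you might want to poll the image status rather than wait for the command to return. This is a minimal sketch, assuming the image name used above; progressPercentage and provisioningStatus are the same fields shown in the sample output.

Azure CLI

az stack-hci-vm image show --name "myhci-marketplaceimage" --resource-group $resource_group --query "properties.status.{progress:progressPercentage, state:provisioningStatus.status}" -o table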

List VM images
You need to view the list of VM images to choose an image to manage.

Azure CLI

Follow these steps to list VM images using the Azure CLI.


1. Run PowerShell as an administrator.

2. Set some parameters.

Azure CLI

$subscription = "<Subscription ID associated with your cluster>"


$resource_group = "<Resource group name for your cluster>"

3. List all the VM images associated with your cluster. Run the following
command:

Azure CLI

az stack-hci-vm image list --subscription $subscription --resource-group $resource_group

Depending on the command used, a corresponding set of images associated with the
Azure Stack HCI cluster is listed.

If you specify just the subscription, the command lists all the images in
the subscription.
If you specify both the subscription and the resource group, the
command lists all the images in the resource group.

These images include:

VM images from marketplace images.


Custom images that reside in your Azure Storage account or are in a local
share on your cluster or a client connected to the cluster.

Here's a sample output.

PS C:\Users\azcli> az stack-hci-vm image list --subscription "


<Subscription ID>" --resource-group "myhci-rg"
Command group 'stack-hci-vm' is experimental and under development.
Reference and support levels: https://aka.ms/CLI_refstatus
[
{
"extendedLocation": {
"name": "/subscriptions/<Subscription ID>/resourcegroups/myhci-
rg/providers/microsoft.extendedlocation/customlocations/myhci-cl",
"type": "CustomLocation"
},
"id": "/subscriptions/<Subscription ID>/resourceGroups/myhci-
rg/providers/microsoft.azurestackhci/marketplacegalleryimages/w
inServer2022Az-01",
"location": "eastus",
"name": "winServer2022Az-01",
"properties": {
"hyperVGeneration": "V2",
"identifier": {
"offer": "windowsserver",
"publisher": "microsoftwindowsserver",
"sku": "2022-datacenter-azure-edition-core"
},
"imagePath": null,
"osType": "Windows",
"provisioningState": "Succeeded",
"status": {
"downloadStatus": {
"downloadSizeInMB": 6710
},
"progressPercentage": 100,
"provisioningStatus": {
"operationId": "19742d69-4a00-4086-8f17-
4dc1f7ee6681*E1E9889F0D1840B93150BD74D428EAE483CB67B0904F9A198C161AD471F
670ED",
"status": "Succeeded"
}
},
"storagepathId": null,
"version": {
"name": "20348.2031.231006",
"properties": {
"storageProfile": {
"osDiskImage": {
"sizeInMB": 130050
}
}
}
}
},
"resourceGroup": "myhci-rg",
"systemData": {
"createdAt": "2023-10-30T21:44:53.020512+00:00",
"createdBy": "[email protected]",
"createdByType": "User",
"lastModifiedAt": "2023-10-30T22:08:25.495995+00:00",
"lastModifiedBy": "319f651f-7ddb-4fc6-9857-7aef9250bd05",
"lastModifiedByType": "Application"
},
"tags": {},
"type": "microsoft.azurestackhci/marketplacegalleryimages"
}
]
PS C:\Users\azcli>
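
If the full JSON output is more than you need, you can trim the list to a few columns with a JMESPath query. A minimal sketch; the field names match the properties shown in the sample output above.

Azure CLI

az stack-hci-vm image list --subscription $subscription --resource-group $resource_group --query "[].{name:name, osType:properties.osType, state:properties.provisioningState}" -o table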
View VM image properties
You might want to view the properties of VM images before you use the image to create
a VM. Follow these steps to view the image properties:

Azure CLI

Follow these steps to use Azure CLI to view properties of an image:

1. Run PowerShell as an administrator.

2. Set the following parameters.

Azure CLI

$subscription = "<Subscription ID>"


$resource_group = "<Cluster resource group>"
$mktplaceImage = "<Marketplace image name>"

3. You can view image properties in two different ways: specify ID or specify
name and resource group. Take the following steps when specifying
Marketplace image ID:

a. Set the following parameter.

Azure CLI

$mktplaceImageID = "/subscriptions/<Subscription
ID>/resourceGroups/myhci-
rg/providers/Microsoft.AzureStackHCI/galleryimages/myhci-
marketplaceimage"

b. Run the following command to view the properties.

az stack-hci-vm image show --ids $mktplaceImageID

Here's a sample output for this command:

PS C:\Users\azcli> az stack-hci-vm image show --ids


$mktplaceImageID
Command group 'stack-hci-vm' is experimental and under
development. Reference and support levels:
https://aka.ms/CLI_refstatus
{
"extendedLocation": {
"name": "/subscriptions/<Subscription
ID>/resourcegroups/myhci-
rg/providers/microsoft.extendedlocation/customlocations/myhci-
cl",
"type": "CustomLocation"
},
"id": "/subscriptions/<Subscription ID>/resourceGroups/myhci-
rg/providers/Microsoft.AzureStackHCI/galleryimages/myhci-
marketplaceimage",
"location": "eastus",
"name": "myhci-marketplaceimage",
"properties": {
"containerName": null,
"hyperVGeneration": null,
"identifier": null,
"imagePath": null,
"osType": "Windows",
"provisioningState": "Succeeded",
"status": null,
"version": null
},
"resourceGroup": "myhci-rg",
"systemData": {
"createdAt": "2022-08-05T20:52:38.579764+00:00",
"createdBy": "[email protected]",
"createdByType": "User",
"lastModifiedAt": "2022-08-05T20:52:38.579764+00:00",
"lastModifiedBy": "[email protected]",
"lastModifiedByType": "User"
},
"tags": null,
"type": "microsoft.azurestackhci/galleryimages"
}
PS C:\Users\azcli>
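
Alternatively, to view the same properties by specifying the name and resource group instead of the ID, you can run the show command with the parameters set earlier. A minimal sketch:

Azure CLI

az stack-hci-vm image show --name $mktplaceImage --resource-group $resource_group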

Update VM image
When a new updated image is available in Azure Marketplace, the VM images on your
Azure Stack HCI cluster become stale and should be updated. The update operation isn't
an in-place update of the image. Rather you can see for which VM images an updated
image is available, and select images to update. After you update, the create VM image
operation uses the new updated image.

To update a VM image, use the following steps in Azure portal.

1. To see if an update is available, select a VM image from the list view.


In the Overview blade, you see a banner that shows the new VM image available
for download, if one is available. To update to the new image, select the arrow icon.

2. Review image details and then select Review and create. By default, the new image
uses the same resource group and instance details as the previous image.

The name for the new image is incremented based on the name of the previous
image. For example, an existing image named winServer2022-01 will have an
updated image named winServer2022-02.

3. To complete the operation, select Create.

After the new VM image is created, create a VM using the new image and verify
that the VM works properly. After verification, you can delete the old VM image.

7 Note
In this release, you can't delete a VM image if the VM associated with that
image is running. Stop the VM and then delete the VM image.

Delete VM image
You might want to delete a VM image if the download fails for some reason or if the
image is no longer needed. Follow these steps to delete the VM images.

Azure CLI

1. Run PowerShell as an administrator.

2. Set the following parameters.

Azure CLI

$subscription = "<Subscription ID>"


$resource_group = "<Cluster resource group>"
$galleryImageName = "<Gallery image name>"

3. Remove an existing VM image. Run the following command:

Azure CLI

az stack-hci-vm image delete --subscription $subscription --resource-group $resource_group --name $mktplaceImage --yes

You can delete an image in two ways:

Specify the name and resource group.
Specify the ID.

After you've deleted an image, you can check that the image is removed. Here's a
sample output when the image was deleted by specifying the name and the
resource-group.

PS C:\Users\azcli> $subscription = "<Subscription ID>"


PS C:\Users\azcli> $resource_group = "myhci-rg"
PS C:\Users\azcli> $mktplaceImage = "myhci-marketplaceimage"
PS C:\Users\azcli> az stack-hci-vm image delete --name $mktplaceImage --
resource-group $resource_group
Command group 'stack-hci-vm' is experimental and under development.
Reference and support levels: https://aka.ms/CLI_refstatus
Are you sure you want to perform this operation? (y/n): y
PS C:\Users\azcli> az stack-hci-vm image show --name $mktplaceImage --
resource-group $resource_group
Command group 'stack-hci-vm' is experimental and under development.
Reference and support levels: https://aka.ms/CLI_refstatus
ResourceNotFound: The Resource
'Microsoft.AzureStackHCI/marketplacegalleryimages/myhci-
marketplaceimage' under resource group 'myhci-rg' was not found. For
more details please go to https://aka.ms/ARMResourceNotFoundFix
PS C:\Users\azcli>

Next steps
Create logical networks


Create Azure Stack HCI VM image using
image in Azure Storage account
Article • 03/01/2024

Applies to: Azure Stack HCI, version 23H2

This article describes how to create virtual machine (VM) images for your Azure Stack
HCI using source images from an Azure Storage account. You can create VM images using
the Azure portal or Azure CLI and then use these VM images to create Arc VMs on your
Azure Stack HCI.

Prerequisites
Before you begin, make sure that the following prerequisites are completed.

Azure CLI

Make sure to review and Complete the prerequisites.

You have access to an Azure Stack HCI system that is deployed and has an Arc
Resource Bridge and a custom location.

Go to the Overview > Server page in the Azure Stack HCI system resource.
Verify that Azure Arc shows as Connected. You should also see a custom
location and an Arc Resource Bridge for your cluster.


For custom images in Azure Storage account, you have the following extra
prerequisites:
You should have a VHD loaded in your Azure Storage account. See how to
Upload a VHD image in your Azure Storage account.
If using a VHDX:
The VHDX image must be Gen 2 type and secure boot enabled.
The VHDX image must be prepared using sysprep /generalize
/shutdown /oobe . For more information, see Sysprep command-line

options.

If using a client to connect to your Azure Stack HCI cluster, see Connect to
Azure Stack HCI via Azure CLI client.

Make sure that you have Storage Blob Data Contributor role on the Storage
account that you use for the image. For more information, see Assign an
Azure role for access to blob data.

Make sure that you're uploading your VHD or VHDX as a page blob image
into the Storage account. Only page blob images are supported to create VM
images via the Storage account.
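
If you haven't uploaded the VHD or VHDX to the Storage account yet, you can do it from the Azure CLI as well as the portal. This is a minimal sketch, assuming you're signed in with an account that has the Storage Blob Data Contributor role; the account, container, and file names are placeholders. The --type page flag is what makes the blob a page blob.

Azure CLI

az storage blob upload --account-name "<storage account name>" --container-name "<container name>" --name "<image name>.vhdx" --file "<local path to VHDX>" --type page --auth-mode login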

Add VM image from Azure Storage account


You create a VM image starting from an image in Azure Storage account and then use
this image to deploy VMs on your Azure Stack HCI cluster.

Azure CLI

Follow these steps to create a VM image using the Azure CLI.

Sign in and set subscription


1. Connect to a server on your Azure Stack HCI system.

2. Sign in. Type:

Azure CLI

az login --use-device-code

3. Set your subscription.


Azure CLI

az account set --subscription <Subscription ID>

Set some parameters


1. Set your subscription, resource group, location, path to the image in your Azure
Storage account, and OS type for the image. Replace the parameters in < > with the
appropriate values.

Azure CLI

$subscription = "<Subscription ID>"


$resource_group = "<Resource group>"
$location = "<Location for your Azure Stack HCI cluster>"
$osType = "<OS of source image>"
$imageName = "<VM image name>"
$imageSourcePath = "<path to the source image in the Storage account>"

The parameters are described in the following table:

Parameter | Description
subscription | Subscription ID associated with your Azure Stack HCI cluster.
resource_group | Resource group for the Azure Stack HCI cluster that you associate with this image.
location | Location for your Azure Stack HCI cluster. For example, this could be eastus.
imageName | Name of the VM image created from the source image in your Storage account. Note: Azure rejects all the names that contain the keyword Windows.
imageSourcePath | Path to the Blob SAS URL of the image in the Storage account. For more information, see instructions on how to Get a blob SAS URL of the image in the Storage account. Note: Make sure that all the ampersands in the path are escaped with double quotes and the entire path string is wrapped within single quotes.
osType | Operating system associated with the source image. This can be Windows or Linux.
Here's a sample output:

PS C:\Users\azcli> $subscription = "<Subscription ID>"


PS C:\Users\azcli> $resource_group = "myhci-rg"
PS C:\Users\azcli> $location = "eastus"
PS C:\Users\azcli> $osType = "Windows"
PS C:\Users\azcli> $imageName = "myhci-storacctimage"
PS C:\Users\azcli> $imageSourcePath =
'https://vmimagevhdsa1.blob.core.windows.net/vhdcontainer/Windows_Inside
rPreview_ServerStandard_en-us_VHDX_25131.vhdx?sp=r"&"st=2022-08-
05T18:41:41Z"&"se=2022-08-06T02:41:41Z"&"spr=https"&"sv=2021-06-
08"&"sr=b"&"sig=X7A98cQm%2FmNRaHmTbs9b4OWVv%2F9Q%2FJkWDBHVPyAc8jo%3D'

Create VM image from image in Azure Storage account

1. Select a custom location to deploy your VM image. The custom location
should correspond to the custom location for your Azure Stack HCI cluster.
Get the custom location ID for your Azure Stack HCI cluster. Run the following
command:

Azure CLI

$customLocationID=(az customlocation show --resource-group $resource_group --name "<custom location name for HCI cluster>" --query id -o tsv)

2. Create the VM image starting with the image in your Azure Storage account.

Azure CLI

az stack-hci-vm image create --subscription $subscription --resource-group $resource_group --custom-location $customLocationID --location $location --name $imageName --os-type $osType --image-path $imageSourcePath --storage-path-id $storagepathid

A deployment job starts for the VM image.

In this example, the storage path was specified using the --storage-path-id
flag and that ensured that the workload data (including the VM, VM image,
non-OS data disk) is placed in the specified storage path.
If the flag is not specified, the workload data is automatically placed in a high
availability storage path.

The image deployment takes a few minutes to complete. The time taken to
download the image depends on the size of the image in Azure Storage account
and the network bandwidth available for the download.

Here's a sample output:

PS > $customLocationID=(az customlocation show --resource-group


$resource_group --name "myhci-cl" --query id -o tsv)
PS C:\Users\azcli> az stack-hci-vm image create --subscription
$subscription --resource-group $resource_Group --custom-location
$customLocationID --location $location --name $imageName --os-type
$osType --image-path $imageSourcePath --storage-path-id $storagepathid
Command group 'stack-hci-vm' is experimental and under development.
Reference and support levels: https://aka.ms/CLI_refstatus
{
"extendedLocation": {
"name": "/subscriptions/<Subscription ID>/resourceGroups/myhci-
rg/providers/Microsoft.ExtendedLocation/customLocations/myhci-cl",
"type": "CustomLocation"
},
"id": "/subscriptions/<Subscription ID>/resourceGroups/myhci-
rg/providers/Microsoft.AzureStackHCI/galleryimages/myhci-storacctimage",
"location": "eastus",
"name": "windos",
"properties": {
"identifier": null,
"imagePath": null,
"osType": "Windows",
"provisioningState": "Succeeded",
"status": {
"downloadStatus": {
"downloadSizeInMB": 7876
},
"progressPercentage": 100,
"provisioningStatus": {
"operationId": "cdc9c9a8-03a1-4fb6-8738-
7a8550c87fd1*31CE1EA001C4B3E38EE29B78ED1FD47CCCECF78B4CEA9E9A85C0BAEA5F6
D80CA",
"status": "Succeeded"
}
},
"storagepathId": "/subscriptions/<Subscription
ID>/resourceGroups/myhci-
rg/providers/Microsoft.AzureStackHCI/storagecontainers/myhci-
storagepath",
"version": null
},
"resourceGroup": "myhci-rg",
"systemData": {
"createdAt": "2023-11-03T20:17:10.971662+00:00",
"createdBy": "[email protected]",
"createdByType": "User",
"lastModifiedAt": "2023-11-03T21:08:01.190475+00:00",
"lastModifiedBy": "319f651f-7ddb-4fc6-9857-7aef9250bd05",
"lastModifiedByType": "Application"
},
"tags": null,
"type": "microsoft.azurestackhci/galleryimages"
}
PS C:\Users\azcli>

List VM images
You need to view the list of VM images to choose an image to manage.

Azure CLI

Follow these steps to list VM images using the Azure CLI.

1. Run PowerShell as an administrator.

2. Set some parameters.

Azure CLI

$subscription = "<Subscription ID associated with your cluster>"


$resource_group = "<Resource group name for your cluster>"

3. List all the VM images associated with your cluster. Run the following
command:

Azure CLI

az stack-hci-vm image list --subscription $subscription --resource-group $resource_group

Depending on the command used, a corresponding set of images associated with the
Azure Stack HCI cluster is listed.

If you specify just the subscription, the command lists all the images in
the subscription.
If you specify both the subscription and the resource group, the
command lists all the images in the resource group.

These images include:

VM images from marketplace images.


Custom images that reside in your Azure Storage account or are in a
local share on your cluster or a client connected to the cluster.

Here's a sample output.

PS C:\Users\azcli> az stack-hci-vm image list --subscription "


<Subscription ID>" --resource-group "myhci-rg"
Command group 'stack-hci-vm' is experimental and under development.
Reference and support levels: https://aka.ms/CLI_refstatus
[
{
"extendedLocation": {
"name": "/subscriptions/<Subscription ID>/resourcegroups/myhci-
rg/providers/microsoft.extendedlocation/customlocations/myhci-cl",
"type": "CustomLocation"
},
"id": "/subscriptions/<Subscription ID>/resourceGroups/myhci-
rg/providers/microsoft.azurestackhci/marketplacegalleryimages/w
inServer2022Az-01",
"location": "eastus",
"name": "winServer2022Az-01",
"properties": {
"hyperVGeneration": "V2",
"identifier": {
"offer": "windowsserver",
"publisher": "microsoftwindowsserver",
"sku": "2022-datacenter-azure-edition-core"
},
"imagePath": null,
"osType": "Windows",
"provisioningState": "Succeeded",
"status": {
"downloadStatus": {
"downloadSizeInMB": 6710
},
"progressPercentage": 100,
"provisioningStatus": {
"operationId": "19742d69-4a00-4086-8f17-
4dc1f7ee6681*E1E9889F0D1840B93150BD74D428EAE483CB67B0904F9A198C161AD471F
670ED",
"status": "Succeeded"
}
},
"storagepathId": null,
"version": {
"name": "20348.2031.231006",
"properties": {
"storageProfile": {
"osDiskImage": {
"sizeInMB": 130050
}
}
}
}
},
"resourceGroup": "myhci-rg",
"systemData": {
"createdAt": "2023-10-30T21:44:53.020512+00:00",
"createdBy": "[email protected]",
"createdByType": "User",
"lastModifiedAt": "2023-10-30T22:08:25.495995+00:00",
"lastModifiedBy": "319f651f-7ddb-4fc6-9857-7aef9250bd05",
"lastModifiedByType": "Application"
},
"tags": {},
"type": "microsoft.azurestackhci/marketplacegalleryimages"
}
]
PS C:\Users\azcli>

View VM image properties


You might want to view the properties of VM images before you use the image to create
a VM. Follow these steps to view the image properties:

Azure CLI

Follow these steps to use Azure CLI to view properties of an image:

1. Run PowerShell as an administrator.

2. Set the following parameters.

Azure CLI

$subscription = "<Subscription ID>"


$resource_group = "<Cluster resource group>"
$mktplaceImage = "<Marketplace image name>"

3. You can view image properties in two different ways: specify ID or specify
name and resource group. Take the following steps when specifying
Marketplace image ID:

a. Set the following parameter.

Azure CLI

$mktplaceImageID = "/subscriptions/<Subscription
ID>/resourceGroups/myhci-
rg/providers/Microsoft.AzureStackHCI/galleryimages/myhci-
marketplaceimage"

b. Run the following command to view the properties.

az stack-hci-vm image show --ids $mktplaceImageID

Here's a sample output for this command:

PS C:\Users\azcli> az stack-hci-vm image show --ids


$mktplaceImageID
Command group 'stack-hci-vm' is experimental and under
development. Reference and support levels:
https://aka.ms/CLI_refstatus
{
"extendedLocation": {
"name": "/subscriptions/<Subscription
ID>/resourcegroups/myhci-
rg/providers/microsoft.extendedlocation/customlocations/myhci-
cl",
"type": "CustomLocation"
},
"id": "/subscriptions/<Subscription ID>/resourceGroups/myhci-
rg/providers/Microsoft.AzureStackHCI/galleryimages/myhci-
marketplaceimage",
"location": "eastus",
"name": "myhci-marketplaceimage",
"properties": {
"containerName": null,
"hyperVGeneration": null,
"identifier": null,
"imagePath": null,
"osType": "Windows",
"provisioningState": "Succeeded",
"status": null,
"version": null
},
"resourceGroup": "myhci-rg",
"systemData": {
"createdAt": "2022-08-05T20:52:38.579764+00:00",
"createdBy": "[email protected]",
"createdByType": "User",
"lastModifiedAt": "2022-08-05T20:52:38.579764+00:00",
"lastModifiedBy": "[email protected]",
"lastModifiedByType": "User"
},
"tags": null,
"type": "microsoft.azurestackhci/galleryimages"
}
PS C:\Users\azcli>

Delete VM image
You might want to delete a VM image if the download fails for some reason or if the
image is no longer needed. Follow these steps to delete the VM images.

Azure CLI

1. Run PowerShell as an administrator.

2. Set the following parameters.

Azure CLI

$subscription = "<Subscription ID>"


$resource_group = "<Cluster resource group>"
$galleryImageName = "<Gallery image name>"

3. Remove an existing VM image. Run the following command:

Azure CLI

az stack-hci-vm image delete --subscription $subscription --resource-group $resource_group --name $mktplaceImage --yes

You can delete an image in two ways:

Specify the name and resource group.
Specify the ID.

After you've deleted an image, you can check that the image is removed. Here's a
sample output when the image was deleted by specifying the name and the
resource-group.
PS C:\Users\azcli> $subscription = "<Subscription ID>"
PS C:\Users\azcli> $resource_group = "myhci-rg"
PS C:\Users\azcli> $mktplaceImage = "myhci-marketplaceimage"
PS C:\Users\azcli> az stack-hci-vm image delete --name $mktplaceImage --
resource-group $resource_group
Command group 'stack-hci-vm' is experimental and under development.
Reference and support levels: https://aka.ms/CLI_refstatus
Are you sure you want to perform this operation? (y/n): y
PS C:\Users\azcli> az stack-hci-vm image show --name $mktplaceImage --
resource-group $resource_group
Command group 'stack-hci-vm' is experimental and under development.
Reference and support levels: https://aka.ms/CLI_refstatus
ResourceNotFound: The Resource
'Microsoft.AzureStackHCI/marketplacegalleryimages/myhci-
marketplaceimage' under resource group 'myhci-rg' was not found. For
more details please go to https://aka.ms/ARMResourceNotFoundFix
PS C:\Users\azcli>

Next steps
Create logical networks


Create Azure Stack HCI VM image using
images in a local share
Article • 02/27/2024

Applies to: Azure Stack HCI, version 23H2

This article describes how to create virtual machine (VM) images for your Azure Stack
HCI using source images from a local share on your cluster. You can create VM images
using the Azure portal or Azure CLI and then use these VM images to create Arc VMs on
your Azure Stack HCI.

Prerequisites
Before you begin, make sure that the following prerequisites are completed.

Azure CLI

Make sure to review and Complete the prerequisites.

You have access to an Azure Stack HCI system that is deployed and has an Arc
Resource Bridge and a custom location.

Go to the Overview > Server page in the Azure Stack HCI system resource.
Verify that Azure Arc shows as Connected. You should also see a custom
location and an Arc Resource Bridge for your cluster.


For custom images in a local share on your Azure Stack HCI, you'll have the
following extra prerequisites:
You should have a VHD/VHDX uploaded to a local share on your Azure
Stack HCI cluster.
The VHDX image must be Gen 2 type and secure boot enabled.
The VHDX image must be prepared using sysprep /generalize /shutdown
/oobe . For more information, see Sysprep command-line options.

The image should reside on a Cluster Shared Volume available to all the
servers in the cluster. Both the Windows and Linux operating systems are
supported.

If using a client to connect to your Azure Stack HCI cluster, see Connect to
Azure Stack HCI via Azure CLI client.

Add VM image from image in local share


You create a VM image starting from an image in a local share of your cluster and then
use this image to deploy VMs on your Azure Stack HCI cluster.

Azure CLI

Follow these steps to create a VM image using the Azure CLI.

Sign in and set subscription


1. Connect to a server on your Azure Stack HCI system.

2. Sign in. Type:

Azure CLI

az login --use-device-code

3. Set your subscription.

Azure CLI

az account set --subscription <Subscription ID>

Set some parameters


1. Set your subscription, resource group, location, image name, source image path, and
OS type for the image. Replace the parameters in < > with the appropriate values.

Azure CLI

$subscription = "<Subscription ID>"
$resource_group = "<Resource group>"
$location = "<Location for your Azure Stack HCI cluster>"
$imageName = "<VM image name>"
$imageSourcePath = "<Path to the source image>"
$osType = "<OS of source image>"

The parameters are described in the following table:

Parameter | Description
subscription | Subscription ID associated with your Azure Stack HCI cluster.
resource_group | Resource group for the Azure Stack HCI cluster that you associate with this image.
location | Location for your Azure Stack HCI cluster. For example, this could be eastus.
imageName | Name of the VM image created starting with the image in your local share. Note: Azure rejects all the names that contain the keyword Windows.
imageSourcePath | Path to the source gallery image (VHDX only) on your cluster. For example, C:\OSImages\winos.vhdx. See the prerequisites of the source image.
osType | Operating system associated with the source image. This can be Windows or Linux.

Here's a sample output:

PS C:\Users\azcli> $subscription = "<Subscription ID>"


PS C:\Users\azcli> $resource_group = "myhci-rg"
PS C:\Users\azcli> $location = "eastus"
PS C:\Users\azcli> $osType = "Windows"
PS C:\ClusterStorage\Volume1> $imageName = "myhci-localimage"
PS C:\ClusterStorage\Volume1> $imageSourcePath =
"C:\ClusterStorage\Volume1\Windows_K8s_17763.2928.220505-
1621_202205101158.vhdx"

Create VM image from image in local share


1. Select a custom location to deploy your VM image. The custom location
should correspond to the custom location for your Azure Stack HCI cluster.
Get the custom location ID for your Azure Stack HCI cluster. Run the following
command:

Azure CLI

$customLocationID=(az customlocation show --resource-group $resource_group --name "<custom location name for HCI cluster>" --query id -o tsv)

2. Create the VM image starting with a specified image in a local share on your
Azure Stack HCI cluster.

Azure CLI

az stack-hci-vm image create --subscription $subscription --resource-group $resource_group --custom-location $customLocationID --location $location --image-path $imageSourcePath --name $imageName --os-type $osType --storage-path-id $storagepathid

A deployment job starts for the VM image.

In this example, the storage path was specified using the --storage-path-id
flag and that ensured that the workload data (including the VM, VM image,
non-OS data disk) is placed in the specified storage path.

If the flag is not specified, the workload data is automatically placed in a high
availability storage path.

The image deployment takes a few minutes to complete. The time taken to
download the image depends on the size of the image in the local share and the
network bandwidth available for the download.

Here's a sample output:


PS C:\Users\azcli> $customLocationID=(az customlocation show --resource-
group $resource_group --name "myhci-cl" --query id -o tsv)
PS C:\Users\azcli> az stack-hci-vm image create --subscription
$subscription --resource-group $resource_group --custom-location
$customLocationID --location $location --image-path $ImageSourcePath --
name $ImageName --os-type $osType --storage-path-id $storagepathid
type="CustomLocation" --location $Location --name $mktplaceImage --os-
type $osType --image-path $mktImageSourcePath
Command group 'azurestackhci' is experimental and under development.
Reference and support levels: https://aka.ms/CLI_refstatus
{
"extendedLocation": {
"name": "/subscriptions/<Subscription ID>/resourceGroups/myhci-
rg/providers/Microsoft.ExtendedLocation/customLocations/myhci-cl",
"type": "CustomLocation"
},
"id": "/subscriptions/<Subscription ID>/resourceGroups/myhci-
rg/providers/Microsoft.AzureStackHCI/galleryimages/myhci-localimage",
"location": "eastus",
"name": "myhci-localimage",
"properties": {
"identifier": null,
"imagePath": null,
"osType": "Windows",
"provisioningState": "Succeeded",
"status": {
"downloadStatus": {},
"progressPercentage": 100,
"provisioningStatus": {
"operationId": "82f58893-b252-43db-97a9-
258f6f7831d9*43114797B86E6D2B28C4B52B02302C81C889DABDD9D890F993665E223A5
947C3",
"status": "Succeeded"
}
},
"storagepathId": "/subscriptions/<Subscription
ID>/resourceGroups/myhci-
rg/providers/Microsoft.AzureStackHCI/storagecontainers/myhci-
storagepath",
"version": {
"name": null,
"properties": {
"storageProfile": {
"osDiskImage": {}
}
}
}
},
"resourceGroup": "myhci-rg",
"systemData": {
"createdAt": "2023-11-02T06:15:10.450908+00:00",
"createdBy": "[email protected]",
"createdByType": "User",
"lastModifiedAt": "2023-11-02T06:15:56.689323+00:00",
"lastModifiedBy": "319f651f-7ddb-4fc6-9857-7aef9250bd05",
"lastModifiedByType": "Application"
},
"tags": null,
"type": "microsoft.azurestackhci/galleryimages"
}

PS C:\Users\azcli>
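
Once the command returns, you can confirm that the image created from the local share finished provisioning before you use it to create VMs. A minimal sketch, reusing the variables set earlier in this article:

Azure CLI

az stack-hci-vm image show --name $imageName --resource-group $resource_group --query "properties.provisioningState" -o tsv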

List VM images
You need to view the list of VM images to choose an image to manage.

Azure CLI

Follow these steps to list VM images using the Azure CLI.

1. Run PowerShell as an administrator.

2. Set some parameters.

Azure CLI

$subscription = "<Subscription ID associated with your cluster>"


$resource_group = "<Resource group name for your cluster>"

3. List all the VM images associated with your cluster. Run the following
command:

Azure CLI

az stack-hci-vm image list --subscription $subscription --resource-group $resource_group

Depending on the command used, a corresponding set of images associated with the
Azure Stack HCI cluster is listed.

If you specify just the subscription, the command lists all the images in
the subscription.
If you specify both the subscription and the resource group, the
command lists all the images in the resource group.

These images include:


VM images from marketplace images.
Custom images that reside in your Azure Storage account or are in a
local share on your cluster or a client connected to the cluster.

Here's a sample output.

PS C:\Users\azcli> az stack-hci-vm image list --subscription "


<Subscription ID>" --resource-group "myhci-rg"
Command group 'stack-hci-vm' is experimental and under development.
Reference and support levels: https://aka.ms/CLI_refstatus
[
{
"extendedLocation": {
"name": "/subscriptions/<Subscription ID>/resourcegroups/myhci-
rg/providers/microsoft.extendedlocation/customlocations/myhci-cl",
"type": "CustomLocation"
},
"id": "/subscriptions/<Subscription ID>/resourceGroups/myhci-
rg/providers/microsoft.azurestackhci/marketplacegalleryimages/w
inServer2022Az-01",
"location": "eastus",
"name": "winServer2022Az-01",
"properties": {
"hyperVGeneration": "V2",
"identifier": {
"offer": "windowsserver",
"publisher": "microsoftwindowsserver",
"sku": "2022-datacenter-azure-edition-core"
},
"imagePath": null,
"osType": "Windows",
"provisioningState": "Succeeded",
"status": {
"downloadStatus": {
"downloadSizeInMB": 6710
},
"progressPercentage": 100,
"provisioningStatus": {
"operationId": "19742d69-4a00-4086-8f17-
4dc1f7ee6681*E1E9889F0D1840B93150BD74D428EAE483CB67B0904F9A198C161AD471F
670ED",
"status": "Succeeded"
}
},
"storagepathId": null,
"version": {
"name": "20348.2031.231006",
"properties": {
"storageProfile": {
"osDiskImage": {
"sizeInMB": 130050
}
}
}
}
},
"resourceGroup": "myhci-rg",
"systemData": {
"createdAt": "2023-10-30T21:44:53.020512+00:00",
"createdBy": "[email protected]",
"createdByType": "User",
"lastModifiedAt": "2023-10-30T22:08:25.495995+00:00",
"lastModifiedBy": "319f651f-7ddb-4fc6-9857-7aef9250bd05",
"lastModifiedByType": "Application"
},
"tags": {},
"type": "microsoft.azurestackhci/marketplacegalleryimages"
}
]
PS C:\Users\azcli>

View VM image properties


You might want to view the properties of VM images before you use the image to create
a VM. Follow these steps to view the image properties:

Azure CLI

Follow these steps to use Azure CLI to view properties of an image:

1. Run PowerShell as an administrator.

2. Set the following parameters.

Azure CLI

$subscription = "<Subscription ID>"


$resource_group = "<Cluster resource group>"
$mktplaceImage = "<Marketplace image name>"

3. You can view image properties in two different ways: specify ID or specify
name and resource group. Take the following steps when specifying
Marketplace image ID:

a. Set the following parameter.

Azure CLI
$mktplaceImageID = "/subscriptions/<Subscription
ID>/resourceGroups/myhci-
rg/providers/Microsoft.AzureStackHCI/galleryimages/myhci-
marketplaceimage"

b. Run the following command to view the properties.

az stack-hci-vm image show --ids $mktplaceImageID

Here's a sample output for this command:

PS C:\Users\azcli> az stack-hci-vm image show --ids


$mktplaceImageID
Command group 'stack-hci-vm' is experimental and under
development. Reference and support levels:
https://aka.ms/CLI_refstatus
{
"extendedLocation": {
"name": "/subscriptions/<Subscription
ID>/resourcegroups/myhci-
rg/providers/microsoft.extendedlocation/customlocations/myhci-
cl",
"type": "CustomLocation"
},
"id": "/subscriptions/<Subscription ID>/resourceGroups/myhci-
rg/providers/Microsoft.AzureStackHCI/galleryimages/myhci-
marketplaceimage",
"location": "eastus",
"name": "myhci-marketplaceimage",
"properties": {
"containerName": null,
"hyperVGeneration": null,
"identifier": null,
"imagePath": null,
"osType": "Windows",
"provisioningState": "Succeeded",
"status": null,
"version": null
},
"resourceGroup": "myhci-rg",
"systemData": {
"createdAt": "2022-08-05T20:52:38.579764+00:00",
"createdBy": "[email protected]",
"createdByType": "User",
"lastModifiedAt": "2022-08-05T20:52:38.579764+00:00",
"lastModifiedBy": "[email protected]",
"lastModifiedByType": "User"
},
"tags": null,
"type": "microsoft.azurestackhci/galleryimages"
}
PS C:\Users\azcli>

Delete VM image
You might want to delete a VM image if the download fails for some reason or if the
image is no longer needed. Follow these steps to delete the VM images.

Azure CLI

1. Run PowerShell as an administrator.