Azure Stack HCI solution overview
Article • 02/21/2024
Azure Stack HCI is a hyperconverged infrastructure (HCI) solution that hosts Windows
and Linux VM or containerized workloads and their storage. It's a hybrid product that
connects the on-premises system to Azure for cloud-based services, monitoring, and
management.
Overview
An Azure Stack HCI system consists of a server or a cluster of servers running the Azure
Stack HCI operating system and connected to Azure. You can use the Azure portal to
monitor and manage individual Azure Stack HCI systems as well as view all of your
Azure Stack HCI deployments. You can also manage with your existing tools, including
Windows Admin Center and PowerShell.
Azure Stack HCI is available for download from the Azure portal with a free 60-day trial
(Download Azure Stack HCI).
To acquire the servers to run Azure Stack HCI, you can purchase Azure Stack HCI
integrated systems from a Microsoft hardware partner with the operating system pre-
installed, or buy validated nodes and install the operating system yourself. See the Azure
Stack HCI Catalog for hardware options and use the Azure Stack HCI sizing tool to
estimate hardware requirements.
Each Azure Stack HCI system consists of between 1 and 16 physical servers. All servers
share common configurations and resources by leveraging the Windows Server Failover
Clustering feature.
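If you want to confirm the cluster composition from one of the servers, a quick check with the built-in FailoverClusters PowerShell module looks like the following. This is a minimal sketch that assumes you run it locally on a cluster node.

    # List the servers (nodes) that make up the Azure Stack HCI system
    Get-ClusterNode | Format-Table Name, State

    # Confirm the failover cluster and that Storage Spaces Direct is enabled
    Get-Cluster | Select-Object Name, S2DEnabled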
See What's new in Azure Stack HCI for details on the latest enhancements.
The following use cases show common ways to deploy Azure Stack HCI.

Azure Virtual Desktop (AVD): Azure Virtual Desktop for Azure Stack HCI lets you deploy Azure Virtual Desktop session hosts on your on-premises Azure Stack HCI infrastructure. You manage your session hosts from the Azure portal. To learn more, see Azure Virtual Desktop for Azure Stack HCI.

Azure Kubernetes Service (AKS) hybrid: You can leverage Azure Stack HCI to host container-based deployments, which increases workload density and resource usage efficiency. Azure Stack HCI also further enhances the agility and resiliency inherent to Azure Kubernetes deployments. Azure Stack HCI manages automatic failover of VMs serving as Kubernetes cluster nodes in case of a localized failure of the underlying physical components. This supplements the high availability built into Kubernetes, which automatically restarts failed containers on either the same or another VM. To learn more, see Azure Kubernetes Service on Azure Stack HCI and Windows Server.

Run Azure Arc services on-premises: Azure Arc allows you to run Azure services anywhere. This allows you to build consistent hybrid and multicloud application architectures by using Azure services that can run in Azure, on-premises, at the edge, or at other cloud providers. Azure Arc enabled services allow you to run Arc VMs, Azure data services, and Azure application services such as Azure App Service, Functions, Logic Apps, Event Grid, and API Management anywhere to support hybrid workloads. To learn more, see Azure Arc overview.

Highly performant SQL Server: Azure Stack HCI provides an additional layer of resiliency to highly available, mission-critical Always On availability groups-based deployments of SQL Server. This approach also offers extra benefits associated with the single-vendor approach, including simplified support and performance optimizations built into the underlying platform. To learn more, see Deploy SQL Server on Azure Stack HCI.

Trusted enterprise virtualization: Azure Stack HCI satisfies the trusted enterprise virtualization requirements through its built-in support for Virtualization-based Security (VBS). VBS relies on Hyper-V to implement virtual secure mode, which forms a dedicated, isolated memory region within its guest VMs.

Scale-out storage: Storage Spaces Direct is a core technology of Azure Stack HCI that uses industry-standard servers with locally attached drives to offer high availability, performance, and scalability. Using Storage Spaces Direct results in significant cost reductions compared with competing offers based on storage area network (SAN) or network-attached storage (NAS) technologies. These benefits result from an innovative design and a wide range of enhancements, such as persistent read/write cache drives, mirror-accelerated parity, nested resiliency, and deduplication.

Disaster recovery for virtualized workloads: An Azure Stack HCI stretched cluster provides automatic failover of virtualized workloads to a secondary site following a primary site failure. Synchronous replication ensures crash consistency of VM disks.

Data center consolidation and modernization: Refreshing and consolidating aging virtualization hosts with Azure Stack HCI can improve scalability and make your environment easier to manage and secure. It's also an opportunity to retire legacy SAN storage to reduce footprint and total cost of ownership. Operations and systems administration are simplified with unified tools and interfaces and a single point of support.

Branch office and edge: For branch office and edge workloads, you can minimize infrastructure costs by deploying two-node clusters with inexpensive witness options, such as Cloud Witness or a USB drive-based file share witness. Another factor that contributes to the lower cost of two-node clusters is support for switchless networking, which relies on crossover cables between cluster nodes instead of more expensive high-speed switches. Customers can also centrally view remote Azure Stack HCI deployments in the Azure portal. To learn more, see Deploy branch office and edge on Azure Stack HCI.
Using a fictional customer, inspired directly by real customers, you will see how to
deploy Kubernetes, set up GitOps, deploy VMs, use Azure Monitor and drill into a
hardware failure, all without leaving the Azure portal.
https://www.youtube-nocookie.com/embed/t81MNUjAnEQ
This video shows preview functionality; it demonstrates real product capabilities, but in a closely controlled environment.
You can use the Azure portal for an increasing number of tasks including:
Monitoring: View all of your Azure Stack HCI systems in a single, global view
where you can group them by resource group and tag them.
Billing: Pay for Azure Stack HCI through your Azure subscription.
For more details on the cloud service components of Azure Stack HCI, see Azure Stack
HCI hybrid capabilities with Azure services.
One or more servers from the Azure Stack HCI Catalog , purchased from your
preferred Microsoft hardware partner.
An Azure subscription .
Operating system licenses for your workload VMs – for example, Windows Server.
See Activate Windows Server VMs.
An internet connection for each server in the cluster that can connect via HTTPS
outbound traffic to well-known Azure endpoints at least every 30 days. See Azure
connectivity requirements for more information. (A quick connectivity spot-check
follows this list.)
For clusters stretched across sites:
At least four servers (two in each site)
At least one 1 Gb connection between sites (a 25 Gb RDMA connection is
preferred)
An average latency of 5 ms round trip between sites if you want to do
synchronous replication where writes occur simultaneously in both sites.
If you plan to use SDN, you'll need a virtual hard disk (VHD) for the Azure Stack
HCI operating system to create Network Controller VMs (see Plan to deploy
Network Controller).
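As a rough spot-check of the connectivity and latency requirements above, you can run something like the following from a server. The endpoint and remote node name are placeholders; substitute the endpoints from the Azure connectivity requirements and a node in your second site.

    # Verify outbound HTTPS (port 443) reachability to a well-known Azure endpoint (placeholder)
    Test-NetConnection -ComputerName "login.microsoftonline.com" -Port 443

    # Sample round-trip latency to a node in the other site (placeholder name);
    # the average should stay around 5 ms for synchronous replication
    Test-Connection -ComputerName "node1-site2.contoso.local" -Count 20 |
        Measure-Object -Property ResponseTime -Average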
Make sure your hardware meets the System requirements and that your network meets
the physical network and host network requirements for Azure Stack HCI.
For Azure Kubernetes Service on Azure Stack HCI and Windows Server requirements, see
AKS requirements on Azure Stack HCI.
Azure Stack HCI is priced on a per core basis on your on-premises servers. For current
pricing, see Azure Stack HCI pricing .
Browse the Azure Stack HCI Catalog to view Azure Stack HCI solutions from Microsoft
partners such as ASUS, Blue Chip, DataON, Dell EMC, Fujitsu, HPE, Hitachi, Lenovo, NEC,
primeLine Solutions, QCT, SecureGUARD, and Supermicro.
Some Microsoft partners are developing software that extends the capabilities of Azure
Stack HCI while allowing IT admins to use familiar tools. To learn more, see Utility
applications for Azure Stack HCI.
Next steps
Learn more about Azure Stack HCI, version 23H2 deployment.
What's new in Azure Stack HCI, version
23H2
Article • 03/04/2024
This article lists the various features and improvements that are available in Azure Stack
HCI, version 23H2.
Azure Stack HCI, version 23H2 is the latest version of the Azure Stack HCI solution. This
version focuses on cloud-based deployment and updates, cloud-based monitoring, new
and simplified experience for Arc VM management, security, and more. For an earlier
version of Azure Stack HCI, see What's new in Azure Stack HCI, version 22H2.
The following sections briefly describe the various features and enhancements in Azure
Stack HCI, version 23H2 releases.
The role applies the concept of least privilege and must be assigned to the service
principal clustername.arb before you update the cluster.
To take advantage of the constrained permissions, remove the permissions that were
applied before. Follow the steps to Assign an Azure RBAC role via the portal. Search for
and assign the Azure Resource Bridge Deployment role to the member: <deployment-
cluster-name>-cl.arb .
An update health check is also included in this release that confirms that the new role is
assigned before you apply the update.
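If you prefer the command line over the portal, the same assignment can be made with the Azure CLI. This is a sketch only: the object ID, subscription, and resource group are placeholders, and the role name is taken from the description in this section.

    # Object ID of the <deployment-cluster-name>-cl.arb service principal (placeholder)
    $spObjectId = "00000000-0000-0000-0000-000000000000"

    az role assignment create `
        --assignee-object-id $spObjectId `
        --assignee-principal-type ServicePrincipal `
        --role "Azure Resource Bridge Deployment Role" `
        --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"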
Changes to Active Directory preparation
Beginning this release, the Active Directory preparation process is simplified. You can
use your own existing process to create an Organizational Unit (OU), a user account with
appropriate permissions, and with Group policy inheritance blocked for the Group Policy
Object (GPO). You can also use the Microsoft provided script to create the OU. For more
information, see Prepare Active Directory.
Region expansion
Azure Stack HCI, version 23H2 solution is now supported in Australia. For more
information, see Azure Stack HCI supported regions.
The role applies the concept of least privilege and must be assigned to the Azure
resource bridge service principal, clustername.arb , before you update the cluster.
You must remove the previously assigned permissions to take advantage of the
constrained permissions. Follow the steps to Assign an Azure RBAC role via the portal.
Search for and assign the Azure Resource Bridge Deployment role to the member:
<deployment-cluster-name>-cl.arb .
Additionally, this release includes an update health check that confirms the assignment
of the new role before applying the update.
Important
The production workloads are only supported on the Azure Stack HCI systems
running the generally available 2311.2 release. To run the GA version, start with a
new 2311 deployment and then update to 2311.2.
In this generally available release of the Azure Stack HCI, version 23H2, all the features
that were available with the 2311 preview releases are also now generally available. In
addition, the following improvements and enhancements are available:
Deployment changes
With this release:
Deployment is supported using existing storage accounts and existing Azure Key
Vaults.
A failed deployment can be run using the Rerun deployment option that becomes
available in the cluster Overview page.
Network settings such as storage traffic priority, cluster traffic priority, storage
traffic bandwidth reservation, jumbo frames, and RDMA protocol can all be
customized.
Validation must be started explicitly via the Start validation button.
Guest management is available via Azure CLI. For more information, see Enable
guest management.
Proxy is supported for Arc VMs. For more information, see Set up proxy for Arc
VMs on Azure Stack HCI.
Storage path selection is available during the VM image creation via the Azure
portal. For more information, see Create a VM image from Azure Marketplace via
the Azure portal.
Monitoring changes
In the Azure portal, you can now monitor platform metrics of your cluster by navigating
to the Monitoring tab on your cluster’s Overview page. This tab offers a quick way to
view graphs for different platform metrics. You can select any graph to open it in Metrics
Explorer for a more in-depth analysis. For more information, see Monitor Azure Stack
HCI through the Monitoring tab.
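The same platform metrics can also be queried outside the portal, for example with the Azure CLI. The following is a minimal sketch: the resource ID and metric name are placeholders, so use the names shown in Metrics Explorer for your cluster.

    $clusterId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.AzureStackHCI/clusters/<cluster-name>"

    # Pull one platform metric at an hourly grain (metric name is a placeholder)
    az monitor metrics list `
        --resource $clusterId `
        --metric "<metric-name>" `
        --interval PT1H `
        --aggregation Average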
Supported workloads
Starting with this release, the following workloads are generally available on Azure Stack
HCI:
Azure Kubernetes Service (AKS) on Azure Stack HCI. For more information, see Set
up an Azure Kubernetes Service host on Azure Stack HCI and deploy a workload
cluster using PowerShell.
In addition, AKS on HCI has a new CLI extension and Azure portal experience, support
for logical networks, support for taints and labels, support for upgrade via the Azure
CLI, support for Nvidia A2, and more. For details, see What's new in AKS on Azure Stack
HCI, version 23H2.
Azure Virtual Desktop (AVD) on Azure Stack HCI. For more information, see
Deploy AVD on Azure Stack HCI.
Features and improvements in 2311
This section lists the new features and improvements in the 2311 release of Azure Stack
HCI, version 23H2. Additionally, this section includes features and improvements that
were originally released for 2310 starting with cloud-based deployment.
Cloud-based deployment
For servers running Azure Stack HCI, version 23H2, you can perform new deployments
via the cloud. You can deploy an Azure Stack HCI cluster in one of the two ways - via the
Azure portal or via an Azure Resource Manager deployment template.
For more information, see Deploy Azure Stack HCI cluster using the Azure portal and
Deploy Azure Stack HCI via the Azure Resource Manager deployment template.
Cloud-based updates
This new release has the infrastructure to consolidate all the relevant updates for the OS,
software agents, Azure Arc infrastructure, and OEM drivers and firmware into a unified
monthly update package. This comprehensive update package is identified and applied
from the cloud through the Azure Update Manager tool. Alternatively, you can apply the
updates using PowerShell.
For more information, see Update your Azure Stack HCI cluster via the Azure Update
Manager and Update your Azure Stack HCI via PowerShell.
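As a quick sketch of the PowerShell path, the update cmdlets referenced elsewhere in this documentation (Get-SolutionUpdate and Start-SolutionUpdate) can be used from a cluster node to discover and start the monthly package. The property names shown are illustrative.

    # Discover the available solution updates and their state
    Get-SolutionUpdate | Format-Table Version, State

    # Start the update and then poll for progress
    Get-SolutionUpdate | Start-SolutionUpdate
    Get-SolutionUpdate | Select-Object Version, State, UpdateStateProperties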
Cloud-based monitoring
For more information, see Respond to Azure Stack HCI health alerts using Azure Monitor
alerts.
Monitor metrics
This release also integrates the Azure Monitor metrics with Azure Stack HCI so that you
can monitor the health of your Azure Stack HCI system via the metrics collected for
compute, storage, and network resources. This integration enables you to store cluster
data in a dedicated time-series database that you can use to analyze data from your
Azure Stack HCI system.
For more information, see Monitor Azure Stack HCI with Azure Monitor metrics.
With Insights for Azure Stack HCI, you can now monitor and analyze performance,
savings, and usage insights about key Azure Stack HCI features, such as ReFS
deduplication and compression. To use these enhanced monitoring capabilities, ensure
that your cluster is deployed, registered, and connected to Azure, and enrolled in
monitoring. For more information, see Monitor Azure Stack HCI features with Insights.
Simplified Arc Resource Bridge deployment. The Arc Resource Bridge is now
deployed as part of the Azure Stack HCI deployment. For more information, see
Deploy Azure Stack HCI cluster using the Azure portal.
New RBAC roles for Arc VMs. This release introduces new RBAC roles for Arc VMs.
For more information, see Manage RBAC roles for Arc VMs.
New Azure consistent CLI. Beginning with this preview release, a new consistent
command-line experience is available to create VMs and VM resources such as VM
images, storage paths, logical networks, and network interfaces. For more
information, see Create Arc VMs on Azure Stack HCI.
Support for static IPs. This release has the support for static IPs. For more
information, see Create static logical networks on Azure Stack HCI.
Support for storage paths. While default storage paths are created during the
deployment, you can also specify custom storage paths for your Arc VMs. For more
information, see Create storage paths on Azure Stack HCI.
Support for Azure VM extensions on Arc VMs on Azure Stack HCI. Starting with
this preview release, you can also enable and manage the Azure VM extensions
that are supported on Azure Arc, on Azure Stack HCI Arc VMs created via the Azure
CLI. You can manage these VM extensions using the Azure CLI or the Azure portal;
a minimal example follows this list. For more information, see Manage VM
extensions for Azure Stack HCI VMs.
Trusted launch for Azure Arc VMs. Azure Trusted Launch protects VMs against
boot kits, rootkits, and kernel-level malware. Starting this preview release, some of
those Trusted Launch capabilities are available for Arc VMs on Azure Stack HCI. For
more information, see Trusted launch for Arc VMs.
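For example, once guest management is enabled, a supported extension can be added to an Arc VM with the az connectedmachine extension commands. The machine name, resource group, and location below are placeholders, and the Azure Monitor agent is used only as an illustration.

    az connectedmachine extension create `
        --machine-name "myHciArcVm" `
        --resource-group "myResourceGroup" `
        --location "eastus" `
        --name "AzureMonitorWindowsAgent" `
        --publisher "Microsoft.Azure.Monitor" `
        --type "AzureMonitorWindowsAgent"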
Security capabilities
The new installations with this release of Azure Stack HCI start with a secure-by-default
strategy. The new version has a tailored security baseline coupled with a security drift
control mechanism and a set of well-known security features enabled by default. This
release provides the following (a quick spot-check example follows the list):
A tailored security baseline with over 300 security settings configured and
enforced with a security drift control mechanism. For more information, see
Security baseline settings for Azure Stack HCI.
Out-of-box protection for data and network with SMB signing and BitLocker
encryption for OS and Cluster Shared Volumes. For more information, see
BitLocker encryption for Azure Stack HCI.
Reduced attack surface as Windows Defender Application Control is enabled by
default and limits the applications and the code that you can run on the core
platform. For more information, see Windows Defender Application Control for
Azure Stack HCI.
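The following spot-check, run on a node, uses standard Windows cmdlets to confirm two of these defaults. It is a sketch rather than a full audit of the security baseline.

    # BitLocker status for the OS volume and Cluster Shared Volumes on this node
    Get-BitLockerVolume | Format-Table MountPoint, VolumeType, ProtectionStatus

    # Confirm that SMB signing is required on the server side
    Get-SmbServerConfiguration |
        Select-Object EnableSecuritySignature, RequireSecuritySignature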
Capacity management
In this release, you can add and remove servers, or repair servers, from your Azure Stack
HCI system via PowerShell.
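A sketch of the PowerShell flow, based on the Add server and Repair server guides referenced in this documentation set, looks like the following. Treat the cmdlet parameters, node name, and IP address as assumptions to verify against the current how-to articles.

    # Credential of the local administrator account on the incoming server
    $cred = Get-Credential

    # Add a new physical server to the system (name and IP are placeholders)
    Add-Server -Name "Node4" -HostIpv4 "192.168.1.14" -LocalAdminCredential $cred

    # Repair an existing server in place, for example after an OS reinstall
    Repair-Server -Name "Node4" -LocalAdminCredential $cred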
For more information, see Optimize storage with ReFS deduplication and compression
in Azure Stack HCI.
Next steps
Read the blog announcing the general availability of Azure Stack HCI, version
23H2 .
Read the blog about What’s new for Azure Stack HCI at Microsoft Ignite 2023 .
For Azure Stack HCI, version 23H2 deployments:
Read the Deployment overview.
Learn how to Deploy Azure Stack HCI, version 23H2 via the Azure portal.
View known issues in Azure Stack HCI 2402 release
Article • 03/11/2024
This article identifies the critical known issues and their workarounds in Azure Stack HCI 2402 release.
The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy
your Azure Stack HCI, carefully review the information contained in the release notes.
Important
This release supports both - new deployments and updates. You must be running version 2311.3 to update to this release.
For more information about the new features in this release, see What's new in 23H2.
Release notes for this version include the issues fixed in this release, known issues in this release, and known
issues carried over from previous releases.
Fixed issues
Here are the issues fixed in this release:
Deployment: The first deployment step, Before Cloud Deployment, can take from 45 minutes to an hour to complete when deploying via the Azure portal.
Deployment: There's a sporadic heartbeat reliability issue in this release due to which the registration encounters the error: HCI registration failed. Error: Arc integration failed. Workaround/Comments: This issue is intermittent. Try rerunning the deployment. For more information, see Rerun the deployment.
Deployment: There's an intermittent issue in this release where the Arc integration validation fails with this error: Validator failed. Can't retrieve the dynamic parameters for the cmdlet. PowerShell Gallery is currently unavailable. Please try again later. Workaround/Comments: This issue is intermittent. Try rerunning the deployment. For more information, see Rerun the deployment.
Deployment: In some instances, running the Arc registration script doesn't install the mandatory extensions, Azure Edge device Management or Azure Edge Lifecycle Manager. Workaround/Comments: The issue was fixed in this release. The extensions remediate themselves and get into a successful deployment state.
Known issues in this release
Here are the known issues in this release:
Deployment: If you prepare the Active Directory on your own (not using the script and procedure provided by Microsoft), your Active Directory validation could fail with missing Generic All permission. This is due to an issue in the validation check that checks for a dedicated permission entry for msFVE-RecoverInformationobjects – General – Permissions Full control, which is required for BitLocker recovery. Workaround: Use the Prepare AD script method, or if using your own method, make sure to assign the specific permission msFVE-RecoverInformationobjects – General – Permissions Full control.
Deployment: There's a rare issue in this release where the DNS record is deleted during the Azure Stack HCI deployment. When that occurs, the following exception is seen: Type 'PropagatePublicRootCertificate' of Role 'ASCA' raised an exception: The operation on computer 'ASB88RQ22U09' failed: WinRM cannot process the request. The following error occurred while using Kerberos authentication: Cannot find the computer ASB88RQ22U09.local. Verify that the computer exists on the network and that the name provided is spelled correctly at PropagatePublicRootCertificate, C:\NugetStore\Microsoft.AzureStack.Orchestration.Roles.CertificateAuthority.10.2402.0.14\content\Classes\ASCA\ASCA.psm1: line 38, at C:\CloudDeployment\ECEngine\InvokeInterfaceInternal.psm1: line 127, at Invoke-EceInterfaceInternal, C:\CloudDeployment\ECEngine\InvokeInterfaceInternal.psm1: line 123. Workaround: Check the DNS server to see if any DNS records of the cluster nodes are missing. Apply the following mitigation on the nodes where the DNS record is missing: restart the DNS client service. Open a PowerShell session and run the following cmdlet on the affected node: Taskkill /f /fi "SERVICES eq dnscache"
Deployment: In this release, there's a remote task failure on a multi-node deployment that results in the following exception: ECE RemoteTask orchestration failure with ASRR1N42R01U31 (node pingable - True): A WebException occurred while sending a RestRequest. WebException.Status: ConnectFailure on https://<URL>. Workaround: The mitigation is to restart the ECE agent on the affected node. On your server, open a PowerShell session and run the following command: Restart-Service ECEAgent.
Updates: In this release, there's a health check issue owing to which a single-server Azure Stack HCI can't be updated from the Azure portal. Workaround: Update your Azure Stack HCI via PowerShell.
Add/Repair server: In this release, when adding or repairing a server, a failure is seen when the software load balancer or network controller VM certificates are being copied from the existing nodes. The failure is because these certificates weren't generated during the deployment/update. Workaround: There's no workaround in this release. If you encounter this issue, contact Microsoft Support to determine next steps.
Deployment: In this release, there's a transient issue resulting in the deployment failure with the following exception: Type 'SyncDiagnosticLevel' of Role 'ObservabilityConfig' raised an exception: Syncing Diagnostic Level failed with error: The Diagnostic Level does not match. Portal was not set to Enhanced, instead is Basic. Workaround: As this is a transient issue, retrying the deployment should fix it. For more information, see how to Rerun the deployment.
Deployment: In this release, there's an issue with the Secrets URI/location field. This is a required field that is marked Not mandatory, which results in ARM template deployment failures. Workaround: Use the sample parameters file in Deploy Azure Stack HCI, version 23H2 via ARM template to ensure that all the inputs are provided in the required format and then try the deployment. If there's a failed deployment, you must also clean up the following resources before you Rerun the deployment: 1. Delete C:\EceStore. 2. Delete C:\CloudDeployment. 3. Delete C:\nugetstore. 4. Remove-Item HKLM:\Software\Microsoft\LCMAzureStackStampInformation.
Security: For new deployments, Secured-core capable devices won't have Dynamic Root of Trust for Measurement (DRTM) enabled by default. If you try to enable DRTM using the Enable-AzSSecurity cmdlet, you'll see an error that the DRTM setting isn't supported in the current release. Microsoft recommends defense in depth, and UEFI Secure Boot still protects the components in the Static Root of Trust (SRT) boot chain by ensuring that they are loaded only when they are signed and verified. Workaround: DRTM is not supported in this release.
Known issues from previous releases
Here are the known issues from previous releases:
Arc VM management: Deployment or update of Arc Resource Bridge could fail when the automatically generated temporary SPN secret during this operation starts with a hyphen. Workaround: Retry the deployment/update. The retry should regenerate the SPN secret and the operation will likely succeed.
Arc VM management: Arc Extensions on Arc VMs stay in "Creating" state indefinitely. Workaround: Sign in to the VM, open a command prompt, and type the following. Windows: notepad C:\ProgramData\AzureConnectedMachineAgent\Config\agentconfig.json. Linux: sudo vi /var/opt/azcmagent/agentconfig.json. Next, find the resourcename property. Delete the GUID that is appended to the end of the resource name, so this property matches the name of the VM. Then restart the VM.
Arc VM management: When a new server is added to an Azure Stack HCI cluster, a storage path isn't created automatically for the newly created volume. Workaround: You can manually create a storage path for any new volumes. For more information, see Create a storage path.
Arc VM management: Restart of Arc VM operation completes after approximately 20 minutes although the VM itself restarts in about a minute. Workaround: There's no known workaround in this release.
Arc VM management: In some instances, the status of the logical network shows as Failed in the Azure portal. This occurs when you try to delete the logical network without first deleting any resources such as network interfaces associated with that logical network. You should still be able to create resources on this logical network; the status is misleading in this instance. Workaround: If the status of this logical network was Succeeded at the time when this network was provisioned, then you can continue to create resources on this network.
Arc VM management: In this release, when you update a VM with a data disk attached to it using the Azure CLI, the operation fails with the following error message: Couldn't find a virtual hard disk with the name. Workaround: Use the Azure portal for all the VM update operations. For more information, see Manage Arc VMs and Manage Arc VM resources.
Update: In rare instances, you may encounter this error while updating your Azure Stack HCI: Type 'UpdateArbAndExtensions' of Role 'MocArb' raised an exception: Exception Upgrading ARB and Extension in step [UpgradeArbAndExtensions :Get-ArcHciConfig] UpgradeArb: Invalid applianceyaml = [C:\AksHci\hci-appliance.yaml]. Workaround: If you see this issue, contact Microsoft Support to assist you with the next steps.
Update: When you try to change your AzureStackLCMUserPassword using the command Set-AzureStackLCMUserPassword, you might encounter an error. Workaround: There's no known workaround in this release.
Networking: There is an infrequent DNS client issue in this release that causes the deployment to fail on a two-node cluster with a DNS resolution error: A WebException occurred while sending a RestRequest. WebException.Status: NameResolutionFailure. As a result of the bug, the DNS record of the second node is deleted soon after it's created, resulting in a DNS error. Workaround: Restart the server. This operation registers the DNS record, which prevents it from getting deleted.
Azure portal: In some instances, the Azure portal might take a while to update and the view might not be current. Workaround: You might need to wait for 30 minutes or more to see the updated view.
Arc VM management: Deleting a network interface on an Arc VM from the Azure portal doesn't work in this release. Workaround: Use the Azure CLI to first remove the network interface and then delete it. For more information, see Remove the network interface and Delete the network interface.
Arc VM management: When you create a disk or a network interface in this release with an underscore in the name, the operation fails. Workaround: Make sure not to use underscores in the names for disks or network interfaces.
Deployment: Providing the OU name in an incorrect syntax isn't detected in the Azure portal. The incorrect syntax is however detected at a later step during cluster validation. Workaround: There's no known workaround in this release.
Deployment: Deployments via Azure Resource Manager time out after 2 hours. Deployments that exceed 2 hours show up as failed in the resource group though the cluster is successfully created. Workaround: To monitor the deployment in the Azure portal, go to the Azure Stack HCI cluster resource and then go to the new Deployments entry.
Azure Site Recovery: Azure Site Recovery can't be installed on an Azure Stack HCI cluster in this release. Workaround: There's no known workaround in this release.
Update: When updating the Azure Stack HCI cluster via the Azure Update Manager, the update progress and results may not be visible in the Azure portal. Workaround: On each cluster node, add the following registry key (no value needed): New-Item -Path "HKLM:\SYSTEM\CurrentControlSet\Services\HciCloudManagementSvc\Parameters" -force. Then on one of the cluster nodes, restart the Cloud Management cluster group. This won't fully remediate the issue as the progress details may still not be displayed for a duration of the update process. To get the latest update details, you can Retrieve the update progress with PowerShell.
Update: In this release, if you run the Test-CauRun cmdlet prior to actually applying the 2311.2 update, you see an error message regarding a missing firewall rule to remotely shut down the Azure Stack HCI system. Workaround: No action is required on your part as the missing rule is automatically created when the 2311.2 updates are applied. When applying future updates, make sure to run the Test-EnvironmentReadiness cmdlet instead of Test-CauRun. For more information, see Step 2: Optionally validate system health.
Updates: In rare instances, if a failed update is stuck in an In progress state in Azure Update Manager, the Try again button is disabled. Workaround: To resume the update, run the following PowerShell command: Get-SolutionUpdate | Start-SolutionUpdate.
Updates: In some cases, SolutionUpdate commands could fail if run after the Send-DiagnosticData command. Workaround: Make sure to close the PowerShell session used for Send-DiagnosticData. Open a new PowerShell session and use it for SolutionUpdate commands.
Updates: In rare instances, when applying an update from 2311.0.24 to 2311.2.4, cluster status reports In Progress instead of the expected Failed to update. Workaround: Retry the update. If the issue persists, contact Microsoft Support.
Arc VM management: If the resource group used to deploy an Arc VM on your Azure Stack HCI has an underscore in the name, the guest agent installation fails. As a result, you won't be able to enable guest management. Workaround: Make sure that there are no underscores in the resource groups used to deploy Arc VMs.
Cluster aware updating: Resume node operation failed to resume node. Workaround: This is a transient issue and could resolve on its own. Wait for a few minutes and retry the operation. If the issue persists, contact Microsoft Support.
Cluster aware updating: Suspend node operation was stuck for greater than 90 minutes. Workaround: This is a transient issue and could resolve on its own. Wait for a few minutes and retry the operation. If the issue persists, contact Microsoft Support.
Next steps
Read the Deployment overview.
View known issues in Azure Stack HCI 2311.3 release
This article identifies the critical known issues and their workarounds in the Azure Stack HCI 2311.3 release.
The release notes are continuously updated, and as critical issues requiring a workaround are discovered,
they're added. Before you deploy your Azure Stack HCI, carefully review the information contained in the
release notes.
Important
The production workloads are only supported on the Azure Stack HCI systems running the generally
available 2311.3 release. To run the GA version, you need to start with a new 2311 deployment and
then update to 2311.3.
For more information about the new features in this release, see What's new in 23H2.
Release notes for this version include the issues fixed in this release, known issues in this release, and
known issues carried over from previous releases.
Fixed issues
Microsoft isn't currently aware of any fixed issues with this release.
Known issues in this release
Here are the known issues in this release:
Security: In this release, if you enable Dynamic Root of Trust for Measurement (DRTM) using the Enable-AzSSecurity cmdlet, you receive the following error: DRTM setting is not supported on current release at C:\Program Files\WindowsPowerShell\Modules\AzureStackOSConfigAgent\AzureStackOSConfigAgent.psm1:4307 char:17 + ...throw "DRTM setting is not supported on current release". Workaround: DRTM is not supported in this release.
Known issues from previous releases
Here are the known issues from previous releases:
Arc VM management: Deployment or update of Arc Resource Bridge could fail when the automatically generated temporary SPN secret during this operation starts with a hyphen. Workaround: Retry the deployment/update. The retry should regenerate the SPN secret and the operation will likely succeed.
Arc VM management: Arc Extensions on Arc VMs stay in "Creating" state indefinitely. Workaround: Sign in to the VM, open a command prompt, and type the following. Windows: notepad C:\ProgramData\AzureConnectedMachineAgent\Config\agentconfig.json. Linux: sudo vi /var/opt/azcmagent/agentconfig.json. Next, find the resourcename property. Delete the GUID that is appended to the end of the resource name, so this property matches the name of the VM. Then restart the VM.
Arc VM management: When a new server is added to an Azure Stack HCI cluster, a storage path isn't created automatically for the newly created volume. Workaround: You can manually create a storage path for any new volumes. For more information, see Create a storage path.
Arc VM management: In some instances, the status of the logical network shows as Failed in the Azure portal. This occurs when you try to delete the logical network without first deleting any resources such as network interfaces associated with that logical network. You should still be able to create resources on this logical network; the status is misleading in this instance. Workaround: If the status of this logical network was Succeeded at the time when this network was provisioned, then you can continue to create resources on this network.
Arc VM management: In this release, when you update a VM with a data disk attached to it using the Azure CLI, the operation fails with the following error message: Couldn't find a virtual hard disk with the name. Workaround: Use the Azure portal for all the VM update operations. For more information, see Manage Arc VMs and Manage Arc VM resources.
Deployment: There's a sporadic heartbeat reliability issue in this release due to which the registration encounters the error: HCI registration failed. Error: Arc integration failed. Workaround: This issue is intermittent. Try rerunning the deployment. For more information, see Rerun the deployment.
Deployment: There's an intermittent issue in this release where the Arc integration validation fails with this error: Validator failed. Can't retrieve the dynamic parameters for the cmdlet. PowerShell Gallery is currently unavailable. Please try again later. Workaround: This issue is intermittent. Try rerunning the deployment. For more information, see Rerun the deployment.
Update: In rare instances, you may encounter this error while updating your Azure Stack HCI: Type 'UpdateArbAndExtensions' of Role 'MocArb' raised an exception: Exception Upgrading ARB and Extension in step [UpgradeArbAndExtensions :Get-ArcHciConfig] UpgradeArb: Invalid applianceyaml = [C:\AksHci\hci-appliance.yaml]. Workaround: If you see this issue, contact Microsoft Support to assist you with the next steps.
Update: When you try to change your AzureStackLCMUserPassword using the command Set-AzureStackLCMUserPassword, you might encounter an error. Workaround: There's no known workaround in this release.
Networking: There's an infrequent DNS client issue in this release that causes the deployment to fail on a two-node cluster with a DNS resolution error: A WebException occurred while sending a RestRequest. WebException.Status: NameResolutionFailure. As a result of the bug, the DNS record of the second node is deleted soon after it's created, resulting in a DNS error. Workaround: Restart the server. This operation registers the DNS record, which prevents it from getting deleted.
Azure portal: In some instances, the Azure portal might take a while to update and the view might not be current. Workaround: You might need to wait for 30 minutes or more to see the updated view.
Arc VM management: Deleting a network interface on an Arc VM from the Azure portal doesn't work in this release. Workaround: Use the Azure CLI to first remove the network interface and then delete it. For more information, see Remove the network interface and Delete the network interface.
Arc VM management: When you create a disk or a network interface in this release with an underscore in the name, the operation fails. Workaround: Make sure not to use underscores in the names for disks or network interfaces.
Deployment: In some instances, running the Arc registration script doesn't install the mandatory extensions, Azure Edge device Management or Azure Edge Lifecycle Manager. Workaround: Run the script again and make sure that all the mandatory extensions are installed before you Deploy via Azure portal.
Deployment: Deployments via Azure Resource Manager time out after 2 hours. Deployments that exceed 2 hours show up as failed in the resource group though the cluster is successfully created. Workaround: To monitor the deployment in the Azure portal, go to the Azure Stack HCI cluster resource and then go to the new Deployments entry.
Azure Site Recovery: Azure Site Recovery can't be installed on an Azure Stack HCI cluster in this release. Workaround: There's no known workaround in this release.
Update: When updating the Azure Stack HCI cluster via the Azure Update Manager, the update progress and results may not be visible in the Azure portal. Workaround: On each cluster node, add the following registry key (no value needed): New-Item -Path "HKLM:\SYSTEM\CurrentControlSet\Services\HciCloudManagementSvc\Parameters" -force. Then on one of the cluster nodes, restart the Cloud Management cluster group. This won't fully remediate the issue as the progress details may still not be displayed for a duration of the update process. To get the latest update details, you can Retrieve the update progress with PowerShell.
Update: In this release, if you run the Test-CauRun cmdlet prior to actually applying the 2311.2 update, you see an error message regarding a missing firewall rule to remotely shut down the Azure Stack HCI system. Workaround: No action is required on your part as the missing rule is automatically created when the 2311.2 updates are applied. When applying future updates, make sure to run the Test-EnvironmentReadiness cmdlet instead of Test-CauRun. For more information, see Step 2: Optionally validate system health.
Updates: In rare instances, if a failed update is stuck in an In progress state in Azure Update Manager, the Try again button is disabled. Workaround: To resume the update, run the following PowerShell command: Get-SolutionUpdate | Start-SolutionUpdate.
Updates: In some cases, SolutionUpdate commands could fail if run after the Send-DiagnosticData command. Workaround: Make sure to close the PowerShell session used for Send-DiagnosticData. Open a new PowerShell session and use it for SolutionUpdate commands.
Updates: In rare instances, when applying an update from 2311.0.24 to 2311.2.4, cluster status reports In Progress instead of the expected Failed to update. Workaround: Retry the update. If the issue persists, contact Microsoft Support.
Arc VM management: If the resource group used to deploy an Arc VM on your Azure Stack HCI has an underscore in the name, the guest agent installation fails. As a result, you won't be able to enable guest management. Workaround: Make sure that there are no underscores in the resource groups used to deploy Arc VMs.
Cluster aware updating: Resume node operation failed to resume node. Workaround: This is a transient issue and could resolve on its own. Wait for a few minutes and retry the operation. If the issue persists, contact Microsoft Support.
Cluster aware updating: Suspend node operation was stuck for greater than 90 minutes. Workaround: This is a transient issue and could resolve on its own. Wait for a few minutes and retry the operation. If the issue persists, contact Microsoft Support.
Next steps
Read the Deployment overview.
View known issues in Azure Stack HCI 2311.2
General Availability release
Article • 02/26/2024
This article identifies the critical known issues and their workarounds in Azure Stack HCI 2311.2 General
Availability (GA) release.
The release notes are continuously updated, and as critical issues requiring a workaround are discovered,
they're added. Before you deploy your Azure Stack HCI, carefully review the information contained in the
release notes.
Important
The production workloads are only supported on the Azure Stack HCI systems running the generally
available 2311.2 release. To run the GA version, you need to start with a new 2311 deployment and then
update to 2311.2.
For more information about the new features in this release, see What's new in 23H2.
Release notes for this version include the issues fixed in this release, known issues in this release, and known
issues carried over from previous releases.
Fixed issues
Here are the issues fixed in this release:
Add server and repair server: In this release, add server and repair server scenarios might fail with the following error: CloudEngine.Actions.InterfaceInvocationFailedException: Type 'AddNewNodeConfiguration' of Role 'BareMetal' raised an exception: The term 'Trace-Execution' is not recognized as the name of a cmdlet, function, script file, or operable program. Workaround/Comments: Follow these steps to work around this error: 1. Create a copy of the required PowerShell modules on the new node. 2. Connect to a node on your Azure Stack HCI system. 3. Run the following PowerShell cmdlet: Copy-Item "C:\Program Files\WindowsPowerShell\Modules\CloudCommon" "\\newserver\c$\Program Files\WindowsPowerShell\Modules\CloudCommon" -recursive
Deployment: When you update 2310 to 2311 software, the service principal doesn't migrate. Workaround/Comments: If you encounter an issue with the software, use PowerShell to migrate the service principal.
Deployment: If you select Review + Create and you haven't filled out all the tabs, the deployment begins and then eventually fails. Workaround/Comments: There's no known workaround in this release.
Deployment: This issue is seen if an incorrect subscription or resource group was used during registration. When you register the server a second time with Arc, the Azure Edge Lifecycle Manager extension fails during the registration, but the extension state is reported as Ready. Workaround/Comments: Before you run the registration the second time, make sure to delete the following folders from your servers: C:\ecestore, C:\CloudDeployment, and C:\nugetstore. Delete the registry key using the PowerShell cmdlet: Remove-Item HKLM:\Software\Microsoft\LCMAzureStackStampInformation
Deployment: On server hardware, a USB network adapter is created to access the Baseboard Management Controller (BMC). This adapter can cause the cluster validation to fail during the deployment. Workaround/Comments: Make sure to disable the BMC network adapter before you begin cloud deployment.
Deployment: The network direct intent overrides defined on the template aren't working in this release. Workaround/Comments: Use the ARM template to override this parameter and disable RDMA for the intents.
Known issues in this release
Here are the known issues in this release:
Update: In this release, if you run the Test-CauRun cmdlet before actually applying the 2311.2 update, you see an error message regarding a missing firewall rule to remotely shut down the Azure Stack HCI system. Workaround: No action is required on your part as the missing rule is automatically created when the 2311.2 updates are applied. When applying future updates, make sure to run the Test-EnvironmentReadiness cmdlet instead of Test-CauRun. For more information, see Step 2: Optionally validate system health.
Updates: In rare instances, if a failed update is stuck in an In progress state in Azure Update Manager, the Try again button is disabled. Workaround: To resume the update, run the following PowerShell command: Get-SolutionUpdate | Start-SolutionUpdate.
Updates: In some cases, SolutionUpdate commands could fail if run after the Send-DiagnosticData command. Workaround: Make sure to close the PowerShell session used for Send-DiagnosticData. Open a new PowerShell session and use it for SolutionUpdate commands.
Updates: In very rare instances, when applying an update from 2311.0.24 to 2311.2.4, cluster status reports In Progress instead of the expected Failed to update. Workaround: Retry the update. If the issue persists, contact Microsoft Support.
Arc VM management: If the resource group used to deploy an Arc VM on your Azure Stack HCI has an underscore in the name, the guest agent installation will fail. As a result, you won't be able to enable guest management. Workaround: Make sure that there are no underscores in the resource groups used to deploy Arc VMs.
Cluster aware updating: Resume node operation failed to resume node. Workaround: This is a transient issue and could resolve on its own. Wait for a few minutes and retry the operation. If the issue persists, contact Microsoft Support.
Cluster aware updating: Suspend node operation was stuck for greater than 90 minutes. Workaround: This is a transient issue and could resolve on its own. Wait for a few minutes and retry the operation. If the issue persists, contact Microsoft Support.
Known issues from previous releases
Here are the known issues from previous releases:
Arc VM management: Deployment or update of Arc Resource Bridge could fail when the automatically generated temporary SPN secret during this operation starts with a hyphen. Workaround: Retry the deployment/update. The retry should regenerate the SPN secret and the operation will likely succeed.
Arc VM management: Arc Extensions on Arc VMs stay in "Creating" state indefinitely. Workaround: Log in to the VM, open a command prompt, and type the following. Windows: notepad C:\ProgramData\AzureConnectedMachineAgent\Config\agentconfig.json. Linux: sudo vi /var/opt/azcmagent/agentconfig.json. Next, find the resourcename property. Delete the GUID that is appended to the end of the resource name, so this property matches the name of the VM. Then restart the VM.
Arc VM management: When a new server is added to an Azure Stack HCI cluster, a storage path isn't created automatically for the newly created volume. Workaround: You can manually create a storage path for any new volumes. For more information, see Create a storage path.
Arc VM management: In some instances, the status of the logical network shows as Failed in the Azure portal. This occurs when you try to delete the logical network without first deleting any resources such as network interfaces associated with that logical network. You should still be able to create resources on this logical network; the status is misleading in this instance. Workaround: If the status of this logical network was Succeeded at the time when this network was provisioned, then you can continue to create resources on this network.
Arc VM management: In this release, when you update a VM with a data disk attached to it using the Azure CLI, the operation fails with the following error message: Couldn't find a virtual hard disk with the name. Workaround: Use the Azure portal for all the VM update operations. For more information, see Manage Arc VMs and Manage Arc VM resources.
Deployment: There's a sporadic heartbeat reliability issue in this release due to which the registration encounters the error: HCI registration failed. Error: Arc integration failed. Workaround: This issue is intermittent. Try rerunning the deployment. For more information, see Rerun the deployment.
Deployment: There's an intermittent issue in this release where the Arc integration validation fails with this error: Validator failed. Cannot retrieve the dynamic parameters for the cmdlet. PowerShell Gallery is currently unavailable. Please try again later. Workaround: This issue is intermittent. Try rerunning the deployment. For more information, see Rerun the deployment.
Update: In rare instances, you may encounter this error while updating your Azure Stack HCI: Type 'UpdateArbAndExtensions' of Role 'MocArb' raised an exception: Exception Upgrading ARB and Extension in step [UpgradeArbAndExtensions :Get-ArcHciConfig] UpgradeArb: Invalid applianceyaml = [C:\AksHci\hci-appliance.yaml]. Workaround: If you see this issue, contact Microsoft Support to assist you with the next steps.
Update: When you try to change your AzureStackLCMUserPassword using the command Set-AzureStackLCMUserPassword, you might encounter an error. Workaround: There's no known workaround in this release.
Networking: There's an infrequent DNS client issue in this release that causes the deployment to fail on a two-node cluster with a DNS resolution error: A WebException occurred while sending a RestRequest. WebException.Status: NameResolutionFailure. As a result of the bug, the DNS record of the second node is deleted soon after it's created, resulting in a DNS error. Workaround: Restart the server. This operation registers the DNS record, which prevents it from getting deleted.
Azure portal: In some instances, the Azure portal might take a while to update and the view might not be current. Workaround: You might need to wait for 30 minutes or more to see the updated view.
Arc VM management: Deleting a network interface on an Arc VM from the Azure portal doesn't work in this release. Workaround: Use the Azure CLI to first remove the network interface and then delete it. For more information, see Remove the network interface and Delete the network interface.
Arc VM management: When you create a disk or a network interface in this release with an underscore in the name, the operation fails. Workaround: Make sure not to use underscores in the names for disks or network interfaces.
Deployment: In some instances, running the Arc registration script doesn't install the mandatory extensions, Azure Edge device Management or Azure Edge Lifecycle Manager. Workaround: Run the script again and make sure that all the mandatory extensions are installed before you Deploy via Azure portal.
Deployment: Deployments via Azure Resource Manager time out after 2 hours. Deployments that exceed 2 hours show up as failed in the resource group though the cluster is successfully created. Workaround: To monitor the deployment in the Azure portal, go to the Azure Stack HCI cluster resource and then go to the new Deployments entry.
Azure Site Recovery: Azure Site Recovery can't be installed on an Azure Stack HCI cluster in this release. Workaround: There's no known workaround in this release.
Update: When updating the Azure Stack HCI cluster via the Azure Update Manager, the update progress and results may not be visible in the Azure portal. Workaround: On each cluster node, add the following registry key (no value needed): New-Item -Path "HKLM:\SYSTEM\CurrentControlSet\Services\HciCloudManagementSvc\Parameters" -force. Then on one of the cluster nodes, restart the Cloud Management cluster group. This won't fully remediate the issue as the progress details may still not be displayed for a duration of the update process. To get the latest update details, you can Retrieve the update progress with PowerShell.
Next steps
Read the Deployment overview.
View known issues in Azure Stack HCI 2311 release
(preview)
Article • 02/06/2024
This article identifies the critical known issues and their workarounds in Azure Stack HCI 2311 release.
The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added.
Before you deploy your Azure Stack HCI, carefully review the information contained in the release notes.
For more information about the new features in this release, see What's new in 23H2.
Important
This feature is currently in PREVIEW. See the Supplemental Terms of Use for Microsoft Azure Previews for legal
terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
Release notes for this version include the issues fixed in this release, known issues in this release, and known issues
carried over from previous versions.
Fixed issues
Here are the issues fixed in this release:
Security: When using the Get-AzsSyslogForwarder cmdlet with the -PerNode parameter, an exception is thrown. You aren't able to retrieve the SyslogForwarder configuration information of multiple nodes.
Deployment: During the deployment, Microsoft Open Cloud (MOC) Arc Resource Bridge installation fails with this error: Unable to find a resource that satisfies the requirement Size [0] Location [MocLocation].: OutOfCapacity"\n".
Deployment: Entering an incorrect DNS updates the DNS configuration in hosts during the validation and the hosts can lose internet connectivity.
Deployment: Password for the deployment user (also referred to as AzureStackLCMUserCredential during Active Directory prep) and local administrator can't include a : (colon).
Arc VM management: Detaching a disk via the Azure CLI results in an error in this release.
Arc VM management: A resource group with multiple clusters only shows storage paths of one cluster.
Arc VM management: When you create the Azure Marketplace image on Azure Stack HCI, sometimes the download provisioning state doesn't match the download percentage on the Azure Stack HCI cluster. The provisioning state is returned as succeeded while the download percentage is reported as less than 100.
Arc VM management: In this release, depending on your environment, the VM deployments on the Azure Stack HCI system can take 30 to 45 minutes.
Arc VM management: While creating Arc VMs via the Azure CLI on Azure Stack HCI, if you provide the friendly name of a marketplace image, an incorrect Azure Resource Manager ID is built and the VM creation errors out.
Known issues in this release
Here are the known issues in this release:
Arc VM management: Deployment or update of Arc Resource Bridge could fail when the automatically generated SPN secret during this operation starts with a hyphen. Workaround: Retry the deployment/update. The retry should regenerate the SPN secret and the operation will likely succeed.
Arc VM management: Arc Extensions on Arc VMs stay in "Creating" state indefinitely. Workaround: Sign in to the VM, open a command prompt, and type the following:
Windows:
notepad C:\ProgramData\AzureConnectedMachineAgent\Config\agentconfig.json
Linux:
sudo vi /var/opt/azcmagent/agentconfig.json
Next, find the resourcename property. Delete the GUID that is appended to the end of the resource name, so this property matches the name of the VM. Then restart the VM.
Arc VM management: When a new server is added to an Azure Stack HCI cluster, a storage path isn't created automatically for the newly created volume. Workaround: You can manually create a storage path for any new volumes. For more information, see Create a storage path.
Arc VM management: Restart of an Arc VM operation completes after approximately 20 minutes although the VM itself restarts in about a minute. Workaround: There's no known workaround in this release.
Arc VM management: In some instances, the status of the logical network shows as Failed in the Azure portal. This occurs when you try to delete the logical network without first deleting any resources, such as network interfaces, associated with that logical network. You should still be able to create resources on this logical network; the status is misleading in this instance. Workaround: If the status of this logical network was Succeeded at the time when this network was provisioned, then you can continue to create resources on this network.
Arc VM management: In this release, when you update a VM with a data disk attached to it using the Azure CLI, the operation fails with the following error message: Couldn't find a virtual hard disk with the name. Workaround: Use the Azure portal for all the VM update operations. For more information, see Manage Arc VMs and Manage Arc VM resources.
Deployment: Before you update from 2310 to 2311, make sure to run the following cmdlets on one of your Azure Stack HCI nodes. This script helps migrate the service principal.
Import-Module C:\CloudDeployment\CloudDeployment.psd1
Import-Module C:\CloudDeployment\ECEngine\EnterpriseCloudEngine.psd1
$cloudRole = $Parameters.Roles["Cloud"].PublicConfiguration
$domainRole = $Parameters.Roles["Domain"].PublicConfiguration
$securityInfo = $cloudRole.PublicInfo.SecurityInfo
$cloudSpCred = $Parameters.GetCredential($cloudSpUser.Credential)
Set-ECEServiceSecret -ContainerName "DefaultARBApplication" -Credential $cloudSpCred
Deployment: There's a sporadic heartbeat reliability issue in this release due to which the registration encounters the error: HCI registration failed. Error: Arc integration failed. Workaround: This issue is intermittent. Try rerunning the deployment. For more information, see Rerun the deployment.
Deployment: There's an intermittent issue in this release where the Arc integration validation fails with this error: Validator failed. Can't retrieve the dynamic parameters for the cmdlet. PowerShell Gallery is currently unavailable. Please try again later. Workaround: This issue is intermittent. Try rerunning the deployment. For more information, see Rerun the deployment.
Update: In rare instances, you may encounter this error while updating your Azure Stack HCI: Type 'UpdateArbAndExtensions' of Role 'MocArb' raised an exception: Exception Upgrading ARB and Extension in step [UpgradeArbAndExtensions :Get-ArcHciConfig] UpgradeArb: Invalid applianceyaml = [C:\AksHci\hci-appliance.yaml]. Workaround: If you see this issue, contact Microsoft Support to assist you with the next steps.
Update: When you try to change your AzureStackLCMUserPassword using the command Set-AzureStackLCMUserPassword, you might encounter an error. Workaround: There's no known workaround in this release.
Update: When you update from the 2311 build to Azure Stack HCI 23H2, the update health checks stop reporting in the Azure portal after the update reaches the Install step. Workaround: Microsoft is actively working to resolve this issue, and there's no action required on your part. Although the health checks aren't visible in the portal, they're still running and completing as expected.
Add server and repair server: In this release, add server and repair server scenarios might fail with the following error: CloudEngine.Actions.InterfaceInvocationFailedException: Type 'AddNewNodeConfiguration' of Role 'BareMetal' raised an exception: The term 'Trace-Execution' isn't recognized as the name of a cmdlet, function, script file, or operable program. Workaround: Follow these steps to work around this error:
1. Create a copy of the required PowerShell modules on the new node.
2. Connect to a node on your Azure Stack HCI system.
3. Run the following PowerShell cmdlet:
Copy-Item "C:\Program Files\WindowsPowerShell\Modules\CloudCommon" "\\newserver\c$\Program Files\WindowsPowerShell\Modules\CloudCommon" -Recurse
For more information, see Prerequisite for add and repair server scenarios.
Networking: There's an infrequent DNS client issue in this release that causes the deployment to fail on a two-node cluster with a DNS resolution error: A WebException occurred while sending a RestRequest. WebException.Status: NameResolutionFailure. As a result of the bug, the DNS record of the second node is deleted soon after it's created, resulting in a DNS error. Workaround: Restart the server. This operation registers the DNS record, which prevents it from getting deleted.
Azure portal: In some instances, the Azure portal might take a while to update and the view might not be current. Workaround: You might need to wait for 30 minutes or more to see the updated view.
Arc VM management: Deleting a network interface on an Arc VM from the Azure portal doesn't work in this release. Workaround: Use the Azure CLI to first remove the network interface and then delete it. For more information, see Remove the network interface and Delete the network interface.
Arc VM management: When you create a disk or a network interface in this release with an underscore in the name, the operation fails. Workaround: Make sure to not use underscores in the names for disks or network interfaces.
Deployment: On server hardware, a USB network adapter is created to access the Baseboard Management Controller (BMC). This adapter can cause the cluster validation to fail during the deployment. Workaround: Make sure to disable the BMC network adapter before you begin cloud deployment.
Deployment: In some instances, running the Arc registration script doesn't install the mandatory extensions, Azure Edge device Management or Azure Edge Lifecycle Manager. Workaround: Run the script again and make sure that all the mandatory extensions are installed before you deploy via the Azure portal.
Deployment: The Network Direct intent overrides defined on the template aren't applied. Workaround: Use the ARM template to override this parameter and disable RDMA for the intents.
Deployment: Deployments via Azure Resource Manager time out after 2 hours. Deployments that exceed 2 hours show up as failed in the resource group though the cluster is successfully created. Workaround: To monitor the deployment in the Azure portal, go to the Azure Stack HCI cluster resource and then go to the new Deployments entry.
Deployment: If you select Review + Create and you haven't filled out all the tabs, the deployment begins and then eventually fails. Workaround: There's no known workaround in this release.
Deployment: This issue is seen if an incorrect subscription or resource group was used during registration. When you register the server a second time with Arc, the Azure Edge Lifecycle Manager extension fails during the registration, but the extension state is reported as Ready. Workaround: Before you run the registration the second time, make sure to delete the following folders from your server(s): C:\ecestore, C:\CloudDeployment, and C:\nugetstore. Then delete the registry key using the PowerShell cmdlet:
Remove-Item HKLM:\Software\Microsoft\LCMAzureStackStampInformation
Azure Site Recovery: Azure Site Recovery can't be installed on an Azure Stack HCI cluster in this release. Workaround: There's no known workaround in this release.
Update: When updating the Azure Stack HCI cluster via the Azure Update Manager, the update progress and results might not be visible in the Azure portal. Workaround: On each cluster node, add the following registry key (no value needed):
New-Item -Path "HKLM:\SYSTEM\CurrentControlSet\Services\HciCloudManagementSvc\Parameters" -Force
Then on one of the cluster nodes, restart the Cloud Management cluster group. This won't fully remediate the issue as the progress details might still not be displayed for a duration of the update process. To get the latest update details, you can Retrieve the update progress with PowerShell.
Next steps
Read the Deployment overview.
Azure Stack HCI, version 23H2 release
information
Article • 02/28/2024
Feature updates for Azure Stack HCI are released periodically to enhance customer
experience. To keep your Azure Stack HCI service in a supported state, you have up to
six months to install updates, but we recommend installing updates as they are released.
Azure Stack HCI also releases monthly quality and security updates. These releases are
cumulative, containing all previous updates to keep devices protected and productive.
This article presents the release information for Azure Stack HCI, version 23H2, including
the release build and OS build information.
The release table lists the release builds of Azure Stack HCI, version 23H2, with the corresponding OS build, Baseline/Update designation 1, what's new, and known issues. The availability dates for these releases are 2024-02-13, 2024-01-09, and 2023-11-14.
1 A Baseline build is the initial version of the software that you must deploy before upgrading to the next version. An Update build includes incremental updates from the most recent Baseline build. To deploy an Update build, it's necessary to first deploy the previous Baseline build.
Next steps
Release Notes for Azure Stack HCI, version 23H2
Compare Azure Stack HCI to Windows
Server
Article • 02/01/2024
Applies to: Azure Stack HCI, versions 23H2 and 22H2; Windows Server 2022
This article explains key differences between Azure Stack HCI and Windows Server and
provides guidance about when to use each. Both products are actively supported and
maintained by Microsoft. Many organizations choose to deploy both as they are
intended for different and complementary purposes.
The best virtualization host to modernize your infrastructure, either for existing
workloads in your core datacenter or emerging requirements for branch office and
edge locations.
Easy extensibility to the cloud, with a regular stream of innovations from your
Azure subscription and a consistent set of tools and experiences.
When using Azure Stack HCI, run all of your workloads inside virtual machines
or containers, not directly on the cluster. Azure Stack HCI isn't licensed for
clients to connect directly to it using Client Access Licenses (CALs).
For information about licensing Windows Server VMs running on an Azure Stack HCI
cluster, see Activate Windows Server VMs.
Legal: Azure Stack HCI is covered under your Microsoft customer agreement or online subscription agreement. Windows Server has its own end-user license agreement.
Licensing: Azure Stack HCI is billed to your Azure subscription. Windows Server has its own paid license.
Where to get it: Azure Stack HCI is downloaded from the Azure portal or comes preinstalled on integrated systems. Windows Server is available from the Microsoft Volume Licensing Service Center or the Evaluation Center.
Hardware: Azure Stack HCI runs on any of more than 200 pre-validated solutions from the Azure Stack HCI Catalog. Windows Server runs on any hardware with the "Certified for Windows Server" logo; see the Windows Server Catalog.
Lifecycle policy: Azure Stack HCI is always up to date with the latest features; you have up to six months to install updates. Windows Server uses the Windows Server servicing channels, such as the Long-Term Servicing Channel (LTSC).
Free Extended Security Updates (ESUs) for Windows Server and SQL 2008/R2 and 2012/R2: Azure Stack HCI: Yes. Windows Server: No 1.
1 Requires purchasing an Extended Security Updates (ESU) license key and manually applying it to every VM.
For more information, see What's New in Azure Stack HCI, version 23H2 and Using Azure Stack HCI on a single server.
Azure portal > Windows Admin Center integration (preview): Azure Stack HCI: Yes. Windows Server: Azure VMs only 1.
Price structure: Azure Stack HCI: per core, per month. Windows Server: varies, usually per core.
Price: Azure Stack HCI: per core, per month. Windows Server: see Pricing and licensing for Windows Server 2022.
Next steps
Compare Azure Stack HCI to Azure Stack Hub
Azure Stack HCI FAQ
FAQ
The Azure Stack HCI FAQ provides information on Azure Stack HCI connectivity with the
cloud, and how Azure Stack HCI relates to Windows Server and Azure Stack Hub.
Because Azure Stack HCI doesn't store customer data in the cloud, business continuity
disaster recovery (BCDR) for the customer's on-premises data is defined and controlled
by the customer. To set up your own site-to-site replication using a stretched cluster, see
Stretched clusters overview.
To learn more about the diagnostic data we collect to keep Azure Stack HCI secure, up
to date, and working as expected, see Azure Stack HCI data collection and Data
residency in Azure .
With Azure Stack HCI, you run virtualized workloads on-premises, managed with
Windows Admin Center and familiar Windows Server tools. You can also connect to
Azure for hybrid scenarios like cloud-based Site Recovery, monitoring, and others.
PowerShell
OsName OSDisplayVersion
------ ----------------
Microsoft Azure Stack HCI 20H2
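The command that produced this output isn't reproduced here. As a sketch, on a recent OS build you can query the same information from the server like this:
PowerShell
# Query the OS name and display version of the local server (sketch; run on the Azure Stack HCI host)
Get-ComputerInfo -Property OsName, OSDisplayVersion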
This article describes how to deploy a virtualized single server or a multi-node Azure
Stack HCI, version 23H2, on a host system running Hyper-V on the Windows Server
2022, Windows 11, or later operating system (OS).
You need administrator privileges for the Azure Stack HCI virtual deployment and should
be familiar with the existing Azure Stack HCI solution. The deployment can take around
2.5 hours to complete.
) Important
A virtual deployment of Azure Stack HCI, version 23H2 is intended for educational
and demonstration purposes only. Microsoft Support doesn't support virtual
deployments.
Prerequisites
Here are the hardware, networking, and other prerequisites for the virtual deployment:
You have access to a physical host system that is running Hyper-V on Windows
Server 2022, Windows 11, or later. This host is used to provision a virtual Azure
Stack HCI deployment.
The physical hardware used for the virtual deployment meets the following
requirements:
Processor: Intel VT-x or AMD-V, with support for nested virtualization. For more information, see Does My Processor Support Intel® Virtualization Technology?.
Memory: The physical host must have a minimum of 32 GB RAM for single virtual node deployments, and the virtual host VM should have at least 24 GB RAM. For two virtual node deployments, the physical host must have a minimum of 64 GB RAM and each virtual host VM should have at least 24 GB RAM.
Virtual machine (VM) type: Secure Boot and Trusted Platform Module (TPM) enabled.
Boot disk: One disk to install the Azure Stack HCI operating system from ISO.
Hard disks for Storage Spaces Direct: Six dynamically expanding disks. Maximum disk size is 1024 GB.
However, if your physical network where you're planning to deploy the Azure Stack HCI
virtual environment is scarce on IPs, you can create an internal virtual switch with NAT
enabled, to isolate the virtual hosts from your physical network while keeping outbound
connectivity to the internet.
PowerShell
PowerShell
Once the internal virtual switch is created, a new network adapter is created on the host. You must assign an IP address to this network adapter; it becomes the default gateway for your virtual hosts once they're connected to this internal switch network. You also need to define the NAT network subnet where the virtual hosts are connected.
The following example script creates a NAT network named HCINAT with the prefix 192.168.44.0/24 and defines 192.168.44.1 as the default gateway for the network using the interface on the host:
PowerShell
#Check interface index of the new network adapter on the host connected to
InternalSwitch:
Get-NetAdapter -Name "vEthernet (InternalSwitch)"
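The rest of the example script isn't reproduced here. A minimal sketch, assuming the interface index returned above is 20 and the NAT prefix described earlier:
PowerShell
# Assign the gateway IP to the host adapter connected to InternalSwitch
# (replace -InterfaceIndex 20 with the index returned by Get-NetAdapter above)
New-NetIPAddress -IPAddress 192.168.44.1 -PrefixLength 24 -InterfaceIndex 20

# Create the NAT network so the virtual hosts keep outbound connectivity
New-NetNat -Name "HCINAT" -InternalIPInterfaceAddressPrefix "192.168.44.0/24"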
Hyper-V Manager. For more information, see Create a virtual machine using
Hyper-V Manager to mirror your physical management network.
Follow these steps to create an example VM named Node1 using PowerShell cmdlets:
PowerShell
PowerShell
3. Disable VM checkpoints:
PowerShell
Set-VM -VMName "Node1" -CheckpointType Disabled
4. Remove the default network adapter created during VM creation in the previous
step:
PowerShell
5. Add new network adapters to the VM using custom names. This example adds four
NICs, but you can add just two if needed. Having four NICs allows you to test two
network intents ( Mgmt_Compute and Storage for example) with two NICs each:
PowerShell
6. Attach all network adapters to the virtual switch. Specify the name of the virtual
switch you created, whether it was external without NAT, or internal with NAT:
PowerShell
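The cmdlets for steps 5 and 6 aren't reproduced here. A minimal sketch, assuming adapter names NIC1 through NIC4 and a virtual switch named InternalSwitch:
PowerShell
# Step 5: add four network adapters with custom names to Node1
"NIC1","NIC2","NIC3","NIC4" | ForEach-Object {
    Add-VMNetworkAdapter -VMName "Node1" -Name $_
}

# Step 6: connect every adapter on Node1 to the virtual switch
Get-VMNetworkAdapter -VMName "Node1" | Connect-VMNetworkAdapter -SwitchName "InternalSwitch"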
PowerShell
8. Enable trunk port (for multi-node deployments only) for all network adapters on
VM Node1 . This script configures the network adapter of a specific VM to operate
in trunk mode. This is typically used in multi-node deployments where you want to
allow multiple Virtual Local Area Networks (VLANs) to communicate through a
single network adapter:
PowerShell
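The trunk configuration command isn't reproduced here. A sketch, assuming a native VLAN of 0 and an allowed VLAN range of 1-1000:
PowerShell
# Configure all network adapters on Node1 as trunk ports (multi-node deployments only)
Set-VMNetworkAdapterVlan -VMName "Node1" -Trunk -NativeVlanId 0 -AllowedVlanIdList "1-1000"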
9. Create a new key protector and assign it to Node1 . This is typically done in the
context of setting up a guarded fabric in Hyper-V, a security feature that protects
VMs from unauthorized access or tampering.
After the following script is executed, Node1 will have a new key protector assigned
to it. This key protector protects the VM's keys, helping to secure the VM against
unauthorized access or tampering:
PowerShell
10. Enable the vTPM for Node1 . By enabling vTPM on a VM, you can use BitLocker and
other features that require TPM on the VM. After this command is executed, Node1
will have a vTPM enabled, assuming the host machine's hardware and the VM's
configuration support this feature.
PowerShell
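The cmdlets for steps 9 and 10 aren't reproduced here. A minimal sketch that uses a local key protector, which is sufficient for a lab VM:
PowerShell
# Step 9: create and assign a new local key protector to Node1
Set-VMKeyProtector -VMName "Node1" -NewLocalKeyProtector

# Step 10: enable the virtual TPM for Node1
Enable-VMTPM -VMName "Node1"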
PowerShell
12. Create extra drives to be used as the boot disk and hard disks for Storage Spaces
Direct. After these commands are executed, six new VHDXs will be created in the
C:\vms\Node1 directory as shown in this example:
PowerShell
new-VHD -Path "C:\vms\Node1\s2d1.vhdx" -SizeBytes 1024GB
new-VHD -Path "C:\vms\Node1\s2d2.vhdx" -SizeBytes 1024GB
new-VHD -Path "C:\vms\Node1\s2d3.vhdx" -SizeBytes 1024GB
new-VHD -Path "C:\vms\Node1\s2d4.vhdx" -SizeBytes 1024GB
new-VHD -Path "C:\vms\Node1\s2d5.vhdx" -SizeBytes 1024GB
new-VHD -Path "C:\vms\Node1\s2d6.vhdx" -SizeBytes 1024GB
13. Attach drives to the newly created VHDXs for the VM. In these commands, six
VHDs located in the C:\vms\Node1 directory and named s2d1.vhdx through
s2d6.vhdx are added to Node1 . Each Add-VMHardDiskDrive command adds one
VHD to the VM, so the command is repeated six times with different -Path
parameter values.
Afterwards, the Node1 VM has six VHDs attached to it. These VHDXs are used to
enable Storage Spaces Direct on the VM, which are required for Azure Stack HCI
deployments:
PowerShell
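The six attach commands aren't reproduced here. A compact sketch that attaches the Storage Spaces Direct VHDXs created in the previous step:
PowerShell
# Attach the six S2D data disks created earlier to Node1
1..6 | ForEach-Object {
    Add-VMHardDiskDrive -VMName "Node1" -Path ("C:\vms\Node1\s2d{0}.vhdx" -f $_)
}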
PowerShell
PowerShell
PowerShell
Start-VM "Node1"
Install the OS on the virtual host VMs
Complete the following steps to install and configure the Azure Stack HCI OS on the
virtual host VMs:
1. Download Azure Stack HCI 23H2 ISO and Install the Azure Stack HCI operating
system.
2. Update the password since this is the first VM startup. Make sure the password meets the Azure complexity requirements: at least 12 characters, including 1 uppercase character, 1 lowercase character, 1 number, and 1 special character.
PowerShell
SConfig
For information on how to use SConfig, see Configure with the Server
Configuration tool (SConfig).
5. Change hostname to Node1 . Use option 2 for Computer name in SConfig to do this.
The hostname change results in a restart. When prompted for a restart, enter Yes
and wait for the restart to complete. SConfig is launched again automatically.
6. From the physical host, run the Get-VMNetworkAdapter and ForEach-Object cmdlets
to configure the four network adapter names for VM Node1 by mapping the
assigned MAC addresses to the corresponding network adapters on the guest OS.
a. The Get-VMNetworkAdapter cmdlet is used to retrieve the network adapter object
for each NIC on the VM, where the -VMName parameter specifies the name of the
VM, and the -Name parameter specifies the name of the network adapter. The
MacAddress property of the network adapter object is then accessed to get the
MAC address:
PowerShell
c. The commands are repeated for each of the four NICs on the VM, and the final
formatted MAC address for each NIC is stored in a separate variable:
PowerShell
d. The following script outputs the final formatted MAC address for each NIC:
PowerShell
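The scripts for sub-steps a through d aren't reproduced here. A sketch for one NIC (repeat for NIC2 through NIC4); the adapter names are assumptions carried over from the earlier steps:
PowerShell
# a. Get the MAC address Hyper-V assigned to the adapter named NIC1 on Node1
$rawMac1 = (Get-VMNetworkAdapter -VMName "Node1" -Name "NIC1").MacAddress

# b./c. Format it as AA-BB-CC-DD-EE-FF so it matches the guest's Get-NetAdapter output
$formattedMac1 = ($rawMac1 -split '(.{2})' -ne '') -join '-'

# d. Output the final formatted MAC address
$formattedMac1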
7. Obtain the Node1 VM local admin credentials and then rename Node1 :
PowerShell
$cred = get-credential
8. Rename and map the NICs on Node1 . The renaming is based on the MAC
addresses of the NICs assigned by Hyper-V when the VM is started the first time.
These commands should be run directly from the host:
Use the Get-NetAdapter command to retrieve the physical network adapters on the
VM, filter them based on their MAC address, and then rename them to the
matching adapter using the Rename-NetAdapter cmdlet.
This is repeated for each of the four NICs on the VM, with the MAC address and
new name of each NIC specified separately. This establishes a mapping between
the name of the NICs in Hyper-V Manager and the name of the NICs in the VM OS:
PowerShell
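The rename commands aren't reproduced here. A sketch for one NIC, run from the physical host over PowerShell Direct using the formatted MAC address and the credentials obtained earlier (names are assumptions):
PowerShell
# Rename the guest adapter whose MAC matches $formattedMac1 to NIC1 (repeat for the other NICs)
Invoke-Command -VMName "Node1" -Credential $cred -ScriptBlock {
    Get-NetAdapter | Where-Object MacAddress -eq $using:formattedMac1 |
        Rename-NetAdapter -NewName "NIC1"
}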
9. Disable the Dynamic Host Configuration Protocol (DHCP) on the four NICs for VM
Node1 by running the following commands.
7 Note
10. Set management IP, gateway, and DNS. After the following commands are
executed, Node1 will have the NIC1 network interface configured with the specified
IP address, subnet mask, default gateway, and DNS server address. Ensure that the
management IP address can resolve Active Directory and has outbound
connectivity to the internet:
PowerShell
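The configuration commands aren't reproduced here. A sketch using example addresses (replace the IP, gateway, and DNS values with ones valid for your network):
PowerShell
# Configure a static management IP, gateway, and DNS server on NIC1 inside Node1
Invoke-Command -VMName "Node1" -Credential $cred -ScriptBlock {
    New-NetIPAddress -InterfaceAlias "NIC1" -IPAddress 192.168.44.201 -PrefixLength 24 -DefaultGateway 192.168.44.1
    Set-DnsClientServerAddress -InterfaceAlias "NIC1" -ServerAddresses 192.168.44.10
}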
11. Enable the Hyper-V role. This command restarts the VM Node1 :
PowerShell
12. Once Node1 is restarted and the Hyper-V role is installed, install the Hyper-V
Management Tools:
PowerShell
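The commands for steps 11 and 12 aren't reproduced here. A sketch using the standard Windows feature names:
PowerShell
# Step 11: enable the Hyper-V role inside Node1; this restarts the VM
Invoke-Command -VMName "Node1" -Credential $cred -ScriptBlock {
    Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All -NoRestart
    Restart-Computer -Force
}

# Step 12: after the restart, install the Hyper-V management tools
Invoke-Command -VMName "Node1" -Credential $cred -ScriptBlock {
    Install-WindowsFeature -Name RSAT-Hyper-V-Tools
}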
14. Once the server is registered in Azure as an Arc resource and all the mandatory
extensions are installed, choose one of the following methods to deploy Azure
Stack HCI from Azure.
Repeat the process above for extra nodes if you plan to test multi-node deployments.
Ensure virtual host names and management IPs are unique and on the same subnet:
Next steps
Register to Arc and assign permissions for deployment.
System requirements for Azure Stack
HCI, version 23H2
Article • 02/01/2024
This article discusses Azure, server and storage, networking, and other requirements for
Azure Stack HCI. If you purchase Azure Stack HCI Integrated System solution hardware
from the Azure Stack HCI Catalog , you can skip to the Networking requirements since
the hardware already adheres to server and storage requirements.
Azure requirements
Here are the Azure requirements for your Azure Stack HCI cluster:
Azure subscription: If you don't already have an Azure account, create one . You
can use an existing subscription of any type:
Free account with Azure credits for students or Visual Studio subscribers .
Pay-as-you-go subscription with credit card.
Subscription obtained through an Enterprise Agreement (EA).
Subscription obtained through the Cloud Solution Provider (CSP) program.
Azure permissions: Make sure that you're assigned the required roles and
permissions for registration and deployment. For information on how to assign
permissions, see Assign Azure permissions for registration.
Azure regions: Azure Stack HCI is supported for the following regions:
East US
West Europe
CPU: A 64-bit Intel Nehalem grade or AMD EPYC or later compatible processor with second-level address translation (SLAT).
Host network adapters: At least two network adapters listed in the Windows Server Catalog, or dedicated network adapters per intent, which requires two separate adapters for the storage intent. For more information, see Windows Server Catalog.
Data drives: At least 2 disks with a minimum capacity of 500 GB (SSD or HDD).
Trusted Platform Module (TPM): TPM version 2.0 hardware must be present and turned on.
Each server should have dedicated volumes for logs, with log storage at least as
fast as data storage.
Have direct-attached drives that are physically attached to one server each. RAID
controller cards or SAN (Fibre Channel, iSCSI, FCoE) storage, shared SAS enclosures
connected to multiple servers, or any form of multi-path IO (MPIO) where drives
are accessible by multiple paths, aren't supported.
7 Note
Host-bus adapter (HBA) cards must implement simple pass-through mode for
any storage devices used for Storage Spaces Direct.
For more feature-specific requirements for Hyper-V, see System requirements for Hyper-
V on Windows Server.
Networking requirements
An Azure Stack HCI cluster requires a reliable high-bandwidth, low-latency network
connection between each server node.
Verify that physical switches in your network are configured to allow traffic on any
VLANs you use. For more information, see Physical network requirements for Azure
Stack HCI.
Hardware requirements
In addition to Microsoft Azure Stack HCI updates, many OEMs also release regular updates for your Azure Stack HCI hardware, such as driver and firmware updates. To ensure that OEM package update notifications reach your organization, check with your OEM about their specific notification process.
Before deploying Azure Stack HCI, version 23H2, ensure that your hardware is up to date
by:
Determining the current version of your Solution Builder Extension (SBE) package.
Finding the best method to download, install, and update your SBE package.
OEM information
This section contains OEM contact information and links to OEM Azure Stack HCI,
version 23H2 reference material.
For a comprehensive list of all OEM contact information, download the Azure Stack HCI
OEM Contact spreadsheet.
BIOS setting
Check with your OEM regarding the necessary generic BIOS settings for Azure Stack HCI,
version 23H2. These settings may include hardware virtualization, TPM enabled, and
secure core.
Driver
Check with your OEM regarding the necessary drivers that need to be installed for Azure
Stack HCI, version 23H2. Additionally, your OEM can provide you with their preferred
installation steps.
PowerShell
Get-NetAdapter
Console
PS C:\Windows\system32> Get-NetAdapter

Name                     InterfaceDescription                     ifIndex Status MacAddress        LinkSpeed
vSMB(compute_managemen…  Hyper-V Virtual Ethernet Adapter #2           20 Up     00-15-5D-20-40-00 25 Gbps
vSMB(compute_managemen…  Hyper-V Virtual Ethernet Adapter #3           24 Up     00-15-5D-20-40-01 25 Gbps
ethernet                 HPE Ethernet 10/25Gb 2-port 640FLR…#2          7 Up     B8-83-03-58-91-88 25 Gbps
ethernet 2               HPE Ethernet 10/25Gb 2-port 640FLR-S…          5 Up     B8-83-03-58-91-89 25 Gbps
vManagement(compute_ma…  Hyper-V Virtual Ethernet Adapter              14 Up     B8-83-03-58-91-88 25 Gbps
PowerShell
Console
Here's an example:
PowerShell
PowerShell
Console
Console
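The example commands and output aren't reproduced here. As a sketch, you could check an adapter's current driver details and stage an OEM-provided driver package like this (the adapter name and .inf path are placeholders):
PowerShell
# Check the driver provider, version, and date for a specific adapter
Get-NetAdapter -Name "ethernet" | Format-List Name, InterfaceDescription, Driver*

# Stage and install an OEM driver package supplied by your hardware vendor
pnputil.exe /add-driver "C:\Drivers\NIC\*.inf" /install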
Firmware
Check with your OEM regarding the necessary firmware that needs to be installed for
Azure Stack HCI, version 23H2. Additionally, your OEM can provide you with their
preferred installation steps.
Next steps
Review firewall, physical network, and host network requirements:
Firewall requirements.
Physical network requirements.
Host network requirements.
Physical network requirements for
Azure Stack HCI
Article • 02/22/2024
This article discusses physical (fabric) network considerations and requirements for
Azure Stack HCI, particularly for network switches.
7 Note
) Important
While other network switches using technologies and protocols not listed here may
work, Microsoft cannot guarantee they will work with Azure Stack HCI and may be
unable to assist in troubleshooting issues that occur.
When purchasing network switches, contact your switch vendor and ensure that the
devices meet the Azure Stack HCI requirements for your specified role types. The
following vendors (in alphabetical order) have confirmed that their switches support
Azure Stack HCI requirements:
Overview
Click on a vendor tab to see validated switches for each of the Azure Stack HCI
traffic types. These network classifications can be found here.
) Important
We update these lists as we're informed of changes by network switch vendors.
If your switch isn't included, contact your switch vendor to ensure that your switch
model and the version of the switch's operating system supports the requirements
in the next section.
7 Note
Network adapters used for compute, storage, and management traffic require
Ethernet. For more information, see Host network requirements.
The validated switch requirements for each Azure Stack HCI traffic type include support for standards such as Virtual LANs, Enhanced Transmission Selection, and Maximum Transmission Unit.
Azure Stack HCI can function in various data center architectures including 2-tier (Spine-
Leaf) and 3-tier (Core-Aggregation-Access). This section refers more to concepts from
the Spine-Leaf topology that is commonly used with workloads in hyper-converged
infrastructure such as Azure Stack HCI.
Network models
Network traffic can be classified by its direction. Traditional Storage Area Network (SAN)
environments are heavily North-South where traffic flows from a compute tier to a
storage tier across a Layer-3 (IP) boundary. Hyperconverged infrastructure is more
heavily East-West where a substantial portion of traffic stays within a Layer-2 (VLAN)
boundary.
) Important
We highly recommend that all cluster nodes in a site are physically located in the
same rack and connected to the same top-of-rack (ToR) switches.
Traffic flows out of a ToR switch to the spine or in from the spine to a ToR switch.
Traffic leaves the physical rack or crosses a Layer-3 boundary (IP).
Includes management (PowerShell, Windows Admin Center), compute (VM), and
inter-site stretched cluster traffic.
Uses an Ethernet switch for connectivity to the physical network.
Traffic remains within the ToR switches and Layer-2 boundary (VLAN).
Includes storage traffic or Live Migration traffic between nodes in the same cluster
and (if using a stretched cluster) within the same site.
May use an Ethernet switch (switched) or a direct (switchless) connection, as
described in the next two sections.
Using switches
North-South traffic requires the use of switches. Besides using an Ethernet switch that
supports the required protocols for Azure Stack HCI, the most important aspect is the
proper sizing of the network fabric.
Work with your network vendor or network support team to ensure your network
switches have been properly sized for the workload you are intending to run.
Using switchless
Azure Stack HCI supports switchless (direct) connections for East-West traffic for all
cluster sizes so long as each node in the cluster has a redundant connection to every
node in the cluster. This is called a "full-mesh" connection.
7 Note
The benefits of switchless deployments diminish with clusters larger than three-
nodes due to the number of network adapters required.
Next steps
Learn about network adapter and host requirements. See Host network
requirements.
Brush up on failover clustering basics. See Failover Clustering Networking Basics .
Brush up on using SET. See Remote Direct Memory Access (RDMA) and Switch
Embedded Teaming (SET).
For deployment, see Create a cluster using Windows Admin Center.
For deployment, see Create a cluster using Windows PowerShell.
Host network requirements for Azure Stack
HCI
Article • 03/11/2024
This topic discusses host networking considerations and requirements for Azure Stack HCI. For
information on datacenter architectures and the physical connections between servers, see
Physical network requirements.
For information on how to simplify host networking using Network ATC, see Simplify host
networking with Network ATC.
Management traffic: Traffic to or from outside the local cluster. For example, storage replica
traffic or traffic used by the administrator for management of the cluster like Remote
Desktop, Windows Admin Center, Active Directory, etc.
Compute traffic: Traffic originating from or destined to a virtual machine (VM).
Storage traffic: Traffic using Server Message Block (SMB), for example Storage Spaces Direct
or SMB-based live migration. This traffic is layer-2 traffic and is not routable.
) Important
Storage replica uses non-RDMA based SMB traffic. This and the directional nature of the
traffic (North-South) makes it closely aligned to that of "management" traffic listed above,
similar to that of a traditional file share.
For more information about this role-based NIC qualification, please see this link .
) Important
Using an adapter outside of its qualified traffic type is not supported.
7 Note
The highest qualification for any adapter in our ecosystem will contain the Management,
Compute Premium, and Storage Premium qualifications.
Driver Requirements
Inbox drivers are not supported for use with Azure Stack HCI. To identify if your adapter is using an
inbox driver, run the following cmdlet. An adapter is using an inbox driver if the DriverProvider
property is Microsoft.
Powershell
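The cmdlet itself isn't reproduced here. A minimal sketch that surfaces the DriverProvider property for each adapter:
Powershell
# List each adapter's driver provider; a value of "Microsoft" indicates an inbox driver
Get-NetAdapter | Select-Object Name, InterfaceDescription, DriverProvider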
Dynamic VMMQ
All network adapters with the Compute (Premium) qualification support Dynamic VMMQ. Dynamic
VMMQ requires the use of Switch Embedded Teaming.
Applicable traffic types: compute
Dynamic VMMQ is an intelligent, receive-side technology. It builds upon its predecessors of Virtual
Machine Queue (VMQ), Virtual Receive Side Scaling (vRSS), and VMMQ, to provide three primary
improvements:
For more information on Dynamic VMMQ, see the blog post Synthetic accelerations .
RDMA
RDMA is a network stack offload to the network adapter. It allows SMB storage traffic to bypass
the operating system for processing.
RDMA enables high-throughput, low-latency networking, using minimal host CPU resources.
These host CPU resources can then be used to run additional VMs or containers.
All adapters with Storage (Standard) or Storage (Premium) qualification support host-side RDMA.
For more information on using RDMA with guest workloads, see the "Guest RDMA" section later in
this article.
Azure Stack HCI supports RDMA with either the Internet Wide Area RDMA Protocol (iWARP) or
RDMA over Converged Ethernet (RoCE) protocol implementations.
) Important
RDMA adapters only work with other RDMA adapters that implement the same RDMA
protocol (iWARP or RoCE).
Not all network adapters from vendors support RDMA. The following table lists those vendors (in
alphabetical order) that offer certified RDMA adapters. However, there are hardware vendors not
included in this list that also support RDMA. See the Windows Server Catalog to find adapters
with the Storage (Standard) or Storage (Premium) qualification which require RDMA support.
7 Note
Broadcom No Yes
Nvidia No Yes
For more information on deploying RDMA for the host, we highly recommend you use Network
ATC. For information on manual deployment see the SDN GitHub repo .
iWARP
iWARP uses Transmission Control Protocol (TCP), and can be optionally enhanced with Priority-based Flow Control (PFC) and Enhanced Transmission Selection (ETS).
RoCE
RoCE uses User Datagram Protocol (UDP), and requires PFC and ETS to provide reliability.
Guest RDMA
Guest RDMA enables SMB workloads for VMs to gain the same benefits of using RDMA on hosts.
SET is the only teaming technology supported by Azure Stack HCI. SET works well with compute,
storage, and management traffic.
) Important
Azure Stack HCI doesn’t support NIC teaming with the older Load Balancing/Failover (LBFO).
See the blog post Teaming in Azure Stack HCI for more information on LBFO in Azure
Stack HCI.
SET is important for Azure Stack HCI because it's the only teaming technology that enables:
SET requires the use of symmetric (identical) adapters. Symmetric network adapters are those that
have the same:
make (vendor)
model (version)
speed (throughput)
configuration
In 22H2, Network ATC will automatically detect and inform you if the adapters you've chosen are
asymmetric. The easiest way to manually identify if adapters are symmetric is if the speeds and
interface descriptions are exact matches. They can deviate only in the numeral listed in the
description. Use the Get-NetAdapterAdvancedProperty cmdlet to ensure the configuration
reported lists the same property values.
See the following table for an example of the interface descriptions deviating only by numeral (#):
Name Interface description Link speed
7 Note
SET supports only switch-independent configuration by using either Dynamic or Hyper-V Port
load-balancing algorithms. For best performance, Hyper-V Port is recommended for use on
all NICs that operate at or above 10 Gbps. Network ATC makes all the required configurations
for SET.
For detailed information on how to deploy RDMA, download the document from the SDN GitHub
repo .
RoCE-based Azure Stack HCI implementations require the configuration of three PFC traffic
classes, including the default traffic class, across the fabric and all hosts.
This traffic class ensures that there's enough bandwidth reserved for cluster heartbeats:
Required: Yes
PFC-enabled: No
Recommended traffic priority: Priority 7
Recommended bandwidth reservation:
10 GbE or lower RDMA networks = 2 percent
25 GbE or higher RDMA networks = 1 percent
This traffic class carries the RDMA storage (SMB Direct) traffic:
Required: Yes
PFC-enabled: Yes
Recommended traffic priority: Priority 3 or 4
Recommended bandwidth reservation: 50 percent
This traffic class carries all other traffic not defined in the cluster or RDMA traffic classes, including
VM traffic and management traffic:
7 Note
We recommend using multiple subnets and VLANs to separate storage traffic in Azure Stack
HCI.
Consider the following example of a four node cluster. Each server has two storage ports (left and
right side). Because each adapter is on the same subnet and VLAN, SMB Multichannel will spread
connections across all available links. Therefore, the left-side port on the first server (192.168.1.1)
will make a connection to the left-side port on the second server (192.168.1.2). The right-side port
on the first server (192.168.1.12) will connect to the right-side port on the second server. Similar
connections are established for the third and fourth servers.
However, this creates unnecessary connections and causes congestion at the interlink (multi-
chassis link aggregation group or MC-LAG) that connects the ToR switches (marked with Xs). See
the following diagram:
The recommended approach is to use separate subnets and VLANs for each set of adapters. In the
following diagram, the right-hand ports now use subnet 192.168.2.x /24 and VLAN2. This allows
traffic on the left-side ports to remain on TOR1 and the traffic on the right-side ports to remain on
TOR2.
Because this use case poses the most constraints, it represents a good baseline. However,
considering the permutations for the number of adapters and speeds, this should be considered
an example and not a support requirement.
Storage Bus Layer (SBL), Cluster Shared Volume (CSV), and Hyper-V (Live Migration) traffic:
Use the same physical adapters.
Use SMB.
If the available bandwidth for Live Migration is >= 5 Gbps, and the network adapters
are capable, use RDMA. Use the following cmdlet to do so:
Powershell
If the available bandwidth for Live Migration is < 5 Gbps, use compression to reduce
blackout times. Use the following cmdlet to do so:
Powershell
If you're using RDMA for Live Migration traffic, ensure that Live Migration traffic can't
consume the entire bandwidth allocated to the RDMA traffic class by using an SMB
bandwidth limit. Be careful, because this cmdlet takes entry in bytes per second (Bps),
whereas network adapters are listed in bits per second (bps). Use the following cmdlet to set
a bandwidth limit of 6 Gbps, for example:
Powershell
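The cmdlets themselves aren't reproduced here. A sketch of the three settings just described (the 6-Gbps limit is expressed as 750 MB per second because the cmdlet takes bytes per second):
Powershell
# Use SMB (RDMA) for Live Migration when >= 5 Gbps is available
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

# Or use compression when < 5 Gbps is available
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression

# Cap SMB Live Migration traffic at 6 Gbps (750 MB per second)
Set-SmbBandwidthLimit -Category LiveMigration -BytesPerSecond 750MB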
7 Note
NIC speed 25 Gbps: Teamed bandwidth 50 Gbps; SMB bandwidth reservation** 25 Gbps; SBL/CSV 70% = 17.5 Gbps; Live Migration 29% = 7.25 Gbps maximum; Heartbeat 1% = 250 Mbps.
NIC speed 50 Gbps: Teamed bandwidth 100 Gbps; SMB bandwidth reservation** 50 Gbps; SBL/CSV 70% = 35 Gbps; Live Migration 29% = 14.5 Gbps maximum; Heartbeat 1% = 500 Mbps.
NIC speed 100 Gbps: Teamed bandwidth 200 Gbps; SMB bandwidth reservation** 100 Gbps; SBL/CSV 70% = 70 Gbps; Live Migration 29% = 29 Gbps maximum; Heartbeat 1% = 1 Gbps.
NIC speed 200 Gbps: Teamed bandwidth 400 Gbps; SMB bandwidth reservation** 200 Gbps; SBL/CSV 70% = 140 Gbps; Live Migration 29% = 58 Gbps maximum; Heartbeat 1% = 2 Gbps.
* Use compression rather than RDMA, because the bandwidth allocation for Live Migration traffic
is <5 Gbps.
Stretched clusters
Stretched clusters provide disaster recovery that spans multiple datacenters. In its simplest form, a
stretched Azure Stack HCI cluster network looks like this:
RDMA is limited to a single site, and isn't supported across different sites or subnets.
Servers in the same site must reside in the same rack and Layer-2 boundary.
Host communication between sites must cross a Layer-3 boundary; stretched Layer-2
topologies aren't supported.
Have enough bandwidth to run the workloads at the other site. In the event of a failover, the
alternate site will need to run all traffic. We recommend that you provision sites at 50 percent
of their available network capacity. This isn't a requirement, however, if you are able to
tolerate lower performance during a failover.
Replication between sites (north/south traffic) can use the same physical NICs as the local
storage (east/west traffic). If you're using the same physical adapters, these adapters must be
teamed with SET. The adapters must also have additional virtual NICs provisioned for
routable traffic between sites.
Can be physical or virtual (host vNIC). If adapters are virtual, you must provision one vNIC
in its own subnet and VLAN per physical NIC.
Must be on their own subnet and VLAN that can route between sites.
The following shows the details for the example stretched cluster configuration.
7 Note
Your exact configuration, including NIC names, IP addresses, and VLANs, might be different
than what is shown. This is used only as a reference configuration that can be adapted to your
environment.
The example configuration tables (one per site and traffic scope) list, for each node: the node name, vNIC name, physical NIC (mapped), VLAN where applicable, IP and subnet, and traffic scope.
Next steps
Learn about network switch and physical network requirements. See Physical network
requirements.
Learn how to simplify host networking using Network ATC. See Simplify host networking with
Network ATC.
Brush up on failover clustering networking basics .
For deployment, see Create a cluster using Windows Admin Center.
For deployment, see Create a cluster using Windows PowerShell.
This article provides guidance on how to configure firewalls for the Azure Stack HCI
operating system. It includes firewall requirements for outbound endpoints and internal
rules and ports. The article also provides information on how to use Azure service tags
with Microsoft Defender firewall.
If your network uses a proxy server for internet access, see Configure proxy settings for
Azure Stack HCI.
Azure Stack HCI needs to periodically connect to Azure. Access is limited only to:
) Important
Azure Stack HCI doesn’t support HTTPS inspection. Make sure that HTTPS
inspection is disabled along your networking path for Azure Stack HCI to prevent
any connectivity errors.
As shown in the following diagram, Azure Stack HCI accesses Azure potentially through more than one firewall.
This article describes how to optionally use a highly locked-down firewall configuration
to block all traffic to all destinations except those included in your allowlist.
Also follow the firewall requirements for AKS on Azure Stack HCI.
7 Note
The Azure Stack HCI firewall rules are the minimum endpoints required for HciSvc
connectivity, and don't contain wildcards. However, the following table currently
contains wildcard URLs, which may be updated into precise endpoints in the future.
Service: Azure Stack HCI. URL: *.platform.edge.azure.com. Port: 443. Notes: Data plane used for licensing and for pushing alerting and billing data. Required only for Azure Stack HCI, version 23H2.
Service: Azure Stack HCI. URL: azurestackhci.azurefd.net. Port: 443. Notes: Previous URL for the data plane. This URL was recently changed; customers who registered their cluster using this old URL must allowlist it as well.
Service: Arc For Servers. URL: *.blob.core.windows.net. Port: 443. Notes: Download source for Azure Arc-enabled servers extensions.
For a comprehensive list of all the firewall URLs, download the firewall URLs
spreadsheet .
When using the Cluster Creation wizard in Windows Admin Center to create the cluster,
the wizard automatically opens the appropriate firewall ports on each server in the
cluster for Failover Clustering, Hyper-V, and Storage Replica. If you're using a different
firewall on each server, open the ports as described in the following sections:
Purpose: Provide access to Azure and Microsoft Update. Action: Allow. Source: Windows Admin Center. Destination: Azure Stack HCI. Protocol: TCP. Port: 445.
Purpose: Use Windows Remote Management (WinRM) 2.0 for HTTP connections to run commands on remote Windows servers. Action: Allow. Source: Windows Admin Center. Destination: Azure Stack HCI. Protocol: TCP. Port: 5985.
Purpose: Use WinRM 2.0 for HTTPS connections to run commands on remote Windows servers. Action: Allow. Source: Windows Admin Center. Destination: Azure Stack HCI. Protocol: TCP. Port: 5986.
7 Note
While installing Windows Admin Center, if you select the Use WinRM over HTTPS
only setting, then port 5986 is required.
Failover Clustering
Ensure that the following firewall rules are configured in your on-premises firewall for
Failover Clustering.
7 Note
The management system includes any computer from which you plan to administer
the cluster, using tools such as Windows Admin Center, Windows PowerShell or
System Center Virtual Machine Manager.
Hyper-V
Ensure that the following firewall rules are configured in your on-premises firewall for
Hyper-V.
7 Note
Open up a range of ports above port 5000 to allow RPC dynamic port allocation.
Ports below 5000 may already be in use by other applications and could cause
conflicts with DCOM applications. Previous experience shows that a minimum of
100 ports should be opened, because several system services rely on these RPC
ports to communicate with each other. For more information, see How to
configure RPC dynamic port allocation to work with firewalls.
Rule: Allow Server Message Block (SMB) protocol. Action: Allow. Source: Stretched cluster servers. Destination: Stretched cluster servers. Protocol: TCP. Port: 445.
Rule: Allow Web Services-Management (WS-MAN). Action: Allow. Source: Stretched cluster servers. Destination: Stretched cluster servers. Protocol: TCP. Port: 5985.
Rule: Allow ICMPv4 and ICMPv6 (if using the Test-SRTopology PowerShell cmdlet). Action: Allow. Source: Stretched cluster servers. Destination: Stretched cluster servers. Protocol: n/a. Port: n/a.
1. Download the JSON file from the following resource to the target computer
running the operating system: Azure IP Ranges and Service Tags – Public Cloud .
PowerShell
3. Get the list of IP address ranges for a given service tag, such as the
"AzureResourceManager" service tag:
PowerShell
5. Create a firewall rule for each server in the cluster to allow outbound 443 (HTTPS)
traffic to the list of IP address ranges:
PowerShell
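The commands for these steps aren't reproduced here. A sketch under the assumption that the downloaded JSON file is saved as C:\Temp\ServiceTags_Public.json:
PowerShell
# Step 1 (after download): load the service tag file
$serviceTags = Get-Content -Path "C:\Temp\ServiceTags_Public.json" -Raw | ConvertFrom-Json

# Step 3: get the IP address ranges for the AzureResourceManager service tag
$ranges = ($serviceTags.values | Where-Object { $_.name -eq "AzureResourceManager" }).properties.addressPrefixes

# Step 5: allow outbound HTTPS (443) traffic to those ranges on each server
New-NetFirewallRule -DisplayName "Allow outbound to AzureResourceManager" -Direction Outbound `
    -Action Allow -Protocol TCP -RemotePort 443 -RemoteAddress $ranges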
Next steps
For more information, see also:
The Windows Firewall and WinRM 2.0 ports section of Installation and
configuration for Windows Remote Management
In this article, you'll get an overview of deploying network reference patterns on Azure Stack HCI.
Two storage ports dedicated for storage traffic intent. The RDMA NIC is optional
for single-server deployments.
If switchless is used, configuration is limited to the host, which may reduce the
potential number of configuration steps needed. However, this value diminishes as
the cluster size increases.
Switchless has the lowest level of resiliency, and it comes with extra complexity and planning if it needs to be scaled up after the initial deployment. Storage connectivity needs to be enabled when adding the second node, which requires you to define the physical connectivity needed between nodes.
As the number of nodes in the cluster grows beyond two nodes, the cost of
network adapters could exceed the cost of using network switches.
For more information, see Physical network requirements for Azure Stack HCI.
Firewall requirements
Azure Stack HCI requires periodic connectivity to Azure. If your organization's outbound
firewall is restricted, you would need to include firewall requirements for outbound
endpoints and internal rules and ports. There are required and recommended endpoints
for the Azure Stack HCI core components, which include cluster creation, registration
and billing, Microsoft Update, and cloud cluster witness.
See the firewall requirements for a complete list of endpoints. Make sure to include these URLs in your allowed list. The proper network ports need to be opened between all server nodes, both within a site and between sites (for stretched clusters).
With Azure Stack HCI, the connectivity validator of the Environment Checker tool checks the outbound connectivity requirement by default during deployment. Additionally, you can run the Environment Checker tool standalone before, during, or after deployment to evaluate the outbound connectivity of your environment.
A best practice is to have all relevant endpoints in a data file that can be accessed by the
environment checker tool. The same file can also be shared with your firewall
administrator to open up the necessary ports and URLs.
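As a sketch, assuming the Environment Checker module published to the PowerShell Gallery, the standalone connectivity check mentioned above can be run like this:
PowerShell
# Install the Environment Checker module and run the connectivity validator standalone
Install-Module -Name AzStackHci.EnvironmentChecker -Repository PSGallery
Invoke-AzStackHciConnectivityValidation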
Next steps
Choose a network pattern to review.
Azure Stack HCI network deployment
patterns
Article • 07/20/2023
This article describes a set of network reference patterns to architect, deploy, and
configure Azure Stack HCI using either one or two physical hosts. Depending on your
needs or scenarios, you can go directly to your pattern of interest. Each pattern is
described as a standalone entity and includes all the network components for specific
scenarios.
Next steps
Download Azure Stack HCI
Review single-server storage
deployment network reference pattern
for Azure Stack HCI
Article • 03/04/2024
In this article, you'll learn about the single-server storage network reference pattern that
you can use to deploy your Azure Stack HCI solution. The information in this article will
also help you determine if this configuration is viable for your deployment planning
needs. This article is targeted towards the IT administrators who deploy and manage
Azure Stack HCI in their datacenters.
For information on other network patterns, see Azure Stack HCI network deployment
patterns.
Introduction
Single-server deployments provide cost and space benefits while helping to modernize
your infrastructure and bring Azure hybrid computing to locations that can tolerate the
resiliency of a single server. Azure Stack HCI running on a single server behaves similarly
to Azure Stack HCI on a multi-node cluster: it brings native Azure Arc integration, the
ability to add servers to scale out the cluster, and it includes the same Azure benefits.
It also supports the same workloads, such as Azure Virtual Desktop (AVD) and AKS
hybrid, and is supported and billed the same way.
Scenarios
Use the single-server storage pattern in the following scenarios:
Facilities that can tolerate a lower level of resiliency. Consider implementing this pattern whenever your location or the service provided by this pattern can tolerate a lower level of resiliency without impacting your business.
Network security features such as microsegmentation and Quality of Service (QoS) don't
require extra configuration for the firewall device, as they're implemented at the virtual
network adapter layer. For more information, see Microsegmentation with Azure Stack
HCI .
Management and compute network: two teamed ports.
Storage network: optional to allow adding a second server; disconnected ports.
BMC network: one port.
The storage intent has the following characteristics:
Follow these steps to create a network intent for this reference pattern:
PowerShell
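The intent-creation command isn't reproduced here. A minimal Network ATC sketch, with assumed adapter names, for the management and compute intent described by this pattern (add the separate storage intent only if you use the optional RDMA ports):
PowerShell
# Create a combined management and compute intent on the two teamed ports
Add-NetIntent -Name "MgmtCompute" -Management -Compute -AdapterName "NIC1","NIC2"

# Optional: create a storage intent on the dedicated storage ports
Add-NetIntent -Name "Storage" -Storage -AdapterName "SMB1","SMB2"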
For more information, see Deploy host networking: Compute and management intent.
OOB network
The Out of Band (OOB) network is dedicated to supporting the "lights-out" server
management interface also known as the baseboard management controller (BMC).
Each BMC interface connects to a customer-supplied switch. The BMC is used to
automate PXE boot scenarios.
The management network requires access to the BMC interface using Intelligent
Platform Management Interface (IPMI) User Datagram Protocol (UDP) port 623.
The OOB network is isolated from compute workloads and is optional for non-solution-
based deployments.
Management VLAN
All physical compute hosts require access to the management logical network. For IP
address planning, each physical compute host must have at least one IP address
assigned from the management logical network.
A DHCP server can automatically assign IP addresses for the management network, or
you can manually assign static IP addresses. When DHCP is the preferred IP assignment
method, we recommend that you use DHCP reservations without expiration.
Native VLAN - you aren't required to supply VLAN IDs. This is required for solution-based installations.
Tagged VLAN - you supply VLAN IDs at the time of deployment.
Gateways use Border Gateway Protocol to advertise GRE endpoints and establish point-
to-point connections. SDN deployment creates a default gateway pool that supports all
connection types. Within this pool, you can specify how many gateways are reserved on
standby in case an active gateway fails.
The management network supports all traffic used for management of the cluster,
including Remote Desktop, Windows Admin Center, and Active Directory.
For more information, see Plan an SDN infrastructure: Management and HNV Provider.
Compute VLANs
In some scenarios, you don’t need to use SDN Virtual Networks with Virtual Extensible
LAN (VXLAN) encapsulation. Instead, you can use traditional VLANs to isolate your
tenant workloads. Those VLANs are configured on the TOR switch's port in trunk mode.
When connecting new VMs to these VLANs, the corresponding VLAN tag is defined on
the virtual network adapter.
HNV Provider Address (PA) network
The Hyper-V Network Virtualization (HNV) Provider Address (PA) network serves as the
underlying physical network for East/West (internal-internal) tenant traffic, North/South
(external-internal) tenant traffic, and to exchange BGP peering information with the
physical network. This network is only required when there's a need for deploying virtual
networks using VXLAN encapsulation for another layer of isolation and for network
multitenancy.
For more information, see Plan an SDN infrastructure: Management and HNV Provider.
For more information, see Understand the usage of virtual networks and VLANs.
Virtual networks
Network virtualization provides virtual networks to VMs similar to how server
virtualization (hypervisor) provides VMs to the operating system. Network virtualization
decouples virtual networks from the physical network infrastructure and removes the
constraints of VLAN and hierarchical IP address assignment from VM provisioning. Such
flexibility makes it easy for you to move to Infrastructure as a Service (IaaS) clouds, lets hosters and datacenter administrators manage their infrastructure efficiently, and maintains the necessary multi-tenant isolation, security requirements, and support for overlapping VM IP addresses.
Traffic between VMs in the peered virtual networks gets routed through the
backbone infrastructure through private IP addresses only. The communication
between the virtual networks doesn't require public Internet or gateways.
A low-latency, high-bandwidth connection between resources in different virtual
networks.
The ability for resources in one virtual network to communicate with resources in a
different virtual network.
No downtime to resources in either virtual network when creating the peering.
Using SLB, you can scale out your load balancing capabilities using SLB VMs on the
same Hyper-V compute servers that you use for your other VM workloads. SLB supports
rapid creation and deletion of load balancing endpoints as required for CSP operations.
In addition, SLB supports tens of gigabytes per cluster, provides a simple provisioning
model, and is easy to scale out and in. SLB uses Border Gateway Protocol to advertise
virtual IP addresses to the physical network.
Create secure site-to-site IPsec connections between SDN virtual networks and
external customer networks over the internet.
For more information about GRE connectivity scenarios, see GRE Tunneling in
Windows Server.
Create Layer 3 (L3) connections between SDN virtual networks and external
networks. In this case, the SDN gateway simply acts as a router between your
virtual network and the external network.
SDN Gateway requires SDN Network Controller. Network Controller performs the deployment of gateway pools, configures tenant connections on each gateway, and switches network traffic flows to a standby gateway if a gateway fails.
Next steps
Learn about two-node patterns - Azure Stack HCI network deployment patterns.
Review single-server storage reference
pattern components for Azure Stack HCI
Article • 12/12/2022
In this article, you'll learn about which network components are deployed for the single-
server reference pattern, as shown in the following diagram:
Optional components
The following are optional components. For more information on Software Defined
Networking (SDN), see Plan a Software Defined Network infrastructure.
Create and manage virtual networks or connect VMs to virtual network subnets.
Configure Quality of Service (QoS) policies for VMs attached to virtual networks or
traditional VLAN-based networks.
The SDN Software Load Balancer (SLB) VM is used to evenly distribute network traffic
among multiple VMs. It enables multiple servers to host the same workload, providing
high availability and scalability. It is also used to provide inbound Network Address
Translation (NAT) services for inbound access to VMs, and outbound NAT services for
outbound connectivity.
SDN Gateway VM
The SDN Gateway VM is used to route network traffic between a virtual network and
another network, either local or remote. SDN Gateways can be used to:
Create secure site-to-site IPsec connections between SDN virtual networks and
external networks over the internet.
Create Layer 3 (L3) connections between SDN virtual networks and external
networks. In this case, SDN Gateway simply acts as a router between your virtual
network and the external network.
Host agents
The following components run as services or agents on the host server:
Arc host agent: Enables you to manage your Windows and Linux computers hosted
outside of Azure on your corporate network or other cloud providers.
Network Controller host agent: Allows Network Controller to manage the goal state of
the data plane, and to receive notification of events as the configuration of the data
plane changes.
Software Load Balancer host agent: Listens for policy updates from the Network
Controller. In addition, this agent programs agent rules into the SDN-enabled Hyper-V
virtual switches that are configured on the local computer.
SDN Gateways 1 60 GB 30 GB 8 8 GB
Next steps
Learn about single-server IP requirements.
Review single-server storage reference
pattern IP requirements for Azure Stack
HCI
Article • 07/20/2023
In this article, learn about the IP requirements for deploying a single-server network
reference pattern in your environment.
Total 2 required; 2 optional for storage, 1 optional for OEM VM.
Total 4 required; 2 optional for storage, 1 optional for OEM VM.
VM management stack VM, 1 IP for OEM VM (new).
Single node: 1 Network Controller VM IP, 1 Software Load Balancer (SLB) VM IP, 1 gateway VM IP.
L3 Forwarding: N/A; separate physical subnet to communicate with virtual network.
(Table columns: Network IP component, Network ATC intent, Network routing, Subnet properties, Required IPs.)
Total 6 required; 2 optional for storage, 1 optional for OEM VM.
Next steps
Download Azure Stack HCI
Review two-node storage switchless,
single switch deployment network
reference pattern for Azure Stack HCI
Article • 12/16/2022
In this article, you'll learn about the two-node storage switchless with single TOR switch
network reference pattern that you can use to deploy your Azure Stack HCI solution. The
information in this article will also help you determine if this configuration is viable for
your deployment planning needs. This article is targeted towards the IT administrators
who deploy and manage Azure Stack HCI in their datacenters.
For information on other network patterns, see Azure Stack HCI network deployment
patterns.
Scenarios
Scenarios for this network pattern include laboratories, factories, retail stores, and
government facilities.
Consider this pattern for a cost-effective solution that includes fault-tolerance at the
cluster level, but can tolerate northbound connectivity interruptions if the single physical
switch fails or requires maintenance.
You can scale out this pattern, but doing so requires workload downtime to reconfigure
the physical storage connectivity and the storage network. Although SDN L3 services are
fully supported for this pattern, routing services such as BGP need to be configured on
the firewall device on top of the TOR switch if the switch doesn't support L3 services.
Network security features such as microsegmentation and QoS don't require extra
configuration on the firewall device, as they're implemented on the virtual switch.
Two RDMA NICs in a full-mesh configuration for east-west traffic for storage. Each
node in the cluster has a redundant connection to the other node in the cluster.
Follow these steps to create network intents for this reference pattern:
PowerShell
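# A minimal sketch (not verbatim from this article) using the NetworkATC module's
# Add-NetIntent cmdlet. The adapter names pNIC01-pNIC04 are placeholders; substitute the
# names that Get-NetAdapter reports on your nodes.
# Management and compute traffic shares the teamed adapters connected to the TOR switch.
Add-NetIntent -Name "ManagementCompute" -Management -Compute -AdapterName "pNIC01", "pNIC02"
# Storage traffic uses the two RDMA adapters in the switchless full-mesh configuration.
Add-NetIntent -Name "Storage" -Storage -AdapterName "pNIC03", "pNIC04"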
OOB network
The Out of Band (OOB) network is dedicated to supporting the "lights-out" server
management interface also known as the baseboard management controller (BMC).
Each BMC interface connects to a customer-supplied switch. The BMC is used to
automate PXE boot scenarios.
The management network requires access to the BMC interface using Intelligent
Platform Management Interface (IPMI) User Datagram Protocol (UDP) port 623.
The OOB network is isolated from compute workloads and is optional for non-solution-
based deployments.
Management VLAN
All physical compute hosts require access to the management logical network. For IP
address planning, each physical compute host must have at least one IP address
assigned from the management logical network.
A DHCP server can automatically assign IP addresses for the management network, or
you can manually assign static IP addresses. When DHCP is the preferred IP assignment
method, we recommend that you use DHCP reservations without expiration.
Native VLAN - you aren't required to supply VLAN IDs. This is required for
solution-based installations.
The management network supports all traffic used for management of the cluster,
including Remote Desktop, Windows Admin Center, and Active Directory.
For more information, see Plan an SDN infrastructure: Management and HNV Provider.
Compute VLANs
In some scenarios, you don’t need to use SDN Virtual Networks with Virtual Extensible
LAN (VXLAN) encapsulation. Instead, you can use traditional VLANs to isolate your
tenant workloads. Those VLANs are configured on the TOR switch's port in trunk mode.
When connecting new VMs to these VLANs, the corresponding VLAN tag is defined on
the virtual network adapter.
For more information, see Plan an SDN infrastructure: Management and HNV Provider.
For more information, see Understand the usage of virtual networks and VLANs.
Virtual networks
Network virtualization provides virtual networks to VMs similar to how server
virtualization (hypervisor) provides VMs to the operating system. Network virtualization
decouples virtual networks from the physical network infrastructure and removes the
constraints of VLAN and hierarchical IP address assignment from VM provisioning. Such
flexibility makes it easy for you to move to Infrastructure-as-a-Service (IaaS) clouds and
lets hosters and datacenter administrators manage their infrastructure efficiently, while
maintaining the necessary multi-tenant isolation, security requirements, and support for
overlapping VM IP addresses.
Traffic between VMs in the peered virtual networks gets routed through the
backbone infrastructure through private IP addresses only. The communication
between the virtual networks doesn't require public Internet or gateways.
A low-latency, high-bandwidth connection between resources in different virtual
networks.
The ability for resources in one virtual network to communicate with resources in a
different virtual network.
No downtime to resources in either virtual network when creating the peering.
Using SLB, you can scale out your load balancing capabilities using SLB VMs on the
same Hyper-V compute servers that you use for your other VM workloads. SLB supports
rapid creation and deletion of load balancing endpoints as required for CSP operations.
In addition, SLB supports tens of gigabytes per cluster, provides a simple provisioning
model, and is easy to scale out and in. SLB uses Border Gateway Protocol to advertise
virtual IP addresses to the physical network.
Create secure site-to-site IPsec connections between SDN virtual networks and
external customer networks over the internet.
For more information about GRE connectivity scenarios, see GRE Tunneling in
Windows Server.
Create Layer 3 (L3) connections between SDN virtual networks and external
networks. In this case, the SDN gateway simply acts as a router between your
virtual network and the external network.
SDN Gateway requires SDN Network Controller. Network Controller performs the
deployment of gateway pools, configures tenant connections on each gateway, and
switches network traffic flows to a standby gateway if a gateway fails.
Gateways use Border Gateway Protocol to advertise GRE endpoints and establish point-
to-point connections. SDN deployment creates a default gateway pool that supports all
connection types. Within this pool, you can specify how many gateways are reserved on
standby in case an active gateway fails.
Next steps
Learn about the two-node storage switchless, two switches network pattern.
Review two-node storage switchless,
two switches deployment network
reference pattern for Azure Stack HCI
Article • 12/16/2022
In this article, you'll learn about the two-node storage switchless with two TOR L3
switches network reference pattern that you can use to deploy your Azure Stack HCI
solution. The information in this article will also help you determine if this configuration
is viable for your deployment planning needs. This article is targeted towards the IT
administrators who deploy and manage Azure Stack HCI in their datacenters.
For information on other network patterns, see Azure Stack HCI network deployment
patterns.
Scenarios
Scenarios for this network pattern include laboratories, branch offices, and datacenter
facilities.
Consider implementing this pattern when looking for a cost-efficient solution that has
fault tolerance across all the network components. It's possible to scale out this
pattern, but doing so requires workload downtime to reconfigure the physical storage
connectivity and the storage network. SDN L3 services are fully supported on this pattern.
Routing services such as BGP can be configured directly on the TOR switches if they
support L3 services. Network security features such as micro-segmentation and QoS do
not require additional configuration for the firewall device as they are implemented at
the virtual network adapter layer.
For northbound/southbound traffic, the cluster requires two TOR switches in MLAG
configuration.
Two teamed network cards to handle management and compute traffic, and
connected to the TOR switches. Each NIC will be connected to a different TOR
switch.
Two RDMA NICs in a full-mesh configuration for East-West storage traffic. Each
node in the cluster has a redundant connection to the other node in the cluster.
Storage intent
Intent type: Storage
Intent mode: Cluster mode
Teaming: pNIC03 and pNIC04 use SMB Multichannel to provide resiliency and
bandwidth aggregation
Default VLANs:
711 for storage network 1
712 for storage network 2
Default subnets:
10.71.1.0/24 for storage network 1
10.71.2.0/24 for storage network 2
Follow these steps to create network intents for this reference pattern:
PowerShell
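# A minimal sketch (not verbatim from this article); adapter names pNIC01-pNIC04 are
# placeholders for the adapters on your servers. The storage intent picks up the default
# Network ATC storage VLANs (711 and 712) described above unless you override them.
Add-NetIntent -Name "ManagementCompute" -Management -Compute -AdapterName "pNIC01", "pNIC02"
Add-NetIntent -Name "Storage" -Storage -AdapterName "pNIC03", "pNIC04"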
OOB network
The Out of Band (OOB) network is dedicated to supporting the "lights-out" server
management interface also known as the baseboard management controller (BMC).
Each BMC interface connects to a customer-supplied switch. The BMC is used to
automate PXE boot scenarios.
The management network requires access to the BMC interface using Intelligent
Platform Management Interface (IPMI) User Datagram Protocol (UDP) port 623.
The OOB network is isolated from compute workloads and is optional for non-solution-
based deployments.
Management VLAN
All physical compute hosts require access to the management logical network. For IP
address planning, each physical compute host must have at least one IP address
assigned from the management logical network.
A DHCP server can automatically assign IP addresses for the management network, or
you can manually assign static IP addresses. When DHCP is the preferred IP assignment
method, we recommend that you use DHCP reservations without expiration.
Native VLAN - you aren't required to supply VLAN IDs. This is required for
solution-based installations.
The management network supports all traffic used for management of the cluster,
including Remote Desktop, Windows Admin Center, and Active Directory.
For more information, see Plan an SDN infrastructure: Management and HNV Provider.
Compute VLANs
In some scenarios, you don’t need to use SDN Virtual Networks with Virtual Extensible
LAN (VXLAN) encapsulation. Instead, you can use traditional VLANs to isolate your
tenant workloads. Those VLANs are configured on the TOR switch's port in trunk mode.
When connecting new VMs to these VLANs, the corresponding VLAN tag is defined on
the virtual network adapter.
For more information, see Plan an SDN infrastructure: Management and HNV Provider.
For more information, see Understand the usage of virtual networks and VLANs.
Virtual networks
Network virtualization provides virtual networks to VMs similar to how server
virtualization (hypervisor) provides VMs to the operating system. Network virtualization
decouples virtual networks from the physical network infrastructure and removes the
constraints of VLAN and hierarchical IP address assignment from VM provisioning. Such
flexibility makes it easy for you to move to Infrastructure-as-a-Service (IaaS) clouds and
lets hosters and datacenter administrators manage their infrastructure efficiently, while
maintaining the necessary multi-tenant isolation, security requirements, and support for
overlapping VM IP addresses.
Traffic between VMs in the peered virtual networks gets routed through the
backbone infrastructure through private IP addresses only. The communication
between the virtual networks doesn't require public Internet or gateways.
A low-latency, high-bandwidth connection between resources in different virtual
networks.
The ability for resources in one virtual network to communicate with resources in a
different virtual network.
No downtime to resources in either virtual network when creating the peering.
Using SLB, you can scale out your load balancing capabilities using SLB VMs on the
same Hyper-V compute servers that you use for your other VM workloads. SLB supports
rapid creation and deletion of load balancing endpoints as required for CSP operations.
In addition, SLB supports tens of gigabytes per cluster, provides a simple provisioning
model, and is easy to scale out and in. SLB uses Border Gateway Protocol to advertise
virtual IP addresses to the physical network.
Create secure site-to-site IPsec connections between SDN virtual networks and
external customer networks over the internet.
For more information about GRE connectivity scenarios, see GRE Tunneling in
Windows Server.
Create Layer 3 (L3) connections between SDN virtual networks and external
networks. In this case, the SDN gateway simply acts as a router between your
virtual network and the external network.
SDN Gateway requires SDN Network Controller. Network Controller performs the
deployment of gateway pools, configures tenant connections on each gateway, and
switches network traffic flows to a standby gateway if a gateway fails.
Gateways use Border Gateway Protocol to advertise GRE endpoints and establish point-
to-point connections. SDN deployment creates a default gateway pool that supports all
connection types. Within this pool, you can specify how many gateways are reserved on
standby in case an active gateway fails.
Next steps
Learn about the two-node storage switchless, one switch network pattern.
Review two-node storage switched,
non-converged deployment network
reference pattern for Azure Stack HCI
Article • 12/16/2022
In this article, you'll learn about the two-node storage switched, non-converged, two-
TOR-switch network reference pattern that you can use to deploy your Azure Stack HCI
solution. The information in this article will also help you determine if this configuration
is viable for your deployment planning needs. This article is targeted towards the IT
administrators who deploy and manage Azure Stack HCI in their datacenters.
For information on other network patterns, see Azure Stack HCI network deployment
patterns.
Scenarios
Scenarios for this network pattern include laboratories, factories, branch offices, and
datacenter facilities.
Deploy this pattern for enhanced network performance of your system and if you plan
to add more nodes. East-West storage replication traffic won't interfere or compete
with the north-south traffic dedicated to management and compute. The logical network
configuration for additional nodes is ready without requiring workload downtime or
physical connection changes. SDN L3 services are fully supported on this
pattern.
Routing services such as BGP can be configured directly on the TOR switches if they
support L3 services. Network security features such as microsegmentation and QoS
don't require extra configuration on the firewall device as they're implemented at the
virtual network adapter layer.
Two teamed network cards to handle management and compute traffic connected
to two TOR switches. Each NIC is connected to a different TOR switch.
(Table columns: Networks, Management and compute, Storage, BMC.)
Follow these steps to create network intents for this reference pattern:
PowerShell
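# A minimal sketch for the non-converged layout (placeholder adapter names): pNIC01 and
# pNIC02 are the teamed management/compute adapters, pNIC03 and pNIC04 are the RDMA
# adapters connected to the TOR switches for storage.
Add-NetIntent -Name "ManagementCompute" -Management -Compute -AdapterName "pNIC01", "pNIC02"
Add-NetIntent -Name "Storage" -Storage -AdapterName "pNIC03", "pNIC04"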
The storage adapters operate in different IP subnets. Each storage network uses the ATC
predefined VLANs by default (711 and 712). However, these VLANs can be customized if
necessary. In addition, if the default subnet defined by ATC isn't usable, you're
responsible for assigning all storage IP addresses in the cluster.
The management network requires access to the BMC interface using Intelligent
Platform Management Interface (IPMI) User Datagram Protocol (UDP) port 623.
The OOB network is isolated from compute workloads and is optional for non-solution-
based deployments.
Management VLAN
All physical compute hosts require access to the management logical network. For IP
address planning, each physical compute host must have at least one IP address
assigned from the management logical network.
A DHCP server can automatically assign IP addresses for the management network, or
you can manually assign static IP addresses. When DHCP is the preferred IP assignment
method, we recommend that you use DHCP reservations without expiration.
Native VLAN - you aren't required to supply VLAN IDs. This is required for
solution-based installations.
The management network supports all traffic used for management of the cluster,
including Remote Desktop, Windows Admin Center, and Active Directory.
For more information, see Plan an SDN infrastructure: Management and HNV Provider.
Compute VLANs
In some scenarios, you don’t need to use SDN Virtual Networks with Virtual Extensible
LAN (VXLAN) encapsulation. Instead, you can use traditional VLANs to isolate your
tenant workloads. Those VLANs are configured on the TOR switch's port in trunk mode.
When connecting new VMs to these VLANs, the corresponding VLAN tag is defined on
the virtual network adapter.
HNV Provider Address (PA) network
The Hyper-V Network Virtualization (HNV) Provider Address (PA) network serves as the
underlying physical network for East/West (internal-internal) tenant traffic, North/South
(external-internal) tenant traffic, and to exchange BGP peering information with the
physical network. This network is only required when there's a need for deploying virtual
networks using VXLAN encapsulation for another layer of isolation and for network
multitenancy.
For more information, see Plan an SDN infrastructure: Management and HNV Provider.
For more information, see Understand the usage of virtual networks and VLANs.
Virtual networks
Network virtualization provides virtual networks to VMs similar to how server
virtualization (hypervisor) provides VMs to the operating system. Network virtualization
decouples virtual networks from the physical network infrastructure and removes the
constraints of VLAN and hierarchical IP address assignment from VM provisioning. Such
flexibility makes it easy for you to move to Infrastructure-as-a-Service (IaaS) clouds and
lets hosters and datacenter administrators manage their infrastructure efficiently, while
maintaining the necessary multi-tenant isolation, security requirements, and support for
overlapping VM IP addresses.
Traffic between VMs in the peered virtual networks gets routed through the
backbone infrastructure through private IP addresses only. The communication
between the virtual networks doesn't require public Internet or gateways.
A low-latency, high-bandwidth connection between resources in different virtual
networks.
The ability for resources in one virtual network to communicate with resources in a
different virtual network.
No downtime to resources in either virtual network when creating the peering.
Using SLB, you can scale out your load balancing capabilities using SLB VMs on the
same Hyper-V compute servers that you use for your other VM workloads. SLB supports
rapid creation and deletion of load balancing endpoints as required for CSP operations.
In addition, SLB supports tens of gigabytes per cluster, provides a simple provisioning
model, and is easy to scale out and in. SLB uses Border Gateway Protocol to advertise
virtual IP addresses to the physical network.
Create secure site-to-site IPsec connections between SDN virtual networks and
external customer networks over the internet.
For more information about GRE connectivity scenarios, see GRE Tunneling in
Windows Server.
Create Layer 3 (L3) connections between SDN virtual networks and external
networks. In this case, the SDN gateway simply acts as a router between your
virtual network and the external network.
SDN Gateway requires SDN Network Controller. Network Controller performs the
deployment of gateway pools, configures tenant connections on each gateway, and
switches network traffic flows to a standby gateway if a gateway fails.
Gateways use Border Gateway Protocol to advertise GRE endpoints and establish point-
to-point connections. SDN deployment creates a default gateway pool that supports all
connection types. Within this pool, you can specify how many gateways are reserved on
standby in case an active gateway fails.
Next steps
Learn about the two-node storage switched, fully converged network pattern.
Review two-node storage switched, fully
converged deployment network
reference pattern for Azure Stack HCI
Article • 12/12/2022
In this article, you'll learn about the two-node storage switched, fully converged with
two TOR switches network reference pattern that you can use to deploy your Azure
Stack HCI solution. The information in this article will also help you determine if this
configuration is viable for your deployment planning needs. This article is targeted
towards the IT administrators who deploy and manage Azure Stack HCI in their
datacenters.
For information on other network patterns, see Azure Stack HCI network deployment
patterns.
Scenarios
Scenarios for this network pattern include laboratories, branch offices, and datacenter
facilities.
Consider this pattern if you plan to add more nodes and your bandwidth
requirements for north-south traffic don't require dedicated adapters. This solution
might be a good option when physical switch ports are scarce and you're looking for
cost reductions for your solution. This pattern requires additional operational costs to
fine-tune the QoS policies of the shared host network adapters so that storage traffic is
protected from workload and management traffic. SDN L3 services are fully supported on this pattern.
Routing services such as BGP can be configured directly on the TOR switches if they
support L3 services. Network security features such as microsegmentation and QoS
don't require extra configuration on the firewall device as they're implemented at the
virtual network adapter layer.
Two teamed network cards handle the management, compute, and RDMA storage
traffic connected to the TOR switches. Each NIC is connected to a different TOR
switch. SMB multichannel capability provides path aggregation and fault tolerance.
Follow these steps to create network intents for this reference pattern:
PowerShell
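# A minimal sketch for the fully converged layout (placeholder adapter names pNIC01 and
# pNIC02): a single intent carries management, compute, and RDMA storage traffic over
# the same pair of adapters connected to the TOR switches.
Add-NetIntent -Name "ConvergedIntent" -Management -Compute -Storage -AdapterName "pNIC01", "pNIC02"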
The storage network operates in different IP subnets. Each storage network uses the ATC
predefined VLANs by default (711 and 712). However, these VLANs can be customized if
necessary. In addition, if the default subnet defined by ATC isn't usable, you're
responsible for assigning all storage IP addresses in the cluster.
The management network requires access to the BMC interface using Intelligent
Platform Management Interface (IPMI) User Datagram Protocol (UDP) port 623.
The OOB network is isolated from compute workloads and is optional for non-solution-
based deployments.
Management VLAN
All physical compute hosts require access to the management logical network. For IP
address planning, each physical compute host must have at least one IP address
assigned from the management logical network.
A DHCP server can automatically assign IP addresses for the management network, or
you can manually assign static IP addresses. When DHCP is the preferred IP assignment
method, we recommend that you use DHCP reservations without expiration.
Native VLAN - you aren't required to supply VLAN IDs. This is required for
solution-based installations.
The management network supports all traffic used for management of the cluster,
including Remote Desktop, Windows Admin Center, and Active Directory.
For more information, see Plan an SDN infrastructure: Management and HNV Provider.
Compute VLANs
In some scenarios, you don’t need to use SDN Virtual Networks with Virtual Extensible
LAN (VXLAN) encapsulation. Instead, you can use traditional VLANs to isolate your
tenant workloads. Those VLANs are configured on the TOR switch's port in trunk mode.
When connecting new VMs to these VLANs, the corresponding VLAN tag is defined on
the virtual network adapter.
HNV Provider Address (PA) network
The Hyper-V Network Virtualization (HNV) Provider Address (PA) network serves as the
underlying physical network for East/West (internal-internal) tenant traffic, North/South
(external-internal) tenant traffic, and to exchange BGP peering information with the
physical network. This network is only required when there's a need for deploying virtual
networks using VXLAN encapsulation for another layer of isolation and for network
multitenancy.
For more information, see Plan an SDN infrastructure: Management and HNV Provider.
For more information, see Understand the usage of virtual networks and VLANs.
Virtual networks
Network virtualization provides virtual networks to VMs similar to how server
virtualization (hypervisor) provides VMs to the operating system. Network virtualization
decouples virtual networks from the physical network infrastructure and removes the
constraints of VLAN and hierarchical IP address assignment from VM provisioning. Such
flexibility makes it easy for you to move to Infrastructure-as-a-Service (IaaS) clouds and
lets hosters and datacenter administrators manage their infrastructure efficiently, while
maintaining the necessary multi-tenant isolation, security requirements, and support for
overlapping VM IP addresses.
Traffic between VMs in the peered virtual networks gets routed through the
backbone infrastructure through private IP addresses only. The communication
between the virtual networks doesn't require public Internet or gateways.
A low-latency, high-bandwidth connection between resources in different virtual
networks.
The ability for resources in one virtual network to communicate with resources in a
different virtual network.
No downtime to resources in either virtual network when creating the peering.
Using SLB, you can scale out your load balancing capabilities using SLB VMs on the
same Hyper-V compute servers that you use for your other VM workloads. SLB supports
rapid creation and deletion of load balancing endpoints as required for CSP operations.
In addition, SLB supports tens of gigabytes per cluster, provides a simple provisioning
model, and is easy to scale out and in. SLB uses Border Gateway Protocol to advertise
virtual IP addresses to the physical network.
Create secure site-to-site IPsec connections between SDN virtual networks and
external customer networks over the internet.
For more information about GRE connectivity scenarios, see GRE Tunneling in
Windows Server.
Create Layer 3 (L3) connections between SDN virtual networks and external
networks. In this case, the SDN gateway simply acts as a router between your
virtual network and the external network.
SDN Gateway requires SDN Network Controller. Network Controller performs the
deployment of gateway pools, configures tenant connections on each gateway, and
switches network traffic flows to a standby gateway if a gateway fails.
Gateways use Border Gateway Protocol to advertise GRE endpoints and establish point-
to-point connections. SDN deployment creates a default gateway pool that supports all
connection types. Within this pool, you can specify how many gateways are reserved on
standby in case an active gateway fails.
Next steps
Learn about the two-node storage switched, non-converged network pattern.
Review two-node storage reference
pattern components for Azure Stack HCI
Article • 12/12/2022
In this article, you'll learn about which network components get deployed for two-node
reference patterns, as shown below:
VM components
The following table lists all the components running on VMs for two-node network
patterns:
SDN Gateways 1 60 GB 30 GB 8 8 GB
Network Controller VM
The Network Controller VM is optional. If the Network Controller VM isn't deployed, the
default network access policies won't be available. Additionally, it's needed if you have
any of the following requirements:
Create and manage virtual networks. Connect virtual machines (VMs) to virtual
network subnets.
Configure Quality of Service (QoS) policies for VMs attached to virtual networks or
traditional VLAN-based networks.
Optional components
The following are optional components. For more information on Software Defined
Networking (SDN), see Plan a Software Defined Network infrastructure.
The SDN Software Load Balancer (SLB) VM is used to evenly distribute customer
network traffic among multiple VMs. It enables multiple servers to host the same
workload, providing high availability and scalability. It's also used to provide inbound
Network Address Translation (NAT) services for inbound access to virtual machines, and
outbound NAT services for outbound connectivity.
SDN Gateway VM
The SDN Gateway VM is used for routing network traffic between a virtual network and
another network, either local or remote. Gateways can be used to:
Create secure site-to-site IPsec connections between SDN virtual networks and
external customer networks over the internet.
Create Layer 3 connections between SDN virtual networks and external networks.
In this case, the SDN gateway simply acts as a router between your virtual network
and the external network.
Host agents
The following components run as services or agents on the host server:
Arc host agent: Enables you to manage your Windows and Linux computers hosted
outside of Azure on your corporate network or other cloud providers.
Network Controller host agent: Allows Network Controller to manage the goal state of
the data plane, and to receive notification of events as the configuration of the data
plane changes.
Software Load Balancer host agent: Listens for policy updates from the Network
Controller. In addition, this agent programs agent rules into the SDN-enabled Hyper-V
virtual switches that are configured on the local computer.
Next steps
Learn about Two-node deployment IP requirements.
Review two-node storage reference
pattern IP requirements for Azure Stack
HCI
Article • 11/10/2022
In this article, learn about the IP requirements for deploying a two-node network
reference pattern in your environment.
Total 6 required; 1 optional for OEM VM.
Deployments with microsegmentation and QoS enabled
(Table columns: Network IP component, Network ATC intent, Network routing, Subnet properties, Required IPs.)
Total 9 minimum, 10 maximum.
Two-node: 1 Network Controller VM IP, 1 Software Load Balancer (SLB) VM IP, 1 gateway VM IP.
(Table columns: Network IP component, Network ATC intent, Network routing, Subnet properties, Required IPs.)
Total 11 minimum, 12 maximum.
Next steps
Choose a reference pattern.
Review two-node storage reference
pattern decision matrix for Azure Stack
HCI
Article • 07/20/2023
Study the two-node storage reference pattern decision matrix to help decide which
reference pattern is best suited for your deployment needs:
Next steps
Download Azure Stack HCI
Review SDN considerations for network
reference patterns
Article • 12/12/2022
In this article, you'll review considerations when deploying Software Defined Networking
(SDN) in your Azure Stack HCI cluster.
If you're using SDN Software Load Balancers (SLBs) or Generic Routing Encapsulation
(GRE) gateways, you must also configure Border Gateway Protocol (BGP) peering with
the top-of-rack (ToR) switches so that the SLB and GRE virtual IP addresses (VIPs) can be
advertised. For more information, see Switches and routers.
For more information about Network Controller, see What is Network Controller.
Next steps
Choose a network pattern to review.
Network considerations for cloud
deployments of Azure Stack HCI, version
23H2
Article • 03/01/2024
This article discusses how to design and plan an Azure Stack HCI, version 23H2 system
network for cloud deployment. Before you continue, familiarize yourself with the various
Azure Stack HCI networking patterns and available configurations.
As described in the Azure Stack HCI system server requirements article, the maximum
number of servers supported in an Azure Stack HCI system is 16. Once you complete your
workload capacity planning, you should have a good understanding of the number of
server nodes required to run workloads on your infrastructure.
If your workloads require four or more nodes: You can't deploy and use a
switchless configuration for storage network traffic. You need to include a physical
switch with support for Remote Direct Memory Access (RDMA) to handle storage
traffic. For more information on Azure Stack HCI cluster network architecture, see
Network reference patterns overview.
If your workloads require three or fewer nodes: You can choose either switchless or
switched configurations for storage connectivity.
If you plan to scale out later to more than three nodes: You need to use a
physical switch for storage network traffic. Any scale-out operation for switchless
deployments requires manual configuration of the network cabling between the
nodes, which Microsoft doesn't actively validate as part of its software development
cycle for Azure Stack HCI.
Here are the summarized considerations for the cluster size decision:
Decision: Cluster size (number of nodes per cluster). Consideration: Switchless
configuration via the Azure portal or ARM templates is only available for 1, 2, or 3 node
clusters.
Decision: Scale out requirements. Consideration: If you intend to scale out your cluster
using the orchestrator, you need to use a physical switch for the storage network traffic.
The advantages and disadvantages of each option are documented in the article linked
above.
As stated previously, you can only decide between the two options when your cluster
has three or fewer nodes. Any cluster with four or more nodes is automatically deployed
using a network switch for storage.
For clusters with three or fewer nodes, the storage connectivity decision influences the
number and type of network intents you can define in the next step.
For example, for switchless configurations, you need to define two network traffic
intents. Storage traffic for east-west communications using the crossover cables doesn't
have north-south connectivity and it is completely isolated from the rest of your
network infrastructure. That means you need to define a second network intent for
management outbound connectivity and for your compute workloads.
Although it's possible to define each network intent with only one physical network
adapter port, that doesn't provide any fault tolerance. As such, we always
recommend using at least two physical network ports for each network intent. If you
decide to use a network switch for storage, you can group all network traffic including
storage in a single network intent, which is also known as a hyperconverged or fully
converged host network configuration.
Here are the summarized considerations for the cluster storage connectivity decision:
Decision: No switch for storage. Consideration: Switchless configuration via Azure portal
or ARM template deployment is only supported for 1, 2, or 3 node clusters. 1 or 2 node
storage switchless clusters can be deployed using the Azure portal or ARM templates.
3 node storage switchless clusters can be deployed only using ARM templates. Scale-out
operations aren't supported with switchless deployments; any change to the number of
nodes after deployment requires a manual configuration.
Decision: Network switch for storage. Consideration: If you intend to scale out your
cluster using the orchestrator, you need to use a physical switch for the storage network
traffic. You can use this architecture with any number of nodes between 1 and 16.
Although it isn't enforced, you can use a single intent for all your network traffic types
(management, compute, and storage).
The following diagram summarizes storage connectivity options available to you for
various deployments:
For Azure Stack HCI, all deployments rely on Network ATC for the host network
configuration. The network intents are automatically configured when deploying Azure
Stack HCI via the Azure portal. To understand more about the network intents and how
to troubleshoot those, see Common network ATC commands.
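For example, a quick way to review the intents and their provisioning state on a deployed
system is to run the Network ATC cmdlets on a cluster node. This is a sketch; the exact
output varies by version:
PowerShell
# List the intents that Network ATC created during deployment.
Get-NetIntent
# Show the provisioning status of each intent across the cluster nodes.
Get-NetIntentStatus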
This section explains the implications of your design decision for network traffic intents,
and how they influence the next step of the framework. For cloud deployments, you can
select between four options to group your network traffic into one or more intents. The
options available depend on the number of nodes in your cluster and the storage
connectivity type used.
The available network intent options are discussed in the following sections.
This option requires a physical switch for storage traffic. If you require a switchless
architecture, you can't use this type of intent. Azure portal automatically filters out
this option if you select a switchless configuration for storage connectivity.
At least two network adapter ports are recommended to ensure High Availability.
At least 10 Gbps network interfaces are required to support RDMA traffic for
storage.
You can use this option for both switched and switchless storage connectivity, if:
At least two network adapter ports are available for each intent to ensure high
availability.
A physical switch is used for RDMA if you use the network switch for storage.
At least 10 Gbps network interfaces are required to support RDMA traffic for
storage.
This option requires a physical switch for storage traffic as the same ports are
shared with compute traffic, which requires north-south communication. If you
require a switchless configuration, you can't use this type of intent. Azure portal
automatically filters out this option if you select a switchless configuration for
storage connectivity.
At least two network adapter ports are recommended to ensure high availability.
At least 10 Gbps network interfaces are recommended for the compute and
storage intent to support RDMA traffic.
Even when the management intent is declared without a compute intent, Network
ATC creates a Switch Embedded Teaming (SET) virtual switch to provide high
availability to the management network.
Use this option for both switched and switchless storage connectivity if the storage
intent is different from the other intents.
Use this option when another compute intent is required or when you want to fully
separate the distinct types of traffic over different network adapters.
Use at least two network adapter ports for each intent to ensure high availability.
At least 10 Gbps network interfaces are recommended for the compute and
storage intent to support RDMA traffic.
The following diagram summarizes network intent options available to you for various
deployments:
In this step, you define the infrastructure subnet address space, how these addresses are
assigned to your cluster, and if there is any proxy or VLAN ID requirement for the nodes
for outbound connectivity to the internet and other intranet services such as Domain
Name System (DNS) or Active Directory Services.
The following infrastructure subnet components must be planned and defined before
you start deployment so you can anticipate any routing, firewall, or subnet
requirements.
Management IP pool
When doing the initial deployment of your Azure Stack HCI system, you must define an
IP range of consecutive IPs for infrastructure services deployed by default.
To ensure the range has enough IPs for current and future infrastructure services, you
must use a range of at least six consecutive available IP addresses. These addresses are
used for the cluster IP, the Azure Resource Bridge VM, and its components.
The following conditions must be met when defining your IP pool for the infrastructure
subnet during deployment:
# Condition
1 The IP range must use consecutive IPs and all IPs must be available within that range.
2 The range of IPs must not include the cluster node management IPs but must be on the same
subnet as your nodes.
3 The default gateway defined for the management IP pool must provide outbound
connectivity to the internet.
4 The DNS servers must ensure name resolution with Active Directory and the internet.
Management VLAN ID
We recommend that the management subnet of your Azure Stack HCI cluster use the default
VLAN, which in most cases is declared as VLAN ID 0. However, if your network
requirements are to use a specific management VLAN for your infrastructure network, it
must be configured on your physical network adapters that you plan to use for
management traffic.
If you plan to use two physical network adapters for management, you need to set the
VLAN on both adapters. This must be done as part of the bootstrap configuration of
your servers, and before they're registered to Azure Arc, to ensure you successfully
register the nodes using this VLAN.
To set the VLAN ID on the physical network adapters, use the following PowerShell
command:
PowerShell
Set-NetAdapter -Name "NIC1" -VlanID 44
Set-NetAdapter -Name "NIC2" -VlanID 44
Once the VLAN ID is set and the IPs of your nodes are configured on the physical
network adapters, the orchestrator reads this VLAN ID value from the physical network
adapter used for management and stores it, so it can be used for the Azure Resource
Bridge VM or any other infrastructure VM required during deployment. It isn't possible
to set the management VLAN ID during cloud deployment from Azure portal as this
carries the risk of breaking the connectivity between the nodes and Azure if the physical
switch VLANs aren't routed properly.
Note
Before you create a virtual switch, make sure to enable the Hyper-V role. For more
information, see Install required Windows role.
If a virtual switch configuration is required and you must use a specific VLAN ID, follow
these steps:
Azure Stack HCI deployments rely on Network ATC to create and configure the virtual
switches and virtual network adapters for management, compute, and storage intents.
By default, when Network ATC creates the virtual switch for the intents, it uses a specific
name for the virtual switch.
Although it isn't required, we recommend naming your virtual switches with the same
naming convention. The recommended name for the virtual switches is as follows:
Name of the virtual switch: "ConvergedSwitch($IntentName)", where $IntentName can be
any string. This string should match the name of the virtual network adapter for
management as described in the next step.
The following example shows how to create the virtual switch with PowerShell using the
recommended naming convention with $IntentName describing the purpose of the
virtual switch. The list of network adapter names is a list of the physical network
adapters you plan to use for management and compute network traffic:
PowerShell
$IntentName = "MgmtCompute"
New-VMSwitch -Name "ConvergedSwitch($IntentName)" -NetAdapterName "NIC1", "NIC2" -EnableEmbeddedTeaming $true -AllowManagementOS $false
To add the management virtual network adapter with the recommended name, use the
following command:
PowerShell
$IntentName = "MgmtCompute"
Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch($IntentName)" -Name "vManagement($IntentName)"
Once the required VLAN ID is configured, you can assign the IP address and gateways to
the management virtual network adapter to validate that it has connectivity with other
nodes, DNS, Active Directory, and the internet.
The following example shows how to configure the management virtual network
adapter to use VLAN ID 8 instead of the default:
PowerShell
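# A sketch of one way to do this, assuming the management virtual network adapter
# created earlier ("vManagement($IntentName)"); VLAN ID 8 is only an example value.
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "vManagement($IntentName)" -Access -VlanId 8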
Note
Do not select the virtual network adapter for the network intent.
The same logic applies to the Azure Resource Manager (ARM) templates. You must
specify the physical network adapters that you want to use for the network intents and
never the virtual network adapters.
# Considerations
1 VLAN ID must be specified on the physical network adapter for management before
registering the servers with Azure Arc.
2 Use specific steps when a virtual switch is required before registering the servers to Azure Arc.
3 The management VLAN ID is carried over from the host configuration to the infrastructure
VMs during deployment.
4 There is no VLAN ID input parameter for Azure portal deployment or for ARM template
deployment.
Node and cluster IP assignment
For Azure Stack HCI system, you have two options to assign IPs for the server nodes and
for the cluster IP.
Both the static and Dynamic Host Configuration Protocol (DHCP) protocols are
supported.
Infrastructure VMs and services such as Arc Resource Bridge and Network
Controller keep using static IPs from the management IP pool. That implies that
even if you decide to use DHCP to assign the IPs to your nodes and your cluster IP,
the management IP pool is still required.
Static IP assignment
If static IPs are used for the nodes, the management IP pool is used to obtain an
available IP and assign it to the cluster IP automatically during deployment.
It's important that the management IPs used for the nodes aren't part of the IP range
defined for the management IP pool. Server node IPs must be on the same subnet as
the defined IP range.
We recommend that you assign only one management IP for the default gateway and
for the configured DNS servers for all the physical network adapters of the node. This
ensures that the IP doesn't change once the management network intent is created. This
also ensures that the nodes keep their outbound connectivity during the deployment
process, including during the Azure Arc registration.
To avoid routing issues and to identify which IP will be used for outbound connectivity
and Arc registration, Azure portal validates if there is more than one default gateway
configured.
If a virtual switch and a management virtual network adapter were created during the
OS configuration, the management IP for the node must be assigned to that virtual
network adapter.
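As a hedged example, assuming the virtual network adapter name used earlier and example
addresses (replace the interface alias, IP address, gateway, and DNS server with values
from your environment), the assignment could look like this:
PowerShell
# The host exposes the management vNIC as "vEthernet (<adapter name>)".
New-NetIPAddress -InterfaceAlias "vEthernet (vManagement(MgmtCompute))" -IPAddress 192.168.1.10 -PrefixLength 24 -DefaultGateway 192.168.1.1
# Point the same interface at the DNS server that resolves Active Directory and the internet.
Set-DnsClientServerAddress -InterfaceAlias "vEthernet (vManagement(MgmtCompute))" -ServerAddresses 192.168.1.2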
DHCP IP assignment
If IPs for the nodes are acquired from a DHCP server, a dynamic IP is also used for the
cluster IP. Infrastructure VMs and services still require static IPs, and that implies that the
management IP pool address range must be excluded from the DHCP scope used for
the nodes and the cluster IP.
The process of defining the management IP after creating the management intent
involves using the MAC address of the first physical network adapter that is selected for
the network intent. This MAC address is then assigned to the virtual network adapter
that is created for management purposes. This means that the IP address that the first
physical network adapter obtains from the DHCP server is the same IP address that the
virtual network adapter uses as the management IP. Therefore, it's important to create a
DHCP reservation for each node IP.
# Considerations
1 Node IPs must be on the same subnet of the defined management IP pool range regardless if
they're static or dynamic addresses.
2 The management IP pool must not include node IPs. Use DHCP exclusions when dynamic IP
assignment is used.
4 DHCP addresses are only supported for node IPs and the cluster IP. Infrastructure services use
static IPs from the management pool.
5 The MAC address from the first physical network adapter is assigned to the management
virtual network adapter once the management network intent is created.
Proxy requirements
A proxy is most likely required to access the internet within your on-premises
infrastructure. Azure Stack HCI supports only non-authenticated proxy configurations.
Given that internet access is required to register the nodes in Azure Arc, the proxy
configuration must be set as part of the OS configuration before server nodes are
registered. For more information, see Configure proxy settings.
The Azure Stack HCI OS has three different proxy components (WinINet, WinHTTP, and
environment variables) that require the same proxy configuration to ensure all OS
components can access the internet. The same proxy configuration used for the nodes is
automatically carried over to the Arc Resource Bridge VM and AKS, ensuring that they
have internet access during deployment.
# Consideration
1 Proxy configuration must be completed before registering the nodes in Azure Arc.
2 The same proxy configuration must be applied for WinINET, WinHTTP, and environment
variables.
3 The Environment Checker ensures that proxy configuration is consistent across all proxy
components.
4 Proxy configuration of Arc Resource Bridge VM and AKS is automatically done by the
orchestrator during deployment.
Firewall requirements
You are currently required to open several internet endpoints in your firewalls to ensure
that Azure Stack HCI and its components can successfully connect to them. For a
detailed list of the required endpoints, see Firewall requirements.
Firewall configuration must be done prior to registering the nodes in Azure Arc. You can
use the standalone version of the environment checker to validate that your firewalls
aren't blocking traffic sent to these endpoints. For more information, see Azure Stack
HCI Environment Checker to assess deployment readiness for Azure Stack HCI.
# Consideration
1 Firewall configuration must be done before registering the nodes in Azure Arc.
2 Environment Checker in standalone mode can be used to validate the firewall configuration.
Network adapters are qualified by the network traffic type (management, compute, and
storage) they're used with. As you review the Windows Server Catalog, the Windows
Server 2022 certification indicates for which network traffic the adapters are qualified.
Before purchasing a server for Azure Stack HCI, you must have at least one adapter that
is qualified for management, compute, and storage as all three traffic types are required
on Azure Stack HCI. Cloud deployment relies on Network ATC to configure the network
adapters for the appropriate traffic types, so it is important to use supported network
adapters.
The default values used by Network ATC are documented in Cluster network settings.
We recommend that you use the default values. With that said, the following options
can be overridden using Azure portal or ARM templates if needed:
Storage VLANs: Set this value to the required VLANs for storage.
Jumbo Packets: Defines the size of the jumbo packets.
Network Direct: Set this value to false if you want to disable RDMA for your
network adapters.
Network Direct Technology: Set this value to RoCEv2 or iWarp.
Traffic Priorities Datacenter Bridging (DCB): Set the priorities that fit your
requirements. We highly recommend that you use the default DCB values as these
are validated by Microsoft and customers.
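If you manage intents directly with the Network ATC PowerShell module rather than
through the Azure portal or ARM templates, a sketch of overriding one of these defaults
(disabling RDMA in this hypothetical example, with placeholder adapter names) might look
like this:
PowerShell
# Build an adapter property override object and turn off Network Direct (RDMA).
$AdapterOverride = New-NetIntentAdapterPropertyOverrides
$AdapterOverride.NetworkDirect = 0
# Apply the override when creating the storage intent.
Add-NetIntent -Name "Storage" -Storage -AdapterName "pNIC03", "pNIC04" -AdapterPropertyOverrides $AdapterOverride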
# Consideration
2 Physical switches must be configured according to the network adapter configuration. See
Physical network requirements for Azure Stack HCI.
3 Ensure that your network adapters are supported for Azure Stack HCI using the Windows
Server Catalog.
4 When accepting the defaults, Network ATC automatically configures the storage network
adapter IPs and VLANs. This is known as Storage Auto IP configuration.
In some instances, Storage Auto IP isn't supported and you need to declare each storage
network adapter IP using ARM templates.
Next steps
About Azure Stack HCI, version 23H2 deployment.
Security features for Azure Stack HCI,
version 23H2
Article • 02/22/2024
Azure Stack HCI is a secure-by-default product that has more than 300 security settings
enabled right from the start. Default security settings provide a consistent security
baseline to ensure that devices start in a known good state.
This article provides a brief conceptual overview of the various security features
associated with your Azure Stack HCI cluster. This includes security defaults, Windows
Defender Application Control (WDAC), volume encryption via BitLocker, secret
rotation, local built-in user accounts, Microsoft Defender for Cloud, and more.
Security defaults
Your Azure Stack HCI has more than 300 security settings enabled by default that
provide a consistent security baseline, a baseline management system, and an
associated drift control mechanism.
You can monitor the security baseline and secured-core settings during both
deployment and runtime. You can also disable drift control during deployment when
you configure security settings.
With drift control applied, security settings are refreshed every 90 minutes. This refresh
interval ensures remediation of any changes from the desired state. Continuous
monitoring and auto-remediation allow you to have a consistent and reliable security
posture throughout the lifecycle of the device.
For more information, see Manage security defaults on Azure Stack HCI.
BitLocker encryption
Data-at-rest encryption is enabled on data volumes created during deployment. These
data volumes include both infrastructure volumes and workload volumes. When you
deploy your cluster, you have the option to modify security settings.
You must store BitLocker recovery keys in a secure location outside of the system. Once
Azure Stack HCI is successfully deployed, you can retrieve BitLocker recovery keys.
We recommend that you create your own local administrator account, and that you
disable the well-known RID 500 user account.
The ability to create certificates during deployment and after cluster scale
operations.
An automated rotation mechanism that renews certificates before they expire, and an
option to rotate certificates during the lifetime of the cluster.
The ability to monitor and alert whether certificates are still valid.
Note
This action takes about ten minutes, depending on the size of the cluster.
Azure Stack HCI has an integrated syslog forwarder that, once configured, generates
syslog messages defined in RFC3164, with the payload in Common Event Format (CEF).
The following diagram illustrates integration of Azure Stack HCI with an SIEM. All audits,
security logs, and alerts are collected on each host and exposed via syslog with the CEF
payload.
Syslog forwarding agents are deployed on every Azure Stack HCI host to forward syslog
messages to the customer-configured syslog server. Syslog forwarding agents work
independently from each other but can be managed together on any one of the hosts.
The syslog forwarder in Azure Stack HCI supports various configurations based on
whether syslog forwarding is with TCP or UDP, whether the encryption is enabled or not,
and whether there is unidirectional or bidirectional authentication.
With the basic Defender for Cloud plan, you get recommendations on how to improve
the security posture of your Azure Stack HCI system at no extra cost. With the paid
Defender for Servers plan, you get enhanced security features including security alerts
for individual servers and Arc VMs.
For more information, see Manage system security with Microsoft Defender for Cloud
(preview).
Next steps
Assess deployment readiness via the Environment Checker.
Read the Azure Stack HCI security book .
View the Azure Stack HCI security standards.
Evaluate the deployment readiness of
your environment for Azure Stack HCI,
version 23H2
Article • 03/07/2024
This article describes how to use the Azure Stack HCI Environment Checker in a
standalone mode to assess how ready your environment is for deploying the Azure
Stack HCI solution.
For a smooth deployment of the Azure Stack HCI solution, your IT environment must
meet certain requirements for connectivity, hardware, networking, and Active Directory.
The Azure Stack HCI Environment Checker is a readiness assessment tool that checks
these minimum requirements and helps determine if your IT environment is deployment
ready.
The Environment Checker runs the following validators:
- Connectivity validator. Checks whether each server in the cluster meets the connectivity requirements, for example, that each server has an internet connection and can reach well-known Azure endpoints over outbound HTTPS through all firewalls and proxy servers.
- Hardware validator. Checks whether your hardware meets the system requirements, for example, that all the servers in the cluster have the same manufacturer and model.
- Active Directory validator. Checks whether the Active Directory preparation tool was run before the deployment.
- Network validator. Validates your network infrastructure, including the IP ranges you provide for deployment, for example, that no active hosts on the network are using the reserved IP range.
- Arc integration validator. Checks whether the Azure Stack HCI cluster meets all the prerequisites for successful Arc onboarding.
Use the Environment Checker to:
- Ensure that your Azure Stack HCI infrastructure is ready before deploying any future updates or upgrades.
- Identify issues that could block the deployment, such as not running the pre-deployment Active Directory script.
- Confirm that the minimum requirements are met.
- Identify and remediate small issues early and quickly, such as a misconfigured firewall URL or a wrong DNS entry.
- Identify and remediate discrepancies on your own and ensure that your current environment configuration complies with the Azure Stack HCI system requirements.
- Collect diagnostic logs and get remote support to troubleshoot any validation issues.
Standalone tool: This lightweight PowerShell tool is available for free download from the PowerShell Gallery. You can run the standalone tool anytime, outside of the deployment process, for example, even before receiving the actual hardware, to check whether all the connectivity requirements are met. This article describes how to run the Environment Checker in standalone mode.
Prerequisites
Before you begin, complete the following tasks:
You can install the Environment Checker on a client computer, staging server, or Azure
Stack HCI cluster node. However, if installed on an Azure Stack HCI cluster node, make
sure to uninstall it before you begin the deployment to avoid any potential conflicts.
1. Run PowerShell as administrator (5.1 or later). If you need to install PowerShell, see
Installing PowerShell on Windows.
2. Enter the following cmdlet to install the latest version of the PowerShellGet
module:
PowerShell
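# A likely form of the command this step refers to; run it from an elevated PowerShell session.
Install-Module -Name PowerShellGet -AllowClobber -Force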
3. After the installation completes, close the PowerShell window and open a new
PowerShell session as administrator.
PowerShell
PowerShell
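# Hedged sketch: install the Environment Checker module and confirm the installed version.
# The module name AzStackHci.EnvironmentChecker is assumed from the PowerShell Gallery listing.
Install-Module -Name AzStackHci.EnvironmentChecker -Repository PSGallery
Get-Module -Name AzStackHci.EnvironmentChecker -ListAvailable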
Locally from the Azure Stack HCI cluster node. However, make sure to uninstall the
Environment Checker before you begin the deployment to avoid any potential
conflicts.
Select each of the following tabs to learn more about the corresponding validator.
Connectivity
Use the connectivity validator to check if all the servers in your cluster have internet
connectivity and meet the minimum connectivity requirements. For connectivity
prerequisites, see Firewall requirements for Azure Stack HCI.
- Check the connectivity of your servers before receiving the actual hardware. You can run the connectivity validator from any client computer on the network where you'll deploy the Azure Stack HCI cluster.
- Check the connectivity of all the servers in your cluster after you've deployed the cluster. You can check the connectivity of each server by running the validator cmdlet locally on each server, or you can remotely connect from a staging server to check the connectivity of one or more servers.
1. Open PowerShell locally on the workstation, staging server, or Azure Stack HCI cluster node.
2. Run the connectivity validator:
PowerShell
Invoke-AzStackHciConnectivityValidation
Note
Here are some examples of running the connectivity validator cmdlet with
parameters.
You can check connectivity for a specific service endpoint by passing the Service
parameter. In the following example, the validator checks connectivity for Azure Arc
service endpoints.
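A hedged sketch of that invocation; the Service parameter is described above, but the value used here is an assumption for illustration only:
PowerShell
# The service name value ("Arc") is assumed; check the module help for the accepted values.
Invoke-AzStackHciConnectivityValidation -Service Arc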
To route the checks through a proxy server, pass the Proxy and ProxyCredential parameters:
PowerShell
Invoke-AzStackHciConnectivityValidation -Proxy http://proxy.contoso.com:8080 -ProxyCredential $proxyCredential
Note
You can view the output of the connectivity checker as an object by using the -PassThru parameter:
PowerShell
Invoke-AzStackHciConnectivityValidation -PassThru
HealthCheckSource: The name of the services called for the health check.
To learn more about different sections in the readiness check report, see
Understand readiness check report.
The following sample output is from a successful run of the connectivity validator.
The output indicates a healthy connection to all the endpoints, including well-
known Azure services and observability services. Under Diagnostics, you can see
the validator checks if a DNS server is present and healthy. It collects WinHttp, IE
proxy, and environment variable proxy settings for diagnostics and data collection.
It also checks if a transparent proxy is used in the outbound path and displays the
output.
If a test fails, the connectivity validator returns information to help you resolve the
issue, as shown in the sample output below. The Needs Remediation section
displays the issue that caused the failure. The Remediation section lists the relevant
article to help remediate the issue.
Workaround
Work with your network team to turn off SSL inspection for your Azure Stack HCI
system. To confirm your SSL inspection is turned off, you can use the following
examples. After SSL inspection is turned off, you can run the tool again to check
connectivity to all the endpoints.
If you receive the certificate validation error message, run the following commands
individually for each endpoint to manually check the certificate information:
PowerShell
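# Hedged sketch (not necessarily the exact command from the original article): retrieve the
# TLS certificate presented by an endpoint so you can check whether SSL inspection rewrites it.
# Substitute each endpoint you need to verify for $uri.
$uri = [System.Uri]'https://login.microsoftonline.com'
$tcpClient = New-Object System.Net.Sockets.TcpClient($uri.Host, 443)
$sslStream = New-Object System.Net.Security.SslStream($tcpClient.GetStream())
$sslStream.AuthenticateAsClient($uri.Host)
$sslStream.RemoteCertificate | Format-List Subject, Issuer
$sslStream.Dispose()
$tcpClient.Close()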
For example, to verify the certificate information for two endpoints, say https://login.microsoftonline.com and https://portal.azure.com, run the command once for each endpoint:
For https://login.microsoftonline.com :
Output
Subject
-------
CN=portal.office.com, O=Microsoft Corporation, L=Redmond, S=WA,
C=US
CN=Microsoft Azure TLS Issuing CA 02, O=Microsoft Corporation, C=US
CN=DigiCert Global Root G2, OU=www.digicert.com, O=DigiCert Inc,
C=US
For https://portal.azure.com :
Output
Subject
-------
CN=portal.azure.com, O=Microsoft Corporation, L=Redmond, S=WA, C=US
CN=Microsoft Azure TLS Issuing CA 01, O=Microsoft Corporation, C=US
CN=DigiCert Global Root G2, OU=www.digicert.com, O=DigiCert Inc,
C=US
The information displayed on each readiness check report varies depending on the checks the validators perform. The following table summarizes the different sections in the readiness check reports for each validator:
Section | Description | Available in
Services | Displays the health status of each service endpoint that the validator checks. Any service endpoint that fails the check is highlighted with the Needs Remediation tag. | Connectivity validator report
Diagnostics | Displays the result of the diagnostic tests, for example, the health and availability of a DNS server. It also shows what information the validator collects for diagnostic purposes, such as WinHttp, IE proxy, and environment variable proxy settings. | Connectivity validator report
Hardware | Displays the health status of all the physical servers and their hardware components. For information on the tests performed on each hardware component, see the table under the "Hardware" tab in the Run readiness checks section. | Hardware validator report
AD OU Diagnostics | Displays the result of the Active Directory organizational unit test: whether the specified organizational unit exists and contains the proper sub-organizational units. | Active Directory validator report
Network range test | Displays the result of the network range test. If the test fails, it displays the IP addresses that belong to the reserved IP range. | Network validator report
Summary | Lists the count of successful and failed tests. Failed test results are expanded to show the failure details under Needs Remediation. | All reports
Remediation | Displays only if a test fails. Provides a link to the article that provides guidance on how to remediate the issue. | All reports
Log location (contains PII) | Provides the path where the log file is saved. The default path is $HOME\.AzStackHci\AzStackHciEnvironmentChecker.log when you run the Environment Checker in standalone mode, and C:\CloudDeployment\Logs when the Environment Checker runs as part of the deployment process. | All reports
Report location (contains PII) | Provides the path where the completed readiness check report is saved in JSON format. The default path is $HOME\.AzStackHci\AzStackHciEnvironmentReport.json when you run the Environment Checker in standalone mode, and C:\CloudDeployment\Logs when the Environment Checker runs as part of the deployment process. | All reports
Completion message | At the end of the report, displays a message that the validation check is completed. | All reports
Note
The results reported by the Environment Checker tool reflect the status of your
settings only at the time that you ran it. If you make changes later, for example to
your Active Directory or network settings, items that passed successfully earlier can
become critical issues.
For each test, the validator provides a summary of the unique issues and classifies them
into: success, critical issues, warning issues, and informational issues. Critical issues are
the blocking issues that you must fix before proceeding with the deployment.
This article is the first in the series of deployment articles that describe how to deploy
Azure Stack HCI, version 23H2. This article applies to both single and multi-node
deployments. The target audience for this article is IT administrators who are
responsible for deploying Azure Stack HCI in their organization.
Important
Azure Stack HCI, version 23H2 is the latest GA version, which doesn't support
upgrade from version 22H2. Begin with a new 2311 deployment, update to 2311.2,
and strictly follow version 23H2 deployment instructions. Don't mix steps from
version 22H2 and version 23H2.
Deploy from Azure portal: Select this option to deploy an Azure Stack HCI cluster
using Azure portal. You can choose from three deployment methods: New
configuration, Template spec, and QuickStart template. The deployment flow
guides you through the steps to deploy your Azure Stack HCI cluster.
Deployment sequence
Follow this sequence to deploy Azure Stack HCI in your environment:
Step | Description
Select validated network topology | Identify the network reference pattern that corresponds to the way your servers are cabled. You define the network settings based on this topology.
Read the requirements and complete the prerequisites | Review the requirements and complete all the prerequisites and a deployment checklist before you begin the deployment.
Step 1: Prepare Active Directory | Prepare your Active Directory (AD) environment for Azure Stack HCI deployment.
Step 2: Download Azure Stack HCI, version 23H2 OS | Download the Azure Stack HCI, version 23H2 OS ISO from the Azure portal.
Step 3: Install OS | Install the Azure Stack HCI operating system locally on each server in your cluster.
(Optional) Configure the proxy | Optionally configure proxy settings for Azure Stack HCI if your network uses a proxy server for internet access.
Step 4: Register servers with Arc and assign permissions | Install and run the Azure Arc registration script on each of the servers that you intend to cluster. Assign the required permissions for the deployment.
Step 5A: Deploy the cluster via Azure portal | Use the Azure portal to select the Arc servers and create the Azure Stack HCI cluster. Use one of the three deployment methods described previously.
Step 5B: Deploy the cluster via ARM template | Use the ARM deployment template and the parameter file to deploy the Azure Stack HCI cluster.
Before starting the deployment, we recommend you check the following table that
shows the supported and available options.
- No switch for storage. When you select this option, your Azure Stack HCI system uses crossover network cables directly connected to your network interfaces for storage communication. Currently, switchless deployments from the portal support one or two nodes.
- Network switch for storage. When you select this option, your Azure Stack HCI system uses network switches connected to your network interfaces for storage communication. You can deploy up to 16 nodes using this configuration.
You can then select the network reference pattern corresponding to a validated network
topology that you intend to deploy.
Next steps
Read the prerequisites for Azure Stack HCI.
Review deployment prerequisites for
Azure Stack HCI, version 23H2
Article • 01/31/2024
This article discusses the security, software, hardware, and networking prerequisites and the deployment checklist for deploying Azure Stack HCI, version 23H2.
Component | What is needed
Server names | A unique name for each server you wish to deploy.
Active Directory cluster name | The name for the new cluster AD object created during the Active Directory preparation. This name is also used as the name of the cluster during deployment.
Active Directory prefix | The prefix used for all AD objects created for the Azure Stack HCI deployment. The prefix is used during the Active Directory preparation and must not exceed 8 characters.
Active Directory OU | A new organizational unit (OU) to store all the objects for the Azure Stack HCI deployment. The OU is created during the Active Directory preparation.
Active Directory Domain | The fully qualified domain name (FQDN) for the Active Directory Domain Services prepared for deployment.
Active Directory Lifecycle Manager credential | A new username and password created with the appropriate permissions for deployment. This account is the same as the user account used by the Azure Stack HCI deployment. The password must conform to the Azure length and complexity requirements: use a password that is at least 12 characters long and contains a lowercase character, an uppercase character, a numeral, and a special character. The name must be unique for each deployment, and you can't use admin as the username.
IPv4 network range subnet for management network intent | A subnet used for the management network intent. You need an address range for the management network with a minimum of 6 available, contiguous IPs in this subnet. These IPs are used for infrastructure services, with the first IP assigned to failover clustering. For more information, see the Specify network settings page in Deploy via Azure portal.
Storage VLAN ID | Two unique VLAN IDs to be used for the storage networks, from your IT network administrator. We recommend using the default VLANs from Network ATC for the storage subnets. If you plan to have two storage subnets, Network ATC uses VLANs 711 and 712 for these subnets. For more information, see the Specify network settings page in Deploy via Azure portal.
DNS server | A DNS server that is used in your environment. The DNS server used must resolve the Active Directory domain. For more information, see the Specify network settings page in Deploy via Azure portal.
Local administrator credentials | The username and password for the local administrator for all the servers in your cluster. The credentials are identical for all the servers in your system. For more information, see the Specify management settings page in Deploy via Azure portal.
Custom location | (Optional) A name for the custom location created for your cluster. This name is used for Azure Arc VM management. For more information, see the Specify management settings page in Deploy via Azure portal.
Azure subscription ID | The ID for the Azure subscription used to register the cluster. Make sure that you are a User Access Administrator and a Contributor on this subscription; this allows you to manage access to Azure resources, specifically to Arc-enable each server of an Azure Stack HCI cluster. For more information, see Assign Azure permissions for deployment.
Azure Storage account | For two-node clusters, a witness is required. For a cloud witness, an Azure Storage account is needed. In this release, you can't use the same storage account for multiple clusters. For more information, see Specify management settings in Deploy via Azure portal.
Azure Key Vault | A key vault is required to securely store secrets for this system, such as cryptographic keys, local admin credentials, and BitLocker recovery keys. For more information, see Basics in Deploy via Azure portal.
Outbound connectivity | Run the Environment Checker to ensure that your environment meets the outbound network connectivity requirements for firewall rules.
Next steps
Prepare your Active Directory environment.
Prepare Active Directory for Azure Stack
HCI, version 23H2 deployment
Article • 03/11/2024
This article describes how to prepare your Active Directory environment before you
deploy Azure Stack HCI, version 23H2.
Note
You can use your existing process to meet the above requirements. The script
used in this article is optional and is provided to simplify the preparation.
When group policy inheritance is blocked at the OU level, enforced GPOs aren't blocked. Ensure that any applicable GPOs that are enforced are also blocked using other methods, for example, WMI filters or security groups.
Prerequisites
Before you begin, make sure you've done the following:
Download and install the version 2402 module from the PowerShell Gallery. Run the following command from the folder where the module is located:
PowerShell
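# Hedged sketch: the module name AsHciADArtifactsPreCreationTool is assumed from the
# PowerShell Gallery; adjust the name or pin the version if your environment requires it.
Install-Module -Name AsHciADArtifactsPreCreationTool -Repository PSGallery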
Make sure to uninstall any previous versions of the module before installing
the new version.
You have obtained permissions to create an OU. If you don't have permissions,
contact your Active Directory administrator.
Parameter | Description
-AsHciOUName | A new organizational unit (OU) to store all the objects for the Azure Stack HCI deployment. Existing group policies and inheritance are blocked in this OU to ensure there's no conflict of settings. The OU must be specified as the distinguished name (DN). For more information, see the format of distinguished names.
PowerShell
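# Hedged sketch: the cmdlet name New-HciAdObjectsPreCreation is assumed from the module above;
# the OU distinguished name is an example only.
New-HciAdObjectsPreCreation `
    -AzureStackLCMUserCredential (Get-Credential) `
    -AsHciOUName "OU=HCI01,DC=contoso,DC=com"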
4. When prompted, provide the username and password for the deployment.
a. Make sure that only the username is provided. The name should not include the domain name, for example, contoso\username. The username must be between 1 and 64 characters, contain only letters, numbers, hyphens, and underscores, and must not start with a hyphen or a number.
b. Make sure that the password meets complexity and length requirements. Use a
password that is at least 12 characters long and contains: a lowercase
character, an uppercase character, a numeral, and a special character.
6. An OU with the specified name should be created and within that OU, you'll see
the deployment user.
Note
If you are repairing a single server, do not delete the existing OU. If the server
volumes are encrypted, deleting the OU removes the BitLocker recovery keys.
Next steps
Download Azure Stack HCI, version 23H2 software on each server in your cluster.
This article describes how to download the Azure Stack HCI, version 23H2 software from
the Azure portal.
The first step in deploying Azure Stack HCI, version 23H2 is to download Azure Stack
HCI software from the Azure portal. The software download includes a free 60-day trial.
However, if you've purchased Azure Stack HCI Integrated System solution hardware
from the Azure Stack HCI Catalog through your preferred Microsoft hardware partner,
the Azure Stack HCI software should be pre-installed. In that case, you can skip this step
and move on to Register your servers and assign permissions for Azure Stack HCI
deployment.
Prerequisites
Before you begin the download of Azure Stack HCI, version 23H2 software, ensure that
you have the following prerequisites:
An Azure account. If you don't already have an Azure account, first create an account.
1. If not already signed in, sign in to the Azure portal with your Azure account
credentials.
2. In the Azure portal search bar at the top, enter Azure Stack HCI. As you type, the
portal starts suggesting related resources and services based on your input. Select
Azure Stack HCI under the Services category.
After you select Azure Stack HCI, you're directed to the Azure Stack HCI Get
started page, with the Get started tab selected by default.
3. On the Get started tab, under the Download software tile, select Download Azure
Stack HCI.
4. On the Download Azure Stack HCI page on the right, do the following:
b. Choose language from the dropdown list. Select English to download the
English version of the software.
We recommend that you use the ISO for the language you wish to install in. You
should download a VHDX only if you are performing virtual deployments. To
download the VHDX in English, select English VHDX from the dropdown list.
f. Select the Download Azure Stack HCI button. This action begins the download.
Use the downloaded ISO file to install the software on each server that you want
to cluster.
Next steps
Install the Azure Stack HCI, version 23H2 operating system.
Install the Azure Stack HCI, version
23H2 operating system
Article • 02/27/2024
This article describes the steps needed to install the Azure Stack HCI, version 23H2
operating system locally on each server in your cluster.
Prerequisites
Before you begin, complete the following steps:
1. Download the Azure Stack HCI operating system from the Azure portal.
2. Start the Install Azure Stack HCI wizard on the system drive of the server where
you want to install the operating system.
3. Choose the language to install or accept the default language settings, select Next,
and then on next page of the wizard, select Install now.
4. On the Applicable notices and license terms page, review the license terms, select
the I accept the license terms checkbox, and then select Next.
5. On the Which type of installation do you want? page, select Custom: Install the
newer version of Azure Stack HCI only (advanced).
Note
6. On the Where do you want to install Azure Stack HCI? page, confirm the drive on
which the operating system is installed, and then select Next.
Note
If the hardware was used before, run diskpart to clean the OS drive. For more
information, see how to use diskpart. Also see the instructions in Clean
drives.
7. The Installing Azure Stack HCI page displays to show status on the process.
Note
The installation process restarts the operating system twice to complete the
process, and displays notices on starting services before opening an
Administrator command prompt.
9. At the Enter new credential for Administrator prompt, enter a new password.
Important
Make sure that the local administrator password follows Azure password
length and complexity requirements. Use a password that is at least 12
characters long and contains a lowercase character, an uppercase character, a
numeral, and a special character.
10. At the Your password has been changed confirmation prompt, press Enter.
Now you're ready to use the Server Configuration tool (SConfig) to perform important
tasks.
To use SConfig, sign in to the server running the Azure Stack HCI operating system. This
could be locally via a keyboard and monitor, or using a remote management (headless
or BMC) controller, or Remote Desktop. The SConfig tool opens automatically when you
sign in to the server.
Follow these steps to configure the operating system using SConfig:
1. Install the latest drivers and firmware as per the instructions provided by your
hardware manufacturer. You can use SConfig to run driver installation apps. After
the installation is complete, restart your servers.
2. Configure networking as per your environment. You can configure the following
optional settings:
Configure VLAN IDs for the management network. For more information, see
Management VLAN ID and Management VLAN ID with a virtual switch.
Configure DHCP for the management network. For more information, see
DHCP IP assignment.
Configure a proxy server. For more information, see Configure proxy settings
for Azure Stack HCI, version 23H2.
3. Use the Network Settings option in SConfig to configure a default valid gateway
and a DNS server. Set DNS to the DNS of the domain you're joining.
4. Configure a valid time server on each server. Validate that your server is not using
the local CMOS clock as a time source, using the following command:
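A hedged sketch of that check, using the built-in w32tm tool (if the output shows Local CMOS Clock, the time source still needs to be configured):
PowerShell
w32tm /query /source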
Confirm that the time is successfully synchronizing using the new time server:
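Again as a hedged sketch with w32tm:
PowerShell
w32tm /query /status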
Once the server is domain joined, it synchronizes its time from the PDC emulator.
5. Rename all the servers using option 2 in SConfig to match what you used when
preparing Active Directory, as you won't rename the servers later.
6. (Optional) At this point, you can enable Remote Desktop Protocol (RDP) and then
RDP to each server rather than use the virtual console. This action should simplify
performing the remainder of the configuration.
7. Clean all the non-OS drives for each server that you intend to deploy. Remove any
virtual media that have been used when installing the OS. Also validate that no
other root drives exist.
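A minimal sketch of one way to clean the non-OS drives with the built-in storage cmdlets (an assumption, not necessarily the method the linked articles describe); this is destructive, so review the output of Get-Disk before running it:
PowerShell
# Wipe every disk that isn't the boot or system disk on the local server.
Get-Disk | Where-Object { -not $_.IsBoot -and -not $_.IsSystem } |
    Clear-Disk -RemoveData -RemoveOEM -Confirm:$false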
Note
Make sure that the local administrator password follows Azure password
length and complexity requirements. Use a password that is at least 12
characters long and contains a lowercase character, an uppercase character, a
numeral, and a special character.
PowerShell
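# Hedged sketch: one way to update the local administrator password so that it meets the
# requirements above, using the standard LocalAccounts cmdlets.
$password = Read-Host -Prompt "Enter the new local administrator password" -AsSecureString
Set-LocalUser -Name "Administrator" -Password $password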
You are now ready to register the Azure Stack HCI server with Azure Arc and assign
permissions for deployment.
Next steps
(Optional) Configure proxy settings for Azure Stack HCI, version 23H2.
Register Azure Stack HCI servers in your system with Azure Arc and assign
permissions.
Register your servers and assign
permissions for Azure Stack HCI, version
23H2 deployment
Article • 03/15/2024
This article describes how to register your Azure Stack HCI servers and then set up the
required permissions to deploy an Azure Stack HCI, version 23H2 cluster.
Prerequisites
Before you begin, make sure you've completed the following prerequisites:
Install the Azure Stack HCI, version 23H2 operating system on each server.
Register your subscription with the required resource providers (RPs). You can use either the Azure portal or Azure PowerShell to register; a PowerShell sketch follows the note below. You need to be an owner or contributor on your subscription to register the following resource providers:
Microsoft.HybridCompute
Microsoft.GuestConfiguration
Microsoft.HybridConnectivity
Microsoft.AzureStackHCI
Note
The assumption is that the person registering the Azure subscription with the
resource providers is a different person than the one who is registering the
Azure Stack HCI servers with Arc.
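If you use Azure PowerShell, a minimal sketch (assuming the Az.Resources module is installed and you're signed in to the correct subscription) looks like this:
PowerShell
# Register the resource providers required for Azure Stack HCI, version 23H2.
$providers = 'Microsoft.HybridCompute', 'Microsoft.GuestConfiguration',
    'Microsoft.HybridConnectivity', 'Microsoft.AzureStackHCI'
foreach ($provider in $providers) {
    Register-AzResourceProvider -ProviderNamespace $provider
}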
If you're registering the servers as Arc resources, make sure that you have the
following permissions on the resource group where the servers were provisioned:
Azure Connected Machine Onboarding
Azure Connected Machine Resource Administrator
To verify that you have these roles, follow these steps in the Azure portal:
1. Go to the subscription that you use for the Azure Stack HCI deployment.
2. Go to the resource group where you're planning to register the servers.
3. In the left-pane, go to Access Control (IAM).
4. In the right pane, go to Role assignments. Verify that you have the Azure Connected Machine Onboarding and Azure Connected Machine Resource Administrator roles assigned.
Important
Run these steps on every Azure Stack HCI server that you intend to cluster.
PowerShell
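# Hedged sketch: install the Arc registration script module on the server. The module name
# AzsHCI.ARCinstaller is an assumption based on the PowerShell Gallery listing for this release.
Install-Module -Name AzsHCI.ARCinstaller -Repository PSGallery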
Parameter | Description
SubscriptionID | The ID of the subscription used to register your servers with Azure Arc.
TenantID | The tenant ID used to register your servers with Azure Arc. Go to your Microsoft Entra ID and copy the tenant ID property.
ResourceGroup | The resource group precreated for Arc registration of the servers. A resource group is created if one doesn't exist.
Region | The Azure region used for registration. See the supported regions that can be used.
PowerShell
#Define the subscription where you want to register your server as Arc
device
$Subscription = "YourSubscriptionID"
#Define the resource group where you want to register your server as
Arc device
$RG = "YourResourceGroupName"
#Define the region you will use to register your server as Arc device
$Region = "eastus"
#Define the tenant you will use to register your server as Arc device
$Tenant = "YourTenantID"
3. Connect to your Azure account and set the subscription. Open a browser on the client that you're using to connect to the server, go to https://microsoft.com/devicelogin, and enter the code provided in the Azure CLI output to authenticate. Get the access token and account ID for the registration.
Azure CLI
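# Hedged sketch of the sign-in and token retrieval this step describes; the az CLI commands
# are standard, though the original snippet may differ in details.
az login --use-device-code --tenant $Tenant
$ARMtoken = (az account get-access-token --query accessToken --output tsv)
$id = (az account show --query id --output tsv)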
4. Finally run the Arc registration script. The script takes a few minutes to run.
PowerShell
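# The invocation below mirrors the command shown in the sample output that follows.
Invoke-AzStackHciArcInitialization -SubscriptionID $Subscription -ResourceGroup $RG `
    -TenantID $Tenant -Region $Region -Cloud "AzureCloud" `
    -ArmAccessToken $ARMtoken -AccountID $id -Force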
If you're accessing the internet via a proxy server, you need to pass the -proxy
parameter and provide the proxy server as http://<Proxy server FQDN or IP
address>:Port when running the script.
Output
PS C:\DeploymentPackage> Invoke-AzStackHciArcInitialization -
SubscriptionID $Subscription -ResourceGroup $RG -TenantID $Tenant -
Region $Region -Cloud "AzureCloud" -ArmAccessToken $ARMtoken -AccountID
$id -Force
Installing and Running Azure Stack HCI Environment Checker
All the environment validation checks succeeded
Installing Hyper-V Management Tools
Starting AzStackHci ArcIntegration Initialization
Installing Azure Connected Machine Agent
Total Physical Memory: 588,419 MB
PowerShell version: 5.1.25398.469
.NET Framework version: 4.8.9032
Downloading agent package from
https://aka.ms/AzureConnectedMachineAgent to
C:\Users\AzureConnectedMachineAgent.msi
Installing agent package
Installation of azcmagent completed successfully
0
Connecting to Azure using ARM Access Token
Connected to Azure successfully
Microsoft.HybridCompute RP already registered, skipping registration
Microsoft.GuestConfiguration RP already registered, skipping
registration
Microsoft.HybridConnectivity RP already registered, skipping
registration
Microsoft.AzureStackHCI RP already registered, skipping registration
INFO Connecting machine to Azure... This might take a few minutes.
INFO Testing connectivity to endpoints that are needed to connect to
Azure... This might take a few minutes.
20% [==> ]
30% [===> ]
INFO Creating resource in Azure...
Correlation ID=<Correlation ID>=/subscriptions/<Subscription
ID>/resourceGroups/myashci-
rg/providers/Microsoft.HybridCompute/machines/ms309
60% [========> ]
80% [===========> ]
100% [===============]
INFO Connected machine to Azure
INFO Machine overview page: https://portal.azure.com/
Connected Azure ARC agent successfully
Successfully got the content from IMDS endpoint
Successfully got Object Id for Arc Installation <Object ID>
$Checking if Azure Stack HCI Device Management Role is assigned already
for SPN with Object ID: <Object ID>
Assigning Azure Stack HCI Device Management Role to Object : <Object
ID>
$Successfully assigned Azure Stack HCI Device Management Role to Object
Id <Object ID>
Successfully assigned permission Azure Stack HCI Device Management
Service Role to create or update Edge Devices on the resource group
$Checking if Azure Connected Machine Resource Manager is assigned
already for SPN with Object ID: <Object ID>
Assigning Azure Connected Machine Resource Manager to Object : <Object
ID>
$Successfully assigned Azure Connected Machine Resource Manager to
Object Id <Object ID>
Successfully assigned the Azure Connected Machine Resource Manager role
on the resource group
$Checking if Reader is assigned already for SPN with Object ID: <Object
ID>
Assigning Reader to Object : <Object ID>
$Successfully assigned Reader to Object Id <Object ID>
Successfully assigned the reader Resource Manager role on the resource
group
Installing TelemetryAndDiagnostics Extension
Successfully triggered TelemetryAndDiagnostics Extension installation
Installing DeviceManagement Extension
Successfully triggered DeviceManagementExtension installation
Installing LcmController Extension
Successfully triggered LCMController Extension installation
Please verify that the extensions are successfully installed before
continuing...
Log location:
C:\Users\Administrator\.AzStackHci\AzStackHciEnvironmentChecker.log
Report location:
C:\Users\Administrator\.AzStackHci\AzStackHciEnvironmentReport.json
Use -Passthru parameter to return results as a PSObject.
5. After the script completes successfully on all the servers, verify that:
a. Your servers are registered with Arc. Go to the Azure portal and then go to the
resource group associated with the registration. The servers appear within the
specified resource group as Machine - Azure Arc type resources.
b. The mandatory Azure Stack HCI extensions are installed on your servers. From
the resource group, select the registered server. Go to the Extensions. The
mandatory extensions show up in the right pane.
1. In the Azure portal, go to the subscription used to register the servers. In the left
pane, select Access control (IAM). In the right pane, select + Add and from the
dropdown list, select Add role assignment.
2. Go through the tabs and assign the following role permissions to the user who
deploys the cluster:
3. In the Azure portal, go to the resource group used to register the servers on your
subscription. In the left pane, select Access control (IAM). In the right pane, select
+ Add and from the dropdown list, select Add role assignment.
4. Go through the tabs and assign the following permissions to the user who deploys
the cluster:
5. In the right pane, go to Role assignments. Verify that the deployment user has all
the configured roles.
Next steps
After setting up the first server in your cluster, you're ready to deploy using Azure portal:
Feedback
Was this page helpful? Yes No
Provide product feedback | Get help at Microsoft Q&A
Deploy an Azure Stack HCI, version
23H2 system using the Azure portal
Article • 02/28/2024
This article helps you deploy an Azure Stack HCI, version 23H2 system using the Azure
portal.
Prerequisites
Completion of Register your servers with Azure Arc and assign deployment
permissions.
For three-node clusters, the network adapters that carry the in-cluster storage
traffic must be connected to a network switch. Deploying three-node clusters with
storage network adapters that are directly connected to each server without a
switch isn't supported in this preview.
2. Select the Subscription and Resource group in which to store this system's
resources.
3. Enter the Cluster name used for this Azure Stack HCI system when Active Directory
Domain Services (AD DS) was prepared for this deployment.
4. Select the Region to store this system's Azure resources. See System requirements
for a list of supported regions.
5. Select or create an empty Key vault to securely store secrets for this system, such
as cryptographic keys, local admin credentials, and BitLocker recovery keys.
Key Vault adds cost in addition to the Azure Stack HCI subscription. For details, see
Key Vault Pricing .
6. Select the server or servers that make up this Azure Stack HCI system.
7. Select Validate, wait for the green validation checkbox to appear, and then select
Next: Configuration.
The validation process checks that each server is running the same exact version of
the OS, has the correct Azure extensions, and has matching (symmetrical) network
adapters.
- No switch for storage - For two-node clusters with storage network adapters that connect the two servers directly without going through a switch.
- Network switch for storage traffic - For clusters with storage network adapters connected to a network switch. This also applies to clusters that use converged network adapters that carry all traffic types, including storage.
- Management traffic between this system, your management PC, and Azure; also Storage Replica traffic
- Compute traffic to or from VMs and containers on this system
- Storage (SMB) traffic between servers in a multi-node cluster
- Group all traffic - If you're using network switches for storage traffic, you can group all traffic types together on a set of network adapters.
- Group compute and storage traffic - If you're using network switches for storage traffic, you can group compute and storage traffic together on your high-speed adapters while keeping management traffic isolated on another set of adapters.
This is commonly used for private multi-access edge compute (MEC) systems.
Tip
If you're deploying a single server that you plan to add servers to later, select
the network traffic groupings you want for the eventual cluster. Then when
you add servers they automatically get the appropriate settings.
3. For each group of traffic types (known as an intent), select at least one unused
network adapter (but probably at least two matching adapters for redundancy).
Make sure to use high-speed adapters for the intent that includes storage traffic.
4. For the storage intent, enter the VLAN ID set on the network switches used for
each storage network.
- Storage traffic priority. This specifies the Priority Flow Control where Data Center Bridging (DCB) is used.
- Cluster traffic priority.
- Storage traffic bandwidth reservation. This parameter defines the bandwidth allocation in % for the storage traffic.
- Adapter properties such as jumbo frame size (in bytes) and RDMA protocol (which can now be disabled).
6. Using the Starting IP and Ending IP (and related) fields, allocate a contiguous
block of at least six static IP addresses on your management network's subnet,
omitting addresses already used by the servers.
These IPs are used by Azure Stack HCI and internal infrastructure (Arc Resource
Bridge) that's required for Arc VM management and AKS Hybrid.
2. Select an existing Storage account or create a new Storage account to store the
cluster witness file.
When selecting an existing account, the dropdown list filters to display only the
storage accounts contained in the specified resource group for deployment. You
can use the same storage account with multiple clusters; each witness uses less
than a kilobyte of storage.
3. Enter the Active Directory Domain you're deploying this system into.
This must be the same fully qualified domain name (FQDN) used when the Active
Directory Domain Services (AD DS) domain was prepared for deployment.
The credentials must be identical on all servers in the system. If the current
password doesn't meet the complexity requirements (12+ characters long, a
lowercase and uppercase character, a numeral, and a special character), you must
change it on all servers before proceeding.
- Create required infrastructure volumes only - Creates only the required one infrastructure volume per server. You'll need to create workload volumes and storage paths later.
- Use existing data drives (single servers only) - Preserves existing data drives that contain a Storage Spaces pool and volumes. To use this option, you must be using a single server and have already created a Storage Spaces pool on the data drives. You also might need to later create an infrastructure volume and a workload volume and storage path if you don't already have them.
Important
Here's a summary of the volumes that are created based on the number of servers
in your system. To change the resiliency setting of the workload volumes, delete
them and recreate them, being careful not to delete the infrastructure volumes.
Tags are name/value pairs you can use to categorize resources. You can then view
consolidated billing for all resources with a given tag.
5. The validation takes about 15 minutes for a one- or two-server deployment and longer for bigger deployments. Monitor the validation progress.
If the validation has errors, resolve any actionable issues, and then select Next: Review + create.
Don't select Try again while validation tasks are running, as doing so can provide inaccurate results in this release.
2. Review the settings that will be used for deployment and then select Review +
create to deploy the system.
The Deployments page then appears, which you can use to monitor the deployment
progress.
If the progress doesn't appear, wait for a few minutes and then select Refresh. This page
may show up as blank for an extended period of time owing to an issue in this release,
but the deployment is still running if no errors show up.
Once the deployment starts, the first step in the deployment: Begin cloud deployment
can take 45-60 minutes to complete. The total deployment time for a single server is
around 1.5-2 hours while a two-node cluster takes about 2.5 hours to deploy.
Verify a successful deployment
To confirm that the system and all of its Azure resources were successfully deployed
1. In the Azure portal, navigate to the resource group into which you deployed the
system.
Number of resources | Resource type
1 | Key vault
1 | Custom location
2* | Storage account
1 per workload volume | Azure Stack HCI storage path - Azure Arc
* One storage account is created for the cloud witness and one for Key Vault audit logs. These accounts are locally redundant storage (LRS) accounts with a lock placed on them.
Rerun deployment
If your deployment fails, you can rerun the deployment. In your cluster, go to
Deployments and in the right-pane, select Rerun deployment.
You may need to connect to the system via RDP to deploy workloads. Follow these steps
to connect to your cluster via the Remote PowerShell and then enable RDP:
2. Connect to your Azure Stack HCI system via a remote PowerShell session.
PowerShell
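# Hedged sketch: open a remote PowerShell session to one of the cluster servers. The server
# name and credential here are placeholders.
$cred = Get-Credential
Enter-PSSession -ComputerName "<server name or IP>" -Credential $cred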
3. Enable RDP.
PowerShell
Enable-ASRemoteDesktop
Note
As a security best practice, keep RDP access disabled when not needed.
4. Disable RDP.
PowerShell
Disable-ASRemoteDesktop
Next steps
If you didn't create workload volumes during deployment, create workload
volumes and storage paths for each volume. For details, see Create volumes on
Azure Stack HCI and Windows Server clusters and Create storage path for Azure
Stack HCI.
Get support for Azure Stack HCI deployment issues.
Deploy an Azure Stack HCI, version
23H2 via Azure Resource Manager
deployment template
Article • 01/31/2024
This article details how to use an Azure Resource Manager template (ARM template) in the Azure portal to deploy an Azure Stack HCI system in your environment. The article also covers the prerequisites and the preparation steps required to begin the deployment.
Important
ARM template deployment of Azure Stack HCI, version 23H2 systems is targeted at deployments at scale. The intended audience for this deployment is IT administrators who have experience deploying Azure Stack HCI clusters. We recommend that you deploy a version 23H2 system via the Azure portal first and then perform subsequent deployments via the ARM template.
Prerequisites
Completion of Register your servers with Azure Arc and assign deployment
permissions. Make sure that:
All the mandatory extensions have installed successfully. The mandatory
extensions include: Azure Edge Lifecycle Manager, Azure Edge Device
Management, and Telemetry and Diagnostics.
All servers are running the same version of OS.
All the servers have the same network adapter configuration.
2. Provide a Name for the application, select a Supported account type and then
select Register.
3. Once the service principal is created, go to the Overview page. Copy the
Application (client) ID for this service principal. You encode and use this value
later.
3. Add a Description for the client secret and provide a timeframe when it Expires.
Select Add.
4. Copy the client secret value as you encode and use it later.
Note
For the application (client) ID, you need its secret value. Client secret values can't be viewed except immediately after creation. Be sure to save this value when it's created, before leaving the page.
Follow these steps to get and encode the access key for the ARM deployment template:
1. In the Azure portal, create a storage account in the same resource group that you
would use for deployment.
2. Once the Storage account is created, verify that you can see the account in the
resource group.
3. Go to the storage account that you created and then go to Access keys.
4. For key1, Key, select Show. Select the Copy to clipboard button at the right side of
the Key field.
PowerShell
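# Hedged sketch: Base64-encode a secret the way the template expects. The variable names are
# illustrative; paste the key1 value copied from the portal in the earlier step.
$storageAccountKey = "<key1 value copied from the portal>"
$encodedKey = [Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($storageAccountKey))
$encodedKey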
Output
ZXhhbXBsZXNlY3JldGtleXZhbHVldGhhdHdpbGxiZWxvbmdlcnRoYW50aGlzYW5kb3V0cHV
0d2lsbGxvb2tkaWZmZXJlbnQ=
2. The encoded output value you generate is what the ARM deployment template
expects. Make a note of this value and the name of the storage account. You'll use
these values later in the deployment process.
In addition to the storage witness access key, you also need to similarly encode the
values for the following parameters.
Parameter | Description
localaccountname, localaccountpassword | The username and password of the local administrator for all the servers in your cluster. The credentials are identical for all the servers in your system.
domainaccountname, domainaccountpassword | The new username and password created with the appropriate permissions for deployment during the Active Directory preparation step, for the AzureStackLCMUserCredential object. This account is the same as the user account used by the Azure Stack HCI deployment. For more information, see Prepare the Active Directory to get these credentials.
clientId, clientSecretValue | The application (client) ID for the SPN that you created as a prerequisite to this deployment, and the corresponding client secret value for that application ID.
Run the PowerShell script used in the earlier step to encode these values:
Verify access to the resource group for your registered Azure Stack HCI servers as
follows:
2. Select Access control (IAM) from the left-hand side of the screen and then select
Check access.
d. Filter the list by typing the prefix and name of the registered server(s) for this
deployment. Select one of the servers in your Azure Stack HCI cluster.
e. Under Current role assignments, verify the selected server has the following
roles enabled:
Azure Connected Machine Resource Manager.
Reader.
f. Select the X on the upper right to go back to the server selection screen.
4. Select another server in your Azure Stack HCI cluster. Verify the selected server has
the same roles enabled as you verified on the earlier server.
1. Go to the appropriate resource group for your Azure Stack HCI environment.
2. Select Access control (IAM) from the left-hand side of the screen.
3. Select + Add and then select Add role assignment.
4. Search for and select Azure Connected Machine Resource Manager. Select Next.
5. Leave the selection on User, group, or service principal. Select + Select members.
7. Select Select.
9. Once the role assignment is added, you are able to see it in the Notifications
activity log:
2. Select Access control (IAM) from the left-hand side of the screen.
3. In the right-pane, select + Add and then select Add role assignment.
4. Search for and select Key Vault Secrets User and select Next.
c. Filter the list by typing the prefix and name of the registered servers for your
deployment.
8. Once the roles are assigned as Key Vault Secrets User, you are able to see them in
the Notifications activity log.
Verify new role assignments
Optionally verify the role assignments you created.
1. Select Access Control (IAM) Check Access to verify the role assignments you
created.
3. Go to Key Vault Secrets User for the appropriate resource group for the first server
in your environment.
4. Go to Key Vault Secrets User for the appropriate resource group for the second
server in your environment.
Important
In this release, make sure that all the parameters contained in the JSON value are
filled out including the ones that have a null value. If there are null values, then
those need to be populated or the validation fails.
3. Near the bottom of the page, find Start with a quickstart template or template
spec section. Select Quickstart template option.
4. Use the Quickstart template (disclaimer) field to filter for the appropriate
template. Type azurestackhci/create-cluster for the filter.
6. On the Basics tab, you see the Custom deployment page. You can select the
various parameters through the dropdown list or select Edit parameters.
7. Edit parameters such as network intent or storage network intent. Once the
parameters are all filled out, Save the parameters file.
Tip
11. On the Review + Create tab, select Create. This will create the remaining
prerequisite resources and validate the deployment. Validation takes about 10
minutes to complete.
13. On the Custom deployment screen, select Edit parameters. Load up the previously
saved parameters and select Save.
14. At the bottom of the workspace, change the final value in the JSON from Validate
to Deploy, where Deployment Mode = Deploy.
15. Verify that all the fields for the ARM deployment template have been filled in by
the Parameters JSON.
17. Scroll to the bottom, and confirm that Deployment Mode = Deploy.
19. Select Create. This begins deployment, using the existing prerequisite resources
that were created during the Validate step.
20. In a new browser window, navigate to the resource group for your environment.
Select the cluster resource.
22. Refresh and watch the deployment progress from the first server (also known as
the seed server and is the first server where you deployed the cluster). Deployment
takes between 2.5 and 3 hours. Several steps take 40-50 minutes or more.
Note
If you check back on the template deployment, you will see that it eventually
times out. This is a known issue, so watching Deployments is the best way to
monitor the progress of deployment.
23. The step in deployment that takes the longest is Deploy Moc and ARB Stack. This
step takes 40-45 minutes.
Once complete, the task at the top updates with status and end time.
Next steps
Learn more:
Applies to: Azure Stack HCI, version 23H2; Windows Server 2022, Windows Server
2019, Windows Server 2016
In this article, you deploy an end-to-end Software Defined Network (SDN) infrastructure
for Azure Stack HCI, version 23H2 using SDN Express PowerShell scripts. The
infrastructure includes a highly available (HA) Network Controller (NC), and optionally, a
highly available Software Load Balancer (SLB), and a highly available Gateway (GW). The
scripts support a phased deployment, where you can deploy just the Network Controller
component to achieve a core set of functionality with minimal network requirements.
You can also deploy an SDN infrastructure using System Center Virtual Machine Manager (VMM). For more information, see Manage SDN resources in the VMM fabric.
You don't have to deploy all SDN components. See the Phased deployment section of
Plan a Software Defined Network infrastructure to determine which infrastructure
components you need, and then run the scripts accordingly.
Make sure all host servers have the Azure Stack HCI operating system installed. See
Deploy the Azure Stack HCI operating system on how to do this.
Requirements
The following requirements must be met for a successful SDN deployment:
Note
The version of the OS in your VHDX must match the version used by the Azure
Stack HCI Hyper-V hosts. This VHDX file is used by all SDN infrastructure
components.
To download an English-language version of the VHDX file, see Download the Azure
Stack HCI operating system from the Azure portal. Make sure to select English VHDX
from the Choose language dropdown list.
Currently, a non-English VHDX file isn't available for download. If you require a non-
English version, download the corresponding ISO file and convert it to VHDX using the
Convert-WindowsImage cmdlet. You must run this script from a Windows client computer.
You'll probably need to run this script as Administrator and modify the execution policy
for scripts using the Set-ExecutionPolicy command.
PowerShell
$wimpath = "E:\sources\install.wim"
$vhdpath = "D:\temp\AzureStackHCI.vhdx"
$edition=1
Convert-WindowsImage -SourcePath $wimpath -Edition $edition -VHDPath
$vhdpath -SizeBytes 500GB -DiskLayout UEFI
2. In the repository, expand the Code drop-down list, and then choose either Clone
or Download ZIP to download the SDN files to your designated deployment
computer.
Note
3. Extract the ZIP file and copy the SDNExpress folder to your deployment computer's
C:\ folder.
contoso.com\<username> or <username>@contoso.com
- RestName - DNS name used by management clients (such as Windows Admin Center) to communicate with the Network Controller.
- RestIpAddress - Static IP address for your REST API, which is allocated from your management network. It can be used for DNS resolution or REST IP-based deployments.
- HyperVHosts - Host servers to be managed by Network Controller.
- NCUsername - Network Controller account username.
- ProductKey - Product key for SDN infrastructure VMs.
- SwitchName - Only required if more than one virtual switch exists on the Hyper-V hosts.
- VMMemory - Memory (in GB) assigned to infrastructure VMs. Default is 4 GB.
- VMProcessorCount - Number of processors assigned to infrastructure VMs. Default is 8.
- Locale - If not specified, the locale of the deployment computer is used.
- TimeZone - If not specified, the local time zone of the deployment computer is used.
The NCs = @() section is used for the Network Controller VMs. Make sure that the MAC
address of each NC VM is outside the SDNMACPool range listed in the General settings.
- ComputerName - Name of the NC VM.
- HostName - Host name of the server where the NC VM is located.
- ManagementIP - Management network IP address for the NC VM.
- MACAddress - MAC address for the NC VM.
The Muxes = @() section is used for the SLB VMs. Make sure that the MACAddress and
PAMACAddress parameters of each SLB VM are outside the SDNMACPool range listed in the
General settings. Ensure that you get the PAIPAddress parameter from outside the PA
Pool specified in the configuration file, but part of the PASubnet specified in the
configuration file.
Leave this section empty ( Muxes = @() ) if not deploying the SLB component:
Gateway VM section
A minimum of two Gateway VMs (one active and one redundant) are recommended for
SDN.
The Gateways = @() section is used for the Gateway VMs. Make sure that the
MACAddress parameter of each Gateway VM is outside the SDNMACPool range listed in the
General settings. The FrontEndMac and BackendMac must be from within the SDNMACPool
range. Ensure that you get the FrontEndMac and the BackendMac parameters from the
end of the SDNMACPool range.
Leave this section empty ( Gateways = @() ) if not deploying the Gateway component:
- SDNASN - Autonomous System Number (ASN) used by SDN to peer with network switches.
- RouterASN - Gateway router ASN.
- RouterIPAddress - Gateway router IP address.
- PrivateVIPSubnet - Virtual IP address (VIP) subnet for the private VIPs.
- PublicVIPSubnet - Virtual IP address (VIP) subnet for the public VIPs.
The following other parameters are used by Gateway VMs only. Leave these values blank
if you aren't deploying Gateway VMs:
Note
If you fill in a value for RedundantCount, ensure that the total number of
gateway VMs is at least one more than the RedundantCount. By default, the
RedundantCount is 1, so you must have at least 2 gateway VMs to ensure
that there is at least 1 active gateway to host gateway connections.
Here's how Hyper-V Network Virtualization (HNV) Provider logical network allocates IP
addresses. Use this to plan your address space for the HNV Provider network.
1. Review the README.md file for late-breaking information on how to run the
deployment script.
2. Run the following command from a user account with administrative credentials
for the cluster host servers:
PowerShell
SDNExpress\scripts\SDNExpress.ps1 -ConfigurationDataFile MultiNodeSampleConfig.psd1 -Verbose
3. After the NC VMs are created, configure dynamic DNS updates for the Network
Controller cluster name on the DNS server. For more information, see Dynamic
DNS updates.
Configuration sample files
The following configuration sample files for deploying SDN are available on the
Microsoft SDN GitHub repository.
Next steps
Manage VMs
This article describes how to deploy Software Defined Networking (SDN) through
Windows Admin Center after you've deployed your Azure Stack HCI, version 23H2 cluster
via the Azure portal.
Windows Admin Center enables you to deploy all the SDN infrastructure components
on your existing Azure Stack HCI cluster, in the following deployment order:
Network Controller
Software Load Balancer (SLB)
Gateway
Alternatively, you can deploy the entire SDN infrastructure through the SDN Express
scripts.
You can also deploy an SDN infrastructure using System Center Virtual Machine
Manager (VMM). For more information, see Manage SDN resources in the VMM fabric.
Requirements
The following requirements must be met for a successful SDN deployment:
7 Note
The version of the OS in your VHDX must match the version used by the Azure
Stack HCI Hyper-V hosts. This VHDX file is used by all SDN infrastructure
components.
To download an English-language version of the VHDX file, see Download the Azure
Stack HCI operating system from the Azure portal. Make sure to select English VHDX
from the Choose language dropdown list.
Currently, a non-English VHDX file isn't available for download. If you require a non-
English version, download the corresponding ISO file and convert it to VHDX using the
Convert-WindowsImage cmdlet. You must run this script from a Windows client computer.
You'll probably need to run this script as Administrator and modify the execution policy
for scripts using the Set-ExecutionPolicy command.
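For example, you might first relax the execution policy for the current session only (a
minimal sketch; pick the policy that matches your organization's requirements):
PowerShell
# Allow locally created scripts to run in this PowerShell session only.
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process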
PowerShell
$wimpath = "E:\sources\install.wim"
$vhdpath = "D:\temp\AzureStackHCI.vhdx"
$edition = 1
Convert-WindowsImage -SourcePath $wimpath -Edition $edition -VHDPath $vhdpath -SizeBytes 500GB -DiskLayout UEFI
Deploy SDN Network Controller
SDN Network Controller deployment is a functionality of the SDN Infrastructure
extension in Windows Admin Center. Complete the following steps to deploy Network
Controller on your existing Azure Stack HCI cluster.
1. In Windows Admin Center, under Tools, select Settings, and then select
Extensions.
2. On the Installed Extensions tab, verify that the SDN Infrastructure extension is
installed. If not, install it.
3. In Windows Admin Center, under Tools, select SDN Infrastructure, then select Get
Started.
4. Under Cluster settings, under Host, enter a name for the Network Controller. This
is the DNS name used by management clients (such as Windows Admin Center) to
communicate with Network Controller. You can also use the default populated
name.
5. Specify a path to the Azure Stack HCI VHD file. Use Browse to find it quicker.
9. Under Credentials, enter the username and password used to join the Network
Controller VMs to the cluster domain.
11. Under Advanced, enter the path to the VMs. You can also use the default
populated path.
12. Enter values for MAC address pool start and MAC address pool end. You can also
use the default populated values. This is the MAC pool used to assign MAC
addresses to VMs attached to SDN networks.
14. Wait until the wizard completes its job. Stay on this page until all progress tasks
are complete, and then select Finish.
15. After the Network Controller VMs are created, configure dynamic DNS updates for
the Network Controller cluster name on the DNS server. For more information, see
Dynamic DNS updates.
Redeploy SDN Network Controller
If the Network Controller deployment fails or you want to deploy it again, do the
following:
1. Delete all Network Controller VMs and their VHDs from all server nodes.
2. Remove the following registry key from all hosts by running this command:
PowerShell
Remove-ItemProperty -path
'HKLM:\SYSTEM\CurrentControlSet\Services\NcHostAgent\Parameters\' -Name
Connections
3. After removing the registry key, remove the cluster from the Windows Admin
Center management, and then add it back.
7 Note
If you don't do this step, you may not see the SDN deployment wizard in
Windows Admin Center.
4. (Additional step only if you plan to uninstall Network Controller and not deploy it
again) Run the following cmdlet on all the servers in your Azure Stack HCI cluster,
and then skip the last step.
PowerShell
7 Note
Network Controller must be set up before you configure SLB.
1. In Windows Admin Center, under Tools, select Settings, and then select
Extensions.
2. On the Installed Extensions tab, verify that the SDN Infrastructure extension is
installed. If not, install it.
3. In Windows Admin Center, under Tools, select SDN Infrastructure, then select Get
Started on the Load Balancer tab.
4. Under Load Balancer Settings, under Front-End subnets, provide the following:
Public VIP subnet prefix. These can be public internet subnets. They serve as
the front-end IP addresses for accessing workloads behind the load balancer,
which use IP addresses from a private backend network.
Private VIP subnet prefix. These don’t need to be routable on the public
Internet because they are used for internal load balancing.
5. Under BGP Router Settings, enter the SDN ASN for the SLB. This ASN is used to
peer the SLB infrastructure with the Top of the Rack switches to advertise the
Public VIP and Private VIP IP addresses.
6. Under BGP Router Settings, enter the IP Address and ASN of the Top of Rack
switch. SLB infrastructure needs these settings to create a BGP peer with the
switch. If you have an additional Top of Rack switch that you want to peer the SLB
infrastructure with, add IP Address and ASN for that switch as well.
7. Under VM Settings, specify a path to the Azure Stack HCI VHDX file. Use Browse to
find it quicker.
9. Under Network, enter the VLAN ID of the management network. SLB needs
connectivity to the same management network as the Hyper-V hosts so that it can
communicate with and configure the hosts.
For DHCP, enter the name for the Software Load Balancer VMs. You can also use
the default populated names.
11. Under Credentials, enter the username and password that you used to join the
Software Load Balancer VMs to the cluster domain.
13. Under Advanced, enter the path to the VMs. You can also use the default
populated path.
15. Wait until the wizard completes its job. Stay on this page until all progress tasks
are complete, and then select Finish.
7 Note
Network Controller and SLB must be set up before you configure Gateways.
1. In Windows Admin Center, under Tools, select Settings, then select Extensions.
2. On the Installed Extensions tab, verify that the SDN Infrastructure extension is
installed. If not, install it.
3. In Windows Admin Center, under Tools, select SDN Infrastructure, then select Get
Started on the Gateway tab.
4. Under Define the Gateway Settings, under Tunnel subnets, provide the GRE
Tunnel Subnets. IP addresses from this subnet are used for provisioning on the
SDN gateway VMs for GRE tunnels. If you don't plan to use GRE tunnels, put any
placeholder subnets in this field.
5. Under BGP Router Settings, enter the SDN ASN for the Gateway. This ASN is used
to peer the gateway VMs with the Top of the Rack switches to advertise the GRE IP
addresses. This field is auto populated to the SDN ASN used by SLB.
6. Under BGP Router Settings, enter the IP Address and ASN of the Top of Rack
switch. Gateway VMs need these settings to create a BGP peer with the switch.
These fields are auto populated from the SLB deployment wizard. If you have an
additional Top of Rack switch that you want to peer the gateway VMs with, add IP
Address and ASN for that switch as well.
7. Under Define the Gateway VM Settings, specify a path to the Azure Stack HCI
VHDX file. Use Browse to find it quicker.
9. Enter the value for Redundant Gateways. Redundant gateways don't host any
gateway connections. In the event of a failure or restart of an active gateway VM,
gateway connections from the active VM are moved to the redundant gateway, which
is then marked as active. In a production deployment, we strongly recommend that
you have at least one redundant gateway.
7 Note
Ensure that the total number of gateway VMs is at least one more than the
number of redundant gateways. Otherwise, you won't have any active
gateways to host gateway connections.
10. Under Network, enter the VLAN ID of the management network. Gateways need
connectivity to the same management network as the Hyper-V hosts and Network
Controller VMs.
For DHCP, enter the name for the Gateway VMs. You can also use the default
populated names.
12. Under Credentials, enter the username and password used to join the Gateway
VMs to the cluster domain.
14. Under Advanced, provide the Gateway Capacity. It is auto populated to 10 Gbps.
Ideally, set this value to the approximate throughput available to the gateway VM.
This value depends on factors such as the physical NIC speed on the host machine
and the throughput requirements of other VMs running on that host.
15. Enter the path to the VMs. You can also use the default populated path.
Next steps
Manage your VMs. See Manage VMs.
Manage Software Load Balancers. See Manage Software Load Balancers.
Manage Gateway connections. See Manage Gateway Connections.
Troubleshoot SDN deployment. See Troubleshoot Software Defined Networking
deployment via Windows Admin Center.
Feedback
Was this page helpful? Yes No
This quickstart guides you through setting up an Azure Kubernetes Service (AKS) host.
You create Kubernetes clusters on Azure Stack HCI and Windows Server using
PowerShell. To use Windows Admin Center instead, see Set up with Windows Admin
Center.
7 Note
If you have pre-staged cluster service objects and DNS records, see Deploy an
AKS host with prestaged cluster service objects and DNS records using
PowerShell.
If you have a proxy server, see Set up an AKS host and deploy a workload
cluster using PowerShell and a proxy server.
Installing AKS on Azure Stack HCI after setting up Arc VMs is not supported.
For more information, see known issues with Arc VMs. If you want to install
AKS on Azure Stack HCI, you must uninstall Arc Resource Bridge and then
install AKS on Azure Stack HCI. You can deploy a new Arc Resource Bridge
again after you clean up and install AKS, but it won't remember the VM
entities you created previously.
PowerShell
7 Note
You must close all existing PowerShell windows again to ensure that loaded
modules are refreshed. Don't continue to the next step until you close all open
PowerShell windows.
2. Install the AKS-HCI PowerShell module by running the following command on all
nodes in your Azure Stack HCI or Windows Server cluster:
PowerShell
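# Sketch only: the AksHci module is assumed to be installed from the PowerShell Gallery.
# Add switches such as -AcceptLicense or -Force if your environment requires them.
Install-Module -Name AksHci -Repository PSGallery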
You must close all existing PowerShell windows again to ensure that loaded
modules are refreshed. Don't continue to the next step until you close all open
PowerShell windows.
You can use a helper script to delete old AKS-HCI PowerShell modules to avoid any
PowerShell version-related issues in your AKS deployment.
To view the complete list of AksHci PowerShell commands, see AksHci PowerShell.
Register the resource provider to your
subscription
Before you register, enable the appropriate resource providers in Azure for AKS enabled
by Azure Arc. To do that, run the following PowerShell commands:
PowerShell
Connect-AzAccount
PowerShell
Run the following commands to register your Azure subscription to Azure Arc enabled
Kubernetes resource providers. This registration process can take up to 10 minutes, but
it only needs to be performed once on a specific subscription:
PowerShell
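# Sketch only: these are the provider namespaces commonly required for Arc-enabled
# Kubernetes; confirm the exact list against the current AKS enabled by Arc documentation.
Register-AzResourceProvider -ProviderNamespace Microsoft.Kubernetes
Register-AzResourceProvider -ProviderNamespace Microsoft.KubernetesConfiguration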
PowerShell
Initialize-AksHciNode
To get the names of your available switches, run the following command. Make sure the
SwitchType of your VM switch is "External":
PowerShell
Get-VMSwitch
Sample output:
Output
To create a virtual network for the nodes in your deployment to use, create an
environment variable with the New-AksHciNetworkSetting PowerShell command. This
virtual network is used later to configure a deployment that uses static IP. If you want to
configure your AKS deployment with DHCP, see New-AksHciNetworkSetting for
examples. You can also review some networking node concepts.
PowerShell
# Static IP
$vnet = New-AksHciNetworkSetting -name myvnet -vSwitchName "extSwitch" `
    -k8sNodeIpPoolStart "172.16.10.1" -k8sNodeIpPoolEnd "172.16.10.255" `
    -vipPoolStart "172.16.255.0" -vipPoolEnd "172.16.255.254" `
    -ipAddressPrefix "172.16.0.0/16" -gateway "172.16.0.1" `
    -dnsServers "172.16.0.1" -vlanId 9
7 Note
You must customize the values shown in this example command for your
environment.
To create the configuration settings for the AKS host, use the Set-AksHciConfig
command. You must specify the imageDir , workingDir , and cloudConfigLocation
parameters. If you want to reset your configuration details, run the command again with
new parameters.
PowerShell
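# Sketch only: all paths are placeholders, and passing the $vnet object created earlier
# via -vnet is an assumption; adjust the values for your cluster storage layout.
Set-AksHciConfig -imageDir C:\ClusterStorage\Volume1\Images `
    -workingDir C:\ClusterStorage\Volume1\ImageStore `
    -cloudConfigLocation C:\ClusterStorage\Volume1\Config `
    -vnet $vnet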
7 Note
You must customize the values shown in this example command for your
environment.
PowerShell
Set-AksHciRegistration -subscriptionId "<subscriptionId>" -resourceGroupName
"<resourceGroupName>"
After you configure your deployment, you must start it in order to install the AKS
agents/services and the AKS host. To begin deployment, run the following command:
Tip
PowerShell
Install-AksHci
2 Warning
During installation of your AKS host, a Kubernetes - Azure Arc resource type is
created in the resource group that's set during registration. Do not delete this
resource, as it represents your AKS host. You can identify the resource by checking
its distribution field for a value of aks_management . If you delete this resource, it
results in an out-of-policy deployment.
For more information about node pools, see Use node pools in AKS.
PowerShell
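# Sketch only: creates a workload cluster with a single Linux node pool, matching the
# sample output shown later in this quickstart. The names are placeholders.
New-AksHciCluster -name mycluster -nodePoolName linuxnodepool -nodeCount 1 -osType Linux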
PowerShell
Get-AksHciCluster
Output
ProvisioningState : provisioned
KubernetesVersion : v1.20.7
NodePools : linuxnodepool
WindowsNodeCount : 0
LinuxNodeCount : 0
ControlPlaneNodeCount : 1
Name : mycluster
To get a list of the node pools in the cluster, run the following Get-AksHciNodePool
PowerShell command:
PowerShell
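# Sketch only: the cluster name matches the sample cluster created earlier.
Get-AksHciNodePool -clusterName mycluster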
Output
ClusterName : mycluster
NodePoolName : linuxnodepool
Version : v1.20.7
OsType : Linux
NodeCount : 1
VmSize : Standard_K8S3_v1
Phase : Deployed
PowerShell
Connect-AzAccount
Enable-AksHciArcConnection -name mycluster
7 Note
If you encounter issues or error messages during the installation process, see
installation known issues and errors for more information.
PowerShell
To scale the worker nodes in your node pool, run the following command:
PowerShell
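# Sketch only: scales the linuxnodepool node pool of the sample cluster to 3 worker nodes.
Set-AksHciNodePool -clusterName mycluster -name linuxnodepool -count 3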
In previous versions of AKS on Azure Stack HCI and Windows Server, the Set-
AksHciCluster command was also used to scale worker nodes. Now that AKS is
introducing node pools in workload clusters, you can only use this command to
scale worker nodes if your cluster was created with the old parameter set in New-
AksHciCluster.
PowerShell
PowerShell
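# Sketch only: removes the sample workload cluster; replace the name with your own.
Remove-AksHciCluster -name mycluster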
7 Note
Make sure that your cluster is deleted by looking at the existing VMs in the Hyper-
V Manager. If they are not deleted, then you can manually delete the VMs. Then,
run the command Restart-Service wssdagent . Run this command on each node in
the failover cluster.
Get logs
To get logs from all your pods, run the Get-AksHciLogs command. This command
creates an output zipped folder called akshcilogs.zip in your working directory. The full
path to the akshcilogs.zip folder is the output after running the following command:
PowerShell
Get-AksHciLogs
In this quickstart, you learned how to set up an AKS host and create Kubernetes clusters
using PowerShell. You also learned how to use PowerShell to scale a Kubernetes cluster
and to access clusters with kubectl .
Next steps
Prepare an application
Deploy a Windows application on your Kubernetes cluster
Set up multiple administrators
Deploy Azure Virtual Desktop
Article • 01/24/2024
) Important
Azure Virtual Desktop for Azure Stack HCI is currently in preview for Azure
Government and Azure China. See the Supplemental Terms of Use for Microsoft
Azure Previews for legal terms that apply to Azure features that are in beta,
preview, or otherwise not yet released into general availability.
This article shows you how to deploy Azure Virtual Desktop on Azure or Azure Stack HCI
by using the Azure portal, Azure CLI, or Azure PowerShell. To deploy Azure Virtual
Desktop, you create a host pool, a workspace, and an application group, and then assign
users to the application group.
You can do all these tasks in a single process when using the Azure portal, but you can
also do them separately.
For more information on the terminology used in this article, see Azure Virtual Desktop
terminology, and to learn about the service architecture and resilience of the Azure
Virtual Desktop service, see Azure Virtual Desktop service architecture and resilience.
Tip
Prerequisites
Review the Prerequisites for Azure Virtual Desktop for a general idea of what's required
and supported, such as operating systems (OS), virtual networks, and identity providers.
It also includes a list of the supported Azure regions in which you can deploy host pools,
workspaces, and application groups. This list of regions is where the metadata for the
host pool can be stored. However, session hosts can be located in any Azure region, and
on-premises with Azure Stack HCI. For more information about the types of data and
locations, see Data locations for Azure Virtual Desktop.
Select the relevant tab for your scenario for more prerequisites.
Portal
The Azure account you use must be assigned the following built-in role-based
access control (RBAC) roles as a minimum on a resource group or subscription
to create the following resource types. If you want to assign the roles to a
resource group, you need to create this first.
ノ Expand table
Alternatively, you can assign the Contributor RBAC role to create all of these
resource types.
To assign users to an application group, your account also needs permission to
create role assignments on the application group. Built-in RBAC roles that include
this permission are User Access Administrator and Owner.
Portal
2. In the search bar, enter Azure Virtual Desktop and select the matching service
entry.
ノ Expand table
Parameter Value/Description
Subscription Select the subscription you want to create the host pool in from the
drop-down list.
Resource Select an existing resource group or select Create new and enter a
group name.
Host pool Enter a name for the host pool, for example hp01.
name
Location Select the Azure region where you want to create your host pool.
Parameter Value/Description
Preferred app Select the preferred application group type for this host pool from
group type Desktop or RemoteApp. A Desktop application group is created
automatically when using the Azure portal, with whichever
application group type you set as the preferred.
Host pool type Select whether you want your host pool to be Personal or Pooled.
If you select Pooled, two new options appear for Load balancing
algorithm and Max session limit.
- For Max session limit, enter the maximum number of users you
want load-balanced to a single session host.
Tip
Once you've completed this tab, you can continue to optionally create
session hosts, a workspace, register the default desktop application
group from this host pool, and enable diagnostics settings by selecting
Next: Virtual Machines. Alternatively, if you want to create and configure
these separately, select Next: Review + create and go to step 9.
5. Optional: On the Virtual machines tab, if you want to add session hosts,
complete the following information, depending on if you want to create
session hosts on Azure or Azure Stack HCI:
ノ Expand table
Parameter Value/Description
Resource group This automatically defaults to the same resource group you
chose your host pool to be in on the Basics tab, but you can
also select an alternative.
Name prefix Enter a name for your session hosts, for example hp01-sh.
This value is used as the prefix for your session hosts. Each
session host has a suffix of a hyphen and then a sequential
number added to the end, for example hp01-sh-0.
Virtual machine Select the Azure region where you want to deploy your
location session hosts. This must be the same region that your virtual
network is in.
Image Select the OS image you want to use from the list, or select
See all images to see more, including any images you've
created and stored as an Azure Compute Gallery shared image
or a managed image.
Virtual machine Select a SKU. If you want to use a different SKU, select Change
size size, then select from the list.
Parameter Value/Description
Number of VMs Enter the number of virtual machines you want to deploy. You
can deploy up to 400 session hosts at this point if you wish
(depending on your subscription quota), or you can add more
later.
OS disk type Select the disk type to use for your session hosts. We
recommend Premium SSD for production workloads.
Network and
security
Network security Select whether you want to use a network security group
group (NSG).
Public inbound You can select a port to allow from the list. Azure Virtual
ports Desktop doesn't require public inbound ports, so we
recommend you select No.
Domain to join
Virtual Machine
Administrator
account
Username Enter a name to use as the local administrator account for the
new session hosts.
Custom
configuration
ノ Expand table
Parameter Value/Description
Resource group This automatically defaults to the resource group you chose
your host pool to be in on the Basics tab, but you can also
select an alternative.
Name prefix Enter a name for your session hosts, for example hp01-sh.
This value is used as the prefix for your session hosts. Each
Parameter Value/Description
Custom location Select the Azure Stack HCI cluster where you want to deploy
your session hosts from the drop-down list.
Images Select the OS image you want to use from the list, or select
Manage VM images to manage the images available on the
cluster you selected.
Number of VMs Enter the number of virtual machines you want to deploy.
You can add more later.
Virtual processor Enter the number of virtual processors you want to assign to
count each session host. This value isn't validated against the
resources available in the cluster.
Memory type Select Static for a fixed memory allocation, or Dynamic for a
dynamic memory allocation.
Memory (GB) Enter a number for the amount of memory in GB you want
to assign to each session host. This value isn't validated
against the resources available in the cluster.
Network and
security
Domain to join
AD domain join Enter the User Principal Name (UPN) of an Active Directory
UPN user that has permission to join the session hosts to your
domain.
Specify domain or Select yes if you want to join session hosts to a specific
unit domain or place them in a specific organizational unit (OU).
If you select no, the suffix of the UPN is used as the domain.
Virtual Machine
Administrator
account
ノ Expand table
Parameter Value/Description
Register desktop Select Yes. This registers the default desktop application group
app group to the selected workspace.
To this workspace Select an existing workspace from the list, or select Create new
and enter a name, for example ws01.
ノ Expand table
Parameter Value/Description
Choosing destination details to send logs to Select one of the following destinations:
8. Optional: On the Tags tab, you can enter any name/value pairs you need, then
select Next: Review + create.
9. On the Review + create tab, ensure validation passes and review the
information that is used during deployment.
11. Once the host pool has been created, select Go to resource to go to the
overview of your new host pool, then select Properties to view its properties.
Post deployment
If you also added session hosts to your host pool, there's some extra configuration
you need to do, which is covered in the following sections.
Licensing
To ensure your session hosts have licenses applied correctly, you'll need to do the
following tasks:
If you have the correct licenses to run Azure Virtual Desktop workloads, you
can apply a Windows or Windows Server license to your session hosts as part
of Azure Virtual Desktop and run them without paying for a separate license.
This is automatically applied when creating session hosts with the Azure
Virtual Desktop service, but you may have to apply the license separately if
you create session hosts outside of Azure Virtual Desktop. For more
information, see Apply a Windows license to session host virtual machines.
If your session hosts are running a Windows Server OS, you'll also need to
issue them a Remote Desktop Services (RDS) Client Access License (CAL) from
a Remote Desktop Licensing Server. For more information, see License your
RDS deployment with client access licenses (CALs).
For session hosts on Azure Stack HCI, you must license and activate the virtual
machines you use before you use them with Azure Virtual Desktop. For
activating Windows 10 and Windows 11 Enterprise multi-session, and
Windows Server 2022 Datacenter: Azure Edition, use Azure verification for
VMs. For all other OS images (such as Windows 10 and Windows 11
Enterprise, and other editions of Windows Server), you should continue to use
existing activation methods. For more information, see Activate Windows
Server VMs on Azure Stack HCI.
For more information about using Microsoft Entra joined session hosts, see
Microsoft Entra joined session hosts.
7 Note
If you created a host pool, workspace, and registered the default desktop
application group from this host pool in the same process, go to the
section Assign users to an application group and complete the rest of
the article. A Desktop application group is created automatically when
using the Azure portal, with whichever application group type you set as
the preferred.
If you created a host pool and workspace in the same process, but didn't
register the default desktop application group from this host pool, go to
the section Create an application group and complete the rest of the
article.
Portal
1. From the Azure Virtual Desktop overview, select Workspaces, then select
Create.
ノ Expand table
Parameter Value/Description
Subscription Select the subscription you want to create the workspace in from
the drop-down list.
Resource group Select an existing resource group or select Create new and enter a
name.
Location Select the Azure region where you want to deploy your workspace.
Tip
Once you've completed this tab, you can continue to optionally register
an existing application group to this workspace, if you have one, and
enable diagnostics settings by selecting Next: Application groups.
Alternatively, if you want to create and configure these separately, select
Review + create and go to step 9.
Parameter Value/Description
Register Select Yes, then select + Register application groups. In the new
application pane that opens, select the Add icon for the application group(s)
groups you want to add, then select Select.
ノ Expand table
Parameter Value/Description
Choosing destination details to send logs to Select one of the following destinations:
5. Optional: On the Tags tab, you can enter any name/value pairs you need, then
select Next: Review + create.
6. On the Review + create tab, ensure validation passes and review the
information that is used during deployment.
Portal
1. From the Azure Virtual Desktop overview, select Application groups, then
select Create.
ノ Expand table
Parameter Value/Description
Subscription Select the subscription you want to create the application group
in from the drop-down list.
Resource group Select an existing resource group or select Create new and
enter a name.
Host pool Select the host pool for the application group.
Application group Select the application group type for the host pool you selected
type from Desktop or RemoteApp.
Application group Enter a name for the application group, for example Session
name Desktop.
Tip
Once you've completed this tab, select Next: Review + create. You don't
need to complete the other tabs to create an application group, but you'll
need to create a workspace, add an application group to a workspace
and assign users to the application group before users can access the
resources.
If you created an application group for RemoteApp, you will also need to
add applications to it. For more information, see Publish applications.
ノ Expand table
Parameter Value/Description
Register application Select Yes. This registers the default desktop application group
group to the selected workspace.
ノ Expand table
Parameter Value/Description
Choosing destination details to send logs to Select one of the following destinations:
7. Optional: On the Tags tab, you can enter any name/value pairs you need, then
select Next: Review + create.
8. On the Review + create tab, ensure validation passes and review the
information that is used during deployment.
10. Once the application group has been created, select Go to resource to go to
the overview of your new application group, then select Properties to view its
properties.
Portal
Here's how to add an application group to a workspace using the Azure portal.
1. From the Azure Virtual Desktop overview, select Workspaces, then select the
name of the workspace you want to assign an application group to.
2. From the workspace overview, select Application groups, then select + Add.
3. Select the plus icon (+) next to an application group from the list. Only
application groups that aren't already assigned to a workspace are listed.
Portal
4. Select + Add, then search for and select the user account or user group you
want to assign to this application group.
Next steps
Portal
Once you've deployed Azure Virtual Desktop, your users can connect. There are
several platforms you can connect from, including from a web browser. For more
information, see Remote Desktop clients for Azure Virtual Desktop and Connect to
Azure Virtual Desktop with the Remote Desktop Web client.
This article describes how to create an Arc VM starting with the VM images that you've
created on your Azure Stack HCI cluster. You can create Arc VMs using the Azure CLI,
Azure portal, or Azure Resource Manager (ARM) template.
Prerequisites
Before you create an Azure Arc-enabled VM, make sure that the following prerequisites
are completed.
Azure CLI
Azure CLI
Follow these steps on the client running az CLI that is connected to your Azure
Stack HCI cluster.
Azure CLI
az login --use-device-code
Azure CLI
Create a Windows VM
Depending on the type of network interface that you created, you can create a
VM that has a network interface with static IP allocation or one with dynamic IP
allocation.
7 Note
If you need more than one network interface with static IPs for your VM, create
the interface(s) now before you create the VM. Adding a network interface with
static IP, after the VM is provisioned, is not supported.
Here we'll create a VM that uses specific memory and processor counts on a
specified storage path.
Azure CLI
$vmName = "myhci-vm"
$subscription = "<Subscription ID>"
$resource_group = "myhci-rg"
$customLocationName = "myhci-cl"
$customLocationID = "/subscriptions/$subscription/resourceGroups/$resource_group/providers/Microsoft.ExtendedLocation/customLocations/$customLocationName"
$location = "eastus"
$computerName = "mycomputer"
$userName = "myhci-user"
$password = "<Password for the VM>"
$imageName = "ws22server"
$nicName = "myhci-vnic"
$storagePathName = "myhci-sp"
$storagePathId = "/subscriptions/<Subscription ID>/resourceGroups/myhci-rg/providers/Microsoft.AzureStackHCI/storagecontainers/myhci-sp"
ノ Expand table
Parameters Description
name Name for the VM that you create for your Azure Stack HCI cluster.
Make sure to provide a name that follows the Rules for Azure
resources.
admin- Username for the user on the VM you're deploying on your Azure
username Stack HCI cluster.
admin- Password for the user on the VM you're deploying on your Azure
password Stack HCI cluster.
resource-group Name of the resource group where you create the VM. For ease of
management, we recommend that you use the same resource
group as your Azure Stack HCI cluster.
custom-location Use this to provide the custom location associated with your Azure
Stack HCI cluster where you're creating this VM.
authentication- Type of authentication to use with the VM. The accepted values are
type all , password , and ssh . Default is password for Windows and SSH
public key for Linux. Use all to enable both ssh and password
authentication.
nics Names or the IDs of the network interfaces associated with your
VM. You must have at least one network interface when you create
a VM to enable guest management.
storage-path-id The associated storage path where the VM configuration and the
data are saved.
proxy- Use this optional parameter to configure a proxy server for your
configuration VM. For more information, see Create a VM with proxy configured.
Azure CLI
The VM created has guest management enabled by default. If for any reason
guest management fails during VM creation, you can follow the steps in
Enable guest management on Arc VM to enable it after the VM creation.
In this example, the storage path was specified using the --storage-path-id flag
and that ensured that the workload data (including the VM, VM image, non-OS
data disk) is placed in the specified storage path.
If the flag isn't specified, the workload (VM, VM image, non-OS data disk) is
automatically placed in a high availability storage path.
Create a Linux VM
To create a Linux VM, use the same command that you used to create the Windows
VM.
) Important
Setting the proxy server during VM creation is not supported for Linux VMs.
Azure CLI
ノ Expand table
Parameters Description
http_proxy HTTP URLs for proxy server. An example URL is: http://proxy.example.com:3128 .
https_proxy HTTPS URLs for proxy server. The server may still use an HTTP address as shown
in this example: http://proxy.example.com:3128 .
cert_file_path Name of the certificate file path for your proxy server. An example is:
C:\Users\Palomino\proxycert.crt .
Azure CLI
For proxy authentication, you can pass the username and password combined in a
URL as follows: "http://username:[email protected]:3128" .
Depending on the PowerShell version you are running on your VM, you may need
to enable the proxy settings for your VM.
For Windows VMs running PowerShell version 5.1 or earlier, sign in to the VM
after the creation. Run the following command to enable proxy:
PowerShell
netsh winhttp set proxy proxy-server="http=myproxy;https=sproxy:88" bypass-list="*.foo.com"
After the proxy is enabled, you can then Enable guest management.
For Windows VMs running PowerShell version later than 5.1, proxy settings
passed during VM creation are only used for enabling Arc guest management.
After the VM is created, sign in to the VM and run the above command to
enable proxy for other applications.
The Arc VMs on Azure Stack HCI are extended from Arc-enabled servers and can use
system-assigned managed identity to access other Azure resources that support
Microsoft Entra ID-based authentication. For example, the Arc VMs can use a system-
assigned managed identity to access the Azure Key Vault.
Next steps
Install and manage VM extensions.
Troubleshoot Arc VMs.
Frequently Asked Questions for Arc VM management.
About updates for Azure Stack HCI,
version 23H2
Article • 02/02/2024
This article describes the new update feature for this release, benefits of the feature, and
how to keep various pieces of your Azure Stack HCI solution up to date.
The approach in this release provides a flexible foundation to integrate and manage
various aspects of the Azure Stack HCI solution in one place. The orchestrator for
updates is first installed during deployment and enables the new deployment
experience including the management of the OS, core agents and services, and the
solution extension.
Here's an example of a new cluster deployment using the updates in this release:
In this solution the Azure Stack HCI OS, agents and services, drivers, and firmware are
automatically updated.
Some new agents and services can't be updated outside the orchestrator and availability
of those updates depends on the specific feature. You might need to follow different
processes to apply updates depending on the services you use.
Benefits
This new approach:
Helps avoid downtime and effects on workloads with comprehensive health checks
before and during an update.
Improves reliability with automatic retry and the remediation of known issues.
Provides a consistent experience whether you manage updates locally or via the
Azure portal, because both interfaces share a common back end.
Lifecycle cadence
The Azure Stack HCI platform follows the Modern Lifecycle policy. The Modern Lifecycle
policy defines the products and services that are continuously serviced and supported.
To stay current with this policy, you must stay within six months of the most recent
release. To learn more about the support windows, see Azure Stack HCI release
information.
Microsoft might release the following types of updates for the Azure Stack HCI platform:
ノ Expand table
Baseline Quarterly Baseline updates include new features and improvements. They
Updates typically require host system reboots and might take longer.
Hotfixes As needed Hotfixes address blocking issues that could prevent regular
monthly or baseline updates. To fix critical or security issues,
hotfixes might be released sooner than monthly.
Solution As needed Solution Builder Extension² provides driver, firmware, and other
Builder partner content specific to the system solution used. It might
Extension require host system reboots.
¹ Quality updates released based on packages that contain monthly updates. These
updates supersede the previous month's updates and contain both security and non-
security changes.
Sometimes you might see updates to the latest patch level of your current baseline. If a
new baseline is available, you might see the baseline update itself or the latest patch
level of the baseline. Your cluster must stay within six months of the most recent
baseline to remain supported.
The next sections provide an overview of components, along with methods and
interfaces for updating your solution.
Operating System: These updates help you stay productive and protected. They
provide users and IT administrators with the security fixes they need and protect
devices so that unpatched vulnerabilities can't be exploited.
Agents and services: The orchestrator updates its own agents to ensure it has the
latest fixes corresponding to the update. The Azure Connected Machine agent and Arc
Resource Bridge, along with their dependencies, are updated automatically to the
latest validated version when the Azure Stack HCI system is updated.
Solution Builder Extension: Hardware vendors might choose to integrate with this
feature to enhance the update management experience for their customers.
If a hardware vendor integrates with our update validation and release platform,
the solution extension content includes the drivers and firmware, and the
orchestrator manages the necessary system reboots within the same
maintenance window. You can spend less time searching for updates and
experience fewer maintenance windows.
This solution is the recommended way to update your Azure Stack HCI cluster.
7 Note
Customer workloads aren't covered by this update solution.
PowerShell
The PowerShell procedures apply to both single-server and multi-server clusters running
with the orchestrator installed. For more information, see Update your Azure Stack HCI
solution via PowerShell.
Next steps
Learn to Understand update phases.
This article describes the various phases of solution updates that are applied to your
Azure Stack HCI cluster to keep it up-to-date. This information is applicable to Azure
Stack HCI, version 23H2.
The procedure in this article applies to both a single server and a multi-server cluster
that is running the latest version including the orchestrator.
The new update feature automates the update process for agents, services, operating
system content, and Solution Extension content, with the goal of maintaining availability
by shifting workloads around throughout the update process when needed.
Updates not requiring reboots - The updates that can be applied to your Azure
Stack HCI cluster without any server reboots in the cluster.
Updates that require reboots - The updates that might need a server reboot in
your Azure Stack HCI cluster. Cluster-Aware Updating is used to reboot servers in
the cluster one by one, ensuring the availability of the cluster during the update
process.
The updates consist of several phases: discovering the update, staging the content,
deploying the update, and reviewing the installation. Each phase might not require your
input but distinct actions occur in each phase.
You can apply these updates via PowerShell or the Azure portal. Regardless of the
interface you choose, the subsequent sections summarize what happens within each
phase of an update. The following diagram shows what actions you might need to take
during each phase and what actions Azure Stack HCI takes through the update
operation.
The release notes include the update contents, changes, known issues, and links to any
external downloads that might be required (for example, drivers and firmware). For
more information, see the Latest release notes.
After Microsoft releases the update, your Azure Stack HCI update platform will
automatically detect the update. Though you don't need to scan for updates, you must
go to the Updates page in your management surface to see the new update’s details.
Depending on the hardware in your cluster and the scope of an update bundle, you
might need to acquire and sideload extra content to proceed with an update. The
operating system and agents and services content are provided by Microsoft, while
depending on your specific solution and the OEM, the Solution Extension might require
an extra download from the hardware OEM. If more is required, the installation flow
prompts you for the content.
A subset of these checks can be initiated outside the update process. Because new
checks can be included in each update, these readiness checks are executed after the
update content is downloaded and before it begins installing.
If the readiness checks detect a blocking condition, the issues must be remediated
before the update can proceed.
If the readiness checks result in warnings, the update could take longer or affect
your workloads. You might need to acknowledge the potential impact and bypass the
warning before the update can proceed.
7 Note
In this release, you can only initiate immediate install of the updates. Scheduling of
updates is not supported.
The new update solution includes retry and remediation logic. It attempts to fix update
issues automatically and in a non-disruptive way, but sometimes manual intervention is
required. For more information, see Troubleshooting updates.
7 Note
Once you remediate the issue, you need to rerun the checks to confirm the update
readiness before proceeding.
Next step
Learn more about how to Troubleshoot updates.
Update your Azure Stack HCI, version
23H2 via PowerShell
Article • 01/31/2024
) Important
The procedure described here applies only when updating from one version of
Azure Stack HCI, version 23H2 to another higher version. For information on
updates for older versions, see Update clusters for Azure Stack HCI, version 22H2.
This article describes how to apply a solution update to your Azure Stack HCI cluster via
PowerShell.
The procedure in this article applies to both a single server and multi-server cluster that
is running the latest version with the orchestrator (Lifecycle Manager) installed. If your
cluster was created via a new deployment of Azure Stack HCI, version 23H2, then the
orchestrator was automatically installed as part of the deployment.
For information on how to apply solution updates to clusters created with older versions
of Azure Stack HCI that didn't have the orchestrator installed see Update Azure Stack
HCI clusters, version 22H2.
When you apply a solution update, here are the high-level steps that you take:
The time taken to install the updates might vary based on the following factors:
The approximate time estimates for a typical single server and 4-server cluster are
summarized in the following table:
ノ Expand table
Prerequisites
Before you begin, make sure that:
You have access to an Azure Stack HCI, version 23H2 cluster that is running 2310 or
higher. The cluster should be registered in Azure.
You have access to a client that can connect to your Azure Stack HCI cluster. This
client should be running PowerShell 5.0 or later.
You have access to the solution update package over the network. You sideload or
copy these updates to the servers of your cluster.
2. Open a remote PowerShell session to a server on your Azure Stack HCI cluster. Run
the following command and provide the credentials of your server when
prompted:
PowerShell
$cred = Get-Credential
Enter-PSSession -ComputerName "<Computer IP>" -Credential $cred
7 Note
You should sign in using your deployment user account credentials. This is
the account you created when preparing Active Directory and used during
the deployment of the Azure Stack HCI system.
Console
1. Make sure that you're connected to the cluster server using the deployment user
account. Run the following command:
PowerShell
whoami
2. To ensure that the cluster was deployed running Azure Stack HCI, version 23H2,
run the following command on one of the servers of your cluster:
PowerShell
Get-StampInformation
Console
PS C:\Users\lcmuser> Get-StampInformation
Deployment ID : b4457f25-6681-4e0e-b197-a7a433d621d6
OemVersion : 2.1.0.0
PackageHash :
StampVersion : 10.2303.0.31
InitialDeployedVersion : 10.2303.0.26
PS C:\Users\lcmuser>
3. Make a note of the StampVersion on your cluster. The stamp version reflects the
solution version that your cluster is running.
7 Note
Any faults that have a severity of critical will block the updates from being applied.
1. Connect to a server on your Azure Stack HCI cluster using the deployment user
account.
2. Run the following command to validate system health via the Environment
Checker.
PowerShell
$result = Test-EnvironmentReadiness
$result | ft Name,Status,Severity
Console
PS C:\Users\lcmuser> whoami
rq2205\lcmuser
PS C:\Users\lcmuser> $result=Test-EnvironmentReadiness
VERBOSE: Looking up shared vhd product drive letter.
WARNING: Unable to find volume with label Deployment
VERBOSE: Get-Package returned with Success:True
VERBOSE: Found package
Microsoft.AzureStack.Solution.Deploy.EnterpriseCloudEngine.Client.Deplo
yment with version 10.2303.0.31 at
C:\NugetStore\Microsoft.AzureStack.Solution.Deploy.EnterpriseCloudEngin
e.Client.Deployment.10.2303.0.31\Microsoft.Azure
Stack.Solution.Deploy.EnterpriseCloudEngine.Client.Deployment.nuspec.
03/29/2023 15:45:58 : Launching StoragePools
03/29/2023 15:45:58 : Launching StoragePhysicalDisks
03/29/2023 15:45:58 : Launching StorageMapping
03/29/2023 15:45:58 : Launching StorageSubSystems
03/29/2023 15:45:58 : Launching TestCauSetup
03/29/2023 15:45:58 : Launching StorageVolumes
03/29/2023 15:45:58 : Launching StorageVirtualDisks
03/29/2023 15:46:05 : Launching OneNodeEnvironment
03/29/2023 15:46:05 : Launching NonMigratableWorkload
03/29/2023 15:46:05 : Launching FaultSummary
03/29/2023 15:46:06 : Launching SBEHealthStatusOnNode
03/29/2023 15:46:06 : Launching StorageJobStatus
03/29/2023 15:46:07 : Launching StorageCsv
WARNING: There aren't any faults right now.
03/29/2023 15:46:09 : Launching SBEPrecheckStatus
WARNING: rq2205-cl: There aren't any faults right now.
VERBOSE: Looking up shared vhd product drive letter.
WARNING: Unable to find volume with label Deployment
VERBOSE: Get-Package returned with Success:True
VERBOSE: Found package Microsoft.AzureStack.Role.SBE with version
4.0.2303.66 at
C:\NugetStore\Microsoft.AzureStack.Role.SBE.4.0.2303.66\Microsoft.Azure
Stack.Role.SBE.nuspec.
VERBOSE: SolutionExtension module supports Tag
'HealthServiceIntegration'.
VERBOSE: SolutionExtension module SolutionExtension at
C:\ClusterStorage\Infrastructure_1\Shares\SU1_Infrastructure_1\CloudMed
ia\SBE\Installed\Content\Configuration\SolutionExtension is valid.
VERBOSE: Looking up shared vhd product drive letter.
WARNING: Unable to find volume with label Deployment
VERBOSE: Get-Package returned with Success:True
VERBOSE: Found package Microsoft.AzureStack.Role.SBE with version
4.0.2303.66 at
C:\NugetStore\Microsoft.AzureStack.Role.SBE.4.0.2303.66\Microsoft.Azure
Stack.Role.SBE.nuspec.
VERBOSE: SolutionExtension module supports Tag
'HealthServiceIntegration'.
VERBOSE: SolutionExtension module SolutionExtension at
C:\ClusterStorage\Infrastructure_1\Shares\SU1_Infrastructure_1\CloudMed
ia\SBE\Installed\Content\Configuration\SolutionExtension is valid.
PS C:\Users\lcmuser> $result|ft Name,Status,Severity
PS C:\Users\lcmuser>
7 Note
In this release, the informational failures for Test-CauSetup are expected and
will not impact the updates.
3. Review any failures and resolve them before you proceed to the discovery step.
Discover updates online - The recommended option when your cluster has good
internet connectivity. The solution updates are discovered via the online update
catalog.
Sideload and discover updates - An alternative to discovering updates online and
should be used for scenarios with unreliable or slow internet connectivity, or when
using solution extension updates provided by your hardware vendor. In these
instances, you download the solution updates to a central location. You then
sideload the updates to an Azure Stack HCI cluster and discover the updates
locally.
1. Connect to a server on your Azure Stack HCI cluster using the deployment user
account.
PowerShell
$Update = Get-SolutionUpdate
$Update.ComponentVersions
Console
1. Connect to a server on your Azure Stack HCI cluster using the deployment user
account.
2. Go to the network share and acquire the update package that you use. Verify that
the update package you sideload contains the following files:
SolutionUpdate.xml
SolutionUpdate.zip
AS_Update_10.2303.4.1.zip
If a solution builder extension is part of the update package, you should also see
the following files:
SBE_Content_4.1.2.3.xml
SBE_Content_4.1.2.3.zip
SBE_Discovery_Contoso.xml
3. Create a folder for discovery by the update service at the following location in the
infrastructure volume of your cluster.
PowerShell
New-Item
C:\ClusterStorage\Infrastructure_1\Shares\SU1_Infrastructure_1\sideload
-ItemType Directory
4. Copy the update package to the folder you created in the previous step.
5. Manually discover the update package using the Update service. Run the following
command:
PowerShell
Add-SolutionUpdate -SourceFolder
C:\ClusterStorage\Infrastructure_1\Shares\SU1_Infrastructure_1\sideload
6. Verify that the Update service discovers the update package and that it's available
to start preparation and installation.
PowerShell
Console
PS C:\Users\lcmuser>
7. Optionally check the version of the update package components. Run the
following command:
PowerShell
$Update = Get-SolutionUpdate
$Update.ComponentVersions
Console
1. You can either download the update without starting the installation, or download
and install the update. To download and install the update, run the following
command:
PowerShell
Get-SolutionUpdate | Start-SolutionUpdate
To only download the updates without starting the installation, use the -
PrepareOnly flag with Start-SolutionUpdate .
2. To track the update progress, monitor the update state. Run the following
command:
PowerShell
Get-SolutionUpdate | ft Version,State,UpdateStateProperties,HealthState
When the update starts, the following actions occur:
Console
PS C:\Users\lcmuser> Get-SolutionUpdate|ft
Version,State,UpdateStateProperties,HealthState
Console
PS C:\Users\lcmuser> Get-SolutionUpdate|ft
Version,State,UpdateStateProperties,HealthState
When the system is ready, updates are installed. During this phase, the State
of the updates shows as Installing and UpdateStateProperties shows the
percentage of the installation that was completed.
) Important
During the install, the cluster servers may reboot and you may need to
establish the remote PowerShell session again to monitor the updates. If
updating a single server, your Azure Stack HCI system will experience
downtime.
Once the installation is complete, the State changes to Installed . For more information
on the various states of the updates, see Installation progress and monitoring.
1. After the update is in Installed state, check the environment solution version. Run
the following command:
PowerShell
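# Sketch only: the cmdlet name is assumed from the Azure Stack HCI update PowerShell module.
Get-SolutionUpdateEnvironment | ft State, CurrentVersion
Console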
State CurrentVersion
----- --------------
AppliedSuccessfully 10.2303.0.31
2. Check the operating system version to confirm it matches the recipe you installed.
Run the following command:
PowerShell
cmd /c ver
Console
Troubleshoot updates
To resume a previously failed update run via PowerShell, use the following command:
PowerShell
get-solutionupdate | start-solutionupdate
To resume a previously failed update due to update health checks in a Warning state,
use the following command:
PowerShell
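# Sketch only: the -IgnoreWarnings switch name is an assumption; verify it against the
# Start-SolutionUpdate reference before you use it.
Get-SolutionUpdate | Start-SolutionUpdate -IgnoreWarnings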
) Important
The procedure described here applies only when updating from one version of
Azure Stack HCI, version 23H2 to another higher version. For information on
updates for older versions, see Update clusters for Azure Stack HCI, version 22H2.
This article describes how to use Azure Update Manager to find and install available
cluster updates on selected Azure Stack HCI clusters. Additionally, we provide guidance
on how to review cluster updates, track progress, and browse cluster updates history.
The update agent checks Azure Stack HCI clusters for update health and available
updates daily.
You can view the update status and readiness for each cluster.
You can update multiple clusters at the same time.
You can view the status of updates while they're in progress.
Once complete, you can view the results and history of updates.
Prerequisites
An Azure Stack HCI, version 23H2 cluster deployed and registered with Azure.
3. In the cluster list, view each cluster's update status, update readiness, current OS
version, and the date and time of the last successful update.
3. Select one or more clusters from the list, then select One-time Update.
4. On the Check readiness page, review the list of readiness checks and their results.
You can select the links under Affected systems to view more details and
individual cluster results.
5. Select Next.
6. On the Select updates page, specify the updates you want to include in the
deployment.
a. Select Systems to update to view cluster updates to install or remove from the
update installation.
b. Select the Version link to view the update components, versions, and update
release notes.
7. Select Next.
8. On the Review + install page, verify your update deployment options, and then
select Install.
You should see a notification that confirms the installation of updates. If you don’t
see the notification, select the notification icon in the top right taskbar.
Track cluster update progress
When you install cluster updates via Azure Update Manager, you can check the progress
of those updates.
7 Note
After you trigger an update, it can take up to 5 minutes for the update run to show
up in the Azure portal.
To view the progress of your clusters, update installation, and completion results, follow
these steps:
4. On the Download updates page, review the progress of the download and
preparation, and then select Next.
5. On the Check readiness page, review the progress of the checks, and then select
Next.
4. On the Download updates page, review the results of the download and
preparation and then select Next.
5. On the Check readiness page, review the results and then select Next.
Under the Affected systems column, if you have an error, select View Details
for more information.
Under the Result column, if you have an error, select View Details for more
information.
To install updates on a single cluster from the Azure Stack HCI cluster resource page,
follow these steps:
5. On the Check readiness page, review the list of readiness checks and their results.
You can select the links under Affected systems to view more details and
individual cluster results.
6. Select Next.
7. On the Select updates page, specify the updates you want to include in the
deployment.
a. Select Systems to update to view cluster updates to install or remove from the
update installation.
b. Select the Version link to view the update components and their versions.
c. Select the Details, View details link, to view the update release notes.
8. Select Next.
9. On the Review + install page, verify your update deployment options, and select
Install.
You should see a notification that confirms the installation of updates. If you don’t
see the notification, select the notification icon in the top right taskbar.
Here's an example of the Windows Admin Center updates tool for systems running
Azure Stack HCI, version 23H2.
Troubleshoot updates
To resume a previously failed update run, browse to the failed update and select the Try
again button. This functionality is available at the Download updates, Check readiness,
and Install stages of an update run.
2. When the details box opens, you can download error logs by selecting the
Download logs button. This prompts the download of a JSON file.
3. Additionally, you can select the Open a support ticket button, fill in the
appropriate information, and attach your downloaded logs so that they're available
to Microsoft Support.
For more information on creating a support ticket, see Create a support request.
Next steps
Learn to Understand update phases.
This article describes how to troubleshoot solution updates that are applied to your
Azure Stack HCI cluster to keep it up-to-date.
The new update solution includes retry and remediation logic. This logic attempts to
fix update issues in a non-disruptive way, such as retrying a CAU run. If an update run
can't be remediated automatically, it fails. When an update fails, you can retry the
update.
To collect logs for updates using the Azure portal, see Use Azure Update Manager to
update your Azure Stack HCI, version 23H2.
To collect logs for the update failures using PowerShell, follow these steps on the client
that you're using to access your cluster:
1. Establish a remote PowerShell session with the server node. Run PowerShell as
administrator and run the following command:
PowerShell
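# Sketch: open a remote session to a cluster node; replace <server_name> and supply domain credentials when prompted
Enter-PSSession -ComputerName <server_name> -Credential (Get-Credential)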
PowerShell
3. Identify the action plan for the failed solution update run.
PowerShell
PowerShell
$Failure
Output
ResourceId      : redmond/Solution10.2303.1.7/2c21b859-e063-4f24-a4db-bc1d6be82c4e
Progress        : Microsoft.AzureStack.Services.Update.ResourceProvider.UpdateService.Models.Step
TimeStarted     : 4/21/2023 10:02:54 PM
LastUpdatedTime : 4/21/2023 3:19:05 PM
Duration        : 00:16:37.9688878
State           : Failed
5. Copy the logs for the ActionPlanInstanceID that you noted earlier, to a text file
named log.txt. Use Notepad to open the text file.
PowerShell
Get-ActionplanInstance -ActionplanInstanceId <Action Plan Instance ID> > log.txt
notepad log.txt
Output
PS C:\Users\lcmuser>notepad log.txt
Resume an update
To resume a previously failed update run, you can retry the update run via the Azure
portal or PowerShell.
PowerShell
If you're using PowerShell and need to resume a previously failed update run, use the
following command:
PowerShell
get-solutionupdate | start-solutionupdate
To resume a previously failed update due to update health checks in a Warning state,
use the following command:
PowerShell
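# Sketch (switch name assumed): resume the update and ignore health checks that are in a Warning state
Get-SolutionUpdate | Start-SolutionUpdate -IgnoreWarnings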
Next steps
Learn more about how to Run updates via PowerShell.
Learn more about how to Run updates via the Azure portal.
What is Azure Arc VM management?
Article • 02/02/2024
This article provides a brief overview of the Azure Arc VM management feature on Azure
Stack HCI including the benefits, its components, and high-level workflow.
Administrators can manage Arc VMs on their Azure Stack HCI clusters by using Azure
management tools, including Azure portal, Azure CLI, Azure PowerShell, and Azure
Resource Manager (ARM) templates. Using Azure Resource Manager templates, you can
also automate VM provisioning in a secure cloud environment.
Role-based access control via built-in Azure Stack HCI roles ensures that only
authorized users can perform VM management operations, thereby enhancing
security. For more information, see Azure Stack HCI Arc VM management roles.
Arc VM management provides the ability to deploy with ARM templates, Bicep,
and Terraform.
The Azure portal acts as a single pane of glass to manage VMs on Azure Stack HCI
clusters and Azure VMs. With Azure Arc VM management, you can perform various
operations from the Azure portal or Azure CLI including:
Create, manage, update, and delete VMs. For more information, see Create Arc
VMs.
Create, manage, and delete VM resources such as virtual disks, logical networks,
network interfaces, and VM images.
Custom Location: Just like the Arc Resource Bridge, a custom location is created
automatically when you deploy your Azure Stack HCI cluster. You can use this
custom location to deploy Azure services. You can also deploy VMs in these user-
defined custom locations, integrating your on-premises setup more closely with
Azure.
1. During the deployment of Azure Stack HCI cluster, one Arc Resource Bridge is
installed per cluster and a custom location is also created.
2. Assign built-in RBAC roles for Arc VM management.
3. You can then create VM resources such as:
a. Storage paths for VM disks.
b. VM images starting with an Image in Azure Marketplace, in Azure Storage
account, or in Local share. These images are then used with other VM resources
to create VMs.
c. Logical networks.
d. VM network interfaces.
4. Use the VM resources to Create VMs (see the example sketch after this list).
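Here's a hedged Azure CLI sketch of that final step. The stack-hci-vm command group is the one used throughout this documentation; the VM name, image, network interface, and exact flag spellings are illustrative assumptions, so verify them with az stack-hci-vm create --help.
Azure CLI
az stack-hci-vm create --name "myhci-vm" --resource-group "myhci-rg" --custom-location "<custom location resource ID>" --image "<VM image name>" --nics "<network interface name>" --admin-username "<admin user>" --admin-password "<admin password>"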
To troubleshoot issues with your Arc VMs or to learn about existing known issues and
limitations, see Troubleshoot Arc virtual machines.
Next steps
Review Azure Arc VM management prerequisites
Azure Arc VM management
prerequisites
Article • 02/28/2024
This article lists the requirements and prerequisites for Azure Arc VM management. We
recommend that you review the requirements and complete the prerequisites before
you manage your Arc VMs.
Azure requirements
The Azure requirements include:
To provision Arc VMs and VM resources such as virtual disks, logical network,
network interfaces and VM images through the Azure portal, you must have
Contributor level access at the subscription level.
The entities include Azure Stack HCI cluster, Arc Resource Bridge, Custom Location,
VM operator, virtual machines created from Arc and Azure Arc for Servers guest
management. These entities can be in the same or different resource groups as long as
all resource groups are in the same region.
For information on Azure CLI commands for Azure Stack HCI VMs, see az stack-hci-vm.
During the cluster deployment, an Arc Resource Bridge is created and the Azure CLI
extension stack-hci-vm is installed on the cluster. You can connect to and manage the
cluster using the Azure CLI extension.
The latest version of Azure Command-Line Interface (CLI). You must install this
version on the client that you're using to connect to your Azure Stack HCI cluster.
For installation instructions, see Install Azure CLI. Once you have installed az
CLI, make sure to restart the system.
If you're using a local installation, sign in to the Azure CLI by using the az
login command. To finish the authentication process, follow the steps
displayed in your terminal. For other sign-in options, see Sign in with the
Azure CLI.
Run az version to find the version and dependent libraries that are installed.
To upgrade to the latest version, run az upgrade.
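For example, you can check your installed version and then upgrade in place:
Azure CLI
az version
az upgrade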
PowerShell
Next steps
Assign RBAC role for Arc VM management.
Use Role-based Access Control to
manage Azure Stack HCI Virtual
Machines
Article • 02/14/2024
This article describes how to use role-based access control (RBAC) to control access
to Arc virtual machines (VMs) running on your Azure Stack HCI cluster.
You can use the built-in RBAC roles to control access to VMs and VM resources such as
virtual disks, network interfaces, VM images, logical networks, and storage paths. You can
assign these roles to users, groups, service principals, and managed identities.
) Important
This feature is currently in PREVIEW. See the Supplemental Terms of Use for
Microsoft Azure Previews for legal terms that apply to Azure features that are in
beta, preview, or otherwise not yet released into general availability.
Azure Stack HCI Administrator - This role grants full access to your Azure Stack
HCI cluster and its resources. An Azure Stack HCI administrator can register the
cluster as well as assign Azure Stack HCI VM contributor and Azure Stack HCI VM
reader roles to other users. They can also create cluster-shared resources such as
logical networks, VM images, and storage paths.
Azure Stack HCI VM Contributor - This role grants permissions to perform all VM
actions such as start, stop, restart the VMs. An Azure Stack HCI VM Contributor can
create and delete VMs, as well as the resources and extensions attached to VMs.
An Azure Stack HCI VM Contributor can't register the cluster or assign roles to
other users, nor create cluster-shared resources such as logical networks, VM
images, and storage paths.
Azure Stack HCI VM Reader - This role grants permissions to only view the VMs. A
VM reader can't perform any actions on the VMs or VM resources and extensions.
Here's a table that describes the VM actions granted by each role for the VMs and the
various VM resources. VM resources refer to the resources required to create a
VM and include virtual disks, network interfaces, VM images, logical networks, and
storage paths:
ノ Expand table
Azure Stack HCI Administrator: Create, list, delete VMs; start, stop, restart VMs. Create, list, delete all VM resources, including logical networks, VM images, and storage paths.
Azure Stack HCI VM Contributor: Create, list, delete VMs; start, stop, restart VMs. Create, list, delete all VM resources, except logical networks, VM images, and storage paths.
Prerequisites
Before you begin, make sure to complete the following prerequisites:
1. Make sure that you have access to an Azure Stack HCI cluster that is deployed and
registered. During the deployment, an Arc Resource Bridge and a custom location
are also created.
Go to the resource group in Azure. You can see the custom location and Azure Arc
Resource Bridge created for the Azure Stack HCI cluster. Make a note of the
subscription, resource group, and the custom location as you use these later in this
scenario.
2. Make sure that you have access to the Azure subscription as an Owner or User Access
Administrator so that you can assign roles to others.
2. Go to your subscription and then go to Access control (IAM) > Role assignments.
From the top command bar, select + Add and then select Add role assignment.
If you don't have permissions to assign roles, the Add role assignment option is
disabled.
3. On the Role tab, select the RBAC role to assign. Choose from one of the
following built-in roles:
4. On the Members tab, select User, group, or service principal, and then select the
member to assign the role to.
6. Verify the role assignment. Go to Access control (IAM) > Check access > View my
access. You should see the role assignment.
For more information on role assignment, see Assign Azure roles using the Azure portal.
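You can also make the assignment from the command line. Here's a sketch using the standard Azure CLI role assignment command; the assignee object ID and scope shown are placeholders:
Azure CLI
az role assignment create --assignee "<user, group, or service principal object ID>" --role "Azure Stack HCI VM Contributor" --scope "/subscriptions/<Subscription ID>/resourceGroups/<resource group name>"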
Next steps
Create a storage path for Azure Stack HCI VM.
Create storage path for Azure Stack HCI
Article • 01/24/2024
This article describes how to create a storage path for VM images used on your Azure
Stack HCI cluster. Storage paths are an Azure resource and provide a path to
store VM configuration files, VM images, and VHDs on your cluster. You can create a
storage path using the Azure CLI.
The storage paths on your Azure Stack HCI cluster should point to cluster shared
volumes that can be accessed by all the servers on your cluster. In order to be highly
available, we strongly recommend that you create storage paths under cluster shared
volumes.
The available space in the cluster shared volume determines the size of the storage
available at the storage path. For example, if the storage path is
C:\ClusterStorage\UserStorage_1\Volume01 and the Volume01 is 4 TB, then the size of
the storage path is the available space (out of the 4 TB) on Volume01 .
Prerequisites
Before you begin, make sure to complete the following prerequisites:
1. Make sure that you have access to an Azure Stack HCI cluster that is deployed and
registered. During the deployment, an Arc Resource Bridge and a custom location
are also created.
Go to the resource group in Azure. You can see the custom location and Azure Arc
Resource Bridge created for the Azure Stack HCI cluster. Make a note of the
subscription, resource group, and the custom location as you use these later in this
scenario.
2. Make sure that a cluster shared volume exists on your Azure Stack HCI cluster that
is accessible from all the servers in the cluster. The storage path that you intend to
provide on a cluster shared volume should have sufficient space for storing VM
images. By default, cluster shared volumes are created during the deployment of
Azure Stack HCI cluster.
You can create storage paths only within cluster shared volumes that are available
in the cluster. For more information, see Create a cluster shared volume.
Azure CLI
You can use the stack-hci-vm storagepath commands to create, show, and list the
storage paths on your Azure Stack HCI cluster.
ノ Expand table
Parameter Description
name: Name of the storage path that you create for your Azure Stack HCI cluster. Make sure to provide a name that follows the Rules for Azure resources. You can't rename a storage path after it is created.
resource-group: Name of the resource group where you create the storage path. For ease of management, we recommend that you use the same resource group as your Azure Stack HCI cluster.
subscription: Name or ID of the subscription where your Azure Stack HCI is deployed. This could also be another subscription you use for storage path on your Azure Stack HCI cluster.
custom-location: Name or ID of the custom location associated with your Azure Stack HCI cluster where you're creating this storage path.
path: Path on a disk to create storage path. The selected path should have sufficient space available for storing your VM image.
Azure CLI
az login --use-device-code
Azure CLI
Set parameters
1. Set parameters for the storage path name, path, subscription, resource group, and
custom location. Replace the < > with the appropriate values.
Azure CLI
Azure CLI
Console
PS C:\windows\system32> $storagepathname="test-storagepath"
PS C:\windows\system32> $path="C:\ClusterStorage\UserStorage_1\mypath"
PS C:\windows\system32> $subscription="<Subscription ID>"
PS C:\windows\system32> $resource_group="myhci-rg"
PS C:\windows\system32> $customLocationID="/subscriptions/<Subscription ID>/resourceGroups/myhci-rg/providers/Microsoft.ExtendedLocation/customLocations/myhci-cl"
Once the storage path creation is complete, you're ready to create virtual machine
images.
Azure CLI
Azure CLI
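# Sketch (flags assumed): confirm the storage path by using the show command from the storagepath group mentioned above
az stack-hci-vm storagepath show --name $storagepathname --resource-group $resource_group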
To delete a volume, first remove the associated workloads, then remove the storage
paths, and then delete the volume. For more information, see Delete a volume.
If there's insufficient space at the storage path, then the VM provisioning using that
storage path would fail. You might need to expand the volume associated with the
storage path. For more information, see Expand the volume.
Next steps
Create a VM image using one of the following methods:
Using the image in Azure Marketplace.
Using an image in Azure Storage account.
Using an image in local file share.
Create Azure Stack HCI VM image using
Azure Marketplace images
Article • 02/29/2024
This article describes how to create virtual machine (VM) images for your Azure Stack
HCI using source images from Azure Marketplace. You can create VM images using the
Azure portal or Azure CLI and then use these VM images to create Arc VMs on your
Azure Stack HCI.
Prerequisites
Before you begin, make sure that the following prerequisites are completed.
Azure CLI
You have access to an Azure Stack HCI system that is deployed, has an Arc
Resource Bridge and a custom location.
Go to the Overview > Server page in the Azure Stack HCI system resource.
Verify that Azure Arc shows as Connected. You should also see a custom
location and an Arc Resource Bridge for your cluster.
If using a client to connect to your Azure Stack HCI cluster, see Connect to
Azure Stack HCI via Azure CLI client.
Azure CLI
Azure CLI
az login --use-device-code
Azure CLI
Azure CLI
ノ Expand table
Parameter Description
resource-group: Resource group for Azure Stack HCI cluster that you associate with this image.
location: Location for your Azure Stack HCI cluster. For example, this could be eastus.
os-type: Operating system associated with the source image. This can be Windows or Linux.
Azure CLI
2. Create the VM image starting with a specified marketplace image. Make sure
to specify the offer, publisher, SKU, and version for the marketplace image. Use
the following table to find the available marketplace images and their attribute
values:
Available Marketplace images include, for example, a Windows multi-session image, version 22H2 - Gen2; each entry in the table lists the publisher, offer, SKU, and version values to use.
Azure CLI
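# Sketch (flag names assumed): create a VM image from a Marketplace image, using the publisher, offer, SKU, and version values described above
az stack-hci-vm image create --resource-group "<resource group>" --custom-location "<custom location resource ID>" --location "<location>" --name "myhci-marketplaceimage" --os-type "Windows" --publisher "<publisher>" --offer "<offer>" --sku "<SKU>" --version "<version>" --storage-path-id "<storage path resource ID>"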
In this example, the storage path was specified using the --storage-path-id
flag and that ensured that the workload data (including the VM, VM image,
non-OS data disk) is placed in the specified storage path.
If the flag is not specified, the workload data is automatically placed in a high
availability storage path.
The image deployment takes a few minutes to complete. The time taken to
download the image depends on the size of the Marketplace image and the
network bandwidth available for the download.
PS C:\Users\azcli>
List VM images
You need to view the list of VM images to choose an image to manage.
Azure CLI
Azure CLI
3. List all the VM images associated with your cluster. Run the following
command:
Azure CLI
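# Sketch (flags assumed): list VM images at subscription or resource group scope
az stack-hci-vm image list --subscription "<Subscription ID>" --resource-group "<resource group>"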
If you specify just the subscription, the command lists all the images in
the subscription.
If you specify both the subscription and the resource group, the
command lists all the images in the resource group.
Azure CLI
Azure CLI
3. You can view image properties in two different ways: specify ID or specify
name and resource group. Take the following steps when specifying
Marketplace image ID:
Azure CLI
$mktplaceImageID = "/subscriptions/<Subscription ID>/resourceGroups/myhci-rg/providers/Microsoft.AzureStackHCI/galleryimages/myhci-marketplaceimage"
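# Sketch (assumes the show command accepts --ids): view the image properties by resource ID
az stack-hci-vm image show --ids $mktplaceImageID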
Update VM image
When a new updated image is available in Azure Marketplace, the VM images on your
Azure Stack HCI cluster become stale and should be updated. The update operation isn't
an in-place update of the image. Rather, you can see which VM images have an updated
image available and select the images to update. After you update, the create VM image
operation uses the new image.
In the Overview blade, you see a banner that shows the new VM image available
for download, if one is available. To update to the new image, select the arrow icon.
2. Review image details and then select Review and create. By default, the new image
uses the same resource group and instance details as the previous image.
The name for the new image is incremented based on the name of the previous
image. For example, an existing image named winServer2022-01 will have an
updated image named winServer2022-02.
After the new VM image is created, create a VM using the new image and verify
that the VM works properly. After verification, you can delete the old VM image.
7 Note
In this release, you can't delete a VM image if the VM associated with that
image is running. Stop the VM and then delete the VM image.
Delete VM image
You might want to delete a VM image if the download fails for some reason or if the
image is no longer needed. Follow these steps to delete the VM images.
Azure CLI
Azure CLI
Azure CLI
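# Sketch: delete a VM image by name and resource group, matching the sample output shown elsewhere in this documentation
az stack-hci-vm image delete --name "<image name>" --resource-group "<resource group>"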
After you've deleted an image, you can check that the image is removed. Here's a
sample output when the image was deleted by specifying the name and the
resource-group.
Next steps
Create logical networks
This article describes how to create virtual machine (VM) images for your Azure Stack
HCI using source images from Azure Storage account. You can create VM images using
the Azure portal or Azure CLI and then use these VM images to create Arc VMs on your
Azure Stack HCI.
Prerequisites
Before you begin, make sure that the following prerequisites are completed.
Azure CLI
You have access to an Azure Stack HCI system that is deployed, has an Arc
Resource Bridge and a custom location.
Go to the Overview > Server page in the Azure Stack HCI system resource.
Verify that Azure Arc shows as Connected. You should also see a custom
location and an Arc Resource Bridge for your cluster.
For custom images in Azure Storage account, you have the following extra
prerequisites:
You should have a VHD loaded in your Azure Storage account. See how to
Upload a VHD image in your Azure Storage account.
If using a VHDX:
The VHDX image must be Gen 2 type and secure boot enabled.
The VHDX image must be prepared using sysprep /generalize
/shutdown /oobe . For more information, see Sysprep command-line
options.
If using a client to connect to your Azure Stack HCI cluster, see Connect to
Azure Stack HCI via Azure CLI client.
Make sure that you have Storage Blob Data Contributor role on the Storage
account that you use for the image. For more information, see Assign an
Azure role for access to blob data.
Make sure that you're uploading your VHD or VHDX as a page blob image
into the Storage account. Only page blob images are supported to create VM
images via the Storage account.
Azure CLI
Azure CLI
az login --use-device-code
Azure CLI
ノ Expand table
Parameter Description
subscription: Subscription associated with your Azure Stack HCI cluster.
resource_group: Resource group for Azure Stack HCI cluster that you associate with this image.
location: Location for your Azure Stack HCI cluster. For example, this could be eastus.
imageName: Name of the VM image created starting with the image in your Azure Storage account. Note: Azure rejects all the names that contain the keyword Windows.
imageSourcePath: Path to the Blob SAS URL of the image in the Storage account. For more information, see instructions on how to Get a blob SAS URL of the image in the Storage account. Note: Make sure that all the ampersands in the path are escaped with double quotes and the entire path string is wrapped within single quotes.
os-type: Operating system associated with the source image. This can be Windows or Linux.
Here's a sample output:
Azure CLI
2. Create the VM image starting with the source image in your Azure Storage
account. Use the Blob SAS URL that you set in imageSourcePath.
Azure CLI
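# Sketch (flag names assumed): create a VM image from the Blob SAS URL, using the parameters from the table above as variables
az stack-hci-vm image create --subscription $subscription --resource-group $resource_group --custom-location "<custom location resource ID>" --location $location --name $imageName --os-type "Windows" --image-path $imageSourcePath --storage-path-id "<storage path resource ID>"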
In this example, the storage path was specified using the --storage-path-id
flag and that ensured that the workload data (including the VM, VM image,
non-OS data disk) is placed in the specified storage path.
If the flag is not specified, the workload data is automatically placed in a high
availability storage path.
The image deployment takes a few minutes to complete. The time taken to
download the image depends on the size of the image in Azure Storage account
and the network bandwidth available for the download.
List VM images
You need to view the list of VM images to choose an image to manage.
Azure CLI
Azure CLI
3. List all the VM images associated with your cluster. Run the following
command:
Azure CLI
If you specify just the subscription, the command lists all the images in
the subscription.
If you specify both the subscription and the resource group, the
command lists all the images in the resource group.
Azure CLI
Azure CLI
3. You can view image properties in two different ways: specify ID or specify
name and resource group. Take the following steps when specifying
Marketplace image ID:
Azure CLI
$mktplaceImageID = "/subscriptions/<Subscription ID>/resourceGroups/myhci-rg/providers/Microsoft.AzureStackHCI/galleryimages/myhci-marketplaceimage"
Delete VM image
You might want to delete a VM image if the download fails for some reason or if the
image is no longer needed. Follow these steps to delete the VM images.
Azure CLI
Azure CLI
Azure CLI
After you've deleted an image, you can check that the image is removed. Here's a
sample output when the image was deleted by specifying the name and the
resource-group.
PS C:\Users\azcli> $subscription = "<Subscription ID>"
PS C:\Users\azcli> $resource_group = "myhci-rg"
PS C:\Users\azcli> $mktplaceImage = "myhci-marketplaceimage"
PS C:\Users\azcli> az stack-hci-vm image delete --name $mktplaceImage --resource-group $resource_group
Command group 'stack-hci-vm' is experimental and under development. Reference and support levels: https://aka.ms/CLI_refstatus
Are you sure you want to perform this operation? (y/n): y
PS C:\Users\azcli> az stack-hci-vm image show --name $mktplaceImage --resource-group $resource_group
Command group 'stack-hci-vm' is experimental and under development. Reference and support levels: https://aka.ms/CLI_refstatus
ResourceNotFound: The Resource 'Microsoft.AzureStackHCI/marketplacegalleryimages/myhci-marketplaceimage' under resource group 'myhci-rg' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix
PS C:\Users\azcli>
Next steps
Create logical networks
This article describes how to create virtual machine (VM) images for your Azure Stack
HCI using source images from a local share on your cluster. You can create VM images
using the Azure portal or Azure CLI and then use these VM images to create Arc VMs on
your Azure Stack HCI.
Prerequisites
Before you begin, make sure that the following prerequisites are completed.
Azure CLI
You have access to an Azure Stack HCI system that is deployed, has an Arc
Resource Bridge and a custom location.
Go to the Overview > Server page in the Azure Stack HCI system resource.
Verify that Azure Arc shows as Connected. You should also see a custom
location and an Arc Resource Bridge for your cluster.
For custom images in a local share on your Azure Stack HCI, you'll have the
following extra prerequisites:
You should have a VHD/VHDX uploaded to a local share on your Azure
Stack HCI cluster.
The VHDX image must be Gen 2 type and secure boot enabled.
The VHDX image must be prepared using sysprep /generalize /shutdown
/oobe . For more information, see Sysprep command-line options.
The image should reside on a Cluster Shared Volume available to all the
servers in the cluster. Both the Windows and Linux operating systems are
supported.
If using a client to connect to your Azure Stack HCI cluster, see Connect to
Azure Stack HCI via Azure CLI client.
Azure CLI
Azure CLI
az login --use-device-code
Azure CLI
Azure CLI
ノ Expand table
Parameter Description
subscription: Subscription associated with your Azure Stack HCI cluster.
resource_group: Resource group for Azure Stack HCI cluster that you associate with this image.
location: Location for your Azure Stack HCI cluster. For example, this could be eastus.
name: Name of the VM image created starting with the image in your local share. Note: Azure rejects all the names that contain the keyword Windows.
image-path: Path to the source gallery image (VHDX only) on your cluster. For example, C:\OSImages\winos.vhdx. See the prerequisites of the source image.
os-type: Operating system associated with the source image. This can be Windows or Linux.
Azure CLI
2. Create the VM image starting with a specified image in a local share on your
Azure Stack HCI cluster.
Azure CLI
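# Sketch (flag names assumed): create a VM image from a VHDX in a local share on the cluster, using the parameters from the table above
az stack-hci-vm image create --subscription $subscription --resource-group $resource_group --custom-location "<custom location resource ID>" --location $location --name "<image name>" --os-type "Windows" --image-path "<path to VHDX on a cluster shared volume>" --storage-path-id "<storage path resource ID>"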
In this example, the storage path was specified using the --storage-path-id
flag and that ensured that the workload data (including the VM, VM image,
non-OS data disk) is placed in the specified storage path.
If the flag is not specified, the workload data is automatically placed in a high
availability storage path.
The image deployment takes a few minutes to complete. The time taken to
download the image depends on the size of the image in the local share and the
network bandwidth available for the download.
PS C:\Users\azcli>
List VM images
You need to view the list of VM images to choose an image to manage.
Azure CLI
Azure CLI
3. List all the VM images associated with your cluster. Run the following
command:
Azure CLI
If you specify just the subscription, the command lists all the images in
the subscription.
If you specify both the subscription and the resource group, the
command lists all the images in the resource group.
Azure CLI
Azure CLI
3. You can view image properties in two different ways: specify ID or specify
name and resource group. Take the following steps when specifying
Marketplace image ID:
Azure CLI
$mktplaceImageID = "/subscriptions/<Subscription
ID>/resourceGroups/myhci-
rg/providers/Microsoft.AzureStackHCI/galleryimages/myhci-
marketplaceimage"
Delete VM image
You might want to delete a VM image if the download fails for some reason or if the
image is no longer needed. Follow these steps to delete the VM images.
Azure CLI