Nutanix Kubernetes® Platform Guide
Nutanix Kubernetes Platform 2.12
October 8, 2024
Contents

1. Nutanix Kubernetes Platform Overview....................................................11


Architecture................................................................................................................................................11
Supported Infrastructure Operating Systems............................................................................................12

2. Downloading NKP....................................................................................... 16

3. Getting Started with NKP........................................................................... 17


NKP Concepts and Terms........................................................................................................................ 18
Cluster Types..................................................................................................................................19
CAPI Concepts and Terms............................................................................................................ 20
Air-Gapped or Non-Air-Gapped Environment................................................................................ 21
Pre-provisioned Infrastructure........................................................................................................ 22
Licenses.....................................................................................................................................................23
NKP Starter License.......................................................................................................................25
NKP Pro License............................................................................................................................ 27
NKP Ultimate License.................................................................................................................... 28
Add an NKP License......................................................................................................................30
Remove an NKP License............................................................................................................... 30
Commands within a kubeconfig File......................................................................................................... 31
Storage...................................................................................................................................................... 32
Default Storage Providers.............................................................................................................. 33
Change or Manage Multiple StorageClasses.................................................................................34
Provisioning a Static Local Volume................................................................................................36
Resource Requirements............................................................................................................................38
General Resource Requirements................................................................................................... 38
Infrastructure Provider-Specific Requirements............................................................................... 39
Kommander Component Requirements......................................................................................... 40
Managed Cluster Requirements.....................................................................................................40
Management Cluster Application Requirements............................................................................ 41
Workspace Platform Application Defaults and Resource Requirements....................................... 42
Prerequisites for Installation......................................................................................................................44
Installing NKP............................................................................................................................................ 47

4. Basic Installations by Infrastructure......................................................... 50


Nutanix Installation Options...................................................................................................................... 50
Nutanix Basic Prerequisites........................................................................................................... 51
Nutanix Non-Air-gapped Installation...............................................................................................57
Nutanix Air-gapped Installation.......................................................................................................61
Pre-provisioned Installation Options..........................................................................................................65
Pre-provisioned Installation............................................................................................................ 66
Pre-provisioned Air-gapped Installation..........................................................................................78
Pre-provisioned FIPS Install........................................................................................................... 95
Pre-provisioned FIPS Air-gapped Install...................................................................................... 107
Pre-provisioned with GPU Install................................................................................................. 123
Pre-provisioned Air-gapped with GPU Install...............................................................................138
AWS Installation Options........................................................................................................................ 156

AWS Installation........................................................................................................................... 156
AWS Air-gapped Installation........................................................................................................ 167
AWS with FIPS Installation.......................................................................................................... 181
AWS Air-gapped with FIPS Installation........................................................................................192
AWS with GPU Installation...........................................................................................................205
AWS Air-gapped with GPU Installation........................................................................................217
EKS Installation Options......................................................................................................................... 230
EKS Installation............................................................................................................................ 230
EKS: Minimal User Permission for Cluster Creation....................................................................231
EKS: Cluster IAM Policies and Roles.......................................................................................... 233
EKS: Create an EKS Cluster....................................................................................................... 237
EKS: Grant Cluster Access.......................................................................................................... 242
EKS: Retrieve kubeconfig for EKS Cluster.................................................................................. 243
EKS: Attach a Cluster.................................................................................................................. 244
vSphere Installation Options................................................................................................................... 249
vSphere Prerequisites: All Installation Types...............................................................................249
vSphere Installation...................................................................................................................... 254
vSphere Air-gapped Installation................................................................................................... 267
vSphere with FIPS Installation..................................................................................................... 281
vSphere Air-gapped FIPS Installation.......................................................................................... 294
VMware Cloud Director Installation Options........................................................................................... 309
Azure Installation Options....................................................................................................................... 309
Azure Installation.......................................................................................................................... 310
Azure: Creating an Image............................................................................................................ 311
Azure: Creating the Management Cluster....................................................................................312
Azure: Install Kommander............................................................................................................ 313
Azure: Verifying your Installation and UI Log in.......................................................................... 315
Azure: Creating Managed Clusters Using the NKP CLI.............................................................. 316
AKS Installation Options......................................................................................................................... 319
AKS Installation............................................................................................................................ 319
AKS: Create an AKS Cluster....................................................................................................... 322
AKS: Retrieve kubeconfig for AKS Cluster.................................................................................. 323
AKS: Attach a Cluster.................................................................................................................. 325
GCP Installation Options.........................................................................................................................327
GCP Installation............................................................................................................................328

5. Cluster Operations Management............................................................. 339


Operations............................................................................................................................................... 339
Access Control..............................................................................................................................340
Identity Providers.......................................................................................................................... 350
Infrastructure Providers................................................................................................................ 359
Header, Footer, and Logo Implementation.................................................................................. 374
Applications..............................................................................................................................................376
Customizing Your Application.......................................................................................................376
Printing and Reviewing the Current State of an AppDeployment Resource................................ 377
Deployment Scope....................................................................................................................... 377
Logging Stack Application Sizing Recommendations.................................................................. 377
Rook Ceph Cluster Sizing Recommendations............................................................................. 380
Application Management Using the UI.........................................................................................382
Platform Applications.................................................................................................................... 386
Setting Priority Classes in NKP Applications............................................................................... 394
AppDeployment Resources.......................................................................................................... 396
Workspaces............................................................................................................................................. 396
Creating a Workspace..................................................................................................................397
Adding or Editing Workspace Annotations and Labels................................................................ 397

Deleting a Workspace.................................................................................................................. 398
Workspace Applications............................................................................................................... 398
Workspace Catalog Applications...................................................406
Configuring Workspace Role Bindings.........................................................................................420
Multi-Tenancy in NKP...................................................................................................................421
Generating a Dedicated Login URL for Each Tenant.................................................................. 423
Projects.................................................................................................................................................... 423
Creating a Project Using the UI................................................................................................... 424
Creating a Project Using the CLI................................................................................................. 424
Project Applications...................................................................................................................... 425
Project Deployments.....................................................................................................................441
Project Role Bindings................................................................................................................... 447
Project Roles................................................................................................................................ 450
Project ConfigMaps...................................................................................................................... 453
Project Secrets............................................................................................................................. 454
Project Quotas and Limit Ranges................................................................................................ 455
Project Network Policies...............................................................................................................457
Cluster Management............................................................................................................................... 462
Creating a Managed Nutanix Cluster Through the NKP UI......................................................... 462
Creating a Managed Azure Cluster Through the NKP UI............................................................464
Creating a Managed vSphere Cluster Through the NKP UI........................................................ 465
Creating a Managed Cluster on VCD Through the NKP UI.........................................................470
Kubernetes Cluster Attachment....................................................................................................473
Platform Expansion: Conversion of an NKP Pro Cluster to an NKP Ultimate Managed Cluster.............................515
Creating Advanced CLI Clusters..................................................................................................532
Custom Domains and Certificates Configuration for All Cluster Types........................................533
Disconnecting or Deleting Clusters.............................................................................................. 538
Management Cluster.................................................................................................................... 539
Cluster Statuses........................................................................................................................... 539
Cluster Resources........................................................................................................................ 540
NKP Platform Applications........................................................................................................... 541
Cluster Applications and Statuses............................................................................................... 541
Custom Cluster Application Dashboard Cards.............................................................................542
Kubernetes Cluster Federation (KubeFed).................................................................................. 543
Backup and Restore................................................................................................................................544
Velero Configuration..................................................................................................................... 544
Velero Backup.............................................................................................................................. 557
Logging.................................................................................................................................................... 561
Logging Operator..........................................................................................................................562
Logging Stack............................................................................................................................... 562
Admin-level Logs.......................................................................................................................... 565
Workspace-level Logging............................................................................................................. 565
Multi-Tenant Logging.................................................................................................................... 573
Fluent Bit.......................................................................................................................................578
Configuring Loki to Use AWS S3 Storage in NKP.......................................................................582
Customizing Logging Stack Applications..................................................................................... 584
Security.................................................................................................................................................... 585
OpenID Connect (OIDC).............................................................................................................. 585
Identity Providers.......................................................................................................................... 586
Login Connectors..........................................................................................................................586
Access Token Lifetime................................................................................................................. 587
Authentication............................................................................................................................... 587
Connecting Kommander to an IdP Using SAML..........................................................................588
Enforcing Policies Using Gatekeeper...........................................................................................589
Traefik-Forward-Authentication in NKP (TFA)..............................................................................592

Local Users...................................................................................................................................594
Networking............................................................................................................................................... 597
Networking Service.......................................................................................................................598
Required Domains........................................................................................................................ 602
Load Balancing............................................................................................................................. 602
Ingress.......................................................................................................................................... 603
Configuring Ingress for Load Balancing.......................................................................................604
Istio as a Microservice................................................................................................................. 606
GPUs....................................................................................................................................................... 607
Configuring GPU for Kommander Clusters.................................................................................. 608
Enabling the NVIDIA Platform Application on a Management Cluster.........................................608
Enabling the NVIDIA Platform Application on Attached or Managed Clusters.............................610
Validating the Application............................................................................................................. 612
NVIDIA GPU Monitoring...............................................................................................................612
Configuring MIG for NVIDIA.........................................................................................................612
Troubleshooting NVIDIA GPU Operator on Kommander.............................................................614
Disabling NVIDIA GPU Operator Platform Application on Kommander....................................... 615
GPU Toolkit Versions................................................................................................................... 615
Enabling GPU After Installing NKP.............................................................................................. 616
Monitoring and Alerts.............................................................................................................................. 617
Recommendations........................................................................................................................ 617
Grafana Dashboards.................................................................................................................... 619
Cluster Metrics..............................................................................................................................621
Alerts Using AlertManager........................................................................................................... 621
Centralized Monitoring..................................................................................................................626
Centralized Metrics....................................................................................................................... 627
Centralized Alerts......................................................................................................................... 627
Federating Prometheus Alerting Rules........................................................................................ 628
Centralized Cost Monitoring......................................................................................................... 628
Application Monitoring using Prometheus.................................................................................... 630
Setting Storage Capacity for Prometheus....................................................................................632
Storage for Applications.......................................................................................................................... 632
Rook Ceph in NKP.......................................................................................................................633
Bring Your Own Storage (BYOS) to NKP Clusters......................................................................637

6. Custom Installation and Infrastructure Tools........................................ 644


Universal Configurations for all Infrastructure Providers........................................................................ 644
Container Runtime Engine (CRE)................................................................................................ 644
Configuring an HTTP or HTTPS Proxy........................................................................................644
Output Directory Flag................................................................................................................... 649
Customization of Cluster CAPI Components............................................................................... 649
Registry and Registry Mirrors.......................................................................................................650
Managing Subnets and Pods....................................................................................................... 651
Creating a Bastion Host............................................................................................................... 652
Provision Flatcar Linux OS...........................................................................................................653
Load Balancers.............................................................................................................................654
Inspect Cluster for Issues............................................................................................................ 656
Nutanix Infrastructure.............................................................................................................................. 656
Nutanix Infrastructure Prerequisites............................................................................................. 657
Nutanix Installation in a Non-air-gapped Environment.................................................................666
Nutanix Installation in an Air-Gapped Environment..................................................................... 679
Nutanix Management Tools......................................................................................................... 693
Pre-provisioned Infrastructure................................................................................................................. 695
Pre-provisioned Prerequisites and Environment Variables.......................................................... 695
Pre-provisioned Cluster Creation Customization Choices........................................................... 703

Pre-provisioned Installation in a Non-air-gapped Environment.................................................... 714
Pre-provisioned Installation in an Air-gapped Environment......................................................... 723
Pre-Provisioned Management Tools............................................................................................ 736
AWS Infrastructure.................................................................................................................................. 742
AWS Prerequisites and Permissions........................................................................................... 743
AWS Installation in a Non-air-gapped Environment.....................................................................759
AWS Installation in an Air-gapped Environment.......................................................................... 774
AWS Management Tools............................................................................................................. 789
EKS Infrastructure................................................................................................................................... 806
EKS Introduction...........................................................................................................................806
EKS Prerequisites and Permissions............................................................................................ 807
Creating an EKS Cluster from the CLI........................................................................................ 814
Create an EKS Cluster from the UI............................................................................................. 820
Granting Cluster Access...............................................................................................................822
Exploring your EKS Cluster..........................................................................................................823
Attach an Existing Cluster to the Management Cluster............................................................... 825
Deleting the EKS Cluster from CLI.............................................................................................. 829
Deleting EKS Cluster from the NKP UI....................................................................................... 830
Manage EKS Node Pools............................................................................................................ 832
Azure Infrastructure................................................................................................................................. 834
Azure Prerequisites...................................................................................................................... 835
Azure Non-air-gapped Install........................................................................................................839
Azure Management Tools............................................................................................................ 850
AKS Infrastructure................................................................................................................................... 854
Use Nutanix Kubernetes Platform to Create a New AKS Cluster................................................855
Create a New AKS Cluster from the NKP UI.............................................................................. 857
Explore New AKS Cluster............................................................................................................ 859
Delete an AKS Cluster................................................................................................................. 861
vSphere Infrastructure............................................................................................................................. 862
vSphere Prerequisites.................................................................................................................. 864
vSphere Installation in a Non-air-gapped Environment................................................................872
vSphere Installation in an Air-Gapped Environment.................................................................... 886
vSphere Management Tools........................................................................................................ 902
VMware Cloud Director Infrastructure.....................................................................................................912
VMware Cloud Director Prerequisites.......................................................................................... 912
Cloud Director Configure the Organization.................................................................................. 916
Cloud Director Install NKP........................................................................................................... 925
Cloud Director Management Tools.............................................................................................. 935
Google Cloud Platform (GCP) Infrastructure.......................................................................................... 942
GCP Prerequisites........................................................................................................................ 943
GCP Installation in a Non-air-gapped Environment..................................................................... 946
GCP Management Tools..............................................................................................................957

7. Additional Kommander Configuration.................................................... 964


Kommander Installation Based on Your Environment............................................................................ 964
Installing Kommander in an Air-gapped Environment............................................................................ 965
Pro License: Installing Kommander in an Air-gapped Environment.............................................966
Ultimate License: Installing Kommander in an Air-gapped Environment..................................... 967
Images Download into Your Registry: Air-gapped Environments................................................ 967
Installing Kommander in a Non-Air-gapped Environment.......................................................................969
Pro License: Installing Kommander in a Non-Air-gapped Environment....................................... 970
Ultimate License: Installing Kommander in Non-Air-gapped with NKP Catalog Applications....... 971
Installing Kommander in a Pre-provisioned Air-gapped Environment.................................................... 971
Pro License: Installing Kommander in a Pre-provisioned Air-gapped Environment..................... 974
Ultimate License: Installing Kommander in a Pre-provisioned, Air-gapped Environment.............974

Images Download into Your Registry: Air-gapped, Pre-provisioned Environments......................974
Installing Kommander in a Pre-provisioned, Non-Air-gapped Environment............................................977
Pro License: Installing Kommander in a Pre-provisioned, Non-Air-gapped Environment.............978
Ultimate License: Installing Kommander in Pre-provisioned, Non-Air-gapped with NKP Catalog Applications............... 979
Installing Kommander in a Small Environment.......................................................................................979
Dashboard UI Functions......................................................................................................................... 981
Logging into the UI with Kommander..................................................................................................... 981
Default StorageClass...............................................................................................................................982
Identifying and Modifying Your StorageClass.............................................................................. 982
Installing Kommander............................................................................................................. 983
Installing Kommander with a Configuration File..................................................................................... 983
Configuring Applications After Installing Kommander.............................................................................984
Verifying Kommander Installation............................................................................................................985
Kommander Configuration Reference.....................................................................................................986
Configuring the Kommander Installation with a Custom Domain and Certificate................................... 990
Reasons for Setting Up a Custom Domain or Certificate.......................................................................990
Certificate Issuer and KommanderCluster Concepts..............................................................................991
Certificate Authority................................................................................................................................. 992
Certificate Configuration Options............................................................................................................ 992
Using an Automatically-generated Certificate with ACME and Required Basic Configuration..... 992
Using an Automatically-generated Certificate with ACME and Required Advanced Configuration....................993
Using a Manually-generated Certificate....................................................................................... 995
Advanced Configuration: ClusterIssuer...................................................................................................996
Configuring a Custom Domain Without a Custom Certificate.................................................................997
Verifying and Troubleshooting the Domain and Certificate Customization............................................. 998
DNS Record Creation with External DNS...............................................................................................998
Configuring External DNS with the CLI: Management or Pro Cluster..........................................999
Configuring the External DNS Using the UI...............................................................................1000
Customizing the Traefik Deployment Using the UI.................................................................... 1001
Verifying Your External DNS Configuration............................................................................... 1002
Verifying Whether the DNS Deployment Is Successful............................................................. 1002
Examining the Cluster’s Ingress.................................................................................................1003
Verifying the DNS Record.......................................................................................................... 1003
External Load Balancer.........................................................................................................................1004
Configuring Kommander to Use an External Load Balancer..................................................... 1005
Configuring the External Load Balancer to Target the Specified Ports.....................................1005
HTTP Proxy Configuration Considerations........................................................................................... 1006
Configuring HTTP proxy for the Kommander Clusters......................................................................... 1006
Enabling Gatekeeper.............................................................................................................................1007
Creating Gatekeeper ConfigMap in the Kommander Namespace........................................................1008
Installing Kommander Using the Configuration Files and ConfigMap...................................................1009
Configuring the Workspace or Project.................................................................................................. 1009
Configuring HTTP Proxy in Attached Clusters..................................................................................... 1009
Creating Gatekeeper ConfigMap in the Workspace Namespace......................................................... 1010
Configuring Your Applications............................................................................................................... 1010
Configuring Your Application Manually................................................................................................. 1011
NKP Catalog Applications Enablement after Installing NKP.................................................................1011
Configuring a Default Ultimate Catalog after Installing NKP................................................................ 1011
NKP Catalog Application Labels........................................................................................................... 1012

8. Additional Konvoy Configurations........................................................ 1013


FIPS 140-2 Compliance........................................................................................................................ 1013
FIPS Support in NKP................................................................................................................. 1013

Infrastructure Requirements for FIPS 140-2 Mode.................................................................... 1014
Deploying Clusters in FIPS Mode.............................................................................................. 1014
FIPS 140 Images: Non-Air-Gapped Environments.................................................................... 1015
FIPS 140 Images: Air-gapped Environment...............................................................................1015
Validate FIPS 140 in Cluster......................................................................................................1016
FIPS 140 Mode Performance Impact.........................................................................................1017
Registry Mirror Tools.............................................................................................................................1017
Air-gapped vs. Non-air-gapped Environments........................................................................... 1018
Local Registry Tools Compatible with NKP............................................................................... 1018
Using a Registry Mirror.............................................................................................................. 1019
Seeding the Registry for an Air-gapped Cluster...................................................................................1021
Configure the Control Plane..................................................................................................................1022
Modifying Audit Logs.................................................................................................................. 1022
Viewing the Audit Logs.............................................................................................................. 1027
Updating Cluster Node Pools................................................................................................................1028
Cluster and NKP Installation Verification.............................................................................................. 1029
Checking the Cluster Infrastructure and Nodes......................................................................... 1029
Monitor the CAPI Resources......................................................................................................1030
Verify all Pods............................................................................................................................ 1030
Troubleshooting.......................................................................................................................... 1030
GPU for Konvoy.................................................................................................................................... 1031
Delete an NKP Cluster with One Command.......................................................................... 1031

9. Konvoy Image Builder............................................................................ 1032


Creating an Air-gapped Package Bundle............................................................................................. 1034
Use KIB with AWS................................................................................................................................ 1035
Creating Minimal IAM Permissions for KIB................................................................................ 1035
Integrating your AWS Image with NKP CLI............................................................................... 1038
Create a Custom AMI................................................................................................................ 1039
KIB for EKS................................................................................................................................ 1045
Using KIB with Azure............................................................................................................................ 1045
KIB for AKS................................................................................................................................ 1048
Using KIB with GCP..............................................................................................................................1048
Building the GCP Image............................................................................................................ 1049
Creating a Network (Optional)....................................................................................................1050
Using KIB with GPU..............................................................................................................................1050
Verification.................................................................................................................................. 1052
Using KIB with vSphere........................................................................................................................ 1052
Create a vSphere Base OS Image............................................................................................ 1052
Create a vSphere Virtual Machine Template............................................................................. 1054
Using KIB with Pre-provisioned Environments..................................................................................... 1059
Customize your Image.......................................................................................................................... 1060
Customize your Image YAML or Manifest File.......................................................................... 1060
Customize your Packer Configuration........................................................................................1064
Adding Custom Tags to your Image.......................................................................................... 1066
Ansible Variables........................................................................................................................ 1067
Use Override Files with Konvoy Image Builder......................................................................... 1067
Konvoy Image Builder CLI.................................................................................................................... 1077
konvoy-image build.....................................................................................................................1077
konvoy-image completion........................................................................................................... 1082
konvoy-image generate-docs..................................................................................................... 1084
konvoy-image generate.............................................................................................................. 1084
konvoy-image provision.............................................................................................................. 1086
konvoy-image upload..................................................................................................................1087
konvoy-image validate................................................................................................................ 1087

konvoy-image version.................................................................................................................1088

10. Upgrade NKP......................................................................................... 1089


Upgrade Compatibility Tables............................................................................................................... 1090
Supported Operating Systems................................................................................................... 1090
Konvoy Image Builder................................................................................................................ 1090
Supported Kubernetes Cluster Versions.................................................................................... 1090
Upgrading Cluster Node Pools...................................................................................................1091
Upgrade Prerequisites...........................................................................................................................1092
Prerequisites for the Kommander Component...........................................................................1092
Prerequisites for the Konvoy Component.................................................................................. 1093
Upgrade: For Air-gapped Environments Only.......................................................................................1094
Downloading all Images for Air-gapped Deployments............................................................... 1094
Extracting Air-gapped Images and Set Variables...................................................................... 1095
Loading Images for Deployments - Konvoy Pre-provisioned..................................................... 1095
Load Images to your Private Registry - Konvoy........................................................................ 1096
Load Images to your Private Registry - Kommander.................................................................1096
Load Images to your Private Registry - NKP Catalog Applications........................................... 1097
Next Step.................................................................................................................................... 1097
Upgrade NKP Ultimate..........................................................................................................................1097
Ultimate: For Air-gapped Environments Only.............................................................................1098
Ultimate: Upgrade the Management Cluster and Platform Applications.................................... 1099
Ultimate: Upgrade Platform Applications on Managed and Attached Clusters.......................... 1101
Ultimate: Upgrade Workspace NKP Catalog Applications......................................................... 1102
Ultimate: Upgrade Project Catalog Applications........................................................................ 1107
Ultimate: Upgrade Custom Applications.....................................................................................1108
Ultimate: Upgrade the Management Cluster CAPI Components............................................... 1109
Ultimate: Upgrade the Management Cluster Core Addons........................................................1109
Ultimate: Upgrading the Management Cluster Kubernetes Version...........................................1111
Ultimate: Upgrading Managed Clusters..................................................................................... 1114
Ultimate: Upgrade images used by Catalog Applications.......................................................... 1116
Upgrade NKP Pro................................................................................................................................. 1118
NKP Pro: For Air-gapped Environments Only............................................................................1119
NKP Pro: Upgrade the Cluster and Platform Applications......................................................... 1120
NKP Pro: Upgrade the Cluster CAPI Components....................................................................1121
NKP Pro: Upgrade the Cluster Core Addons............................................................................ 1122
NKP Pro: Upgrading the Kubernetes Version............................................................................1123

11. AI Navigator........................................................................................... 1128


AI Navigator User Agreement and Guidelines......................................................................................1128
Accept the User Agreement....................................................................................................... 1128
AI Navigator Guidelines..............................................................................................................1129
AI Navigator Installation and Upgrades................................................................................................ 1130
Installing AI Navigator................................................................................................................ 1130
Disabling AI Navigator................................................................................................................1130
Upgrades to 2.7.0 or Later.........................................................................................................1130
Related Information.................................................................................................................... 1131
Accessing the AI Navigator...................................................................................................................1131
AI Navigator Queries.............................................................................................................................1133
What Goes in a Prompt............................................................................................................. 1133
Inline Commands or Code Snippets.......................................................................................... 1133
Code Blocks................................................................................................................................1135
Selected Prompt Examples........................................................................................................ 1135
AI Navigator Cluster Info Agent: Obtain Live Cluster Information........................................................ 1137

Enable the NKP AI Navigator Cluster Info Agent...................................................................... 1138
Customizing the AI Navigator Cluster Info Agent...................................................................... 1138
Data Privacy FAQs.....................................................................................................................1138

12. Access Documentation.........................................................................1140

13. CVE Management Policies................................................................... 1141

14. Nutanix Kubernetes Platform Insights Guide.....................................1143


Nutanix Kubernetes Platform Insights Overview...................................................................................1143
NKP Insights Alert Table............................................................................................................ 1143
NKP Insights Engine.................................................................................................................. 1144
NKP Insights Architecture.......................................................................................................... 1144
Nutanix Kubernetes Platform Insights Setup........................................................................................ 1145
NKP Insights Resource Requirements.......................................................................................1146
NKP Insights Setup and Configuration...................................................................................... 1146
Grant View Rights to Users Using the UI.................................................................................. 1147
Uninstall NKP Insights................................................................................................................1151
NKP Insights Bring Your Own Storage (BYOS) to Insights..................................................................1154
Requirements..............................................................................................................................1154
Create a Secret to support BYOS for Nutanix Kubernetes Platform Insights.............................1155
Helm Values for Insights Storage.............................................................................................. 1155
Installing NKP Insights Special Storage in the UI......................................................................1156
Installing Insights Storage using CLI..........................................................................................1156
Installing Nutanix Kubernetes Platform Insights in an Air-gapped Environment........................ 1157
Manually Creating the Object Bucket Claim.............................................................................. 1158
Nutanix Kubernetes Platform Insights Alerts........................................................................................ 1159
Resolving or Muting Alerts......................................................................................................... 1159
Viewing Resolved or Muted Alerts............................................................................................. 1160
Insight Alert Usage Tips............................................................................................................. 1161
NKP Insight Alert Details............................................................................................................1161
NKP Insights Alert Notifications With Alertmanager...................................................................1161
Enable NKP-Related Insights Alerts.......................................................................................... 1169
Configuration Anomalies.............................................................................................................1170
1. NUTANIX KUBERNETES PLATFORM OVERVIEW
Architecture
Kubernetes® creates the foundation for the Nutanix Kubernetes® Platform (NKP) cluster. This topic briefly reviews the native Kubernetes architecture and presents a simplified version of the NKP architecture for both the Pro and Ultimate versions. The architecture also depicts the operational workflow for an NKP cluster.
This release of NKP is built on Kubernetes v1.29.6. Kubernetes® is a registered trademark of The Linux Foundation in the United States and other countries and is used according to a license from The Linux Foundation.

Components of the Kubernetes Control Plane


The native Kubernetes cluster consists of components in the cluster’s control plane and worker nodes that run
containers and maintain the runtime environment.
NKP supplements the native Kubernetes cluster by including a pre-defined and pre-configured set of applications. Because this pre-defined set of applications provides critical features for managing a Kubernetes cluster in a production environment, the default set is identified as the NKP platform applications.
To view the full set of NKP platform services, see Platform Applications on page 386.
The following illustration depicts the NKP architecture and the workflow of the key components:



Figure 1: NKP Architecture

Related Information
For information on related topics or procedures, see the Kubernetes documentation.

Supported Infrastructure Operating Systems


This topic lists all the operating systems (OS) that are currently supported and tested for use with Nutanix Kubernetes Platform (NKP).

Table 1: Nutanix

Operating System | Kernel | Default Config | FIPS | Air-gapped | FIPS Air-gapped | GPU Support | GPU Air-gapped | Nutanix Image Builder
Ubuntu 22.04 | 5.4.0-125-generic | Yes Yes Yes Yes Yes
Rocky Linux 9.4 | 5.14.0-162.6.1.el9_1.x86_64 | Yes Yes Yes


Table 2: Amazon Web Services (AWS)

Operating System | Kernel | Default Config | FIPS | Air-gapped | FIPS Air-gapped | GPU Support | GPU Air-gapped | Konvoy Image Builder
RHEL 8.6 | 4.18.0-372.70.1.el8_6.x86_64 | Yes Yes Yes Yes Yes Yes Yes
RHEL 8.8 | 4.18.0-477.27.1.el8_8.x86_64 | Yes Yes Yes Yes Yes Yes Yes
Ubuntu 18.04 (Bionic Beaver) | 5.4.0-1103-aws | Yes Yes
Ubuntu 20.04 (Focal Fossa) | 5.15.0-1051-aws | Yes Yes Yes
Rocky Linux 9.1 | 5.14.0-162.12.1.el9_1.0.2.x86_64 | Yes Yes Yes
Flatcar 3033.3.x | 5.10.198-flatcar | Yes Yes

Table 3: Microsoft Azure

Operating System | Kernel | Default Config | FIPS | Air-gapped | FIPS Air-gapped | GPU Support | GPU Air-gapped | Konvoy Image Builder
Ubuntu 20.04 (Focal Fossa) | 5.15.0-1053-azure | Yes Yes
Rocky Linux 9.1 | 5.14.0-162.12.1.el9_1.0.2.x86_64 | Yes Yes

Table 4: Google Cloud Platform (GCP)

Operating System | Kernel | Default Config | FIPS | Air-gapped | FIPS Air-gapped | GPU Support | GPU Air-gapped | Konvoy Image Builder
Ubuntu 18.04 | 5.4.0-1072-gcp | Yes Yes
Ubuntu 20.04 | 5.13.0-1024-gcp | Yes Yes



Table 5: Pre-provisioned

Operating System | Kernel | Default Config | FIPS | Air-gapped | FIPS Air-gapped | GPU Support | GPU Air-gapped | Konvoy Image Builder
RHEL 8.6 | 4.18.0-372.70.1.el8_6.x86_64 | Yes Yes Yes Yes Yes Yes Yes
RHEL 8.8 | 4.18.0-477.27.1.el8_8.x86_64 | Yes Yes Yes Yes Yes Yes Yes
Flatcar 3033.3.x | 3033.3.16-flatcar | Yes Yes
Ubuntu 20.04 | 5.15.0-1048-aws | Yes Yes Yes
Rocky Linux 9.1 | 5.14.0-162.12.1.el9_1.0.2.x86_64 | Yes Yes Yes

Table 6: Pre-provisioned or Azure

Operating System | Kernel | Default Config | FIPS | Air-gapped | FIPS Air-gapped | GPU Support | GPU Air-gapped | Konvoy Image Builder
RHEL 8.6 | 4.18.0-372.52.1.el8_6.x86_64 | Yes Yes Yes Yes Yes

Table 7: vSphere

Operating System | Kernel | Default Config | FIPS | Air-gapped | FIPS Air-gapped | GPU Support | GPU Air-gapped | Konvoy Image Builder
RHEL 8.6 | 4.18.0-372.9.1.el8.x86_64 | Yes Yes Yes Yes Yes
RHEL 8.8 | 4.18.0-477.10.1.el8_8.x86_64 | Yes Yes Yes Yes Yes
Ubuntu 20.04 | 5.4.0-125-generic | Yes Yes Yes
Rocky Linux 9.1 | 5.14.0-162.6.1.el9_1.x86_64 | Yes Yes Yes
Flatcar 3033.3.x | 3033.3.16-flatcar | Yes Yes

Table 8: VMware Cloud Director (VCD)

Operating System | Kernel | Default Config | FIPS | Air-gapped | FIPS Air-gapped | GPU Support | GPU Air-gapped | Konvoy Image Builder
Ubuntu 20.04 | 5.4.0-125-generic | Yes Yes



Table 9: Amazon Elastic Kubernetes Service (EKS)

Operating System | Kernel | Default Config | FIPS | Air-gapped | FIPS Air-gapped | GPU Support | GPU Air-gapped | Konvoy Image Builder
Amazon Linux 2 v7.9 | 5.10.199-190.747.amzn2.x86_64 | Yes

Table 10: Azure Kubernetes Service (AKS)

Operating System | Kernel | Default Config | FIPS | Air-gapped | FIPS Air-gapped | GPU Support | GPU Air-gapped | Konvoy Image Builder
Ubuntu 22.04.2 LTS | 5.15.0-1051-azure | Yes



2. DOWNLOADING NKP
You can download NKP from the Nutanix Support portal.

Procedure

1. From the Nutanix Download Site, select the NKP binary for either Darwin (MacOS) or Linux OS.

2. Extract the .tar file that is compatible with your OS as follows.

• For MacOS or Darwin:


1. Right-click on the .tar file using a file manager program such as 7-Zip.
2. Select Open With and then 7-Zip File Manager.
3. Select Extract and choose a location to save the extracted files.
• For Linux:
1. To extract the .tar file to the current directory using the CLI, type tar -xvf filename.tar.
2. If the file is compressed with gzip, add the z flag: tar -xvzf filename.tar.gz.
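For example, a minimal sketch of extracting the Linux binary from the CLI and placing it on your PATH; the archive name is a placeholder, so substitute the file you actually downloaded, and the final version check assumes the CLI follows the usual version subcommand convention:
# Extract the downloaded archive into the current directory (placeholder file name).
tar -xvzf nkp_v2.12.0_linux_amd64.tar.gz
# Move the extracted nkp binary onto your PATH and confirm it runs (assumed subcommand).
sudo mv nkp /usr/local/bin/
nkp version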
3. GETTING STARTED WITH NKP
At Nutanix, we partner with you throughout the entire cloud-native journey as follows:

About this task

• Help you get started with Nutanix Kubernetes Platform (NKP) during the planning phase, which introduces definitions and concepts.
• Guide you with the Basic Installations by Infrastructure on page 50 through the NKP software installation
and start-up.
• Guide you with the Cluster Operations Management on page 339, which involves customizing applications
and managing operations.
You can install in multiple ways:

• On Nutanix infrastructure.
• On a public cloud infrastructure, such as Amazon Web Services (AWS), Google Cloud Platform (GCP), or Azure.
• On an internal network, on-premises environment, or with a physical or virtual infrastructure.
• On an air-gapped environment.
• With or without Federal Information Processing Standards (FIPS) and graphics processing unit (GPU).
Before you install NKP:

Procedure

1. Complete the prerequisites (see Prerequisites for Installation on page 44) required to install NKP.

2. Determine the infrastructure (see Resource Requirements on page 38) on which you want to deploy NKP.

3. After you choose your environment, download NKP, and select the Basic Installations by Infrastructure on
page 50 for your infrastructure provider and environment.
The basic installations set up the cluster with the Konvoy component and then install the Kommander component
to access the dashboards through the NKP UI. The topics in the Basic Installations by Infrastructure on
page 50 chapter help you explore NKP and prepare clusters for production to deploy and enable the
applications that support Cluster Operations Management on page 339.

4. (Optional) After you complete the basic installation and are ready to customize, perform Custom Installation
and Additional Infrastructure Tools, if required.

5. To prepare the software, perform the steps described in the Cluster Operations Management chapter.

6. Deploy and test your workloads.



NKP Concepts and Terms
This topic describes the terminology used in Nutanix Kubernetes Platform (NKP) to help you understand important
NKP concepts and terms.
NKP is composed of three main components: Konvoy, Kommander, and Konvoy Image Builder (KIB). These
three components work together to provide a single and centralized control point for an organization’s application
infrastructure. NKP empowers organizations to deploy, manage, and scale Kubernetes workloads in production
environments more efficiently.
Each of the three main components specifically manages the following:

• Konvoy is the cluster life cycle manager component of NKP. Konvoy relies on Cluster API (CAPI), Calico, and other open-source and proprietary software to provide simple cluster life cycle management for conformant Kubernetes clusters with networking and storage capabilities.
Konvoy uses industry-standard tools to provision certified Kubernetes clusters on multiple cloud providers,
vSphere, and on-premises hardware in connected and air-gapped environments. Konvoy contains the following
components:
Cluster Manager consists of Cluster API, Container Storage Interface (CSI), Container Network Interface (CNI),
Cluster AutoScaler, Cert Manager, and MetalLB.
For Networking, Kubernetes uses CNI (Container Network Interface) as an interface between network
infrastructure and Kubernetes pod networking. In NKP, the Nutanix provider uses the Cilium CNI. All other
providers use Calico CNI.
The Konvoy component is installed according to the cluster’s infrastructure. Remember:
1. To install NKP quickly and without much customization, see Basic Installations by Infrastructure on
page 50.
2. To choose more environments and cluster customizations, see Custom Installation and Additional
Infrastructure Tools.
• Kommander is the fleet management component of NKP. Kommander delivers centralized observability, control,
governance, unified policy, and better operational insights. With NKP Pro, Kommander manages a single
Kubernetes cluster.
In NKP Ultimate, Kommander supports attaching workload clusters and life cycle management of clusters using
Cluster API. NKP Ultimate also offers life cycle management of applications through FluxCD. Kommander
contains the following components:

• User interface, Security, Observability, Networking, and Application Management.


• Platform Applications: Applications such as observability, cost management, monitoring, and logging are available with NKP, making NKP clusters production-ready out of the box. Platform applications are a curated selection of applications from the open-source community consumed by the platform.
• Pro Platform Applications: Monitoring, Logging, Backup or Restore, Policy Agent, External DNS, Load
Balance, Ingress, SSO, Service Mesh.
• Ultimate Platform Applications: Includes all of the Pro Platform applications, plus additional Access Control
and Centralized Cost Management.
• Catalog Applications: Applications in NKP Ultimate that are deployed to be used for customer workloads,
such as Kafka, Spark, and ZooKeeper.
The Kommander component is installed according to the cluster’s environment type. For more information, see
Installing Kommander by Environment .
• Konvoy Image Builder (KIB) creates Cluster API-compliant machine images. It configures only those images to
contain all the necessary software to deploy Kubernetes cluster nodes. For more information, see Konvoy Image
Builder.



Section Contents

Cluster Types
Cluster types such as Management clusters, Managed clusters, and Attached clusters are key concepts in
understanding and getting the most out of Nutanix Kubernetes Platform (NKP) Pro versus Ultimate environments.

Multi-cluster Environment

• Management Cluster: Is the cluster where you install NKP, and it is self-managed. In a multi-cluster
environment, the Management cluster also manages other clusters. Customers with an Ultimate license need to
run workloads on Managed and Attached clusters, not on the Management cluster. For more information, see
License Packaging.
• Managed Cluster: Also called an “NKP cluster,” this is a type of workload cluster that you can create with
NKP. The NKP Management cluster manages its infrastructure, its life cycle, and its applications.
• Attached Cluster: This is a type of workload cluster that is created outside of NKP but is then connected to the
NKP Management Cluster so that NKP can manage it. In these cases, the NKP Management cluster only manages
the attached cluster’s applications.

Figure 2: Multi-cluster Environment

Single-cluster Environment
NKP Pro Cluster: Is the cluster where you install NKP. An NKP Pro cluster is a stand-alone cluster. It is self-managed and, therefore, capable of provisioning itself. In this single-cluster environment, you cannot attach other
clusters; all workloads are run on your NKP Pro cluster. You can, however, have several separate NKP Pro instances,
each with its own license.
Customers with a Pro license can run workloads on their NKP Pro cluster.

Note: If you have not decided which license to get but plan on adding one or several clusters to your environment and managing them centrally, Nutanix recommends obtaining an Ultimate license.



Figure 3: Single-cluster Environment

Self-Managed Cluster
In the Nutanix Kubernetes Platform (NKP) landscape, only NKP Pro and NKP Ultimate Management clusters are
self-managed. Self-managed clusters manage the provisioning and deployment of their own nodes through CAPI
controllers. The CAPI controllers are a managing entity that automatically manages the life cycle of a cluster’s nodes
based on a customizable definition of the resources.
A self-managed cluster is one in which the CAPI resources and controllers that describe and manage it run on the same cluster they are managing. As part of the underlying processing using the --self-managed flag (a usage sketch follows the list below), the NKP CLI does the following:

• Creates a bootstrap cluster.


• Creates a workload cluster.
• Moves CAPI controllers from the bootstrap cluster to the workload cluster, making it self-managed.
• Deletes the bootstrap cluster.
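For illustration, a hypothetical invocation that creates a self-managed cluster on the Nutanix provider. Apart from the --self-managed flag described above, the subcommand and flag names shown here are assumptions, so check the installation chapter for your infrastructure for the exact syntax:
# Create a workload cluster and make it self-managed in one step (placeholder names and flags).
nkp create cluster nutanix \
  --cluster-name=${CLUSTER_NAME} \
  --self-managed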

Network-Restricted Cluster
A network-restricted cluster is not the same as an air-gapped cluster.
A network-restricted or firewalled cluster is secured by a firewall, perimeter network, Network Address Translation (NAT) gateway, or proxy server, and therefore requires additional access information. Network-restricted clusters are usually located in remote locations or at the edge and, therefore, not in the same network as the Management cluster.
The main difference between network-restricted and air-gapped clusters is that network-restricted clusters can reach
external networks (like the Internet), but their services or ingresses cannot be accessed from outside. Air-gapped
clusters, however, do not allow ingress or egress traffic.
In a multi-cluster environment, NKP supports attaching a network-restricted cluster to an NKP Management cluster.
You can also enable a proxied access pipeline through the Management cluster, which allows you to access the
network-restricted cluster’s dashboards without being in the same network.

CAPI Concepts and Terms


Nutanix Kubernetes Platform (NKP) uses ClusterAPI (CAPI) technology to create and manage the life cycle of
Kubernetes Clusters. A basic understanding of CAPI concepts and terms helps understand how to install and maintain
NKP. You can find a deeper discussion of the architecture in the ClusterAPI Book.
CAPI makes use of a bootstrap cluster for provisioning and starting clusters. A bootstrap cluster handles the following
actions:



• Generating the cluster certificates if they are not otherwise specified.
• Initializing the control plane and managing the creation of other nodes until it is complete.
• Joining control plane and worker nodes to the cluster.
• Installing and configuring core Kubernetes components, such as the networking plugin (Calico CNI), Container Storage Interface (CSI) volume provisioners, and the cluster autoscaler.
BootstrapData
BootstrapData is machine or node role-specific data, such as cloud initialization data, used to bootstrap a “machine”
onto a node.
For customers using Kommander for multi-cluster management, a management cluster manages the life cycle of
workload clusters. As the management cluster, NKP Kommander works with bootstrap and infrastructure providers
and maintains cluster resources such as bootstrap configurations and templates. If you are working with only one
cluster, Kommander will provide you with add-on (platform application) management for that cluster but not others.
Workload Cluster
A workload cluster is a Kubernetes cluster whose life cycle is managed by a management cluster. It provides the
platform to deploy, execute, and run workloads.
These additional concepts are essential for understanding the upgrade. They are part of a collection of Custom
Resource Definitions (CRDs) that extend the Kubernetes API.
ClusterResourceSet
A Kubernetes cluster created by CAPI is functionally minimal; crucial components like CSI and CNI are not in the default cluster spec. A ClusterResourceSet is a custom resource definition (CRD) that can be used to group and deploy core cluster components after the installation of the Kubernetes cluster.
When you create a bootstrap cluster, you can find all these components in the default namespace; NKP moves them to the workload cluster while making the cluster self-managed.
Machine
A machine is a declarative specification for a platform or infrastructure component that hosts a Kubernetes node as
a bare metal server or a VM. CAPI uses provider-specific controllers to provision and install new hosts that register
as nodes. When you update a machine spec other than for specific values, such as annotations, status, and labels, the
controller deletes the host and creates a new one that conforms to the latest spec. This is called machine immutability.
If you delete a machine, the controller deletes the infrastructure and the node. Provider-specific information is not
portable between providers.
MachineDeployment
Within CAPI, you use declarative MachineDeployments to handle changes to machines by replacing them like a
core Kubernetes Deployment replaces Pods. MachineDeployments reconcile changes to machine specs by rolling out
changes to two MachineSets (similar to a ReplicaSet), both the old and the newly updated.
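As a brief usage sketch, MachineDeployments implement the standard Kubernetes scale subresource, so you can grow or shrink a node pool much like a core Deployment; the resource name and namespace below are placeholders:
# List MachineDeployments, then scale one to four replicas (placeholder names).
kubectl get machinedeployments -n default
kubectl scale machinedeployment my-cluster-md-0 --replicas=4 -n default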
MachineHealthCheck
A MachineHealthCheck identifies unhealthy node conditions and initiates remediation for nodes owned by a
MachineSet.

Related Information
For information on related topics or procedures, see:

• ClusterAPI Book: https://cluster-api.sigs.k8s.io/user/concepts.html


• Customizing CAPI Components for a Cluster: Customization of Cluster CAPI Components on page 649

Air-Gapped or Non-Air-Gapped Environment


How to know if you are in an air-gapped or a non-air-gapped environment?



Air-Gapped Environments
In an air-gapped environment, your environment is isolated from unsecured networks like the Internet. Running your
workloads in an air-gapped environment is expected in scenarios where security is a determining factor. Nutanix
Kubernetes Platform (NKP) in air-gapped environments allows you to manage your clusters while shielding them
from undesired external influences.
You can create an air-gapped cluster in on-premises environments or any other environment. In this configuration,
you are responsible for providing an image registry. You must also retrieve required artifacts and configure NKP
to use those from a local directory when creating and managing NKP clusters. See Supported Infrastructure
Operating Systems on page 12.
Air-gapped environments provide secure interactions with other networks. You can perform actions in several ways
that require incoming data from other networks regardless of your environment’s isolation. In some configurations,
air-gapped clusters allow inbound connections but cannot initiate outbound connections. In other configurations,
you can set up a bastion host, which serves as a gateway between the Internet (or other untrusted networks) and your environment and facilitates the download of installation and upgrade files and images that other machines in the air-gapped environment require.
Common Industry Synonyms: Fettered, disconnected, restricted, Secret Internet Protocol Router Network (SIPRNet), etc.

Non-Air-Gapped Environments
In a non-air-gapped environment, two-way access to and from the Internet exists. You can create a non-air-gapped
cluster on pre-provisioned (on-premises) environments or any cloud infrastructure.
NKP in a non-air-gapped environment allows you to manage your clusters while facilitating connections and offering
integration with other tools and systems.
Common Industry Synonyms: Open, accessible (to the Internet), not restricted, Non-classified Internet Protocol
(IP) Router Network (NIPRNet), etc.

Pre-provisioned Infrastructure
The pre-provisioned infrastructure allows the deployment of Kubernetes using Nutanix Kubernetes Platform (NKP)
to pre-existing machines. Other providers, such as vSphere, AWS, or Azure, create or provision the machines before
Kubernetes is deployed. On most infrastructures (including vSphere and cloud providers), NKP provisions the actual
nodes automatically as part of deploying a cluster. It creates the virtual machine (VM) using the appropriate image
and then handles the networking and installation of Kubernetes.
However, NKP can also work with pre-provisioned infrastructure in which you provision the VMs for the nodes.
You can pre-provision nodes for NKP on bare metal, vSphere, or cloud. Pre-provisioned and vSphere combine the
physical (on-premises bare metal) and virtual servers (VMware vSphere).

Usage of Pre-provisioned Environments


Pre-provisioned environments are often used in bare metal deployments, where you deploy your OS (such as Red Hat Enterprise Linux (RHEL) or Ubuntu) on physical machines. See Cluster Types on page 19.
When creating a pre-provisioned cluster as an Infrastructure Operations Manager, you are responsible for allocating compute resources, setting up networking, and collecting IP and Secure Shell (SSH) information for NKP. You then provide all the required details to the pre-provisioned provider to deploy Kubernetes. These operations are done manually or with the help of other tools.
In pre-provisioned environments, NKP handles your cluster’s life cycle (installation, upgrade, node management, and
so on). NKP installs Kubernetes, performs monitoring and logging applications, and has its own UI.
The main use cases for the pre-provisioned provider are:

• On-premises clusters.
• Cloud or Infrastructure as a Service (IaaS) environments that do not currently have a Nutanix-supported
infrastructure provider.



• Cloud environments where you must use pre-defined infrastructure instead of having one of the supported cloud providers create it for you.
In an environment with access to the Internet, you can retrieve artifacts from specialized repositories dedicated
to them, such as Docker images from the DockerHub and Helm Charts from a dedicated Helm Chart repository.
However, in an air-gapped environment, you need local repositories to store Helm charts, Docker images, and other
artifacts. Tools such as JFrog, Harbor, and Nexus handle multiple types of artifacts in a single local repository.

Related Information
For information on related topics or procedures see Pre-provisioned Installation Options on page 65.

Licenses
This chapter describes the Nutanix Kubernetes Platform (NKP) licenses. The license type you subscribe to determines
what NKP features are available to you. Features compatible with all versions of NKP can be activated by purchasing
additional Add-on licenses.
The NKP licenses available are:

• NKP Starter
• NKP Pro
• NKP Ultimate

Table 11: Feature Support Matrix

Feature NKP Starter NKP Pro NKP Ultimate


APPLICATIONS
Workspace Platform Applications X X
Prometheus X X
Kubernetes Dashboard X X
Reloader X X X
Traefik X X X
Project Level Platform Applications X
Catalog Applications (Kafka and X
Zookeeper)
Custom Applications X
Partner Applications X
COST MONITORING (Kubecost)
AI OPS X
Insights X
AI Navigator X X
CLUSTER MANAGEMENT
LCM Management Cluster X X X
LCM Workload Clusters X X X



Workload Cluster Creation using UI, X X X
CLI, or YAML
Attaching Workload Cluster X
Upgrade Management Cluster X X X
Upgrade Workload Clusters X X X
Third-Party Kubernetes Management
LCM of EKS Cluster X
LCM of AKS Cluster X
GITOPS
Continuous Deployment (FluxCD) X
FluxCD (as an application) X X
UX
NKP CLI X X X
NKP UI X X X
Workspaces Management X
Projects X
Add new Infrastructure Provider X
LOGGING
Workspace Level Logging X X
Fluentbit X X
Multi-tenant Logging X
MONITORING
Backup & Restore X X
GPU X X
SECURITY
Single Sign On X X X
Policy control using Gatekeeper X X
CLUSTER PROVISIONING
NKP on Nutanix Infrastructure X X X
NKP on AWS X X
NKP on Azure X X
NKP on GCP X X
NKP on vSphere X X
Pre-provisioned X X
VMware Cloud Director X X
EKS Provisioning X



Multi-Cloud, Hybrid Cloud X
(Management and Workload clusters
on different infrastructures)

SECURITY
FIPS Compliant Build X X
Konvoy Image Builder or Bring your X X
own OS
Nutanix provided OS Image (Rocky X X X
Linux)
Air-Gapped Deployments X X X
RBAC
RBAC - Admin role only X X X
RBAC - Kubernetes X X X
NKP RBAC X X
Customize UI Banners X X
Upload custom Logo X

Purchase of a License
NKP Licenses are sold in units of cores. To learn more about licenses and to obtain a valid license:

• Contact a Nutanix sales representative.


• Download the binary files from the Nutanix Support portal.

NKP Starter License


Nutanix Kubernetes Platform (NKP) Starter license is a self-managed single cluster Kubernetes solution that offers
a feature-rich, easy-to-deploy, and easy-to-manage entry-level cloud container platform. The NKP Starter license
provides access to the entire Konvoy cluster environment and the Kommander platform application manager.
NKP Starter is bundled with NCI Pro and NCI Ultimate.

Compatible Infrastructure
NKP Starter operates across Nutanix's entire range of cloud, on-premises, edge, and air-gapped infrastructures and
has support for various OSes, including immutable OSes. To view the complete list of compatible infrastructure, see
Supported Infrastructure Operating Systems on page 12.
To understand the NKP Starter cluster in one of the listed environments of your choice, see Basic Installations by
Infrastructure on page 50 or Custom Installation and Infrastructure Tools on page 644.

Cluster Manager
Konvoy is the Kubernetes installer component of NKP that uses industry-standard tools to create a certified Kubernetes cluster. These industry-standard tools create a cluster management system that includes:

• Control Plane: Manages the worker nodes and pods in the cluster.



• Worker Nodes: Used to run containerized applications and handle networking to ensure that the traffic between
applications across and outside the cluster is facilitated correctly.
• Container Networking Interface (CNI): Calico’s open-source networking and network security solution for
containers, virtual machines, and native host-based workloads.
• Container Storage Interface (CSI): A common abstraction to container orchestrations for interacting with storage
subsystems of various types.
• Kubernetes Cluster API (CAPI): Cluster API uses Kubernetes-style APIs and patterns to automate cluster life
cycle management for platform operators. For more information on how CAPI is integrated into NKP Pro, see
CAPI Concepts and Terms on page 20.
• Cert Manager: A Kubernetes addon to automate the management and issuance of TLS certificates from various
issuing sources.
• Cluster Autoscaler: A component that automatically adjusts the size of a Kubernetes cluster so that all pods have a
location to run and there are no unwanted nodes.

Builtin GitOps
NKP Starter is bundled with GitOps, an operating model for Kubernetes and other cloud native technologies,
providing a set of best practices that unify Git deployment, management, and monitoring for containerized clusters
and applications. GitOps uses Git as a single source of truth for declarative infrastructure and applications. With
GitOps, software agents can alert on any divergence between Git and what is running in a cluster, and if there is a difference, Kubernetes reconcilers automatically update or roll back the cluster depending on the case.

Platform Applications
NKP Starter deploys only the required applications during installation by default. You can use the Kommander UI
to customize which Platform applications to deploy to the cluster in a workspace. For a list of available Platform
applications included with NKP, see Workspace Platform Application Resource Requirements on page 394.

Figure 4: NKP Starter Diagram



NKP Pro License
Nutanix Kubernetes Platform (NKP) Pro is an enterprise-ready license version that is stack-ready to move
applications in a cluster to production.
NKP Pro is a multi-cluster life cycle management Kubernetes solution centered around a management cluster that
manages many attached or managed Kubernetes clusters through a centralized management dashboard. The
management dashboard provides a single observability point and control throughout your attached or managed
clusters. The NKP Pro license gives you access to the entire Konvoy cluster environment and the NKP UI dashboard that deploys platform and catalog applications, provides multi-cluster management, and offers comprehensive compatibility with a complete range of infrastructure deployment options.
The Pro license is equivalent to the Essential license in previous releases. A new feature is that Pro allows for creating workspace clusters.

Compatible Infrastructure
NKP Pro operates across Nutanix entire range of cloud, on-premises, edge, and air-gapped infrastructures and has
support for various OSs, including immutable OSs. For a complete list of compatible infrastructure, see Supported
Infrastructure Operating Systems on page 12.
For instructions on standing up an NKP Pro cluster in one of the listed environments, see Basic Installations by
Infrastructure on page 50 or Custom Installation and Infrastructure Tools on page 644.

Cluster Manager
Konvoy is the Kubernetes installer component of NKP Pro that uses industry-standard tools to create a certified
Kubernetes cluster. These industry standard tools create a cluster management system that includes:

• Control Plane: Manages the worker nodes and pods in the cluster.
• Worker Nodes: Used to run containerized applications and handle networking to ensure that the traffic between
applications across and outside the cluster is facilitated correctly.
• Container Networking Interface (CNI): Calico’s open-source networking and network security solution for
containers, virtual machines, and native host-based workloads.
• Container Storage Interface (CSI): A common abstraction to container orchestrations for interacting with storage
subsystems of various types.
• Kubernetes Cluster API (CAPI): Cluster API uses Kubernetes-style APIs and patterns to automate cluster life
cycle management for platform operators. For more information on how CAPI is integrated into NKP Pro, see
CAPI Concepts and Terms on page 20.
• Cert Manager: A Kubernetes addon to automate the management and issuance of TLS certificates from various
issuing sources.
• Cluster Autoscaler: A component that automatically adjusts the size of a Kubernetes cluster so that all pods have a
location to run and there are no unwanted nodes.

Builtin GitOps
NKP Pro is bundled with GitOps, an operating model for Kubernetes and other cloud native technologies, providing
a set of best practices that unify Git deployment, management, and monitoring for containerized clusters and
applications. GitOps uses Git as a single source of truth for declarative infrastructure and applications. With GitOps,
software agents can alert on any divergence between Git and what is running in a cluster, and if there is a difference, Kubernetes reconcilers automatically update or roll back the cluster depending on the case.

Platform Applications
When creating a cluster, the application manager deploys specific platform applications on the newly created cluster.
You can deploy applications in any NKP managed cluster with the complete flexibility to operate across cloud, on-premises, edge, and air-gapped scenarios. Customers can also use the UI with Kommander to customize the required
platform applications to deploy to the cluster in a given workspace.
With NKP Pro, you can use the Kommander UI to customize which platform applications to deploy in the cluster in
a workspace. For a list of available platform applications included with NKP, see Workspace Platform Application
Resource Requirements on page 394.

Figure 5: NKP Pro Diagram

NKP Ultimate License


Nutanix Kubernetes Platform (NKP) Ultimate is true fleet management for clusters running on-premises, in the
cloud, or anywhere. It is a multi-cluster life cycle management Kubernetes solution centered around a management
cluster that manages many attached or managed Kubernetes clusters through a centralized management dashboard.
The management dashboard provides a single observability point and control throughout your attached or managed
clusters. The NKP Ultimate license gives you access to the Konvoy cluster environment, the NKP UI dashboard that
deploys platform and catalog applications, multi-cluster management, and comprehensive compatibility with the
complete infrastructure deployment options.
The Ultimate license is equivalent to the Enterprise license in previous releases.

Compatible Infrastructure
NKP Ultimate operates across Nutanix entire range of cloud, on-premises, edge, and air-gapped infrastructures and
has support for various OSs, including immutable OSs. See Supported Infrastructure Operating Systems on
page 12 for a complete list of compatible infrastructure.
For the basics on standing up a NKP Ultimate cluster in one of the listed environments of your choice, see Basic
Installs by Infrastructure or Custom Installation and Infrastructure Tools on page 644.

Cluster Manager
Konvoy is the Kubernetes installer component of NKP that uses industry-standard tools to create a certified Kubernetes cluster. These industry-standard tools create a cluster management system that includes:

• Control Plane: Manages the worker nodes and pods in the cluster.



• Worker Nodes: Used to run containerized applications and handle networking to ensure that the traffic between
applications across and outside the cluster is facilitated correctly.
• Container Networking Interface (CNI): Calico’s open-source networking and network security solution for
containers, virtual machines, and native host-based workloads.
• Container Storage Interface (CSI): A common abstraction to container orchestrations for interacting with storage
subsystems of various types.
• Kubernetes Cluster API (CAPI): Cluster API uses Kubernetes-style APIs and patterns to automate cluster life
cycle management for platform operators. For more information on how CAPI is integrated into NKP Pro, see
CAPI Concepts and Terms on page 20.
• Cert Manager: A Kubernetes addon to automate the management and issuance of TLS certificates from various
issuing sources.
• Cluster Autoscaler: A component that automatically adjusts the size of a Kubernetes cluster so that all pods have a
location to run and there are no unwanted nodes.

Builtin GitOps
NKP Ultimate is bundled with GitOps, an operating model for Kubernetes and other cloud native technologies,
providing a set of best practices that unify Git deployment, management, and monitoring for containerized clusters
and applications. GitOps uses Git as a single source of truth for declarative infrastructure and applications. With
GitOps, software agents can alert on any divergence between Git and what is running in a cluster, and if there is a difference, Kubernetes reconcilers automatically update or roll back the cluster depending on the case.

Platform Applications
When creating a cluster, the application manager deploys specific platform applications on the newly created cluster.
Applications can be deployed in any NKP managed cluster, giving you complete flexibility to operate across cloud,
on-premises, edge, and air-gapped scenarios. Customers can also use the UI with Kommander to customize which
platform applications to deploy to the cluster in a given workspace.
With NKP Ultimate, you can use the Kommander UI to customize which platform applications to deploy to the
cluster in a workspace. For a list of available platform applications included with NKP, see Workspace Platform
Application Resource Requirements on page 394.



Figure 6: NKP Ultimate Diagram

Add an NKP License


If not done in the prompt-based CLI, add your license through the UI.

About this task


For licenses bought directly from D2iQ or Nutanix, you can obtain the license token through the Support Portal.
Insert this token in the last step of adding a license in the UI.

Note: You must be an administrator to add licenses to NKP.

Procedure

1. Select Global in the workspace header drop-down.

2. In the sidebar menu, select Administration > Licensing.

3. Select + and Activate License to enter the Activate License form.

4. On the Activate License form page, select Nutanix.

5. Paste your license token in the Enter License section inside the License Key field.

» For Nutanix licenses, paste your license token in the provided fields.
» For D2iQ licenses, paste the license token in the text box.

6. Select Save.

Remove an NKP License


Remove a license via the UI.



About this task
If your license information has changed, you may need to remove an existing license from NKP to add a new one.
Only NKP administrators can remove licenses.

Procedure

1. Open the NKP UI dashboard.

2. Select Global in the workspace header drop-down.

3. In the sidebar menu, select Administration > Licensing.

4. Your existing licenses will be listed. Select Remove License on the license you would like to remove and
follow the prompts.

Commands within a kubeconfig File


This topic specifies some basic recommendations regarding the kubeconfig file related to target clusters and the --
kubeconfig=<CLUSTER_NAME>.conf flag. For more information, see Kubernetes Documentation.

For kubectl and NKP commands to run, it is often necessary to specify the environment or cluster in which you
want to run them. This also applies to commands that create, delete, or update a cluster’s resources.
There are two options:

Table 12: Options for Specifying the Target Cluster

Export an Environment Variable | Specify the Target Cluster in the Command
Export an environment variable from a cluster's kubeconfig file, which sets the environment for the commands you run after exporting it. | Specify the target cluster for one command at a time by running it with the --kubeconfig=<CLUSTER_NAME>.conf flag.
Better suited for single-cluster environments. | Better suited for multi-cluster environments.

Single-cluster Environment
In a single-cluster environment, you do not need to switch between clusters to run commands and perform operations.
However, specifying an environment for each terminal session is still necessary. This ensures that the NKP CLI runs operations on the NKP cluster and does not accidentally run them on, for example, the bootstrap cluster.
To set the environment variable for all your operations using the kubeconfig file, you must first set the variable:

• When you create a cluster, a kubeconfig file is generated automatically. Get the kubeconfig file and write it to the ${CLUSTER_NAME}.conf file:
nkp get kubeconfig -c ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf

• Set the context for your current terminal session by exporting the kubeconfig file. Repeat this for each terminal session in which you want to run commands, or use the --kubeconfig flag instead:
export KUBECONFIG=${CLUSTER_NAME}.conf
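After exporting the variable, commands in that terminal session target the exported cluster. For example, a quick sanity check (assuming the cluster is reachable):
kubectl get nodes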

Multi-cluster Environment
Having multiple clusters means switching between clusters to run operations. Nutanix recommends the following approaches:



• You can start several terminal sessions, one per cluster, and set the environment variable as shown in the single-
cluster environment example above, one time per cluster.
• You can use a single terminal session and run the commands with a flag every time. The flag specifies the target
cluster for the operation every time so that you can run the same command several times but with a different flag.

• You can use a flag to reference the target cluster. The --kubeconfig=<CLUSTER_NAME>.conf flag defines the
configuration file for the cluster that you configure and try to access.
This is the easiest way to ensure you are working on the correct cluster when operating and using multiple
clusters. If you create additional clusters and do not store the name as an environment variable, you can enter the
cluster name followed by .conf to access your cluster.
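For example, a brief sketch of targeting two different clusters from the same terminal session; the cluster names are placeholders:
kubectl get nodes --kubeconfig=production-cluster.conf
kubectl get nodes --kubeconfig=staging-cluster.conf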

Note: Ensure that you run nkp get kubeconfig for each cluster you want to create to generate a
kubeconfig file.

Storage
This document describes the model used in Kubernetes for managing persistent, cluster-scoped storage for workloads
requiring access to persistent data.
A workload on Kubernetes typically requires two types of storage:

• Ephemeral Storage
• Persistent Volume

Ephemeral Storage
Ephemeral storage, as its name implies, is cleaned up when the workload is deleted or the container crashes. The following are examples of ephemeral storage provided by Kubernetes:

Table 13: Types of Ephemeral Storage

Ephemeral Storage Type | Location
EmptyDir volume | Managed by kubelet under /var/lib/kubelet
Container logs | Typically under /var/log/containers
Container image layers | Managed by the container runtime (for example, under /var/lib/containerd)
Container writable layers | Managed by the container runtime (for example, under /var/lib/containerd)

Kubernetes automatically manages ephemeral storage and typically does not require explicit settings. However, you might need to express capacity requests for ephemeral storage so that the kubelet can use that information to ensure that each node has enough, as shown in the sketch below.
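A minimal sketch of a Pod that declares such requests and limits for ephemeral storage; the image and sizes are placeholder values:
apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-storage-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        ephemeral-storage: "1Gi"
      limits:
        ephemeral-storage: "2Gi"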

Persistent Volume
A persistent volume claim (PVC) is a storage request. A workload that requires persistent storage uses a PVC to express its request. A PVC can request a specific size and access modes (for example, a volume can be mounted once for read/write or many times read-only).
Any workload can specify a PersistentVolumeClaim. For example, a Pod may need a volume that is at least 4Gi large
or a volume mounted under /data in the container’s filesystem. If a PersistentVolume (PV) satisfies the specified
requirements in the PersistentVolumeClaim (PVC), it will be bound to the PVC before the Pod starts.



Default Storage Providers
When deploying Nutanix Kubernetes Platform (NKP) using a supported cloud provider (AWS, Azure, or GCP), NKP
automatically configures native storage drivers for the target platform. In addition, NKP deploys a default storage class (see Storage Classes) for dynamic volume creation (see Dynamic Volume Provisioning).
The following table lists the driver and default StorageClass for each supported cloud provisioner:

Table 14: Default StorageClass for Supported Cloud Provisioners

Cloud Provisioner | Version | Driver | Default Storage Class
AWS | 1.23 | aws-ebs-csi-driver | ebs-sc (The AWS CSI driver is upgraded to a new minor version in NKP 2.7; see the urgent upgrade notes of the package at https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/CHANGELOG.md#urgent-upgrade-notes.)
Nutanix | 3.0.0 | nutanix-csi-driver |
Azure | 1.29.6 | azuredisk-csi-driver | azuredisk-sc
Pre-provisioned | 2.5.0 | local-static-provisioner | localvolumeprovisioner
vSphere | 2.12.0 | vsphere-csi-driver | vsphere-raw-block-sc
Cloud Director (VCD) | 1.4.0-d2iq.0 | cloud-director-named-csi-driver | vcd-disk-sc
GCP | 1.10.3 | gcp-compute-persistent-disk-csi-driver | csi-gce-pd

Note: NKP uses the local static provisioner as the default storage provider for pre-provisioned clusters. However, localvolumeprovisioner is not suitable for production use. Use a Kubernetes CSI driver (see https://kubernetes.io/docs/concepts/storage/volumes/#volume-types) that is compatible with storage suitable for production.
You can choose from any storage option available for Kubernetes (see https://kubernetes.io/docs/concepts/storage/volumes/#volume-types). To disable the default that Konvoy deploys, set the default StorageClass localvolumeprovisioner as non-default. Then, set the newly created StorageClass to default by following the commands in the Changing the default StorageClass topic in the Kubernetes documentation (see https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/).
When a default StorageClass is specified, you can create PVCs without specifying the StorageClass. For
instance, to request a volume using the default provisioner, create a PVC with the following configurations:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi

To start the provisioning of a volume, launch a pod that references the PVC:
...
    volumeMounts:
    - mountPath: /data
      name: persistent-storage
...
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: my-pv-claim

Note: To use a StorageClass that references a storage policy when making a PVC (see https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1), specify its name in storageClassName, as shown in the sketch below. If left blank, the default StorageClass is used.
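A minimal sketch of a PVC that explicitly names a StorageClass; the class name below is a placeholder for one of the classes listed in Table 14:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-explicit-class-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 4Gi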

Change or Manage Multiple StorageClasses


The default StorageClass provisioned with Nutanix Kubernetes Platform (NKP) is acceptable for installation but unsuitable for production. If your workload has different requirements, you can create additional StorageClass types with specific configurations. You can change the default StorageClass using these steps from the Kubernetes site: Changing the default storage class.
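As a brief sketch of that procedure, the default is controlled by an annotation on the StorageClass object; the class names below are placeholders:
# Mark the current default StorageClass as non-default.
kubectl patch storageclass localvolumeprovisioner -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
# Mark your newly created StorageClass as the default.
kubectl patch storageclass my-production-sc -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'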
Ceph can also be used as Container Storage Interface (CSI) storage. For information on how to use Rook Ceph, see
Rook Ceph in NKP on page 633.

Driver Information
The following are CSI driver specifics for each infrastructure provider.

Amazon Elastic Block Store (EBS) CSI Driver


NKP EBS default StorageClass:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true" # This tells Kubernetes to make this the default storage class
  name: ebs-sc
provisioner: ebs.csi.aws.com
reclaimPolicy: Delete # Volumes are automatically reclaimed when no longer in use and PVCs are deleted
volumeBindingMode: WaitForFirstConsumer # Physical volumes are not created until a pod that uses the PVC is created; required for CSI's Topology feature
parameters:
  csi.storage.k8s.io/fstype: ext4
  type: gp3 # General Purpose SSD
NKP deploys with gp3 (general purpose SSDs) EBS volumes.



• Driver documentation: aws-ebs-csi-driver
• Volume types and pricing: volume types

Nutanix CSI Driver


NKP default storage class for Nutanix supports dynamic provisioning and static provisioning of block volumes.

• Driver documentation: Nutanix CSI Driver Configuration


• Nutanix Volumes documentation: Nutanix Creating a Storage Class - Nutanix Volumes
• Hypervisor Attached Volumes documentation: Nutanix Creating a Storage Class - Hypervisor Attached
Volumes
The CLI and UI allow you to enable or disable Hypervisor Attached volumes. The selection passes to the CSI driver's
storage class. See Manage Hypervisor.
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default-hypervisorattached-storageclass
parameters:
  csi.storage.k8s.io/fstype: <file-system-type>
  hypervisorAttached: ENABLED | DISABLED
  flashMode: ENABLED | DISABLED
  storageContainer: <storage-container-name>
  storageType: NutanixVolumes
provisioner: csi.nutanix.com
reclaimPolicy: Delete | Retain
mountOptions:
  - option1
  - option2

Azure CSI Driver


NKP deploys with StandardSSD_LRS for Azure Virtual Disks.

• Driver documentation: azuredisk-csi-driver


• Volume types and pricing: volume types
• Specifics for Azure using Pre-provisioning can be found here: Pre-provisioned Azure-only Configurations

vSphere CSI Driver


NKP default storage class for vSphere supports dynamic provisioning and static provisioning of block volumes.

• Driver documentation: VMware vSphere Container Storage Plug-in Documentation


• Specifics for using vSphere storage driver: Using vSphere Container Storage Plug-in

VMware Cloud Director


In VMware Cloud Director:

• The cloud provider component's purpose is to manage the life cycle of the load balancers and to associate Kubernetes nodes with virtual machines in the infrastructure. See Cloud provider component (CPI).
• Cluster API for VMware Cloud Director (CAPVCD) is a component that runs in a Kubernetes cluster that
connects to the VCD API. It uses the Cloud Provider Interface (CPI) to create and manage the infrastructure.



• For storage, NKP has a CSI plugin that interfaces with the CPI, creates disks dynamically, and provides the StorageClass.

Pre-provisioned CSI Driver


In a Pre-provisioned environment, NKP will also deploy a CSI-compatible driver and configure a default
StorageClass - localvolumeprovisioner. See pre-provisioned.

• Driver documentation: local-static-provisioner


NKP uses (localvolumeprovisioner) as the default storage provider for a pre-provisioned environment.
However, localvolumeprovisioner is not suitable for production use. Use an alternate compatible storage that is
suitable for production. See local-static-provisioner and Kubernetes CSI.
To disable the default that Konvoy deploys, set the default StorageClass localvolumeprovisioner as non-default.
Then, set your newly created StorageClass by following the steps in the Kubernetes documentation: See Change
the default StorageClass. You can choose from any of the storage options available for Kubernetes and make your
storage choice the default storage. See Storage choice
Ceph can also be used as CSI storage. For information on how to use Rook Ceph, see Rook Ceph in NKP on
page 633.

GCP CSI Driver


This driver allows volumes backed by Google Cloud Filestore instances to be dynamically created and mounted by
workloads.

• Driver documentation: gcp-filestore-csi-driver


• Persistent volumes and dynamic provisioning: volume types

Provisioning a Static Local Volume


You can provision a static local volume for a Nutanix Kubernetes Platform (NKP) cluster.

About this task


You can choose from any of the storage options available for Kubernetes. To disable the default that Konvoy deploys,
set the default StorageClass localvolumeprovisioner to non-default. Then, set the newly created StorageClass by
following the steps in the Kubernetes documentation (see Change the Default Storage Class).
For the Pre-provisioned infrastructure, the localvolumeprovisioner component uses the local volume static
provisioner (see https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) to manage persistent
volumes for pre-allocated disks. The volume static provisioners do this by watching the /mnt/disks folder on each
host and creating persistent volumes in the localvolumeprovisioner storage class for each disk it discovers in
this folder.
For Nutanix, see documentation topics in the Nutanix Portal: Creating a Persistent Volume Claim.

• Persistent volumes with a Filesystem volume mode are discovered if you mount them under /mnt/disks.
• Persistent volumes with a Block volume mode are discovered if you create a symbolic link to the block device in /mnt/disks, as shown in the example below.
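For example (the device names /dev/sdb1 and /dev/sdc are placeholders for your own disks):
# Filesystem volume mode: mount a formatted disk under /mnt/disks.
mkdir -p /mnt/disks/data-volume
mount /dev/sdb1 /mnt/disks/data-volume
# Block volume mode: create a symbolic link to the raw block device in /mnt/disks.
ln -s /dev/sdc /mnt/disks/block-volume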

For additional NKP documentation regarding StorageClass, see Default Storage Providers on page 33.

Note: When creating a pre-provisioned infrastructure cluster, NKP uses localvolumeprovisioner as the
default storage provider (see https://d2iq.atlassian.net/wiki/spaces/DENT/pages/29919120). However,
localvolumeprovisioner is not suitable for production use. Use Kubernetes CSI (see https://kubernetes.io/docs/concepts/storage/volumes/#volume-types) to check for compatible storage suitable for production.

Before starting, verify the following:

• You can access a Linux, macOS, or Windows computer with a supported OS version.
• You have a provisioned NKP cluster that uses the localvolumeprovisioner platform application but has not
added any other Kommander applications to the cluster yet.
This distinction between provisioning and deployment is important because some applications depend on the storage
class provided by the localvolumeprovisioner component and can fail to start if not configured.

Procedure

1. Create a pre-provisioned cluster by following the steps outlined in the pre-provisioned infrastructure topic.
As volumes are created or mounted on the nodes, the local volume provisioner detects each volume in the /mnt/
disks directory. It adds it as a persistent volume with the localvolumeprovisioner StorageClass. For more
information, see the documentation regarding Kubernetes Local Storage.

2. Create at least one volume in /mnt/disks on each host.


For example, mount a tmpfs volume.
mkdir -p /mnt/disks/example-volume && mount -t tmpfs example-volume /mnt/disks/example-volume

3. Verify the persistent volume by running the following command.


kubectl get pv

4. Claim the persistent volume using a PVC by running the following command.
cat <<EOF | kubectl create -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  storageClassName: localvolumeprovisioner
EOF

5. Reference the persistent volume claim in a pod by running the following command.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-persistent-volume
spec:
  containers:
    - name: frontend
      image: nginx
      volumeMounts:
        - name: data
          mountPath: "/var/www/html"
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: example-claim
EOF

6. Verify the persistent volume claim using the command kubectl get pvc.
Example output:
NAME            STATUS   VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS             AGE
example-claim   Bound    local-pv-4c7fc8ba   3986Mi     RWO            localvolumeprovisioner   78s

Resource Requirements
To ensure a successful Nutanix Kubernetes Platform (NKP) installation, you must meet certain resource requirements
for the control plane nodes and worker nodes. These resource requirements can be slightly different for different
infrastructure providers and license types.

Section Contents

General Resource Requirements


To install NKP with the minimum amount of resources, review the following requirements for your installation before you begin.
For more information, see Installing NKP on page 47.

Pro and Ultimate Cluster Minimum Requirements


To install NKP Pro and Ultimate clusters with the minimum amount of resources, review the following before
beginning installation.
You need at least three control plane nodes and at least four worker nodes. The specific number of
worker nodes required for your environment can vary depending on the cluster workload and size of the nodes.

Table 15: General Resource Requirements for Pro and Ultimate Clusters

Resources | Control Plane Nodes | Worker Nodes
vCPU Count | 4 | 8
Memory | 16 GB | 32 GB
Disk Volume | Approximately 80 GiB, used for /var/lib/kubelet and /var/lib/containerd | Approximately 95 GiB, used for /var/lib/kubelet and /var/lib/containerd
Root Volume | Disk usage must be below 85% | Disk usage must be below 85%

Note: If you use the instructions to create a cluster using the NKP default settings without any edits to configuration
files or additional flags, your cluster is deployed on an Ubuntu 20.04 OS image (see Supported Infrastructure
Operating Systems on page 12) with three control plane nodes and four worker nodes, which match the
requirements above.

Starter Cluster Minimum Requirements
To install the NKP Starter Cluster with the minimum amount of resources, review the following list before beginning
installation. The specific number of worker nodes required for your environment can vary depending on the cluster
workload and size of the nodes. The suggested two (2) worker nodes are the absolute minimum needed by NKP components.

Note: The Starter License is supported exclusively with the Nutanix Infrastructure.

Table 16: Management Cluster

Resources | Control Plane Nodes | Worker Nodes
vCPU Count | 2 | 4
Memory | 6 GB | 4 GB
Disk Volume | Approximately 80 GiB, used for /var/lib/kubelet and /var/lib/containerd | Approximately 80 GiB, used for /var/lib/kubelet and /var/lib/containerd
Root Volume | Disk usage must be below 85% | Disk usage must be below 85%

Non-Default Flags in the CLI:
--control-plane-vcpus 2 \
--control-plane-memory 6 \
--worker-replicas 2 \
--worker-vcpus 4 \
--worker-memory 4
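The following is a rough sketch of how those flags might be passed when creating the Starter management cluster; the flag names come from the table above, but combining them into a single command line like this is an assumption, and your exact invocation may differ:
# Starter management cluster sized per the table above (Nutanix infrastructure).
nkp create cluster nutanix \
  --control-plane-vcpus 2 \
  --control-plane-memory 6 \
  --worker-replicas 2 \
  --worker-vcpus 4 \
  --worker-memory 4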

Table 17: Managed Cluster

Resources | Control Plane Nodes | Worker Nodes
vCPU Count | 2 | 3
Memory | 4 GB | 3 GB
Disk Volume | Approximately 80 GiB, used for /var/lib/kubelet and /var/lib/containerd | Approximately 80 GiB, used for /var/lib/kubelet and /var/lib/containerd
Root Volume | Disk usage must be below 85% | Disk usage must be below 85%

Non-Default Flags in the CLI:
--control-plane-vcpus 2 \
--control-plane-memory 4 \
--worker-replicas 2 \
--worker-vcpus 3 \
--worker-memory 3

Infrastructure Provider-Specific Requirements


For specific infrastructure providers, additional requirements might apply. For example, NKP on Azure defaults to
deploying a Standard_D4s_v3 VM with a 128 GiB volume for the OS and an 80 GiB volume for etcd storage, which
meets the above requirements. For specific additional resource information:

• Pre-provisioned Installation Options on page 65

• AWS Installation Options on page 156
• Azure Installation Options on page 309
• vSphere Installation Options on page 249
• VMware Cloud Director Installation Options on page 309
• EKS Installation Options on page 230
• AKS Installation Options on page 319

Kommander Component Requirements

• Management Cluster Application Requirements


• Workspace Platform Application Defaults and Resource Requirements

Managed Cluster Requirements


To create additional managed clusters in that environment, ensure that you have at least the minimum recommended resources
below.

Minimum Recommendation for Managed Clusters


Worker Nodes: For a default installation, at least four worker nodes with:

• 8 CPU cores each


• 12 GiB of memory
• A storage class and disk mounts that can accommodate at least four persistent volumes

Note: Four worker nodes are required to support upgrades to the rook-ceph platform application. rook-ceph
supports the logging stack, the velero backup tool, and NKP Insights. If you have disabled the rook-ceph platform
application, only three worker nodes are required.

Control Plane Nodes:

• 8 CPU cores each


• 12 GiB of memory
Cluster Needs:

• Default Storage Class and four volumes of 32GiB, 32GiB, 2GiB, and 100GiB or the ability to create those
volumes depending on the Storage Class:
$ kubectl get pv -A
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                                           STORAGECLASS   REASON   AGE
pvc-08de8c06-bd66-40a3-9dd4-b0aece8ccbe8   32Gi       RWO            Delete           Bound    kommander-default-workspace/kubecost-cost-analyzer                              ebs-sc                  124m
pvc-64552486-7f4c-476a-a35d-19432b3931af   32Gi       RWO            Delete           Bound    kommander-default-workspace/kubecost-prometheus-server                          ebs-sc                  124m
pvc-972c3ee3-20bd-449b-84d9-25b7a06a6630   2Gi        RWO            Delete           Bound    kommander-default-workspace/kubecost-prometheus-alertmanager                    ebs-sc                  124m
pvc-98ab93f1-2c2f-46b6-b7d3-505c55437fbb   100Gi      RWO            Delete           Bound    kommander-default-workspace/db-prometheus-kube-prometheus-stack-prometheus-0    ebs-sc                  123m

Note: Actual workloads might demand more resources depending on the usage.

Management Cluster Application Requirements


This topic only details requirements for management cluster-specific applications in the Kommander component. For
the list of all platform applications, see Platform Application Configuration Requirements.
The following table describes the workspace platform applications specific to the management cluster, minimum
resource requirements, minimum persistent storage requirements, and default priority class values:

Common Name | App ID (for App versions, see the Release Notes) | Deployed by default | Minimum Resources Suggested | Minimum Persistent Storage Required | Default Priority Class
Centralized Grafana* | centralized-grafana | Yes | cpu: 200m, memory: 100Mi | - | NKP Critical (100002000)
Centralized Kubecost* | centralized-kubecost | Yes | cpu: 1200m, memory: 4151Mi | # of PVs: 1; PV sizes: 32Gi | NKP High (100001000)
Chartmuseum | chartmuseum | Yes | - | # of PVs: 1; PV sizes: 2Gi | NKP Critical (100002000)
Dex | dex | Yes | cpu: 100m, memory: 50Mi | - | NKP Critical (100002000)
Dex Authenticator | dex-k8s-authenticator | Yes | cpu: 100m, memory: 128Mi | - | NKP High (100001000)
NKP Insights Management | NKP-insights-management | No | cpu: 100m, memory: 128Mi | - | NKP Critical (100002000)
Karma* | karma | Yes | - | - | NKP Critical (100002000)
Kommander | kommander | No | cpu: 1100m, memory: 896Mi | - | NKP Critical (100002000)
Kommander AppManagement | kommander-appmanagement | Yes | cpu: 300m, memory: 256Mi | - | NKP Critical (100002000)
Kommander Flux | kommander-flux | Yes | cpu: 5000m, memory: 5Gi | - | NKP Critical (100002000)
Kommander UI | kommander-ui | No | cpu: 100m, memory: 256Mi | - | NKP Critical (100002000)
Kubefed | kubefed | Yes | cpu: 300m, memory: 192Mi | - | NKP Critical (100002000)
Kubetunnel | kubetunnel | Yes | cpu: 200m, memory: 148Mi | - | NKP Critical (100002000)
Thanos* | thanos | Yes | - | - | NKP Critical (100002000)
Traefik ForwardAuth | traefik-forward-auth-mgmt | Yes | cpu: 100m, memory: 128Mi | - | NKP Critical (100002000)

Note: Applications with an asterisk (“*”) are NKP Ultimate-only apps deployed by default for NKP Ultimate customers
only.

Workspace Platform Application Defaults and Resource Requirements


This topic lists the platform applications available in Nutanix Kubernetes Platform (NKP) with the Kommander
component. Some are deployed by default on attachment, while others require manual installation by deploying
platform applications (see Platform Applications) through the CLI under Cluster Operations.
Workspace platform applications require more resources than solely deploying or attaching clusters to a workspace.
Your cluster must have sufficient resources when deploying or attaching to ensure the platform services are installed
successfully.
The following table describes all the workspace platform applications that are available to the clusters in a workspace,
minimum resource requirements, whether they are enabled by default, and their default priority classes:

Table 18: Available Workspace Platform Applications

Common Name | App ID (for App versions, see the Release Notes) | Deployed by default | Minimum Resources Suggested | Minimum Persistent Storage Required | Default Priority Class
Cert Manager | cert-manager | Yes | cpu: 10m, memory: 32Mi | - | system-cluster-critical (2000000000)
External DNS | external-dns | No | - | - | NKP High (100001000)
Fluent Bit | fluent-bit | No | cpu: 350m, memory: 350Mi | - | NKP Critical (100002000)
Gatekeeper | gatekeeper | Yes | cpu: 300m, memory: 768Mi | - | system-cluster-critical (2000000000)
Grafana | grafana-logging | No | cpu: 200m, memory: 100Mi | - | NKP Critical (100002000)
Loki | grafana-loki | No | - | # of PVs: 8; PV sizes: 10Gi x 8 (total: 80Gi) | NKP Critical (100002000)
Istio | istio | No | cpu: 1270m, memory: 4500Mi | - | NKP Critical (100002000)
Jaeger | jaeger | No | - | - | NKP High (100001000)
Kiali | kiali | No | cpu: 20m, memory: 128Mi | - | NKP High (100001000)
Knative | knative | No | cpu: 610m, memory: 400Mi | - | NKP High (100001000)
Kube OIDC Proxy | kube-oidc-proxy | Yes | - | - | NKP Critical (100002000)
Kube Prometheus Stack | kube-prometheus-stack | Yes | cpu: 1300m, memory: 4300Mi | # of PVs: 1; PV sizes: 100Gi | NKP Critical (100002000)
Kubecost | kubecost | Yes | cpu: 700m, memory: 1700Mi | # of PVs: 3; PV sizes: 2Gi, 32Gi, 32Gi (total: 66Gi) | NKP High (100001000)
Kubernetes Dashboard | kubernetes-dashboard | Yes | cpu: 250m, memory: 300Mi | - | NKP High (100001000)
Logging Operator | logging-operator | No | cpu: 350m * # of nodes + 600m, memory: 228Mi + 350Mi * # of nodes | # of PVs: 1; PV sizes: 10Gi | NKP Critical (100002000)
NFS Server Provisioner | nfs-server-provisioner | No | - | # of PVs: 1; PV size: 100Gi | NKP High (100001000)
NVIDIA GPU Operator | nvidia-gpu-operator | No | cpu: 100m, memory: 128Mi | - | system-cluster-critical (2000000000)
Prometheus Adapter | prometheus-adapter | Yes | cpu: 1000m, memory: 1000Mi | - | NKP Critical (100002000)
Reloader | reloader | Yes | cpu: 100m, memory: 128Mi | - | NKP High (100001000)
Rook Ceph | rook-ceph | Yes | cpu: 100m, memory: 128Mi | - | system-cluster-critical (2000000000)
Rook Ceph Cluster | rook-ceph-cluster | Yes | cpu: 2500m, memory: 8Gi | # of PVs: 4; PV sizes: 40Gi | NKP Critical (100002000), system-cluster-critical (2000000000), system-node-critical
Traefik | traefik | Yes | cpu: 500m | - | NKP Critical (100002000)
Traefik ForwardAuth | traefik-forward-auth | Yes | cpu: 100m, memory: 128Mi | - | NKP Critical (100002000)
Velero | velero | No | cpu: 1000m, memory: 1024Mi | - | NKP Critical (100002000)

• Currently, NKP only supports a single deployment of cert-manager per cluster. Because of this, cert-
manager cannot be installed on any Konvoy managed clusters or clusters with cert-manager pre-installed.

• Only a single deployment of traefik per cluster is supported.


• NKP automatically manages the deployment of traefik-forward-auth and kube-oidc-proxy when clusters
are attached to the workspace. These applications are not shown in the NKP UI.
• Applications are enabled in NKP and then deployed to attached clusters. To confirm that the application you
enabled is deployed successfully and verified through the CLI, see Deployment of Catalog Applications in
Workspaces on page 410.

Prerequisites for Installation


Before you create a Nutanix Kubernetes Platform (NKP) image and deploy the initial NKP cluster, the operator's
machine must be either a Linux-based or MacOS machine of a supported version. Ensure you have met all the
prerequisites for the Konvoy and Kommander components.

• Prerequisites for the Konvoy component:

• Non-air-gapped (all environments)


• Air-gapped (additional prerequisites)

• Prerequisites for the Kommander component:

• Non-air-gapped (all environments)


• Air-gapped only (additional prerequisites)

Note: Additional prerequisites are necessary for air-gapped; verify that all the non-air-gapped conditions are met and
then add any additional air-gapped prerequisites listed.

If not already done, see the documentation for:

• Resource Requirements on page 38


• Installing NKP on page 47

Prerequisites for the Konvoy Component


For NKP and Konvoy Image Builder to run, the operator machine requires:
Non-air-gapped (all environments)
In a non-air-gapped environment, your environment has two-way access to and from the Internet. The prerequisites
required if installing in a non-air-gapped environment are as follows:

• An x86_64-based Linux or MacOS machine.


• The NKP binary, downloaded onto the bastion (see Downloading NKP on page 16).
To check which version of NKP you installed for compatibility reasons, run the nkp version command.
• A container engine or runtime is required to install NKP and bootstrap the cluster:

• Docker container engine version 18.09.2 or 20.10.0 installed for Linux or MacOS. For more information, see https://docs.docker.com/get-docker/.
• Podman Version 4.0 or later for Linux. For more information, see https://podman.io/getting-started/installation. For host requirements, see https://kind.sigs.k8s.io/docs/user/rootless/#host-requirements.
• kubectl (see https://kubernetes.io/docs/tasks/tools/#kubectl) for interacting with the cluster running on the host where the NKP Konvoy CLI runs.
• Konvoy Image Builder (KIB); see Konvoy Image Builder.
• A valid provider account with the credentials configured.

• AWS credentials (see https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html) that can manage CloudFormation Stacks, IAM Policies, IAM Roles, and IAM Instance Profiles.
• Azure credentials (see https://github.com/kubernetes-sigs/cluster-api-provider-azure/blob/master/docs/book/src/topics/getting-started.md#prerequisites).
• CLI tooling of the cloud provider used to deploy NKP:

• aws-cli (see https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html)
• googlecloud-cli (see https://cloud.google.com/sdk/docs/install)
• azure-cli (see https://learn.microsoft.com/en-us/cli/azure/install-azure-cli)

• Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS): A self-managed management cluster is required:

• If you follow the instructions in the Basic Installations by Infrastructure on page 50 topic for installing
NKP, use the --self-managed flag for a self-managed cluster. If you use the instructions in Custom
Installation and Additional Infrastructure Tools, ensure that you perform the self-managed process on your
new cluster:
• A self-managed AWS cluster.
• A self-managed Azure cluster.
• Pre-provisioned only:

• Pre-provisioned hosts with SSH access enabled.


• An unencrypted SSH private key, whose public key is configured on the hosts.
• Pre-provisioned Override Files if needed.
• vSphere only:

• A valid VMware vSphere account with credentials configured.


Air-gapped Only (additional prerequisites)
In an air-gapped environment, your environment is isolated from unsecured networks, like the Internet, and therefore requires additional considerations for installation. Configure the
following additional prerequisites if installing in an air-gapped environment:

• Linux machine (bastion) that has access to the existing Virtual Private Cloud (VPC) instead of an x86_64-based
Linux or macOS machine.
• Ability to download artifacts from the internet and then copy those onto your Bastion machine.
• Download the complete NKP air-gapped bundle nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz
for this release (see Downloading NKP on page 16).

• To use a local registry, whether in an air-gapped or non-air-gapped environment, download and
extract the complete NKP air-gapped bundle for this release (that is, nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz) to load your registry.

• An existing local registry to seed the air-gapped environment.

Prerequisites for the Kommander Component


Non-air-gapped (all environments)
In a non-air-gapped environment, your environment has two-way access to and from the Internet. Below are the
prerequisites required if installing in a non-air-gapped environment:

• The version of the CLI that matches the NKP version you want to install.
• Review the Management Cluster Application Requirements and Workspace Platform Application Defaults
and Resource Requirements to ensure your cluster has sufficient resources.
• Ensure you have a default StorageClass configured (see Default Storage Providers on page 33); the Konvoy
component is responsible for configuring one.
• A load balancer to route external traffic.
In cloud environments, this information is provided by your cloud provider. You can configure MetalLB for on-
premises and vSphere deployments. It is also possible to use Virtual IP. For more details, see Load Balancing on
page 602.

• Ensure your firewall allows connections to github.com.
• For pre-provisioned on-premises environments:

• Ensure you meet the storage requirements (see Storage), the default storage class requirements, and the Workspace
Platform Application Defaults and Resource Requirements on page 42.
• Ensure you have added at least 40 GB of raw storage to your clusters' worker nodes.
Air-gapped Only (additional prerequisites)
In an air-gapped environment, your environment is isolated from unsecured networks, like the Internet, and therefore
requires additional considerations for installation. Below are the additional prerequisites required if installing in an
air-gapped environment:

• A local registry (see Registry Mirror Tools) containing all the necessary installation images, including
the Kommander images, which are included in the air-gapped bundle above, nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz. To learn how to push the required images to this registry and load the
registry for each provider, see the Basic Installations by Infrastructure on page 50 section.
• Connectivity with the clusters attached to the management cluster:

• Both management and attached clusters must connect to the registry.


• The management cluster must connect to all the attached cluster’s API servers.
• The management cluster must connect to any load balancers created for platform services on the management
cluster.

Note: If you want to customize your cluster’s domain or certificate, ensure you review the respective documentation
sections:

• Configure custom domains and certificates during the Kommander installation.


• Configure custom domains and certificates after Kommander has been installed.

• For pre-provisioned environments:

• Ensure you meet the storage requirements.


• Ensure you have added at least 40 GB of raw storage to each of your cluster worker nodes.

Installing NKP
The topic lists the basic package requirements for your environment to perform a successful installation of
Nutanix Kubernetes Platform (NKP). Next, install NKP, and then you can begin any custom configurations
based on your environment.

About this task


Perform the following steps to install NKP:

Procedure

1. Install the required packages.


In most cases, you can install the required software using your preferred package manager. For example, on a
macOS computer, use Homebrew (see Homebrew Documentation) to install kubectl and the aws command-line utility by running the following command.
brew install kubernetes-cli awscli
Replace awscli with the CLI package of your infrastructure provider.

2. Check the Kubernetes client version.


Many important Kubernetes functions do not work if your client is outdated. You can verify the version of
kubectl you have installed to check whether it is supported by running the following command.
kubectl version --client

3. Check the supported Kubernetes versions after finding your version with the preceding command.

4. For air-gapped environments, create a bastion host for the cluster nodes to use within the air-gapped network.
The bastion host needs access to a local registry instead of an Internet connection to export images. The
recommended template naming pattern is ../folder-name/NKP-e2e-bastion-template or similar.
Each infrastructure provider has its own set of bastion host instructions. For specific details, see the respective
provider's site: Azure (see https://learn.microsoft.com/en-us/azure/bastion/quickstart-host-portal), AWS (see
https://aws.amazon.com/solutions/implementations/linux-bastion/), GCP (see https://blogs.vmware.com/cloud/2021/06/02/intro-google-cloud-vmware-engine-bastion-host-access-iap/), or vSphere (see https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-6975426F-56D0-4FE2-8A58-580B40D2F667.html).

5. Create NKP machine images by downloading the Konvoy Image Builder and extracting it.

6. Download NKP. For more information, see Downloading NKP on page 16.

7. Verify that you have valid cloud provider security credentials to deploy the cluster.

Note: This step regarding the provider security credentials is not required if you install NKP on an on-premises
environment. For information about installing NKP in an on-premises environment, see Pre-provisioned
Infrastructure on page 695.

8. Install the Konvoy component depending on which infrastructure you have. For more information, see Basic
Installations by Infrastructure on page 50. To use customized YAML and other advanced features, see
Custom Installation and Infrastructure Tools on page 644.

9. Configure the Kommander component by initializing the configuration file under the Kommander Installer
Configuration File component of NKP.

10. (Optional) Test operations by deploying a sample application, customizing the cluster configuration, and
checking the status of cluster components.

11. Initialize the configuration file under the Kommander Installer Configuration File component of NKP.

What to do next
Here are some links to the NKP installation-specific information:

• To view supported Kubernetes versions, see Upgrade Compatibility Tables.


• To view the list of NKP versions and compatibility software, see Konvoy Image Builder .
• For details about default storage providers and drivers, see Default Storage Providers.

• For supported FIPS builds, see Deploying a Cluster in FIPS mode.

4
BASIC INSTALLATIONS BY INFRASTRUCTURE
This topic provides basic installation instructions for your infrastructure using combinations of providers and other
variables.
A basic cluster contains nodes and a running instance of Nutanix Kubernetes Platform (NKP) but is not yet a
production cluster. While you might not be ready to deploy a workload after completing the basic installation
procedures, you will familiarize yourself with NKP and view the cluster structure.

Note: For custom installation procedures, see Custom Installation and Additional Infrastructure Tools.

Production cluster configuration allows you to deploy and enable the cluster management applications and your
workload applications that you need for production operations. For more information, see Cluster Operations
Management on page 339.
For virtualized environments, NKP can provision the virtual machines necessary to run Kubernetes clusters. If you
want to allow NKP to manage your infrastructure, select your supported infrastructure provider installation choices
below.

Note: If you want to provision your nodes in a bare metal environment or manually, see Pre-provisioned
Infrastructure on page 695.

If not already done, perform the procedures in the following topics:

• Resource Requirements on page 38


• Prerequisites for Installation on page 44
• Installing NKP on page 47

Section Contents
Scenario-based installation options:

Nutanix Installation Options


This chapter describes the installation options for environments on the Nutanix infrastructure.
For additional options to customize YAML Ain't Markup Language (YAML), see Custom Installation and
Infrastructure Tools on page 644.
To determine whether your OS is supported, see Supported Infrastructure Operating Systems on page 12.
The process of configuring Nutanix and NKP comprises the following steps:
1. Configure Nutanix to provide the required elements described in Nutanix Infrastructure Prerequisites on
page 657.
2. For air-gapped environments, create a bastion VM host (see Creating a Bastion Host on page 652).
3. Create a base OS image (see Nutanix Base OS Image Requirements on page 663).

4. Create a new self-managed cluster.
5. Verify and log on to the UI.

Section Contents

Nutanix Basic Prerequisites


This topic contains the prerequisites specific to the Nutanix infrastructure.
These prerequisites are above and beyond the ones mentioned in Prerequisites for Installation on page 44. Fulfilling
the prerequisites for Nutanix involves completing the following two tasks:
1. NKP Prerequisites on page 51
2. Nutanix Prerequisites on page 51

NKP Prerequisites
Before using NKP to create a Nutanix cluster, verify that you have the following:

• An x86_64-based Linux or macOS machine.


• Download the NKP binaries and NKP Image Builder (NIB). For more information, see Downloading NKP on
page 16.
• Install a container engine or runtime to install NKP and bootstrap:

• Docker container engine version 18.09.2 or 20.10.0 installed for Linux or MacOS. For more information, see https://docs.docker.com/get-docker/.
• Podman Version 4.0 or later for Linux. For more information, see https://podman.io/getting-started/installation. For host requirements, see https://kind.sigs.k8s.io/docs/user/rootless/#host-requirements.
• A registry is required in your environment.
• Install kubectl 1.28.x to interact with the running cluster on the host where the NKP Konvoy CLI runs. For more information, see https://kubernetes.io/docs/tasks/tools/#kubectl.
• A valid Nutanix account with credentials configured.

Note:

• NKP uses the Nutanix CSI Volume Driver (CSI) 3.0 as the default storage provider. For more
information on the default storage providers, see Default Storage Providers on page 33.
• For compatible storage suitable for production, choose from any of the storage options available for
Kubernetes. For more information, see https://kubernetes.io/docs/concepts/storage/volumes/#volume-types.
• To turn off the default StorageClass that Konvoy deploys:
1. Set the default StorageClass as non-default.
2. Set your newly created StorageClass as default.
For information on changing the default storage class, see https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/.

Nutanix Prerequisites
Before installing, verify that your environment meets the following basic requirements:

• Nutanix Prism Central version 2024.1, with role credentials configured with administrator privileges.
• AOS 6.5, 6.8+
• Configure valid values in Prism Central (see Prism Central Settings (Infrastructure)).

• Pre-designated subnets.
• A subnet with unused IP addresses. The number of IP addresses required is computed as follows:

• One IP address for each node in the Kubernetes cluster. The default cluster size has three control plane
nodes and four worker nodes. So, this requires seven IP addresses.
• One IP address in the same Classless Inter-Domain Routing (CIDR) as the subnet but not part of the
address pool for the Kubernetes API server (kubevip).
• One IP address in the same CIDR as the subnet but not part of an address pool for the Loadbalancer service
used by Traefik (metallb).
• Additional IP addresses may be assigned to accommodate other services such as NDK, that also need the
Loadbalancer service used by Metallb. For more information, see the Prerequisites and Limitations section
in the Nutanix Data Services for Kubernetes guide at https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Data-Services-for-Kubernetes-v1_1:top-prerequisites-k8s-c.html.
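For example, a default cluster with three control plane nodes and four worker nodes needs 7 node IP addresses from the subnet's address pool, plus 1 address for the Kubernetes API server (kubevip) and 1 address for the Traefik load balancer service (metallb) in the same CIDR but outside the pool, for a total of 9 unused IP addresses before any additional services such as NDK are counted.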
• For air-gapped environments, a bastion VM host template with access to a configured local registry. The
recommended template naming pattern is ../folder-name/NKP-e2e-bastion-template or similar. Each
infrastructure provider has its own bastion host instructions (see Creating a Bastion Host on page 652.
• Access to a bastion VM or other network-connected host running NKP Image Builder.

Note: Nutanix provides a full pre-built image if you do not want to create your own from a base OS image.

• You must be able to reach the Nutanix endpoint where the Konvoy CLI runs.
• Note: For air-gapped, ensure you download the bundle nkp-air-gapped-
bundle_v2.12.0_linux_amd64.tar.gz and extract the tar file to a local directory. For more
information, see Downloading NKP on page 16.
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz

Note: For troubleshooting or additional information, see Nutanix Knowledge Base.

Prism Central Credential Management


Create the required credentials for Nutanix Prism Central (PC).
A Nutanix Kubernetes Platform (NKP) infrastructure cluster uses Prism Central credentials for the following:
1. To manage the cluster, such as listing subnets and other infrastructure and creating VMs in Prism Central; used by the Cluster API Provider Nutanix Cloud Infrastructure (CAPX) infrastructure provider.
2. To manage persistent storage; used by the Nutanix CSI provider.
3. To discover node metadata; used by the Nutanix Cloud Controller Manager (CCM) provider.
Prism Central (PC) credentials are required to authenticate to the PC APIs. CAPX currently supports two mechanisms
for assigning the required credentials.

• Credentials injected into the CAPX manager deployment.


• Workload cluster-specific credentials.

For examples, see Credential Management at https://opendocs.nutanix.com/capx/v1.5.x/credential_management/.

Injected Credentials

By default, credentials are injected into the CAPX manager deployment when CAPX is initialized. For information
about getting started with CAPX, see Getting Started at https://opendocs.nutanix.com/capx/v1.5.x/getting_started/.
Upon initialization, a nutanix-creds secret is automatically created in the capx-system namespace. This secret
contains the values specified in the NUTANIX_USER and NUTANIX_PASSWORD parameters.
The nutanix-creds secret is used for workload cluster deployments if no other credentials are supplied.
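As a quick check, you can confirm that the injected secret exists after initialization:
# The nutanix-creds secret is created in the capx-system namespace when CAPX is initialized.
kubectl -n capx-system get secret nutanix-creds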

Workload Cluster Credentials

Users can override the credentials injected in CAPX manager deployment by supplying a credential specific to a
workload cluster. See Credentials injected into the CAPX manager deployment in https://opendocs.nutanix.com/
capx/v1.5.x/credential_management/#credentials-injected-into-the-capx-manager-deployment. The
credentials are provided by creating a secret in the same namespace as the NutanixCluster namespace.
The secret is referenced by adding a credentialRef inside the prismCentral attribute contained in
the NutanixCluster. See Prism Central Admin Center Guide. The secret is also deleted when the
NutanixCluster is deleted.

Note: There is a 1:1 relation between the secret and the NutanixCluster object.
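The following is a minimal sketch of how a workload cluster references its own credential secret; the names, namespace, and endpoint are placeholders, and the exact field layout should be confirmed against the CAPX Credential Management documentation linked above:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: NutanixCluster
metadata:
  name: my-workload-cluster
  namespace: my-workload-namespace
spec:
  prismCentral:
    address: prism-central.example.com   # Prism Central endpoint (placeholder)
    port: 9440
    credentialRef:                       # references a Secret in the same namespace
      kind: Secret
      name: my-workload-cluster-pc-credentials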

Prism Central Role


When provisioning Kubernetes clusters with NKP on Nutanix infrastructure, a pre-defined role contains the
minimum permissions required for NKP to deploy clusters. The minimum required CAPX
permissions for domain users are found in the topic User Requirements.
An NKP Nutanix cluster uses Prism Central credentials for three components:
1. To manage the cluster, for actions such as listing subnets and other infrastructure and creating VMs in Prism Central; used by the Cluster API Provider Nutanix Cloud Infrastructure (CAPX) infrastructure provider.
2. To manage persistent storage; used by the Nutanix Container Storage Interface (CSI) provider.
3. To discover node metadata; used by the Nutanix Cloud Controller Manager (CCM) provider.

Prism Central Pre-defined Role Permissions

This table contains the permissions that are pre-defined for the Kubernetes Infrastructure Provisions role in
Prism Central.

Entity | Role Permissions
AHV VM | Create Virtual Machine, Create Virtual Machine Disk, Delete Virtual Machine, Delete Virtual Machine Disk, Update Virtual Machine, Update Virtual Machine Project, View Virtual Machine
Category | Create Or Update Name Category, Create Or Update Value Category, Delete Name Category, Delete Value Category, View Name Category, View Value Category
Category Mapping | Create Category Mapping, Delete Category Mapping, Update Category Mapping, View Category Mapping
Cluster | View Cluster
Image | Create Image, Delete Image, View Image
Project | View Project
Subnet | View Subnet
Configuring the Role with an Authorization Policy


When provisioning Kubernetes clusters with NKP on Nutanix infrastructure, a pre-defined role that contains
the minimum permissions to deploy clusters is also provisioned.

About this task


On the Kubernetes Infrastructure Provision Role Details screen, you assign users to the system-defined role
by creating an authorization policy. For more information, see Configuring an Authorization Policy in the Security
Guide.

Procedure

1. Log in to Prism Central as an administrator.

2. In the Application Switcher, select Admin Center.

3. Select IAM and go to Authorization Policies.

4. To create an authorization policy, select Create New Authorization Policy.


The Create New Authorization Policy window appears.

5. In the Choose Role step, enter a role name in the Select the role to add to this policy field and select
Next.
You can enter any built-in or custom roles.

6. In the Define Scope step, select one of the following.

» Full Access: provides all existing users access to all entity types in the associated role.
» Configure Access: provides you with the option to configure the entity types and instances for the added
users in the associated role.

7. Click Next.

8. In the Assign Users step, do the following.

» From the dropdown list, select Local User to add a local user or group to the policy. Search a user or group
by typing the first few letters of the name in the text field.
» From the dropdown list, select the available directory to add a directory user or group. Search a user or group
by typing the first few letters of the name in the text field.

9. Click Save.

Note: To display role permissions for any built-in role, see Displaying Role Permissions. in the Security
Guide.

The authorization policy configurations are saved and the authorization policy is listed in the Authorization
Policies window.

BaseOS Image Requirements


For the NKP Starter license tier, use the pre-built Rocky Linux 9.4 image downloaded along with the binary file. The
downloaded image must then be uploaded to the Prism Central images folder.
The base OS image is used by Nutanix Kubernetes Platform (NKP) Image Builder (NIB) to create a custom image.
For a base OS image, you have two choices:
1. Use the pre-built Rocky Linux 9.4 image downloaded from the portal.
2. Create your custom image for Rocky Linux 9.4 or Ubuntu 22.04. If not using the out-of-box image, see the topics
in the Custom Installation section Create the OS Image for Prism Central or Create the Air-gapped OS
Image for Prism Central.
Starter license level workload clusters are only licensed to use Nutanix pre-built images.
For Nutanix, the Kubernetes Infrastructure Provision role in Prism Central is required as a minimum permission set
for managing clusters. You can also use the Administrator role with administrator privileges.
The BaseOS requirements are as follows:

• Configure Prism Central. For more information, see Prism Central Admin Center Guide.
• Upload the image downloaded with the NKP binary to the Prism Central images folder.
• Configure the network by downloading NKP Image Builder (NIB) and installing packages to activate the network.

• Install a container engine or runtime to install NKP and bootstrap:

• Docker container engine version 18.09.2 or 20.10.0 installed for Linux or MacOS. For more information, see https://docs.docker.com/get-docker/.
• Podman Version 4.0 or later for Linux. For more information, see https://podman.io/getting-started/installation. For host requirements, see https://kind.sigs.k8s.io/docs/user/rootless/#host-requirements.

Migrating VMs from VLAN to OVN


Describes how to create and migrate a subnet.

About this task


Migrating Virtual Machines (VMs) from basic VLAN to OVN VLAN is not done through atlas_cli, which is
recommended by other projects in Nutanix.
Some subnets reserved by Kubernetes can prevent proper cluster deployment if you unknowingly configure Nutanix
Kubernetes Platform (NKP) so that the Node subnet collides with either the Pod or Service subnet. Ensure your
subnets do not overlap with your host subnet because the subnets cannot be changed after cluster creation.

Note: The default subnets used in NKP are:


spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12

The existing VLAN implementation is basic VLAN, whereas advanced VLAN uses OVN as the control plane
instead of Acropolis. The subnet creation workflow is from Prism Central (PC) rather than Prism Element (PE).
Subnet creation can be done using the API or through the UI.
Procedure

1. Navigate to PC Settings > Network Controller.

2. Select the option next to Use the VLAN migrate workflow to convert VLAN Basic subnets to Network
Controller managed VLAN Subnets.

3. In the NKP UI, select Create Subnet.

4. Under Advanced Configuration, remove the check from the checkbox next to VLAN Basic Networking to
change from Basic to Advanced OVN.

5. Modify the subnet specification in the control plane and worker nodes to use the new subnet.
kubectl edit cluster <clustername>
CAPX will roll out the new control plane and worker nodes in the new subnet and destroy the old ones.

Note: You can choose Basic or Advanced OVN when creating the subnet(s) you used during cluster creation. If
you created the cluster with basic, you can migrate to OVN.

To modify the service subnet, add or edit the configmap. See the topic Managing Subnets and Pods for more
details.

Nutanix Non-Air-gapped Installation
This topic provides instructions on how to install NKP in a Nutanix non-air-gapped environment.
For additional options to customize YAML Ain't Markup Language (YAML), see Custom Installation and
Infrastructure Tools on page 644.
If not already done, perform the procedures in the following topics:

• Resource Requirements on page 38


• Prerequisites for Installation on page 44
• Installing NKP on page 47

Nutanix Non-Air-gapped: Installing NKP


Create a Nutanix cluster and install the UI in a non-air-gapped environment.

About this task


Use this procedure to create a self-managed Management cluster with NKP. A self-managed cluster refers to one in
which the CAPI resources and controllers that describe and manage it are running on the same cluster that they are
managing.

Note: For Virtual Private Cloud (VPC) installation, see the topic Nutanix with VPC Creating a New Cluster.

Note: NKP uses the Nutanix CSI driver as the default storage provider. For more information see, Default Storage
Providers on page 33.

Note: NKP uses a CSI storage container on your Prism Element (PE). The CSI Storage Container image names must
be the same for every PE environment in which you deploy an NKP cluster.

Before you begin


Decide on your Base OS image selection. See BaseOS Image Requirements in the Prerequisites section.
Specify a name for your cluster. The cluster name must only contain the following characters: a-z, 0-9, ., and -.
Cluster creation will fail if the name has capital letters. For more instructions on naming, see Object Names and IDs
at https://kubernetes.io/docs/concepts/overview/working-with-objects/names/.
The installation prompt will pull information from the information you have configured in Prism Central (PC) and
Prism Element (PE), such as Cluster name, Storage Containers, and Images.

Procedure

1. Set the environment variable for the cluster name using the following command.
export CLUSTER_NAME=<my-nutanix-cluster>

2. Export Nutanix PC credentials.


export NUTANIX_USER=<user>
export NUTANIX_PASSWORD=<password>

3. Ensure your subnets do not overlap with your host subnet because they cannot be changed after cluster creation.
If you need different Kubernetes subnets, you must set them at cluster creation. The default subnets used in NKP
are below.
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
For more information, see Managing Subnets and Pods on page 651.

4. Create a Kubernetes cluster. The following example shows a standard configuration.


nkp create cluster nutanix
Throughout the prompt-based CLI installation of NKP, there will be options for navigating the screens.
Navigation inside the pages is shown in the legend at the bottom of each screen.

» Press 'ctrl+p' to go to the previous page.


» Press 'ctrl+n' to go to the next page.
» Press 'ctrl+c' to Quit.
» Press 'shift+tab' to move up.
» Press 'tab' to move down.

5. Enter your Nutanix Prism Central details. Required fields are denoted with a red asterisk (*). Other fields are
optional.

a. Enter your Prism Central Endpoint in the following prompt: Prism Central Endpoint: >https://
b. > Prism Central Username: Enter your username. For example, admin.
c. > Prism Central Password: Enter your password.
d. Enter yes or no for the prompt Insecure Mode
e. (Optional) Enter trust information in the prompt for Additional Trust Bundle: A PEM file as
base64 encoded string

f. Project: Select the project name from the generated list.


g. Prism Element Cluster*: Select the PE cluster name from the generated list.
h. Subnet*: Select subnet information from PE/PC.

6. On the next screen, enter additional information on the Cluster Configuration screen. Required fields are
denoted with a red asterisk (*). Other fields are optional.

» Cluster Name*
» Control Plane Endpoint*
» VM Image*: A generated list appears from PC images where you select the desired image.
» Kubernetes Service Load Balancer IP Range*
» Pod Network
» Service Network
» Reclaim Policy
» File System
» Hypervisor Attached Volumes
» Storage Container*

7. Enter any additional optional information on the Additional Configuration: (optional) screen.

» ACME Configuration: ACME Server address issuing the certificates.


» Email address for ACME Server:
» Ingress certificate file
» Ingress Private key file
» CA chain file
» Registry URL
» Registry CA Certificate
» Registry Username
» Registry Password
» SSH User
» If you entered an SSH Username, you must enter the SSH path. Path to file containing SSH Key
for user

» HTTP Proxy:
» HTTPS Proxy:
» No Proxy List:

8. Review and confirm your changes to create the cluster.

9. Select one of the choices for creating your cluster.


Create NKP Cluster?

» ( ) Create
» ( ) Dry Run
After the installation, the required components are installed, and the Kommander component deploys the
minimum applications needed by default. For more information, see NKP Concepts and Terms or NKP
Catalog Applications Enablement after Installing NKP .

Caution: You cannot use the NKP CLI to re-install the Kommander component on a cluster created using the
interactive prompt-based CLI. If you need to re-install or reconfigure Kommander to enable more applications,
contact Nutanix Support.

Verifying the Installation


Verify your Kommander installation.

About this task


After you build the Konvoy cluster and install the Kommander component for the UI, you can verify your
installation. By default, it waits for all applications to be ready.

Procedure
Run the following command to check the installation status.
kubectl -n kommander wait --for condition=Ready helmreleases --all --timeout 15m

Note: If you prefer using the CLI and not waiting for all applications to be available, you can set the flag to --
wait=false.

The first wait is for each of the helm charts to reach the Ready condition, eventually resulting in an output as follows:
helmrelease.helm.toolkit.fluxcd.io/centralized-grafana condition met
helmrelease.helm.toolkit.fluxcd.io/dex condition met
helmrelease.helm.toolkit.fluxcd.io/dex-k8s-authenticator condition met
helmrelease.helm.toolkit.fluxcd.io/fluent-bit condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-logging condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-loki condition met
helmrelease.helm.toolkit.fluxcd.io/karma condition met
helmrelease.helm.toolkit.fluxcd.io/kommander condition met
helmrelease.helm.toolkit.fluxcd.io/kommander-appmanagement condition met
helmrelease.helm.toolkit.fluxcd.io/kube-prometheus-stack condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/kubefed condition met
helmrelease.helm.toolkit.fluxcd.io/kubernetes-dashboard condition met
helmrelease.helm.toolkit.fluxcd.io/kubetunnel condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator-logging condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-adapter condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/reloader condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph-cluster condition met
helmrelease.helm.toolkit.fluxcd.io/thanos condition met
helmrelease.helm.toolkit.fluxcd.io/traefik condition met
helmrelease.helm.toolkit.fluxcd.io/traefik-forward-auth-mgmt condition met
helmrelease.helm.toolkit.fluxcd.io/velero condition met

What to do next
If an application fails to deploy, check the status of the HelmRelease using the following command.
kubectl -n kommander get helmrelease <HELMRELEASE_NAME>
If you find any HelmReleases in a broken release state, such as exhausted or another rollback/release in progress,
trigger a reconciliation of the HelmRelease using the following commands.
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": true}]'
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": false}]'

Logging In To the UI
Log in to the UI dashboard.

Procedure

1. By default, you can log in to the UI in Kommander with the credentials given using the following command.
nkp open dashboard --kubeconfig=${CLUSTER_NAME}.conf

2. Retrieve your credentials at any time if necessary.


kubectl -n kommander get secret nkp-credentials -o go-template='Username: {{.data.username|base64decode}}{{ "\n"}}Password: {{.data.password|base64decode}}{{ "\n"}}'

3. Retrieve the URL used for accessing the UI with the following command.
kubectl -n kommander get svc kommander-traefik -o go-template='https://{{with index .status.loadBalancer.ingress 0}}{{or .hostname .ip}}{{end}}/NKP/kommander/dashboard{{ "\n"}}'
Only use static credentials to access the UI for configuring an external identity provider (see Identity Providers
on page 350). Treat them as backup credentials rather than using them for normal access.

a. Rotate the password using the following command.


nkp experimental rotate dashboard-password
The output displays the new password.
Password: kqZ31lMBSCLcBjUKVwLJMQL2PxalipIzZw5Pjyw09wDqjWV3dz2wPSSBYi09JGJp

Nutanix Air-gapped Installation


Installation instructions for installing NKP in a Nutanix air-gapped environment.
For additional options to customize YAML Ain't Markup Language (YAML), see Custom Installation and
Infrastructure Tools on page 644.
If not already done, perform the procedures in the following topics:

• Resource Requirements on page 38


• Prerequisites for Installation on page 44
• Installing NKP on page 47

Note: For air-gapped, ensure you download the bundle nkp-air-gapped-


bundle_v2.12.0_linux_amd64.tar.gz and extract the tar file to a local directory. For more information, see
Downloading NKP on page 16.
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz

Nutanix Air-gapped: Loading the Registry


Before creating an air-gapped Kubernetes cluster, you must load the required images in a local registry for
the Konvoy component.

About this task


The complete NKP air-gapped bundle is needed in an air-gapped environment but can also be used
in a non-air-gapped environment. The bundle contains all the NKP components required for an air-gapped
installation and for loading a local registry in a non-air-gapped environment.

Note: If you do not already have a local registry set up, see the Local Registry Tools page for more information.

If you are operating in an air-gapped environment, a local container registry containing all the necessary installation
images, including the Kommander images, is required. This registry must be accessible from both the bastion
machine and other machines that will be created for the Kubernetes cluster.

Procedure

1. If not already done in prerequisites, download the air-gapped bundle nkp-air-gapped-


bundle_v2.12.0_linux_amd64.tar.gz , and extract the tar file to a local directory.
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz

2. The directory structure created by extraction is used in subsequent steps, with commands accessing files from
different directories. For example, for the bootstrap cluster, change your directory to the nkp-<version> directory,
similar to the example below, depending on your current location.
cd nkp-v2.12.0

3. Set an environment variable with your registry address and any other needed variables using this command.
export REGISTRY_URL="<https/http>://<registry-address>:<registry-port>"
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>
export REGISTRY_CA=<path to the cacert file on the bastion>

• REGISTRY_URL: the address of an existing registry accessible in the VPC that the new cluster nodes will be configured to use as a mirror registry when pulling images.
• REGISTRY_CA: (optional) the path on the bastion machine to the registry CA. Konvoy configures the cluster nodes to trust this CA. This value is only needed if the registry uses a self-signed certificate and the images are not already configured to trust this CA.
• REGISTRY_USERNAME: (optional) set to a user with pull access to this registry.
• REGISTRY_PASSWORD: (optional) the password for that user; needed only if REGISTRY_USERNAME is set.
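For illustration only, a filled-in set of exports might look like the following. The registry address, credentials, and CA path shown here are hypothetical; substitute your own values:
export REGISTRY_URL="https://registry.example.internal:5000"
export REGISTRY_USERNAME="nkp-pull"
export REGISTRY_PASSWORD="examplePassword"
export REGISTRY_CA="/home/nutanix/registry-ca.crt"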

4. Execute the following command to load the air-gapped image bundle into your private registry using any relevant
flags to apply the above variables.
nkp push bundle --bundle ./container-images/konvoy-image-bundle-v2.12.0.tar \
  --to-registry=${REGISTRY_URL} \
  --to-registry-username=${REGISTRY_USERNAME} \
  --to-registry-password=${REGISTRY_PASSWORD}

Note: It might take some time to push all the images to your image registry, depending on the network's
performance between the machine you are running the script on and the registry.

5. Load the Kommander component images to your private registry using the command.
nkp push bundle --bundle ./container-images/kommander-image-bundle-v2.12.0.tar \
  --to-registry=${REGISTRY_URL} \
  --to-registry-username=${REGISTRY_USERNAME} \
  --to-registry-password=${REGISTRY_PASSWORD}
Optional: This step is required only if you have an Ultimate license.
For NKP Catalog Applications available with the Ultimate license, perform this image load by running the following command to load the nkp-catalog-applications image bundle into your private registry:
nkp push bundle --bundle ./container-images/nkp-catalog-applications-image-bundle-v2.12.0.tar \
  --to-registry=${REGISTRY_URL} \
  --to-registry-username=${REGISTRY_USERNAME} \
  --to-registry-password=${REGISTRY_PASSWORD}

Nutanix Air-gapped: Installing NKP


Create a Nutanix cluster and install the UI in an air-gapped environment.

About this task


Use this procedure to create a self-managed Management cluster with NKP. A self-managed cluster refers to one in
which the CAPI resources and controllers that describe and manage it are running on the same cluster that they are
managing.

Note: For Virtual Private Cloud (VPC) installation, see the topic Nutanix with VPC Creating a New Air-
gapped Cluster.



• Configure your cluster to use the existing local registry as a mirror when pulling the images that you previously pushed to your registry while defining your infrastructure. For registry mirror information, see Using a Registry Mirror and Registry Mirror Tools.

Note: NKP uses a CSI storage container on your Prism Element (PE). The CSI Storage Container image names must
be the same for every PE environment in which you deploy an NKP cluster.

Before you begin


1. Decide your Base OS image selection. See BaseOS Image Requirements in the Prerequisites section.
2. Ensure that you have loaded the registry (see Nutanix Air-gapped: Loading the Registry on page 61).
3. Specify a name for your cluster. The cluster name can contain only the following characters: a-z, 0-9, ., and -. Cluster creation fails if the name contains capital letters. For more instructions on naming, see Object Names and IDs at https://kubernetes.io/docs/concepts/overview/working-with-objects/names/.

Procedure

1. Enter a unique name for your cluster suitable for your environment.

2. Set the environment variable for the cluster name using the following command.
export CLUSTER_NAME=<my-nutanix-cluster>

3. Export Nutanix PC credentials.


export NUTANIX_USER=<user>
export NUTANIX_PASSWORD=<password>

4. Ensure your subnets do not overlap with your host subnet because they cannot be changed after cluster creation. If you need to change the Kubernetes subnets, you must do so at the cluster creation stage. The default subnets used in NKP are:
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
    services:
      cidrBlocks:
        - 10.96.0.0/12
For more information, see Managing Subnets and Pods on page 651.
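For example, a customized clusterNetwork block with non-default, non-overlapping ranges might look like the following. The CIDR values here are hypothetical; see Managing Subnets and Pods for how to apply such a change at creation time:
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 10.244.0.0/16
    services:
      cidrBlocks:
        - 10.100.0.0/16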

5. Create a Kubernetes cluster. The following example shows a common configuration.


nkp create cluster nutanix \
  --cluster-name=<CLUSTER_NAME> \
  --control-plane-prism-element-cluster=<PE_NAME> \
  --worker-prism-element-cluster=<PE_NAME> \
  --control-plane-subnets=<SUBNET_ASSOCIATED_WITH_PE> \
  --worker-subnets=<SUBNET_ASSOCIATED_WITH_PE> \
  --control-plane-endpoint-ip=<AVAILABLE_IP_FROM_SAME_SUBNET> \
  --csi-storage-container=<NAME_OF_YOUR_STORAGE_CONTAINER> \
  --endpoint=<PC_ENDPOINT_URL> \
  --control-plane-vm-image=<NAME_OF_OS_IMAGE_CREATED_BY_NKP_CLI> \
  --worker-vm-image=<NAME_OF_OS_IMAGE_CREATED_BY_NKP_CLI> \
  --registry-mirror-url=${REGISTRY_URL} \
  --registry-mirror-username=${REGISTRY_USERNAME} \
  --registry-mirror-password=${REGISTRY_PASSWORD} \
  --self-managed

Note: If the cluster creation fails, check for issues with your environment, such as storage resources. If the
cluster becomes self-managed before it stalls, you can investigate what is running and what has failed to try to
resolve those issues independently. See Resource Requirements and Inspect Cluster Issues for more
information.
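For illustration only, a filled-in version of the command above might look like the following. All cluster, Prism Element, subnet, storage container, endpoint, and image names here are hypothetical placeholders; substitute the values from your own Prism Central and Prism Element environment:
nkp create cluster nutanix \
  --cluster-name=nkp-demo \
  --control-plane-prism-element-cluster=pe-cluster-01 \
  --worker-prism-element-cluster=pe-cluster-01 \
  --control-plane-subnets=primary-subnet \
  --worker-subnets=primary-subnet \
  --control-plane-endpoint-ip=10.0.0.50 \
  --csi-storage-container=default-container \
  --endpoint=https://pc.example.internal:9440 \
  --control-plane-vm-image=nkp-rocky-9.4-release-1.29.6 \
  --worker-vm-image=nkp-rocky-9.4-release-1.29.6 \
  --registry-mirror-url=${REGISTRY_URL} \
  --registry-mirror-username=${REGISTRY_USERNAME} \
  --registry-mirror-password=${REGISTRY_PASSWORD} \
  --self-managed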

Verifying the Installation


Verify your Kommander installation.

About this task


After you build the Konvoy cluster and install the Kommander component for the UI, you can verify your installation. By default, the verification waits for all applications to become ready.

Procedure
Run the following command to check the status of the installation.
kubectl -n kommander wait --for condition=Ready helmreleases --all --timeout 15m

Note: If you prefer not to wait for all applications to become available when using the CLI, set the flag --wait=false.

The first wait is for each of the helm charts to reach the Ready condition, eventually resulting in an output as follows:
helmrelease.helm.toolkit.fluxcd.io/centralized-grafana condition met
helmrelease.helm.toolkit.fluxcd.io/dex condition met
helmrelease.helm.toolkit.fluxcd.io/dex-k8s-authenticator condition met
helmrelease.helm.toolkit.fluxcd.io/fluent-bit condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-logging condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-loki condition met
helmrelease.helm.toolkit.fluxcd.io/karma condition met
helmrelease.helm.toolkit.fluxcd.io/kommander condition met
helmrelease.helm.toolkit.fluxcd.io/kommander-appmanagement condition met
helmrelease.helm.toolkit.fluxcd.io/kube-prometheus-stack condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/kubefed condition met
helmrelease.helm.toolkit.fluxcd.io/kubernetes-dashboard condition met
helmrelease.helm.toolkit.fluxcd.io/kubetunnel condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator-logging condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-adapter condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/reloader condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph-cluster condition met
helmrelease.helm.toolkit.fluxcd.io/thanos condition met
helmrelease.helm.toolkit.fluxcd.io/traefik condition met
helmrelease.helm.toolkit.fluxcd.io/traefik-forward-auth-mgmt condition met
helmrelease.helm.toolkit.fluxcd.io/velero condition met

Logging In To the UI
Log in to the Dashboard UI.

Procedure

1. By default, you can log in to the UI in Kommander using the credentials provided by the following command.
nkp open dashboard --kubeconfig=${CLUSTER_NAME}.conf



2. Retrieve your credentials at any time if necessary.
kubectl -n kommander get secret nkp-credentials -o go-template='Username: {{.data.username|base64decode}}{{ "\n"}}Password: {{.data.password|base64decode}}{{ "\n"}}'

3. Retrieve the URL used for accessing the UI with the following command.
kubectl -n kommander get svc kommander-traefik -o go-template='https://{{with index .status.loadBalancer.ingress 0}}{{or .hostname .ip}}{{end}}/NKP/kommander/dashboard{{ "\n"}}'
Only use static credentials to access the UI for configuring an external identity provider (see Identity Providers on page 350). Treat them as backup credentials rather than using them for normal access.

a. Rotate the password using the following command.


nkp experimental rotate dashboard-password
The output displays the new password.
Password: kqZ31lMBSCLcBjUKVwLJMQL2PxalipIzZw5Pjyw09wDqjWV3dz2wPSSBYi09JGJp

Pre-provisioned Installation Options


Pre-provisioned infrastructure is provided for non-air-gapped and air-gapped environments.
For more information, see Pre-provisioned Infrastructure on page 695.
The required specific machine resources are as follows:

• Control Plane machines:

• 15% of free space is available on the root file system.


• Multiple ports are open as described in NKP ports.
• firewalld systemd service is disabled. If it exists and is enabled, use the commands systemctl stop firewalld and systemctl disable firewalld so that firewalld remains disabled after the machine restarts.

• Worker machines:

• 15% of free space is available on the root file system.


• Multiple ports are open as described in the NKP ports.
• If you plan to use local volume provisioning to provide persistent volumes for your workloads, you must mount at least four volumes to the /mnt/disks/ mount point on each machine. Each volume must have at least 100 GiB of capacity (see the example mount commands after this list).
• Ensure your disks meet the resource requirements for Rook Ceph in Block mode for ObjectStorageDaemons as specified in the requirements table.
• firewalld systemd service is disabled. If it exists and is enabled, use the commands systemctl stop firewalld and systemctl disable firewalld so that firewalld remains disabled after the machine restarts.

Note: Swap is disabled because kubelet does not support swap. The commands to disable swap vary by distribution, so see the respective operating system documentation.
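As an illustration of the local-volume requirement above, the following commands show one way to format and mount a spare block device under /mnt/disks/ on a worker node. The device name /dev/sdb and the mount point name are hypothetical; adapt them to your hardware and repeat for each volume (at least four per machine):
# Format the spare device (this destroys any existing data on /dev/sdb)
sudo mkfs.ext4 /dev/sdb
# Create a mount point under /mnt/disks/ and mount the volume
sudo mkdir -p /mnt/disks/vol1
sudo mount /dev/sdb /mnt/disks/vol1
# Persist the mount across reboots
echo '/dev/sdb /mnt/disks/vol1 ext4 defaults 0 2' | sudo tee -a /etc/fstab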

Installation Scenarios
Select your installation scenario:



Pre-provisioned Installation
This section provides installation instructions for NKP in a pre-provisioned, non-air-gapped environment.

Pre-provisioned: Defining the Infrastructure


Define the cluster hosts and infrastructure in a pre-provisioned environment.

About this task


The Konvoy component of NKP must know how to access your cluster hosts. Hence, you must define the cluster
hosts and infrastructure using the inventory resources. For the initial cluster creation, define a control plane and at
least one worker pool.
Set the necessary environment variables as follows:

Procedure

1. Export the following environment variables, ensuring that all the control plane and worker nodes are included.
export CONTROL_PLANE_1_ADDRESS="<control-plane-address-1>"
export CONTROL_PLANE_2_ADDRESS="<control-plane-address-2>"
export CONTROL_PLANE_3_ADDRESS="<control-plane-address-3>"
export WORKER_1_ADDRESS="<worker-address-1>"
export WORKER_2_ADDRESS="<worker-address-2>"
export WORKER_3_ADDRESS="<worker-address-3>"
export WORKER_4_ADDRESS="<worker-address-4>"
export SSH_USER="<ssh-user>"
export SSH_PRIVATE_KEY_SECRET_NAME="$CLUSTER_NAME-ssh-key"

2. Use the following template to define your infrastructure. The environment variables that you set in the previous step automatically replace the variable names when the inventory YAML file is created.
cat <<EOF > preprovisioned_inventory.yaml
---
apiVersion: infrastructure.cluster.konvoy.d2iq.io/v1alpha1
kind: PreprovisionedInventory
metadata:
  name: $CLUSTER_NAME-control-plane
  namespace: default
  labels:
    cluster.x-k8s.io/cluster-name: $CLUSTER_NAME
    clusterctl.cluster.x-k8s.io/move: ""
spec:
  hosts:
    # Create as many of these as needed to match your infrastructure
    # Note that the command line parameter --control-plane-replicas determines how many control plane nodes will actually be used.
    - address: $CONTROL_PLANE_1_ADDRESS
    - address: $CONTROL_PLANE_2_ADDRESS
    - address: $CONTROL_PLANE_3_ADDRESS
  sshConfig:
    port: 22
    # This is the username used to connect to your infrastructure. This user must be root or
    # have the ability to use sudo without a password
    user: $SSH_USER
    privateKeyRef:
      # This is the name of the secret you created in the previous step. It must exist in the same
      # namespace as this inventory object.
      name: $SSH_PRIVATE_KEY_SECRET_NAME
      namespace: default
---
apiVersion: infrastructure.cluster.konvoy.d2iq.io/v1alpha1
kind: PreprovisionedInventory
metadata:
  name: $CLUSTER_NAME-md-0
  namespace: default
  labels:
    cluster.x-k8s.io/cluster-name: $CLUSTER_NAME
    clusterctl.cluster.x-k8s.io/move: ""
spec:
  hosts:
    - address: $WORKER_1_ADDRESS
    - address: $WORKER_2_ADDRESS
    - address: $WORKER_3_ADDRESS
    - address: $WORKER_4_ADDRESS
  sshConfig:
    port: 22
    user: $SSH_USER
    privateKeyRef:
      name: $SSH_PRIVATE_KEY_SECRET_NAME
      namespace: default
EOF

Pre-provisioned: Control Plane Endpoints


Define the control plane endpoints for your cluster as well as the connection mechanism. A control plane
must have three, five, or seven nodes so it can remain available if one or more nodes fail. A control plane
with one node cannot be used in production.
In addition, the control plane needs an endpoint that remains available if some nodes fail.
                      -------- cp1.example.com:6443
                      |
lb.example.com:6443 ----------- cp2.example.com:6443
                      |
                      -------- cp3.example.com:6443
In this example, the control plane endpoint host is lb.example.com, and the control plane endpoint port is 6443. The control plane nodes are cp1.example.com, cp2.example.com, and cp3.example.com. The port of each API server is 6443.

Connection Mechanism Selection


A virtual IP is the address that the client uses to connect to the service. A load balancer is a device that distributes
the client connections to the backend servers. Before you create a new NKP cluster, choose an external load balancer
(LB) or virtual IP.

• External load balancer: Nutanix recommends that an external load balancer be the control plane endpoint. To
distribute request load among the control plane machines, configure the load balancer to send requests to all the
control plane machines. Configure the load balancer to send requests only to control plane machines that are
responding to API requests.
• Built-in virtual IP: If an external load balancer is not available, use the built-in virtual IP. The virtual IP is not
a load balancer; it does not distribute request load among the control plane machines. However, if the machine
receiving requests does not respond to them, the virtual IP automatically moves to another machine.

Single-Node Control Plane

Caution: Do not use a single-node control plane in a production cluster.



A control plane with one node can use its single node as the endpoint, so you will not require an external load
balancer or a built-in virtual IP. At least one control plane node must always be running. Therefore, to upgrade a
cluster with one control plane node, a spare machine must be available in the control plane inventory. This machine
is used to provision the new node before the old node is deleted. When the Application Programming Interface (API) server endpoints are defined, you can create the cluster using the link in the next step below.

Note: Modify control plane audit log settings using the information in the Configure the Control Plane page. See
Configuring the Control Plane.

Known Limitations
The control plane endpoint port is also used as the API server port on each control plane machine. The default port is
6443. Before you create the cluster, ensure the port is available for use on each control plane machine.
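As a quick sanity check (shown only as an example; any equivalent tool works), you can confirm on each control plane machine that nothing is already listening on the intended port before creating the cluster:
# No output means port 6443 is free on this machine
sudo ss -tlnp | grep ':6443'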

Pre-provisioned: Creating the Management Cluster


Create a new pre-provisioned Kubernetes cluster in a non-air-gapped environment.

About this task


After you define the infrastructure and control plane endpoints, proceed with creating the cluster by following
the steps below to create a new pre-provisioned cluster. This process creates a self-managed cluster for use as the
Management cluster.

Before you begin


Specify a name for your cluster and run the command to deploy it. When specifying the cluster-name, you must
use the same cluster-name as used when defining your inventory objects (see Pre-provisioned Air-gapped:
Configure Environment on page 78).

Note: The cluster can contain only the following characters: a-z, 0-9,., and -. The cluster creation will fail if the
name has capital letters. For more instructions on naming, see Object Names and IDs at https://kubernetes.io/
docs/concepts/overview/working-with-objects/names/.
When specifying the cluster-name, you must use the same cluster-name as used when defining your
inventory objects.

Procedure

1. Enter a unique name for your cluster that is suitable for your environment.

2. Set the environment variable for the cluster name using the following command.
export CLUSTER_NAME=preprovisioned-example

3. Create a Kubernetes Cluster.


After you define the infrastructure and control plane endpoints, you can proceed to create the cluster by following
these steps to create a new Pre-provisioned cluster. This process creates a self-managed cluster to be used as the
Management cluster.

4.

What to do next

Before you create a new NKP cluster below, choose an external load balancer (LB) or virtual IP and use the
corresponding nkp create cluster command.
In a pre-provisioned environment, use the Kubernetes CSI and third party drivers for local volumes and other storage
devices in your data center.



NKP uses local static provisioner as the default storage provider for a pre-provisioned environment. However,
localvolumeprovisioner is not suitable for production use. You can use a Kubernetes CSI compatible storage
that is suitable for production.
After disabling localvolumeprovisioner, you can choose from any of the storage options available for
Kubernetes. To make that storage the default storage, use the commands shown in this section of the Kubernetes
documentation: https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/
For Pre-provisioned environments, you define a set of nodes that already exist. During the cluster creation process,
Konvoy Image Builder (KIB) is built into NKP and automatically runs the machine configuration process (which
KIB uses to build images for other providers) against the set of nodes that you defined. This results in your pre-
existing or pre-provisioned nodes being configured properly.
The following command relies on the pre-provisioned cluster API infrastructure provider to initialize the Kubernetes
control plane and worker nodes on the hosts defined in the inventory YAML previously created.
The create cluster command below includes the --self-managed flag. A self-managed cluster refers to one in which
the Cluster API (CAPI) resources and controllers that describe and manage it are running on the same cluster they are
managing.
This command uses the default external load balancer (LB) option (see alternative Step 1 for virtual IP):
nkp create cluster preprovisioned \
--cluster-name ${CLUSTER_NAME} \
--control-plane-endpoint-host <control plane endpoint host> \
--control-plane-endpoint-port <control plane endpoint port, if different than 6443> \
--pre-provisioned-inventory-file preprovisioned_inventory.yaml \
--ssh-private-key-file <path-to-ssh-private-key> \
--self-managed

Note: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-proxy, and --no-proxy and their related values in this command for it to be successful. More information is available in Configuring an HTTP or HTTPS Proxy on page 644.
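For illustration, the proxy-related flags might be appended to the create command above as follows. The proxy address and no-proxy list shown here are hypothetical; use the values for your own network:
nkp create cluster preprovisioned \
  ... \
  --http-proxy=http://proxy.example.internal:3128 \
  --https-proxy=http://proxy.example.internal:3128 \
  --no-proxy=127.0.0.1,localhost,10.96.0.0/12,192.168.0.0/16,.example.internal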

1. ALTERNATIVE Virtual IP: if you do not have an external LB and want to use a virtual IP provided by kube-vip, specify the flags shown in the example below:
nkp create cluster preprovisioned \
--cluster-name ${CLUSTER_NAME} \
--control-plane-endpoint-host 196.168.1.10 \
--virtual-ip-interface eth1

2. Use the wait command to monitor the cluster control-plane readiness:


kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=30m
Output:
cluster.cluster.x-k8s.io/preprovisioned-example condition met

Note: Depending on the cluster size, it will take a few minutes to create.

When the command completes, you have a running Kubernetes cluster. For bootstrap and custom YAML cluster creation, see the Additional Infrastructure Customization section of the documentation for Pre-provisioned Infrastructure.
Use this command to get the Kubernetes kubeconfig for the new cluster and proceed to installing the NKP Kommander UI:
nkp get kubeconfig -c ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf
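As an optional check (standard kubectl usage, not an NKP-specific step), confirm the kubeconfig works and the nodes are reporting Ready before continuing:
kubectl --kubeconfig=${CLUSTER_NAME}.conf get nodes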

Note: If changing the Calico encapsulation, Nutanix recommends changing it after cluster creation, but before
production. See Calico encapsulation.



Important: If you need to increase Docker Hub's rate limit, use your Docker Hub credentials when creating the cluster by setting the following flags on the nkp create cluster command: --registry-mirror-url=https://registry-1.docker.io --registry-mirror-username= --registry-mirror-password=. See Docker Hub's rate limit.

Audit Logs
To modify control plane audit log settings, use the information contained in the Configure the Control Plane page.
Further Steps
For more customized cluster creation, access the Pre-Provisioned Additional Configurations section. That section
is for Pre-Provisioned Override Files, custom flags, and more that specify the secret as part of the create cluster
command. If these are not specified, the overrides for your nodes will not be applied.

MetalLB Configuration
Create a MetalLB configmap for your pre-provisioned infrastructure.
Nutanix recommends that an external load balancer be the control plane endpoint. To distribute request load among the control plane machines, configure the load balancer to send requests to all the control plane machines, and only to those that are responding to API requests. If your environment already has a load balancer, it will work, and you can continue with the installation process. If it does not, use MetalLB: create a MetalLB configMap for your pre-provisioned infrastructure and choose which of the two protocols MetalLB supports for exposing Kubernetes services you want to use to define service IPs:

• Layer 2, with Address Resolution Protocol (ARP)


• Border Gateway Protocol (BGP)

Layer 2 Configuration
Layer 2 mode is the simplest to configure. In many cases, you do not require any protocol-specific configuration, only
IP addresses.
Layer 2 mode does not require the IPs to be bound to the network interfaces of your worker nodes. It works by
responding to ARP requests on your local network directly to give the machine’s MAC address to clients.

• MetalLB IP address ranges or Classless Inter-Domain Routing (CIDR) needs to be within the node’s primary
network subnets. For more information, see Managing Subnets and Pods on page 651.
• MetalLB IP address ranges or CIDRs and node subnets must not conflict with the Kubernetes cluster pod and
service subnets.
For example, the following configuration gives MetalLB control over IPs from 192.168.1.240 to 192.168.1.250 and
configures Layer 2 mode:
The following values are generic; enter your specific values into the fields where applicable.
cat << EOF > metallb-conf.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
EOF
After this is complete, run the kubectl apply -f metallb-conf.yaml command.

Border Gateway Protocol (BGP) Configuration


For a basic configuration featuring one BGP router and one IP address range, you need the following four pieces of
information:

• The router IP address that MetalLB must connect to.
• The router's autonomous system (AS) number.
• The AS number for MetalLB to use.
• An IP address range, expressed as a CIDR prefix.
As an example, if you want to specify the MetalLB range as 192.168.10.0/24 and AS number as 64500 and connect it
to a router at 10.0.0.1 with AS number 64501, your configuration will be as follows:
cat << EOF > metallb-conf.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.0.0.1
      peer-asn: 64501
      my-asn: 64500
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 192.168.10.0/24
EOF
After this is complete, run the kubectl apply -f metallb-conf.yaml command.

Pre-provisioned: Installing Kommander


This section describes the installation instructions for the Kommander component of NKP in a non-air-
gapped pre-provisioned environment.

About this task


After installing the Konvoy component of NKP, continue with the installation of the Kommander component
that enables you to bring up the UI dashboard.

Note:

• The --kubeconfig=${CLUSTER_NAME}.conf flag ensures that you install Kommander on the correct
cluster. For alternatives, see Provide Context for Commands using the kubeconfig file.
• Applications can take longer to deploy and sometimes time out the installation. Add the --wait-timeout <time to wait> flag and specify a period (for example, 1h) to allocate more time to the deployment of applications.



• If the Kommander installation fails, or you want to reconfigure applications, rerun the install command to retry.

Before you begin

• Ensure you review all the prerequisites required for the installation.
• Ensure you have a default StorageClass (see Identifying and Modifying Your StorageClass on page 982).
• Note the name of the cluster where you want to install Kommander. If you do not know the cluster name, use
kubectl get clusters -A to search and find it.

Procedure

1. Set the environment variable for your cluster.


export CLUSTER_NAME=<your-management-cluster-name>

2. Copy the kubeconfig file of your Management cluster to your local directory.
nkp get kubeconfig -c ${CLUSTER_NAME} >> ${CLUSTER_NAME}.conf

3. Create a configuration file for the deployment.


nkp install kommander --init > kommander.yaml

4. Edit the installer file to include configuration overrides for the rook-ceph-cluster. NKP's default configuration ships Ceph with PVC-based storage, which requires your CSI provider to support PVCs with volumeMode: Block. Because this is not possible with the default local static provisioner, install Ceph in host storage mode and choose whether Ceph's object storage daemon (osd) pods can consume all or just some of the devices on your nodes. Include one of the following overrides.

a. To automatically assign all raw storage devices on all nodes to the Ceph cluster.
rook-ceph-cluster:
  enabled: true
  values: |
    cephClusterSpec:
      storage:
        storageClassDeviceSets: []
        useAllDevices: true
        useAllNodes: true
        deviceFilter: "<<value>>"

b. To assign specific storage devices on all nodes to the Ceph cluster.


rook-ceph-cluster:
  enabled: true
  values: |
    cephClusterSpec:
      storage:
        storageClassDeviceSets: []
        useAllNodes: true
        useAllDevices: false
        deviceFilter: "^sdb."

Note: If you want to assign specific devices to specific nodes using the deviceFilter option, refer to
Specific Nodes and Devices. For general information on the deviceFilter value, refer to Storage
Selection Settings.
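For reference, a minimal sketch of assigning specific devices to specific nodes is shown below. The node and device names are hypothetical, and the exact schema is defined by Rook's storage selection settings referenced above, so verify it against that documentation before use:
rook-ceph-cluster:
  enabled: true
  values: |
    cephClusterSpec:
      storage:
        storageClassDeviceSets: []
        useAllNodes: false
        useAllDevices: false
        nodes:
          - name: "worker-1"        # hypothetical node name
            devices:
              - name: "sdb"         # hypothetical device on that node
          - name: "worker-2"
            devices:
              - name: "sdc"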



5. (Optional) Customize your kommander.yaml.
Some options include custom domains and certificates, HTTP proxy, and external load balancer.

6. To enable NKP catalog applications and install Kommander using the same kommander.yaml, add the following values for nkp-catalog-applications.
apiVersion: config.kommander.mesosphere.io/v1alpha1
kind: Installation
catalog:
  repositories:
    - name: nkp-catalog-applications
      labels:
        kommander.d2iq.io/project-default-catalog-repository: "true"
        kommander.d2iq.io/workspace-default-catalog-repository: "true"
        kommander.d2iq.io/gitapps-gitrepository-type: "nkp"
      gitRepositorySpec:
        url: https://github.com/mesosphere/nkp-catalog-applications
        ref:
          tag: v2.12.0

7. Use the customized kommander.yaml to install NKP.


nkp install kommander --installer-config kommander.yaml --kubeconfig=${CLUSTER_NAME}.conf

Note: If you want to enable catalog applications to an existing configuration, add these values to an existing
installer configuration file to maintain your Management cluster’s settings.
If you want to enable NKP catalog applications after installing NKP, see Configuring Applications
After Installing Kommander on page 984.

Verifying your Installation


Verify Kommander installation. After you build the Konvoy cluster and you install Kommander, verify your
installation. The cluster waits for all the applications to be ready by default.

About this task

Note: If the Kommander installation fails or you wish to reconfigure applications, you can rerun the install command
to retry the installation.

Procedure
You can check the status of the installation using the following command.
kubectl -n kommander wait --for condition=Ready helmreleases --all --timeout 15m

Note: If you prefer not to wait for all applications to become available when using the CLI, set the flag --wait=false.

The first wait is for each of the Helm charts to reach the Ready condition, eventually resulting in output similar to the following:
helmrelease.helm.toolkit.fluxcd.io/centralized-grafana condition met
helmrelease.helm.toolkit.fluxcd.io/dex condition met
helmrelease.helm.toolkit.fluxcd.io/dex-k8s-authenticator condition met
helmrelease.helm.toolkit.fluxcd.io/fluent-bit condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-logging condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-loki condition met
helmrelease.helm.toolkit.fluxcd.io/karma condition met
helmrelease.helm.toolkit.fluxcd.io/kommander condition met
helmrelease.helm.toolkit.fluxcd.io/kommander-appmanagement condition met



helmrelease.helm.toolkit.fluxcd.io/kube-prometheus-stack condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/kubefed condition met
helmrelease.helm.toolkit.fluxcd.io/kubernetes-dashboard condition met
helmrelease.helm.toolkit.fluxcd.io/kubetunnel condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator-logging condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-adapter condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/reloader condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph-cluster condition met
helmrelease.helm.toolkit.fluxcd.io/thanos condition met
helmrelease.helm.toolkit.fluxcd.io/traefik condition met
helmrelease.helm.toolkit.fluxcd.io/traefik-forward-auth-mgmt condition met
helmrelease.helm.toolkit.fluxcd.io/velero condition met

What to do next
If an application fails to deploy, check the status of the HelmRelease using the following command.
kubectl -n kommander get helmrelease <HELMRELEASE_NAME>
If you find any HelmReleases in a broken release state, such as exhausted or another rollback/release in progress, trigger a reconciliation of the HelmRelease using the following commands.
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": true}]'
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": false}]'

Logging In To the UI
Log in to the UI Dashboard. After you build the Konvoy cluster and install Kommander, verify your
installation. The cluster waits for all the applications to be ready by default.

Procedure

1. By default, you can log in to the UI in Kommander using the credentials provided by the following command.
nkp open dashboard --kubeconfig=${CLUSTER_NAME}.conf

2. Retrieve your credentials at any time if necessary.


kubectl -n kommander get secret nkp-credentials -o go-template='Username: {{.data.username|base64decode}}{{ "\n"}}Password: {{.data.password|base64decode}}{{ "\n"}}'

3. Retrieve the URL used for accessing the UI with the following.
kubectl -n kommander get svc kommander-traefik -o go-template='https://{{with index .status.loadBalancer.ingress 0}}{{or .hostname .ip}}{{end}}/NKP/kommander/dashboard{{ "\n"}}'
Only use static credentials to access the UI for configuring an external identity provider (see Identity Providers on page 350). Treat them as backup credentials rather than using them for normal access.

a. Rotate the password using the following command.


nkp experimental rotate dashboard-password
The output displays the new password.
Password: kqZ31lMBSCLcBjUKVwLJMQL2PxalipIzZw5Pjyw09wDqjWV3dz2wPSSBYi09JGJp



What to do next
After installing the Konvoy component, building a cluster, successfully installing Kommander, and logging in to the UI, you are ready to customize configurations. For more information, see Cluster Operations Management. Most of the customization, such as attaching clusters and deploying applications, takes place in the dashboard or the NKP UI.

Create Managed Clusters Using the NKP CLI


This topic explains how to continue using the CLI to create managed clusters rather than switching to the
UI dashboard.

About this task


After initial cluster creation, you have the ability to create additional clusters from the CLI. In a previous step,
the new cluster was created as Self-managed, which allows it to be a Management cluster or a stand-alone
cluster. Subsequent new clusters are not self-managed as they are likely to be managed or attached clusters to this
Management Cluster.

Note: When creating managed clusters, do not create and move CAPI objects or install the Kommander component.
Those tasks are only done on Management clusters.
Your new managed cluster must be part of a workspace under a management cluster. To make the new
managed cluster a part of a workspace, set that workspace's environment variable.

Procedure

1. If you have an existing workspace name, run this command to find the name.
kubectl get workspace -A

2. After you find the workspace name, set the WORKSPACE_NAMESPACE environment variable.
export WORKSPACE_NAMESPACE=<workspace_namespace>
If you need to create a new workspace, see Creating a Workspace on page 397.

Name Your Cluster

About this task


Each cluster must have a unique name.
After you have defined the infrastructure and control plane endpoints, you can proceed to create the cluster by following these steps to create a new pre-provisioned cluster. Because this cluster is created without the --self-managed flag, it is a managed cluster administered by your Management cluster.
First, you must name your cluster. Then, you run the command to deploy it.

Note: The cluster name can contain only the following characters: a-z, 0-9, ., and -. Cluster creation fails if the name contains capital letters. See the Kubernetes documentation for more naming information.
When specifying the cluster-name, you must use the same cluster-name as used when defining your inventory objects.

Perform both steps to name the cluster:

Procedure

1. Give your cluster a unique name suitable for your environment.



2. Set the environment variable.
export CLUSTER_NAME=<preprovisioned-additional>

Create a Kubernetes Cluster

About this task


After you have defined the infrastructure and control plane endpoints, you can proceed to create the cluster by following these steps to create a new pre-provisioned cluster.
This process creates a managed cluster that is administered by your Management cluster.

Tip: Before you create a new Nutanix Kubernetes Platform (NKP) cluster below, choose an external load balancer (LB) or virtual IP and use the corresponding nkp create cluster command.

In a Pre-provisioned environment, use the Kubernetes CSI and third-party drivers for local volumes and other storage
devices in your datacenter.

Caution: NKP uses a local static provisioner as the default storage provider for a pre-provisioned environment.
However, localvolumeprovisioner is not suitable for production use. Use Kubernetes CSI compatible
storage that is suitable for production.

After turning off localvolumeprovisioner, you can choose from any of the storage options available for
Kubernetes. To make that storage the default storage, use the commands shown in this section of the Kubernetes
documentation: Change the default StorageClass.
For Pre-provisioned environments, you define a set of nodes that already exist. During the cluster creation process,
Konvoy Image Builder (KIB) is built into NKP and automatically runs the machine configuration process (which
KIB uses to build images for other providers) against the set of nodes that you defined. This results in your pre-
existing or pre-provisioned nodes being configured properly.
The following command relies on the pre-provisioned cluster Application Programming Interface (API) infrastructure
provider to initialize the Kubernetes control plane and worker nodes on the hosts defined in the inventory YAML
previously created.

Procedure

1. This command uses the default external load balancer (LB) option.
nkp create cluster preprovisioned \
  --cluster-name ${CLUSTER_NAME} \
  --control-plane-endpoint-host <control plane endpoint host> \
  --control-plane-endpoint-port <control plane endpoint port, if different than 6443> \
  --pre-provisioned-inventory-file preprovisioned_inventory.yaml \
  --ssh-private-key-file <path-to-ssh-private-key>



2. Use the wait command to monitor the cluster control-plane readiness.
kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=30m

Note: Depending on the cluster size, it will take a few minutes to create.

cluster.cluster.x-k8s.io/preprovisioned-additional condition met

Tip: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-proxy, and --no-proxy and their related values in this command for it to be successful. More information is available in Configuring an HTTP or HTTPS Proxy on page 644.

Manually Attach an NKP CLI Cluster to the Management Cluster

Procedure

When you create a Managed Cluster with the NKP CLI, it attaches automatically to the Management Cluster after a
few moments. However, if you do not set a workspace, the attached cluster will be created in the default workspace.
To ensure that the attached cluster is created in your desired workspace namespace, follow these instructions:

1. Confirm you have your MANAGED_CLUSTER_NAME variable set with the following command.
echo ${MANAGED_CLUSTER_NAME}

2. Retrieve your kubeconfig from the cluster you have created without setting a workspace.
nkp get kubeconfig --cluster-name ${MANAGED_CLUSTER_NAME} > ${MANAGED_CLUSTER_NAME}.conf

3. Note: This step is only necessary if you did not set the workspace of your cluster upon creation.
You can now either attach the cluster to the workspace through the UI, as described earlier, or attach it to the workspace you want using the CLI.

4. Retrieve the workspace where you want to attach the cluster.


kubectl get workspaces -A

5. Set the WORKSPACE_NAMESPACE environment variable.


export WORKSPACE_NAMESPACE=<workspace-namespace>

6. You need to create a secret in the desired workspace before attaching the cluster to that workspace. Retrieve the
kubeconfig secret value of your cluster.
kubectl -n default get secret ${MANAGED_CLUSTER_NAME}-kubeconfig -o go-template='{{.data.value}}{{ "\n"}}'

7. This command returns a lengthy value. Copy the entire string for the secret, using the template below as a reference, and create a new attached-cluster-kubeconfig.yaml file.
apiVersion: v1
kind: Secret
metadata:
  name: <your-managed-cluster-name>-kubeconfig
  labels:
    cluster.x-k8s.io/cluster-name: <your-managed-cluster-name>
type: cluster.x-k8s.io/secret
data:
  value: <value-you-copied-from-secret-above>

8. Create this secret in the desired workspace.


kubectl apply -f attached-cluster-kubeconfig.yaml --namespace ${WORKSPACE_NAMESPACE}

9. Create this KommanderCluster object to attach the cluster to the workspace.


cat << EOF | kubectl apply -f -
apiVersion: kommander.mesosphere.io/v1beta1
kind: KommanderCluster
metadata:
  name: ${MANAGED_CLUSTER_NAME}
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  kubeconfigRef:
    name: ${MANAGED_CLUSTER_NAME}-kubeconfig
  clusterRef:
    capiCluster:
      name: ${MANAGED_CLUSTER_NAME}
EOF

10. You can now view this cluster in your Workspace in the UI, and you can confirm its status by running the
command below. It might take a few minutes to reach "Joined" status.
kubectl get kommanderclusters -A
If you have several Pro clusters and want to turn one of them into a Managed cluster to be centrally administered by a Management cluster, see Platform Expansion.

Pre-provisioned Air-gapped Installation


Installation instructions for installing NKP in a pre-provisioned air-gapped environment.

Note: For air-gapped environments, ensure you have downloaded nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz so that you can extract the tarball to a local directory.
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz

Pre-provisioned Air-gapped: Configure Environment


In order to create a cluster in a pre-provisioned air-gapped environment, you must first prepare the environment. The instructions below outline how to fulfill the requirements for using pre-provisioned infrastructure in an air-gapped environment. To create a cluster, you must first set up the environment with the necessary artifacts. All artifacts for pre-provisioned air-gapped environments must be placed on the bastion host. Artifacts needed by the nodes must be unpacked and distributed on the bastion before other provisioning can work in the absence of an internet connection.
There is an air-gapped bundle available to download for NKP. In previous NKP releases, the distro package bundles were included in the downloaded air-gapped bundle. Currently, that air-gapped bundle contains the following artifacts, with the exception of the distro packages:

• NKP Kubernetes packages


• Python packages (provided by upstream)
• Containerd tar file



1. Download NKP (see Downloading NKP on page 16), get nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz, and extract the tar file to a local directory:
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz && cd nkp-v2.12.0/kib

2. Fetch the distro packages as well as the other artifacts. By fetching the distro packages from distro repositories, you get the latest security fixes available at machine image build time.
3. In your download location, there is a bundles directory with all the steps to create an OS package bundle for a particular OS. To create it, run the new NKP command create-package-bundle. This builds an OS bundle using the Kubernetes version defined in ansible/group_vars/all/defaults.yaml. Example command:
./konvoy-image create-package-bundle --os redhat-8.4 --output-directory=artifacts
NOTE: For FIPS, pass the flag: --fips
NOTE: For RHEL OS, pass your Red Hat subscription manager credentials by exporting RHSM_ACTIVATION_KEY and RHSM_ORG_ID. Example:
export RHSM_ACTIVATION_KEY="-ci"
export RHSM_ORG_ID="1232131"
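Putting the notes above together, a FIPS package bundle for RHEL might be built with something like the following. This is a sketch based on the flags named above; the activation key and organization ID values are placeholders for your own credentials:
export RHSM_ACTIVATION_KEY="<your-activation-key>"
export RHSM_ORG_ID="<your-org-id>"
./konvoy-image create-package-bundle --os redhat-8.4 --fips --output-directory=artifacts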

Setup Process

1. The bootstrap image must be extracted and loaded onto the bastion host.
2. Artifacts must be copied onto cluster hosts for nodes to access.
3. If using a graphics processing unit (GPU), those artifacts must be positioned locally.
4. Registry seeded with images locally.

Load the Bootstrap Image


1. Assuming you have downloaded nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz as described in Downloading NKP on page 16 and extracted the tar file, load the bootstrap image.
2. Load the bootstrap image on your bastion machine:
docker load -i konvoy-bootstrap-image-v2.12.0.tar
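As an optional check (standard Docker usage, not an NKP-specific step), confirm the bootstrap image is now present on the bastion:
docker images | grep konvoy-bootstrap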

Copy air-gapped artifacts onto cluster hosts


Using the Konvoy Image Builder, you can copy the required artifacts onto your cluster hosts.
1. Assuming you have downloaded nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz, extract the tar file to a local directory.
2. The Kubernetes image bundle is located in kib/artifacts/images. Verify the image and artifacts.
1. Verify the image bundles exist in artifacts/images:
$ ls artifacts/images/
kubernetes-images-1.29.6-d2iq.1.tar  kubernetes-images-1.29.6-d2iq.1-fips.tar

2. Verify the artifacts for your OS exist in the artifacts/ directory and export the appropriate variables:
$ ls artifacts/
1.29.6_centos_7_x86_64.tar.gz 1.29.6_redhat_8_x86_64_fips.tar.gz
containerd-1.6.28-d2iq.1-rhel-7.9-x86_64.tar.gz containerd-1.6.28-d2iq.1-
rhel-8.6-x86_64_fips.tar.gz pip-packages.tar.gz
1.29.6_centos_7_x86_64_fips.tar.gz 1.29.6_rocky_9_x86_64.tar.gz
containerd-1.6.28-d2iq.1-rhel-7.9-x86_64_fips.tar.gz containerd-1.6.28-d2iq.1-
rocky-9.0-x86_64.tar.gz



1.29.6_redhat_7_x86_64.tar.gz 1.29.6_ubuntu_20_x86_64.tar.gz
containerd-1.6.28-d2iq.1-rhel-8.4-x86_64.tar.gz containerd-1.6.28-d2iq.1-
rocky-9.1-x86_64.tar.gz
1.29.6_redhat_7_x86_64_fips.tar.gz containerd-1.6.28-d2iq.1-centos-7.9-
x86_64.tar.gz containerd-1.6.28-d2iq.1-rhel-8.4-x86_64_fips.tar.gz
containerd-1.6.28-d2iq.1-ubuntu-20.04-x86_64.tar.gz
1.29.6_redhat_8_x86_64.tar.gz containerd-1.6.28-d2iq.1-centos-7.9-
x86_64_fips.tar.gz containerd-1.6.28-d2iq.1-rhel-8.6-x86_64.tar.gz images

3. For example, for RHEL 8.4, you set:


export OS_PACKAGES_BUNDLE=1.29.6_redhat_8_x86_64.tar.gz
export CONTAINERD_BUNDLE=containerd-1.6.28-d2iq.1-rhel-8.4-x86_64.tar.gz

3. Export the following environment variables, ensuring that all control plane and worker nodes are included:
export CONTROL_PLANE_1_ADDRESS="<control-plane-address-1>"
export CONTROL_PLANE_2_ADDRESS="<control-plane-address-2>"
export CONTROL_PLANE_3_ADDRESS="<control-plane-address-3>"
export WORKER_1_ADDRESS="<worker-address-1>"
export WORKER_2_ADDRESS="<worker-address-2>"
export WORKER_3_ADDRESS="<worker-address-3>"
export WORKER_4_ADDRESS="<worker-address-4>"
export SSH_USER="<ssh-user>"
export SSH_PRIVATE_KEY_FILE="<private key file>"
SSH_PRIVATE_KEY_FILE must be either the name of the SSH private key file in your working directory or an
absolute path to the file in your user’s home directory.
4. Generate an inventory.yaml , which is automatically picked up by the konvoy-image upload in the next
step.
cat <<EOF > inventory.yaml
all:
  vars:
    ansible_user: $SSH_USER
    ansible_port: 22
    ansible_ssh_private_key_file: $SSH_PRIVATE_KEY_FILE
  hosts:
    $CONTROL_PLANE_1_ADDRESS:
      ansible_host: $CONTROL_PLANE_1_ADDRESS
    $CONTROL_PLANE_2_ADDRESS:
      ansible_host: $CONTROL_PLANE_2_ADDRESS
    $CONTROL_PLANE_3_ADDRESS:
      ansible_host: $CONTROL_PLANE_3_ADDRESS
    $WORKER_1_ADDRESS:
      ansible_host: $WORKER_1_ADDRESS
    $WORKER_2_ADDRESS:
      ansible_host: $WORKER_2_ADDRESS
    $WORKER_3_ADDRESS:
      ansible_host: $WORKER_3_ADDRESS
    $WORKER_4_ADDRESS:
      ansible_host: $WORKER_4_ADDRESS
EOF

5. Upload the artifacts onto cluster hosts with the following command:
konvoy-image upload artifacts \
  --container-images-dir=./artifacts/images/ \
  --os-packages-bundle=./artifacts/$OS_PACKAGES_BUNDLE \
  --containerd-bundle=artifacts/$CONTAINERD_BUNDLE \
  --pip-packages-bundle=./artifacts/pip-packages.tar.gz
KIB uses variable overrides to specify the base image and container images to use in your new machine image.
The variable overrides files for NVIDIA and Federal Information Processing Standards (FIPS) can be ignored
unless an overlay feature is added.




Pre-provisioned Air-gapped: Load the Registry


Before creating an air-gapped Kubernetes cluster, you need to load the required images in a local registry
for the Konvoy component.

About this task


The complete Nutanix Kubernetes Platform (NKP) air-gapped bundle is needed for an air-gapped
environment but can also be used in a non-air-gapped environment. The bundle contains all the NKP
components needed for an air-gapped environment installation and also for using a local registry in a non-
air-gapped environment.

Note: If you do not already have a local registry set up, see the Local Registry Tools page for more information.

If you are operating in an air-gapped environment, a local container registry containing all the necessary installation
images, including the Kommander images, is required. This registry must be accessible from both the bastion
machine and either the Amazon Web Services (AWS) EC2 instances (if deploying to AWS) or other machines that
will be created for the Kubernetes cluster.

Procedure

1. If not already done in the prerequisites, download the air-gapped bundle nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz and extract the tar file to a local directory.
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz

2. The directory structure after extraction can be accessed in subsequent steps using commands to access files
from different directories. For example, for the bootstrap cluster, change your directory to the nkp-<version>
directory, similar to the example below, depending on your current location
cd nkp-v2.12.0

3. Set an environment variable with your registry address and any other needed variables using this command.
export REGISTRY_URL="<https/http>://<registry-address>:<registry-port>"
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>
export REGISTRY_CA=<path to the cacert file on the bastion>

4. Execute the following command to load the air-gapped image bundle into your private registry using any of the
relevant flags to apply the variables above.
nkp push bundle --bundle ./container-images/konvoy-image-bundle-v2.12.0.tar \
  --to-registry=${REGISTRY_URL} \
  --to-registry-username=${REGISTRY_USERNAME} \
  --to-registry-password=${REGISTRY_PASSWORD}

Note: It might take some time to push all the images to your image registry, depending on the performance of the
network between the machine you are running the script on and the registry.

Important: To increase Docker Hub's rate limit, use your Docker Hub credentials when creating the cluster by setting the following flags on the nkp create cluster command: --registry-mirror-url=https://registry-1.docker.io --registry-mirror-username= --registry-mirror-password=.

5. Load the Kommander component images to your private registry using the command.
nkp push bundle --bundle ./container-images/kommander-image-bundle-v2.12.0.tar \
  --to-registry=${REGISTRY_URL} \
  --to-registry-username=${REGISTRY_USERNAME} \
  --to-registry-password=${REGISTRY_PASSWORD}
Optional: This step is required only if you have an Ultimate license.
For NKP Catalog Applications available with the Ultimate license, perform this image load by running the following command to load the nkp-catalog-applications image bundle into your private registry:
nkp push bundle --bundle ./container-images/nkp-catalog-applications-image-bundle-v2.12.0.tar \
  --to-registry=${REGISTRY_URL} \
  --to-registry-username=${REGISTRY_USERNAME} \
  --to-registry-password=${REGISTRY_PASSWORD}

Pre-provisioned Air-gapped: Define Infrastructure


Define the cluster hosts and infrastructure in a pre-provisioned environment.

About this task


The Konvoy component of Nutanix Kubernetes Platform (NKP) needs to know how to access your cluster hosts
so you must define the cluster hosts and infrastructure. This is done using inventory resources. For initial cluster
creation, you must define a control-plane and at least one worker pool.
This procedure sets the necessary environment variables.

Procedure

1. Export the following environment variables, ensuring that all control plane and worker nodes are included:
export CONTROL_PLANE_1_ADDRESS="<control-plane-address-1>"
export CONTROL_PLANE_2_ADDRESS="<control-plane-address-2>"
export CONTROL_PLANE_3_ADDRESS="<control-plane-address-3>"
export WORKER_1_ADDRESS="<worker-address-1>"
export WORKER_2_ADDRESS="<worker-address-2>"
export WORKER_3_ADDRESS="<worker-address-3>"
export WORKER_4_ADDRESS="<worker-address-4>"
export SSH_USER="<ssh-user>"
export SSH_PRIVATE_KEY_SECRET_NAME="$CLUSTER_NAME-ssh-key"

2. Use the following template to help you define your infrastructure. The environment variables that you set in
the previous step automatically replace the variable names when the inventory YAML Ain't Markup Language
(YAML) file is created.
cat <<EOF > preprovisioned_inventory.yaml

Nutanix Kubernetes Platform | Basic Installations by Infrastructure | 82


---
apiVersion: infrastructure.cluster.konvoy.d2iq.io/v1alpha1
kind: PreprovisionedInventory
metadata:
name: $CLUSTER_NAME-control-plane
namespace: default
labels:
cluster.x-k8s.io/cluster-name: $CLUSTER_NAME
clusterctl.cluster.x-k8s.io/move: ""
spec:
hosts:
# Create as many of these as needed to match your infrastructure
# Note that the command line parameter --control-plane-replicas determines how
many control plane nodes will actually be used.
#
- address: $CONTROL_PLANE_1_ADDRESS
- address: $CONTROL_PLANE_2_ADDRESS
- address: $CONTROL_PLANE_3_ADDRESS
sshConfig:
port: 22
# This is the username used to connect to your infrastructure. This user must be
root or
# have the ability to use sudo without a password
user: $SSH_USER
privateKeyRef:
# This is the name of the secret you created in the previous step. It must
exist in the same
# namespace as this inventory object.
name: $SSH_PRIVATE_KEY_SECRET_NAME
namespace: default
---
apiVersion: infrastructure.cluster.konvoy.d2iq.io/v1alpha1
kind: PreprovisionedInventory
metadata:
name: $CLUSTER_NAME-md-0
namespace: default
labels:
cluster.x-k8s.io/cluster-name: $CLUSTER_NAME
clusterctl.cluster.x-k8s.io/move: ""
spec:
hosts:
- address: $WORKER_1_ADDRESS
- address: $WORKER_2_ADDRESS
- address: $WORKER_3_ADDRESS
- address: $WORKER_4_ADDRESS
sshConfig:
port: 22
user: $SSH_USER
privateKeyRef:
name: $SSH_PRIVATE_KEY_SECRET_NAME
namespace: default
EOF
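Optionally, sanity-check the generated inventory and host access before continuing. The following commands are a minimal sketch; the grep pattern, key path, and sudo check are illustrative and assume the same private key that backs the SSH secret you created earlier:
grep -E 'address|user|name' preprovisioned_inventory.yaml
ssh -i <path-to-ssh-private-key> ${SSH_USER}@${CONTROL_PLANE_1_ADDRESS} 'sudo -n true && echo "passwordless sudo OK"'
The first command confirms that the environment variables were substituted into the file (an unset variable shows up as an empty value). The second confirms SSH reachability and passwordless sudo, which the sshConfig in the inventory requires.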

Pre-provisioned Air-gapped: Define Control Plane Endpoint


Define the control plane endpoint for your cluster and the connection mechanism. A control plane needs to have
three, five, or seven nodes so it can remain available if one or more nodes fail. A control plane with one node is not
for production use.
In addition, the control plane needs an endpoint that remains available if some nodes fail.
                     -------- cp1.example.com:6443
                    |
lb.example.com:6443 ---------- cp2.example.com:6443
                    |
                     -------- cp3.example.com:6443
In this example, the control plane endpoint host is lb.example.com, and the control plane endpoint port is 6443.
The control plane nodes are cp1.example.com, cp2.example.com, and cp3.example.com. The port of each API
server is 6443.

Select your Connection Mechanism


A virtual IP is the address that the client uses to connect to the service. A load balancer is a device that distributes
the client connections to the backend servers. Before you create a new Nutanix Kubernetes Platform (NKP) cluster,
choose an external load balancer (LB) or virtual IP.

• External load balancer


It is recommended that an external load balancer be the control plane endpoint. To distribute request load among
the control plane machines, configure the load balancer to send requests to all the control plane machines.
Configure the load balancer to send requests only to control plane machines that are responding to Application
Programming Interface (API) requests.
• Built-in virtual IP
If an external load balancer is not available, use the built-in virtual IP. The virtual IP is not a load balancer; it does
not distribute request load among the control plane machines. However, if the machine receiving requests does not
respond to them, the virtual IP automatically moves to another machine.

Single-Node Control Plane

Caution: Do not use a single-node control plane in a production cluster.

A control plane with one node can use its single node as the endpoint, so you will not require an external load
balancer or a built-in virtual IP. At least one control plane node must always be running. Therefore, to upgrade a
cluster with one control plane node, a spare machine must be available in the control plane inventory. This machine
is used to provision the new node before the old node is deleted. When the API server endpoints are defined, you can
create the cluster using the link in the Next Step below.

Note: Modify Control Plane Audit logs settings using the information contained in the page Configure the Control
Plane.

Known Limitations
The control plane endpoint port is also used as the API server port on each control plane machine. The default port is
6443. Before you create the cluster, ensure the port is available for use on each control plane machine.
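For example, you can check that nothing is already listening on the port on a control plane machine. This is a simple sketch using standard Linux tooling; substitute your own port if you changed the default:
ssh ${SSH_USER}@${CONTROL_PLANE_1_ADDRESS} 'sudo ss -tlnp | grep -w 6443 || echo "port 6443 is free"'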

Pre-provisioned Air-gapped: Creating a Management Cluster


Create a new Pre-provisioned Kubernetes cluster in an air-gapped environment.

About this task


After you have defined the infrastructure and control plane endpoints, you can proceed to create the cluster by
following these steps to create a new pre-provisioned cluster. This process creates a self-managed cluster that can be
used as the management cluster.

Before you begin


First, you must name your cluster. Then, you run the command to deploy it. When specifying the cluster-name,
you must use the same cluster-name as used when defining your inventory objects.

Note: The cluster name can only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if
the name has capital letters. For more naming information, see https://kubernetes.io/docs/concepts/overview/
working-with-objects/names/.

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable: export CLUSTER_NAME=preprovisioned-example

What to do next
Create a Kubernetes Cluster
If your cluster is air-gapped or you have a local registry, you must provide additional arguments when creating the
cluster. These tell the cluster where to locate the local registry to use by defining the URL.
export REGISTRY_URL=<https/http>://<registry-address>:<registry-port>
export REGISTRY_CA=<path to the CA on the bastion>
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>

• REGISTRY_URL: the address of an existing registry accessible in the VPC. The new cluster nodes will be
configured to use it as a mirror registry when pulling images.
• REGISTRY_CA: (optional) the path on the bastion machine to the registry CA. Konvoy will configure the cluster
nodes to trust this CA. This value is only needed if the registry is using a self-signed certificate and the AMIs are
not already configured to trust this CA.
• REGISTRY_USERNAME: optional, set to a user that has pull access to this registry.

• REGISTRY_PASSWORD: optional if the username is not set.
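For illustration, a hypothetical set of values for these variables might look like the following; the registry address, CA path, and credentials are placeholders only:
export REGISTRY_URL=https://registry.example.internal:5000
export REGISTRY_CA=/home/<user>/registry-ca.crt
export REGISTRY_USERNAME=registry-user
export REGISTRY_PASSWORD=registry-password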

Before you create a new NKP cluster below, choose an external load balancer (LB) or virtual IP and use the
corresponding nkp create cluster command. Other customizations are available but require different flags on the
nkp create cluster command; refer to Pre-provisioned Cluster Creation Customization Choices for more cluster
customizations.
In a pre-provisioned environment, use the Kubernetes CSI and third-party drivers for local volumes and other storage
devices in your datacenter.

Note: NKP uses a local static provisioner as the default storage provider for a pre-provisioned environment.
However, localvolumeprovisioner is not suitable for production use. Use Kubernetes CSI compatible
storage that is suitable for production.

After turning off localvolumeprovisioner, you can choose from any of the storage options available for
Kubernetes. To make that storage the default storage, use the commands shown in this section of the Kubernetes
documentation: Changing the Default Storage Class.
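For example, the commands from that documentation follow this general pattern; the StorageClass names below are assumptions and must match the classes actually present in your cluster (kubectl get storageclass lists them):
kubectl patch storageclass localvolumeprovisioner -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
kubectl patch storageclass <your-storage-class> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'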

Note: (Optional) Use a registry mirror. Configure your cluster to use an existing local registry as a mirror when
attempting to pull images previously pushed to your registry when defining your infrastructure. Instructions in the
expandable Custom Installation section. For registry mirror information, see topics Using a Registry Mirror and
Registry Mirror Tools.

The create cluster command below includes the --self-managed flag. A self-managed cluster refers to one in
which the CAPI resources and controllers that describe and manage it are running on the same cluster they are
managing.
This command uses the default external load balancer (LB) option (see alternative Step 1 for virtual IP):
nkp create cluster pre-provisioned --cluster-name ${CLUSTER_NAME} \
--control-plane-endpoint-host <control plane endpoint host> \
--control-plane-endpoint-port <control plane endpoint port, if different than 6443> \
--pre-provisioned-inventory-file preprovisioned_inventory.yaml \
--ssh-private-key-file <path-to-ssh-private-key> \
--registry-mirror-url=${REGISTRY_URL} \
--registry-mirror-cacert=${REGISTRY_CA} \
--registry-mirror-username=${REGISTRY_USERNAME} \
--registry-mirror-password=${REGISTRY_PASSWORD} \
--self-managed

Note: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-
proxy, and --no-proxy and their related values in this command for it to be successful. More information is
available in Configuring an HTTP or HTTPS Proxy on page 644.
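For illustration, the additional proxy flags might look like the following; the proxy address and no-proxy list are placeholders and must reflect your environment, including any internal domains, pod, and service CIDRs that should bypass the proxy:
--http-proxy=http://proxy.example.internal:3128 \
--https-proxy=http://proxy.example.internal:3128 \
--no-proxy=localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16,.svc,.cluster.local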

1. ALTERNATIVE Virtual IP - if you don’t have an external LB and want to use a VIRTUAL IP provided by
kube-vip, specify the flags shown in the example below:
nkp create cluster pre-provisioned \
--cluster-name ${CLUSTER_NAME} \
--control-plane-endpoint-host 196.168.1.10 \
--virtual-ip-interface eth1

2. Use the wait command to monitor the cluster control-plane readiness:


kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --
timeout=30m
Output:
cluster.cluster.x-k8s.io/preprovisioned-example condition met

Note: Depending on the cluster size, it will take a few minutes to create.

When the command is complete, you will have a running Kubernetes cluster! For bootstrap and custom YAML
cluster creation, refer to the Additional Infrastructure Customization section of the documentation for Pre-
provisioned: Pre-provisioned Infrastructure.
Use this command to get the Kubernetes kubeconfig for the new cluster and proceed to install the NKP
Kommander UI:
nkp get kubeconfig -c ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf
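To confirm that the kubeconfig works, you can list the nodes of the new cluster:
kubectl --kubeconfig=${CLUSTER_NAME}.conf get nodes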

Note: If changing the Calico encapsulation, Nutanix recommends doing so after cluster creation but before
production.

Audit Logs
To modify Control Plane Audit log settings, use the information contained on the page Configure the Control
Plane.

Configure Air-gapped MetalLB


Create a MetalLB configmap for your Pre-provisioned Infrastructure.
It is recommended that an external load balancer (LB) be the control plane endpoint. To distribute request load among
the control plane machines, configure the load balancer to send requests to all the control plane machines. Configure
the load balancer to send requests only to control plane machines that are responding to API requests.
If your environment is not equipped with an external load balancer, you can use MetalLB. Otherwise, your own load
balancer will work, and you can continue the installation process with Pre-provisioned: Install Kommander. To use
MetalLB, create a MetalLB configMap for your Pre-provisioned infrastructure, and choose one of the two protocols
MetalLB uses for exposing Kubernetes services:

• Layer 2, with Address Resolution Protocol (ARP)


• Border Gateway Protocol (BGP)
Select one of the following procedures to create your MetalLB manifest for further editing.

Layer 2 Configuration
Layer 2 mode is the simplest to configure: in many cases, you don’t need any protocol-specific configuration, only IP
addresses.
Layer 2 mode does not require the IPs to be bound to the network interfaces of your worker nodes. It works by
responding to ARP requests on your local network directly, to give the machine’s MAC address to clients.

• MetalLB IP address ranges or Classless Inter-Domain Routing (CIDR) need to be within the node’s primary
network subnet.
• MetalLB IP address ranges or CIDRs and node subnet must not conflict with the Kubernetes cluster pod and
service subnets.
For example, the following configuration gives MetalLB control over IPs from 192.168.1.240 to 192.168.1.250, and
configures Layer 2 mode:
The following values are generic; enter your specific values into the fields where applicable.
cat << EOF > metallb-conf.yaml
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: default
protocol: layer2
addresses:
- 192.168.1.240-192.168.1.250
EOF
kubectl apply -f metallb-conf.yaml
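To confirm that the ConfigMap was created as expected, you can read it back; this assumes the metallb-system namespace already exists on the cluster:
kubectl -n metallb-system get configmap config -o yaml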

BGP Configuration
For a basic configuration featuring one BGP router and one IP address range, you need 4 pieces of information:

• The router IP address that MetalLB needs to connect to.


• The router’s autonomous systems (AS) number.
• The AS number for MetalLB to use.
• An IP address range expressed as a CIDR prefix.

As an example, if you want to give MetalLB the range 192.168.10.0/24 and AS number 64500, and connect it to a
router at 10.0.0.1 with AS number 64501, your configuration will look like:
cat << EOF > metallb-conf.yaml
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
peers:
- peer-address: 10.0.0.1
peer-asn: 64501
my-asn: 64500
address-pools:
- name: default
protocol: bgp
addresses:
- 192.168.10.0/24
EOF
kubectl apply -f metallb-conf.yaml

Pre-provisioned Air-gapped: Kommander Installation


Installation instructions for installing the Kommander component of Nutanix Kubernetes Platform (NKP) in
an air-gapped pre-provisioned environment.

About this task


Once you have installed the Konvoy component of NKP, you will continue with the installation of the
Kommander component that will bring up the UI dashboard.

Tip: Tips and Recommendations

• The --kubeconfig=${CLUSTER_NAME}.conf flag ensures that you install Kommander on the correct
cluster. For alternatives, see Provide Context for Commands with a kubeconfig File.
• Applications can take longer to deploy and time out the installation. Add the --wait-timeout <time
to wait> flag and specify a period of time (for example, 1h) to allocate more time to the deployment
of applications.
• If the Kommander installation fails, or you want to reconfigure applications, rerun the install
command to retry.

Prerequisites:

• Ensure you have reviewed all Prerequisites for Installation.


• Ensure you have a default StorageClass.
• Ensure you have loaded all the necessary images for your configuration. See: Load the Images into Your Registry:
Air-gapped Environments.
• Note the name of the cluster where you want to install Kommander. If you do not know the cluster name, use
kubectl get clusters -A to display and find it.

Create your Kommander Installation Configuration File

Procedure

1. Set the environment variable for your cluster.


export CLUSTER_NAME=<your-management-cluster-name>

2. Copy the kubeconfig file of your Management cluster to your local directory.
nkp get kubeconfig -c ${CLUSTER_NAME} >> ${CLUSTER_NAME}.conf

3. Create a configuration file for the deployment.


nkp install kommander --init > kommander.yaml

4. Edit the installer file to include configuration overrides for the rook-ceph-cluster. NKP’s default
configuration ships Ceph with PersistentVolumeClaim (PVC) based storage, which requires your CSI provider
to support PVC with type volumeMode: Block. As this is not possible with the default local static provisioner,
you can install Ceph in host storage mode. You can choose whether Ceph’s object storage daemon (osd) pods can
consume all or just some of the devices on your nodes. Include one of the following Overrides.

a. To automatically assign all raw storage devices on all nodes to the Ceph cluster.
rook-ceph-cluster:
enabled: true
values: |
cephClusterSpec:
storage:
storageClassDeviceSets: []
useAllDevices: true
useAllNodes: true
deviceFilter: "<<value>>"

b. To assign specific storage devices on all nodes to the Ceph cluster.


rook-ceph-cluster:
enabled: true
values: |
cephClusterSpec:
storage:
storageClassDeviceSets: []
useAllNodes: true
useAllDevices: false
deviceFilter: "^sdb."

Note: If you want to assign specific devices to specific nodes using the deviceFilter option, refer to
Specific Nodes and Devices. For general information on the deviceFilter value, refer to Storage
Selection Settings.
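To decide on a deviceFilter value, you can list the block devices present on a node. For example, using standard Linux tooling on one of your hosts:
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
Devices that show no filesystem and no mountpoint are typically the raw devices that Ceph can consume.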

5. If required: Customize your kommander.yaml.

a. See the Kommander Customizations page for customization options. Some options include Custom Domains
and Certificates, HTTP Proxy, and External Load Balancer.

6. Enable NKP Catalog Applications and install Kommander: in the same kommander.yaml from the previous
section, add these values (if you are enabling NKP Catalog Applications) for nkp-catalog-applications.
apiVersion: config.kommander.mesosphere.io/v1alpha1
kind: Installation
catalog:
repositories:
- name: nkp-catalog-applications
labels:

kommander.d2iq.io/project-default-catalog-repository: "true"
kommander.d2iq.io/workspace-default-catalog-repository: "true"
kommander.d2iq.io/gitapps-gitrepository-type: "nkp"
gitRepositorySpec:
url: https://github.com/mesosphere/nkp-catalog-applications
ref:
tag: v2.12.0

7. Use the customized kommander.yaml to install NKP.


nkp install kommander --installer-config kommander.yaml --kubeconfig=
${CLUSTER_NAME}.conf

Note: If you only want to enable catalog applications to an existing configuration, add these values to an existing
installer configuration file to maintain your Management cluster’s settings.
If you want to enable NKP Catalog applications after installing NKP, see the topic Configuring NKP
Catalog Applications after Installing NKP.

Verifying your Installation and UI Log in


Verify Kommander Install and Log in to the Dashboard UI

About this task


Verify Kommander Installation

Note: If the Kommander installation fails or you wish to reconfigure applications, you can rerun the install command
to retry the installation.

Procedure
You can check the status of the installation using the following command.
kubectl -n kommander wait --for condition=Ready helmreleases --all --timeout 15m

Note: If you prefer the command-line interface (CLI) to not wait for all applications to become ready, you can set the
--wait=false flag.

First, wait for each of the Helm charts to reach the Ready condition; this eventually results in output resembling the
following:
helmrelease.helm.toolkit.fluxcd.io/centralized-grafana condition met
helmrelease.helm.toolkit.fluxcd.io/dex condition met
helmrelease.helm.toolkit.fluxcd.io/dex-k8s-authenticator condition met
helmrelease.helm.toolkit.fluxcd.io/fluent-bit condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-logging condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-loki condition met
helmrelease.helm.toolkit.fluxcd.io/karma condition met
helmrelease.helm.toolkit.fluxcd.io/kommander condition met
helmrelease.helm.toolkit.fluxcd.io/kommander-appmanagement condition met
helmrelease.helm.toolkit.fluxcd.io/kube-prometheus-stack condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/kubefed condition met
helmrelease.helm.toolkit.fluxcd.io/kubernetes-dashboard condition met
helmrelease.helm.toolkit.fluxcd.io/kubetunnel condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator-logging condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-adapter condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/reloader condition met

helmrelease.helm.toolkit.fluxcd.io/rook-ceph condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph-cluster condition met
helmrelease.helm.toolkit.fluxcd.io/thanos condition met
helmrelease.helm.toolkit.fluxcd.io/traefik condition met
helmrelease.helm.toolkit.fluxcd.io/traefik-forward-auth-mgmt condition met
helmrelease.helm.toolkit.fluxcd.io/velero condition met

Failed HelmReleases

Procedure
If an application fails to deploy, check the status of a HelmRelease using the command:
kubectl -n kommander get helmrelease <HELMRELEASE_NAME>
If you find any HelmReleases in a “broken” release state, such as “exhausted” or “another rollback/release in
progress”, trigger a reconciliation of the HelmRelease using the following commands:
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": true}]'
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": false}]'

Log in to the UI

Procedure

1. By default, you can log in to the UI in Kommander with the credentials given using this command.
nkp open dashboard --kubeconfig=${CLUSTER_NAME}.conf

2. Retrieve your credentials at any time if necessary.


kubectl -n kommander get secret nkp-credentials -o go-template='Username:
{{.data.username|base64decode}}{{ "\n"}}Password: {{.data.password|base64decode}}
{{ "\n"}}'

3. Retrieve the URL used for accessing the UI with the following.
kubectl -n kommander get svc kommander-traefik -o go-template='https://{{with
index .status.loadBalancer.ingress 0}}{{or .hostname .ip}}{{end}}/nkp/kommander/
dashboard{{ "\n"}}'
Only use these static credentials to access the UI for configuring an external identity provider. Treat them as
backup credentials rather than using them for normal access.

a. Rotate the password using the following command.


nkp experimental rotate dashboard-password
The output displays the new password:
Password: kqZ31lMBSCLcBjUKVwLJMQL2PxalipIzZw5Pjyw09wDqjWV3dz2wPSSBYi09JGJp

Create Managed Clusters Using the NKP CLI


This topic explains how to continue using the command-line interface (CLI) to create managed clusters in
an air-gapped Pre-provisioned environment rather than switching to the UI dashboard.

About this task


After initial cluster creation, you have the ability to create additional clusters from the CLI. In a previous step,
the new cluster was created as self-managed, which allows it to be a Management cluster or a standalone
cluster. Subsequent new clusters are not self-managed, as they will likely be Managed or Attached clusters to this
Management Cluster.

Note: When creating Managed clusters, you do not need to create and move CAPI objects, or install the Kommander
component. Those tasks are only done on Management clusters.
To make the new managed cluster a part of a Workspace, set that workspace environment variable.

Procedure

1. If you have an existing Workspace name, find the name using the command kubectl get workspace -A.

2. When you have the Workspace name, set the WORKSPACE_NAMESPACE environment variable using the command
export WORKSPACE_NAMESPACE=<workspace_namespace>.

Note: If you need to create a new Workspace, follow the instructions to Create a New Workspace

Name Your Cluster

About this task


Each cluster must have a unique name.
After you have defined the infrastructure and control plane endpoints, you can proceed to create the cluster by
following these steps to create a new pre-provisioned cluster. Unlike the initial cluster, this cluster is not self-managed;
it is managed by the Management cluster you created earlier.
First, you must name your cluster. Then, you run the command to deploy it.

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the
name has capital letters. For more naming information, see https://kubernetes.io/docs/concepts/overview/
working-with-objects/names/.
When specifying the cluster-name, you must use the same cluster-name as used when defining your
inventory objects.

Perform both steps to name the cluster:

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable using the command export CLUSTER_NAME=<preprovisioned-


additional>.

Create a Kubernetes Cluster

About this task


After you have defined the infrastructure and control plane endpoints, you can proceed to create the
cluster by following these steps to create a new pre-provisioned cluster.
This process creates a Managed cluster that attaches to the Management cluster; it is not self-managed.

Tip: Before you create a new NKP cluster below, choose an external load balancer (LB) or Pre-provisioned Built-
in Virtual IP on page 706 and use the corresponding nkp create cluster command.

In a Pre-provisioned environment, use the Kubernetes CSI and third-party drivers for local volumes and other storage
devices in your data center.

Caution: NKP uses a local static provisioner as the default storage provider for a pre-provisioned environment.
However, localvolumeprovisioner is not suitable for production use. Use Kubernetes CSI-compatible storage
that is suitable for production. For more information, see https://kubernetes.io/docs/concepts/storage/
volumes/#volume-types.

After turning off localvolumeprovisioner, you can choose from any of the storage options available for
Kubernetes. To make that storage the default storage, use the commands shown in the Change the default
StorageClass section of the Kubernetes documentation. For more information, see https://kubernetes.io/docs/tasks/
administer-cluster/change-default-storage-class/
For Pre-provisioned environments, you define a set of nodes that already exist. During the cluster creation process,
Konvoy Image Builder (KIB) is built into NKP and automatically runs the machine configuration process (which KIB
uses to build images for other providers) against the set of nodes that you defined. This results in your pre-existing or
pre-provisioned nodes being configured properly.
The following command relies on the pre-provisioned cluster API infrastructure provider to initialize the Kubernetes
control plane and worker nodes on the hosts defined in the inventory YAML previously created.

Procedure

1. This command uses the default external load balancer (LB) option.
nkp create cluster preprovisioned --cluster-name ${CLUSTER_NAME} \
--control-plane-endpoint-host <control plane endpoint host> \
--control-plane-endpoint-port <control plane endpoint port, if different than 6443> \
--pre-provisioned-inventory-file preprovisioned_inventory.yaml \
--ssh-private-key-file <path-to-ssh-private-key> \
--registry-mirror-url=${REGISTRY_URL} \
--registry-mirror-cacert=${REGISTRY_CA} \
--registry-mirror-username=${REGISTRY_USERNAME} \
--registry-mirror-password=${REGISTRY_PASSWORD}

2. Use the wait command to monitor the cluster control-plane readiness.


kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --
timeout=30m

Note: Depending on the cluster size, it will take a few minutes to create.

cluster.cluster.x-k8s.io/preprovisioned-additional condition met

Tip: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-
proxy, and --no-proxy and their related values in this command for it to be successful. For more information,
see Clusters with HTTP or HTTPS Proxy on page 647.
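While the wait command runs, you can also follow provisioning progress from the Management cluster by watching the underlying Cluster API objects; these are standard Cluster API resource types rather than NKP-specific commands:
kubectl get machines,kubeadmcontrolplane -A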

Manually Attach an NKP CLI Cluster to the Management Cluster

Procedure

When you create a Managed Cluster with the NKP CLI, it attaches automatically to the Management Cluster after a
few moments. However, if you do not set a workspace, the attached cluster will be created in the default workspace.
To ensure that the attached cluster is created in your desired workspace namespace, follow these instructions:

1. Confirm you have your MANAGED_CLUSTER_NAME variable set using the command echo
${MANAGED_CLUSTER_NAME}.

2. Retrieve your kubeconfig from the cluster you have created without setting a workspace using the command nkp
get kubeconfig --cluster-name ${MANAGED_CLUSTER_NAME} > ${MANAGED_CLUSTER_NAME}.conf.
You can now either attach the cluster to a workspace through the UI, or attach it to the workspace you want
through the CLI as described in the following steps.

Note: This is only necessary if you never set the workspace of your cluster upon creation.

3. Retrieve the workspace where you want to attach the cluster using the command kubectl get workspaces -
A.

4. Set the WORKSPACE_NAMESPACE environment variable using the command export


WORKSPACE_NAMESPACE=<workspace-namespace>.

5. You need to create a secret in the desired workspace before attaching the cluster to that workspace. Retrieve
the kubeconfig secret value of your cluster using the command kubectl -n default get secret
${MANAGED_CLUSTER_NAME}-kubeconfig -o go-template='{{.data.value}}{{ "\n"}}'.

6. The previous command returns a lengthy value. Copy this entire string into a new
attached-cluster-kubeconfig.yaml file, using the template below as a reference.
apiVersion: v1
kind: Secret
metadata:
name: <your-managed-cluster-name>-kubeconfig
labels:
cluster.x-k8s.io/cluster-name: <your-managed-cluster-name>
type: cluster.x-k8s.io/secret
data:
value: <value-you-copied-from-secret-above>

7. Create this secret in the desired workspace using the command kubectl apply -f attached-cluster-
kubeconfig.yaml --namespace ${WORKSPACE_NAMESPACE}

8. Create this KommanderCluster object to attach the cluster to the workspace.


Example:
cat << EOF | kubectl apply -f -
apiVersion: kommander.mesosphere.io/v1beta1
kind: KommanderCluster
metadata:
name: ${MANAGED_CLUSTER_NAME}
namespace: ${WORKSPACE_NAMESPACE}
spec:
kubeconfigRef:
name: ${MANAGED_CLUSTER_NAME}-kubeconfig
clusterRef:
capiCluster:
name: ${MANAGED_CLUSTER_NAME}
EOF

9. You can now view this cluster in your Workspace in the UI, and you can confirm its status by using the command
kubectl get kommanderclusters -A.
It might take a few minutes to reach "Joined" status.
If you have several Pro Clusters and want to turn one of them into a Managed Cluster to be centrally administered
by a Management Cluster, refer to Platform Expansion.

Pre-provisioned FIPS Install
This section provides instructions to install NKP in a Pre-provisioned non-air-gapped environment with FIPS
requirements.

Ensure Configuration
If not already done, see the documentation for:

• Resource Requirements on page 38


• Installing NKP on page 47
• Prerequisites for Installation on page 44

Section Contents

Pre-provisioned FIPS: Define Infrastructure


Define the cluster hosts and infrastructure in a pre-provisioned environment.

About this task


Konvoy needs to know how to access your cluster hosts. This is done using inventory resources. For initial
cluster creation, you must define a control plane and at least one worker pool.
This procedure sets the necessary environment variables.

Procedure

1. Export the following environment variables, ensuring that all control plane and worker nodes are included:
export CONTROL_PLANE_1_ADDRESS="<control-plane-address-1>"
export CONTROL_PLANE_2_ADDRESS="<control-plane-address-2>"
export CONTROL_PLANE_3_ADDRESS="<control-plane-address-3>"
export WORKER_1_ADDRESS="<worker-address-1>"
export WORKER_2_ADDRESS="<worker-address-2>"
export WORKER_3_ADDRESS="<worker-address-3>"
export WORKER_4_ADDRESS="<worker-address-4>"
export SSH_USER="<ssh-user>"
export SSH_PRIVATE_KEY_SECRET_NAME="$CLUSTER_NAME-ssh-key"

2. Use the following template to help you define your infrastructure. The environment variables that you set in the
previous step automatically replace the variable names when the inventory YAML file is created.
cat <<EOF > preprovisioned_inventory.yaml
---
apiVersion: infrastructure.cluster.konvoy.d2iq.io/v1alpha1
kind: PreprovisionedInventory
metadata:
name: $CLUSTER_NAME-control-plane
namespace: default
labels:
cluster.x-k8s.io/cluster-name: $CLUSTER_NAME
clusterctl.cluster.x-k8s.io/move: ""
spec:
hosts:
# Create as many of these as needed to match your infrastructure
# Note that the command-line parameter --control-plane-replicas determines how
many control plane nodes will actually be used.
#
- address: $CONTROL_PLANE_1_ADDRESS

- address: $CONTROL_PLANE_2_ADDRESS
- address: $CONTROL_PLANE_3_ADDRESS
sshConfig:
port: 22
# This is the username used to connect to your infrastructure. This user must be
root or
# have the ability to use sudo without a password
user: $SSH_USER
privateKeyRef:
# This is the name of the secret you created in the previous step. It must
exist in the same
# namespace as this inventory object.
name: $SSH_PRIVATE_KEY_SECRET_NAME
namespace: default
---
apiVersion: infrastructure.cluster.konvoy.d2iq.io/v1alpha1
kind: PreprovisionedInventory
metadata:
name: $CLUSTER_NAME-md-0
namespace: default
labels:
cluster.x-k8s.io/cluster-name: $CLUSTER_NAME
clusterctl.cluster.x-k8s.io/move: ""
spec:
hosts:
- address: $WORKER_1_ADDRESS
- address: $WORKER_2_ADDRESS
- address: $WORKER_3_ADDRESS
- address: $WORKER_4_ADDRESS
sshConfig:
port: 22
user: $SSH_USER
privateKeyRef:
name: $SSH_PRIVATE_KEY_SECRET_NAME
namespace: default
EOF

Pre-provisioned FIPS: Define Control Plane Endpoint


Define the control plane endpoint for your cluster and the connection mechanism. A control plane needs to have
three, five, or seven nodes so it can remain available if one or more nodes fail. A control plane with one node is not
for production use.
In addition, the control plane needs an endpoint that remains available if some nodes fail.
-------- cp1.example.com:6443
|
lb.example.com:6443 ---------- cp2.example.com:6443
|
-------- cp3.example.com:6443
In this example, the control plane endpoint host is lb.example.com, and the control plane endpoint port is 6443.
The control plane nodes are cp1.example.com, cp2.example.com, and cp3.example.com. The port of each API
server is 6443.

Select your Connection Mechanism


A virtual IP is the address that the client uses to connect to the service. A load balancer is a device that distributes
the client connections to the backend servers. Before you create a new NKP cluster, choose an external load balancer
(LB) or virtual IP.

• External load balancer
It is recommended that an external load balancer be the control plane endpoint. To distribute request load among
the control plane machines, configure the load balancer to send requests to all the control plane machines.
Configure the load balancer to send requests only to control plane machines that are responding to API requests.
• Built-in virtual IP
If an external load balancer is not available, use the built-in virtual IP. The virtual IP is not a load balancer; it does
not distribute request load among the control plane machines. However, if the machine receiving requests does not
respond to them, the virtual IP automatically moves to another machine.

Single-Node Control Plane

Caution: Do not use a single-node control plane in a production cluster.

A control plane with one node can use its single node as the endpoint, so you will not require an external load
balancer, or a built-in virtual IP. At least one control plane node must always be running. Therefore, to upgrade a
cluster with one control plane node, a spare machine must be available in the control plane inventory. This machine
is used to provision the new node before the old node is deleted. When the API server endpoints are defined, you can
create the cluster using the link in the Next Step below.

Note: Modify Control Plane Audit logs settings using the information contained in the page Configure the Control
Plane.

Known Limitations
The control plane endpoint port is also used as the API server port on each control plane machine. The default port is
6443. Before you create the cluster, ensure the port is available for use on each control plane machine.

Pre-provisioned FIPS: Creating the Management Cluster


Create a new Pre-provisioned Kubernetes cluster in a non-air-gapped environment using the steps below.

About this task


After you have defined the infrastructure and control plane endpoints, you can proceed to create the cluster
by following these steps to create a new pre-provisioned cluster. This process creates a self-managed
cluster that can be used as the management cluster.

Before you begin


First, you must name your cluster. Then, you run the command to deploy it. When specifying the cluster-name,
you must use the same cluster-name as used when defining your inventory objects.

Note: The cluster name can only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if
the name has capital letters. For more naming information, see https://kubernetes.io/docs/concepts/overview/
working-with-objects/names/.

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable: export CLUSTER_NAME=preprovisioned-example

What to do next
Create a Kubernetes Cluster
After you have defined the infrastructure and control plane endpoints, you can proceed to create the cluster by
following these steps to create a new Pre-provisioned cluster. This process creates a self-managed cluster to be used
as the Management cluster.
Before you create a new NKP cluster below, choose an external load balancer (LB) or virtual IP and use the
corresponding nkp create cluster command.
In a pre-provisioned environment, use the Kubernetes CSI and third-party drivers for local volumes and other storage
devices in your data center.
NKP uses a local static provisioner as the default storage provider for a pre-provisioned environment. However,
localvolumeprovisioner is not suitable for production use. You can use a Kubernetes CSI that is suitable for
production. For more information, see https://kubernetes.io/docs/concepts/storage/volumes/#volume-types
After turning off localvolumeprovisioner, you can choose from any of the storage options available for
Kubernetes. To make that storage the default storage, use the commands shown in this section of the Kubernetes
documentation: Changing the Default Storage Class.
For Pre-provisioned environments, you define a set of nodes that already exist. During the cluster creation process,
Konvoy Image Builder (KIB) is built into NKP and automatically runs the machine configuration process (which
KIB uses to build images for other providers) against the set of nodes that you defined. This results in your pre-
existing or pre-provisioned nodes being configured properly.
The following command relies on the pre-provisioned cluster API infrastructure provider to initialize the Kubernetes
control plane and worker nodes on the hosts defined in the inventory YAML previously created.
The create cluster command below includes the --self-managed flag. A self-managed cluster refers to one in which
the CAPI resources and controllers that describe and manage it are running on the same cluster they are managing.
This command uses the default external load balancer (LB) option (see alternative Step 1 for virtual IP):
nkp create cluster preprovisioned \
--cluster-name ${CLUSTER_NAME} \
--control-plane-endpoint-host <control plane endpoint host> \
--control-plane-endpoint-port <control plane endpoint port, if different than 6443> \
--pre-provisioned-inventory-file preprovisioned_inventory.yaml \
--ssh-private-key-file <path-to-ssh-private-key> \
--self-managed

Note: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-
proxy, and --no-proxy and their related values in this command for it to be successful. More information is
available in Configuring an HTTP or HTTPS Proxy on page 644.

1. ALTERNATIVE Virtual IP - if you don’t have an external LB and want to use a VIRTUAL IP provided by
kube-vip, specify the flags shown in the example below:
nkp create cluster preprovisioned \
--cluster-name ${CLUSTER_NAME} \
--control-plane-endpoint-host 196.168.1.10 \
--virtual-ip-interface eth1

2. Use the wait command to monitor the cluster control-plane readiness:


kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --
timeout=30m
Output:
cluster.cluster.x-k8s.io/preprovisioned-example condition met

Note: Depending on the cluster size, it will take a few minutes to create.

3. When the command completes, you will have a running Kubernetes cluster! For bootstrap and custom YAML
cluster creation, refer to the Additional Infrastructure Customization section of the documentation for Pre-
provisioned: Pre-Provisioned Infrastructure.
Use this command to get the Kubernetes kubeconfig for the new cluster and proceed to install the NKP
Kommander UI:
nkp get kubeconfig -c ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf

Important: If you need to increase Docker Hub's rate limit, use your Docker Hub credentials when creating the
cluster by setting the following flags --registry-mirror-url=https://registry-1.docker.io --
registry-mirror-username= --registry-mirror-password= on the nkp create cluster
command. See Docker Hub's rate limit.

Note: If changing the Calico encapsulation, Nutanix recommends doing so after cluster creation, but before
production. See Calico encapsulation.

Configure MetalLB
Create a MetalLB configmap for your Pre-provisioned Infrastructure.
If your environment is not currently equipped with a load balancer, you can use MetalLB. Otherwise, your own load
balancer will work, and you can continue the installation process with Pre-provisioned: Install Kommander. To use
MetalLB, create a MetalLB configMap for your Pre-provisioned infrastructure, and choose one of the two protocols
MetalLB uses for exposing Kubernetes services:

• Layer 2, with Address Resolution Protocol (ARP)


• Border Gateway Protocol (BGP)
Select one of the following procedures to create your MetalLB manifest for further editing.

Layer 2 Configuration
Layer 2 mode is the simplest to configure: in many cases, you don’t need any protocol-specific configuration, only IP
addresses.
Layer 2 mode does not require the IPs to be bound to the network interfaces of your worker nodes. It works by
responding to ARP requests on your local network directly and giving the machine’s MAC address to clients.

• MetalLB IP address ranges or CIDRs need to be within the node’s primary network subnet.
• MetalLB IP address ranges or CIDRs and node subnets must not conflict with the Kubernetes cluster pod and
service subnets.
For example, the following configuration gives MetalLB control over IPs from 192.168.1.240 to 192.168.1.250, and
configures Layer 2 mode:
The following values are generic; enter your specific values into the fields where applicable.
cat << EOF > metallb-conf.yaml
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: default
protocol: layer2
addresses:

- 192.168.1.240-192.168.1.250
EOF
kubectl apply -f metallb-conf.yaml

BGP Configuration
For a basic configuration featuring one BGP router and one IP address range, you need 4 pieces of information:

• The router IP address that MetalLB needs to connect to.


• The router’s autonomous systems (AS) number.
• The AS number for MetalLB to use.
• An IP address range expressed as a Classless Inter-Domain Routing (CIDR) prefix.
As an example, if you want to give MetalLB the range 192.168.10.0/24 and AS number 64500, and connect it to a
router at 10.0.0.1 with AS number 64501, your configuration will look like:
cat << EOF > metallb-conf.yaml
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
peers:
- peer-address: 10.0.0.1
peer-asn: 64501
my-asn: 64500
address-pools:
- name: default
protocol: bgp
addresses:
- 192.168.10.0/24
EOF
kubectl apply -f metallb-conf.yaml

Pre-provisioned FIPS: Install Kommander


This section provides installation instructions for the Kommander component of NKP in a non-air-gapped
pre-provisioned environment.

About this task


Once you have installed the Konvoy component of NKP, you will continue with the installation of the
Kommander component that will bring up the UI dashboard.

Tip: Tips and Recommendations

• The --kubeconfig=${CLUSTER_NAME}.conf flag ensures that you install Kommander on the correct
cluster. For alternatives, see Provide Context for Commands with a kubeconfig File.
• Applications can take longer to deploy, and time out the installation. Add the --wait-timeout <time
to wait> flag and specify a period of time (for example, 1h) to allocate more time to the deployment
of applications.
• If the Kommander installation fails, or you wish to reconfigure applications, rerun the install
command to retry.

Before you begin:

• Ensure you have reviewed all Prerequisites for Install.
• Ensure you have a default StorageClass.
• Note the name of the cluster where you want to install Kommander. If you do not know the cluster name, use
kubectl get clusters -A to display and find it.

Create your Kommander Installation Configuration File

Procedure

1. Set the environment variable for your cluster.


export CLUSTER_NAME=<your-management-cluster-name>

2. Copy the kubeconfig file of your Management cluster to your local directory.
nkp get kubeconfig -c ${CLUSTER_NAME} >> ${CLUSTER_NAME}.conf

3. Create a configuration file for the deployment.


nkp install kommander --init > kommander.yaml

4. Edit the installer file to include configuration overrides for the rook-ceph-cluster. NKP’s default
configuration ships Ceph with PersistentVolumeClaim (PVC) based storage which requires your CSI provider
to support PVC with type volumeMode: Block. As this is not possible with the default local static provisioner,
you can install Ceph in host storage mode. You can choose whether Ceph’s object storage daemon (osd) pods can
consume all or just some of the devices on your nodes. Include one of the following Overrides.

a. To automatically assign all raw storage devices on all nodes to the Ceph cluster.
rook-ceph-cluster:
enabled: true
values: |
cephClusterSpec:
storage:
storageClassDeviceSets: []
useAllDevices: true
useAllNodes: true
deviceFilter: "<<value>>"

b. To assign specific storage devices on all nodes to the Ceph cluster.


rook-ceph-cluster:
enabled: true
values: |
cephClusterSpec:
storage:
storageClassDeviceSets: []
useAllNodes: true
useAllDevices: false
deviceFilter: "^sdb."

Note: If you want to assign specific devices to specific nodes using the deviceFilter option, refer to
Specific Nodes and Devices. For general information on the deviceFilter value, refer to Storage
Selection Settings.

5. If required: Customize your kommander.yaml.

a. See the Kommander Customizations page for customization options. Some options include Custom Domains and
Certificates, HTTP Proxy, and External Load Balancer.

6. Enable NKP Catalog Applications and install Kommander: in the same kommander.yaml from the previous
section, add these values (if you are enabling NKP Catalog Applications) for nkp-catalog-applications.
apiVersion: config.kommander.mesosphere.io/v1alpha1
kind: Installation
catalog:
repositories:
- name: nkp-catalog-applications
labels:
kommander.d2iq.io/project-default-catalog-repository: "true"
kommander.d2iq.io/workspace-default-catalog-repository: "true"
kommander.d2iq.io/gitapps-gitrepository-type: "nkp"
gitRepositorySpec:
url: https://github.com/mesosphere/nkp-catalog-applications
ref:
tag: v2.12.0

7. Use the customized kommander.yaml to install NKP.


nkp install kommander --installer-config kommander.yaml --kubeconfig=
${CLUSTER_NAME}.conf

Note: If you only want to enable catalog applications to an existing configuration, add these values to an existing
installer configuration file to maintain your Management cluster’s settings.
If you want to enable NKP Catalog applications after installing NKP, see Enable NKP Catalog
Applications after Installing NKP.

Verifying your Installation and UI Log in


Verify Kommander Install and Log in to the Dashboard UI

About this task


Verify Kommander Installation

Note: If the Kommander installation fails or you wish to reconfigure applications, you can rerun the install command
to retry the installation.

Procedure
You can check the status of the installation using the following command.
kubectl -n kommander wait --for condition=Ready helmreleases --all --timeout 15m

Note: If you prefer the CLI to not wait for all applications to become ready, you can set the --wait=false flag.

First, wait for each of the Helm charts to reach the Ready condition; this eventually results in output resembling the
following:
helmrelease.helm.toolkit.fluxcd.io/centralized-grafana condition met
helmrelease.helm.toolkit.fluxcd.io/dex condition met
helmrelease.helm.toolkit.fluxcd.io/dex-k8s-authenticator condition met
helmrelease.helm.toolkit.fluxcd.io/fluent-bit condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-logging condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-loki condition met
helmrelease.helm.toolkit.fluxcd.io/karma condition met
helmrelease.helm.toolkit.fluxcd.io/kommander condition met
helmrelease.helm.toolkit.fluxcd.io/kommander-appmanagement condition met
helmrelease.helm.toolkit.fluxcd.io/kube-prometheus-stack condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost-thanos-traefik condition met

helmrelease.helm.toolkit.fluxcd.io/kubefed condition met
helmrelease.helm.toolkit.fluxcd.io/kubernetes-dashboard condition met
helmrelease.helm.toolkit.fluxcd.io/kubetunnel condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator-logging condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-adapter condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/reloader condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph-cluster condition met
helmrelease.helm.toolkit.fluxcd.io/thanos condition met
helmrelease.helm.toolkit.fluxcd.io/traefik condition met
helmrelease.helm.toolkit.fluxcd.io/traefik-forward-auth-mgmt condition met
helmrelease.helm.toolkit.fluxcd.io/velero condition met

Failed HelmReleases

Procedure
If an application fails to deploy, check the status of a HelmRelease using the command:
kubectl -n kommander get helmrelease <HELMRELEASE_NAME>
If you find any HelmReleases in a “broken” release state, such as “exhausted” or “another rollback/release in
progress”, trigger a reconciliation of the HelmRelease using the following commands:
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": true}]'
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": false}]'

Log in to the UI

Procedure

1. By default, you can log in to the UI in Kommander with the credentials given using this command.
nkp open dashboard --kubeconfig=${CLUSTER_NAME}.conf

2. Retrieve your credentials at any time if necessary.


kubectl -n kommander get secret nkp-credentials -o go-template='Username:
{{.data.username|base64decode}}{{ "\n"}}Password: {{.data.password|base64decode}}
{{ "\n"}}'

3. Retrieve the URL used for accessing the UI with the following.
kubectl -n kommander get svc kommander-traefik -o go-template='https://{{with
index .status.loadBalancer.ingress 0}}{{or .hostname .ip}}{{end}}/nkp/kommander/
dashboard{{ "\n"}}'
Only use the static credentials to access the UI for configuring an external identity provider. Treat them as
backup credentials rather than using them for normal access.

a. Rotate the password using the following command.


nkp experimental rotate dashboard-password
The output displays the new password:
Password: kqZ31lMBSCLcBjUKVwLJMQL2PxalipIzZw5Pjyw09wDqjWV3dz2wPSSBYi09JGJp

Create Managed Clusters Using the NKP CLI
This topic explains how to continue using the CLI to create managed clusters in a Pre-provisioned
environment with FIPS rather than switching to the UI dashboard.

About this task


After initial cluster creation, you have the ability to create additional clusters from the CLI. In a previous
step, the new cluster was created as self-managed, which allows it to be a Management cluster or a standalone
cluster. Subsequent new clusters are not self-managed, as they will likely be Managed or Attached
clusters to this Management Cluster.

Note: When creating Managed clusters, you do not need to create and move CAPI objects, or install the
Kommander component. Those tasks are only done on Management clusters!
To make the new managed cluster a part of a Workspace, set that workspace environment variable.

Procedure

1. If you have an existing Workspace name, run this command to find the name.
kubectl get workspace -A

2. When you have the Workspace name, set the WORKSPACE_NAMESPACE environment variable.
export WORKSPACE_NAMESPACE=<workspace_namespace>

Note: If you need to create a new Workspace, follow the instructions to Create a New Workspace

Name Your Cluster

About this task


Each cluster must have a unique name.
After you have defined the infrastructure and control plane endpoints, you can proceed to create the cluster by
following these steps to create a new pre-provisioned cluster. Unlike the initial cluster, this cluster is not self-managed;
it is managed by the Management cluster you created earlier.
First, you must name your cluster. Then, you run the command to deploy it.

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the
name has capital letters. See Kubernetes for more naming information.
When specifying the cluster-name, you must use the same cluster-name as used when defining your
inventory objects.

Perform both steps to name the cluster:

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable.


export CLUSTER_NAME=<preprovisioned-additional>

Create a Kubernetes Cluster

About this task


After you have defined the infrastructure and control plane endpoints, you can proceed to create the
cluster by following these steps to create a new pre-provisioned cluster.
This process creates a Managed cluster that attaches to the Management cluster; it is not self-managed.

Tip: Before you create a new NKP cluster below, choose an external load balancer (LB) or virtual IP and use the
corresponding nkp create cluster command.

In a Pre-provisioned environment, use the Kubernetes CSI and third-party drivers for local volumes and other storage
devices in your data center.

Caution: NKP uses a local static provisioner as the default storage provider for a pre-provisioned environment.
However, localvolumeprovisioner is not suitable for production use. Use Kubernetes CSI compatible
storage that is suitable for production.

After disabling localvolumeprovisioner, you can choose from any of the storage options available for
Kubernetes. To make that storage the default storage, use the commands shown in this section of the Kubernetes
documentation: Change the default StorageClass
For Pre-provisioned environments, you define a set of nodes that already exist. During the cluster creation process,
Konvoy Image Builder (KIB) is built into NKP and automatically runs the machine configuration process (which
KIB uses to build images for other providers) against the set of nodes that you defined. This results in your pre-
existing or pre-provisioned nodes being configured properly.
The following command relies on the pre-provisioned cluster API infrastructure provider to initialize the Kubernetes
control plane and worker nodes on the hosts defined in the inventory YAML previously created.

Procedure

1. This command uses the default external load balancer (LB) option.
nkp create cluster preprovisioned \
--cluster-name ${CLUSTER_NAME} \
--control-plane-endpoint-host <control plane endpoint host> \
--control-plane-endpoint-port <control plane endpoint port, if different than 6443> \
--pre-provisioned-inventory-file preprovisioned_inventory.yaml \
--ssh-private-key-file <path-to-ssh-private-key> \
--kubernetes-version=v1.29.6+fips.0 \
--etcd-version=3.5.10+fips.0 \
--kubernetes-image-repository=docker.io/mesosphere

2. Use the wait command to monitor the cluster control-plane readiness.


kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=30m

Note: Depending on the cluster size, it might take a few minutes to create.

cluster.cluster.x-k8s.io/preprovisioned-additional condition met

Tip: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-
proxy, and --no-proxy and their related values in this command for it to be successful. More information is
available in Configuring an HTTP or HTTPS Proxy on page 644.



Manually Attach an NKP CLI Cluster to the Management Cluster

Procedure

When you create a Managed Cluster with the NKP CLI, it attaches automatically to the Management Cluster after a
few moments. However, if you do not set a workspace, the attached cluster will be created in the default workspace.
To ensure that the attached cluster is created in your desired workspace namespace, follow these instructions:

1. Confirm you have your MANAGED_CLUSTER_NAME variable set with the following command.
echo ${MANAGED_CLUSTER_NAME}

2. Retrieve your kubeconfig from the cluster you have created without setting a workspace.
nkp get kubeconfig --cluster-name ${MANAGED_CLUSTER_NAME} > ${MANAGED_CLUSTER_NAME}.conf

3. Note: This step is only necessary if you did not set the workspace of your cluster upon creation.

You can now either attach the cluster to a workspace through the UI, as described earlier, or attach it to the
desired workspace using the CLI as shown in the following steps.

4. Retrieve the workspace where you want to attach the cluster.


kubectl get workspaces -A

5. Set the WORKSPACE_NAMESPACE environment variable.


export WORKSPACE_NAMESPACE=<workspace-namespace>

6. You need to create a secret in the desired workspace before attaching the cluster to that workspace. Retrieve the
kubeconfig secret value of your cluster.
kubectl -n default get secret ${MANAGED_CLUSTER_NAME}-kubeconfig -o go-template='{{.data.value}}{{ "\n"}}'

7. This will return a lengthy value. Copy this entire string for a secret using the template below as a reference.
Create a new attached-cluster-kubeconfig.yaml file.
apiVersion: v1
kind: Secret
metadata:
  name: <your-managed-cluster-name>-kubeconfig
  labels:
    cluster.x-k8s.io/cluster-name: <your-managed-cluster-name>
type: cluster.x-k8s.io/secret
data:
  value: <value-you-copied-from-secret-above>

8. Create this secret in the desired workspace.


kubectl apply -f attached-cluster-kubeconfig.yaml --namespace ${WORKSPACE_NAMESPACE}

9. Create this kommandercluster object to attach the cluster to the workspace.


cat << EOF | kubectl apply -f -
apiVersion: kommander.mesosphere.io/v1beta1
kind: KommanderCluster
metadata:
  name: ${MANAGED_CLUSTER_NAME}
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  kubeconfigRef:
    name: ${MANAGED_CLUSTER_NAME}-kubeconfig
  clusterRef:
    capiCluster:
      name: ${MANAGED_CLUSTER_NAME}
EOF

10. You can now view this cluster in your Workspace in the UI and you can confirm its status by running the below
command. It may take a few minutes to reach "Joined" status.
kubectl get kommanderclusters -A
If you have several Pro clusters and want to turn one of them into a Managed cluster that is centrally administered
by a Management cluster, see Platform Expansion.

Pre-provisioned FIPS Air-gapped Install


This section provides instructions to install NKP in a Pre-provisioned air-gapped environment with FIPS
requirements.

Ensure Configuration
If not already done, see the documentation for:

• Resource Requirements on page 38


• Installing NKP on page 47
• Prerequisites for Installation on page 44

Note: For air-gapped, ensure you download the bundle nkp-air-gapped-


bundle_v2.12.0_linux_amd64.tar.gz and extract the tar file to a local directory. For more information, see
Downloading NKP on page 16.
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz

Pre-provisioned Air-gapped FIPS: Configure Environment


To create a cluster in a Pre-provisioned air-gapped environment, you must first prepare the environment with the
necessary artifacts. The instructions below outline how to fulfill the requirements for using pre-provisioned
infrastructure in an air-gapped environment.
All artifacts for a Pre-provisioned air-gapped installation must be placed on the bastion host, and the artifacts
needed by nodes must be unpacked and distributed from the bastion before other provisioning can work without an
internet connection.
An air-gapped bundle is available to download. In previous NKP releases, the distro package bundles were
included in the downloaded air-gapped bundle. Currently, the air-gapped bundle contains the following artifacts,
with the exception of the distro packages:

• NKP Kubernetes packages


• Python packages (provided by upstream)
• Containerd tarball
1. Download nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz , and extract the tarball to a local
directory:
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz && cd nkp-v2.12.0/kib



2. You must fetch the distro packages as well as other artifacts. By fetching the distro packages from the distro
repositories, you get the latest security fixes available at machine image build time.
3. In your download location, there is a bundles directory with everything needed to create an OS package bundle for a
particular OS. To create it, run the create-package-bundle command. This builds an OS bundle
using the Kubernetes version defined in ansible/group_vars/all/defaults.yaml. Example command:
./konvoy-image create-package-bundle --os redhat-8.4 --output-directory=artifacts
NOTE: For FIPS, pass the flag --fips.
NOTE: For RHEL OS, export your Red Hat Subscription Manager credentials (RHSM_ACTIVATION_KEY and RHSM_ORG_ID). Example:
export RHSM_ACTIVATION_KEY="-ci"
export RHSM_ORG_ID="1232131"
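For this FIPS install, the flags described above can be combined into a single invocation; a sketch, assuming the redhat-8.4 example OS and that the RHSM variables are already exported:
./konvoy-image create-package-bundle --os redhat-8.4 --fips --output-directory=artifacts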

Setup Process

1. The bootstrap image must be extracted and loaded onto the bastion host.
2. Artifacts must be copied onto cluster hosts for nodes to access.
3. If using GPU, those artifacts must be positioned locally.
4. The local registry must be seeded with images.

Load the Bootstrap Image


1. Assuming you have downloaded nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz from the
download site mentioned above and extracted the tarball, you will load the bootstrap.
2. Load the bootstrap image on your bastion machine:
docker load -i konvoy-bootstrap-image-v2.12.0.tar

Copy air-gapped artifacts onto cluster hosts


Using Konvoy Image Builder, you can copy the required artifacts onto your cluster hosts.
1. Assuming you have downloaded nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz, extract the
tarball to a local directory.
2. The Kubernetes image bundle is located in kib/artifacts/images. Verify the image bundles and
artifacts.
1. Verify the image bundles exist in artifacts/images:
$ ls artifacts/images/
kubernetes-images-1.29.6-d2iq.1.tar kubernetes-images-1.29.6-d2iq.1-fips.tar

2. Verify the artifacts for your OS exist in the artifacts/ directory and export the appropriate variables:
$ ls artifacts/
1.29.6_centos_7_x86_64.tar.gz
1.29.6_centos_7_x86_64_fips.tar.gz
1.29.6_redhat_7_x86_64.tar.gz
1.29.6_redhat_7_x86_64_fips.tar.gz
1.29.6_redhat_8_x86_64.tar.gz
1.29.6_redhat_8_x86_64_fips.tar.gz
1.29.6_rocky_9_x86_64.tar.gz
1.29.6_ubuntu_20_x86_64.tar.gz
containerd-1.6.28-d2iq.1-centos-7.9-x86_64.tar.gz
containerd-1.6.28-d2iq.1-centos-7.9-x86_64_fips.tar.gz
containerd-1.6.28-d2iq.1-rhel-7.9-x86_64.tar.gz
containerd-1.6.28-d2iq.1-rhel-7.9-x86_64_fips.tar.gz
containerd-1.6.28-d2iq.1-rhel-8.4-x86_64.tar.gz
containerd-1.6.28-d2iq.1-rhel-8.4-x86_64_fips.tar.gz
containerd-1.6.28-d2iq.1-rhel-8.6-x86_64.tar.gz
containerd-1.6.28-d2iq.1-rhel-8.6-x86_64_fips.tar.gz
containerd-1.6.28-d2iq.1-rocky-9.0-x86_64.tar.gz
containerd-1.6.28-d2iq.1-rocky-9.1-x86_64.tar.gz
containerd-1.6.28-d2iq.1-ubuntu-20.04-x86_64.tar.gz
images
pip-packages.tar.gz

3. For example, for RHEL 8.4 you set:


export OS_PACKAGES_BUNDLE=1.29.6_redhat_8_x86_64.tar.gz
export CONTAINERD_BUNDLE=containerd-1.6.28-d2iq.1-rhel-8.4-x86_64.tar.gz

3. Export the following environment variables, ensuring that all control plane and worker nodes are included:
export CONTROL_PLANE_1_ADDRESS="<control-plane-address-1>"
export CONTROL_PLANE_2_ADDRESS="<control-plane-address-2>"
export CONTROL_PLANE_3_ADDRESS="<control-plane-address-3>"
export WORKER_1_ADDRESS="<worker-address-1>"
export WORKER_2_ADDRESS="<worker-address-2>"
export WORKER_3_ADDRESS="<worker-address-3>"
export WORKER_4_ADDRESS="<worker-address-4>"
export SSH_USER="<ssh-user>"
export SSH_PRIVATE_KEY_FILE="<private key file>"
SSH_PRIVATE_KEY_FILE must be either the name of the SSH private key file in your working directory or an
absolute path to the file in your user’s home directory.
4. Generate an inventory.yaml which is automatically picked up by the konvoy-image upload in the next step.
cat <<EOF > inventory.yaml
all:
  vars:
    ansible_user: $SSH_USER
    ansible_port: 22
    ansible_ssh_private_key_file: $SSH_PRIVATE_KEY_FILE
  hosts:
    $CONTROL_PLANE_1_ADDRESS:
      ansible_host: $CONTROL_PLANE_1_ADDRESS
    $CONTROL_PLANE_2_ADDRESS:
      ansible_host: $CONTROL_PLANE_2_ADDRESS
    $CONTROL_PLANE_3_ADDRESS:
      ansible_host: $CONTROL_PLANE_3_ADDRESS
    $WORKER_1_ADDRESS:
      ansible_host: $WORKER_1_ADDRESS
    $WORKER_2_ADDRESS:
      ansible_host: $WORKER_2_ADDRESS
    $WORKER_3_ADDRESS:
      ansible_host: $WORKER_3_ADDRESS
    $WORKER_4_ADDRESS:
      ansible_host: $WORKER_4_ADDRESS
EOF

5. Upload the artifacts onto cluster hosts with the following command:
konvoy-image upload artifacts \
--container-images-dir=./artifacts/images/ \
--os-packages-bundle=./artifacts/$OS_PACKAGES_BUNDLE \
--containerd-bundle=artifacts/$CONTAINERD_BUNDLE \
--pip-packages-bundle=./artifacts/pip-packages.tar.gz
KIB uses variable overrides to specify the base image and container images to use in your new machine image. The
variable override files for NVIDIA and FIPS can be ignored unless you are adding an overlay feature.

• Use the --overrides flag and reference either the fips.yaml or offline-fips.yaml manifest located in the
overrides directory (as shown in the sketch after this list), or see these pages in the documentation:

• FIPS Overrides
• Create FIPS 140 Images
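For example, a sketch of the upload command from the previous step with the FIPS override applied; this assumes the offline-fips.yaml manifest is present in the overrides directory of the extracted bundle:
konvoy-image upload artifacts \
    --container-images-dir=./artifacts/images/ \
    --os-packages-bundle=./artifacts/$OS_PACKAGES_BUNDLE \
    --containerd-bundle=artifacts/$CONTAINERD_BUNDLE \
    --pip-packages-bundle=./artifacts/pip-packages.tar.gz \
    --overrides overrides/offline-fips.yaml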



Pre-provisioned Air-gapped FIPS: Load the Registry
Before creating an air-gapped Kubernetes cluster, you need to load the required images in a local registry
for the Konvoy component.

About this task


The complete NKP air-gapped bundle is needed for an air-gapped environment but can also be used in
a non-air-gapped environment. The bundle contains all the NKP components needed for an air-gapped
environment installation and also to use a local registry in a non-air-gapped environment.

Note: If you do not already have a local registry set up, see the Local Registry Tools page for more information.

If you are operating in an air-gapped environment, a local container registry containing all the necessary installation
images, including the Kommander images, is required. This registry must be accessible from both the bastion machine
and the machines that will make up the Kubernetes cluster.
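As a quick sanity check before seeding the registry, you can confirm it is reachable from the bastion; a sketch, assuming the registry exposes the standard Docker Registry v2 API:
# An HTTP 200 (or 401 when authentication is required) response indicates the registry is reachable
curl -k https://<registry-address>:<registry-port>/v2/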

Procedure

1. If not already done in prerequisites, download the air-gapped bundle nkp-air-gapped-


bundle_v2.12.0_linux_amd64.tar.gz , and extract the tarball to a local directory.
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz

2. Subsequent steps use commands that access files from different directories within the extracted bundle. For
example, for the bootstrap cluster, change to the nkp-<version> directory (adjust the path for your current
location):

3. Set environment variables with your registry address, credentials, and CA certificate path.
export REGISTRY_URL="<https/http>://<registry-address>:<registry-port>"
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>
export REGISTRY_CA=<path to the cacert file on the bastion>

4. Run the following command to load the air-gapped image bundle into your private registry, applying the
variables you set above with the relevant flags.
nkp push bundle --bundle ./container-images/konvoy-image-bundle-v2.12.0.tar --to-registry=${REGISTRY_URL} --to-registry-username=${REGISTRY_USERNAME} --to-registry-password=${REGISTRY_PASSWORD}

Note: It may take some time to push all the images to your image registry, depending on the performance of the
network between the machine you are running the script on and the registry.

Important: To increase Docker Hub's rate limit, use your Docker Hub credentials when creating the cluster
by setting the flags --registry-mirror-url=https://registry-1.docker.io,
--registry-mirror-username=, and --registry-mirror-password= on the nkp create cluster
command.



5. Load the Kommander component images to your private registry using the command.
nkp push bundle --bundle ./container-images/kommander-image-bundle-v2.12.0.tar --to-registry=${REGISTRY_URL} --to-registry-username=${REGISTRY_USERNAME} --to-registry-password=${REGISTRY_PASSWORD}
Optional: This step is required only if you have an Ultimate license.
For NKP Catalog Applications available with the Ultimate license, perform this image load by running the
following command to load the nkp-catalog-applications image bundle into your private registry:
nkp push bundle --bundle ./container-images/nkp-catalog-applications-image-bundle-v2.12.0.tar --to-registry=${REGISTRY_URL} --to-registry-username=${REGISTRY_USERNAME} --to-registry-password=${REGISTRY_PASSWORD}

Pre-provisioned Air-gapped FIPS: Define Infrastructure


Define the cluster hosts and infrastructure in a pre-provisioned environment.

About this task


Konvoy needs to know how to access your cluster hosts. This is done using inventory resources. For initial
cluster creation, you must define a control-plane and at least one worker pool.
This procedure sets the necessary environment variables.

Procedure

1. Export the following environment variables, ensuring that all control plane and worker nodes are included:
export CONTROL_PLANE_1_ADDRESS="<control-plane-address-1>"
export CONTROL_PLANE_2_ADDRESS="<control-plane-address-2>"
export CONTROL_PLANE_3_ADDRESS="<control-plane-address-3>"
export WORKER_1_ADDRESS="<worker-address-1>"
export WORKER_2_ADDRESS="<worker-address-2>"
export WORKER_3_ADDRESS="<worker-address-3>"
export WORKER_4_ADDRESS="<worker-address-4>"
export SSH_USER="<ssh-user>"
export SSH_PRIVATE_KEY_SECRET_NAME="$CLUSTER_NAME-ssh-key"

2. Use the following template to help you define your infrastructure. The environment variables that you set in the
previous step automatically replace the variable names when the inventory YAML file is created.
cat <<EOF > preprovisioned_inventory.yaml
---
apiVersion: infrastructure.cluster.konvoy.d2iq.io/v1alpha1
kind: PreprovisionedInventory
metadata:
  name: $CLUSTER_NAME-control-plane
  namespace: default
  labels:
    cluster.x-k8s.io/cluster-name: $CLUSTER_NAME
    clusterctl.cluster.x-k8s.io/move: ""
spec:
  hosts:
    # Create as many of these as needed to match your infrastructure
    # Note that the command line parameter --control-plane-replicas determines how many control plane nodes will actually be used.
    - address: $CONTROL_PLANE_1_ADDRESS
    - address: $CONTROL_PLANE_2_ADDRESS
    - address: $CONTROL_PLANE_3_ADDRESS
  sshConfig:
    port: 22
    # This is the username used to connect to your infrastructure. This user must be root or
    # have the ability to use sudo without a password
    user: $SSH_USER
    privateKeyRef:
      # This is the name of the secret you created in the previous step. It must exist in the same
      # namespace as this inventory object.
      name: $SSH_PRIVATE_KEY_SECRET_NAME
      namespace: default
---
apiVersion: infrastructure.cluster.konvoy.d2iq.io/v1alpha1
kind: PreprovisionedInventory
metadata:
  name: $CLUSTER_NAME-md-0
  namespace: default
  labels:
    cluster.x-k8s.io/cluster-name: $CLUSTER_NAME
    clusterctl.cluster.x-k8s.io/move: ""
spec:
  hosts:
    - address: $WORKER_1_ADDRESS
    - address: $WORKER_2_ADDRESS
    - address: $WORKER_3_ADDRESS
    - address: $WORKER_4_ADDRESS
  sshConfig:
    port: 22
    user: $SSH_USER
    privateKeyRef:
      name: $SSH_PRIVATE_KEY_SECRET_NAME
      namespace: default
EOF

Pre-provisioned Air-gapped FIPS: Define Control Plane Endpoint


Define the control plane endpoint for your cluster as well as the connection mechanism. A control plane needs to have
three, five, or seven nodes so it can remain available if one or more nodes fail. A control plane with one node is not
for production use.
In addition, the control plane needs an endpoint that remains available if nodes fail.
-------- cp1.example.com:6443
|
lb.example.com:6443 ---------- cp2.example.com:6443
|
-------- cp3.example.com:6443
In this example, the control plane endpoint host is lb.example.com, and the control plane endpoint port is 6443.
The control plane nodes are cp1.example.com, cp2.example.com, and cp3.example.com. The port of each API
server is 6443.

Select your Connection Mechanism


A virtual IP is the address that the client uses to connect to the service. A load balancer is the device that distributes
the client connections to the backend servers. Before you create a new NKP cluster, choose an external load balancer
(LB) or virtual IP.

• External load balancer


It is recommended that an external load balancer be the control plane endpoint. To distribute request load among
the control plane machines, configure the load balancer to send requests to all the control plane machines.
Configure the load balancer to send requests only to control plane machines that are responding to API requests.



• Built-in virtual IP
If an external load balancer is not available, use the built-in virtual IP. The virtual IP is not a load balancer; it does
not distribute request load among the control plane machines. However, if the machine receiving requests does not
respond to them, the virtual IP automatically moves to another machine.

Single-Node Control Plane

Caution: Do not use a single-node control plane in a production cluster.

A control plane with one node can use its single node as the endpoint, so you will not require an external load
balancer, or a built-in virtual IP. At least one control plane node must always be running. Therefore, to upgrade a
cluster with one control plane node, a spare machine must be available in the control plane inventory. This machine
is used to provision the new node before the old node is deleted. When the API server endpoints are defined, you can
create the cluster using the link in Next Step below.

Note: Modify Control Plane Audit logs settings using the information contained in the page Configure the Control
Plane.

Known Limitations
The control plane endpoint port is also used as the API server port on each control plane machine. The default port is
6443. Before you create the cluster, ensure the port is available for use on each control plane machine.
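To confirm the port is free on a control plane machine, a quick check (a sketch, assuming the ss utility is available on the host):
# No output means nothing is listening on port 6443, so it is available for the API server
sudo ss -tlnp | grep 6443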

Pre-provisioned Air-gapped FIPS: Creating a Management Cluster


Create a new Pre-provisioned Kubernetes cluster in an air-gapped environment.

About this task


After you have defined the infrastructure and control plane endpoints, you can proceed to creating the
cluster by following these steps to create a new pre-provisioned cluster.

Before you begin


First you must name your cluster, and then you run the command to deploy it.

Note: When specifying the cluster-name, you must use the same cluster-name as used when defining your
inventory objects.

Procedure

1. Give your cluster a unique name suitable for your environment.

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the
name has capital letters. See Kubernetes for more naming information.

2. Set the environment variable: export CLUSTER_NAME=<preprovisioned-example>

What to do next
Create a Kubernetes Cluster
If your cluster is air-gapped or you have a local registry, you must provide additional arguments when creating the
cluster. These arguments tell the cluster where to locate the local registry to use by defining its URL and credentials.
export REGISTRY_URL=<https/http>://<registry-address>:<registry-port>
export REGISTRY_CA=<path to the CA on the bastion>
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>

• REGISTRY_URL: the address of an existing registry accessible from the cluster; the new cluster nodes are
configured to use it as a mirror registry when pulling images.
• REGISTRY_CA: (optional) the path on the bastion machine to the registry CA. Konvoy configures the cluster
nodes to trust this CA. This value is only needed if the registry uses a self-signed certificate and the machine
images are not already configured to trust this CA.
• REGISTRY_USERNAME: optional; set to a user that has pull access to this registry.
• REGISTRY_PASSWORD: optional; only required when a username is set.

Before you create a new NKP cluster below, choose an external load balancer (LB) or virtual IP and use the
corresponding nkp create cluster command example. Other customizations are available but require different
flags during the nkp create cluster command; refer to Pre-provisioned Cluster Creation Customization
Choices for more cluster customizations.
In a pre-provisioned environment, use the Kubernetes CSI and third party drivers for local volumes and other storage
devices in your data center.

Note: NKP uses local static provisioner as the default storage provider for a pre-provisioned environment.
However, localvolumeprovisioner is not suitable for production use. Use Kubernetes CSI compatible
storage that is suitable for production.

After disabling localvolumeprovisioner, you can choose from any of the storage options available for
Kubernetes. To make that storage the default storage, use the commands shown in this section of the Kubernetes
documentation: Changing the Default Storage Class

Important: If you need to increase Docker Hub's rate limit, use your Docker Hub credentials when creating the
cluster by setting the flags --registry-mirror-url=https://registry-1.docker.io,
--registry-mirror-username=, and --registry-mirror-password= on the nkp create cluster
command. See Docker Hub's rate limit.

Note: (Optional) Use a registry mirror. Configure your cluster to use an existing local registry as a mirror when
attempting to pull images previously pushed to your registry when defining your infrastructure. Instructions in the
expandable Custom Installation section. For registry mirror information, see topics Using a Registry Mirror and
Registry Mirror Tools.

The create cluster command below includes the --self-managed flag. A self-managed cluster refers to one in
which the CAPI resources and controllers that describe and manage it are running on the same cluster they are
managing.
This command uses the default external load balancer (LB) option (see alternative Step 1 for virtual IP):
nkp create cluster preprovisioned --cluster-name ${CLUSTER_NAME} \
--control-plane-endpoint-host <control plane endpoint host> \
--control-plane-endpoint-port <control plane endpoint port, if different than 6443> \
--pre-provisioned-inventory-file preprovisioned_inventory.yaml \
--ssh-private-key-file <path-to-ssh-private-key> \
--registry-mirror-url=${REGISTRY_URL} \
--registry-mirror-cacert=${REGISTRY_CA} \
--registry-mirror-username=${REGISTRY_USERNAME} \
--registry-mirror-password=${REGISTRY_PASSWORD} \
--self-managed



Note: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-
proxy, and --no-proxy and their related values in this command for it to be successful. More information is
available in Configuring an HTTP or HTTPS Proxy on page 644.

1. Alternative: virtual IP. If you do not have an external LB and want to use a virtual IP provided by
kube-vip, specify these flags as in the example below:
nkp create cluster preprovisioned \
--cluster-name ${CLUSTER_NAME} \
--control-plane-endpoint-host 196.168.1.10 \
--virtual-ip-interface eth1

2. Use the wait command to monitor the cluster control-plane readiness:


kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=30m
Output:
cluster.cluster.x-k8s.io/preprovisioned-example condition met

Note: Depending on the cluster size, it will take a few minutes to create.

When the command completes, you will have a running Kubernetes cluster! For bootstrap and custom YAML cluster
creation, refer to the Additional Infrastructure Customization section of the documentation for Pre-provisioned: Pre-
provisioned Infrastructure
Use this command to get the Kubernetes kubeconfig for the new cluster and proceed to installing the NKP
Kommander UI:
nkp get kubeconfig -c ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf

Note: If changing the Calico encapsulation, Nutanix recommends changing it after cluster creation, but before
production.

Audit Logs
To modify control plane audit log settings, use the information contained in the page Configure the Control
Plane.

Configure Air-gapped MetalLB


Create a MetalLB ConfigMap for your pre-provisioned infrastructure.
If your environment is not currently equipped with a load balancer, you can use MetalLB; otherwise, your own load
balancer will work and you can continue the installation process with Pre-provisioned: Install Kommander. To use
MetalLB, create a MetalLB ConfigMap for your pre-provisioned infrastructure. MetalLB uses one of two protocols
to announce service IPs and expose Kubernetes services:

• Layer 2, with Address Resolution Protocol (ARP)


• Border Gateway Protocol (BGP)
Select one of the following procedures to create your MetalLB manifest for further editing.

Layer 2 Configuration
Layer 2 mode is the simplest to configure: in many cases, you don’t need any protocol-specific configuration, only IP
addresses.
Layer 2 mode does not require the IPs to be bound to the network interfaces of your worker nodes. It works by
responding to ARP requests on your local network directly, to give the machine’s MAC address to clients.



• MetalLB IP address ranges or CIDRs need to be within the node's primary network subnet.
• MetalLB IP address ranges or CIDRs and the node subnet must not conflict with the Kubernetes cluster pod and
service subnets.
For example, the following configuration gives MetalLB control over IPs from 192.168.1.240 to 192.168.1.250, and
configures Layer 2 mode:
The following values are generic, enter your specific values into the fields where applicable.
cat << EOF > metallb-conf.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
EOF
kubectl apply -f metallb-conf.yaml

BGP Configuration
For a basic configuration featuring one BGP router and one IP address range, you need 4 pieces of information:

• The router IP address that MetalLB needs to connect to.


• The router's autonomous system (AS) number.
• The AS number for MetalLB to use.
• An IP address range expressed as a Classless Inter-Domain Routing (CIDR) prefix.
As an example, if you want to give MetalLB the range 192.168.10.0/24 and AS number 64500, and connect it to a
router at 10.0.0.1 with AS number 64501, your configuration will look like:
cat << EOF > metallb-conf.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.0.0.1
      peer-asn: 64501
      my-asn: 64500
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 192.168.10.0/24
EOF
kubectl apply -f metallb-conf.yaml



Pre-provisioned Air-gapped FIPS: Install Kommander
This section provides installation instructions for the Kommander component of NKP in an air-gapped
pre-provisioned environment.

About this task


Once you have installed the Konvoy component of NKP, you will continue with the installation of the
Kommander component that will bring up the UI dashboard.

Tip: Tips and Recommendations

• The --kubeconfig=${CLUSTER_NAME}.conf flag ensures that you install Kommander on the correct
cluster. For alternatives, see Provide Context for Commands with a kubeconfig File.
• Applications can take longer to deploy, and time out the installation. Add the --wait-timeout <time
to wait> flag and specify a period of time (for example, 1h) to allocate more time to the deployment
of applications.
• If the Kommander installation fails, or you wish to reconfigure applications, rerun the install
command to retry.

Prerequisites:

• Ensure you have reviewed all Prerequisites for Install.


• Ensure you have a default StorageClass (see the quick check after this list).
• Ensure you have loaded all the necessary images for your configuration. See: Load the Images into Your Registry:
Air-gapped Environments.
• Note the name of the cluster where you want to install Kommander. If you do not know the cluster name, use
kubectl get clusters -A to display and find it.
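For example, a quick check that a default StorageClass exists on the cluster where you will install Kommander:
# The default StorageClass is marked "(default)" next to its name
kubectl get storageclass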

Create your Kommander Installation Configuration File

Procedure

1. Set the environment variable for your cluster.


export CLUSTER_NAME=<your-management-cluster-name>

2. Copy the kubeconfig file of your Management cluster to your local directory.
nkp get kubeconfig -c ${CLUSTER_NAME} >> ${CLUSTER_NAME}.conf

3. Create a configuration file for the deployment.


nkp install kommander --init > kommander.yaml

4. Edit the installer file to include configuration overrides for the rook-ceph-cluster. NKP’s default
configuration ships Ceph with PVC based storage which requires your CSI provider to support PVC with type
volumeMode: Block. As this is not possible with the default local static provisioner, you can install Ceph in
host storage mode. You can choose whether Ceph’s object storage daemon (osd) pods can consume all or just
some of the devices on your nodes. Include one of the following Overrides.

a. To automatically assign all raw storage devices on all nodes to the Ceph cluster.
rook-ceph-cluster:
  enabled: true
  values: |
    cephClusterSpec:
      storage:
        storageClassDeviceSets: []
        useAllDevices: true
        useAllNodes: true
        deviceFilter: "<<value>>"

b. To assign specific storage devices on all nodes to the Ceph cluster.


rook-ceph-cluster:
  enabled: true
  values: |
    cephClusterSpec:
      storage:
        storageClassDeviceSets: []
        useAllNodes: true
        useAllDevices: false
        deviceFilter: "^sdb."

Note: If you want to assign specific devices to specific nodes using the deviceFilter option, refer to
Specific Nodes and Devices. For general information on the deviceFilter value, refer to Storage
Selection Settings.

5. If required: Customize your kommander.yaml.

a. See Kommander Customizations page for customization options. Some options include Custom Domains and
Certificates, HTTP proxy and External Load Balancer.

6. Enable NKP Catalog Applications and install Kommander: in the same kommander.yaml from the previous
section, add these values (if you are enabling NKP Catalog Apps) for nkp-catalog-applications.
apiVersion: config.kommander.mesosphere.io/v1alpha1
kind: Installation
catalog:
  repositories:
    - name: nkp-catalog-applications
      labels:
        kommander.d2iq.io/project-default-catalog-repository: "true"
        kommander.d2iq.io/workspace-default-catalog-repository: "true"
        kommander.d2iq.io/gitapps-gitrepository-type: "nkp"
      gitRepositorySpec:
        url: https://github.com/mesosphere/nkp-catalog-applications
        ref:
          tag: v2.12.0

7. Use the customized kommander.yaml to install NKP.


nkp install kommander --installer-config kommander.yaml --kubeconfig=${CLUSTER_NAME}.conf

Note: If you only want to enable catalog applications to an existing configuration, add these values to an existing
installer configuration file to maintain your Management cluster’s settings.
If you want to enable NKP Catalog applications after installing NKP, see Enable NKP Catalog
Applications after Installing NKP.

Verifying your Installation and UI Log in


Verify Kommander Install and Log in to the Dashboard UI

About this task


Verify Kommander Installation



Note: If the Kommander installation fails or you wish to reconfigure applications, you can rerun the install command
to retry the installation.

Procedure
You can check the status of the installation using the following command.
kubectl -n kommander wait --for condition=Ready helmreleases --all --timeout 15m

Note: If you prefer the CLI to not wait for all applications to become ready, you can set the --wait=false flag.

The command waits for each of the helm charts to reach their Ready condition, eventually resulting in output
resembling the following:
helmrelease.helm.toolkit.fluxcd.io/centralized-grafana condition met
helmrelease.helm.toolkit.fluxcd.io/dex condition met
helmrelease.helm.toolkit.fluxcd.io/dex-k8s-authenticator condition met
helmrelease.helm.toolkit.fluxcd.io/fluent-bit condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-logging condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-loki condition met
helmrelease.helm.toolkit.fluxcd.io/karma condition met
helmrelease.helm.toolkit.fluxcd.io/kommander condition met
helmrelease.helm.toolkit.fluxcd.io/kommander-appmanagement condition met
helmrelease.helm.toolkit.fluxcd.io/kube-prometheus-stack condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/kubefed condition met
helmrelease.helm.toolkit.fluxcd.io/kubernetes-dashboard condition met
helmrelease.helm.toolkit.fluxcd.io/kubetunnel condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator-logging condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-adapter condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/reloader condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph-cluster condition met
helmrelease.helm.toolkit.fluxcd.io/thanos condition met
helmrelease.helm.toolkit.fluxcd.io/traefik condition met
helmrelease.helm.toolkit.fluxcd.io/traefik-forward-auth-mgmt condition met
helmrelease.helm.toolkit.fluxcd.io/velero condition met

Failed HelmReleases

Procedure
If an application fails to deploy, check the status of a HelmRelease using the following command:
kubectl -n kommander get helmrelease <HELMRELEASE_NAME>
If you find any HelmReleases in a "broken" release state, such as "exhausted" or "another rollback/release in
progress", trigger a reconciliation of the HelmRelease using the following commands:
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": true}]'
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": false}]'



Log in to the UI

Procedure

1. By default, you can log in to the Kommander UI with the credentials given by this command.
nkp open dashboard --kubeconfig=${CLUSTER_NAME}.conf

2. Retrieve your credentials at any time if necessary.


kubectl -n kommander get secret nkp-credentials -o go-template='Username: {{.data.username|base64decode}}{{ "\n"}}Password: {{.data.password|base64decode}}{{ "\n"}}'

3. Retrieve the URL used for accessing the UI with the following command.
kubectl -n kommander get svc kommander-traefik -o go-template='https://{{with index .status.loadBalancer.ingress 0}}{{or .hostname .ip}}{{end}}/NKP/kommander/dashboard{{ "\n"}}'
Only use the static credentials to access the UI for configuring an external identity provider. Treat them as
backup credentials rather than using them for normal access.

a. Rotate the password using the following command.
nkp experimental rotate dashboard-password
The output displays the new password:
Password: kqZ31lMBSCLcBjUKVwLJMQL2PxalipIzZw5Pjyw09wDqjWV3dz2wPSSBYi09JGJp

Create Managed Clusters Using the NKP CLI


This topic explains how to continue using the CLI to create managed clusters in an air-gapped Pre-
provisioned environment with FIPS rather than switching to the UI dashboard.

About this task


After initial cluster creation, you can create additional clusters from the CLI. In a previous
step, the new cluster was created as self-managed, which allows it to serve as a Management cluster or a
standalone cluster. Subsequent new clusters are not self-managed, as they are typically Managed or Attached
clusters of this Management cluster.

Note: When creating Managed clusters, you do not need to create and move CAPI objects, or install the
Kommander component. Those tasks are only done on Management clusters!
To make the new managed cluster a part of a Workspace, set that workspace environment variable.

Procedure

1. If you have an existing Workspace name, run this command to find the name.
kubectl get workspace -A

2. When you have the Workspace name, set the WORKSPACE_NAMESPACE environment variable.
export WORKSPACE_NAMESPACE=<workspace_namespace>

Note: If you need to create a new Workspace, follow the instructions to Create a New Workspace



Name Your Cluster

About this task


Each cluster must have a unique name.
After you have defined the infrastructure and control plane endpoints, you can proceed to creating the cluster by
following these steps to create a new pre-provisioned cluster. This process creates a self-managed cluster to be used
as the Management cluster.
First you must name your cluster. Then you run the command to deploy it.

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the
name has capital letters. See Kubernetes for more naming information.
When specifying the cluster-name, you must use the same cluster-name as used when defining your
inventory objects.

Perform both steps to name the cluster:

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable.


export CLUSTER_NAME=<preprovisioned-additional>

Create a Kubernetes Cluster

About this task


After you have defined the infrastructure and control plane endpoints, you can proceed to creating the
cluster by following these steps to create a new pre-provisioned cluster.
This process creates a self-managed cluster to be used as the Management cluster.

Tip: Before you create a new NKP cluster below, choose an external load balancer (LB) or virtual IP and use the
corresponding NKP create cluster command.

In a Pre-provisioned environment, use the Kubernetes CSI and third party drivers for local volumes and other storage
devices in your data center.

Caution: NKP uses local static provisioner as the default storage provider for a pre-provisioned environment.
However, localvolumeprovisioner is not suitable for production use. Use Kubernetes CSI compatible
storage that is suitable for production.

After disabling localvolumeprovisioner, you can choose from any of the storage options available for
Kubernetes. To make that storage the default storage, use the commands shown in this section of the Kubernetes
documentation: Change the default StorageClass
For Pre-provisioned environments, you define a set of nodes that already exist. During the cluster creation process,
Konvoy Image Builder(KIB) is built into NKP and automatically runs the machine configuration process (which
KIB uses to build images for other providers) against the set of nodes that you defined. This results in your pre-
existing or pre-provisioned nodes being configured properly.
The following command relies on the pre-provisioned cluster API infrastructure provider to initialize the Kubernetes
control plane and worker nodes on the hosts defined in the inventory YAML previously created.



Procedure

1. This command uses the default external load balancer (LB) option.
nkp create cluster preprovisioned --cluster-name ${CLUSTER_NAME} \
--control-plane-endpoint-host <control plane endpoint host> \
--control-plane-endpoint-port <control plane endpoint port, if different than 6443> \
--pre-provisioned-inventory-file preprovisioned_inventory.yaml \
--ssh-private-key-file <path-to-ssh-private-key> \
--registry-mirror-url=${REGISTRY_URL} \
--registry-mirror-cacert=${REGISTRY_CA} \
--registry-mirror-username=${REGISTRY_USERNAME} \
--registry-mirror-password=${REGISTRY_PASSWORD}

2. Use the wait command to monitor the cluster control-plane readiness.


kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=30m

Note: Depending on the cluster size, it might take a few minutes to create.

cluster.cluster.x-k8s.io/preprovisioned-additional condition met

Tip: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-
proxy, and --no-proxy and their related values in this command for it to be successful. More information is
available in Configuring an HTTP or HTTPS Proxy on page 644.

Manually Attach an NKP CLI Cluster to the Management Cluster

Procedure

When you create a Managed Cluster with the NKP CLI, it attaches automatically to the Management Cluster after a
few moments. However, if you do not set a workspace, the attached cluster will be created in the default workspace.
To ensure that the attached cluster is created in your desired workspace namespace, follow these instructions:

1. Confirm you have your MANAGED_CLUSTER_NAME variable set with the following command.
echo ${MANAGED_CLUSTER_NAME}

2. Retrieve your kubeconfig from the cluster you have created without setting a workspace.
nkp get kubeconfig --cluster-name ${MANAGED_CLUSTER_NAME} > ${MANAGED_CLUSTER_NAME}.conf

3. Note: This step is only necessary if you did not set the workspace of your cluster upon creation.

You can now either attach the cluster to a workspace through the UI, as described earlier, or attach it to the
desired workspace using the CLI as shown in the following steps.

4. Retrieve the workspace where you want to attach the cluster.


kubectl get workspaces -A

5. Set the WORKSPACE_NAMESPACE environment variable.


export WORKSPACE_NAMESPACE=<workspace-namespace>



6. You need to create a secret in the desired workspace before attaching the cluster to that workspace. Retrieve the
kubeconfig secret value of your cluster.
kubectl -n default get secret ${MANAGED_CLUSTER_NAME}-kubeconfig -o go-template='{{.data.value}}{{ "\n"}}'

7. This will return a lengthy value. Copy this entire string for a secret using the template below as a reference.
Create a new attached-cluster-kubeconfig.yaml file.
apiVersion: v1
kind: Secret
metadata:
  name: <your-managed-cluster-name>-kubeconfig
  labels:
    cluster.x-k8s.io/cluster-name: <your-managed-cluster-name>
type: cluster.x-k8s.io/secret
data:
  value: <value-you-copied-from-secret-above>

8. Create this secret in the desired workspace.


kubectl apply -f attached-cluster-kubeconfig.yaml --namespace ${WORKSPACE_NAMESPACE}

9. Create this kommandercluster object to attach the cluster to the workspace.


cat << EOF | kubectl apply -f -
apiVersion: kommander.mesosphere.io/v1beta1
kind: KommanderCluster
metadata:
  name: ${MANAGED_CLUSTER_NAME}
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  kubeconfigRef:
    name: ${MANAGED_CLUSTER_NAME}-kubeconfig
  clusterRef:
    capiCluster:
      name: ${MANAGED_CLUSTER_NAME}
EOF

10. You can now view this cluster in your Workspace in the UI and you can confirm its status by running the below
command. It may take a few minutes to reach "Joined" status.
kubectl get kommanderclusters -A
If you have several Pro clusters and want to turn one of them into a Managed cluster that is centrally administered
by a Management cluster, see Platform Expansion.

Pre-provisioned with GPU Install


This section provides instructions to install NKP in a Pre-provisioned non-air-gapped environment with GPU.

Ensure Configuration
If not already done, see the documentation for:

• Resource Requirements on page 38


• Installing NKP on page 47
• Prerequisites for Installation on page 44



Pre-provisioned GPU: Nodepool Secrets and Overrides
Install NVIDIA runfile and place it in the artifacts directory.

About this task


For pre-provisioned environments, NKP has introduced the nvidia-runfile flag for air-gapped pre-provisioned
environments. If the NVIDIA runfile installer has not been downloaded, retrieve it first by running the
commands below. The first command downloads the runfile and the second places it in the artifacts
directory (you must create an artifacts directory if it does not already exist).
curl -O https://download.nvidia.com/XFree86/Linux-x86_64/470.82.01/NVIDIA-Linux-x86_64-470.82.01.run
mv NVIDIA-Linux-x86_64-470.82.01.run artifacts

Note: The NVIDIA driver version supported by NKP is 470.x. For more information, see NVIDIA driver.

Procedure

1. Create the secret that the GPU nodepool uses. This secret is populated from the KIB overrides.
Example of a file named overrides/nvidia.yaml:
gpu:
  types:
    - nvidia
build_name_extra: "-nvidia"

2. Create a secret on the bootstrap cluster that is populated from the above file. We will name it
${CLUSTER_NAME}-user-overrides
kubectl create secret generic ${CLUSTER_NAME}-user-overrides --from-file=overrides.yaml=overrides/nvidia.yaml

3. Create an inventory and nodepool with the instructions below and use the ${CLUSTER_NAME}-user-overrides
secret.

a. Create an inventory object that has the same name as the node pool you are creating, and the details of the pre-
provisioned machines that you want to add to it. For example, to create a node pool named gpu-nodepool, an
inventory named gpu-nodepool must be present in the same namespace.
apiVersion: infrastructure.cluster.konvoy.d2iq.io/v1alpha1
kind: PreprovisionedInventory
metadata:
  name: ${MY_NODEPOOL_NAME}
spec:
  hosts:
    - address: ${IP_OF_NODE}
  sshConfig:
    port: 22
    user: ${SSH_USERNAME}
    privateKeyRef:
      name: ${NAME_OF_SSH_SECRET}
      namespace: ${NAMESPACE_OF_SSH_SECRET}

b. (Optional) If your pre-provisioned machines have overrides, you must create a secret that includes all of the
overrides you want to provide in one file. Create an override secret using the instructions detailed on this page.



c. Once the PreprovisionedInventory object and overrides are created, create a node pool.
nkp create nodepool preprovisioned -c ${MY_CLUSTER_NAME} ${MY_NODEPOOL_NAME} --override-secret-name ${MY_OVERRIDE_SECRET}

Note: Advanced users can use a combination of the --dry-run and --output=yaml or --output-
directory=<existing-directory> flags to get a complete set of node pool objects to modify locally
or store in version control.

Note: For more information regarding this flag or others, see the nkp create nodepool section of the
documentation for either cluster or nodepool and select your provider.

Pre-provisioned GPU: Define Infrastructure


Define the cluster hosts and infrastructure in a pre-provisioned environment.

About this task


Konvoy needs to know how to access your cluster hosts. This is done using inventory resources. For initial
cluster creation, you must define a control-plane and at least one worker pool.
This procedure sets the necessary environment variables.

Procedure

1. Export the following environment variables, ensuring that all control plane and worker nodes are included:
export CONTROL_PLANE_1_ADDRESS="<control-plane-address-1>"
export CONTROL_PLANE_2_ADDRESS="<control-plane-address-2>"
export CONTROL_PLANE_3_ADDRESS="<control-plane-address-3>"
export WORKER_1_ADDRESS="<worker-address-1>"
export WORKER_2_ADDRESS="<worker-address-2>"
export WORKER_3_ADDRESS="<worker-address-3>"
export WORKER_4_ADDRESS="<worker-address-4>"
export SSH_USER="<ssh-user>"
export SSH_PRIVATE_KEY_SECRET_NAME="$CLUSTER_NAME-ssh-key"

2. Use the following template to help you define your infrastructure. The environment variables that you set in the
previous step automatically replace the variable names when the inventory YAML file is created.
cat <<EOF > preprovisioned_inventory.yaml
---
apiVersion: infrastructure.cluster.konvoy.d2iq.io/v1alpha1
kind: PreprovisionedInventory
metadata:
  name: $CLUSTER_NAME-control-plane
  namespace: default
  labels:
    cluster.x-k8s.io/cluster-name: $CLUSTER_NAME
    clusterctl.cluster.x-k8s.io/move: ""
spec:
  hosts:
    # Create as many of these as needed to match your infrastructure
    # Note that the command line parameter --control-plane-replicas determines how many control plane nodes will actually be used.
    - address: $CONTROL_PLANE_1_ADDRESS
    - address: $CONTROL_PLANE_2_ADDRESS
    - address: $CONTROL_PLANE_3_ADDRESS
  sshConfig:
    port: 22
    # This is the username used to connect to your infrastructure. This user must be root or
    # have the ability to use sudo without a password
    user: $SSH_USER
    privateKeyRef:
      # This is the name of the secret you created in the previous step. It must exist in the same
      # namespace as this inventory object.
      name: $SSH_PRIVATE_KEY_SECRET_NAME
      namespace: default
---
apiVersion: infrastructure.cluster.konvoy.d2iq.io/v1alpha1
kind: PreprovisionedInventory
metadata:
  name: $CLUSTER_NAME-md-0
  namespace: default
  labels:
    cluster.x-k8s.io/cluster-name: $CLUSTER_NAME
    clusterctl.cluster.x-k8s.io/move: ""
spec:
  hosts:
    - address: $WORKER_1_ADDRESS
    - address: $WORKER_2_ADDRESS
    - address: $WORKER_3_ADDRESS
    - address: $WORKER_4_ADDRESS
  sshConfig:
    port: 22
    user: $SSH_USER
    privateKeyRef:
      name: $SSH_PRIVATE_KEY_SECRET_NAME
      namespace: default
EOF

Pre-provisioned GPU: Define Control Plane Endpoint


Define the control plane endpoint for your cluster as well as the connection mechanism. A control plane needs to have
three, five, or seven nodes so it can remain available if one or more nodes fail. A control plane with one node is not
for production use.
In addition, the control plane needs an endpoint that remains available if some nodes fail.
-------- cp1.example.com:6443
|
lb.example.com:6443 ---------- cp2.example.com:6443
|
-------- cp3.example.com:6443
In this example, the control plane endpoint host is lb.example.com, and the control plane endpoint port is 6443.
The control plane nodes are cp1.example.com, cp2.example.com, and cp3.example.com. The port of each API
server is 6443.

Select your Connection Mechanism


A virtual IP is the address that the client uses to connect to the service. A load balancer is the device that distributes
the client connections to the backend servers. Before you create a new NKP cluster, choose an external load balancer
(LB) or virtual IP.

• External load balancer


It is recommended that an external load balancer be the control plane endpoint. To distribute request load among
the control plane machines, configure the load balancer to send requests to all the control plane machines.
Configure the load balancer to send requests only to control plane machines that are responding to API requests.



• Built-in virtual IP
If an external load balancer is not available, use the built-in virtual IP. The virtual IP is not a load balancer; it does
not distribute request load among the control plane machines. However, if the machine receiving requests does not
respond to them, the virtual IP automatically moves to another machine.

Single-Node Control Plane

Caution: Do not use a single-node control plane in a production cluster.

A control plane with one node can use its single node as the endpoint, so you will not require an external load
balancer, or a built-in virtual IP. At least one control plane node must always be running. Therefore, to upgrade a
cluster with one control plane node, a spare machine must be available in the control plane inventory. This machine
is used to provision the new node before the old node is deleted. When the API server endpoints are defined, you can
create the cluster using the link in Next Step below.

Note: Modify Control Plane Audit logs settings using the information contained in the page Configure the Control
Plane.

Known Limitations
The control plane endpoint port is also used as the API server port on each control plane machine. The default port is
6443. Before you create the cluster, ensure the port is available for use on each control plane machine.

Pre-provisioned GPU: Creating the Management Cluster


Create a new Pre-provisioned Kubernetes cluster in a non-air-gapped environment with the steps below.

About this task


After you have defined the infrastructure and control plane endpoints, you can proceed to creating the
cluster by following these steps to create a new pre-provisioned cluster. This process creates a self-
managed cluster to be used as the Management cluster.
If a custom AMI was created using Konvoy Image Builder, the custom ami id is printed and written to
packer.pkr.hcl.

To use the built ami with Konvoy, specify it with the --ami flag when calling cluster create.
For the GPU steps in the Pre-provisioned section of the documentation, use the overrides/nvidia.yaml file.
Additional helpful information can be found in the NVIDIA Device Plug-in for Kubernetes instructions and the
Installation Guide of Supported Platforms.

Before you begin


First you must name your cluster, and then you run the command to deploy it.

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the
name has capital letters. See Kubernetes for more naming information.
When specifying the cluster-name, you must use the same cluster-name as used when defining your
inventory objects.

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable: export CLUSTER_NAME=preprovisioned-example



What to do next
Create a Kubernetes Cluster
After you have defined the infrastructure and control plane endpoints, you can proceed to creating the cluster by
following these steps to create a new Pre-provisioned cluster. This process creates a self-managed cluster to be
used as the Management cluster. By default, the control-plane Nodes will be created in 3 different zones. However,
the default worker Nodes will reside in a single Availability Zone. You may create additional node pools in other
Availability Zones with the nkp create nodepool command.
Before you create a new NKP cluster below, choose an external load balancer (LB) or virtual IP and use the
corresponding nkp create cluster command.
In a pre-provisioned environment, use the Kubernetes CSI and third party drivers for local volumes and other storage
devices in your data center.
NKP uses the local static provisioner as the default storage provider for a pre-provisioned environment. However, localvolumeprovisioner is not suitable for production use. Instead, use a Kubernetes CSI compatible storage solution that is suitable for production.
After disabling localvolumeprovisioner, you can choose from any of the storage options available for Kubernetes. To make that storage the default storage, use the commands shown in this section of the Kubernetes documentation: Changing the Default Storage Class
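As an illustration of that Kubernetes procedure, the standard kubectl patch commands look like the following. The StorageClass names are placeholders, so list yours first with kubectl get storageclass.
# Remove the default annotation from the local static provisioner StorageClass
kubectl patch storageclass localvolumeprovisioner -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
# Mark your production CSI StorageClass as the default
kubectl patch storageclass <production-storageclass> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'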
For Pre-provisioned environments, you define a set of nodes that already exist. During the cluster creation process, Konvoy Image Builder (KIB), which is built into NKP, automatically runs the machine configuration process (the same process KIB uses to build images for other providers) against the set of nodes that you defined. This results in your pre-existing or pre-provisioned nodes being configured properly.
The following command relies on the pre-provisioned cluster API infrastructure provider to initialize the Kubernetes
control plane and worker nodes on the hosts defined in the inventory YAML previously created.
The create cluster command below includes the --self-managed flag. A self-managed cluster refers to one in
which the CAPI resources and controllers that describe and manage it are running on the same cluster they are
managing.
This command uses the default external load balancer (LB) option (see alternative Step 1 for virtual IP):
nkp create cluster preprovisioned \
--cluster-name=${CLUSTER_NAME} \
--control-plane-endpoint-host <control plane endpoint host> \
--control-plane-endpoint-port <control plane endpoint port, if different than 6443> \
--pre-provisioned-inventory-file preprovisioned_inventory.yaml \
--ssh-private-key-file <path-to-ssh-private-key> \
--ami <ami> \
--self-managed

Note: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-
proxy, and --no-proxy and their related values in this command for it to be successful. More information is
available in Configuring an HTTP or HTTPS Proxy on page 644.
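For example, assuming a proxy at proxy.example.com (the URLs and no-proxy list below are placeholders for your environment), the proxy flags are appended to the same create command shown above:
nkp create cluster preprovisioned \
  --cluster-name=${CLUSTER_NAME} \
  ...                                           # all other flags from the command above
  --http-proxy=http://proxy.example.com:3128 \
  --https-proxy=http://proxy.example.com:3128 \
  --no-proxy=127.0.0.1,localhost,.svc,.cluster.local \
  --self-managed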

1. ALTERNATIVE Virtual IP: if you do not have an external LB and want to use a VIRTUAL IP provided by kube-vip, specify these flags as shown in the example below:
nkp create cluster preprovisioned \
--cluster-name ${CLUSTER_NAME} \
--control-plane-endpoint-host 196.168.1.10 \
--virtual-ip-interface eth1

2. Create the node pool after cluster creation:


nkp create nodepool aws -c ${CLUSTER_NAME} \
  --instance-type p2.xlarge \
  --ami-id=${AMI_ID_FROM_KIB} \
  --replicas=1 ${NODEPOOL_NAME} \
  --kubeconfig=${CLUSTER_NAME}.conf

3. Use the wait command to monitor the cluster control-plane readiness:


kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=30m
Output:
cluster.cluster.x-k8s.io/preprovisioned-example condition met

Note: Depending on the cluster size, it will take a few minutes to create.

When the command completes, you will have a running Kubernetes cluster! For bootstrap and custom YAML cluster
creation, refer to the Additional Infrastructure Customization section of the documentation for Pre-provisioned: Pre-
Provisioned Infrastructure
Use this command to get the Kubernetes kubeconfig for the new cluster and proceed to installing the NKP
Kommander UI:
nkp get kubeconfig -c ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf
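Optionally, you can confirm that the kubeconfig works and the nodes are Ready before continuing; this is a quick check, not a required step.
kubectl --kubeconfig=${CLUSTER_NAME}.conf get nodes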

Note: If changing the Calico encapsulation, Nutanix recommends changing it after cluster creation, but before
production.

Important: If you need to increase Docker Hub's rate limit, use your Docker Hub credentials when creating the cluster by setting the following flags on the nkp create cluster command: --registry-mirror-url=https://registry-1.docker.io --registry-mirror-username= --registry-mirror-password=

Audit Logs
To modify Control Plane Audit log settings, use the information contained in the page Configure the Control Plane.
Further Steps
For more customized cluster creation, access the Pre-provisioned Infrastructure section. That section covers Pre-provisioned Override Files, custom flags, and other options that specify the override secret as part of the create cluster command. If these are not specified, the overrides for your nodes will not be applied.
Cluster Verification: If you want to monitor or verify the installation of your clusters, refer to: Verify your Cluster
and NKP Installation.

Configure MetalLB with GPU


Create a MetalLB configmap for your Pre-provisioned Infrastructure.
Choose one of the following two protocols you want to use to announce service IPs. If your environment is not
currently equipped with a load balancer, you can use MetalLB. Otherwise, your own load balancer will work and you
can continue the installation process with Pre-provisioned: Install Kommander . To use MetalLB, create a MetalLB
configMap for your Pre-provisioned infrastructure. MetalLB uses one of two protocols for exposing Kubernetes
services:

• Layer 2, with Address Resolution Protocol (ARP)


• Border Gateway Protocol (BGP)
Select one of the following procedures to create your MetalLB manifest for further editing.



Layer 2 Configuration
Layer 2 mode is the simplest to configure: in many cases, you don’t need any protocol-specific configuration, only IP
addresses.
Layer 2 mode does not require the IPs to be bound to the network interfaces of your worker nodes. It works by
responding to ARP requests on your local network directly, to give the machine’s MAC address to clients.

• MetalLB IP address ranges or CIDRs need to be within the node’s primary network subnet.
• MetalLB IP address ranges or CIDRs and the node subnet must not conflict with the Kubernetes cluster pod and service subnets.
For example, the following configuration gives MetalLB control over IPs from 192.168.1.240 to 192.168.1.250, and
configures Layer 2 mode:
The following values are generic, enter your specific values into the fields where applicable.
cat << EOF > metallb-conf.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
EOF
kubectl apply -f metallb-conf.yaml
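To confirm that MetalLB is assigning addresses from the pool, one optional check (the Deployment name and image below are arbitrary examples) is to expose a test Service of type LoadBalancer and verify that it receives an EXTERNAL-IP from the 192.168.1.240-192.168.1.250 range:
kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --port=80 --type=LoadBalancer
kubectl get svc lb-test
# Clean up the test resources afterwards
kubectl delete svc,deployment lb-test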

BGP Configuration
For a basic configuration featuring one BGP router and one IP address range, you need 4 pieces of information:

• The router IP address that MetalLB needs to connect to.


• The router’s autonomous systems (AS) number.
• The AS number for MetalLB to use.
• An IP address range expressed as a Classless Inter-Domain Routing (CIDR) prefix.
As an example, if you want to give MetalLB the range 192.168.10.0/24 and AS number 64500, and connect it to a
router at 10.0.0.1 with AS number 64501, your configuration will look like:
cat << EOF > metallb-conf.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.0.0.1
      peer-asn: 64501
      my-asn: 64500
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 192.168.10.0/24
EOF
kubectl apply -f metallb-conf.yaml

Pre-provisioned GPU: Install Kommander


This section provides installation instructions for the Kommander component of NKP in a non-air-gapped
pre-provisioned environment.

About this task


Once you have installed the Konvoy component of NKP, you will continue with the installation of the
Kommander component that will bring up the UI dashboard.

Tip: Tips and Recommendations

• The --kubeconfig=${CLUSTER_NAME}.conf flag ensures that you install Kommander on the correct
cluster. For alternatives, see Provide Context for Commands with a kubeconfig File.
• Applications can take longer to deploy, and time out the installation. Add the --wait-timeout <time
to wait> flag and specify a period of time (for example, 1h) to allocate more time to the deployment
of applications.
• If the Kommander installation fails, or you wish to reconfigure applications, rerun the install
command to retry.

Prerequisites:

• Ensure you have reviewed all Prerequisites for Install.


• Ensure you have a default StorageClass.
• Note the name of the cluster where you want to install Kommander. If you do not know the cluster name, use
kubectl get clusters -A to display and find it.

Create your Kommander Installation Configuration File

Procedure

1. Set the environment variable for your cluster.


export CLUSTER_NAME=<your-management-cluster-name>

2. Copy the kubeconfig file of your Management cluster to your local directory.
nkp get kubeconfig -c ${CLUSTER_NAME} >> ${CLUSTER_NAME}.conf

3. Create a configuration file for the deployment.


nkp install kommander --init > kommander.yaml

4. Edit the installer file to include configuration overrides for the rook-ceph-cluster. NKP’s default
configuration ships Ceph with PersistentVolumeClaim (PVC) based storage which requires your CSI provider
to support PVC with type volumeMode: Block. As this is not possible with the default local static provisioner,
you can install Ceph in host storage mode. You can choose whether Ceph’s object storage daemon (osd) pods can
consume all or just some of the devices on your nodes. Include one of the following Overrides.

a. To automatically assign all raw storage devices on all nodes to the Ceph cluster.
rook-ceph-cluster:
  enabled: true
  values: |
    cephClusterSpec:
      storage:
        storageClassDeviceSets: []
        useAllDevices: true
        useAllNodes: true
        deviceFilter: "<<value>>"

b. To assign specific storage devices on all nodes to the Ceph cluster.


rook-ceph-cluster:
  enabled: true
  values: |
    cephClusterSpec:
      storage:
        storageClassDeviceSets: []
        useAllNodes: true
        useAllDevices: false
        deviceFilter: "^sdb."

Note: If you want to assign specific devices to specific nodes using the deviceFilter option, refer to
Specific Nodes and Devices. For general information on the deviceFilter value, refer to Storage
Selection Settings.
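If you do need node-specific assignments, a minimal sketch of such an override is shown below; the node and device names are placeholders, and the full schema is described in the Rook storage references cited in the note above.
rook-ceph-cluster:
  enabled: true
  values: |
    cephClusterSpec:
      storage:
        storageClassDeviceSets: []
        useAllNodes: false
        useAllDevices: false
        nodes:
        - name: "worker-1"      # must match the Kubernetes node name
          devices:
          - name: "sdb"
        - name: "worker-2"
          deviceFilter: "^sd[bc]"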

5. If required: Customize your kommander.yaml.

a. See Kommander Customizations page for customization options. Some options include Custom Domains and
Certificates, HTTP proxy and External Load Balancer.

6. Enable NVIDIA platform services for GPU resources in the same kommander.yaml file.
apps:
  nvidia-gpu-operator:
    enabled: true

7. Append the correct Toolkit version based on your OS.

a. RHEL 8.4/8.6
If you’re using RHEL 8.4/8.6 as the base operating system for your GPU enabled nodes, set the
toolkit.version parameter in your Kommander Installer Configuration file or <kommander.yaml> to
the following
kind: Installation
apps:
  nvidia-gpu-operator:
    enabled: true
    values: |
      toolkit:
        version: v1.14.6-ubi8

b. Ubuntu 18.04 and 20.04


If you’re using Ubuntu 18.04 or 20.04 as the base operating system for your GPU enabled nodes, set the
toolkit.version parameter in your Kommander Installer Configuration file or <kommander.yaml> to
the following
kind: Installation
apps:
  nvidia-gpu-operator:
    enabled: true
    values: |
      toolkit:
        version: v1.14.6-ubuntu20.04

8. Enable NKP Catalog Applications and install Kommander in the same kommander.yaml from the previous section by adding these values (if you are enabling NKP Catalog Apps) for nkp-catalog-applications.
apiVersion: config.kommander.mesosphere.io/v1alpha1
kind: Installation
catalog:
  repositories:
    - name: NKP-catalog-applications
      labels:
        kommander.d2iq.io/project-default-catalog-repository: "true"
        kommander.d2iq.io/workspace-default-catalog-repository: "true"
        kommander.d2iq.io/gitapps-gitrepository-type: "nkp"
      gitRepositorySpec:
        url: https://github.com/mesosphere/nkp-catalog-applications
        ref:
          tag: v2.12.0

9. Use the customized kommander.yaml to install NKP.


nkp install kommander --installer-config kommander.yaml --kubeconfig=${CLUSTER_NAME}.conf

Note: If you only want to enable catalog applications to an existing configuration, add these values to an existing
installer configuration file to maintain your Management cluster’s settings.
If you want to enable NKP Catalog applications after installing NKP, see the topic Configuring NKP
Catalog Applications after Installing NKP.

Verifying your Installation and UI Log in


Verify Kommander Install and Log in to the Dashboard UI

About this task


Verify Kommander Installation

Note: If the Kommander installation fails or you wish to reconfigure applications, you can rerun the install command
to retry the installation.

Procedure
You can check the status of the installation using the following command.
kubectl -n kommander wait --for condition=Ready helmreleases --all --timeout 15m

Note: If you prefer the CLI to not wait for all applications to become ready, you can set the --wait=false flag.

The command waits for each of the Helm charts to reach its Ready condition, eventually resulting in output resembling the following:
helmrelease.helm.toolkit.fluxcd.io/centralized-grafana condition met
helmrelease.helm.toolkit.fluxcd.io/dex condition met
helmrelease.helm.toolkit.fluxcd.io/dex-k8s-authenticator condition met
helmrelease.helm.toolkit.fluxcd.io/fluent-bit condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-logging condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-loki condition met
helmrelease.helm.toolkit.fluxcd.io/karma condition met
helmrelease.helm.toolkit.fluxcd.io/kommander condition met
helmrelease.helm.toolkit.fluxcd.io/kommander-appmanagement condition met
helmrelease.helm.toolkit.fluxcd.io/kube-prometheus-stack condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/kubefed condition met
helmrelease.helm.toolkit.fluxcd.io/kubernetes-dashboard condition met
helmrelease.helm.toolkit.fluxcd.io/kubetunnel condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator-logging condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-adapter condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/reloader condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph-cluster condition met
helmrelease.helm.toolkit.fluxcd.io/thanos condition met
helmrelease.helm.toolkit.fluxcd.io/traefik condition met
helmrelease.helm.toolkit.fluxcd.io/traefik-forward-auth-mgmt condition met
helmrelease.helm.toolkit.fluxcd.io/velero condition met

Failed HelmReleases

Procedure
If an application fails to deploy, check the status of a HelmRelease using the following command.
kubectl -n kommander get helmrelease <HELMRELEASE_NAME>
If you find any HelmReleases in a “broken” release state, such as “exhausted” or “another rollback/release in progress”, trigger a reconciliation of the HelmRelease using the following commands.
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": true}]'
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": false}]'

Log in to the UI

Procedure

1. By default, you can log in to the UI in Kommander with the credentials given using this command.
nkp open dashboard --kubeconfig=${CLUSTER_NAME}.conf

2. Retrieve your credentials at any time if necessary.


kubectl -n kommander get secret NKP-credentials -o go-template='Username: {{.data.username|base64decode}}{{ "\n"}}Password: {{.data.password|base64decode}}{{ "\n"}}'



3. Retrieve the URL used for accessing the UI with the following.
kubectl -n kommander get svc kommander-traefik -o go-template='https://{{with index .status.loadBalancer.ingress 0}}{{or .hostname .ip}}{{end}}/NKP/kommander/dashboard{{ "\n"}}'
Only use the static credentials to access the UI for configuring an external identity provider. Treat them as backup credentials rather than using them for normal access.

a. Rotate the password using the following command.


nkp experimental rotate dashboard-password
The output displays the new password:
Password: kqZ31lMBSCLcBjUKVwLJMQL2PxalipIzZw5Pjyw09wDqjWV3dz2wPSSBYi09JGJp

Create Managed Clusters Using the NKP CLI


This topic explains how to continue using the CLI to create managed clusters in a Pre-provisioned
environment with GPU rather than switching to the UI dashboard.

About this task


After initial cluster creation, you can create additional clusters from the CLI. In a previous step, the new
cluster was created as Self-managed, which allows it to be a Management cluster or a stand-alone cluster.
Subsequent new clusters are not self-managed, as they will likely be Managed or Attached clusters to this
Management Cluster.

Note: When creating Managed clusters, you do not need to create and move CAPI objects, or install the
Kommander component. Those tasks are only done on Management clusters!
To make the new managed cluster a part of a Workspace, set that workspace environment variable.

Procedure

1. If you have an existing Workspace name, run this command to find the name.
kubectl get workspace -A

2. When you have the Workspace name, set the WORKSPACE_NAMESPACE environment variable.
export WORKSPACE_NAMESPACE=<workspace_namespace>

Note: If you need to create a new Workspace, follow the instructions to Create a New Workspace

Name Your Cluster

About this task


Each cluster must have a unique name.
After you have defined the infrastructure and control plane endpoints, you can proceed to create the cluster by
following these steps to create a new pre-provisioned cluster. This process creates a self-managed cluster that can be
used as the management cluster.
First, you must name your cluster. Then, you run the command to deploy it.

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the name has capital letters. See Kubernetes for more naming information.



When specifying the cluster-name, you must use the same cluster-name as used when defining your
inventory objects.

Perform both steps to name the cluster:

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable.


export CLUSTER_NAME=<preprovisioned-additional>

Create a Cluster with GPU AMI

Procedure

• If a custom AMI was created using Konvoy Image Builder, use the --ami flag. The custom AMI ID is printed and written to ./manifest.json. To use the built AMI with Konvoy, specify it with the --ami flag when calling cluster create in Step 1 of the next section, where you create your Kubernetes cluster.

Create a Kubernetes Cluster

About this task


After you have defined the infrastructure and control plane endpoints, you can proceed to create the cluster
by following these steps to create a new pre-provisioned cluster.
This process creates a self-managed cluster that can be used as the management cluster.

Tip: Before you create a new NKP cluster below, choose an external load balancer (LB) or virtual IP and use the corresponding nkp create cluster command.

In a Pre-provisioned environment, use the Kubernetes CSI and third-party drivers for local volumes and other storage devices in your data center.

Caution: NKP uses a local static provisioner as the default storage provider for a pre-provisioned environment.
However, localvolumeprovisioner is not suitable for production use. Use Kubernetes CSI compatible
storage that is suitable for production.

After turning off localvolumeprovisioner, you can choose from any of the storage options available for
Kubernetes. To make that storage the default storage, use the commands shown in this section of the Kubernetes
documentation: Change the default StorageClass.
For Pre-provisioned environments, you define a set of nodes that already exist. During the cluster creation process, Konvoy Image Builder (KIB), which is built into NKP, automatically runs the machine configuration process (the same process KIB uses to build images for other providers) against the set of nodes that you defined. This results in your pre-existing or pre-provisioned nodes being configured properly.
The following command relies on the pre-provisioned cluster API infrastructure provider to initialize the Kubernetes
control plane and worker nodes on the hosts defined in the inventory YAML previously created.

Procedure

1. This command uses the default external load balancer (LB) option.
nkp create cluster preprovisioned --cluster-name ${CLUSTER_NAME} \
  --control-plane-endpoint-host <control plane endpoint host> \
  --control-plane-endpoint-port <control plane endpoint port, if different than 6443> \
  --pre-provisioned-inventory-file preprovisioned_inventory.yaml \
  --ssh-private-key-file <path-to-ssh-private-key> \
  --registry-mirror-url=${REGISTRY_URL} \
  --registry-mirror-cacert=${REGISTRY_CA} \
  --registry-mirror-username=${REGISTRY_USERNAME} \
  --registry-mirror-password=${REGISTRY_PASSWORD}

2. Use the wait command to monitor the cluster control-plane readiness.

kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=30m
Output:
cluster.cluster.x-k8s.io/preprovisioned-additional condition met

Note: Depending on the cluster size, it will take a few minutes to create.

Tip: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-
proxy, and --no-proxy and their related values in this command for it to be successful. More information is
available in Configuring an HTTP or HTTPS Proxy on page 644.

Manually Attach an NKP CLI Cluster to the Management Cluster

Procedure

When you create a Managed Cluster with the NKP CLI, it attaches automatically to the Management Cluster after a
few moments. However, if you do not set a workspace, the attached cluster will be created in the default workspace.
To ensure that the attached cluster is created in your desired workspace namespace, follow these instructions:

1. Confirm you have your MANAGED_CLUSTER_NAME variable set with the following command.
echo ${MANAGED_CLUSTER_NAME}

2. Retrieve your kubeconfig from the cluster you have created without setting a workspace.
nkp get kubeconfig --cluster-name ${MANAGED_CLUSTER_NAME} > ${MANAGED_CLUSTER_NAME}.conf

3. Note: This is only necessary if you never set the workspace of your cluster upon creation.

You can now either attach the cluster to a workspace in the UI, as described earlier, or attach it to the workspace you want using the CLI as shown in the following steps.

4. Retrieve the workspace where you want to attach the cluster.


kubectl get workspaces -A

5. Set the WORKSPACE_NAMESPACE environment variable.


export WORKSPACE_NAMESPACE=<workspace-namespace>

6. You need to create a secret in the desired workspace before attaching the cluster to that workspace. Retrieve the
kubeconfig secret value of your cluster.
kubectl -n default get secret ${MANAGED_CLUSTER_NAME}-kubeconfig -o go-template='{{.data.value}}{{ "\n"}}'



7. This will return a lengthy value. Copy this entire string for a secret using the template below as a reference.
Create a new attached-cluster-kubeconfig.yaml file.
apiVersion: v1
kind: Secret
metadata:
  name: <your-managed-cluster-name>-kubeconfig
  labels:
    cluster.x-k8s.io/cluster-name: <your-managed-cluster-name>
type: cluster.x-k8s.io/secret
data:
  value: <value-you-copied-from-secret-above>

8. Create this secret in the desired workspace.


kubectl apply -f attached-cluster-kubeconfig.yaml --namespace ${WORKSPACE_NAMESPACE}

9. Create this kommandercluster object to attach the cluster to the workspace.


cat << EOF | kubectl apply -f -
apiVersion: kommander.mesosphere.io/v1beta1
kind: KommanderCluster
metadata:
  name: ${MANAGED_CLUSTER_NAME}
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  kubeconfigRef:
    name: ${MANAGED_CLUSTER_NAME}-kubeconfig
  clusterRef:
    capiCluster:
      name: ${MANAGED_CLUSTER_NAME}
EOF

10. You can now view this cluster in your Workspace in the UI, and you can confirm its status by running the command below. It may take a few minutes to reach "Joined" status.
kubectl get kommanderclusters -A
If you have several Pro Clusters and want to turn one of them into a Managed Cluster that is centrally administered by a Management Cluster, refer to Platform Expansion.

Pre-provisioned Air-gapped with GPU Install


This section provides instructions to install NKP in a Pre-provisioned air-gapped environment with GPU.

Ensure Configuration
If not already done, see the documentation for:

• Resource Requirements on page 38


• Installing NKP on page 47
• Prerequisites for Installation on page 44

Note: For air-gapped, ensure you download the bundle nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz and extract the tar file to a local directory. For more information, see Downloading NKP on page 16.
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz



Pre-provisioned Air-gapped with GPU: Configure Environment
In order to create a cluster in a Pre-provisioned Air-gapped environment with GPU, you must first prepare the
environment.

Note: If the NVIDIA runfile installer has not been downloaded, retrieve it first by running the following commands. The first command downloads the runfile, and the second places it in the artifacts directory.
curl -O https://download.nvidia.com/XFree86/Linux-x86_64/470.82.01/NVIDIA-Linux-x86_64-470.82.01.run
mv NVIDIA-Linux-x86_64-470.82.01.run artifacts

The instructions below outline how to fulfill the requirements for using pre-provisioned infrastructure in an air-gapped environment. In order to create a cluster, you must first set up the environment with the necessary artifacts. All artifacts for Pre-provisioned Air-gapped need to get onto the bastion host. Artifacts needed by nodes must be unpacked and distributed on the bastion before other provisioning will work in the absence of an internet connection.
There is an air-gapped bundle available to download. In previous NKP releases, the distro package bundles were
included in the downloaded air-gapped bundle. Currently, that air-gapped bundle contains the following artifacts, with
the exception of the distro packages:

• NKP Kubernetes packages


• Python packages (provided by upstream)
• Containerd tar file
1. Download nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz , and extract the tar file to a local
directory:
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz && cd nkp-v2.12.0/kib

2. You will need to fetch the distro packages as well as other artifacts. By fetching the distro packages from distro
repositories, you get the latest security fixes available at machine image build time.
3. In your download location, there is a bundles directory with all the steps to create an OS package bundle for a
particular OS. To create it, run the new NKP command create-package-bundle. This builds an OS bundle
using the Kubernetes version defined in ansible/group_vars/all/defaults.yaml. Example command:
./konvoy-image create-package-bundle --os redhat-8.4 --output-directory=artifacts

Note:

• For FIPS, pass the flag: --fips.
• For Red Hat Enterprise Linux (RHEL) OS, pass your Red Hat subscription manager credentials by exporting RHSM_ACTIVATION_KEY and RHSM_ORG_ID. Example commands:
export RHSM_ACTIVATION_KEY="-ci"
export RHSM_ORG_ID="1232131"

Setup Process

1. The bootstrap image must be extracted and loaded onto the bastion host.
2. Artifacts must be copied onto cluster hosts for nodes to access.
3. If using GPU, those artifacts must be positioned locally.
4. Registry seeded with images locally.



Load the Bootstrap Image
1. Assuming you have downloaded nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz from the download site mentioned above and extracted the tar file, you will load the bootstrap image.
2. Load the bootstrap image on your bastion machine:
docker load -i konvoy-bootstrap-image-v2.12.0.tar
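Optionally, confirm that the bootstrap image is now present in the local Docker image cache before proceeding; the repository name shown by docker images may differ slightly from the tar file name.
docker images | grep konvoy-bootstrap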

Copy air-gapped artifacts onto cluster hosts
Using the Konvoy Image Builder, you can copy the required artifacts onto your cluster hosts.
1. Assuming you have downloaded nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz, extract the tar file to a local directory.
2. The Kubernetes image bundle is located in kib/artifacts/images. Verify the images and artifacts.
1. Verify the image bundles exist in artifacts/images:
$ ls artifacts/images/
kubernetes-images-1.29.6-d2iq.1.tar kubernetes-images-1.29.6-d2iq.1-fips.tar

2. Verify the artifacts for your OS exist in the artifacts/ directory and export the appropriate variables:
$ ls artifacts/
1.29.6_centos_7_x86_64.tar.gz
1.29.6_centos_7_x86_64_fips.tar.gz
1.29.6_redhat_7_x86_64.tar.gz
1.29.6_redhat_7_x86_64_fips.tar.gz
1.29.6_redhat_8_x86_64.tar.gz
1.29.6_redhat_8_x86_64_fips.tar.gz
1.29.6_rocky_9_x86_64.tar.gz
1.29.6_ubuntu_20_x86_64.tar.gz
containerd-1.6.28-d2iq.1-centos-7.9-x86_64.tar.gz
containerd-1.6.28-d2iq.1-centos-7.9-x86_64_fips.tar.gz
containerd-1.6.28-d2iq.1-rhel-7.9-x86_64.tar.gz
containerd-1.6.28-d2iq.1-rhel-7.9-x86_64_fips.tar.gz
containerd-1.6.28-d2iq.1-rhel-8.4-x86_64.tar.gz
containerd-1.6.28-d2iq.1-rhel-8.4-x86_64_fips.tar.gz
containerd-1.6.28-d2iq.1-rhel-8.6-x86_64.tar.gz
containerd-1.6.28-d2iq.1-rhel-8.6-x86_64_fips.tar.gz
containerd-1.6.28-d2iq.1-rocky-9.0-x86_64.tar.gz
containerd-1.6.28-d2iq.1-rocky-9.1-x86_64.tar.gz
containerd-1.6.28-d2iq.1-ubuntu-20.04-x86_64.tar.gz
images
pip-packages.tar.gz

3. For example, for RHEL 8.4 you set:


export OS_PACKAGES_BUNDLE=1.29.6_redhat_8_x86_64.tar.gz
export CONTAINERD_BUNDLE=containerd-1.6.28-d2iq.1-rhel-8.4-x86_64.tar.gz

3. Export the following environment variables, ensuring that all control plane and worker nodes are included:
export CONTROL_PLANE_1_ADDRESS="<control-plane-address-1>"
export CONTROL_PLANE_2_ADDRESS="<control-plane-address-2>"
export CONTROL_PLANE_3_ADDRESS="<control-plane-address-3>"
export WORKER_1_ADDRESS="<worker-address-1>"
export WORKER_2_ADDRESS="<worker-address-2>"
export WORKER_3_ADDRESS="<worker-address-3>"
export WORKER_4_ADDRESS="<worker-address-4>"
export SSH_USER="<ssh-user>"
export SSH_PRIVATE_KEY_FILE="<private key file>"
SSH_PRIVATE_KEY_FILE must be either the name of the SSH private key file in your working directory or an
absolute path to the file in your user’s home directory.
4. Generate an inventory.yaml which is automatically picked up by the konvoy-image upload in the next step.
cat <<EOF > inventory.yaml
all:
  vars:
    ansible_user: $SSH_USER
    ansible_port: 22
    ansible_ssh_private_key_file: $SSH_PRIVATE_KEY_FILE
  hosts:
    $CONTROL_PLANE_1_ADDRESS:
      ansible_host: $CONTROL_PLANE_1_ADDRESS
    $CONTROL_PLANE_2_ADDRESS:
      ansible_host: $CONTROL_PLANE_2_ADDRESS
    $CONTROL_PLANE_3_ADDRESS:
      ansible_host: $CONTROL_PLANE_3_ADDRESS
    $WORKER_1_ADDRESS:
      ansible_host: $WORKER_1_ADDRESS
    $WORKER_2_ADDRESS:
      ansible_host: $WORKER_2_ADDRESS
    $WORKER_3_ADDRESS:
      ansible_host: $WORKER_3_ADDRESS
    $WORKER_4_ADDRESS:
      ansible_host: $WORKER_4_ADDRESS
EOF

5. Upload the artifacts onto cluster hosts with the following command:
konvoy-image upload artifacts --inventory-file=gpu_inventory.yaml \
--container-images-dir=./artifacts/images/ \
--os-packages-bundle=./artifacts/$OS_PACKAGES_BUNDLE \
--containerd-bundle=artifacts/$CONTAINERD_BUNDLE \
--pip-packages-bundle=./artifacts/pip-packages.tar.gz \
--nvidia-runfile=./artifacts/NVIDIA-Linux-x86_64-470.82.01.run
The konvoy-image upload artifacts command copies all OS packages and other artifacts onto each of the machines in your inventory. When you create the cluster, the provisioning process connects to each node and runs commands to install those artifacts, which gets Kubernetes running. KIB uses variable overrides to specify the base image and container images to use in your new machine image. The variable override files for NVIDIA and FIPS can be ignored unless you are adding an overlay feature. Use the --overrides overrides/fips.yaml,overrides/offline-fips.yaml flag with manifests located in the overrides directory (see the sketch below).
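As a sketch only, appending that flag to the upload command above (all other flags unchanged) for a FIPS build would look like the following:
konvoy-image upload artifacts --inventory-file=gpu_inventory.yaml \
  --container-images-dir=./artifacts/images/ \
  --os-packages-bundle=./artifacts/$OS_PACKAGES_BUNDLE \
  --containerd-bundle=artifacts/$CONTAINERD_BUNDLE \
  --pip-packages-bundle=./artifacts/pip-packages.tar.gz \
  --nvidia-runfile=./artifacts/NVIDIA-Linux-x86_64-470.82.01.run \
  --overrides overrides/fips.yaml,overrides/offline-fips.yaml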

Pre-provisioned Air-gapped GPU: Load the Registry


Before creating an air-gapped Kubernetes cluster, you need to load the required images in a local registry
for the Konvoy component.

About this task


The complete NKP air-gapped bundle is needed for an air-gapped environment but can also be used in
a non-air-gapped environment. The bundle contains all the NKP components needed for an air-gapped
environment installation and also for using a local registry in a non-air-gapped environment.

Note: If you do not already have a local registry set up, see the Local Registry Tools page for more information.

If you are operating in an air-gapped environment, a local container registry containing all the necessary installation
images, including the Kommander images, is required. This registry must be accessible from both the bastion
machine and either the AWS EC2 instances (if deploying to AWS) or other machines that will be created for the
Kubernetes cluster.

Procedure

1. If not already done in prerequisites, download the air-gapped bundle nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz, and extract the tar file to a local directory.
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz



2. The directory structure after extraction can be accessed in subsequent steps using commands that access files from different directories. For example, for the bootstrap cluster, change your directory to the nkp-<version> directory, similar to the example below, depending on your current location.
cd nkp-v2.12.0

3. Set an environment variable with your registry address and any other needed variables using this command.
export REGISTRY_URL="<https/http>://<registry-address>:<registry-port>"
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>
export REGISTRY_CA=<path to the cacert file on the bastion>
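Optionally, before pushing any bundles, you can verify that the registry is reachable from the bastion. The check below assumes a Docker Registry V2-compatible API and reuses the variables exported above; an HTTP 200 response (often an empty JSON body) indicates the registry is reachable.
curl --cacert ${REGISTRY_CA} -u "${REGISTRY_USERNAME}:${REGISTRY_PASSWORD}" ${REGISTRY_URL}/v2/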

4. Execute the following command to load the air-gapped image bundle into your private registry using any of the relevant flags to apply the variables above.
nkp push bundle --bundle ./container-images/konvoy-image-bundle-v2.12.0.tar \
  --to-registry=${REGISTRY_URL} \
  --to-registry-username=${REGISTRY_USERNAME} \
  --to-registry-password=${REGISTRY_PASSWORD}

Note: It might take some time to push all the images to your image registry, depending on the network
performance between the machine you are running the script on and the registry.

Important: To increase Docker Hub's rate limit, use your Docker Hub credentials when creating the cluster by setting the following flags on the nkp create cluster command: --registry-mirror-url=https://registry-1.docker.io --registry-mirror-username= --registry-mirror-password=

5. Load the Kommander component images to your private registry using the command.
nkp push bundle --bundle ./container-images/kommander-image-bundle-v2.12.0.tar \
  --to-registry=${REGISTRY_URL} \
  --to-registry-username=${REGISTRY_USERNAME} \
  --to-registry-password=${REGISTRY_PASSWORD}
Optional: This step is required only if you have an Ultimate license.
For NKP Catalog Applications available with the Ultimate license, perform this image load by running the following command to load the nkp-catalog-applications image bundle into your private registry:
nkp push bundle --bundle ./container-images/nkp-catalog-applications-image-bundle-v2.12.0.tar \
  --to-registry=${REGISTRY_URL} \
  --to-registry-username=${REGISTRY_USERNAME} \
  --to-registry-password=${REGISTRY_PASSWORD}

Pre-provisioned GPU: Nodepool Secrets and Overrides


Install NVIDIA runfile and place it in the artifacts directory.

About this task


For pre-provisioned environments, NKP has introduced the nvidia-runfile flag for air-gapped pre-provisioned environments. If the NVIDIA runfile installer has not been downloaded, retrieve it first by running the following commands. The first command downloads the runfile, and the second places it in the artifacts directory (you must create an artifacts directory if it does not already exist).
curl -O https://download.nvidia.com/XFree86/Linux-x86_64/470.82.01/NVIDIA-Linux-x86_64-470.82.01.run
mv NVIDIA-Linux-x86_64-470.82.01.run artifacts

Note: NKP supports NVIDIA driver version 470.x. For more information, see NVIDIA driver.



Procedure

1. Create the secret that the GPU node pool uses. This secret is populated from the KIB overrides.
Example output of a file named overrides/nvidia.yaml.
gpu:
  types:
    - nvidia
build_name_extra: "-nvidia"

2. Create a secret on the bootstrap cluster that is populated from the above file. We will name it ${CLUSTER_NAME}-user-overrides.
kubectl create secret generic ${CLUSTER_NAME}-user-overrides --from-file=overrides.yaml=overrides/nvidia.yaml

3. Create an inventory and nodepool with the instructions below and use the ${CLUSTER_NAME}-user-overrides
secret.

a. Create an inventory object that has the same name as the node pool you’re creating and the details of the pre-provisioned machines that you want to add to it. For example, to create a node pool named gpu-nodepool, an inventory named gpu-nodepool must be present in the same namespace.
apiVersion: infrastructure.cluster.konvoy.d2iq.io/v1alpha1
kind: PreprovisionedInventory
metadata:
  name: ${MY_NODEPOOL_NAME}
spec:
  hosts:
    - address: ${IP_OF_NODE}
  sshConfig:
    port: 22
    user: ${SSH_USERNAME}
    privateKeyRef:
      name: ${NAME_OF_SSH_SECRET}
      namespace: ${NAMESPACE_OF_SSH_SECRET}

b. (Optional) If your pre-provisioned machines have overrides, you must create a secret that includes all of the
overrides you want to provide in one file. Create an override secret using the instructions detailed on this page.
c. Once the PreprovisionedInventory object and overrides are created, create a node pool.
nkp create nodepool preprovisioned -c ${MY_CLUSTER_NAME} ${MY_NODEPOOL_NAME} --override-secret-name ${MY_OVERRIDE_SECRET}

Note: Advanced users can use a combination of the --dry-run and --output=yaml or --output-
directory=<existing-directory> flags to get a complete set of node pool objects to modify locally
or store in version control.

Note: For more information regarding this flag or others, see the nkp create node pool section of the
documentation for either cluster or nodepool and select your provider.
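As a sketch of the flags mentioned in the note above (the output file name is arbitrary), generating the node pool objects without creating them could look like this:
nkp create nodepool preprovisioned -c ${MY_CLUSTER_NAME} ${MY_NODEPOOL_NAME} \
  --override-secret-name ${MY_OVERRIDE_SECRET} \
  --dry-run --output=yaml > ${MY_NODEPOOL_NAME}-objects.yaml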

Pre-provisioned Air-gapped GPU: Define Infrastructure


Define the cluster hosts and infrastructure in a pre-provisioned environment.

About this task


Konvoy needs to know how to access your cluster hosts. This is done using inventory resources. For initial cluster
creation, you must define a control plane and at least one worker pool.



This procedure sets the necessary environment variables.

Procedure

1. Export the following environment variables, ensuring that all control plane and worker nodes are included:
export CONTROL_PLANE_1_ADDRESS="<control-plane-address-1>"
export CONTROL_PLANE_2_ADDRESS="<control-plane-address-2>"
export CONTROL_PLANE_3_ADDRESS="<control-plane-address-3>"
export WORKER_1_ADDRESS="<worker-address-1>"
export WORKER_2_ADDRESS="<worker-address-2>"
export WORKER_3_ADDRESS="<worker-address-3>"
export WORKER_4_ADDRESS="<worker-address-4>"
export SSH_USER="<ssh-user>"
export SSH_PRIVATE_KEY_SECRET_NAME="$CLUSTER_NAME-ssh-key"

2. Use the following template to help you define your infrastructure. The environment variables that you set in the
previous step automatically replace the variable names when the inventory YAML file is created.
cat <<EOF > preprovisioned_inventory.yaml
---
apiVersion: infrastructure.cluster.konvoy.d2iq.io/v1alpha1
kind: PreprovisionedInventory
metadata:
  name: $CLUSTER_NAME-control-plane
  namespace: default
  labels:
    cluster.x-k8s.io/cluster-name: $CLUSTER_NAME
    clusterctl.cluster.x-k8s.io/move: ""
spec:
  hosts:
    # Create as many of these as needed to match your infrastructure
    # Note that the command-line parameter --control-plane-replicas determines how many control plane nodes will actually be used.
    #
    - address: $CONTROL_PLANE_1_ADDRESS
    - address: $CONTROL_PLANE_2_ADDRESS
    - address: $CONTROL_PLANE_3_ADDRESS
  sshConfig:
    port: 22
    # This is the username used to connect to your infrastructure. This user must be root or
    # have the ability to use sudo without a password
    user: $SSH_USER
    privateKeyRef:
      # This is the name of the secret you created in the previous step. It must exist in the same
      # namespace as this inventory object.
      name: $SSH_PRIVATE_KEY_SECRET_NAME
      namespace: default
---
apiVersion: infrastructure.cluster.konvoy.d2iq.io/v1alpha1
kind: PreprovisionedInventory
metadata:
  name: $CLUSTER_NAME-md-0
  namespace: default
  labels:
    cluster.x-k8s.io/cluster-name: $CLUSTER_NAME
    clusterctl.cluster.x-k8s.io/move: ""
spec:
  hosts:
    - address: $WORKER_1_ADDRESS
    - address: $WORKER_2_ADDRESS
    - address: $WORKER_3_ADDRESS
    - address: $WORKER_4_ADDRESS
  sshConfig:
    port: 22
    user: $SSH_USER
    privateKeyRef:
      name: $SSH_PRIVATE_KEY_SECRET_NAME
      namespace: default
EOF

Pre-provisioned Air-gapped GPU: Define Control Plane Endpoint


Define the control plane endpoint for your cluster and the connection mechanism. A control plane needs to have
three, five, or seven nodes so it can remain available if one or more nodes fail. A control plane with one node is not
for production use.
In addition, the control plane needs an endpoint that remains available if some nodes fail.
                      -------- cp1.example.com:6443
                     |
lb.example.com:6443 ---------- cp2.example.com:6443
                     |
                      -------- cp3.example.com:6443
In this example, the control plane endpoint host is lb.example.com, and the control plane endpoint port is 6443.
The control plane nodes are cp1.example.com, cp2.example.com, and cp3.example.com. The port of each API
server is 6443.

Select your Connection Mechanism


A virtual IP is the address that the client uses to connect to the service. A load balancer is a device that distributes
the client connections to the backend servers. Before you create a new NKP cluster, choose an external load balancer
(LB) or virtual IP.

• External load balancer


It is recommended that an external load balancer be the control plane endpoint. To distribute request load among
the control plane machines, configure the load balancer to send requests to all the control plane machines.
Configure the load balancer to send requests only to control plane machines that are responding to API requests.
• Built-in virtual IP
If an external load balancer is not available, use the built-in virtual IP. The virtual IP is not a load balancer; it does
not distribute request load among the control plane machines. However, if the machine receiving requests does not
respond to them, the virtual IP automatically moves to another machine.

Single-Node Control Plane

Caution: Do not use a single-node control plane in a production cluster.

A control plane with one node can use its single node as the endpoint, so you will not require an external load
balancer, or a built-in virtual IP. At least one control plane node must always be running. Therefore, to upgrade a
cluster with one control plane node, a spare machine must be available in the control plane inventory. This machine
is used to provision the new node before the old node is deleted. When the API server endpoints are defined, you can
create the cluster using the link in the Next Step below.

Note: Modify Control Plane Audit logs settings using the information contained in the page Configure the Control
Plane.



Known Limitations
The control plane endpoint port is also used as the API server port on each control plane machine. The default port is
6443. Before you create the cluster, ensure the port is available for use on each control plane machine.

Pre-provisioned Air-gapped GPU: Creating a Management Cluster


Create a new Pre-provisioned Kubernetes cluster in an air-gapped environment.

About this task


After you have defined the infrastructure and control plane endpoints, you can proceed to create the cluster
by following these steps to create a new pre-provisioned cluster.

Before you begin


First, you must name your cluster. Then, you run the command to deploy it. When specifying the cluster-name, you must use the same cluster-name as used when defining your inventory objects.

Procedure

1. Give your cluster a unique name suitable for your environment.

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the name has capital letters. See Kubernetes for more naming information.

2. Set the environment variable: export CLUSTER_NAME=<preprovisioned-example>

What to do next
Create a Kubernetes Cluster

In a Pre-provisioned environment, use the Kubernetes CSI and third-party drivers for local volumes and other storage devices in your data center. NKP uses a local static provisioner as the default storage provider, but localvolumeprovisioner is not suitable for production use; after turning off localvolumeprovisioner, choose a Kubernetes CSI compatible storage solution that is suitable for production.
The create cluster command below includes the --self-managed flag and uses the default external load balancer (LB) option (see the alternative in Step 1 for virtual IP):
nkp create cluster preprovisioned --cluster-name ${CLUSTER_NAME} \
--control-plane-endpoint-host <control plane endpoint host> \
--control-plane-endpoint-port <control plane endpoint port, if different than 6443> \
--pre-provisioned-inventory-file preprovisioned_inventory.yaml \
--ssh-private-key-file <path-to-ssh-private-key> \
--registry-mirror-url=${REGISTRY_URL} \
--registry-mirror-cacert=${REGISTRY_CA} \
--registry-mirror-username=${REGISTRY_USERNAME} \
--registry-mirror-password=${REGISTRY_PASSWORD} \
--self-managed

Note: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-
proxy, and --no-proxy and their related values in this command for it to be successful. More information is
available in Configuring an HTTP or HTTPS Proxy on page 644.

1. ALTERNATIVE Virtual IP: if you do not have an external LB and want to use a VIRTUAL IP provided by kube-vip, specify these flags as shown in the example below:
nkp create cluster preprovisioned \
--cluster-name ${CLUSTER_NAME} \
--control-plane-endpoint-host 196.168.1.10 \
--virtual-ip-interface eth1



2. Use the wait command to monitor the cluster control-plane readiness:
kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=30m
Output:
cluster.cluster.x-k8s.io/preprovisioned-example condition met

Note: Depending on the cluster size, it will take a few minutes to create.

When the command completes, you will have a running Kubernetes cluster! For bootstrap and custom YAML cluster
creation, refer to the Additional Infrastructure Customization section of the documentation for Pre-provisioned: Pre-
provisioned Infrastructure
Use this command to get the Kubernetes kubeconfig for the new cluster and proceed to installing the NKP
Kommander UI:
nkp get kubeconfig -c ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf

Note: If changing the Calico encapsulation, Nutanix recommends changing it after cluster creation, but before
production.

Audit Logs
To modify Control Plane Audit log settings, use the information contained in the page Configure the Control Plane.

Configure Air-gapped MetalLB with GPU


Create a MetalLB configmap for your Pre-provisioned Infrastructure.
Choose one of the following two protocols you want to use to announce service IPs. If your environment is not
currently equipped with a load balancer, you can use MetalLB. Otherwise, your own load balancer will work, and you
can continue the installation process with Pre-provisioned: Install Kommander. To use MetalLB, create a MetalLB
configMap for your Pre-provisioned infrastructure. MetalLB uses one of two protocols for exposing Kubernetes
services:

• Layer 2, with Address Resolution Protocol (ARP)


• Border Gateway Protocol (BGP)
Select one of the following procedures to create your MetalLB manifest for further editing.

Layer 2 Configuration
Layer 2 mode is the simplest to configure: in many cases, you don’t need any protocol-specific configuration, only IP
addresses.
Layer 2 mode does not require the IPs to be bound to the network interfaces of your worker nodes. It works by responding to ARP requests on your local network directly, giving the machine’s MAC address to clients.

• MetalLB IP address ranges or CIDRs need to be within the node’s primary network subnet.
• MetalLB IP address ranges or CIDRs and node subnets must not conflict with the Kubernetes cluster pod and service subnets.
For example, the following configuration gives MetalLB control over IPs from 192.168.1.240 to 192.168.1.250, and
configures Layer 2 mode:
The following values are generic; enter your specific values into the fields where applicable.
cat << EOF > metallb-conf.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
EOF
kubectl apply -f metallb-conf.yaml

BGP Configuration
For a basic configuration featuring one BGP router and one IP address range, you need four pieces of information:

• The router IP address that MetalLB needs to connect to.


• The router’s autonomous systems (AS) number.
• The AS number for MetalLB to use.
• An IP address range expressed as a Classless Inter-Domain Routing (CIDR) prefix.
As an example, if you want to give MetalLB the range 192.168.10.0/24 and AS number 64500, and connect it to a
router at 10.0.0.1 with AS number 64501, your configuration will look like this:
cat << EOF > metallb-conf.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.0.0.1
      peer-asn: 64501
      my-asn: 64500
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 192.168.10.0/24
EOF
kubectl apply -f metallb-conf.yaml

Pre-provisioned Air-gapped GPU: Install Kommander


This section provides installation instructions for the Kommander component of NKP in an air-gapped pre-provisioned environment.

About this task


Once you have installed the Konvoy component of NKP, you will continue with the installation of the
Kommander component that will bring up the UI dashboard.

Tip: Tips and Recommendations

• The --kubeconfig=${CLUSTER_NAME}.conf flag ensures that you install Kommander on the correct
cluster. For alternatives, see Provide Context for Commands with a kubeconfig File.



• Applications can take longer to deploy, and time out the installation. Add the --wait-timeout <time
to wait> flag and specify a period of time (for example, 1h) to allocate more time to the deployment
of applications.
• If the Kommander installation fails, or you wish to reconfigure applications, rerun the install
command to retry.

Prerequisites:

• Ensure you have reviewed all Prerequisites for Install.


• Ensure you have a default StorageClass.
• Ensure you have loaded all the necessary images for your configuration. See: Load the Images into Your Registry:
Air-gapped Environments.
• Note the name of the cluster where you want to install Kommander. If you do not know the cluster name, use
kubectl get clusters -A to display and find it.

Create your Kommander Installation Configuration File

Procedure

1. Set the environment variable for your cluster.


export CLUSTER_NAME=<your-management-cluster-name>

2. Copy the kubeconfig file of your Management cluster to your local directory.
nkp get kubeconfig -c ${CLUSTER_NAME} >> ${CLUSTER_NAME}.conf

3. Create a configuration file for the deployment.


nkp install kommander --init > kommander.yaml

4. Edit the installer file to include configuration overrides for the rook-ceph-cluster. NKP’s default
configuration ships Ceph with PersistentVolumeClaim (PVC) based storage which requires your CSI provider
to support PVC with type volumeMode: Block. As this is not possible with the default local static provisioner,
you can install Ceph in host storage mode. You can choose whether Ceph’s object storage daemon (osd) pods can
consume all or just some of the devices on your nodes. Include one of the following Overrides.

a. To automatically assign all raw storage devices on all nodes to the Ceph cluster.
rook-ceph-cluster:
  enabled: true
  values: |
    cephClusterSpec:
      storage:
        storageClassDeviceSets: []
        useAllDevices: true
        useAllNodes: true
        deviceFilter: "<<value>>"

b. To assign specific storage devices on all nodes to the Ceph cluster.


rook-ceph-cluster:
  enabled: true
  values: |
    cephClusterSpec:
      storage:
        storageClassDeviceSets: []
        useAllNodes: true
        useAllDevices: false
        deviceFilter: "^sdb."

Note: If you want to assign specific devices to specific nodes using the deviceFilter option, refer to
Specific Nodes and Devices. For general information on the deviceFilter value, refer to Storage
Selection Settings.

5. If required: Customize your kommander.yaml.

a. See Kommander Customizations page for customization options. Some options include Custom Domains and
Certificates, HTTP proxy and External Load Balancer.

6. Enable NVIDIA platform services for GPU resources in the same kommander.yaml file.
apps:
  nvidia-gpu-operator:
    enabled: true

7. Append the correct Toolkit version based on your OS.

a. RHEL 8.4/8.6
If you’re using RHEL 8.4/8.6 as the base operating system for your GPU enabled nodes, set the
toolkit.version parameter in your Kommander Installer Configuration file or <kommander.yaml> to
the following
kind: Installation
apps:
  nvidia-gpu-operator:
    enabled: true
    values: |
      toolkit:
        version: v1.14.6-ubi8

b. Ubuntu 18.04 and 20.04


If you’re using Ubuntu 18.04 or 20.04 as the base operating system for your GPU enabled nodes, set the
toolkit.version parameter in your Kommander Installer Configuration file or <kommander.yaml> to
the following
kind: Installation
apps:
  nvidia-gpu-operator:
    enabled: true
    values: |
      toolkit:
        version: v1.14.6-ubuntu20.04

8. Enable NKP Catalog Applications and install Kommander in the same kommander.yaml from the previous section by adding these values (if you are enabling NKP Catalog Apps) for nkp-catalog-applications.
apiVersion: config.kommander.mesosphere.io/v1alpha1
kind: Installation
catalog:
  repositories:
    - name: NKP-catalog-applications
      labels:
        kommander.d2iq.io/project-default-catalog-repository: "true"
        kommander.d2iq.io/workspace-default-catalog-repository: "true"
        kommander.d2iq.io/gitapps-gitrepository-type: "nkp"
      gitRepositorySpec:
        url: https://github.com/mesosphere/nkp-catalog-applications
        ref:
          tag: v2.12.0

9. Use the customized kommander.yaml to install NKP.


nkp install kommander --installer-config kommander.yaml --kubeconfig=${CLUSTER_NAME}.conf

Note: If you only want to enable catalog applications in an existing configuration, add these values to your existing
installer configuration file to maintain your Management cluster's settings.
If you want to enable NKP Catalog applications after installing NKP, see the topic Configuring NKP
Catalog Applications after Installing NKP.

Verifying your Installation and UI Log in


Verify Kommander Install and Log in to the Dashboard UI

About this task


Verify Kommander Installation

Note: If the Kommander installation fails or you wish to reconfigure applications, you can rerun the install command
to retry the installation.

Procedure
You can check the status of the installation using the following command.
kubectl -n kommander wait --for condition=Ready helmreleases --all --timeout 15m

Note: If you prefer the CLI to not wait for all applications to become ready, you can set the --wait=false flag.
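For example, assuming the kommander.yaml and kubeconfig from the previous steps, the non-waiting variant of the
install command is a sketch like the following:
nkp install kommander --installer-config kommander.yaml --kubeconfig=${CLUSTER_NAME}.conf --wait=false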

This waits for each of the Helm charts to reach their Ready condition, eventually resulting in output resembling the
following:
helmrelease.helm.toolkit.fluxcd.io/centralized-grafana condition met
helmrelease.helm.toolkit.fluxcd.io/dex condition met
helmrelease.helm.toolkit.fluxcd.io/dex-k8s-authenticator condition met
helmrelease.helm.toolkit.fluxcd.io/fluent-bit condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-logging condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-loki condition met
helmrelease.helm.toolkit.fluxcd.io/karma condition met
helmrelease.helm.toolkit.fluxcd.io/kommander condition met
helmrelease.helm.toolkit.fluxcd.io/kommander-appmanagement condition met
helmrelease.helm.toolkit.fluxcd.io/kube-prometheus-stack condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/kubefed condition met
helmrelease.helm.toolkit.fluxcd.io/kubernetes-dashboard condition met
helmrelease.helm.toolkit.fluxcd.io/kubetunnel condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator-logging condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-adapter condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/reloader condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph-cluster condition met
helmrelease.helm.toolkit.fluxcd.io/thanos condition met
helmrelease.helm.toolkit.fluxcd.io/traefik condition met
helmrelease.helm.toolkit.fluxcd.io/traefik-forward-auth-mgmt condition met
helmrelease.helm.toolkit.fluxcd.io/velero condition met

Failed HelmReleases

Procedure
If an application fails to deploy, check the status of a HelmRelease using the following command.
kubectl -n kommander get helmrelease <HELMRELEASE_NAME>

If you find any HelmReleases in a "broken" release state, such as "exhausted" or "another rollback/release in
progress", trigger a reconciliation of the HelmRelease using the following commands.
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": true}]'
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": false}]'

Log in to the UI

Procedure

1. By default, you can log in to the UI in Kommander with the credentials given using this command.
   nkp open dashboard --kubeconfig=${CLUSTER_NAME}.conf

2. Retrieve your credentials at any time using the following command.
   kubectl -n kommander get secret NKP-credentials -o go-template='Username: {{.data.username|base64decode}}{{ "\n"}}Password: {{.data.password|base64decode}}{{ "\n"}}'

3. Retrieve the URL used for accessing the UI using the following command.
   kubectl -n kommander get svc kommander-traefik -o go-template='https://{{with index .status.loadBalancer.ingress 0}}{{or .hostname .ip}}{{end}}/NKP/kommander/dashboard{{ "\n"}}'
   Only use the static credentials to access the UI for configuring an external identity provider. Treat them as
   backup credentials rather than using them for normal access.

a. Rotate the password using the command nkp experimental rotate dashboard-password.
The example output displays the new password:
Password: kqZ31lMBSCLcBjUKVwLJMQL2PxalipIzZw5Pjyw09wDqjWV3dz2wPSSBYi09JGJp

Create Managed Clusters Using the NKP CLI


This topic explains how to continue using the CLI to create managed clusters in an air-gapped Pre-
provisioned environment with GPU rather than switching to the UI dashboard.

About this task


After initial cluster creation, you can create additional clusters from the CLI. In a previous
step, the new cluster was created as Self-managed, which allows it to be a Management cluster or a standalone
cluster. Subsequent new clusters are not self-managed, as they will likely be Managed or Attached
clusters to this Management Cluster.

Note: When creating Managed clusters, you do not need to create and move CAPI objects, or install the Kommander
component. Those tasks are only done on Management clusters!
To make the new managed cluster a part of a Workspace, set that workspace environment variable.



Procedure

1. If you have an existing Workspace, run this command to find its name.
kubectl get workspace -A

2. When you have the Workspace name, set the WORKSPACE_NAMESPACE environment variable.
export WORKSPACE_NAMESPACE=<workspace_namespace>

Note: If you need to create a new Workspace, follow the instructions to Create a New Workspace

Name Your Cluster

About this task


Each cluster must have a unique name.
After you have defined the infrastructure and control plane endpoints, you can proceed to creating the cluster by
following these steps to create a new pre-provisioned cluster. Because this cluster is not created with the
--self-managed flag, it becomes a Managed cluster administered by your Management cluster.
First you must name your cluster. Then you run the command to deploy it.

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if
the name has capital letters. See Kubernetes for more naming information.
When specifying the cluster-name, you must use the same cluster-name as used when defining your
inventory objects.

Perform both steps to name the cluster:

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable using the command export CLUSTER_NAME=<preprovisioned-additional>

Create a Cluster with GPU AMI

Procedure

• If a custom AMI was created using Konvoy Image Builder, use the --ami flag. The custom AMI ID is printed and
  written to ./manifest.json. To use the built AMI with Konvoy, specify it with the --ami flag when calling
  cluster create in Step 1 of the next section, where you create your Kubernetes cluster.

Create a Kubernetes Cluster

About this task


After you have defined the infrastructure and control plane endpoints, you can proceed to create the cluster
by following these steps to create a new pre-provisioned cluster.
This process creates a Managed cluster that is administered by the Management cluster you created earlier.

Tip: Before you create a new NKP cluster below, choose an external load balancer (LB) or virtual IP and use the
corresponding nkp create cluster command.



In a Pre-provisioned environment, use the Kubernetes CSI and third-party drivers for local volumes and other storage
devices in your data center.

Caution: NKP uses local static provisioners as the Default Storage Providers on page 33 for a pre-provisioned
environment. However, localvolumeprovisioner is not suitable for production use. Use Kubernetes CSI
compatible storage that is suitable for production.

After disabling localvolumeprovisioner, you can choose from any of the storage options available for
Kubernetes. To make that storage the default storage, use the commands shown in this section of the Kubernetes
documentation: Change or Manage Multiple StorageClasses on page 34
For Pre-provisioned environments, you define a set of nodes that already exist. During the cluster creation process,
Konvoy Image Builder (KIB), which is built into NKP, automatically runs the machine configuration process (which KIB
uses to build images for other providers) against the set of nodes that you defined. This results in your pre-existing or
pre-provisioned nodes being configured properly.
The following command relies on the pre-provisioned cluster API infrastructure provider to initialize the Kubernetes
control plane and worker nodes on the hosts defined in the inventory YAML previously created.

Procedure

1. This command uses the default external load balancer (LB) option.
nkp create cluster pre-provisioned \
--cluster-name ${CLUSTER_NAME} \
--control-plane-endpoint-host <control plane endpoint host> \
  --control-plane-endpoint-port <control plane endpoint port, if different than 6443> \
--pre-provisioned-inventory-file preprovisioned_inventory.yaml \
--ssh-private-key-file <path-to-ssh-private-key> \
--registry-mirror-url=${_REGISTRY_URL} \
--registry-mirror-cacert=${_REGISTRY_CA} \
--registry-mirror-username=${_REGISTRY_USERNAME} \
--registry-mirror-password=${_REGISTRY_PASSWORD}

2. Use the wait command to monitor the cluster control-plane readiness.


kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=30m

Note: Depending on the cluster size, it will take a few minutes to create.

cluster.cluster.x-k8s.io/preprovisioned-additional condition met

Tip: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-
proxy, and --no-proxy and their related values in this command for it to be successful. More information is
available in Configuring an HTTP or HTTPS Proxy on page 644.
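As a sketch, the same creation command from step 1 with the proxy flags added might look like the following; the proxy
endpoint and no-proxy list are placeholders for your environment:
nkp create cluster pre-provisioned \
  --cluster-name ${CLUSTER_NAME} \
  --control-plane-endpoint-host <control plane endpoint host> \
  --pre-provisioned-inventory-file preprovisioned_inventory.yaml \
  --ssh-private-key-file <path-to-ssh-private-key> \
  --registry-mirror-url=${_REGISTRY_URL} \
  --registry-mirror-cacert=${_REGISTRY_CA} \
  --registry-mirror-username=${_REGISTRY_USERNAME} \
  --registry-mirror-password=${_REGISTRY_PASSWORD} \
  --http-proxy=http://proxy.example.com:3128 \
  --https-proxy=http://proxy.example.com:3128 \
  --no-proxy=127.0.0.1,localhost,.svc,.cluster.local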

Manually Attach an NKP CLI Cluster to the Management Cluster

Procedure

When you create a Managed Cluster with the NKP CLI, it attaches automatically to the Management Cluster after a
few moments. However, if you do not set a workspace, the attached cluster will be created in the default workspace.
To ensure that the attached cluster is created in your desired workspace namespace, follow these instructions:

1. Confirm you have your MANAGED_CLUSTER_NAME variable set with the following command.
echo ${MANAGED_CLUSTER_NAME}



2. Retrieve your kubeconfig from the cluster you have created without setting a workspace.
nkp get kubeconfig --cluster-name ${MANAGED_CLUSTER_NAME} >
${MANAGED_CLUSTER_NAME}.conf

3. Note: This is only necessary if you never set the workspace of your cluster upon creation.

You can now either attach the cluster in the UI (see the earlier UI instructions for attaching a cluster to a
workspace), or attach your cluster to the workspace you want using the CLI.

4. Retrieve the workspace where you want to attach the cluster.


kubectl get workspaces -A

5. Set the WORKSPACE_NAMESPACE environment variable.


export WORKSPACE_NAMESPACE=<workspace-namespace>

6. You need to create a secret in the desired workspace before attaching the cluster to that workspace. Retrieve the
kubeconfig secret value of your cluster.
kubectl -n default get secret ${MANAGED_CLUSTER_NAME}-kubeconfig -o go-template='{{.data.value}}{{ "\n"}}'

7. This will return a lengthy value. Copy this entire string for a secret using the template below as a reference.
Create a new attached-cluster-kubeconfig.yaml file.
apiVersion: v1
kind: Secret
metadata:
  name: <your-managed-cluster-name>-kubeconfig
  labels:
    cluster.x-k8s.io/cluster-name: <your-managed-cluster-name>
type: cluster.x-k8s.io/secret
data:
  value: <value-you-copied-from-secret-above>

8. Create this secret in the desired workspace.


kubectl apply -f attached-cluster-kubeconfig.yaml --namespace ${WORKSPACE_NAMESPACE}

9. Create this kommandercluster object to attach the cluster to the workspace.


cat << EOF | kubectl apply -f -
apiVersion: kommander.mesosphere.io/v1beta1
kind: KommanderCluster
metadata:
  name: ${MANAGED_CLUSTER_NAME}
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  kubeconfigRef:
    name: ${MANAGED_CLUSTER_NAME}-kubeconfig
  clusterRef:
    capiCluster:
      name: ${MANAGED_CLUSTER_NAME}
EOF



10. You can now view this cluster in your Workspace in the UI, and you can confirm its status by running the
command below. It might take a few minutes to reach "Joined" status.
kubectl get kommanderclusters -A
If you have several Pro Clusters and want to turn one of them into a Managed Cluster to be centrally
administered by a Management Cluster, refer to Platform Expansion.

AWS Installation Options


For an environment on the AWS infrastructure, installation options based on your environment variables are
provided for you in this section.
Remember, there are always more options for custom YAML in the Custom Installation and Additional
Infrastructure Tools section, but this will get you operating in the most common scenarios.
If not already done, see the documentation for:

• Resource Requirements on page 38


• Installing NKP on page 47
• Prerequisites for Installation on page 44

Note: Additional Resource Information specific to AWS is below.

• Control Plane Nodes - NKP on AWS defaults to deploying an m5.xlarge instance with an 80GiB root
volume for control plane nodes, which meets the above resource requirements.
• Worker Nodes - NKP on AWS defaults to deploying an m5.2xlarge instance with an 80GiB root volume for
worker nodes, which meets the above resource requirements. To change these defaults, see the sketch below.
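If the defaults do not fit your workload, instance sizing can usually be adjusted at cluster creation time. The sketch
below assumes the sizing flags shown exist in your CLI version; confirm the exact flag names with
nkp create cluster aws --help before relying on them:
nkp create cluster aws \
  --cluster-name=${CLUSTER_NAME} \
  --ami=${AWS_AMI_ID} \
  --control-plane-instance-type m5.xlarge \
  --worker-instance-type m5.2xlarge \
  --worker-replicas 4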

Section Contents
Supported environment combinations:

AWS Installation
This installation provides instructions to install NKP in an AWS non-air-gapped environment.
Remember, there are always more options for custom YAML in the Custom Installation and Additional
Infrastructure Tools section, but this will get you operating with basic features.
If not already done, see the documentation for:

• Resource Requirements on page 38


• Installing NKP on page 47
• Prerequisites for Installation on page 44

AWS Prerequisites
Before you begin using Konvoy with AWS, you must:
1. Follow the steps to create permissions and roles on the Minimal Permissions and Role to Create Clusters page.
2. Create Cluster IAM Policies and Roles
3. Export the AWS region where you want to deploy the cluster:
export AWS_REGION=us-west-2



4. Export the AWS profile with the credentials you want to use to create the Kubernetes cluster:
export AWS_PROFILE=<profile>

If using AWS ECR as your local private registry, more information can be found on the Registry Mirror Tools page.
To deploy a cluster with a custom image in a region where CAPI images are not provided, you need to use Konvoy
Image Builder to create your own image for the region.

Note: For multi-tenancy, every tenant needs to be in a different AWS account to ensure they are truly independent of
other tenants in order to enforce security.

Section Contents

AWS: Creating an Image


Learn how to build a custom AMI for use with NKP.

About this task


This procedure describes how to use the Konvoy Image Builder (KIB) to create a Cluster API compliant Amazon
Machine Image (AMI). KIB uses variable overrides to specify base image and container images to use in your new
AMI.
AMI images contain configuration information and software to create a specific, pre-configured, operating
environment. For example, you can create an AMI image of your current computer system settings and software.
The AMI image can then be replicated and distributed, creating your computer system for other users. You can use
overrides files to customize some of the components installed on your machine image. For example, you can tell KIB
to install the FIPS versions of the Kubernetes components.
In previous NKP releases, AMI images provided by the upstream CAPA project were used if you did not specify
an AMI. However, the upstream images are not recommended for production and may not always be available.
Therefore, NKP now requires you to specify an AMI when creating a cluster. To create an AMI, use Konvoy Image
Builder. Explore the Customize your Image topic for more options about overrides.
The prerequisites to use Konvoy Image Builder are:

Procedure

1. Download the KIB bundle for your version of NKP prefixed with konvoy-image-bundle for your OS.

2. Check the Supported Infrastructure Operating Systems.

3. Check the Supported Kubernetes Version for your Provider.

4. Create a working registry using one of the following (a quick test-only sketch follows this list).

» Podman Version 4.0 or later for Linux. For more information, see https://podman.io/getting-started/
installation. For host requirements, see https://kind.sigs.k8s.io/docs/user/rootless/#host-requirements.
» Docker container engine version 18.09.2 or 20.10.0 installed for Linux or MacOS. For more information, see
https://docs.docker.com/get-docker/.
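If you do not already have a registry available, one quick way to stand up a throwaway registry for testing (not for
production use) is the stock registry image; the port and container name below are arbitrary:
docker run -d --restart=always -p 5000:5000 --name local-registry registry:2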

5. Ensure you have met the minimal set of permissions from the AWS Image Builder Book.

6. Create the minimal IAM permissions for KIB to create an image for an AWS account using Konvoy Image Builder.



Extract the KIB Bundle

About this task


If not done previously during Konvoy Image Builder download in Prerequisites, extract the bundle and cd into the
extracted konvoy-image-bundle-$VERSION folder. Otherwise, proceed to Build the Image below.
In previous NKP releases, the distro package bundles were included in the downloaded air-gapped bundle. Currently,
that air-gapped bundle contains the following artifacts with the exception of the distro packages:

• NKP Kubernetes packages


• Python packages (provided by upstream)
• Containerd tarball

Procedure

1. Download nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz (see Downloading NKP on page 16), and extract the
   tarball to a local directory.
   tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz && cd nkp-v2.12.0/kib

2. You will need to fetch the distro packages as well as other artifacts. By fetching the distro packages from distro
repositories, you get the latest security fixes available at machine image build time.

3. In your download location, there is a bundles directory with all the steps to create an OS package bundle for a
particular OS. To create it, run the new NKP command create-package-bundle. This builds an OS bundle
using the Kubernetes version defined in ansible/group_vars/all/defaults.yaml. Example command.
./konvoy-image create-package-bundle --os redhat-8.4 --output-directory=artifacts

Note:

• For FIPS, pass the flag: --fips

• For RHEL OS, pass your Red Hat subscription manager credentials by exporting
  RHSM_ACTIVATION_KEY and RHSM_ORG_ID. Example commands:
  export RHSM_ACTIVATION_KEY="-ci"
  export RHSM_ORG_ID="1232131"

4. Follow the instructions below to build an AMI.

Note: The konvoy-image binary and all supporting folders are also extracted. When run, konvoy-image bind
mounts the current working directory (${PWD}) into the container to be used.

Set the environment variables for AWS access. The following variables must be set using your credentials
including required IAM:
export AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY
export AWS_DEFAULT_REGION
If you have an override file to configure specific attributes of your AMI, add it as shown in the sketch below.
Instructions for customizing an override file are found on this page: Image Overrides
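As a sketch, an override file is passed to the build with the --overrides flag; the file name below is only an example of
the sample overrides that ship with KIB, so substitute your own:
konvoy-image build aws images/ami/rhel-86.yaml --overrides overrides/fips.yaml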



Build the Image

About this task


Depending on which version of NKP you are running, steps and flags will be different. To deploy in a region where
CAPI images are not provided, you need to use KIB to create your own image for the region. For a list of supported
AWS regions, refer to the Published AMI information from AWS.

Procedure
Run the konvoy-image command to build and validate the image.
konvoy-image build aws images/ami/rhel-86.yaml

a. By default, it builds in the us-west-2 region. To specify another region, set the --region flag as shown in the
   command below.
konvoy-image build aws --region us-east-1 images/ami/rhel-86.yaml

Note: Ensure you have named the correct AMI image YAML file for your OS in the konvoy-image build
command.

What to do next
After KIB provisions the image successfully, the AMI ID is printed and written to the packer.pkr.hcl (Packer config)
file. This file has an artifact_id field whose value provides the name of the AMI ID, as shown in the example below.
That is the AMI you use in the nkp create cluster command.
{
"builds": [
{
"name": "kib_image",
"builder_type": "amazon-ebs",
"build_time": 1698086886,
"files": null,
"artifact_id": "us-west-2:ami-04b8dfef8bd33a016",
"packer_run_uuid": "80f8296c-e975-d394-45f9-49ef2ccc6e05",
"custom_data": {
"containerd_version": "",
"distribution": "RHEL",
"distribution_version": "8.6",
"kubernetes_cni_version": "",
"kubernetes_version": "1.26.6"
}
}
],
"last_run_uuid": "80f8296c-e975-d394-45f9-49ef2ccc6e05"
}

What to do next
1. To use a custom AMI when creating your cluster, you must create that AMI using KIB first. Then perform the
export and name the custom AMI for use in the command nkp create cluster:
export AWS_AMI_ID=ami-<ami-id-here>

Note: Inside the sections for either Non-air-gapped or Air-gapped cluster creation, you will find the instructions for
how to apply custom images.



Related Information

Procedure

• To use a local registry even in a non-air-gapped environment, download and extract the complete NKP air-gapped
  bundle for this release (that is, nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz) to load the registry. See
  Downloading NKP on page 16.

• To view the complete set of instructions, see Load the Registry.

AWS: Creating the Management Cluster


Create an AWS Management Cluster in a non-air-gapped environment.

About this task


Use this procedure to create a self-managed Management cluster with NKP. A self-managed cluster refers to one
in which the CAPI resources and controllers that describe and manage it are running on the same cluster they are
managing.
If you use these instructions to create a cluster on AWS using the NKP default settings without any edits to
configuration files or additional flags, your cluster is deployed on an Ubuntu 20.04 operating system image with 3
control plane nodes, and 4 worker nodes.
NKP uses AWS CSI as the default storage provider. You can use a Kubernetes CSI compatible storage solution
that is suitable for production. See the Kubernetes documentation called Changing the Default Storage Class for
more information.
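For reference, marking a different StorageClass as the default is a standard Kubernetes operation; in the sketch below,
ebs-sc is a placeholder for the class you want to promote:
kubectl patch storageclass ebs-sc -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'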
In previous NKP releases, AMI images provided by the upstream CAPA project were used if you did not specify
an AMI. However, the upstream images are not recommended for production and may not always be available.
Therefore, NKP now requires you to specify an AMI when creating a cluster. To create an AMI, use Konvoy Image
Builder.

Before you begin


First you must name your cluster.

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the
name has capital letters. See Kubernetes for more naming information.

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable:


export CLUSTER_NAME=<aws-example>



3. There are two approaches to supplying the ID of your AMI. Either provide the ID of the AMI or provide a way for
NKP to discover the AMI using location, format and OS information.

a. Option One - Provide the ID of your AMI.


Use the example command below leaving the existing flag that provides the AMI ID: --ami AMI_ID
b. Option Two - Provide a path for your AMI with the information required for image discovery.

   • Where the AMI is published using your AWS Account ID: --ami-owner AWS_ACCOUNT_ID
   • The base OS information: --ami-base-os ubuntu-20.04
   • The format or string used to search for matching AMIs, ensuring it references the Kubernetes version plus
     the base OS name: --ami-format 'example-{{.BaseOS}}-?{{.K8sVersion}}-*'

Note:

• The AMI must be created with Konvoy Image Builder in order to use the registry mirror
feature.
export AWS_AMI_ID=<ami-...>

• (Optional) Registry Mirror - Configure your cluster to use an existing local registry as a mirror
  when attempting to pull images. Below is an AWS ECR example, where REGISTRY_URL is the
  address of an existing local registry accessible in the VPC; the new cluster nodes will be
  configured to use it as a mirror registry when pulling images:
  export REGISTRY_URL=<ecr-registry-URI>

4. Run this command to create your Kubernetes cluster by providing the image ID and using any relevant flags.
nkp create cluster aws \
--cluster-name=${CLUSTER_NAME} \
--ami=${AWS_AMI_ID} \
--additional-tags=owner=$(whoami) \
--with-aws-bootstrap-credentials=true \
--self-managed
If providing the AMI path, use these flags in place of AWS_AMI_ID:
--ami-owner AWS_ACCOUNT_ID \
--ami-base-os ubuntu-20.04 \
--ami-format 'example-{{.BaseOS}}-?{{.K8sVersion}}-*' \

» Additional cluster creation flags based on your environment (a combined example follows this list):

  » Optional Registry flag: --registry-mirror-url=${REGISTRY_URL}
  » Flatcar OS flag to instruct the bootstrap cluster to make changes related to the installation paths: --os-hint flatcar
  » HTTP or HTTPS flags if you use proxies: --http-proxy, --https-proxy, and --no-proxy
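Putting these optional flags together, a creation command for a proxied environment with a registry mirror might look
like the following sketch; the proxy endpoint and no-proxy list are placeholders:
nkp create cluster aws \
  --cluster-name=${CLUSTER_NAME} \
  --ami=${AWS_AMI_ID} \
  --additional-tags=owner=$(whoami) \
  --with-aws-bootstrap-credentials=true \
  --self-managed \
  --registry-mirror-url=${REGISTRY_URL} \
  --http-proxy=http://proxy.example.com:3128 \
  --https-proxy=http://proxy.example.com:3128 \
  --no-proxy=127.0.0.1,localhost,169.254.169.254,.svc,.cluster.local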

AWS: Install Kommander


This section provides installation instructions for the Kommander component of NKP in a non-air-gapped
AWS environment.

About this task


Once you have installed the Konvoy component of NKP, you will continue with the installation of the
Kommander component that will bring up the UI dashboard.



Tip: Tips and Recommendations

• The --kubeconfig=${CLUSTER_NAME}.conf flag ensures that you install Kommander on the correct
cluster. For alternatives, see Provide Context for Commands with a kubeconfig File.
• Applications can take longer to deploy and can time out the installation. Add the --wait-timeout <time
  to wait> flag and specify a period of time (for example, 1h) to allocate more time to the deployment
  of applications, as shown in the example after this list.
• If the Kommander installation fails, or you wish to reconfigure applications, rerun the install
command to retry.
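For example, an install invocation that allows one hour for applications to deploy is a sketch like the following:
nkp install kommander --installer-config kommander.yaml --kubeconfig=${CLUSTER_NAME}.conf --wait-timeout 1h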

Prerequisites:

• Ensure you have reviewed all Prerequisites for Install.


• Ensure you have a default StorageClass.
• Note the name of the cluster where you want to install Kommander. If you do not know the cluster name, use
kubectl get clusters -A to display and find it.

Create your Kommander Installation Configuration File

Procedure

1. Set the environment variable for your cluster.


export CLUSTER_NAME=<your-management-cluster-name>

2. Copy the kubeconfig file of your Management cluster to your local directory.
nkp get kubeconfig -c ${CLUSTER_NAME} >> ${CLUSTER_NAME}.conf

3. Create a configuration file for the deployment.


nkp install kommander --init > kommander.yaml

4. If required: Customize your kommander.yaml.

a. See Kommander Customizations page for customization options. Some options include Custom Domains and
Certificates, HTTP proxy and External Load Balancer.

5. This step is only required if your cluster uses a custom AWS VPC and requires an internal load balancer. Set the
   traefik annotation to create an internal-facing ELB.
   apps:
     traefik:
       enabled: true
       values: |
         service:
           annotations:
             service.beta.kubernetes.io/aws-load-balancer-internal: "true"

6. Enable NKP Catalog Applications and install Kommander. In the same kommander.yaml from the previous
   section, add these values (if you are enabling NKP Catalog Apps) for nkp-catalog-applications.
   apiVersion: config.kommander.mesosphere.io/v1alpha1
   kind: Installation
   catalog:
     repositories:
       - name: nkp-catalog-applications
         labels:
           kommander.d2iq.io/project-default-catalog-repository: "true"
           kommander.d2iq.io/workspace-default-catalog-repository: "true"
           kommander.d2iq.io/gitapps-gitrepository-type: "nkp"
         gitRepositorySpec:
           url: https://github.com/mesosphere/nkp-catalog-applications
           ref:
             tag: v2.12.0

7. Use the customized kommander.yaml to install NKP.
   nkp install kommander --installer-config kommander.yaml --kubeconfig=${CLUSTER_NAME}.conf

Note: If you only want to enable catalog applications in an existing configuration, add these values to your existing
installer configuration file to maintain your Management cluster's settings.
If you want to enable NKP Catalog applications after installing NKP, see Configuring NKP Catalog
Applications after Installing NKP.

AWS: Verifying your Installation and UI Log in


Verify Kommander Install and Log in to the Dashboard UI

About this task


Verify Kommander Installation

Note: If the Kommander installation fails or you wish to reconfigure applications, you can rerun the install command
to retry the installation.

Procedure
You can check the status of the installation using the following command.
kubectl -n kommander wait --for condition=Ready helmreleases --all --timeout 15m

Note: If you prefer the CLI to not wait for all applications to become ready, you can set the --wait=false flag.

This waits for each of the Helm charts to reach their Ready condition, eventually resulting in output resembling the
following:
helmrelease.helm.toolkit.fluxcd.io/centralized-grafana condition met
helmrelease.helm.toolkit.fluxcd.io/dex condition met
helmrelease.helm.toolkit.fluxcd.io/dex-k8s-authenticator condition met
helmrelease.helm.toolkit.fluxcd.io/fluent-bit condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-logging condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-loki condition met
helmrelease.helm.toolkit.fluxcd.io/karma condition met
helmrelease.helm.toolkit.fluxcd.io/kommander condition met
helmrelease.helm.toolkit.fluxcd.io/kommander-appmanagement condition met
helmrelease.helm.toolkit.fluxcd.io/kube-prometheus-stack condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/kubefed condition met
helmrelease.helm.toolkit.fluxcd.io/kubernetes-dashboard condition met
helmrelease.helm.toolkit.fluxcd.io/kubetunnel condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator-logging condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-adapter condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/reloader condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph-cluster condition met
helmrelease.helm.toolkit.fluxcd.io/thanos condition met
helmrelease.helm.toolkit.fluxcd.io/traefik condition met
helmrelease.helm.toolkit.fluxcd.io/traefik-forward-auth-mgmt condition met
helmrelease.helm.toolkit.fluxcd.io/velero condition met

Failed HelmReleases

Procedure
If an application fails to deploy, check the status of a HelmRelease using the following command.
kubectl -n kommander get helmrelease <HELMRELEASE_NAME>

If you find any HelmReleases in a "broken" release state, such as "exhausted" or "another rollback/release in
progress", trigger a reconciliation of the HelmRelease using the following commands.
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": true}]'
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": false}]'

Log in to the UI

Procedure

1. By default, you can log in to the UI in Kommander with the credentials given using this command.
nkp open dashboard --kubeconfig=${CLUSTER_NAME}.conf

2. Retrieve your credentials at any time if necessary.


kubectl -n kommander get secret NKP-credentials -o go-template='Username: {{.data.username|base64decode}}{{ "\n"}}Password: {{.data.password|base64decode}}{{ "\n"}}'

3. Retrieve the URL used for accessing the UI with the following.
kubectl -n kommander get svc kommander-traefik -o go-template='https://{{with index .status.loadBalancer.ingress 0}}{{or .hostname .ip}}{{end}}/NKP/kommander/dashboard{{ "\n"}}'
Only use these static credentials to access the UI for configuring an external identity provider. Treat them as
backup credentials rather than using them for normal access.

a. Rotate the password using the following command.


nkp experimental rotate dashboard-password
The output displays the new password:
Password: kqZ31lMBSCLcBjUKVwLJMQL2PxalipIzZw5Pjyw09wDqjWV3dz2wPSSBYi09JGJp

Dashboard UI Functions

Procedure

After installing the Konvoy component and building a cluster, as well as successfully installing Kommander and logging
in to the UI, you are now ready to customize configurations using the Day 2 Cluster Operations Management section
of the documentation. The majority of this customization, such as attaching clusters and deploying applications, takes
place in the dashboard or UI of NKP. The Day 2 section allows you to manage cluster operations and their
application workloads to optimize your organization's productivity.



• Continue to the NKP Dashboard.

AWS: Creating Managed Clusters Using the NKP CLI


This topic explains how to continue using the CLI to create managed clusters rather than switching to the
UI dashboard.

About this task


After initial cluster creation, you can create additional clusters from the CLI. In a previous
step, the new cluster was created as Self-managed, which allows it to be a Management cluster or a standalone
cluster. Subsequent new clusters are not self-managed, as they will likely be Managed or Attached
clusters to this Management Cluster.

Note: When creating Managed clusters, you do not need to create and move CAPI objects, or install the Kommander
component. Those tasks are only done on Management clusters!
Your new managed cluster needs to be part of a workspace under a management cluster. To make the new
managed cluster a part of a Workspace, set that workspace environment variable.

Procedure

1. If you have an existing Workspace, run this command to find its name.
kubectl get workspace -A

2. When you have the Workspace name, set the WORKSPACE_NAMESPACE environment variable.
export WORKSPACE_NAMESPACE=<workspace_namespace>

Note: If you need to create a new Workspace, follow the instructions to Create a New Workspace

Name Your Cluster

About this task


Each cluster must have a unique name.
After you have defined the infrastructure and control plane endpoints, you can proceed to creating the cluster by
following these steps. Because this cluster is not created with the --self-managed flag, it becomes a Managed
cluster administered by your Management cluster.
First you must name your cluster. Then you run the command to deploy it.

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the
name has capital letters. See Kubernetes for more naming information.
When specifying the cluster-name, you must use the same cluster-name as used when defining your
inventory objects.

Perform both steps to name the cluster:

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable.


export MANAGED_CLUSTER_NAME=<aws-additional>



Create a Kubernetes Cluster

About this task


The below instructions tell you how to create a cluster and have it automatically attach to the workspace you set
above. If you do not set a workspace, it will be created in the default workspace, and you need to take additional
steps to attach to a workspace later. For instructions on how to do this, see Attach a Kubernetes Cluster.

Procedure
Execute this command to create your additional Kubernetes cluster using any relevant flags. This creates a new
non-self-managed cluster that can be managed by the Management cluster you created in the previous section.
nkp create cluster aws \
  --cluster-name=${MANAGED_CLUSTER_NAME} \
  --additional-tags=owner=$(whoami) \
  --kubeconfig=<management-cluster-kubeconfig-path> \
  --namespace ${WORKSPACE_NAMESPACE}

Tip: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-
proxy, and --no-proxy and their related values in this command for it to be successful. For more information, see
Clusters with HTTP or HTTPS Proxy on page 647.

Manually Attach an NKP CLI Cluster to the Management Cluster

Procedure

When you create a Managed Cluster with the NKP CLI, it attaches automatically to the Management Cluster after a
few moments. However, if you do not set a workspace, the attached cluster will be created in the default workspace.
To ensure that the attached cluster is created in your desired workspace namespace, follow these instructions:

1. Confirm you have your MANAGED_CLUSTER_NAME variable set with the following command.
echo ${MANAGED_CLUSTER_NAME}

2. Retrieve your kubeconfig from the cluster you have created without setting a workspace.
nkp get kubeconfig --cluster-name ${MANAGED_CLUSTER_NAME} >
${MANAGED_CLUSTER_NAME}.conf

3. Note: This is only necessary if you never set the workspace of your cluster upon creation.

You can now either attach the cluster in the UI (see the earlier UI instructions for attaching a cluster to a
workspace), or attach your cluster to the workspace you want using the CLI.

4. Retrieve the workspace where you want to attach the cluster.


kubectl get workspaces -A

5. Set the WORKSPACE_NAMESPACE environment variable.


export WORKSPACE_NAMESPACE=<workspace-namespace>

6. You need to create a secret in the desired workspace before attaching the cluster to that workspace. Retrieve the
kubeconfig secret value of your cluster.
kubectl -n default get secret ${MANAGED_CLUSTER_NAME}-kubeconfig -o go-template='{{.data.value}}{{ "\n"}}'



7. This will return a lengthy value. Copy this entire string for a secret using the template below as a reference.
Create a new attached-cluster-kubeconfig.yaml file.
apiVersion: v1
kind: Secret
metadata:
  name: <your-managed-cluster-name>-kubeconfig
  labels:
    cluster.x-k8s.io/cluster-name: <your-managed-cluster-name>
type: cluster.x-k8s.io/secret
data:
  value: <value-you-copied-from-secret-above>

8. Create this secret in the desired workspace.


kubectl apply -f attached-cluster-kubeconfig.yaml --namespace ${WORKSPACE_NAMESPACE}

9. Create this kommandercluster object to attach the cluster to the workspace.


cat << EOF | kubectl apply -f -
apiVersion: kommander.mesosphere.io/v1beta1
kind: KommanderCluster
metadata:
  name: ${MANAGED_CLUSTER_NAME}
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  kubeconfigRef:
    name: ${MANAGED_CLUSTER_NAME}-kubeconfig
  clusterRef:
    capiCluster:
      name: ${MANAGED_CLUSTER_NAME}
EOF

10. You can now view this cluster in your Workspace in the UI and you can confirm its status by running the below
command. It may take a few minutes to reach "Joined" status.
kubectl get kommanderclusters -A
If you have several Pro Clusters and want to turn one of them into a Managed Cluster to be centrally administered
by a Management Cluster, refer to Platform Expansion.

AWS Air-gapped Installation


This installation provides instructions to install NKP in an Amazon Web Services (AWS) air-gapped environment.
Remember, there are always more options for custom YAML (YAML Ain't Markup Language) in the Custom
Installation and Additional Infrastructure Tools section, but this will get you operating with basic features.
If not already done, see the documentation for:

• Resource Requirements on page 38


• Installing NKP on page 47
• Prerequisites for Installation on page 44

Note: For air-gapped environments, ensure you download the bundle nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz
and extract the tar file to a local directory. For more information, see Downloading NKP on page 16.
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz



AWS Prerequisites
Before you begin using Konvoy with AWS, you must:
1. Follow the steps to create permissions and roles on the Minimal Permissions and Role to Create Clusters page.
2. Create Cluster IAM Policies and Roles
3. Export the AWS region where you want to deploy the cluster:
export AWS_REGION=us-west-2

4. Export the AWS profile with the credentials you want to use to create the Kubernetes cluster:
export AWS_PROFILE=<profile>

If using AWS ECR as your local private registry, more information can be found on the Registry Mirror Tools page.
To deploy a cluster with a custom image in a region where CAPI images are not provided, you need to use Konvoy
Image Builder to create your own image for the region.

Note: For multi-tenancy, every tenant needs to be in a different AWS account to ensure they are truly independent of
other tenants in order to enforce security.

Section Contents

AWS Air-gapped: Creating an Image


Learn how to build a custom AMI for use with NKP.

About this task


This procedure describes how to use the Konvoy Image Builder (KIB) to create a Cluster API compliant Amazon
Machine Image (AMI). KIB uses variable overrides to specify the base image and container images to
use in your new AMI.
AMI images contain configuration information and software to create a specific, pre-configured, operating
environment. For example, you can create an AMI image of your current computer system settings and software.
The AMI image can then be replicated and distributed, creating your computer system for other users. You can use
overrides files to customize some of the components installed on your machine image. For example, you can tell KIB
to install the FIPS versions of the Kubernetes components.
In previous NKP releases, AMI images provided by the upstream CAPA project were used if you did not specify
an AMI. However, the upstream images are not recommended for production and may not always be available.
Therefore, NKP now requires you to specify an AMI when creating a cluster. To create an AMI, use Konvoy Image
Builder. Explore the Create a Custom AMI on page 1039 topic for more options about overrides.
The prerequisites to use Konvoy Image Builder are:

Procedure

1. Download the KIB bundle for your version of NKP prefixed with konvoy-image-bundle for your OS.

2. Check the Supported Infrastructure Operating Systems

3. Check the Supported Kubernetes Version for your Provider.

4. Create a working registry.

» Podman Version 4.0 or later for Linux. For more information, see https://podman.io/getting-started/
installation. For host requirements, see https://kind.sigs.k8s.io/docs/user/rootless/#host-requirements.
» Docker container engine version 18.09.2 or 20.10.0 installed for Linux or MacOS. For more information, see
https://docs.docker.com/get-docker/.



5. Ensure you have met the minimal set of permissions from the AWS Image Builder Book.

6. Create the minimal IAM permissions for KIB to create an image for an AWS account using Konvoy Image Builder. For
   more information, see Creating Minimal IAM Permissions for KIB on page 1035.

Extract the KIB Bundle

About this task


If not done previously during Konvoy Image Builder download in Prerequisites, extract the bundle and cd into the
extracted konvoy-image-bundle-$VERSION folder. Otherwise, proceed to Build the Image below.
In previous NKP releases, the distro package bundles were included in the downloaded air-gapped bundle. Currently,
that air-gapped bundle contains the following artifacts with the exception of the distro packages:

• NKP Kubernetes packages


• Python packages (provided by upstream)
• Containerd tarball

Procedure

1. Download nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz, and extract the tarball to a local


directory.
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz && cd nkp-v2.12.0/kib

2. You will need to fetch the distro packages as well as other artifacts. By fetching the distro packages from distro
repositories, you get the latest security fixes available at machine image build time.

3. In your download location, there is a bundles directory with all the steps to create an OS package bundle for a
particular OS. To create it, run the new NKP command create-package-bundle. This builds an OS bundle
using the Kubernetes version defined in ansible/group_vars/all/defaults.yaml.
Example
./konvoy-image create-package-bundle --os redhat-8.4 --output-directory=artifacts

Note:

• For FIPS, pass the flag: --fips


• For Red Hat Enterprise Linux (RHEL) OS, pass your Red Hat subscription manager credentials by exporting
  RHSM_ACTIVATION_KEY and RHSM_ORG_ID (or RHSM_USER and RHSM_PASS). Example commands:
  export RHSM_ACTIVATION_KEY="-ci"
  export RHSM_ORG_ID="1232131"
  OR
  export RHSM_USER=""
  export RHSM_PASS=""



4. Follow the instructions below to build an AMI.

Note: The konvoy-image binary and all supporting folders are also extracted. When run, konvoy-image bind
mounts the current working directory (${PWD}) into the container to be used.

Set the environment variables for AWS access. The following variables must be set using your credentials
including required IAM:
export AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY
export AWS_DEFAULT_REGION
If you have an override file to configure specific attributes of your AMI file, add it. For more information on
customizing an override file, see Image Overrides on page 1073.

Build the Image

About this task


Depending on which version of NKP you are running, steps and flags will be different. To deploy in a region where
CAPI images are not provided, you need to use KIB to create your own image for the region. For a list of supported
AWS regions, refer to the Published AMI information from AWS.

Procedure
Run the konvoy-image command to build and validate the image.
konvoy-image build aws images/ami/rhel-86.yaml

a. By default, it builds in the us-west-2 region. To specify another region, set the --region flag as shown in the
   command below.
konvoy-image build aws --region us-east-1 images/ami/rhel-86.yaml

Note: Ensure you have named the correct AMI image YAML file for your OS in the konvoy-image build
command.

What to do next
After KIB provisions the image successfully, the AMI ID is printed and written to the packer.pkr.hcl (Packer config)
file. This file has an artifact_id field whose value provides the name of the AMI ID, as shown in the example below.
That is the AMI you use in the nkp create cluster command.
...
amazon-ebs.kib_image: Adding tag: "distribution_version": "8.6"
amazon-ebs.kib_image: Adding tag: "gpu_nvidia_version": ""
amazon-ebs.kib_image: Adding tag: "kubernetes_cni_version": ""
amazon-ebs.kib_image: Adding tag: "build_timestamp": "20231023182049"
amazon-ebs.kib_image: Adding tag: "gpu_types": ""
amazon-ebs.kib_image: Adding tag: "kubernetes_version": "1.28.7"
==> amazon-ebs.kib_image: Creating snapshot tags
amazon-ebs.kib_image: Adding tag: "ami_name": "konvoy-ami-
rhel-8.6-1.26.6-20231023182049"
==> amazon-ebs.kib_image: Terminating the source AWS instance...
==> amazon-ebs.kib_image: Cleaning up any extra volumes...
==> amazon-ebs.kib_image: No volumes to clean up, skipping
==> amazon-ebs.kib_image: Deleting temporary security group...
==> amazon-ebs.kib_image: Deleting temporary keypair...
==> amazon-ebs.kib_image: Running post-processor: (type manifest)
Build 'amazon-ebs.kib_image' finished after 26 minutes 52 seconds.



==> Wait completed after 26 minutes 52 seconds

==> Builds finished. The artifacts of successful builds are:


--> amazon-ebs.kib_image: AMIs were created:
us-west-2: ami-04b8dfef8bd33a016

What to do next
1. To use a custom AMI when creating your cluster, you must create that AMI using KIB first. Then perform the
export and name the custom AMI for use in the command nkp create cluster:
export AWS_AMI_ID=ami-<ami-id-here>

Note: Inside the sections for either Non-air-gapped or Air-gapped cluster creation, you will find the instructions for
how to apply custom images.

Related Information

Procedure

• To use a local registry even in a non-air-gapped environment, download and extract the bundle. Download the
  complete NKP air-gapped bundle for this release (that is, nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz)
  to load the registry.

• To view the complete set of instructions, see Load the Registry.

AWS Air-gapped: Load the Registry


Before creating an air-gapped Kubernetes cluster, you need to load the required images in a local registry
for the Konvoy component.

About this task


The complete NKP air-gapped bundle is needed for an air-gapped environment but can also be used in
a non-air-gapped environment. The bundle contains all the NKP components needed for an air-gapped
environment installation and also to use a local registry in a non-air-gapped environment.

Note: If you do not already have a local registry set up, see the Local Registry Tools page for more information.

If you are operating in an air-gapped environment, a local container registry containing all the necessary installation
images, including the Kommander images is required. This registry must be accessible from both the bastion machine
and either the AWS EC2 instances (if deploying to AWS) or other machines that will be created for the Kubernetes
cluster.

Procedure

1. If not already done in the prerequisites, download the air-gapped bundle
   nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz, and extract the tarball to a local directory.
   tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz

2. The directory structure after extraction is used in subsequent steps. For example, for the bootstrap cluster, change
   your directory to the nkp-<version> directory, similar to the example below, depending on your current location.
   cd nkp-v2.12.0



3. Set an environment variable with your registry address and any other needed variables using this command.
export REGISTRY_URL="<https/http>://<registry-address>:<registry-port>"
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>
export REGISTRY_CA=<path to the cacert file on the bastion>

4. Execute the following command to load the air-gapped image bundle into your private registry using any of the
relevant flags to apply variables above.
nkp push bundle --bundle ./container-images/konvoy-image-bundle-v2.12.0.tar --to-registry=${REGISTRY_URL} \
  --to-registry-username=${REGISTRY_USERNAME} --to-registry-password=${REGISTRY_PASSWORD}

Note: It may take some time to push all the images to your image registry, depending on the performance of the
network between the machine you are running the script on and the registry.

Important: To increase Docker Hub's rate limit, use your Docker Hub credentials when creating the cluster
by setting the following flags on the nkp create cluster command: --registry-mirror-url=https://registry-1.docker.io
--registry-mirror-username= --registry-mirror-password=

5. Load the Kommander component images to your private registry using the command.
nkp push bundle --bundle ./container-images/kommander-image-bundle-v2.12.0.tar --to-registry=${REGISTRY_URL} \
  --to-registry-username=${REGISTRY_USERNAME} --to-registry-password=${REGISTRY_PASSWORD}
Optional: This step is required only if you have an Ultimate license.
For NKP Catalog Applications available with the Ultimate license, perform this image load by running the
following command to load the nkp-catalog-applications image bundle into your private registry:
nkp push bundle --bundle ./container-images/nkp-catalog-applications-image-bundle-v2.12.0.tar --to-registry=${REGISTRY_URL} \
  --to-registry-username=${REGISTRY_USERNAME} --to-registry-password=${REGISTRY_PASSWORD}

AWS Air-gapped: Creating the Management Cluster


Create an AWS Management Cluster in an air-gapped environment.

About this task


Use this procedure to create a self-managed Management cluster with NKP. A self-managed cluster refers to one
in which the CAPI resources and controllers that describe and manage it are running on the same cluster they are
managing.
If you use these instructions to create a cluster on AWS using the NKP default settings without any edits to
configuration files or additional flags, your cluster is deployed on an Ubuntu 20.04 operating system image with 3
control plane nodes, and 4 worker nodes.
NKP uses AWS CSI as the default storage provider. You can use a Kubernetes CSI compatible storage solution
that is suitable for production. See the Kubernetes documentation called Changing the Default Storage Class for
more information.
In previous NKP releases, AMI images provided by the upstream CAPA project were used if you did not specify
an AMI. However, the upstream images are not recommended for production and may not always be available.
Therefore, NKP now requires you to specify an AMI when creating a cluster. To create an AMI, use Konvoy Image
Builder.

Before you begin


First you must name your cluster.



Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the
name has capital letters. See Kubernetes for more naming information.

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable.


export CLUSTER_NAME=<aws-example>

3. Export variables for the existing infrastructure details.


export AWS_VPC_ID=<vpc-...>
export AWS_SUBNET_IDS=<subnet-...,subnet-...,subnet-...>
export AWS_ADDITIONAL_SECURITY_GROUPS=<sg-...>
export AWS_AMI_ID=<ami-...>

• AWS_VPC_ID: the VPC ID where the cluster will be created. The VPC requires the following AWS VPC
Endpoints to be already present:

• ec2 - com.amazonaws.{region}.ec2

• elasticloadbalancing - com.amazonaws.{region}.elasticloadbalancing

• secretsmanager - com.amazonaws.{region}.secretsmanager

• autoscaling - com.amazonaws.{region}.autoscaling

• ecr - com.amazonaws.{region}.ecr.api - (authentication)

• ecr - com.amazonaws.{region}.ecr.dkr

More details about accessing an AWS service using an interface VPC endpoint and the list of AWS services that
support VPC endpoints are available at https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html
and https://docs.aws.amazon.com/vpc/latest/privatelink/aws-services-privatelink-support.html, respectively.
• AWS_SUBNET_IDS: a comma-separated list of one or more private Subnet IDs with each one in a different
Availability Zone. The cluster control-plane and worker nodes will automatically be spread across these
Subnets.
• AWS_ADDITIONAL_SECURITY_GROUPS: a comma-separated list of one or more Security Group IDs to
  use in addition to the ones automatically created by CAPA. For more information, see https://github.com/
  kubernetes-sigs/cluster-api-provider-aws.
• AWS_AMI_ID: the AMI ID to use for control-plane and worker nodes. The AMI must be created by the
konvoy-image-builder.

Note: In previous NKP releases, AMI images provided by the upstream CAPA project would be used if you did
not specify an AMI. However, the upstream images are not recommended for production and may not always be
available. Therefore, NKP now requires you to specify an AMI when creating a cluster. To create an AMI, use
Konvoy Image Builder on page 1032.

There are two approaches to supplying the ID of your AMI. Either provide the ID of the AMI or provide a way for
NKP to discover the AMI using location, format and OS information:



4. Use one of the following options:

• Option One - Provide the ID of your AMI: Use the example command below leaving the existing flag that
provides the AMI ID: --ami AMI_ID.
• Option Two - Provide a path for your AMI with the information required for image discovery.

• Where the AMI is published using your AWS Account ID: --ami-owner AWS_ACCOUNT_ID
• The base OS information: --ami-base-os ubuntu-20.04
• The format or string used to search for matching AMIs; ensure it references the Kubernetes version plus
the base OS name: --ami-format 'example-{{.BaseOS}}-?{{.K8sVersion}}-*'

5. (Optional) Configure your cluster to use an existing container registry as a mirror when attempting to pull images.
The example below is for AWS ECR:

Warning: Ensure that a local registry is set up before proceeding if you do not already have one.

Warning: The AMI must be created with Konvoy Image Builder in order to use the registry mirror feature.
export AWS_AMI_ID=<ami-...>

• (Optional) Registry Mirror - Configure your cluster to use an existing local registry as a mirror when
attempting to pull images. The following is an AWS ECR example:
export REGISTRY_URL=<ecr-registry-URI>

• REGISTRY_URL: the address of an existing registry accessible in the VPC. The new cluster nodes are
configured to use it as a mirror registry when pulling images.
• Other local registries may use the options below:

• JFrog - REGISTRY_CA: (optional) the path on the bastion machine to the registry CA. This value is only
needed if the registry is using a self-signed certificate and the AMIs are not already configured to trust this
CA.
• REGISTRY_USERNAME: (optional) a user that has pull access to the registry.

• REGISTRY_PASSWORD: (optional) the password for that user; only needed if a username is set.
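For a registry that needs a CA certificate and credentials (for example, a JFrog instance with a self-signed certificate), the corresponding exports might look like the following sketch; all values are placeholders:
export REGISTRY_URL=<https/http>://<registry-address>:<registry-port>
export REGISTRY_CA=<path-to-cacert-file-on-the-bastion>
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>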

6. Create a Kubernetes cluster. The following example shows a common configuration. For the complete list of
cluster creation options, see the dkp create cluster aws CLI Command reference.

Note: NKP uses AWS CSI as the default storage provider. You can use a Kubernetes CSI (see https://
kubernetes.io/docs/concepts/storage/volumes/#volume-types) compatible storage solution that
is suitable for production. For more information, see the https://kubernetes.io/docs/tasks/administer-
cluster/change-default-storage-class/ topic in the Kubernetes documentation.

7. Do one of the following:

• Option 1 - Run this command to create your Kubernetes cluster, using any relevant flags, for Option One
explained above (providing the AMI ID):
nkp create cluster aws --cluster-name=${CLUSTER_NAME} \
--additional-tags=owner=$(whoami) \
--with-aws-bootstrap-credentials=true \
--vpc-id=${AWS_VPC_ID} \
--ami=${AWS_AMI_ID} \
--subnet-ids=${AWS_SUBNET_IDS} \
--internal-load-balancer=true \
--additional-security-group-ids=${AWS_ADDITIONAL_SECURITY_GROUPS} \
--registry-mirror-url=<YOUR_ECR_URL> \
--self-managed

• Option 2 - Run the command as shown from the explanation above to allow discovery of your AMI:
nkp create cluster aws \
--cluster-name=${CLUSTER_NAME} \
--additional-tags=owner=$(whoami) \
--with-aws-bootstrap-credentials=true \
--vpc-id=${AWS_VPC_ID} \
--ami-owner AWS_ACCOUNT_ID \
--ami-base-os ubuntu-20.04 \
--ami-format 'example-{{.BaseOS}}-?{{.K8sVersion}}-*' \
--subnet-ids=${AWS_SUBNET_IDS} \
--internal-load-balancer=true \
--additional-security-group-ids=${AWS_ADDITIONAL_SECURITY_GROUPS} \
--registry-mirror-url=<YOUR_ECR_URL> \
--self-managed

8. Additional configurations that you can perform:

» Additional cluster creation flags based on your environment:


» Optional Registry flag: --registry-mirror-url=${REGISTRY_URL}
» Flatcar OS flag to instruct the bootstrap cluster to make changes related to the installation paths: --os-hint
flatcar

» HTTP or HTTPS flags if you use proxies: --http-proxy, --https-proxy, and --no-proxy
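Once cluster creation completes, you can retrieve the cluster's kubeconfig and confirm the nodes are up before installing Kommander. A minimal check, assuming kubectl is installed on your workstation:
nkp get kubeconfig -c ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf
kubectl --kubeconfig=${CLUSTER_NAME}.conf get nodes
All control plane and worker nodes should eventually report a Ready status.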

AWS Air-gapped: Install Kommander


This section provides installation instructions for the Kommander component of NKP in an air-gapped AWS
environment.

About this task


Once you have installed the Konvoy component and created a cluster, continue with the installation of the
Kommander component which will allow you to access the UI and attach new or existing clusters to monitor.

Tip: Tips and Recommendations

• The --kubeconfig=${CLUSTER_NAME}.conf flag ensures that you install Kommander on the correct
cluster. For alternatives, see Provide Context for Commands with a kubeconfig File.
• Applications can take longer to deploy and can time out the installation. Add the --wait-timeout <time
to wait> flag and specify a period of time (for example, 1h) to allocate more time to the deployment
of applications; see the example after these tips.
• If the Kommander installation fails, or you wish to reconfigure applications, rerun the install
command to retry.
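For example, to give the platform applications up to two hours to deploy, the install command used later in this procedure can be extended as follows (2h is an arbitrary value; adjust it for your environment):
nkp install kommander --installer-config kommander.yaml --kubeconfig=${CLUSTER_NAME}.conf --wait-timeout 2h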

Prerequisites:

• Ensure you have reviewed all the prerequisites required for the installation. For more information, see
Prerequisites for Installation on page 44.



• Ensure you have a default StorageClass (a quick check is shown after this list). For more information, see Creating a Default StorageClass on
page 474.
• Ensure you have loaded all necessary images for your configuration. For more information, see AWS Air-
gapped: Load the Registry on page 171.
• Note the name of the cluster where you want to install Kommander. If you do not know the cluster name, use
kubectl get clusters -A to display and find it.
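A quick way to confirm that a default StorageClass exists, assuming your kubectl context points at the Management cluster:
kubectl get storageclass
The class annotated as (default) in the output is the one used for persistent volumes that do not request a specific class.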

Create your Kommander Installation Configuration File

Procedure

1. Set the environment variable for your cluster.


export CLUSTER_NAME=<your-management-cluster-name>

2. Copy the kubeconfig file of your Management cluster to your local directory.
nkp get kubeconfig -c ${CLUSTER_NAME} >> ${CLUSTER_NAME}.conf

3. Create a configuration file for the deployment.


nkp install kommander --init > kommander.yaml

4. (Optional) Customize your kommander.yaml.

a. For customization options, see Additional Kommander Configuration on page 964. Some options include
Custom Domains and Certificates, HTTP proxy, and External Load Balancer.

5. (Optional) If your cluster uses a custom AWS VPC and requires an internal load-balancer, set the traefik
annotation to create an internal-facing ELB.
apps:
  traefik:
    enabled: true
    values: |
      service:
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-internal: "true"

6. Enable NKP Catalog Applications and install Kommander using the same kommander.yaml from the previous
section: if you are enabling NKP Catalog Apps, add these values for NKP-catalog-applications.
apiVersion: config.kommander.mesosphere.io/v1alpha1
kind: Installation
catalog:
  repositories:
    - name: NKP-catalog-applications
      labels:
        kommander.d2iq.io/project-default-catalog-repository: "true"
        kommander.d2iq.io/workspace-default-catalog-repository: "true"
        kommander.d2iq.io/gitapps-gitrepository-type: "nkp"
      gitRepositorySpec:
        url: https://github.com/mesosphere/nkp-catalog-applications
        ref:
          tag: v2.12.0



7. Use the customized kommander.yaml to install NKP.
nkp install kommander --installer-config kommander.yaml --kubeconfig=${CLUSTER_NAME}.conf

Note: If you only want to enable catalog applications to an existing configuration, add these values to an existing
installer configuration file to maintain your Management cluster’s settings.
If you want to enable NKP Catalog applications after installing NKP, see Configuring Applications
After Installing Kommander on page 984.

AWS Air-gapped: Verifying your Installation and UI Log in


Verify the Kommander installation and log in to the Dashboard UI. After you build the Konvoy cluster and
install Kommander, verify your installation. By default, the verification command waits for all applications to become ready.

About this task

Note: If the Kommander installation fails or you wish to reconfigure applications, you can rerun the install command
to retry the installation.

Procedure
You can check the status of the installation using the following command.
kubectl -n kommander wait --for condition=Ready helmreleases --all --timeout 15m

Note: If you prefer the CLI to not wait for all applications to become ready, you can set the --wait=false flag.

The command first waits for each of the Helm charts to reach their Ready condition, eventually resulting in output resembling
the following:
helmrelease.helm.toolkit.fluxcd.io/centralized-grafana condition met
helmrelease.helm.toolkit.fluxcd.io/dex condition met
helmrelease.helm.toolkit.fluxcd.io/dex-k8s-authenticator condition met
helmrelease.helm.toolkit.fluxcd.io/fluent-bit condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-logging condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-loki condition met
helmrelease.helm.toolkit.fluxcd.io/karma condition met
helmrelease.helm.toolkit.fluxcd.io/kommander condition met
helmrelease.helm.toolkit.fluxcd.io/kommander-appmanagement condition met
helmrelease.helm.toolkit.fluxcd.io/kube-prometheus-stack condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/kubefed condition met
helmrelease.helm.toolkit.fluxcd.io/kubernetes-dashboard condition met
helmrelease.helm.toolkit.fluxcd.io/kubetunnel condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator-logging condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-adapter condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/reloader condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph-cluster condition met
helmrelease.helm.toolkit.fluxcd.io/thanos condition met
helmrelease.helm.toolkit.fluxcd.io/traefik condition met
helmrelease.helm.toolkit.fluxcd.io/traefik-forward-auth-mgmt condition met
helmrelease.helm.toolkit.fluxcd.io/velero condition met



Failed HelmReleases

Procedure
If an application fails to deploy, check the status of a HelmRelease using the command
kubectl -n kommander get helmrelease <HELMRELEASE_NAME>
If you find any HelmReleases in a “broken” release state, such as “exhausted” or “another rollback/release in
progress”, trigger a reconciliation of the HelmRelease using the following commands.
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op":
"replace", "path": "/spec/suspend", "value": true}]'
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op":
"replace", "path": "/spec/suspend", "value": false}]'

Log in to the UI

Procedure

1. By default, you can log in to the UI in Kommander with the credentials given using this command.
nkp open dashboard --kubeconfig=${CLUSTER_NAME}.conf

2. Retrieve your credentials at any time if necessary.


kubectl -n kommander get secret NKP-credentials -o go-template='Username:
{{.data.username|base64decode}}{{ "\n"}}Password: {{.data.password|base64decode}}
{{ "\n"}}'

3. Retrieve the URL used for accessing the UI with the following.
kubectl -n kommander get svc kommander-traefik -o go-template='https://{{with
index .status.loadBalancer.ingress 0}}{{or .hostname .ip}}{{end}}/NKP/kommander/
dashboard{{ "\n"}}'
Only use these static credentials to access the UI for configuring an external identity provider. Treat them as
backup credentials rather than using them for normal access.

a. Rotate the password using the following command.


nkp experimental rotate dashboard-password
The output displays the new password:
Password: kqZ31lMBSCLcBjUKVwLJMQL2PxalipIzZw5Pjyw09wDqjWV3dz2wPSSBYi09JGJp

Dashboard UI Functions

Procedure

After installing the Konvoy component, building a cluster, successfully installing Kommander, and logging
in to the UI, you are now ready to customize configurations using the Day 2 Cluster Operations Management section
of the documentation. The majority of this customization, such as attaching clusters and deploying applications,
takes place in the NKP dashboard or UI. The Day 2 section allows you to manage cluster operations and their
application workloads to optimize your organization’s productivity.

• Continue to the NKP Dashboard.

AWS Air-gapped: Creating Managed Clusters Using the NKP CLI


This topic explains how to continue using the CLI to create managed clusters rather than switching to the
UI dashboard.



About this task
After initial cluster creation, you have the ability to create additional clusters from the CLI. In a previous step,
the new cluster was created as self-managed, which allows it to be a Management cluster or a standalone
cluster. Subsequent new clusters are not self-managed, as they will likely be Managed or Attached clusters to this
Management Cluster.

Note: When creating Managed clusters, you do not need to create and move CAPI objects, or install the Kommander
component. Those tasks are only done on Management clusters!
Your new managed cluster needs to be part of a workspace under a management cluster. To make the new
managed cluster a part of a Workspace, set that workspace environment variable.

Procedure

1. If you have an existing Workspace, run this command to find its name.
kubectl get workspace -A

2. When you have the Workspace name, set the WORKSPACE_NAMESPACE environment variable.
export WORKSPACE_NAMESPACE=<workspace_namespace>

Note: If you need to create a new Workspace, follow the instructions to Create a New Workspace

Name Your Cluster

About this task


Each cluster must have a unique name.
First you must name your cluster. Then you run the command to deploy it. Unlike the Management cluster you created earlier, this cluster is created as a Managed (non-self-managed) cluster.

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the
name has capital letters. See Kubernetes for more naming information.

Perform both steps to name the cluster:

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable.


export MANAGED_CLUSTER_NAME=<aws-additional>



Create a Kubernetes Cluster

About this task


The below instructions tell you how to create a cluster and have it automatically attach to the workspace you set
above. If you do not set a workspace, it will be created in the default workspace, and you need to take additional
steps to attach to a workspace later. For instructions on how to do this, see Attach a Kubernetes Cluster.

Procedure
Execute this command to create your additional Kubernetes cluster using any relevant flags. This creates a new
non-self-managed cluster that can be managed by the Management cluster you created in the previous section.
nkp create cluster aws --cluster-name=${MANAGED_CLUSTER_NAME} \
--additional-tags=owner=$(whoami) \
--namespace ${WORKSPACE_NAMESPACE} \
--with-aws-bootstrap-credentials=true \
--vpc-id=${AWS_VPC_ID} \
--ami=${AWS_AMI_ID} \
--subnet-ids=${AWS_SUBNET_IDS} \
--internal-load-balancer=true \
--additional-security-group-ids=${AWS_ADDITIONAL_SECURITY_GROUPS} \
--registry-mirror-url=${REGISTRY_URL} \
--registry-mirror-cacert=${REGISTRY_CA} \
--registry-mirror-username=${REGISTRY_USERNAME} \
--registry-mirror-password=${REGISTRY_PASSWORD} \
--kubeconfig=<management-cluster-kubeconfig-path>

Tip: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-
proxy, and --no-proxy and their related values in this command for it to be successful. For more information, see
Clusters with HTTP or HTTPS Proxy on page 647.
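As a sketch, those proxy flags would be appended to the cluster creation command above like this; the proxy address and no-proxy list are placeholders for your environment:
--http-proxy=http://<proxy-address>:<proxy-port> \
--https-proxy=http://<proxy-address>:<proxy-port> \
--no-proxy=127.0.0.1,localhost,.svc,.cluster.local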

Manually Attach an NKP CLI Cluster to the Management Cluster

Procedure

When you create a Managed Cluster with the NKP CLI, it attaches automatically to the Management Cluster after a
few moments. However, if you do not set a workspace, the attached cluster will be created in the default workspace.
To ensure that the attached cluster is created in your desired workspace namespace, follow these instructions:

1. Confirm you have your MANAGED_CLUSTER_NAME variable set with the following command.
echo ${MANAGED_CLUSTER_NAME}

2. Retrieve your kubeconfig from the cluster you have created without setting a workspace.
nkp get kubeconfig --cluster-name ${MANAGED_CLUSTER_NAME} >
${MANAGED_CLUSTER_NAME}.conf

3. Note: This step is only necessary if you did not set the workspace of your cluster upon creation.

You can now either attach the cluster to a workspace through the UI, as described earlier, or attach it to the
workspace you want using the CLI.

4. Retrieve the workspace where you want to attach the cluster.


kubectl get workspaces -A



5. Set the WORKSPACE_NAMESPACE environment variable.
export WORKSPACE_NAMESPACE=<workspace-namespace>

6. You need to create a secret in the desired workspace before attaching the cluster to that workspace. Retrieve the
kubeconfig secret value of your cluster.
kubectl -n default get secret ${MANAGED_CLUSTER_NAME}-kubeconfig -o go-
template='{{.data.value}}{{ "\n"}}'

7. This will return a lengthy value. Copy this entire string for a secret using the template below as a reference.
Create a new attached-cluster-kubeconfig.yaml file.
apiVersion: v1
kind: Secret
metadata:
  name: <your-managed-cluster-name>-kubeconfig
  labels:
    cluster.x-k8s.io/cluster-name: <your-managed-cluster-name>
type: cluster.x-k8s.io/secret
data:
  value: <value-you-copied-from-secret-above>

8. Create this secret in the desired workspace.


kubectl apply -f attached-cluster-kubeconfig.yaml --namespace
${WORKSPACE_NAMESPACE}

9. Create this kommandercluster object to attach the cluster to the workspace.


cat << EOF | kubectl apply -f -
apiVersion: kommander.mesosphere.io/v1beta1
kind: KommanderCluster
metadata:
  name: ${MANAGED_CLUSTER_NAME}
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  kubeconfigRef:
    name: ${MANAGED_CLUSTER_NAME}-kubeconfig
  clusterRef:
    capiCluster:
      name: ${MANAGED_CLUSTER_NAME}
EOF

10. You can now view this cluster in your Workspace in the UI and you can confirm its status by running the below
command. It may take a few minutes to reach "Joined" status.
kubectl get kommanderclusters -A
If you have several Pro clusters and want to turn one of them into a Managed cluster to be centrally administered
by a Management cluster, see Platform Expansion.

AWS with FIPS Installation


This installation provides instructions to install NKP in an AWS non-air-gapped environment using FIPS.
Remember, there are always more options for custom YAML in the Custom Installation and Additional
Infrastructure Tools section, but this will get you operating with basic features.
If not already done, see the documentation for:

• Resource Requirements on page 38



• Installing NKP on page 47
• Prerequisites for Installation on page 44

AWS Prerequisites
Before you begin using Konvoy with AWS, you must:
1. Follow the steps to create permissions and roles on the Minimal Permissions and Role to Create Clusters page.
2. Create Cluster IAM Policies and Roles
3. Export the AWS region where you want to deploy the cluster:
export AWS_REGION=us-west-2

4. Export the AWS profile with the credentials you want to use to create the Kubernetes cluster:
export AWS_PROFILE=<profile>
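To confirm that the exported profile and region resolve to valid credentials before proceeding, a quick identity check with the AWS CLI (assuming it is installed) is:
aws sts get-caller-identity
The command returns the account ID, user ID, and ARN associated with the exported profile.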

If using AWS ECR as your local private registry, more information can be found on the Registry Mirror Tools page.
To deploy a cluster with a custom image in a region where CAPI images are not provided, you need to use Konvoy
Image Builder to create your own image for the region.

Note: For multi-tenancy, every tenant needs to be in a different AWS account to ensure they are truly independent of
other tenants in order to enforce security.

Section Contents

AWS FIPS: Creating an Image


Learn how to build a custom AMI for use with NKP.

About this task


This procedure describes how to use the Konvoy Image Builder (KIB) to create a Cluster API compliant Amazon
Machine Image (AMI). KIB uses variable overrides to specify base image and container images to use in your new
AMI.
AMI images contain configuration information and software to create a specific, pre-configured, operating
environment. For example, you can create an AMI image of your current computer system settings and software.
The AMI image can then be replicated and distributed, creating your computer system for other users. You can use
overrides files to customize some of the components installed on your machine image. For example, having the FIPS
versions of the Kubernetes components installed by KIB.
In previous NKP releases, AMI images provided by the upstream CAPA project were used if you did not specify
an AMI. However, the upstream images are not recommended for production and may not always be available.
Therefore, NKP now requires you to specify an AMI when creating a cluster. To create an AMI, use Konvoy Image
Builder. Explore the Customize your Image topic for more options about overrides.
The prerequisites to use Konvoy Image Builder are:

Procedure

1. Download the KIB bundle for your version of NKP prefixed with konvoy-image-bundle for your OS.

2. Check the Supported Infrastructure Operating Systems

3. Check the Supported Kubernetes Version for your Provider.



4. Ensure you have a working container engine:

» Podman Version 4.0 or later for Linux. For more information, see https://podman.io/getting-started/
installation. For host requirements, see https://kind.sigs.k8s.io/docs/user/rootless/#host-requirements.
» Docker container engine version 18.09.2 or 20.10.0 installed for Linux or MacOS. For more information, see
https://docs.docker.com/get-docker/.

5. Ensure you have met the minimal set of permissions from the AWS Image Builder Book.

6. Ensure you have the Minimal IAM Permissions for KIB to create an image for an AWS account using Konvoy Image Builder.

Extract the KIB Bundle

About this task


If not done previously during Konvoy Image Builder download in Prerequisites, extract the bundle and cd into the
extracted konvoy-image-bundle-$VERSION folder. Otherwise, proceed to Build the Image below.
In previous NKP releases, the distro package bundles were included in the downloaded air-gapped bundle. Currently,
that air-gapped bundle contains the following artifacts with the exception of the distro packages:

• NKP Kubernetes packages


• Python packages (provided by upstream)
• Containerd tarball

Procedure

1. Download the air-gapped bundle nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz (see Downloading NKP on page 16), and extract the
tarball to a local directory.
For example:
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz && cd nkp-v2.12.0/kib

2. You will need to fetch the distro packages as well as other artifacts. By fetching the distro packages from distro
repositories, you get the latest security fixes available at machine image build time.

3. In your download location, there is a bundles directory with all the steps to create an OS package bundle for a
particular OS. To create it, run the new NKP command create-package-bundle. This builds an OS bundle
using the Kubernetes version defined in ansible/group_vars/all/defaults.yaml. Example command.
./konvoy-image create-package-bundle --os redhat-8.4 --output-directory=artifacts

Note:

• For FIPS, pass the flag: --fips


• For Red Hat Enterprise Linux (RHEL) OS, pass your Red Hat subscription manager credentials by
exporting RHSM_ACTIVATION_KEY and RHSM_ORG_ID. Example commands:
export RHSM_ACTIVATION_KEY="-ci"
export RHSM_ORG_ID="1232131"



4. Follow the instructions below to build an AMI.

Note: The konvoy-image binary and all supporting folders are also extracted. When run, konvoy-image bind
mounts the current working directory (${PWD}) into the container to be used.

Set the environment variables for AWS access. The following variables must be set using your credentials
including required IAM:
export AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY
export AWS_DEFAULT_REGION
If you have an override file to configure specific attributes of your AMI file, add it. Instructions for customizing
an override file are found on this page: Image Overrides
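For example, the KIB bundle ships override files, including one for FIPS; a build that applies it might look like the following sketch (verify the override file name in your extracted bundle, as it can vary between versions):
konvoy-image build aws images/ami/rhel-86.yaml --overrides overrides/fips.yaml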

Build the Image

About this task


Depending on which version of NKP you are running, steps and flags will be different. To deploy in a region where
CAPI images are not provided, you need to use KIB to create your own image for the region. For a list of supported
AWS regions, refer to the Published AMI information from AWS.

Procedure
Run the konvoy-image command to build and validate the image.
konvoy-image build aws images/ami/rhel-86.yaml

a. By default, it builds in the us-west-2 region. To specify another region, set the --region flag as shown in the
command below.
konvoy-image build aws --region us-east-1 images/ami/rhel-86.yaml

Note: Ensure you have named the correct AMI image YAML file for your OS in the konvoy-image build
command.

What to do next
After KIB provisions the image successfully, the AMI ID is printed and written to the packer.pkr.hcl (Packer config) file. This file has an artifact_id field whose
value provides the name of the AMI ID, as shown in the example below. That is the AMI you use in the nkp create
cluster command.
{
  "builds": [
    {
      "name": "kib_image",
      "builder_type": "amazon-ebs",
      "build_time": 1698086886,
      "files": null,
      "artifact_id": "us-west-2:ami-04b8dfef8bd33a016",
      "packer_run_uuid": "80f8296c-e975-d394-45f9-49ef2ccc6e05",
      "custom_data": {
        "containerd_version": "",
        "distribution": "RHEL",
        "distribution_version": "8.6",
        "kubernetes_cni_version": "",
        "kubernetes_version": "1.26.6"
      }
    }
  ],
  "last_run_uuid": "80f8296c-e975-d394-45f9-49ef2ccc6e05"
}

What to do next
1. To use a custom AMI when creating your cluster, you must create that AMI using KIB first. Then perform the
export and name the custom AMI for use in the command nkp create cluster:
export AWS_AMI_ID=ami-<ami-id-here>

Note: Inside the sections for either Non-air-gapped or Air-gapped cluster creation, you will find the instructions for
how to apply custom images.

Related Information

Procedure

• To use a local registry even in a non-air-gapped environment, download and extract the complete NKP
air-gapped bundle for this release (nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz) to load the
registry. For more information, see Downloading NKP on page 16.

• To view the complete set of instructions, see Load the Registry.

AWS FIPS: Creating the Management Cluster


Create an AWS Management Cluster in a non-air-gapped environment using FIPS.

About this task


Use this procedure to create a self-managed Management cluster with NKP. A self-managed cluster refers to one
in which the CAPI resources and controllers that describe and manage it are running on the same cluster they are
managing. First you must name your cluster.
Name Your Cluster

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if
the name has capital letters. For more naming information, see https://kubernetes.io/docs/concepts/overview/
working-with-objects/names/.

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable using the command export CLUSTER_NAME=<aws-example>.

Create a New AWS Kubernetes Cluster

About this task


If you use these instructions to create a cluster on AWS using the NKP default settings without any edits to
configuration files or additional flags, your cluster is deployed on an Ubuntu 20.04 operating system image with 3
control plane nodes, and 4 worker nodes.
NKP uses AWS CSI as the default storage provider. You can use a Kubernetes CSI compatible storage solution
that is suitable for production. See the Kubernetes documentation called Changing the Default Storage Class for
more information.
In previous NKP releases, AMI images provided by the upstream CAPA project were used if you did not specify
an AMI. However, the upstream images are not recommended for production and may not always be available.



Therefore, NKP now requires you to specify an AMI when creating a cluster. To create an AMI, use Konvoy Image
Builder.
There are two approaches to supplying the ID of your AMI. Either provide the ID of the AMI or provide a way for
NKP to discover the AMI using location, format and OS information:

Procedure

1. Option One - Provide the ID of your AMI.

a. Use the example command below leaving the existing flag that provides the AMI ID: --ami AMI_ID

2. Option Two - Provide a path for your AMI with the information required for image discovery.

a. Where the AMI is published using your AWS Account ID: --ami-owner AWS_ACCOUNT_ID
b. The base OS information: --ami-base-os ubuntu-20.04
c. The format or string used to search for matching AMIs; ensure it references the Kubernetes version plus the
base OS name: --ami-format 'example-{{.BaseOS}}-?{{.K8sVersion}}-*'

Note:

• The AMI must be created with Konvoy Image Builder in order to use the registry mirror feature.
export AWS_AMI_ID=<ami-...>

• (Optional) Registry Mirror - Configure your cluster to use an existing local registry as a mirror when
attempting to pull images. The following is an AWS ECR example, where REGISTRY_URL is the address of
an existing local registry accessible in the VPC; the new cluster nodes are configured to use it as a
mirror registry when pulling images:
export REGISTRY_URL=<ecr-registry-URI>

3. Run this command to create your Kubernetes cluster using any relevant flags.
nkp create cluster aws \
--cluster-name=${CLUSTER_NAME} \
--additional-tags=owner=$(whoami) \
--with-aws-bootstrap-credentials=true \
--ami=${AWS_AMI_ID} \
--kubernetes-version=v1.29.6+fips.0 \
--etcd-version=3.5.10+fips.0 \
--kubernetes-image-repository=docker.io/mesosphere \
--self-managed
If providing the AMI path, use these flags in place of AWS_AMI_ID:
--ami-owner AWS_ACCOUNT_ID \
--ami-base-os ubuntu-20.04 \
--ami-format 'example-{{.BaseOS}}-?{{.K8sVersion}}-*' \

» Additional cluster creation flags based on your environment:


» Optional Registry flag: --registry-mirror-url=${REGISTRY_URL}
» Flatcar OS flag to instruct the bootstrap cluster to make changes related to the installation paths: --os-hint
flatcar



» HTTP or HTTPS flags if you use proxies: --http-proxy, --https-proxy, and --no-proxy
If you want to monitor or verify the installation of your clusters, refer to the topic: Verify your Cluster and NKP
Installation
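Before moving on, a quick on-the-spot check is to pull the cluster's kubeconfig and list the nodes (a sketch; kubectl must be installed locally). The kubelet version reported for each node should correspond to the FIPS Kubernetes version you specified (for example, v1.29.6+fips.0):
nkp get kubeconfig -c ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf
kubectl --kubeconfig=${CLUSTER_NAME}.conf get nodes -o wide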

AWS FIPS: Install Kommander


This section provides installation instructions for the Kommander component of NKP in a non-air-gapped
AWS environment using FIPS.

About this task


Once you have installed the Konvoy component of NKP, you will continue with the installation of the
Kommander component that will bring up the UI dashboard.

Tip: Tips and Recommendations

• The --kubeconfig=${CLUSTER_NAME}.conf flag ensures that you install Kommander on the correct
cluster. For alternatives, see Provide Context for Commands with a kubeconfig File.
• Applications can take longer to deploy and can time out the installation. Add the --wait-timeout <time
to wait> flag and specify a period of time (for example, 1h) to allocate more time to the deployment
of applications.
• If the Kommander installation fails, or you wish to reconfigure applications, rerun the install
command to retry.

Prerequisites:

• Ensure you have reviewed all Prerequisites for Install.


• Ensure you have a default StorageClass.
• Note the name of the cluster where you want to install Kommander. If you do not know the cluster name, use
kubectl get clusters -A to display and find it.

Create your Kommander Installation Configuration File

Procedure

1. Set the environment variable for your cluster.


export CLUSTER_NAME=<your-management-cluster-name>

2. Copy the kubeconfig file of your Management cluster to your local directory.
nkp get kubeconfig -c ${CLUSTER_NAME} >> ${CLUSTER_NAME}.conf

3. Create a configuration file for the deployment.


nkp install kommander --init > kommander.yaml

4. If required: Customize your kommander.yaml.

a. See Kommander Customizations page for customization options. Some options include Custom Domains and
Certificates, HTTP proxy and External Load Balancer.

5. (Optional) If your cluster uses a custom AWS VPC and requires an internal load-balancer, set the traefik
annotation to create an internal-facing ELB.
apps:
  traefik:
    enabled: true
    values: |
      service:
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-internal: "true"

6. Enable NKP Catalog Applications and install Kommander using the same kommander.yaml from the previous
section: if you are enabling NKP Catalog Apps, add these values for NKP-catalog-applications.
apiVersion: config.kommander.mesosphere.io/v1alpha1
kind: Installation
catalog:
  repositories:
    - name: NKP-catalog-applications
      labels:
        kommander.d2iq.io/project-default-catalog-repository: "true"
        kommander.d2iq.io/workspace-default-catalog-repository: "true"
        kommander.d2iq.io/gitapps-gitrepository-type: "nkp"
      gitRepositorySpec:
        url: https://github.com/mesosphere/nkp-catalog-applications
        ref:
          tag: v2.12.0

7. Use the customized kommander.yaml to install NKP.


nkp install kommander --installer-config kommander.yaml --kubeconfig=${CLUSTER_NAME}.conf

Note: If you only want to enable catalog applications to an existing configuration, add these values to an existing
installer configuration file to maintain your Management cluster’s settings.
If you want to enable NKP Catalog applications after installing NKP, see the topic Configuring NKP
Catalog Applications after Installing NKP.

AWS FIPS: Verifying your Installation and UI Log in


Verify Kommander Install and Log in to the Dashboard UI

About this task


Verify Kommander Installation

Note: If the Kommander installation fails or you wish to reconfigure applications, you can rerun the install command
to retry the installation.

Procedure
You can check the status of the installation using the command kubectl -n kommander wait --for
condition=Ready helmreleases --all --timeout 15m.

Note: If you prefer the CLI to not wait for all applications to become ready, you can set the --wait=false flag.

The command first waits for each of the Helm charts to reach their Ready condition, eventually resulting in output resembling
the following:
helmrelease.helm.toolkit.fluxcd.io/centralized-grafana condition met
helmrelease.helm.toolkit.fluxcd.io/dex condition met
helmrelease.helm.toolkit.fluxcd.io/dex-k8s-authenticator condition met
helmrelease.helm.toolkit.fluxcd.io/fluent-bit condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-logging condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-loki condition met



helmrelease.helm.toolkit.fluxcd.io/karma condition met
helmrelease.helm.toolkit.fluxcd.io/kommander condition met
helmrelease.helm.toolkit.fluxcd.io/kommander-appmanagement condition met
helmrelease.helm.toolkit.fluxcd.io/kube-prometheus-stack condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/kubefed condition met
helmrelease.helm.toolkit.fluxcd.io/kubernetes-dashboard condition met
helmrelease.helm.toolkit.fluxcd.io/kubetunnel condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator-logging condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-adapter condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/reloader condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph-cluster condition met
helmrelease.helm.toolkit.fluxcd.io/thanos condition met
helmrelease.helm.toolkit.fluxcd.io/traefik condition met
helmrelease.helm.toolkit.fluxcd.io/traefik-forward-auth-mgmt condition met
helmrelease.helm.toolkit.fluxcd.io/velero condition met

Failed HelmReleases

Procedure
If an application fails to deploy, check the status of a HelmRelease using the command kubectl -n kommander
get helmrelease <HELMRELEASE_NAME>

If you find any HelmReleases in a “broken” release state, such as “exhausted” or “another rollback/release in
progress”, trigger a reconciliation of the HelmRelease using the following commands:
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": true}]'
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": false}]'

Log in to the UI

Procedure

1. By default, you can log in to the UI in Kommander with the credentials given using the command nkp open
dashboard --kubeconfig=${CLUSTER_NAME}.conf.

2. Retrieve your credentials at any time using the command kubectl -n kommander get secret
NKP-credentials -o go-template='Username: {{.data.username|base64decode}}
{{ "\n"}}Password: {{.data.password|base64decode}}{{ "\n"}}'.

3. Retrieve the URL used for accessing the UI using the command kubectl -n kommander get svc
kommander-traefik -o go-template='https://{{with index .status.loadBalancer.ingress
0}}{{or .hostname .ip}}{{end}}/NKP/kommander/dashboard{{ "\n"}}'.
Only use these static credentials to access the UI for configuring an external identity provider. Treat them as
backup credentials rather than using them for normal access.

a. Rotate the password using the following command.


nkp experimental rotate dashboard-password
The output displays the new password:
Password: kqZ31lMBSCLcBjUKVwLJMQL2PxalipIzZw5Pjyw09wDqjWV3dz2wPSSBYi09JGJp

Dashboard UI Functions



Procedure

After installing the Konvoy component, building a cluster, successfully installing Kommander, and logging
in to the UI, you are now ready to customize configurations using the Day 2 Cluster Operations Management section
of the documentation. The majority of this customization, such as attaching clusters and deploying applications,
takes place in the NKP dashboard or UI. The Day 2 section allows you to manage cluster operations and their
application workloads to optimize your organization’s productivity.

• Continue to the NKP Dashboard.

AWS FIPS: Creating Managed Clusters Using the NKP CLI


This topic explains how to continue using the CLI to create managed clusters rather than switching to the
UI dashboard.

About this task


After initial cluster creation, you have the ability to create additional clusters from the CLI. In a previous step,
the new cluster was created as self-managed, which allows it to be a Management cluster or a standalone
cluster. Subsequent new clusters are not self-managed, as they will likely be Managed or Attached clusters to this
Management Cluster.

Note: When creating Managed clusters, you do not need to create and move CAPI objects, or install the
Kommander component. Those tasks are only done on Management clusters!
Your new managed cluster needs to be part of a workspace under a management cluster. To make the new
managed cluster a part of a Workspace, set that workspace environment variable.

Procedure

1. If you have an existing Workspace, run this command to find its name.
kubectl get workspace -A

2. When you have the Workspace name, set the WORKSPACE_NAMESPACE environment variable.
export WORKSPACE_NAMESPACE=<workspace_namespace>

Note: If you need to create a new Workspace, follow the instructions to Create a New Workspace

Name Your Cluster

About this task


Each cluster must have a unique name.
First you must name your cluster. Then you run the command to deploy it. Unlike the Management cluster you created earlier, this cluster is created as a Managed (non-self-managed) cluster.

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the
name has capital letters. See Kubernetes for more naming information.

Perform both steps to name the cluster:



Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable.


export MANAGED_CLUSTER_NAME=<aws-additional>

Create a Kubernetes Cluster

About this task


The below instructions tell you how to create a cluster and have it automatically attach to the workspace you set
above. If you do not set a workspace, it will be created in the default workspace, and you need to take additional
steps to attach to a workspace later. For instructions on how to do this, see Attach a Kubernetes Cluster.

Procedure
Execute this command to create your additional Kubernetes cluster using any relevant flags. This creates a new
non-self-managed cluster that can be managed by the Management cluster you created in the previous section.
nkp create cluster aws \
--cluster-name=${MANAGED_CLUSTER_NAME} \
--additional-tags=owner=$(whoami) \
--namespace ${WORKSPACE_NAMESPACE} \
--with-aws-bootstrap-credentials=true \
--kubernetes-version=v1.29.6+fips.0 \
--kubernetes-image-repository=docker.io/mesosphere \
--kubeconfig=<management-cluster-kubeconfig-path>

Tip: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-
proxy, and --no-proxy and their related values in this command for it to be successful. For more information, see
Clusters with HTTP or HTTPS Proxy on page 647.

Manually Attach an NKP CLI Cluster to the Management Cluster

Procedure

When you create a Managed Cluster with the NKP CLI, it attaches automatically to the Management Cluster after a
few moments. However, if you do not set a workspace, the attached cluster will be created in the default workspace.
To ensure that the attached cluster is created in your desired workspace namespace, follow these instructions:

1. Confirm you have your MANAGED_CLUSTER_NAME variable set with the following command.
echo ${MANAGED_CLUSTER_NAME}

2. Retrieve your kubeconfig from the cluster you have created without setting a workspace.
nkp get kubeconfig --cluster-name ${MANAGED_CLUSTER_NAME} >
${MANAGED_CLUSTER_NAME}.conf

3. Note: This step is only necessary if you did not set the workspace of your cluster upon creation.

You can now either attach the cluster to a workspace through the UI, as described earlier, or attach it to the
workspace you want using the CLI.

4. Retrieve the workspace where you want to attach the cluster.


kubectl get workspaces -A



5. Set the WORKSPACE_NAMESPACE environment variable.
export WORKSPACE_NAMESPACE=<workspace-namespace>

6. You need to create a secret in the desired workspace before attaching the cluster to that workspace. Retrieve the
kubeconfig secret value of your cluster.
kubectl -n default get secret ${MANAGED_CLUSTER_NAME}-kubeconfig -o go-
template='{{.data.value}}{{ "\n"}}'

7. This will return a lengthy value. Copy this entire string for a secret using the template below as a reference.
Create a new attached-cluster-kubeconfig.yaml file.
apiVersion: v1
kind: Secret
metadata:
  name: <your-managed-cluster-name>-kubeconfig
  labels:
    cluster.x-k8s.io/cluster-name: <your-managed-cluster-name>
type: cluster.x-k8s.io/secret
data:
  value: <value-you-copied-from-secret-above>

8. Create this secret in the desired workspace.


kubectl apply -f attached-cluster-kubeconfig.yaml --namespace
${WORKSPACE_NAMESPACE}

9. Create this kommandercluster object to attach the cluster to the workspace.


cat << EOF | kubectl apply -f -
apiVersion: kommander.mesosphere.io/v1beta1
kind: KommanderCluster
metadata:
  name: ${MANAGED_CLUSTER_NAME}
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  kubeconfigRef:
    name: ${MANAGED_CLUSTER_NAME}-kubeconfig
  clusterRef:
    capiCluster:
      name: ${MANAGED_CLUSTER_NAME}
EOF

10. You can now view this cluster in your Workspace in the UI and you can confirm its status by running the below
command. It may take a few minutes to reach "Joined" status.
kubectl get kommanderclusters -A
If you have several Pro clusters and want to turn one of them into a Managed cluster to be centrally administered
by a Management cluster, see Platform Expansion.

AWS Air-gapped with FIPS Installation


Installation instructions for installing NKP in an Amazon Web Services (AWS) air-gapped environment using FIPS.
Remember, there are always more options for custom YAML in the Custom
Installation and Additional Infrastructure Tools section, but this will get you operating with basic features.
If not already done, see the documentation for:

• Resource Requirements on page 38



• Installing NKP on page 47
• Prerequisites for Installation on page 44

Note: For air-gapped, ensure you download the bundle nkp-air-gapped-


bundle_v2.12.0_linux_amd64.tar.gz and extract the tar file to a local directory. For more information, see
Downloading NKP on page 16.
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz

AWS Prerequisites
Before you begin using Konvoy with AWS, you must:
1. Follow the steps to create permissions and roles on the Minimal Permissions and Role to Create Clusters page.
2. Create Cluster IAM Policies and Roles
3. Export the AWS region where you want to deploy the cluster:
export AWS_REGION=us-west-2

4. Export the AWS profile with the credentials you want to use to create the Kubernetes cluster:
export AWS_PROFILE=<profile>

If using AWS ECR as your local private registry, more information can be found on the Registry Mirror Tools page.
To deploy a cluster with a custom image in a region where CAPI images are not provided, you need to use Konvoy
Image Builder to create your own image for the region.

Note: For multi-tenancy, every tenant needs to be in a different AWS account to ensure they are truly independent of
other tenants in order to enforce security.

Section Contents

AWS Air-gapped FIPS: Creating an Image


Learn how to build a custom AMI for use with NKP.

About this task


This procedure describes how to use the Konvoy Image Builder (KIB) to create a Cluster API compliant Amazon
Machine Image (AMI). KIB uses variable overrides to specify base image and container images to use in your new
AMI.
AMI images contain configuration information and software to create a specific, pre-configured, operating
environment. For example, you can create an AMI image of your current computer system settings and software.
The AMI image can then be replicated and distributed, creating your computer system for other users. You can use
overrides files to customize some of the components installed on your machine image. For example, having the FIPS
versions of the Kubernetes components installed by KIB.
In previous Nutanix Kubernetes Platform (NKP) releases, AMI images provided by the upstream CAPA project were
used if you did not specify an AMI. However, the upstream images are not recommended for production and may not
always be available. Therefore, NKP now requires you to specify an AMI when creating a cluster. To create an AMI,
use Konvoy Image Builder. Explore the Customize your Image topic for more options about overrides.
The prerequisites to use Konvoy Image Builder are:

Procedure

1. Download the KIB bundle for your version of NKP prefixed with konvoy-image-bundle for your OS.



2. Check the Supported Infrastructure Operating Systems.

3. Check the Supported Kubernetes Version for your Provider.

4. Ensure you have a working container engine:

» Podman Version 4.0 or later for Linux. For more information, see https://podman.io/getting-started/
installation. For host requirements, see https://kind.sigs.k8s.io/docs/user/rootless/#host-requirements.
» Docker container engine version 18.09.2 or 20.10.0 installed for Linux or MacOS. For more information, see
https://docs.docker.com/get-docker/.

5. Ensure you have met the minimal set of permissions from the AWS Image Builder Book.

6. Ensure you have the Minimal IAM Permissions for KIB to create an image for an AWS account using Konvoy Image Builder.

Extract the KIB Bundle

About this task


If not done previously during Konvoy Image Builder download in Prerequisites, extract the bundle and cd into the
extracted konvoy-image-bundle-$VERSION folder. Otherwise, proceed to Build the Image below.
In previous NKP releases, the distro package bundles were included in the downloaded air-gapped bundle. Currently,
that air-gapped bundle contains the following artifacts with the exception of the distro packages:

• NKP Kubernetes packages


• Python packages (provided by upstream)
• Containerd tarball

Procedure

1. Download the air-gapped bundle nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz (see Downloading NKP on page 16), and extract the
tarball to a local directory.
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz && cd nkp-v2.12.0/kib

2. You will need to fetch the distro packages as well as other artifacts. By fetching the distro packages from distro
repositories, you get the latest security fixes available at machine image build time.

3. In your download location, there is a bundles directory with all the steps to create an OS package bundle for a
particular OS. To create it, run the NKP command create-package-bundle. This builds an OS bundle using
the Kubernetes version defined in ansible/group_vars/all/defaults.yaml. Example command.
./konvoy-image create-package-bundle --os redhat-8.4 --output-directory=artifacts

Note:

• For FIPS, pass the flag: --fips


• For Red Hat Enterprise Linux (RHEL) OS, pass your Red Hat subscription manager credentials by
exporting RHSM_ACTIVATION_KEY and RHSM_ORG_ID. Example commands:
export RHSM_ACTIVATION_KEY="-ci"
export RHSM_ORG_ID="1232131"
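Putting the note together with the base command, a FIPS package-bundle build for RHEL might look like this sketch (the activation key and org ID are placeholders):
export RHSM_ACTIVATION_KEY="<activation-key>"
export RHSM_ORG_ID="<org-id>"
./konvoy-image create-package-bundle --os redhat-8.4 --fips --output-directory=artifacts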



4. Follow the instructions below to build an AMI.

Note: The konvoy-image binary and all supporting folders are also extracted. When run, konvoy-image bind
mounts the current working directory (${PWD}) into the container to be used.

Set the environment variables for AWS access. The following variables must be set using your credentials
including required IAM:
export AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY
export AWS_DEFAULT_REGION
If you have an override file to configure specific attributes of your AMI file, add it. Instructions for customizing
an override file are found on this page: Image Overrides

Build the Image

About this task


Depending on which version of NKP you are running, steps and flags will be different. To deploy in a region where
CAPI images are not provided, you need to use KIB to create your own image for the region. For a list of supported
AWS regions, refer to the Published AMI information from AWS.

Procedure
Run the konvoy-image command to build and validate the image.
konvoy-image build aws images/ami/rhel-86.yaml

a. By default, it builds in the us-west-2 region. To specify another region, set the --region flag as shown in the
command below.
konvoy-image build aws --region us-east-1 images/ami/rhel-86.yaml

Note: Ensure you have named the correct AMI image YAML file for your OS in the konvoy-image build
command.

What to do next
After KIB provisions the image successfully, the AMI ID is printed and written to the packer.pkr.hcl (Packer config) file. This file has an artifact_id field whose
value provides the name of the AMI ID, as shown in the example below. That is the AMI you use in the nkp create
cluster command.
{
  "builds": [
    {
      "name": "kib_image",
      "builder_type": "amazon-ebs",
      "build_time": 1698086886,
      "files": null,
      "artifact_id": "us-west-2:ami-04b8dfef8bd33a016",
      "packer_run_uuid": "80f8296c-e975-d394-45f9-49ef2ccc6e05",
      "custom_data": {
        "containerd_version": "",
        "distribution": "RHEL",
        "distribution_version": "8.6",
        "kubernetes_cni_version": "",
        "kubernetes_version": "1.26.6"
      }
    }
  ],
  "last_run_uuid": "80f8296c-e975-d394-45f9-49ef2ccc6e05"
}

What to do next
1. To use a custom AMI when creating your cluster, you must create that AMI using KIB first. Then perform the
export and name the custom AMI for use in the command nkp create cluster:
export AWS_AMI_ID=ami-<ami-id-here>

Note: Inside the sections for either Non-air-gapped or Air-gapped cluster creation, you will find the instructions for
how to apply custom images.

Related Information

Procedure

• To use a local registry even in a non-air-gapped environment, download and extract the complete NKP
air-gapped bundle for this release (nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz) to load the
registry. For more information, see Downloading NKP on page 16.

• To view the complete set of instructions, see Load the Registry.

AWS Air-gapped FIPS: Loading the Registry


Before creating an air-gapped Kubernetes cluster, you need to load the required images in a local registry
for the Konvoy component.

About this task


The complete NKP air-gapped bundle is needed for an air-gapped environment but can also be used in
a non-air-gapped environment. The bundle contains all the NKP components needed for an air-gapped
environment installation and also to use a local registry in a non-air-gapped environment.

Note: If you do not already have a local registry set up, see the Local Registry Tools page for more information.

If you are operating in an air-gapped environment, a local container registry containing all the necessary installation
images, including the Kommander images is required. This registry must be accessible from both the bastion machine
and either the AWS EC2 instances (if deploying to AWS) or other machines that will be created for the Kubernetes
cluster.
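Before loading images, you can confirm the registry is reachable from the bastion machine with a request against the standard Docker Registry v2 endpoint; adjust the flags for your registry's TLS and authentication setup (the values below are placeholders):
curl --cacert <path-to-registry-ca> -u <username>:<password> https://<registry-address>:<registry-port>/v2/
A registry that implements the v2 API typically responds with an empty JSON body ({}) when the credentials are accepted.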

Procedure

1. If not already done in prerequisites, download the air-gapped bundle
nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz, and extract the tarball to a local directory.
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz

2. The directory structure after extraction is used in subsequent steps to access files from different directories. For example, for the bootstrap cluster, change to the nkp-<version> directory (adjusting the path for your current location):
cd nkp-v2.12.0

3. Set environment variables with your registry address and any other needed values using these commands.
export REGISTRY_URL="<https/http>://<registry-address>:<registry-port>"
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>

export REGISTRY_CA=<path to the cacert file on the bastion>
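For illustration only, a hypothetical set of values (the registry address, credentials, and CA path below are placeholders, not defaults):
export REGISTRY_URL="https://registry.example.internal:5000"
export REGISTRY_USERNAME=admin
export REGISTRY_PASSWORD='<your-registry-password>'
export REGISTRY_CA=/home/nkp-user/certs/registry-ca.crt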

4. Execute the following command to load the air-gapped image bundle into your private registry using any of the
relevant flags to apply variables above.
nkp push bundle --bundle ./container-images/konvoy-image-bundle-v2.12.0.tar --to-
registry=${REGISTRY_URL} --to-registry-username=${REGISTRY_USERNAME} --to-registry-
password=${REGISTRY_PASSWORD}

Note: It may take some time to push all the images to your image registry, depending on the performance of the
network between the machine you are running the script on and the registry.

Important: To increase Docker Hub's rate limit, use your Docker Hub credentials when creating the cluster by setting the following flags on the nkp create cluster command: --registry-mirror-url=https://registry-1.docker.io --registry-mirror-username=<username> --registry-mirror-password=<password>.

5. Load the Kommander component images to your private registry using the command.
nkp push bundle --bundle ./container-images/kommander-image-bundle-v2.12.0.tar --to-
registry=${REGISTRY_URL} --to-registry-username=${REGISTRY_USERNAME} --to-registry-
password=${REGISTRY_PASSWORD}
Optional: This step is required only if you have an Ultimate license.
For NKP Catalog Applications available with the Ultimate license, perform this image load by running the
following command to load the nkp-catalog-applications image bundle into your private registry:
nkp push bundle --bundle ./container-images/nkp-catalog-applications-image-bundle-
v2.12.0.tar --to-registry=${REGISTRY_URL} --to-registry-username=${REGISTRY_USERNAME}
--to-registry-password=${REGISTRY_PASSWORD}

AWS Air-gapped FIPS: Creating the Management Cluster


Create an AWS Management Cluster in an air-gapped environment using FIPS.

About this task


Use this procedure to create a self-managed Management cluster with NKP. A self-managed cluster refers to one
in which the CAPI resources and controllers that describe and manage it are running on the same cluster they are
managing.
If you use these instructions to create a cluster on AWS using the NKP default settings without any edits to
configuration files or additional flags, your cluster is deployed on an Ubuntu 20.04 operating system image with 3
control plane nodes, and 4 worker nodes.
NKP uses AWS CSI as the default storage provider. You can use a Kubernetes CSI compatible storage solution
that is suitable for production. See the Kubernetes documentation called Changing the Default Storage Class for
more information.
In previous NKP releases, AMI images provided by the upstream CAPA project were used if you did not specify
an AMI. However, the upstream images are not recommended for production and may not always be available.
Therefore, NKP now requires you to specify an AMI when creating a cluster. To create an AMI, use Konvoy Image
Builder.

Before you begin


First you must name your cluster.

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the name has capital letters. See Kubernetes for more naming information.



Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable:


export CLUSTER_NAME=<aws-example>

3. Export variables for the existing infrastructure details.


export AWS_VPC_ID=<vpc-...>
export AWS_SUBNET_IDS=<subnet-...,subnet-...,subnet-...>
export AWS_ADDITIONAL_SECURITY_GROUPS=<sg-...>
export AWS_AMI_ID=<ami-...>
AWS_VPC_ID: the VPC ID where the cluster will be created. The VPC requires the necessary AWS VPC endpoints to already be present.
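As a sketch with made-up identifiers (these values are illustrative only; the AMI ID matches the earlier KIB example output):
export AWS_VPC_ID=vpc-0a1b2c3d4e5f67890
export AWS_SUBNET_IDS=subnet-0aaa1111,subnet-0bbb2222,subnet-0ccc3333
export AWS_ADDITIONAL_SECURITY_GROUPS=sg-0ddd4444
export AWS_AMI_ID=ami-04b8dfef8bd33a016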

4. There are two approaches to supplying the ID of your AMI. Either provide the ID of the AMI or provide a way for
NKP to discover the AMI using location, format and OS information:

a. Option One - Provide the ID of your AMI.


Use the example command below leaving the existing flag that provides the AMI ID: --ami AMI_ID
b. Option Two - Provide a path for your AMI with the information required for image discovery.

• Where the AMI is published using your AWS Account ID: --ami-owner AWS_ACCOUNT_ID
• The base OS information: --ami-base-os ubuntu-20.04
• The format or string used to search for matching AMIs; ensure it references the Kubernetes version plus the base OS name: --ami-format 'example-{{.BaseOS}}-?{{.K8sVersion}}-*'

Note:

• The AMI must be created with Konvoy Image Builder in order to use the registry mirror
feature.
export AWS_AMI_ID=<ami-...>

• (Optional) Registry Mirror - Configure your cluster to use an existing local registry as a mirror
when attempting to pull images. Below is an AWS ECR example where REGISTRY_URL: the
address of an existing local registry accessible in the VPC that the new cluster nodes will be
configured to use a mirror registry when pulling images.:
export REGISTRY_URL=<ecr-registry-URI>

5. Run this command to create your Kubernetes cluster by providing the image ID and using any relevant flags.
nkp create cluster aws --cluster-name=${CLUSTER_NAME} \
--additional-tags=owner=$(whoami) \
--with-aws-bootstrap-credentials=true \
--vpc-id=${AWS_VPC_ID} \
--ami=${AWS_AMI_ID} \
--subnet-ids=${AWS_SUBNET_IDS} \
--internal-load-balancer=true \
--additional-security-group-ids=${AWS_ADDITIONAL_SECURITY_GROUPS} \
--registry-mirror-url=${REGISTRY_URL} \
--registry-mirror-cacert=${REGISTRY_CA} \
--registry-mirror-username=${REGISTRY_USERNAME} \
--registry-mirror-password=${REGISTRY_PASSWORD} \
--kubernetes-version=v1.29.6+fips.0 \
--etcd-version=3.5.10+fips.0 \
--kubernetes-image-repository=docker.io/mesosphere \
--self-managed
If providing the AMI path, use these flags in place of AWS_AMI_ID:
--ami-owner AWS_ACCOUNT_ID \
--ami-base-os ubuntu-20.04 \
--ami-format 'example-{{.BaseOS}}-?{{.K8sVersion}}-*' \

» Additional cluster creation flags based on your environment:


» Optional Registry flag: --registry-mirror-url=${REGISTRY_URL}
» Flatcar OS flag to instruct the bootstrap cluster to make changes related to the installation paths: --os-hint
flatcar

» HTTP or HTTPS flags if you use proxies: --http-proxy, --https-proxy, and --no-proxy
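The following is not part of the official procedure, but a minimal sanity check once the create command above completes, using the same kubeconfig retrieval command shown in the Kommander section below:
nkp get kubeconfig -c ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf
kubectl get nodes --kubeconfig ${CLUSTER_NAME}.conf
All control plane and worker nodes should eventually report a Ready status.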

AWS Air-gapped FIPS: Install Kommander


This section provides installation instructions for the Kommander component of NKP in an air-gapped AWS
environment with FIPS.

About this task


Once you have installed the Konvoy component of NKP, you will continue with the installation of the
Kommander component that will bring up the UI dashboard.

Tip: Tips and Recommendations

• The --kubeconfig=${CLUSTER_NAME}.conf flag ensures that you install Kommander on the correct
cluster. For alternatives, see Provide Context for Commands with a kubeconfig File.
• Applications can take longer to deploy, and time out the installation. Add the --wait-timeout <time
to wait> flag and specify a period of time (for example, 1h) to allocate more time to the deployment
of applications.
• If the Kommander installation fails, or you wish to reconfigure applications, rerun the install
command to retry.

Prerequisites:

• Ensure you have reviewed all Prerequisites for Install.


• Ensure you have a default StorageClass. For more information, see Creating a Default StorageClass on
page 474.
• Note the name of the cluster where you want to install Kommander. If you do not know the cluster name, use
kubectl get clusters -A to display and find it.

Create your Kommander Installation Configuration File

Procedure

1. Set the environment variable for your cluster.


export CLUSTER_NAME=<your-management-cluster-name>

2. Copy the kubeconfig file of your Management cluster to your local directory.
nkp get kubeconfig -c ${CLUSTER_NAME} >> ${CLUSTER_NAME}.conf



3. Create a configuration file for the deployment.
nkp install kommander --init > kommander.yaml

4. If required: Customize your kommander.yaml.

a. See Kommander Customizations page for customization options. Some options include Custom Domains and
Certificates, HTTP proxy and External Load Balancer.

5. Only if your cluster uses a custom AWS VPC and requires an internal load balancer, set the Traefik annotation to create an internal-facing ELB.
apps:
traefik:
enabled: true
values: |
service:
annotations:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"

6. To enable NKP Catalog Applications and install Kommander using the same kommander.yaml from the previous step, add these values (only if you are enabling NKP Catalog Apps) for nkp-catalog-applications.
apiVersion: config.kommander.mesosphere.io/v1alpha1
kind: Installation
catalog:
repositories:
- name: nkp-catalog-applications
labels:
kommander.d2iq.io/project-default-catalog-repository: "true"
kommander.d2iq.io/workspace-default-catalog-repository: "true"
kommander.d2iq.io/gitapps-gitrepository-type: "nkp"
gitRepositorySpec:
url: https://github.com/mesosphere/nkp-catalog-applications
ref:
tag: v2.12.0

7. Use the customized kommander.yaml to install NKP.


nkp install kommander --installer-config kommander.yaml --kubeconfig=
${CLUSTER_NAME}.conf

Note: If you only want to enable catalog applications to an existing configuration, add these values to an existing
installer configuration file to maintain your Management cluster’s settings.
If you want to enable NKP Catalog applications after installing NKP,see the topic Configuring NKP
Catalog Applications after Installing NKP.

AWS Air-gapped FIPS: Verifying your Installation and UI Log in


Verify Kommander Install and Log in to the Dashboard UI

About this task


Verify Kommander Installation

Note: If the Kommander installation fails or you wish to reconfigure applications, you can rerun the install command
to retry the installation.



Procedure
You can check the status of the installation using the following command.
kubectl -n kommander wait --for condition=Ready helmreleases --all --timeout 15m

Note: If you prefer the CLI to not wait for all applications to become ready, you can set the --wait=false flag.

The command waits for each of the Helm charts to reach their Ready condition, eventually resulting in output resembling the following:
helmrelease.helm.toolkit.fluxcd.io/centralized-grafana condition met
helmrelease.helm.toolkit.fluxcd.io/dex condition met
helmrelease.helm.toolkit.fluxcd.io/dex-k8s-authenticator condition met
helmrelease.helm.toolkit.fluxcd.io/fluent-bit condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-logging condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-loki condition met
helmrelease.helm.toolkit.fluxcd.io/karma condition met
helmrelease.helm.toolkit.fluxcd.io/kommander condition met
helmrelease.helm.toolkit.fluxcd.io/kommander-appmanagement condition met
helmrelease.helm.toolkit.fluxcd.io/kube-prometheus-stack condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/kubefed condition met
helmrelease.helm.toolkit.fluxcd.io/kubernetes-dashboard condition met
helmrelease.helm.toolkit.fluxcd.io/kubetunnel condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator-logging condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-adapter condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/reloader condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph-cluster condition met
helmrelease.helm.toolkit.fluxcd.io/thanos condition met
helmrelease.helm.toolkit.fluxcd.io/traefik condition met
helmrelease.helm.toolkit.fluxcd.io/traefik-forward-auth-mgmt condition met
helmrelease.helm.toolkit.fluxcd.io/velero condition met

Failed HelmReleases

Procedure
If an application fails to deploy, check the status of a HelmRelease using the command:
kubectl -n kommander get helmrelease <HELMRELEASE_NAME>
If you find any HelmReleases in a “broken” release state, such as “exhausted” or “another rollback/release in progress”, trigger a reconciliation of the HelmRelease using the following commands:
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": true}]'
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": false}]'

Log in to the UI

Procedure

1. By default, you can log in to the UI in Kommander with the credentials given using this command.
nkp open dashboard --kubeconfig=${CLUSTER_NAME}.conf



2. Retrieve your credentials at any time if necessary.
kubectl -n kommander get secret nkp-credentials -o go-template='Username: {{.data.username|base64decode}}{{ "\n"}}Password: {{.data.password|base64decode}}{{ "\n"}}'

3. Retrieve the URL used for accessing the UI with the following.
kubectl -n kommander get svc kommander-traefik -o go-template='https://{{with index .status.loadBalancer.ingress 0}}{{or .hostname .ip}}{{end}}/nkp/kommander/dashboard{{ "\n"}}'
Only use these static credentials to access the UI for configuring an external identity provider. Treat them as backup credentials rather than using them for normal access.

a. Rotate the password using the following command.


nkp experimental rotate dashboard-password
The output displays the new password:
Password: kqZ31lMBSCLcBjUKVwLJMQL2PxalipIzZw5Pjyw09wDqjWV3dz2wPSSBYi09JGJp

Dashboard UI Functions

Procedure

After installing the Konvoy component and building a cluster, and after successfully installing Kommander and logging in to the UI, you are ready to customize configurations using the Day 2 Cluster Operations Management section of the documentation. The majority of this customization, such as attaching clusters and deploying applications, takes place in the dashboard or UI of NKP. The Day 2 section allows you to manage cluster operations and their application workloads to optimize your organization’s productivity.

• Continue to the NKP Dashboard.

AWS Air-gapped FIPS: Creating Managed Clusters Using the NKP CLI
This topic explains how to continue using the CLI to create managed clusters rather than switching to the
UI dashboard.

About this task


After initial cluster creation, you have the ability to create additional clusters from the CLI. In a previous step, the new cluster was created as self-managed, which allows it to be a Management cluster or a standalone cluster. Subsequent new clusters are not self-managed, as they will likely be Managed or Attached clusters under this Management cluster.

Note: When creating Managed clusters, you do not need to create and move CAPI objects, or install the Kommander
component. Those tasks are only done on Management clusters!
Your new managed cluster needs to be part of a workspace under a management cluster. To make the new
managed cluster a part of a Workspace, set that workspace environment variable.

Procedure

1. If you have an existing Workspace name, run this command to find the name.
kubectl get workspace -A



2. When you have the Workspace name, set the WORKSPACE_NAMESPACE environment variable.
export WORKSPACE_NAMESPACE=<workspace_namespace>

Note: If you need to create a new Workspace, follow the instructions to Create a New Workspace

Name Your Cluster

About this task


Each cluster must have a unique name.
After you have set the workspace, you can proceed to creating the cluster by following these steps. Unlike the Management cluster created earlier, this process creates a Managed cluster that is not self-managed.
First you must name your cluster. Then you run the command to deploy it.

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the name has capital letters. See Kubernetes for more naming information.

Perform both steps to name the cluster:

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable using the command export MANAGED_CLUSTER_NAME=<aws-additional>

Create a Kubernetes Cluster

About this task


The below instructions tell you how to create a cluster and have it automatically attach to the workspace you set
above. If you do not set a workspace, it will be created in the default workspace, and you need to take additional
steps to attach to a workspace later. For instructions on how to do this, see Attach a Kubernetes Cluster.

Procedure
Execute this command to create your additional Kubernetes cluster using any relevant flags. This creates a new non-self-managed cluster that can be managed by the Management cluster you created in the previous section.
nkp create cluster aws \
--cluster-name=${MANAGED_CLUSTER_NAME} \
--additional-tags=owner=$(whoami) \
--namespace ${WORKSPACE_NAMESPACE} \
--with-aws-bootstrap-credentials=true \
--vpc-id=${AWS_VPC_ID} \
--ami=${AWS_AMI_ID} \
--subnet-ids=${AWS_SUBNET_IDS} \
--internal-load-balancer=true \
--additional-security-group-ids=${AWS_ADDITIONAL_SECURITY_GROUPS} \
--registry-mirror-url=${REGISTRY_URL} \
--registry-mirror-cacert=${REGISTRY_CA} \
--registry-mirror-username=${REGISTRY_USERNAME} \
--registry-mirror-password=${REGISTRY_PASSWORD} \
--kubeconfig=<management-cluster-kubeconfig-path>

Tip: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-
proxy, and --no-proxy and their related values in this command for it to be successful. For more information, see
Clusters with HTTP or HTTPS Proxy on page 647.

Manually Attach an NKP CLI Cluster to the Management Cluster

Procedure

When you create a Managed Cluster with the NKP CLI, it attaches automatically to the Management Cluster after a
few moments. However, if you do not set a workspace, the attached cluster will be created in the default workspace.
To ensure that the attached cluster is created in your desired workspace namespace, follow these instructions:

1. Confirm you have your MANAGED_CLUSTER_NAME variable set using the command echo
${MANAGED_CLUSTER_NAME}

2. Retrieve your kubeconfig from the cluster you have created without setting a workspace using
the command nkp get kubeconfig --cluster-name ${MANAGED_CLUSTER_NAME} >
${MANAGED_CLUSTER_NAME}.conf

3. You can now either attach the cluster to a workspace through the UI (as described earlier), or attach it to the workspace you want using the CLI.

Note: This is only necessary if you never set the workspace of your cluster upon creation.

4. Retrieve the workspace where you want to attach the cluster using the command kubectl get workspaces
-A.

5. Set the WORKSPACE_NAMESPACE environment variable using the command export


WORKSPACE_NAMESPACE=<workspace-namespace>

6. You need to create a secret in the desired workspace before attaching the cluster to that workspace. Retrieve
the kubeconfig secret value of your cluster using the command kubectl -n default get secret
${MANAGED_CLUSTER_NAME}-kubeconfig -o go-template='{{.data.value}}{{ "\n"}}'.

7. This will return a lengthy value. Copy this entire string for a secret using the template below as a reference.
Create a new attached-cluster-kubeconfig.yaml file.
Example:
apiVersion: v1
kind: Secret
metadata:
name: <your-managed-cluster-name>-kubeconfig
labels:
cluster.x-k8s.io/cluster-name: <your-managed-cluster-name>
type: cluster.x-k8s.io/secret
data:
value: <value-you-copied-from-secret-above>

8. Create this secret in the desired workspace using the command kubectl apply -f attached-cluster-
kubeconfig.yaml --namespace ${WORKSPACE_NAMESPACE}

9. Create this kommandercluster object to attach the cluster to the workspace.


Example:
cat << EOF | kubectl apply -f -
apiVersion: kommander.mesosphere.io/v1beta1
kind: KommanderCluster

metadata:
name: ${MANAGED_CLUSTER_NAME}
namespace: ${WORKSPACE_NAMESPACE}
spec:
kubeconfigRef:
name: ${MANAGED_CLUSTER_NAME}-kubeconfig
clusterRef:
capiCluster:
name: ${MANAGED_CLUSTER_NAME}
EOF

10. You can now view this cluster in your Workspace in the UI and you can confirm its status by using the
command kubectl get kommanderclusters -A.
It may take a few minutes to reach "Joined" status. If you have several Pro Clusters and want to turn one of them into a Managed Cluster to be centrally administered by a Management Cluster, review Platform Expansion.

AWS with GPU Installation


This installation provides instructions to install NKP in an AWS non-air-gapped environment.
Remember, there are always more options for custom YAML in the Custom Installation and Additional
Infrastructure Tools section, but this will get you operating with basic features.
If not already done, see the documentation for:

• Resource Requirements on page 38


• Installing NKP on page 47
• Prerequisites for Installation on page 44

AWS Prerequisites
Before you begin using Konvoy with AWS, you must:
1. Follow the steps to create permissions and roles on the Minimal Permissions and Role to Create Clusters page.
2. Create Cluster IAM Policies and Roles
3. Export the AWS region where you want to deploy the cluster:
export AWS_REGION=us-west-2

4. Export the AWS profile with the credentials you want to use to create the Kubernetes cluster:
export AWS_PROFILE=<profile>

If using AWS ECR as your local private registry, more information can be found on the Registry Mirror Tools page.
To deploy a cluster with a custom image in a region where CAPI images are not provided, you need to use Konvoy
Image Builder to create your own image for the region.

Note: For multi-tenancy, every tenant needs to be in a different AWS account to ensure they are truly independent of
other tenants in order to enforce security.

GPU Prerequisites
Before you begin, you must:

• Ensure nodes provide an NVIDIA GPU


• If you are using a public cloud service, such as AWS, create an AMI with KIB using the instructions in the KIB for GPU topic.



• If you are deploying in a pre-provisioned environment, ensure that you have created the appropriate secret for
your GPU nodepool and have uploaded the appropriate artifacts to each node.

Section Contents

AWS with GPU: Using Node Label Auto Configuration


When using GPU nodes, it is important that they have the proper label identifying them as Nvidia GPU nodes. Node feature discovery (NFD), by default, labels PCI hardware as:
"feature.node.kubernetes.io/pci-<device label>.present": "true"
where <device label> is, by default, defined as:
<class>_<vendor>
However, because there is a wide variety of devices and their assigned PCI classes, you may find that the labels assigned to your GPU nodes do not always properly identify them as containing an Nvidia GPU.
If the default detection does not work, you can manually set the nodeSelector in the daemonset that the GPU operator creates:
nodeSelector:
  feature.node.kubernetes.io/pci-<class>_<vendor>.present: "true"
where <class> is any four-digit number starting with 03 and the vendor ID for Nvidia is 10de. If the daemonset is already deployed, you can edit it and change the nodeSelector field so that it deploys to the right nodes.
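As a concrete sketch, assuming NFD labeled a GPU node with PCI class 0302 (a 3D controller) and the Nvidia vendor ID 10de, the resulting nodeSelector would be:
nodeSelector:
  feature.node.kubernetes.io/pci-0302_10de.present: "true"
You can check which pci labels NFD actually applied to a node (the node name here is hypothetical) with:
kubectl describe node gpu-node-0 | grep pci-03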

AWS with GPU: Creating an Image


Learn how to build a custom AMI for use with NKP.

About this task


This procedure describes how to use the Konvoy Image Builder (KIB) to create a Cluster API compliant Amazon
Machine Image (AMI). KIB uses variable overrides to specify base image and container images to use in your new
AMI.
AMI images contain configuration information and software to create a specific, pre-configured, operating
environment. For example, you can create an AMI image of your current computer system settings and software.
The AMI image can then be replicated and distributed, creating your computer system for other users. You can use
overrides files to customize some of the components installed on your machine image. For example, having the FIPS
versions of the Kubernetes components installed by KIB.
In previous NKP releases, AMI images provided by the upstream CAPA project were used if you did not specify
an AMI. However, the upstream images are not recommended for production and may not always be available.
Therefore, NKP now requires you to specify an AMI when creating a cluster. To create an AMI, use Konvoy Image
Builder. Explore the Customize your Image topic for more options about overrides.
The prerequisites to use Konvoy Image Builder are:

Procedure

1. Download the KIB bundle for your version of NKP prefixed with konvoy-image-bundle for your OS.

2. Check the Supported Infrastructure Operating Systems

3. Check the Supported Kubernetes Version for your Provider.



4. Ensure you have a working container engine.

» Podman Version 4.0 or later for Linux. For more information, see https://podman.io/getting-started/
installation. For host requirements, see https://kind.sigs.k8s.io/docs/user/rootless/#host-requirements.
» Docker container engine version 18.09.2 or 20.10.0 installed for Linux or MacOS. For more information, see
https://docs.docker.com/get-docker/.

5. Ensure you have met the minimal set of permissions from the AWS Image Builder Book.

6. A Minimal IAM Permissions for KIB to create an Image for an AWS account using Konvoy Image Builder.

Extract the KIB Bundle

About this task


If not done previously during Konvoy Image Builder download in Prerequisites, extract the bundle and cd into the
extracted konvoy-image-bundle-$VERSION folder. Otherwise, proceed to Build the Image below.
In previous NKP releases, the distro package bundles were included in the downloaded air-gapped bundle. Currently,
that air-gapped bundle contains the following artifacts with the exception of the distro packages:

• NKP Kubernetes packages


• Python packages (provided by upstream)
• Containerd tarball

Procedure

1. Download the air-gapped bundle nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz (see Downloading NKP on page 16) and extract the tarball to a local directory.
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz && cd nkp-v2.12.0/kib

2. You will need to fetch the distro packages as well as other artifacts. By fetching the distro packages from distro
repositories, you get the latest security fixes available at machine image build time.

3. In your download location, there is a bundles directory with all the steps to create an OS package bundle for a
particular OS. To create it, run the new NKP command create-package-bundle. This builds an OS bundle
using the Kubernetes version defined in ansible/group_vars/all/defaults.yaml. Example command.
./konvoy-image create-package-bundle --os redhat-8.4 --output-directory=artifacts

Note: For FIPS, pass the flag: --fips

Note: For Red Hat Enterprise Linux (RHEL) OS, pass your Red Hat Subscription Manager credentials by exporting RHSM_ACTIVATION_KEY and RHSM_ORG_ID. Example commands:
export RHSM_ACTIVATION_KEY="<activation-key>"
export RHSM_ORG_ID="1232131"



4. Follow the instructions below to build an AMI.

Note: The konvoy-image binary and all supporting folders are also extracted. When run, konvoy-image bind
mounts the current working directory (${PWD}) into the container to be used.

Set the environment variables for AWS access. The following variables must be set using credentials that include the required IAM permissions:
export AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY
export AWS_DEFAULT_REGION
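One way to populate these (an assumption, not the only approach) is from the AWS CLI profile and region you exported in the AWS Prerequisites above:
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id --profile ${AWS_PROFILE})
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key --profile ${AWS_PROFILE})
export AWS_DEFAULT_REGION=${AWS_REGION}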
If you have an override file to configure specific attributes of your AMI file, add it. Instructions for customizing
an override file are found on this page: Image Overrides

Build the Image

About this task


Depending on which version of NKP you are running, steps and flags will be different. To deploy in a region where
CAPI images are not provided, you need to use KIB to create your own image for the region. For a list of supported
AWS regions, refer to the Published AMI information from AWS.

Procedure
Run the konvoy-image command to build and validate the image.
konvoy-image build aws images/ami/rhel-86.yaml

a. By default, it builds in the us-west-2 region. To specify another region, set the --region flag as shown in the command below.
konvoy-image build aws --region us-east-1 images/ami/rhel-86.yaml

Note: Ensure you have named the correct AMI image YAML file for your OS in the konvoy-image build
command.

What to do next
After KIB provisions the image successfully, the AMI ID is printed and written to the packer.pkr.hcl (Packer config) file. This file has an artifact_id field whose value provides the AMI ID, as shown in the example below. That is the AMI you use in the nkp create cluster command.
{
  "builds": [
    {
      "name": "kib_image",
      "builder_type": "amazon-ebs",
      "build_time": 1698086886,
      "files": null,
      "artifact_id": "us-west-2:ami-04b8dfef8bd33a016",
      "packer_run_uuid": "80f8296c-e975-d394-45f9-49ef2ccc6e05",
      "custom_data": {
        "containerd_version": "",
        "distribution": "RHEL",
        "distribution_version": "8.6",
        "kubernetes_cni_version": "",
        "kubernetes_version": "1.26.6"
      }
    }
  ],
  "last_run_uuid": "80f8296c-e975-d394-45f9-49ef2ccc6e05"
}

What to do next
1. To use a custom AMI when creating your cluster, you must create that AMI using KIB first. Then export the custom AMI ID for use in the nkp create cluster command:
export AWS_AMI_ID=ami-<ami-id-here>

Note: Inside the sections for either Non-air-gapped or Air-gapped cluster creation, you will find the instructions for
how to apply custom images.

Related Information

Procedure

• To use a local registry even in a non-air-gapped environment, download and extract the complete NKP air-gapped bundle for this release (that is, nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz) to load the registry. See Downloading NKP on page 16.

• To view the complete set of instructions, see Load the Registry.

AWS with GPU: Creating the Management Cluster


Create an AWS Management Cluster in a non-air-gapped environment using GPU.

About this task


Use this procedure to create a self-managed Management cluster with NKP. A self-managed cluster refers to one
in which the CAPI resources and controllers that describe and manage it are running on the same cluster they are
managing.
If you use these instructions to create a cluster on AWS using the NKP default settings without any edits to
configuration files or additional flags, your cluster is deployed on an Ubuntu 20.04 operating system image with 3
control plane nodes, and 4 worker nodes.
NKP uses AWS CSI as the default storage provider. You can use a Kubernetes CSI-compatible storage solution
that is suitable for production. See the Kubernetes documentation called Changing the Default Storage Class for
more information.
In previous NKP releases, AMI images provided by the upstream CAPA project were used if you did not specify
an AMI. However, the upstream images are not recommended for production and may not always be available.
Therefore, NKP now requires you to specify an AMI when creating a cluster. To create an AMI, use Konvoy Image
Builder.

Before you begin


First you must name your cluster.

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the name has capital letters. See Kubernetes for more naming information.

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable:


export CLUSTER_NAME=<aws-example>



3. There are two approaches to supplying the ID of your AMI. Either provide the ID of the AMI or provide a way for NKP to discover the AMI using location, format, and OS information:

a. Option One - Provide the ID of your AMI.

Use the example command below leaving the existing flag that provides the AMI ID: --ami AMI_ID
b. Option Two - Provide a path for your AMI with the information required for image discovery.

• Where the AMI is published using your AWS Account ID: --ami-owner AWS_ACCOUNT_ID
• The base OS information: --ami-base-os ubuntu-20.04
• The format or string used to search for matching AMIs; ensure it references the Kubernetes version plus the base OS name: --ami-format 'example-{{.BaseOS}}-?{{.K8sVersion}}-*'

Note:

• The AMI must be created with Konvoy Image Builder in order to use the registry mirror
feature.
export AWS_AMI_ID=<ami-...>

• (Optional) Registry Mirror - Configure your cluster to use an existing local registry as a mirror
when attempting to pull images. Below is an AWS ECR example where REGISTRY_URL: the
address of an existing local registry accessible in the VPC that the new cluster nodes will be
configured to use a mirror registry when pulling images.:
export REGISTRY_URL=<ecr-registry-URI>

4. Run this command to create your Kubernetes cluster by providing the image ID and using any relevant flags.
nkp create cluster aws \
--cluster-name=${CLUSTER_NAME} \
--ami=${AWS_AMI_ID} \
--additional-tags=owner=$(whoami) \
--with-aws-bootstrap-credentials=true \
--self-managed
If providing the AMI path, use these flags in place of AWS_AMI_ID:
--ami-owner AWS_ACCOUNT_ID \
--ami-base-os ubuntu-20.04 \
--ami-format 'example-{{.BaseOS}}-?{{.K8sVersion}}-*' \

» Additional cluster creation flags based on your environment:


» Optional Registry flag: --registry-mirror-url=${REGISTRY_URL}
» Flatcar OS flag to instruct the bootstrap cluster to make changes related to the installation paths: --os-hint
flatcar

» HTTP or HTTPS flags if you use proxies: --http-proxy, --https-proxy, and --no-proxy

5. After cluster creation, create the GPU node pool.
nkp create nodepool aws -c ${CLUSTER_NAME} \
--instance-type p2.xlarge \
--ami-id=${AMI_ID_FROM_KIB} \
--replicas=1 ${NODEPOOL_NAME} \
--kubeconfig=${CLUSTER_NAME}.conf
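Not part of the official steps, but a quick sanity check that the GPU node pool joined the cluster (a sketch; node.kubernetes.io/instance-type is a standard Kubernetes well-known label, and the grep pattern assumes the p2.xlarge type used above):
kubectl get nodes -o wide --kubeconfig=${CLUSTER_NAME}.conf
kubectl get nodes --kubeconfig=${CLUSTER_NAME}.conf -L node.kubernetes.io/instance-type | grep p2.xlarge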



AWS with GPU: Install Kommander
This section provides installation instructions for the Kommander component of NKP in a non-air-gapped
AWS environment using GPU.

About this task


Once you have installed the Konvoy component of NKP, you will continue with the installation of the
Kommander component that will bring up the UI dashboard.

Tip: Tips and Recommendations

• The --kubeconfig=${CLUSTER_NAME}.conf flag ensures that you install Kommander on the correct
cluster. For alternatives, see Provide Context for Commands with a kubeconfig File.
• Applications can take longer to deploy, and time out the installation. Add the --wait-timeout <time
to wait> flag and specify a period of time (for example, 1h) to allocate more time to the deployment
of applications.
• If the Kommander installation fails, or you wish to reconfigure applications, rerun the install
command to retry.

Prerequisites:

• Ensure you have reviewed all Prerequisites for Install.


• Ensure you have a default StorageClass.
• Note the name of the cluster where you want to install Kommander. If you do not know the cluster name, use
kubectl get clusters -A to display and find it.

Create your Kommander Installation Configuration File

Procedure

1. Set the environment variable for your cluster.


export CLUSTER_NAME=<your-management-cluster-name>

2. Copy the kubeconfig file of your Management cluster to your local directory.
nkp get kubeconfig -c ${CLUSTER_NAME} >> ${CLUSTER_NAME}.conf

3. Create a configuration file for the deployment.


nkp install kommander --init > kommander.yaml

4. If required: Customize your kommander.yaml.

a. See Kommander Customizations page for customization options. Some options include Custom Domains and
Certificates, HTTP proxy and External Load Balancer.

5. Only if your cluster uses a custom AWS VPC and requires an internal load balancer, set the Traefik annotation to create an internal-facing ELB.
apps:
traefik:
enabled: true
values: |
service:
annotations:

service.beta.kubernetes.io/aws-load-balancer-internal: "true"

6. To enable NKP Catalog Applications and install Kommander using the same kommander.yaml from the previous step, add these values (only if you are enabling NKP Catalog Apps) for nkp-catalog-applications.
apiVersion: config.kommander.mesosphere.io/v1alpha1
kind: Installation
catalog:
repositories:
- name: nkp-catalog-applications
labels:
kommander.d2iq.io/project-default-catalog-repository: "true"
kommander.d2iq.io/workspace-default-catalog-repository: "true"
kommander.d2iq.io/gitapps-gitrepository-type: "nkp"
gitRepositorySpec:
url: https://github.com/mesosphere/nkp-catalog-applications
ref:
tag: v2.12.0

7. Use the customized kommander.yaml to install NKP.


nkp install kommander --installer-config kommander.yaml --kubeconfig=
${CLUSTER_NAME}.conf

Note: If you only want to enable catalog applications to an existing configuration, add these values to an existing
installer configuration file to maintain your Management cluster’s settings.
If you want to enable NKP Catalog applications after installing NKP,see the topic Configuring NKP
Catalog Applications after Installing NKP.

AWS with GPU: Verifying your Installation and UI Log in


Verify Kommander Install and Log in to the Dashboard UI

About this task


Verify Kommander Installation

Note: If the Kommander installation fails or you wish to reconfigure applications, you can rerun the install command
to retry the installation.

Procedure
You can check the status of the installation using the following command.
kubectl -n kommander wait --for condition=Ready helmreleases --all --timeout 15m

Note: If you prefer the CLI to not wait for all applications to become ready, you can set the --wait=false flag.

The command waits for each of the Helm charts to reach their Ready condition, eventually resulting in output resembling the following:
helmrelease.helm.toolkit.fluxcd.io/centralized-grafana condition met
helmrelease.helm.toolkit.fluxcd.io/dex condition met
helmrelease.helm.toolkit.fluxcd.io/dex-k8s-authenticator condition met
helmrelease.helm.toolkit.fluxcd.io/fluent-bit condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-logging condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-loki condition met
helmrelease.helm.toolkit.fluxcd.io/karma condition met
helmrelease.helm.toolkit.fluxcd.io/kommander condition met
helmrelease.helm.toolkit.fluxcd.io/kommander-appmanagement condition met
helmrelease.helm.toolkit.fluxcd.io/kube-prometheus-stack condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/kubefed condition met
helmrelease.helm.toolkit.fluxcd.io/kubernetes-dashboard condition met
helmrelease.helm.toolkit.fluxcd.io/kubetunnel condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator-logging condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-adapter condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/reloader condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph-cluster condition met
helmrelease.helm.toolkit.fluxcd.io/thanos condition met
helmrelease.helm.toolkit.fluxcd.io/traefik condition met
helmrelease.helm.toolkit.fluxcd.io/traefik-forward-auth-mgmt condition met
helmrelease.helm.toolkit.fluxcd.io/velero condition met

Failed HelmReleases

Procedure
If an application fails to deploy, check the status of a HelmRelease using the command:
kubectl -n kommander get helmrelease <HELMRELEASE_NAME>
If you find any HelmReleases in a “broken” release state, such as “exhausted” or “another rollback/release in progress”, trigger a reconciliation of the HelmRelease using the following commands:
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": true}]'
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": false}]'

Log in to the UI

Procedure

1. By default, you can log in to the UI in Kommander with the credentials given using this command.
nkp open dashboard --kubeconfig=${CLUSTER_NAME}.conf

2. Retrieve your credentials at any time if necessary.


kubectl -n kommander get secret nkp-credentials -o go-template='Username: {{.data.username|base64decode}}{{ "\n"}}Password: {{.data.password|base64decode}}{{ "\n"}}'

3. Retrieve the URL used for accessing the UI with the following.
kubectl -n kommander get svc kommander-traefik -o go-template='https://{{with index .status.loadBalancer.ingress 0}}{{or .hostname .ip}}{{end}}/nkp/kommander/dashboard{{ "\n"}}'
Only use these static credentials to access the UI for configuring an external identity provider. Treat them as backup credentials rather than using them for normal access.

a. Rotate the password using the following command.


nkp experimental rotate dashboard-password
The output displays the new password:
Password: kqZ31lMBSCLcBjUKVwLJMQL2PxalipIzZw5Pjyw09wDqjWV3dz2wPSSBYi09JGJp

Dashboard UI Functions



Procedure

After installing the Konvoy component and building a cluster, and after successfully installing Kommander and logging in to the UI, you are ready to customize configurations using the Day 2 Cluster Operations Management section of the documentation. The majority of this customization, such as attaching clusters and deploying applications, takes place in the dashboard or UI of NKP. The Day 2 section allows you to manage cluster operations and their application workloads to optimize your organization’s productivity.

• Continue to the NKP Dashboard.

AWS with GPU: Creating Managed Clusters Using the NKP CLI
This topic explains how to continue using the CLI to create managed clusters rather than switching to the
UI dashboard.

About this task


After initial cluster creation, you have the ability to create additional clusters from the CLI. In a previous step, the new cluster was created as self-managed, which allows it to be a Management cluster or a standalone cluster. Subsequent new clusters are not self-managed, as they will likely be Managed or Attached clusters under this Management cluster.

Note: When creating Managed clusters, you do not need to create and move CAPI objects, or install the
Kommander component. Those tasks are only done on Management clusters!
Your new managed cluster needs to be part of a workspace under a management cluster. To make the new
managed cluster a part of a Workspace, set that workspace environment variable.

Procedure

1. If you have an existing Workspace name, run this command to find the name.
kubectl get workspace -A

2. When you have the Workspace name, set the WORKSPACE_NAMESPACE environment variable.
export WORKSPACE_NAMESPACE=<workspace_namespace>

Note: If you need to create a new Workspace, follow the instructions to Create a New Workspace

Name Your Cluster

About this task


Each cluster must have a unique name.
After you have set the workspace, you can proceed to creating the cluster by following these steps. Unlike the Management cluster created earlier, this process creates a Managed cluster that is not self-managed.
First you must name your cluster. Then you run the command to deploy it.

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the name has capital letters. See Kubernetes for more naming information.

Perform both steps to name the cluster:



Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable.


export MANAGED_CLUSTER_NAME=<aws-additional>

Create a Kubernetes Cluster

About this task


The below instructions tell you how to create a cluster and have it automatically attach to the workspace you set
above. If you do not set a workspace, it will be created in the default workspace, and you need to take additional
steps to attach to a workspace later. For instructions on how to do this, see Attach a Kubernetes Cluster.

Procedure

1. Execute this command to create your additional Kubernetes cluster using any relevant flags. This creates a new non-self-managed cluster that can be managed by the Management cluster you created in the previous section.
nkp create cluster aws \
--cluster-name=${MANAGED_CLUSTER_NAME} \
--additional-tags=owner=$(whoami) \
--namespace ${WORKSPACE_NAMESPACE} \
--with-aws-bootstrap-credentials=true \
--kubeconfig=<management-cluster-kubeconfig-path>

Tip: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-
proxy, and --no-proxy and their related values in this command for it to be successful. For more information,
see Clusters with HTTP or HTTPS Proxy on page 647.

2. Create the node pool after cluster creation.


nkp create nodepool aws -c ${CLUSTER_NAME} \
--instance-type p2.xlarge \
--ami-id=${AMI_ID_FROM_KIB} \
--replicas=1 ${NODEPOOL_NAME} \
--kubeconfig=${CLUSTER_NAME}.conf

Manually Attach an NKP CLI Cluster to the Management Cluster

Procedure

When you create a Managed Cluster with the NKP CLI, it attaches automatically to the Management Cluster after a
few moments. However, if you do not set a workspace, the attached cluster will be created in the default workspace.
To ensure that the attached cluster is created in your desired workspace namespace, follow these instructions:

1. Confirm you have your MANAGED_CLUSTER_NAME variable set with the following command.
echo ${MANAGED_CLUSTER_NAME}

2. Retrieve your kubeconfig from the cluster you have created without setting a workspace.
nkp get kubeconfig --cluster-name ${MANAGED_CLUSTER_NAME} >
${MANAGED_CLUSTER_NAME}.conf



3. Note: This is only necessary if you never set the workspace of your cluster upon creation.

You can now either attach the cluster to a workspace through the UI (as described earlier), or attach it to the workspace you want using the CLI.

4. Retrieve the workspace where you want to attach the cluster.


kubectl get workspaces -A

5. Set the WORKSPACE_NAMESPACE environment variable.


export WORKSPACE_NAMESPACE=<workspace-namespace>

6. You need to create a secret in the desired workspace before attaching the cluster to that workspace. Retrieve the
kubeconfig secret value of your cluster.
kubectl -n default get secret ${MANAGED_CLUSTER_NAME}-kubeconfig -o go-
template='{{.data.value}}{{ "\n"}}'

7. This will return a lengthy value. Copy this entire string for a secret using the template below as a reference.
Create a new attached-cluster-kubeconfig.yaml file.
apiVersion: v1
kind: Secret
metadata:
name: <your-managed-cluster-name>-kubeconfig
labels:
cluster.x-k8s.io/cluster-name: <your-managed-cluster-name>
type: cluster.x-k8s.io/secret
data:
value: <value-you-copied-from-secret-above>

8. Create this secret in the desired workspace.


kubectl apply -f attached-cluster-kubeconfig.yaml --namespace
${WORKSPACE_NAMESPACE}

9. Create this kommandercluster object to attach the cluster to the workspace.


cat << EOF | kubectl apply -f -
apiVersion: kommander.mesosphere.io/v1beta1
kind: KommanderCluster
metadata:
name: ${MANAGED_CLUSTER_NAME}
namespace: ${WORKSPACE_NAMESPACE}
spec:
kubeconfigRef:
name: ${MANAGED_CLUSTER_NAME}-kubeconfig
clusterRef:
capiCluster:
name: ${MANAGED_CLUSTER_NAME}
EOF

10. You can now view this cluster in your Workspace in the UI and you can confirm its status by running the below
command. It may take a few minutes to reach "Joined" status.
kubectl get kommanderclusters -A
If you have several Pro Clusters and want to turn one of them into a Managed Cluster to be centrally administered by a Management Cluster, refer to Platform Expansion.



AWS Air-gapped with GPU Installation
Installation instructions for installing NKP in an Amazon Web Services (AWS) air-gapped environment.
Remember, there are always more options for custom YAML (YAML Ain't Markup Language) in the Custom Installation and Additional Infrastructure Tools section, but this will get you operating with basic features.
If not already done, see the documentation for:

• Resource Requirements on page 38


• Installing NKP on page 47
• Prerequisites for Installation on page 44

Note: For air-gapped, ensure you download the bundle nkp-air-gapped-


bundle_v2.12.0_linux_amd64.tar.gz and extract the tar file to a local directory. For more information, see
Downloading NKP on page 16.
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz

AWS Prerequisites
Before you begin using Konvoy with AWS, you must:
1. Follow the steps to create permissions and roles on the Minimal Permissions and Role to Create Clusters page.
2. Create Cluster IAM Policies and Roles
3. Export the AWS region where you want to deploy the cluster:
export AWS_REGION=us-west-2

4. Export the AWS profile with the credentials you want to use to create the Kubernetes cluster:
export AWS_PROFILE=<profile>

If using AWS ECR as your local private registry, more information can be found on the Registry Mirror Tools page.
To deploy a cluster with a custom image in a region where CAPI images are not provided, you need to use Konvoy
Image Builder to create your own image for the region.

Note: For multi-tenancy, every tenant needs to be in a different AWS account to ensure they are truly independent of
other tenants in order to enforce security.

Section Contents

AWS Air-gapped with GPU: Using Node Label Auto Configuration


When using GPU nodes, it is important that they have the proper label identifying them as Nvidia GPU nodes. Node feature discovery (NFD), by default, labels PCI hardware as:
"feature.node.kubernetes.io/pci-<device label>.present": "true"
where <device label> is, by default, defined as:
<class>_<vendor>
However, because there is a wide variety of devices and their assigned PCI classes, you may find that the labels assigned to your GPU nodes do not always properly identify them as containing an Nvidia GPU.
If the default detection does not work, you can manually set the nodeSelector in the daemonset that the GPU operator creates:
nodeSelector:
  feature.node.kubernetes.io/pci-<class>_<vendor>.present: "true"
where <class> is any four-digit number starting with 03 and the vendor ID for Nvidia is 10de. If the daemonset is already deployed, you can edit it and change the nodeSelector field so that it deploys to the right nodes.
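If the GPU operator daemonset is already deployed, one way to adjust its nodeSelector is with kubectl patch. The namespace and daemonset name below are hypothetical and should be replaced with the ones in your cluster, and the 0302_10de label assumes PCI class 0302 with the Nvidia vendor ID 10de:
kubectl -n gpu-operator patch daemonset nvidia-device-plugin-daemonset \
  --type merge \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"feature.node.kubernetes.io/pci-0302_10de.present":"true"}}}}}'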

AWS Air-gapped with GPU: Creating an Image


Learn how to build a custom AMI for use with NKP.

About this task


This procedure describes how to use the Konvoy Image Builder (KIB) to create a Cluster API compliant Amazon
Machine Image (AMI). KIB uses variable overrides to specify base image and container images to use in your new
AMI.
AMI images contain configuration information and software to create a specific, pre-configured, operating
environment. For example, you can create an AMI image of your current computer system settings and software.
The AMI image can then be replicated and distributed, creating your computer system for other users. You can use
overrides files to customize some of the components installed on your machine image. For example, having the FIPS versions of the Kubernetes components installed by KIB.
In previous NKP releases, AMI images provided by the upstream CAPA project were used if you did not specify
an AMI. However, the upstream images are not recommended for production and may not always be available.
Therefore, NKP now requires you to specify an AMI when creating a cluster. To create an AMI, use Konvoy Image
Builder. Explore the Customize your Image topic for more options about overrides.
The prerequisites to use Konvoy Image Builder are:

Procedure

1. Download the KIB bundle for your version of NKP prefixed with konvoy-image-bundle for your OS.

2. Check the Supported Infrastructure Operating Systems

3. Check the Supported Kubernetes Version for your Provider.

4. Ensure you have a working container engine.

» Podman Version 4.0 or later for Linux. For more information, see https://podman.io/getting-started/
installation. For host requirements, see https://kind.sigs.k8s.io/docs/user/rootless/#host-requirements.
» Docker container engine version 18.09.2 or 20.10.0 installed for Linux or MacOS. For more information, see
https://docs.docker.com/get-docker/.

5. Ensure you have met the minimal set of permissions from the AWS Image Builder Book.

6. A Minimal IAM Permissions for KIB to create an Image for an AWS account using Konvoy Image Builder.

Extract the KIB Bundle

About this task


If not done previously during Konvoy Image Builder download in Prerequisites, extract the bundle and cd into the
extracted konvoy-image-bundle-$VERSION folder. Otherwise, proceed to Build the Image below.
In previous NKP releases, the distro package bundles were included in the downloaded air-gapped bundle. Currently,
that air-gapped bundle contains the following artifacts with the exception of the distro packages:

• NKP Kubernetes packages


• Python packages (provided by upstream)



• Containerd tarball

Procedure

1. Download the air-gapped bundle nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz (see Downloading NKP on page 16) and extract the tarball to a local directory.
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz && cd nkp-v2.12.0/kib

2. You will need to fetch the distro packages as well as other artifacts. By fetching the distro packages from distro
repositories, you get the latest security fixes available at machine image build time.

3. In your download location, there is a bundles directory with all the steps to create an OS package bundle for a
particular OS. To create it, run the new NKP command create-package-bundle. This builds an OS bundle
using the Kubernetes version defined in ansible/group_vars/all/defaults.yaml. Example command.
./konvoy-image create-package-bundle --os redhat-8.4 --output-directory=artifacts

Note: For FIPS, pass the flag: --fips

Note: For Red Hat Enterprise Linux (RHEL) OS, export your Red Hat Subscription Manager credentials,
RHSM_ACTIVATION_KEY and RHSM_ORG_ID. Example commands:
export RHSM_ACTIVATION_KEY="-ci"
export RHSM_ORG_ID="1232131"
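For example, a FIPS package bundle for the same OS and output directory shown above combines these flags as follows (a sketch; adjust the --os value to match your target distribution):
./konvoy-image create-package-bundle --os redhat-8.4 --fips --output-directory=artifacts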

4. Follow the instructions below to build an AMI.

Note: The konvoy-image binary and all supporting folders are also extracted. When run, konvoy-image bind
mounts the current working directory (${PWD}) into the container to be used.

Set the environment variables for AWS access. The following variables must be set using your credentials
including required IAM:
export AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY
export AWS_DEFAULT_REGION
If you have an override file to configure specific attributes of your AMI file, add it. Instructions for customizing
an override file are found on this page: Image Overrides
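As a sketch of how an override file is wired in, the konvoy-image build command accepts an override file through the --overrides flag. The file name overrides/nvidia.yaml below is only an example; substitute the override file that matches your customization:
konvoy-image build aws images/ami/rhel-86.yaml --overrides overrides/nvidia.yaml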

Build the Image

About this task


Depending on which version of NKP you are running, steps and flags will be different. To deploy in a region where
CAPI images are not provided, you need to use KIB to create your own image for the region. For a list of supported
AWS regions, refer to the Published AMI information from AWS.

Procedure
Run the konvoy-image command to build and validate the image.
konvoy-image build aws images/ami/rhel-86.yaml



a. By default, it builds in the us-west-2 region. To specify another region, set the --region flag as shown in the
command below.
konvoy-image build aws --region us-east-1 images/ami/rhel-86.yaml

Note: Ensure you have named the correct AMI image YAML file for your OS in the konvoy-image build
command.

What to do next
After KIB provisions the image successfully, the AMI ID is printed and written to the packer.pkr.hcl (Packer config)
file. This file has an artifact_id field whose value provides the AMI ID, as shown in the example below. That is the
AMI you use in the nkp create cluster command.
{
"builds": [
{
"name": "kib_image",
"builder_type": "amazon-ebs",
"build_time": 1698086886,
"files": null,
"artifact_id": "us-west-2:ami-04b8dfef8bd33a016",
"packer_run_uuid": "80f8296c-e975-d394-45f9-49ef2ccc6e05",
"custom_data": {
"containerd_version": "",
"distribution": "RHEL",
"distribution_version": "8.6",
"kubernetes_cni_version": "",
"kubernetes_version": "1.26.6"
}
}
],
"last_run_uuid": "80f8296c-e975-d394-45f9-49ef2ccc6e05"
}
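If you prefer to pull the AMI ID out of this manifest programmatically rather than copying it by hand, a small shell sketch such as the following works against the example output above (it assumes jq is installed and that the file referenced above contains this JSON; substitute the actual manifest path for your build):
export AWS_AMI_ID=$(jq -r '.builds[-1].artifact_id' packer.pkr.hcl | cut -d: -f2)
echo ${AWS_AMI_ID}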

What to do next
1. To use a custom AMI when creating your cluster, you must create that AMI using KIB first. Then perform the
export and name the custom AMI for use in the command nkp create cluster:
export AWS_AMI_ID=ami-<ami-id-here>

Note: Inside the sections for either Non-air-gapped or Air-gapped cluster creation, you will find the instructions for
how to apply custom images.

Related Information

Procedure

• To use a local registry even in a non-air-gapped environment, download and extract the complete NKP air-gapped
bundle for this release (that is, nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz; see Downloading NKP
on page 16) to load the registry.

• To view the complete set of instructions, see Load the Registry.

AWS Air-gapped with GPU: Loading the Registry


Before creating an air-gapped Kubernetes cluster, you need to load the required images in a local registry
for the Konvoy component.



About this task
The complete NKP air-gapped bundle is needed for an air-gapped environment but can also be used in
a non-air-gapped environment. The bundle contains all the NKP components needed for an air-gapped
environment installation and also to use a local registry in a non-air-gapped environment.

Note: If you do not already have a local registry set up, see the Local Registry Tools page for more information.

If you are operating in an air-gapped environment, a local container registry containing all the necessary installation
images, including the Kommander images is required. This registry must be accessible from both the bastion machine
and either the AWS EC2 instances (if deploying to AWS) or other machines that will be created for the Kubernetes
cluster.

Procedure

1. If not already done in prerequisites, download the air-gapped bundle nkp-air-gapped-


bundle_v2.12.0_linux_amd64.tar.gz , and extract the tarball to a local directory.
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz

2. The directory structure after extraction is used in subsequent steps to access files from different directories. For
example, for the bootstrap cluster, change your directory to the nkp-<version> directory, similar to the example
below, depending on your current location.
cd nkp-v2.12.0

3. Set an environment variable with your registry address and any other needed variables using this command.
export REGISTRY_URL="<https/http>://<registry-address>:<registry-port>"
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>
export REGISTRY_CA=<path to the cacert file on the bastion>
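Optionally, confirm that the registry is reachable with the values you just exported before pushing. This sketch assumes the registry exposes the standard Docker Registry v2 API and uses basic authentication:
curl --cacert ${REGISTRY_CA} -u "${REGISTRY_USERNAME}:${REGISTRY_PASSWORD}" ${REGISTRY_URL}/v2/_catalog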

4. Execute the following command to load the air-gapped image bundle into your private registry using any of the
relevant flags to apply variables above.
nkp push bundle --bundle ./container-images/konvoy-image-bundle-v2.12.0.tar --to-
registry=${REGISTRY_URL} --to-registry-username=${REGISTRY_USERNAME} --to-registry-
password=${REGISTRY_PASSWORD}

Note: It may take some time to push all the images to your image registry, depending on the performance of the
network between the machine you are running the script on and the registry.

Important: To increase Docker Hub's rate limit use your Docker Hub credentials when creating the cluster,
by setting the following flag --registry-mirror-url=https://registry-1.docker.io --
registry-mirror-username= --registry-mirror-password= on the nkp create cluster
command.



5. Load the Kommander component images to your private registry using the command.
nkp push bundle --bundle ./container-images/kommander-image-bundle-v2.12.0.tar --to-
registry=${REGISTRY_URL} --to-registry-username=${REGISTRY_USERNAME} --to-registry-
password=${REGISTRY_PASSWORD}
Optional: This step is required only if you have an Ultimate license.
For NKP Catalog Applications available with the Ultimate license, perform this image load by running the
following command to load the nkp-catalog-applications image bundle into your private registry:
nkp push bundle --bundle ./container-images/nkp-catalog-applications-image-bundle-
v2.12.0.tar --to-registry=${REGISTRY_URL} --to-registry-username=${REGISTRY_USERNAME}
--to-registry-password=${REGISTRY_PASSWORD}

AWS Air-gapped with GPU: Creating the Management Cluster


Create an AWS Management Cluster in an air-gapped environment using GPU.

About this task


Use this procedure to create a self-managed Management cluster with NKP. A self-managed cluster refers to one
in which the CAPI resources and controllers that describe and manage it are running on the same cluster they are
managing.
If you use these instructions to create a cluster on AWS using the NKP default settings without any edits to
configuration files or additional flags, your cluster is deployed on an Ubuntu 20.04 operating system image with 3
control plane nodes, and 4 worker nodes.
NKP uses AWS CSI as the default storage provider. You can use a Kubernetes CSI compatible storage solution
that is suitable for production. See the Kubernetes documentation called Changing the Default Storage Class for
more information.
In previous NKP releases, AMI images provided by the upstream CAPA project were used if you did not specify
an AMI. However, the upstream images are not recommended for production and may not always be available.
Therefore, NKP now requires you to specify an AMI when creating a cluster. To create an AMI, use Konvoy Image
Builder.

Before you begin


First you must name your cluster.

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the
name has capital letters. See Kubernetes for more naming information.

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable:


export CLUSTER_NAME=<aws-example>

3. Export variables for the existing infrastructure details.


export AWS_VPC_ID=<vpc-...>
export AWS_SUBNET_IDS=<subnet-...,subnet-...,subnet-...>
export AWS_ADDITIONAL_SECURITY_GROUPS=<sg-...>
export AWS_AMI_ID=<ami-...>
AWS_VPC_ID: the VPC ID where the cluster will be created. The VPC requires the necessary AWS VPC
endpoints to be already present.
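To review which VPC endpoints already exist in that VPC before creating the cluster, you can list them with the AWS CLI (a sketch; it assumes the AWS CLI is configured with credentials for the account that owns the VPC):
aws ec2 describe-vpc-endpoints --filters Name=vpc-id,Values=${AWS_VPC_ID}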



4. There are two approaches to supplying the ID of your AMI. Either provide the ID of the AMI or provide a way for
NKP to discover the AMI using location, format and OS information.

a. Option One - Provide the ID of your AMI.


Use the example command below leaving the existing flag that provides the AMI ID: --ami AMI_ID
b. Option Two - Provide a path for your AMI with the information required for image discovery.

• Where the AMI is published using your AWS Account ID: --ami-owner AWS_ACCOUNT_ID
• The base OS information: --ami-base-os ubuntu-20.04
• The format or string used to search for matching AMIs; ensure it references the Kubernetes version plus
the base OS name: --ami-format 'example-{{.BaseOS}}-?{{.K8sVersion}}-*'

Note:

• The AMI must be created with Konvoy Image Builder in order to use the registry mirror
feature.
export AWS_AMI_ID=<ami-...>

• (Optional) Registry Mirror - Configure your cluster to use an existing local registry as a mirror
when attempting to pull images. Below is an AWS ECR example, where REGISTRY_URL is the
address of an existing local registry accessible in the VPC. The new cluster nodes will be
configured to use it as a mirror registry when pulling images:
export REGISTRY_URL=<ecr-registry-URI>

5. Run this command to create your Kubernetes cluster by providing the image ID and using any relevant flags.
nkp create cluster aws --cluster-name=${CLUSTER_NAME} \
--additional-tags=owner=$(whoami) \
--with-aws-bootstrap-credentials=true \
--vpc-id=${AWS_VPC_ID} \
--ami=${AWS_AMI_ID} \
--subnet-ids=${AWS_SUBNET_IDS} \
--internal-load-balancer=true \
--additional-security-group-ids=${AWS_ADDITIONAL_SECURITY_GROUPS} \
--registry-mirror-url=${REGISTRY_URL} \
--registry-mirror-cacert=${REGISTRY_CA} \
--registry-mirror-username=${REGISTRY_USERNAME} \
--registry-mirror-password=${REGISTRY_PASSWORD} \
--kubernetes-version=v1.29.6+fips.0 \
--etcd-version=3.5.10+fips.0 \
--kubernetes-image-repository=docker.io/mesosphere \
--self-managed
If providing the AMI path, use these flags in place of AWS_AMI_ID:
--ami-owner AWS_ACCOUNT_ID \
--ami-base-os ubuntu-20.04 \
--ami-format 'example-{{.BaseOS}}-?{{.K8sVersion}}-*' \

» Additional cluster creation flags based on your environment:


» Optional Registry flag: --registry-mirror-url=${REGISTRY_URL}
» Flatcar OS flag to instruct the bootstrap cluster to make changes related to the installation paths: --os-hint
flatcar



» HTTP or HTTPS flags if you use proxies: --http-proxy, --https-proxy, and --no-proxy
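While the cluster is provisioning, you can watch its progress from the machine where you ran the command. The nkp describe cluster command used elsewhere in this guide summarizes the readiness of the control plane and worker machines:
nkp describe cluster -c ${CLUSTER_NAME}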

AWS Air-gapped with GPU: Install Kommander


This section provides installation instructions for the Kommander component of NKP in an air-gapped AWS
environment with GPU.

About this task


Once you have installed the Konvoy component of NKP, you will continue with the installation of the
Kommander component that will bring up the UI dashboard.

Tip: Tips and Recommendations

• The --kubeconfig=${CLUSTER_NAME}.conf flag ensures that you install Kommander on the correct
cluster. For alternatives, see Provide Context for Commands with a kubeconfig File.
• Applications can take longer to deploy, and time out the installation. Add the --wait-timeout <time
to wait> flag and specify a period of time (for example, 1h) to allocate more time to the deployment
of applications.
• If the Kommander installation fails, or you wish to reconfigure applications, rerun the install
command to retry.

Prerequisites:

• Ensure you have reviewed all Prerequisites for Install.


• Ensure you have a default StorageClass.
• Note the name of the cluster where you want to install Kommander. If you do not know the cluster name, use
kubectl get clusters -A to display and find it.

Create your Kommander Installation Configuration File

Procedure

1. Set the environment variable for your cluster.


export CLUSTER_NAME=<your-management-cluster-name>

2. Copy the kubeconfig file of your Management cluster to your local directory.
nkp get kubeconfig -c ${CLUSTER_NAME} >> ${CLUSTER_NAME}.conf

3. Create a configuration file for the deployment.


nkp install kommander --init > kommander.yaml

4. If required: Customize your kommander.yaml.

a. See Kommander Customizations page for customization options. Some options include Custom Domains and
Certificates, HTTP proxy and External Load Balancer.

5. Required only if your cluster uses a custom AWS VPC and needs an internal load balancer: set the traefik
annotation to create an internal-facing ELB.
apps:
traefik:
enabled: true
values: |

service:
annotations:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"

6. To enable NKP Catalog Applications and install Kommander in the same kommander.yaml from the previous
section, add these values (if you are enabling NKP Catalog Apps) for NKP-catalog-applications.
apiVersion: config.kommander.mesosphere.io/v1alpha1
kind: Installation
catalog:
repositories:
- name: NKP-catalog-applications
labels:
kommander.d2iq.io/project-default-catalog-repository: "true"
kommander.d2iq.io/workspace-default-catalog-repository: "true"
kommander.d2iq.io/gitapps-gitrepository-type: "nkp"
gitRepositorySpec:
url: https://github.com/mesosphere/nkp-catalog-applications
ref:
tag: v2.12.0

7. Use the customized kommander.yaml to install NKP.


nkp install kommander --installer-config kommander.yaml --kubeconfig=
${CLUSTER_NAME}.conf

Note: If you only want to enable catalog applications to an existing configuration, add these values to an existing
installer configuration file to maintain your Management cluster’s settings.
If you want to enable NKP Catalog applications after installing NKP, see the topic Configuring NKP
Catalog Applications after Installing NKP.

AWS Air-gapped with GPU: Verifying your Installation and UI Log in


Verify Kommander Install and Log in to the Dashboard UI

About this task


Verify Kommander Installation

Note: If the Kommander installation fails or you wish to reconfigure applications, you can rerun the install command
to retry the installation.

Procedure
You can check the status of the installation using the following command.
kubectl -n kommander wait --for condition=Ready helmreleases --all --timeout 15m

Note: If you prefer the CLI to not wait for all applications to become ready, you can set the --wait=false flag.

The command first waits for each of the Helm charts to reach their Ready condition, eventually resulting in output
resembling the following:
helmrelease.helm.toolkit.fluxcd.io/centralized-grafana condition met
helmrelease.helm.toolkit.fluxcd.io/dex condition met
helmrelease.helm.toolkit.fluxcd.io/dex-k8s-authenticator condition met
helmrelease.helm.toolkit.fluxcd.io/fluent-bit condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-logging condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-loki condition met
helmrelease.helm.toolkit.fluxcd.io/karma condition met
helmrelease.helm.toolkit.fluxcd.io/kommander condition met

helmrelease.helm.toolkit.fluxcd.io/kommander-appmanagement condition met
helmrelease.helm.toolkit.fluxcd.io/kube-prometheus-stack condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/kubefed condition met
helmrelease.helm.toolkit.fluxcd.io/kubernetes-dashboard condition met
helmrelease.helm.toolkit.fluxcd.io/kubetunnel condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator-logging condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-adapter condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/reloader condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph-cluster condition met
helmrelease.helm.toolkit.fluxcd.io/thanos condition met
helmrelease.helm.toolkit.fluxcd.io/traefik condition met
helmrelease.helm.toolkit.fluxcd.io/traefik-forward-auth-mgmt condition met
helmrelease.helm.toolkit.fluxcd.io/velero condition met

Failed HelmReleases

Procedure
If an application fails to deploy, check the status of a HelmRelease using the command:
kubectl -n kommander get helmrelease <HELMRELEASE_NAME>

If you find any HelmReleases in a “broken” release state, such as “exhausted” or “another rollback/release in
progress”, trigger a reconciliation of the HelmRelease using the commands:
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op":
 "replace", "path": "/spec/suspend", "value": true}]'
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op":
 "replace", "path": "/spec/suspend", "value": false}]'
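To understand why a HelmRelease is failing before suspending and resuming it, describing the resource shows its conditions and recent events (a general kubectl sketch):
kubectl -n kommander describe helmrelease <HELMRELEASE_NAME>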

Log in to the UI

Procedure

1. By default, you can log in to the UI in Kommander with the credentials given using this command.
nkp open dashboard --kubeconfig=${CLUSTER_NAME}.conf

2. Retrieve your credentials at any time if necessary.


kubectl -n kommander get secret NKP-credentials -o go-template='Username:
{{.data.username|base64decode}}{{ "\n"}}Password: {{.data.password|base64decode}}
{{ "\n"}}'



3. Retrieve the URL used for accessing the UI with the following.
kubectl -n kommander get svc kommander-traefik -o go-template='https://{{with
index .status.loadBalancer.ingress 0}}{{or .hostname .ip}}{{end}}/NKP/kommander/
dashboard{{ "\n"}}'
Only use these static credentials to access the UI for configuring an external identity provider. Treat them as
backup credentials rather than using them for normal access.

a. Rotate the password using the following command.


nkp experimental rotate dashboard-password
The output displays the new password:
Password: kqZ31lMBSCLcBjUKVwLJMQL2PxalipIzZw5Pjyw09wDqjWV3dz2wPSSBYi09JGJp

Dashboard UI Functions

Procedure

After installing the Konvoy component and building a cluster, as well as successfully installing Kommander and
logging in to the UI, you are ready to customize configurations using the Day 2 Cluster Operations Management
section of the documentation. The majority of this customization, such as attaching clusters and deploying
applications, takes place in the NKP dashboard or UI. The Day 2 section allows you to manage cluster operations
and their application workloads to optimize your organization’s productivity.

• Continue to the NKP Dashboard.

AWS Air-gapped GPU: Creating Managed Clusters Using the NKP CLI
This topic explains how to continue using the CLI to create managed clusters rather than switching to the
UI dashboard.

About this task


After initial cluster creation, you can create additional clusters from the CLI. In a previous step,
the new cluster was created as self-managed, which allows it to be a Management cluster or a standalone
cluster. Subsequent new clusters are not self-managed, as they will likely be Managed or Attached clusters under this
Management cluster.

Note: When creating Managed clusters, you do not need to create and move CAPI objects, or install the
Kommander component. Those tasks are only done on Management clusters!
Your new managed cluster needs to be part of a workspace under a management cluster. To make the new
managed cluster a part of a Workspace, set that workspace environment variable.

Procedure

1. If you have an existing Workspace name, run this command to find the name.
kubectl get workspace -A

2. When you have the Workspace name, set the WORKSPACE_NAMESPACE environment variable.
export WORKSPACE_NAMESPACE=<workspace_namespace>

Note: If you need to create a new Workspace, follow the instructions to Create a New Workspace



Name Your Cluster

About this task


Each cluster must have a unique name.
First you must name your cluster. Then you run the command to deploy it.

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the
name has capital letters. See Kubernetes for more naming information.
When specifying the cluster-name, you must use the same cluster-name as used when defining your
inventory objects.

Perform both steps to name the cluster:

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable.


export MANAGED_CLUSTER_NAME=<aws-additional>

Create a Kubernetes Cluster

About this task


The below instructions tell you how to create a cluster and have it automatically attach to the workspace you set
above. If you do not set a workspace, it will be created in the default workspace, and you need to take additional
steps to attach to a workspace later. For instructions on how to do this, see Attach a Kubernetes Cluster.

Procedure

1. Execute this command to create your additional Kubernetes cluster using any relevant flags. This will create a new
non-self-managed cluster that can be managed by management cluster you created in the previous section.
nkp create cluster aws --cluster-name=${MANAGED_CLUSTER_NAME} \
--additional-tags=owner=$(whoami) \
--namespace ${WORKSPACE_NAMESPACE} \
--with-aws-bootstrap-credentials=true \
--vpc-id=${AWS_VPC_ID} \
--ami=${AWS_AMI_ID} \
--subnet-ids=${AWS_SUBNET_IDS} \
--internal-load-balancer=true \
--additional-security-group-ids=${AWS_ADDITIONAL_SECURITY_GROUPS} \
--registry-mirror-url=${REGISTRY_URL} \
--registry-mirror-cacert=${REGISTRY_CA} \
--registry-mirror-username=${REGISTRY_USERNAME} \
--registry-mirror-password=${REGISTRY_PASSWORD} \
--kubeconfig=<management-cluster-kubeconfig-path>

Tip: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-
proxy, and --no-proxy and their related values in this command for it to be successful. For more information,
see Clusters with HTTP or HTTPS Proxy on page 647.



2. Create the node pool after cluster creation.
nkp create nodepool aws -c ${CLUSTER_NAME} \
--instance-type p2.xlarge \
--ami-id=${AMI_ID_FROM_KIB} \
--replicas=1 ${NODEPOOL_NAME} \
--kubeconfig=${CLUSTER_NAME}.conf
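To confirm that the managed cluster and its node pool machines are being created, you can inspect the CAPI objects from the management cluster. This is a sketch; the exact resource names depend on your cluster and workspace namespace:
kubectl get machinedeployments -n ${WORKSPACE_NAMESPACE} --kubeconfig=<management-cluster-kubeconfig-path>
kubectl get machines -n ${WORKSPACE_NAMESPACE} --kubeconfig=<management-cluster-kubeconfig-path>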

Manually Attach an NKP CLI Cluster to the Management Cluster

Procedure

When you create a Managed Cluster with the NKP CLI, it attaches automatically to the Management Cluster after a
few moments. However, if you do not set a workspace, the attached cluster will be created in the default workspace.
To ensure that the attached cluster is created in your desired workspace namespace, follow these instructions:

1. Confirm you have your MANAGED_CLUSTER_NAME variable set with the following command.
echo ${MANAGED_CLUSTER_NAME}

2. Retrieve your kubeconfig from the cluster you have created without setting a workspace.
nkp get kubeconfig --cluster-name ${MANAGED_CLUSTER_NAME} >
${MANAGED_CLUSTER_NAME}.conf

3. Note: This is only necessary if you never set the workspace of your cluster upon creation.

You can now either attach it in the UI, as described in the earlier section on attaching a cluster to a workspace
through the UI, or attach your cluster to the workspace you want using the CLI.

4. Retrieve the workspace where you want to attach the cluster.


kubectl get workspaces -A

5. Set the WORKSPACE_NAMESPACE environment variable.


export WORKSPACE_NAMESPACE=<workspace-namespace>

6. You need to create a secret in the desired workspace before attaching the cluster to that workspace. Retrieve the
kubeconfig secret value of your cluster.
kubectl -n default get secret ${MANAGED_CLUSTER_NAME}-kubeconfig -o go-
template='{{.data.value}}{{ "\n"}}'

7. The previous command returns a lengthy value. Copy this entire string into a secret, using the template below as a reference.
Create a new attached-cluster-kubeconfig.yaml file.
apiVersion: v1
kind: Secret
metadata:
name: <your-managed-cluster-name>-kubeconfig
labels:
cluster.x-k8s.io/cluster-name: <your-managed-cluster-name>
type: cluster.x-k8s.io/secret
data:
value: <value-you-copied-from-secret-above>



8. Create this secret in the desired workspace.
kubectl apply -f attached-cluster-kubeconfig.yaml --namespace
${WORKSPACE_NAMESPACE}

9. Create this kommandercluster object to attach the cluster to the workspace.


cat << EOF | kubectl apply -f -
apiVersion: kommander.mesosphere.io/v1beta1
kind: KommanderCluster
metadata:
name: ${MANAGED_CLUSTER_NAME}
namespace: ${WORKSPACE_NAMESPACE}
spec:
kubeconfigRef:
name: ${MANAGED_CLUSTER_NAME}-kubeconfig
clusterRef:
capiCluster:
name: ${MANAGED_CLUSTER_NAME}
EOF

10. You can now view this cluster in your Workspace in the UI and you can confirm its status by running the below
command. It may take a few minutes to reach "Joined" status.
kubectl get kommanderclusters -A
If you have several Pro clusters and want to turn one of them into a Managed cluster to be centrally administered
by a Management cluster, refer to Platform Expansion.

EKS Installation Options


For an environment that is on the EKS infrastructure, installation options based on your environment are
provided for you in this location.
Remember, there are always more options for custom YAML in the Custom Installation and Additional
Infrastructure Tools section, but this will get you operational in the most common scenarios.
If not already done, see the documentation for:

• Resource Requirements on page 38


• Installing NKP on page 47
• Prerequisites for Installation on page 44

Note: An EKS cluster cannot be a Management or Pro cluster. To install NKP on your EKS cluster, first, ensure
you have a Management cluster with NKP and the Kommander component installed that handles the life cycle of your
EKS cluster.

In order to install Kommander, you need to have CAPI components, cert-manager, and so on, on a self-managed cluster.
The CAPI components allow you to control the life cycle of the cluster and of other clusters. However, because EKS
is semi-managed by AWS, EKS clusters are under AWS control and do not have those components. Therefore,
Kommander cannot be installed on them.

Section Contents

EKS Installation
This installation provides instructions to install NKP in an AWS non-air-gapped environment.



Remember, there are always more options for custom YAML in the Custom Installation and Additional
Infrastructure Tools section, but this will get you operating with basic features.
If not already done, see the documentation for:

• Resource Requirements on page 38


• Installing NKP on page 47
• Prerequisites for Installation on page 44

Note: Ensure that the KUBECONFIG environment variable is set to the Management cluster by running export
KUBECONFIG=<Management_cluster_kubeconfig>.conf.

AWS Prerequisites
Before you begin using Konvoy with AWS, you must have:
1. A Management cluster with the Kommander component installed.
2. A valid AWS account with credentials configured that can manage CloudFormation Stacks, IAM
Policies, and IAM Roles.
3. The AWS CLI utility installed.
4. aws-iam-authenticator installed. This binary is used to access your cluster using kubectl.

Note: In order to install Kommander, you need to have CAPI components, cert-manager, etc on a self-managed cluster.
The CAPI components mean you can control the life cycle of the cluster, and other clusters. However, because EKS
is semi-managed by AWS, the EKS clusters are under AWS control and don’t have those components. Therefore,
Kommander will not be installed and these clusters will be attached to the management cluster.

If you are using AWS ECR as your local private registry, more information is available on the Registry Mirror
Tools page.
To deploy a cluster with a custom image in a region where CAPI images are not provided, you need to use Konvoy
Image Builder to create your image for the region.

EKS: Minimal User Permission for Cluster Creation


The following is a CloudFormation stack that adds a policy named eks-bootstrapper, granting EKS cluster management
permissions, to the NKP-bootstrapper-role created by the CloudFormation stack for AWS in the Minimal Permissions and
Role to Create Cluster section. Consult the Leveraging the Role section for an example of how to use this role and how
a system administrator might expose these permissions.

EKS CloudFormation Stack:

Note: If your role is not named NKP-bootstrapper-role, change the parameter on line 6 of the file.

AWSTemplateFormatVersion: 2010-09-09
Parameters:
existingBootstrapperRole:
Type: CommaDelimitedList
Description: 'Name of existing minimal role you want to add to add EKS cluster
management permissions to'
Default: NKP-bootstrapper-role
Resources:
EKSMinimumPermissions:
Properties:
Description: Minimal user policy to manage eks clusters
ManagedPolicyName: eks-bootstrapper
PolicyDocument:
Statement:
- Action:

- 'ssm:GetParameter'
Effect: Allow
Resource:
- 'arn:*:ssm:*:*:parameter/aws/service/eks/optimized-ami/*'
- Action:
- 'iam:CreateServiceLinkedRole'
Condition:
StringLike:
'iam:AWSServiceName': eks.amazonaws.com
Effect: Allow
Resource:
- >-
arn:*:iam::*:role/aws-service-role/eks.amazonaws.com/
AWSServiceRoleForAmazonEKS
- Action:
- 'iam:CreateServiceLinkedRole'
Condition:
StringLike:
'iam:AWSServiceName': eks-nodegroup.amazonaws.com
Effect: Allow
Resource:
- >-
arn:*:iam::*:role/aws-service-role/eks-nodegroup.amazonaws.com/
AWSServiceRoleForAmazonEKSNodegroup
- Action:
- 'iam:CreateServiceLinkedRole'
Condition:
StringLike:
'iam:AWSServiceName': eks-fargate.amazonaws.com
Effect: Allow
Resource:
- >-
arn:aws:iam::*:role/aws-service-role/eks-fargate-pods.amazonaws.com/
AWSServiceRoleForAmazonEKSForFargate
- Action:
- 'iam:GetRole'
- 'iam:ListAttachedRolePolicies'
Effect: Allow
Resource:
- 'arn:*:iam::*:role/*'
- Action:
- 'iam:GetPolicy'
Effect: Allow
Resource:
- 'arn:aws:iam::aws:policy/AmazonEKSClusterPolicy'
- Action:
- 'eks:DescribeCluster'
- 'eks:ListClusters'
- 'eks:CreateCluster'
- 'eks:TagResource'
- 'eks:UpdateClusterVersion'
- 'eks:DeleteCluster'
- 'eks:UpdateClusterConfig'
- 'eks:UntagResource'
- 'eks:UpdateNodegroupVersion'
- 'eks:DescribeNodegroup'
- 'eks:DeleteNodegroup'
- 'eks:UpdateNodegroupConfig'
- 'eks:CreateNodegroup'
- 'eks:AssociateEncryptionConfig'
- 'eks:ListIdentityProviderConfigs'
- 'eks:AssociateIdentityProviderConfig'

- 'eks:DescribeIdentityProviderConfig'
- 'eks:DisassociateIdentityProviderConfig'
Effect: Allow
Resource:
- 'arn:*:eks:*:*:cluster/*'
- 'arn:*:eks:*:*:nodegroup/*/*/*'
- Action:
- 'ec2:AssociateVpcCidrBlock'
- 'ec2:DisassociateVpcCidrBlock'
- 'eks:ListAddons'
- 'eks:CreateAddon'
- 'eks:DescribeAddonVersions'
- 'eks:DescribeAddon'
- 'eks:DeleteAddon'
- 'eks:UpdateAddon'
- 'eks:TagResource'
- 'eks:DescribeFargateProfile'
- 'eks:CreateFargateProfile'
- 'eks:DeleteFargateProfile'
Effect: Allow
Resource:
- '*'
- Action:
- 'iam:PassRole'
Condition:
StringEquals:
'iam:PassedToService': eks.amazonaws.com
Effect: Allow
Resource:
- '*'
- Action:
- 'kms:CreateGrant'
- 'kms:DescribeKey'
Condition:
'ForAnyValue:StringLike':
'kms:ResourceAliases': alias/cluster-api-provider-aws-*
Effect: Allow
Resource:
- '*'
Version: 2012-10-17
Roles: !Ref existingBootstrapperRole
Type: 'AWS::IAM::ManagedPolicy'
To create the resources in the CloudFormation stack, copy the contents above into a file. Before running the
command to create the AWS CloudFormation stack, replace MYFILENAME.yaml and MYSTACKNAME with the intended
values for your system:
aws cloudformation create-stack --template-body=file://MYFILENAME.yaml --stack-
name=MYSTACKNAME --capabilities CAPABILITY_NAMED_IAM
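Stack creation is asynchronous. If you want the shell to block until the policy is in place, or to confirm the result afterwards, the AWS CLI provides the following (a sketch using the same MYSTACKNAME placeholder):
aws cloudformation wait stack-create-complete --stack-name=MYSTACKNAME
aws cloudformation describe-stacks --stack-name=MYSTACKNAME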

EKS: Cluster IAM Policies and Roles


This section guides an NKP user in creating the IAM Policies and Instance Profiles that govern who has access to
the cluster. The IAM Role is used by the cluster’s control plane and worker nodes through the provided AWS
CloudFormation Stack specific to EKS. This CloudFormation Stack has additional permissions that are used to
delegate access roles for other users.

Prerequisites from AWS:


Before you begin, ensure you have met the AWS prerequisites:



• The user you delegate from your role must have a minimum set of permissions, see User Roles and Instance
Profiles page for AWS.
• Create the Cluster IAM Policies in your AWS account.

EKS IAM Artifacts


Policies

• controllers-eks.cluster-api-provider-aws.sigs.k8s.io - enumerates the Actions required by the


workload cluster to create and modify EKS clusters in the user's AWS Account. It is attached to the existing
control-plane.cluster-api-provider-aws.sigs.k8s.io role

• eks-nodes.cluster-api-provider-aws.sigs.k8s.io - enumerates the Actions required by the


EKS workload cluster’s worker machines. It is attached to the existing nodes.cluster-api-provider-
aws.sigs.k8s.io role.

Roles

• eks-controlplane.cluster-api-provider-aws.sigs.k8s.io - is the Role associated with EKS cluster


control planes
NOTE: control-plane.cluster-api-provider-aws.sigs.k8s.io and nodes.cluster-api-provider-
aws.sigs.k8s.io roles were created by Cluster IAM Policies and Roles in AWS.

Below is a CloudFormation stack that includes the IAM policies and roles required to set up EKS clusters.

Note: To create the resources in the CloudFormation stack, copy the contents below into a file and execute the
following command after replacing MYFILENAME.yaml and MYSTACKNAME with the intended values:
aws cloudformation create-stack --template-body=file://MYFILENAME.yaml --stack-
name=MYSTACKNAME --capabilities CAPABILITY_NAMED_IAM

AWSTemplateFormatVersion: 2010-09-09
Parameters:
existingControlPlaneRole:
Type: CommaDelimitedList
Description: 'Names of existing Control Plane Role you want to add to the newly
created EKS Managed Policy for AWS cluster API controllers'
Default: control-plane.cluster-api-provider-aws.sigs.k8s.io
existingNodeRole:
Type: CommaDelimitedList
Description: 'ARN of the Nodes Managed Policy to add to the role for nodes'
Default: nodes.cluster-api-provider-aws.sigs.k8s.io
Resources:
AWSIAMManagedPolicyControllersEKS:
Properties:
Description: For the Kubernetes Cluster API Provider AWS Controllers
ManagedPolicyName: controllers-eks.cluster-api-provider-aws.sigs.k8s.io
PolicyDocument:
Statement:
- Action:
- 'ssm:GetParameter'
Effect: Allow
Resource:
- 'arn:*:ssm:*:*:parameter/aws/service/eks/optimized-ami/*'
- Action:
- 'iam:CreateServiceLinkedRole'
Condition:
StringLike:

'iam:AWSServiceName': eks.amazonaws.com
Effect: Allow
Resource:
- >-
arn:*:iam::*:role/aws-service-role/eks.amazonaws.com/
AWSServiceRoleForAmazonEKS
- Action:
- 'iam:CreateServiceLinkedRole'
Condition:
StringLike:
'iam:AWSServiceName': eks-nodegroup.amazonaws.com
Effect: Allow
Resource:
- >-
arn:*:iam::*:role/aws-service-role/eks-nodegroup.amazonaws.com/
AWSServiceRoleForAmazonEKSNodegroup
- Action:
- 'iam:CreateServiceLinkedRole'
Condition:
StringLike:
'iam:AWSServiceName': eks-fargate.amazonaws.com
Effect: Allow
Resource:
- >-
arn:aws:iam::*:role/aws-service-role/eks-fargate-pods.amazonaws.com/
AWSServiceRoleForAmazonEKSForFargate
- Action:
- 'iam:GetRole'
- 'iam:ListAttachedRolePolicies'
Effect: Allow
Resource:
- 'arn:*:iam::*:role/*'
- Action:
- 'iam:GetPolicy'
Effect: Allow
Resource:
- 'arn:aws:iam::aws:policy/AmazonEKSClusterPolicy'
- Action:
- 'eks:DescribeCluster'
- 'eks:ListClusters'
- 'eks:CreateCluster'
- 'eks:TagResource'
- 'eks:UpdateClusterVersion'
- 'eks:DeleteCluster'
- 'eks:UpdateClusterConfig'
- 'eks:UntagResource'
- 'eks:UpdateNodegroupVersion'
- 'eks:DescribeNodegroup'
- 'eks:DeleteNodegroup'
- 'eks:UpdateNodegroupConfig'
- 'eks:CreateNodegroup'
- 'eks:AssociateEncryptionConfig'
- 'eks:ListIdentityProviderConfigs'
- 'eks:AssociateIdentityProviderConfig'
- 'eks:DescribeIdentityProviderConfig'
- 'eks:DisassociateIdentityProviderConfig'
Effect: Allow
Resource:
- 'arn:*:eks:*:*:cluster/*'
- 'arn:*:eks:*:*:nodegroup/*/*/*'
- Action:
- 'ec2:AssociateVpcCidrBlock'

- 'ec2:DisassociateVpcCidrBlock'
- 'eks:ListAddons'
- 'eks:CreateAddon'
- 'eks:DescribeAddonVersions'
- 'eks:DescribeAddon'
- 'eks:DeleteAddon'
- 'eks:UpdateAddon'
- 'eks:TagResource'
- 'eks:DescribeFargateProfile'
- 'eks:CreateFargateProfile'
- 'eks:DeleteFargateProfile'
Effect: Allow
Resource:
- '*'
- Action:
- 'iam:PassRole'
Condition:
StringEquals:
'iam:PassedToService': eks.amazonaws.com
Effect: Allow
Resource:
- '*'
- Action:
- 'kms:CreateGrant'
- 'kms:DescribeKey'
Condition:
'ForAnyValue:StringLike':
'kms:ResourceAliases': alias/cluster-api-provider-aws-*
Effect: Allow
Resource:
- '*'
Version: 2012-10-17
Roles: !Ref existingControlPlaneRole
Type: 'AWS::IAM::ManagedPolicy'
AWSIAMManagedEKSNodesPolicy:
Properties:
Description: Additional Policies to nodes role to work for EKS
ManagedPolicyName: eks-nodes.cluster-api-provider-aws.sigs.k8s.io
PolicyDocument:
Statement:
- Action:
- "ec2:AssignPrivateIpAddresses"
- "ec2:AttachNetworkInterface"
- "ec2:CreateNetworkInterface"
- "ec2:DeleteNetworkInterface"
- "ec2:DescribeInstances"
- "ec2:DescribeTags"
- "ec2:DescribeNetworkInterfaces"
- "ec2:DescribeInstanceTypes"
- "ec2:DetachNetworkInterface"
- "ec2:ModifyNetworkInterfaceAttribute"
- "ec2:UnassignPrivateIpAddresses"
Effect: Allow
Resource:
- '*'
- Action:
- ec2:CreateTags
Effect: Allow
Resource:
- arn:aws:ec2:*:*:network-interface/*
- Action:
- "ec2:DescribeInstances"

- "ec2:DescribeInstanceTypes"
- "ec2:DescribeRouteTables"
- "ec2:DescribeSecurityGroups"
- "ec2:DescribeSubnets"
- "ec2:DescribeVolumes"
- "ec2:DescribeVolumesModifications"
- "ec2:DescribeVpcs"
- "eks:DescribeCluster"
Effect: Allow
Resource:
- '*'
Version: 2012-10-17
Roles: !Ref existingNodeRole
Type: 'AWS::IAM::ManagedPolicy'
AWSIAMRoleEKSControlPlane:
Properties:
AssumeRolePolicyDocument:
Statement:
- Action:
- 'sts:AssumeRole'
Effect: Allow
Principal:
Service:
- eks.amazonaws.com
Version: 2012-10-17
ManagedPolicyArns:
- 'arn:aws:iam::aws:policy/AmazonEKSClusterPolicy'
RoleName: eks-controlplane.cluster-api-provider-aws.sigs.k8s.io
Type: 'AWS::IAM::Role'
Add EKS CSI Policy
AWS CloudFormation does not support attaching an existing IAM Policy to an existing IAM Role. Add the necessary
IAM policy to your worker instance profile using the aws CLI:
aws iam attach-role-policy --role-name nodes.cluster-api-provider-aws.sigs.k8s.io \
 --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy
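You can verify that the policy is now attached to the node role with a follow-up AWS CLI call (a sketch):
aws iam list-attached-role-policies --role-name nodes.cluster-api-provider-aws.sigs.k8s.io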
In other infrastructures, you create an image next. However, AWS EKS best practices discourage building custom
images. The Amazon EKS Optimized AMI is the preferred way to deploy containers for EKS. If the image is
customized, it breaks some of the autoscaling and security capabilities of EKS. Therefore, you will proceed to
creating your EKS cluster.

EKS: Create an EKS Cluster


EKS clusters can be created from the UI or CLI, but require permissions first.

About this task


When creating a Managed cluster on your EKS infrastructure, you can choose from multiple configuration types.
The steps for creating and accessing your cluster are listed below after setting minimal permissions and creating
IAM Policies and Roles. For more information about custom EKS configurations, refer to the EKS Infrastructure
under Custom Installation and Additional Infrastructure Tools.
Access your Cluster
Use the previously installed aws-iam-authenticator to access your cluster using kubectl. Amazon EKS uses IAM to
provide authentication to your Kubernetes cluster.



Procedure

1. Export the AWS region where you want to deploy the cluster.
export AWS_REGION=us-west-2

2. Export the AWS profile with the credentials you want to use to create the Kubernetes cluster.
export AWS_PROFILE=<profile>

3. Name your cluster


Give your cluster a unique name suitable for your environment. In AWS it is critical that the name is unique, as no
two clusters in the same AWS account can have the same name.

4. Set the environment variable.


export CLUSTER_NAME=<aws-example>

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if
the name has capital letters. See Kubernetes for more naming information.

Known Limitations

About this task


Be aware of these limitations in the current release of Konvoy:

Procedure

• The Konvoy version used to create a workload cluster must match the Konvoy version used to delete a workload
cluster.
• EKS clusters cannot be Self-managed.
• Konvoy supports deploying one workload cluster. Konvoy generates a set of objects for one Node Pool.
• Konvoy does not validate edits to cluster objects.

Create an EKS Cluster from the CLI


Create an EKS cluster using the cli rather than in the UI.

About this task


If you prefer to work in the shell, you can continue by creating a new cluster following these steps. If you prefer to
log in to the NKP UI, you can create a new cluster from there using the steps on this page: Create an EKS Cluster
from the NKP UI

Procedure

1. Set the environment variable to the name you assigned this cluster.
export CLUSTER_NAME=eks-example

2. Make sure your AWS credentials are up to date. Refreshing the credentials is only necessary if you
are using Access Keys. For more information, see Leverage the NKP Create Cluster Role on page 750.
Otherwise, if you are using role-based authentication on a bastion host, proceed to step 3.
nkp update bootstrap credentials aws



3. Create the cluster.
nkp create cluster eks \
--cluster-name=${CLUSTER_NAME} \
--additional-tags=owner=$(whoami)

4. Inspect or edit the cluster objects. Familiarize yourself with Cluster API before editing the cluster objects as edits
can prevent the cluster from deploying successfully. See Customizing CAPI Clusters.

5. Wait for the cluster control-plane to be ready.


kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --
timeout=20m
The READY status will become True after the cluster control-plane becomes ready in one of the following steps.

6. After the objects are created on the API server, the Cluster API controllers reconcile them. They create
infrastructure and machines. As they progress, they update the Status of each object. Konvoy provides a command
to describe the current status of the cluster.
nkp describe cluster -c ${CLUSTER_NAME}
NAME READY SEVERITY
REASON SINCE MESSAGE
Cluster/eks-example True
10m
##ControlPlane - AWSManagedControlPlane/eks-example-control-plane True
10m
##Workers
##MachineDeployment/eks-example-md-0 True
26s
##Machine/eks-example-md-0-78fcd7c7b7-66ntt True
84s
##Machine/eks-example-md-0-78fcd7c7b7-b9qmc True
84s
##Machine/eks-example-md-0-78fcd7c7b7-v5vfq True
84s
##Machine/eks-example-md-0-78fcd7c7b7-zl6m2 True
84s

7. As they progress, the controllers also create Events. List the Events using this command.
kubectl get events | grep ${CLUSTER_NAME}
For brevity, the example uses grep. It is also possible to use separate commands to get Events for specific objects.
For example, kubectl get events --field-selector involvedObject.kind="AWSCluster" and
kubectl get events --field-selector involvedObject.kind="AWSMachine".
46m Normal SuccessfulCreateVPC
awsmanagedcontrolplane/eks-example-control-plane Created new managed VPC
"vpc-05e775702092abf09"
46m Normal SuccessfulSetVPCAttributes
awsmanagedcontrolplane/eks-example-control-plane Set managed VPC attributes for
"vpc-05e775702092abf09"
46m Normal SuccessfulCreateSubnet
awsmanagedcontrolplane/eks-example-control-plane Created new managed Subnet
"subnet-0419dd3f2dfd95ff8"
46m Normal SuccessfulModifySubnetAttributes
awsmanagedcontrolplane/eks-example-control-plane Modified managed Subnet
"subnet-0419dd3f2dfd95ff8" attributes
46m Normal SuccessfulCreateSubnet
awsmanagedcontrolplane/eks-example-control-plane Created new managed Subnet
"subnet-0e724b128e3113e47"

46m Normal SuccessfulCreateSubnet
awsmanagedcontrolplane/eks-example-control-plane Created new managed Subnet
"subnet-06b2b31ea6a8d3962"
46m Normal SuccessfulModifySubnetAttributes
awsmanagedcontrolplane/eks-example-control-plane Modified managed Subnet
"subnet-06b2b31ea6a8d3962" attributes
46m Normal SuccessfulCreateSubnet
awsmanagedcontrolplane/eks-example-control-plane Created new managed Subnet
"subnet-0626ce238be32bf98"
46m Normal SuccessfulCreateSubnet
awsmanagedcontrolplane/eks-example-control-plane Created new managed Subnet
"subnet-0f53cf59f83177800"
46m Normal SuccessfulModifySubnetAttributes
awsmanagedcontrolplane/eks-example-control-plane Modified managed Subnet
"subnet-0f53cf59f83177800" attributes
46m Normal SuccessfulCreateSubnet
awsmanagedcontrolplane/eks-example-control-plane Created new managed Subnet
"subnet-0878478f6bbf153b2"
46m Normal SuccessfulCreateInternetGateway
awsmanagedcontrolplane/eks-example-control-plane Created new managed Internet
Gateway "igw-09fb52653949d4579"
46m Normal SuccessfulAttachInternetGateway
awsmanagedcontrolplane/eks-example-control-plane Internet Gateway
"igw-09fb52653949d4579" attached to VPC "vpc-05e775702092abf09"
46m Normal SuccessfulCreateNATGateway
awsmanagedcontrolplane/eks-example-control-plane Created new NAT Gateway
"nat-06356aac28079952d"
46m Normal SuccessfulCreateNATGateway
awsmanagedcontrolplane/eks-example-control-plane Created new NAT Gateway
"nat-0429d1cd9d956bf35"
46m Normal SuccessfulCreateNATGateway
awsmanagedcontrolplane/eks-example-control-plane Created new NAT Gateway
"nat-059246bcc9d4e88e7"
46m Normal SuccessfulCreateRouteTable
awsmanagedcontrolplane/eks-example-control-plane Created managed RouteTable
"rtb-01689c719c484fd3c"
46m Normal SuccessfulCreateRoute
awsmanagedcontrolplane/eks-example-control-plane Created route {...
46m Normal SuccessfulAssociateRouteTable
awsmanagedcontrolplane/eks-example-control-plane Associated managed RouteTable
"rtb-01689c719c484fd3c" with subnet "subnet-0419dd3f2dfd95ff8"
46m Normal SuccessfulCreateRouteTable
awsmanagedcontrolplane/eks-example-control-plane Created managed RouteTable
"rtb-065af81b9752eeb69"
46m Normal SuccessfulCreateRoute
awsmanagedcontrolplane/eks-example-control-plane Created route {...
46m Normal SuccessfulAssociateRouteTable
awsmanagedcontrolplane/eks-example-control-plane Associated managed RouteTable
"rtb-065af81b9752eeb69" with subnet "subnet-0e724b128e3113e47"
46m Normal SuccessfulCreateRouteTable
awsmanagedcontrolplane/eks-example-control-plane Created managed RouteTable
"rtb-03eeff810a89afc98"
46m Normal SuccessfulCreateRoute
awsmanagedcontrolplane/eks-example-control-plane Created route {...
46m Normal SuccessfulAssociateRouteTable
awsmanagedcontrolplane/eks-example-control-plane Associated managed RouteTable
"rtb-03eeff810a89afc98" with subnet "subnet-06b2b31ea6a8d3962"
46m Normal SuccessfulCreateRouteTable
awsmanagedcontrolplane/eks-example-control-plane Created managed RouteTable
"rtb-0fab36f8751fdee73"
46m Normal SuccessfulCreateRoute
awsmanagedcontrolplane/eks-example-control-plane Created route {...

46m Normal SuccessfulAssociateRouteTable
awsmanagedcontrolplane/eks-example-control-plane Associated managed RouteTable
"rtb-0fab36f8751fdee73" with subnet "subnet-0626ce238be32bf98"
46m Normal SuccessfulCreateRouteTable
awsmanagedcontrolplane/eks-example-control-plane Created managed RouteTable
"rtb-0e5c9c7bbc3740a0f"
46m Normal SuccessfulCreateRoute
awsmanagedcontrolplane/eks-example-control-plane Created route {...
46m Normal SuccessfulAssociateRouteTable
awsmanagedcontrolplane/eks-example-control-plane Associated managed RouteTable
"rtb-0e5c9c7bbc3740a0f" with subnet "subnet-0f53cf59f83177800"
46m Normal SuccessfulCreateRouteTable
awsmanagedcontrolplane/eks-example-control-plane Created managed RouteTable
"rtb-0bf58eb5f73c387af"
46m Normal SuccessfulCreateRoute
awsmanagedcontrolplane/eks-example-control-plane Created route {...
46m Normal SuccessfulAssociateRouteTable
awsmanagedcontrolplane/eks-example-control-plane Associated managed RouteTable
"rtb-0bf58eb5f73c387af" with subnet "subnet-0878478f6bbf153b2"
46m Normal SuccessfulCreateSecurityGroup
awsmanagedcontrolplane/eks-example-control-plane Created managed SecurityGroup
"sg-0b045c998a120a1b2" for Role "node-eks-additional"
46m Normal InitiatedCreateEKSControlPlane
awsmanagedcontrolplane/eks-example-control-plane Initiated creation of a new EKS
control plane default_eks-example-control-plane
37m Normal SuccessfulCreateEKSControlPlane
awsmanagedcontrolplane/eks-example-control-plane Created new EKS control plane
default_eks-example-control-plane
37m Normal SucessfulCreateKubeconfig
awsmanagedcontrolplane/eks-example-control-plane Created kubeconfig for cluster
"eks-example"
37m Normal SucessfulCreateUserKubeconfig
awsmanagedcontrolplane/eks-example-control-plane Created user kubeconfig for
cluster "eks-example"
27m Normal SuccessfulCreate awsmachine/
eks-example-md-0-4t9nc Created new node instance with id
"i-0aecc1897c93df740"
26m Normal SuccessfulDeleteEncryptedBootstrapDataSecrets awsmachine/eks-
example-md-0-4t9nc AWS Secret entries containing userdata deleted
26m Normal SuccessfulSetNodeRef machine/eks-
example-md-0-78fcd7c7b7-fn7x9 ip-10-0-88-24.us-west-2.compute.internal
26m Normal SuccessfulSetNodeRef machine/eks-
example-md-0-78fcd7c7b7-g64nv ip-10-0-110-219.us-west-2.compute.internal
26m Normal SuccessfulSetNodeRef machine/eks-
example-md-0-78fcd7c7b7-gwc5j ip-10-0-101-161.us-west-2.compute.internal
26m Normal SuccessfulSetNodeRef machine/eks-
example-md-0-78fcd7c7b7-j58s4 ip-10-0-127-49.us-west-2.compute.internal
46m Normal SuccessfulCreate machineset/eks-
example-md-0-78fcd7c7b7 Created machine "eks-example-md-0-78fcd7c7b7-
fn7x9"
46m Normal SuccessfulCreate machineset/eks-
example-md-0-78fcd7c7b7 Created machine "eks-example-md-0-78fcd7c7b7-
g64nv"
46m Normal SuccessfulCreate machineset/eks-
example-md-0-78fcd7c7b7 Created machine "eks-example-md-0-78fcd7c7b7-
j58s4"
46m Normal SuccessfulCreate machineset/eks-
example-md-0-78fcd7c7b7 Created machine "eks-example-md-0-78fcd7c7b7-
gwc5j"
27m Normal SuccessfulCreate awsmachine/
eks-example-md-0-7whkv Created new node instance with id
"i-06dfc0466b8f26695"

26m Normal SuccessfulDeleteEncryptedBootstrapDataSecrets awsmachine/eks-
example-md-0-7whkv AWS Secret entries containing userdata deleted
27m Normal SuccessfulCreate awsmachine/
eks-example-md-0-ttgzv Created new node instance with id
"i-0544fce0350fd41fb"
26m Normal SuccessfulDeleteEncryptedBootstrapDataSecrets awsmachine/eks-
example-md-0-ttgzv AWS Secret entries containing userdata deleted
27m Normal SuccessfulCreate awsmachine/
eks-example-md-0-v2hrf Created new node instance with id
"i-0498906edde162e59"
26m Normal SuccessfulDeleteEncryptedBootstrapDataSecrets awsmachine/eks-
example-md-0-v2hrf AWS Secret entries containing userdata deleted
46m Normal SuccessfulCreate
machinedeployment/eks-example-md-0 Created MachineSet "eks-example-
md-0-78fcd7c7b7"

EKS: Grant Cluster Access


This topic explains how to Grant Cluster Access.

About this task


You can access your cluster using AWS IAM roles in the dashboard. When you create an EKS cluster, the IAM entity
that creates it is granted system:masters permissions in the Kubernetes Role-Based Access Control (RBAC) configuration.

Note: More information about the configuration of the EKS control plane can be found on the EKS Cluster IAM
Policies and Roles page.

If the EKS cluster was created using a self-managed AWS cluster that uses IAM Instance Profiles, you
will need to modify the IAMAuthenticatorConfig field in the AWSManagedControlPlane API object to allow
other IAM entities to access the EKS workload cluster. Follow the steps below:

Procedure

1. Run the following command with your KUBECONFIG configured to select the self-managed cluster
previously used to create the workload EKS cluster. Ensure you substitute ${CLUSTER_NAME} and
${CLUSTER_NAMESPACE} with their corresponding values for your cluster.
kubectl edit awsmanagedcontrolplane ${CLUSTER_NAME}-control-plane -n
${CLUSTER_NAMESPACE}

2. Edit the IamAuthenticatorConfig field with the IAM Role to the corresponding Kubernetes Role. In
this example, the IAM role arn:aws:iam::111122223333:role/PowerUser is granted the cluster role
system:masters. Note that this example uses example AWS resource ARNs, remember to substitute real values
in the corresponding AWS account.
iamAuthenticatorConfig:
mapRoles:
- groups:
- system:bootstrappers
- system:nodes
rolearn: arn:aws:iam::111122223333:role/my-node-role
username: system:node:{{EC2PrivateDNSName}}
- groups:
- system:masters
rolearn: arn:aws:iam::111122223333:role/PowerUser
username: admin
For further instructions on changing or assigning roles or clusterroles to which you can map IAM users or
roles, see Amazon Enabling IAM access to your cluster.
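If you also need to grant access to an individual IAM user rather than a role, the same configuration accepts a mapUsers list. The entry below is a sketch only; the ARN and user name are placeholders you must replace with real values from your AWS account:
iamAuthenticatorConfig:
  mapUsers:
  - userarn: arn:aws:iam::111122223333:user/example-user
    username: example-user
    groups:
    - system:masters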



EKS: Retrieve kubeconfig for EKS Cluster
This guide explains how to use the command line to interact with your newly deployed Kubernetes cluster.

About this task


Before you start, make sure you have created a workload cluster, as described in EKS: Create an EKS Cluster.
When the workload cluster is created, the cluster life cycle services generate a kubeconfig file for the workload
cluster, and write it to a Secret. The kubeconfig file is scoped to the cluster administrator.

Procedure

1. Get a kubeconfig file for the workload cluster from the Secret, and write it to a file using this command.
nkp get kubeconfig -c ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf

2. List the Nodes using this command.


kubectl --kubeconfig=${CLUSTER_NAME}.conf get nodes
Output will be similar to:
NAME STATUS ROLES AGE VERSION
ip-10-0-122-211.us-west-2.compute.internal Ready <none> 35m v1.27.12-eks-
ae9a62a
ip-10-0-127-74.us-west-2.compute.internal Ready <none> 35m v1.27.12-eks-
ae9a62a
ip-10-0-71-155.us-west-2.compute.internal Ready <none> 35m v1.27.12-eks-
ae9a62a
ip-10-0-93-47.us-west-2.compute.internal Ready <none> 35m v1.27.12-eks-
ae9a62a

Note: It may take a few minutes for the Status to move to Ready while the Pod network is deployed. The node
status will change to Ready soon after the calico-node DaemonSet Pods are Ready.

3. List the Pods using this command.


kubectl --kubeconfig=${CLUSTER_NAME}.conf get --all-namespaces pods
Output will be similar to:
NAMESPACE                NAME                                             READY   STATUS     RESTARTS   AGE
calico-system            calico-kube-controllers-7d6749878f-ccsx9         1/1     Running    0          34m
calico-system            calico-node-2r6l8                                1/1     Running    0          34m
calico-system            calico-node-5pdlb                                1/1     Running    0          34m
calico-system            calico-node-n24hh                                1/1     Running    0          34m
calico-system            calico-node-qrh7p                                1/1     Running    0          34m
calico-system            calico-typha-7bbcb87696-7pk45                    1/1     Running    0          34m
calico-system            calico-typha-7bbcb87696-t4c8r                    1/1     Running    0          34m
calico-system            csi-node-driver-bz48k                            2/2     Running    0          34m
calico-system            csi-node-driver-k5mmk                            2/2     Running    0          34m
calico-system            csi-node-driver-nvcck                            2/2     Running    0          34m
calico-system            csi-node-driver-x4xnh                            2/2     Running    0          34m
kube-system              aws-node-2xp86                                   1/1     Running    0          35m
kube-system              aws-node-5f2kx                                   1/1     Running    0          35m
kube-system              aws-node-6lzm7                                   1/1     Running    0          35m
kube-system              aws-node-pz8c6                                   1/1     Running    0          35m
kube-system              cluster-autoscaler-789d86b489-sz9x2              0/1     Init:0/1   0          36m
kube-system              coredns-57ff979f67-pk5cg                         1/1     Running    0          75m
kube-system              coredns-57ff979f67-sf2j9                         1/1     Running    0          75m
kube-system              ebs-csi-controller-5f6bd5d6dc-bplwm              6/6     Running    0          36m
kube-system              ebs-csi-controller-5f6bd5d6dc-dpjt7              6/6     Running    0          36m
kube-system              ebs-csi-node-7hmm5                               3/3     Running    0          35m
kube-system              ebs-csi-node-l4vfh                               3/3     Running    0          35m
kube-system              ebs-csi-node-mfr7c                               3/3     Running    0          35m
kube-system              ebs-csi-node-v8krq                               3/3     Running    0          35m
kube-system              kube-proxy-7fc5x                                 1/1     Running    0          35m
kube-system              kube-proxy-vvkmk                                 1/1     Running    0          35m
kube-system              kube-proxy-x6hcc                                 1/1     Running    0          35m
kube-system              kube-proxy-x8frb                                 1/1     Running    0          35m
kube-system              snapshot-controller-8ff89f489-4cfxv              1/1     Running    0          36m
kube-system              snapshot-controller-8ff89f489-78gg8              1/1     Running    0          36m
node-feature-discovery   node-feature-discovery-master-7d5985467-52fcn    1/1     Running    0          36m
node-feature-discovery   node-feature-discovery-worker-88hr7              1/1     Running    0          34m
node-feature-discovery   node-feature-discovery-worker-h95nq              1/1     Running    0          35m
node-feature-discovery   node-feature-discovery-worker-lfghg              1/1     Running    0          34m
node-feature-discovery   node-feature-discovery-worker-prc8p              1/1     Running    0          35m
tigera-operator          tigera-operator-6dcd98c8ff-k97hq                 1/1     Running    0          36m

EKS: Attach a Cluster


You can attach existing Kubernetes clusters to the Management Cluster with the instructions below.

About this task


After attaching the cluster, you can use the UI to examine and manage this cluster. The following procedure shows
how to attach an existing Amazon Elastic Kubernetes Service (EKS) cluster.



This procedure assumes you have an existing, running Amazon EKS cluster with administrative privileges. Refer to the Amazon EKS documentation for setup and configuration information.

• Install aws-iam-authenticator. This binary is used to access your cluster using kubectl.
Attach a Pre-existing EKS Cluster
Ensure that the KUBECONFIG environment variable is set to the Management cluster before attaching, by running:
export KUBECONFIG=<Management_cluster_kubeconfig>.conf

Access Your EKS Clusters

Procedure

1. Ensure you are connected to your EKS clusters. Enter the following commands for each of your clusters.
kubectl config get-contexts
kubectl config use-context <context for first eks cluster>

2. Confirm kubectl can access the EKS cluster.


kubectl get nodes

Create a kubeconfig File

About this task


To get started, ensure you have kubectl set up and configured with ClusterAdmin for the cluster you want to connect
to Kommander.

Procedure

1. Create the necessary service account.


kubectl -n kube-system create serviceaccount kommander-cluster-admin

2. Create a token secret for the serviceaccount.


kubectl -n kube-system create -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: kommander-cluster-admin-sa-token
  annotations:
    kubernetes.io/service-account.name: kommander-cluster-admin
type: kubernetes.io/service-account-token
EOF
For more information on Service Account Tokens, refer to this article in our blog.

3. Verify that the serviceaccount token is ready by running this command.


kubectl -n kube-system get secret kommander-cluster-admin-sa-token -oyaml
Verify that the data.token field is populated.
Example output:
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDR...
  namespace: ZGVmYXVsdA==
  token: ZXlKaGJHY2lPaUpTVX...
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: kommander-cluster-admin
    kubernetes.io/service-account.uid: b62bc32e-b502-4654-921d-94a742e273a8
  creationTimestamp: "2022-08-19T13:36:42Z"
  name: kommander-cluster-admin-sa-token
  namespace: default
  resourceVersion: "8554"
  uid: 72c2a4f0-636d-4a70-9f1c-55a75f15e520
type: kubernetes.io/service-account-token

4. Configure the new service account for cluster-admin permissions.


cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kommander-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kommander-cluster-admin
  namespace: kube-system
EOF

5. Set up the following environment variables with the access data that is needed for producing a new kubeconfig
file.
export USER_TOKEN_VALUE=$(kubectl -n kube-system get secret/kommander-cluster-admin-sa-token -o=go-template='{{.data.token}}' | base64 --decode)
export CURRENT_CONTEXT=$(kubectl config current-context)
export CURRENT_CLUSTER=$(kubectl config view --raw -o=go-template='{{range .contexts}}{{if eq .name "'''${CURRENT_CONTEXT}'''"}}{{ index .context "cluster" }}{{end}}{{end}}')
export CLUSTER_CA=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}"{{with index .cluster "certificate-authority-data" }}{{.}}{{end}}"{{ end }}{{ end }}')
export CLUSTER_SERVER=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}{{ .cluster.server }}{{end}}{{ end }}')

6. Confirm these variables have been set correctly.


export -p | grep -E 'USER_TOKEN_VALUE|CURRENT_CONTEXT|CURRENT_CLUSTER|CLUSTER_CA|CLUSTER_SERVER'

7. Generate a kubeconfig file that uses the environment variable values from the previous step.
cat << EOF > kommander-cluster-admin-config
apiVersion: v1
kind: Config
current-context: ${CURRENT_CONTEXT}
contexts:
- name: ${CURRENT_CONTEXT}
  context:
    cluster: ${CURRENT_CONTEXT}
    user: kommander-cluster-admin
    namespace: kube-system
clusters:
- name: ${CURRENT_CONTEXT}
  cluster:
    certificate-authority-data: ${CLUSTER_CA}
    server: ${CLUSTER_SERVER}
users:
- name: kommander-cluster-admin
  user:
    token: ${USER_TOKEN_VALUE}
EOF

8. This process produces a file in your current working directory called kommander-cluster-admin-config. The
contents of this file are used in Kommander to attach the cluster. Before importing this configuration, verify the
kubeconfig file can access the cluster.
kubectl --kubeconfig $(pwd)/kommander-cluster-admin-config get all --all-namespaces

Attach EKS Cluster Manually Using the CLI


These steps are only applicable if you do not set a WORKSPACE_NAMESPACE when creating a cluster. If you
already set a WORKSPACE_NAMESPACE, then you do not need to perform these steps since the cluster is
already attached to the workspace.

About this task


When you create a Managed Cluster with the NKP CLI, it attaches automatically to the Management Cluster after
a few moments.
However, if you do not set a workspace, the attached cluster will be created in the default workspace. To ensure
that the attached cluster is created in your desired workspace namespace, follow these instructions:

Procedure

1. Confirm you have your MANAGED_CLUSTER_NAME variable set with the following command.
echo ${MANAGED_CLUSTER_NAME}

2. Retrieve your kubeconfig from the cluster you have created without setting a workspace.
nkp get kubeconfig --cluster-name ${MANAGED_CLUSTER_NAME} >
${MANAGED_CLUSTER_NAME}.conf

3. You can now either attach it in the UI, or attach your cluster to the workspace you want in the CLI. This is only
necessary if you never set the workspace of your cluster upon creation.

4. Retrieve the workspace where you want to attach the cluster:


kubectl get workspaces -A

5. Set the WORKSPACE_NAMESPACE environment variable.


export WORKSPACE_NAMESPACE=<workspace-namespace>

6. You need to create a secret in the desired workspace before attaching the cluster to that workspace. Retrieve the
kubeconfig secret value of your cluster.
kubectl -n default get secret ${MANAGED_CLUSTER_NAME}-kubeconfig -o go-
template='{{.data.value}}{{ "\n"}}'



7. This returns a lengthy value. Copy this entire string into a new attached-cluster-kubeconfig.yaml file, using the template below as a reference.
apiVersion: v1
kind: Secret
metadata:
  name: <your-managed-cluster-name>-kubeconfig
  labels:
    cluster.x-k8s.io/cluster-name: <your-managed-cluster-name>
type: cluster.x-k8s.io/secret
data:
  value: <value-you-copied-from-secret-above>

8. Create this secret in the desired workspace.


kubectl apply -f attached-cluster-kubeconfig.yaml --namespace
${WORKSPACE_NAMESPACE}
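If you prefer to script steps 6 through 8 rather than copying the value by hand, a shell sketch (assuming bash and the MANAGED_CLUSTER_NAME and WORKSPACE_NAMESPACE variables set above) is:
# Capture the kubeconfig value from the default namespace
KUBECONFIG_VALUE=$(kubectl -n default get secret ${MANAGED_CLUSTER_NAME}-kubeconfig -o go-template='{{.data.value}}')
# Recreate the Secret in the target workspace namespace
cat << EOF | kubectl apply --namespace ${WORKSPACE_NAMESPACE} -f -
apiVersion: v1
kind: Secret
metadata:
  name: ${MANAGED_CLUSTER_NAME}-kubeconfig
  labels:
    cluster.x-k8s.io/cluster-name: ${MANAGED_CLUSTER_NAME}
type: cluster.x-k8s.io/secret
data:
  value: ${KUBECONFIG_VALUE}
EOF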

9. Create this kommandercluster object to attach the cluster to the workspace.


cat << EOF | kubectl apply -f -
apiVersion: kommander.mesosphere.io/v1beta1
kind: KommanderCluster
metadata:
  name: ${MANAGED_CLUSTER_NAME}
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  kubeconfigRef:
    name: ${MANAGED_CLUSTER_NAME}-kubeconfig
  clusterRef:
    capiCluster:
      name: ${MANAGED_CLUSTER_NAME}
EOF

10. You can now view this cluster in your Workspace in the UI and you can confirm its status by running the below
command. It may take a few minutes to reach "Joined" status.
kubectl get kommanderclusters -A

Attach EKS Cluster from the UI Dashboard


Attach your new EKS cluster using the UI.

About this task


Now that you have a kubeconfig from the previous page, go to the NKP UI and follow these steps below:

Procedure

1. From the top menu bar, select your target workspace.

2. On the Dashboard page, select the Add Cluster option in the Actions dropdown list at the top right.

3. Select Attach Cluster.

4. Select the No additional networking restrictions card. Alternatively, if you must use network restrictions,
stop following the steps below, and see the instructions on the page Attach a cluster WITH network restrictions.

5. Upload the kubeconfig file you created in the previous section (or copy its contents) into the Cluster
Configuration section.



6. The Cluster Name field automatically populates with the name of the cluster in the kubeconfig. You can edit
this field with the name you want for your cluster.

7. Add labels to classify your cluster as needed.

8. Select Create to attach your cluster.

Note: If a cluster has limited resources to deploy all the federated platform services, it will fail to stay attached in
the NKP UI. If this happens, ensure your system has sufficient resources for all pods.

vSphere Installation Options


For an environment on the vSphere infrastructure, installation options based on the relevant environment variables are provided for you in this section.
Remember, there are always more options for custom YAML in the Custom Installation and Additional
Infrastructure Tools section, but this will get you operational in the most common scenarios.
If not already done, see the documentation for:

• Resource Requirements on page 38


• Installing NKP on page 47
• Prerequisites for Installation on page 44

vSphere Overview
vSphere is a more complex setup than some of the other providers and infrastructures, so an overview of steps has
been provided to help. To confirm that your OS is supported, see Supported Operating System.
The overall process for configuring vSphere and NKP together includes the following steps:
1. Configure vSphere to provide the needed elements described in the vSphere Prerequisites: All Installation
Types.
2. For air-gapped environments: Creating a Bastion Host on page 652.
3. Create a base OS image (for use in the OVA package containing the disk images packaged with the OVF).
4. Create a CAPI VM image template that uses the base OS image and adds the needed Kubernetes cluster
components.
5. Create a new self-managing cluster on vSphere.
6. Install Kommander.
7. Verify and log on to the UI.

Section Contents
Supported environment variable combinations:

vSphere Prerequisites: All Installation Types


This section contains all the prerequisite information specific to VMware vSphere infrastructure. These are in addition to the general NKP prerequisites for installation. Fulfilling the prerequisites involves completing these two areas:
1. NKP prerequisites
2. vSphere prerequisites - vCenter Server + ESXi

1. NKP Prerequisites
Before using NKP to create a vSphere cluster, verify that you have:

• An x86_64-based Linux or macOS machine.



• Downloaded NKP binaries and the Konvoy Image Builder (KIB) image bundle for Linux or macOS.
• A container engine/runtime installed on the host, which is required to install NKP and run the bootstrap cluster:

  • Docker container engine version 18.09.2 or 20.10.0 installed for Linux or macOS. For more information, see https://docs.docker.com/get-docker/.
  • Podman version 4.0 or later for Linux. For more information, see https://podman.io/getting-started/installation. For host requirements, see https://kind.sigs.k8s.io/docs/user/rootless/#host-requirements.
• A registry installed on the host where the NKP Konvoy CLI runs. For example, if you are installing Konvoy on your laptop, ensure the laptop has a supported version of Docker or another registry. On macOS, Docker runs in a virtual machine; configure this virtual machine with at least 8 GB of memory.
• The CLI tool kubectl 1.21.6 for interacting with the running cluster, installed on the host where the NKP Konvoy command line interface (CLI) runs. For more information, see https://kubernetes.io/docs/tasks/tools/#kubectl.
• A valid VMware vSphere account with credentials configured.

Note: NKP uses the vSphere CSI driver as the default storage provider. Use a Kubernetes CSI-compatible storage that is suitable for production. For more information, see https://kubernetes.io/docs/concepts/storage/volumes/#volume-types.

Note: You can choose from any of the storage options available for Kubernetes. To turn off the default that
Konvoy deploys, set the default StorageClass as non-default. Then, set your newly created StorageClass to be the
default by following the commands in the Kubernetes documentation called Changing the Default Storage
Class.
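For reference, the pattern from that Kubernetes documentation looks like the following sketch; both StorageClass names are placeholders for your own:
# Mark the StorageClass that Konvoy deployed as non-default
kubectl patch storageclass <current-default-storageclass> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
# Mark your newly created StorageClass as the default
kubectl patch storageclass <your-storageclass> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'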

VMware vSphere Prerequisites


Before installing, verify that your VMware vSphere Client environment meets the following basic requirements:

• Access to a bastion VM or other network-connected host, running vSphere Client version v6.7.x with Update 3 or a later version.

• You must be able to reach the vSphere API endpoint from where the Konvoy command line interface (CLI)
runs.
• vSphere account with credentials configured - this account must have Administrator privileges.
• A RedHat subscription with a username and password for downloading DVD ISOs.
• For air-gapped environments, a bastion VM host template with access to a configured local registry. The
recommended template naming pattern is ../folder-name/NKP-e2e-bastion-template or similar. Each
infrastructure provider has its own set of bastion host instructions. For more information on Creating a Bastion
Host on page 652, see your provider’s documentation:

• AWS: https://aws.amazon.com/solutions/implementations/linux-bastion/
• Azure: https://learn.microsoft.com/en-us/azure/bastion/quickstart-host-portal
• GCP: https://blogs.vmware.com/cloud/2021/06/02/intro-google-cloud-vmware-engine-bastion-host-
access-iap/
• vSphere: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-6975426F-56D0-4FE2-8A58-580B40D2F667.html



• Valid vSphere values for the following:

• vCenter API server URL


• Datacenter name
• Zone name that contains ESXi hosts for your cluster’s nodes. For more information, see
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.esxi.install.doc/GUID-
B2F01BF5-078A-4C7E-B505-5DFFED0B8C38.html
• Datastore name for the shared storage resource to be used for the VMs in the cluster.

• Use of PersistentVolumes in your cluster depends on Cloud Native Storage (CNS), available in vSphere
v6.7.x with Update 3 and later versions. CNS depends on this shared Datastore’s configuration.
• Datastore URL from the datastore record for the shared datastore you want your cluster to use.

• You need this URL value to ensure that the correct Datastore is used when NKP creates VMs for your
cluster in vSphere.
• Folder name.
• Base template name, such as base-rhel-8 or base-rhel-7.
• Name of a Virtual Network that has DHCP enabled for both air-gapped and non-air-gapped environments.
• Resource Pools - at least one resource pool is needed, with every host in the pool having access to shared
storage, such as VSAN.

• Each host in the resource pool needs access to shared storage, such as NFS or VSAN, to make use of
machine deployments and high-availability control planes.
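If you use VMware's govc CLI, a quick way to confirm some of these values before installing is shown below. This is only a sketch; govc is not required by NKP, and the object names are placeholders.
# Datastore details; the "URL:" field is the datastore URL value NKP needs
govc datastore.info <DATASTORE_NAME>
# Browse the inventory to confirm datacenter, folder, and network names
govc ls /<DATACENTER_NAME>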

Section Contents

vSphere Roles
When provisioning Kubernetes clusters with the Nutanix Kubernetes Platform (NKP) vSphere provider, four
roles are needed for NKP to provide proper permissions.

About this task


Roles in vSphere are essentially policy statements for the objects in a vSphere inventory. A role is assigned to a user, and the object assignment can be inherited by any child objects through propagation, if desired.
Add the permission at the highest level and set it to propagate the permissions. In small vSphere environments with just a few hosts, assigning the role or user at the top level and propagating to the child resources is appropriate. However, in the majority of cases this is not possible, since security teams enforce strict restrictions on who has access to specific resources.
The table below describes the level at which these permissions are assigned, followed by the steps to add roles in vCenter. These roles provide user permissions that are less than those of the admin.

Procedure

Table 19: vSphere Permissions Propagation

Level                        Required   Propagate to Child

vCenter Server (Top Level)   No         No
Data Center                  Yes        No
Resource Pool                Yes        No
Folder                       Yes        Yes
Template                     Yes        No

1. Open a vSphere Client connection to the vCenter Server, described in the Prerequisites.

2. Select Home > Administration > Roles > Add Role.

3. Give the new Role a name from the four choices detailed in the next section.

4. Select the Privileges from the permissions directory tree dropdown list below each of the four roles.

• The list of permissions can be set so that the provider can create, modify, or delete resources; clone templates and VMs; manage disks; attach networks; and so on.

vSphere: Minimum User Permissions


When a user needs permissions less than Admin, a role must be created with those permissions.
In small vSphere environments, with just a few hosts, assigning the role or user at the top level and propagating to the
child resources is appropriate, as shown on this page in the permissions tree below.
However, in the majority of cases, this is not possible, as security teams will enforce strict restrictions on who needs
access to specific resources.
The process for configuring a vSphere role with the permissions for provisioning nodes and installing includes the
following steps:
1. Open a vSphere Client connection to the vCenter Server, as described in the Prerequisites.
2. Select Home > Administration > Roles > Add Role.
3. Give the new role a name, then select these Privileges:

Cns

• Searchable

Datastore

• Allocate space
• Low-level file operations

Host

• Configuration

  • Storage partition configuration

Profile-driven storage

• Profile-driven storage view

Network

• Assign network

Resource

• Assign virtual machine to resource pool

Virtual machine

• Change Configuration - from the list in that section, select these permissions:

  • Add new disk
  • Add existing disk
  • Add or remove device
  • Advanced configuration
  • Change CPU count
  • Change Memory
  • Change Settings
  • Reload from path

• Edit inventory

  • Create from existing
  • Remove

• Interaction

  • Power off
  • Power on

• Provisioning

  • Clone template
  • Deploy template

Session

• Validate session

The table below describes the level at which these permissions are assigned.

Level                        Required   Propagate to Child

vCenter Server (Top Level)   No         No
Data Center                  Yes        No
Resource Pool                Yes        No
Folder                       Yes        Yes
Template                     Yes        No

vSphere Storage Options


Explore storage options and considerations for using NKP with VMware vSphere.
The vSphere Container Storage plugin supports shared NFS, vNFS, and vSAN. You need to provision your storage
options in vCenter prior to creating a CAPI image in NKP for use with vSphere.



NKP has integrated the CSI 2.x driver used in vSphere. When creating your NKP cluster, NKP uses whatever
configuration you provide for the Datastore name. vSAN is not required. Using NFS can reduce the amount of
tagging and permission granting required to configure your cluster.

vSphere Installation
This topic provides instructions on how to install NKP in a vSphere non-air-gapped environment.
Remember, there are always more options for custom YAML in the Custom Installation and Additional
Infrastructure Tools section, but this will get you operating with basic features.
If not already done, see the documentation for:

• Resource Requirements on page 38


• Installing NKP on page 47
• Prerequisites for Installation on page 44

Further vSphere Prerequisites


Before you begin using Nutanix Kubernetes Platform (NKP), you must ensure you already meet the other
prerequisites in the vSphere Prerequisites: All Installation Types section.

Section Contents

vSphere: Image Creation Overview


This diagram illustrates the image creation process:



Figure 7: vSphere Image Creation Process

The workflow on the left shows the creation of a base OS image in the vCenter vSphere client using inputs from
Packer. The workflow on the right shows how NKP uses that same base OS image to create CAPI-enabled VM
images for your cluster.
After creating the base image, the NKP image builder uses it to create a CAPI-enabled vSphere template that includes
the Kubernetes objects for the cluster. You can use that resulting template with the NKP create cluster command
to create the VM nodes in your cluster directly on a vCenter server. From that point, you can use NKP to provision
and manage your cluster.
NKP communicates with the code in vCenter Server as the management layer for creating and managing virtual
machines after ESXi 6.7 Update 3 or later is installed and configured.

Next Step
vSphere Air-gapped: Create an Image

vSphere: BaseOS Image in vCenter


Creating a base OS image from DVD ISO files is a one-time process. The base OS image file is created in the
vSphere Client for use in the vSphere VM template. Therefore, the base OS image is used by Konvoy Image Builder
(KIB) to create a VM template to configure Kubernetes nodes by the NKP vSphere provider.

The Base OS Image


For vSphere, the username is populated by the SSH_USERNAME environment variable, and the user can authenticate through the SSH_PASSWORD or SSH_PRIVATE_KEY_FILE environment variable; one of these is required by Packer by default. This user needs administrator privileges. It is possible to configure a custom user and password when building the OS image; however, that requires the Konvoy Image Builder (KIB) configuration to be overridden.
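For example, before running KIB you might export the authentication variables for the base image user; the values below are placeholders only:
export SSH_USERNAME="builder"
# Authenticate with a password...
export SSH_PASSWORD="builder_password"
# ...or with a private key file instead:
# export SSH_PRIVATE_KEY_FILE="/path/to/private_key"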
While creating the base OS image, it is important to take into consideration the following elements:

• Storage configuration: Nutanix recommends customizing disk partitions and not configuring a SWAP partition.
• Network configuration: as KIB must download and install packages, activating the network is required.
• Connect to Red Hat: if using Red Hat Enterprise Linux (RHEL), registering with Red Hat is required to configure
software repositories and install software packages.
• Software selection: Nutanix recommends choosing Minimal Install.
• NKP recommends installing with the packages provided by the operating system package managers. Use the
version that corresponds to the major version of your operating system.

vSphere: Creating a CAPI VM Template


The Konvoy Image Builder (KIB) uses the values in image.yaml and the input base OS image to create a
vSphere template directly on the vCenter server.

About this task


You must have at least one image before creating a new cluster. If you have an image, this step in your configuration
is not required each time since that image can be used to spin up a new cluster. However, if you need different images
for different environments or providers, you will need to create a new custom image.

Procedure

1. Users need to perform the steps in the topic vSphere: Creating an Image before starting this procedure.

2. Build an image template with Konvoy Image Builder (KIB).

Create a vSphere Template for Your Cluster from a Base OS Image

Procedure

1. Set the following vSphere environment variables on the bastion VM host.


export VSPHERE_SERVER=your_vCenter_APIserver_URL
export VSPHERE_USERNAME=your_vCenter_user_name
export VSPHERE_PASSWORD=your_vCenter_password

2. Copy the base OS image file created in the vSphere Client to your desired location on the bastion VM host and
make a note of the path and file name.

3. Create an image.yaml file and add the following variables for vSphere. NKP uses this file and these variables as
inputs in the next step. To customize your image.yaml file, refer to this section: Customize your Image.

Note: This example is Ubuntu 20.04. You will need to replace the OS name below based on your OS. Also, refer to
the example YAML files located here: OVA YAML.

---
download_images: true
build_name: "ubuntu-2004"
packer_builder_type: "vsphere"
guestinfo_datasource_slug: "https://raw.githubusercontent.com/vmware/cloud-init-vmware-guestinfo"
guestinfo_datasource_ref: "v1.4.0"
guestinfo_datasource_script: "{{guestinfo_datasource_slug}}/{{guestinfo_datasource_ref}}/install.sh"
packer:
  cluster: "<VSPHERE_CLUSTER_NAME>"
  datacenter: "<VSPHERE_DATACENTER_NAME>"
  datastore: "<VSPHERE_DATASTORE_NAME>"
  folder: "<VSPHERE_FOLDER>"
  insecure_connection: "false"
  network: "<VSPHERE_NETWORK>"
  resource_pool: "<VSPHERE_RESOURCE_POOL>"
  template: "os-qualification-templates/d2iq-base-Ubuntu-20.04" # change default value with your base template name
  vsphere_guest_os_type: "other4xLinux64Guest"
  guest_os_type: "ubuntu2004-64"
# goss params
distribution: "ubuntu"
distribution_version: "20.04"
# Use the following overrides to select the authentication method that can be used with the base template
# ssh_username: ""          # can be exported as environment variable 'SSH_USERNAME'
# ssh_password: ""          # can be exported as environment variable 'SSH_PASSWORD'
# ssh_private_key_file = "" # can be exported as environment variable 'SSH_PRIVATE_KEY_FILE'
# ssh_agent_auth: false     # if set to true, ssh_password and ssh_private_key will be ignored

4. Create a vSphere VM template with your variation of the following command.


konvoy-image build images/ova/<image.yaml>

• Any additional configurations can be added to this command using --overrides flags, as shown in the items below and in the combined example after this list:
  1. Any credential overrides: --overrides overrides.yaml
  2. For FIPS, add this flag: --overrides overrides/fips.yaml
  3. For air-gapped, add this flag: --overrides overrides/offline-fips.yaml
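Combined, a full invocation might look like the following sketch; the image YAML file name is hypothetical (it matches the Ubuntu example above), and the override files are the ones listed in this step:
konvoy-image build images/ova/ubuntu-2004.yaml \
  --overrides overrides.yaml \
  --overrides overrides/fips.yaml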

5. The Konvoy Image Builder (KIB) uses the values in image.yaml and the input base OS image to create a
vSphere template directly on the vCenter server. This template contains the required artifacts needed to create a
Kubernetes cluster. When KIB successfully provisions the OS image, it creates a manifest file. The artifact_id
field of this file contains the name of the AMI ID (AWS), template name (vSphere), or image name (GCP/Azure),
for example.
{
  "name": "vsphere-clone",
  "builder_type": "vsphere-clone",
  "build_time": 1644985039,
  "files": null,
  "artifact_id": "konvoy-ova-vsphere-rhel-84-1.21.6-1644983717",
  "packer_run_uuid": "260e8110-77f8-ca94-e29e-ac7a2ae779c8",
  "custom_data": {
    "build_date": "2022-02-16T03:55:17Z",
    "build_name": "vsphere-rhel-84",
    "build_timestamp": "1644983717",
    [...]
  }
}

Tip: Recommendation: Now that you can see the template created in your vCenter, it is best to rename it to nkp-<NKP_VERSION>-k8s-<K8S_VERSION>-<DISTRO>, for example nkp-2.4.0-k8s-1.24.6-ubuntu, to keep templates organized.



6. The next step is to deploy an NKP cluster using your vSphere template.

Next Step

Procedure

• vSphere: Creating the Management Cluster

vSphere: Creating the Management Cluster


Create a vSphere Management Cluster in a non-air-gapped environment.

About this task


Use this procedure to create a self-managed Management cluster with NKP. A self-managed cluster refers to one
in which the CAPI resources and controllers that describe and manage it are running on the same cluster they are
managing.

Note: To increase Docker Hub's rate limit, use your Docker Hub credentials when creating the cluster by setting
the following flag --registry-mirror-url=https://registry-1.docker.io --registry-
mirror-username=<username> --registry-mirror-password=<password> on the nkp create
cluster command.

Before you begin


First, you must name your cluster.

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the name has capital letters. See Kubernetes for more naming information.

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable:


export CLUSTER_NAME=<my-vsphere-cluster>

3. Use the following command to set the environment variables for vSphere.
export VSPHERE_SERVER=example.vsphere.url
export [email protected]
export VSPHERE_PASSWORD=example_password

Note: NKP uses the vSphere CSI driver as the default storage provider. Use a Kubernetes CSI-compatible storage that is suitable for production. See the Kubernetes documentation called Changing the Default Storage Class for more information. If you're not using the default, you cannot deploy an alternate provider until after nkp create cluster is finished. However, this must be determined before the installation.

4. Generate the Kubernetes cluster objects by copying and editing this command to include the correct values,
including the VM template name you assigned in the previous procedure.
nkp create cluster vsphere \
--cluster-name ${CLUSTER_NAME} \
--network <NETWORK_NAME> \
--control-plane-endpoint-host <xxx.yyy.zzz.000> \
--data-center <DATACENTER_NAME> \
--data-store <DATASTORE_NAME> \
--folder <FOLDER_NAME> \
--server <VCENTER_API_SERVER_URL> \
--ssh-public-key-file <SSH_PUBLIC_KEY_FILE> \
--resource-pool <RESOURCE_POOL_NAME> \
--vm-template <TEMPLATE_NAME> \
--virtual-ip-interface <ip_interface_name> \
--self-managed

Important: If you need to increase Docker Hub's rate limit, use your Docker Hub credentials when creating the cluster, by setting the following flags on the nkp create cluster command: --registry-mirror-url=https://registry-1.docker.io --registry-mirror-username=<username> --registry-mirror-password=<password>.

» Flatcar OS flag: For Flatcar OS, use --os-hint flatcar to instruct the bootstrap cluster to make some changes related to the installation paths.
» HTTP: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-proxy, and --no-proxy and their related values in this command for it to be successful (see the example after this list). More information is available in Configuring an HTTP or HTTPS Proxy on page 644.
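For example, appended to the nkp create cluster vsphere command above, the proxy flags might look like the following sketch; the proxy endpoint and no-proxy list are placeholders for your environment:
  --http-proxy=http://proxy.example.com:3128 \
  --https-proxy=http://proxy.example.com:3128 \
  --no-proxy=127.0.0.1,localhost,.svc,.cluster.local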

Next Step

Procedure

• vSphere: Configure MetalLB

vSphere: Configure MetalLB


Create a MetalLB ConfigMap for your pre-provisioned infrastructure.
It is recommended that an external load balancer (LB) be the control plane endpoint. To distribute request load among the control plane machines, configure the load balancer to send requests to all the control plane machines. Configure the load balancer to send requests only to control plane machines that are responding to API requests.
If your environment is not currently equipped with a load balancer, you can use MetalLB. Otherwise, your existing load balancer will work, and you can continue the installation process with Pre-provisioned: Install Kommander. To use MetalLB, create a MetalLB ConfigMap for your pre-provisioned infrastructure, and choose which of the following two protocols you want MetalLB to use to announce service IPs:

• Layer 2, with Address Resolution Protocol (ARP)


• Border Gateway Protocol (BGP)
Select one of the following procedures to create your MetalLB manifest for further editing.

Layer 2 Configuration
Layer 2 mode is the simplest to configure: in many cases, you don’t need any protocol-specific configuration, only IP
addresses.
Layer 2 mode does not require the IPs to be bound to the network interfaces of your worker nodes. It works by responding to ARP requests on your local network directly and giving the machine's MAC address to clients.

• MetalLB IP address ranges or CIDRs need to be within the node's primary network subnet.
• MetalLB IP address ranges or CIDRs and node subnets must not conflict with the Kubernetes cluster pod and service subnets.



For example, the following configuration gives MetalLB control over IPs from 192.168.1.240 to 192.168.1.250 and
configures Layer 2 mode:
The following values are generic; enter your specific values into the fields where applicable.
cat << EOF > metallb-conf.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
EOF
kubectl apply -f metallb-conf.yaml

BGP Configuration
For a basic configuration featuring one BGP router and one IP address range, you need four pieces of information:

• The router IP address that MetalLB needs to connect to.


• The router’s autonomous systems (AS) number.
• The AS number MetalLB to be used.
• An IP address range is expressed as a CIDR prefix.
As an example, if you want to give MetalLB the range 192.168.10.0/24 and AS number 64500, and connect it to a
router at 10.0.0.1 with AS number 64501, your configuration will look like this:
cat << EOF > metallb-conf.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.0.0.1
      peer-asn: 64501
      my-asn: 64500
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 192.168.10.0/24
EOF
kubectl apply -f metallb-conf.yaml

vSphere: Kommander Installation


This section provides installation instructions for the Kommander component of NKP in a non-air-gapped
vSphere environment.



About this task
Once you have installed the Konvoy component of NKP, you will continue with the installation of the
Kommander component that will bring up the UI dashboard.

Tip: Tips and Recommendations

• The --kubeconfig=${CLUSTER_NAME}.conf flag ensures that you install Kommander on the correct
cluster. For alternatives, see Provide Context for Commands with a kubeconfig File.
• Applications can take longer to deploy and time out the installation. Add the --wait-timeout <time
to wait> flag and specify a period (for example, 1h) to allocate more time to the deployment of
applications.
• If the Kommander installation fails, or you want to reconfigure applications, rerun the install
command to retry.

Prerequisites:

• Ensure you have reviewed all Prerequisites for installation.


• Ensure you have a default StorageClass.
• Note the name of the cluster where you want to install Kommander. If you do not know the cluster name, use
kubectl get clusters -A to display and find it.
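A quick way to confirm there is a default StorageClass (a sketch, assuming kubectl points at the Management cluster) is to list the StorageClasses and look for the "(default)" marker next to one of them:
kubectl get storageclass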

Create your Kommander Installation Configuration File

Procedure

1. Set the environment variable for your cluster.


export CLUSTER_NAME=<your-management-cluster-name>

2. Copy the kubeconfig file of your Management cluster to your local directory.
nkp get kubeconfig -c ${CLUSTER_NAME} >> ${CLUSTER_NAME}.conf

3. Create a configuration file for the deployment.


nkp install kommander --init > kommander.yaml

4. If required: Customize your kommander.yaml.

a. See the Customizations page for customization options. Some options include Custom Domains and
Certificates, HTTP proxy, and External Load Balancer.

5. Enable NKP Catalog Applications and install Kommander: in the same kommander.yaml from the previous section, add these values (if you are enabling NKP Catalog Apps) for the NKP catalog applications.
apiVersion: config.kommander.mesosphere.io/v1alpha1
kind: Installation
catalog:
  repositories:
    - name: NKP-catalog-applications
      labels:
        kommander.d2iq.io/project-default-catalog-repository: "true"
        kommander.d2iq.io/workspace-default-catalog-repository: "true"
        kommander.d2iq.io/gitapps-gitrepository-type: "NKP"
      gitRepositorySpec:
        url: https://github.com/mesosphere/NKP-catalog-applications
        ref:
          tag: v2.12.0

6. Use the customized kommander.yaml to install NKP.


nkp install kommander --installer-config kommander.yaml --kubeconfig=
${CLUSTER_NAME}.conf

Note: If you only want to enable catalog applications to an existing configuration, add these values to an existing
installer configuration file to maintain your Management cluster’s settings.
If you want to enable NKP Catalog applications after installing NKP, see the topic Configuring NKP
Catalog Applications after Installing NKP.

Next Step

Procedure

• vSphere: Verify Install and Log in to the UI

vSphere: Verifying your Installation and UI Log in


Verify Kommander Install and Log in to the Dashboard UI

About this task


Verify Kommander Installation

Note: If the Kommander installation fails or you wish to reconfigure applications, you can rerun the install command
to retry the installation.

Procedure
You can check the status of the installation using the following command.
kubectl -n kommander wait --for condition=Ready helmreleases --all --timeout 15m

Note: If you prefer the CLI to not wait for all applications to become ready, you can set the --wait=false flag.

The command first waits for each of the Helm charts to reach their Ready condition, eventually resulting in output resembling the following:
helmrelease.helm.toolkit.fluxcd.io/centralized-grafana condition met
helmrelease.helm.toolkit.fluxcd.io/dex condition met
helmrelease.helm.toolkit.fluxcd.io/dex-k8s-authenticator condition met
helmrelease.helm.toolkit.fluxcd.io/fluent-bit condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-logging condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-loki condition met
helmrelease.helm.toolkit.fluxcd.io/karma condition met
helmrelease.helm.toolkit.fluxcd.io/kommander condition met
helmrelease.helm.toolkit.fluxcd.io/kommander-appmanagement condition met
helmrelease.helm.toolkit.fluxcd.io/kube-prometheus-stack condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/kubefed condition met
helmrelease.helm.toolkit.fluxcd.io/kubernetes-dashboard condition met
helmrelease.helm.toolkit.fluxcd.io/kubetunnel condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator-logging condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-adapter condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/reloader condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph-cluster condition met
helmrelease.helm.toolkit.fluxcd.io/thanos condition met
helmrelease.helm.toolkit.fluxcd.io/traefik condition met
helmrelease.helm.toolkit.fluxcd.io/traefik-forward-auth-mgmt condition met
helmrelease.helm.toolkit.fluxcd.io/velero condition met

Failed HelmReleases

Procedure
If an application fails to deploy, check the status of a HelmRelease using the following command:
kubectl -n kommander get helmrelease <HELMRELEASE_NAME>

If you find any HelmReleases in a "broken" release state, such as "exhausted" or "another rollback/release in progress", trigger a reconciliation of the HelmRelease using the following commands:
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": true}]'
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": false}]'

Log in to the UI

Procedure

1. By default, you can log in to the UI in Kommander with the credentials given using this command.
nkp open dashboard --kubeconfig=${CLUSTER_NAME}.conf

2. Retrieve your credentials at any time if necessary.


kubectl -n kommander get secret NKP-credentials -o go-template='Username: {{.data.username|base64decode}}{{ "\n"}}Password: {{.data.password|base64decode}}{{ "\n"}}'

3. Retrieve the URL used for accessing the UI with the following.
kubectl -n kommander get svc kommander-traefik -o go-template='https://{{with index .status.loadBalancer.ingress 0}}{{or .hostname .ip}}{{end}}/NKP/kommander/dashboard{{ "\n"}}'
Only use these static credentials to access the UI for configuring an external identity provider. Treat them as backup credentials rather than using them for normal access.

a. Rotate the password using the following command.


nkp experimental rotate dashboard-password
The output displays the new password:
Password: kqZ31lMBSCLcBjUKVwLJMQL2PxalipIzZw5Pjyw09wDqjWV3dz2wPSSBYi09JGJp

Dashboard UI Functions

Procedure

After installing the Konvoy component and building a cluster, and after successfully installing Kommander and logging in to the UI, you are ready to customize configurations using the Cluster Operations Management section of the documentation. The majority of this customization, such as attaching clusters and deploying applications, takes place in the dashboard or UI of NKP. The Cluster Operations section allows you to manage cluster operations and their application workloads to optimize your organization's productivity.



• Continue to the NKP Dashboard.

vSphere: Creating Managed Clusters Using the NKP CLI


This topic explains how to continue using the CLI to create managed clusters rather than switching to the
UI dashboard.

About this task


After the initial cluster creation, you can create additional clusters from the CLI. In a previous step, the new cluster was created as self-managed, which allows it to be a Management cluster or a stand-alone cluster. Subsequent new clusters are not self-managed, as they will likely be Managed or Attached clusters to this Management Cluster.

Note: When creating Managed clusters, you do not need to create and move CAPI objects or install the
Kommander component. Those tasks are only done on Management clusters!
Your new managed cluster needs to be part of a workspace under a management cluster. To make the new
managed cluster a part of a Workspace, set that workspace environment variable.

Procedure

1. If you have an existing Workspace name, run this command to find the name.
kubectl get workspace -A

2. When you have the Workspace name, set the WORKSPACE_NAMESPACE environment variable.
export WORKSPACE_NAMESPACE=<workspace_namespace>

Note: If you need to create a new Workspace, follow the instructions to Create a New Workspace

Name Your Cluster

About this task


Each cluster must have a unique name.
After you have defined the infrastructure and control plane endpoints, you can proceed to create the cluster by following these steps.
First, you must name your cluster. Then, you run the command to deploy it.

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the name has capital letters. See Kubernetes for more naming information.
When specifying the cluster-name, you must use the same cluster-name as used when defining your
inventory objects.

Perform both steps to name the cluster:

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable.


export CLUSTER_NAME=<my-managed-vsphere-cluster>



Create a Kubernetes Cluster

About this task


The below instructions tell you how to create a cluster and have it automatically attach to the workspace you set
above. If you do not set a workspace, it will be created in the default workspace, and you need to take additional
steps to attach to a workspace later. For instructions on how to do this, see Attach a Kubernetes Cluster.

Procedure

1. Use the following command to set the environment variables for vSphere.
export VSPHERE_SERVER=example.vsphere.url
export [email protected]
export VSPHERE_PASSWORD=example_password

2. Generate the Kubernetes cluster objects by copying and editing this command to include the correct values, including the VM template name you assigned in the previous procedure.
nkp create cluster vsphere \
--cluster-name ${MANAGED_CLUSTER_NAME} \
--additional-tags=owner=$(whoami) \
--namespace ${WORKSPACE_NAMESPACE} \
--network <NETWORK_NAME> \
--control-plane-endpoint-host <xxx.yyy.zzz.000> \
--data-center <DATACENTER_NAME> \
--data-store <DATASTORE_NAME> \
--folder <FOLDER_NAME> \
--server <VCENTER_API_SERVER_URL> \
--ssh-public-key-file <SSH_PUBLIC_KEY_FILE> \
--resource-pool <RESOURCE_POOL_NAME> \
--virtual-ip-interface <ip_interface_name> \
--vm-template <TEMPLATE_NAME> \
--kubeconfig=<management-cluster-kubeconfig-path>

Tip: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-
proxy, and --no-proxy and their related values in this command for it to be successful. More information is
available in Configuring an HTTP or HTTPS Proxy on page 644.

Manually Attach an NKP CLI Cluster to the Management Cluster

Procedure

When you create a Managed Cluster with the NKP CLI, it attaches automatically to the Management Cluster after a
few moments. However, if you do not set a workspace, the attached cluster will be created in the default workspace.
To ensure that the attached cluster is created in your desired workspace namespace, follow these instructions:

1. Confirm you have your MANAGED_CLUSTER_NAME variable set with the following command.
echo ${MANAGED_CLUSTER_NAME}

2. Retrieve your kubeconfig from the cluster you have created without setting a workspace.
nkp get kubeconfig --cluster-name ${MANAGED_CLUSTER_NAME} >
${MANAGED_CLUSTER_NAME}.conf



3. Note: This is only necessary if you never set the workspace of your cluster upon creation.

You can now either attach it in the UI, as described earlier for attaching a cluster to a workspace through the UI, or attach your cluster to the workspace you want in the CLI.

4. Retrieve the workspace where you want to attach the cluster.


kubectl get workspaces -A

5. Set the WORKSPACE_NAMESPACE environment variable.


export WORKSPACE_NAMESPACE=<workspace-namespace>

6. You need to create a secret in the desired workspace before attaching the cluster to that workspace. Retrieve the
kubeconfig secret value of your cluster.
kubectl -n default get secret ${MANAGED_CLUSTER_NAME}-kubeconfig -o go-
template='{{.data.value}}{{ "\n"}}'

7. This returns a lengthy value. Copy this entire string into a new attached-cluster-kubeconfig.yaml file, using the template below as a reference.
apiVersion: v1
kind: Secret
metadata:
  name: <your-managed-cluster-name>-kubeconfig
  labels:
    cluster.x-k8s.io/cluster-name: <your-managed-cluster-name>
type: cluster.x-k8s.io/secret
data:
  value: <value-you-copied-from-secret-above>

8. Create this secret in the desired workspace.


kubectl apply -f attached-cluster-kubeconfig.yaml --namespace
${WORKSPACE_NAMESPACE}

9. Create this kommandercluster object to attach the cluster to the workspace.


cat << EOF | kubectl apply -f -
apiVersion: kommander.mesosphere.io/v1beta1
kind: KommanderCluster
metadata:
  name: ${MANAGED_CLUSTER_NAME}
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  kubeconfigRef:
    name: ${MANAGED_CLUSTER_NAME}-kubeconfig
  clusterRef:
    capiCluster:
      name: ${MANAGED_CLUSTER_NAME}
EOF

10. You can now view this cluster in your Workspace in the UI, and you can confirm its status by running the command below. It may take a few minutes to reach "Joined" status.
kubectl get kommanderclusters -A
If you have several Pro Clusters and want to turn one of them into a Managed Cluster that is centrally administered by a Management Cluster, refer to Platform Expansion.



Next Step

Procedure

• Cluster Operations Management

vSphere Air-gapped Installation


This installation provides instructions on how to install Nutanix Kubernetes Platform (NKP) in a vSphere air-gapped
environment.
Remember, there are always more options for custom YAML in the Custom Installation and Additional
Infrastructure Tools section, but this will get you operating with basic features.
If not already done, see the documentation for:

• Resource Requirements on page 38


• Installing NKP on page 47
• Prerequisites for Installation on page 44

Note: For air-gapped environments, ensure you download the bundle nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz and extract the tar file to a local directory. For more information, see Downloading NKP on page 16.
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz

Further vSphere Prerequisites


Before you begin using NKP, you must ensure you already meet the other prerequisites in the vSphere
Prerequisites: All Installation Types section.

Section Contents

vSphere Air-gapped: Image Creation Overview


This diagram illustrates the image creation process:



Figure 8: vSphere Image Creation Process

The workflow on the left shows the creation of a base OS image in the vCenter vSphere client using inputs from
Packer. The workflow on the right shows how NKP uses that same base OS image to create CAPI-enabled VM
images for your cluster.
After creating the base image, the NKP image builder uses it to create a CAPI-enabled vSphere template that includes
the Kubernetes objects for the cluster. You can use that resulting template with the NKP create cluster command
to create the VM nodes in your cluster directly on a vCenter server. From that point, you can use NKP to provision
and manage your cluster.

vSphere Air-gapped: BaseOS Image in vCenter


Creating a base OS image from DVD ISO files is a one-time process. The base OS image file is created in the
vSphere Client for use in the vSphere VM template. Therefore, the base OS image is used by Konvoy Image Builder
(KIB) to create a VM template to configure Kubernetes nodes by the NKP vSphere provider.

The Base OS Image


For vSphere, the username is populated by the SSH_USERNAME environment variable, and the user can authenticate through the SSH_PASSWORD or SSH_PRIVATE_KEY_FILE environment variable; one of these is required by Packer by default. This user needs administrator privileges. It is possible to configure a custom user and password when building the OS image; however, that requires the Konvoy Image Builder (KIB) configuration to be overridden.
While creating the base OS image, it is important to take into consideration the following elements:

• Storage configuration: Nutanix recommends customizing disk partitions and not configuring a SWAP partition.
• Network configuration: as KIB must download and install packages, activating the network is required.



• Connect to Red Hat: if using Red Hat Enterprise Linux (RHEL), registering with Red Hat is required to configure
software repositories and install software packages.
• Software selection: Nutanix recommends choosing Minimal Install.
• NKP recommends installing with the packages provided by the operating system package managers. Use the
version that corresponds to the major version of your operating system.

vSphere Air-gapped: Loading the Registry


Before creating an air-gapped Kubernetes cluster, you need to load the required images in a local registry
for the Konvoy component.

About this task


If you do not already have a local registry set up, see the Local Registry Tools page for more information.

Note: For air-gapped environments, ensure you download the bundle nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz and extract the tar file to a local directory. For more information, see Downloading NKP on page 16.
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz

Procedure

1. The directory structure after extraction is used in subsequent steps to access files in different directories. For example, for the bootstrap cluster, change your directory to the nkp-<version> directory, similar to the example below, depending on your current location.
cd nkp-v2.12.0

2. Set an environment variable with your registry address and any other needed variables using this command.
export REGISTRY_URL="<https/http>://<registry-address>:<registry-port>"
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>
export REGISTRY_CA=<path to the cacert file on the bastion>

3. Execute the following command to load the air-gapped image bundle into your private registry using any of the
relevant flags to apply variables above.
nkp push bundle --bundle ./container-images/konvoy-image-bundle-v2.12.0.tar --to-
registry=${REGISTRY_URL} --to-registry-username=${REGISTRY_USERNAME} --to-registry-
password=${REGISTRY_PASSWORD}

Note: It may take some time to push all the images to your image registry, depending on the performance of the
network between the machine you are running the script on and the registry.
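To spot-check that the images landed in the registry, one option (a sketch that assumes your registry exposes the standard Docker Registry v2 HTTP API) is to query its catalog endpoint:
curl -sk -u "${REGISTRY_USERNAME}:${REGISTRY_PASSWORD}" "${REGISTRY_URL}/v2/_catalog"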

Kommander Load Images

About this task


If you are operating in an air-gapped environment, a local container registry containing all the necessary installation
images, including the Kommander component images, is required. See below for how to push the necessary images to
this registry.



Procedure

1. Load the Kommander images into your private registry using the command below to load the image bundle.
nkp push bundle --bundle ./container-images/kommander-image-bundle-v2.12.0.tar --to-
registry=${REGISTRY_URL} --to-registry-username=${REGISTRY_USERNAME} --to-registry-
password=${REGISTRY_PASSWORD}

2. Optional step for the Ultimate license: load the NKP Catalog Applications images.
nkp push bundle --bundle ./container-images/nkp-catalog-applications-image-bundle-
v2.12.0.tar --to-registry=${REGISTRY_URL} --to-registry-username=${REGISTRY_USERNAME}
--to-registry-password=${REGISTRY_PASSWORD}

vSphere Air-gapped: Creating a CAPI VM Template


The Konvoy Image Builder (KIB) uses the values in image.yaml and the input base OS image to create a
vSphere template directly on the vCenter server.

About this task


You must have at least one image before creating a new cluster. As long as you have an image, this step in your
configuration is not required each time since that image can be used to spin up a new cluster. However, if you need
different images for different environments or providers, you will need to create a new custom image.

Note: Users need to perform the steps in the topic vSphere: Creating an Image before starting this procedure.

Procedure

1. Assuming you have downloaded nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz, extract the tarball to a local directory.
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz && cd nkp-v2.12.0/kib

2. You will need to fetch the distro packages as well as other artifacts. By fetching the distro packages from distro
repositories, you get the latest security fixes available at machine image build time.

3. In your download location, there is a bundles directory with all the steps to create an OS package bundle for a
particular OS. To create it, run the new NKP command create-package-bundle. This builds an OS bundle
using the Kubernetes version defined in ansible/group_vars/all/defaults.yaml. Example command.
./konvoy-image create-package-bundle --os redhat-8.4 --output-directory=artifacts

» For FIPS, pass the flag: --fips


» For Red Hat Enterprise Linux (RHEL) OS, pass your Red Hat subscription manager credentials by exporting RHSM_ACTIVATION_KEY and RHSM_ORG_ID (a combined example follows this procedure). Example command:
export RHSM_ACTIVATION_KEY="-ci"
export RHSM_ORG_ID="1232131"

4. Build image template with Konvoy Image Builder (KIB).

5. Follow the instructions below to build a vSphere template and, if applicable, set the --overrides overrides/offline.yaml flag described in Step 4 of the next procedure.
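As a sketch, if you were building a FIPS package bundle on RHEL, the flags above combine like this (the values shown are placeholders):
# Placeholders only: substitute your own Red Hat activation key and organization ID.
export RHSM_ACTIVATION_KEY="<your-activation-key>"
export RHSM_ORG_ID="<your-org-id>"
./konvoy-image create-package-bundle --os redhat-8.4 --output-directory=artifacts --fips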

Create a vSphere Template for Your Cluster from a Base OS Image

Procedure

1. Set the following vSphere environment variables on the bastion VM host.


export VSPHERE_SERVER=your_vCenter_APIserver_URL
export VSPHERE_USERNAME=your_vCenter_user_name
export VSPHERE_PASSWORD=your_vCenter_password

2. Copy the base OS image file created in the vSphere Client to your desired location on the bastion VM host and
make a note of the path and file name.

3. Create an image.yaml file and add the following variables for vSphere. NKP uses this file and these variables as
inputs in the next step. To customize your image.yaml file, refer to this section: Customize your Image.

Note: This example is for Ubuntu 20.04. Replace the OS name below based on your OS. Also refer to the example YAML files located here: OVA YAML

---
download_images: true
build_name: "ubuntu-2004"
packer_builder_type: "vsphere"
guestinfo_datasource_slug: "https://raw.githubusercontent.com/vmware/cloud-init-
vmware-guestinfo"
guestinfo_datasource_ref: "v1.4.0"
guestinfo_datasource_script: "{{guestinfo_datasource_slug}}/
{{guestinfo_datasource_ref}}/install.sh"
packer:
cluster: "<VSPHERE_CLUSTER_NAME>"
datacenter: "<VSPHERE_DATACENTER_NAME>"
datastore: "<VSPHERE_DATASTORE_NAME>"
folder: "<VSPHERE_FOLDER>"
insecure_connection: "false"
network: "<VSPHERE_NETWORK>"
resource_pool: "<VSPHERE_RESOURCE_POOL>"
template: "os-qualification-templates/d2iq-base-Ubuntu-20.04" # change default
value with your base template name
vsphere_guest_os_type: "other4xLinux64Guest"
guest_os_type: "ubuntu2004-64"
# goss params
distribution: "ubuntu"
distribution_version: "20.04"
# Use following overrides to select the authentication method that can be used with
base template
# ssh_username: "" # can be exported as environment variable 'SSH_USERNAME'
# ssh_password: "" # can be exported as environment variable 'SSH_PASSWORD'
# ssh_private_key_file = "" # can be exported as environment variable
'SSH_PRIVATE_KEY_FILE'
# ssh_agent_auth: false # is set to true, ssh_password and ssh_private_key will be
ignored

4. Create a vSphere VM template with your variation of the following command.


konvoy-image build images/ova/<image.yaml>

• Any additional configurations can be added to this command using --overrides flags as shown below (a combined example follows this procedure):
1. Any credential overrides: --overrides overrides.yaml
2. For FIPS, add this flag: --overrides overrides/fips.yaml
3. For air-gapped, add this flag: --overrides overrides/offline-fips.yaml

5. The Konvoy Image Builder (KIB) uses the values in image.yaml and the input base OS image to create a
vSphere template directly on the vCenter server. This template contains the required artifacts needed to create a
Kubernetes cluster. When KIB provisions the OS image successfully, it creates a manifest file. The artifact_id
field of this file contains the name of the AMI ID (AWS), template name (vSphere), or image name (GCP/Azure),
for example.
{
"name": "vsphere-clone",
"builder_type": "vsphere-clone",
"build_time": 1644985039,
"files": null,
"artifact_id": "konvoy-ova-vsphere-rhel-84-1.21.6-1644983717",
"packer_run_uuid": "260e8110-77f8-ca94-e29e-ac7a2ae779c8",
"custom_data": {
"build_date": "2022-02-16T03:55:17Z",
"build_name": "vsphere-rhel-84",
"build_timestamp": "1644983717",
[...]
}
}

Tip: Recommendation: Now that you can see the template created in your vCenter, it is best to rename it to nkp-<NKP_VERSION>-k8s-<K8S_VERSION>-<DISTRO>, like nkp-2.4.0-k8s-1.24.6-ubuntu, to keep templates organized.

6. The next step is to deploy an NKP cluster using your vSphere template.
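As a sketch of step 4 for this air-gapped build, assuming the image definition file is named ubuntu-2004.yaml (a placeholder for the image.yaml you created in step 3):
# Build the vSphere template with the air-gapped (offline) override applied.
konvoy-image build images/ova/ubuntu-2004.yaml --overrides overrides/offline.yaml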

vSphere Air-gapped: Creating the Management Cluster


Create a vSphere Management Cluster in an air-gapped environment.

About this task


If you use these instructions to create a cluster on vSphere using the NKP default settings without any edits to
configuration files or additional flags, your cluster is deployed on an Ubuntu 20.04 operating system image with 3
control plane nodes, and 4 worker nodes. First you must name your cluster.

Before you begin


Name Your Cluster

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the name has capital letters. For more naming information, see https://kubernetes.io/docs/concepts/overview/working-with-objects/names/.

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable using the command export CLUSTER_NAME=<my-vsphere-cluster>.

3. Use the following command to set the environment variables for vSphere.
export VSPHERE_SERVER=example.vsphere.url
export [email protected]
export VSPHERE_PASSWORD=example_password

Note: NKP uses the vSphere CSI driver as the default storage provider. Use Kubernetes CSI-compatible storage that is suitable for production. See the Kubernetes documentation called Changing the Default Storage Class for more information. If you are not using the default, you cannot deploy an alternate provider until after the nkp create cluster command finishes. However, it must be determined before Kommander installation.

4. Load the image, using either the docker or podman command

» docker load -i konvoy-bootstrap-image-v2.12.0.tar

» podman load -i konvoy-bootstrap-image-v2.12.0.tar


podman image tag localhost/mesosphere/konvoy-bootstrap:2.12.0 docker.io/
mesosphere/konvoy-bootstrap:v2.12.0

The bootstrap image is loaded.

5. Generate the Kubernetes cluster objects by copying and editing this command to include the correct values,
including the VM template name you assigned in the previous procedure.
nkp create cluster vsphere \
--cluster-name ${CLUSTER_NAME} \
--network <NETWORK_NAME> \
--control-plane-endpoint-host <xxx.yyy.zzz.000> \
--data-center <DATACENTER_NAME> \
--data-store <DATASTORE_NAME> \
--folder <FOLDER_NAME> \
--server <VCENTER_API_SERVER_URL> \
--ssh-public-key-file <SSH_PUBLIC_KEY_FILE> \
--resource-pool <RESOURCE_POOL_NAME> \
--vm-template <TEMPLATE_NAME> \
--virtual-ip-interface <ip_interface_name> \
--self-managed

Important: If you need to increase Docker Hub's rate limit, use your Docker Hub credentials when creating
the cluster, by setting the following flag --registry-mirror-url=https://registry-1.docker.io
--registry-mirror-username= --registry-mirror-password= on the nkp create
cluster command.

» Flatcar OS flag: For Flatcar OS, use --os-hint flatcar to instruct the bootstrap cluster to make some changes related to the installation paths.
» HTTP: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --
https-proxy, and --no-proxy and their related values in this command for it to be successful. More
information is available in Configuring an HTTP or HTTPS Proxy on page 644.
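Once cluster creation completes, a quick sanity check, assuming the cluster name you exported above, is to retrieve the cluster's kubeconfig and confirm that the control plane and worker nodes registered:
# Write the new cluster's kubeconfig to a local file, then list its nodes.
nkp get kubeconfig -c ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf
kubectl --kubeconfig=${CLUSTER_NAME}.conf get nodes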

vSphere Air-gapped: Configure MetalLB


Create a MetalLB ConfigMap for your Pre-provisioned infrastructure.
It is recommended that an external load balancer (LB) be the control plane endpoint. To distribute request load among the control plane machines, configure the load balancer to send requests to all of the control plane machines, and only to those that are responding to API requests. If you do not have an external load balancer, you can use MetalLB by creating a MetalLB ConfigMap for your Pre-provisioned infrastructure.
Choose which of the following two protocols you want to use to announce service IPs. If your environment is not currently equipped with a load balancer, you can use MetalLB. Otherwise, your own load balancer will work, and you can continue the installation process with Pre-provisioned: Install Kommander. MetalLB uses one of two protocols for exposing Kubernetes services:

• Layer 2, with Address Resolution Protocol (ARP)

• Border Gateway Protocol (BGP)
Select one of the following procedures to create your MetalLB manifest for further editing.

Layer 2 Configuration
Layer 2 mode is the simplest to configure: in many cases, you don’t need any protocol-specific configuration, only IP
addresses.
Layer 2 mode does not require the IPs to be bound to the network interfaces of your worker nodes. It works by
responding to ARP requests on your local network directly, to give the machine’s MAC address to clients.

• MetalLB IP address ranges or CIDRs need to be within the node’s primary network subnet.
• MetalLB IP address ranges or CIDRs and node subnet must not conflict with the Kubernetes cluster pod and
service subnets.
For example, the following configuration gives MetalLB control over IPs from 192.168.1.240 to 192.168.1.250, and
configures Layer 2 mode:
The following values are generic, enter your specific values into the fields where applicable.
cat << EOF > metallb-conf.yaml
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: default
protocol: layer2
addresses:
- 192.168.1.240-192.168.1.250
EOF
kubectl apply -f metallb-conf.yaml

BGP Configuration
For a basic configuration featuring one BGP router and one IP address range, you need 4 pieces of information:

• The router IP address that MetalLB needs to connect to.


• The router’s autonomous system (AS) number.
• The AS number for MetalLB to use.
• An IP address range expressed as a CIDR prefix.
As an example, if you want to give MetalLB the range 192.168.10.0/24 and AS number 64500, and connect it to a
router at 10.0.0.1 with AS number 64501, your configuration will look like:
cat << EOF > metallb-conf.yaml
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
peers:
- peer-address: 10.0.0.1
peer-asn: 64501
my-asn: 64500
address-pools:
- name: default
protocol: bgp
addresses:
- 192.168.10.0/24
EOF
kubectl apply -f metallb-conf.yaml
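After applying either ConfigMap, a quick way to check the result, assuming MetalLB itself is already deployed in the metallb-system namespace, is to confirm its pods are running and then watch for LoadBalancer services to receive an external IP:
kubectl get pods -n metallb-system
# Services of type LoadBalancer should eventually show an address from your configured pool.
kubectl get svc --all-namespaces | grep LoadBalancer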

vSphere Air-gapped: Installing Kommander


This section provides installation instructions for the Kommander component of NKP in an air-gapped vSphere environment.

About this task


Once you have installed the Konvoy component of NKP, you will continue with the installation of the
Kommander component that will bring up the UI dashboard.

Tip: Tips and Recommendations

• The --kubeconfig=${CLUSTER_NAME}.conf flag ensures that you install Kommander on the correct
cluster. For alternatives, see Provide Context for Commands with a kubeconfig File.
• Applications can take longer to deploy, and time out the installation. Add the --wait-timeout <time
to wait> flag and specify a period of time (for example, 1h) to allocate more time to the deployment
of applications.
• If the Kommander installation fails, or you wish to reconfigure applications, rerun the install
command to retry.

Prerequisites:

• Ensure you have reviewed all Prerequisites for Install.


• Ensure you have a default StorageClass (a quick check follows this list).
• Note the name of the cluster where you want to install Kommander. If you do not know the cluster name, use
kubectl get clusters -A to display and find it.
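For example, to confirm a default StorageClass exists on the Management cluster:
# The default StorageClass is marked "(default)" next to its name.
kubectl get storageclass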

Create your Kommander Installation Configuration File

Procedure

1. Set the environment variable for your cluster.


export CLUSTER_NAME=<your-management-cluster-name>

2. Copy the kubeconfig file of your Management cluster to your local directory.
nkp get kubeconfig -c ${CLUSTER_NAME} >> ${CLUSTER_NAME}.conf

3. Create a configuration file for the deployment.


nkp install kommander --init > kommander.yaml

4. If required: Customize your kommander.yaml.

a. See Kommander Customizations page for customization options. Some options include Custom Domains and
Certificates, HTTP proxy and External Load Balancer.

5. To enable NKP Catalog Applications and install Kommander using the same kommander.yaml from the previous step, add these values (if you are enabling NKP Catalog Applications) for the NKP-catalog-applications repository.
apiVersion: config.kommander.mesosphere.io/v1alpha1
kind: Installation
catalog:
repositories:
- name: NKP-catalog-applications
labels:
kommander.d2iq.io/project-default-catalog-repository: "true"
kommander.d2iq.io/workspace-default-catalog-repository: "true"
kommander.d2iq.io/gitapps-gitrepository-type: "NKP"
gitRepositorySpec:
url: https://github.com/mesosphere/NKP-catalog-applications
ref:
tag: v2.12.0

6. Use the customized kommander.yaml to install NKP.


nkp install kommander --installer-config kommander.yaml --kubeconfig=${CLUSTER_NAME}.conf

Note: If you only want to enable catalog applications to an existing configuration, add these values to an existing
installer configuration file to maintain your Management cluster’s settings.
If you want to enable NKP Catalog applications after installing NKP, see the topic Configuring NKP
Catalog Applications after Installing NKP.

vSphere Air-gapped: Verifying your Installation and UI Log in


Verify Kommander Install and Log in to the Dashboard UI

About this task


Verify Kommander Installation

Note: If the Kommander installation fails or you wish to reconfigure applications, you can rerun the install command
to retry the installation.

Procedure
You can check the status of the installation using the following command.
kubectl -n kommander wait --for condition=Ready helmreleases --all --timeout 15m

Note: If you prefer the CLI to not wait for all applications to become ready, you can set the --wait=false flag.

The command first waits for each of the Helm charts to reach its Ready condition, eventually resulting in output resembling the following:
helmrelease.helm.toolkit.fluxcd.io/centralized-grafana condition met
helmrelease.helm.toolkit.fluxcd.io/dex condition met
helmrelease.helm.toolkit.fluxcd.io/dex-k8s-authenticator condition met
helmrelease.helm.toolkit.fluxcd.io/fluent-bit condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-logging condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-loki condition met
helmrelease.helm.toolkit.fluxcd.io/karma condition met
helmrelease.helm.toolkit.fluxcd.io/kommander condition met
helmrelease.helm.toolkit.fluxcd.io/kommander-appmanagement condition met
helmrelease.helm.toolkit.fluxcd.io/kube-prometheus-stack condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/kubefed condition met
helmrelease.helm.toolkit.fluxcd.io/kubernetes-dashboard condition met
helmrelease.helm.toolkit.fluxcd.io/kubetunnel condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator-logging condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-adapter condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/reloader condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph-cluster condition met
helmrelease.helm.toolkit.fluxcd.io/thanos condition met
helmrelease.helm.toolkit.fluxcd.io/traefik condition met
helmrelease.helm.toolkit.fluxcd.io/traefik-forward-auth-mgmt condition met
helmrelease.helm.toolkit.fluxcd.io/velero condition met

Failed HelmReleases

Procedure
If an application fails to deploy, check the status of a HelmRelease using the following command.
kubectl -n kommander get helmrelease <HELMRELEASE_NAME>
If you find any HelmReleases in a “broken” release state, such as “exhausted” or “another rollback/release in progress”, trigger a reconciliation of the HelmRelease using the following commands.
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": true}]'
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": false}]'

Log in to the UI

Procedure

1. By default, you can log in to the UI in Kommander with the credentials given using this command.
nkp open dashboard --kubeconfig=${CLUSTER_NAME}.conf

2. Retrieve your credentials at any time if necessary.


kubectl -n kommander get secret NKP-credentials -o go-template='Username:
{{.data.username|base64decode}}{{ "\n"}}Password: {{.data.password|base64decode}}
{{ "\n"}}'

3. Retrieve the URL used for accessing the UI with the following.
kubectl -n kommander get svc kommander-traefik -o go-template='https://{{with
index .status.loadBalancer.ingress 0}}{{or .hostname .ip}}{{end}}/NKP/kommander/
dashboard{{ "\n"}}'
Only use these static credentials to access the UI for configuring an external identity provider. Treat them as backup credentials rather than using them for normal access.

a. Rotate the password using the following command.


nkp experimental rotate dashboard-password
The output displays the new password:
Password: kqZ31lMBSCLcBjUKVwLJMQL2PxalipIzZw5Pjyw09wDqjWV3dz2wPSSBYi09JGJp

Dashboard UI Functions

Procedure

After installing the Konvoy component, building a cluster, successfully installing Kommander, and logging in to the UI, you are ready to customize configurations using the Cluster Operations Management section of the documentation. The majority of this customization, such as attaching clusters and deploying applications, takes place in the NKP dashboard or UI. The Cluster Operations section allows you to manage cluster operations and their application workloads to optimize your organization’s productivity.

• Continue to the NKP Dashboard.

vSphere Air-gapped: Creating Managed Clusters Using the NKP CLI


This topic explains how to continue using the CLI to create managed clusters rather than switching to the
UI dashboard.

About this task


After initial cluster creation, you can create additional clusters from the CLI. In a previous step, the new cluster was created as self-managed, which allows it to be a Management cluster or a standalone cluster. Subsequent new clusters are not self-managed, as they will likely be Managed or Attached clusters to this Management Cluster.

Note: When creating Managed clusters, you do not need to create and move CAPI objects, or install the
Kommander component. Those tasks are only done on Management clusters!
Your new managed cluster needs to be part of a workspace under a management cluster. To make the new
managed cluster a part of a Workspace, set that workspace environment variable.

Procedure

1. If you have an existing Workspace name, run this command to find the name.
kubectl get workspace -A

2. When you have the Workspace name, set the WORKSPACE_NAMESPACE environment variable.
export WORKSPACE_NAMESPACE=<workspace_namespace>

Note: If you need to create a new Workspace, follow the instructions to Create a New Workspace

Name Your Cluster

About this task


Each cluster must have a unique name.
After you have defined the infrastructure and control plane endpoints, you can proceed to creating the cluster by following these steps. First you must name your cluster, and then you run the command to deploy it.

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the name has capital letters. See Kubernetes for more naming information.
When specifying the cluster-name, you must use the same cluster-name as used when defining your
inventory objects.

Perform both steps to name the cluster:

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable.


export MANAGED_CLUSTER_NAME=<my-managed-vsphere-cluster>

Create a Kubernetes Cluster

About this task


The below instructions tell you how to create a cluster and have it automatically attach to the workspace you set
above. If you do not set a workspace, it will be created in the default workspace, and you need to take additional
steps to attach to a workspace later. For instructions on how to do this, see Attach a Kubernetes Cluster.

Procedure

1. Configure your cluster to use an existing local registry as a mirror when attempting to pull images. Important: the image must be created by Konvoy Image Builder in order to use the registry mirror feature.
export REGISTRY_URL=<https/http>://<registry-address>:<registry-port>
export REGISTRY_CA=<path to the CA on the bastion>
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>

2. Load the images, using either the docker or podman command.

» docker load -i konvoy-bootstrap-image-v2.12.0.tar

» podman load -i konvoy-bootstrap-image-v2.12.0.tar

3. Generate the Kubernetes cluster objects by copying and editing this command to include the correct values,
including the VM template name you assigned in the previous procedure.
nkp create cluster vsphere \
--cluster-name=${MANAGED_CLUSTER_NAME} \
--additional-tags=owner=$(whoami) \
--kubeconfig=<management-cluster-kubeconfig-path> \
--namespace ${WORKSPACE_NAMESPACE} \
--network <NETWORK_NAME> \
--control-plane-endpoint-host <CONTROL_PLANE_IP> \
--data-center <DATACENTER_NAME> \
--data-store <DATASTORE_NAME> \
--folder <FOLDER_NAME> \
--server <VCENTER_API_SERVER_URL> \
--ssh-public-key-file </path/to/key.pub> \
--resource-pool <RESOURCE_POOL_NAME> \
--vm-template konvoy-ova-vsphere-os-release-k8s_release-vsphere-timestamp \
--virtual-ip-interface <ip_interface_name> \
--extra-sans "127.0.0.1" \
--registry-mirror-url=${REGISTRY_URL} \
--registry-mirror-cacert=${REGISTRY_CA} \
--registry-mirror-username=${REGISTRY_USERNAME} \
--registry-mirror-password=${REGISTRY_PASSWORD}

Tip: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-
proxy, and --no-proxy and their related values in this command for it to be successful. More information is
available in Configuring an HTTP or HTTPS Proxy on page 644.

Manually Attach an NKP CLI Cluster to the Management Cluster

Procedure

When you create a Managed Cluster with the NKP CLI, it attaches automatically to the Management Cluster after a
few moments. However, if you do not set a workspace, the attached cluster will be created in the default workspace.
To ensure that the attached cluster is created in your desired workspace namespace, follow these instructions:

1. Confirm you have your MANAGED_CLUSTER_NAME variable set with the following command.
echo ${MANAGED_CLUSTER_NAME}

2. Retrieve your kubeconfig from the cluster you have created without setting a workspace.
nkp get kubeconfig --cluster-name ${MANAGED_CLUSTER_NAME} >
${MANAGED_CLUSTER_NAME}.conf

3. Note: This is only necessary if you never set the workspace of your cluster upon creation.

You can now either attach it in the UI, as described earlier in the section on attaching a cluster to a workspace through the UI, or attach your cluster to the workspace you want using the CLI.

4. Retrieve the workspace where you want to attach the cluster.


kubectl get workspaces -A

5. Set the WORKSPACE_NAMESPACE environment variable.


export WORKSPACE_NAMESPACE=<workspace-namespace>

6. You need to create a secret in the desired workspace before attaching the cluster to that workspace. Retrieve the
kubeconfig secret value of your cluster.
kubectl -n default get secret ${MANAGED_CLUSTER_NAME}-kubeconfig -o go-
template='{{.data.value}}{{ "\n"}}'

7. This will return a lengthy value. Copy this entire string for a secret using the template below as a reference.
Create a new attached-cluster-kubeconfig.yaml file.
apiVersion: v1
kind: Secret
metadata:
name: <your-managed-cluster-name>-kubeconfig
labels:
cluster.x-k8s.io/cluster-name: <your-managed-cluster-name>
type: cluster.x-k8s.io/secret
data:
value: <value-you-copied-from-secret-above>

8. Create this secret in the desired workspace.


kubectl apply -f attached-cluster-kubeconfig.yaml --namespace
${WORKSPACE_NAMESPACE}

9. Create this kommandercluster object to attach the cluster to the workspace.
cat << EOF | kubectl apply -f -
apiVersion: kommander.mesosphere.io/v1beta1
kind: KommanderCluster
metadata:
name: ${MANAGED_CLUSTER_NAME}
namespace: ${WORKSPACE_NAMESPACE}
spec:
kubeconfigRef:
name: ${MANAGED_CLUSTER_NAME}-kubeconfig
clusterRef:
capiCluster:
name: ${MANAGED_CLUSTER_NAME}
EOF

10. You can now view this cluster in your Workspace in the UI and you can confirm its status by running the below
command. It may take a few minutes to reach "Joined" status.
kubectl get kommanderclusters -A
If you have several Pro clusters and want to turn one of them into a Managed cluster to be centrally administered by a Management cluster, refer to Platform Expansion.

Next Step

Procedure

• Cluster Operations Management

vSphere with FIPS Installation


This installation provides instructions to install NKP in a vSphere non-air-gapped environment using FIPS.
Remember, there are always more options for custom YAML in the Custom Installation and Additional
Infrastructure Tools section, but this will get you operating with basic features.
If not already done, see the documentation for:

• Resource Requirements on page 38


• Installing NKP on page 47
• Prerequisites for Installation on page 44

Further vSphere Prerequisites


Before you begin using NKP, you must ensure you already meet the other prerequisites in the vSphere
Prerequisites: All Installation Types section.

Section Contents

vSphere FIPS: Image Creation Overview


This diagram illustrates the image creation process:

Figure 9: vSphere Image Creation Process

The workflow on the left shows the creation of a base OS image in the vCenter vSphere client using inputs from
Packer. The workflow on the right shows how NKP uses that same base OS image to create CAPI-enabled VM
images for your cluster.
After creating the base image, the NKP image builder uses it to create a CAPI-enabled vSphere template that includes
the Kubernetes objects for the cluster. You can use that resulting template with the NKP create cluster command
to create the VM nodes in your cluster directly on a vCenter server. From that point, you can use NKP to provision
and manage your cluster.
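In command terms, the workflow above reduces to two steps; the following is only an outline with placeholder names, and the full flag lists appear in the procedures later in this section:
# 1. Build a CAPI-enabled vSphere template from your base OS image definition (FIPS override shown).
konvoy-image build images/ova/<image.yaml> --overrides overrides/fips.yaml
# 2. Create the cluster from the resulting template (additional required flags omitted here).
nkp create cluster vsphere --cluster-name ${CLUSTER_NAME} --vm-template <TEMPLATE_NAME>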

vSphere FIPS: BaseOS Image in vCenter


Creating a base OS image from DVD ISO files is a one-time process. The base OS image file is created in the
vSphere Client for use in the vSphere VM template. Therefore, the base OS image is used by Konvoy Image Builder
(KIB) to create a VM template to configure Kubernetes nodes by the NKP vSphere provider.

The Base OS Image


For vSphere, the username is supplied through the SSH_USERNAME environment variable, and the user authenticates through either the SSH_PASSWORD or the SSH_PRIVATE_KEY_FILE environment variable; Packer requires these by default. This user needs administrator privileges. It is possible to configure a custom user and password when building the OS image; however, doing so requires overriding the Konvoy Image Builder (KIB) configuration.
While creating the base OS image, it is important to take into consideration the following elements:

• Storage configuration: Nutanix recommends customizing disk partitions and not configuring a SWAP partition.
• Network configuration: as KIB must download and install packages, activating the network is required.

• Connect to Red Hat: if using Red Hat Enterprise Linux (RHEL), registering with Red Hat is required to configure
software repositories and install software packages.
• Software selection: Nutanix recommends choosing Minimal Install.
• NKP recommends installing with the packages provided by the operating system package managers. Use the version that corresponds to the major version of your operating system.

vSphere FIPS: Creating a CAPI VM Template


The Konvoy Image Builder (KIB) uses the values in image.yaml and the input base OS image to create a
vSphere template directly on the vCenter server.

About this task


You must have at least one image before creating a new cluster. As long as you have an image, this step in your
configuration is not required each time since that image can be used to spin up a new cluster. However, if you need
different images for different environments or providers, you will need to create a new custom image.

Procedure

1. Users need to perform the steps in the topic vSphere FIPS: Creating an Image before starting this procedure.

2. Build image template with Konvoy Image Builder (KIB).

Create a vSphere Template for Your Cluster from a Base OS Image

Procedure

1. Set the following vSphere environment variables on the bastion VM host.


export VSPHERE_SERVER=your_vCenter_APIserver_URL
export VSPHERE_USERNAME=your_vCenter_user_name
export VSPHERE_PASSWORD=your_vCenter_password

2. Copy the base OS image file created in the vSphere Client to your desired location on the bastion VM host and
make a note of the path and file name.

3. Create an image.yaml file and add the following variables for vSphere. NKP uses this file and these variables as
inputs in the next step. To customize your image.yaml file, refer to this section: Customize your Image.

Note: This example is for Ubuntu 20.04. Replace the OS name below based on your OS. Also refer to the example YAML files located here: OVA YAML

---
download_images: true
build_name: "ubuntu-2004"
packer_builder_type: "vsphere"
guestinfo_datasource_slug: "https://raw.githubusercontent.com/vmware/cloud-init-
vmware-guestinfo"
guestinfo_datasource_ref: "v1.4.0"
guestinfo_datasource_script: "{{guestinfo_datasource_slug}}/
{{guestinfo_datasource_ref}}/install.sh"
packer:
cluster: "<VSPHERE_CLUSTER_NAME>"
datacenter: "<VSPHERE_DATACENTER_NAME>"
datastore: "<VSPHERE_DATASTORE_NAME>"
folder: "<VSPHERE_FOLDER>"
insecure_connection: "false"
network: "<VSPHERE_NETWORK>"
resource_pool: "<VSPHERE_RESOURCE_POOL>"
template: "os-qualification-templates/d2iq-base-Ubuntu-20.04" # change default
value with your base template name
vsphere_guest_os_type: "other4xLinux64Guest"
guest_os_type: "ubuntu2004-64"
# goss params
distribution: "ubuntu"
distribution_version: "20.04"
# Use following overrides to select the authentication method that can be used with
base template
# ssh_username: "" # can be exported as environment variable 'SSH_USERNAME'
# ssh_password: "" # can be exported as environment variable 'SSH_PASSWORD'
# ssh_private_key_file = "" # can be exported as environment variable
'SSH_PRIVATE_KEY_FILE'
# ssh_agent_auth: false # is set to true, ssh_password and ssh_private_key will be
ignored

4. Create a vSphere VM template with your variation of the following command.


konvoy-image build images/ova/<image.yaml>

• Any additional configurations can be added to this command using --overrides flags as shown below:
1. Any credential overrides: --overrides overrides.yaml
2. For FIPS, add this flag: --overrides overrides/fips.yaml
3. For air-gapped, add this flag: --overrides overrides/offline-fips.yaml

5. The Konvoy Image Builder (KIB) uses the values in image.yaml and the input base OS image to create a
vSphere template directly on the vCenter server. This template contains the required artifacts needed to create a
Kubernetes cluster. When KIB provisions the OS image successfully, it creates a manifest file. The artifact_id
field of this file contains the name of the AMI ID (AWS), template name (vSphere), or image name (GCP or
Azure), for example.
{
"name": "vsphere-clone",
"builder_type": "vsphere-clone",
"build_time": 1644985039,
"files": null,
"artifact_id": "konvoy-ova-vsphere-rhel-84-1.21.6-1644983717",
"packer_run_uuid": "260e8110-77f8-ca94-e29e-ac7a2ae779c8",
"custom_data": {
"build_date": "2022-02-16T03:55:17Z",
"build_name": "vsphere-rhel-84",
"build_timestamp": "1644983717",
[...]
}
}

Tip: Recommendation: Now that you can see the template created in your vCenter, it is best to rename it to nkp-<NKP_VERSION>-k8s-<K8S_VERSION>-<DISTRO>, like nkp-2.4.0-k8s-1.24.6-ubuntu, to keep templates organized.

6. The next step is to deploy an NKP cluster using your vSphere template.

vSphere FIPS: Creating the Management Cluster


Create a vSphere Management Cluster in a non-air-gapped environment using FIPS.

About this task
Use this procedure to create a self-managed Management cluster with NKP. A self-managed cluster refers to one
in which the CAPI resources and controllers that describe and manage it are running on the same cluster they are
managing. First you must name your cluster.

Before you begin

Procedure

• The table below identifies the current FIPS and etcd versions for this release.

Table 20: Supported FIPS Builds

Component Repository Version


Kubernetes docker.io/mesosphere v1.29.6+fips.0
etcd docker.io/mesosphere 3.5.10+fips.0

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the name has capital letters. For more Kubernetes naming information, see https://kubernetes.io/docs/concepts/overview/working-with-objects/names/.

• Name your cluster and give it a unique name suitable for your environment.
• Set the environment variable:
export CLUSTER_NAME=<my-vsphere-cluster>

Create a New vSphere Kubernetes Cluster

About this task


The below instructions tell you how to create a cluster and have it automatically attach to the workspace you set
above. If you do not set a workspace, it will be created in the default workspace, and you need to take additional
steps to attach to a workspace later. For instructions on how to do this, see Kubernetes Cluster Attachment on
page 473.

Procedure

1. Use the following command to set the environment variables for vSphere.
export VSPHERE_SERVER=example.vsphere.url
export [email protected]
export VSPHERE_PASSWORD=example_password

2. Generate the Kubernetes cluster objects by copying and editing this command to include the correct values,
including the VM template name you assigned in the previous procedure.
nkp create cluster vsphere \
--cluster-name ${CLUSTER_NAME} \
--network <NETWORK_NAME> \
--control-plane-endpoint-host <xxx.yyy.zzz.000> \
--data-center <DATACENTER_NAME> \
--data-store <DATASTORE_NAME> \
--folder <FOLDER_NAME> \
--server <VCENTER_API_SERVER_URL> \
--ssh-public-key-file <SSH_PUBLIC_KEY_FILE> \
--resource-pool <RESOURCE_POOL_NAME> \
--vm-template <TEMPLATE_NAME> \
--virtual-ip-interface <ip_interface_name> \
--kubernetes-version=v1.29.6+fips.0 \
--kubernetes-image-repository=docker.io/mesosphere \
--etcd-image-repository=docker.io/mesosphere \
--etcd-version=3.5.10+fips.0 \
--self-managed

Note: To increase Docker Hub's rate limit, use your Docker Hub credentials when creating the cluster by setting the following flags on the nkp create cluster command: --registry-mirror-url=https://registry-1.docker.io --registry-mirror-username= --registry-mirror-password=.

» Flatcar OS flag: For Flatcar OS, use --os-hint flatcar to instruct the bootstrap cluster to make some changes related to the installation paths.
» HTTP: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --
https-proxy, and --no-proxy and their related values in this command for it to be successful. More
information is available in Configuring an HTTP or HTTPS Proxy on page 644.

Next Step

Procedure

• vSphere FIPS: Configure MetalLB

vSphere FIPS: Configure MetalLB


Create a MetalLB ConfigMap for your Pre-provisioned infrastructure.
It is recommended that an external load balancer (LB) be the control plane endpoint. To distribute request load among the control plane machines, configure the load balancer to send requests to all of the control plane machines, and only to those that are responding to API requests. If you do not have an external load balancer, you can use MetalLB by creating a MetalLB ConfigMap for your Pre-provisioned infrastructure.
Choose which of the following two protocols you want to use to announce service IPs. If your environment is not currently equipped with a load balancer, you can use MetalLB. Otherwise, your own load balancer will work, and you can continue the installation process with Pre-provisioned: Install Kommander. MetalLB uses one of two protocols for exposing Kubernetes services:

• Layer 2, with Address Resolution Protocol (ARP)


• Border Gateway Protocol (BGP)
Select one of the following procedures to create your MetalLB manifest for further editing.

Layer 2 Configuration
Layer 2 mode is the simplest to configure: in many cases, you don’t need any protocol-specific configuration, only IP
addresses.
Layer 2 mode does not require the IPs to be bound to the network interfaces of your worker nodes. It works by
responding to ARP requests on your local network directly, to give the machine’s MAC address to clients.

• MetalLB IP address ranges or CIDRs need to be within the node’s primary network subnet.

• MetalLB IP address ranges or CIDRs and node subnet must not conflict with the Kubernetes cluster pod and
service subnets.
For example, the following configuration gives MetalLB control over IPs from 192.168.1.240 to 192.168.1.250, and
configures Layer 2 mode:
The following values are generic, enter your specific values into the fields where applicable.
cat << EOF > metallb-conf.yaml
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: default
protocol: layer2
addresses:
- 192.168.1.240-192.168.1.250
EOF
kubectl apply -f metallb-conf.yaml

BGP Configuration
For a basic configuration featuring one BGP router and one IP address range, you need 4 pieces of information:

• The router IP address that MetalLB needs to connect to.


• The router’s autonomous system (AS) number.
• The AS number for MetalLB to use.
• An IP address range expressed as a CIDR prefix.
As an example, if you want to give MetalLB the range 192.168.10.0/24 and AS number 64500, and connect it to a
router at 10.0.0.1 with AS number 64501, your configuration will look like:
cat << EOF > metallb-conf.yaml
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
peers:
- peer-address: 10.0.0.1
peer-asn: 64501
my-asn: 64500
address-pools:
- name: default
protocol: bgp
addresses:
- 192.168.10.0/24
EOF
kubectl apply -f metallb-conf.yaml

vSphere FIPS: Installing Kommander


This section provides installation instructions for the Kommander component of NKP in a non-air-gapped
vSphere environment.

About this task
Once you have installed the Konvoy component of NKP, you will continue with the installation of the
Kommander component that will bring up the UI dashboard.

Tip: Tips and Recommendations

• The --kubeconfig=${CLUSTER_NAME}.conf flag ensures that you install Kommander on the correct
cluster. For alternatives, see Provide Context for Commands with a kubeconfig File.
• Applications can take longer to deploy, and time out the installation. Add the --wait-timeout <time
to wait> flag and specify a period of time (for example, 1h) to allocate more time to the deployment
of applications.
• If the Kommander installation fails, or you wish to reconfigure applications, rerun the install
command to retry.

Prerequisites:

• Ensure you have reviewed all Prerequisites for Install.


• Ensure you have a default StorageClass.
• Note the name of the cluster where you want to install Kommander. If you do not know the cluster name, use
kubectl get clusters -A to display and find it.

Create your Kommander Installation Configuration File

Procedure

1. Set the environment variable for your cluster.


export CLUSTER_NAME=<your-management-cluster-name>

2. Copy the kubeconfig file of your Management cluster to your local directory.
nkp get kubeconfig -c ${CLUSTER_NAME} >> ${CLUSTER_NAME}.conf

3. Create a configuration file for the deployment.


nkp install kommander --init > kommander.yaml

4. If required: Customize your kommander.yaml.

a. See Kommander Customizations page for customization options. Some options include Custom Domains and
Certificates, HTTP proxy and External Load Balancer.

5. To enable NKP Catalog Applications and install Kommander using the same kommander.yaml from the previous step, add these values (if you are enabling NKP Catalog Applications) for the NKP-catalog-applications repository.
apiVersion: config.kommander.mesosphere.io/v1alpha1
kind: Installation
catalog:
repositories:
- name: NKP-catalog-applications
labels:
kommander.d2iq.io/project-default-catalog-repository: "true"
kommander.d2iq.io/workspace-default-catalog-repository: "true"
kommander.d2iq.io/gitapps-gitrepository-type: "NKP"
gitRepositorySpec:
url: https://github.com/mesosphere/NKP-catalog-applications
ref:
tag: v2.12.0

6. Use the customized kommander.yaml to install NKP.


nkp install kommander --installer-config kommander.yaml --kubeconfig=${CLUSTER_NAME}.conf

Note: If you only want to enable catalog applications to an existing configuration, add these values to an existing
installer configuration file to maintain your Management cluster’s settings.
If you want to enable NKP Catalog applications after installing NKP, see the topic Configuring NKP
Catalog Applications after Installing NKP.

vSphere FIPS: Verifying your Installation and UI Log in


Verify Kommander Install and Log in to the Dashboard UI

About this task


Verify Kommander Installation

Note: If the Kommander installation fails or you wish to reconfigure applications, you can rerun the install command
to retry the installation.

Procedure
You can check the status of the installation using the following command.
kubectl -n kommander wait --for condition=Ready helmreleases --all --timeout 15m

Note: If you prefer the CLI to not wait for all applications to become ready, you can set the --wait=false flag.

The command first waits for each of the Helm charts to reach its Ready condition, eventually resulting in output resembling the following:
helmrelease.helm.toolkit.fluxcd.io/centralized-grafana condition met
helmrelease.helm.toolkit.fluxcd.io/dex condition met
helmrelease.helm.toolkit.fluxcd.io/dex-k8s-authenticator condition met
helmrelease.helm.toolkit.fluxcd.io/fluent-bit condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-logging condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-loki condition met
helmrelease.helm.toolkit.fluxcd.io/karma condition met
helmrelease.helm.toolkit.fluxcd.io/kommander condition met
helmrelease.helm.toolkit.fluxcd.io/kommander-appmanagement condition met
helmrelease.helm.toolkit.fluxcd.io/kube-prometheus-stack condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/kubefed condition met
helmrelease.helm.toolkit.fluxcd.io/kubernetes-dashboard condition met
helmrelease.helm.toolkit.fluxcd.io/kubetunnel condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator-logging condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-adapter condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/reloader condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph-cluster condition met
helmrelease.helm.toolkit.fluxcd.io/thanos condition met
helmrelease.helm.toolkit.fluxcd.io/traefik condition met
helmrelease.helm.toolkit.fluxcd.io/traefik-forward-auth-mgmt condition met
helmrelease.helm.toolkit.fluxcd.io/velero condition met

Failed HelmReleases

Procedure
If an application fails to deploy, check the status of a HelmRelease using the following command.
kubectl -n kommander get helmrelease <HELMRELEASE_NAME>
If you find any HelmReleases in a “broken” release state, such as “exhausted” or “another rollback/release in progress”, trigger a reconciliation of the HelmRelease using the following commands.
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": true}]'
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": false}]'

Log in to the UI

Procedure

1. By default, you can log in to the UI in Kommander with the credentials given using this command.
nkp open dashboard --kubeconfig=${CLUSTER_NAME}.conf

2. Retrieve your credentials at any time if necessary.


kubectl -n kommander get secret NKP-credentials -o go-template='Username:
{{.data.username|base64decode}}{{ "\n"}}Password: {{.data.password|base64decode}}
{{ "\n"}}'

3. Retrieve the URL used for accessing the UI with the following.
kubectl -n kommander get svc kommander-traefik -o go-template='https://{{with
index .status.loadBalancer.ingress 0}}{{or .hostname .ip}}{{end}}/NKP/kommander/
dashboard{{ "\n"}}'
Only use these static credentials to access the UI for configuring an external identity provider. Treat them as backup credentials rather than using them for normal access.

a. Rotate the password using the following command.


nkp experimental rotate dashboard-password
The output displays the new password:
Password: kqZ31lMBSCLcBjUKVwLJMQL2PxalipIzZw5Pjyw09wDqjWV3dz2wPSSBYi09JGJp

Dashboard UI Functions

Procedure

After installing the Konvoy component, building a cluster, successfully installing Kommander, and logging in to the UI, you are ready to customize configurations using the Cluster Operations Management section of the documentation. The majority of this customization, such as attaching clusters and deploying applications, takes place in the NKP dashboard or UI. The Cluster Operations section allows you to manage cluster operations and their application workloads to optimize your organization’s productivity.

• Continue to the NKP Dashboard.

vSphere FIPS: Creating Managed Clusters Using the NKP CLI


This topic explains how to continue using the CLI to create managed clusters rather than switching to the
UI dashboard.

About this task
After initial cluster creation, you can create additional clusters from the CLI. In a previous step, the new cluster was created as self-managed, which allows it to be a Management cluster or a standalone cluster. Subsequent new clusters are not self-managed, as they will likely be Managed or Attached clusters to this Management Cluster.

Note: When creating Managed clusters, you do not need to create and move CAPI objects, or install the
Kommander component. Those tasks are only done on Management clusters!
Your new managed cluster needs to be part of a workspace under a management cluster. To make the new
managed cluster a part of a Workspace, set that workspace environment variable.

Procedure

1. If you have an existing Workspace name, run this command to find the name.
kubectl get workspace -A

2. When you have the Workspace name, set the WORKSPACE_NAMESPACE environment variable.
export WORKSPACE_NAMESPACE=<workspace_namespace>

Note: If you need to create a new Workspace, follow the instructions to Create a New Workspace

Name Your Cluster

About this task


Each cluster must have a unique name.
After you have defined the infrastructure and control plane endpoints, you can proceed to creating the cluster by following these steps. First you must name your cluster, and then you run the command to deploy it.

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the name has capital letters. See Kubernetes for more naming information.
When specifying the cluster-name, you must use the same cluster-name as used when defining your
inventory objects.

Perform both steps to name the cluster:

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable.


export MANAGED_CLUSTER_NAME=<my-managed-vsphere-cluster>

Create a Kubernetes Cluster

About this task


The below instructions tell you how to create a cluster and have it automatically attach to the workspace you set
above. If you do not set a workspace, it will be created in the default workspace, and you need to take additional
steps to attach to a workspace later. For instructions on how to do this, see Attach a Kubernetes Cluster.

Procedure

1. Use the following command to set the environment variables for vSphere.
export VSPHERE_SERVER=example.vsphere.url
export [email protected]
export VSPHERE_PASSWORD=example_password

2. Generate the Kubernetes cluster objects by copying and editing this command to include the correct values,
including the VM template name you assigned in the previous procedure.
nkp create cluster vsphere \
--cluster-name ${MANAGED_CLUSTER_NAME} \
--additional-tags=owner=$(whoami) \
--namespace ${WORKSPACE_NAMESPACE} \
--network <NETWORK_NAME> \
--control-plane-endpoint-host <xxx.yyy.zzz.000> \
--data-center <DATACENTER_NAME> \
--data-store <DATASTORE_NAME> \
--folder <FOLDER_NAME> \
--server <VCENTER_API_SERVER_URL> \
--ssh-public-key-file <SSH_PUBLIC_KEY_FILE> \
--resource-pool <RESOURCE_POOL_NAME> \
--virtual-ip-interface <ip_interface_name> \
--vm-template <TEMPLATE_NAME> \
--kubeconfig=<management-cluster-kubeconfig-path>

Tip: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-
proxy, and --no-proxy and their related values in this command for it to be successful. More information is
available in Use HTTP or HTTPS Proxy with KIB Images on page 1076.

Manually Attach an NKP CLI Cluster to the Management Cluster

Procedure

When you create a Managed Cluster with the NKP CLI, it attaches automatically to the Management Cluster after a
few moments. However, if you do not set a workspace, the attached cluster will be created in the default workspace.
To ensure that the attached cluster is created in your desired workspace namespace, follow these instructions:

1. Confirm you have your MANAGED_CLUSTER_NAME variable set with the following command.
echo ${MANAGED_CLUSTER_NAME}

2. Retrieve your kubeconfig from the cluster you have created without setting a workspace.
nkp get kubeconfig --cluster-name ${MANAGED_CLUSTER_NAME} >
${MANAGED_CLUSTER_NAME}.conf

3. Note: This is only necessary if you never set the workspace of your cluster upon creation.

You can now either attach it in the UI, as described earlier in the section on attaching a cluster to a workspace through the UI, or attach your cluster to the workspace you want using the CLI.

4. Retrieve the workspace where you want to attach the cluster.


kubectl get workspaces -A

5. Set the WORKSPACE_NAMESPACE environment variable.


export WORKSPACE_NAMESPACE=<workspace-namespace>

6. You need to create a secret in the desired workspace before attaching the cluster to that workspace. Retrieve the
kubeconfig secret value of your cluster.
kubectl -n default get secret ${MANAGED_CLUSTER_NAME}-kubeconfig -o go-
template='{{.data.value}}{{ "\n"}}'

7. This will return a lengthy value. Copy this entire string for a secret using the template below as a reference.
Create a new attached-cluster-kubeconfig.yaml file.
apiVersion: v1
kind: Secret
metadata:
name: <your-managed-cluster-name>-kubeconfig
labels:
cluster.x-k8s.io/cluster-name: <your-managed-cluster-name>
type: cluster.x-k8s.io/secret
data:
value: <value-you-copied-from-secret-above>

8. Create this secret in the desired workspace.


kubectl apply -f attached-cluster-kubeconfig.yaml --namespace
${WORKSPACE_NAMESPACE}

9. Create this kommandercluster object to attach the cluster to the workspace.


cat << EOF | kubectl apply -f -
apiVersion: kommander.mesosphere.io/v1beta1
kind: KommanderCluster
metadata:
name: ${MANAGED_CLUSTER_NAME}
namespace: ${WORKSPACE_NAMESPACE}
spec:
kubeconfigRef:
name: ${MANAGED_CLUSTER_NAME}-kubeconfig
clusterRef:
capiCluster:
name: ${MANAGED_CLUSTER_NAME}
EOF

10. You can now view this cluster in your Workspace in the UI, and you can confirm its status by running the command below. It may take a few minutes for the cluster to reach "Joined" status.
kubectl get kommanderclusters -A
If you have several Pro clusters and want to turn one of them into a Managed cluster that is centrally administered
by a Management cluster, refer to Platform Expansion.



Next Step

Procedure

• Cluster Operations Management

vSphere Air-gapped FIPS Installation


This installation provides instructions on how to install NKP in a vSphere air-gapped environment using FIPS.
Remember, there are always more options for custom YAML in the Custom Installation and Additional
Infrastructure Tools section, but this will get you operating with basic features.
If not already done, see the documentation for:

• Resource Requirements on page 38


• Installing NKP on page 47
• Prerequisites for Installation on page 44

Note: For air-gapped, ensure you download the bundle nkp-air-gapped-


bundle_v2.12.0_linux_amd64.tar.gz and extract the tar file to a local directory. For more information, see
Downloading NKP on page 16.
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz

Further vSphere Prerequisites


Before you begin using NKP, you must ensure you already meet the other prerequisites in the vSphere
Prerequisites: All Installation Types section.

Section Contents

vSphere Air-gapped FIPS: Image Creation Overview


This diagram illustrates the image creation process:



Figure 10: vSphere Image Creation Process

The workflow on the left shows the creation of a base OS image in the vCenter vSphere client using inputs from
Packer. The workflow on the right shows how NKP uses that same base OS image to create CAPI-enabled VM
images for your cluster.
After creating the base image, the NKP image builder uses it to create a CAPI-enabled vSphere template that includes
the Kubernetes objects for the cluster. You can use that resulting template with the NKP create cluster command
to create the VM nodes in your cluster directly on a vCenter server. From that point, you can use NKP to provision
and manage your cluster.
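At a high level, the two workflows map to two commands. The following is only a simplified sketch; the exact flags, override files, and values for this air-gapped FIPS path are shown in the later steps of this section.
# 1. Build the CAPI-enabled vSphere template from the base OS image
konvoy-image build images/ova/<image.yaml> --overrides overrides/offline-fips.yaml
# 2. Create the cluster's VM nodes on vCenter from the resulting template
nkp create cluster vsphere --vm-template <TEMPLATE_NAME> <additional flags>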

vSphere Air-gapped FIPS: BaseOS Image in vCenter


Creating a base OS image from DVD ISO files is a one-time process. The base OS image file is created in the
vSphere Client for use in the vSphere VM template. Konvoy Image Builder (KIB) then uses the base OS image to
create a VM template that the Nutanix Kubernetes Platform (NKP) vSphere provider uses to configure Kubernetes
nodes.

The Base OS Image


For vSphere, the username is populated from the SSH_USERNAME environment variable, and the user authenticates
through either the SSH_PASSWORD or the SSH_PRIVATE_KEY_FILE environment variable; Packer requires these values
by default. This user needs administrator privileges. It is possible to configure a custom user and password when
building the OS image; however, that requires overriding the Konvoy Image Builder (KIB) configuration.
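For example, a minimal sketch of exporting these values before running KIB; the username, password, and key path shown here are placeholders for your environment.
export SSH_USERNAME="<builder-user>"
export SSH_PASSWORD="<builder-password>"
# or, instead of a password:
export SSH_PRIVATE_KEY_FILE="</path/to/private-key>"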
While creating the base OS image, it is important to take into consideration the following elements:

• Storage configuration: Nutanix recommends customizing disk partitions and not configuring a SWAP partition.



• Network configuration: as KIB must download and install packages, activating the network is required.
• Connect to Red Hat: if using Red Hat Enterprise Linux (RHEL), registering with Red Hat is required to configure
software repositories and install software packages.
• Software selection: Nutanix recommends choosing Minimal Install.
• NKP recommends installing with the packages provided by the operating system package managers. Use the
version that corresponds to the major version of your operating system.

vSphere Air-gapped FIPS: Loading the Registry


Before creating an air-gapped Kubernetes cluster, you need to load the required images in a local registry
for the Konvoy component.

About this task


If you do not already have a local registry set up, see the Local Registry Tools page for more information.

Note: For air-gapped, ensure you download the bundle nkp-air-gapped-


bundle_v2.12.0_linux_amd64.tar.gz and extract the tar file to a local directory. For more information, see
Downloading NKP on page 16.
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz

Procedure

1. Subsequent steps reference files from different directories within the extracted bundle. For example, for the
bootstrap cluster, change to the nkp-<version> directory, adjusting the path to your current location.
cd nkp-v2.12.0

2. Set environment variables with your registry address and credentials using the following commands.
export REGISTRY_URL="<https/http>://<registry-address>:<registry-port>"
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>
export REGISTRY_CA=<path to the cacert file on the bastion>

3. Execute the following command to load the air-gapped image bundle into your private registry, using the variables you set above.
nkp push bundle --bundle ./container-images/konvoy-image-bundle-v2.12.0.tar \
  --to-registry=${REGISTRY_URL} \
  --to-registry-username=${REGISTRY_USERNAME} \
  --to-registry-password=${REGISTRY_PASSWORD}

Note: It might take some time to push all the images to your image registry, depending on the network
performance of the machine you are running the script on and the registry.
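If you want to spot-check that the images landed in your registry, one optional approach, assuming your registry exposes the standard Docker Registry v2 API and allows catalog listing, is:
curl --cacert ${REGISTRY_CA} -u ${REGISTRY_USERNAME}:${REGISTRY_PASSWORD} ${REGISTRY_URL}/v2/_catalog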

Kommander Load Images

About this task


If you are operating in an air-gapped environment, a local container registry containing all the necessary installation
images, including the Kommander component images, is required. See below for instructions on how to push the
necessary images to this registry.



Procedure

1. Load the Kommander image bundle into your private registry using the command below.
nkp push bundle --bundle ./container-images/kommander-image-bundle-v2.12.0.tar \
  --to-registry=${REGISTRY_URL} \
  --to-registry-username=${REGISTRY_USERNAME} \
  --to-registry-password=${REGISTRY_PASSWORD}

2. Optional: If you have an Ultimate license, load the NKP Catalog Applications images.
nkp push bundle --bundle ./container-images/nkp-catalog-applications-image-bundle-v2.12.0.tar \
  --to-registry=${REGISTRY_URL} \
  --to-registry-username=${REGISTRY_USERNAME} \
  --to-registry-password=${REGISTRY_PASSWORD}

vSphere Air-gapped FIPS: Creating a CAPI VM Template


The Konvoy Image Builder (KIB) uses the values in image.yaml and the input base OS image to create a
vSphere template directly on the vCenter server.

About this task


You must have at least one image before creating a new cluster. If you have an image, this step in your configuration
is not required each time since that image can be used to spin up a new cluster. However, if you need different images
for different environments or providers, you will need to create a new custom image.

Note: Users need to perform the steps in the topic vSphere: Creating an Image before starting this procedure.

Procedure

1. Assuming you have downloaded nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz, extract the tar


file to a local directory.
tar -xzvf nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz && cd nkp-v2.12.0/kib

2. You will need to fetch the distro packages as well as other artifacts. By fetching the distro packages from distro
repositories, you get the latest security fixes available at machine image build time.

3. In your download location, there is a bundles directory with all the steps to create an OS package bundle for a
particular OS. To create it, run the new NKP command create-package-bundle. This builds an OS bundle
using the Kubernetes version defined in ansible/group_vars/all/defaults.yaml. Example command.
./konvoy-image create-package-bundle --os redhat-8.4 --output-directory=artifacts

» For FIPS, pass the flag: --fips


» For Red Hat Enterprise Linux (RHEL) OS, pass your Red Hat Subscription Manager credentials by exporting the
RHSM_ACTIVATION_KEY and RHSM_ORG_ID environment variables. Example:
export RHSM_ACTIVATION_KEY="-ci"
export RHSM_ORG_ID="1232131"

4. Build an image template with Konvoy Image Builder (KIB).

5. Follow the instructions below to build a vSphere template and, if applicable, include the --overrides
overrides/offline.yaml flag described in step 4 of the next procedure.



Create a vSphere Template for Your Cluster from a Base OS Image

Procedure

1. Set the following vSphere environment variables on the bastion VM host.


export VSPHERE_SERVER=your_vCenter_APIserver_URL
export VSPHERE_USERNAME=your_vCenter_user_name
export VSPHERE_PASSWORD=your_vCenter_password

2. Copy the base OS image file created in the vSphere Client to your desired location on the bastion VM host and
make a note of the path and file name.

3. Create an image.yaml file and add the following variables for vSphere. NKP uses this file and these variables as
inputs in the next step. To customize your image.yaml file, refer to this section: Customize your Image.

Note: This example is Ubuntu 20.04. You will need to replace the OS name below based on your OS. Also, refer to
the example YAML files located here: OVA YAML.

---
download_images: true
build_name: "ubuntu-2004"
packer_builder_type: "vsphere"
guestinfo_datasource_slug: "https://raw.githubusercontent.com/vmware/cloud-init-vmware-guestinfo"
guestinfo_datasource_ref: "v1.4.0"
guestinfo_datasource_script: "{{guestinfo_datasource_slug}}/{{guestinfo_datasource_ref}}/install.sh"
packer:
  cluster: "<VSPHERE_CLUSTER_NAME>"
  datacenter: "<VSPHERE_DATACENTER_NAME>"
  datastore: "<VSPHERE_DATASTORE_NAME>"
  folder: "<VSPHERE_FOLDER>"
  insecure_connection: "false"
  network: "<VSPHERE_NETWORK>"
  resource_pool: "<VSPHERE_RESOURCE_POOL>"
  template: "os-qualification-templates/d2iq-base-Ubuntu-20.04" # change the default value to your base template name
  vsphere_guest_os_type: "other4xLinux64Guest"
  guest_os_type: "ubuntu2004-64"
# goss params
distribution: "ubuntu"
distribution_version: "20.04"
# Use the following overrides to select the authentication method that can be used with the base template
# ssh_username: ""           # can be exported as environment variable 'SSH_USERNAME'
# ssh_password: ""           # can be exported as environment variable 'SSH_PASSWORD'
# ssh_private_key_file: ""   # can be exported as environment variable 'SSH_PRIVATE_KEY_FILE'
# ssh_agent_auth: false      # if set to true, ssh_password and ssh_private_key will be ignored

4. Create a vSphere VM template with your variation of the following command.


konvoy-image build images/ova/<image.yaml>

• Any additional configurations can be added to this command using --overrides flags as shown below:
1. Any credential overrides: --overrides overrides.yaml
2. For FIPS, add this flag: --overrides overrides/fips.yaml
3. For air-gapped, add this flag: --overrides overrides/offline-fips.yaml



5. The Konvoy Image Builder (KIB) uses the values in image.yaml and the input base OS image to create a
vSphere template directly on the vCenter server. This template contains the required artifacts needed to create a
Kubernetes cluster. When KIB successfully provisions the OS image, it creates a manifest file. The artifact_id
field of this file contains the AMI ID (AWS), template name (vSphere), or image name (GCP/Azure). For example:
{
  "name": "vsphere-clone",
  "builder_type": "vsphere-clone",
  "build_time": 1644985039,
  "files": null,
  "artifact_id": "konvoy-ova-vsphere-rhel-84-1.21.6-1644983717",
  "packer_run_uuid": "260e8110-77f8-ca94-e29e-ac7a2ae779c8",
  "custom_data": {
    "build_date": "2022-02-16T03:55:17Z",
    "build_name": "vsphere-rhel-84",
    "build_timestamp": "1644983717",
    [...]
  }
}

Tip: Now that you can see the template created in your vCenter, it is best to rename it to
nkp-<NKP_VERSION>-k8s-<K8S_VERSION>-<DISTRO>, for example nkp-2.4.0-k8s-1.24.6-ubuntu, to
keep templates organized.

6. The next steps are to deploy an NKP cluster using your vSphere template.

vSphere Air-gapped FIPS: Creating the Management Cluster


Create a vSphere Management Cluster in an air-gapped environment using FIPS.

About this task


Use this procedure to create a self-managed Management cluster with NKP. A self-managed cluster refers to one
in which the CAPI resources and controllers that describe and manage it are running on the same cluster they are
managing. First, you must name your cluster.

Before you begin

Procedure

• The table below identifies the current FIPS and etcd versions for this release.

Table 21: Supported FIPS Builds

Component Repository Version


Kubernetes docker.io/mesosphere v1.29.6+fips.0
etcd docker.io/mesosphere 3.5.10+fips.0

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if
the name has capital letters. See Kubernetes for more naming information.

• Give your cluster a unique name suitable for your environment.



• Set the environment variable:
export CLUSTER_NAME=<my-vsphere-cluster>

Create a New vSphere Kubernetes Cluster

About this task


The below instructions tell you how to create a cluster and have it automatically attach to the workspace you set
above. If you do not set a workspace, it will be created in the default workspace, and you need to take additional
steps to attach to a workspace later. For instructions on how to do this, see Attach a Kubernetes Cluster.

Procedure

1. Configure your cluster to use an existing local registry as a mirror when attempting to pull images. IMPORTANT:
The image must be created by Konvoy Image Builder in order to use the registry mirror feature. Use the
following commands to set the environment variables for vSphere.
export VSPHERE_SERVER=example.vsphere.url
export VSPHERE_USERNAME=example_username
export VSPHERE_PASSWORD=example_password

2. Load the image, using either the docker or podman command

» docker load -i konvoy-bootstrap-image-v2.12.0.tar

» podman load -i konvoy-bootstrap-image-v2.12.0.tar

3. Generate the Kubernetes cluster objects by copying and editing this command to include the correct values,
including the VM template name you assigned in the previous procedure.
nkp create cluster vsphere \
--cluster-name ${CLUSTER_NAME} \
--network <NETWORK_NAME> \
--control-plane-endpoint-host <CONTROL_PLANE_IP> \
--data-center <DATACENTER_NAME> \
--data-store <DATASTORE_NAME> \
--folder <FOLDER_NAME> \
--server <VCENTER_API_SERVER_URL> \
--ssh-public-key-file </path/to/key.pub> \
--resource-pool <RESOURCE_POOL_NAME> \
--vm-template konvoy-ova-vsphere-os-release-k8s_release-vsphere-timestamp \
--virtual-ip-interface <ip_interface_name> \
--extra-sans "127.0.0.1" \
--registry-mirror-url=${REGISTRY_URL} \
--registry-mirror-cacert=${REGISTRY_CA} \
--registry-mirror-username=${REGISTRY_USERNAME} \
--registry-mirror-password=${REGISTRY_PASSWORD} \
--kubernetes-version=v1.29.6+fips.0 \
--kubernetes-image-repository=docker.io/mesosphere \
--etcd-image-repository=docker.io/mesosphere --etcd-version=3.5.10+fips.0 \
--self-managed

Note: To increase Docker Hub's rate limit, use your Docker Hub credentials when creating the cluster by adding
the flags --registry-mirror-url=https://registry-1.docker.io --registry-mirror-username= --registry-mirror-password= to the nkp create cluster command.

» Flatcar OS flag: For Flatcar OS, use --os-hint flatcar to instruct the bootstrap cluster to make some changes
related to the installation paths.
» HTTP: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-proxy,
and --no-proxy and their related values in this command for it to be successful; a sketch is shown below. More
information is available in Configuring an HTTP or HTTPS Proxy on page 644.
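For reference, the following is a sketch of how the proxy flags attach to the same create command; the proxy addresses and no-proxy list are placeholders for your environment.
nkp create cluster vsphere \
  <flags from the command above> \
  --http-proxy=http://proxy.example.com:3128 \
  --https-proxy=http://proxy.example.com:3128 \
  --no-proxy=127.0.0.1,localhost,.svc,.cluster.local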

vSphere Air-gapped FIPS: Configure MetalLB


Create a MetalLB ConfigMap for your infrastructure.
It is recommended that an external load balancer (LB) serve as the control plane endpoint. To distribute request load
among the control plane machines, configure the load balancer to send requests to all the control plane machines,
and only to machines that are responding to API requests. If your environment already has a suitable load balancer,
you can continue the installation process with Installing Kommander. If you do not have one, you can use MetalLB
by creating a MetalLB ConfigMap for your infrastructure. Choose which of the following two protocols you want to
use to announce service IPs. MetalLB uses one of two protocols for exposing Kubernetes services:

• Layer 2, with Address Resolution Protocol (ARP)


• Border Gateway Protocol (BGP)
Select one of the following procedures to create your MetalLB manifest for further editing.

Layer 2 Configuration
Layer 2 mode is the simplest to configure: in many cases, you don’t need any protocol-specific configuration, only IP
addresses.
Layer 2 mode does not require the IPs to be bound to the network interfaces of your worker nodes. It works by
responding to ARP requests on your local network directly and giving the machine’s MAC address to clients.

• MetalLB IP address ranges or CIDR need to be within the node’s primary network subnet.
• MetalLB IP address ranges or CIDRs and node subnets must not conflict with the Kubernetes cluster pod and
service subnets.
For example, the following configuration gives MetalLB control over IPs from 192.168.1.240 to 192.168.1.250 and
configures Layer 2 mode:
The following values are generic; enter your specific values into the fields where applicable.
cat << EOF > metallb-conf.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
EOF
kubectl apply -f metallb-conf.yaml

BGP Configuration
For a basic configuration featuring one BGP router and one IP address range, you need four pieces of information:

• The router IP address that MetalLB needs to connect to.
• The router's autonomous system (AS) number.
• The AS number MetalLB should use.
• An IP address range, expressed as a CIDR prefix.
As an example, if you want to give MetalLB the range 192.168.10.0/24 and AS number 64500 and connect it to a
router at 10.0.0.1 with AS number 64501, your configuration will look like this:
cat << EOF > metallb-conf.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.0.0.1
      peer-asn: 64501
      my-asn: 64500
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 192.168.10.0/24
EOF
kubectl apply -f metallb-conf.yaml

vSphere Air-gapped FIPS: Installing Kommander


This section provides installation instructions for the Kommander component of NKP in an air-gapped
vSphere environment.

About this task


Once you have installed the Konvoy component of NKP, you will continue with the installation of the
Kommander component that will bring up the UI dashboard.

Tip: Tips and Recommendations

• The --kubeconfig=${CLUSTER_NAME}.conf flag ensures that you install Kommander on the correct
cluster. For alternatives, see Provide Context for Commands with a kubeconfig File.
• Applications can take longer to deploy, and time out the installation. Add the --wait-timeout <time
to wait> flag and specify a period of time (for example, 1h) to allocate more time to the deployment
of applications.



• If the Kommander installation fails, or you want to reconfigure applications, rerun the install
command to retry.

Prerequisites:

• Ensure you have reviewed all Prerequisites for Install.


• Ensure you have a default StorageClass.
• Note the name of the cluster where you want to install Kommander. If you do not know the cluster name, use
kubectl get clusters -A to display and find it.
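For the default StorageClass prerequisite above, a quick optional check is to list the storage classes and look for "(default)" next to one of the entries; the class name itself varies by provider.
kubectl get storageclass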

Create your Kommander Installation Configuration File

Procedure

1. Set the environment variable for your cluster.


export CLUSTER_NAME=<your-management-cluster-name>

2. Copy the kubeconfig file of your Management cluster to your local directory.
nkp get kubeconfig -c ${CLUSTER_NAME} >> ${CLUSTER_NAME}.conf

3. Create a configuration file for the deployment.


nkp install kommander --init > kommander.yaml

4. If required: Customize your kommander.yaml.

a. See the Customizations page for customization options. Some options include Custom Domains and
Certificates, HTTP proxy, and External Load Balancer.

5. Enable NKP Catalog Applications and install Kommander: in the same kommander.yaml from the previous
section, add these values (if you are enabling NKP Catalog Apps) for NKP Catalog Applications.
apiVersion: config.kommander.mesosphere.io/v1alpha1
kind: Installation
catalog:
  repositories:
    - name: NKP-catalog-applications
      labels:
        kommander.d2iq.io/project-default-catalog-repository: "true"
        kommander.d2iq.io/workspace-default-catalog-repository: "true"
        kommander.d2iq.io/gitapps-gitrepository-type: "NKP"
      gitRepositorySpec:
        url: https://github.com/mesosphere/NKP-catalog-applications
        ref:
          tag: v2.12.0

6. Use the customized kommander.yaml to install NKP.


nkp install kommander --installer-config kommander.yaml --kubeconfig=${CLUSTER_NAME}.conf

Note: If you only want to enable catalog applications to an existing configuration, add these values to an existing
installer configuration file to maintain your Management cluster’s settings.
If you want to enable NKP Catalog applications after installing NKP, see the topic Configuring NKP
Catalog Applications after Installing NKP.



vSphere Air-gapped FIPS: Verifying your Installation and UI Log in
Verify Kommander Install and Log in to the Dashboard UI

About this task


Verify Kommander Installation

Note: If the Kommander installation fails or you wish to reconfigure applications, you can rerun the install command
to retry the installation.

Procedure
You can check the status of the installation using the following command.
kubectl -n kommander wait --for condition=Ready helmreleases --all --timeout 15m

Note: If you prefer the CLI to not wait for all applications to become ready, you can set the --wait=false flag.

The command first waits for each of the Helm charts to reach its Ready condition, eventually resulting in output
resembling the following:
helmrelease.helm.toolkit.fluxcd.io/centralized-grafana condition met
helmrelease.helm.toolkit.fluxcd.io/dex condition met
helmrelease.helm.toolkit.fluxcd.io/dex-k8s-authenticator condition met
helmrelease.helm.toolkit.fluxcd.io/fluent-bit condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-logging condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-loki condition met
helmrelease.helm.toolkit.fluxcd.io/karma condition met
helmrelease.helm.toolkit.fluxcd.io/kommander condition met
helmrelease.helm.toolkit.fluxcd.io/kommander-appmanagement condition met
helmrelease.helm.toolkit.fluxcd.io/kube-prometheus-stack condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/kubefed condition met
helmrelease.helm.toolkit.fluxcd.io/kubernetes-dashboard condition met
helmrelease.helm.toolkit.fluxcd.io/kubetunnel condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator-logging condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-adapter condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/reloader condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph-cluster condition met
helmrelease.helm.toolkit.fluxcd.io/thanos condition met
helmrelease.helm.toolkit.fluxcd.io/traefik condition met
helmrelease.helm.toolkit.fluxcd.io/traefik-forward-auth-mgmt condition met
helmrelease.helm.toolkit.fluxcd.io/velero condition met

Failed HelmReleases

Procedure
If an application fails to deploy, check the status of a HelmRelease using the following command.
kubectl -n kommander get helmrelease <HELMRELEASE_NAME>

If you find any HelmReleases in a "broken" release state, such as "exhausted" or "another rollback/release in
progress", trigger a reconciliation of the HelmRelease using the following commands.
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": true}]'
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": false}]'



Log in to the UI

Procedure

1. By default, you can log in to the Kommander UI with the credentials provided by the following command.
nkp open dashboard --kubeconfig=${CLUSTER_NAME}.conf

2. Retrieve your credentials at any time if necessary.


kubectl -n kommander get secret NKP-credentials -o go-template='Username: {{.data.username|base64decode}}{{ "\n"}}Password: {{.data.password|base64decode}}{{ "\n"}}'

3. Retrieve the URL used for accessing the UI with the following.
kubectl -n kommander get svc kommander-traefik -o go-template='https://{{with index .status.loadBalancer.ingress 0}}{{or .hostname .ip}}{{end}}/NKP/kommander/dashboard{{ "\n"}}'
Only use these static credentials to access the UI for configuring an external identity provider. Treat them as
backup credentials rather than using them for normal access.

a. Rotate the password using the following command.


nkp experimental rotate dashboard-password
The output displays the new password:
Password: kqZ31lMBSCLcBjUKVwLJMQL2PxalipIzZw5Pjyw09wDqjWV3dz2wPSSBYi09JGJp

Dashboard UI Functions

Procedure

After installing the Konvoy component and building a cluster, as well as successfully installing Kommander and
logging in to the UI, you are now ready to customize configurations using the Cluster Operations Management
section of the documentation. The majority of this customization, such as attaching clusters and deploying
applications, takes place in the dashboard or UI of NKP. The Cluster Operations section allows you to manage
cluster operations and their application workloads to optimize your organization's productivity.

• Continue to the NKP Dashboard.

vSphere Air-gapped FIPS: Creating Managed Clusters Using the NKP CLI
This topic explains how to continue using the CLI to create managed clusters rather than switching to the
UI dashboard.

About this task


After initial cluster creation, you can create additional clusters from the CLI. In a previous step, the new cluster
was created as self-managed, which allows it to be a Management cluster or a standalone cluster. Subsequent new
clusters are not self-managed, as they will likely be Managed or Attached clusters under this Management cluster.

Note: When creating Managed clusters, you do not need to create and move CAPI objects or install the
Kommander component. Those tasks are only done on Management clusters!
Your new managed cluster needs to be part of a workspace under a management cluster. To make the new
managed cluster a part of a Workspace, set that workspace environment variable.



Procedure

1. If you have an existing Workspace name, run this command to find the name.
kubectl get workspace -A

2. When you have the Workspace name, set the WORKSPACE_NAMESPACE environment variable.
export WORKSPACE_NAMESPACE=<workspace_namespace>

Note: If you need to create a new Workspace, follow the instructions to Create a New Workspace

Name Your Cluster

About this task


Each cluster must have a unique name.
After you have defined the infrastructure and control plane endpoints, you can proceed to create the cluster by
following these steps.
First, you must name your cluster. Then, you run the command to deploy it.

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail
if the name has capital letters. See Kubernetes for more naming information.
When specifying the cluster-name, you must use the same cluster-name as used when defining your
inventory objects.

Perform both steps to name the cluster:

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable.


export CLUSTER_NAME=<my-managed-vsphere-cluster>

Create a Kubernetes Cluster

About this task


The below instructions tell you how to create a cluster and have it automatically attach to the workspace you set
above. If you do not set a workspace, it will be created in the default workspace, and you need to take additional
steps to attach to a workspace later. For instructions on how to do this, see Attach a Kubernetes Cluster.

Procedure

1. Configure your cluster to use an existing local registry as a mirror when attempting to pull images. IMPORTANT:
The image must be created by Konvoy Image Builder in order to use the registry mirror feature.
export REGISTRY_URL=<https/http>://<registry-address>:<registry-port>
export REGISTRY_CA=<path to the CA on the bastion>
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>



2. Load the images, using either the docker or podman command.

» docker load -i konvoy-bootstrap-image-v2.12.0.tar

» podman load -i konvoy-bootstrap-image-v2.12.0.tar

3. Generate the Kubernetes cluster objects by copying and editing this command to include the correct values,
including the VM template name you assigned in the previous procedure.
nkp create cluster vsphere \
--cluster-name ${MANAGED_CLUSTER_NAME} \
--additional-tags=owner=$(whoami) \
--namespace ${WORKSPACE_NAMESPACE} \
--network <NETWORK_NAME> \
--control-plane-endpoint-host <CONTROL_PLANE_IP> \
--data-center <DATACENTER_NAME> \
--data-store <DATASTORE_NAME> \
--folder <FOLDER_NAME> \
--server <VCENTER_API_SERVER_URL> \
--ssh-public-key-file </path/to/key.pub> \
--resource-pool <RESOURCE_POOL_NAME> \
--vm-template konvoy-ova-vsphere-os-release-k8s_release-vsphere-timestamp \
--virtual-ip-interface <ip_interface_name> \
--extra-sans "127.0.0.1" \
--registry-mirror-url=${REGISTRY_URL} \
--registry-mirror-cacert=${REGISTRY_CA} \
--registry-mirror-username=${REGISTRY_USERNAME} \
--registry-mirror-password=${REGISTRY_PASSWORD} \
--kubernetes-version=v1.29.6+fips.0 \
--kubernetes-image-repository=docker.io/mesosphere \
--etcd-image-repository=docker.io/mesosphere \
--etcd-version=3.5.10+fips.0 \
--kubeconfig=<management-cluster-kubeconfig-path>

Tip: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-proxy, and --no-proxy and their related values in this command for it to be successful. More information is available in Configuring an HTTP or HTTPS Proxy on page 644.

Manually Attach an NKP CLI Cluster to the Management Cluster

Procedure

When you create a Managed Cluster with the Nutanix Kubernetes Platform (NKP) CLI, it attaches automatically to
the Management Cluster after a few moments. However, if you do not set a workspace, the attached cluster will be
created in the default workspace. To ensure that the attached cluster is created in your desired workspace namespace,
follow these instructions:

1. Confirm you have your MANAGED_CLUSTER_NAME variable set with the following command.
echo ${MANAGED_CLUSTER_NAME}

2. Retrieve your kubeconfig from the cluster you have created without setting a workspace.
nkp get kubeconfig --cluster-name ${MANAGED_CLUSTER_NAME} > ${MANAGED_CLUSTER_NAME}.conf



3. Note: This step is only necessary if you did not set the workspace of your cluster upon creation.

You can now either attach the cluster to a workspace through the UI, as described earlier, or attach it to the
desired workspace using the CLI as shown in the following steps.

4. Retrieve the workspace where you want to attach the cluster.


kubectl get workspaces -A

5. Set the WORKSPACE_NAMESPACE environment variable.


export WORKSPACE_NAMESPACE=<workspace-namespace>

6. You need to create a secret in the desired workspace before attaching the cluster to that workspace. Retrieve the
kubeconfig secret value of your cluster.
kubectl -n default get secret ${MANAGED_CLUSTER_NAME}-kubeconfig -o go-template='{{.data.value}}{{ "\n"}}'

7. This command returns a lengthy value. Copy the entire string, then create a new attached-cluster-kubeconfig.yaml file, using the template below as a reference.
apiVersion: v1
kind: Secret
metadata:
  name: <your-managed-cluster-name>-kubeconfig
  labels:
    cluster.x-k8s.io/cluster-name: <your-managed-cluster-name>
type: cluster.x-k8s.io/secret
data:
  value: <value-you-copied-from-secret-above>

8. Create this secret in the desired workspace.


kubectl apply -f attached-cluster-kubeconfig.yaml --namespace
${WORKSPACE_NAMESPACE}

9. Create this kommandercluster object to attach the cluster to the workspace.


cat << EOF | kubectl apply -f -
apiVersion: kommander.mesosphere.io/v1beta1
kind: KommanderCluster
metadata:
  name: ${MANAGED_CLUSTER_NAME}
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  kubeconfigRef:
    name: ${MANAGED_CLUSTER_NAME}-kubeconfig
  clusterRef:
    capiCluster:
      name: ${MANAGED_CLUSTER_NAME}
EOF

10. You can now view this cluster in your Workspace in the UI, and you can confirm its status by running the command below. It may take a few minutes for the cluster to reach "Joined" status.
kubectl get kommanderclusters -A
If you have several Pro clusters and want to turn one of them into a Managed cluster that is centrally administered
by a Management cluster, refer to Platform Expansion.



Next Step

Procedure

• Cluster Operations Management

VMware Cloud Director Installation Options


There is no "Basic Install" for VMware Cloud Director, since each tenant will have different needs. For Managed
Service Providers (MSPs), refer to the Cloud Director for Service Providers section of the documentation
regarding installation, plus image and OVA export and import for your Tenant Organizations.
Before continuing to install Nutanix Kubernetes Platform (NKP) on Cloud Director, verify that your VMware
vSphere Client environment is running vSphere Client version v6.7.x with Update 3 or later with ESXi.
You must be able to reach the vSphere API endpoint from where the Konvoy command line interface (CLI) runs
and have a vSphere account with Administrator privileges. A Red Hat subscription (username and password) is
required for downloading DVD ISOs, along with valid vSphere values for the following: the vCenter API server
URL, the Datacenter name, and the Zone name that contains the ESXi hosts for your cluster's nodes.
If not already done, see the documentation for:

• Resource Requirements on page 38


• Installing NKP on page 47
• Prerequisites for Installation on page 44

Next Step
Continue to the VMware Cloud Director Infrastructure for Service Providers section of the Custom Install and
Infrastructure Tools chapter.

Azure Installation Options


For an environment on Azure infrastructure, installation options based on those environment variables are
provided for you in this location.
Remember, there are always more options for custom YAML in the Custom Installation and Additional
Infrastructure Tools section, but this will get you operating in the most common scenarios.
If not already done, see the documentation for:

• Resource Requirements on page 38


• Installing NKP on page 47
• Prerequisites for Installation on page 44

Additional Resource Information Specific to Azure

• Control plane nodes - NKP on Azure defaults to deploying a Standard_D4s_v3 virtual machine with a 128 GiB
volume for the OS and an 80GiB volume for etcd storage, which meets the above resource requirements.
• Worker nodes - NKP on Azure defaults to deploying a Standard_D8s_v3 virtual machine with an 80 GiB
volume for the OS, which meets the above resource requirements.

Section Contents



Azure Installation
This installation provides instructions on how to install NKP in an Azure non-air-gapped environment.
Remember, there are always more options for custom YAML in the Custom Installation and Additional
Infrastructure Tools section, but this will get you operating with basic features.
If not already done, see the documentation for:

• Resource Requirements on page 38


• Installing NKP on page 47
• Prerequisites for Installation on page 44

Azure Prerequisites
Before you begin using Konvoy with Azure, you must:
1. Sign in to Azure:
az login
[
  {
    "cloudName": "AzureCloud",
    "homeTenantId": "a1234567-b132-1234-1a11-1234a5678b90",
    "id": "b1234567-abcd-11a1-a0a0-1234a5678b90",
    "isDefault": true,
    "managedByTenants": [],
    "name": "Nutanix Developer Subscription",
    "state": "Enabled",
    "tenantId": "a1234567-b132-1234-1a11-1234a5678b90",
    "user": {
      "name": "user@example.com",
      "type": "user"
    }
  }
]

2. Create an Azure Service Principal (SP) by running the following commands:


1. If you have more than one Azure account, run this command to identify your account:
echo $(az account show --query id -o tsv)

2. Run this command to ensure you are pointing to the correct Azure subscription ID:
az account set --subscription "Nutanix Developer Subscription"

3. If an SP with the name exists, this command rotates the password:


az ad sp create-for-rbac --role contributor --name "$(whoami)-konvoy" --scopes=/subscriptions/$(az account show --query id -o tsv) --query "{ client_id: appId, client_secret: password, tenant_id: tenant }"
Output:
{
  "client_id": "7654321a-1a23-567b-b789-0987b6543a21",
  "client_secret": "Z79yVstq_E.R0R7RUUck718vEHSuyhAB0C",
  "tenant_id": "a1234567-b132-1234-1a11-1234a5678b90"
}

3. Set the AZURE_CLIENT_SECRET and related environment variables:

export AZURE_CLIENT_SECRET="<azure_client_secret>"   # Z79yVstq_E.R0R7RUUck718vEHSuyhAB0C
export AZURE_CLIENT_ID="<client_id>"                 # 7654321a-1a23-567b-b789-0987b6543a21
export AZURE_TENANT_ID="<tenant_id>"                 # a1234567-b132-1234-1a11-1234a5678b90
export AZURE_SUBSCRIPTION_ID="<subscription_id>"     # b1234567-abcd-11a1-a0a0-1234a5678b90

4. Ensure you have an override file to configure specific attributes of your Azure image.

Azure: Creating an Image


Learn how to build a custom image for use with NKP.

About this task


Extract the bundle and cd into the extracted konvoy-image-bundle-$VERSION_$OS folder. The bundled version
of konvoy-image contains an embedded docker image that contains all the requirements for the building.

Note: The konvoy-image binary and all supporting folders are also extracted. When extracted, konvoy-image
bind mounts the current working directory (${PWD}) into the container to be used.

This procedure describes how to use the Konvoy Image Builder (KIB) to create a Cluster API compliant Azure
machine image. KIB uses variable overrides to specify base images and container images to use in your new image.
The default Azure image is not recommended for use in production. Instead, build the image with KIB for Azure to
take advantage of enhanced cluster operations. Explore the Customize your Image topic for more options.
For more information about using the image to create clusters, refer to the Azure Create a New Cluster section of
the documentation.

Before you begin

• Download the Konvoy Image Builder bundle for your version of NKP.
• Check the Supported Kubernetes Version for your Provider.
• Create a working Docker setup.
Extract the bundle and cd into the extracted konvoy-image-bundle-$VERSION_$OS folder. The bundled version
of konvoy-image contains an embedded docker image that contains all the requirements for the building.
The konvoy-image binary and all supporting folders are also extracted. When extracted, konvoy-image bind
mounts the current working directory (${PWD}) into the container to be used.

Procedure
Run the konvoy-image command to build and validate the image.
konvoy-image build azure --client-id ${AZURE_CLIENT_ID} --tenant-id ${AZURE_TENANT_ID}
--overrides override-source-image.yaml images/azure/ubuntu-2004.yaml
By default, the image builder builds in the westus2 location. To specify another location, set the --location flag
(the example below changes the location to eastus):
konvoy-image build azure --client-id ${AZURE_CLIENT_ID} --tenant-id ${AZURE_TENANT_ID}
--location eastus --overrides override-source-image.yaml images/azure/centos-7.yaml
When the command is complete, the image id is printed and written to the ./packer.pkr.hcl file. This file has
an artifact_id field whose value provides the name of the image. Then, specify this image ID when creating the
cluster.



Image Gallery

About this task


By default, Konvoy Image Builder creates a resource group, gallery, and image name to store the resulting
image in.

Procedure

• To specify a specific resource group, gallery, or image name, use the following flags:
--gallery-image-locations string a list of locations to publish the image
(default same as location)
--gallery-image-name string the gallery image name to publish the image to
--gallery-image-offer string the gallery image offer to set (default "nkp")
--gallery-image-publisher string the gallery image publisher to set (default
"nkp")
--gallery-image-sku string the gallery image sku to set
--gallery-name string the gallery name to publish the image in
(default "nkp")
--resource-group string the resource group to create the image in
(default "nkp")

Azure: Creating the Management Cluster


Create an Azure Management Cluster in a non-air-gapped environment.

About this task


Use this procedure to create a self-managed Management cluster with NKP. A self-managed cluster refers to one
in which the CAPI resources and controllers that describe and manage it are running on the same cluster they are
managing. First, you must name your cluster.
Name Your Cluster

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if
the name has capital letters. For more naming information, see https://kubernetes.io/docs/concepts/overview/
working-with-objects/names/.

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable using the command export CLUSTER_NAME=<azure-example>.

Encode your Azure Credential Variables:

Procedure
Base64 encode the Azure environment variables that you set in the Azure installation prerequisites step.
export AZURE_SUBSCRIPTION_ID_B64="$(echo -n "${AZURE_SUBSCRIPTION_ID}" | base64 | tr -d '\n')"
export AZURE_TENANT_ID_B64="$(echo -n "${AZURE_TENANT_ID}" | base64 | tr -d '\n')"
export AZURE_CLIENT_ID_B64="$(echo -n "${AZURE_CLIENT_ID}" | base64 | tr -d '\n')"
export AZURE_CLIENT_SECRET_B64="$(echo -n "${AZURE_CLIENT_SECRET}" | base64 | tr -d '\n')"



Create an Azure Kubernetes Cluster

About this task


If you use these instructions to create a cluster on Azure using the NKP default settings without any edits to
configuration files or additional flags, your cluster is deployed on an Ubuntu 20.04 operating system image with 3
control plane nodes and 4 worker nodes.
NKP uses Azure CSI as the default storage provider. You can use a Kubernetes CSI-compatible storage solution
that is suitable for production. See the Kubernetes documentation called Changing the Default Storage Class for
more information.
Availability zones (AZs) are isolated locations within datacenter regions from which public cloud services originate
and operate. Because all the nodes in a node pool are deployed in a single AZ, you may wish to create additional
node pools to ensure your cluster has nodes deployed in multiple AZs.

Procedure
Run this command to create your Kubernetes cluster using any relevant flags.
nkp create cluster azure \
--cluster-name=${CLUSTER_NAME} \
--self-managed

Note: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-proxy, and --no-proxy and their related values in this command for it to be successful. More information is available in Configuring an HTTP or HTTPS Proxy on page 644.

If you want to monitor or verify the installation of your clusters, refer to the topic: Verify your Cluster and NKP
Installation
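As a quick optional check from the CLI, you can list the Cluster API resources for the new cluster; the linked topic covers verification in more detail.
nkp get kubeconfig -c ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf
kubectl get clusters,machines -A --kubeconfig=${CLUSTER_NAME}.conf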

Azure: Install Kommander


This section provides installation instructions for the Kommander component of NKP in a non-air-gapped
Azure environment.

About this task


Once you have installed the Konvoy component of NKP, you will continue with the installation of the
Kommander component that will bring up the UI dashboard.

Tip: Tips and Recommendations

• The --kubeconfig=${CLUSTER_NAME}.conf flag ensures that you install Kommander on the correct
cluster. For alternatives, see Provide Context for Commands with a kubeconfig File.
• Applications can take longer to deploy, and time out the installation. Add the --wait-timeout <time
to wait> flag and specify a period of time (for example, 1h) to allocate more time to the deployment
of applications.
• If the Kommander installation fails, or you want to reconfigure applications, rerun the install
command to retry.

Prerequisites:

• Ensure you have reviewed all Prerequisites for Install.


• Ensure you have a default StorageClass.
• Note the name of the cluster where you want to install Kommander. If you do not know the cluster name, use
kubectl get clusters -A to display and find it.



Create your Kommander Installation Configuration File

Procedure

1. Set the environment variable for your cluster.


export CLUSTER_NAME=<your-management-cluster-name>

2. Copy the kubeconfig file of your Management cluster to your local directory.
nkp get kubeconfig -c ${CLUSTER_NAME} >> ${CLUSTER_NAME}.conf

3. Create a configuration file for the deployment.


nkp install kommander --init > kommander.yaml

4. If required: Customize your kommander.yaml.

a. See the Customizations page for customization options. Some options include Custom Domains and
Certificates, HTTP proxy, and External Load Balancer.

5. Only required if your cluster uses a custom AWS VPC and requires an internal load-balancer; set the traefik
annotation to create an internal-facing ELB.
apps:
  traefik:
    enabled: true
    values: |
      service:
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-internal: "true"

6. Enable NKP Catalog Applications and install Kommander: in the same kommander.yaml from the previous
section, add these values (if you are enabling NKP Catalog Apps) for NKP Catalog Applications.
apiVersion: config.kommander.mesosphere.io/v1alpha1
kind: Installation
catalog:
  repositories:
    - name: NKP-catalog-applications
      labels:
        kommander.d2iq.io/project-default-catalog-repository: "true"
        kommander.d2iq.io/workspace-default-catalog-repository: "true"
        kommander.d2iq.io/gitapps-gitrepository-type: "NKP"
      gitRepositorySpec:
        url: https://github.com/mesosphere/NKP-catalog-applications
        ref:
          tag: v2.12.0

7. Use the customized kommander.yaml to install NKP.


nkp install kommander --installer-config kommander.yaml --kubeconfig=${CLUSTER_NAME}.conf

Note: If you only want to enable catalog applications to an existing configuration, add these values to an existing
installer configuration file to maintain your Management cluster’s settings.
If you want to enable NKP Catalog applications after installing NKP, see the topic Configuring NKP
Catalog Applications after Installing NKP.



Azure: Verifying your Installation and UI Log in
Verify Kommander Install and Log in to the Dashboard UI

About this task


Verify Kommander Installation

Note: If the Kommander installation fails or you wish to reconfigure applications, you can rerun the install command
to retry the installation.

Procedure
You can check the status of the installation using the following command.
kubectl -n kommander wait --for condition=Ready helmreleases --all --timeout 15m

Note: If you prefer the CLI to not wait for all applications to become ready, you can set the --wait=false flag.

The command first waits for each of the Helm charts to reach its Ready condition, eventually resulting in output
resembling the following:
helmrelease.helm.toolkit.fluxcd.io/centralized-grafana condition met
helmrelease.helm.toolkit.fluxcd.io/dex condition met
helmrelease.helm.toolkit.fluxcd.io/dex-k8s-authenticator condition met
helmrelease.helm.toolkit.fluxcd.io/fluent-bit condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-logging condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-loki condition met
helmrelease.helm.toolkit.fluxcd.io/karma condition met
helmrelease.helm.toolkit.fluxcd.io/kommander condition met
helmrelease.helm.toolkit.fluxcd.io/kommander-appmanagement condition met
helmrelease.helm.toolkit.fluxcd.io/kube-prometheus-stack condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/kubefed condition met
helmrelease.helm.toolkit.fluxcd.io/kubernetes-dashboard condition met
helmrelease.helm.toolkit.fluxcd.io/kubetunnel condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator-logging condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-adapter condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/reloader condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph-cluster condition met
helmrelease.helm.toolkit.fluxcd.io/thanos condition met
helmrelease.helm.toolkit.fluxcd.io/traefik condition met
helmrelease.helm.toolkit.fluxcd.io/traefik-forward-auth-mgmt condition met
helmrelease.helm.toolkit.fluxcd.io/velero condition met

Failed HelmReleases

Procedure
If an application fails to deploy, check the status of a HelmRelease using the following command.
kubectl -n kommander get helmrelease <HELMRELEASE_NAME>

If you find any HelmReleases in a "broken" release state, such as "exhausted" or "another rollback/release in
progress", trigger a reconciliation of the HelmRelease using the following commands.
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": true}]'
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": false}]'



Log in to the UI

Procedure

1. By default, you can log in to the Kommander UI with the credentials provided by the following command.
nkp open dashboard --kubeconfig=${CLUSTER_NAME}.conf

2. Retrieve your credentials at any time if necessary.


kubectl -n kommander get secret NKP-credentials -o go-template='Username: {{.data.username|base64decode}}{{ "\n"}}Password: {{.data.password|base64decode}}{{ "\n"}}'

3. Retrieve the URL used for accessing the UI with the following.
kubectl -n kommander get svc kommander-traefik -o go-template='https://{{with index .status.loadBalancer.ingress 0}}{{or .hostname .ip}}{{end}}/NKP/kommander/dashboard{{ "\n"}}'
Only use these static credentials to access the UI for configuring an external identity provider. Treat them as
backup credentials rather than using them for normal access.

a. Rotate the password using the following command.


nkp experimental rotate dashboard-password
The output displays the new password:
Password: kqZ31lMBSCLcBjUKVwLJMQL2PxalipIzZw5Pjyw09wDqjWV3dz2wPSSBYi09JGJp

Dashboard UI Functions

Procedure

After installing the Konvoy component and building a cluster, as well as successfully installing Kommander and
logging in to the UI, you are now ready to customize configurations using the Day 2 Cluster Operations Management
section of the documentation. The majority of this customization, such as attaching clusters and deploying
applications, takes place in the dashboard or UI of NKP. The Day 2 section allows you to manage cluster operations
and their application workloads to optimize your organization's productivity.

• Continue to the NKP Dashboard.

Azure: Creating Managed Clusters Using the NKP CLI


This topic explains how to continue using the CLI to create managed clusters rather than switching to the
UI dashboard.

About this task


After initial cluster creation, you can create additional clusters from the CLI. In a previous step, the new cluster
was created as self-managed, which allows it to be a Management cluster or a standalone cluster. Subsequent new
clusters are not self-managed, as they will likely be Managed or Attached clusters under this Management cluster.

Note: When creating Managed clusters, you do not need to create and move CAPI objects or install the
Kommander component. Those tasks are only done on Management clusters!
Your new managed cluster needs to be part of a workspace under a management cluster. To make the new
managed cluster a part of a Workspace, set that workspace environment variable.



Procedure

1. If you have an existing Workspace name, run this command to find the name.
kubectl get workspace -A

2. When you have the Workspace name, set the WORKSPACE_NAMESPACE environment variable.
export WORKSPACE_NAMESPACE=<workspace_namespace>

Note: If you need to create a new Workspace, follow the instructions to Create a New Workspace

Name Your Cluster

About this task


Each cluster must have a unique name.
After you have defined the infrastructure and control plane endpoints, you can proceed to create the cluster by
following these steps.
First, you must name your cluster. Then, you run the command to deploy it.

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if
the name has capital letters. See Kubernetes for more naming information.
When specifying the cluster-name, you must use the same cluster-name as used when defining your
inventory objects.

Perform both steps to name the cluster:

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable.


export MANAGED_CLUSTER_NAME=<azure-additional>

Create a Kubernetes Cluster

About this task


The below instructions tell you how to create a cluster and have it automatically attach to the workspace you set
above. If you do not set a workspace, it will be created in the default workspace, and you need to take additional
steps to attach to a workspace later. For instructions on how to do this, see Attach a Kubernetes Cluster.

Procedure
Execute this command to create your additional Kubernetes cluster using any relevant flags. This creates a new
non-self-managed cluster that can be managed by the Management cluster you created in the previous section.
nkp create cluster azure \
--cluster-name=${MANAGED_CLUSTER_NAME} \
--namespace=${WORKSPACE_NAMESPACE} \
--additional-tags=owner=$(whoami) \
--kubeconfig=<management-cluster-kubeconfig-path>



Tip: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-
proxy, and --no-proxy and their related values in this command for it to be successful. More information is
available in Configuring an HTTP or HTTPS Proxy on page 644.
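
For illustration only, a hedged sketch of the same create command with the proxy flags appended is shown below; the proxy address and no-proxy list are placeholders, and the exact values your environment requires are described in the proxy topic referenced above.
nkp create cluster azure \
 --cluster-name=${MANAGED_CLUSTER_NAME} \
 --namespace=${WORKSPACE_NAMESPACE} \
 --additional-tags=owner=$(whoami) \
 --kubeconfig=<management-cluster-kubeconfig-path> \
 --http-proxy="http://proxy.example.internal:3128" \
 --https-proxy="http://proxy.example.internal:3128" \
 --no-proxy="127.0.0.1,localhost,.svc,.cluster.local"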

Manually Attach an NKP CLI Cluster to the Management Cluster

Procedure

When you create a Managed Cluster with the NKP CLI, it attaches automatically to the Management Cluster after a
few moments. However, if you do not set a workspace, the attached cluster will be created in the default workspace.
To ensure that the attached cluster is created in your desired workspace namespace, follow these instructions:

1. Confirm you have your MANAGED_CLUSTER_NAME variable set with the following command.
echo ${MANAGED_CLUSTER_NAME}

2. Retrieve your kubeconfig from the cluster you have created without setting a workspace.
nkp get kubeconfig --cluster-name ${MANAGED_CLUSTER_NAME} > ${MANAGED_CLUSTER_NAME}.conf

3. Note: This step is only necessary if you did not set the workspace of your cluster upon creation.

You can now either attach the cluster to a workspace through the UI, as described in the earlier section on attaching clusters, or attach your cluster to the desired workspace using the CLI, as described in the following steps.

4. Retrieve the workspace where you want to attach the cluster.


kubectl get workspaces -A

5. Set the WORKSPACE_NAMESPACE environment variable.


export WORKSPACE_NAMESPACE=<workspace-namespace>

6. You need to create a secret in the desired workspace before attaching the cluster to that workspace. Retrieve the
kubeconfig secret value of your cluster.
kubectl -n default get secret ${MANAGED_CLUSTER_NAME}-kubeconfig -o go-template='{{.data.value}}{{ "\n"}}'

7. This will return a lengthy value. Copy this entire string for a secret using the template below as a reference.
Create a new attached-cluster-kubeconfig.yaml file.
apiVersion: v1
kind: Secret
metadata:
name: <your-managed-cluster-name>-kubeconfig
labels:
cluster.x-k8s.io/cluster-name: <your-managed-cluster-name>
type: cluster.x-k8s.io/secret
data:
value: <value-you-copied-from-secret-above>

8. Create this secret in the desired workspace.


kubectl apply -f attached-cluster-kubeconfig.yaml --namespace ${WORKSPACE_NAMESPACE}



9. Create this kommandercluster object to attach the cluster to the workspace.
cat << EOF | kubectl apply -f -
apiVersion: kommander.mesosphere.io/v1beta1
kind: KommanderCluster
metadata:
name: ${MANAGED_CLUSTER_NAME}
namespace: ${WORKSPACE_NAMESPACE}
spec:
kubeconfigRef:
name: ${MANAGED_CLUSTER_NAME}-kubeconfig
clusterRef:
capiCluster:
name: ${MANAGED_CLUSTER_NAME}
EOF

10. You can now view this cluster in your Workspace in the UI and you can confirm its status by running the below
command. It may take a few minutes to reach "Joined" status.
kubectl get kommanderclusters -A
If you have several Pro Clusters and want to turn one of them into a Managed Cluster to be centrally administered
by a Management Cluster, refer to Platform Expansion.

Next Step

Procedure

• Cluster Operations Management

AKS Installation Options


For an environment on the Azure Kubernetes Service (AKS) infrastructure, installation options for that environment are provided in this location.
Remember, there are always more options for custom YAML in the Custom Installation and Additional Infrastructure Tools section, but this will get you operating in the most common scenarios.
If not already done, see the documentation for:

• Resource Requirements on page 38


• Installing NKP on page 47
• Prerequisites for Installation on page 44

Note: An AKS cluster cannot be a Management or Pro cluster. Before installing NKP on your AKS cluster, first
ensure you have a Management cluster with NKP and the Kommander component installed that handles the life cycle of
your AKS cluster.

Installing Kommander requires you to have CAPI components, cert-manager, and so on, on a self-managed cluster. With the
CAPI components, you can control the life cycle of the cluster and of other clusters. However, because AKS is
semi-managed by Azure, AKS clusters are under Azure's control and do not have those components. Therefore,
Kommander is not installed on them.
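
As a quick, hedged check (not part of the official procedure), you can look for the CAPI controller and cert-manager pods on a cluster to see whether it is self-managed; the pod and namespace names matched below are typical defaults and may vary by version.
# Roughly list CAPI and cert-manager pods; an AKS cluster will normally show none
kubectl get pods -A | grep -E 'capi|cert-manager'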

Section Contents

AKS Installation
Nutanix Kubernetes Platform (NKP) installation on Azure Kubernetes Service (AKS).



If not already done, see the documentation for:

• Resource Requirements on page 38


• Installing NKP on page 47
• Prerequisites for Installation on page 44
For additional custom YAML Ain't Markup Language (YAML) options, see Custom Installation and Additional
Infrastructure Tools.

AKS Prerequisites
Before you begin using Konvoy with AKS, you must:
1. Sign in to Azure using the command az login. For example:
[
{
"cloudName": "AzureCloud",
"homeTenantId": "a1234567-b132-1234-1a11-1234a5678b90",
"id": "b1234567-abcd-11a1-a0a0-1234a5678b90",
"isDefault": true,
"managedByTenants": [],
"name": "Nutanix Developer Subscription",
"state": "Enabled",
"tenantId": "a1234567-b132-1234-1a11-1234a5678b90",
"user": {
"name": "[email protected]",
"type": "user"
}
}
]

2. Create an Azure Service Principal (SP) by using the following command.
az ad sp create-for-rbac --role contributor --name "$(whoami)-konvoy" --scopes=/subscriptions/$(az account show --query id -o tsv)
3. Set the AZURE_CLIENT_SECRET and related Azure environment variables. For example:
export AZURE_CLIENT_SECRET="<azure_client_secret>" # Z79yVstq_E.R0R7RUUck718vEHSuyhAB0C
export AZURE_CLIENT_ID="<client_id>" # 7654321a-1a23-567b-b789-0987b6543a21
export AZURE_TENANT_ID="<tenant_id>" # a1234567-b132-1234-1a11-1234a5678b90
export AZURE_SUBSCRIPTION_ID="<subscription_id>" # b1234567-abcd-11a1-a0a0-1234a5678b90

4. Base64 encode the same environment variables:


export AZURE_SUBSCRIPTION_ID_B64="$(echo -n "${AZURE_SUBSCRIPTION_ID}" | base64 | tr -d '\n')"
export AZURE_TENANT_ID_B64="$(echo -n "${AZURE_TENANT_ID}" | base64 | tr -d '\n')"
export AZURE_CLIENT_ID_B64="$(echo -n "${AZURE_CLIENT_ID}" | base64 | tr -d '\n')"
export AZURE_CLIENT_SECRET_B64="$(echo -n "${AZURE_CLIENT_SECRET}" | base64 | tr -d '\n')"

5. Check to see what version of Kubernetes is available in your region. When deploying with AKS, you must pick
a version of Kubernetes that is available in AKS and use that version for subsequent steps. To find out the list
of available Kubernetes versions in the Azure Region you are using, run the following command, substituting
<your-location> for the Azure region you're deploying to:

1. az aks get-versions -o table --location <your-location>



2. The output resembles the following:
az aks get-versions -o table --location westus
KubernetesVersion Upgrades
------------------- ----------------------------------------
1.27.6(preview) None available
1.27.3(preview) 1.27.6(preview)
1.27.1(preview) 1.27.3(preview)
1.26.6 1.27.1(preview), 1.27.3(preview)
1.26.3 1.26.6, 1.27.1(preview), 1.27.3(preview)
1.25.11 1.26.3, 1.26.6
1.25.6 1.25.11, 1.26.3, 1.26.6
1.24.15 1.25.6, 1.25.11
1.24.10 1.24.15, 1.25.6, 1.25.11

6. Choose a version of Kubernetes for installation from the KubernetesVersion column of the list. In this example, the selected version is 1.29.0.
export KUBERNETES_VERSION=1.29.0
For the list of compatible supported Kubernetes versions, see Supported Kubernetes Versions.

NKP Prerequisites
Before starting the NKP installation, verify that you have:

• A Management cluster with NKP and the Kommander component installed.

Note: An AKS cluster cannot be a Management or Pro cluster. Before installing NKP on your AKS cluster, ensure
you have a Management cluster with NKP and the Kommander component installed, that handles the life cycle of
your AKS cluster.

• An x86_64-based Linux or macOS machine with a supported version of the operating system.
• A Self-managed Azure cluster, if you used the Day 1-Basic Installation for Azure instructions, your cluster
was created using --self-managed flag and therefore is already a self-managed cluster.
• Download the NKP binary for Linux or macOS. To check which version of NKP you installed for
compatibility reasons, run the nkp version command.
• Docker https://docs.docker.com/get-docker/ version 18.09.2 or later.
• kubectl https://kubernetes.io/docs/tasks/tools/#kubectl for interacting with the running cluster.
• The Azure CLI https://docs.microsoft.com/en-us/cli/azure/install-azure-cli.
• A valid Azure account used to sign in to the Azure CLI https://docs.microsoft.com/en-us/cli/azure/
authenticate-azure-cli?view=azure-cli-latest.
• All Resource requirements.

Note: Kommander installation requires you to have Cluster API (CAPI) components, cert-manager, etc on a self-
managed cluster. The CAPI components mean you can control the life cycle of the cluster, and other clusters. However,
because AKS is semi-managed by Azure, the AKS clusters are under Azure control and don’t have those components.
Therefore, Kommander will not be installed and these clusters will be attached to the management cluster.

On other infrastructure providers, deploying a cluster with a custom image in a region where CAPI images https://cluster-api-aws.sigs.k8s.io/topics/images/built-amis.html are not provided requires using Konvoy Image Builder to create your own image for the region. However, AKS best practices discourage building custom images: if the image is customized, it breaks some of the autoscaling and security capabilities of AKS. Because custom virtual machine images are discouraged in AKS, Konvoy Image Builder (KIB) does not include any support for building custom machine images for AKS.



AKS: Create an AKS Cluster

About this task


When creating a Managed cluster on your AKS infrastructure, you can choose from multiple configuration types.

Procedure
Use NKP to create a new AKS cluster. Ensure that the KUBECONFIG environment variable is set to the Management cluster by running:
export KUBECONFIG=<Management_cluster_kubeconfig>.conf

Name Your Cluster

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable.


export CLUSTER_NAME=<aks-example>

Note: The cluster name might only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail
if the name has capital letters. See Kubernetes for more naming information.

Create a New AKS Kubernetes Cluster from the CLI

About this task

Procedure

1. Set the environment variable to the name you assigned this cluster.
export CLUSTER_NAME=<aks-example>

2. Check to see what version of Kubernetes is available in your region. When deploying with AKS, you need
to declare the version of Kubernetes you want to use by running the following command, substituting <your-
location> for the Azure region you're deploying to.
az aks get-versions -o table --location <your-location>

3. Set the Kubernetes version you have chosen.


export KUBERNETES_VERSION=1.27.6

4. Create the cluster.

Note: Refer to the current release Kubernetes compatibility table for the correct version to use and choose an available 1.27.x version. The version listed in the command is an example.


nkp create cluster aks --cluster-name=${CLUSTER_NAME} --additional-tags=owner=$(whoami) --kubernetes-version=${KUBERNETES_VERSION}

5. Inspect or edit the cluster objects. Familiarize yourself with Cluster API before editing the cluster objects, as edits
can prevent the cluster from deploying successfully. See Customizing CAPI Clusters.



6. Wait for the cluster control-plane to be ready.
kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=20m
The READY status will become True after the cluster control-plane becomes ready in one of the following steps.

7. After the objects are created on the API server, the Cluster API controllers reconcile them. They create
infrastructure and machines. As they progress, they update the Status of each object. Konvoy provides a command
to describe the current status of the cluster.
nkp describe cluster -c ${CLUSTER_NAME}
NAME READY SEVERITY REASON
SINCE MESSAGE
Cluster/aks-example True
48m
##ClusterInfrastructure - AzureManagedCluster/aks-example
##ControlPlane - AzureManagedControlPlane/aks-example

8. As they progress, the controllers also create Events. List the Events using this command.
kubectl get events | grep ${CLUSTER_NAME}
For brevity, the example uses grep. It is also possible to use separate commands to get Events for specific objects.
For example, kubectl get events --field-selector involvedObject.kind="AKSCluster" and
kubectl get events --field-selector involvedObject.kind="AKSMachine".
48m Normal SuccessfulSetNodeRefs machinepool/aks-
example-md-0 [{Kind: Namespace: Name:aks-mp6gglj-41174201-
vmss000000 UID:e3c30389-660d-46f5-b9d7-219f80b5674d APIVersion: ResourceVersion:
FieldPath:} {Kind: Namespace: Name:aks-mp6gglj-41174201-vmss000001 UID:300d71a0-
f3a7-4c29-9ff1-1995ffb9cfd3 APIVersion: ResourceVersion: FieldPath:} {Kind:
Namespace: Name:aks-mp6gglj-41174201-vmss000002 UID:8eae2b39-a415-425d-8417-
d915a0b2fa52 APIVersion: ResourceVersion: FieldPath:} {Kind: Namespace: Name:aks-
mp6gglj-41174201-vmss000003 UID:3e860b88-f1a4-44d1-b674-a54fad599a9d APIVersion:
ResourceVersion: FieldPath:}]
6m4s Normal AzureManagedControlPlane available azuremanagedcontrolplane/
aks-example successfully reconciled
48m Normal SuccessfulSetNodeRefs machinepool/aks-
example [{Kind: Namespace: Name:aks-mp6gglj-41174201-
vmss000000 UID:e3c30389-660d-46f5-b9d7-219f80b5674d APIVersion: ResourceVersion:
FieldPath:} {Kind: Namespace: Name:aks-mp6gglj-41174201-vmss000001 UID:300d71a0-
f3a7-4c29-9ff1-1995ffb9cfd3 APIVersion: ResourceVersion: FieldPath:} {Kind:
Namespace: Name:aks-mp6gglj-41174201-vmss000002 UID:8eae2b39-a415-425d-8417-
d915a0b2fa52 APIVersion: ResourceVersion: FieldPath:}]

AKS: Retrieve kubeconfig for AKS Cluster


Learn to interact with your AKS Kubernetes cluster.

About this task


This guide explains how to use the command line to interact with your newly deployed Kubernetes cluster. Before
you start, make sure you have created a workload cluster, as described in AKS: Create an AKS Cluster.
Explore the new AKS cluster with the steps below.

Procedure

1. Get a kubeconfig file for the workload cluster. When the workload cluster is created, the cluster life cycle
services generate a kubeconfig file for the workload cluster and write it to a Secret. The kubeconfig file is scoped to the cluster administrator. Get the kubeconfig from the Secret, and write it to a file using this
command.
nkp get kubeconfig -c ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf

2. List the Nodes using this command.


kubectl --kubeconfig=${CLUSTER_NAME}.conf get nodes
NAME STATUS ROLES AGE VERSION
aks-cp6dsz8-41174201-vmss000000 Ready agent 56m v1.29.6
aks-cp6dsz8-41174201-vmss000001 Ready agent 55m v1.29.6
aks-cp6dsz8-41174201-vmss000002 Ready agent 56m v1.29.6
aks-mp6gglj-41174201-vmss000000 Ready agent 55m v1.29.6
aks-mp6gglj-41174201-vmss000001 Ready agent 55m v1.29.6
aks-mp6gglj-41174201-vmss000002 Ready agent 55m v1.29.6
aks-mp6gglj-41174201-vmss000003 Ready agent 56m v1.29.6

Note:
It might take a few minutes for the Status to move to Ready while the Pod network is deployed. The Node Status will change to Ready soon after the calico-node DaemonSet Pods are Ready; one way to check this directly is shown after the Pod listing in the next step.

3. List the Pods using the command kubectl --kubeconfig=${CLUSTER_NAME}.conf get --all-namespaces pods.
Example output:
NAMESPACE NAME READY
STATUS RESTARTS AGE
calico-system calico-kube-controllers-5dcd4b47b5-tgslm 1/1
Running 0 3m58s
calico-system calico-node-46dj9 1/1
Running 0 3m58s
calico-system calico-node-crdgc 1/1
Running 0 3m58s
calico-system calico-node-m7s7x 1/1
Running 0 3m58s
calico-system calico-node-qfkqc 1/1
Running 0 3m57s
calico-system calico-node-sfqfm 1/1
Running 0 3m57s
calico-system calico-node-sn67x 1/1
Running 0 3m53s
calico-system calico-node-w2pvt 1/1
Running 0 3m58s
calico-system calico-typha-6f7f59969c-5z4t5 1/1
Running 0 3m51s
calico-system calico-typha-6f7f59969c-ddzqb 1/1
Running 0 3m58s
calico-system calico-typha-6f7f59969c-rr4lj 1/1
Running 0 3m51s
kube-system azure-ip-masq-agent-4f4v6 1/1
Running 0 4m11s
kube-system azure-ip-masq-agent-5xfh2 1/1
Running 0 4m11s
kube-system azure-ip-masq-agent-9hlk8 1/1
Running 0 4m8s
kube-system azure-ip-masq-agent-9vsgg 1/1
Running 0 4m16s
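
As mentioned in the note after step 2, Node readiness follows the calico-node DaemonSet becoming Ready. If you want to watch for that directly, one hedged option is the following command; it assumes the default Calico layout, where the DaemonSet is named calico-node in the calico-system namespace (matching the Pods listed above).
# Wait for the Calico node DaemonSet to finish rolling out
kubectl --kubeconfig=${CLUSTER_NAME}.conf -n calico-system rollout status daemonset/calico-node --timeout=10m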



AKS: Attach a Cluster
You can attach existing Kubernetes clusters to the Management Cluster using the instructions below.

About this task


After attaching the cluster, you can use the UI to examine and manage this cluster. The following procedure shows
how to attach an existing Azure Kubernetes Service (AKS) cluster.
This procedure assumes you have one or more existing, spun-up Azure AKS clusters with administrative privileges. For setup and configuration information, refer to the Azure site regarding AKS.

Note: Ensure that the KUBECONFIG environment variable is set to the Management cluster before attaching by running:
export KUBECONFIG=<Management_cluster_kubeconfig>.conf

Access Your AKS Clusters

Procedure

1. Ensure you are connected to your AKS clusters. Enter the following commands for each of your clusters.
kubectl config get-contexts
kubectl config use-context <context for first aks cluster>

2. Confirm kubectl can access the AKS cluster.


kubectl get nodes

Create a kubeconfig File

About this task


To get started, ensure you have kubectl set up and configured with ClusterAdmin for the cluster you want to connect
to Kommander.
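
One way to sanity-check that your current kubectl context has the required cluster-admin-level access (a suggestion, not part of the official procedure) is:
# Prints "yes" if the current context can perform any action on any resource
kubectl auth can-i '*' '*' --all-namespaces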

Procedure

1. Create the necessary service account.


kubectl -n kube-system create serviceaccount kommander-cluster-admin

2. Create a token secret for the serviceaccount.


kubectl -n kube-system create -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
name: kommander-cluster-admin-sa-token
annotations:
kubernetes.io/service-account.name: kommander-cluster-admin
type: kubernetes.io/service-account-token
EOF
For more information on Service Account Tokens, refer to this article in our blog.



3. Verify that the serviceaccount token is ready by running this command.
kubectl -n kube-system get secret kommander-cluster-admin-sa-token -oyaml
Verify that the data.token field is populated, as seen in the example output.
apiVersion: v1
data:
ca.crt: LS0tLS1CRUdJTiBDR...
namespace: ZGVmYXVsdA==
token: ZXlKaGJHY2lPaUpTVX...
kind: Secret
metadata:
annotations:
kubernetes.io/service-account.name: kommander-cluster-admin
kubernetes.io/service-account.uid: b62bc32e-b502-4654-921d-94a742e273a8
creationTimestamp: "2022-08-19T13:36:42Z"
name: kommander-cluster-admin-sa-token
namespace: default
resourceVersion: "8554"
uid: 72c2a4f0-636d-4a70-9f1c-55a75f15e520
type: kubernetes.io/service-account-token

4. Configure the new service account for cluster-admin permissions.


cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kommander-cluster-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kommander-cluster-admin
namespace: kube-system
EOF

5. Set up the following environment variables with the access data that is needed for producing a new kubeconfig
file.
export USER_TOKEN_VALUE=$(kubectl -n kube-system get secret/kommander-cluster-admin-sa-token -o=go-template='{{.data.token}}' | base64 --decode)
export CURRENT_CONTEXT=$(kubectl config current-context)
export CURRENT_CLUSTER=$(kubectl config view --raw -o=go-template='{{range .contexts}}{{if eq .name "'''${CURRENT_CONTEXT}'''"}}{{ index .context "cluster" }}{{end}}{{end}}')
export CLUSTER_CA=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}"{{with index .cluster "certificate-authority-data" }}{{.}}{{end}}"{{ end }}{{ end }}')
export CLUSTER_SERVER=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}{{ .cluster.server }}{{end}}{{ end }}')

6. Confirm these variables have been set correctly.


export -p | grep -E 'USER_TOKEN_VALUE|CURRENT_CONTEXT|CURRENT_CLUSTER|CLUSTER_CA|CLUSTER_SERVER'

7. Generate a kubeconfig file that uses the environment variable values from the previous step.
cat << EOF > kommander-cluster-admin-config
apiVersion: v1
kind: Config
current-context: ${CURRENT_CONTEXT}
contexts:
- name: ${CURRENT_CONTEXT}
context:
cluster: ${CURRENT_CONTEXT}
user: kommander-cluster-admin
namespace: kube-system
clusters:
- name: ${CURRENT_CONTEXT}
cluster:
certificate-authority-data: ${CLUSTER_CA}
server: ${CLUSTER_SERVER}
users:
- name: kommander-cluster-admin
user:
token: ${USER_TOKEN_VALUE}
EOF

8. This process produces a file in your current working directory called kommander-cluster-admin-config. The
contents of this file are used in Kommander to attach the cluster. Before importing this configuration, verify the
kubeconfig file can access the cluster.
kubectl --kubeconfig $(pwd)/kommander-cluster-admin-config get all --all-namespaces

Finalize attaching your cluster from the UI

About this task


Now that you have the kubeconfig, go to the NKP UI and follow these steps below:

Procedure
From the top menu bar, select your target workspace.

a. On the Dashboard page, select the Add Cluster option in the Actions dropdown menu at the top right.
b. Select Attach Cluster.
c. Select the No additional networking restrictions card. Alternatively, if you must use network restrictions,
stop following the steps below and see the Attach a cluster WITH network restrictions.
d. Upload the kubeconfig file you created in the previous section (or copy its contents) into the Cluster
Configuration section.
e. The Cluster Name field automatically populates with the name of the cluster in the kubeconfig. You can edit
this field using the name you want for your cluster.
f. Add labels to classify your cluster as needed.
g. Select Create to attach your cluster.

Next Step

GCP Installation Options


For an environment on the GCP infrastructure, installation options for that environment are provided in this location.
Remember, there are always more options for custom YAML in the Custom Installation and Additional Infrastructure Tools section, but this will get you operating in the most common scenarios.
If not already done, see the documentation for:



• Resource Requirements on page 38
• Installing NKP on page 47
• Prerequisites for Installation on page 44

Additional Resource Information Specific to GCP

• Control plane nodes - NKP on GCP defaults to deploying an n2-standard-4 instance with an 80GiB root
volume for control plane nodes, which meets the above requirements.
• Worker nodes - NKP on GCP defaults to deploying an n2-standard-8 instance with an 80GiB root volume for
worker nodes, which meets the above requirements.

GCP Installation
This installation provides instructions to install NKP in a GCP non-air-gapped environment.
Remember, there are always more options for custom YAML in the Custom Installation and Additional
Infrastructure Tools section, but this will get you operating with basic features.
If not already done, see the documentation for:

• Resource Requirements on page 38


• Installing NKP on page 47
• Prerequisites for Installation on page 44

GCP Prerequisites
Verify that your Google Cloud project does not have the Enable OS Login feature enabled.

Note:
The Enable OS Login feature is sometimes enabled by default in GCP projects. If the OS login feature is
enabled, KIB will not be able to ssh to the VM instances it creates and will not be able to create an image
successfully.
To check if it is enabled, use the commands on this page Set and remove custom metadata | Compute
Engine Documentation | Google Cloud to inspect the metadata configured in your project. If you find
the enable-oslogin flag set to TRUE, you must remove it (or set it to FALSE) to use KIB.
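
For reference, a hedged sketch of inspecting and removing the flag with the gcloud CLI is shown below; it assumes the gcloud CLI is configured for your project, and the authoritative steps are on the Google Cloud page linked above.
# Inspect project-wide metadata for the enable-oslogin key
gcloud compute project-info describe --format="value(commonInstanceMetadata.items)"
# Remove the key if it is set to TRUE
gcloud compute project-info remove-metadata --keys=enable-oslogin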

The user creating the Service Accounts needs additional privileges beyond the Editor role.

Note: See GCP Roles for more information.

GCP: Creating an Image


Learn how to build a custom image for use with NKP.

About this task


This procedure describes how to use the Konvoy Image Builder (KIB) to create a Cluster API compliant GCP
image. GCP images contain configuration information and software to create a specific, pre-configured, operating
environment. For example, you can create a GCP image of your current computer system settings and software. The
GCP image can then be replicated and distributed, creating your computer system for other users. KIB uses variable
overrides to specify the base image and container images to use in your new GCP image.
The prerequisites to use Konvoy Image Builder are:



Procedure

1. Download the KIB bundle for your version of NKP prefixed with konvoy-image-bundle for your OS.

2. Check the Supported Infrastructure Operating Systems.

3. Check the Supported Kubernetes Version for your Provider.

4. Create a working registry.

» Podman Version 4.0 or later for Linux. For more information, see https://podman.io/getting-started/
installation. For host requirements, see https://kind.sigs.k8s.io/docs/user/rootless/#host-requirements.
» Docker container engine version 18.09.2 or 20.10.0 installed for Linux or MacOS. For more information, see
https://docs.docker.com/get-docker/.

5. On Debian-based Linux distributions, install a version of the cri-tools package known to be compatible with both
the Kubernetes and container runtime versions.

6. Verify that your Google Cloud project does not have the Enable OS Login feature enabled.

Note: The Enable OS Login feature is sometimes enabled by default in GCP projects. If the OS Login feature is enabled, KIB will not be able to SSH to the VM instances it creates and will not be able to create an image successfully. To check whether it is enabled, use the commands on the page Set and remove custom metadata | Compute Engine Documentation | Google Cloud to inspect the metadata configured in your project. If you find the enable-oslogin flag set to TRUE, you must remove it (or set it to FALSE) to use KIB successfully.

GCP Prerequisite Roles

About this task


If not done previously during Konvoy Image Builder download in Prerequisites, extract the bundle and cd into the
extracted konvoy-image-bundle-$VERSION folder. Otherwise, proceed to the steps below.

Procedure

1. If you are creating your image on either a non-GCP instance or one that does not have the required roles, you must either:

» Create a GCP service account.


» If you have already created a service account, retrieve the credentials for that existing service account.
A sketch of these two options using the gcloud CLI is shown after this procedure.

2. Export the static credentials that will be used to create the cluster.
export GCP_B64ENCODED_CREDENTIALS=$(base64 < "${GOOGLE_APPLICATION_CREDENTIALS}" | tr -d '\n')
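
For reference, a hedged sketch of the two options in step 1 using the gcloud CLI is shown below. The service account name and key file path are illustrative, and any roles you need beyond roles/editor are described in GCP Roles.
# Create a service account (name is illustrative)
gcloud iam service-accounts create konvoy-image-builder --display-name="Konvoy Image Builder"
# Grant it the Editor role (add any further roles required per GCP Roles)
gcloud projects add-iam-policy-binding ${GCP_PROJECT} \
 --member="serviceAccount:konvoy-image-builder@${GCP_PROJECT}.iam.gserviceaccount.com" \
 --role="roles/editor"
# Download a key for the account and point GOOGLE_APPLICATION_CREDENTIALS at it
gcloud iam service-accounts keys create ./gcp-credentials.json \
 --iam-account="konvoy-image-builder@${GCP_PROJECT}.iam.gserviceaccount.com"
export GOOGLE_APPLICATION_CREDENTIALS=./gcp-credentials.json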

Build the GCP Image

About this task


Depending on which version of NKP you are running, steps and flags will be different.



Procedure

1. Run the konvoy-image command to build and validate the image.


./konvoy-image build gcp --project-id ${GCP_PROJECT} --network ${NETWORK_NAME} images/gcp/ubuntu-2004.yaml

2. KIB will run and print out the name of the created image; you will use this name when creating a Kubernetes
cluster. See the sample output below.

Note: Ensure you have named the correct YAML file for your OS in the konvoy-image build command.

...
==> ubuntu-2004-focal-v20220419: Deleting instance...
ubuntu-2004-focal-v20220419: Instance has been deleted!
==> ubuntu-2004-focal-v20220419: Creating image...
==> ubuntu-2004-focal-v20220419: Deleting disk...
ubuntu-2004-focal-v20220419: Disk has been deleted!
==> ubuntu-2004-focal-v20220419: Running post-processor: manifest
Build 'ubuntu-2004-focal-v20220419' finished after 7 minutes 46 seconds.

==> Wait completed after 7 minutes 46 seconds

==> Builds finished. The artifacts of successful builds are:


--> ubuntu-2004-focal-v20220419: A disk image was created: konvoy-
ubuntu-2004-1-23-7-1658523168
--> ubuntu-2004-focal-v20220419: A disk image was created: konvoy-
ubuntu-2004-1-23-7-1658523168

3. To find a list of images you have created in your account, run the following command.
gcloud compute images list --no-standard-images

Related Information

Procedure

• To use a local registry, even in a non-air-gapped environment, download and extract the Complete NKP Air-gapped Bundle for this release (that is, nkp-air-gapped-bundle_v2.12.0_linux_amd64.tar.gz) to load the registry. See Downloading NKP on page 16.

• To view the complete set of instructions, see Load the Registry.

GCP: Creating the Management Cluster


Create a GCP Management Cluster in a non-air-gapped environment.

About this task


Use this procedure to create a self-managed Management cluster with NKP. A self-managed cluster refers to one
in which the CAPI resources and controllers that describe and manage it are running on the same cluster they are
managing. First, you must name your cluster.
Name Your Cluster

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if
the name has capital letters. For more naming information, see https://kubernetes.io/docs/concepts/overview/
working-with-objects/names/.



Note: NKP uses the GCP CSI driver as the default storage provider. Use a Kubernetes CSI compatible storage
that is suitable for production.

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable using the command export CLUSTER_NAME=<gcp-example>.

Create a GCP Kubernetes Cluster

About this task


If you use these instructions to create a cluster on GCP using the NKP default settings without any edits to
configuration files or additional flags, your cluster is deployed on an Ubuntu 20.04 operating system image with 3
control plane nodes, and 4 worker nodes.
By default, the control-plane Nodes will be created in 3 different zones. However, the default worker Nodes will
reside in a single zone. You might create additional node pools in other zones with the nkp create nodepool
command.
Availability zones (AZs) are isolated locations within data center regions from which public cloud services originate
and operate. Because all the nodes in a node pool are deployed in a single AZ, you may wish to create additional node
pools to ensure your cluster has nodes deployed in multiple AZs.

Procedure

1. Create an image using Konvoy Image Builder (KIB) and then export the image name.
export IMAGE_NAME=projects/${GCP_PROJECT}/global/images/<image_name_from_kib>

2. Ensure your subnets do not overlap with your host subnet because they cannot be changed after cluster creation.
If you need to change the Kubernetes subnets, you must do this at cluster creation. The default subnets used in
NKP are.
spec:
clusterNetwork:
pods:
cidrBlocks:
- 192.168.0.0/16
services:
cidrBlocks:
- 10.96.0.0/12

» (Optional) Modify Control Plane Audit logs - Users can make modifications to the KubeadmControlplane
cluster-api object to configure different kubelet options. See the following guide if you wish to configure
your control plane beyond the existing options that are available from flags.
» (Optional) Determine what VPC Network to use. All GCP accounts come with a preconfigured VPC Network
named default, which will be used if you do not specify a different network. To use a different VPC network
for your cluster, create one by following these instructions for Create and Manage VPC Networks. Then
specify the --network <new_vpc_network_name> option on the create cluster command below. More
information is available on GCP Cloud Nat and network flag.

3. Create a Kubernetes cluster. The following example shows a common configuration.


nkp create cluster gcp \
--cluster-name=${CLUSTER_NAME} \
--additional-tags=owner=$(whoami) \
--project=${GCP_PROJECT} \
--image=${IMAGE_NAME} \
--self-managed

Note: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --
https-proxy, and --no-proxy and their related values in this command for it to be successful. More
information is available in Configuring an HTTP or HTTPS Proxy on page 644.

If you want to monitor or verify the installation of your clusters, refer to the topic: Verify your Cluster and NKP
Installation

GCP: Installing Kommander


This section provides installation instructions for the Kommander component of NKP in a non-air-gapped
GCP environment.

About this task


Once you have installed the Konvoy component of NKP, you will continue with the installation of the
Kommander component that will bring up the UI dashboard.

Tip: Tips and Recommendations

• The --kubeconfig=${CLUSTER_NAME}.conf flag ensures that you install Kommander on the correct
cluster. For alternatives, see Provide Context for Commands with a kubeconfig File.
• Applications can take longer to deploy and time out the installation. Add the --wait-timeout <time
to wait> flag and specify a period of time (for example, 1h) to allocate more time to the deployment
of applications.
• If the Kommander installation fails, or you want to reconfigure applications, rerun the install
command to retry.

Prerequisites:

• Ensure you have reviewed all Prerequisites for Install.


• Ensure you have a default StorageClass.
• Note the name of the cluster where you want to install Kommander. If you do not know the cluster name, use
kubectl get clusters -A to display and find it.

Create your Kommander Installation Configuration File

Procedure

1. Set the environment variable for your cluster.


export CLUSTER_NAME=<your-management-cluster-name>

2. Copy the kubeconfig file of your Management cluster to your local directory.
nkp get kubeconfig -c ${CLUSTER_NAME} >> ${CLUSTER_NAME}.conf

3. Create a configuration file for the deployment.


nkp install kommander --init > kommander.yaml

4. If required: Customize your kommander.yaml.

a. See the Customizations page for customization options. Some options include Custom Domains and
Certificates, HTTP proxy, and External Load Balancer.



5. Only required if your cluster uses a custom AWS VPC and requires an internal load-balancer; set the traefik
annotation to create an internal-facing ELB.
apps:
traefik:
enabled: true
values: |
service:
annotations:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"

6. To enable NKP Catalog Applications and install Kommander, add these values for NKP-catalog-applications (if you are enabling NKP Catalog Apps) in the same kommander.yaml from the previous section.
apiVersion: config.kommander.mesosphere.io/v1alpha1
kind: Installation
catalog:
repositories:
- name: NKP-catalog-applications
labels:
kommander.d2iq.io/project-default-catalog-repository: "true"
kommander.d2iq.io/workspace-default-catalog-repository: "true"
kommander.d2iq.io/gitapps-gitrepository-type: "NKP"
gitRepositorySpec:
url: https://github.com/mesosphere/NKP-catalog-applications
ref:
tag: v2.12.0

7. Use the customized kommander.yaml to install NKP.


nkp install kommander --installer-config kommander.yaml --kubeconfig=${CLUSTER_NAME}.conf

Note: If you only want to enable catalog applications to an existing configuration, add these values to an existing
installer configuration file to maintain your Management cluster’s settings.
If you want to enable NKP Catalog applications after installing NKP, see the topic Configuring NKP
Catalog Applications after Installing NKP.

GCP: Verifying your Installation and UI Log in


Verify Kommander installation and log in to the Dashboard UI.

About this task


Verify Kommander Installation

Note: If the Kommander installation fails or you wish to reconfigure applications, you can rerun the install command
to retry the installation.

Procedure
You can check the status of the installation using the following command.
kubectl -n kommander wait --for condition=Ready helmreleases --all --timeout 15m

Note: If you prefer the CLI to not wait for all applications to become ready, you can set the --wait=false flag.

The command first waits for each of the Helm charts to reach its Ready condition, eventually resulting in output resembling the following:
helmrelease.helm.toolkit.fluxcd.io/centralized-grafana condition met
helmrelease.helm.toolkit.fluxcd.io/dex condition met
helmrelease.helm.toolkit.fluxcd.io/dex-k8s-authenticator condition met
helmrelease.helm.toolkit.fluxcd.io/fluent-bit condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-logging condition met
helmrelease.helm.toolkit.fluxcd.io/grafana-loki condition met
helmrelease.helm.toolkit.fluxcd.io/karma condition met
helmrelease.helm.toolkit.fluxcd.io/kommander condition met
helmrelease.helm.toolkit.fluxcd.io/kommander-appmanagement condition met
helmrelease.helm.toolkit.fluxcd.io/kube-prometheus-stack condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost condition met
helmrelease.helm.toolkit.fluxcd.io/kubecost-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/kubefed condition met
helmrelease.helm.toolkit.fluxcd.io/kubernetes-dashboard condition met
helmrelease.helm.toolkit.fluxcd.io/kubetunnel condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator condition met
helmrelease.helm.toolkit.fluxcd.io/logging-operator-logging condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-adapter condition met
helmrelease.helm.toolkit.fluxcd.io/prometheus-thanos-traefik condition met
helmrelease.helm.toolkit.fluxcd.io/reloader condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph condition met
helmrelease.helm.toolkit.fluxcd.io/rook-ceph-cluster condition met
helmrelease.helm.toolkit.fluxcd.io/thanos condition met
helmrelease.helm.toolkit.fluxcd.io/traefik condition met
helmrelease.helm.toolkit.fluxcd.io/traefik-forward-auth-mgmt condition met
helmrelease.helm.toolkit.fluxcd.io/velero condition met

Failed HelmReleases

Procedure
If an application fails to deploy, check the status of a HelmRelease using the following command.
kubectl -n kommander get helmrelease <HELMRELEASE_NAME>

If you find any HelmReleases in a “broken” release state, such as “exhausted” or “another rollback/release in progress”, trigger a reconciliation of the HelmRelease using the following commands.
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": true}]'
kubectl -n kommander patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": false}]'

Log in to the UI

Procedure

1. By default, you can log in to the UI in Kommander with the credentials given using this command.
nkp open dashboard --kubeconfig=${CLUSTER_NAME}.conf

2. Retrieve your credentials at any time if necessary.


kubectl -n kommander get secret NKP-credentials -o go-template='Username: {{.data.username|base64decode}}{{ "\n"}}Password: {{.data.password|base64decode}}{{ "\n"}}'



3. Retrieve the URL used for accessing the UI with the following.
kubectl -n kommander get svc kommander-traefik -o go-template='https://{{with index .status.loadBalancer.ingress 0}}{{or .hostname .ip}}{{end}}/NKP/kommander/dashboard{{ "\n"}}'
Only use these static credentials to access the UI for configuring an external identity provider. Treat them as
backup credentials rather than using them for normal access.

a. Rotate the password using the following command.


nkp experimental rotate dashboard-password
The output displays the new password:
Password: kqZ31lMBSCLcBjUKVwLJMQL2PxalipIzZw5Pjyw09wDqjWV3dz2wPSSBYi09JGJp

Dashboard UI Functions

Procedure

After installing the Konvoy component and building a cluster, as well as successfully installing Kommander and logging
in to the UI, you are now ready to customize configurations using the Day 2 Cluster Operations Management section
of the documentation. The majority of this customization such as attaching clusters and deploying applications will
take place in the dashboard or UI of NKP. The Day 2 section allows you to manage cluster operations and their
application workloads to optimize your organization’s productivity.

• Continue to the NKP dashboard.

GCP: Creating Managed Clusters Using the NKP CLI


This topic explains how to continue using the CLI to create managed clusters rather than switching to the
UI dashboard.

About this task


After the initial cluster creation, you can create additional clusters from the CLI. In a previous step, the new cluster
was created as Self-managed, which allows it to be a Management cluster or a stand-alone cluster. Subsequent new
clusters are not self-managed, as they will likely be Managed or Attached clusters to this Management Cluster.

Note: When creating Managed clusters, you do not need to create and move CAPI objects or install the
Kommander component. Those tasks are only done on Management clusters!
Your new managed cluster needs to be part of a workspace under a management cluster. To make the new
managed cluster a part of a Workspace, set that workspace environment variable.

Procedure

1. If you have an existing Workspace name, run this command to find the name.
kubectl get workspace -A

2. When you have the Workspace name, set the WORKSPACE_NAMESPACE environment variable.
export WORKSPACE_NAMESPACE=<workspace_namespace>

Note: If you need to create a new Workspace, follow the instructions to Create a New Workspace



Name Your Cluster

About this task


Each cluster must have a unique name.
After you have defined the infrastructure and control plane endpoints, you can proceed to create the cluster by following these steps. This process creates a managed (non-self-managed) cluster that is administered by your Management cluster.
First, you must name your cluster. Then, you run the command to deploy it.

Note: The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if
the name has capital letters. See Kubernetes for more naming information.
When specifying the cluster-name, you must use the same cluster-name as used when defining your
inventory objects.

Perform both steps to name the cluster:

Procedure

1. Give your cluster a unique name suitable for your environment.

2. Set the environment variable.


export MANAGED_CLUSTER_NAME=<gcp-additional>

Create a Kubernetes Cluster

About this task


The instructions below tell you how to create a cluster and have it automatically attach to the workspace you set
above. If you do not set a workspace, the cluster will be created in the default workspace, and you will need to take
additional steps to attach to a workspace later. For instructions on how to do this, see Attach a Kubernetes Cluster.

Note: Google Cloud Platform does not publish images. You must first build the image using Konvoy Image
Builder.

Procedure

1. Create an image using Konvoy Image Builder (KIB) and then export the image name.
export IMAGE_NAME=projects/${GCP_PROJECT}/global/images/<image_name_from_kib>

2. Execute this command to create your additional Kubernetes cluster using any relevant flags. This will create a new
non-self-managed cluster that can be managed by the management cluster you created in the previous section.
nkp create cluster gcp \
--cluster-name=${MANAGED_CLUSTER_NAME} \
--additional-tags=owner=$(whoami) \
--namespace ${WORKSPACE_NAMESPACE} \
--project=${GCP_PROJECT} \
--image=${IMAGE_NAME} \
--kubeconfig=<management-cluster-kubeconfig-path>

Tip: If your environment uses HTTP or HTTPS proxies, you must include the flags --http-proxy, --https-
proxy, and --no-proxy and their related values in this command for it to be successful. More information is
available in Configuring an HTTP or HTTPS Proxy on page 644.



Manually Attach an NKP CLI Cluster to the Management Cluster

Procedure

When you create a Managed Cluster with the NKP CLI, it attaches automatically to the Management Cluster after a
few moments. However, if you do not set a workspace, the attached cluster will be created in the default workspace.
To ensure that the attached cluster is created in your desired workspace namespace, follow these instructions:

1. Confirm you have your MANAGED_CLUSTER_NAME variable set with the following command.
echo ${MANAGED_CLUSTER_NAME}

2. Retrieve your kubeconfig from the cluster you have created without setting a workspace.
nkp get kubeconfig --cluster-name ${MANAGED_CLUSTER_NAME} > ${MANAGED_CLUSTER_NAME}.conf

3. Note: This step is only necessary if you did not set the workspace of your cluster upon creation.

You can now either attach the cluster to a workspace through the UI, as described in the earlier section on attaching clusters, or attach your cluster to the desired workspace using the CLI, as described in the following steps.

4. Retrieve the workspace where you want to attach the cluster.


kubectl get workspaces -A

5. Set the WORKSPACE_NAMESPACE environment variable.


export WORKSPACE_NAMESPACE=<workspace-namespace>

6. You need to create a secret in the desired workspace before attaching the cluster to that workspace. Retrieve the
kubeconfig secret value of your cluster.
kubectl -n default get secret ${MANAGED_CLUSTER_NAME}-kubeconfig -o go-template='{{.data.value}}{{ "\n"}}'

7. This will return a lengthy value. Copy this entire string for a secret using the template below as a reference.
Create a new attached-cluster-kubeconfig.yaml file.
apiVersion: v1
kind: Secret
metadata:
name: <your-managed-cluster-name>-kubeconfig
labels:
cluster.x-k8s.io/cluster-name: <your-managed-cluster-name>
type: cluster.x-k8s.io/secret
data:
value: <value-you-copied-from-secret-above>

8. Create this secret in the desired workspace.


kubectl apply -f attached-cluster-kubeconfig.yaml --namespace ${WORKSPACE_NAMESPACE}

9. Create this kommandercluster object to attach the cluster to the workspace.


cat << EOF | kubectl apply -f -
apiVersion: kommander.mesosphere.io/v1beta1
kind: KommanderCluster
metadata:
name: ${MANAGED_CLUSTER_NAME}
namespace: ${WORKSPACE_NAMESPACE}
spec:
kubeconfigRef:
name: ${MANAGED_CLUSTER_NAME}-kubeconfig
clusterRef:
capiCluster:
name: ${MANAGED_CLUSTER_NAME}
EOF

10. You can now view this cluster in your Workspace in the UI, and you can confirm its status by running the
command below. It may take a few minutes to reach "Joined" status.
kubectl get kommanderclusters -A
If you have several Pro Clusters and want to turn one of them into a Managed Cluster to be centrally
administered by a Management Cluster, refer to Platform Expansion.

Next Step

Procedure

• Cluster Operations Management


5
CLUSTER OPERATIONS MANAGEMENT
Manage your NKP environment using the Cluster Operations Management features.
The Cluster Operations Management section allows you to manage cluster operations and their application workloads
to optimize your organization’s productivity.

• Operations on page 339


• Applications on page 376
• Workspaces on page 396
• Projects on page 423
• Cluster Management on page 462
• Backup and Restore on page 544
• Logging on page 561
• Security on page 585
• Networking on page 597
• GPUs on page 607
• Monitoring and Alerts on page 617
• Storage for Applications on page 632

Operations
You can manage your cluster and deployed applications using platform applications.
After you deploy an NKP cluster and the platform applications you want to use, you are ready to begin managing
cluster operations and their application workloads to optimize your organization’s productivity.
In most cases, a production cluster requires additional advanced configuration tailored for your environment, ongoing
maintenance, authentication and authorization, and other common activities. For example, it is important to monitor
cluster activity and collect metrics to ensure application performance and response time, evaluate network traffic
patterns, manage user access to services, and verify workload distribution and efficiency.
In addition to the configurations, you can also control the appearance of your NKP UI by adding banners and footers.
There are different options available depending on the NKP level that you license and install.

• Access Control on page 340


• Identity Providers on page 350
• Kubectl API Access Using an Identity Provider on page 357
• Infrastructure Providers on page 359
• Header, Footer, and Logo Implementation on page 374



Access Control
You can centrally manage access across clusters and define role-based authorization within the NKP UI to control
resource access on the management cluster for a set or all of the target clusters. These resources are similar to
Kubernetes RBAC but with crucial differences, and they make it possible to define the roles and role bindings once
and federate them to clusters within a given scope.
NKP UI has two conceptual groups of resources that are used to manage access control:

• Kommander Roles: control access to resources on the management clusters.


• Cluster Roles: control access to resources on all target clusters.
Use these two groups of resources to manage access control within three levels of scope:

Table 22: Managing Access Across Scopes

Global: Manages access to the entire environment.
• Kommander Roles: Create ClusterRoles on the management cluster.
• Cluster Roles: Federates ClusterRoles on all target clusters across all workspaces.

Workspace: Manages access to clusters in a specific workspace, for example, in the scope of multi-tenancy. See Multi-Tenancy in NKP on page 421.
• Kommander Roles: Create namespaced Roles on the management cluster in the workspace namespace.
• Cluster Roles: Federates ClusterRoles on all target clusters in the workspace.

Project: Manages access for clusters in a specific project, for example, in the scope of multi-tenancy. See Multi-Tenancy in NKP on page 421.
• Kommander Roles: Create namespaced Roles on the management cluster in the project namespace.
• Cluster Roles: Federates namespaced Roles on all target clusters in the project in the project namespace.

Creating the role bindings for each level and type creates RoleBindings or ClusterRoleBindings on the clusters
that apply to each category.
This approach gives you maximum flexibility over who has access to what resources, conveniently mapped to your
existing identity providers’ claims.

Limitation for Kommander Roles


In addition to granting a Kommander Role, you must also grant the appropriate NKP role to allow external users and
groups into the UI. For details about the built-in NKP roles, see Types of Access Control Objects on page 341.
Here are examples of ClusterRoleBindings that grant an identity provider (IdP) group (in this example, the user
group engineering) admin access to the Kommander routes:
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: eng-kommander-dashboard
labels:
"workspaces.kommander.mesosphere.io/rbac": ""
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nkp-kommander-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: oidc:engineering
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: eng-nkp-routes
labels:
"workspaces.kommander.mesosphere.io/rbac": ""
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nkp-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: oidc:engineering
EOF

Types of Access Control Objects


Manage Kubernetes role-based access control with three different object categories: Groups, Roles, and Policies.

Groups
Access control groups are configured in the Groups tab of the Identity Providers page.
You can map group and user claims made by your configured identity providers to Kommander groups by selecting
Administration > Identity Providers in the left sidebar at the global workspace level, and then selecting the Groups tab.

Roles
ClusterRoles are named collections of rules defining which verbs can be applied to what resources.

• Kommander Roles apply specifically to resources in the management cluster.


• Cluster Roles apply to target clusters within their scope at these levels:

• Global level - all target clusters in all workspaces.


• Workspace level - all target clusters in the workspace.
• Project level - all target clusters that are added to the project.

Propagating Workspace Roles to Projects


By default, users granted the Kommander Workspace Admin, Edit, or View role are also granted the
equivalent Kommander Project Admin, Edit, or View role for any project created in the workspace. Other
workspace roles are not automatically propagated to the equivalent role for a project in the workspace.

About this task


Each workspace has roles defined using KommanderWorkspaceRole resources. Automatic propagation is
controlled using the annotation "workspace.kommander.mesosphere.io/sync-to-project": "true" on a
KommanderWorkspaceRole resource. You can manage this only by using the CLI.

Procedure

1. Run the command kubectl get kommanderworkspaceroles -n <WORKSPACE_NAMESPACE>.


NAME DISPLAY NAME AGE
kommander-workspace-admin Kommander Workspace Admin Role 2m18s
kommander-workspace-edit Kommander Workspace Edit Role 2m18s
kommander-workspace-view Kommander Workspace View Role 2m18s

2. To prevent propagation of the kommander-workspace-view role, remove this annotation from the
KommanderWorkspaceRole resource.
kubectl annotate kommanderworkspacerole -n <WORKSPACE_NAMESPACE> kommander-workspace-
view workspace.kommander.mesosphere.io/sync-to-project-

3. To enable propagation of the role, add this annotation to the relevant KommanderWorkspaceRole resource.
kubectl annotate kommanderworkspacerole -n <WORKSPACE_NAMESPACE> kommander-workspace-
view workspace.kommander.mesosphere.io/sync-to-project=true

Limitation for Workspace


Because project role inheritance is limited, when granting users access to a workspace, you must manually grant
access to the projects within that workspace. Each project is created with a set of admin, edit, or view roles, and you
can choose to add RoleBinding to each group or user of the workspace for one of these project roles. Usually, these
are prefixed with one of the roles kommander-project-(admin/edit/view).
This is an example of RoleBinding that grants the Kommander Project Admin role access for the project namespace to
the engineering group:
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: workspace-admin-project1-admin
namespace: <my-project-namespace-xxxxx>
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: <kommander-project-admin-xxxxx>
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: oidc:engineering
EOF

Role Bindings
Kommander role bindings, cluster role bindings, and project role bindings bind a Kommander group to any number of
roles. All groups defined in the Groups tab are present at the global, workspace, or project levels and are ready for
you to assign roles to them.

Access to Kubernetes and Kommander Resources


You can grant access to Kommander and Kubernetes resources using RBAC.
Initially, users and groups from an external identity provider have no access to Kubernetes resources. Privileges
must be granted explicitly by interacting with the RBAC API. This section provides some basic examples for general
usage. For more information on the RBAC API, see the Using RBAC Authorization section in the Kubernetes
documentation at https://kubernetes.io/docs/reference/access-authn-authz/rbac/.
Kubernetes does not provide an identity database for standard users. A trusted identity provider must provide users
and group membership. In Kubernetes, RBAC policies are additive, which means that a subject (user, group, or
service account) is denied access to a resource unless explicitly granted access by a cluster administrator. You can
grant access by binding a subject to a role, which grants some level of access to one or more resources. Kubernetes
ships with some default roles, which aid in creating broad access control policies. For more information, see

Default roles and role bindings in the Kubernetes documentation https://kubernetes.io/docs/reference/access-
authn-authz/rbac/#default-roles-and-role-bindings.
For example, if you want to make [email protected] a cluster administrator, bind their username to the cluster-
admin default role as follows:
cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: mary-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: [email protected]
EOF

User to Namespace Restriction


A common example is granting users access to specific namespaces by creating a RoleBinding (RoleBindings are
namespaced scoped). For example, to make the user [email protected] a reader of the baz namespace, bind the
user to the view role:
cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: bob-view
namespace: baz
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: [email protected]
EOF
The user can now only perform non-destructive operations targeting resources in the baz namespace.

Groups
If your external identity provider supports group claims, you can also bind groups to roles. To make the engineering
LDAP group administrators of the production namespace bind the group to the admin role:
cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: engineering-admin
namespace: production
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: oidc:engineering

EOF
One important distinction from adding users is that all external groups are prefixed with oidc:, so a group named
devops becomes oidc:devops. This prevents collisions with locally defined groups.

NKP UI Authorization
The NKP UI and other HTTP applications protected by Kommander forward authentication are also authorized
by the Kubernetes RBAC API. In addition to Kubernetes API resources, you can define rules that
map to HTTP URIs and HTTP verbs. Kubernetes RBAC refers to these as nonResourceURLs; Kommander forward
authentication uses these rules to grant or deny access to HTTP endpoints.
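As a sketch of such a rule (the role name is illustrative; the path matches the defaults listed in the next section), a
ClusterRole can grant read-only access to an HTTP path by using nonResourceURLs instead of resources:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nkp-path-viewer
rules:
- nonResourceURLs: ["/nkp/*"]
  verbs: ["get", "head"]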

Default Roles
Roles are created to grant access to the dashboard and select applications that expose an HTTP server
through the ingress controller. The cluster-admin role is a system role that grants permission to all
actions (verbs) on any resource, including non-resource URLs. The default dashboard user is bound to this
role.

Note: Granting a user administrator privileges on /nkp/* grants admin privileges to all sub-resources, even if
bindings exist for sub-resources with fewer privileges.

Table 23: Default Dashboard Roles and Path Access

Dashboard | Role | Path | Access
* | cluster-admin | * | read, write, delete
kommander | nkp-view | /nkp/* | read
kommander | nkp-edit | /nkp/* | read, write
kommander | nkp-admin | /nkp/* | read, write, delete
kommander-dashboard | nkp-kommander-view | /nkp/kommander/dashboard/* | read
kommander-dashboard | nkp-kommander-edit | /nkp/kommander/dashboard/* | read, write
kommander-dashboard | nkp-kommander-admin | /nkp/kommander/dashboard/* | read, write, delete
alertmanager | nkp-kube-prometheus-stack-alertmanager-view | /nkp/alertmanager/* | read
alertmanager | nkp-kube-prometheus-stack-alertmanager-edit | /nkp/alertmanager/* | read, write
alertmanager | nkp-kube-prometheus-stack-alertmanager-admin | /nkp/alertmanager/* | read, write, delete
centralized-grafana | nkp-centralized-grafana-grafana-view | /nkp/kommander/monitoring/grafana/* | read
centralized-grafana | nkp-centralized-grafana-grafana-edit | /nkp/kommander/monitoring/grafana/* | read, write
centralized-grafana | nkp-centralized-grafana-grafana-admin | /nkp/kommander/monitoring/grafana/* | read, write, delete
centralized-kubecost | nkp-centralized-kubecost-view | /nkp/kommander/kubecost/* | read
centralized-kubecost | nkp-centralized-kubecost-edit | /nkp/kommander/kubecost/* | read, write
centralized-kubecost | nkp-centralized-kubecost-admin | /nkp/kommander/kubecost/* | read, write, delete
grafana | nkp-kube-prometheus-stack-grafana-view | /nkp/grafana/* | read
grafana | nkp-kube-prometheus-stack-grafana-edit | /nkp/grafana/* | read, write
grafana | nkp-kube-prometheus-stack-grafana-admin | /nkp/grafana/* | read, write, delete
grafana-logging | nkp-grafana-logging-view | /nkp/logging/grafana/* | read
grafana-logging | nkp-grafana-logging-edit | /nkp/logging/grafana/* | read, write
grafana-logging | nkp-grafana-logging-admin | /nkp/logging/grafana/* | read, write, delete
karma | nkp-karma-view | /nkp/kommander/monitoring/karma/* | read
karma | nkp-karma-edit | /nkp/kommander/monitoring/karma/* | read, write
karma | nkp-karma-admin | /nkp/kommander/monitoring/karma/* | read, write, delete
kubernetes-dashboard | nkp-kubernetes-dashboard-view | /nkp/kubernetes/* | read
kubernetes-dashboard | nkp-kubernetes-dashboard-edit | /nkp/kubernetes/* | read, write
kubernetes-dashboard | nkp-kubernetes-dashboard-admin | /nkp/kubernetes/* | read, write, delete
prometheus | nkp-kube-prometheus-stack-prometheus-view | /nkp/prometheus/* | read
prometheus | nkp-kube-prometheus-stack-prometheus-edit | /nkp/prometheus/* | read, write
prometheus | nkp-kube-prometheus-stack-prometheus-admin | /nkp/prometheus/* | read, write, delete
traefik | nkp-traefik-view | /nkp/traefik/* | read
traefik | nkp-traefik-edit | /nkp/traefik/* | read, write
traefik | nkp-traefik-admin | /nkp/traefik/* | read, write, delete
thanos | nkp-thanos-query-view | /nkp/kommander/monitoring/query/* | read
thanos | nkp-thanos-query-edit | /nkp/kommander/monitoring/query/* | read, write
thanos | nkp-thanos-query-admin | /nkp/kommander/monitoring/query/* | read, write, delete

Examples of Default Roles


This topic provides a few examples of binding subjects to the default roles defined for the NKP UI
endpoints.

User
To grant the user [email protected] administrative access to all Kommander resources, bind the user to the nkp-
admin role:
cat << EOF | kubectl apply -f -
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: nkp-admin-mary
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nkp-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: [email protected]
EOF
If you inspect the role, you can see what access is now granted:
kubectl describe clusterroles nkp-admin
Name: nkp-admin
Labels: app.kubernetes.io/instance=kommander
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/version=v2.0.0
helm.toolkit.fluxcd.io/name=kommander
helm.toolkit.fluxcd.io/namespace=kommander
rbac.authorization.k8s.io/aggregate-to-admin=true
Annotations: meta.helm.sh/release-name: kommander
meta.helm.sh/release-namespace: kommander
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
[/nkp/*] [] [delete]
[/nkp] [] [delete]
[/nkp/*] [] [get]
[/nkp] [] [get]
[/nkp/*] [] [head]
[/nkp] [] [head]
[/nkp/*] [] [post]
[/nkp] [] [post]
[/nkp/*] [] [put]
[/nkp] [] [put]
The user can now use the HTTP verbs HEAD, GET, DELETE, POST, and PUT when accessing any URL at or under
/nkp. Because the downstream applications follow REST conventions, this effectively grants read, edit, and delete
privileges.

Note: To enable users to access the NKP UI, ensure they have the appropriate nkp-kommander role and the
Kommander roles granted in the NKP UI.

Group
To grant view access to the /nkp/* endpoints and edit access to the grafana logging endpoint to group logging-
ops, create the following ClusterRoleBindings:
cat << EOF | kubectl apply -f -
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: nkp-view-logging-ops
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nkp-view
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: oidc:logging-ops
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: nkp-logging-edit-logging-ops
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nkp-logging-edit
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: oidc:logging-ops
EOF

Note: External groups must be prefixed by oidc:

Members of logging-ops can now view all the resources under /nkp and edit all the resources under
/nkp/logging/grafana.
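To spot-check the resulting access without logging in through the UI, you can impersonate a member of the group
with kubectl (the user name jane is hypothetical). Given the bindings above, the first command should return yes and
the second no:
kubectl auth can-i get /nkp --as=jane --as-group=oidc:logging-ops
kubectl auth can-i delete /nkp/logging/grafana --as=jane --as-group=oidc:logging-ops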

Creating Custom Roles


If one of the predefined roles from NKP does not include all the permissions you need, you can create a
custom role.

About this task


Perform the following tasks to assign actions and permissions to roles:

Procedure

1. In the Administration section of the sidebar menu, select Access Control.

2. Select the Cluster Roles tab, and then select + Create Role .

3. Enter a descriptive name for the role and ensure that Cluster Role is selected as the type.

4. For example, to configure a read-only role, select Add Rule.

a. In the Resources input, select All Resource Types.


b. Select the get, list, and watch options.
c. Click Save.
You can assign your newly created role to the developer's group.

Kubernetes Dashboard
The Kubernetes dashboard offloads authorization directly to the Kubernetes API server.
Once authenticated, all users may access the dashboard at /nkp/kubernetes/ without needing an nkp role.
However, the cluster RBAC policy protects access to the underlying Kubernetes resources exposed by the dashboard.
This topic describes some basic examples of operations that provide the building blocks for creating an access control
policy. For more information about creating your roles and advanced policies, see Using RBAC Authorization in the
Kubernetes documentation at https://kubernetes.io/docs/reference/access-authn-authz/rbac/. For information
on adding a user to a cluster as an administrator, see Onboarding a User to an NKP Cluster on page 348.

Onboarding a User to an NKP Cluster


After you install NKP and create a cluster, you can add new users to your environment.

Before you begin


You must have administrator rights. Also, ensure that:

• You have an LDAP Connector.


• You are a cluster administrator.
• You have a valid NKP license (Starter or Pro).
• You have a running cluster.
For information about adding users using other types of connectors, see:

• https://dexidp.io/docs/connectors/oidc/
• https://dexidp.io/docs/connectors/saml/
• https://dexidp.io/docs/connectors/github/
To onboard a user:

Procedure

1. Create an LDAP Connector definition and name the file ldap.yaml.


apiVersion: v1
kind: Secret
metadata:
name: ldap-password
namespace: kommander
type: Opaque
stringData:
password: superSecret
---
apiVersion: dex.mesosphere.io/v1alpha1
kind: Connector
metadata:
name: ldap

namespace: kommander
spec:
enabled: true
type: ldap
displayName: LDAP Test Connector
ldap:
host: ldapdce.testdomain
insecureNoSSL: true
bindDN: cn=ldapconnector,cn=testgroup,ou=testorg,dc=testdomain
bindSecretRef:
name: ldap-password
userSearch:
baseDN: dc=testdomain
filter: "(objectClass=inetOrgPerson)"
username: uid
idAttr: uid
emailAttr: uid
groupSearch:
baseDN: ou=testorg,dc=testdomain
filter: "(objectClass=posixGroup)"
userMatchers:
- userAttr: uid
groupAttr: memberUid
nameAttr: cn

2. Add the connector by using the command kubectl apply -f ldap.yaml.
The following output is displayed.
secret/ldap-password created
connector.dex.mesosphere.io/ldap created

3. Add the appropriate role bindings and name the file new_user.yaml.
See the following examples for both Single User and Group Bindings.

» For Single Users:


apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cluster-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: newUser

» For Group Binding:


apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: cluster-admin
namespace: ml
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:

- apiGroup: rbac.authorization.k8s.io
kind: Group
name: oidc:kommanderAdmins

4. Add the role bindings by using the command kubectl apply -f new_user.yaml.

Note:

• ClusterRoleBindings permissions are applicable at the global level.


• RoleBindings permissions are applicable at the namespace level.

Identity Providers
You can grant access to users in your organization.
NKP supports GitHub Identity Provider Configuration on page 351, Adding an LDAP Connector on
page 353, SAML,and standard OIDC identity providers such as Google. These identity management providers
support the login and authentication process for NKP and your Kubernetes clusters.
You can configure as many identity providers as you want, and users can select from any method when logging in. If
you have multiple workspaces in your environment, you can use a single identity provider to manage access for all of
them or choose to configure an identity provider per workspace.
Configuring a dedicated identity provider per workspace can be useful if you want to manage access to your
workspaces separately. In this case, users of a specific workspace have a dedicated login and authentication
page with the identity provider options configured for their workspace. This setup is particularly helpful if you have
multiple tenants. For more information, see Multi-Tenancy in NKP on page 421.

Advantages of Using an External Identity Provider


Using an external identity provider is beneficial for:

• Centralized management of multiple users and multiple clusters.


• Centralized management of password rotation, expiration, and so on.
• Support of 2-factor-authentication methods for increased security.
• Separate storage of user credentials.

Access Limitations

• The GitHub provider allows you to specify any organizations and teams that are eligible for access.
• The LDAP provider allows you to configure search filters for either users or groups.
• The OIDC provider cannot limit users based on identity.
• The SAML provider allows users to log in using a single sign-on (SSO).

Configuring an Identity Provider Through the UI


You can configure an identity provider through the UI.

Before you begin


To configure an identity provider:

Procedure

1. Log into the Kommander UI. See Logging In To the UI on page 74.

2. From the dropdown list, select the Global workspace.

3. Select Administration > Identity Providers.

4. Select the Identity Providers tab.

5. Select Add Identity Provider.

6. Select an identity provider.

7. Select the target workspace for the identity provider and complete the fields with the relevant details.

Note: You can configure an identity provider globally for your entire organization using the All Workspaces
option, or per workspace, enabling multi-tenancy.

8. Click Save.

Disabling an Identity Provider


You can disable an identity provider temporarily.

Procedure

1. Select the three-dot button on the Identity Providers table.

2. Select Disable from the dropdown list.


The provider option no longer appears on the Identity Provider page.

GitHub Identity Provider Configuration


You can configure GitHub as an identity provider and grant access to NKP.
NKP allows you to authorize access to your clusters and the UI with GitHub credentials, but this must be configured in
the dashboard. To ensure every developer in your GitHub organization can access your Kubernetes clusters using their
GitHub credentials, add that login option by adding an identity provider with the information from your GitHub
profile in the OAuth application settings.
The first login requires you to authorize the GitHub account. As an administrator of the cluster, select the Authorize
github-username button on the page that follows the login. After setting up the GitHub authorization, the future
login screens will have the Log in with github-auth button as an option.

Adding an Identity Provider Using GitHub

To authorize all developers to access your clusters using their GitHub credentials, set up GitHub as an
identity provider login option.

Procedure

1. Start by creating a new OAuth Application in your GitHub organization by completing the registration form. To
view the form, see https://github.com/settings/applications/new.

2. In the Application name field, enter a name for your application.

3. In the Homepage URL field, enter your cluster URL.

4. In the Authorization callback URL field, enter your cluster URL followed by /dex/callback.

5. Click Register application.


After you register the application, the Settings page appears.

6. You need the Client ID and Client Secret from this page for the NKP UI.
If you do not have a Client Secret for the application, select Generate a new client secret.

7. Log in to your NKP UI and, from the top menu bar, select the Global workspace.

8. Select Identity Providers in the Administration section of the sidebar menu.

9. Select the Identity Providers tab and then click Add Identity Provider .

10. Select GitHub as the identity provider type, and select the target workspace.

11. Copy the Client ID and Client Secret values from GitHub into this form.

12. To configure dex to load all the groups configured in the user's GitHub identity, select the Load All Groups
checkbox.
This allows you to configure group-specific access to NKP and Kubernetes resources.

Note: Do not select the Enable Device Flow checkbox before selecting Register the Application.

13. Click Save.

Mapping the Identity Provider Groups to the Kubernetes Groups

You can map the identity provider groups to the Kubernetes groups.

Procedure

1. In the NKP UI, select the Groups tab from the Identity Provider screen, and then click Create Group.

2. In the Enter Name field, enter a descriptive name.

3. Add the groups or teams from your GitHub provider under Identity Provider Groups.
For more information on finding the teams to which you are assigned in GitHub, see the Changing team visibility
section at https://docs.github.com/en/organizations/organizing-members-into-teams/changing-team-
visibility.

4. Click Save.

Assigning a Role to the Developers Group

After defining a group, bind one or more roles to this group. This topic describes how to bind the group to
the View Only role.

Procedure

1. In the NKP UI, from the top menu bar, select Global or the target workspace.

2. Select the Cluster Role Bindings tab and then select Add roles.

3. Select View Only role from the Roles dropdown list and select Save.
For more information on granting users access to Kommander paths on your cluster, see Access to Kubernetes
and Kommander Resources on page 342.

4. At a minimum, add a read-only path for access to all the Kommander Dashboard views:

Table 24: Kommander Dashboard Views

Dashboard | Role | Path | Access
kommander-dashboard | nkp-kommander-view | /nkp/kommander/dashboard/* | read

When you check your attached clusters and log in as a user from one of your matched groups, every resource is
listed, but you cannot delete or edit them.

External LDAP Directory Configuration


You can connect your cluster to an external LDAP directory. Configure your NKP cluster for logging in with the
credentials stored in an external LDAP directory service.

Adding an LDAP Connector

Each LDAP directory is set up in its own unique way, so these steps are important. Add the LDAP authentication
mechanism using the CLI or UI.

About this task


This topic describes the configuration of an NKP cluster to connect to the Online LDAP Test Server on the Forum
Systems Web site at https://www.forumsys.com/tutorials/integration-how-to/ldap/online-ldap-test-server/. For
demonstration purposes, the configuration shown uses insecureNoSSL: true. In production, you should protect
LDAP communication with properly configured transport layer security (TLS). When using TLS, as an admin, you
can add insecureSkipVerify: true to spec.ldap to skip server certificate verification, if needed.

Note: This topic does not cover all possible configurations. For more information, see Dex LDAP connector reference
documentation on GitHub at https://github.com/dexidp/dex/blob/v2.22.0/Documentation/connectors/
ldap.md.
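For example, a TLS-protected connector typically points at the LDAPS port and, if required, skips server certificate
verification; the host value below is illustrative:
spec:
  ldap:
    host: ldap.example.com:636
    insecureSkipVerify: true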

Procedure
Choose whether to establish an external LDAP globally or for a specific workspace.

» Global LDAP - identity provider serves all workspaces: Create and apply the following objects:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
name: ldap-password
namespace: kommander
type: Opaque
stringData:
password: password
---
apiVersion: dex.mesosphere.io/v1alpha1
kind: Connector
metadata:
name: ldap
namespace: kommander
spec:
enabled: true

type: ldap
displayName: LDAP Test
ldap:
host: ldap.forumsys.com:389
insecureNoSSL: true
bindDN: cn=read-only-admin,dc=example,dc=com
bindSecretRef:
name: ldap-password
userSearch:
baseDN: dc=example,dc=com
filter: "(objectClass=inetOrgPerson)"
username: uid
idAttr: uid
emailAttr: mail
groupSearch:
baseDN: dc=example,dc=com
filter: "(objectClass=groupOfUniqueNames)"
userMatchers:
- userAttr: DN
groupAttr: uniqueMember
nameAttr: ou
EOF

Note: The value for the LDAP connector spec:displayName (here LDAP Test) appears on the Login button
for this identity provider in the NKP UI. Choose a name that your users will recognize.

» Workspace LDAP - identity provider serves a specific workspace: Create and apply the following objects:

Note: Establish LDAP for a specific workspace if you are operating in the scope of multiple tenants.

1. Obtain the workspace name for which you are establishing an LDAP authentication server.
kubectl get workspaces
Note down the value under the WORKSPACE NAMESPACE column.
2. Set the WORKSPACE_NAMESPACE environment variable to that namespace.
export WORKSPACE_NAMESPACE=<your-namespace>

3. Create and apply the following objects on that workspace.


cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
name: ldap-password
namespace: ${WORKSPACE_NAMESPACE}
type: Opaque
stringData:
password: password
---
apiVersion: dex.mesosphere.io/v1alpha1
kind: Connector
metadata:
name: ldap
namespace: ${WORKSPACE_NAMESPACE}
spec:
enabled: true
type: ldap
displayName: LDAP Test
ldap:
host: ldap.forumsys.com:389
insecureNoSSL: true

bindDN: cn=read-only-admin,dc=example,dc=com
bindSecretRef:
name: ldap-password
userSearch:
baseDN: dc=example,dc=com
filter: "(objectClass=inetOrgPerson)"
username: uid
idAttr: uid
emailAttr: mail
groupSearch:
baseDN: dc=example,dc=com
filter: "(objectClass=groupOfUniqueNames)"
userMatchers:
- userAttr: DN
groupAttr: uniqueMember
nameAttr: ou
EOF

Note: The value for the LDAP connector spec:displayName (here LDAP Test) appears
on the Login button for this identity provider in the NKP UI. Choose a name that your users will recognize.

Testing the LDAP Connector

You can test the LDAP connector.

Procedure

1. Retrieve a list of connectors using the kubectl get connector.dex.mesosphere.io -A command.

2. Run the kubectl get Connector.dex.mesosphere.io -n kommander <LDAP-CONNECTOR-NAME> -o yaml
command to verify that the LDAP connector was created successfully.

Logging In for Global LDAP

Global LDAP identity provider serves all workspaces.

Procedure

1. Visit https://<YOUR-CLUSTER-HOST>/token and initiate a login flow.

2. On the login page, click Log in with <ldap-name>.

3. Enter the LDAP credentials and log in.

Note: In the UI, after the LDAP authentication is enabled, additional access rights must be configured using the
Add Identity Provider page in the UI.

Logging In for Workspace LDAP

Workspace LDAP identity provider serves a specific workspace.

Procedure

1. Complete the steps in Generating a Dedicated Login URL for Each Tenant on page 423.

2. On the login page, click Log in with <ldap-name>.

3. Enter the LDAP credentials and log in.

Note: In the UI, after the LDAP authentication is enabled, additional access rights must be configured using the
Add Identity Provider page in the UI.

LDAP Troubleshooting

If the Dex LDAP connector configuration is incorrect, debug the problem, and iterate on it. The Dex log
output contains helpful error messages, as indicated in the following examples:

Reading Errors During Dex Startup


About this task


If the Dex configuration fragment provided results in an invalid Dex config file, Dex does not start up
properly. In that case, read the error details by reviewing the Dex logs.

Procedure

1. Use the kubectl logs -f dex-66675fcb7c-snxb8 -n kommander command to retrieve the Dex logs.
You may see an error similar to the following example:
error parse config file /etc/dex/cfg/config.yaml: error unmarshaling JSON: parse
connector config: illegal base64 data at input byte 0

2. Another symptom of Dex not starting up correctly is that https://<YOUR-CLUSTER-HOST>/token displays a
5xx HTTP error response after timing out.
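A quick way to check this from the command line (assuming curl is available on your workstation):
curl -k -I https://<YOUR-CLUSTER-HOST>/token
A 5xx response or a timeout indicates that Dex is not serving requests.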

Errors Upon Login

Most problems with the Dex LDAP connector configuration become apparent only after a login attempt. A login that
fails from misconfiguration results in an error displaying only Internal Server Error and Login error. You
can find the root cause by reading the Dex log, as shown in the following example.
kubectl logs -f dex-5d55b6b94b-9pm2d -n kommander
You can look for output similar to this example.
[...]
time="2019-07-29T13:03:57Z" level=error msg="Failed to login user: failed to connect:
LDAP Result Code 200 \"Network Error\": dial tcp: lookup freeipa.example.com on
10.255.0.10:53: no such host"
Here, the directory’s DNS name was misconfigured, which should be easy to address.
A more difficult problem occurs when a login through Dex using LDAP fails because Dex cannot find the specified
user unambiguously in the directory. That is the result of an invalid LDAP user search configuration. Here’s an
example error message from the Dex log.
time="2019-07-29T14:21:27Z" level=info msg="performing ldap search
cn=users,cn=compat,dc=demo1,dc=freeipa,dc=org sub (&(objectClass=posixAccount)
(uid=employee))"
time="2019-07-29T14:21:27Z" level=error msg="Failed to login user: ldap: filter
returned multiple (2) results: \"(&(objectClass=posixAccount)(uid=employee))\""
Solving problems like this requires you to review the directory structures carefully. Directory structures can be very
different between different LDAP setups. You must carefully assemble a user search configuration matching the
directory structure.

Notably, with some directories, it can be hard to distinguish between the case where the search is properly
configured but the user is not found (login fails in an expected way) and the case where the search is not properly
configured and therefore the user is not found (login fails in an unexpected way).

Successful Login Example

For comparison, here are some sample log lines issued by Dex for a successful login:
time="2019-07-29T15:35:51Z" level=info msg="performing ldap search
cn=accounts,dc=demo1,dc=freeipa,dc=org sub (&(objectClass=posixAccount)
(uid=employee))"
time="2019-07-29T15:35:52Z" level=info msg="username \"employee\" mapped to entry
uid=employee,cn=users,cn=accounts,dc=demo1,dc=freeipa,dc=org"
time="2019-07-29T15:35:52Z" level=info msg="login successful: connector \"ldap\",
username=\"\", email=\"[email protected]\", groups=[]"

Kubectl API Access Using an Identity Provider


After installing NKP, a single user with admin rights and static credentials is available. However, static
credentials are hard to manage and replace.
To allow other users and user groups to access your environment, Nutanix recommends setting up an external identity
provider. Users added through an identity provider do not have static credentials, but have to generate a token to gain
access to your environment’s kubectl API. This token ensures that certificates are rotated continuously for security
reasons.
There are two options for the generation of this token:

Table 25: Token Generation

Method | How Often Does the User Have to Generate a Token?
Generating a token | The user must log in with credentials and manually generate a kubeconfig file with a fresh token every 24 hours.
Enabling the Konvoy Async Plugin | The user configures the Konvoy Async Plugin so that authentication is routed through Dex's OIDC and the token is generated automatically. By enabling the plugin, the user is routed to an additional login procedure for authentication, but no longer has to generate a token manually in the UI.

The instructions for either generating a token manually or enabling the Konvoy Async Plugin differ slightly
depending on whether you configured the identity provider globally for all the workspaces, or individually for a single
workspace.

Configuring Token Authentication for Global Identity Providers

In this scenario, the Identity Provider serves all workspaces.

About this task

Note: You must manually generate a new token every 24 hours.

Procedure

1. Log in to the NKP UI with your credentials.

2. Select your username.

3. Select Generate Token.

4. Login again.

5. If there are several clusters, select the target cluster.

6. Follow the instructions on the displayed page.

Enabling the Konvoy Async Plugin for Global Identity Providers

Enable the Konvoy Async Plugin to automatically update the token.

Before you begin


You or a global admin must configure an identity provider to see this option.

Procedure

1. Open the login URL.

2. To authenticate, select Konvoy credentials plugin instructions.

3. Follow the instructions on the displayed (Konvoy) Credentials plugin instructions page.


If you use Method 1 in the instructions documented in the (Konvoy) Credentials plugin instructions, then
download a kubeconfig file that includes the contexts for all clusters.
Alternatively, to switch between clusters, you can use Method 2 to create a kubeconfig file per cluster and use
the --kubeconfig= flag or export KUBECONFIG= commands.

Warning: If you choose Method 2, the Set profile name field is not optional if you have multiple clusters in
your environment. Ensure you change the name of the profile for each cluster for which you want to generate a
kubeconfig file. Otherwise, all clusters will use the same token, which makes cluster authentication vulnerable
and can let users access clusters for which they do not have authorization.
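For example, with per-cluster kubeconfig files named cluster-a.conf and cluster-b.conf (illustrative names), you can
switch between clusters as follows:
kubectl --kubeconfig=./cluster-a.conf get nodes
export KUBECONFIG=./cluster-b.conf
kubectl get nodes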

Configuring Token Authentication for Workspace Identity Providers

In this scenario, the identity provider serves a specific workspace or tenant.

About this task

Note: You must manually generate a new token every 24 hours.

Procedure

1. Open the login link you obtained from the global administrator, which they generated for your workspace or
tenant.

2. Select Generate Kubectl Configuration.

3. If there are several clusters in the workspace, select the cluster for which you want to generate a token.

4. Log in with your credentials.

5. Follow the instructions on the page displayed.

Enabling the Konvoy Async Plugin for Workspace Identity Providers

Enable the Konvoy Async Plugin to automatically update the token.

Before you begin


You or a global admin must configure a workspace-scoped identity provider to see this option.

Procedure

1. Open the login link you obtained from the global administrator, which they generated for your workspace or
tenant.

2. Select Credentials plugin instructions.

3. Follow the instructions on the (Konvoy) Credentials plugin instructions page.


If you use Method 1 in the instructions documented in the (Konvoy) Credentials plugin instructions, then
download a kubeconfig file that includes the contexts for all clusters.
Alternatively, to switch between clusters, you can use Method 2 to create a kubeconfig file per cluster and use
the --kubeconfig= flag or export KUBECONFIG= commands.

Warning: If you choose Method 2, the Set profile name field is not optional if you have multiple clusters in
your environment. Ensure you change the name of the profile for each cluster for which you want to generate a
kubeconfig file. Otherwise, all clusters will use the same token, which makes cluster authentication vulnerable
and can let users access clusters for which they do not have authorization.

Infrastructure Providers
Infrastructure providers, such as AWS, Azure, and vSphere, provide the infrastructure for your
Management clusters. You may have many accounts for a single infrastructure provider. To automate
cluster provisioning, NKP needs authentication keys for your preferred infrastructure provider.
To provision new clusters and manage them in the NKP UI, NKP also needs infrastructure provider credentials.
Currently, you can create infrastructure provider records for:

• AWS: Creating an AWS Infrastructure Provider with a User Role on page 360
• Azure: Creating an Azure Infrastructure Provider in the UI on page 372
• vSphere: Creating a vSphere Infrastructure Provider in the UI on page 373
Infrastructure provider credentials are configured in each workspace. The name you assign must be unique across all
the other namespaces in your cluster.

Viewing and Modifying Infrastructure Providers


You can use the NKP UI to view, create, and delete infrastructure provider records.

Procedure

1. From the top menu bar, select your target workspace.

2. In the Administration section of the sidebar menu, select Infrastructure Providers.

• AWS:

• Configure AWS Provider with Role Credentials (Recommended if using AWS).


• Configure AWS Provider with static credentials.
• Azure:


• Create an Azure Infrastructure Provider in the NKP UI.
• Create a managed Azure cluster in the NKP UI.
• vSphere:


• Create a vSphere Infrastructure Provider in the NKP UI.
• Create a managed cluster on vSphere in the NKP UI.
• VMware Cloud Director (VCD):


• Create a VMware Cloud Director Infrastructure Provider in the NKP UI.
• Create a managed Cluster on VCD in the NKP UI

Deleting an infrastructure provider


Before deleting an infrastructure provider, NKP verifies whether any existing managed clusters were
created using this provider.

Procedure
To delete an infrastructure provider, first delete all the clusters created with that infrastructure provider.
This ensures that NKP has access to your infrastructure provider to remove all the resources created for a managed
cluster.

Creating an AWS Infrastructure Provider with a User Role


You can create an AWS Infrastructure Provider in the NKP UI. Create your provider to add resources to
your AWS account.

About this task

Important: Nutanix recommends using the role-based method as this is more secure.

Note: The role authentication method can only be used if your management cluster is running in AWS.

For more flexible credential configuration, we offer a role-based authentication method with an optional External
ID for third party access. For more information, see the IAM roles for Amazon EC2 in the AWS documentation at
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html.

Procedure

1. Complete the steps in Create a Role Manually on page 361.

2. From the top menu bar, select your target workspace.

3. In the Administration section of the sidebar menu, select Infrastructure Providers.

4. Select Add Infrastructure Provider.

5. Select the Amazon Web Services (AWS) option.

6. Ensure Role is selected as the Authentication Method.

7. Enter a name for your infrastructure provider.


Select a name that matches the AWS user.

8. Enter the Role ARN.

9. If you want to share the role with a third party, add an External ID. External IDs help prevent your roles from
being used accidentally by unintended parties. For more information, see How to use an external ID when granting access to your
AWS resources to a third party in the AWS documentation at https://docs.aws.amazon.com/IAM/latest/
UserGuide/id_roles_create_for-user_externalid.html.

10. Click Save.

Create a Role Manually

Create a role manually before configuring an AWS Infrastructure Provider with a User Role.

About this task


The role should grant permissions to create the following resources in the AWS account:

• EC2 Instances
• VPC
• Subnets
• Elastic Load Balancer (ELB)
• Internet Gateway
• NAT Gateway
• Elastic Block Storage (EBS) Volumes
• Security Groups
• Route Tables
• IAM Roles

Procedure

1. The user you delegate from your role must have a minimum set of permissions. The following snippet is the
minimal IAM policy required.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:AllocateAddress",
"ec2:AssociateRouteTable",
"ec2:AttachInternetGateway",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:CreateInternetGateway",

"ec2:CreateNatGateway",
"ec2:CreateRoute",
"ec2:CreateRouteTable",
"ec2:CreateSecurityGroup",
"ec2:CreateSubnet",
"ec2:CreateTags",
"ec2:CreateVpc",
"ec2:ModifyVpcAttribute",
"ec2:DeleteInternetGateway",
"ec2:DeleteNatGateway",
"ec2:DeleteRouteTable",
"ec2:DeleteSecurityGroup",
"ec2:DeleteSubnet",
"ec2:DeleteTags",
"ec2:DeleteVpc",
"ec2:DescribeAccountAttributes",
"ec2:DescribeAddresses",
"ec2:DescribeAvailabilityZones",
"ec2:DescribeInstances",
"ec2:DescribeInternetGateways",
"ec2:DescribeImages",
"ec2:DescribeNatGateways",
"ec2:DescribeNetworkInterfaces",
"ec2:DescribeNetworkInterfaceAttribute",
"ec2:DescribeRouteTables",
"ec2:DescribeSecurityGroups",
"ec2:DescribeSubnets",
"ec2:DescribeVpcs",
"ec2:DescribeVpcAttribute",
"ec2:DescribeVolumes",
"ec2:DetachInternetGateway",
"ec2:DisassociateRouteTable",
"ec2:DisassociateAddress",
"ec2:ModifyInstanceAttribute",
"ec2:ModifyNetworkInterfaceAttribute",
"ec2:ModifySubnetAttribute",
"ec2:ReleaseAddress",
"ec2:RevokeSecurityGroupIngress",
"ec2:RunInstances",
"ec2:TerminateInstances",
"tag:GetResources",
"elasticloadbalancing:AddTags",
"elasticloadbalancing:CreateLoadBalancer",
"elasticloadbalancing:ConfigureHealthCheck",
"elasticloadbalancing:DeleteLoadBalancer",
"elasticloadbalancing:DescribeLoadBalancers",
"elasticloadbalancing:DescribeLoadBalancerAttributes",
"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
"elasticloadbalancing:DescribeTags",
"elasticloadbalancing:ModifyLoadBalancerAttributes",
"elasticloadbalancing:RegisterInstancesWithLoadBalancer",
"elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
"elasticloadbalancing:RemoveTags",
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeInstanceRefreshes",
"ec2:CreateLaunchTemplate",
"ec2:CreateLaunchTemplateVersion",
"ec2:DescribeLaunchTemplates",
"ec2:DescribeLaunchTemplateVersions",
"ec2:DeleteLaunchTemplate",
"ec2:DeleteLaunchTemplateVersions",
"ec2:DescribeKeyPairs"

],
"Resource": ["*"]
},
{
"Effect": "Allow",
"Action": [
"autoscaling:CreateAutoScalingGroup",
"autoscaling:UpdateAutoScalingGroup",
"autoscaling:CreateOrUpdateTags",
"autoscaling:StartInstanceRefresh",
"autoscaling:DeleteAutoScalingGroup",
"autoscaling:DeleteTags"
],
"Resource": [
"arn:*:autoscaling:*:*:autoScalingGroup:*:autoScalingGroupName/*"
]
},
{
"Effect": "Allow",
"Action": ["iam:CreateServiceLinkedRole"],
"Resource": [
"arn:*:iam::*:role/aws-service-role/autoscaling.amazonaws.com/
AWSServiceRoleForAutoScaling"
],
"Condition": {
"StringLike": { "iam:AWSServiceName": "autoscaling.amazonaws.com" }
}
},
{
"Effect": "Allow",
"Action": ["iam:CreateServiceLinkedRole"],
"Resource": [
"arn:*:iam::*:role/aws-service-role/elasticloadbalancing.amazonaws.com/
AWSServiceRoleForElasticLoadBalancing"
],
"Condition": {
"StringLike": {
"iam:AWSServiceName": "elasticloadbalancing.amazonaws.com"
}
}
},
{
"Effect": "Allow",
"Action": ["iam:CreateServiceLinkedRole"],
"Resource": [
"arn:*:iam::*:role/aws-service-role/spot.amazonaws.com/
AWSServiceRoleForEC2Spot"
],
"Condition": {
"StringLike": { "iam:AWSServiceName": "spot.amazonaws.com" }
}
},
{
"Effect": "Allow",
"Action": ["iam:PassRole"],
"Resource": ["arn:*:iam::*:role/*.cluster-api-provider-aws.sigs.k8s.io"]
},
{
"Effect": "Allow",
"Action": [
"secretsmanager:CreateSecret",
"secretsmanager:DeleteSecret",

"secretsmanager:TagResource"
],
"Resource": ["arn:*:secretsmanager:*:*:secret:aws.cluster.x-k8s.io/*"]
},
{
"Effect": "Allow",
"Action": ["ssm:GetParameter"],
"Resource": ["arn:*:ssm:*:*:parameter/aws/service/eks/optimized-ami/*"]
},
{
"Effect": "Allow",
"Action": ["iam:CreateServiceLinkedRole"],
"Resource": [
"arn:*:iam::*:role/aws-service-role/eks.amazonaws.com/
AWSServiceRoleForAmazonEKS"
],
"Condition": {
"StringLike": { "iam:AWSServiceName": "eks.amazonaws.com" }
}
},
{
"Effect": "Allow",
"Action": ["iam:CreateServiceLinkedRole"],
"Resource": [
"arn:*:iam::*:role/aws-service-role/eks-nodegroup.amazonaws.com/
AWSServiceRoleForAmazonEKSNodegroup"
],
"Condition": {
"StringLike": { "iam:AWSServiceName": "eks-nodegroup.amazonaws.com" }
}
},
{
"Effect": "Allow",
"Action": ["iam:CreateServiceLinkedRole"],
"Resource": [
"arn:aws:iam::*:role/aws-service-role/eks-fargate-pods.amazonaws.com/
AWSServiceRoleForAmazonEKSForFargate"
],
"Condition": {
"StringLike": { "iam:AWSServiceName": "eks-fargate.amazonaws.com" }
}
},
{
"Effect": "Allow",
"Action": ["iam:GetRole", "iam:ListAttachedRolePolicies"],
"Resource": ["arn:*:iam::*:role/*"]
},
{
"Effect": "Allow",
"Action": ["iam:GetPolicy"],
"Resource": ["arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"]
},
{
"Effect": "Allow",
"Action": [
"eks:DescribeCluster",
"eks:ListClusters",
"eks:CreateCluster",
"eks:TagResource",
"eks:UpdateClusterVersion",
"eks:DeleteCluster",
"eks:UpdateClusterConfig",

"eks:UntagResource",
"eks:UpdateNodegroupVersion",
"eks:DescribeNodegroup",
"eks:DeleteNodegroup",
"eks:UpdateNodegroupConfig",
"eks:CreateNodegroup",
"eks:AssociateEncryptionConfig"
],
"Resource": ["arn:*:eks:*:*:cluster/*", "arn:*:eks:*:*:nodegroup/*/*/*"]
},
{
"Effect": "Allow",
"Action": [
"eks:ListAddons",
"eks:CreateAddon",
"eks:DescribeAddonVersions",
"eks:DescribeAddon",
"eks:DeleteAddon",
"eks:UpdateAddon",
"eks:TagResource",
"eks:DescribeFargateProfile",
"eks:CreateFargateProfile",
"eks:DeleteFargateProfile"
],
"Resource": ["*"]
},
{
"Effect": "Allow",
"Action": ["iam:PassRole"],
"Resource": ["*"],
"Condition": {
"StringEquals": { "iam:PassedToService": "eks.amazonaws.com" }
}
},
{
"Effect": "Allow",
"Action": ["kms:CreateGrant", "kms:DescribeKey"],
"Resource": ["*"],
"Condition": {
"ForAnyValue:StringLike": {
"kms:ResourceAliases": "alias/cluster-api-provider-aws-*"
}
}
}
]
}
Make sure to also add a correct trust relationship to the created role.
The following example allows everyone within the same account to use AssumeRole with the created role.

2. Replace YOURACCOUNTRESTRICTION with the AWS Account ID from which you want to allow AssumeRole.

Note: Never use a wildcard (*) here. This opens your account to the public.

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com",
"AWS": "arn:aws:iam::YOURACCOUNTRESTRICTION:root"

},
"Action": "sts:AssumeRole"
}
]
}

3. To use the role you created, attach the following policy to the role that is already attached to your managed or
attached cluster. Replace YOURACCOUNTRESTRICTION with the AWS Account ID in which the role to assume is
saved. Also, replace THEROLEYOUCREATED with the name of the AWS role you created.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AssumeRoleKommander",
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::YOURACCOUNTRESTRICTION:role/THEROLEYOUCREATED"
}
]
}
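If you prefer to create the role from the command line instead of the AWS console, a minimal sketch looks like the
following; the policy and file names are assumptions, and the referenced files contain the permissions policy and
trust relationship shown in the preceding steps:
aws iam create-role --role-name THEROLEYOUCREATED \
  --assume-role-policy-document file://trust-policy.json
aws iam put-role-policy --role-name THEROLEYOUCREATED \
  --policy-name nkp-provider-permissions --policy-document file://permissions-policy.json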

Configuring an AWS Infrastructure Provider with Static Credentials


When configuring an infrastructure provider with static credentials, you need an access ID and secret key
for a user with a set of minimum capabilities.

About this task


To create an AWS infrastructure provider with static credentials:

Procedure

1. In NKP, select the workspace associated with the credentials that you are adding.

2. Navigate to Administration > Infrastructure Providers, and click Add Infrastructure Provider .

3. Select the Amazon Web Services (AWS) option.

4. Ensure Static is selected as the authentication method.

5. Enter a name for your infrastructure provider for later reference.


Consider choosing a name that matches the AWS user.

6. Enter an access ID and secret key using the keys generated above.

7. Click Save to save your provider.

Creating a New User Using CLI

You can create a new user using CLI.

Before you begin


You must install the AWS CLI utility. For more information, see Install or update to the latest version of the
AWS CLI in the AWS documentation at https://docs.aws.amazon.com/cli/latest/userguide/getting-started-
install.html.

Procedure
Create a new user with the following AWS CLI commands (the policy document is the same minimal IAM policy
shown later in Using an Existing User to Configure an AWS Infrastructure); an example of the output of the last
command follows this list.

• aws iam create-user --user-name Kommander

• aws iam create-policy --policy-name kommander-policy --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "ec2:AllocateAddress", "ec2:AssociateRouteTable", "ec2:AttachInternetGateway",
          "ec2:AuthorizeSecurityGroupIngress", "ec2:CreateInternetGateway", "ec2:CreateNatGateway",
          "ec2:CreateRoute", "ec2:CreateRouteTable", "ec2:CreateSecurityGroup", "ec2:CreateSubnet",
          "ec2:CreateTags", "ec2:CreateVpc", "ec2:ModifyVpcAttribute", "ec2:DeleteInternetGateway",
          "ec2:DeleteNatGateway", "ec2:DeleteRouteTable", "ec2:DeleteSecurityGroup", "ec2:DeleteSubnet",
          "ec2:DeleteTags", "ec2:DeleteVpc", "ec2:DescribeAccountAttributes", "ec2:DescribeAddresses",
          "ec2:DescribeAvailabilityZones", "ec2:DescribeInstances", "ec2:DescribeInternetGateways",
          "ec2:DescribeImages", "ec2:DescribeNatGateways", "ec2:DescribeNetworkInterfaces",
          "ec2:DescribeNetworkInterfaceAttribute", "ec2:DescribeRouteTables", "ec2:DescribeSecurityGroups",
          "ec2:DescribeSubnets", "ec2:DescribeVpcs", "ec2:DescribeVpcAttribute", "ec2:DescribeVolumes",
          "ec2:DetachInternetGateway", "ec2:DisassociateRouteTable", "ec2:DisassociateAddress",
          "ec2:ModifyInstanceAttribute", "ec2:ModifyNetworkInterfaceAttribute", "ec2:ModifySubnetAttribute",
          "ec2:ReleaseAddress", "ec2:RevokeSecurityGroupIngress", "ec2:RunInstances", "ec2:TerminateInstances",
          "tag:GetResources", "elasticloadbalancing:AddTags", "elasticloadbalancing:CreateLoadBalancer",
          "elasticloadbalancing:ConfigureHealthCheck", "elasticloadbalancing:DeleteLoadBalancer",
          "elasticloadbalancing:DescribeLoadBalancers", "elasticloadbalancing:DescribeLoadBalancerAttributes",
          "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer", "elasticloadbalancing:DescribeTags",
          "elasticloadbalancing:ModifyLoadBalancerAttributes",
          "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
          "elasticloadbalancing:DeregisterInstancesFromLoadBalancer", "elasticloadbalancing:RemoveTags",
          "autoscaling:DescribeAutoScalingGroups", "autoscaling:DescribeInstanceRefreshes",
          "ec2:CreateLaunchTemplate", "ec2:CreateLaunchTemplateVersion", "ec2:DescribeLaunchTemplates",
          "ec2:DescribeLaunchTemplateVersions", "ec2:DeleteLaunchTemplate", "ec2:DeleteLaunchTemplateVersions",
          "ec2:DescribeKeyPairs"
        ],
        "Resource": ["*"]
      },
      {
        "Effect": "Allow",
        "Action": [
          "autoscaling:CreateAutoScalingGroup", "autoscaling:UpdateAutoScalingGroup",
          "autoscaling:CreateOrUpdateTags", "autoscaling:StartInstanceRefresh",
          "autoscaling:DeleteAutoScalingGroup", "autoscaling:DeleteTags"
        ],
        "Resource": ["arn:*:autoscaling:*:*:autoScalingGroup:*:autoScalingGroupName/*"]
      },
      {
        "Effect": "Allow",
        "Action": ["iam:CreateServiceLinkedRole"],
        "Resource": ["arn:*:iam::*:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling"],
        "Condition": { "StringLike": { "iam:AWSServiceName": "autoscaling.amazonaws.com" } }
      },
      {
        "Effect": "Allow",
        "Action": ["iam:CreateServiceLinkedRole"],
        "Resource": ["arn:*:iam::*:role/aws-service-role/elasticloadbalancing.amazonaws.com/AWSServiceRoleForElasticLoadBalancing"],
        "Condition": { "StringLike": { "iam:AWSServiceName": "elasticloadbalancing.amazonaws.com" } }
      },
      {
        "Effect": "Allow",
        "Action": ["iam:CreateServiceLinkedRole"],
        "Resource": ["arn:*:iam::*:role/aws-service-role/spot.amazonaws.com/AWSServiceRoleForEC2Spot"],
        "Condition": { "StringLike": { "iam:AWSServiceName": "spot.amazonaws.com" } }
      },
      {
        "Effect": "Allow",
        "Action": ["iam:PassRole"],
        "Resource": ["arn:*:iam::*:role/*.cluster-api-provider-aws.sigs.k8s.io"]
      },
      {
        "Effect": "Allow",
        "Action": ["secretsmanager:CreateSecret", "secretsmanager:DeleteSecret", "secretsmanager:TagResource"],
        "Resource": ["arn:*:secretsmanager:*:*:secret:aws.cluster.x-k8s.io/*"]
      },
      {
        "Effect": "Allow",
        "Action": ["ssm:GetParameter"],
        "Resource": ["arn:*:ssm:*:*:parameter/aws/service/eks/optimized-ami/*"]
      },
      {
        "Effect": "Allow",
        "Action": ["iam:CreateServiceLinkedRole"],
        "Resource": ["arn:*:iam::*:role/aws-service-role/eks.amazonaws.com/AWSServiceRoleForAmazonEKS"],
        "Condition": { "StringLike": { "iam:AWSServiceName": "eks.amazonaws.com" } }
      },
      {
        "Effect": "Allow",
        "Action": ["iam:CreateServiceLinkedRole"],
        "Resource": ["arn:*:iam::*:role/aws-service-role/eks-nodegroup.amazonaws.com/AWSServiceRoleForAmazonEKSNodegroup"],
        "Condition": { "StringLike": { "iam:AWSServiceName": "eks-nodegroup.amazonaws.com" } }
      },
      {
        "Effect": "Allow",
        "Action": ["iam:CreateServiceLinkedRole"],
        "Resource": ["arn:aws:iam::*:role/aws-service-role/eks-fargate-pods.amazonaws.com/AWSServiceRoleForAmazonEKSForFargate"],
        "Condition": { "StringLike": { "iam:AWSServiceName": "eks-fargate.amazonaws.com" } }
      },
      {
        "Effect": "Allow",
        "Action": ["iam:GetRole", "iam:ListAttachedRolePolicies"],
        "Resource": ["arn:*:iam::*:role/*"]
      },
      {
        "Effect": "Allow",
        "Action": ["iam:GetPolicy"],
        "Resource": ["arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"]
      },
      {
        "Effect": "Allow",
        "Action": [
          "eks:DescribeCluster", "eks:ListClusters", "eks:CreateCluster", "eks:TagResource",
          "eks:UpdateClusterVersion", "eks:DeleteCluster", "eks:UpdateClusterConfig", "eks:UntagResource",
          "eks:UpdateNodegroupVersion", "eks:DescribeNodegroup", "eks:DeleteNodegroup",
          "eks:UpdateNodegroupConfig", "eks:CreateNodegroup", "eks:AssociateEncryptionConfig"
        ],
        "Resource": ["arn:*:eks:*:*:cluster/*", "arn:*:eks:*:*:nodegroup/*/*/*"]
      },
      {
        "Effect": "Allow",
        "Action": [
          "eks:ListAddons", "eks:CreateAddon", "eks:DescribeAddonVersions", "eks:DescribeAddon",
          "eks:DeleteAddon", "eks:UpdateAddon", "eks:TagResource", "eks:DescribeFargateProfile",
          "eks:CreateFargateProfile", "eks:DeleteFargateProfile"
        ],
        "Resource": ["*"]
      },
      {
        "Effect": "Allow",
        "Action": ["iam:PassRole"],
        "Resource": ["*"],
        "Condition": { "StringEquals": { "iam:PassedToService": "eks.amazonaws.com" } }
      },
      {
        "Effect": "Allow",
        "Action": ["kms:CreateGrant", "kms:DescribeKey"],
        "Resource": ["*"],
        "Condition": { "ForAnyValue:StringLike": { "kms:ResourceAliases": "alias/cluster-api-provider-aws-*" } }
      }
    ]
  }'

• aws iam attach-user-policy --user-name Kommander --policy-arn $(aws iam list-policies --query 'Policies[?PolicyName==`kommander-policy`].Arn' | grep -o '".*"' | tr -d '"')

• aws iam create-access-key --user-name Kommander
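The last command returns the credentials that you enter in the NKP UI; its output looks similar to the following
(the values shown are placeholders):
{
    "AccessKey": {
        "UserName": "Kommander",
        "AccessKeyId": "AKIAXXXXXXXXXXXXXXXX",
        "Status": "Active",
        "SecretAccessKey": "<secret-access-key>",
        "CreateDate": "2024-01-01T00:00:00+00:00"
    }
}
Use the AccessKeyId and SecretAccessKey values when configuring the static-credential infrastructure provider.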

Using an Existing User to Configure an AWS Infrastructure

You can use an existing AWS user with the credentials configured.

Before you begin


For more information, see Configuration and credential file settings in the AWS documentation at https://
docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html. The user must be authorized to create the
following resources in the AWS account:

• EC2 Instances
• VPC
• Subnets
• Elastic Load Balancer (ELB)
• Internet Gateway
• NAT Gateway
• Elastic Block Storage (EBS) Volumes
• Security Groups
• Route Tables
• IAM Roles

Procedure
The following is the minimal IAM policy required.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:AllocateAddress",
"ec2:AssociateRouteTable",
"ec2:AttachInternetGateway",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:CreateInternetGateway",
"ec2:CreateNatGateway",
"ec2:CreateRoute",
"ec2:CreateRouteTable",
"ec2:CreateSecurityGroup",
"ec2:CreateSubnet",
"ec2:CreateTags",
"ec2:CreateVpc",
"ec2:ModifyVpcAttribute",
"ec2:DeleteInternetGateway",
"ec2:DeleteNatGateway",
"ec2:DeleteRouteTable",
"ec2:DeleteSecurityGroup",
"ec2:DeleteSubnet",
"ec2:DeleteTags",
"ec2:DeleteVpc",
"ec2:DescribeAccountAttributes",
"ec2:DescribeAddresses",
"ec2:DescribeAvailabilityZones",
"ec2:DescribeInstances",

"ec2:DescribeInternetGateways",
"ec2:DescribeImages",
"ec2:DescribeNatGateways",
"ec2:DescribeNetworkInterfaces",
"ec2:DescribeNetworkInterfaceAttribute",
"ec2:DescribeRouteTables",
"ec2:DescribeSecurityGroups",
"ec2:DescribeSubnets",
"ec2:DescribeVpcs",
"ec2:DescribeVpcAttribute",
"ec2:DescribeVolumes",
"ec2:DetachInternetGateway",
"ec2:DisassociateRouteTable",
"ec2:DisassociateAddress",
"ec2:ModifyInstanceAttribute",
"ec2:ModifyNetworkInterfaceAttribute",
"ec2:ModifySubnetAttribute",
"ec2:ReleaseAddress",
"ec2:RevokeSecurityGroupIngress",
"ec2:RunInstances",
"ec2:TerminateInstances",
"tag:GetResources",
"elasticloadbalancing:AddTags",
"elasticloadbalancing:CreateLoadBalancer",
"elasticloadbalancing:ConfigureHealthCheck",
"elasticloadbalancing:DeleteLoadBalancer",
"elasticloadbalancing:DescribeLoadBalancers",
"elasticloadbalancing:DescribeLoadBalancerAttributes",
"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
"elasticloadbalancing:DescribeTags",
"elasticloadbalancing:ModifyLoadBalancerAttributes",
"elasticloadbalancing:RegisterInstancesWithLoadBalancer",
"elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
"elasticloadbalancing:RemoveTags",
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeInstanceRefreshes",
"ec2:CreateLaunchTemplate",
"ec2:CreateLaunchTemplateVersion",
"ec2:DescribeLaunchTemplates",
"ec2:DescribeLaunchTemplateVersions",
"ec2:DeleteLaunchTemplate",
"ec2:DeleteLaunchTemplateVersions",
"ec2:DescribeKeyPairs"
],
"Resource": ["*"]
},
{
"Effect": "Allow",
"Action": [
"autoscaling:CreateAutoScalingGroup",
"autoscaling:UpdateAutoScalingGroup",
"autoscaling:CreateOrUpdateTags",
"autoscaling:StartInstanceRefresh",
"autoscaling:DeleteAutoScalingGroup",
"autoscaling:DeleteTags"
],
"Resource": [
"arn:*:autoscaling:*:*:autoScalingGroup:*:autoScalingGroupName/*"
]
},
{
"Effect": "Allow",

"Action": ["iam:CreateServiceLinkedRole"],
"Resource": [
"arn:*:iam::*:role/aws-service-role/autoscaling.amazonaws.com/
AWSServiceRoleForAutoScaling"
],
"Condition": {
"StringLike": { "iam:AWSServiceName": "autoscaling.amazonaws.com" }
}
},
{
"Effect": "Allow",
"Action": ["iam:CreateServiceLinkedRole"],
"Resource": [
"arn:*:iam::*:role/aws-service-role/elasticloadbalancing.amazonaws.com/
AWSServiceRoleForElasticLoadBalancing"
],
"Condition": {
"StringLike": {
"iam:AWSServiceName": "elasticloadbalancing.amazonaws.com"
}
}
},
{
"Effect": "Allow",
"Action": ["iam:CreateServiceLinkedRole"],
"Resource": [
"arn:*:iam::*:role/aws-service-role/spot.amazonaws.com/
AWSServiceRoleForEC2Spot"
],
"Condition": {
"StringLike": { "iam:AWSServiceName": "spot.amazonaws.com" }
}
},
{
"Effect": "Allow",
"Action": ["iam:PassRole"],
"Resource": ["arn:*:iam::*:role/*.cluster-api-provider-aws.sigs.k8s.io"]
},
{
"Effect": "Allow",
"Action": [
"secretsmanager:CreateSecret",
"secretsmanager:DeleteSecret",
"secretsmanager:TagResource"
],
"Resource": ["arn:*:secretsmanager:*:*:secret:aws.cluster.x-k8s.io/*"]
},
{
"Effect": "Allow",
"Action": ["ssm:GetParameter"],
"Resource": ["arn:*:ssm:*:*:parameter/aws/service/eks/optimized-ami/*"]
},
{
"Effect": "Allow",
"Action": ["iam:CreateServiceLinkedRole"],
"Resource": [
"arn:*:iam::*:role/aws-service-role/eks.amazonaws.com/
AWSServiceRoleForAmazonEKS"
],
"Condition": {
"StringLike": { "iam:AWSServiceName": "eks.amazonaws.com" }
}
},
{
"Effect": "Allow",
"Action": ["iam:CreateServiceLinkedRole"],
"Resource": [
"arn:*:iam::*:role/aws-service-role/eks-nodegroup.amazonaws.com/
AWSServiceRoleForAmazonEKSNodegroup"
],
"Condition": {
"StringLike": { "iam:AWSServiceName": "eks-nodegroup.amazonaws.com" }
}
},
{
"Effect": "Allow",
"Action": ["iam:CreateServiceLinkedRole"],
"Resource": [
"arn:aws:iam::*:role/aws-service-role/eks-fargate-pods.amazonaws.com/
AWSServiceRoleForAmazonEKSForFargate"
],
"Condition": {
"StringLike": { "iam:AWSServiceName": "eks-fargate.amazonaws.com" }
}
},
{
"Effect": "Allow",
"Action": ["iam:GetRole", "iam:ListAttachedRolePolicies"],
"Resource": ["arn:*:iam::*:role/*"]
},
{
"Effect": "Allow",
"Action": ["iam:GetPolicy"],
"Resource": ["arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"]
},
{
"Effect": "Allow",
"Action": [
"eks:DescribeCluster",
"eks:ListClusters",
"eks:CreateCluster",
"eks:TagResource",
"eks:UpdateClusterVersion",
"eks:DeleteCluster",
"eks:UpdateClusterConfig",
"eks:UntagResource",
"eks:UpdateNodegroupVersion",
"eks:DescribeNodegroup",
"eks:DeleteNodegroup",
"eks:UpdateNodegroupConfig",
"eks:CreateNodegroup",
"eks:AssociateEncryptionConfig"
],
"Resource": ["arn:*:eks:*:*:cluster/*", "arn:*:eks:*:*:nodegroup/*/*/*"]
},
{
"Effect": "Allow",
"Action": [
"eks:ListAddons",
"eks:CreateAddon",
"eks:DescribeAddonVersions",
"eks:DescribeAddon",
"eks:DeleteAddon",
"eks:UpdateAddon",

Nutanix Kubernetes Platform | Cluster Operations Management | 371


"eks:TagResource",
"eks:DescribeFargateProfile",
"eks:CreateFargateProfile",
"eks:DeleteFargateProfile"
],
"Resource": ["*"]
},
{
"Effect": "Allow",
"Action": ["iam:PassRole"],
"Resource": ["*"],
"Condition": {
"StringEquals": { "iam:PassedToService": "eks.amazonaws.com" }
}
},
{
"Effect": "Allow",
"Action": ["kms:CreateGrant", "kms:DescribeKey"],
"Resource": ["*"],
"Condition": {
"ForAnyValue:StringLike": {
"kms:ResourceAliases": "alias/cluster-api-provider-aws-*"
}
}
}
]
}
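If you assemble the complete policy document (including the portion shown above) into a JSON file, you can create it as a managed IAM policy with the AWS CLI. This is a minimal sketch; the policy name and file name are illustrative, and your AWS credentials must be allowed to manage IAM policies:
aws iam create-policy \
  --policy-name nkp-cluster-api-provider-aws \
  --policy-document file://nkp-capa-policy.json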

Creating an Azure Infrastructure Provider in the UI


You can create an Azure Infrastructure Provider in the NKP UI.

Before you begin


Before you provision Azure clusters using the NKP UI, you must first create an Azure infrastructure provider to
contain your Azure credentials:

Procedure

1. Log in to the Azure command line.


az login

2. Create an Azure Service Principal (SP) by running the following command. A sample of its output is shown after this procedure.


az ad sp create-for-rbac --role contributor --name "$(whoami)-konvoy" --scopes=/
subscriptions/$(az account show --query id -o tsv)

3. From the Dashboard menu, select Infrastructure Providers.

4. Select Add Infrastructure Provider.

5. If you are already in a workspace, the provider is automatically created in that workspace.

6. Select Microsoft Azure.

7. Add a Name for your Infrastructure Provider.

8. Copy and paste the following values into the indicated fields.

• Copy the id output from the az login command in step 1 and paste it into the Subscription ID field.
• Copy the tenant value from the output of the command in step 2 and paste it into the Tenant ID field.
• Copy the appId value from the output of the command in step 2 and paste it into the Client ID field.
• Copy the password value from the output of the command in step 2 and paste it into the Client Secret field.

9. Click Save.
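The az ad sp create-for-rbac command in step 2 prints a JSON object containing the values referenced in step 8. The following is a sketch; all values are placeholders and yours will differ:
{
  "appId": "00000000-0000-0000-0000-000000000000",
  "displayName": "<your-user>-konvoy",
  "password": "<generated-client-secret>",
  "tenant": "11111111-1111-1111-1111-111111111111"
}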

Creating a vSphere Infrastructure Provider in the UI


You can create a vSphere Infrastructure Provider in the NKP UI.

Procedure

1. Log in to your NKP Ultimate UI and access the NKP home page.

2. From the left navigation menu, select Infrastructure Providers.

3. Select Add Infrastructure Provider.

4. If you are already in a workspace, the new infrastructure provider is automatically created in that workspace.

5. Select vSphere and add the following information.

• Add a Name for your infrastructure provider.


• In the Username field, enter a valid vSphere vCenter username.
• In the Password field, enter a valid vSphere vCenter user password.
• In the Host URL field, enter the vCenter Server URL.
This field must contain only the domain for the URL, such as vcenter.ca1.your-org-
platform.domain.cloud. Do not specify the protocols http:// or https:// to avoid errors during
cluster creation.
• (Optional) Enter a valid TLS Certificate Thumbprint value.
The TLS Certificate Thumbprint helps in creating a secure connection to VMware vCenter. If you do not have a thumbprint, your connection might be marked as insecure. This field is optional because you might not have a self-signed vCenter instance, and you only need the thumbprint if you do. The command to obtain the SHA-1 thumbprint of the vSphere server's TLS certificate is listed under the field in the interface; a sample command is also shown after this procedure.

6. Click Save.
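If your vCenter instance uses a self-signed certificate, one common way to obtain the SHA-1 thumbprint is with openssl. This is a sketch that assumes openssl is installed on your workstation; replace the host with your own vCenter Server URL:
echo | openssl s_client -connect vcenter.ca1.your-org-platform.domain.cloud:443 2>/dev/null | openssl x509 -noout -fingerprint -sha1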

Creating a VMware Cloud Director Infrastructure Provider in the UI


You can create a VMware Cloud Director Infrastructure Provider in the NKP UI.

Before you begin


Before you provision VMware Cloud Director (VCD) clusters using the NKP UI, you must:

• Complete the VMware Cloud Director Prerequisites on page 912 for the VMware Cloud Director.
• Create a VCD infrastructure provider to contain your credentials.

Procedure

1. Log in to your NKP Ultimate UI.

2. From the left-navigation menu, select Infrastructure Providers.

3. Select Add Infrastructure Provider.

4. Add a Provider Name for referencing this infrastructure provider.

5. Specify a Refresh Token name that you created in VCD.


You can generate API Tokens to grant programmatic access to VCD for both providers and tenant users. For more information, see Cloud Director API Token on the VMware website at https://blogs.vmware.com/cloudprovider/2022/03/cloud-director-api-token.html. Automation scripts or third-party solutions use the API token to make requests of VCD on the user's behalf. You must generate and name the token within VCD, and also grant tenants the right to use and manage tokens.

6. Specify a Site URL value, which must begin with https://. For example, "https://vcd.example.com".
Do not use a trailing forward slash character.

Warning: Make a note of the Refresh Token; it is displayed only once and cannot be retrieved afterwards.

Note: Editing a VCD infrastructure provider means that you are changing the credentials under which NKP connects to VMware Cloud Director. This can have negative effects on any existing clusters that use that infrastructure provider record.
To prevent errors, NKP first checks if there are any existing clusters for the selected infrastructure
provider. If a VCD infrastructure provider has existing clusters, NKP displays an error message and
prevents you from editing the infrastructure provider.

Header, Footer, and Logo Implementation


Use the customizable Banners page to add header banners, footer banners, and select the colors for
them. You can define header and/or footer banners for your NKP pages and turn them on and off, as
needed.
NKP displays your header and footer banner in a default typeface and size, which cannot be changed.

Creating a Header Banner


The text you type in the Text field appears centered at the top of the screen. The text length is limited to 100 characters, including spaces. The text color is determined by the background color; NKP automatically calculates an appropriate light or dark color for you.

About this task


The Color selection control uses the style of your browser for its color picker tool. This control allows you to select a
color for your header banner:

Procedure

1. Enter the color’s Hex code.

2. Select a general color range, and then select a specific shade or tint.

3. Select the eyedropper, move it to a sample of the color you want, and click once to select that color.

Creating a Footer Banner
The text you type in the Text field appears centered at the bottom of the screen.

About this task


The Color selection control uses the style of your browser for its color picker tool. This control allows you to select a
color for your footer banner:

Procedure

1. Enter the color’s Hex code.

2. Select a general color range from the slider bar, and then select a specific shade or tint with your mouse cursor.

3. Select the eyedropper, move it to a sample of the color you want, and click once to select that color.

Adding Your Organization’s Logo Using the Drag and Drop Option
When you license and install NKP Ultimate or Gov Advanced, you also have the option to add your
organization’s logo to the header. The width of the header banner automatically adjusts to contain your
logo. NKP automatically places your logo on the left side of the header and centers it vertically.

Before you begin


Your logo graphic must meet the following criteria:

• Use a suggested file format: PNG, SVG, or JPEG.


• The file size cannot exceed 200 KB.
Error messages related to the file you upload appear in red below the image, inside the shaded logo area.

Note: To provide security against certain kinds of malicious activity, your browser has a same-origin policy for
accessing resources. When you upload a file, the browser creates a unique identifier for the file. This prevents you from
selecting a file more than once.

Procedure

1. Locate the required file in the macOS Finder or Windows File Explorer.

2. Drag and drop an image of the appropriate file type into the shaded area to see a preview of the image and display
the file name.
You can select X on the upper-right or Remove on the lower-right to clear the image, if needed.

3. Click Save.

Warning: You cannot select a file for drag-and-drop if it does not have a valid image format.

Adding Your Organization’s Logo Using the Upload Option


Upload a logo image to the header by browsing your file.

Procedure

1. Select Browse Files.

2. To clear the image, select X or click the Remove link, if needed.

3. Click Save.

Applications
This section includes information on the applications you can deploy in NKP.

Customizing Your Application


If you want to customize an application or change how a specific application is deployed, you can create
a ConfigMap to change or add values to the information that is stored in the HelmRelease. Override the
default configuration of an application by setting the configOverrides field on the AppDeployment to that
ConfigMap. This overrides the configuration of the app for all clusters within the workspace.

About this task


For workspace applications, you can also enable and customize them on a per-cluster basis. For instructions on
how to enable and customize an application per cluster in a given workspace, see Cluster-scoped Application for
Existing AppDeployments on page 400.

Before you begin


Set the WORKSPACE_NAMESPACE environment variable to the name of the workspace’s namespace where the cluster
is attached:
export WORKSPACE_NAMESPACE=<your_workspace_namespace>
You can now copy the following commands without replacing the placeholder with your workspace namespace every
time you run a command.
Here's an example of how to customize the AppDeployment of Kube Prometheus Stack:

Procedure

1. Provide the name of a ConfigMap with the custom configuration in the AppDeployment.
cat <<EOF | kubectl apply -f -
apiVersion: apps.kommander.d2iq.io/v1alpha3
kind: AppDeployment
metadata:
  name: kube-prometheus-stack
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  appRef:
    name: kube-prometheus-stack-46.8.0
    kind: ClusterApp
  configOverrides:
    name: kube-prometheus-stack-overrides-attached
EOF

2. Create the ConfigMap with the name provided in the previous step, which provides the custom configuration on
top of the default configuration.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: ${WORKSPACE_NAMESPACE}
  name: kube-prometheus-stack-overrides-attached
data:
  values.yaml: |
    prometheus:
      prometheusSpec:
        storageSpec:
          volumeClaimTemplate:
            spec:
              resources:
                requests:
                  storage: 150Gi
EOF
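After applying both resources, you can watch the application's HelmRelease to confirm that the release reconciles with the new values. This mirrors the verification pattern used elsewhere in this guide and assumes WORKSPACE_NAMESPACE is still set:
kubectl get helmreleases kube-prometheus-stack -n ${WORKSPACE_NAMESPACE} -w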

Printing and Reviewing the Current State of an AppDeployment Resource


If you want to know how the AppDeployment resource is currently configured, use the commands below
to print a table of the declared information. If the AppDeployment is configured for several clusters in a
workspace, a column will display a list of the clusters.

About this task


You can review all the AppDeployments in a workspace or a specific AppDeployments of an application in a
workspace.

Procedure
You can run the following commands to review AppDeployments.

» All AppDeployments in a workspace: To review the state of the AppDeployment resource for a specific
workspace, run the get command with the name of your workspace. Here's an example:
nkp get appdeployments -w kommander-workspace
The output displays a list of all your applications:
NAME APP CLUSTERS
[...]
kube-oidc-proxy kube-oidc-proxy-0.3.2 host-cluster
kube-prometheus-stack kube-prometheus-stack-46.8.0 host-cluster
kubecost kubecost-0.35.1 host-cluster
[...]

» Specific AppDeployment of an application in a workspace: To review the state of a specific AppDeployment of an application, run the get command with the name of the application and your workspace.
Here's an example:
nkp get appdeployment kube-prometheus-stack -w kommander-workspace
The output is as follows:
NAME APP CLUSTERS
kube-prometheus-stack kube-prometheus-stack-46.8.0 host-cluster

Note: For more information on how to create or get an AppDeployment, see the CLI documentation.

Deployment Scope
In a single-cluster environment with a Starter license, AppDeployments enable customizing any platform application. In a multi-cluster environment with a Starter license, AppDeployments enable workspace-level, project-level, and per-cluster deployment and customization of workspace applications.

Logging Stack Application Sizing Recommendations


Sizing recommendations for Logging Stack applications.
For information on how you customize your AppDeployments, see AppDeployment Resources on page 396.

Note: When configuring storage for logging-operator-logging-overrides, ensure that you create a
ConfigMap in your workspace namespace for every cluster in that workspace.

Keep in mind that you can configure logging-operator-logging-overrides only through the CLI.
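As a minimal CLI sketch, you can create the override ConfigMap with kubectl, following the same ConfigMap-plus-configOverrides pattern shown earlier under Customizing Your Application. The values.yaml content comes from the suggested configurations in the table below; the placeholder comment marks where to paste it:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: ${WORKSPACE_NAMESPACE}
  name: logging-operator-logging-overrides
data:
  values.yaml: |
    # Paste the "Logging Operator Logging Override Config" values
    # for your cluster size from the table below here.
EOF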

Table 26: Logging Stack Application Sizing Recommendations

No. of Worker Nodes: 50
Log Generating Load: 1.4 MB/s

Suggested configuration, Logging Operator Logging Override Config:
values.yaml: |-
  clusterOutputs:
    - name: loki
      spec:
        loki:
          # change ${WORKSPACE_NAMESPACE} to the actual value of your workspace namespace
          url: http://grafana-loki-loki-distributed-gateway.${WORKSPACE_NAMESPACE}.svc.cluster.local:8
          extract_kubernetes_labels: true
          configure_kubernetes_labels: true
          buffer:
            disabled: true
            retry_forever: false
            retry_max_times: 5
            flush_mode: interval
            flush_interval: 10s
            flush_thread_count: 8
          extra_labels:
            log_source: kubernetes_container
  fluentbit:
    inputTail:
      Mem_Buf_Limit: 512MB
  fluentd:
    bufferStorageVolume:
      emptyDir:
        medium: Memory
    disablePvc: true
    scaling:
      replicas: 10
    resources:
      requests:
        memory: 1000Mi
        cpu: 1000m
      limits:
        memory: 2000Mi
        cpu: 1000m

Suggested configuration, Loki:
ingester:
  replicas: 10
distributor:
  replicas: 2

No. of Worker Nodes: 100
Log Generating Load: 8.5 MB/s

Suggested configuration, Logging Operator Logging Override Config:
values.yaml: |-
  clusterOutputs:
    - name: loki
      spec:
        loki:
          # change ${WORKSPACE_NAMESPACE} to the actual value of your workspace namespace
Rook Ceph Cluster Sizing Recommendations
Sizing recommendations for the Rook Ceph Cluster application.
For information on how you customize your AppDeployments, see AppDeployment Resources on page 396.

Note: To add more storage to rook-ceph-cluster, copy the storageClassDeviceSets list from the rook-ceph-cluster-1.10.3-d2iq-defaults ConfigMap into your workspace where rook-ceph-cluster is present, and then modify count and volumeClaimTemplates.spec.resources.requests.storage.

Table 27: Rook Ceph Cluster Sizing Recommendations

No. of Worker Nodes: 50
Application: Rook Ceph Cluster

Suggested configuration:
cephClusterSpec:
  labels:
    monitoring:
      prometheus.kommander.d2iq.io/select: "true"
  storage:
    storageClassDeviceSets:
      - name: rook-ceph-osd-set1
        count: 4
        portable: true
        encrypted: false
        placement:
          topologySpreadConstraints:
            - maxSkew: 1
              topologyKey: topology.kubernetes.io/zone # The nodes in the same rack have the same topology.kubernetes.io/zone label.
              whenUnsatisfiable: ScheduleAnyway
              labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - rook-ceph-osd
                      - rook-ceph-osd-prepare
            - maxSkew: 1
              topologyKey: kubernetes.io/hostname
              whenUnsatisfiable: ScheduleAnyway
              labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - rook-ceph-osd
                      - rook-ceph-osd-prepare
        volumeClaimTemplates:
          # If there are some faster devices and some slower devices, it is more efficient to use
          # separate metadata, wal, and data devices.
          # Refer https://rook.io/docs/rook/v1.10/CRDs/Cluster/pvc-cluster/#dedicated-metadata-and-wal-device-for-osd-on-pvc
          - metadata:
              name: data
            spec:
              resources:
                requests:
                  storage: 120Gi
              volumeMode: Block
              accessModes:
                - ReadWriteOnce

No. of Worker Nodes: 100

nkp:
  grafana-loki:
    additionalConfig:
Application Management Using the UI
Choose your license type for instructions on how to enable and customize an application and then verify it
has been deployed correctly.

Ultimate: Application Management Using the UI


You can deploy and uninstall applications using the UI.

Note: To use the CLI to deploy or uninstall applications, see Deploying Platform Applications Using CLI on
page 389.

Ultimate: Enabling an Application Using the UI

This topic describes how to enable your platform applications from the UI.

Procedure

1. From the top menu bar, select your target workspace.

2. From the sidebar, browse through the available applications from your configured repositories, and select
Applications.

3. Select the three-dot button of the desired application card > Enable.

4. If available, select a version from the dropdown list.


The dropdown list is only visible if there is more than one version to choose from.

5. Select the clusters where you want to deploy the application.

6. For customizations only: to override the default configuration values, in the sidebar, select Configuration.

Note: If there are customization Overrides at the workspace and cluster level, they are combined for
implementation. Cluster-level Overrides take precedence over Workspace Overrides.

a. To customize an application for all clusters in a workspace, copy your customized values into the text editor
under Workspace Application Configuration or upload your YAML file that contains the values.
someField: someValue

b. To add a customization per cluster, copy the customized values into the text editor of each cluster under
Cluster Application Configuration Override or upload your YAML file that contains the values.
someField: someValue

7. Verify that the details are correct and select Enable.


There may be dependencies between the applications, which are listed in Platform Applications Dependencies
on page 390. Review them carefully before customizing to ensure that the applications are deployed
successfully.

Ultimate: Customizing an Application Using the UI

You can enable an application and customize it using the UI.

About this task


To customize the applications that are deployed to a workspace’s cluster using the UI:

Procedure

1. From the top menu bar, select your target workspace.

2. From the sidebar, browse through the available applications from your configured repositories, and select
Applications.

3. In the Application card you want to customize, select the three dot menu and Edit.

4. To override the default configuration values, select Configuration in the sidebar.

Note: If there are customization Overrides at the workspace and cluster levels, they are combined for
implementation. Cluster-level Overrides take precedence over Workspace Overrides.

a. To customize an application for all clusters in a workspace, copy your customized values into the text editor
under Workspace Application Configuration or upload your YAML file that contains the values.
someField: someValue

b. To add a customization per cluster, copy the customized values into the text editor of each cluster under
Cluster Application Configuration Override or upload your YAML file that contains the values.
someField: someValue

5. Verify that the details are correct and select Save.

Ultimate: Customizing an Application For a Specific Cluster

You can also customize an application for a specific cluster from the Clusters view:

Procedure

1. From the sidebar menu, select Clusters.

2. Select the target cluster.

3. Select the Applications tab.

4. Navigate to the target Applications card.

5. Select the three-dot menu > Edit.

Ultimate: Verifying an Application using the UI

The application has now been enabled.

About this task


To verify that the application is deployed correctly:

Procedure

1. From the top menu bar, select your target workspace.

2. Select the cluster you want to verify.

a. Select Management Cluster if your target cluster is the Management Cluster Workspace.
b. Otherwise, select Clusters, and choose your target cluster.

3. Select the Applications tab and navigate to the application you want to verify.

4. If the application was deployed successfully, the status Deployed appears in the application card. Otherwise,
hover over the failed status to obtain more information on why the application failed to deploy.

Note: It can take several minutes for the application to deploy completely. If the Deployed or Failed status is not
displayed, the deployment process is not finished.

Ultimate: Disabling an Application Using the UI

You can disable an application using the UI.

About this task


Follow these steps to disable an application with the UI:

Procedure

1. From the top menu bar, select your target workspace.

2. From the sidebar, browse through the available applications from your configured repositories, and select
Applications.

3. Select the three-dot button of the desired application card > Uninstall.

4. Follow the instruction on the confirmation pop-up message and select Uninstall Application.

Pro: Application Management Using the UI


You can enable, customize, deploy, and uninstall applications using the UI.

Note: To use the CLI to deploy or uninstall applications, see Deploying Platform Applications Using CLI on
page 389.

Pro: Enabling an Application Using the UI

About this task


To enable your platform applications from the UI in Kommander:

Procedure

1. From the sidebar, browse through the available applications from your configured repositories, and select Applications.

2. Select the three-dot button of the desired application card > Enable.

3. If available, select a version from the dropdown list. This dropdown list is only visible if there is more than one
version to choose from.

4. Select the clusters where you want to deploy the application.

5. For customizations only: to override the default configuration values, select Configuration in the sidebar.

a. To customize an application for all clusters in a workspace, copy your customized values into the text editor
under Workspace Application Configuration or upload your YAML file that contains the values.
someField: someValue

6. Confirm the details are correct and then select Enable.


There may be dependencies between the applications, which are listed in Platform Applications Dependencies
on page 390. Review them carefully before customizing to ensure that the applications are deployed
successfully.

Pro: Customizing an Application Using the UI

About this task


To customize the applications that are deployed to your Management Cluster using the UI:

Procedure

1. From the sidebar, browse through the available applications from your configured repositories and select Applications.

2. In the Application card you want to customize, select the three dot menu and Edit.

3. To override the default configuration values, select Configuration in the sidebar.

Note: If there are customization Overrides at the workspace and cluster level, they are combined for
implementation. Cluster-level Overrides take precedence over Workspace Overrides.

a. To customize an application for all clusters in a workspace, copy your customized values into the text editor
under Workspace Application Configuration or upload your YAML file that contains the values.
someField: someValue

4. Verify that the details are correct and select Save.

Pro: Verifying an Application using the UI

The application has now been enabled.

About this task


To ensure that the application is deployed correctly:

Procedure

1. From the sidebar, select Management Cluster.

2. Select the Applications tab and navigate to the application you want to verify.

3. If the application was deployed successfully, the status Deployed appears in the application card. Otherwise,
hover over the failed status to obtain more information on why the application failed to deploy.

Note: It can take several minutes for the application to deploy completely. If the Deployed or Failed status is not
displayed, the deployment process is not finished.

Pro: Disabling an Application Using the UI

About this task


To disable an application with the UI:

Procedure

1. From the sidebar, browse through the available applications from your configured repositories and select Applications.

2. Select the three-dot button of the desired application card > Uninstall.

3. Follow the instruction on the confirmation pop-up message, and select Uninstall Application.

Platform Applications
When attaching a cluster, NKP deploys certain platform applications on the newly attached cluster.
Operators can use the NKP UI to customize which platform applications to deploy to the attached clusters
in a given workspace. For more information, and to check the default applications and their current versions, see the Nutanix Kubernetes Platform Release Notes.

Default Foundational Applications


These applications provide the foundation for all platform application capabilities and deployments on Managed
Clusters. These applications must be enabled for any Platform Applications to work properly. For current NKP
release Helm values and NKP versions, see the Components and Applications section of the NKP Release Notes.
The foundational applications are comprised of the following Platform Applications:

• Cert Manager: Automates TLS certificate management and issuance. For more information, see https://cert-
manager.io/docs/.
• Reloader: A controller that watches changes on ConfigMaps and Secrets, and automatically triggers updates on
the dependent applications. For more information, see https://github.com/stakater/Reloader.
• traefik: Provides an HTTP reverse proxy and load balancer. Requires cert-manager and reloader. For more
information, see https://traefik.io/.
• ChartMuseum: An open source repository for Helm Charts (collections of files that describe a set of Kubernetes resources). For more information, see https://chartmuseum.com/.

• Air-gapped environments only: ChartMuseum is used in air-gapped installations to store the Helm Charts. In non-air-gapped installations, the charts are fetched from upstream repositories and ChartMuseum is not installed.

Common Platform Application Name APP ID


Cert-Manager cert-manager
Logging Operator logging-operator
Reloader reloader
Traefik traefik
Traefik ForwardAuth traefik-forward-auth
ChartMuseum chartmuseum

1. To see which applications are enabled or disabled in each category, verify the status.
kubectl get apps,clusterapps,appdeployments -A

2. After deployment, the applications are enabled. To check whether an application is deployed, connect to the attached cluster and watch the HelmReleases to verify the deployment. In this example, we are checking if istio got deployed correctly:
kubectl get helmreleases istio -n ${WORKSPACE_NAMESPACE} -w

3. You should eventually see the HelmRelease marked as Ready:
NAMESPACE NAME READY STATUS AGE
workspace-test-vjsfq istio True Release reconciliation succeeded 7m3s

Logging
Collects logs over time from Kubernetes and applications deployed on managed clusters. Also provides the ability to
visualize and query the aggregated logs.

• Fluent Bit: An open source and multi-platform log processor tool which aims to be a generic Swiss knife for logs processing and distribution. For more information, see https://docs.fluentbit.io/manual.
• Grafana: Log in to the dashboard to view logs aggregated to Grafana Loki. For more information, see https://grafana.com/oss/grafana/.
• Grafana Loki: A horizontally-scalable, highly-available, multi-tenant log aggregation system inspired by Prometheus. For more information, see https://grafana.com/oss/loki/.
• Logging Operator: Automates the deployment and configuration of a Kubernetes logging pipeline. For more information, see the Logging operator documentation.
• Rook Ceph and Rook Ceph Cluster: A Kubernetes-native high performance object store with an S3-compatible API that supports deploying into private and public cloud infrastructures. For more information, see https://rook.io/docs/rook/v1.10/Helm-Charts/operator-chart/ and https://rook.io/docs/rook/v1.10/Helm-Charts/ceph-cluster-chart/.

Note: Currently, the monitoring stack is deployed by default. The logging stack is not.

Common Platform Application Name APP ID


Fluent Bit fluent-bit
Grafana Logging grafana-logging
Logging Operator logging-operator
Grafana Loki (project) project-grafana-loki
Rook Ceph rook-ceph
Rook Ceph Cluster rook-ceph-cluster

Monitoring
Provides monitoring capabilities by collecting metrics, including cost metrics for Kubernetes and applications
deployed on managed clusters. Also provides visualization of metrics and evaluates rule expressions to trigger alerts
when specific conditions are observed.

• Kubecost: Provides real-time cost visibility and insights for teams using Kubernetes, helping you continuously reduce your cloud costs. For more information, see https://kubecost.com/.
• kubernetes-dashboard: A general purpose, web-based UI for Kubernetes clusters. It allows users to manage
applications running in the cluster, troubleshoot them and manage the cluster itself. For more information, see
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/

• kube-prometheus-stack: A stack of applications that collect metrics and provide visualization and alerting
capabilities. For more information, see https://github.com/prometheus-community/helm-charts/tree/main/
charts/kube-prometheus-stack

Note: Prometheus, Prometheus Alertmanager, and Grafana are included in the bundled installation. For more
information, see https://prometheus.io/, https://prometheus.io/docs/alerting/latest/alertmanager and
https://grafana.com/.

• nvidia-gpu-operator: The NVIDIA GPU Operator manages NVIDIA GPU resources in a Kubernetes cluster and
automates tasks related to bootstrapping GPU nodes. For more information, see https://catalog.ngc.nvidia.com/
orgs/nvidia/containers/gpu-operator.
• prometheus-adapter: Provides cluster metrics from Prometheus. For more information, see https://github.com/
DirectXMan12/k8s-prometheus-adapter.

Common Platform Application Name APP ID


Kubecost kubecost
Kubernetes Dashboard kubernetes-dashboard
Full Prometheus Stack kube-prometheus-stack
Prometheus Adapter prometheus-adapter
NVIDIA GPU Operator nvidia-gpu-operator

Security
Allows management of security constraints and capabilities for the clusters and users.

• gatekeeper: A policy controller for Kubernetes. For more information, see https://github.com/open-policy-agent/gatekeeper.

Platform Application APP ID


Gatekeeper gatekeeper

Single Sign On (SSO)


Group of platform applications that allow enabling SSO on attached clusters. SSO is a centralized system for
connecting attached clusters to the centralized authority on the management cluster.

• kube-oidc-proxy: A reverse proxy server that authenticates users using OIDC to Kubernetes API servers where
OIDC authentication is not available. For more information, see https://github.com/jetstack/kube-oidc-proxy
• traefik-forward-auth: Installs a forward authentication application providing Google OAuth based authentication
for Traefik. For more information, see https://github.com/thomseddon/traefik-forward-auth.

Platform Application APP ID


Kube OIDC Proxy kube-oidc-proxy
Traefik ForwardAuth traefik-forward-auth

Backup
This platform application assists you with backing up and restoring your environment:

• velero: An open source tool for safely backing up and restoring resources in a Kubernetes cluster, performing disaster recovery, and migrating resources and persistent volumes to another Kubernetes cluster. For more information, see https://velero.io/.

Platform Application APP ID


Velero velero

Review the Workspace Platform Application Defaults and Resource Requirements on page 42 to ensure that
the attached clusters have sufficient resources.
When deploying and upgrading applications, platform applications come as a bundle; they are tested as a single unit,
and you must deploy or upgrade them in a single process, for each workspace. This means all clusters in a workspace
have the same set and versions of platform applications deployed.

Deploying Platform Applications Using CLI


This topic describes how to use the CLI to enable an application to deploy to managed and attached
clusters in a workspace.

Before you begin


Before you begin, you must have:

• A running cluster with Kommander installed.


• An existing Kubernetes cluster attached to Kommander (see Kubernetes Cluster Attachment on page 473).
• Determine the name of the workspace where you wish to perform the deployments. You can use the nkp get
workspaces command to view the list of workspace names and their corresponding namespaces.

• Set the WORKSPACE_NAMESPACE environment variable to the name of the workspace’s namespace where the
cluster is attached:
export WORKSPACE_NAMESPACE=<workspace_namespace>

• Set the WORKSPACE_NAME environment variable to the name of the workspace where the cluster is attached:
export WORKSPACE_NAME=<workspace_name>

Note: From the CLI, you can enable applications to deploy in the workspace. Verify that the application has
successfully deployed through the CLI.

To create the AppDeployment, enable a supported application to deploy to your existing attached or managed cluster
with an AppDeployment resource (see AppDeployment Resources on page 396).

Procedure

1. Obtain the APP ID and Version of the application from the "Components and Applications" section in the
Nutanix Kubernetes Platform Release Notes. You must add them in the <APP-ID>-<Version> format, for example,
istio-1.17.2.

2. Run the following command and define the --app flag to specify which platform application and version will be
enabled.
nkp create appdeployment istio --app istio-1.17.2 --workspace ${WORKSPACE_NAME}

Note:

• The --app flag must match the APP NAME from the list of available platform applications.
• Observe that the nkp create command must be run with the WORKSPACE_NAME instead of the
WORKSPACE_NAMESPACE flag.

This instructs Kommander to create and deploy the AppDeployment to the KommanderClusters in the
specified WORKSPACE_NAME.

Verifying the Deployed Platform Applications


The platform applications are now enabled after being deployed.

Procedure
Connect to the attached cluster and watch the HelmReleases to verify the deployment. In this example, we are
checking whether istio is deployed correctly.
kubectl get helmreleases istio -n ${WORKSPACE_NAMESPACE} -w
HelmRelease must be marked as Ready.
NAMESPACE NAME READY STATUS AGE
workspace-test-vjsfq istio True Release reconciliation succeeded 7m3s
Some supported applications have dependencies on other applications. For more information, see Platform
Applications Dependencies on page 390.

Platform Applications Dependencies


Platform applications that are deployed to a workspace's attached clusters can depend on each other. It is important to note these dependencies when customizing the workspace platform applications to ensure that your applications are properly deployed to the clusters.
For more information on how to customize workspace platform applications, see Platform Applications on
page 386.
When deploying or troubleshooting platform applications, it helps to understand how platform applications interact
and may require other platform applications as dependencies.
If a platform application’s dependency does not successfully deploy, the platform application requiring that
dependency does not successfully deploy.
The following table details the dependencies of the workspace platform applications:

Platform Application Required Dependencies Optional Dependencies

fluent-bit - -
grafana-logging grafana-loki -
grafana-loki rook-ceph-cluster -
logging-operator - -
rook-ceph - -
rook-ceph-cluster rook-ceph kube-prometheus-stack

Users can override the configuration to remove the dependency, as needed.

Foundational Applications
Provides the foundation for all platform application capabilities and deployments on managed clusters. These
applications must be enabled for any platform applications to work properly.
The foundational applications are comprised of the following platform applications:

• cert-manager (see https://cert-manager.io/docs): Automates TLS certificate management and issuance.


• reloader (see https://github.com/stakater/Reloader): A controller that watches changes on ConfigMaps and
Secrets, and automatically triggers updates on the dependent applications.
• traefik (see https://traefik.io/): Provides an HTTP reverse proxy and load balancer. Requires cert-manager and
reloader.

Table 28: Foundational Applications

Platform Application Required Dependencies


cert-manager -
reloader -
traefik cert-manager, reloader

Logging
Logs are collected over a period of time from Kubernetes and applications are deployed on managed clusters. Also
provides the ability to visualize and query the aggregated logs.

• fluent-bit: Open source and multi-platform log processor tool which aims to be a generic Swiss knife for logs
processing and distribution. For more information, see https://docs.fluentbit.io/manual/.
• grafana-logging: Logging dashboard used to view logs aggregated to Grafana Loki. For more information, see
https://grafana.com/oss/grafana/.
• grafana-loki: A horizontally-scalable, highly-available, multi-tenant log aggregation system inspired by
Prometheus. For more information, see https://grafana.com/oss/loki/.
• logging-operator: Automates the deployment and configuration of a Kubernetes logging pipeline. For more information, see https://banzaicloud.com/docs/one-eye/logging-operator/.
• rook-ceph and rook-ceph-cluster: A Kubernetes-native high performance object store with an S3-compatible API that supports deploying into private and public cloud infrastructures. For more information, see https://rook.io/docs/rook/v1.10/Helm-Charts/operator-chart/ and https://rook.io/docs/rook/v1.10/Helm-Charts/ceph-cluster-chart/.

Table 29: Logging

Platform Application Required Dependencies Optional Dependencies

fluent-bit - -
grafana-logging grafana-loki -
grafana-loki rook-ceph-cluster -
logging-operator - -
rook-ceph - -
rook-ceph-cluster rook-ceph kube-prometheus-stack

Note: Users can override the configuration to remove the dependency, as needed.

Monitoring
Provides monitoring capabilities by collecting metrics, including cost metrics, for Kubernetes and applications
deployed on managed clusters. Also provides visualization of metrics and evaluates rule expressions to trigger alerts
when specific conditions are observed.

• Kubecost: Provides real-time cost visibility and insights for teams using Kubernetes, helping you continuously reduce your cloud costs. For more information, see https://kubecost.com/.
• kubernetes-dashboard: A general purpose, web-based UI for Kubernetes clusters. It allows users to manage
applications running in the cluster, troubleshoot them and manage the cluster itself. For more information, see
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/.
• kube-prometheus-stack: A stack of applications that collect metrics and provide visualization and alerting
capabilities. For more information, see https://github.com/prometheus-community/helm-charts/tree/main/
charts/kube-prometheus-stack.

Note: Prometheus, Prometheus Alertmanager, and Grafana are included in the bundled installation. For more
information, see https://prometheus.io/, https://prometheus.io/docs/alerting/latest/alertmanager,
and https://grafana.com/.

• nvidia-gpu-operator: The NVIDIA GPU Operator manages NVIDIA GPU resources in a Kubernetes cluster and
automates tasks related to bootstrapping GPU nodes. For more information, see https://catalog.ngc.nvidia.com/
orgs/nvidia/containers/gpu-operator.
• prometheus-adapter: Provides cluster metrics from Prometheus. For more information, see https://github.com/
DirectXMan12/k8s-prometheus-adapter.

Table 30: Monitoring

Platform Application Required Dependencies


kubecost -
kubernetes-dashboard -
kube-prometheus-stack -
prometheus-adapter kube-prometheus-stack
nvidia-gpu-operator -

Security
Allows management of security constraints and capabilities for the clusters and users.

• gatekeeper: A policy controller for Kubernetes. For more information, see https://github.com/open-policy-agent/gatekeeper.

Table 31: Security

Platform Application Required Dependencies


gatekeeper -

Single Sign On (SSO)


Group of platform applications that allow enabling SSO on attached clusters. SSO is a centralized system for
connecting attached clusters to the centralized authority on the management cluster.

• kube-oidc-proxy: A reverse proxy server that authenticates users using OIDC to Kubernetes API servers where OIDC authentication is not available. For more information, see https://github.com/jetstack/kube-oidc-proxy.
• traefik-forward-auth: Installs a forward authentication application providing Google OAuth based authentication
for Traefik. For more information, see https://github.com/thomseddon/traefik-forward-auth.

Table 32: SSO

Platform Application Required Dependencies


kube-oidc-proxy cert-manager, traefik
traefik-forward-auth traefik

Backup
This platform application assists you with backing up and restoring your environment:

• velero: An open source tool for safely backing up and restoring resources in a Kubernetes cluster, performing disaster recovery, and migrating resources and persistent volumes to another Kubernetes cluster. For more information, see https://velero.io/.

Table 33: Backup

Platform Application Required Dependencies


velero rook-ceph-cluster

Service Mesh
Allows deploying service mesh on clusters, enabling the management of microservices in cloud-native applications.
Service mesh can provide a number of benefits, such as providing observability into communications, providing
secure connections, or automating retries and backoff for failed requests.

• istio: Addresses the challenges developers and operators face with a distributed or microservices architecture. For
more information, see https://istio.io/latest/about/service-mesh/.
• jaeger: A distributed tracing system used for monitoring and troubleshooting microservices-based distributed
systems.For more information, see https://www.jaegertracing.io/.
• kiali: A management console for an Istio-based service mesh. It provides dashboards, observability, and lets
you operate your mesh with robust configuration and validation capabilities. For more information, see https://
kiali.io/.

Table 34: Service Mesh

Platform Application Required Dependencies Optional Dependencies


istio kube-prometheus-stack -
jaeger istio -
kiali istio jaeger (optional for monitoring purposes)
knative istio -

NKP AI Navigator Cluster Info Agent


Coupled with the AI Navigator, it analyzes your cluster's data to include live information on queries made through the AI Navigator chatbot.

• ai-navigator-info-api: The application's API service, which performs all data abstraction and data structuring services. This component is enabled by default and included in the AI Navigator.
• ai-navigator-info-agent: After you manually enable this platform application, the agent starts collecting Pro or Management cluster data and injecting it into the Cluster Info Agent database.

Table 35: NKP AI Navigator

Platform Application Required Dependencies Optional Dependencies


ai-navigator-info-agent On the Management/Pro Cluster: ai-navigator-info-api (included in ai-navigator-app) -

Workspace Platform Application Resource Requirements


See Workspace Platform Application Defaults and Resource Requirements on page 42 for a list of all platform
applications, their default deployment configuration, required resources, and storage minimums.

Setting Priority Classes in NKP Applications


In NKP, your workloads can be prioritized to ensure that your critical components stay running in any
situation. Priority Classes can be set for any application in NKP, including Platform Applications, Catalog
applications, and even your own Custom Applications.

About this task


By default, the priority classes of Platform Applications are set by NKP.
For more information about the default priority classes for NKP applications, see the following pages:

• Workspace Platform Application Defaults and Resource Requirements on page 42


• Management Cluster Application Requirements on page 41
• Project Platform Application Configuration Requirements on page 429
This topic provides instructions on how to override the default priority class of any application in NKP to a different
one.
NKP Priority Classes: The priority classes that are available in NKP are as follows:

Before you begin

Table 36: Priority Classes

Class Name Value Description


NKP High nkp-high-priority 100001000 This is the priority class that is used for high priority NKP workloads.
NKP Critical nkp-critical-priority 100002000 This is the highest priority class that is used for critical priority NKP workloads.
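You can confirm that these priority classes exist on your cluster with standard kubectl. This is an optional quick check that uses the names from the table above:
kubectl get priorityclass nkp-high-priority nkp-critical-priority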

1. Set the WORKSPACE_NAMESPACE environment variable to the name of the workspace's namespace where the cluster is attached:
export WORKSPACE_NAMESPACE=<your_workspace_namespace>

2. You are now able to copy the following commands without having to replace the placeholder with your
workspace namespace every time you run a command.
Follow these steps.

Note: Keep in mind that the overrides for each application appears differently and is dependent on how the
application’s helm chart values are configured.
For more information about the helm chart values used in the NKP, see "Component and Applications"
section in the Nutanix Kubernetes Platform Release Notes.
Generally speaking, performing a search for the priorityClassName field allows you to find out how
you can set the priority class for a component.
In the example below which uses the helm chart values in Grafana Loki, the referenced
priorityClassName field is nested under the ingester component. The priority class can be set for
several other components, including distributor, ruler, and on a global level.

Procedure

1. Create a ConfigMap with custom priority class configuration values for Grafana Loki.
The following example sets the priority class of ingester component to the NKP critical priority class.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: ${WORKSPACE_NAMESPACE}
  name: grafana-loki-overrides
data:
  values.yaml: |
    ingester:
      priorityClassName: nkp-critical-priority
EOF

2. Edit the grafana-loki AppDeployment to set the value of spec.configOverrides.name to grafana-loki-overrides.
After your editing is complete, the AppDeployment resembles this example.
apiVersion: apps.kommander.d2iq.io/v1alpha3
kind: AppDeployment
metadata:
  name: grafana-loki
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  appRef:
    name: grafana-loki-0.69.16
    kind: ClusterApp
  configOverrides:
    name: grafana-loki-overrides

3. It takes a few minutes to reconcile, but you can check the ingester pod's priority class after reconciling.
kubectl get pods -n ${WORKSPACE_NAMESPACE} -o custom-columns=NAME:.metadata.name,PRIORITY:.spec.priorityClassName,PRIORITY:.spec.priority | grep ingester
The results appear as follows:
NAME                                       PRIORITY               PRIORITY
grafana-loki-loki-distributed-ingester-0   nkp-critical-priority  100002000

AppDeployment Resources
Use AppDeployments to deploy and customize platform, NKP catalog, and custom applications.
An AppDeployment is a custom resource (see Custom Resource at https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) created by NKP with the purpose of deploying applications (platform, NKP catalog, and custom applications) in the management cluster, managed clusters, or both. Customers of both Pro and Ultimate products use AppDeployments, regardless of their setup (air-gapped, non-air-gapped, and so on) and their infrastructure provider.
When installing NKP, an AppDeployment resource is created for each enabled Platform Application. This
AppDeployment resource references a ClusterApp, which then references the repository that contains a concrete
declarative and preconfigured setup of an application, usually in the form of a HelmRelease. ClusterApps are
cluster-scoped so that these platform applications are deployable to all workspaces or projects.
In the case of NKP catalog and custom applications, the AppDeployment references an App instead of a
ClusterApp, which also references the repository containing the installation and deployment information. Apps are
namespace-scoped and are meant to only be deployable to the workspace or project in which they have been created.
For example, this is the default AppDeployment for the Kube Prometheus Stack platform application:
apiVersion: apps.kommander.nutanix.io/v1alpha3
kind: AppDeployment
metadata:
  name: kube-prometheus-stack
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  appRef:
    name: kube-prometheus-stack-46.8.0
    kind: ClusterApp
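For an NKP catalog or custom application, the AppDeployment has the same shape but references a namespace-scoped App instead of a ClusterApp. The following is a sketch only; the application name and version are hypothetical placeholders:
apiVersion: apps.kommander.nutanix.io/v1alpha3
kind: AppDeployment
metadata:
  name: my-custom-app             # hypothetical application name
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  appRef:
    name: my-custom-app-1.0.0     # hypothetical App name with version
    kind: App                     # namespace-scoped App instead of ClusterApp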

Workspaces
Allow teams or tenants to manage their own clusters using workspaces. Workspaces are a logical grouping
of clusters that maintain a similar configuration, with certain configurations automatically federated to those
clusters. Workspaces give you the flexibility to represent your organization in a way that makes sense
for your teams or tenants. For example, you can create workspaces to separate clusters according to
departments, products, or business functions.

The following procedures are supported for workspaces:

• Deploying Platform Applications Using CLI on page 389


• Platform Applications on page 386
• Platform Applications Dependencies on page 390
• Workspace Platform Application Defaults and Resource Requirements on page 42

Global or Workspace UI
The UI is designed to be accessible for different roles at different levels:

• Global: At the top level, IT administrators manage all clusters across all workspaces.
• Workspace: DevOps administrators manage multiple clusters within a workspace.
• Projects: DevOps administrators or developers manage configuration and services across multiple clusters.

Default Workspace
To get started immediately, you can use the default workspace deployed in NKP. However, note that you
cannot move clusters from one workspace to another after creating or attaching them.

Creating a Workspace
In NKP, you can create your own workspaces.

About this task


To create a workspace:

Procedure

1. From the workspace selection dropdown list in the top menu bar, select Create Workspace.

2. Type a name and description.

3. Click Save.
The workspace is now accessible from the workspace selection dropdown list.

Adding or Editing Workspace Annotations and Labels


When creating or editing a workspace, you can use the Advanced Options to add, edit, or delete
annotations and labels to your workspace. Both the annotations and labels are applied to the workspace
namespace.

About this task


To perform an action in workspace:

Procedure

1. From the top menu bar, select your target workspace.

2. Select the Actions from the dropdown list and click Edit.

3. Enter in new Key and Value labels for your workspace, or edit existing Key and Value labels.

Note: Labels that are added to a workspace are also applied to the kommanderclusters object as well as
to all the clusters in the workspace.
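To verify that the labels have been propagated, you can inspect the kommanderclusters objects in the workspace namespace; this is a quick check, assuming you know the workspace namespace:
kubectl get kommanderclusters -n <workspace_namespace> --show-labels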



Deleting a Workspace
In NKP, you can delete existing workspaces.

About this task


To delete a workspace:

Note: Workspaces can only be deleted if all the clusters in the workspace have been deleted or detached.

Procedure

1. From the top menu bar, select Global.

2. From the sidebar menu, select Workspaces.

3. Select the three-dot button to the right of the workspace you want to delete, and then click Delete.

4. Confirm the workspace deletion in the Delete Workspace dialog box.

Workspace Applications
This topic describes the applications and application types that you can use with NKP.
Application types are either pre-packaged applications from the Nutanix Application Catalog or custom applications
that you maintain for your teams or organization.

• Platform Applications on page 386 are applications integrated into NKP.


• Cluster-scoped Application Configuration from the NKP UI on page 398
• Cluster-scoped Application for Existing AppDeployments on page 400

Cluster-scoped Application Configuration from the NKP UI


When you enable an application for a workspace, you deploy this application to all clusters within that
workspace. You can also choose to enable or customize an application on certain clusters within a
workspace.
This functionality allows you to use NKP in a multi-cluster scenario without restricting the management of multiple
clusters from a single workspace.

Note: NKP Pro users are only able to configure and deploy applications to a single cluster within a workspace.
Selecting an application to deploy to a cluster skips cluster selection and takes you directly to the workspace
configuration overrides page.

Enabling a Cluster-scoped Application

Before you begin


Ensure that you’ve provisioned or attached clusters in one of the following environments:

• Amazon Web Services (AWS): Creating a New AWS Air-gapped Cluster on page 779
• Amazon Elastic Kubernetes Service (EKS):Create an EKS Cluster from the UI on page 820
• Microsoft Azure:Creating a Managed Azure Cluster Through the NKP UI on page 464
For more information, see the current list of Catalog and Platform Applications:

• Workplace Catalog Applications on page 406



• Platform Applications on page 386
Navigate to the workspace containing the clusters you want to deploy to by selecting the appropriate workspace name
from the dropdown list at the top of the NKP dashboard.

Procedure

1. From the left navigation pane, find the application you want to deploy to the cluster, and select Applications.

2. Select the three-dot menu in the desired application’s tile and select Enable.

Note: You can also access the Application Enablement by selecting the three-dot menu > View > Details.
Then, select Enable from the application’s details page.

The Application Enablement page appears.

3. Select the cluster(s) that you want to deploy the application to.
The available clusters are sorted by Name, Type, Provider and any Labels that you added.

4. In the top-right corner of the Application Enablement page, deploy the application to the clusters by selecting
Enable.
You are automatically redirected to either the Applications or View Details page.
To view the application enabled in your chosen cluster, navigate to the Clusters page on the left navigation bar.
The application appears in the Applications pane of the appropriate cluster.

Note: Once you enable an application at the workspace level, NKP automatically enables that app on any other
cluster you create or attach.

Configuring a Cluster-scoped Application

About this task


For scenarios where applications require different configurations on a per-cluster basis, navigate to the Applications
page and select Edit from the three-dot menu of the appropriate application to return to the application enablement
page.

Procedure

1. Select the cluster(s) that you want to deploy the application to.
The available clusters can be sorted by Name, Type, Provider and any Labels you’ve added.

2. Select the Configuration tab.

3. The Configuration tab contains two separate types of code editors, where you can enter your specified overrides
and configurations.

» Workspace Application Configuration: A workspace-level code editor that applies all configurations and
overrides to the entirety of the workspace and its clusters for this application.
» Cluster Application Configuration Override: A cluster-scoped code editor that applies configurations
and overrides to the cluster specified. These customizations will merge with the workspace application
configuration. If there is no cluster-scoped configuration, the workspace configuration applies.

4. If you already have a configuration to apply in a text or .yaml file, you can upload the file by selecting Upload
File. If you want to download the displayed set of configurations, select Download File.



5. Finish configuring the cluster-scoped applications by selecting Save in the top right corner of the Application
Enablement page.
You are automatically redirected to either the Applications or View Details page. To view the custom
configurations of the application in the cluster, select the Configurations tab on the details page of the
application.

Note:
Editing is disabled in the code boxes displayed in the application’s details page. To edit the
configuration, click Edit in the top right of the page and repeat the steps in this section.

Removing a Cluster-scoped Application

About this task


Navigate to the cluster you’ve deployed your applications to by selecting Clusters from the left navigation bar.

Procedure

1. Click on the Applications tab.

2. Select the three-dot menu in the application tile that you want and select Uninstall.
A prompt appears to confirm your decision to uninstall the application.

3. Follow the instructions in the prompt and select Uninstall.

4. Refresh the page to confirm that the application has been removed from the cluster.
This process only removes the application from the specific cluster you’ve navigated to. To remove this
application from other clusters, navigate to the Clusters page and repeat the process.

Cluster-scoped Application for Existing AppDeployments


This topic describes how to enable cluster-scoped configuration of applications for existing
AppDeployments.
When you enable an application for a workspace, you deploy this application to all clusters within that workspace.
You can also choose to enable or customize an application on certain clusters within a workspace. This functionality
allows you to use NKP in a multi-cluster scenario without restricting the management of multiple clusters from a
single workspace.
Your NKP cluster comes bundled with a set of default application configurations. If you want to override the default
configuration of your applications, you can define workspace configOverrides on top of the default workspace
configuration. If you want to further customize your workspace by enabling applications on a per-cluster basis or
by defining per-cluster customizations, you can create and apply clusterConfigOverrides.
The cluster-scoped enablement and customization of applications is an Ultimate-only feature, which allows the
configuration of all workspace Platform Applications on page 386, Workplace Catalog Applications on
page 406, and Custom Applications on page 414 through the CLI in your managed and attached clusters
regardless of your environment setup (air-gapped or non-air-gapped). This capability is not provided for project
applications.

Enabling an Application Per Cluster

Before you begin

• Any application you wish to enable or customize at the cluster level must first be enabled at the workspace level
through an AppDeployment. See Deploying Platform Applications Using CLI on page 389 and Workplace
Catalog Applications on page 406.



• For custom configurations, you must create a ConfigMap. For all the required spec fields for each customization
you want to add to an application in a cluster, see AppDeployment Resources on page 396.
You can apply a ConfigMap to several clusters, or create a ConfigMap for each cluster, but the ConfigMap
object must exist in the Management cluster.
• Determine the name of the workspace where you wish to perform the deployments. You can use the nkp get
workspaces command to see the list of workspace names and their corresponding namespaces.

• Set the WORKSPACE_NAMESPACE environment variable to the name of the workspace’s namespace where the
cluster is attached.
export WORKSPACE_NAMESPACE=<workspace_namespace>

• Set the WORKSPACE_NAME environment variable to the name of the workspace where the cluster is attached.
export WORKSPACE_NAME=<workspace_name>

When you enable an application on a workspace, it is deployed to all clusters in the workspace by default. If you want
to deploy it only to a subset of clusters when enabling it on a workspace for the first time, follow these steps.
To enable an application per cluster for the first time:

Procedure

1. Create an AppDeployment for your application, selecting a subset of clusters within the workspace to enable
it on. You can use the nkp get clusters --workspace ${WORKSPACE_NAME} command to see the list of
clusters in the workspace.
The following snippet is an example. Replace the application name, version, workspace name and cluster names
according to your requirements. For compatible versions, see the "Components and Applications" section in the
Nutanix Kubernetes Platforms Release Notes.
nkp create appdeployment kube-prometheus-stack --app kube-prometheus-stack-46.8.0 --
workspace ${WORKSPACE_NAME} --clusters attached-cluster1,attached-cluster2

2. (Optional) Check the current status of the AppDeployment to see the names of the clusters where the application
is currently enabled.
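For example, one way to do this (a sketch using the application name from the previous step):
kubectl get appdeployment kube-prometheus-stack -n ${WORKSPACE_NAMESPACE} -o yaml
Review the spec.clusterSelector and status fields to confirm the clusters on which the application is enabled.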

Enabling or Disabling an Application Per Cluster

You can enable or disable an application per cluster after it has been enabled at the workspace level.

About this task


You can enable or disable applications at any time. After you have enabled the application at the workspace level, the
spec.clusterSelector field populates.

Note: For clusters that are newly attached into the workspace, all applications enabled for the workspace are
automatically enabled on and deployed to the new clusters.

If you want to see on which clusters your application is currently deployed, print and review the current state of
your AppDeployment. For more information, see AppDeployment Resources on page 396.

Procedure
Edit the AppDeployment YAML by adding or removing the names of the clusters where you want to enable your
application in the clusterSelector section:



The following snippet is an example. Replace the application name, version, workspace name and cluster names
according to your requirements.
cat <<EOF | kubectl apply -f -
apiVersion: apps.kommander.d2iq.io/v1alpha3
kind: AppDeployment
metadata:
  name: kube-prometheus-stack
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  appRef:
    name: kube-prometheus-stack-46.8.0
    kind: ClusterApp
  clusterSelector:
    matchExpressions:
      - key: kommander.d2iq.io/cluster-name
        operator: In
        values:
          - attached-cluster1
          - attached-cluster3-new
EOF

Customizing an Application Per Cluster

You can customize an application separately for each cluster on which it is deployed. If you want to customize
the application for a cluster that is not yet attached, follow the instructions below so that the application is
deployed with the custom configuration during attachment.

About this task


To enable per-cluster customizations:

Procedure

1. Reference the name of the ConfigMap to be applied per cluster in the spec.clusterConfigOverrides
fields. In this example, you have three different customizations specified in three different ConfigMaps for three
different clusters in one workspace.
cat <<EOF | kubectl apply -f -
apiVersion: apps.kommander.d2iq.io/v1alpha3
kind: AppDeployment
metadata:
  name: kube-prometheus-stack
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  appRef:
    name: kube-prometheus-stack-46.8.1
    kind: ClusterApp
  clusterSelector:
    matchExpressions:
      - key: kommander.d2iq.io/cluster-name
        operator: In
        values:
          - attached-cluster1
          - attached-cluster2
          - attached-cluster3-new
  clusterConfigOverrides:
    - configMapName: kps-cluster1-overrides
      clusterSelector:
        matchExpressions:
          - key: kommander.d2iq.io/cluster-name
            operator: In
            values:
              - attached-cluster1
    - configMapName: kps-cluster2-overrides
      clusterSelector:
        matchExpressions:
          - key: kommander.d2iq.io/cluster-name
            operator: In
            values:
              - attached-cluster2
    - configMapName: kps-cluster3-overrides
      clusterSelector:
        matchExpressions:
          - key: kommander.d2iq.io/cluster-name
            operator: In
            values:
              - attached-cluster3-new
EOF

2. If you have not done so yet, create the ConfigMaps referenced in each clusterConfigOverrides entry.
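For example, a minimal override ConfigMap for one of the clusters might look like the following sketch; the values shown are placeholders for the Helm values you actually want to override:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: kps-cluster1-overrides
  namespace: ${WORKSPACE_NAMESPACE}
data:
  values.yaml: |
    someField: someValue
EOF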

Note:

• The changes are applied only if the YAML file has a valid syntax.
• Set up only one cluster override ConfigMap per cluster. If there are several ConfigMaps configured
for a cluster, only one will be applied.
• Cluster override ConfigMaps must be created on the Management cluster.

Customizing an Application Per Cluster at Attachment

You can customize the application configuration for a cluster prior to its attachment, so that the application
is deployed with this custom configuration on attachment. This is preferable if you do not want to
redeploy the application with an updated configuration after it has been initially installed, which may cause
downtime.

About this task


To enable per-cluster customizations, follow these steps before attaching the cluster:

Procedure

1. Set the CLUSTER_NAME environment variable to the cluster name that you will give your to-be-attached cluster.
export CLUSTER_NAME=<your_attached_cluster_name>
Reference the name of the ConfigMap you want to apply to this cluster in the spec.clusterConfigOverrides
fields. You do not need to update the spec.clusterSelector field. In this example, you have the
kps-cluster1-overrides customization specified for attached-cluster1 and a different customization
(in the kps-your-attached-cluster-overrides ConfigMap) for your to-be-attached cluster.
cat <<EOF | kubectl apply -f -
apiVersion: apps.kommander.d2iq.io/v1alpha3
kind: AppDeployment
metadata:
  name: kube-prometheus-stack
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  appRef:
    name: kube-prometheus-stack-46.8.1
    kind: ClusterApp
  clusterSelector:
    matchExpressions:
      - key: kommander.d2iq.io/cluster-name
        operator: In
        values:
          - attached-cluster1
  clusterConfigOverrides:
    - configMapName: kps-cluster1-overrides
      clusterSelector:
        matchExpressions:
          - key: kommander.d2iq.io/cluster-name
            operator: In
            values:
              - attached-cluster1
    - configMapName: kps-your-attached-cluster-overrides
      clusterSelector:
        matchExpressions:
          - key: kommander.d2iq.io/cluster-name
            operator: In
            values:
              - ${CLUSTER_NAME}
EOF

2. If you have not done so yet, create the ConfigMap referenced for your to-be-attached cluster.

Note:

• The changes are applied only if the YAML file has a valid syntax.
• Cluster override ConfigMaps must be created on the Management cluster.

Verify the Configuration of your Application

Procedure

1. To verify whether the applications connect to the managed or attached cluster and check the status of the
deployments, see Workplace Catalog Applications on page 406.

2. If you want to know how the AppDeployment resource is currently configured, print and review the
state of your AppDeployments.

Disabling the Custom Configuration of an Application Per Cluster

Enabled customizations are defined in a ConfigMap which, in turn, is referenced in the
spec.clusterConfigOverrides object of your AppDeployment.

Procedure

1. Review your current configuration to establish what you want to remove.


kubectl get appdeployment -n ${WORKSPACE_NAMESPACE} kube-prometheus-stack -o yaml
The result appears as follows.
apiVersion: apps.kommander.d2iq.io/v1alpha3
kind: AppDeployment
metadata:
  name: kube-prometheus-stack
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  appRef:
    name: kube-prometheus-stack-46.8.0
    kind: ClusterApp
  configOverrides:
    name: kube-prometheus-stack-overrides-attached
  clusterSelector:
    matchExpressions:
      - key: kommander.d2iq.io/cluster-name
        operator: In
        values:
          - attached-cluster1
          - attached-cluster2
  clusterConfigOverrides:
    - configMapName: kps-cluster1-overrides
      clusterSelector:
        matchExpressions:
          - key: kommander.d2iq.io/cluster-name
            operator: In
            values:
              - attached-cluster1
    - configMapName: kps-cluster2-overrides
      clusterSelector:
        matchExpressions:
          - key: kommander.d2iq.io/cluster-name
            operator: In
            values:
              - attached-cluster2
Here you can see that kube-prometheus-stack has been enabled for attached-cluster1 and
attached-cluster2. There is also a custom configuration for each of the clusters: kps-cluster1-overrides
and kps-cluster2-overrides.

2. To delete the customization, delete the configMapName entry of the cluster. This is located under
clusterConfigOverrides.
cat <<EOF | kubectl apply -f -
apiVersion: apps.kommander.d2iq.io/v1alpha3
kind: AppDeployment
metadata:
  name: kube-prometheus-stack
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  appRef:
    kind: ClusterApp
    name: kube-prometheus-stack-46.8.0
  configOverrides:
    name: kube-prometheus-stack-ws-overrides
  clusterSelector:
    matchExpressions:
      - key: kommander.d2iq.io/cluster-name
        operator: In
        values:
          - attached-cluster1
  clusterConfigOverrides:
    - configMapName: kps-cluster1-overrides
      clusterSelector:
        matchExpressions:
          - key: kommander.d2iq.io/cluster-name
            operator: In
            values:
              - attached-cluster1
EOF

Note: Compare steps one and two for a reference of how an entry should be deleted.

3. Before deleting a ConfigMap that contains your customization, ensure you will NOT require it at a later time. It is
not possible to restore a deleted ConfigMap. If you choose to delete it, run.
kubectl delete configmap <name_configmap> -n ${WORKSPACE_NAMESPACE}

Note: It is not possible to delete a ConfigMap that is being actively used and referenced in the
configOverride of any AppDeployment.

Workplace Catalog Applications


Catalog applications are any third-party or open source applications that appear in the Catalog. These
applications are deployed to be used for customer workloads. Nutanix provides Workplace Catalog
Applications for use in your environment.

Installing the NKP Catalog Application Using the CLI


Catalog applications are applications provided by Nutanix for use in your environment.

Before you begin

• Ensure your clusters run on a supported Kubernetes version and that this Kubernetes version is also compatible
with your catalog application version.
• For customers with an NKP Ultimate License on page 28 and a multi-cluster environment, Nutanix recommends
keeping all clusters on the same Kubernetes version. This ensures your NKP catalog application can run on all
clusters in a given workspace.
• Ensure that your NKP Catalog application is compatible with:

• The Kubernetes version in all the Managed and Attached clusters of the workspace where you want to install
the catalog application.
• The range of Kubernetes versions supported in this release of NKP.
• If your current Catalog application version is not compatible, upgrade the application to a compatible version.

Note: With the latest NKP version, only the following versions of Catalog applications are supported. All the
previous versions and any other applications previously included in the Catalog are now deprecated.

Table 37: Supported Catalog Applications

Name                              App ID               Compatible Kubernetes Versions   Application Version
kafka-operator-0.25.1             kafka-operator       1.21-1.27                        0.25.1
zookeeper-operator-0.2.16-nkp.1   zookeeper-operator   1.26-1.27                        0.2.15

About this task


Follow these steps to install the NKP catalog from the CLI.



Procedure

1. If you are running in an air-gapped environment, install Kommander in an air-gapped environment. For more
information, see Installing Kommander in an Air-gapped Environment on page 965.

2. Set the WORKSPACE_NAMESPACE environment variable to the name of your workspace’s namespace.
export WORKSPACE_NAMESPACE=<workspace namespace>

3. Create the GitRepository.


kubectl apply -f - <<EOF
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: nkp-catalog-applications
  namespace: ${WORKSPACE_NAMESPACE}
  labels:
    kommander.d2iq.io/gitapps-gitrepository-type: catalog
    kommander.d2iq.io/gitrepository-type: catalog
spec:
  interval: 1m0s
  ref:
    tag: v2.12.0
  timeout: 20s
  url: https://github.com/mesosphere/nkp-catalog-applications
EOF

4. Verify that you can see the NKP workspace catalog Apps available in the UI (in the Applications section in said
workspace), and in the CLI, using kubectl.
kubectl get apps -n ${WORKSPACE_NAMESPACE}

Kafka Operator in a Workspace


Apache Kafka is an open-source distributed event streaming platform used for high-performance data
pipelines, streaming analytics, data integration, and mission-critical applications. The Kafka Operator is
a Kubernetes operator to automate provisioning, management, autoscaling, and operations of Apache
Kafka clusters deployed to Kubernetes. It works by watching custom resources, such as KafkaClusters,
KafkaUsers, and KafkaTopics, to provision underlying Kubernetes resources (that is StatefulSets)
required for a production-ready Kafka Cluster.

Usage of Custom Image for a Kafka Cluster

Warning: If you use a custom version of KafkaCluster with cruise.control, ensure you use the custom resource
image version 2.5.123 in the .cruiseControlConfig.image field for both air-gapped and non-air-gapped
environments.

To avoid the critical CVEs associated with the official kafka image in version v0.25.1, a custom image must be
specified when creating a Kafka cluster.
Specify the following custom values in the KafkaCluster CRD (see the sketch after this list):

• .spec.clusterImage to ghcr.io/banzaicloud/kafka:2.13-3.4.1

• .spec.cruiseControlConfig.initContainers[*].image to ghcr.io/banzaicloud/cruise-
control:2.5.123
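The following is a minimal sketch of where these fields sit in a KafkaCluster resource; all other fields are omitted, and the initContainer name is illustrative rather than required by the operator.
apiVersion: "kafka.banzaicloud.io/v1beta1"
kind: "KafkaCluster"
# ...
spec:
  clusterImage: ghcr.io/banzaicloud/kafka:2.13-3.4.1
  cruiseControlConfig:
    image: ghcr.io/banzaicloud/cruise-control:2.5.123
    initContainers:
      - name: cruise-control-init   # illustrative name
        image: ghcr.io/banzaicloud/cruise-control:2.5.123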



Installing Kafka Operator in a Workspace

This topic describes the Kafka operator running in a workspace namespace, and how to create and
manage Kafka clusters in any project namespaces.

About this task


Follow these steps to install the Kafka operator in a workspace.

Note: Only install the Kafka operator once per workspace.

For more information, see Deploying Kafka in a Project on page 432.

Procedure

1. Follow the generic installation instructions for workspace catalog applications on the Application Deployment
page.

2. Within the AppDeployment, update the appRef to specify the correct kafka-operator App. You can find the
appRef.name by listing the available Apps in the workspace namespace.
kubectl get apps -n ${WORKSPACE_NAMESPACE}
For details on custom configuration for the operator, see Kafka operator Helm Chart documentation at https://
github.com/banzaicloud/koperator/tree/master/charts/kafka-operator#configuration.

Uninstalling Kafka Operator Using the CLI

Uninstalling the Kafka operator does not affect existing KafkaCluster deployments. After uninstalling
the operator, you must manually remove any remaining Custom Resource Definitions (CRDs) from the
operator.

Procedure

1. Delete all of the deployed Kafka custom resources.


For more information, see Deleting Kafka in a Project on page 434.

2. Uninstall a Kafka operator AppDeployment.


kubectl -n <workspace namespace> delete AppDeployment <name of AppDeployment>

3. Remove Kafka CRDs.

Note: The CRDs are not finalized for deletion until you delete the associated custom resources.

kubectl delete crds kafkaclusters.kafka.banzaicloud.io kafkausers.kafka.banzaicloud.io kafkatopics.kafka.banzaicloud.io

Zookeeper Operator in Workspace


ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed
synchronization, and providing group services. The ZooKeeper operator is a Kubernetes operator that
handles the provisioning and management of ZooKeeper clusters. It works by watching custom resources,
such as ZookeeperClusters, to provision the underlying Kubernetes resources (StatefulSets) required
for a production-ready ZooKeeper Cluster.



Usage of Custom Image for a Zookeeper Cluster
To avoid the critical CVEs associated with the official zookeeper image in version v0.2.15, a custom image must be
specified when creating a zookeeper cluster.
apiVersion: "zookeeper.pravega.io/v1beta1"
kind: "ZookeeperCluster"
# ...
spec:
image:
repository: ghcr.io/mesosphere/zookeeper
tag: v0.2.15-d2iq
For more information about custom images go to Ultimate: Upgrade Project Catalog Applications on page 1107.

Installing Zookeeper Operator in a Workspace

This topic describes the ZooKeeper operator running in a workspace namespace, and how to create and
manage ZooKeeper clusters in any project namespaces.

About this task


Follow these steps to install the Zookeeper operator in a workspace.

Note: Only install the Zookeeper operator once per workspace.

For more information, see Deploying ZooKeeper in a Project on page 431.

Procedure

1. Follow the generic installation instructions for workspace catalog applications in Application Deployment
page.

2. Within the AppDeployment, update the appRef to specify the correct zookeeper-operator App. You can
find the appRef.name by listing the available Apps in the workspace namespace.
kubectl get apps -n ${WORKSPACE_NAMESPACE}
For details on custom configuration for the operator, see ZooKeeper operator Helm Chart documentation at
https://github.com/pravega/zookeeper-operator/tree/master/charts/zookeeper-operator#configuration.

Uninstalling Zookeeper Operator Using the CLI

Uninstalling the ZooKeeper operator will not directly affect any running ZookeeperClusters. By default,
the operator waits for any ZookeeperClusters to be deleted before it will fully uninstall (you can set
hooks.delete: true in the application configuration to disable this behavior). After uninstalling the
operator, you need to manually clean up any leftover Custom Resource Definitions (CRDs).

Procedure

1. Delete all ZookeeperClusters.


For more information, see Deleting Zookeeper in a Project on page 432.

2. Uninstall a ZooKeeper operator AppDeployment.


kubectl -n <workspace namespace> delete AppDeployment <name of AppDeployment>



3. Remove Zookeeper CRDs.

Warning: After you remove the CRDs, all deployed ZookeeperClusters will be deleted!

kubectl delete crds zookeeperclusters.zookeeper.pravega.io

Deployment of Catalog Applications in Workspaces


Deploy applications to attached clusters using the CLI. This topic describes how to use the CLI to deploy a
workspace catalog application to attached clusters within a workspace.
To deploy an application to selected clusters within a workspace, see Cluster-scoped Application for Existing
AppDeployments on page 400.

Enabling the Catalog Application Using the UI

Before you begin


Before you begin, you must have:

• A running cluster with Kommander installed. The cluster must be on a supported Kubernetes version for this
release of NKP and also compatible with the catalog application version you want to install.
• The Attach an Existing Kubernetes Cluster section of the documentation completed. For more information, see
Kubernetes Cluster Attachment on page 473.
• Set the WORKSPACE_NAMESPACE environment variable to the name of the workspace’s namespace the attached
cluster exists in.
export WORKSPACE_NAMESPACE=<workspace_namespace>
After creating a GitRepository, use either the NKP UI or the CLI to enable your catalog applications.

Note: From within a workspace, you can enable applications to deploy. Verify that an application has successfully
deployed through the CLI.

About this task


Follow these steps to enable your catalog applications from the NKP UI:

Procedure

1. Ultimate only: From the top menu bar, select your target workspace.

2. From the sidebar menu, select Applications to browse the available applications from your configured
repositories.

3. Select the three dot button on the required application tile and select Enable.

4. If available, select a version from the dropdown list.


This dropdown list will only be visible if there is more than one version.

5. (Optional) If you want to override the default configuration values, copy your customized values into the text
editor under Configure Service or upload your YAML file that contains the values.
someField: someValue



6. Confirm the details are correct, and then click Enable.
For all applications, you must provide a display name and an ID which is automatically generated based on
what you enter for the display name, unless or until you edit the ID directly. The ID must be compliant with
Kubernetes DNS subdomain name validation rules in the Kubernetes documentation.
Alternately, you can use the CLI to enable your catalog applications.

Enabling the Catalog Application Using the CLI

See Workspace Catalog Applications for the list of available applications that you can deploy on the
attached cluster.

Before you begin

Procedure

1. Enable a supported application to deploy to your attached Kubernetes cluster with an AppDeployment resource.
For more information, see Kubernetes Cluster Attachment on page 473.

2. Within the AppDeployment, define the appRef to specify which App to enable.
cat <<EOF | kubectl apply -f -
apiVersion: apps.kommander.d2iq.io/v1alpha3
kind: AppDeployment
metadata:
  name: kafka-operator
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  appRef:
    name: kafka-operator-0.25.1
    kind: App
EOF

Note:

• The appRef.name must match the app name from the list of available catalog applications.
• Create the resource in the workspace you just created, which instructs Kommander to deploy the
AppDeployment to the KommanderClusters in the same workspace.

Enabling the Catalog Application With a Custom Configuration Using the CLI

About this task


To enable the catalog application:

Procedure

1. Provide the name of a ConfigMap in the AppDeployment, which provides custom configuration on top of the
default configuration.
cat <<EOF | kubectl apply -f -
apiVersion: apps.kommander.d2iq.io/v1alpha3
kind: AppDeployment
metadata:
  name: kafka-operator
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  appRef:
    name: kafka-operator-0.25.1
    kind: App
  configOverrides:
    name: kafka-operator-overrides
EOF

2. Create the ConfigMap with the name provided in the step above, with the custom configuration.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: ${WORKSPACE_NAMESPACE}
  name: kafka-operator-overrides
data:
  values.yaml: |
    operator:
      verboseLogging: true
EOF
Kommander waits for the ConfigMap to be present before deploying the AppDeployment to the managed or
attached clusters.

Verify the Catalog Applications

The applications are now enabled.

Procedure
Connect to the attached cluster and check the HelmReleases to verify the deployment.
kubectl get helmreleases -n ${WORKSPACE_NAMESPACE}
The result appears as follows.
NAMESPACE              NAME             READY   STATUS                             AGE
workspace-test-vjsfq   kafka-operator   True    Release reconciliation succeeded   7m3s

Workspace Catalog Application Upgrade


Upgrade catalog applications using the CLI or UI.
Before upgrading, keep in mind the distinction between Platform applications and Catalog applications. Platform
applications are deployed and upgraded as a set for each cluster or workspace. Catalog applications are deployed
separately, so that you can deploy and upgrade them individually for each workspace or project.

Upgrading the Catalog Applications Using the UI

Before you begin


Complete the upgrade prerequisites tasks. For more information, see Upgrade Prerequisites on page 1092.

About this task


To upgrade an application from the NKP UI:

Procedure

1. From the top menu bar, select your target workspace.



2. From the sidebar menu, select Applications.

3. Select the three dot button on the required application tile, and then select Edit.

4. Select the Version from the dropdown list and select a new version.
This dropdown list is only available if there is a newer version to upgrade to.

5. Click Save.

Upgrading the Catalog Applications Using the CLI

Before you begin

Note: The commands use the workspace name, not the namespace.
You can retrieve the workspace name by running the following command.
nkp get workspaces
To view a list of the apps deployed to your workspace, run the following command.
nkp get appdeployments --workspace=<workspace-name>

Complete the upgrade prerequisites tasks. For more information, see Upgrade Prerequisites on page 1092.

About this task


To upgrade an application from the CLI:

Procedure

1. To see what app(s) and app versions are available to upgrade, run the following command.

Note: You can derive the app version from the app name, which uses the format <APP ID>-<APP VERSION>.

kubectl get apps -n ${WORKSPACE_NAMESPACE}


You can also use this command to display the apps and app versions, for example.
kubectl get apps -n ${WORKSPACE_NAMESPACE} -o jsonpath='{range .items[*]}{@.spec.appId}{"----"}{@.spec.version}{"\n"}{end}'
This is an example of an output that displays the different application and application versions.
kafka-operator----0.20.0
kafka-operator----0.20.2
kafka-operator----0.23.0-dev.0
kafka-operator----0.25.1
zookeeper-operator----0.2.13
zookeeper-operator----0.2.14
zookeeper-operator----0.2.15



2. Run the following command to upgrade an application from the NKP CLI.
nkp upgrade catalogapp <appdeployment-name> --workspace=<my-workspace-name> --to-version=<version.number>
The following command upgrades the Kafka Operator application, named kafka-operator-abc in a workspace
to version 0.25.1.
nkp upgrade catalogapp kafka-operator-abc --workspace=my-workspace --to-version=0.25.1

Note: Platform applications cannot be upgraded on a one-off basis, and must be upgraded in a single process for
each workspace. If you attempt to upgrade a platform application with these commands, you receive an error and
the application is not upgraded.

Custom Applications
Custom applications are third-party applications you have added to the NKP Catalog.
Custom applications are any third-party applications that are not provided in the NKP Application Catalog. Custom
applications can leverage applications from the NKP Catalog or be fully-customized. There is no expectation of
support by Nutanix for a Custom application. Custom applications can be deployed on Konvoy clusters or on any
Nutanix supported 3rd party Kubernetes distribution.

Git Repository Structure

Git repositories must be structured in a specific manner for defined applications to be processed by
Kommander.
You must structure your git repository based on the following guidelines, for your applications to be processed
properly by Kommander so that they can be deployed.

Git Repository Directory Structure


Use the following basic directory structure for your git repository.
├── helm-repositories
│   ├── <helm repository 1>
│   │   ├── kustomization.yaml
│   │   └── <helm repository name>.yaml
│   └── <helm repository 2>
│       ├── kustomization.yaml
│       └── <helm repository name>.yaml
└── services
    ├── <app name>
    │   ├── <app version1>   # semantic version of the app helm chart. e.g., 1.2.3
    │   │   ├── defaults
    │   │   │   ├── cm.yaml
    │   │   │   └── kustomization.yaml
    │   │   ├── <app name>.yaml
    │   │   └── kustomization.yaml
    │   ├── <app version2>   # another semantic version of the app helm chart. e.g., 2.3.4
    │   │   ├── defaults
    │   │   │   ├── cm.yaml
    │   │   │   └── kustomization.yaml
    │   │   ├── <app name>.yaml
    │   │   └── kustomization.yaml
    │   └── metadata.yaml
    └── <another app name>
        ...
Remember the following guidelines:



• Define applications in the services/ directory.
• You can define multiple versions of an application, under different directories nested under the services/<app
name>/ directory.

• Define application manifests (HelmRelease resources; for more information, see HelmRelease in the Flux
documentation) under each versioned directory, in the services/<app name>/<version>/<app name>.yaml
file, which is listed in that directory's kustomization.yaml Kubernetes Kustomization file. For more
information, see The Kustomization File in the SIG CLI documentation.
• Define the default values ConfigMap for HelmReleases in the services/<app name>/<version>/
defaults directory, accompanied by a kustomization.yaml Kubernetes Kustomization file pointing to the
ConfigMap file.

• Define the metadata.yaml of each application under the services/<app name>/ directory. For more
information, see Workspace Application Metadata on page 416
For an example of how to structure custom catalog Git repositories, see https://github.com/mesosphere/nkp-
catalog-applications.
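For instance, the defaults directory of an application version might contain files like the following sketch; the file contents, names, and values are illustrative:
# services/<app name>/<version>/defaults/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- cm.yaml

# services/<app name>/<version>/defaults/cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ${releaseName}-defaults   # illustrative; prefixing with ${releaseName} avoids name clashes
data:
  values.yaml: |
    someField: someValue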

Helm Repositories
You must include the HelmRepository that is referenced in each HelmRelease's Chart spec.
Each services/<app name>/<version>/kustomization.yaml must include the path of the YAML file that
defines the HelmRepository. For example.
# services/<app name>/<version>/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- <app name>.yaml
- ../../../helm-repositories/<helm repository 1>
For more information, see Helm Repositories in the Flux documentation.

Substitution Variables
Some substitution variables are provided. For more information, see Kustomization in the Flux documentation.

• ${releaseName}: For each App deployment, this variable is set to the AppDeployment name. Use this
variable to prefix the names of any resources that are defined in the application directory in the Git repository
so that multiple instances of the same application can be deployed. If you create resources without using the
releaseName prefix (or suffix) in the name field, there can be conflicts if the same named resource is created in
that same namespace.
• ${releaseNamespace}: The namespace of the Workspace.

• ${workspaceNamespace}: The namespace of the Workspace that the Project belongs to.
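As a sketch of how these variables might be used in an application manifest, consider the following HelmRelease; the chart name, repository name, and defaults ConfigMap name are assumptions for illustration only:
# services/<app name>/<version>/<app name>.yaml (illustrative)
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: ${releaseName}
  namespace: ${releaseNamespace}
spec:
  interval: 15s
  chart:
    spec:
      chart: my-app                    # assumed chart name
      version: 1.2.3
      sourceRef:
        kind: HelmRepository
        name: my-helm-repository       # must match a repository under helm-repositories/
        namespace: ${releaseNamespace}
  valuesFrom:
    - kind: ConfigMap
      name: ${releaseName}-defaults    # assumed defaults ConfigMap name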

Creating a Git Repository

Use the CLI to create the GitRepository resource and add a new repository to your Workspace.

About this task


Create a Git Repository in the Workspace namespace.



Procedure

1. If you are running in an air-gapped environment, complete the steps in Installing Kommander in an Air-gapped
Environment on page 965.

2. Set the WORKSPACE_NAMESPACE environment variable to the name of your workspace’s namespace.
export WORKSPACE_NAMESPACE=<workspace_namespace>

3. Adapt the URL of your Git repository.


kubectl apply -f - <<EOF
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: example-repo
  namespace: ${WORKSPACE_NAMESPACE}
  labels:
    kommander.d2iq.io/gitapps-gitrepository-type: nkp
    kommander.d2iq.io/gitrepository-type: catalog
spec:
  interval: 1m0s
  ref:
    branch: <your-target-branch-name> # e.g., main
  timeout: 20s
  url: https://github.com/<example-org>/<example-repo>
EOF

4. Ensure the status of the GitRepository signals a ready state.


kubectl get gitrepository example-repo -n ${WORKSPACE_NAMESPACE}
The repository commit also displays the ready state.
NAME           URL                                              READY   STATUS                                                               AGE
example-repo   https://github.com/example-org/example-repo   True    Fetched revision: master/6c54bd1722604bd03d25dcac7a31c44ff4e03c6a   11m
For more information on the GitRepository resource fields and how to make Flux aware of credentials required to
access a private Git repository, see the Secret reference section in the Flux documentation at
https://fluxcd.io/flux/components/source/gitrepositories/#secret-reference.

Note: To troubleshoot issues with adding the GitRepository, review the following logs.
kubectl -n kommander-flux logs -l app=source-controller
[...]
kubectl -n kommander-flux logs -l app=kustomize-controller
[...]
kubectl -n kommander-flux logs -l app=helm-controller
[...]

Workspace Application Metadata

You can define how custom applications display in the NKP UI by defining a metadata.yaml file for each
application in the git repository. You must define this file at services/<application>/metadata.yaml for
it to be processed correctly.

Note: To display more information about custom applications in the UI, define a metadata.yaml file for each
application in the Git repository.



You can define the following fields:

Table 38: Workplace Application Metadata

Field         Default                Description
displayName   falls back to App ID   Display name of the application for the UI.
description   ""                     Short description, should be a sentence or two, displayed in the UI on the application card.
category      general                1 or more categories for this application. Categories are used to group applications in the UI.
overview                             Markdown overview used on the application detail page in the UI.
icon                                 Base64 encoded icon SVG file used for application logos in the UI.
scope         project                List of scopes, can be set only to project or workspace currently.

None of these fields are required for the application to display in the UI.
Here is an example metadata.yaml file.
displayName: Prometheus Monitoring Stack
description: Stack of applications that collect metrics and provides visualization and
  alerting capabilities. Includes Prometheus, Prometheus Alertmanager and Grafana.
category:
- monitoring
overview: >
  # Overview
  A stack of applications that collects metrics and provides visualization and alerting
  capabilities. Includes Prometheus, Prometheus Alertmanager and Grafana.

  ## Dashboards
  By deploying the Prometheus Monitoring Stack, the following platform applications and
  their respective dashboards are deployed. After deployment to clusters in a workspace,
  the dashboards are available to access from a respective cluster's detail page.

  ### Prometheus
  A software application for event monitoring and alerting. It records real-time
  metrics in a time series database built using a HTTP pull model, with flexible and
  real-time alerting.

  - [Prometheus Documentation - Overview](https://prometheus.io/docs/introduction/overview/)

  ### Prometheus Alertmanager
  A Prometheus component that enables you to configure and manage alerts sent by the
  Prometheus server and to route them to notification, paging, and automation systems.

  - [Prometheus Alertmanager Documentation - Overview](https://prometheus.io/docs/alerting/latest/alertmanager/)

  ### Grafana
  A monitoring dashboard from Grafana that can be used to visualize metrics collected
  by Prometheus.

  - [Grafana Documentation](https://grafana.com/docs/)
icon:
  PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCAzMDAgMzAwIiBzdHlsZT0iZW5hYm

Custom Application from the Workspace Catalog

Enable a Custom Application from the Workspace Catalog


After creating a GitRepository, you can either use the NKP UI or the CLI to enable your custom applications. To
deploy an application to selected clusters within a workspace, see Cluster-scoped Application Configuration from
the NKP UI on page 398.
From within a workspace, you can enable applications to deploy. Verify that an application has successfully deployed
through the CLI.

Enabling the Custom Application Using the UI

About this task


To enable the custom application using the UI:

Procedure

1. From the top menu bar, select your target workspace.

2. From the sidebar menu, select Applications to browse the available applications from your configured
repositories.

3. Select the three dot button on the required application tile and click Enable.

4. If available, select a version from the dropdown list.


This dropdown list will only be visible if there is more than one version.

5. (Optional) If you want to override the default configuration values, copy your customized values into the text
editor under Configure Service or upload your YAML file that contains the values.
someField: someValue

6. Confirm the details are correct, and then click Enable.


For all applications, you must provide a display name and an ID which is automatically generated based on what
you enter for the display name, unless or until you edit the ID directly. The ID must be compliant with Kubernetes
DNS subdomain name validation rules. For more information, see DNS Subdomain Names section in the
Kubernetes documentation.
Alternately, you can use the CLI to enable your catalog applications.

Enabling the Custom Application Using the CLI

Before you begin

• Determine the name of the workspace where you wish to perform the deployments. You can use the nkp get
workspaces command to see the list of workspace names and their corresponding namespaces.

• Set the WORKSPACE_NAMESPACE environment variable to the name of the workspace’s namespace where the
cluster is attached:
export WORKSPACE_NAMESPACE=<workspace_namespace>



To enable the custom application using the CLI:

Procedure

1. Get the list of available applications to enable using the following command.
kubectl get apps -n ${WORKSPACE_NAMESPACE}

2. Deploy one of the supported applications from the list with an AppDeployment resource.

3. Within the AppDeployment, define the appRef to specify which App to enable.
cat <<EOF | kubectl apply -f -
apiVersion: apps.kommander.d2iq.io/v1alpha3
kind: AppDeployment
metadata:
  name: my-custom-app
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  appRef:
    name: custom-app-0.0.1
    kind: App
EOF

Note:

• The appRef.name must match the app name from the list of available catalog applications.
• Create the resource in the workspace you just created, which instructs Kommander to deploy the
AppDeployment to the KommanderClusters in the same workspace.

Enabling the Custom Application With Custom Configuration Using the CLI

About this task


To enable the custom application with a custom configuration using the CLI:

Procedure

1. Provide the name of a ConfigMap in the AppDeployment, which provides custom configuration on top of the
default configuration.
cat <<EOF | kubectl apply -f -
apiVersion: apps.kommander.d2iq.io/v1alpha3
kind: AppDeployment
metadata:
  name: my-custom-app
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  appRef:
    name: custom-app-0.0.1
    kind: App
  configOverrides:
    name: my-custom-app-overrides
EOF

2. Create the ConfigMap with the name provided in the step above, with the custom configuration.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: ${WORKSPACE_NAMESPACE}
  name: my-custom-app-overrides
data:
  values.yaml: |
    someField: someValue
EOF
Kommander waits for the ConfigMap to be present before deploying the AppDeployment to the managed or
attached clusters.
Kommander waits for the ConfigMap to be present before deploying the AppDeployment to the managed or
attached clusters.

Verify the Custom Applications

After completing the previous steps, your applications are enabled.

Procedure
Connect to the attached cluster and check the HelmReleases to verify the deployment.
kubectl get helmreleases -n ${WORKSPACE_NAMESPACE}
The output is as follows.
NAMESPACE NAME READY STATUS AGE
workspace-test-vjsfq my-custom-app True Release reconciliation succeeded 7m3s

Configuring Workspace Role Bindings


Workspace Role Bindings grant access to specified Workspace Roles for a specified group of people.

Before you begin


Before you can create a Workspace Role Binding, ensure you have created a workspace Group. A Group can contain
one or several Identity Provider users, groups or both.

Note: The syntax for the Identity Provider groups you add to a NKP Group varies depending on the context for which
you have established an Identity Provider.

• If you have set up an identity provider globally, for All Workspaces:

• For groups: Add an Identity Provider Group in the oidc:<IdP_user_group> format. For
example, oidc:engineering.
• For users: Add an Identity Provider User in the <user_email> format. For example,
[email protected].

• If you have set up an identity provider for a Specific Workspace:

• For groups: Add an Identity Provider Group in the oidc:<workspace_name>:<IdP_user_group> format.
For example, oidc:tenant-z:engineering.

• For users: Add an Identity Provider User in the <workspace_ID>:<user_email> format. For
example, tenant-z:[email protected].
Run kubectl get workspaces to obtain a list of all existing workspaces. The workspace_ID is listed
under the NAME column.

About this task


You can assign a role to this Kommander Group:



Procedure

1. From the top menu bar, select your target workspace.

2. Select Access Control in the Administration section of the sidebar menu.

3. Select the Cluster Role Bindings tab, and then select Add Roles next to the group you want.

4. Select the Role, or Roles, you want from the dropdown list and click Save.
It will take a few minutes for the resource to be created.

Multi-Tenancy in NKP
You can use workspaces to manage your tenants' environments separately, while still maintaining control
over clusters and environments centrally. For example, if you operate as a Managed Service Provider
(MSP), you can manage your clients clusters' life cycles, resources, and applications. If you operate as an
environment administrator, you can these resources per department, division, employee group, etc.
Here are some important concepts:

• Multi-tenancy in NKP is an architecture model where a single NKP Ultimate instance serves multiple
organizations' divisions, customers, or tenants. In NKP, each tenant system is represented by a workspace. Each
workspace and its resources can be isolated from other workspaces (by using separate Identity Providers), even
though they all fall under a single Ultimate license.
Multi-tenant environments have at least two participating parties: the Ultimate license administrator (for example,
an MSP), and one or several tenants.
• Managed Service Providers or MSPs are partner organizations that use NKP to facilitate cloud infrastructure
services to their customers or tenants.

• Tenants can be customers of Managed Service Provider partners. They outsource their cloud management
requirements to MSPs, so they can focus on the development of their products.
Tenants can also be divisions within an organization that require a strict isolation from other divisions, for
example, through differentiated access control.
In NKP, a workspace is assigned to a tenant.

Access Control in Multi-Tenant Environments


To isolate each tenant’s information and environment, multi-tenancy allows you to configure an identity provider per
workspace or tenant. In this setup, NKP keeps all workspaces and tenants separate and isolated from each other.
You, as a global administrator, manage tenant access at the Workspace level. A tenant can further adapt user access at
the Project level.



Figure 11: Multi-tenant Cluster

Here are some important concepts:

• Workspaces: In a multi-tenant system, workspaces and tenants are synonymous. You can set up an identity
provider to control all workspaces, including the Management cluster’s kommander workspace. You can then set
up additional identity providers for each workspace/tenant, and generate a dedicated Login URL so each tenant
has its own user access.
For more information see, Generating a Dedicated Login URL for Each Tenant on page 423.
• Projects: After you set up an identity provider per workspace or tenant, the tenant can choose to further
narrow down access with an additional layer. A tenant can choose to organize clusters into projects and assign
differentiated access to user groups with Project Role Bindings.
For more information, see Project Role Bindings on page 447.
By assigning clusters to one or several projects, you can enable more complex user access.

Multi-Tenancy Enablement
To enable multi-tenancy, you must:



• If you want to use a single IdP to access all of your tenant’s environments, configure an Identity Provider globally.
• Configure an Identity Provider per workspace. This way, each tenant has a dedicated IdP to access their
workspace.
• Create NKP Identity Provider groups with the correct prefixes to map your existing IdP groups.
• Create a dedicated login URL for each tenant. You can provide a workspace login link to each tenant for access to
the NKP UI and for the generation of kubectl API access tokens.
To enforce security, every tenant should be in a different AWS account, so they are truly independent of each other.

Generating a Dedicated Login URL for Each Tenant


This page contains instructions on how to generate a workspace-specific URL to access the NKP UI.

About this task


By making this URL available to your tenant, you provide them with a dedicated login page, where users can enter
their SSO credentials to access their workspace in the NKP UI and to where users can create a token to access a
cluster’s kubectl API. Other tenants and their SSO configurations are not visible.

Before you begin

• Complete the steps in Multi-Tenancy in NKP on page 421.


• Ensure you have administrator permissions and access to all workspaces.

Procedure

1. Set an environment variable to point at the workspace for which you want to generate a URL:
Replace <name_target_workspace> with the workspace name. If you do not know the exact name of the
workspace, run kubectl get workspace to get a list of all workspace names.
export WORKSPACE_NAME=<name_target_workspace>

2. Generate an NKP UI login URL for that workspace.


echo https://$(kubectl get kommandercluster -n kommander host-cluster -o jsonpath='{ .status.ingress.address }')/token/landing/${WORKSPACE_NAME}
The output is as follows.
https://example.com/token/landing/<WORKSPACE_NAME>

3. Share the output login URL with your tenant, so users can start accessing their workspace from the NKP UI.

Note: The login page displays:

• Identity providers set globally.


• Identity providers set for that specific workspace.
The login page does not display any resources or workspaces for which the tenant has no permissions.

Projects
Multi-cluster Configuration Management



Projects support the management of ConfigMaps, continuous deployments, secrets, services, quotas, role-based
access control, and multi-tenant logging by leveraging federated resources. When a Project is created, NKP creates a
federated namespace that is propagated to the Kubernetes clusters associated with this Project.
Federation in this context means that a common configuration is pushed out from a central location (NKP) to all
Kubernetes clusters, or a pre-defined subset group, under NKP management. This pre-defined subset group of
Kubernetes clusters is called a Project.
Projects enable teams to deploy their configurations and services to clusters in a consistent way. Projects enable
central IT or a business unit to share their Kubernetes clusters among several teams. Using Projects, NKP leverages
Kubernetes Cluster Federation (KubeFed) to coordinate the configuration of multiple Kubernetes clusters.
Kommander allows a user to use labels to select, manually or dynamically, the Kubernetes clusters associated with a
Project.

Project Namespaces
Project Namespaces isolate configurations across clusters. Individual standard Kubernetes namespaces are
automatically created on all clusters belonging to the project. When creating a new project, you can customize
the name of the Kubernetes namespace that is created. The grouping of all of these individual standard Kubernetes
namespaces makes up the concept of a Project Namespace. A Project Namespace is a Kommander-specific concept.

Creating a Project Using the UI

About this task


When you create a Project, you must specify a Project Name, an optional Namespace Name, and a way for
Kommander to determine which Kubernetes clusters will be part of this project.
As mentioned previously, a Project Namespace corresponds to a Kubernetes Federated Namespace. By default, the
name of the namespace is auto-generated from the project name (first 57 characters) plus 5 unique alphanumeric
characters. You can specify a namespace name, but you must ensure it does not conflict with any existing namespace
on the target Kubernetes clusters that will be part of the Project.
To determine which Kubernetes clusters will be part of this project, you can either manually select existing clusters or
define labels that Kommander uses to dynamically add clusters. The latter is recommended because it allows you
to deploy additional Kubernetes clusters later and have them automatically associated with Projects based on
their labels.
To create a Project, you can either use the NKP UI or create a Project object on the Kubernetes cluster where
Kommander is running (using kubectl or the Kubernetes API). The latter allows you to configure Kommander
resources in a declarative way. It is available for all kinds of Kommander resources.
Here is an example of what it looks like to create a project using the NKP UI:

Procedure
Task step.

Creating a Project Using the CLI

About this task


The following sample is a YAML Kubernetes object for creating a Kommander Project. This example does not work
verbatim because it depends on a workspace name that has been previously created and does not exist by default in
your cluster.



Procedure
Use this as an example format and fill in the workspace name and namespace name appropriately along with the
proper labels.
apiVersion: workspaces.kommander.mesosphere.io/v1alpha1
kind: Project
metadata:
  name: my-project-name
  namespace: my-project-k8s-namespace-name
spec:
  workspaceRef:
    name: myworkspacename
  namespaceName: myworkspacename-di3tx
  placement:
    clusterSelector:
      matchLabels:
        cluster: prod
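For example, if you save this manifest to a file (the file name below is only illustrative), you can create the Project
with kubectl and then list Projects across all namespaces to confirm that it was created:
kubectl apply -f my-project.yaml
kubectl get projects -A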
The following procedures are supported for projects:

• Project Applications on page 425


• Project Deployments on page 441
• Project Role Bindings on page 447
• Project Roles on page 450
• Project ConfigMaps on page 453
• Project Secrets on page 454
• Project Quotas and Limit Ranges on page 455
• Project Network Policies on page 457

Project Applications
This section documents the applications and application types that you can utilize with NKP.
Application types are:

• Workspace Catalog Applications on page 406 are either pre-packaged applications from the Nutanix
Application Catalog or custom applications that you maintain for your teams or organization.

• NKP Applications on page 376 are applications that are provided by Nutanix and added to the Catalog.
• Custom Applications on page 414 are applications integrated into Kommander.
• Platform Applications on page 386
When deploying and upgrading applications, platform applications come as a bundle; they are tested as a single unit
and you must deploy or upgrade them in a single process for each workspace. This means all clusters in a workspace
have the same set and versions of platform applications deployed. Catalog applications, by contrast, are individual, so
you can deploy and upgrade them separately for each project.

Project Platform Applications


How project Platform applications work
The following table describes the list of applications that can be deployed to attached clusters within a project.
Review the Project Platform Application Configuration Requirements on page 429 to ensure that the attached
clusters in the project have sufficient resources.



From within a project, you can enable applications to deploy. Verify that an application has successfully deployed
through the CLI.

Platform Applications

Table 39: Platform Applications

Name                             APP ID                    Deployed by default
project-grafana-logging-6.57.4   project-grafana-logging   False
project-grafana-loki-0.69.16     project-grafana-loki      False
project-logging-1.0.3            project-logging           False

Enabling the Platform Application Using the UI

About this task


Follow these steps to enable a platform application for a project using the NKP UI:

Procedure

1. From the top menu bar, select your target workspace.

2. Select Projects from the sidebar menu.

3. Select your project from the list.

4. Select Applications tab to browse the available applications.

5. Select the three dot button from the bottom-right corner of the desired application tile, and then select Enable.

6. If you want to override the default configuration values, copy your customized values into the text editor under
Configure Service or upload your YAML file that contains the values.
someField: someValue

7. Confirm the details are correct, and then click Enable.


To use the CLI to enable or disable applications, see Deploying Platform Applications Using CLI on
page 389.

Warning: There may be dependencies between the applications, which are listed in Project Platform
Application Dependencies on page 428. Review them carefully prior to customizing to ensure that the
applications are deployed successfully.

Platform Applications Upgrade Using the CLI

Platform Applications within a Project are automatically upgraded when the Workspace that a Project
belongs to is upgraded.
For more information on how to upgrade these applications, see Ultimate: Upgrade Platform Applications on
Managed and Attached Clusters on page 1101.



Deploying Project Platform Applications Using the CLI

Deploy applications to attached clusters in a project using the CLI.

About this task


This topic describes how to use the CLI to deploy an application to attached clusters within a project.
For a list of all applications and those that are enabled by default, see Project Platform Applications.

Before you begin


Ensure that you have:

• A running cluster with Kommander installed.


• An existing Kubernetes cluster attached to Kommander.
• Set the WORKSPACE_NAME environment variable to the name of the workspace where the cluster is attached.
export WORKSPACE_NAME=<workspace_name>

• Set the WORKSPACE_NAMESPACE environment variable to the namespace of the above workspace.
export WORKSPACE_NAMESPACE=$(kubectl get namespace --selector="workspaces.kommander.mesosphere.io/workspace-name=${WORKSPACE_NAME}" -o jsonpath='{.items[0].metadata.name}')

• Set the PROJECT_NAME environment variable to the name of the project in which the cluster is included:
export PROJECT_NAME=<project_name>

• Set the PROJECT_NAMESPACE environment variable to the name of the above project's namespace:
export PROJECT_NAMESPACE=$(kubectl get project ${PROJECT_NAME} -n ${WORKSPACE_NAMESPACE} -o jsonpath='{.status.namespaceRef.name}')

Procedure

1. Deploy one of the supported applications to your existing attached cluster with an AppDeployment resource.
Provide the appRef and application version to specify which App is deployed.
nkp create appdeployment project-grafana-logging --app project-grafana-logging-6.38.1 --workspace ${WORKSPACE_NAME} --project ${PROJECT_NAME}

2. Create the resource in the project you just created, which instructs Kommander to deploy the AppDeployment to
the KommanderClusters in the same project.

Note:

• The appRef.name must match the app name from the list of available catalog applications.
• Observe that the nkp create command must be run with both the --workspace and --project
flags for project platform applications.
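To confirm that the AppDeployment resource was created, you can list AppDeployments and filter for the
application name; the namespace it is created in depends on your workspace and project setup, so the cluster-wide
listing below is only a convenient sketch:
kubectl get appdeployments -A | grep project-grafana-logging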

Deploying the Project Platform Application With Custom Configuration Using the CLI

About this task


To perform custom configuration using the CLI:



Procedure

1. Create the AppDeployment and provide the name of a ConfigMap, which provides custom configuration on top
of the default configuration.
nkp create appdeployment project-grafana-logging --app project-grafana-logging-6.38.1 --config-overrides project-grafana-logging-overrides --workspace ${WORKSPACE_NAME} --project ${PROJECT_NAME}

2. Create the ConfigMap with the name provided in the step above, with the custom configuration.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: ${PROJECT_NAMESPACE}
  name: project-grafana-logging-overrides
data:
  values.yaml: |
    datasources:
      datasources.yaml:
        apiVersion: 1
        datasources:
        - name: Loki
          type: loki
          url: "http://project-grafana-loki-loki-distributed-gateway"
          access: proxy
          isDefault: false
EOF
Kommander waits for the ConfigMap to be present before deploying the AppDeployment to the managed or
attached clusters.
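As a quick check, you can confirm that the override ConfigMap exists in the project namespace, for example:
kubectl get configmap project-grafana-logging-overrides -n ${PROJECT_NAMESPACE}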

Verify the Project Platform Applications

After completing the previous steps, your applications are enabled.

Procedure

1. Export the PROJECT_NAMESPACE environment variable with this command.


export PROJECT_NAMESPACE=<project_namespace>

2. Connect to the attached cluster and check the HelmReleases to verify the deployment.
kubectl get helmreleases -n ${PROJECT_NAMESPACE}
NAMESPACE            NAME                      READY   STATUS                             AGE
project-test-vjsfq   project-grafana-logging   True    Release reconciliation succeeded   7m3s

Note: Some of the supported applications have dependencies on other applications. See Project Platform
Application Dependencies on page 428 for that table.

Project Platform Application Dependencies

Dependencies between project platform applications.


There are many dependencies between the applications that are deployed to a project’s attached clusters. It is
important to note these dependencies when customizing the platform applications to ensure that your services are



properly deployed to the clusters. For more information on how to customize platform applications, see Project
Platform Applications on page 425.

Application Dependencies
When deploying or troubleshooting applications, it helps to understand how applications interact and may require
other applications as dependencies.
If an application’s dependency does not successfully deploy, the application requiring that dependency does not
successfully deploy.
The following sections detail information about the platform applications.

Logging
Collects logs over time from Kubernetes pods deployed in the project namespace. Also provides the ability to
visualize and query the aggregated logs.

• project-logging: Defines resources for the Logging Operator which uses them to direct the project’s logs to its
respective Grafana Loki application. For more information, see https://grafana.com/oss/grafana/.
• project-grafana-loki: A horizontally-scalable, highly-available, multi-tenant log aggregation system inspired by
Prometheus. For more information, see https://grafana.com/oss/loki/.
• project-grafana-logging: Logging dashboard used to view logs aggregated to Grafana Loki. For more
information, see https://grafana.com/oss/grafana/.

Warning: The project logging applications depend on the workspace logging applications being deployed. See
Enabling Logging Applications Using the UI on page 566.

Table 40: Project Platform Application Dependencies

Application               Required Dependencies
project-logging           logging-operator (workspace)
project-grafana-loki      project-logging, grafana-loki (workspace), logging-operator (workspace)
project-grafana-logging   project-grafana-loki

Project Platform Application Configuration Requirements

Project Platform Application Descriptions and Resource Requirements


Platform applications require more resources than solely deploying or attaching clusters into a project. Your cluster
must have sufficient resources when deploying or attaching to ensure that the applications are installed successfully.
The following table describes all the platform applications that are available to the clusters in a project, their
minimum resource and persistent storage requirements, and whether they are enabled by default.



Table 41: Project Platform Application Configuration Requirements

Name                      Minimum Resources Suggested   Minimum Persistent Storage Required             Deployed by Default   Default Priority Class
project-grafana-logging   cpu: 200m, memory: 100Mi      -                                               No                    NKP Critical (100002000)
project-grafana-loki      -                             # of PVs: 3; PV sizes: 10Gi x 3 (total: 30Gi)   No                    NKP Critical (100002000)
project-logging           -                             -                                               No                    NKP Critical (100002000)

Project Catalog Applications


Catalog applications are any third-party or open source applications that appear in the Catalog. These
can be NKP applications provided by Nutanix for use in your environment, or third-party applications that
can be used but are not supported by Nutanix.
For more information, see:

• Project Catalog Applications on page 430


• Project-level NKP Applications on page 431
• Usage of Custom Resources with Workspace Catalog Applications on page 434
• Custom Project Applications on page 434

Upgrading Project Catalog Applications Using the UI

Before upgrading your catalog applications, verify the current and supported versions of the application.
Also, keep in mind the distinction between Platform applications and Catalog applications. Platform
applications are deployed and upgraded as a set for each cluster or workspace. Catalog applications are
deployed separately, so that you can deploy and upgrade them individually for each project.

About this task


Catalog applications must be upgraded to the latest version BEFORE upgrading the Konvoy component for Managed
clusters or Kubernetes version for attached clusters.
To upgrade an application from the NKP UI:

Procedure

1. From the top menu bar, select your target workspace.

2. From the side menu bar, select Projects.

3. Select your target project.

4. Select Applications from the project menu bar.

5. Select the three dot button from the bottom-right corner of the desired application tile, and then click Edit.



6. Select the Version dropdown list, and select a new version. This dropdown list will only be available if there is a
newer version to upgrade to.

7. Click Save.

Upgrading Project Catalog Applications Using the CLI

About this task


To upgrade project catalog applications:

Procedure

1. To see what app(s) and app versions are available to upgrade, run the following command:

Note: The APP ID column displays the available apps and the versions available to upgrade.

kubectl get apps -n ${PROJECT_NAMESPACE}

2. Run the following command to upgrade an application from the NKP CLI.
nkp upgrade catalogapp <appdeployment-name> --workspace=my-workspace --project=my-project --to-version=<version.number>
As an example, the following command upgrades the Kafka Operator application, named
kafka-operator-abc, in a workspace to version 0.25.1.
nkp upgrade catalogapp kafka-operator-abc --workspace=my-workspace --to-version=0.25.1

Note: Platform applications cannot be upgraded on a one-off basis, and must be upgraded in a single process for
each workspace. If you attempt to upgrade a platform application with these commands, you receive an error and
the application is not upgraded.

Project-level NKP Applications

NKP applications are catalog applications provided by Nutanix for use in your environment.
Some NKP workspace catalog applications will provision CustomResourceDefinitions, which allow you
to deploy Custom Resources to a Project. See your NKP workspace catalog application’s documentation for
instructions.

Deploying ZooKeeper in a Project

To get started with creating ZooKeeper clusters in your project namespace, you first need to deploy the
Zookeeper operator in the workspace where the project exists.

About this task


After you deploy the ZooKeeper operator, you can create ZooKeeper Clusters by applying a ZookeeperCluster
custom resource on each attached cluster in a project’s namespace.
A Helm chart exists in the ZooKeeper operator repository that can assist with deploying ZooKeeper clusters. For
more information, see https://github.com/pravega/zookeeper-operator/tree/master/charts/zookeeper.

Note: If you need to manage these custom resources across all clusters in a project, it is recommended you use
Project Deployments on page 441 which enables you to leverage GitOps to deploy the resources. Otherwise, you
will need to create the resources manually in each cluster.



Follow these steps to deploy a ZooKeeper cluster in a project namespace. This procedure results in a running
ZooKeeper cluster, ready for use in your project’s namespace.

Before you begin


You must first deploy the Zookeeper operator. For more information, see Zookeeper Operator in Workspace on
page 408.

Procedure

1. Set the PROJECT_NAMESPACE environment variable to the name of your project’s namespace.
export PROJECT_NAMESPACE=<project namespace>

2. Create a ZooKeeper Cluster custom resource in your project namespace.


kubectl apply -f - <<EOF
apiVersion: zookeeper.pravega.io/v1beta1
kind: ZookeeperCluster
metadata:
  name: zookeeper
  namespace: ${PROJECT_NAMESPACE}
spec:
  replicas: 1
EOF

3. Check the status of your ZooKeeper cluster using kubectl.


kubectl get zookeeperclusters -n ${PROJECT_NAMESPACE}
NAME        REPLICAS   READY REPLICAS   VERSION   DESIRED VERSION   INTERNAL ENDPOINT    EXTERNAL ENDPOINT   AGE
zookeeper   1          1                0.2.15    0.2.15            10.100.200.18:2181   N/A                 94s
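You can also list the pods in the project namespace to watch the ZooKeeper members come up; with the pravega
operator, the pods are typically named after the ZookeeperCluster resource (for example, zookeeper-0), although the
exact naming can vary by operator version:
kubectl get pods -n ${PROJECT_NAMESPACE}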

Deleting Zookeeper in a Project

About this task


To delete the Zookeeper clusters:

Procedure

1. View ZookeeperClusters in all namespaces.


kubectl get zookeeperclusters -A

2. Delete a specific ZookeeperCluster.


kubectl -n ${PROJECT_NAMESPACE} delete zookeepercluster <name of zookeepercluster>

Deploying Kafka in a Project

After you deploy the Kafka operator, you can create Kafka clusters by applying a KafkaCluster custom
resource on each attached cluster in a project’s namespace.

About this task


Refer to the Kafka operator repository for examples of the custom resources and their configurations. For more
information, see https://github.com/banzaicloud/koperator/tree/master/config/samples.



Before you begin
To get started with creating and managing a Kafka Cluster in a project, you must:

• Deploy the Kafka operator in the workspace where the project exists. See Kafka Operator in a Workspace on
page 407.
• Deploy the ZooKeeper operator in the workspace where the project exists. See Zookeeper Operator in
Workspace on page 408.
• Deploy a ZooKeeper cluster in the same project where you want to enable Kafka. See Deploying ZooKeeper in a
Project on page 431.

Note: If you need to manage these custom resources across all clusters in a project, it is recommended you use project
deployments which enables you to leverage GitOps to deploy the resources. Otherwise, you must create the custom
resources manually in each cluster.

Procedure

1. Ensure you deployed Zookeeper in a project.


See Deploying ZooKeeper in a Project on page 431.

2. Set the PROJECT_NAMESPACE environment variable to the name of your project’s namespace.
export PROJECT_NAMESPACE=<project namespace>

3. Obtain the Kafka Operator version you deployed in the workspace.


Replace <target_namespace> with the namespace where you deployed Kafka.
kubectl get appdeployments.apps.kommander.d2iq.io -n <target_namespace> -o template="{{ .spec.appRef.name }}" kafka-operator
The output prints the Kafka Operator version.

4. Use the Kafka Operator version to download the simplekafkacluster.yaml file you require.
In the following URL, replace /v0.25.1/ with the Kafka version you obtained in the previous step and
download the file.
https://raw.githubusercontent.com/banzaicloud/koperator/v0.25.1/config/samples/simplekafkacluster.yaml
To use a CVE-free Kafka image, set the clusterImage value to ghcr.io/banzaicloud/kafka:2.13-3.4.1
(similar to the workspace installation).

5. Open and edit the downloaded file to use the correct Zookeeper Cluster address.
Replace <project_namespace> with the target project namespace.
zkAddresses:
- "zookeeper-client.<project_namespace>:2181"

6. Apply the KafkaCluster configuration to your project’s namespace.


kubectl apply -n ${PROJECT_NAMESPACE} -f simplekafkacluster.yaml

7. Check the status of your Kafka cluster using kubectl.


kubectl -n ${PROJECT_NAMESPACE} get kafkaclusters
The output should look similar to this.
NAME    CLUSTER STATE    CLUSTER ALERT COUNT    LAST SUCCESSFUL UPGRADE    UPGRADE ERROR COUNT    AGE
kafka   ClusterRunning   0                                                 0                      79m
With both the ZooKeeper cluster and the Kafka cluster running in your project’s namespace, refer to the Kafka
Operator documentation for information on how to test and verify that they are working as expected. When
performing those steps, ensure you substitute zookeeper-client.<project namespace>:2181 anywhere the
ZooKeeper client address is mentioned.
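As an optional smoke test, you could create a topic through the operator's KafkaTopic custom resource; the field
names below follow the upstream koperator samples and the topic name is arbitrary, so treat this as a sketch rather
than a supported procedure:
kubectl apply -n ${PROJECT_NAMESPACE} -f - <<EOF
apiVersion: kafka.banzaicloud.io/v1alpha1
kind: KafkaTopic
metadata:
  name: test-topic
spec:
  clusterRef:
    name: kafka
  name: test-topic
  partitions: 1
  replicationFactor: 1
EOF
kubectl get kafkatopics -n ${PROJECT_NAMESPACE}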

Deleting Kafka in a Project

About this task


To delete the Kafka custom resources:

Procedure

1. View all Kafka resources in the cluster.


kubectl get kafkaclusters -A
kubectl get kafkausers -A
kubectl get kafkatopics -A

2. Delete a KafkaCluster example.


kubectl -n ${PROJECT_NAMESPACE} delete kafkacluster <name of KafkaCluster>

Usage of Custom Resources with Workspace Catalog Applications

Some workspace catalog applications provision CustomResourceDefinitions, which allow you to
deploy Custom Resources. Refer to your workspace catalog application’s documentation for instructions.

Custom Project Applications

Custom applications are third-party applications you have added to the Kommander Catalog.
Custom applications are any third-party applications that are not provided in the NKP Application Catalog. Custom
applications can leverage applications from the NKP Catalog or be fully customized. There is no expectation of
support by Nutanix for a custom application. Custom applications can be deployed on Konvoy clusters or on any
Nutanix-supported third-party Kubernetes distribution.

• Projects: Git Repository Structure on page 434


• Project: Workspace Application Metadata on page 437
• Enabling a Custom Application From the Project Catalog Using the UI on page 438 and Enabling a
Custom Application From the Project Catalog Using the CLI on page 439

Projects: Git Repository Structure

Git repositories must be structured in a specific manner for defined applications to be processed by
Kommander.
You must structure your Git repository based on the following guidelines so that Kommander can process your
applications properly and deploy them.



Git Repository Directory Structure
Use the following basic directory structure for your git repository.
├── helm-repositories
│   ├── <helm repository 1>
│   │   ├── kustomization.yaml
│   │   └── <helm repository name>.yaml
│   └── <helm repository 2>
│       ├── kustomization.yaml
│       └── <helm repository name>.yaml
└── services
    ├── <app name>
    │   ├── <app version1>          # semantic version of the app helm chart, e.g., 1.2.3
    │   │   ├── defaults
    │   │   │   ├── cm.yaml
    │   │   │   └── kustomization.yaml
    │   │   ├── <app name>.yaml
    │   │   └── kustomization.yaml
    │   ├── <app version2>          # another semantic version of the app helm chart, e.g., 2.3.4
    │   │   ├── defaults
    │   │   │   ├── cm.yaml
    │   │   │   └── kustomization.yaml
    │   │   ├── <app name>.yaml
    │   │   └── kustomization.yaml
    │   └── metadata.yaml
    ├── <another app name>
    ...
Remember the following guidelines:

• Define applications in the services/ directory.


• You can define multiple versions of an application, under different directories nested under the services/<app
name>/ directory.

• Define application manifests, such as a HelmRelease, under each versioned directory, in services/<app name>/
<version>/<app name>.yaml, which is listed in the kustomization.yaml Kubernetes Kustomization file.
For more information, see https://fluxcd.io/docs/components/helm/helmreleases/ and
https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/.
• Define the default values ConfigMap for HelmReleases in the services/<app name>/<version>/
defaults directory, accompanied by a kustomization.yaml Kubernetes Kustomization file pointing to the
ConfigMap file.

• Define the metadata.yaml of each application under the services/<app name>/ directory. For more
information, see Workspace Application Metadata on page 416.
For an example of how to structure custom catalog Git repositories, see the NKP Catalog repository at
https://github.com/mesosphere/nkp-catalog-applications.
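For illustration, the defaults directory for a version of an application could look like the following sketch; the
ConfigMap name is arbitrary, and the values.yaml key carries the default Helm values for that application:
# services/<app name>/<version>/defaults/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - cm.yaml

# services/<app name>/<version>/defaults/cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: <app name>-defaults
data:
  values.yaml: |
    someField: someValue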

Helm Repositories
You must include the HelmRepository that is referenced in each HelmRelease's Chart spec.
Each services/<app name>/<version>/kustomization.yaml must include the path of the YAML file that
defines the HelmRepository. For example.
# services/<app name>/<version>/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - <app name>.yaml
  - ../../../helm-repositories/<helm repository 1>
For more information, see the flux documentation at:

• HelmRepositories: https://fluxcd.io/docs/components/source/helmrepositories/
• Manage Helm Releases: https://fluxcd.io/flux/guides/helmreleases/

Substitution Variables
Some substitution variables are provided. For more information, see
https://fluxcd.io/docs/components/kustomize/kustomization/#variable-substitution.

• ${releaseName}: For each application deployment, this variable is set to the AppDeployment name. Use this
variable to prefix the names of any resources that are defined in the application directory in the Git repository
so that multiple instances of the same application can be deployed. If you create resources without using the
releaseName prefix (or suffix) in the name field, there can be conflicts if the same named resource is created in
that same namespace.
• ${releaseNamespace}: The namespace of the project.

• ${workspaceNamespace}: The namespace of the workspace that the project belongs to.
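For illustration, a HelmRelease defined in services/<app name>/<version>/<app name>.yaml might use these
variables as follows; the chart name, version, and HelmRepository reference are placeholders, so treat this as a sketch
rather than a prescribed layout:
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: ${releaseName}
  namespace: ${releaseNamespace}
spec:
  interval: 10m
  chart:
    spec:
      chart: <chart name>
      version: <chart version>
      sourceRef:
        kind: HelmRepository
        name: <helm repository 1>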

Project: Creating a Git Repository

Use the CLI to create the GitRepository resource and add a new repository to your Project.

About this task


To create a Git repository in the Project namespace:

Procedure

1. If you are running in an air-gapped environment, refer to the setup instructions in Installing Kommander in an
Air-gapped Environment on page 965.

2. Set the PROJECT_NAMESPACE environment variable to the name of your project's namespace.
export PROJECT_NAMESPACE=<project_namespace>

3. Create the GitRepository resource, adapting the URL and branch to your Git repository.


kubectl apply -f - <<EOF
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: example-repo
  namespace: ${PROJECT_NAMESPACE}
spec:
  interval: 1m0s
  ref:
    branch: <your-target-branch-name> # e.g., main
  timeout: 20s
  url: https://github.com/<example-org>/<example-repo>
EOF



4. Ensure the status of the GitRepository signals a ready state.
kubectl get gitrepository example-repo -n ${PROJECT_NAMESPACE}
The repository commit also displays the ready state.
NAME           URL                                            READY   STATUS                                                               AGE
example-repo   https://github.com/example-org/example-repo   True    Fetched revision: master/6c54bd1722604bd03d25dcac7a31c44ff4e03c6a   11m
For more information on the GitRepository resource fields and how to make Flux aware of credentials required
to access a private Git repository, see the Flux documentation at
https://fluxcd.io/flux/components/source/gitrepositories/#secret-reference.

Note: To troubleshoot issues with adding the GitRepository, review the following logs:
kubectl -n kommander-flux logs -l app=source-controller
[...]
kubectl -n kommander-flux logs -l app=kustomize-controller
[...]
kubectl -n kommander-flux logs -l app=helm-controller
[...]

Project: Workspace Application Metadata

You can define how custom applications display in the NKP UI by defining a metadata.yaml file for each
application in the git repository. You must define this file at services/<application>/metadata.yaml for
it to be processed correctly.

Note: To display more information about custom applications in the UI, define a metadata.yaml file for each
application in the Git repository.

You can define the following fields:

Table 42: Workspace Application Metadata

Field         Default                Description
displayName   falls back to App ID   Display name of the application for the UI.
description   ""                     Short description, should be a sentence or two, displayed in the UI on the application card.
category      general                1 or more categories for this application. Categories are used to group applications in the UI.
overview      -                      Markdown overvi