
NSX Administration Guide

Modified on 06 JAN 2023


VMware NSX 4.0

You can find the most up-to-date technical documentation on the VMware website at:

https://docs.vmware.com/

VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com

© Copyright 2017-2023 VMware, Inc. All rights reserved. Copyright and trademark information.

Contents

About Administering VMware NSX 17

1 NSX Manager 18
Security of NSX Manager 21
License Enforcement in NSX Manager 22
View Monitoring Dashboards 24

2 Tier-0 Gateways 27
Add a Tier-0 Gateway 28
Create an IP Prefix List 34
Create a Community List 35
Configure a Static Route 36
Create a Route Map 37
Using Regular Expressions to Match Community Lists When Adding Route Maps 39
Configure BGP 40
Configure OSPF 47
Configure BFD 50
Configure Multicast 51
Configure IPv6 Layer 3 Forwarding 51
Create SLAAC and DAD Profiles for IPv6 Address Assignment 52
State Synchronization of Tier-0 Gateways 53
Changing the HA Mode of a Tier-0 Gateway 54
Tier-0 VRF Gateways 54
Deploy VRF-Lite with BGP 56
VRF Route Leaking 58
Configure the ARP Limit of a Tier-0 or Tier-1 Gateway or Logical Router 62
Stateful Services on Tier-0 and Tier-1 62
Key Concepts Stateful Services 63
Supported Topologies 66
Configure Failure Domains 68
Configure Stateful Services on Tier-0 and Tier-1 Gateways 70
Understanding Traffic Flows 73

3 Tier-1 Gateway 76
Add a Tier-1 Gateway 76
State Synchronization of Tier-1 Gateways 79

4 Segments 81


Segment Profiles 81
Understanding QoS Segment Profile 83
Understanding IP Discovery Segment Profile 85
Understanding SpoofGuard Segment Profile 88
Understanding Segment Security Segment Profile 90
Understanding MAC Discovery Segment Profile 91
Add a Segment 93
Edge Bridging: Extending Overlay Segments to VLAN 97
Configure an Edge VM for Bridging 99
Create an Edge Bridge Profile 101
Extend an Overlay Segment to a VLAN or a Range of VLANs 104
Add a Metadata Proxy Server 105
Distributed Port Groups 106

5 DHCP 108
Configure DHCP Service 111
DHCP Configuration Settings: Reference 111
Configure Segment DHCP Server on a Segment 120
Configure Gateway DHCP Server on a Segment 122
Configure DHCP Relay on a Segment 124
Attach a DHCP Profile to a Tier-0 or Tier-1 Gateway 127
View Gateway DHCP Statistics 127
View Segment DHCP Statistics 129
Scenarios: Selection of Edge Cluster for DHCP Service 129
Scenarios: Impact of Changing Segment Connectivity on DHCP 134

6 Host Switches 137


Managing NSX on a vSphere Distributed Switch 137
Configuring a vSphere Distributed Switch 138
Managing NSX Distributed Virtual Port Groups 140
NSX Cluster Prepared with VDS 141
APIs to Configure vSphere Distributed Switch on NSX 142
Feature Support in a vSphere Distributed Switch Enabled to Support NSX 145
License for vSphere Distributed Switch 148
Enhanced Datapath 148
Automatically Assign ENS Logical Cores 148
Configure Guest Inter-VLAN Routing 149
Receive Side Scaling 152

7 Virtual Private Network (VPN) 158


Understanding IPSec VPN 159


Using Policy-Based IPSec VPN 159


Using Route-Based IPSec VPN 161
Understanding Layer 2 VPN 162
Enable and Disable L2 VPN Path MTU Discovery 164
Adding VPN Services 165
Add an IPSec VPN Service 166
Add an L2 VPN Service 168
Adding IPSec VPN Sessions 171
Using Certificate-Based Authentication for IPSec VPN Sessions 171
Add a Policy-Based IPSec Session 172
Add a Route-Based IPSec Session 176
About Supported Compliance Suites 180
Adding L2 VPN Sessions 181
Add an L2 VPN Server Session 181
Add an L2 VPN Client Session 184
Download the Remote Side L2 VPN Configuration File 185
Add Local Endpoints 187
Adding Profiles 189
Add IKE Profiles 189
Add IPSec Profiles 192
Add DPD Profiles 194
Add an Autonomous Edge as an L2 VPN Client 195
Check the Realized State of an IPSec VPN Session 198
Understanding TCP MSS Clamping 201
Troubleshooting VPN Problems 202
Monitor and Troubleshoot VPN Sessions 202
Alarms When an IPsec VPN Session or Tunnel Is Down 202

8 Network Address Translation (NAT) 205


Configure NAT/DNAT/No SNAT/No DNAT/Reflexive NAT 207
Configure NAT64 210
NAT and Gateway Firewall 212

9 NSX Advanced Load Balancer (Avi) 214

10 Load Balancer 215


Key Load Balancer Concepts 216
Scaling Load Balancer Resources 216
Supported Load Balancer Features 217
Load Balancer Topologies 218
Setting Up Load Balancer Components 220


Add Load Balancers 220


Add an Active Monitor 222
Add a Passive Monitor 225
Add a Server Pool 227
Setting Up Virtual Server Components 230
Groups Created for Server Pools and Virtual Servers 262

11 Distributed Load Balancer 263


Understanding Traffic Flow with a Distributed Load Balancer 265
Create and Attach a Distributed Load Balancer Instance 266
Create a Server Pool for Distributed Load Balancer 267
Create a Virtual Server with a Fast TCP or UDP Profile 269
Verifying Distributed Load Balancer Configuration on ESXi Hosts 270
Distributed Load Balancer Statistics and Diagnostics 272
Distributed Load Balancer Operational Status 274
Run Traceflow on Distributed Load Balancer 277
Supported Features 278

12 Ethernet VPN (EVPN) 279


Overview of BGP EVPN 279
EVPN Support in NSX 282
EVPN Inline Mode 284
EVPN Inline Mode Configuration Workflow 285
EVPN Route Server Mode 290
EVPN Route Server Mode Configuration Workflow 291

13 Forwarding Policies 299


Add or Edit Forwarding Policies 300

14 IP Address Management (IPAM) 302


Add a DNS Zone 302
Add a DNS Forwarder Service 303
Add an IP Address Pool 304
Add an IP Address Block 304

15 Networking Settings 306


Configuring Multicast 306
Create an IGMP Profile 308
Create a PIM Profile 308
About IGMP Join 309
Add an EVPN/VXLAN VNI Pool 310


Configure Global Gateway Settings 310


Add a Gateway QoS Profile 311
Add a BFD Profile 312
Add a DHCP Profile 312
Add a DHCP Server Profile 312
Add a DHCP Relay Profile 315

16 Security 316
Firewall Rule Enforcement 317
Security Overview 317
NSX Guest Introspection Platform 323
NSX Guest Introspection Platform Architecture 323
NSX Guest Introspection Platform Use Cases 325
Installing Host Components 326
Installing Guest Components 326
Supported File Systems for Guest VMs 336
Logging and Troubleshooting Host Components 337
Logging and Troubleshooting Guest Components 342
Collect Environment and Workload Details 348
Supported Software 349
Security Monitoring 349
Using vRealize Log Insight for Unified Security Logs 349
Monitoring Security Statistics 352
Security Terminology 353
Identity Firewall 353
Identity Firewall Workflow 354
Layer 7 Context Profile 357
Layer 7 Firewall Rule Workflow 358
Distributed Firewall 359
FQDN Filtering 359
Firewall Drafts 361
Configure Malicious IP Feeds 363
Malicious IPs Filtering and Analysis Dashboard 364
Add a Distributed Firewall 365
Distributed Firewall Packet Logs 369
Manage a Firewall Exclusion List 372
Extending Security Policies to Physical Workloads 373
Shared Address Sets 380
Export or Import a Firewall Configuration 380
Gateway Firewall 381
Supported Gateway Firewall Features on NSX Edge 382


Gateway Firewall Settings 383


Add a Gateway Firewall Policy and Rule 383
TLS Inspection 386
URL Filtering 398
FQDN Analysis 399
Gateway Firewall Packet Logs 401
Distributed Security for vSphere Distributed Switch 403
Install Distributed Security for vSphere Distributed Switch 405
Endpoint Protection 406
Understand Endpoint Protection 406
Configure Endpoint Protection 408
Manage Endpoint Protection 420
East-West Network Security - Chaining Third-party Services 435
Key Concepts of East-West Network Protection 435
NSX Requirements for East-West Traffic 436
High-Level Tasks for East-West Network Security 438
Deploy a Service for East-West Traffic Introspection 438
Add Redirection Rules for East-West Traffic 440
Exclude Members from a Security Service 442
Get List of Service Paths 443
Uninstall an East-West Traffic Introspection Service 444
Upgrade East-West Service VM 445
North-South Network Security - Inserting Third-party Service 449
High-Level Tasks for North-South Network Security 449
Deploy a Service for North-South Traffic Introspection 450
Add Redirection Rules for North-South Traffic 452
Uninstall a North-South Traffic Introspection Service 453
Update Service Insertion Status 453
Upgrade North-South Service VM 454
Network Introspection Settings 461
Add a Service Segment 461
Add a Service Profile 462
Add a Service Chain 462
NSX IDS/IPS and NSX Malware Prevention 463
Getting Started with NSX IDS/IPS and NSX Malware Prevention 464
Offline Downloading and Uploading NSX Intrusion Detection Signatures 482
Adding Security Profiles 486
Using NSX IDS/IPS and NSX Malware Prevention on a Distributed Firewall 488
Using NSX IDS/IPS and NSX Malware Prevention on a Gateway Firewall 509
Distributed IDS/IPS Logs 516
Monitoring File Events 519


Monitoring IDS/IPS Events 532


Administering NSX Malware Prevention 536
Troubleshooting NSX Malware Prevention 537
NSX Network Detection and Response 547
Getting Started with NSX Network Detection and Response 547
Working with the NSX Network Detection and Response Application 552
Administering NSX Network Detection and Response 641
Troubleshooting NSX Network Detection and Response 642
Time-Based Firewall Policy 651
Troubleshooting Firewall 653
Monitor and Troubleshoot Firewall on NSX Manager 653
Troubleshooting Distributed Firewall on ESX Hosts 653
Troubleshooting Gateway Firewall 663
Check Rule Realization Status 667
Distributed Firewall Packet Logs 669
Bare Metal Server Security 673
General Security Settings 674
Private IP Ranges 674
Firewall General Settings 674
Identity Firewall Event Log Sources 681
URL Database 681

17 Inventory 683
Add a Service 683
Add a Group 684
Overview of Group Membership Criteria 688
Profiles 690
Context Profiles 690
L7 Access Profiles 692
Attribute Types 693
App IDs 693
FQDNs 697
Custom URLs 698
URL Categories 702
Containers 702
Public Cloud Services 703
Physical Servers 704
Tags 704
Add Tags to an Object 708
Add a Tag to Multiple Objects 709
Unassign Tags from an Object 710


Unassign a Tag from Multiple Objects 711

18 Multisite and NSX Federation 712


NSX Multisite 713
Working with VMware Site Recovery Manager and Multisite Environments 727
NSX Federation 727
Overview of NSX Federation 728
Networking in NSX Federation 742
Security in NSX Federation 759
Traceflow in Federation 775
Prevent Password Lockout on Local Manager Nodes 778
Backup and Restore in NSX Federation 779
Disaster Recovery for Global Manager 781
Working with Site Recovery Manager and Federation 783
Network Recovery for Local Managers 785

19 Multi-tenancy 787
Orgs and Projects 787
Resource Sharing 790
Groups and Distributed Firewall 791
Users and Roles 794
Feature Support 796

20 System Monitoring 798


Monitor NSX Edge Nodes and Gateways 798
APIs to Fetch Time-Series Metrics 801
Dynamic Plugins 806
Working with Events and Alarms 816
View Alarm Information 816
View Alarm Definitions 817
Configuring Alarm Definition Settings 819
Managing Alarm States 820
Registering Notification Watchers 821
Using Log Insight or Splunk for System Monitoring 823
Using vRealize Operations Manager for System Monitoring 829
Using vRealize Network Insight Cloud for System Monitoring 833

21 Network Monitoring 844


Add an IPFIX Collector 844
Add a Firewall IPFIX Profile 845
Add a Switch IPFIX Profile 845


IPFIX Monitoring on a vSphere Distributed Switch 847


Add a Port Mirroring Session 848
Port Mirroring on a vSphere Distributed Switch 851
Perform a Traceflow 852
Simple Network Management Protocol (SNMP) 855
Network Latency Statistics 856
Measure Network Latency Statistics 860
Export Network Latency Statistics 862
Monitoring Tools in Manager Mode 864
View Port Connection Information in Manager Mode 864
Traceflow 864
Monitor Port Mirroring Sessions in Manager Mode 868
Configure Filters for a Port Mirroring Session 871
Configure IPFIX in Manager Mode 872
Monitor a Logical Switch Port Activity in Manager Mode 883
Checking CPU Usage and Network Latency 884
Live Traffic Analysis 885
Create a Live Traffic Analysis Session 887

22 Authentication and Authorization 890


Managing Local User Accounts 891
Activate a Local User in NSX Manager 891
Manage Local User Accounts 892
Manage Local User’s Password or Name Using the CLI 894
Password Management 896
Authentication Policy Settings 900
Integration with VMware Identity Manager/Workspace ONE Access 905
Time Synchronization between NSX Manager, vIDM, and Related Components 905
Obtain the Certificate Thumbprint from a vIDM Host 906
Configure VMware Identity Manager/Workspace ONE Access Integration 907
Validate VMware Identity Manager™ Functionality 909
Integration with LDAP 911
LDAP Identity Source 912
NSX API Authentication Using a Session Cookie 914
Add a Role Assignment or Principal Identity 918
Role-Based Access Control 921
Create or Manage Custom Roles 934
Configuring Both vIDM and LDAP or Transitioning from vIDM to LDAP 936
Logging User Account Changes 936

23 Certificates 938


Types of Certificates 938


Certificates for NSX Federation 940
Create a Certificate Signing Request File 942
Creating Self-signed Certificates 944
Create a Self-Signed Certificate 944
Import a Certificate for a CSR 945
Importing and Replacing Certificates 945
Import a Self-signed or CA-signed Certificate 946
Import a CA Certificate 946
Set Checks for Certificate Imports 947
Replace Certificates 948
Importing and Retrieving CRLs 950
Import a Certificate Revocation List 950
Configuring NSX Manager to Retrieve a Certificate Revocation List 951
Import or Update a Trusted CA Bundle 951
Storage of Public Certificates and Private Keys for Load Balancer or VPN service 952
Alarm Notification for Certificate Expiration 952

24 Integration of Antrea Container Clusters 954


Architecture of Antrea Container Cluster Integration with NSX 955
Registering an Antrea Container Cluster to NSX 957
Prerequisites for Registering an Antrea Container Cluster to NSX 958
Edit the Bootstrap Configuration File 962
Submit the YAML Files to the Kubernetes API Server 965
Viewing Inventory of an Antrea Container Cluster in NSX Manager 967
View Details of an Antrea Container Cluster 967
View Details of Namespaces in an Antrea Container Cluster 969
Monitor Health Status of an Antrea Container Cluster 970
Collect Support Bundles for an Antrea Container Cluster 972
Trace the Path of a Packet with Antrea Traceflow 977
Antrea Groups 979
Add an Antrea Group 983
Distributed Firewall Policies for an Antrea Container Cluster 985
Add a Distributed Firewall Policy for Antrea Container Clusters 986
Example: Add a Distributed Firewall Policy for an Antrea Container Cluster 992
Deregister an Antrea Container Cluster from NSX 994
Delete an Antrea Container Cluster from the NSX Inventory by Using the Command Line 996

25 Configuring NSX in Manager Mode 998


Logical Switches in Manager Mode 998
Understanding BUM Frame Replication Modes 999


Create a Logical Switch in Manager Mode 1000


Connecting a VM to a Logical Switch in Manager Mode 1002
Create a Logical Switch Port In Manager Mode 1009
Test Layer 2 Connectivity in Manager Mode 1010
Create a VLAN Logical Switch for the NSX Edge Uplink in Manager Mode 1013
Switching Profiles for Logical Switches and Logical Ports 1015
Edge Bridging in Manager Mode: Extending Overlay Segments to VLAN 1034
Logical Routers in Manager Mode 1042
Tier-1 Logical Router 1042
Tier-0 Logical Router 1053
NAT in Manager Mode 1085
Network Address Translation 1085
Grouping Objects in Manager Mode 1098
Create an IP Set in Manager Mode 1098
Create an IP Pool in Manager Mode 1099
Create a MAC Set in Manager Mode 1100
Create an NSGroup in Manager Mode 1100
Configuring Services and Service Groups 1102
Manage Tags for a VM in Manager Mode 1103
DHCP in Manager Mode 1104
DHCP 1105
Metadata Proxies 1109
IP Address Management in Manager Mode 1112
Manage IP Blocks in Manager Mode 1112
Manage Subnets for IP Blocks in Manager Mode 1112
Load Balancing in Manager Mode 1113
Key Load Balancer Concepts 1114
Configuring Load Balancer Components 1115
Firewall in Manager Mode 1145
Add or Delete a Firewall Rule to a Logical Router in Manager Mode 1145
Configure Firewall for a Logical Switch Bridge Port in Manager Mode 1146
Firewall Sections and Firewall Rules 1147
About Firewall Rules 1151

26 Backing Up and Restoring NSX Manager or Global Manager 1158


Configure Backups 1159
Start or Schedule Backups 1162
Remove Old Backups 1163
Listing Available Backups 1164
Restore a Backup 1165
Certificate Management after Restore 1169


27 Operations and Management 1170


View the Usage and Capacity of Categories of Objects 1171
Configuring the Login Banner and UI 1174
Configure the Login Window with a User Agreement Banner 1174
Configure the User Interface Settings 1175
Configure a Node Profile 1176
Checking the Realized State of a Configuration Change 1179
View Network Topology 1183
Search for Objects 1184
Filter by Object Attributes 1185
Add a Compute Manager 1186
Replace Compute Manager 1190
Configuring Active Directory and Event Log Scraping 1192
Enable Windows Security Log Access for the Event Log Reader 1194
Add an LDAP Server 1194
Synchronize Active Directory 1195
Remove NSX Extension from VMware vCenter 1196
Managing the NSX Manager Cluster 1197
View the Configuration and Status of the NSX Manager Cluster 1197
Update API Service Configuration of the NSX Manager Cluster 1200
Shut Down and Power On the NSX Manager Cluster 1201
Reboot an NSX Manager 1201
Change the IP Address of an NSX Manager 1202
Resize an NSX Manager Node 1204
Replacing an NSX Edge Transport Node in an NSX Edge Cluster 1206
Replace an NSX Edge Transport Node Using the NSX Manager UI 1206
Replace an NSX Edge Transport Node Using the API 1208
Managing Resource Reservations for an Edge VM Appliance 1210
Tune Resource Reservations for an NSX Edge Appliance 1211
Replacing NSX Edge Hardware or Redeploying NSX Edge Nodes VM 1212
Replace NSX Edge Hardware 1213
Redeploy an NSX Edge VM Appliance 1214
Adding and Removing an ESXi Host Transport Node to and from vCenter Servers 1221
Changing the Distributed Router Interfaces' MAC Address 1222
Configuring Appliances 1223
Configuring NTP on Appliances and Transport Nodes 1224
Add a License Key and Generate a License Usage Report 1225
License Types 1227
Compliance-Based Configuration 1231
View Compliance Status Report 1232
Compliance Status Report Codes 1232


Configure Global FIPS Compliance Mode for Load Balancer 1235


Collect Support Bundles 1238
Understanding Support Bundle File Paths 1239
Log Messages and Error Codes 1244
Configure Remote Logging 1248
Add Syslog Servers for NSX Nodes 1256
Log Message IDs 1257
Troubleshooting Syslog Issues 1258
Configure Serial Logging on an Appliance VM 1258
Firewall Audit Log Messages 1259
Customer Experience Improvement Program 1275
Edit the Customer Experience Improvement Program Configuration 1275
Find the SSH Fingerprint of a Remote Server 1276
Configuring an External Load Balancer 1277
Configure Proxy Settings 1279
Promote Manager Objects to Policy Objects 1280
Back up and restore NSX configured in VMware vCenter 1282

28 Using NSX Cloud 1284


Cloud Service Manager: UI Walkthrough 1284
Clouds 1285
System 1287
Threat Detection using the NSX Cloud Quarantine Policy 1292
Quarantine Policy in the NSX Enforced Mode 1293
Quarantine Policy in the Native Cloud Enforced Mode 1298
User Managed List for VMs 1298
NSX Enforced Mode 1299
Supported Operating Systems for Workload VMs 1300
Onboarding VMs in the NSX Enforced Mode 1302
Managing VMs in the NSX Enforced Mode 1311
Native Cloud Enforced Mode 1313
Managing VMs in the Native Cloud Enforced Mode 1313
NSX Features Supported with NSX Cloud 1316
Group VMs using NSX and Public Cloud Tags 1318
Use Native-Cloud Services 1321
Service Insertion for your Workload VMs in the NSX Enforced Mode 1322
Enable NAT on NSX-managed VMs 1331
Enable Syslog Forwarding 1332
Automate VPN for Public Cloud Endpoints using APIs 1332
Set up VPN in the Native Cloud Enforced Mode 1334
Set up VPN in the NSX Enforced Mode 1338


Deploying NSX Management Components On Microsoft Azure 1340


Redeploying Manager Nodes on Cloud Native Azure 1341
Managing Backup and Restore of NSX Manager and CSM in Microsoft Azure 1342
Restore CSM from Microsoft Azure Recovery Services Vault 1343
Restore NSX Manager from Microsoft Azure Recovery Services Vault 1345
NSX Cloud FAQs and Troubleshooting 1350

About Administering VMware NSX

The NSX Administration Guide provides information about configuring and managing networking
for VMware NSX® (formerly known as NSX-T Data Center), including how to create logical
switches and ports and how to set up networking for tiered logical routers, configure NAT,
firewalls, SpoofGuard, grouping, and DHCP. It also describes how to configure NSX Cloud.

Intended Audience
This information is intended for anyone who wants to configure NSX. The information is written
for experienced Windows or Linux system administrators who are familiar with virtual machine
technology, networking, and security operations.

VMware Technical Publications Glossary


VMware Technical Publications provides a glossary of terms that might be unfamiliar to you.
For definitions of terms as they are used in VMware technical documentation, go to
https://www.vmware.com/topics/glossary.

Related Documentation
You can find the VMware NSX® Intelligence™ documentation at
https://docs.vmware.com/en/VMware-NSX-Intelligence/index.html. The NSX Intelligence 1.0 content
was initially included and released with the NSX 2.5 documentation set.

1 NSX Manager
The NSX Manager provides a web-based user interface where you can manage your NSX
environment. It also hosts the API server that processes API calls.

The NSX Manager interface provides two modes for configuring resources:

- Policy mode
- Manager mode

Accessing Policy Mode and Manager Mode


If present, you can use the Policy and Manager buttons to switch between the Policy and Manager
modes. Switching modes controls which menu items are available to you.

- By default, if your environment contains only objects created through Policy mode, your user
  interface is in Policy mode and you do not see the Policy and Manager buttons.
- By default, if your environment contains any objects created through Manager mode, you see
  the Policy and Manager buttons in the top-right corner.

These defaults can be changed by modifying the user interface settings. See Configure the User
Interface Settings for more information.

The same System tab is used in the Policy and Manager interfaces. If you modify Edge
nodes, Edge clusters, or transport zones, it can take up to 5 minutes for those changes to
be visible in Policy mode. You can synchronize immediately using
POST /policy/api/v1/infra/sites/default/enforcement-points/default?action=reload.
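
For example, a minimal curl sketch of this call; the manager address nsx-mgr.example.com and
the admin credentials are placeholders for your environment:

  # Trigger an immediate synchronization of fabric changes into Policy mode.
  curl -k -u 'admin:<password>' -X POST \
    'https://nsx-mgr.example.com/policy/api/v1/infra/sites/default/enforcement-points/default?action=reload'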


When to Use Policy Mode or Manager Mode


Be consistent about which mode you use. There are a few reasons to use one mode over the
other.

- If you are deploying a new NSX environment, using Policy mode to create and manage your
  environment is the best choice in most situations.
- Some features are not available in Policy mode. If you need these features, use Manager
  mode for all configurations.
- If you plan to use NSX Federation, use Policy mode to create all objects. Global Manager
  supports only Policy mode.
- If you are upgrading from an earlier version of NSX and your configurations were created
  using the Advanced Networking & Security tab, use Manager mode.

The menu items and configurations that were found under the Advanced Networking &
Security tab are available in NSX 3.0 in Manager mode.

Important If you decide to use Policy mode, use it to create all objects. Do not use Manager
mode to create objects.

Similarly, if you need to use Manager mode, use it to create all objects. Do not use Policy mode to
create objects.

Table 1-1. When to Use Policy Mode or Manager Mode

Policy Mode:
- Most new deployments should use Policy mode.
- NSX Federation supports only Policy mode. If you want to use NSX Federation, or might use it
  in future, use Policy mode.
- NSX Cloud deployments.
- Networking features available in Policy mode only: DNS Services and DNS Zones, VPN,
  Forwarding policies for NSX Cloud.
- Security features available in Policy mode only: Endpoint Protection, Network Introspection
  (East-West Service Insertion), Context Profiles, L7 applications, FQDN, New Distributed
  Firewall and Gateway Firewall Layout, Categories, Auto service rules, Drafts.

Manager Mode:
- Deployments which were created using the advanced interface, for example, upgrades from
  versions before Policy mode was available.
- Deployments which integrate with other plugins, for example, NSX Container Plug-in,
  Openstack, and other cloud management platforms.
- Security features available in Manager mode only: Bridge Firewall.

Names for Objects Created in Policy Mode and Manager Mode
The objects you create have different names depending on which interface was used to create
them.

Table 1-2. Object Names (object created using Policy mode / object created using Manager mode)

- Segment / Logical switch
- Tier-1 gateway / Tier-1 logical router
- Tier-0 gateway / Tier-0 logical router
- Group / NSGroup, IP Sets, MAC Sets
- Security Policy / Firewall section
- Gateway firewall / Edge firewall

Policy and Manager APIs


The NSX Manager provides two APIs: Policy and Manager.

- The Policy API contains URIs that begin with /policy/api.
- The Manager API contains URIs that begin with /api.

For more information about using the Policy API, see the NSX Policy API: Getting Started Guide.
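
For example, the following curl sketches illustrate the two URI styles; the manager address and
credentials are placeholders, and the endpoints shown are common examples rather than an
exhaustive list:

  # Policy API: list tier-0 gateways defined in the declarative intent tree.
  curl -k -u 'admin:<password>' 'https://nsx-mgr.example.com/policy/api/v1/infra/tier-0s'

  # Manager API: list the corresponding imperative logical router objects.
  curl -k -u 'admin:<password>' 'https://nsx-mgr.example.com/api/v1/logical-routers'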


Security
NSX Manager has the following security features:

- NSX Manager has a built-in user account called admin, which has access rights to all
  resources, but does not have rights to the operating system to install software. NSX upgrade
  files are the only files allowed for installation. You can change the username and role
  permissions for admin, but you cannot delete admin.
- NSX Manager supports session timeout and automatic user logout. NSX Manager does not
  support session lock. Initiating a session lock can be a function of the workstation operating
  system being used to access NSX Manager. Upon session termination or user logout, users are
  redirected to the login page.
- Authentication mechanisms implemented on NSX follow security best practices and are
  resistant to replay attacks. The secure practices are deployed systematically. For example,
  session IDs and tokens on NSX Manager for each session are unique and expire after the user
  logs out or after a period of inactivity. Also, every session has a time record and the session
  communications are encrypted to prevent session hijacking.

You can view and change the session timeout value with the following CLI commands:

- The command get service http displays a list of values including session timeout.
- To change the session timeout value, run the following commands:

  set service http session-timeout <timeout-value-in-seconds>
  restart service ui-service
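
For example, a minimal sketch of setting the timeout to 30 minutes (1800 seconds); the prompt
name nsx-mgr is a placeholder for your NSX Manager node:

  nsx-mgr> set service http session-timeout 1800
  nsx-mgr> restart service ui-service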

This chapter includes the following topics:

- Security of NSX Manager
- License Enforcement in NSX Manager
- View Monitoring Dashboards

Security of NSX Manager


NSX Manager is a restricted system and has features designed to ensure the integrity of the
system and to keep the system secure.

Details of the NSX Manager security features:

- NSX Manager supports session time-out and user logoff. NSX Manager does not support
  session lock. Initiating a session lock can be a function of the workstation operating system
  being used to access NSX Manager.
- In NSX 3.1, NSX Manager has two local accounts: admin and audit. You cannot deactivate the
  local accounts or create local accounts.
- Starting in NSX 3.1.1, there are two new guest user accounts. In the Enterprise environment,
  guestuser1 and guestuser2 user accounts are available. In the NSX Cloud environment,
  cloud_admin and cloud_audit user accounts are available. For 3.1.1 and later, the local
  accounts for audit and guest users are inactive by default, but you can activate or deactivate
  them. You cannot deactivate the admin account or create new local accounts.
- NSX Manager enforces approved authorizations for controlling the flow of management
  information within the network device based on information flow control policies.
- NSX Manager initiates session auditing upon startup.
- NSX Manager uses its internal system clock to generate time stamps for audit records.
- The NSX Manager user interface includes a user account, which has access rights to all
  resources, but does not have rights to the operating system to install software and hardware.
  NSX upgrade files are the only files allowed for installation. You cannot edit the rights of or
  delete this user.
- All passwords in the system (databases, configuration files, log files, and so on) are encrypted
  using a strong one-way hashing algorithm with a salt. During authentication, when the
  user enters the password, it is obfuscated. Starting in NSX 4.0, password complexity is
  configurable using the UI, API, and CLI. Also available in this release is the ability to reset the
  node authentication policy and password complexity configuration back to their default system
  settings.
- FIPS compliance:
  - NSX Manager uses FIPS 140-2 approved algorithms for authentication to a cryptographic
    module.
  - NSX Manager generates unique session identifiers using a FIPS 140-2 approved random
    number generator.
  - NSX Manager uses a FIPS 140-2 approved cryptographic algorithm to protect the
    confidentiality of remote maintenance and diagnostic sessions.
  - NSX Manager authenticates SNMP messages using a FIPS-validated Keyed-Hash Message
    Authentication Code (HMAC).
- NSX Manager recognizes only system-generated session identifiers and invalidates session
  identifiers upon administrator logout or other session termination.
- An audit log is generated for events such as logon, logoff, and access to resources. Each
  audit log contains the timestamp, source, result, and a description of the event. For more
  information, see Log Messages and Error Codes.

License Enforcement in NSX Manager


License compliance is enforced when you try to access features in the NSX Manager user
interface. The license enforcement is based on features and time, but not capacity. The capacity
enforcement is honor-based.


Beginning with NSX 3.1, the NSX license editions that you have assigned to your NSX deployment
determine which features you can access in the Policy mode of the NSX Manager user interface.
If you have multiple editions of licenses, NSX Manager uses the highest license edition that is
applicable.

When the licenses are valid, the order of priority for the license editions is as follows.

Priority 1: NSX Data Center Enterprise Plus, NSX Data Center Evaluation
Priority 2: NSX Enterprise Plus per Processor (Limited Export), NSX Data Center Advanced, NSX for vSphere - Enterprise, NSX for vSphere - Advanced, NSX Data Center Advanced per Processor (for Limited Export)
Priority 3: NSX Data Center for Remote Office Branch Office (ROBO)
Priority 4: NSX Data Center Professional
Priority 5: NSX Data Center Standard and NSX for vSphere - Standard
Priority 6: NSX for vShield Endpoint

Note Add-on licenses verify add-on features, such as NSX Data Center Distributed Threat
Prevention. For details, see License Types.

The assigned license is used to determine the list of features that you are allowed to use in the
NSX Manager user interface. If you are a new user, you can only access those features that are
available in the license edition that you have purchased. If you try to access a feature that is
not valid for your current license, you see a message stating that the feature is not available
with your license.


You can upgrade your current NSX deployment to NSX 3.1 or later regardless of the license that
is in effect. Similarly, you can perform a backup or restore operation regardless of the license that
is in effect. However, after a successful upgrade, backup, or restoration of a backup, the license
enforcement is applied based on the current valid assigned license to your NSX deployment.

If an assigned license expires or becomes invalid, only the read and delete operations are allowed
for objects that were configured before the license expired or before an upgrade, backup, or
restoration process began. The edit, create, and new operations are disabled, and a warning
banner is displayed.

If NSX Manager has another valid license with a lower priority, alongside the expired higher
priority license, then the features are enabled based on the priority of the valid license.

The list of supported features for the different VMware NSX Data Center license editions is
available in the VMware NSX Data Center Datasheet at
https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/nsx/vmware-nsx-datasheet.pdf.
In that document, locate the VMware NSX Data Center Editions section and identify the license
edition that is required for the features you want to use.

See Add a License Key and Generate a License Usage Report for more information on how to add
a license key. You can also find information about the restrictions of the default license, NSX for
vShield Endpoint, that is used when you install NSX Manager.

View Monitoring Dashboards


The NSX Manager interface provides numerous monitoring dashboards showing details regarding
system status, networking and security, and compliance reporting. This information is displayed or
accessible throughout the NSX Manager interface, but can be accessed together in the Home >
Monitoring Dashboards page.

You can access the monitoring dashboards from the Home page of the NSX Manager interface.
From the dashboards, you can click through and access the source pages from which the
dashboard data is drawn.

Procedure

1 Log in as administrator to the NSX Manager interface.


2 Click Home if you are not already on the Home page.

3 Click Monitoring Dashboards and select the desired category of dashboards from the
drop-down menu.

The page displays the dashboards in the selected categories. The dashboard graphics are
color-coded, with a color code key displayed directly above the dashboards.

4 To access a deeper level of detail, click the title of the dashboard, or one of the elements of the
dashboard, if activated.

The following tables describe the default dashboards and their sources.

Table 1-3. System Dashboards

- System (sources: System > Appliances): Shows the status of the NSX Manager cluster and NSX
  Advanced Load Balancer resource (CPU, memory, disk) consumption.
- Fabric (sources: System > Fabric > Nodes, Profiles, Transport Zones, Compute Managers,
  Settings): Shows the status of the NSX fabric, including host and edge transport nodes,
  transport zones, and compute managers. You can also view profiles and set global fabric
  settings for tunnel endpoint, remote tunnel endpoint, and global MTU consistency checks if you
  select the Fabric title to access these additional tasks.
- Backups (sources: System > Backup & Restore): Shows the status of NSX backups, if
  configured. It is strongly recommended that you configure scheduled backups that are stored
  remotely to an SFTP site.
- Endpoint Protection (sources: System > Service Deployments): Shows the status of endpoint
  protection deployment.

Table 1-4. Networking & Security Dashboards in Policy Mode

- Security (sources: Inventory > Groups; Security > Distributed Firewall): Shows the status of
  groups and security policies. A group is a collection of workloads, segments, segment ports, IP
  addresses, MAC addresses, and so on where security policies, including East-West firewall
  rules, may be applied.
- Gateways (sources: Networking > Tier-0 Gateways; Networking > Tier-1 Gateways): Shows the
  status of Tier-0 and Tier-1 gateways.
- Segments (sources: Networking > Segments): Shows the status of network segments.
- Load Balancers (sources: Networking > Load Balancing): Shows the status of the load balancer
  VMs.
- Virtual Services (sources: Networking > Network Services > Advanced Load Balancer/Load
  Balancing): Shows the availability and scalability for virtual services applications by monitoring
  their health and distributing traffic.
- VPNs (sources: Networking > VPN): Shows the status of virtual private networks.

Table 1-5. Networking & Security Dashboards in Manager Mode

- Load Balancers (sources: Networking > Load Balancing): Shows the status of the load balancer
  services, load balancer virtual servers, and load balancer server pools. A load balancer can host
  one or more virtual servers. A virtual server is bound to a server pool that includes members
  hosting applications.
- Firewall (sources: Security > East West Security > Distributed Firewall; Security > East West
  Security > Bridge Firewall; Networking > Connectivity > Tier-0 Logical Routers; Networking >
  Connectivity > Tier-1 Logical Routers): Indicates if the firewall is enabled, and shows the number
  of policies, rules, and exclusions list members. Note: Each detailed item displayed in this
  dashboard is sourced from a specific sub-tab in the source page cited.
- VPN (sources: not applicable): Shows the status of virtual private networks and the number of
  IPSec and L2 VPN sessions open.
- Switching (sources: Networking > Connectivity > Logical Switches): Shows the status of logical
  switches and logical ports, including both VM and container ports.

Table 1-6. Compliance Report Dashboard

- Non-Compliance Code: Displays the specific non-compliance code.
- Description: Specific cause of the non-compliance status.
- Resource Name: The NSX resource (node, switch, or profile) in non-compliance.
- Resource Type: Resource type of the cause.
- Affected Resources: Number of resources affected. Click the number value to view a list.

You can also add widgets to configure custom monitoring dashboards using NSX REST APIs.
See the latest version of the NSX REST API Guide at https://code.vmware.com for API details.
See the Compliance Status Report Codes for more information about each compliance report
code.

2 Tier-0 Gateways
A tier-0 gateway performs the functions of a tier-0 logical router. It processes traffic between the
logical and physical networks.

NSX Cloud Note If using NSX Cloud, see NSX Features Supported with NSX Cloud for a list of
auto-generated logical entities, supported features, and configurations required for NSX Cloud.

An Edge node can support only one tier-0 gateway or logical router. When you create a tier-0
gateway or logical router, make sure you do not create more tier-0 gateways or logical routers
than the number of Edge nodes in the NSX Edge cluster.

Note When connecting tier-0 uplinks to multi-chassis port-channel topologies such as vPC
(virtual PortChannel) or VSS (Virtual Switching System) from Cisco, or MLAG (Multi-Chassis
Link Aggregation) from Arista, be sure to consult with the network provider to understand the
limitations of the topology when it is being used for transit routing.

This chapter includes the following topics:

- Add a Tier-0 Gateway
- Create an IP Prefix List
- Create a Community List
- Configure a Static Route
- Create a Route Map
- Using Regular Expressions to Match Community Lists When Adding Route Maps
- Configure BGP
- Configure OSPF
- Configure BFD
- Configure Multicast
- Configure IPv6 Layer 3 Forwarding
- Create SLAAC and DAD Profiles for IPv6 Address Assignment
- State Synchronization of Tier-0 Gateways
- Changing the HA Mode of a Tier-0 Gateway
- Tier-0 VRF Gateways
- Configure the ARP Limit of a Tier-0 or Tier-1 Gateway or Logical Router
- Stateful Services on Tier-0 and Tier-1

Add a Tier-0 Gateway


A tier-0 gateway has downlink connections to tier-1 gateways and external connections to physical
networks.

If you are adding a tier-0 gateway from Global Manager in NSX Federation, see Add a Tier-0
Gateway from Global Manager.

You can configure the HA (high availability) mode of a tier-0 gateway to be active-active or
active-standby, as shown in the API sketch after this list. The following services are only
supported in active-standby mode:

- NAT
- Load balancing
- Stateful firewall
- VPN
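
The HA mode can also be set through the Policy API. The following is a minimal sketch, assuming
a gateway ID of my-tier0; the manager address and credentials are placeholders, and you should
verify the field names against the NSX REST API guide for your version:

  # Create (or update) a tier-0 gateway in active-standby mode with non-preemptive failover.
  curl -k -u 'admin:<password>' -X PATCH \
    -H 'Content-Type: application/json' \
    'https://nsx-mgr.example.com/policy/api/v1/infra/tier-0s/my-tier0' \
    -d '{
          "display_name": "my-tier0",
          "ha_mode": "ACTIVE_STANDBY",
          "failover_mode": "NON_PREEMPTIVE"
        }'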

Tier-0 and tier-1 gateways support the following addressing configurations for all interfaces
(external interfaces, service interfaces, and downlinks) in both single tier and multi-tiered
topologies:

- IPv4 only
- IPv6 only
- Dual stack - both IPv4 and IPv6

To use IPv6 or dual stack addressing, enable IPv4 and IPv6 as the L3 Forwarding Mode in
Networking > Networking Settings > Global Networking Config.
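
If you prefer the API, a sketch of the equivalent global setting follows; the manager address and
credentials are placeholders, and the field name l3_forwarding_mode should be verified against
the NSX REST API guide for your version:

  # Enable dual stack (IPv4 and IPv6) layer 3 forwarding globally.
  curl -k -u 'admin:<password>' -X PATCH \
    -H 'Content-Type: application/json' \
    'https://nsx-mgr.example.com/policy/api/v1/infra/global-config' \
    -d '{"l3_forwarding_mode": "IPV4_AND_IPV6"}'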

You can configure the tier-0 gateway to support EVPN (Ethernet VPN). For more information
about configuring EVPN, see Chapter 12 Ethernet VPN (EVPN).

If you configure route redistribution for the tier-0 gateway, you can select from two groups of
sources: tier-0 subnets and advertised tier-1 subnets.

The sources in the tier-0 subnets group are:

- Connected Interfaces and Segments: External interface subnets, service interface subnets, and
  segment subnets connected to the tier-0 gateway.
- Static Routes: Static routes that you have configured on the tier-0 gateway.
- NAT IP: NAT IP addresses owned by the tier-0 gateway and discovered from NAT rules that are
  configured on the tier-0 gateway.
- IPSec Local IP: Local IPSec endpoint IP address for establishing VPN sessions.
- DNS Forwarder IP: Listener IP for DNS queries from clients, also used as the source IP to
  forward DNS queries to the upstream DNS server.
- EVPN TEP IP: Used to redistribute EVPN local endpoint subnets on the tier-0 gateway.

The sources in the advertised tier-1 subnets group are:

- Connected Interfaces and Segments: Segment subnets connected to the tier-1 gateway and
  service interface subnets configured on the tier-1 gateway.
- Static Routes: Static routes that you have configured on the tier-1 gateway.
- NAT IP: NAT IP addresses owned by the tier-1 gateway and discovered from NAT rules that are
  configured on the tier-1 gateway.
- LB VIP: IP address of the load balancing virtual server.
- LB SNAT IP: IP address or a range of IP addresses used for source NAT by the load balancer.
- DNS Forwarder IP: Listener IP for DNS queries from clients, also used as the source IP to
  forward DNS queries to the upstream DNS server.
- IPSec Local Endpoint: IP address of the IPSec local endpoint.

Proxy ARP is automatically enabled on a tier-0 gateway when a NAT rule or a load balancer
VIP uses an IP address from the subnet of the tier-0 gateway external interface. By enabling
proxy-ARP, hosts on the overlay segments and hosts on a VLAN segment can exchange network
traffic together without implementing any change in the physical networking fabric.

For a detailed example of a packet flow in a proxy ARP topology, see the NSX Reference Design
Guide on the VMware Communities portal.
Before NSX 3.2, proxy ARP is supported on a tier-0 gateway in only an active-standby
configuration, and it responds to ARP queries for the external and service interface IPs. Proxy
ARP also responds to ARP queries for service IPs that are in an IP prefix list that is configured with
the Permit action.

Starting in NSX 3.2, proxy ARP is also supported on a tier-0 gateway in an active-active
configuration. However, all the Edge nodes in the active-active tier-0 configuration must have
direct reachability to the network on which proxy ARP is required. In other words, you
must configure the external interface and the service interface on all the Edge nodes that are
participating in the tier-0 gateway for proxy ARP to work.

Prerequisites

If you plan to configure multicast, see Configuring Multicast.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Tier-0 Gateways.

3 Click Add Tier-0 Gateway.


4 Enter a name for the gateway.

5 Select an HA (high availability) mode.

The default mode is active-active. In the active-active mode, traffic is load balanced across all
members. In active-standby mode, all traffic is processed by an elected active member. If the
active member fails, a new member is elected to be active.

6 If the HA mode is active-standby, select a failover mode.

- Preemptive: If the preferred node fails and recovers, it will preempt its peer and become the
  active node. The peer will change its state to standby.
- Non-preemptive: If the preferred node fails and recovers, it will check if its peer is the active
  node. If so, the preferred node will not preempt its peer and will be the standby node.

7 (Optional) Select an NSX Edge cluster.

8 (Optional) Click Additional Settings.

a In the Internal Transit Subnet field, enter a subnet.

This is the subnet used for communication between components within this gateway. The
default is 169.254.0.0/24.

b In the T0-T1 Transit Subnets field, enter one or more subnets.

These subnets are used for communication between this gateway and all tier-1 gateways
that are linked to it. After you create this gateway and link a tier-1 gateway to it, you will
see the actual IP address assigned to the link on the tier-0 gateway side and on the tier-1
gateway side. The address is displayed in Additional Settings > Router Links on the tier-0
gateway page and the tier-1 gateway page. The default is 100.64.0.0/16.

After the tier-0 gateway is created, you can change the T0-T1 Transit Subnets by editing
the gateway. Note that this will cause a brief disruption in traffic.

c In the Forwarding Up Timer field, enter a time.

Forwarding up timer defines the time in seconds that the router must wait before sending
the up notification after the first BGP session is established. This timer (previously known
as forwarding delay) minimizes downtime in case of failovers for active-active or
active-standby configurations of logical routers on NSX Edge that use dynamic routing (BGP). It
should be set to the number of seconds an external router (TOR) takes to advertise all the
routes to this router after the first BGP/BFD session. The timer value should be directly
proportional to the number of northbound dynamic routes that the router must learn. This
timer should be set to 0 on single edge node setups.

9 Click Route Distinguisher for VRF Gateways to configure a route distinguisher admin address.

This is only needed for EVPN in Inline mode.

10 (Optional) Add one or more tags.


11 Click Save.

12 For IPv6, under Additional Settings, you can select or create an ND Profile and a DAD
Profile.

These profiles are used to configure Stateless Address Autoconfiguration (SLAAC) and
Duplicate Address Detection (DAD) for IPv6 addresses.

13 (Optional) Click EVPN Settings to configure EVPN.

a Select an EVPN mode.

The options are:

- Inline - In this mode, EVPN handles both data plane and control plane traffic.
- Route Server - Available only if this gateway's HA mode is active-active. In this mode,
  EVPN handles control plane traffic only.
- No EVPN

b If EVPN mode is Inline, select an EVPN/VXLAN VNI pool or create a new pool by clicking
the menu icon (3 dots).

c If EVPN mode is Route Server, select an EVPN Tenant or create a new EVPN tenant by
clicking the menu icon (3 dots).

d In the EVPN Tunnel Endpoint field click Set to add EVPN local tunnel endpoints.

For the tunnel endpoint, select an Edge node and specify an IP address.

Optionally, you can specify the MTU.

Note Ensure that the external interface has been configured on the NSX Edge node that
you select for the EVPN tunnel endpoint.

14 To configure route redistribution, click Route Redistribution and Set.

Select one or more of the sources:

- Tier-0 subnets: Static Routes, NAT IP, IPSec Local IP, DNS Forwarder IP, EVPN TEP IP,
  Connected Interfaces & Segments.

  Under Connected Interfaces & Segments, you can select one or more of the following:
  Service Interface Subnet, External Interface Subnet, Loopback Interface Subnet,
  Connected Segment.

- Advertised tier-1 subnets: DNS Forwarder IP, Static Routes, LB VIP, NAT IP, LB SNAT IP,
  IPSec Local Endpoint, Connected Interfaces & Segments.

  Under Connected Interfaces & Segments, you can select Service Interface Subnet and/or
  Connected Segment.


15 To configure interfaces, click Interfaces and Set.

a Click Add Interface.

b Enter a name.

c Select a type.

If the HA mode is active-standby, the choices are External, Service, and Loopback. If the
HA mode is active-active, the choices are External and Loopback.

d Enter an IP address in CIDR format.

e Select a segment.

f If the interface type is not Service, select an NSX Edge node.

g (Optional) If the interface type is not Loopback, enter an MTU value.

h (Optional) If the interface type is External, you can enable multicast by setting PIM
(Protocol Independent Multicast) to Enabled.

You can also configure the following:

- IGMP Join Local - Enter one or more IP addresses. IGMP join is a debugging tool used
  to generate (*,g) join to the Rendezvous Point (RP) and get traffic forwarded to the node
  where the join is issued. For more information, see About IGMP Join.
- Hello Interval (seconds) - Default is 30. The range is 1 - 180. This parameter specifies
  the time between Hello messages. After the Hello Interval is changed, it takes effect
  only after the currently scheduled PIM timer expires.
- Hold Time (seconds) - The range is 1 - 630. Must be greater than Hello Interval. The
  default is 3.5 times Hello Interval. If a neighbor does not receive a Hello message
  from this gateway during this time interval, the neighbor will consider this gateway
  unreachable.

i (Optional) Add tags and select an ND profile.

j (Optional) If the interface type is External, for URPF Mode, you can select Strict or None.

URPF (Unicast Reverse Path Forwarding) is a security feature.

k (Optional) After you create an interface, you can download the aggregate of ARP proxies
for the gateway by clicking the menu icon (three dots) for the interface and selecting
Download ARP Proxies.

You can also download the ARP proxy for a specific interface by expanding a gateway
and then expanding Interfaces. Click an interface and click the menu icon (three dots) and
select Download ARP Proxy.

Note You cannot download the ARP proxy for loopback interfaces.


16 (Optional) If the HA mode is active-standby, click Set next to HA VIP Configuration to
configure HA VIP.

With HA VIP configured, the tier-0 gateway is operational even if one external interface is
down. The physical router interacts with the HA VIP only. HA VIP is intended to work with
static routing and not with BGP.
a Click Add HA VIP Configuration.

b Enter an IP address and subnet mask.

The HA VIP subnet must be the same as the subnet of the interface that it is bound to.

c Select two interfaces from two different Edge nodes.

17 Click Routing to add IP prefix lists, community lists, static routes, and route maps.

18 Click Multicast to configure multicast routing.

19 Click BGP to configure BGP.

20 Click OSPF to configure OSPF.

This feature is available starting with NSX 3.1.1.

21 (Optional) To download the routing table or forwarding table, do the following:

a Click the menu icon (three dots) and select a download option.

b Enter values for Transport Node, Network, and Source as required.

c Click Download to save the .CSV file.

22 (Optional) To download the ARP table from a linked tier-1 gateway, do the following:

a From the Linked Tier-1 Gateways column, click the number.

b Click the menu icon (3 dots) and select Download ARP Table.

c Select an edge node.

d Click Download to save the .CSV file.

Results

The new gateway is added to the list. For any gateway, you can modify its configurations by
clicking the menu icon (3 dots) and selecting Edit. For the following configurations, you do not
need to click Edit. You only need to click the expand icon (right arrow) for the gateway, find the
entity, and click the number next to it. Note that the number must be non-zero. If it is zero, you
must edit the gateway.

- In the Interfaces section: External and Service Interfaces.
- In the Routing section: IP Prefix Lists, Static Routes, Static Route BFD Peer, Community
  Lists, Route Maps.
- In the BGP section: BGP Neighbors.


If NSX Federation is configured, this feature of reconfiguring a gateway by clicking on an entity is
applicable to gateways created by the Global Manager (GM) as well. Note that some entities in a
GM-created gateway can be modified by the Local Manager, but others cannot. For example, IP
Prefix Lists of a GM-created gateway cannot be modified by the Local Manager. Also, from the
Local Manager, you can edit existing External and Service Interfaces of a GM-created gateway
but you cannot add an interface.

Create an IP Prefix List


An IP prefix list contains single or multiple IP addresses that are assigned access permissions for
route advertisement. The IP addresses in this list are processed sequentially. IP prefix lists are
referenced through BGP neighbor filters or route maps with in or out direction.

For example, you can add the IP address 192.168.100.3/27 to the IP prefix list and deny the
route from being redistributed to the northbound router. You can also append an IP address
with less-than-or-equal-to (le) and greater-than-or-equal-to (ge) modifiers to grant or limit route
redistribution. For example, 192.168.100.3/27 ge 24 le 30 modifiers match subnet masks greater
than or equal to 24-bits and less than or equal to 30-bits in length.

Note The default action for a route is Deny. When you create a prefix list to deny or permit
specific routes, be sure to create an IP prefix with no specific network address (select Any from
the dropdown list) and the Permit action if you want to permit all other routes.
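
Prefix lists can also be managed through the Policy API, in the same style as the other API examples in this guide. The following is a minimal sketch only; the gateway ID Tier0Gateway1 and the prefix list ID prefix-list-out are illustrative placeholders, and the second entry omits the network field, which corresponds to selecting Any in the UI. Verify the exact schema against the NSX API reference for your release.

PATCH /policy/api/v1/infra/tier-0s/Tier0Gateway1/prefix-lists/prefix-list-out

{
    "display_name": "prefix-list-out",
    "prefixes": [
        {
            "network": "192.168.100.3/27",
            "ge": 24,
            "le": 30,
            "action": "DENY"
        },
        {
            "action": "PERMIT"
        }
    ]
}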

Prerequisites

Verify that you have a tier-0 gateway configured. See Create a Tier-0 Logical Router in Manager
Mode.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Tier-0 Gateways.

3 To edit a tier-0 gateway, click the menu icon (three dots) and select Edit.

4 Click Routing.

5 Click Set next to IP Prefix List.

6 Click Add IP Prefix List.

7 Enter a name for the IP prefix list.

8 Click Set to add IP prefixes.


9 Click Add Prefix.

a Enter an IP address in CIDR format.

For example, 192.168.100.3/27.

b (Optional) Set a range of IP address numbers in the le or ge modifiers.

For example, set le to 30 and ge to 24.

c Select Deny or Permit from the drop-down menu.

d Click Add.

10 Repeat the previous step to specify additional prefixes.

11 Click Save.

Create a Community List


You can create BGP community lists so that you can configure route maps based on community
lists.

Community lists are user-defined lists of community attribute values. These lists can be used for
matching or manipulating the communities attribute in BGP update messages.

Both the BGP Communities attribute (RFC 1997) and the BGP Large Communities attribute (RFC
8092) are supported. The BGP Communities attribute is a 32-bit value split into two 16-bit values.
The BGP Large Communities attribute has 3 components, each 4 octets in length.

In route maps we can match on or set the BGP Communities or Large Communities attribute.
Using this feature, network operators can implement network policy based on the BGP
communities attribute.
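
For reference, a community list can also be created through the Policy API, in the same style as the other API examples in this guide. This is a minimal sketch; the gateway ID Tier0Gateway1 and the list ID my-communities are illustrative placeholders. The list below contains only regular communities (a user-defined value and a well-known community), as required.

PATCH /policy/api/v1/infra/tier-0s/Tier0Gateway1/community-lists/my-communities

{
    "display_name": "my-communities",
    "communities": [
        "300:500",
        "NO_ADVERTISE"
    ]
}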

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Tier-0 Gateways.

3 To edit a tier-0 gateway, click the menu icon (three dots) and select Edit.

4 Click Routing.

5 Click Set next to Community List.

6 Click Add Community List.

7 Enter a name for the community list.


8 Specify a list of communities. For a regular community, use the aa:nn format, for example,
300:500. For a large community, use the format aa:bb:cc, for example, 11:22:33. Note that the
list cannot have both regular communities and large communities. It must contain only regular
communities, or only large communities.

In addition, you can select one or more of the following regular communities. Note that they cannot be added if the list contains large communities.

n NO_EXPORT_SUBCONFED - Do not advertise to EBGP peers.

n NO_ADVERTISE - Do not advertise to any peer.

n NO_EXPORT - Do not advertise outside BGP confederation

9 Click Save.

Configure a Static Route


You can configure a static route on the tier-0 gateway to external networks. After you configure a
static route, there is no need to advertise the route from tier-0 to tier-1, because tier-1 gateways
automatically have a static default route towards their connected tier-0 gateway.

Recursive static routes are supported.
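
A static route can also be defined through the Policy API, mirroring the UI procedure below. This is a minimal sketch; the gateway ID Tier0Gateway1, the route ID default-route, and the next-hop address are illustrative placeholders.

PATCH /policy/api/v1/infra/tier-0s/Tier0Gateway1/static-routes/default-route

{
    "display_name": "default-route",
    "network": "0.0.0.0/0",
    "next_hops": [
        {
            "ip_address": "192.168.240.254",
            "admin_distance": 1
        }
    ]
}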

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Tier-0 Gateways.

3 To edit a tier-0 gateway, click the menu icon (three dots) and select Edit.

4 Click Routing.

5 Click Set next to Static Routes.

6 Click Add Static Route.

7 Enter a name and network address in CIDR format. Static routes based on IPv6 are supported.
IPv6 prefixes can only have an IPv6 next hop.

8 Click Set Next Hops to add next-hop information.

9 Click Add Next Hop.

10 Enter an IP address or select NULL.

If NULL is selected, the route is called a device route.

11 Specify the administrative distance.

12 Select a scope from the drop-down list. A scope can be an interface, a gateway, an IPSec
session, or a segment.

13 Click Add.


What to do next

Check that the static route is configured properly. See Verify the Static Route on a Tier-0 Router.

Create a Route Map


A route map consists of a sequence of IP prefix lists, BGP path attributes, and an associated
action. The router scans the sequence for an IP address match. If there is a match, the router
performs the action and scans no further.

Route maps can be referenced at the BGP neighbor level and for route redistribution.
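
Route maps can likewise be created through the Policy API. The following minimal sketch shows a single-entry route map that matches a previously created IP prefix list and sets the local preference; the IDs Tier0Gateway1, rm-out, and prefix-list-out are illustrative placeholders, and the exact entry schema should be verified against the NSX API reference for your release.

PATCH /policy/api/v1/infra/tier-0s/Tier0Gateway1/route-maps/rm-out

{
    "display_name": "rm-out",
    "entries": [
        {
            "action": "PERMIT",
            "prefix_list_matches": [
                "/infra/tier-0s/Tier0Gateway1/prefix-lists/prefix-list-out"
            ],
            "set": {
                "local_preference": 200
            }
        }
    ]
}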

Prerequisites

n Verify that an IP prefix list or a community list is configured. See Create an IP Prefix List in
Manager Mode or Create a Community List.

n For details about using regular expressions to define route-map match criteria for community
lists, see Using Regular Expressions to Match Community Lists When Adding Route Maps.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Tier-0 Gateways.

3 To edit a tier-0 gateway, click the menu icon (three dots) and select Edit.

4 Click Routing.

5 Click Set next to Route Maps.

6 Click Add Route Map.

7 Enter a name and click Set to add match criteria.

8 Click Add Match Criteria to add one or more match criteria.


9 For each criterion, select IP Prefix or Community List and click Set to specify one or more
match expressions.

a If you selected Community List, specify match expressions that define how to match
members of community lists. For each community list, the following match options are
available:

n MATCH ANY - perform the set action in the route map if any of the communities in the
community list is matched.

n MATCH ALL - perform the set action in the route map if all the communities in the
community list are matched regardless of the order.

n MATCH EXACT - perform the set action in the route map if all the communities in the
community list are matched in the exact same order.

n MATCH COMMUNITY REGEXP - perform the set action in the route map if all the regular communities associated with the NLRI match the regular expression.

n MATCH LARGE COMMUNITY REGEXP - perform the set action in the route map if all the large communities associated with the NLRI match the regular expression.

Use the match criterion MATCH COMMUNITY REGEXP to match routes against standard communities, and use the match criterion MATCH LARGE COMMUNITY REGEXP to match routes against large communities. If you want to permit routes containing either the standard community or the large community value, you must create two match criteria. If the match expressions are given in the same match criterion, only the routes containing both the standard and large communities will be permitted.

For any match criterion, the match expressions are applied in an AND operation, which
means that all match expressions must be satisfied for a match to occur. If there are
multiple match criteria, they are applied in an OR operation, which means that a match will
occur if any one match criterion is satisfied.

10 Set BGP attributes.

BGP Attribute Description

AS-path Prepend Prepend a path with one or more AS (autonomous system) numbers to make the path longer
and therefore less preferred.

MED Multi-Exit Discriminator indicates to an external peer a preferred path to an AS.

Weight Set a weight to influence path selection. The range is 0 - 65535.


Community Specify a list of communities. For a regular community use the aa:nn format, for example,
300:500. For a large community use the aa:bb:cc format, for example, 11:22:33. Or use the
drop-down menu to select one of the following:
n NO_EXPORT_SUBCONFED - Do not advertise to EBGP peers.
n NO_ADVERTISE - Do not advertise to any peer.
n NO_EXPORT - Do not advertise outside BGP confederation

Local Preference Use this value to choose the outbound external BGP path. The path with the highest value is
preferred.

11 In the Action column, select Permit or Deny.

You can permit or deny IP addresses matched by the IP prefix lists or community lists from
being advertised.

12 Click Save.

Using Regular Expressions to Match Community Lists When Adding Route Maps
You can use regular expressions to define the route-map match criteria for community lists. BGP
regular expressions are based on POSIX 1003.2 regular expressions.

The following expressions are a subset of the POSIX regular expressions.

Expression Description

. Matches any single character.

* Matches 0 or more occurrences of the pattern.

+ Matches 1 or more occurrences of the pattern.

? Matches 0 or 1 occurrence of the pattern.

^ Matches the beginning of the line.

$ Matches the end of the line.

_ This character has a special meaning in BGP regular expressions. It matches a space, a comma, the AS set delimiters { and }, and the AS confederation delimiters ( and ). It also matches the beginning and the end of the line, so it can be used to match AS value boundaries. This character technically evaluates to (^|[,{}()]|$).

Here are some examples for using regular expressions in route maps:

Expression Description

^101 Matches routes whose community attribute starts with 101.

^[0-9]+ Matches routes whose community attribute starts with one or more digits (0-9).

.* Matches routes having any community attribute or none.

.+ Matches routes having any community value.

^$ Matches routes having no (null) community value.

Configure BGP
To enable access between your VMs and the outside world, you can configure an external or
internal BGP (eBGP or iBGP) connection between a tier-0 gateway and a router in your physical
infrastructure.

When configuring BGP, you must configure a local Autonomous System (AS) number for the
tier-0 gateway. You must also configure the remote AS number. EBGP neighbors must be directly
connected and in the same subnet as the tier-0 uplink. If they are not in the same subnet, BGP
multi-hop should be used.

BGPv6 is supported for single hop and multihop. Redistribution, prefix list, and route maps are
supported with IPv6 prefixes.

RFC-5549 enables BGPv6 sessions to exchange IPv4 routes with an IPv6 next hop. To minimize
the number of BGP sessions and IPv4 addresses, you can exchange both IPv4 and IPv6 routes
over a BGP session. Support for encoding and processing an IPv4 route with an IPv6 next hop is
negotiated as part of the capability exchange in the BGP OPEN message. If both sides of a peering
session support the capability, IPv4 routes are advertised with an IPv6 next hop. Multi-protocol
BGP (MP-BGP) is used to advertise the Network Layer Reachability Information of a IPv4 address
family using the next hop of an IPv6 address family.

A tier-0 gateway in active-active mode supports inter-SR (service router) iBGP. If gateway #1 is
unable to communicate with a northbound physical router, traffic is re-routed to gateway #2 in the
active-active cluster. If gateway #2 is able to communicate with the physical router, traffic between
gateway #1 and the physical router will not be affected. A route learned by an Edge node from a
northbound router will always be preferred to the same route learned over inter-SR iBGP. It is not
possible to change this preference.

The implementation of ECMP on NSX Edge is based on the 5-tuple of the protocol number, source
and destination address, and source and destination port.

The iBGP feature has the following capabilities and restrictions:

n Redistribution, prefix lists, and routes maps are supported.

n Route reflectors are not supported.

n BGP confederation is not supported.

How the BGP router ID (RID) is determined:

n If there is no loopback interface, BGP takes the highest interface IP address as RID.

n If BGP has already chosen the highest interface IP as RID, adding a loopback interface will not
affect BGP neighborship and RID is not changed.


n If RID is the highest interface IP and loopback is present, disabling and enabling BGP will
change the RID to the loopback IP.

n If RID is the highest interface IP and loopback is present, rebooting the edge node, enabling
maintenance mode on the edge node, or restarting the routing process will not change the
RID.

n If RID is the highest interface IP and loopback is present, redeploying or replacing the edge
transport node will change the RID to the loopback interface IP.

n If RID is the highest interface IP and loopback is present, modifying or deleting the highest
interface IP address will change the RID to the loopback interface IP.

n If RID is the loopback interface IP, modifying or deleting the highest interface IP will not
change the RID.

n Clearing BGP neighbors will not change the RID. It retains only the old RID.

n If the loopback interface has an IPv6 address, BGP does not use it as RID. It will take the
highest IPv4 interface IP.

n A soft restart or hard restart of BGP adjacency from a remote site does not affect the BGP RID.

Supported BGP Capabilities

As defined in https://datatracker.ietf.org/doc/html/rfc2842, a BGP speaker determines the capabilities supported by its peer by examining the list of capabilities present in the Capabilities Optional Parameter in the OPEN message that the speaker receives from the peer. NSX supports the following capabilities:


Capability code 1 - Multiprotocol extensions, with:
n AFI=1, SAFI=1 : IPv4 Unicast
n AFI=2, SAFI=1 : IPv6 Unicast
n AFI=25, SAFI=70 : L2VPN EVPN
Advertised address families: IPv4 Unicast, IPv6 Unicast, L2VPN EVPN. Supported from the tier-0 gateway: Yes. Supported by the tier-0 gateway when received from a peer: Yes. Configurable: Yes.
Default behavior: The IPv4 Unicast address family is enabled and advertised by default when an IPv4 neighbor is configured, or manually added in the Route Filter settings under the BGP neighbor configuration. The IPv6 Unicast address family is enabled and advertised by default when an IPv6 neighbor is configured, or manually added in the Route Filter settings under the BGP neighbor configuration. The L2VPN EVPN address family is enabled and advertised when configured in the Route Filter settings under the BGP neighbor configuration. The IPv4 Unicast address family is mandatory in NSX and automatically enabled when adding the L2VPN EVPN address family.

Capability code 2 - Route refresh
Advertised address families: IPv4 Unicast, IPv6 Unicast, L2VPN EVPN. Supported from the tier-0 gateway: Yes. Supported by the tier-0 gateway when received from a peer: Yes. Configurable: No.
Default behavior: Advertised by default.

Capability code 5 - Extended next hop encoding
Advertised address families: IPv6 Unicast. Supported from the tier-0 gateway: Yes. Supported by the tier-0 gateway when received from a peer: Yes. Configurable: Yes.
Default behavior: Not advertised by default. To enable this capability, you must provide an IPv4 address family along with the IPv6 address family for the IPv6 BGP peer IP address.

Capability code 64 - Graceful restart
Advertised address families: IPv4 Unicast, IPv6 Unicast, L2VPN EVPN. Supported from the tier-0 gateway: Yes. Supported by the tier-0 gateway when received from a peer: Yes. Configurable: Yes.
Default behavior: Not advertised by default (the Edge node is a helper by default).

Capability code 65 - Support for 4-octet AS number
Advertised address families: IPv4 Unicast, IPv6 Unicast, L2VPN EVPN. Supported from the tier-0 gateway: Yes. Supported by the tier-0 gateway when received from a peer: Yes. Configurable: No.
Default behavior: Advertised by default.

Capability code 69 - ADD-Path, with:
n AFI=1/2/25, SAFI=1/70
n Send/Receive=1 (Send Only)
n Send/Receive=2 (Receive Only)
n Send/Receive=3 (Both)
Advertised address families: IPv4 Unicast, IPv6 Unicast, L2VPN EVPN. Supported from the tier-0 gateway: Yes (Receive only). Supported by the tier-0 gateway when received from a peer: Yes (both Send and Receive). Configurable: No.
Default behavior: The receive-only capability is supported and advertised by default. When the Edge node receives the same BGP prefix multiple times but with the same metric, if ECMP is enabled, all paths will be installed and active. When the Edge node receives the same BGP prefix multiple times with different metrics (for example, a larger ASPATH length), the best path route will be installed and active. The less preferred paths will be kept in the BGP routing table to improve control plane convergence.

Capability code 73 - FQDN
Advertised address families: IPv4 Unicast, IPv6 Unicast, L2VPN EVPN. Supported from the tier-0 gateway: Yes. Supported by the tier-0 gateway when received from a peer: Yes. Configurable: No.
Default behavior: Advertised by default.

Capability code 128 - Route refresh (Cisco)
Advertised address families: IPv4 Unicast, IPv6 Unicast, L2VPN EVPN. Supported from the tier-0 gateway: Yes. Supported by the tier-0 gateway when received from a peer: Yes. Configurable: No.
Default behavior: Advertised by default.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Tier-0 Gateways.

3 To edit a tier-0 gateway, click the menu icon (three dots) and select Edit.


4 Click BGP.

a Enter the local AS number.

In active-active mode, the default ASN value, 65000, is already filled in. In active-standby
mode, there is no default ASN value.

b Click the BGP toggle to enable or disable BGP.

In active-active mode, BGP is enabled by default. In active-standby mode, BGP is disabled by default.

c If this gateway is in active-active mode, click the Inter SR iBGP toggle to enable or disable
inter-SR iBGP. It is enabled by default.

If the gateway is in active-standby mode, this feature is not available.

d Click the ECMP toggle button to enable or disable ECMP.

e Click the Multipath Relax toggle button to enable or disable load-sharing across multiple
paths that differ only in AS-path attribute values but have the same AS-path length.

Note ECMP must be enabled for Multipath Relax to work.

f In the Graceful Restart field, select Disable, Helper Only, or Graceful Restart and Helper.

You can optionally change the Graceful Restart Timer and Graceful Restart Stale Timer.

By default, the Graceful Restart mode is set to Helper Only. Helper mode is useful
for eliminating and/or reducing the disruption of traffic associated with routes learned
from a neighbor capable of Graceful Restart. The neighbor must be able to preserve its
forwarding table while it undergoes a restart.

For EVPN, only the Helper Only mode is supported.

Enabling the Graceful Restart capability on tier-0 gateways is not recommended because BGP peerings from all the gateways are always active. On a failover, the Graceful Restart capability increases the time a remote neighbor takes to select an alternate tier-0 gateway, which delays BFD-based convergence.

Note Unless overridden by neighbor-specific configuration, the tier-0 configuration applies to all BGP neighbors.

5 Configure Route Aggregation by adding IP address prefixes.

a Click Set for Route Aggregation.

b Click Add Prefix.

c Enter an IP address prefix in CIDR format.

d For the option Summary - Only, select Yes or No.

6 Click Apply.

You must save the global BGP configuration before you can configure BGP neighbors.


7 Configure BGP Neighbors.

a Click Set for BGP Neighbors.

b Click Add BGP Neighbor.

c Enter the IP address of the neighbor.

d Enable or disable BFD.

e Enter a value for Remote AS number.

For iBGP, enter the same AS number as the one in step 4a. For eBGP, enter the AS
number of the physical router.

f Under Route Filter, click Set to add one or more route filters.

For IP Address Family, you can select IPv4, IPv6, or L2VPN EVPN. The following
combinations are supported:

n IPv4 and IPv6

n IPv4 and L2VPN EVPN

The combination of IPv6 and L2VPN EVPN is not supported.

For the RFC 5549 feature, ensure that you provide an IPv4 address family along with the
IPv6 address family for the IPv6 BGP peer IP address.

For Out Filter and In Filter, click Configure and select filters, then click Save.

For Maximum Routes, you can specify a value between 1 and 1,000,000. When the
number of BGP routes received from the peer reaches 75% of the configured limit, or
when it exceeds the configured limit for the first time, a warning message is logged in the
file /var/log/frr.log. In addition to the log message, the output of NSX CLI command
get bgp neighbor shows if the number of prefixes received exceeds the configured limit.
Note that the gateway will continue to accept routes from the BGP neighbor even after the
Maximum Routes limit is reached.

Note If you configure a BGP neighbor with one address family, for example, L2VPN
EVPN, and then later add a second address family, the established BGP connection will be
reset.

g Enable or disable the Allowas-in feature.

This is disabled by default. With this feature enabled, BGP neighbors can receive routes
with the same AS, for example, when you have two locations interconnected using the
same service provider. This feature applies to all the address families and cannot be
applied to specific address families.

h In the Source Addresses field, you can select a source address to establish a peering
session with a neighbor using this specific source address. If you do not select any, the
gateway will automatically choose one.

i Enter a value for Max Hop Limit.


j In the Graceful Restart field, you can optionally select Disable, Helper Only, or Graceful
Restart and Helper.

Option Description

None selected The Graceful Restart for this neighbor will follow the Tier-0 gateway BGP configuration.

Disable n If the tier-0 gateway BGP is configured with Disable, Graceful Restart will be disabled
for this neighbor.
n If the tier-0 gateway BGP is configured with Helper Only, Graceful Restart will be
disabled for this neighbor.
n If the tier-0 gateway BGP is configured with Graceful Restart and Helper, Graceful
Restart will be disabled for this neighbor.

Helper Only n If the tier-0 gateway BGP is configured with Disable, Graceful Restart will be
configured as Helper Only for this neighbor.
n If the tier-0 gateway BGP is configured with Helper Only, Graceful Restart will be
configured as Helper Only for this neighbor.
n If the tier-0 gateway BGP is configured with Graceful Restart and Helper, Graceful
Restart will be configured as Helper Only for this neighbor.

Graceful Restart n If the tier-0 gateway BGP is configured with Disable, Graceful Restart will be
and Helper configured as Graceful Restart and Helper for this neighbor.
n If the tier-0 gateway BGP is configured with Helper Only, Graceful Restart will be
configured as Graceful Restart and Helper for this neighbor.
n If the tier-0 gateway BGP is configured with Graceful Restart and Helper, Graceful
Restart will be configured as Graceful Restart and Helper for this neighbor.

Note For EVPN, only the Helper Only mode is supported.

k Click Timers & Password.

l Enter a value for BFD Interval.

The unit is milliseconds. For an Edge node running in a VM, the minimum value is 500. For
a bare-metal Edge node, the minimum value is 50.

m Enter a value for BFD Multiplier.

n Enter a value, in seconds, for Hold Down Time and Keep Alive Time.

The Keep Alive Time specifies how frequently KEEPALIVE messages will be sent. The
value can be between 0 and 65535. Zero means no KEEPALIVE messages will be sent.

The Hold Down Time specifies how long the gateway will wait for a KEEPALIVE message
from a neighbor before considering the neighbor dead. The value can be 0 or between
3 and 65535. Zero means no KEEPALIVE messages are sent between the BGP neighbors
and the neighbor will never be considered unreachable.

Hold Down Time must be at least three times the value of the Keep Alive Time.
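
For example, if the Keep Alive Time is 60 seconds, set the Hold Down Time to 180 seconds or more.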

o Enter a password.

This is required if you configure MD5 authentication between BGP peers.

8 Click Save.


9 (Optional) After a BGP neighbor is added, you can save its advertised and learned routes.

a Click the number from the BGP Neighbors field.

b From the Set BGP Neighbors dialog box, click the menu icon (3 dots) of a BGP neighbor
and select Download Advertised Routes or Download Learned Routes.
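
For reference, the BGP settings and a neighbor from the procedure above can also be applied through the Policy API, in the same style as the other API examples in this guide. The following is a minimal sketch; the IDs Tier0Gateway1, Tier0LocalServices-1, and tor-1, and the addresses and AS numbers, are illustrative placeholders. Verify the exact fields against the NSX API reference for your release.

PATCH /policy/api/v1/infra/tier-0s/Tier0Gateway1/locale-services/Tier0LocalServices-1/bgp

{
    "enabled": true,
    "local_as_num": "65000",
    "ecmp": true,
    "inter_sr_ibgp": true
}

PATCH /policy/api/v1/infra/tier-0s/Tier0Gateway1/locale-services/Tier0LocalServices-1/bgp/neighbors/tor-1

{
    "neighbor_address": "192.168.240.1",
    "remote_as_num": "65100",
    "bfd": {
        "enabled": true
    },
    "route_filtering": [
        {
            "address_family": "IPV4",
            "enabled": true
        }
    ]
}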

Configure OSPF
OSPF (Open Shortest Path First) is an interior gateway protocol (IGP) that operates within a single
autonomous system (AS). Starting with NSX 3.1.1, you can configure OSPF on a tier-0 gateway.

The OSPF feature has the following capabilities and restrictions:

n Only OSPFv2 is supported.

n The tier-0 gateway can be active-active or active-standby (preemptive and non-preemptive).

n Only the default VRF is supported.

n You can configure a single area on a tier-0 gateway with a maximum of two tier-0 uplinks per
Edge node.

n Backbone, normal area, and NSSA (not-so-stubby area) are supported.

n No redistribution is supported between BGP and OSPF.

n OSPF and BGP can be used together in the case of BGP multi-hop where the peer IP is
learned through OSPF.

n The same redistribution features supported for BGP are supported for OSPF (tier-0 uplinks,
downlinks, loopbacks, tier-1 downlinks, etc.). Depending on the area type, redistribution for
all these networks will result in the Edge node generating type 5 external LSA (link-state
advertisement) or type 7 external LSA with type 2 metric only (e2 or n2 routes). The Edge node
itself can learn any type of LSA.

n MD5 and plain password authentication are supported on the area configuration.

n Federation is not supported.

n Route summarization for e2 and n2 routes is supported.

n The interface running OSPF can be broadcast or numbered point-to-point (/31).

n OSPF sessions can be backed with BFD.

n For graceful restart, only the helper mode is supported.

n Redistribution route maps are supported. Only the matching of prefix lists is applicable. No set
actions.

n OSPF ECMP is supported up to a maximum of 8 paths.

n Default Originate is supported.

n NAT with OSPF is not supported.


n Tier-0 VIP with OSPF is not supported.

How the OSPF router ID (RID) is determined:

n If there is no loopback interface, OSPF takes the highest interface IP address as RID.

n If OSPF has already chosen the highest interface IP as RID, adding a loopback interface will
not affect OSPF neighborship and RID is not changed.

n If RID is the highest interface IP and loopback is present, disabling and enabling OSPF will
change the RID to the loopback IP.

n If RID is the highest interface IP and loopback is present, rebooting the edge node, enabling
maintenance mode on the edge node, or restarting the routing process will not change the
RID.

n If RID is the highest interface IP and loopback is present, redeploying or replacing the edge
transport node will change the RID to the loopback interface IP.

n If RID is the highest interface IP and loopback is present, modifying or deleting the highest
interface IP address will change the RID to the loopback interface IP.

n If RID is the loopback interface IP, modifying or deleting the highest interface IP will not
change the RID.

n Clearing OSPF neighbors will not change the RID. It retains only the old RID.

n A soft restart or hard restart of OSPF adjacency from a remote site does not affect the OSPF
RID.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Tier-0 Gateways.

3 Click the OSPF toggle to enable OSPF.

4 In the Area Definition field, click Set to add an area definition.

You can add only one area definition.


a Click Add Area Definition.

b Enter an area ID.

The value must be a number or 4 numbers in IPv4 format (for example, 1.2.3.4).

c In the Type column, select Normal or NSSA.

An OSPF NSSA (not-so-stubby area) allows external routes to be flooded within the area.

d In the Authentication column, select None, Password, or MD5.

e In the Key ID column, enter a key ID if Authentication is set to MD5.


f In the Password column, enter a password if Authentication is set to Password or MD5.

In NSX 3.1.3.2 and earlier, the plain-text and MD5 passwords can have a maximum of 8 characters. Starting with NSX 3.1.3.3, the MD5 password can have a maximum of 16 characters, and the maximum length of the plain-text password remains 8 characters.

g Click Save.

5 In the Graceful Restart field, select either Disable or Helper Only.

6 Click the ECMP toggle to enable or disable ECMP.

ECMP (equal-cost multi-path) routing allows packet forwarding to occur over multiple best
paths. It can provide fault tolerance for failed paths.

7 In the Route Summarization field, click Set to add a summary address.

Route summarization can reduce the number of LSAs that are flooded into an area. You can
summarize one or more ranges of IP addresses and send routing information about these
addresses in a single LSA.
a Click Add Prefix.

b Enter an IP address prefix in CIDR format.

c In the Advertise column, select Yes or No to indicate whether to advertise the summary
route.

The default is Yes.

d Repeat the steps above to add more prefixes.

8 Click the Default Route Originate toggle to enable or disable default route originate.

Enable this to redistribute the default route in OSPF.

9 In the OSPF Configured Interfaces field, click Set to configure OSPF on existing external
interfaces.

a Click Configure Interface.

b In the Interface column, select an interface from the dropdown list.

c In the Area ID column, select an area ID from the dropdown list.

d In the Network Type column, select Broadcast or P2P.

e In the OSPF column, set the toggle to Enabled.

f Click the BFD toggle to enable or disable BFD.

g If BFD is enabled, select a BFD profile.

h To change the OSPF Hello Interval, enter a new value.

The default is 10 seconds. This parameter specifies the time between Hello messages.


i To change the OSPF Dead Interval, enter a new value.

The default is 40 seconds. If a Hello message is not received within this time interval, the neighbor is considered unavailable.

j Click Save.

k Repeat the steps above to configure more interfaces.

10 Click Save.

11 Click Route Re-distribution to expand the section.

12 Click the OSPF Route Redistribution Status toggle to enable route redistribution for OSPF.

13 If you have route re-distribution rules configured, click the number to see the current rules
or to add additional ones. If you do not have any configured, click Set to add re-distribution
rules. Add OSPF to the Destination Protocol of any rule that will redistribute routes into
OSPF. Remember to do this step if you plan to add re-distribution rules later.

Results

After you configure OSPF, in the OSPF Neighbors field, you can click View to see information
about OSPF neighbors. The information displayed includes Neighbor IP Address, Interface,
Source, Edge Node, Priority, and State.

Note: If a neighbor is not reachable, an alarm about the neighbor will be raised. If the neighbor is
no longer in the network, simply acknowledge the alarm but do not resolve it. If you resolve the
alarm, it will be raised again.

Configure BFD
BFD (Bidirectional Forwarding Detection) is a protocol that can detect forwarding path failures.

BFD can back up sessions for BGP and static routes.

Note In NSX 4.0.0.1, only IPv4 is supported. Starting with NSX 4.0.1.1, both IPv4 and IPv6 are
supported.

For more information about BGP, see Configure BGP.

For more information about static routes, see Configure a Static Route.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Tier-0 Gateways.

3 To edit a tier-0 gateway, click the menu icon (three dots) and select Edit.

4 Click Routing and Set for Static Route BFD Peer.

5 Click Add Static Route BFD Peer.


6 Select a BFD profile. See Add a BFD Profile.

7 Enter the peer IP address and optionally the source addresses.

8 Click Save.

Configure Multicast
IP multicast routing enables a host (source) to send a single copy of data to a single multicast
address. Data is then distributed to a group of recipients using a special form of IP address called
the IP multicast group address. You can configure multicast on a tier-0 gateway for an IPv4
network to enable multicast routing.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Tier-0 Gateways.

3 To edit a tier-0 gateway, click the menu icon (three dots) and select Edit.

4 Click the Multicast toggle to enable multicast.

5 In the Replication Multicast Range field, enter an address range in CIDR format.

Replication Multicast Range is a range of multicast group addresses (GENEVE outer destination IP) that is used in the underlay to replicate workload/tenant multicast group addresses. It is recommended that there is no overlap between the Replication Multicast Range and workload/tenant multicast group addresses.

6 In the IGMP Profile drop-down list, select an IGMP profile.

7 In the PIM Profile drop-down list, select a PIM profile.

Configure IPv6 Layer 3 Forwarding


IPv4 layer 3 forwarding is enabled by default. You can also configure IPv6 layer 3 forwarding.
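
For reference, the L3 forwarding mode can also be set on the global configuration object through the Policy API (the same /policy/api/v1/infra/global-config URI referenced later in this guide for ARP limits). A minimal sketch, assuming the Policy API field l3_forwarding_mode:

PATCH /policy/api/v1/infra/global-config

{
    "l3_forwarding_mode": "IPV4_AND_IPV6"
}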

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Networking Settings.

3 Click the Global Networking Config tab.

4 Edit the Global Gateway Configuration and select IPv4 and IPv6 for the L3 Forwarding Mode.

IPv6 only is not supported.

5 Click Save.

6 Select Networking > Tier-0 Gateways.

7 Edit a tier-0 gateway by clicking the menu icon (three dots) and select Edit.


8 Go to Additional Settings.

a There are no configurable IPv6 addresses for Internal Transit Subnet. The system
automatically uses IPv6 link local addresses.

b Enter a /48 IPv6 subnet for T0-T1 Transit Subnets.

9 Go to Interfaces and add an interface for IPv6.

Create SLAAC and DAD Profiles for IPv6 Address Assignment
When using IPv6 on a logical router interface, you can set up Stateless Address Autoconfiguration
(SLAAC) for the assignment of IP addresses. SLAAC enables the addressing of a host, based on
a network prefix advertised from a local network router, through router advertisements. Duplicate
Address Detection (DAD) ensures the uniqueness of IP addresses.

Prerequisites

Navigate to Networking > Networking Settings, click the Global Gateway Config tab, and select IPv4 and IPv6 as the L3 Forwarding Mode.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.

2 Select Networking > Tier-0 Gateways.

3 To edit a tier-0 gateway, click the menu icon (three dots) and select Edit.

4 Click Additional Settings.

5 To create an ND Profile (SLAAC profile), click the menu icon (three dots) and select Create
New.

a Enter a name for the profile.

b Select a mode:

n Disabled - Router advertisement messages are disabled.

n SLAAC with DNS Through RA - The address and DNS information is generated with
the router advertisement message.

n SLAAC with DNS Through DHCP - The address is generated with the router
advertisement message and the DNS information is generated by the DHCP server.

n DHCP with Address and DNS through DHCP - The address and DNS information is
generated by the DHCP server.

n SLAAC with Address and DNS through DHCP - The address and DNS information is
generated by the DHCP server. This option is only supported by NSX Edge and not by
ESXi hosts.


c Enter the reachable time and the retransmission interval for the router advertisement
message.

d Enter the domain name and specify a lifetime for the domain name. Enter these values only
for the SLAAC with DNS Through RA mode.

e Enter a DNS server and specify a lifetime for the DNS server. Enter these values only for
the SLAAC with DNS Through RA mode.

f Enter the values for router advertisement:

n RA Interval - The interval of time between the transmission of consecutive router advertisement messages.

n Hop Limit - The lifetime of the advertised routes.

n Router Lifetime - The lifetime of the router.

n Prefix Lifetime- The lifetime of the prefix in seconds.

n Prefix Preferred Time - The time that a valid address is preferred.

6 To create a DAD Profile, click the menu icon (three dots) and select Create New.

a Enter a name for the profile.

b Select a mode:

n Loose - A duplicate address notification is received but no action is taken when a duplicate address is detected.

n Strict - A duplicate address notification is received and the duplicate address is no longer used.

c Enter the Wait Time (seconds) that specifies the interval of time between the NS packets.

d Enter the NS Retries Count that specifies the number of NS packets used to detect duplicate addresses at the intervals defined in Wait Time (seconds).

State Synchronization of Tier-0 Gateways


Connection information of the traffic running on a given tier-0 SR (Service Router) is synchronized
to its peer tier-0 SR in active-standby or stateful active-active HA modes. Note that stateful
active-active mode is only available starting with NSX 4.0.1.1.

In NSX 4.0.0.1, note the following about state synchronization:

n State synchronization is supported for Gateway Firewall, Identity Firewall, NAT, IPSec VPN,
and DHCP.

n If new sessions were going through a tier-0 SR just before a failover, it might happen that
those sessions were not synchronized on the associated tier-0 SR and potentially affect the
traffic for those sessions.


Starting with NSX 4.0.1.1, note the following about state synchronization:

n In active-standby mode, state synchronization is supported for Gateway Firewall, Identity Firewall, NAT, IPSec VPN, and DHCP.

n In active-active mode, state synchronization is supported for Gateway Firewall, Identity Firewall, and NAT. IPSec VPN is not supported.

n If new sessions were going through a tier-0 SR just before a failover, it might happen that
those sessions were not synchronized on the associated tier-0 SR and potentially affect the
traffic for those sessions.

Changing the HA Mode of a Tier-0 Gateway


You can change the high availability (HA) mode of a tier-0 gateway in certain circumstances.

Changing the HA mode is allowed only if there is no more than one service router running on
the gateway. This means that you must not have uplinks on more than one Edge transport node.
However, you can have more than one uplink on the same Edge transport node.

After you set the HA mode from active-active to active-standby, you can set the failover mode.
The default is non-preemptive.

HA mode change is not allowed if the following services or features are configured.

n DNS Forwarder

n IPSec VPN

n L2 VPN

n HA VIP

n Stateful Firewall

n SNAT, DNAT, NO_SNAT, or NO_DNAT

n Reflexive NAT applied on an interface

n Service Insertion

n VRF

n Centralized Service Port

Tier-0 VRF Gateways


Virtual routing and forwarding (VRF) makes it possible to instantiate isolated routing and
forwarding tables within a router. VRFs are supported by deploying tier-0 VRF gateways. A tier-0
VRF gateway must be linked to a parent tier-0 gateway and inherits some of the tier-0 gateway
settings, such as HA mode, Edge cluster, internal transit subnet, T0-T1 transit subnets, and BGP
routing configuration.


Multiple tier-0 VRF instances can be created under the same parent tier-0, which allows the
separation of segments and tier-1 gateways into multiple isolated tenants. With tier-0 VRF
gateways, tenants can use overlapping IP addresses without any interference or communication
with each other.

NSX tier-0 VRF gateways can be used to connect tenant networks to external routers using static
routes or BGP [RFC4364]. This is also known as VRF-Lite.

Figure: Two tier-0 VRF gateways (VRF Gateway A for Tenant A and VRF Gateway B for Tenant B) run on the same parent tier-0 gateway. Each tenant attaches its own tier-1 gateway to its VRF gateway, and the parent tier-0 gateway connects northbound to the data center gateways.

NSX tier-0 VRF gateways can also be deployed with EVPN. For more information, see Chapter 12
Ethernet VPN (EVPN).

NSX Federation support:

n Tier-0 VRF gateway is not supported with NSX Federation and therefore it cannot be
configured on Global Manager.

n Tier-0 VRF gateway is not supported on stretched tier-0 gateways in NSX Federation.

Note that even though a tier-0 VRF gateway has an HA mode, it does not have a mechanism to
respond to a communication failure that is independent of the parent tier-0 gateway's mechanism.
If a tier-0 VRF gateway loses connectivity to a neighbor but the criteria for the tier-0 gateway to
fail over is not met, the VRF gateway will not fail over. The only time a VRF gateway will fail over is
when the parent tier-0 gateway does a failover.


Deploy VRF-Lite with BGP

Prerequisites

n The parent tier-0 gateway needs to be created before the tier-0 VRF gateway instance.

n The parent tier-0 gateway needs to have an external interface before you create an external
interface on the tier-0 VRF gateway.

n VLAN tagging (802.1q) is used to differentiate traffic among VRFs. The external interface on
tier-0 VRF gateway needs to be connected to a trunk segment with the corresponding access
VLAN ID defined in the segment VLAN range.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Configure the VLAN trunk segment.

a Select Networking > Segments.

b Click Add Segments.

c Enter a name for the segment.

d In Connected Gateway, set the type of connectivity for the segment as None.

e Select a VLAN transport zone.

f Expand the Additional Settings category.

g In VLAN, enter a list or range of VLAN IDs allowed in the trunk segment.

h Click Save.

3 Create the parent tier-0 gateway.

The parent tier-0 gateway needs to be created before the tier-0 VRF gateway instance. For
more information about configuring a tier-0 gateway, see Add a Tier-0 Gateway.

4 Create the tier-0 VRF gateway.

a Select Networking > Tier-0 Gateway.

b Click Add Gateway > VRF.

c Enter a name for the gateway.

d Select a tier-0 gateway in Connect to Tier-0 Gateway.

Note Some advanced configurations are inherited from the parent tier-0, such as HA
mode, edge cluster, internal transit subnet, T0-T1 transit subnets.


e Click VRF Settings.

Note The VRF settings are optional for regular VRF-Lite deployments, but are mandatory
for EVPN use cases. For EVPN use cases, see Chapter 12 Ethernet VPN (EVPN).

f Under L3 VRF Settings, specify a Route Distinguisher.

If the connected tier-0 gateway has RD Admin Address configured, the Route
Distinguisher is automatically populated. Enter a new value if you want to override the
assigned Route Distinguisher.

g Click Save and then Yes to continue configuring the VRF gateway.

5 Configure the external interfaces on the VRF gateway.

a Click Interfaces > Set > Add Interface.

b Enter a name for the interface.

c Enter the IP address and mask for the external interface.

d In Type, select External.

e In Connected To (Segment), select the trunk segment created in Step 2.

f Select an edge node.

g Enter the Access VLAN ID from the list as configured for the segment.

h Click Save and then Close.

6 Configure BGP neighbor for VRF-Lite.

a Click BGP.

b Click the BGP toggle to enable BGP.

The Local AS number is inherited from the parent tier-0 gateway.

You can configure the other advanced BGP settings such as ECMP.

c In the BGP Neighbors field, click Set > Add BGP Neighbor.

d Enter the neighbor IP address.

e Enable BFD if required.

f Enter the Remote AS number of the neighbor.

g Enter the source IP address.

The source address must be one of the addresses of the external interfaces or loopback interfaces that you created.

h Under Route Filter, click Set > Add Route Filter to enable IP Address Family, filters based
on prefix lists, and maximum routes received from the BGP neighbor.

i Click Add and then Apply.

j Click Save and then Close.


7 Re-distribute the routes in the VRF gateway and announce to the BGP neighbors.

a Click Route Re-distribution.

b In the Route Re-distribution field, click Set > Add Route Re-distribution.

c Enter a name for the redistribution policy.

d Click Set to select available sources, such as tier-0 connected interfaces and segments and
then click Apply.

e Click Add and then click Apply.

8 Make sure that your segments or tier-1 gateways are connected to the tier-0 VRF gateway.

VRF Route Leaking


By default, the data plane traffic between VRF instances is isolated in NSX. By configuring VRF
route leaking, traffic can be exchanged between VRF instances. Static routes must be configured
on the tier-0 VRF instances to allow traffic to be exchanged.

Two topology options are supported in NSX:

n Local VRF-to-VRF route leaking

n Northbound VRF leaking

Multi-tier routing architecture is required for traffic to be exchanged in a VRF leaking topology
since static routes pointing to tier-1 distributed router (DR) uplinks are required.

Local VRF-to-VRF Route Leaking


Tier-1 DR IP addresses can be checked by using either the Edge node CLI or Network Topology in NSX Manager.

The following diagram depicts a sample topology for the local VRF-to-VRF route leaking option.


Figure: Local VRF-to-VRF route leaking topology. A parent tier-0 gateway connects to the physical router over a trunk. Tier-0 VRF A has a static route for 172.16.20.0/24 with next hop Tier-1 VRF B DR (scope VRF B), and tier-0 VRF B has a static route for 172.16.10.0/24 with next hop Tier-1 VRF A DR (scope VRF A). Tier-1 VRF A (DR uplink 100.64.80.1) serves segment VRF A (172.16.10.0/24), and tier-1 VRF B (DR uplink 100.64.90.1) serves segment VRF B (172.16.20.0/24).

The configuration workflow for the sample topology is as follows:

Tier-0 VRF A Configuration Workflow

1 Select Networking > Tier-0 Gateway.


2 For tier-0 VRF A, click the menu icon (three dots) and select Edit.
3 Click Routing.
4 In the Static Routes field, click Set > Add Static Route and configure the static route:
a Enter a name for the static route.
b In the Network field, enter a prefix.

For example, 172.16.20.0/24


5 In the Next Hops column, click Set > Set Next Hops and define the next hops for the static route:
a Enter the IP address of the tier-1 DR uplink in VRF B.

For example, 100.64.90.1

Note Tier-1 DR IP addresses can be checked by using either the Edge node CLI or Network Topology in NSX Manager.
b Enter the Admin Distance of 1.
c Enter the scope.

For example, VRF-B


Tier-0 VRF B Configuration Workflow

1 Select Networking > Tier-0 Gateway.


2 For tier-0 VRF B, click the menu icon (three dots) and select Edit.
3 Click Routing.
4 In the Static Routes field, click Set > Add Static Route and configure the static route:
a Enter a name for the static route.
b In the Network field, enter a prefix.

For example, 172.16.10.0/24


5 In the Next Hops column, click Set > Set Next Hops and define the next hops for the static route:
a Enter the IP address of the tier-1 DR uplink in VRF A.

For example, 100.64.80.1

Note Tier-1 DR IP addresses can be checked by using either the Edge node CLI or Network Topology in NSX Manager.
b Enter the Admin Distance of 1.
c Enter the scope.

For example, VRF-A
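
For reference, a minimal Policy API sketch of the static route on tier-0 VRF A from the workflow above, in the same style as the other API examples in this guide. The gateway IDs VRF-A and VRF-B and the route ID leak-to-vrf-b are illustrative placeholders, and the scope is expressed here as the policy path of the target VRF gateway; verify the exact scope format against the NSX API reference for your release.

PATCH /policy/api/v1/infra/tier-0s/VRF-A/static-routes/leak-to-vrf-b

{
    "display_name": "leak-to-vrf-b",
    "network": "172.16.20.0/24",
    "next_hops": [
        {
            "ip_address": "100.64.90.1",
            "admin_distance": 1,
            "scope": [
                "/infra/tier-0s/VRF-B"
            ]
        }
    ]
}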

Northbound VRF Route Leaking


For this topology option, the northbound static route should have as its next hop an external IP address that is reachable in the destination VRF routing table. Pointing static routes directly at connected uplink IP addresses is not recommended, because the static route would fail if an outage occurs on that link or neighbor. A loopback or virtual IP address host route (/32) can be advertised in the network in the destination VRF. Because the host route is advertised by both top-of-rack switches, two ECMP routes are installed in the tier-0 VRF. A return static route should be created in the destination VRF pointing to the tier-1 DR uplink IP address as the next hop.

The following diagram depicts a sample topology for the northbound VRF route leaking option.


Figure: Northbound VRF route leaking topology. A server (10.10.10.10/24) and an external loopback (192.168.1.1/32) are reachable in VRF B behind the physical router, which connects to the parent tier-0 gateway over a trunk. Tier-0 VRF A has a static route for 10.10.10.0/24 with the VRF B loopback as next hop (scope VRF B), and tier-0 VRF B has a static route for 172.16.10.0/24 with the Tier-1 VRF A DR uplink (100.64.80.1) as next hop (scope VRF A). Segment VRF A (172.16.10.0/24) is attached to tier-1 VRF A, and segment VRF B (172.16.20.0/24) is attached in VRF B.

The configuration workflow for the sample topology is as follows:

Tier-0 VRF A Configuration Workflow

1 Select Networking > Tier-0 Gateway.


2 For tier-0 VRF A, click the menu icon (three dots) and select Edit.
3 Click Routing.
4 In the Static Routes field, click Set > Add Static Route and configure the static route:
a Enter a name for the static route.
b In the Network field, enter a prefix.

For example, 10.10.10.0/24


5 In the Next Hops column, click Set > Set Next Hops and define the next hops for the static route:
a Enter the IP address of the external loopback that is reachable in VRF B.

For example, 192.168.1.1


b Enter the Admin Distance of 1.
c Enter the scope.

For example, VRF-B


Tier-0 VRF B Configuration Workflow

1 Select Networking > Tier-0 Gateway.


2 For tier-0 VRF B, click the menu icon (three dots) and select Edit.
3 Click Routing.
4 In the Static Routes field, click Set > Add Static Route and configure the static route:
a Enter a name for the static route.
b In the Network field, enter a prefix.

For example, 172.16.10.0/24


5 In the Next Hops column, click Set > Set Next Hops and define the next hops for the static route:
a Enter the IP address of the tier-1 DR uplink in VRF A.

For example, 100.64.80.1


b Enter the Admin Distance of 1.
c Enter the scope.

For example, VRF-A

Configure the ARP Limit of a Tier-0 or Tier-1 Gateway or Logical Router
You can configure the ARP limit of a tier-0 or tier-1 gateway or logical router using the API. The
limit specifies the maximum number of ARP entries per transport node at each gateway or logical
router.

To read or set the global ARP limit, use the following API methods and parameter:

Method            URI                                          Parameter
GET, PUT, PATCH   /policy/api/v1/infra/global-config           arp_limit_per_gateway (range: 5000 - 50000, default: 50000)

To read or set the ARP limit for a specific tier-0 or tier-1 gateway, use the following API methods
and parameter. If the limit is not set, the global ARP limit will apply.

Method            URI                                          Parameter
GET, PUT, PATCH   /policy/api/v1/infra/tier-0s/<tier-0-id>     arp_limit (range: 5000 - 50000, no default)
GET, PUT, PATCH   /policy/api/v1/infra/tier-1s/<tier-1-id>     arp_limit (range: 5000 - 50000, no default)

Note that updating the ARP limit using Manager GlobalConfig API is not allowed.
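
For example, the following minimal sketch sets the ARP limit on a specific tier-0 gateway using the URI shown above. The gateway ID Tier0Gateway1 and the value 25000 are illustrative placeholders.

PATCH /policy/api/v1/infra/tier-0s/Tier0Gateway1

{
    "arp_limit": 25000
}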

Stateful Services on Tier-0 and Tier-1


To meet the demands of stateful services, such as the need for more bandwidth and throughput, you can configure Tier-0 and Tier-1 gateways in an Active-Active (A-A) configuration. Stateful services are required for next generation firewall, Layer 7 rules, URL filtering, and TLS decryption.


Starting with NSX 4.0.1.1, you can scale out or scale in the number of service routers by adding NSX Edge nodes to, or removing them from, the cluster.

Caution As you scale in or scale out NSX Edge nodes, you might see packet loss for existing traffic flows.

The supported stateful services are:

n Next Generation Firewall

n URL filtering

n TLS proxy

n Firewall

n NAT

The unsupported services are:

n DNS Forwarder

n NSX LB

n L2VPN

n IPSecVPN

In your existing topology, if a Tier-1 gateway is in active-standby (A-S) HA mode, you cannot reconfigure it to A-A HA mode, and it cannot share the same NSX Edge cluster with A-A stateful Tier-0 gateways. As a workaround, deploy that Tier-1 gateway in active-standby mode on a separate NSX Edge cluster, and deploy the Tier-0 gateway on another NSX Edge cluster. If your environment requires a Tier-1 gateway, configure it in A-A HA mode. See Supported Topologies.

Key Concepts Stateful Services


Understand the key concepts required to configure stateful services.

Figure: Stateful active-active topology. A tier-0 active-active gateway and a tier-1 active-active gateway (each with service routers and a distributed router) run on an NSX Edge cluster of four Edge nodes that form two sub-clusters (Edge 1/Edge 2 and Edge 3/Edge 4). The four uplinks to the physical router form an interface group, and stateful services such as NAT, firewall, next generation firewall, URL filtering, and TLS run on the gateways.


Interface Groups
Use an interface group to group equivalent external (uplink) interfaces or service interfaces
across service routers. Equivalent interfaces refer to interfaces that have the same inbound
policies such as firewall rules, NAT rules, and so on, and equivalent network reachability. There
are two interface types supported: External and Service. You must only create homogenous
interface groups, such as an interface group comprising of only external interfaces or only service
interfaces. NSX does not support an interface group comprising links from both external and
service interfaces.

To define an interface group, meet the following conditions:

n Only one interface link from each service router of the NSX Edge cluster must participate in the
interface group.

n Every interface link must be part of a single interface group.

n Every interface group must have the same number of interfaces from each service router.

Note An interface group is required to create a stateful Active-Active group. If these conditions
are not met, NSX raises a status alarm on the Tier-0 or the Tier-1 UI screen.

Interface groups allow traffic flows to continue without disruption even when the uplink interface that originally carried the traffic fails, because the peer uplink interface takes over its traffic. For example, if Uplink 1 or Edge 1 fails, the interface group determines where the next packet must be punted, which is the peer shadow port on the peer Edge node in the same sub-cluster.

You can have more than one interface group, where each interface group is dedicated to a specific requirement of the NSX Edge node. For example, one interface group can serve internet traffic while another group can be a Direct Connect connection between an NSX Edge cluster and a router.

On a Tier-1 gateway, a default interface group is created. On a Tier-0 gateway, you need to create an interface group.

To create an interface group, run the API PUT /infra/tier-0s/<name>/locale-services/<location>/interface-groups/<group-name>

{
    "resource_type": "Tier0InterfaceGroup",
    "id": "uplinkgroup",
    "display_name": "uplinkgroup",
    "path": "/infra/tier-0s/Tier0Gateway1/locale-services/Tier0LocalServices-1/interface-groups/uplinkgroup",
    "relative_path": "uplinkgroup",
    "parent_path": "/infra/tier-0s/Tier0Gateway1/locale-services/Tier0LocalServices-1",
    "unique_id": "c5b2a758-7040-410b-a35d-298a16b55df0",
    "realization_id": "c5b2a758-7040-410b-a35d-298a16b55df0",
    "marked_for_delete": false,
    "overridden": false,
    "members": [
        {
            "interface_path": "/infra/tier-0s/Tier0Gateway1/locale-services/Tier0LocalServices-1/interfaces/tier0_interface1"
        },
        {
            "interface_path": "/infra/tier-0s/Tier0Gateway1/locale-services/Tier0LocalServices-1/interfaces/tier0_interface2"
        }
    ]
}
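
To verify the configuration, you can read the object back with a GET request on the same URI, for example, GET /infra/tier-0s/Tier0Gateway1/locale-services/Tier0LocalServices-1/interface-groups/uplinkgroup.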

External Interface
Interface connecting to the physical infrastructure/router. It supports static routing and BGP. In
previous releases, this interface was referred to as uplink interface. This interface can also be used
to extend a VRF (Virtual routing and forwarding instance) from the physical networking fabric into
the NSX domain.

Service Interface
Interface connecting VLAN segments to provide connectivity and services to VLAN backed
physical or virtual workloads. Service interface can also be connected to overlay segments for
Tier-1 standalone load balancer. This was previously referred to as centralized service port (CSP).
Both static and dynamic routing are supported over this interface.

Sub-clusters
When you configure stateful services on NSX Edge nodes, NSX automatically creates sub-clusters on the given NSX Edge cluster. For example, an NSX Edge cluster of four NSX Edge nodes becomes two sub-clusters, where each sub-cluster is a pair of NSX Edge nodes.

All service routers on NSX Edge nodes participating in an interface group are converted into pairs.

For example, in sub-cluster 1, if Edge node 1 goes down, all ingress or egress traffic on Edge 1
is switched to Edge 2. So, Edge 1 and Edge 2 function as the original NSX Edge node and peer
NSX Edge node respectively in the sub-cluster. During the failover process, Edge 2 takes over
the backplane IP address that Edge 1 was serving to ensure no traffic is lost and traffic flow is
maintained. When the failed Edge node 1 comes back up, the initial state is restored, where all
traffic is redirected back to Edge 1.

Failure Domain
Configure failure domains to ensure that both NSX Edge nodes selected for a sub-cluster do not
belong to the same failure domain.

To ensure failure domain functions as per design, meet these conditions:

n Label each NSX Edge with a failure domain and deploy one NSX Edge node in each failure
domain. Both NSX Edge nodes of a sub-cluster must not belong to the same failure domain.


n Ensure that both NSX Edge nodes of a sub-cluster remain as part of the same sub-cluster.
To ensure that these nodes are automatically paired in the same sub-cluster, follow a specific
sequence when referencing these nodes to failure domains. For example, in a sub-cluster, first
reference NSX Edge-1 to a failure domain and then reference NSX Edge-2 to a different failure
domain. So, when NSX Edge-1 comes back up after a failure, the failure domain to which the
node was referenced allows it to rejoin the same sub-cluster.

Supported Topologies
These are the supported topologies for stateful services on Tier-0 or Tier-1 in active-active HA
mode.

Greenfield Topologies
In new installations, note the following considerations when building one of the supported
topologies for stateful services on an NSX Edge cluster:

n Tier-1 stateful active-active gateways running stateful services must be connected to Tier-0
stateful active-active gateways and must be hosted on the same NSX Edge cluster.

n Tier-1 active-standby gateways can be connected to Tier-0 stateful active-active gateways, but
the Tier-1 gateways must be hosted on a different NSX Edge cluster.

Brownfield Topologies
In existing installations, note the following considerations when building one of the supported
topologies for stateful services on an NSX Edge cluster:

n An existing Tier-1 gateway in active-standby HA mode cannot be configured to be in active-active
HA mode. You need to create a new Tier-1 gateway in active-active HA mode.

n Tier-0 active-standby gateways cannot be converted to Tier-0 active-active gateways.

n Tier-0 stateless active-active gateways can be converted to stateful active-active gateways if
there are no Tier-1 gateways attached to the Tier-0 gateways.


Tier-0 Active-Active and Tier-1 Active-Active HA mode

[Figure: A four-node Edge cluster (Edge 1 through Edge 4) divided into Sub-cluster 1 and Sub-cluster 2, running the Tier-0 and Tier-1 active-active service routers and distributed routers. An interface group of Uplink 1 through Uplink 4 connects to the physical router and to the physical firewall, next-gen firewall, URL filtering, TLS, and NAT services.]

Tier-0 Active-Active and Tier-1 Active-Standby HA mode

[Figure: The Tier-0 active-active gateway, with Sub-cluster 1 and Uplinks 1 and 2 in an interface group, runs on Edge Cluster 1 (Edge 1 and Edge 2), while the Tier-1 active-standby gateway runs on a separate Edge Cluster 2 (Edge 3 and Edge 4). The uplinks connect to the physical router and to the physical firewall, next-gen firewall, URL filtering, TLS, and NAT services.]

Tier-0 Active-Active HA mode (no Tier-1 gateways)

[Figure: A four-node Edge cluster (Edge 1 through Edge 4) divided into Sub-cluster 1 and Sub-cluster 2, running only the Tier-0 active-active service router and distributed router. An interface group of Uplink 1 through Uplink 4 connects to the physical router and to the physical firewall, next-gen firewall, URL filtering, TLS, and NAT services.]

Tier-0 Active-Active and Tier-1 Distributed Router only

[Figure: A four-node Edge cluster (Edge 1 through Edge 4) divided into Sub-cluster 1 and Sub-cluster 2, running the Tier-0 active-active service router and distributed router. The Tier-1 gateway is a distributed router only. An interface group of Uplink 1 through Uplink 4 connects to the physical router and to the physical firewall, next-gen firewall, URL filtering, TLS, and NAT services.]

Configure Failure Domains


A failure domain allows automatic recovery of a failed NSX Edge node, based on the allocation
rules set in the NSX Edge cluster. Before configuring a Tier-0 stateful Active-Active (A-A)
gateway, reference NSX Edge nodes to different failure domains.

A stateful A-A cluster expands or shrinks as you add or remove NSX Edge nodes. In a
stateful active-active cluster, NSX automatically creates sub-clusters out of the existing number of
NSX Edge nodes. Each sub-cluster works as a pair of active and backup NSX Edge nodes. When
one of the NSX Edge nodes in a sub-cluster fails, the failure domain associated with that NSX Edge
node automatically recovers it.


In this procedure, you will reference NSX Edge nodes to different failure domains.

Note Ensure that NSX Edge-1 and NSX Edge-2 of sub-cluster-1 belong to two different failure
domains.

Procedure

1 Using the API, create failure domains for each Edge node that you will add to
the stateful A-A cluster, for example, FD1A-Edge1 and FD2A-Edge2. Set the parameter
preferred_active_edge_services to true for both Edge 1 and Edge 2.

POST /api/v1/failure-domains
{
"display_name": "FD1A-Edge1",
"preferred_active_edge_services": "true"
}

POST /api/v1/failure-domains
{
"display_name": "FD2A-Edge2",
"preferred_active_edge_services": "true"
}
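
The POST response returns the id of each failure domain, which you need in the next step. If you need to look the IDs up later, you can list the failure domains; the response shown here is an illustrative, truncated sketch and the UUID is a placeholder.

GET /api/v1/failure-domains

Response (truncated):
{
    "results": [
        {
            "id": "<failure-domain-uuid>",
            "display_name": "FD1A-Edge1",
            "preferred_active_edge_services": true
        },
        ...
    ]
}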

2 Using the API, associate each Edge node with the failure domain for the site. First call
the GET /api/v1/transport-nodes/<transport-node-id> API to get the data about the
Edge node. Use the result of the GET API as the input for the PUT /api/v1/transport-
nodes/<transport-node-id> API, with the additional property, failure_domain_id, set
appropriately. For example,

GET /api/v1/transport-nodes/<transport-node-id>
Response:
{
"resource_type": "TransportNode",
"description": "Updated NSX configured Test Transport Node",
"id": "77816de2-39c3-436c-b891-54d31f580961",
...
}

PUT /api/v1/transport-nodes/<transport-node-id>
{
"resource_type": "TransportNode",
"description": "Updated NSX configured Test Transport Node",
"id": "77816de2-39c3-436c-b891-54d31f580961",
...
"failure_domain_id": "<UUID>",
}


3 Using the API, configure the Edge cluster to allocate nodes based on failure domain. First call
the GET /api/v1/edge-clusters/<edge-cluster-id> API to get the data about the Edge
cluster. Use the result of the GET API as the input for the PUT /api/v1/edge-clusters/
<edge-cluster-id> API, with the additional property, allocation_rules set appropriately.
For example,

GET /api/v1/edge-clusters/<edge-cluster-id>
Response:
{
"_revision": 0,
"id": "bf8d4daf-93f6-4c23-af38-63f6d372e14e",
"resource_type": "EdgeCluster",
...
}

PUT /api/v1/edge-clusters/<edge-cluster-id>
{
"_revision": 0,
"id": "bf8d4daf-93f6-4c23-af38-63f6d372e14e",
"resource_type": "EdgeCluster",
...
"allocation_rules": [
{
"action": {
"enabled": true,
"action_type": "AllocationBasedOnFailureDomain"
}
}
],
}

Results

The NSX Edge nodes are referenced to different failure domains. You can now use them to create
a cluster and configure a Tier-0 gateway in A-A Stateful HA mode.

Configure Stateful Services on Tier-0 and Tier-1 Gateways


Configure Tier-0 and Tier-1 gateways in Active-Active (A-A) Stateful high availability mode on an
NSX Edge cluster and enable stateful services.

The topology considered for this procedure uses Tier-0 gateways and Tier-1 gateways both in A-A
Stateful mode.

Prerequisites

n If there is an odd number of NSX Edge nodes in the cluster, one sub-cluster does not have a
backup node. On failure of that single node, traffic is disrupted. NSX triggers an alarm that
you must resolve to correctly configure stateful services. Ensure that an NSX Edge cluster
consists of an even number of NSX Edge nodes. For example, in an NSX Edge cluster of 4 nodes,
NSX forms two sub-clusters, where each sub-cluster contains two nodes. One node in each
sub-cluster is the backup node of the active NSX Edge node.

n Ensure the NSX Edge nodes you will use as part of the NSX Edge cluster are referenced to
different failure domains.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Go to Networking → Tier-0 Gateways.

3 From the Add Gateway drop-down menu, click Tier-0.

4 Enter the name of the Tier-0 gateway.

5 In the HA Mode field, select Active Active and enable Stateful.

Note Once you enable the gateway to be stateful, you cannot edit the HA mode.

6 Select the NSX Edge cluster and click Save.

7 Click Yes to continue to edit the Tier-0 gateway.

8 Expand the Interface and Interface Groups section and in the External and Service Interface
field click Set.

9 In the Set Interfaces window, click Add Interface.

10 Enter the name, select the segment that the interface connects to, and select the NSX Edge node.
Enter any other optional details.

11 Click Save to complete adding the interface.

12 After you add interfaces, go to the Interface Groups field, click Set.

13 In the Set Interface Groups window, click Add Interface Group.

Important Create an interface group comprising one uplink from each Tier-0 SR of the
cluster. Ensure that one uplink from every SR is part of the group and that each uplink is part of
only a single group. Each interface participating in the interface group must be equivalent. Uplinks
are called equivalent when they are reachable on the network and when they share the same
firewall, NAT, and other network layer 4-7 policies.

An interface group allows multiple segments to be grouped into a single group that is
connected to an NSX Edge cluster.

If the interface group does not have an uplink from each SR, then it can result in traffic loss.
NSX triggers an alarm when this requirement is not met.

14 Click Close Editing to update the Tier-0 A-A HA gateway.


15 After deploying the Tier-0 A-A Stateful gateway, deploy Tier-1 gateways in A-A HA mode on
the same NSX Edge cluster where Tier-0 gateways are configured. When you scale-out or
scale-in a Tier-0 gateway, which means new sub-clusters of NSX Edge are added or removed,
associated Tier-1 gateways also follow the same behavior.

16 Create a locale service on Tier-0 gateways.

PUT https://<policy-mgr>/policy/api/v1/infra/tier-0s/vmc_prv/locale-services/<locale_service>

{
    "route_redistribution_types": [ "TIER0_STATIC", "TIER0_NAT" ],
    "edge_cluster_path": "/infra/sites/default/enforcement-point/nsx/edge-clusters/<95196903-6b8a-4276-a7c4-387263e834fd>",
    "preferred_edge_paths": [ "/infra/sites/default/enforcement-point/nsx/edge-clusters/<95196903-6b8a-4276-a7c4-387263e834fd>/edge-nodes/<940f1f4b-0317-45d4-84e2-b8c2394e7405>" ],
    "_revision": 0
}

17 Deploy Tier-1 gateways in A-A HA mode and select the NSX Edge cluster to run the gateway.

18 Create a locale service on Tier-1.

Without creating a locale service, the gateway is a DR-only gateway.


PUT https://<policy-mgr>/policy/api/v1/infra/tier-1s/cgw/locale-services/<locale_service>

{
    "edge_cluster_path": "/infra/sites/default/enforcement-point/nsx/edge-clusters/<95196903-6b8a-4276-a7c4-387263e834fd>",
    "preferred_edge_paths": [ "/infra/sites/default/enforcement-point/nsx/edge-clusters/<95196903-6b8a-4276-a7c4-387263e834fd>/edge-nodes/<940f1f4b-0317-45d4-84e2-b8c2394e7405>" ],
    "_revision": 0
}

19 Create an SNAT rule for the service router on the Tier-0 A-A Stateful gateway. Entering the
translated IP is mandatory. (An equivalent API call is sketched at the end of this procedure.)

20 Go to Networking → NAT and click Add NAT Rule.

21 From the Action drop-down list, select SNAT and enter source and destination IP.

22 In the Translated IP | Port field, enter the IP that the source IP must be translated to.

23 Click Save.

24 Verify the high availability status on Tier-1 SR and Tier-0 SR. Verify that a pair of NSX Edge
nodes form a sub-cluster. Both are active. The peer node only takes over and processes traffic
on the failure of the active node.

On a Tier-0 node> # get high-availability status

Service Router
UUID : 073a9fda-7a11-4d59-80c3-a7ea5371d265
state : Active
type : TIER0
mode : Stateful A/A
failover mode : Preemptive
rank : 0
service count : 0
service score : 0
HA ports state
UUID : de647a80-d27c-46ee-a251-b35a3cead0d0
op_state : Up
addresses : 169.254.0.2/25;fe80::50:56ff:fe56:5300/64
Sub-cluster Information
UUID : c8db92e7-21da-453d-9853-2648849e7bda
Peer SR UUID : daaca25b-9028-4e31-b9b7-35bae481e60a
Peer Node UUID : 68668f1c-0330-11ec-84cf-00505682699c
Peer Routers
Node UUID : 9fe732b6-0330-11ec-ae4e-005056821b5a
HA state : Active
Node UUID : 8486560a-0330-11ec-902b-00505682411d
HA state : Active
Node UUID : 68668f1c-0330-11ec-84cf-00505682699c
HA state : Active
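
If you prefer to create the SNAT rule from steps 19 through 23 through the Policy API rather than the UI, the following is a minimal sketch. The gateway name, rule ID, and IP addresses are placeholders, and only the basic fields are shown; verify the full NAT rule schema in the NSX API Guide.

PUT https://<policy-mgr>/policy/api/v1/infra/tier-0s/<tier-0-gateway>/nat/USER/nat-rules/<snat-rule-id>

{
    "action": "SNAT",
    "source_network": "172.16.10.0/24",
    "translated_network": "192.168.100.10",
    "enabled": true
}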

Results

You can run stateful services on Tier-0 gateways in active-active mode.

What to do next

n To scale out a cluster, add an even number of NSX Edge nodes.

Note If you add an odd number of NSX Edge nodes, the newly added node does not have a
backup node. If the newly added node fails, then traffic is disrupted. The NSX Manager raises
an alarm if you add an odd number of nodes to an NSX Edge cluster.

n To scale in a cluster, remove an even number of NSX Edge nodes from the cluster.

Understanding Traffic Flows


This section describes traffic flows on stateful Tier-0 and Tier-1 gateways configured in active-active HA mode.

[Figure: Topology used in the traffic flow examples: a four-node Edge cluster (Edge 1 through Edge 4) divided into Sub-cluster 1 and Sub-cluster 2, running the Tier-0 and Tier-1 active-active service routers and distributed routers, with an interface group of Uplink 1 through Uplink 4 toward the physical router and the physical firewall, next-gen firewall, URL filtering, TLS, and NAT services.]

South-North Traffic Flow


1 Based on a deterministic hash, an incoming packet from a southbound VM is punted to the
backplane of Edge-2.

2 Edge-2 determines that Edge-4 is actively managing the traffic flows and forwards the flow out
through the external interfaces (which are part of the interface group).

3 An IP hash is performed, based on external server destination IP, and traffic is punted from
Edge-2 to Edge-4. The packet is further forwarded to Tier-0 gateway service router (SR),
where SNAT changes the source IP address to translated IP address.

4 After the flow reaches Edge-4 Tier-0 SR, the shadow port forwards the NAT traffic to the
uplink interface and then sent out to the physical router.

5 If the Tier-0 SR on Edge-4 fails, NSX punts traffic to its backup node in the sub-cluster, Edge-3,
where SNAT changes the source IP address to the translated IP address. The backup interface on
Edge-3 takes over the backplane IP and the uplink IP of the Tier-0 gateway before beginning to
process traffic. The backup interface on Edge-3 is operationally Up and the shadow interface
on Edge-4 is Down.

6 All traffic flows processed by firewall and NAT rules are synchronized on the Tier-0 SR on
Edge-3.

7 When Edge-4 comes back up, the flow is resynchronized back to Edge-4. When the shadow
port comes back up, NSX punts traffic to it.

North-South Traffic Flow


1 A packet from a northbound VM is hashed by the physical router using its own hashing
algorithm to send the packet to Edge-3, based on an ECMP routing choice. The Tier-0
gateway is running on Edge-3.


2 Edge-3 determines that Edge-4 is actively managing the traffic flow and forwards the flow to
Edge-4. The flow is managed by the shadow interface of Edge-4.

3 An IP hash is performed based on the external server source IP, and traffic is punted from Edge-3
Tier-0 SR to Edge-4 Tier-0 SR, where NAT is enabled. The source IP is changed to the
translated IP address.

4 The packet is sent from Edge-4 Tier-0 SR to Edge 4 Tier-0 DR and then to Tier-1 gateway,
finally reaching the destination VM.

5 If the Tier-0 service router on Edge-4 fails, NSX punts traffic to its peer node in sub-cluster 2, which
is Edge-3. NAT enabled on Edge-3 changes the source IP address to the translated IP address.

6 Before beginning to process traffic, the backup shadow port on Edge-3 manages the traffic
flow. Now, the backup shadow port on Edge-3 is operationally Up and the shadow port on
Edge-4 is Down.

7 All traffic flows processed by firewall and NAT rules are synchronized on the Tier-0 SR on
Edge-3.

8 When Edge-4 comes back up, the flow is resynchronized back to it. The shadow port on
Edge-4 comes back up and manages the punted traffic.

Sub-cluster Failure
If both the nodes in a sub-cluster go down, the sub-cluster goes down.

n Existing flows are disrupted causing traffic loss.

n New flows are punted to the other sub-cluster.

n When the failed sub-cluster comes back up again, the flows return to the original sub-cluster.

If a sub-cluster goes down for any reason, then the other sub-cluster in the cluster takes over.

Single Node Failure


On failure of an Edge node, the following events happen:

1 Interface links of the Edge node fail.

2 The shadow port on the failed Edge node is in Down state.

3 The backup port of the peer node in the sub-cluster takes over.

4 The firewall and the NAT states are synchronized on the peer Edge node.

5 The backup port on the peer node provides connectivity to new traffic flows.

6 When the interface links of the failed node come back up, the firewall and the NAT states are
resynchronized with the shadow port of the active node.

7 NSX punts back traffic flows to the original node.

Tier-1 Gateway
A tier-1 gateway has downlink connections to segments and uplink connections to tier-0 gateways.

You can configure route advertisements and static routes on a tier-1 gateway. Recursive static
routes are supported.

This chapter includes the following topics:

n Add a Tier-1 Gateway

n State Synchronization of Tier-1 Gateways

Add a Tier-1 Gateway


A tier-1 gateway is typically connected to a tier-0 gateway in the northbound direction and to
segments in the southbound direction.

If you are adding a tier-1 gateway from Global Manager in NSX Federation, see Add a Tier-1
Gateway from Global Manager.

Tier-0 and tier-1 gateways support the following addressing configurations for all interfaces
(external interfaces, service interfaces and downlinks) in both single tier and multi-tiered
topologies:

n IPv4 only

n IPv6 only

n Dual Stack - both IPv4 and IPv6

To use IPv6 or dual stack addressing, enable IPv4 and IPv6 as the L3 Forwarding Mode in
Networking > Networking Settings > Global Networking Config.
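
The equivalent gateway can also be created through the Policy API before or instead of the UI procedure that follows. This is a minimal sketch with placeholder names; the route advertisement types shown correspond to a subset of the options in step 11, and optional settings such as the NSX Edge cluster path are omitted, so verify the full schema in the NSX API Guide.

PATCH https://<policy-mgr>/policy/api/v1/infra/tier-1s/<tier-1-gateway>

{
    "display_name": "<tier-1-gateway>",
    "tier0_path": "/infra/tier-0s/<tier-0-gateway>",
    "failover_mode": "NON_PREEMPTIVE",
    "route_advertisement_types": [ "TIER1_CONNECTED", "TIER1_STATIC_ROUTES", "TIER1_NAT" ]
}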

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Tier-1 Gateways.

3 Click Add Tier-1 Gateway.

4 Enter a name for the gateway.


5 (Optional) Select a tier-0 gateway to connect to this tier-1 gateway to create a multi-tier
topology.

6 (Optional) Select an NSX Edge cluster if you want this tier-1 gateway to host stateful services
such as NAT, load balancer, or firewall.

If an NSX Edge cluster is selected, a service router will always be created (even if you do not
configure stateful services), affecting the north/south traffic pattern.

7 (Optional) In the Edges field, click Set to select an NSX Edge node.

8 If you selected an NSX Edge cluster, select a failover mode or accept the default.

Option Description

Preemptive If the preferred NSX Edge node fails and recovers, it will preempt its peer and become
the active node. The peer will change its state to standby.

Non-preemptive If the preferred NSX Edge node fails and recovers, it will check if its peer is the active
node. If so, the preferred node will not preempt its peer and will be the standby node.
This is the default option.

9 If you plan to configure a load balancer on this gateway, select an Edges Pool Allocation Size
setting according to the size of the load balancer.

The options are Routing, LB Small, LB Medium, LB Large, and LB XLarge. The default is
Routing and is suitable if no load balancer will be configured on this gateway. This parameter
allows the NSX Manager to place the tier-1 gateway on the Edge nodes in a more intelligent
way. With this setting the number of load balancing and routing functions on each node is
taken into consideration. Note that after you create the gateway, you can change this setting if
you have not configured a load balancer.

10 (Optional) Click the Enable StandBy Relocation toggle to enable or disable standby relocation.

Standby relocation means that if the Edge node where the active or standby logical router is
running fails, a new standby logical router is created on another Edge node to maintain high
availability. If the Edge node that fails is running the active logical router, the original standby
logical router becomes the active logical router and a new standby logical router is created. If
the Edge node that fails is running the standby logical router, the new standby logical router
replaces it.

11 (Optional) Click Route Advertisement.

Select one or more of the following:

n All Static Routes

n All NAT IP's

n All DNS Forwarder Routes

n All LB VIP Routes

n All Connected Segments and Service Ports


n All LB SNAT IP Routes

n All IPSec Local Endpoints

12 Click Save.

13 (Optional) Click Route Advertisement.

a In the Set Route Advertisement Rules field, click Set to add route advertisement rules.

14 (Optional) Click Additional Settings.

a For IPv6, you can select or create an ND Profile and a DAD Profile.

These profiles are used to configure Stateless Address Autoconfiguration (SLAAC) and
Duplicate Address Detection (DAD) for IPv6 addresses.

b Select an Ingress QoS Profile and an Egress QoS Profile for traffic limitations.

These profiles are used to set information rate and burst size for permitted traffic. See Add
a Gateway QoS Profile for more information on creating QoS profiles.
If this gateway is linked to a tier-0 gateway, the Router Links field shows the link addresses.

15 (Optional) Click Service Interfaces and Set to configure connections to segments. Required in
some topologies such as VLAN-backed segments or one-arm load balancing.

a Click Add Interface.

b Enter a name and IP address in CIDR format.

If you configure multicast on this gateway, you must not configure tier-1 addresses as static
RP address in the PIM profile.

c Select a segment.

d In the MTU field, enter a value between 64 and 9000.

e For URPF Mode, you can select Strict or None.

URPF (Unicast Reverse Path Forwarding) is a security feature.

f Add one or more tags.

g In the ND Profile field, select or create a profile.

h Click Save.

i (Optional) After you create an interface, you can download the ARP proxies for the
gateway by clicking the menu icon (three dots) for the interface and selecting Download
ARP Proxies.

You can also download the ARP proxy for a specific interface by expanding a gateway and
then expanding Service Interfaces. Click an interface and click the menu icon (three dots)
and select Download ARP Proxy.


16 (Optional) Click Static Routes and Set to configure static routes. (An equivalent API call is sketched at the end of this procedure.)

a Click Add Static Route.

b Enter a name and a network address in the CIDR or IPv6 CIDR format.

c Click Set Next Hops to add next hop information.

d Click Save.

17 (Optional) Click Multicast and then the toggle to enable multicast.

You must select an Edge cluster for this gateway. Also, this gateway must be linked to a tier-0
gateway that has multicast enabled.

18 (Optional) If the tier-1 gateway is connected to a tier-0 gateway, you can download the ARP
table of the tier-0 gateway. Do the following:

a Click the tier-0 gateway from the Linked Tier-0 Gateway column.

b Click the menu icon (3 dots) and select Download ARP Table.

c Select an edge node.

d Click Download to save the .CSV file.
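
A static route from step 16 can also be created through the Policy API. The following is a minimal sketch; the gateway name, route ID, network, and next-hop address are placeholders.

PATCH https://<policy-mgr>/policy/api/v1/infra/tier-1s/<tier-1-gateway>/static-routes/<route-id>

{
    "display_name": "<route-id>",
    "network": "10.20.0.0/16",
    "next_hops": [
        { "ip_address": "172.16.10.254", "admin_distance": 1 }
    ]
}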

Results

The new gateway is added to the list. For any gateway, you can modify its configurations by
clicking the menu icon (3 dots) and select Edit. To reconfigure service interfaces or static routes,
you do not need to click Edit. You only need to click the expand icon (right arrow) for the gateway,
expand the Service Interfaces or Static Routes section, and click the number that is shown. Note
that the number must be non-zero. If it is zero, you must edit the gateway.

If NSX Federation is configured, this feature of reconfiguring a gateway by clicking on an entity
is applicable to gateways created by the Global Manager (GM) as well. Note that some entities in
a GM-created gateway can be modified by the Local Manager, but others cannot. For example,
Static Routes of a GM-created gateway cannot be modified by the Local Manager. Also, from the
Local Manager, you can edit existing Service Interfaces of a GM-created gateway but you cannot
add an interface.

State Synchronization of Tier-1 Gateways


Connection information of the traffic running on a given tier-1 SR (Service Router) is synchronized
to its peer tier-1 SR in active-standby or stateful active-active HA modes. Note that stateful active-
active mode is only available starting with NSX 4.0.1.1.

Note State synchronization is not supported for TLS Inspection and IDPS.

In NSX 4.0.0.1, note the following about state synchronization:

n State synchronization is supported for Gateway Firewall, Identity Firewall, NAT, IPSec VPN,
DHCP, FQDN analysis, and URL filtering.


n If new sessions were going through a tier-1 SR just before a failover, those sessions might not
have been synchronized on the associated tier-1 SR, which can potentially affect the traffic for
those sessions.

Starting with NSX 4.0.1.1, note the following about state synchronization:

n In active-standby mode, state synchronization is supported for Gateway Firewall, Identity
Firewall, NAT, IPSec VPN, DHCP, FQDN analysis, and URL filtering.

n In active-active mode, state synchronization is supported for Gateway Firewall, Identity
Firewall, NAT, FQDN analysis, and URL filtering. FQDN analysis is only supported with a single
sub-cluster. IPSec VPN is not supported.

n If new sessions were going through a tier-1 SR just before a failover, those sessions might not
have been synchronized on the associated tier-1 SR, which can potentially affect the traffic for
those sessions.

Segments
In NSX, segments are virtual layer 2 domains. A segment was earlier called a logical switch.

There are two types of segments in NSX:

n VLAN-backed segments

n Overlay-backed segments

A VLAN-backed segment is a layer 2 broadcast domain that is implemented as a traditional VLAN
in the physical infrastructure. This means that traffic between two VMs on two different hosts but
attached to the same VLAN-backed segment is carried over a VLAN between the two hosts. The
resulting constraint is that you must provision an appropriate VLAN in the physical infrastructure
for those two VMs to communicate at layer 2 over a VLAN-backed segment.

In an overlay-backed segment, traffic between two VMs on different hosts but attached to the
same overlay segment have their layer 2 traffic carried by a tunnel between the hosts. NSX
instantiates and maintains this IP tunnel without the need for any segment-specific configuration
in the physical infrastructure. As a result, the virtual network infrastructure is decoupled from
the physical network infrastructure. That is, you can create segments dynamically without any
configuration of the physical network infrastructure.

The default number of MAC addresses learned on an overlay-backed segment is 2048. The
default MAC limit per segment can be changed through the API field remote_overlay_mac_limit
in MacLearningSpec. For more information see the MacSwitchingProfile in the NSX API Guide.

This chapter includes the following topics:

n Segment Profiles

n Add a Segment

n Edge Bridging: Extending Overlay Segments to VLAN

n Add a Metadata Proxy Server

n Distributed Port Groups

Segment Profiles
Segment profiles include Layer 2 networking configuration details for segments and segment
ports. NSX Manager supports several types of segment profiles.


The following types of segment profiles are available:

n QoS (Quality of Service)

n IP Discovery

n SpoofGuard

n Segment Security

n MAC Management

Note You cannot edit or delete the default segment profiles. If you require alternate settings
from what is in the default segment profile, you can create a custom segment profile. By default,
all custom segment profiles except the segment security profile inherit the settings of the
appropriate default segment profile. For example, a custom IP discovery segment profile by
default has the same settings as the default IP discovery segment profile.

Each default or custom segment profile has a unique identifier. You use this identifier to associate
the segment profile to a segment or a segment port.

A segment or segment port can be associated with only one segment profile of each type. You
cannot have, for example, two QoS segment profiles associated with a segment or segment port.

If you do not associate a segment profile when you create a segment, then the NSX Manager
associates a corresponding default system-defined segment profile. The child segment ports
inherit the default system-defined segment profile from the parent segment.

When you create or update a segment or segment port, you can choose to associate either a
default or a custom segment profile. When the segment profile is associated or disassociated from
a segment, the segment profile for the child segment ports is applied based on the following
criteria.

n If the parent segment has a profile associated with it, the child segment port inherits the
segment profile from the parent.

n If the parent segment does not have a segment profile associated with it, a default segment
profile is assigned to the segment and the segment port inherits that default segment profile.

n If you explicitly associate a custom profile with a segment port, then this custom profile
overrides the existing segment profile.

Note If you have associated a custom segment profile with a segment, but want to retain the
default segment profile for one of the child segment ports, then you must make a copy of the
default segment profile and associate it with the specific segment port.

You cannot delete a custom segment profile if it is associated to a segment or a segment port. You
can find out whether any segments and segment ports are associated with the custom segment
profile by going to the Assigned To section of the Summary view and clicking on the listed
segments and segment ports.


Understanding QoS Segment Profile


QoS provides high-quality and dedicated network performance for preferred traffic that requires
high bandwidth. The QoS mechanism does this by prioritizing sufficient bandwidth, controlling
latency and jitter, and reducing data loss for preferred packets even when there is a network
congestion. This level of network service is provided by using the existing network resources
efficiently.

For this release, shaping and traffic marking, namely CoS and DSCP, are supported. The Layer 2
Class of Service (CoS) allows you to specify priority for data packets when traffic is buffered in
the segment due to congestion. The Layer 3 Differentiated Services Code Point (DSCP) detects
packets based on their DSCP values. CoS is always applied to the data packet irrespective of the
trusted mode.

NSX either trusts the DSCP setting applied by a virtual machine or modifies and sets the DSCP
value at the segment level. In each case, the DSCP value is propagated to the outer IP header of
encapsulated frames. This enables the external physical network to prioritize the traffic based on
the DSCP setting on the external header. When DSCP is in the trusted mode, the DSCP value is
copied from the inner header. When in the untrusted mode, the DSCP value is not preserved for
the inner header.

Note DSCP settings work only on tunneled traffic. These settings do not apply to traffic inside the
same hypervisor.

You can use the QoS switching profile to configure the average ingress and egress bandwidth
values to set the transmit limit rate. The peak bandwidth rate is used to support the burst traffic
that a segment is allowed to carry, which helps prevent congestion on the northbound network
links. These settings do not guarantee the bandwidth but help limit the use of network bandwidth.
The actual bandwidth you will observe is determined by the link speed of the port or the values in
the switching profile, whichever is lower.

The QoS switching profile settings are applied to the segment and inherited by the child segment
port.

Create a QoS Segment Profile


You can define the DSCP value and configure the ingress and egress settings to create a custom
QoS switching profile.

Prerequisites

n Familiarize yourself with the QoS segment profile concept. See Understanding QoS Segment
Profile.

n Identify the network traffic you want to prioritize.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Segments > Segment Profiles.


3 Click Add Segment Profile and select QoS.

4 Complete the QoS switching profile details.

Option Description

Name Name of the profile.

Mode Select either a Trusted or Untrusted option from the Mode drop-down menu.
When you select the Trusted mode the inner header DSCP value is applied
to the outer IP header for IP/IPv6 traffic. For non IP/IPv6 traffic, the outer
IP header takes the default value. Trusted mode is supported on an overlay-
based logical port. The default value is 0.
Untrusted mode is supported on overlay-based and VLAN-based logical
port. For the overlay-based logical port, the DSCP value of the outbound
IP header is set to the configured value irrespective to the inner packet type
for the logical port. For the VLAN-based logical port, the DSCP value of IP/
IPv6 packet will be set to the configured value. The DSCP values range for
untrusted mode is between 0 to 63.

Note DSCP settings work only on tunneled traffic. These settings do not
apply to traffic inside the same hypervisor.

Priority Set the DSCP priority value.


The priority values range from 0 to 63.

Class of Service Set the CoS value.


CoS is supported on VLAN-based logical port. CoS groups similar types of
traffic in the network and each type of traffic is treated as a class with its own
level of service priority. The lower priority traffic is slowed down or in some
cases dropped to provide better throughput for higher priority traffic. CoS
can also be configured for the VLAN ID with zero packet.
The CoS values range from 0 to 7, where 0 is the best effort service.

Ingress Set custom values for the outbound network traffic from the VM to the logical
network.
You can use the average bandwidth to reduce network congestion. The peak
bandwidth rate is used to support burst traffic and the burst size is based
on the duration with peak bandwidth. You set burst duration in the burst
size setting. You cannot guarantee the bandwidth. However, you can use the
Average, Peak, and Burst Size settings to limit network bandwidth.
For example, if the average bandwidth is 30 Mbps, peak bandwidth is 60
Mbps, and the allowed duration is 0.1 second, then the burst size is 60 *
1000000 * 0.10/8 = 750000 Bytes.
The default value 0 disables rate limiting on the ingress traffic.

Ingress Broadcast Set custom values for the outbound network traffic from the VM to the logical
network based on broadcast.
For example, when you set the average bandwidth for a logical switch to
3000 Kbps, peak bandwidth is 6000 Kbps, and the allowed duration is 0.1
second, then the burst size is 6000 * 1000 * 0.10/8 = 75000 Bytes.
The default value 0 disables rate limiting on the ingress broadcast traffic.

Egress Set custom values for the inbound network traffic from the logical network to
the VM.
The default value 0 disables rate limiting on the egress traffic.


If the ingress, ingress broadcast, and egress options are not configured, the default values are
used.

5 Click Save.
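
A custom QoS profile can also be created through the Policy API. The following is a minimal sketch that sets only the CoS value, DSCP mode, and DSCP priority; the profile ID is a placeholder and the shaper (ingress, ingress broadcast, and egress rate) settings are omitted, so verify the full schema in the NSX API Guide.

PATCH https://<policy-mgr>/policy/api/v1/infra/qos-profiles/<qos-profile-id>

{
    "display_name": "<qos-profile-id>",
    "class_of_service": 0,
    "dscp": {
        "mode": "UNTRUSTED",
        "priority": 26
    }
}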

Understanding IP Discovery Segment Profile


IP Discovery uses DHCP and DHCPv6 snooping, ARP (Address Resolution Protocol) snooping, ND
(Neighbor Discovery) snooping, and VM Tools to learn MAC and IP addresses.

Note IP discovery methods for IPv6 are disabled in the default IP discovery segment profile. To
enable IP discovery for IPv6 for segments, you must create an IP discovery profile with the IPv6
options enabled and attach the profile to the segments. In addition, make sure that distributed
firewall allows IPv6 Neighbor Discovery packets between all workloads (allowed by default).

The discovered MAC and IP addresses are used to achieve ARP/ND suppression, which minimizes
traffic between VMs connected to the same segment. The number of IPs in the ARP/ND
suppression cache for any given port is determined by the settings in the port's IP Discovery
profile. The relevant settings are ARP Binding Limit, ND Snooping Limit, Duplicate IP Detection,
ARP ND Binding Limit Timeout, and Trust on First Use (TOFU).

The discovered MAC and IP addresses are also used by the SpoofGuard and distributed firewall
(DFW) components. DFW uses the address bindings to determine the IP address of objects in
firewall rules.

DHCP/DHCPv6 snooping inspects the DHCP/DHCPv6 packets exchanged between the DHCP/
DHCPv6 client and server to learn the IP and MAC addresses.

ARP snooping inspects the outgoing ARP and GARP (gratuitous ARP) packets of a VM to learn the
IP and MAC addresses.

VM Tools is software that runs on an ESXi-hosted VM and can provide the VM's configuration
information including MAC and IP or IPv6 addresses. This IP discovery method is available for VMs
running on ESXi hosts only.

ND snooping is the IPv6 equivalent of ARP snooping. It inspects neighbor solicitation (NS) and
neighbor advertisement (NA) messages to learn the IP and MAC addresses.

Duplicate address detection checks whether a newly discovered IP address is already present
on the realized binding list for a different port. This check is performed for ports on the same
segment. If a duplicate address is detected, the newly discovered address is added to the
discovered list, but is not added to the realized binding list. All duplicate IPs have an associated
discovery timestamp. If the IP that is on the realized binding list is removed, either by adding it
to the ignore binding list or by disabling snooping, the duplicate IP with the oldest timestamp is
moved to the realized binding list. The duplicate address information is available through an API
call.


By default, the discovery methods ARP snooping and ND snooping operate in a mode called trust
on first use (TOFU). In TOFU mode, when an address is discovered and added to the realized
bindings list, that binding remains in the realized list forever. TOFU applies to the first 'n' unique
<IP, MAC, VLAN> bindings discovered using ARP/ND snooping, where 'n' is the binding limit that
you can configure. You can disable TOFU for ARP/ND snooping. The methods will then operate
in trust on every use (TOEU) mode. In TOEU mode, when an address is discovered, it is added to
the realized bindings list and when it is deleted or expired, it is removed from the realized bindings
list. DHCP snooping and VM Tools always operate in TOEU mode.

Note TOFU is not the same as SpoofGuard, and it does not block traffic in the same way as
SpoofGuard. For more information, see Understanding SpoofGuard Segment Profile.

For Linux VMs, the ARP flux problem might cause ARP snooping to obtain incorrect information.
The problem can be prevented with an ARP filter. For more information, see
http://linux-ip.net/html/ether-arp.html#ether-arp-flux.

For each port, NSX Manager maintains an ignore bindings list, which contains IP addresses that
cannot be bound to the port. If you navigate to Networking > Logical Switches > Ports in
Manager mode and select a port, you can add discovered bindings to the ignore bindings list.
You can also delete an existing discovered or realized binding by copying it to Ignore Bindings.

Create an IP Discovery Segment Profile


NSX has several default IP Discovery segment profiles. You can also create additional ones.

IP Discovery is the central infrastructure in NSX that determines the set of IP addresses that
are associated with a port in the system. IP Discovery policies are applied through the Segment IP
Discovery Profile, which is configurable from the Policy Manager. It can be associated with a
segment, segment port, or a group. See Configure IP Discovery Segment Profile on Groups. When
a segment or segment port is created, it is initially assigned a Default Segment IP Discovery Profile
with a predefined set of policies.

Prerequisites

Familiarize yourself with the IP Discovery segment profile concepts. See Understanding IP
Discovery Segment Profile.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Segments > Segment Profiles.

3 Click Add Segment Profile and select IP Discovery.

4 Specify the IP Discovery segment profile details.

Option Description

Name Enter a name.

ARP Snooping For an IPv4 environment. Applicable if VMs have static IP addresses.


ARP Binding Limit The maximum number of IPv4 IP addresses that can be bound to a port. The
minimum value allowed is 1 and the maximum is 256. The default is 1.

ARP ND Binding Limit Timeout The timeout value, in minutes, for IP addresses in the ARP/ND binding table
if TOFU is disabled. If an address times out, a newly discovered address
replaces it.

DHCP Snooping For an IPv4 environment. Applicable if VMs have IPv4 addresses.

DHCP Snooping - IPv6 For an IPv6 environment. Applicable if VMs have IPv6 addresses.

VM Tools Available for ESXi-hosted VMs only.

VM Tools - IPv6 Available for ESXi-hosted VMs only.

ND Snooping For an IPv6 environment. Applicable if VMs have static IP addresses.

ND Snooping Limit The maximum number of IPv6 addresses that can be bound to a port.

Trust on First Use Applicable to ARP and ND snooping.

Duplicate IP Detection For all snooping methods and both IPv4 and IPv6 environments.

5 Click Save.
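
You can also create the profile through the Policy API, using the same /infra/ip-discovery-profiles path family that appears in the binding-map example later in this section. The sketch below enables ARP and DHCP snooping for IPv4 and ND snooping for IPv6; the profile ID and the exact option field names are assumptions to verify against the NSX API Guide.

PATCH https://<policy-mgr>/policy/api/v1/infra/ip-discovery-profiles/ip-discovery-custom-profile-1

{
    "display_name": "ip-discovery-custom-profile-1",
    "ip_v4_discovery_options": {
        "arp_snooping_config": {
            "arp_snooping_enabled": true,
            "arp_binding_limit": 1
        },
        "dhcp_snooping_enabled": true,
        "vmtools_enabled": true
    },
    "ip_v6_discovery_options": {
        "nd_snooping_config": {
            "nd_snooping_enabled": true,
            "nd_snooping_limit": 3
        },
        "dhcp_snooping_v6_enabled": false,
        "vmtools_v6_enabled": false
    },
    "duplicate_ip_detection": {
        "duplicate_ip_detection_enabled": false
    }
}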

Configure IP Discovery Segment Profile on Groups


Configuring IP Discovery segment profiles on a group allows a Security Administrator to configure
IP discovery profile parameters, and apply them to group members.

The following static and dynamic group members are supported:

n Segment

n Segment Port

n VM

n Groups

n Mix of the above

Profiles on groups only apply if the default profile is applied to the segment or segment port:

Custom Group Profile | Custom Profile on Segment (S) and Segment Port (SP) | Effective Profile on Port
Custom               | Default (S), Default (SP)                           | Custom
Custom 1             | Default (S), Custom 2 (SP)                          | Custom 2
Custom 1             | Custom 2 (S), Default (SP)                          | Custom 2
Custom 1             | Custom 2 (S), Custom 3 (SP)                         | Custom 3

Each time a profile is applied to a group a sequence number is specified. If a member is present in
multiple groups, the group with the lower sequence number has higher priority.

Discovery Profile Binding Map API


Method                  | API                                                                                            | Resource Type
PUT, PATCH, GET, DELETE | /infra/domains/<domain-id>/groups/<group-id>/discovery-profile-binding-maps/<binding-map-id> | DiscoveryProfileBindingMap
GET                     | /infra/domains/<domain-id>/groups/<group-id>/discovery-profile-binding-maps                   | DiscoveryProfileBindingMapListResult

Parameters for DiscoveryProfileBindingMap

Field           | Type        | Description
profile_path    | Policy Path | Required
sequence_number | Integer     | Required. The sequence number is used to resolve conflicts when two profiles are applied to the same segment or segment port. The lower sequence number has higher precedence.

API for Segments and Ports

Method: GET
Resource Type: EffectiveProfilesResponse
API:
n /infra/tier-1s/<tier-1-id>/segments/<segment-id>/effective-profiles
n /infra/segments/<segment-id>/effective-profiles
n /infra/segments/<segment-id>/ports/<port-id>

Example Request
POST https://{{policy-ip}}/policy/api/v1/infra/domains/default/groups/TestGroup/discovery-profile-binding-maps/ipdmap

{
"profile_path" : "/infra/ip-discovery-profiles/ip-discovery-custom-profile-1",
"sequence_number" :"10"
}

Understanding SpoofGuard Segment Profile


SpoofGuard helps prevent a form of malicious attack called "web spoofing" or "phishing." A
SpoofGuard policy blocks traffic determined to be spoofed.


SpoofGuard is a tool that is designed to prevent virtual machines in your environment from
sending traffic with an IP address it is not authorized to send traffic from. In the instance that a
virtual machine’s IP address does not match the IP address on the corresponding logical port and
segment address binding in SpoofGuard, the virtual machine’s vNIC is prevented from accessing
the network entirely. SpoofGuard can be configured at the port or segment level. There are
several reasons SpoofGuard might be used in your environment:

n Preventing a rogue virtual machine from assuming the IP address of an existing VM.

n Ensuring the IP addresses of virtual machines cannot be altered without intervention – in some
environments, it’s preferable that virtual machines cannot alter their IP addresses without
proper change control review. SpoofGuard facilitates this by ensuring that the virtual machine
owner cannot simply alter the IP address and continue working unimpeded.

n Guaranteeing that distributed firewall (DFW) rules will not be inadvertently (or deliberately)
bypassed – for DFW rules created utilizing IP sets as sources or destinations, the possibility
always exists that a virtual machine could have its IP address forged in the packet header,
thereby bypassing the rules in question.

NSX SpoofGuard configuration covers the following:

n MAC SpoofGuard - authenticates MAC address of packet

n IP SpoofGuard - authenticates MAC and IP addresses of packet

n Dynamic Address Resolution Protocol (ARP) inspection, that is, ARP and Gratuitous Address
Resolution Protocol (GARP) SpoofGuard and Neighbor Discovery (ND) SpoofGuard validation
are all against the MAC source, IP Source and IP-MAC source mapping in the ARP/GARP/ND
payload.

At the port level, the allowed MAC/VLAN/IP allow-list is provided through the Address Bindings
property of the port. When the virtual machine sends traffic, it is dropped if its IP/MAC/VLAN
does not match the IP/MAC/VLAN properties of the port. Port-level SpoofGuard deals with
traffic authentication, that is, whether the traffic is consistent with the VIF configuration.

At the segment level, the allowed MAC/VLAN/IP allow-list is provided through the Address
Bindings property of the segment. This is typically an allowed IP range/subnet for the segment
and the segment level SpoofGuard deals with traffic authorization.

Traffic must be permitted by both port-level and segment-level SpoofGuard before it is allowed
into the segment. Enabling or disabling port-level and segment-level SpoofGuard is controlled
using the SpoofGuard segment profile.

Create a SpoofGuard Segment Profile


When SpoofGuard is configured, if the IP address of a virtual machine changes, traffic from
the virtual machine may be blocked until the corresponding configured port/segment address
bindings are updated with the new IP address.

Enable SpoofGuard for the port group(s) containing the guests. When enabled for each network
adapter, SpoofGuard inspects packets for the prescribed MAC and its corresponding IP address.


Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Segments > Segment Profiles.

3 Click Add Segment Profile and select Spoof Guard.

4 Enter a name.

5 To enable port level SpoofGuard, set Port Bindings to Enabled.

6 Click Save.

Understanding Segment Security Segment Profile


Segment security provides stateless Layer 2 and Layer 3 security by checking the ingress traffic to
the segment and dropping unauthorized packets sent from VMs by matching the IP address, MAC
address, and protocols to a set of allowed addresses and protocols. You can use segment security
to protect the segment integrity by filtering out malicious attacks from the VMs in the network.

Note that the default segment security profile has the DHCP settings Server Block and Server
Block - IPv6 enabled. This means that a segment that uses the default segment security profile
will block traffic from a DHCP server to a DHCP client. If you want a segment that allows DHCP
server traffic, you must create a custom segment security profile for the segment.

Create a Segment Security Segment Profile


You can create a custom segment security segment profile with MAC destination addresses from
the allowed BPDU list and configure rate limiting.

Prerequisites

Familiarize yourself with the segment security segment profile concept. See Understanding
Segment Security Segment Profile.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Segments > Segment Profiles.

3 Click Add Segment Profile and select Segment Security.

4 Complete the segment security profile details.

Option Description

Name Name of the profile.

BPDU Filter Toggle the BPDU Filter button to enable BPDU filtering. Disabled by default.
When the BPDU filter is enabled, all of the traffic to BPDU destination MAC
address is blocked. The BPDU filter when enabled also disables STP on the
logical switch ports because these ports are not expected to take part in STP.


BPDU Filter Allow List Click the destination MAC address from the BPDU destination MAC
addresses list to allow traffic to the permitted destination. You must enable
BPDU Filter to be able to select from this list.

DHCP Filter Toggle the Server Block button and Client Block button to enable DHCP
filtering. Both are disabled by default.
DHCP Server Block blocks traffic from a DHCP server to a DHCP client. Note
that it does not block traffic from a DHCP server to a DHCP relay agent.
DHCP Client Block prevents a VM from acquiring a DHCP IP address by
blocking DHCP requests.

DHCPv6 Filter Toggle the Server Block - IPv6 button and Client Block - IPv6 button to
enable DHCP filtering. Both are disabled by default.
DHCPv6 Server Block blocks traffic from a DHCPv6 server to a DHCPv6
client. Note that it does not block traffic from a DHCP server to a DHCP relay
agent. Packets whose UDP source port number is 547 are filtered.
DHCPv6 Client Block prevents a VM from acquiring a DHCP IP address by
blocking DHCP requests. Packets whose UDP source port number is 546 are
filtered.

Block Non-IP Traffic Toggle the Block Non-IP Traffic button to allow only IPv4, IPv6, ARP, and
BPDU traffic.
The rest of the non-IP traffic is blocked. The permitted IPv4, IPv6, ARP,
GARP and BPDU traffic is based on other policies set in address binding and
SpoofGuard configuration.
By default, this option is disabled to allow non-IP traffic to be handled as
regular traffic.

RA Guard Toggle the RA Guard button to filter out ingress IPv6 router advertisements.
ICMPv6 type 134 packets are filtered out. This option is enabled by default.

Rate Limits Set a rate limit for broadcast and multicast traffic. This option is enabled by
default.
Rate limits can be used to protect the logical switch or VMs from events such
as broadcast storms.
To avoid any connectivity problems, the minimum rate limit value must be >=
10 pps.

5 Click Save.

Understanding MAC Discovery Segment Profile


The MAC discovery segment profile supports two functionalities: MAC learning and MAC address
change.

The MAC address change feature allows a VM to change its MAC address. A VM connected to a
port can run an administrative command to change the MAC address of its vNIC and still send and
receive traffic on that vNIC. This feature is supported on ESXi only. In the default MAC discovery
segment profile, this property is enabled.


MAC learning provides network connectivity to deployments where multiple MAC addresses get
configured behind one vNIC, for example, in a nested hypervisor deployment where an ESXi VM
runs on an ESXi host and multiple VMs run inside the ESXi VM. Without MAC learning, when the
vNIC of the ESXi VM connects to a segment port, its MAC address is static. VMs running inside
the ESXi VM do not have network connectivity because their packets have different source MAC
addresses. With MAC learning, the vSwitch inspects the source MAC address of every packet
coming from the vNIC, learns the MAC address and allows the packet to proceed. If a MAC
address that is learned is not used for a certain period of time, it is removed. This time period
is not configurable. The field MAC Learning Aging Time displays the pre-defined value, which is
600.

MAC learning also supports unknown unicast flooding. Normally, when a packet that is received
by a port has an unknown destination MAC address, the packet is dropped. With unknown unicast
flooding enabled, the port floods unknown unicast traffic to every port on the switch that has MAC
learning and unknown unicast flooding enabled. This property is enabled by default, but only if
MAC learning is enabled.

The number of MAC addresses that can be learned is configurable. The maximum value is 4096,
which is the default. You can also set the policy for when the limit is reached. The options are:

n Drop - Packets from an unknown source MAC address are dropped. Packets inbound to this
MAC address will be treated as unknown unicast. The port will receive the packets only if it has
unknown unicast flooding enabled.

n Allow - Packets from an unknown source MAC address are forwarded although the address
will not be learned. Packets inbound to this MAC address will be treated as unknown unicast.
The port will receive the packets only if it has unknown unicast flooding enabled.

If you enable MAC learning or MAC address change, to improve security, configure SpoofGuard as
well.

Create a MAC Discovery Segment Profile


You can create a MAC discovery segment profile to manage MAC addresses.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Segments > Segment Profiles.

3 Click Add Segment Profile and select MAC Discovery.

4 Complete the MAC discovery profile details.

Option Description

Name Name of the profile.

MAC Change Enable or disable the MAC address change feature. The default is disabled.

MAC Learning Enable or disable the MAC learning feature. The default is disabled.


MAC Limit Policy Select Allow or Drop. The default is Allow. This option is available if you
enable MAC learning.

Unknown Unicast Flooding Enable or disable the unknown unicast flooding feature. The default is
enabled. This option is available if you enable MAC learning.

MAC Limit Set the maximum number of MAC addresses. The default is 4096. This option
is available if you enable MAC learning.

MAC Learning Aging Time For information only. This option is not configurable. The pre-defined value is
600.

5 Click Save.

Add a Segment
You can add two kinds of segments: overlay-backed segments and VLAN-backed segments.

Segments are created as part of a transport zone. There are two types of transport zones: VLAN
transport zones and overlay transport zones. A segment created in a VLAN transport zone is a
VLAN-backed segment, and a segment created in an overlay transport zone is an overlay-backed
segment.
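
Segments can also be created through the Policy API. The following is a minimal sketch of an overlay-backed segment attached to a tier-1 gateway; the segment name, gateway, transport zone path, and subnet are placeholders, and for a VLAN-backed segment you would supply vlan_ids and a VLAN transport zone instead.

PATCH https://<policy-mgr>/policy/api/v1/infra/segments/<segment-id>

{
    "display_name": "<segment-id>",
    "connectivity_path": "/infra/tier-1s/<tier-1-gateway>",
    "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<overlay-tz-id>",
    "subnets": [
        { "gateway_address": "172.16.10.1/24" }
    ]
}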

n DHCPv4 relay is supported on a VLAN-backed segment through the Service Interface. Only
one DHCP v4 relay or service is supported on a segment.

n For a standalone segment that is not connected to a gateway, only Segment DHCP server is
supported.

n For a VLAN segment requiring DHCP server, only Segment DHCP server is supported.
Gateway DHCP is not supported on VLAN.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Segments.

3 Click Add Segment.

4 Enter a name for the segment.


5 Select the type of connectivity for the segment.

Connectivity Description

None Select this option when you do not want to connect the segment to any
upstream gateway (tier-0 or tier-1). Typically, you want to add a standalone
segment in the following scenarios:
n When you want to create a local testing environment for users that are
running workloads on the same subnet.
n When east-west connectivity with users on the other subnets is not
necessary.
n When north-south connectivity to users outside the data center is not
necessary.
n When you want to configure layer 2 bridging or guest VLAN tagging.

Tier-1 Select this option when you want to connect the segment to a tier-1 gateway.

Tier-0 Select this option when you want to connect the segment to a tier-0 gateway.

Note You can change the connectivity of a gateway-connected segment from one gateway
to another gateway (same or different gateway type). In addition, you can change the
connectivity of segment from "None" to a tier-0 or tier-1 gateway. The segment connectivity
changes are permitted only when the gateways and the connected segments are in the same
transport zone. However, if the segment has DHCP configured on it, some restrictions and
caveats apply on changing the segment connectivity. For more information, see Scenarios:
Impact of Changing Segment Connectivity on DHCP.

6 Enter the Gateway IP address of the subnet in CIDR format. A segment can contain an IPv4 subnet, an IPv6 subnet, or both.

n If a segment is not connected to a gateway, a subnet is optional.

n If a segment is connected to either a tier-1 or a tier-0 gateway, a subnet is required.

Subnets of one segment must not overlap with the subnets of other segments in your network.
A segment is always associated with a single virtual network identifier (VNI) regardless of
whether it is configured with one subnet, two subnets, or no subnet.

7 Select a transport zone, which can be an overlay or a VLAN.

To create a VLAN-backed segment, add the segment in a VLAN transport zone. Similarly, to
create an overlay-backed segment, add the segment in an overlay transport zone.

8 (Optional) To configure DHCP on the segment, click Set DHCP Config.

For a detailed information about DHCP configuration, see Configure DHCP Service.

9 If the transport zone is of type VLAN, specify a list of VLAN IDs. If the transport zone is of type Overlay and you want to support layer 2 bridging or guest VLAN tagging, specify a list of VLAN IDs or VLAN ranges.


10 (Optional) Select an uplink teaming policy for the segment.

This drop-down menu displays the named teaming policies, if you have added them in the
VLAN transport zone. If no uplink teaming policy is selected, the default teaming policy is
used.

n Named teaming policies are not applicable to overlay segments. Overlay segments always
follow the default teaming policy.

n For VLAN-backed segments, you have the flexibility to override the default teaming policy
with a selected named teaming policy. This capability is provided so that you can steer the
infrastructure traffic from the host to specific VLAN segments in the VLAN transport zone.
Before adding the VLAN segment, ensure that the named teaming policy names are added
in the VLAN transport zone.

11 (Optional) Enter the fully qualified domain name.

DHCPv4 server and DHCPv4 static bindings on the segment automatically inherit the domain
name from the segment configuration as the Domain Name option.

12 If you want to use Layer 2 VPN to extend the segment, click the L2 VPN text box and select an
L2 VPN server or client session.

You can select more than one.

13 In VPN Tunnel ID, enter a unique value that is used to identify the segment.

14 In the Metadata Proxy field, select a metadata proxy from the drop-down list, or click the
menu icon (3 dots) to create one.

15 Click Save.

16 To add segment ports, click Yes when prompted if you want to continue configuring the
segment.

a Click Set from the Ports / Interfaces column.

b Click Add Segment Port.

c Enter a port name.

d For ID, enter the VIF UUID of the VM or server that connects to this port.

e Select a type: Child, or Static.

Leave this text box blank except for use cases such as containers or VMware HCX. If this
port is for a container in a VM, select Child. If this port is for a bare metal container or
server, select Static.

f Enter a context ID.

Enter the parent VIF ID if Type is Child, or transport node ID if Type is Static.

g Enter a traffic tag.

Enter the VLAN ID in container and other use cases.


h Select an address allocation method: IP Pool, MAC Pool, Both, or None.

i Specify tags.

j Apply address binding by specifying the IP (IPv4 address, IPv4 subnet, IPv6 address, or
IPv6 subnet) and MAC address of the logical port to which you want to apply address
binding. For example, for IPv6, 2001::/64 is an IPv6 subnet, 2001::1 is a host IP, whereas
2001::1/64 is an invalid input. You can also specify a VLAN ID.

Manual address bindings, if specified, override the auto discovered address bindings.

k Select segment profiles for this port.

17 To select segment profiles, click Segment Profiles.

18 Click Save.

19 (Optional) You can click the menu icon (3 dots) for the following options to save specific
information about the segment in a CSV file:

n Download MAC Table: Select the source which can be the Central Control Plane or a
specific transport node for the associated MAC addresses.

n Download VTEP Table: Select the source which can be the Central Control Plane or a
specific transport node for the associated VTEPs.

n Download ARP Table: Select the edge node to save the ARP table.

Note This option is only available if the segment is connected to a gateway.

n Download ARP Proxy: Save the aggregate of the ARP proxy for the segment.

Note This option is only available if the segment is connected to a gateway.

20 (Optional) You can view more information about the segment by expanding the segment and
clicking the following options on the right:

n View Statistics: Contains the following tabs:

n Local Port: Displays the traffic details for the local port.

n Interface Statistics: Displays the data details for specific edge nodes.

Note This tab is only available if the segment is connected to the gateway.

n DHCP Statistics: Displays the DHCP server packet counts and DHCP pool usage
statistics. This tab is available only when you have configured Segment DHCP server
on the segment.

If you have configured both DHCPv4 and DHCPv6 servers on a segment, the DHCP
Statistics tab will display only the DHCPv4 packet counts and the DHCPv4 pool usage
statistics. DHCPv6 packet counts and DHCPv6 pool usage statistics are currently not
supported.

n View Related Groups: Displays the groups associated with the segment.


n View DAD Status: Displays Duplicate Address Detection (DAD) status for the segment.

Note This tab is only available if the segment is connected to the gateway.

Results

The new segment is added to the list. For any segment, you can modify its configuration by clicking the menu icon (3 dots) and selecting Edit. To reconfigure ports, you do not need to click Edit. You only need to click the expand icon (right arrow) for the segment and click the number in the Ports column. Note that the number must be non-zero. If it is zero, you must edit the segment.

If NSX Federation is configured, this method of reconfiguring a segment also applies to segments created by the Global Manager (GM). Note that you create ports for a GM-created segment from the Local Manager, because you cannot create segment ports from the Global Manager.
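
The same segment configuration can also be expressed through the NSX Policy REST API. The sketch below is a minimal illustration and not part of the procedure; the manager address, credentials, gateway path, transport zone path, and subnet are placeholders, and the Segment field names should be checked against the API reference for your release.

import requests

NSX_MANAGER = "nsx-mgr.example.com"   # placeholder
AUTH = ("admin", "password")          # placeholder

segment_id = "app-segment-01"         # hypothetical segment ID

payload = {
    "display_name": "app-segment-01",
    # Connectivity: path of a tier-1 gateway (use a tier-0 path, or omit for "None").
    "connectivity_path": "/infra/tier-1s/t1-gw-01",
    # Transport zone path (overlay or VLAN), as listed under the default enforcement point.
    "transport_zone_path": "/infra/sites/default/enforcement-points/default/"
                           "transport-zones/OVERLAY-TZ-UUID",
    # Gateway IP address of the subnet in CIDR format.
    "subnets": [{"gateway_address": "172.16.10.1/24"}],
}

resp = requests.patch(
    f"https://{NSX_MANAGER}/policy/api/v1/infra/segments/{segment_id}",
    json=payload, auth=AUTH, verify=False)
resp.raise_for_status()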

Edge Bridging: Extending Overlay Segments to VLAN


Workloads attached to overlay segments typically communicate at layer 3 with physical devices
outside of the NSX domain, through tier-0 gateways instantiated on NSX Edge. However, there
are some scenarios where layer 2 connectivity is required between virtual machines in NSX and
physical devices.

Some examples are:

n Migration from physical to virtual, or virtual to virtual.

n Integration of a physical appliance that provides services to a segment, like an external load
balancer.

n Connection to a database server that requires layer 2 adjacency to its virtual machine clients.

For that purpose, on top of the gateway service, NSX Edge can also run a bridge service. The following diagram represents those two options: the virtual machine in the bottom left corner has layer 3 connectivity through a gateway to the physical host, and layer 2 connectivity through a bridge to the database server. It is possible to both route and bridge a segment. In fact, it is possible to use the tier-0 gateway in this diagram as a default gateway for the database server.


Figure 4-1. NSX VM Bridge and Gateway Communication

[Figure: a tier-0 gateway and a bridge on NSX Edge connect the overlay-backed NSX domain to the VLAN-backed physical infrastructure; the virtual machine reaches the physical host at layer 3 through the tier-0 gateway and reaches the database server at layer 2 through the bridge.]

The NSX Edge bridge, like the gateway, is supported for long term deployments, even if it is often
used as a temporary solution during migrations.

The bridge functionality extends an overlay segment into a VLAN, identified by a VLAN ID on an uplink of the NSX Edge where the bridge is running. Typically, two redundant active and standby bridges are deployed on separate edges that are part of the same edge cluster. Active/active redundancy is not possible. Setting up the bridge functionality involves the following configuration steps:

n Make sure that the NSX Edge is suitable for hosting the bridge service. The bridge adds a few
constraints to the deployment of an edge in a VM form factor.

n Identify the NSX Edges that run the bridge service. A bridge profile statically designates
the edge responsible for running the active bridge and optionally designates a second edge
hosting the standby bridge.

n Lastly, associate an overlay segment with a VLAN ID or IDs and a bridge profile. This results in the creation of the appropriate active/standby bridges on the edges specified in the bridge profile, which extend the overlay segment at layer 2 to the VLAN or VLANs identified by the VLAN IDs.


Configure an Edge VM for Bridging


There are no specific constraints to configure bridging on a bare metal edge. However, if you
are planning to run a bridge on an NSX Edge VM, use this section to understand the specific
configuration to perform in the vSphere infrastructure.

As an example, our scenario includes two virtual machines, VM1 and VM2, on transport node ESXi 1 attached to an overlay segment S. The VMs can communicate at layer 2 with the physical host on the right side of the diagram thanks to a bridge instantiated on the edge VM running on ESXi host 2. The TEP (tunnel end point) on ESXi 1 encapsulates the traffic from VM1/VM2 and forwards it to the TEP of the edge VM. The bridge then decapsulates the traffic and sends it tagged with VLAN ID 10 on its VLAN uplink, and the traffic gets switched to the physical host.

Figure 4-2. Edge VM Bridging

[Figure: VM1 and VM2 (MAC addresses mac1 and mac2) on ESXi 1 attach to overlay segment S; the edge VM on ESXi 2 runs the bridge, whose VLAN uplink connects through port p on distributed portgroup dvpg1 (VLAN 10) to the physical host.]

As you can see in the diagram, the VLAN uplink of the bridge is linked to port p, which is attached to distributed portgroup dvpg1. Port p injects traffic into dvpg1 with the source MAC addresses mac1 and mac2 of virtual machines VM1 and VM2. Port p must also accept traffic with destination MAC addresses mac1 and mac2, so that the physical host can reach VM1 and VM2. When a bridge is running on the edge VM, the port of this edge VM behaves in a non-standard way as far as the vSphere switching infrastructure is concerned. That means that dvpg1 needs some additional configuration to accommodate the edge VM. The following sections list the different options based on your environment.

Option 1: Edge VM is on a VSS portgroup


This option is for when the Edge VM is connected to a VSS (vSphere Standard Switch). You must
enable promiscuous mode and forged transmit.

n Set promiscuous mode on the portgroup.

n Allow forged transmit on the portgroup.


n Run the following command to enable reverse filter on the ESXi host where the Edge VM is
running:

esxcli system settings advanced set -o /Net/ReversePathFwdCheckPromisc -i 1

Then disable and enable promiscuous mode on the portgroup with the following steps:

n Edit the portgroup's settings.

n Disable promiscuous mode and save the settings.

n Edit the portgroup's settings again.

n Enable promiscuous mode and save the settings.

n Do not have other port groups in promiscuous mode on the same host sharing the same set of
VLANs.

n Avoid running other VMs attached to the portgroup in promiscuous mode on the same host, as the traffic gets replicated to all those VMs and affects performance.

Option 2a: Edge VM is on a VDS 6.6.0 (or later) portgroup


This option is for when the Edge VM is connected to a VDS (vSphere Distributed Switch). You
must be running ESXi 6.7 or later, and VDS 6.6.0 or later.

n Enable MAC learning with the option “allow unicast flooding” on the distributed portgroup.

Starting with vSphere 8.0, you can enable the MAC Learning option in the distributed portgroup configuration UI. For previous releases, you need to use the VIM API DVSMacLearningPolicy and set allowUnicastFlooding to true, as sketched after this option. For an example of how to enable MAC learning, see https://williamlam.com/2018/04/native-mac-learning-in-vsphere-6-7-removes-the-need-for-promiscuous-mode-for-nested-esxi.html.
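
The following Python (pyVmomi) sketch shows one way this reconfiguration might look. It is illustrative only: the vCenter address, credentials, and portgroup name are placeholders, and you should verify the property names against the vSphere API reference for your vCenter version before using it.

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

# Placeholders: vCenter address, credentials, and the distributed portgroup to reconfigure.
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())

def find_dvpg(content, name):
    # Walk the inventory for a DistributedVirtualPortgroup with the given name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
    try:
        return next((pg for pg in view.view if pg.name == name), None)
    finally:
        view.Destroy()

dvpg = find_dvpg(si.RetrieveContent(), "edge-uplink-dvpg")  # placeholder portgroup name

# MAC learning policy with unknown unicast flooding allowed.
learning = vim.dvs.VmwareDistributedVirtualSwitch.MacLearningPolicy(
    inherited=False, enabled=True, allowUnicastFlooding=True)
mac_mgmt = vim.dvs.VmwareDistributedVirtualSwitch.MacManagementPolicy(
    inherited=False, macLearningPolicy=learning)

port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
    macManagementPolicy=mac_mgmt)
spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    configVersion=dvpg.config.configVersion, defaultPortConfig=port_cfg)

task = dvpg.ReconfigureDVPortgroup_Task(spec)  # apply the change to the portgroup
Disconnect(si)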

Option 2b: Edge VM is on a VDS 6.5.0 (or later) portgroup


This option is for when the Edge VM is connected to a VDS (vSphere Distributed Switch). You
enable promiscuous mode and forged transmit.

n Set promiscuous mode on the distributed portgroup.

n Allow forged transmit on the distributed portgroup.

n Run the following command to enable reverse filter on the ESXi host where the Edge VM is
running:

esxcli system settings advanced set -o /Net/ReversePathFwdCheckPromisc -i 1

Then disable and enable promiscuous mode on the distributed portgroup with the following
steps:

n Edit the distributed portgroup's settings.

n Disable promiscuous mode and save the settings.


n Edit the distributed portgroup's settings again.

n Enable promiscuous mode and save the settings.

n Do not have other distributed port groups in promiscuous mode on the same host sharing the
same set of VLANs.

n Avoid running other VMs attached to the distributed portgroup in promiscuous mode on the
same host, as the traffic gets replicated to all those VMs and affects performance.

Option 3: Edge VM is connected to an NSX segment


If the Edge is deployed on a host with NSX installed, it can connect to a VLAN segment and use
MAC Learning, which is the preferred configuration option.

n Create a new MAC Discovery segment profile by navigating to Networking > Segments >
Profiles.

n Click Add Segment Profile > MAC Discovery.

n Enable MAC Learning. This will also enable Unknown Unicast Flooding. Keep the flooding
option enabled for bridging to work in all scenarios.

n Click Save.

n Edit the segment used by the Edge by navigating to Networking > Segments.

n Click the menu icon (3 dots) and select Edit.

n Expand the Segment Profiles section, then set the MAC Discovery profile to the one
created above.

Note If you bridge a segment to VLAN 0 and you use a distributed router on this segment, the
gateway might not route VLAN 0 traffic when using MAC learning. In this scenario, avoid option
3. Avoid option 2a if the edge VM is attached to the distributed portgroup of a VDS prepared for
NSX for vSphere.

Create an Edge Bridge Profile


The edge bridge profile is a template for instantiating bridges. In the template, you define a
primary edge, an optional backup edge from the same edge cluster as the primary, and a failover
mode, preemptive or non-preemptive.

The preference is to use the primary edge for running the active bridge, that is, the bridge forwarding traffic between the overlay segment and the VLAN. The standby bridge, which typically runs on the backup edge, does not forward any traffic.

You can instantiate multiple bridges from the same bridge profile. As a result, in most cases, few
edge bridge profiles are required. For example, if you plan to use two edges (edge1 and edge2)
for bridging, you might want to create two edge bridge profiles:

n Profile 1 with edge1 as primary and edge2 as backup


n Profile 2 with edge2 as primary and edge1 as backup

You can then create an arbitrary number of bridges using edge1 as primary (respectively backup) by associating them with profile 1 (respectively profile 2). Those two profiles are enough to load share the bridged traffic between the two edges, on a per-segment basis. The Few Bridge Profiles for Many Bridges diagram represents an example of bridging eight segments across two edges, using two edge bridge profiles.

This diagram shows overlay segment S1 bridged to VLAN 1, segment S2 to VLAN 2, and so on. Segments S1 to S4 use bridge profile 1, resulting in active bridges on edge1 and standby bridges on edge2. Segments S5 to S8 use bridge profile 2, leading to active bridges on edge2 and standby bridges on edge1. The diagram therefore shows load sharing of the bridging functionality, on a per-segment basis.

Figure 4-3. Few Bridge Profiles for Many Bridges

[Figure: bridge profile 1 (primary: edge1, backup: edge2) serves segments S1 to S4, with bridges B1 to B4 active on edge1 and standby on edge2; bridge profile 2 (primary: edge2, backup: edge1) serves segments S5 to S8, with bridges B5 to B8 active on edge2 and standby on edge1. Each bridge extends its segment to VLAN 1 through VLAN 8 over VLAN trunks.]

Depending on the availability of the edges and the failover mode selected for the bridge profiles,
the active bridges might be running on the backup edges.

When both edges in the bridge profile are available, the active bridge is typically running on the
primary edge. If the active bridge or the primary edge fails, the standby bridge on the backup
edge takes over the active role and starts forwarding traffic between overlay segment and VLAN.

A bridge switchover, moving the active bridge to a different edge, is an operation that results in traffic loss. The bridge that is becoming active synchronizes the MAC addresses that were learned on the previously active bridge and starts flooding RARP packets, using those MAC addresses as source MAC addresses. This mechanism is necessary to update the MAC address tables of the physical infrastructure.

For example, what if a failure occurs on the primary edge and the bridge running on the backup
edge is already active? In preemptive mode, when the failure is recovered on the primary edge, a
bridge switchover is triggered and the bridge on the primary edge becomes active again.


The benefit of the preemptive mode is that the system attempts to forward the bridged traffic along a deterministic path. In the example of the Few Bridge Profiles for Many Bridges figure, with preemptive mode you are sure that the bridge traffic gets distributed on a per-segment basis as soon as both edges are available, thus providing more bandwidth.

The drawback of the preemptive mode is that there is a disruptive bridge convergence when the
bridge on the primary edge recovers and becomes active again.

In non-preemptive mode, the bridge on the primary edge recovers from a failure as a standby bridge. The benefit of this mode is that there is no additional traffic disruption when the primary recovers. The drawback of the non-preemptive mode is that the bridge traffic flow is non-deterministic and can be sub-optimal: in the example shown in the Few Bridge Profiles for Many Bridges figure, after a failed edge recovers, the bridge traffic still flows through a single edge, with no load sharing. The preemptive mode is the best option in terms of bandwidth, thanks to its load sharing.

You can manually trigger a bridge switchover from the CLI of the edge that currently hosts the standby bridge by entering: set bridge <uuid> state active.

Use this command only in non-preemptive mode. If you use it in preemptive mode, it returns an error.

For more information on the set and get bridge commands, see the NSX Command-Line Interface Reference.

Before you begin, ensure that you have an NSX Edge cluster with two NSX Edge transport nodes.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Segments > Profiles.

3 Click Edge Bridge Profiles.

4 Click Add Edge Bridge Profile.

5 Enter a name for the Edge bridge profile and optionally a description.

6 Select an NSX Edge cluster.

7 Select a primary node.

8 Select a backup node.

9 Select a failover mode.

The options are Preemptive and Non-Preemptive.

10 Click Save.
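
For reference, an edge bridge profile can also be created through the NSX Policy REST API. The sketch below is illustrative only; the manager address, credentials, and edge node paths are placeholders, and the resource path and field names (edge_paths, failover_mode) are assumptions that you should confirm in the NSX API reference for your release.

import requests

NSX_MANAGER = "nsx-mgr.example.com"   # placeholder
AUTH = ("admin", "password")          # placeholder

profile_id = "bridge-profile-1"       # hypothetical profile ID

payload = {
    "display_name": "bridge-profile-1",
    # First path is the primary node, second (optional) path is the backup node.
    "edge_paths": [
        "/infra/sites/default/enforcement-points/default/edge-clusters/"
        "EDGE-CLUSTER-UUID/edge-nodes/EDGE-NODE-1-UUID",
        "/infra/sites/default/enforcement-points/default/edge-clusters/"
        "EDGE-CLUSTER-UUID/edge-nodes/EDGE-NODE-2-UUID",
    ],
    "failover_mode": "PREEMPTIVE",    # or NON_PREEMPTIVE
}

resp = requests.patch(
    f"https://{NSX_MANAGER}/policy/api/v1/infra/sites/default/enforcement-points/"
    f"default/edge-bridge-profiles/{profile_id}",
    json=payload, auth=AUTH, verify=False)
resp.raise_for_status()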

What to do next

Create a bridge-backed segment. See Extend an Overlay Segment to a VLAN or a Range of VLANs.


Extend an Overlay Segment to a VLAN or a Range of VLANs


After you have identified the edges on which you want the bridging functionality to be performed and created the appropriate edge bridge profile, the final step is to edit the segment configuration and specify the edge bridge profile that you want to associate with the segment, and the VLAN ID or range of VLAN IDs to which to bridge your segment. This step instantiates one or two bridges on the edges identified in the edge bridge profile.

When you configure a bridge with a single VLAN ID, a frame received on the overlay segment
by the bridge gets decapsulated and forwarded on the VLAN uplink of the bridge with an added
802.1Q tag corresponding to this VLAN ID.

When you create the bridge specifying a VLAN ID range, you must configure the overlay segment being bridged for Guest VLAN Tagging (GVT). This means that the encapsulated frames already carry an 802.1Q tag. When the bridge receives an encapsulated frame carrying a VLAN tag on its overlay interface, it first checks that the VLAN ID in the tag belongs to the VLAN range configured for the bridge. After confirmation, it forwards the frame on the VLAN uplink of the bridge carrying the original 802.1Q tag that was received on the overlay. Otherwise, it drops the frame.

Note If needed, you can configure multiple bridges on the same segment, but:

n The same segment cannot be bridged twice on the same edge.

n The bridge does not have any loop detection or prevention. If you configure multiple bridges
to the same bridging domain on the VLAN side, it results in a permanent bridging loop.

Configuring a Bridge-Backed Segment

Prerequisites

n You have identified an overlay segment you want to bridge.

n You have an edge bridge profile specifying one or two edges attached to the overlay transport
zone of your segment.

n If you are using edge VMs, you have checked the configuration requirements in Configure an
Edge VM for Bridging.

Procedure

1 From a browser, log in with admin privileges to an NSX Manager or a Global Manager at
https://<nsx-mgr-or-global-mgr-ip-address>.

2 Select Networking > Segments.

3 Click the menu icon (three dots) of the overlay segment that you want to configure layer 2
bridging on and select Edit.

4 Expand Additional Settings and in the Edge Bridges field, click Set.

5 Click Add Edge Bridge.

6 Select an Edge bridge profile.


7 Select a VLAN transport zone to identify the VLAN uplinks used by the bridge.

8 Enter a VLAN ID or a VLAN ID range (specify VLAN ranges and not individual VLANs).

9 (Optional) Select a teaming policy.

If there are multiple VLAN uplinks on the edge NVDS attached to the VLAN transport zone
selected in the previous steps, use a failover order named teaming policy to identify the exact
uplink on which VLAN bridged traffic gets forwarded. The uplinks of a VM edge do not fail, so
the teaming policy only needs a single uplink. If you do not enter a specific teaming policy and
there are multiple VLAN uplinks, the first one configured on the edge NVDS is used.

10 Click Add.

11 Click Apply.
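
The equivalent segment update through the Policy API might look like the following sketch. All paths and IDs are placeholders, and the bridge_profiles field structure (bridge_profile_path, vlan_transport_zone_path, vlan_ids) is an assumption to verify against the Segment schema in the API reference.

import requests

NSX_MANAGER = "nsx-mgr.example.com"   # placeholder
AUTH = ("admin", "password")          # placeholder

segment_id = "app-segment-01"         # hypothetical overlay segment

payload = {
    "bridge_profiles": [{
        # Edge bridge profile created earlier (placeholder path).
        "bridge_profile_path": "/infra/sites/default/enforcement-points/default/"
                               "edge-bridge-profiles/bridge-profile-1",
        # VLAN transport zone that identifies the VLAN uplinks used by the bridge.
        "vlan_transport_zone_path": "/infra/sites/default/enforcement-points/default/"
                                    "transport-zones/VLAN-TZ-UUID",
        # A single VLAN ID or a VLAN ID range, as in the UI.
        "vlan_ids": ["100-110"],
    }]
}

resp = requests.patch(
    f"https://{NSX_MANAGER}/policy/api/v1/infra/segments/{segment_id}",
    json=payload, auth=AUTH, verify=False)
resp.raise_for_status()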

Add a Metadata Proxy Server


A metadata proxy server enables VMs to retrieve metadata from an OpenStack Nova API server.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Segments > Metadata Proxies.

3 Click Add Metadata Proxy.

4 Enter a name for the metadata proxy server.

5 In the Server Address field, enter the URL and port for the Nova server.

The valid port range is 3000 - 9000.

6 Select an Edge cluster.

7 (Optional) Select Edge nodes.

If you select any Edge node, you cannot enable Standby Relocation in the next step.

8 (Optional) Enable Standby Relocation.

Standby relocation means that if the Edge node running the metadata proxy fails, the
metadata proxy will run on a standby Edge node. You can only enable standby relocation
if you do not select any Edge node.

9 In the Shared Signature Secret field, enter the secret that the metadata proxy will use to
access the Nova server.

10 (Optional) Select a certificate for encrypted communication with the Nova server.

11 (Optional) Select a cryptographic protocol.

The options are TLSv1, TLSv1.1, and TLSv1.2. TLSv1.1 and TLSv1.2 are supported by default.
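
A metadata proxy can also be defined through the Policy REST API. The sketch below is illustrative only; the manager address, credentials, Nova metadata URL, secret, and edge cluster path are placeholders, and the metadata proxy field names should be confirmed in the NSX API reference for your release.

import requests

NSX_MANAGER = "nsx-mgr.example.com"   # placeholder
AUTH = ("admin", "password")          # placeholder

proxy_id = "openstack-mdproxy"        # hypothetical ID

payload = {
    "display_name": "openstack-mdproxy",
    "server_address": "http://nova-api.example.com:8775",  # placeholder Nova server URL and port
    "secret": "shared-signature-secret",                # placeholder shared signature secret
    "edge_cluster_path": "/infra/sites/default/enforcement-points/default/"
                         "edge-clusters/EDGE-CLUSTER-UUID",
    "enable_standby_relocation": True,
}

resp = requests.patch(
    f"https://{NSX_MANAGER}/policy/api/v1/infra/metadata-proxies/{proxy_id}",
    json=payload, auth=AUTH, verify=False)
resp.raise_for_status()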


Distributed Port Groups


A distributed port group specifies port configuration options for each member port on a vSphere
distributed switch. Distributed port groups define how a connection is made to a network.

Distributed Port Group Creation in NSX


When you install Distributed Security to a vSphere Distributed Switch (VDS), the Distributed
Virtual port groups (DVPG) and DVports of the VDS are discovered and objects are automatically
created to represent them in NSX. For details, see Install Distributed Security for vSphere
Distributed Switch.

Important The objects created in NSX Manager for the DVPGs are called distributed port groups.
They are not called segments in the UI.

The objects created in NSX Manager for the DVports are called distributed ports. They are not
called segment ports in the UI.

Also, the following events occur during the Distributed Security installation:

n The VLAN tags for the DVPG are automatically discovered and shown in NSX Manager. The
VLAN tags can only be edited in VMware vCenter.

n The default segment profiles are applied to the distributed port groups. You can later switch
them to custom profiles.

n Only connected DVports from VMware vCenter are discovered by NSX. Free DVports are not
discovered.

After the Distributed Security installation, you can view these objects by navigating to Networking
> Segments, and then selecting the Distributed Port Groups tab.

The distributed port group and distributed port objects are kept in sync between VMware vCenter
and NSX. This means that if DVPGs or DVports are created or removed in VMware vCenter, then
those changes are automatically made to the respective distributed port groups or distributed
ports in NSX Manager. If changes are made in VMware vCenter while connectivity is lost between
VMware vCenter and NSX, those changes are automatically processed and reflected in NSX
Manager when connectivity is restored.

Available Actions for Distributed Port Groups and Distributed Ports


You can perform the following actions for distributed port groups and distributed ports in NSX
Manager:


Object Available Actions

Distributed port groups n Apply SpoofGuard.
n Apply IP Discovery.
n Apply switch security profile.
n Add and remove tags, which allows the distributed port group to be added to dynamic NSGroups.
n Add and remove the distributed port group from static NSGroups.

Distributed ports n Add and remove tags, which allows distributed ports to be added to dynamic NSGroups.
n Add and remove distributed ports from static NSGroups.
n Manage address bindings.



DHCP
5
You can configure DHCP service on each segment regardless of whether the segment is
connected to a gateway. Both DHCP for IPv4 (DHCPv4) and DHCP for IPv6 (DHCPv6) servers
are supported.

NSX supports the following types of DHCP configuration on a segment:

n Segment DHCP server (earlier known as Local DHCP server in NSX 3.x releases)

n Gateway DHCP server (supported only for IPv4 subnets in a segment)

n DHCP Relay

High-level Overview of Configuration


The following figure shows the high-level overview of DHCP server configuration in NSX.

[Figure: DHCP server configuration flow - add a DHCP server profile, then either configure a Segment DHCP server per segment (on a downlink interface, a service interface, or an isolated segment; IPv4 and IPv6) or attach the DHCP server profile to a tier-0/tier-1 gateway and configure a Gateway DHCP server per segment (downlink interface, overlay segment, IPv4 only).]

The following figure shows the high-level overview of DHCP Relay configuration in NSX.


[Figure: DHCP Relay configuration flow - add a DHCP relay profile, then configure DHCP Relay per segment on a downlink interface (overlay segment; IPv4 and IPv6) or on a service interface (VLAN/overlay segment; IPv4 only).]

Supported DHCP Configuration Types


The following figure shows an example of the various scenarios for Segment DHCP server,
Gateway DHCP server, and DHCP Relay in an NSX network.

[Figure: example NSX topology showing the three DHCP configuration types - (1) Segment DHCP, (2) Gateway DHCP, and (3) DHCP Relay to an external DHCP server - applied to networks behind tier-0 and tier-1 gateways running on edge nodes (EN). The tier-0 gateway uplinks use VLAN 101 and VLAN 102.]


Table 5-1. Types of DHCP Configuration in NSX

DHCP Configuration Type Description

Segment DHCP server Select this option to create a Segment DHCP server that has an IP address
on the segment. A Segment DHCP server provides a dynamic IP assignment
service only to the VMs that are attached to the segment. The DHCP server IP
address must belong to the subnet that is configured on the segment. Also,
the server IP address must be different from the Gateway IP address of the
segment.
Segment DHCP server is local to the segment and not available to the other
segments in the network.
You can configure all DHCP settings, including DHCP ranges, DHCP options,
and static bindings on the segment.
For isolated segments, which are not connected to a gateway, Segment DHCP
server configuration type is selected by default.
You can configure DHCPv6 in the IPv6 subnet of a segment with a Segment
DHCP server.

DHCP Relay Select this option to relay the DHCP client requests to the external DHCP
servers. The external DHCP servers can be in any subnet, outside the SDDC,
or in the physical network.
DHCP Relay service is local to the segment and is not available to the other
segments in the network.
When you use a DHCP Relay on a segment, you cannot configure DHCP
settings, DHCP options, and static bindings on the segment.

Gateway DHCP server Gateway DHCP server is analogous to a central DHCP server that dynamically
assigns IP and other network configuration to the VMs on all the segments
that are connected to the gateway and using Gateway DHCP server.
By default, segments that are connected to a tier-1 or tier-0 gateway use
Segment DHCP server. If needed, you can choose to configure a Gateway
DHCP server or a DHCP Relay on the segment.
To configure Gateway DHCP server on a segment, a DHCP server profile must
be attached to the gateway.
If the IPv4 subnet of a segment uses a Gateway DHCP server, you cannot
configure DHCPv6 in the IPv6 subnet of the same segment because Gateway
DHCPv6 server is not supported. In this case, the IPv6 subnet cannot support
any DHCPv6 server configuration, including the IPv6 static bindings.

This chapter includes the following topics:

n Configure DHCP Service

n Attach a DHCP Profile to a Tier-0 or Tier-1 Gateway

n View Gateway DHCP Statistics

n View Segment DHCP Statistics

n Scenarios: Selection of Edge Cluster for DHCP Service

n Scenarios: Impact of Changing Segment Connectivity on DHCP


Configure DHCP Service


You can configure DHCP service on each segment regardless of whether the segment is
connected to a gateway. Both DHCP for IPv4 (DHCPv4) and DHCP for IPv6 (DHCPv6) servers
are supported.

DHCP Configuration Settings: Reference


Use this reference documentation to understand the various considerations that you must keep
in mind while configuring the DHCP service and to obtain a detailed guidance about the
configuration settings on the Set DHCP Config page.

The following note lists the DHCP configuration types that are supported or not supported, based on how overlay or VLAN segments are connected:

Note
n On an isolated segment that is not connected to a gateway, only Segment DHCP server is
supported.

n Segments that are configured with an IPv6 subnet can have either a Segment DHCPv6 server
or a DHCPv6 relay. Gateway DHCPv6 server is not supported.

n If a segment contains an IPv4 subnet and an IPv6 subnet, you can configure both DHCPv4 and
DHCPv6 servers on the segment.

n DHCPv4 relay is supported on a VLAN segment through the service interface of a tier-0 or
tier-1 gateway. Only one DHCPv4 relay service is supported on a VLAN segment.

n For a VLAN segment requiring a DHCP server, only Segment DHCP server is supported.
Gateway DHCP server is not supported on a VLAN segment.

When a Segment DHCP server or DHCP Relay is configured on a VLAN segment, Forged
Transmits option on the ESXi host where the Edge VM is installed must be set to Accept. Set
this option in the VDS or VSS configuration.

In vSphere 7.0 or later, the Forged Transmits option is set to Reject, by default. To enable this
option on the host, do the following steps:

1 Log in to the vSphere Client UI with admin privileges.

2 Go to Hosts and Clusters and click a host in the cluster.

3 Navigate to Configure > Networking > Virtual Switches, and then click Edit.

4 On the Edit Settings window, click Security. From the Forged Transmits drop-down menu,
select Accept.

To learn about Forged Transmits, see the vSphere Security documentation.

Important After a segment has DHCP service configured on it, some restrictions and caveats
apply on changing the connectivity of the segment. For more information, see Scenarios: Impact
of Changing Segment Connectivity on DHCP.


The following sections provide guidance about the configuration settings on the Set DHCP Config
page.

DHCP Type
n When a segment is connected to a gateway, Segment DHCP server is selected by default. If
needed, you can select Gateway DHCP server or DHCP Relay from the drop-down menu.

n If you select the DHCP type as Gateway DHCP server, the DHCP profile that is attached to the
gateway is autoselected. The name and server IP address are fetched automatically from that
DHCP profile and displayed in a read-only mode.

n For an isolated segment, which is not connected to a gateway, Segment DHCP server is
selected by default.

DHCP Profile
n When you are configuring a Segment DHCP server or a DHCP Relay on the segment, you must select a DHCP profile from the drop-down menu. If no profiles are available in the DHCP Profile drop-down menu, click the menu icon and create a DHCP profile. After the profile is created, it is automatically attached to the segment.

n When a segment is using a Gateway DHCP server, ensure that an edge cluster is selected
either in the gateway, or DHCP server profile, or both. If an edge cluster is unavailable in either
the profile or the gateway, an error message is displayed when you save the segment.

n When a segment is using a Segment DHCP server, ensure that the DHCP server profile
contains an edge cluster. If an edge cluster is unavailable in the profile, an error message
is displayed when you save the segment.

IPv4 Server or IPv6 Server Settings


This section explains the configuration settings in the IPv4 Server tab page and the IPv6 Server
tab page.

DHCP Server Address

n If you are configuring a Segment DHCP server, server IP address is required. A maximum
of two server IP addresses are supported. One IPv4 address and one IPv6 address. For
an IPv4 address, the prefix length must be <= 30, and for an IPv6 address, the prefix
length must be <= 126. The server IP addresses must belong to the subnets that you
have specified in this segment. The DHCP server IP address must not overlap with the IP
addresses in the DHCP ranges and DHCP static binding. The DHCP server profile might
contain server IP addresses, but these IP addresses are ignored when you configure a
Segment DHCP server on the segment.

n After a Segment DHCP server is created, you can edit the server IP addresses on the Set
DHCP Config page. However, the new IP addresses must belong to the same subnet that
is configured in the segment.


n If you are configuring a Gateway DHCP server, the DHCP Server Address text box is not
editable. The server IP addresses are fetched automatically from the DHCP profile that is
attached to the connected gateway.

n The Gateway DHCP server IP addresses in the DHCP server profile can be different from
the subnet that is configured in the segment. In this case, the Gateway DHCP server
connects with the IPv4 subnet of the segment through an internal relay service, which is
autocreated when the Gateway DHCP server is created. The internal relay service uses any
one IP address from the subnet of the Gateway DHCP server IP address.

n The IP address used by the internal relay service acts as the default gateway on the
Gateway DHCP server to communicate with the IPv4 subnet of the segment.

n After a Gateway DHCP server is created, you can edit the server IP addresses in the DHCP
profile of the gateway. However, you cannot change the DHCP profile that is attached to
the gateway. When a DHCP server profile is used in your network, preferably avoid editing
the server IP addresses in the DHCP server profile. It might cause a failure while renewing
or releasing the IP addresses that are leased to the DHCP clients.

DHCP Ranges

n IP ranges, CIDR subnet, and IP addresses are allowed. IPv4 addresses must be in a
CIDR /32 format, and IPv6 addresses must be in a CIDR /128 format. You can also enter
an IP address as a range by entering the same IP address in the start and the end of the
range. For example, 172.16.10.10-172.16.10.10.

n IP addresses in the DHCP ranges must belong to the subnet that is configured on the
segment. That is, DHCP ranges cannot contain IP addresses from multiple subnets.

n IP ranges must not overlap with the DHCP server IP address and the DHCP static binding
IP addresses.

n IP ranges in the DHCP IP pool must not overlap each other.

n Number of IP addresses in any DHCP range must not exceed 65536.

n The following types of IPv6 addresses are not permitted in DHCP for IPv6 ranges:

n Link Local Unicast addresses (FE80::/64)

n Multicast addresses (FF00::/8)

n Unspecified address (0:0:0:0:0:0:0:0)


n Address with all F (F:F:F:F:F:F:F:F)

Caution After a DHCP server is created, you can update existing ranges, append new IP
ranges, or delete existing ranges. However, it is a good practice to avoid deleting, shrinking,
or expanding the existing IP ranges. For example, do not try to combine multiple smaller IP
ranges to create a single large IP range. When you modify existing ranges after the DHCP
service is running, it might cause the DHCP clients to lose network connection and result in a
temporary traffic disruption.

Excluded Ranges (Only for DHCPv6)

Enter IPv6 addresses or a range of IPv6 addresses that you want to exclude for dynamic IP
assignment to DHCPv6 clients.

In IPv6 networks, the DHCP ranges can be large. Sometimes, you might want to reserve
certain IPv6 addresses, or multiple small ranges of IPv6 addresses from the large DHCP range
for static binding. In such situations, you can specify excluded ranges.

Lease Time

Default value is 86400 seconds. Valid range of values is 60–4294967295.

Preferred Time (Only for DHCPv6)

Preferred time is the length of time that a valid IP address is preferred. When the preferred
time expires, the IP address becomes deprecated. If no value is entered, preferred time is
autocalculated as (lease time * 0.8).

Lease time must be > preferred time.

Valid range of values is 60–4294967295. Default is 69120 seconds.

DNS Servers

A maximum of two DNS servers are permitted. When not specified, no DNS is assigned to the
DHCP client.

Domain names (Only for DHCPv6)

One or more domain names are supported.

DHCPv4 server configuration automatically fetches the domain name that you specified in the
segment configuration.

SNTP Servers (Only for DHCPv6)

A maximum of two SNTP servers are permitted.

DHCPv6 server does not support NTP.

DHCP Options (Only for DHCPv4)


DHCP Options for IPv6 are not supported.


Each classless static route option in DHCP for IPv4 can have multiple routes with the same destination. Each route includes a destination subnet, a subnet mask, and a next-hop router. For information about classless static routes in DHCPv4, see the RFC 3442 specifications. You can add a maximum of 127 classless static routes on a DHCPv4 server.

In addition to the Generic Option 121 (classless static route), NSX supports other Generic Options that are described in the following table. Generic Options that are not listed in this table are also accepted without any validation, but they do not take effect.

Table 5-2. Supported Generic Options

n Code 2 - Time Offset. Value type: integer, seconds offset from UTC. Allowed values: -43200–43200. Maximum items: 1. Example value: 28800.

n Code 13 - Boot File Size. Value type: number of blocks (one block is 512 bytes). Integer values: 1–65535. Maximum items: 1. Example value: 1385.

n Code 19 - Forward On/Off. Value type: IP forwarding, 1 for on, 0 for off. Allowed values: [0, 1]. Maximum items: 1. Example value: 0.

n Code 26 - MTU. Value type: interface MTU for a given interface. Allowed values: 68–65535. Maximum items: 1. Example value: 9600.

n Code 28 - Broadcast Address. Value type: IP address. Maximum items: 1. Example value: 10.10.10.255.

n Code 35 - ARP Timeout. Value type: integer (seconds). Allowed values: 0–4294967295. Example value: 360.

n Code 40 - NIS Domain. Value type: text. Maximum: 255 characters. Example value: vmware.com.

n Code 41 - NIS Servers. Value type: IP addresses in a preferred order. Maximum items: 63. Example value: 10.10.10.10.

n Code 42 - NTP Servers. Value type: IP addresses in a preferred order. Maximum items: 63. Example value: 10.10.10.11.

n Code 44 - NETBIOS Name Server. Value type: IP addresses in a preferred order. Maximum items: 63. Example value: 10.10.10.12.

n Code 45 - NETBIOS Dist Server. Value type: IP addresses in a preferred order. Maximum items: 63. Example value: 10.10.10.13.

n Code 46 - NETBIOS Node Type. Value type: integer encoding of node type. Allowed values: [1, 2, 4, 6]. Maximum items: 4. 1 = B-node (broadcast, no WINS), 2 = P-node (WINS only), 4 = M-node (broadcast then WINS), 8 = H-node (WINS then broadcast). Example value: 2.

n Code 47 - NETBIOS Scope. Value type: string encoded according to RFC 1001/1002. Maximum: 255 characters.

n Code 58 - Renewal Time. Value type: N/A, based on the lease time, between 0–4294967295. Maximum items: 1. Example value: 300.

n Code 59 - Rebinding Time. Value type: N/A, based on the lease time, between 0–4294967295. Maximum items: 1. Example value: 300.

n Code 64 - NIS+ Domain Name. Value type: text (domain name). Example value: vmware.com.

n Code 65 - NIS+ Server Address. Value type: IP addresses in a preferred order. Example value: 10.10.10.10.

n Code 66 - Server Name. Value type: text (server domain name). Maximum: 255 characters. Example value: 10.10.10.253.

n Code 67 - Bootfile Name. Value type: text (file name). Maximum: 255 characters. Example value: /tftpboot/pxelinux/pxelinux.bin.

n Code 117 - Name Service Search. Not natively supported with API. Allowed values: [0, 6, 41, 44, 65]. Maximum items: 5. Example value: 6.

n Code 119 - Domain Search. Value type: one or more domain names. Each domain name must be enclosed in quotes and separated by commas. Example value: vmware.com.

n Code 150 - TFTP server address. Value type: IP address. Example value: 10.10.10.10.

n Code 209 - PXE Configuration File. Maximum: 255 characters. Example value: configs/common.

n Code 210 - PXE Path Prefix. Maximum: 255 characters. Example value: /tftpboot/pxelinux/files/.

n Code 211 - PXE Reboot Time. Allowed values: 0–4294967295. Example value: 1800.
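
To illustrate how these options map to an API payload, the following minimal Python sketch shows how a classless static route (option 121) and two generic options could be expressed in the DHCP options part of a Segment DHCPv4 configuration. The field names (option_121, static_routes, others) and all addresses are assumptions for illustration; verify them against the DHCPv4 options schema in the NSX API reference.

# Assumed structure of the DHCPv4 options object (verify against the API reference).
dhcp_v4_options = {
    "option_121": {
        "static_routes": [
            # Destination subnet and next-hop router, as described above (placeholders).
            {"network": "10.20.0.0/16", "next_hop": "172.16.10.1"},
        ]
    },
    "others": [
        {"code": 42, "values": ["10.10.10.11"]},   # NTP Servers
        {"code": 26, "values": ["9600"]},          # MTU
    ],
}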


DHCP Static Bindings


In a typical network environment, you have VMs that run services, such as FTP, email servers,
application servers, and so on. You might not want the IP address of these VMs to change in your
network. In this case, you can bind a static IP address to the MAC address of each VM (DHCP
client). The static IP address must belong to the subnet (if any) that is configured on the segment,
and it must not overlap with the DHCP IP ranges and DHCP server IP address.

DHCP static bindings are supported when you are configuring either a Segment DHCP server or
a Gateway DHCP server on the segment. When a segment is using a DHCP Relay, you cannot
configure static bindings.

On a DHCP for IPv4 server, static bindings are supported regardless of whether the segment uses
a Segment DHCP or a Gateway DHCP configuration. On a DHCP for IPv6 server, static bindings
are supported only when the segment uses a Segment DHCP configuration.

Static Binding Options Common to DHCPv4 and DHCPv6 Server

The following table describes the static binding options that are common to DHCP for IPv4 and
DHCP for IPv6 servers.

Option Description

Name Enter a unique display name to identify each static binding. The name
must be limited to 255 characters.

MAC Address Required. Enter the MAC address of the DHCP client to which you
want to bind a static IP address.
The following validations apply to MAC address in static bindings:
n MAC address must be unique in all the static bindings on a
segment that uses a Segment DHCP server.
n MAC address must be unique in all the static bindings across all
the segments that are connected to the gateway and which use the
Gateway DHCP server.
For example, consider that you have 10 segments connected to a
tier-1 gateway. You use a Gateway DHCP server for four segments
(Segment1 to Segment4), and a Segment DHCP server for the
remaining six segments (Segment5 to Segment10). Assume that you
have a total of 20 static bindings across all the four segments
(Segment1 to Segment4), which use the Gateway DHCP server. In
addition, you have five static bindings in each of the other six
segments (Segment5 to Segment10), which use a Segment DHCP
server. In this example:
n The MAC address in each of the 20 static bindings must be unique
across all the segments (Segment1 to Segment4) that use the
Gateway DHCP server.
n The MAC address in the five static bindings must be unique
on each segment (Segment5 to Segment10) that use a Segment
DHCP server.


IP Address n Required for IPv4 static binding. Enter a single IPv4 address to
bind to the MAC address of the client.
n Optional for IPv6 static binding. Enter a single Global Unicast IPv6
address to bind to the MAC address of the client.
When no IPv6 address is specified for static binding, Stateless Address
Autoconfiguration (SLAAC) is used to auto-assign an IPv6 address to
the DHCPv6 clients. Also, you can use Stateless DHCP to assign other
DHCP configuration options, such as DNS, domain names, and so on,
to the DHCPv6 clients.
For more information about Stateless DHCP for IPv6, read the RFC 3736 specifications.
The following types of IPv6 addresses are not permitted in IPv6 static
binding:
n Link Local Unicast addresses (FE80::/64 )
n Multicast IPv6 addresses (FF00::/8)
n Unspecified address (0:0:0:0:0:0:0:0)
n Address with all F (F:F:F:F:F:F:F:F)
The static IP address must belong to the subnet (if any) that is
configured on the segment, and it must be outside the DHCP ranges
that you have configured on the segment.

Lease Time Optional. Enter the amount of time in seconds for which the IP address
is bound to the DHCP client. When the lease time expires, the IP
address becomes invalid and the DHCP server can assign the address
to other DHCP clients on the segment.
Valid range of values is 60–4294967295. Default is 86400.

Description Optional. Enter a description for the static binding.

Tags Optional. Add tags to label static bindings so that you can quickly
search or filter bindings, troubleshoot and trace binding-related
issues, or do other tasks.
For more information about adding tags and use cases for tagging
objects, see Tags.

Static Binding Options (Only in DHCPv4 Server)

The following table describes the static binding options that are available only in a DHCP for
IPv4 server.


DHCP For IPv4 Option Description

Gateway Address Enter the default gateway IP address that the DHCP for IPv4 server
must provide to the DHCP client.

Host Name Enter the host name of the DHCP for IPv4 client so that the DHCPv4
server can always bind the client (host) with the same IPv4 address each
time.
The host name must be limited to 63 characters.
The following validations apply to host name in static bindings:
n Host name must be unique in all the static bindings on a segment
that uses a Segment DHCP server.
n Host name must be unique in all the static bindings across all the
segments that are connected to the gateway and which use the
Gateway DHCP server.
For example, consider that you have 10 segments connected to a
tier-1 gateway. You use a Gateway DHCP server for four segments
(Segment1 to Segment4), and a Segment DHCP server for the remaining
six segments (Segment5 to Segment10). Assume that you have a total of
20 static bindings across all the four segments (Segment1 to Segment4),
which use the Gateway DHCP server. In addition, you have five static
bindings in each of the other six segments (Segment5 to Segment10),
which use a Segment DHCP server. In this example:
n The host name in each of the 20 static bindings must be unique
across all the segments (Segment1 to Segment4) that use the
Gateway DHCP server.
n The host name in the five static bindings must be unique on each
segment (Segment5 to Segment10) that use a Segment DHCP
server.

DHCP Options Optional. Click Set to configure DHCP for IPv4 Classless Static Routes
and other Generic Options.

Some additional notes for DHCPv4 static binding:

n IPv4 static bindings automatically inherit the domain name that you configured on the
segment.

n To specify DNS servers in the static binding configuration, add the Generic Option (Code 6
- DNS Servers).

n To synchronize the system time on DHCPv4 clients with DHCPv4 servers, use NTP. DHCP
for IPv4 server does not support SNTP.

n If DHCP options are not specified in the static bindings, the DHCP options from the
DHCPv4 server on the segment are automatically inherited in the static bindings. However,
if you have explicitly added one or more DHCP options in the static bindings, these DHCP
options are not autoinherited from the DHCPv4 server on the segment.

Static Binding Options (Only in DHCPv6 Server)

The following table describes the static binding options that are available only in a DHCP for
IPv6 server.


DHCP for IPv6 Option Description

DNS Servers Optional. Enter a maximum of two domain name servers to use for the
name resolution.
When not specified, no DNS is assigned to the DHCP client.

SNTP Servers Optional. Enter a maximum of two Simple Network Time Protocol
(SNTP) servers. The clients use these SNTP servers to synchronize their
system time to that of the standard time servers.

Preferred Time Optional. Enter the length of time that a valid IP address is preferred.
When the preferred time expires, the IP address becomes deprecated.
If no value is entered, preferred time is auto-calculated as (lease time *
0.8).
Lease time must be > preferred time.
Valid range of values is 60–4294967295. Default is 69120.

Domain Names Optional. Enter the domain name to provide to the DHCPv6 clients.
Multiple domain names are supported in an IPv6 static binding.
When not specified, no domain name is assigned to the DHCP clients.

Some additional notes for DHCPv6 static binding:

n Gateway IP address configuration is unavailable in IPv6 static bindings. IPv6 client learns
about its first-hop router from the ICMPv6 router advertisement (RA) message.

n NTP is not supported in DHCPv6 static bindings.
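
For reference, a DHCPv4 static binding such as the one described above can also be created as a child object of the segment through the Policy REST API. The sketch below is illustrative only; the manager address, credentials, segment ID, binding ID, MAC address, and IP address are placeholders, and the field names should be confirmed against the DHCPv4 static binding schema in the NSX API reference.

import requests

NSX_MANAGER = "nsx-mgr.example.com"   # placeholder
AUTH = ("admin", "password")          # placeholder

segment_id = "app-segment-01"         # hypothetical segment
binding_id = "ftp-server-binding"     # hypothetical binding ID

payload = {
    "resource_type": "DhcpV4StaticBindingConfig",
    "display_name": "ftp-server-binding",
    "mac_address": "00:50:56:aa:bb:cc",   # MAC address of the DHCP client (placeholder)
    "ip_address": "172.16.10.50",         # static IP outside the DHCP ranges (placeholder)
    "host_name": "ftp-server-01",
    "lease_time": 86400,
}

resp = requests.patch(
    f"https://{NSX_MANAGER}/policy/api/v1/infra/segments/{segment_id}/"
    f"dhcp-static-binding-configs/{binding_id}",
    json=payload, auth=AUTH, verify=False)
resp.raise_for_status()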

Configure Segment DHCP Server on a Segment


A Segment DHCP server provides a dynamic IP assignment service only to the VMs that are
attached to the segment. NSX supports Segment DHCP server configuration on the downlink
interface and the service interface. You can configure a Segment DHCPv4 server, or a Segment
DHCPv6 server, or both, on the segment.

The following figure shows a sample network topology that has a Segment DHCP server
configured on four networks.


[Figure: sample topology with Segment DHCP servers configured on Network-1 through Network-4, as described below. EN: Edge Node; the tier-0 gateway uplinks use VLAN 101 and VLAN 102.]

In this network topology, a Segment DHCP server is configured on the following networks:

n Network-2 is connected to the downlink interface of tier-1 gateway.

n Network-1 is connected to the service interface of tier-0 gateway.

n Network-4 is connected to the service interface of tier-1 gateway.

n Network-3 is an isolated segment, which is not connected to any gateway.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.
2 Select Networking > Segments.

3 Find the segment where you want to configure the DHCP service. Next to the segment name, click the menu icon (3 dots), and then click Edit.

4 Click Set DHCP Config.

5 From the DHCP Type drop-down menu, select Segment DHCP Server.


6 (Required) From the DHCP Profile drop-down menu, select a DHCP server profile. If no profile is available in the drop-down menu, click the menu icon, and then click Create New to add a DHCP server profile. After the profile is created, it is automatically attached to the segment.

For more information about creating a DHCP server profile, see Add a DHCP Server Profile.

7 Click the DHCP Config toggle button to enable DHCP configuration on the segment.

If you are configuring a Segment DHCPv4 server and a Segment DHCPv6 server, ensure that
you enable the DHCP Config toggle button in both the IPv4 Server and IPv6 Server tabs.

8 Specify the following DHCP configuration settings:

n DHCP Server Address

n DHCP Ranges

n Optional: Excluded Ranges (only for DHCPv6)

n Optional: Lease Time

n Optional: Preferred Time (only for DHCPv6)

n Optional: Domain Names (only for DHCPv6)

n Optional: DNS Servers

n Optional: SNTP Servers (only for DHCPv6)

n Optional: DHCP Options (only for DHCPv4)

n Optional: Static Bindings

For a detailed information about these DHCP configuration settings, see the reference
documentation at DHCP Configuration Settings: Reference.

9 Click Apply.
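
The configuration above can be approximated with a single Policy API call on the segment. The following Python sketch is illustrative only; the manager address, credentials, DHCP server profile path, and all addresses are placeholders, and the dhcp_config_path and SegmentDhcpV4Config fields should be verified against the Segment schema in the NSX API reference.

import requests

NSX_MANAGER = "nsx-mgr.example.com"   # placeholder
AUTH = ("admin", "password")          # placeholder

segment_id = "app-segment-01"         # hypothetical segment

payload = {
    # DHCP server profile that supplies the edge cluster for the DHCP service (step 6).
    "dhcp_config_path": "/infra/dhcp-server-configs/dhcp-profile-01",
    "subnets": [{
        "gateway_address": "172.16.10.1/24",
        # Segment DHCPv4 server settings (step 8).
        "dhcp_ranges": ["172.16.10.100-172.16.10.200"],
        "dhcp_config": {
            "resource_type": "SegmentDhcpV4Config",
            "server_address": "172.16.10.2/24",   # must belong to the segment subnet
            "lease_time": 86400,
            "dns_servers": ["10.10.10.10"],
        },
    }],
}

resp = requests.patch(
    f"https://{NSX_MANAGER}/policy/api/v1/infra/segments/{segment_id}",
    json=payload, auth=AUTH, verify=False)
resp.raise_for_status()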

Configure Gateway DHCP Server on a Segment


A Gateway DHCP server is attached to a tier-0 or tier-1 gateway, and it provides DHCP service to
the networks (overlay segments), which are directly connected to the gateway and configured to
use a Gateway DHCP server.

In this case, the DHCP server that is created on the tier-0 or tier-1 gateway will have an internal
relay so that the connected segments can forward traffic to the DHCP servers, which you specified
in the DHCP server profile.

The following figure shows a sample network topology with Gateway DHCP servers that are
servicing networks, which are directly connected to the tier-0 and tier-1 gateway.


[Figure: sample topology with Gateway DHCP servers, backed by DHCP server profiles attached to the tier-0 and tier-1 gateways, servicing Network-1 through Network-3 as described below. EN: Edge Node; the tier-0 gateway uplinks use VLAN 101 and VLAN 102.]
In this network topology, a Gateway DHCP server is configured on the following networks:

n Network-1 is connected to the service interface of the tier-1 gateway.

n Network-2 is connected to the downlink interface of the tier-1 gateway.

n Network-3 is connected to the downlink or service interface of the tier-0 gateway.

Prerequisites

Ensure that you have specified the Gateway IP address of the IPv4 subnet in the segments that are
directly connected to the tier-0 or tier-1 gateway.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.
2 Select Networking > Segments.

3 Find the segment where you want to configure the DHCP service. Next to the segment name, click the menu icon (3 dots), and then click Edit.

4 Click Set DHCP Config.

5 In the DHCP Type drop-down menu, select Gateway DHCP Server.


6 (Required) Ensure that a DHCP server profile is attached to the gateway.

If a profile is set on the gateway, the name of the profile is displayed in a read-only mode.
If a DHCP server profile is not set on the gateway, do these steps:
a Click the information icon next to DHCP Profile, and then click the gateway name to
navigate to the gateway page.

b Next to the gateway name, click the vertical ellipses, and then click Edit.

c Next to DHCP Config, click Set.

The Set DHCP Configuration window opens.

d In the Type drop-down menu, select DHCP Server.

e Select a DHCP server profile to attach to this gateway and click Save.

f Close the edit mode on the gateway and return to the edit mode on the Segments page.

g Click Set DHCP Config and continue the remaining steps in this procedure.

7 Click the DHCP Config toggle button to enable DHCP configuration on the segment.

Note You can configure only Gateway DHCPv4 server on a segment. Gateway DHCPv6
server is not supported.

8 Observe that the DHCP Server Address is fetched automatically from the DHCP profile and
displayed on the IPv4 Server > Settings page.

9 Specify the following DHCP configuration settings:

n DHCP Ranges

n Optional: Lease Time

n Optional: DNS Servers

n Optional: DHCP Options

n Optional: Static Bindings

For detailed information about these DHCP configuration settings, see the reference documentation at DHCP Configuration Settings: Reference.

10 Click Apply.
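For reference, a segment that uses Gateway DHCP can also be sketched through the Policy API. In this hedged example, the segment ID (app-segment), the tier-1 gateway ID (tier1-gw), and the addresses are hypothetical; the segment only declares the subnet gateway and the DHCP ranges, because the DHCP server address itself comes from the DHCP server profile that is attached to the gateway (see Attach a DHCP Profile to a Tier-0 or Tier-1 Gateway):

PATCH https://<nsx-mgr-ip>/policy/api/v1/infra/segments/app-segment
{
  "connectivity_path": "/infra/tier-1s/tier1-gw",
  "subnets": [
    {
      "gateway_address": "172.16.30.1/24",
      "dhcp_ranges": ["172.16.30.50-172.16.30.150"]
    }
  ]
}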

Configure DHCP Relay on a Segment


In a DHCP Relay configuration, the DHCP messages are forwarded to the external DHCP servers.
The external DHCP servers can be in any subnet, outside the SDDC, or in the physical network.

DHCP Relay configuration is supported in the following scenarios:

n When an overlay segment is connected to the downlink interface of a tier-0 or tier-1 gateway.
In this case, the DHCP messages can be relayed either to DHCPv4 servers or DHCPv6 servers.
Step 2 in the Procedure section of this topic explains the workflow for this scenario.


n When an overlay or VLAN segment is connected to the service interface of a tier-0 or tier-1
gateway. In this case, the DHCP messages are relayed only to DHCPv4 servers. Step 3 in the
Procedure section of this topic explains the workflow for this scenario.

Note When you use a DHCP Relay on a segment, you cannot configure DHCP settings, DHCP
options, and static bindings on the segment.

The following figure shows a sample network topology that has a DHCP Relay configured on three
networks.

[Figure: Sample network topology with DHCP Relay. Legend: EN = Edge Node; the marker 3 indicates DHCP Relay. The tier-0 gateway and the tier-1 gateway are each backed by edge nodes EN 1 and EN 2, with uplinks on VLAN 101 and VLAN 102. Network-1 (VM1), Network-2 (VM2), and Network-3 (VM3) each use a DHCP Relay that points to a DHCP Relay profile.]

In this network topology, a DHCP Relay is configured on the following networks:

n Network-1 is connected to the service interface of the tier-1 gateway.

n Network-2 is connected to the downlink interface of the tier-1 gateway.

n Network-3 is connected either to the downlink or service interface of the tier-0 gateway.


Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://nsx-manager-ip-address.
2 Configure a DHCP Relay on an overlay segment that is connected to the downlink interface of
a tier-0 or tier-1 gateway.

a Navigate to Networking > Segments.

b Find the overlay segment where you want to configure the DHCP Relay. Next to the segment name, click the vertical ellipses, and then click Edit.

c Click Set DHCP Config.

d From the DHCP Type drop-down menu, select DHCP Relay.

e From the DHCP Profile drop-down menu, select a DHCP relay profile. If no profile is available in the drop-down menu, click the vertical ellipses, and then click Create New to add a DHCP relay profile. After the profile is created, it is automatically attached to the segment.

f Click Apply.

3 Configure a DHCP Relay on a segment that is connected to the service interface of a tier-0 or
tier-1 gateway.

a Navigate to Networking > Segments.

b Add a segment in either a VLAN or an overlay transport zone. Do not connect this
segment to any gateway. Also, do not set any DHCP configuration on this segment, such
as DHCP server address, DHCP ranges, and static bindings.

For example, assume that you have added a segment in the VLAN transport zone with the name My-VLAN-Segment.

c Navigate to Networking > Tier-0 Gateways or Networking > Tier-1 Gateways.

d Find the gateway where you want to connect this VLAN segment to the service interface. Click the vertical ellipses, and then click Edit.

e Expand the Interfaces section and click the link to open the Set Interfaces page.

f Click Add Interface.

g In the Name text box, enter a name for this interface.

For example, specify the name as Connect-to-VLAN.

h From the Type drop-down menu, select Service.

i Enter the IP Address/Mask in a CIDR format.

For example, enter 172.16.10.1/24.


j From the Connect To (Segment) drop-down menu, select the My-VLAN-Segment, which
you created earlier.

k From the DHCP Profile drop-down menu, select a DHCP relay profile.

l Click Close.
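If you prefer to create the DHCP relay profile through the Policy API instead of the UI, the following hedged sketch shows an equivalent call; the profile ID dhcp-relay-1 and the external DHCP server addresses are placeholders:

PATCH https://<nsx-mgr-ip>/policy/api/v1/infra/dhcp-relay-configs/dhcp-relay-1
{
  "server_addresses": ["10.10.10.11", "10.10.10.12"]
}

After the profile exists, you can select it from the DHCP Profile drop-down menu in steps 2.e and 3.k of this procedure.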

Attach a DHCP Profile to a Tier-0 or Tier-1 Gateway


To use Gateway DHCP server for a dynamic IP assignment, you must attach a DHCP server profile
to a tier-0 or tier-1 gateway.

You can attach a DHCP profile to a gateway only when the segments connected to that gateway do not have a Segment DHCP server or a DHCP relay configured on them. If a Segment DHCP server or DHCP relay exists on a connected segment, the UI displays an error when you try to attach a DHCP profile to the gateway. You must first disconnect such segments from the gateway, and then attach the DHCP profile to the gateway.

Prerequisites

A DHCP server profile is added in the network.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://nsx-manager-ip-address.

2 Go to Networking > Tier-0 Gateways or Networking > Tier-1 Gateways.

3 Edit the appropriate gateway.

4 Next to DHCP Config, click Set.

The Set DHCP Configuration window opens.

5 In the Type drop-down menu, select DHCP Server.

6 Select a DHCP server profile to attach to this gateway.

7 Click Save.

What to do next

Navigate to Networking > Segments. On each segment that is connected to this gateway,
configure the DHCP settings, DHCP options, and static bindings.

For more information, see Configure DHCP Service.
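The attachment can also be sketched through the Policy API. In this hedged example, the gateway ID tier1-gw and the profile path are placeholders for a tier-1 gateway and an existing DHCP server profile:

PATCH https://<nsx-mgr-ip>/policy/api/v1/infra/tier-1s/tier1-gw
{
  "dhcp_config_paths": ["/infra/dhcp-server-configs/dhcp-profile-1"]
}

For a tier-0 gateway, the same field is patched on /policy/api/v1/infra/tier-0s/<gateway-id>.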

View Gateway DHCP Statistics


After a Gateway DHCP server is in use, you can view the DHCP server statistics on the directly
connected tier-0 or tier-1 gateway.


Prerequisites

n DHCP server profile is attached to the tier-0 or tier-1 gateway.

n Segments that are connected to the gateway are configured to use a Gateway DHCP server.

n DHCP settings are configured on the segments that are directly connected to the gateway.

n The server runtime status is up and the Gateway DHCP server is in use.

Procedure

1 From your browser, log in to an NSX Manager at https://nsx-manager-ip-address.

2 Navigate to Networking > Tier-0 Gateways or Networking > Tier-1 Gateways.

3 Find the gateway whose Gateway DHCP server statistics you want to view.

4 Expand the gateway configuration settings, and then click the link next to DHCP.

5 In the pop-up window, click View Statistics.

Gateway DHCPv4 server statistics are displayed.

In the DHCP Server Packets section, a pie chart displays the breakdown of the DHCP packet
counts. These packet counts represent the count of the various DHCP message types. The
number at the center of the pie chart represents the sum of all DHCP packet counts.

The DHCP Pool Statistics section displays the pool usage statistics of segments that are
directly connected to the gateway and using Gateway DHCP server. For example, this section
shows statistics, such as the size of the DHCP pool, the number of IP addresses used from the
pool, and the allocation percentage.

Note If you have configured a Segment DHCP server on a gateway-connected segment, the
statistics of the Segment DHCP server are not displayed on the DHCP Statistics page of the
gateway. DHCP statistics are displayed only for those segments that are configured to use a
Gateway DHCP server.

For example, assume that you have four segments connected to the downlink interfaces of
a tier-1 gateway. Segments 1 and 2 are using a Segment DHCP server, whereas segments 3
and 4 are using the Gateway DHCP server. In this case, the DHCP Statistics page displays the
Gateway DHCP server statistics only for segments 3 and 4.

To view segment DHCP statistics, you must navigate to the Segments page. For more
information, see View Segment DHCP Statistics.

6 (Optional) To reset DHCP packet counts, click Reset Packet Counter.

The DHCP packet counts that are displayed next to the pie chart are reset. The DHCP pool
statistics are not impacted.


View Segment DHCP Statistics


After a Segment DHCP server is in use, you can view the DHCP server statistics on the Segments
page.

Prerequisites

n DHCP settings are configured on the segment.

n The server runtime status is up and the Segment DHCP server is in use.

Procedure

1 From your browser, log in to an NSX Manager at https://nsx-manager-ip-address.

2 Select Networking > Segments.

3 Find the segment whose DHCP statistics you want to view.

4 Expand the segment configuration settings, and then click View Statistics.

The Segment Statistics page opens.

5 Click the DHCP Statistics tab.

In the DHCP Server Packets section, a pie chart displays the breakdown of the DHCP packet
counts. The packet counts represent the count of the various DHCP message types. The
number at the center of the pie chart represents the sum of all DHCP packet counts.

The DHCP Pool Statistics section displays the pool usage statistics. For example, this section
shows statistics, such as the size of the DHCP pool, the number of IP addresses used from the
pool, and the allocation percentage.

Important If you have configured both DHCPv4 and DHCPv6 servers on a segment, the
DHCP Statistics page will display only the DHCPv4 packet counts and the DHCPv4 pool
usage statistics. DHCPv6 packet counts and DHCPv6 pool usage statistics are currently not
supported.

6 (Optional) To reset DHCP packet counts, click Reset Packet Counter.

The DHCP packet counts that are displayed next to the pie chart are reset. The DHCP pool
statistics are not impacted.

Scenarios: Selection of Edge Cluster for DHCP Service


DHCP server runs as a service (service router) in the edge nodes of an NSX Edge cluster.

Isolated segments that are not connected to a gateway can use only a Segment DHCP server.
Segments that are connected to a gateway on the downlink interface can use either a Segment
DHCP server, DHCP Relay, or Gateway DHCP server.


Regardless of whether a segment uses a Segment DHCP server or a Gateway DHCP server, DHCP
server always runs as a service router in the edge transport nodes of an edge cluster. If the
segment uses a Segment DHCP server, the DHCP service is created in the edge cluster that you
specified in the DHCP profile. However, if the segment uses a Gateway DHCP server, the edge
cluster in which the DHCP service is created depends on the combination of the following factors:

n Is an edge cluster specified in the gateway?

n Is an edge cluster specified in the DHCP profile of the gateway?

n Is the edge cluster in the gateway and in the DHCP profile the same or different?

n Is the tier-1 routed segment connected to a tier-0 gateway?

The following scenarios explain how the edge cluster is selected for creating the DHCP service.

Scenario 1: Isolated Segment Uses Segment DHCP Server


Scenario Description:

n An edge cluster (Cluster1) is created with four edge nodes: N1, N2, N3, N4.

n A segment with None connectivity is added in the overlay transport zone.

n Segment uses a Segment DHCP server, by default.

The DHCP server profile configuration is as follows:

n Profile Type: DHCP Server

n Edge Cluster: Cluster1

n Edges: Autoallocated

In this scenario, any two edge nodes from Cluster1 are autoallocated to create the DHCP service,
and DHCP high availability (HA) is automatically configured. One of the edge nodes in Cluster1
runs in active mode and the other edge runs in passive mode.

Note
n If you manually allocate the edge nodes in the DHCP profile, the edge node that is added first
becomes the active edge. The second edge node takes the passive role.

n If you select only one edge node in the DHCP profile, DHCP HA is not configured.
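In API terms, the edge placement described in this scenario is controlled by the DHCP server profile itself. The following is a hedged sketch of such a profile; all IDs, addresses, and UUID placeholders are hypothetical. Omit preferred_edge_paths to let NSX autoallocate two edge nodes from the cluster, or list one or two edge node paths to allocate them manually in that order:

PATCH https://<nsx-mgr-ip>/policy/api/v1/infra/dhcp-server-configs/dhcp-profile-1
{
  "server_addresses": ["10.20.0.2/24"],
  "lease_time": 86400,
  "edge_cluster_path": "/infra/sites/default/enforcement-points/default/edge-clusters/<cluster1-uuid>",
  "preferred_edge_paths": [
    "/infra/sites/default/enforcement-points/default/edge-clusters/<cluster1-uuid>/edge-nodes/<n1-uuid>",
    "/infra/sites/default/enforcement-points/default/edge-clusters/<cluster1-uuid>/edge-nodes/<n2-uuid>"
  ]
}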

Scenario 2: Tier-1 Routed Segment Uses Gateway DHCP and


Different Edge Clusters in Gateway and DHCP Profile
Consider that you have two edge clusters in your network (Cluster1 and Cluster2). Both clusters
have four edge nodes each:

n Cluster1 edge nodes: N1, N2, N3, N4

n Cluster2 edge nodes: N5, N6, N7, N8


Scenario Description:

n Segment is connected to a tier-1 gateway.

n Tier-1 gateway is not connected to a tier-0 gateway.

n DHCP server profile in the tier-1 gateway uses Cluster1.

n Tier-1 gateway uses Cluster2.

n Segment is configured to use the Gateway DHCP server.

The DHCP server profile in the tier-1 gateway has the following configuration:

n Profile Type: DHCP Server

n Edge Cluster: Cluster1

n Edges: N1,N2 (allocated manually in the given sequence)

The tier-1 gateway configuration is as follows:

n Edge Cluster: Cluster2

n Edges: N5,N6 (allocated manually in the given sequence)

In this scenario, DHCP service runs on the edge nodes of Cluster2. As Cluster2 contains multiple
edge nodes, DHCP HA is autoconfigured. However, the manually allocated edges N5 and N6 on
the gateway are ignored for DHCP HA. Any two nodes from Cluster2 are randomly autoallocated
for DHCP HA.

This scenario also applies when the segment is directly connected to a tier-0 gateway, and there is
no tier-1 gateway in your network topology.

Caution You can change the edge cluster on the Gateway DHCP server after the DHCP server is
created. However, this action causes all the existing DHCP leases that are assigned to the DHCP
clients to be lost.

To summarize, the main points of this scenario are as follows:

n When you use a Gateway DHCP server and set different edge clusters in the gateway DHCP
profile and tier-1 gateway, then DHCP service is always created in the edge cluster of the
gateway.

n The edge nodes are randomly allocated from the edge cluster of the tier-1 gateway for DHCP
HA configuration.

n If no edge cluster is specified in the tier-1 gateway, the edge cluster in the DHCP profile of the
tier-1 gateway (Cluster1) is used to create the DHCP service.


Scenario 3: Tier-1 Routed Segment Uses Segment DHCP Server and


Different Edge Clusters in Gateway and DHCP Profile
In this scenario, a segment is connected to a tier-1 gateway, but you use a Segment DHCP server on the segment. Consider that you have three edge clusters in your network (Cluster1, Cluster2, Cluster3). Each cluster has two edge nodes.

n Cluster1 edge nodes: N1, N2

n Cluster2 edge nodes: N3, N4

n Cluster3 edge nodes: N5, N6

Scenario Description:

n Segment is connected to a tier-1 gateway.

n Tier-1 gateway is connected to a tier-0 gateway (optional).

n DHCP profile on the gateway uses Cluster1.

n Gateway uses Cluster2.

n Segment is configured to use Segment DHCP server.

n DHCP server profile on the segment uses Cluster3.

The DHCP server profile on the gateway is as follows:

n Profile Name: ProfileX

n Profile Type: DHCP Server

n Edge Cluster: Cluster1

n Edges: N1,N2 (allocated manually in the given sequence)

The tier-1 gateway configuration is as follows:

n Edge Cluster: Cluster2

n Edges: N3,N4 (allocated manually in the given sequence)

The profile on the Segment DHCP server is as follows:

n Profile Name: ProfileY

n Profile Type: DHCP Server

n Edge Cluster: Cluster3

n Edges: N5,N6 (allocated manually in the given sequence)

In this scenario, because the segment is configured to use a Segment DHCP server, the edge
cluster (Cluster2) in the connected tier-1 gateway is ignored to create the DHCP service. DHCP
service runs in the edge nodes of Cluster3 (N5, N6). DHCP HA is also configured. N5 becomes the
active edge node and N6 becomes the standby edge.


If edge nodes are not manually allocated from Cluster3, any two nodes from this cluster are
autoallocated for creating the DHCP service and configuring DHCP HA. One of the edge nodes
becomes an active edge and the other node becomes the standby edge. If only one edge node is
allocated manually from Cluster3, DHCP HA is not configured.

This scenario also applies when the segment is directly connected to a tier-0 gateway, and there is
no tier-1 gateway in your network topology.

Scenario 4: Tier-1 Routed Segment Uses Gateway DHCP and Same


Edge Clusters in Gateway and DHCP Profile
Consider that you have a single edge cluster (Cluster1) in your network with four edge nodes: N1,
N2, N3, N4.

Scenario Description:

n Segment is connected to a tier-1 gateway.

n Tier-1 gateway is connected to a tier-0 gateway (optional)

n Gateway and DHCP profile on the gateway use the same edge cluster (Cluster1).

n Segment is configured to use Gateway DHCP server.

The DHCP server profile on the gateway is as follows:

n Profile Type: DHCP Server

n Edge Cluster: Cluster1

n Edges: N1,N2 (allocated manually in the given sequence)

The tier-1 gateway configuration is as follows:

n Edge Cluster: Cluster1

n Edges: N3,N4 (allocated manually in the given sequence)

In this scenario, as the gateway DHCP profile and the gateway use the same edge cluster (Cluster1), the DHCP service is created on the edge nodes N1 and N2 that are specified in the gateway DHCP profile. The edge nodes N3 and N4 that you specified in the connected tier-1 gateway are ignored for creating the DHCP service.

If edge nodes are not manually set in the DHCP profile, any two nodes from Cluster1 are
autoallocated for creating the DHCP service and configuring DHCP HA. One of the edge nodes
becomes an active edge and the other edge becomes the standby edge.

To summarize, the main points of this scenario are as follows:

n When you use a Gateway DHCP server and specify the same edge cluster in the DHCP profile and the connected gateway, the DHCP service is created on the edge nodes of the DHCP profile.

n The edge nodes that you manually specified in the connected gateway are ignored.


Scenario 5: Tier-1 Routed Segment is Connected to Tier-0 Gateway


and No Edge Cluster is Set in Tier-1 Gateway
In this scenario, a segment is connected to a tier-1 gateway, and the tier-1 gateway is connected to a tier-0 gateway. Consider that you have three edge clusters in your network (Cluster1, Cluster2, Cluster3). Each cluster has two edge nodes.

n Cluster1 edge nodes: N1, N2

n Cluster2 edge nodes: N3, N4

n Cluster3 edge nodes: N5, N6

Scenario Description:

n Segment is directly connected to a tier-1 gateway.

n Tier-1 gateway is connected to a tier-0 gateway.

n DHCP server profile is specified on both tier-1 and tier-0 gateways.

n DHCP profile on tier-1 gateway uses Cluster1.

n DHCP profile on tier-0 gateway uses Cluster2.

n No edge cluster is selected in tier-1 gateway.

n Tier-0 gateway uses Cluster3.

n Segment is configured to use a Gateway DHCP server.

In this scenario, because the tier-1 gateway has no edge cluster specified, NSX falls back on the
edge cluster of the connected tier-0 gateway. DHCP service is created in the edge cluster of tier-0
gateway (Cluster3). Any two edge nodes from this edge cluster are autoallocated for creating the
DHCP service and configuring DHCP HA.

To summarize, the main points of this scenario are as follows:

n When a tier-1 gateway has no edge cluster specified, NSX falls back on the edge cluster of the
connected tier-0 gateway to create the DHCP service.

n If no edge cluster is detected in the tier-0 gateway, DHCP service is created in the edge
cluster of the tier-1 gateway DHCP profile.
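When you want to verify which edge cluster this fallback logic will use, one hedged approach is to read the locale services of the gateways and the DHCP profile, because the edge_cluster_path property is where the cluster selection is recorded. The gateway and profile IDs below are placeholders:

GET https://<nsx-mgr-ip>/policy/api/v1/infra/tier-1s/tier1-gw/locale-services
GET https://<nsx-mgr-ip>/policy/api/v1/infra/tier-0s/tier0-gw/locale-services
GET https://<nsx-mgr-ip>/policy/api/v1/infra/dhcp-server-configs/dhcp-profile-1

A missing edge_cluster_path on the tier-1 locale services corresponds to the "no edge cluster specified" condition described in this scenario.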

Scenarios: Impact of Changing Segment Connectivity on


DHCP
After you save a segment with DHCP configuration, you must be careful about changing the
connectivity of the segment.

Segment connectivity changes are allowed only when the segments and gateways belong to the
same transport zone.

The following scenarios explain the segment connectivity changes that are allowed or disallowed,
and whether DHCP is impacted in each of these scenarios.


Scenario 1: Move a Routed Segment with Gateway DHCP Server to a


Different Gateway
Consider that you have added a segment and connected it either to a tier-0 or tier-1 gateway. You
configured Gateway DHCP server on this segment, saved the segment, and connected workloads
to this segment. DHCP service is now used by the workloads on this segment.

Later, you decide to change the connectivity of this segment to another tier-0 or tier-1 gateway,
which is in the same transport zone. This change is allowed. However, when you save the
segment, an information message alerts you that changing the gateway connectivity impacts the
existing DHCP leases, which are assigned to the workloads.
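For reference, this kind of move corresponds to changing the segment's connectivity_path. A hedged Policy API sketch, with placeholder segment and gateway IDs:

PATCH https://<nsx-mgr-ip>/policy/api/v1/infra/segments/web-segment
{
  "connectivity_path": "/infra/tier-1s/tier1-gw-b"
}

As with the UI workflow, expect the existing DHCP leases on the workloads to be impacted when the segment is re-homed to the new gateway.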

Scenario 2: Move a Routed Segment with Segment DHCP Server or


Relay to a Different Gateway
Consider that you have added a segment and connected it either to a tier-0 or tier-1 gateway.
You configured Segment DHCP server or DHCP Relay on this segment, saved the segment,
and connected workloads to this segment. DHCP service is now used by the workloads on this
segment.

Later, you decide to change the connectivity of this segment to another tier-0 or tier-1 gateway,
which is in the same transport zone. This change is allowed. As the DHCP server is local to the
segment, the DHCP configuration settings, including ranges, static bindings, and DHCP options
are retained on the segment. The DHCP leases of the workloads are retained and there is no loss
of network connectivity.

After the segment is moved to a new gateway, you can continue to update the DHCP
configuration settings and other segment properties. You can change the DHCP type and DHCP
profile of a routed segment after moving the segment to a different gateway.

Scenario 3: Move a Standalone Segment with Segment DHCP Server


to a Tier-0 or Tier-1 Gateway
Consider that you have added a segment with None connectivity in your network. You have
configured Segment DHCP server on this segment, saved the segment, and connected workloads
to this segment. DHCP service is now used by the workloads on this segment.

Later, you decide to connect this segment either to a tier-0 or tier-1 gateway, which is in the same
transport zone. This change is allowed. As a Segment DHCP server existed on the segment, the
DHCP configuration settings, including ranges, static bindings, and DHCP options are retained
on the segment. The DHCP leases of the workloads are retained and there is no loss of network
connectivity.


After the segment is connected to the gateway, you can continue to update the DHCP configuration settings and other segment properties. However, you cannot select a different DHCP type or DHCP profile in the segment. For example, you cannot change the DHCP type from a Segment DHCP server to a Gateway DHCP server or a DHCP Relay. In addition, you cannot change the DHCP server profile in the segment, but you can edit the properties of the DHCP profile, if needed.

Scenario 4: Move a Standalone Segment Without DHCP


Configuration to a Tier-0 or Tier-1 Gateway
Consider that you have added a segment with None connectivity in your network. You have not configured DHCP on this segment. You saved the segment and connected workloads to it.

Later, you decide to connect this segment either to a tier-0 or tier-1 gateway, which is in the same
transport zone. This change is allowed. As no DHCP configuration existed on the segment, the
segment automatically uses the Gateway DHCP server after it is connected to the gateway. The
DHCP profile attached to this gateway gets autoselected in the segment.

Now, you can specify the DHCP configuration settings, including ranges, static bindings, and
DHCP options on the segment. You can also edit the other segment properties, if necessary.
However, you cannot change the DHCP type from a Gateway DHCP server to a Segment DHCP
server or a DHCP Relay.

Remember, you can configure only a Gateway DHCPv4 server on the segment. Gateway DHCPv6
server is not supported.

Scenario 5: Move a Segment with Tier-0 or Tier-1 Connectivity to


None Connectivity
Consider that you have added a segment to a tier-0 or tier-1 gateway in your network. You
have configured Gateway DHCP server or DHCP Relay on this segment, saved the segment,
and connected workloads to this segment. DHCP service is now used by the workloads on this
segment.

Later, you decide to change the connectivity of this segment to None. This change is not allowed.

In this scenario, the following workaround can help:

1 Temporarily disconnect the existing segment from the gateway or delete the segment.

a In NSX Manager, navigate to Networking > Segments.

b Click the vertical ellipses next to the segment, and then click Edit.

c Turn off the Gateway Connectivity option to disconnect the segment temporarily from the
gateway.

2 Add a new segment and do not connect it to any gateway.

3 Configure a Segment DHCP server on this standalone segment, if needed.

Host Switches
6
A host switch managed object is a virtual network switch that provides networking services to the various hosts in the network. It is instantiated on every host that participates in NSX networking.

The following host switches are supported in NSX:

n NSX Virtual Distributed Switch: NSX introduces a host switch that normalizes connectivity among various compute domains, including multiple VMware vCenter instances, containers, and other off-premises or cloud implementations.

n NSX Virtual Distributed Switch can be configured based on the performance required in your
environment:

n Standard: Configured for regular workloads, where normal traffic throughput is expected
on the workloads.

n Enhanced: Configured for telecom workloads, where high traffic throughput is expected on
the workloads.

n vSphere Distributed Virtual Switch: Provides centralized management and monitoring of the
networking configuration of all hosts that are associated with the switch in a VMware vCenter
environment.

This chapter includes the following topics:

n Managing NSX on a vSphere Distributed Switch

n Enhanced Datapath

Managing NSX on a vSphere Distributed Switch


Configure and run NSX on a vSphere Distributed Switch (VDS).

In NSX 4.0, you can only use a VDS switch to prepare ESXi host nodes as transport nodes. The N-VDS switch is not supported for hosts. Configure the NSX Distributed Firewall functionality on VDS in data centers and workloads where segmentation, visibility, or advanced security capabilities are desired. This ensures that distributed firewall capabilities work on a VM that is managed by vCenter Server.

However, to prepare an NSX Edge VM as a transport node, you can only use an N-VDS switch. You can connect an NSX Edge VM to any of the supported host switches (VSS or VDS) depending on the topology in your network.


After you prepare a cluster of transport node hosts with VDS as the host switch, you can do the
following:

n Manage NSX transport nodes on a VDS switch.

n Realize a segment created in NSX as an NSX Distributed Virtual port group in vCenter Server.

n Migrate VMs between VDS port groups.

Configuring a vSphere Distributed Switch


When a transport node is configured on a VDS host switch, some network parameters can only be
configured in VMware vCenter.

The following requirements must be met to install NSX on a VDS host switch:

n VMware vCenter 7.0 or a later version

n ESXi 7.0 or a later version

The created VDS switch can be configured to centrally manage networking for NSX hosts.

Configuring a VDS switch for NSX networking requires objects to be configured on NSX and in
vCenter Server.

n In vSphere:

n Create a VDS switch.

n Set MTU to at least 1600

n Add ESXi hosts to the switch. These hosts are later prepared as NSX transport nodes.

n Assign uplinks to physical NICs.

n In NSX:

n When configuring a transport node, map uplinks created in NSX uplink profile with uplinks
in VDS.

For more details on preparing a host transport node on a VDS switch, see the NSX Installation
Guide.
The following parameters can only be configured in VMware vCenter on a VDS-backed host switch:


MTU

n VDS: In VMware vCenter, set an MTU value on the switch (select the VDS, then click Actions → Settings → Edit Settings). Note: A VDS switch must have an MTU of 1600 or higher.

n NSX: Any MTU value set in an NSX uplink profile is overridden.

n Description: For a host transport node that is prepared using VDS as the host switch, the MTU value must be set on the VDS switch in vCenter Server.

Uplinks/LAGs

n VDS: In VMware vCenter, configure uplinks/LAGs on the VDS switch (select the VDS, then click Actions → Settings → Edit Settings).

n NSX: When a transport node is prepared, the teaming policy in NSX is mapped to the uplinks/LAGs configured on the VDS switch.

n Description: For a host transport node that is prepared using VDS as the host switch, the uplinks or LAGs are configured on the VDS switch. During configuration, NSX requires a teaming policy to be configured for the transport node, and this teaming policy is mapped to the uplinks/LAGs configured on the VDS switch.

NIOC

n VDS: Configure in VMware vCenter (select the VDS, then click Actions → Settings → Edit Settings).

n NSX: NIOC configuration is not available when a host transport node is prepared using a VDS switch.

n Description: For a host transport node that is prepared using VDS as the host switch, the NIOC profile can only be configured in vCenter Server.

Link Layer Discovery Protocol (LLDP)

n VDS: Configure in VMware vCenter (select the VDS, then click Actions → Settings → Edit Settings).

n NSX: LLDP configuration is not available when a host transport node is prepared using a VDS switch.

n Description: For a host transport node that is prepared using VDS as the host switch, the LLDP profile can only be configured in vCenter Server.

Add or Manage Hosts

n VDS: Manage in VMware vCenter (go to Networking → VDS Switch → Add and Manage Host).

n NSX: Hosts are prepared as transport nodes in NSX.

n Description: Before preparing a transport node using a VDS switch, that node must be added to the VDS switch in vCenter Server.

Note NIOC profiles, Link Layer Discovery Protocol (LLDP) profile, and Link Aggregation Group
(LAG) for these virtual machines are managed by VDS switches and not by NSX. As a vSphere
administrator, configure these parameters from VMware vCenter UI or by calling VDS API
commands.

After you prepare a host transport node with VDS as the host switch, NSX displays VDS as the host switch type, along with the uplink profile configured in NSX and the associated transport zones.


In VMware vCenter, the VDS switch used to prepare NSX hosts is marked as an NSX switch.

Managing NSX Distributed Virtual Port Groups


On a transport node prepared with VDS as the host switch, a segment created in NSX is realized as an NSX Distributed Virtual port group on the VDS switch in vCenter Server, and as a segment in NSX.

In earlier versions of NSX, a segment created in NSX was represented as an opaque network in vCenter Server. When running NSX on a VDS switch, a segment is represented as an NSX Distributed Virtual port group.

Any changes to the segments on the NSX network are synchronized in VMware vCenter.

In vCenter Server, an NSX Distributed Virtual Port Group is represented with a dedicated NSX port group icon.


Any segment created in NSX is realized in VMware vCenter as an NSX object. VMware vCenter displays the following details related to NSX segments:

n NSX Manager

n Virtual network identifier of the segment

n Transport zone

n Attached virtual machines

The port binding for the segment is set to Ephemeral by default. Switching parameters that are set in NSX cannot be edited in VMware vCenter, and vice versa.

Important In vCenter Server, a realized NSX Distributed Virtual port group does not require a unique name to differentiate it from other port groups on a VDS switch, so multiple NSX Distributed Virtual port groups can have the same name. Any vSphere automation that relies on port group names might therefore result in errors.

In vCenter Server, you can perform these actions on an NSX Distributed Virtual Port Group:

n Add VMkernel Adapters.

n Migrate VMs to Another Network.

However, NSX objects related to an NSX Distributed Virtual port group can only be edited in NSX
Manager. You can edit these segment properties:

n Replication Mode for the segment

n VLAN trunk ID used by the segment

n Switching Profiles (for example, Port Mirroring)

n Ports created on the segment

For details on configuring a vSphere Distributed Virtual port group, refer to the vSphere
Networking Guide.

NSX Cluster Prepared with VDS


An example of an NSX cluster prepared using VDS as the host switch.


In the sample topology diagram, two VDS switches are configured to manage NSX traffic and
vSphere traffic.

VDS-1 and VDS-2 are configured to manage networking for ESXi hosts from Cluster-1, Cluster-2,
and Cluster-3. Cluster-1 is prepared to run only vSphere traffic, whereas, Cluster-2 and Cluster-3
are prepared as host transport nodes with these VDS switches.

In vCenter Server, uplink port groups on VDS switches are assigned physical NICs. In the topology, uplinks on VDS-1 and VDS-2 are assigned to physical NICs. Depending on the hardware configuration of the ESXi host, plan how many physical NICs to assign to the switch. In addition to assigning uplinks to the VDS switch, the MTU, NIOC, LLDP, and LAG profiles are configured on the VDS switches.

After VDS switches are configured in NSX, add an uplink profile.

When preparing a cluster by applying a transport node profile (on a VDS switch), the uplinks from the transport node profile are mapped to VDS uplinks.

After preparing the clusters, ESXi hosts in Cluster-2 and Cluster-3 manage NSX traffic, while Cluster-1 manages vSphere traffic.

APIs to Configure vSphere Distributed Switch on NSX


NSX API commands to support vSphere Distributed Switch on NSX.


API Changes for vSphere Distributed Switch


For detailed information related to API calls, see the NSX API Guide.

Note Configuration done using API commands is also possible from the VMware vCenter user interface. For more information on creating an NSX transport node using vSphere Distributed Switch as the host switch, refer to the Configure a Managed Host Transport Node topic in the NSX Installation Guide.


API | NSX on vSphere Distributed Switch (VDS)

Create a transport node for a discovered node | /api/v1/fabric/discovered-nodes/<external-id/discovered-node-id>?action=create_transport_node, with a request body such as:

{
  "node_id": "d7ef478b-752c-400a-b5f0-207c04567e5d",
  "host_switch_spec": {
    "host_switches": [
      {
        "host_switch_name": "vds-1",
        "host_switch_id": "50 2b 92 54 e0 80 d8 d1-ee ab 8d a6 7b fd f9 4b",
        "host_switch_type": "VDS",
        "host_switch_mode": "STANDARD",
        "host_switch_profile_ids": [
          {
            "key": "UplinkHostSwitchProfile",
            "value": "159353ae-c572-4aca-9469-9582480a7467"
          }
        ],
        "pnics": [],
        "uplinks": [
          {
            "vds_uplink_name": "Uplink 2",
            "uplink_name": "nsxuplink1"
          }
        ],
        "is_migrate_pnics": false,
        "ip_assignment_spec": {
          "resource_type": "AssignedByDhcp"
        },
        "cpu_config": [],
        "transport_zone_endpoints": [
          {
            "transport_zone_id": "06ba5326-67ac-4f2c-9953-a8c5d326b51e",
            "transport_zone_profile_ids": [
              {
                "resource_type": "BfdHealthMonitoringProfile",
                "profile_id": "52035bb3-ab02-4a08-9884-18631312e50a"
              }
            ]
          }
        ],
        "vmk_install_migration": [],
        "pnics_uninstall_migration": [],
        "vmk_uninstall_migration": [],
        "not_ready": false
      }
    ],
    "resource_type": "StandardHostSwitchSpec"
  },
  "transport_zone_endpoints": [],
  "maintenance_mode": "DISABLED",
  "is_overridden": false,
  "resource_type": "TransportNode",
  "display_name": "TestTN"
}

VM configuration | vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo


VMkernel NIC | vim.dvs.DistributedVirtualPort

Physical NIC to uplink mapping | API: vim.host.NetworkSystem:networkSystem.updateNetworkConfig; Property: vim.host.NetworkConfig.proxySwitch

MTU | API: vim.dvs.VmwareDistributedVirtualSwitch.reconfigure; Property: VmwareDistributedVirtualSwitch.ConfigSpec.maxMtu

LAG | API: vim.dvs.VmwareDistributedVirtualSwitch.updateLacpGroupConfig; Property: vim.dvs.VmwareDistributedVirtualSwitch.LacpGroupSpec

NIOC | API: vim.dvs.VmwareDistributedVirtualSwitch.reconfigure; Property: vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec.infrastructureTrafficResourceConfig

LLDP | API: vim.dvs.VmwareDistributedVirtualSwitch.reconfigure; Property: vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec.linkDiscoveryProtocolConfig

Feature Support in a vSphere Distributed Switch Enabled to Support NSX

Comparison of features supported by a VDS switch version earlier than 7.0 and VDS version 7.0 or later (NSX enabled).

IPFIX and Port Mirroring


An NSX transport node prepared with a VDS switch supports IPFIX and port mirroring.

See Port Mirroring on a vSphere Distributed Switch.

See IPFIX Monitoring on a vSphere Distributed Switch.

SR-IOV support
SR-IOV is supported on a vSphere Distributed Switch but not on a NSX Virtual Distributed Switch.

Feature | NSX Virtual Distributed Switch | vSphere Distributed Switch
SR-IOV | No | Yes (vSphere 7.0 and later)


Stateless Cluster Host Profile Support

Feature | NSX Virtual Distributed Switch | vSphere Distributed Switch
Host Profile Stateless | Yes | Yes (vSphere 7.0 and later); No when VMkernel adapters are connected to an NSX port group on the vSphere Distributed Switch

Distributed Resource Scheduler Support

Source Host | Destination Host | DRS (NIOC Configured) | vSphere
vSphere Distributed Switch-A | vSphere Distributed Switch-B | No | No
vSphere Distributed Switch-A | vSphere Distributed Switch-A | Yes | 7.0

vMotion Support
vMotion between source vSphere Distributed Switch and destination vSphere Distributed Switch.
Both VDS switches are enabled to support NSX.

Source / VDS | Destination / VDS | Compute vMotion | Storage vMotion
vSphere Distributed Switch-A (VMware vCenter-A) | vSphere Distributed Switch-A (VMware vCenter-A) | Yes | Yes
vSphere Distributed Switch-A (VMware vCenter-A) | vSphere Distributed Switch-B (VMware vCenter-A) | Yes | Yes
vSphere Distributed Switch-A (VMware vCenter-A) | vSphere Distributed Switch-B (VMware vCenter-B) | Yes | Yes
Segment-A (VMware vCenter-A) | Segment-B (VMware vCenter-A) | No | No
Segment-A (VMware vCenter-A) | Segment-B (VMware vCenter-B) | No | No
Transport Zone-A | Transport Zone-B | No | No
NSX-A | NSX-B | No | No

vMotion between vSphere Distributed Switch (NSX enabled) and NSX Virtual Distributed Switch

Source / VDS | Destination / NSX Virtual Distributed Switch | Compute vMotion | Storage vMotion
VMware vCenter-A | VMware vCenter-A | Yes | Yes
VMware vCenter-A | VMware vCenter-B | Yes | Yes
Segment-A (VMware vCenter-A) | Segment-B (VMware vCenter-A) | No | No
Segment-A (VMware vCenter-A) | Segment-B (VMware vCenter-B) | No | No


Transport Zone-A | Transport Zone-B | No | No
NSX-A | NSX-B | No | No

vMotion between vSphere Distributed Switch (NSX enabled) and vSphere Standard Switch or
vSphere Distributed Switch

Source / VDS | Destination | Compute vMotion | Storage vMotion
VMware vCenter-A | VMware vCenter-A | Yes | Yes
VMware vCenter-A | VMware vCenter-B | Yes | Yes
Segment-A (VMware vCenter-A) | Segment-B (VMware vCenter-A) | No | No
Segment-A (VMware vCenter-A) | Segment-B (VMware vCenter-B) | No | No
Transport Zone-A | Transport Zone-B | No | No
NSX-A | NSX-B | No | No

Enhanced Networking Stack


Both VDS and NSX Virtual Distributed Switches support all features of the enhanced networking
stack.

LACP
n VDS does not support LACP in Active mode.

n NSX Virtual Distributed Switch supports LACP in Active mode.

Scale Supported in vSphere 7.0

Parameter | NSX Virtual Distributed Switch
Logical Switch | NSX Distributed Virtual port groups (in VMware vCenter) support 10000 x N, where N is the number of VDS switches in vCenter Server. NSX supports 10000 segments.

Relationship between NSX Distributed Virtual port groups and hostd memory on the host:

NSX Distributed Virtual Port Groups | Minimum Hostd Memory | Supported VMs
5000 | 600 MB | 191
10000 | 1000 MB | 409
15000 | 1500 MB | 682


License for vSphere Distributed Switch


For earlier versions of NSX, a vSphere Enterprise Plus license is required for the vSphere Distributed Switch 7.0 feature. Starting with NSX 3.1.1, the NSX Data Center and NSX Firewall licenses support the use of vSphere Distributed Switch 7.0 for all editions of VMware vCenter and vSphere.

Note With NSX licenses, you get an equivalent number of CPU licenses to use the vSphere
Distributed Switch feature. However, the NSX licenses do not provide an upgrade to vSphere
Enterprise Plus. The NSX licenses only apply to VDS on the hosts where NSX is deployed.

Procedure

1 Add a VMware vCenter compute manager if you do not have one already registered with NSX.
See Add a Compute Manager.

For a VMware vCenter compute manager that has already been registered, edit the compute
manager and provide your credentials for reauthentication.

Note To use NSX Data Center and NSX Firewall licenses for the vSphere Distributed Switch
7.0 feature, the VMware vCenter user must either be an administrator, or the user must have
Global.Licenses privileges and be a member of the LicenseService.Administrators group.

2 Log in to VMware vCenter and verify the NSX for vSphere solution asset exists. You can use
the NSX for vSphere solution asset for NSX deployments.

3 Assign your NSX Data Center or NSX Firewall license to the license asset in VMware vCenter.

Note The license asset in VMware vCenter is assigned the default NSX for vShield Endpoint
license. To use vSphere Distributed Switch, you need any valid NSX license other than the
default NSX for vShield Endpoint license.

Results

The vSphere Distributed Switch 7.0 feature is now available and you can attach hosts to a vSphere
Distributed Switch.

Enhanced Datapath
Enhanced Datapath is a networking stack mode that provides superior network performance. It is
primarily targeted for NFV workloads that require higher performance than regular workloads.

On an ESXi host, configure a vSphere Distributed Switch in NSX in the Enhanced Datapath mode.
In the Enhanced Datapath mode, you can configure overlay traffic and VLAN traffic.
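The datapath mode is selected when the host transport node is prepared. As a hedged illustration, the host_switch_mode field in the transport node's host switch specification (shown with STANDARD in the transport node API example earlier in this chapter) carries an ENS value instead; the switch name here is a placeholder and the remaining host switch fields are unchanged:

{
  "host_switch_name": "vds-1",
  "host_switch_type": "VDS",
  "host_switch_mode": "ENS_INTERRUPT"
}

ENS_INTERRUPT corresponds to the interrupt-driven Enhanced Datapath variant referenced later in this chapter, while ENS selects the polling-based Enhanced Datapath mode.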

Automatically Assign ENS Logical Cores


Automatically assign logical cores to vNICs such that dedicated logical cores manage the incoming
traffic to and outgoing traffic from vNICs.


With a switch configured in the enhanced datapath mode, if a single logical core is associated to
a vNIC, then that logical core processes bidirectional traffic coming into or going out of a vNIC.
When multiple logical cores are configured, the host automatically determines which logical core
must process a vNIC's traffic.

Assign logical cores to vNICs based on one of these parameters.

n vNIC-count: Host assumes transmission of incoming or outgoing traffic for a vNIC direction
requires same amount of the CPU resource. Each logical core is assigned the same number
of vNICs based on the available pool of logical cores. It is the default mode. The vNIC-count
mode is reliable, but is not optimal for an asymmetric traffic.

n CPU-usage: Host predicts the CPU usage to transmit incoming or outgoing traffic at each
vNIC direction by using internal statistics. Based on the usage of CPU to transmit traffic, host
changes the logical core assignments to balance load among logical cores. The CPU usage
mode is more optimal than vNIC-count, but unreliable when traffic is not steady.

In CPU usage mode, if the traffic transmitted changes frequently, then the predicted CPU
resources required and vNIC assignment might also change frequently. Too frequent assignment
changes might cause packet drops.

If the traffic patterns are symmetric among vNICs, the vNIC-count option provides reliable
behavior, which is less vulnerable to frequent changes. However, if the traffic patterns are
asymmetric, vNIC-count might result in packet drops since it does not distinguish the traffic
difference among vNICs.

In vNIC-count mode, it is recommended to configure an appropriate number of logical cores so that each logical core is assigned the same number of vNICs. If the number of vNICs associated with each logical core differs, CPU assignment is unbalanced and performance is not deterministic.

When you connect or remove a vNIC or a logical core, the host automatically reflects the changes.

Procedure

u To switch from one mode to another mode, run the following command.

set ens lcore-assignment-mode <host-switch-name> <ens-lc-mode>

Where, <ens-lc-mode> can be set to the mode vNIC-count or cpu-usage.

vNIC-count is vNIC/Direction count-based logical core assignment.

cpu-usage is CPU usage-based logical core assignment.
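For example, to move a host switch named hostswitch1 (a placeholder name) to CPU-usage-based assignment and later back to the default, the command from the procedure above would be run as:

set ens lcore-assignment-mode hostswitch1 cpu-usage
set ens lcore-assignment-mode hostswitch1 vNIC-count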

Configure Guest Inter-VLAN Routing


On overlay networks, NSX supports routing of inter-VLAN traffic on an L3 domain. During routing,
virtual distributed router (VDR) uses VLAN ID to route packets between VLAN subnets.


Inter-VLAN routing overcomes the limitation of 10 vNICs that can be used per VM. NSX
supporting inter-VLAN routing ensures that many VLAN subinterfaces can be created on the
vNIC and consumed for different networking services. For example, one vNIC of a VM can
be divided into several subinterfaces. Each subinterface belongs to a subnet, which can host a
networking service such as SNMP or DHCP. With Inter-VLAN routing, for example, a subinterface
on VLAN-10 can reach a subinterface on VLAN-10 or any other VLAN.

Each vNIC on a VM is connected to a switch through the parent logical port, which manages
untagged packets.

To create a subinterface on a switch configured in Enhanced Datapath mode, create a child port with an associated VIF by using the API call described in the procedure. The subinterface tagged with a VLAN ID is associated to a new logical switch; for example, VLAN10 is attached to logical switch LS-VLAN-10, and all subinterfaces of VLAN10 have to be attached to LS-VLAN-10. This 1:1 mapping between the VLAN ID of the subinterface and its associated logical switch is an important prerequisite. For example, adding a child port with VLAN20 to logical switch LS-VLAN-10, which is mapped to VLAN-10, is a configuration error that makes routing of packets between VLANs non-functional.

Starting from NSX 3.2.2, logical port proton APIs are replaced with the corresponding segment
port policy APIs.

Prerequisites

n Before you associate a VLAN subinterface to a logical switch, ensure that the logical switch
does not have any other associations with another VLAN subinterface. If there is a mismatch,
inter-VLAN routing on overlay networks might not work.

n Ensure that hosts run ESXi v 6.7 U2 or later versions.

Procedure

1 To create subinterfaces for a vNIC, ensure that the vNIC is updated to a parent port. Make the
following REST API call:

PATCH https://<nsx-mgr-ip>/policy/api/v1/infra/segments/<Segment to which vNIC is connected>/ports/<Seg-Port-vNIC>
{
"attachment": {
"id": "<Attachment UUID of the vNIC>",
"type": "PARENT"
},
"admin_state": "UP",
"resource_type": "SegmentPort",
"display_name": "parentport"
}


If the logical switch does not have a corresponding segment, you can make the following REST
API calls (logical port proton API):

PUT https://<nsx-mgr-ip>/api/v1/logical-ports/<Logical-Port UUID-of-the-vNIC>


{
"resource_type" : "LogicalPort",
"display_name" : "parentport",
"attachment" : {
"attachment_type" : "VIF",
"context" : {
"resource_type" : "VifAttachmentContext",
"vif_type": "PARENT"
},
"id" : "<Attachment UUID of the vNIC>"
},
"admin_state" : "UP",
"logical_switch_id" : "UUID of Logical Switch to which the vNIC is connected",
"_revision" : 0
}

2 To create child ports for a parent vNIC port on the N-VDS that is associated to the
subinterfaces on a VM, make the following API call:

Note Before making the API call, verify that a segment exists to connect child ports with the
subinterfaces on the VM.

PUT https://<nsx-mgr-ip>/policy/api/v1/infra/segments/<Segment to which child port is connected>/ports/<Child-port>
{
"attachment": {
"id": "<Attachment UUID of the CHILD port>",
"type": "CHILD",
"context_id": "<Attachment UUID of the PARENT port from Step 1>",
"traffic_tag": <VLAN ID>,
"app_id": "<ID of the attachment>" ==> display id (can be any string). Must be unique.
},
"address_bindings": [
{
"ip_address": "<IP address to the corresponding VLAN>",
"mac_address": "<vNIC MAC Address>",
"vlan_id": <VLAN ID>
}
],
"admin_state": "UP",
"resource_type": "SegmentPort",
"display_name": "<Name of the Child PORT>"
}

If the logical switch does not have a corresponding segment, you can make the following REST
API calls (logical port proton API):

POST https://<nsx-mgr-ip>/api/v1/logical-ports/
{


"resource_type" : "LogicalPort",
"display_name" : "<Name of the Child PORT>",
"attachment" : {
"attachment_type" : "VIF",
"context" : {
"resource_type" : "VifAttachmentContext",
"parent_vif_id" : "<UUID of the PARENT port from Step 1>",
"traffic_tag" : <VLAN ID>,
"app_id" : "<ID of the attachment>", ==> display id(can give any string). Must be
unique.
"vif_type" : "CHILD"
},
"id" : "<ID of the CHILD port>"
},

"logical_switch_id" : "<UUID of the Logical switch(not the PARENT PORT's logical switch)
to which Child port would be connected to>",
"address_bindings" : [ { "mac_address" : "<vNIC MAC address>", "ip_address" : "<IP
address to the corresponding VLAN>", "vlan" : <VLAN ID> } ],
"admin_state" : "UP"
}

Results

NSX creates subinterfaces on VMs.

Receive Side Scaling


Receive Side Scaling allows multiple cores on the receive side for processing incoming traffic.

Without RSS, receiving ESXi hosts only use one physical queue and hence one lcore for packet
processing. When the receive side data increases it creates a bottleneck at the single core. The
overall throughput performance might decrease. With RSS enabled on the NIC, you can configure
multiple hardware queues to process requests from VMs. Before you use a NIC card to leverage
the RSS functionality, use the VMware Compatibility Guide for I/O to confirm whether the NIC card
driver supports RSS. Most of the NIC cards support at least 4 queues. So, RSS might provide 4x
throughput performance improvements.


You can choose to configure RSS in these modes:

n enable Default Queue Receive Side Scaling (DRSS)

n enable RSS engine dedicated to a single vNIC queue

n enable RSS engine shared by multiple vNIC queues

Configure Multiple Context on Host Switch


Provide multiple cores to vNICs by configuring the Multiple Context functionality on a host switch
running in Enhanced Datapath mode. It helps improve packet performance.

On a host switch configured to run in the Enhanced Datapath mode, you can configure Multiple
Context functionality for vNIC traffic. Multiple Context means that multiple logical cores can serve
Tx (transmit) and Rx (receive) queues, in contrast to the single context, where only one logical core
serves both the Tx queue and Rx queue. A Tx and Rx queue pair represents a vNIC queue.

As an admin, you can assign Multiple Context to vNIC queues based on the network traffic load.
As traffic load increases for a vNIC queue, a single context or logical core for a specific vNIC
queue can prove to be insufficient to load balance traffic. Assigning Multiple Context to that vNIC
allocates more vCPU resources to load balance traffic.

As you design for an optimized network and increased throughput, consider these points:

n The number of logical cores assigned depends on the capacity of the host.


n The number of Default Queue RSS (DRSS) configurable on a host depends on the maximum
number of physical CPUs available on the host.

n Logical cores can be shared across DRSS and Multiple Context queues.

n Both DRSS and Multiple Context can function independently. However, configuring them
together provides additional performance benefits to physical hardware queues (DRSS)
and vNIC queues. See Configure Default Queue Receive Side Scaling for more details on
configuring DRSS.

Prerequisites

n To configure the Multiple Context functionality for a vNIC, create multiple logical cores on the
host.

n Ensure that the host switch is configured in ENS Interrupt mode or Enhanced Datapath mode.
The Multiple Context functionality is not available in the Standard mode.

Procedure

1 To verify that the host switch is configured to run in Enhanced Datapath mode:

a Navigate to System → Host Transport Nodes.

b Select the transport node.

c Select the Overview tab and verify that the Enhanced Datapath Capable parameter is set to Yes.

2 To configure Multiple Context functionality for vNIC traffic managed through Enhanced
Datapath mode, edit configuration options of VMs and set the following parameter value.
See the latest vSphere Virtual Machine Administration guide for details on how to edit VM
configuration options.

ethernetX.ctxPerDev = "3"

Where, the value 3 indicates that Multiple Context functionality is applied per vNIC queue.

Other supported values for contexts are:

n ethernetX.ctxPerDev =1 indicates that Multiple Context functionality is applied per VM.

n ethernetX.ctxPerDev =2 indicates that Multiple Context functionality is applied per vNIC.
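For example, to apply per-queue contexts to the first vNIC of a VM (ethernet0 here; the adapter index depends on the VM), the corresponding configuration entry is:

ethernet0.ctxPerDev = "3"

Treat this value as an illustration of the parameter described above rather than a recommended setting for every workload.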

Results

Enhanced Datapath improves packet throughput by using the Multiple Context functionality set for
vNIC queues.

Configure Default Queue Receive Side Scaling


Improve packet throughput by enabling Default Queue Receive Side Scaling (DRSS) on the NIC
card.


After you enable the Default Queue Receive Side Scaling (DRSS) configuration on a NIC port,
Enhanced Network Stack (ENS) manages the receive-side data arriving at physical NIC cards.
A single port on the physical NIC card makes multiple hardware queues available to receive-
side data. Each queue is assigned a local logical core from the non-uniform memory access
(NUMA) node. When inbound packets - multicast, unknown, or broadcast - arrive at a physical
NIC port, they are distributed across several hardware queues, depending on the availability of
logical cores. DRSS reduces bottlenecks processed by a single queue. DRSS is intended to serve
broadcast, unknown, or multicast (BUM) traffic.

For example, on a physical NIC card that has two ports, you can configure one port to make
multiple hardware queues available to efficiently manage receive-side (Rx) traffic. It can be done
by passing DRSS=4,0 value in the ESXi system parameters command. This parameter enables the
first physical NIC port for DRSS.

Note Configuring vNICs for multiple contexts takes effect only if the Multiple Context
functionality is enabled.

Prerequisites

n Ensure the NIC card supports Default Queue Receive Side Scaling.

Procedure

1 Install the i40en ENS NIC driver.

2 If the NIC has two ports, enable RSS on the first port of the physical NIC by running the
following command.

esxcli system module parameters set -m i40en_ens -p DRSS=4,0

Here, DRSS is enabled with 4 Rx queues on the first port and is not enabled on the second port.

The number of DRSS queues assigned depends on the number of physical CPUs available on
the host.

Note Depending on the version of the NIC card, by default, the DRSS might be enabled or
disabled.

3 If NIC teaming is in use, the configuration of both NIC ports must be the same.

esxcli system module parameters set -m i40en_ens -p DRSS=4,4

4 Unload the NIC driver so that the module parameters take effect.

5 Load the NIC driver.
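
For example, reusing the i40en_ens module name from step 2 (adjust the name for your driver), the
reload and a quick verification of the parameter might look as follows; the vmkload_mod commands
are the same ones shown in the NetQ RSS procedure later in this chapter:

# Unload the ENS NIC driver so that the new module parameters take effect
vmkload_mod -u i40en_ens
# Reload the driver
vmkload_mod i40en_ens
# Confirm the DRSS parameter that the module is configured with
esxcli system module parameters list -m i40en_ens | grep DRSS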

What to do next

Configure Multiple Context so that the ENS module can improve the packet throughput of vNIC queues.


Configure NetQ Receive Side Scaling


Enable NetQ Receive Side Scaling to offload vNIC RSS requests to a physical NIC. It improves the
packet performance of receive-side data.

When a physical NIC card sends packets to a host, the Enhanced Network Stack (ENS), which runs
when the host switch on that host is configured in Enhanced Datapath mode, distributes the data
across different logical cores on NUMA nodes. There are two ways to configure RSS engines.

As a network admin who wants to improve the throughput of receive-side data, consider one of
these ways to configure RSS and leverage its benefits.

These two modes are:

n RSS engine is dedicated to a single vNIC queue: A dedicated RSS engine completely offloads
any request coming from a vNIC to the physical NIC. In this mode, a single RSS engine is
dedicated to a single vNIC queue. It improves throughput performance because the pNIC manages
the receive-side data and shares it among the available hardware queues to serve the request.
The vNIC queues are co-located on the same logical core or fastpath as the pNIC queues.

n RSS engine is shared by multiple vNIC queues: In this mode, multiple hardware queues are
made available to vNIC queues. However, the vNIC handling flows might not be aligned with
the physical hardware queue that processes the data. This means there is no guarantee that the
vNIC and physical NIC queues will be aligned.

Note If Default Queue Receive Side Scaling (DRSS) is enabled on the NIC card, deactivate it.

Prerequisites

n Hosts must be running ESXi version 7 update 3 or later.

n Ensure NIC card supports RSS functionality.

n Supported drivers: Intel i40en (async driver). Refer to the driver documentation to confirm
whether it has an ENS-compatible RSS implementation.

Procedure

1 To enable NetQ RSS, run esxcli system module parameters set -m i40en_ens -p "DRSS=0,0 RSS=1,0".

Here, DRSS=0,0 indicates that DRSS is deactivated on both NIC ports.

RSS=1,0 indicates that NetQ RSS is enabled on the first NIC port.

2 To unload the driver, run vmkload_mod -u i40en_ens.

3 To reload the driver so that the RSS setting takes effect, run vmkload_mod i40en_ens.

4 Stop the device manager to trigger PCI fastconnect so that it can scan devices and associate
the driver with a NIC.

Run kill -HUP $(ps | grep mgr | awk '{print $1}').


5 To configure multiple RSS engines to be available to serve RSS requests from vNICs, configure
these parameters in the .vmx file of the VM.

ethernet.pnicfeatures = '4', which indicates that the RSS feature is requested by vNICs.

ethernet.ctxPerDev = '3', which indicates that multiple contexts (multiple logical cores)
are enabled to process each vNIC. The VMs connected to the vSphere switch are configured
for multiple queues, which means that multiple logical cores of a NUMA node can process the Tx
and Rx traffic coming from the vNICs.

When multiple vNICs request RSS offloading, the Enhanced Network Stack (ENS) does not
offload their RSS requests to the pNIC; instead, the shared RSS engine processes their requests.
For shared RSS, multiple RSS queues are available, but co-location of vNIC queues and pNIC
queues is not guaranteed.

6 To configure a dedicated RSS engine to process requests from a vNIC, configure this
parameter in the .vmx file of the VM.

ethernet.rssoffload=True

With the preceding configuration enabled, RSS requests from a vNIC are offloaded to the
physical NIC. Only one vNIC can offload its requests to an RSS engine. In this mode, vNIC
queues are aligned to the pNIC queues. (A consolidated .vmx sketch follows the sample output
in step 7.)

7 Verify that packet flow is distributed on the hardware queues provided by the RSS engine.

Run the following commands.

vsish

get /net/pNics/vmnicX/stats

Sample output:

rxq0: pkts=0 bytes=0 toFill=2047 toProc=0 noBuf=0 csumErr=0
rxq1: pkts=0 bytes=0 toFill=2047 toProc=0 noBuf=0 csumErr=0
rxq2: pkts=0 bytes=0 toFill=2047 toProc=0 noBuf=0 csumErr=0
rxq3: pkts=0 bytes=0 toFill=2047 toProc=0 noBuf=0 csumErr=0
rxq4: pkts=0 bytes=0 toFill=2047 toProc=0 noBuf=0 csumErr=0
rxq5: pkts=0 bytes=0 toFill=2047 toProc=0 noBuf=0 csumErr=0
rxq6: pkts=0 bytes=0 toFill=2047 toProc=0 noBuf=0 csumErr=0
rxq7: pkts=0 bytes=0 toFill=2047 toProc=0 noBuf=0 csumErr=0
txq0: pkts=0 bytes=0 toFill=0 toProc=0 dropped=0
txq1: pkts=0 bytes=0 toFill=0 toProc=0 dropped=0
txq2: pkts=0 bytes=0 toFill=0 toProc=0 dropped=0
txq3: pkts=0 bytes=0 toFill=0 toProc=0 dropped=0
txq4: pkts=0 bytes=0 toFill=0 toProc=0 dropped=0
txq5: pkts=0 bytes=0 toFill=0 toProc=0 dropped=0
txq6: pkts=0 bytes=0 toFill=0 toProc=0 dropped=0
txq7: pkts=0 bytes=0 toFill=0 toProc=0 dropped=0
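
Putting steps 5 and 6 together, the following is a minimal sketch of the .vmx entries for a
single VM. The ethernet0 index is an assumption, and the key names and values simply restate the
parameters shown in the steps above; verify the exact key casing against your ESXi release before
applying it.

Shared RSS mode (step 5):

ethernet0.pnicfeatures = "4"
ethernet0.ctxPerDev = "3"

Dedicated RSS mode (step 6):

ethernet0.rssoffload = "True"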

Chapter 7 Virtual Private Network (VPN)
NSX supports IPSec Virtual Private Network (IPSec VPN) and Layer 2 VPN (L2 VPN) on an NSX
Edge node. IPSec VPN offers site-to-site connectivity between an NSX Edge node and remote
sites. With L2 VPN, you can extend your data center by enabling virtual machines to keep their
network connectivity across geographical boundaries while using the same IP address.

Note IPSec VPN and L2 VPN are not supported in the NSX limited export release.

You must have a working NSX Edge node, with at least one configured Tier-0 or Tier-1 gateway,
before you can configure a VPN service. For more information, see "NSX Edge Installation" in the
NSX-T Data Center Installation Guide.
Beginning with NSX 2.4, you can also configure new VPN services using the NSX Manager user
interface. In earlier releases of NSX, you could only configure VPN services using REST API calls.

Important When using NSX 2.4 or later to configure VPN services, you must use new objects,
such as Tier-0 gateways, that were created using the NSX Manager UI or Policy APIs that are
included with NSX 2.4 or later release. To use existing Tier-0 or Tier-1 logical routers that were
configured before the NSX 2.4 release, you must continue to use API calls to configure a VPN
service.

System-default configuration profiles with predefined values and settings are made available for
your use during a VPN service configuration. You can also define new profiles with different
settings and select them during the VPN service configuration.

The Intel QuickAssist Technology (QAT) feature on a bare metal server is supported for IPSec VPN
bulk cryptography. Support for this feature began with NSX 3.0. For more information on support
of the QAT feature on bare metal servers, see the NSX Installation Guide.

This chapter includes the following topics:

n Understanding IPSec VPN

n Understanding Layer 2 VPN

n Adding VPN Services

n Adding IPSec VPN Sessions

n Adding L2 VPN Sessions

n Add Local Endpoints


n Adding Profiles

n Add an Autonomous Edge as an L2 VPN Client

n Check the Realized State of an IPSec VPN Session

n Understanding TCP MSS Clamping

n Troubleshooting VPN Problems

Understanding IPSec VPN


Internet Protocol Security (IPSec) VPN secures traffic flowing between two networks connected
over a public network through IPSec gateways called endpoints. NSX Edge only supports a tunnel
mode that uses IP tunneling with Encapsulating Security Payload (ESP). ESP operates directly on
top of IP, using IP protocol number 50.

IPSec VPN uses the IKE protocol to negotiate security parameters. The default UDP port is set to
500. If NAT is detected in the gateway, the port is set to UDP 4500.

NSX Edge supports a policy-based or a route-based IPSec VPN.

Beginning with NSX 2.5, IPSec VPN services are supported on both Tier-0 and Tier-1 gateways.
See Add a Tier-0 Gateway or Add a Tier-1 Gateway for more information. The Tier-0 or Tier-1
gateway must be in Active-Standby high-availability mode when used for an IPSec VPN service.
You can use segments that are connected to either Tier-0 or Tier-1 gateways when configuring an
IPSec VPN service.

An IPsec VPN service in NSX uses the gateway-level failover functionality to support a high-
availability service at the VPN service level. Tunnels are re-established on failover and VPN
configuration data is synchronized. Before the NSX 3.0 release, the IPSec VPN state was not
synchronized while tunnels were being re-established. Beginning with NSX 3.0 release, the IPSec VPN
state is synchronized to the standby NSX Edge node when the current active NSX Edge node
fails and the original standby NSX Edge node becomes the new active NSX Edge node without
renegotiating the tunnels. This feature is supported for both policy-based and route-based IPSec
VPN services.

Pre-shared key mode authentication and IP unicast traffic are supported between the NSX Edge
node and remote VPN sites. In addition, certificate authentication is supported beginning with
NSX 2.4. Only certificate types signed by one of the following signature hash algorithms are
supported.

n SHA256withRSA

n SHA384withRSA

n SHA512withRSA

Using Policy-Based IPSec VPN


Policy-based IPSec VPN requires a VPN policy to be applied to packets to determine which traffic
is to be protected by IPSec before being passed through the VPN tunnel.


This type of VPN is considered static because when a local network topology and configuration
change, the VPN policy settings must also be updated to accommodate the changes.

When using a policy-based IPSec VPN with NSX, you use IPSec tunnels to connect one or more
local subnets behind the NSX Edge node with the peer subnets on the remote VPN site.

You can deploy an NSX Edge node behind a NAT device. In this deployment, the NAT device
translates the VPN address of an NSX Edge node to a publicly accessible address facing the
Internet. Remote VPN sites use this public address to access the NSX Edge node.

You can place remote VPN sites behind a NAT device as well. You must provide the remote VPN
site's public IP address and its ID (either FQDN or IP address) to set up the IPSec tunnel. On both
ends, static one-to-one NAT is required for the VPN address.

Note DNAT is not supported on tier-0 or tier-1 gateways where policy-based IPSec VPN are
configured.

IPSec VPN can provide a secure communications tunnel between an on-premises network and
a network in your cloud software-defined data center (SDDC). For policy-based IPSec VPN,
the local and peer networks provided in the session must be configured symmetrically at both
endpoints. For example, if the cloud-SDDC has the local networks configured as X, Y, Z
subnets and the peer network is A, then the on-premises VPN configuration must have A as the
local network and X, Y, Z as the peer network. This case is true even when A is set to ANY
(0.0.0.0/0). For example, if the cloud-SDDC policy-based VPN session has the local network
configured as 10.1.1.0/24 and the peer network as 0.0.0.0/0, at the on-premises VPN
endpoint, the VPN configuration must have 0.0.0.0/0 as the local network and 10.1.1.0/24
as the peer network. If misconfigured, the IPSec VPN tunnel negotiation might fail.

The size of the NSX Edge node determines the maximum number of supported tunnels, as shown
in the following table.

Table 7-1. Number of IPSec Tunnels Supported

Edge Node Size | # of IPSec Tunnels Per VPN Session (Policy-Based) | # of Sessions Per VPN Service | # of IPSec Tunnels Per Service (16 tunnels per session)
Small | N/A (POC/Lab Only) | N/A (POC/Lab Only) | N/A (POC/Lab Only)
Medium | 128 | 128 | 2048
Large | 128 (soft limit) | 256 | 4096
Bare Metal | 128 (soft limit) | 512 | 6000

Restriction The inherent architecture of policy-based IPSec VPN restricts you from setting up a
VPN tunnel redundancy.

For information about configuring a policy-based IPSec VPN, see Add an IPSec VPN Service.


Using Route-Based IPSec VPN


Route-based IPSec VPN provides tunneling on traffic based on the static routes or routes learned
dynamically over a special interface called virtual tunnel interface (VTI) using, for example, BGP as
the protocol. IPSec secures all the traffic flowing through the VTI.

Note
n OSPF dynamic routing is not supported for routing through IPSec VPN tunnels.

n Dynamic routing for VTI is not supported on VPN that is based on Tier-1 gateways.

n Load balancer over IPSec VPN is not supported for route-based VPN terminated on Tier-1
gateways.

Route-based IPSec VPN is similar to Generic Routing Encapsulation (GRE) over IPSec, with
the exception that no additional encapsulation is added to the packet before applying IPSec
processing.

In this VPN tunneling approach, VTIs are created on the NSX Edge node. Each VTI is associated
with an IPSec tunnel. The encrypted traffic is routed from one site to another site through the VTI
interfaces. IPSec processing happens only at the VTI.

VPN Tunnel Redundancy


You can configure VPN tunnel redundancy with a route-based IPSec VPN session that is
configured on a Tier-0 gateway. With tunnel redundancy, multiple tunnels can be set up between
two sites, with one tunnel being used as the primary with failover to the other tunnels when
the primary tunnel becomes unavailable. This feature is most useful when a site has multiple
connectivity options, such as with different ISPs for link redundancy.

Important
n In NSX, IPSec VPN tunnel redundancy is supported using BGP only.

n Do not use static routing for route-based IPSec VPN tunnels to achieve VPN tunnel
redundancy.

The following figure shows a logical representation of IPSec VPN tunnel redundancy between
two sites. In this figure, Site A and Site B represent two data centers. For this example, assume
that NSX is not managing the Edge VPN Gateways in Site A, and that NSX is managing an Edge
Gateway virtual appliance in Site B.


Figure 7-1. Tunnel Redundancy in Route-Based IPSec VPN

As shown in the figure, you can configure two independent IPSec VPN tunnels by using VTIs.
Dynamic routing is configured using BGP protocol to achieve tunnel redundancy. If both IPSec
VPN tunnels are available, they remain in service. All the traffic destined from Site A to Site
B through the NSX Edge node is routed through the VTI. The data traffic undergoes IPSec
processing and goes out of its associated NSX Edge node uplink interface. All the incoming IPSec
traffic received from Site B VPN Gateway on the NSX Edge node uplink interface is forwarded to
the VTI after decryption, and then usual routing takes place.

You must configure the BGP HoldDown timer and KeepAlive timer values to detect loss of connectivity
with the peer within the required failover time. See Configure BGP.
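
For example, a KeepAlive timer of 10 seconds with a HoldDown timer of 30 seconds causes the
gateway to declare the peer dead roughly 30 seconds after the last keepalive, at which point
traffic fails over to the remaining tunnel. These numbers are only an illustration; choose timer
values that match your own failover requirement, keeping in mind that shorter timers mean faster
failover but more keepalive traffic.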

Understanding Layer 2 VPN


With Layer 2 VPN (L2 VPN), you can extend Layer 2 networks (VNIs or VLANs) across multiple
sites on the same broadcast domain. This connection is secured with a route-based IPSec tunnel
between the L2 VPN server and the L2 VPN client.

Note This L2 VPN feature is available only for NSX and does not have any third-party
interoperability.

The extended network is a single subnet with a single broadcast domain, which means the VMs
remain on the same subnet when they are moved between sites. The VMs' IP addresses do not
change when they are moved. So, enterprises can seamlessly migrate VMs between network sites.
The VMs can run on either VNI-based networks or VLAN-based networks. For cloud providers, L2
VPN provides a mechanism to onboard tenants without modifying existing IP addresses used by
their workloads and applications.


In addition to supporting data center migration, an on-premises network extended with an L2 VPN
is useful for a disaster recovery plan and dynamically engaging off-premise compute resources to
meet the increased demand.

L2 VPN services are supported on both Tier-0 and Tier-1 gateways. Only one L2 VPN service
(either client or server) can be configured for either Tier-0 or Tier-1 gateway.

Each L2 VPN session has one Generic Routing Encapsulation (GRE) tunnel. Tunnel redundancy is
not supported. An L2 VPN session can extend up to 4094 L2 segments.

VLAN-based and VNI-based segments can be extended using L2 VPN service on an NSX Edge
node that is managed in an NSX environment. You can extend L2 networks from VLAN to VNI,
VLAN to VLAN, and VNI to VNI.

Segments can be connected to either Tier-0 or Tier-1 gateways and use L2 VPN services.

VLAN trunking is also supported by using a vSphere Distributed Switch (VDS) 7.0 or later that runs
NSX. If there are sufficient compute and I/O resources, an NSX Edge cluster can extend multiple
VLAN networks over a single interface by using VLAN trunking.

Beginning with NSX 3.0, the L2 VPN path MTU discovery (PMTUD) feature is enabled by default.
With the PMTUD enabled, the source host learns the path MTU value for the destination host
through the L2 VPN tunnel and limits the length of the outgoing IP packet to the learned value.
This feature helps avoid IP fragmentation and reassembly within the tunnel, as a result improving
the L2 VPN performance.

The L2 VPN PMTUD feature is not applicable for non-IP, non-unicast, and unicast packets with
the DF (Don’t Fragment) flag cleared. The global PMTU cache timer expires every 10 minutes. To
disable or enable L2 VPN PMTUD feature, see Enable and Disable L2 VPN Path MTU Discovery.

The L2 VPN service support is provided in the following deployment scenarios.

n Between an NSX L2 VPN server and an L2 VPN client hosted on an NSX Edge that is managed
in an NSX Data Center for vSphere environment. A managed L2 VPN client supports both
VLANs and VNIs.

n Between an NSX L2 VPN server and an L2 VPN client hosted on a standalone or unmanaged
NSX Edge. An unmanaged L2 VPN client supports VLANs only.

n Between an NSX L2 VPN server and an L2 VPN client hosted on an autonomous NSX Edge.
An autonomous L2 VPN client supports VLANs only.

n Beginning with NSX 2.4 release, L2 VPN service support is available between an NSX L2
VPN server and NSX L2 VPN clients. In this scenario, you can extend the logical L2 segments
between two on-premises software-defined data centers (SDDCs).

The following table lists the compatible NSX versions that can be used for the L2 VPN server and
client.


Table 7-2. NSX L2 VPN Client

L2 VPN Server Version (NSX) | L2 VPN Client Version (NSX) - Validated Supported | L2 VPN Client Version (NSX) - Not Validated
4.0.1.1 | 4.0.1.1, 4.0.0.1, 3.2.1.2 | N/A
3.2.0 | 3.2.0, 3.1.3, 3.1.2 | 3.1.x and later
3.1.3 | 3.1.3, 3.1.2, 3.1.1 | 3.0.x and later
3.1.2 | 3.1.2, 3.1.1, 2.5.3 | 3.0.x and later
3.1.1 | 3.1.1, 3.1.0, 3.0.1 | 3.0.x and later
3.1.0 | 3.1.0, 3.0.1, 3.0.0 | 3.0.x and later
3.0.3 | 3.0.3, 3.0.2, 3.0.1 | 2.5.x and later
3.0.2 | 3.0.2, 3.0.1, 2.5.2 | 2.5.x and later
3.0.0 | 3.0.0, 2.5.0, 2.5.1 | 2.5.x and later

The following table lists the compatible NSX and NSX-v versions that can be used for the L2 VPN
server and client.

Table 7-3. NSX for vSphere L2VPN Client

L2 VPN Server Version (NSX) | L2 VPN Client Version (NSX-v) - Validated Supported | L2 VPN Client Version (NSX-v) - Not Validated
3.2.x | 6.4.12 | 6.4.x and later
3.1.x | 6.4.8 | 6.4.x and later

Enable and Disable L2 VPN Path MTU Discovery


You can enable or disable the L2 VPN path MTU (PMTU) discovery feature using CLI commands.
By default L2 VPN PMTU discovery is enabled.

Prerequisites

You must have the user name and password for the admin account to log in to the NSX Edge
node.

Procedure

1 Log in with admin privileges to the CLI of the NSX Edge node.

2 To check the status of the L2 VPN PMTU discovery feature, use the following command.

nsxedge> get dataplane l2vpn-pmtu config

If the feature is enabled, you see the following output: l2vpn_pmtu_enabled : True.

If the feature is disabled, you see the following output: l2vpn_pmtu_enabled : False.


3 To disable the L2 VPN PMTU discovery feature, use the following command.

nsxedge> set dataplane l2vpn-pmtu disabled
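
To re-enable the feature, the matching command is presumably the following; this counterpart is
inferred from the disable command above rather than taken from this guide, so confirm it with the
CLI command help on your NSX Edge node.

nsxedge> set dataplane l2vpn-pmtu enabled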

Adding VPN Services


You can add either an IPSec VPN (policy-based or route-based) or an L2 VPN using the NSX
Manager user interface (UI).

The following sections provide information about the workflows required to set up the VPN service
that you need. The topics that follow these sections provide details on how to add either an IPSec
VPN or an L2 VPN using the NSX Manager user interface.

Policy-Based IPSec VPN Configuration Workflow


Configuring a policy-based IPSec VPN service workflow requires the following high-level steps.

1 Create and enable an IPSec VPN service using an existing Tier-0 or Tier-1 gateway. See Add
an IPSec VPN Service.

2 Create a DPD (dead peer detection) profile, if you prefer not to use the system default. See
Add DPD Profiles.

3 To use a non-system default IKE profile, define an IKE (Internet Key Exchange) profile. See
Add IKE Profiles.

4 Configure an IPSec profile using Add IPSec Profiles.

5 Use Add Local Endpoints to create a VPN server hosted on the NSX Edge.

6 Configure a policy-based IPSec VPN session, apply the profiles, and attach the local endpoint
to it. See Add a Policy-Based IPSec Session. Specify the local and peer subnets to be used for
the tunnel. Traffic from a local subnet destined to the peer subnet is protected using the tunnel
defined in the session.
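
As noted at the start of this chapter, VPN services can also be created with Policy API calls
instead of the NSX Manager UI. The following curl sketch only illustrates the general shape of
such a call for enabling an IPSec VPN service on a Tier-1 gateway; the host name, credentials,
gateway ID, service ID, and the exact URL path are assumptions to be checked against the NSX API
guide for your release.

# Illustrative only: create or update an IPSec VPN service on an existing Tier-1 gateway
curl -k -u 'admin:PASSWORD' -X PATCH \
  -H 'Content-Type: application/json' \
  -d '{"resource_type": "IPSecVpnService", "enabled": true}' \
  'https://nsx-mgr.example.com/policy/api/v1/infra/tier-1s/t1-gw-01/ipsec-vpn-services/ipsec-svc-01'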

Route-Based IPSec VPN Configuration Workflow


A route-based IPSec VPN configuration workflow requires the following high-level steps.

1 Configure and enable an IPSec VPN service using an existing Tier-0 or Tier-1 gateway. See
Add an IPSec VPN Service.

2 Define an IKE profile if you prefer not to use the default IKE profile. See Add IKE Profiles.

3 If you decide not to use the system default IPSec profile, create one using Add IPSec Profiles.

4 Create a DPD profile if you do not want to use the default DPD profile. See Add DPD
Profiles.

5 Add a local endpoint using Add Local Endpoints.


6 Configure a route-based IPSec VPN session, apply the profiles, and attach the local endpoint
to the session. Provide a VTI IP in the configuration and use the same IP to configure routing.
The routes can be static or dynamic (using BGP). See Add a Route-Based IPSec Session.
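
For example, if you enter 169.254.31.1/30 as the VTI IP in the session, the peer gateway would
typically use 169.254.31.2/30 on its side of the tunnel, and each gateway would configure the
other's VTI address as its BGP neighbor (or as the next hop of a static route). The addresses here
are purely illustrative.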

L2 VPN Configuration Workflow


Configuring an L2 VPN requires that you configure an L2 VPN service in Server mode and then
another L2 VPN service in Client mode. You also must configure the sessions for the L2 VPN
server and L2 VPN client using the peer code generated by the L2 VPN Server. Following is a
high-level workflow for configuring an L2 VPN service.

1 Create an L2 VPN Service in Server mode.

a Configure a route-based IPSec VPN tunnel with a Tier-0 or Tier-1 gateway and an L2 VPN
Server service using that route-based IPSec tunnel. See Add an L2 VPN Server Service.

b Configure an L2 VPN server session, which binds the newly created route-based IPSec
VPN service and the L2 VPN server service, and automatically allocates the GRE IP
addresses. See Add an L2 VPN Server Session.

c Add segments to the L2 VPN Server sessions. This step is also described in Add an L2
VPN Server Session.

d Use Download the Remote Side L2 VPN Configuration File to obtain the peer code for
the L2 VPN Server service session, which must be applied on the remote site and used to
configure the L2 VPN Client session automatically.

2 Create an L2 VPN Service in Client mode.

a Configure another route-based IPSec VPN service using a different Tier-0 or Tier-1
gateway and configure an L2 VPN Client service using that Tier-0 or Tier-1 gateway that
you just configured. See Add an L2 VPN Client Service for information.

b Define the L2 VPN Client sessions by importing the peer code generated by the L2 VPN
Server service. See Add an L2 VPN Client Session.

c Add segments to the L2 VPN Client sessions defined in the previous step. This step is
described in Add an L2 VPN Client Session.

Add an IPSec VPN Service


NSX supports a site-to-site IPSec VPN service between a Tier-0 or Tier-1 gateway and remote
sites. You can create a policy-based or a route-based IPSec VPN service. You must create the
IPSec VPN service first before you can configure either a policy-based or a route-based IPSec
VPN session.

Note IPSec VPN is not supported in the NSX limited export release.

IPSec VPN is not supported when the local endpoint IP address goes through NAT in the same
logical router on which the IPSec VPN session is configured.


Prerequisites

n Familiarize yourself with the IPSec VPN. See Understanding IPSec VPN.

n You must have at least one Tier-0 or Tier-1 gateway configured and available for use. See Add
a Tier-0 Gateway or Add a Tier-1 Gateway for more information.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Navigate to Networking > VPN > VPN Services.

3 Select Add Service > IPSec.

4 Enter a name for the IPSec service.

This name is required.

5 From the Tier-0/Tier-1 Gateway drop-down menu, select the Tier-0 or Tier-1 gateway to
associate with this IPSec VPN service.

6 Enable or disable Admin Status.

By default, the value is set to Enabled, which means the IPSec VPN service is enabled on the
Tier-0 or Tier-1 gateway after the new IPSec VPN service is configured.

7 Set the value for IKE Log Level.

The default is set to the Info level.

8 Enter a value for Tags if you want to include this service in a tag group.

9 To enable or disable the stateful synchronization of VPN sessions, toggle Session sync.

By default, the value is set to Enabled.

10 Click Global Bypass Rules if you want to allow data packets to be exchanged between the
specified local and remote IP addresses without any IPSec protection. In the Local Networks
and Remote Networks text boxes, enter the list of local and remote subnets between which
the bypass rules are applied.

If you enable these rules, data packets are exchanged between the specified local and remote
IP sites even if their IP addresses are specified in the IPSec session rules. The default is to
use the IPSec protection when data is exchanged between local and remote sites. These rules
apply for all IPSec VPN sessions created within this IPSec VPN service.

11 Click Save.

After the new IPSec VPN service is created successfully, you are asked whether you want to
continue with the rest of the IPSec VPN configuration. If you click Yes, you are taken back to
the Add IPSec VPN Service panel. The Sessions link is now enabled and you can click it to add
an IPSec VPN session.


Results

After one or more IPSec VPN sessions are added, the number of sessions for each VPN service
will appear in the VPN Services tab. You can reconfigure or add sessions by clicking the number
in the Sessions column. You do not need to edit the service. If the number is zero, it is not
clickable and you must edit the service to add sessions.

What to do next

Use information in Adding IPSec VPN Sessions to guide you in adding an IPSec VPN session. You
also provide information for the profiles and local endpoint that are required to finish the IPSec
VPN configuration.

Add an L2 VPN Service


You configure an L2 VPN service on a Tier-0 or Tier-1 gateway. To enable the L2 VPN service, you
must first create an IPSec VPN service on the Tier-0 or Tier-1 gateway, if it does not exist yet. You
then configure an L2 VPN tunnel between an L2 VPN server (destination gateway) and an L2 VPN
client (source gateway).

To configure an L2 VPN service, use the information in the topics that follow in this section.

Prerequisites

n Familiarize yourself with IPsec VPN and L2 VPN. See Understanding IPSec VPN and
Understanding Layer 2 VPN.

n You must have at least one Tier-0 or Tier-1 gateway configured and available for use. See Add
a Tier-0 Gateway or Add a Tier-1 Gateway.

Procedure

1 Add an L2 VPN Server Service


To configure an L2 VPN Server service, you must configure the L2 VPN service in server
mode on the destination NSX Edge to which the L2 VPN client is to be connected.

2 Add an L2 VPN Client Service


After configuring the L2 VPN Server service, configure the L2 VPN service in the client mode
on another NSX Edge instance.

Add an L2 VPN Server Service


To configure an L2 VPN Server service, you must configure the L2 VPN service in server mode on
the destination NSX Edge to which the L2 VPN client is to be connected.

Procedure

1 With admin privileges, log in to NSX Manager.


2 (Optional) If an IPSec VPN service does not exist yet on either a Tier-0 or Tier-1 gateway that
you want to configure as the L2 VPN server, create the service using the following steps.

a Navigate to the Networking > VPN > VPN Services tab and select Add Service > IPSec.

b Enter a name for the IPSec VPN service.

c From the Tier-0/Tier-1 Gateway drop-down menu, select the gateway to use with the L2
VPN server.

d If you want to use values different from the system defaults, set the rest of the properties
on the Add IPSec Service pane, as needed.

e Click Save and when prompted if you want to continue configuring the IPSec VPN service,
select No.

3 Navigate to the Networking > VPN > VPN Services tab and select Add Service > L2 VPN
Server to create an L2 VPN server.

4 Enter a name for the L2 VPN server.

5 From the Tier-0/Tier-1 Gateway drop-down menu, select the same Tier-0 or Tier-1 gateway
that you used with the IPSec service you created a moment ago.

6 Enter an optional description for this L2 VPN server.

7 Enter a value for Tags if you want to include this service in a tag group.

8 Enable or disable the Hub & Spoke property.

By default, the value is set to Disabled, which means the traffic received from the L2 VPN
clients is only replicated to the segments connected to the L2 VPN server. If this property is set
to Enabled, the traffic from any L2 VPN client is replicated to all other L2 VPN clients.

9 Click Save.

After the new L2 VPN server is created successfully, you are asked whether you want to
continue with the rest of the L2 VPN service configuration. If you click Yes, you are taken back
to the Add L2 VPN Server pane and the Session link is enabled. You can use that link to create
an L2 VPN server session or use the Networking > VPN > L2 VPN Sessions tab.

Results

After one or more L2 VPN sessions are added, the number of sessions for each VPN service will
appear in the VPN Services tab. You can reconfigure or add sessions by clicking the number in
the Sessions column. You do not need to edit the service. Note that if the number is zero, it is not
clickable and you must edit the service to add sessions.

What to do next

Configure an L2 VPN server session for the L2 VPN server that you configured using information
in Add an L2 VPN Server Session as a guide.


Add an L2 VPN Client Service


After configuring the L2 VPN Server service, configure the L2 VPN service in the client mode on
another NSX Edge instance.

Procedure

1 With admin privileges, log in to NSX Manager.

2 (Optional) If an IPSec VPN service does not exist yet on either a Tier-0 or Tier-1 gateway that
you want to configure as the L2 VPN client, create the service using the following steps.

a Navigate to the Networking > VPN > VPN Services tab and select Add Service > IPSec.

b Enter a name for the IPSec VPN service.

c From the Tier-0/Tier-1 Gateway drop-down menu, select a Tier-0 or Tier-1 gateway to use
with the L2 VPN client.

d If you want to use values different from the system defaults, set the rest of the properties
on the Add IPSec Service pane, as needed.

e Click Save and when prompted if you want to continue configuring the IPSec VPN service,
select No.

3 Navigate to the Networking > VPN > VPN Services tab and select Add Service > L2 VPN
Client.

4 Enter a name for the L2 VPN Client service.

5 From the Tier-0/Tier-1 Gateway drop-down menu, select the same Tier-0 or Tier-1 gateway
that you used with the route-based IPSec tunnel you created a moment ago.

6 Optionally set the values for Description and Tags.

7 Click Save.

After the new L2 VPN client service is created successfully, you are asked whether you want to
continue with the rest of the L2 VPN client configuration. If you click Yes, you are taken back
to the Add L2 VPN Client pane and the Session link is enabled. You can use that link to create
an L2 VPN client session or use the Networking > VPN > L2 VPN Sessions tab.

Results

After one or more L2 VPN sessions are added, the number of sessions for each VPN service will
appear in the VPN Services tab. You can reconfigure or add sessions by clicking the number in
the Sessions column. You do not need to edit the service. If the number is zero, it is not clickable
and you must edit the service to add sessions.

What to do next

Configure an L2 VPN client session for the L2 VPN Client service that you configured. Use the
information in Add an L2 VPN Client Session as a guide.


Adding IPSec VPN Sessions


After you have configured an IPSec VPN service, you must add either a policy-based IPSec VPN
session or a route-based IPSec VPN session, depending on the type of IPSec VPN you want to
configure. You also provide the information for the local endpoint and profiles to use to finish the
IPSec VPN service configuration.

Using Certificate-Based Authentication for IPSec VPN Sessions


When you use certificate-based authentication for an IPSec VPN session, you must configure the
certificate details for the IPSec session in the associated local endpoint.

Note Wildcard certificates are not supported for IPSec VPN.

Refer to the following workflow for details on how to configure the certificate details for an
IPSec VPN session.

Configure Certificate-Based Authentication for an IPSec VPN Session


1 Create and enable an IPSec VPN service using an existing Tier-0 or Tier-1 gateway. See Add
an IPSec VPN Service.

2 If you do not have the necessary server certificates or CA certificates in NSX Manager, import
the certificates. See Import a Self-signed or CA-signed Certificate and Import a CA Certificate.

3 Use Add Local Endpoints to create a VPN server hosted on the logical router and select the
certificates for it.

The local ID is derived from the certificate associated with the local endpoint and depends
on the X509v3 extensions present in the certificate. The local ID can be either the X509v3
extension Subject Alternative Name (SAN) or the Distinguished Name (DN). When certificate
authentication is used, the Local ID field is not required and any value entered in it is ignored.
However, on the remote VPN gateway, you must configure this local ID as the remote ID in the peer
VPN gateway. (For a quick way to inspect a certificate's SAN and DN values, see the OpenSSL
example after this workflow.)

n If X509v3 Subject Alternative Name is found in the certificate, then one of the SAN
strings is taken as the local ID value.

If the certificate has multiple SAN fields, the following order is used to select the local ID.

Order | SAN Field
1 | IP Address
2 | DNS
3 | Email Address


For example, if the configured site certificate has the following SAN fields,

X509v3 Subject Alternative Name:


DNS:Site123.vmware.com, email:[email protected], IP Address:1.1.1.1

then the IP address 1.1.1.1 is used as the local ID. If the IP address is not available, then
the DNS string is used. And if the IP address and the DNS are not available, then the email
address is used.

n If X509v3 Subject Alternative Name is not present in the certificate, then the
Distinguished Name (DN) is used as the local ID value.

For example, if the certificate does not have any SAN fields, and its DN string is

C=US, ST=California, O=MyCompany, OU=MyOrg, CN=Site123

then the DN string automatically becomes the local ID. The local ID is the peer ID on the
remote site.

Note If the certificate details are not properly configured, it might cause the VPN session to
go down with the Down alarm of Authentication failed.

4 Configure either a policy-based or route-based IPSec VPN session. See Add a Policy-Based
IPSec Session or Add a Route-Based IPSec Session.

Make sure to configure the following settings.

a From the Authentication Mode drop-down menu, select Certificate.

b In the Remote ID textbox, enter a value to identify the peer site.

The remote ID must be a distinguished name (DN), IP address, DNS, or an email address
used in the peer site's certificate.

Note If the peer site's certificate contains an email address in the DN string, for example,

C=US, ST=California, O=MyCompany, OU=MyOrg, CN=Site123/[email protected]

then enter the Remote ID value using the following format as an example.

C=US, ST=California, O=MyCompany, OU=MyOrg, CN=Site123, [email protected]
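
To check which SAN and DN values a certificate carries before you associate it with a local
endpoint in step 3, you can inspect the certificate file with OpenSSL on any machine that has it
installed; the file name below is illustrative.

# Print the certificate's Distinguished Name (DN)
openssl x509 -in site123-cert.pem -noout -subject
# Print the certificate details and show the Subject Alternative Name entries, if any
openssl x509 -in site123-cert.pem -noout -text | grep -A 1 'Subject Alternative Name'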

Add a Policy-Based IPSec Session


When you add a policy-based IPSec VPN, IPSec tunnels are used to connect multiple local
subnets that are behind the NSX Edge node with peer subnets on the remote VPN site.


The following steps use the IPSec Sessions tab on the NSX Manager UI to create a policy-based
IPSec session. You also add information for the tunnel, IKE, and DPD profiles, and select an
existing local endpoint to use with the policy-based IPSec VPN.

Note You can also add the IPSec VPN sessions immediately after you have successfully
configured the IPSec VPN service. You click Yes when prompted to continue with the IPSec VPN
service configuration and select Sessions > Add Sessions on the Add IPsec Service panel. The first
few steps in the following procedure assume you selected No to the prompt to continue with the
IPSec VPN service configuration. If you selected Yes, proceed to step 3 in the following steps to
guide you with the rest of the policy-based IPSec VPN session configuration.

Prerequisites

n You must have configured an IPSec VPN service before proceeding. See Add an IPSec VPN
Service.

n Obtain the information for the local endpoint, IP address for the peer site, local network
subnet, and remote network subnet to use with the policy-based IPSec VPN session you are
adding. To create a local endpoint, see Add Local Endpoints.

n If you are using a Pre-Shared Key (PSK) for authentication, obtain the PSK value.

n If you are using a certificate for authentication, ensure that the necessary server certificates
and corresponding CA-signed certificates are already imported. See Chapter 23 Certificates.

n If you do not want to use the defaults for the IPSec tunnel, IKE, or dead peer detection (DPD)
profiles provided by NSX, configure the profiles you want to use instead. See Adding Profiles
for information.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Navigate to the Networking > VPN > IPSec Sessions tab.

3 Select Add IPSec Session > Policy Based.

4 Enter a name for the policy-based IPSec VPN session.

5 From the VPN Service drop-down menu, select the IPSec VPN service to which you want to
add this new IPSec session.

Note If you are adding this IPSec session from the Add IPSec Sessions dialog box, the VPN
Service name is already indicated above the Add IPSec Session button.

6 Select an existing local endpoint from the drop-down menu.

This local endpoint value is required and identifies the local NSX Edge node. If you want

to create a different local endpoint, click the three-dot menu ( ) and select Add Local
Endpoint.


7 In the Remote IP text box, enter the required IP address of the remote site.

This value is required.

8 Enter an optional description for this policy-based IPSec VPN session.

The maximum length is 1024 characters.

9 To enable or disable the IPSec VPN session, click Admin Status.

By default, the value is set to Enabled, which means the IPSec VPN session configuration is pushed
down to the NSX Edge node.

10 (Optional) From the Compliance suite drop-down menu, select a security compliance suite.

Note Compliance suite support is provided beginning with NSX 2.5. See About Supported
Compliance Suites for more information.

The default value selected is None. If you select a compliance suite, the Authentication Mode
is set to Certificate and in the Advanced Properties section, the values for IKE profile and
IPSec profile are set to the system-defined profiles for the selected security compliance suite.
You cannot edit these system-defined profiles.

11 If the Compliance Suite is set to None, select a mode from the Authentication Mode drop-
down menu.

The default authentication mode used is PSK, which means a secret key shared between NSX
Edge and the remote site is used for the IPSec VPN session. If you select Certificate, the site
certificate that was used to configure the local endpoint is used for authentication.

For more information about certificate-based authentication, see Using Certificate-Based


Authentication for IPSec VPN Sessions.

12 In the Local Networks and Remote Networks text boxes, enter at least one IP subnet address
to use for this policy-based IPSec VPN session.

These subnets must be in a CIDR format.

13 If Authentication Mode is set to PSK, enter the key value in the Pre-shared Key text box.

This secret key can be a string with a maximum length of 128 characters.

Caution Be careful when sharing and storing a PSK value because it contains some sensitive
information.


14 To identify the peer site, enter a value in Remote ID.

For peer sites using PSK authentication, this ID value must be the public IP address or the
FQDN of the peer site. For peer sites using certificate authentication, this ID value must be the
common name (CN) or distinguished name (DN) used in the peer site's certificate.

Note If the peer site's certificate contains an email address in the DN string, for example,

C=US, ST=California, O=MyCompany, OU=MyOrg, CN=Site123/[email protected]

then enter the Remote ID value using the following format as an example.

C=US, ST=California, O=MyCompany, OU=MyOrg, CN=Site123, [email protected]

If the local site's certificate contains an email address in the DN string and the peer site uses
the strongSwan IPsec implementation, enter the local site's ID value in that peer site. The
following is an example.

C=US, ST=California, O=MyCompany, OU=MyOrg, CN=Site123, [email protected]

15 To change the profiles, initiation mode, TCP MSS clamping mode, and tags used by the
policy-based IPSec VPN session, click Advanced Properties.

By default, the system generated profiles are used. Select another available profile if you do
not want to use the default. If you want to use a profile that is not configured yet, click the

three-dot menu ( ) to create another profile. See Adding Profiles.


a If the IKE Profiles drop-down menu is enabled, select the IKE profile.

b Select the IPsec tunnel profile, if the IPSec Profiles drop-down menu is not disabled.

c Select the preferred DPD profile if the DPD Profiles drop-down menu is enabled.


d Select the preferred mode from the Connection Initiation Mode drop-down menu.

Connection initiation mode defines the policy used by the local endpoint in the process of
tunnel creation. The default value is Initiator. The following table describes the different
connection initiation modes available.

Table 7-4. Connection Initiation Modes

Connection Initiation Mode | Description
Initiator | The default value. In this mode, the local endpoint initiates the IPSec VPN tunnel creation and responds to incoming tunnel setup requests from the peer gateway.
On Demand | In this mode, the local endpoint initiates the IPSec VPN tunnel creation after the first packet matching the policy rule is received. It also responds to the incoming initiation request.
Respond Only | The IPSec VPN never initiates a connection. The peer site always initiates the connection request and the local endpoint responds to that connection request.

e If you want to reduce the maximum segment size (MSS) payload of the TCP session during
the IPSec connection, enable TCP MSS Clamping, select the TCP MSS direction value, and
optionally set the TCP MSS Value.

See Understanding TCP MSS Clamping for more information.

f If you want to include this session as part of a specific group, enter the tag name in Tags.

16 Click Save.

Results

When the new policy-based IPSec VPN session is configured successfully, it is added to the list of
available IPsec VPN sessions. It is in read-only mode.

What to do next

n Verify that the IPSec VPN tunnel status is Up. See Monitor and Troubleshoot VPN Sessions for
information.

n If necessary, manage the IPSec VPN session information by clicking the three-dot menu ( )
on the left-side of the session's row. Select one of the actions you are allowed to perform.

Add a Route-Based IPSec Session


When you add a route-based IPSec VPN, tunneling is provided on traffic that is based on routes
that were learned dynamically over a virtual tunnel interface (VTI) using a preferred protocol, such
as BGP. IPSec secures all the traffic flowing through the VTI.


The steps described in this topic use the IPSec Sessions tab to create a route-based IPSec
session. You also add information for the tunnel, IKE, and DPD profiles, and select an existing
local endpoint to use with the route-based IPSec VPN.

Note You can also add the IPSec VPN sessions immediately after you have successfully
configured the IPSec VPN service. You click Yes when prompted to continue with the IPSec VPN
service configuration and select Sessions > Add Sessions on the Add IPSec Service panel. The
first few steps in the following procedure assume you selected No to the prompt to continue with
the IPSec VPN service configuration. If you selected Yes, proceed to step 3 in the following steps
to guide you with the rest of the route-based IPSec VPN session configuration.

Prerequisites

n You must have configured an IPSec VPN service before proceeding. See Add an IPSec VPN
Service.

n Obtain the information for the local endpoint, IP address for the peer site, and tunnel service
IP subnet address to use with the route-based IPSec session you are adding. To create a local
endpoint, see Add Local Endpoints.

n If you are using a Pre-Shared Key (PSK) for authentication, obtain the PSK value.

n If you are using a certificate for authentication, ensure that the necessary server certificates
and corresponding CA-signed certificates are already imported. See Chapter 23 Certificates.

n If you do not want to use the default values for the IPSec tunnel, IKE, or dead peer detection
(DPD) profiles provided by NSX, configure the profiles you want to use instead. See Adding
Profiles for information.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Navigate to Networking > VPN > IPSec Sessions.

3 Select Add IPSec Session > Route Based.

4 Enter a name for the route-based IPSec session.

5 From the VPN Service drop-down menu, select the IPSec VPN service to which you want to
add this new IPSec session.

Note If you are adding this IPSec session from the Add IPSec Sessions dialog box, the VPN
Service name is already indicated above the Add IPSec Session button.

6 Select an existing local endpoint from the drop-down menu.

This local endpoint value is required and identifies the local NSX Edge node. If you want

to create a different local endpoint, click the three-dot menu ( ) and select Add Local
Endpoint.


7 In the Remote IP text box, enter the IP address of the remote site.

This value is required.

8 Enter an optional description for this route-based IPSec VPN session.

The maximum length is 1024 characters.

9 To enable or disable the IPSec session, click Admin Status .

By default, the value is set to Enabled, which means the IPSec session configuration is pushed
down to the NSX Edge node.

10 (Optional) From the Compliance suite drop-down menu, select a security compliance suite.

Note Compliance suite support is provided beginning with NSX 2.5. See About Supported
Compliance Suites for more information.

The default value is set to None. If you select a compliance suite, the Authentication Mode is
set to Certificate and in the Advanced Properties section, the values for IKE profile and
IPSec profile are set to the system-defined profiles for the selected compliance suite. You
cannot edit these system-defined profiles.

11 Enter an IP subnet address in Tunnel Interface in the CIDR notation.

This address is required.

12 If the Compliance Suite is set to None, select a mode from the Authentication Mode drop-
down menu.

The default authentication mode used is PSK, which means a secret key shared between NSX
Edge and the remote site is used for the IPSec VPN session. If you select Certificate, the site
certificate that was used to configure the local endpoint is used for authentication.

For more information about certificate-based authentication, see Using Certificate-Based


Authentication for IPSec VPN Sessions.

13 If you selected PSK for the authentication mode, enter the key value in the Pre-shared Key text
box.

This secret key can be a string with a maximum length of 128 characters.

Caution Be careful when sharing and storing a PSK value because it contains some sensitive
information.


14 Enter a value in Remote ID.

For peer sites using PSK authentication, this ID value must be the public IP address or the
FQDN of the peer site. For peer sites using certificate authentication, this ID value must be the
common name (CN) or distinguished name (DN) used in the peer site's certificate.

Note If the peer site's certificate contains an email address in the DN string, for example,

C=US, ST=California, O=MyCompany, OU=MyOrg, CN=Site123/[email protected]

then enter the Remote ID value using the following format as an example.

C=US, ST=California, O=MyCompany, OU=MyOrg, CN=Site123, [email protected]

If the local site's certificate contains an email address in the DN string and the peer site uses
the strongSwan IPsec implementation, enter the local site's ID value in that peer site. The
following is an example.

C=US, ST=California, O=MyCompany, OU=MyOrg, CN=Site123, [email protected]

15 If you want to include this IPSec session as part of a specific group tag, enter the tag name in
Tags.

16 To change the profiles, initiation mode, TCP MSS clamping mode, and tags used by the
route-based IPSec VPN session, click Advanced Properties.

By default, the system-generated profiles are used. Select another available profile if you do
not want to use the default. If you want to use a profile that is not configured yet, click the

three-dot menu ( ) to create another profile. See Adding Profiles.


a If the IKE Profiles drop-down menu is enabled, select the IKE profile.

b Select the IPsec tunnel profile, if the IPSec Profiles drop-down menu is not disabled.


c Select the preferred DPD profile if the DPD Profiles drop-down menu is enabled.

d Select the preferred mode from the Connection Initiation Mode drop-down menu.

Connection initiation mode defines the policy used by the local endpoint in the process of
tunnel creation. The default value is Initiator. The following table describes the different
connection initiation modes available.

Table 7-5. Connection Initiation Modes

Connection Initiation Mode | Description
Initiator | The default value. In this mode, the local endpoint initiates the IPSec VPN tunnel creation and responds to incoming tunnel setup requests from the peer gateway.
On Demand | Do not use with the route-based VPN. This mode applies to policy-based VPN only.
Respond Only | The IPSec VPN never initiates a connection. The peer site always initiates the connection request and the local endpoint responds to that connection request.

17 If you want to reduce the maximum segment size (MSS) payload of the TCP session during
the IPSec connection, enable TCP MSS Clamping, select the TCP MSS direction value, and
optionally set the TCP MSS Value.

See Understanding TCP MSS Clamping for more information.

18 If you want to include this IPSec session as part of a specific group tag, enter the tag name in
Tags.

19 Click Save.

Results

When the new route-based IPSec VPN session is configured successfully, it is added to the list of
available IPsec VPN sessions. It is in read-only mode.

What to do next

n Verify that the IPSec VPN tunnel status is Up. See Monitor and Troubleshoot VPN Sessions for
information.

n Configure routing using either a static route or BGP. See Configure a Static Route or Configure
BGP.

n If necessary, manage the IPSec VPN session information by clicking the three-dot menu ( )
on the left-side of the session's row. Select one of the actions you can perform.

About Supported Compliance Suites


You can specify a security compliance suite to use to configure the security profiles used for an
IPSec VPN session.


A security compliance suite has predefined values that are used for different security parameters
and that cannot be modified. When you select a compliance suite, the predefined values are
automatically used for the security profile of the IPSec VPN session you are configuring.

The following table lists the compliance suites that are supported for IKE profiles in NSX and the
values that are predefined for each.

Compliance Suite Name | IKE Version | Encryption Algorithm | Digest Algorithm | Diffie Hellman Group
CNSA | IKE V2 | AES 256 | SHA2 384 | Group 15, Group 20
FIPS | IKE FLEX | AES 128 | SHA2 256 | Group 20
Foundation | IKE V1 | AES 128 | SHA2 256 | Group 14
PRIME | IKE V2 | AES GCM 128 | Not Set | Group 19
Suite-B-GCM-128 | IKE V2 | AES 128 | SHA2 256 | Group 19
Suite-B-GCM-256 | IKE V2 | AES 256 | SHA2 384 | Group 20

Note The AES 128 and AES 256 algorithms use the CBC mode of operation.

The following table lists the compliance suites that are supported for IPSec profiles in NSX and the
values that are predefined for each.

Compliance Suite Name | Encryption Algorithm | Digest Algorithm | PFS Group | Diffie-Hellman Group
CNSA | AES 256 | SHA2 384 | Enabled | Group 15, Group 20
FIPS | AES GCM 128 | Not Set | Enabled | Group 20
Foundation | AES 128 | SHA2 256 | Enabled | Group 14
PRIME | AES GCM 128 | Not Set | Enabled | Group 19
Suite-B-GCM-128 | AES GCM 128 | Not Set | Enabled | Group 19
Suite-B-GCM-256 | AES GCM 256 | Not Set | Enabled | Group 20

Note The AES 128 and AES 256 algorithms use the CBC mode of operation.

Adding L2 VPN Sessions


After you have configured an L2 VPN server and an L2 VPN client, you must add L2 VPN sessions
for both to complete the L2 VPN service configuration.

Add an L2 VPN Server Session


After creating an L2 VPN Server service, you must add an L2 VPN session and attach it to an
existing segment.


The following steps use the L2 VPN Sessions tab on the NSX Manager UI to create an L2 VPN
Server session. You also select an existing local endpoint and segment to attach to the L2 VPN
Server session.

Note You can also add an L2 VPN Server session immediately after you have successfully
configured the L2 VPN Server service. You click Yes when prompted to continue with the L2 VPN
Server configuration and select Sessions > Add Sessions on the Add L2 VPN Server panel. The
first few steps in the following procedure assume you selected No to the prompt to continue with
the L2 VPN Server configuration. If you selected Yes, proceed to step 3 in the following steps to
guide you with the rest of the L2 VPN Server session configuration.

Prerequisites

n You must have configured an L2 VPN Server service before proceeding. See Add an L2 VPN
Server Service.

n Obtain the information for the local endpoint and remote IP to use with the L2 VPN Server
session you are adding. To create a local endpoint, see Add Local Endpoints.

n Obtain the values for the pre-shared key (PSK) and the tunnel interface subnet to use with the
L2 VPN Server session.

n Obtain the name of the existing segment you want to attach to the L2 VPN Server session you
are creating. See Add a Segment for information.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Navigate to the Networking > VPN > L2 VPN Sessions tab.

3 Select Add L2 VPN Session > L2 VPN Server.

4 Enter a name for the L2 VPN Server session.

5 From the VPN Service drop-down menu, select the L2 VPN Server service with which the L2
VPN session is to be associated.

Note If you are adding this L2 VPN Server session from the Set L2VPN Server Sessions dialog
box, the L2 VPN Server service is already indicated above the Add L2 Session button.

6 Select an existing local endpoint from the drop-down menu.

If you want to create a different local endpoint, click the three-dot menu ( ) and select Add
Local Endpoint.

7 Enter the IP address of the remote site under Remote IP.

8 To enable or disable the L2 VPN Server session, click Admin Status.

By default, the value is set to Enabled, which means the L2 VPN Server session is to be
configured down to the NSX Edge node.


9 Enter the secret key value in Pre-shared Key.

Caution Be careful when sharing and storing a PSK value because it is considered sensitive
information.

10 Enter an IP subnet address in the Tunnel Interface using the CIDR notation.

For example, 4.5.6.6/24. This subnet address is required.

11 Enter a value in Remote ID.

For peer sites using certificate authentication, this ID must be the common name in the peer
site's certificate. For PSK peers, this ID can be any string. Preferably, use the public IP address
of the VPN or an FQDN for the VPN services as the Remote ID.

12 If you want to include this session as part of a specific group, enter the tag name in Tags.

13 Click Advanced Properties if you want to reduce the maximum segment size (MSS) payload of
the TCP session during the L2 VPN connection.

By default, TCP MSS Clamping is enabled and the TCP MSS Direction is set to Both. See
Understanding TCP MSS Clamping for more information.
a Enable or disable TCP MSS Clamping.

b Set the TCP MSS Value, if necessary. If the field is left blank, the value is automatically
assigned.

14 Click Save and click Yes when prompted if you want to continue with the VPN service
configuration.

You are returned to the Add L2VPN Sessions panel and the Segments link is now enabled.

15 Attach an existing segment to the L2 VPN Server session.

a Click Segments > Set Segments.

b In the Set Segments dialog box, click Set Segment to attach an existing segment to the L2
VPN Server session.

c From the Segment drop-down menu, select the VNI-based or VLAN-based segment that
you want to attach to the session.

d Enter a unique value in the VPN Tunnel ID that is used to identify the segment that you
selected.

e In the Local Egress Gateway IP text box, enter the IP address of the local gateway that
your workload VMs on the segment use as their default gateway. The same IP address can
be configured in the remote site on the extended segment.

f Click Save and then Close.

In the Set L2VPN Sessions pane or dialog box, the system has incremented the Segments
count for the L2 VPN Server session.

16 To finish the L2 VPN Server session configuration, click Close Editing.


Results

In the VPN Services tab, the system incremented the Sessions count for the L2 VPN Server
service that you configured.

If you have attached one or more segments to the session, you see the number of segments for
each session in the L2 VPN Sessions tab. You can reconfigure or add segments by clicking the
number in the Segments column. You do not need to edit the session. If the number is zero, it is
not clickable and you must edit the session to add segments.

What to do next

To complete the L2 VPN service configuration, you must also create an L2 VPN service in Client
mode and an L2 VPN client session. See Add an L2 VPN Client Service and Add an L2 VPN Client
Session.

Add an L2 VPN Client Session


You must add an L2 VPN Client session after creating an L2 VPN Client service, and attach it to an
existing segment.

The following steps use the L2 VPN Sessions tab on the NSX Manager UI to create an L2 VPN
Client session. You also select an existing local endpoint and segment to attach to the L2 VPN
Client session.

Note You can also add an L2 VPN Client session immediately after you have successfully
configured the L2 VPN Client service. Click Yes when prompted to continue with the L2 VPN
Client configuration and select Sessions > Add Sessions on the Add L2 VPN Client panel. The first
few steps in the following procedure assume you selected No to the prompt to continue with the
L2 VPN Client configuration. If you selected Yes, proceed to step 3 in the following steps to guide
you with the rest of the L2 VPN Client session configuration.

Prerequisites

n You must have configured an L2 VPN Client service before proceeding. See Add an L2 VPN
Client Service.

n Obtain the IP addresses information for the local IP and remote IP to use with the L2 VPN
Client session you are adding.

n Obtain the peer code that was generated during the L2 VPN server configuration. See
Download the Remote Side L2 VPN Configuration File.

n Obtain the name of the existing segment you want to attach to the L2 VPN Client session you
are creating. See Add a Segment.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select the Networking > VPN > L2 VPN Sessions.


3 Select Add L2 VPN Session > L2 VPN Client.

4 Enter a name for the L2 VPN Client session.

5 From the VPN Service drop-down menu, select the L2 VPN Client service with which the L2
VPN session is to be associated.

Note If you are adding this L2 VPN Client session from the Set L2VPN Client Sessions dialog
box, the L2 VPN Client service is already indicated above the Add L2 Session button.

6 In the Local IP address text box, enter the IP address of the L2 VPN Client session.

7 Enter the remote IP address of the IPSec tunnel to be used for the L2 VPN Client session.

8 In the Peer Configuration text box, enter the peer code generated when you configured the
L2 VPN Server service.

9 Enable or disable Admin Status.

By default, the value is set to Enabled, which means the L2 VPN Client session is to be
configured down to the NSX Edge node.

10 Click Save and click Yes when prompted if you want to continue with the VPN service
configuration.

11 Attach an existing segment to the L2 VPN Client session.

a Select Segments > Add Segments.

b In the Set Segments dialog box, click Add Segment.

c From the Segment drop-down menu, select the VNI-based or VLAN-based segment you
want to attach to the L2 VPN Client session.

d Enter a unique value in the VPN Tunnel ID that is used to identify the segment that you
selected.

e Click Close.

12 To finish the L2 VPN Client session configuration, click Close Editing.

Results

In the VPN Services tab, the sessions count is updated for the L2 VPN Client service that you
configured.

If you have attached one or more segments to the session, you see the number of segments for
each session in the L2 VPN Sessions tab. You can reconfigure or add segments by clicking the
number in the Segments column. You do not need to edit the session. If the number is zero, it is
not clickable and you must edit the session to add segments.

Download the Remote Side L2 VPN Configuration File


To configure the L2 VPN client session, you must obtain the peer code that was generated when
you configured the L2 VPN server session.


Prerequisites

n You must have configured an L2 VPN server service and a session successfully before
proceeding. See Add an L2 VPN Server Service.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Navigate to the Networking > VPN > L2 VPN Sessions tab.

3 In the table of L2 VPN sessions, expand the row for the L2 VPN server session you plan to use
for the L2 VPN client session configuration.

4 Click Download Config and click Yes on the Warning dialog box.

A text file with the name L2VPNSession_<name-of-L2-VPN-server-session>_config.txt is
downloaded. It contains the peer code for the remote side L2 VPN configuration.

Caution Be careful when storing and sharing the peer code because it contains a PSK value,
which is considered sensitive information.

For example, L2VPNSession_L2VPNServer_config.txt contains the following configuration.

[
{
"transport_tunnel_path": "/infra/tier-0s/ServerT0_AS/locale-services/1-
policyconnectivity-693/ipsec-vpn-services/IpsecService1/sessions/Routebase1",
"peer_code":
"MCw3ZjBjYzdjLHsic2l0ZU5hbWUiOiJSb3V0ZWJhc2UxIiwic3JjVGFwSXAiOiIxNjkuMjU0LjY0LjIiLCJkc3RUYX
BJcCI6IjE2OS4yNTQuNjQuMSIsImlrZU9wdGl
vbiI6ImlrZXYyIiwiZW5jYXBQcm90byI6ImdyZS9pcHNlYyIsImRoR3JvdXAiOiJkaDE0IiwiZW5jcnlwdEFuZERpZ2
VzdCI6ImFlcy1nY20vc2hhLTI1NiIsInBzayI
6IlZNd2FyZTEyMyIsInR1bm5lbHMiOlt7ImxvY2FsSWQiOiI2MC42MC42MC4xIiwicGVlcklkIjoiNTAuNTAuNTAuMS
IsImxvY2FsVnRpSXAiOiIxNjkuMi4yLjMvMzEifV19"
}
]

5 Copy the peer code, which you use to configure the L2 VPN client service and session.

Using the preceding configuration file example, the following peer code is what you copy to
use with the L2 VPN client configuration.

MCw3ZjBjYzdjLHsic2l0ZU5hbWUiOiJSb3V0ZWJhc2UxIiwic3JjVGFwSXAiOiIxNjkuMjU0LjY0LjIiLCJkc3RUYXB
JcCI6IjE2OS4yNTQuNjQuMSIsImlrZU9wdGl
vbiI6ImlrZXYyIiwiZW5jYXBQcm90byI6ImdyZS9pcHNlYyIsImRoR3JvdXAiOiJkaDE0IiwiZW5jcnlwdEFuZERpZ2
VzdCI6ImFlcy1nY20vc2hhLTI1NiIsInBzayI
6IlZNd2FyZTEyMyIsInR1bm5lbHMiOlt7ImxvY2FsSWQiOiI2MC42MC42MC4xIiwicGVlcklkIjoiNTAuNTAuNTAuMS
IsImxvY2FsVnRpSXAiOiIxNjkuMi4yLjMvMzEifV19


What to do next

Configure the L2 VPN Client service and session. See Add an L2 VPN Client Service and Add an
L2 VPN Client Session.

Add Local Endpoints


You must configure a local endpoint to use with the IPSec VPN that you are configuring.

The following steps use the Local Endpoints tab on the NSX Manager UI. You can also create a
local endpoint while in the process of adding an IPSec VPN session by clicking the three-dot menu

( ) and selecting Add Local Endpoint. If you are in the middle of configuring an IPSec VPN
session, proceed to step 3 in the following steps to guide you with creating a new local endpoint.

Prerequisites

n If you are using a certificate-based authentication mode for the IPSec VPN session that is to
use the local endpoint you are configuring, obtain the information about the certificate that the
local endpoint must use.

n Ensure that you have configured an IPSec VPN service to which this local endpoint is to be
associated.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Navigate to Networking > VPN > Local Endpoints and click Add Local Endpoint.

3 Enter a name for the local endpoint.

4 From the VPN Service drop-down menu, select the IPSec VPN service to which this local
endpoint is to be associated.

5 Enter an IP address for the local endpoint.

For an IPSec VPN service running on a Tier-0 gateway, the local endpoint IP address must be
different from the Tier-0 gateway's uplink interface IP address. The local endpoint IP address
you provide is associated with the loopback interface for the Tier-0 gateway and is also
published as a routable IP address over the uplink interface. For IPSec VPN service running
on a Tier-1 gateway, in order for the local endpoint IP address to be routable, the route
advertisement for IPSec local endpoints must be enabled in the Tier-1 gateway configuration.
See Add a Tier-1 Gateway for more information.

6 If you are using a certificate-based authentication mode for the IPSec VPN session, from the
Site Certificate drop-down menu, select the certificate that is to be used by the local endpoint.

7 (Optional) Add a description in Description.


8 Enter the Local ID value that is used for identifying the local NSX Edge instance.

This local ID is the peer ID on the remote site. The local ID must be either the public IP address
or FQDN of the remote site. For IPSec VPN sessions with certificate-based authentication and
are associated with the local endpoint, the Local ID is derived from the certificate associated
with the local endpoint. The ID specified in the Local ID text box is ignored. The local ID
derived from the certificate for a VPN session depends on the extensions present in the
certificate.

n If the X509v3 extension X509v3 Subject Alternative Name is not present in the certificate,
then the Distinguished Name (DN) is used as the local ID value.

For example, if the certificate does not have any Subject Alternative Name (SAN) fields and
its DN string is:

C=US, ST=California, O=MyCompany, OU=MyOrg, CN=Site123

then the DN string is used as the local ID. This local ID is the peer ID on the remote site.

n If the X509v3 extension X509v3 Subject Alternative Name is found in the certificate, then
one of the SAN fields is taken as the local ID value.

If the certificate has multiple SAN fields, then the following order is used to select the local
ID.

Order SAN Field

1 IP Address

2 DNS

3 Email Address

For example, if the configured site certificate has the following SAN fields:

x509v3 Subject Alternative Name:


DNS:Site123.vmware.com, email:[email protected], IP Address:1.1.1.1

then the IP address 1.1.1.1 is used as the local ID. If the IP address is not available, then
the DNS string is used. And if the IP address and DNS are not available, then the email
address is used.

To see the local ID that is used for an IPSec VPN session, do the following:

a Navigate to Networking > VPN and then click the IPSec Sessions tab.

b Expand the IPSec VPN session.

c Click Download Config to download the configuration file which contains the local ID.

9 From the Trusted CA Certificates and Certificate Revocation List drop-down menus, select
the appropriate certificates that are required for the local endpoint.

10 (Optional) Specify a tag.


11 Click Save.
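
A local endpoint can also be defined through the Policy API. The sketch below follows the path layout shown in the peer-code example earlier in this chapter (a tier-0 gateway with locale services); the local-endpoints child path, the IPSecVpnLocalEndpoint resource type, and the local_address and local_id field names are assumptions and should be verified against the NSX REST API reference. All identifiers and addresses are placeholders.

PUT https://<nsx-mgr>/policy/api/v1/infra/tier-0s/<tier-0-id>/locale-services/<locale-service-id>/
ipsec-vpn-services/<service-id>/local-endpoints/<endpoint-id>
{
    "resource_type": "IPSecVpnLocalEndpoint",
    "display_name": "LE-1",
    "local_address": "192.0.2.10",
    "local_id": "192.0.2.10"
}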

Adding Profiles
NSX provides the system-generated IPSec tunnel profile and an IKE profile that are assigned by
default when you configure either an IPSec VPN or L2 VPN service. A system-generated DPD
profile is created for an IPSec VPN configuration.

The IKE and IPSec profiles provide information about the algorithms that are used to authenticate,
encrypt, and establish a shared secret between network sites. The DPD profile provides
information about the number of seconds to wait in between probes to detect if an IPSec peer
site is alive or not.

If you decide not to use the default profiles provided by NSX, you can configure your own profile
using the information in the topics that follow in this section.

Add IKE Profiles


The Internet Key Exchange (IKE) profiles provide information about the algorithms that are used to
authenticate, encrypt, and establish a shared secret between network sites when you establish an
IKE tunnel.

NSX provides system-generated IKE profiles that are assigned by default when you configure an
IPSec VPN or L2 VPN service. The following table lists the default profiles provided.

Table 7-6. Default IKE Profiles Used for IPSec VPN or L2 VPN Services

Default IKE Profile Name        Description

nsx-default-l2vpn-ike-profile   n Used for an L2 VPN service configuration.
                                n Configured with IKE V2, AES CBC 128 encryption algorithm, SHA2 256
                                  algorithm, and Diffie-Hellman group 14 key exchange algorithm.

nsx-default-l3vpn-ike-profile   n Used for an IPSec VPN service configuration.
                                n Configured with IKE V2, AES CBC 128 encryption algorithm, SHA2 256
                                  algorithm, and Diffie-Hellman group 14 key exchange algorithm.

Instead of using the default IKE profiles, you can select one of the compliance suites
supported starting with NSX 2.5. See About Supported Compliance Suites for more information.

If you decide not to use the default IKE profiles or compliance suites provided, you can configure
your own IKE profile using the following steps.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > VPN and then click the Profiles tab.

3 Select the IKE Profiles profile type, and click Add IKE Profile.

4 Enter a name for the IKE profile.


5 From the IKE Version drop-down menu, select the IKE version to use to set up a security
association (SA) in the IPSec protocol suite.

Table 7-7. IKE Versions

IKE Version Description

IKEv1 When selected, the IPSec VPN initiates and responds to an IKEv1 protocol only.

IKEv2 This version is the default. When selected, the IPSec VPN initiates and responds to an
IKEv2 protocol only.

IKE-Flex If this version is selected and if the tunnel establishment fails with the IKEv2 protocol,
the source site does not fall back and initiate a connection with the IKEv1 protocol.
Instead, if the remote site initiates a connection with the IKEv1 protocol, then the
connection is accepted.


6 Select the encryption, digest, and Diffie-Hellman group algorithms from the drop-down
menus. You can select multiple algorithms to apply or deselect any selected algorithms you
do not want to be applied.

Table 7-8. Algorithms Used

Encryption
n Valid values: AES 128 (default), AES 256, AES GCM 128, AES GCM 192, AES GCM 256
n Description: The encryption algorithm used during the Internet Key Exchange (IKE) negotiation. The
  AES 128 and AES 256 algorithms use the CBC mode of operation. The AES-GCM algorithms are
  supported when used with IKEv2. They are not supported when used with IKEv1.

Digest
n Valid values: SHA2 256 (default), SHA1, SHA2 384, SHA2 512
n Description: The secure hashing algorithm used during the IKE negotiation. If AES-GCM is the only
  encryption algorithm selected in the Encryption Algorithm text box, then no hash algorithms can be
  specified in the Digest Algorithm text box, per section 8 in RFC 5282. In addition, the Pseudo-Random
  Function (PRF) algorithm PRF-HMAC-SHA2-256 is implicitly selected and used in the IKE security
  association (SA) negotiation. The PRF-HMAC-SHA2-256 algorithm must also be configured on the peer
  gateway in order for phase 1 of the IKE SA negotiation to succeed. If more algorithms are specified in
  the Encryption Algorithm text box, in addition to the AES-GCM algorithm, then one or more hash
  algorithms can be selected in the Digest Algorithm text box. In that case, the PRF algorithm used in the
  IKE SA negotiation is implicitly determined based on the hash algorithms configured. At least one of the
  matching PRF algorithms must also be configured on the peer gateway in order for phase 1 of the IKE
  SA negotiation to succeed. For example, if the Encryption Algorithm text box contains AES 128 and
  AES GCM 128 and SHA1 is specified in the Digest Algorithm text box, then the PRF-HMAC-SHA1
  algorithm is used during the IKE SA negotiation. It must also be configured in the peer gateway.

Diffie-Hellman Group
n Valid values: Group 14 (default), Group 2, Group 5, Group 15, Group 16, Group 19, Group 20, Group 21
n Description: The cryptography schemes that the peer site and the NSX Edge use to establish a shared
  secret over an insecure communications channel.


Note When you attempt to establish an IPSec VPN tunnel with a GUARD VPN Client
(previously QuickSec VPN Client) using two encryption algorithms or two digest algorithms,
the GUARD VPN Client adds additional algorithms in the proposed negotiation list. For
example, if you specified AES 128 and AES 256 as the encryption algorithms and SHA2 256
and SHA2 512 as the digest algorithms to use in the IKE profile you are using to establish the
IPSec VPN tunnel, the GUARD VPN Client also proposes AES 192 (using CBC mode) and SHA2
384 in the negotiation list. In this case, NSX uses the first encryption algorithm you selected
when establishing the IPSec VPN tunnel.

7 Enter a security association (SA) lifetime value, in seconds, if you want it different from the
default value of 86400 seconds (24 hours).

8 Provide a description and add a tag, as needed.

9 Click Save.

Results

A new row is added to the table of available IKE profiles. To edit or delete a non-system created

profile, click the three-dot menu ( ) and select from the list of actions available.
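
A custom IKE profile can likewise be created with a single Policy API call instead of the UI steps above. Treat the following as a sketch: the ipsec-vpn-ike-profiles path, the IPSecVpnIkeProfile resource type, and the enum spellings (IKE_V2, AES_128, SHA2_256, GROUP14) are assumptions to confirm against the NSX REST API reference. The values correspond to the UI defaults described in this section.

PATCH https://<nsx-mgr>/policy/api/v1/infra/ipsec-vpn-ike-profiles/example-ike-profile
{
    "resource_type": "IPSecVpnIkeProfile",
    "display_name": "example-ike-profile",
    "ike_version": "IKE_V2",
    "encryption_algorithms": ["AES_128"],
    "digest_algorithms": ["SHA2_256"],
    "dh_groups": ["GROUP14"],
    "sa_life_time": 86400
}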

Add IPSec Profiles


The Internet Protocol Security (IPSec) profiles provide information about the algorithms that are
used to authenticate, encrypt, and establish a shared secret between network sites when you
establish an IPSec tunnel.

NSX provides system-generated IPSec profiles that are assigned by default when you configure an
IPSec VPN or L2 VPN service. The following table lists the default IPSec profiles provided.

Table 7-9. Default IPSec Profiles Used for IPSec VPN or L2 VPN Services

Name of Default IPSec Profile      Description

nsx-default-l2vpn-tunnel-profile   n Used for L2 VPN.
                                   n Configured with AES GCM 128 encryption algorithm and Diffie-Hellman
                                     group 14 key exchange algorithm.

nsx-default-l3vpn-tunnel-profile   n Used for IPSec VPN.
                                   n Configured with AES GCM 128 encryption algorithm and Diffie-Hellman
                                     group 14 key exchange algorithm.

Instead of the default IPSec profile, you can also select one of the supported compliance suites.
See About Supported Compliance Suites for more information.

If you decide not to use the default IPSec profiles or compliance suites provided, you can
configure your own using the following steps.

Procedure

1 With admin privileges, log in to NSX Manager.


2 Select Networking > VPN and then click the Profiles tab.

3 Select the IPSec Profiles profile type, and click Add IPSec Profile.

4 Enter a name for the IPSec profile.

5 From the drop-down menus, select the encryption, digest, and Diffie-Hellman algorithms. You
can select multiple algorithms to apply.

Deselect the ones you do not want used.

Table 7-10. Algorithms Used

Encryption
n Valid values: AES GCM 128 (default), AES 128, AES 256, AES GCM 192, AES GCM 256, No Encryption
  Auth AES GMAC 128, No Encryption Auth AES GMAC 192, No Encryption Auth AES GMAC 256, No
  Encryption
n Description: The encryption algorithm used during the Internet Protocol Security (IPSec) negotiation.
  The AES 128 and AES 256 algorithms use the CBC mode of operation.

Digest
n Valid values: SHA1, SHA2 256, SHA2 384, SHA2 512
n Description: The secure hashing algorithm used during the IPSec negotiation.

Diffie-Hellman Group
n Valid values: Group 14 (default), Group 2, Group 5, Group 15, Group 16, Group 19, Group 20, Group 21
n Description: The cryptography schemes that the peer site and NSX Edge use to establish a shared
  secret over an insecure communications channel.

6 Deselect PFS Group if you decide not to use the PFS Group protocol on your VPN service.

It is selected by default.

7 In the SA Lifetime text box, modify the default number of seconds before the IPSec tunnel
must be re-established.

By default, an SA lifetime of 24 hours (86400 seconds) is used.

8 Select the value for DF Bit to use with the IPSec tunnel.

The value determines how to handle the "Don't Fragment" (DF) bit included in the data packet
received. The acceptable values are described in the following table.


Table 7-11. DF Bit Values

DF Bit Value Description

COPY The default value. When this value is selected, NSX copies the value of the DF bit from
the received packet into the packet which is forwarded. This value implies that if the
data packet received has the DF bit set, after encryption, the packet also has the DF
bit set.

CLEAR When this value is selected, NSX ignores the value of the DF bit in the data packet
received, and the DF bit is always 0 in the encrypted packet.

9 Provide a description and add a tag, if necessary.

10 Click Save.

Results

A new row is added to the table of available IPSec profiles. To edit or delete a non-system created

profile, click the three-dot menu ( ) and select from the list of actions available.
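
A custom IPSec profile can also be expressed through the Policy API. The following is a sketch only: the ipsec-vpn-tunnel-profiles path, the IPSecVpnTunnelProfile resource type, and the df_policy and enable_perfect_forward_secrecy field names are assumptions to verify against the NSX REST API reference. The empty digest list reflects the behavior described above when an AES GCM cipher is the only encryption algorithm selected.

PATCH https://<nsx-mgr>/policy/api/v1/infra/ipsec-vpn-tunnel-profiles/example-ipsec-profile
{
    "resource_type": "IPSecVpnTunnelProfile",
    "display_name": "example-ipsec-profile",
    "encryption_algorithms": ["AES_GCM_128"],
    "digest_algorithms": [],
    "dh_groups": ["GROUP14"],
    "enable_perfect_forward_secrecy": true,
    "df_policy": "COPY",
    "sa_life_time": 86400
}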

Add DPD Profiles


A DPD (Dead Peer Detection) profile provides information about the number of seconds to wait in
between probes to detect if an IPSec peer site is alive or not.

NSX provides a system-generated DPD profile, named nsx-default-l3vpn-dpd-profile, that is
assigned by default when you configure an IPSec VPN service. This default DPD profile uses the
periodic DPD probe mode.

If you decide not to use the default DPD profile provided, you can configure your own using the
following steps.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Navigate to Networking > VPN > Profiles.

3 Select DPD Profiles from the Select Profile Type drop-down menu, and click Add DPD
Profile.

4 Enter a name for the DPD profile.

5 From the DPD Probe Mode drop-down menu, select Periodic or On Demand mode.

For a periodic DPD probe mode, a DPD probe is sent every time the specified DPD probe
interval time is reached.

For an on-demand DPD probe mode, a DPD probe is sent if no IPSec packet is received from
the peer site after an idle period. The value in DPD Probe Interval determines the idle period
used.


6 In the DPD Probe Interval text box, enter the number of seconds you want the NSX Edge
node to wait before sending the next DPD probe.

For a periodic DPD probe mode, the valid values are between 3 and 360 seconds. The default
value is 60 seconds.

For an on-demand probe mode, the valid values are between 1 and 10 seconds. The default
value is 3 seconds.

When the periodic DPD probe mode is set, the IKE daemon running on the NSX Edge sends a
DPD probe periodically. If the peer site responds within half a second, the next DPD probe is
sent after the configured DPD probe interval time has been reached. If the peer site does not
respond, the DPD probe is sent again after waiting for half a second. If the remote peer site
continues not to respond, the IKE daemon resends the DPD probe until a response is received or
the retry count has been reached. Before the peer site is declared dead, the IKE daemon resends
the DPD probe up to the maximum number of times specified in the Retry Count property. After
the peer site is declared dead, the NSX Edge node then tears down the security association (SA)
on the dead peer's link.

When the on-demand DPD mode is set, the DPD probe is sent only if no IPSec traffic is
received from the peer site after the configured DPD probe interval time has been reached.

7 In the Retry Count text box, enter the number of retries allowed.

The valid values are between 1 and 100. The default retry count is 5.

8 Provide a description and add a tag, as needed.

9 To enable or disable the DPD profile, click the Admin Status toggle.

By default, the value is set to Enabled. When the DPD profile is enabled, the DPD profile is
used for all IPSec sessions in the IPSec VPN service that uses the DPD profile.

10 Click Save.

Results

A new row is added to the table of available DPD profiles. To edit or delete a non-system created

profile, click the three-dot menu ( ) and select from the list of actions available.
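
A custom DPD profile can also be created through the Policy API. The sketch below mirrors the periodic-mode defaults described in this section; the ipsec-vpn-dpd-profiles path, the IPSecVpnDpdProfile resource type, and the dpd_probe_mode, dpd_probe_interval, and retry_count field names are assumptions to confirm against the NSX REST API reference.

PATCH https://<nsx-mgr>/policy/api/v1/infra/ipsec-vpn-dpd-profiles/example-dpd-profile
{
    "resource_type": "IPSecVpnDpdProfile",
    "display_name": "example-dpd-profile",
    "dpd_probe_mode": "PERIODIC",
    "dpd_probe_interval": 60,
    "retry_count": 5,
    "enabled": true
}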

Add an Autonomous Edge as an L2 VPN Client


You can use L2 VPN to extend your Layer 2 networks to a site that is not managed by NSX.
An autonomous NSX Edge can be deployed on the site, as an L2 VPN client. The autonomous
NSX Edge is simple to deploy, easily programmable, and provides high-performance VPN. The
autonomous NSX Edge is deployed using an OVF file on a host that is not managed by NSX. You
can also enable high availability (HA) for VPN redundancy by deploying primary and secondary
autonomous Edge L2 VPN clients.


Prerequisites

n Create a port group and bind it to the vSwitch on your host. Ensure that promiscuous mode
and forged transmits are accepted in the port group's security settings.

n Create a port group for your internal L2 extension port.

n Obtain the IP addresses for the local IP and remote IP to use with the L2 VPN Client session
you are adding.

n Obtain the peer code that was generated during the L2 VPN server configuration.

Procedure

1 Using vSphere Web Client, log in to the VMware vCenter that manages the non-NSX
environment.

2 Select Hosts and Clusters and expand clusters to show the available hosts.

3 Right-click the host where you want to install the autonomous NSX Edge and select Deploy
OVF Template.

4 Enter the URL to download and install the OVF file from the Internet or click Browse to locate
the folder on your computer that contains the autonomous NSX Edge OVF file and click Next.

5 On the Select name and folder page, enter a name for the autonomous NSX Edge and select
the folder or data center where you want to deploy. Then click Next.

6 On the Select a compute resource page, select the destination of the compute resource.

7 On the OVF Template Details page, review the template details and click Next.

8 On the Configuration page, select a deployment configuration option.

9 On the Select storage page, select the location to store the files for the configuration and disk
files.

10 On the Select networks page, configure the networks that the deployed template must use.
Select the port group you created for the uplink interface, the port group that you created for
the L2 extension port, and enter an HA interface. Click Next.

11 On the Customize Template page, enter the following values and click Next.

a Type and retype the CLI admin password.

b Type and retype the CLI enable password.

c Type and retype the CLI root password.

d Enter the IPv4 address for the Management Network.

e Enable the option to deploy an autonomous Edge.

f Enter the External Port details for VLAN ID, exit interface, IP address, and IP prefix
length such that the exit interface maps to the Network with the port group of your uplink
interface.


If the exit interface is connected to a trunk port group, specify a VLAN ID. For example,
20,eth2,192.168.5.1,24. You can also configure your port group with a VLAN ID and
use VLAN 0 for the External Port.

g (Optional) To configure High Availability, enter the HA Port details where the exit interface
maps to the appropriate HA Network.

h (Optional) When deploying an autonomous NSX Edge as a secondary node for HA, select
Deploy this autonomous-edge as a secondary node.

Use the same OVF file as the primary node and enter the primary node's IP address, user
name, password, and thumbprint.

To retrieve the thumbprint of the primary node, log in to the primary node and run the
following command:

get certificate api thumbprint

Ensure that the VTEP IP addresses of the primary and secondary nodes are in the same
subnet and that they connect to the same port group. When you complete the deployment
and start the secondary Edge, it connects to the primary node to form an Edge cluster.

12 On the Ready to complete page, review the autonomous Edge settings and click Finish.

Note If there are errors during the deployment, a message of the day is displayed on the CLI.
You can also use an API call to check for errors:

GET https://<nsx-mgr>/api/v1/node/status

The errors are categorized as soft errors and hard errors. Use API calls to resolve the soft
errors as required. You can clear the message of day using an API call:

POST /api/v1/node/status?action=clear_bootup_error

13 Power on the autonomous NSX Edge appliance.

14 Log in to the autonomous NSX Edge client.

15 Select L2VPN > Add Session and enter the following values:

a Enter a session name.

b Enter the local IP address and the remote IP address.

c Enter the peer code from the L2VPN server. See Download the Remote Side L2 VPN
Configuration File for details on obtaining the peer code.

16 Click Save.

17 Select Port > Add Port to create an L2 extension port.

18 Enter a name, a VLAN, and select an exit interface.

19 Click Save.


20 Select L2VPN > Attach Port and enter the following values:

a Select the L2 VPN session that you created.

b Select the L2 extension port that you created.

c Enter a tunnel ID.

21 Click Attach.

You can create additional L2 extension ports and attach them to the session if you need to
extend multiple L2 networks.

22 Use the browser to log in to the autonomous NSX Edge or use API calls to view the status of
the L2VPN session.

Note If the L2VPN server configuration changes, ensure that you download the peer code
again and update the session with the new peer code.

Check the Realized State of an IPSec VPN Session


After you send a configuration update request for an IPSec VPN session, you can check to see
if the requested state has been successfully processed in the NSX local control plane on the
transport nodes.

When you create an IPSec VPN session, multiple entities are created: IKE profile, DPD profile,
tunnel profile, local endpoint, IPSec VPN service, and IPSec VPN session. These entities all share
the same IPSecVPNSession span, so you can obtain the realization state of all the entities of the
IPSec VPN session by using the same GET API call. You can check the realization state using only
the API.

Prerequisites

n Familiarize yourself with IPSec VPN. See Understanding IPSec VPN.

n Verify the IPSec VPN is configured successfully. See Add an IPSec VPN Service.

n You must have access to the NSX Manager API.

Procedure

1 Send a POST, PUT, or DELETE request API call.

For example:

PUT https://<nsx-mgr>/api/v1/vpn/ipsec/sessions/8dd1c386-9b2c-4448-85b8-51ff649fae4f
{
"resource_type": "PolicyBasedIPSecVPNSession",
"id": "8dd1c386-9b2c-4448-85b8-51ff649fae4f",
"display_name": "Test RZ_UPDATED",
"ipsec_vpn_service_id": "7adfa455-a6fc-4934-a919-f5728957364c",
"peer_endpoint_id": "17263ca6-dce4-4c29-bd8a-e7d12bd1a82d",


"local_endpoint_id": "91ebfa0a-820f-41ab-bd87-f0fb1f24e7c8",
"enabled": true,
"policy_rules": [
{
"id": "1026",
"sources": [
{
"subnet": "1.1.1.0/24"
}
],
"logged": true,
"destinations": [
{
"subnet": "2.1.4..0/24"
}
],
"action": "PROTECT",
"enabled": true,
"_revision": 1
}
]
}

2 Locate and copy the value of x-nsx-requestid from the response header returned.

For example:

x-nsx-requestid e550100d-f722-40cc-9de6-cf84d3da3ccb

3 Request the realization state of the IPSec VPN session using the following GET call.

GET https://<nsx-mgr>/api/v1/vpn/ipsec/sessions/<ipsec-vpn-session-id>/state?request_id=<request-id>

The following API call uses the id and x-nsx-requestid values in the examples used in the
previous steps.

GET https://<nsx-mgr>/api/v1/vpn/ipsec/sessions/8dd1c386-9b2c-4448-85b8-51ff649fae4f/state?
request_id=e550100d-f722-40cc-9de6-cf84d3da3ccb

Following is an example of a response you receive when the realization state is in_progress.

{
"details": [
{
"sub_system_type": "TransportNode",
"sub_system_id": "fe651e63-04bd-43a4-a8ec-45381a3b71b9",
"state": "in_progress",
"failure_message": "CCP Id:ab5958df-d98a-468e-a72b-d89dcdae5346, Message:State
realization is in progress at the node."
},
{
"sub_system_type": "TransportNode",
"sub_system_id": "ebe174ac-e4f1-4135-ba72-3dd2eb7099e3",
"state": "in_sync"


}
],
"state": "in_progress",
"failure_message": "The state realization is in progress at transport nodes."
}

Following is an example of a response you receive when the realization state is in_sync.

{
"details": [
{
"sub_system_type": "TransportNode",
"sub_system_id": "7046e8f4-a680-11e8-9bc3-020020593f59",
"state": "in_sync"
}
],
"state": "in_sync"
}

The following are examples of possible responses you receive when the realization state is
unknown.

{
"state": "unknown",
"failure_message": "Unable to get response from any CCP node. Please retry operation
after some time."
}

{
"details": [
{
"sub_system_type": "TransportNode",
"sub_system_id": "3e643776-5def-11e8-94ae-020022e7749b",
"state": "unknown",
"failure_message": "CCP Id:ab5958df-d98a-468e-a72b-d89dcdae5346, Message: Unable
to get response from the node. Please retry operation after some time."
},
{
"sub_system_type": "TransportNode",
"sub_system_id": "4784ca0a-5def-11e8-93be-020022f94b73",
"state": "in_sync"
}
],
"state": "unknown",
"failure_message": "The state realization is unknown at transport nodes"
}


After you perform an entity DELETE operation, you might receive the status of NOT_FOUND, as
shown in the following example.

{
"http_status": "NOT_FOUND",
"error_code": 600,
"module_name": "common-services",
"error_message": "The operation failed because object identifier LogicalRouter/
61746f54-7ab8-4702-93fe-6ddeb804 is missing: Object identifiers are case sensitive.."
}

If the IPSec VPN service associated with the session is disabled, you receive the BAD_REQUEST
response, as shown in the following example.

{
"httpStatus": "BAD_REQUEST",
"error_code": 110199,
"module_name": "VPN",
"error_message": "VPN service f9cfe508-05e3-4e1d-b253-fed096bb2b63 associated with the
session 8dd1c386-9b2c-4448-85b8-51ff649fae4f is disabled. Can not get the realization
status."
}

Understanding TCP MSS Clamping


TCP MSS clamping enables you to reduce the maximum segment size (MSS) value used by a TCP
session during a connection establishment through a VPN tunnel.

TCP MSS is the maximum amount of data in bytes that a host is willing to accept in a single TCP
segment. Each end of a TCP connection sends its desired MSS value to its peer-end during the
three-way handshake, where MSS is one of the TCP header options used in a TCP SYN packet.
The sender host calculates the TCP MSS based on the maximum transmission unit (MTU) of its
egress interface.

When a TCP traffic goes through any kind of VPN tunnel, additional headers are added to the
original packet to keep it secure. For IPSec tunnel mode, additional headers used are IP, ESP,
and optionally UDP (if a port translation is present in the network). Because of these additional
headers, the size of the encapsulated packet goes beyond the MTU of the VPN interface. The
packet can get fragmented or dropped based on the DF policy.

To avoid packet fragmentation or drop in an IPSec VPN session, you can adjust the MSS value
for the IPSec session by enabling the TCP MSS clamping feature. Navigate to Networking > VPN
> IPSec Sessions. When you are adding an IPSec session or editing an existing one, expand the
Advanced Properties section, and enable TCP MSS Clamping. By default, the TCP MSS Clamping
feature is disabled for an IPSec session.


When the TCP MSS Clamping feature is enabled for an IPSec session, you can configure the
pre-calculated MSS value suitable for the IPSec session by setting both TCP MSS Direction and
TCP MSS Value. The configured MSS value is used for MSS clamping. You can opt to use the
dynamic MSS calculation by setting the TCP MSS Direction and leaving TCP MSS Value blank.
The MSS value is auto-calculated based on the VPN interface MTU, VPN overhead, and the path
MTU (PMTU) when it is already determined. The effective MSS is recalculated during each TCP
handshake to handle the MTU or PMTU changes dynamically. See Add a Policy-Based IPSec
Session or Add a Route-Based IPSec Session for more information.

Similarly, for L2 VPN, TCP MSS Clamping configuration is given only in the L2 VPN server session.
You can navigate to Networking > VPN > L2 VPN Sessions. Select Add L2 VPN Session > L2
VPN Server and expand the Advanced Properties section. TCP MSS Clamping is enabled by
default for both the directions with auto-calculation mode, but you can configure a desired TCP
MSS value that is suitable for the topology or disable it. See Add an L2 VPN Server Session for
more information.
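
For reference, the following sketch shows how TCP MSS clamping might be set on an IPSec session through the Policy API. The tcp_mss_clamping object with its direction and max_segment_size fields is an assumption based on the session schema as the author understands it, and 1350 is only an illustrative value (roughly a 1500-byte uplink MTU minus worst-case IPSec tunnel-mode overhead); verify both the schema and the value that fits your own MTU budget against the NSX REST API reference.

PATCH https://<nsx-mgr>/policy/api/v1/infra/tier-0s/<tier-0-id>/locale-services/<locale-service-id>/
ipsec-vpn-services/<service-id>/sessions/<session-id>
{
    "resource_type": "RouteBasedIPSecVpnSession",
    "tcp_mss_clamping": {
        "direction": "BOTH",
        "max_segment_size": 1350
    }
}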

Troubleshooting VPN Problems


This section provides information to help you resolve problems you might encounter while using
the VPN features in NSX.

Monitor and Troubleshoot VPN Sessions


After you configure an IPSec or L2 VPN session, you can monitor the VPN tunnel status and
troubleshoot any reported tunnel issues using the NSX Manager user interface.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Navigate to the Networking > VPN > IPSec Sessions or Networking > VPN > L2 VPN
Sessions tab.

3 Expand the row for the VPN session that you want to monitor or troubleshoot.

4 To view the status of the VPN tunnel status, click the info icon.

The Status dialog box appears and displays the available statuses.

5 To view the VPN tunnel traffic statistics, click View Statistics in the Status column.

The Statistics dialog box displays the traffic statistics for the VPN tunnel.

6 To view the error statistics, click the View More link in the Statistics dialog box.

7 To close the Statistics dialog box, click Close.
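
Session information is also available through the API, which is useful for scripted monitoring. The IPSec sessions endpoint below is the same one used in the realized-state examples earlier in this chapter; listing the collection and reading an individual session are simple GET calls, although the exact response fields can vary by release.

GET https://<nsx-mgr>/api/v1/vpn/ipsec/sessions

GET https://<nsx-mgr>/api/v1/vpn/ipsec/sessions/<ipsec-vpn-session-id>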

Alarms When an IPsec VPN Session or Tunnel Is Down


When an IPsec VPN session or tunnel is down, an alarm is raised and the reason for the Down
alarm is displayed on the Alarms dashboard or the VPN page on the NSX Manager user interface.


Solution

Use the following tables to locate the Reason message that you see on the NSX Manager user
interface and review the possible cause for the Down alarm. To resolve the alarm, perform the
necessary actions listed for the specific Reason message and possible cause for the Down alarm.

Table 7-12. Causes and Solutions for an IPsec VPN Session Is Down Alarm

Authentication failed
n Possible cause: The IKE SA establishment between the VPN gateways failed due to a failure in
  authentication. Authentication of the IKE SA depends on the pre-shared key, Local ID, and Remote ID
  values.
n Necessary actions: Verify the Local ID and Remote ID values. The Local ID value must be set as the
  Remote ID value in the peer VPN gateway. Verify the Pre-shared Key value. It must match exactly in
  both VPN gateways.

No proposal chosen
n Possible cause: The IKE transform configuration in the local and peer configurations is inconsistent.
n Necessary actions: Ensure that the following properties are configured the same for both gateways:
  DH groups, and digest and encryption algorithms.

Peer sent delete
n Possible cause: The peer gateway initiated a delete. A DELETE payload is received for the IKE SA.
n Necessary actions: To determine why the peer gateway sent a DELETE payload, examine the logs on
  the NSX Edge and on the peer gateway side.

Peer not responding
n Possible cause: The IKE SA negotiation timed out.
n Necessary actions: Verify that the remote gateway is up, and verify the connectivity to the remote
  gateway.

Invalid syntax
n Possible cause: IKE proposals or transforms are not formed correctly, or there are malformed IKE
  payloads.
n Necessary actions: To debug the invalid syntax, analyze the logs.

Invalid spi
n Possible cause: An invalid SPI value was received in the IKE payload.
n Necessary actions: To debug the invalid SPI value, analyze the logs.

Configuration failed
n Possible cause: The session configuration realization failed on the NSX Edge due to some constraints
  or certain criteria. The reason is listed in the session dump under Session_Config_Fail_Reason.
n Necessary actions: Resolve the error using the reason displayed in the session dump under
  Session_Config_Fail_Reason.

Negotiation not started
n Possible cause: The IKE SA negotiation has not started.
n Necessary actions: Verify that the Connection Initiation Mode property in the session configuration is
  set to Responder, and verify that the peer configuration has at least one of the gateways set to
  Initiator.

Table 7-12. Causes and Solutions for an IPsec VPN Session Is Down Alarm (continued)

IPsec service not active
n Possible cause: The status of the VPN service used for the session is not active.
n Necessary actions: Verify whether the Admin Status in the IPsec VPN service configuration is disabled.

Session disabled
n Possible cause: The admin has disabled the session.
n Necessary actions: Enable the session to resolve this error.

SR state is not Active
n Possible cause: The SR is in a standby state.
n Necessary actions: Verify the VPN session on the NSX Edge node where the HA peer SR is in the
  Active state.

Table 7-13. Causes and Solutions for an IPsec VPN Tunnel Is Down Alarm

Peer sent delete
n Possible cause: The peer gateway sent a DELETE payload for the IPsec SA.
n Necessary actions: To understand why the peer gateway sent a DELETE payload, check the logs on
  both the NSX Edge and the peer gateway side.

No proposal chosen
n Possible cause: The ESP transform configuration is not consistent in the configurations for both the
  local and peer gateways.
n Necessary actions: Ensure that the following properties are configured the same for both gateways:
  DH groups, digest and encryption algorithms, and whether PFS is activated.

TS unacceptable
n Possible cause: The IPsec SA setup has failed due to a mismatch in the policy rule definition between
  the gateways for the tunnel configuration.
n Necessary actions: Check the local and remote network configuration on both gateways.

IKE SA down
n Possible cause: The IKE SA session is down.
n Necessary actions: Check the session down reason listed in the logs and resolve the errors.

Invalid syntax
n Possible cause: The proposals or transforms are not formed correctly, or there are malformed
  payloads.
n Necessary actions: To debug the invalid syntax, analyze the logs.

Invalid spi
n Possible cause: An invalid SPI value was received in the ESP payload.
n Necessary actions: To debug the invalid SPI value, analyze the logs.

No IKE peers
n Possible cause: All IKE peers are dead. There are no peer gateways left with which to try to establish
  a connection.
n Necessary actions: Check whether the remote gateway is up, and check the connectivity to the
  configured peer gateways.

IPsec negotiation not started
n Possible cause: The IPsec SA negotiation has not started because the IKE SA is not up yet.
n Necessary actions: Check the session down reason listed in the logs and resolve the errors.



8 Network Address Translation (NAT)
Network address translation (NAT) maps one IP address space to another. You can configure NAT
on tier-0 and tier-1 gateways.

The original figure (not reproduced here) summarizes where each type of NAT can be configured:

n Tier-0 gateway in active-active mode: Reflexive NAT only (no SNAT or DNAT).

n Tier-0 gateway in active-standby mode: SNAT, DNAT, and Reflexive NAT, plus NAT64 (IPv6 to IPv4).

n Tier-1 gateway (active-standby): SNAT, DNAT, and Reflexive NAT, plus NAT64 (IPv6 to IPv4).

Three types of NAT are supported, in addition to NAT64.

Note Disabling gateway firewall causes the NAT rule to drop traffic. If the gateway firewall needs
to be disabled, include an Allow rule:
Source Destination ACTION

ANY ANY ALLOW

n Source NAT (SNAT) - translates a source IP address of outbound packets so that packets
appear as originating from a different network. Supported on tier-0/tier-1 gateways running
in active-standby mode. For one-to-one SNAT, the SNAT translated IP address is not
programmed on the loopback port, and there is no forwarding entry with an SNAT translated
IP as the prefix. For n-to-one SNAT, the SNAT translated IP address is programmed on the
loopback port, and users will see a forwarding entry with an SNAT-translated IP address
prefix. NSX SNAT is designed to be applied to traffic that egresses the NSX environment.

n Destination NAT (DNAT) - translates the destination IP address of inbound packets so that
packets are delivered to a target address into another network. Supported on tier-0/tier-1
gateways running in active-standby mode. NSX DNAT is designed to be applied to traffic that
ingresses the NSX environment.


n Reflexive NAT - (sometimes called stateless NAT) translates addresses passing through
a routing device. Inbound packets undergo destination address rewriting, and outbound
packets undergo source address rewriting. It does not keep session state because it is stateless.
Supported on tier-0 gateways running in active-active or active-standby mode, and on tier-1
gateways. Stateful NAT is not supported in active-active mode.

You can also disable SNAT or DNAT for an IP address or a range of addresses (No-SNAT/No-
DNAT). If an address has multiple NAT rules, the rule with the highest priority is applied.

Note If there is a service interface configured in a NAT rule, translated_port will be realized on
NSX Manager as destination_port. This means that the service will be the translated port while
the translated port is used to match the traffic as destination port. If there is no service configured,
the port will be ignored.

If you are creating a NAT rule from a Global Manager in an NSX Federation environment, you can
select site-specific IP addresses for NAT. Note the following:

n Do not click Set under Apply To if you want the default option of applying the NAT rule to all
locations.

n Under Apply To, click Set and select the locations whose entities you want to apply the rule to
and then select Apply NAT rule to all entities.

n Under Apply To, click Set, select a location and then select Interfaces from the Categories
drop-down menu. You can select specific interfaces to which you want to apply the NAT rule.

n DNAT is not supported on a tier-1 gateway where policy-based IPSec VPN is configured.

n SNAT configured on a tier-0 gateway's external interface processes traffic from a tier-1
gateway, and from another external interface on the tier-0 gateway.

n NAT is configured on the uplinks of the tier-0/tier-1 gateways and processes traffic going
through this interface. This means that tier-0 gateway NAT rules will not apply between two
tier-1 gateways connected to the tier-0.

NAT Support Matrices


Configuration Fields

Type source-addr dest-addr translated-addr translated-port match-service

SNAT optional optional must no optional

DNAT optional must must optional optional

NO_SNAT must optional no no optional

NO_DNAT optional must no no optional

REFLEXIVE must no must no no

NAT64 optional must must optional optional

Configuration Use Cases


Types        1:1    n:n    n:m                                  n:1    1:m

SNAT         Yes    Yes    Yes                                  Yes    No

DNAT         Yes    Yes    * configurable, but not supported    Yes    * configurable, but not supported

NO_SNAT      -      -      -                                    -      -

NO_DNAT      -      -      -                                    -      -

REFLEXIVE    Yes    Yes    No                                   No     No

NAT64        Yes    Yes    No                                   Yes    No

This chapter includes the following topics:

n Configure NAT/DNAT/No SNAT/No DNAT/Reflexive NAT

n Configure NAT64

n NAT and Gateway Firewall

Configure NAT/DNAT/No SNAT/No DNAT/Reflexive NAT


You can configure different types of NAT for IPv4 on a tier-0 or tier-1 gateway.

Note If there is a service configured in this NAT rule, the translated_port will be realized on
NSX Manager as the destination_port. This means the service will be the translated port while the
translated port is used to match the traffic as destination port. If there is no service configured, the
port will be ignored.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > NAT.

3 Select a gateway from the Gateway drop-down menu.

4 Next to View, select NAT.

5 Click Add NAT Rule.

6 Enter a Name.

7 Select an action.

Gateway Available Actions

Tier-1 gateway Available actions are SNAT, DNAT, Reflexive, NO SNAT, and NO DNAT.

Tier-0 gateway in active-standby Available actions are SNAT, DNAT, NO SNAT, and NO DNAT.
mode

Tier-0 gateway in active-active The available action is Reflexive.


mode


8 Enter a Source. If this text box is left blank, the NAT rule applies to all sources outside of the
local subnet.

Specify an IP address, or an IP address range in CIDR format. For SNAT, NO_SNAT and
Reflexive rules, this is a required field and represents the source network of the packets
leaving the network.

9 (Required) Enter a Destination.

Specify an IP address, or an IP address range in CIDR format. For DNAT and NO_DNAT rules,
this is a required field and represents the destination network of the packets entering the
network. This field is not applicable for Reflexive.

10 Enter a value for Translated IP.

Specify an IPv4 address, or an IP address range in CIDR format. For SNAT, if the translated IP
range is smaller than the matched source IP range, the rule works as port address translation (PAT).

11 Toggle Enable to enable the rule.

12 In the Service column, click Set to select services.

If there is a service interface configured in a NAT rule, translated_port will be realized on


NSX Manager as destination_port. This means that the service will be the translated port
while the translated port is used to match the traffic as destination port. If there is no service
configured, the port will be ignored.

13 Enter a value for Translated Port.

If there is a service interface configured in a NAT rule, translated_port will be realized on


NSX Manager as destination_port. This means that the service will be the translated port
while the translated port is used to match the traffic as destination port. If there is no service
configured, the port will be ignored.


14 For Apply To, click Set and select objects that this rule applies to.

The available objects are Tier-0 Gateways, Interfaces, Labels, Service Instance Endpoints,
and Virtual Endpoints.

Note If you are using NSX Federation and creating a NAT rule from a Global Manager
appliance, you can select site-specific IP addresses for NAT. You can apply the NAT rule to
any of the following location spans:

n Do not click Set if you want to use the default option of applying the NAT rule to all
locations.

n Click Set. In the Applied To | New Rule dialog box, select the locations whose entities you
want to apply the rule to and then click Apply.

n Click Set. In the Applied To | New Rule dialog box, select a location and then select
Interfaces from the Categories drop-down menu. You can select specific interfaces to
which you want to apply the NAT rule.

n Click Set. In the Applied To | New Rule dialog box, select a location and then select VTI
from the Categories drop-down menu. You can select specific VTIs to which you want to
apply the NAT rule.

See Features and Configurations Supported in NSX Federation for more details.

15 (Optional) Select a firewall setting.

The available settings are:

n Match External Address - The firewall will be applied to external address of a NAT rule.

n For SNAT, the external address is the translated source address after NAT is done.

n For DNAT, the external address is the original destination address before NAT is done.

n For REFLEXIVE, to egress traffic, the firewall is applied to the translated source
address after NAT is done. For ingress traffic, the firewall is applied to the original
destination address before NAT is done.

n Match Internal Address - Indicates the firewall will be applied to internal address of a NAT
rule.

n For SNAT, the internal address is the original source address before NAT is done.

n For DNAT, the internal address is the translated destination address after NAT is done.

n For REFLEXIVE, for egress traffic, the firewall is applied to the original source address
before NAT is done. For ingress traffic, the firewall is applied to the translated
destination address after NAT is done.

n Bypass - The packet bypasses firewall rules.

16 (Optional) Toggle the Logging button to enable logging.


17 Specify a priority value.

A lower value means a higher priority. The default is 0. A No SNAT or No DNAT rule should
have a higher priority than other rules.

18 (Optional) Apply to Policy Based VPN: Applies only for the DNAT or No DNAT rule category.
The rule is applied based on the priority value. Despite the Bypass or Match settings, the
settings applied for the Apply To parameter of a NAT policy are still honored.

n Bypass: NAT Rule is not applied to the traffic decrypted from a policy-based IPSec VPN
tunnel. This is the default setting.

n Match: If the traffic is decrypted from a policy-based IPSec VPN tunnel, the NAT policy is
evaluated and matched. NAT policy is NOT evaluated on the traffic that is not decrypted
from a policy-based IPSec VPN tunnel.
For a NAT policy to match the decrypted traffic, the policy must be set to Match, and the interface
where the encrypted traffic is sent and received must be set in the Apply To parameter of the NAT
policy. For more information on the Apply To parameter, see Chapter 8 Network Address
Translation (NAT).

19 Click Save.
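
If you prefer to automate NAT rule creation, the same configuration can be applied through the NSX Policy API. The following is a minimal sketch using Python and the requests library; the manager address, credentials, gateway ID (t1-gw-01), rule ID, addresses, and ports are placeholders, and the exact resource paths and field names should be verified against the NSX API Guide for your release.

import requests

# Placeholder values; replace with your NSX Manager address, credentials, and gateway ID.
NSX_MANAGER = "https://nsx-mgr.example.com"
AUTH = ("admin", "VMware1!VMware1!")

# A DNAT rule equivalent to the UI steps above: traffic to the external address
# is translated to the internal server, with the firewall matching the external address.
dnat_rule = {
    "action": "DNAT",
    "destination_network": "80.80.80.1",        # original (external) destination address
    "translated_network": "10.10.10.10",        # translated (internal) server address
    "translated_ports": "8080",                 # optional translated port
    "firewall_match": "MATCH_EXTERNAL_ADDRESS", # or MATCH_INTERNAL_ADDRESS / BYPASS
    "logging": False,
    "enabled": True,
    "sequence_number": 10,                      # lower value means higher priority
}

response = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/tier-1s/t1-gw-01/nat/USER/nat-rules/dnat-web",
    json=dnat_rule,
    auth=AUTH,
    verify=False,  # lab only; use a trusted certificate in production
)
response.raise_for_status()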

Configure NAT64
NAT64 is a mechanism for translating IPv6 packets to IPv4 packets, and vice versa. NAT64 allows
IPv6-only clients to contact IPv4 servers using unicast UDP or TCP. NAT64 only allows an IPv6-
only client to initiate communications to an IPv4-only server. To perform IPv6-to-IPv4 translation,
binding and session information is saved. NAT64 is stateful.

The following diagram shows details of NAT64 translation.


The figure shows an IPv6 client (2021:DB1::1) on VLAN 101 reaching an IPv4 server (VM 2, 10.10.10.2/24, on overlay segment Network-1, 10.10.10.0/24) through uplink VLANs 101 and 102 to a tier-0 gateway running on edge nodes EN 1 and EN 2, with a tier-1 gateway below it:

1 NAT64 rule on the tier-0 gateway: Source IPv6 2021:DB1::1, Destination IPv6 64:ff90::a0a:a02, Translated IP 20.20.20.0/24, Service destination port 80.

2 Packet sent from the IPv6 network before NAT64 translation: Source IPv6 2021:DB1::1, Destination IPv6 64:ff90::a0a:a02. The destination IPv4 address is embedded as the last 4 bytes of the IPv6 destination address.

3 Translated packet forwarded to the IPv4 overlay network: Source IPv4 20.20.20.22, Destination IPv4 10.10.10.2 (translated from 64:ff90::a0a:a02).

Note
n NAT64 is only supported for external IPv6 traffic coming in through the NSX Edge uplink to
the IPv4 server in the overlay.

n NAT64 supports TCP and UDP. Packets of all other protocol types are discarded. NAT64 does
not support ICMP, fragmentation, or IPv6 packets that have extension headers.

n When a NAT64 rule and an NSX load balancer are configured on the same Edge node, using
the NAT64 rule to direct IPv6 packets to the IPv4 load balancer is not supported.
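
As the figure shows, the IPv4 address of the destination server is carried in the last 32 bits of the IPv6 destination address. The following short Python sketch (a hypothetical helper, not part of NSX) illustrates how an IPv4 server address is embedded in a /96 NAT64 prefix; the well-known prefix 64:ff9b::/96 is used here as an example.

import ipaddress

def embed_ipv4_in_nat64_prefix(prefix, ipv4):
    # Embed an IPv4 address as the last 32 bits of a /96 NAT64 prefix.
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen == 96, "NAT64 destination prefixes use subnet size 96"
    return str(ipaddress.IPv6Address(int(net.network_address) | int(ipaddress.IPv4Address(ipv4))))

# The well-known NAT64 prefix with the IPv4 server 10.10.10.2:
print(embed_ipv4_in_nat64_prefix("64:ff9b::/96", "10.10.10.2"))  # prints 64:ff9b::a0a:a02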

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > NAT.

3 Select a gateway from the Gateway dropdown list.

4 Next to View, select NAT64.

5 Click Add NAT 64 Rule.

6 Enter a Name.

7 Enter a Source.

Specify an IPv6 address, or an IPv6 address range in CIDR format. For example, 2001:DB7:1::1
or 2001:DB7:1::/64.

If this text box is left blank, the NAT rule applies to all sources outside of the local subnet.


8 Enter a Destination.

Specify an IPv6 address, or an IPv6 address range in CIDR format with subnet size 96. For
example, 64:ff9b::0B01:0101 or 2001:DB8::/96.

9 Enter a value for Translated IP.

Specify an IPv4 address, an IPv4 address range, or a comma-separated list of IPv4 addresses.
For example, 10.1.1.1, 10.1.1.1-10.1.1.2, or 10.1.1.1,10.1.1.2.

10 Toggle Enable to enable the rule.

11 (Optional) In the Service column, click Set to select services.

12 (Optional) Enter a value for Translated Port.

13 (Optional) For Apply To, click Set and select objects that this rule applies to.

The only option available is Interfaces.

14 (Optional) Toggle the logging button to enable logging.

15 (Optional) Specify a priority value.

A lower value means a higher priority. The default is 0.

16 The Firewall setting is set to Bypass and cannot be changed.

17 Click Save.

NAT and Gateway Firewall


A NAT firewall allows internet traffic to pass through the gateway if a device on the private
network requested it. Any unsolicited requests or data packets are discarded, preventing
communication with potentially dangerous devices.

If a tier-1 gateway has both SNAT and gateway firewall (GWFW) configured, and if the GWFW
is not configured to be stateful, you must configure NO SNAT for the tier-1 gateway's advertised
subnets. Otherwise, traffic to IP addresses in these subnets will fail.

In the example below, T1-A is the gateway and an SNAT rule is configured that translates any
traffic from its attached subnet 192.168.1.0/24 to 10.1.1.1.


The topology: a tier-0 gateway with two tier-1 gateways below it. VM-A (192.168.1.1) is attached behind T1-A, and VM-B (20.1.1.1) is attached behind the other tier-1 gateway.

Here are some traffic scenarios:

1 Any traffic stream that is initiated from VM-A/192.168.1.1 gets translated to 10.1.1.1 as the
source IP, regardless of whether the gateway firewall is stateful, stateless, or disabled. When the
traffic from VM-C or VM-B returns for that flow, it has a destination IP of 10.1.1.1; T1-A matches it
up with the SNAT flow and translates it correctly so that it flows back to VM-A. The SNAT rule
works as expected, and there are no issues.

2 VM-B/20.1.1.1 initiates a traffic flow to VM-A/192.168.1.1. Here, there is a difference in behavior
when T1-A has a stateful firewall versus when it has no firewall or a stateless firewall. The firewall
rules permit the traffic between VM-B and VM-A. To handle this scenario, configure a NO SNAT
rule for traffic matching 192.168.1.0/24 to 20.1.1.0/24. When this NO SNAT rule exists, there is no
difference in behavior (see the API sketch after this list).

3 If T1-A has a stateful firewall, the T1-A firewall creates a firewall connection entry for the TCP
SYN packet from VM-B/20.1.1.1 to VM-A/192.168.1.1. When VM-A replies, T1-A matches the
reply packet with the stateful connection entry and forwards the traffic from VM-A/192.168.1.1
to VM-B/20.1.1.1 with no SNAT translation. This is because the firewall skips the SNAT
lookup when the return traffic matches a firewall connection entry.

4 If the T1-A firewall is disabled or stateless, the T1-A firewall forwards the TCP SYN packet
from VM-B/20.1.1.1 to VM-A/192.168.1.1 without creating a firewall connection entry. When
VM-A/192.168.1.1 replies back to VM-B/20.1.1.1, T1-A sees that there is no firewall connection
entry, performs SNAT, and translates the source IP from VM-A/192.168.1.1 to 10.1.1.1. When that
reply reaches VM-B, VM-B drops the traffic because the source IP address is 10.1.1.1 instead of
VM-A/192.168.1.1.
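
The NO SNAT exemption described in scenario 2 can also be created through the NSX Policy API. The following is a minimal sketch using Python and the requests library; the manager address, credentials, gateway ID (t1-a), and rule ID are placeholders, and the resource paths and field names should be checked against the NSX API Guide.

import requests

NSX_MANAGER = "https://nsx-mgr.example.com"   # placeholder manager address
AUTH = ("admin", "VMware1!VMware1!")          # placeholder credentials

# Exempt east-west traffic between the two tier-1 subnets from the SNAT rule on T1-A.
no_snat_rule = {
    "action": "NO_SNAT",
    "source_network": "192.168.1.0/24",
    "destination_network": "20.1.1.0/24",
    "enabled": True,
    "sequence_number": 5,   # lower value than the SNAT rule, so it is evaluated first
}

requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/tier-1s/t1-a/nat/USER/nat-rules/no-snat-east-west",
    json=no_snat_rule,
    auth=AUTH,
    verify=False,  # lab only
).raise_for_status()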



9 NSX Advanced Load Balancer (Avi)
The VMware NSX Advanced Load Balancer is a distributed and highly scalable cloud-native
Application Distribution solution.

The NSX Advanced Load Balancer is no longer consumed on the NSX Manager. If you have
activated the NSX Advanced Load Balancer, or have upgraded from 3.2.0 or 3.2.1 to 3.2.2 or
higher, we recommend you deactivate the NSX Advanced Load Balancer by clicking Deactivate
NSX-T ALB in the banner message on the UI.

After deactivation none of the configurations are lost, and there is no impact on the running traffic.
The configurations are preserved on the NSX Advanced Load Balancer.

After deactivating the NSX Advanced Load Balancer from the NSX Manager, log into the
NSX Advanced Load Balancer Controller to access all of the NSX Advanced Load Balancer
configurations.

To install and configure a controller cluster see Install NSX Advanced Load Balancer Appliance
Cluster, in the NSX Installation Guide.

For NSX Advanced Load Balancer configuration, see VMware NSX Advanced Load Balancer
Documentation.



10 Load Balancer
The NSX logical load balancer offers high-availability service for applications and distributes the
network traffic load among multiple servers.

The figure shows clients connecting to Virtual Server 1 on a load balancer attached to a tier-1 gateway; the load balancer distributes the traffic to Server 1, Server 2, and Server 3 in health-checked Pool 1.

The load balancer distributes incoming service requests evenly among multiple servers in such a
way that the load distribution is transparent to users. Load balancing helps in achieving optimal
resource utilization, maximizing throughput, minimizing response time, and avoiding overload.

You can map a virtual IP address to a set of pool servers for load balancing. The load balancer
accepts TCP, UDP, HTTP, or HTTPS requests on the virtual IP address and decides which pool
server to use.

Depending on your environment needs, you can scale the load balancer performance by
increasing the existing virtual servers and pool members to handle heavy network traffic load.

Note Logical load balancer is supported only on the tier-1 gateway. One load balancer can be
attached only to a tier-1 gateway.

This chapter includes the following topics:

n Key Load Balancer Concepts

n Setting Up Load Balancer Components

n Groups Created for Server Pools and Virtual Servers


Key Load Balancer Concepts


A load balancer includes virtual servers, server pools, and health check monitors.

A load balancer is connected to a tier-1 gateway. The load balancer hosts one or more virtual
servers. A virtual server is an abstraction of an application service, represented by a unique
combination of IP address, port, and protocol. A virtual server is associated with one or more server
pools. A server pool consists of a group of servers and contains individual server pool
members.

To test whether each server is correctly running the application, you can add health check
monitors that check the health status of a server.

The figure shows an NSX Edge node hosting two tier-1 gateways, Tier-1 A and Tier-1 B. LB 1 on Tier-1 A hosts virtual server 1 and virtual server 2; LB 2 on Tier-1 B hosts virtual server 3. The virtual servers reference server pools Pool 1 through Pool 5, which use health check monitors HC 1 and HC 2.

Scaling Load Balancer Resources


When you configure a load balancer, you can specify a size (small, medium, large, or extra large).
The size determines the number of virtual servers, server pools, and pool members the load
balancer can support.

A load balancer runs on a tier-1 gateway, which must be in active-standby mode. The gateway
runs on NSX Edge nodes. The form factor of the NSX Edge node (bare metal, small, medium,
large, or extra large) determines the number of load balancers that the NSX Edge node can
support. Note that in Manager mode, you create logical routers, which have similar functionality to
gateways. See Chapter 1 NSX Manager.

For more information about what the different load balance sizes and NSX Edge form factors can
support, see https://configmax.vmware.com.


Note that using a small NSX Edge node to run a small load balancer is not recommended in a
production environment.

You can call an API to get the load balancer usage information of an NSX Edge node. If you use
Policy mode to configure load balancing, run the following command:

GET /policy/api/v1/infra/lb-node-usage?node_path=<node-path>

If you use Manager mode to configure load balancing, run the following command:

GET /api/v1/loadbalancer/usage-per-node/<node-id>

The usage information includes the number of load balancer objects (such as load balancer
services, virtual servers, server pools, and pool members) that are configured on the node. For
more information, see the NSX API Guide.
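
For example, the Policy mode call shown above can be scripted as follows. This is a minimal sketch using Python and the requests library; the manager address, credentials, and the node_path value are placeholders to adapt to your environment, and the exact response fields should be taken from the NSX API Guide.

import requests

NSX_MANAGER = "https://nsx-mgr.example.com"   # placeholder manager address
AUTH = ("admin", "VMware1!VMware1!")          # placeholder credentials

# Policy path of the edge transport node to query (placeholder value).
node_path = "/infra/sites/default/enforcement-points/default/edge-clusters/ec-1/edge-nodes/0"

response = requests.get(
    f"{NSX_MANAGER}/policy/api/v1/infra/lb-node-usage",
    params={"node_path": node_path},
    auth=AUTH,
    verify=False,  # lab only; use a trusted certificate in production
)
response.raise_for_status()

# The response lists the counts of load balancer services, virtual servers, pools,
# and pool members configured on the node; see the NSX API Guide for the exact fields.
print(response.json())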

Supported Load Balancer Features


NSX load balancer supports the following features.

n Layer 4 - TCP and UDP

n Layer 7 - HTTP and HTTPS with load balancer rules support

n Server pools - static and dynamic with NSGroup

n Persistence - Source-IP and Cookie persistence mode

n Health check monitors - Active monitor which includes HTTP, HTTPS, TCP, UDP, and ICMP,
and passive monitor

n SNAT - Transparent, Automap, and IP List

n HTTP upgrade - Applications that use HTTP upgrade, such as WebSocket, are supported; the
client or server can request an HTTP upgrade. By default, NSX supports and accepts HTTP
upgrade client requests using the HTTP application profile.

Note: SSL terminate mode and proxy mode are not supported in the NSX limited export release.


The figure shows the load balancer object model: a virtual server references an application profile (Fast TCP, Fast UDP, or HTTP), a client-SSL profile, a server-SSL profile, a persistence profile (Source-IP or Cookie), and load balancer rules. The virtual server's pool carries the SNAT configuration and pool members, and is checked by active monitors (HTTP, HTTPS, TCP, UDP, ICMP) and a passive monitor.

Load Balancer Topologies


Load balancers are typically deployed in either inline or one-arm mode. One-arm mode requires
virtual server Source NAT (SNAT) configuration, and inline mode does not.

Inline Topology
In the inline mode, the load balancer is in the traffic path between the client and the server. Clients
and servers should not be connected to overlay segments on the same tier-1 logical router if SNAT
on the load balancer is not desired. If clients and servers are connected to overlay segments on
the same tier-1 logical router, SNAT is required.


The figure shows the inline topology: clients (C) reach the virtual server through an external router and the tier-0 gateway. LB 1 with Virtual Server 1 runs on Tier-1 A, and both clients and servers (S) are attached to segments behind the tier-1 gateways, so the load balancer sits in the path between them.

One-Arm Topology
In one-arm mode, the load balancer is not in the traffic path between the client and the server.
In this mode, the client and the server can be anywhere. The load balancer performs Source
NAT (SNAT) to force return traffic from the server destined to the client to go through the load
balancer. This topology requires virtual server SNAT to be enabled.

When the load balancer receives the client traffic to the virtual IP address, the load balancer
selects a server pool member and forwards the client traffic to it. In the one-arm mode, the
load balancer replaces the client IP address with the load balancer IP address so that the server
response is always sent to the load balancer. The load balancer forwards the response to the
client.

The figure shows the one-arm topology: load balancers LB 1, LB 2, and LB 3, each with its own virtual server, run on dedicated one-arm tier-1 gateways attached under the tier-0 gateway, while clients (C) and servers (S) are attached elsewhere, for example to Tier-1 A or to networks behind the external router, so SNAT is used to steer return traffic back through the load balancer.


Special use case: When no overlay is used and all networks are VLAN-backed, overlay must still be
configured on the edge nodes hosting the tier-1 one-arm load balancer. This is because edge
nodes must have at least one tunnel endpoint up for high availability between edge nodes. When
the tunnels are up, the edge nodes agree on which node runs the active or standby role for each
tier-0 and tier-1 gateway.

Tier-1 Service Chaining


If a tier-1 gateway or logical router hosts different services, such as NAT, firewall, and load
balancer, the services are applied in the following order:

n Ingress

DNAT - Firewall - Load Balancer

Note: If DNAT is configured with Firewall Bypass, the firewall is skipped but the load balancer is not.

n Egress

Load Balancer - Firewall - SNAT

Setting Up Load Balancer Components


To use logical load balancers, you must start by configuring a load balancer and attaching it to a
tier-1 gateway.

Next, you set up health check monitoring for your servers. You must then configure server pools
for your load balancer. Finally, you must create a layer 4 or layer 7 virtual server for your load
balancer and attach the newly created virtual server to the load balancer.

The figure shows the setup order: (1) attach the load balancer to a tier-1 gateway, (2) configure health check monitoring, (3) create the server pool, and (4) create the virtual server and attach it to the load balancer.

Add Load Balancers


A load balancer is created and attached to a tier-1 gateway.


You can configure the level of error messages you want the load balancer to add to the error log.

Note
n Avoid setting the log level to DEBUG on load balancers with significant traffic, because the
number of messages printed to the log affects performance.

n Load balancer over IPSec VPN is not supported for route-based VPN terminated on Tier-1
gateways.


Prerequisites

Verify that a tier-1 gateway is configured. See Chapter 3 Tier-1 Gateway.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Load Balancing > Add Load Balancer.

3 Enter a name and a description for the load balancer.

4 Select the load balancer size based on the number of virtual servers and pool members you
need and the available resources.

5 Select the already configured tier-1 gateway to attach to this load balancer from the drop-
down menu.

The tier-1 gateway must be in the Active-Standby mode.

6 Define the severity level of the error log from the drop-down menu.

The load balancer collects information about issues of different severity levels in the
error log.

7 (Optional) Enter tags to make searching easier.

You can specify a tag to set a scope of the tag.


8 Click Save.

Creating the load balancer and attaching it to the tier-1 gateway takes about
three minutes, after which the configuration status appears green and Up.

If the status is Down, click the information icon and resolve the error before you proceed.

9 (Optional) Delete the load balancer.

a Detach the load balancer from the virtual server and tier-1 gateway.

b Select the load balancer.

c Click the vertical ellipses button.

d Select Delete.
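
The equivalent load balancer service can also be defined declaratively through the Policy API. The following is a minimal sketch using Python and the requests library; the manager address, credentials, service ID, and tier-1 gateway path are placeholders, and the resource and field names should be verified against the NSX API Guide.

import requests

NSX_MANAGER = "https://nsx-mgr.example.com"   # placeholder manager address
AUTH = ("admin", "VMware1!VMware1!")          # placeholder credentials

# A small load balancer attached to an existing tier-1 gateway (placeholder path).
lb_service = {
    "resource_type": "LBService",
    "display_name": "lb-1",
    "size": "SMALL",
    "enabled": True,
    "error_log_level": "INFO",
    "connectivity_path": "/infra/tier-1s/t1-gw-01",
}

requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/lb-services/lb-1",
    json=lb_service,
    auth=AUTH,
    verify=False,  # lab only
).raise_for_status()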

Add an Active Monitor


The active health monitor is used to test whether a server is available. It uses several types of
tests, such as sending a basic ping to servers or advanced HTTP requests, to monitor application
health.

Servers that fail to respond within a certain time period or respond with errors are excluded from
future connection handling until a subsequent periodic health check finds these servers to be
healthy.

Active health checks are performed on server pool members after the pool member is attached to
a virtual server and that virtual server is attached to a tier-1 gateway. The tier-1 uplink IP address is
used for the health check.

Note More than one active health monitor can be configured per server pool.

The figure shows step 2 of the setup workflow: active health checks (optional) probe the pool members directly, while passive health checks (optional) observe client traffic flowing through the virtual server.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Load Balancing > Monitors > Active > Add Active Monitor.


3 Select a protocol for the monitor from the drop-down menu.

The predefined protocols are HTTP, HTTPS, ICMP, TCP, and UDP.

4 Select the HTTP protocol.

5 Configure the values to monitor a service pool.

You can also accept the default active health monitor values.

Option Description

Name and Description Enter a name and description for the active health monitor.

Monitoring Port Set the value of the monitoring port.

Monitoring Interval Set the time in seconds that the monitor sends another connection request to
the server.

Timeout Period Set the time in seconds that the load balancer waits for the pool member's
response to a probe before considering the check failed.

Fail Count Set the number of consecutive failures after which the server is
considered temporarily unavailable.

Rise Count Set the number of consecutive successful checks required before the
pool member status changes from Down to Up.

Tags Enter tags to make searching easier.


You can specify a tag to set a scope of the tag.

For example, if the monitoring interval is set to 5 seconds and the timeout to 15 seconds,
the load balancer sends a request to the server every 5 seconds. In each probe, if the expected
response is received from the server within 15 seconds, the health check result is OK. If not,
the result is CRITICAL. If the three most recent health check results are all OK, the server is
considered UP.

6 To configure the HTTP Request, click Configure.

7 Enter the HTTP request and response configuration details.

Option Description

HTTP Method Select the method to detect the server status from the drop-down menu,
GET, OPTIONS, POST, HEAD, and PUT.

HTTP Request URL Enter the request URI for the method.
ASCII control characters (backspace, vertical tab, horizontal tab, line feed,
etc), unsafe characters such as a space, \, <, >, {, }, and any character outside
the ASCII character set are not allowed in the request URL and should be
encoded. For example, replace a space with a plus (+) sign, or with %20.

HTTP Request Version Select the supported request version from the drop-down menu.
You can also accept the default version, HTTP_VERSION_1.

HTTP Request Header Click Add and enter the HTTP request header name and corresponding
value.

HTTP Request Body Enter the request body.


Valid for the POST and PUT methods.


Option Description

HTTP Response Code Enter the string that the monitor expects to match in the status line of HTTP
response body.
The response code is a comma-separated list.
For example, 200,301,302,401.

HTTP Response Body If the HTTP response body string and the HTTP health check response body
match, then the server is considered as healthy.

8 Click Save.

9 Select the HTTPS protocol from the drop-down list.

10 Complete step 5.

11 Click Configure.

12 Enter the HTTP request and response and SSL configuration details.

Option Description

Name and Description Enter a name and description for the active health monitor.

HTTP Method Select the method to detect the server status from the drop-down menu,
GET, OPTIONS, POST, HEAD, and PUT.

HTTP Request URL Enter the request URI for the method.
ASCII control characters (backspace, vertical tab, horizontal tab, line feed,
etc), unsafe characters such as a space, \, <, >, {, }, and any character outside
the ASCII character set are not allowed in the request URL and should be
encoded. For example, replace a space with a plus (+) sign, or with %20.

HTTP Request Version Select the supported request version from the drop-down menu.
You can also accept the default version, HTTP_VERSION_1.

HTTP Request Header Click Add and enter the HTTP request header name and corresponding
value.

HTTP Request Body Enter the request body.


Valid for the POST and PUT methods.

HTTP Response Code Enter the string that the monitor expects to match in the status line of HTTP
response body.
The response code is a comma-separated list.
For example, 200,301,302,401.

HTTP Response Body If the HTTP response body string and the HTTP health check response body
match, then the server is considered as healthy.

Server SSL Toggle the button to enable the SSL server.

Client Certificate (Optional) Select a certificate from the drop-down menu to be used if the
server does not host multiple host names on the same IP address or if the
client does not support an SNI extension.

Server SSL Profile (Optional) Assign a default SSL profile from the drop-down menu that defines
reusable and application-independent client-side SSL properties.
Click the vertical ellipses and create a custom SSL profile.


Option Description

Trusted CA Certificates (Optional) You can require the client to have a CA certificate for
authentication.

Mandatory Server Authentication (Optional) Toggle the button to enable server authentication.

Certificate Chain Depth (Optional) Set the authentication depth for the client certificate chain.

Certificate Revocation List (Optional) Set a Certificate Revocation List (CRL) in the client-side SSL profile
to reject compromised client certificates.

13 Select the ICMP protocol.

14 Complete step 5 and assign the data size in bytes of the ICMP health check packet.

15 Select the TCP protocol.

16 Complete step 5. You can leave the TCP data parameters empty.

If neither the data sent nor the data expected is specified, a TCP three-way handshake
is established to validate the server health, and no data is sent.

If specified, the expected data must be a string. Regular expressions are not supported.

17 Select the UDP protocol.

18 Complete step 5 and configure the UDP data.

Required Option Description

UDP Data Sent Enter the string to be sent to a server after a connection is established.

UDP Data Expected Enter the string expected to be received from the server.
The server is considered UP only when the received string matches this
value.

What to do next

Associate the active health monitor with a server pool. See Add a Server Pool.
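
An active HTTP monitor like the one configured in this procedure can also be created through the Policy API. The following is a minimal sketch using Python and the requests library; the manager address, credentials, profile ID, URL, and threshold values are placeholders, and the resource and field names should be verified against the NSX API Guide.

import requests

NSX_MANAGER = "https://nsx-mgr.example.com"   # placeholder manager address
AUTH = ("admin", "VMware1!VMware1!")          # placeholder credentials

# HTTP active monitor: probe /healthz on port 80 every 5 seconds with a 15-second timeout.
http_monitor = {
    "resource_type": "LBHttpMonitorProfile",
    "display_name": "http-health",
    "monitor_port": 80,
    "interval": 5,
    "timeout": 15,
    "fall_count": 3,
    "rise_count": 3,
    "request_method": "GET",
    "request_url": "/healthz",
    "response_status_codes": [200, 301, 302],
}

requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/lb-monitor-profiles/http-health",
    json=http_monitor,
    auth=AUTH,
    verify=False,  # lab only
).raise_for_status()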

Add a Passive Monitor


Load balancers perform passive health checks to monitor failures during client connections and
mark servers causing consistent failures as DOWN.

Passive health check monitors client traffic going through the load balancer for failures. For
example, if a pool member sends a TCP Reset (RST) in response to a client connection, the load
balancer detects that failure. If there are multiple consecutive failures, then the load balancer
considers that server pool member to be temporarily unavailable and stops sending connection
requests to that pool member for some time. After some time, the load balancer sends a
connection request to verify that the pool member has recovered. If that connection is successful,
then the pool member is considered healthy. Otherwise, the load balancer waits for some time and
tries again.


Passive health check considers the following scenarios to be failures in the client traffic.

n For server pools associated with Layer 7 virtual servers, if the connection to the pool member
fails. For example, if the pool member sends a TCP RST when the load balancer tries to
connect or perform an SSL handshake between the load balancer and the pool member fails.

n For server pools associated with Layer 4 TCP virtual servers, if the pool member sends a TCP
RST in response to client TCP SYN or does not respond at all.

n For server pools associated with Layer 4 UDP virtual servers, if a port is unreachable or a
destination unreachable ICMP error message is received in response to a client UDP packet.

For server pools associated with Layer 7 virtual servers, the failed connection count is incremented
when any TCP connection error occurs, for example, a TCP RST, a failure to send data, or an SSL
handshake failure.

For server pools associated with Layer 4 TCP virtual servers, if no response is received to a TCP SYN
sent to the server pool member, or if a TCP RST is received in response to a TCP SYN, the server pool
member is considered DOWN and the failed count is incremented.

For Layer 4 UDP virtual servers, if an ICMP error such as a port unreachable or destination unreachable
message is received in response to the client traffic, the server pool member is considered DOWN.

Note One passive health monitor can be configured per server pool.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Load Balancing > Monitors > Passive > Add Passive Monitor.

3 Enter a name and description for the passive health monitor.

4 Configure the values to monitor a service pool.

You can also accept the default active health monitor values.

Option Description

Fail Count Set the number of consecutive failures after which the server is
considered temporarily unavailable.

Timeout Period Set the time in seconds that the server is considered temporarily
unavailable before it is tried again.

Tags Enter tags to make searching easier.


You can specify a tag to set a scope of the tag.

For example, when the consecutive failures reach the configured value 5, that member is
considered temporarily unavailable for 5 seconds. After this period, that member is tried again
for a new connection to see if it is available. If that connection is successful, then the member
is considered available and the failed count is set to zero. However, if that connection fails,
then it is not used for another timeout interval of 5 seconds.


What to do next

Associate the passive health monitor with a server pool. See Add a Server Pool.
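
A passive monitor can be created the same way through the Policy API. This minimal sketch (Python with requests, placeholder manager address, credentials, and profile ID) assumes the LBPassiveMonitorProfile resource type with max_fails and timeout fields; verify the resource and field names against the NSX API Guide.

import requests

NSX_MANAGER = "https://nsx-mgr.example.com"   # placeholder manager address
AUTH = ("admin", "VMware1!VMware1!")          # placeholder credentials

# Passive monitor: mark a member temporarily unavailable for 5 seconds after 5 consecutive failures.
passive_monitor = {
    "resource_type": "LBPassiveMonitorProfile",
    "display_name": "passive-default",
    "max_fails": 5,
    "timeout": 5,
}

requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/lb-monitor-profiles/passive-default",
    json=passive_monitor,
    auth=AUTH,
    verify=False,  # lab only
).raise_for_status()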

Add a Server Pool


A server pool consists of one or more servers that are configured and running the same
application. A single pool can be associated to both Layer 4 and Layer 7 virtual servers.


Figure 10-1. Server Pool Parameter Configuration

A server pool is configured with an SNAT mode, pool members, and active (HTTP, HTTPS, TCP, UDP, or ICMP) and passive health monitors.

Prerequisites

n If you use dynamic pool members, a NSGroup must be configured. See Create an NSGroup in
Manager Mode.

n Verify that a passive health monitor is configured. See Add a Passive Monitor.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Load Balancing > Server Pools > Add Server Pool.

3 Enter a name and description for the load balancer server pool.

You can optionally describe the connections managed by the server pool.


4 Select the algorithm balancing method for the server pool.

Load balancing algorithm controls how the incoming connections are distributed among the
members. The algorithm can be used on a server pool or a server directly.

All load balancing algorithms skip servers that meet any of the following conditions:

n Admin state is set to DISABLED.

n Admin state is set to GRACEFUL_DISABLED and no matching persistence entry.

n Active or passive health check state is DOWN.

n Connection limit for the maximum server pool concurrent connections is reached.

Option Description

ROUND_ROBIN Incoming client requests are cycled through a list of available servers capable
of handling the request.
Ignores the server pool member weights even if they are configured.

WEIGHTED_ROUND_ROBIN Each server is assigned a weight value that signifies how that server performs
relative to other servers in the pool. The value determines how many client
requests are sent to a server compared to other servers in the pool.
This load balancing algorithm focuses on fairly distributing the load among
the available server resources.

LEAST_CONNECTION Distributes client requests to multiple servers based on the number of


connections already on the server.
New connections are sent to the server with the fewest connections. Ignores
the server pool member weights even if they are configured.

WEIGHTED_LEAST_CONNECTION Each server is assigned a weight value that signifies how that server performs
relative to other servers in the pool. The value determines how many client
requests are sent to a server compared to other servers in the pool.
This load balancing algorithm focuses on using the weight value to distribute
the load among the available server resources.
By default, the weight value is 1 if the value is not configured and slow start is
enabled.

IP-HASH Selects a server based on a hash of the source IP address and the total
weight of all the running servers.


5 Click Select Members and select the server pool members.

A server pool consists of single or multiple pool members.

Option Description

Enter individual members Enter a pool member name, IPv4 or IPv6 address, and a port. IP addresses
can be either IPv4 or IPv6. Mixed addressing is not supported. Note that
the pool members IP version must match the VIP IP version. For example,
VIP-IPv4 with Pool-IPv4, and IPv6 with Pool-IPv6.
Each server pool member can be configured with a weight for use in the
load balancing algorithm. The weight indicates how much more or less load a
given pool member can handle relative to other members in the same pool.
You can set the server pool admin state. By default, the option is enable
when a server pool member is added.
If the option is disabled, active connections are processed, and the server
pool member is not selected for new connections. New connections are
assigned to other members of the pool.
If gracefully disabled, it allows you to remove servers for maintenance. The
existing connections to a member in the server pool in this state continue to
be processed.
Toggle the button to designate a pool member as a backup member to work
with the health monitor to provide an Active-Standby state. Traffic failover
occurs for backup members if active members fail a health check. Backup
members are skipped during the server selection. When the server pool is
inactive, the incoming connections are sent to only the backup members that
are configured with a sorry page indicating an application is unavailable.
Max Concurrent Connection value assigns a connection maximum so that
the server pool members are not overloaded and skipped during server
selection. If a value is not specified, then the connection is unlimited.

Select a group Select a pre-configured group of server pool members.


Enter a group name and an optional description.
Set the compute member from existing list or create one. You can specify
membership criteria, select members of the group, add IP addresses, and
MAC addresses as group members, and add Active Directory groups. IP
addresses can be either IPv4 or IPv6. Mixed addressing is not supported. The
identity members intersect with the compute member to define membership
of the group. Select a tag from the drop-down menu.
You can optionally define the maximum group IP address list.

6 Click Set Monitors and select one or more active health check monitors for the server. Click
Apply.

The load balancer periodically sends an ICMP ping to the servers to verify health independent
of data traffic. You can configure more than one active health check monitor per server pool.


7 Select the Source NAT (SNAT) translation mode.

Depending on the topology, SNAT might be required so that the load balancer receives the
traffic from the server destined to the client. SNAT can be enabled per server pool.

SNAT Translation Mode Description

Automap Mode Load Balancer uses the interface IP address and ephemeral port to continue
the communication with a client initially connected to one of the server's
established listening ports.
SNAT is required.
Enable port overloading to allow the same SNAT IP and port to be used
for multiple connections if the tuple (source IP, source port, destination
IP, destination port, and IP protocol) is unique after the SNAT process is
performed.
You can also set the port overload factor to allow the maximum number of
times a port can be used simultaneously for multiple connections.

Disabled Disable the SNAT translation mode.

IP Pool Specify a single IPv4 or IPv6 address range, for example, 1.1.1.1-1.1.1.10 to
be used for SNAT while connecting to any of the servers in the pool. IP
addresses can be either IPv4 or IPv6. Mixed addressing is not supported.
By default, the port range 4096 through 65535 is used for all configured
SNAT IP addresses. The port range 1000 through 4095 is reserved for
purposes such as health checks, and connections initiated from Linux
applications. If multiple IP addresses are present, then they are selected in
a Round Robin manner.
If a virtual server IP port is in the SNAT default port range 4096 through
65535, make sure that the virtual server IP is not in the SNAT IP pool.
Enable port overloading to allow the same SNAT IP and port to be used
for multiple connections if the tuple (source IP, source port, destination
IP, destination port, and IP protocol) is unique after the SNAT process is
performed.
You can also set the port overload factor to allow the maximum number of
times a port can be used simultaneously for multiple connections.

8 Click Additional Properties, and toggle the button to enable TCP Multiplexing.

With TCP multiplexing, you can use the same TCP connection between a load balancer and the
server for sending multiple client requests from different client TCP connections.

9 Set the Max Multiplexing Connections per server that are kept alive to send future client
requests.

10 Enter the Min Active Members the server pool must always maintain.

11 Select a passive health monitor for the server pool from the drop-down menu.

12 Select a tag from the drop-down menu.
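
A server pool with static members, SNAT automap, and the monitors configured earlier can also be expressed through the Policy API. The following minimal sketch uses Python and the requests library; the manager address, credentials, pool ID, member addresses, and monitor path are placeholders, and the field names should be verified against the NSX API Guide.

import requests

NSX_MANAGER = "https://nsx-mgr.example.com"   # placeholder manager address
AUTH = ("admin", "VMware1!VMware1!")          # placeholder credentials

server_pool = {
    "resource_type": "LBPool",
    "display_name": "web-pool",
    "algorithm": "ROUND_ROBIN",
    "snat_translation": {"type": "LBSnatAutoMap"},
    "members": [
        {"ip_address": "10.10.10.11", "port": "80", "weight": 1, "admin_state": "ENABLED"},
        {"ip_address": "10.10.10.12", "port": "80", "weight": 1, "admin_state": "ENABLED"},
    ],
    "active_monitor_paths": ["/infra/lb-monitor-profiles/http-health"],
}

requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/lb-pools/web-pool",
    json=server_pool,
    auth=AUTH,
    verify=False,  # lab only
).raise_for_status()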

Setting Up Virtual Server Components


You can set up the Layer 4 and Layer 7 virtual servers and configure several virtual server
components such as, application profiles, persistent profiles, and load balancer rules.



Figure 10-2. Virtual Server Components

A virtual server is associated with an application profile (Fast TCP, Fast UDP, or HTTP), a client-SSL profile, a server-SSL profile, a persistence profile (Source-IP or Cookie), load balancer rules, and a server pool.

Add an Application Profile


Application profiles are associated with virtual servers to enhance load balancing network traffic
and simplify traffic-management tasks.

Application profiles define the behavior of a particular type of network traffic. The associated
virtual server processes network traffic according to the values specified in the application profile.
Fast TCP, Fast UDP, and HTTP application profiles are the supported types of profiles.

TCP application profile is used by default when no application profile is associated to a virtual
server. TCP and UDP application profiles are used when an application is running on a TCP or
UDP protocol and does not require any application level load balancing such as, HTTP URL load
balancing. These profiles are also used when you only want Layer 4 load balancing, which has a
faster performance and supports connection mirroring.


Figure 10-3. Layer 4 TCP and UDP Application Profile

Clients connect to a Layer 4 VIP (TCP/UDP) on the virtual server, and the load balancer distributes the traffic to the health-checked server pool.

The HTTP application profile is used for both HTTP and HTTPS applications when the load balancer
must take action based on Layer 7 information, such as load balancing all image requests to a specific
server pool member or terminating HTTPS to offload SSL from pool members. Unlike the TCP application
profile, the HTTP application profile terminates the client TCP connection at the load balancer and
waits for the client's HTTP or HTTPS request before selecting the server pool member.

Figure 10-4. Layer 7 HTTPS Application Profile

Clients connect to a Layer 7 VIP (HTTP/HTTPS) on the virtual server, and the load balancer distributes the traffic to the health-checked server pool.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Load Balancing > Profiles > Application > Add Application Profiles.


3 Select a Fast TCP application profile and enter the profile details.

You can also accept the default FAST TCP profile settings.

Option Description

Name and Description Enter a name and a description for the Fast TCP application profile.

Idle Timeout Enter the time in seconds on how long the server can remain idle after a TCP
connection is established.
Set the idle time to the actual application idle time and add a few more
seconds so that the load balancer does not close its connections before the
application does.

HA Flow Mirroring Toggle the button to make all the flows to the associated virtual server
mirrored to the HA standby node.

Connection Close Timeout Enter the time in seconds that a TCP connection is kept after both FINs are
exchanged or an RST is received, before the connection is closed.
A short closing timeout might be required to support fast connection rates.

Tags Enter tags to make searching easier.


You can specify a tag to set a scope of the tag.

4 Select a Fast UDP application profile and enter the profile details.

You can also accept the default UDP profile settings.

Option Description

Name and Description Enter a name and a description for the Fast UDP application profile.

Idle Timeout Enter the time in seconds on how long the server can remain idle after a UDP
connection is established.
UDP is a connectionless protocol. For load balancing purposes, all the UDP
packets with the same flow signature such as, source and destination IP
address or ports and IP protocol received within the idle timeout period are
considered to belong to the same connection and sent to the same server.
If no packets are received during the idle timeout period, the connection
which is an association between the flow signature and the selected server is
closed.

HA Flow Mirroring Toggle the button to make all the flows to the associated virtual server
mirrored to the HA standby node.

Tags Enter tags to make searching easier.


You can specify a tag to set a scope of the tag.

5 Select a HTTP application profile and enter the profile details.

You can also accept the default HTTP profile settings.

To detect inactive client or server communication, the load balancer uses the HTTP
application profile response timeout, which is set to 60 seconds. If the server does not send
traffic during the 60-second interval, NSX ends the connection on the client and server side.
Default application profiles cannot be edited. To edit HTTP application profile settings, create
a custom profile.


HTTP application profile is used for both HTTP and HTTPS applications.

Option Description

Name and Description Enter a name and a description for the HTTP application profile.

Idle Timeout Enter the time in seconds on how long client idle connections remain before
the load balancer closes them (FIN).

Request Header Size Specify the maximum buffer size in bytes used to store HTTP request
headers.

Response Header Size Specify the maximum buffer size in bytes used to store HTTP response
headers. The default is 4096, and the maximum is 65536.

Redirection n None - If a website is temporarily down, user receives a page not found
error message.
n HTTP Redirect - If a website is temporarily down or has moved, incoming
requests for that virtual server can be temporarily redirected to a URL
specified here. Only a static redirection is supported.

For example, if HTTP Redirect is set to http://sitedown.abc.com/


sorry.html, then irrespective of the actual request, for example,
http://original_app.site.com/home.html or http://original_app.site.com/
somepage.html, incoming requests are redirected to the specified URL
when the original website is down.
n HTTP to HTTPS Redirect - Certain secure applications might want to force
communication over SSL, but instead of rejecting non-SSL connections,
they can redirect the client request to use SSL. With HTTP to HTTPS
Redirect, you can preserve both the host and URI paths and redirect the
client request to use SSL.

For HTTP to HTTPS redirect, the HTTPS virtual server must have port 443
and the same virtual server IP address must be configured on the same
load balancer.

For example, a client request for http://app.com/path/page.html is


redirected to https://app.com/path/page.html. If either the host name
or the URI must be modified while redirecting, for example, redirect to
https://secure.app.com/path/page.html, then load balancing rules must
be used.

Tags Enter tags to make searching easier.


You can specify a tag to set a scope of the tag.


Option Description

X-Forwarded-For (XFF) n Insert - If the XFF HTTP header is not present in the incoming request,
the load balancer inserts a new XFF header with the client IP address.
If the XFF HTTP header is present in the incoming request, the load
balancer appends the XFF header with the client IP address.
n Replace - If the XFF HTTP header is present in the incoming request, the
load balancer replaces the header.
Web servers log each request they handle with the requesting client IP
address. These logs are used for debugging and analytic purposes. If the
deployment topology requires SNAT on the load balancer, then server uses
the client SNAT IP address which defeats the purpose of logging.
As a workaround, the load balancer can be configured to insert XFF HTTP
header with the original client IP address. Servers can be configured to log
the IP address in the XFF header instead of the source IP address of the
connection.

Request Body Size Enter value for the maximum size of the buffer used to store the HTTP
request body.
If the size is not specified, then the request body size is unlimited.

Response Timeout (sec) Enter the time in seconds on how long the load balancer waits for Server
HTTP Response before it stops and closes the connection to the pool
member and retries the request to another server.

Server Keep-Alive Toggle the button for the load balancer to turn off TCP multiplexing and
enable HTTP keep-alive.
If the client uses HTTP/1.0, the load balancer upgrades to HTTP/1.1 protocol
and the HTTP keep-alive is set. All HTTP requests received on the same
client-side TCP connection are sent to the same server over a single TCP
connection to ensure that reauthorization is not required.
When HTTP keep-alive is enabled and forwarding rules are configured in the
load balancer, the server keep-alive setting takes precedence. As a result,
HTTP requests are sent to servers already connected with keep-alive.
If you always want to give priority to the forwarding rules when the load
balancer rule conditions are met, disable the keep-alive setting.
Note that the persistence setting takes precedence over the keep-alive
setting.
Processing is done in the order of Persistence > Keep-Alive > Load Balancer
Rules.
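
A custom HTTP application profile can also be created through the Policy API. The following minimal sketch uses Python and the requests library; the manager address, credentials, profile ID, and values are placeholders, and the field names should be verified against the NSX API Guide.

import requests

NSX_MANAGER = "https://nsx-mgr.example.com"   # placeholder manager address
AUTH = ("admin", "VMware1!VMware1!")          # placeholder credentials

# Custom HTTP application profile: 30-minute idle timeout and XFF insertion.
http_profile = {
    "resource_type": "LBHttpProfile",
    "display_name": "http-custom",
    "idle_timeout": 1800,
    "x_forwarded_for": "INSERT",
    "request_header_size": 1024,
    "response_timeout": 60,
}

requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/lb-app-profiles/http-custom",
    json=http_profile,
    auth=AUTH,
    verify=False,  # lab only
).raise_for_status()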

Add a Persistence Profile


To ensure stability of stateful applications, load balancers implement persistence which directs all
related connections to the same server. Different types of persistence are supported to address
different types of application needs.

Some applications maintain the server state such as, shopping carts. Such state can be per client
and identified by the client IP address or per HTTP session. Applications can access or modify this
state while processing subsequent related connections from the same client or HTTP session.


The source IP persistence profile tracks sessions based on the source IP address. When a client
requests a connection to a virtual server that enables the source address persistence, the load
balancer checks if that client was previously connected, and if so, returns the client to the same
server. If not, the load balancer selects the server pool member based on the pool load balancing
algorithm. Source IP persistence profile is used by Layer 4 and Layer 7 virtual servers.

If rule persistence, cookie persistence, and server keep-alive are all configured, the load balancer
follows the priority of rule persistence > cookie persistence > server keep-alive.

The Cookie persistence profile offers 3 modes:

n Cookie Insert - The load balancer inserts its own cookie with the pool member information
(encoded or not) in the server response to the client. The client then forwards the received
cookies in subsequent requests (NSX cookie included), and the load balancer uses that
information to provide pool member persistence. The NSX cookie is trimmed from the
client request when it is sent to the pool member.

n Cookie Prefix - The load balancer prepends the pool member information (encoded or not)
to the existing cookie in the server response to the client. The client then forwards the
received HTTP cookie in subsequent requests (with the NSX prepended information), and the
load balancer uses that information to provide pool member persistence. The NSX cookie
prefix is trimmed from the client request when it is sent to the pool member.

n Cookie Rewrite - The load balancer replaces the server cookie value with the pool member
information (encoded or not) in the server response to the client. The client then forwards
the received HTTP cookie in subsequent requests (with the NSX information), and the load
balancer uses that information to provide pool member persistence. The original server
cookie is replaced in the client request when it is sent to the pool member.

Cookie persistence is available only on L7 virtual servers. Note that a blank space in a cookie name
is not supported.

The generic persistence profile supports persistence based on the HTTP header, cookie, or URL
in the HTTP request. Therefore, it supports application session persistence when the session ID is
part of the URL. This profile is not associated with a virtual server directly. Specify this profile when
you configure a load balancer rule for request forwarding and response rewrite.

The figure shows that with persistence, repeated connections from Client 1 and Client 2 to a Layer 4 or Layer 7 VIP on the virtual server are directed back to the same pool member.


Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Load Balancing > Profiles > Persistence > Add Persistence Profiles.

3 Select Source IP to add a source IP persistence profile and enter the profile details.

You can also accept the default Source IP profile settings.

Option Description

Name and Description Enter a name and a description for the Source IP persistence profile.

Share Persistence Toggle the button to share the persistence so that all virtual servers this
profile is associated with can share the persistence table.
If the persistence sharing is not enabled in the Source IP persistence profile
associated to a virtual server, each virtual server that the profile is associated
to maintains a private persistence table.

Persistence Entry Timeout Enter the persistence expiration time in seconds.


The load balancer persistence table maintains entries to record that client
requests are directed to the same server.
The first connection from a new client IP address is load balanced to a pool
member based on the load balancing algorithm. NSX stores that
persistence entry in the load balancer persistence table, which you can view on
the edge node hosting the active tier-1 load balancer with the CLI command: get
load-balancer <LB-UUID> persistence-tables.
n When there are connections from that client to the VIP, the persistence
entry is kept.
n When there are no more connections from that client to the VIP,
the persistence entry begins the timer count down specified in the
"Persistence Entry Timeout" value. If no new connection from that client
to the VIP is made before the timer expires, the persistence entry for that
client IP is deleted. If that client comes back after the entry is deleted,
it will be load balanced again to a pool member based on the load
balancing algorithm.

Purge Entries When Full A large timeout value can lead to the persistence table quickly filling up when
the traffic is heavy. When this option is enabled, the oldest entry is deleted to
accept the newest entry.
When this option is disabled, if the source IP persistence table is full, new
client connections are rejected.

HA Persistence Mirroring Toggle the button to synchronize persistence entries to the HA peer. When
HA persistence mirroring is enabled, the client IP persistence remains in the
case of load balancer failover.

Tags Enter tags to make searching easier.


You can specify a tag to set a scope of the tag.


4 Select a Cookie persistence profile, and enter the profile details. Cookie persistence is
available only on L7 virtual servers. Note that a blank space in a cookie name is not supported.

Option Description

Name and Description Enter a name and a description for the Cookie persistence profile.

Share Persistence Toggle the button to share persistence across multiple virtual servers that are
associated to the same pool members.
The Cookie persistence profile inserts a cookie with the format,
<name>.<profile-id>.<pool-id>.
If the persistence shared is not enabled in the Cookie persistence profile
associated with a virtual server, the private Cookie persistence for each
virtual server is used and is qualified by the pool member. The load balancer
inserts a cookie with the format, <name>.<virtual_server_id>.<pool_id>.

Cookie Mode Select a mode from the drop-down menu.


n INSERT - Adds a unique cookie to identify the session.
n PREFIX - Appends to the existing HTTP cookie information.
n REWRITE - Rewrites the existing HTTP cookie information.

Cookie Name Enter the cookie name. A blank space in a cookie name is not supported.

Cookie Domain Enter the domain name.


HTTP cookie domain can be configured only in the INSERT mode.

Cookie Fallback Toggle the button to select a new server to handle the client request if the
cookie points to a server that is in a DISABLED or DOWN state.
If fallback is disabled, the client request is rejected when the cookie points to
a server that is in a DISABLED or DOWN state.

Cookie Path Enter the cookie URL path.


HTTP cookie path can be set only in the INSERT mode.

Cookie Garbling Toggle the button to enable or disable garbling.

When garbling is enabled, the cookie server IP address and port information
is encrypted. When garbling is disabled, it is sent in plain text.

Cookie Type Select a cookie type from the drop-down menu.


Session Cookie - Not stored. Will be lost when the browser is closed.
Persistence Cookie - Stored by the browser. Not lost when the browser is
closed.

HttpOnly Flag When enabled, this option prevents a script running in the browser from
accessing cookies.
HttpOnly Flag is only available in the INSERT mode.

Secure Flag When enabled, this option causes web browsers to send cookies over https
only.
Secure Flag is only available in the INSERT mode.

Max Idle Time Enter the time in seconds that the cookie type can be idle before a cookie
expires.

Max Cookie Age For the session cookie type, enter the time in seconds a cookie is available.

Tags Enter tags to make searching easier.


You can specify a tag to set a scope of the tag.


5 Select Generic to add a generic persistence profile and enter the profile details.

Option Description

Name and Description Enter a name and a description for the generic persistence profile.

Share Persistence Toggle the button to share the profile among virtual servers.

Persistence Entry Timeout Enter the persistence expiration time in seconds.


The load balancer persistence table maintains entries to record that client
requests are directed to the same server.
The first connection from a new client IP address is load balanced to a pool
member based on the load balancing algorithm. NSX stores that
persistence entry in the load balancer persistence table, which you can view on
the edge node hosting the active tier-1 load balancer with the CLI command: get
load-balancer <LB-UUID> persistence-tables.
n When there are connections from that client to the VIP, the persistence
entry is kept.
n When there are no more connections from that client to the VIP,
the persistence entry begins the timer count down specified in the
"Persistence Entry Timeout" value. If no new connection from that client
to the VIP is made before the timer expires, the persistence entry for that
client IP is deleted. If that client comes back after the entry is deleted,
it will be load balanced again to a pool member based on the load
balancing algorithm.

HA Persistence Mirroring Toggle the button to synchronize persistence entries to the HA peer.

Tags Enter tags to make searching easier.


You can specify a tag to set a scope of the tag.
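
A source IP persistence profile can likewise be defined through the Policy API. The following is a minimal sketch using Python and the requests library; the manager address, credentials, profile ID, and values are placeholders, and the field names should be verified against the NSX API Guide.

import requests

NSX_MANAGER = "https://nsx-mgr.example.com"   # placeholder manager address
AUTH = ("admin", "VMware1!VMware1!")          # placeholder credentials

# Source IP persistence: shared table, 300-second entry timeout, purge when the table is full.
persistence_profile = {
    "resource_type": "LBSourceIpPersistenceProfile",
    "display_name": "src-ip-persistence",
    "persistence_shared": True,
    "timeout": 300,
    "purge": "FULL",
    "ha_persistence_mirroring_enabled": False,
}

requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/lb-persistence-profiles/src-ip-persistence",
    json=persistence_profile,
    auth=AUTH,
    verify=False,  # lab only
).raise_for_status()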

Add an SSL Profile


SSL profiles configure application-independent SSL properties, such as cipher lists, and reuse
these lists across multiple applications. SSL properties are different when the load balancer is
acting as a client and as a server; as a result, separate client-side and server-side SSL profiles
are supported.

Note SSL profile is not supported in the NSX limited export release.

A client-side SSL profile refers to the load balancer acting as an SSL server and terminating the
client SSL connection. A server-side SSL profile refers to the load balancer acting as a client and
establishing a connection to the server.

You can specify a cipher list on both the client-side and server-side SSL profiles.

SSL session caching allows the SSL client and server to reuse previously negotiated security
parameters avoiding the expensive public key operation during the SSL handshake. SSL session
caching is disabled by default on both the client-side and server-side.


SSL session tickets are an alternate mechanism that allows the SSL client and server to reuse
previously negotiated session parameters. In SSL session tickets, the client and server negotiate
whether they support SSL session tickets during the handshake exchange. If supported by both,
server can send an SSL ticket, which includes encrypted SSL session parameters to the client. The
client can use that ticket in subsequent connections to reuse the session. SSL session tickets are
enabled on the client-side and disabled on the server-side.

Figure 10-5. SSL Offloading

(Figure: clients connect over HTTPS, using the client SSL profile, to a Layer 7 VIP on a Tier-1
load balancer; the load balancer forwards the traffic over HTTP to the servers in Pool 1 and
health checks the pool.)

Figure 10-6. End-to-End SSL

(Figure: clients connect over HTTPS, using the client SSL profile, to a Layer 7 VIP on a Tier-1
load balancer; the load balancer re-encrypts the traffic over HTTPS, using the server SSL profile,
to the servers in Pool 1 and health checks the pool.)

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Load Balancing > Profiles > SSL Profile.


3 Select a Client SSL Profile and enter the profile details.

Option Description

Name and Description Enter a name and a description for the Client SSL profile.

SSL Suite Select the SSL Cipher group from the drop-down menu. The available SSL
Ciphers and SSL protocols to be included in the Client SSL profile are then
populated.
Balanced SSL Cipher group is the default.

Session Caching Toggle the button to allow the SSL client and server to reuse previously
negotiated security parameters avoiding the expensive public key operation
during an SSL handshake.

Tags Enter tags to make searching easier.


You can specify a tag to set a scope of the tag.

Supported SSL Ciphers Depending on the SSL suite you assigned, the supported SSL Ciphers are
populated here. Click View More to view the entire list.
If you selected Custom, you must select the SSL Ciphers from the drop-down
menu.

Supported SSL Protocols Depending on the SSL suite you assigned, the supported SSL protocols are
populated here. Click View More to view the entire list.
If you selected Custom, you must select the SSL protocols from the drop-down
menu.

Session Cache Entry Timeout Enter the cache timeout in seconds to specify how long the SSL session
parameters must be kept and can be reused.

Prefer Server Cipher Toggle the button so that the server selects the first cipher it supports from
the client's list.
During an SSL handshake, the client sends an ordered list of supported
ciphers to the server.

4 Select a Server SSL Profile and enter the profile details.

Option Description

Name and Description Enter a name and a description for the Server SSL profile.

SSL Suite Select the SSL Cipher group from the drop-down menu. The available SSL
Ciphers and SSL protocols to be included in the Server SSL profile are then
populated.
Balanced SSL Cipher group is the default.

Session Caching Toggle the button to allow the SSL client and server to reuse previously
negotiated security parameters avoiding the expensive public key operation
during an SSL handshake.

Tags Enter tags to make searching easier.


You can specify a tag to set a scope of the tag.

Supported SSL Ciphers Depending on the SSL suite you assigned, the supported SSL Ciphers are
populated here. Click View More to view the entire list.
If you selected Custom, you must select the SSL Ciphers from the drop-down
menu.


Supported SSL Protocols Depending on the SSL suite you assigned, the supported SSL protocols are
populated here. Click View More to view the entire list.
If you selected Custom, you must select the SSL protocols from the drop-down
menu.

Session Cache Entry Timeout Enter the cache timeout in seconds to specify how long the SSL session
parameters must be kept and can be reused.

Prefer Server Cipher Toggle the button so that the server selects the first cipher it supports from
the client's list.
During an SSL handshake, the client sends an ordered list of supported
ciphers to the server.

Add Layer 4 Virtual Servers


Virtual servers receive all the client connections and distribute them among the servers. A virtual
server has an IP address, a port, and a protocol. For Layer 4 virtual servers, lists of port ranges
can be specified instead of a single TCP or UDP port to support complex protocols with dynamic
ports.

A Layer 4 virtual server must be associated to a primary server pool, also called a default pool.

If a virtual server's status is disabled, any new connection attempt to the virtual server is
rejected by sending either a TCP RST for TCP connections or an ICMP error message for UDP.
New connections are rejected even if there are matching persistence entries for them. Active
connections continue to be processed. If a virtual server is deleted or disassociated from a load
balancer, then active connections to that virtual server fail.

Prerequisites

n Verify that application profiles are available. See Add an Application Profile.

n Verify that persistent profiles are available. See Add a Persistence Profile.

n Verify that SSL profiles for the client and server are available. See Add an SSL Profile.

n Verify that server pools are available. See Add a Server Pool.

n Verify that load balancer is available. See Add Load Balancers.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Load Balancing > Virtual Servers > Add Virtual Server.

3 Select a L4 TCP or a L4 UDP protocol and enter the protocol details.

Layer 4 virtual servers support either the Fast TCP or Fast UDP protocol, but not both.


For Fast TCP or Fast UDP protocol support on the same IP address and port, for example
DNS, a virtual server must be created for each protocol.

L4 TCP Option L4 TCP Description

Name and Description Enter a name and a description for the Layer 4 virtual server.

IP Address Enter the virtual server IP address. Both IPv4 and IPv6 addresses are
supported. Note that the pool members' IP version must match the VIP IP
version. For example, VIP-IPv4 with Pool-IPv4, and VIP-IPv6 with Pool-IPv6.

Ports Enter the virtual server port number.

Load Balancer Select an existing load balancer to attach to this Layer 4 virtual server from
the drop-down menu.

Server Pool Select an existing server pool from the drop-down menu.
The server pool consists of one or more servers, also called pool members
that are similarly configured and running the same application.
You can click the vertical ellipses to create a server pool.

Application Profile Based on the protocol type, the existing application profile is automatically
populated.
Click the vertical ellipses to create an application profile.

Persistence Select an existing persistence profile from the drop-down menu.


Persistence profile can be enabled on a virtual server to allow Source IP
related client connections to be sent to the same server.

Access List Control When you enable Access List Control (ALC), all traffic flowing through the
load balancer is compared with the ACL statement, which either drops or
allows the traffic.
ACL is disabled by default. To enable, click Configure, and select Enabled.
Select an Action:
n Allow - Allows connections matching the selected group. All other
connections are dropped.
n Drop - Allows connections not matching the selected group. A dropped
connection generates a log entry if access log is enabled.
Select a Group. The IP addresses included in this group are either dropped
or allowed by the ACL.

Max Concurrent Connection Set the maximum concurrent connection allowed to a virtual server so that
the virtual server does not deplete resources of other applications hosted on
the same load balancer.

Max New Connection Rate Set the maximum new connection rate to a server pool member so that a
virtual server does not deplete resources.

Sorry Server Pool Select an existing sorry server pool from the drop-down menu.
The sorry server pool serves the request when the load balancer cannot select
a backend server from the default pool to serve the request.
You can click the vertical ellipses to create a server pool.


Default Pool Member Port Enter a default pool member port if the pool member port for a virtual server
is not defined.
For example, if a virtual server is defined with a port range of 2000–2999
and the default pool member port range is set as 8000-8999, then an
incoming client connection to the virtual server port 2500 is sent to a pool
member with a destination port set to 8500.

Admin State Toggle the button to disable the admin state of the Layer 4 virtual server.

Access Log Toggle the button to enable logging for the Layer 4 virtual server.

Tags Enter tags to make searching easier.


You can specify a tag to set a scope of the tag.

L4 UDP Option L4 UDP Description

Name and Description Enter a name and a description for the Layer 4 virtual server.

IP Address Enter the virtual server IP address. Both IPv4 and IPv6 addresses are
supported. Note that the pool members' IP version must match the VIP IP
version. For example, VIP-IPv4 with Pool-IPv4, and VIP-IPv6 with Pool-IPv6.

Ports Enter the virtual server port number.

Load Balancer Select an existing load balancer to attach to this Layer 4 virtual server from
the drop-down menu.

Server Pool Select an existing server pool from the drop-down menu.
The server pool consists of one or more servers, also called pool members
that are similarly configured and running the same application.
You can click the vertical ellipses to create a server pool.

Application Profile Based on the protocol type, the existing application profile is automatically
populated.
You can click the vertical ellipses to create an application profile.

Persistence Select an existing persistence profile from the drop-down menu.


Persistence profile can be enabled on a virtual server to allow Source IP
related client connections to be sent to the same server.

Max Concurrent Connection Set the maximum concurrent connection allowed to a virtual server so that
the virtual server does not deplete resources of other applications hosted on
the same load balancer.

Access List Control When you enable Access List Control (ALC), all traffic flowing through the
load balancer is compared with the ACL statement, which either drops or
allows the traffic.
ACL is disabled by default. To enable, click Configure, and select Enabled.
Select an Action:
n Allow - Allows connections matching the selected group. All other
connections are dropped.
n Drop - Allows connections not matching the selected group. A dropped
connection generates a log entry if access log is enabled.
Select a Group. The IP addresses included in this group are either dropped
or allowed by the ACL.


Max New Connection Rate Set the maximum new connection rate to a server pool member so that a
virtual server does not deplete resources.

Sorry Server Pool Select an existing sorry server pool from the drop-down menu.
The sorry server pool serves the request when the load balancer cannot select
a backend server from the default pool to serve the request.
You can click the vertical ellipses to create a server pool.

Default Pool Member Port Enter a default pool member port if the pool member port for a virtual server
is not defined.
For example, if a virtual server is defined with port range 2000–2999 and the
default pool member port range is set as 8000-8999, then an incoming client
connection to the virtual server port 2500 is sent to a pool member with a
destination port set to 8500.

Admin State Toggle the button to disable the admin state of the Layer 4 virtual server.

Access Log Toggle the button to enable logging for the Layer 4 virtual server.

Log Significant Event Only This field can only be configured if access logs are enabled. Connections that
cannot be sent to a pool member are treated as significant events, such as
"max connection limit" or "Access Control drop."

Tags Enter tags to make searching easier.


You can specify a tag to set a scope of the tag.
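
The default pool member port mapping described in the tables above is a simple offset calculation. The following Python sketch reproduces the example from this guide, where a connection to virtual server port 2500 in the range 2000-2999 maps to pool member port 8500 in the default range 8000-8999; the function name and ranges are illustrative only.

def map_pool_member_port(client_port, vs_range, default_pool_range):
    # Map a virtual server port to the corresponding default pool member port
    # by preserving the offset from the start of the virtual server port range.
    if client_port not in vs_range:
        raise ValueError("port is not handled by this virtual server")
    offset = client_port - vs_range.start
    return default_pool_range.start + offset

# Example from the guide: VS range 2000-2999, default pool member range 8000-8999.
print(map_pool_member_port(2500, range(2000, 3000), range(8000, 9000)))  # prints 8500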

Add Layer 7 HTTP Virtual Servers


Virtual servers receive all the client connections and distribute them among the servers. A virtual
server has an IP address, a port, and the TCP protocol.

If a virtual server's status is disabled, any new connection attempt to the virtual server is
rejected by sending either a TCP RST for TCP connections or an ICMP error message for UDP.
New connections are rejected even if there are matching persistence entries for them. Active
connections continue to be processed. If a virtual server is deleted or disassociated from a load
balancer, then active connections to that virtual server fail.

Note SSL profile is not supported in the NSX limited export release.

If a client-side SSL profile binding is configured on a virtual server but not a server-side SSL profile
binding, then the virtual server operates in an SSL-terminate mode, which has an encrypted
connection to the client and plain text connection to the server. If both the client-side and server-
side SSL profile bindings are configured, then the virtual server operates in SSL-proxy mode,
which has an encrypted connection both to the client and the server.

Associating server-side SSL profile binding without associating a client-side SSL profile binding
is currently not supported. If a client-side and a server-side SSL profile binding is not associated
with a virtual server and the application is SSL-based, then the virtual server operates in an
SSL-unaware mode. In this case, the virtual server must be configured for Layer 4. For example,
the virtual server can be associated to a fast TCP profile.


Prerequisites

n Verify that application profiles are available. See Add an Application Profile.

n Verify that persistent profiles are available. See Add a Persistence Profile.

n Verify that SSL profiles for the client and server are available. See Add an SSL Profile.

n Verify that server pools are available. See Add a Server Pool.

n Verify that CA and client certificate are available. See Chapter 23 Certificates.

n Verify that a certification revocation list (CRL) is available. See Import a Certificate Revocation
List.

n Verify that load balancer is available. See Add Load Balancers.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Select Networking > Load Balancing > Virtual Servers > Add Virtual Server.

3 Select L7 HTTP from the drop-down list and enter the protocol details.

Layer 7 virtual servers support the HTTP and HTTPS protocols.

Option Description

Name and Description Enter a name and a description for the Layer 7 virtual server.

IP Address Enter the virtual server IP address. Both IPv4 and IPv6 addresses are
supported.

Ports Enter the virtual server port number.

Load Balancer Select an existing load balancer to attach to this Layer 7 virtual server from
the drop-down menu.

Server Pool Select an existing server pool from the drop-down menu.
The server pool consists of one or more servers, also called pool members
that are similarly configured and running the same application.
You can click the vertical ellipses to create a server pool.

Application Profile Based on the protocol type, the existing application profile is automatically
populated.
You can click the vertical ellipses to create an application profile.

Persistence Select an existing persistence profile from the drop-down menu.


Persistence profile can be enabled on a virtual server to allow Source IP and
Cookie related client connections to be sent to the same server.

4 Click Configure to set the Layer 7 virtual server SSL.

You can configure the Client SSL and Server SSL.


5 Configure the Client SSL.

Option Description

Client SSL Toggle the button to enable the profile.


Client-side SSL profile binding allows multiple certificates, for different host
names to be associated to the same virtual server.

Default Certificate Select a default certificate from the drop-down menu.


This certificate is used if the server does not host multiple host names on
the same IP address or if the client does not support Server Name Indication
(SNI) extension.
To use a 2k/3k/4k certificate/key, use NSX Manager to create the certificate. See
Creating Self-signed Certificates or Create a Certificate Signing Request File.
To use an 8k certificate/key, import the 8k certificate key. See Importing and
Replacing Certificates.

Client SSL Profile Select the client-side SSL Profile from the drop-down menu.

SNI Certificates Select the available SNI certificate from the drop-down menu.

Trusted CA Certificates Select the available CA certificate.

Mandatory Client Authentication Toggle the button to enable this menu item.

Certificate Chain Depth Set the certificate chain depth to verify the depth in the server certificates
chain.

Certificate Revocation List Select the available CRL to disallow compromised server certificates.

6 Configure the Server SSL.

Option Description

Server SSL Toggle the button to enable the profile.

Client Certificate Select a client certificate from the drop-down menu.


This certificate is used if the server does not host multiple host names on
the same IP address or if the client does not support Server Name Indication
(SNI) extension.

Server SSL Profile Select the Server-side SSL Profile from the drop-down menu.

Trusted CA Certificates Select the available CA certificate.

Mandatory Server Authentication Toggle the button to enable this menu item.
Server-side SSL profile binding specifies whether the server certificate
presented to the load balancer during the SSL handshake must be validated
or not. When validation is enabled, the server certificate must be signed by
one of the trusted CAs whose self-signed certificates are specified in the
same server-side SSL profile binding.

Certificate Chain Depth Set the certificate chain depth to verify the depth in the server certificates
chain.

Certificate Revocation List Select the available CRL to disallow compromised server certificates.
OCSP and OCSP stapling are not supported on the server-side.


7 Click Additional Properties to configure additional Layer 7 virtual server properties.

Option Description

Max Concurrent Connection Set the maximum concurrent connection allowed to a virtual server so that
the virtual server does not deplete resources of other applications hosted on
the same load balancer.

Max New Connection Rate Set the maximum new connection rate to a server pool member so that a
virtual server does not deplete resources.

Sorry Server Pool Select an existing sorry server pool from the drop-down menu.
The sorry server pool serves the request when the load balancer cannot select
a backend server from the default pool to serve the request.
You can click the vertical ellipses to create a server pool.

Default Pool Member Port Enter a default pool member port, if the pool member port for a virtual server
is not defined.
For example, if a virtual server is defined with port range 2000-2999 and the
default pool member port range is set as 8000-8999, then an incoming client
connection to the virtual server port 2500 is sent to a pool member with a
destination port set to 8500.

Admin State Toggle the button to disable the admin state of the Layer 7 virtual server.

Access Log Toggle the button to enable logging for the Layer 7 virtual server.

Log Significant Event Only This field can only be configured if access logs are enabled. Requests with an
HTTP response status of >=400 are treated as a significant event.

Tags Select a tag from the drop-down list.


You can specify a tag to set a scope of the tag.

8 Click Save.

Add Load Balancer Rules


With Layer 7 HTTP virtual servers, you can optionally configure load balancer rules and customize
load balancing behavior using match or action rules.

Load balancer rules are supported for only Layer 7 virtual servers with an HTTP application profile.
Different load balancer services can use load balancer rules.

Each load balancer rule consists of single or multiple match conditions and single or multiple
actions. If the match conditions are not specified, then the load balancer rule always matches and
is used to define default rules. If more than one match condition is specified, then the matching
strategy determines if all conditions must match or any one condition must match for the load
balancer rule to be considered a match.

Each load balancer rule is implemented at a specific phase of the load balancing processing:
Transport, HTTP Access, Request Rewrite, Request Forwarding, and Response Rewrite. Not all
match conditions and actions are applicable to each phase.

Up to 4,000 load balancer rules can be configured with the API, if the skip_scale_validation
flag in LbService is set. Note that the flag can be set via API. Refer to the NSX API Guide for more
information. Up to 512 load balancer rules can be configured through the user interface.


Load Balancer rules support REGEX for match types. For more information, see Regular
Expressions in Load Balancer Rules.

Prerequisites

Verify a Layer 7 HTTP virtual server is available. See Add Layer 7 HTTP Virtual Servers.

n Configure Transport Phase Load Balancer Rules


Transport phase is the first phase of a client HTTP request.

n Configure HTTP Access Load Balancer Rules


A JSON web token (JWT) is a standardized, optionally validated and/or encrypted format that
is used to securely transfer information between two parties.

n Configure Request Rewrite Load Balancer Rules


An HTTP request rewrite is applied to the HTTP request coming from the client.

n Configure Request Forwarding Load Balancer Rules


Request forwarding redirects a URL or host to a specific server pool.

n Configure Response Rewrite Load Balancer Rules


An HTTP response rewrite is applied to the HTTP response going out from the servers to the
client.

n Regular Expressions in Load Balancer Rules


Regular expressions (REGEX) are used in match conditions for load balancer rules.

Configure Transport Phase Load Balancer Rules


Transport phase is the first phase of a client HTTP request.

Load Balancer virtual server SSL configuration is found under SSL Configuration. There are
two possible configurations. In both modes, the load balancer sees the traffic, and applies load
balancer rules based on the client HTTP traffic.

n SSL Offload, which configures only the client SSL profile. In this mode, the client to VIP traffic is
encrypted (HTTPS), and the load balancer decrypts it. The VIP to pool member traffic is clear (HTTP).

n SSL End-to-End, which configures both the client SSL and server SSL profiles. In this mode, the
client to VIP traffic is encrypted (HTTPS), and the load balancer decrypts it and then re-encrypts
it. The VIP to pool member traffic is encrypted (HTTPS).

The Transport phase is complete when the virtual server receives the client SSL hello message. This
occurs before SSL is terminated, and before any HTTP traffic is processed.

The Transport phase allows administrators to select the SSL mode, and a specific server pool,
based on the client SSL hello message. There are three options for the virtual server SSL mode:

n SSL Offload

n End-to-End

n SSL-Passthrough (the load balancer does not terminate SSL)


Load Balancer rules support REGEX for match types. PCRE style REGEX patterns are supported
with a few limitations on advanced use cases. When REGEX is used in match conditions, named
capturing groups are supported. See Regular Expressions in Load Balancer Rules.

Prerequisites

Verify that a Layer 7 HTTP virtual server is available. See Add Layer 7 HTTP Virtual Servers.

Procedure

1 Open the Layer 7 HTTP virtual server.

2 In the Load Balancer Rules section, next to Transport Phase, click Set > Add Rule to configure
the load balancer rules for the Transport Phase.

3 SSL SNI is the only match condition supported. Match conditions are used to match application
traffic passing through load balancers.

4 From the drop-down list, select a Match Type: starts with, ends with, equals, contains,
matches regex.

5 Enter a SNI Name.

6 Toggle the Case Sensitive button to set a case-sensitive flag for HTTP header value
comparison.

7 Toggle the Negate button to enable it.

8 From the drop-down list, select a Match Strategy:

Match Strategy Description

Any Either host or path may match for this rule to be considered a match.

All Both host and path must match for this rule to be considered a match.

9 From the drop-down menu, select the SSL Mode Selection.

SSL Mode Description

SSL Passthrough SSL Passthrough passes HTTP traffic to a backend server without decrypting
the traffic on the load balancer. The data is kept encrypted as it travels
through the load balancer.
If SSL Passthrough is selected, a server pool can be selected. See Add a
Server Pool for Load Balancing in Manager Mode.

SSL Offloading SSL Offloading decrypts all HTTP traffic on the load balancer. SSL offloading
allows data to be inspected as it passes between the load balancer and
server. If NTLM and multiplexing are not configured, the load balancer
establishes a new connection to the selected backend server for each HTTP
request.

SSL End-to-End After receiving the HTTP request, the load balancer connects to the selected
backend server and talks with it using HTTPS. If NTLM and multiplexing
are not configured, the load balancer establishes a new connection to the
selected backend server for each HTTP request.


10 Click SAVE and APPLY.


Configure HTTP Access Load Balancer Rules
A JSON web token (JWT) is a standardized, optionally validated and/or encrypted format that is
used to securely transfer information between two parties.

In the HTTP Access phase, users can define an action that validates the JWT from clients and
either passes the JWT to, or removes it before forwarding to, the backend servers.

Load Balancer rules support REGEX for match types. PCRE style REGEX patterns are supported
with a few limitations on advanced use cases. When REGEX is used in match conditions, named
capturing groups are supported. See Regular Expressions in Load Balancer Rules.

Prerequisites

Verify that a Layer 7 HTTP virtual server is available. See Add Layer 7 HTTP Virtual Servers.

Procedure

1 Open the Layer 7 HTTP virtual server.

2 In the Load Balancer Rules section, next to HTTP Access Phase, click Set > Add Rule to
configure the load balancer rules for the HTTP Access phase.

3 From the drop-down menu, select a match condition. Match conditions are used to match
application traffic passing through load balancers. Multiple match conditions can be specified
in one load balancer rule. Each match condition defines a criterion for application traffic.

Supported Match Condition Description

HTTP Request Method Match an HTTP request method.


http_request.method - value to match

HTTP Request URI Match an HTTP request URI without query arguments.
http_request.uri - value to match

HTTP Request URI Arguments Match an HTTP request URI query argument.
http_request.uri_arguments - value to match

HTTP Request Version Match an HTTP request version.


http_request.version - value to match

HTTP Request Header Match any HTTP request header.


http_request.header_name - header name to match
http_request.header_value - value to match

HTTP Request Cookie Match any HTTP request cookie.


http_request.cookie_value - value to match

HTTP Request Body Match an HTTP request body content.


http_request.body_value - value to match

TCP Header Port Match a TCP source or the destination port.


tcp_header.source_port - source port to match
tcp_header.destination_port - destination port to match


IP Header Source Matches IP header fields of HTTP messages. The source type must be
either a single IP address, a range of IP addresses, or a group. See Add a
Group.
n If IP Header Source is selected, with an IP Address source type, the
source IP address of HTTP messages should match IP addresses which
are configured in groups. Both IPv4 and IPv6 addresses are supported.
n If IP Header Source is selected with a Group source type, select the group
from the drop-down menu.
ip_header.source_address - source address to match
ip_header.destination_address - destination address to match

Variable Create a variable and assign a value to the variable.

Client SSL Match client SSL profile ID.


ssl_profile_id - value to match

Case Sensitive Set a case-sensitive flag for HTTP header value comparison. If true, case is
significant when comparing HTTP body value.

4 From the drop-down list, select a Match Type: starts with, ends with, equals, contains,
matches regex.

5 If needed, enter the URI.

6 From the drop-down list, select a Match Strategy:

Match Strategy Description

Any Either host or path may match for this rule to be considered a match.

All Both host and path must match for this rule to be considered a match.


7 From the drop-down menu select an Action:

Action Description

JWT Authentication JSON Web Token (JWT) is an open standard that defines a compact and
self-contained way for securely transmitting information between parties as
a JSON object. This information can be verified and trusted because it is
digitally signed.
n Realm - A description of the protected area. If no realm is specified,
clients often display a formatted hostname. The configured realm is
returned when a client request is rejected with 401 http status. The
response is: "WWW-Authentication: Bearer realm=<realm>".
n Tokens - This parameter is optional. Load balancer searches for every
specified token one-by-one for the JWT message until found. If not
found, or if this text box is not configured, load balancer searches the
Bearer header by default in the http request "Authorization: Bearer
<token>"
n Key Type - Symmetric key or asymmetric public key (certificate-id)
n Preserve JWT - This is a flag to preserve JWT and pass it to backend
server. If disabled, the JWT key to the backend server is removed.

Connection Drop If negate is enabled, when Connection Drop is configured, all requests not
matching the specified match condition are dropped. Requests matching the
specified match condition are allowed.

Variable Assignment Enables users to assign a value to a variable in HTTP Access Phase, in such
a way that the result can be used as a condition in other load balancer rule
phases.

8 Click Save and Apply.


Configure Request Rewrite Load Balancer Rules
An HTTP request rewrite is applied to the HTTP request coming from the client.

Prerequisites

Verify that a Layer 7 HTTP virtual server is available. See Add Layer 7 HTTP Virtual Servers.

Load Balancer rules support REGEX for match types. PCRE style REGEX patterns are supported
with a few limitations on advanced use cases. When REGEX is used in match conditions, named
capturing groups are supported. See Regular Expressions in Load Balancer Rules.

Procedure

1 Open the Layer 7 HTTP virtual server.

2 In the Load Balancer Rules section, next to Request Rewrite Phase, click Set > Add Rule to
configure the load balancer rules for the HTTP Request Rewrite phase.


3 From the drop-down list, select a match condition. Match conditions are used to match
application traffic passing through load balancers. Multiple match conditions can be specified
in one load balancer rule. Each match condition defines a criterion for application traffic.

Supported Match Condition Description

HTTP Request Method Match an HTTP request method.


http_request.method - value to match

HTTP Request URI Match an HTTP request URI without query arguments.
http_request.uri - value to match

HTTP Request URI Arguments Used to match URI arguments, also known as the query string, of HTTP
request messages. For example, in the URI http://example.com?foo=1&bar=2,
"foo=1&bar=2" is the query string containing the URI arguments. In a URI
scheme, the query string is indicated by the first question mark ("?")
character and terminated by a number sign ("#") character or by the end of
the URI.
http_request.uri_arguments - value to match

HTTP Request Version Used to match the HTTP protocol version of the HTTP request messages
http_request.version - value to match

HTTP Request Header Used to match HTTP request messages by HTTP header fields. HTTP header
fields are components of the header section of HTTP request and response
messages. They define the operating parameters of an HTTP transaction.
http_request.header_name - header name to match
http_request.header_value - value to match

HTTP Request Cookie Used to match HTTP request messages by cookie which is a specific type of
HTTP header. The match_type and case_sensitive define how to compare
cookie value.
http_request.cookie_value - value to match

HTTP Request Body Match an HTTP request body content.


http_request.body_value - value to match

Client SSL Match client SSL profile ID.


ssl_profile_id - value to match

TCP Header Port Match a TCP source or the destination port.


tcp_header.source_port - source port to match
tcp_header.destination_port - destination port to match

IP Header Source Matches IP header fields of HTTP messages. The source type must be
either a single IP address, a range of IP addresses, or a group. See Add a
Group.
n If IP Header Source is selected with an IP Address source type, the
source IP address of HTTP messages should match IP addresses which
are configured in groups. Both IPv4 and IPv6 addresses are supported.
n If IP Header Source is selected with a Group source type, select the group
from the drop-down list.
ip_header.source_address - source address to match
ip_header.destination_address - destination address to match


Variable Create a variable and assign a value to the variable.

Case Sensitive Set a case-sensitive flag for HTTP header value comparison. If true, case is
significant when comparing HTTP body value.

4 From the drop-down menu, select a Match Type: starts with, ends with, equals, contains, or
matches regex. Match type is used to match a condition with a specified action.

Match Type Description

Starts With If the match condition starts with the specified value, the condition matches.

Ends With If the match condition ends with the specified value, the condition matches.

Equals If the match condition is the same as the specified value, the condition
matches.

Contains If the match condition contains the specified value, the condition matches.

Matches Regex If the match condition matches the specified regular expression, the condition
matches.

5 Specify the URI.

6 From the drop-down menu, select a Match Strategy:

Match Strategy Description

Any Indicates that either host or path can match for this rule to be considered a
match.

All Indicates that both host and path must match for this rule to be considered a
match.

7 Select an Action from the drop-down menu:

Actions Description

HTTP Request URI Rewrite This action is used to rewrite URIs in matched HTTP request messages.
Specify the URI and URI Arguments in this condition to rewrite the
matched HTTP request message's URI and URI arguments to the new
values. The full URI of HTTP messages has the following syntax: scheme:
[//[user[:password]@]host[:port]][/path][?query][#fragment]. The URI field of
this action is used to rewrite the /path part in the above syntax. The URI
Arguments field is used to rewrite the query part. Captured variables and
built-in variables can be used in the URI and URI Arguments fields.
a Enter the URI of the HTTP request.
b Enter the query string of the URI, which typically contains key value pairs,
for example: foo1=bar1&foo2=bar2.

HTTP Request Header Rewrite This action is used to rewrite header fields of matched HTTP request
messages to specified new values.
a Enter the name of a header field of the HTTP request message.
b Enter the header value.


HTTP Request Header Delete This action is used to delete header fields of HTTP request messages at the
HTTP_REQUEST_REWRITE phase. One action can be used to delete all
headers with the same header name. To delete headers with different header
names, multiple actions must be defined.
n Enter the name of a header field of the HTTP request message.

Variable Assignment Create a variable and assign it a name and value.

8 Toggle the Case Sensitive button to set a case-sensitive flag for HTTP header value
comparison.

9 Toggle the Negate button to enable it.

10 Click Save and Apply.


Configure Request Forwarding Load Balancer Rules
Request forwarding redirects a URL or host to a specific server pool.

Prerequisites

Verify that a Layer 7 HTTP virtual server is available. See Add Layer 7 HTTP Virtual Servers.

Load Balancer rules support REGEX for match types. PCRE style REGEX patterns are supported
with a few limitations on advanced use cases. When REGEX is used in match conditions, named
capturing groups are supported. See Regular Expressions in Load Balancer Rules.

Procedure

1 Open the Layer 7 HTTP virtual server.

2 Click Request Forwarding > Add Rule to configure the load balancer rules for the HTTP
Request Forwarding.

3 From the drop-down list, select a match condition. Match conditions are used to match
application traffic passing through load balancers. Multiple match conditions can be specified
in one load balancer rule. Each match condition defines a criterion for application traffic.

Supported Match Condition Description

HTTP Request Method Match an HTTP request method.


http_request.method - value to match

HTTP Request URI Match an HTTP request URI without query arguments.
http_request.uri - value to match

HTTP Request URI Arguments Used to match URI arguments, also known as the query string, of HTTP
request messages. For example, in the URI http://example.com?foo=1&bar=2,
"foo=1&bar=2" is the query string containing the URI arguments. In a URI
scheme, the query string is indicated by the first question mark ("?")
character and terminated by a number sign ("#") character or by the end of
the URI.
http_request.uri_arguments - value to match

HTTP Request Version Used to match the HTTP protocol version of the HTTP request messages
http_request.version - value to match


HTTP Request Header Used to match HTTP request messages by HTTP header fields. HTTP header
fields are components of the header section of HTTP request and response
messages. They define the operating parameters of an HTTP transaction.
http_request.header_name - header name to match
http_request.header_value - value to match

HTTP Request Cookie Used to match HTTP request messages by cookie which is a specific type of
HTTP header. The match_type and case_sensitive define how to compare
cookie value.
http_request.cookie_value - value to match

HTTP Request Body Match an HTTP request body content.


http_request.body_value - value to match

Client SSL Match client SSL profile ID.


ssl_profile_id - value to match

TCP Header Port Match a TCP source or the destination port.


tcp_header.source_port - source port to match
tcp_header.destination_port - destination port to match

IP Header Source Matches IP header fields of HTTP messages. The source type must be
either a single IP address, a range of IP addresses, or a group. See Add a
Group.
n If IP Header Source is selected with an IP Address source type, the
source IP address of HTTP messages should match IP addresses which
are configured in groups. Both IPv4 and IPv6 addresses are supported.
n If IP Header Source is selected with a Group source type, select the group
from the drop-down list.
ip_header.source_address - source address to match
ip_header.destination_address - destination address to match

Variable Create a variable and assign a value to the variable.

Case Sensitive Set a case-sensitive flag for HTTP header value comparison. If true, case is
significant when comparing HTTP body value.

4 Select an action:

Action Description

HTTP Reject Used to reject HTTP request messages. The specified reply_status value is
used as the status code for the corresponding HTTP response message. The
response message is sent back to client (usually a browser) indicating the
reason it was rejected.
http_forward.reply_status - HTTP status code used to reject
http_forward.reply_message - HTTP rejection message

HTTP Redirect Used to redirect HTTP request messages to a new URL. The HTTP status
code for redirection is 3xx, for example, 301, 302, 303, 307, etc. The
redirect_url is the new URL that the HTTP request message is redirected
to.
http_forward.redirect_status - HTTP status code for redirect
http_forward.redirect_url - HTTP redirect URL


Select Pool Force the request to a specific server pool. The specified pool's configured
algorithm (predictor) is used to select a server within the server
pool. The matched HTTP request messages are forwarded to the specified
pool.
When HTTP keep-alive is enabled and forwarding rules are configured in the
load balancer, the server keep-alive setting takes precedence. As a result,
HTTP requests are sent to servers already connected with keep-alive.
If you always want to give priority to the forwarding rules when the load
balancer rule conditions are met, disable the keep-alive setting.
Note that the persistence setting takes precedence over the keep-alive
setting.
Processing is done in the order of Persistence > Keep-Alive > Load Balancer
Rules
http_forward.select_pool - server pool UUID

Variable Persistence On Select a generic persistence profile and enter a variable name.
You can also enable Hash Variable. If the variable value is long, hashing the
variable ensures that it is correctly stored in the persistence table. If the Hash
Variable is not enabled, only the fixed prefix part of the variable value is
stored in the persistence table if the variable value is long. As a result, two
different requests with long variable values might be dispatched to the same
backend server because their variable values have the same prefix part, when
they should be dispatched to different backend servers.

Connection Drop If negate is enabled in condition, when Connection Drop is configured, all
requests not matching the condition are dropped. Requests matching the
condition are allowed.

Reply Status Shows the status of the reply.

Reply Message Server responds with a reply message that contains confirmed addresses and
configuration.

5 Click Save and Apply.


Configure Response Rewrite Load Balancer Rules
An HTTP response rewrite is applied to the HTTP response going out from the servers to the
client.

Prerequisites

Verify that a Layer 7 HTTP virtual server is available. See Add Layer 7 HTTP Virtual Servers.

Load Balancer rules support REGEX for match types. PCRE style REGEX patterns are supported
with a few limitations on advanced use cases. When REGEX is used in match conditions, named
capturing groups are supported. See Regular Expressions in Load Balancer Rules.

Procedure

1 Open the Layer 7 HTTP virtual server.


2 Click Response Rewrite > Add Rule to configure the load balancer rules for the HTTP
Response Rewrite.

All match values accept regular expressions.

Supported Match Condition Description

HTTP Response Header This condition is used to match HTTP response messages from backend
servers by HTTP header fields.
http_response.header_name - header name to match
http_response.header_value - value to match

HTTP Response Method Match an HTTP response method.


http_response.method - value to match

HTTP Response URI Match an HTTP response URI.


http_response.uri - value to match

HTTP Response URI Arguments Match an HTTP response URI arguments.


http_response.uri_args - value to match

HTTP Response Version Match an HTTP response version.


http_response.version - value to match

HTTP Response Cookie Match any HTTP response cookie.


http_response.cookie_value - value to match

Client SSL Match client SSL profile ID.


ssl_profile_id - value to match

TCP Header Port Match a TCP source or the destination port.


tcp_header.source_port - source port to match
tcp_header.destination_port - destination port to match

IP Header Source Matches IP header fields of HTTP messages. The source type must be
either a single IP address, or a range of IP addresses, or a group. See Add a
Group.
The source IP address of HTTP messages should match IP addresses which
are configured in groups. Both IPv4 and IPv6 addresses are supported.
ip_header.source_address - source address to match
ip_header.destination_address - destination address to match

Variable Create a variable and assign a value to the variable.

Case Sensitive Set a case-sensitive flag for HTTP header value comparison.


3 Select an action:

Action Description

HTTP Response Header Rewrite This action is used to rewrite header fields of HTTP response messages to
specified new values.
http_response.header_name - header name
http_response.header_value - value to write

HTTP Response Header Delete This action is used to delete header fields of HTTP response messages.
http_response.header_name - name of the header to delete

Variable Persistence Learn Select a generic persistence profile and enter a variable name.
You can also enable Hash Variable. If the variable value is long, hashing the
variable ensures that it will be correctly stored in the persistence table. If
Hash Variable is not enabled, only the fixed prefix part of the variable value
is stored in the persistence table if the variable value is long. As a result,
two different requests with long variable values might be dispatched to the
same backend server (because their variable values have the same prefix
part) when they should be dispatched to different backend servers.

4 Click Save and Apply.


Regular Expressions in Load Balancer Rules
Regular expressions (REGEX) are used in match conditions for load balancer rules.

Perl Compatible Regular Expressions (PCRE) style REGEX patterns are supported with a few
limitations on advanced use cases. When REGEX is used in match conditions, named capturing
groups are supported.

REGEX restrictions include:

n Character unions and intersections are not supported. For example, do not use [a-z[0-9]] and
[a-z&&[aeiou]]; instead use [a-z0-9] and [aeiou], respectively.

n Only 9 back references are supported, and \1 through \9 can be used to refer to them.

n Use the \0dd format to match octal characters, not the \ddd format.

n Embedded flags are not supported at the top level; they are only supported within groups. For
example, do not use "Case (?i:s)ensitive"; instead use "Case ((?i:s)ensitive)".

n Preprocessing operations \l, \u, \L, and \U are not supported, where \l lowercases the next
character, \u uppercases the next character, \L lowercases until \E, and \U uppercases until \E.

n (?(condition)X), (?{code}), (??{Code}) and (?#comment) are not supported.

n The predefined Unicode character class \X is not supported.

n Using a named character construct for Unicode characters is not supported. For example, do
not use \N{name}; instead use the escape form, such as \u2018.

When REGEX is used in match conditions, named capturing groups are supported. For
example, REGEX match pattern /news/(?<year>\d+)-(?<month>\d+)-(?<day>\d+)/(?<article>.*)
can be used to match a URI like /news/2018-06-15/news1234.html.


The variables are then set as follows: $year = "2018", $month = "06", $day = "15", $article =
"news1234.html". After the variables are set, these variables can be used in load balancer
rule actions. For example, the URI can be rewritten using the matched variables, such as /news.py?
year=$year&month=$month&day=$day&article=$article. The URI then gets rewritten as /
news.py?year=2018&month=06&day=15&article=news1234.html.

Rewrite actions can use a combination of named capturing groups and built-in variables. For
example, the URI can be written as /
news.py?year=$year&month=$month&day=$day&article=$article&user_ip=$_remote_addr.
Then the example URI gets rewritten as /news.py?
year=2018&month=06&day=15&article=news1234.html&user_ip=1.1.1.1.
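
To illustrate the named-capture behavior described above, the following Python sketch approximates the match-and-rewrite flow offline. Python uses (?P<name>...) for named groups, whereas the PCRE patterns in load balancer rules use (?<name>...); the rest of the example mirrors the values shown in this section.

import re

# Python equivalent of the PCRE pattern /news/(?<year>\d+)-(?<month>\d+)-(?<day>\d+)/(?<article>.*)
pattern = re.compile(r"/news/(?P<year>\d+)-(?P<month>\d+)-(?P<day>\d+)/(?P<article>.*)")

uri = "/news/2018-06-15/news1234.html"
match = pattern.match(uri)
assert match is not None

# Captured variables, as they would be set for use in rewrite actions.
variables = match.groupdict()  # {'year': '2018', 'month': '06', 'day': '15', 'article': 'news1234.html'}

# Rewrite the URI the way the example rule action does.
rewritten = "/news.py?year={year}&month={month}&day={day}&article={article}".format(**variables)
print(rewritten)  # /news.py?year=2018&month=06&day=15&article=news1234.html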

Note For named capturing groups, the name cannot start with an _ character.

In addition to named capturing groups, the following built-in variables can be used in rewrite
actions. All the built-in variable names start with _.

n $_args - arguments from the request

n $_arg_<name> - argument <name> in the request line

n $_cookie_<name> - value of <name> cookie

n $_upstream_cookie_<name> - cookie with the specified name sent by the upstream
server in the "Set-Cookie" response header field

n $_upstream_http_<name> - arbitrary response header field and <name> is the field name
converted to lower case with dashes replaced by underscores

n $_host - in the order of precedence - host name from the request line, or host name from
the "Host" request header field, or the server name matching a request

n $_http_<name> - arbitrary request header field and <name> is the field name converted
to lower case with dashes replaced by underscores

n $_https - "on" if connection operates in SSL mode, or "" otherwise

n $_is_args - "?" if a request line has arguments, or "" otherwise

n $_query_string - same as $_args

n $_remote_addr - client address

n $_remote_port - client port

n $_request_uri - full original request URI (with arguments)

n $_scheme - request scheme, "http" or "https"

n $_server_addr - address of the server which accepted a request

n $_server_name - name of the server which accepted a request

n $_server_port - port of the server which accepted a request

n $_server_protocol - request protocol, usually "HTTP/1.0" or "HTTP/1.1"


n $_ssl_client_escaped_cert - returns the client certificate in the PEM format for an
established SSL connection.

n $_ssl_server_name - returns the server name requested through SNI

n $_uri - URI path in request

n $_ssl_ciphers: returns the client SSL ciphers

n $_ssl_client_i_dn: returns the "issuer DN" string of the client certificate for an established
SSL connection according to RFC 2253

n $_ssl_client_s_dn: returns the "subject DN" string of the client certificate for an
established SSL connection according to RFC 2253

n $_ssl_protocol: returns the protocol of an established SSL connection

n $_ssl_session_reused: returns "r" if an SSL session was reused, or "." otherwise

Groups Created for Server Pools and Virtual Servers


NSX Manager automatically creates groups for load balancer server pools and VIP ports.

Load Balancer created groups are visible under Inventory > Groups.

Server pool groups are created with the name NLB.PoolLB.Pool_Name LB_Name with group
member IP addresses assigned:

n Pool configured with no LB-SNAT (transparent): 0.0.0.0/0

n Pool configured with LB-SNAT Automap: T1-Uplink IP 100.64.x.y and T1-ServiceInterface IP

n Pool configured with LB-SNAT IP-Pool: LB-SNAT IP-Pool

VIP groups are created with the name NLB.VIP.virtual server name, and the VIP group member IP
address is the VIP IP address.

For server pool groups, you can create a distributed firewall rule that allows traffic from the load
balancer group (NLB.PoolLB.Pool_Name LB_Name). For the Tier-1 gateway firewall, you can create a
rule that allows traffic from clients to the LB VIP group (NLB.VIP.virtual server name).



11 Distributed Load Balancer
A Distributed Load Balancer configured in NSX can help you effectively load balance East-West
traffic and scale traffic because it runs on each ESXi host.

Important Distributed Load Balancer is supported only for Kubernetes (K8s) cluster IPs managed
by vSphere with Kubernetes. Distributed Load Balancer is not supported for any other workload
types. As an administrator, you cannot use the NSX Manager GUI to create or modify Distributed Load
Balancer objects. These objects are pushed by VMware vCenter through the NSX API when K8s cluster
IPs are created in VMware vCenter.

Note Do not enable Distributed Intrusion Detection and Prevention Service (IDS/IPS) in an
environment that is using Distributed Load Balancer. NSX does not support using IDS/IPS with a
Distributed Load Balancer.

In traditional networks, a central load balancer deployed on an NSX Edge node is configured to
distribute traffic load managed by virtual servers that are configured on the load balancer.

If you are using a central balancer, increasing the number of virtual servers in the load balancer
pool might not always meet scale or performance criteria for a multi-tier distributed application.
A distributed load balancer is realized on each hypervisor where load balancing workloads, such
as clients and servers, are deployed, ensuring traffic is load balanced on each hypervisor in a
distributed way.

A distributed load balancer can be configured on the NSX network along with a central load
balancer.


In the diagram, an instance of the Distributed Load Balancer is attached to a VM group. As the
VMs are downlinks to the distributed logical router, Distributed Load Balancer only load balances
east-west traffic. In contrast, the central load balancer, manages north-south traffic.

To cater to the load balancing requirements of each component or module of an application, a
distributed load balancer can be attached to each tier of the application. For example, to serve
a user request, the frontend of the application needs to reach out to the middle module to get
data. However, the middle layer might not hold the final data to serve to the user, so
it needs to reach out to the backend layer to get additional data. For a complex application, many
modules might need to interact with each other to get information. Along with this complexity, when
the number of user requests increases exponentially, a distributed load balancer can efficiently meet
the user needs without taking a performance hit. Configuring a Distributed Load Balancer on
every host addresses scale and packet transmission efficiency.

This chapter includes the following topics:

n Understanding Traffic Flow with a Distributed Load Balancer

n Create and Attach a Distributed Load Balancer Instance

n Create a Server Pool for Distributed Load Balancer

n Create a Virtual Server with a Fast TCP or UDP Profile

n Verifying Distributed Load Balancer Configuration on ESXi Hosts

n Distributed Load Balancer Statistics and Diagnostics

n Distributed Load Balancer Operational Status

n Run Traceflow on Distributed Load Balancer

n Supported Features


Understanding Traffic Flow with a Distributed Load Balancer


Understand how traffic flows between VMs that are connected to an instance of a distributed load
balancer (DLB).

As an administrator, ensure the following:

n Virtual IP addresses and pool members connected to a DLB instance have unique IP
addresses so that traffic is routed correctly.

Traffic flow between Web VM1 and APP VM2.

1 When Web VM1 sends out a packet to App VM2, it is received by VIP-APP.

The DLB-APP is attached to the policy group consisting of Web tier VMs. Similarly, the DLB-DB
hosting VIP-DB must be attached to the policy group consisting of App tier VMs.

2 The VIP-APP hosted on DLB-APP receives the request from Web VM1.

3 Before reaching the destination VM group, the packet is filtered by distributed firewall rules.

4 After the packet is filtered based on the firewall rules, it is sent to the Tier-1 router.

5 It is further routed to the physical router.

6 The route is completed when the packet is delivered to the destination App VM2 group.

As DLB VIPs can only be accessed from VMs connected to downlinks of Tier-0 or Tier-1 logical
routers, DLB provides load balancing services to east-west traffic.

A DLB instance can co-exist with an instance of DFW. With DLB and DFW enabled on a virtual
interface of a hypervisor, the traffic is first load balanced based on the configuration in DLB and
then DFW rules are applied on traffic flowing from a VM to the hypervisor. DLB rules are applied
on traffic originating from downlinks of Tier-0 or Tier-1 logical routers going to the destination
hypervisor. DLB rules cannot be applied on traffic flowing in the reverse direction - originating
from outside the host going to a destination VM.


For example, if the DLB instance is load balancing traffic from Web-VMs to App-VMs, then to
allow such traffic to pass through DFW, ensure that a DFW rule with "Source=Web-VMs,
Destination=App-VMs, Action=Allow" is configured.
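
Expressed through the Policy API, such a rule might look like the following minimal sketch. The
policy ID <web-to-app-policy>, rule ID <allow-web-to-app>, and the group paths are placeholders
for objects in your own environment; substitute them with your existing security policy and groups,
and note that additional fields, such as a sequence number, might be needed depending on your setup.

PUT https://<NSXManager_IPAddress>/policy/api/v1/infra/domains/default/security-policies/<web-to-app-policy>/rules/<allow-web-to-app>

{
  "resource_type" : "Rule",
  "display_name" : "allow-web-to-app",
  "source_groups" : [ "/infra/domains/default/groups/Web-VMs" ],
  "destination_groups" : [ "/infra/domains/default/groups/App-VMs" ],
  "services" : [ "ANY" ],
  "scope" : [ "ANY" ],
  "action" : "ALLOW"
}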

Create and Attach a Distributed Load Balancer Instance


Unlike a central load balancer, a Distributed Load Balancer (DLB) instance is attached to virtual
interfaces of a VM group.

At the end of this procedure, a DLB instance is attached to the virtual interfaces of a VM group.
You can create and attach a DLB instance only through API commands.

Prerequisites

n Add a policy group consisting of VMs. For example, such a VM group can be related to the
App tier that receives requests from a VM on the Web-tier.

Procedure

u Run PUT /policy/api/v1/infra/lb-services/<mydlb> with a request body similar to the following.

  {
    "connectivity_path" : "/infra/domains/default/groups/<clientVMGroup>",
    "enabled" : true,
    "size" : "DLB",
    "error_log_level" : "Debug",
    "access_log_enabled" : false,
    "resource_type" : "LBService",
    "display_name" : "mydlb"
  }
Where,

n connectivity_path:

n If the connectivity path is set to Null or Empty, the DLB instance is not applied to any
transport nodes.

n If the connectivity path is set to ALL, all virtual interfaces on all transport nodes are bound
to the DLB instance. One DLB instance is applied to all the virtual interfaces of the
policy group.

n size: Set to DLB. Because each application or virtual interface gets its own DLB instance,
there is just a single size form factor for the DLB instance.

n enabled: By default, the created DLB instance is enabled. You cannot disable the DLB
instance.


n error_log_level: Supported levels are Debug, Error, and Info. By default, the log level is
set to Info. To get verbose logs, set the level to Debug.

A DLB instance is created and attached to the VM group. The DLB instance created on the
Web-tier is attached to all the virtual interfaces of the Web-tier VM group.

What to do next

After creating a DLB instance, log in to NSX Manager and go to Networking > Load Balancing >
Load Balancers to view details of the DLB instance.
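
Alternatively, as a quick API check, you can read the service back. This sketch assumes the
service ID mydlb used in the example above; the response echoes the fields that you configured,
such as size and connectivity_path.

GET https://<NSXManager_IPAddress>/policy/api/v1/infra/lb-services/mydlb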

Next, Create a Server Pool for Distributed Load Balancer.

Create a Server Pool for Distributed Load Balancer


Create a load balancer pool to include virtual machines that consume DLB services.

This task can be done from both the NSX UI and the NSX API.

The API command to create a DLB pool is PUT https://<NSXManager_IPaddress>/policy/api/v1/infra/lb-pools/<lb-pool-id>.
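
If you use the API, the following request body is a minimal sketch. It assumes a pool named
app-pool with two example member IP addresses, the Round Robin algorithm, and SNAT disabled as
required for a Distributed Load Balancer; adjust the pool ID, names, addresses, and ports to
match your environment.

PUT https://<NSXManager_IPaddress>/policy/api/v1/infra/lb-pools/app-pool

{
  "resource_type" : "LBPool",
  "display_name" : "app-pool",
  "algorithm" : "ROUND_ROBIN",
  "members" : [
    { "ip_address" : "192.168.10.11", "port" : "80" },
    { "ip_address" : "192.168.10.12", "port" : "80" }
  ],
  "snat_translation" : { "type" : "LBSnatDisabled" }
}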

Prerequisites

n Create a VM group that consumes DLB service.

n Create and attach a DLB instance to a VM group.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Go to Networking > Load Balancing > Server Pools.

3 Click Add Server Pool.


4 Enter values in these fields.

Field Description

Name Enter a name for the DLB pool.

Algorithm Weighted Round Robin, Round Robin, Weighted Least Connection, Least
Connection and IP Hash are the supported algorithms. Since Distributed
Load Balancer runs locally on each ESXi server, these algorithms are local to
each ESXi server. There is no synchronization of load balancing connection
information between different ESXi servers of a cluster.
n Weighted Round Robin: Use this algorithm to send connections to pool
members based on the weights assigned to each pool member. For
example, if you assign pool member A with weight 3, pool member B
with weight 2 and pool member C with weight 1, then out of a total of 6
client connections, pool member A receives 3 connections, pool member
B receives 2 connections and pool member C receives 1 connection.
n Round Robin: Use this algorithm to send an equal number of connections to
each pool member.
n Least Connection: Use this algorithm to send new connections to the pool
member with the least number of active connections. Each pool member
starts in slow start mode (Slow Start is set to True). After it receives
connections, Slow Start is set to False for that pool member.
n Weighted Least Connection: Use this algorithm to send connections to
pool members based on the weights assigned to each pool member and the
number of active connections on each pool member.
n IP Hash: Use this algorithm to send connections based on a hash of the
source IP address.

Note Do not use IP Hash if you want to persist connections to the same
pool member even after the number of pool members changes.

Members/Group Click Select Members and on the Configure Server Pool Members window,
do one of the following:
n Select Enter individual members. To add a new member, click Add
Member and enter values in the mandatory fields.
n Select Select a group and click Add Group, or select an existing group.

To add a new group, enter values in these fields.


n Name
n Compute Members: Click Set Members to add a group that includes
all the pool members.
n IP Revision Filter: Both IPv4 and IPv6 are supported.
n Port: Default port for all the dynamic pool members.

SNAT Translation Mode Set this field to Disabled. SNAT translation is not supported in a
Distributed Load Balancer.

5 Click Save.

Results

Server pool members are added for the Distributed Load Balancer.


What to do next

See Create a Virtual Server with a Fast TCP or UDP Profile.

Create a Virtual Server with a Fast TCP or UDP Profile


Create a virtual server and bind it to a Distributed Load Balancer service.

This task can be performed from both the NSX UI and the NSX API.

The API command to create a virtual server is PUT https://<NSXManager_IPAddress>/policy/api/v1/infra/lb-virtual-servers/<lb-virtual-server-id>.
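
If you use the API, the following request body is a minimal sketch that ties the earlier examples
together. It assumes a virtual IP of 10.10.10.100 on port 80, the DLB service mydlb and the pool
app-pool created in the previous topics, and the system default Fast TCP application profile path;
substitute the virtual server ID, addresses, paths, and profile with the values from your deployment
(verify the exact profile path under /infra/lb-app-profiles).

PUT https://<NSXManager_IPAddress>/policy/api/v1/infra/lb-virtual-servers/vip-app

{
  "resource_type" : "LBVirtualServer",
  "display_name" : "vip-app",
  "ip_address" : "10.10.10.100",
  "ports" : [ "80" ],
  "application_profile_path" : "/infra/lb-app-profiles/default-tcp-lb-app-profile",
  "lb_service_path" : "/infra/lb-services/mydlb",
  "pool_path" : "/infra/lb-pools/app-pool"
}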

Prerequisites

n Create a server pool for the Distributed Load Balancer.

n To use IPv6 addresses as the virtual IP of the Distributed Load Balancer, on the Global
Networking Config page (Networking > Global Networking Config), ensure that L3 Forwarding
Mode is set to IPv4 and IPv6.

Procedure

1 With admin privileges, log in to NSX Manager.

2 Go to Networking > Load Balancing > Virtual Servers.

3 Click Add Virtual Server > L4 TCP.

4 To configure a virtual server for a Distributed Load Balancer, only the following fields are
supported.

Field Description

Name Enter a name for the virtual server.

IP Address Supports both IPv4 and IPv6 addresses.


Enter the IP address of the Distributed Load Balancer virtual server. All client
connections arrive at this IP address of the Distributed Load Balancer virtual
server.

Ports Virtual server port number.


Multiple ports or port ranges are not supported in the virtual server of a
Distributed Load Balancer.

Load Balancer Attach the Distributed Load Balancer instance that is associated to the virtual
server. The virtual server then knows which policy group the load balancer is
servicing.

Server Pool Select the server pool. The server pool contains the backend servers. A server
pool consists of one or more servers that are similarly configured and run
the same application. These servers are also referred to as pool members.

Note If the virtual IP address of the Distributed Load Balancer is an IPv4
address, the server pool members must also use IPv4 addresses. Likewise, if
the virtual IP address is IPv6, the pool members must use IPv6 addresses.


Application Profile Select the application profile for the virtual server.
The app