
NSX-T Data Center

Administration Guide
Modified on 17 SEP 2020
VMware NSX-T Data Center 3.0

You can find the most up-to-date technical documentation on the VMware website at:

https://docs.vmware.com/

VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com

© Copyright 2020 VMware, Inc. All rights reserved. Copyright and trademark information.

Contents

About Administering VMware NSX-T Data Center 13

1 NSX Manager 14
View Monitoring Dashboards 17

2 Tier-0 Gateways 20
Add a Tier-0 Gateway 21
Create an IP Prefix List 25
Create a Community List 26
Configure a Static Route 27
Create a Route Map 28
Using Regular Expressions to Match Community Lists When Adding Route Maps 30
Configure BGP 30
Configure BFD 34
Configure Multicast 34
Configure IPv6 Layer 3 Forwarding 35
Create SLAAC and DAD Profiles for IPv6 Address Assignment 36
Changing the HA Mode of a Tier-0 Gateway 37
Add a VRF Gateway 38
Configuring EVPN 39

3 Tier-1 Gateway 42
Add a Tier-1 Gateway 42

4 Segments 45
Segment Profiles 46
Understanding QoS Segment Profile 47
Understanding IP Discovery Segment Profile 49
Understanding SpoofGuard Segment Profile 51
Understanding Segment Security Segment Profile 52
Understanding MAC Discovery Segment Profile 54
Add a Segment 55
Types of DHCP on a Segment 58
Configure DHCP on a Segment 59
Configure DHCP Static Bindings on a Segment 66
Layer 2 Bridging 69
Create an Edge Bridge Profile 70
Configure Edge-Based Bridging 71


Create a Layer 2 Bridge-Backed Segment 74


Add a Metadata Proxy Server 74

5 Host Switches 76
Managing NSX-T on a vSphere Distributed Switch 76
Configuring a vSphere Distributed Switch 77
Managing NSX Distributed Virtual Port Groups 79
NSX-T Cluster Prepared with VDS 80
APIs to Configure vSphere Distributed Switch 81
Feature Support in a vSphere Distributed Switch Enabled to Support NSX-T Data Center 86
Enhanced Networking Stack 88
Automatically Assign ENS Logical Cores 89
Configure Guest Inter-VLAN Routing 90
Migrate Host Switch to vSphere Distributed Switch 91
NSX Virtual Distributed Switch 96

6 Virtual Private Network (VPN) 98


Understanding IPSec VPN 99
Using Policy-Based IPSec VPN 100
Using Route-Based IPSec VPN 101
Understanding Layer 2 VPN 102
Enable and Disable L2 VPN Path MTU Discovery 103
Adding VPN Services 104
Add an IPSec VPN Service 105
Add an L2 VPN Service 107
Adding IPSec VPN Sessions 109
Add a Policy-Based IPSec Session 109
Add a Route-Based IPSec Session 113
About Supported Compliance Suites 117
Understanding TCP MSS Clamping 118
Adding L2 VPN Sessions 119
Add an L2 VPN Server Session 119
Add an L2 VPN Client Session 121
Download the Remote Side L2 VPN Configuration File 123
Add Local Endpoints 124
Adding Profiles 125
Add IKE Profiles 126
Add IPSec Profiles 129
Add DPD Profiles 131
Add an Autonomous Edge as an L2 VPN Client 132
Check the Realized State of an IPSec VPN Session 135


Monitor and Troubleshoot VPN Sessions 138

7 Network Address Translation (NAT) 139


Configure NAT on a Gateway 139

8 Load Balancing 142


Key Load Balancer Concepts 143
Scaling Load Balancer Resources 143
Supported Load Balancer Features 144
Load Balancer Topologies 145
Setting Up Load Balancer Components 147
Add Load Balancers 147
Add an Active Monitor 149
Add a Passive Monitor 152
Add a Server Pool 153
Setting Up Virtual Server Components 157
Groups Created for Server Pools and Virtual Servers 188

9 Distributed Load Balancer 189


Understanding Traffic Flow with a Distributed Load Balancer 191
Create and Attach a Distributed Load Balancer Instance 192
Create a Server Pool for Distributed Load Balancer 193
Create a Virtual Server with a Fast TCP or UDP Profile 194
Verifying Distributed Load Balancer Configuration on ESXi Hosts 196
Monitoring Distributed Load Balancer Statistics 197

10 Forwarding Policies 199


Add or Edit Forwarding Policies 200

11 IP Address Management (IPAM) 202


Add a DNS Zone 202
Add a DNS Forwarder Service 203
Add a DHCP Profile 204
Add a DHCP Server Profile 204
Add a DHCP Relay Profile 207
Attach a DHCP Profile to a Tier-0 or Tier-1 Gateway 208
Scenarios: Selection of Edge Cluster for DHCP Service 209
Scenarios: Impact of Changing Segment Connectivity on DHCP 214
Add an IP Address Pool 217
Add an IP Address Block 218


12 Networking Settings 219


Configuring Multicast 219
Create an IGMP Profile 221
Create a PIM Profile 221
Add a VNI Pool 222
Configure Gateway Settings 222
Add a Gateway QoS Profile 223
Add a BFD Profile 224

13 Security 225
Security Configuration Overview 225
Security Overview 226
Security Terminology 227
Identity Firewall 228
Identity Firewall Workflow 229
Layer 7 Context Profile 231
Layer 7 Firewall Rule Workflow 232
Attributes 233
Distributed Firewall 237
Firewall Drafts 237
Add a Distributed Firewall 239
Firewall Packet Logs 243
Manage a Firewall Exclusion List 243
Filtering Specific Domains (FQDN/URLs) 244
Extending Security Policies to Physical Workloads 245
Shared Address Sets 252
Distributed IDS 252
Distributed IDS Settings and Signatures 253
Distributed IDS Profiles 255
Distributed IDS Rules 258
Distributed IDS Events 259
Verify Distributed IDS Status on Host 261
East-West Network Security - Chaining Third-party Services 263
Key Concepts of Network Protection East-West 263
NSX-T Data Center Requirements for East-West Traffic 264
High-Level Tasks for East-West Network Security 264
Deploy a Service for East-West Traffic Introspection 265
Add Redirection Rules for East-West Traffic 266
Uninstall an East-West Traffic Introspection Service 269
Gateway Firewall 269
Add a Gateway Firewall Policy and Rule 270


URL Analysis Workflow 273


North-South Network Security - Inserting Third-party Service 275
High-Level Tasks for North-South Network Security 275
Deploy a Service for North-South Traffic Introspection 275
Add Redirection Rules for North-South Traffic 277
Uninstall a North-South Traffic Introspection Service 279
Endpoint Protection 279
Understand Endpoint Protection 279
Configure Endpoint Protection 284
Manage Endpoint Protection 301
Security Profiles 312
Create a Session Timer 312
Flood Protection 314
Configure DNS Security 316
Manage Group to Profile Precedence 317
Time-Based Firewall Policy 317
Network Introspection Settings 318
Add a Service Segment 318
Add a Service Profile 319
Add a Service Chain 320
Bare Metal Server Security 321

14 Inventory 323
Add a Service 323
Add a Group 324
Add a Context Profile 326
Containers 328
Public Cloud Services 330
Physical Servers 330
Tags 330
Add Tags to an Object 334
Add a Tag to Multiple Objects 334
Unassign Tags from an Object 336
Unassign a Tag from Multiple Objects 336

15 Multisite and Federation 338


NSX-T Data Center Multisite 338
Working with VMware Site Recovery Manager 346
NSX-T Data Center Federation 347
Overview of Federation 347
Networking in Federation 356


Security in Federation 371


Backup and Restore in Federation 386

16 System Monitoring 388


Monitor NSX Edge Nodes 388
Working with Events and Alarms 390
About Events and Alarms 390
View Alarm Information 419
View Alarm Definitions 421
Configuring Alarm Definition Settings 422
Managing Alarm States 423
Using vRealize Log Insight for System Monitoring 424
Using vRealize Operations Manager for System Monitoring 425
Using vRealize Network Insight Cloud for System Monitoring 429

17 Network Monitoring 440


Add an IPFIX Collector 440
Add a Firewall IPFIX Profile 441
Add a Switch IPFIX Profile 441
IPFIX Monitoring on a vSphere Distributed Switch 443
Add a Port Mirroring Profile 443
Port Mirroring on a vSphere Distributed Switch 444
Perform a Traceflow 445
Simple Network Management Protocol (SNMP) 448
Monitor Fabric Nodes 449
Network Latency Statistics 449
Measure Network Latency Statistics 453
Export Network Latency Statistics 454
Monitoring Tools in Manager Mode 456
View Port Connection Information in Manager Mode 456
Traceflow 457
Monitor Port Mirroring Sessions in Manager Mode 460
Configure Filters for a Port Mirroring Session 463
Configure IPFIX in Manager Mode 464
Monitor a Logical Switch Port Activity in Manager Mode 634

18 Authentication and Authorization 636


Local User Accounts 637
Manage a User's Password 637
Resetting the Passwords of an Appliance 638
Authentication Policy Settings 639


Integration with VMware Identity Manager 640


Time Synchronization between NSX Manager, vIDM, and Related Components 640
Obtain the Certificate Thumbprint from a vIDM Host 641
Configure VMware Identity Manager Integration 642
Validate VMware Identity Manager Functionality 644
Integration with LDAP 646
LDAP Identity Source 646
Add a Role Assignment or Principal Identity 648
Configuring Both vIDM and LDAP or Transitioning from vIDM to LDAP 650
Role-Based Access Control 650

19 Certificates 661
Types of Certificates 661
Certificates for Federation 663
Create a Certificate Signing Request File 665
Creating Self-signed Certificates 666
Create a Self-Signed Certificate 666
Import a Certificate for a CSR 667
Importing and Replacing Certificates 667
Import a Self-signed or CA-signed Certificate 668
Import a CA Certificate 668
Replace Certificates 669
Importing and Retrieving CRLs 670
Import a Certificate Revocation List 671
Configuring NSX Manager to Retrieve a Certificate Revocation List 672
Storage of Public Certificates and Private Keys for Load Balancer or VPN service 672

20 Configuring NSX-T Data Center in Manager Mode 673


Logical Switches in Manager Mode 673
Understanding BUM Frame Replication Modes 674
Create a Logical Switch in Manager Mode 676
Connecting a VM to a Logical Switch in Manager Mode 677
Create a Logical Switch Port In Manager Mode 686
Test Layer 2 Connectivity in Manager Mode 687
Create a VLAN Logical Switch for the NSX Edge Uplink in Manager Mode 690
Switching Profiles for Logical Switches and Logical Ports 692
Layer 2 Bridging in Manager Mode 709
Logical Routers in Manager Mode 715
Tier-1 Logical Router 716
Tier-0 Logical Router 726
NAT in Manager Mode 758


Network Address Translation 758


Grouping Objects in Manager Mode 771
Create an IP Set in Manager Mode 771
Create an IP Pool in Manager Mode 772
Create a MAC Set in Manager Mode 772
Create an NSGroup in Manager Mode 773
Configuring Services and Service Groups 775
Manage Tags for a VM in Manager Mode 776
DHCP in Manager Mode 777
DHCP 777
Metadata Proxies 782
IP Address Management in Manager Mode 784
Manage IP Blocks in Manager Mode 784
Manage Subnets for IP Blocks in Manager Mode 785
Load Balancing in Manager Mode 785
Key Load Balancer Concepts 786
Configuring Load Balancer Components 787
Firewall in Manager Mode 818
Add or Delete a Firewall Rule to a Logical Router in Manager Mode 818
Configure Firewall for a Logical Switch Bridge Port in Manager Mode 819
Firewall Sections and Firewall Rules 820
About Firewall Rules 827
Implement a Bump-in-the-Wire Firewall in Manager Mode 834

21 Backing Up and Restoring NSX Manager 835


Configure Backups 836
Remove Old Backups 838
Restore a Backup 839
Listing Available Backups 842
Certificate Management after Restore 842

22 Operations and Management 844


View the Usage and Capacity of Categories of Objects 845
Configure User Interface Settings 847
Configure a Node Profile 847
Checking the Realized State of a Configuration Change 849
View Network Topology 853
Search for Objects 853
Filter by Object Attributes 854
Add a Compute Manager 855
Add an Active Directory 858


Add an LDAP Server 859


Synchronize Active Directory 860
Remove NSX-T Data Center Extension from vCenter Server 861
Managing the NSX Manager Cluster 862
View the Configuration and Status of the NSX Manager Cluster 862
Update API Service Configuration of the NSX Manager Cluster 865
Shut Down and Power On the NSX Manager Cluster 866
Reboot an NSX Manager 866
Change the IP Address of an NSX Manager 866
Resize an NSX Manager Node 868
Replacing an NSX Edge Transport Node in an NSX Edge Cluster 869
Replace an NSX Edge Transport Node Using the NSX Manager UI 869
Replace an NSX Edge Transport Node Using the API 870
Managing Resource Reservations for an Edge VM Appliance 872
Tune Resource Reservations for an NSX Edge Appliance 873
Adding and Removing an ESXi Host Transport Node to and from vCenter Servers 874
Changing the Distributed Router Interfaces' MAC Address 875
Configuring Appliances 876
Add a License Key and Generate a License Usage Report 877
Compliance-Based Configuration 880
View Compliance Status Report 881
Compliance Status Report Codes 881
Configure Global FIPS Compliance Mode for Load Balancer 884
Collect Support Bundles 887
Log Messages and Error Codes 888
Configure Remote Logging 891
Log Message IDs 898
Troubleshooting Syslog Issues 900
Configure Serial Logging on an Appliance VM 900
Firewall Audit Log Messages 901
Customer Experience Improvement Program 916
Edit the Customer Experience Improvement Program Configuration 916
Find the SSH Fingerprint of a Remote Server 917
Configuring an External Load Balancer 918
Configure Proxy Settings 918
View Container-Related Information 919

23 Using NSX Cloud 920


The Cloud Service Manager 920
Clouds 920
System 927


Threat Detection using the NSX Cloud Quarantine Policy 932


Quarantine Policy in the NSX Enforced Mode 933
Quarantine Policy in the Native Cloud Enforced Mode 938
Whitelisting VMs 938
NSX Enforced Mode 939
Currently Supported Operating Systems for Workload VMs 940
Onboarding VMs in the NSX Enforced Mode 940
Managing VMs in the NSX Enforced Mode 949
Native Cloud Enforced Mode 950
Managing VMs in the Native Cloud Enforced Mode 950
NSX-T Data Center Features Supported with NSX Cloud 954
Group VMs using NSX-T Data Center and Public Cloud Tags 955
Use Native-Cloud Services 958
Service Insertion for your Workload VMs in the NSX Enforced Mode 959
Enable NAT on NSX-managed VMs 968
Enable Syslog Forwarding 969
Set up VPN in the Native Cloud Enforced Mode 969
Set up VPN in the NSX Enforced Mode 978
Frequently Asked Questions (FAQs) 984

About Administering VMware NSX-T Data Center

The NSX-T Data Center Administration Guide provides information about configuring and
managing networking for VMware NSX-T™ Data Center, including how to create logical switches
and ports and how to set up networking for tiered logical routers, configure NAT, firewalls,
SpoofGuard, grouping and DHCP. It also describes how to configure NSX Cloud.

Intended Audience
This information is intended for anyone who wants to configure NSX-T Data Center. The
information is written for experienced Windows or Linux system administrators who are familiar
with virtual machine technology, networking, and security operations.

VMware Technical Publications Glossary


VMware Technical Publications provides a glossary of terms that might be unfamiliar to you. For
definitions of terms as they are used in VMware technical documentation, go to https://
www.vmware.com/topics/glossary.

Related Documentation
You can find the VMware NSX® Intelligence™ documentation at https://docs.vmware.com/en/
VMware-NSX-Intelligence/index.html. The NSX Intelligence 1.0 content was initially included and
released with the NSX-T Data Center 2.5 documentation set.

1 NSX Manager
The NSX Manager provides a web-based user interface where you can manage your NSX-T
environment. It also hosts the API server that processes API calls.

The NSX Manager interface provides two modes for configuring resources:

n Policy mode

n Manager mode

Accessing Policy Mode and Manager Mode


If present, you can use the Policy and Manager buttons to switch between the Policy and
Manager modes. Switching modes controls which menu items are available to you.

n By default, if your environment contains only objects created through Policy mode, your user
interface is in Policy mode and you do not see the Policy and Manager buttons.

n By default, if your environment contains any objects created through Manager mode, you see
the Policy and Manager buttons in the top-right corner.

These defaults can be changed by modifying the user interface settings. See Configure User
Interface Settings for more information.

The same System tab is used in the Policy and Manager interfaces. If you modify Edge nodes,
Edge clusters, or transport zones, it can take up to 5 minutes for those changes to be visible in
Policy mode. You can synchronize immediately using POST /policy/api/v1/infra/sites/default/
enforcement-points/default?action=reload.
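
For example, a minimal sketch of calling this API with Python and the requests library (the NSX Manager address and credentials below are placeholders, and certificate handling will depend on your environment):

    import requests

    NSX_MANAGER = "https://nsx-manager.example.com"   # placeholder NSX Manager address
    AUTH = ("admin", "<password>")                    # placeholder credentials

    # Trigger an immediate synchronization instead of waiting up to 5 minutes.
    resp = requests.post(
        f"{NSX_MANAGER}/policy/api/v1/infra/sites/default/enforcement-points/default",
        params={"action": "reload"},
        auth=AUTH,
        verify=False,   # only for lab environments with self-signed certificates
    )
    resp.raise_for_status()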


When to Use Policy Mode or Manager Mode


Be consistent about which mode you use. There are a few reasons to use one mode over the
other.

n If you are deploying a new NSX-T Data Center environment, using Policy mode to create and
manage your environment is the best choice in most situations.

n Some features are not available in Policy mode. If you need these features, use Manager
mode for all configurations.

n If you plan to use Federation, use Policy mode to create all objects. Global Manager supports
only Policy mode.

n If you are upgrading from an earlier version of NSX-T Data Center and your configurations
were created using the Advanced Networking & Security tab, use Manager mode.

The menu items and configurations that were found under the Advanced Networking &
Security tab are available in NSX-T Data Center 3.0 in Manager mode.

Important If you decide to use Policy mode, use it to create all objects. Do not use Manager
mode to create objects.

Similarly, if you need to use Manager mode, use it to create all objects. Do not use Policy mode
to create objects.

Table 1-1. When to Use Policy Mode or Manager Mode


Policy Mode
n Most new deployments should use Policy mode. Federation supports only Policy mode. If you
  want to use Federation, or might use it in future, use Policy mode.
n NSX Cloud deployments
n Networking features available in Policy mode only:
  n DNS Services and DNS Zones
  n VPN
  n Forwarding policies for NSX Cloud
n Security features available in Policy mode only:
  n Endpoint Protection
  n Network Introspection (East-West Service Insertion)
  n Context Profiles
  n L7 applications
  n FQDN
  n New Distributed Firewall and Gateway Firewall Layout
  n Categories
  n Auto service rules
  n Drafts

Manager Mode
n Deployments which were created using the advanced interface, for example, upgrades from
  versions before Policy mode was available.
n Deployments which integrate with other plugins. For example, NSX Container Plug-in,
  Openstack, and other cloud management platforms.
n Networking features available in Manager mode only:
  n Forwarding up timer
n Security features available in Manager mode only:
  n Bridge Firewall


Names for Objects Created in Policy Mode and Manager Mode
The objects you create have different names depending on which interface was used to create
them.

Table 1-2. Object Names


Objects Created Using Policy Mode     Objects Created Using Manager Mode

Segment                               Logical switch

Tier-1 gateway                        Tier-1 logical router

Tier-0 gateway                        Tier-0 logical router

Group                                 NSGroup, IP Sets, MAC Sets

Security Policy                       Firewall section

Gateway firewall                      Edge firewall

Policy and Manager APIs


The NSX Manager provides two APIs: Policy and Manager.

n The Policy API contains URIs that begin with /policy/api.

n The Manager API contains URIs that begin with /api.

For more information about using the Policy API, see the NSX-T Policy API Getting Started Guide.
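
As an illustration only (the hostname, credentials, and the specific endpoints shown are assumptions to be checked against the API guides), the two API families can be called side by side:

    import requests

    NSX = "https://nsx-manager.example.com"   # placeholder NSX Manager address
    AUTH = ("admin", "<password>")          # placeholder credentials

    # Policy API: URIs begin with /policy/api
    tier0s = requests.get(f"{NSX}/policy/api/v1/infra/tier-0s", auth=AUTH, verify=False)

    # Manager API: URIs begin with /api
    routers = requests.get(f"{NSX}/api/v1/logical-routers", auth=AUTH, verify=False)

    # List the tier-0 gateways returned by the Policy API.
    for gw in tier0s.json().get("results", []):
        print(gw["display_name"])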

Security
NSX Manager has the following security features:

n NSX Manager has a built-in user account called admin, which has access rights to all
resources, but does not have rights to the operating system to install software. NSX-T
upgrade files are the only files allowed for installation. You cannot edit the rights of or delete
the admin user. Note that you can change the username admin.

n NSX Manager supports session time-out and automatic user logout. NSX Manager does not
support session lock. Initiating a session lock can be a function of the workstation operating
system being used to access NSX Manager. Upon session termination or user logout, users
are redirected to the login page.

n Authentication mechanisms implemented on NSX-T follow security best practices and are
resistant to replay attacks. The secure practices are deployed systematically. For example,
session IDs and tokens on NSX Manager for each session are unique and expire after the
user logs out or after a period of inactivity. Also, every session has a time record and the
session communications are encrypted to prevent session hijacking.

This chapter includes the following topics:


n View Monitoring Dashboards

View Monitoring Dashboards


The NSX Manager interface provides numerous monitoring dashboards showing details
regarding system status, networking and security, and compliance reporting. This information is
displayed or accessible throughout the NSX Manager interface, but can be accessed together in
the Home > Monitoring Dashboards page.

You can access the monitoring dashboards from the Home page of the NSX Manager interface.
From the dashboards, you can click through and access the source pages from which the
dashboard data is drawn.

Procedure

1 Log in as administrator to the NSX Manager interface.

2 Click Home if you are not already on the Home page.

3 Click Monitoring Dashboards and select the desired category of dashboards from the
drop-down menu.

The page displays the dashboards in the selected categories. The dashboard graphics are
color-coded, with a color code key displayed directly above the dashboards.

4 To access a deeper level of detail, click the title of the dashboard, or one of the elements of
the dashboard, if activated.

The following tables describe the default dashboards and their sources.

Table 1-3. System Dashboards


Dashboard: System
Sources: System > Appliances > Overview
Description: Shows the status of the NSX Manager cluster and resource (CPU, memory, disk)
consumption.

Dashboard: Fabric
Sources: System > Fabric > Nodes, System > Fabric > Transport Zones, System > Fabric >
Compute Managers
Description: Shows the status of the NSX-T fabric, including host and edge transport nodes,
transport zones, and compute managers.

Dashboard: Backups
Sources: System > Backup & Restore
Description: Shows the status of NSX-T backups, if configured. It is strongly recommended that
you configure scheduled backups that are stored remotely to an SFTP site.

Dashboard: Endpoint Protection
Sources: System > Service Deployments
Description: Shows the status of endpoint protection deployment.


Table 1-4. Networking & Security Dashboards in Policy Mode


Dashboard: Security
Sources: Inventory > Groups, Security > Distributed Firewall
Description: Shows the status of groups and security policies. A group is a collection of
workloads, segments, segment ports, and IP addresses, where security policies, including
East-West firewall rules, may be applied.

Dashboard: Gateways
Sources: Networking > Tier-0 Gateways, Networking > Tier-1 Gateways
Description: Shows the status of Tier-0 and Tier-1 gateways.

Dashboard: Segments
Sources: Networking > Segments
Description: Shows the status of network segments.

Dashboard: Load Balancers
Sources: Networking > Load Balancing
Description: Shows the status of the load balancer VMs.

Dashboard: VPNs
Sources: Networking > VPN
Description: Shows the status of virtual private networks.

Table 1-5. Networking & Security Dashboards in Manager Mode


Dashboard: Load Balancers
Sources: Networking > Load Balancing
Description: Shows the status of the load balancer services, load balancer virtual servers, and
load balancer server pools. A load balancer can host one or more virtual servers. A virtual
server is bound to a server pool that includes members hosting applications.

Dashboard: Firewall
Sources: Security > Distributed Firewall, Security > Bridge Firewall, Networking > Tier-0
Logical Routers and Networking > Tier-1 Logical Routers
Description: Indicates if the firewall is enabled, and shows the number of policies, rules, and
exclusion list members.
Note Each detailed item displayed in this dashboard is sourced from a specific sub-tab in the
source page cited.

Dashboard: VPN
Sources: Not applicable.
Description: Shows the status of virtual private networks and the number of IPSec and L2 VPN
sessions open.

Dashboard: Switching
Sources: Networking > Logical Switches
Description: Shows the status of logical switches and logical ports, including both VM and
container ports.

Table 1-6. Compliance Report Dashboard


Column Description

Non-Compliance Code Displays the specific non-compliance code.

Description Specific cause of non-compliance status.

Resource Name The NSX-T resource (node, switch, and profile) in non-compliance.

Resource Type Resource type of cause.

Affected Resources Number of resources affected. Click the number value to view a list.


See the Compliance Status Report Codes for more information about each compliance report
code.

2 Tier-0 Gateways
A tier-0 gateway performs the functions of a tier-0 logical router. It processes traffic between the
logical and physical networks.

NSX Cloud Note If using NSX Cloud, see NSX-T Data Center Features Supported with NSX
Cloud for a list of auto-generated logical entities, supported features, and configurations required
for NSX Cloud.

An Edge node can support only one tier-0 gateway or logical router. When you create a tier-0
gateway or logical router, make sure you do not create more tier-0 gateways or logical routers
than the number of Edge nodes in the NSX Edge cluster.

This chapter includes the following topics:

n Add a Tier-0 Gateway

n Create an IP Prefix List

n Create a Community List

n Configure a Static Route

n Create a Route Map

n Using Regular Expressions to Match Community Lists When Adding Route Maps

n Configure BGP

n Configure BFD

n Configure Multicast

n Configure IPv6 Layer 3 Forwarding

n Create SLAAC and DAD Profiles for IPv6 Address Assignment

n Changing the HA Mode of a Tier-0 Gateway

n Add a VRF Gateway

n Configuring EVPN


Add a Tier-0 Gateway


A tier-0 gateway has downlink connections to tier-1 gateways and uplink connections to physical
networks.

If you are adding a tier-0 gateway from Global Manager in Federation, see Add a Tier-0 Gateway
from Global Manager.

You can configure the HA (high availability) mode of a tier-0 gateway to be active-active or
active-standby. The following services are only supported in active-standby mode:

n NAT

n Load balancing

n Stateful firewall

n VPN

Tier-0 and tier-1 gateways support the following addressing configurations for all interfaces
(uplinks, service ports and downlinks) in both single tier and multi-tiered topologies:

n IPv4 only

n IPv6 only

n Dual Stack - both IPv4 and IPv6

To use IPv6 or dual stack addressing, enable IPv4 and IPv6 as the L3 Forwarding Mode in
Networking > Networking Settings > Global Networking Config.

You can configure the tier-0 gateway to support EVPN (Ethernet VPN) type-5 routes. For more
information about configuring EVPN, see Configuring EVPN.

If you configure route redistribution for the tier-0 gateway, you can select from two groups of
sources: tier-0 subnets and advertised tier-1 subnets. The sources in the tier-0 subnets group are:

Connected Interfaces and Segments: These include external interface subnets, service interface
subnets, and segment subnets connected to the tier-0 gateway.

Static Routes: Static routes that you have configured on the tier-0 gateway.

NAT IP: NAT IP addresses owned by the tier-0 gateway and discovered from NAT rules that are
configured on the tier-0 gateway.

IPSec Local IP: Local IPSec endpoint IP address for establishing VPN sessions.

DNS Forwarder IP: Listener IP for DNS queries from clients, also used as the source IP to
forward DNS queries to the upstream DNS server.

EVPN TEP IP: Used to redistribute EVPN local endpoint subnets on the tier-0 gateway.

The sources in the advertised tier-1 subnets group are:


Connected Interfaces and Segments: These include segment subnets connected to the tier-1
gateway and service interface subnets configured on the tier-1 gateway.

Static Routes: Static routes that you have configured on the tier-1 gateway.

NAT IP: NAT IP addresses owned by the tier-1 gateway and discovered from NAT rules that are
configured on the tier-1 gateway.

LB VIP: IP address of the load balancing virtual server.

LB SNAT IP: IP address or a range of IP addresses used for source NAT by the load balancer.

DNS Forwarder IP: Listener IP for DNS queries from clients, also used as the source IP to
forward DNS queries to the upstream DNS server.

IPSec Local Endpoint: IP address of the IPSec local endpoint.

Prerequisites

If you plan to configure multicast, see Configuring Multicast.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Tier-0 Gateways.

3 Click Add Tier-0 Gateway.

4 Enter a name for the gateway.

5 Select an HA (high availability) mode.

The default mode is active-active. In the active-active mode, traffic is load balanced across all
members. In active-standby mode, all traffic is processed by an elected active member. If the
active member fails, a new member is elected to be active.

6 If the HA mode is active-standby, select a failover mode.

Preemptive: If the preferred node fails and recovers, it will preempt its peer and become the
active node. The peer will change its state to standby.

Non-preemptive: If the preferred node fails and recovers, it will check if its peer is the
active node. If so, the preferred node will not preempt its peer and will be the standby node.

7 (Optional) Select an NSX Edge cluster.


8 (Optional) Click Additional Settings.

a In the Internal Transit Subnet field, enter a subnet.

This is the subnet used for communication between components within this gateway. The
default is 169.254.0.0/28.

b In the T0-T1 Transit Subnets field, enter one or more subnets.

These subnets are used for communication between this gateway and all tier-1 gateways
that are linked to it. After you create this gateway and link a tier-1 gateway to it, you will
see the actual IP address assigned to the link on the tier-0 gateway side and on the tier-1
gateway side. The address is displayed in Additional Settings > Router Links on the
tier-0 gateway page and the tier-1 gateway page. The default is 100.64.0.0/16.

9 Click Route Distinguisher for VRF Gateways to configure a route distinguisher admin
address.

This is only needed for EVPN and for the automatic route distinguisher use case.

10 (Optional) Add one or more tags.

11 Click Save.

12 For IPv6, under Additional Settings, you can select or create an ND Profile and a DAD
Profile.

These profiles are used to configure Stateless Address Autoconfiguration (SLAAC) and
Duplicate Address Detection (DAD) for IPv6 addresses.

13 Click EVPN Settings to configure EVPN.

a Select a VNI pool.

You can click the menu icon (3 dots) to create a VNI pool if you have not previously
created one.

b In the EVPN Tunnel Endpoint field click Set to add EVPN local tunnel endpoints.

For the tunnel endpoint, select an Edge node and specify an IP address.

Optionally, you can specify the MTU.

Note Ensure that the uplink interface has been configured on the NSX Edge node that
you select for the EVPN tunnel endpoint.

14 To configure route redistribution, click Route Redistribution and Set.

Select one or more of the sources:

n Tier-0 subnets: Static Routes, NAT IP, IPSec Local IP, DNS Forwarder IP, EVPN TEP IP,
Connected Interfaces & Segments.

Under Connected Interfaces & Segments, you can select one or more of the following:
Service Interface Subnet, External Interface Subnet, Loopback Interface Subnet,
Connected Segment.


n Advertised tier-1 subnets: DNS Forwarder IP, Static Routes, LB VIP, NAT IP, LB SNAT IP,
IPSec Local Endpoint, Connected Interfaces & Segments.

Under Connected Interfaces & Segments, you can select Service Interface Subnet
and/or Connected Segment.

15 To configure interfaces, click Interfaces and Set.

a Click Add Interface.

b Enter a name.

c Select a type.

If the HA mode is active-standby, the choices are External, Service, and Loopback. If the
HA mode is active-active, the choices are External and Loopback.

d Enter an IP address in CIDR format.

e Select a segment.

f If the interface type is not Service, select an NSX Edge node.

g (Optional) If the interface type is not Loopback, enter an MTU value.

h (Optional) If the interface type is External, you can enable multicast by setting PIM
(Protocol Independent Multicast) to Enabled.

PIM can be enabled only on a single uplink interface.


Note: If you later disable PIM on this interface, then multicast will be disabled on all
interfaces including the downlinks on this gateway.

i (Optional) Add tags and select an ND profile.

j (Optional) If the interface type is External, for URPF Mode, you can select Strict or None.

URPF (Unicast Reverse Path Forwarding) is a security feature.

k After you create an interface, you can download the ARP table by clicking the menu icon
(three dots) for the interface and selecting Download ARP table.

16 (Optional) If the HA mode is active-standby, click Set next to HA VIP Configuration to
configure HA VIP.

With HA VIP configured, the tier-0 gateway is operational even if one uplink is down. The
physical router interacts with the HA VIP only.
a Click Add HA VIP Configuration.

b Enter an IP address and subnet mask.

The HA VIP subnet must be the same as the subnet of the interface that it is bound to.

c Select 2 interfaces.

17 Click Routing to add IP prefix lists, community lists, static routes, and route maps.

18 Click Multicast to configure multicast routing.


19 Click BGP to configure BGP.

20 (Optional) To download the routing table or forwarding table, click the menu icon (three dots)
and select a download option. Enter values for Transport Node, Network and Source as
required, and save the .CSV file.

What to do next

After the tier-0 gateway is added, you can optionally enable dynamic IP management on the
gateway by selecting either a DHCP server profile or a DHCP relay profile. For more information,
see Attach a DHCP Profile to a Tier-0 or Tier-1 Gateway.
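
A tier-0 gateway can also be created through the Policy API. The following is a minimal sketch only; the gateway ID, hostname, and credentials are placeholders, and the field names should be verified against the NSX-T Policy API reference:

    import requests

    NSX = "https://nsx-manager.example.com"   # placeholder NSX Manager address
    AUTH = ("admin", "<password>")          # placeholder credentials

    body = {
        "display_name": "tier0-gw-01",
        "ha_mode": "ACTIVE_STANDBY",                      # or ACTIVE_ACTIVE (the default)
        "failover_mode": "NON_PREEMPTIVE",                # meaningful in active-standby mode
        "internal_transit_subnets": ["169.254.0.0/28"],   # default internal transit subnet
        "transit_subnets": ["100.64.0.0/16"],             # default T0-T1 transit subnets
    }

    # PATCH creates the gateway if it does not exist, or updates it if it does.
    resp = requests.patch(
        f"{NSX}/policy/api/v1/infra/tier-0s/tier0-gw-01",
        json=body, auth=AUTH, verify=False,
    )
    resp.raise_for_status()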

Create an IP Prefix List


An IP prefix list contains single or multiple IP addresses that are assigned access permissions for
route advertisement. The IP addresses in this list are processed sequentially. IP prefix lists are
referenced through BGP neighbor filters or route maps with in or out direction.

For example, you can add the IP address 192.168.100.3/27 to the IP prefix list and deny the route
from being redistributed to the northbound router. You can also append an IP address with
less-than-or-equal-to (le) and greater-than-or-equal-to (ge) modifiers to grant or limit route
redistribution. For example, 192.168.100.3/27 ge 24 le 30 modifiers match subnet masks greater
than or equal to 24-bits and less than or equal to 30-bits in length.

Note The default action for a route is Deny. When you create a prefix list to deny or permit
specific routes, be sure to create an IP prefix with no specific network address (select Any from
the dropdown list) and the Permit action if you want to permit all other routes.

Prerequisites

Verify that you have a tier-0 gateway configured. See Create a Tier-0 Logical Router in Manager
Mode.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Tier-0 Gateways.

3 To edit a tier-0 gateway, click the menu icon (three dots) and select Edit.

4 Click Routing.

5 Click Set next to IP Prefix List.

6 Click Add IP Prefix List.

7 Enter a name for the IP prefix list.

8 Click Set to add IP prefixes.


9 Click Add Prefix.

a Enter an IP address in CIDR format.

For example, 192.168.100.3/27.

b (Optional) Set a range of IP address numbers in the le or ge modifiers.

For example, set le to 30 and ge to 24.

c Select Deny or Permit from the drop-down menu.

d Click Add.

10 Repeat the previous step to specify additional prefixes.

11 Click Save.
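
As a rough Policy API equivalent of this procedure (a sketch only; the tier-0 ID, prefix list ID, and exact field names are assumptions to be verified against the API reference), the prefix entries map to a JSON body such as:

    import requests

    NSX = "https://nsx-manager.example.com"   # placeholder
    AUTH = ("admin", "<password>")          # placeholder

    prefix_list = {
        "display_name": "deny-specific-subnets",
        "prefixes": [
            # Deny subnets of 192.168.100.3/27 with mask lengths between 24 and 30 bits.
            {"network": "192.168.100.3/27", "ge": 24, "le": 30, "action": "DENY"},
            # Final "Any" entry (no network) to permit all other routes, because the
            # default action for unmatched routes is Deny.
            {"action": "PERMIT"},
        ],
    }

    resp = requests.patch(
        f"{NSX}/policy/api/v1/infra/tier-0s/tier0-gw-01/prefix-lists/deny-specific-subnets",
        json=prefix_list, auth=AUTH, verify=False,
    )
    resp.raise_for_status()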

Create a Community List


You can create BGP community lists so that you can configure route maps based on community
lists.

Community lists are user-defined lists of community attribute values. These lists can be used for
matching or manipulating the communities attribute in BGP update messages.

Both the BGP Communities attribute (RFC 1997) and the BGP Large Communities attribute (RFC
8092) are supported. The BGP Communities attribute is a 32-bit value split into two 16-bit values.
The BGP Large Communities attribute has 3 components, each 4 octets in length.

In route maps, you can match on or set the BGP Communities or Large Communities attribute.
Using this feature, network operators can implement network policy based on the BGP
communities attribute.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Tier-0 Gateways.

3 To edit a tier-0 gateway, click the menu icon (three dots) and select Edit.

4 Click Routing.

5 Click Set next to Community List.

6 Click Add Community List.

7 Enter a name for the community list.


8 Specify a list of communities. For a regular community, use the aa:nn format, for example,
300:500. For a large community, use the format aa:bb:cc, for example, 11:22:33. Note that the
list cannot have both regular communities and large communities. It must contain only regular
communities, or only large communities.

In addition, you can select one or more of the following regular communities. Note that they
cannot be added if the list contains large communities.

n NO_EXPORT_SUBCONFED - Do not advertise to EBGP peers.

n NO_ADVERTISE - Do not advertise to any peer.

n NO_EXPORT - Do not advertise outside BGP confederation

9 Click Save.
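
A hedged Policy API sketch of the same configuration follows (the tier-0 ID, community list ID, and field names are assumptions; check them against the API reference). Remember that a single list must contain only regular communities or only large communities:

    import requests

    NSX = "https://nsx-manager.example.com"   # placeholder
    AUTH = ("admin", "<password>")          # placeholder

    community_list = {
        "display_name": "no-export-communities",
        # Regular communities only; do not mix regular and large communities in one list.
        "communities": ["300:500", "NO_EXPORT"],
    }

    resp = requests.patch(
        f"{NSX}/policy/api/v1/infra/tier-0s/tier0-gw-01/community-lists/no-export-communities",
        json=community_list, auth=AUTH, verify=False,
    )
    resp.raise_for_status()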

Configure a Static Route


You can configure a static route on the tier-0 gateway to external networks. After you configure
a static route, there is no need to advertise the route from tier-0 to tier-1, because tier-1
gateways automatically have a static default route towards their connected tier-0 gateway.

Recursive static routes are supported.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Tier-0 Gateways.

3 To edit a tier-0 gateway, click the menu icon (three dots) and select Edit.

4 Click Routing.

5 Click Set next to Static Routes.

6 Click Add Static Route.

7 Enter a name and network address in CIDR format. Static routes based on IPv6 are
supported. IPv6 prefixes can only have an IPv6 next hop.

8 Click Set Next Hops to add next-hop information.

9 Click Add Next Hop.

10 Enter an IP address or select NULL.

If NULL is selected, the route is called a device route.

11 Specify the administrative distance.

12 Select a scope from the drop-down list. A scope can be an interface, a gateway, an IPSec
session, or a segment.

13 Click Add.


What to do next

Check that the static route is configured properly. See Verify the Static Route on a Tier-0 Router.
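
For reference, a sketch of an equivalent Policy API call (the route ID, addresses, and field names are assumptions to be checked against the API reference):

    import requests

    NSX = "https://nsx-manager.example.com"   # placeholder
    AUTH = ("admin", "<password>")          # placeholder

    static_route = {
        "display_name": "to-branch-network",
        "network": "10.20.0.0/16",          # destination network in CIDR format
        "next_hops": [
            {"ip_address": "192.168.50.1", "admin_distance": 1},
        ],
    }

    resp = requests.patch(
        f"{NSX}/policy/api/v1/infra/tier-0s/tier0-gw-01/static-routes/to-branch-network",
        json=static_route, auth=AUTH, verify=False,
    )
    resp.raise_for_status()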

Create a Route Map


A route map consists of a sequence of IP prefix lists, BGP path attributes, and an associated
action. The router scans the sequence for an IP address match. If there is a match, the router
performs the action and scans no further.

Route maps can be referenced at the BGP neighbor level and for route redistribution.

Prerequisites

n Verify that an IP prefix list or a community list is configured. See Create an IP Prefix List in
Manager Mode or Create a Community List.

n For details about using regular expressions to define route-map match criteria for community
lists, see Using Regular Expressions to Match Community Lists When Adding Route Maps.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Tier-0 Gateways.

3 To edit a tier-0 gateway, click the menu icon (three dots) and select Edit.

4 Click Routing.

5 Click Set next to Route Maps.

6 Click Add Route Map.

7 Enter a name and click Set to add match criteria.

8 Click Add Match Criteria to add one or more match criteria.


9 For each criterion, select IP Prefix or Community List and click Set to specify one or more
match expressions.

a If you selected Community List, specify match expressions that define how to match
members of community lists. For each community list, the following match options are
available:

n MATCH ANY - perform the set action in the route map if any of the communities in the
community list is matched.

n MATCH ALL - perform the set action in the route map if all the communities in the
community list are matched regardless of the order.

n MATCH EXACT - perform the set action in the route map if all the communities in the
community list are matched in the exact same order.

n MATCH COMMUNITY REGEXP - perform the set action in the route map if all the
regular communities associated with the NLRI match the regular expression.

n MATCH LARGE COMMUNITY REGEXP - perform the set action in the route map if all
the large communities associated with the NLRI match the regular expression.

You should use the match criterion MATCH_COMMUNITY_REGEX to match routes against
standard communities, and use the match criterion MATCH_LARGE_COMMUNITY_REGEX
to match routes against large communities. If you want to permit routes containing either
the standard community or large community value, you must create two match criteria. If
the match expressions are given in the same match criterion, only the routes containing
both the standard and large communities will be permitted.

For any match criterion, the match expressions are applied in an AND operation, which
means that all match expressions must be satisfied for a match to occur. If there are
multiple match criteria, they are applied in an OR operation, which means that a match will
occur if any one match criterion is satisfied.

10 Set BGP attributes.

AS-path Prepend: Prepend a path with one or more AS (autonomous system) numbers to make the path
longer and therefore less preferred.

MED: Multi-Exit Discriminator indicates to an external peer a preferred path to an AS.

Weight: Set a weight to influence path selection. The range is 0 - 65535.

Community: Specify a list of communities. For a regular community use the aa:nn format, for
example, 300:500. For a large community use the aa:bb:cc format, for example, 11:22:33. Or use
the drop-down menu to select one of the following:
n NO_EXPORT_SUBCONFED - Do not advertise to EBGP peers.
n NO_ADVERTISE - Do not advertise to any peer.
n NO_EXPORT - Do not advertise outside BGP confederation

Local Preference: Use this value to choose the outbound external BGP path. The path with the
highest value is preferred.


11 In the Action column, select Permit or Deny.

You can permit or deny IP addresses matched by the IP prefix lists or community lists from
being advertised.

12 Click Save.
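
A route map created in the UI corresponds roughly to a Policy API payload like the one below. This is a sketch only: the entry structure, the match and set field names, and the prefix list path are assumptions based on the Policy API and should be verified against the API reference.

    import requests

    NSX = "https://nsx-manager.example.com"   # placeholder
    AUTH = ("admin", "<password>")          # placeholder

    route_map = {
        "display_name": "prefer-local-routes",
        "entries": [
            {
                "action": "PERMIT",
                # Match routes against an existing IP prefix list (path is a placeholder).
                "prefix_list_matches": [
                    "/infra/tier-0s/tier0-gw-01/prefix-lists/deny-specific-subnets"
                ],
                # BGP attributes applied to the matching routes.
                "set": {"local_preference": 200},
            }
        ],
    }

    resp = requests.patch(
        f"{NSX}/policy/api/v1/infra/tier-0s/tier0-gw-01/route-maps/prefer-local-routes",
        json=route_map, auth=AUTH, verify=False,
    )
    resp.raise_for_status()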

Using Regular Expressions to Match Community Lists When Adding Route Maps
You can use regular expressions to define the route-map match criteria for community lists. BGP
regular expressions are based on POSIX 1003.2 regular expressions.

The following expressions are a subset of the POSIX regular expressions.

Expression   Description

.            Matches any single character.

*            Matches 0 or more occurrences of a pattern.

+            Matches 1 or more occurrences of a pattern.

?            Matches 0 or 1 occurrence of a pattern.

^            Matches the beginning of the line.

$            Matches the end of the line.

_            This character has a special meaning in BGP regular expressions. It matches a space,
             a comma, the AS set delimiters { and }, and the AS confederation delimiters ( and ).
             It also matches the beginning and the end of the line, so it can be used to match AS
             value boundaries. This character technically evaluates to (^|[,{}()]|$).

Here are some examples for using regular expressions in route maps:

Expression   Description

^101         Matches routes having a community attribute that starts with 101.

^[0-9]+      Matches routes having a community attribute that starts with a number between 0-9,
             with one or more instances of such a number.

.*           Matches routes having any or no community attribute.

.+           Matches routes having any community value.

^$           Matches routes having no/null community value.
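
To illustrate how these expressions behave, the short Python sketch below expands the "_" metacharacter into the equivalent shown in the table above and tests a few example patterns against a hypothetical community attribute value:

    import re

    # "_" evaluates to (^|[,{}()]|$) as described in the expression table above.
    UNDERSCORE = r"(^|[,{}()]|$)"

    def bgp_regex(pattern: str) -> str:
        """Expand the BGP-style '_' metacharacter for use with a POSIX-style engine."""
        return pattern.replace("_", UNDERSCORE)

    communities = "300:500"   # hypothetical community attribute of a route

    print(bool(re.search(bgp_regex("^101"), communities)))        # False: does not start with 101
    print(bool(re.search(bgp_regex("^[0-9]+"), communities)))     # True: starts with digits
    print(bool(re.search(bgp_regex("_300:500_"), communities)))   # True: 300:500 on a value boundary
    print(bool(re.search(bgp_regex("^$"), communities)))          # False: the value is not empty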

Configure BGP
To enable access between your VMs and the outside world, you can configure an external or
internal BGP (eBGP or iBGP) connection between a tier-0 gateway and a router in your physical
infrastructure.


When configuring BGP, you must configure a local Autonomous System (AS) number for the
tier-0 gateway. You must also configure the remote AS number. EBGP neighbors must be
directly connected and in the same subnet as the tier-0 uplink. If they are not in the same subnet,
BGP multi-hop should be used.

BGPv6 is supported for single hop and multihop. A BGPv6 neighbor only supports IPv6
addresses. Redistribution, prefix list, and route maps are supported with IPv6 prefixes.

A tier-0 gateway in active-active mode supports inter-SR (service router) iBGP. If gateway #1 is
unable to communicate with a northbound physical router, traffic is re-routed to gateway #2 in
the active-active cluster. If gateway #2 is able to communicate with the physical router, traffic
between gateway #1 and the physical router will not be affected.

The implementation of ECMP on NSX Edge is based on the 5-tuple of the protocol number,
source and destination address, and source and destination port.

The iBGP feature has the following capabilities and restrictions:

n Redistribution, prefix lists, and route maps are supported.

n Route reflectors are not supported.

n BGP confederation is not supported.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Tier-0 Gateways.

3 To edit a tier-0 gateway, click the menu icon (three dots) and select Edit.

4 Click BGP.

a Enter the local AS number.

In active-active mode, the default ASN value, 65000, is already filled in. In active-standby
mode, there is no default ASN value.

b Click the BGP toggle to enable or disable BGP.

In active-active mode, BGP is enabled by default. In active-standby mode, BGP is disabled by default.

c If this gateway is in active-active mode, click the Inter SR iBGP toggle to enable or disable
inter-SR iBGP. It is enabled by default.

If the gateway is in active-standby mode, this feature is not available.

d Click the ECMP toggle button to enable or disable ECMP.


e Click the Multipath Relax toggle button to enable or disable load-sharing across multiple
paths that differ only in AS-path attribute values but have the same AS-path length.

Note ECMP must be enabled for Multipath Relax to work.

f In the Graceful Restart field, select Disable, Helper Only, or Graceful Restart and Helper.

You can optionally change the Graceful Restart Timer and Graceful Restart Stale Timer.

By default, the Graceful Restart mode is set to Helper Only. Helper mode is useful for
eliminating and/or reducing the disruption of traffic associated with routes learned from a
neighbor capable of Graceful Restart. The neighbor must be able to preserve its
forwarding table while it undergoes a restart.

For EVPN, only the Helper Only mode is supported.

The Graceful Restart capability is not recommended to be enabled on the tier-0 gateways
because BGP peerings from all the gateways are always active. On a failover, the Graceful
Restart capability will increase the time a remote neighbor takes to select an alternate
tier-0 gateway. This will delay BFD-based convergence.

Note: Unless overridden by neighbor-specific configuration, the tier-0 configuration applies to all BGP neighbors.

5 Configure Route Aggregation by adding IP address prefixes.

a Click Add Prefix.

b Enter an IP address prefix in CIDR format.

c For the option Summary Only, select Yes or No.

6 Click Save.

You must save the global BGP configuration before you can configure BGP neighbors.

7 Configure BGP Neighbors.

a Enter the IP address of the neighbor.

b Enable or disable BFD.

c Enter a value for Remote AS number.

For iBGP, enter the same AS number as the one in step 4a. For eBGP, enter the AS
number of the physical router.


d Under Route Filter, click Set to add one or more route filters.

For IP Address Family, you can select IPv4, IPv6, or L2VPN EVPN. You can have at most
two route filters, with one address family being IPv4 and the other being L2VPN EVPN.
No other combinations (IPv4 and IPv6, IPv6 and L2VPN EVPN) are allowed.

For Maximum Routes, you can specify a value between 1 and 1,000,000. This is the
maximum number of BGP routes that the gateway will accept from the BGP neighbor.

Note: If you configure a BGP neighbor with one address family, for example, L2VPN
EVPN, and then later add a second address family, the established BGP connection will
be reset.

e Enable or disable the Allowas-in feature.

This is disabled by default. With this feature enabled, BGP neighbors can receive routes
with the same AS, for example, when you have two locations interconnected using the
same service provider. This feature applies to all the address families and cannot be
applied to specific address families.

f In the Source Addresses field, you can select a source address to establish a peering
session with a neighbor using this specific source address. If you do not select any, the
gateway will automatically choose one.

g Enter a value for Max Hop Limit.

h In the Graceful Restart field, you can optionally select Disable, Helper Only, or Graceful
Restart and Helper.

None selected: The Graceful Restart mode for this neighbor follows the tier-0 gateway BGP
configuration.

Disable: Graceful Restart is disabled for this neighbor, regardless of whether the tier-0
gateway BGP is configured with Disable, Helper Only, or Graceful Restart and Helper.

Helper Only: Graceful Restart is configured as Helper Only for this neighbor, regardless of
whether the tier-0 gateway BGP is configured with Disable, Helper Only, or Graceful Restart and
Helper.

Graceful Restart and Helper: Graceful Restart is configured as Graceful Restart and Helper for
this neighbor, regardless of whether the tier-0 gateway BGP is configured with Disable, Helper
Only, or Graceful Restart and Helper.

Note: For EVPN, only the Helper Only mode is supported.


i Click Timers & Password.

j Enter a value for BFD Interval.

The unit is milliseconds. For an Edge node running in a VM, the minimum value is 500. For
a bare-metal Edge node, the minimum value is 50.

k Enter a value for BFD Multiplier.

l Enter a value, in seconds, for Hold Down Time and Keep Alive Time.

The Keep Alive Time specifies how frequently KEEPALIVE messages will be sent. The
value can be between 0 and 65535. Zero means no KEEPALIVE messages will be sent.

The Hold Down Time specifies how long the gateway will wait for a KEEPALIVE message
from a neighbor before considering the neighbor dead. The value can be 0 or between 3
and 65535. Zero means no KEEPALIVE messages are sent between the BGP neighbors
and the neighbor will never be considered unreachable.

Hold Down Time must be at least three times the value of the Keep Alive Time.

m Enter a password.

This is required if you configure MD5 authentication between BGP peers.

8 Click Save.
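
The BGP settings and neighbors can also be managed through the Policy API under the gateway's locale services. The sketch below assumes a locale services ID of "default" as well as placeholder addresses and AS numbers; the paths and field names should be verified against the API reference.

    import requests

    NSX = "https://nsx-manager.example.com"   # placeholder
    AUTH = ("admin", "<password>")          # placeholder
    BGP = f"{NSX}/policy/api/v1/infra/tier-0s/tier0-gw-01/locale-services/default/bgp"

    # Global BGP settings for the gateway (save these before adding neighbors).
    requests.patch(BGP, json={"enabled": True, "local_as_num": "65000", "ecmp": True},
                   auth=AUTH, verify=False).raise_for_status()

    # An eBGP neighbor using the AS number of the physical router.
    neighbor = {
        "neighbor_address": "192.168.50.1",   # placeholder peer address
        "remote_as_num": "65001",             # placeholder remote AS
        "keep_alive_time": 60,
        "hold_down_time": 180,                # at least three times the keep alive time
    }
    requests.patch(f"{BGP}/neighbors/physical-router-1", json=neighbor,
                   auth=AUTH, verify=False).raise_for_status()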

Configure BFD
BFD (Bidirectional Forwarding Detection) is a protocol that can detect forwarding path failures.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Tier-0 Gateways.

3 To edit a tier-0 gateway, click the menu icon (three dots) and select Edit.

4 Click Routing and Set for Static Route BFD Peer.

5 Click Add Static Route BFD Peer.

6 Select a BFD profile. See Add a BFD Profile.

7 Enter the peer IP address and optionally the source addresses.

8 Click Save.

Configure Multicast
IP multicast routing enables a host (source) to send a single copy of data to a single multicast
address. Data is then distributed to a group of recipients using a special form of IP address called
the IP multicast group address. You can configure multicast on a tier-0 gateway for an IPv4
network to enable multicast routing.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Tier-0 Gateways.

3 To edit a tier-0 gateway, click the menu icon (three dots) and select Edit.

4 Click the Multicast toggle to enable multicast.

5 In the Replication Multicast Range field, enter an address range in CIDR format.

Replication Multicast Range is a range of multicast group addresses (GENEVE outer
destination IP) that is used in the underlay to replicate workload/tenant multicast group
addresses. It is recommended that there is no overlap between the Replication Multicast
Range and workload/tenant multicast group addresses.

6 In the IGMP Profile drop-down list, select an IGMP profile.

7 In the PIM Profile drop-down list, select a PIM profile.
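
Because overlap between the Replication Multicast Range and the workload/tenant multicast group
addresses is not recommended, you may want to check candidate values before applying them. This
is a minimal standalone sketch using the Python ipaddress module; the multicast addresses shown
are placeholders, not values taken from any particular environment.

import ipaddress

def overlapping_groups(replication_range, workload_groups):
    # Return the workload/tenant group addresses that fall inside the replication range.
    rep = ipaddress.ip_network(replication_range)
    return [g for g in workload_groups if ipaddress.ip_network(g).overlaps(rep)]

# Placeholder example: one tenant group overlaps the proposed replication range.
print(overlapping_groups("239.0.0.0/24", ["239.0.0.8/32", "239.10.10.10/32"]))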

Configure IPv6 Layer 3 Forwarding


IPv4 layer 3 forwarding is enabled by default. You can also configure IPv6 layer 3 forwarding.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Networking Settings.

3 Click the Global Networking Config tab.

4 Edit the Global Gateway Configuration and select IPv4 and IPv6 for the L3 Forwarding Mode.

IPv6 only is not supported.

5 Click Save.

6 Select Networking > Tier-0 Gateways.

7 Edit a tier-0 gateway by clicking the menu icon (three dots) and select Edit.

8 Go to Additional Settings.

a Enter an IPv6 subnet for Internal Transit Subnet.

b Enter an IPv6 subnet for T0-T1 Transit Subnets.

9 Go to Interfaces and add an interface for IPv6.
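
The same forwarding mode can be set programmatically. The sketch below assumes the NSX-T Policy
API exposes the global configuration at /policy/api/v1/infra/global-config with an
l3_forwarding_mode field; verify the exact path and field names against the NSX-T Data Center API
Guide for your version. The manager address and credentials are placeholders.

import requests

NSX = "https://nsx-manager.example.com"   # placeholder NSX Manager address
AUTH = ("admin", "password")               # placeholder credentials

# Enable dual-stack (IPv4 and IPv6) layer 3 forwarding globally.
resp = requests.patch(
    f"{NSX}/policy/api/v1/infra/global-config",
    auth=AUTH,
    verify=False,                          # lab only; use a trusted certificate in production
    json={"l3_forwarding_mode": "IPV4_AND_IPV6"},
)
resp.raise_for_status()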


Create SLAAC and DAD Profiles for IPv6 Address Assignment
When using IPv6 on a logical router interface, you can set up Stateless Address
Autoconfiguration (SLAAC) for the assignment of IP addresses. SLAAC enables the addressing of
a host, based on a network prefix advertised from a local network router, through router
advertisements. Duplicate Address Detection (DAD) ensures the uniqueness of IP addresses.

Prerequisites

Navigate to Networking > Networking Settings, click the Global Networking Config tab, and select
IPv4 and IPv6 as the L3 Forwarding Mode.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Tier-0 Gateways.

3 To edit a tier-0 gateway, click the menu icon (three dots) and select Edit.

4 Click Additional Settings.

5 To create an ND Profile (SLAAC profile), click the menu icon (three dots) and select Create
New.

a Enter a name for the profile.

b Select a mode:

n Disabled - Router advertisement messages are disabled.

n SLAAC with DNS Through RA - The address and DNS information is generated with
the router advertisement message.

n SLAAC with DNS Through DHCP - The address is generated with the router
advertisement message and the DNS information is generated by the DHCP server.

n DHCP with Address and DNS through DHCP - The address and DNS information is
generated by the DHCP server.

n SLAAC with Address and DNS through DHCP - The address and DNS information is
generated by the DHCP server. This option is only supported by NSX Edge and not
by KVM hosts or ESXi hosts.

c Enter the reachable time and the retransmission interval for the router advertisement
message.

d Enter the domain name and specify a lifetime for the domain name. Enter these values
only for the SLAAC with DNS Through RA mode.


e Enter a DNS server and specify a lifetime for the DNS server. Enter these values only for
the SLAAC with DNS Through RA mode.

f Enter the values for router advertisement:

n RA Interval - The interval of time between the transmission of consecutive router
advertisement messages.

n Hop Limit - The maximum number of hops, as advertised in the router advertisement
message.

n Router Lifetime - The lifetime of the router.

n Prefix Lifetime - The lifetime of the prefix in seconds.

n Prefix Preferred Time - The time that a valid address is preferred.

6 To create a DAD Profile, click the menu icon (three dots) and select Create New.

a Enter a name for the profile.

b Select a mode:

n Loose - A duplicate address notification is received but no action is taken when a
duplicate address is detected.

n Strict - A duplicate address notification is received and the duplicate address is no
longer used.

c Enter the Wait Time (seconds) that specifies the interval of time between the NS packets.

d Enter the NS Retries Count that specifies the number of NS packets sent to detect duplicate
addresses at the interval defined in Wait Time (seconds).

Changing the HA Mode of a Tier-0 Gateway


You can change the high availability (HA) mode of a tier-0 gateway in certain circumstances.

Changing the HA mode is allowed only if there is no more than one service router running on the
gateway. This means that you must not have uplinks on more than one Edge transport node.
However, you can have more than one uplink on the same Edge transport node.

After you set the HA mode from active-active to active-standby, you can set the failover mode.
The default is non-preemptive.

HA mode change is not allowed if the following services or features are configured.

n DNS Forwarder

n IPSec VPN

n L2 VPN

n HA VIP

n Stateful Firewall

n SNAT, DNAT, NO_SNAT, or NO_DNAT


n Reflexive NAT applied on an interface

n Service Insertion

n VRF

n Centralized Service Port

Add a VRF Gateway


A virtual routing and forwarding (VRF) gateway makes it possible for multiple instances of a
routing table to exist within the same gateway at the same time. VRFs are the layer 3 equivalent
of a VLAN. A VRF gateway must be linked to a tier-0 gateway. From the tier-0 gateway, the VRF
gateway inherits the failover mode, Edge cluster, internal transit subnet, T0-T1 transit subnets,
and BGP routing configuration.

Prerequisites

For VRF gateways on EVPN, ensure that you configure the EVPN settings for the tier-0 gateway
that you want to link to. These settings are only needed to support EVPN:

n Specify a VNI pool on the tier-0 gateway.

n Set the EVPN local tunnel endpoints on the tier-0 gateway.

For more information, see Configuring EVPN.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Tier-0 Gateways.

3 Click Add Gateway > VRF.

4 Enter a name for the gateway.

5 Select a tier-0 gateway.


6 Click VRF Settings.

These settings are only needed to support EVPN.


a Specify a Route Distinguisher.

If the connected tier-0 gateway has RD Admin Address configured, the Route
Distinguisher is automatically populated. Enter a new value if you want to override the
assigned Route Distinguisher.

b Specify an EVPN Transit VNI.

The VNI must be unique and belong to the VNI pool configured on the linked tier-0
gateway.

c In the Route Targets field, click Set to add route targets.

For each route target, select a mode, which can be Auto or Manual. Specify one or more
Import Route Targets. Specify one or more Export Route Targets.

7 Click Save and then Yes to continue configuring the VRF gateway.

8 For VRF-lite, configure one or more external interfaces on the VRF gateway with an Access
VLAN ID and connect to a VLAN Segment. For EVPN, configure one or more service
interfaces on the VRF gateway with an Access VLAN ID and connect to an Overlay Segment.
See Add a Segment. VRF interfaces require existing external interfaces on the linked tier-0
gateway to be mapped to each edge node. The Segment connected to the Access interface
needs to have VLAN IDs configured in range or list format.

9 Click BGP to set BGP, ECMP, Route Aggregation, and BGP Neighbours. You can add a route
filter with IPv4/IPv6 address families. See Add a Tier-0 Gateway.

10 Click Routing and complete routing configuration. For supporting route leaking between the
VRF gateway and linked tier-0 gateway/peer VRF gateway, you can add a static route and
select Next Hop scope as the linked tier-0 gateway, or as one of the existing peer VRF
gateways. See Add a Tier-0 Gateway.
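
For automation, a VRF gateway can also be declared through the Policy API as a tier-0 object that
carries a vrf_config block pointing at its parent tier-0 gateway. The structure and field names
below are assumptions to verify against the NSX-T Data Center API Guide; the IDs, route
distinguisher, and VNI are placeholders, and the EVPN-specific fields can be omitted for VRF-lite.

import requests

NSX = "https://nsx-manager.example.com"   # placeholder
AUTH = ("admin", "password")               # placeholder

vrf_gateway = {
    "display_name": "vrf-tenant-a",
    "vrf_config": {
        "tier0_path": "/infra/tier-0s/parent-t0",   # parent tier-0 gateway
        "route_distinguisher": "65000:100",          # EVPN only (manual RD)
        "evpn_transit_vni": 75001,                   # EVPN only; must come from the VNI pool
    },
}

resp = requests.patch(
    f"{NSX}/policy/api/v1/infra/tier-0s/vrf-tenant-a",
    auth=AUTH, verify=False, json=vrf_gateway,
)
resp.raise_for_status()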

Configuring EVPN
EVPN (Ethernet VPN) is a standards-based BGP control plane that provides the ability to extend
Layer 2 and Layer 3 connectivity between different data centers.

The EVPN feature has the following capabilities and limitations:

n Multi-Protocol BGP (MP-BGP) EVPN between NSX Edge and physical routers.

n VXLAN used as the overlay for MP-BGP EVPN.

n Multi-tenancy in MP-BGP EVPN by using VRF instances.

n Support for EVPN type-5 routes only.


n NSX-T generates a unique router MAC for every NSX Edge VTEP in the EVPN domain.
However, there may be other nodes in the network that are not managed by NSX-T, for
example, physical routers. You must make sure that the router MACs are unique across all the
VTEPs in the EVPN domain.

n The EVPN feature supports NSX Edge to be either the ingress or the egress of the EVPN
virtual tunnel endpoint. If an NSX Edge node receives EVPN type-5 prefixes from its eBGP
peer that need to be redistributed to another eBGP peer, the routes will be re-advertised
without any change to the nexthop.

n In multi-path network topologies, it is recommended that ECMP is enabled in the BGP EVPN
control plane as well, so that all the possible paths can be advertised. This will avoid any
potential traffic blackhole due to asymmetric data path forwarding.

Configuration Prerequisites
n Virtual Router (vRouter) deployed on VMware ESXi hypervisor.

n Peer physical router supporting EVPN type-5 routes.

Configuration Steps
n Create a VNI pool. See Add a VNI Pool.

n Configure a VLAN Segment. See Add a Segment.

n Configure an overlay Segment and specify one or more VLAN ranges. See Add a Segment.

n Configure a tier-0 gateway to support EVPN. See Add a Tier-0 Gateway.

n Under EVPN Settings, select a VNI pool and create EVPN Tunnel Endpoints.

n Under Route Distinguisher for VRF Gateways, configure RD Admin Address for the automatic
route distinguisher use case.

n Configure one or more external interfaces on the tier-0 gateway and connect to the VLAN
Segment.

n Configure BGP neighbors with the peer physical router. Add route filter with IPv4 and L2VPN
EVPN Address Families.

n Configure Route Re-Distribution. Select EVPN TEP IP under Tier-0 Subnets along with other
sources.

n Configure VRF to support EVPN. See Add a VRF Gateway.

n Under VRF Settings, specify an EVPN Transit VNI.

n Specify Route Distinguisher for a manual route distinguisher.

n Specify Import/Export Route Targets for manual route targets.

n Add service interface on VRF for each edge node and connect to the Overlay Segment.
Specify an Access VLAN ID for each service interface.


n Configure per VRF BGP neighbors with the peer vRouter. The routes learned over the VRF
BGP sessions are redistributed by the NSX Edge to the peer physical router over the MP-BGP
EVPN session.

Tier-1 Gateway
A tier-1 gateway has downlink connections to segments and uplink connections to tier-0
gateways.

You can configure route advertisements and static routes on a tier-1 gateway. Recursive static
routes are supported.

This chapter includes the following topics:

n Add a Tier-1 Gateway

Add a Tier-1 Gateway


A tier-1 gateway is typically connected to a tier-0 gateway in the northbound direction and to
segments in the southbound direction.

If you are adding a tier-1 gateway from Global Manager in Federation, see Add a Tier-1 Gateway
from Global Manager.

Tier-0 and tier-1 gateways support the following addressing configurations for all interfaces
(uplinks, service ports and downlinks) in both single tier and multi-tiered topologies:

n IPv4 only

n IPv6 only

n Dual Stack - both IPv4 and IPv6

To use IPv6 or dual stack addressing, enable IPv4 and IPv6 as the L3 Forwarding Mode in
Networking > Networking Settings > Global Networking Config .

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Tier-1 Gateways.

3 Click Add Tier-1 Gateway.

4 Enter a name for the gateway.


5 (Optional) Select a tier-0 gateway to connect to this tier-1 gateway to create a multi-tier
topology.

6 (Optional) Select an NSX Edge cluster if you want this tier-1 gateway to host stateful services
such as NAT, load balancer, or firewall.

If an NSX Edge cluster is selected, a service router will always be created (even if you do not
configure stateful services), affecting the north/south traffic pattern.

7 (Optional) In the Edges field, click Set to select an NSX Edge node.

8 If you selected an NSX Edge cluster, select a failover mode or accept the default.

Option Description

Preemptive If the preferred NSX Edge node fails and recovers, it will preempt its peer and become
the active node. The peer will change its state to standby. This is the default option.

Non-preemptive If the preferred NSX Edge node fails and recovers, it will check if its peer is the active
node. If so, the preferred node will not preempt its peer and will be the standby node.

9 If you plan to configure a load balancer on this gateway, select an Edges Pool Allocation Size
setting according to the size of the load balancer.

The options are Routing, LB Small, LB Medium, LB Large, and LB XLarge. The default is
Routing and is suitable if no load balancer will be configured on this gateway. This parameter
allows the NSX Manager to place the tier-1 gateway on the Edge nodes in a more intelligent
way. With this setting the number of load balancing and routing functions on each node is
taken into consideration. Note that you cannot change this setting after the gateway is
created.

10 (Optional) Click the Enable StandBy Relocation toggle to enable or disable standby
relocation.

Standby relocation means that if the Edge node where the active or standby logical router is
running fails, a new standby logical router is created on another Edge node to maintain high
availability. If the Edge node that fails is running the active logical router, the original standby
logical router becomes the active logical router and a new standby logical router is created. If
the Edge node that fails is running the standby logical router, the new standby logical router
replaces it.

11 (Optional) Click Route Advertisement.

Select one or more of the following:

n All Static Routes

n All NAT IP's

n All DNS Forwarder Routes

n All LB VIP Routes

n All Connected Segments and Service Ports


n All LB SNAT IP Routes

n All IPSec Local Endpoints

12 Click Save.

13 (Optional) Click Route Advertisement.

a In the Set Route Advertisement Rules field, click Set to add route advertisement rules.

14 (Optional) Click Additional Settings.

a For IPv6, you can select or create an ND Profile and a DAD Profile.

These profiles are used to configure Stateless Address Autoconfiguration (SLAAC) and
Duplicate Address Detection (DAD) for IPv6 addresses.

b Select an Ingress QoS Profile and an Egress QoS Profile for traffic limitations.

These profiles are used to set information rate and burst size for permitted traffic. See
Add a Gateway QoS Profile for more information on creating QoS profiles.
If this gateway is linked to a tier-0 gateway, the Router Links field shows the link addresses.

15 (Optional) Click Service Interfaces and Set to configure connections to segments. Required in
some topologies such as VLAN-backed segments or one-arm load balancing.

a Click Add Interface.

b Enter a name and IP address in CIDR format.

c Select a segment.

d In the MTU field, enter a value between 64 and 9000.

e For URPF Mode, you can select Strict or None.

URPF (Unicast Reverse Path Forwarding) is a security feature.

f Add one or more tags.

g In the ND Profile field, select or create a profile.

h Click Save.

16 (Optional) Click Static Routes and Set to configure static routes.

a Click Add Static Route.

b Enter a name and a network address in the CIDR or IPv6 CIDR format.

c Click Set Next Hops to add next hop information.

d Click Save.

What to do next

After the tier-1 gateway is added, you can optionally enable dynamic IP management on the
gateway by selecting either a DHCP server profile or a DHCP relay profile. For more information,
see Attach a DHCP Profile to a Tier-0 or Tier-1 Gateway.
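
The equivalent tier-1 gateway can be created with a single Policy API call, which is convenient
when many gateways follow the same pattern. Treat the field names and the
route_advertisement_types values below as assumptions to confirm against the NSX-T Data Center
API Guide; the gateway IDs are placeholders.

import requests

NSX = "https://nsx-manager.example.com"   # placeholder
AUTH = ("admin", "password")               # placeholder

tier1 = {
    "display_name": "t1-web",
    "tier0_path": "/infra/tier-0s/t0-gw",  # northbound connection to an existing tier-0
    "failover_mode": "NON_PREEMPTIVE",     # the default described in step 8
    # Advertise connected segments and NAT IPs northbound.
    "route_advertisement_types": ["TIER1_CONNECTED", "TIER1_NAT"],
}

resp = requests.patch(
    f"{NSX}/policy/api/v1/infra/tier-1s/t1-web",
    auth=AUTH, verify=False, json=tier1,
)
resp.raise_for_status()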

Segments
In NSX-T Data Center, segments are virtual layer 2 domains. A segment was earlier called a
logical switch.

There are two types of segments in NSX-T Data Center:

n VLAN-backed segments

n Overlay-backed segments

A VLAN-backed segment is a layer 2 broadcast domain that is implemented as a traditional
VLAN in the physical infrastructure. This means that traffic between two VMs on two different
hosts but attached to the same VLAN-backed segment is carried over a VLAN between the two
hosts. The resulting constraint is that you must provision an appropriate VLAN in the physical
infrastructure for those two VMs to communicate at layer 2 over a VLAN-backed segment.

In an overlay-backed segment, traffic between two VMs on different hosts but attached to the
same overlay segment have their layer 2 traffic carried by a tunnel between the hosts. NSX-T
Data Center instantiates and maintains this IP tunnel without the need for any segment-specific
configuration in the physical infrastructure. As a result, the virtual network infrastructure is
decoupled from the physical network infrastructure. That is, you can create segments
dynamically without any configuration of the physical network infrastructure.

The default number of MAC addresses learned on an overlay-backed segment is 2048. The
default MAC limit per segment can be changed through the API field remote_overlay_mac_limit in
MacLearningSpec. For more information see the MacSwitchingProfile in the NSX-T Data Center API
Guide.

This chapter includes the following topics:

n Segment Profiles

n Add a Segment

n Types of DHCP on a Segment

n Configure DHCP on a Segment

n Configure DHCP Static Bindings on a Segment

n Layer 2 Bridging

n Add a Metadata Proxy Server


Segment Profiles
Segment profiles include Layer 2 networking configuration details for segments and segment
ports. NSX Manager supports several types of segment profiles.

The following types of segment profiles are available:

n QoS (Quality of Service)

n IP Discovery

n SpoofGuard

n Segment Security

n MAC Management

Note You cannot edit or delete the default segment profiles. If you require alternate settings
from what is in the default segment profile you can create a custom segment profile. By default
all custom segment profiles except the segment security profile will inherit the settings of the
appropriate default segment profile. For example, a custom IP discovery segment profile by
default will have the same settings as the default IP discovery segment profile.

Each default or custom segment profile has a unique identifier. You use this identifier to associate
the segment profile to a segment or a segment port.

A segment or segment port can be associated with only one segment profile of each type. You
cannot have, for example, two QoS segment profiles associated with a segment or segment port.

If you do not associate a segment profile when you create a segment, then the NSX Manager
associates a corresponding default system-defined segment profile. The children segment ports
inherit the default system-defined segment profile from the parent segment.

When you create or update a segment or segment port you can choose to associate either a
default or a custom segment profile. When the segment profile is associated or disassociated
from a segment the segment profile for the children segment ports is applied based on the
following criteria.

n If the parent segment has a profile associated with it, the child segment port inherits the
segment profile from the parent.

n If the parent segment does not have a segment profile associated with it, a default segment
profile is assigned to the segment and the segment port inherits that default segment profile.

n If you explicitly associate a custom profile with a segment port, then this custom profile
overrides the existing segment profile.

Note If you have associated a custom segment profile with a segment, but want to retain the
default segment profile for one of the child segment port, then you must make a copy of the
default segment profile and associate it with the specific segment port.


You cannot delete a custom segment profile if it is associated to a segment or a segment port.
You can find out whether any segments and segment ports are associated with the custom
segment profile by going to the Assigned To section of the Summary view and clicking on the
listed segments and segment ports.

Understanding QoS Segment Profile


QoS provides high-quality and dedicated network performance for preferred traffic that requires
high bandwidth. The QoS mechanism does this by prioritizing sufficient bandwidth, controlling
latency and jitter, and reducing data loss for preferred packets even when there is a network
congestion. This level of network service is provided by using the existing network resources
efficiently.

In this release, traffic shaping and traffic marking, namely CoS and DSCP, are supported. The Layer 2
Class of Service (CoS) allows you to specify priority for data packets when traffic is buffered in
the segment due to congestion. The Layer 3 Differentiated Services Code Point (DSCP) detects
packets based on their DSCP values. CoS is always applied to the data packet irrespective of the
trusted mode.

NSX-T Data Center either trusts the DSCP setting applied by a virtual machine or modifies and sets
the DSCP value at the segment level. In either case, the DSCP value is propagated to the outer IP
header of encapsulated frames. This enables the external physical network to prioritize the traffic
based on the DSCP setting on the external header. When DSCP is in the trusted mode, the DSCP
value is copied from the inner header. When in the untrusted mode, the DSCP value is not
preserved for the inner header.

Note DSCP settings work only on tunneled traffic. These settings do not apply to traffic inside
the same hypervisor.

You can use the QoS switching profile to configure the average ingress and egress bandwidth
values to set the transmit limit rate. The peak bandwidth rate is used to support the burst traffic
that a segment is allowed to carry, which helps prevent congestion on the northbound network links. These settings do
not guarantee the bandwidth but help limit the use of network bandwidth. The actual bandwidth
you will observe is determined by the link speed of the port or the values in the switching profile,
whichever is lower.

The QoS switching profile settings are applied to the segment and inherited by the child segment
port.

Create a QoS Segment Profile


You can define the DSCP value and configure the ingress and egress settings to create a custom
QoS switching profile.

Prerequisites

n Familiarize yourself with the QoS segment profile concept. See Understanding QoS
Segment Profile.


n Identify the network traffic you want to prioritize.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Segments > Segment Profiles.

3 Click Add Segment Profile and select QoS.

4 Complete the QoS switching profile details.

Option Description

Name Name of the profile.

Mode Select either a Trusted or Untrusted option from the Mode drop-down
menu.
When you select the Trusted mode the inner header DSCP value is applied
to the outer IP header for IP/IPv6 traffic. For non IP/IPv6 traffic, the outer IP
header takes the default value. Trusted mode is supported on an overlay-
based logical port. The default value is 0.
Untrusted mode is supported on overlay-based and VLAN-based logical
ports. For the overlay-based logical port, the DSCP value of the outbound IP
header is set to the configured value irrespective of the inner packet type
for the logical port. For the VLAN-based logical port, the DSCP value of the IP/
IPv6 packet is set to the configured value. The DSCP value range for
untrusted mode is 0 to 63.

Note DSCP settings work only on tunneled traffic. These settings do not
apply to traffic inside the same hypervisor.

Priority Set the DSCP priority value.
The priority values range from 0 to 63, where 0 has the highest priority.

Class of Service Set the CoS value.
CoS is supported on VLAN-based logical ports. CoS groups similar types of
traffic in the network and each type of traffic is treated as a class with its
own level of service priority. The lower priority traffic is slowed down or in
some cases dropped to provide better throughput for higher priority traffic.
CoS can also be configured for the VLAN ID with zero packet.
The CoS values range from 0 to 7, where 0 is the best effort service.

Ingress Set custom values for the outbound network traffic from the VM to the
logical network.
You can use the average bandwidth to reduce network congestion. The
peak bandwidth rate is used to support burst traffic, and the burst duration
is set in the burst size setting. You cannot guarantee the bandwidth.
However, you can use the setting to limit network bandwidth. The default
value of 0 disables ingress traffic shaping.
For example, when you set the average bandwidth for the logical switch to
30 Mbps, the policy limits the bandwidth. You can cap the burst traffic at 100
Mbps with a burst size of 20 bytes.


Option Description

Ingress Broadcast Set custom values for the outbound broadcast traffic from the VM to the
logical network.
The default value of 0 disables ingress broadcast traffic shaping.
For example, when you set the average bandwidth for a logical switch to 50
Kbps, the policy limits the bandwidth. You can cap the burst traffic at 400
Kbps with a burst size of 60 bytes.

Egress Set custom values for the inbound network traffic from the logical network
to the VM.
The default value of 0 disables egress traffic shaping.

If the ingress, ingress broadcast, and egress options are not configured, the default values
are used as protocol buffers.

5 Click Save.
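
A comparable QoS profile can be pushed through the Policy API. The shaper_configurations
structure shown here is an assumption based on the Policy QoS profile schema and should be
verified against the NSX-T Data Center API Guide; the bandwidth values mirror the ingress example
in the table above, and the profile ID is a placeholder.

import requests

NSX = "https://nsx-manager.example.com"   # placeholder
AUTH = ("admin", "password")               # placeholder

qos_profile = {
    "display_name": "qos-gold",
    "class_of_service": 5,                              # CoS value, 0-7
    "dscp": {"mode": "UNTRUSTED", "priority": 26},      # rewrite DSCP on the outer header
    "shaper_configurations": [
        {
            "resource_type": "IngressRateLimiter",
            "enabled": True,
            "average_bandwidth": 30,                    # Mbps, as in the example above
            "peak_bandwidth": 100,                      # Mbps
            "burst_size": 20,                           # bytes
        }
    ],
}

resp = requests.patch(
    f"{NSX}/policy/api/v1/infra/qos-profiles/qos-gold",
    auth=AUTH, verify=False, json=qos_profile,
)
resp.raise_for_status()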

Understanding IP Discovery Segment Profile


IP Discovery uses DHCP and DHCPv6 snooping, ARP (Address Resolution Protocol) snooping, ND
(Neighbor Discovery) snooping, and VM Tools to learn MAC and IP addresses.

The discovered MAC and IP addresses are used to achieve ARP/ND suppression, which
minimizes traffic between VMs connected to the same segment. The number of IPs in the
ARP/ND suppression cache for any given port is determined by the settings in the port's IP
Discovery profile. The relevant settings are ARP Binding Limit, ND Snooping Limit, Duplicate IP
Detection, ARP ND Binding Limit Timeout, and Trust on First Use (TOFU).

The discovered MAC and IP addresses are also used by the SpoofGuard and distributed firewall
(DFW) components. DFW uses the address bindings to determine the IP address of objects in
firewall rules.

DHCP/DHCPv6 snooping inspects the DHCP/DHCPv6 packets exchanged between the DHCP/
DHCPv6 client and server to learn the IP and MAC addresses.

ARP snooping inspects the outgoing ARP and GARP (gratuitous ARP) packets of a VM to learn
the IP and MAC addresses.

VM Tools is software that runs on an ESXi-hosted VM and can provide the VM's configuration
information including MAC and IP or IPv6 addresses. This IP discovery method is available for
VMs running on ESXi hosts only.

ND snooping is the IPv6 equivalent of ARP snooping. It inspects neighbor solicitation (NS) and
neighbor advertisement (NA) messages to learn the IP and MAC addresses.

Duplicate address detection checks whether a newly discovered IP address is already present on
the realized binding list for a different port. This check is performed for ports on the same
segment. If a duplicate address is detected, the newly discovered address is added to the
discovered list, but is not added to the realized binding list. All duplicate IPs have an associated


discovery timestamp. If the IP that is on the realized binding list is removed, either by adding it to
the ignore binding list or by disabling snooping, the duplicate IP with the oldest timestamp is
moved to the realized binding list. The duplicate address information is available through an API
call.

By default, the discovery methods ARP snooping and ND snooping operate in a mode called
trust on first use (TOFU). In TOFU mode, when an address is discovered and added to the
realized bindings list, that binding remains in the realized list forever. TOFU applies to the first 'n'
unique <IP, MAC, VLAN> bindings discovered using ARP/ND snooping, where 'n' is the binding
limit that you can configure. You can disable TOFU for ARP/ND snooping. The methods will then
operate in trust on every use (TOEU) mode. In TOEU mode, when an address is discovered, it is
added to the realized bindings list and when it is deleted or expired, it is removed from the
realized bindings list. DHCP snooping and VM Tools always operate in TOEU mode.

Note TOFU is not the same as SpoofGuard, and it does not block traffic in the same way as
SpoofGuard. For more information, see Understanding SpoofGuard Segment Profile.

For Linux VMs, the ARP flux problem might cause ARP snooping to obtain incorrect information.
The problem can be prevented with an ARP filter. For more information, see http://linux-ip.net/
html/ether-arp.html#ether-arp-flux.

For each port, NSX Manager maintains an ignore bindings list, which contains IP addresses that
cannot be bound to the port. If you navigate to Networking > Logical Switches > Ports in
Manager mode and select a port, you can add discovered bindings to the ignore bindings list.
You can also delete an existing discovered or realized binding by copying it to Ignore Bindings.

Create an IP Discovery Segment Profile


NSX-T Data Center has several default IP Discovery segment profiles. You can also create
additional ones.

Prerequisites

Familiarize yourself with the IP Discovery segment profile concepts. See Understanding IP
Discovery Segment Profile.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Segments > Segment Profiles.

3 Click Add Segment Profile and select IP Discovery.

4 Specify the IP Discovery segment profile details.

Option Description

Name Enter a name.

ARP Snooping For an IPv4 environment. Applicable if VMs have static IP addresses.


Option Description

ARP Binding Limit The maximum number of IPv4 IP addresses that can be bound to a port. The
minimum value allowed is 1 and the maximum is 256. The default is 1.

ARP ND Binding Limit Timeout The timeout value, in minutes, for IP addresses in the ARP/ND binding table
if TOFU is disabled. If an address times out, a newly discovered address
replaces it.

DHCP Snooping For an IPv4 environment. Applicable if VMs have IPv4 addresses.

DHCP Snooping - IPv6 For an IPv6 environment. Applicable if VMs have IPv6 addresses.

VM Tools Available for ESXi-hosted VMs only.

VM Tools - IPv6 Available for ESXi-hosted VMs only.

ND Snooping For an IPv6 environment. Applicable if VMs have static IP addresses.

ND Snooping Limit The maximum number of IPv6 addresses that can be bound to a port.

Trust on First Use Applicable to ARP and ND snooping.

Duplicate IP Detection For all snooping methods and both IPv4 and IPv6 environments.

5 Click Save.

Understanding SpoofGuard Segment Profile


SpoofGuard helps prevent a form of malicious attack called "web spoofing" or "phishing." A
SpoofGuard policy blocks traffic determined to be spoofed.

SpoofGuard is a tool that is designed to prevent virtual machines in your environment from
sending traffic with an IP address that they are not authorized to send traffic from. If a
virtual machine's IP address does not match the IP address on the corresponding logical port and
segment address binding in SpoofGuard, the virtual machine's vNIC is prevented from accessing
the network entirely. SpoofGuard can be configured at the port or segment level. There are
several reasons SpoofGuard might be used in your environment:

n Preventing a rogue virtual machine from assuming the IP address of an existing VM.

n Ensuring the IP addresses of virtual machines cannot be altered without intervention – in
some environments, it's preferable that virtual machines cannot alter their IP addresses
without proper change control review. SpoofGuard facilitates this by ensuring that the virtual
machine owner cannot simply alter the IP address and continue working unimpeded.

n Guaranteeing that distributed firewall (DFW) rules will not be inadvertently (or deliberately)
bypassed – for DFW rules created utilizing IP sets as sources or destinations, the possibility
always exists that a virtual machine could have its IP address forged in the packet header,
thereby bypassing the rules in question.

NSX-T Data Center SpoofGuard configuration covers the following:

n MAC SpoofGuard - authenticates MAC address of packet

n IP SpoofGuard - authenticates MAC and IP addresses of packet


n Dynamic Address Resolution Protocol (ARP) inspection, that is, ARP and Gratuitous Address
Resolution Protocol (GARP) SpoofGuard and Neighbor Discovery (ND) SpoofGuard validation
are all performed against the MAC source, IP source, and IP-MAC source mapping in the
ARP/GARP/ND payload.

At the port level, the allowed MAC/VLAN/IP whitelist is provided through the Address Bindings
property of the port. When the virtual machine sends traffic, it is dropped if its IP/MAC/VLAN
does not match the IP/MAC/VLAN properties of the port. Port level SpoofGuard deals with
traffic authentication, that is, whether the traffic is consistent with the VIF configuration.

At the segment level, the allowed MAC/VLAN/IP whitelist is provided through the Address
Bindings property of the segment. This is typically an allowed IP range/subnet for the segment
and the segment level SpoofGuard deals with traffic authorization.

Traffic must be permitted by both port level and segment level SpoofGuard before it is allowed
into the segment. Enabling or disabling port and segment level SpoofGuard is controlled using
the SpoofGuard segment profile.

Create a SpoofGuard Segment Profile


When SpoofGuard is configured, if the IP address of a virtual machine changes, traffic from the
virtual machine may be blocked until the corresponding configured port/segment address
bindings are updated with the new IP address.

Enable SpoofGuard for the port group(s) containing the guests. When enabled for each network
adapter, SpoofGuard inspects packets for the prescribed MAC and its corresponding IP address.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Segments > Segment Profiles.

3 Click Add Segment Profile and select Spoof Guard.

4 Enter a name.

5 To enable port level SpoofGuard, set Port Bindings to Enabled.

6 Click Save.
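
Because port level SpoofGuard enforces the Address Bindings of the port, automation usually pairs
a SpoofGuard profile with explicit bindings on the segment port. The sketch below assumes the
Policy API accepts an address_bindings list on a segment port; verify the path format and field
names against the NSX-T Data Center API Guide. The segment, port, and addresses are placeholders.

import requests

NSX = "https://nsx-manager.example.com"   # placeholder
AUTH = ("admin", "password")               # placeholder

port_update = {
    # Manual bindings override auto-discovered bindings and define what SpoofGuard allows.
    "address_bindings": [
        {"ip_address": "172.16.10.11", "mac_address": "00:50:56:aa:bb:cc"}
    ]
}

resp = requests.patch(
    f"{NSX}/policy/api/v1/infra/segments/web-segment/ports/web-vm-port",
    auth=AUTH, verify=False, json=port_update,
)
resp.raise_for_status()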

Understanding Segment Security Segment Profile


Segment security provides stateless Layer 2 and Layer 3 security by checking the ingress traffic
to the segment and dropping unauthorized packets sent from VMs by matching the IP address,
MAC address, and protocols to a set of allowed addresses and protocols. You can use segment
security to protect the segment integrity by filtering out malicious attacks from the VMs in the
network.


Note that the default segment security profile has the DHCP settings Server Block and Server
Block - IPv6 enabled. This means that a segment that uses the default segment security profile
will block traffic from a DHCP server to a DHCP client. If you want a segment that allows DHCP
server traffic, you must create a custom segment security profile for the segment.

Create a Segment Security Segment Profile


You can create a custom segment security segment profile with MAC destination addresses from
the allowed BPDU list and configure rate limiting.

Prerequisites

Familiarize yourself with the segment security segment profile concept. See Understanding
Segment Security Segment Profile.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Segments > Segment Profiles.

3 Click Add Segment Profile and select Segment Security.

4 Complete the segment security profile details.

Option Description

Name Name of the profile.

BPDU Filter Toggle the BPDU Filter button to enable BPDU filtering. Disabled by default.
When the BPDU filter is enabled, all of the traffic to BPDU destination MAC
address is blocked. The BPDU filter when enabled also disables STP on the
logical switch ports because these ports are not expected to take part in
STP.

BPDU Filter Allow List Click the destination MAC address from the BPDU destination MAC
addresses list to allow traffic to the permitted destination. You must enable
BPDU Filter to be able to select from this list.

DHCP Filter Toggle the Server Block button and Client Block button to enable DHCP
filtering. Both are disabled by default.
DHCP Server Block blocks traffic from a DHCP server to a DHCP client. Note
that it does not block traffic from a DHCP server to a DHCP relay agent.
DHCP Client Block prevents a VM from acquiring a DHCP IP address by
blocking DHCP requests.

DHCPv6 Filter Toggle the Server Block - IPv6 button and Client Block - IPv6 button to
enable DHCP filtering. Both are disabled by default.
DHCPv6 Server Block blocks traffic from a DHCPv6 server to a DHCPv6
client. Note that it does not block traffic from a DHCP server to a DHCP relay
agent. Packets whose UDP source port number is 547 are filtered.
DHCPv6 Client Block prevents a VM from acquiring a DHCP IP address by
blocking DHCP requests. Packets whose UDP source port number is 546 are
filtered.


Option Description

Block Non-IP Traffic Toggle the Block Non-IP Traffic button to allow only IPv4, IPv6, ARP, and
BPDU traffic.
The rest of the non-IP traffic is blocked. The permitted IPv4, IPv6, ARP,
GARP and BPDU traffic is based on other policies set in address binding and
SpoofGuard configuration.
By default, this option is disabled to allow non-IP traffic to be handled as
regular traffic.

RA Guard Toggle the RA Guard button to filter out ingress IPv6 router advertisements.
ICMPv6 type 134 packets are filtered out. This option is enabled by default.

Rate Limits Set a rate limit for broadcast and multicast traffic. This option is enabled by
default.
Rate limits can be used to protect the logical switch or VMs from events
such as broadcast storms.
To avoid any connectivity problems, the minimum rate limit value must be >=
10 pps.

5 Click Save.

Understanding MAC Discovery Segment Profile


The MAC management segment profile supports two functionalities: MAC learning and MAC
address change.

The MAC address change feature allows a VM to change its MAC address. A VM connected to a
port can run an administrative command to change the MAC address of its vNIC and still send
and receive traffic on that vNIC. This feature is supported on ESXi only and not on KVM. This
property is disabled by default.

MAC learning provides network connectivity to deployments where multiple MAC addresses are
configured behind one vNIC, for example, in a nested hypervisor deployment where an ESXi VM
runs on an ESXi host and multiple VMs run inside the ESXi VM. Without MAC learning, when the
ESXi VM's vNIC connects to a segment port, its MAC address is static. VMs running inside the
ESXi VM do not have network connectivity because their packets have different source MAC
addresses. With MAC learning, the vSwitch inspects the source MAC address of every packet
coming from the vNIC, learns the MAC address and allows the packet to go through. If a MAC
address that is learned is not used for a certain period of time, it is removed. This time period is
not configurable. The field MAC Learning Aging Time displays the pre-defined value, which is 600.

MAC learning also supports unknown unicast flooding. Normally, when a packet that is received
by a port has an unknown destination MAC address, the packet is dropped. With unknown
unicast flooding enabled, the port floods unknown unicast traffic to every port on the switch that
has MAC learning and unknown unicast flooding enabled. This property is enabled by default, but
only if MAC learning is enabled.


The number of MAC addresses that can be learned is configurable. The maximum value is 4096,
which is the default. You can also set the policy for when the limit is reached. The options are:

n Drop - Packets from an unknown source MAC address are dropped. Packets inbound to this
MAC address will be treated as unknown unicast. The port will receive the packets only if it
has unknown unicast flooding enabled.

n Allow - Packets from an unknown source MAC address are forwarded although the address
will not be learned. Packets inbound to this MAC address will be treated as unknown unicast.
The port will receive the packets only if it has unknown unicast flooding enabled.

If you enable MAC learning or MAC address change, to improve security, configure SpoofGuard
as well.

Create a MAC Discovery Segment Profile


You can create a MAC discovery segment profile to manage MAC addresses.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Segments > Segment Profiles.

3 Click Add Segment Profile and select MAC Discovery.

4 Complete the MAC discovery profile details.

Option Description

Name Name of the profile.

MAC Change Enable or disable the MAC address change feature. The default is disabled.

MAC Learning Enable or disable the MAC learning feature. The default is disabled.

MAC Limit Policy Select Allow or Drop. The default is Allow. This option is available if you
enable MAC learning.

Unknown Unicast Flooding Enable or disable the unknown unicast flooding feature. The default is
enabled. This option is available if you enable MAC learning.

MAC Limit Set the maximum number of MAC addresses. The default is 4096. This
option is available if you enable MAC learning.

MAC Learning Aging Time For information only. This option is not configurable. The pre-defined value
is 600.

5 Click Save.
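
The same MAC discovery settings can be expressed through the Policy API, for example when
preparing profiles for nested-hypervisor labs. The field names below are assumptions based on the
Policy MAC discovery profile schema; confirm them against the NSX-T Data Center API Guide. The
profile ID is a placeholder.

import requests

NSX = "https://nsx-manager.example.com"   # placeholder
AUTH = ("admin", "password")               # placeholder

mac_profile = {
    "display_name": "mac-nested-esx",
    "mac_change_enabled": False,
    "mac_learning_enabled": True,               # needed for nested hypervisor deployments
    "unknown_unicast_flooding_enabled": True,
    "mac_limit": 4096,                          # maximum learned MAC addresses (default)
    "mac_limit_policy": "ALLOW",                # or "DROP" when the limit is reached
}

resp = requests.patch(
    f"{NSX}/policy/api/v1/infra/mac-discovery-profiles/mac-nested-esx",
    auth=AUTH, verify=False, json=mac_profile,
)
resp.raise_for_status()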

Add a Segment
You can add two kinds of segments: overlay-backed segments and VLAN-backed segments.


Segments are created as part of a transport zone. There are two types of transport zones: VLAN
transport zones and overlay transport zones. A segment created in a VLAN transport zone is a
VLAN-backed segment, and a segment created in an overlay transport zone is an overlay-
backed segment.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Segments.

3 Click Add Segment.

4 Enter a name for the segment.

5 Select the type of connectivity for the segment.

Connectivity Description

None Select this option when you do not want to connect the segment to any
upstream gateway (tier-0 or tier-1). Typically, you want to add a standalone
segment in the following scenarios:
n When you want to create a local testing environment for users that are
running workloads on the same subnet.
n When east-west connectivity with users on the other subnets is not
necessary.
n When north-south connectivity to users outside the data center is not
necessary.
n When you want to configure layer 2 bridging or guest VLAN tagging.

Tier-1 Select this option when you want to connect the segment to a tier-1
gateway.

Tier-0 Select this option when you want to connect the segment to a tier-0
gateway.

Note You can change the connectivity of a gateway-connected segment from one gateway
to another gateway (same or different gateway type). In addition, you can change the
connectivity of segment from "None" to a tier-0 or tier-1 gateway. The segment connectivity
changes are permitted only when the gateways and the connected segments are in the same
transport zone. However, if the segment has DHCP configured on it, some restrictions and
caveats apply on changing the segment connectivity. For more information, see Scenarios:
Impact of Changing Segment Connectivity on DHCP.

6 Enter the Gateway IP address of the subnet in a CIDR format. A segment can contain an IPv4
subnet, or an IPv6 subnet, or both.

n If a segment is not connected to a gateway, subnet is optional.

n If a segment is connected either to a tier-1 or tier-0 gateway, subnet is required.


Subnets of one segment must not overlap with the subnets of other segments in your
network. A segment is always associated with a single virtual network identifier (VNI)
regardless of whether it is configured with one subnet, two subnets, or no subnet.

7 Select a transport zone, which can be an overlay or a VLAN.

To create a VLAN-backed segment, add the segment in a VLAN transport zone. Similarly, to
create an overlay-backed segment, add the segment in an overlay transport zone.

8 (Optional) To configure DHCP on the segment, click Set DHCP Config.

For detailed steps on configuring DHCP Settings and DHCP Options, see Configure DHCP on
a Segment.

9 If the transport zone is of type VLAN, specify a list of VLAN IDs. If the transport zone is of
type Overlay, and you want to support layer 2 bridging or guest VLAN tagging, specify a list
of VLAN IDs or VLAN ranges.

10 (Optional) Select an uplink teaming policy for the segment.

This drop-down menu displays the named teaming policies, if you have added them in the
VLAN transport zone. If no uplink teaming policy is selected, the default teaming policy is
used.

n Named teaming policies are not applicable to overlay segments. Overlay segments
always follow the default teaming policy.

n For VLAN-backed segments, you have the flexibility to override the default teaming
policy with a selected named teaming policy. This capability is provided so that you can
steer the infrastructure traffic from the host to specific VLAN segments in the VLAN
transport zone. Before adding the VLAN segment, ensure that the named teaming policy
names are added in the VLAN transport zone.

11 (Optional) Enter the fully qualified domain name.

DHCPv4 server and DHCPv4 static bindings on the segment automatically inherit the domain
name from the segment configuration as the Domain Name option.

12 If you want to use Layer 2 VPN to extend the segment, click the L2 VPN text box and select
an L2 VPN server or client session.

You can select more than one.

13 In VPN Tunnel ID, enter a unique value that is used to identify the segment.

14 (Optional) In the Metadata Proxy field, click Set to attach or detach a metadata proxy to this
segment.

To attach a metadata proxy, select an existing metadata proxy. To detach a metadata proxy,
deselect the metadata proxy that is selected.

15 Click Save.


16 To add segment ports, click Yes when prompted if you want to continue configuring the
segment.

a Click Ports and Set.

b Click Add Segment Port.

c Enter a port name.

d For ID, enter the VIF UUID of the VM or server that connects to this port.

e Select a type: Child, or Static.

Leave this text box blank except for use cases such as containers or VMware HCX. If this
port is for a container in a VM, select Child. If this port is for a bare metal container or
server, select Static.

f Enter a context ID.

Enter the parent VIF ID if Type is Child, or transport node ID if Type is Static.

g Enter a traffic tag.

Enter the VLAN ID in container and other use cases.

h Select an address allocation method: IP Pool, MAC Pool, Both, or None.

i Specify tags.

j Apply address binding by specifying the IP (IPv4 address, IPv6 address, or IPv6 subnet)
and MAC address of the logical port to which you want to apply address binding. For
example, for IPv6, 2001::/64 is an IPv6 subnet, 2001::1 is a host IP, whereas 2001::1/64 is an
invalid input. You can also specify a VLAN ID.

Manual address bindings, if specified, override the auto discovered address bindings.

k Select segment profiles for this port.

17 To select segment profiles, click Segment Profiles .

18 (Optional) To bind a static IP address to the MAC address of a VM on the segment, expand
DHCP Static Bindings, and then click Set.

Both DHCP for IPv4 and DHCP for IPv6 static bindings are supported. For detailed steps on
configuring static binding settings, see Configure DHCP Static Bindings on a Segment.

19 Click Save.
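
An overlay segment with the same basic properties can be created programmatically with one
Policy API call. In this sketch the tier-1 path, transport zone path, and subnet are placeholders,
and the field names should be checked against the NSX-T Data Center API Guide.

import requests

NSX = "https://nsx-manager.example.com"   # placeholder
AUTH = ("admin", "password")               # placeholder

segment = {
    "display_name": "web-segment",
    "connectivity_path": "/infra/tier-1s/t1-web",   # connect the segment to a tier-1 gateway
    # Overlay transport zone path; the site/enforcement-point portion is environment specific.
    "transport_zone_path": "/infra/sites/default/enforcement-points/default/"
                           "transport-zones/<overlay-tz-uuid>",
    "subnets": [{"gateway_address": "172.16.10.1/24"}],   # gateway IP in CIDR format (step 6)
}

resp = requests.patch(
    f"{NSX}/policy/api/v1/infra/segments/web-segment",
    auth=AUTH, verify=False, json=segment,
)
resp.raise_for_status()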

Types of DHCP on a Segment


NSX-T Data Center supports three types of DHCP on a segment: DHCP local server, Gateway
DHCP, and DHCP relay.

DHCP Local Server


As the name suggests, it is a DHCP server that is local to the segment and not available to the
other segments in the network. A local DHCP server provides a dynamic IP assignment
service only to the VMs that are attached to the segment. The IP address of a local DHCP
server must be in the subnet that is configured on the segment.

Gateway DHCP

It is analogous to a central DHCP service that dynamically assigns IP and other network
configuration to the VMs on all the segments that are connected to the gateway and using
Gateway DHCP. Depending on the type of DHCP profile you attach to the gateway, you can
configure a Gateway DHCP server or a Gateway DHCP relay on the segment. By default,
segments that are connected to a tier-1 or tier-0 gateway use Gateway DHCP. The IP address
of a Gateway DHCP server can be different from the subnets that are configured in the
segments.

DHCP Relay

It is a DHCP relay service that is local to the segment and not available to the other segments
in the network. The DHCP relay service relays the DHCP requests of the VMs that are
attached to the segment to the remote DHCP servers. The remote DHCP servers can be in
any subnet, outside the SDDC, or in the physical network.

You can configure DHCP on each segment regardless of whether the segment is connected to a
gateway. Both DHCP for IPv4 (DHCPv4) and DHCP for IPv6 (DHCPv6) servers are supported.

For a gateway-connected segment, all the three DHCP types are supported. However, Gateway
DHCP is supported only in the IPv4 subnet of a segment.

For a standalone segment that is not connected to a gateway, only local DHCP server is
supported.

The following restrictions apply to DHCPv6 server configuration on an IPv6 subnet:

n Segments configured with an IPv6 subnet can have either a local DHCPv6 server or a
DHCPv6 relay. Gateway DHCPv6 is not supported.

n DHCPv6 Options (classless static routes and generic options) are not supported.
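
To illustrate the local DHCP server type, the subnet-level DHCP configuration of a segment might
look like the following in the Policy API. The SegmentDhcpV4Config structure and its fields are
assumptions to verify against the NSX-T Data Center API Guide; the addresses, range, and DHCP
profile path are placeholders.

import requests

NSX = "https://nsx-manager.example.com"   # placeholder
AUTH = ("admin", "password")               # placeholder

segment = {
    "display_name": "app-segment",
    # DHCP server profile that supplies the Edge cluster hosting the local server.
    "dhcp_config_path": "/infra/dhcp-server-configs/dhcp-profile-1",
    "subnets": [
        {
            "gateway_address": "172.16.20.1/24",
            "dhcp_ranges": ["172.16.20.100-172.16.20.200"],
            "dhcp_config": {
                "resource_type": "SegmentDhcpV4Config",
                "server_address": "172.16.20.2/24",   # must belong to the segment's subnet
            },
        }
    ],
}

resp = requests.patch(
    f"{NSX}/policy/api/v1/infra/segments/app-segment",
    auth=AUTH, verify=False, json=segment,
)
resp.raise_for_status()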

Configure DHCP on a Segment


You can configure DHCP on each segment regardless of whether the segment is connected to a
gateway. Both DHCP for IPv4 (DHCPv4) and DHCP for IPv6 (DHCPv6) servers are supported.

For a gateway-connected segment, all the following DHCP types are supported:

n DHCP local server

n DHCP relay

n Gateway DHCP (supported only for IPv4 subnets in a segment)

For a standalone segment that is not connected to a gateway, only local DHCP server is
supported.


The following restrictions apply to DHCPv6 server configuration on an IPv6 subnet:

n Segments configured with an IPv6 subnet can have either a local DHCPv6 server or a
DHCPv6 relay. Gateway DHCPv6 is not supported.

n DHCPv6 Options (classless static routes and generic options) are not supported.

Prerequisites

n DHCP profile is added in the network.

n If you are configuring Gateway DHCP on a segment, a DHCP profile must be attached to the
directly connected tier-1 or tier-0 gateway.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Segments.

3 Either add or edit a segment.

n To configure a new segment, click Add Segment.

n To modify the properties of an existing segment, click the vertical ellipses next to the
name of an existing segment, and then click Edit.

4 If you are adding a segment, ensure that the following segment properties are specified:

n Segment name

n Connectivity

n Transport zone

n Subnets (required for a gateway-connected segment, optional for a standalone segment)

If you are editing an existing segment, go directly to the next step.

5 Click Set DHCP Config.


6 In the DHCP Type drop-down menu, select any one of the following types.

On a segment, IPv6 and IPv4 subnets always use the same DHCP type. Mixed configuration is
not supported.

DHCP Type Description

DHCP Local Server Select this option to create a local DHCP server that has an IP address on
the segment.
As the name suggests, it is a DHCP server that is local to the segment and
not available to the other segments in the network. A local DHCP server
provides a dynamic IP assignment service only to the VMs that are attached
to the segment.
You can configure all DHCP settings, including DHCP ranges, DHCP Options,
and static bindings on the segment.
For standalone segments, this type is selected by default.

DHCP Relay Select this option to relay the DHCP client requests to the external DHCP
servers. The external DHCP servers can be in any subnet, outside the SDDC,
or in the physical network.
The DHCP relay service is local to the segment and not available to the
other segments in the network.
When you use a DHCP relay on a segment, you cannot configure DHCP
Settings and DHCP Options. The UI does not prevent you from configuring
DHCP static bindings. However, in NSX-T Data Center 3.0, static binding with
a DHCP relay is an unsupported configuration.

Gateway DHCP This DHCP type is analogous to a central DHCP service that dynamically
assigns IP and other network configuration to the VMs on all the segments
that are connected to the gateway and using Gateway DHCP. Depending on
the type of DHCP profile you attach to the gateway, you can configure a
Gateway DHCP server or a Gateway DHCP relay on the segment.
By default, segments that are connected to a tier-1 or tier-0 gateway use
Gateway DHCP. If needed, you can choose to configure a DHCP local server
or a DHCP relay on the segment.
To configure Gateway DHCP on a segment, a DHCP profile must be
attached to the gateway.
If the IPv4 subnet uses Gateway DHCP, you cannot configure DHCPv6 in the
IPv6 subnet of the same segment because Gateway DHCPv6 is not
supported. In this case, the IPv6 subnet cannot support any DHCPv6 server
configuration, including the IPv6 static bindings.

Note In NSX-T Data Center 3.0 and 3.0.1, after the segment is created and the DHCP service
is in use, you cannot change the DHCP type of a gateway-connected segment. Starting in
version 3.0.2, you can change the DHCP type of a gateway-connected segment.


7 In the DHCP Profile drop-down menu, select the name of the DHCP server profile or DHCP
relay profile.

n If the segment is connected to a gateway, Gateway DHCP server is selected by default.


The DHCP profile that is attached to the gateway is autoselected. The name and server IP
address are fetched automatically from that DHCP profile and displayed in a read-only
mode.

When a segment is using a Gateway DHCP server, ensure that an edge cluster is selected
either in the gateway, or DHCP server profile, or both. If an edge cluster is unavailable in
either the profile or the gateway, an error message is displayed when you save the
segment.

n If you are configuring a local DHCP server or a DHCP relay on the segment, you must
select a DHCP profile from the drop-down menu. If no profiles are available in the drop-
down menu, click the vertical ellipses icon and create a DHCP profile. After the profile is
created, it is automatically attached to the segment.

When a segment is using a local DHCP server, ensure that the DHCP server profile
contains an edge cluster. If an edge cluster is unavailable in the profile, an error message
is displayed when you save the segment.

Note In NSX-T Data Center 3.0 and 3.0.1, after the segment is created and the DHCP service
is in use, you cannot change the DHCP profile of the segment. Starting in version 3.0.2, you
can change the DHCP profile of the segment that uses a DHCP local server or a DHCP relay.
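If you prefer to create the DHCP server profile outside the UI, you can also add it through the Policy API. The following request is a minimal sketch; the profile ID, display name, edge cluster ID, and lease time are placeholder values that you must replace with values from your environment. See the NSX-T Data Center API Guide for the full list of DhcpServerConfig properties.

PATCH https://<nsx-manager-ip-address>/policy/api/v1/infra/dhcp-server-configs/dhcp-profile-01

{
    "resource_type": "DhcpServerConfig",
    "display_name": "dhcp-profile-01",
    "edge_cluster_path": "/infra/sites/default/enforcement-points/default/edge-clusters/<edge-cluster-id>",
    "lease_time": 86400
}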

8 Click the IPv4 Server or IPv6 Server tab.

If the segment contains an IPv4 subnet and an IPv6 subnet, you can configure both DHCPv4
and DHCPv6 servers on the segment.


9 Configure the DHCP settings.

a Enable the DHCP configuration settings on the subnet by clicking the DHCP Config toggle
button.

b In the DHCP Server Address text box, enter the IP addresses.

n If you are configuring a DHCP local server, a server IP address is required. A maximum
of two server IP addresses are supported: one IPv4 address and one IPv6 address.
For an IPv4 address, the prefix length must be <= 30, and for an IPv6 address, the
prefix length must be <= 126. The server IP addresses must belong to the subnets that
you have specified in this segment. The DHCP server IP addresses must not overlap with
the IP addresses in the DHCP ranges and DHCP static bindings. The DHCP server
profile might contain server IP addresses, but these IP addresses are ignored when
you configure a local DHCP server on the segment.

After a local DHCP server is created, you can edit the server IP addresses on the Set
DHCP Config page. However, the new IP addresses must belong to the same subnet
that is configured in the segment.

n If you are configuring a DHCP relay, this step is not applicable. The server IP
addresses are fetched automatically from the DHCP relay profile and displayed below
the profile name.

n If you are configuring a Gateway DHCP server, this text box is not editable. The server
IP addresses are fetched automatically from the DHCP profile that is attached to the
connected gateway.

Remember, the Gateway DHCP server IP addresses in the DHCP server profile can be
different from the subnet that is configured in the segment. In this case, the Gateway
DHCP server connects with the IPv4 subnet of the segment through an internal relay
service, which is autocreated when the Gateway DHCP server is created. The internal
relay service uses any one IP address from the subnet of the Gateway DHCP server IP
address. The IP address used by the internal relay service acts as the default gateway
on the Gateway DHCP server to communicate with the IPv4 subnet of the segment.

After a Gateway DHCP server is created, you can edit the server IP addresses in the
DHCP profile of the gateway. However, you cannot change the DHCP profile that is
attached to the gateway.


c (Optional) In the DHCP Ranges text box, enter one or more IP address ranges.

Both IP ranges and IP addresses are allowed. IPv4 addresses must be in a CIDR /32
format, and IPv6 addresses must be in a CIDR /128 format. You can also enter an IP
address as a range by entering the same IP address in the start and the end of the range.
For example, 172.16.10.10-172.16.10.10.

Ensure that DHCP ranges meet the following requirements:

n IP addresses in the DHCP ranges must belong to the subnet that is configured on the
segment. That is, DHCP ranges cannot contain IP addresses from multiple subnets.

n IP ranges must not overlap with the DHCP Server IP address and the DHCP static
binding IP addresses.

n IP ranges in the DHCP IP pool must not overlap each other.

n Number of IP addresses in any DHCP range must not exceed 65536.

Note The following types of IPv6 addresses are not permitted in DHCP for IPv6 ranges:

n Link Local Unicast addresses (FE80::/64)

n Multicast addresses (FF00::/8)

n Unspecified address (0:0:0:0:0:0:0:0)

n Address with all F (F:F:F:F:F:F:F:F)

Caution After a DHCP server is created, you can update existing ranges, append new IP
ranges, or delete existing ranges. However, it is a good practice to avoid deleting,
shrinking, or expanding the existing IP ranges. For example, do not try to combine
multiple smaller IP ranges to create a single large IP range. You might accidentally miss
including IP addresses, which are already leased to the DHCP clients from the larger IP
range. Therefore, when you modify existing ranges after the DHCP service is running, it
might cause the DHCP clients to lose network connection and result in a temporary traffic
disruption.

d (Optional) (Only for DHCPv6): In the Excluded Ranges text box, enter IPv6 addresses or a
range of IPv6 addresses that you want to exclude for dynamic IP assignment to DHCPv6
clients.

In IPv6 networks, the DHCP ranges can be large. Sometimes, you might want to reserve
certain IPv6 addresses, or multiple small ranges of IPv6 addresses from the large DHCP
range for static binding. In such situations, you can specify excluded ranges.

e (Optional) Edit the lease time in seconds.

Default value is 86400. Valid range of values is 60–4294967295. The lease time
configured in the DHCP server configuration takes precedence over the lease time that
you specified in the DHCP profile.


f (Optional) (Only for DHCPv6): Enter the preferred time in seconds.

Preferred time is the length of time that a valid IP address is preferred. When the
preferred time expires, the IP address becomes deprecated. If no value is entered,
preferred time is autocalculated as (lease time * 0.8).

Lease time must be > preferred time.

Valid range of values is 60–4294967295. Default is 69120.

g (Optional) Enter the IP address of the domain name server (DNS) to use for name
resolution. A maximum of two DNS servers are permitted.

When not specified, no DNS is assigned to the DHCP client. DNS server IP addresses must
belong to the same subnet as the subnet's gateway IP address.

h (Optional) (Only for DHCPv6): Enter one or more domain names.

DHCPv4 configuration automatically fetches the domain name that you specified in the
segment configuration.

i (Optional) (Only for DHCPv6): Enter the IP address of the Simple Network Time Protocol
(SNTP) server. A maximum of two SNTP servers are permitted.

In NSX-T Data Center 3.0, DHCPv6 server does not support NTP.

DHCPv4 server supports only NTP. To add an NTP server, click Options, and add the
Generic Option (Code 42 - NTP Servers).

10 (Optional) Click Options, and specify the Classless Static Routes (Option 121) and Generic
Options.

In NSX-T Data Center 3.0, DHCP Options for IPv6 are not supported.

n Each classless static route option in DHCP for IPv4 can have multiple routes with the same
destination. Each route includes a destination subnet, subnet mask, next hop router. For
information about classless static routes in DHCPv4, see RFC 3442 specifications. You can
add a maximum of 127 classless static routes on a DHCPv4 server.

n For adding Generic Options, select the code of the option and enter a value of the option.
For binary values, the value must be in a base-64 encoded format.
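For example, if you manage segments through the Policy API, the DHCP configuration of a subnet, including a classless static route and a generic NTP option (Code 42), can be expressed as in the following sketch. The segment ID, addresses, routes, and option values are examples only, and the field names follow the 3.0 Policy API as documented for SegmentDhcpV4Config; verify the exact schema in the NSX-T Data Center API Guide before using it.

PATCH https://<nsx-manager-ip-address>/policy/api/v1/infra/segments/<segment-id>

{
    "subnets": [
        {
            "gateway_address": "172.16.10.1/24",
            "dhcp_ranges": [ "172.16.10.100-172.16.10.200" ],
            "dhcp_config": {
                "resource_type": "SegmentDhcpV4Config",
                "server_address": "172.16.10.2/24",
                "lease_time": 86400,
                "dns_servers": [ "172.16.10.3" ],
                "options": {
                    "option_121": {
                        "static_routes": [
                            { "network": "10.10.20.0/24", "next_hop": "172.16.10.1" }
                        ]
                    },
                    "others": [
                        { "code": 42, "values": [ "172.16.10.4" ] }
                    ]
                }
            }
        }
    ]
}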

11 Click Apply to save the DHCP configuration, and then click Save to save the segment
configuration.

What to do next

n After a segment has DHCP configured on it, some restrictions and caveats apply on changing
the segment connectivity. For more information, see Scenarios: Impact of Changing Segment
Connectivity on DHCP.


n When a DHCP server profile is attached to a segment that uses a DHCP local server, the
DHCP service is created in the edge cluster that you specified in the DHCP profile. However, if
the segment uses a Gateway DHCP server, the edge cluster in which the DHCP service is
created depends on a combination of several factors. For a detailed information about how
an edge cluster is selected for DHCP service, see Scenarios: Selection of Edge Cluster for
DHCP Service.

Configure DHCP Static Bindings on a Segment


You can configure static bindings on both DHCP for IPv4 and DHCP for IPv6 servers.

In a typical network environment, you have VMs that run services, such as FTP, email servers,
application servers, and so on. You might not want the IP address of these VMs to change in your
network. In this case, you can bind a static IP address to the MAC address of each VM (DHCP
client). The static IP address must not overlap with the DHCP IP ranges and the DHCP Server IP
addresses.

DHCP static bindings are allowed when you are configuring either a local DHCP server or a
Gateway DHCP server on the segment. The UI does not prevent you from configuring DHCP
static bindings when the segment is using a DHCP relay. However, in NSX-T Data Center 3.0,
static binding with a DHCP relay is an unsupported configuration.

Prerequisites

The segment on which you want to configure DHCP static bindings must be saved in the
network.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Segments.

3 Next to the segment that you want to edit, click the vertical ellipses, and click Edit.

4 Expand the DHCP Static Bindings section, and next to DHCP Static Bindings, click Set.

By default, the IPv4 Static Binding page is displayed. To bind IPv6 addresses, make sure that
you first click the IPv6 Static Binding tab, and then proceed to the next step.


5 Click Add Static Binding.

a Specify the DHCP static binding options.

The following table describes the static binding options that are common to DHCP for
IPv4 and DHCP for IPv6 servers.

Option Description

Name Enter a unique display name to identify each static binding. The name
must be limited to 255 characters.

IP Address n Required for IPv4 static binding. Enter a single IPv4 address to bind
to the MAC address of the client.
n Optional for IPv6 static binding. Enter a single Global Unicast IPv6
address to bind to the MAC address of the client.
When no IPv6 address is specified for static binding, Stateless Address
Autoconfiguration (SLAAC) is used to auto-assign an IPv6 address to the
DHCPv6 clients. Also, you can use Stateless DHCP to assign other DHCP
configuration options, such as DNS, domain names, and so on, to the
DHCPv6 clients.
For more information about Stateless DHCP for IPv6, read the RFC 3736
specifications.
The following types of IPv6 addresses are not permitted in IPv6 static
binding:
n Link Local Unicast addresses (FE80::/64 )
n Multicast IPv6 addresses (FF00::/8)
n Unspecified address (0:0:0:0:0:0:0:0)
n Address with all F (F:F:F:F:F:F:F:F)
The static IP address must belong to the subnet (if any) that is configured
on the segment.

MAC Address Required. Enter the MAC address of the DHCP client to which you want
to bind a static IP address.
The following validations apply to MAC address in static bindings:
n MAC address must be unique in all the static bindings on a segment
that uses a local DHCP server.
n MAC address must be unique in all the static bindings across all the
segments that are connected to the gateway and which use the
Gateway DHCP server.
For example, consider that you have 10 segments connected to a tier-1
gateway. You use a Gateway DHCP server for four segments (Segment1
to Segment4), and a local DHCP server for the remaining six segments
(Segment5 to Segment10). Assume that you have a total of 20 static
bindings across all the four segments (Segment1 to Segment4), which
use the Gateway DHCP server. In addition, you have five static bindings
in each of the other six segments (Segment5 to Segment10), which use a
local DHCP server. In this example:
n The MAC address in each of the 20 static bindings must be unique
across all the segments (Segment1 to Segment4) that use the
Gateway DHCP server.
n The MAC address in the five static bindings must be unique on each
segment (Segment5 to Segment10) that use a local DHCP server.


Option Description

Lease Time Optional. Enter the amount of time in seconds for which the IP address is
bound to the DHCP client. When the lease time expires, the IP address
becomes invalid and the DHCP server can assign the address to other
DHCP clients on the segment.
Valid range of values is 60–4294967295. Default is 86400.

Description Optional. Enter a description for the static binding.

Tags Optional. Add tags to label static bindings so that you can quickly search
or filter bindings, troubleshoot and trace binding-related issues, or do
other tasks.
For more information about adding tags and use cases for tagging
objects, see Tags.

The following table describes the static binding options that are available only in a DHCP
for IPv4 server.

DHCP For IPv4 Option Description

Gateway Address Enter the default gateway IP address that the DHCP for IPv4 server must
provide to the DHCP client.

Host Name Enter the host name of the DHCP for IPv4 client so that the DHCPv4
server can always bind the client (host) with the same IPv4 address each
time.
The host name must be limited to 63 characters.
The following validations apply to host name in static bindings:
n Host name must be unique in all the static bindings on a segment that
uses a local DHCP server.
n Host name must be unique in all the static bindings across all the
segments that are connected to the gateway and which use the
Gateway DHCP server.
For example, consider that you have 10 segments connected to a tier-1
gateway. You use a Gateway DHCP server for four segments (Segment1
to Segment4), and a local DHCP server for the remaining six segments
(Segment5 to Segment10). Assume that you have a total of 20 static
bindings across all the four segments (Segment1 to Segment4), which
use the Gateway DHCP server. In addition, you have five static bindings
in each of the other six segments (Segment5 to Segment10), which use a
local DHCP server. In this example:
n The host name in each of the 20 static bindings must be unique
across all the segments (Segment1 to Segment4) that use the
Gateway DHCP server.
n The host name in the five static bindings must be unique on each
segment (Segment5 to Segment10) that use a local DHCP server.

DHCP Options Optional. Click Set to configure DHCP for IPv4 Classless Static Routes and
other Generic Options.

Some additional notes for DHCPv4 static binding:

n IPv4 static bindings automatically inherit the domain name that you configured on the
segment.


n To specify DNS servers in the static binding configuration, add the Generic Option
(Code 6 - DNS Servers).

n To synchronize the system time on DHCPv4 clients with DHCPv4 servers, use NTP.
DHCP for IPv4 server does not support SNTP.

n If DHCP Options are not specified in the static bindings, the DHCP Options from the
DHCPv4 server on the segment are automatically inherited in the static bindings.
However, if you have explicitly added one or more DHCP Options in the static
bindings, these DHCP Options are not autoinherited from the DHCPv4 server on the
segment.
The following table describes the static binding options that are available only in a DHCP
for IPv6 server.

DHCP For IPv6 Option Description

DNS Servers Optional. Enter a maximum of two domain name servers to use for the
name resolution.
When not specified, no DNS is assigned to the DHCP client.

SNTP Servers Optional. Enter a maximum of two Simple Network Time Protocol (SNTP)
servers. The clients use these SNTP servers to synchronize their system
time to that of the standard time servers.

Preferred Time Optional. Enter the length of time that a valid IP address is preferred.
When the preferred time expires, the IP address becomes deprecated. If
no value is entered, preferred time is auto-calculated as (lease time *
0.8).
Lease time must be > preferred time.
Valid range of values is 60–4294967295. Default is 69120.

Domain Names Optional. Enter the domain name to provide to the DHCPv6 clients.
Multiple domain names are supported in an IPv6 static binding.
When not specified, no domain name is assigned to the DHCP clients.

Some additional notes for DHCPv6 static binding:

n Gateway IP address configuration is unavailable in IPv6 static bindings. An IPv6 client
learns about its first-hop router from the ICMPv6 router advertisement (RA) message.

n NTP is not supported in DHCPv6 static bindings.

b Click Save after configuring each static binding.
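If you automate static bindings, an equivalent DHCPv4 binding can be created through the Policy API. The request below is a minimal sketch; the segment ID, binding ID, MAC address, IP address, and host name are placeholder values, and the full DhcpV4StaticBindingConfig schema is described in the NSX-T Data Center API Guide.

PUT https://<nsx-manager-ip-address>/policy/api/v1/infra/segments/<segment-id>/dhcp-static-binding-configs/app-vm-binding

{
    "resource_type": "DhcpV4StaticBindingConfig",
    "display_name": "app-vm-binding",
    "mac_address": "00:50:56:aa:bb:cc",
    "ip_address": "172.16.10.50",
    "host_name": "app-vm-01",
    "gateway_address": "172.16.10.1",
    "lease_time": 86400
}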

Layer 2 Bridging
With layer 2 bridging, you can have a connection to a VLAN-backed port group or a device, such
as a gateway, that resides outside of your NSX-T Data Center deployment. A layer 2 bridge is
also useful in a migration scenario, in which you need to split a subnet across physical and virtual
workloads.


A layer 2 bridge requires an Edge cluster and an Edge Bridge profile. An Edge Bridge profile
specifies which Edge cluster to use for bridging and which Edge transport node acts as the
primary and backup bridge. When you configure a segment, you can specify an Edge bridge
profile to enable layer 2 bridging.

Create an Edge Bridge Profile


An Edge bridge profile makes an NSX Edge cluster capable of providing layer 2 bridging to a
segment.

When you create an edge bridge profile, if you set the failover mode to be preemptive and a
failover occurs, the standby node becomes the active node. After the failed node recovers, it
becomes the active node again. If you set the failover mode to be non-preemptive and a failover
occurs, the standby node becomes the active node. After the failed node recovers, it becomes
the standby node. You can manually set the standby edge node to be the active node by running
the CLI command set l2bridge-port <uuid> state active on the standby edge node. The
command can only be applied in non-preemptive mode. Otherwise, there will be an error. In non-
preemptive mode, the command will trigger an HA failover when applied on a standby node, and
it will be ignored when applied on an active node. For more information, see the NSX-T Data
Center Command-Line Interface Reference.
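For example, to promote the standby Edge node of a bridge configured in non-preemptive mode, log in to the NSX CLI of that standby node and run the documented command with the UUID of its bridge port. The UUID shown here is a placeholder; retrieve the actual value from your environment.

set l2bridge-port 529a1b3c-0000-0000-0000-0123456789ab state active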

Prerequisites

n Verify that you have an NSX Edge cluster with two NSX Edge transport nodes.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Segments > Edge Bridge Profiles.

3 Click Add Edge Bridge Profile.

4 Enter a name for the Edge bridge profile and optionally a description.

5 Select an NSX Edge cluster.

6 Select a primary node.

7 Select a backup node.

8 Select a failover mode.

The options are Preemptive and Non-Preemptive.

9 Click Save.

What to do next

You can now associate a segment with the bridge profile.


Configure Edge-Based Bridging


When you configure edge-based bridging, after creating an edge bridge profile for an edge
cluster, some additional configurations are required for an Edge node running in a VM.

There are three configuration options.

Option 1: Configure Promiscuous Mode


This option is suitable if the Edge node is deployed on vSphere with either vDS (vSphere
Distributed Switch) or vSS (vSphere Standard Switch) configured.

n Set promiscuous mode on the portgroup.

n Allow forged transmit on the portgroup.

n Run the following command to enable reverse filter on the ESXi host where the Edge VM is
running:

esxcli system settings advanced set -o /Net/ReversePathFwdCheckPromisc -i 1

Then disable and enable promiscuous mode on the portgroup with the following steps:

n Edit the portgroup's settings.

n Disable promiscuous mode and save the settings.

n Edit the portgroup's settings again.

n Enable promiscuous mode and save the settings.

n Do not have other port groups in promiscuous mode on the same host sharing the same set
of VLANs.

n The active and standby Edge VMs should be on different hosts. If they are on the same host
the throughput might be reduced because VLAN traffic needs to be forwarded to both VMs
in promiscuous mode.
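If you want to confirm that the reverse filter option is active on the host, you can list its current value with the following esxcli command; an Int Value of 1 in the output indicates that the option is set.

esxcli system settings advanced list -o /Net/ReversePathFwdCheckPromisc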

Option 2: Configure MAC Learning


If the Edge node is deployed on a host with NSX-T installed, it can connect to a VLAN logical
switch or segment. A VLAN logical switch must have a MAC Management switching profile with
MAC Learning enabled. Similarly, a VLAN segment must have a MAC Discovery profile with MAC
Learning enabled.
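As a sketch, a MAC discovery profile with MAC Learning enabled can be created through the Policy API as shown below and then applied to the VLAN segment that the Edge connects to. The profile ID and display name are examples; see the NSX-T Data Center API Guide for the complete MacDiscoveryProfile schema.

PATCH https://<nsx-manager-ip-address>/policy/api/v1/infra/mac-discovery-profiles/mac-learning-enabled

{
    "resource_type": "MacDiscoveryProfile",
    "display_name": "mac-learning-enabled",
    "mac_learning_enabled": true
}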

Option 3: Configure a Sink Port


This option is suitable if the Edge node is deployed on vSphere with vDS configured and you do
not want to enable promiscuous mode. It is more involved than option 1.

1 Retrieve the port number for the trunk vNIC that you want to configure as a sink port.

a Log in to the vSphere Web Client, and navigate to Home > Networking.


b Click the distributed port group to which the NSX Edge trunk interface is connected, and
click Ports to view the ports and connected VMs. Note the port number associated with
the trunk interface. Use this port number when fetching and updating opaque data.

2 Retrieve the dvsUuid value for the vSphere Distributed Switch.

a Log in to the vCenter Mob UI at https://<vc-ip>/mob .

b Click content.

c Click the link associated with the rootFolder (for example: group-d1 (Datacenters)).

d Click the link associated with the childEntity (for example: datacenter-1).

e Click the link associated with the networkFolder (for example: group-n6).

f Click the DVS name link for the vSphere distributed switch associated with the NSX Edges
(for example: dvs-1 (Mgmt_VDS)).

g Copy the value of the uuid string. Use this value for dvsUuid when fetching and updating
opaque data.

3 Verify if opaque data exists for the specified port.

a Go to https://<vc-ip>/mob/?moid=DVSManager&vmodl=1.

b Click fetchOpaqueDataEx.

c In the selectionSet value box paste the following XML input:

<selectionSet xsi:type="DVPortSelection">
<dvsUuid>c2 1d 11 50 6a 7c 77 68-e6 ba ce 6a 1d 96 2a 15</dvsUuid> <!-- example dvsUuid --
>
<portKey>393</portKey> <!-- example port number -->
</selectionSet>

Use the port number and dvsUuid value that you retrieved for the NSX Edge trunk
interface.

d Set isRuntime to false.

e Click Invoke Method. If the result shows values for vim.dvs.OpaqueData.ConfigInfo,
opaque data is already set; use the edit operation when you set the sink port. If the
value for vim.dvs.OpaqueData.ConfigInfo is empty, use the add operation when you set the
sink port.

4 Configure the sink port in the vCenter managed object browser (MOB).

a Go to https://<vc-ip>/mob/?moid=DVSManager&vmodl=1.

b Click updateOpaqueDataEx.


c In the selectionSet value box paste the following XML input. For example,

<selectionSet xsi:type="DVPortSelection">
<dvsUuid>c2 1d 11 50 6a 7c 77 68-e6 ba ce 6a 1d 96 2a 15</dvsUuid> <!-- example dvsUuid --
>
<portKey>393</portKey> <!-- example port number -->
</selectionSet>

Use the dvsUuid value that you retrieved from the vCenter MOB.

d On the opaqueDataSpec value box paste one of the following XML inputs.

Use this input to enable a SINK port if opaque data is not set (operation is set to add):

<opaqueDataSpec>
<operation>add</operation>
<opaqueData>
<key>com.vmware.etherswitch.port.extraEthFRP</key>
<opaqueData
xsi:type="vmodl.Binary">AAABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAA=</opaqueData>
</opaqueData>
</opaqueDataSpec>

Use this input to enable a SINK port if opaque data is already set (operation is set to edit):

<opaqueDataSpec>
<operation>edit</operation>
<opaqueData>
<key>com.vmware.etherswitch.port.extraEthFRP</key>
<opaqueData
xsi:type="vmodl.Binary">AAABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAA=</opaqueData>
</opaqueData>
</opaqueDataSpec>

Use this input to disable a SINK port:

<opaqueDataSpec>
<operation>edit</operation>
<opaqueData>
<key>com.vmware.etherswitch.port.extraEthFRP</key>
<opaqueData
xsi:type="vmodl.Binary">AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAA=</opaqueData>
</opaqueData>
</opaqueDataSpec>


e Set isRuntime to false.

f Click Invoke Method.

Create a Layer 2 Bridge-Backed Segment


When you have VMs that are connected to the NSX-T Data Center overlay, you can configure a
bridge-backed segment to provide layer 2 connectivity with other devices or VMs that are
outside of your NSX-T Data Center deployment.

Prerequisites

n Verify that you have an Edge bridge profile.

n At least one ESXi or KVM host to serve as a regular transport node. This node hosts the
VMs that require connectivity with devices outside of an NSX-T Data Center deployment.

n A VM or another end device outside of the NSX-T Data Center deployment. This end device
must be attached to a VLAN port matching the VLAN ID of the bridge-backed segment.

n One segment in an overlay transport zone to serve as the bridge-backed segment.

Procedure

1 From a browser, log in to an NSX Manager at https://<nsx-mgr>.

2 Select Networking > Segments

3 Click the menu icon (three dots) of the overlay segment that you want to configure layer 2
bridging on and select Edit.

4 In the Edge Bridges field, click Set.

5 Click Add Edge Bridge.

You can add one or more Edge bridge profiles.

6 Select an Edge bridge profile.

7 Select a transport zone.

8 Enter a VLAN ID or a VLAN trunk specification (specify VLAN ranges and not individual
VLANs).

9 (Optional) Select a teaming policy.

10 Click Add.

Results

You can test the functionality of the bridge by sending a ping from a VM attached to the
segment to a device that is external to the NSX-T deployment.
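If you configure the attachment through the Policy API instead of the UI, the bridge association is expressed on the segment itself. The following request is a sketch only; the segment ID, bridge profile path, VLAN ID, and VLAN transport zone path are placeholder values, and the exact BridgeProfileConfig properties are listed in the NSX-T Data Center API Guide.

PATCH https://<nsx-manager-ip-address>/policy/api/v1/infra/segments/<segment-id>

{
    "bridge_profiles": [
        {
            "bridge_profile_path": "/infra/sites/default/enforcement-points/default/edge-bridge-profiles/<bridge-profile-id>",
            "vlan_transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<vlan-transport-zone-id>",
            "vlan_ids": [ "120" ]
        }
    ]
}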

Add a Metadata Proxy Server


A metadata proxy server enables VMs to retrieve metadata from an OpenStack Nova API server.


Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Segments > Metadata Proxies.

3 Click Add Metadata Proxy.

4 Enter a name for the metadata proxy server.

5 In the Server Address field, enter the URL and port for the Nova server.

The valid port range is 3000 - 9000.

6 Select an Edge cluster.

7 (Optional) Select Edge nodes.

If you select any Edge node, you cannot enable Standby Relocation in the next step.

8 (Optional) Enable Standby Relocation.

Standby relocation means that if the Edge node running the metadata proxy fails, the
metadata proxy will run on a standby Edge node. You can only enable standby relocation if
you do not select any Edge node.

9 In the Shared Signature Secret field, enter the secret that the metadata proxy will use to
access the Nova server.

10 (Optional) Select a certificate for encrypted communication with the Nova server.

11 (Optional) Select a cryptographic protocol.

The options are TLSv1, TLSv1.1, and TLSv1.2. TLSv1.1 and TLSv1.2 are supported by default.
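You can also add the metadata proxy through the Policy API. The request below is a minimal sketch; the proxy ID, Nova server URL, shared secret, and edge cluster ID are placeholder values, and the complete MetadataProxyConfig schema is in the NSX-T Data Center API Guide.

PATCH https://<nsx-manager-ip-address>/policy/api/v1/infra/metadata-proxies/mdproxy-01

{
    "resource_type": "MetadataProxyConfig",
    "display_name": "mdproxy-01",
    "server_address": "http://10.20.30.40:8775",
    "secret": "openstack-shared-secret",
    "edge_cluster_path": "/infra/sites/default/enforcement-points/default/edge-clusters/<edge-cluster-id>"
}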

Host Switches
A host switch managed object is a virtual network switch that provides networking services to
the various hosts in the network. It is instantiated on every host that participates in NSX-T
networking.

The following host switches are supported in NSX-T:

n NSX-T Virtual Distributed Switch: NSX-T introduces a host switch that normalizes connectivity
among various compute domains, including multiple VMware vCenter Server instances, KVM,
containers, and other off premises or cloud implementations.

NSX-T Virtual Distributed Switch can be configured based on the performance required in
your environment:

n Standard: Configured for regular workloads, where normal traffic throughput is expected
on the workloads.

n Enhanced: Configured for telecom workloads, where high traffic throughput is expected
on the workloads.

n vSphere Distributed Virtual Switch: Provides centralized management and monitoring of the
networking configuration of all hosts that are associated with the switch in a vCenter Server
environment.

This chapter includes the following topics:

n Managing NSX-T on a vSphere Distributed Switch

n Enhanced Networking Stack

n Migrate Host Switch to vSphere Distributed Switch

n NSX Virtual Distributed Switch

Managing NSX-T on a vSphere Distributed Switch


You can configure and run NSX-T on a vSphere Distributed Switch (VDS) switch.


In NSX-T 3.0, a host transport node can be prepared by installing NSX-T on a VDS switch. To
prepare an NSX Edge VM as a transport node, you can only use an N-VDS switch. However, you
can connect an NSX Edge VM to any of the supported switches (VSS, VDS, or N-VDS) depending
on the topology in your network.

After you prepare a cluster of transport node hosts with VDS as the host switch, you can do the
following:

n Manage NSX-T transport nodes on a VDS switch.

n Realize a segment created in NSX-T as an NSX Distributed Virtual port group in vCenter
Server.

n Migrate VMs between vSphere Distributed Virtual port groups and NSX Distributed Virtual
port groups.

n Send traffic between VMs running on both these types of port groups.

Configuring a vSphere Distributed Switch


When a transport node is configured on a VDS host switch, some network parameters can only
be configured in vCenter Server.

The following requirements must be met to install NSX-T on a VDS host switch:

n vCenter Server 7.0 or a later version

n ESXi 7.0 or a later version

The created VDS switch can be configured to centrally manage networking for NSX-T hosts.

Configuring a VDS switch for NSX-T networking requires objects to be configured on NSX-T and
in vCenter Server.

n In vSphere:

n Create a VDS switch.

n Set MTU to at least 1600

n Add ESXi hosts to the switch. These hosts are later prepared as NSX-T transport
nodes.

n Assign uplinks to physical NICs.

n In NSX-T:

n When configuring a transport node, map uplinks created in NSX-T uplink profile with
uplinks in VDS.

For more details on preparing a host transport node on a VDS switch, see the NSX-T Data
Center Installation Guide.
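For reference, the NSX-T uplink profile defines the named uplinks that you later map to the VDS uplinks when you configure the transport node. A minimal sketch of such a profile created through the Manager API is shown below; the display name, uplink name, teaming policy, and transport VLAN are example values.

POST https://<nsx-manager-ip-address>/api/v1/host-switch-profiles

{
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "vds-uplink-profile",
    "teaming": {
        "policy": "FAILOVER_ORDER",
        "active_list": [
            { "uplink_name": "nsxuplink1", "uplink_type": "PNIC" }
        ]
    },
    "transport_vlan": 0
}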

The following parameters can only be configured in a vCenter Server on a VDS backed host
switch:


Configuration: MTU
n VDS: In vCenter Server, set an MTU value on the switch. Note: A VDS switch must have an
MTU of 1600 or higher. In vCenter Server, select the VDS, click Actions → Settings → Edit
Settings.
n NSX-T: Any MTU value set in an NSX-T uplink profile is overridden.
n Description: For a host transport node that is prepared using VDS as the host switch, the MTU
value must be set on the VDS switch in vCenter Server.

Configuration: Uplinks/LAGs
n VDS: In vCenter Server, configure uplinks/LAGs on the VDS switch. In vCenter Server, select
the VDS, click Actions → Settings → Edit Settings.
n NSX-T: When a transport node is prepared, the teaming policy in NSX-T is mapped to the
uplinks/LAGs configured on the VDS switch. Note: For a transport node prepared on an N-VDS
switch, the teaming policy is mapped to physical NICs.
n Description: For a host transport node that is prepared using VDS as the host switch, the
uplinks or LAGs are configured on the VDS switch. During configuration, NSX-T requires a
teaming policy to be configured for the transport node. This teaming policy is mapped to the
uplinks/LAGs configured on the VDS switch.

Configuration: NIOC
n VDS: Configure in vCenter Server. In vCenter Server, select the VDS, click Actions → Settings
→ Edit Settings.
n NSX-T: NIOC configuration is not available when a host transport node is prepared using a
VDS switch.
n Description: For a host transport node that is prepared using VDS as the host switch, the NIOC
profile can only be configured in vCenter Server.

Configuration: Link Layer Discovery Protocol (LLDP)
n VDS: Configure in vCenter Server. In vCenter Server, select the VDS, click Actions → Settings
→ Edit Settings.
n NSX-T: LLDP configuration is not available when a host transport node is prepared using a
VDS switch.
n Description: For a host transport node that is prepared using VDS as the host switch, the LLDP
profile can only be configured in vCenter Server.

Configuration: Add or Manage Hosts
n VDS: Manage in vCenter Server. In vCenter Server, go to Networking → VDS Switch → Add
and Manage Hosts.
n NSX-T: Hosts are prepared as transport nodes in NSX-T.
n Description: Before preparing a transport node using a VDS switch, that node must be added
to the VDS switch in vCenter Server.

Note NIOC profiles, Link Layer Discovery Protocol (LLDP) profile, and Link Aggregation Group
(LAG) for these virtual machines are managed by VDS switches and not by NSX-T. As a vSphere
administrator, configure these parameters from vCenter Server UI or by calling VDS API
commands.

After preparing a host transport node with VDS as a host switch, the host switch type displays
VDS as the host switch. It displays the configured uplink profile in NSX-T and the associated
transport zones.


In vCenter Server, the VDS switch used to prepare NSX-T hosts is created as an NSX Switch.

Managing NSX Distributed Virtual Port Groups


A transport node prepared with VDS as the host switch ensures that a segment created in NSX-T
is realized both as an NSX Distributed Virtual port group on the VDS switch and as a segment in
NSX-T.

In earlier versions of NSX-T Data Center, a segment created in NSX-T is represented as an
opaque network in vCenter Server. When running NSX-T on a VDS switch, a segment is
represented as an NSX Distributed Virtual port group.

Any changes to the segments on the NSX-T network are synchronized in vCenter Server.



Any segment created in NSX-T is realized in vCenter Server as an NSX-T object. vCenter
Server displays the following details related to NSX-T segments:

n NSX Manager

n Virtual network identifier of the segment

n Transport zone

n Attached virtual machines

The port binding for the segment is set to Ephemeral by default. Switching parameters that are
set for the switch in NSX-T cannot be edited in vCenter Server, and the other way around.

Important In vCenter Server, a realized NSX Distributed Virtual port group does not require a
unique name to differentiate it from other port groups on a VDS switch. So, multiple NSX
Distributed Virtual port groups can have the same name. Any vSphere automation that relies on
port group names might result in errors.

In vCenter Server, you can perform these actions on an NSX Distributed Virtual Port Group:

n Add VMkernel Adapters.

n Migrate VMs to Another Network.

However, NSX-T objects related to an NSX Distributed Virtual port group can only be edited in
NSX Manager. You can edit these segment properties:

n Replication Mode for the segment

n VLAN trunk ID used by the segment

n Switching Profiles (for example, Port Mirroring)

n Ports created on the segment

For details on how to configure a vSphere Distributed Virtual port group, refer to the vSphere
Networking Guide.

NSX-T Cluster Prepared with VDS


An example of an NSX-T cluster prepared using VDS as the host switch.


In the sample topology diagram, two VDS switches are configured to manage NSX-T traffic and
vSphere traffic.

VDS-1 and VDS-2 are configured to manage networking for ESXi hosts from Cluster-1, Cluster-2,
and Cluster-3. Cluster-1 is prepared to run only vSphere traffic, whereas, Cluster-2 and Cluster-3
are prepared as host transport nodes with these VDS switches.

In vCenter Server, uplink port groups on VDS switches are assigned physical NICs. In the
topology, uplinks on VDS-1 and VDS-2 are assigned to physical NICs. Depending on the hardware
configuration of the ESXi host, plan how many physical NICs to assign to the switch. In addition
to assigning uplinks to the VDS switch, MTU, NIOC, LLDP, and LAG profiles are configured on the
VDS switches.

After VDS switches are configured, in NSX-T, add an uplink profile.

When preparing a cluster by applying a transport node profile (on a VDS switch), the uplinks
from the transport node profile are mapped to VDS uplinks. In contrast, when preparing a cluster
on an N-VDS switch, the uplinks from the transport node profile are directly mapped to physical
NICs.

After preparing the clusters, ESXi hosts in Cluster-2 and Cluster-3 manage NSX-T traffic, while
Cluster-1 manages vSphere traffic.

APIs to Configure vSphere Distributed Switch


API calls to some of the NSX-T Data Center and vSphere Distributed Switch commands are
updated to support NSX-T Data Center networking on vSphere Distributed Switch.


API Changes for vSphere Distributed Switch


For detailed information related to API calls, see the NSX-T Data Center API Guide.

Note Configuration done using API commands is also possible from the vCenter Server user
interface. For more information on creating an NSX-T Data Center transport node using vSphere
Distributed Switch as the host switch, refer to the Configure a Managed Host Transport Node topic
in the NSX-T Data Center Installation Guide.


Changes in API Commands

For each item, the following list gives the API used with an NSX-T Virtual Distributed Switch (N-VDS), the API used with NSX-T on a vSphere Distributed Switch (VDS), and notes for NSX-T on a vSphere Distributed Switch.

Create a Transport Node for a Discovered Node
n NSX-T Virtual Distributed Switch (N-VDS): /api/v1/fabric/discovered-nodes/<external-id/discovered-node-id>?action=create_transport_node. The request body is similar to the VDS example below, with "host_switch_name" set to the N-VDS name (for example, "nvds-1") and "host_switch_type" set to "NVDS".
n NSX-T on vSphere Distributed Switch (VDS): /api/v1/fabric/discovered-nodes/<external-id/discovered-node-id>?action=create_transport_node with "host_switch_type" set to "VDS". Example request body:

{
    "node_id": "d7ef478b-752c-400a-b5f0-207c04567e5d",
    "host_switch_spec": {
        "host_switches": [
            {
                "host_switch_name": "vds-1",
                "host_switch_id": "50 2b 92 54 e0 80 d8 d1-ee ab 8d a6 7b fd f9 4b",
                "host_switch_type": "VDS",
                "host_switch_mode": "STANDARD",
                "host_switch_profile_ids": [
                    {
                        "key": "UplinkHostSwitchProfile",
                        "value": "159353ae-c572-4aca-9469-9582480a7467"
                    }
                ],
                "pnics": [],
                "uplinks": [
                    {
                        "vds_uplink_name": "Uplink 2",
                        "uplink_name": "nsxuplink1"
                    }
                ],
                "is_migrate_pnics": false,
                "ip_assignment_spec": {
                    "resource_type": "AssignedByDhcp"
                },
                "cpu_config": [],
                "transport_zone_endpoints": [
                    {
                        "transport_zone_id": "06ba5326-67ac-4f2c-9953-a8c5d326b51e",
                        "transport_zone_profile_ids": [
                            {
                                "resource_type": "BfdHealthMonitoringProfile",
                                "profile_id": "52035bb3-ab02-4a08-9884-18631312e50a"
                            }
                        ]
                    }
                ],
                "vmk_install_migration": [],
                "pnics_uninstall_migration": [],
                "vmk_uninstall_migration": [],
                "not_ready": false
            }
        ],
        "resource_type": "StandardHostSwitchSpec"
    },
    "transport_zone_endpoints": [],
    "maintenance_mode": "DISABLED",
    "is_overridden": false,
    "resource_type": "TransportNode",
    "display_name": "TestTN"
}

n Notes for NSX-T on vSphere Distributed Switch:
n "host_switch_name": "vds-1": Is not an administrator-entered switch name. The host switch name field is selected from the populated list of vSphere Distributed Switches created in vSphere.
n "host_switch_id": Is the UUID of the vSphere Distributed Switch object. The corresponding API in vSphere is vim.DistributedVirtualSwitch.config.uuid.
n "vds_uplink_name": An uplink created in vSphere Distributed Switch, mapping uplinks to physical NICs.
n "uplink_name": An uplink created in NSX-T that maps uplinks on N-VDS to the uplinks defined in vds_uplink_name.
n "is_migrate_pnics": false: By default, migration of physical NICs when using vSphere Distributed Switch is not supported.
n "transport_zone_endpoints": Not supported when the host switch type is vSphere Distributed Switch. This field is required when the host switch type is N-VDS. Transport zone endpoint IDs correspond to the host switch it is associated with.
For more details on /api/v1/fabric/discovered-nodes/<external-id/discovered-node-id>?action=create_transport_node, refer to the NSX-T Data Center API Guide.

VM Configuration
n N-VDS: vim.vm.device.VirtualEthernetCard.OpaqueNetworkBackingInfo
n VDS: vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo
n Notes: vSphere Distributed Switch: As a vSphere administrator, ensure that the BackingType parameter is set to NSX. Note: The VNIC BackingType defaults to DistributedVirtualPortBackingInfo when the BackingType is set to OpaqueNetworkBackingInfo.

VMkernel NIC
n N-VDS: vim.host.VirtualNic.OpaqueNetworkSpec
n VDS: vim.dvs.DistributedVirtualPort
n Notes: N-VDS: As an NSX-T administrator, set values for the OpaqueNetworkId and OpaqueNetworkType parameters. NSX-T on vSphere Distributed Switch: As a vSphere administrator, set values for the SwitchUUID and portgroupKey parameters. The BackingType of the DVPG must be NSX.

Physical NIC to Uplink Mapping
n N-VDS: /api/v1/transport-node-profiles, /api/v1/transport-nodes
n VDS: API: vim.host.NetworkSystem:networkSystem.updateNetworkConfig. Property: vim.host.NetworkConfig.proxySwitch
n Notes: To map physical NICs to uplinks for vSphere Distributed Switch by calling the API, set the proxySwitch property.

MTU
n N-VDS: /api/v1/host-switch-profiles
n VDS: API: vim.dvs.VmwareDistributedVirtualSwitch.reconfigure. Property: VmwareDistributedVirtualSwitch.ConfigSpec.maxMtu
n Notes: To configure an MTU value for vSphere Distributed Switch by calling the API, set the maxMtu property. Note: MTU values defined in uplink profiles in NSX-T are not applied to the host switch.

LAG
n N-VDS: /api/v1/host-switch-profiles
n VDS: API: vim.dvs.VmwareDistributedVirtualSwitch.updateLacpGroupConfig. Property: vim.dvs.VmwareDistributedVirtualSwitch.LacpGroupSpec
n Notes: To configure a LAG profile for vSphere Distributed Switch by calling the API, set the LacpGroupSpec property.

NIOC
n N-VDS: /api/v1/transport-node-profiles, /api/v1/transport-nodes
n VDS: API: vim.dvs.VmwareDistributedVirtualSwitch.reconfigure. Property: vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec.infrastructureTrafficResourceConfig
n Notes: To configure an NIOC profile for vSphere Distributed Switch by calling the API, set the infrastructureTrafficResourceConfig property.

LLDP
n N-VDS: /api/v1/transport-node-profiles, /api/v1/transport-nodes
n VDS: API: vim.dvs.VmwareDistributedVirtualSwitch.reconfigure. Property: vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec.linkDiscoveryProtocolConfig
n Notes: To configure an LLDP profile for vSphere Distributed Switch by calling the API, set the linkDiscoveryProtocolConfig property.

Feature Support in a vSphere Distributed Switch Enabled to Support NSX-T Data Center

Comparison of features supported by a VDS switch of a version earlier than 7.0 and VDS version
7.0 or later (NSX-T Data Center enabled).

IPFIX and Port Mirroring


An NSX-T transport node prepared with a VDS switch supports IPFIX and port mirroring.

See Port Mirroring on a vSphere Distributed Switch.

See IPFIX Monitoring on a vSphere Distributed Switch.

SR-IOV support
SR-IOV is supported on a vSphere Distributed Switch but not on an NSX Virtual Distributed Switch.

Feature NSX Virtual Distributed Switch vSphere Distributed Switch

SR-IOV No Yes (vSphere 7.0 and later)

Stateless Cluster Host Profile Support


Feature: Host Profile Stateless
n NSX Virtual Distributed Switch: Yes
n vSphere Distributed Switch: Yes (vSphere 7.0 and later). No when VMkernel adapters are
connected to an NSX port group on the vSphere Distributed Switch.


Distributed Resource Scheduler Support


Source Host    Destination Host    DRS (NIOC Configured)    vSphere

vSphere Distributed Switch-A vSphere Distributed Switch-B No No

Opaque Network (N-VDS-A) Opaque Network (N-VDS-B) Yes 6.7

vSphere Distributed Switch Opaque Network (N-VDS) Yes 7.0

vSphere Distributed Switch-A vSphere Distributed Switch-A Yes 7.0

Opaque Network (N-VDS) vSphere Distributed Switch No No

vMotion Support
vMotion between source vSphere Distributed Switch and destination vSphere Distributed Switch.
Both VDS switches are enabled to support NSX-T Data Center.

Source / VDS    Destination / VDS    Compute vMotion    Storage vMotion

vSphere Distributed Switch-A vSphere Distributed Switch-A Yes Yes


(vCenter Server -A) (vCenter Server-A)

vSphere Distributed Switch-A vSphere Distributed Switch-B Yes Yes


(vCenter Server -A) (vCenter Server -A)

vSphere Distributed Switch-A vSphere Distributed Switch-B Yes Yes


(vCenter Server -A) (vCenter Server -B)

Segment-A (vCenter Server -A) Segment-B (vCenter Server-A) No No

Segment-A (vCenter Server -A) Segment-B (vCenter Server -B) No No

Transport Zone-A Transport Zone-B No No

NSX-T Data Center-A NSX-T Data Center-B No No

vMotion between vSphere Distributed Switch (NSX-T Data Center enabled) and NSX Virtual
Distributed Switch

Source / VDS    Destination / NSX Virtual Distributed Switch    Compute vMotion    Storage vMotion

vCenter Server-A vCenter Server-A Yes Yes

vCenter Server-A vCenter Server-B Yes Yes

Segment-A (vCenter Server- Segment-B (vCenter No No


A) Server-A)

Segment-A (vCenter Server- Segment-B (vCenter No No


A) Server-B)

Transport Zone-A Transport Zone-B No No

NSX-T Data Center-A NSX-T Data Center-B No No

vMotion between vSphere Distributed Switch (NSX-T Data Center enabled) and vSphere
Standard Switch or vSphere Distributed Switch


Source / VDS    Destination / NSX Virtual Distributed Switch    Compute vMotion    Storage vMotion

vCenter Server-A vCenter Server-A Yes Yes

vCenter Server-A vCenter Server-B Yes Yes

Segment-A (vCenter Server-A) Segment-B (vCenter Server-A) No No

Segment-A (vCenter Server-A) Segment-B (vCenter Server-B) No No

Transport Zone-A Transport Zone-B No No

NSX-T Data Center-A NSX-T Data Center-B No No

Enhanced Networking Stack


Both VDS and NSX Virtual Distributed Switches support all features of the enhanced networking
stack.

LACP
n VDS does not support LACP in Active mode.

n NSX Virtual Distributed Switch supports LACP in Active mode.

Scale Supported in vSphere 7.0


Parameter NSX Virtual Distributed Switch

Logical Switch n NSX Distributed Virtual port groups (in vCenter Server) support 10000 X
N, where N is the number of VDS switches in vCenter Server.
n NSX-T Data Center supports 10000 segments.

Relationship between NSX Distributed Virtual port groups and Hostd memory on the host.

NSX Distributed Virtual Port Groups Minimum Hostd Memory Supported VMs

5000 600 MB 191

10000 1000 MB 409

15000 1500 MB 682

Enhanced Networking Stack


Enhanced data path is a networking stack mode, which when configured provides superior
network performance. It is primarily targeted at NFV workloads, which require the performance
benefits provided by this mode.

The N-VDS switch can be configured in the enhanced data path mode only on an ESXi host. ENS
also supports traffic flowing through Edge VMs. In the enhanced data path mode, you can
configure overlay traffic and VLAN traffic.


Automatically Assign ENS Logical Cores


Automatically assign logical cores to vNICs such that dedicated logical cores manage the
incoming traffic to and outgoing traffic from vNICs.

With the N-VDS switch configured in the enhanced datapath mode, if a single logical core is
associated to a vNIC, then that logical core processes bidirectional traffic coming into or going
out of a vNIC. When multiple logical cores are configured, the host automatically determines
which logical core must process a vNIC's traffic.

Assign logical cores to vNICs based on one of these parameters.

n vNIC-count: The host assumes that transmission of incoming or outgoing traffic for a vNIC
direction requires the same amount of CPU resources. Each logical core is assigned the same
number of vNICs based on the available pool of logical cores. It is the default mode. The
vNIC-count mode is reliable, but is not optimal for asymmetric traffic.

n CPU-usage: Host predicts the CPU usage to transmit incoming or outgoing traffic at each
vNIC direction by using internal statistics. Based on the usage of CPU to transmit traffic, host
changes the logical core assignments to balance load among logical cores. The CPU usage
mode is more optimal than vNIC-count, but unreliable when traffic is not steady.

In CPU usage mode, if the traffic transmitted changes frequently, then the predicted CPU
resources required and vNIC assignment might also change frequently. Too frequent assignment
changes might cause packet drops.

If the traffic patterns are symmetric among vNICs, the vNIC-count option provides reliable
behavior, which is less vulnerable to frequent changes. However, if the traffic patterns are
asymmetric, vNIC-count might result in packet drops since it does not distinguish the traffic
difference among vNICs.

In vNIC-count mode, it is recommended to configure an appropriate number of logical cores so
that each logical core is assigned the same number of vNICs. If the number of vNICs associated
with each logical core is different, CPU assignment is unfair and performance is not deterministic.

When a vNIC is connected or disconnected or when a logical core is added or removed, hosts
automatically detect the changes and rebalance.

Procedure

u To switch from one mode to another mode, run the following command.

set ens lcore-assignment-mode <host-switch-name> <ens-lc-mode>

Where, <ens-lc-mode> can be set to the mode vNIC-count or cpu-usage.

vNIC-count is vNIC/Direction count-based logical core assignment.

cpu-usage is CPU usage-based logical core assignment.
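For example, to switch a host switch named nvds-1 (a placeholder name used here) to CPU usage-based assignment, run:

set ens lcore-assignment-mode nvds-1 cpu-usage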


Configure Guest Inter-VLAN Routing


On overlay networks, NSX-T supports routing of inter-VLAN traffic on an L3 domain. During
routing, virtual distributed router (VDR) uses VLAN ID to route packets between VLAN subnets.

Inter-VLAN routing overcomes the limitation of 10 vNICs that can be used per VM. NSX-T
supporting inter-VLAN routing ensures that many VLAN subinterfaces can be created on the
vNIC and consumed for different networking services. For example, one vNIC of a VM can be
divided into several subinterfaces. Each subinterface belongs to a subnet, which can host a
networking service such as SNMP or DHCP. With inter-VLAN routing, for example, a subinterface
on VLAN-10 can reach a subinterface on the same VLAN or on any other VLAN.

Each vNIC on a VM is connected to the N-VDS through the parent logical port, which manages
untagged packets.

To create a subinterface on the Enhanced N-VDS switch, create a child port with an associated
VIF by using the API call described in the procedure. The subinterface tagged with a
VLAN ID is associated to a new logical switch, for example, VLAN10 is attached to logical switch
LS-VLAN-10. All subinterfaces of VLAN10 have to be attached to LS-VLAN-10. This 1–1 mapping
between the VLAN ID of the subinterface and its associated logical switch is an important
prerequisite. For example, adding a child port with VLAN20 to logical switch LS-VLAN-10
mapped to VLAN-10 makes routing of packets between VLANs non-functional. Such
configuration errors make the inter-VLAN routing non-functional.

Prerequisites

n Before you associate a VLAN subinterface to a logical switch, ensure that the logical switch
does not have any other associations with another VLAN subinterface. If there is a mismatch,
inter-VLAN routing on overlay networks might not work.

n Ensure that hosts run ESXi v 6.7 U2 or later versions.

Procedure

1 To create subinterfaces for a vNIC, ensure that the vNIC is updated to a parent port. Make
the following REST API call.

PUT https://<nsx-mgr-ip>/api/v1/logical-ports/<Logical-Port UUID-of-the-vNIC>


{
"resource_type" : "LogicalPort",
"display_name" : "parentport",
"attachment" : {
"attachment_type" : "VIF",
"context" : {
"resource_type" : "VifAttachmentContext",
"vif_type": "PARENT"
},
"id" : "<Attachment UUID of the vNIC>"
},


"admin_state" : "UP",
"logical_switch_id" : "UUID of Logical Switch to which the vNIC is connected",
"_revision" : 0
}

2 To create child ports for a parent vNIC port on the N-VDS that is associated to the
subinterfaces on a VM, make the API call. Before making the API call, verify that a logical
switch exists to connect child ports with the subinterfaces on the VM.

POST https://<nsx-mgr-ip>/api/v1/logical-ports/
{
"resource_type" : "LogicalPort",
"display_name" : "<Name of the Child PORT>",
"attachment" : {
"attachment_type" : "VIF",
"context" : {
"resource_type" : "VifAttachmentContext",
"parent_vif_id" : "<UUID of the PARENT port from Step 1>",
"traffic_tag" : <VLAN ID>,
"app_id" : "<ID of the attachment>", ==> display id(can give any string). Must be unique.
"vif_type" : "CHILD"
},
"id" : "<ID of the CHILD port>"
},

"logical_switch_id" : "<UUID of the Logical switch(not the PARENT PORT's logical switch) to
which Child port would be connected to>",
"address_bindings" : [ { "mac_address" : "<vNIC MAC address>", "ip_address" : "<IP address to
the corresponding VLAN>", "vlan" : <VLAN ID> } ],
"admin_state" : "UP"
}

Results

NSX-T Data Center creates subinterfaces on VMs.
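
As an illustration only, the following Python sketch shows how the two API calls above might be
scripted with the requests library. The Manager address, credentials, UUIDs, VLAN ID, and IP
values are placeholders (assumptions) that you must replace with values from your environment;
this is a minimal sketch, not a supported tool.

import requests

# Placeholders: replace with values from your environment.
NSX = "https://nsx-mgr.example.com"
AUTH = ("admin", "password")
VNIC_PORT_ID = "<logical-port-uuid-of-the-vnic>"

s = requests.Session()
s.auth = AUTH
s.verify = False  # lab only; validate the NSX Manager certificate in production

# Step 1: mark the vNIC logical port as a PARENT port (body mirrors the PUT call above).
parent_body = {
    "resource_type": "LogicalPort",
    "display_name": "parentport",
    "attachment": {
        "attachment_type": "VIF",
        "context": {"resource_type": "VifAttachmentContext", "vif_type": "PARENT"},
        "id": "<attachment-uuid-of-the-vnic>",
    },
    "admin_state": "UP",
    "logical_switch_id": "<uuid-of-the-logical-switch-of-the-vnic>",
    "_revision": 0,  # must match the current revision of the logical port
}
s.put(f"{NSX}/api/v1/logical-ports/{VNIC_PORT_ID}", json=parent_body).raise_for_status()

# Step 2: create a CHILD port for a VLAN 10 subinterface, attached to the logical switch
# that is mapped 1:1 to VLAN 10 (for example, LS-VLAN-10).
child_body = {
    "resource_type": "LogicalPort",
    "display_name": "child-vlan10",
    "attachment": {
        "attachment_type": "VIF",
        "context": {
            "resource_type": "VifAttachmentContext",
            "parent_vif_id": "<uuid-of-the-parent-port-from-step-1>",
            "traffic_tag": 10,
            "app_id": "child-vlan10",  # any unique string
            "vif_type": "CHILD",
        },
        "id": "<id-of-the-child-port>",
    },
    "logical_switch_id": "<uuid-of-LS-VLAN-10>",
    "address_bindings": [
        {"mac_address": "<vnic-mac>", "ip_address": "<ip-in-the-vlan-10-subnet>", "vlan": 10}
    ],
    "admin_state": "UP",
}
s.post(f"{NSX}/api/v1/logical-ports/", json=child_body).raise_for_status()

Repeat the second call for each additional VLAN subinterface, always pairing the VLAN ID with
the logical switch that is dedicated to that VLAN.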

Migrate Host Switch to vSphere Distributed Switch


When using N-VDS as the host switch, NSX-T is represented as an opaque network in vCenter
Server. N-VDS owns one or more of the physical interfaces (pNICs) on the transport node, and
port configuration is performed from NSX-T Data Center. Starting with NSX-T Data Center 3.0.2,
you can migrate your host switch to vSphere Distributed Switch (VDS) 7.0 for optimal pNIC usage
and to manage the networking for NSX-T hosts from vCenter Server. When running NSX-T on a
VDS switch, a segment is represented as an NSX Distributed Virtual Port Group. Any changes to
the segments on the NSX-T network are synchronized in vCenter Server.

Prerequisites

Contact VMware support to assess the impact of migrating to VDS 7.0.


The following requirements and considerations apply when migrating to a VDS 7.0 host switch:

n vCenter Server 7.0 or later

n ESXi 7.0 or later

n NSX-T is no longer represented as an opaque network after migration. You might need to
update your scripts to manage the migrated representation of the NSX-T hosts.

Procedure

1 Migrate your host switch using API calls or run the commands for migration from the CLI.

n Make the following API calls to perform the migration:

a To verify that the hosts are ready for migration, make the following API call and run a
pre-check:

POST https://{{NSX Manager-IP}}/policy/api/v1/nvds-urt/precheck

Example response:

{ "precheck_id": "166959af-7f4b-4d49-b294-907000eef889" }

b Address any configuration inconsistencies and run the pre-check again.

c Verify the status of the pre-check.

POST https://{{NSX Manager-IP}}/policy/api/v1/nvds-urt/status-summary/<precheck-id>

Example response:

{
"precheck_id": "166959af-7f4b-4d49-b294-907000eef889",
"precheck_status": "PENDING_TOPOLOGY"
}

d For stateless hosts, nominate one of the hosts as the source host and initiate the
migration.

e To retrieve the recommended topology, make the following API call:

GET https://<nsx-ip>/api/v1/nvds-urt/topology/<precheck-id>

Example response:

{
"topology": [
{
"nvds_id": "21d4fd9b-7214-46b7-ab16-c4e7138f011f",
"nvds_name": "nsxvswitch",
"compute_manager_topology": [
{
"compute_manager_id": "fa1421d9-54a7-418e-9e18-7d0ff0d2f771",
"dvswitch": [

VMware, Inc. 92
NSX-T Data Center Administration Guide

{
"data_center_id": "datacenter-3",
"vds_name": "CVDS-nsxvswitch-datacenter-3",
"vmknic": [
"vmk1"
],
"transport_node_id": [
"4a6161af-7eec-4780-8faf-0e0610c33c2e",
"5a78981a-03a6-40c0-8a77-28522bbf07a9",
"f9c6314d-9b99-48aa-bfc8-1b3a582162bb"
]
}
]
}
]
}
]
}

f Make the following API call to create a VDS with the recommended topology:

POST https://{{NSX Manager-IP}}/policy/api/v1/nvds-urt/topology?action=apply

You can choose to rename the VDS. If a VDS with the name that you specified
already exists, the existing VDS is used.

Example input:

{
"topology": [
{
"nvds_id": "c8ff4053-502a-4636-8a38-4413c2a2d52f",
"nvds_name": "nsxvswitch",
"compute_manager_topology": [
{
"compute_manager_id": "fa1421d9-54a7-418e-9e18-7d0ff0d2f771",
"dvswitch": [
{
"data_center_id": "datacenter-3",
"vds_name": "test-dvs",
"transport_node_id": [
"65592db5-adad-47a7-8502-1ab548c63c6d",
"e57234ee-1d0d-425e-b6dd-7dbc5f6e6527",
"70f55855-6f81-45a8-bd40-d8b60ae45b82"
]
}
]
}
]
}
]
}


g To track the status of the migration, make the following API call:

POST https://{{NSX Manager-IP}}/policy/api/v1/nvds-urt/status-summary/<precheck-id>

When the host is ready for migration, precheck_status changes from APPLYING_TOPOLOGY
to UPGRADE_READY.

Refer to the NSX-T Data Center API Guide for more information on API parameters.

h Place the ESXi host in maintenance mode and evacuate the powered off VMs. For a
stateless host, reboot the nominated source host.

i To initiate the N-VDS to VDS migration, make the following API call:

POST https://{{NSX Manager-IP}}/policy/api/v1/transport-nodes/<tn-id>?action=migrate_to_vds

The hosts are migrated asynchronously. You can upgrade multiple transport nodes in
parallel by calling the API for a desired set of hosts. Services like DRS continue to run
as expected during the process of migration.

j Make the following API call to track the status of migration:

POST https://{{NSX Manager-IP}}/policy/api/v1/nvds-urt/status-summary/<precheck-id>

The host migration_state changes from UPGRADE_IN_PROGRESS to SUCCESS after a
successful migration.

Example response:

{
"precheck_id": "c306e279-8b75-4160-919c-6c40030fb3d0",
"precheck_status": "READY",
"migration_state": [
{
"host": "65592db5-adad-47a7-8502-1ab548c63c6d",
"overall_state": "UPGRADE_READY"
},
{
"host": "e57234ee-1d0d-425e-b6dd-7dbc5f6e6527",
"overall_state": "UPGRADE_READY"
},
{
"host": "70f55855-6f81-45a8-bd40-d8b60ae45b82",
"overall_state": "SUCCESS"
}
]
}

In the event of failures, the overall_state changes to FAILED, indicating the reason for
the migration failure. Run the migrate_to_vds action to run the migration task again.


k For stateless hosts:

1 Extract the host profile from the migrated host and attach it to the cluster.

2 Reboot the remaining hosts in the cluster.

n Perform the migration from the NSX Manager CLI.

a To verify that the hosts are ready for migration, run the following command and run a
pre-check:

vds-migrate precheck

Sample output:

Precheck Id: 0a26d126-7116-11e5-9d70-feff819cdc9f

b Address any configuration inconsistencies and run the pre-check again.

c To retrieve the recommended topology, run the following command:

vds-migrate show-topology

Sample output:

Precheck Id: 137d2a87-0544-4914-829d-d8b7e33b13f2


NVDS: nvds1(19cca902-9455-4316-92e2-65f4f5b4b138)
Compute Manager Topology:
[
{
"compute_manager_id": "fd37ed6e-0eae-4d65-b29a-d40eee1d5d47",
"dvswitch": [
{
"transport_node_id": [
"4d011ade-a010-4eea-b45a-b2569c0bb9ad"
],
"data_center_id": "datacenter-3",
"vmknic": [],
"vds_name": "CVDS-nvds1-datacenter-3"
}
]
}
]

d Run the following command to create a VDS with the recommended topology:

vds-migrate apply-topology

e Log in to vCenter Server and verify that the VDS is created.

f To initiate the N-VDS to VDS migration, run the following command:

vds-migrate esxi-cluster-name <cluster-name>


Sample output:

VDS Migration Done:
3 Transport-Nodes Migrate Successfully
0 Transport-Nodes Migrate Failed

You can also use the transport node ID to initiate migration:

vds-migrate tn-list <file-path>

where <file-path> includes the Transport Node IDs.

Sample output:

nsx-manager-1> vds-migrate tn-list /opt/tnid


VDS Migration Done:
3 Transport-Nodes Migrate Successfully
0 Transport-Nodes Migrate Failed

2 Move the migrated hosts out of maintenance mode.
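
The API-based flow lends itself to simple automation. As an illustration only, the following Python
sketch runs the pre-check and then polls the status-summary endpoint shown in the procedure
until every host reports SUCCESS. The Manager address, credentials, and 30-second polling
interval are assumptions to adapt; steps such as applying the topology, entering maintenance
mode, and calling the migrate_to_vds action still must be performed as described above.

import time
import requests

# Placeholders: replace with values from your environment.
NSX = "https://nsx-mgr.example.com"
AUTH = ("admin", "password")

s = requests.Session()
s.auth = AUTH
s.verify = False  # lab only; validate the NSX Manager certificate in production

# Run the pre-check and record its ID.
precheck = s.post(f"{NSX}/policy/api/v1/nvds-urt/precheck")
precheck.raise_for_status()
precheck_id = precheck.json()["precheck_id"]

# Poll the status summary until all hosts report SUCCESS, or stop on a failure.
while True:
    summary = s.post(f"{NSX}/policy/api/v1/nvds-urt/status-summary/{precheck_id}").json()
    states = [host["overall_state"] for host in summary.get("migration_state", [])]
    print(summary.get("precheck_status"), states)
    if any(state == "FAILED" for state in states):
        raise RuntimeError("Migration failed for at least one host; rerun migrate_to_vds")
    if states and all(state == "SUCCESS" for state in states):
        break
    time.sleep(30)  # assumed polling interval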

NSX Virtual Distributed Switch


The primary component involved in the data plane of the transport nodes is the NSX Virtual
Distributed Switch (N-VDS). On ESXi hypervisors, the N-VDS implementation is derived from
VMware vSphere® Distributed Switch™ (VDS). With KVM hypervisors, the N-VDS
implementation is derived from the Open vSwitch (OVS).

The N-VDS is required on NSX-T overlay and VLAN-backed networks.

The N-VDS forwards traffic between:

n Components running on the transport node (for example, between virtual machines).

n Internal components and the physical network.

If the N-VDS is used to forward traffic between internal components and the physical network, the
N-VDS must own one or more physical interfaces (pNICs) on the transport node. As with other
virtual switches, an N-VDS cannot share a physical interface with another N-VDS, but it can coexist
with another N-VDS (or other vSwitch) that uses a separate set of pNICs. While N-VDS behavior in
realizing connectivity is identical regardless of the specific implementation, data plane realization
and enforcement capabilities differ based on the compute manager and associated hypervisor
capability.

By default, IGMP snooping (IGMPv1/v2/v3, MLDv1/v2) is enabled on an N-VDS configured on an ESXi
host.

To change settings of IGMP snooping on an N-VDS switch, run the following CLI commands.


get host-switch nvds mcast-filter

set host-switch nvds mcast-filter

The multicast filter mode is either legacy or snooping.

To change settings at a per-port level, run the following CLI commands.

get host-switch <host-switch-name> dvport <dvport-id> mcast-filter

get host-switch <host-switch-name> dvport <dvport-id> mcast-filter <entry-mode> <entry-group>

For more details, see the NSX-T Data Center Command-Line Interface Reference.

Chapter 6 Virtual Private Network (VPN)
NSX-T Data Center supports IPSec Virtual Private Network (IPSec VPN) and Layer 2 VPN (L2
VPN) on an NSX Edge node. IPSec VPN offers site-to-site connectivity between an NSX Edge
node and remote sites. With L2 VPN, you can extend your data center by enabling virtual
machines to keep their network connectivity across geographical boundaries while using the
same IP address.

Note IPSec VPN and L2 VPN are not supported in the NSX-T Data Center limited export release.

You must have a working NSX Edge node, with at least one configured Tier-0 or Tier-1 gateway,
before you can configure a VPN service. For more information, see "NSX Edge Installation" in the
NSX-T Data Center Installation Guide.

Beginning with NSX-T Data Center 2.4, you can also configure new VPN services using the NSX
Manager user interface. In earlier releases of NSX-T Data Center, you can only configure VPN
services using REST API calls.

Important When using NSX-T Data Center 2.4 or later to configure VPN services, you must use
new objects, such as Tier-0 gateways, that were created using the NSX Manager UI or Policy
APIs that are included with NSX-T Data Center 2.4 or later release. To use existing Tier-0 or Tier-1
logical routers that were configured before the NSX-T Data Center 2.4 release, you must
continue to use API calls to configure a VPN service.

System-default configuration profiles with predefined values and settings are made available for
your use during a VPN service configuration. You can also define new profiles with different
settings and select them during the VPN service configuration.

The Intel QuickAssist Technology (QAT) feature on a bare metal server is supported for IPSec
VPN bulk cryptography. Support for this feature began with NSX-T Data Center 3.0. For more
information on support of the QAT feature on bare metal servers, see the NSX-T Data Center
Installation Guide.

This chapter includes the following topics:

n Understanding IPSec VPN

n Understanding Layer 2 VPN

n Adding VPN Services


n Adding IPSec VPN Sessions

n Adding L2 VPN Sessions

n Add Local Endpoints

n Adding Profiles

n Add an Autonomous Edge as an L2 VPN Client

n Check the Realized State of an IPSec VPN Session

n Monitor and Troubleshoot VPN Sessions

Understanding IPSec VPN


Internet Protocol Security (IPSec) VPN secures traffic flowing between two networks connected
over a public network through IPSec gateways called endpoints. NSX Edge only supports a
tunnel mode that uses IP tunneling with Encapsulating Security Payload (ESP). ESP operates
directly on top of IP, using IP protocol number 50.

IPSec VPN uses the IKE protocol to negotiate security parameters. The default UDP port is set to
500. If NAT is detected in the gateway, the port is set to UDP 4500.

NSX Edge supports a policy-based or a route-based IPSec VPN.

Beginning with NSX-T Data Center 2.5, IPSec VPN services are supported on both Tier-0 and
Tier-1 gateways. See Add a Tier-0 Gateway or Add a Tier-1 Gateway for more information. The
Tier-0 or Tier-1 gateway must be in Active-Standby high-availability mode when used for an IPSec
VPN service. You can use segments that are connected to either Tier-0 or Tier-1 gateways when
configuring an IPSec VPN service.

An IPsec VPN service in NSX-T Data Center uses the gateway-level failover functionality to
support a high-availability service at the VPN service level. Tunnels are re-established on failover
and VPN configuration data is synchronized. Before NSX-T Data Center 3.0 release, the IPSec
VPN state is not synchronized as tunnels are being re-established. Beginning with NSX-T Data
Center 3.0 release, the IPSec VPN state is synchronized to the standby NSX Edge node when the
current active NSX Edge node fails and the original standby NSX Edge node becomes the new
active NSX Edge node without renegotiating the tunnels. This feature is supported for both
policy-based and route-based IPSec VPN services.

Pre-shared key mode authentication and IP unicast traffic are supported between the NSX Edge
node and remote VPN sites. In addition, certificate authentication is supported beginning with
NSX-T Data Center 2.4. Only certificate types signed by one of the following signature hash
algorithms are supported.

n SHA256withRSA

n SHA384withRSA

n SHA512withRSA


Using Policy-Based IPSec VPN


Policy-based IPSec VPN requires a VPN policy to be applied to packets to determine which
traffic is to be protected by IPSec before being passed through the VPN tunnel.

This type of VPN is considered static because when a local network topology and configuration
change, the VPN policy settings must also be updated to accommodate the changes.

When using a policy-based IPSec VPN with NSX-T Data Center, you use IPSec tunnels to connect
one or more local subnets behind the NSX Edge node with the peer subnets on the remote VPN
site.

You can deploy an NSX Edge node behind a NAT device. In this deployment, the NAT device
translates the VPN address of an NSX Edge node to a publicly accessible address facing the
Internet. Remote VPN sites use this public address to access the NSX Edge node.

You can place remote VPN sites behind a NAT device as well. You must provide the remote VPN
site's public IP address and its ID (either FQDN or IP address) to set up the IPSec tunnel. On both
ends, static one-to-one NAT is required for the VPN address.

Note DNAT is not supported on a tier-1 gateway where policy-based IPSec VPN is configured.

IPSec VPN can provide a secure communications tunnel between an on-premises network and a
network in your cloud software-defined data center (SDDC). For policy-based IPSec VPN, the
local and peer networks provided in the session must be configured symmetrically at both
endpoints. For example, if the cloud-SDDC has the local networks configured as X, Y, Z subnets
and the peer network is A, then the on-premises VPN configuration must have A as the local
network and X, Y, Z as the peer network. This case is true even when A is set to
ANY (0.0.0.0/0). For example, if the cloud-SDDC policy-based VPN session has the local
network configured as 10.1.1.0/24 and the peer network as 0.0.0.0/0, at the on-premises VPN
endpoint, the VPN configuration must have 0.0.0.0/0 as the local network and 10.1.1.0/24 as
the peer network. If misconfigured, the IPSec VPN tunnel negotiation might fail.

The size of the NSX Edge node determines the maximum number of supported tunnels, as shown
in the following table.

Table 6-1. Number of IPSec Tunnels Supported

Edge Node Size | # of IPSec Tunnels Per VPN Session (Policy-Based) | # of Sessions Per VPN Service | # of IPSec Tunnels Per Service (16 tunnels per session)
Small | N/A (POC/Lab Only) | N/A (POC/Lab Only) | N/A (POC/Lab Only)
Medium | 128 | 128 | 2048
Large | 128 (soft limit) | 256 | 4096
Bare Metal | 128 (soft limit) | 512 | 6000

Restriction The inherent architecture of policy-based IPSec VPN restricts you from setting up a
VPN tunnel redundancy.


For information about configuring a policy-based IPSec VPN, see Add an IPSec VPN Service.

Using Route-Based IPSec VPN


Route-based IPSec VPN provides tunneling on traffic based on the static routes or routes learned
dynamically over a special interface called virtual tunnel interface (VTI) using, for example, BGP
as the protocol. IPSec secures all the traffic flowing through the VTI.

Note
n OSPF dynamic routing is not supported for routing through IPSec VPN tunnels.

n Dynamic routing for VTI is not supported on VPN that is based on Tier-1 gateways.

Route-based IPSec VPN is similar to Generic Routing Encapsulation (GRE) over IPSec, with the
exception that no additional encapsulation is added to the packet before applying IPSec
processing.

In this VPN tunneling approach, VTIs are created on the NSX Edge node. Each VTI is associated
with an IPSec tunnel. The encrypted traffic is routed from one site to another site through the VTI
interfaces. IPSec processing happens only at the VTI.

VPN Tunnel Redundancy


You can configure VPN tunnel redundancy with a route-based IPSec VPN session that is
configured on a Tier-0 gateway. With tunnel redundancy, multiple tunnels can be set up between
two sites, with one tunnel being used as the primary with failover to the other tunnels when the
primary tunnel becomes unavailable. This feature is most useful when a site has multiple
connectivity options, such as with different ISPs for link redundancy.

Important
n In NSX-T Data Center, IPSec VPN tunnel redundancy is supported using BGP only.

n Do not use static routing for route-based IPSec VPN tunnels to achieve VPN tunnel
redundancy.

The following figure shows a logical representation of IPSec VPN tunnel redundancy between
two sites. In this figure, Site A and Site B represent two data centers. For this example, assume
that NSX-T Data Center is not managing the Edge VPN Gateways in Site A, and that NSX-T Data
Center is managing an Edge Gateway virtual appliance in Site B.


Figure 6-1. Tunnel Redundancy in Route-Based IPSec VPN


[Figure: Site A and Site B are connected by two route-based IPSec VPN tunnels, each terminating
on a VTI, with BGP running over each tunnel and uplinks from the VPN gateways toward the
router and the local subnets at each site.]

As shown in the figure, you can configure two independent IPSec VPN tunnels by using VTIs.
Dynamic routing is configured using BGP protocol to achieve tunnel redundancy. If both IPSec
VPN tunnels are available, they remain in service. All the traffic destined from Site A to Site B
through the NSX Edge node is routed through the VTI. The data traffic undergoes IPSec
processing and goes out of its associated NSX Edge node uplink interface. All the incoming IPSec
traffic received from Site B VPN Gateway on the NSX Edge node uplink interface is forwarded to
the VTI after decryption, and then usual routing takes place.

You must configure the BGP HoldDown timer and KeepAlive timer values to detect loss of
connectivity with the peer within the required failover time. See Configure BGP.

Understanding Layer 2 VPN


With Layer 2 VPN (L2 VPN), you can extend Layer 2 networks (VNIs or VLANs) across multiple
sites on the same broadcast domain. This connection is secured with a route-based IPSec tunnel
between the L2 VPN server and the L2 VPN client.

Note This L2 VPN feature is available only for NSX-T Data Center and does not have any third-
party interoperability.

The extended network is a single subnet with a single broadcast domain, which means the VMs
remain on the same subnet when they are moved between sites. The VMs' IP addresses do not
change when they are moved. So, enterprises can seamlessly migrate VMs between network
sites. The VMs can run on either VNI-based networks or VLAN-based networks. For cloud
providers, L2 VPN provides a mechanism to onboard tenants without modifying existing IP
addresses used by their workloads and applications.

In addition to supporting data center migration, an on-premises network extended with an L2
VPN is useful for a disaster recovery plan and dynamically engaging off-premise compute
resources to meet the increased demand.


L2 VPN services are supported on both Tier-0 and Tier-1 gateways. Only one L2 VPN service
(either client or server) can be configured for either Tier-0 or Tier-1 gateway.

Each L2 VPN session has one Generic Routing Encapsulation (GRE) tunnel. Tunnel redundancy is
not supported. An L2 VPN session can extend up to 4094 L2 segments.

VLAN-based and VNI-based segments can be extended using L2 VPN service on an NSX Edge
node that is managed in an NSX-T Data Center environment. You can extend L2 networks from
VLAN to VNI, VLAN to VLAN, and VNI to VNI.

Segments can be connected to either Tier-0 or Tier-1 gateways and use L2 VPN services.

Also supported is VLAN trunking using an ESX NSX-managed virtual distributed switch (N-VDS).
If there are sufficient compute and I/O resources, an NSX Edge cluster can extend multiple VLAN
networks over a single interface using VLAN trunking.

Beginning with NSX-T Data Center 3.0, the L2 VPN path MTU discovery (PMTUD) feature is
enabled by default. With the PMTUD enabled, the source host learns the path MTU value for the
destination host through the L2 VPN tunnel and limits the length of the outgoing IP packet to the
learned value. This feature helps avoid IP fragmentation and reassembly within the tunnel, as a
result improving the L2 VPN performance.

The L2 VPN PMTUD feature is not applicable for non-IP, non-unicast, and unicast packets with
the DF (Don’t Fragment) flag cleared. The global PMTU cache timer expires every 10 minutes. To
disable or enable L2 VPN PMTUD feature, see Enable and Disable L2 VPN Path MTU Discovery.

The L2 VPN service support is provided in the following deployment scenarios.

n Between an NSX-T Data Center L2 VPN server and an L2 VPN client hosted on an NSX Edge
that is managed in an NSX Data Center for vSphere environment. A managed L2 VPN client
supports both VLANs and VNIs.

n Between an NSX-T Data Center L2 VPN server and an L2 VPN client hosted on a standalone
or unmanaged NSX Edge. An unmanaged L2 VPN client supports VLANs only.

n Between an NSX-T Data Center L2 VPN server and an L2 VPN client hosted on an
autonomous NSX Edge. An autonomous L2 VPN client supports VLANs only.

n Beginning with NSX-T Data Center 2.4 release, L2 VPN service support is available between
an NSX-T Data Center L2 VPN server and NSX-T Data Center L2 VPN clients. In this scenario,
you can extend the logical L2 segments between two on-premises software-defined data
centers (SDDCs).

Enable and Disable L2 VPN Path MTU Discovery


You can enable or disable the L2 VPN path MTU (PMTU) discovery feature using CLI commands.
By default L2 VPN PMTU discovery is enabled.

Prerequisites

You must have the user name and password for the admin account to log in to the NSX Edge
node.


Procedure

1 Log in with admin privileges to the CLI of the NSX Edge node.

2 To check the status of the L2 VPN PMTU discovery feature, use the following command.

nsxedge> get dataplane l2vpn-pmtu config

If the feature is enabled, you see the following output: l2vpn_pmtu_enabled : True.

If the feature is disabled, you see the following output: l2vpn_pmtu_enabled : False.

3 To disable the L2 VPN PMTU discovery feature, use the following command.

nsxedge> set dataplane l2vpn-pmtu disabled

Adding VPN Services


You can add either an IPSec VPN (policy-based or route-based) or an L2 VPN using the NSX
Manager user interface (UI).

The following sections provide information about the workflows required to set up the VPN
service that you need. The topics that follow these sections provide details on how to add either
an IPSec VPN or an L2 VPN using the NSX Manager user interface.

Policy-Based IPSec VPN Configuration Workflow


Configuring a policy-based IPSec VPN service workflow requires the following high-level steps.

1 Create and enable an IPSec VPN service using an existing Tier-0 or Tier-1 gateway. See Add
an IPSec VPN Service.

2 Create a DPD (dead peer detection) profile, if you prefer not to use the system default. See
Add DPD Profiles.

3 To use a non-system default IKE profile, define an IKE (Internet Key Exchange) profile. See
Add IKE Profiles.

4 Configure an IPSec profile using Add IPSec Profiles.

5 Use Add Local Endpoints to create a VPN server hosted on the NSX Edge.

6 Configure a policy-based IPSec VPN session, apply the profiles, and attach the local endpoint
to it. See Add a Policy-Based IPSec Session. Specify the local and peer subnets to be used for
the tunnel. Traffic from a local subnet destined to the peer subnet is protected using the
tunnel defined in the session.

Route-Based IPSec VPN Configuration Workflow


A route-based IPSec VPN configuration workflow requires the following high-level steps.

1 Configure and enable an IPSec VPN service using an existing Tier-0 or Tier-1 gateway. See
Add an IPSec VPN Service.


2 Define an IKE profile if you prefer not to use the default IKE profile. See Add IKE Profiles.

3 If you decide not to use the system default IPSec profile, create one using Add IPSec Profiles.

4 Create a DPD profile if you do not want to use the default DPD profile. See Add DPD
Profiles.

5 Add a local endpoint using Add Local Endpoints.

6 Configure a route-based IPSec VPN session, apply the profiles, and attach the local endpoint
to the session. Provide a VTI IP in the configuration and use the same IP to configure routing.
The routes can be static or dynamic (using BGP). See Add a Route-Based IPSec Session.

L2 VPN Configuration Workflow


Configuring an L2 VPN requires that you configure an L2 VPN service in Server mode and then
another L2 VPN service in Client mode. You also must configure the sessions for the L2 VPN
server and L2 VPN client using the peer code generated by the L2 VPN Server. Following is a
high-level workflow for configuring an L2 VPN service.

1 Create an L2 VPN Service in Server mode.

a Configure a route-based IPSec VPN tunnel with a Tier-0 or Tier-1 gateway and an L2 VPN
Server service using that route-based IPSec tunnel. See Add an L2 VPN Server Service.

b Configure an L2 VPN server session, which binds the newly created route-based IPSec
VPN service and the L2 VPN server service, and automatically allocates the GRE IP
addresses. See Add an L2 VPN Server Session.

c Add segments to the L2 VPN Server sessions. This step is also described in Add an L2
VPN Server Session.

d Use Download the Remote Side L2 VPN Configuration File to obtain the peer code for the
L2 VPN Server service session, which must be applied on the remote site and used to
configure the L2 VPN Client session automatically.

2 Create an L2 VPN Service in Client mode.

a Configure another route-based IPSec VPN service using a different Tier-0 or Tier-1
gateway and configure an L2 VPN Client service using that Tier-0 or Tier-1 gateway that
you just configured. See Add an L2 VPN Client Service for information.

b Define the L2 VPN Client sessions by importing the peer code generated by the L2 VPN
Server service. See Add an L2 VPN Client Session.

c Add segments to the L2 VPN Client sessions defined in the previous step. This step is
described in Add an L2 VPN Client Session.

Add an IPSec VPN Service


NSX-T Data Center supports a site-to-site IPSec VPN service between a Tier-0 or Tier-1 gateway
and remote sites. You can create a policy-based or a route-based IPSec VPN service. You must


create the IPSec VPN service first before you can configure either a policy-based or a route-
based IPSec VPN session.

Note IPSec VPN is not supported in the NSX-T Data Center limited export release.

IPSec VPN is not supported when the local endpoint IP address goes through NAT on the same
logical router on which the IPSec VPN session is configured.

Prerequisites

n Familiarize yourself with the IPSec VPN. See Understanding IPSec VPN.

n You must have at least one Tier-0 or Tier-1 gateway configured and available for use. See
Add a Tier-0 Gateway or Add a Tier-1 Gateway for more information.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Navigate to Networking > VPN > VPN Services.

3 Select Add Service > IPSec.

4 Enter a name for the IPSec service.

This name is required.

5 From the Tier-0/Tier-1 Gateway drop-down menu, select the Tier-0 or Tier-1 gateway to
associate with this IPSec VPN service.

6 Enable or disable Admin Status.

By default, the value is set to Enabled, which means the IPSec VPN service is enabled on the
Tier-0 or Tier-1 gateway after the new IPSec VPN service is configured.

7 Set the value for IKE Log Level.

The default is set to the Info level.

8 Enter a value for Tags if you want to include this service in a tag group.

9 To enable or disable the stateful synchronization of VPN sessions, toggle Session sync.

By default, the value is set to Enabled.

10 Click Global Bypass Rules if you want to allow data packets to be exchanged between the
specified local and remote IP addresses without any IPSec protection. In the Local Networks
and Remote Networks text boxes, enter the list of local and remote subnets between which
the bypass rules are applied.

If you enable these rules, data packets are exchanged between the specified local and
remote IP sites even if their IP addresses are specified in the IPSec session rules. The default
is to use the IPSec protection when data is exchanged between local and remote sites. These
rules apply for all IPSec VPN sessions created within this IPSec VPN service.


11 Click Save.

After the new IPSec VPN service is created successfully, you are asked whether you want to
continue with the rest of the IPSec VPN configuration. If you click Yes, you are taken back to
the Add IPSec VPN Service panel. The Sessions link is now enabled and you can click it to
add an IPSec VPN session.

What to do next

Use information in Adding IPSec VPN Sessions to guide you in adding an IPSec VPN session. You
also provide information for the profiles and local endpoint that are required to finish the IPSec
VPN configuration.

Add an L2 VPN Service


You configure an L2 VPN service on a Tier-0 or Tier-1 gateway. To enable the L2 VPN service,
you must first create an IPSec VPN service on the Tier-0 or Tier-1 gateway, if it does not exist yet.
You then configure an L2 VPN tunnel between an L2 VPN server (destination gateway) and an L2
VPN client (source gateway).

To configure an L2 VPN service, use the information in the topics that follow in this section.

Prerequisites

n Familiarize yourself with IPsec VPN and L2 VPN. See Understanding IPSec VPN and
Understanding Layer 2 VPN.

n You must have at least one Tier-0 or Tier-1 gateway configured and available for use. See
Add a Tier-0 Gateway or Add a Tier-1 Gateway.

Procedure

1 Add an L2 VPN Server Service


To configure an L2 VPN Server service, you must configure the L2 VPN service in server
mode on the destination NSX Edge to which the L2 VPN client is to be connected.

2 Add an L2 VPN Client Service


After configuring the L2 VPN Server service, configure the L2 VPN service in the client mode
on another NSX Edge instance.

Add an L2 VPN Server Service


To configure an L2 VPN Server service, you must configure the L2 VPN service in server mode
on the destination NSX Edge to which the L2 VPN client is to be connected.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.


2 (Optional) If an IPSec VPN service does not exist yet on either a Tier-0 or Tier-1 gateway that
you want to configure as the L2 VPN server, create the service using the following steps.

a Navigate to the Networking > VPN > VPN Services tab and select Add Service > IPSec.

b Enter a name for the IPSec VPN service.

c From the Tier-0/Tier-1 Gateway drop-down menu, select the gateway to use with the L2
VPN server.

d If you want to use values different from the system defaults, set the rest of the properties
on the Add IPSec Service pane, as needed.

e Click Save and when prompted if you want to continue configuring the IPSec VPN
service, select No.

3 Navigate to the Networking > VPN > VPN Services tab and select Add Service > L2 VPN
Server to create an L2 VPN server.

4 Enter a name for the L2 VPN server.

5 From the Tier-0/Tier-1 Gateway drop-down menu, select the same Tier-0 or Tier-1 gateway
that you used with the IPSec service you created a moment ago.

6 Enter an optional description for this L2 VPN server.

7 Enter a value for Tags if you want to include this service in a tag group.

8 Enable or disable the Hub & Spoke property.

By default, the value is set to Disabled, which means the traffic received from the L2 VPN
clients is only replicated to the segments connected to the L2 VPN server. If this property is
set to Enabled, the traffic from any L2 VPN client is replicated to all other L2 VPN clients.

9 Click Save.

After the new L2 VPN server is created successfully, you are asked whether you want to
continue with the rest of the L2 VPN service configuration. If you click Yes, you are taken
back to the Add L2 VPN Server pane and the Session link is enabled. You can use that link to
create an L2 VPN server session or use the Networking > VPN > L2 VPN Sessions tab.

What to do next

Configure an L2 VPN server session for the L2 VPN server that you configured using information
in Add an L2 VPN Server Session as a guide.

Add an L2 VPN Client Service


After configuring the L2 VPN Server service, configure the L2 VPN service in the client mode on
another NSX Edge instance.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.


2 (Optional) If an IPSec VPN service does not exist yet on either a Tier-0 or Tier-1 gateway that
you want to configure as the L2 VPN client, create the service using the following steps.

a Navigate to the Networking > VPN > VPN Services tab and select Add Service > IPSec.

b Enter a name for the IPSec VPN service.

c From the Tier-0/Tier-1 Gateway drop-down menu, select a Tier-0 or Tier-1 gateway to
use with the L2 VPN client.

d If you want to use values different from the system defaults, set the rest of the properties
on the Add IPSec Service pane, as needed.

e Click Save and when prompted if you want to continue configuring the IPSec VPN
service, select No.

3 Navigate to the Networking > VPN > VPN Services tab and select Add Service > L2 VPN
Client.

4 Enter a name for the L2 VPN Client service.

5 From the Tier-0/Tier-1 Gateway drop-down menu, select the same Tier-0 or Tier-1 gateway
that you used with the route-based IPSec tunnel you created a moment ago.

6 Optionally set the values for Description and Tags.

7 Click Save.

After the new L2 VPN client service is created successfully, you are asked whether you want
to continue with the rest of the L2 VPN client configuration. If you click Yes, you are taken
back to the Add L2 VPN Client pane and the Session link is enabled. You can use that link to
create an L2 VPN client session or use the Networking > VPN > L2 VPN Sessions tab.

What to do next

Configure an L2 VPN client session for the L2 VPN Client service that you configured. Use the
information in Add an L2 VPN Client Session as a guide.

Adding IPSec VPN Sessions


After you have configured an IPSec VPN service, you must add either a policy-based IPSec VPN
session or a route-based IPSec VPN session, depending on the type of IPSec VPN you want to
configure. You also provide the information for the local endpoint and profiles to use to finish the
IPSec VPN service configuration.

Add a Policy-Based IPSec Session


When you add a policy-based IPSec VPN, IPSec tunnels are used to connect multiple local
subnets that are behind the NSX Edge node with peer subnets on the remote VPN site.


The following steps use the IPSec Sessions tab on the NSX Manager UI to create a policy-based
IPSec session. You also add information for the tunnel, IKE, and DPD profiles, and select an
existing local endpoint to use with the policy-based IPSec VPN.

Note You can also add the IPSec VPN sessions immediately after you have successfully
configured the IPSec VPN service. You click Yes when prompted to continue with the IPSec VPN
service configuration and select Sessions > Add Sessions on the Add IPsec Service panel. The
first few steps in the following procedure assume you selected No to the prompt to continue with
the IPSec VPN service configuration. If you selected Yes, proceed to step 3 in the following steps
to guide you with the rest of the policy-based IPSec VPN session configuration.

Prerequisites

n You must have configured an IPSec VPN service before proceeding. See Add an IPSec VPN
Service.

n Obtain the information for the local endpoint, IP address for the peer site, local network
subnet, and remote network subnet to use with the policy-based IPSec VPN session you are
adding. To create a local endpoint, see Add Local Endpoints.

n If you are using a Pre-Shared Key (PSK) for authentication, obtain the PSK value.

n If you are using a certificate for authentication, ensure that the necessary server certificates
and corresponding CA-signed certificates are already imported. See Chapter 19 Certificates.

n If you do not want to use the defaults for the IPSec tunnel, IKE, or dead peer detection (DPD)
profiles provided by NSX-T Data Center, configure the profiles you want to use instead. See
Adding Profiles for information.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Navigate to the Networking > VPN > IPSec Sessions tab.

3 Select Add IPSec Session > Policy Based.

4 Enter a name for the policy-based IPSec VPN session.

5 From the VPN Service drop-down menu, select the IPSec VPN service to which you want to
add this new IPSec session.

Note If you are adding this IPSec session from the Add IPSec Sessions dialog box, the VPN
Service name is already indicated above the Add IPSec Session button.

6 Select an existing local endpoint from the drop-down menu.

This local endpoint value is required and identifies the local NSX Edge node. If you want to
create a different local endpoint, click the three-dot menu ( ) and select Add Local Endpoint.


7 In the Remote IP text box, enter the required IP address of the remote site.

This value is required.

8 Enter an optional description for this policy-based IPSec VPN session.

The maximum length is 1024 characters.

9 To enable or disable the IPSec VPN session, click Admin Status.

By default, the value is set to Enabled, which means the IPSec VPN session is to be configured
down to the NSX Edge node.

10 (Optional) From the Compliance suite drop-down menu, select a security compliance suite.

Note Compliance suite support is provided beginning with NSX-T Data Center 2.5. See
About Supported Compliance Suites for more information.

The default value selected is None. If you select a compliance suite, the Authentication Mode
is set to Certificate and in the Advanced Properties section, the values for IKE profile and
IPSec profile are set to the system-defined profiles for the selected security compliance
suite. You cannot edit these system-defined profiles.

11 If the Compliance Suite is set to None, select a mode from the Authentication Mode drop-
down menu.

The default authentication mode used is PSK, which means a secret key shared between NSX
Edge and the remote site is used for the IPSec VPN session. If you select Certificate, the site
certificate that was used to configure the local endpoint is used for authentication.

12 In the Local Networks and Remote Networks text boxes, enter at least one IP subnet address
to use for this policy-based IPSec VPN session.

These subnets must be in a CIDR format.

13 If Authentication Mode is set to PSK, enter the key value in the Pre-shared Key text box.

This secret key can be a string with a maximum length of 128 characters.

Caution Be careful when sharing and storing a PSK value because it contains some sensitive
information.


14 To identify the peer site, enter a value in Remote ID.

For peer sites using PSK authentication, this ID value must be the public IP address or the
FQDN of the peer site. For peer sites using certificate authentication, this ID value must be
the common name (CN) or distinguished name (DN) used in the peer site's certificate.

Note If the peer site's certificate contains an email address in the DN string, for example,

C=US, ST=California, O=MyCompany, OU=MyOrg, CN=Site123/[email protected]

then enter the Remote ID value using the following format as an example.

C=US, ST=California, O=MyCompany, OU=MyOrg, CN=Site123, [email protected]

If the local site's certificate contains an email address in the DN string and the peer site uses
the strongSwan IPsec implementation, enter the local site's ID value in that peer site. The
following is an example.

C=US, ST=California, O=MyCompany, OU=MyOrg, CN=Site123, [email protected]

15 To change the profiles, initiation mode, TCP MSS clamping mode, and tags used by the
policy-based IPSec VPN session, click Advanced Properties.

By default, the system generated profiles are used. Select another available profile if you do
not want to use the default. If you want to use a profile that is not configured yet, click the
three-dot menu ( ) to create another profile. See Adding Profiles.
a If the IKE Profiles drop-down menu is enabled, select the IKE profile.

b Select the IPsec tunnel profile, if the IPSec Profiles drop-down menu is not disabled.

c Select the preferred DPD profile if the DPD Profiles drop-down menu is enabled.


d Select the preferred mode from the Connection Initiation Mode drop-down menu.

Connection initiation mode defines the policy used by the local endpoint in the process of
tunnel creation. The default value is Initiator. The following table describes the different
connection initiation modes available.

Table 6-2. Connection Initiation Modes

Connection Initiation Mode | Description
Initiator | The default value. In this mode, the local endpoint initiates the IPSec VPN tunnel creation and responds to incoming tunnel setup requests from the peer gateway.
On Demand | In this mode, the local endpoint initiates the IPSec VPN tunnel creation after the first packet matching the policy rule is received. It also responds to the incoming initiation request.
Respond Only | The IPSec VPN never initiates a connection. The peer site always initiates the connection request and the local endpoint responds to that connection request.

e If you want to reduce the maximum segment size (MSS) payload of the TCP session
during the IPSec connection, enable TCP MSS Clamping, select the TCP MSS direction
value, and optionally set the TCP MSS Value.

See Understanding TCP MSS Clamping for more information.

f If you want to include this session as part of a specific group, enter the tag name in Tags.

16 Click Save.

Results

When the new policy-based IPSec VPN session is configured successfully, it is added to the list of
available IPsec VPN sessions. It is in read-only mode.

What to do next

n Verify that the IPSec VPN tunnel status is Up. See Monitor and Troubleshoot VPN Sessions
for information.

n If necessary, manage the IPSec VPN session information by clicking the three-dot menu ( )
on the left-side of the session's row. Select one of the actions you are allowed to perform.

Add a Route-Based IPSec Session


When you add a route-based IPSec VPN, tunneling is provided on traffic that is based on routes
that were learned dynamically over a virtual tunnel interface (VTI) using a preferred protocol,
such as BGP. IPSec secures all the traffic flowing through the VTI.


The steps described in this topic use the IPSec Sessions tab to create a route-based IPSec
session. You also add information for the tunnel, IKE, and DPD profiles, and select an existing
local endpoint to use with the route-based IPSec VPN.

Note You can also add the IPSec VPN sessions immediately after you have successfully
configured the IPSec VPN service. You click Yes when prompted to continue with the IPSec VPN
service configuration and select Sessions > Add Sessions on the Add IPsec Service panel. The
first few steps in the following procedure assume you selected No to the prompt to continue with
the IPSec VPN service configuration. If you selected Yes, proceed to step 3 in the following steps
to guide you with the rest of the route-based IPSec VPN session configuration.

Prerequisites

n You must have configured an IPSec VPN service before proceeding. See Add an IPSec VPN
Service.

n Obtain the information for the local endpoint, IP address for the peer site, and tunnel service
IP subnet address to use with the route-based IPSec session you are adding. To create a local
endpoint, see Add Local Endpoints.

n If you are using a Pre-Shared Key (PSK) for authentication, obtain the PSK value.

n If you are using a certificate for authentication, ensure that the necessary server certificates
and corresponding CA-signed certificates are already imported. See Chapter 19 Certificates.

n If you do not want to use the default values for the IPSec tunnel, IKE, or dead peer detection
(DPD) profiles provided by NSX-T Data Center, configure the profiles you want to use
instead. See Adding Profiles for information.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Navigate to Networking > VPN > IPSec Sessions.

3 Select Add IPSec Session > Route Based.

4 Enter a name for the route-based IPSec session.

5 From the VPN Service drop-down menu, select the IPSec VPN service to which you want to
add this new IPSec session.

Note If you are adding this IPSec session from the Add IPSec Sessions dialog box, the VPN
Service name is already indicated above the Add IPSec Session button.

6 Select an existing local endpoint from the drop-down menu.

This local endpoint value is required and identifies the local NSX Edge node. If you want to
create a different local endpoint, click the three-dot menu ( ) and select Add Local Endpoint.


7 In the Remote IP text box, enter the IP address of the remote site.

This value is required.

8 Enter an optional description for this route-based IPSec VPN session.

The maximum length is 1024 characters.

9 To enable or disable the IPSec session, click Admin Status.

By default, the value is set to Enabled, which means the IPSec session is to be configured
down to the NSX Edge node.

10 (Optional) From the Compliance suite drop-down menu, select a security compliance suite.

Note Compliance suite support is provided beginning with NSX-T Data Center 2.5. See
About Supported Compliance Suites for more information.

The default value is set to None. If you select a compliance suite, the Authentication Mode is
set to Certificate and in the Advanced Properties section, the values for IKE profile and
IPSec profile are set to the system-defined profiles for the selected compliance suite. You
cannot edit these system-defined profiles.

11 Enter an IP subnet address in Tunnel Interface in the CIDR notation.

This address is required.

12 If the Compliance Suite is set to None, select a mode from the Authentication Mode drop-
down menu.

The default authentication mode used is PSK, which means a secret key shared between NSX
Edge and the remote site is used for the IPSec VPN session. If you select Certificate, the site
certificate that was used to configure the local endpoint is used for authentication.

13 If you selected PSK for the authentication mode, enter the key value in the Pre-shared Key
text box.

This secret key can be a string with a maximum length of 128 characters.

Caution Be careful when sharing and storing a PSK value because it contains some sensitive
information.


14 Enter a value in Remote ID.

For peer sites using PSK authentication, this ID value must be the public IP address or the
FQDN of the peer site. For peer sites using certificate authentication, this ID value must be
the common name (CN) or distinguished name (DN) used in the peer site's certificate.

Note If the peer site's certificate contains an email address in the DN string, for example,

C=US, ST=California, O=MyCompany, OU=MyOrg, CN=Site123/[email protected]

then enter the Remote ID value using the following format as an example.

C=US, ST=California, O=MyCompany, OU=MyOrg, CN=Site123, [email protected]

If the local site's certificate contains an email address in the DN string and the peer site uses
the strongSwan IPsec implementation, enter the local site's ID value in that peer site. The
following is an example.

C=US, ST=California, O=MyCompany, OU=MyOrg, CN=Site123, [email protected]

15 If you want to include this IPSec session as part of a specific group tag, enter the tag name in
Tags.

16 To change the profiles, initiation mode, TCP MSS clamping mode, and tags used by the route-
based IPSec VPN session, click Advanced Properties.

By default, the system-generated profiles are used. Select another available profile if you do
not want to use the default. If you want to use a profile that is not configured yet, click the
three-dot menu ( ) to create another profile. See Adding Profiles.
a If the IKE Profiles drop-down menu is enabled, select the IKE profile.

b Select the IPsec tunnel profile, if the IPSec Profiles drop-down menu is not disabled.


c Select the preferred DPD profile if the DPD Profiles drop-down menu is enabled.

d Select the preferred mode from the Connection Initiation Mode drop-down menu.

Connection initiation mode defines the policy used by the local endpoint in the process of
tunnel creation. The default value is Initiator. The following table describes the different
connection initiation modes available.

Table 6-3. Connection Initiation Modes

Connection Initiation Mode | Description
Initiator | The default value. In this mode, the local endpoint initiates the IPSec VPN tunnel creation and responds to incoming tunnel setup requests from the peer gateway.
On Demand | Do not use with the route-based VPN. This mode applies to policy-based VPN only.
Respond Only | The IPSec VPN never initiates a connection. The peer site always initiates the connection request and the local endpoint responds to that connection request.

17 If you want to reduce the maximum segment size (MSS) payload of the TCP session during
the IPSec connection, enable TCP MSS Clamping, select the TCP MSS direction value, and
optionally set the TCP MSS Value.

See Understanding TCP MSS Clamping for more information.

18 If you want to include this IPSec session as part of a specific group tag, enter the tag name in
Tags.

19 Click Save.

Results

When the new route-based IPSec VPN session is configured successfully, it is added to the list of
available IPsec VPN sessions. It is in read-only mode.

What to do next

n Verify that the IPSec VPN tunnel status is Up. See Monitor and Troubleshoot VPN Sessions
for information.

n Configure routing using either a static route or BGP. See Configure a Static Route or
Configure BGP.

n If necessary, manage the IPSec VPN session information by clicking the three-dot menu ( )
on the left-side of the session's row. Select one of the actions you can perform.

About Supported Compliance Suites


Beginning with NSX-T Data Center 2.5, you can specify a security compliance suite to use to
configure the security profiles used for an IPSec VPN session.


A security compliance suite has predefined values that are used for different security parameters
and that cannot be modified. When you select a compliance suite, the predefined values are
automatically used for the security profile of the IPSec VPN session you are configuring.

The following table lists the compliance suites that are supported for IKE profiles in NSX-T Data
Center and the values that are predefined for each.

Compliance Suite Name | IKE Version | Encryption Algorithm | Digest Algorithm | Diffie Hellman Group
CNSA | IKE V2 | AES 256 | SHA2 384 | Group 15, Group 20
FIPS | IKE FLEX | AES 128 | SHA2 256 | Group 20
Foundation | IKE V1 | AES 128 | SHA2 256 | Group 14
PRIME | IKE V2 | AES GCM 128 | Not Set | Group 19
Suite-B-GCM-128 | IKE V2 | AES 128 | SHA2 256 | Group 19
Suite-B-GCM-256 | IKE V2 | AES 256 | SHA2 384 | Group 20

The following table lists the compliance suites that are supported for IPSec profiles in NSX-T Data
Center and the values that are predefined for each.

Compliance Suite Name | Encryption Algorithm | Digest Algorithm | PFS Group | Diffie-Hellman Group
CNSA | AES 256 | SHA2 384 | Enabled | Group 15, Group 20
FIPS | AES GCM 128 | Not Set | Enabled | Group 20
Foundation | AES 128 | SHA2 256 | Enabled | Group 14
PRIME | AES GCM 128 | Not Set | Enabled | Group 19
Suite-B-GCM-128 | AES GCM 128 | Not Set | Enabled | Group 19
Suite-B-GCM-256 | AES GCM 256 | Not Set | Enabled | Group 20

Understanding TCP MSS Clamping


TCP MSS clamping enables you to reduce the maximum segment size (MSS) value used by a TCP
session during connection establishment through an IPSec tunnel. This feature is supported
starting with NSX-T Data Center 2.5.

TCP MSS is the maximum amount of data in bytes that a host is willing to accept in a single TCP
segment. Each end of a TCP connection sends its desired MSS value to its peer-end during a
three-way handshake, where MSS is one of the TCP header options used in a TCP SYN packet.
TCP MSS is calculated based on the maximum transmission unit (MTU) of the egress interface of
the sender host.

When a TCP traffic goes through an IPSec VPN or any kind of VPN tunnel, additional headers are
added to the original packet to keep it secure. For IPSec tunnel mode, additional headers used
are IP, ESP, and optionally UDP (if port translation is present in the network). Because of these
additional headers, the size of the encapsulated packet goes beyond the MTU of the VPN
interface. The packet can get fragmented or dropped based on the DF policy.


To avoid packet fragmentation or drop, you can adjust the MSS value for the IPSec session by
enabling the TCP MSS clamping feature. Navigate to Networking > VPN > IPSec Sessions. When
you are adding an IPSec session or editing an existing one, expand the Advance Properties
section, and enable TCP MSS Clamping.

You can configure the pre-calculated MSS value suitable for the IPSec session by setting both
TCP MSS Direction and TCP MSS Value. The configured MSS value is used for MSS clamping.
You can opt to use the dynamic MSS calculation by setting the TCP MSS Direction and leaving
TCP MSS Value blank. The MSS value is auto-calculated based on the VPN interface MTU, VPN
overhead, and the path MTU (PMTU) when it is already determined. The effective MSS is
recalculated during each TCP handshake to handle the MTU or PMTU changes dynamically.
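
As a rough illustration of the arithmetic involved, the following Python sketch estimates a clamped
MSS from an assumed uplink MTU and assumed per-header overheads. The exact IPSec overhead
depends on the negotiated cipher, integrity algorithm, padding, and whether NAT traversal is in
use, so treat these numbers only as an example, not as the values NSX-T computes internally.

# Rough MSS estimate for TCP carried inside an IPSec (ESP, tunnel mode) VPN.
# All overhead values below are assumptions for illustration only.

UPLINK_MTU = 1500   # assumed MTU of the VPN uplink interface
OUTER_IP = 20       # outer IPv4 header added by tunnel mode
ESP_OVERHEAD = 56   # assumed ESP header + IV + padding + ICV
NAT_T_UDP = 8       # UDP header when NAT traversal (UDP 4500) is in use
INNER_IP = 20       # IPv4 header of the original packet
TCP_HEADER = 20     # TCP header of the original packet

encap_overhead = OUTER_IP + ESP_OVERHEAD + NAT_T_UDP
max_inner_packet = UPLINK_MTU - encap_overhead        # largest original packet that avoids fragmentation
clamped_mss = max_inner_packet - INNER_IP - TCP_HEADER

print(f"Encapsulation overhead: {encap_overhead} bytes")
print(f"Largest unfragmented inner packet: {max_inner_packet} bytes")
print(f"Clamped TCP MSS: {clamped_mss} bytes")        # about 1376 with these assumptions

With these assumed values, clamping the MSS to roughly 1376 bytes keeps each encapsulated
packet within the 1500-byte uplink MTU, which is exactly the fragmentation problem that the
TCP MSS clamping feature addresses.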

Adding L2 VPN Sessions


After you have configured an L2 VPN server and an L2 VPN client, you must add L2 VPN
sessions for both to complete the L2 VPN service configuration.

Add an L2 VPN Server Session


After creating an L2 VPN Server service, you must add an L2 VPN session and attach it to an
existing segment.

The following steps use the L2 VPN Sessions tab on the NSX Manager UI to create an L2 VPN
Server session. You also select an existing local endpoint and segment to attach to the L2 VPN
Server session.

Note You can also add an L2 VPN Server session immediately after you have successfully
configured the L2 VPN Server service. You click Yes when prompted to continue with the L2 VPN
Server configuration and select Sessions > Add Sessions on the Add L2 VPN Server panel. The
first few steps in the following procedure assume you selected No to the prompt to continue with
the L2 VPN Server configuration. If you selected Yes, proceed to step 3 in the following steps to
guide you with the rest of the L2 VPN Server session configuration.

Prerequisites

n You must have configured an L2 VPN Server service before proceeding. See Add an L2 VPN
Server Service.

n Obtain the information for the local endpoint and remote IP to use with the L2 VPN Server
session you are adding. To create a local endpoint, see Add Local Endpoints.

n Obtain the values for the pre-shared key (PSK) and the tunnel interface subnet to use with
the L2 VPN Server session.

n Obtain the name of the existing segment you want to attach to the L2 VPN Server session
you are creating. See Add a Segment for information.


Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Navigate to the Networking > VPN > L2 VPN Sessions tab.

3 Select Add L2 VPN Session > L2 VPN Server.

4 Enter a name for the L2 VPN Server session.

5 From the L2 VPN Service drop-down menu, select the L2 VPN Server service for which the
L2 VPN session is being created.

Note If you are adding this L2 VPN Server session from the Set L2VPN Server Sessions
dialog box, the L2 VPN Server service is already indicated above the Add L2 Session button.

6 Select an existing local endpoint from the drop-down menu.

If you want to create a different local endpoint, click the three-dot menu ( ) and select Add
Local Endpoint.

7 Enter the IP address of the remote site.

8 To enable or disable the L2 VPN Server session, click Admin Status.

By default, the value is set to Enabled, which means the L2 VPN Server session configuration is
pushed down to the NSX Edge node.

9 Enter the secret key value in Pre-shared Key.

Caution Be careful when sharing and storing a PSK value because it is considered sensitive
information.

10 Enter an IP subnet address in the Tunnel Interface using the CIDR notation.

For example, 4.5.6.6/24. This subnet address is required.

11 Enter a value in Remote ID.

For peer sites using certificate authentication, this ID must be the common name in the peer
site's certificate. For PSK peers, this ID can be any string. Preferably, use the public IP address
of the VPN or an FQDN for the VPN services as the Remote ID.

12 If you want to include this session as part of a specific group, enter the tag name in Tags.

13 Click Save and click Yes when prompted if you want to continue with the VPN service
configuration.

You are returned to the Add L2VPN Sessions panel and the Segments link is now enabled.


14 Attach an existing segment to the L2 VPN Server session.

a Click Segments > Set Segments.

b In the Set Segments dialog box, click Set Segment to attach an existing segment to the
L2 VPN Server session.

c From the Segment drop-down menu, select the VNI-based or VLAN-based segment that
you want to attach to the session.

d Enter a unique value in the VPN Tunnel ID that is used to identify the segment that you
selected.

e In the Local Egress Gateway IP text box, enter the IP address of the local gateway that
your workload VMs on the segment use as their default gateway. The same IP address
can be configured in the remote site on the extended segment.

f Click Save and then Close.

In the Set L2VPN Sessions pane or dialog box, the system has incremented the Segments
count for the L2 VPN Server session.

15 To finish the L2 VPN Server session configuration, click Close Editing.

Results

In the VPN Services tab, the system incremented the Sessions count for the L2 VPN Server
service that you configured.

What to do next

To complete the L2 VPN service configuration, you must also create an L2 VPN service in Client
mode and an L2 VPN client session. See Add an L2 VPN Client Service and Add an L2 VPN Client
Session.

Add an L2 VPN Client Session


You must add an L2 VPN Client session after creating an L2 VPN Client service, and attach it to
an existing segment.

The following steps use the L2 VPN Sessions tab on the NSX Manager UI to create an L2 VPN
Client session. You also select an existing local endpoint and segment to attach to the L2 VPN
Client session.

Note You can also add an L2 VPN Client session immediately after you have successfully
configured the L2 VPN Client service. Click Yes when prompted to continue with the L2 VPN
Client configuration and select Sessions > Add Sessions on the Add L2 VPN Client panel. The first
few steps in the following procedure assume you selected No to the prompt to continue with the
L2 VPN Client configuration. If you selected Yes, proceed to step 3 in the following steps to guide
you with the rest of the L2 VPN Client session configuration.


Prerequisites

n You must have configured an L2 VPN Client service before proceeding. See Add an L2 VPN
Client Service.

n Obtain the IP addresses information for the local IP and remote IP to use with the L2 VPN
Client session you are adding.

n Obtain the peer code that was generated during the L2 VPN server configuration. See
Download the Remote Side L2 VPN Configuration File.

n Obtain the name of the existing segment you want to attach to the L2 VPN Client session you
are creating. See Add a Segment.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select the Networking > VPN > L2 VPN Sessions.

3 Select Add L2 VPN Session > L2 VPN Client.

4 Enter a name for the L2 VPN Client session.

5 From the VPN Service drop-down menu, select the L2 VPN Client service with which the L2
VPN session is to be associated.

Note If you are adding this L2 VPN Client session from the Set L2VPN Client Sessions dialog
box, the L2 VPN Client service is already indicated above the Add L2 Session button.

6 In the Local IP address text box, enter the IP address of the L2 VPN Client session.

7 Enter the remote IP address of the IPSec tunnel to be used for the L2 VPN Client session.

8 In the Peer Configuration text box, enter the peer code generated when you configured the
L2 VPN Server service.

9 Enable or disable Admin Status.

By default, the value is set to Enabled, which means the L2 VPN Client session configuration is
pushed down to the NSX Edge node.

10 Click Save and click Yes when prompted if you want to continue with the VPN service
configuration.

11 Attach an existing segment to the L2 VPN Client session.

a Select Segments > Add Segments.

b In the Set Segments dialog box, click Add Segment.

c From the Segment drop-down menu, select the VNI-based or VLAN-based segment you
want to attach to the L2 VPN Client session.


d Enter a unique value in the VPN Tunnel ID that is used to identify the segment that you
selected.

e Click Close.

12 To finish the L2 VPN Client session configuration, click Close Editing.

Results

In the VPN Services tab, the sessions count is updated for the L2 VPN Client service that you
configured.

Download the Remote Side L2 VPN Configuration File


To configure the L2 VPN client session, you must obtain the peer code that was generated when
you configured the L2 VPN server session.

Prerequisites

n You must have configured an L2 VPN server service and a session successfully before
proceeding. See Add an L2 VPN Server Service.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Navigate to the Networking > VPN > L2 VPN Sessions tab.

3 In the table of L2 VPN sessions, expand the row for the L2 VPN server session you plan to
use for the L2 VPN client session configuration.

4 Click Download Config and click Yes on the Warning dialog box.

A text file with the name L2VPNSession_<name-of-L2-VPN-server-session>_config.txt is
downloaded. It contains the peer code for the remote side L2 VPN configuration.

Caution Be careful when storing and sharing the peer code because it contains a PSK value,
which is considered sensitive information.

For example, L2VPNSession_L2VPNServer_config.txt contains the following configuration.

[
{
"transport_tunnel_path": "/infra/tier-0s/ServerT0_AS/locale-services/1-policyconnectivity-693/
ipsec-vpn-services/IpsecService1/sessions/Routebase1",
"peer_code":
"MCw3ZjBjYzdjLHsic2l0ZU5hbWUiOiJSb3V0ZWJhc2UxIiwic3JjVGFwSXAiOiIxNjkuMjU0LjY0LjIiLCJkc3RUYXBJcCI6I
jE2OS4yNTQuNjQuMSIsImlrZU9wdGl
vbiI6ImlrZXYyIiwiZW5jYXBQcm90byI6ImdyZS9pcHNlYyIsImRoR3JvdXAiOiJkaDE0IiwiZW5jcnlwdEFuZERpZ2VzdCI6I
mFlcy1nY20vc2hhLTI1NiIsInBzayI
6IlZNd2FyZTEyMyIsInR1bm5lbHMiOlt7ImxvY2FsSWQiOiI2MC42MC42MC4xIiwicGVlcklkIjoiNTAuNTAuNTAuMSIsImxvY
2FsVnRpSXAiOiIxNjkuMi4yLjMvMzEifV19"
}
]

5 Copy the peer code, which you use to configure the L2 VPN client service and session.

Using the preceding configuration file example, the following peer code is what you copy to
use with the L2 VPN client configuration.

MCw3ZjBjYzdjLHsic2l0ZU5hbWUiOiJSb3V0ZWJhc2UxIiwic3JjVGFwSXAiOiIxNjkuMjU0LjY0LjIiLCJkc3RUYXBJcCI6Ij
E2OS4yNTQuNjQuMSIsImlrZU9wdGl
vbiI6ImlrZXYyIiwiZW5jYXBQcm90byI6ImdyZS9pcHNlYyIsImRoR3JvdXAiOiJkaDE0IiwiZW5jcnlwdEFuZERpZ2VzdCI6I
mFlcy1nY20vc2hhLTI1NiIsInBzayI
6IlZNd2FyZTEyMyIsInR1bm5lbHMiOlt7ImxvY2FsSWQiOiI2MC42MC42MC4xIiwicGVlcklkIjoiNTAuNTAuNTAuMSIsImxvY
2FsVnRpSXAiOiIxNjkuMi4yLjMvMzEifV19

What to do next

Configure the L2 VPN Client service and session. See Add an L2 VPN Client Service and Add an
L2 VPN Client Session.

Add Local Endpoints


You must configure a local endpoint to use with the IPSec VPN that you are configuring.

The following steps use the Local Endpoints tab on the NSX Manager UI. You can also create a
local endpoint while in the process of adding an IPSec VPN session by clicking the three-dot
menu ( ) and selecting Add Local Endpoint. If you are in the middle of configuring an IPSec VPN
session, proceed to step 3 in the following steps to guide you with creating a new local endpoint.

Prerequisites

n If you are using a certificate-based authentication mode for the IPSec VPN session that is to
use the local endpoint you are configuring, obtain the information about the certificate that
the local endpoint must use.

n Ensure that you have configured an IPSec VPN service to which this local endpoint is to be
associated.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Navigate to Networking > VPN > Local Endpoints and click Add Local Endpoint.

3 Enter a name for the local endpoint.

4 From the VPN Service drop-down menu, select the IPSec VPN service to which this local
endpoint is to be associated.


5 Enter an IP address for the local endpoint.

For an IPSec VPN service running on a Tier-0 gateway, the local endpoint IP address must be
different from the Tier-0 gateway's uplink interface IP address. The local endpoint IP address
you provide is associated with the loopback interface for the Tier-0 gateway and is also
published as a routable IP address over the uplink interface. For IPSec VPN service running
on a Tier-1 gateway, in order for the local endpoint IP address to be routable, the route
advertisement for IPSec local endpoints must be enabled in the Tier-1 gateway configuration.
See Add a Tier-1 Gateway for more information.

6 If you are using a certificate-based authentication mode for the IPSec VPN session, from the
Site Certificate drop-down menu, select the certificate that is to be used by the local
endpoint.

7 (Optional) Add a description in Description.

8 Enter the Local ID value that is used for identifying the local NSX Edge instance.

This local ID becomes the peer ID on the remote site. Preferably, use the public IP address or
an FQDN of the VPN service on this site as the local ID. For certificate-based VPN sessions defined using the
local endpoint, the local ID is derived from the certificate associated with the local endpoint.
The ID specified in the Local ID text box is ignored. The local ID derived from the certificate
for a VPN session depends on the extensions present in the certificate.

n If the X509v3 extension X509v3 Subject Alternative Name is not present in the certificate,
then the Distinguished Name (DN) is used as the local ID value.

n If the X509v3 extension X509v3 Subject Alternative Name is found in the certificate, then
one of the Subject Alternative Names is used as the local ID value.

9 From the Trusted CA Certificates and Certificate Revocation List drop-down menus, select
the appropriate certificates that are required for the local endpoint.

10 Specify a tag, if needed.

11 Click Save.
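
The local endpoint created with the preceding steps can also be defined through the Policy API.
The following sketch assumes an IPSec VPN service that runs on a tier-0 gateway; the path
segments and the field names (local_address, local_id) are illustrative and should be verified
against the NSX-T Data Center API Guide for your release.

PUT https://<nsx-mgr>/policy/api/v1/infra/tier-0s/<tier-0-id>/locale-services/<locale-service-id>/ipsec-vpn-services/<service-id>/local-endpoints/<local-endpoint-id>
{
    "display_name": "LocalEndpoint-Uplink1",
    "local_address": "192.0.2.10",
    "local_id": "192.0.2.10"
}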

Adding Profiles
NSX-T Data Center provides the system-generated IPSec tunnel profile and an IKE profile that
are assigned by default when you configure either an IPSec VPN or L2 VPN service. A system-
generated DPD profile is created for an IPSec VPN configuration.

The IKE and IPSec profiles provide information about the algorithms that are used to
authenticate, encrypt, and establish a shared secret between network sites. The DPD profile
provides information about the number of seconds to wait in between probes to detect if an
IPSec peer site is alive or not.

If you decide not to use the default profiles provided by NSX-T Data Center, you can configure
your own profile using the information in the topics that follow in this section.


Add IKE Profiles


The Internet Key Exchange (IKE) profiles provide information about the algorithms that are used
to authenticate, encrypt, and establish a shared secret between network sites when you
establish an IKE tunnel.

NSX-T Data Center provides system-generated IKE profiles that are assigned by default when
you configure an IPSec VPN or L2 VPN service. The following table lists the default profiles
provided.

Table 6-4. Default IKE Profiles Used for IPSec VPN or L2 VPN Services

nsx-default-l2vpn-ike-profile
n Used for an L2 VPN service configuration.
n Configured with IKE V2, the AES 128 encryption algorithm, the SHA2 256 digest algorithm, and
the Diffie-Hellman Group 14 key exchange algorithm.

nsx-default-l3vpn-ike-profile
n Used for an IPSec VPN service configuration.
n Configured with IKE V2, the AES 128 encryption algorithm, the SHA2 256 digest algorithm, and
the Diffie-Hellman Group 14 key exchange algorithm.

Instead of the default IKE profiles used, you can also select one of the compliance suites
supported starting with NSX-T Data Center 2.5. See About Supported Compliance Suites for
more information.

If you decide not to use the default IKE profiles or compliance suites provided, you can configure
your own IKE profile using the following steps.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Click the Networking > VPN > Profiles tab.

3 Select the IKE Profiles profile type, and click Add IKE Profile.

4 Enter a name for the IKE profile.


5 From the IKE Version drop-down menu, select the IKE version to use to set up a security
association (SA) in the IPSec protocol suite.

Table 6-5. IKE Versions


IKE Version Description

IKEv1 When selected, the IPSec VPN initiates and responds to an IKEv1 protocol
only.

IKEv2 This version is the default. When selected, the IPSec VPN initiates and
responds to an IKEv2 protocol only.

IKE-Flex If this version is selected and if the tunnel establishment fails with the IKEv2
protocol, the source site does not fall back and initiate a connection with
the IKEv1 protocol. Instead, if the remote site initiates a connection with the
IKEv1 protocol, then the connection is accepted.


6 Select the encryption, digest, and Diffie-Hellman group algorithms from the drop-down
menus. You can select multiple algorithms to apply or deselect any selected algorithms you
do not want to be applied.

Table 6-6. Algorithms Used

Encryption
Valid values: AES 128 (default), AES 256, AES GCM 128, AES GCM 192, AES GCM 256
The encryption algorithm used during the Internet Key Exchange (IKE) negotiation. The AES-GCM
algorithms are supported when used with IKEv2. They are not supported when used with IKEv1.

Digest
Valid values: SHA2 256 (default), SHA1, SHA2 384, SHA2 512
The secure hashing algorithm used during the IKE negotiation.
If AES-GCM is the only encryption algorithm selected in the Encryption Algorithm text box, then
no hash algorithms can be specified in the Digest Algorithm text box, per section 8 in RFC 5282.
In addition, the Pseudo-Random Function (PRF) algorithm PRF-HMAC-SHA2-256 is implicitly selected
and used in the IKE security association (SA) negotiation. The PRF-HMAC-SHA2-256 algorithm must
also be configured on the peer gateway in order for phase 1 of the IKE SA negotiation to
succeed.
If more algorithms are specified in the Encryption Algorithm text box, in addition to the
AES-GCM algorithm, then one or more hash algorithms can be selected in the Digest Algorithm text
box. In addition, the PRF algorithm used in the IKE SA negotiation is implicitly determined
based on the hash algorithms configured. At least one of the matching PRF algorithms must also
be configured on the peer gateway in order for phase 1 of the IKE SA negotiation to succeed. For
example, if the Encryption Algorithm text box contains AES 128 and AES GCM 128, and SHA1 is
specified in the Digest Algorithm text box, then the PRF-HMAC-SHA1 algorithm is used during the
IKE SA negotiation. It must also be configured on the peer gateway.

Diffie-Hellman Group
Valid values: Group 14 (default), Group 2, Group 5, Group 15, Group 16, Group 19, Group 20,
Group 21
The cryptography schemes that the peer site and the NSX Edge use to establish a shared secret
over an insecure communications channel.


Note When you attempt to establish an IPSec VPN tunnel with a GUARD VPN Client
(previously QuickSec VPN Client) using two encryption algorithms or two digest algorithms,
the GUARD VPN Client adds additional algorithms in the proposed negotiation list. For
example, if you specified AES 128 and AES 256 as the encryption algorithms and SHA2 256
and SHA2 512 as the digest algorithms to use in the IKE profile you are using to establish the
IPSec VPN tunnel, the GUARD VPN Client also proposes AES 192 and SHA2 384 in the
negotiation list. In this case, NSX-T Data Center uses the first encryption algorithm you
selected when establishing the IPSec VPN tunnel.

7 Enter a security association (SA) lifetime value, in seconds, if you want it different from the
default value of 86400 seconds (24 hours).

8 Provide a description and add a tag, as needed.

9 Click Save.

Results

A new row is added to the table of available IKE profiles. To edit or delete a non-system created
profile, click the three-dot menu ( ) and select from the list of actions available.
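
A custom IKE profile can also be created through the Policy API instead of the UI. The sketch
below mirrors the default values described earlier; the enumeration strings (for example,
IKE_V2, AES_128, SHA2_256, GROUP14) are assumptions about the API schema and should be checked
against the NSX-T Data Center API Guide for your release.

PUT https://<nsx-mgr>/policy/api/v1/infra/ipsec-vpn-ike-profiles/<profile-id>
{
    "display_name": "custom-ike-profile",
    "ike_version": "IKE_V2",
    "encryption_algorithms": ["AES_128"],
    "digest_algorithms": ["SHA2_256"],
    "dh_groups": ["GROUP14"],
    "sa_life_time": 86400
}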

Add IPSec Profiles


The Internet Protocol Security (IPSec) profiles provide information about the algorithms that are
used to authenticate, encrypt, and establish a shared secret between network sites when you
establish an IPSec tunnel.

NSX-T Data Center provides system-generated IPSec profiles that are assigned by default when
you configure an IPSec VPN or L2 VPN service. The following table lists the default IPSec profiles
provided.

Table 6-7. Default IPSec Profiles Used for IPSec VPN or L2 VPN Services

nsx-default-l2vpn-tunnel-profile
n Used for L2 VPN.
n Configured with the AES GCM 128 encryption algorithm and the Diffie-Hellman Group 14 key
exchange algorithm.

nsx-default-l3vpn-tunnel-profile
n Used for IPSec VPN.
n Configured with the AES GCM 128 encryption algorithm and the Diffie-Hellman Group 14 key
exchange algorithm.

Instead of the default IPSec profile, you can also select one of the compliance suites supported
starting with NSX-T Data Center 2.5. See About Supported Compliance Suites for more
information.

If you decide not to use the default IPSec profiles or compliance suites provided, you can
configure your own using the following steps.


Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Navigate to the Networking > VPN > Profiles tab.

3 Select the IPSec Profiles profile type, and click Add IPSec Profile.

4 Enter a name for the IPSec profile.

5 From the drop-down menus, select the encryption, digest, and Diffie-Hellman algorithms. You
can select multiple algorithms to apply.

Deselect the ones you do not want used.

Table 6-8. Algorithms Used

Encryption
Valid values: AES GCM 128 (default), AES 128, AES 256, AES GCM 192, AES GCM 256,
No Encryption Auth AES GMAC 128, No Encryption Auth AES GMAC 192,
No Encryption Auth AES GMAC 256, No Encryption
The encryption algorithm used during the Internet Protocol Security (IPSec) negotiation.

Digest
Valid values: SHA1, SHA2 256, SHA2 384, SHA2 512
The secure hashing algorithm used during the IPSec negotiation.

Diffie-Hellman Group
Valid values: Group 14 (default), Group 2, Group 5, Group 15, Group 16, Group 19, Group 20,
Group 21
The cryptography schemes that the peer site and the NSX Edge use to establish a shared secret
over an insecure communications channel.

6 Deselect PFS Group if you decide not to use the PFS Group protocol on your VPN service.

It is selected by default.

7 In the SA Lifetime text box, modify the default number of seconds before the IPSec tunnel
must be re-established.

By default, an SA lifetime of 24 hours (86400 seconds) is used.


8 Select the value for DF Bit to use with the IPSec tunnel.

The value determines how to handle the "Don't Fragment" (DF) bit included in the data
packet received. The acceptable values are described in the following table.

Table 6-9. DF Bit Values


DF Bit Value Description

COPY The default value. When this value is selected, NSX-T Data Center copies the
value of the DF bit from the received packet into the packet which is
forwarded. This value implies that if the data packet received has the DF bit
set, after encryption, the packet also has the DF bit set.

CLEAR When this value is selected, NSX-T Data Center ignores the value of the DF bit
in the data packet received, and the DF bit is always 0 in the encrypted packet.

9 Provide a description and add a tag, if necessary.

10 Click Save.

Results

A new row is added to the table of available IPSec profiles. To edit or delete a non-system
created profile, click the three-dot menu ( ) and select from the list of actions available.
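
A custom IPSec profile can likewise be created through the Policy API. The sketch below is
illustrative; the path and the field names (encryption_algorithms, dh_groups,
enable_perfect_forward_secrecy, df_policy) are assumptions about the API schema and should be
checked against the NSX-T Data Center API Guide for your release.

PUT https://<nsx-mgr>/policy/api/v1/infra/ipsec-vpn-tunnel-profiles/<profile-id>
{
    "display_name": "custom-ipsec-profile",
    "encryption_algorithms": ["AES_GCM_128"],
    "dh_groups": ["GROUP14"],
    "enable_perfect_forward_secrecy": true,
    "df_policy": "COPY",
    "sa_life_time": 86400
}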

Add DPD Profiles


A DPD (Dead Peer Detection) profile provides information about the number of seconds to wait
in between probes to detect if an IPSec peer site is alive or not.

NSX-T Data Center provides a system-generated DPD profile, named nsx-default-l3vpn-dpd-profile,
that is assigned by default when you configure an IPSec VPN service. This default DPD profile
uses the periodic DPD probe mode.

If you decide not to use the default DPD profile provided, you can configure your own using the
following steps.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Navigate to Networking > VPN > Profiles.

3 Select DPD Profiles from the Select Profile Type drop-down menu, and click Add DPD
Profile.

4 Enter a name for the DPD profile.

5 From the DPD Probe Mode drop-down menu, select Periodic or On Demand mode.

For a periodic DPD probe mode, a DPD probe is sent every time the specified DPD probe
interval time is reached.


For an on-demand DPD probe mode, a DPD probe is sent if no IPSec packet is received from
the peer site after an idle period. The value in DPD Probe Interval determines the idle period
used.

6 In the DPD Probe Interval text box, enter the number of seconds you want the NSX Edge
node to wait before sending the next DPD probe.

For a periodic DPD probe mode, the valid values are between 3 and 360 seconds. The
default value is 60 seconds.

For an on-demand probe mode, the valid values are between 1 and 10 seconds. The default
value is 3 seconds.

When the periodic DPD probe mode is set, the IKE daemon running on the NSX Edge sends a
DPD probe periodically. If the peer site responds within half a second, the next DPD probe is
sent after the configured DPD probe interval time has been reached. If the peer site does not
respond, then the DPD probe is sent again after waiting for half a second. If the remote peer
site continues not to respond, the IKE daemon resends the DPD probe again, until a response
is received or the retry count has been reached. Before the peer site is declared to be dead,
the IKE daemon resends the DPD probe up to the maximum number of times specified in the Retry
Count property. After the peer site is declared dead, the NSX Edge node then tears down
the security association (SA) on the dead peer's link.

When the on-demand DPD mode is set, the DPD probe is sent only if no IPSec traffic is
received from the peer site after the configured DPD probe interval time has been reached.

7 In the Retry Count text box, enter the number of retries allowed.

The valid values are between 1 and 100. The default retry count is 5.

8 Provide a description and add a tag, as needed.

9 To enable or disable the DPD profile, click the Admin Status toggle.

By default, the value is set to Enabled. When the DPD profile is enabled, the DPD profile is
used for all IPSec sessions in the IPSec VPN service that uses the DPD profile.

10 Click Save.

Results

A new row is added to the table of available DPD profiles. To edit or delete a non-system created
profile, click the three-dot menu ( ) and select from the list of actions available.
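
A custom DPD profile can also be created through the Policy API. The sketch below corresponds to
a periodic probe every 60 seconds with five retries; the path and the field names
(dpd_probe_mode, dpd_probe_interval, retry_count) are assumptions about the API schema and
should be checked against the NSX-T Data Center API Guide for your release.

PUT https://<nsx-mgr>/policy/api/v1/infra/ipsec-vpn-dpd-profiles/<profile-id>
{
    "display_name": "custom-dpd-profile",
    "dpd_probe_mode": "PERIODIC",
    "dpd_probe_interval": 60,
    "retry_count": 5,
    "enabled": true
}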

Add an Autonomous Edge as an L2 VPN Client


You can use L2 VPN to extend your Layer 2 networks to a site that is not managed by NSX-T
Data Center. An autonomous NSX Edge can be deployed on the site, as an L2 VPN client. The
autonomous NSX Edge is simple to deploy, easily programmable, and provides high-performance
VPN. The autonomous NSX Edge is deployed using an OVF file on a host that is not managed by


NSX-T Data Center. You can also enable high availability (HA) for VPN redundancy by deploying
primary and secondary autonomous Edge L2 VPN clients.

Prerequisites

n Create a port group and bind it to the vSwitch on your host.

n Create a port group for your internal L2 extension port.

n Obtain the IP addresses for the local IP and remote IP to use with the L2 VPN Client session
you are adding.

n Obtain the peer code that was generated during the L2 VPN server configuration.

Procedure

1 Using vSphere Web Client, log in to the vCenter Server that manages the non-NSX
environment.

2 Select Hosts and Clusters and expand clusters to show the available hosts.

3 Right-click the host where you want to install the autonomous NSX Edge and select Deploy
OVF Template.

4 Enter the URL to download and install the OVF file from the Internet or click Browse to locate
the folder on your computer that contains the autonomous NSX Edge OVF file and click Next.

5 On the Select name and folder page, enter a name for the autonomous NSX Edge and select
the folder or data center where you want to deploy. Then click Next.

6 On the Select a compute resource page, select the destination of the compute resource.

7 On the OVF Template Details page, review the template details and click Next.

8 On the Configuration page, select a deployment configuration option.

9 On the Select storage page, select the location to store the files for the configuration and
disk files.

10 On the Select networks page, configure the networks that the deployed template must use.
Select the port group you created for the uplink interface, the port group that you created
for the L2 extension port, and enter an HA interface. Click Next.

11 On the Customize Template page, enter the following values and click Next.

a Type and retype the CLI admin password.

b Type and retype the CLI enable password.

c Type and retype the CLI root password.

d Enter the IPv4 address for the Management Network.

e Enable the option to deploy an autonomous Edge.


f Enter the External Port details for VLAN ID, exit interface, IP address, and IP prefix length
such that the exit interface maps to the Network with the port group of your uplink
interface.

If the exit interface is connected to a trunk port group, specify a VLAN ID. For example,
20,eth2,192.168.5.1,24. You can also configure your port group with a VLAN ID and
use VLAN 0 for the External Port.

g (Optional) To configure High Availability, enter the HA Port details where the exit
interface maps to the appropriate HA Network.

h (Optional) When deploying an autonomous NSX Edge as a secondary node for HA, select
Deploy this autonomous-edge as a secondary node.

Use the same OVF file as the primary node and enter the primary node's IP address, user
name, password, and thumbprint.

To retrieve the thumbprint of the primary node, log in to the primary node and run the
following command:

get certificate api thumbprint

Ensure that the VTEP IP addresses of the primary and secondary nodes are in the same
subnet and that they connect to the same port group. When you complete the
deployment and start the secondary edge, it connects to the primary node to form an
edge cluster.

12 On the Ready to complete page, review the autonomous Edge settings and click Finish.

Note If there are errors during the deployment, a message of the day is displayed on the
CLI. You can also use an API call to check for errors:

GET https://<nsx-mgr>/api/v1/node/status

The errors are categorized as soft errors and hard errors. Use API calls to resolve the soft
errors as required. You can clear the message of day using an API call:

POST /api/v1/node/status?action=clear_bootup_error

13 Power on the autonomous NSX Edge appliance.

14 Log in to the autonomous NSX Edge client.

15 Select L2VPN > Add Session and enter the following values:

a Enter a session name.

b Enter the local IP address and the remote IP address.

c Enter the peer code from the L2VPN server. See Download the Remote Side L2 VPN
Configuration File for details on obtaining the peer code.

16 Click Save.


17 Select Port > Add Port to create an L2 extension port.

18 Enter a name, a VLAN, and select an exit interface.

19 Click Save.

20 Select L2VPN > Attach Port and enter the following values:

a Select the L2 VPN session that you created.

b Select the L2 extension port that you created.

c Enter a tunnel ID.

21 Click Attach.

You can create additional L2 extension ports and attach them to the session if you need to
extend multiple L2 networks.

22 Use the browser to log in to the autonomous NSX Edge or use API calls to view the status of
the L2VPN session.

Note If the L2VPN server configuration changes, ensure that you download the peer code
again and update the session with the new peer code.

Check the Realized State of an IPSec VPN Session


After you send a configuration update request for an IPSec VPN session, you can check to see if
the requested state has been successfully processed in the NSX-T Data Center local control
plane on the transport nodes.

When you create an IPSec VPN session, multiple entities are created: IKE profile, DPD profile,
tunnel profile, local endpoint, IPSec VPN service, and IPSec VPN session. These entities all share
the same IPSecVPNSession span, so you can obtain the realization state of all the entities of the
IPSec VPN session by using the same GET API call. You can check the realization state using only
the API.

Prerequisites

n Familiarize yourself with IPSec VPN. See Understanding IPSec VPN.

n Verify the IPSec VPN is configured successfully. See Add an IPSec VPN Service.

n You must have access to the NSX Manager API.

Procedure

1 Send a POST, PUT, or DELETE request API call.


For example:

PUT https://<nsx-mgr>/api/v1/vpn/ipsec/sessions/8dd1c386-9b2c-4448-85b8-51ff649fae4f
{
"resource_type": "PolicyBasedIPSecVPNSession",
"id": "8dd1c386-9b2c-4448-85b8-51ff649fae4f",
"display_name": "Test RZ_UPDATED",
"ipsec_vpn_service_id": "7adfa455-a6fc-4934-a919-f5728957364c",
"peer_endpoint_id": "17263ca6-dce4-4c29-bd8a-e7d12bd1a82d",
"local_endpoint_id": "91ebfa0a-820f-41ab-bd87-f0fb1f24e7c8",
"enabled": true,
"policy_rules": [
{
"id": "1026",
"sources": [
{
"subnet": "1.1.1.0/24"
}
],
"logged": true,
"destinations": [
{
"subnet": "2.1.4.0/24"
}
],
"action": "PROTECT",
"enabled": true,
"_revision": 1
}
]
}

2 Locate and copy the value of x-nsx-requestid from the response header returned.

For example:

x-nsx-requestid e550100d-f722-40cc-9de6-cf84d3da3ccb

3 Request the realization state of the IPSec VPN session using the following GET call.

GET https://<nsx-mgr>/api/v1/vpn/ipsec/sessions/<ipsec-vpn-session-id>/state?request_id=<request-id>

The following API call uses the id and x-nsx-requestid values in the examples used in the
previous steps.

GET https://<nsx-mgr>/api/v1/vpn/ipsec/sessions/8dd1c386-9b2c-4448-85b8-51ff649fae4f/state?
request_id=e550100d-f722-40cc-9de6-cf84d3da3ccb

Following is an example of a response you receive when the realization state is in_progress.

{
"details": [
{
"sub_system_type": "TransportNode",
"sub_system_id": "fe651e63-04bd-43a4-a8ec-45381a3b71b9",
"state": "in_progress",
"failure_message": "CCP Id:ab5958df-d98a-468e-a72b-d89dcdae5346, Message:State realization
is in progress at the node."
},
{
"sub_system_type": "TransportNode",
"sub_system_id": "ebe174ac-e4f1-4135-ba72-3dd2eb7099e3",
"state": "in_sync"
}
],
"state": "in_progress",
"failure_message": "The state realization is in progress at transport nodes."
}

Following is an example of a response you receive when the realization state is in_sync.

{
"details": [
{
"sub_system_type": "TransportNode",
"sub_system_id": "7046e8f4-a680-11e8-9bc3-020020593f59",
"state": "in_sync"
}
],
"state": "in_sync"
}

The following are examples of possible responses you receive when the realization state is
unknown.

{
"state": "unknown",
"failure_message": "Unable to get response from any CCP node. Please retry operation after
some time."
}

{
"details": [
{
"sub_system_type": "TransportNode",
"sub_system_id": "3e643776-5def-11e8-94ae-020022e7749b",
"state": "unknown",
"failure_message": "CCP Id:ab5958df-d98a-468e-a72b-d89dcdae5346, Message: Unable to get
response from the node. Please retry operation after some time."
},
{
"sub_system_type": "TransportNode",
"sub_system_id": "4784ca0a-5def-11e8-93be-020022f94b73",
"state": "in_sync"
}
],
"state": "unknown",
"failure_message": "The state realization is unknown at transport nodes"
}

After you perform an entity DELETE operation, you might receive the status of NOT_FOUND, as
shown in the following example.

{
"http_status": "NOT_FOUND",
"error_code": 600,
"module_name": "common-services",
"error_message": "The operation failed because object identifier LogicalRouter/
61746f54-7ab8-4702-93fe-6ddeb804 is missing: Object identifiers are case sensitive.."
}

If the IPSec VPN service associated with the session is disabled, you receive the BAD_REQUEST
response, as shown in the following example.

{
"httpStatus": "BAD_REQUEST",
"error_code": 110199,
"module_name": "VPN",
"error_message": "VPN service f9cfe508-05e3-4e1d-b253-fed096bb2b63 associated with the
session 8dd1c386-9b2c-4448-85b8-51ff649fae4f is disabled. Can not get the realization status."
}

Monitor and Troubleshoot VPN Sessions


After you configure an IPSec or L2 VPN session, you can monitor the VPN tunnel status and
troubleshoot any reported tunnel issues using the NSX Manager user interface.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Navigate to the Networking > VPN > IPSec Sessions or Networking > VPN > L2 VPN
Sessions tab.

3 Expand the row for the VPN session that you want to monitor or troubleshoot.

4 To view the status of the VPN tunnel, click the info icon.

The Status dialog box appears and displays the available statuses.

5 To view the VPN tunnel traffic statistics, click View Statistics in the Status column.

The Statistics dialog box displays the traffic statistics for the VPN tunnel.

6 To view the error statistics, click the View More link in the Statistics dialog box.

7 To close the Statistics dialog box, click Close.

Network Address Translation (NAT)
7
Network address translation (NAT) maps one IP address space to another. You can configure
NAT on tier-0 and tier-1 gateways.

This chapter includes the following topics:

n Configure NAT on a Gateway

Configure NAT on a Gateway


You can configure NAT and NAT 64 rules on a tier-0 or tier-1 gateway.

NAT64 is a mechanism for translating IPv6 packets to IPv4 packets, and vice versa. NAT64
allows IPv6-only clients to contact IPv4 servers using unicast UDP or TCP. NAT64 only allows an
IPv6-only client to initiate communications to an IPv4-only server. To perform IPv6-IPv4
translation, binding and session information are saved. NAT64 is stateful.

n NAT64 is only supported for external IPv6 traffic coming in through the NSX-T edge uplink to
the IPv4 server in the overlay.

n NAT64 supports TCP and UDP; packets of all other protocol types are discarded. NAT64 does
not support ICMP, fragmentation, or IPv6 packets that have extension headers.

For NAT, source NAT (SNAT), destination NAT (DNAT), or reflexive NAT are supported. If a tier-0
gateway is running in active-active mode, you cannot configure SNAT or DNAT because
asymmetrical paths might cause issues. You can only configure reflexive NAT (sometimes called
stateless NAT). If a tier-0 gateway is running in active-standby mode, you can configure SNAT,
DNAT, or reflexive NAT.

You can also disable SNAT or DNAT for an IP address or a range of addresses. If an address has
multiple NAT rules, the rule with the highest priority is applied.

Note DNAT is not supported on a tier-1 gateway where policy-based IPSec VPN is configured.

SNAT configured on a tier-0 gateway's external interface processes traffic from a tier-1 gateway,
and from another external interface on the tier-0 gateway.
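
NAT rules can also be managed as child objects of a gateway through the Policy API. The
following SNAT sketch is illustrative; the URI and the field names (source_network,
translated_network, firewall_match) are assumptions about the API schema and should be verified
against the NSX-T Data Center API Guide before use.

PUT https://<nsx-mgr>/policy/api/v1/infra/tier-1s/<tier-1-id>/nat/USER/nat-rules/<rule-id>
{
    "display_name": "snat-web-tier",
    "action": "SNAT",
    "source_network": "10.10.10.0/24",
    "translated_network": "203.0.113.10",
    "firewall_match": "MATCH_INTERNAL_ADDRESS",
    "enabled": true,
    "sequence_number": 100
}

In this sketch, traffic leaving the 10.10.10.0/24 segment is translated to the single routable
address 203.0.113.10, and the gateway firewall evaluates the original (internal) address and port.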


Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > NAT.

3 Select a gateway.

4 Next to View, select NAT or NAT64.

5 Click Add NAT Rule or Add NAT 64 Rule.

6 Enter a Name.

7 If you are configuring NAT, select an action. For NAT 64, the action is NAT64.

NAT Option                              Description

Tier-1 gateway                          Available actions are SNAT, DNAT, Reflexive, NO SNAT, and NO DNAT.

Tier-0 gateway in active-standby mode   Available actions are SNAT, DNAT, NO SNAT, and NO DNAT.

Tier-0 gateway in active-active mode    The available action is Reflexive.

8 Enter a Source. If this text box is left blank, the NAT rule applies to all sources outside of the
local subnet.

Option    Description

NAT       Specify an IP address, or an IP address range in CIDR format. For SNAT, NO_SNAT, and
          REFLEXIVE rules, this is a mandatory text box and represents the source network of the
          packets leaving the network.

NAT64     Enter an IPv6 address, or an IPv6 CIDR.

9 (Required) Enter a Destination.

Option    Description

NAT       Specify an IP address, or an IP address range in CIDR format.

NAT64     Enter an IPv6 address, or an IPv6 address range in CIDR format with the prefix /96.
          The prefix /96 is supported because the destination IPv4 IP is embedded as the last
          4 bytes in the IPv6 address.

10 Enter a value for Translated IP.

Option    Description

NAT       Specify an IPv4 address, or an IP address range in CIDR format.

NAT64     Specify an IPv4 address, a comma-separated list of IPv4 addresses, or an IPv4 address
          range. IPv4 CIDR is not supported.


11 Toggle Enable to enable the rule.

12 In the Service column, click Set to select services. See Add a Service for more information.
For NAT 64, select a pre-defined service or create a user-defined service with TCP or UDP,
with the source/destination port as Any, or a specific port.

13 For Apply To, click Set and select objects that this rule applies to.

The available objects are Tier-0 Gateways, Interfaces, Labels, Service Instance Endpoints,
and Virtual Endpoints.

Note If you are using Federation and creating a NAT rule from a Global Manager appliance,
you can select site-specific IP addresses for NAT. You can apply the NAT rule to any of the
following location spans:

n Do not click Set if you want to use the default option of applying the NAT rule to all
locations.

n Click Set. In the Apply To dialog box, select the locations whose entities you want to
apply the rule to and then select Apply NAT rule to all entities.

n Click Set. In the Apply To dialog box, select a location and then select Interfaces from the
Categories drop-down menu. You can select specific interfaces to which you want to
apply the NAT rule.

See Features and Configurations Supported in Federation for more details.

14 Enter a value for Translated Port.

15 Select a firewall setting.

Option    Description

NAT       Available settings are:
          n Match External Address - The packet is processed by firewall rules that match the
            combination of translated IP address and translated port.
          n Match Internal Address - The packet is processed by firewall rules that match the
            combination of original IP address and original port.
          n Bypass - The packet bypasses firewall rules.

NAT64     The available setting is Bypass - the packet bypasses firewall rules.

16 (Optional) Toggle the logging button to enable logging.

17 Specify a priority value.

A lower value means a higher priority. The default is 100.

18 Click Save.

Load Balancing
8
The NSX-T Data Center logical load balancer offers high-availability service for applications and
distributes the network traffic load among multiple servers.

(Figure: clients reach a virtual server on a tier-1 load balancer, which distributes the traffic across the servers in a health-checked pool.)
The load balancer distributes incoming service requests evenly among multiple servers in such a
way that the load distribution is transparent to users. Load balancing helps in achieving optimal
resource utilization, maximizing throughput, minimizing response time, and avoiding overload.

You can map a virtual IP address to a set of pool servers for load balancing. The load balancer
accepts TCP, UDP, HTTP, or HTTPS requests on the virtual IP address and decides which pool
server to use.

Depending on your environment needs, you can scale the load balancer performance by
increasing the existing virtual servers and pool members to handle heavy network traffic load.

Note The logical load balancer is supported only on the tier-1 gateway. Only one load balancer
can be attached to a tier-1 gateway.

This chapter includes the following topics:

n Key Load Balancer Concepts

n Setting Up Load Balancer Components

n Groups Created for Server Pools and Virtual Servers


Key Load Balancer Concepts


Load balancer includes virtual servers, server pools, and health checks monitors.

A load balancer is connected to a tier-1 logical router. The load balancer hosts one or more
virtual servers. A virtual server is an abstraction of an application service, represented by a
unique combination of IP address, port, and protocol. A virtual server is associated with one or
more server pools. A server pool consists of a group of servers and includes the individual
server pool members.

To test whether each server is correctly running the application, you can add health check
monitors that check the health status of a server.

(Figure: an NSX Edge node hosts two tier-1 gateways, each with its own load balancer; each load balancer hosts virtual servers that are associated with server pools and health check monitors.)

Scaling Load Balancer Resources


When you configure a load balancer, you can specify a size (small, medium, large, or extra large).
The size determines the number of virtual servers, server pools, and pool members the load
balancer can support.

A load balancer runs on a tier-1 gateway, which must be in active-standby mode. The gateway
runs on NSX Edge nodes. The form factor of the NSX Edge node (bare metal, small, medium,
large, or extra large) determines the number of load balancers that the NSX Edge node can
support. Note that in Manager mode, you create logical routers, which have similar functionality
to gateways. See Chapter 1 NSX Manager.

For more information about what the different load balance sizes and NSX Edge form factors can
support, see https://configmax.vmware.com.


Note that using a small NSX Edge node to run a small load balancer is not recommended in a
production environment.

You can call an API to get the load balancer usage information of an NSX Edge node. If you use
Policy mode to configure load balancing, run the following command:

GET /policy/api/v1/infra/lb-node-usage?node_path=<node-path>

If you use Manager mode to configure load balancing, run the following command:

GET /api/v1/loadbalancer/usage-per-node/<node-id>

The usage information includes the number of load balancer objects (such as load balancer
services, virtual servers, server pools, and pool members) that are configured on the node. For
more information, see the NSX-T Data Center API Guide.

Supported Load Balancer Features


NSX-T Data Center load balancer supports the following features.

n Layer 4 - TCP and UDP

n Layer 7 - HTTP and HTTPS with load balancer rules support

n Server pools - static and dynamic with NSGroup

n Persistence - Source-IP and Cookie persistence mode

n Health check monitors - Active monitors, which include HTTP, HTTPS, TCP, UDP, and ICMP,
and passive monitors

n SNAT - Transparent, Automap, and IP List

n HTTP upgrade - For applications that use HTTP upgrade, such as WebSocket, the client or
server can request an HTTP Upgrade, which is supported. By default, NSX-T Data Center accepts
HTTP upgrade client requests using the HTTP application profile.

To detect an inactive client or server communication, the load balancer uses the HTTP
application profile response timeout feature set to 60 seconds. If the server does not send
traffic during the 60-second interval, NSX-T Data Center ends the connection on the client
and server side. Default application profiles cannot be edited. To edit HTTP application profile
settings, create a custom profile.

Note: SSL terminate mode and proxy mode are not supported in the NSX-T Data Center limited
export release.


(Figure: a virtual server references an application profile (Fast TCP, Fast UDP, or HTTP), client-SSL and server-SSL profiles, a persistence profile (Source-IP or Cookie), LB rules, and a pool; pool members are checked by active monitors (HTTP, HTTPS, TCP, UDP, ICMP) and a passive monitor.)

Load Balancer Topologies


Load balancers are typically deployed in either inline or one-arm mode.

Inline Topology
In the inline mode, the load balancer is in the traffic path between the client and the server.
Clients and servers must not be connected to the same tier-1 logical router. This topology does
not require virtual server SNAT.


(Figure: inline topology - the load balancer on a tier-1 gateway is in the traffic path between clients and the servers attached behind the tier-1 gateways.)

One-Arm Topology
In one-arm mode, the load balancer is not in the traffic path between the client and the server. In
this mode, the client and the server can be anywhere. The load balancer performs Source NAT
(SNAT) to force return traffic from the server destined to the client to go through the load
balancer. This topology requires virtual server SNAT to be enabled.

When the load balancer receives the client traffic to the virtual IP address, the load balancer
selects a server pool member and forwards the client traffic to it. In the one-arm mode, the load
balancer replaces the client IP address with the load balancer IP address so that the server
response is always sent to the load balancer and the load balancer forwards the response to the
client.

(Figure: one-arm topology - load balancers on dedicated one-arm tier-1 gateways host the virtual servers, while clients and servers can be attached anywhere in the topology.)


Tier-1 Service Chaining


If a tier-1 gateway or logical router hosts different services, such as NAT, firewall, and load
balancer, the services are applied in the following order:

n Ingress

DNAT - Firewall - Load Balancer

Note: If DNAT is configured with Firewall Bypass, Firewall is skipped but not Load Balancer.

n Egress

Load Balancer - Firewall - SNAT

Setting Up Load Balancer Components


To use logical load balancers, you must start by configuring a load balancer and attaching it to a
tier-1 gateway.

Next, you set up health check monitoring for your servers. You must then configure server pools
for your load balancer. Finally, you must create a layer 4 or layer 7 virtual server for your load
balancer and attach the newly created virtual server to the load balancer.

(Figure: setup order - 1 attach the load balancer to the tier-1 gateway, 2 configure the health check monitor, 3 configure the server pool, 4 create the virtual server.)

Add Load Balancers


A load balancer is created and attached to the tier-1 gateway.

You can configure the level of error messages you want the load balancer to add to the error log.

Note Avoid setting the log level to DEBUG on load balancers with significant traffic, because
the number of messages printed to the log affects performance.


(Figure: setup order - 1 attach the load balancer to the tier-1 gateway, 2 configure the health check monitor, 3 configure the server pool, 4 create the virtual server.)

Prerequisites

Verify that a tier-1 gateway is configured. See Chapter 3 Tier-1 Gateway.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Load Balancing > Add Load Balancer.

3 Enter a name and a description for the load balancer.

4 Select the load balancer size based on the number of virtual servers and pool members you
need and the available resources.

5 Select the already configured tier-1 gateway to attach to this load balancer from the drop-
down menu.

The tier-1 gateway must be in the Active-Standby mode.

6 Define the severity level of the error log from the drop-down menu.

The load balancer records issues of different severity levels in the error log.

7 (Optional) Enter tags to make searching easier.

You can specify a tag to set a scope of the tag.

8 Click Save.

Creating the load balancer and attaching it to the tier-1 gateway takes about three minutes,
after which the configuration status appears green and Up.

If the status is Down, click the information icon and resolve the error before you proceed.

9 (Optional) Delete the load balancer.

a Detach the load balancer from the virtual server and tier-1 gateway.

b Select the load balancer.


c Click the vertical ellipses button.

d Select Delete.
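
The UI steps above correspond to a single load balancer service object in the Policy API. The
following sketch is illustrative; the field names (size, connectivity_path, error_log_level) are
assumptions about the API schema and should be verified against the NSX-T Data Center API Guide.

PUT https://<nsx-mgr>/policy/api/v1/infra/lb-services/<lb-service-id>
{
    "display_name": "web-lb",
    "size": "SMALL",
    "connectivity_path": "/infra/tier-1s/<tier-1-id>",
    "error_log_level": "INFO",
    "enabled": true
}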

Add an Active Monitor


The active health monitor is used to test whether a server is available. The active health
monitor uses several types of tests, such as sending a basic ping to servers or advanced HTTP
requests, to monitor application health.

Servers that fail to respond within a certain time period or respond with errors are excluded from
future connection handling until a subsequent periodic health check finds these servers to be
healthy.

Active health checks are performed on server pool members after the pool member is attached
to a virtual server and that virtual server is attached to a tier-1 gateway. The tier-1 uplink IP
address is used for the health check.

Note More than one active health monitor can be configured per server pool.

(Figure: setup order - 1 attach the load balancer to the tier-1 gateway, 2 configure the health check monitor (active and passive health checks are optional), 3 configure the server pool, 4 create the virtual server.)
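
An active monitor can also be defined through the Policy API before you walk through the UI
procedure that follows. The sketch below shows an HTTP monitor that uses the interval and
timeout semantics described later in this procedure; the resource type and the field names are
assumptions about the API schema and should be verified against the NSX-T Data Center API Guide.

PUT https://<nsx-mgr>/policy/api/v1/infra/lb-monitor-profiles/<monitor-id>
{
    "resource_type": "LBHttpMonitorProfile",
    "display_name": "http-health-check",
    "monitor_port": 80,
    "interval": 5,
    "timeout": 15,
    "fall_count": 3,
    "rise_count": 3,
    "request_method": "GET",
    "request_url": "/",
    "response_status_codes": [200]
}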

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Load Balancing > Monitors > Active > Add Active Monitor.

3 Select a protocol for the server from the drop-down menu.

The predefined protocols available in NSX Manager are HTTP, HTTPS, ICMP, TCP, and UDP.

4 Select the HTTP protocol.


5 Configure the values to monitor a service pool.

You can also accept the default active health monitor values.

Option                  Description

Name and Description    Enter a name and description for the active health monitor.

Monitoring Port         Set the value of the monitoring port.

Monitoring Interval     Set the time, in seconds, after which the monitor sends another
                        connection request to the server.

Timeout Period          Set the time, in seconds, within which the server must respond before
                        the probe is considered failed.

Fall Count              When consecutive health check failures reach this value, the server is
                        considered temporarily unavailable (DOWN).

Rise Count              After the server is marked DOWN, it must pass this number of consecutive
                        health checks before it is considered available (UP) again.

Tags                    Enter tags to make searching easier.
                        You can specify a tag to set a scope of the tag.

For example, if the monitoring interval is set to 5 seconds and the timeout to 15 seconds, the
load balancer sends requests to the server every 5 seconds. In each probe, if the expected
response is received from the server within 15 seconds, then the health check result is OK. If
not, then the result is CRITICAL. If the three most recent health check results are all OK, the
server is considered UP.

6 To configure the HTTP Request, click Configure.

7 Enter the HTTP request and response configuration details.

Option Description

HTTP Method Select the method to detect the server status from the drop-down menu,
GET, OPTIONS, POST, HEAD, and PUT.

HTTP Request URL Enter the request URI for the method.

HTTP Request Version Select the supported request version from the drop-down menu.
You can also accept the default version, HTTP_VERSION_1.

HTTP Response Header Click Add and enter the HTTP response header name and corresponding
value.
The default header value is 4000. The maximum header value is 64,000.

HTTP Request Body Enter the request body.


Valid for the POST and PUT methods.

HTTP Response Code Enter the response codes that the monitor expects to match in the status line of
the HTTP response.
The response code list is comma-separated.
For example, 200,301,302,401.

HTTP Response Body If the HTTP response body string and the HTTP health check response body
match, then the server is considered as healthy.


8 Click Save.

9 Select the HTTPS protocol from the drop-down list.

10 Complete step 5.

11 Click Configure.

12 Enter the HTTP request and response and SSL configuration details.

Option Description

Name and Description Enter a name and description for the active health monitor.

HTTP Method Select the method to detect the server status from the drop-down menu,
GET, OPTIONS, POST, HEAD, and PUT.

HTTP Request URL Enter the request URI for the method.

HTTP Request Version Select the supported request version from the drop-down menu.
You can also accept the default version, HTTP_VERSION_1.

HTTP Response Header Click Add and enter the HTTP response header name and corresponding
value.
The default header value is 4000. The maximum header value is 64,000.

HTTP Request Body Enter the request body.


Valid for the POST and PUT methods.

HTTP Response Code Enter the response codes that the monitor expects to match in the status line of
the HTTP response.
The response code list is comma-separated.
For example, 200,301,302,401.

HTTP Response Body If the HTTP response body string and the HTTP health check response body
match, then the server is considered as healthy.

Server SSL Toggle the button to enable the SSL server.

Client Certificate (Optional) Select a certificate from the drop-down menu to be used if the
server does not host multiple host names on the same IP address or if the
client does not support an SNI extension.

Server SSL Profile (Optional) Assign a default SSL profile from the drop-down menu that
defines reusable and application-independent client-side SSL properties.
Click the vertical ellipses and create a custom SSL profile.

Trusted CA Certificates (Optional) You can require the client to have a CA certificate for
authentication.

Mandatory Server Authentication (Optional) Toggle the button to enable server authentication.

Certificate Chain Depth (Optional) Set the authentication depth for the client certificate chain.

Certificate Revocation List (Optional) Set a Certificate Revocation List (CRL) in the client-side SSL profile
to reject compromised client certificates.

13 Select the ICMP protocol.

14 Complete step 5 and assign the data size in bytes of the ICMP health check packet.

15 Select the TCP protocol.


16 Complete step 5. You can leave the TCP data parameters empty.

If neither the data sent nor the data expected is specified, a three-way TCP handshake
is established to validate the server health and no data is sent.

If specified, the expected data must be a string. Regular expressions are not supported.

17 Select the UDP protocol.

18 Complete step 5 and configure the UDP data.

Required Option Description

UDP Data Sent Enter the string to be sent to a server after a connection is established.

UDP Data Expected Enter the string expected to be received from the server.
The server is considered UP only when the received string matches this
definition.

What to do next

Associate the active health monitor with a server pool. See Add a Server Pool.
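
The same active monitor can also be created programmatically. The following Python sketch shows one
possible way to define an HTTP active monitor through the NSX-T Policy API. It is illustrative only:
the endpoint path, resource type, and field names are assumptions based on the NSX-T 3.0 Policy API
and should be confirmed against the NSX-T Data Center API Guide, and the manager address,
credentials, and health check values are placeholders.

import requests

NSX = "https://<nsx-manager-ip-address>"   # placeholder NSX Manager address
AUTH = ("admin", "<password>")             # placeholder credentials

# HTTP active monitor: probe every 5 seconds, wait up to 15 seconds for a
# response, mark the server DOWN after 3 consecutive failures and UP again
# after 3 consecutive successes.
monitor = {
    "resource_type": "LBHttpMonitorProfile",   # field names assumed from the Policy API
    "display_name": "web-http-monitor",
    "monitor_port": 80,
    "interval": 5,
    "timeout": 15,
    "fall_count": 3,
    "rise_count": 3,
    "request_url": "/healthz",                 # hypothetical health check URI
    "response_status_codes": [200, 301, 302],
}

response = requests.put(
    f"{NSX}/policy/api/v1/infra/lb-monitor-profiles/web-http-monitor",
    json=monitor,
    auth=AUTH,
    verify=False,   # lab only; use a trusted certificate in production
)
response.raise_for_status()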

Add a Passive Monitor


Load balancers perform passive health checks to monitor failures during client connections and
mark servers causing consistent failures as DOWN.

Passive health check monitors client traffic going through the load balancer for failures. For
example, if a pool member sends a TCP Reset (RST) in response to a client connection, the load
balancer detects that failure. If there are multiple consecutive failures, then the load balancer
considers that server pool member to be temporarily unavailable and stops sending connection
requests to that pool member for some time. After some time, the load balancer sends a
connection request to verify that the pool member has recovered. If that connection is
successful, then the pool member is considered healthy. Otherwise, the load balancer waits for
some time and tries again.

Passive health check considers the following scenarios to be failures in the client traffic.

n For server pools associated with Layer 7 virtual servers, if the connection to the pool member
fails. For example, if the pool member sends a TCP RST when the load balancer tries to
connect or perform an SSL handshake between the load balancer and the pool member fails.

n For server pools associated with Layer 4 TCP virtual servers, if the pool member sends a TCP
RST in response to client TCP SYN or does not respond at all.

n For server pools associated with Layer 4 UDP virtual servers, if a port is unreachable or a
destination unreachable ICMP error message is received in response to a client UDP packet.

For server pools associated with Layer 7 virtual servers, the failed connection count is incremented
when any TCP connection error occurs, for example, a TCP RST, a failure to send data, or an SSL
handshake failure.


For server pools associated with Layer 4 virtual servers, if no response is received to a TCP SYN
sent to the server pool member, or if a TCP RST is received in response to a TCP SYN, then the
server pool member is considered DOWN and the failed count is incremented.

For Layer 4 UDP virtual servers, if an ICMP error such as a port unreachable or destination
unreachable message is received in response to the client traffic, then the server pool member is
considered DOWN.

Note One passive health monitor can be configured per server pool.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Load Balancing > Monitors > Passive > Add Passive Monitor.

3 Enter a name and description for the passive health monitor.

4 Configure the values to monitor a service pool.

You can also accept the default passive health monitor values.

Option Description

Fall Count Set the number of consecutive failures after which the server is considered
temporarily unavailable.

Timeout Period Set the time in seconds that the server is considered temporarily unavailable
before it is tried again with a new connection.

Tags Enter tags to make searching easier.


You can specify a tag to set a scope of the tag.

For example, when the consecutive failures reach the configured value 5, that member is
considered temporarily unavailable for 5 seconds. After this period, that member is tried
again for a new connection to see if it is available. If that connection is successful, then the
member is considered available and the failed count is set to zero. However, if that
connection fails, then it is not used for another timeout interval of 5 seconds.

What to do next

Associate the passive health monitor with a server pool. See Add a Server Pool.
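
For automation, a passive monitor can be created in a similar way through the NSX-T Policy API.
The sketch below is illustrative only; the endpoint path, resource type, and field names are
assumptions based on the Policy API and should be verified in the NSX-T Data Center API Guide, and
the manager address and credentials are placeholders.

import requests

NSX = "https://<nsx-manager-ip-address>"   # placeholder NSX Manager address
AUTH = ("admin", "<password>")             # placeholder credentials

# Passive monitor: after 5 consecutive client-traffic failures the member is
# skipped for 5 seconds, then retried with a new connection.
monitor = {
    "resource_type": "LBPassiveMonitorProfile",   # names assumed from the Policy API
    "display_name": "web-passive-monitor",
    "max_fails": 5,
    "timeout": 5,
}

requests.put(
    f"{NSX}/policy/api/v1/infra/lb-monitor-profiles/web-passive-monitor",
    json=monitor, auth=AUTH, verify=False,
).raise_for_status()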

Add a Server Pool


Server pool consists of one or more servers that are configured and running the same
application. A single pool can be associated to both Layer 4 and Layer 7 virtual servers.


(Figure: a load balancer attached to a tier-1 gateway. Clients connect to Virtual Server 1, which
distributes traffic to the members of Pool 1 monitored by a health check.)

Figure 8-1. Server Pool Parameter Configuration

(Figure: a server pool is configured with pool members, a SNAT mode, an optional passive monitor,
and optional active monitors of type HTTP, HTTPS, TCP, UDP, or ICMP.)

Prerequisites

n If you use dynamic pool members, an NSGroup must be configured. See Create an NSGroup in
Manager Mode.

n Verify that a passive health monitor is configured. See Add a Passive Monitor.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Load Balancing > Server Pools > Add Server Pool.

3 Enter a name and description for the load balancer server pool.

You can optionally describe the connections managed by the server pool.

4 Select a load balancing algorithm for the server pool.

The load balancing algorithm controls how incoming connections are distributed among the
pool members. The algorithm can be used on a server pool or a server directly. An illustrative
sketch of how member weights affect the distribution follows the algorithm table.

All load balancing algorithms skip servers that meet any of the following conditions:

n Admin state is set to DISABLED.


n Admin state is set to GRACEFUL_DISABLED and no matching persistence entry.

n Active or passive health check state is DOWN.

n Connection limit for the maximum server pool concurrent connections is reached.

Option Description

ROUND_ROBIN Incoming client requests are cycled through a list of available servers
capable of handling the request.
Ignores the server pool member weights even if they are configured.

WEIGHTED_ROUND_ROBIN Each server is assigned a weight value that signifies how that server
performs relative to other servers in the pool. The value determines how
many client requests are sent to a server compared to other servers in the
pool.
This load balancing algorithm focuses on fairly distributing the load among
the available server resources.

LEAST_CONNECTION Distributes client requests to multiple servers based on the number of


connections already on the server.
New connections are sent to the server with the fewest connections. Ignores
the server pool member weights even if they are configured.

WEIGHTED_LEAST_CONNECTION Each server is assigned a weight value that signifies how that server
performs relative to other servers in the pool. The value determines how
many client requests are sent to a server compared to other servers in the
pool.
This load balancing algorithm focuses on using the weight value to distribute
the load among the available server resources.
By default, the weight value is 1 if the value is not configured and slow start
is enabled.

IP-HASH Selects a server based on a hash of the source IP address and the total
weight of all the running servers.
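
The following minimal Python sketch illustrates how weight values influence the long-run share of
connections that each member receives. It is only an approximation using a weighted random pick;
the actual WEIGHTED_ROUND_ROBIN scheduler cycles through members deterministically, and the member
names and weights here are hypothetical.

import random

# Hypothetical pool members and weights; server-1 should receive about half of the traffic.
members = [("server-1", 2), ("server-2", 1), ("server-3", 1)]

def pick(members):
    # Weighted random choice used only to illustrate the effect of weights.
    names, weights = zip(*members)
    return random.choices(names, weights=weights, k=1)[0]

counts = {name: 0 for name, _ in members}
for _ in range(10000):
    counts[pick(members)] += 1

print(counts)  # roughly {'server-1': 5000, 'server-2': 2500, 'server-3': 2500}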


5 Click Select Members and select the server pool members.

A server pool consists of one or more pool members.

Option Description

Enter individual members Enter a pool member name, IPv4 or IPv6 address, and a port. IP addresses
can be either IPv4 or IPv6. Mixed addressing is not supported. Note that the
pool members IP version must match the VIP IP version. For example, VIP-
IPv4 with Pool-IPv4, and IPv6 with Pool-IPv6.
Each server pool member can be configured with a weight for use in the
load balancing algorithm. The weight indicates how much more or less load
a given pool member can handle relative to other members in the same
pool.
You can set the server pool member admin state. By default, the state is Enabled
when a server pool member is added.
If the option is disabled, active connections are processed, and the server
pool member is not selected for new connections. New connections are
assigned to other members of the pool.
If gracefully disabled, it allows you to remove servers for maintenance. The
existing connections to a member in the server pool in this state continue to
be processed.
Toggle the button to designate a pool member as a backup member to work
with the health monitor to provide an Active-Standby state. Traffic failover
occurs for backup members if active members fail a health check. Backup
members are skipped during the server selection. When the server pool is
inactive, the incoming connections are sent to only the backup members
that are configured with a sorry page indicating an application is unavailable.
Max Concurrent Connection value assigns a connection maximum so that
the server pool members are not overloaded and skipped during server
selection. If a value is not specified, then the connection is unlimited.

Select a group Select a pre-configured group of server pool members.


Enter a group name and an optional description.
Set the compute member from existing list or create one. You can specify
membership criteria, select members of the group, add IP addresses, and
MAC addresses as group members, and add Active Directory groups. IP
addresses can be either IPv4 or IPv6. Mixed addressing is not supported.
The identity members intersect with the compute member to define
membership of the group. Select a tag from the drop-down menu.
You can optionally define the maximum group IP address list.

6 Click Set Monitors and select one or more active health check monitors for the server. Click
Apply.

The load balancer periodically sends active health check requests, such as an ICMP ping or an
HTTP request, to the servers to verify health independent of data traffic. You can configure
more than one active health check monitor per server pool.


7 Select the Source NAT (SNAT) translation mode.

Depending on the topology, SNAT might be required so that the load balancer receives the
traffic from the server destined to the client. SNAT can be enabled per server pool.

Mode Description

Auto Map Mode Load Balancer uses the interface IP address and ephemeral port to continue
the communication with a client initially connected to one of the server's
established listening ports.
SNAT is required.
Enable port overloading to allow the same SNAT IP and port to be used for
multiple connections if the tuple (source IP, source port, destination IP,
destination port, and IP protocol) is unique after the SNAT process is
performed.
You can also set the port overload factor to allow the maximum number of
times a port can be used simultaneously for multiple connections.

Disable Disable the SNAT translation mode.

IP Pool Specify a single IPv4 or IPv6 address range, for example, 1.1.1.1-1.1.1.10 to be
used for SNAT while connecting to any of the servers in the pool. IP
addresses can be either IPv4 or IPv6. Mixed addressing is not supported.
By default, the port range 4000 through 64000 is used for all configured
SNAT IP addresses. Ports 1000 through 4000 are reserved for
purposes such as health checks and connections initiated from Linux
applications. If multiple IP addresses are present, they are selected in a
round-robin manner.
Enable port overloading to allow the same SNAT IP and port to be used for
multiple connections if the tuple (source IP, source port, destination IP,
destination port, and IP protocol) is unique after the SNAT process is
performed.
You can also set the port overload factor to allow the maximum number of
times a port can be used simultaneously for multiple connections.

8 Click Additional Properties, and toggle the button to enable TCP Multiplexing.

With TCP multiplexing, you can use the same TCP connection between a load balancer and
the server for sending multiple client requests from different client TCP connections.

9 Set the maximum number of TCP multiplexing connections per server that are kept alive to
send future client requests.

10 Enter the minimum number of active members the server pool must always maintain.

11 Select a passive health monitor for the server pool from the drop-down menu.

12 Select a tag from the drop-down menu.
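
To create an equivalent server pool programmatically, a sketch such as the following can be used
with the NSX-T Policy API. The endpoint path, resource type, and field names follow the NSX-T 3.0
Policy API as an assumption and should be checked against the NSX-T Data Center API Guide; the
member addresses, monitor paths, and credentials are placeholders.

import requests

NSX = "https://<nsx-manager-ip-address>"   # placeholder NSX Manager address
AUTH = ("admin", "<password>")             # placeholder credentials

pool = {
    "resource_type": "LBPool",                 # names assumed from the Policy API
    "display_name": "web-pool",
    "algorithm": "WEIGHTED_ROUND_ROBIN",
    "members": [
        {"display_name": "server-1", "ip_address": "192.168.10.11", "port": "80", "weight": 2},
        {"display_name": "server-2", "ip_address": "192.168.10.12", "port": "80", "weight": 1},
    ],
    "snat_translation": {"type": "LBSnatAutoMap"},          # Auto Map SNAT mode
    "active_monitor_paths": ["/infra/lb-monitor-profiles/web-http-monitor"],
    "passive_monitor_path": "/infra/lb-monitor-profiles/web-passive-monitor",
}

requests.put(
    f"{NSX}/policy/api/v1/infra/lb-pools/web-pool",
    json=pool, auth=AUTH, verify=False,
).raise_for_status()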

Setting Up Virtual Server Components


You can set up the Layer 4 and Layer 7 virtual servers and configure several virtual server
components such as, application profiles, persistent profiles, and load balancer rules.


(Figure: a load balancer attached to a tier-1 gateway. Clients connect to Virtual Server 1, which
distributes traffic to the members of Pool 1 monitored by a health check.)

Figure 8-2. Virtual Server Components

(Figure: a virtual server on the load balancer is associated with an application profile (Fast TCP,
Fast UDP, or HTTP), client-SSL and server-SSL profiles, a persistence profile (Source-IP or Cookie),
load balancer rules, and a server pool.)

Add an Application Profile


Application profiles are associated with virtual servers to enhance load balancing network traffic
and simplify traffic-management tasks.

Application profiles define the behavior of a particular type of network traffic. The associated
virtual server processes network traffic according to the values specified in the application
profile. Fast TCP, Fast UDP, and HTTP application profiles are the supported types of profiles.

TCP application profile is used by default when no application profile is associated to a virtual
server. TCP and UDP application profiles are used when an application is running on a TCP or
UDP protocol and does not require any application level load balancing such as, HTTP URL load
balancing. These profiles are also used when you only want Layer 4 load balancing, which has a
faster performance and supports connection mirroring.


HTTP application profile is used for both HTTP and HTTPS applications when the load balancer
must take actions based on Layer 7 such as, load balancing all images requests to a specific
server pool member or stopping HTTPS to offload SSL from pool members. Unlike the TCP
application profile, the HTTP application profile stops the client TCP connection before selecting
the server pool member.

Figure 8-3. Layer 4 TCP and UDP Application Profile

(Figure: clients connect to a Layer 4 VIP (TCP/UDP) on Virtual Server 1, which is attached to a
tier-1 load balancer and distributes traffic to Pool 1.)

Figure 8-4. Layer 7 HTTPS Application Profile

(Figure: clients connect to a Layer 7 VIP (HTTP/HTTPS) on Virtual Server 1, which is attached to a
tier-1 load balancer and distributes traffic to Pool 1.)

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Load Balancing > Profiles > Application > Add Application Profiles.


3 Select a Fast TCP application profile and enter the profile details.

You can also accept the default FAST TCP profile settings.

Option Description

Name and Description Enter a name and a description for the Fast TCP application profile.

Idle Timeout Enter the time in seconds that the server can remain idle after a TCP
connection is established.
Set the idle time to the actual application idle time and add a few more
seconds so that the load balancer does not close its connections before the
application does.

HA Flow Mirroring Toggle the button to make all the flows to the associated virtual server
mirrored to the HA standby node.

Connection Close Timeout Enter the time in seconds that a closing TCP connection (both FINs or an
RST) is kept for an application before the connection is closed.
A short closing timeout might be required to support fast connection rates.

Tags Enter tags to make searching easier.


You can specify a tag to set a scope of the tag.

4 Select a Fast UDP application profile and enter the profile details.

You can also accept the default UDP profile settings.

Option Description

Name and Description Enter a name and a description for the Fast UDP application profile.

Idle Timeout Enter the time in seconds that the server can remain idle after a
UDP connection is established.
UDP is a connectionless protocol. For load balancing purposes, all the UDP
packets with the same flow signature such as, source and destination IP
address or ports and IP protocol received within the idle timeout period are
considered to belong to the same connection and sent to the same server.
If no packets are received during the idle timeout period, the connection
which is an association between the flow signature and the selected server
is closed.

HA Flow Mirroring Toggle the button to make all the flows to the associated virtual server
mirrored to the HA standby node.

Tags Enter tags to make searching easier.


You can specify a tag to set a scope of the tag.

5 Select a HTTP application profile and enter the profile details.

You can also accept the default HTTP profile settings.


HTTP application profile is used for both HTTP and HTTPS applications.

Option Description

Name and Description Enter a name and a description for the HTTP application profile.

Idle Timeout Enter the time in seconds that an HTTP application can remain idle. This
setting applies instead of the TCP socket setting, which is configured in the
TCP application profile.

Request Header Size Specify the maximum buffer size in bytes used to store HTTP request
headers.

X-Forwarded-For (XFF) n Insert - If the XFF HTTP header is not present in the incoming request,
the load balancer inserts a new XFF header with the client IP address. If
the XFF HTTP header is present in the incoming request, the load
balancer appends the XFF header with the client IP address.
n Replace - If the XFF HTTP header is present in the incoming request, the
load balancer replaces the header.
Web servers log each request they handle with the requesting client IP
address. These logs are used for debugging and analytics purposes. If the
deployment topology requires SNAT on the load balancer, then the server logs
the client SNAT IP address, which defeats the purpose of logging.
As a workaround, the load balancer can be configured to insert XFF HTTP
header with the original client IP address. Servers can be configured to log
the IP address in the XFF header instead of the source IP address of the
connection.

Request Body Size Enter value for the maximum size of the buffer used to store the HTTP
request body.
If the size is not specified, then the request body size is unlimited.

Redirection n None - If a website is temporarily down, user receives a page not found
error message.
n HTTP Redirect - If a website is temporarily down or has moved, incoming
requests for that virtual server can be temporarily redirected to a URL
specified here. Only a static redirection is supported.

For example, if HTTP Redirect is set to http://sitedown.abc.com/


sorry.html, then irrespective of the actual request, for example, http://
original_app.site.com/home.html or http://original_app.site.com/
somepage.html, incoming requests are redirected to the specified URL
when the original website is down.
n HTTP to HTTPS Redirect - Certain secure applications might want to
force communication over SSL, but instead of rejecting non-SSL
connections, they can redirect the client request to use SSL. With HTTP
to HTTPS Redirect, you can preserve both the host and URI paths and
redirect the client request to use SSL.

For HTTP to HTTPS redirect, the HTTPS virtual server must have port
443 and the same virtual server IP address must be configured on the
same load balancer.

For example, a client request for http://app.com/path/page.html is


redirected to https://app.com/path/page.html. If either the host name or
the URI must be modified while redirecting, for example, redirect to
https://secure.app.com/path/page.html, then load balancing rules must
be used.


Option Description

NTLM Authentication Toggle the button for the load balancer to turn off TCP multiplexing and
enable HTTP keep-alive.
NTLM is an authentication protocol that can be used over HTTP. For load
balancing with NTLM authentication, TCP multiplexing must be disabled for
the server pools hosting NTLM-based applications. Otherwise, a server-side
connection established with one client's credentials can potentially be used
for serving another client's requests.
If NTLM is enabled in the profile and associated to a virtual server, and TCP
multiplexing is enabled at the server pool, then NTLM takes precedence.
TCP multiplexing is not performed for that virtual server. However, if the
same pool is associated to another non-NTLM virtual server, then TCP
multiplexing is available for connections to that virtual server.
If the client uses HTTP/1.0, the load balancer upgrades to HTTP/1.1 protocol
and the HTTP keep-alive is set. All HTTP requests received on the same
client-side TCP connection are sent to the same server over a single TCP
connection to ensure that reauthorization is not required.

Tags Enter tags to make searching easier.


You can specify a tag to set a scope of the tag.
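
An HTTP application profile can also be defined through the NSX-T Policy API, as in the following
sketch. The endpoint path, resource type, and field names are assumptions based on the NSX-T 3.0
Policy API and should be verified against the NSX-T Data Center API Guide before use; the manager
address, credentials, and values are placeholders.

import requests

NSX = "https://<nsx-manager-ip-address>"   # placeholder NSX Manager address
AUTH = ("admin", "<password>")             # placeholder credentials

profile = {
    "resource_type": "LBHttpProfile",      # names assumed from the Policy API
    "display_name": "web-http-profile",
    "idle_timeout": 15,
    "x_forwarded_for": "INSERT",           # insert the original client IP for server-side logging
    "http_redirect_to_https": True,        # assumes an HTTPS virtual server on port 443 with the same VIP
}

requests.put(
    f"{NSX}/policy/api/v1/infra/lb-app-profiles/web-http-profile",
    json=profile, auth=AUTH, verify=False,
).raise_for_status()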

Add a Persistence Profile


To ensure stability of stateful applications, load balancers implement persistence which directs all
related connections to the same server. Different types of persistence are supported to address
different types of application needs.

Some applications, such as shopping carts, maintain server state. Such state might be per
client and identified by the client IP address, or per HTTP session. Applications might access or
modify this state while processing subsequent related connections from the same client or HTTP
session.

The source IP persistence profile tracks sessions based on the source IP address. When a client
requests a connection to a virtual server that enables source address persistence, the load
balancer checks whether that client connected previously and, if so, directs the client to the same
server. If not, a server pool member is selected based on the pool load balancing algorithm.
Source IP persistence profile is used by Layer 4 and Layer 7 virtual servers.

The cookie persistence profile inserts a unique cookie to identify the session the first time a client
accesses the site. The client forwards the HTTP cookie in subsequent requests and the load
balancer uses that information to provide the cookie persistence. Layer 7 virtual servers can only
use the cookie persistence profile.

The generic persistence profile supports persistence based on the HTTP header, cookie, or URL
in the HTTP request. Therefore, it supports app session persistence when the session ID is part of
the URL. This profile is not associated with a virtual server directly. You can specify this profile
when you configure a load balancer rule for request forwarding and response rewrite.


(Figure: Client 1 and Client 2 connect to a Layer 4 or Layer 7 VIP on Virtual Server 1. With
persistence enabled, related connections from the same client are directed to the same member of
Pool 1.)

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Load Balancing > Profiles > Persistence > Add Persistence Profiles.

3 Select Source IP to add a source IP persistence profile and enter the profile details.

You can also accept the default Source IP profile settings.

Option Description

Name and Description Enter a name and a description for the Source IP persistence profile.

Share Persistence Toggle the button to share the persistence so that all virtual servers this
profile is associated with can share the persistence table.
If the persistence sharing is not enabled in the Source IP persistence profile
associated to a virtual server, each virtual server that the profile is
associated to maintains a private persistence table.

Persistence Entry Timeout Enter the persistence expiration time in seconds.


The load balancer persistence table maintains entries to record that client
requests are directed to the same server.
The very first connection from a new client IP is load balanced to a pool
member based on the load balancing algorithm. NSX stores that
persistence entry in the load balancer persistence table, which is viewable on the Edge
node hosting the active tier-1 load balancer with the CLI command get load-balancer
<LB-UUID> persistence-tables.
n When there are connections from that client to the VIP, the persistence
entry is kept.
n When there are no more connections from that client to the VIP, the
persistence entry begins the timer count down specified in the
"Persistence Entry Timeout" value. If no new connection from that client
to the VIP is made before the timer expires, the persistence entry for
that client IP is deleted. If that client comes back after the entry is
deleted, it will be load balanced again to a pool member based on the
load balancing algorithm.


Option Description

Purge Entries When Full A large timeout value might lead to the persistence table quickly filling up
when the traffic is heavy. When this option is enabled, the oldest entry is
deleted to accept the newest entry.
When this option is disabled, if the source IP persistence table is full, new
client connections are rejected.

HA Persistence Mirroring Toggle the button to synchronize persistence entries to the HA peer. When
HA persistence mirroring is enabled, the client IP persistence is preserved in
the case of a load balancer failover.

Tags Enter tags to make searching easier.


You can specify a tag to set a scope of the tag.

4 Select a Cookie persistence profile and enter the profile details.

Option Description

Name and Description Enter a name and a description for the Cookie persistence profile.

Share Persistence Toggle the button to share persistence across multiple virtual servers that
are associated to the same pool members.
The Cookie persistence profile inserts a cookie with the format,
<name>.<profile-id>.<pool-id>.
If the persistence shared is not enabled in the Cookie persistence profile
associated with a virtual server, the private Cookie persistence for each
virtual server is used and is qualified by the pool member. The load balancer
inserts a cookie with the format, <name>.<virtual_server_id>.<pool_id>.

Cookie Mode Select a mode from the drop-down menu.


n INSERT - Adds a unique cookie to identify the session.
n PREFIX - Appends to the existing HTTP cookie information.
n REWRITE - Rewrites the existing HTTP cookie information.

Cookie Name Enter the cookie name.

Cookie Domain Enter the domain name.


HTTP cookie domain can be configured only in the INSERT mode.

Cookie Fallback Toggle the button to enable or disable cookie fallback.
When enabled, a new server is selected to handle the client request if the
cookie points to a server that is in a DISABLED or DOWN state. When
disabled, such a client request is rejected.

Cookie Path Enter the cookie URL path.


HTTP cookie path can be set only in the INSERT mode.

Cookie Garbling Toggle the button to enable or disable cookie garbling.

When garbling is enabled, the cookie server IP address and port information
is encrypted. When garbling is disabled, this information is sent in plain text.

Cookie Type Select a cookie type from the drop-down menu.


Session Cookie - Not stored. Will be lost when the browser is closed.
Persistence Cookie - Stored by the browser. Not lost when the browser is
closed.


Option Description

Max Idle Time Enter the time in seconds that the cookie type can be idle before a cookie
expires.

Max Cookie Age For the session cookie type, enter the time in seconds a cookie is available.

Tags Enter tags to make searching easier.


You can specify a tag to set a scope of the tag.

5 Select Generic to add a generic persistence profile and enter the profile details.

Option Description

Name and Description Enter a name and a description for the generic persistence profile.

Share Persistence Toggle the button to share the profile among virtual servers.

Persistence Entry Timeout Enter the persistence expiration time in seconds.


The load balancer persistence table maintains entries to record that client
requests are directed to the same server.
The very first connection from a new client IP is load balanced to a pool
member based on the load balancing algorithm. NSX stores that
persistence entry in the load balancer persistence table, which is viewable on the Edge
node hosting the active tier-1 load balancer with the CLI command get load-balancer
<LB-UUID> persistence-tables.
n When there are connections from that client to the VIP, the persistence
entry is kept.
n When there are no more connections from that client to the VIP, the
persistence entry begins the timer count down specified in the
"Persistence Entry Timeout" value. If no new connection from that client
to the VIP is made before the timer expires, the persistence entry for
that client IP is deleted. If that client comes back after the entry is
deleted, it will be load balanced again to a pool member based on the
load balancing algorithm.

HA Persistence Mirroring Toggle the button to synchronize persistence entries to the HA peer.

Tags Enter tags to make searching easier.


You can specify a tag to set a scope of the tag.
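
A persistence profile can also be created through the NSX-T Policy API. The following sketch of a
cookie persistence profile is illustrative only; the endpoint path, resource type, and field names
are assumptions based on the NSX-T 3.0 Policy API and should be confirmed in the NSX-T Data Center
API Guide, and the cookie name and credentials are placeholders.

import requests

NSX = "https://<nsx-manager-ip-address>"   # placeholder NSX Manager address
AUTH = ("admin", "<password>")             # placeholder credentials

profile = {
    "resource_type": "LBCookiePersistenceProfile",   # names assumed from the Policy API
    "display_name": "web-cookie-persistence",
    "cookie_mode": "INSERT",        # add a new cookie to identify the session
    "cookie_name": "NSXLB",         # hypothetical cookie name
    "persistence_shared": False,    # keep a private persistence table per virtual server
}

requests.put(
    f"{NSX}/policy/api/v1/infra/lb-persistence-profiles/web-cookie-persistence",
    json=profile, auth=AUTH, verify=False,
).raise_for_status()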

Add an SSL Profile


SSL profiles configure application-independent SSL properties, such as cipher lists, so that they
can be reused across multiple applications. SSL properties differ when the load balancer is
acting as a client and as a server, so separate client-side and server-side SSL profiles
are supported.

Note SSL profile is not supported in the NSX-T Data Center limited export release.

Client-side SSL profile refers to the load balancer acting as an SSL server and stopping the client
SSL connection. Server-side SSL profile refers to the load balancer acting as a client and
establishing a connection to the server.

You can specify a cipher list on both the client-side and server-side SSL profiles.


SSL session caching allows the SSL client and server to reuse previously negotiated security
parameters avoiding the expensive public key operation during the SSL handshake. SSL session
caching is disabled by default on both the client-side and server-side.

SSL session tickets are an alternate mechanism that allows the SSL client and server to reuse
previously negotiated session parameters. In SSL session tickets, the client and server negotiate
whether they support SSL session tickets during the handshake exchange. If supported by both,
server can send an SSL ticket, which includes encrypted SSL session parameters to the client.
The client can use that ticket in subsequent connections to reuse the session. SSL session tickets
are enabled on the client-side and disabled on the server-side.

Figure 8-5. SSL Offloading

(Figure: clients connect to a Layer 7 VIP over HTTPS using the client SSL profile; traffic from the
load balancer to the pool members is plain HTTP.)

Figure 8-6. End-to-End SSL

(Figure: clients connect to a Layer 7 VIP over HTTPS using the client SSL profile, and traffic from
the load balancer to the pool members is also HTTPS using the server SSL profile.)

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Load Balancing > Profiles > SSL Profile.


3 Select a Client SSL Profile and enter the profile details.

Option Description

Name and Description Enter a name and a description for the Client SSL profile.

SSL Suite Select the SSL Cipher group from the drop-down menu and available SSL
Ciphers and SSL protocols to be included in the Client SSL profile are
populated.
Balanced SSL Cipher group is the default.

Session Caching Toggle the button to allow the SSL client and server to reuse previously
negotiated security parameters avoiding the expensive public key operation
during an SSL handshake.

Tags Enter tags to make searching easier.


You can specify a tag to set a scope of the tag.

Supported SSL Ciphers Depending on the SSL suite you assigned, the supported SSL ciphers are
populated here. Click View More to view the entire list.
If you selected Custom, you must select the SSL ciphers from the drop-
down menu.

Supported SSL Protocols Depending on the SSL suite you assigned, the supported SSL protocols are
populated here. Click View More to view the entire list.
If you selected Custom, you must select the SSL protocols from the drop-
down menu.

Session Cache Entry Timeout Enter the cache timeout in seconds to specify how long the SSL session
parameters must be kept and can be reused.

Prefer Server Cipher Toggle the button so that the server can select the first supported cipher
from the list it can support.
During an SSL handshake, the client sends an ordered list of supported
ciphers to the server.

4 Select a Server SSL Profile and enter the profile details.

Option Description

Name and Description Enter a name and a description for the Server SSL profile.

SSL Suite Select the SSL Cipher group from the drop-down menu and available SSL
Ciphers and SSL protocols to be included in the Server SSL profile are
populated.
Balanced SSL Cipher group is the default.

Session Caching Toggle the button to allow the SSL client and server to reuse previously
negotiated security parameters avoiding the expensive public key operation
during an SSL handshake.

Tags Enter tags to make searching easier.


You can specify a tag to set a scope of the tag.

Supported SSL Ciphers Depending on the SSL suite you assigned, the supported SSL ciphers are
populated here. Click View More to view the entire list.
If you selected Custom, you must select the SSL ciphers from the drop-
down menu.


Option Description

Supported SSL Protocols Depending on the SSL suite you assigned, the supported SSL protocols are
populated here. Click View More to view the entire list.
If you selected Custom, you must select the SSL protocols from the drop-
down menu.

Session Cache Entry Timeout Enter the cache timeout in seconds to specify how long the SSL session
parameters must be kept and can be reused.

Prefer Server Cipher Toggle the button so that the server can select the first supported cipher
from the list it can support.
During an SSL handshake, the client sends an ordered list of supported
ciphers to the server.
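
A client-side SSL profile can also be defined through the NSX-T Policy API. The sketch below is
illustrative only; the endpoint path, resource type, protocol value, and field names are assumptions
based on the NSX-T 3.0 Policy API and should be verified against the NSX-T Data Center API Guide.

import requests

NSX = "https://<nsx-manager-ip-address>"   # placeholder NSX Manager address
AUTH = ("admin", "<password>")             # placeholder credentials

profile = {
    "resource_type": "LBClientSslProfile",   # names assumed from the Policy API
    "display_name": "web-client-ssl",
    "protocols": ["TLS_V1_2"],               # restrict the client side to TLS 1.2
    "session_cache_enabled": True,
    "session_cache_timeout": 300,
    "prefer_server_ciphers": True,
}

requests.put(
    f"{NSX}/policy/api/v1/infra/lb-client-ssl-profiles/web-client-ssl",
    json=profile, auth=AUTH, verify=False,
).raise_for_status()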

Add Layer 4 Virtual Servers


Virtual servers receive all the client connections and distribute them among the servers. A virtual
server has an IP address, a port, and a protocol. For Layer 4 virtual servers, lists of ports ranges
can be specified instead of a single TCP or UDP port to support complex protocols with dynamic
ports.

A Layer 4 virtual server must be associated to a primary server pool, also called a default pool.

If a virtual server status is disabled, any new connection attempts to the virtual server are
rejected by sending either a TCP RST for the TCP connection or ICMP error message for UDP.
New connections are rejected even if there are matching persistence entries for them. Active
connections continue to be processed. If a virtual server is deleted or disassociated from a load
balancer, then active connections to that virtual server fail.

Prerequisites

n Verify that application profiles are available. See Add an Application Profile.

n Verify that persistent profiles are available. See Add a Persistence Profile.

n Verify that SSL profiles for the client and server are available. See Add an SSL Profile.

n Verify that server pools are available. See Add a Server Pool.

n Verify that load balancer is available. See Add Load Balancers.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Load Balancing > Virtual Servers > Add Virtual Server.

3 Select a L4 TCP or a L4 UDP protocol and enter the protocol details.

Layer 4 virtual servers support either the Fast TCP or Fast UDP protocol, but not both.


For Fast TCP or Fast UDP protocol support on the same IP address and port, for example
DNS, a virtual server must be created for each protocol.

L4 TCP Option L4 TCP Description

Name and Description Enter a name and a description for the Layer 4 virtual server.

IP Address Enter the virtual server IP address. Both IPv4 and IPv6 addresses are
supported. Note that the pool members IP version must match the VIP IP
version. For example, VIP-IPv4 with Pool-IPv4, and IPv6 with Pool-IPv6.

Ports Enter the virtual server port number.

Load Balancer Select an existing load balancer to attach to this Layer 4 virtual server from
the drop-down menu.

Server Pool Select an existing server pool from the drop-down menu.
The server pool consists of one or more servers, also called pool members
that are similarly configured and running the same application.
You can click the vertical ellipses to create a server pool.

Application Profile Based on the protocol type, the existing application profile is automatically
populated.
Click the vertical ellipses to create an application profile.

Persistence Select an existing persistence profile from the drop-down menu.


Persistence profile can be enabled on a virtual server to allow Source IP
related client connections to be sent to the same server.

Access List Control When you enable Access List Control (ALC), all traffic flowing through the
load balancer is compared with the ACL statement, which either drops or
allows the traffic.
ACL is disabled by default. To enable, click Configure, and select Enabled.
Select an Action:
n Allow - Allows connections matching the selected group. All other
connections are dropped.
n Drop - Allows connections not matching the selected group. A dropped
connection generates a log entry if access log is enabled.
Select a Group. The IP addresses included in this group are either dropped
or allowed by the ACL.

Max Concurrent Connection Set the maximum concurrent connection allowed to a virtual server so that
the virtual server does not deplete resources of other applications hosted on
the same load balancer.

Max New Connection Rate Set the maximum new connection to a server pool member so that a virtual
server does not deplete resources.

Sorry Server Pool Select an existing sorry server pool from the drop-down menu.
The sorry server pool serves the request when a load balancer cannot select
a backend server from the default pool to serve the request.
You can click the vertical ellipses to create a server pool.


L4 TCP Option L4 TCP Description

Default Pool Member Port Enter a default pool member port if the pool member port for a virtual
server is not defined.
For example, if a virtual server is defined with a port range of 2000–2999
and the default pool member port range is set as 8000-8999, then an
incoming client connection to the virtual server port 2500 is sent to a pool
member with a destination port set to 8500.

Admin State Toggle the button to disable the admin state of the Layer 4 virtual server.

Access Log Toggle the button to enable logging for the Layer 4 virtual server.

Tags Enter tags to make searching easier.


You can specify a tag to set a scope of the tag.

L4 UDP Option L4 UDP Description

Name and Description Enter a name and a description for the Layer 4 virtual server.

IP Address Enter the virtual server IP address. Both IPv4 and IPv6 addresses are
supported. Note that the pool members IP version must match the VIP IP
version. For example, VIP-IPv4 with Pool-IPv4, and IPv6 with Pool-IPv6.

Ports Enter the virtual server port number.

Load Balancer Select an existing load balancer to attach to this Layer 4 virtual server from
the drop-down menu.

Server Pool Select an existing server pool from the drop-down menu.
The server pool consists of one or more servers, also called pool members
that are similarly configured and running the same application.
You can click the vertical ellipses to create a server pool.

Application Profile Based on the protocol type, the existing application profile is automatically
populated.
You can click the vertical ellipses to create an application profile.

Persistence Select an existing persistence profile from the drop-down menu.


Persistence profile can be enabled on a virtual server to allow Source IP
related client connections to be sent to the same server.

Max Concurrent Connection Set the maximum concurrent connection allowed to a virtual server so that
the virtual server does not deplete resources of other applications hosted on
the same load balancer.

Access List Control When you enable Access List Control (ALC), all traffic flowing through the
load balancer is compared with the ACL statement, which either drops or
allows the traffic.
ACL is disabled by default. To enable, click Configure, and select Enabled.
Select an Action:
n Allow - Allows connections matching the selected group. All other
connections are dropped.
n Drop - Allows connections not matching the selected group. A dropped
connection generates a log entry if access log is enabled.
Select a Group. The IP addresses included in this group are either dropped
or allowed by the ACL.


L4 UDP Option L4 UDP Description

Max New Connection Rate Set the maximum new connection to a server pool member so that a virtual
server does not deplete resources.

Sorry Server Pool Select an existing sorry server pool from the drop-down menu.
The sorry server pool serves the request when a load balancer cannot select
a backend server from the default pool to serve the request.
You can click the vertical ellipses to create a server pool.

Default Pool Member Port Enter a default pool member port if the pool member port for a virtual
server is not defined.
For example, if a virtual server is defined with port range 2000–2999 and
the default pool member port range is set as 8000-8999, then an incoming
client connection to the virtual server port 2500 is sent to a pool member
with a destination port set to 8500.

Admin State Toggle the button to disable the admin state of the Layer 4 virtual server.

Access Log Toggle the button to enable logging for the Layer 4 virtual server.

Log Significant Event Only This field can only be configured if access logs are enabled. Connections
that cannot be sent to a pool member, for example, because of the maximum
connection limit or an Access Control drop, are treated as significant events.

Tags Enter tags to make searching easier.


You can specify a tag to set a scope of the tag.
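
A Layer 4 virtual server can also be created through the NSX-T Policy API, as in the following
sketch. The endpoint path, resource type, field names, and the default TCP application profile ID
are assumptions based on the NSX-T 3.0 Policy API and should be confirmed against the NSX-T Data
Center API Guide; the VIP, pool path, load balancer service path, and credentials are placeholders.

import requests

NSX = "https://<nsx-manager-ip-address>"   # placeholder NSX Manager address
AUTH = ("admin", "<password>")             # placeholder credentials

virtual_server = {
    "resource_type": "LBVirtualServer",    # names assumed from the Policy API
    "display_name": "tcp-vip",
    "ip_address": "10.10.10.100",          # placeholder VIP; must match the pool member IP version
    "ports": ["80"],
    "application_profile_path": "/infra/lb-app-profiles/default-tcp-lb-app-profile",  # assumed default profile ID
    "pool_path": "/infra/lb-pools/web-pool",
    "lb_service_path": "/infra/lb-services/my-lb",   # placeholder load balancer service
    "enabled": True,
}

requests.put(
    f"{NSX}/policy/api/v1/infra/lb-virtual-servers/tcp-vip",
    json=virtual_server, auth=AUTH, verify=False,
).raise_for_status()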

Add Layer 7 HTTP Virtual Servers


Virtual servers receive all the client connections and distribute them among the servers. A virtual
server has an IP address, a port, and a protocol (TCP).

If a virtual server status is disabled, any new connection attempts to the virtual server are
rejected by sending either a TCP RST for the TCP connection or ICMP error message for UDP.
New connections are rejected even if there are matching persistence entries for them. Active
connections continue to be processed. If a virtual server is deleted or disassociated from a load
balancer, then active connections to that virtual server fail.

Note SSL profile is not supported in the NSX-T Data Center limited export release.

If a client-side SSL profile binding is configured on a virtual server but not a server-side SSL
profile binding, then the virtual server operates in an SSL-terminate mode, which has an
encrypted connection to the client and plain text connection to the server. If both the client-side
and server-side SSL profile bindings are configured, then the virtual server operates in SSL-proxy
mode, which has an encrypted connection both to the client and the server.

Associating server-side SSL profile binding without associating a client-side SSL profile binding is
currently not supported. If a client-side and a server-side SSL profile binding is not associated
with a virtual server and the application is SSL-based, then the virtual server operates in an SSL-
unaware mode. In this case, the virtual server must be configured for Layer 4. For example, the
virtual server can be associated to a fast TCP profile.


Prerequisites

n Verify that application profiles are available. See Add an Application Profile.

n Verify that persistent profiles are available. See Add a Persistence Profile.

n Verify that SSL profiles for the client and server are available. See Add an SSL Profile.

n Verify that server pools are available. See Add a Server Pool.

n Verify that CA and client certificate are available. See Create a Certificate Signing Request
File.

n Verify that a certification revocation list (CRL) is available. See Import a Certificate Revocation
List.

n Verify that load balancer is available. See Add Load Balancers.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Load Balancing > Virtual Servers > Add Virtual Server.

3 Select L7 HTTP from the drop-down list and enter the protocol details.

Layer 7 virtual servers support the HTTP and HTTPS protocols.

Option Description

Name and Description Enter a name and a description for the Layer 7 virtual server.

IP Address Enter the virtual server IP address. Both IPv4 and IPv6 addresses are
supported.

Ports Enter the virtual server port number.

Load Balancer Select an existing load balancer to attach to this Layer 7 virtual server from
the drop-down menu.

Server Pool Select an existing server pool from the drop-down menu.
The server pool consists of one or more servers, also called pool members
that are similarly configured and running the same application.
You can click the vertical ellipses to create a server pool.

Application Profile Based on the protocol type, the existing application profile is automatically
populated.
You can click the vertical ellipses to create an application profile.

Persistence Select an existing persistence profile from the drop-down menu.


Persistence profile can be enabled on a virtual server to allow Source IP and
Cookie related client connections to be sent to the same server.

4 Click Configure to set the Layer 7 virtual server SSL.

You can configure the Client SSL and Server SSL.


5 Configure the Client SSL.

Option Description

Client SSL Toggle the button to enable the profile.


Client-side SSL profile binding allows multiple certificates, for different host
names to be associated to the same virtual server.

Default Certificate Select a default certificate from the drop-down menu.


This certificate is used if the server does not host multiple host names on the
same IP address or if the client does not support Server Name Indication
(SNI) extension.

Client SSL Profile Select the client-side SSL Profile from the drop-down menu.

SNI Certificates Select the available SNI certificate from the drop-down menu.

Trusted CA Certificates Select the available CA certificate.

Mandatory Client Authentication Toggle the button to enable this menu item.

Certificate Chain Depth Set the certificate chain depth to verify the depth in the server certificates
chain.

Certificate Revocation List Select the available CRL to disallow compromised server certificates.

6 Configure the Server SSL.

Option Description

Server SSL Toggle the button to enable the profile.

Client Certificate Select a client certificate from the drop-down menu.


This certificate is used if the server does not host multiple host names on the
same IP address or if the client does not support Server Name Indication
(SNI) extension.

Server SSL Profile Select the Server-side SSL Profile from the drop-down menu.

Trusted CA Certificates Select the available CA certificate.

Mandatory Server Authentication Toggle the button to enable this menu item.
Server-side SSL profile binding specifies whether the server certificate
presented to the load balancer during the SSL handshake must be validated
or not. When validation is enabled, the server certificate must be signed by
one of the trusted CAs whose self-signed certificates are specified in the
same server-side SSL profile binding.

Certificate Chain Depth Set the certificate chain depth to verify the depth in the server certificates
chain.

Certificate Revocation List Select the available CRL to disallow compromised server certificates.
OCSP and OCSP stapling are not supported on the server-side.


7 Click Additional Properties to configure additional Layer 7 virtual server properties.

Option Description

Max Concurrent Connection Set the maximum concurrent connection allowed to a virtual server so that
the virtual server does not deplete resources of other applications hosted on
the same load balancer.

Max New Connection Rate Set the maximum new connection to a server pool member so that a virtual
server does not deplete resources.

Sorry Server Pool Select an existing sorry server pool from the drop-down menu.
The sorry server pool serves the request when a load balancer cannot select
a backend server from the default pool to serve the request.
You can click the vertical ellipses to create a server pool.

Default Pool Member Port Enter a default pool member port, if the pool member port for a virtual
server is not defined.
For example, if a virtual server is defined with port range 2000-2999 and the
default pool member port range is set as 8000-8999, then an incoming
client connection to the virtual server port 2500 is sent to a pool member
with a destination port set to 8500.

Admin State Toggle the button to disable the admin state of the Layer 7 virtual server.

Access Log Toggle the button to enable logging for the Layer 7 virtual server.

Log Significant Event Only This field can only be configured if access logs are enabled. Requests with
an HTTP response status of >=400 are treated as a significant event.

Tags Select a tag from the drop-down list.


You can specify a tag to set a scope of the tag.

8 Click Save.
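
A Layer 7 HTTPS virtual server in SSL offload mode can also be created through the NSX-T Policy
API, as in the following sketch. The endpoint path, resource type, and field names are assumptions
based on the NSX-T 3.0 Policy API and should be confirmed against the NSX-T Data Center API Guide;
the VIP, profile and pool paths, certificate ID, and credentials are placeholders.

import requests

NSX = "https://<nsx-manager-ip-address>"   # placeholder NSX Manager address
AUTH = ("admin", "<password>")             # placeholder credentials

virtual_server = {
    "resource_type": "LBVirtualServer",    # names assumed from the Policy API
    "display_name": "https-vip",
    "ip_address": "10.10.10.101",          # placeholder VIP
    "ports": ["443"],
    "application_profile_path": "/infra/lb-app-profiles/web-http-profile",
    "pool_path": "/infra/lb-pools/web-pool",
    "lb_service_path": "/infra/lb-services/my-lb",
    "lb_persistence_profile_path": "/infra/lb-persistence-profiles/web-cookie-persistence",
    # Only the client-side binding is set, so this is SSL offload: HTTPS to the
    # VIP, plain HTTP from the load balancer to the pool members.
    "client_ssl_profile_binding": {
        "ssl_profile_path": "/infra/lb-client-ssl-profiles/web-client-ssl",
        "default_certificate_path": "/infra/certificates/<certificate-id>",   # placeholder certificate
    },
}

requests.put(
    f"{NSX}/policy/api/v1/infra/lb-virtual-servers/https-vip",
    json=virtual_server, auth=AUTH, verify=False,
).raise_for_status()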

Add Load Balancer Rules


With Layer 7 HTTP virtual servers, you can optionally configure load balancer rules and
customize load balancing behavior using match or action rules.

Load balancer rules are supported for only Layer 7 virtual servers with an HTTP application
profile. Different load balancer services can use load balancer rules.

Each load balancer rule consists of single or multiple match conditions and single or multiple
actions. If the match conditions are not specified, then the load balancer rule always matches and
is used to define default rules. If more than one match condition is specified, then the matching
strategy determines if all conditions must match or any one condition must match for the load
balancer rule to be considered a match.

Each load balancer rule is implemented at a specific phase of the load balancing processing;
Transport, HTTP Access, Request Rewrite, Request Forwarding, and Response Rewrite. Not all
the match conditions and actions are applicable to each phase.


Up to 4,000 load balancer rules can be configured with the API, if the skip_scale_validation flag
in LbService is set. Note that the flag can be set via API. Refer to the NSX-T Data Center API
Guide for more information. Up to 512 load balancer rules can be configured through the user
interface.

Load Balancer rules support REGEX for match types. For more information, see Regular
Expressions in Load Balancer Rules.
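
To make the structure of a rule concrete, the following Python sketch shows how a single rule with
one match condition and one action might look when expressed as an object in the "rules" list of a
virtual server in the Policy API. The type and field names are assumptions to be confirmed in the
NSX-T Data Center API Guide, and the pool path is hypothetical.

# Sketch of a single load balancer rule as it might appear in the "rules" list of
# an LBVirtualServer object in the Policy API; names are assumptions, not a
# definitive schema.
rule = {
    "display_name": "images-to-image-pool",
    "phase": "HTTP_FORWARDING",          # request forwarding phase
    "match_strategy": "ALL",             # all conditions must match
    "match_conditions": [
        {
            "type": "LBHttpRequestUriCondition",
            "match_type": "STARTS_WITH",
            "uri": "/images/",
        }
    ],
    "actions": [
        {
            "type": "LBSelectPoolAction",
            "pool_id": "/infra/lb-pools/image-pool",   # hypothetical pool for image traffic
        }
    ],
}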

Prerequisites

Verify a Layer 7 HTTP virtual server is available. See Add Layer 7 HTTP Virtual Servers.

n Configure Transport Phase Load Balancer Rules


Transport phase is the first phase of a client HTTP request.

n Configure HTTP Access Load Balancer Rules


A JSON web token (JWT) is a standardized, optionally validated and/or encrypted format
that is used to securely transfer information between two parties.

n Configure Request Rewrite Load Balancer Rules


An HTTP request rewrite is applied to the HTTP request coming from the client.

n Configure Request Forwarding Load Balancer Rules


Request forwarding redirects a URL or host to a specific server pool.

n Configure Response Rewrite Load Balancer Rules


An HTTP response rewrite is applied to the HTTP response going out from the servers to
the client.

n Regular Expressions in Load Balancer Rules


Regular expressions (REGEX) are used in match conditions for load balancer rules.

Configure Transport Phase Load Balancer Rules


Transport phase is the first phase of a client HTTP request.

Load Balancer virtual server SSL configuration is found under SSL Configuration. There are two
possible configurations. In both modes, the load balancer sees the traffic, and applies load
balancer rules based on the client HTTP traffic.

n SSL Offload, configuring only the SSL client. In this mode, the client to VIP traffic is encrypted
(HTTPS), and the load balancer decrypts it. The VIP to Pool member traffic is clear (HTTP).

n SSL End-to-End, configuring both the Client SSL and Server SSL. In this mode, the client to
VIP traffic is encrypted (HTTPS), and the load balancer decrypts it and then re-encrypts it.
The VIP to Pool member traffic is encrypted (HTTPS).

The Transport Phase is complete when the virtual server receives the client SSL hello message.
This occurs before SSL is terminated and before any HTTP traffic is processed.


The Transport Phase allows administrators to select the SSL mode, and a specific server pool,
based on the client SSL hello message. There are three options for the virtual server SSL mode:

n SSL Offload

n End-to-End

n SSL-Passthrough (the load balancer does not terminate SSL)

Load Balancer rules support REGEX for match types. PCRE style REGEX patterns are supported
with a few limitations on advanced use cases. When REGEX is used in match conditions, named
capturing groups are supported. See Regular Expressions in Load Balancer Rules.
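
As an illustration only, a Transport phase rule of this kind might be expressed in the API roughly as follows. The SNI value is a placeholder, and the type names (LBSslSniCondition, LBSslModeSelectionAction) and the SSL mode enum value are assumptions to confirm against the NSX-T Data Center API Guide.

{
  "display_name" : "passthrough-for-legacy-sni",
  "phase" : "TRANSPORT",
  "match_strategy" : "ANY",
  "match_conditions" : [ {
    "type" : "LBSslSniCondition",
    "sni" : "legacy.example.com",
    "match_type" : "EQUALS"
  } ],
  "actions" : [ {
    "type" : "LBSslModeSelectionAction",
    "ssl_mode" : "SSL_PASSTHROUGH"
  } ]
}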

Prerequisites

Verify that a Layer 7 HTTP virtual server is available. See Add Layer 7 HTTP Virtual Servers.

Procedure

1 Open the Layer 7 HTTP virtual server.

2 In the Load Balancer Rules section, next to Transport Phase, click Set > Add Rule to configure
the load balancer rules for the Transport Phase.

3 SSL SNI is the only match condition supported. Match conditions are used to match
application traffic passing through load balancers.

4 From the drop-down list, select a Match Type: starts with, ends with, equals, contains,
matches regex.

5 Enter a SNI Name.

6 Toggle the Case Sensitive button to set a case-sensitive flag for HTTP header value
comparison.

7 Toggle the Negate button to enable it.

8 From the drop-down list, select a Match Strategy:

Match Strategy Description

Any Either host or path may match for this rule to be considered a match.

All Both host and path must match for this rule to be considered a match.


9 From the drop-down menu, select the SSL Mode Selection.

SSL Mode Description

SSL Passthrough SSL Passthrough passes HTTP traffic to a backend server without
decrypting the traffic on the load balancer. The data is kept encrypted as it
travels through the load balancer.
If SSL Passthrough is selected, a server pool can be selected. See Add a
Server Pool for Load Balancing in Manager Mode.

SSL Offloading SSL Offloading decrypts all HTTP traffic on the load balancer. SSL offloading
allows data to be inspected as it passes between the load balancer and
server. If NTLM and multiplexing are not configured, the load balancer
establishes a new connection to the selected backend server for each HTTP
request.

SSL End-to-End After receiving the HTTP request, the load balancer connects to the
selected backend server and talks with it using HTTPS. If NTLM and
multiplexing are not configured, the load balancer establishes a new
connection to the selected backend server for each HTTP request.

10 Click SAVE and APPLY.


Configure HTTP Access Load Balancer Rules
A JSON web token (JWT) is a standardized, optionally validated and/or encrypted format that is
used to securely transfer information between two parties.

In the HTTP ACCESS phase, users can define the action to validate JWT from clients and pass, or
remove JWT to backend servers.

Load Balancer rules support REGEX for match types. PCRE style REGEX patterns are supported with
a few limitations on advanced use cases. When REGEX is used in match conditions, named
capturing groups are supported. See Regular Expressions in Load Balancer Rules.

Prerequisites

Verify that a Layer 7 HTTP virtual server is available. See Add Layer 7 HTTP Virtual Servers.

Procedure

1 Open the Layer 7 HTTP virtual server.

2 In the Load Balancer Rules section, next to HTTP Access Phase, click Set > Add Rule to
configure the load balancer rules for the HTTP Access phase.


3 From the drop-down menu, select a match condition. Match conditions are used to match
application traffic passing through load balancers. Multiple match conditions can be specified
in one load balancer rule. Each match condition defines a criterion for application traffic.

Supported Match Condition Description

HTTP Request Method Match an HTTP request method.


http_request.method - value to match

HTTP Request URI Match an HTTP request URI without query arguments.
http_request.uri - value to match

HTTP Request URI Arguments Match an HTTP request URI query argument.
http_request.uri_arguments - value to match

HTTP Request Version Match an HTTP request version.


http_request.version - value to match

HTTP Request Header Match any HTTP request header.


http_request.header_name - header name to match
http_request.header_value - value to match

HTTP Request Cookie Match any HTTP request cookie.


http_request.cookie_value - value to match

HTTP Request Body Match an HTTP request body content.


http_request.body_value - value to match

TCP Header Port Match a TCP source or the destination port.


tcp_header.source_port - source port to match
tcp_header.destination_port - destination port to match

IP Header Source Matches IP header fields of HTTP messages. The source type must
be a single IP address, a range of IP addresses, or a group. See Add a
Group.
n If IP Header Source is selected, with an IP Address source type, the
source IP address of HTTP messages should match IP addresses which
are configured in groups. Both IPv4 and IPv6 addresses are supported.
n If IP Header Source is selected with a Group source type, select the
group from the drop-down menu.
ip_header.source_address - source address to match
ip_header.destination_address - destination address to match

Variable Create a variable and assign a value to the variable.

Client SSL Match client SSL profile ID.


ssl_profile_id - value to match

Case Sensitive Set a case-sensitive flag for HTTP header value comparison. If true, case is
significant when comparing HTTP body value.

4 From the drop-down list, select a Match Type: starts with, ends with, equals, contains,
matches regex.

5 If needed, enter the URI.


6 From the drop-down list, select a Match Strategy:

Match Strategy Description

Any Either host or path may match for this rule to be considered a match.

All Both host and path must match for this rule to be considered a match.

7 From the drop-down menu select an Action:

Action Description

JWT Authentication JSON Web Token (JWT) is an open standard that defines a compact and
self-contained way for securely transmitting information between parties as
a JSON object. This information can be verified and trusted because it is
digitally signed.
n Realm - A description of the protected area. If no realm is specified,
clients often display a formatted hostname. The configured realm is
returned when a client request is rejected with 401 http status. The
response is: "WWW-Authentication: Bearer realm=<realm>".
n Tokens - This parameter is optional. Load balancer searches for every
specified token one-by-one for the JWT message until found. If not
found, or if this text box is not configured, load balancer searches the
Bearer header by default in the http request "Authorization: Bearer
<token>"
n Key Type - Symmetric key or asymmetric public key (certificate-id)
n Preserve JWT - This is a flag to preserve JWT and pass it to backend
server. If disabled, the JWT key to the backend server is removed.

Connection Drop If negate is enabled, when Connection Drop is configured, all requests not
matching the specified match condition are dropped. Requests matching the
specified match condition are allowed.

Variable Assignment Enables users to assign a value to a variable in HTTP Access Phase, in such
a way that the result can be used as a condition in other load balancer rule
phases.

8 Click Save and Apply.
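
A minimal sketch of an HTTP Access phase rule with JWT validation, expressed in the API, is shown below. The URI and realm are placeholders, and the action type names (LBJwtAuthAction, LBJwtSymmetricKey) and their fields are assumptions to confirm against the NSX-T Data Center API Guide; additional options such as token names and the preserve-JWT flag are omitted.

{
  "display_name" : "jwt-protect-api",
  "phase" : "HTTP_ACCESS",
  "match_strategy" : "ALL",
  "match_conditions" : [ {
    "type" : "LBHttpRequestUriCondition",
    "uri" : "/api",
    "match_type" : "STARTS_WITH"
  } ],
  "actions" : [ {
    "type" : "LBJwtAuthAction",
    "realm" : "protected-api",
    "key" : { "type" : "LBJwtSymmetricKey" }
  } ]
}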


Configure Request Rewrite Load Balancer Rules
An HTTP request rewrite is applied to the HTTP request coming from the client.

Prerequisites

Verify that a Layer 7 HTTP virtual server is available. See Add Layer 7 HTTP Virtual Servers.

Load Balancer rules support REGEX for match types. PCRE style REGEX patterns are supported with
a few limitations on advanced use cases. When REGEX is used in match conditions, named
capturing groups are supported. See Regular Expressions in Load Balancer Rules.

Procedure

1 Open the Layer 7 HTTP virtual server.


2 In the Load Balancer Rules section, next to Request Rewrite Phase, click Set > Add Rule to
configure the load balancer rules for the HTTP Request Rewrite phase.

3 From the drop-down list, select a match condition. Match conditions are used to match
application traffic passing through load balancers. Multiple match conditions can be specified
in one load balancer rule. Each match condition defines a criterion for application traffic.

Supported Match Condition Description

HTTP Request Method Match an HTTP request method.


http_request.method - value to match

HTTP Request URI Match an HTTP request URI without query arguments.
http_request.uri - value to match

HTTP Request URI Arguments Used to match the URI arguments, that is, the query string of HTTP request messages.
For example, in the URI http://example.com?foo=1&bar=2, "foo=1&bar=2" is the
query string containing URI arguments. In a URI scheme, the query string is
indicated by the first question mark ("?") character and terminated by a
number sign ("#") character or by the end of the URI.
http_request.uri_arguments - value to match

HTTP Request Version Used to match the HTTP protocol version of the HTTP request messages
http_request.version - value to match

HTTP Request Header Used to match HTTP request messages by HTTP header fields. HTTP header
fields are components of the header section of HTTP request and response
messages. They define the operating parameters of an HTTP transaction.
http_request.header_name - header name to match
http_request.header_value - value to match

HTTP Request Cookie Used to match HTTP request messages by cookie which is a specific type of
HTTP header. The match_type and case_sensitive define how to compare
cookie value.
http_request.cookie_value - value to match

HTTP Request Body Match an HTTP request body content.


http_request.body_value - value to match

Client SSL Match client SSL profile ID.


ssl_profile_id - value to match

TCP Header Port Match a TCP source or the destination port.


tcp_header.source_port - source port to match
tcp_header.destination_port - destination port to match

IP Header Source Matches IP header fields of HTTP messages. The source type must be
a single IP address, a range of IP addresses, or a group. See Add a
Group.
n If IP Header Source is selected with an IP Address source type, the
source IP address of HTTP messages should match the IP addresses that
are configured in groups. Both IPv4 and IPv6 addresses are supported.
n If IP Header Source is selected with a Group source type, select the
group from the drop-down list.
ip_header.source_address - source address to match
ip_header.destination_address - destination address to match


Variable Create a variable and assign a value to the variable.

Case Sensitive Set a case-sensitive flag for HTTP header value comparison. If true, case is
significant when comparing HTTP body value.

4 From the drop-down menu, select a Match Type: starts with, ends with, equals, contains, or
matches regex. Match type is used to match a condition with a specified action.

Match Type Description

Starts With If the match condition starts with the specified value, the condition matches.

Ends With If the match condition ends with the specified value, the condition matches.

Equals If the match condition is the same as the specified value, the condition
matches.

Contains If the match condition contains the specified value, the condition matches.

Matches Regex If the match condition matches the specified values, the condition matches.

5 Specify the URI.

6 From the drop-down menu, select a Match Strategy:

Match Strategy Description

Any Indicates that either host or path can match for this rule to be considered a
match.

All Indicates that both host and path must match for this rule to be considered
a match.

7 Select an Action from the drop-down menu:

Actions Description

HTTP Request URI Rewrite This action is used to rewrite URIs in matched HTTP request messages.
Specify the URI and URI Arguments in this action to rewrite the matched
HTTP request message's URI and URI arguments to the new values. The full
URI scheme of HTTP messages has the following syntax: scheme:[//
[user[:password]@]host[:port]][/path][?query][#fragment]. The URI field of
this action is used to rewrite the /path part in the above scheme. The URI
Arguments field is used to rewrite the query part. Captured variables and
built-in variables can be used in the URI and URI Arguments fields.
a Enter the URI of the HTTP request.
b Enter the query string of the URI, which typically contains key-value pairs, for
example: foo1=bar1&foo2=bar2.

HTTP Request Header Rewrite This action is used to rewrite header fields of matched HTTP request
messages to specified new values.
a Enter the name of a header field of the HTTP request message.
b Enter the header value.


HTTP Request Header Delete This action is used to delete header fields of HTTP request messages at
the HTTP_REQUEST_REWRITE phase. One action can be used to delete all
headers with the same header name. To delete headers with different header
names, multiple actions must be defined.
n Enter the name of a header field of the HTTP request message.

Variable Assignment Create a variable and assign it a name and value.

8 Toggle the Case Sensitive button to set a case-sensitive flag for HTTP header value
comparison.

9 Toggle the Negate button to enable it.

10 Click Save and Apply.
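
As an illustrative sketch only (header names and values are placeholders; confirm the condition and action type names in the NSX-T Data Center API Guide), a Request Rewrite phase rule that rewrites one header and deletes another might look like this in the API:

{
  "display_name" : "tag-and-scrub-headers",
  "phase" : "HTTP_REQUEST_REWRITE",
  "match_strategy" : "ALL",
  "match_conditions" : [ {
    "type" : "LBHttpRequestHeaderCondition",
    "header_name" : "User-Agent",
    "header_value" : "Mobile",
    "match_type" : "CONTAINS"
  } ],
  "actions" : [ {
    "type" : "LBHttpRequestHeaderRewriteAction",
    "header_name" : "X-Client-Class",
    "header_value" : "mobile"
  }, {
    "type" : "LBHttpRequestHeaderDeleteAction",
    "header_name" : "X-Internal-Debug"
  } ]
}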


Configure Request Forwarding Load Balancer Rules
Request forwarding redirects a URL or host to a specific server pool.

Prerequisites

Verify that a Layer 7 HTTP virtual server is available. See Add Layer 7 HTTP Virtual Servers.

Load Balancer rules support REGEX for match types. PCRE style REGEX patterns are supported with
a few limitations on advanced use cases. When REGEX is used in match conditions, named
capturing groups are supported. See Regular Expressions in Load Balancer Rules.

Procedure

1 Open the Layer 7 HTTP virtual server.

2 Click Request Forwarding > Add Rule to configure the load balancer rules for the HTTP
Request Forwarding.

3 From the drop-down list, select a match condition. Match conditions are used to match
application traffic passing through load balancers. Multiple match conditions can be specified
in one load balancer rule. Each match condition defines a criterion for application traffic.

Supported Match Condition Description

HTTP Request Method Match an HTTP request method.


http_request.method - value to match

HTTP Request URI Match an HTTP request URI without query arguments.
http_request.uri - value to match

HTTP Request URI Arguments Used to match the URI arguments, that is, the query string of HTTP request messages.
For example, in the URI http://example.com?foo=1&bar=2, "foo=1&bar=2" is the
query string containing URI arguments. In a URI scheme, the query string is
indicated by the first question mark ("?") character and terminated by a
number sign ("#") character or by the end of the URI.
http_request.uri_arguments - value to match

HTTP Request Version Used to match the HTTP protocol version of the HTTP request messages
http_request.version - value to match


HTTP Request Header Used to match HTTP request messages by HTTP header fields. HTTP header
fields are components of the header section of HTTP request and response
messages. They define the operating parameters of an HTTP transaction.
http_request.header_name - header name to match
http_request.header_value - value to match

HTTP Request Cookie Used to match HTTP request messages by cookie which is a specific type of
HTTP header. The match_type and case_sensitive define how to compare
cookie value.
http_request.cookie_value - value to match

HTTP Request Body Match an HTTP request body content.


http_request.body_value - value to match

Client SSL Match client SSL profile ID.


ssl_profile_id - value to match

TCP Header Port Match a TCP source or the destination port.


tcp_header.source_port - source port to match
tcp_header.destination_port - destination port to match

IP Header Source Matches IP header fields of HTTP messages. The source type must be
a single IP address, a range of IP addresses, or a group. See Add a
Group.
n If IP Header Source is selected with an IP Address source type, the
source IP address of HTTP messages should match the IP addresses that
are configured in groups. Both IPv4 and IPv6 addresses are supported.
n If IP Header Source is selected with a Group source type, select the
group from the drop-down list.
ip_header.source_address - source address to match
ip_header.destination_address - destination address to match

Variable Create a variable and assign a value to the variable.

Case Sensitive Set a case-sensitive flag for HTTP header value comparison. If true, case is
significant when comparing HTTP body value.

4 Select an action:

Action Description

HTTP Reject Used to reject HTTP request messages. The specified reply_status value is
used as the status code for the corresponding HTTP response message. The
response message is sent back to client (usually a browser) indicating the
reason it was rejected.
http_forward.reply_status - HTTP status code used to reject
http_forward.reply_message - HTTP rejection message

HTTP Redirect Used to redirect HTTP request messages to a new URL. The HTTP status
code for redirection is 3xx, for example, 301, 302, 303, 307, etc. The
redirect_url is the new URL that the HTTP request message is redirected to.
http_forward.redirect_status - HTTP status code for redirect
http_forward.redirect_url - HTTP redirect URL


Select Pool Force the request to a specific server pool. The specified pool's
configured algorithm (predictor) is used to select a server within the server
pool. The matched HTTP request messages are forwarded to the specified
pool.
http_forward.select_pool - server pool UUID

Variable Persistence On Select a generic persistence profile and enter a variable name.
You can also enable Hash Variable. If the variable value is long, hashing the
variable ensures that it is correctly stored in the persistence table. If the
Hash Variable is not enabled, only the fixed prefix part of the variable value
is stored in the persistence table if the variable value is long. As a result, two
different requests with long variable values might be dispatched to the same
backend server because their variable values have the same prefix part,
when they should be dispatched to different backend servers.

Connection Drop If negate is enabled in condition, when Connection Drop is configured, all
requests not matching the condition are dropped. Requests matching the
condition are allowed.

Reply Status Shows the status of the reply.

Reply Message Server responds with a reply message that contains confirmed addresses
and configuration.

5 Click Save and Apply.
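
As an illustrative sketch only (the URI prefix, status code, and redirect URL are placeholders; confirm the action type name and field names in the NSX-T Data Center API Guide), a Request Forwarding phase rule that redirects an old path might look like this in the API:

{
  "display_name" : "redirect-old-site",
  "phase" : "HTTP_FORWARDING",
  "match_strategy" : "ANY",
  "match_conditions" : [ {
    "type" : "LBHttpRequestUriCondition",
    "uri" : "/old-site",
    "match_type" : "STARTS_WITH"
  } ],
  "actions" : [ {
    "type" : "LBHttpRedirectAction",
    "redirect_status" : "301",
    "redirect_url" : "https://www.example.com/new-site"
  } ]
}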


Configure Response Rewrite Load Balancer Rules
An HTTP response rewrite is applied to the HTTP response going out from the servers to the
client.

Prerequisites

Verify that a Layer 7 HTTP virtual server is available. See Add Layer 7 HTTP Virtual Servers.

Load Balancer rules support REGEX for match types. PCRE style REGEX patterns are supported with
a few limitations on advanced use cases. When REGEX is used in match conditions, named
capturing groups are supported. See Regular Expressions in Load Balancer Rules.

Procedure

1 Open the Layer 7 HTTP virtual server.


2 Click Response Rewrite > Add Rule to configure the load balancer rules for the HTTP
Response Rewrite.

All match values accept regular expressions.

Supported Match Condition Description

HTTP Response Header This condition is used to match HTTP response messages from backend
servers by HTTP header fields.
http_response.header_name - header name to match
http_response.header_value - value to match

HTTP Response Method Match an HTTP response method.


http_response.method - value to match

HTTP Response URI Match an HTTP response URI.


http_response.uri - value to match

HTTP Response URI Arguments Match an HTTP response URI arguments.


http_response.uri_args - value to match

HTTP Response Version Match an HTTP response version.


http_response.version - value to match

HTTP Response Cookie Match any HTTP response cookie.


http_response.cookie_value - value to match

Client SSL Match client SSL profile ID.


ssl_profile_id - value to match

TCP Header Port Match a TCP source or the destination port.


tcp_header.source_port - source port to match
tcp_header.destination_port - destination port to match

IP Header Source Matches IP header fields of HTTP messages. The source type must be
a single IP address, a range of IP addresses, or a group. See Add a
Group.
The source IP address of HTTP messages should match the IP addresses that
are configured in groups. Both IPv4 and IPv6 addresses are supported.
ip_header.source_address - source address to match
ip_header.destination_address - destination address to match

Variable Create a variable and assign a value to the variable.

Case Sensitive Set a case-sensitive flag for HTTP header value comparison.


3 Select an action:

Action Description

HTTP Response Header Rewrite This action is used to rewrite header fields of HTTP response messages to
specified new values.
http_response.header_name - header name
http_response.header_value - value to write

HTTP Response Header Delete This action is used to delete header fields of HTTP response messages.
http_response.header_name - name of the header to delete

Variable Persistence Learn Select a generic persistence profile and enter a variable name.
You can also enable Hash Variable. If the variable value is long, hashing the
variable ensures that it will be correctly stored in the persistence table. If
Hash Variable is not enabled, only the fixed prefix part of the variable value
is stored in the persistence table if the variable value is long. As a result, two
different requests with long variable values might be dispatched to the same
backend server (because their variable values have the same prefix part)
when they should be dispatched to different backend servers.

4 Click Save and Apply.


Regular Expressions in Load Balancer Rules
Regular expressions (REGEX) are used in match conditions for load balancer rules.

Perl Compatible Regular Expressions (PCRE) style REGEX patterns are supported with a few
limitations on advanced use cases. When REGEX is used in match conditions, named capturing
groups are supported.

REGEX restrictions include:

n Character unions and intersections are not supported. For example, do not use [a-z[0-9]] and
[a-z&&[aeiou]]; instead, use [a-z0-9] and [aeiou], respectively.

n Only 9 back references are supported and \1 through \9 can be used to refer to them.

n Use \0dd format to match octal characters, not the \ddd format.

n Embedded flags are not supported at the top level; they are only supported within groups.
For example, do not use "Case (?i:s)ensitive"; instead, use "Case ((?i:s)ensitive)".

n Preprocessing operations \l, \u, \L, and \U are not supported, where \l lowercases the next
character, \u uppercases the next character, \L lowercases until \E, and \U uppercases until \E.

n (?(condition)X), (?{code}), (??{Code}) and (?#comment) are not supported.

n The predefined Unicode character class \X is not supported.

n Using the named character construct for Unicode characters is not supported. For example, do
not use \N{name}; instead, use \u2018.

When REGEX is used in match conditions, named capturing groups are supported. For
example, REGEX match pattern /news/(?<year>\d+)-(?<month>\d+)-(?<day>\d+)/(?<article>.*)
can be used to match a URI like /news/2018-06-15/news1234.html.


The variables are then set as follows: $year = "2018", $month = "06", $day = "15", $article =
"news1234.html". After the variables are set, they can be used in load balancer rule
actions. For example, the URI can be rewritten using the matched variables, such as /news.py?year=
$year&month=$month&day=$day&article=$article. The URI then gets rewritten as /news.py?
year=2018&month=06&day=15&article=news1234.html.

Rewrite actions can use a combination of named capturing groups and built-in variables. For
example, the URI can be rewritten as /news.py?year=$year&month=$month&day=$day&article=
$article&user_ip=$_remote_addr. The example URI then gets rewritten as /news.py?
year=2018&month=06&day=15&article=news1234.html&user_ip=1.1.1.1.
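
Tying this together, the same pattern and rewrite could appear in an API rule roughly as follows. This is a sketch only; note that backslashes in the regular expression must be doubled for JSON escaping, and the condition and action type names and enum values should be confirmed in the NSX-T Data Center API Guide.

{
  "phase" : "HTTP_REQUEST_REWRITE",
  "match_strategy" : "ALL",
  "match_conditions" : [ {
    "type" : "LBHttpRequestUriCondition",
    "uri" : "/news/(?<year>\\d+)-(?<month>\\d+)-(?<day>\\d+)/(?<article>.*)",
    "match_type" : "REGEX"
  } ],
  "actions" : [ {
    "type" : "LBHttpRequestUriRewriteAction",
    "uri" : "/news.py",
    "uri_arguments" : "year=$year&month=$month&day=$day&article=$article"
  } ]
}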

Note For named capturing groups, the name cannot start with an _ character.

In addition to named capturing groups, the following built-in variables can be used in rewrite
actions. All the built-in variable names start with _.

n $_args - arguments from the request

n $_arg_<name> - argument <name> in the request line

n $_cookie_<name> - value of <name> cookie

n $_upstream_cookie_<name> - cookie with the specified name sent by the upstream


server in the "Set-Cookie" response header field

n $_upstream_http_<name> - arbitrary response header field and <name> is the field name
converted to lower case with dashes replaced by underscores

n $_host - in the order of precedence - host name from the request line, or host name from
the "Host" request header field, or the server name matching a request

n $_http_<name> - arbitrary request header field and <name> is the field name converted
to lower case with dashes replaced by underscores

n $_https - "on" if connection operates in SSL mode, or "" otherwise

n $_is_args - "?" if a request line has arguments, or "" otherwise

n $_query_string - same as $_args

n $_remote_addr - client address

n $_remote_port - client port

n $_request_uri - full original request URI (with arguments)

n $_scheme - request scheme, "http" or "https"

n $_server_addr - address of the server which accepted a request

n $_server_name - name of the server which accepted a request

n $_server_port - port of the server which accepted a request

n $_server_protocol - request protocol, usually "HTTP/1.0" or "HTTP/1.1"


n $_ssl_client_escaped_cert - returns the client certificate in the PEM format for an


established SSL connection.

n $_ssl_server_name - returns the server name requested through SNI

n $_uri - URI path in request

n $_ssl_ciphers: returns the client SSL ciphers

n $_ssl_client_i_dn: returns the "issuer DN" string of the client certificate for an established
SSL connection according to RFC 2253

n $_ssl_client_s_dn: returns the "subject DN" string of the client certificate for an
established SSL connection according to RFC 2253

n $_ssl_protocol: returns the protocol of an established SSL connection

n $_ssl_session_reused: returns "r" if an SSL session was reused, or "." otherwise

Groups Created for Server Pools and Virtual Servers


NSX Manager automatically creates groups for load balancer server pools and VIP ports.

Load Balancer created groups are visible under Inventory > Groups.

Server pool groups are created with the name NLB.PoolLB.Pool_Name LB_Name with group
member IP addresses assigned:

n Pool configured with no LB-SNAT (transparent): 0.0.0.0/0

n Pool configured with LB-SNAT Automap: T1-Uplink IP 100.64.x.y and T1-ServiceInterface IP

n Pool configured with LB-SNAT IP-Pool: LB-SNAT IP-Pool

VIP groups are created with the name NLB.VIP.virtual server name, and the VIP group member IP
address is the VIP IP address.

For server pool groups, you can create a distributed firewall rule that allows traffic from the load
balancer group (NLB.PoolLB.Pool_Name LB_Name). For the Tier-1 gateway firewall, you can create a
rule that allows traffic from clients to the LB VIP group (NLB.VIP.virtual server name).



9 Distributed Load Balancer
A Distributed Load Balancer configured in NSX-T Data Center can help you effectively load
balance East-West traffic and scale traffic because it runs on each ESXi host.

Important Distributed Load Balancer is supported only for Kubernetes (K8s) cluster IPs
managed by vSphere with Kubernetes. Distributed Load Balancer is not supported for any other
workload types. As an administrator, you cannot use NSX Manager GUI to create or modify
Distributed Load Balancer objects. These objects are pushed by vCenter Server through the NSX-T
API when K8s cluster IPs are created in vCenter Server.

Note Do not enable Distributed Intrusion Detection Service (IDS) in an environment that is using
Distributed Load Balancer. NSX-T Data Center does not support using IDS with a Distributed
Load Balancer.

In traditional networks, a central load balancer deployed on an NSX Edge node is configured to
distribute traffic load managed by virtual servers that are configured on the load balancer.

If you are using a central load balancer, increasing the number of virtual servers in the load balancer
pool might not always meet scale or performance criteria for a multi-tier distributed application. A
distributed load balancer is realized on each hypervisor where load balancing workloads, such as
clients and servers, are deployed, ensuring traffic is load balanced on each hypervisor in a
distributed way.

A distributed load balancer can be configured on the NSX-T network along with a central load
balancer.


In the diagram, an instance of the Distributed Load Balancer is attached to a VM group. As the
VMs are downlinks to the distributed logical router, the Distributed Load Balancer only load balances
east-west traffic. In contrast, the central load balancer manages north-south traffic.

To cater to the load balancing requirements of each component or module of an application, a
distributed load balancer can be attached to each tier of an application. For example, to serve a
user request, the frontend of the application needs to reach out to the middle module to get data.
However, the middle layer might not be deployed to serve the final data to the user, so it needs
to reach out to the backend layer to get additional data. For a complex application, many modules
might need to interact with each other to get information. Along with this complexity, when the
number of user requests increases exponentially, a distributed load balancer can efficiently meet
the user needs without taking a performance hit. Configuring a Distributed Load Balancer on
every host addresses issues of scale and packet transmission efficiency.

This chapter includes the following topics:

n Understanding Traffic Flow with a Distributed Load Balancer

n Create and Attach a Distributed Load Balancer Instance

n Create a Server Pool for Distributed Load Balancer

n Create a Virtual Server with a Fast TCP or UDP Profile

n Verifying Distributed Load Balancer Configuration on ESXi Hosts

n Monitoring Distributed Load Balancer Statistics


Understanding Traffic Flow with a Distributed Load Balancer


Understand how traffic flows between VMs that are connected to an instance of a distributed
load balancer.

As an administrator, ensure the following:

n Virtual IP addresses and pool members connected to a DLB instance must have unique IP
addresses for traffic to be routed correctly.

The traffic flow between Web VM1 and APP VM2 is as follows:

1 When Web VM1 sends out a packet to APP VM2, it is received by the VIP-APP.

The DLB APP is attached to the policy group consisting of Web tier VMs. Similarly, DLB-APP
hosting VIP-DB must be attached to the policy group consisting of App tier VMs.

2 The VIP-APP hosted on DLB APP receives the request from Web VM1.

3 Before reaching the destination VM group, the packet is filtered by distributed firewall rules.

4 After the packet is filtered based on the firewall rules, it is sent to the Tier-1 router.

5 It is further routed to the physical router.

6 The route is completed when the packet is delivered to the destination App VM2 group.

As DLB VIPs can only be accessed from VMs connected to downlinks of Tier-0 or Tier-1 logical
routers, DLB provides load balancing services to east-west traffic.

A DLB instance can co-exist with an instance of DFW. With DLB and DFW enabled on a virtual
interface of a hypervisor, the traffic is first load balanced based on the configuration in DLB and
then DFW rules are applied on traffic flowing from a VM to the hypervisor. DLB rules are applied
on traffic originating from downlinks of a Tier-0 or Tier-1 logical router going to the destination
hypervisor. DLB rules cannot be applied on traffic flowing in the reverse direction, that is, traffic
originating from outside the host going to a destination VM.


For example, if the DLB instance is load balancing traffic from Web-VMs to App-VMs, then to
allow such traffic to pass through DFW, ensure that the DFW rule is set to value "Source=Web-
VMs, Destination=App-VMs, Action=Allow".

Create and Attach a Distributed Load Balancer Instance


Unlike a central load balancer, a Distributed Load Balancer (DLB) instance is attached to virtual
interfaces of a VM group.

At the end of the procedure a DLB instance is attached to the virtual interfaces of a VM group.

It is only possible to create and attach a DLB instance through API commands.

Prerequisites

n Add a policy group consisting of VMs. For example, such a VM group can be related to the
App tier that receives requests from a VM on the Web-tier.

Procedure

u Run PUT /policy/api/v1/infra/lb-services/<mydlb> with a request body similar to the following:

{
  "connectivity_path" : "/infra/domains/default/groups/<clientVMGroup>",
  "enabled" : true,
  "size" : "DLB",
  "error_log_level" : "INFO",
  "access_log_enabled" : false,
  "resource_type" : "LBService",
  "display_name" : "mydlb"
}
Where,

n connectivity_path:

n If the connectivity path is set to Null or Empty, the DLB instance is not applied to any
transport nodes.

n If the connectivity path is set to ALL, all virtual interfaces of all transport nodes are
bound to the DLB instance. One DLB instance is applied to all the virtual interfaces of
the policy group.

n size: Set to value DLB. As each application or virtual interface gets an instance of DLB,
there is just a single size form factor of the DLB instance.

n enabled: By default, the created DLB instance is enabled.


A DLB instance is created and attached to the VM group. The DLB instance created on the
Web-tier is attached to all the virtual interfaces of the App-tier VM group.

What to do next

After creating a DLB instance, log in to the NSX Manager and go to Networking > Load Balancing >
Load Balancers to view details of the DLB instance.

Next, Create a Server Pool for Distributed Load Balancer.

Create a Server Pool for Distributed Load Balancer


Create a load balancer pool to include virtual machines that consume DLB services.

This task can be done both from the NSX-T UI and the NSX-T API.

The API command to create a DLB pool is PUT https://<NSXManager_IPaddress>/policy/api/v1/
infra/lb-pools/<lb-pool-id>.
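
As an illustrative sketch of the request body only (the pool name, member names, IP addresses, and paths are placeholders, and the field names should be confirmed in the NSX-T Data Center API Guide), a DLB pool definition might look like this:

{
  "display_name" : "dlb-app-pool",
  "algorithm" : "ROUND_ROBIN",
  "snat_translation" : { "type" : "LBSnatDisabled" },
  "members" : [ {
    "display_name" : "app-vm-1",
    "ip_address" : "192.168.100.11",
    "port" : "80"
  }, {
    "display_name" : "app-vm-2",
    "ip_address" : "192.168.100.12",
    "port" : "80"
  } ]
}

Only ROUND_ROBIN is supported as the algorithm, and SNAT translation must remain disabled, as described in the procedure below.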

Prerequisites

n Create a VM group that consumes DLB service.

n Create and attach a DLB instance to a VM group.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.

2 Go to Networking → Load Balancing → Server Pools.

3 Click Add Server Pool.

4 Enter values in these fields.

Field Description

Name Enter name of the DLB pool.

Algorithm Only ROUND_ROBIN is supported for a Distributed Load Balancer.

Members Click Select Members and add individual members to the group.
When adding individual members, enter values only in the following
fields:
n Name
n IP address
n Port

Note Except for the fields mentioned above, no other fields are supported
when adding members to a DLB pool.


Member Group Click Select Members and add a member group.


When adding a member group, enter values in the following fields:
n Name
n Compute Members: Click Set Members to add a group that includes all
the pool members.
n IP Revision Filter: Only IPv4 is supported.
n Port: Default port for all the dynamic pool members.

Note Except for the fields mentioned above, no other fields are supported
when adding members to a DLB pool.

SNAT Translation Mode Set this field to Disabled state. SNAT translation is not supported in a
Distributed Load Balancer.

5 Click Save.

Results

Server pool members are added for the Distributed Load Balancer.

What to do next

See Create a Virtual Server with a Fast TCP or UDP Profile.

Create a Virtual Server with a Fast TCP or UDP Profile


Create a virtual server and bind it to a Distributed Load Balancer service.

This task can be performed both from the NSX-T UI and NSX-T APIs.

The API command to create a virtual server is PUT https://<NSXManager_IPAddress>/policy/api/v1/
infra/lb-virtual-servers/<lb-virtual-server-id>.
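
As an illustrative sketch of the request body only (the names, IP address, and paths are placeholders, and the field names should be confirmed in the NSX-T Data Center API Guide), a DLB virtual server definition might look like this:

{
  "display_name" : "dlb-app-vip",
  "ip_address" : "10.20.30.40",
  "ports" : [ "80" ],
  "application_profile_path" : "/infra/lb-app-profiles/default-tcp-lb-app-profile",
  "lb_service_path" : "/infra/lb-services/mydlb",
  "pool_path" : "/infra/lb-pools/<lb-pool-id>"
}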

Prerequisites

n Create a server pool for the Distributed Load Balancer.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.

2 Go to Networking → Load Balancing → Virtual Servers.

3 Click Add Virtual Server -> L4 TCP.


4 To configure a virtual server for a Distributed Load Balancer, only the following fields are
supported.

Field Description

Name Enter a name for the virtual server.

IP Address The IP address of the Distributed Load Balancer virtual server, where it
receives all client connections and distributes them among the backend
servers.

Ports Virtual server port number.


Multiple ports or port ranges are not supported in the virtual server of a
Distributed Load Balancer.

Load Balancer Attach the Distributed Load Balancer instance that is associated to the
virtual server. The virtual server then knows which policy group the load
balancer is servicing.

Server Pool Select the server pool. The server pool contains backend servers. Server
pool consists of one or more servers that are similarly configured and are
running the same application. It is also referred to as pool members.

Application Profile Select the application profile for the virtual server.
The application profile defines the application protocol characteristics. It is
used to influence how load balancing is performed. The supported
application profiles are:
n Load Balancer Fast TCP Profile
n Load Balancer Fast UDP Profile

Default Pool Member Ports Optional field.


Enter one port number to be used when member ports are not defined.
Multiple ports or port ranges for default pool member ports are not
supported in the virtual server of a Distributed Load Balancer.

Persistence Optional field.


Select Source IP or Disabled.

The Distributed Load Balancer configuration is complete.

Results

Verify whether the DLB is distributing traffic to all the servers in the pool based on the algorithm
defined in the configuration. If you choose the Round_Robin algorithm, then DLB must be able to
choose servers from the pool in a round robin fashion.

In the ESXi host, verify whether the DLB configuration is complete.

What to do next

See Verifying Distributed Load Balancer Configuration on ESXi Hosts.


Verifying Distributed Load Balancer Configuration on ESXi Hosts

Verify whether the Distributed Load Balancer was configured completely on ESXi hosts.

After you securely connect to the ESXi host, run /opt/vmware/nsx-nestdb/bin/nestdb-cli. From
the nestdb-cli prompt, run the following commands.

Command Sample Response

To view the configured DLB service, run get LbServiceMsg. {'id': {'left': 13946864992859343551, 'right':
10845263561610880178}, 'virtual_server_id': [{'left':
13384746951958284821, 'right': 11316502527836868364}],
'display_name': 'mydlb', 'size': 'DLB', 'enabled': True,
'access_log_enabled': False, 'log_level':
'LB_LOG_LEVEL_INFO', 'applied_to': {'type': 'CONTAINER',
'attachment_id': {'left': 2826732686997341216, 'right':
10792930437485655035}}}

To view the virtual server configured for DLB, run get LbVirtualServerMsg. {'port': '80', 'revision': 0, 'display_name': 'mytcpvip', 'pool_id': {'left': 4370937730160476541, 'right':
13181758910457427118}, 'enabled': True,
'access_log_enabled': False, 'id': {'left':
13384746951958284821, 'right': 11316502527836868364},
'ip_protocol': 'TCP', 'ip_address': {'ipv4': 2071690107},
'application_profile_id': {'left': 1527034089224553657,
'right': 10785436903467108397}}

To view the configuration of the DLB pool members, run get LbPoolMsg. {'tcp_multiplexing_number': 6, 'display_name': 'mylbpool', 'tcp_multiplexing_enabled': False, 'member':
[{'port': '80', 'weight': 1, 'display_name':
'Member_VM30', 'admin_state': 'ENABLED', 'ip_address':
{'ipv4': 3232261280}, 'backup_member': False}, {'port':
'80', 'weight': 1, 'display_name': 'Member_VM31',
'admin_state': 'ENABLED', 'ip_address': {'ipv4':
3232261281}, 'backup_member': False}, {'port': '80',
'weight': 1, 'display_name': 'Member_VM32',
'admin_state': 'ENABLED', 'ip_address': {'ipv4':
3232261282}, 'backup_member': False}], 'id': {'left':
4370937730160476541, 'right': 13181758910457427118},
'min_active_members': 1, 'algorithm': 'ROUND_ROBIN'}


To view the NSX controller configuration pushed to the ESXi host, run get ContainerMsg. {'container_type': 'CONTAINER', 'id': {'left': 2826732686997341216, 'right': 10792930437485655035},
'vif': ['cd2e482b-2998-480f-beba-65fbd7ab1e62',
'f8aa2a58-5662-4c6b-8090-d1bd19174205', '83a1f709-
e675-4e42-b677-ff501fd0f4ec', 'b8366b39-4c81-41fc-b89e-
de7716462b2f'], 'name': 'default.clientVMGroup',
'mac_address': [{'mac': 52237218275}, {'mac':
52243694681}, {'mac': 52233233291}, {'mac':
52239463383}], 'ip_address': [{'ipv4': 16844388},
{'ipv4': 16844644}, {'ipv4': 16844132}, {'ipv4':
3232261283}, {'ipv4': 16844298}, {'ipv4': 16844554},
{'ipv4': 16844042}]}

To view the application profile configuration on the ESXi host, run get LbApplicationProfileMsg. {'display_name': 'default-tcp-lb-app-profile', 'id': {'left': 1527034089224553657, 'right':
10785436903467108397}, 'application_type': 'FAST_TCP',
'fast_tcp_profile': {'close_timeout': 8,
'flow_mirroring_enabled': False, 'idle_timeout': 1800}}

Monitoring Distributed Load Balancer Statistics


Use the following NSX-T Data Center CLI commands to monitor statistics for Distributed Load Balancer instances.

Action Command

Display all load balancers. get load-balancers

Display a specific load balancer. get load-balancer <UUID_LoadBalancer>

Show load balancer virtual-server configuration. get load-balancer <UUID_LoadBalancer> virtual-servers

Show statistics of all pools of the specified load balancer. get load-balancer <UUID_LoadBalancer> pools stats

Show statistics of the specified load balancer and pool. get load-balancer <UUID_LoadBalancer> pool <UUID_Pool> stats

Show persistence-tables entries. get load-balancer <UUID_LoadBalancer> persistence-tables

Show load balancer pools configuration. get load-balancer <UUID_LoadBalancer> pools

Show statistics of all virtual servers of the specified load balancer. get load-balancer <UUID_LoadBalancer> virtual-servers stats

Show statistics of the specified load balancer and virtual server. get load-balancer <UUID_LoadBalancer> virtual-server <UUID_VirtualServer> stats

Clear statistics of the specified load balancer and pool. clear load-balancer <UUID_LoadBalancer> pool <UUID_Pool> stats

Clear statistics of all pools of the specified load balancer. clear load-balancer <UUID_LoadBalancer> pools stats

Clear statistics of the specified load balancer. clear load-balancer <UUID_LoadBalancer> stats

Clear statistics of the specified load balancer and virtual server. clear load-balancer <UUID_LoadBalancer> virtual-server <UUID_VirtualServer> stats

Clear statistics of all virtual servers of the specified load balancer. clear load-balancer <UUID_LoadBalancer> virtual-servers stats



10 Forwarding Policies
This feature pertains to NSX Cloud.

Forwarding Policies or Policy-Based Routing (PBR) rules define how NSX-T handles traffic from
an NSX-managed VM. This traffic can be steered to NSX-T overlay or it can be routed through
the cloud provider's (underlay) network.

Note See Chapter 23 Using NSX Cloud for details on how to manage your public cloud workload
VMs with NSX-T Data Center.

Three default forwarding policies are set up automatically after you either deploy a PCG on a
Transit VPC/VNet or link a Compute VPC/VNet to the Transit.

1 One Route to Underlay for all traffic that is addressed within the Transit/Compute VPC/VNet

2 Another Route to Underlay for all traffic destined to the metadata services of the public
cloud.

3 One Route to Overlay for all other traffic, for example, traffic that is headed outside the
Transit/Compute VPC/VNet. Such traffic is routed over the NSX-T overlay tunnel to the PCG
and further to its destination.

Note For traffic destined to another VPC/VNET managed by the same PCG: Traffic is
routed from the source NSX-managed VPC/VNet via the NSX-T overlay tunnel to the PCG
and then routed to the destination VPC/VNet.

For traffic destined to another VPC/VNet managed by a different PCG: Traffic is routed
from one NSX-managed VPC/VNet over the NSX overlay tunnel to the PCG of the source
VPC/VNet and forwarded to the PCG of the destination NSX-managed VPC/VNet.

If traffic is headed to the internet, the PCG routes it to the destination in the internet.

Micro-segmentation while Routing to Underlay


Micro-segmentation is enforced even for workload VMs whose traffic is routed to the underlay
network.


If you have direct connectivity from an NSX-managed workload VM to a destination outside the
managed VPC/VNet and want to bypass the PCG, set up a forwarding policy to route traffic from
this VM via underlay.

When traffic is routed through the underlay network, the PCG is bypassed and therefore the
north-south firewall is not encountered by traffic. However, you still have to manage rules for
east-west or distributed firewall (DFW) because those rules are applied at the VM-level before
reaching the PCG.

Supported Forwarding Policies and Common Use Cases


You may see a list of forwarding policies in the drop-down menu but in this release only the
following forwarding policies are supported:

n Route to Underlay

n Route from Underlay

n Route to Overlay

These are the common scenarios where forwarding policies are useful:

n Route to Underlay: Access a service on underlay from an NSX-managed VM. For example,
access to the AWS S3 service on the AWS underlay network.

n Route from Underlay: Access a service hosted on an NSX-managed VM from the underlay
network. For example, access from AWS ELB to the NSX-managed VM.

This chapter includes the following topics:

n Add or Edit Forwarding Policies

Add or Edit Forwarding Policies


You can edit the auto-created forwarding policies or add new ones.

For example, to use services provided by the public cloud, such as S3 by AWS, you can manually
create a policy to allow a set of IP addresses to access this service by being routed through
underlay.

Prerequisites

You must have a VPC or VNet with a PCG deployed on it.

Procedure

1 Click Add Section. Name the section appropriately, for example, AWS Services.

2 Select the check box next to the section and click Add Rule. Name the rule, for example,
S3 Rules.


3 In the Sources tab, select the VPC or VNet where you have the workload VMs to which you
want to provide the service access, for example, the AWS VPC. You can also create a Group
here to include multiple VMs matching one or more criteria.

4 In the Destinations tab, select the VPC or VNet where the service is hosted, for example, a
Group that contains the IP address of the S3 service in AWS.

5 In the Services tab, select the service from the drop-down menu. If the service does not exist,
you can add it. You can also leave the selection to Any because you can provide the routing
details under Destinations.

6 In the Action tab, select how you want the routing to work, for example, select Route to
Underlay if setting up this policy for the AWS S3 service.

7 Click Publish to finish setting up the Forwarding Policy.



11 IP Address Management (IPAM)
To manage IP addresses, you can configure DNS (Domain Name System), DHCP (Dynamic Host
Configuration Protocol), IP address pools, and IP address blocks.

Note IP blocks are used by NSX Container Plug-in (NCP). For more info about NCP, see the NSX
Container Plug-in for Kubernetes and Cloud Foundry - Installation and Administration Guide.

This chapter includes the following topics:

n Add a DNS Zone

n Add a DNS Forwarder Service

n Add a DHCP Profile

n Attach a DHCP Profile to a Tier-0 or Tier-1 Gateway

n Scenarios: Selection of Edge Cluster for DHCP Service

n Scenarios: Impact of Changing Segment Connectivity on DHCP

n Add an IP Address Pool

n Add an IP Address Block

Add a DNS Zone


You can configure DNS zones for your DNS service. A DNS zone is a distinct portion of the
domain name space in DNS.

When you configure a DNS zone, you can specify a source IP for a DNS forwarder to use when
forwarding DNS queries to an upstream DNS server. If you do not specify a source IP, the DNS
query packet's source IP will be the DNS forwarder's listener IP. Specifying a source IP is needed
if the listener IP is an internal address that is not reachable from the external upstream DNS
server. To ensure that the DNS response packets are routed back to the forwarder, a dedicated
source IP is needed. Alternatively, you can configure SNAT on the logical router to translate the
listener IP to a public IP. In this case, you do not need to specify a source IP.
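
If you prefer to define a zone through the Policy API, a minimal sketch is shown below. The zone ID, domain name, and addresses are placeholders, and the endpoint and field names (dns-forwarder-zones, dns_domain_names, upstream_servers, source_ip) are assumptions to verify against the NSX-T Data Center API Guide.

PUT https://<nsx-manager-ip-address>/policy/api/v1/infra/dns-forwarder-zones/<zone-id>

{
  "display_name" : "corp-fqdn-zone",
  "dns_domain_names" : [ "corp.example.com" ],
  "upstream_servers" : [ "10.20.30.40" ],
  "source_ip" : "172.16.1.5"
}

For a default zone, dns_domain_names is typically left empty.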


Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.

2 Select Networking > IP Management > DNS.

3 Click the DNS Zones tab.

4 To add a default zone, select Add DNS Zone > Add Default Zone

a Enter a name and optionally a description.

b Enter the IP address of up to three DNS servers.

c (Optional) Enter an IP address in the Source IP field.

5 To add an FQDN zone, select Add DNS Zone > Add FQDN Zone

a Enter a name and optionally a description.

b Enter a FQDN for the domain.

c Enter the IP address of up to three DNS servers.

d (Optional) Enter an IP address in the Source IP field.

6 Click Save.

Add a DNS Forwarder Service


You can configure a DNS forwarder to forward DNS queries to external DNS servers.

Before you configure a DNS forwarder, you must configure a default DNS zone. Optionally, you
can configure one or more FQDN DNS zones. Each DNS zone is associated with up to 3 DNS
servers. When you configure a FQDN DNS zone, you specify one or more domain names. A DNS
forwarder is associated with a default DNS zone and up to 5 FQDN DNS zones. When a DNS
query is received, the DNS forwarder compares the domain name in the query with the domain
names in the FQDN DNS zones. If a match is found, the query is forwarded to the DNS servers
specified in the FQDN DNS zone. If a match is not found, the query is forwarded to the DNS
servers specified in the default DNS zone.
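
A minimal sketch of the equivalent Policy API call is shown below; the tier-1 ID, listener IP, and zone paths are placeholders, and the endpoint and field names are assumptions to verify against the NSX-T Data Center API Guide.

PUT https://<nsx-manager-ip-address>/policy/api/v1/infra/tier-1s/<tier-1-id>/dns-forwarder

{
  "display_name" : "tier1-dns-forwarder",
  "listener_ip" : "10.10.10.53",
  "default_forwarder_zone_path" : "/infra/dns-forwarder-zones/<default-zone-id>",
  "conditional_forwarder_zone_paths" : [ "/infra/dns-forwarder-zones/<fqdn-zone-id>" ],
  "log_level" : "INFO",
  "enabled" : true
}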

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.

2 Select Networking > IP Management > DNS.

3 Click Add DNS Service.

4 Enter a name and optionally a description.

5 Select a tier-0 or tier-1 gateway.


6 Enter the IP address of the DNS service.

Clients send DNS queries to this IP address, which is also known as the DNS forwarder's
listener IP.

7 Select a default DNS zone.

8 Select a log level.

9 Select up to five FQDN zones.

10 Click the Admin Status toggle to enable or disable the DNS service.

11 Click Save.

Add a DHCP Profile


Before you can configure DHCP on a segment, you must add a DHCP profile in your network.
You can create two types of DHCP profiles: DHCP server profile and DHCP relay profile.

A DHCP profile can be used simultaneously by multiple segments and gateways in your network.
The following conditions apply when you attach a DHCP profile to a segment or a gateway:

n On a tier-0 or tier-1 gateway or a gateway-connected segment, you can attach either a DHCP
server profile or a DHCP relay profile.

n On a standalone segment that is not connected to a gateway, you can attach only a DHCP
server profile. Standalone segment supports only a local DHCP server.

n Add a DHCP Server Profile


You can add multiple DHCP server profiles in your network. Further, you can attach a single
DHCP server profile to multiple DHCP servers.

n Add a DHCP Relay Profile


You can add a DHCP relay profile to relay the DHCP traffic to remote DHCP servers. The
remote or external DHCP servers can be in any overlay segment, outside the SDDC, or in the
physical network.

Add a DHCP Server Profile


You can add multiple DHCP server profiles in your network. Further, you can attach a single
DHCP server profile to multiple DHCP servers.
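
If you prefer to create the profile through the Policy API, a minimal sketch is shown below. The profile ID, server address, and edge cluster path are placeholders, and the endpoint and field names (dhcp-server-configs, server_addresses, lease_time, edge_cluster_path) are assumptions to verify against the NSX-T Data Center API Guide.

PUT https://<nsx-manager-ip-address>/policy/api/v1/infra/dhcp-server-configs/<dhcp-server-config-id>

{
  "display_name" : "dhcp-profile-01",
  "server_addresses" : [ "10.100.0.2/30" ],
  "lease_time" : 86400,
  "edge_cluster_path" : "/infra/sites/default/enforcement-points/default/edge-clusters/<edge-cluster-id>"
}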

Prerequisites

n Edge nodes are deployed in the network.

n Edge cluster is added in the network.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.


2 Select Networking > IP Management > DHCP.

3 Click Add DHCP Profile.

4 Enter a unique name to identify the DHCP server profile.

5 In the Profile Type drop-down menu, select DHCP Server.

6 (Optional) Enter the IP address of the DHCP server in a CIDR format.

Note A maximum of two DHCP server IP addresses are supported. You can enter one IPv4
address and one IPv6 address. For an IPv4 address, the prefix length must be <= 30, and for
an IPv6 address, the prefix length must be <= 126. The DHCP server IP address must not
overlap with the addresses used in DHCP ranges and DHCP static binding.

If no server IP address is specified, 100.96.0.1/30 is autoassigned to the DHCP server.

The server IP address cannot be any of the following:

n Multicast IP address

n Broadcast IP address

n Loopback IP address

n Unspecified IP address (address with all zeroes)

7 (Optional) Edit the lease time in seconds. The default value is 86400.

Valid range of values is 60–4294967295.

8 Select an Edge cluster.

Follow these guidelines:

n If you are using a local DHCP server on a segment, you must select an edge cluster in the
DHCP server profile. If an edge cluster is unavailable in the profile, an error message is
displayed when you save the segment.

n If you are using a Gateway DHCP server on the segment, select an edge cluster either in
the gateway, or DHCP server profile, or both. If an edge cluster is unavailable in either the
profile or the gateway, an error message is displayed when you save the segment.

Caution You can change the edge cluster in the profile after the DHCP server is created.
However, this action causes all the existing DHCP leases that are assigned to the DHCP
clients to be lost.

When a DHCP server profile is attached to a segment that uses a DHCP local server, the
DHCP service is created in the edge cluster that you specified in the DHCP profile. However, if
the segment uses a Gateway DHCP server, the edge cluster in which the DHCP service is
created depends on a combination of several factors. For a detailed information about how
an edge cluster is selected for DHCP service, see Scenarios: Selection of Edge Cluster for
DHCP Service.


9 (Optional) Next to Edges, click Set and select the preferred edge nodes where you want the
DHCP service to run.

To select the preferred edge nodes, an edge cluster must be selected. You can select a
maximum of two preferred edge nodes. The following list explains how DHCP HA is configured
in each scenario:

n No preferred edge node is selected from the edge cluster: DHCP HA is configured. A pair of
active and standby edge nodes are selected automatically from the available nodes in the
edge cluster.

n Only one preferred edge node is selected from the edge cluster: The DHCP server runs
without HA support.

n Two preferred edge nodes are selected from the edge cluster: DHCP HA is configured. The
first edge node that you add becomes the active edge, and the second edge node becomes
the standby edge. The active edge is denoted with sequence number 1, and the standby
edge is denoted with sequence number 2. You can interchange the active and standby
edges. For example, to change the current active edge to standby, select the active edge
and click the Down arrow. Alternatively, select the standby edge and click the Up arrow to
make it active. The sequence numbers are reversed in both situations.

After the DHCP server is created, you can change the preferred edge nodes in the DHCP
server profile. However, this flexibility includes certain caveats.

For example, let us assume that the edge cluster in the DHCP profile has four edge nodes N1,
N2, N3, and N4, and you have set N1 and N2 as the preferred edge nodes. N1 is the active
edge and N2 is the standby edge. The DHCP service is running on the active edge node N1,
and the DHCP server has started assigning leases to the DHCP clients on the segment.

n Delete existing preferred edge nodes N1 and N2, and add N3 and N4 as the new preferred
edge nodes: A warning message informs you that the current DHCP leases will be lost due to
the replacement of existing preferred edges. This action can cause a loss of network
connectivity. You can prevent loss of connectivity by replacing one edge node at a time.

n Delete existing preferred edges N1 and N2, and keep the preferred edge nodes list empty:
The DHCP servers remain on the edge nodes N1 and N2. The DHCP leases are retained and
the DHCP clients do not lose network connectivity.

n Delete any one of the preferred edges, either N1 or N2: When any one of the preferred
edges N1 or N2 is deleted, the other edge continues to provide IP addresses to the DHCP
clients. The DHCP leases are retained and the DHCP clients do not experience a loss of
network connectivity. However, DHCP HA support is lost. To retain DHCP HA, you must
replace the deleted edge with another edge node, either N3 or N4, in the edge cluster.


10 (Optional) In the Tag drop-down menu, enter a tag name. When you are done, click Add
Item(s).

The maximum length of the tag name is 256 characters.

If tags exist in the inventory, the Tag drop-down menu displays a list of all the available tags
and their scope. The list of available tags includes user-defined tags, system-defined tags,
and discovered tags. You can select an existing tag from the drop-down menu and add it to
the DHCP profile.

11 Click Save.
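
The same profile can be created with the Policy API. The following is a minimal sketch: the profile
ID, server address, and edge cluster and edge node IDs are placeholders, and the field names
correspond to the NSX-T 3.0 DhcpServerConfig object (preferred_edge_paths is optional), so
verify them against the NSX-T API Guide for your version.

# DHCP server profile with an edge cluster and two preferred edge nodes; all IDs are examples
curl -k -u 'admin:<password>' -X PATCH \
  https://{NSXManager_IP}/policy/api/v1/infra/dhcp-server-configs/dhcp-profile-1 \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "dhcp-profile-1",
        "server_addresses": ["172.16.10.2/30"],
        "lease_time": 86400,
        "edge_cluster_path": "/infra/sites/default/enforcement-points/default/edge-clusters/<edge-cluster-id>",
        "preferred_edge_paths": [
          "/infra/sites/default/enforcement-points/default/edge-clusters/<edge-cluster-id>/edge-nodes/<node-1-id>",
          "/infra/sites/default/enforcement-points/default/edge-clusters/<edge-cluster-id>/edge-nodes/<node-2-id>"
        ]
      }'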

What to do next

Attach the DHCP server profile either to a segment or a gateway, and configure the DHCP server
settings at the level of each segment.

n Attach a DHCP Profile to a Tier-0 or Tier-1 Gateway.

n Configure DHCP on a Segment.

Add a DHCP Relay Profile


You can add a DHCP relay profile to relay the DHCP traffic to remote DHCP servers. The remote
or external DHCP servers can be in any overlay segment, outside the SDDC, or in the physical
network.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > IP Management > DHCP.

3 Click Add DHCP Profile.

4 Enter a unique name to identify the relay profile.

5 In the Profile Type drop-down menu, select DHCP Relay.

6 (Required) Enter the IP addresses of the remote DHCP servers.

Both DHCPv4 and DHCPv6 servers are supported. You can enter multiple IP addresses. The
server IP addresses of the remote DHCP servers must not overlap with the addresses that
are used in DHCP ranges and DHCP static binding.

The server IP address cannot be any of the following:

n Multicast IP address

n Broadcast IP address

n Loopback IP address

n Unspecified IP address (address with all zeroes)


7 (Optional) In the Tag drop-down menu, enter a tag name. When you are done, click Add
Item(s).

The maximum length of the tag name is 256 characters.

If tags exist in the inventory, the Tag drop-down menu displays a list of all the available tags
and their scope. The list of available tags includes user-defined tags, system-defined tags,
and discovered tags. You can select an existing tag from the drop-down menu and add it to
the DHCP profile.

8 Click Save.
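
A relay profile can be created with a similar Policy API call, sketched below with placeholder
values; the server_addresses field corresponds to the NSX-T 3.0 DhcpRelayConfig object, so
verify it against the NSX-T API Guide for your version.

# DHCP relay profile pointing at two remote DHCP servers; all values are examples
curl -k -u 'admin:<password>' -X PATCH \
  https://{NSXManager_IP}/policy/api/v1/infra/dhcp-relay-configs/dhcp-relay-1 \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "dhcp-relay-1",
        "server_addresses": ["172.16.50.10", "172.16.50.11"]
      }'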

What to do next

Attach the DHCP relay profile either to a gateway, or use the profile to configure a local DHCP
relay on the segment.

n Attach a DHCP Profile to a Tier-0 or Tier-1 Gateway.

n Configure DHCP on a Segment.

Attach a DHCP Profile to a Tier-0 or Tier-1 Gateway


To use Gateway DHCP for a dynamic IP assignment, you must attach a DHCP server profile to a
tier-0 or tier-1 gateway.

You can attach a DHCP profile to a gateway only when the segments connected to that gateway
do not have a local DHCP server or DHCP relay configured on them. If a local DHCP server or
DHCP relay exists on the segment, the UI displays an error when you try to attach a DHCP profile
to the gateway. You must disconnect the segments from the gateway, and then attach a DHCP
profile to the gateway.

Prerequisites

A DHCP server profile is added in the network.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Go to Networking > Tier-0 Gateways or Networking > Tier-1 Gateways.

3 Edit the appropriate gateway.

4 Do any one of the following, depending on the version of NSX-T Data Center that you are
using:

n In version 3.0.2, next to DHCP, click Set DHCP Configuration.

n In version 3.0 and 3.0.1, next to IP Address Management, click No Dynamic IP Allocation.


5 In the Type drop-down menu, select DHCP Server or DHCP Relay.

Note If you select the profile type as DHCP Relay, the configuration does not take effect.
You must assign the DHCP relay profile to the segments that are connected to the
gateway. Attaching a DHCP relay profile to the gateway is a redundant configuration. This
functional behavior is a known issue. For information about assigning a DHCP relay profile to
the segment, click the Configure DHCP on a Segment link in the What to do next section of
this topic.

6 Select a DHCP server profile to attach to this gateway.

7 Click Save.
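
The attachment can also be made through the Policy API by patching the gateway with the path
of the DHCP profile. The following is a minimal sketch with placeholder IDs; the dhcp_config_paths
field reflects the NSX-T 3.0 Tier1 object (a tier-0 gateway is patched at /infra/tier-0s/<tier-0-id>
in the same way), so verify it against the NSX-T API Guide for your version.

# Attach an existing DHCP server profile to a tier-1 gateway; IDs are examples
curl -k -u 'admin:<password>' -X PATCH \
  https://{NSXManager_IP}/policy/api/v1/infra/tier-1s/tier1-gw \
  -H 'Content-Type: application/json' \
  -d '{ "dhcp_config_paths": ["/infra/dhcp-server-configs/dhcp-profile-1"] }'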

What to do next

Navigate to Networking > Segments. On each segment that is connected to this gateway,
configure the DHCP settings, static bindings, and other DHCP options.

For detailed steps, see:

n Configure DHCP on a Segment.

n Configure DHCP Static Bindings on a Segment.

After a Gateway DHCP server is in use, you can view the DHCP server statistics on the gateway.
On the gateway, next to DHCP or IP Address Management, click the Servers link. On the Set
DHCP Configuration page, click Statistics.

The Gateway DHCP server statistics are displayed in a pop-up window.

Note If you have configured a local DHCP server on a gateway-connected segment, the
Statistics link on the Set DHCP Configuration page does not display the local DHCP server
statistics. Only Gateway DHCP statistics are shown on this page.

Scenarios: Selection of Edge Cluster for DHCP Service


DHCP server runs as a service (service router) in the edge nodes of an NSX Edge cluster.

Standalone segments that are not connected to a gateway can use only a DHCP local server.
Segments that are connected to a gateway can use either a DHCP local server, DHCP relay, or
Gateway DHCP server.

Regardless of whether a segment uses a DHCP local server, DHCP relay, or a Gateway DHCP
server, DHCP always runs as a service router in the edge transport nodes of an edge cluster. If
the segment uses a DHCP local server, the DHCP service is created in the edge cluster that you
specified in the DHCP profile. However, if the segment uses a Gateway DHCP server, the edge
cluster in which the DHCP service is created depends on the combination of the following factors:

n Is an edge cluster specified in the gateway?

n Is an edge cluster specified in the DHCP profile of the gateway?


n Is the edge cluster in the gateway and in the DHCP profile the same or different?

n Is the tier-1 routed segment connected to a tier-0 gateway?

The following scenarios explain how the edge cluster is selected for creating the DHCP service.

Scenario 1: Standalone Segment Uses DHCP Local Server


Scenario Description:

n An edge cluster (Cluster1) is created with four edge nodes: N1, N2, N3, N4.

n A segment with None connectivity is added in the overlay transport zone.

n Segment uses a DHCP local server, by default.

The DHCP server profile configuration is as follows:

n Profile Type: DHCP Server

n Edge Cluster: Cluster1

n Preferred Edges: None

In this scenario, any two edge nodes from Cluster1 are autoallocated to create the DHCP service,
and DHCP high availability (HA) is automatically configured. One of the edge nodes in Cluster1
runs in active mode and the other edge runs in passive mode.

Note
n If you select two preferred edge nodes in the DHCP profile, the edge node that is added first
becomes the active edge. The second edge node takes the passive role.

n If you select only one preferred edge node in the DHCP profile, DHCP HA is not configured.

Scenario 2: Tier-1 Routed Segment Uses Gateway DHCP and Different Edge Clusters in Gateway and DHCP Profile

Consider that you have two edge clusters in your network (Cluster1 and Cluster2). Both clusters
have four edge nodes each:

n Cluster1 edge nodes: N1, N2, N3, N4

n Cluster2 edge nodes: N5, N6, N7, N8

Scenario Description:

n Segment is connected to a tier-1 gateway.

n Tier-1 gateway is not connected to a tier-0 gateway.

n DHCP server profile in the tier-1 gateway uses Cluster1.

n Tier-1 gateway uses Cluster2.

n Segment is configured to use the Gateway DHCP server.


The DHCP server profile in the tier-1 gateway has the following configuration:

n Profile Type: DHCP Server

n Edge Cluster: Cluster1

n Preferred Edges: N1,N2 (added in the given sequence)

The tier-1 gateway configuration is as follows:

n Edge Cluster: Cluster2

n Preferred Edges: N5,N6 (added in the given sequence)

In this scenario, DHCP service runs on the edge nodes of Cluster2. As Cluster2 contains multiple
edge nodes, DHCP HA is autoconfigured. However, the preferred edges N5 and N6 on the
gateway are ignored for DHCP HA. Any two nodes from Cluster2 are randomly autoallocated for
DHCP HA.

This scenario also applies when the segment is directly connected to a tier-0 gateway, and there
is no tier-1 gateway in your network topology.

Caution Starting in NSX-T Data Center 3.0.2, you can change the edge cluster on the Gateway
DHCP server after the DHCP server is created. However, this action causes all the existing DHCP
leases that are assigned to the DHCP clients to be lost.

To summarize, the main points of this scenario are as follows:

n When you use a Gateway DHCP server and set different edge clusters in the gateway DHCP
profile and tier-1 gateway, then DHCP service is always created in the edge cluster of the
gateway.

n The edge nodes are randomly allocated from the edge cluster of the tier-1 gateway for DHCP
HA configuration.

n If no edge cluster is specified in the tier-1 gateway, the edge cluster in the DHCP profile of the
tier-1 gateway (Cluster1) is used to create the DHCP service.

Scenario 3: Tier-1 Routed Segment Uses Local DHCP Server and Different Edge Clusters in Gateway and DHCP Profile

In this scenario, a segment is connected to a tier-1 gateway, but you use a local DHCP server on
the segment. Consider that you have three edge clusters in your network (Cluster1, Cluster2,
Cluster3). Each cluster has two edge nodes.

n Cluster1 edge nodes: N1, N2

n Cluster2 edge nodes: N3, N4

n Cluster3 edge nodes: N5, N6

Scenario Description:

n Segment is connected to a tier-1 gateway.


n Tier-1 gateway is connected to a tier-0 gateway (optional).

n DHCP profile on the gateway uses Cluster1.

n Gateway uses Cluster2.

n Segment is configured to use DHCP local server.

n Local DHCP server profile uses Cluster3.

The DHCP server profile on the gateway is as follows:

n Profile Name: ProfileX

n Profile Type: DHCP Server

n Edge Cluster: Cluster1

n Preferred Edges: N1,N2 (added in the given sequence)

The tier-1 gateway configuration is as follows:

n Edge Cluster: Cluster2

n Preferred Edges: N3,N4 (added in the given sequence)

The profile of the local DHCP server is as follows:

n Profile Name: ProfileY

n Profile Type: DHCP Server

n Edge Cluster: Cluster3

n Preferred Edges: N5,N6 (added in the given sequence)

In this scenario, because the segment is configured to use a local DHCP server, the edge cluster
(Cluster2) in the connected tier-1 gateway is ignored to create the DHCP service. DHCP service
runs in the edge nodes of Cluster3 (N5, N6). DHCP HA is also configured. N5 becomes the active
edge node and N6 becomes the standby edge.

If no preferred nodes are set in Cluster3, any two nodes from this cluster are autoallocated for
creating the DHCP service and configuring DHCP HA. One of the edge nodes becomes an active
edge and the other node becomes the standby edge. If only one preferred edge node is set in
Cluster3, DHCP HA is not configured.

This scenario also applies when the segment is directly connected to a tier-0 gateway, and there
is no tier-1 gateway in your network topology.

Scenario 4: Tier-1 Routed Segment Uses Gateway DHCP and Same Edge Clusters in Gateway and DHCP Profile

Consider that you have a single edge cluster (Cluster1) in your network with four edge nodes: N1,
N2, N3, N4.


Scenario Description:

n Segment is connected to a tier-1 gateway.

n Tier-1 gateway is connected to a tier-0 gateway (optional)

n Gateway and DHCP profile on the gateway use the same edge cluster (Cluster1).

n Segment is configured to use Gateway DHCP server.

The DHCP server profile on the gateway is as follows:

n Profile Type: DHCP Server

n Edge Cluster: Cluster1

n Preferred Edges: N1,N2 (added in the given sequence)

The tier-1 gateway configuration is as follows:

n Edge Cluster: Cluster1

n Preferred Edges: N3,N4 (added in the given sequence)

In this scenario, as the gateway DHCP profile and the gateway use the same edge cluster (Cluster1),
DHCP service is created in the preferred edge nodes N1 and N2 of the gateway DHCP profile.
The preferred edge nodes N3 and N4 that you specified in the connected tier-1 gateway are
ignored for creating the DHCP service.

If no preferred edges are set in the DHCP profile, any two nodes from Cluster1 are autoallocated
for creating the DHCP service and configuring DHCP HA. One of the edge nodes becomes an
active edge and the other edge becomes the standby edge.

To summarize, the main points of this scenario are as follows:

n When you use a Gateway DHCP server and specify the same edge cluster in the DHCP profile
and connected gateway, then DHCP service is created in the preferred edge nodes of the
DHCP profile.

n The preferred edge nodes specified in the connected gateway are ignored.

Scenario 5: Tier-1 Routed Segment is Connected to Tier-0 Gateway and No Edge Cluster is Set in Tier-1 Gateway

In this scenario, a segment is connected to a tier-1 gateway, and the tier-1 gateway is connected
to a tier-0 gateway. Consider that you have three edge clusters in your network (Cluster1,
Cluster2, Cluster3). Each cluster has two edge nodes.

n Cluster1 edge nodes: N1, N2

n Cluster2 edge nodes: N3, N4

n Cluster3 edge nodes: N5, N6

Scenario Description:

n Segment is directly connected to a tier-1 gateway.


n Tier-1 gateway is connected to a tier-0 gateway.

n DHCP server profile is specified on both tier-1 and tier-0 gateways.

n DHCP profile on tier-1 gateway uses Cluster1.

n DHCP profile on tier-0 gateway uses Cluster2.

n No edge cluster is selected in tier-1 gateway.

n Tier-0 gateway uses Cluster3.

n Segment is configured to use a Gateway DHCP server.

In this scenario, because the tier-1 gateway has no edge cluster specified, NSX-T Data Center
falls back on the edge cluster of the connected tier-0 gateway. DHCP service is created in the
edge cluster of tier-0 gateway (Cluster3). Any two edge nodes from this edge cluster are
autoallocated for creating the DHCP service and configuring DHCP HA.

To summarize, the main points of this scenario are as follows:

n When a tier-1 gateway has no edge cluster specified, NSX-T Data Center falls back on the
edge cluster of the connected tier-0 gateway to create the DHCP service.

n If no edge cluster is detected in the tier-0 gateway, DHCP service is created in the edge
cluster of the tier-1 gateway DHCP profile.

Scenarios: Impact of Changing Segment Connectivity on DHCP

After you save a segment with DHCP configuration, you must be careful about changing the
connectivity of the segment.

Segment connectivity changes are allowed only when the segments and gateways belong to the
same transport zone.

The following scenarios explain the segment connectivity changes that are allowed or disallowed,
and whether DHCP is impacted in each of these scenarios.

Scenario 1: Move a Routed Segment with Gateway DHCP Server to a Different Gateway

Consider that you have added a segment and connected it either to a tier-0 or tier-1 gateway.
You configured Gateway DHCP server on this segment, saved the segment, and connected
workloads to this segment. DHCP service is now used by the workloads on this segment.

Later, you decide to change the connectivity of this segment to another tier-0 or tier-1 gateway,
which is in the same transport zone.

n Starting in NSX-T Data Center 3.0.2, this change is allowed. However, when you save the
segment, an information message alerts you that changing the gateway connectivity impacts
the existing DHCP leases, which are assigned to the workloads.


n In NSX-T Data Center 3.0 and 3.0.1, you cannot change the connectivity of the segment from
one gateway to another gateway when the segment uses a Gateway DHCP server. Use the
following steps in the workaround:

Workaround (only for versions 3.0 and 3.0.1):

1 Temporarily disconnect the existing segment from the gateway, or delete the segment.
Temporary disconnection of the segment is supported only with the API. Follow these steps:

a Retrieve the segment details by running the following GET API:

GET https://{NSXManager_IP}/policy/api/v1/infra/segments/{segment-id}

Replace segment-id with the actual ID of the segment that you want to disconnect from
the gateway.

b Observe that the advanced_config section in the API output shows connectivity:"ON".

c Copy the GET API output in a text file and edit connectivity to OFF. Paste the complete
API output in the request body of the following PATCH API:

PATCH https://{NSXManager_IP}/policy/api/v1/infra/segments/{segment-id}

d Run the PATCH API to disconnect the segment (see the sample request after these steps).

2 Add a new segment.

3 Connect this new segment to the gateway of your choice.
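
For reference, step 1 of this workaround can be issued with curl as sketched below. The segment
ID app-segment is a placeholder, and segment.json must contain the complete GET output with
only the connectivity value changed from ON to OFF.

# Retrieve the segment and save the output, then edit connectivity to OFF in segment.json
curl -k -u 'admin:<password>' \
  https://{NSXManager_IP}/policy/api/v1/infra/segments/app-segment -o segment.json

# Push the edited configuration back to disconnect the segment
curl -k -u 'admin:<password>' -X PATCH \
  https://{NSXManager_IP}/policy/api/v1/infra/segments/app-segment \
  -H 'Content-Type: application/json' \
  -d @segment.json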

Scenario 2: Move a Routed Segment with Local DHCP Server or Relay to a Different Gateway

Consider that you have added a segment and connected it either to a tier-0 or tier-1 gateway.
You configured local DHCP server or DHCP relay on this segment, saved the segment, and
connected workloads to this segment. DHCP service is now used by the workloads on this
segment.

Later, you decide to change the connectivity of this segment to another tier-0 or tier-1 gateway,
which is in the same transport zone. This change is allowed. As the DHCP server is local to the
segment, the DHCP configuration settings, including ranges, static bindings, and DHCP options
are retained on the segment. The DHCP leases of the workloads are retained and there is no loss
of network connectivity.

After the segment is moved to a new gateway, you can continue to update the DHCP
configuration settings, and other segment properties.

n If you are using NSX-T Data Center 3.0 or 3.0.1, you cannot change the DHCP type and DHCP
profile of a routed segment after moving the segment to a different gateway. For example,
you cannot change the DHCP type from a local DHCP server or a DHCP relay to a Gateway
DHCP server. In addition, you cannot select a different DHCP server profile or relay profile in
the segment. But, you can edit the properties of the DHCP profile, if needed.


n Starting in version 3.0.2, you can change the DHCP type and DHCP profile of a routed
segment after moving the segment to a different gateway.

Scenario 3: Move a Standalone Segment with Local DHCP Server to a Tier-0 or Tier-1 Gateway

Consider that you have added a segment with None connectivity in your network. You have
configured local DHCP server on this segment, saved the segment, and connected workloads to
this segment. DHCP service is now used by the workloads on this segment.

Later, you decide to connect this segment either to a tier-0 or tier-1 gateway, which is in the
same transport zone. This change is allowed. As a local DHCP server existed on the segment, the
DHCP configuration settings, including ranges, static bindings, and DHCP options are retained on
the segment. The DHCP leases of the workloads are retained and there is no loss of network
connectivity.

After the segment is connected to the gateway, you can continue to update the DHCP
configuration settings, and other segment properties. However, you cannot select a different
DHCP type and the DHCP profile in the segment. For example, you cannot change the DHCP
type from a local DHCP server to a Gateway DHCP server or a DHCP relay. In addition, you
cannot change the DHCP server profile in the segment. But, you can edit the properties of the
DHCP profile, if needed.

Scenario 4: Move a Standalone Segment Without DHCP Configuration to a Tier-0 or Tier-1 Gateway

Consider that you have added a segment with None connectivity in your network. You have not
configured DHCP on this segment. You saved the segment and connected workloads to this
segment.

Later, you decide to connect this segment either to a tier-0 or tier-1 gateway, which is in the
same transport zone. This change is allowed. As no DHCP configuration existed on the segment,
the segment automatically uses the Gateway DHCP server after it is connected to the gateway.
The DHCP profile attached to this gateway gets autoselected in the segment.

Now, you can specify the DHCP configuration settings, including ranges, static bindings, and
DHCP options on the segment. You can also edit the other segment properties, if necessary.
However, you cannot change the DHCP type from a Gateway DHCP server to a local DHCP
server or a DHCP relay.

Remember, you can configure only a Gateway DHCPv4 server on the segment. In NSX-T Data
Center 3.0, Gateway DHCPv6 server is not supported.


Scenario 5: Move a Segment with Tier-0 or Tier-1 Connectivity to None Connectivity

Consider that you have added a segment to a tier-0 or tier-1 gateway in your network. You have
configured Gateway DHCP server or DHCP relay on this segment, saved the segment, and
connected workloads to this segment. DHCP service is now used by the workloads on this
segment.

Later, you decide to change the connectivity of this segment to None. This change is not
allowed.

In this scenario, the following workaround can help:

1 Temporarily disconnect the existing segment from the gateway or delete the segment.

For information about temporarily disconnecting a segment from the gateway, see Scenario
1.

2 Add a new segment with None connectivity.

3 Configure a local DHCP server on this standalone segment, if needed.

Add an IP Address Pool


You can configure IP address pools for use by components such as DHCP.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > IP Management > IP Address Pools.

3 Click Add IP Address Pool.

4 Enter a name and optionally a description.

5 Click Set in the Subnets column to add subnets.

6 To specify an address block, select Add Subnet > IP Block.

a Select an IP block.

b Specify a size.

c Click the Auto Assign Gateway toggle to enable or disable automatic gateway IP
assignment.

d Click Add.

7 To specify IP ranges, select Add Subnet > IP Ranges.

a Enter IPv4 or IPv6 IP ranges.

b Enter the subnet in CIDR format.


c Enter an address for Gateway IP.

d Click Add.

8 Click Save.
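
The equivalent Policy API calls are sketched below with placeholder IDs and addresses. The pool
is created first and the subnet is then added as a child object; the field names reflect the NSX-T
3.0 IpAddressPoolStaticSubnet object, so verify them against the NSX-T API Guide for your
version.

# Create the pool; all values are examples
curl -k -u 'admin:<password>' -X PATCH \
  https://{NSXManager_IP}/policy/api/v1/infra/ip-pools/pool-1 \
  -H 'Content-Type: application/json' \
  -d '{ "display_name": "pool-1" }'

# Add an IP-range subnet to the pool
curl -k -u 'admin:<password>' -X PATCH \
  https://{NSXManager_IP}/policy/api/v1/infra/ip-pools/pool-1/ip-subnets/subnet-1 \
  -H 'Content-Type: application/json' \
  -d '{
        "resource_type": "IpAddressPoolStaticSubnet",
        "cidr": "192.168.60.0/24",
        "allocation_ranges": [ { "start": "192.168.60.10", "end": "192.168.60.100" } ],
        "gateway_ip": "192.168.60.1"
      }'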

Add an IP Address Block


You can configure IP address blocks for use by other components.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > IP Management > IP Address Pools.

3 Click the IP Address Blocks tab.

4 Click Add IP Address Block.

5 Enter a name and optionally a description.

6 Enter an IP block in CIDR format.

7 Click Save.
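
An IP block can also be created with the Policy API, as sketched below with a placeholder ID and
CIDR; verify the field names against the NSX-T API Guide for your version.

# IP address block; all values are examples
curl -k -u 'admin:<password>' -X PATCH \
  https://{NSXManager_IP}/policy/api/v1/infra/ip-blocks/block-1 \
  -H 'Content-Type: application/json' \
  -d '{ "display_name": "block-1", "cidr": "10.20.0.0/16" }'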



12 Networking Settings

You can configure networking settings for IPv6, VNI (Virtual Network Identifier) pools, gateways,
multicast, and BFD (Bidirectional Forwarding Detection).

This chapter includes the following topics:

n Configuring Multicast

n Add a VNI Pool

n Configure Gateway Settings

n Add a Gateway QoS Profile

n Add a BFD Profile

Configuring Multicast
You can configure multicast on a tier-0 gateway for an IPv4 network to send the same multicast
data to a group of recipients. In a multicast environment, any host, regardless of whether it is a
member of a group, can send to a group. However, only the members of a group will receive
packets sent to that group.

The multicast feature has the following capabilities and limitations:

n PIM Sparse Mode with IGMPv2.

n No Rendezvous Point (RP) or Bootstrap Router (BSR) functionality on NSX-T. However, RP
information can be learned via PIM Bootstrap Messages (BSMs). In addition, a Static RP can
be configured.

When a Static RP is configured, it serves as the RP for all multicast groups (224/4). If
candidate RPs learned from BSMs advertise candidacy for the same group range, the Static
RP is preferred. However, if candidate RPs advertise candidacy for a specific group or range
of groups, they are preferred as the RP for those groups.

n The Reverse Path Forwarding (RPF) check for all multicast-specific IPs (senders of data traffic,
BSRs, RPs) requires that a route to each of them exists. In NSX-T Data Center 3.0.0,
reachability via the default route is not supported. Starting with NSX-T Data Center 3.0.1,
reachability via the default route is also supported.


n The RPF check requires a route to each multicast-specific IP with an IP address as the next
hop. Reachability via device routes, where the next hop is an interface index, is not
supported.

n Tier-0 gateway only.

n Supported on only one uplink on a tier-0 gateway.

n Active-Cold Standby only is supported.

n The NSX Edge cluster can be in active-active or active-standby mode. When the cluster is in
active-active mode, two of the cluster members will run multicast in active-cold standby
mode. You can run the CLI command get mcast high-availability role on each Edge to
identify the two nodes participating in multicast. Also note that since unicast reachability to
NSX-T in an active-active cluster is via ECMP, it is imperative that the northbound PIM router
selects the ECMP path that matches a PIM neighbor to send PIM Join/Prune messages to
NSX-T. In this way it will select the active Edge which is running PIM.

n Scale: up to 2000 multicast groups.

n East-west multicast replication: up to 4 VTEP segments for maximum replication efficiency.

n ESXi host and NSX Edge only (KVM not supported).

n Layer 2 bridge attached to a downlink segment not supported.

n Edge Firewall services are not supported for multicast.

n Multi-site (federation) not supported.

n Multi-VRF not supported.

Multicast Configuration Prerequisites


Underlay network configurations:

n Acquire a multicast address range from your network administrator. This will be used to
configure the Multicast Replication Range when you configure multicast on a tier-0 gateway
(see Configure Multicast).

n Enable IGMP snooping on the layer 2 switches to which GENEVE participating hosts are
attached. If IGMP snooping is enabled on layer 2, IGMP querier must be enabled on the router
or layer 3 switch with connectivity to multicast enabled networks.

Multicast Configuration Steps


1 Create an IGMP profile. See Create an IGMP Profile.

2 Optionally create a PIM profile to configure a Static Rendezvous Point (RP). See Create a PIM
Profile.

3 Configure a tier-0 gateway to support multicast. See Add a Tier-0 Gateway and Configure
Multicast.


Create an IGMP Profile


Internet Group Management Protocol (IGMP) is a multicast protocol used in IPv4 networks.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Networking Settings.

3 Click the Multicast Profiles tab.

4 Click Add IGMP Profile.

5 Enter a profile name and the following profile details.

n Query Interval (seconds): Interval between general query messages. A larger value causes
IGMP queries to be sent less often. Default: 30. Range: 1 - 1800.

n Query Max Response Time (seconds): Maximum allowed time before sending a response to a
membership query message. Default: 10. Range: 1 - 25.

n Last Member Query Interval (seconds): Maximum amount of time between group-specific
query messages, including those sent in response to leave-group messages. Default: 10.
Range: 1 - 25.

n Robustness Variable: Number of IGMP query messages sent. This helps alleviate the risk of
loss of packets in a busy network. A larger number is recommended in a network with high
traffic. Default: 2. Range: 1 - 255.

Results

The Last Member Query Interval and Robustness Variable parameters affect the time it takes
to process leave group messages. If Last Member Query Interval is set to 10 and Robustness
Variable is set to 2, the approximate times it takes to process leave group messages are as
follows:

n 1 leave group message - 20 seconds

n 200 leave group messages - 23 seconds

n 1000 leave group messages - 31 seconds

n 2000 leave group messages - 55 seconds

Create a PIM Profile


Protocol Independent Multicast (PIM) is a collection of multicast routing protocols for IP networks.
It is not dependent on a specific unicast routing protocol and can leverage whichever unicast
routing protocols are used to populate the unicast routing table.


This step is optional. It is needed only if you want to configure a Static Rendezvous Point (RP). A
Rendezvous Point is a router in a multicast network domain that acts as a shared root for a
multicast shared tree. If a Static RP is configured, it is preferred over the RPs that are learned
from the elected Bootstrap Router (BSR).

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Networking Settings.

3 Click the Multicast Profiles tab.

4 In the Select Profile type drop-down menu, select PIM Profiles.

5 Click Add PIM Profile.

6 Enter a profile name.

7 Enter a Static Rendezvous Point (RP) address.

Add a VNI Pool


You can create a VNI pool to be used when you configure EVPN for a tier-0 gateway. VNI pools
cannot have values that overlap.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Networking Settings.

3 Click the VNI Pool tab.

4 Click Add VNI Pool.

5 Enter a name for the pool.

6 Enter a start value.

The value must be from 75001 to 16777215.

7 Enter an end value.

The value must be from 75001 to 16777215.

8 Click Save.
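
A VNI pool can also be created through the Policy API. The following sketch uses a placeholder ID
and range; the endpoint and the start and end field names are assumptions based on the NSX-T
3.0 Policy API, so verify them against the NSX-T API Guide for your version before use.

# VNI pool for EVPN; all values are examples and must fall within 75001-16777215
curl -k -u 'admin:<password>' -X PATCH \
  https://{NSXManager_IP}/policy/api/v1/infra/vni-pools/evpn-vni-pool \
  -H 'Content-Type: application/json' \
  -d '{ "display_name": "evpn-vni-pool", "start": 80000, "end": 81000 }'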

Configure Gateway Settings


Set a global configuration for the layer 3 forwarding mode and the Maximum Transmission Unit
(MTU). IPv4 layer 3 forwarding is enabled by default. You can also configure IPv6 layer 3
forwarding.


Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Networking Settings.

3 Click the Global Networking Config tab.

4 Click Edit next to Global Gateway Configuration.

a Enter a value for the Gateway Interface MTU.

The default value is 1500.

b Select the layer 3 forwarding mode.

5 Click Save.
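
The forwarding mode can also be set through the Policy API global configuration object, as
sketched below. The l3_forwarding_mode values shown (IPV4_ONLY and IPV4_AND_IPV6) reflect
the NSX-T 3.0 Policy API; the gateway interface MTU is configured on the same object, but check
the exact field name in the NSX-T API Guide for your version.

# Enable IPv4 and IPv6 layer 3 forwarding globally (PATCH merges with existing settings)
curl -k -u 'admin:<password>' -X PATCH \
  https://{NSXManager_IP}/policy/api/v1/infra/global-config \
  -H 'Content-Type: application/json' \
  -d '{ "l3_forwarding_mode": "IPV4_AND_IPV6" }'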

Add a Gateway QoS Profile


Create a QoS profile for your tier-1 gateways to define limits on the traffic rates. You can specify
the permitted information rate and the burst size to set the limitations. Any traffic that does not
conform to the QoS policy, is dropped. QoS profiles can be set for both ingress and egress
traffic, for all traffic types (unicast, BUM, IPv4/IPv6). You can choose to create a different profile
for each tier-1 gateway.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Networking Settings.

3 Click the Gateway QoS Profiles tab.

4 Click Add Gateway QoS Profiles.

5 Enter a name for the profile.

6 Enter the committed bandwidth limit that you want to set for the traffic.

7 Enter the burst size. Use the following guidelines for burst size.

n B is the burst size in bytes.

n R is the committed rate (or bandwidth) in Mbps.

n I is the time interval in milliseconds to refill or withdraw tokens (in bytes) from the token
bucket. Use the get dataplane command from the NSX Edge CLI to retrieve the time
interval, Qos_wakeup_interval_ms. The default value for Qos_wakeup_interval_ms is
50ms. However, this value is automatically adjusted by the dataplane based on the QoS
configuration.


The constraints for burst size are:

n B >= R * 1,000,000 * I / 1000 / 8, because the burst size is the maximum amount of tokens
that can be refilled in each interval.

n B >= R * 1,000,000 * 1 / 1000 / 8, because the minimum value for I is 1 ms, taking into
account dataplane CPU usage among other constraints.

n B >= MTU of the SR port, because at least an MTU-size amount of tokens must be present
in the token bucket for an MTU-size packet to pass the rate-limiting check.

Because the burst size must satisfy all three constraints, the configured value of the burst size
is:

max (R * 1,000,000 * I / 1000 / 8, R * 1,000,000 * 1 / 1000 / 8, MTU)

For example, if R = 100 Mbps, I = 50 ms, and MTU = 1500, then

B >= max (100 * 1,000,000 * 50 / 1000 / 8, 100 * 1,000,000 * 1 / 1000 / 8, 1500) = 625,000 bytes

8 Click Save.
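
The burst-size calculation from the example in step 7 can be reproduced with a small shell
snippet; R, I, and MTU below are the same example inputs.

# Minimum burst size in bytes from R (Mbps), I (ms), and MTU (bytes)
R=100; I=50; MTU=1500
B1=$(( R * 1000000 * I / 1000 / 8 ))   # constraint 1
B2=$(( R * 1000000 * 1 / 1000 / 8 ))   # constraint 2, with I = 1 ms
B=$B1
[ "$B2" -gt "$B" ] && B=$B2
[ "$MTU" -gt "$B" ] && B=$MTU
echo "Minimum burst size: $B bytes"    # prints 625000 for these inputs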

Add a BFD Profile


BFD (Bidirectional Forwarding Detection) is a protocol that can detect forwarding path failures.
You can create a BFD profile for your Tier-0 static routes.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Networking > Networking Settings.

3 Click the BFD Profile tab.

4 Click Add BFD Profile.

5 Enter a name for the profile.

6 Enter values for the heartbeat Interval and Declare Dead Multiple.

7 Click Save.



13 Security

The topics in this section cover north-south and east-west security for distributed firewall rules,
identity firewall, network introspection, gateway firewall, and endpoint protection policies.

This chapter includes the following topics:

n Security Configuration Overview

n Security Overview

n Security Terminology

n Identity Firewall

n Layer 7 Context Profile

n Distributed Firewall

n Distributed IDS

n East-West Network Security - Chaining Third-party Services

n Gateway Firewall

n North-South Network Security - Inserting Third-party Service

n Endpoint Protection

n Security Profiles

n Time-Based Firewall Policy

n Network Introspection Settings

n Bare Metal Server Security

Security Configuration Overview


Configure east-west and north-south firewall policies under predefined categories for your
environment.

Distributed Firewall (east-west) and Gateway Firewall (north-south) offer multiple sets of
configurable rules divided by categories. You can configure an exclusion list that contains logical
switches, logical ports, or groups, to be excluded from firewall enforcement.


Security policies are enforced as follows:

n Rules are processed in categories, left to right.

n Rules are processed in top-to-bottom ordering.

n Each packet is checked against the top rule in the rule table before moving down the
subsequent rules in the table.

n The first rule in the table that matches the traffic parameters is enforced.

No subsequent rules can be enforced as the search is then terminated for that packet. Because
of this behavior, it is always recommended to put the most granular policies at the top of the rule
table. This ensures that they are enforced before more general rules.

Whether an east-west or north-south firewall fails closed or fails open upon failure depends on
the last rule in the firewall. To ensure that a firewall fails closed upon failure, configure the last rule to
reject or drop all packets.

Security Overview
The security overview dashboard has three tabs: Insights, Configuration, and Capacity.

The Insights tab shows details for:

n URL Analysis:

n The number of gateways with URL Filtering Enabled.

n The number of gateways connected to the cloud service, and if connectivity to the cloud
service is up.

n The latest signature pack available on the cloud service, and what gateways are up to
date.

n Information for the top five URL categories, and the URLs accessed in each category.

n Intrusion Detection Summary:

n Total Intrusion Events: Displays the total number of intrusion events as a clickable link, and
the number in each severity category. For more information, see Distributed IDS Events.

n Trending by Severity: Displays a graph with the number of intrusion events by time.

n Top VMs by Intrusion Events or by Vulnerability Severity: Click the arrow to select the
shown data.

n Distributed Firewall Rule Utilization:

n Number of identity firewall rules.

n Number of Layer 7 rules.

n Number of compute rules.


n Number of rules with a combination of Layer 7 and IDFW.

n A summary of the configuration of endpoint protection for virtual machines. You can view
components having issues, and virtual machine distribution by service profile.

The Configuration tab has clickable links with the number of:

n Distributed FW Policies

n Gateway Policies

n Endpoint Policies

n Network Introspection EW Policies

n Network Introspection NS Policies

n Distributed IDS Policies

You can also view details of your distributed firewall policies, along with the count per category.

The Capacity tab is not available in Policy view.

Security Terminology
The following terms are used throughout distributed firewall.

Table 13-1. Security-Related Terminology

n Policy: A security policy includes various security elements, including firewall rules and service
configurations. Policy was previously called a firewall section.

n Rule: A set of parameters against which flows are evaluated, and which defines the actions
taken upon a match. Rules include parameters such as source and destination, service,
context profile, logging, and tags.

n Group: Groups include different objects that are added both statically and dynamically, and
can be used as the source and destination field of a firewall rule. Groups can be configured to
contain a combination of virtual machines, IP sets, MAC sets, logical ports, logical switches,
AD user groups, and other nested groups. Dynamic inclusion of groups can be based on tag,
machine name, OS name, or computer name. When you create a group, you must include a
domain that it belongs to; by default, this is the default domain. Groups were previously
called NSGroup or security group.

n Service: Defines a combination of port and protocol. Used to classify traffic based on port
and protocol. Predefined services and user-defined services can be used in firewall rules.

n Context Profile: Defines context-aware attributes, including APP-ID and domain name. Also
includes sub-attributes such as application version or cipher set. Firewall rules can include a
context profile to enable Layer 7 firewall rules.


Identity Firewall
With Identity Firewall (IDFW) features, an NSX administrator can create Active Directory user-
based Distributed Firewall (DFW) rules.

IDFW can be used for Virtual Desktops (VDI) or Remote desktop sessions (RDSH support),
enabling simultaneous log ins by multiple users, user application access based on requirements,
and the ability to maintain independent user environments. VDI management systems control
what users are granted access to the VDI virtual machines. NSX-T controls access to the
destination servers from the source virtual machine (VM), which has IDFW enabled. With RDSH,
administrators create security groups with different users in Active Directory (AD), and allow or
deny those users access to an application server based on their role. For example, Human
Resources and Engineering can connect to the same RDSH server, and have access to different
applications from that server.

IDFW can also be used on VMs that have supported operating systems. See Identity Firewall
Supported Configurations.

A high level overview of the IDFW configuration workflow begins with preparing the
infrastructure. Preparation includes the administrator installing the host preparation components
on each protected cluster, and setting up Active Directory synchronization so that NSX can
consume AD users and groups. Next, IDFW must know which desktop an Active Directory user
logs on to in order to apply IDFW rules. When network events are generated by a user, the thin agent
installed with VMware Tools on the VM gathers and forwards the information, and sends it to the
context engine. This information is used to provide enforcement for the distributed firewall.

IDFW processes the user identity at the source only in distributed firewall rules. Identity-based
groups cannot be used as the destination in DFW rules.

Note IDFW relies on the security and integrity of the guest operating system. There are multiple
methods for a malicious local administrator to spoof their identity to bypass firewall rules. User
identity information is provided by the NSX Guest Introspection Thin Agent inside guest VMs.
Security administrators must ensure that the thin agent is installed and running in each guest VM.
Logged-in users should not have the privilege to remove or stop the agent.

For supported IDFW configurations see Identity Firewall Supported Configurations.

IDFW workflow:

1 A user logs in to a VM and starts a network connection, by opening Skype or Outlook.

2 A user login event is detected by the Thin Agent, which gathers connection information and
identity information and sends it to the context engine.

3 The context engine forwards the connection and the identity information to Distributed
Firewall Wall for any applicable rule enforcement.


Identity Firewall Workflow


IDFW enhances traditional firewall by allowing firewall rules based on user identity. For example,
administrators can allow or disallow customer support staff to access an HR database with a
single firewall policy.

Identity-based firewall rules are determined by Active Directory (AD) group membership. See
Identity Firewall Supported Configurations.

IDFW processes the user identity at the source only in distributed firewall rules. Identity-based
groups cannot be used as the destination in DFW rules.

Note For Identity Firewall rule enforcement, Windows Time service should be on for all VMs
using Active Directory. This ensures that the date and time are synchronized between Active
Directory and VMs. AD group membership changes, including enabling and deleting users, do not
immediately take effect for logged in users. For changes to take effect, users must log out and
then log back in. AD administrators should force a logout when group membership is modified.
This behavior is a limitation of Active Directory.

Prerequisites

If Windows auto-logon is enabled on VMs, go to Local Computer Policy > Computer
Configuration > Administrative Templates > System > Logon and enable Always wait for the
network at computer startup and logon.

For supported IDFW configurations, see Identity Firewall Supported Configurations.

Procedure

1 Enable NSX File Introspection driver and NSX Network Introspection driver. VMware Tools full
installation adds these by default.

2 Enable IDFW on cluster or standalone host: Enable Identity Firewall.

3 Configure Active Directory domain: Add an Active Directory.

4 Configure Active Directory sync operations: Synchronize Active Directory.

5 Create security groups (SG) with Active Directory group members: Add a Group.

6 Assign the SG with AD group members to a distributed firewall rule: Add a Distributed Firewall.

Enable Identity Firewall


Identity Firewall must be enabled for IDFW firewall rules to take effect.

Procedure

1 Select Security > Distributed Firewall.

2 In the left corner, click Actions > General Setting.

3 Toggle the status button to enable IDFW.

Distributed firewall must also be enabled for IDFW to work.


4 To enable IDFW on standalone hosts or clusters, select the Identity Firewall Settings tab.

5 Toggle the Enable bar, and select the standalone hosts, or select the cluster where the IDFW
host must be enabled.

6 Click Save.

Identity Firewall Best Practices


The following best practices will help maximize the success of identity firewall rules.

n IDFW supports the following protocols:

n Single user (VDI, or Non-RDSH Server) use case support - TCP, UDP, ICMP

n Multi-User (RDSH) use case support - TCP, UDP

n A single ID-based group can be used as the source only within a distributed firewall rule. If IP
and ID-based groups are needed at the source, create two separate firewall rules.

n Any change on a domain, including a domain name change, will trigger a full sync with Active
Directory. Because a full sync can take a long time, we recommend syncing during off-peak
or non-business hours.

n For local domain controllers, the default LDAP port 389 and LDAPS port 636 are used for the
Active Directory sync, and should not be edited from the default values.

Identity Firewall Supported Configurations


The following configurations are supported for IDFW on virtual machines (VMs). IDFW for
physical devices is not supported.

IDFW supports the following protocols:

n Single user (VDI, or Non-RDSH Server) use case support - TCP, UDP, ICMP

n Multi-User (RDSH) use case support - TCP, UDP

Guest operating systems and enforcement types:

n Windows 8: Desktop - supports desktop users use case

n Windows 10: Desktop - supports desktop users use case

n Windows 2012: Server - supports server users use case

n Windows 2012R2: Server - supports server users use case

n Windows 2016: Server - supports server users use case

n Windows 2012R2: RDSH - supports Remote Desktop Session Host

n Windows 2016: RDSH - supports Remote Desktop Session Host


Active Directory Domain Controllers:

n Windows Server 2012

n Windows Server 2012R2

n Windows Server 2016

n Windows Server 2019

Host operating system: ESXi

VMware Tools - Version 11

n VMCI Driver

n NSX File Introspection Driver

n NSX Network Introspection Driver

Layer 7 Context Profile


Layer 7 App Ids are configured as part of a context profile.

A context profile can specify one or more Attributes, and can also include sub-attributes, for use
in distributed firewall (DFW) rules and gateway firewall rules. When a sub-attribute, such as TLS
version 1.2, is defined, multiple application identity attributes are not supported. In addition to
attributes, DFW also supports a Fully Qualified Domain Name (FQDN) or URL that can be
specified in a context profile for FQDN whitelisting or blacklisting. Currently a predefined list of
domains is supported. FQDN can be configured with an attribute in a context profile, or each can
be set in different context profiles. After a context profile has been defined, it can be applied to
one or more distributed firewall rules.

Currently, a predefined list of domains is supported. You can see the list of FQDNs when you add
a new context profile of attribute type Domain (FQDN) Name. You can also see a list of FQDNs
by running the API call /policy/api/v1/infra/context-profiles/attributes?
attribute_key=DOMAIN_NAME.
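
For example, the documented call can be issued with curl to list the predefined FQDNs, and a
custom context profile can then be created with a domain name attribute. The profile ID and the
domain value below are placeholders, and the attribute structure (key, datatype, value) reflects
the NSX-T 3.0 PolicyContextProfile object, so verify it against the NSX-T API Guide for your
version.

# List the predefined FQDNs
curl -k -u 'admin:<password>' \
  "https://{NSXManager_IP}/policy/api/v1/infra/context-profiles/attributes?attribute_key=DOMAIN_NAME"

# Create a context profile that matches a domain from the predefined list (example value)
curl -k -u 'admin:<password>' -X PATCH \
  https://{NSXManager_IP}/policy/api/v1/infra/context-profiles/cp-example-fqdn \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "cp-example-fqdn",
        "attributes": [
          { "key": "DOMAIN_NAME", "datatype": "STRING", "value": ["*.office365.com"] }
        ]
      }'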

Gateway firewall rules do not support the use of FQDN attributes or other sub attributes.

When a context profile has been used in a rule, any traffic coming in from a virtual machine is
matched against the rule table based on 5-tuple. If the rule that matches the flow also includes a
Layer 7 context profile, the packet is redirected to a user-space component called the vDPI
engine. A few subsequent packets are punted to the vDPI engine for each flow, and after it has
determined the App Id, this information is stored in the in-kernel context table. When the next
packet for the flow comes in, the information in the context table is compared with the rule table
again and is matched on 5-tuple and on the Layer 7 App Id. The appropriate action as defined in
the fully matched rule is taken. If there is an ALLOW rule, all subsequent packets for the flow are
processed in the kernel and matched against the connection table. For a fully matched DROP
rule, a reject packet is generated. Logs generated by the firewall include the Layer 7 App Id and
the applicable URL, if that flow was punted to DPI.


Rule processing for an incoming packet:

1 Upon entering a DFW or Gateway filter, packets are looked up in the flow table based on 5-
tuple.

2 If no flow/state is found, the flow is matched against the rule-table based on 5-tuple and an
entry is created in the flow table.

3 If the flow matches a rule with a Layer 7 service object, the flow table state is marked as “DPI
In Progress.”

4 The traffic is then punted to the DPI engine. The DPI Engine determines the App Id.

5 After the App Id has been determined, the DPI Engine sends down the attribute which is
inserted into the context table for this flow. The "DPI In Progress" flag is removed, and traffic
is no longer punted to the DPI engine.

6 The flow (now with App Id) is reevaluated against all rules that match the App Id, starting with
the original rule that was matched based on 5-tuple, and the first fully matched L4/L7 rule is
picked up. The appropriate action is taken (allow/deny/reject) and the flow table entry is
updated accordingly.

Layer 7 Firewall Rule Workflow


Layer 7 App Ids are used in creating context profiles, which are used in distributed firewall rules
or gateway firewall rules. Rule enforcement based on attributes enables users to allow or deny
applications to run on any port.

NSX-T provides built-in Attributes for common infrastructure and enterprise applications. App Ids
include versions (SSL/TLS and CIFS/SMB) and Cipher Suite (SSL/TLS). For distributed firewall,
App Ids are used in rules through context profiles, and can be combined with FQDN whitelisting
and blacklisting. App Ids are supported on ESXi and KVM hosts.

Gateway firewall rules do not support the use of FQDN attributes or other sub attributes.

Supported App Ids and FQDNs:

n For FQDN, users need to configure a high priority rule with a DNS App Id for the specified
DNS servers on port 53.

n ALG App Ids (FTP, ORACLE, DCERPC, TFTP), require the corresponding ALG service for the
firewall rule.

n SYSLOG App Id is detected only on standard ports.

KVM Supported App Ids and FQDNs:

n Sub attributes are not supported on KVM.

n FTP and TFTP ALG App Ids are supported on KVM.

Note that if you are using a combination of Layer 7 and ICMP, or any other protocols, you need to
put the Layer 7 firewall rules last. Any rules above a Layer 7 any/any rule will not be executed.

Procedure

1 Create a custom context profile: Add a Context Profile.

2 Use the context profile in a distributed firewall rule, or a gateway firewall rule: Add a
Distributed Firewall or Add a Gateway Firewall Policy and Rule.

Multiple App Id context profiles can be used in a firewall rule with services set to Any. For
ALG profiles (FTP, ORACLE, DCERPC, TFTP), one context profile is supported per rule.
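For reference, the same workflow can be driven through the Policy API instead of the UI. The following
is a hedged sketch: the IDs l7-ssl-profile, app-policy, and allow-ssl are placeholder names, and the
request bodies should be validated against the NSX-T Data Center API guide for your version before use.

PATCH https://<nsx-manager>/policy/api/v1/infra/context-profiles/l7-ssl-profile
{
    "attributes": [
        {
            "datatype": "STRING",
            "key": "APP_ID",
            "value": ["SSL"]
        }
    ]
}

PATCH https://<nsx-manager>/policy/api/v1/infra/domains/default/security-policies/app-policy/rules/allow-ssl
{
    "source_groups": ["ANY"],
    "destination_groups": ["ANY"],
    "services": ["ANY"],
    "profiles": ["/infra/context-profiles/l7-ssl-profile"],
    "scope": ["ANY"],
    "action": "ALLOW"
}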

Attributes
Layer 7 attributes (App Ids) identify which application a particular packet or flow is generated by,
independent of the port that is being used.

Enforcement based on App Ids enables users to allow or deny applications to run on any port, or
to force applications to run on their standard port. vDPI enables matching of packet payloads
against defined patterns, commonly referred to as signatures. Signature-based identification and
enforcement enables customers to match not just the particular application or protocol a flow
belongs to, but also the version of that protocol, for example, TLS version 1.0 versus TLS version
1.2, or different versions of CIFS traffic. This allows customers to get visibility into, or restrict the
use of, protocols that have known vulnerabilities for all deployed applications and their east-west
flows within the data center.

Layer 7 App Ids are used in context profiles in distributed firewall and gateway firewall rules, and
are supported on ESXi and KVM hosts.

Note NFS version 4 is not a supported attribute.

Gateway firewall rules do not support the use of FQDN attributes or other sub attributes.

Supported App Ids and FQDNs:

n For FQDN, users need to configure a high priority rule with a DNS App Id for the specified
DNS servers on port 53.

n ALG App Ids (FTP, ORACLE, DCERPC, TFTP), require the corresponding ALG service for the
firewall rule.

n SYSLOG App Id is detected only on standard ports.

KVM Supported App Ids and FQDNs:

n Sub attributes are not supported on KVM.

n FTP and TFTP ALG App Ids are supported on KVM.
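The complete list of built-in App Id attributes, including their sub-attributes, can also be retrieved
programmatically. By analogy with the DOMAIN_NAME query shown elsewhere in this guide, the following
call (a sketch, assuming the attributes endpoint accepts APP_ID as a key) returns the supported App Ids:

GET https://<nsx-manager>/policy/api/v1/infra/context-profiles/attributes?attribute_key=APP_ID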

Attribute (App Id) | Description | Type

360ANTIV | 360 Safeguard is a program developed by Qihoo 360, an IT company based in China | Web Services
ACTIVDIR | Microsoft Active Directory | Networking
AMQP | Advanced Messaging Queuing Protocol is an application layer protocol which supports business message communication between applications or organizations | Networking
AVAST | Traffic generated by browsing Avast.com, the official website of Avast! Antivirus downloads | Web Services
AVG | AVG Antivirus/Security software download and updates | File Transfer
AVIRA | Avira Antivirus/Security software download and updates | File Transfer
BLAST | A remote access protocol that compresses, encrypts, and encodes a computing experience at a data center and transmits it across any standard IP network for VMware Horizon desktops | Remote Access
BDEFNDER | BitDefender Antivirus/Security software download and updates | File Transfer
CA_CERT | A certification authority (CA) issues digital certificates which certify the ownership of a public key for message encryption | Networking
CIFS | CIFS (Common Internet File System) is used to provide shared access to directories, files, printers, serial ports, and miscellaneous communications between nodes on a network | File Transfer
CLDAP | Connectionless Lightweight Directory Access Protocol is an application protocol for accessing and maintaining distributed directory information services over an Internet Protocol (IP) network using UDP | Networking
CTRXCGP | Citrix Common Gateway Protocol is an application protocol for accessing and maintaining distributed directory information services over an Internet Protocol (IP) network using UDP | Database
CTRXGOTO | Hosting Citrix GoToMeeting, or similar sessions based on the GoToMeeting platform. Includes voice, video, and limited crowd management functions | Collaboration
CTRXICA | ICA (Independent Computing Architecture) is a proprietary protocol for an application server system, designed by Citrix Systems | Remote Access
DCERPC | Distributed Computing Environment / Remote Procedure Calls is the remote procedure call system developed for the Distributed Computing Environment (DCE) | Networking
DIAMETER | An authentication, authorization, and accounting protocol for computer networks | Networking
DHCP | Dynamic Host Configuration Protocol is a protocol used for the distribution of IP addresses within a network | Networking
DNS | Querying a DNS server over TCP or UDP | Networking
EPIC | Epic EMR is an electronic medical records application that provides patient care and healthcare information | Client Server
ESET | Eset Antivirus/Security software download and updates | File Transfer
FPROT | F-Prot Antivirus/Security software download and updates | File Transfer
FTP | FTP (File Transfer Protocol) is used to transfer files from a file server to a local machine | File Transfer
GITHUB | Web-based Git or version control repository and Internet hosting service | Collaboration
HTTP | HTTP (HyperText Transfer Protocol) is the principal transport protocol for the World Wide Web | Web Services
HTTP2 | Traffic generated by browsing websites that support the HTTP 2.0 protocol | Web Services
IMAP | IMAP (Internet Message Access Protocol) is an Internet standard protocol for accessing email on a remote server | Mail
KASPRSKY | Kaspersky Antivirus/Security software download and updates | File Transfer
KERBEROS | Kerberos is a network authentication protocol designed to provide strong authentication for client/server applications by using secret-key cryptography | Networking
LDAP | LDAP (Lightweight Directory Access Protocol) is a protocol for reading and editing directories over an IP network | Database
MAXDB | SQL connections and queries made to a MaxDB SQL server | Database
MCAFEE | McAfee Antivirus/Security software download and updates | File Transfer
MSSQL | Microsoft SQL Server is a relational database | Database
NFS | Allows a user on a client computer to access files over a network in a manner similar to how local storage is accessed. Note: NFS version 4 is not a supported attribute. | File Transfer
NNTP | An Internet application protocol used for transporting Usenet news articles (netnews) between news servers, and for reading and posting articles by end user client applications | File Transfer
NTBIOSNS | NetBIOS Name Service. In order to start sessions or distribute datagrams, an application must register its NetBIOS name using the name service | Networking
NTP | NTP (Network Time Protocol) is used for synchronizing the clocks of computer systems over the network | Networking
OCSP | An OCSP Responder verifying that a user's private key has not been compromised or revoked | Networking
ORACLE | An object-relational database management system (ORDBMS) produced and marketed by Oracle Corporation | Database
PANDA | Panda Security Antivirus/Security software download and updates | File Transfer
PCOIP | A remote access protocol that compresses, encrypts, and encodes a computing experience at a data center and transmits it across any standard IP network | Remote Access
POP2 | POP (Post Office Protocol) is a protocol used by local e-mail clients to retrieve e-mail from a remote server | Mail
POP3 | POP3 (Post Office Protocol version 3) is a protocol used by local e-mail clients to retrieve e-mail from a remote server | Mail
RADIUS | Provides centralized Authentication, Authorization, and Accounting (AAA) management for computers to connect and use a network service | Networking
RDP | RDP (Remote Desktop Protocol) provides users with a graphical interface to another computer | Remote Access
RTCP | RTCP (Real-Time Transport Control Protocol) is a sister protocol of the Real-time Transport Protocol (RTP). RTCP provides out-of-band control information for an RTP flow | Streaming Media
RTP | RTP (Real-Time Transport Protocol) is primarily used to deliver real-time audio and video | Streaming Media
RTSP | RTSP (Real Time Streaming Protocol) is used for establishing and controlling media sessions between end points | Streaming Media
SIP | SIP (Session Initiation Protocol) is a common control protocol for setting up and controlling voice and video calls | Streaming Media
SMTP | SMTP (Simple Mail Transfer Protocol) is an Internet standard for electronic mail (e-mail) transmission across Internet Protocol (IP) networks | Mail
SNMP | SNMP (Simple Network Management Protocol) is an Internet-standard protocol for managing devices on IP networks | Network Monitoring
SSH | SSH (Secure Shell) is a network protocol that allows data to be exchanged using a secure channel between two networked devices | Remote Access
SSL | SSL (Secure Sockets Layer) is a cryptographic protocol that provides security over the Internet | Web Services
SYMUPDAT | Symantec LiveUpdate traffic; this includes spyware definitions, firewall rules, antivirus signature files, and software updates | File Transfer
SYSLOG | SYSLOG is a protocol that allows network devices to send event messages to a logging server | Network Monitoring
TELNET | A network protocol used on the Internet or local area networks to provide a bidirectional interactive text-oriented communications facility using a virtual terminal connection | Remote Access
TFTP | TFTP (Trivial File Transfer Protocol) being used to list, download, and upload files to a TFTP server like SolarWinds TFTP Server, using a client like WinAgents TFTP client | File Transfer
VNC | Traffic for Virtual Network Computing | Remote Access
WINS | Microsoft's implementation of NetBIOS Name Service (NBNS), a name server and service for NetBIOS computer names | Networking

Distributed Firewall
Distributed firewall comes with predefined categories for firewall rules. Rules are evaluated top
down, and left to right.

Table 13-2. Distributed Firewall Rule Categories

Category | Description
Ethernet | Used for Layer 2 based rules
Emergency | Used for quarantine and allow rules
Infrastructure | Defines access to shared services. Global rules - AD, DNS, NTP, DHCP, Backup, Management Servers
Environment | Rules between zones - production vs development, inter business unit rules
Application | Rules between applications, application tiers, or the rules between microservices
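
When a policy is created through the Policy API rather than the UI, the category is set explicitly on
the security policy. The following is a minimal sketch; the policy ID app-tier-policy is a placeholder:

PATCH https://<nsx-manager>/policy/api/v1/infra/domains/default/security-policies/app-tier-policy
{
    "display_name": "app-tier-policy",
    "category": "Application",
    "stateful": true
}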

Firewall Drafts
A draft is a complete distributed firewall configuration with policy sections and rules. Drafts can
be auto saved or manually saved, and immediately published or saved for publishing at a later time.

To save a manual draft firewall configuration, go to the upper right of the distributed firewall
screen and click Actions > Save. After saving, the configuration can be viewed by selecting
Actions > View. Auto drafts are enabled by default. Auto drafts can be disabled by going to
Actions > General Settings. When auto drafts are enabled, any change to the firewall
configuration results in a system-generated auto draft. A maximum of 100 auto drafts and 10
manual drafts can be saved. Auto drafts can be edited and saved as a manual draft, for
publishing now or later. To prevent multiple users from opening and editing the draft, manual
drafts can be locked. When a draft is published, the current configuration is replaced by the
configuration in the draft.
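
Drafts are primarily managed from the UI as described in the following sections. If you want to script
around them, saved drafts are also expected to be exposed through the Policy API; the path below is an
assumption based on the drafts feature in this release and should be verified against the NSX-T Data
Center API guide before use:

GET https://<nsx-manager>/policy/api/v1/infra/drafts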

Save or View a Firewall Draft


A draft is a distributed firewall configuration that has been published, or saved for publishing at a
later date. Drafts are created automatically, and manually.

Manual drafts can be edited and saved. Auto drafts can be cloned, and saved as manual drafts,
and then edited. The maximum number of drafts that can be saved is 100 autodrafts and 10
manual drafts.

Procedure

1 Click Security > Distributed Firewall.

2 To save a firewall configuration manually, go to Actions > Save.

A manual draft can be saved, or edited and then saved. After saving, you can revert to the
original configuration.

3 Name the configuration.

4 To prevent multiple users from opening and editing a manual draft, Lock the configuration,
and add a comment.

5 Click Save.

6 To view the saved configuration, click Actions > View.

A timeline opens up showing all saved configurations. To see details such as draft name,
date, time and who saved it, point to the dot or star icon of any draft. Saved configurations
can be filtered by time, showing all drafts in the last one day, one week, 30 days, or the last
three months. They can be filtered by autodraft and saved by me. They can also be filtered
by name, by using the search tool on the top right.

7 Hover over a draft to view name, date and time details of the saved configuration. Click the
name to view draft details.

The detailed draft view shows the required changes to be made to the current firewall
configuration, in order to be in sync with this draft. If this draft is published, all of the changes
visible in this view will be applied to the current configuration.

Clicking the downward arrow expands each section, and displays the added, modified, and
deleted changes in each section. The comparison shows added rules with a green bar on the
left side of the box, modified elements (such as a name change) have a yellow bar, and
deleted elements have a red bar.

8 To edit the name or description of a selected draft, click the menu icon (three dots) from the
View Draft Details window, and select Edit.

Manual drafts can be locked. If locked, a comment for the draft must be provided.

Some roles, such as enterprise administrator have full access credentials, and cannot be
locked out. See Role-Based Access Control.

9 Auto drafts and manual drafts can also be cloned and saved by clicking Clone.

In the Saved Configurations window, you can accept the default name, or edit it. You can also
lock the configuration. If locked, a comment for the draft must be provided.

10 To save the cloned version of the draft configuration, click Save. The draft is now present in
the Saved Configurations section.

What to do next

After viewing a draft, you can load and publish it. It is then the active firewall configuration.

Publish or Revert a Firewall Draft


Both auto drafts and saved manual drafts can be loaded and published to become the active
configuration.

During publishing, a new auto draft is created. This auto draft can be published to revert to the
previous configuration.

Procedure

1 To view the saved configuration, click Actions > View.

A timeline opens up showing all saved configurations. To see details such as draft name,
date, time and who saved it, point to the dot icon of any draft. Saved configurations are
filtered by time, showing all drafts created in 1 day, 1 week, 30 days, or the last 3 months.

2 Click a draft name and the View Draft Details window appears.

3 Click Load. The new firewall configuration appears on the main window.

Note A draft cannot be loaded if firewall filters are being used, or if there are unsaved
changes in the current configuration.

4 To commit the draft configuration and make it active, click Publish. To return to the previous
published configuration, click Revert.

After publishing, the changes in the draft will be present in the active configuration.

5 To edit the contents of the selected draft before publishing, after clicking Load, edit the
configuration.

6 To save the edited version of the draft configuration, click Actions > Save.

Manual drafts can be saved as a new configuration, or an update to the existing


configuration. Auto drafts can only be saved as a new configuration.

7 Enter a Name and an optional Description. You can also Lock the draft. If locked, a comment
for the draft must be provided.

8 Click Save.

9 To commit the draft configuration and make it active, click Publish, or to return to the
previous published configuration, click Revert.

Add a Distributed Firewall


Distributed firewall monitors all the East-West traffic on your virtual machines.

Prerequisites

If you are creating rules for Identity Firewall, first create a group with Active Directory members.
To view supported protocols for IDFW, see Identity Firewall Supported Configurations.

Note For Identity Firewall rule enforcement, Windows Time service should be on for all VMs
using Active Directory. This ensures that the date and time is synchronized between Active
Directory and VMs. AD group membership changes, including enabling and deleting users, do not
immediately take effect for logged in users. For changes to take effect, users must log out and
then log back in. AD administrator's should force a logout when group membership is modified.
This behavior is a limitation of Active Directory.

Note that if you are using a combination of Layer 7 and ICMP, or any other protocols, you need to
put the Layer 7 firewall rules last. Any rules above a Layer 7 any/any rule will not be executed.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Select Security > Distributed Firewall from the navigation panel.

3 Enable Distributed Firewall by selecting Actions > General Settings at the top right of the
window, and toggling the Distributed Firewall Status. Click Save.

4 Ensure that you are in the correct pre-defined category, and click Add Policy. For more
about categories, see Distributed Firewall .

5 Enter a Name for the new policy section.

6 To configure the following policy settings, click the gear icon:

Option Description

TCP Strict A TCP connection begins with a three-way handshake


(SYN, SYN-ACK, ACK) and typically ends with a two-way
exchange (FIN, ACK). In certain circumstances, the
distributed firewall (DFW) might not see the three-way
handshake for a particular flow ( due to asymmetric
traffic or the distributed firewall being enabled while a
flow exists). By default, the distributed firewall does not
enforce the need to see a three-way handshake, and
picks up sessions that are already established. TCP strict
can be enabled on a per section basis to turn off mid-
session pick-up and enforce the requirement for a three-
way handshake.
When enabling TCP strict mode for a particular DFW
policy, and using a default ANY-ANY Block rule, packets
that do not complete the three-way handshake
connection requirements and that match a TCP-based
rule in this section are dropped. Strict is only applied to
stateful TCP rules, and is enabled at the distributed
firewall policy level. TCP strict is not enforced for packets
that match a default ANY-ANY Allow which has no TCP
service specified.

Stateful A stateful firewall monitors the state of active


connections and uses this information to determine
which packets to allow through the firewall.

Locked The policy can be locked to prevent multiple users from


editing the same sections. When locking a section, you
must include a comment.
Some roles such as enterprise administrator have full
access credentials, and cannot be locked out. See Role-
Based Access Control.

7 Click Publish. Multiple policies can be added, and then published together at one time.

The new policy is shown on the screen.

8 Select a policy section and click Add Rule.

9 Enter a name for the rule. IPv4, IPv6, and multicast addresses are supported.

10 In the Sources column, click the edit icon and select the source of the rule. Groups with
Active Directory members can be used for the source field of an IDFW rule. See Add a Group
for more information.

11 In the Destinations column, click the edit icon and select the destination of the rule. If not
defined, the destination matches any. See Add a Group for more information.

12 In the Services column, click the edit icon and select services. The service matches any if not
defined.

13 The Profiles column is not available when adding a rule to the Ethernet category. For all other
rule categories, in the Profiles column, click the edit icon and select a context profile, or click
Add New Context Profile. See Add a Context Profile.

Context profiles use layer 7 APP ID attributes for use in distributed firewall rules and gateway
firewall rules. Multiple App ID context profiles can be used in a firewall rule with services set
to Any. For ALG profiles (FTP, or TFTP), one context profile is supported per rule.
Context profiles are not supported when creating IDS rules.

14 Click Apply to apply the context profile to the rule.

15 By default, the Applied to column is set to DFW, and the rule is applied to all workloads. You
can also apply the rule to selected groups. You cannot use Applied to for groups based on
IPSets. Applied to defines the scope of enforcement per rule, and is used mainly for
optimization of resources on ESXi and KVM hosts. It helps in defining a targeted policy for
specific zones and tenants, without interfering with other policy defined for other tenants and
zones.

16 In the Action column, select an action.

Option Description

Allow Allows all L3 or L2 traffic with the specified source, destination, and protocol
to pass through the current firewall context. Packets that match the rule,
and are accepted, traverse the system as if the firewall is not present.

Drop Drops packets with the specified source, destination, and protocol. Dropping
a packet is a silent action with no notification to the source or destination
systems. Dropping the packet causes the connection to be retried until the
retry threshold is reached.

Reject Rejects packets with the specified source, destination, and protocol.
Rejecting a packet is a more graceful way to deny a packet, as it sends a
destination unreachable message to the sender. If the protocol is TCP, a TCP
RST message is sent. ICMP messages with administratively prohibited code
are sent for UDP, ICMP, and other IP connections. One benefit of using
Reject is that the sending application is notified after only one attempt that
the connection cannot be established.

17 Click the status toggle button to enable or disable the rule.

18 Click the gear icon to configure the following rule options:

Option Description

Logging Logging is turned off by default. Logs are stored in the /var/log/dfwpktlogs.log


file on ESXi and KVM hosts.

Direction Refers to the direction of traffic from the point of view of the destination
object. IN means that only traffic to the object is checked, OUT means that
only traffic from the object is checked, and In-Out, means that traffic in both
directions is checked.

IP Protocol Enforce the rule based on IPv4, IPv6, or both IPv4-IPv6.

Log Label A description entered here will be seen on the interface on the host.

19 Click Publish. Multiple rules can be added and then published together at one time.

20 On each rule, click the Info icon to view the rule ID number, and where it is enforced.

This icon is grayed out until you publish the rule. You can also specify a rule ID when you click
the filter icon to display only policies and rules that satisfy the filter criteria.

21 The realization status API has been enhanced at a security policy level to provide additional
realization status information. This can be achieved by specifying the query parameter
include_enforced_status=true along with the intent_path. Make the following API call.

GET https://<nsx>/policy/api/v1/infra/realized-state/status?intent_path=/infra/
domains/default/security-policies/<security-policy-
id>&include_enforced_status=true
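
For example, the same call can be issued with curl from any workstation that can reach NSX
Manager. This is a sketch; <security-policy-id> is a placeholder, and the -k option skips
certificate verification and is only appropriate in lab environments:

curl -k -u admin:'<password>' "https://<nsx-manager>/policy/api/v1/infra/realized-state/status?intent_path=/infra/domains/default/security-policies/<security-policy-id>&include_enforced_status=true"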

Firewall Packet Logs


If logging is enabled for firewall rules, you can look at the firewall packet logs to troubleshoot
issues.

The log file is /var/log/dfwpktlogs.log for both ESXi and KVM hosts.

# tail -f /var/log/dfwpktlogs.log
2018-03-27T10:23:35.196Z INET TERM 3072 IN TCP FIN 100.64.80.1/60688->172.16.10.11/80 8/7 373/5451
2018-03-27T10:23:35.196Z INET TERM 3074 OUT TCP FIN 172.16.10.11/46108->172.16.20.11/8443 8/9
1178/7366
2018-03-27T10:23:35.196Z INET TERM 3072 IN TCP RST 100.64.80.1/60692->172.16.10.11/80 9/6 413/5411
2018-03-27T10:23:35.196Z INET TERM 3074 OUT TCP RST 172.16.10.11/46109->172.16.20.11/8443 9/7
1218/7262
2018-03-27T10:23:37.442Z 71d32787 INET match PASS 3074 IN 60 TCP 172.16.10.12/35770-
>172.16.20.11/8443 S
2018-03-27T10:23:38.492Z INET match PASS 2 OUT 1500 TCP 172.16.10.11/80->100.64.80.1/60660 A
2018-03-27T10:23:39.934Z INET match PASS 3072 IN 52 TCP 100.64.80.1/60720->172.16.10.11/80 S
2018-03-27T10:23:39.944Z INET match PASS 3074 OUT 60 TCP 172.16.10.11/46114->172.16.20.11/8443 S
2018-03-27T10:23:39.944Z 71d32787 INET match PASS 3074 IN 60 TCP 172.16.10.11/46114-
>172.16.20.11/8443 S
2018-03-27T10:23:42.449Z 71d32787 INET match PASS 3074 IN 60 TCP 172.16.10.12/35771-
>172.16.20.11/8443 S
2018-03-27T10:23:44.712Z INET TERM 3074 IN TCP RST 172.16.10.11/46109->172.16.20.11/8443 9/7 1218/7262
2018-03-27T10:23:44.712Z INET TERM 3074 IN TCP FIN 172.16.10.12/35766->172.16.20.11/8443 9/10
1233/7418
2018-03-27T10:23:44.712Z INET TERM 3074 IN TCP FIN 172.16.10.11/46110->172.16.20.11/8443 9/9 1230/7366
2018-03-27T10:23:44.712Z INET TERM 3074 IN TCP FIN 172.16.10.12/35767->172.16.20.11/8443 9/10
1233/7418
2018-03-27T10:23:44.939Z INET match PASS 3072 IN 52 TCP 100.64.80.1/60726->172.16.10.11/80 S
2018-03-27T10:23:44.957Z INET match PASS 3074 OUT 60 TCP 172.16.10.11/46115->172.16.20.11/8443 S
2018-03-27T10:23:44.957Z 71d32787 INET match PASS 3074 IN 60 TCP 172.16.10.11/46115-
>172.16.20.11/8443 S
2018-03-27T10:23:45.480Z INET TERM 2 OUT TCP TIMEOUT 172.16.10.11/80->100.64.80.1/60528 1/1 1500/56

Manage a Firewall Exclusion List


Firewall exclusion lists are made of groups that can be excluded from a firewall rule based on
group membership.

Groups can be excluded from firewall rules, and a maximum of 100 groups can be on the
list. IP sets, MAC sets, and AD groups cannot be included as members in a group that is
used in a firewall exclusion list.

Note NSX-T Data Center automatically adds NSX Manager and NSX Edge node virtual machines
to the firewall exclusion list.

Procedure

1 Navigate to Security > Distributed Firewall > Actions > Exclusion List.

A window appears listing available groups.

2 To add a group to the exclusion list, click the check box next to any group. Then click Apply.

3 To create a group, click Add Group. See Add a Group.

4 To edit a group, click the three dot menu next to a group and select Edit.

5 To delete a group, click the three dot menu and select Delete.

6 To display group details, click Expand All.
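
The exclusion list can also be read programmatically. The Policy API path below is an assumption for
this release and should be verified against the NSX-T Data Center API guide; members are referenced
by their group paths:

GET https://<nsx-manager>/policy/api/v1/infra/settings/firewall/security/exclude-list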

Filtering Specific Domains (FQDN/URLs)


Set up a distributed firewall rule to filter specific domains identified with FQDN/URLs, for
example, *.office365.com.

Currently, a predefined list of domains is supported. You can see the list of FQDNs when you add
a new context profile of attribute type Domain (FQDN) Name. You can also see a list of FQDNs
by running the API call /policy/api/v1/infra/context-profiles/attributes?
attribute_key=DOMAIN_NAME.

You must set up a DNS rule first, and then the FQDN whitelist or blacklist rule below it. This is
because NSX-T Data Center uses DNS Snooping to obtain a mapping between the IP address
and the FQDN. SpoofGuard should be enabled across the switch on all logical ports to protect
against the risk of DNS spoofing attacks. A DNS spoofing attack is when a malicious VM can
inject spoofed DNS responses to redirect traffic to malicious endpoints or bypass the firewall. For
more information about SpoofGuard, see Understanding SpoofGuard Segment Profile.

FQDN-based rules are retained during vMotion for ESXi hosts.

Note In the current release, ESXi and KVM are supported. ESXi supports drop/reject action for
URL rules. KVM supports the whitelisting feature.

Procedure

1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.

2 Navigate to Security > Distributed Firewall.

3 Add a firewall policy section by following the steps in Add a Distributed Firewall . An existing
firewall policy section can also be used.

4 Select the new or existing firewall policy section and click Add Rule to create the DNS firewall
rule first.

5 Provide a name for the firewall rule, such as DNS rule, and provide the following details:

Option Description

Services Click the edit icon and select the DNS or DNS-UDP service as applicable to
your environment.

Profile Click the edit icon and select the DNS context profile. This is precreated and
is available in your deployment by default.

Applied To Select a group as required.

Action Select Allow.

6 Click Add Rule again to set up the FQDN whitelisting or blacklisting rule.

7 Name the rule appropriately, such as, FQDN/URL Whitelist. Drag the rule under the DNS rule
under this policy section.

8 Provide the following details:

Option Description

Services Click the edit icon and select the service you want to associate with this rule,
for example, HTTP.

Profile Click the edit icon and click Add New Context Profile. Click in the column
titled Attribute, and select Domain (FQDN) Name. Select the list of Attribute
Name/Values from the predefined list. Click Add. See Add a Context Profile
for details.

Applied To Select DFW or a group as required.

Action Select Allow, Drop, or Reject.

9 Click Publish.
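
As with App Id profiles, the FQDN context profile used in step 8 can also be created through the
Policy API. The following is a minimal sketch; the profile ID office365-fqdn is a placeholder, and
the DOMAIN_NAME values must come from the predefined FQDN list:

PATCH https://<nsx-manager>/policy/api/v1/infra/context-profiles/office365-fqdn
{
    "attributes": [
        {
            "datatype": "STRING",
            "key": "DOMAIN_NAME",
            "value": ["*.office365.com"]
        }
    ]
}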

Extending Security Policies to Physical Workloads


NSX-T Data Center can act as a single point of administration for both virtual and physical
workloads.

Starting in NSX-T Data Center 2.5.1, integration with Arista CloudVision eXchange (CVX) is
supported. This integration facilitates consistent networking and security services, across virtual
and physical workloads, independent of your application frameworks or physical network
infrastructure. NSX-T Data Center does not directly program the physical network switch or
router but integrates at the physical SDN controller level, therefore preserving the autonomy of
security administrators and physical network administrators.

Starting in NSX-T Data Center 2.5.1, integration with Arista EOS 4.22.1FX-PCS and later is
supported.

Limitations
n Arista switches require ARP traffic to exist before firewall rules are applied to an end host
that is connected to an Arista switch. Packets can therefore pass through the switch before
firewall rules are configured to block traffic.

n Allowed traffic does not resume when a switch crashes or is reloaded. The ARP tables need
to be populated again, after the switch comes up, for the firewall rules to be enforced on the
switch.

n Firewall rules cannot be applied on the Arista Physical Switch, for FTP passive clients that
connect to FTP Server connected to the Arista Physical Switch.

n In CVX HA setup that uses Virtual IP for the CVX cluster, the CVX VM’s dvpg’s Promiscuous
mode, and Forged transmits must be set to Accept. In case they are set to default (Reject),
the CVX HA Virtual IP will not be reachable from NSX Manager.

Configure NSX-T Data Center to interact with Arista CVX


Complete the configuration procedure on NSX-T Data Center so that CVX can be added as an
enforcement point in NSX-T Data Center and NSX-T Data Center can interact with CVX.

Prerequisites

Obtain the virtual IP address for the Arista CVX cluster.

Procedure

1 Log in to NSX Manager as a root user and run the following command to retrieve the
thumbprint for CVX:

openssl s_client -connect <virtual IP address of CVX cluster> | openssl x509 -noout -fingerprint -
sha256

Sample output:

depth=0 CN = self.signed
verify error:num=18:self signed certificate
verify return:1
depth=0 CN = self.signed
verify return:1
SHA256
Fingerprint=35:C1:42:BC:7A:2A:57:46:E8:72:F4:C8:B8:31:E3:13:5F:41:95:EF:D8:1E:E9:3D:F0:CC:3B:09:A2
:FE:22:DE

2 Edit the retrieved thumbprint to use only lower case characters and exclude any colons in the
thumbprint.

Sample of edited thumbprint for CVX:

35c142bc7a2a5746e872f4c8b831e3135f4195efd81ee93df0cc3b09a2fe22de

3 Call the PATCH /policy/api/v1/infra/sites/default/enforcement-points API and use the CVX


thumbprint to create an enforcement endpoint for CVX. For example:

PATCH https://<nsx-manager>/policy/api/v1/infra/sites/default/enforcement-points/cvx-default-ep
{
"auto_enforce": "false",
"connection_info": {
"enforcement_point_address": "<IP address of CVX>",
"resource_type": "CvxConnectionInfo",
"username": "cvpadmin",
"password": "1q2w3e4rT",
"thumbprint": "65a9785e88b784f54269e908175ada662be55f156a2dc5f3a1b0c339cea5e343"
}
}

4 Call the GET /policy/api/v1/infra/sites/default/enforcement-points API to retrieve the


endpoint information. For example:

https://<nsx-manager>/policy/api/v1/infra/sites/default/enforcement-points/cvx-default-ep
{
"auto_enforce": "false",
"connection_info": {
"enforcement_point_address": "<IP address of CVX>",
"resource_type": "CvxConnectionInfo",
"username": "admin",

"password": "1q2w3e4rT",
"thumbprint": "35c142bc7a2a5746e872f4c8b831e3135f4195efd81ee93df0cc3b09a2fe22de"
}
}

Sample output:

{
"connection_info": {
"thumbprint": "35c142bc7a2a5746e872f4c8b831e3135f4195efd81ee93df0cc3b09a2fe22de",
"enforcement_point_address": "192.168.2.198",
"resource_type": "CvxConnectionInfo"
},
"auto_enforce": false,
"resource_type": "EnforcementPoint",
"id": "cvx-default-ep",
"display_name": "cvx-default-ep",
"path": "/infra/sites/default/enforcement-points/cvx-default-ep",
"relative_path": "cvx-default-ep",
"parent_path": "/infra/sites/default",
"marked_for_delete": false,
"_system_owned": false,
"_create_user": "admin",
"_create_time": 1564036461953,
"_last_modified_user": "admin",
"_last_modified_time": 1564036461953,
"_protection": "NOT_PROTECTED",
"_revision": 0
}

5 Call the POST /api/v1/notification-watchers/ API and use the CVX thumbprint to create a
notification ID. For example:

POST https://<nsx-manager>/api/v1/notification-watchers/
{
"server": "<virtual IP address of CVX cluster>",
"method": "POST",
"uri": "/pcs/v1/nsgroup/notification",
"use_https": true,
"certificate_sha256_thumbprint":
"35c142bc7a2a5746e872f4c8b831e3135f4195efd81ee93df0cc3b09a2fe22de",
"authentication_scheme": {
"scheme_name":"BASIC_AUTH",
"username":"cvpadmin",
"password":"1q2w3e4rT"
}
}

6 Call the GET /api/v1/notification-watchers/ to retrieve the notification ID.

Sample output:

{
"id": "a0286cb6-de4d-41de-99a0-294465345b80",

"server": "192.168.2.198",
"port": 443,
"use_https": true,
"certificate_sha256_thumbprint":
"35c142bc7a2a5746e872f4c8b831e3135f4195efd81ee93df0cc3b09a2fe22de",
"method": "POST",
"uri": "/pcs/v1/nsgroup/notification",
"authentication_scheme": {
"scheme_name": "BASIC_AUTH",
"username": "cvpadmin"
},
"send_timeout": 30,
"max_send_uri_count": 5000,
"resource_type": "NotificationWatcher",
"display_name": "a0286cb6-de4d-41de-99a0-294465345b80",
"_create_user": "admin",
"_create_time": 1564038044780,
"_last_modified_user": "admin",
"_last_modified_time": 1564038044780,
"_system_owned": false,
"_protection": "NOT_PROTECTED",
"_revision": 0
}

7 Call the PATCH /policy/api/v1/infra/domains/default/domain-deployment-maps/cvx-default-dmap


API to create a CVX domain deployment map. For example:

PATCH https://<nsx-manager>/policy/api/v1/infra/domains/default/domain-deployment-maps/cvx-
default-dmap
{

"display_name": "cvx-deployment-map",

"id": "cvx-default-dmap",

"enforcement_point_path": "/infra/sites/default/enforcement-points/cvx-default-ep"

8 Call the GET /policy/api/v1/infra/domains/default/domain-deployment-maps API to retrieve the


deployment map information.

Configure Arista CVX to interact with NSX-T Manager


After configuring NSX-T Data Center, complete the configuration procedure on Arista
CloudVision eXchange (CVX) to enable CVX to interact with NSX-T Data Center.

Prerequisites

NSX-T Data Center has registered the CVX as an enforcement point.

Procedure

1 Log in to NSX Manager as a root user and run the following command to create a thumbprint
for CVX to communicate with NSX Manager:

openssl s_client -connect <IP address of nsx-manager>:443 | openssl x509 -pubkey -noout | openssl
rsa -pubin -outform der | openssl dgst -sha256 -binary | openssl base64

Sample output:

depth=0 C = US, ST = CA, L = Palo Alto, O = VMware Inc., OU = NSX, CN = nsx-mgr


verify error:num=18:self signed certificate
verify return:1
depth=0 C = US, ST = CA, L = Palo Alto, O = VMware Inc., OU = NSX, CN = nsx-mgr
verify return:1
writing RSA key
S+zwADluzeNf+dnffDpYvgs4YrS6QBgyeDry40bPgms=

2 Run the following commands from the CVX CLI:

cvx
no shutdown
service pcs
no shutdown
controller <IP address of nsx-manager>
username <NSX administrator user name>
password <NSX administrator password>
enforcement-point cvx-default-ep
pinned-public-key <thumbprint for CVX to communicate with NSX
Manager>
notification-id <notification ID created while registering CVX with NSX>
end

3 Run the following command from the CVX CLI to check the configuration:

show running-config

Sample output:

cvx
no shutdown
source-interface Management1
!
service hsc
no shutdown

!
service pcs
no shutdown
controller 192.168.2.80
username admin

password 7 046D26110E33491F482F2800131909556B
enforcement-point cvx-default-ep
pinned-public-key sha256//S+zwADluzeNf+dnffDpYvgs4YrS6QBgyeDry40bPgms=
notification-id a0286cb6-de4d-41de-99a0-294465345b80

4 Configure tag on the ethernet interface of the physical switch that connects to the physical
server. Run the following commands on the physical switch managed by CVX.

configure terminal
interface ethernet 4
tag phy_app_server
end
copy running-config startup-config
Copy completed successfully.

5 Run the following command to verify tag configuration for the switch:

show running-config section tag

Sample output:

interface Ethernet4
description connected-to-7150s-3
switchport trunk allowed vlan 1-4093
switchport mode trunk
tag sx4_app_server

IP addresses that are learnt on the tagged interfaces, using ARP, are shared with NSX-T Data
Center.

6 Log in to NSX Manager to create and publish firewall rules for the physical workloads
managed by CVX. See Chapter 13 Security for more information on creating rules.

Policies and rules published in NSX-T Data Center appear as dynamic ACLs on the physical
switch managed by CVX.

For more information, see CVX HA setup, CVX HA Virtual IP setup, and Physical Switch MLAG
setup.

Shared Address Sets


Security groups based on dynamic or logical objects can be created and used in the Applied to
text box of distributed firewall rules.

Because address sets are dynamically populated based on virtual machine name or tags, and
must be updated on each filter, they can exhaust the available amount of heap memory on hosts
to store DFW rules and IP address sets.

In NSX-T Data Center version 2.5 and later, a feature called Global or Shared Address Sets
makes address sets shared across all the filters. While each filter can have different rules, based
on Applied To, the address set members are constant across all the filters. This feature is
enabled by default, reducing heap memory use. It cannot be disabled.

In NSX-T Data Center version 2.4 and earlier, Global or Shared Address Sets is disabled, and
environments with heavy distributed firewall rules might experience VSIP heap exhaustion.

Distributed IDS
Distributed Intrusion Detection Service (IDS) monitors network traffic on the host for suspicious
activity.

IDS detects intrusion attempts based on already known malicious instruction sequences. The
detected patterns in the IDS are known as signatures. Specific signatures can be excluded from
intrusion detection.

Note Do not enable Distributed Intrusion Detection Service (IDS) in an environment that is using
Distributed Load Balancer. NSX-T Data Center does not support using IDS with a Distributed
Load Balancer.

Distributed IDS Configuration:

1 Enable IDS on hosts, download latest signature set, and configure signature settings.
Distributed IDS Settings and Signatures

2 Create IDS profiles. Distributed IDS Profiles

3 Create IDS rules. Distributed IDS Rules

4 Verify IDS status on hosts. Verify Distributed IDS Status on Host

Distributed IDS Settings and Signatures


NSX-T can automatically apply signatures to your hosts, and update intrusion detection
signatures by checking our cloud-based service.

Distributed firewall (DFW) must be enabled for IDS to work. If traffic is blocked by a DFW rule,
then IDS will not see the traffic.

Intrusion detection can be enabled on standalone hosts by toggling the enabled bar. If vCenter
Server clusters are detected, IDS can also be enabled on a cluster basis by selecting the cluster and
clicking Enable.

Signatures
Signatures are applied to IDS rules through profiles. A single profile is applied to matching traffic.
By default, NSX Manager checks for new signatures once per day. New signature update
versions are published every two weeks (with additional non-scheduled 0-day updates). When a
new update is available, there is a banner across the page with an Update Now link.

If Auto update new versions is selected, signatures are automatically applied to your hosts after
they are downloaded from the cloud. If auto update is disabled, the signatures are stopped at the
listed version. Click view and change versions to add another version, in addition to the default.
Currently, two versions of signatures are maintained. Whenever there is a change in the version
commit identification number, a new version is downloaded.

If a proxy server is configured for NSX Manager to access the Internet, click Proxy Settings and
complete the configuration.

Offline Downloading and Uploading Signatures


The following API calls are available when using VMware Cloud on AWS.

To download and upload a signature bundle, when NSX Manager does not have Internet access:

1 This API is the first one to be called before any communication with the cloud service is
started. It registers the client using the client's license key, and generates credentials for the
client to use. The client_id and client_secret generated is used as the request for the
Authentication API. If the client has previously registered, but does not have access to the
client_id and client_secret, the client has to re-register using the same API.

POST https://api.nsx-sec-prod.com/1.0/auth/register

Body:

{
"license_keys":["054HK-D0356-480N1-02AAM-AN047"],
"device_type":"NSX-Policy-Manager",
"client_id": "client_username"
}

Response:

{"client_id":"client_username",
"client_secret": "Y54+V/
rCpEm50x5HAUIzH6aXtTq7s97wCA2QqZ8VyrtFQjrJih7h0alItdQn02T46EJVnSMZWTseragTFScrtIwsiPSX7APQIC7MxAYZ
0BoAWvW2akMxyZKyzbYZjeROb/C2QchehC8GFiFNpwqiAcQjrQHwHGdttX4zTQ="
}

2 This API call authenticates the client using the client_id and client_secret, and generates an
authorization token to use in the headers of requests to IDS Signatures APIs. The token is
valid for 60 minutes. If the token is expired, the client has to reauthenticate using the client_id
and client_secret.

POST https://api.nsx-sec-prod.com/1.0/auth/authenticate

Body:

{"client_id":"client_username",
"client_secret": "Y54+V/
rCpEm50x5HAUIzH6aXtTq7s97wCA2QqZ8VyrtFQjrJih7h0alItdQn02T46EJVnSMZWTseragTFScrtIwsiPSX7APQIC7MxAYZ
0BoAWvW2akMxyZKyzbYZjeROb/C2QchehC8GFiFNpwqiAcQjrQHwHGdttX4zTQ="
}

Response:

{
"access_token":
"eyJhbGciOiJIUzUxMiJ9.eyJqdGkiOiI3ZjMwN2VhMmQwN2IyZjJjYzM5ZmU5NjJjNmZhNDFhMGZlMTk4YjMyMzU4OGU5NGU5
NzE3NmNmNzk0YWU1YjdjLTJkYWY2MmE3LTYxMzctNGJiNS05NzJlLTE0NjZhMGNkYmU3MCIsInN1YiI6IjdmMzA3ZWEyZDA3Yj
JmMmNjMzlmZTk2MmM2ZmE0MWEwZmUxOThiMzIzNTg4ZTk0ZTk3MTc2Y2Y3OTRhZTViN2MtMmRhZjYyYTctNjEzNy00YmI1LTk3
MmUtMTQ2NmEwY2RiZTcwIiwiZXhwIjoxNTU1NTUyMjk0LCJpYXQiOjE1NTU1NDg2OTR9.x4U75GShDLMhyiyUO2B9HIi1Adonz
x3Smo01qRhvXuErQSpE_Kxq3rzg1_IIyvoy3SJwwDhSh8KECtGW50eCPg",
"token_type": "bearer",
"expires_in": 3600,
"scope": "[distributed_threat_features]"
}

3 The response to this command has the link for the ZIP file. NSXCloud downloads the
signatures from the GitHub repo every 24 hours, and saves the signatures in a ZIP file. Copy
and paste the signatures URL into your browser, and the ZIP file downloads.

GET https://api.nsx-sec-prod.com/1.0/intrusion-services/signatures

In the Headers tab, the Authorization key will have the access_token value from the
authenticate API response.

Authorization
eyJhbGciOiJIUzUxMiJ9.eyJqdGkiOiI3ZjMwN2VhMmQwN2IyZjJjYzM5ZmU5NjJjNmZhNDFhMGZlMTk4YjMyMzU4OGU5NGU5N
zE3NmNmNzk0YWU1YjdjLTJkYWY2MmE3LTYxMzctNGJiNS05NzJlLTE0NjZhMGNkYmU3MCIsInN1YiI6IjdmMzA3ZWEyZDA3YjJ
mMmNjMzlmZTk2MmM2ZmE0MWEwZmUxOThiMzIzNTg4ZTk0ZTk3MTc2Y2Y3OTRhZTViN2MtMmRhZjYyYTctNjEzNy00YmI1LTk3M
mUtMTQ2NmEwY2RiZTcwIiwiZXhwIjoxNTU1NTUyMjk0LCJpYXQiOjE1NTU1NDg2OTR9.x4U75GShDLMhyiyUO2B9HIi1Adonzx
3Smo01qRhvXuErQSpE_Kxq3rzg1_IIyvoy3SJwwDhSh8KECtGW50eCPg

Response:

{
"signatures_url": "https://ncs-idps-us-west-2-prod-signatures.s3.us-west-2.amazonaws.com/
a07fe284ff70dc67194f2e7cf1a8178d69570528.zip?X-Amz-Security-Token=IQoJb3JpZ2luX2VjENf%2F%2F%2F%2F
%2F%2F%2F%2F%2F%2FwEaCXVzLXdlc3QtMSJHMEUCIG1UYbzfBxOsm1lvdj1k36LPyoPota0L4CSOBMXgKGhmAiEA
%2BQC1K4Gr7VCRiBM4ZTH2WbP2rvIp0qfHfGlOx0ChGc4q6wEIHxABGgw1MTAwMTM3MTE1NTMiDA4H4ir7eJl779wWWirIAdLI
x1uAukLwnhmlgLmydZhW7ZExe
%2BamDkRU7KT46ZS93mC1CQeL00D2rjBYbCBiG1mzNILPuQ2EyxmqxhEOzFYimXDDBER4pmv8%2BbKnDWPg08RNTqpD
%2BAMicYNP7WlpxeZwYxeoBFruCDA2l3eXS6XNv3Ot6T2a
%2Bk4rMKHtZyFkzZREIIcQlPg7Ej5q62EvvMFQdo8TyZxFpMJBc4IeG0h1k6QZU1Jlkrq2RYKit5WwLD
%2BQKJrEdf4A0YctLbMCDbNbprrUcCADMKyclu8FOuABuK90a%2BvnA%2FJFYiJ32eJl
%2Bdt0YRbTnRyvlMuSUHxjNAdyrFxnkPyF80%2FQLYLVDRWUDatyAo10s3C0pzYN%2FvMKsumExy6FIcv
%2FOLoO8Y9RaMOTnUfeugpr6YsqMCH0pUR4dIVDYOi1hldNCf1XD74xMJSdnviaxY4vXD4bBDKPnRFFhOxLTRFAWVlMNDYggLh
3pV3rXdPnIwgFTrF7CmZGJAQBBKqaxzPMVZ2TQBABmjxoRqCBip8Y662Tbjth7iM2V522LMVonM6Tysf16ls6QU9IC6WqjdOde
i5yazK%2Fr9g%3D&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20191202T222034Z&X-Amz-
SignedHeaders=host&X-Amz-Expires=3599&X-Amz-Credential=ASIAXNPZPUTA6A7V7P4X%2F20191202%2Fus-
west-1%2Fs3%2Faws4_request&X-Amz-
Signature=d85ca4aef6abe22062e2693acacf823f0a4fc51d1dc07cda8dec93d619050f5e"
}

4 Navigate to Security > Distributed IDS > Settings. Click Upload IDS Signatures in the right
corner. Navigate to the saved signature ZIP file and upload the file. You can also upload the
signature ZIP using the API call:

POST https://<mgr-ip>/policy/api/v1/infra/settings/firewall/security/intrusion-services/
signatures?action=upload_signatures

Distributed IDS Profiles


IDS Profiles are used to group signatures, which can then be applied to select applications. You
can create four custom profiles in addition to the default profile.

Signatures can be enabled based on the severity rating of the signature. A higher score indicates
an increased risk associated with the intrusion event. Severity is determined based on the
following:

n Severity specified in the signature itself

n CVSS (Common Vulnerability Scoring System) score specified in the signature

n Type-rating associated with the classification type

Exclusions are set per severity level and are used to disable signatures, reducing noise and
improving performance. Use exclusions to disable signatures:

n That cause false positives

n That are noisy

n That are irrelevant to the protected workloads

The default IDS profile includes critical severities and cannot be edited.

Procedure

1 Navigate to Security > Distributed IDS > Profiles.

2 Enter a profile name and description.

3 Click one or more of the severities you want to include.

See IDS Severity Ratings for more information.

4 To exclude a severity, click select under Signatures to Exclude. You can now view and
exclude the signatures included in that severity level. Click Add to add a signature to the
exclusion list. The following information is provided for each signature:

Variable Description

Signature ID Identification number that references individual signatures.

Details Describes the threat.

Product Affected Shows what product is vulnerable to the exploit.

Attack Target Target of the attack.

IDS Severity Indicates the severity of the signature. For more details, see IDS Severity
Ratings.

CVSS (Common Vulnerability CVSS is a framework for rating the severity of security vulnerabilities in
Scoring System) software. A CVSS base score of 0.0-3.9 is considered low severity. A CVSS
base score of 4.0-6.9 is medium severity. A CVSS base score of 7.0-10.0 is
high severity.

CVE (Common Vulnerability Common Vulnerability Enumeration (CVE), is a dictionary of publicly known
Enumeration) information security vulnerabilities and exposures.

Category Type of attack.

5 Click Save.
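
Profiles created here can also be listed programmatically. The Policy API path below is an
assumption for the intrusion services feature in this release and should be verified against the
NSX-T Data Center API guide:

GET https://<nsx-manager>/policy/api/v1/infra/settings/firewall/security/intrusion-services/profiles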

What to do next

Create IDS rules.

IDS Severity Ratings


Signature severity helps security teams prioritize incidents.

A higher score indicates an increased risk associated with the intrusion event.

NSX IDS Severity Level | Classification Type-Rating | Classification Types

CRITICAL | 1 | Attempted User Privilege Gain; Unsuccessful User Privilege Gain; Successful User Privilege Gain; Attempted Administrator Privilege Gain; Successful Administrator Privilege Gain; Executable Code was Detected; A Network Trojan was Detected; Web Application Attack; Inappropriate Content was Detected; Potential Corporate Privacy Violation; Targeted Malicious Activity was Detected; Exploit Kit Activity Detected; Domain Observed Used for C2 Detected; Successful Credential Theft Detected; Emerging Threat alert from SpiderLabs Research; RedAlert from SpiderLabs Research

High | 2 | Potentially Bad Traffic; Information Leak; Large Scale Information Leak; Attempted Denial of Service; Decode of an RPC Query; Suspicious Filename Detected; Attempted Login Using a Suspicious Username; System Call Detected; Client Using an Unusual Port; Detection of a Denial of Service Attack; Detection of a Non-Standard Protocol or Event; Access to a Potential Vulnerable Web Application Attack; Attempt to Log in By a Default Username and Password; Device Retrieving External IP Address Detected; Possibly Unwanted Program Detected; Possible Social Engineering Attempted; Crypto Currency Mining Activity Detected

Medium | 3 | Not Suspicious Traffic; Unknown Traffic; Suspicious String was Detected; Detection of a Network Scan; Generic Protocol Command Decode; Misc Activity; Generic ICMP event

Low | 4-9 | TCP Connection Detected; Non-specific Potential Attack; Attempt to Exploit Client-side Web Application Vulnerability; Non-specific Potential Web App Attack; Traffic Which is Likely a Bad Idea or Misconfiguration; Attempt to Exploit Administrative-level Vulnerability; Attempt to Exploit User-level Vulnerability; IP Based Alert From SpiderLabs Research; Successful Exploitation of a Root-level Vulnerability; Indication of an Active Backdoor Channel; Worm Propagation; Specific Virus Detected

Distributed IDS Rules


IDS rules are used to apply a previously created profile to select applications and traffic.

IDS rules are created in the same manner as distributed firewall (DFW) rules. First, an IDS policy
or section is created, and then rules are created. DFW must be enabled, and traffic must be
allowed by DFW to be passed through to IDS rules.

IDS rules:

n must specify one IDS profile per rule

n must be stateful

n do not support Layer 7 attributes (App IDs)

One or more policy sections with rules must be created, because there are no default rules.
Before creating rules, create a group that needs a similar rule policy. See Add a Group.

1 Navigate to Security > IDS > Rules.

2 Click Add Policy to create a policy section, and give the section a name.

3 Click the gear icon to configure the following policy section options:

Option Description

Stateful A stateful firewall monitors the state of active


connections and uses this information to determine
which packets to allow through the firewall.

Locked The policy can be locked to prevent multiple users from


editing the same sections. When locking a section, you
must include a comment.
Some roles such as enterprise administrator have full
access credentials, and cannot be locked out. See Role-
Based Access Control.

4 Click Add Rule to add a new rule, and give the rule a name.

5 Configure source/destination/services to determine which traffic needs IDS inspection. IDS


supports any type of group for source and destination.

6 Select the IDS Profile to be used for the matching traffic. For more information, see
Distributed IDS Profiles.

7 Configure Applied To, to limit the scope of the rules.

8 Click the gear icon to configure the following rule options:

Option Description

Logging Logging is turned off by default. Logs are stored in


the /var/log/dfwpktlogs.log file on ESXi and KVM hosts.

Direction Refers to the direction of traffic from the point of view of


the destination object. IN means that only traffic to the
object is checked. OUT means that only traffic from the
object is checked. In-Out, means that traffic in both
directions is checked.

IP Protocol Enforce the rule based on IPv4, IPv6, or both IPv4-IPv6.

Log Label A description entered here is seen on the interface on


the host.

9 Click Publish. Multiple rules can be added and then published together at one time.

For more information about creating policy sections and rules, see Add a Distributed Firewall .
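
For automation, IDS policies and rules have their own Policy API tree, separate from distributed
firewall security policies. The following sketch assumes the intrusion-service-policies path and the
ids_profiles rule field for this release; verify both against the NSX-T Data Center API guide before
use. The IDs shown are placeholders:

PATCH https://<nsx-manager>/policy/api/v1/infra/domains/default/intrusion-service-policies/ids-policy/rules/ids-rule
{
    "source_groups": ["ANY"],
    "destination_groups": ["ANY"],
    "services": ["ANY"],
    "ids_profiles": ["/infra/settings/firewall/security/intrusion-services/profiles/<profile-id>"],
    "scope": ["ANY"],
    "action": "DETECT"
}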

Distributed IDS Events


The events window contains the last 14 days of data.

Navigate to Security > Distributed IDS > Events to view intrusion events.

Colored dots indicate the unique type of intrusion events and can be clicked for details. The size
of the dot indicates the number of times an intrusion event has been seen. A blinking dot
indicates that an attack is ongoing. Point to a dot to see the attack name, number of attempts,
first occurrence, and other details.

n Red dots - represent critical severity signature events.

n Orange dots - represent high severity signature events.

n Yellow dots - represent medium severity signature events.

n Gray dots - represent low severity signature events.

All the intrusion attempts for a particular signature are grouped and plotted at their first
occurrence.

n Select the timeline by clicking the arrow in the upper right corner. The time line can be
between 24 hours and 14 days.

n Filter events by:

Filter Criteria Description

Attack Target Target of the attack.

Attack Type Type of attack, such as trojan horse, or denial of service


(DoS).

CVSS (Common Vulnerability Score) Common Vulnerability Score (filter based on a score
above a set threshold).

Product Affected Vulnerable product or version, for example, Windows XP or Web_Browsers.

VM Name The VM (based on logical port) where exploit traffic


originated from or was received by.

n Click the arrow next to an event to view details.

Detail Description

Last Detected The last time the signature was fired.

Details The name of the signature that was fired.

Product Affected Shows which product is vulnerable to the exploit.

VM Affected List of VMs involved in the intrusion attempt.

Vulnerability Details If available, this shows a link to the CVE and the CVSS score associated with the vulnerability.

Source IP address of the attacker and source port used.

Destination IP address of the victim and destination port used.

Attack Direction Client-Server or Server-Client.


Associated IDS Rule Clickable link to the configured IDS rule that resulted in this event.

Revision The revision number of the IDS signature.

Activity Displays the total number of times this particular IDS signature was triggered, the most recent occurrence, and the first occurrence.

n To view intrusion history, click the arrow next to an event, then click View Intrusion History.
A window opens with the following details:

Detail Description

Source IP IP address of the attacker.

Source Port Source port used in the attack.

Destination IP IP address of the victim.

Destination Port Destination port used in the attack.

Protocol Traffic protocol of the detected intrusion.

Time Detected This is the last time the signature was fired.

n The graph below the chart represents events that occurred over the selected time span. You can zoom in to a specific time window on this graph to view details of the signatures for events that occurred during that window.

Verify Distributed IDS Status on Host


To use the NSX virtual appliance CLI, you must have SSH access to an NSX virtual appliance.
Each NSX virtual appliance contains a command-line interface (CLI).

The viewable modes in the CLI can differ based on the assigned role and rights of a user. If you
are unable to access an interface mode or issue a particular command, consult your NSX
administrator.

Procedure

1 Open an SSH session to a compute host running the workloads that were previously deployed. Log in as root.

2 Enter the nsxcli command to open the NSX-T Data Center CLI.

3 To confirm that IDS is enabled on this host, run the command: get ids status.

Sample Output:

localhost> get ids status


NSX IDS Status
--------------------------------------------------
status: enabled
uptime: 793756 (9 days 04:29:16)


4 To confirm that both IDS profiles have been applied to this host, run the command get ids profiles.

localhost> get ids profiles


NSX IDS Profiles
--------------------------------------------------
Profile count: 2
1. 31c1f26d-1f26-46db-b5ff-e6d3451efd71
2. 65776dba-9906-4207-9eb1-8e7d7fdf3de

5 To review IDS profile (engine) statistics, including the number of rules loaded and the number of packets and sessions evaluated, run the command get ids engine stats.

The output is on a per-profile basis and shows the number of signatures loaded for each profile and the number of packets that were evaluated.

localhost> get ids engine stats


NSX IDS Engine Statistics
--------------------------------------------------
uptime: 18 (0 days 00:00:18)

app_layer:
---------
flow:
http: 10713
tx:
http: 25911
detect:
------
engines:
alerts: 11129
id: 3
last_reload: 2020-03-17T21:29:39.387087+0000
packets_incoming: 572083
packets_outgoing: 571066
prof-uuid: 53ef4dba-0291-4ea3-96ef-d01259dca2fe
rules_failed: 0
rules_loaded: 11906

tcp:
---
memuse: 20872880
overlap: 50006
reassembly_memuse: 155439408
rst: 23797
sessions: 58811
syn: 89615
synack: 41635
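
The checks in this procedure can also be scripted when you need to verify several hosts at once. The following Python sketch is a minimal example, assuming root SSH access, the third-party paramiko library, and that your nsxcli build accepts a single command with the -c option; if it does not, run the commands interactively as shown above. Host names and credentials are placeholders.

    import paramiko

    # Placeholders - replace with your compute hosts and credentials.
    HOSTS = ["esxi-01.example.com", "esxi-02.example.com"]
    USERNAME = "root"
    PASSWORD = "password"        # key-based authentication is preferable

    COMMANDS = [
        'nsxcli -c "get ids status"',
        'nsxcli -c "get ids profiles"',
    ]

    for host in HOSTS:
        client = paramiko.SSHClient()
        # Pin known host keys in production instead of auto-adding them.
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=USERNAME, password=PASSWORD)
        try:
            for cmd in COMMANDS:
                stdin, stdout, stderr = client.exec_command(cmd)
                print("--- " + host + ": " + cmd)
                print(stdout.read().decode())
        finally:
            client.close()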


East-West Network Security - Chaining Third-party Services


After partners register network services such as Intrusion Detection System or Intrusion
Prevention System (IDS/IPS) with NSX-T Data Center, as an administrator you can con