Modified on 17 SEP 2020
VMware NSX-T Data Center 3.0
NSX-T Data Center Administration Guide
You can find the most up-to-date technical documentation on the VMware website at:
https://docs.vmware.com/
VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com
© Copyright 2020 VMware, Inc. All rights reserved. Copyright and trademark information.
Contents
1 NSX Manager 14
View Monitoring Dashboards 17
2 Tier-0 Gateways 20
Add a Tier-0 Gateway 21
Create an IP Prefix List 25
Create a Community List 26
Configure a Static Route 27
Create a Route Map 28
Using Regular Expressions to Match Community Lists When Adding Route Maps 30
Configure BGP 30
Configure BFD 34
Configure Multicast 34
Configure IPv6 Layer 3 Forwarding 35
Create SLAAC and DAD Profiles for IPv6 Address Assignment 36
Changing the HA Mode of a Tier-0 Gateway 37
Add a VRF Gateway 38
Configuring EVPN 39
3 Tier-1 Gateway 42
Add a Tier-1 Gateway 42
4 Segments 45
Segment Profiles 46
Understanding QoS Segment Profile 47
Understanding IP Discovery Segment Profile 49
Understanding SpoofGuard Segment Profile 51
Understanding Segment Security Segment Profile 52
Understanding MAC Discovery Segment Profile 54
Add a Segment 55
Types of DHCP on a Segment 58
Configure DHCP on a Segment 59
Configure DHCP Static Bindings on a Segment 66
Layer 2 Bridging 69
Create an Edge Bridge Profile 70
Configure Edge-Based Bridging 71
5 Host Switches 76
Managing NSX-T on a vSphere Distributed Switch 76
Configuring a vSphere Distributed Switch 77
Managing NSX Distributed Virtual Port Groups 79
NSX-T Cluster Prepared with VDS 80
APIs to Configure vSphere Distributed Switch 81
Feature Support in a vSphere Distributed Switch Enabled to Support NSX-T Data Center 86
Enhanced Networking Stack 88
Automatically Assign ENS Logical Cores 89
Configure Guest Inter-VLAN Routing 90
Migrate Host Switch to vSphere Distributed Switch 91
NSX Virtual Distributed Switch 96
13 Security 225
Security Configuration Overview 225
Security Overview 226
Security Terminology 227
Identity Firewall 228
Identity Firewall Workflow 229
Layer 7 Context Profile 231
Layer 7 Firewall Rule Workflow 232
Attributes 233
Distributed Firewall 237
Firewall Drafts 237
Add a Distributed Firewall 239
Firewall Packet Logs 243
Manage a Firewall Exclusion List 243
Filtering Specific Domains (FQDN/URLs) 244
Extending Security Policies to Physical Workloads 245
Shared Address Sets 252
Distributed IDS 252
Distributed IDS Settings and Signatures 253
Distributed IDS Profiles 255
Distributed IDS Rules 258
Distributed IDS Events 259
Verify Distributed IDS Status on Host 261
East-West Network Security - Chaining Third-party Services 263
Key Concepts of Network Protection East-West 263
NSX-T Data Center Requirements for East-West Traffic 264
High-Level Tasks for East-West Network Security 264
Deploy a Service for East-West Traffic Introspection 265
Add Redirection Rules for East-West Traffic 266
Uninstall an East-West Traffic Introspection Service 269
Gateway Firewall 269
Add a Gateway Firewall Policy and Rule 270
14 Inventory 323
Add a Service 323
Add a Group 324
Add a Context Profile 326
Containers 328
Public Cloud Services 330
Physical Servers 330
Tags 330
Add Tags to an Object 334
Add a Tag to Multiple Objects 334
Unassign Tags from an Object 336
Unassign a Tag from Multiple Objects 336
19 Certificates 661
Types of Certificates 661
Certificates for Federation 663
Create a Certificate Signing Request File 665
Creating Self-signed Certificates 666
Create a Self-Signed Certificate 666
Import a Certificate for a CSR 667
Importing and Replacing Certificates 667
Import a Self-signed or CA-signed Certificate 668
Import a CA Certificate 668
Replace Certificates 669
Importing and Retrieving CRLs 670
Import a Certificate Revocation List 671
Configuring NSX Manager to Retrieve a Certificate Revocation List 672
Storage of Public Certificates and Private Keys for Load Balancer or VPN service 672
About Administering VMware NSX-T Data Center
The NSX-T Data Center Administration Guide provides information about configuring and managing networking for VMware NSX-T™ Data Center, including how to create logical switches and ports, how to set up networking for tiered logical routers, and how to configure NAT, firewalls, SpoofGuard, grouping, and DHCP. It also describes how to configure NSX Cloud.
Intended Audience
This information is intended for anyone who wants to configure NSX-T Data Center. The
information is written for experienced Windows or Linux system administrators who are familiar
with virtual machine technology, networking, and security operations.
Related Documentation
You can find the VMware NSX® Intelligence™ documentation at https://docs.vmware.com/en/VMware-NSX-Intelligence/index.html. The NSX Intelligence 1.0 content was initially included and released with the NSX-T Data Center 2.5 documentation set.
1 NSX Manager
The NSX Manager provides a web-based user interface where you can manage your NSX-T
environment. It also hosts the API server that processes API calls.
The NSX Manager interface provides two modes for configuring resources:
- Policy mode
- Manager mode
- By default, if your environment contains only objects created through Policy mode, your user interface is in Policy mode and you do not see the Policy and Manager buttons.
- By default, if your environment contains any objects created through Manager mode, you see the Policy and Manager buttons in the top-right corner.
These defaults can be changed by modifying the user interface settings. See Configure User
Interface Settings for more information.
The same System tab is used in the Policy and Manager interfaces. If you modify Edge nodes,
Edge clusters, or transport zones, it can take up to 5 minutes for those changes to be visible in
Policy mode. You can synchronize immediately using POST /policy/api/v1/infra/sites/default/enforcement-points/default?action=reload.
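For example, a minimal Python sketch of calling that reload endpoint might look like the following. The manager address, credentials, and certificate handling are placeholder assumptions to adapt to your environment; this is not part of the product's official tooling.

```python
import requests

# Placeholder values -- replace with your NSX Manager address and credentials.
NSX_MANAGER = "nsx-manager.example.com"
AUTH = ("admin", "changeme")

# Ask Policy mode to re-read Edge node, Edge cluster, and transport zone
# changes made in Manager mode instead of waiting for the periodic sync.
url = (f"https://{NSX_MANAGER}/policy/api/v1/infra/sites/default/"
       "enforcement-points/default")
resp = requests.post(url, params={"action": "reload"}, auth=AUTH, verify=False)
resp.raise_for_status()
print("Reload requested, HTTP status:", resp.status_code)
```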
- If you are deploying a new NSX-T Data Center environment, using Policy mode to create and manage your environment is the best choice in most situations.
- Some features are not available in Policy mode. If you need these features, use Manager mode for all configurations.
- If you plan to use Federation, use Policy mode to create all objects. Global Manager supports only Policy mode.
- If you are upgrading from an earlier version of NSX-T Data Center and your configurations were created using the Advanced Networking & Security tab, use Manager mode. The menu items and configurations that were found under the Advanced Networking & Security tab are available in NSX-T Data Center 3.0 in Manager mode.
Important If you decide to use Policy mode, use it to create all objects. Do not use Manager mode to create objects. Similarly, if you need to use Manager mode, use it to create all objects. Do not use Policy mode to create objects.
When to use Policy mode:
- Most new deployments should use Policy mode. Federation supports only Policy mode. If you want to use Federation, or might use it in the future, use Policy mode.
- NSX Cloud deployments.
- Networking features available in Policy mode only: DNS Services and DNS Zones, VPN, Forwarding policies for NSX Cloud.
- Security features available in Policy mode only: Endpoint Protection, Network Introspection (East-West Service Insertion), Context Profiles (L7 applications, FQDN), and the new Distributed Firewall and Gateway Firewall layout (Categories, Auto service rules, Drafts).
When to use Manager mode:
- Deployments which were created using the advanced interface, for example, upgrades from versions before Policy mode was available.
- Deployments which integrate with other plugins, for example, NSX Container Plug-in, OpenStack, and other cloud management platforms.
- Networking features available in Manager mode only: Forwarding up timer.
- Security features available in Manager mode only: Bridge Firewall.
For more information about using the Policy API, see the NSX-T Policy API Getting Started Guide.
Security
NSX Manager has the following security features:
- NSX Manager has a built-in user account called admin, which has access rights to all resources, but does not have rights to the operating system to install software. NSX-T upgrade files are the only files allowed for installation. You cannot edit the rights of or delete the admin user. Note that you can change the username admin.
- NSX Manager supports session time-out and automatic user logout. NSX Manager does not support session lock. Initiating a session lock can be a function of the workstation operating system being used to access NSX Manager. Upon session termination or user logout, users are redirected to the login page.
- Authentication mechanisms implemented on NSX-T follow security best practices and are resistant to replay attacks. The secure practices are deployed systematically. For example, session IDs and tokens on NSX Manager for each session are unique and expire after the user logs out or after a period of inactivity. Also, every session has a time record and the session communications are encrypted to prevent session hijacking.
You can access the monitoring dashboards from the Home page of the NSX Manager interface.
From the dashboards, you can click through and access the source pages from which the
dashboard data is drawn.
Procedure
3 Click Monitoring Dashboards and select the desired category of dashboards from the drop-down menu.
The page displays the dashboards in the selected categories. The dashboard graphics are color-coded, with a color code key displayed directly above the dashboards.
4 To access a deeper level of detail, click the title of the dashboard, or one of the elements of the dashboard, if activated.
The following tables describe the default dashboards and their sources.
System (source: System > Appliances > Overview): Shows the status of the NSX Manager cluster and resource (CPU, memory, disk) consumption.
Fabric (sources: System > Fabric > Nodes, System > Fabric > Transport Zones, System > Fabric > Compute Managers): Shows the status of the NSX-T fabric, including host and edge transport nodes, transport zones, and compute managers.
Backups (source: System > Backup & Restore): Shows the status of NSX-T backups, if configured. It is strongly recommended that you configure scheduled backups that are stored remotely to an SFTP site.
Endpoint Protection (source: System > Service Deployments): Shows the status of endpoint protection deployment.
Security (sources: Inventory > Groups, Security > Distributed Firewall): Shows the status of groups and security policies. A group is a collection of workloads, segments, segment ports, and IP addresses, where security policies, including East-West firewall rules, may be applied.
Gateways (sources: Networking > Tier-0 Gateways, Networking > Tier-1 Gateways): Shows the status of Tier-0 and Tier-1 gateways.
Load Balancers (source: Networking > Load Balancing): Shows the status of the load balancer VMs.
VPNs (source: Networking > VPN): Shows the status of virtual private networks.
Load Balancers (source: Networking > Load Balancing): Shows the status of the load balancer services, load balancer virtual servers, and load balancer server pools. A load balancer can host one or more virtual servers. A virtual server is bound to a server pool that includes members hosting applications.
Firewall (sources: Security > Distributed Firewall, Security > Bridge Firewall, Networking > Tier-0 Logical Routers and Networking > Tier-1 Logical Routers): Indicates if the firewall is enabled, and shows the number of policies, rules, and exclusion list members. Note: Each detailed item displayed in this dashboard is sourced from a specific sub-tab in the source page cited.
VPN (source: not applicable): Shows the status of virtual private networks and the number of IPSec and L2 VPN sessions open.
Switching (source: Networking > Logical Switches): Shows the status of logical switches and logical ports, including both VM and container ports.
Resource Name: The NSX-T resource (node, switch, and profile) in non-compliance.
Affected Resources: Number of resources affected. Click the number value to view a list.
See the Compliance Status Report Codes for more information about each compliance report
code.
2 Tier-0 Gateways
A tier-0 gateway performs the functions of a tier-0 logical router. It processes traffic between the
logical and physical networks.
NSX Cloud Note: If using NSX Cloud, see NSX-T Data Center Features Supported with NSX Cloud for a list of auto-generated logical entities, supported features, and configurations required for NSX Cloud.
An Edge node can support only one tier-0 gateway or logical router. When you create a tier-0
gateway or logical router, make sure you do not create more tier-0 gateways or logical routers
than the number of Edge nodes in the NSX Edge cluster.
- Using Regular Expressions to Match Community Lists When Adding Route Maps
- Configure BGP
- Configure BFD
- Configure Multicast
- Configuring EVPN
If you are adding a tier-0 gateway from Global Manager in Federation, see Add a Tier-0 Gateway
from Global Manager.
You can configure the HA (high availability) mode of a tier-0 gateway to be active-active or
active-standby. The following services are only supported in active-standby mode:
- NAT
- Load balancing
- Stateful firewall
- VPN
Tier-0 and tier-1 gateways support the following addressing configurations for all interfaces
(uplinks, service ports and downlinks) in both single tier and multi-tiered topologies:
- IPv4 only
- IPv6 only
To use IPv6 or dual stack addressing, enable IPv4 and IPv6 as the L3 Forwarding Mode in
Networking > Networking Settings > Global Networking Config .
You can configure the tier-0 gateway to support EVPN (Ethernet VPN) type-5 routes. For more
information about configuring EVPN, see Configuring EVPN.
If you configure route redistribution for the tier-0 gateway, you can select from two groups of
sources: tier-0 subnets and advertised tier-1 subnets. The sources in the tier-0 subnets group are:
Connected Interfaces and Segments: These include external interface subnets, service interface subnets, and segment subnets connected to the tier-0 gateway.
Static Routes: Static routes that you have configured on the tier-0 gateway.
NAT IP: NAT IP addresses owned by the tier-0 gateway and discovered from NAT rules that are configured on the tier-0 gateway.
IPSec Local IP: The local IPSec endpoint IP address for establishing VPN sessions.
DNS Forwarder IP: The listener IP for DNS queries from clients, also used as the source IP to forward DNS queries to the upstream DNS server.
EVPN TEP IP: Used to redistribute EVPN local endpoint subnets on the tier-0 gateway.
The sources in the advertised tier-1 subnets group are:
Connected Interfaces and Segments: These include segment subnets connected to the tier-1 gateway and service interface subnets configured on the tier-1 gateway.
Static Routes: Static routes that you have configured on the tier-1 gateway.
NAT IP: NAT IP addresses owned by the tier-1 gateway and discovered from NAT rules that are configured on the tier-1 gateway.
LB SNAT IP: The IP address or range of IP addresses used for source NAT by the load balancer.
DNS Forwarder IP: The listener IP for DNS queries from clients, also used as the source IP to forward DNS queries to the upstream DNS server.
Prerequisites
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.
The default mode is active-active. In the active-active mode, traffic is load balanced across all
members. In active-standby mode, all traffic is processed by an elected active member. If the
active member fails, a new member is elected to be active.
Preemptive: If the preferred node fails and recovers, it will preempt its peer and become the active node. The peer will change its state to standby.
Non-preemptive: If the preferred node fails and recovers, it will check if its peer is the active node. If so, the preferred node will not preempt its peer and will be the standby node.
This is the subnet used for communication between components within this gateway. The
default is 169.254.0.0/28.
These subnets are used for communication between this gateway and all tier-1 gateways
that are linked to it. After you create this gateway and link a tier-1 gateway to it, you will
see the actual IP address assigned to the link on the tier-0 gateway side and on the tier-1
gateway side. The address is displayed in Additional Settings > Router Links on the
tier-0 gateway page and the tier-1 gateway page. The default is 100.64.0.0/16.
9 Click Route Distinguisher for VRF Gateways to configure a route distinguisher admin
address.
This is only needed for EVPN and for the automatic route distinguisher use case.
11 Click Save.
12 For IPv6, under Additional Settings, you can select or create an ND Profile and a DAD
Profile.
These profiles are used to configure Stateless Address Autoconfiguration (SLAAC) and
Duplicate Address Detection (DAD) for IPv6 addresses.
You can click the menu icon (3 dots) to create a VNI pool if you have not previously created one.
b In the EVPN Tunnel Endpoint field click Set to add EVPN local tunnel endpoints.
For the tunnel endpoint, select an Edge node and specify an IP address.
Note Ensure that the uplink interface has been configured on the NSX Edge node that
you select for the EVPN tunnel endpoint.
- Tier-0 subnets: Static Routes, NAT IP, IPSec Local IP, DNS Forwarder IP, EVPN TEP IP, Connected Interfaces & Segments.
Under Connected Interfaces & Segments, you can select one or more of the following: Service Interface Subnet, External Interface Subnet, Loopback Interface Subnet, Connected Segment.
- Advertised tier-1 subnets: DNS Forwarder IP, Static Routes, LB VIP, NAT IP, LB SNAT IP, IPSec Local Endpoint, Connected Interfaces & Segments.
Under Connected Interfaces & Segments, you can select Service Interface Subnet and/or Connected Segment.
b Enter a name.
c Select a type.
If the HA mode is active-standby, the choices are External, Service, and Loopback. If the
HA mode is active-active, the choices are External and Loopback.
e Select a segment.
h (Optional) If the interface type is External, you can enable multicast by setting PIM
(Protocol Independent Multicast) to Enabled.
j (Optional) If the interface type is External, for URPF Mode, you can select Strict or None.
k After you create an interface, you can download the ARP table by clicking the menu icon
(three dots) for the interface and selecting Download ARP table.
With HA VIP configured, the tier-0 gateway is operational even if one uplink is down. The
physical router interacts with the HA VIP only.
a Click Add HA VIP Configuration.
The HA VIP subnet must be the same as the subnet of the interface that it is bound to.
c Select 2 interfaces.
17 Click Routing to add IP prefix lists, community lists, static routes, and route maps.
20 (Optional) To download the routing table or forwarding table, click the menu icon (three dots)
and select a download option. Enter values for Transport Node, Network and Source as
required, and save the .CSV file.
What to do next
After the tier-0 gateway is added, you can optionally enable dynamic IP management on the
gateway by selecting either a DHCP server profile or a DHCP relay profile. For more information,
see Attach a DHCP Profile to a Tier-0 or Tier-1 Gateway.
For example, you can add the IP address 192.168.100.3/27 to the IP prefix list and deny the route
from being redistributed to the northbound router. You can also append an IP address with less-
than-or-equal-to (le) and greater-than-or-equal-to (ge) modifiers to grant or limit route
redistribution. For example, 192.168.100.3/27 ge 24 le 30 modifiers match subnet masks greater
than or equal to 24-bits and less than or equal to 30-bits in length.
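The ge/le matching semantics can be illustrated with a short Python sketch. This is an informal model for clarity only, not NSX-T code, and the sample prefix list entry values are hypothetical.

```python
import ipaddress

def prefix_matches(route_cidr, entry_cidr, ge=None, le=None):
    """Return True if route_cidr falls under entry_cidr and its mask length
    is within the optional ge/le bounds."""
    route = ipaddress.ip_network(route_cidr)
    entry = ipaddress.ip_network(entry_cidr)
    if not route.subnet_of(entry):
        return False
    low = ge if ge is not None else entry.prefixlen
    high = le if le is not None else route.max_prefixlen
    return low <= route.prefixlen <= high

# Hypothetical prefix list entry: 192.168.100.0/24 ge 26 le 30
print(prefix_matches("192.168.100.64/28", "192.168.100.0/24", ge=26, le=30))  # True
print(prefix_matches("192.168.100.0/25", "192.168.100.0/24", ge=26, le=30))   # False: /25 is shorter than ge 26
```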
Note The default action for a route is Deny. When you create a prefix list to deny or permit
specific routes, be sure to create an IP prefix with no specific network address (select Any from
the dropdown list) and the Permit action if you want to permit all other routes.
Prerequisites
Verify that you have a tier-0 gateway configured. See Create a Tier-0 Logical Router in Manager
Mode.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.
3 To edit a tier-0 gateway, click the menu icon (three dots) and select Edit.
4 Click Routing.
d Click Add.
11 Click Save.
Community lists are user-defined lists of community attribute values. These lists can be used for
matching or manipulating the communities attribute in BGP update messages.
Both the BGP Communities attribute (RFC 1997) and the BGP Large Communities attribute (RFC
8092) are supported. The BGP Communities attribute is a 32-bit value split into two 16-bit values.
The BGP Large Communities attribute has 3 components, each 4 octets in length.
In route maps, you can match on or set the BGP Communities or Large Communities attribute. Using this feature, network operators can implement network policy based on the BGP communities attribute.
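As a rough illustration of those encodings (not NSX-T code), a regular aa:nn community is one 32-bit value split into two 16-bit halves, and a large community is three 32-bit fields:

```python
def encode_regular_community(text):
    """Encode an aa:nn community (for example '300:500') as its 32-bit value."""
    aa, nn = (int(part) for part in text.split(":"))
    return (aa << 16) | nn

def decode_regular_community(value):
    """Split a 32-bit community value back into the aa:nn form."""
    return f"{value >> 16}:{value & 0xFFFF}"

def parse_large_community(text):
    """A large community (RFC 8092) is three 32-bit fields, written aa:bb:cc."""
    return tuple(int(part) for part in text.split(":"))

print(hex(encode_regular_community("300:500")))    # 0x12c01f4
print(decode_regular_community(0x012C01F4))        # 300:500
print(parse_large_community("11:22:33"))           # (11, 22, 33)
```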
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.
3 To edit a tier-0 gateway, click the menu icon (three dots) and select Edit.
4 Click Routing.
8 Specify a list of communities. For a regular community, use the aa:nn format, for example, 300:500. For a large community, use the format aa:bb:cc, for example, 11:22:33. Note that the list cannot have both regular communities and large communities. It must contain only regular communities, or only large communities.
In addition, you can select one or more of the following regular communities. Note that they cannot be added if the list contains large communities.
9 Click Save.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.
3 To edit a tier-0 gateway, click the menu icon (three dots) and select Edit.
4 Click Routing.
7 Enter a name and network address in CIDR format. Static routes based on IPv6 are
supported. IPv6 prefixes can only have an IPv6 next hop.
12 Select a scope from the drop-down list. A scope can be an interface, a gateway, an IPSec
session, or a segment.
13 Click Add.
What to do next
Check that the static route is configured properly. See Verify the Static Route on a Tier-0 Router.
Route maps can be referenced at the BGP neighbor level and for route redistribution.
Prerequisites
- Verify that an IP prefix list or a community list is configured. See Create an IP Prefix List in Manager Mode or Create a Community List.
- For details about using regular expressions to define route-map match criteria for community lists, see Using Regular Expressions to Match Community Lists When Adding Route Maps.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.
3 To edit a tier-0 gateway, click the menu icon (three dots) and select Edit.
4 Click Routing.
9 For each criterion, select IP Prefix or Community List and click Set to specify one or more
match expressions.
a If you selected Community List, specify match expressions that define how to match
members of community lists. For each community list, the following match options are
available:
- MATCH ANY - perform the set action in the route map if any of the communities in the community list is matched.
- MATCH ALL - perform the set action in the route map if all the communities in the community list are matched, regardless of the order.
- MATCH EXACT - perform the set action in the route map if all the communities in the community list are matched in the exact same order.
- MATCH COMMUNITY REGEXP - perform the set action in the route map if all the regular communities associated with the NLRI match the regular expression.
- MATCH LARGE COMMUNITY REGEXP - perform the set action in the route map if all the large communities associated with the NLRI match the regular expression.
You should use the match criterion MATCH_COMMUNITY_REGEX to match routes against
standard communities, and use the match criterion MATCH_LARGE_COMMUNITY_REGEX
to match routes against large communities. If you want to permit routes containing either
the standard community or large community value, you must create two match criteria. If
the match expressions are given in the same match criterion, only the routes containing
both the standard and large communities will be permitted.
For any match criterion, the match expressions are applied in an AND operation, which
means that all match expressions must be satisfied for a match to occur. If there are
multiple match criteria, they are applied in an OR operation, which means that a match will
occur if any one match criterion is satisfied.
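That evaluation order can be modeled in a few lines of Python (an informal sketch only, with hypothetical match expressions): expressions inside one criterion combine with AND, and separate criteria combine with OR.

```python
def criterion_matches(route, expressions):
    # All match expressions within one criterion must be satisfied (AND).
    return all(expr(route) for expr in expressions)

def route_map_entry_matches(route, criteria):
    # The route matches the entry if any single criterion is satisfied (OR).
    return any(criterion_matches(route, exprs) for exprs in criteria)

# Hypothetical example: one criterion requires two communities,
# a second criterion requires a prefix match.
route = {"communities": {"300:500"}, "prefix": "10.1.0.0/16"}
criteria = [
    [lambda r: "300:500" in r["communities"], lambda r: "65000:1" in r["communities"]],
    [lambda r: r["prefix"].startswith("10.")],
]
print(route_map_entry_matches(route, criteria))  # True: the second criterion matches
```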
AS-path Prepend: Prepend a path with one or more AS (autonomous system) numbers to make the path longer and therefore less preferred.
Community: Specify a list of communities. For a regular community, use the aa:nn format, for example, 300:500. For a large community, use the aa:bb:cc format, for example, 11:22:33. Or use the drop-down menu to select one of the following:
- NO_EXPORT_SUBCONFED - Do not advertise to EBGP peers.
- NO_ADVERTISE - Do not advertise to any peer.
- NO_EXPORT - Do not advertise outside the BGP confederation.
Local Preference: Use this value to choose the outbound external BGP path. The path with the highest value is preferred.
You can permit or deny IP addresses matched by the IP prefix lists or community lists from
being advertised.
12 Click Save.
_ (underscore): This character has special meanings in BGP regular expressions. It matches a space, a comma, the AS set delimiters { and }, and the AS confederation delimiters ( and ). It also matches the beginning of the line and the end of the line. Therefore this character can be used for an AS value boundary match. This character technically evaluates to (^|[,{}()]|$).
Here are some examples of using regular expressions in route maps:
^101: Matches routes having a community attribute that starts with 101.
^[0-9]+: Matches routes having a community attribute that starts with a number between 0-9 and has one or more instances of such a number.
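For instance, the "_" shorthand can be expanded exactly as described above and tested with Python's re module. This is only an illustration of the expansion; it is not how NSX-T evaluates the expression internally.

```python
import re

# '_' in a BGP regular expression evaluates to (^|[,{}()]|$).
UNDERSCORE = r"(^|[,{}()]|$)"

def bgp_regex_to_python(expr):
    """Expand the BGP '_' shorthand so a pattern can be tested with re."""
    return expr.replace("_", UNDERSCORE)

communities = "101:200,300:500"
print(bool(re.search(bgp_regex_to_python("^101"), communities)))       # True
print(bool(re.search(bgp_regex_to_python("_300:500_"), communities)))  # True
print(bool(re.search(bgp_regex_to_python("_101:2_"), communities)))    # False: 101:2 is not a whole community
```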
Configure BGP
To enable access between your VMs and the outside world, you can configure an external or
internal BGP (eBGP or iBGP) connection between a tier-0 gateway and a router in your physical
infrastructure.
When configuring BGP, you must configure a local Autonomous System (AS) number for the
tier-0 gateway. You must also configure the remote AS number. EBGP neighbors must be
directly connected and in the same subnet as the tier-0 uplink. If they are not in the same subnet,
BGP multi-hop should be used.
BGPv6 is supported for single hop and multihop. A BGPv6 neighbor only supports IPv6
addresses. Redistribution, prefix list, and route maps are supported with IPv6 prefixes.
A tier-0 gateway in active-active mode supports inter-SR (service router) iBGP. If gateway #1 is
unable to communicate with a northbound physical router, traffic is re-routed to gateway #2 in
the active-active cluster. If gateway #2 is able to communicate with the physical router, traffic
between gateway #1 and the physical router will not be affected.
The implementation of ECMP on NSX Edge is based on the 5-tuple of the protocol number,
source and destination address, and source and destination port.
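A toy illustration of 5-tuple hashing (not the actual NSX Edge datapath algorithm) shows the effect: packets of one flow always pick the same path, while different flows spread across the available paths.

```python
import hashlib

def ecmp_path(proto, src_ip, dst_ip, src_port, dst_port, num_paths):
    """Toy 5-tuple hash that picks one of num_paths equal-cost next hops."""
    key = f"{proto}|{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_paths

# Packets of the same flow always map to the same next hop...
print(ecmp_path(6, "10.1.1.10", "203.0.113.5", 40000, 443, num_paths=2))
# ...while a different flow may hash to a different next hop.
print(ecmp_path(6, "10.1.1.11", "203.0.113.5", 40001, 443, num_paths=2))
```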
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.
3 To edit a tier-0 gateway, click the menu icon (three dots) and select Edit.
4 Click BGP.
In active-active mode, the default ASN value, 65000, is already filled in. In active-standby
mode, there is no default ASN value.
c If this gateway is in active-active mode, click the Inter SR iBGP toggle to enable or disable
inter-SR iBGP. It is enabled by default.
e Click the Multipath Relax toggle button to enable or disable load-sharing across multiple
paths that differ only in AS-path attribute values but have the same AS-path length.
f In the Graceful Restart field, select Disable, Helper Only, or Graceful Restart and Helper.
You can optionally change the Graceful Restart Timer and Graceful Restart Stale Timer.
By default, the Graceful Restart mode is set to Helper Only. Helper mode is useful for
eliminating and/or reducing the disruption of traffic associated with routes learned from a
neighbor capable of Graceful Restart. The neighbor must be able to preserve its
forwarding table while it undergoes a restart.
The Graceful Restart capability is not recommended to be enabled on the tier-0 gateways
because BGP peerings from all the gateways are always active. On a failover, the Graceful
Restart capability will increase the time a remote neighbor takes to select an alternate
tier-0 gateway. This will delay BFD-based convergence.
6 Click Save.
You must save the global BGP configuration before you can configure BGP neighbors.
For iBGP, enter the same AS number as the one in step 4a. For eBGP, enter the AS
number of the physical router.
d Under Route Filter, click Set to add one or more route filters.
For IP Address Family, you can select IPv4, IPv6, or L2VPN EVPN. You can have at most
two route filters, with one address family being IPv4 and the other being L2VPN EVPN.
No other combinations (IPv4 and IPv6, IPv6 and L2VPN EVPN) are allowed.
For Maximum Routes, you can specify a value between 1 and 1,000,000. This is the
maximum number of BGP routes that the gateway will accept from the BGP neighbor.
Note: If you configure a BGP neighbor with one address family, for example, L2VPN
EVPN, and then later add a second address family, the established BGP connection will
be reset.
This is disabled by default. With this feature enabled, BGP neighbors can receive routes
with the same AS, for example, when you have two locations interconnected using the
same service provider. This feature applies to all the address families and cannot be
applied to specific address families.
f In the Source Addresses field, you can select a source address to establish a peering
session with a neighbor using this specific source address. If you do not select any, the
gateway will automatically choose one.
h In the Graceful Restart field, you can optionally select Disable, Helper Only, or Graceful
Restart and Helper.
None selected: The Graceful Restart for this neighbor will follow the Tier-0 gateway BGP configuration.
Disable:
- If the tier-0 gateway BGP is configured with Disable, Graceful Restart will be disabled for this neighbor.
- If the tier-0 gateway BGP is configured with Helper Only, Graceful Restart will be disabled for this neighbor.
- If the tier-0 gateway BGP is configured with Graceful Restart and Helper, Graceful Restart will be disabled for this neighbor.
Helper Only:
- If the tier-0 gateway BGP is configured with Disable, Graceful Restart will be configured as Helper Only for this neighbor.
- If the tier-0 gateway BGP is configured with Helper Only, Graceful Restart will be configured as Helper Only for this neighbor.
- If the tier-0 gateway BGP is configured with Graceful Restart and Helper, Graceful Restart will be configured as Helper Only for this neighbor.
Graceful Restart and Helper:
- If the tier-0 gateway BGP is configured with Disable, Graceful Restart will be configured as Graceful Restart and Helper for this neighbor.
- If the tier-0 gateway BGP is configured with Helper Only, Graceful Restart will be configured as Graceful Restart and Helper for this neighbor.
- If the tier-0 gateway BGP is configured with Graceful Restart and Helper, Graceful Restart will be configured as Graceful Restart and Helper for this neighbor.
The unit is milliseconds. For an Edge node running in a VM, the minimum value is 500. For
a bare-metal Edge node, the minimum value is 50.
l Enter a value, in seconds, for Hold Down Time and Keep Alive Time.
The Keep Alive Time specifies how frequently KEEPALIVE messages will be sent. The
value can be between 0 and 65535. Zero means no KEEPALIVE messages will be sent.
The Hold Down Time specifies how long the gateway will wait for a KEEPALIVE message
from a neighbor before considering the neighbor dead. The value can be 0 or between 3
and 65535. Zero means no KEEPALIVE messages are sent between the BGP neighbors
and the neighbor will never be considered unreachable.
Hold Down Time must be at least three times the value of the Keep Alive Time (a small validation sketch follows this procedure).
m Enter a password.
8 Click Save.
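The Keep Alive Time and Hold Down Time constraints from the neighbor settings above can be captured in a small validation helper. This sketch is illustrative only and is not part of NSX-T.

```python
def validate_bgp_timers(keep_alive, hold_down):
    """Check the Keep Alive Time / Hold Down Time rules described above."""
    if not 0 <= keep_alive <= 65535:
        raise ValueError("Keep Alive Time must be between 0 and 65535")
    if hold_down != 0 and not 3 <= hold_down <= 65535:
        raise ValueError("Hold Down Time must be 0 or between 3 and 65535")
    if keep_alive == 0 or hold_down == 0:
        # Zero disables KEEPALIVE messages / dead-neighbor detection.
        return
    if hold_down < 3 * keep_alive:
        raise ValueError("Hold Down Time must be at least 3x Keep Alive Time")

validate_bgp_timers(keep_alive=60, hold_down=180)       # passes
try:
    validate_bgp_timers(keep_alive=60, hold_down=120)   # violates the 3x rule
except ValueError as err:
    print(err)
```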
Configure BFD
BFD (Bidirectional Forwarding Detection) is a protocol that can detect forwarding path failures.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.
3 To edit a tier-0 gateway, click the menu icon (three dots) and select Edit.
8 Click Save.
Configure Multicast
IP multicast routing enables a host (source) to send a single copy of data to a single multicast
address. Data is then distributed to a group of recipients using a special form of IP address called
the IP multicast group address. You can configure multicast on a tier-0 gateway for an IPv4
network to enable multicast routing.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.
3 To edit a tier-0 gateway, click the menu icon (three dots) and select Edit.
5 In the Replication Multicast Range field, enter an address range in CIDR format.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.
4 Edit the Global Gateway Configuration and select IPv4 and IPv6 for the L3 Forwarding Mode.
5 Click Save.
7 Edit a tier-0 gateway by clicking the menu icon (three dots) and select Edit.
8 Go to Additional Settings.
Prerequisites
Navigate to Networking > Networking Settings, click the Global Gateway Config tab and select
IPv4 and IPv6 as the L3 Forwarding Mode
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.
3 To edit a tier-0 gateway, click the menu icon (three dots) and select Edit.
5 To create an ND Profile (SLAAC profile), click the menu icon (three dots) and select Create
New.
b Select a mode:
- SLAAC with DNS Through RA - The address and DNS information is generated with the router advertisement message.
- SLAAC with DNS Through DHCP - The address is generated with the router advertisement message and the DNS information is generated by the DHCP server.
- DHCP with Address and DNS through DHCP - The address and DNS information is generated by the DHCP server.
- SLAAC with Address and DNS through DHCP - The address and DNS information is generated by the DHCP server. This option is only supported by NSX Edge and not by KVM hosts or ESXi hosts.
c Enter the reachable time and the retransmission interval for the router advertisement
message.
d Enter the domain name and specify a lifetime for the domain name. Enter these values
only for the SLAAC with DNS Through RA mode.
e Enter a DNS server and specify a lifetime for the DNS server. Enter these values only for
the SLAAC with DNS Through RA mode.
6 To create a DAD Profile, click the menu icon (three dots) and select Create New.
b Select a mode:
c Enter the Wait Time (seconds) that specifies the interval of time between the NS packets.
d Enter the NS Retries Count that specifies the number of NS packets to detect duplicate
addresses at intervals defined in Wait Time (seconds)
Changing the HA mode is allowed only if there is no more than one service router running on the
gateway. This means that you must not have uplinks on more than one Edge transport node.
However, you can have more than one uplink on the same Edge transport node.
After you set the HA mode from active-active to active-standby, you can set the failover mode.
The default is non-preemptive.
HA mode change is not allowed if the following services or features are configured.
- DNS Forwarder
- IPSec VPN
- L2 VPN
- HA VIP
- Stateful Firewall
- Service Insertion
- VRF
Prerequisites
For VRF gateways on EVPN, ensure that you configure the EVPN settings for the tier-0 gateway
that you want to link to. These settings are only needed to support EVPN:
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.
If the connected tier-0 gateway has RD Admin Address configured, the Route
Distinguisher is automatically populated. Enter a new value if you want to override the
assigned Route Distinguisher.
The VNI must be unique and belong to the VNI pool configured on the linked tier-0
gateway.
For each route target, select a mode, which can be Auto or Manual. Specify one or more
Import Route Targets. Specify one or more Export Route Targets.
7 Click Save and then Yes to continue configuring the VRF gateway.
8 For VRF-lite, configure one or more external interfaces on the VRF gateway with an Access
VLAN ID and connect to a VLAN Segment. For EVPN, configure one or more service
interfaces on the VRF gateway with an Access VLAN ID and connect to an Overlay Segment.
See Add a Segment. VRF interfaces require existing external interfaces on the linked tier-0
gateway to be mapped to each edge node. The Segment connected to the Access interface
needs to have VLAN IDs configured in range or list format.
9 Click BGP to set BGP, ECMP, Route Aggregation, and BGP Neighbors. You can add a route filter with IPv4/IPv6 address families. See Add a Tier-0 Gateway.
10 Click Routing and complete routing configuration. For supporting route leaking between the
VRF gateway and linked tier-0 gateway/peer VRF gateway, you can add a static route and
select Next Hop scope as the linked tier-0 gateway, or as one of the existing peer VRF
gateways. See Add a Tier-0 Gateway.
Configuring EVPN
EVPN (Ethernet VPN) is a standards-based BGP control plane that provides the ability to extend
Layer 2 and Layer 3 connectivity between different data centers.
- Multi-Protocol BGP (MP-BGP) EVPN between NSX Edge and physical routers.
- NSX-T generates a unique router MAC for every NSX Edge VTEP in the EVPN domain. However, there may be other nodes in the network that are not managed by NSX-T, for example, physical routers. You must make sure that the router MACs are unique across all the VTEPs in the EVPN domain.
- The EVPN feature supports NSX Edge being either the ingress or the egress of the EVPN virtual tunnel endpoint. If an NSX Edge node receives EVPN type-5 prefixes from its eBGP peer that need to be redistributed to another eBGP peer, the routes will be re-advertised without any change to the next hop.
- In multi-path network topologies, it is recommended that ECMP is enabled in the BGP EVPN control plane as well, so that all the possible paths can be advertised. This avoids any potential traffic blackhole due to asymmetric data path forwarding.
Configuration Prerequisites
- Virtual Router (vRouter) deployed on VMware ESXi hypervisor.
Configuration Steps
- Create a VNI pool. See Add a VNI Pool.
- Configure an overlay Segment and specify one or more VLAN ranges. See Add a Segment.
- Under EVPN Settings, select a VNI pool and create EVPN Tunnel Endpoints.
- Under Route Distinguisher for VRF Gateways, configure RD Admin Address for the automatic route distinguisher use case.
- Configure one or more external interfaces on the tier-0 gateway and connect to the VLAN Segment.
- Configure BGP neighbors with the peer physical router. Add route filter with IPv4 and L2VPN EVPN Address Families.
- Configure Route Re-Distribution. Select EVPN TEP IP under Tier-0 Subnets along with other sources.
- Add service interface on VRF for each edge node and connect to the Overlay Segment. Specify an Access VLAN ID for each service interface.
- Configure per VRF BGP neighbors with the peer vRouter. The routes learned over the VRF BGP sessions are redistributed by the NSX Edge to the peer physical router over the MP-BGP EVPN session.
3 Tier-1 Gateway
A tier-1 gateway has downlink connections to segments and uplink connections to tier-0
gateways.
You can configure route advertisements and static routes on a tier-1 gateway. Recursive static
routes are supported.
If you are adding a tier-1 gateway from Global Manager in Federation, see Add a Tier-1 Gateway
from Global Manager.
Tier-0 and tier-1 gateways support the following addressing configurations for all interfaces
(uplinks, service ports and downlinks) in both single tier and multi-tiered topologies:
- IPv4 only
- IPv6 only
To use IPv6 or dual stack addressing, enable IPv4 and IPv6 as the L3 Forwarding Mode in
Networking > Networking Settings > Global Networking Config .
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.
5 (Optional) Select a tier-0 gateway to connect to this tier-1 gateway to create a multi-tier
topology.
6 (Optional) Select an NSX Edge cluster if you want this tier-1 gateway to host stateful services
such as NAT, load balancer, or firewall.
If an NSX Edge cluster is selected, a service router will always be created (even if you do not
configure stateful services), affecting the north/south traffic pattern.
7 (Optional) In the Edges field, click Set to select an NSX Edge node.
8 If you selected an NSX Edge cluster, select a failover mode or accept the default.
Preemptive: If the preferred NSX Edge node fails and recovers, it will preempt its peer and become the active node. The peer will change its state to standby. This is the default option.
Non-preemptive: If the preferred NSX Edge node fails and recovers, it will check if its peer is the active node. If so, the preferred node will not preempt its peer and will be the standby node.
9 If you plan to configure a load balancer on this gateway, select an Edges Pool Allocation Size
setting according to the size of the load balancer.
The options are Routing, LB Small, LB Medium, LB Large, and LB XLarge. The default is
Routing and is suitable if no load balancer will be configured on this gateway. This parameter
allows the NSX Manager to place the tier-1 gateway on the Edge nodes in a more intelligent
way. With this setting the number of load balancing and routing functions on each node is
taken into consideration. Note that you cannot change this setting after the gateway is
created.
10 (Optional) Click the Enable StandBy Relocation toggle to enable or disable standby
relocation.
Standby relocation means that if the Edge node where the active or standby logical router is
running fails, a new standby logical router is created on another Edge node to maintain high
availability. If the Edge node that fails is running the active logical router, the original standby
logical router becomes the active logical router and a new standby logical router is created. If
the Edge node that fails is running the standby logical router, the new standby logical router
replaces it.
12 Click Save.
a In the Set Route Advertisement Rules field, click Set to add route advertisement rules.
a For IPv6, you can select or create an ND Profile and a DAD Profile.
These profiles are used to configure Stateless Address Autoconfiguration (SLAAC) and
Duplicate Address Detection (DAD) for IPv6 addresses.
b Select an Ingress QoS Profile and an Egress QoS Profile for traffic limitations.
These profiles are used to set information rate and burst size for permitted traffic. See
Add a Gateway QoS Profile for more information on creating QoS profiles.
If this gateway is linked to a tier-0 gateway, the Router Links field shows the link addresses.
15 (Optional) Click Service Interfaces and Set to configure connections to segments. Required in
some topologies such as VLAN-backed segments or one-arm load balancing.
c Select a segment.
h Click Save.
b Enter a name and a network address in the CIDR or IPv6 CIDR format.
d Click Save.
What to do next
After the tier-1 gateway is added, you can optionally enable dynamic IP management on the
gateway by selecting either a DHCP server profile or a DHCP relay profile. For more information,
see Attach a DHCP Profile to a Tier-0 or Tier-1 Gateway.
4 Segments
In NSX-T Data Center, segments are virtual layer 2 domains. A segment was earlier called a
logical switch.
- VLAN-backed segments
- Overlay-backed segments
In an overlay-backed segment, traffic between two VMs on different hosts but attached to the same overlay segment has its layer 2 traffic carried by a tunnel between the hosts. NSX-T
Data Center instantiates and maintains this IP tunnel without the need for any segment-specific
configuration in the physical infrastructure. As a result, the virtual network infrastructure is
decoupled from the physical network infrastructure. That is, you can create segments
dynamically without any configuration of the physical network infrastructure.
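As a hedged illustration of that decoupling, an overlay segment can be defined purely through the Policy API. The sketch below uses Python requests; the manager address, credentials, segment name, transport zone path, and subnet are placeholder assumptions to verify against your environment and the NSX-T Data Center API Guide.

```python
import requests

NSX_MANAGER = "nsx-manager.example.com"   # placeholder
AUTH = ("admin", "changeme")              # placeholder credentials

segment = {
    "display_name": "app-segment-01",
    # Policy path of an existing overlay transport zone (placeholder ID).
    "transport_zone_path": "/infra/sites/default/enforcement-points/default/"
                           "transport-zones/<overlay-tz-id>",
    "subnets": [{"gateway_address": "172.16.10.1/24"}],
}

url = f"https://{NSX_MANAGER}/policy/api/v1/infra/segments/app-segment-01"
resp = requests.patch(url, json=segment, auth=AUTH, verify=False)
resp.raise_for_status()
print("Segment created or updated:", resp.status_code)
```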
The default number of MAC addresses learned on an overlay-backed segment is 2048. The
default MAC limit per segment can be changed through the API field remote_overlay_mac_limit in
MacLearningSpec. For more information see the MacSwitchingProfile in the NSX-T Data Center API
Guide.
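A hedged sketch of changing that limit through the Manager API might look like the following; the endpoint path, profile ID, and payload field names are assumptions to confirm against the NSX-T Data Center API Guide before use.

```python
import requests

NSX_MANAGER = "nsx-manager.example.com"    # placeholder
AUTH = ("admin", "changeme")               # placeholder credentials
PROFILE_ID = "<mac-switching-profile-id>"  # placeholder

url = f"https://{NSX_MANAGER}/api/v1/switching-profiles/{PROFILE_ID}"

# Read the existing MAC switching profile, adjust the MacLearningSpec field,
# and write the whole object back (Manager API updates are full PUTs).
profile = requests.get(url, auth=AUTH, verify=False).json()
profile.setdefault("mac_learning", {})["remote_overlay_mac_limit"] = 4096
resp = requests.put(url, json=profile, auth=AUTH, verify=False)
resp.raise_for_status()
```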
- Segment Profiles
- Add a Segment
- Layer 2 Bridging
Segment Profiles
Segment profiles include Layer 2 networking configuration details for segments and segment
ports. NSX Manager supports several types of segment profiles.
- IP Discovery
- SpoofGuard
- Segment Security
- MAC Management
Note You cannot edit or delete the default segment profiles. If you require alternate settings
from what is in the default segment profile you can create a custom segment profile. By default
all custom segment profiles except the segment security profile will inherit the settings of the
appropriate default segment profile. For example, a custom IP discovery segment profile by
default will have the same settings as the default IP discovery segment profile.
Each default or custom segment profile has a unique identifier. You use this identifier to associate
the segment profile to a segment or a segment port.
A segment or segment port can be associated with only one segment profile of each type. You
cannot have, for example, two QoS segment profiles associated with a segment or segment port.
If you do not associate a segment profile when you create a segment, then the NSX Manager
associates a corresponding default system-defined segment profile. The children segment ports
inherit the default system-defined segment profile from the parent segment.
When you create or update a segment or segment port you can choose to associate either a
default or a custom segment profile. When the segment profile is associated or disassociated
from a segment the segment profile for the children segment ports is applied based on the
following criteria.
- If the parent segment has a profile associated with it, the child segment port inherits the segment profile from the parent.
- If the parent segment does not have a segment profile associated with it, a default segment profile is assigned to the segment and the segment port inherits that default segment profile.
- If you explicitly associate a custom profile with a segment port, then this custom profile overrides the existing segment profile.
Note If you have associated a custom segment profile with a segment, but want to retain the default segment profile for one of the child segment ports, then you must make a copy of the default segment profile and associate it with the specific segment port.
You cannot delete a custom segment profile if it is associated to a segment or a segment port.
You can find out whether any segments and segment ports are associated with the custom
segment profile by going to the Assigned To section of the Summary view and clicking on the
listed segments and segment ports.
For this release, shaping and traffic marking, namely CoS and DSCP, are supported. The Layer 2 Class of Service (CoS) allows you to specify priority for data packets when traffic is buffered in the segment due to congestion. The Layer 3 Differentiated Services Code Point (DSCP) detects packets based on their DSCP values. CoS is always applied to the data packet irrespective of the trusted mode.
NSX-T Data Center either trusts the DSCP setting applied by a virtual machine, or modifies and sets the DSCP value at the segment level. In each case, the DSCP value is propagated to the outer IP header of encapsulated frames. This enables the external physical network to prioritize the traffic based on the DSCP setting on the external header. When DSCP is in the trusted mode, the DSCP value is copied from the inner header. When in the untrusted mode, the DSCP value is not preserved for the inner header.
Note DSCP settings work only on tunneled traffic. These settings do not apply to traffic inside
the same hypervisor.
You can use the QoS switching profile to configure the average ingress and egress bandwidth
values to set the transmit limit rate. The peak bandwidth rate is used to support the burst traffic that a segment is allowed, in order to prevent congestion on the northbound network links. These settings do
not guarantee the bandwidth but help limit the use of network bandwidth. The actual bandwidth
you will observe is determined by the link speed of the port or the values in the switching profile,
whichever is lower.
The QoS switching profile settings are applied to the segment and inherited by the child segment
port.
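The interaction of average bandwidth, peak bandwidth, and burst size can be pictured as a simple token-bucket model. The sketch below is a conceptual illustration only, not the NSX datapath implementation.

```python
def allowed_bytes(interval_s, avg_mbps, peak_mbps, burst_bytes):
    """Rough token-bucket view: the sustained rate is capped at avg_mbps, and
    a burst of up to burst_bytes may be sent, no faster than peak_mbps."""
    sustained = avg_mbps * 1_000_000 / 8 * interval_s
    burst = min(burst_bytes, peak_mbps * 1_000_000 / 8 * interval_s)
    return sustained + burst

# Example: 30 Mbps average, 100 Mbps peak, a small burst allowance in bytes.
print(allowed_bytes(interval_s=1.0, avg_mbps=30, peak_mbps=100, burst_bytes=20))
```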
Prerequisites
- Familiarize yourself with the QoS switching profile concept. See Understanding QoS Switching Profile.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.
Mode: Select either a Trusted or Untrusted option from the Mode drop-down menu.
When you select the Trusted mode, the inner header DSCP value is applied to the outer IP header for IP/IPv6 traffic. For non-IP/IPv6 traffic, the outer IP header takes the default value. Trusted mode is supported on an overlay-based logical port. The default value is 0.
Untrusted mode is supported on overlay-based and VLAN-based logical ports. For an overlay-based logical port, the DSCP value of the outbound IP header is set to the configured value irrespective of the inner packet type. For a VLAN-based logical port, the DSCP value of the IP/IPv6 packet is set to the configured value. The DSCP value range for untrusted mode is 0 to 63.
Note DSCP settings work only on tunneled traffic. These settings do not apply to traffic inside the same hypervisor.
Ingress: Set custom values for the outbound network traffic from the VM to the logical network.
You can use the average bandwidth to reduce network congestion. The peak bandwidth rate is used to support burst traffic, and the burst duration is set in the burst size setting. You cannot guarantee the bandwidth. However, you can use the setting to limit network bandwidth. The default value of 0 disables the ingress traffic.
For example, when you set the average bandwidth for the logical switch to 30 Mbps, the policy limits the bandwidth. You can cap the burst traffic at 100 Mbps with a burst size of 20 bytes.
Ingress Broadcast: Set custom values for the outbound network traffic from the VM to the logical network based on broadcast.
The default value of 0 disables the ingress broadcast traffic.
For example, when you set the average bandwidth for a logical switch to 50 Kbps, the policy limits the bandwidth. You can cap the burst traffic to 400 Kbps with a burst size of 60 bytes.
Egress: Set custom values for the inbound network traffic from the logical network to the VM.
The default value of 0 disables the egress traffic.
If the ingress, ingress broadcast, and egress options are not configured, the default values
are used as protocol buffers.
5 Click Save.
The discovered MAC and IP addresses are used to achieve ARP/ND suppression, which
minimizes traffic between VMs connected to the same segment. The number of IPs in the
ARP/ND suppression cache for any given port is determined by the settings in the port's IP
Discovery profile. The relevant settings are ARP Binding Limit, ND Snooping Limit, Duplicate IP
Detection, ARP ND Binding Limit Timeout, and Trust on First Use (TOFU).
The discovered MAC and IP addresses are also used by the SpoofGuard and distributed firewall
(DFW) components. DFW uses the address bindings to determine the IP address of objects in
firewall rules.
DHCP/DHCPv6 snooping inspects the DHCP/DHCPv6 packets exchanged between the DHCP/
DHCPv6 client and server to learn the IP and MAC addresses.
ARP snooping inspects the outgoing ARP and GARP (gratuitous ARP) packets of a VM to learn
the IP and MAC addresses.
VM Tools is software that runs on an ESXi-hosted VM and can provide the VM's configuration
information including MAC and IP or IPv6 addresses. This IP discovery method is available for
VMs running on ESXi hosts only.
ND snooping is the IPv6 equivalent of ARP snooping. It inspects neighbor solicitation (NS) and
neighbor advertisement (NA) messages to learn the IP and MAC addresses.
Duplicate address detection checks whether a newly discovered IP address is already present on
the realized binding list for a different port. This check is performed for ports on the same
segment. If a duplicate address is detected, the newly discovered address is added to the
discovered list, but is not added to the realized binding list. All duplicate IPs have an associated
discovery timestamp. If the IP that is on the realized binding list is removed, either by adding it to
the ignore binding list or by disabling snooping, the duplicate IP with the oldest timestamp is
moved to the realized binding list. The duplicate address information is available through an API
call.
By default, the discovery methods ARP snooping and ND snooping operate in a mode called
trust on first use (TOFU). In TOFU mode, when an address is discovered and added to the
realized bindings list, that binding remains in the realized list forever. TOFU applies to the first 'n'
unique <IP, MAC, VLAN> bindings discovered using ARP/ND snooping, where 'n' is the binding
limit that you can configure. You can disable TOFU for ARP/ND snooping. The methods will then
operate in trust on every use (TOEU) mode. In TOEU mode, when an address is discovered, it is
added to the realized bindings list and when it is deleted or expired, it is removed from the
realized bindings list. DHCP snooping and VM Tools always operate in TOEU mode.
Note TOFU is not the same as SpoofGuard, and it does not block traffic in the same way as
SpoofGuard. For more information, see Understanding SpoofGuard Segment Profile.
For Linux VMs, the ARP flux problem might cause ARP snooping to obtain incorrect information.
The problem can be prevented with an ARP filter. For more information, see
http://linux-ip.net/html/ether-arp.html#ether-arp-flux.
For each port, NSX Manager maintains an ignore bindings list, which contains IP addresses that
cannot be bound to the port. If you navigate to Networking > Logical Switches > Ports in
Manager mode and select a port, you can add discovered bindings to the ignore bindings list.
You can also delete an existing discovered or realized binding by copying it to Ignore Bindings.
Prerequisites
Familiarize yourself with the IP Discovery segment profile concepts. See Understanding IP
Discovery Segment Profile.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
Option Description
ARP Snooping For an IPv4 environment. Applicable if VMs have static IP addresses.
ARP Binding Limit The maximum number of IPv4 IP addresses that can be bound to a port. The
minimum value allowed is 1 and the maximum is 256. The default is 1.
ARP ND Binding Limit Timeout The timeout value, in minutes, for IP addresses in the ARP/ND binding table
if TOFU is disabled. If an address times out, a newly discovered address
replaces it.
DHCP Snooping For an IPv4 environment. Applicable if VMs have IPv4 addresses.
DHCP Snooping - IPv6 For an IPv6 environment. Applicable if VMs have IPv6 addresses.
ND Snooping Limit The maximum number of IPv6 addresses that can be bound to a port.
Duplicate IP Detection For all snooping methods and both IPv4 and IPv6 environments.
5 Click Save.
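A comparable profile can be created through the Policy API. The sketch below is illustrative only: the /policy/api/v1/infra/ip-discovery-profiles path is assumed, and the field names are assumptions that mirror the options in the table above, so check them against the API guide before use.

import requests

NSX_MGR = "https://nsx-manager.example.com"   # placeholder NSX Manager address
AUTH = ("admin", "VMware1!VMware1!")          # placeholder credentials

# IP Discovery profile mirroring the options above: ARP snooping with a
# binding limit of 2, TOFU disabled so stale bindings can age out, and
# duplicate IP detection enabled. Field names are assumptions.
payload = {
    "display_name": "ip-discovery-arp-toeu",
    "arp_snooping_config": {
        "arp_snooping_enabled": True,
        "arp_binding_limit": 2,
    },
    "tofu_enabled": False,
    "duplicate_ip_detection": {"duplicate_ip_detection_enabled": True},
}

resp = requests.patch(
    f"{NSX_MGR}/policy/api/v1/infra/ip-discovery-profiles/ip-discovery-arp-toeu",
    json=payload,
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()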
SpoofGuard is a tool that is designed to prevent virtual machines in your environment from
sending traffic with an IP address that they are not authorized to use. If a virtual machine's
IP address does not match the IP address in the corresponding logical port and segment address
binding in SpoofGuard, the virtual machine's vNIC is prevented from accessing the network
entirely. SpoofGuard can be configured at the port or segment level. There are several reasons
SpoofGuard might be used in your environment:
n Preventing a rogue virtual machine from assuming the IP address of an existing VM.
n Guaranteeing that distributed firewall (DFW) rules will not be inadvertently (or deliberately)
bypassed. For DFW rules created using IP sets as sources or destinations, the possibility
always exists that a virtual machine could have its IP address forged in the packet header,
thereby bypassing the rules in question.
n Performing dynamic Address Resolution Protocol (ARP) inspection. ARP SpoofGuard, Gratuitous
ARP (GARP) SpoofGuard, and Neighbor Discovery (ND) SpoofGuard all validate the MAC source,
IP source, and IP-MAC source mapping in the ARP, GARP, and ND payloads.
At the port level, the allowed MAC/VLAN/IP whitelist is provided through the Address Bindings
property of the port. When the virtual machine sends traffic, the traffic is dropped if its
IP/MAC/VLAN does not match the IP/MAC/VLAN properties of the port. Port-level SpoofGuard deals
with traffic authentication, that is, whether the traffic is consistent with the VIF
configuration.
At the segment level, the allowed MAC/VLAN/IP whitelist is provided through the Address
Bindings property of the segment. This is typically an allowed IP range/subnet for the segment,
and segment-level SpoofGuard deals with traffic authorization.
Traffic must be permitted by both port-level and segment-level SpoofGuard before it is allowed
into the segment. You can enable or disable port-level and segment-level SpoofGuard by using
the SpoofGuard segment profile.
Enable SpoofGuard for the port group(s) containing the guests. When enabled for each network
adapter, SpoofGuard inspects packets for the prescribed MAC and its corresponding IP address.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
4 Enter a name.
6 Click Save.
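For automation, a SpoofGuard segment profile can also be created with the Policy API. This is a minimal sketch that assumes the /policy/api/v1/infra/spoofguard-profiles path and the address_binding_whitelist field name, plus placeholder manager address and credentials; confirm both against the API guide for your release.

import requests

NSX_MGR = "https://nsx-manager.example.com"   # placeholder NSX Manager address
AUTH = ("admin", "VMware1!VMware1!")          # placeholder credentials

# SpoofGuard segment profile that enforces port-level address bindings, so
# traffic is dropped unless its IP/MAC/VLAN matches the port's bindings.
payload = {
    "display_name": "spoofguard-port-bindings",
    "address_binding_whitelist": True,  # assumed field name
}

resp = requests.patch(
    f"{NSX_MGR}/policy/api/v1/infra/spoofguard-profiles/spoofguard-port-bindings",
    json=payload,
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()

Apply the profile to a port or segment afterwards so that the address bindings are actually enforced.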
Note that the default segment security profile has the DHCP settings Server Block and Server
Block - IPv6 enabled. This means that a segment that uses the default segment security profile
will block traffic from a DHCP server to a DHCP client. If you want a segment that allows DHCP
server traffic, you must create a custom segment security profile for the segment.
Prerequisites
Familiarize yourself with the segment security segment profile concepts. See Understanding
Segment Security Segment Profile.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
Option Description
BPDU Filter Toggle the BPDU Filter button to enable BPDU filtering. Disabled by default.
When the BPDU filter is enabled, all traffic to the BPDU destination MAC
addresses is blocked. Enabling the BPDU filter also disables STP on the
logical switch ports because these ports are not expected to take part in
STP.
BPDU Filter Allow List Click the destination MAC address from the BPDU destination MAC
addresses list to allow traffic to the permitted destination. You must enable
BPDU Filter to be able to select from this list.
DHCP Filter Toggle the Server Block button and Client Block button to enable DHCP
filtering. Both are disabled by default.
DHCP Server Block blocks traffic from a DHCP server to a DHCP client. Note
that it does not block traffic from a DHCP server to a DHCP relay agent.
DHCP Client Block prevents a VM from acquiring a DHCP IP address by
blocking DHCP requests.
DHCPv6 Filter Toggle the Server Block - IPv6 button and Client Block - IPv6 button to
enable DHCP filtering. Both are disabled by default.
DHCPv6 Server Block blocks traffic from a DHCPv6 server to a DHCPv6
client. Note that it does not block traffic from a DHCP server to a DHCP relay
agent. Packets whose UDP source port number is 547 are filtered.
DHCPv6 Client Block prevents a VM from acquiring a DHCP IP address by
blocking DHCP requests. Packets whose UDP source port number is 546 are
filtered.
Block Non-IP Traffic Toggle the Block Non-IP Traffic button to allow only IPv4, IPv6, ARP, and
BPDU traffic.
The rest of the non-IP traffic is blocked. The permitted IPv4, IPv6, ARP,
GARP and BPDU traffic is based on other policies set in address binding and
SpoofGuard configuration.
By default, this option is disabled to allow non-IP traffic to be handled as
regular traffic.
RA Guard Toggle the RA Guard button to filter out ingress IPv6 router advertisements.
ICMPv6 type 134 packets are filtered out. This option is enabled by default.
Rate Limits Set a rate limit for broadcast and multicast traffic. This option is enabled by
default.
Rate limits can be used to protect the logical switch or VMs from events
such as broadcast storms.
To avoid any connectivity problems, the minimum rate limit value must be >=
10 pps.
5 Click Save.
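Because the default segment security profile blocks DHCP server traffic (see the note above), creating a custom profile is a common automation task. The following sketch assumes the /policy/api/v1/infra/segment-security-profiles path and field names that mirror the UI options above; treat both as assumptions and verify them against the API guide.

import requests

NSX_MGR = "https://nsx-manager.example.com"   # placeholder NSX Manager address
AUTH = ("admin", "VMware1!VMware1!")          # placeholder credentials

# Custom segment security profile that, unlike the default profile, allows
# DHCP server traffic while keeping BPDU filtering and RA Guard enabled.
payload = {
    "display_name": "segment-security-allow-dhcp-server",
    "dhcp_server_block_enabled": False,
    "dhcp_client_block_enabled": False,
    "bpdu_filter_enable": True,
    "ra_guard_enabled": True,
    "non_ip_traffic_block_enabled": False,
}

resp = requests.patch(
    f"{NSX_MGR}/policy/api/v1/infra/segment-security-profiles/"
    "segment-security-allow-dhcp-server",
    json=payload,
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()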
The MAC address change feature allows a VM to change its MAC address. A VM connected to a
port can run an administrative command to change the MAC address of its vNIC and still send
and receive traffic on that vNIC. This feature is supported on ESXi only and not on KVM. This
property is disabled by default.
MAC learning provides network connectivity to deployments where multiple MAC addresses are
configured behind one vNIC, for example, in a nested hypervisor deployment where an ESXi VM
runs on an ESXi host and multiple VMs run inside the ESXi VM. Without MAC learning, when the
ESXi VM's vNIC connects to a segment port, its MAC address is static. VMs running inside the
ESXi VM do not have network connectivity because their packets have different source MAC
addresses. With MAC learning, the vSwitch inspects the source MAC address of every packet
coming from the vNIC, learns the MAC address, and allows the packet to go through. If a learned
MAC address is not used for a certain period of time, it is removed. This time period is not
configurable. The field MAC Learning Aging Time displays the pre-defined value, which is 600.
MAC learning also supports unknown unicast flooding. Normally, when a packet that is received
by a port has an unknown destination MAC address, the packet is dropped. With unknown
unicast flooding enabled, the port floods unknown unicast traffic to every port on the switch that
has MAC learning and unknown unicast flooding enabled. This property is enabled by default, but
only if MAC learning is enabled.
The number of MAC addresses that can be learned is configurable. The maximum value is 4096,
which is the default. You can also set the policy for when the limit is reached. The options are:
n Drop - Packets from an unknown source MAC address are dropped. Packets inbound to this
MAC address will be treated as unknown unicast. The port will receive the packets only if it
has unknown unicast flooding enabled.
n Allow - Packets from an unknown source MAC address are forwarded although the address
will not be learned. Packets inbound to this MAC address will be treated as unknown unicast.
The port will receive the packets only if it has unknown unicast flooding enabled.
If you enable MAC learning or MAC address change, to improve security, configure SpoofGuard
as well.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
Option Description
MAC Change Enable or disable the MAC address change feature. The default is disabled.
MAC Learning Enable or disable the MAC learning feature. The default is disabled.
MAC Limit Policy Select Allow or Drop. The default is Allow. This option is available if you
enable MAC learning.
Unknown Unicast Flooding Enable or disable the unknown unicast flooding feature. The default is
enabled. This option is available if you enable MAC learning.
MAC Limit Set the maximum number of MAC addresses. The default is 4096. This
option is available if you enable MAC learning.
MAC Learning Aging Time For information only. This option is not configurable. The pre-defined value
is 600.
5 Click Save.
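A MAC discovery profile for the nested-hypervisor scenario described above can also be defined through the Policy API. In this sketch the /policy/api/v1/infra/mac-discovery-profiles path and the field names are assumptions that mirror the UI options, and the manager address and credentials are placeholders.

import requests

NSX_MGR = "https://nsx-manager.example.com"   # placeholder NSX Manager address
AUTH = ("admin", "VMware1!VMware1!")          # placeholder credentials

# MAC discovery profile for a nested-ESXi segment: MAC learning and unknown
# unicast flooding enabled, up to 1024 learned MACs, drop packets from
# unknown source MACs once the limit is reached.
payload = {
    "display_name": "mac-discovery-nested-esxi",
    "mac_learning_enabled": True,
    "unknown_unicast_flooding_enabled": True,
    "mac_limit": 1024,
    "mac_limit_policy": "DROP",
    "mac_change_enabled": False,
}

resp = requests.patch(
    f"{NSX_MGR}/policy/api/v1/infra/mac-discovery-profiles/mac-discovery-nested-esxi",
    json=payload,
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()

As recommended above, pair MAC learning with SpoofGuard if you need to restrict which addresses are allowed.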
Add a Segment
You can add two kinds of segments: overlay-backed segments and VLAN-backed segments.
Segments are created as part of a transport zone. There are two types of transport zones: VLAN
transport zones and overlay transport zones. A segment created in a VLAN transport zone is a
VLAN-backed segment, and a segment created in an overlay transport zone is an overlay-
backed segment.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
Connectivity Description
None Select this option when you do not want to connect the segment to any
upstream gateway (tier-0 or tier-1). Typically, you want to add a standalone
segment in the following scenarios:
n When you want to create a local testing environment for users that are
running workloads on the same subnet.
n When east-west connectivity with users on the other subnets is not
necessary.
n When north-south connectivity to users outside the data center is not
necessary.
n When you want to configure layer 2 bridging or guest VLAN tagging.
Tier-1 Select this option when you want to connect the segment to a tier-1
gateway.
Tier-0 Select this option when you want to connect the segment to a tier-0
gateway.
Note You can change the connectivity of a gateway-connected segment from one gateway
to another gateway (same or different gateway type). In addition, you can change the
connectivity of a segment from "None" to a tier-0 or tier-1 gateway. The segment connectivity
changes are permitted only when the gateways and the connected segments are in the same
transport zone. However, if the segment has DHCP configured on it, some restrictions and
caveats apply to changing the segment connectivity. For more information, see Scenarios:
Impact of Changing Segment Connectivity on DHCP.
6 Enter the Gateway IP address of the subnet in a CIDR format. A segment can contain an IPv4
subnet, or an IPv6 subnet, or both.
Subnets of one segment must not overlap with the subnets of other segments in your
network. A segment is always associated with a single virtual network identifier (VNI)
regardless of whether it is configured with one subnet, two subnets, or no subnet.
To create a VLAN-backed segment, add the segment in a VLAN transport zone. Similarly, to
create an overlay-backed segment, add the segment in an overlay transport zone.
For detailed steps on configuring DHCP Settings and DHCP Options, see Configure DHCP on
a Segment.
9 If the transport zone is of type VLAN, specify a list of VLAN IDs. If the transport zone is of
type Overlay, and you want to support layer 2 bridging or guest VLAN tagging, specify a list
of VLAN IDs or VLAN ranges.
This drop-down menu displays the named teaming policies, if you have added them in the
VLAN transport zone. If no uplink teaming policy is selected, the default teaming policy is
used.
n Named teaming policies are not applicable to overlay segments. Overlay segments
always follow the default teaming policy.
n For VLAN-backed segments, you have the flexibility to override the default teaming
policy with a selected named teaming policy. This capability is provided so that you can
steer the infrastructure traffic from the host to specific VLAN segments in the VLAN
transport zone. Before adding the VLAN segment, ensure that the named teaming policy
names are added in the VLAN transport zone.
DHCPv4 server and DHCPv4 static bindings on the segment automatically inherit the domain
name from the segment configuration as the Domain Name option.
12 If you want to use Layer 2 VPN to extend the segment, click the L2 VPN text box and select
an L2 VPN server or client session.
13 In VPN Tunnel ID, enter a unique value that is used to identify the segment.
14 (Optional) In the Metadata Proxy field, click Set to attach or detach a metadata proxy to this
segment.
To attach a metadata proxy, select an existing metadata proxy. To detach a metadata proxy,
deselect the metadata proxy that is selected.
15 Click Save.
16 To add segment ports, click Yes when prompted if you want to continue configuring the
segment.
d For ID, enter the VIF UUID of the VM or server that connects to this port.
Leave this text box blank except for use cases such as containers or VMware HCX. If this
port is for a container in a VM, select Child. If this port is for a bare metal container or
server, select Static.
Enter the parent VIF ID if Type is Child, or transport node ID if Type is Static.
i Specify tags.
j Apply address binding by specifying the IP (IPv4 address, IPv6 address, or IPv6 subnet)
and MAC address of the logical port to which you want to apply address binding. For
example, for IPv6, 2001::/64 is an IPv6 subnet, 2001::1 is a host IP, whereas 2001::1/64 is an
invalid input. You can also specify a VLAN ID.
Manual address bindings, if specified, override the auto discovered address bindings.
18 (Optional) To bind a static IP address to the MAC address of a VM on the segment, expand
DHCP Static Bindings, and then click Set.
Both DHCP for IPv4 and DHCP for IPv6 static bindings are supported. For detailed steps on
configuring static binding settings, see Configure DHCP Static Bindings on a Segment.
19 Click Save.
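The equivalent segment can be created through the Policy API. The sketch below is a minimal example, assuming a tier-1 gateway named t1-gw and an overlay transport zone path that already exist in your environment, plus a placeholder manager address and credentials; the /policy/api/v1/infra/segments path and the connectivity_path, transport_zone_path, and subnets fields should still be verified against the API guide for your version.

import requests

NSX_MGR = "https://nsx-manager.example.com"   # placeholder NSX Manager address
AUTH = ("admin", "VMware1!VMware1!")          # placeholder credentials

# Overlay segment attached to an existing tier-1 gateway with a /24 subnet.
# The gateway name and transport zone ID are placeholders.
payload = {
    "display_name": "web-segment",
    "connectivity_path": "/infra/tier-1s/t1-gw",
    "transport_zone_path": (
        "/infra/sites/default/enforcement-points/default/"
        "transport-zones/overlay-tz-id"
    ),
    "subnets": [{"gateway_address": "172.16.10.1/24"}],
}

# PATCH is idempotent: it creates the segment or updates it in place.
resp = requests.patch(
    f"{NSX_MGR}/policy/api/v1/infra/segments/web-segment",
    json=payload,
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()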
Local DHCP Server
As the name suggests, it is a DHCP server that is local to the segment and not available to the
other segments in the network. A local DHCP server provides a dynamic IP assignment
service only to the VMs that are attached to the segment. The IP address of a local DHCP
server must be in the subnet that is configured on the segment.
Gateway DHCP
It is analogous to a central DHCP service that dynamically assigns IP and other network
configuration to the VMs on all the segments that are connected to the gateway and using
Gateway DHCP. Depending on the type of DHCP profile you attach to the gateway, you can
configure a Gateway DHCP server or a Gateway DHCP relay on the segment. By default,
segments that are connected to a tier-1 or tier-0 gateway use Gateway DHCP. The IP address
of a Gateway DHCP server can be different from the subnets that are configured in the
segments.
DHCP Relay
It is a DHCP relay service that is local to the segment and not available to the other segments
in the network. The DHCP relay service relays the DHCP requests of the VMs that are
attached to the segment to the remote DHCP servers. The remote DHCP servers can be in
any subnet, outside the SDDC, or in the physical network.
You can configure DHCP on each segment regardless of whether the segment is connected to a
gateway. Both DHCP for IPv4 (DHCPv4) and DHCP for IPv6 (DHCPv6) servers are supported.
For a gateway-connected segment, all three DHCP types (local DHCP server, Gateway DHCP, and
DHCP relay) are supported. However, Gateway DHCP is supported only in the IPv4 subnet of a
segment.
For a standalone segment that is not connected to a gateway, only a local DHCP server is
supported.
n Segments configured with an IPv6 subnet can have either a local DHCPv6 server or a
DHCPv6 relay. Gateway DHCPv6 is not supported.
n DHCPv6 Options (classless static routes and generic options) are not supported.
Prerequisites
n If you are configuring Gateway DHCP on a segment, a DHCP profile must be attached to the
directly connected tier-1 or tier-0 gateway.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
n To modify the properties of an existing segment, click the vertical ellipses next to the
name of an existing segment, and then click Edit.
4 If you are adding a segment, ensure that the following segment properties are specified:
n Segment name
n Connectivity
n Transport zone
6 In the DHCP Type drop-down menu, select any one of the following types.
On a segment, IPv6 and IPv4 subnets always use the same DHCP type. Mixed configuration is
not supported.
DHCP Local Server Select this option to create a local DHCP server that has an IP address on
the segment.
As the name suggests, it is a DHCP server that is local to the segment and
not available to the other segments in the network. A local DHCP server
provides a dynamic IP assignment service only to the VMs that are attached
to the segment.
You can configure all DHCP settings, including DHCP ranges, DHCP Options,
and static bindings on the segment.
For standalone segments, this type is selected by default.
DHCP Relay Select this option to relay the DHCP client requests to the external DHCP
servers. The external DHCP servers can be in any subnet, outside the SDDC,
or in the physical network.
The DHCP relay service is local to the segment and not available to the
other segments in the network.
When you use a DHCP relay on a segment, you cannot configure DHCP
Settings and DHCP Options. The UI does not prevent you from configuring
DHCP static bindings. However, in NSX-T Data Center 3.0, static binding with
a DHCP relay is an unsupported configuration.
Gateway DHCP This DHCP type is analogous to a central DHCP service that dynamically
assigns IP and other network configuration to the VMs on all the segments
that are connected to the gateway and using Gateway DHCP. Depending on
the type of DHCP profile you attach to the gateway, you can configure a
Gateway DHCP server or a Gateway DHCP relay on the segment.
By default, segments that are connected to a tier-1 or tier-0 gateway use
Gateway DHCP. If needed, you can choose to configure a DHCP local server
or a DHCP relay on the segment.
To configure Gateway DHCP on a segment, a DHCP profile must be
attached to the gateway.
If the IPv4 subnet uses Gateway DHCP, you cannot configure DHCPv6 in the
IPv6 subnet of the same segment because Gateway DHCPv6 is not
supported. In this case, the IPv6 subnet cannot support any DHCPv6 server
configuration, including the IPv6 static bindings.
Note In NSX-T Data Center 3.0 and 3.0.1, after the segment is created and the DHCP service
is in use, you cannot change the DHCP type of a gateway-connected segment. Starting in
version 3.0.2, you can change the DHCP type of a gateway-connected segment.
7 In the DHCP Profile drop-down menu, select the name of the DHCP server profile or DHCP
relay profile.
When a segment is using a Gateway DHCP server, ensure that an edge cluster is selected
either in the gateway, or DHCP server profile, or both. If an edge cluster is unavailable in
either the profile or the gateway, an error message is displayed when you save the
segment.
n If you are configuring a local DHCP server or a DHCP relay on the segment, you must
select a DHCP profile from the drop-down menu. If no profiles are available in the drop-
down menu, click the vertical ellipses icon and create a DHCP profile. After the profile is
created, it is automatically attached to the segment.
When a segment is using a local DHCP server, ensure that the DHCP server profile
contains an edge cluster. If an edge cluster is unavailable in the profile, an error message
is displayed when you save the segment.
Note In NSX-T Data Center 3.0 and 3.0.1, after the segment is created and the DHCP service
is in use, you cannot change the DHCP profile of the segment. Starting in version 3.0.2, you
can change the DHCP profile of the segment that uses a DHCP local server or a DHCP relay.
If the segment contains an IPv4 subnet and an IPv6 subnet, you can configure both DHCPv4
and DHCPv6 servers on the segment.
a Enable the DHCP configuration settings on the subnet by clicking the DHCP Config toggle
button.
n If you are configuring a DHCP local server, a server IP address is required. A maximum
of two server IP addresses are supported: one IPv4 address and one IPv6 address.
For an IPv4 address, the prefix length must be <= 30, and for an IPv6 address, the
prefix length must be <= 126. The server IP addresses must belong to the subnets that
you have specified in this segment. The DHCP server IP address must not overlap with
the IP addresses in the DHCP ranges and DHCP static binding. The DHCP server
profile might contain server IP addresses, but these IP addresses are ignored when
you configure a local DHCP server on the segment.
After a local DHCP server is created, you can edit the server IP addresses on the Set
DHCP Config page. However, the new IP addresses must belong to the same subnet
that is configured in the segment.
n If you are configuring a DHCP relay, this step is not applicable. The server IP
addresses are fetched automatically from the DHCP relay profile and displayed below
the profile name.
n If you are configuring a Gateway DHCP server, this text box is not editable. The server
IP addresses are fetched automatically from the DHCP profile that is attached to the
connected gateway.
Remember, the Gateway DHCP server IP addresses in the DHCP server profile can be
different from the subnet that is configured in the segment. In this case, the Gateway
DHCP server connects with the IPv4 subnet of the segment through an internal relay
service, which is autocreated when the Gateway DHCP server is created. The internal
relay service uses any one IP address from the subnet of the Gateway DHCP server IP
address. The IP address used by the internal relay service acts as the default gateway
on the Gateway DHCP server to communicate with the IPv4 subnet of the segment.
After a Gateway DHCP server is created, you can edit the server IP addresses in the
DHCP profile of the gateway. However, you cannot change the DHCP profile that is
attached to the gateway.
c (Optional) In the DHCP Ranges text box, enter one or more IP address ranges.
Both IP ranges and IP addresses are allowed. IPv4 addresses must be in a CIDR /32
format, and IPv6 addresses must be in a CIDR /128 format. You can also enter an IP
address as a range by entering the same IP address in the start and the end of the range.
For example, 172.16.10.10-172.16.10.10.
n IP addresses in the DHCP ranges must belong to the subnet that is configured on the
segment. That is, DHCP ranges cannot contain IP addresses from multiple subnets.
n IP ranges must not overlap with the DHCP Server IP address and the DHCP static
binding IP addresses.
Note The following types of IPv6 addresses are not permitted in DHCP for IPv6 ranges:
Caution After a DHCP server is created, you can update existing ranges, append new IP
ranges, or delete existing ranges. However, it is a good practice to avoid deleting,
shrinking, or expanding the existing IP ranges. For example, do not try to combine
multiple smaller IP ranges to create a single large IP range. You might accidentally miss
including IP addresses, which are already leased to the DHCP clients from the larger IP
range. Therefore, when you modify existing ranges after the DHCP service is running, it
might cause the DHCP clients to lose network connection and result in a temporary traffic
disruption.
d (Optional) (Only for DHCPv6): In the Excluded Ranges text box, enter IPv6 addresses or a
range of IPv6 addresses that you want to exclude for dynamic IP assignment to DHCPv6
clients.
In IPv6 networks, the DHCP ranges can be large. Sometimes, you might want to reserve
certain IPv6 addresses, or multiple small ranges of IPv6 addresses from the large DHCP
range for static binding. In such situations, you can specify excluded ranges.
Default value is 86400. Valid range of values is 60–4294967295. The lease time
configured in the DHCP server configuration takes precedence over the lease time that
you specified in the DHCP profile.
Preferred time is the length of time that a valid IP address is preferred. When the
preferred time expires, the IP address becomes deprecated. If no value is entered,
preferred time is autocalculated as (lease time * 0.8).
g (Optional) Enter the IP address of the domain name server (DNS) to use for name
resolution. A maximum of two DNS servers are permitted.
When not specified, no DNS is assigned to the DHCP client. DNS server IP addresses must
belong to the same subnet as the subnet's gateway IP address.
DHCPv4 configuration automatically fetches the domain name that you specified in the
segment configuration.
i (Optional) (Only for DHCPv6): Enter the IP address of the Simple Network Time Protocol
(SNTP) server. A maximum of two SNTP servers are permitted.
In NSX-T Data Center 3.0, DHCPv6 server does not support NTP.
DHCPv4 server supports only NTP. To add an NTP server, click Options, and add the
Generic Option (Code 42 - NTP Servers).
10 (Optional) Click Options, and specify the Classless Static Routes (Option 121) and Generic
Options.
In NSX-T Data Center 3.0, DHCP Options for IPv6 are not supported.
n Each classless static route option in DHCP for IPv4 can have multiple routes with the same
destination. Each route includes a destination subnet, subnet mask, and next-hop router. For
information about classless static routes in DHCPv4, see the RFC 3442 specification. You can
add a maximum of 127 classless static routes on a DHCPv4 server.
n For adding Generic Options, select the code of the option and enter a value of the option.
For binary values, the value must be in a base-64 encoded format.
11 Click Apply to save the DHCP configuration, and then click Save to save the segment
configuration.
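For reference, the same local DHCPv4 server settings can be expressed on the segment through the Policy API. This sketch assumes an existing DHCP server profile at /infra/dhcp-server-configs/local-dhcp-profile, a segment named web-segment, and the SegmentDhcpV4Config field names shown; all of these are illustrative assumptions to be checked against the API guide.

import requests

NSX_MGR = "https://nsx-manager.example.com"   # placeholder NSX Manager address
AUTH = ("admin", "VMware1!VMware1!")          # placeholder credentials

# Local DHCPv4 server on the segment subnet: the server address is inside
# the segment subnet but outside the DHCP range, as described above.
payload = {
    "display_name": "web-segment",
    "dhcp_config_path": "/infra/dhcp-server-configs/local-dhcp-profile",
    "subnets": [
        {
            "gateway_address": "172.16.10.1/24",
            "dhcp_ranges": ["172.16.10.100-172.16.10.200"],
            "dhcp_config": {
                "resource_type": "SegmentDhcpV4Config",
                "server_address": "172.16.10.2/24",
                "lease_time": 86400,
                "dns_servers": ["172.16.10.53"],
            },
        }
    ],
}

resp = requests.patch(
    f"{NSX_MGR}/policy/api/v1/infra/segments/web-segment",
    json=payload,
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()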
What to do next
n After a segment has DHCP configured on it, some restrictions and caveats apply on changing
the segment connectivity. For more information, see Scenarios: Impact of Changing Segment
Connectivity on DHCP.
n When a DHCP server profile is attached to a segment that uses a DHCP local server, the
DHCP service is created in the edge cluster that you specified in the DHCP profile. However, if
the segment uses a Gateway DHCP server, the edge cluster in which the DHCP service is
created depends on a combination of several factors. For a detailed information about how
an edge cluster is selected for DHCP service, see Scenarios: Selection of Edge Cluster for
DHCP Service.
In a typical network environment, you have VMs that run services, such as FTP, email servers,
application servers, and so on. You might not want the IP address of these VMs to change in your
network. In this case, you can bind a static IP address to the MAC address of each VM (DHCP
client). The static IP address must not overlap with the DHCP IP ranges and the DHCP Server IP
addresses.
DHCP static bindings are allowed when you are configuring either a local DHCP server or a
Gateway DHCP server on the segment. The UI does not prevent you from configuring DHCP
static bindings when the segment is using a DHCP relay. However, in NSX-T Data Center 3.0,
static binding with a DHCP relay is an unsupported configuration.
Prerequisites
The segment on which you want to configure DHCP static bindings must be saved in the
network.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
3 Next to the segment that you want to edit, click the vertical ellipses, and click Edit.
4 Expand the DHCP Static Bindings section, and next to DHCP Static Bindings, click Set.
By default, the IPv4 Static Binding page is displayed. To bind IPv6 addresses, make sure that
you first click the IPv6 Static Binding tab, and then proceed to the next step.
The following table describes the static binding options that are common to DHCP for
IPv4 and DHCP for IPv6 servers.
Option Description
Name Enter a unique display name to identify each static binding. The name
must be limited to 255 characters.
IP Address n Required for IPv4 static binding. Enter a single IPv4 address to bind
to the MAC address of the client.
n Optional for IPv6 static binding. Enter a single Global Unicast IPv6
address to bind to the MAC address of the client.
When no IPv6 address is specified for static binding, Stateless Address
Autoconfiguration (SLAAC) is used to auto-assign an IPv6 address to the
DHCPv6 clients. Also, you can use Stateless DHCP to assign other DHCP
configuration options, such as DNS, domain names, and so on, to the
DHCPv6 clients.
For more information about Stateless DHCP for IPv6, read the RFC 3736
specification.
The following types of IPv6 addresses are not permitted in IPv6 static
binding:
n Link Local Unicast addresses (FE80::/64 )
n Multicast IPv6 addresses (FF00::/8)
n Unspecified address (0:0:0:0:0:0:0:0)
n Address with all F (F:F:F:F:F:F:F:F)
The static IP address must belong to the subnet (if any) that is configured
on the segment.
MAC Address Required. Enter the MAC address of the DHCP client to which you want
to bind a static IP address.
The following validations apply to MAC address in static bindings:
n MAC address must be unique in all the static bindings on a segment
that uses a local DHCP server.
n MAC address must be unique in all the static bindings across all the
segments that are connected to the gateway and which use the
Gateway DHCP server.
For example, consider that you have 10 segments connected to a tier-1
gateway. You use a Gateway DHCP server for four segments (Segment1
to Segment4), and a local DHCP server for the remaining six segments
(Segment5 to Segment10). Assume that you have a total of 20 static
bindings across all the four segments (Segment1 to Segment4), which
use the Gateway DHCP server. In addition, you have five static bindings
in each of the other six segments (Segment5 to Segment10), which use a
local DHCP server. In this example:
n The MAC address in each of the 20 static bindings must be unique
across all the segments (Segment1 to Segment4) that use the
Gateway DHCP server.
n The MAC address in the five static bindings must be unique on each
segment (Segment5 to Segment10) that uses a local DHCP server.
Lease Time Optional. Enter the amount of time in seconds for which the IP address is
bound to the DHCP client. When the lease time expires, the IP address
becomes invalid and the DHCP server can assign the address to other
DHCP clients on the segment.
Valid range of values is 60–4294967295. Default is 86400.
Tags Optional. Add tags to label static bindings so that you can quickly search
or filter bindings, troubleshoot and trace binding-related issues, or do
other tasks.
For more information about adding tags and use cases for tagging
objects, see Tags.
The following table describes the static binding options that are available only in a DHCP
for IPv4 server.
Gateway Address Enter the default gateway IP address that the DHCP for IPv4 server must
provide to the DHCP client.
Host Name Enter the host name of the DHCP for IPv4 client so that the DHCPv4
server can always bind the client (host) with the same IPv4 address each
time.
The host name must be limited to 63 characters.
The following validations apply to host name in static bindings:
n Host name must be unique in all the static bindings on a segment that
uses a local DHCP server.
n Host name must be unique in all the static bindings across all the
segments that are connected to the gateway and which use the
Gateway DHCP server.
For example, consider that you have 10 segments connected to a tier-1
gateway. You use a Gateway DHCP server for four segments (Segment1
to Segment4), and a local DHCP server for the remaining six segments
(Segment5 to Segment10). Assume that you have a total of 20 static
bindings across all the four segments (Segment1 to Segment4), which
use the Gateway DHCP server. In addition, you have five static bindings
in each of the other six segments (Segment5 to Segment10), which use a
local DHCP server. In this example:
n The host name in each of the 20 static bindings must be unique
across all the segments (Segment1 to Segment4) that use the
Gateway DHCP server.
n The host name in the five static bindings must be unique on each
segment (Segment5 to Segment10) that uses a local DHCP server.
DHCP Options Optional. Click Set to configure DHCP for IPv4 Classless Static Routes and
other Generic Options.
n IPv4 static bindings automatically inherit the domain name that you configured on the
segment.
n To specify DNS servers in the static binding configuration, add the Generic Option
(Code 6 - DNS Servers).
n To synchronize the system time on DHCPv4 clients with DHCPv4 servers, use NTP.
DHCP for IPv4 server does not support SNTP.
n If DHCP Options are not specified in the static bindings, the DHCP Options from the
DHCPv4 server on the segment are automatically inherited in the static bindings.
However, if you have explicitly added one or more DHCP Options in the static
bindings, these DHCP Options are not autoinherited from the DHCPv4 server on the
segment.
The following table describes the static binding options that are available only in a DHCP
for IPv6 server.
DNS Servers Optional. Enter a maximum of two domain name servers to use for the
name resolution.
When not specified, no DNS is assigned to the DHCP client.
SNTP Servers Optional. Enter a maximum of two Simple Network Time Protocol (SNTP)
servers. The clients use these SNTP servers to synchronize their system
time to that of the standard time servers.
Preferred Time Optional. Enter the length of time that a valid IP address is preferred.
When the preferred time expires, the IP address becomes deprecated. If
no value is entered, preferred time is auto-calculated as (lease time *
0.8).
Lease time must be > preferred time.
Valid range of values is 60–4294967295. Default is 69120.
Domain Names Optional. Enter the domain name to provide to the DHCPv6 clients.
Multiple domain names are supported in an IPv6 static binding.
When not specified, no domain name is assigned to the DHCP clients.
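A static binding such as the FTP server example above can also be created with the Policy API. The sketch assumes a segment named web-segment, the dhcp-static-binding-configs child path, and the DhcpV4StaticBindingConfig field names; these mirror the options in the tables above but should be verified against the API guide.

import requests

NSX_MGR = "https://nsx-manager.example.com"   # placeholder NSX Manager address
AUTH = ("admin", "VMware1!VMware1!")          # placeholder credentials

# IPv4 static binding: the IP address must be inside the segment subnet and
# outside the DHCP ranges and server address, as described above.
payload = {
    "display_name": "ftp-server-binding",
    "resource_type": "DhcpV4StaticBindingConfig",  # assumed type name
    "mac_address": "00:50:56:aa:bb:cc",
    "ip_address": "172.16.10.50",
    "host_name": "ftp01",
    "gateway_address": "172.16.10.1",
    "lease_time": 86400,
}

resp = requests.patch(
    f"{NSX_MGR}/policy/api/v1/infra/segments/web-segment/"
    "dhcp-static-binding-configs/ftp-server-binding",
    json=payload,
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()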
Layer 2 Bridging
With layer 2 bridging, you can have a connection to a VLAN-backed port group or a device, such
as a gateway, that resides outside of your NSX-T Data Center deployment. A layer 2 bridge is
also useful in a migration scenario, in which you need to split a subnet across physical and virtual
workloads.
A layer 2 bridge requires an Edge cluster and an Edge Bridge profile. An Edge Bridge profile
specifies which Edge cluster to use for bridging and which Edge transport node acts as the
primary and backup bridge. When you configure a segment, you can specify an Edge bridge
profile to enable layer 2 bridging.
When you create an edge bridge profile, if you set the failover mode to be preemptive and a
failover occurs, the standby node becomes the active node. After the failed node recovers, it
becomes the active node again. If you set the failover mode to be non-preemptive and a failover
occurs, the standby node becomes the active node. After the failed node recovers, it becomes
the standby node. You can manually set the standby edge node to be the active node by running
the CLI command set l2bridge-port <uuid> state active on the standby edge node. The
command can only be applied in non-preemptive mode. Otherwise, there will be an error. In non-
preemptive mode, the command will trigger an HA failover when applied on a standby node, and
it will be ignored when applied on an active node. For more information, see the NSX-T Data
Center Command-Line Interface Reference.
Prerequisites
n Verify that you have an NSX Edge cluster with two NSX Edge transport nodes.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
4 Enter a name for the Edge bridge profile and optionally a description.
9 Click Save.
What to do next
n Run the following command to enable reverse filter on the ESXi host where the Edge VM is
running:
Then disable and enable promiscuous mode on the portgroup with the following steps:
n Do not have other port groups in promiscuous mode on the same host sharing the same set
of VLANs.
n The active and standby Edge VMs should be on different hosts. If they are on the same host
the throughput might be reduced because VLAN traffic needs to be forwarded to both VMs
in promiscuous mode.
1 Retrieve the port number for the trunk vNIC that you want to configure as a sink port.
a Log in to the vSphere Web Client, and navigate to Home > Networking.
b Click the distributed port group to which the NSX Edge trunk interface is connected, and
click Ports to view the ports and connected VMs. Note the port number associated with
the trunk interface. Use this port number when fetching and updating opaque data.
b Click content.
c Click the link associated with the rootFolder (for example: group-d1 (Datacenters)).
d Click the link associated with the childEntity (for example: datacenter-1).
e Click the link associated with the networkFolder (for example: group-n6).
f Click the DVS name link for the vSphere distributed switch associated with the NSX Edges
(for example: dvs-1 (Mgmt_VDS)).
g Copy the value of the uuid string. Use this value for dvsUuid when fetching and updating
opaque data.
a Go to https://<vc-ip>/mob/?moid=DVSManager&vmodl=1.
b Click fetchOpaqueDataEx.
<selectionSet xsi:type="DVPortSelection">
<dvsUuid>c2 1d 11 50 6a 7c 77 68-e6 ba ce 6a 1d 96 2a 15</dvsUuid> <!-- example dvsUuid -->
<portKey>393</portKey> <!-- example port number -->
</selectionSet>
Use the port number and dvsUuid value that you retrieved for the NSX Edge trunk
interface.
e Click Invoke Method. If the result shows values for vim.dvs.OpaqueData.ConfigInfo, then
opaque data is already set; use the edit operation when you set the sink port. If the
value for vim.dvs.OpaqueData.ConfigInfo is empty, use the add operation when you set the
sink port.
4 Configure the sink port in the vCenter managed object browser (MOB).
a Go to https://<vc-ip>/mob/?moid=DVSManager&vmodl=1.
b Click updateOpaqueDataEx.
c In the selectionSet value box paste the following XML input. For example,
<selectionSet xsi:type="DVPortSelection">
<dvsUuid>c2 1d 11 50 6a 7c 77 68-e6 ba ce 6a 1d 96 2a 15</dvsUuid> <!-- example dvsUuid -->
<portKey>393</portKey> <!-- example port number -->
</selectionSet>
Use the dvsUuid value that you retrieved from the vCenter MOB.
d On the opaqueDataSpec value box paste one of the following XML inputs.
Use this input to enable a SINK port if opaque data is not set (operation is set to add):
<opaqueDataSpec>
<operation>add</operation>
<opaqueData>
<key>com.vmware.etherswitch.port.extraEthFRP</key>
<opaqueData
xsi:type="vmodl.Binary">AAABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAA=</opaqueData>
</opaqueData>
</opaqueDataSpec>
Use this input to enable a SINK port if opaque data is already set (operation is set to edit):
<opaqueDataSpec>
<operation>edit</operation>
<opaqueData>
<key>com.vmware.etherswitch.port.extraEthFRP</key>
<opaqueData
xsi:type="vmodl.Binary">AAABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAA=</opaqueData>
</opaqueData>
</opaqueDataSpec>
Use this input to disable a SINK port (operation is set to edit):
<opaqueDataSpec>
<operation>edit</operation>
<opaqueData>
<key>com.vmware.etherswitch.port.extraEthFRP</key>
<opaqueData
xsi:type="vmodl.Binary">AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAA=</opaqueData>
</opaqueData>
</opaqueDataSpec>
Prerequisites
n At least one ESXi or KVM host to serve as a regular transport node. This node has hosted
VMs that require connectivity with devices outside of a NSX-T Data Center deployment.
n A VM or another end device outside of the NSX-T Data Center deployment. This end device
must be attached to a VLAN port matching the VLAN ID of the bridge-backed segment.
Procedure
3 Click the menu icon (three dots) of the overlay segment that you want to configure layer 2
bridging on and select Edit.
8 Enter a VLAN ID or a VLAN trunk specification (specify VLAN ranges and not individual
VLANs).
10 Click Add.
Results
You can test the functionality of the bridge by sending a ping from a VM attached to the
segment to a device that is external to the NSX-T deployment.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
5 In the Server Address field, enter the URL and port for the Nova server.
If you select any Edge node, you cannot enable Standby Relocation in the next step.
Standby relocation means that if the Edge node running the metadata proxy fails, the
metadata proxy will run on a standby Edge node. You can only enable standby relocation if
you do not select any Edge node.
9 In the Shared Signature Secret field, enter the secret that the metadata proxy will use to
access the Nova server.
10 (Optional) Select a certificate for encrypted communication with the Nova server.
The options are TLSv1, TLSv1.1, and TLSv1.2. TLSv1.1 and TLSv1.2 are supported by default.
Host Switches
5
A host switch is a virtual network switch that provides networking services to
the various hosts in the network. It is instantiated on every host that participates in NSX-T
networking.
n NSX-T Virtual Distributed Switch: NSX-T introduces a host switch that normalizes connectivity
among various compute domains, including multiple VMware vCenter Server instances, KVM,
containers, and other off premises or cloud implementations.
NSX-T Virtual Distributed Switch can be configured based on the performance required in
your environment:
n Standard: Configured for regular workloads, where normal traffic throughput is expected
on the workloads.
n Enhanced: Configured for telecom workloads, where high traffic throughput is expected
on the workloads.
n vSphere Distributed Virtual Switch: Provides centralized management and monitoring of the
networking configuration of all hosts that are associated with the switch in a vCenter Server
environment.
In NSX-T 3.0, a host transport node can be prepared by installing NSX-T on a VDS switch. To
prepare an NSX Edge VM as a transport node, you can only use an N-VDS switch. However, you
can connect an NSX Edge VM to any of the supported switches (VSS, VDS, or N-VDS) depending
on the topology in your network.
After you prepare a cluster of transport node hosts with VDS as the host switch, you can do the
following:
n Realize a segment created in NSX-T as an NSX Distributed Virtual port group in vCenter
Server.
n Migrate VMs between vSphere Distributed Virtual port groups and NSX Distributed Virtual
port groups.
The following requirements must be met to install NSX-T on a VDS host switch:
The created VDS switch can be configured to centrally manage networking for NSX-T hosts.
Configuring a VDS switch for NSX-T networking requires objects to be configured on NSX-T and
in vCenter Server.
n In vSphere:
n Add ESXi hosts to the switch. These hosts are later prepared as NSX-T transport
nodes.
n In NSX-T:
n When configuring a transport node, map the uplinks defined in the NSX-T uplink profile
to the uplinks on the VDS switch.
For more details on preparing a host transport node on a VDS switch, see the NSX-T Data
Center Installation Guide.
The following parameters can only be configured in a vCenter Server on a VDS backed host
switch:
MTU: In vCenter Server, set the MTU value on the VDS switch (select the VDS, then click Actions
→ Settings → Edit Settings). Any MTU value set in an NSX-T uplink profile is overridden. A VDS
switch must have an MTU of 1600 or higher. For a host transport node prepared using VDS as the
host switch, the MTU value needs to be set on the VDS switch in vCenter Server.
Uplinks/LAGs: In vCenter Server, configure uplinks and LAGs on the VDS switch (Actions →
Settings → Edit Settings). When a transport node is prepared, the teaming policy on NSX-T is
mapped to the uplinks/LAGs configured on the VDS switch. During configuration, NSX-T requires
a teaming policy to be configured for the transport node; this teaming policy is mapped to the
uplinks/LAGs configured on the VDS switch. Note: For a transport node prepared on an N-VDS
switch, the teaming policy is mapped to physical NICs instead.
NIOC: Configure in vCenter Server (Actions → Settings → Edit Settings). NIOC configuration is
not available in NSX-T when a host transport node is prepared using a VDS switch; the NIOC
profile can only be configured in vCenter Server.
Link Layer Discovery Protocol (LLDP): Configure in vCenter Server (Actions → Settings → Edit
Settings). LLDP configuration is not available in NSX-T when a host transport node is prepared
using a VDS switch; the LLDP profile can only be configured in vCenter Server.
Add or Manage Hosts: Manage in vCenter Server (Networking → VDS Switch → Add and Manage
Hosts). Hosts are prepared as transport nodes in NSX-T. Before preparing a transport node using
a VDS switch, the node must be added to the VDS switch in vCenter Server.
Note NIOC profiles, Link Layer Discovery Protocol (LLDP) profile, and Link Aggregation Group
(LAG) for these virtual machines are managed by VDS switches and not by NSX-T. As a vSphere
administrator, configure these parameters from vCenter Server UI or by calling VDS API
commands.
After preparing a host transport node with VDS as a host switch, the host switch type displays
VDS as the host switch. It displays the configured uplink profile in NSX-T and the associated
transport zones.
In vCenter Server, the VDS switch used to prepare NSX-T hosts is created as an NSX Switch.
In earlier versions of NSX-T Data Center, a segment created in NSX-T is represented as an
opaque network in vCenter Server. When running NSX-T on a VDS switch, a segment is
represented as an NSX Distributed Virtual Port Group.
Any changes to the segments on the NSX-T network are synchronized in vCenter Server.
Any segment created in NSX-T is realized in vCenter Server as an NSX-T object. vCenter
Server displays the following details related to NSX-T segments:
n NSX Manager
n Transport zone
The port binding for the segment is by default set to Ephemeral. Switching parameters that are
set in NSX-T cannot be edited in vCenter Server, and vice versa.
Important In vCenter Server, a realized NSX Distributed Virtual port group does not require a
unique name to differentiate it from other port groups on a VDS switch. So, multiple NSX
Distributed Virtual port groups can have the same name. Any vSphere automation that uses port
group names might result in errors.
In vCenter Server, you can perform these actions on an NSX Distributed Virtual Port Group:
However, NSX-T objects related to an NSX Distributed Virtual port group can only be edited in
NSX Manager. You can edit these segment properties:
For details on how to configure a vSphere Distributed Virtual port group, refer to the vSphere
Networking Guide.
In the sample topology diagram, two VDS switches are configured to manage NSX-T traffic and
vSphere traffic.
VDS-1 and VDS-2 are configured to manage networking for ESXi hosts from Cluster-1, Cluster-2,
and Cluster-3. Cluster-1 is prepared to run only vSphere traffic, whereas Cluster-2 and Cluster-3
are prepared as host transport nodes with these VDS switches.
In vCenter Server, uplink port groups on VDS switches are assigned physical NICs. In the
topology, uplinks on VDS-1 and VDS-2 are assigned to physical NICs. Depending on the hardware
configuration of the ESXi host, you might want to plan how many physical NICs to assign to the
switch. In addition to assigning uplinks to the VDS switch, the MTU, NIOC, LLDP, and LAG
profiles are configured on the VDS switches.
When preparing a cluster by applying a transport node profile (on a VDS switch), the uplinks
from the transport node profile are mapped to VDS uplinks. In contrast, when preparing a cluster
on an N-VDS switch, the uplinks from the transport node profile are directly mapped to physical
NICs.
After preparing the clusters, ESXi hosts on Cluster-2 and Cluster-3 manage NSX-T traffic, while
Cluster-1 manages vSphere traffic.
Note Configuration done using API commands is also possible from the vCenter Server user
interface. For more information on creating an NSX-T Data Center transport node using vSphere
Distributed Switch as the host switch, refer to the Configure a Managed Host Transport Node
topic in the NSX-T Data Center Installation Guide.
"06ba5326-67ac-4f2c-9 node-id>?
953-a8c5d326b51e", "transport_zone_profi action=create_transport_
le_ids": [
node, refer to the NSX-T
"transport_zone_profi {
"resource_type": Data Center API Guide.
le_ids": [
{ "BfdHealthMonitoringP
"resource_type": rofile",
"BfdHealthMonitoringP "profile_id":
rofile", "52035bb3-
"profile_id": ab02-4a08-9884-186313
"52035bb3- 12e50a"
ab02-4a08-9884-186313 } ] } ],
12e50a" } ] } ],
"vmk_install_migratio
"vmk_install_migratio n": [],
n": [],
"pnics_uninstall_migr
"pnics_uninstall_migr ation": [],
ation": [],
"vmk_uninstall_migrat
"vmk_uninstall_migrat ion": [],
ion": [], "not_ready": false
"not_ready": false }
} ],
], "resource_type":
"resource_type": "StandardHostSwitchSp
"StandardHostSwitchSp ec"
ec" },
},
"transport_zone_endpo
"transport_zone_endpo ints": [],
ints": [], "maintenance_mode":
"maintenance_mode": "DISABLED",
"DISABLED", "is_overridden":
"is_overridden": false,
false, "resource_type":
"resource_type": "TransportNode",
"TransportNode", "display_name":
"id": "TestTN",
"d7ef478b-752c-400a- }
b5f0-207c04567e5d",
"display_name":
"TestTN",
}
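To complement the excerpt above, the sketch below shows how such a transport node request might be sent with the manager (MP) API, with the uplinks defined in the NSX-T uplink profile mapped to VDS uplink names rather than physical NICs. The VDS UUID, node ID, profile IDs, transport zone ID, manager address, and credentials are all placeholders, and the exact payload schema should be taken from the NSX-T Data Center API Guide.

import requests

NSX_MGR = "https://nsx-manager.example.com"   # placeholder NSX Manager address
AUTH = ("admin", "VMware1!VMware1!")          # placeholder credentials

# Host switch portion of a TransportNode payload for a VDS-backed host:
# NSX-T uplink names are mapped to VDS uplink names instead of pNICs.
host_switch = {
    "host_switch_type": "VDS",
    "host_switch_id": "50 1d 11 50 6a 7c 77 68-aa bb cc dd ee ff 00 11",  # VDS UUID from vCenter
    "host_switch_profile_ids": [
        {"key": "UplinkHostSwitchProfile", "value": "uplink-profile-id"}
    ],
    "uplinks": [
        {"uplink_name": "uplink-1", "vds_uplink_name": "Uplink 1"},
        {"uplink_name": "uplink-2", "vds_uplink_name": "Uplink 2"},
    ],
    "transport_zone_endpoints": [{"transport_zone_id": "overlay-tz-id"}],
}

payload = {
    "display_name": "TestTN",
    "node_id": "esxi-host-fabric-node-id",  # placeholder fabric node ID of the ESXi host
    "host_switch_spec": {
        "resource_type": "StandardHostSwitchSpec",
        "host_switches": [host_switch],
    },
}

resp = requests.post(
    f"{NSX_MGR}/api/v1/transport-nodes",
    json=payload,
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()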
SR-IOV support
SR-IOV is supported on a vSphere Distributed Switch but not on an NSX Virtual Distributed Switch.
vMotion Support
The table in the original document lists compute vMotion and storage vMotion support for the
following source and destination switch combinations:
n vMotion between a source vSphere Distributed Switch and a destination vSphere Distributed
Switch, where both VDS switches are enabled to support NSX-T Data Center.
n vMotion between a vSphere Distributed Switch (NSX-T Data Center enabled) and an NSX
Virtual Distributed Switch.
n vMotion between a vSphere Distributed Switch (NSX-T Data Center enabled) and a vSphere
Standard Switch or vSphere Distributed Switch.
LACP
n VDS does not support LACP in Active mode.
Logical Switch
n NSX Distributed Virtual port groups (in vCenter Server) support 10000 x N, where N is the
number of VDS switches in vCenter Server.
n NSX-T Data Center supports 10000 segments.
The original document also includes a table showing the relationship between the number of NSX
Distributed Virtual port groups, the minimum hostd memory on the host, and the number of
supported VMs.
The N-VDS switch can be configured in the enhanced data path mode only on an ESXi host. ENS
also supports traffic flowing through Edge VMs. In the enhanced data path mode, you can
configure overlay traffic and VLAN traffic.
With the N-VDS switch configured in the enhanced datapath mode, if a single logical core is
associated to a vNIC, then that logical core processes bidirectional traffic coming into or going
out of a vNIC. When multiple logical cores are configured, the host automatically determines
which logical core must process a vNIC's traffic.
n vNIC-count: The host assumes that transmitting incoming or outgoing traffic in either direction
of a vNIC requires the same amount of CPU resources. Each logical core is assigned the same
number of vNICs from the available pool of logical cores. This is the default mode. The
vNIC-count mode is reliable, but it is not optimal for asymmetric traffic.
n CPU-usage: The host predicts the CPU usage needed to transmit incoming or outgoing traffic in
each vNIC direction by using internal statistics. Based on the CPU usage needed to transmit
traffic, the host changes the logical core assignments to balance the load among logical cores.
The CPU-usage mode is better optimized than vNIC-count, but it is unreliable when traffic is not
steady.
In CPU usage mode, if the traffic transmitted changes frequently, then the predicted CPU
resources required and vNIC assignment might also change frequently. Too frequent assignment
changes might cause packet drops.
If the traffic patterns are symmetric among vNICs, the vNIC-count option provides reliable
behavior, which is less vulnerable to frequent changes. However, if the traffic patterns are
asymmetric, vNIC-count might result in packet drops since it does not distinguish the traffic
difference among vNICs.
When a vNIC is connected or disconnected or when a logical core is added or removed, hosts
automatically detect the changes and rebalance.
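The following Python snippet is a conceptual sketch only, not the ESXi implementation; it contrasts the two assignment strategies described above using made-up vNIC names and load figures.

from itertools import cycle

# vNIC-count mode: round-robin, so each logical core gets the same number of vNICs.
vnics = ["vnic-1", "vnic-2", "vnic-3", "vnic-4", "vnic-5"]
lcores = ["lcore-0", "lcore-1"]
round_robin = dict(zip(vnics, cycle(lcores)))
print(round_robin)  # {'vnic-1': 'lcore-0', 'vnic-2': 'lcore-1', 'vnic-3': 'lcore-0', ...}

# CPU-usage mode: place each vNIC on the currently least-loaded logical core,
# based on a predicted per-vNIC load (illustrative figures).
predicted_load = {"vnic-1": 0.6, "vnic-2": 0.1, "vnic-3": 0.5, "vnic-4": 0.1, "vnic-5": 0.2}
core_load = {core: 0.0 for core in lcores}
usage_based = {}
for vnic, load in sorted(predicted_load.items(), key=lambda item: -item[1]):
    target = min(core_load, key=core_load.get)  # least-loaded core so far
    usage_based[vnic] = target
    core_load[target] += load
print(usage_based, core_load)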
Procedure
u To switch from one mode to the other, run the following command.
Inter-VLAN routing overcomes the limit of 10 vNICs per VM. With NSX-T support for inter-VLAN
routing, many VLAN subinterfaces can be created on a vNIC and consumed by different networking
services. For example, one vNIC of a VM can be divided into several subinterfaces. Each
subinterface belongs to a subnet, which can host a networking service such as SNMP or DHCP. With
inter-VLAN routing, a subinterface on VLAN-10 can, for example, reach a subinterface on VLAN-10
or on any other VLAN.
Each vNIC on a VM is connected to the N-VDS through the parent logical port, which manages
untagged packets.
To create a subinterface, create a child port on the Enhanced N-VDS switch with an associated
VIF, using the API call described in the procedure. The subinterface tagged with a VLAN ID is
associated with a new logical switch; for example, a subinterface on VLAN 10 is attached to
logical switch LS-VLAN-10, and all subinterfaces of VLAN 10 must be attached to LS-VLAN-10. This
1:1 mapping between the VLAN ID of the subinterface and its associated logical switch is an
important prerequisite. For example, adding a child port with VLAN 20 to logical switch
LS-VLAN-10, which is mapped to VLAN 10, makes routing of packets between VLANs non-functional.
Prerequisites
n Before you associate a VLAN subinterface to a logical switch, ensure that the logical switch
does not have any other associations with another VLAN subinterface. If there is a mismatch,
inter-VLAN routing on overlay networks might not work.
Procedure
1 To create subinterfaces for a vNIC, ensure that the vNIC is updated to a parent port. Make
the following REST API call.
"admin_state" : "UP",
"logical_switch_id" : "UUID of Logical Switch to which the vNIC is connected",
"_revision" : 0
}
2 To create child ports on the N-VDS for a parent vNIC port that is associated with the
subinterfaces on a VM, make the following API call. Before making the API call, verify that a
logical switch exists to connect the child ports with the subinterfaces on the VM.
POST https://<nsx-mgr-ip>/api/v1/logical-ports/
{
"resource_type" : "LogicalPort",
"display_name" : "<Name of the Child PORT>",
"attachment" : {
"attachment_type" : "VIF",
"context" : {
"resource_type" : "VifAttachmentContext",
"parent_vif_id" : "<UUID of the PARENT port from Step 1>",
"traffic_tag" : <VLAN ID>,
"app_id" : "<ID of the attachment>", ==> display id(can give any string). Must be unique.
"vif_type" : "CHILD"
},
"id" : "<ID of the CHILD port>"
},
"logical_switch_id" : "<UUID of the Logical switch(not the PARENT PORT's logical switch) to
which Child port would be connected to>",
"address_bindings" : [ { "mac_address" : "<vNIC MAC address>", "ip_address" : "<IP address to
the corresponding VLAN>", "vlan" : <VLAN ID> } ],
"admin_state" : "UP"
}
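If you prefer to script the call in step 2, the following Python sketch (using the requests package) posts the same body shown above. The NSX Manager address, credentials, UUIDs, VLAN ID, and addresses are placeholders that you must replace, and certificate verification is disabled only to keep the sketch short.

import requests

NSX_MGR = "https://<nsx-mgr-ip>"          # placeholder
AUTH = ("admin", "<password>")            # placeholder

child_port = {
    "resource_type": "LogicalPort",
    "display_name": "child-port-vlan10",                  # any name
    "attachment": {
        "attachment_type": "VIF",
        "context": {
            "resource_type": "VifAttachmentContext",
            "parent_vif_id": "<UUID of the PARENT port from Step 1>",
            "traffic_tag": 10,                             # VLAN ID of the subinterface
            "app_id": "child-port-vlan10-app",             # any unique string
            "vif_type": "CHILD",
        },
        "id": "<ID of the CHILD port>",
    },
    # Logical switch mapped to VLAN 10 (not the parent port's logical switch).
    "logical_switch_id": "<UUID of LS-VLAN-10>",
    "address_bindings": [
        {"mac_address": "<vNIC MAC address>", "ip_address": "<IP on the VLAN 10 subnet>", "vlan": 10}
    ],
    "admin_state": "UP",
}

resp = requests.post(f"{NSX_MGR}/api/v1/logical-ports/", json=child_port, auth=AUTH, verify=False)
resp.raise_for_status()
print(resp.json()["id"])   # ID of the new child port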
Results
Prerequisites
The following requirements must be met to migrate to a VDS 7.0 host switch:
n NSX-T is no longer represented as an opaque network after migration. You may need to
update your scripts to manage the migrated representation of the NSX-T hosts.
Procedure
1 Migrate your host switch using API calls or run the commands for migration from the CLI.
a To verify that the hosts are ready for migration, make the following API call to run a
pre-check:
Example response:
{ "precheck_id": "166959af-7f4b-4d49-b294-907000eef889" }
Example response:
{
"precheck_id": "166959af-7f4b-4d49-b294-907000eef889",
"precheck_status": "PENDING_TOPOLOGY"
}
d For stateless hosts, nominate one of the hosts as the source host and initiate the
migration.
GET https://<nsx-ip>/api/v1/nvds-urt/topology/<precheck-id>
Example response:
{
"topology": [
{
"nvds_id": "21d4fd9b-7214-46b7-ab16-c4e7138f011f",
"nvds_name": "nsxvswitch",
"compute_manager_topology": [
{
"compute_manager_id": "fa1421d9-54a7-418e-9e18-7d0ff0d2f771",
"dvswitch": [
{
"data_center_id": "datacenter-3",
"vds_name": "CVDS-nsxvswitch-datacenter-3",
"vmknic": [
"vmk1"
],
"transport_node_id": [
"4a6161af-7eec-4780-8faf-0e0610c33c2e",
"5a78981a-03a6-40c0-8a77-28522bbf07a9",
"f9c6314d-9b99-48aa-bfc8-1b3a582162bb"
]
}
]
}
]
}
]
}
f Make the following API call to create a VDS with the recommended topology:
You can choose to rename the VDS. If a VDS with the name that you specified
already exists, the existing VDS is used.
Example input:
{
"topology": [
{
"nvds_id": "c8ff4053-502a-4636-8a38-4413c2a2d52f",
"nvds_name": "nsxvswitch",
"compute_manager_topology": [
{
"compute_manager_id": "fa1421d9-54a7-418e-9e18-7d0ff0d2f771",
"dvswitch": [
{
"data_center_id": "datacenter-3",
"vds_name": "test-dvs",
"transport_node_id": [
"65592db5-adad-47a7-8502-1ab548c63c6d",
"e57234ee-1d0d-425e-b6dd-7dbc5f6e6527",
"70f55855-6f81-45a8-bd40-d8b60ae45b82"
]
}
]
}
]
}
]
}
g To track the status of the migration, make the following API call:
When the host is ready for migration, precheck_status changes from APPLYING_TOPOLOGY to
UPGRADE_READY.
Refer to the NSX-T Data Center API Guide for more information on API parameters.
h Place the ESXi host in maintenance mode and evacuate the powered off VMs. For a
stateless host, reboot the nominated source host.
i To initiate the N-VDS to VDS migration, make the following API call:
The hosts are migrated asynchronously. You can upgrade multiple transport nodes in
parallel by calling the API for a desired set of hosts. Services like DRS continue to run
as expected during the process of migration.
Example response:
{
"precheck_id": "c306e279-8b75-4160-919c-6c40030fb3d0",
"precheck_status": "READY",
"migration_state": [
{
"host": "65592db5-adad-47a7-8502-1ab548c63c6d",
"overall_state": "UPGRADE_READY"
},
{
"host": "e57234ee-1d0d-425e-b6dd-7dbc5f6e6527",
"overall_state": "UPGRADE_READY"
},
{
"host": "70f55855-6f81-45a8-bd40-d8b60ae45b82",
"overall_state": "SUCCESS"
}
]
}
In the event of failures, the overall_state changes to FAILED, indicating the reason for
the migration failure. Run the migrate_to_vds action to retry the migration task.
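As an illustration, the following Python sketch (using the requests package) scripts the topology call shown earlier in this procedure and prints the recommended VDS topology. The NSX Manager address, credentials, and pre-check ID are placeholders, and certificate verification is disabled only to keep the sketch short.

import requests

NSX_MGR = "https://<nsx-ip>"              # placeholder
AUTH = ("admin", "<password>")            # placeholder
PRECHECK_ID = "166959af-7f4b-4d49-b294-907000eef889"   # returned by the pre-check call

# GET /api/v1/nvds-urt/topology/<precheck-id> returns the recommended topology.
resp = requests.get(f"{NSX_MGR}/api/v1/nvds-urt/topology/{PRECHECK_ID}", auth=AUTH, verify=False)
resp.raise_for_status()

for nvds in resp.json()["topology"]:
    for cm in nvds["compute_manager_topology"]:
        for dvs in cm["dvswitch"]:
            print("N-VDS:", nvds["nvds_name"],
                  "-> proposed VDS:", dvs["vds_name"],
                  "| transport nodes:", dvs["transport_node_id"])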
1 Extract the host profile from the migrated host and attach it to the cluster.
a To verify that the hosts are ready for migration, run the following command to run a
pre-check:
vds-migrate precheck
Sample output:
vds-migrate show-topology
Sample output:
d Run the following command to create a VDS with the recommended topology:
vds-migrate apply-topology
Sample output:
n Components running on the transport node (for example, between virtual machines).
If an N-VDS is used to forward traffic between internal components and the physical network, the
N-VDS must own one or more physical interfaces (pNICs) on the transport node. As with other
virtual switches, an N-VDS cannot share a physical interface with another N-VDS; it can coexist
with another N-VDS (or another vSwitch) only when using a separate set of pNICs. While N-VDS
behavior in realizing connectivity is identical regardless of the specific implementation, data
plane realization and enforcement capabilities differ based on the compute manager and the
associated hypervisor capability.
To change settings of IGMP snooping on an N-VDS switch, run the following CLI commands.
For more details, see the NSX-T Data Center Command-Line Interface Reference.
6 Virtual Private Network (VPN)
NSX-T Data Center supports IPSec Virtual Private Network (IPSec VPN) and Layer 2 VPN (L2
VPN) on an NSX Edge node. IPSec VPN offers site-to-site connectivity between an NSX Edge
node and remote sites. With L2 VPN, you can extend your data center by enabling virtual
machines to keep their network connectivity across geographical boundaries while using the
same IP address.
Note IPSec VPN and L2 VPN are not supported in the NSX-T Data Center limited export release.
You must have a working NSX Edge node, with at least one configured Tier-0 or Tier-1 gateway,
before you can configure a VPN service. For more information, see "NSX Edge Installation" in the
NSX-T Data Center Installation Guide.
Beginning with NSX-T Data Center 2.4, you can also configure new VPN services using the NSX
Manager user interface. In earlier releases of NSX-T Data Center, you can only configure VPN
services using REST API calls.
Important When using NSX-T Data Center 2.4 or later to configure VPN services, you must use
new objects, such as Tier-0 gateways, that were created using the NSX Manager UI or Policy
APIs that are included with NSX-T Data Center 2.4 or later release. To use existing Tier-0 or Tier-1
logical routers that were configured before the NSX-T Data Center 2.4 release, you must
continue to use API calls to configure a VPN service.
System-default configuration profiles with predefined values and settings are made available for
your use during a VPN service configuration. You can also define new profiles with different
settings and select them during the VPN service configuration.
The Intel QuickAssist Technology (QAT) feature on a bare metal server is supported for IPSec
VPN bulk cryptography. Support for this feature began with NSX-T Data Center 3.0. For more
information on support of the QAT feature on bare metal servers, see the NSX-T Data Center
Installation Guide.
IPSec VPN uses the IKE protocol to negotiate security parameters. The default UDP port is set to
500. If NAT is detected in the gateway, the port is set to UDP 4500.
Beginning with NSX-T Data Center 2.5, IPSec VPN services are supported on both Tier-0 and
Tier-1 gateways. See Add a Tier-0 Gateway or Add a Tier-1 Gateway for more information. The
Tier-0 or Tier-1 gateway must be in Active-Standby high-availability mode when used for an IPSec
VPN service. You can use segments that are connected to either Tier-0 or Tier-1 gateways when
configuring an IPSec VPN service.
An IPsec VPN service in NSX-T Data Center uses the gateway-level failover functionality to
support high availability at the VPN service level. Before the NSX-T Data Center 3.0 release, the
IPSec VPN state was not synchronized; on failover, VPN configuration data was synchronized and
the tunnels were re-established. Beginning with the NSX-T Data Center 3.0 release, the IPSec VPN
state is synchronized to the standby NSX Edge node, so that when the current active NSX Edge node
fails, the original standby NSX Edge node becomes the new active NSX Edge node without
renegotiating the tunnels. This feature is supported for both policy-based and route-based IPSec
VPN services.
Pre-shared key mode authentication and IP unicast traffic are supported between the NSX Edge
node and remote VPN sites. In addition, certificate authentication is supported beginning with
NSX-T Data Center 2.4. Only certificate types signed by one of the following signature hash
algorithms are supported.
n SHA256withRSA
n SHA384withRSA
n SHA512withRSA
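As a quick local check, the following Python sketch (using the third-party cryptography package) verifies that a certificate is signed with one of the algorithms listed above. It is not part of NSX-T, and the certificate file name is a placeholder.

from cryptography import x509
from cryptography.x509.oid import SignatureAlgorithmOID

SUPPORTED = {
    SignatureAlgorithmOID.RSA_WITH_SHA256,  # SHA256withRSA
    SignatureAlgorithmOID.RSA_WITH_SHA384,  # SHA384withRSA
    SignatureAlgorithmOID.RSA_WITH_SHA512,  # SHA512withRSA
}

with open("site-cert.pem", "rb") as f:      # placeholder file name
    cert = x509.load_pem_x509_certificate(f.read())

print("Signature algorithm OID:", cert.signature_algorithm_oid.dotted_string)
print("Signed with a supported algorithm:", cert.signature_algorithm_oid in SUPPORTED)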
This type of VPN is considered static because when the local network topology and configuration
change, the VPN policy settings must also be updated to accommodate the changes.
When using a policy-based IPSec VPN with NSX-T Data Center, you use IPSec tunnels to connect
one or more local subnets behind the NSX Edge node with the peer subnets on the remote VPN
site.
You can deploy an NSX Edge node behind a NAT device. In this deployment, the NAT device
translates the VPN address of an NSX Edge node to a publicly accessible address facing the
Internet. Remote VPN sites use this public address to access the NSX Edge node.
You can place remote VPN sites behind a NAT device as well. You must provide the remote VPN
site's public IP address and its ID (either FQDN or IP address) to set up the IPSec tunnel. On both
ends, static one-to-one NAT is required for the VPN address.
Note DNAT is not supported on a tier-1 gateway where policy-based IPSec VPN is configured.
IPSec VPN can provide a secure communications tunnel between an on-premises network and a
network in your cloud software-defined data center (SDDC). For policy-based IPSec VPN, the
local and peer networks provided in the session must be configured symmetrically at both
endpoints. For example, if the cloud-SDDC has the local networks configured as X, Y, Z subnets
and the peer network is A, then the on-premises VPN configuration must have A as the local
network and X, Y, Z as the peer network. This case is true even when A is set to
ANY (0.0.0.0/0). For example, if the cloud-SDDC policy-based VPN session has the local
network configured as 10.1.1.0/24 and the peer network as 0.0.0.0/0, at the on-premises VPN
endpoint, the VPN configuration must have 0.0.0.0/0 as the local network and 10.1.1.0/24 as
the peer network. If misconfigured, the IPSec VPN tunnel negotiation might fail.
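The following Python snippet restates the example above as data; the dictionary layout is illustrative only and is not the NSX-T configuration schema.

# The subnets mirror the example in the text.
cloud_sddc_session = {
    "local_networks": ["10.1.1.0/24"],  # cloud SDDC side
    "peer_networks": ["0.0.0.0/0"],     # ANY
}
on_prem_session = {
    "local_networks": ["0.0.0.0/0"],
    "peer_networks": ["10.1.1.0/24"],
}

# The endpoints must mirror each other: one side's local networks are the other
# side's peer networks. If they do not match, tunnel negotiation might fail.
assert cloud_sddc_session["local_networks"] == on_prem_session["peer_networks"]
assert cloud_sddc_session["peer_networks"] == on_prem_session["local_networks"]
print("The two endpoints are configured symmetrically.")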
The size of the NSX Edge node determines the maximum number of supported tunnels. For a Small
NSX Edge node, IPSec VPN is N/A (POC/Lab only).
Restriction The inherent architecture of policy-based IPSec VPN prevents you from setting up
VPN tunnel redundancy.
For information about configuring a policy-based IPSec VPN, see Add an IPSec VPN Service.
Note
n OSPF dynamic routing is not supported for routing through IPSec VPN tunnels.
n Dynamic routing for VTI is not supported on VPN that is based on Tier-1 gateways.
Route-based IPSec VPN is similar to Generic Routing Encapsulation (GRE) over IPSec, with the
exception that no additional encapsulation is added to the packet before applying IPSec
processing.
In this VPN tunneling approach, VTIs are created on the NSX Edge node. Each VTI is associated
with an IPSec tunnel. The encrypted traffic is routed from one site to another site through the VTI
interfaces. IPSec processing happens only at the VTI.
Important
n In NSX-T Data Center, IPSec VPN tunnel redundancy is supported using BGP only.
n Do not use static routing for route-based IPSec VPN tunnels to achieve VPN tunnel
redundancy.
The following figure shows a logical representation of IPSec VPN tunnel redundancy between
two sites. In this figure, Site A and Site B represent two data centers. For this example, assume
that NSX-T Data Center is not managing the Edge VPN Gateways in Site A, and that NSX-T Data
Center is managing an Edge Gateway virtual appliance in Site B.
(Figure: IPSec VPN tunnel redundancy between Site A and Site B. Each site terminates the two VPN
tunnels on VTIs and runs BGP across them; uplinks, routers, and subnets at both sites complete
the topology.)
As shown in the figure, you can configure two independent IPSec VPN tunnels by using VTIs.
Dynamic routing is configured using BGP protocol to achieve tunnel redundancy. If both IPSec
VPN tunnels are available, they remain in service. All the traffic destined from Site A to Site B
through the NSX Edge node is routed through the VTI. The data traffic undergoes IPSec
processing and goes out of its associated NSX Edge node uplink interface. All the incoming IPSec
traffic received from Site B VPN Gateway on the NSX Edge node uplink interface is forwarded to
the VTI after decryption, and then usual routing takes place.
You must configure the BGP HoldDown timer and KeepAlive timer values to detect loss of
connectivity with the peer within the required failover time. See Configure BGP.
Note This L2 VPN feature is available only for NSX-T Data Center and does not have any third-
party interoperability.
The extended network is a single subnet with a single broadcast domain, which means the VMs
remain on the same subnet when they are moved between sites. The VMs' IP addresses do not
change when they are moved. So, enterprises can seamlessly migrate VMs between network
sites. The VMs can run on either VNI-based networks or VLAN-based networks. For cloud
providers, L2 VPN provides a mechanism to onboard tenants without modifying existing IP
addresses used by their workloads and applications.
L2 VPN services are supported on both Tier-0 and Tier-1 gateways. Only one L2 VPN service
(either client or server) can be configured for either Tier-0 or Tier-1 gateway.
Each L2 VPN session has one Generic Routing Encapsulation (GRE) tunnel. Tunnel redundancy is
not supported. An L2 VPN session can extend up to 4094 L2 segments.
VLAN-based and VNI-based segments can be extended using L2 VPN service on an NSX Edge
node that is managed in an NSX-T Data Center environment. You can extend L2 networks from
VLAN to VNI, VLAN to VLAN, and VNI to VNI.
Segments can be connected to either Tier-0 or Tier-1 gateways and use L2 VPN services.
VLAN trunking using an NSX-managed virtual distributed switch (N-VDS) on ESXi is also supported.
If there are sufficient compute and I/O resources, an NSX Edge cluster can extend multiple VLAN
networks over a single interface using VLAN trunking.
Beginning with NSX-T Data Center 3.0, the L2 VPN path MTU discovery (PMTUD) feature is
enabled by default. With the PMTUD enabled, the source host learns the path MTU value for the
destination host through the L2 VPN tunnel and limits the length of the outgoing IP packet to the
learned value. This feature helps avoid IP fragmentation and reassembly within the tunnel, as a
result improving the L2 VPN performance.
The L2 VPN PMTUD feature is not applicable for non-IP packets, non-unicast packets, and unicast
packets with the DF (Don’t Fragment) flag cleared. The global PMTU cache timer expires every 10
minutes. To disable or enable the L2 VPN PMTUD feature, see Enable and Disable L2 VPN Path MTU
Discovery.
n Between an NSX-T Data Center L2 VPN server and an L2 VPN client hosted on an NSX Edge
that is managed in an NSX Data Center for vSphere environment. A managed L2 VPN client
supports both VLANs and VNIs.
n Between an NSX-T Data Center L2 VPN server and an L2 VPN client hosted on a standalone
or unmanaged NSX Edge. An unmanaged L2 VPN client supports VLANs only.
n Between an NSX-T Data Center L2 VPN server and an L2 VPN client hosted on an
autonomous NSX Edge. An autonomous L2 VPN client supports VLANs only.
n Beginning with NSX-T Data Center 2.4 release, L2 VPN service support is available between
an NSX-T Data Center L2 VPN server and NSX-T Data Center L2 VPN clients. In this scenario,
you can extend the logical L2 segments between two on-premises software-defined data
centers (SDDCs).
Prerequisites
You must have the user name and password for the admin account to log in to the NSX Edge
node.
Procedure
1 Log in with admin privileges to the CLI of the NSX Edge node.
2 To check the status of the L2 VPN PMTU discovery feature, use the following command.
If the feature is enabled, you see the following output: l2vpn_pmtu_enabled : True.
If the feature is disabled, you see the following output: l2vpn_pmtu_enabled : False.
3 To disable the L2 VPN PMTU discovery feature, use the following command.
The following sections provide information about the workflows required to set up the VPN
service that you need. The topics that follow these sections provide details on how to add either
an IPSec VPN or an L2 VPN using the NSX Manager user interface.
1 Create and enable an IPSec VPN service using an existing Tier-0 or Tier-1 gateway. See Add
an IPSec VPN Service.
2 Create a DPD (dead peer detection) profile, if you prefer not to use the system default. See
Add DPD Profiles.
3 To use a non-system default IKE profile, define an IKE (Internet Key Exchange) profile. See
Add IKE Profiles.
5 Use Add Local Endpoints to create a VPN server hosted on the NSX Edge.
6 Configure a policy-based IPSec VPN session, apply the profiles, and attach the local endpoint
to it. See Add a Policy-Based IPSec Session. Specify the local and peer subnets to be used for
the tunnel. Traffic from a local subnet destined to the peer subnet is protected using the
tunnel defined in the session.
1 Configure and enable an IPSec VPN service using an existing Tier-0 or Tier-1 gateway. See
Add an IPSec VPN Service.
2 Define an IKE profile if you prefer not to use the default IKE profile. See Add IKE Profiles.
3 If you decide not to use the system default IPSec profile, create one using Add IPSec Profiles.
4 Create a DPD profile if you do not want to use the default DPD profile. See Add DPD
Profiles.
6 Configure a route-based IPSec VPN session, apply the profiles, and attach the local endpoint
to the session. Provide a VTI IP in the configuration and use the same IP to configure routing.
The routes can be static or dynamic (using BGP). See Add a Route-Based IPSec Session.
a Configure a route-based IPSec VPN tunnel with a Tier-0 or Tier-1 gateway and an L2 VPN
Server service using that route-based IPSec tunnel. See Add an L2 VPN Server Service.
b Configure an L2 VPN server session, which binds the newly created route-based IPSec
VPN service and the L2 VPN server service, and automatically allocates the GRE IP
addresses. See Add an L2 VPN Server Session.
c Add segments to the L2 VPN Server sessions. This step is also described in Add an L2
VPN Server Session.
d Use Download the Remote Side L2 VPN Configuration File to obtain the peer code for the
L2 VPN Server service session, which must be applied on the remote site and used to
configure the L2 VPN Client session automatically.
a Configure another route-based IPSec VPN service using a different Tier-0 or Tier-1
gateway and configure an L2 VPN Client service using that Tier-0 or Tier-1 gateway that
you just configured. See Add an L2 VPN Client Service for information.
b Define the L2 VPN Client sessions by importing the peer code generated by the L2 VPN
Server service. See Add an L2 VPN Client Session.
c Add segments to the L2 VPN Client sessions defined in the previous step. This step is
described in Add an L2 VPN Client Session.
You must create the IPSec VPN service first before you can configure either a policy-based or a
route-based IPSec VPN session.
Note IPSec VPN is not supported in the NSX-T Data Center limited export release.
IPSec VPN is not supported when the local endpoint IP address goes through NAT on the same
logical router on which the IPSec VPN session is configured.
Prerequisites
n Familiarize yourself with the IPSec VPN. See Understanding IPSec VPN.
n You must have at least one Tier-0 or Tier-1 gateway configured and available for use. See
Add a Tier-0 Gateway or Add a Tier-1 Gateway for more information.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
5 From the Tier-0/Tier-1 Gateway drop-down menu, select the Tier-0 or Tier-1 gateway to
associate with this IPSec VPN service.
By default, the value is set to Enabled, which means the IPSec VPN service is enabled on the
Tier-0 or Tier-1 gateway after the new IPSec VPN service is configured.
8 Enter a value for Tags if you want to include this service in a tag group.
9 To enable or disable the stateful synchronization of VPN sessions, toggle Session sync.
10 Click Global Bypass Rules if you want to allow data packets to be exchanged between the
specified local and remote IP addresses without any IPSec protection. In the Local Networks
and Remote Networks text boxes, enter the list of local and remote subnets between which
the bypass rules are applied.
If you enable these rules, data packets are exchanged between the specified local and
remote IP sites even if their IP addresses are specified in the IPSec session rules. The default
is to use the IPSec protection when data is exchanged between local and remote sites. These
rules apply for all IPSec VPN sessions created within this IPSec VPN service.
11 Click Save.
After the new IPSec VPN service is created successfully, you are asked whether you want to
continue with the rest of the IPSec VPN configuration. If you click Yes, you are taken back to
the Add IPSec VPN Service panel. The Sessions link is now enabled and you can click it to
add an IPSec VPN session.
What to do next
Use information in Adding IPSec VPN Sessions to guide you in adding an IPSec VPN session. You
also provide information for the profiles and local endpoint that are required to finish the IPSec
VPN configuration.
To configure an L2 VPN service, use the information in the topics that follow in this section.
Prerequisites
n Familiarize yourself with IPsec VPN and L2 VPN. See Understanding IPSec VPN and
Understanding Layer 2 VPN.
n You must have at least one Tier-0 or Tier-1 gateway configured and available for use. See
Add a Tier-0 Gateway or Add a Tier-1 Gateway.
Procedure
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
2 (Optional) If an IPSec VPN service does not exist yet on either a Tier-0 or Tier-1 gateway that
you want to configure as the L2 VPN server, create the service using the following steps.
a Navigate to the Networking > VPN > VPN Services tab and select Add Service > IPSec.
c From the Tier-0/Tier-1 Gateway drop-down menu, select the gateway to use with the L2
VPN server.
d If you want to use values different from the system defaults, set the rest of the properties
on the Add IPSec Service pane, as needed.
e Click Save and when prompted if you want to continue configuring the IPSec VPN
service, select No.
3 Navigate to the Networking > VPN > VPN Services tab and select Add Service > L2 VPN
Server to create an L2 VPN server.
5 From the Tier-0/Tier-1 Gateway drop-down menu, select the same Tier-0 or Tier-1 gateway
that you used with the IPSec service you created a moment ago.
7 Enter a value for Tags if you want to include this service in a tag group.
By default, the value is set to Disabled, which means the traffic received from the L2 VPN
clients is only replicated to the segments connected to the L2 VPN server. If this property is
set to Enabled, the traffic from any L2 VPN client is replicated to all other L2 VPN clients.
9 Click Save.
After the new L2 VPN server is created successfully, you are asked whether you want to
continue with the rest of the L2 VPN service configuration. If you click Yes, you are taken
back to the Add L2 VPN Server pane and the Session link is enabled. You can use that link to
create an L2 VPN server session or use the Networking > VPN > L2 VPN Sessions tab.
What to do next
Configure an L2 VPN server session for the L2 VPN server that you configured using information
in Add an L2 VPN Server Session as a guide.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
2 (Optional) If an IPSec VPN service does not exist yet on either a Tier-0 or Tier-1 gateway that
you want to configure as the L2 VPN client, create the service using the following steps.
a Navigate to the Networking > VPN > VPN Services tab and select Add Service > IPSec.
c From the Tier-0/Tier-1 Gateway drop-down menu, select a Tier-0 or Tier-1 gateway to
use with the L2 VPN client.
d If you want to use values different from the system defaults, set the rest of the properties
on the Add IPSec Service pane, as needed.
e Click Save and when prompted if you want to continue configuring the IPSec VPN
service, select No.
3 Navigate to the Networking > VPN > VPN Services tab and select Add Service > L2 VPN
Client.
5 From the Tier-0/Tier-1 Gateway drop-down menu, select the same Tier-0 or Tier-1 gateway
that you used with the route-based IPSec tunnel you created a moment ago.
7 Click Save.
After the new L2 VPN client service is created successfully, you are asked whether you want
to continue with the rest of the L2 VPN client configuration. If you click Yes, you are taken
back to the Add L2 VPN Client pane and the Session link is enabled. You can use that link to
create an L2 VPN client session or use the Networking > VPN > L2 VPN Sessions tab.
What to do next
Configure an L2 VPN client session for the L2 VPN Client service that you configured. Use the
information in Add an L2 VPN Client Session as a guide.
The following steps use the IPSec Sessions tab on the NSX Manager UI to create a policy-based
IPSec session. You also add information for the tunnel, IKE, and DPD profiles, and select an
existing local endpoint to use with the policy-based IPSec VPN.
Note You can also add the IPSec VPN sessions immediately after you have successfully
configured the IPSec VPN service. You click Yes when prompted to continue with the IPSec VPN
service configuration and select Sessions > Add Sessions on the Add IPsec Service panel. The
first few steps in the following procedure assume you selected No to the prompt to continue with
the IPSec VPN service configuration. If you selected Yes, proceed to step 3 in the following steps
to guide you with the rest of the policy-based IPSec VPN session configuration.
Prerequisites
n You must have configured an IPSec VPN service before proceeding. See Add an IPSec VPN
Service.
n Obtain the information for the local endpoint, IP address for the peer site, local network
subnet, and remote network subnet to use with the policy-based IPSec VPN session you are
adding. To create a local endpoint, see Add Local Endpoints.
n If you are using a Pre-Shared Key (PSK) for authentication, obtain the PSK value.
n If you are using a certificate for authentication, ensure that the necessary server certificates
and corresponding CA-signed certificates are already imported. See Chapter 19 Certificates.
n If you do not want to use the defaults for the IPSec tunnel, IKE, or dead peer detection (DPD)
profiles provided by NSX-T Data Center, configure the profiles you want to use instead. See
Adding Profiles for information.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
5 From the VPN Service drop-down menu, select the IPSec VPN service to which you want to
add this new IPSec session.
Note If you are adding this IPSec session from the Add IPSec Sessions dialog box, the VPN
Service name is already indicated above the Add IPSec Session button.
This local endpoint value is required and identifies the local NSX Edge node. If you want to
create a different local endpoint, click the three-dot menu ( ) and select Add Local Endpoint.
7 In the Remote IP text box, enter the required IP address of the remote site.
By default, the value is set to Enabled, which means the IPSec VPN session is to be configured
down to the NSX Edge node.
10 (Optional) From the Compliance suite drop-down menu, select a security compliance suite.
Note Compliance suite support is provided beginning with NSX-T Data Center 2.5. See
About Supported Compliance Suites for more information.
The default value selected is None. If you select a compliance suite, the Authentication Mode
is set to Certificate and in the Advanced Properties section, the values for IKE profile and
IPSec profile are set to the system-defined profiles for the selected security compliance
suite. You cannot edit these system-defined profiles.
11 If the Compliance Suite is set to None, select a mode from the Authentication Mode drop-
down menu.
The default authentication mode used is PSK, which means a secret key shared between NSX
Edge and the remote site is used for the IPSec VPN session. If you select Certificate, the site
certificate that was used to configure the local endpoint is used for authentication.
12 In the Local Networks and Remote Networks text boxes, enter at least one IP subnet address
to use for this policy-based IPSec VPN session.
13 If Authentication Mode is set to PSK, enter the key value in the Pre-shared Key text box.
This secret key can be a string with a maximum length of 128 characters.
Caution Be careful when sharing and storing a PSK value because it contains sensitive
information.
For peer sites using PSK authentication, this ID value must be the public IP address or the
FQDN of the peer site. For peer sites using certificate authentication, this ID value must be
the common name (CN) or distinguished name (DN) used in the peer site's certificate.
Note If the peer site's certificate contains an email address in the DN string, enter the
Remote ID value using the following format as an example.
If the local site's certificate contains an email address in the DN string and the peer site uses
the strongSwan IPsec implementation, enter the local site's ID value in that peer site. The
following is an example.
15 To change the profiles, initiation mode, TCP MSS clamping mode, and tags used by the
policy-based IPSec VPN session, click Advanced Properties.
By default, the system generated profiles are used. Select another available profile if you do
not want to use the default. If you want to use a profile that is not configured yet, click the
three-dot menu ( ) to create another profile. See Adding Profiles.
a If the IKE Profiles drop-down menu is enabled, select the IKE profile.
b Select the IPsec tunnel profile, if the IPSec Profiles drop-down menu is not disabled.
c Select the preferred DPD profile if the DPD Profiles drop-down menu is enabled.
d Select the preferred mode from the Connection Initiation Mode drop-down menu.
Connection initiation mode defines the policy used by the local endpoint in the process of
tunnel creation. The default value is Initiator. The following table describes the different
connection initiation modes available.
On Demand In this mode, the local endpoint initiates the IPSec VPN
tunnel creation after the first packet matching the
policy rule is received. It also responds to the incoming
initiation request.
Respond Only The IPSec VPN never initiates a connection. The peer
site always initiates the connection request and the
local endpoint responds to that connection request.
e If you want to reduce the maximum segment size (MSS) payload of the TCP session
during the IPSec connection, enable TCP MSS Clamping, select the TCP MSS direction
value, and optionally set the TCP MSS Value.
f If you want to include this session as part of a specific group, enter the tag name in Tags.
16 Click Save.
Results
When the new policy-based IPSec VPN session is configured successfully, it is added to the list of
available IPsec VPN sessions. It is in read-only mode.
What to do next
n Verify that the IPSec VPN tunnel status is Up. See Monitor and Troubleshoot VPN Sessions
for information.
n If necessary, manage the IPSec VPN session information by clicking the three-dot menu ( )
on the left-side of the session's row. Select one of the actions you are allowed to perform.
The steps described in this topic use the IPSec Sessions tab to create a route-based IPSec
session. You also add information for the tunnel, IKE, and DPD profiles, and select an existing
local endpoint to use with the route-based IPSec VPN.
Note You can also add the IPSec VPN sessions immediately after you have successfully
configured the IPSec VPN service. You click Yes when prompted to continue with the IPSec VPN
service configuration and select Sessions > Add Sessions on the Add IPsec Service panel. The
first few steps in the following procedure assume you selected No to the prompt to continue with
the IPSec VPN service configuration. If you selected Yes, proceed to step 3 in the following steps
to guide you with the rest of the route-based IPSec VPN session configuration.
Prerequisites
n You must have configured an IPSec VPN service before proceeding. See Add an IPSec VPN
Service.
n Obtain the information for the local endpoint, IP address for the peer site, and tunnel service
IP subnet address to use with the route-based IPSec session you are adding. To create a local
endpoint, see Add Local Endpoints.
n If you are using a Pre-Shared Key (PSK) for authentication, obtain the PSK value.
n If you are using a certificate for authentication, ensure that the necessary server certificates
and corresponding CA-signed certificates are already imported. See Chapter 19 Certificates.
n If you do not want to use the default values for the IPSec tunnel, IKE, or dead peer detection
(DPD) profiles provided by NSX-T Data Center, configure the profiles you want to use
instead. See Adding Profiles for information.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
5 From the VPN Service drop-down menu, select the IPSec VPN service to which you want to
add this new IPSec session.
Note If you are adding this IPSec session from the Add IPSec Sessions dialog box, the VPN
Service name is already indicated above the Add IPSec Session button.
This local endpoint value is required and identifies the local NSX Edge node. If you want to
create a different local endpoint, click the three-dot menu ( ) and select Add Local Endpoint.
7 In the Remote IP text box, enter the IP address of the remote site.
By default, the value is set to Enabled, which means the IPSec session is to be configured
down to the NSX Edge node.
10 (Optional) From the Compliance suite drop-down menu, select a security compliance suite.
Note Compliance suite support is provided beginning with NSX-T Data Center 2.5. See
About Supported Compliance Suites for more information.
The default value is set to None. If you select a compliance suite, the Authentication Mode is
set to Certificate and in the Advanced Properties section, the values for IKE profile and
IPSec profile are set to the system-defined profiles for the selected compliance suite. You
cannot edit these system-defined profiles.
12 If the Compliance Suite is set to None, select a mode from the Authentication Mode drop-
down menu.
The default authentication mode used is PSK, which means a secret key shared between NSX
Edge and the remote site is used for the IPSec VPN session. If you select Certificate, the site
certificate that was used to configure the local endpoint is used for authentication.
13 If you selected PSK for the authentication mode, enter the key value in the Pre-shared Key
text box.
This secret key can be a string with a maximum length of 128 characters.
Caution Be careful when sharing and storing a PSK value because it contains sensitive
information.
For peer sites using PSK authentication, this ID value must be the public IP address or the
FQDN of the peer site. For peer sites using certificate authentication, this ID value must be
the common name (CN) or distinguished name (DN) used in the peer site's certificate.
Note If the peer site's certificate contains an email address in the DN string, enter the
Remote ID value using the following format as an example.
If the local site's certificate contains an email address in the DN string and the peer site uses
the strongSwan IPsec implementation, enter the local site's ID value in that peer site. The
following is an example.
15 If you want to include this IPSec session as part of a specific group tag, enter the tag name in
Tags.
16 To change the profiles, initiation mode, TCP MSS clamping mode, and tags used by the route-
based IPSec VPN session, click Advanced Properties.
By default, the system-generated profiles are used. Select another available profile if you do
not want to use the default. If you want to use a profile that is not configured yet, click the
three-dot menu ( ) to create another profile. See Adding Profiles.
a If the IKE Profiles drop-down menu is enabled, select the IKE profile.
b Select the IPsec tunnel profile, if the IPSec Profiles drop-down menu is not disabled.
c Select the preferred DPD profile if the DPD Profiles drop-down menu is enabled.
d Select the preferred mode from the Connection Initiation Mode drop-down menu.
Connection initiation mode defines the policy used by the local endpoint in the process of
tunnel creation. The default value is Initiator. The following table describes the different
connection initiation modes available.
Respond Only The IPSec VPN never initiates a connection. The peer
site always initiates the connection request and the
local endpoint responds to that connection request.
17 If you want to reduce the maximum segment size (MSS) payload of the TCP session during
the IPSec connection, enable TCP MSS Clamping, select the TCP MSS direction value, and
optionally set the TCP MSS Value.
18 If you want to include this IPSec session as part of a specific group tag, enter the tag name in
Tags.
19 Click Save.
Results
When the new route-based IPSec VPN session is configured successfully, it is added to the list of
available IPsec VPN sessions. It is in read-only mode.
What to do next
n Verify that the IPSec VPN tunnel status is Up. See Monitor and Troubleshoot VPN Sessions
for information.
n Configure routing using either a static route or BGP. See Configure a Static Route or
Configure BGP.
n If necessary, manage the IPSec VPN session information by clicking the three-dot menu ( )
on the left-side of the session's row. Select one of the actions you can perform.
A security compliance suite has predefined values that are used for different security parameters
and that cannot be modified. When you select a compliance suite, the predefined values are
automatically used for the security profile of the IPSec VPN session you are configuring.
The following table lists the compliance suites that are supported for IKE profiles in NSX-T Data
Center and the values that are predefined for each.
The following table lists the compliance suites that are supported for IPSec profiles in NSX-T Data
Center and the values that are predefined for each.
TCP MSS is the maximum amount of data in bytes that a host is willing to accept in a single TCP
segment. Each end of a TCP connection sends its desired MSS value to its peer-end during a
three-way handshake, where MSS is one of the TCP header options used in a TCP SYN packet.
TCP MSS is calculated based on the maximum transmission unit (MTU) of the egress interface of
the sender host.
When TCP traffic goes through an IPSec VPN or any kind of VPN tunnel, additional headers are
added to the original packet to keep it secure. For IPSec tunnel mode, the additional headers
used are IP, ESP, and optionally UDP (if port translation is present in the network). Because of
these additional headers, the size of the encapsulated packet goes beyond the MTU of the VPN
interface. The packet can get fragmented or dropped based on the DF policy.
To avoid packet fragmentation or drop, you can adjust the MSS value for the IPSec session by
enabling the TCP MSS clamping feature. Navigate to Networking > VPN > IPSec Sessions. When
you are adding an IPSec session or editing an existing one, expand the Advanced Properties
section, and enable TCP MSS Clamping.
You can configure the pre-calculated MSS value suitable for the IPSec session by setting both
TCP MSS Direction and TCP MSS Value. The configured MSS value is used for MSS clamping.
You can opt to use the dynamic MSS calculation by setting the TCP MSS Direction and leaving
TCP MSS Value blank. The MSS value is auto-calculated based on the VPN interface MTU, VPN
overhead, and the path MTU (PMTU) when it is already determined. The effective MSS is
recalculated during each TCP handshake to handle the MTU or PMTU changes dynamically.
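As a rough illustration of why clamping is needed, the following Python snippet estimates a clamped MSS for a 1500-byte MTU. The per-header byte counts are assumptions (they depend on the negotiated ciphers and on whether NAT-T is in use), not NSX-T internals.

# Illustrative calculation only; the overhead figures below are assumptions.
VPN_INTERFACE_MTU = 1500
OUTER_IP_HEADER = 20        # new IP header added in IPSec tunnel mode
NAT_T_UDP_HEADER = 8        # present only when port translation is detected
ESP_OVERHEAD = 56           # ESP header, IV, padding, and ICV (cipher dependent)
TCP_IP_HEADERS = 40         # inner IP (20) + TCP (20) headers

mss = VPN_INTERFACE_MTU - (OUTER_IP_HEADER + NAT_T_UDP_HEADER + ESP_OVERHEAD) - TCP_IP_HEADERS
print("Clamped TCP MSS (approximate):", mss)   # 1376 with these example figures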
The following steps use the L2 VPN Sessions tab on the NSX Manager UI to create an L2 VPN
Server session. You also select an existing local endpoint and segment to attach to the L2 VPN
Server session.
Note You can also add an L2 VPN Server session immediately after you have successfully
configured the L2 VPN Server service. You click Yes when prompted to continue with the L2 VPN
Server configuration and select Sessions > Add Sessions on the Add L2 VPN Server panel. The
first few steps in the following procedure assume you selected No to the prompt to continue with
the L2 VPN Server configuration. If you selected Yes, proceed to step 3 in the following steps to
guide you with the rest of the L2 VPN Server session configuration.
Prerequisites
n You must have configured an L2 VPN Server service before proceeding. See Add an L2 VPN
Server Service.
n Obtain the information for the local endpoint and remote IP to use with the L2 VPN Server
session you are adding. To create a local endpoint, see Add Local Endpoints.
n Obtain the values for the pre-shared key (PSK) and the tunnel interface subnet to use with
the L2 VPN Server session.
n Obtain the name of the existing segment you want to attach to the L2 VPN Server session
you are creating. See Add a Segment for information.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
5 From the L2 VPN Service drop-down menu, select the L2 VPN Server service for which the
L2 VPN session is being created.
Note If you are adding this L2 VPN Server session from the Set L2VPN Server Sessions
dialog box, the L2 VPN Server service is already indicated above the Add L2 Session button.
If you want to create a different local endpoint, click the three-dot menu ( ) and select Add
Local Endpoint.
By default, the value is set to Enabled, which means the L2 VPN Server session is to be
configured down to the NSX Edge node.
Caution Be careful when sharing and storing a PSK value because it is considered sensitive
information.
10 Enter an IP subnet address in the Tunnel Interface text box, using CIDR notation.
For peer sites using certificate authentication, this ID must be the common name in the peer
site's certificate. For PSK peers, this ID can be any string. Preferably, use the public IP address
of the VPN or an FQDN for the VPN services as the Remote ID.
12 If you want to include this session as part of a specific group, enter the tag name in Tags.
13 Click Save and click Yes when prompted if you want to continue with the VPN service
configuration.
You are returned to the Add L2VPN Sessions panel and the Segments link is now enabled.
b In the Set Segments dialog box, click Set Segment to attach an existing segment to the
L2 VPN Server session.
c From the Segment drop-down menu, select the VNI-based or VLAN-based segment that
you want to attach to the session.
d Enter a unique value in the VPN Tunnel ID text box; this value is used to identify the
segment that you selected.
e In the Local Egress Gateway IP text box, enter the IP address of the local gateway that
your workload VMs on the segment use as their default gateway. The same IP address
can be configured in the remote site on the extended segment.
In the Set L2VPN Sessions pane or dialog box, the system has incremented the Segments
count for the L2 VPN Server session.
Results
In the VPN Services tab, the system incremented the Sessions count for the L2 VPN Server
service that you configured.
What to do next
To complete the L2 VPN service configuration, you must also create an L2 VPN service in Client
mode and an L2 VPN client session. See Add an L2 VPN Client Service and Add an L2 VPN Client
Session.
The following steps use the L2 VPN Sessions tab on the NSX Manager UI to create an L2 VPN
Client session. You also select an existing local endpoint and segment to attach to the L2 VPN
Client session.
Note You can also add an L2 VPN Client session immediately after you have successfully
configured the L2 VPN Client service. Click Yes when prompted to continue with the L2 VPN
Client configuration and select Sessions > Add Sessions on the Add L2 VPN Client panel. The first
few steps in the following procedure assume you selected No to the prompt to continue with the
L2 VPN Client configuration. If you selected Yes, proceed to step 3 in the following steps to guide
you with the rest of the L2 VPN Client session configuration.
Prerequisites
n You must have configured an L2 VPN Client service before proceeding. See Add an L2 VPN
Client Service.
n Obtain the IP addresses information for the local IP and remote IP to use with the L2 VPN
Client session you are adding.
n Obtain the peer code that was generated during the L2 VPN server configuration. See
Download the Remote Side L2 VPN Configuration File.
n Obtain the name of the existing segment you want to attach to the L2 VPN Client session you
are creating. See Add a Segment.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
5 From the VPN Service drop-down menu, select the L2 VPN Client service with which the L2
VPN session is to be associated.
Note If you are adding this L2 VPN Client session from the Set L2VPN Client Sessions dialog
box, the L2 VPN Client service is already indicated above the Add L2 Session button.
6 In the Local IP address text box, enter the IP address of the L2 VPN Client session.
7 Enter the remote IP address of the IPSec tunnel to be used for the L2 VPN Client session.
8 In the Peer Configuration text box, enter the peer code generated when you configured the
L2 VPN Server service.
By default, the value is set to Enabled, which means the L2 VPN Server session is to be
configured down to the NSX Edge node.
10 Click Save and click Yes when prompted if you want to continue with the VPN service
configuration.
c From the Segment drop-down menu, select the VNI-based or VLAN-based segment you
want to attach to the L2 VPN Client session.
d Enter a unique value in the VPN Tunnel ID text box; this value is used to identify the
segment that you selected.
e Click Close.
Results
In the VPN Services tab, the sessions count is updated for the L2 VPN Client service that you
configured.
Prerequisites
n You must have configured an L2 VPN server service and a session successfully before
proceeding. See Add an L2 VPN Server Service.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
3 In the table of L2 VPN sessions, expand the row for the L2 VPN server session you plan to
use for the L2 VPN client session configuration.
4 Click Download Config and click Yes on the Warning dialog box.
Caution Be careful when storing and sharing the peer code because it contains a PSK value,
which is considered sensitive information.
[
{
"transport_tunnel_path": "/infra/tier-0s/ServerT0_AS/locale-services/1-policyconnectivity-693/
ipsec-vpn-services/IpsecService1/sessions/Routebase1",
"peer_code":
"MCw3ZjBjYzdjLHsic2l0ZU5hbWUiOiJSb3V0ZWJhc2UxIiwic3JjVGFwSXAiOiIxNjkuMjU0LjY0LjIiLCJkc3RUYXBJcCI6I
jE2OS4yNTQuNjQuMSIsImlrZU9wdGl
vbiI6ImlrZXYyIiwiZW5jYXBQcm90byI6ImdyZS9pcHNlYyIsImRoR3JvdXAiOiJkaDE0IiwiZW5jcnlwdEFuZERpZ2VzdCI6I
mFlcy1nY20vc2hhLTI1NiIsInBzayI
6IlZNd2FyZTEyMyIsInR1bm5lbHMiOlt7ImxvY2FsSWQiOiI2MC42MC42MC4xIiwicGVlcklkIjoiNTAuNTAuNTAuMSIsImxvY
2FsVnRpSXAiOiIxNjkuMi4yLjMvMzEifV19"
}
]
5 Copy the peer code, which you use to configure the L2 VPN client service and session.
Using the preceding configuration file example, the following peer code is what you copy to
use with the L2 VPN client configuration.
MCw3ZjBjYzdjLHsic2l0ZU5hbWUiOiJSb3V0ZWJhc2UxIiwic3JjVGFwSXAiOiIxNjkuMjU0LjY0LjIiLCJkc3RUYXBJcCI6Ij
E2OS4yNTQuNjQuMSIsImlrZU9wdGl
vbiI6ImlrZXYyIiwiZW5jYXBQcm90byI6ImdyZS9pcHNlYyIsImRoR3JvdXAiOiJkaDE0IiwiZW5jcnlwdEFuZERpZ2VzdCI6I
mFlcy1nY20vc2hhLTI1NiIsInBzayI
6IlZNd2FyZTEyMyIsInR1bm5lbHMiOlt7ImxvY2FsSWQiOiI2MC42MC42MC4xIiwicGVlcklkIjoiNTAuNTAuNTAuMSIsImxvY
2FsVnRpSXAiOiIxNjkuMi4yLjMvMzEifV19
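If you want to confirm locally what the peer code contains before sharing it, the following Python sketch decodes it. The peer code in the preceding example is a base64-encoded string; the input file name used here is a placeholder for wherever you saved the copied value.

import base64

# Read the peer code copied in step 5 from a local file (placeholder file name).
with open("peer_code.txt") as f:
    peer_code = f.read().strip()

# Decode, adding any missing base64 padding.
decoded = base64.b64decode(peer_code + "=" * (-len(peer_code) % 4)).decode("utf-8")
print(decoded)
# In this example, the output is of the form:
# 0,<id>,{"siteName":"Routebase1", ... "psk":"...", ...}
# which is why the Caution above about the PSK applies.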
What to do next
Configure the L2 VPN Client service and session. See Add an L2 VPN Client Service and Add an
L2 VPN Client Session.
The following steps use the Local Endpoints tab on the NSX Manager UI. You can also create a
local endpoint while in the process of adding an IPSec VPN session by clicking the three-dot
menu ( ) and selecting Add Local Endpoint. If you are in the middle of configuring an IPSec VPN
session, proceed to step 3 in the following steps to guide you with creating a new local endpoint.
Prerequisites
n If you are using a certificate-based authentication mode for the IPSec VPN session that is to
use the local endpoint you are configuring, obtain the information about the certificate that
the local endpoint must use.
n Ensure that you have configured an IPSec VPN service to which this local endpoint is to be
associated.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
2 Navigate to Networking > VPN > Local Endpoints and click Add Local Endpoint.
4 From the VPN Service drop-down menu, select the IPSec VPN service to which this local
endpoint is to be associated.
For an IPSec VPN service running on a Tier-0 gateway, the local endpoint IP address must be
different from the Tier-0 gateway's uplink interface IP address. The local endpoint IP address
you provide is associated with the loopback interface for the Tier-0 gateway and is also
published as a routable IP address over the uplink interface. For IPSec VPN service running
on a Tier-1 gateway, in order for the local endpoint IP address to be routable, the route
advertisement for IPSec local endpoints must be enabled in the Tier-1 gateway configuration.
See Add a Tier-1 Gateway for more information.
6 If you are using a certificate-based authentication mode for the IPSec VPN session, from the
Site Certificate drop-down menu, select the certificate that is to be used by the local
endpoint.
8 Enter the Local ID value that is used for identifying the local NSX Edge instance.
This local ID is the peer ID on the remote site. The local ID must be either the public IP
address or FQDN of the remote site. For certificate-based VPN sessions defined using the
local endpoint, the local ID is derived from the certificate associated with the local endpoint.
The ID specified in the Local ID text box is ignored. The local ID derived from the certificate
for a VPN session depends on the extensions present in the certificate.
n If the X509v3 extension X509v3 Subject Alternative Name is not present in the certificate,
then the Distinguished Name (DN) is used as the local ID value.
n If the X509v3 extension X509v3 Subject Alternative Name is found in the certificate, then
one of the Subject Alternative Names is used as the local ID value.
9 From the Trusted CA Certificates and Certificate Revocation List drop-down menus, select
the appropriate certificates that are required for the local endpoint.
11 Click Save.
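If you prefer to automate this configuration, a local endpoint can also be created through the Policy API. The following is an illustrative sketch only; the URI and attribute names shown (for example, local_address and local_id) are assumptions and must be verified against the NSX-T Data Center API Guide for your release.
PUT https://<nsx-mgr>/policy/api/v1/infra/tier-0s/<tier-0-id>/locale-services/<locale-service-id>/ipsec-vpn-services/<vpn-service-id>/local-endpoints/<local-endpoint-id>
{
    "display_name": "LE-Example",
    "local_address": "10.10.10.10",
    "local_id": "10.10.10.10"
}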
Adding Profiles
NSX-T Data Center provides the system-generated IPSec tunnel profile and an IKE profile that
are assigned by default when you configure either an IPSec VPN or L2 VPN service. A system-
generated DPD profile is created for an IPSec VPN configuration.
The IKE and IPSec profiles provide information about the algorithms that are used to
authenticate, encrypt, and establish a shared secret between network sites. The DPD profile
provides information about the number of seconds to wait in between probes to detect if an
IPSec peer site is alive or not.
If you decide not to use the default profiles provided by NSX-T Data Center, you can configure
your own profile using the information in the topics that follow in this section.
NSX-T Data Center provides system-generated IKE profiles that are assigned by default when
you configure an IPSec VPN or L2 VPN service. The following table lists the default profiles
provided.
Table 6-4. Default IKE Profiles Used for IPSec VPN or L2 VPN Services
Default IKE Profile Name Description
Instead of using the default IKE profiles, you can also select one of the compliance suites
supported starting with NSX-T Data Center 2.5. See About Supported Compliance Suites for
more information.
If you decide not to use the default IKE profiles or compliance suites provided, you can configure
your own IKE profile using the following steps.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
3 Select the IKE Profiles profile type, and click Add IKE Profile.
5 From the IKE Version drop-down menu, select the IKE version to use to set up a security
association (SA) in the IPSec protocol suite.
IKEv1 When selected, the IPSec VPN initiates and responds to an IKEv1 protocol
only.
IKEv2 This version is the default. When selected, the IPSec VPN initiates and
responds to an IKEv2 protocol only.
IKE-Flex If this version is selected and if the tunnel establishment fails with the IKEv2
protocol, the source site does not fall back and initiate a connection with
the IKEv1 protocol. Instead, if the remote site initiates a connection with the
IKEv1 protocol, then the connection is accepted.
6 Select the encryption, digest, and Diffie-Hellman group algorithms from the drop-down
menus. You can select multiple algorithms to apply or deselect any selected algorithms you
do not want to be applied.
Encryption The encryption algorithm used during the Internet Key Exchange (IKE) negotiation. The AES-GCM algorithms are supported when used with IKEv2. They are not supported when used with IKEv1.
n AES 128 (default)
n AES 256
n AES GCM 128
n AES GCM 192
n AES GCM 256
Digest The secure hashing algorithm used during the IKE negotiation.
n SHA2 256 (default)
n SHA1
Diffie-Hellman Group The cryptography schemes that the peer site and the NSX Edge use to establish a shared secret over an insecure communications channel.
n Group 14 (default)
n Group 2
n Group 5
n Group 15
n Group 16
n Group 19
n Group 20
n Group 21
Note When you attempt to establish an IPSec VPN tunnel with a GUARD VPN Client
(previously QuickSec VPN Client) using two encryption algorithms or two digest algorithms,
the GUARD VPN Client adds additional algorithms in the proposed negotiation list. For
example, if you specified AES 128 and AES 256 as the encryption algorithms and SHA2 256
and SHA2 512 as the digest algorithms to use in the IKE profile you are using to establish the
IPSec VPN tunnel, the GUARD VPN Client also proposes AES 192 and SHA2 384 in the
negotiation list. In this case, NSX-T Data Center uses the first encryption algorithm you
selected when establishing the IPSec VPN tunnel.
7 Enter a security association (SA) lifetime value, in seconds, if you want it different from the
default value of 86400 seconds (24 hours).
9 Click Save.
Results
A new row is added to the table of available IKE profiles. To edit or delete a non-system created
profile, click the three-dot menu ( ) and select from the list of actions available.
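A custom IKE profile can also be defined through the Policy API. The following is a hedged sketch; the URI and attribute names (such as ike_version, encryption_algorithms, and dh_groups) are assumptions based on the Policy API and should be checked against the NSX-T Data Center API Guide for your release.
PUT https://<nsx-mgr>/policy/api/v1/infra/ipsec-vpn-ike-profiles/<ike-profile-id>
{
    "display_name": "Custom-IKE-Profile",
    "ike_version": "IKE_V2",
    "encryption_algorithms": ["AES_128"],
    "digest_algorithms": ["SHA2_256"],
    "dh_groups": ["GROUP14"],
    "sa_life_time": 86400
}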
NSX-T Data Center provides system-generated IPSec profiles that are assigned by default when
you configure an IPSec VPN or L2 VPN service. The following table lists the default IPSec profiles
provided.
Table 6-7. Default IPSec Profiles Used for IPSec VPN or L2 VPN Services
Name of Default IPSec Profile Description
Instead of the default IPSec profile, you can also select one of the compliance suites supported
starting with NSX-T Data Center 2.5. See About Supported Compliance Suites for more
information.
If you decide not to use the default IPSec profiles or compliance suites provided, you can
configure your own using the following steps.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
3 Select the IPSec Profiles profile type, and click Add IPSec Profile.
5 From the drop-down menus, select the encryption, digest, and Diffie-Hellman algorithms. You
can select multiple algorithms to apply.
Encryption The encryption algorithm used during the Internet Protocol Security (IPSec) negotiation.
n AES GCM 128 (default)
n AES 128
n AES 256
n AES GCM 192
n AES GCM 256
n No Encryption Auth AES GMAC 128
n No Encryption Auth AES GMAC 192
n No Encryption Auth AES GMAC 256
n No Encryption
Digest The secure hashing algorithm used during the IPSec negotiation.
n SHA2 384
n SHA2 512
6 Deselect PFS Group if you decide not to use the PFS Group protocol on your VPN service.
It is selected by default.
7 In the SA Lifetime text box, modify the default number of seconds before the IPSec tunnel
must be re-established.
8 Select the value for DF Bit to use with the IPSec tunnel.
The value determines how to handle the "Don't Fragment" (DF) bit included in the data
packet received. The acceptable values are described in the following table.
COPY The default value. When this value is selected, NSX-T Data Center copies the
value of the DF bit from the received packet into the packet which is
forwarded. This value implies that if the data packet received has the DF bit
set, after encryption, the packet also has the DF bit set.
CLEAR When this value is selected, NSX-T Data Center ignores the value of the DF bit
in the data packet received, and the DF bit is always 0 in the encrypted packet.
10 Click Save.
Results
A new row is added to the table of available IPSec profiles. To edit or delete a non-system
created profile, click the three-dot menu ( ) and select from the list of actions available.
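A custom IPSec (tunnel) profile can also be defined through the Policy API. The following is a hedged sketch; the URI and attribute names (such as df_policy and enable_perfect_forward_secrecy) are assumptions based on the Policy API and should be checked against the NSX-T Data Center API Guide for your release.
PUT https://<nsx-mgr>/policy/api/v1/infra/ipsec-vpn-tunnel-profiles/<tunnel-profile-id>
{
    "display_name": "Custom-IPSec-Profile",
    "encryption_algorithms": ["AES_GCM_128"],
    "dh_groups": ["GROUP14"],
    "enable_perfect_forward_secrecy": true,
    "sa_life_time": 3600,
    "df_policy": "COPY"
}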
If you decide not to use the default DPD profile provided, you can configure your own using the
following steps.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
3 Select DPD Profiles from the Select Profile Type drop-down menu, and click Add DPD
Profile.
5 From the DPD Probe Mode drop-down menu, select Periodic or On Demand mode.
For a periodic DPD probe mode, a DPD probe is sent every time the specified DPD probe
interval time is reached.
For an on-demand DPD probe mode, a DPD probe is sent if no IPSec packet is received from
the peer site after an idle period. The value in DPD Probe Interval determines the idle period
used.
6 In the DPD Probe Interval text box, enter the number of seconds you want the NSX Edge
node to wait before sending the next DPD probe.
For a periodic DPD probe mode, the valid values are between 3 and 360 seconds. The
default value is 60 seconds.
For an on-demand probe mode, the valid values are between 1 and 10 seconds. The default
value is 3 seconds.
When the periodic DPD probe mode is set, the IKE daemon running on the NSX Edge sends a
DPD probe periodically. If the peer site responds within half a second, the next DPD probe is
sent after the configured DPD probe interval time has been reached. If the peer site does not
respond, then the DPD probe is sent again after waiting for half a second. If the remote peer
site continues not to respond, the IKE daemon resends the DPD probe again, until a response
is received or the retry count has been reached. Before the peer site is declared to be dead,
the IKE daemon resends the DPD probe up to a maximum of times specified in the Retry
Count property. After the peer site is declared dead, the NSX Edge node then tears down
the security association (SA) on the dead peer's link.
When the on-demand DPD mode is set, the DPD probe is sent only if no IPSec traffic is
received from the peer site after the configured DPD probe interval time has been reached.
7 In the Retry Count text box, enter the number of retries allowed.
The valid values are between 1 and 100. The default retry count is 5.
9 To enable or disable the DPD profile, click the Admin Status toggle.
By default, the value is set to Enabled. When the DPD profile is enabled, the DPD profile is
used for all IPSec sessions in the IPSec VPN service that uses the DPD profile.
10 Click Save.
Results
A new row is added to the table of available DPD profiles. To edit or delete a non-system created
profile, click the three-dot menu ( ) and select from the list of actions available.
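A custom DPD profile can also be defined through the Policy API. The following is a hedged sketch; the URI and attribute names (such as dpd_probe_mode and dpd_probe_interval) are assumptions based on the Policy API and should be checked against the NSX-T Data Center API Guide for your release.
PUT https://<nsx-mgr>/policy/api/v1/infra/ipsec-vpn-dpd-profiles/<dpd-profile-id>
{
    "display_name": "Custom-DPD-Profile",
    "dpd_probe_mode": "PERIODIC",
    "dpd_probe_interval": 60,
    "retry_count": 5,
    "enabled": true
}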
You can deploy an autonomous NSX Edge as an L2 VPN client in an environment that is not managed by
NSX-T Data Center. You can also enable high availability (HA) for VPN redundancy by deploying
primary and secondary autonomous Edge L2 VPN clients.
Prerequisites
n Obtain the IP addresses for the local IP and remote IP to use with the L2 VPN Client session
you are adding.
n Obtain the peer code that was generated during the L2 VPN server configuration.
Procedure
1 Using vSphere Web Client, log in to the vCenter Server that manages the non-NSX
environment.
2 Select Hosts and Clusters and expand clusters to show the available hosts.
3 Right-click the host where you want to install the autonomous NSX Edge and select Deploy
OVF Template.
4 Enter the URL to download and install the OVF file from the Internet or click Browse to locate
the folder on your computer that contains the autonomous NSX Edge OVF file and click Next.
5 On the Select name and folder page, enter a name for the autonomous NSX Edge and select
the folder or data center where you want to deploy. Then click Next.
6 On the Select a compute resource page, select the destination of the compute resource.
7 On the OVF Template Details page, review the template details and click Next.
9 On the Select storage page, select the location to store the files for the configuration and
disk files.
10 On the Select networks page, configure the networks that the deployed template must use.
Select the port group you created for the uplink interface, the port group that you created
for the L2 extension port, and enter an HA interface. Click Next.
11 On the Customize Template page, enter the following values and click Next.
f Enter the External Port details for VLAN ID, exit interface, IP address, and IP prefix length
such that the exit interface maps to the Network with the port group of your uplink
interface.
If the exit interface is connected to a trunk port group, specify a VLAN ID. For example,
20,eth2,192.168.5.1,24. You can also configure your port group with a VLAN ID and
use VLAN 0 for the External Port.
g (Optional) To configure High Availability, enter the HA Port details where the exit
interface maps to the appropriate HA Network.
h (Optional) When deploying an autonomous NSX Edge as a secondary node for HA, select
Deploy this autonomous-edge as a secondary node.
Use the same OVF file as the primary node and enter the primary node's IP address, user
name, password, and thumbprint.
To retrieve the thumbprint of the primary node, log in to the primary node and run the
following command:
Ensure that the VTEP IP addresses of the primary and secondary nodes are in the same
subnet and that they connect to the same port group. When you complete the
deployment and start the secondary edge, it connects to the primary node to form an
edge cluster.
12 On the Ready to complete page, review the autonomous Edge settings and click Finish.
Note If there are errors during the deployment, a message of the day is displayed on the
CLI. You can also use an API call to check for errors:
GET https://<nsx-mgr>/api/v1/node/status
The errors are categorized as soft errors and hard errors. Use API calls to resolve the soft
errors as required. You can clear the message of day using an API call:
POST /api/v1/node/status?action=clear_bootup_error
15 Select L2VPN > Add Session and enter the following values:
c Enter the peer code from the L2VPN server. See Download the Remote Side L2 VPN
Configuration File for details on obtaining the peer code.
16 Click Save.
19 Click Save.
20 Select L2VPN > Attach Port and enter the following values:
21 Click Attach.
You can create additional L2 extension ports and attach them to the session if you need to
extend multiple L2 networks.
22 Use the browser to log in to the autonomous NSX Edge or use API calls to view the status of
the L2VPN session.
Note If the L2VPN server configuration changes, ensure that you download the peer code
again and update the session with the new peer code.
When you create an IPSec VPN session, multiple entities are created: IKE profile, DPD profile,
tunnel profile, local endpoint, IPSec VPN service, and IPSec VPN session. These entities all share
the same IPSecVPNSession span, so you can obtain the realization state of all the entities of the
IPSec VPN session by using the same GET API call. You can check the realization state using only
the API.
Prerequisites
n Verify the IPSec VPN is configured successfully. See Add an IPSec VPN Service.
Procedure
For example:
PUT https://<nsx-mgr>/api/v1/vpn/ipsec/sessions/8dd1c386-9b2c-4448-85b8-51ff649fae4f
{
"resource_type": "PolicyBasedIPSecVPNSession",
"id": "8dd1c386-9b2c-4448-85b8-51ff649fae4f",
"display_name": "Test RZ_UPDATED",
"ipsec_vpn_service_id": "7adfa455-a6fc-4934-a919-f5728957364c",
"peer_endpoint_id": "17263ca6-dce4-4c29-bd8a-e7d12bd1a82d",
"local_endpoint_id": "91ebfa0a-820f-41ab-bd87-f0fb1f24e7c8",
"enabled": true,
"policy_rules": [
{
"id": "1026",
"sources": [
{
"subnet": "1.1.1.0/24"
}
],
"logged": true,
"destinations": [
{
"subnet": "2.1.4..0/24"
}
],
"action": "PROTECT",
"enabled": true,
"_revision": 1
}
]
}
2 Locate and copy the value of x-nsx-requestid from the response header returned.
For example:
x-nsx-requestid e550100d-f722-40cc-9de6-cf84d3da3ccb
3 Request the realization state of the IPSec VPN session using the following GET call.
GET https://<nsx-mgr>/api/v1/vpn/ipsec/sessions/<ipsec-vpn-session-id>/state?request_id=<request-id>
The following API call uses the id and x-nsx-requestid values in the examples used in the
previous steps.
GET https://<nsx-mgr>/api/v1/vpn/ipsec/sessions/8dd1c386-9b2c-4448-85b8-51ff649fae4f/state?
request_id=e550100d-f722-40cc-9de6-cf84d3da3ccb
Following is an example of a response you receive when the realization state is in_progress.
{
"details": [
{
"sub_system_type": "TransportNode",
"sub_system_id": "fe651e63-04bd-43a4-a8ec-45381a3b71b9",
"state": "in_progress",
"failure_message": "CCP Id:ab5958df-d98a-468e-a72b-d89dcdae5346, Message:State realization
is in progress at the node."
},
{
"sub_system_type": "TransportNode",
"sub_system_id": "ebe174ac-e4f1-4135-ba72-3dd2eb7099e3",
"state": "in_sync"
}
],
"state": "in_progress",
"failure_message": "The state realization is in progress at transport nodes."
}
Following is an example of a response you receive when the realization state is in_sync.
{
"details": [
{
"sub_system_type": "TransportNode",
"sub_system_id": "7046e8f4-a680-11e8-9bc3-020020593f59",
"state": "in_sync"
}
],
"state": "in_sync"
}
The following are examples of possible responses you receive when the realization state is
unknown.
{
"state": "unknown",
"failure_message": "Unable to get response from any CCP node. Please retry operation after
some time."
}
{
"details": [
{
"sub_system_type": "TransportNode",
"sub_system_id": "3e643776-5def-11e8-94ae-020022e7749b",
"state": "unknown",
"failure_message": "CCP Id:ab5958df-d98a-468e-a72b-d89dcdae5346, Message: Unable to get
response from the node. Please retry operation after some time."
},
{
"sub_system_type": "TransportNode",
"sub_system_id": "4784ca0a-5def-11e8-93be-020022f94b73",
"state": "in_sync"
}
],
"state": "unknown",
"failure_message": "The state realization is unknown at transport nodes"
}
After you perform an entity DELETE operation, you might receive the status of NOT_FOUND, as
shown in the following example.
{
"http_status": "NOT_FOUND",
"error_code": 600,
"module_name": "common-services",
"error_message": "The operation failed because object identifier LogicalRouter/
61746f54-7ab8-4702-93fe-6ddeb804 is missing: Object identifiers are case sensitive.."
}
If the IPSec VPN service associated with the session is disabled, you receive the BAD_REQUEST
response, as shown in the following example.
{
"httpStatus": "BAD_REQUEST",
"error_code": 110199,
"module_name": "VPN",
"error_message": "VPN service f9cfe508-05e3-4e1d-b253-fed096bb2b63 associated with the
session 8dd1c386-9b2c-4448-85b8-51ff649fae4f is disabled. Can not get the realization status."
}
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
2 Navigate to the Networking > VPN > IPSec Sessions or Networking > VPN > L2 VPN
Sessions tab.
3 Expand the row for the VPN session that you want to monitor or troubleshoot.
4 To view the status of the VPN tunnel status, click the info icon.
The Status dialog box appears and displays the available statuses.
5 To view the VPN tunnel traffic statistics, click View Statistics in the Status column.
The Statistics dialog box displays the traffic statistics for the VPN tunnel.
6 To view the error statistics, click the View More link in the Statistics dialog box.
NAT64 is a mechanism for translating IPv6 packets to IPv4 packets, and vice versa. NAT64
allows IPv6-only clients to contact IPv4 servers using unicast UDP or TCP. NAT64 only allows an
IPv6-only client to initiate communications to an IPv4-only server. To perform IPv6-to-IPv4
translation, binding and session information are saved, so NAT64 is stateful.
n NAT64 is only supported for external IPv6 traffic coming in through the NSX-T Edge uplink to
the IPv4 server in the overlay.
n NAT64 supports TCP and UDP; packets of all other protocol types are discarded. NAT64 does
not support ICMP, fragmentation, or IPv6 packets that have extension headers.
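As an illustration of how an IPv4 destination is carried inside a /96 IPv6 prefix, consider the well-known NAT64 prefix 64:ff9b::/96 defined in RFC 6052 (the prefix value here is an example, not a value mandated by NSX-T Data Center). The IPv4 address is written into the last 32 bits of the IPv6 address:
IPv4 server address:  192.0.2.33   (hexadecimal c0.00.02.21)
NAT64 /96 prefix:     64:ff9b::/96
Resulting IPv6 form:  64:ff9b::c000:221
An IPv6-only client sends traffic to 64:ff9b::c000:221, and the NAT64 translator extracts the embedded 192.0.2.33 as the IPv4 destination.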
For NAT, source NAT (SNAT), destination NAT (DNAT), or reflexive NAT are supported. If a tier-0
gateway is running in active-active mode, you cannot configure SNAT or DNAT because
asymmetrical paths might cause issues. You can only configure reflexive NAT (sometimes called
stateless NAT). If a tier-0 gateway is running in active-standby mode, you can configure SNAT,
DNAT, or reflexive NAT.
You can also disable SNAT or DNAT for an IP address or a range of addresses. If an address has
multiple NAT rules, the rule with the highest priority is applied.
Note DNAT is not supported on a tier-1 gateway where policy-based IPSec VPN is configured.
SNAT configured on a tier-0 gateway's external interface processes traffic from a tier-1 gateway,
and from another external interface on the tier-0 gateway.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
3 Select a gateway.
6 Enter a Name.
7 If you are configuring NAT, select an action. For NAT64, the action is NAT64.
Tier-1 gateway Available actions are SNAT, DNAT, Reflexive, NO SNAT, and NO DNAT.
Tier-0 gateway in active-standby mode Available actions are SNAT, DNAT, NO SNAT, and NO DNAT.
8 Enter a Source. If this text box is left blank, the NAT rule applies to all sources outside of the
local subnet.
Option Description
NAT64 Enter an IPv6 address, or an IPv6 address range in CIDR format with the /96 prefix. The /96 prefix is supported because the destination IPv4 address is embedded as the last 4 bytes of the IPv6 address.
12 In the Service column, click Set to select services. See Add a Service for more information.
For NAT64, select a pre-defined service or create a user-defined service with TCP or UDP,
with the source/destination port as Any, or a specific port.
13 For Apply To, click Set and select objects that this rule applies to.
The available objects are Tier-0 Gateways, Interfaces, Labels, Service Instance Endpoints,
and Virtual Endpoints.
Note If you are using Federation and creating a NAT rule from a Global Manager appliance,
you can select site-specific IP addresses for NAT. You can apply the NAT rule to any of the
following location spans:
n Do not click Set if you want to use the default option of applying the NAT rule to all
locations.
n Click Set. In the Apply To dialog box, select the locations whose entities you want to
apply the rule to and then select Apply NAT rule to all entities.
n Click Set. In the Apply To dialog box, select a location and then select Interfaces from the
Categories drop-down menu. You can select specific interfaces to which you want to
apply the NAT rule.
Option Description
NAT64 The available setting is Bypass - the packet bypasses firewall rules.
18 Click Save.
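A NAT rule can also be created through the Policy API. The following is a hedged sketch of an SNAT rule on a tier-1 gateway; the URI and attribute names (such as source_network and translated_network) are assumptions based on the Policy API and should be verified against the NSX-T Data Center API Guide for your release.
PUT https://<nsx-mgr>/policy/api/v1/infra/tier-1s/<tier-1-id>/nat/USER/nat-rules/<rule-id>
{
    "display_name": "snat-web",
    "action": "SNAT",
    "source_network": "172.16.10.0/24",
    "translated_network": "80.80.80.1",
    "enabled": true
}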
The load balancer distributes incoming service requests evenly among multiple servers in such a
way that the load distribution is transparent to users. Load balancing helps in achieving optimal
resource utilization, maximizing throughput, minimizing response time, and avoiding overload.
You can map a virtual IP address to a set of pool servers for load balancing. The load balancer
accepts TCP, UDP, HTTP, or HTTPS requests on the virtual IP address and decides which pool
server to use.
Depending on your environment needs, you can scale the load balancer performance by
increasing the existing virtual servers and pool members to handle heavy network traffic load.
Note Logical load balancer is supported only on the tier-1 gateway. One load balancer can be
attached only to a tier-1 gateway.
A load balancer is connected to a tier-1 logical router. The load balancer hosts one or more
virtual servers. A virtual server is an abstraction of an application service, represented by a unique
combination of IP, port, and protocol. The virtual server is associated with one or more server
pools. A server pool consists of a group of servers. The server pools include individual server
pool members.
To test whether each server is correctly running the application, you can add health check
monitors that check the health status of a server.
A load balancer runs on a tier-1 gateway, which must be in active-standby mode. The gateway
runs on NSX Edge nodes. The form factor of the NSX Edge node (bare metal, small, medium,
large, or extra large) determines the number of load balancers that the NSX Edge node can
support. Note that in Manager mode, you create logical routers, which have similar functionality
to gateways. See Chapter 1 NSX Manager.
For more information about what the different load balance sizes and NSX Edge form factors can
support, see https://configmax.vmware.com.
Note that using a small NSX Edge node to run a small load balancer is not recommended in a
production environment.
You can call an API to get the load balancer usage information of an NSX Edge node. If you use
Policy mode to configure load balancing, run the following command:
GET /policy/api/v1/infra/lb-node-usage?node_path=<node-path>
If you use Manager mode to configure load balancing, run the following command:
GET /api/v1/loadbalancer/usage-per-node/<node-id>
The usage information includes the number of load balancer objects (such as load balancer
services, virtual servers, server pools, and pool members) that are configured on the node. For
more information, see the NSX-T Data Center API Guide.
n Health check monitors - Active monitors, which include HTTP, HTTPS, TCP, UDP, and ICMP,
and passive monitors
n HTTP upgrade - Applications that use HTTP upgrade, such as WebSocket, are supported. By
default, NSX-T Data Center accepts HTTP upgrade client requests when the HTTP application
profile is used.
To detect inactive client or server communication, the load balancer uses the HTTP
application profile response timeout, which is set to 60 seconds. If the server does not send
traffic during the 60-second interval, NSX-T Data Center ends the connection on the client
and server side. Default application profiles cannot be edited. To edit HTTP application profile
settings, create a custom profile.
Note SSL terminate mode and proxy mode are not supported in the NSX-T Data Center limited
export release.
Inline Topology
In the inline mode, the load balancer is in the traffic path between the client and the server.
Clients and servers must not be connected to the same tier-1 logical router. This topology does
not require virtual server SNAT.
One-Arm Topology
In one-arm mode, the load balancer is not in the traffic path between the client and the server. In
this mode, the client and the server can be anywhere. The load balancer performs Source NAT
(SNAT) to force return traffic from the server destined to the client to go through the load
balancer. This topology requires virtual server SNAT to be enabled.
When the load balancer receives the client traffic to the virtual IP address, the load balancer
selects a server pool member and forwards the client traffic to it. In the one-arm mode, the load
balancer replaces the client IP address with the load balancer IP address so that the server
response is always sent to the load balancer and the load balancer forwards the response to the
client.
n Ingress
Note If DNAT is configured with Firewall Bypass, the firewall is skipped but the load balancer is not.
n Egress
Next, you set up health check monitoring for your servers. You must then configure server pools
for your load balancer. Finally, you must create a layer 4 or layer 7 virtual server for your load
balancer and attach the newly created virtual server to the load balancer.
You can configure the level of error messages you want the load balancer to add to the error log.
Note Avoid setting the log level to DEBUG on load balancers with significant traffic because the
volume of messages written to the log can affect performance.
Prerequisites
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
4 Select the load balancer size based on your needs of virtual servers and pool members and
available resources.
5 Select the already configured tier-1 gateway to attach to this load balancer from the drop-
down menu.
6 Define the severity level of the error log from the drop-down menu.
The load balancer writes information about encountered issues of different severity levels to
the error log.
8 Click Save.
The load balancer is created and attached to the tier-1 gateway in about three minutes, after
which the configuration status appears green and Up.
If the status is Down, click the information icon and resolve the error before you proceed.
a Detach the load balancer from the virtual server and tier-1 gateway.
d Select Delete.
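A load balancer service can also be configured through the Policy API. The following is a hedged sketch; the URI and attribute names (such as connectivity_path, size, and error_log_level) are assumptions based on the Policy API and should be verified against the NSX-T Data Center API Guide for your release.
PUT https://<nsx-mgr>/policy/api/v1/infra/lb-services/<lb-service-id>
{
    "display_name": "lb-example",
    "connectivity_path": "/infra/tier-1s/<tier-1-id>",
    "size": "SMALL",
    "error_log_level": "INFO",
    "enabled": true
}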
Servers that fail to respond within a certain time period or respond with errors are excluded from
future connection handling until a subsequent periodic health check finds these servers to be
healthy.
Active health checks are performed on server pool members after the pool member is attached
to a virtual server and that virtual server is attached to a tier-1 gateway. The tier-1 uplink IP
address is used for the health check.
Note More than one active health monitor can be configured per server pool.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
2 Select Networking > Load Balancing > Monitors > Active > Add Active Monitor.
You can also use the predefined protocols: HTTP, HTTPS, ICMP, TCP, and UDP.
You can also accept the default active health monitor values.
Option Description
Name and Description Enter a name and description for the active health monitor.
Monitoring Interval Set the time in seconds after which the monitor sends another connection request
to the server.
Timeout Period Set the time in seconds within which the server must respond before the health
check is considered failed.
Fall Count Set the number of consecutive failures after which the server is considered
temporarily unavailable.
Rise Count Set the number of consecutive successful health checks after which a temporarily
unavailable server is considered available again for new connections.
For example, if the monitoring interval is set as 5 seconds and the timeout as 15 seconds, the
load balancer sends a request to the server every 5 seconds. In each probe, if the expected
response is received from the server within 15 seconds, then the health check result is OK. If
not, then the result is CRITICAL. If the most recent three health check results are all UP, the server
is considered UP.
Option Description
HTTP Method Select the method to detect the server status from the drop-down menu,
GET, OPTIONS, POST, HEAD, and PUT.
HTTP Request URL Enter the request URI for the method.
HTTP Request Version Select the supported request version from the drop-down menu.
You can also accept the default version, HTTP_VERSION_1.
HTTP Response Header Click Add and enter the HTTP response header name and corresponding
value.
The default header value is 4000. The maximum header value is 64,000.
HTTP Response Code Enter the response codes that the monitor expects to match in the status line
of the HTTP response.
The response code is a comma-separated list.
For example, 200,301,302,401.
HTTP Response Body If the HTTP response body string and the HTTP health check response body
match, then the server is considered as healthy.
8 Click Save.
10 Complete step 5.
11 Click Configure.
12 Enter the HTTP request and response and SSL configuration details.
Option Description
Name and Description Enter a name and description for the active health monitor.
HTTP Method Select the method to detect the server status from the drop-down menu,
GET, OPTIONS, POST, HEAD, and PUT.
HTTP Request URL Enter the request URI for the method.
HTTP Request Version Select the supported request version from the drop-down menu.
You can also accept the default version, HTTP_VERSION_1.
HTTP Response Header Click Add and enter the HTTP response header name and corresponding
value.
The default header value is 4000. The maximum header value is 64,000.
HTTP Response Code Enter the response codes that the monitor expects to match in the status line
of the HTTP response.
The response code is a comma-separated list.
For example, 200,301,302,401.
HTTP Response Body If the HTTP response body string and the HTTP health check response body
match, then the server is considered as healthy.
Client Certificate (Optional) Select a certificate from the drop-down menu to be used if the
server does not host multiple host names on the same IP address or if the
client does not support an SNI extension.
Server SSL Profile (Optional) Assign a default SSL profile from the drop-down menu that
defines reusable and application-independent client-side SSL properties.
Click the vertical ellipses and create a custom SSL profile.
Trusted CA Certificates (Optional) You can require the client to have a CA certificate for
authentication.
Mandatory Server Authentication (Optional) Toggle the button to enable server authentication.
Certificate Chain Depth (Optional) Set the authentication depth for the client certificate chain.
Certificate Revocation List (Optional) Set a Certificate Revocation List (CRL) in the client-side SSL profile
to reject compromised client certificates.
14 Complete step 5 and assign the data size, in bytes, of the ICMP health check packet.
16 Complete step 5. You can leave the TCP data parameters empty.
If neither the data sent nor the data expected is specified, a three-way TCP handshake
is established to validate the server health. No data is sent.
The expected data, if specified, must be a string. Regular expressions are not supported.
UDP Data Sent Enter the string to be sent to a server after a connection is established.
UDP Data Expected Enter the string expected to be received from the server.
Only when the received string matches this value is the server
considered UP.
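An active monitor can also be defined through the Policy API. The following is a hedged sketch of an HTTP monitor profile; the URI and attribute names (such as monitor_port, interval, and fall_count) are assumptions based on the Policy API and should be verified against the NSX-T Data Center API Guide for your release.
PUT https://<nsx-mgr>/policy/api/v1/infra/lb-monitor-profiles/<monitor-id>
{
    "resource_type": "LBHttpMonitorProfile",
    "display_name": "http-monitor-example",
    "monitor_port": 80,
    "interval": 5,
    "timeout": 15,
    "fall_count": 3,
    "rise_count": 3,
    "request_url": "/index.html",
    "response_status_codes": [200]
}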
What to do next
Associate the active health monitor with a server pool. See Add a Server Pool.
Passive health check monitors client traffic going through the load balancer for failures. For
example, if a pool member sends a TCP Reset (RST) in response to a client connection, the load
balancer detects that failure. If there are multiple consecutive failures, then the load balancer
considers that server pool member to be temporarily unavailable and stops sending connection
requests to that pool member for some time. After some time, the load balancer sends a
connection request to verify that the pool member has recovered. If that connection is
successful, then the pool member is considered healthy. Otherwise, the load balancer waits for
some time and tries again.
Passive health check considers the following scenarios to be failures in the client traffic.
n For server pools associated with Layer 7 virtual servers, if the connection to the pool member
fails. For example, if the pool member sends a TCP RST when the load balancer tries to
connect, or if an SSL handshake between the load balancer and the pool member fails.
n For server pools associated with Layer 4 TCP virtual servers, if the pool member sends a TCP
RST in response to client TCP SYN or does not respond at all.
n For server pools associated with Layer 4 UDP virtual servers, if a port is unreachable or a
destination unreachable ICMP error message is received in response to a client UDP packet.
For server pools associated with Layer 7 virtual servers, the failed connection count is incremented
when any TCP connection error occurs, for example, a TCP RST, a failure to send data, or an SSL
handshake failure.
For server pools associated with Layer 4 TCP virtual servers, if no response is received to a TCP SYN
sent to the server pool member, or if a TCP RST is received in response to a TCP SYN, the server
pool member is considered DOWN and the failed count is incremented.
For Layer 4 UDP virtual servers, if an ICMP error such as a port unreachable or destination
unreachable message is received in response to the client traffic, the server pool member is considered DOWN.
Note One passive health monitor can be configured per server pool.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
2 Select Networking > Load Balancing > Monitors > Passive > Add Passive Monitor.
You can also accept the default passive health monitor values.
Option Description
Fall Count Set the number of consecutive failures after which the server is considered
temporarily unavailable.
Timeout Period Set the time in seconds that the server is considered temporarily unavailable
before it is tried again for a new connection.
For example, when the consecutive failures reach the configured value 5, that member is
considered temporarily unavailable for 5 seconds. After this period, that member is tried
again for a new connection to see if it is available. If that connection is successful, then the
member is considered available and the failed count is set to zero. However, if that
connection fails, then it is not used for another timeout interval of 5 seconds.
What to do next
Associate the passive health monitor with a server pool. See Add a Server Pool.
Prerequisites
n If you use dynamic pool members, an NSGroup must be configured. See Create an NSGroup in
Manager Mode.
n Verify that a passive health monitor is configured. See Add a Passive Monitor.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
2 Select Networking > Load Balancing > Server Pools > Add Server Pool.
3 Enter a name and description for the load balancer server pool.
You can optionally describe the connections managed by the server pool.
The load balancing algorithm controls how the incoming connections are distributed among the
members. The algorithm can be used on a server pool or on a server directly.
All load balancing algorithms skip servers that meet any of the following conditions:
n The connection limit for the maximum server pool concurrent connections is reached.
Option Description
ROUND_ROBIN Incoming client requests are cycled through a list of available servers
capable of handling the request.
Ignores the server pool member weights even if they are configured.
WEIGHTED_ROUND_ROBIN Each server is assigned a weight value that signifies how that server
performs relative to other servers in the pool. The value determines how
many client requests are sent to a server compared to other servers in the
pool.
This load balancing algorithm focuses on fairly distributing the load among
the available server resources.
WEIGHTED_LEAST_CONNECTION Each server is assigned a weight value that signifies how that server
performs relative to other servers in the pool. The value determines how
many client requests are sent to a server compared to other servers in the
pool.
This load balancing algorithm focuses on using the weight value to distribute
the load among the available server resources.
By default, the weight value is 1 if the value is not configured and slow start
is enabled.
IP-HASH Selects a server based on a hash of the source IP address and the total
weight of all the running servers.
Option Description
Enter individual members Enter a pool member name, IPv4 or IPv6 address, and a port. IP addresses
can be either IPv4 or IPv6. Mixed addressing is not supported. Note that the
pool members IP version must match the VIP IP version. For example, VIP-
IPv4 with Pool-IPv4, and IPv6 with Pool-IPv6.
Each server pool member can be configured with a weight for use in the
load balancing algorithm. The weight indicates how much more or less load
a given pool member can handle relative to other members in the same
pool.
You can set the server pool member admin state. By default, the option is enabled
when a server pool member is added.
If the option is disabled, active connections are processed, and the server
pool member is not selected for new connections. New connections are
assigned to other members of the pool.
If gracefully disabled, you can remove servers for maintenance. The
existing connections to a member in the server pool in this state continue to
be processed.
Toggle the button to designate a pool member as a backup member to work
with the health monitor to provide an Active-Standby state. Traffic failover
occurs for backup members if active members fail a health check. Backup
members are skipped during the server selection. When the server pool is
inactive, the incoming connections are sent to only the backup members
that are configured with a sorry page indicating an application is unavailable.
Max Concurrent Connection value assigns a connection maximum so that
the server pool members are not overloaded and skipped during server
selection. If a value is not specified, then the connection is unlimited.
6 Click Set Monitors and select one or more active health check monitors for the server. Click
Apply.
The load balancer periodically sends an ICMP ping to the servers to verify health independent
of data traffic. You can configure more than one active health check monitor per server pool.
Depending on the topology, SNAT might be required so that the load balancer receives the
traffic from the server destined to the client. SNAT can be enabled per server pool.
Mode Description
Auto Map Mode Load Balancer uses the interface IP address and ephemeral port to continue
the communication with a client initially connected to one of the server's
established listening ports.
SNAT is required.
Enable port overloading to allow the same SNAT IP and port to be used for
multiple connections if the tuple (source IP, source port, destination IP,
destination port, and IP protocol) is unique after the SNAT process is
performed.
You can also set the port overload factor to allow the maximum number of
times a port can be used simultaneously for multiple connections.
IP Pool Specify a single IPv4 or IPv6 address range, for example, 1.1.1.1-1.1.1.10 to be
used for SNAT while connecting to any of the servers in the pool. IP
addresses can be either IPv4 or IPv6. Mixed addressing is not supported.
By default, the port range from 4000 through 64000 is used for all configured
SNAT IP addresses. The port range from 1000 through 4000 is reserved for
purposes such as health checks and connections initiated from Linux
applications. If multiple IP addresses are present, then they are selected in a
Round Robin manner.
Enable port overloading to allow the same SNAT IP and port to be used for
multiple connections if the tuple (source IP, source port, destination IP,
destination port, and IP protocol) is unique after the SNAT process is
performed.
You can also set the port overload factor to allow the maximum number of
times a port can be used simultaneously for multiple connections.
8 Click Additional Properties, and toggle the button to enable TCP Multiplexing.
With TCP multiplexing, you can use the same TCP connection between a load balancer and
the server for sending multiple client requests from different client TCP connections.
9 Set the maximum number of TCP multiplexing connections per server that are kept alive to
send future client requests.
10 Enter the minimum number of active members the server pool must always maintain.
11 Select a passive health monitor for the server pool from the drop-down menu.
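A server pool can also be defined through the Policy API. The following is a hedged sketch; the URI and attribute names (such as algorithm, members, and snat_translation) are assumptions based on the Policy API and should be verified against the NSX-T Data Center API Guide for your release.
PUT https://<nsx-mgr>/policy/api/v1/infra/lb-pools/<pool-id>
{
    "display_name": "web-pool",
    "algorithm": "ROUND_ROBIN",
    "min_active_members": 1,
    "snat_translation": { "type": "LBSnatAutoMap" },
    "active_monitor_paths": ["/infra/lb-monitor-profiles/<monitor-id>"],
    "members": [
        { "display_name": "web-01", "ip_address": "192.168.10.11", "port": "80", "weight": 1 },
        { "display_name": "web-02", "ip_address": "192.168.10.12", "port": "80", "weight": 1 }
    ]
}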
Application profiles define the behavior of a particular type of network traffic. The associated
virtual server processes network traffic according to the values specified in the application
profile. Fast TCP, Fast UDP, and HTTP application profiles are the supported types of profiles.
The TCP application profile is used by default when no application profile is associated with a virtual
server. TCP and UDP application profiles are used when an application runs on a TCP or
UDP protocol and does not require any application-level load balancing, such as HTTP URL load
balancing. These profiles are also used when you only want Layer 4 load balancing, which has
faster performance and supports connection mirroring.
The HTTP application profile is used for both HTTP and HTTPS applications when the load balancer
must take actions based on Layer 7, such as load balancing all image requests to a specific
server pool member or terminating HTTPS to offload SSL from pool members. Unlike the TCP
application profile, the HTTP application profile terminates the client TCP connection before selecting
the server pool member.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
2 Select Networking > Load Balancing > Profiles > Application > Add Application Profiles.
3 Select a Fast TCP application profile and enter the profile details.
You can also accept the default FAST TCP profile settings.
Option Description
Name and Description Enter a name and a description for the Fast TCP application profile.
Idle Timeout Enter the time in seconds for how long the server can remain idle after a TCP
connection is established.
Set the idle time to the actual application idle time and add a few more
seconds so that the load balancer does not close its connections before the
application does.
HA Flow Mirroring Toggle the button to mirror all the flows to the associated virtual server
to the HA standby node.
Connection Close Timeout Enter the time in seconds that a closed TCP connection (both FINs or RST) is
kept for an application before closing the connection.
A short closing timeout might be required to support fast connection rates.
4 Select a Fast UDP application profile and enter the profile details.
Option Description
Name and Description Enter a name and a description for the Fast UDP application profile.
Idle Timeout Enter the time in seconds for how long the server can remain idle after a
UDP connection is established.
UDP is a connectionless protocol. For load balancing purposes, all the UDP
packets with the same flow signature, such as source and destination IP
address or ports and IP protocol, received within the idle timeout period are
considered to belong to the same connection and are sent to the same server.
If no packets are received during the idle timeout period, the connection,
which is an association between the flow signature and the selected server,
is closed.
HA Flow Mirroring Toggle the button to mirror all the flows to the associated virtual server
to the HA standby node.
HTTP application profile is used for both HTTP and HTTPS applications.
Option Description
Name and Description Enter a name and a description for the HTTP application profile.
Idle Timeout Enter the time in seconds for how long an HTTP application can remain idle,
instead of the TCP socket setting, which must be configured in the TCP
application profile.
Request Header Size Specify the maximum buffer size in bytes used to store HTTP request
headers.
X-Forwarded-For (XFF) n Insert - If the XFF HTTP header is not present in the incoming request,
the load balancer inserts a new XFF header with the client IP address. If
the XFF HTTP header is present in the incoming request, the load
balancer appends the XFF header with the client IP address.
n Replace - If the XFF HTTP header is present in the incoming request, the
load balancer replaces the header.
Web servers log each request they handle with the requesting client IP
address. These logs are used for debugging and analytics purposes. If the
deployment topology requires SNAT on the load balancer, then the server
sees the SNAT IP address of the load balancer instead of the client IP
address, which defeats the purpose of logging.
As a workaround, the load balancer can be configured to insert the XFF HTTP
header with the original client IP address. Servers can be configured to log
the IP address in the XFF header instead of the source IP address of the
connection.
Request Body Size Enter value for the maximum size of the buffer used to store the HTTP
request body.
If the size is not specified, then the request body size is unlimited.
Redirection n None - If a website is temporarily down, the user receives a page not found
error message.
n HTTP Redirect - If a website is temporarily down or has moved, incoming
requests for that virtual server can be temporarily redirected to a URL
specified here. Only a static redirection is supported.
For HTTP to HTTPS redirect, the HTTPS virtual server must have port
443 and the same virtual server IP address must be configured on the
same load balancer.
Option Description
NTLM Authentication Toggle the button for the load balancer to turn off TCP multiplexing and
enable HTTP keep-alive.
NTLM is an authentication protocol that can be used over HTTP. For load
balancing with NTLM authentication, TCP multiplexing must be disabled for
the server pools hosting NTLM-based applications. Otherwise, a server-side
connection established with one client's credentials can potentially be used
for serving another client's requests.
If NTLM is enabled in the profile and associated to a virtual server, and TCP
multiplexing is enabled at the server pool, then NTLM takes precedence.
TCP multiplexing is not performed for that virtual server. However, if the
same pool is associated to another non-NTLM virtual server, then TCP
multiplexing is available for connections to that virtual server.
If the client uses HTTP/1.0, the load balancer upgrades to HTTP/1.1 protocol
and the HTTP keep-alive is set. All HTTP requests received on the same
client-side TCP connection are sent to the same server over a single TCP
connection to ensure that reauthorization is not required.
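An HTTP application profile can also be defined through the Policy API. The following is a hedged sketch; the URI and attribute names (such as x_forwarded_for and request_header_size) are assumptions based on the Policy API and should be verified against the NSX-T Data Center API Guide for your release.
PUT https://<nsx-mgr>/policy/api/v1/infra/lb-app-profiles/<app-profile-id>
{
    "resource_type": "LBHttpProfile",
    "display_name": "http-profile-example",
    "idle_timeout": 15,
    "request_header_size": 1024,
    "x_forwarded_for": "INSERT",
    "ntlm": false
}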
Some applications maintain server state, such as shopping carts. Such state might be per
client and identified by the client IP address, or per HTTP session. Applications might access or
modify this state while processing subsequent related connections from the same client or HTTP
session.
The source IP persistence profile tracks sessions based on the source IP address. When a client
requests a connection to a virtual server that has source address persistence enabled, the load
balancer checks whether that client was previously connected and, if so, returns the client to the
same server. If not, the load balancer selects a server pool member based on the pool load balancing
algorithm. Source IP persistence profiles are used by Layer 4 and Layer 7 virtual servers.
The cookie persistence profile inserts a unique cookie to identify the session the first time a client
accesses the site. The client forwards the HTTP cookie in subsequent requests and the load
balancer uses that information to provide the cookie persistence. Layer 7 virtual servers can only
use the cookie persistence profile.
The generic persistence profile supports persistence based on the HTTP header, cookie, or URL
in the HTTP request. Therefore, it supports app session persistence when the session ID is part of
the URL. This profile is not associated with a virtual server directly. You can specify this profile
when you configure a load balancer rule for request forwarding and response rewrite.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
2 Select Networking > Load Balancing > Profiles > Persistence > Add Persistence Profiles.
3 Select Source IP to add a source IP persistence profile and enter the profile details.
Option Description
Name and Description Enter a name and a description for the Source IP persistence profile.
Share Persistence Toggle the button to share the persistence so that all virtual servers this
profile is associated with can share the persistence table.
If the persistence sharing is not enabled in the Source IP persistence profile
associated to a virtual server, each virtual server that the profile is
associated to maintains a private persistence table.
Option Description
Purge Entries When Full A large timeout value might lead to the persistence table quickly filling up
when the traffic is heavy. When this option is enabled, the oldest entry is
deleted to accept the newest entry.
When this option is disabled, if the source IP persistence table is full, new
client connections are rejected.
HA Persistence Mirroring Toggle the button to synchronize persistence entries to the HA peer. When
HA persistence mirroring is enabled, the client IP persistence remains in the
case of load balancer failover.
Option Description
Name and Description Enter a name and a description for the Cookie persistence profile.
Share Persistence Toggle the button to share persistence across multiple virtual servers that
are associated to the same pool members.
The Cookie persistence profile inserts a cookie with the format,
<name>.<profile-id>.<pool-id>.
If persistence sharing is not enabled in the Cookie persistence profile
associated with a virtual server, the private Cookie persistence for each
virtual server is used and is qualified by the pool member. The load balancer
inserts a cookie with the format, <name>.<virtual_server_id>.<pool_id>.
Cookie Fallback Toggle the button to select a new server to handle a client request if the
cookie points to a server that is in a DISABLED or DOWN state.
If cookie fallback is disabled, the client request is rejected when the cookie
points to a server that is in a DISABLED or DOWN state.
Option Description
Max Idle Time Enter the time in seconds that the cookie type can be idle before a cookie
expires.
Max Cookie Age For the session cookie type, enter the time in seconds a cookie is available.
5 Select Generic to add a generic persistence profile and enter the profile details.
Option Description
Name and Description Enter a name and a description for the generic persistence profile.
Share Persistence Toggle the button to share the profile among virtual servers.
HA Persistence Mirroring Toggle the button to synchronize persistence entries to the HA peer.
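A persistence profile can also be defined through the Policy API. The following is a hedged sketch of a source IP persistence profile; the URI and attribute names (such as persistence_shared and ha_persistence_mirroring_enabled) are assumptions based on the Policy API and should be verified against the NSX-T Data Center API Guide for your release.
PUT https://<nsx-mgr>/policy/api/v1/infra/lb-persistence-profiles/<persistence-profile-id>
{
    "resource_type": "LBSourceIpPersistenceProfile",
    "display_name": "source-ip-persistence-example",
    "persistence_shared": false,
    "ha_persistence_mirroring_enabled": false,
    "timeout": 300
}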
Note SSL profile is not supported in the NSX-T Data Center limited export release.
A client-side SSL profile refers to the load balancer acting as an SSL server and terminating the client
SSL connection. A server-side SSL profile refers to the load balancer acting as a client and
establishing a connection to the server.
You can specify a cipher list on both the client-side and server-side SSL profiles.
SSL session caching allows the SSL client and server to reuse previously negotiated security
parameters avoiding the expensive public key operation during the SSL handshake. SSL session
caching is disabled by default on both the client-side and server-side.
SSL session tickets are an alternate mechanism that allows the SSL client and server to reuse
previously negotiated session parameters. In SSL session tickets, the client and server negotiate
whether they support SSL session tickets during the handshake exchange. If both support them,
the server can send an SSL ticket, which includes encrypted SSL session parameters, to the client.
The client can use that ticket in subsequent connections to reuse the session. SSL session tickets
are enabled on the client-side and disabled on the server-side.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
2 Select Networking > Load Balancing > Profiles > SSL Profile.
Option Description
Name and Description Enter a name and a description for the Client SSL profile.
SSL Suite Select the SSL Cipher group from the drop-down menu and available SSL
Ciphers and SSL protocols to be included in the Client SSL profile are
populated.
Balanced SSL Cipher group is the default.
Session Caching Toggle the button to allow the SSL client and server to reuse previously
negotiated security parameters avoiding the expensive public key operation
during an SSL handshake.
Supported SSL Ciphers Depending on the SSL suite you assigned, the supported SSL ciphers are
populated here. Click View More to view the entire list.
If you selected Custom, you must select the SSL ciphers from the drop-
down menu.
Supported SSL Protocols Depending on the SSL suite you assigned, the supported SSL protocols are
populated here. Click View More to view the entire list.
If you selected Custom, you must select the SSL protocols from the drop-
down menu.
Session Cache Entry Timeout Enter the cache timeout in seconds to specify how long the SSL session
parameters must be kept and can be reused.
Prefer Server Cipher Toggle the button so that the server can select the first supported cipher
from the list it can support.
During an SSL handshake, the client sends an ordered list of supported
ciphers to the server.
Option Description
Name and Description Enter a name and a description for the Server SSL profile.
SSL Suite Select the SSL Cipher group from the drop-down menu. The available SSL
ciphers and SSL protocols to be included in the Server SSL profile are
populated.
Balanced SSL Cipher group is the default.
Session Caching Toggle the button to allow the SSL client and server to reuse previously
negotiated security parameters avoiding the expensive public key operation
during an SSL handshake.
Supported SSL Ciphers Depending on the SSL suite you assigned, the supported SSL ciphers are
populated here. Click View More to view the entire list.
If you selected Custom, you must select the SSL ciphers from the drop-
down menu.
Option Description
Supported SSL Protocols Depending on the SSL suite you assigned, the supported SSL protocols are
populated here. Click View More to view the entire list.
If you selected Custom, you must select the SSL protocols from the drop-
down menu.
Session Cache Entry Timeout Enter the cache timeout in seconds to specify how long the SSL session
parameters must be kept and can be reused.
Prefer Server Cipher Toggle the button so that the server can select the first supported cipher
from the list it can support.
During an SSL handshake, the client sends an ordered list of supported
ciphers to the server.
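As an alternative to the UI steps, SSL profiles can also be created through the NSX-T Policy API. The following is a minimal sketch, not the documented procedure: the endpoint, the profile ID my-client-ssl, and the field names (protocols, session_cache_enabled, session_cache_timeout, prefer_server_ciphers) follow the LBClientSslProfile resource as commonly exposed by the Policy API and should be verified against the NSX-T Data Center API Guide for your release.
# Sketch: create a client-side SSL profile through the Policy API (assumed schema).
curl -k -u admin:'<password>' -X PUT \
  -H "Content-Type: application/json" \
  "https://<nsx-manager-ip-address>/policy/api/v1/infra/lb-client-ssl-profiles/my-client-ssl" \
  -d '{
    "display_name": "my-client-ssl",
    "protocols": ["TLS_V1_2"],
    "session_cache_enabled": true,
    "session_cache_timeout": 300,
    "prefer_server_ciphers": true
  }'
A server-side SSL profile can be sketched the same way against lb-server-ssl-profiles.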
A Layer 4 virtual server must be associated to a primary server pool, also called a default pool.
If a virtual server is disabled, any new connection attempts to the virtual server are rejected by
sending either a TCP RST for TCP connections or an ICMP error message for UDP. New connections
are rejected even if there are matching persistence entries for them. Active connections continue to
be processed. If a virtual server is deleted or disassociated from a load balancer, active connections
to that virtual server fail.
Prerequisites
n Verify that application profiles are available. See Add an Application Profile.
n Verify that persistent profiles are available. See Add a Persistence Profile.
n Verify that SSL profiles for the client and server are available. See Add an SSL Profile.
n Verify that server pools are available. See Add a Server Pool.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
2 Select Networking > Load Balancing > Virtual Servers > Add Virtual Server.
Layer 4 virtual servers support either the Fast TCP or Fast UDP protocol, but not both.
For Fast TCP or Fast UDP protocol support on the same IP address and port, for example
DNS, a virtual server must be created for each protocol.
Name and Description Enter a name and a description for the Layer 4 virtual server.
IP Address Enter the virtual server IP address. Both IPv4 and IPv6 addresses are
supported. Note that the pool members' IP version must match the VIP IP
version. For example, VIP-IPv4 with Pool-IPv4, and VIP-IPv6 with Pool-IPv6.
Load Balancer Select an existing load balancer to attach to this Layer 4 virtual server from
the drop-down menu.
Server Pool Select an existing server pool from the drop-down menu.
The server pool consists of one or more servers, also called pool members
that are similarly configured and running the same application.
You can click the vertical ellipses to create a server pool.
Application Profile Based on the protocol type, the existing application profile is automatically
populated.
Click the vertical ellipses to create an application profile.
Access List Control When you enable Access List Control (ALC), all traffic flowing through the
load balancer is compared with the ACL statement, which either drops or
allows the traffic.
ACL is disabled by default. To enable, click Configure, and select Enabled.
Select an Action:
n Allow - Allows connections matching the selected group. All other
connections are dropped.
n Drop - Allows connections not matching the selected group. A dropped
connection generates a log entry if access log is enabled.
Select a Group. The IP addresses included in this group are either dropped
or allowed by the ACL.
Max Concurrent Connection Set the maximum concurrent connection allowed to a virtual server so that
the virtual server does not deplete resources of other applications hosted on
the same load balancer.
Max New Connection Rate Set the maximum new connection rate to a server pool member so that a virtual
server does not deplete resources.
Sorry Server Pool Select an existing sorry server pool from the drop-down menu.
The sorry server pool serves the request when a load balancer cannot select
a backend server to serve the request from the default pool.
You can click the vertical ellipses to create a server pool.
Default Pool Member Port Enter a default pool member port if the pool member port for a virtual
server is not defined.
For example, if a virtual server is defined with a port range of 2000–2999
and the default pool member port range is set as 8000-8999, then an
incoming client connection to the virtual server port 2500 is sent to a pool
member with a destination port set to 8500.
Admin State Toggle the button to disable the admin state of the Layer 4 virtual server.
Access Log Toggle the button to enable logging for the Layer 4 virtual server.
Name and Description Enter a name and a description for the Layer 4 virtual server.
IP Address Enter the virtual server IP address. Both IPv4 and IPv6 addresses are
supported. Note that the pool members' IP version must match the VIP IP
version. For example, VIP-IPv4 with Pool-IPv4, and VIP-IPv6 with Pool-IPv6.
Load Balancer Select an existing load balancer to attach to this Layer 4 virtual server from
the drop-down menu.
Server Pool Select an existing server pool from the drop-down menu.
The server pool consists of one or more servers, also called pool members
that are similarly configured and running the same application.
You can click the vertical ellipses to create a server pool.
Application Profile Based on the protocol type, the existing application profile is automatically
populated.
You can click the vertical ellipses to create an application profile.
Max Concurrent Connection Set the maximum concurrent connection allowed to a virtual server so that
the virtual server does not deplete resources of other applications hosted on
the same load balancer.
Access List Control When you enable Access List Control (ALC), all traffic flowing through the
load balancer is compared with the ACL statement, which either drops or
allows the traffic.
ACL is disabled by default. To enable, click Configure, and select Enabled.
Select an Action:
n Allow - Allows connections matching the selected group. All other
connections are dropped
n Drop - Allows connections not matching the selected group. A dropped
connection generates a log entry if access log is enabled.
Select a Group. The IP addresses included in this group are either dropped
or allowed by the ACL.
Max New Connection Rate Set the maximum new connection rate to a server pool member so that a virtual
server does not deplete resources.
Sorry Server Pool Select an existing sorry server pool from the drop-down menu.
The sorry server pool serves the request when a load balancer cannot select
a backend server to serve the request from the default pool.
You can click the vertical ellipses to create a server pool.
Default Pool Member Port Enter a default pool member port if the pool member port for a virtual
server is not defined.
For example, if a virtual server is defined with port range 2000–2999 and
the default pool member port range is set as 8000-8999, then an incoming
client connection to the virtual server port 2500 is sent to a pool member
with a destination port set to 8500.
Admin State Toggle the button to disable the admin state of the Layer 4 virtual server.
Access Log Toggle the button to enable logging for the Layer 4 virtual server.
Log Significant Event Only This field can only be configured if access logs are enabled. Connections
that cannot be sent to a pool member are treated as a significant event such
as "max connection limit," or "Access Control drop."
If a virtual server is disabled, any new connection attempts to the virtual server are rejected by
sending either a TCP RST for TCP connections or an ICMP error message for UDP. New connections
are rejected even if there are matching persistence entries for them. Active connections continue to
be processed. If a virtual server is deleted or disassociated from a load balancer, active connections
to that virtual server fail.
Note SSL profile is not supported in the NSX-T Data Center limited export release.
If a client-side SSL profile binding is configured on a virtual server but not a server-side SSL
profile binding, then the virtual server operates in an SSL-terminate mode, which has an
encrypted connection to the client and plain text connection to the server. If both the client-side
and server-side SSL profile bindings are configured, then the virtual server operates in SSL-proxy
mode, which has an encrypted connection both to the client and the server.
Associating a server-side SSL profile binding without associating a client-side SSL profile binding is
currently not supported. If neither a client-side nor a server-side SSL profile binding is associated
with a virtual server and the application is SSL-based, the virtual server operates in an SSL-
unaware mode. In this case, the virtual server must be configured for Layer 4. For example, the
virtual server can be associated with a Fast TCP profile.
Prerequisites
n Verify that application profiles are available. See Add an Application Profile.
n Verify that persistent profiles are available. See Add a Persistence Profile.
n Verify that SSL profiles for the client and server are available. See Add an SSL Profile.
n Verify that server pools are available. See Add a Server Pool.
n Verify that CA and client certificate are available. See Create a Certificate Signing Request
File.
n Verify that a certification revocation list (CRL) is available. See Import a Certificate Revocation
List.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
2 Select Networking > Load Balancing > Virtual Servers > Add Virtual Server.
3 Select L7 HTTP from the drop-down list and enter the protocol details.
Option Description
Name and Description Enter a name and a description for the Layer 7 virtual server.
IP Address Enter the virtual server IP address. Both IPv4 and IPv6 addresses are
supported.
Load Balancer Select an existing load balancer to attach to this Layer 7 virtual server from
the drop-down menu.
Server Pool Select an existing server pool from the drop-down menu.
The server pool consists of one or more servers, also called pool members
that are similarly configured and running the same application.
You can click the vertical ellipses to create a server pool.
Application Profile Based on the protocol type, the existing application profile is automatically
populated.
You can click the vertical ellipses to create an application profile.
Option Description
Client SSL Profile Select the client-side SSL Profile from the drop-down menu.
SNI Certificates Select the available SNI certificate from the drop-down menu.
Mandatory Client Authentication Toggle the button to enable this menu item.
Certificate Chain Depth Set the certificate chain depth to verify the depth in the server certificates
chain.
Certificate Revocation List Select the available CRL to disallow compromised server certificates.
Option Description
Server SSL Profile Select the Server-side SSL Profile from the drop-down menu.
Mandatory Server Authentication Toggle the button to enable this menu item.
Server-side SSL profile binding specifies whether the server certificate
presented to the load balancer during the SSL handshake must be validated
or not. When validation is enabled, the server certificate must be signed by
one of the trusted CAs whose self-signed certificates are specified in the
same server-side SSL profile binding.
Certificate Chain Depth Set the certificate chain depth to verify the depth in the server certificates
chain.
Certificate Revocation List Select the available CRL to disallow compromised server certificates.
OCSP and OCSP stapling are not supported on the server-side.
Option Description
Max Concurrent Connection Set the maximum concurrent connection allowed to a virtual server so that
the virtual server does not deplete resources of other applications hosted on
the same load balancer.
Max New Connection Rate Set the maximum new connection rate to a server pool member so that a virtual
server does not deplete resources.
Sorry Server Pool Select an existing sorry server pool from the drop-down menu.
The sorry server pool serves the request when a load balancer cannot select
a backend server to serve the request from the default pool.
You can click the vertical ellipses to create a server pool.
Default Pool Member Port Enter a default pool member port, if the pool member port for a virtual
server is not defined.
For example, if a virtual server is defined with port range 2000-2999 and the
default pool member port range is set as 8000-8999, then an incoming
client connection to the virtual server port 2500 is sent to a pool member
with a destination port set to 8500.
Admin State Toggle the button to disable the admin state of the Layer 7 virtual server.
Access Log Toggle the button to enable logging for the Layer 7 virtual server.
Log Significant Event Only This field can only be configured if access logs are enabled. Requests with
an HTTP response status of >=400 are treated as a significant event.
8 Click Save.
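A Layer 7 HTTPS virtual server in SSL-proxy (end-to-end) mode can also be expressed through the Policy API. The following is a minimal sketch under assumptions: the paths, certificate ID, and the client_ssl_profile_binding and server_ssl_profile_binding field names mirror the LBVirtualServer resource as the author understands it and must be verified in the NSX-T Data Center API Guide for your release.
# Sketch: Layer 7 HTTPS virtual server with client-side and server-side SSL bindings.
curl -k -u admin:'<password>' -X PUT \
  -H "Content-Type: application/json" \
  "https://<nsx-manager-ip-address>/policy/api/v1/infra/lb-virtual-servers/l7-https-vs" \
  -d '{
    "display_name": "l7-https-vs",
    "ip_address": "10.10.10.11",
    "ports": ["443"],
    "application_profile_path": "/infra/lb-app-profiles/default-http-lb-app-profile",
    "lb_service_path": "/infra/lb-services/<lb-service-id>",
    "pool_path": "/infra/lb-pools/<pool-id>",
    "client_ssl_profile_binding": {
      "ssl_profile_path": "/infra/lb-client-ssl-profiles/my-client-ssl",
      "default_certificate_path": "/infra/certificates/<server-cert-id>"
    },
    "server_ssl_profile_binding": {
      "ssl_profile_path": "/infra/lb-server-ssl-profiles/my-server-ssl"
    }
  }'
Omitting server_ssl_profile_binding in this sketch would correspond to SSL-terminate (offload) mode described above.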
Load balancer rules are supported for only Layer 7 virtual servers with an HTTP application
profile. Different load balancer services can use load balancer rules.
Each load balancer rule consists of single or multiple match conditions and single or multiple
actions. If the match conditions are not specified, then the load balancer rule always matches and
is used to define default rules. If more than one match condition is specified, then the matching
strategy determines if all conditions must match or any one condition must match for the load
balancer rule to be considered a match.
Each load balancer rule is implemented at a specific phase of load balancing processing: Transport,
HTTP Access, Request Rewrite, Request Forwarding, and Response Rewrite. Not all match
conditions and actions are applicable to each phase.
Up to 4,000 load balancer rules can be configured with the API if the skip_scale_validation flag
in LbService is set. The flag can be set only through the API. Refer to the NSX-T Data Center API
Guide for more information. Up to 512 load balancer rules can be configured through the user
interface.
Load Balancer rules support REGEX for match types. For more information, see Regular
Expressions in Load Balancer Rules.
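For orientation, a load balancer rule as carried in the virtual server configuration through the Policy API is structured roughly as shown below. This is a sketch only: the condition and action type names (LBHttpRequestUriCondition, LBHttpRequestUriRewriteAction), the phase value, and the match_strategy field follow the LBRule schema as the author understands it, and should be confirmed in the NSX-T Data Center API Guide for your release.
{
  "display_name": "example-rule",
  "phase": "HTTP_REQUEST_REWRITE",
  "match_strategy": "ALL",
  "match_conditions": [
    { "type": "LBHttpRequestUriCondition", "uri": "/old", "match_type": "STARTS_WITH" }
  ],
  "actions": [
    { "type": "LBHttpRequestUriRewriteAction", "uri": "/new" }
  ]
}
Rules of this form are added to the rules list of the corresponding LBVirtualServer.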
Prerequisites
Verify a Layer 7 HTTP virtual server is available. See Add Layer 7 HTTP Virtual Servers.
Load Balancer virtual server SSL configuration is found under SSL Configuration. There are two
possible configurations. In both modes, the load balancer sees the traffic, and applies load
balancer rules based on the client HTTP traffic.
n SSL Offload, configuring only the SSL client. In this mode, the client to VIP traffic is encrypted
(HTTPS), and the load balancer decrypts it. The VIP to Pool member traffic is clear (HTTP).
n SSL End-to-End, configuring both the Client SSL and Server SSL. In this mode, the client to
VIP traffic is encrypted (HTTPS), and the load balancer decrypts it and then re-encrypts it.
The VIP to Pool member traffic is encrypted (HTTPS).
The Transport Phase is complete when the virtual server receives the client SSL hello message.
This occurs before SSL is terminated and before HTTP traffic is processed.
The Transport Phase allows administrators to select the SSL mode and a specific server pool
based on the client SSL hello message. There are three options for the virtual server SSL mode:
n SSL Passthrough
n SSL Offload
n SSL End-to-End
Load Balancer rules support REGEX for match types. PCRE style REGEX patterns are supported
with a few limitations on advanced use cases. When REGEX is used in match conditions, named
capturing groups are supported. See Regular Expressions in Load Balancer Rules.
Prerequisites
Verify that a Layer 7 HTTP virtual server is available. See Add Layer 7 HTTP Virtual Servers.
Procedure
2 In the Load Balancer Rules section, next to Transport Phase, click Set > Add Rule to configure
the load balancer rules for the Transport Phase.
3 SSL SNI is the only match condition supported. Match conditions are used to match
application traffic passing through load balancers.
4 From the drop-down list, select a Match Type: starts with, ends with, equals, contains,
matches regex.
6 Toggle the Case Sensitive button to set a case-sensitive flag for HTTP header value
comparison.
Any Either host or path may match for this rule to be considered a match.
All Both host and path must match for this rule to be considered a match.
SSL Passthrough SSL Passthrough passes HTTP traffic to a backend server without
decrypting the traffic on the load balancer. The data is kept encrypted as it
travels through the load balancer.
If SSL Passthrough is selected, a server pool can be selected. See Add a
Server Pool for Load Balancing in Manager Mode.
SSL Offloading SSL Offloading decrypts all HTTP traffic on the load balancer. SSL offloading
allows data to be inspected as it passes between the load balancer and
server. If NTLM and multiplexing are not configured, the load balancer
establishes a new connection to the selected backend server for each HTTP
request.
SSL End-to-End After receiving the HTTP request, the load balancer connects to the
selected backend server and talks with it using HTTPS. If NTLM and
multiplexing are not configured, the load balancer establishes a new
connection to the selected backend server for each HTTP request.
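A Transport phase rule that matches the SSL SNI and steers traffic could look roughly like the sketch below. This is an assumption-laden example: the LBSslSniCondition, LBSslModeSelectionAction, and LBSelectPoolAction type names, their fields, and the ssl_mode values reflect the Policy API LBRule schema as the author understands it; the hostname and pool path are placeholders. Verify against the NSX-T Data Center API Guide for your release.
{
  "display_name": "sni-steering-rule",
  "phase": "TRANSPORT",
  "match_strategy": "ANY",
  "match_conditions": [
    { "type": "LBSslSniCondition", "sni": "app.example.com", "match_type": "EQUALS" }
  ],
  "actions": [
    { "type": "LBSslModeSelectionAction", "ssl_mode": "SSL_PASSTHROUGH" },
    { "type": "LBSelectPoolAction", "pool_id": "/infra/lb-pools/<pool-id>" }
  ]
}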
In the HTTP Access phase, users can define actions to validate a JWT from clients and either pass
the JWT to, or remove it before forwarding requests to, backend servers.
Load Balancer rules support REGEX for match types. PCRE-style REGEX patterns are supported with
a few limitations on advanced use cases. When REGEX is used in match conditions, named
capturing groups are supported. See Regular Expressions in Load Balancer Rules.
Prerequisites
Verify that a Layer 7 HTTP virtual server is available. See Add Layer 7 HTTP Virtual Servers.
Procedure
2 In the Load Balancer Rules section, next to HTTP Access Phase, click Set > Add Rule to
configure the load balancer rules for the HTTP Access phase.
3 From the drop-down menu, select a match condition. Match conditions are used to match
application traffic passing through load balancers. Multiple match conditions can be specified
in one load balancer rule. Each match condition defines a criterion for application traffic.
HTTP Request URI Match an HTTP request URI without query arguments.
http_request.uri - value to match
HTTP Request URI Arguments Match an HTTP request URI query argument.
http_request.uri_arguments - value to match
IP Header Source Matches IP header fields of HTTP messages. The source type must
be either a single IP address, a range of IP addresses, or a group. See Add a
Group.
n If IP Header Source is selected, with an IP Address source type, the
source IP address of HTTP messages should match IP addresses which
are configured in groups. Both IPv4 and IPv6 addresses are supported.
n If IP Header Source is selected with a Group source type, select the
group from the drop-down menu.
ip_header.source_address - source address to match
ip_header.destination_address - destination address to match
Case Sensitive Set a case-sensitive flag for HTTP header value comparison. If true, case is
significant when comparing the HTTP header value.
4 From the drop-down list, select a Match Type: starts with, ends with, equals, contains,
matches regex.
Any Either host or path may match for this rule to be considered a match.
All Both host and path must match for this rule to be considered a match.
Action Description
JWT Authentication JSON Web Token (JWT) is an open standard that defines a compact and
self-contained way for securely transmitting information between parties as
a JSON object. This information can be verified and trusted because it is
digitally signed.
n Realm - A description of the protected area. If no realm is specified,
clients often display a formatted hostname. The configured realm is
returned when a client request is rejected with a 401 HTTP status. The
response is: "WWW-Authenticate: Bearer realm=<realm>".
n Tokens - This parameter is optional. Load balancer searches for every
specified token one-by-one for the JWT message until found. If not
found, or if this text box is not configured, load balancer searches the
Bearer header by default in the http request "Authorization: Bearer
<token>"
n Key Type - Symmetric key or asymmetric public key (certificate-id)
n Preserve JWT - This is a flag to preserve JWT and pass it to backend
server. If disabled, the JWT key to the backend server is removed.
Connection Drop If negate is enabled, when Connection Drop is configured, all requests not
matching the specified match condition are dropped. Requests matching the
specified match condition are allowed.
Variable Assignment Enables users to assign a value to a variable in HTTP Access Phase, in such
a way that the result can be used as a condition in other load balancer rule
phases.
Prerequisites
Verify that a Layer 7 HTTP virtual server is available. See Add Layer 7 HTTP Virtual Servers.
Load Balancer rules support REGEX for match types. PCRE-style REGEX patterns are supported with
a few limitations on advanced use cases. When REGEX is used in match conditions, named
capturing groups are supported. See Regular Expressions in Load Balancer Rules.
Procedure
2 In the Load Balancer Rules section, next to Request Rewrite Phase, click Set > Add Rule to
configure the load balancer rules for the HTTP Request Rewrite phase.
3 From the drop-down list, select a match condition. Match conditions are used to match
application traffic passing through load balancers. Multiple match conditions can be specified
in one load balancer rule. Each match condition defines a criterion for application traffic.
HTTP Request URI Match an HTTP request URI without query arguments.
http_request.uri - value to match
HTTP Request URI Arguments Used to match URI arguments, also known as the query string, of HTTP request messages.
For example, in the URI http://example.com?foo=1&bar=2, "foo=1&bar=2" is the
query string containing URI arguments. In a URI, the query string is
indicated by the first question mark ("?") character and terminated by a
number sign ("#") character or by the end of the URI.
http_request.uri_arguments - value to match
HTTP Request Version Used to match the HTTP protocol version of the HTTP request messages
http_request.version - value to match
HTTP Request Header Used to match HTTP request messages by HTTP header fields. HTTP header
fields are components of the header section of HTTP request and response
messages. They define the operating parameters of an HTTP transaction.
http_request.header_name - header name to match
http_request.header_value - value to match
HTTP Request Cookie Used to match HTTP request messages by cookie which is a specific type of
HTTP header. The match_type and case_sensitive define how to compare
cookie value.
http_request.cookie_value - value to match
IP Header Source Matches IP header fields of HTTP messages. The source type must be
either a single IP address, a range of IP addresses, or a group. See Add a
Group.
n If IP Header Source is selected with an IP Address source type, the
source IP address of HTTP messages should match IP addresses which
are configured in groups. Both IPv4 and IPv6 addresses are supported.
n If IP Header Source is selected with a Group source type, select the
group from the drop-down list.
ip_header.source_address - source address to match
ip_header.destination_address - destination address to match
Case Sensitive Set a case-sensitive flag for HTTP header value comparison. If true, case is
significant when comparing the HTTP header value.
4 From the drop-down menu, select a Match Type: starts with, ends with, equals, contains, or
matches regex. Match type is used to match a condition with a specified action.
Starts With If the match condition starts with the specified value, the condition matches.
Ends With If the match condition ends with the specified value, the condition matches.
Equals If the match condition is the same as the specified value, the condition
matches.
Contains If the match condition contains the specified value, the condition matches.
Matches Regex If the match condition matches the specified regular expression, the condition matches.
Any Indicates that either host or path can match for this rule to be considered a
match.
All Indicates that both host and path must match for this rule to be considered
a match.
Actions Description
HTTP Request URI Rewrite This action is used to rewrite URIs in matched HTTP request messages.
Specify the URI and URI Arguments in this action to rewrite the matched
HTTP request message's URI and URI arguments to the new values. The full
URI of HTTP messages has the following syntax: scheme:[//
[user[:password]@]host[:port]][/path][?query][#fragment]. The URI field of
this action is used to rewrite the /path part of the above syntax. The URI
Arguments field is used to rewrite the query part. Captured variables and
built-in variables can be used in the URI and URI Arguments fields.
a Enter the URI of the HTTP request
b Enter the query string of URI, which typically contains key value pairs, for
example: foo1=bar1&foo2=bar2.
HTTP Request Header Rewrite This action is used to rewrite header fields of matched HTTP request
messages to specified new values.
a Enter the name of a header field of the HTTP request message.
b Enter the header value.
Actions Description
HTTP Request Header Delete This action is used to delete header fields of HTTP request messages at
HTTP_REQUEST_REWRITE phase. One action can be used to delete all
headers with same header name. To delete headers with different header
names, multiple actions must be defined.
n Enter the name of a header field of HTTP request message.
8 Toggle the Case Sensitive button to set a case-sensitive flag for HTTP header value
comparison.
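Put together, a Request Rewrite phase rule might look roughly like the sketch below. The type names and fields (LBHttpRequestUriCondition, LBHttpRequestHeaderCondition, LBHttpRequestUriRewriteAction, LBHttpRequestHeaderRewriteAction) and the example values are assumptions drawn from the Policy API LBRule schema as the author understands it; confirm them in the NSX-T Data Center API Guide for your release.
{
  "display_name": "uri-rewrite-rule",
  "phase": "HTTP_REQUEST_REWRITE",
  "match_strategy": "ALL",
  "match_conditions": [
    { "type": "LBHttpRequestUriCondition", "uri": "/legacy", "match_type": "STARTS_WITH" },
    { "type": "LBHttpRequestHeaderCondition", "header_name": "Host",
      "header_value": "www.example.com", "match_type": "EQUALS" }
  ],
  "actions": [
    { "type": "LBHttpRequestUriRewriteAction", "uri": "/app", "uri_arguments": "src=legacy" },
    { "type": "LBHttpRequestHeaderRewriteAction", "header_name": "X-Forwarded-Proto",
      "header_value": "https" }
  ]
}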
Prerequisites
Verify that a Layer 7 HTTP virtual server is available. See Add Layer 7 HTTP Virtual Servers.
Load Balancer rules support REGEX for match types. PCRE-style REGEX patterns are supported with
a few limitations on advanced use cases. When REGEX is used in match conditions, named
capturing groups are supported. See Regular Expressions in Load Balancer Rules.
Procedure
2 Click Request Forwarding > Add Rule to configure the load balancer rules for the HTTP
Request Forwarding.
3 From the drop-down list, select a match condition. Match conditions are used to match
application traffic passing through load balancers. Multiple match conditions can be specified
in one load balancer rule. Each match condition defines a criterion for application traffic.
HTTP Request URI Match an HTTP request URI without query arguments.
http_request.uri - value to match
HTTP Request URI Arguments Used to match URI arguments, also known as the query string, of HTTP request messages.
For example, in the URI http://example.com?foo=1&bar=2, "foo=1&bar=2" is the
query string containing URI arguments. In a URI, the query string is
indicated by the first question mark ("?") character and terminated by a
number sign ("#") character or by the end of the URI.
http_request.uri_arguments - value to match
HTTP Request Version Used to match the HTTP protocol version of the HTTP request messages
http_request.version - value to match
HTTP Request Header Used to match HTTP request messages by HTTP header fields. HTTP header
fields are components of the header section of HTTP request and response
messages. They define the operating parameters of an HTTP transaction.
http_request.header_name - header name to match
http_request.header_value - value to match
HTTP Request Cookie Used to match HTTP request messages by cookie which is a specific type of
HTTP header. The match_type and case_sensitive define how to compare
cookie value.
http_request.cookie_value - value to match
IP Header Source Matches IP header fields of HTTP messages. The source type must be
either a single IP address, a range of IP addresses, or a group. See Add a
Group.
n If IP Header Source is selected with an IP Address source type, the
source IP address of HTTP messages should match IP addresses which
are configured in groups. Both IPv4 and IPv6 addresses are supported.
n If IP Header Source is selected with a Group source type, select the
group from the drop-down list.
ip_header.source_address - source address to match
ip_header.destination_address - destination address to match
Case Sensitive Set a case-sensitive flag for HTTP header value comparison. If true, case is
significant when comparing the HTTP header value.
4 Select an action:
Action Description
HTTP Reject Used to reject HTTP request messages. The specified reply_status value is
used as the status code for the corresponding HTTP response message. The
response message is sent back to client (usually a browser) indicating the
reason it was rejected.
http_forward.reply_status - HTTP status code used to reject
http_forward.reply_message - HTTP rejection message
HTTP Redirect Used to redirect HTTP request messages to a new URL. The HTTP status
code for redirection is 3xx, for example, 301, 302, 303, 307, etc. The
redirect_url is the new URL that the HTTP request message is redirected to.
http_forward.redirect_status - HTTP status code for redirect
http_forward.redirect_url - HTTP redirect URL
Action Description
Select Pool Force the request to a specific server pool. The specified pool's configured
algorithm (predictor) is used to select a server within the server
pool. The matched HTTP request messages are forwarded to the specified
pool.
http_forward.select_pool - server pool UUID
Variable Persistence On Select a generic persistence profile and enter a variable name.
You can also enable Hash Variable. If the variable value is long, hashing the
variable ensures that it is correctly stored in the persistence table. If the
Hash Variable is not enabled, only the fixed prefix part of the variable value
is stored in the persistence table if the variable value is long. As a result, two
different requests with long variable values might be dispatched to the same
backend server because their variable values have the same prefix part,
when they should be dispatched to different backend servers.
Connection Drop If negate is enabled in condition, when Connection Drop is configured, all
requests not matching the condition are dropped. Requests matching the
condition are allowed.
Reply Message Server responds with a reply message that contains confirmed addresses
and configuration.
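As one possible combination of the conditions and actions above, a Request Forwarding rule that redirects matching requests might look like the sketch below. The type names, fields, and values (including the redirect target) are assumptions based on the Policy API LBRule schema as the author understands it; verify them in the NSX-T Data Center API Guide for your release.
{
  "display_name": "redirect-rule",
  "phase": "HTTP_FORWARDING",
  "match_strategy": "ANY",
  "match_conditions": [
    { "type": "LBHttpRequestUriCondition", "uri": "/maintenance", "match_type": "STARTS_WITH" }
  ],
  "actions": [
    { "type": "LBHttpRedirectAction", "redirect_status": "302",
      "redirect_url": "https://status.example.com" }
  ]
}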
Prerequisites
Verify that a Layer 7 HTTP virtual server is available. See Add Layer 7 HTTP Virtual Servers.
Load Balancer rules support REGEX for match types. PCRE-style REGEX patterns are supported with
a few limitations on advanced use cases. When REGEX is used in match conditions, named
capturing groups are supported. See Regular Expressions in Load Balancer Rules.
Procedure
2 Click Response Rewrite > Add Rule to configure the load balancer rules for the HTTP
Response Rewrite.
HTTP Response Header This condition is used to match HTTP response messages from backend
servers by HTTP header fields.
http_response.header_name - header name to match
http_response.header_value - value to match
IP Header Source Matches IP header fields of HTTP messages. The source type must be
either a single IP address, a range of IP addresses, or a group. See Add a
Group.
The source IP address of HTTP messages should match IP addresses which
are configured in groups. Both IPv4 and IPv6 addresses are supported.
ip_header.source_address - source address to match
ip_header.destination_address - destination address to match
Case Sensitive Set a case-sensitive flag for HTTP header value comparison.
3 Select an action:
Action Description
HTTP Response Header Rewrite This action is used to rewrite header fields of HTTP response messages to
specified new values.
http_response.header_name - header name
http_response.header_value - value to write
HTTP Response Header Delete This action is used to delete header fields of HTTP response messages.
http_response.header_delete - header name
Variable Persistence Learn Select a generic persistence profile and enter a variable name.
You can also enable Hash Variable. If the variable value is long, hashing the
variable ensures that it will be correctly stored in the persistence table. If
Hash Variable is not enabled, only the fixed prefix part of the variable value
is stored in the persistence table if the variable value is long. As a result, two
different requests with long variable values might be dispatched to the same
backend server (because their variable values have the same prefix part)
when they should be dispatched to different backend servers.
Perl Compatible Regular Expressions (PCRE) style REGEX patterns are supported with a few
limitations on advanced use cases. When REGEX is used in match conditions, named capturing
groups are supported.
n Character unions and intersections are not supported. For example, do not use [a-z[0-9]] and
[a-z&&[aeiou]]; instead use [a-z0-9] and [aeiou], respectively.
n Only 9 back references are supported and \1 through \9 can be used to refer to them.
n Use \0dd format to match octal characters, not the \ddd format.
n Embedded flags are not supported at the top level; they are only supported within groups.
For example, do not use "Case (?i:s)ensitive"; instead use "Case ((?i:s)ensitive)".
n Preprocessing operations \l, \u, \L, and \U are not supported, where \l lowercases the next
character, \u uppercases the next character, \L lowercases until \E, and \U uppercases until \E.
n Using a named character construct for Unicode characters is not supported. For example, do
not use \N{name}; instead use \u2018.
When REGEX is used in match conditions, named capturing groups are supported. For
example, REGEX match pattern /news/(?<year>\d+)-(?<month>\d+)-(?<day>\d+)/(?<article>.*)
can be used to match a URI like /news/2018-06-15/news1234.html.
Then the variables are set as follows: $year = "2018", $month = "06", $day = "15", $article =
"news1234.html". After the variables are set, they can be used in load balancer rule
actions. For example, the URI can be rewritten using the matched variables as /news.py?year=
$year&month=$month&day=$day&article=$article. The URI then gets rewritten as /news.py?
year=2018&month=06&day=15&article=news1234.html.
Rewrite actions can use a combination of named capturing groups and built-in variables. For
example, URI can be written as /news.py?year=$year&month=$month&day=$day&article=
$article&user_ip=$_remote_addr. Then the example URI gets rewritten as /news.py?
year=2018&month=06&day=15&article=news1234.html&user_ip=1.1.1.1.
Note For named capturing groups, the name cannot start with an _ character.
In addition to named capturing groups, the following built-in variables can be used in rewrite
actions. All the built-in variable names start with _.
n $_upstream_http_<name> - arbitrary response header field and <name> is the field name
converted to lower case with dashes replaced by underscores
n $_host - in the order of precedence - host name from the request line, or host name from
the "Host" request header field, or the server name matching a request
n $_http_<name> - arbitrary request header field and <name> is the field name converted
to lower case with dashes replaced by underscores
n $_ssl_client_i_dn: returns the "issuer DN" string of the client certificate for an established
SSL connection according to RFC 2253
n $_ssl_client_s_dn: returns the "subject DN" string of the client certificate for an
established SSL connection according to RFC 2253
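Expressed as a rule body, the /news example above with the $_remote_addr built-in variable might look like the sketch below. The condition and action type names are assumptions based on the Policy API LBRule schema as the author understands it; the backslashes in the regular expression are escaped as required by JSON.
{
  "display_name": "news-uri-rewrite",
  "phase": "HTTP_REQUEST_REWRITE",
  "match_strategy": "ALL",
  "match_conditions": [
    { "type": "LBHttpRequestUriCondition",
      "uri": "/news/(?<year>\\d+)-(?<month>\\d+)-(?<day>\\d+)/(?<article>.*)",
      "match_type": "REGEX" }
  ],
  "actions": [
    { "type": "LBHttpRequestUriRewriteAction",
      "uri": "/news.py",
      "uri_arguments": "year=$year&month=$month&day=$day&article=$article&user_ip=$_remote_addr" }
  ]
}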
Load Balancer created groups are visible under Inventory > Groups.
Server pool groups are created with the name NLB.PoolLB.Pool_Name LB_Name, with the group
member IP addresses assigned.
VIP groups are created with the name NLB.VIP.virtual server name, and the VIP group member IP
address is the VIP IP address.
For server pool groups, you can create an allow-traffic distributed firewall rule from the load
balancer group (NLB.PoolLB.Pool_Name LB_Name). For the Tier-1 gateway firewall, you can create a
rule that allows traffic from clients to the LB VIP group (NLB.VIP.virtual server name).
Important Distributed Load Balancer is supported only for Kubernetes (K8s) cluster IPs
managed by vSphere with Kubernetes. Distributed Load Balancer is not supported for any other
workload types. As an administrator, you cannot use NSX Manager GUI to create or modify
Distributed Load Balancer objects. These objects are pushed by vCenter Server through NSX-T
API when K8s cluster IPs are created in vCenter Server.
Note Do not enable Distributed Intrusion Detection Service (IDS) in an environment that is using
Distributed Load Balancer. NSX-T Data Center does not support using IDS with a Distributed
Load Balancer.
In traditional networks, a central load balancer deployed on an NSX Edge node is configured to
distribute traffic load managed by virtual servers that are configured on the load balancer.
If you are using a central load balancer, increasing the number of virtual servers in the load balancer
pool might not always meet the scale or performance criteria of a multi-tier distributed application. A
distributed load balancer is realized on each hypervisor where load balancing workloads, such as
clients and servers, are deployed, ensuring that traffic is load balanced on each hypervisor in a
distributed way.
A distributed load balancer can be configured on the NSX-T network along with a central load
balancer.
In the diagram, an instance of the Distributed Load Balancer is attached to a VM group. As the
VMs are downlinks to the distributed logical router, the Distributed Load Balancer only load balances
east-west traffic. In contrast, the central load balancer manages north-south traffic.
As an administrator, ensure the following:
n Virtual IP addresses and pool members connected to a DLB instance have unique IP
addresses so that traffic is routed correctly.
1 When Web VM1 sends out a packet to App VM2, it is received by the VIP-APP.
The DLB-APP instance is attached to the policy group consisting of Web tier VMs. Similarly, a
DLB-DB instance hosting VIP-DB must be attached to the policy group consisting of App tier VMs.
2 The VIP-APP hosted on DLB APP receives the request from Web VM1.
3 Before reaching the destination VM group, the packet is filtered by distributed firewall rules.
4 After the packet is filtered based on the firewall rules, it is sent to the Tier-1 router.
6 The route is completed when the packet is delivered to the destination App VM2 group.
As DLB VIPs can only be accessed from VMs connected to downlinks of Tier-0 or Tier-1 logical
routers, DLB provides load balancing services to east-west traffic.
A DLB instance can co-exist with an instance of DFW. With DLB and DFW enabled on a virtual
interface of a hypervisor, the traffic is first load balanced based on the configuration in DLB and
then DFW rules are applied to traffic flowing from a VM to the hypervisor. DLB rules are applied
to traffic originating from downlinks of a Tier-0 or Tier-1 logical router going to the destination
hypervisor. DLB rules cannot be applied to traffic flowing in the reverse direction, that is, traffic
originating from outside the host going to a destination VM.
For example, if the DLB instance is load balancing traffic from Web-VMs to App-VMs, then to
allow such traffic to pass through DFW, ensure that the DFW rule is set to value "Source=Web-
VMs, Destination=App-VMs, Action=Allow".
At the end of the procedure a DLB instance is attached to the virtual interfaces of a VM group.
It is only possible to create and attach a DLB instance through API commands.
Prerequisites
n Add a policy group consisting of VMs. For example, such a VM group can be related to the
App tier that receives requests from a VM on the Web-tier.
Procedure
"connectivity_path" : "/infra/domains/default/groups/<clientVMGroup>",
"enabled" : true,
"size" : "DLB",
"error_log_level" : "INFO",
"access_log_enabled" : false,
"resource_type" : "LBService",
"display_name" : "mydlb"
}
Where,
n connectivity_path:
n If the connectivity path is set to Null or Empty, the DLB instance is not applied to any
transport nodes.
n If the connectivity path is set to ALL, all virtual interfaces of all transport nodes are
bound to the DLB instance. One DLB instance is applied to all the virtual interfaces of
the policy group.
n size: Set to value DLB. As each application or virtual interface gets an instance of DLB,
there is just a single size form factor of the DLB instance.
A DLB instance is created and attached to the VM group. The DLB instance created on the
Web-tier is attached to all the virtual interfaces of the App-tier VM group.
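As a reference for how the body shown above might be submitted, the sketch below wraps it in a Policy API request. The endpoint /policy/api/v1/infra/lb-services/<id> and the identifier mydlb are assumptions to verify against the NSX-T Data Center API Guide for your release.
# Sketch: push the LBService body shown above through the Policy API.
curl -k -u admin:'<password>' -X PUT \
  -H "Content-Type: application/json" \
  "https://<nsx-manager-ip-address>/policy/api/v1/infra/lb-services/mydlb" \
  -d '{
    "connectivity_path": "/infra/domains/default/groups/<clientVMGroup>",
    "enabled": true,
    "size": "DLB",
    "error_log_level": "INFO",
    "access_log_enabled": false,
    "resource_type": "LBService",
    "display_name": "mydlb"
  }'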
What to do next
After creating a DLB instance, log in to the NSX Manager and go to Networking > Load Balancing >
Load Balancers to view details of the DLB instance.
This task can be done both from the NSX-T UI and NSX-T API.
Prerequisites
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
Field Description
Members Click Select Members and add individual members to the group.
When adding individual members, enter values only in the following
fields:
n Name
n IP address
n Port
Note Except for the above-mentioned fields, no other fields are supported
when adding members to a DLB pool.
SNAT Translation Mode Set this field to Disabled state. SNAT translation is not supported in a
Distributed Load Balancer.
5 Click Save.
Results
Server pool members are added for the Distributed Load Balancer.
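The same pool can be sketched through the Policy API with only the supported member fields (name, IP address, port) and SNAT disabled. The endpoint, the LBPool field names, the LBSnatDisabled type, and the example member addresses are assumptions to confirm in the NSX-T Data Center API Guide for your release.
# Sketch: create the DLB server pool through the Policy API (assumed schema).
curl -k -u admin:'<password>' -X PUT \
  -H "Content-Type: application/json" \
  "https://<nsx-manager-ip-address>/policy/api/v1/infra/lb-pools/mylbpool" \
  -d '{
    "display_name": "mylbpool",
    "algorithm": "ROUND_ROBIN",
    "snat_translation": { "type": "LBSnatDisabled" },
    "members": [
      { "display_name": "Member_VM30", "ip_address": "192.168.100.160", "port": "80" },
      { "display_name": "Member_VM31", "ip_address": "192.168.100.161", "port": "80" },
      { "display_name": "Member_VM32", "ip_address": "192.168.100.162", "port": "80" }
    ]
  }'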
What to do next
This task can be performed both from the NSX-T UI and NSX-T APIs.
Prerequisites
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
4 To configure a virtual server for a Distributed Load Balancer, only the following fields are
supported.
Field Description
IP Address IP address of the Distributed Load Balancer virtual server. Configures the IP
address of the Distributed Load Balancer virtual server where it receives all
client connections and distributes them among the backend servers.
Load Balancer Attach the Distributed Load Balancer instance that is associated to the
virtual server. The virtual server then knows which policy group the load
balancer is servicing.
Server Pool Select the server pool. The server pool contains backend servers. The server
pool consists of one or more servers that are similarly configured and are
running the same application. These servers are also referred to as pool members.
Application Profile Select the application profile for the virtual server.
The application profile defines the application protocol characteristics. It is
used to influence how load balancing is performed. The supported
application profiles are:
n Load Balancer Fast TCP Profile
n Load Balancer Fast UDP Profile
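A corresponding API sketch for the DLB virtual server, using only the supported fields listed above, is shown below. The endpoint, the LBVirtualServer field names, and the referenced paths (including the default Fast TCP application profile path) are assumptions to verify in the NSX-T Data Center API Guide for your release.
# Sketch: configure the DLB virtual server with only the supported fields.
curl -k -u admin:'<password>' -X PUT \
  -H "Content-Type: application/json" \
  "https://<nsx-manager-ip-address>/policy/api/v1/infra/lb-virtual-servers/mytcpvip" \
  -d '{
    "display_name": "mytcpvip",
    "ip_address": "<vip-ip-address>",
    "ports": ["80"],
    "lb_service_path": "/infra/lb-services/mydlb",
    "pool_path": "/infra/lb-pools/mylbpool",
    "application_profile_path": "/infra/lb-app-profiles/default-tcp-lb-app-profile"
  }'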
Results
Verify whether the DLB is distributing traffic to all the servers in the pool based on the algorithm
defined in the configuration. If you choose the ROUND_ROBIN algorithm, the DLB must select
servers from the pool in a round-robin fashion.
What to do next
After you securely connect to the ESXi host, run /opt/vmware/nsx-nestdb/bin/nestdb-cli. From
the nestdb-cli prompt, run the following commands.
To view the configured DLB service, run get LbServiceMsg. The output is similar to:
{'id': {'left': 13946864992859343551, 'right': 10845263561610880178},
'virtual_server_id': [{'left': 13384746951958284821, 'right': 11316502527836868364}],
'display_name': 'mydlb', 'size': 'DLB', 'enabled': True,
'access_log_enabled': False, 'log_level': 'LB_LOG_LEVEL_INFO',
'applied_to': {'type': 'CONTAINER',
'attachment_id': {'left': 2826732686997341216, 'right': 10792930437485655035}}}
To view the virtual server configured for DLB, run get LbVirtualServerMsg. The output is similar to:
{'port': '80', 'revision': 0, 'display_name': 'mytcpvip',
'pool_id': {'left': 4370937730160476541, 'right': 13181758910457427118},
'enabled': True, 'access_log_enabled': False,
'id': {'left': 13384746951958284821, 'right': 11316502527836868364},
'ip_protocol': 'TCP', 'ip_address': {'ipv4': 2071690107},
'application_profile_id': {'left': 1527034089224553657, 'right': 10785436903467108397}}
To view configuration of the DLB pool members, run get LbPoolMsg. The output is similar to:
{'tcp_multiplexing_number': 6, 'display_name': 'mylbpool',
'tcp_multiplexing_enabled': False, 'member':
[{'port': '80', 'weight': 1, 'display_name': 'Member_VM30', 'admin_state': 'ENABLED',
'ip_address': {'ipv4': 3232261280}, 'backup_member': False},
{'port': '80', 'weight': 1, 'display_name': 'Member_VM31', 'admin_state': 'ENABLED',
'ip_address': {'ipv4': 3232261281}, 'backup_member': False},
{'port': '80', 'weight': 1, 'display_name': 'Member_VM32', 'admin_state': 'ENABLED',
'ip_address': {'ipv4': 3232261282}, 'backup_member': False}],
'id': {'left': 4370937730160476541, 'right': 13181758910457427118},
'min_active_members': 1, 'algorithm': 'ROUND_ROBIN'}
To view NSX controller configuration pushed to the ESXi host, run get ContainerMsg. The output is similar to:
{'container_type': 'CONTAINER',
'id': {'left': 2826732686997341216, 'right': 10792930437485655035},
'vif': ['cd2e482b-2998-480f-beba-65fbd7ab1e62',
'f8aa2a58-5662-4c6b-8090-d1bd19174205',
'83a1f709-e675-4e42-b677-ff501fd0f4ec',
'b8366b39-4c81-41fc-b89e-de7716462b2f'],
'name': 'default.clientVMGroup',
'mac_address': [{'mac': 52237218275}, {'mac': 52243694681}, {'mac': 52233233291}, {'mac': 52239463383}],
'ip_address': [{'ipv4': 16844388}, {'ipv4': 16844644}, {'ipv4': 16844132},
{'ipv4': 3232261283}, {'ipv4': 16844298}, {'ipv4': 16844554}, {'ipv4': 16844042}]}
To view application profile configuration on the ESXi host, run get LbApplicationProfileMsg. The output is similar to:
{'display_name': 'default-tcp-lb-app-profile',
'id': {'left': 1527034089224553657, 'right': 10785436903467108397},
'application_type': 'FAST_TCP',
'fast_tcp_profile': {'close_timeout': 8, 'flow_mirroring_enabled': False, 'idle_timeout': 1800}}
Action Command
Show statistics of all pools of the specified load balancer: get load-balancer <UUID_LoadBalancer> pools stats
Show statistics of the specified load balancer and pool: get load-balancer <UUID_LoadBalancer> pool <UUID_Pool> stats
Show statistics of all virtual servers of the specified load balancer: get load-balancer <UUID_LoadBalancer> virtual-servers stats
Show statistics of the specified load balancer and virtual server: get load-balancer <UUID_LoadBalancer> virtual-server <UUID_VirtualServer> stats
Clear statistics of the specified load balancer and pool: clear load-balancer <UUID_LoadBalancer> pool <UUID_Pool> stats
Clear statistics of all pools of the specified load balancer: clear load-balancer <UUID_LoadBalancer> pools stats
Clear statistics of the specified load balancer: clear load-balancer <UUID_LoadBalancer> stats
Clear statistics of the specified load balancer and virtual server: clear load-balancer <UUID_LoadBalancer> virtual-server <UUID_VirtualServer> stats
Clear statistics of all virtual servers of the specified load balancer: clear load-balancer <UUID_LoadBalancer> virtual-servers stats
Forwarding Policies or Policy-Based Routing (PBR) rules define how NSX-T handles traffic from
an NSX-managed VM. This traffic can be steered to NSX-T overlay or it can be routed through
the cloud provider's (underlay) network.
Note See Chapter 23 Using NSX Cloud for details on how to manage your public cloud workload
VMs with NSX-T Data Center.
Three default forwarding policies are set up automatically after you either deploy a PCG on a
Transit VPC/VNet or link a Compute VPC/VNet to the Transit.
1 One Route to Underlay for all traffic that is addressed within the Transit/Compute VPC/VNet
2 Another Route to Underlay for all traffic destined to the metadata services of the public
cloud.
3 One Route to Overlay for all other traffic, for example, traffic that is headed outside the
Transit/Compute VPC/VNet. Such traffic is routed over the NSX-T overlay tunnel to the PCG
and further to its destination.
Note For traffic destined to another VPC/VNET managed by the same PCG: Traffic is
routed from the source NSX-managed VPC/VNet via the NSX-T overlay tunnel to the PCG
and then routed to the destination VPC/VNet.
For traffic destined to another VPC/VNet managed by a different PCG: Traffic is routed
from one NSX-managed VPC/VNet over the NSX overlay tunnel to the PCG of the source
VPC/VNet and forwarded to the PCG of the destination NSX-managed VPC/VNet.
If traffic is headed to the internet, the PCG routes it to the destination in the internet.
If you have direct connectivity from an NSX-managed workload VM to a destination outside the
managed VPC/VNet and want to bypass the PCG, set up a forwarding policy to route traffic from
this VM via underlay.
When traffic is routed through the underlay network, the PCG is bypassed and therefore the
north-south firewall is not encountered by traffic. However, you still have to manage rules for
east-west or distributed firewall (DFW) because those rules are applied at the VM-level before
reaching the PCG.
A forwarding policy uses one of the following actions:
n Route to Underlay
n Route to Overlay
These are the common scenarios where forwarding policies are useful:
n Route to Underlay: Access a service on underlay from an NSX-managed VM. For example,
access to the AWS S3 service on the AWS underlay network.
n Route from Underlay: Access a service hosted on an NSX-managed VM from the underlay
network. For example, access from AWS ELB to the NSX-managed VM.
For example, to use services provided by the public cloud, such as S3 by AWS, you can manually
create a policy to allow a set of IP addresses to access this service by being routed through
underlay.
Prerequisites
Procedure
1 Click Add Section. Name the section appropriately, for example, AWS Services.
2 Select the check box next to the section and click Add Rule. Name the rule, for example,
S3 Rules.
3 In the Sources tab, select the VPC or VNet where you have the workload VMs to which you
want to provide the service access, for example, the AWS VPC. You can also create a Group
here to include multiple VMs matching one or more criteria.
4 In the Destinations tab, select the VPC or VNet where the service is hosted, for example, a
Group that contains the IP address of the S3 service in AWS.
5 In the Services tab, select the service from the drop-down menu. If the service does not exist,
you can add it. You can also leave the selection as Any because you can provide the routing
details under Destinations.
6 In the Action tab, select how you want the routing to work, for example, select Route to
Underlay if setting up this policy for the AWS S3 service.
Note IP blocks are used by NSX Container Plug-in (NCP). For more info about NCP, see the NSX
Container Plug-in for Kubernetes and Cloud Foundry - Installation and Administration Guide.
When you configure a DNS zone, you can specify a source IP for a DNS forwarder to use when
forwarding DNS queries to an upstream DNS server. If you do not specify a source IP, the DNS
query packet's source IP will be the DNS forwarder's listener IP. Specifying a source IP is needed
if the listener IP is an internal address that is not reachable from the external upstream DNS
server. To ensure that the DNS response packets are routed back to the forwarder, a dedicated
source IP is needed. Alternatively, you can configure SNAT on the logical router to translate the
listener IP to a public IP. In this case, you do not need to specify a source IP.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
4 To add a default zone, select Add DNS Zone > Add Default Zone
5 To add an FQDN zone, select Add DNS Zone > Add FQDN Zone
6 Click Save.
Before you configure a DNS forwarder, you must configure a default DNS zone. Optionally, you
can configure one or more FQDN DNS zones. Each DNS zone is associated with up to 3 DNS
servers. When you configure an FQDN DNS zone, you specify one or more domain names. A DNS
forwarder is associated with a default DNS zone and up to 5 FQDN DNS zones. When a DNS
query is received, the DNS forwarder compares the domain name in the query with the domain
names in the FQDN DNS zones. If a match is found, the query is forwarded to the DNS servers
specified in the FQDN DNS zone. If a match is not found, the query is forwarded to the DNS
servers specified in the default DNS zone.
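A zone with a dedicated source IP and matching domain names can also be sketched through the Policy API. The endpoint /policy/api/v1/infra/dns-forwarder-zones/<id> and the field names (dns_domain_names, upstream_servers, source_ip) are assumptions based on the PolicyDnsForwarderZone resource as the author understands it; the zone name, domain, and server addresses are examples only.
# Sketch: create an FQDN DNS zone with a dedicated source IP (assumed schema).
curl -k -u admin:'<password>' -X PUT \
  -H "Content-Type: application/json" \
  "https://<nsx-manager-ip-address>/policy/api/v1/infra/dns-forwarder-zones/corp-zone" \
  -d '{
    "display_name": "corp-zone",
    "dns_domain_names": ["corp.example.com"],
    "upstream_servers": ["10.20.30.40", "10.20.30.41"],
    "source_ip": "<source-ip-address>"
  }'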
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
Clients send DNS queries to this IP address, which is also known as the DNS forwarder's
listener IP.
10 Click the Admin Status toggle to enable or disable the DNS service.
11 Click Save.
A DHCP profile can be used simultaneously by multiple segments and gateways in your network.
The following conditions apply when you attach a DHCP profile to a segment or a gateway:
n On a tier-0 or tier-1 gateway or a gateway-connected segment, you can attach either a DHCP
server profile or a DHCP relay profile.
n On a standalone segment that is not connected to a gateway, you can attach only a DHCP
server profile. A standalone segment supports only a local DHCP server.
Prerequisites
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
Note A maximum of two DHCP server IP addresses are supported. You can enter one IPv4
address and one IPv6 address. For an IPv4 address, the prefix length must be <= 30, and for
an IPv6 address, the prefix length must be <= 126. The DHCP server IP address must not
overlap with the addresses used in DHCP ranges and DHCP static binding.
The DHCP server IP address also cannot be any of the following:
n Multicast IP address
n Broadcast IP address
n Loopback IP address
7 (Optional) Edit the lease time in seconds. The default value is 86400.
n If you are using a local DHCP server on a segment, you must select an edge cluster in the
DHCP server profile. If an edge cluster is unavailable in the profile, an error message is
displayed when you save the segment.
n If you are using a Gateway DHCP server on the segment, select an edge cluster either in
the gateway, or DHCP server profile, or both. If an edge cluster is unavailable in either the
profile or the gateway, an error message is displayed when you save the segment.
Caution You can change the edge cluster in the profile after the DHCP server is created.
However, this action causes all the existing DHCP leases that are assigned to the DHCP
clients to be lost.
When a DHCP server profile is attached to a segment that uses a DHCP local server, the
DHCP service is created in the edge cluster that you specified in the DHCP profile. However, if
the segment uses a Gateway DHCP server, the edge cluster in which the DHCP service is
created depends on a combination of several factors. For detailed information about how
an edge cluster is selected for DHCP service, see Scenarios: Selection of Edge Cluster for
DHCP Service.
9 (Optional) Next to Edges, click Set and select the preferred edge nodes where you want the
DHCP service to run.
To select the preferred edge nodes, an edge cluster must be selected. You can select a
maximum of two preferred edge nodes. The following table explains the scenarios when
DHCP HA is configured.
Scenario: No preferred edge node is selected from the edge cluster.
DHCP HA: DHCP HA is configured. A pair of active and standby edge nodes are selected automatically from the available nodes in the edge cluster.
Scenario: Only one preferred edge node is selected from the edge cluster.
DHCP HA: DHCP server runs without the HA support.
Scenario: Two preferred edge nodes are selected from the edge cluster.
DHCP HA: DHCP HA is configured. The first edge node that you add becomes the active edge, and the second edge node becomes the standby edge. The active edge is denoted with a sequence number 1, and the standby edge is denoted with a sequence number 2. You can interchange the active and standby edges. For example, to change the current active edge to standby, select the active edge and click the Down arrow. Alternatively, you can select the passive edge and click the Up arrow to make it active. The sequence numbers are reversed in both situations.
After the DHCP server is created, you can change the preferred edge nodes in the DHCP
server profile. However, this flexibility includes certain caveats.
For example, let us assume that the edge cluster in the DHCP profile has four edge nodes N1,
N2, N3, and N4, and you have set N1 and N2 as the preferred edge nodes. N1 is the active
edge and N2 is the standby edge. The DHCP service is running on the active edge node N1,
and the DHCP server has started assigning leases to the DHCP clients on the segment.
Action: Delete existing preferred edge nodes N1 and N2, and add N3 and N4 as the new preferred edge nodes.
Result: A warning message informs you that the current DHCP leases will be lost due to the replacement of existing preferred edges. This action can cause a loss of network connectivity. You can prevent loss of connectivity by replacing one edge node at a time.
Action: Delete existing preferred edges N1 and N2, and keep the preferred edge nodes list empty.
Result: The DHCP servers remain on the edge nodes N1 and N2. The DHCP leases are retained and the DHCP clients do not lose network connectivity.
Action: Delete any one of the preferred edges, either N1 or N2.
Result: When any one of the preferred edges N1 or N2 is deleted, the other edge continues to provide IP addresses to the DHCP clients. The DHCP leases are retained and the DHCP clients do not experience a loss of network connectivity. However, DHCP HA support is lost. To retain DHCP HA, you must replace the deleted edge with another edge node, either N3 or N4, in the edge cluster.
10 (Optional) In the Tag drop-down menu, enter a tag name. When you are done, click Add
Item(s).
If tags exist in the inventory, the Tag drop-down menu displays a list of all the available tags
and their scope. The list of available tags includes user-defined tags, system-defined tags,
and discovered tags. You can select an existing tag from the drop-down menu and add it to
the DHCP profile.
11 Click Save.
What to do next
Attach the DHCP server profile either to a segment or a gateway, and configure the DHCP server
settings at the level of each segment.
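As an alternative to the UI workflow, a DHCP server profile can also be created with the policy API. The following is a minimal sketch, assuming a hypothetical profile ID, server address, and edge cluster path; adjust the values for your environment:

PATCH https://<nsx-manager>/policy/api/v1/infra/dhcp-server-configs/<dhcp-profile-id>
{
    "display_name": "dhcp-server-profile",
    "server_addresses": ["10.10.10.2/24"],
    "lease_time": 86400,
    "edge_cluster_path": "/infra/sites/default/enforcement-points/default/edge-clusters/<edge-cluster-id>"
}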
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
Both DHCPv4 and DHCPv6 servers are supported. You can enter multiple IP addresses. The
server IP addresses of the remote DHCP servers must not overlap with the addresses that
are used in DHCP ranges and DHCP static bindings, and must not be any of the following:
n Multicast IP address
n Broadcast IP address
n Loopback IP address
7 (Optional) In the Tag drop-down menu, enter a tag name. When you are done, click Add
Item(s).
If tags exist in the inventory, the Tag drop-down menu displays a list of all the available tags
and their scope. The list of available tags includes user-defined tags, system-defined tags,
and discovered tags. You can select an existing tag from the drop-down menu and add it to
the DHCP profile.
8 Click Save.
What to do next
Attach the DHCP relay profile either to a gateway, or use the profile to configure a local DHCP
relay on the segment.
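The relay profile can also be created with the policy API. The following is a minimal sketch with hypothetical values; the server_addresses list contains the remote DHCP servers to which relayed requests are forwarded:

PATCH https://<nsx-manager>/policy/api/v1/infra/dhcp-relay-configs/<dhcp-relay-profile-id>
{
    "display_name": "dhcp-relay-profile",
    "server_addresses": ["172.16.50.10", "2001:db8::10"]
}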
You can attach a DHCP profile to a gateway only when the segments connected to that gateway
do not have a local DHCP server or DHCP relay configured on them. If a local DHCP server or
DHCP relay exists on the segment, the UI displays an error when you try to attach a DHCP profile
to the gateway. You must disconnect the segments from the gateway, and then attach a DHCP
profile to the gateway.
Prerequisites
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
4 Do one of the following, depending on the version of NSX-T Data Center that you are using:
n In version 3.0 and 3.0.1, next to IP Address Management, click No Dynamic IP Allocation.
Note If you select the profile type as DHCP Relay, the configuration does not take effect. You
must assign the DHCP relay profile to the segments that are connected to the gateway.
Attaching a DHCP relay profile to the gateway is a redundant configuration. This functional
behavior is a known issue. For information about assigning a DHCP relay profile to the
segment, click the Configure DHCP on a Segment link in the What to do next section of
this topic.
7 Click Save.
What to do next
Navigate to Networking > Segments. On each segment that is connected to this gateway,
configure the DHCP settings, static bindings, and other DHCP options.
After a Gateway DHCP server is in use, you can view the DHCP server statistics on the gateway.
On the gateway, next to DHCP or IP Address Management, click the Servers link. On the Set
DHCP Configuration page, click Statistics.
Note If you have configured a local DHCP server on a gateway-connected segment, the
Statistics link on the Set DHCP Configuration page does not display the local DHCP server
statistics. Only Gateway DHCP statistics are shown on this page.
Standalone segments that are not connected to a gateway can use only a DHCP local server.
Segments that are connected to a gateway can use either a DHCP local server, DHCP relay, or
Gateway DHCP server.
Regardless of whether a segment uses a DHCP local server, DHCP relay, or a Gateway DHCP
server, DHCP always runs as a service router in the edge transport nodes of an edge cluster. If
the segment uses a DHCP local server, the DHCP service is created in the edge cluster that you
specified in the DHCP profile. However, if the segment uses a Gateway DHCP server, the edge
cluster in which the DHCP service is created depends on the combination of the following factors:
n Is the edge cluster in the gateway and the edge cluster in the DHCP profile the same or different?
The following scenarios explain how the edge cluster is selected for creating the DHCP service.
n An edge cluster (Cluster1) is created with four edge nodes: N1, N2, N3, N4.
In this scenario, any two edge nodes from Cluster1 are autoallocated to create the DHCP service,
and DHCP high availability (HA) is automatically configured. One of the edge nodes in Cluster1
runs in active mode and the other edge runs in passive mode.
Note
n If you select two preferred edge nodes in the DHCP profile, the edge node that is added first
becomes the active edge. The second edge node takes the passive role.
n If you select only one preferred edge node in the DHCP profile, DHCP HA is not configured.
Scenario Description:
The DHCP server profile in the tier-1 gateway has the following configuration:
In this scenario, DHCP service runs on the edge nodes of Cluster2. As Cluster2 contains multiple
edge nodes, DHCP HA is autoconfigured. However, the preferred edges N5 and N6 on the
gateway are ignored for DHCP HA. Any two nodes from Cluster2 are randomly autoallocated for
DHCP HA.
This scenario also applies when the segment is directly connected to a tier-0 gateway, and there
is no tier-1 gateway in your network topology.
Caution Starting in NSX-T Data Center 3.0.2, you can change the edge cluster on the Gateway
DHCP server after the DHCP server is created. However, this action causes all the existing DHCP
leases that are assigned to the DHCP clients to be lost.
n When you use a Gateway DHCP server and set different edge clusters in the gateway DHCP
profile and tier-1 gateway, then DHCP service is always created in the edge cluster of the
gateway.
n The edge nodes are randomly allocated from the edge cluster of the tier-1 gateway for DHCP
HA configuration.
n If no edge cluster is specified in the tier-1 gateway, the edge cluster in the DHCP profile of the
tier-1 gateway (Cluster1) is used to create the DHCP service.
Scenario Description:
In this scenario, because the segment is configured to use a local DHCP server, the edge cluster
(Cluster2) in the connected tier-1 gateway is ignored to create the DHCP service. DHCP service
runs in the edge nodes of Cluster3 (N5, N6). DHCP HA is also configured. N5 becomes the active
edge node and N6 becomes the standby edge.
If no preferred nodes are set in Cluster3, any two nodes from this cluster are autoallocated for
creating the DHCP service and configuring DHCP HA. One of the edge nodes becomes an active
edge and the other node becomes the standby edge. If only one preferred edge node is set in
Cluster3, DHCP HA is not configured.
This scenario also applies when the segment is directly connected to a tier-0 gateway, and there
is no tier-1 gateway in your network topology.
Scenario Description:
n Gateway and DHCP profile on the gateway use the same edge cluster (Cluster1).
In this scenario, because the gateway DHCP profile and the gateway use the same edge cluster (Cluster1),
the DHCP service is created in the preferred edge nodes N1 and N2 of the gateway DHCP profile.
The preferred edge nodes N3 and N4 that you specified in the connected tier-1 gateway are
ignored for creating the DHCP service.
If no preferred edges are set in the DHCP profile, any two nodes from Cluster1 are autoallocated
for creating the DHCP service and configuring DHCP HA. One of the edge nodes becomes an
active edge and the other edge becomes the standby edge.
n When you use a Gateway DHCP server and specify the same edge cluster in the DHCP profile
and the connected gateway, the DHCP service is created in the preferred edge nodes of the
DHCP profile.
n The preferred edge nodes specified in the connected gateway are ignored.
Scenario Description:
In this scenario, because the tier-1 gateway has no edge cluster specified, NSX-T Data Center
falls back on the edge cluster of the connected tier-0 gateway. DHCP service is created in the
edge cluster of tier-0 gateway (Cluster3). Any two edge nodes from this edge cluster are
autoallocated for creating the DHCP service and configuring DHCP HA.
n When a tier-1 gateway has no edge cluster specified, NSX-T Data Center falls back on the
edge cluster of the connected tier-0 gateway to create the DHCP service.
n If no edge cluster is detected in the tier-0 gateway, DHCP service is created in the edge
cluster of the tier-1 gateway DHCP profile.
Segment connectivity changes are allowed only when the segments and gateways belong to the
same transport zone.
The following scenarios explain the segment connectivity changes that are allowed or disallowed,
and whether DHCP is impacted in each of these scenarios.
Later, you decide to change the connectivity of this segment to another tier-0 or tier-1 gateway,
which is in the same transport zone.
n Starting in NSX-T Data Center 3.0.2, this change is allowed. However, when you save the
segment, an information message alerts you that changing the gateway connectivity impacts
the existing DHCP leases, which are assigned to the workloads.
n In NSX-T Data Center 3.0 and 3.0.1, you cannot change the connectivity of the segment from
one gateway to another gateway when the segment uses a Gateway DHCP server. Use the
following steps as a workaround:
1 Temporarily disconnect the existing segment from the gateway, or delete the segment.
Temporary disconnection of the segment is supported only with the API. Follow these steps:
a Retrieve the current configuration of the segment:
GET https://{NSXManager_IP}/policy/api/v1/infra/segments/{segment-id}
Replace segment-id with the actual ID of the segment that you want to disconnect from
the gateway.
b Observe that the advanced_config section in the API output shows connectivity:"ON".
c Copy the GET API output to a text file and edit connectivity to OFF. Paste the complete,
edited API output in the request body of the following PATCH API:
PATCH https://{NSXManager_IP}/policy/api/v1/infra/segments/{segment-id}
Later, you decide to change the connectivity of this segment to another tier-0 or tier-1 gateway,
which is in the same transport zone. This change is allowed. As the DHCP server is local to the
segment, the DHCP configuration settings, including ranges, static bindings, and DHCP options
are retained on the segment. The DHCP leases of the workloads are retained and there is no loss
of network connectivity.
After the segment is moved to a new gateway, you can continue to update the DHCP
configuration settings, and other segment properties.
n If you are using NSX-T Data Center 3.0 or 3.0.1, you cannot change the DHCP type and DHCP
profile of a routed segment after moving the segment to a different gateway. For example,
you cannot change the DHCP type from a local DHCP server or a DHCP relay to a Gateway
DHCP server. In addition, you cannot select a different DHCP server profile or relay profile in
the segment. But, you can edit the properties of the DHCP profile, if needed.
n Starting in version 3.0.2, you can change the DHCP type and DHCP profile of a routed
segment after moving the segment to a different gateway.
Later, you decide to connect this segment either to a tier-0 or tier-1 gateway, which is in the
same transport zone. This change is allowed. As a local DHCP server existed on the segment, the
DHCP configuration settings, including ranges, static bindings, and DHCP options are retained on
the segment. The DHCP leases of the workloads are retained and there is no loss of network
connectivity.
After the segment is connected to the gateway, you can continue to update the DHCP
configuration settings, and other segment properties. However, you cannot select a different
DHCP type and the DHCP profile in the segment. For example, you cannot change the DHCP
type from a local DHCP server to a Gateway DHCP server or a DHCP relay. In addition, you
cannot change the DHCP server profile in the segment. But, you can edit the properties of the
DHCP profile, if needed.
Later, you decide to connect this segment either to a tier-0 or tier-1 gateway, which is in the
same transport zone. This change is allowed. As no DHCP configuration existed on the segment,
the segment automatically uses the Gateway DHCP server after it is connected to the gateway.
The DHCP profile attached to this gateway gets autoselected in the segment.
Now, you can specify the DHCP configuration settings, including ranges, static bindings, and
DHCP options on the segment. You can also edit the other segment properties, if necessary.
However, you cannot change the DHCP type from a Gateway DHCP server to a local DHCP
server or a DHCP relay.
Remember, you can configure only a Gateway DHCPv4 server on the segment. In NSX-T Data
Center 3.0, Gateway DHCPv6 server is not supported.
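The segment-level DHCP settings can also be expressed through the policy API. The following is a minimal sketch, assuming the segment DHCP fields introduced with the NSX-T Data Center 3.0 DHCP framework; the segment ID, gateway path, and addresses are placeholders:

PATCH https://<nsx-manager>/policy/api/v1/infra/segments/<segment-id>
{
    "display_name": "app-segment",
    "connectivity_path": "/infra/tier-1s/<tier-1-gateway-id>",
    "subnets": [
        {
            "gateway_address": "172.16.10.1/24",
            "dhcp_ranges": ["172.16.10.100-172.16.10.200"],
            "dhcp_config": {
                "resource_type": "SegmentDhcpV4Config",
                "server_address": "172.16.10.2/24",
                "lease_time": 86400
            }
        }
    ]
}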
Later, you decide to change the connectivity of this segment to None. This change is not
allowed.
1 Temporarily disconnect the existing segment from the gateway or delete the segment.
For information about temporarily disconnecting a segment from the gateway, see Scenario
1.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
a Select an IP block.
b Specify a size.
c Click the Auto Assign Gateway toggle to enable or disable automatic gateway IP
assignment.
d Click Add.
8 Click Save.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
7 Click Save.
n Configuring Multicast
Configuring Multicast
You can configure multicast on a tier-0 gateway for an IPv4 network to send the same multicast
data to a group of recipients. In a multicast environment, any host, regardless of whether it is a
member of a group, can send to a group. However, only the members of a group will receive
packets sent to that group.
When a Static RP is configured, it serves as the RP for all multicast groups (224/4). If
candidate RPs learned from BSMs advertise candidacy for the same group range, the Static
RP is preferred. However, if candidate RPs advertise candidacy for a specific group or range
of groups, they are preferred as the RP for those groups.
n The Reverse Path Forwarding (RPF) check for all multicast-specific IPs (senders of data traffic,
BSRs, RPs) requires that a route to each of them exists. In NSX-T Data Center 3.0.0,
reachability via the default route is not supported. Starting with NSX-T Data Center 3.0.1,
reachability via the default route is also supported.
n The RPF check requires a route to each multicast-specific IP with an IP address as the next
hop. Reachability via device routes, where the next hop is an interface index, is not
supported.
n The NSX Edge cluster can be in active-active or active-standby mode. When the cluster is in
active-active mode, two of the cluster members will run multicast in active-cold standby
mode. You can run the CLI command get mcast high-availability role on each Edge to
identify the two nodes participating in multicast. Also note that since unicast reachability to
NSX-T in an active-active cluster is via ECMP, it is imperative that the northbound PIM router
selects the ECMP path that matches a PIM neighbor to send PIM Join/Prune messages to
NSX-T. In this way it will select the active Edge which is running PIM.
n Acquire a multicast address range from your network administrator. This will be used to
configure the Multicast Replication Range when you configure multicast on a tier-0 gateway
(see Configure Multicast).
n Enable IGMP snooping on the layer 2 switches to which GENEVE participating hosts are
attached. If IGMP snooping is enabled on layer 2, IGMP querier must be enabled on the router
or layer 3 switch with connectivity to multicast enabled networks.
2 Optionally create a PIM profile to configure a Static Rendezvous Point (RP). See Create a PIM
Profile.
3 Configure a tier-0 gateway to support multicast. See Add a Tier-0 Gateway and Configure
Multicast
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
Query Interval (seconds): Interval between general query messages. A larger value causes IGMP queries to be sent less often. Default: 30. Range: 1 - 1800.
Query Max Response Time (seconds): Maximum allowed time before sending a response to a membership query message. Default: 10. Range: 1 - 25.
Last Member Query Interval (seconds): Maximum amount of time between group-specific query messages, including those sent in response to leave-group messages. Default: 10. Range: 1 - 25.
Robustness Variable: Number of IGMP query messages sent. This helps alleviate the risk of loss of packets in a busy network. A larger number is recommended in a network with high traffic. Default: 2. Range: 1 - 255.
Results
The Last Member Query Interval and Robustness Variable parameters affect the time it takes
to process leave group messages. If Last Member Query Interval is set to 10 and Robustness
Variable is set to 2, the approximate time it takes to process a leave group message is the
product of the two values, which is about 20 seconds.
This step is optional. It is needed only if you want to configure a Static Rendezvous Point (RP). A
Rendezvous Point is a router in a multicast network domain that acts as a shared root for a
multicast shared tree. If a Static RP is configured, it is preferred over the RPs that are learned
from the elected Bootstrap Router (BSR).
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
8 Click Save.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
5 Click Save.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
6 Enter the committed bandwidth limit that you want to set for the traffic.
7 Enter the burst size. Use the following guidelines for burst size.
n I is the time interval in milliseconds to refill or withdraw tokens (in bytes) from the token
bucket. Use the get dataplane command from the NSX Edge CLI to retrieve the time
interval, Qos_wakeup_interval_ms. The default value for Qos_wakeup_interval_ms is
50 ms. However, this value is automatically adjusted by the dataplane based on the QoS
configuration.
n B >= R * 1,000,000 * I / 1000 / 8 because the burst size is the maximum amount of tokens
that can be refilled in each interval.
n B >= R * 1,000,000 * 1 / 1000 / 8 because the minimum value for I is 1 ms, taking into
account dataplane CPU usage among other constraints.
n B >= MTU of the SR port because at least an MTU-size amount of tokens must be present
in the token bucket for an MTU-size packet to pass the rate-limiting check.
Because the burst size must satisfy all three constraints, with R = 100, I = 50, and an MTU of 1500,
the configured value of the burst size would be:
B >= max (100 * 1,000,000 * 50 / 1000 / 8, 100 * 1,000,000 * 1 / 1000 / 8, 1500) = max (625,000, 12,500, 1,500) = 625,000 bytes
8 Click Save.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
6 Enter values for the heartbeat Interval and Declare Dead Multiple.
7 Click Save.
n Security Overview
n Security Terminology
n Identity Firewall
n Distributed Firewall
n Distributed IDS
n Gateway Firewall
n Endpoint Protection
n Security Profiles
Distributed Firewall (east-west) and Gateway Firewall (north-south) offer multiple sets of
configurable rules divided by categories. You can configure an exclusion list that contains logical
switches, logical ports, or groups, to be excluded from firewall enforcement.
n Each packet is checked against the top rule in the rule table before moving down the
subsequent rules in the table.
n The first rule in the table that matches the traffic parameters is enforced.
No subsequent rules can be enforced, as the search is then terminated for that packet. Because
of this behavior, it is always recommended to put the most granular policies at the top of the rule
table. This ensures they will be enforced before more general rules.
Whether an east-west or north-south firewall fails close or fails open upon failure depends on the
last rule in the firewall. To ensure that a firewall fails close upon failure, configure the last rule to
reject or drop all packets.
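For example, a catch-all drop rule can be placed as the last rule of the lowest-priority policy. The following is a minimal policy API sketch with hypothetical policy and rule names; in the UI, the same effect is achieved by adding a rule with source, destination, and services set to Any and the action set to Drop or Reject, ordered last:

PATCH https://<nsx-manager>/policy/api/v1/infra/domains/default/security-policies/<default-deny-policy-id>
{
    "display_name": "default-deny-policy",
    "category": "Application",
    "rules": [
        {
            "display_name": "default-deny-all",
            "source_groups": ["ANY"],
            "destination_groups": ["ANY"],
            "services": ["ANY"],
            "scope": ["ANY"],
            "action": "DROP",
            "direction": "IN_OUT",
            "sequence_number": 1000000
        }
    ]
}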
Security Overview
The security overview dashboard has three tabs: Insights, Configuration, and Capacity.
n URL Analysis:
n The number of gateways connected to the cloud service, and if connectivity to the cloud
service is up.
n The latest signature pack available on the cloud service, and what gateways are up to
date.
n Information for the top five URL categories, and the URLs accessed in each category.
Entry: Top VMs by Intrusion Events or by Vulnerability Severity. Description: Click the arrow to select the shown data.
n A summary of the configuration of endpoint protection for virtual machines. You can view
components having issues, and virtual machine distribution by service profile.
The Configuration tab has clickable links with the number of:
n Distributed FW Policies
n Gateway Policies
n Endpoint Policies
You can also view details of your distributed firewall policies, along with the count per category.
Security Terminology
The following terms are used throughout distributed firewall.
Policy: A security policy includes various security elements, including firewall rules and service configurations. A policy was previously called a firewall section.
Rule: A set of parameters against which flows are evaluated, and which defines the actions taken upon a match. Rules include parameters such as source and destination, service, context profile, logging, and tags.
Group: Groups include different objects that are added both statically and dynamically, and can be used as the source and destination field of a firewall rule. Groups can be configured to contain a combination of virtual machines, IP sets, MAC sets, logical ports, logical switches, AD user groups, and other nested groups. Dynamic inclusion of groups can be based on tag, machine name, OS name, or computer name. When you create a group, you must include the domain that it belongs to; by default, this is the default domain. Groups were previously called NSGroup or security group.
Service: Defines a combination of port and protocol. Used to classify traffic based on port and protocol. Pre-defined services and user-defined services can be used in firewall rules.
Context Profile: Defines context-aware attributes, including APP-ID and domain name. Also includes sub-attributes such as application version or cipher set. Firewall rules can include a context profile to enable Layer 7 firewall rules.
Identity Firewall
With Identity Firewall (IDFW) features, an NSX administrator can create Active Directory user-
based Distributed Firewall (DFW) rules.
IDFW can be used for Virtual Desktops (VDI) or Remote Desktop sessions (RDSH support),
enabling simultaneous logins by multiple users, user application access based on requirements,
and the ability to maintain independent user environments. VDI management systems control
what users are granted access to the VDI virtual machines. NSX-T controls access to the
destination servers from the source virtual machine (VM), which has IDFW enabled. With RDSH,
administrators create security groups with different users in Active Directory (AD), and allow or
deny those users access to an application server based on their role. For example, Human
Resources and Engineering can connect to the same RDSH server, and have access to different
applications from that server.
IDFW can also be used on VMs that have supported operating systems. See Identity Firewall
Supported Configurations.
A high level overview of the IDFW configuration workflow begins with preparing the
infrastructure. Preparation includes the administrator installing the host preparation components
on each protected cluster, and setting up Active Directory synchronization so that NSX can
consume AD users and groups. Next, IDFW must know which desktop an Active Directory user
logs on to in order to apply IDFW rules. When network events are generated by a user, the thin agent
installed with VMware Tools on the VM gathers the information and sends it to the
context engine. This information is used to provide enforcement for the distributed firewall.
IDFW processes the user identity at the source only in distributed firewall rules. Identity-based
groups cannot be used as the destination in DFW rules.
Note IDFW relies on the security and integrity of the guest operating system. There are multiple
methods for a malicious local administrator to spoof their identity to bypass firewall rules. User
identity information is provided by the NSX Guest Introspection Thin Agent inside guest VMs.
Security administrators must ensure that the thin agent is installed and running in each guest VM.
Logged-in users should not have the privilege to remove or stop the agent.
IDFW workflow:
2 A user login event is detected by the Thin Agent, which gathers connection information and
identity information and sends it to the context engine.
3 The context engine forwards the connection and the identity information to the Distributed
Firewall for any applicable rule enforcement.
Identity-based firewall rules are determined by Active Directory (AD) group
membership. See Identity Firewall Supported Configurations.
IDFW processes the user identity at the source only in distributed firewall rules. Identity-based
groups cannot be used as the destination in DFW rules.
Note For Identity Firewall rule enforcement, the Windows Time service should be on for all VMs
using Active Directory. This ensures that the date and time are synchronized between Active
Directory and the VMs. AD group membership changes, including enabling and deleting users, do not
immediately take effect for logged in users. For changes to take effect, users must log out and
then log back in. AD administrators should force a logout when group membership is modified.
This behavior is a limitation of Active Directory.
Prerequisites
Procedure
1 Enable NSX File Introspection driver and NSX Network Introspection driver. VMware Tools full
installation adds these by default.
5 Create security groups (SG) with Active Directory group members: Add a Group.
6 Assign SG with AD group members to a distributed firewall rule: Add a Distributed Firewall.
Procedure
4 To enable IDFW on standalone hosts or clusters, select the Identity Firewall Settings tab.
5 Toggle the Enable bar, and select the standalone hosts or the cluster where IDFW
must be enabled.
6 Click Save.
n Single user (VDI, or Non-RDSH Server) use case support - TCP, UDP, ICMP
n A single ID-based group can be used as the source only within a distributed firewall rule. If IP
and ID-based groups are needed at the source, create two separate firewall rules.
n Any change on a domain, including a domain name change, will trigger a full sync with Active
Directory. Because a full sync can take a long time, we recommend syncing during off-peak
or non-business hours.
n For local domain controllers, the default LDAP port 389 and LDAPS port 636 are used for the
Active Directory sync, and should not be edited from the default values.
n Single user (VDI, or Non-RDSH Server) use case support - TCP, UDP, ICMP
n VMCI Driver
A context profile can specify one or more Attributes, and can also include sub-attributes, for use
in distributed firewall (DFW) rules and gateway firewall rules. When a sub-attribute, such as TLS
version 1.2, is defined, multiple application identity attributes are not supported. In addition to
attributes, DFW also supports a Fully Qualified Domain Name (FQDN) or URL that can be
specified in a context profile for FQDN whitelisting or blacklisting. Currently a predefined list of
domains is supported. FQDN can be configured with an attribute in a context profile, or each can
be set in different context profiles. After a context profile has been defined, it can be applied to
one or more distributed firewall rules.
Currently, a predefined list of domains is supported. You can see the list of FQDNs when you add
a new context profile of attribute type Domain (FQDN) Name. You can also see a list of FQDNs
by running the API call /policy/api/v1/infra/context-profiles/attributes?attribute_key=DOMAIN_NAME.
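For example, the complete call looks like the following; the output lists the predefined FQDN attribute values that can be used in a context profile:

GET https://<nsx-manager>/policy/api/v1/infra/context-profiles/attributes?attribute_key=DOMAIN_NAME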
Gateway firewall rules do not support the use of FQDN attributes or other sub attributes.
When a context profile has been used in a rule, any traffic coming in from a virtual machine is
matched against the rule table based on 5-tuple. If the rule that matches the flow also includes a
Layer 7 context profile, the packet is redirected to a user-space component called the vDPI
engine. A few subsequent packets are punted to the vDPI engine for each flow, and after it has
determined the App Id, this information is stored in the in-kernel context table. When the next
packet for the flow comes in, the information in the context table is compared with the rule table
again and is matched on 5-tuple and on the Layer 7 App Id. The appropriate action as defined in
the fully matched rule is taken, and if there is an ALLOW rule, all subsequent packets for the flow
are processed in the kernel and matched against the connection table. For a fully matched DROP rule,
a reject packet is generated. Logs generated by the firewall include the Layer 7 App Id and
applicable URL, if that flow was punted to DPI.
1 Upon entering a DFW or Gateway filter, packets are looked up in the flow table based on 5-
tuple.
2 If no flow/state is found, the flow is matched against the rule-table based on 5-tuple and an
entry is created in the flow table.
3 If the flow matches a rule with a Layer 7 service object, the flow table state is marked as “DPI
In Progress.”
4 The traffic is then punted to the DPI engine. The DPI Engine determines the App Id.
5 After the App Id has been determined, the DPI Engine sends down the attribute which is
inserted into the context table for this flow. The "DPI In Progress" flag is removed, and traffic
is no longer punted to the DPI engine.
6 The flow (now with App Id) is reevaluated against all rules that match the App Id, starting with
the original rule that was matched based on 5-tuple, and the first fully matched L4/L7 rule is
picked up. The appropriate action is taken (allow/deny/reject) and the flow table entry is
updated accordingly.
NSX-T provides built-in Attributes for common infrastructure and enterprise applications. App Ids
include versions (SSL/TLS and CIFS/SMB) and Cipher Suite (SSL/TLS). For distributed firewall,
App Ids are used in rules through context profiles, and can be combined with FQDN whitelisting
and blacklisting. App Ids are supported on ESXi and KVM hosts.
Gateway firewall rules do not support the use of FQDN attributes or other sub attributes.
n For FQDN, users need to configure a high priority rule with a DNS App Id for the specified
DNS servers on port 53.
n ALG App Ids (FTP, ORACLE, DCERPC, TFTP), require the corresponding ALG service for the
firewall rule.
Note that if you are using a combination of Layer 7 and ICMP, or any other protocols, you need to
put the Layer 7 firewall rules last. Any rules below a Layer 7 any/any rule will not be executed.
Procedure
2 Use the context profile in a distributed firewall rule, or a gateway firewall rule: Add a
Distributed Firewall or Add a Gateway Firewall Policy and Rule.
Multiple App Id context profiles can be used in a firewall rule with services set to Any. For
ALG profiles (FTP, ORACLE, DCERPC, TFTP), one context profile is supported per rule.
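A context profile with an App Id attribute can also be created through the policy API. The following is a minimal sketch with a hypothetical profile ID, using the SSL App Id as an example value:

PATCH https://<nsx-manager>/policy/api/v1/infra/context-profiles/<context-profile-id>
{
    "display_name": "ssl-app-profile",
    "attributes": [
        {
            "key": "APP_ID",
            "value": ["SSL"],
            "datatype": "STRING"
        }
    ]
}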
Attributes
Layer 7 attributes (App Ids) identify which application a particular packet or flow is generated by,
independent of the port that is being used.
Enforcement based on App Ids enables users to allow or deny applications to run on any port, or
to force applications to run on their standard port. vDPI enables matching packet payloads against
defined patterns, commonly referred to as signatures. Signature-based identification and
enforcement enables customers not just to match the particular application or protocol a flow
belongs to, but also the version of that protocol, for example, TLS version 1.0 versus TLS version
1.2, or different versions of CIFS traffic. This allows customers to gain visibility into, or restrict the
use of, protocols that have known vulnerabilities for all deployed applications and their E-W flows
within the datacenter.
Layer 7 App Ids are used in context profiles in distributed firewall and gateway firewall rules, and
are supported on ESXi and KVM hosts.
Gateway firewall rules do not support the use of FQDN attributes or other sub attributes.
n For FQDN, users need to configure a high priority rule with a DNS App Id for the specified
DNS servers on port 53.
n ALG App Ids (FTP, ORACLE, DCERPC, TFTP), require the corresponding ALG service for the
firewall rule.
BLAST (Remote Access): A remote access protocol that compresses, encrypts, and encodes a computing experience at a data center and transmits it across any standard IP network for VMware Horizon desktops.
CIFS (File Transfer): CIFS (Common Internet File System) is used to provide shared access to directories, files, printers, serial ports, and miscellaneous communications between nodes on a network.
EPIC (Client Server): Epic EMR is an electronic medical records application that provides patient care and healthcare information.
FTP (File Transfer): FTP (File Transfer Protocol) is used to transfer files from a file server to a local machine.
HTTP (Web Services): HTTP (HyperText Transfer Protocol) is the principal transport protocol for the World Wide Web.
HTTP2 (Web Services): Traffic generated by browsing websites that support the HTTP 2.0 protocol.
MAXDB (Database): SQL connections and queries made to a MaxDB SQL server.
NFS (File Transfer): Allows a user on a client computer to access files over a network in a manner similar to how local storage is accessed.
NNTP (File Transfer): An Internet application protocol used for transporting Usenet news articles (netnews) between news servers, and for reading and posting articles by end user client applications.
NTP (Networking): NTP (Network Time Protocol) is used for synchronizing the clocks of computer systems over the network.
OCSP (Networking): An OCSP Responder verifies that a user's private key has not been compromised or revoked.
PCOIP (Remote Access): A remote access protocol that compresses, encrypts, and encodes a computing experience at a data center and transmits it across any standard IP network.
POP2 (Mail): POP (Post Office Protocol) is a protocol used by local e-mail clients to retrieve e-mail from a remote server.
RDP (Remote Access): RDP (Remote Desktop Protocol) provides users with a graphical interface to another computer.
RTCP (Streaming Media): RTCP (Real-Time Transport Control Protocol) is a sister protocol of the Real-time Transport Protocol (RTP). RTCP provides out-of-band control information for an RTP flow.
RTP (Streaming Media): RTP (Real-Time Transport Protocol) is primarily used to deliver real-time audio and video.
RTSP (Streaming Media): RTSP (Real Time Streaming Protocol) is used for establishing and controlling media sessions between end points.
SIP (Streaming Media): SIP (Session Initiation Protocol) is a common control protocol for setting up and controlling voice and video calls.
SMTP (Mail): SMTP (Simple Mail Transfer Protocol) is an Internet standard for electronic mail (e-mail) transmission across Internet Protocol (IP) networks.
SSH (Remote Access): SSH (Secure Shell) is a network protocol that allows data to be exchanged using a secure channel between two networked devices.
SSL (Web Services): SSL (Secure Sockets Layer) is a cryptographic protocol that provides security over the Internet.
SYMUPDAT (File Transfer): Symantec LiveUpdate traffic; this includes spyware definitions, firewall rules, antivirus signature files, and software updates.
SYSLOG (Network Monitoring): SYSLOG is a protocol that allows network devices to send event messages to a logging server.
TELNET (Remote Access): A network protocol used on the Internet or local area networks to provide a bidirectional interactive text-oriented communications facility using a virtual terminal connection.
TFTP (File Transfer): TFTP (Trivial File Transfer Protocol) is used to list, download, and upload files to a TFTP server like SolarWinds TFTP Server, using a client like WinAgents TFTP client.
Distributed Firewall
Distributed firewall comes with predefined categories for firewall rules. Rules are evaluated top
down, and left to right.
Firewall Drafts
A draft is a complete distributed firewall configuration with policy sections and rules. Drafts can
be auto saved or manually saved, and immediately published or saved for publishing at a later time.
To save a manual draft firewall configuration, go to the upper right of the distributed firewall
screen and click Actions > Save. After saving, the configuration can be viewed by selecting
Actions > View. Auto drafts are enabled by default. Auto drafts can be disabled by going to
Actions > General Settings. When auto drafts are enabled, any change to a firewall
configuration results in a system-generated auto draft. A maximum of 100 auto drafts and 10
manual drafts can be saved. Auto drafts can be edited and saved as a manual draft, for
publishing now or later. To prevent multiple users from opening and editing the draft, manual
drafts can be locked. When a draft is published, the current configuration is replaced by the
configuration in the draft.
Manual drafts can be edited and saved. Auto drafts can be cloned, and saved as manual drafts,
and then edited. The maximum number of drafts that can be saved is 100 autodrafts and 10
manual drafts.
Procedure
A manual draft can be saved, or edited and then saved. After saving, you can revert to the
original configuration.
4 To prevent multiple users from opening and editing a manual draft, Lock the configuration,
and add a comment.
5 Click Save.
A timeline opens up showing all saved configurations. To see details such as draft name,
date, time, and who saved it, point to the dot or star icon of any draft. Saved configurations
can be filtered by time, showing all drafts in the last one day, one week, 30 days, or the last
three months. They can also be filtered by Autodraft and Saved by me, or by name, by using
the search tool on the top right.
7 Hover over a draft to view name, date and time details of the saved configuration. Click the
name to view draft details.
The detailed draft view shows the required changes to be made to the current firewall
configuration, in order to be in sync with this draft. If this draft is published, all of the changes
visible in this view will be applied to the current configuration.
Clicking the downward arrow expands each section, and displays the added, modified, and
deleted changes in each section. The comparison shows added rules with a green bar on the
left side of the box, modified elements (such as a name change) have a yellow bar, and
deleted elements have a red bar.
8 To edit the name or description of a selected draft, click the menu icon (three dots) from the
View Draft Details window, and select Edit.
Manual drafts can be locked. If locked, a comment for the draft must be provided.
Some roles, such as enterprise administrator, have full access credentials and cannot be
locked out. See Role-Based Access Control.
9 Auto drafts and manual drafts can also be cloned and saved by clicking Clone.
In the Saved Configurations window, you can accept the default name, or edit it. You can also
lock the configuration. If locked, a comment for the draft must be provided.
10 To save the cloned version of the draft configuration, click Save. The draft is now present in
the Saved Configurations section.
What to do next
After viewing a draft, you can load and publish it. It is then the active firewall configuration.
During publishing, a new auto draft is created. This auto draft can be published to revert to the
previous configuration.
Procedure
A timeline opens up showing all saved configurations. To see details such as draft name,
date, time and who saved it, point to the dot icon of any draft. Saved configurations are
filtered by time, showing all drafts created in 1 day, 1 week, 30 days, or the last 3 months.
2 Click a draft name and the View Draft Details window appears.
3 Click Load. The new firewall configuration appears on the main window.
Note A draft cannot be loaded if firewall filters are being used, or if there are unsaved
changes in the current configuration.
4 To commit the draft configuration and make it active, click Publish. To return to the previous
published configuration, click Revert.
After publishing, the changes in the draft will be present in the active configuration.
5 To edit the contents of the selected draft before publishing, after clicking Load, edit the
configuration.
6 To save the edited version of the draft configuration, click Actions > Save.
7 Enter a Name and an optional Description. You can also Lock the draft. If locked, a comment
for the draft must be provided.
8 Click Save.
9 To commit the draft configuration and make it active, click Publish, or to return to the
previous published configuration, click Revert.
Prerequisites
If you are creating rules for Identity Firewall, first create a group with Active Directory members.
To view supported protocols for IDFW, see Identity Firewall Supported Configurations.
Note For Identity Firewall rule enforcement, the Windows Time service should be on for all VMs
using Active Directory. This ensures that the date and time are synchronized between Active
Directory and the VMs. AD group membership changes, including enabling and deleting users, do not
immediately take effect for logged in users. For changes to take effect, users must log out and
then log back in. AD administrators should force a logout when group membership is modified.
This behavior is a limitation of Active Directory.
Note that if you are using a combination of Layer 7 and ICMP, or any other protocols, you need to
put the Layer 7 firewall rules last. Any rules below a Layer 7 any/any rule will not be executed.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
3 Enable Distributed Firewall by selecting Actions > General Settings at the top right of the
window, and toggling the Distributed Firewall Status. Click Save.
4 Ensure that you are in the correct pre-defined category, and click Add Policy. For more
about categories, see Distributed Firewall.
7 Click Publish. Multiple policies can be added, and then published together at one time.
9 Enter a name for the rule. IPv4, IPv6, and multicast addresses are supported.
10 In the Sources column, click the edit icon and select the source of the rule. Groups with
Active Directory members can be used for the source field of an IDFW rule. See Add a Group
for more information.
11 In the Destinations column, click the edit icon and select the destination of the rule. If not
defined, the destination matches any. See Add a Group for more information.
12 In the Services column, click the edit icon and select services. The service matches any if not
defined.
13 The Profiles column is not available when adding a rule to the Ethernet category. For all other
rule categories, in the Profiles column, click the edit icon and select a context profile, or click
Add New Context Profile. See Add a Context Profile.
Context profiles use layer 7 APP ID attributes for use in distributed firewall rules and gateway
firewall rules. Multiple App ID context profiles can be used in a firewall rule with services set
to Any. For ALG profiles (FTP, or TFTP), one context profile is supported per rule.
Context profiles are not supported when creating IDS rules.
15 By default, the Applied to column is set to DFW, and the rule is applied to all workloads. You
can also apply the rule to selected groups. You cannot use Applied to for groups based on
IP sets. Applied to defines the scope of enforcement per rule, and is used mainly for
optimization of resources on ESXi and KVM hosts. It helps in defining a targeted policy for
specific zones and tenants, without interfering with other policies defined for other tenants and
zones.
Allow: Allows all L3 or L2 traffic with the specified source, destination, and protocol to pass through the current firewall context. Packets that match the rule, and are accepted, traverse the system as if the firewall is not present.
Drop: Drops packets with the specified source, destination, and protocol. Dropping a packet is a silent action with no notification to the source or destination systems. Dropping the packet causes the connection to be retried until the retry threshold is reached.
Reject: Rejects packets with the specified source, destination, and protocol. Rejecting a packet is a more graceful way to deny a packet, as it sends a destination unreachable message to the sender. If the protocol is TCP, a TCP RST message is sent. ICMP messages with administratively prohibited code are sent for UDP, ICMP, and other IP connections. One benefit of using Reject is that the sending application is notified after only one attempt that the connection cannot be established.
Direction: Refers to the direction of traffic from the point of view of the destination object. IN means that only traffic to the object is checked, OUT means that only traffic from the object is checked, and In-Out means that traffic in both directions is checked.
Log Label: A description entered here will be seen on the interface on the host.
19 Click Publish. Multiple rules can be added and then published together at one time.
20 On each rule, click the Info icon to view the rule ID number, and where it is enforced.
This icon is grayed out until you publish the rule. You can also specify a rule ID when you click
the filter icon to display only policies and rules that satisfy the filter criteria.
21 The realization status API has been enhanced at a security policy level to provide additional
realization status information. This can be achieved by specifying the query parameter
include_enforced_status=true along with the intent_path. Make the following API call.
GET https://<nsx>/policy/api/v1/infra/realized-state/status?intent_path=/infra/domains/default/security-policies/<security-policy-id>&include_enforced_status=true
The log file is /var/log/dfwpktlogs.log for both ESXi and KVM hosts.
# tail -f /var/log/dfwpktlogs.log
2018-03-27T10:23:35.196Z INET TERM 3072 IN TCP FIN 100.64.80.1/60688->172.16.10.11/80 8/7 373/5451
2018-03-27T10:23:35.196Z INET TERM 3074 OUT TCP FIN 172.16.10.11/46108->172.16.20.11/8443 8/9
1178/7366
2018-03-27T10:23:35.196Z INET TERM 3072 IN TCP RST 100.64.80.1/60692->172.16.10.11/80 9/6 413/5411
2018-03-27T10:23:35.196Z INET TERM 3074 OUT TCP RST 172.16.10.11/46109->172.16.20.11/8443 9/7
1218/7262
2018-03-27T10:23:37.442Z 71d32787 INET match PASS 3074 IN 60 TCP 172.16.10.12/35770-
>172.16.20.11/8443 S
2018-03-27T10:23:38.492Z INET match PASS 2 OUT 1500 TCP 172.16.10.11/80->100.64.80.1/60660 A
2018-03-27T10:23:39.934Z INET match PASS 3072 IN 52 TCP 100.64.80.1/60720->172.16.10.11/80 S
2018-03-27T10:23:39.944Z INET match PASS 3074 OUT 60 TCP 172.16.10.11/46114->172.16.20.11/8443 S
2018-03-27T10:23:39.944Z 71d32787 INET match PASS 3074 IN 60 TCP 172.16.10.11/46114-
>172.16.20.11/8443 S
2018-03-27T10:23:42.449Z 71d32787 INET match PASS 3074 IN 60 TCP 172.16.10.12/35771-
>172.16.20.11/8443 S
2018-03-27T10:23:44.712Z INET TERM 3074 IN TCP RST 172.16.10.11/46109->172.16.20.11/8443 9/7 1218/7262
2018-03-27T10:23:44.712Z INET TERM 3074 IN TCP FIN 172.16.10.12/35766->172.16.20.11/8443 9/10
1233/7418
2018-03-27T10:23:44.712Z INET TERM 3074 IN TCP FIN 172.16.10.11/46110->172.16.20.11/8443 9/9 1230/7366
2018-03-27T10:23:44.712Z INET TERM 3074 IN TCP FIN 172.16.10.12/35767->172.16.20.11/8443 9/10
1233/7418
2018-03-27T10:23:44.939Z INET match PASS 3072 IN 52 TCP 100.64.80.1/60726->172.16.10.11/80 S
2018-03-27T10:23:44.957Z INET match PASS 3074 OUT 60 TCP 172.16.10.11/46115->172.16.20.11/8443 S
2018-03-27T10:23:44.957Z 71d32787 INET match PASS 3074 IN 60 TCP 172.16.10.11/46115-
>172.16.20.11/8443 S
2018-03-27T10:23:45.480Z INET TERM 2 OUT TCP TIMEOUT 172.16.10.11/80->100.64.80.1/60528 1/1 1500/56
Groups can be excluded from firewall rules, and a maximum of 100 groups can be on the
exclusion list. IP sets, MAC sets, and AD groups cannot be included as members in a group that is
used in a firewall exclusion list.
Note NSX-T Data Center automatically adds NSX Manager and NSX Edge node virtual machines
to the firewall exclusion list.
Procedure
1 Navigate to Security > Distributed Firewall > Actions > Exclusion List.
2 To add a group to the exclusion list, click the check box next to any group. Then click Apply.
4 To edit a group, click the three dot menu next to a group and select Edit.
5 To delete a group, click the three dot menu and select Delete.
Currently, a predefined list of domains is supported. You can see the list of FQDNs when you add
a new context profile of attribute type Domain (FQDN) Name. You can also see a list of FQDNs
by running the API call /policy/api/v1/infra/context-profiles/attributes?attribute_key=DOMAIN_NAME.
You must set up a DNS rule first, and then the FQDN whitelist or blacklist rule below it. This is
because NSX-T Data Center uses DNS Snooping to obtain a mapping between the IP address
and the FQDN. SpoofGuard should be enabled across the switch on all logical ports to protect
against the risk of DNS spoofing attacks. A DNS spoofing attack is when a malicious VM can
inject spoofed DNS responses to redirect traffic to malicious endpoints or bypass the firewall. For
more information about SpoofGuard, see Understanding SpoofGuard Segment Profile.
Note In the current release, ESXi and KVM are supported. ESXi supports drop/reject action for
URL rules. KVM supports the whitelisting feature.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-
ip-address>.
3 Add a firewall policy section by following the steps in Add a Distributed Firewall. An existing
firewall policy section can also be used.
4 Select the new or existing firewall policy section and click Add Rule to create the DNS firewall
rule first.
5 Provide a name for the firewall rule, such as DNS rule, and provide the following details:
Services: Click the edit icon and select the DNS or DNS-UDP service as applicable to your environment.
Profile: Click the edit icon and select the DNS context profile. This is precreated and is available in your deployment by default.
6 Click Add Rule again to set up the FQDN whitelisting or blacklisting rule.
7 Name the rule appropriately, such as FQDN/URL Whitelist. Drag the rule under the DNS rule
under this policy section.
Services: Click the edit icon and select the service you want to associate with this rule, for example, HTTP.
Profile: Click the edit icon and click Add New Context Profile. Click in the column titled Attribute, and select Domain (FQDN) Name. Select the list of Attribute Name/Values from the predefined list. Click Add. See Add a Context Profile for details.
9 Click Publish.
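The FQDN context profile referenced in the Profile option above can also be defined through the policy API. The following is a minimal sketch with a hypothetical profile ID; the domain shown is only an example and must be taken from the predefined list returned by the attributes API described earlier in this chapter:

PATCH https://<nsx-manager>/policy/api/v1/infra/context-profiles/<fqdn-profile-id>
{
    "display_name": "fqdn-whitelist-profile",
    "attributes": [
        {
            "key": "DOMAIN_NAME",
            "value": ["*.office365.com"],
            "datatype": "STRING"
        }
    ]
}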
Starting in NSX-T Data Center 2.5.1, integration with Arista CloudVision eXchange (CVX) is
supported. This integration facilitates consistent networking and security services across virtual
and physical workloads, independent of your application frameworks or physical network
infrastructure. NSX-T Data Center does not directly program the physical network switch or
router but integrates at the physical SDN controller level, therefore preserving the autonomy of
security administrators and physical network administrators.
Starting in NSX-T Data Center 2.5.1, integration with Arista EOS 4.22.1FX-PCS and later is
supported.
Limitations
n Arista switches require ARP traffic to exist before firewall rules are applied to an end host
that is connected to an Arista switch. Packets can therefore pass through the switch before
firewall rules are configured to block traffic.
n Allowed traffic does not resume when a switch crashes or is reloaded. The ARP tables need
to be populated again, after the switch comes up, for the firewall rules to be enforced on the
switch.
n Firewall rules cannot be applied on the Arista Physical Switch, for FTP passive clients that
connect to FTP Server connected to the Arista Physical Switch.
n In a CVX HA setup that uses a Virtual IP for the CVX cluster, the CVX VM's dvpg Promiscuous
mode and Forged transmits settings must be set to Accept. If they are set to the default (Reject),
the CVX HA Virtual IP is not reachable from NSX Manager.
Prerequisites
Procedure
1 Log in to NSX Manager as a root user and run the following command to retrieve the
thumbprint for CVX:
openssl s_client -connect <virtual IP address of CVX cluster> | openssl x509 -noout -fingerprint -sha256
Sample output:
depth=0 CN = self.signed
verify error:num=18:self signed certificate
verify return:1
depth=0 CN = self.signed
verify return:1
SHA256 Fingerprint=35:C1:42:BC:7A:2A:57:46:E8:72:F4:C8:B8:31:E3:13:5F:41:95:EF:D8:1E:E9:3D:F0:CC:3B:09:A2:FE:22:DE
2 Edit the retrieved thumbprint to use only lower case characters and exclude any colons in the
thumbprint.
35c142bc7a2a5746e872f4c8b831e3135f4195efd81ee93df0cc3b09a2fe22de
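The conversion can also be scripted. The following is a minimal sketch using tr; the fingerprint value is taken from the sample output above.

# Minimal sketch: strip the colons and lowercase the fingerprint so that it
# matches the format expected by the API. The input value is the fingerprint
# from the sample output above.
echo "35:C1:42:BC:7A:2A:57:46:E8:72:F4:C8:B8:31:E3:13:5F:41:95:EF:D8:1E:E9:3D:F0:CC:3B:09:A2:FE:22:DE" \
  | tr -d ':' | tr 'A-F' 'a-f'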
3 Call the following PATCH API to register CVX as an enforcement point:
PATCH https://<nsx-manager>/policy/api/v1/infra/sites/default/enforcement-points/cvx-default-ep
{
"auto_enforce": "false",
"connection_info": {
"enforcement_point_address": "<IP address of CVX>",
"resource_type": "CvxConnectionInfo",
"username": "cvpadmin",
"password": "1q2w3e4rT",
"thumbprint": "65a9785e88b784f54269e908175ada662be55f156a2dc5f3a1b0c339cea5e343"
}
}
For example:
PATCH https://<nsx-manager>/policy/api/v1/infra/sites/default/enforcement-points/cvx-default-ep
{
"auto_enforce": "false",
"connection_info": {
"enforcement_point_address": "<IP address of CVX>",
"resource_type": "CvxConnectionInfo",
"username": "admin",
"password": "1q2w3e4rT",
"thumbprint": "35c142bc7a2a5746e872f4c8b831e3135f4195efd81ee93df0cc3b09a2fe22de"
}
}
Sample output:
{
"connection_info": {
"thumbprint": "35c142bc7a2a5746e872f4c8b831e3135f4195efd81ee93df0cc3b09a2fe22de",
"enforcement_point_address": "192.168.2.198",
"resource_type": "CvxConnectionInfo"
},
"auto_enforce": false,
"resource_type": "EnforcementPoint",
"id": "cvx-default-ep",
"display_name": "cvx-default-ep",
"path": "/infra/sites/default/enforcement-points/cvx-default-ep",
"relative_path": "cvx-default-ep",
"parent_path": "/infra/sites/default",
"marked_for_delete": false,
"_system_owned": false,
"_create_user": "admin",
"_create_time": 1564036461953,
"_last_modified_user": "admin",
"_last_modified_time": 1564036461953,
"_protection": "NOT_PROTECTED",
"_revision": 0
}
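The API calls in this procedure can be issued with any REST client. The following is a hedged curl sketch of the PATCH call above; it assumes the request body shown in the example is saved to a local file named cvx-ep.json (an illustrative name), that the NSX admin account is used, and that -k is acceptable in a lab environment.

# Hedged sketch: issue the PATCH call shown above with curl. The file name
# cvx-ep.json, the admin credentials, and the use of -k are assumptions for
# illustration only.
curl -k -u admin:'<nsx-admin-password>' \
  -X PATCH \
  -H 'Content-Type: application/json' \
  -d @cvx-ep.json \
  'https://<nsx-manager>/policy/api/v1/infra/sites/default/enforcement-points/cvx-default-ep'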
5 Call the POST /api/v1/notification-watchers/ API and use the CVX thumbprint to create a
notification ID. For example:
POST https://<nsx-manager>/api/v1/notification-watchers/
{
"server": "<virtual IP address of CVX cluster>",
"method": "POST",
"uri": "/pcs/v1/nsgroup/notification",
"use_https": true,
"certificate_sha256_thumbprint":
"35c142bc7a2a5746e872f4c8b831e3135f4195efd81ee93df0cc3b09a2fe22de",
"authentication_scheme": {
"scheme_name":"BASIC_AUTH",
"username":"cvpadmin",
"password":"1q2w3e4rT"
}
}
Sample output:
{
"id": "a0286cb6-de4d-41de-99a0-294465345b80",
"server": "192.168.2.198",
"port": 443,
"use_https": true,
"certificate_sha256_thumbprint":
"35c142bc7a2a5746e872f4c8b831e3135f4195efd81ee93df0cc3b09a2fe22de",
"method": "POST",
"uri": "/pcs/v1/nsgroup/notification",
"authentication_scheme": {
"scheme_name": "BASIC_AUTH",
"username": "cvpadmin"
},
"send_timeout": 30,
"max_send_uri_count": 5000,
"resource_type": "NotificationWatcher",
"display_name": "a0286cb6-de4d-41de-99a0-294465345b80",
"_create_user": "admin",
"_create_time": 1564038044780,
"_last_modified_user": "admin",
"_last_modified_time": 1564038044780,
"_system_owned": false,
"_protection": "NOT_PROTECTED",
"_revision": 0
}
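The id value in the response is the notification ID that the CVX configuration references later as notification-id. A minimal sketch for capturing it, assuming the response was saved to a file named watcher.json and that jq is installed:

# Minimal sketch: extract the notification ID from the POST response for use
# in the CVX "notification-id" setting. The file name watcher.json and the
# use of jq are assumptions for illustration.
jq -r '.id' watcher.json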
6 Call the following PATCH API to map the default domain to the CVX enforcement point:
PATCH https://<nsx-manager>/policy/api/v1/infra/domains/default/domain-deployment-maps/cvx-default-dmap
{
"display_name": "cvx-deployment-map",
"id": "cvx-default-dmap",
"enforcement_point_path": "/infra/sites/default/enforcement-points/cvx-default-ep"
}
Prerequisites
Procedure
1 Log in to NSX Manager as a root user and run the following command to generate the pinned public key (a base64-encoded SHA-256 hash of the NSX Manager certificate's public key) that CVX uses to communicate with NSX Manager:
openssl s_client -connect <IP address of nsx-manager>:443 | openssl x509 -pubkey -noout | openssl rsa -pubin -outform der | openssl dgst -sha256 -binary | openssl base64
Sample output:
2 From the CVX CLI, run the following commands to configure the connection to NSX Manager:
cvx
no shutdown
service pcs
no shutdown
controller <IP address of nsx-manager>
username <NSX administrator user name>
password <NSX administrator password>
enforcement-point cvx-default-ep
pinned-public-key <thumbprint for CVX to communicate with NSX
Manager>
notification-id <notification ID created while registering CVX with NSX>
end
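The pinned-public-key value is the base64 digest produced by the openssl pipeline in step 1, prefixed with sha256//, as shown in the sample configuration in step 3. A hedged sketch for producing the complete string, where NSX_MANAGER is an illustrative variable holding the NSX Manager IP address:

# Hedged sketch: build the pinned-public-key string used in the CVX
# configuration. NSX_MANAGER is an illustrative variable; the sha256// prefix
# matches the sample running configuration shown below.
NSX_MANAGER="<IP address of nsx-manager>"
KEY_HASH=$(openssl s_client -connect "${NSX_MANAGER}:443" </dev/null 2>/dev/null \
  | openssl x509 -pubkey -noout \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -binary \
  | openssl base64)
echo "sha256//${KEY_HASH}"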
3 Run the following command from the CVX CLI to check the configuration:
show running-config
Sample output:
cvx
no shutdown
source-interface Management1
!
service hsc
no shutdown
!
service pcs
no shutdown
controller 192.168.2.80
username admin
password 7 046D26110E33491F482F2800131909556B
enforcement-point cvx-default-ep
pinned-public-key sha256//S+zwADluzeNf+dnffDpYvgs4YrS6QBgyeDry40bPgms=
notification-id a0286cb6-de4d-41de-99a0-294465345b80
4 Configure a tag on the Ethernet interface of the physical switch that connects to the physical server. Run the following commands on the physical switch managed by CVX.
configure terminal
interface ethernet 4
tag phy_app_server
end
copy running-config startup-config
Copy completed successfully.
5 Run the following command to verify the tag configuration on the switch:
Sample output:
interface Ethernet4
description connected-to-7150s-3
switchport trunk allowed vlan 1-4093
switchport mode trunk
tag sx4_app_server
IP addresses that are learned through ARP on the tagged interfaces are shared with NSX-T Data Center.
6 Log in to NSX Manager to create and publish firewall rules for the physical workloads managed by CVX. See Chapter 13 Security for more information on creating rules.
Policies and rules published in NSX-T Data Center appear as dynamic ACLs on the physical switch managed by CVX.
For more information, see the Arista documentation on CVX HA setup, CVX HA virtual IP setup, and physical switch MLAG setup.
Because address sets are dynamically populated based on virtual machine names or tags, and must be updated on each filter, they can exhaust the heap memory that hosts use to store DFW rules and IP address sets.
In NSX-T Data Center version 2.5 and later, a feature called Global or Shared Address Sets makes address sets shared across all filters. While each filter can have different rules based on Applied To, the address set members are constant across all filters. This feature is enabled by default, reducing heap memory use, and cannot be disabled.
In NSX-T Data Center version 2.4 and earlier, Global or Shared Address Sets is disabled, and environments with heavy distributed firewall rule usage might experience VSIP heap exhaustion.
Distributed IDS
Distributed Intrusion Detection Service (IDS) monitors network traffic on the host for suspicious
activity.
IDS detects intrusion attempts based on already known malicious instruction sequences. The
detected patterns in the IDS are known as signatures. Specific signatures can be excluded from
intrusion detection.
Note Do not enable Distributed Intrusion Detection Service (IDS) in an environment that is using
Distributed Load Balancer. NSX-T Data Center does not support using IDS with a Distributed
Load Balancer.
1 Enable IDS on hosts, download the latest signature set, and configure signature settings.
Distributed IDS Settings and Signatures
Distributed firewall (DFW) must be enabled for IDS to work. If traffic is blocked by a DFW rule,
then IDS will not see the traffic.
Intrusion detection can be enabled on standalone hosts by using the Enabled toggle. If vCenter Server clusters are detected, IDS can also be enabled on a per-cluster basis by selecting the cluster and clicking Enable.
Signatures
Signatures are applied to IDS rules through profiles. A single profile is applied to matching traffic.
By default, NSX Manager checks for new signatures once per day. New signature update
versions are published every two weeks (with additional non-scheduled 0-day updates). When a
new update is available, there is a banner across the page with an Update Now link.
If Auto update new versions is selected, signatures are automatically applied to your hosts after they are downloaded from the cloud. If auto update is disabled, the signatures remain at the listed version. Click View and change versions to add another version in addition to the default. Currently, two versions of signatures are maintained. Whenever the version commit identification number changes, a new version is downloaded.
If a proxy server is configured for NSX Manager to access the Internet, click Proxy Settings and
complete the configuration.
To download and upload a signature bundle when NSX Manager does not have Internet access:
1 This API must be called first, before any communication with the cloud service starts. It registers the client using the client's license key and generates credentials for the client to use. The generated client_id and client_secret are used in the request to the Authentication API. If the client has previously registered but no longer has access to the client_id and client_secret, the client must re-register using the same API.
POST https://api.nsx-sec-prod.com/1.0/auth/register
Body:
{
"license_keys":["054HK-D0356-480N1-02AAM-AN047"],
"device_type":"NSX-Policy-Manager",
"client_id": "client_username"
}
Response:
{"client_id":"client_username",
"client_secret": "Y54+V/
rCpEm50x5HAUIzH6aXtTq7s97wCA2QqZ8VyrtFQjrJih7h0alItdQn02T46EJVnSMZWTseragTFScrtIwsiPSX7APQIC7MxAYZ
0BoAWvW2akMxyZKyzbYZjeROb/C2QchehC8GFiFNpwqiAcQjrQHwHGdttX4zTQ="
}
2 This API call authenticates the client using the client_id and client_secret, and generates an
authorization token to use in the headers of requests to IDS Signatures APIs. The token is
valid for 60 minutes. If the token is expired, the client has to reauthenticate using the client_id
and client_secret.
POST https://api.nsx-sec-prod.com/1.0/auth/authenticate
Body:
{"client_id":"client_username",
"client_secret": "Y54+V/
rCpEm50x5HAUIzH6aXtTq7s97wCA2QqZ8VyrtFQjrJih7h0alItdQn02T46EJVnSMZWTseragTFScrtIwsiPSX7APQIC7MxAYZ
0BoAWvW2akMxyZKyzbYZjeROb/C2QchehC8GFiFNpwqiAcQjrQHwHGdttX4zTQ="
}
Response:
{
"access_token":
"eyJhbGciOiJIUzUxMiJ9.eyJqdGkiOiI3ZjMwN2VhMmQwN2IyZjJjYzM5ZmU5NjJjNmZhNDFhMGZlMTk4YjMyMzU4OGU5NGU5
NzE3NmNmNzk0YWU1YjdjLTJkYWY2MmE3LTYxMzctNGJiNS05NzJlLTE0NjZhMGNkYmU3MCIsInN1YiI6IjdmMzA3ZWEyZDA3Yj
JmMmNjMzlmZTk2MmM2ZmE0MWEwZmUxOThiMzIzNTg4ZTk0ZTk3MTc2Y2Y3OTRhZTViN2MtMmRhZjYyYTctNjEzNy00YmI1LTk3
MmUtMTQ2NmEwY2RiZTcwIiwiZXhwIjoxNTU1NTUyMjk0LCJpYXQiOjE1NTU1NDg2OTR9.x4U75GShDLMhyiyUO2B9HIi1Adonz
x3Smo01qRhvXuErQSpE_Kxq3rzg1_IIyvoy3SJwwDhSh8KECtGW50eCPg",
"token_type": "bearer",
"expires_in": 3600,
"scope": "[distributed_threat_features]"
}
3 The response to this call contains the link to the ZIP file. NSX Cloud downloads the signatures from the GitHub repository every 24 hours and saves them in a ZIP file. Copy and paste the signatures_url value into your browser to download the ZIP file.
GET https://api.nsx-sec-prod.com/1.0/intrusion-services/signatures
In the Headers tab, the Authorization key will have the access_token value from the
authenticate API response.
Authorization
eyJhbGciOiJIUzUxMiJ9.eyJqdGkiOiI3ZjMwN2VhMmQwN2IyZjJjYzM5ZmU5NjJjNmZhNDFhMGZlMTk4YjMyMzU4OGU5NGU5N
zE3NmNmNzk0YWU1YjdjLTJkYWY2MmE3LTYxMzctNGJiNS05NzJlLTE0NjZhMGNkYmU3MCIsInN1YiI6IjdmMzA3ZWEyZDA3YjJ
mMmNjMzlmZTk2MmM2ZmE0MWEwZmUxOThiMzIzNTg4ZTk0ZTk3MTc2Y2Y3OTRhZTViN2MtMmRhZjYyYTctNjEzNy00YmI1LTk3M
mUtMTQ2NmEwY2RiZTcwIiwiZXhwIjoxNTU1NTUyMjk0LCJpYXQiOjE1NTU1NDg2OTR9.x4U75GShDLMhyiyUO2B9HIi1Adonzx
3Smo01qRhvXuErQSpE_Kxq3rzg1_IIyvoy3SJwwDhSh8KECtGW50eCPg
Response:
{
"signatures_url": "https://ncs-idps-us-west-2-prod-signatures.s3.us-west-2.amazonaws.com/
a07fe284ff70dc67194f2e7cf1a8178d69570528.zip?X-Amz-Security-Token=IQoJb3JpZ2luX2VjENf%2F%2F%2F%2F
%2F%2F%2F%2F%2F%2FwEaCXVzLXdlc3QtMSJHMEUCIG1UYbzfBxOsm1lvdj1k36LPyoPota0L4CSOBMXgKGhmAiEA
%2BQC1K4Gr7VCRiBM4ZTH2WbP2rvIp0qfHfGlOx0ChGc4q6wEIHxABGgw1MTAwMTM3MTE1NTMiDA4H4ir7eJl779wWWirIAdLI
x1uAukLwnhmlgLmydZhW7ZExe
%2BamDkRU7KT46ZS93mC1CQeL00D2rjBYbCBiG1mzNILPuQ2EyxmqxhEOzFYimXDDBER4pmv8%2BbKnDWPg08RNTqpD
%2BAMicYNP7WlpxeZwYxeoBFruCDA2l3eXS6XNv3Ot6T2a
%2Bk4rMKHtZyFkzZREIIcQlPg7Ej5q62EvvMFQdo8TyZxFpMJBc4IeG0h1k6QZU1Jlkrq2RYKit5WwLD
%2BQKJrEdf4A0YctLbMCDbNbprrUcCADMKyclu8FOuABuK90a%2BvnA%2FJFYiJ32eJl
%2Bdt0YRbTnRyvlMuSUHxjNAdyrFxnkPyF80%2FQLYLVDRWUDatyAo10s3C0pzYN%2FvMKsumExy6FIcv
%2FOLoO8Y9RaMOTnUfeugpr6YsqMCH0pUR4dIVDYOi1hldNCf1XD74xMJSdnviaxY4vXD4bBDKPnRFFhOxLTRFAWVlMNDYggLh
3pV3rXdPnIwgFTrF7CmZGJAQBBKqaxzPMVZ2TQBABmjxoRqCBip8Y662Tbjth7iM2V522LMVonM6Tysf16ls6QU9IC6WqjdOde
i5yazK%2Fr9g%3D&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20191202T222034Z&X-Amz-
SignedHeaders=host&X-Amz-Expires=3599&X-Amz-Credential=ASIAXNPZPUTA6A7V7P4X%2F20191202%2Fus-
west-1%2Fs3%2Faws4_request&X-Amz-
Signature=d85ca4aef6abe22062e2693acacf823f0a4fc51d1dc07cda8dec93d619050f5e"
}
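The three calls above can be chained from a shell. The following is a hedged sketch using curl and jq; the endpoints and payload fields are the ones shown in the examples above, while LICENSE_KEY, CLIENT_ID, and the use of jq are assumptions made here for illustration.

# Hedged sketch: register the client, authenticate, retrieve the signatures
# URL, and download the ZIP. LICENSE_KEY and CLIENT_ID are values you supply.
LICENSE_KEY="<your-nsx-license-key>"
CLIENT_ID="client_username"

# Step 1: register the client and capture the generated client_secret.
SECRET=$(curl -s -X POST https://api.nsx-sec-prod.com/1.0/auth/register \
  -H 'Content-Type: application/json' \
  -d "{\"license_keys\":[\"${LICENSE_KEY}\"],\"device_type\":\"NSX-Policy-Manager\",\"client_id\":\"${CLIENT_ID}\"}" \
  | jq -r '.client_secret')

# Step 2: authenticate and capture the access token (valid for 60 minutes).
TOKEN=$(curl -s -X POST https://api.nsx-sec-prod.com/1.0/auth/authenticate \
  -H 'Content-Type: application/json' \
  -d "{\"client_id\":\"${CLIENT_ID}\",\"client_secret\":\"${SECRET}\"}" \
  | jq -r '.access_token')

# Step 3: retrieve the signatures URL and download the ZIP file. The
# Authorization header carries the raw access_token value, as shown above.
URL=$(curl -s https://api.nsx-sec-prod.com/1.0/intrusion-services/signatures \
  -H "Authorization: ${TOKEN}" \
  | jq -r '.signatures_url')
curl -o signatures.zip "${URL}"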
4 Navigate to Security > Distributed IDS > Settings. Click Upload IDS Signatures in the right
corner. Navigate to the saved signature ZIP file and upload the file. You can also upload the
signature ZIP using the API call:
POST https://<mgr-ip>/policy/api/v1/infra/settings/firewall/security/intrusion-services/
signatures?action=upload_signatures
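For reference, a hedged curl sketch of the upload call follows. The multipart form field name (file) is an assumption made here for illustration and should be checked against the NSX-T API guide; the admin credentials and the use of -k in a lab environment are also assumptions.

# Hedged sketch: upload the signature ZIP through the API shown above. The
# multipart field name "file" is an assumption; verify the exact upload
# format in the NSX-T API guide.
curl -k -u admin:'<nsx-admin-password>' \
  -X POST \
  -F 'file=@signatures.zip' \
  'https://<mgr-ip>/policy/api/v1/infra/settings/firewall/security/intrusion-services/signatures?action=upload_signatures'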
Signatures can be enabled based on the severity rating of the signature. A higher score indicates an increased risk associated with the intrusion event. Severity is determined from information carried in the signature, such as its severity rating and the CVSS score of the associated vulnerability.
Exclusions are set per severity level and are used to disable signatures, reducing noise and improving performance.
The default IDS profile includes critical severities and cannot be edited.
Procedure
4 To exclude a severity, click Select under Signatures to Exclude. You can now view and exclude the signatures included in that severity level. Click Add to add a signature to the exclusion list. The following information is provided for each signature:
Variable    Description
IDS Severity    Indicates the severity of the signature. For more details, see IDS Severity Ratings.
CVSS (Common Vulnerability Scoring System)    CVSS is a framework for rating the severity of security vulnerabilities in software. A CVSS base score of 0.0-3.9 is considered low severity, a score of 4.0-6.9 is medium severity, and a score of 7.0-10.0 is high severity.
CVE (Common Vulnerabilities and Exposures)    CVE is a dictionary of publicly known information security vulnerabilities and exposures.
5 Click Save.
What to do next
A higher score indicates an increased risk associated with the intrusion event.
IDS rules are created in the same manner as distributed firewall (DFW) rules. First, an IDS policy
or section is created, and then rules are created. DFW must be enabled, and traffic must be
allowed by DFW to be passed through to IDS rules.
n stateful
One or more policy sections with rules must be created, because there are no default rules. Before creating rules, create the groups of workloads that need a similar rule policy. See Add a Group.
2 Click Add Policy to create a policy section, and give the section a name.
3 Click the gear icon to configure the following policy section options:
Option Description
4 Click Add Rule to add a new rule, and give the rule a name.
6 Select the IDS Profile to be used for the matching traffic. For more information, see
Distributed IDS Profiles.
Option Description
9 Click Publish. Multiple rules can be added and then published together at one time.
For more information about creating policy sections and rules, see Add a Distributed Firewall.
Navigate to Security > Distributed IDS > Events to view intrusion events.
Colored dots indicate the unique type of intrusion events and can be clicked for details. The size
of the dot indicates the number of times an intrusion event has been seen. A blinking dot
indicates that an attack is ongoing. Point to a dot to see the attack name, number of attempts,
first occurrence, and other details.
All the intrusion attempts for a particular signature are grouped and plotted at their first
occurrence.
n Select the timeline by clicking the arrow in the upper right corner. The timeline can span from 24 hours to 14 days.
n Filter events by CVSS (Common Vulnerability Scoring System) to show only events with a score above a set threshold.
Detail    Description
Last Detected    The last time the signature was fired.
Vulnerability Details    If available, a link to the CVE and the CVSS score associated with the vulnerability.
Associated IDS Rule    A clickable link to the configured IDS rule that resulted in this event.
n To view intrusion history, click the arrow next to an event, then click View Intrusion History.
A window opens with the following details:
Detail Description
Time Detected This is the last time the signature was fired.
n The graph below the chart represents events that occurred over the selected time span. You can zoom in to a specific time window on this graph to view details of the signatures for the events that occurred during that window.
The viewable modes in the CLI can differ based on the assigned role and rights of a user. If you
are unable to access an interface mode or issue a particular command, consult your NSX
administrator.
Procedure
1 Open an SSH session to a compute host running the workloads that were previously deployed. Log in as root.
2 Enter the nsxcli command to open the NSX-T Data Center CLI.
3 To confirm that IDS is enabled on this host, run the command: get ids status.
Sample Output:
4 To confirm that both IDS profiles have been applied to this host, run the command get ids profile.
5 To review IDS profile (engine) statistics, including the number of rules loaded, and the number
of packets and sessions evaluated, run the command get ids engine stats.
The output is on a per profile basis, and shows the number of signatures loaded for each
profile, and the number of packets that were evaluated.
app_layer:
---------
flow:
http: 10713
tx:
http: 25911
detect:
------
engines:
alerts: 11129
id: 3
last_reload: 2020-03-17T21:29:39.387087+0000
packets_incoming: 572083
packets_outgoing: 571066
prof-uuid: 53ef4dba-0291-4ea3-96ef-d01259dca2fe
rules_failed: 0
rules_loaded: 11906
tcp:
---
memuse: 20872880
overlap: 50006
reassembly_memuse: 155439408
rst: 23797
sessions: 58811
syn: 89615
synack: 41635
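The same checks can be captured from a script. The following is a hedged sketch that runs the commands from steps 3 through 5 over SSH; it assumes root SSH access to the host and that this nsxcli version accepts a command with the -c option, both of which should be verified in your environment.

# Hedged sketch: run the IDS verification commands over SSH. Root SSH access
# and support for "nsxcli -c" are assumptions to verify on your hosts.
HOST="<esxi-host-ip>"
for CMD in "get ids status" "get ids profile" "get ids engine stats"; do
  echo "### ${CMD}"
  ssh "root@${HOST}" nsxcli -c "${CMD}"
done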