
Module 5: NSX-T Data Center Logical Switching Design


Importance
Overlay-backed logical switches, or segments, can be used to create isolated logical L2 networks
with the same flexibility and agility that exists with virtual machines.
The decoupling of logical switching from the physical network infrastructure is one of the main
benefits of adopting NSX-T Data Center.
NSX-T Data Center 3.0 supports the vSphere Distributed Switch (VDS) for preparing ESXi hosts as
transport nodes. With this feature, administrators can install and configure ESXi transport nodes
directly on an existing VDS.


Module Lessons
1. NSX-T Data Center Logical Switching
2. Traffic Flooding
3. Upgrading from N-VDS to VDS


Lesson 1: NSX-T Data Center Logical
Switching


Learner Objectives
• Define the key data plane terminology
• Explain NSX-T Data Center on vSphere Distributed Switch
• Describe how to upgrade to NSX-T Data Center on VDS


Overlay-Backed Logical Switches


About NSX-T on VDS
The ESXi hosts managed by vCenter Server can be configured to use VDS during the transport node
preparation.
In standalone ESXi host environments, NSX Manager installs the NSX-T Virtual Distributed Switch
(N-VDS) on transport nodes.
VDS uses the vSwitch kernel module and the N-VDS is implemented with the nsxt-vswitch kernel
module.
Segments from NSX Manager are viewed as NSX distributed port groups by vCenter Server.
The distributed port groups and the NSX distributed port groups can coexist on the same VDS.


NSX-T on VDS Requirements
NSX-T on VDS has the following requirements:
• Both the vCenter Server system and the ESXi hosts must run vSphere 7.
• VDS version must be 7.0 or later.
• An MTU of at least 1,700 must be set on the VDS.
• The vCenter Server system must be registered as a compute manager in NSX Manager.
• The ESXi hosts must be added to VDS before configuring the transport nodes.
• The NSX-T Data Center version must be 3.0 or later.
• All the ESXi hosts in the vSphere cluster must be prepared by creating a transport node profile
and attaching it to the cluster in the NSX UI.
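Several of these prerequisites can be checked programmatically. The following is a minimal sketch, assuming the NSX-T Manager REST API endpoint GET /api/v1/fabric/compute-managers and placeholder credentials, that verifies the vCenter Server system is registered as a compute manager:

import requests

NSX_MANAGER = "https://nsx-mgr.example.com"  # hypothetical NSX Manager FQDN
AUTH = ("admin", "password")                 # placeholder credentials

def list_compute_managers():
    """Return the vCenter Server systems registered as compute managers."""
    resp = requests.get(
        f"{NSX_MANAGER}/api/v1/fabric/compute-managers",
        auth=AUTH,
        verify=False,  # lab-only: skip certificate validation
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

for cm in list_compute_managers():
    print(cm["display_name"], cm["server"], cm["origin_type"])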


Use Cases for NSX-T on VDS
NSX-T on VDS has the following common use cases:
• Facilitating the migration from NSX Data Center for vSphere to NSX-T Data Center.
• Consolidating vSphere networking and NSX networking onto a single VDS.


Uplink Profile
The uplink profile is a template that defines how an NSX Virtual Distributed Switch connects to the
physical network.
The uplink profile specifies the following items:
• Format of the uplinks of an NSX Virtual Distributed Switch
• Teaming policy applied to those uplinks
• Transport VLAN used for the overlay traffic
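As an illustration, an uplink profile carrying all three items can be created through the NSX-T Manager API. This is a minimal sketch, assuming the endpoint POST /api/v1/host-switch-profiles with an UplinkHostSwitchProfile payload; the uplink names, VLAN ID, and credentials are placeholders:

import requests

NSX_MANAGER = "https://nsx-mgr.example.com"  # hypothetical NSX Manager FQDN
AUTH = ("admin", "password")                 # placeholder credentials

uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "esxi-uplink-profile",
    "teaming": {
        # Teaming policy applied to the uplinks.
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
        "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
    },
    # Transport VLAN carrying the overlay (Geneve) traffic.
    "transport_vlan": 150,
}

resp = requests.post(
    f"{NSX_MANAGER}/api/v1/host-switch-profiles",
    json=uplink_profile,
    auth=AUTH,
    verify=False,  # lab-only: skip certificate validation
)
resp.raise_for_status()
print("Created uplink profile:", resp.json()["id"])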


Coexistence of NSX Logical Switches and Traditional Distributed Virtual
Port Groups (1)


Coexistence of NSX Logical Switches and Traditional Distributed Virtual
Port Groups (2)


Transport Zone Definition (1)
Before NSX-T Data Center 3.0:
1. Attach Transport Node1 to Transport Zone Prod and Transport Zone Lab.
2. Match each N-VDS to its transport zone based on the Host Switch Name (HSN).

With NSX-T 3.0:
1. Attach N-VDS1 to Prod and N-VDS2 to Lab.
The HSN remains on the host with local significance.


Transport Zone Definition (2)
ENS/EDP (Enhanced Datapath) is now specified at transport node (TN) creation, not in the transport
zone (TZ) definition.
Enhanced Datapath is not an isolated NSX deployment. It can coexist with standard NSX.


Host Switches
NSX-T Data Center introduces a host switch called NSX-T Virtual Distributed Switch. This switch
normalizes connectivity among various compute domains, including multiple vCenter Server
instances, KVM, containers, and other off-premises or cloud implementations.
NSX-T Virtual Distributed Switch can be configured based on the performance required in your
environment:
• Standard: Configured for regular workloads, where normal traffic throughput is expected on the
workloads
• Enhanced: Configured for telecom workloads, where high traffic throughput is expected on the
workloads
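A sketch of where this choice surfaces in a transport node configuration, assuming the Manager API StandardHostSwitch schema; the switch name and transport zone ID below are illustrative placeholders:

host_switch = {
    "host_switch_name": "nvds-1",
    # "STANDARD" for regular workloads; "ENS" selects Enhanced Datapath
    # for telecom-style, high-throughput workloads.
    "host_switch_mode": "ENS",
    # With NSX-T 3.0, transport zones are attached per host switch at
    # transport node creation rather than in the transport zone definition.
    "transport_zone_endpoints": [
        {"transport_zone_id": "prod-overlay-tz-id"}  # placeholder ID
    ],
}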


Running NSX-T on VDS (1)
Deploy VDS 7.0.

VDS upgrade:
• Upgrade of VDS from an earlier version is a
well-known process.
• The Safe migration wizard takes care of the
vmk0 migration and auto-rollback.


Running NSX-T on VDS (2)
Prepare the ESXi host as a transport node with a tunnel endpoint (TEP).

NSX-T on VDS:
• You can install NSX-T Data Center in one step.
• The existing VDS configuration is unchanged.


Review of Learner Objectives
• Define the key data plane terminology
• Explain NSX-T Data Center on vSphere Distributed Switch
• Describe how to upgrade to NSX-T Data Center on VDS


Lesson 2: Traffic Flooding


Learner Objectives
• Describe how unicast traffic is forwarded between VMs
• Compare how L2 switches map MAC addresses and the overlay model
• Define broadcast, unknown unicast, and multicast (BUM) traffic and how NSX-T Data Center
handles this traffic
• Identify the tables maintained by NSX Manager
• Describe the advantages of a two-tier hierarchical design
• Identify the encapsulation used by NSX-T Data Center for its overlay model


Flooded Traffic
When frames must be replicated to multiple destinations, NSX-T Data Center does not differentiate
between broadcast, unknown unicast, and multicast (BUM) traffic:
• All BUM traffic is flooded in the same way across a logical switch.
Different NSX-T Data Center components orchestrate the replication of a frame to be flooded on a
logical switch.
NSX-T Data Center provides the following methods for flooding traffic:
• Source replication
• Unicast replication
These methods can be selected per logical switch.


Tables Maintained by NSX Manager
Like traditional physical networking devices, NSX-T Data Center can populate the filtering database
of a logical switch from the data plane.
NSX Manager builds a central repository for tables that enhance the behavior of the system:
• Global MAC address to TEP table, which is per logical switch
• Global ARP table, which associates MAC addresses to IP addresses
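A small illustrative model (not NSX code) of these two central tables, and of how the ARP table can answer a request without flooding it; all addresses are made up:

from typing import Dict, Optional

# Global MAC-to-TEP table, maintained per logical switch (segment):
# segment name -> { VM MAC -> TEP IP of the host where that MAC lives }
mac_to_tep: Dict[str, Dict[str, str]] = {
    "web-segment": {"00:50:56:aa:bb:01": "172.16.10.11"},
}

# Global ARP table: VM IP -> VM MAC, learned from the data plane.
arp_table: Dict[str, str] = {"10.1.1.10": "00:50:56:aa:bb:01"}

def arp_suppress(ip: str) -> Optional[str]:
    """Answer an ARP request centrally if the binding is already known."""
    return arp_table.get(ip)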


Unicast Traffic Between VMs
At L2, when a frame is destined to an unknown MAC address, it is flooded in the network.
Switches typically implement a MAC address table, or filtering database (FDB), which associates
MAC addresses to ports to prevent flooding.
When a frame is destined to a unicast MAC address known in the MAC address table, it is forwarded
by the switch to the corresponding port.
The NSX Virtual Distributed Switch maintains such a table for each realized logical switch.
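An illustrative sketch (not NSX code) of this learn-and-forward logic, including the flood fallback for unknown destinations:

from typing import Dict, List

fdb: Dict[str, str] = {}  # destination MAC -> port where that MAC was learned

def forward(src_mac: str, dst_mac: str, in_port: str, all_ports: List[str]) -> List[str]:
    """Return the ports a frame is sent to; learn the source on the way."""
    fdb[src_mac] = in_port       # learn: remember where src_mac lives
    if dst_mac in fdb:
        return [fdb[dst_mac]]    # known unicast: forward to a single port
    # Unknown unicast (or broadcast/multicast): flood to every other port.
    return [p for p in all_ports if p != in_port]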


BUM Replication: Source or Unicast
To enable flooding, NSX-T Data Center supports
different replication modes:
• Source:
– Simple, one hop (low latency)
– Replication cost paid by the multidestination
traffic initiator
• Unicast:
– Shared replication cost
– Optimizes bandwidth utilization between
ToRs
– Functionally the same as source in L2 fabric
– More efficient in L3 fabric
Unicast effectively covers both L2 and L3 fabric
use cases.
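The mode can also be set per segment through the NSX-T Policy API. A minimal sketch, assuming PATCH /policy/api/v1/infra/segments/<segment-id> and the Segment replication_mode field ("SOURCE" or "MTEP", MTEP being the hierarchical two-tier unicast mode); the segment ID and credentials are placeholders:

import requests

NSX_MANAGER = "https://nsx-mgr.example.com"  # hypothetical NSX Manager FQDN
AUTH = ("admin", "password")                 # placeholder credentials

resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/segments/web-segment",
    json={"replication_mode": "MTEP"},  # "SOURCE" for source replication
    auth=AUTH,
    verify=False,  # lab-only: skip certificate validation
)
resp.raise_for_status()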


Overlay Encapsulation
NSX-T Data Center uses Generic Network Virtualization Encapsulation (Geneve) for its overlay
model.
Geneve is an IETF Internet draft that builds on VXLAN concepts to provide enhanced flexibility for
data plane extensibility.
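For reference, the fixed part of the encapsulation is an 8-byte base header (the Internet draft has since been published as RFC 8926); the option TLVs that give Geneve its extensibility over VXLAN follow this header. A minimal sketch that packs the base header:

import struct

def build_geneve_header(vni: int, opt_len_words: int = 0) -> bytes:
    """Pack the 8-byte Geneve base header (version 0, no flags set)."""
    byte0 = (0 << 6) | (opt_len_words & 0x3F)  # 2-bit version, 6-bit option length
    byte1 = 0                                  # O (control) and C (critical) flags clear
    protocol = 0x6558                          # Transparent Ethernet Bridging
    vni_and_reserved = (vni & 0xFFFFFF) << 8   # 24-bit VNI plus reserved byte
    return struct.pack("!BBHI", byte0, byte1, protocol, vni_and_reserved)

assert len(build_geneve_header(vni=5001)) == 8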


Review of Learner Objectives
• Describe how unicast traffic is forwarded between VMs
• Compare how L2 switches map MAC addresses and the overlay model
• Define broadcast, unknown unicast, and multicast (BUM) traffic and how NSX-T Data Center
handles this traffic
• Identify the tables maintained by NSX Manager
• Describe the advantages of a two-tier hierarchical design
• Identify the encapsulation used by NSX-T Data Center for its overlay model


Lesson 3: Upgrading from N-VDS to VDS


Learner Objectives
• Identify the steps to manually upgrade from N-VDS to VDS
• Identify the design considerations for the virtual infrastructure segment and transport network
• Describe choices made in the segment profile design


Manual Upgrade to VDS (1)
Prerequisite: ESXi and vCenter Server must be upgraded to 7.0.

For all hosts prepared individually for NSX (not part of a transport node profile):
1. Put the host in maintenance mode.
vSphere DRS evacuates the VMs.
2. Uninstall NSX (see the API sketch after this list).
3. Install VDS 7.0.
4. Prepare NSX.
5. Exit maintenance mode.
DRS redistributes VMs.
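Step 2 can be driven through the Manager API. A minimal sketch, assuming the endpoint DELETE /api/v1/transport-nodes/<node-id> with an unprepare_host flag (the manager address, credentials, and flag are assumptions; maintenance mode is handled in vCenter Server first):

import requests

NSX_MANAGER = "https://nsx-mgr.example.com"  # hypothetical NSX Manager FQDN
AUTH = ("admin", "password")                 # placeholder credentials

def uninstall_nsx(transport_node_id: str) -> None:
    """Delete the transport node and unprepare the underlying ESXi host."""
    resp = requests.delete(
        f"{NSX_MANAGER}/api/v1/transport-nodes/{transport_node_id}",
        params={"unprepare_host": "true"},  # assumed flag: also remove NSX VIBs
        auth=AUTH,
        verify=False,  # lab-only: skip certificate validation
    )
    resp.raise_for_status()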


Manual Upgrade to VDS (2)
Prerequisite: ESXi and vCenter Server must be upgraded to 7.0.

For hosts prepared with a transport node profile:
1. Put the host in maintenance mode.
VMs are evacuated.
2. Create Cluster2.
3. Move host1 to Cluster2.
NSX is uninstalled automatically; this is new transport node profile (TNP) behavior.
4. Prepare NSX with the TNP (see the API sketch after this list).
5. Migrate VMs manually with vSphere vMotion.
6. Move host2 to Cluster2.
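Step 4 can be expressed against the Manager API as well. A minimal sketch, assuming POST /api/v1/transport-node-collections attaches a transport node profile to a vCenter cluster (compute collection); all IDs and credentials are placeholders:

import requests

NSX_MANAGER = "https://nsx-mgr.example.com"  # hypothetical NSX Manager FQDN
AUTH = ("admin", "password")                 # placeholder credentials

attachment = {
    "resource_type": "TransportNodeCollection",
    "display_name": "cluster2-tnp-attachment",
    "compute_collection_id": "<cluster2-compute-collection-id>",  # placeholder
    "transport_node_profile_id": "<tnp-id>",                      # placeholder
}

resp = requests.post(
    f"{NSX_MANAGER}/api/v1/transport-node-collections",
    json=attachment,
    auth=AUTH,
    verify=False,  # lab-only: skip certificate validation
)
resp.raise_for_status()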


NSX on VDS Summary
Simple install based on VDS:
• Less disruptive
• vCenter Server admin control
• Simpler VLAN micro-segmentation use case
Third-party scripts that are not retrofitted for opaque networks work with NSX-T Data Center.
Code management is more efficient, with faster bug fixes and feature development.
Changes:
• Consumption model change.
• vCenter Server and VDS are required.
• Some network objects belong to the vCenter Server admin if the NSX admin includes the vCenter
Server role.


Summary of Design Decisions: Virtual Infrastructure IP Pool

Design Decision ID | Design Decision Description
HY1303-VI-SDN-014  | Create an uplink profile with the failover order teaming policy with two active uplinks for KVM transport nodes.
HY1303-VI-SDN-015  | Create an uplink profile with the failover order teaming policy with one active uplink and no standby uplinks for edge VM overlay traffic.
HY1303-VI-SDN-016  | Create two uplink profiles with the failover order teaming policy with one active uplink and no standby uplinks for edge VM uplink traffic.
HY1303-VI-SDN-017  | Use a static IP to configure the NSX Edge transport nodes' TEPs in the overlay N-VDS settings.
HY1303-VI-SDN-018  | Create an IP pool for the ESXi transport nodes' TEP IP assignment.
HY1303-VI-SDN-019  | Create an IP pool for the KVM transport nodes' TEP IP assignment.


Summary of Design Decisions: Virtual Infrastructure Segment

Design Decision ID | Design Decision Description
HY1303-VI-SDN-007  | Apply DRS anti-affinity rules to the NSX Manager nodes.
HY1303-VI-SDN-008  | Apply DRS anti-affinity rules to the virtual machines of the NSX Edge cluster.
HY1303-VI-SDN-009  | Create a single transport zone for all overlay traffic.
HY1303-VI-SDN-010  | Create two transport zones for edge virtual machine uplinks.
HY1303-VI-SDN-011  | Deploy all workloads on NSX-T Data Center segments (logical switches).
HY1303-VI-SDN-012  | Use hierarchical two-tier replication on all segments.
HY1303-VI-SDN-013  | Create an uplink profile with the load balance source teaming policy with two active uplinks for ESXi transport nodes.


Lab 4: Overlay Layer 2 Design
Review and document the customer’s proposed overlay layer 2 design:
1. Document the Customer Overlay Layer 2 Design


Review of Learner Objectives
• Identify the steps to manually upgrade from N-VDS to VDS
• Identify the design considerations for the virtual infrastructure segment and transport network
• Describe choices made in the segment profile design


Key Points
• L2 switches maintain a MAC table, which is a mapping of source MAC addresses of frames to the
port where those frames were received.
• The overlay model maps source MAC addresses of frames to the TEP where those frames were
received.
• NSX Manager builds a global MAC to TEP table for each logical switch.
• NSX Manager builds an ARP table, which contains mappings of MAC to IP addresses.
• NSX-T Data Center uses Geneve for its overlay model, which extends the functions of the VXLAN
standard.
Questions?
