TECH
FIELD DAY
Hugo Riveros
SE Manager – Multi Country Area
Data Center Networking Introduction
What makes a data center network?
– Local Area Network (LAN) / campus networks
– Same geographical location: building, campus, etc.
– Wired and wireless networks connect users, IP phones, and wireless APs
– Typical features required: PoE, 802.1X, etc.
– Data center networks
– Same geographical location
– Connects servers/VMs/containers, applications, storage, firewalls/load balancers, etc. – wired connectivity
– Stable, low-latency fabrics with high availability, high performance and throughput, density, and scale
– Build revenue for the business (e-commerce)!
– Typical features required: VXLAN/EVPN, BGP, OSPF, DCB, etc.
– Focus on improving East-West traffic between racks
What is a network fabric?
A marketing term for a network that can:
– Optimally interconnect 1,000, 10,000, 100,000 or more endpoints (servers, storage)
– Provide redundancy when any node or any link fails
– Failure will happen – it's just a question of time
– Minimize the number of hops to reach any other peer in the fabric
– Latency impact
– East/West (E/W) traffic vs. North/South (N/S) traffic:
– E/W traffic = servers to servers inside the DC
– N/S traffic = clients to servers entering / servers to clients leaving the DC
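The fabric properties above follow directly from the two-tier Clos shape: every leaf connects to every spine, so leaf-to-leaf traffic is always exactly two hops, and each spine contributes one equal-cost path. A minimal sketch, with illustrative spine/leaf counts (not from any specific product):

```python
# Sketch: basic properties of a two-tier spine-leaf fabric.
# Counts below are illustrative examples, not datasheet values.

def fabric_properties(num_spines: int, num_leafs: int) -> dict:
    """Derive simple facts about a two-tier (leaf-spine) Clos fabric."""
    return {
        # Every leaf connects to every spine, so leaf-to-leaf (E/W)
        # traffic always travels leaf -> spine -> leaf: exactly 2 hops.
        "max_hops_leaf_to_leaf": 2,
        # Each spine provides one equal-cost path between any leaf pair.
        "ecmp_paths_between_leafs": num_spines,
        # Losing one spine removes 1/num_spines of fabric capacity,
        # but connectivity survives -- redundancy by design.
        "capacity_lost_on_spine_failure": 1 / num_spines,
        # Total fabric links (each leaf has one uplink to each spine).
        "leaf_uplinks_total": num_spines * num_leafs,
    }

props = fabric_properties(num_spines=4, num_leafs=32)
print(props["ecmp_paths_between_leafs"])        # 4
print(props["capacity_lost_on_spine_failure"])  # 0.25
```

This is why "failure will happen" is survivable: a spine loss degrades bandwidth proportionally instead of partitioning the fabric.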
Data Center Networking
Architectures
Enterprise Data Center Network Architecture Evolution
Four stages, in order of increasing scalability, agility, and orchestration:
– Traditional 3-layer: STP
– Spine & Leaf L2 ECMP: IRF/VSX MLAG, TRILL/SPB (L2 fabric)
– Spine & Leaf L3 ECMP (L3 fabric)
– Optimized L2/L3 fabric: VXLAN*, EVPN & network virtualization**, with vSwitches/VMs attached at the leafs
Underlay/overlay pairings progress from an LACP underlay with SW VTEPs, through an L2 underlay with SW & HW VTEPs, to an L3 underlay with HW VTEPs.
* VXLAN connections are created automatically, on demand, between leaf switches/vSwitches.
** Network virtualization – VMware NSX, OpenStack, HPE Distributed Cloud Networking/Nuage
Does Every DCN Solution = Spine/Leaf? Multi-Tier Data Center
Topologies range from a 1-tier data center (a single L3 core) through the multi-tier data center (core / aggregation / access) to spine/leaf fabrics.
– Spine = Multiple individual backbone devices that provide redundant connectivity for each leaf
– Leaf = Switch which connects to every spine switch (can be a VTEP, but not mandatory); provides entry into equidistant networks with no constraints on workload placement
– Core = A single device (logical or physical) that provides centralized connectivity to other devices (servers/switches)
– Aggregation = Aggregates multiple access switches – usually performs L2/L3 services
– Access = Typically connects into a Core or Aggregation device – usually running L2 services
– ToR = Umbrella term, referring to a switch located at the Top-of-Rack
EoR / MoR (End of Row / Middle of Row)
– EoR/MoR refers to the physical location of switches: the switches serving a row are placed in one rack
– Server-to-switch cables stretch from rack to rack; usually requires less equipment than a ToR deployment
– Usually lower latency for intra-row traffic because of fewer hops
– Weaker problem isolation and lower scalability
– EoR/MoR switches can act as spines which connect to ToRs within the same row
– Can be considered as one POD; replicate the design to scale to multiple PODs
[Diagram: rows of ToR switches with EoR and MoR switch racks uplinked to the data center spine/core/WAN edge]
Consistent Leaf/ToR Designs
Under a common set of spines, leafs take consistent roles per rack: Server Leafs (vSwitches/VMs and bare-metal OS servers), IP Storage Leafs*, Service Leafs**, and Border Leafs.
* When servers and IP storage are located in the same rack, leaf functions are delivered from a single, DC-optimized switch pair per rack.
** Separate Service Leafs are not needed when the network services are distributed among the server/IP storage racks. When the network services are centralized, the Border Leafs can also serve as Service Leafs, depending on the scale and failure-zone design within the DC.
Data Center Fabric (Spine-Leaf)
Scaling up the leafs
Question: What determines the number of leafs supported in a spine-leaf topology?
Answer: The number of physical ports supported in a single spine switch.
– Every leaf needs to connect to every spine
– Recommendation: do not use VSX / IRF on the spines
[Diagram: spines with 32 x 100G ports connecting Leaf 1 through Leaf 64 over 40/100G uplinks; leaf pairs joined by ISLs for MLAG]
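The rule above (leaf count bounded by spine port density) reduces to a one-line calculation; the 32 x 100G spine figure is from the slide, while the two-uplinks-per-spine variant is an illustrative assumption:

```python
# Sketch: maximum leaf count in a spine-leaf fabric is bounded by the
# port count of a single spine switch, because every leaf needs at
# least one port on *every* spine.

def max_leafs(spine_ports: int, uplinks_per_leaf_per_spine: int = 1) -> int:
    """Each leaf consumes uplinks_per_leaf_per_spine ports on each spine."""
    return spine_ports // uplinks_per_leaf_per_spine

# A 32 x 100G spine supports at most 32 leafs with one uplink per spine:
print(max_leafs(spine_ports=32))  # 32
# Doubling the uplinks from each leaf to each spine halves the leaf count:
print(max_leafs(spine_ports=32, uplinks_per_leaf_per_spine=2))  # 16
```

Note that adding spines raises fabric bandwidth and path count, but not the leaf ceiling – only a denser spine switch (or multiple PODs) does that.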
Data Center Fabric (Spine-Leaf)
Understanding oversubscription
Example: four spines (Spine-1 … Spine-4), each leaf with four uplinks at 40G or 100G (4x40G = 160G or 4x100G = 400G of total uplink per leaf).
– Leaf with 48 x 10G ports = 480G of downlink:
– 40G uplinks = 3:1 oversubscription (480G/160G = 3)
– 100G uplinks = 1.2:1 oversubscription (480G/400G = 1.2)
– Leaf with 48 x 25G ports = 1,200G of downlink:
– 40G uplinks = 7.5:1 oversubscription (1,200G/160G = 7.5)
– 100G uplinks = 3:1 oversubscription (1,200G/400G = 3)
• Scale of the fabric is defined by the port density of the spine switch
• Fabric bandwidth can be increased by adding more spine switches
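The oversubscription arithmetic is simply server-facing (downlink) bandwidth divided by spine-facing (uplink) bandwidth per leaf. A small helper reproducing the four cases from the slide:

```python
# Oversubscription ratio per leaf = total downlink bandwidth
# (server-facing ports) / total uplink bandwidth (spine-facing ports).

def oversubscription(downlink_ports, downlink_gbps, uplink_ports, uplink_gbps):
    return (downlink_ports * downlink_gbps) / (uplink_ports * uplink_gbps)

# Leaf with 48 x 10G downlinks and 4 uplinks:
print(oversubscription(48, 10, 4, 40))   # 3.0  -> 3:1 with 40G uplinks
print(oversubscription(48, 10, 4, 100))  # 1.2  -> 1.2:1 with 100G uplinks

# Leaf with 48 x 25G downlinks and 4 uplinks:
print(oversubscription(48, 25, 4, 40))   # 7.5  -> 7.5:1 with 40G uplinks
print(oversubscription(48, 25, 4, 100))  # 3.0  -> 3:1 with 100G uplinks
```

A ratio of 1.0 would be a fully non-blocking leaf; adding spine switches (more uplinks per leaf) drives the ratio down.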
Scaling up, up, and beyond
Super-spines
– Some customers need to scale beyond the port density supported by a single spine/leaf fabric
– Recommendation: multiple spine/leaf L3 fabrics/PODs, interconnected at L3 by a super-spine layer
Aruba Data Center Networks
Benefits of modern DCs
– A stable, low-latency fabric with high availability / performance / density / scalability
– N/S campus/client traffic connectivity achieved via border switches (service leafs) / routers
– L2 extension between racks: essentially driven by VM mobility
– VXLAN is the de-facto solution adopted by many overlay vendors
– Scalable: up to 16M Virtual Network Identifiers (VNIs) to support multi-tenancy
– Oversubscription
– Spine-and-leaf for fewer layers and reduced hop count / latency / oversubscription levels
– Designed for E/W application traffic performance (80% of traffic is E/W)
– MAC address explosion
– The DC fabric becomes a big L3 domain (no STP) with L2 processing (encapsulation / de-capsulation) at the edge
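The 16M multi-tenancy figure comes from the width of the VXLAN VNI field (24 bits, per RFC 7348), compared with the 12-bit 802.1Q VLAN ID:

```python
# Where "up to 16M VNIs" comes from: VXLAN's Virtual Network Identifier
# is a 24-bit field (RFC 7348), vs. the 12-bit 802.1Q VLAN ID.

vlan_ids = 2 ** 12   # 4096 values (4094 usable VLANs in practice)
vnis = 2 ** 24       # 16,777,216 virtual network identifiers

print(vnis)              # 16777216
print(vnis // vlan_ids)  # 4096x more segments than 802.1Q VLANs
```

This is why VXLAN-based fabrics can carve out far more tenant segments than a VLAN-only L2 design.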
Data Center Networking Portfolio
Management and Orchestration
Core, Aggregation and Data Center
– IMC – advanced wired network management
– AirWave – unified multi-vendor wired + wireless network management
– NAE – flexible troubleshooting and automated root-cause analytics to simplify and enhance visibility and control
– NetEdit – scalable, simple CLI-based orchestration
Addressable Market for Aruba Switching will Double from CY18
[Chart: TAM ($B) by segment and year, 2017-2022 – Campus Access (2015+), Campus Core & Agg (2017+), DC in the Enterprise (2019+), DC for Tier 2 Cloud (2020+), and Telco. The total addressable TAM grows from $13.0B in CY18 to $24.9B in 2022 as Aruba's portfolio breadth and strength expand from campus into the data center.]
Note: Excludes hyper-scale data center TAM
Source: Dell'Oro (Worldwide Datacenter Ethernet Switching Revenue 2016-22); HPE Market Model
Positioning Summary Map
Local Data Center
• Differentiation with campus synergy, consistency and homogeneity, along with operations automation and a high level of analytics
• Targeting on-premise DC simple-to-moderate scale architectures with a focus on ease of use and integration
– Lead with Aruba AOS-CX switching*: long-term investment in features, portfolio additions, and innovation velocity
– Leverage the extensive FlexFabric solution: a long-term FlexFabric portfolio addressing demanding deployment scenarios
(*) Check to see if the customer use case is satisfied by Aruba
High Level Selection Considerations
– FlexFabric: traditional requirements; software feature depth; unique requirements & integrations
– Aruba: consistency with campus; analytics, automation, and simplicity; interest in CX innovations
Check For Aruba Fit: Customer Qualification Overview
– Foundational questions: Similar to campus positioning – existing Aruba customer? Interest in CX innovations? Now versus later?
– Design questions: Can we support the topology? Can we support the scale?
– Other considerations: No blockers – third-party integrations, converged infra, one-offs, etc.
Work with your CSE to do a second-level validation to check fit.
Please contact us for further questions/assistance: ask-arubaDC-team@[Link]
FY19 DC Portfolio: FlexFabric Options
Spine options:
– 5950 32 x 100G – compact, cost-effective 100GbE (small core/spine)
– 12900E Series (4, 8, 16 slots; 12901E and 12902E Series) – highest density, 25/100GbE flexibility and features
Leaf options:
– 5710 Series 1/10GbE with 40GbE uplinks – 1/10GbE ToR at an attractive price/performance point; ToR / server iLO
– 594x fixed/modular 10/40GbE and 5950 fixed/modular 10/25/50/100GbE – 1-100GbE fixed and modular ToR flexibility
– 5980 advanced 10/100GbE – storage/HPC ToR
FlexFabric Leaf Options
– HPE FlexFabric 5710 series: 1/10 GbE downlinks x 40/100G uplinks; low-latency, high-availability connectivity; perfect for out-of-band management (iLO) connectivity
– HPE FlexFabric 5940 and 5945 series: 10 or 25 GbE downlinks x 40/100G uplinks; VXLAN support for network virtualization; low-latency, high-availability connectivity; enhanced support for telemetry
– HPE FlexFabric 5980 series: 1/10 GbE downlinks x 100G uplinks; full data-path error detection; VXLAN support for network virtualization; deep buffers to ensure network connectivity; flexible port configurations
ARUBA CX
SWITCHING
The next generation of switching
CUSTOMER NETWORKING CHALLENGES IN THE EDGE-CLOUD ERA
– Fragmented operations
– Constrained control and visibility
– Legacy networks can't keep pace
How can IT deliver the digital experiences of the future?
ARUBA CX SWITCHING
NEXT-GEN, CLOUD-NATIVE SWITCHING DESIGNED FOR THE NETWORK OPERATOR
– AOS-CX: a cloud-native (Gartner) operating system built on a microservices architecture, with a state database, a time-series database, 100% REST APIs, and the Network Analytics Engine; accessible from the system, NMS, or cloud
– Aruba NetEdit 2.0 and Network Analytics Engine: distributed analytics
– Aruba CX Switches: one family from edge access (CX Access) to the data center (CX Core)
ARUBA CX SWITCHING MOMENTUM SINCE THE JUNE 2017 LAUNCH
80+ network cores a month, across finance, retail, hospitality, government, healthcare, education, manufacturing, and entertainment
ONE PORTFOLIO FOR EDGE ACCESS TO DATA CENTER
STREAMLINING NETWORK OPERATIONS
CONSISTENT ARCHITECTURE AND OPERATING MODEL FROM EDGE ACCESS TO DATA CENTER
– Other vendor approaches: a different architecture per domain (store, HQ campus, data center) brings multiple-OS management headaches, software licensing constraints, and cross-platform complexity
– Aruba CX approach: one architecture across store, HQ campus, and data center brings simplified domain design, no software licensing, DC-class performance, and improved IT efficiency
ARUBA CX SWITCHING FOR THE ENTERPRISE
NEW PLATFORMS COMPLETE END-TO-END SWITCHING PORTFOLIO
AOS-CX spans campus (access, aggregation, core) and data center (spine, leaf):
– CX 8400: modular; deep buffers, large tables, carrier-class HA
– CX 6400: modular; high-density access, core and agg
– CX 83xx: top of rack, small core, campus agg
– CX 6300: stackable; access and agg, diverse closet scale
One Operating System. One ASIC Architecture. One Operating Model.
Non-stop Core and DC with Virtual Switching Extension (VSX)
– Aruba CX 8400 Series: 1/10/40 and 100G; 19.2T capacity, 7.2 Bpps throughput
– Aruba CX 6400 Series: 1/10/25GbE and 40G; 24T capacity, 18 Bpps throughput
– Aruba CX 8320/8325 Series: 1/10/25GbE and 40/100G; 2.5T/6.4T capacity; use cases: campus agg/core, leaf and spine
CX 6300 for access / aggregation
Flexible, stackable switches built for the future
7 modular switches, 4 integrated-power switches, 1 operating model
– Future ready: 1/10G to 25/50G uplinks for scale and investment protection
– Flexible growth: VSF stacking for ease of management and collapsed architectures
– Built for Wi-Fi 6: Smart Rate on all ports and 60W always-on PoE
880G capacity, 10-member stacking, 2880W for 60W PoE
ARCHITECTURE MATTERS: ARUBA GEN7 ASIC
– Faster innovation: 30+ years, 7th-generation ASIC
– Non-blocking performance
– Flexible programmability
CLOUD-NATIVE OPERATING SYSTEM
AOS-CX: accessible from the system, NMS, or cloud
Built on cloud-native principles:
– Modular OS with full programmability: Aruba Network Analytics Engine, time-series database, 100% REST APIs, state database
– Microservices architecture
– Resiliency by design
– Elastic & scalable
AOS-CX 10.4 KEY FEATURES
SHRINKING MAINTENANCE WINDOWS, SECURITY, WI-FI 6
– VSX Live Upgrades, extended to access: dual control and data planes with improved performance bring live upgrades to modular access; always-on PoE enables APs, healthcare devices, sensors, and IoT devices to keep power during upgrades
– Dynamic Segmentation, extended to access and AOS-CX: secure, unified access segmentation that scales across wired and wireless for users (corp, BYOD, guest) and IoT, enabled by policy-based automation
– VXLAN with MP-BGP EVPN, extended to access: an industry-standard approach that provides a consistent architecture across campus and data center
5th major AOS-CX release, with 140+ new features
DISTRIBUTED ANALYTICS AND AUTOMATION
TURNING NETWORK TELEMETRY INTO ACTIONABLE INSIGHTS
INTELLIGENT PRE-PROCESSING WITH ARUBA NETWORK ANALYTICS ENGINE
– Other monitoring approaches:
– Probes and show commands: a needle in the haystack; difficult to recreate and/or identify issues
– Telemetry streaming: latency and large, unfiltered data sets; delays in data processing and analysis
– Third-party monitoring tools: manual correlation and limited actionable insights; resource-intensive with longer MTTR
– Aruba CX approach: NAE integrated everywhere in the network (CX core and access, with Aruba NetEdit)
– Real-time, network-wide visibility with actionable data
– Automated monitoring for rapid detection of issues
– A 24/7 network technician built into every switch
ARUBA NETWORK ANALYTICS ENGINE
POWERING DISTRIBUTED ANALYTICS, ONE SWITCH AT A TIME
Wide monitoring capabilities on the AOS-CX switch: configuration • protocol and system state • ASIC counters • ACLs • baseline monitoring • traffic monitoring
– Real-time network visibility
– Granular data archiving
– Realistic model of network behavior
– Intelligence and automation
– Full power of Python
– Parameters for customization
– Variables for persistent policy state
– Sandbox isolation
– Low system overhead
NAE agents (built-in, ASE-published, or custom) run in agent containers, fed by the config & state database and the time-series database via the REST API.
Flexible actions on alert level: CLI command execution • CLI command output capture • configuration checkpoint diff capture • syslog generation • script function callback
USE CASE FOCUS: BRINGS ANALYTICS TO LIFE
– IT workflow integrations: ITSM-integrated change management with ServiceNow / TopDesk; proactive email notifications for critical events and errors; automatic config archiving with TFTP on config updates
– Network health reporting: transceiver diagnostics for health and failure root cause; VSX health monitor to highlight VSX stability; monitor and change route when a failure is detected
– Proactive monitoring: Predictive Fault Finder for general network health; VoIP monitor based on IPSLA transactions; MAC and ARP count analytics to ensure proper device load
INTEGRATIONS WITH NETEDIT 2.0 BRING ENHANCED FOCUS ON HEALTH
NAE scripts are published on the Aruba Solutions Exchange (ASE)
NAE "DEMO"
NAE "DEMO – Telemetry": streaming telemetry with real-time notifications to WhatsApp
[Link] Community/Streaming-Telemetry-Real-time-notifications-to-Whatsapp-when-a/td-p/551384
ERROR-FREE NETWORK CONFIGURATION
SOME LAT CUSTOMERS WITH CX
AUTOMATE NETWORK LIFECYCLE
Consistency, Conformance, Deployment, Change Validation
– Search / Edit: meet modern network demands, automate
– Validate: highlight inconsistencies and conformance violations to ensure business standards are met
– Deploy / Audit: never blow through a change window again – change impact, commit / rollback
– Visibility: network health indicators aligned to your concerns
– Troubleshoot (CX Mobile App): enable technicians to cleanly deploy new devices
THANK
YOU