Software Defined
Networking and Virtual
Networking
José Fortes
Center for Cloud and Autonomic Computing
Advanced Computing and Information Systems Lab
Universitas Indonesia, Jakarta, June 3, 2013
Advanced Computing and Information Systems laboratory
Outline
Basic motivation
OpenFlow basics
Hands-on demo
From SDN to virtual networking at large
Introduction to ViNe
Hands-on demo of ViNe on FutureGrid
Conclusions
Resources
http://www.openflow.org/wk/index.php/OpenFlow_Tutorial
• Use tools pre-packaged in a VM (mininet)
• Modify an OpenFlow hub into a learning switch
• Many controller/platform options
http://trema-tutorial.heroku.com/
• OpenFlow controller development using trema
Connecting VMs through ViNe
(https://portal.futuregrid.org/contrib/simple-vine-tutorial)
What is Software-Defined Networking?
Broad Definition
• Open Network Foundation: “an architecture that
enables direct programmability of networks”
• Internet Engineering Task Force: “an approach that
enables applications to converse with and manipulate
the control software of network devices and resources”
– Internet Draft, Sep. 2011 by T. Nadeau
OpenFlow
• An approach to SDN with physical separation between
control and data planes
• Provides open interfaces (APIs)
• SDN is not OpenFlow but OpenFlow is a step towards
SDN
Original need for OpenFlow
Network infrastructure “ossification”
• Large base of devices and protocols
• Networking experiments cannot compete with
production traffic
• No practical way to test new network protocols in
realistic settings
Closed systems
• Vendor lock-in
• Proprietary management interfaces – lack of
standard or open interfaces
• Hard to establish collaborations
Networking Planes
Data Plane
• Process messages/packets/frames according
to local forwarding state
• Implemented/optimized in hardware
Control Plane
• Adjust forwarding state
• Distributed protocols/algorithms
• Manual configuration and scripting
SDN advocates full separation of control
and data planes
OpenFlow Architecture
Separate control plane and data plane
• Run control plane software on general purpose hardware
• Programmable data plane
An OpenFlow Switch
[Diagram: a controller communicates with the switch over the OpenFlow protocol via a secure channel and can add, update, and delete flow table entries. Packets enter at an ingress port, traverse a pipeline of flow tables (matching a flow table entry or causing a table miss; a group table is also available), and exit at an output port.]
OpenFlow
Every packet that comes through an OpenFlow
port is processed through the flow pipeline
Processing may involve multiple tables
• The rules of processing for each table are programmed
by the controller through OpenFlow API
• If no matching entry is found, the packet is forwarded to the
controller for processing
Main components of a flow entry in the table
• Match field (e.g. Ethernet MAC src, IPv4 dest)
• Priority – determines which match applies
• Instructions – update action set (applied at output)
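The three components above can be modeled concretely. A minimal sketch in plain Python (an illustration of the matching semantics, not the OpenFlow wire protocol; field names are indicative):

```python
# Minimal model of an OpenFlow flow table: each entry carries match
# fields, a priority, and instructions. The switch applies the
# highest-priority matching entry; a miss goes to the controller.

def lookup(flow_table, packet):
    """Return instructions of the highest-priority matching entry,
    or None (table miss -> packet goes to the controller)."""
    best = None
    for entry in flow_table:
        # An entry matches if every specified field equals the
        # packet's value; unspecified fields act as wildcards.
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            if best is None or entry["priority"] > best["priority"]:
                best = entry
    return best["instructions"] if best else None

flow_table = [
    {"match": {"eth_dst": "aa:bb:cc:dd:ee:ff"},
     "priority": 10, "instructions": ["output:2"]},
    {"match": {"ipv4_dst": "10.0.0.5"},
     "priority": 20, "instructions": ["output:3"]},
]

pkt = {"eth_dst": "aa:bb:cc:dd:ee:ff", "ipv4_dst": "10.0.0.5"}
print(lookup(flow_table, pkt))   # both entries match; higher priority wins -> ['output:3']
print(lookup(flow_table, {"eth_dst": "00:00:00:00:00:01"}))  # miss -> None
```

Note how priority disambiguates when a packet matches several entries, exactly the role the slide assigns to the priority field.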
OpenFlow
Provides basic primitives for virtualization
• Packets are intercepted
• High-throughput datapath: flow tables
• Packets not matched in flow table sent to controller
• Slower control path
• Can use event to program flow table entries
Supports layer-2 matching and actions
• E.g. VLAN behavior
Implementations in hardware (e.g. Ethernet
switches) and software (e.g. VMMs)
Network “slices”
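The control loop above (table miss → controller event → program a flow entry) is what the tutorial's hub-to-learning-switch exercise implements. A plain-Python sketch of that logic (class and method names are illustrative, not a real controller API):

```python
# Sketch of the slow-path/fast-path split: unmatched packets trigger a
# controller callback, which learns MAC->port mappings and installs
# flow entries so later packets stay on the fast path.

class LearningSwitchController:
    def __init__(self):
        self.mac_to_port = {}   # learned forwarding state
        self.flow_table = {}    # dst MAC -> output port ("fast path")

    def packet_in(self, in_port, src_mac, dst_mac):
        """Called by the switch on a table miss."""
        self.mac_to_port[src_mac] = in_port      # learn the source
        out = self.mac_to_port.get(dst_mac)
        if out is None:
            return "flood"                       # unknown destination: flood
        self.flow_table[dst_mac] = out           # program a flow entry
        return f"output:{out}"

ctrl = LearningSwitchController()
print(ctrl.packet_in(1, "mac-A", "mac-B"))  # dst unknown -> flood
print(ctrl.packet_in(2, "mac-B", "mac-A"))  # dst learned -> output:1
print(ctrl.flow_table)                      # {'mac-A': 1}
```

Once an entry is installed, subsequent packets to that destination never reach the controller: this is the high-throughput datapath/slower control path distinction on this slide.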
OpenFlow
[Diagram: controllers (Ctrl) program OpenFlow switches both in VMMs on physical hosts and in physical switches, with tagging at the switches.]
OpenFlow Flow Table Entry
A flow table entry has three parts: Rule, Action, and Stats (packet and byte counters).
Rule (match) fields: Switch Port, MAC src, MAC dst, Eth type, VLAN ID, IP Src, IP Dst, IP Prot, TCP sport, TCP dport
Actions:
1. Forward packet to port(s)
2. Encapsulate and forward to controller
3. Drop packet
4. Send to normal processing pipeline
Source: Nick McKeown, “Why Can't I Innovate in My Wiring Closet?”, MIT CSAIL Colloquium, April 2008
Examples
Switching
Flow Switching
Firewall
Routing
VLAN Switching
From Brandon Heller’s Tutorial at http://www.opennetsummit.org/archives-april2012.html
Hands-on example
Will use a network emulator called Mininet running in a VirtualBox virtual machine
Will create a simple 3-host, 1-switch topology
Then will observe that no communication takes place when flows are not programmed into the OpenFlow switch
Then will program flows and observe communication take place as a result
Nick McKeown’s perspective
Where is the functionality?
• From closed boxes, distributed protocols
Software defined networking
Where is the functionality?
• From closed boxes, distributed protocols to open boxes
Software defined network
Scott Shenker’s perspective
SDN uses abstractions to program
networks
• Forwarding Abstraction
• State Distribution Abstraction
• Network Operating System (NOS)
NOS plus Forwarding Abstraction = SDN v1
Add a specification abstraction (SDN v2)
• Implemented through “Nypervisor”
Next two slides from Scott Shenker’s talk
“The Future of Networking, and the Past of
Protocols”
Software-Defined Networking (v1)
[Diagram: in current networks, control lives inside each box as distributed protocols; in SDN v1, a control program runs on a global network view provided by a network operating system, which controls the data plane via the forwarding interface.]
Moving from SDNv1 to SDNv2
[Diagram: in SDN v2, the control program operates on an abstract network view; a “Nypervisor” maps that abstraction onto the global network view maintained by the network operating system.]
Many Projects/Implementations
Software Switch
• Open vSwitch
Network Operating Systems
• NOX, Trema, FloodLight, Maestro
Hypervisor
• FlowVisor
Routing
• RouteFlow
Many others
SDN and Cloud Computing
Cloud Computing
• Dynamic environment: resources (physical and virtual), users, and
applications frequently come and go
• Large scale infrastructure
• Need efficient mechanisms to change how networks operate
Without SDN
• Rely on vendor-provided and in-house software to manage the network
• Manually generated or semi-automatically generated configurations
• Only cloud/network administrators can interact with network equipment
With SDN
• Network “programming” instead of network configuration
• Potentially open to all users/applications
ViNe Architecture
Dedicated resources in each broadcast domain (LAN) for VN processing – ViNe Routers (VRs)
• No VN software needed on nodes (platform independence)
• VNs can be managed by controlling/reconfiguring VRs
• VRs transparently address connectivity problems for nodes
• VR = computer running ViNe software
• Easy deployment
• Proven mechanisms can be incorporated in physical routers and firewalls
• In OpenFlow-enabled networks, flows can be directed to VRs for L3 processing
Connectivity Recovery in ViNe
[Diagram: a limited-VR opens a connection across the Internet to a queue-VR; peers send messages to the queue-VR, and the limited-VR retrieves them.]
Network virtualization processing only performed by VRs
Firewall traversal only needed for inter-VR communication
ViNe firewall traversal mechanism:
• VRs with connectivity limitations (limited-VRs) initiate a connection (TCP or UDP) with VRs without limitations (queue-VRs)
• Messages destined to limited-VRs are sent to the corresponding queue-VRs
• Long-lived connections are possible between a limited-VR and a queue-VR
• Generally applicable
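The traversal mechanism can be sketched as a simple in-memory model: the limited-VR registers over its own outbound connection, other VRs deposit messages at the queue-VR, and the limited-VR pulls them back. Class and method names here are illustrative, not ViNe's actual API:

```python
from collections import deque

# In-memory model of ViNe firewall traversal: a limited-VR (behind a
# firewall/NAT) initiates an outbound connection to a queue-VR; other
# VRs deposit messages for it there, and the limited-VR retrieves
# them over its own long-lived connection.

class QueueVR:
    def __init__(self):
        self.queues = {}              # limited-VR id -> pending messages

    def register(self, limited_vr):   # limited-VR "opens connection"
        self.queues.setdefault(limited_vr, deque())

    def deposit(self, limited_vr, message):
        self.queues[limited_vr].append(message)

    def retrieve(self, limited_vr):   # pulled over the outbound channel
        q = self.queues[limited_vr]
        return q.popleft() if q else None

qvr = QueueVR()
qvr.register("VR-limited")              # outbound connection from limited-VR
qvr.deposit("VR-limited", b"packet-1")  # sender routes via the queue-VR
print(qvr.retrieve("VR-limited"))       # b'packet-1'
print(qvr.retrieve("VR-limited"))       # None (queue drained)
```

The key property: the firewall only ever sees an outbound connection from the limited-VR, which is why the mechanism is generally applicable.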
ViNe Routing
[Diagram: VR packet processing is implemented in Java, in user space, using Linux Netfilter and Libnet; a VN header is inserted between the TCP/IP header and the protocol data. Processing time: 12 µs/message. Compute nodes need no additional software.]
Local Network Description Table (LNDT)
• Describes the VN membership of a node
Global Network Description Table (GNDT)
• Describes sub-networks for which a VR is responsible
ViNe Routing
Local Network Description Table (LNDT)
• Describes the VN membership of a node
Global Network Description Table (GNDT)
• Describes the sub-networks that a VR is responsible for
Suppose that a VR with the following routing tables received a packet from 172.16.0.10 destined to 172.16.10.90:

LNDT
Host          ViNe ID
172.16.0.10   1
172.16.0.11   2

GNDT – ViNe ID 1
Network/Mask     Destination
172.16.0.0/24    VR-a
172.16.10.0/24   VR-b

GNDT – ViNe ID 2
Network/Mask     Destination
172.16.0.0/24    VR-a
172.16.20.0/24   VR-c
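The two-stage lookup this example describes can be sketched directly: the LNDT maps the source host to its ViNe ID, then that overlay's GNDT maps the destination to the responsible VR. The tables below reproduce the example on this slide; the function name is illustrative:

```python
import ipaddress

# LNDT: which virtual network (ViNe ID) does a host belong to?
LNDT = {"172.16.0.10": 1, "172.16.0.11": 2}

# GNDT per ViNe ID: which VR is responsible for each sub-network?
GNDT = {
    1: {"172.16.0.0/24": "VR-a", "172.16.10.0/24": "VR-b"},
    2: {"172.16.0.0/24": "VR-a", "172.16.20.0/24": "VR-c"},
}

def next_vr(src, dst):
    vine_id = LNDT[src]                        # which overlay is src on?
    for network, vr in GNDT[vine_id].items():
        if ipaddress.ip_address(dst) in ipaddress.ip_network(network):
            return vr
    return None                                # no route in this overlay

print(next_vr("172.16.0.10", "172.16.10.90"))  # VR-b
print(next_vr("172.16.0.11", "172.16.10.90"))  # None
```

The second call also illustrates overlay isolation: the same destination is unreachable from a host on ViNe ID 2, whose GNDT has no entry covering 172.16.10.0/24.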
ViNe Routing
Example: VH1 sends a packet to VH2
1. The packet with header VH1→VH2 is directed to VRB using L2 communication (MAC VH1→MAC VRB)
2. VRB looks up its routing table; the table indicates that the packet should be forwarded to “A”
3. The ViNe packet is encapsulated with an additional header for transmission in physical space: B→A:(VH1→VH2)
4. At the destination VR, the ViNe header is stripped off for final delivery; the original, unmodified packet VH1→VH2 is delivered
[Diagram: in virtual space, hosts VH1–VH4 sit in four ViNe domains served by routers VRA–VRD; in physical space, hosts H1–H4 sit on public networks A and D and private networks B and C, reached across the Internet through routers (R), NATs (N), and firewalls (F).]
ViNe Management Architecture
VR operating parameters changeable at run-time
• Overlay routing tables, buffer size, encryption on/off
• Autonomic approaches possible
• Java reflection to map commands to method invocations
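ViNe maps management commands onto method invocations via Java reflection; the same idea in Python uses `getattr`. A minimal sketch (the command names and parameters below are illustrative, not ViNe's actual command set):

```python
# Reflection-style dispatch: a command name arriving from the
# management channel is resolved to a method at run-time, so new
# reconfiguration commands need no dispatcher changes.

class VRConfig:
    def __init__(self):
        self.encryption = False
        self.buffer_size = 65536

    def set_encryption(self, enabled):
        self.encryption = enabled

    def set_buffer_size(self, size):
        self.buffer_size = int(size)

def dispatch(vr, command, *args):
    """Map a command name to a method call, as reflection would."""
    method = getattr(vr, command, None)
    if method is None:
        raise ValueError(f"unknown command: {command}")
    method(*args)

vr = VRConfig()
dispatch(vr, "set_encryption", True)       # run-time reconfiguration
dispatch(vr, "set_buffer_size", "131072")
print(vr.encryption, vr.buffer_size)       # True 131072
```

This is what makes run-time reconfiguration cheap to extend: adding a new tunable is just adding a method.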
[Diagram: requests flow to the ViNe Central Server, which issues configuration actions to the VRs.]
ViNe Central Server
• Oversees global VN management
• Maintains ViNe-related information
• Authentication/authorization based on Public Key Infrastructure
• Remotely issues commands to reconfigure VR operation
Typical IaaS Cloud Network
[Diagram: two physical servers, A and B, each host VMs (A1, A2 and B1, B2) with virtual NICs; VM traffic passes through per-host firewall, proxy ARP, and forwarding layers on its way to the physical network.]
Network Restrictions in Clouds
To address dangers of VM privileged users
• change IP and/or MAC addresses, configure NIC in
promiscuous mode, use raw sockets, attack network
(spoofing, proxy ARP, flooding, …)
Internal routing and NAT
• granted IP addresses (especially public) are not directly
configured inside VMs, and NAT techniques are used
(many intermediate nodes/hops in LAN communication)
Sandboxing (disables L2 communication)
• VMs are connected to host-only networks
• VM-to-VM communication is enabled by a combination of
NAT, routing and firewalling mechanisms
Packet filtering (beyond the usual; a VM cannot be a VR)
• only those VM packets containing valid addresses (IP and MAC assigned by the provider) are allowed
ViNe Routing
Revisiting the same example (VH1 sends a packet to VH2) in a cloud environment, the restrictions above break two steps:
• Problem: communication is blocked in clouds (the L2 step directing the packet to the VR)
• Problem: packet injection is blocked in clouds (delivery of the decapsulated packet)
[Diagram: the same ViNe routing figure as before, annotated with the blocked steps.]
Solution
Configure all nodes to work as VRs
• No need for host-to-VR L2 communication
• TCP or UDP based VR-to-VR communication circumvents
the source address check restriction
But…
• Network virtualization software required in all nodes
• Network virtualization overhead in inter- and intra-site
communication
• Complex configuration and operation
TinyViNe
• No need to implement complex network processing –
leave it to specialized resources (i.e., full-VRs)
• Keep it simple, lightweight, tiny
• Use IP addresses as assigned by providers
• Make it easy for end users to deploy
TinyViNe
TinyViNe software
• Enables host-to-VR communication on clouds
using UDP tunnels
• TinyVR – nodes running TinyViNe software
TinyVR processing
• Intercept packets destined to full-VRs
• Transmit the intercepted packets to a VR through UDP tunnels
• Decapsulate incoming messages arriving through UDP tunnels
• Deliver the packets
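The tunnelling steps above amount to framing an intercepted packet for a UDP datagram and unwrapping it on arrival. A sketch of that round trip (the 2-byte magic and 4-byte length header are an illustrative framing choice, not TinyViNe's actual format):

```python
import struct

MAGIC = b"TV"  # illustrative marker for tunnel datagrams

def encapsulate(packet: bytes) -> bytes:
    """Frame an intercepted packet for transmission over a UDP tunnel."""
    return MAGIC + struct.pack("!I", len(packet)) + packet

def decapsulate(datagram: bytes) -> bytes:
    """Recover the original packet from a tunnel datagram."""
    assert datagram[:2] == MAGIC, "not a tunnel datagram"
    (length,) = struct.unpack("!I", datagram[2:6])
    return datagram[6:6 + length]

original = b"\x45\x00 raw ip packet bytes"
roundtrip = decapsulate(encapsulate(original))
print(roundtrip == original)  # True
```

Because the tunnel carries the packet as ordinary UDP payload with the provider-assigned source address, it passes the cloud's address checks; the heavy network processing stays on the full-VRs.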
Multicloud/Intercloud/Sky Computing
[Diagram: a virtual cluster spans FutureGrid sites connected by TinyViNe and ViNe: UCSD (Intel Xeon Woodcrest, 2.33 GHz, 2.5 GB RAM, Linux 2.6.16), UC, PU, and UF (AMD Opteron 248, 2.2 GHz, 3.5 GB RAM, Linux 2.6.32, with a ViNe download server), plus a site in Melbourne, Australia connected to UF via ssh (Intel Xeon Prestonia, 2.4 GHz, 3.5 GB RAM, Linux 2.6.18).]
1. ViNe-enable sites
2. Configure ViNe VRs
3. Instantiate BLAST VMs
4. Contextualize
   a. Retrieve VM information
   b. ViNe-enable VMs
   c. Configure Hadoop
Quick ViNe Overview
ViNe implements routing and other
communication mechanisms needed to
deploy user-level virtual networks across
WAN
ViNe offers:
• Full connectivity among machines
(physical and virtual) on public and private
networks – built-in firewall traversal
• Multiple isolated overlays
• Management APIs
ViNe Hands-on Setup on FG
[Diagram: FutureGrid sites Hotel, India, Sierra, Alamo, and Foxtrot connected over the FG network through a Network Impairment Device (NID), spanning private and public networks.]
ViNe Hands-on Setup on FG
[Diagram: simplified view of the setup, with Sierra (private network) and Foxtrot (public network) connected over the FG network through the NID.]
ViNe Hands-on Exercise on FG
[Diagram: a VR at Sierra, with limited connectivity on a private network, and a VR at Foxtrot, with symmetric connectivity, connect the two sites.]
Conclusions
SDN and virtual networking in general will play
increasingly important roles in
• Management of datacenters
• Hybrid clouds
• Intercloud systems
Originally aimed at LANs, now extending to WANs
Many interesting design tradeoffs among flexibility of management and use, functionality, and security will become possible
[Figure source: Andrew Waite, InfoSec Triads]