Module 6 - SDN


Software Defined Networking

Network Architecture (INWK 6115)


Based on the articles listed in the course information page and multiple public Internet resources
The Internet: A Remarkable Story
• Tremendous success
– From research experiment
to global infrastructure
• Brilliance of under-specifying
– Network: best-effort packet delivery
– Hosts: arbitrary applications
• Enables innovation in applications
– Web, P2P, VoIP, social networks, virtual worlds
• But, change is easy only at the edge…
Inside the Networks: A Different Story…

• Closed equipment
– Software bundled with hardware
– Vendor-specific interfaces
• Over-specified
– Slow protocol standardization
• Few people can innovate
– Equipment vendors write the code
– Long delays to introduce new features

Impacts performance, security, reliability, cost…


Networks are Hard to Manage
• Operating a network is expensive
– More than half the cost of a network
– Yet, operator error causes most outages
• Buggy software in the equipment
– Routers with 20+ million lines of code
– Cascading failures, vulnerabilities, etc.
• The network is “in the way”
– Especially a problem in data centers
– … and home networks
Creating a Foundation for Networking
• A domain, not (yet?) a discipline
– Alphabet soup of protocols
– Header formats, bit twiddling
– Pre-occupation with artifacts
• From practice, to principles
– Intellectual foundation for networking
– Identify the key abstractions
– … and support them efficiently
Traditional Computer Networks

Data plane: packet streaming
– Forward, filter, buffer, mark, rate-limit, and measure packets
Routing vs. forwarding

• Routing (algorithm):
A successive exchange of connectivity
information between routers. Each router
builds its own routing table based on
collected information.
• Forwarding (process):
A switch- or router-local process which
forwards packets towards the destination
using the information given in the local
routing table.
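To make the routing/forwarding split concrete, here is a minimal Python sketch of the forwarding side: the routing table is assumed to have already been built by the routing algorithm, and the local forwarding step is just a longest-prefix match against it. The table entries and interface names are invented for illustration.

```python
import ipaddress

# A routing table as the routing algorithm might have computed it:
# (destination prefix, outgoing interface). Entries are illustrative only.
routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"), "eth1"),
    (ipaddress.ip_network("10.1.0.0/16"), "eth2"),
    (ipaddress.ip_network("0.0.0.0/0"), "eth0"),   # default route
]

def forward(dst_ip: str) -> str:
    """Local forwarding: pick the longest matching prefix for the destination."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [(net, port) for net, port in routing_table if dst in net]
    # Longest prefix wins, mirroring how a real forwarding lookup behaves.
    net, port = max(matches, key=lambda entry: entry[0].prefixlen)
    return port

print(forward("10.1.2.3"))   # -> eth2 (the more specific /16 beats the /8)
print(forward("192.0.2.1"))  # -> eth0 (default route)
```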
Traditional Computer Networks

Control plane: distributed algorithms
– Track topology changes, compute routes, install forwarding rules
Traditional Computer Networks

Management plane: human time scale
– Collect measurements and configure the equipment
Death to the Control Plane!
• Simpler management
– No need to “invert” control-plane operations
• Faster pace of innovation
– Less dependence on vendors and standards
• Easier interoperability
– Compatibility only in “wire” protocols
• Simpler, cheaper equipment
– Minimal software
Software Defined Networking (SDN)

• Logically-centralized control: smart, but slow
• API to the data plane (e.g., OpenFlow)
• Switches: dumb, but fast
Software Defined Network

• Features run on top of a Network OS through a well-defined open API
• The Network OS constructs a logical map of the network
• An open, vendor-agnostic protocol (OpenFlow) connects the Network OS to many simple
  packet-forwarding hardware elements
Network OS

• Network OS: a distributed system that creates a consistent, up-to-date network view
  – Runs on servers (controllers) in the network
• Uses an open protocol to:
  – Get state information from forwarding elements
  – Give control directives to forwarding elements
Main Concepts of Architecture
• Separate data from control
– A standard protocol between data and control
• Define a generalized flow table
– Very flexible and generalized flow abstraction
– Open up layers 1-7
• Open control API
– For control and management applications
• Virtualization of the data and control planes
• Backward compatible
– Though it also allows completely new header formats
OpenFlow

• OpenFlow is a protocol for remotely controlling the forwarding table of a switch or router
• It is one element of SDN

An OpenFlow controller talks to the switch's control path over the OpenFlow protocol
(run over SSL/TCP); the data path remains in hardware.

Logical OpenFlow Switch

An OpenFlow Logical Switch consists of:
• One or more flow tables and a group table, which perform packet lookups and forwarding
• One or more OpenFlow channels to an external controller
• Ports
• Logic

Flow Tables
Using the OpenFlow switch protocol, the controller can add, update,
and delete flow entries in flow tables, both reactively (in response to
packets) and proactively.
Reactive Flow Entries are created when the controller dynamically learns where devices are in
the topology and must update the flow tables on those devices to build end-to-end connectivity.
For example, if a host on switch A needs to talk to a host on switch B, messages are sent to
the controller to find out how to reach that host. The controller learns the hosts' MAC
addresses and how the switches connect, and programs the corresponding logic into the flow
tables of each switch. This is a reactive flow entry.
Proactive Flow Entries are programmed before traffic arrives. If it’s
already known that two devices should or should not communicate,
the controller can program these flow entries on the OpenFlow
endpoints ahead of time.
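Below is a minimal sketch of a reactive controller application, written in the style of the Ryu framework (one of the open-source controllers listed later in this module). It learns source MACs from packet-in events and installs a flow entry so that later packets to a learned destination are handled by the switch itself; the same add_flow() call could equally be used proactively, before any traffic arrives. Class names, the priority value, and the learning logic are illustrative, not a complete application.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3
from ryu.lib.packet import packet, ethernet


class ReactiveSwitch(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.mac_to_port = {}  # learned (datapath id -> {MAC: port})

    def add_flow(self, datapath, priority, match, actions):
        """Push a flow entry to the switch (reactively or proactively)."""
        parser = datapath.ofproto_parser
        inst = [parser.OFPInstructionActions(
            datapath.ofproto.OFPIT_APPLY_ACTIONS, actions)]
        datapath.send_msg(parser.OFPFlowMod(
            datapath=datapath, priority=priority, match=match, instructions=inst))

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg, dp = ev.msg, ev.msg.datapath
        parser = dp.ofproto_parser
        in_port = msg.match['in_port']
        eth = packet.Packet(msg.data).get_protocol(ethernet.ethernet)

        # Reactive learning: remember which port this source MAC lives behind.
        self.mac_to_port.setdefault(dp.id, {})[eth.src] = in_port

        # If the destination is known, install a specific entry so the switch
        # handles the rest of this flow without involving the controller.
        # (A full application would also packet-out/flood the current packet.)
        if eth.dst in self.mac_to_port[dp.id]:
            out_port = self.mac_to_port[dp.id][eth.dst]
            match = parser.OFPMatch(in_port=in_port, eth_dst=eth.dst)
            self.add_flow(dp, priority=10, match=match,
                          actions=[parser.OFPActionOutput(out_port)])
```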
Traffic Matching, Pipeline Processing, and Flow Table Navigation

Each OpenFlow switch contains at least one flow table, with a set of flow entries in it.
When there is more than a single flow table, matching starts at the first flow table and may
continue to additional flow tables of the pipeline. A packet first starts in table 0, where
entries are checked in priority order: the highest priority matches first (e.g., 200 before
100). If the flow needs to continue to another table, a goto-table instruction tells the
packet which table to go to. Pipeline processing happens in two stages: ingress processing
and egress processing.

If no match is found in a flow table, the outcome depends on the configuration of the
table-miss flow entry.
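The following plain-Python sketch (not tied to any controller framework) shows how a packet walks the pipeline just described: start in table 0, take the highest-priority matching entry, follow any goto-table instruction, and fall back to the table-miss outcome when nothing matches. All tables, entries, and field names are invented for illustration.

```python
def matches(entry, pkt):
    """An entry matches if every field it specifies equals the packet's value."""
    return all(pkt.get(field) == value for field, value in entry["match"].items())

def process(pipeline, pkt, table_id=0):
    table = pipeline[table_id]
    for entry in sorted(table, key=lambda e: e["priority"], reverse=True):
        if matches(entry, pkt):
            if "goto" in entry:                      # continue in a later table
                return process(pipeline, pkt, entry["goto"])
            return entry["action"]
    return "table-miss"                              # outcome set by the table-miss entry

pipeline = {
    0: [{"priority": 200, "match": {"ip_dst": "10.0.0.5"}, "goto": 1},
        {"priority": 100, "match": {}, "action": "flood"}],
    1: [{"priority": 50, "match": {"tcp_dport": 80}, "action": "output:2"}],
}

print(process(pipeline, {"ip_dst": "10.0.0.5", "tcp_dport": 80}))  # output:2
print(process(pipeline, {"ip_dst": "10.0.0.5", "tcp_dport": 22}))  # table-miss (no match in table 1)
print(process(pipeline, {"ip_dst": "10.0.0.9"}))                   # flood (priority-100 catch-all)
```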
Table-miss Flow Entry

The table-miss flow entry is the last entry in the table; it has a priority of 0 and matches
anything.

The actions depend on how you configure it. Options:
• Forward the packet to the controller over the OpenFlow channel
• Drop the packet
• Continue with the next flow table
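As a sketch of how this looks in controller code (assuming the Ryu OpenFlow 1.3 API), the table-miss entry is typically installed as soon as a switch connects: priority 0, an empty match, and a send-to-controller action. The class name is illustrative.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class TableMissSetup(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_features(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Priority 0 + empty match = matches anything not caught by other entries.
        match = parser.OFPMatch()
        # Send unmatched packets to the controller (without buffering the payload).
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER, ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```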
OpenFlow Example

A controller programs the flow table held by the OpenFlow client in the switch's software
layer; the hardware layer then forwards matching packets. Example flow table entry:

MAC src  MAC dst  IP src  IP dst   TCP sport  TCP dport  Action
*        *        *       5.6.7.8  *          *          port 1

In the example topology, the switch has ports 1-4 with hosts 1.2.3.4 and 5.6.7.8 attached;
any packet destined to 5.6.7.8 is forwarded out port 1.
Examples

Routing:
Switch port  MAC src  MAC dst  Eth type  VLAN ID  IP src  IP dst   IP prot  TCP sport  TCP dport  Action
*            *        *        *         *        *       5.6.7.8  *        *          *          port 6

VLAN Switching:
Switch port  MAC src  MAC dst  Eth type  VLAN ID  IP src  IP dst  IP prot  TCP sport  TCP dport  Action
*            *        00:1f..  *         vlan1    *       *       *        *          *          port 6, port 7, port 9
Data-Plane: Simple Packet Handling
• Simple packet-handling rules
– Pattern: match packet header bits
– Actions: drop, forward, modify, send to controller
– Priority: disambiguate overlapping patterns
– Counters: #bytes and #packets

1. src=1.2.*.*, dest=3.4.5.*  →  drop
2. src=*.*.*.*, dest=3.4.*.*  →  forward(2)
3. src=10.1.2.3, dest=*.*.*.*  →  send to controller
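The three rules above can be sketched directly in Python: each rule is a pattern with a priority and per-rule counters, and the highest-priority matching rule wins. The explicit priority values (rule 1 highest) are an assumption based on the listed order; packet values are invented.

```python
from ipaddress import ip_address, ip_network

# (priority, src prefix, dst prefix, action), plus per-rule byte/packet counters.
rules = [
    {"prio": 3, "src": ip_network("1.2.0.0/16"), "dst": ip_network("3.4.5.0/24"),
     "action": "drop", "packets": 0, "bytes": 0},
    {"prio": 2, "src": ip_network("0.0.0.0/0"), "dst": ip_network("3.4.0.0/16"),
     "action": "forward(2)", "packets": 0, "bytes": 0},
    {"prio": 1, "src": ip_network("10.1.2.3/32"), "dst": ip_network("0.0.0.0/0"),
     "action": "send to controller", "packets": 0, "bytes": 0},
]

def handle(src, dst, length):
    for rule in sorted(rules, key=lambda r: r["prio"], reverse=True):
        if ip_address(src) in rule["src"] and ip_address(dst) in rule["dst"]:
            rule["packets"] += 1          # counters, as kept by a real flow entry
            rule["bytes"] += length
            return rule["action"]
    return "table-miss"

print(handle("1.2.9.9", "3.4.5.6", 1500))   # drop (rule 1 wins over rule 2 by priority)
print(handle("9.9.9.9", "3.4.7.7", 64))     # forward(2)
print(handle("10.1.2.3", "8.8.8.8", 64))    # send to controller
```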
OpenFlow-only and OpenFlow-hybrid Switches

OpenFlow-only switches are “dumb switches” having only a data/forwarding plane and no way of
making local decisions. All packets are processed by the OpenFlow pipeline and cannot be
processed otherwise.

OpenFlow-hybrid switches support both OpenFlow operation and normal Ethernet switching
operation, such as L2 Ethernet switching, VLAN isolation, L3 routing, ACLs, and QoS
processing, via the switch’s local control plane.

A switch can have half of its ports configured for traditional routing and switching, and the
other half configured for OpenFlow. The OpenFlow half is managed by an OpenFlow controller,
and the other half by the local switch control plane. Passing traffic between these pipelines
requires the use of the NORMAL or FLOOD reserved ports.

OpenFlow Ports (Physical)

OpenFlow switches connect logically to each other via their OpenFlow ports.
There are three types: physical ports, logical ports, and reserved ports.

Physical Ports
Physical ports are switch-defined ports that correspond to a hardware interface on the switch.

This could mean a one-to-one mapping of OpenFlow physical ports to hardware-defined Ethernet
interfaces on the switch, but it doesn’t necessarily have to be one-to-one.

OpenFlow switches can have physical ports that are actually virtual and map to some virtual
representation of a physical port, as with virtualized hardware network interfaces in compute
environments.

OpenFlow Ports (Logical)

Logical ports are switch-defined ports that do not correspond directly to hardware interfaces
on the switch. Examples include LAGs, tunnels, and loopback interfaces.

The only difference between physical ports and logical ports is that a packet associated with
a logical port may carry an extra pipeline field called Tunnel-ID.

When packets received on a logical port require communication with the controller, both the
logical port and the underlying physical port are reported to the controller.

OpenFlow Ports (Reserved)

The OpenFlow reserved ports specify generic forwarding actions such as sending to the
controller, flooding, or forwarding using non-OpenFlow methods, such as “normal” switch
processing.

Types of required reserved ports: ALL, CONTROLLER, TABLE, IN_PORT, ANY, UNSET, LOCAL.

The CONTROLLER port represents the OpenFlow channel used for communication between the switch
and the controller.

In hybrid environments, you’ll also see the NORMAL and FLOOD ports, which allow interaction
between the OpenFlow pipeline and the hardware pipeline of the switch.

Controller: Programmability

A controller application runs on top of the Network OS.
• Events from switches: topology changes, traffic statistics, arriving packets
• Commands to switches: (un)install rules, query statistics, send packets

OpenFlow Messages: Controller-to-switch Messages

Controller-to-switch messages are initiated by the controller and used to directly manage or
inspect the switch.
These messages include:
• Features – the controller requests the switch’s identity and basic capabilities
• Configuration – set and query configuration parameters
• Modify-State – also called “flow mod”; used to add, delete, and modify flow/group entries
• Read-State – get statistics
• Packet-Out – the controller sends a packet out of the switch, either as a full packet or by
  buffer ID
• Barrier – request/reply messages used by the controller to ensure message dependencies have
  been met and to receive notification of completed operations
• Role-Request – set the role of the controller’s OpenFlow channel
• Asynchronous-Configuration – set an additional filter on the asynchronous messages the
  controller wants to receive on its OpenFlow channel

OpenFlow Messages: Asynchronous Messages

Asynchronous messages are initiated by the switch and used to update the controller of network
events and changes to the switch state.

These messages include:
• Packet-in – transfer the control of a packet to the controller
• Flow-Removed – inform the controller that a flow entry has been removed
• Port-Status – inform the controller that a port has changed state (e.g., gone down)
• Error – notify the controller of problems
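A short sketch of how a controller application consumes two of these asynchronous messages, again assuming Ryu's OpenFlow 1.3 API (class name and log text are illustrative). Note that Flow-Removed is only delivered for entries installed with the "send flow removed" flag.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class AsyncWatcher(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPortStatus, MAIN_DISPATCHER)
    def on_port_status(self, ev):
        msg = ev.msg
        ofp = msg.datapath.ofproto
        reasons = {ofp.OFPPR_ADD: "added",
                   ofp.OFPPR_DELETE: "deleted",
                   ofp.OFPPR_MODIFY: "modified"}
        # The switch tells us, unsolicited, that one of its ports changed.
        self.logger.info("port %s on switch %s was %s",
                         msg.desc.port_no, msg.datapath.id,
                         reasons.get(msg.reason, "changed"))

    @set_ev_cls(ofp_event.EventOFPFlowRemoved, MAIN_DISPATCHER)
    def on_flow_removed(self, ev):
        msg = ev.msg
        # Sent only if the flow entry was installed with the OFPFF_SEND_FLOW_REM flag.
        self.logger.info("flow %s removed after %s packets / %s bytes",
                         msg.match, msg.packet_count, msg.byte_count)
```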

OpenFlow Messages: Symmetric Messages

Symmetric messages are initiated either by the switch or the controller and sent without
solicitation.

These messages include:
• Hello – introduction or keep-alive messages exchanged between switch and controller
• Echo – sent from either switch or controller; these verify the liveness of the connection
  and can be used to measure its latency or bandwidth
• Experimenter – a standard way for OpenFlow switches to offer additional functionality within
  the OpenFlow message type space

Unifies Different Kinds of Boxes

• Router
  – Match: longest destination IP prefix
  – Action: forward out a link
• Switch
  – Match: destination MAC address
  – Action: drop, forward, or flood
• Firewall
  – Match: IP addresses and TCP/UDP port numbers
  – Action: permit or deny
• NAT
  – Match: IP address and port
  – Action: rewrite address and port
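Each of these boxes reduces to the same "match fields, then act" shape, which is why a single generalized flow abstraction can stand in for all of them. A toy Python illustration with invented addresses and ports:

```python
# One rule per box type, all sharing the same match -> action structure.
router_rule   = {"match": {"ip_dst_prefix": "3.4.0.0/16"},        "action": "forward(link 2)"}
switch_rule   = {"match": {"mac_dst": "00:1f:aa:bb:cc:dd"},       "action": "forward(port 7)"}
firewall_rule = {"match": {"ip_src": "1.2.3.4", "tcp_dport": 23}, "action": "deny"}
nat_rule      = {"match": {"ip_src": "10.0.0.5", "tcp_sport": 4000},
                 "action": "rewrite(src=203.0.113.1:5000), forward(port 1)"}

# An OpenFlow switch only needs to implement this one generalized abstraction
# to take on the role of any of the four boxes.
for name, rule in [("router", router_rule), ("switch", switch_rule),
                   ("firewall", firewall_rule), ("NAT", nat_rule)]:
    print(f"{name:8s} match={rule['match']} -> {rule['action']}")
```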
OpenFlow is not enough…

• Adds the ability to modify, experiment…
• But it is still harder than it should be to add features to a network
• Effectively assembly programming, or an ISA (Instruction Set Architecture)

[OpenFlow is just a forwarding-table management protocol]
Example OpenFlow Applications
• Dynamic access control
• Seamless mobility/migration
• Server load balancing
• Network virtualization
• Using multiple wireless access points
• Energy-efficient networking
• Adaptive traffic monitoring
• Denial-of-Service attack detection
E.g.: Dynamic Access Control
• Inspect first packet of a connection
• Consult the access control policy
• Install rules to block or route traffic
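A plain-Python sketch of this reactive access-control loop: the first packet of a connection reaches the controller, the policy is consulted, and a connection-specific allow or drop rule is pushed so later packets never leave the switch's fast path. The policy contents and the flow-table structure are invented for illustration.

```python
policy_blocklist = {("10.0.0.7", 22), ("10.0.0.9", 80)}   # (destination, port) pairs to block
flow_table = []                                            # rules installed on the switch

def on_first_packet(src, dst, dport):
    action = "drop" if (dst, dport) in policy_blocklist else "forward"
    # Install a connection-specific rule; subsequent packets match it directly.
    flow_table.append({"match": {"src": src, "dst": dst, "dport": dport},
                       "action": action})
    return action

print(on_first_packet("10.0.0.2", "10.0.0.7", 22))   # drop (blocked by policy)
print(on_first_packet("10.0.0.2", "10.0.0.8", 443))  # forward, and a rule is installed
print(flow_table)
```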
E.g.: Seamless Mobility/Migration
• See host send traffic at new location
• Modify rules to reroute the traffic

E.g.: Server Load Balancing

• Pre-install load-balancing policy
• Split traffic based on source IP: e.g., src=0* goes to one server replica, src=1* to the other
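A small sketch of the splitting logic: clients whose source address starts with a 0 bit go to one replica, the rest to the other. In OpenFlow this becomes two pre-installed wildcard rules matching the source address under a one-bit mask (0.0.0.0/1 and 128.0.0.0/1); the replica addresses below are invented.

```python
import ipaddress

SERVERS = {"0": "10.0.0.10", "1": "10.0.0.11"}  # two replicas, illustrative addresses

def pick_replica(src_ip: str) -> str:
    """Split clients by the first bit of their source address (src=0* vs src=1*)."""
    first_bit = format(int(ipaddress.ip_address(src_ip)), "032b")[0]
    return SERVERS[first_bit]

# The controller would pre-install two wildcard rules implementing this split,
# e.g. match ipv4_src=0.0.0.0/1 -> replica A, ipv4_src=128.0.0.0/1 -> replica B.
print(pick_replica("64.10.1.2"))    # first bit 0 -> 10.0.0.10
print(pick_replica("200.1.2.3"))    # first bit 1 -> 10.0.0.11
```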

E.g.: Network Virtualization

• Partition the space of packet headers among multiple controllers (Controller #1, #2, #3),
  each controlling its own slice of the network
OpenFlow and other Components
• Open Networking Foundation
– Google, Facebook, Microsoft, Yahoo, Verizon, Deutsche
Telekom, and many other companies
• Commercial OpenFlow switches
– HP, NEC, Quanta, Dell, IBM, Juniper, …
• Network operating systems
– NOX, Beacon, Floodlight, Nettle, ONIX, POX, Frenetic
• Network deployments
– Eight campuses, and two research backbone networks
– Commercial deployments (e.g., Google backbone)
Analogy: Moving from Mainframes to PCs

Mainframe model: specialized applications on a specialized operating system on specialized
hardware, all vertically integrated.

PC model: many apps on top of an open interface; Windows, Mac OS, or Linux on top of another
open interface; commodity microprocessors underneath.

Vertically integrated   →   Horizontal
Closed, proprietary     →   Open interfaces
Slow innovation         →   Rapid innovation
Small industry          →   Huge industry
Routers/Switches Moving from HW to SW

Traditional model: specialized features on a specialized control plane on specialized
hardware, all vertically integrated.

SDN model: many apps on top of an open interface; alternative control planes on top of
another open interface; merchant switching chips underneath.

Vertically integrated   →   Horizontal
Closed, proprietary     →   Open interfaces
Slow innovation         →   Rapid innovation

Challenges to SDN

Challenges: Heterogeneous Switches

• Number of packet-handling rules
• Range of matches and actions
• Multi-stage pipeline of packet processing (e.g., access control, then MAC look-up, then IP
  look-up)
• Offload some control-plane functionality

Challenges: Controller Delay and Overhead

• A controller is much slower than a switch
• Processing packets at the controller leads to delay and overhead
• Need to keep most packets in the “fast path”

Challenges: Distributed Controller

• For scalability and reliability, run multiple controllers, each with its own controller
  application and Network OS instance
• Partition and replicate state across them

Challenges: Testing and Debugging


• OpenFlow makes programming possible
– Network-wide view at controller
– Direct control over data plane
• Plenty of room for bugs
– Still a complex, distributed system
• Need for testing techniques
– Controller applications
– Controller and switches
– Rules installed in the switches
SDN: Technology or Architecture?
SDN is really not a technology; it is merely a way of organizing network functionality.
Therefore, SDN is an architecture.
What Kind of Architecture?
SDN decouples the network control and forwarding functions.
Why this decoupling?
• Networks are hard to manage
• Their design is not based on formal principles
• They are a bag of protocols
• Therefore, modularity based on abstraction is required
In Networks: What Is Modular and What Has Abstraction?

Data plane:
Operates on a time scale of nanoseconds, needs routing information from the control plane,
and is local. It has abstraction in the form of layering, which makes it easy to write to:
for example, to move from copper to optical fiber, only the way bits are sent changes; the
upper layers are unchanged.

Control plane:
Has no comparable modularity and deals with non-local data. It covers routing (distribution
and algorithms), traffic engineering (MPLS, etc.), and isolation (ACLs, VLANs, and firewalls).
Control Plane Needs Abstraction
Abstraction: reusable components of a system

Functions of the control plane:
1. Figure out what the network looks like [topology] ~ reusable
2. Figure out how to accomplish goals on a given topology ~ not reusable
3. Tell the switches what to do [configure forwarding state] ~ reusable
SDN: Two Control Plane Abstractions
1. Global network view:
   - Provides information about the current network
   - Implemented with a network operating system (NOS); the NOS is software running on servers
2. Forwarding model:
   - Provides a standard way of defining forwarding state
   - This is OpenFlow
Another View of SDN

A layered view: the control program sits on top of the global view (a layer of abstraction);
the global view is provided by the network operating system, which itself sits on a second
layer of abstraction above the physical switches.
Clean Separation of Concerns
• Control program: expresses the operator’s goals
  - Implemented on the global network view abstraction
  - Computes forwarding state for each router/switch

• NOS: links the global view and the physical switches
  - Gathers information for the global network view
  - Conveys configuration from the control program to the switches

• Routers/switches: merely follow orders from the NOS

This separation enables independent innovation in each layer.
Major Changes in Paradigm
• The control mechanism is now a program using the NOS API
  - Not a distributed protocol, just a graph algorithm [e.g., Dijkstra or another graph
    algorithm; see the sketch below]
  - Much easier to manage, evolve, and understand
• Clean separation of control and data planes:
  - No longer packaged together in proprietary boxes
  - Enables use of commodity hardware and third-party software
  - Supports better testing and troubleshooting
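A sketch of "control as a graph algorithm": given the global view (a weighted graph the NOS has assembled), run plain Dijkstra from each destination and turn the result into per-switch forwarding state. The topology, switch names, and link costs are invented for illustration.

```python
import heapq

topology = {          # adjacency list: switch -> {neighbour: link cost}
    "s1": {"s2": 1, "s3": 4},
    "s2": {"s1": 1, "s3": 1},
    "s3": {"s1": 4, "s2": 1},
}

def shortest_path_tree(graph, source):
    """Plain Dijkstra; returns each node's predecessor on its best path from source."""
    dist, prev = {source: 0}, {}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return prev

def forwarding_state(graph, dst):
    """For every switch, the next hop toward dst along the shortest path."""
    prev = shortest_path_tree(graph, dst)   # tree rooted at the destination
    return {node: prev[node] for node in graph if node != dst}

# Forwarding state the control program would push down for destination s3:
print(forwarding_state(topology, "s3"))    # {'s1': 's2', 's2': 's3'}
```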
Where Network Virtualization Comes In

• When another node is added to the network, accommodating the change is hard work:
  the operator has to make changes on each router, and watch for and adapt to changes in the
  topology.
• That’s where network virtualization comes in.

• New abstraction: the virtual topology
  - Allows the operator to express requirements and policies
  - Via a set of logical switches and their configurations
New Layer: Network Hypervisor
• Translates requirements into (physical) switch configurations
• Acts as a compiler for virtual topologies
• The control program only conveys its semantics, so control programs become easier to write
• The role of the compiler (the hypervisor) is now the complex part (see the sketch below)
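A toy sketch of the "hypervisor as compiler" idea: the operator writes a rule against one big logical switch, and the hypervisor expands it into rules for the physical switches that realize each virtual port. The port mapping, rule format, and names are all invented for illustration.

```python
# Virtual port -> (physical switch, physical port), maintained by the hypervisor.
port_map = {"v1": ("s1", 3), "v2": ("s4", 1)}

def compile_virtual_rule(virtual_rule):
    """Turn 'on the logical switch, send match M out virtual port P' into physical rules."""
    phys_switch, phys_port = port_map[virtual_rule["out_port"]]
    # A real hypervisor would also compute the path through intermediate
    # physical switches; this sketch only emits the egress rule.
    return [{"switch": phys_switch,
             "match": virtual_rule["match"],
             "action": f"output:{phys_port}"}]

virtual_rule = {"match": {"ip_dst": "10.0.0.42"}, "out_port": "v2"}
print(compile_virtual_rule(virtual_rule))
# [{'switch': 's4', 'match': {'ip_dst': '10.0.0.42'}, 'action': 'output:1'}]
```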

SDN Picture With Hypervisors

The layered stack now has two abstractions: Control Program → Virtual Topology (layer of
abstraction) → Network Hypervisor → Global Network View (layer of abstraction) → Network
Operating System → physical switches.
SDN and Middle Boxes

• SDN should implement middlebox functionality at the edges, in software, replacing separate
  middleboxes
• This cements the case for edge software forwarding
• It represents a radical shift from hardware to software
• Everything happens at the edges, through SDN

North-South/East-West Communication
SDN Analysis based on Layers and Ecosystems

• NP (Network Processor): an instruction-set processor for network applications; it enables
  software implementations of communication functions to run at hardware speed
• DCN: Distributed Cloud Networking
Two Major SDN Industry Strategies
Further on SDN Controllers

• The centralized brain of a network: it has a global view of all the network devices, their
  interconnections, and the best paths between hosts
• Without it, each device would have to detect a failure, announce it to all the others, run
  the routing algorithm, and update its own database, all of which takes time
• The SDN controller does not need to re-discover the network to compute shortest paths, as
  it already knows everything
Some Commercial Controllers

• Cisco Application Policy Infrastructure Controller


(APIC): central point of control, data and policy, and
provides a central API
• HP Virtual Application Networks (VAN) SDN Controller: uses OpenFlow
• NEC ProgrammableFlow PF6800 Controller: integrates with OpenFlow and MS VM Manager
• VMware NSX Controller: distributed control. Controls
virtual networks and overlay transport tunnels. Talks
to applications before configuring all the vSwitches
Some Open Source Controllers
• OpenDaylight: centralized control over any vendor’s equipment
• OpenContrail: stems from Juniper’s commercial offering
• Floodlight: Java-based; supports OpenFlow virtual and real switches; part of the Big Switch
  open-source project
• Ryu OpenFlow Controller: supports the OpenFlow, NETCONF, and OF-Config protocols
• FlowVisor: acts as a go-between for OpenFlow switches and multiple OpenFlow controllers
• *** POX: Python-based and enables rapid development and prototyping
• NOX: C++ based
SDN Use Cases
• Data Center Optimization: VMs, NFV, and SDN
• Network Access Control: BYOD context; NAC also manages access-control limits, service
  chaining, and QoS
• Network Virtualization: creating a virtual network on top of the physical network, delivered
  as a service
• Virtual Customer Edge: vCPE
• Dynamic Interconnects: SD-WAN-style dynamic links between two domain controllers, with
  dynamic management of QoS and bandwidth allocation for those links
• Virtual Core and Aggregation: for service providers; virtual IP Multimedia Subsystem (vIMS),
  virtual Evolved Packet Core (vEPC), dynamic mobile backhaul, virtual provider edge (vPE),
  and NFV Gi-LAN infrastructure
Data Center SDN: Comparing VMware NSX,
Cisco ACI, and Open SDN Options
From:
http://www.datacenterknowledge.com/archives/2016/06/29/data-center-sdn-comparing-vmware-nsx-cisco-aci-and-open-sdn-options/

• SDN is indicative of a migration from hardware to


software in the networking industry
• There are several vendors offering a variety of flavors
of SDN and network virtualization
• How are they different?
• Are some more open than others?
• What are their use cases?
VMware NSX
• NSX integrates security, management, functionality, VM
control, and a host of other network functions directly
into a hypervisor.
• Creates an entire networking architecture from your
hypervisor and includes L2, L3, and even L4-7
networking services.
• The goal is to decouple the network from the underlying hardware and deliver fully
  optimized networking services to the VM
• Micro-segmentation becomes a reality, along with increased application continuity and
  integration with additional security services
VMware NSX: Use cases and limitations
• NSX requires the VMware hypervisor.
• If you run VMware with a large number of VMs and have to deal with the complexities of
  virtual network management, you absolutely need to look at NSX.
Limitations:
The level of automation is limited to virtual networks and virtual machines.
There’s no automation for physical switches.
Some of the L4-L7 advanced network services are delivered through a closed API and might
require additional licensing.
VMware NSX: Use cases and limitations

• With a super simple VMware deployment with little complexity, you’ll probably have little
  need for NSX.
• With a sizeable VM architecture and a lot of VMware networking management points, NSX can
  make your life a lot easier.
Cisco Application Centric Infrastructure
(ACI)
• At a very high-level, ACI creates tight integration
between physical and virtual elements.
• Centralized management is done by the Cisco
application policy infrastructure controller, or APIC.
• It exposes a northbound API through XML and JSON
(JavaScript Object Notation) and provides a command-
line interface and GUI that use this API to manage the
fabric.
• From there, network policies and logical topologies,
which traditionally have dictated application design,
are instead applied based on the application needs.
ACI: Use-cases

• This is a truly powerful model capable of abstracting the


networking layer and integrating core services with your
important applications and resources.
• You can create full automation of all virtual and physical
network parameters through a single API.
• Furthermore, you can integrate with legacy workloads
and networks to control that traffic as well.
• And yes, you can even connect non-Cisco physical switches to get information about the
  actual device and what it is handling.
• Furthermore, partnerships with other vendors allow for
complete integrations.
ACI: Limitations

• The only way to get the full benefits from Cisco’s SDN
solution is by working with the Nexus line of
switches.
• More functionality is enabled if you’re running the
entire Cisco fabric in your DC.
• For some organizations, this can get expensive.
• However, if you’re leveraging Cisco technologies
already and haven’t looked into ACI and the APIC
architecture, go ahead.
Open SDN: BCF

• Open SDN provides more options and even supports white-box (or brite-box) solutions.
• Big Switch has a product called Big Cloud Fabric,
which it built using open networking (white box or
brite box) switches and SDN controller technology.
• Big Cloud Fabric is designed to meet the
requirements of physical, virtual, cloud and/or
containerized workloads.
Open SDN: BCF

• BCF supports multiple hypervisor environments,


including VMware vSphere, Microsoft Hyper-V, KVM,
and Citrix XenServer.
• Within a fabric, both virtualized servers and physical
servers can be attached for complete workload
flexibility.
• For Cloud environments, BCF continues OpenStack
support for Red Hat and Mirantis distributions.
• It also integrates with Dell Open Networking switches.
BCF: Use Cases and Limitations

• BCF interoperates with the NSX controller providing


enhanced physical network visibility to VMware
network administrators.
• You can invest in commodity switches with confidence, since the software controlling them
  is powerful.
• You’re not locked down by any vendor, and your entire
networking control layer is extremely agile.
• Potentially trading off open vs proprietary
technologies
Open SDN: Cumulus Linux
• The architecture is built around native Linux networking, giving
you the full range of networking and software capabilities
available in Debian
• Switches running Cumulus Linux provide standard networking
functions such as bridging, routing, VLANs, Multi-chassis Link
Aggregation (MLAGs), IPv4/IPv6, OSPF/BGP, access control, VRF,
and VxLAN overlays.
• MLAG (Multi-chassis Link Aggregation Group) allows a single
device to be connected to 2 Ethernet switches using a single
Link Aggregation Group (LAG). The device is configured with a
single LAG with ports that are connected to two switches,
rather than a single switch. The two switches coordinate with each other and make it appear
to the device as if they were a single switch.
Open SDN: Cumulus Linux

• Cumulus can run on “bare-metal” network hardware


from vendors like Quanta, Accton, and Agema.
• Customers can purchase hardware at a cost far lower
than incumbents.
• HW running Cumulus Linux can run right alongside
existing systems, because it uses industry standard
switching and routing protocols.
• Hardware vendors like Quanta are now making a direct
impact around the commodity hardware conversation
Cumulus Linux: Use Cases and Limitations
• Acting as an integration point or overlay, Cumulus gives
organizations the ability to work with a powerful Linux-driven SDN
architecture.
• Integration into heavily virtualized systems (VMware), expansion into Cloud environments
  (direct integration with OpenStack), controlling big data (zero-touch network provisioning
  for Hadoop environments), and a lot more.
• Hadoop is an open-source software framework used for distributed storage and processing of
  very large data sets; it combines the Hadoop Distributed File System (HDFS) with a
  processing part based on the MapReduce programming model.
• You need the right partners and professionals who can help and ensure that the business is
  ready.
• There are some deployments of Cumulus in the market, but enterprises aren’t in a rush to go
  completely open-source and commodity.
Other SDN Vendors

• Plexxi
• Pica8
• PLUMgrid
• Embrane
• Pluribus Networks
• Anuta
