Overview
• Evolution of routers
• The Clean Slate project
• OpenFlow
• Emergence and evolution of SDN
• SDN architecture today
• Use cases
• Standards development
Planes
• Control plane
  – Determines how packets should be switched/forwarded
  – Developed by various SDOs
  – Needs to be interoperable
  – Strives to maintain backwards compatibility
  – Sometimes takes years to achieve stability
• Data plane
  – Responsible for the actual forwarding of packets
  – Hardware-dependent and closed
  – Used by vendors to provide differentiation
  – Can be fairly complicated, incorporating a number of inline functions, e.g. ACLs, QoS, NAT
• Management plane
  – FCAPS (Fault, Configuration, Accounting, Performance & Security)
  – Uses a combination of standard (e.g. SNMP) and non-standard tools such as CLI
  – Generally requires low-level operator input
[Diagram: a forwarding device hosting the data and control planes, managed by an element/network management system in the management plane]
Clean Slate Project (1)
Mission: Re-invent the Internet
Two research questions:
• “With what we know today, if we were to start again with a clean slate, how would we design a global communications infrastructure?”
• “How should the Internet look in 15 years?”
Clean Slate Project (2)
• One of the flagship projects was ‘Internet Infrastructure: OpenFlow and Software Defined Networking’
• Its seminal paper on OpenFlow kicked off the SDN movement, and the data communications world would never be the same again
OpenFlow: The Solution (1)
• Control plane: protocols and algorithms (routing/bridging protocols, RIBs, routing policy and logic) that calculate forwarding paths
• Data plane: forwards frames/packets based on the paths calculated by the control plane
[Diagram: FROM a switch/router with an integrated control plane and data plane TO an architecture in which the control plane moves to an OpenFlow controller that programs an abstracted flow table in the device’s data plane via the OpenFlow protocol over a secure channel]
OpenFlow: How it works (1)
• The OpenFlow controller adds, deletes and modifies flow table entries via the OpenFlow protocol over a secure channel
• The switch forwards traffic by matching packets against the header fields of flow entries and taking the corresponding actions

Header Fields*  | Actions             | Counters
Flow 1          | Forward to port 1/1 |
Flow 2          | Drop                |
Flow n          | Send to controller  |

* Ingress Port, Ethernet SA, Ethernet DA, VLAN ID, VLAN PCP, IP SA, IP DA, IP Proto, IP ToS, Source L4 Port, Dest L4 Port, etc.
Defining SDN
• ONF: ‘The physical separation of the network control plane from the forwarding plane, and where a control plane controls several devices.’
• This definition is too narrow…
• SDN is as much a marketing term as a technical one
• SDN is a new approach to networking that provides greater network agility and flexibility by:
  – Automation through enhanced programmability and open interfaces
  – Dis-aggregation and abstraction
  – Centralisation of network control with real-time network visibility
SDN architectural framework (2)
[Diagram, after RFC 7426: an Application Plane (applications and their services) sits above a Network Services Abstraction Layer. Below it, a Control Plane with its Control Abstraction Layer (CAL) and a Management Plane with its Management Abstraction Layer (MAL) expose a Service Interface and reach the Network Device over the CP Southbound Interface and MP Southbound Interface respectively. Within the device, a Device & Resource Abstraction Layer (DAL) sits above the Forwarding Plane and the Operational Plane.]
SDN architectural framework (3)
[Diagram: the framework adopted in this course. The Application Plane (application services such as topology discovery & management, traffic engineering, route selection & failover, and resource management) sits above the Control Plane (the controller, with a Network Services Abstraction Layer), which sits above the Data Plane (network devices – IP/MPLS/transport – retaining some distributed control-plane elements such as BGP, PCC, RIBs, Segment Routing and RSVP-TE). Northbound Interfaces (e.g. REST/RESTCONF/NETCONF/XMPP) connect the Application Plane to the controller; Southbound Interfaces (e.g. OpenFlow, NETCONF, SNMP/MIBs, YANG-based configuration, BGP-LS, PCEP, i2RS, ForCES, IPFIX) connect the controller, via a Device & Resource Abstraction Layer (DAL), to the Data Plane; East/West-bound interfaces such as BGP connect controllers to each other.]
Note: designations of north-bound and south-bound are relative to the control plane (“controller”)
Comparing and contrasting with NFV
• SDN: decouples elements of the control plane from the data plane
• NFV: decouples network software from closed, proprietary hardware systems
[Diagram: FROM tightly coupled software on purpose-built hardware TO virtualised software on COTS hardware]
SEBA - SDN Enabled Broadband Access
• Virtualised access technologies at the edge of the carrier network.
https://gonorthforge.com/seba-sdn-enabled-broadband-access-the-next-generation-of-broadband-access/
Traditional FTTH Residential Access
RG - Residential Gateway
ONU - Optical Network Unit
OLT - Optical Line Termination
BNG - Broadband Network Gateway
https://youtu.be/jBeRYVVM7u8?t=231
VOLTHA
• Virtual Optical Line Terminal (OLT) Hardware Abstraction
• Provides a common, vendor-agnostic GPON control and management system for a set of white-box and vendor-specific PON hardware devices
https://opennetworking.org/voltha/
VOLTHA
• Network as a Switch:
  – It makes a set of connected access network devices look like an SDN-programmable switch
• Evolution to virtualisation:
  – It can work with a variety of access network technologies and devices
• Unified OAM abstraction:
  – It provides unified, vendor- and technology-agnostic handling of device management tasks, such as service lifecycle, device lifecycle (including discovery, upgrade), system monitoring, alarms, troubleshooting, security, etc.
• Cloud/DevOps bridge to modernisation:
  – It does all of the above while using a microservices architecture running on top of Docker and/or Kubernetes
https://docs.voltha.org/master/index.html
Open Network Operating System (ONOS)
• Build carrier-grade solutions using white-box hardware.
• Create and deploy network services with simplified programmatic interfaces.
https://opennetworking.org/onos/
SD-RAN
• Mobile RAN networks historically used vendor-proprietary base stations.
• Operators would like to see interoperable RAN components.
• Operators, through the O-RAN consortium, are advocating for a disaggregation of RAN networks into interoperable Radio Unit (RU), Distributed Unit (DU) and Centralised Unit (CU) components.
• The RAN Intelligent Controller (RIC) is integral to the O-RAN architecture.
https://opennetworking.org/open-ran/
P4 Integrated Network Stack (PINS)
• Enables the use of SDN (and an external controller) to dynamically add new advanced functions to a traditional routed network.
https://opennetworking.org/pins/
Editor's Notes
Overview
We will start with a look at how routers have evolved and the conditions that resulted in the emergence of SDN. We will touch on the Stanford Clean Slate project and in particular, the development of OpenFlow. We will look at the different SDN architectures that are being adopted now together with key use cases. We will also touch on the SDOs (Standards Development Organisations) that are involved in SDN standardisation efforts and the two key open source SDN projects. Finally, we will briefly discuss how SDN differs from NFV.
Routers
When we look at routers today, we can still say that the fundamental role of routers has not changed since the IMP (interface message processor) of the ARPANET. They have two key responsibilities:
To determine network paths (routes). A number of routing protocols, both internal and external, are available today to perform this function.
To forward packets along the paths they have determined.
In short, the basic elementary function of routers (and switches) has not changed much since the inception of data networking.
Clean Slate Project (1)
The Clean Slate Project was an initiative of researchers at Stanford University that started around 2006.
The program stemmed from the belief that the current Internet has significant deficiencies that need to be solved before it can become a unified global communications infrastructure. There was a further belief that the Internet’s shortcomings will not be resolved by the conventional incremental and “backward-compatible” style of academic and industrial networking research. The program focused on unconventional, bold, and long-term research that tried to break what the researchers called the network’s ossification. The research program was characterized by two research questions: “With what we know today, if we were to start again with a clean slate, how would we design a global communications infrastructure?”, and “How should the Internet look in 15 years?” The intent was to measure their success in the long-term: They intended to look back in 15 years time and see significant impact from the program.
The mission of the project was to “re-invent the internet”. It’s significant that this work came out of the research community and not operators or vendors.
References:
http://www.tropicalcoder.com/CleanSlateWhitepaperV2.pdf
http://cleanslate.stanford.edu/index.php
Clean Slate Project (2)
One of the flagship projects within the Clean Slate initiative was ‘Internet Infrastructure: OpenFlow and Software Defined Networking’. The output of this activity was the seminal paper “OpenFlow: Enabling Innovation in Campus Networks”. This work kicked off the SDN movement and the networking world would never be the same again.
References:
http://archive.openflow.org/documents/openflow-wp-latest.pdf
OpenFlow: The Solution (1)
At this point, it’s important to make a clear distinction between the control plane and data plane. As we stated earlier, a router has two basic functions:
To determine network paths (routes). A number of routing protocols and algorithms are used today to calculate forwarding paths. This is the control plane.
To forward packets along the paths they have determined. The control plane programs paths it has calculated into the forwarding or data plane. The function of the data plane is to actually forward packets according to these rules.
To address the question of how to run experimental protocols on live networks, the solution that the OpenFlow research team came up with was to:
Completely remove the control plane from Ethernet switches and move it to an external controller
To abstract the forwarding plane on switches as a flow table so that all switches appeared similar from a forwarding perspective. This was important because forwarding tables are very hardware-dependent.
To use a standardised interface (OpenFlow on-wire protocol) over a secure channel that allows the controller to manipulate entries in the flow table of the switch
It’s interesting to note that router vendors had done this very thing (data and control plane separation) a long time ago with physically separate control cards + line cards, albeit in a proprietary way.
OpenFlow: How it works (1)
In an OpenFlow network, the OpenFlow controller is responsible for adding flows to the flow table and also deleting and modifying them. There are a few approaches to doing so:
Reactive: flows are added as packets belonging to new flows are detected
Proactive: flows are installed based on advance knowledge of flows and their requirements
Hybrid: a combination of the above
The flow table on the switch has a number of entries with the following structure:
Header fields: the set of packet header fields to match on. Specific header fields can be ignored by using wildcards
Actions: a number of actions are possible:
Forward to an output port
Send to the controller
Drop
Set header fields
Etc
Counters:
Count packet statistics on a per-flow basis
When a packet enters the switch, its header is parsed and the header fields are looked up in the flow tables. If there is a match, the set of actions associated with the flow entry are executed. If there is no entry, the packet is sent to the controller so that it can determine what to do with the new flow.
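To make the reactive pattern above concrete, here is a minimal, illustrative sketch of an OpenFlow 1.3 controller application written with the Ryu framework. Ryu is not part of this course material and is used purely as an example; the output port used for newly learned flows is a hypothetical placeholder. The application installs a table-miss entry that punts unmatched packets to the controller, then installs a flow entry whenever a packet-in event announces a new flow.

```python
# Minimal sketch using the Ryu OpenFlow controller framework (illustrative only).
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class ReactiveFlowInstaller(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Table-miss entry: anything that matches no other flow is sent to the controller.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER, ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0, match=match, instructions=inst))

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def on_packet_in(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        in_port = ev.msg.match['in_port']
        # Reactive installation: match the ingress port and forward to a hypothetical port 2.
        match = parser.OFPMatch(in_port=in_port)
        actions = [parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10, match=match, instructions=inst))
```

Such an application would be launched with ryu-manager, with an OpenFlow switch pointed at the controller.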
Defining SDN
SDN has been one of the most hyped concepts in the history of networking. The original definition of SDN was one that is still promoted by the Open Networking Foundation (ONF): it defines it as ‘The physical separation of the network control plane from the forwarding plane, and where a control plane controls several devices’. This definition is now too narrow to describe what SDN represents.
The term SDN itself has been overloaded (almost abused) to mean many things, some of which have no relation to what it originally stood for. Vendors have been quick to attribute this term to any capability that involves software, automation or programmability. In fact, there is some contention that the term SDN is now meaningless from the perspective of clearly defining a function.
For our purposes, we define SDN as:
A new approach to networking that provides greater network agility and flexibility by:
Automation through enhanced programmability and open interfaces
Dis-aggregation and abstraction
Centralisation of network control with real-time network visibility
By agility, we mean the ability to react faster to network events and to roll out new services quicker.
By dis-aggregation, we mean the breaking up of integrated systems into their component parts.
By abstraction, we mean the ability to hide low-level hardware or software-specific mechanisms via a layer of open interfaces or APIs.
SDN SDOs
SDN is a wide-reaching concept and a number of standards bodies are involved in SDN standardisation efforts.
ONF
Established in March 2011 in order to broaden the concept of OpenFlow and to promote the commercialisation of SDN
Custodians of the OpenFlow specification
Focus areas:
promoting open source software as the de facto route to standards development and interoperability
evolving the OpenFlow® standard to develop new capabilities to expand SDN benefits
accelerating the deployment of open SDN to free end-users from vendor lock-in.
IETF
development of IP/MPLS protocols and extensions to work within an SDN environment. Efforts include ForCES, segment routing, PCE, BGP-LS
MEF
Defining Lifecycle Service Orchestration (LSO) and management capabilities necessary to achieve the key aims of the MEF’s Third Network.
ITU
Specification of SDN framework
Broadband Forum
SDN in a broadband access environment
SDN architectural framework (1)
The ITU-T Y.3300 specification outlines an architectural framework for SDN. While the framework itself is quite simple, it forms the basis for just about all of the other frameworks and open source implementations.
There are three key layers identified by ITU-T Y.3300:
Extracts from ITU-T Y.3300:
“
Application layer
The application layer is where SDN applications specify network services or business applications by defining a service-aware behaviour of network resources in a programmatic manner. These applications interact with the SDN control layer via application-control interfaces, in order for the SDN control layer to automatically customize the behaviour and the properties of network resources. The programming of an SDN application makes use of the abstracted view of the network resources provided by the SDN control layer by means of information and data models exposed via the application-control interface.
SDN Control Layer
The SDN control layer provides a means to dynamically and deterministically control the behaviour of network resources (such as data transport and processing), as instructed by the application layer. The SDN applications specify how network resources should be controlled and allocated, by interacting with the SDN control layer via application-control interfaces. The control signalling from the SDN control layer to the network resources is then delivered via resource-control interfaces. The configuration and/or properties exposed to SDN applications are abstracted by means of information and data models. The level of abstraction varies according to the applications and the nature of the services to be delivered.
Resource layer
The resource layer is where the network elements perform the transport and the processing of data packets according to the decisions made by the SDN control layer, and which have been forwarded to the resource layer via a resource-control interface.”
This is a simple but very powerful model of SDN architecture; most open-source and vendor-proprietary models map quite neatly to this model.
References:
ITU-T Y.3300
SDN architectural framework (2)
The IETF model for SDN identifies the following planes:
(from RFC7426)
“Forwarding Plane - Responsible for handling packets in the data path based on the instructions received from the control plane. Actions of the forwarding plane include, but are not limited to, forwarding, dropping, and changing packets. The forwarding plane is usually the termination point for control-plane services and applications. The forwarding plane can contain forwarding resources such as classifiers. The forwarding plane is also widely referred to as the "data plane" or the "data path".
Operational Plane - Responsible for managing the operational state of the network device, e.g., whether the device is active or inactive, the number of ports available, the status of each port, and so on. The operational plane is usually the termination point for management-plane services and applications. The operational plane relates to network device resources such as ports, memory, and so on.
Control Plane - Responsible for making decisions on how packets should be forwarded by one or more network devices and pushing such decisions down to the network devices for execution. The control plane usually focuses mostly on the forwarding plane and less on the operational plane of the device. The control plane may be interested in operational-plane information, which could include, for instance, the current state of a particular port or its capabilities. The control plane’s main job is to fine-tune the forwarding tables that reside in the forwarding plane, based on the network topology or external service requests.
Management Plane - Responsible for monitoring, configuring, and maintaining network devices, e.g., making decisions regarding the state of a network device. The management plane usually focuses mostly on the operational plane of the device and less on the forwarding plane. The management plane may be used to configure the forwarding plane, but it does so infrequently and through a more wholesale approach than the control plane. For instance, the management plane may set up all or part of the forwarding rules at once, although such action would be expected to be taken sparingly.
Application Plane - The plane where applications and services that define network behavior reside. Applications that directly (or primarily) support the operation of the forwarding plane (such as routing processes within the control plane) are not considered part of the application plane. Note that applications may be implemented in a modular and distributed fashion and, therefore, can often span multiple planes”.
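The ‘Planes’ slide earlier noted that the management plane typically relies on standard tools such as SNMP. As a small, hedged illustration of a management-plane interaction of the kind described above, the sketch below polls a device’s sysDescr object over SNMPv2c using the classic pysnmp high-level API; the target address and community string are placeholders, not values from the course material.

```python
# Illustrative only: poll sysDescr.0 over SNMPv2c using the classic pysnmp hlapi.
# The target address (192.0.2.1) and community string are placeholders.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=1),           # SNMPv2c
        UdpTransportTarget(('192.0.2.1', 161)),
        ContextData(),
        ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)),
    )
)

if error_indication:
    print(error_indication)                           # e.g. request timed out
elif error_status:
    print(error_status.prettyPrint())
else:
    for name, value in var_binds:
        print(f'{name.prettyPrint()} = {value.prettyPrint()}')
```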
SDN architectural framework (3)
We have looked at two different frameworks for SDN – from the ITU-T and the IETF, respectively. For the purpose of this training course, and subsequent SDN-related training courses that are in development by APNIC, we will adopt the framework depicted here. As technologies develop, the model itself will develop along with them.
Firstly, we note that this is simply a framework that describes the different layers, functions and interfaces that form part of the SDN framework. These functions do not necessarily map to specific hardware or software elements.
The model described here has the following components:
Application Plane
Control Plane (or the controller, although this may indeed be a suite of functions)
Data Plane
Northbound interfaces: between the Application Plane and the Control Plane
Southbound interfaces: between the Control Plane and the Data Plane
(Note that the designations of north and south are relative to the control plane, i.e. “the controller”.)
In the next few slides, we will describe each of these elements.
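To give a concrete feel for a southbound interface in this framework, the following sketch retrieves the advertised capabilities and the running configuration from a NETCONF-capable device using the ncclient library. The device address and credentials are placeholders and are not part of the course material.

```python
# Illustrative only: a minimal southbound NETCONF interaction using ncclient.
# Host, port and credentials below are placeholders.
from ncclient import manager

with manager.connect(
    host='192.0.2.1',
    port=830,
    username='admin',
    password='admin',
    hostkey_verify=False,
) as m:
    # List the NETCONF capabilities the device advertises.
    for cap in m.server_capabilities:
        print(cap)

    # Retrieve the running configuration as XML.
    running = m.get_config(source='running')
    print(running.data_xml)
```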
Comparing and contrasting with NFV
The differences between SDN and NFV are a source of some confusion.
From a technical viewpoint:
SDN: physically decouples control plane and data plane
NFV: physically decouples network software from closed, proprietary hardware systems
While SDN has its roots in the research community, NFV is a strong operator-led initiative. Like SDN, a key objective of NFV was to improve network agility by reducing dependence on proprietary hardware. If network applications could be run on COTS hardware, significant benefits could be achieved via homogenisation of hardware.
Neither SDN nor NFV is directly dependent on the other. However, significant benefits can be realised by using the two together. In fact, many NFV use cases are enhanced with the adoption of SDN capabilities.
SEBA - SDN Enabled Broadband Access
SEBA is a lightweight platform based on a variant of R-CORD. It supports a multitude of virtualized access technologies at the edge of the carrier network, including PON, G.Fast, and eventually DOCSIS and more. SEBA supports both residential access and wireless backhaul and is optimized such that traffic can run ‘fastpath’ straight through to the backbone without requiring VNF processing on a server.
Kubernetes-based
High speed
Operationalized with fault, configuration, accounting, performance and security (FCAPS) and operational support system (OSS) Integration
https://gonorthforge.com/seba-sdn-enabled-broadband-access-the-next-generation-of-broadband-access/
https://docs.voltha.org/master/overview/architecture_overview.html
Infrastructure
The Infrastructure for a VOLTHA deployment contains, at the bare minimum:
A Kafka cluster. Kafka is the messaging bus used to publish events to outside listeners, such as the operator's OSS/BSS (see the consumer sketch after this list). The recommended deployment size is 3 nodes for resiliency against failures, but a single node also works.
An etcd cluster. etcd is used as the data store by the different VOLTHA components. The recommended deployment size is 3 nodes for resiliency against failures, but a single node also works.
ONOS SDN Controller. ONOS manages the VOLTHA-abstracted switch, installs traffic forwarding rules and handles different types of failures, e.g. port-down events. ONOS comes with its own storage in the form of an Atomix cluster. The recommended deployment size is 3 nodes for ONOS and 3 nodes for Atomix to achieve high availability and resiliency, but it can also be a single node with no Atomix.
[Optional] RADIUS server. A RADIUS server is required for the ATT workflow for EAPOL-based authentication.
[Optional] Jaeger tracing. Jaeger allows you to perform end-to-end distributed tracing of transactions across the different microservices, allowing for easier monitoring and troubleshooting.
[Optional] EFK (Elasticsearch, Fluentd, Kibana) stack. EFK allows enhanced log management. Fluentd collects the logs and sends them to Elasticsearch, which saves them in its database. Kibana fetches the logs from Elasticsearch and displays them on a web UI.
An infrastructure comprising 3-node clusters of each of the components (etcd, Kafka, ONOS) can support up to 10 VOLTHA stacks, where each stack connects up to 1024 subscribers, located on a single OLT or divided over a handful of them.
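As a small illustration of how an outside listener (for example, an OSS/BSS component) could consume events from the Kafka bus mentioned above, the sketch below uses the kafka-python client. The broker address and the topic name are assumptions that would need to be checked against an actual VOLTHA deployment.

```python
# Illustrative only: consume events from the VOLTHA Kafka bus with kafka-python.
# Broker address and topic name below are assumptions, not confirmed deployment values.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    'voltha.events',                       # hypothetical topic name
    bootstrap_servers='192.0.2.20:9092',   # placeholder broker address
    auto_offset_reset='latest',
)

for message in consumer:
    # Each message carries a serialised event; here we just print a short preview.
    print(message.topic, message.timestamp, message.value[:80])
```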
VOLTHA Stack
A single VOLTHA stack contains several components, each interacting with one another through open APIs defined in protobuf within the voltha-protos repo:
voltha-core. The VOLTHA core is the heart of the VOLTHA components. It receives requests from the northbound and divides them into the proper sub-sets of operations for each of the adapters. It handles registration of the adapters and the configuration information of ONUs and OLTs, which it stores in etcd, such as ports, flows, groups and other dataplane constructs. It also abstracts the OLT and ONU pairs as a switch in the form of a logical device. Flows from the SDN controller are stored, decomposed by the core and sent as specific instructions to the correct adapter(s).
OpenFlow Agent. The ofAgent, as it is also known, is responsible for establishing the connection between the SDN controller and the VOLTHA core. It is the glue between the VOLTHA data model and the SDN controller, converting events coming from VOLTHA and instructions coming from ONOS between OpenFlow and gRPC calls. It is completely stateless.
OLT adapter. The OLT adapter is the key component for importing an OLT of any model into VOLTHA. The main purpose of this component is to interact with the physical OLT, receive its information, events and status and report them to the core, while at the same time receiving requests from the core and issuing them to the device. The OLT adapter also abstracts the technology of the OLTs, e.g. GPON, XGS-PON, EPON. The interface to the core is standardized in the voltha-protos and must be common for any adapter by any OLT vendor. The southbound interface towards the OLT and its software can be proprietary, as it is not seen by upper layers of the system. An open-source implementation exists in the form of the open-olt-adapter, which uses gRPC and the openolt.proto API as its means of communication with the open-olt-agent. Closed-source adapters that use different southbound protocols to the device, such as NETCONF, have been proven to work with VOLTHA with no changes required to the system.
ONU adapter. The ONU adapter is responsible for all the interactions and commands towards the ONU via OMCI, such as discovery, MIB upload, ME configuration, T-CONT and GEM port configuration and so on. The existing open-source implementation, voltha-openonu-adapter-go, includes a virtualized openOMCI stack, fully compliant with the G.988 specification. Any openOMCI-compliant ONU can thus be connected to VOLTHA with no additional effort. For other technologies (e.g. EPON) or other vendors, other ONU adapters that adhere to the voltha-protos can be brought in.
A VOLTHA stack is intended to be deployed for one up to a handful of OLTs with a total of 1024 subscribers connected. For multiple-OLT scenarios, many VOLTHA stacks can be connected to the same infrastructure, thus sharing the storage, message bus and SDN controller.
Gigabit Ethernet passive optical network (PON).
Key concepts in VOLTHA:
Network as a Switch: It makes a set of connected access network devices look like an SDN-programmable switch.
Evolution to virtualization: it can work with a variety of access network technologies and devices
Unified OAM abstraction: it provides unified, vendor- and technology agnostic handling of device management tasks, such as service lifecycle, device lifecycle (including discovery, upgrade), system monitoring, alarms, troubleshooting, security, etc.
Cloud/DevOps bridge to modernization: it does all above while using a microservices architecture running on top of Docker and/or Kubernetes.
https://youtu.be/XI3ckGAK84k?t=282
The ONOS platform includes:
A platform and a set of applications that act as an extensible, modular, distributed SDN controller.
Simplified management, configuration and deployment of new software, hardware & services.
A scale-out architecture to provide the resiliency and scalability needed for carrier-grade deployments.
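For a taste of the “simplified programmatic interfaces” mentioned above, the sketch below queries an ONOS controller’s northbound REST API for its device inventory using Python’s requests library. The controller address is a placeholder, and the default onos/rocks credentials are an assumption that must match the actual deployment.

```python
# Illustrative only: query the ONOS northbound REST API for known devices.
# Controller address is a placeholder; onos/rocks is ONOS's commonly used default login.
import requests

ONOS_URL = 'http://192.0.2.10:8181/onos/v1'

resp = requests.get(f'{ONOS_URL}/devices', auth=('onos', 'rocks'), timeout=10)
resp.raise_for_status()

for device in resp.json().get('devices', []):
    print(device['id'], device.get('available'), device.get('driver'))
```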
SD-RAN
https://www.rcrwireless.com/20200708/opinion/readerforum/open-ran-101-ru-du-cu-reader-forum
In a 5G RAN architecture, the baseband unit (BBU) functionality is split into two functional units: a Distributed Unit (DU), responsible for real-time L1 and L2 scheduling functions, and a Centralised Unit (CU), responsible for non-real-time, higher L2 and L3 functions.
The Radio Unit (RU) handles the digital front end (DFE) and parts of the PHY layer.
µONOS RIC
At the heart of ONF’s SD-RAN architecture is the µONOS RIC, based on ONOS, the leading open source SDN control plane for operators.
Refer to Berlin SD-RAN Trial (Deutsche Telekom deployed the first fully disaggregated 5G field trial)
The µONOS RIC is a cloud-native, carrier-grade SDN controller that enables:
Ease in scalability
High performance
High availability
Support for multi-vendor equipment
P4 Integrated Network Stack (PINS)
P4 Integrated Network Stack (PINS) is an industry collaboration bringing SDN capabilities and P4 programmability to traditional routing devices that rely on embedded control protocols (like BGP). Specifically, this project uses P4 to model the switch abstraction interface (SAI) pipeline, adds externally programmable extensions to the pipeline and introduces P4Runtime as a new control plane interface for controlling the pipeline.