5. DISTRIBUTING CLOUD TECHNOLOGIES & THE EDGE CLOUD
[Diagram: the edge continuum from endpoint devices to the cloud data center, with latency expectations at each tier.]
• Devices / things (drones, phones, PCs, smart cities) across verticals such as manufacturing, transportation, energy, video, healthcare, and retail.
• On-premise edge: customer premise equipment (uCPE, SD-WAN, edge compute or fog nodes) and small cells (licensed & unlicensed). Latency expectation: varies, down to <1 ms.
• Network edge: wireless access (base stations & RAN), next-gen central office (multi-access edge computing), wireline fixed access (vCCAP, PON), and regional data centers. Latency expectation: <5 ms to 10-40 ms.
• Network core: latency expectation <60 ms.
• Data center / cloud: latency expectation ~100 ms.
Drivers for the edge: latency, bandwidth, security, and connectivity.
MULTIPLE EDGE(S) & EDGE CLOUD LOCATIONS:
EACH WITH UNIQUE LATENCY AND EFFICIENCY REQUIREMENTS
6. WHAT IS EDGE COMPUTING?
Edge computing is the placement of data center-grade network, compute & storage closer to endpoint devices to improve service capabilities, optimize TCO, comply with data locality requirements, and reduce application latency.
The edge is the outermost layer of processing or network before the transition to another network.
7. Intel Confidential
OpenNESS Overview
OpenNESS is an open source reference toolkit to develop, securely on-board and manage new edge services on the On-Premise & Network Edge.
• Edge Platform Software (where: uCPE, vRAN, NGCO): access termination, traffic steering, multi-tenancy for services, service registry, service authentication, telemetry, and cloud and application frameworks.
• Edge Controller Software (where: data center / cloud): Edge Platform discovery, control, and policy management, exposed via standardized APIs with a web-based GUI for easy application onboarding.
10. Intel Confidential
OpenNESS Deployment Models (continued)
On Prem Edge:
• Controller deploys Docker* containers to the Edge Platforms.
• Controller deploys VMs with libvirt.
Network Edge:
• Controller deploys Docker containers with Kubernetes* (a K8s* master manages the Edge Platform nodes).
• Controller deploys VMs with OpenStack*.
*Other names and brands may be claimed as the property of others
11. Intel Confidential
OpenNESS Interface Architecture
[Diagram: the OpenNESS Edge Platform and Controller with their key interfaces.]
• Edge Platform: runs the OpenNESS Edge Services Software (community and commercial editions), Edge Apps, and Public Cloud Apps. It exposes the Edge App API (service registration/discovery), a Cloud Connector API toward the public cloud, and the Dataplane Policy API (filter and route edge traffic). The dataplane uses optimized accelerators (NTS, DPDK, VPP, OVS-DPDK, SR-IOV, DDP).
• OpenNESS Controller Software (community and commercial editions): exposes the NES Control API (policy, telemetry, lifecycle) to service orchestrators and OSS, and uses the VIM API toward OpenStack*/Kubernetes* infrastructure managers for resource provisioning and EPA configuration.
• Commercial EPC C-plane: receives traffic steering requests from the controller over a solution-specific interface.
• Commercial EPC U-plane/UPF: terminates the access network from the eNodeB/gNodeB over S1-U/N3 and forwards edge and non-edge traffic based on APN filtering over SGi/N6.
• Interface types shown: 3GPP* interface, ETSI* MEC interface, cloud interface, solution-specific interface.
*Other names and brands may be claimed as the property of others
12. Intel Confidential
Built on Cloud Native Container Technologies
[Diagram: a Kubernetes*/Docker deployment in which a Kubernetes* Master (container lifecycle management) and the OpenNESS Controller manage an OpenNESS Edge Platform node (K8s node on CentOS* Linux 7, Xeon-SP/D with FPGA NIC, 40G/10G Ethernet, Movidius* VCA) running OpenNESS applications and microservices in containers and pods.]
Platform technologies, each marked in the legend as available or work in progress:
• eXpress Data Path (XDP)*
• Fortville* / Columbiaville* DDP
• CPU Manager for Kubernetes*
• Multus* and CNI*
• Device Plugins (FPGA, GPU, QAT)
• Node Feature Discovery (NFD)
• NUMA* Manager / Topology Manager
• Collectd
• SR-IOV*
• OPAE*/RSU*
• RT Kernel
• 1588 PTP*
• SR-IOV*-CNI* (DPDK/Kernel)
• SR-IOV* Device Plugin
• Metric Scheduler
• Userspace & CMK Device Plug-in
*Other names and brands may be claimed as the property of others
13. Intel Confidential
Intel Internal Use Only
PARTNERSHIPS: TO WIN WITH THE ECOSYSTEM
Use industry-leading ecosystem programs to drive edge transformation & enable developers:
• Invest in open source and standards, and in industry collaboration.
• Intel® Network Builders & Intel® Developer Zone: 350+ members, 35+ comms SPs, 100+ PoCs/trials/deployments based on member solutions, 12,000+ Network Builders University Program members, 50+ Network Edge Ecosystem Program members, 13,000+ developers trained worldwide, 251,480+ IDZ page views.
• Intel® Select Solutions: NFVI (Network Functions Virtualization Infrastructure) and uCPE (Universal Customer Premises Equipment); Intel® Select Solutions for Visual Cloud available Q2'19.
*Other names and brands may be claimed as the property of others.
15. Developer Opportunity
• Availability of unlicensed spectrum, and small form factor base stations mandated for small cells.
Scale Opportunity
• 600+ TIP companies: AT&T, Verizon, Facebook, GM, Ford, Fiat-Chrysler, BT, Adlink, Be.Yond, ACS, Volteo, Mirantis, Telefonica, Baicells, Deutsche Telekom, Mobile stack, Q Associates, Tech Mahindra, Viavi, Affirmed, Amdocs, Skydome, Crown Castle, with MobiledgeX (a spinoff of Deutsche Telekom).
• Chunghwa Telecom (Taiwan), with Nokia, provided multiview sports streaming to 20,000 fans using small cells in 2017.
Platform and Software Design
• Open source cellular design via the OpenCellular project to provide connectivity to 4 billion people.
• Last but not least, open source software: NEV-SDK, Akraino, MTP, ETSI MEC, OpenStack Edge, Kubernetes, FD.io.
16. Intel Confidential
Internal Architecture for Cellular Access
[Diagram: OpenNESS Controller and OpenNESS Edge Platform internals for cellular access. Legend: API flow (direct), logical mapping.]
• OpenNESS Controller: UI web front end, authentication, infrastructure management, server lifecycle management, traffic rules, DNS rules, Edge Lifecycle Agent, Edge Virtualization Agent, LTE/CUPS Agent, Kubernetes* Master, and a Controller Gateway. It communicates with the Edge Platform Gateway over gRPC-based and HTTP/REST-based APIs.
• OpenNESS Edge Platform: Edge Application Agent (service registration, authentication and discovery), Edge Lifecycle Agent, Edge Dataplane Agent, Edge Virtualization Agent, DNS server, Network Traffic Services datapath, and OS + container runtime. It hosts producer or consumer edge apps (in VMs or containers) and applies application traffic steering rules on the datapath.
• Commercial EPC U-plane/UPF: performs APN-based traffic filtering between the PDN and the Edge Platform; the LTE/CUPS Agent configures it from the controller.
*Other names and brands may be claimed as the property of others
17. Intel Confidential
OpenNESS Microservices
• Edge Application Agent (EAA): service registration, service discovery, and communication support for services; application availability; session state relocation support procedures; traffic rules and DNS rules activation; access to persistent storage and time-of-day information; and service-specific functionality. Mapped to ETSI MEC Mp1.
• Edge Dataplane Agent (EDA): establishes routing among applications, networks, services, etc. Mapped to ETSI MEC Mp2.
• Edge LifeCycle Agent (ELA): configuration of platform, application rules and requirements; carries out application lifecycle support procedures, management of application relocation, etc. Mapped to ETSI MEC Mm5.
• Edge Virtualization Agent (EVA): manages virtualized resources, e.g. to realize application lifecycle management. Mapped to ETSI MEC Mm6 and Mm7.
• LTE CUPS Agent: configures the EPC control plane and user plane for the Edge Platform.
• DNS Server: edge DNS server for apps deployed on the edge. For apps not on the edge, it behaves as a DNS forwarder.
• Edge Platform/Controller Gateway: API gateway for Edge Platform and controller communication. Controller and agent microservices communicate through these gateways.
• Data Plane Network Transport Services (NTS): application-specific traffic steering for the Edge Platform.
*Other names and brands may be claimed as the property of others
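The registration/discovery role the EAA bullet describes can be modeled as a tiny service registry. This is an illustrative sketch with hypothetical class, method, and service names, not OpenNESS source code:

```python
# Conceptual model of EAA-style service registration and discovery.
# Illustrative only; real OpenNESS exposes this over HTTP/REST APIs.

class ServiceRegistry:
    """Tracks producer services so consumer apps can discover them."""

    def __init__(self):
        self._services = {}   # service name -> endpoint URI

    def register(self, name, endpoint):
        # A producer app registers the service it provides.
        self._services[name] = endpoint

    def discover(self, name):
        # A consumer app looks up a registered producer service.
        return self._services.get(name)

    def deregister(self, name):
        self._services.pop(name, None)


registry = ServiceRegistry()
registry.register("video-analytics", "https://edge.local/services/video")
assert registry.discover("video-analytics") == "https://edge.local/services/video"
assert registry.discover("missing-service") is None
```

A producer registers once, consumers discover by name, and deregistration removes the entry; the real EAA adds authentication and notification on top of this basic pattern.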
Editor's Notes
When you think about the Network Edge different people will have different interpretations of what this means.
There are multiple edge and edge cloud locations; the primary focus of this deck is the On-Premise Edge (which includes the Enterprise Edge) and the Network Edge, as shown in the diagram.
This presentation does not go into any great detail around the Device Edge or the Cloud Edge.
This rapid growth of data requires advanced intelligence closer to the endpoints that are both generating and consuming data.
To capture and accelerate this opportunity, the powerful data processing and analytics capabilities that have traditionally lived in the heart of the data center must be strategically placed closer-and-closer to the data generating and consuming endpoints, at the “edge.”
By expanding the powerful capabilities of the data center outward, service and network providers can deliver more powerful services, reduce application latency by processing more data closer to the edge, and optimize TCO.
Speaker notes:
Before getting into its nuts and bolts, let's review the context of OpenNESS: what problem it is solving, and how it partitions the solution space.
OpenNESS is “an open source” reference toolkit,
It provides open APIs for “developing”, “securely on-boarding” and “managing” edge services application.
It solves the problem of deploying Edge Services in telco networks using cloud computing capabilities and IT service environment. [performance, platform resources, & devops]
In the absence of such a platform, App Developers and Architects have to port and integrate their application services into proprietary network appliances, which may affect their responsiveness and time to market.
The following slide shows an animated video [from the OpenNESS website] giving an artist's impression of how OpenNESS enables developers.
[Before that a quick review of the solution space] OpenNESS consists of two main components,
The first one is the Edge Platform that runs the Edge Services workloads including accelerators, microservices and application frameworks.
And, the second one is the Edge Controller that manages it.
These components provide different functionality and are likely to be in different locations – as shown.
To understand it [the solution space], in today’s session we will go over
The OpenNESS deployment models,
Its architecture and interfaces, and
Some of the platform enhancement work we have been doing to bring together the elasticity/programmability of cloud and the performance and efficiency of virtualized network functions – which is important for edge apps and services.
** The slides and the flow are top-down: you will first find a reference or an introduction, followed by details later.
Speaker Notes:
This is a short animation showing the context of OpenNESS, how it helps developers bring their innovative solutions to 5G and next gen network infrastructure.
See more at “www.openness.org”
Speaker notes:
One of the first questions to address with Network Edge Services is: what is the deployment model?
A deployment model is a configuration option, one of the many, of geometries and topologies, of how Edge Services will be provided.
It has implications for the form factor, the underlying virtualization infrastructure, and also latency.
OpenNESS focuses on the following two models:
The On-Prem Edge and the Network Edge.
In the On-Prem Edge deployment model:
The Edge Services Software is typically deployed in a Universal CPE (uCPE) form-factor Edge Platform,
Within an enterprise perimeter,
The OpenNESS controller is deployed in a centralized Telco/Public Cloud or in some hierarchical configuration.
In the Network Edge deployment model:
The Edge Services Software is deployed in a Network Function Virtualization Infra, an NFVI, form factor Edge Platform,
In a Central Office or Wireless Access aggregation point.
The OpenNESS Controller is again in a centralized telco cloud or regional centers for hierarchical configuration.
Speaker notes:
As shown here, OpenNESS has a VIM-based deployment for the Network Edge Deployment Model.
In the 19.06 (June) release we support K8s as the infra manager for cloud-native deployment.
Following that, OpenNESS will support OpenStack VIM.
For the On-Prem Edge Deployment Model, the OpenNESS Controller performs the infra management directly.
The reason we are going with this architecture is that K8s/OpenStack currently have some challenges for this type of deployment.
We are looking at options to address this in the roadmap.
==========================================
Pending clarifications
CF: What are the challenges mentioned above in On-Prem Edge (k8/OpenStack – pending Engg) – Networking ?
CF: EPA awareness at Controller?? Can we talk about it ? Kannan to send Onboarding with EPA flow – Anurag to reconfirm.
Speaker Notes:
This figure shows an abstract architecture of OpenNESS and its key interfaces.
The main functional blocks are the Red Boxes, the Green Boxes and the Blue Boxes.
The Red boxes show the minimal viable platform for deploying Edge Services and consist of
The OpenNESS Controller software
The OpenNESS Edge Services software
The OpenNESS/Network Platform dataplane software
The Green boxes show applications and application services:
These applications interact with Edge Services and potentially with each other.
They may consume data traffic from/to the dataplane.
Some applications produce service data for other such consumer applications.
The Dark blue boxes show complementary commercial solutions required to integrate OpenNESS with a cellular network.
The interfaces shown are inspired by ETSI MEC, i.e. they follow the corresponding interfaces in the ETSI MEC specification, where some interfaces are specified in detail and others are only recommended or left without further definition.
OpenNESS provides a working instance of the implementation of these interfaces for the minimal viable platform.
Applications follow the lifecycle prescribed by ETSI MEC.
Certain interfaces with the commercial VNFs relating to control and data plane e.g. traffic steering, are solution-specific.
In Release 1.0, we are only doing docker/Kubernetes based VIM.
Metapoint: we are presenting _first_ release, not _eventual_ release.
Installation from github.
Community OpenNESS does not do orchestration.
*Apps can’t span multiple edge platforms.
Speaker Notes:
OpenNESS Edge Platform bundles a host of container technologies that we have been developing for cloud-native infrastructure; some are already available and some will be supported per the roadmap. These provide
Low level control of resources for deterministic compute, and
Ability to deploy VNFs and apps for Network Edge.
We learned this through real deployments, e.g. FlexRAN. Here are three examples:
SR-IOV: capability to allocate virtual device instance queues into containers, and take advantage of enhanced filtering using Intel DDP (Dynamic Device Personalization profile). This is a key requirement to integrate accelerators and share resources on the platform to achieve the required density. This is applicable for NICs, FPGAs, and other accelerators.
OPAE = Open Programmable Acceleration Engine. This provides the ability to orchestrate and manage FPGA resources on the compute nodes and to query device capabilities. With this, you can perform Remote System Updates from the OpenNESS controller, dynamically at runtime.
RT Kernel: achieving deterministic compute for low latency applications and VNFs on COTS hardware while running multiple instance on the same platform. This technology has been proven in challenging environments like RAN, and will be available in OpenNESS.
Also, we need to indicate which parts are in Intel roadmap, vs what parts we are hoping the user community will provide. Find out what we can commit to and communicate it.
Other Cloud Native Container Technologies:
Multus: A CNI meta-plugin that facilitates adding multiple network interfaces in a Kubernetes Pod using other third-party delegate CNI plugins. The challenge before the Intel team started working in k8s was that k8s supported only one pod network interface, eth0. Containerized NFV use cases need multiple network interfaces, so Multus was developed to provide them and support NFVi use cases.
SR-IOV Device plugin/DDP: It is a Kubernetes device plugin to discover and manage SR-IOV networking resources in a Kubernetes node. The plugin is available externally and supports SRIOV capable devices with a range of drivers, as well as supporting standard NICs. Future planned feature includes detecting and managing DDP profiles on the NIC.
SR-IOV CNI: A CNI(Container Networking Interface) plugin to enable and configure a SR-IOV network interface (SR-IOV VF – Virtual Function ) in a container environment. **Additional information on SR-IOV can be found at: https://builders.intel.com/docs/networkbuilders/adv-network-features-in-kubernetes-app-note.pdf
CPU Manager for Kubernetes (CMK): Open Source Software hosted on Intel’s Github repository. CPU Pinning and Isolation is used by performance sensitive workloads to first achieve the required performance through CPU Pinning and to guarantee this performance through isolation of workloads. Intel's open source project CMK enables these features in Kubernetes, with Kubernetes native mechanisms. Intel has also been working with the Kubernetes community to bring these features to Kubernetes itself through the introduction of a CPU Manager within Kubernetes. **Additional information about Version 1.3.1. can be found at: https://builders.intel.com/docs/networkbuilders/cpu-pin-and-isolation-in-kubernetes-app-note.pdf
Node Feature Discovery (NFD): Open source software, maintained under the Kubernetes SIGs Github repository. Detects hardware features available on each node in a Kubernetes cluster, and advertises those features using node labels. Currently available as Version 0.3.0. **More information at: https://builders.intel.com/docs/networkbuilders/node-feature-discovery-application-note.pdf
Topology Manager: Proposed feature for Kubernetes release 1.15. Topology Manager provides a mechanism to coordinate hardware resource assignments for different components in Kubernetes, with Alpha support for CPU and device alignment. NUMA awareness is a key requirement for performance-sensitive workloads; solutions such as telco depend on it to be functional. Kubernetes is currently agnostic to NUMA alignment, which hampers the adoption of containerized solutions for these workloads. Intel is working with the Kubernetes community to introduce a new feature in Kubernetes to enable these workloads and more information can be found at: https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/0035-20190130-topology-manager.md.
Userspace Network Device Plugin: A Kubernetes Device Plugin that manages and advertises the ports of a Userspace enabled v-switch to the Kubelet, making the ports available to a Userspace application running within a Kubernetes pod. The plugin is currently in early development with basic functionality and currently only supporting OVS-DPDK, but with plans to extend support to AF-XDP and VPP. Other planned features include making use of hardware offload.
Device Plug-ins – The k8s device plug-ins provide a device plug-in framework which enables vendors to advertise, schedule, and setup devices with native k8s integration. Intel have developed FPGA, GPU, and QAT device plug-ins. More information at: https://builders.intel.com/docs/networkbuilders/intel-device-plugins-for-kubernetes-appnote.pdf
Metric Scheduler – There is currently no mechanism to take telemetry into consideration when making scheduling decisions on dynamic resource utilization, to prevent workloads from being placed on nodes showing anomalies identified via telemetry, or to move running workloads in advance of failure to prevent service disruption. The Metric Scheduler enhances k8s scheduling with the help of a scheduler extender. Workload migration, used in conjunction with a Kubernetes descheduler, is work in progress.
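The CPU pinning and isolation that CMK provides (described above) is built on a simple OS primitive: restricting which logical CPUs a process may run on. Here is a minimal, illustrative sketch of that primitive using the Linux-only `os.sched_setaffinity` call; it is not CMK code.

```python
# Minimal illustration of CPU pinning, the mechanism CMK builds on.
# os.sched_setaffinity is the Linux syscall wrapper; CMK layers
# Kubernetes-native pool management on top of primitives like this.
import os

def pin_to_cpus(pid, cpus):
    """Restrict a process to an explicit set of logical CPUs."""
    os.sched_setaffinity(pid, cpus)
    # Return the resulting affinity set so callers can verify it.
    return os.sched_getaffinity(pid)

# Pin the current process (pid 0 means "self") to CPU 0 only.
allowed = pin_to_cpus(0, {0})
print(allowed)
```

A performance-sensitive workload pinned this way no longer migrates between cores, which is the first half of the pin-and-isolate pattern; the second half is keeping other workloads off those cores.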
============================
RSU: Remote System Update
Multus-CNI: Multus Container Network Interface
NFD: Node Feature Discovery. EPA feature– detect/advertise HW/SW capability; intelligent scheduling of a workload in K8s.
NUMA: Non-Uniform Memory Access (separate memory for each core to avoid memory access perf bottleneck)
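The NFD behavior noted above (detect hardware capability and advertise it for intelligent scheduling) can be sketched in miniature: parse a CPU flags line and turn it into node-label-style key/value pairs. The label keys and sample flags below are illustrative, not the actual keys NFD emits.

```python
# NFD-style feature detection in miniature: derive node labels from a
# /proc/cpuinfo "flags" line. Label keys here are illustrative only.

def labels_from_cpuinfo(flags_line, interesting=("avx2", "avx512f", "sse4_2")):
    """Turn a cpuinfo flags line into node-label style key/values."""
    flags = set(flags_line.split())
    return {f"feature.node.example/cpu-{f}": str(f in flags).lower()
            for f in interesting}

sample = "fpu vme avx2 sse4_2 aes"
print(labels_from_cpuinfo(sample))
```

A scheduler can then match a workload's node selector against these labels, which is how feature-aware placement works in practice.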
DCG instructor presents slide ( 1 minute)
Ensuring our silicon solutions can be easily consumed and in a standard way is critical to the success of network transformation. As a result, Intel continues to contribute to the SW community and standards as well as collaborate with industry partners to ensure our solutions can be adopted in a standard fashion. For example, we are involved in many Linux Foundation projects, including OPNFV to drive NFV adoption in an open way. In addition, we contribute to ONAP to ensure you can easily place and manage workloads in the entire network.
In addition, it takes an entire ecosystem to drive this type of transformation. As a result, we created the Intel Network Builders program, with over 350 members, with the goal of accelerating network transformation. Through this collaboration, we've stood up over 100 unique solutions through PoCs, trials and deployments to help accelerate network transformation, and trained 13,000+ developers worldwide on these technologies.
And finally, we are utilizing the Intel Select Solutions.
We have two classes of solutions available through our partners. The first is ISS for NFVI, or Network Functions Virtualization Infrastructure, which enables you to converge multiple virtual network functions onto general-purpose servers. It is really targeted at broader NFV deployments.
The second is Intel SS for Universal CPE, a fast-growing market segment where CoSPs can deliver managed enterprise services such as software-defined WAN to a virtualized server on an enterprise premise. This enables CoSPs to create new revenue streams and also gives greater flexibility to the enterprise.
Intel Select for VCD will be available in Q2 2019.
Speaker Notes:
The OpenNESS architecture is based on microservices, each performing a targeted function (as described on another slide).
These microservices interact with each other via gRPC and Swagger interfaces to control the system as a whole.
The API messages are physically routed via the two gateway microservices
The Edge Lifecycle Agent and the Edge Virtualization Agent are distributed between the Edge Platform and the Controller; the edge platform can continue to work if the link with the controller is down, because the critical information is locally available.
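The resilience property described here (the platform keeps working from locally available state when the controller link drops) follows a cache-and-continue pattern. The sketch below uses hypothetical names to show the idea, not actual agent code:

```python
# Cache-and-continue sketch: the platform-side agent keeps a local copy
# of controller-provided state so it can serve it even when the
# controller link is down. Names are illustrative.

class PlatformAgent:
    def __init__(self):
        self._cache = {}
        self.link_up = True

    def sync_from_controller(self, state):
        # Normal path: refresh the local copy of critical state.
        self._cache.update(state)

    def get_policy(self, key):
        # Served from the local cache whether or not the link is up.
        return self._cache.get(key)


agent = PlatformAgent()
agent.sync_from_controller({"traffic-rule-1": "steer:443->cdn"})
agent.link_up = False                      # controller link goes down
assert agent.get_policy("traffic-rule-1") == "steer:443->cdn"
```

The design choice is that reads never depend on the controller being reachable; only writes (syncs) do.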
To understand the interactions between apps and microservices, some key concepts should be noted
Key Concept #1: Types of Apps
Apps are deployed by the OpenNESS Controller. Only apps approved by the Controller admin can be deployed on the Edge Platform.
There are two types of Apps
Producer: provides a service to other apps on the Edge Platform.
Consumer: consumes end user traffic and optionally can avail services from producer apps on the same edge platform (for now, we do not support cross edge-platform producer apps).
They follow these restrictions (to support an open architecture):
A producer always MUST “authenticate/register” so that it can advertise its services to consumer apps.
A consumer MUST authenticate/register IF it wants to invoke available services from a Producer.
A Producer cannot consume end-user traffic directly, other than via a consumer app.
Key Concept #2: How Apps are authenticated
Now let's talk about how authentication/registration works (in this section, we can jump ahead to the sequence diagram slides and then return):
Apps authenticate/register over HTTP /REST APIs with the EAA service.
The EAA authenticates the App, gets a certificate from the Controller (via the ELA) and delivers to the app.
The certificate is used for all subsequent API calls.
* Other types of authentication such as subscriber based service authentication will be added later in the roadmap.
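The three-step flow above (app registers with the EAA, the EAA obtains a certificate from the controller via the ELA, and that certificate is used for all subsequent calls) can be modeled as a toy handshake. All class and method names here are hypothetical, and the hash stands in for real PKI:

```python
# Toy model of the registration flow. Illustrative only: real OpenNESS
# uses HTTP/REST APIs and proper certificate issuance, not a hash.
import hashlib

class Controller:
    def issue_certificate(self, app_id):
        # Stand-in for real PKI: derive a deterministic token.
        return hashlib.sha256(f"cert:{app_id}".encode()).hexdigest()

class EAA:
    def __init__(self, controller):
        self.controller = controller
        self.certs = {}

    def register(self, app_id):
        # EAA obtains a certificate from the controller for the app.
        cert = self.controller.issue_certificate(app_id)
        self.certs[app_id] = cert
        return cert

    def call_api(self, app_id, cert):
        # Every subsequent API call must carry the issued certificate.
        return self.certs.get(app_id) == cert


eaa = EAA(Controller())
cert = eaa.register("video-app")
assert eaa.call_api("video-app", cert)          # authenticated call succeeds
assert not eaa.call_api("video-app", "bogus")   # wrong credential is rejected
```

The essential shape is that the app never talks to the controller directly; the EAA mediates credential issuance, which matches the flow described in the notes.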
Key Concept #3: How data is steered to apps
Now, let's talk about how traffic steering works.
First, the LTE CUPS agent in the controller configures traffic steering (by APN filtering) via a commercial EPC U-Plane VNF. This is how the edge platform gets access to UE traffic.
Next, the ELA service provides policies for the app
There are two types of policies
Traffic steering rules: These are deployed via the EDA service using NTS
DNS rules are implemented using a 3rd party DNS Server
Once the policies are set up, the NTS delivers the datapath to the app.
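The two policy types above can be sketched as simple matching functions: traffic steering rules match flows to a destination app, and DNS rules answer locally for edge apps while forwarding everything else. The rule fields, app names, and the forward-upstream sentinel are all illustrative:

```python
# Sketch of the two policy types: traffic steering rules and DNS rules.
# Illustrative only; NTS and the edge DNS server implement these in the
# dataplane, not in Python.

def steer(packet, rules):
    """Return the target app for a packet, or None to pass through."""
    for rule in rules:
        if packet["apn"] == rule["apn"] and packet["dst_port"] == rule["dst_port"]:
            return rule["target_app"]
    return None

def resolve(name, edge_records):
    """Edge DNS: answer locally for edge apps, else signal forwarding."""
    return edge_records.get(name, "FORWARD_UPSTREAM")


rules = [{"apn": "edge.apn", "dst_port": 443, "target_app": "cdn-cache"}]
assert steer({"apn": "edge.apn", "dst_port": 443}, rules) == "cdn-cache"
assert steer({"apn": "other.apn", "dst_port": 443}, rules) is None
assert resolve("cdn.edge.local", {"cdn.edge.local": "10.0.0.5"}) == "10.0.0.5"
assert resolve("example.com", {}) == "FORWARD_UPSTREAM"
```

Unmatched traffic and unknown names fall through to the normal network path, which is the non-edge behavior the notes describe.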
Note that in 1.0 we provide either Kubernetes or unmanaged virtualization. With unmanaged, we don't care whether VMs or containers are used; with managed, we expect K8s to manage Docker containers.
Q: (CF) How DNS server is connected on data plane – to the subscriber or App ? Kannan ?? Whether datapath or mgt nw ??
Talking points:
The OpenNESS implementation is based on a flexible and scalable microservices architecture.
These microservices are extendable and replaceable.
Here are some of the key microservices.