This document discusses CNI and the Linen CNI plugin. It begins with an introduction to CNI and how it allows plugins to configure network interfaces in containers. It then discusses the Linen CNI plugin, which is designed for overlay networks and uses Open vSwitch. It explains how Linen CNI works with Kubernetes and provides packet processing between nodes. The document also compares Linen CNI to other overlay networking solutions like OVN-Kubernetes.
5. What is CNI?
• CNI - the Container Network Interface
• An open-source project supported by the CNCF (Cloud Native Computing Foundation), with two main repositories:
  • containernetworking/cni: libraries for writing plugins to configure network interfaces
  • containernetworking/plugins: additional CNI network plugins
• Supports rkt, Docker, Kubernetes, OpenShift, and Mesos
6. What is CNI?
• CNI (Container Network Interface) is an API for writing
plugins to configure network interfaces in Linux
containers
7. CNI Spec
• 3 commands: ADD, DEL, and VERSION
• Configuration on stdin, results on stdout
• Runtime parameters via environment variables (CNI_ARGS) and capability args (CAP_ARGS)
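The contract above can be sketched with only the Go standard library: the runtime sets CNI_COMMAND in the environment, writes the network configuration JSON to the plugin's stdin, and reads a JSON result from stdout. The `handle` function and the result shapes below are illustrative, not the real CNI library API:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
)

// minimal view of a network configuration arriving on stdin
type netConf struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Type       string `json:"type"`
}

// handle dispatches one CNI command and returns the JSON a plugin
// would print on stdout. Illustrative sketch only.
func handle(cmd string, stdin []byte) ([]byte, error) {
	var conf netConf
	if err := json.Unmarshal(stdin, &conf); err != nil {
		return nil, fmt.Errorf("failed to parse network configuration: %w", err)
	}
	switch cmd {
	case "ADD":
		// a real plugin would create interfaces here and report them
		return json.Marshal(map[string]string{"cniVersion": conf.CNIVersion})
	case "DEL":
		// a real plugin would tear down interfaces here; DEL has no result
		return nil, nil
	case "VERSION":
		return json.Marshal(map[string]interface{}{
			"cniVersion":        conf.CNIVersion,
			"supportedVersions": []string{"0.3.0", "0.3.1"},
		})
	default:
		return nil, fmt.Errorf("unknown CNI_COMMAND %q", cmd)
	}
}

func main() {
	stdin, _ := io.ReadAll(os.Stdin)
	out, err := handle(os.Getenv("CNI_COMMAND"), stdin)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	os.Stdout.Write(out)
}
```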
8. How to Build?
• parseConfig: parses the network configuration from stdin
• cmdAdd is called for ADD requests (when a pod is created)
• cmdDel is called for DEL requests (when a pod is deleted)
• Add your code to the cmdAdd and cmdDel functions
• Simple CNI code sample at:
https://github.com/containernetworking/plugins/tree/master/plugins/sample
type PluginConf
func parseConfig(stdin []byte) (*PluginConf, error)
func cmdAdd(args *skel.CmdArgs) error
func cmdDel(args *skel.CmdArgs) error
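A fleshed-out parseConfig along the lines of that skeleton might look like the following. The fields beyond the standard cniVersion/name/type keys (here, an `anchorInterface` string) are hypothetical, added only to show where plugin-specific options land; real plugins embed types from the containernetworking/cni library rather than redeclaring them:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// PluginConf mirrors the JSON document delivered on stdin.
// Only the standard keys plus one made-up option are shown.
type PluginConf struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Type       string `json:"type"`

	// AnchorInterface is a hypothetical plugin-specific option.
	AnchorInterface string `json:"anchorInterface,omitempty"`
}

// parseConfig unmarshals and validates the stdin payload.
func parseConfig(stdin []byte) (*PluginConf, error) {
	conf := PluginConf{}
	if err := json.Unmarshal(stdin, &conf); err != nil {
		return nil, fmt.Errorf("failed to parse network configuration: %w", err)
	}
	if conf.Type == "" {
		return nil, fmt.Errorf("network configuration is missing \"type\"")
	}
	return &conf, nil
}

func main() {}
```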
12. CNI Plugins
• bridge: creates a bridge and adds both the host and the container to it
• IPAM: IP address allocation
• host-local: maintains a local database of allocated IPs
• dhcp: runs a daemon on the host to make DHCP requests on behalf of the container
• flannel: provides a layer-3 IPv4 network between multiple nodes in a cluster
• A huge variety of other plugin types, such as loopback, ptp, ipvlan, macvlan, etc.
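As an illustration of how these plugins compose, a minimal network configuration might pair bridge with host-local IPAM (the network name, bridge name, and subnet below are examples, not defaults):

```json
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```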
13. 3rd Party Plugins
• Project Calico - a layer-3 virtual network
• Weave - a multi-host Docker network
• Multus - a plugin for attaching multiple network interfaces
• CNI-Genie - a generic CNI network plugin
• Silk - a CNI plugin designed for Cloud Foundry
• Linen - designed for overlay networks and compatible with the OpenFlow protocol through Open vSwitch
• More than 10 third-party plugins!
15. What is Linen CNI?
A 3rd-party CNI plugin designed for overlay networks and compatible with the OpenFlow protocol through Open vSwitch
16. Overlay Network
• Underlay network: built from physical devices and links
• An overlay creates a new virtual network topology on top of the underlay
• Common encapsulations: GRE tunnels, VxLAN tunnels, MPLS, and VPNs
[Figure: overlay topology built on top of the underlay network]
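VxLAN, the encapsulation Linen relies on, wraps layer-2 frames in UDP with an 8-byte header carrying a 24-bit network identifier (VNI). A small sketch of that header layout per RFC 7348 (the helper name is ours):

```go
package main

import "fmt"

// vxlanHeader builds the 8-byte VXLAN header for a given 24-bit VNI
// (RFC 7348): a flags byte with the I bit set, 24 reserved bits,
// the VNI, and a final reserved byte.
func vxlanHeader(vni uint32) [8]byte {
	var h [8]byte
	h[0] = 0x08 // I flag: VNI field is valid
	h[4] = byte(vni >> 16)
	h[5] = byte(vni >> 8)
	h[6] = byte(vni)
	return h
}

func main() {
	fmt.Printf("% x\n", vxlanHeader(42))
}
```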
17. Comparison of multi-host networking

Comparison of multi-host overlay networking solutions

                  Calico                   Flannel               Weave                 Docker Overlay Network
Network Model     Pure layer-3 solution    VxLAN or UDP channel  VxLAN or UDP channel  VxLAN
Protocol Support  TCP, UDP, ICMP & ICMPv6  All                   All                   All

Reference: "Battlefield: Calico, Flannel, Weave and Docker Overlay Network"
18. Why Open vSwitch?
• Multi-host overlay networking
• Provide flexible network management
• Boosts packet processing, performance and throughput
19. Multi-host Overlay Networking
• All containers can communicate with all other containers
• All nodes can communicate with all containers (and vice-versa)
21. Performance
• Open vSwitch with the Data Plane Development Kit (OvS-DPDK)
• Intel DPDK accelerates switching and packet processing
22. Linen CNI Overview
Linen CNI is
• designed to meet the requirements of overlay networks and compatible with the OpenFlow protocol
• inspired by the Kubernetes OVS networking document
• a chained plugin that depends on the bridge plugin
23. Linen CNI Usage
On Host1:
$ ip netns add ns1
$ ip netns exec ns1 ip link
1: lo: <LOOPBACK> ...
$ CNI_PATH=`pwd` NETCONFPATH=/root ./cnitool add mynet /var/run/netns/ns1
$ ip netns exec ns1 ip link
1: lo: <LOOPBACK> ...
3: eth0@if97: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 ...
30. Management Workflow
• linen-cni: executed by the container runtime; sets up the network stack for containers
• flax daemon: a DaemonSet that runs on each host to discover newly joining nodes and manipulate the ovsdb
31. Packet Processing
• The docker bridge is replaced with a Linux bridge (kbr0)
• An OVS bridge (obr0) is created and added as a port to the kbr0 bridge
• All OVS bridges across all nodes are linked with VxLAN tunnels
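The last step — linking every node's OVS bridge with VxLAN tunnels — boils down to one ovs-vsctl invocation per remote VTEP. A sketch of the command generation a daemon like flax might perform (the helper name and the vxlan0/vxlan1 port-naming convention are our assumptions):

```go
package main

import "fmt"

// vxlanPortCmds returns the ovs-vsctl commands that would add one
// VxLAN tunnel port on the given bridge per remote VTEP IP.
// The port names (vxlan0, vxlan1, ...) are an illustrative convention.
func vxlanPortCmds(bridge string, vtepIPs []string) []string {
	cmds := make([]string, 0, len(vtepIPs))
	for i, ip := range vtepIPs {
		port := fmt.Sprintf("vxlan%d", i)
		cmds = append(cmds, fmt.Sprintf(
			"ovs-vsctl add-port %s %s -- set interface %s type=vxlan options:remote_ip=%s",
			bridge, port, port, ip))
	}
	return cmds
}

func main() {
	for _, c := range vxlanPortCmds("obr0", []string{"172.120.71.50"}) {
		fmt.Println(c)
	}
}
```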
32. Installation on K8S
• Open vSwitch is required
• kubelet settings:
kubelet ... --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/cni/bin
33. Installation on K8S
• Create a configuration list file in /etc/cni/net.d; the file must be named linen.conflist
• Make sure the linen, bridge, and host-local binaries are in /opt/cni/bin
• (Optional) Apply the DaemonSet flaxd.yaml to discover newly joining nodes
34. Network configuration reference
• ovsBridge: name of the OVS bridge to use/create
• vtepIPs: list of the VxLAN tunnel endpoint IP addresses
• controller: IP address and port number of the SDN controller

{
  "name": "mynet",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      //… bridge configurations
    },
    {
      "type": "linen",
      "runtimeConfig": {
        "ovs": {
          "ovsBridge": "br0",
          "vtepIPs": [
            "172.120.71.50"
          ],
          "controller": "192.168.2.100:6653"
        }
      }
    }
  ]
}
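The ovs section of that reference can be decoded with the Go standard library alone. The struct below mirrors the JSON keys (the Go type and function names are ours), and net.SplitHostPort handles the controller "host:port" string:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net"
)

// ovsConfig mirrors the "ovs" object in runtimeConfig; the Go
// type name is ours, the JSON keys come from the reference above.
type ovsConfig struct {
	OVSBridge  string   `json:"ovsBridge"`
	VtepIPs    []string `json:"vtepIPs"`
	Controller string   `json:"controller"`
}

// parseOVS decodes the ovs object and splits the controller
// endpoint into host and port.
func parseOVS(raw []byte) (*ovsConfig, string, string, error) {
	var c ovsConfig
	if err := json.Unmarshal(raw, &c); err != nil {
		return nil, "", "", fmt.Errorf("failed to parse ovs config: %w", err)
	}
	host, port, err := net.SplitHostPort(c.Controller)
	if err != nil {
		return nil, "", "", fmt.Errorf("bad controller endpoint: %w", err)
	}
	return &c, host, port, nil
}

func main() {
	raw := []byte(`{"ovsBridge":"br0","vtepIPs":["172.120.71.50"],"controller":"192.168.2.100:6653"}`)
	c, host, port, err := parseOVS(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(c.OVSBridge, host, port)
}
```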
38. Network Models

Comparison of multi-host overlay networking solutions

               Calico            OVN-Kubernetes    Flannel               Linen
Network Model  Layer-3 solution  Layer-3 solution  VxLAN or UDP channel  VxLAN
Performance    High              High              Medium                Medium
Complexity     High              High              Low                   Low
39. Takeaway
More network virtualization projects:
https://github.com/John-Lin/linen-cni
https://github.com/John-Lin/tinynet
Contact me:
@johnlin__
SDN-DS.TW: https://www.facebook.com/groups/sdnds.tw/