4. Beyond cloud-native… Do you care about:
• High-performance forwarding
• Proven cloud-grade, carrier-grade scale
• Feature-rich for Kubernetes and LB, beyond CNI
• Feature-rich in general for networking + security
• Multi-tenancy
• Open source / community
• Open standards-based federation
• Multiple orchestrator support
• Solid vendor backing and optional services
• Collapsing stacked SDNs: e.g. K8s on OpenStack
• Ease of use
SDN ECOSYSTEM in CNCF
6. Typical Kubernetes setup
●Kubernetes Cluster
[Diagram: Kubernetes cluster — Master: APIServer, Controller, Scheduler, etcd; Worker: kubelet, kube-proxy, OVS/Bridge, Docker network, pods]
A Kubernetes system consists of a Master and Workers. The Master runs the API server, the container scheduler, and the database (etcd). Each Worker runs the kubelet, kube-proxy, and pods as containers.
The kubelet on a worker node uses the Container Network Interface (CNI), a plugin mechanism for network functions. Users can select a network plugin for their particular use case.
7. Typical Kubernetes Network
●Typical K8S network behavior
K8S typically has three types of network:
1) pod-network, which connects pods. All pods attach to a common network, which is used only internally.
2) service-network, which is used by the ClusterIP of a “Service”. Inter-service communication uses this network.
3) external-network, which is used by a “LoadBalancer”. Users outside of K8S use this network to reach pods.
[Diagram: pods behind a ClusterIP Service on the pod-network; a LoadBalancer Service facing the Internet/LAN]
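The ClusterIP and LoadBalancer behaviors described above correspond to the Service resource's `type` field. A minimal sketch (names, selectors, and ports are illustrative, not taken from the deck):

```yaml
# ClusterIP Service: reachable only inside the cluster on the service-network
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  type: ClusterIP        # the default type
  selector:
    name: db             # forwards to pods labeled name=db
  ports:
  - port: 3306
---
# LoadBalancer Service: exposed on the external-network
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    name: web
  ports:
  - port: 80
```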
8. Typical Kubernetes Network
●Typical K8S network behavior
When a pod or a Service is created, an IP address is assigned to it automatically.
1) An external user connects to “192.168.0.1”, the Web LoadBalancer address.
2) The Web LoadBalancer performs destination NAT to a selected nginx pod.
3) The nginx pod connects to “172.16.0.11”, the DB ClusterIP.
4) The DB ClusterIP performs destination NAT to a selected mysql pod.
That is, the pod-network is isolated from the external network: users cannot reach pods directly from outside. This LoadBalancer and ClusterIP behavior is implemented by kube-proxy.
[Diagram: external-network 192.168.0.0/24 with Web LoadBalancer .1; service-network 172.16.0.0/24 with ClusterIPs .11 and .12; nginx and mysql pods .21–.24 on the pod-network]
9. K8S CNI Typical behavior
●Typical K8S network policy (namespace)
A Namespace defines a group of pods, similar to an OpenStack project. It provides isolation between different namespaces: pods inside the same namespace can communicate with each other, while pods in different namespaces cannot.
Even when pods are created in different namespaces, their IP addresses are assigned from the common pod-network pool.
[Diagram: groupA and groupB pods (nginx, apache, mysql) sharing the common pod-network]
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  namespace: groupA
  labels:
    name: db
spec:
  containers:
  - name: mysql-gA
    image: mysql
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  namespace: groupB
  labels:
    name: db
spec:
  containers:
  - name: mysql-gB
    image: mysql
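Note that upstream Kubernetes does not block cross-namespace traffic by default; the isolation described here is enforced by the CNI plugin or by a NetworkPolicy. A minimal default-deny sketch for one namespace (illustrative):

```yaml
# Deny all ingress to every pod in groupA unless another policy allows it
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: groupA
spec:
  podSelector: {}        # empty selector matches all pods in the namespace
  policyTypes:
  - Ingress
```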
10. K8S CNI Typical behavior
●Typical K8S network policy (Label)
A Label assigns a particular role to each pod. Labels are also used to define access control: pods with the same service labels can reach each other, while pods with different labels cannot.
Labels are used to filter traffic because a pod's IP address may change when the pod moves, so IP-address-based filtering is unreliable.
For example, a pod labeled “wordpress” and “db” accepts connections from pods carrying both the “wordpress” and “webapi” labels. The connection is denied if only one of the labels matches.
[Diagram: pod-network with pods grouped by labels — service: wordpress / role: webapi, service: wordpress / role: db, service: redmine / role: webapi, service: redmine / role: db]
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      service: wordpress
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          service: wordpress
          role: webapi
11. K8S CNI Typical behavior
●Typical K8S network policy (Ingress)
[Diagram: nginx pods on the pod-network; Web LoadBalancer on the external-network 192.168.0.0/24; service-network 172.16.0.0/24; client subnets 192.168.10.0/24 and 192.168.20.0/24]
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      service: wordpress
      role: nginx
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 192.168.20.0/24
“Ingress” addresses incoming traffic to a pod. Users can limit sources to a particular CIDR in the ingress – from – ipBlock section, and to particular ports in the ports section.
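CIDR and port restrictions can be combined in one rule; a sketch, with illustrative values:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-http-from-subnet
  namespace: default
spec:
  podSelector:
    matchLabels:
      service: wordpress
      role: nginx
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 192.168.20.0/24
    ports:                 # only TCP:80 from that subnet is allowed
    - protocol: TCP
      port: 80
```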
12. K8S CNI Typical behavior
●Typical K8S network policy (Egress)
[Diagram: nginx pods on the pod-network; Web LoadBalancer on the external-network 192.168.0.0/24; service-network 172.16.0.0/24; destination subnets 192.168.10.0/24 and 192.168.20.0/24]
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      service: wordpress
      role: nginx
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 192.168.20.0/24
“Egress” addresses outgoing traffic from a pod. Users can limit destinations to a particular CIDR in the egress – to – ipBlock section, and to particular ports in the ports section.
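The same combination works for egress; a sketch restricting outbound traffic to one subnet and port (illustrative values):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-db-subnet
  namespace: default
spec:
  podSelector:
    matchLabels:
      service: wordpress
      role: nginx
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 192.168.20.0/24
    ports:
    - protocol: TCP
      port: 3306           # e.g. allow only the database port
```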
13. Typical Kubernetes External Connection
●Kubernetes setup
[Diagram: Master (APIServer, Controller, Scheduler, etcd); Worker (kubelet, kube-proxy, OVS/Bridge, Docker network, pods) on BMS, connected via the Physical Network to the Internet]
If a pod needs to reach another network, such as the Internet or a LAN, the pod's address is translated to the IP address of the Worker node by SNAT. The pod-network cannot reach external networks without SNAT.
This means an external system cannot filter on the exact IP address of a pod, only on that of the Worker node, because a pod's IP address may change and the pod may move to another worker node.
14. Considering use case
●Dedicated pod-network
[Diagram: Tenant A pods (nginx, mysql) on 192.168.10.0/24; Tenant B pods (nginx, mysql) on 192.168.20.0/24]
One Kubernetes setup has only one pod-network. If a dedicated pod-network is required, a separate Kubernetes setup must be deployed. Kubernetes can be deployed on virtual machines on OpenStack or elsewhere. But even when multiple Kubernetes setups are deployed on separate VMs, the different pod-networks cannot communicate without NAT, as described so far.
15. Considering use case
●OpenStack virtual machines for K8S setup
[Diagram: OpenStack (NovaAPI, Glance, Keystone, Neutron) hosting VMs on OVS/Bridge; each VM runs a full Kubernetes control plane (APIServer, Controller, Scheduler) plus workers, with Tenant A pods on 192.168.10.0/24 and Tenant B pods on 192.168.20.0/24]
16. Typical Enterprise use case
●Consider K8S limitation from Typical Enterprise Network Design
[Diagram: Service Network 172.16.0.0/24; Web servers 192.168.10.0/24; API servers 192.168.20.0/24; DB servers 192.168.30.0/24; Syslog and Monitor; developer zones Develop:A and Develop:B]
A typical enterprise network uses dedicated networks to isolate different sections and divisions. For this isolation, a firewall handles ingress/egress traffic between them.
For instance, the “Service Network” is only allowed to reach the “Web” servers on TCP:80. Develop:A is allowed to connect to “Web” on TCP:22,80. Develop:B is allowed to connect to “API” and “DB”. Develop:A and Develop:B cannot connect to each other. “Web” is allowed to connect to API on TCP:8080, and to Syslog, monitoring, and so on.
If “Web” and “API” are containerized with K8S, this existing network design might not work well.
Challenges:
• Dedicated POD network
• FW integration
• Existing Network connection
• Direct POD connection
18. Physical IP Fabric
(no changes)
Tungsten Fabric Overview
[Diagram: TungstenFabric CONTROLLER (Config, Control, Analytics, Svr Mgmt) driven by the ORCHESTRATOR (compute and network/storage orchestration); vRouters in each Host O/S (Windows, Linux, …) on BMS, controlled via XMPP; BGP peering with a Gateway toward the Internet/WAN or legacy environments; EVPN to TOR switches; logical view of Virtual Network Blue and Virtual Network Red joined by a FW — centralized policy definition, distributed policy enforcement]
19. Typical Kubernetes setup with Tungsten Fabric
●Kubernetes Cluster on BMS
TungstenFabric is a CNI plugin that provides additional network services to pods. It resolves the pod-network limitations described so far.
TungstenFabric has two modes: BMS mode and nested mode. BMS mode installs the TungstenFabric vRouter on the bare-metal server.
[Diagram: controller node (APIServer, KubeManager, Controller, Analytics, Analytics-DB) managing worker nodes, each running a TF vRouter, kubelet, CNI, Agent, and pods]
20. Typical Kubernetes setup with Tungsten Fabric
●Kubernetes Cluster on OpenStack
Nested mode does not install a TF vRouter in the VM, and the SDN controller is not installed in the Kubernetes cluster; only the kube-manager and the CNI are installed. The kube-manager calls the TF Controller, which works as the OpenStack Neutron plugin, and the CNI calls the TF Agent on the compute node. This is a unique solution that avoids running multiple SDN controllers inside VMs.
Also, the worker node does not need a TF vRouter; it uses VLANs to isolate the pod-network.
[Diagram: OpenStack services (NovaAPI, Glance, Keystone, Neutron) with TF Controller, Analytics, Analytics-DB; a compute node running the TF vRouter and Agent hosts a VM containing APIServer, kube-manager, kubelet, and CNI, with pods attached to a bridge via VLANs]
21. Typical Kubernetes setup with Tungsten Fabric
●Kubernetes Cluster with OpenStack
TungstenFabric can work with both OpenStack and K8S at the same time. It can extend the same virtual network between VMs and pods.
Likewise, the same security policies, such as Security Groups or label-based firewall rules, can be attached to both VMs and pods.
[Diagram: shared TF Controller, Analytics, Analytics-DB and OpenStack services (NovaAPI, Glance, Keystone, Neutron); a K8S node with TF vRouter, kubelet, CNI, Agent, APIServer, kube-manager, and pods; a compute node with TF vRouter and Agent hosting a VM on the same virtual network]
22. What challenges can TF resolve?
●Dedicated POD network
[Diagram: Tenant A pods (nginx, mysql) on a dedicated network 192.168.10.0/24]
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  annotations:
    "opencontrail.org/network": '{"domain": "default-domain", "project": "user1", "name": "pod-vn1"}'
  labels:
    name: db
spec:
  containers:
  - name: mysql-gA
    image: mysql
TungstenFabric provides a dedicated network to a pod using “annotations”.
Users can attach their own virtual network to a pod without a separate K8S setup.
23. What challenges can TF resolve?
●Inter POD network
[Diagram: pods (nginx, mysql) on two virtual networks, 192.168.10.0/24 and 192.168.20.0/24]
TungstenFabric allows multiple virtual networks to be connected to pods.
Users can define 5-tuple-based filters between virtual networks, like Security Groups in OpenStack. This makes it easy to define which traffic is allowed to connect.
24. What challenges can TF resolve?
apiVersion: v1
kind: Pod
metadata:
  name: cirros-vn1-1
  annotations:
    "opencontrail.org/network": '{"domain": "default-domain", "project": "juniper-test", "name": "pod-service-1"}'
  labels:
    application: service-app1
    label: web
spec:
  containers:
  - name: cirros-vn1-1
    image: docker.io/cirros
    imagePullPolicy: IfNotPresent
●Enforce Label Based Filter
Traffic filters are defined in YAML configured by the pod owner, so an IT supervisor cannot control traffic through such rules alone.
TungstenFabric enforces traffic with firewall rules defined by the IT supervisor: it provides a global policy over the pod-network. If a service owner violates that policy, traffic cannot reach other pods.
25. What challenges can TF resolve?
●Direct Connect from External Network
TungstenFabric provides a Floating IP on a pod interface, the same feature as in OpenStack. TungstenFabric performs destination NAT in its vRouter module to the exact pod IP address.
The picture on the left shows a “Router” doing the NAT, but in fact the TF vRouter on the worker node does it; there is no physical router or dedicated NAT server.
This is very useful for connecting to a pod directly from an external network for debugging purposes. It is not a standard K8S feature, so it requires the TungstenFabric API.
[Diagram: public-network destination 203.0.113.1 is NATed to pod IP 10.0.10.1 on the pod-network; external-network, service-network, and Web LoadBalancer shown for comparison]
26. What challenges can TF resolve?
●External Network Connection
As described, TungstenFabric connects to physical routers and HVTEPs, so it can associate an external network with a pod-network while keeping network isolation.
Each TungstenFabric virtual network has a VNI and a Route Target; its routes are advertised to the router/HVTEP via L3VPN/EVPN, and a tunnel is created between the router/HVTEP and the TF vRouter.
[Diagram: TF cluster (APIServer, KubeManager, Controller, Analytics, Analytics-DB) with two workers (TF vRouter, kubelet, CNI, Agent, pods) exchanging BGP L3VPN/EVPN routes with external servers and VMs]
27. What challenges can TF resolve?
●VNF integration
TungstenFabric can associate a pod-network with VNFs/PNFs.
It can steer pod traffic into a service chain: service chaining steers traffic through multiple VNFs via the TF vRouter instead of through actual VNF configuration.
Thus, it is possible to add, delete, scale out, or scale in without configuration changes in the VNF/PNF.
[Diagram: worker node (TF vRouter, kubelet, CNI, Agent, pods) with traffic steered through a TF vRouter/Agent node toward the Internet]
28. What challenges can TF resolve?
●Consider Typical Enterprise Network Design by TungstenFabric
[Diagram: Service Network 172.16.0.0/24; Web 192.168.10.0/24; API 192.168.20.0/24; DB 192.168.30.0/24; Syslog and Monitor; Develop:A and Develop:B]
TungstenFabric resolves many of the challenges of the default K8S features, so the network design in the picture on the left becomes possible.
Tungsten Fabric Resolves:
• Dedicated POD network
• FW integration
• Existing Network connection
• Direct POD connection