Network Virtualization,
Overlays and Containers
Srini Seetharaman
srini.seetharaman@gmail.com
May 2015
Agenda
• Today's DC networks
• Network Virtualization
‒ Motivation, Requirements, Architecture
• Service Virtualization
‒ Motivation, Requirements, Architecture
• OpenStack Networking
• Docker Networking
• Hands-on overlay networking
• Lessons learned
Technology Trends
Application Rollout Today
• Poor automation for VLANs, service contexts, and VRFs
• Poor legacy application design?
(Diagram: a three-tier application with Web, Application, and Database tiers)
Typical Data Center Design
(Diagram: core, aggregation, and edge layers connecting racks, with application groups A and B spread across racks)
Problem: Network not ready for VMs
Over 70% of today's servers are virtual machines, yet VMs are not
treated as first-class citizens by the network:
‒ East-west traffic poorly managed
‒ Lack of prioritization and rate-limiting at the VM level
‒ Traffic between VMs on the same server often unsupervised
‒ IP/MAC overlap not allowed, and addressing limited by VLANs
The same applies to containers. These are symptoms of a broader
problem: the lack of proper network abstractions and policy layering.
Solution: SDN and NFV
Business benefit             How?
Reduced time to revenue      Speed-up of service provisioning
OpEx saving                  Automated operations and easier management of resources
New revenue                  New business models centered around on-demand usage
Feature velocity             Introduce changes quickly according to business logic needs
Improved policy compliance   Ensure that cloud workloads comply with enterprise policies (e.g., access control)
Reduced OpEx during upgrades Introduce new functions and services by replacing just the software stack
Trend #1: Network Virtualization
Dynamic, programmable, automated
(Diagram: an SDN-based virtualized network platform sitting between the computing infrastructure and the storage infrastructure)
Network Virtualization Requirements
• Integration with legacy network: support bare-metal servers, appliances, and gateways
• Traffic isolation across virtual networks: VLAN, VxLAN, and GRE support, allowing IP overlap across tenants
• End-to-end visibility of VM traffic: edge-based control of VM traffic and scalable host tracking
• Troubleshooting support: end-to-end visibility that maps virtual to physical scalably
• Orchestrating virtual L4-L7 services: provisioning and chaining of virtual services
• Application policy: application-level policy across and within virtual networks
Trend #2: Service Virtualization
NFV: Step 1, virtualize network functions; Step 2, chain/stitch them together.
NFV in Data Centers
1. Virtualizing the L4-L7 network service appliance (e.g., load balancer)
2. Chaining services to ensure that traffic is routed through the virtual appliances
3. Optimizing service delivery for applications:
• Increasing the number of virtual appliances
• Increasing the CPU or memory of each appliance
• Placement of virtual appliances
• Offloading certain tasks to the NIC or switch
Trend #3: New Infrastructure Tools
(Diagram: open-source tools emerging for compute, orchestration, and SDN control)
Deploying Network Virtualization
Goal: an SDN-based virtualized network platform on top of the computing infrastructure.
Deployment mode #1: Underlay
• A controller cluster (driven via CLI, REST, or GUI) programs the physical switches directly, with custom routing by the controller
• VPN termination and L3 routing at the edge connect the fabric to the Internet
• Tenant membership is decided based on the {switch-port, MAC, IP} tuple in each flow
• Each VNet is identified using VLANs, VxLANs, or GRE, so the same IP/MAC (e.g., 192.168.1.2) can repeat across tenants
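To make the classification rule concrete, here is a minimal Python sketch of the {switch-port, MAC, IP} lookup a controller might keep; the entries are illustrative (not from the deck) and show how the same IP can belong to different tenants.

```python
# Hypothetical tenant map keyed by the {switch-port, MAC, IP} tuple.
# Note the same IP appearing under two tenants: the tuple disambiguates.
TENANT_MAP = {
    ("sw1:port3", "0x1", "192.168.1.2"): "tenant-A",
    ("sw2:port7", "0x2", "192.168.1.2"): "tenant-B",
}

def tenant_of(switch_port: str, mac: str, ip: str):
    """Return the tenant owning this flow, or None if unknown."""
    return TENANT_MAP.get((switch_port, mac, ip))

assert tenant_of("sw1:port3", "0x1", "192.168.1.2") == "tenant-A"
assert tenant_of("sw2:port7", "0x2", "192.168.1.2") == "tenant-B"
assert tenant_of("sw9:port1", "0x9", "192.168.1.2") is None
```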
Performance Limitations
• Problem: SDN switches have resource limitations
‒ Weak CPUs incapable of traffic summarization, frequent statistics reporting, and packet marking
‒ Flow-table limitations in switches (e.g., 1500 exact-match entries)
‒ Switch-to-controller communication limits (e.g., 200 packet_in/sec)
‒ Firmware does not always expose the full capabilities of the chipset
• Solution:
‒ Next generation of hardware customized for OpenFlow
‒ New TCAMs with larger capacity
‒ Intelligent traffic aggregation
‒ Minimal offloading to vSwitches
Deployment mode #2: Overlay
• A virtual data plane (vDP) on each host tunnels VM traffic over the legacy L2 switching and L3 routing fabric
• VM addressing is masked from the fabric; tenant membership is decided by the virtual interface on the vSwitch
• The controller cluster (CLI, REST, GUI) programs only the vDPs; a virtual/physical gateway connects the overlay to the Internet
(Diagram: hosts with vDPs across subnets 10.1.1.0/24, 10.1.2.0/24, and 10.2.1.0/24, connected by logical tunnel links over the legacy fabric)
VxLAN Tunneling
• Tunnels run between VxLAN Tunnel End Points (VTEPs) in each host server
• Varying the outer UDP source port allows better ECMP hashing
• In the absence of an SDN control plane, IP multicast is used for layer-2 flooding (broadcasts, multicasts, and unknown unicasts)
Encapsulation format:
[VTEP outer MAC header][Outer IP header][Outer UDP header][VxLAN header][Original L2 packet]
‒ Outer UDP header: source port (from inner flow hash), VxLAN destination port, UDP length, checksum
‒ VxLAN header: flags, reserved bits, 24-bit VN ID, reserved bits
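As a concrete illustration of the header layout, here is a minimal Python sketch (not production code) that packs the 8-byte VxLAN header: flags (8 bits, with 0x08 marking the VNI as valid per RFC 7348), 24 reserved bits, the 24-bit VNI, and 8 reserved bits. The example VNI 66536 is the one that shows up in the tcpdump capture later in the deck.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Pack the 8-byte VxLAN header: flags(8) | reserved(24) | VNI(24) | reserved(8).
    Flag bit 0x08 marks the VNI field as valid (RFC 7348)."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Two big-endian 32-bit words: flags in the top byte of word 1,
    # VNI in the top three bytes of word 2.
    return struct.pack("!II", 0x08 << 24, vni << 8)

hdr = vxlan_header(66536)
assert len(hdr) == 8
assert hdr[0] == 0x08                              # "I" flag set
assert int.from_bytes(hdr[4:7], "big") == 66536    # 24-bit VNI
```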
Performance Limitations
• Problem:
‒ Overlay mode is CPU-hungry at high line rates and has anecdotally fared poorly in the real world
• Solution:
‒ Offload it to the top-of-rack leaf switch
‒ Use a hardware gateway
Tunneling overhead (source: http://networkheresy.com/2012/06/08/the-overhead-of-software-tunneling/):
              Throughput  Recv-side CPU  Send-side CPU
Linux Bridge:  9.3 Gbps        85%            75%
OVS Bridge:    9.4 Gbps        82%            70%
OVS-STT:       9.5 Gbps        70%            70%
OVS-GRE:       2.3 Gbps        75%            97%
Deployment mode #3: Hybrid
• Combines overlay and underlay (fabric) to achieve:
‒ end-to-end visibility
‒ complete control
‒ the best mix of both worlds
• Also called P+V or overlay-underlay
‒ Vendors are converging towards this architecture
• The integration may need 1) link-local VLANs, or 2) integration with the VM manager to detect the VM profile
• Current mode: decouple elements inside the overlay and converge with the underlay to make the best of both worlds
‒ Traffic leaving the host carries a VLAN tag
‒ The controller maps the VLAN + source MAC to a VxLAN
(Diagram: hosts A, B, and C, each on a VLAN, joined by a controller-managed VxLAN overlay)
• Future mode: a distributed virtual switch or VLAN-based overlay on the hosts, with VxLAN handled by a VTEP manager integrated with OpenStack or vCenter
(Diagram: hosts A, B, and C on a VLAN trunk, with VxLAN terminated outside the hosts)
Deploying Network Service Virtualization
Typical deployment mode is overlay:
• vNF: Virtualized Network Function, deployed alongside the workload VMs
• Services can be single-tenanted or multi-tenanted
• Traffic to a VIP is steered through a vNF (e.g., a vFirewall) before reaching the destination VM
• Compute, network, and service controllers (driven via CLI, REST, or GUI) coordinate the deployment
Service Type: Stateful and Stateless
• Stateless service: no additional appliance needed
‒ OVS changes the packet header and forwards to a specific VM
‒ Typically stateless load-balancing and distributed access control
• Stateful service: the virtual function is deployed in a VM or container
‒ Traffic is proxied to a specific VM
‒ Typically stateful LB, intrusion detection, and SSL termination
(Diagram: on each host, OVS steers traffic destined to a VIP either directly to a backend VM or through a service VM)
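To make the stateless case concrete, here is a minimal Python sketch of hash-based backend selection, the idea behind header-rewriting stateless load-balancing: no per-flow table is kept, yet every packet of a given flow maps to the same backend. The VIP and backend addresses are illustrative, not from the deck.

```python
import hashlib

# Hypothetical VIP-to-backend pool; addresses are illustrative only.
BACKENDS = {"10.0.0.100": ["10.0.1.1", "10.0.1.2", "10.0.1.3"]}

def pick_backend(vip: str, src_ip: str, src_port: int) -> str:
    """Stateless selection: hash the flow identity so the same flow always
    rewrites to the same backend, with no connection-tracking state."""
    pool = BACKENDS[vip]
    digest = hashlib.sha256(f"{src_ip}:{src_port}".encode()).digest()
    return pool[int.from_bytes(digest[:4], "big") % len(pool)]

# The same flow deterministically lands on the same backend.
assert pick_backend("10.0.0.100", "192.0.2.7", 40000) == \
       pick_backend("10.0.0.100", "192.0.2.7", 40000)
assert pick_backend("10.0.0.100", "192.0.2.7", 40000) in BACKENDS["10.0.0.100"]
```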
Service Scaling: Scale-out and Scale-up
• Scale-out:
‒ Deploy more network function instances
‒ Scale-out of the workload is also necessary
• Scale-up:
‒ Give more resources to each network function instance
‒ Offload simple tasks to the vSwitch, pSwitch, or pAppliance
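A minimal sketch of the scale-out decision, assuming a purely load-proportional policy; the per-instance capacity figure is illustrative, not from the deck.

```python
import math

def instances_needed(offered_gbps: float,
                     per_instance_gbps: float = 5.0,
                     minimum: int = 1) -> int:
    """Pick the number of vNF instances for the offered load, keeping at
    least `minimum` alive so the service chain never goes empty."""
    return max(minimum, math.ceil(offered_gbps / per_instance_gbps))

assert instances_needed(12.0) == 3   # ceil(12 / 5)
assert instances_needed(0.0) == 1    # floor at one instance
```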
Combined Solution
(Diagram: OpenStack orchestration on top; a controller handling L2-L7 service orchestration, service rollout and chaining, DC network virtualization, policy/QoS, troubleshooting, and UI/analytics; underneath, compute hosts running OVS with VTEPs, connected by leaf switches to an L3 spine)
OpenStack Networking
OpenStack Platform
• The most common platform for standardizing an open networking API, on top of which vendors innovate
• Neutron: high-level abstractions for creating and managing tenant virtual networks
‒ Flat L2 connectivity across the DC
‒ DHCP-based IP addressing
‒ Floating IPs (for outside-in access)
‒ L3 subnets and routers
‒ Gateway and VPN
‒ Load-balancer service
‒ Security groups
‒ ….
OpenStack API
Typical workflow (per tenant):
1. Create a network
2. Associate a subnet with the network
3. Boot a VM and attach it to the network
4. Delete the VM
5. Delete any ports
6. Delete the network
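These workflow steps map onto Neutron v2.0 REST calls. The sketch below only builds the request shapes (method, path, body) to show the API's {"resource": {...}} envelope; the IDs and field values are illustrative, and error handling is omitted.

```python
def create_network(name: str):
    """Workflow step 1: POST a network resource."""
    return ("POST", "/v2.0/networks", {"network": {"name": name}})

def create_subnet(network_id: str, cidr: str):
    """Workflow step 2: associate a subnet with the network."""
    return ("POST", "/v2.0/subnets",
            {"subnet": {"network_id": network_id, "ip_version": 4, "cidr": cidr}})

def delete_network(network_id: str):
    """Workflow step 6: tear the network down."""
    return ("DELETE", f"/v2.0/networks/{network_id}", None)

method, path, body = create_subnet("net-1234", "10.0.0.0/24")
assert method == "POST" and path == "/v2.0/subnets"
assert body["subnet"]["cidr"] == "10.0.0.0/24"
assert delete_network("net-1234")[0] == "DELETE"
```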
Layered architecture:
• Orchestration drives the Neutron API (north-bound API)
• The ML2 plugin hosts mechanism drivers (e.g., a vendor's custom mech driver) that talk over a custom API to an SDN controller running a network virtualization app
• The controller programs the dataplane elements, vSwitches and pSwitches, over south-bound APIs such as OpenFlow and OVSDB
Basic Technology for OpenStack Networking
Namespace: containerized networking at the process level, managed under /proc; primarily used to create isolated per-tenant router and DHCP instances
dnsmasq: open-source DNS/DHCP agent run on every host
Linux Bridge: L2/MAC-learning switch built into the kernel, used for forwarding
Open vSwitch: advanced bridge that is programmable and supports tunneling
• ovs-vsctl is used to configure the bridge
• ovs-ofctl is used to configure the forwarding flow rules
NAT: network address translators are intermediate entities that translate IP address + port (types: SNAT, DNAT)
iptables: policy engine in the kernel, used for managing packet forwarding, firewall, and NAT features
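To illustrate what the NAT entry in the table above does, here is a minimal Python sketch of a DNAT rewrite on a flow tuple, the kind of translation iptables applies for a floating IP; all addresses and the rule table are illustrative.

```python
def dnat(pkt: dict, translations: dict) -> dict:
    """Rewrite (dst_ip, dst_port) if a DNAT rule matches, else pass through
    unchanged. A real NAT also tracks state for the reverse direction."""
    key = (pkt["dst_ip"], pkt["dst_port"])
    if key in translations:
        new_ip, new_port = translations[key]
        pkt = dict(pkt, dst_ip=new_ip, dst_port=new_port)
    return pkt

# Illustrative rule: floating IP 203.0.113.5:80 maps to a VM at 10.0.0.12:8080.
rules = {("203.0.113.5", 80): ("10.0.0.12", 8080)}
out = dnat({"src_ip": "198.51.100.9", "src_port": 40000,
            "dst_ip": "203.0.113.5", "dst_port": 80}, rules)
assert out["dst_ip"] == "10.0.0.12" and out["dst_port"] == 8080
assert out["src_ip"] == "198.51.100.9"   # source untouched by DNAT
```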
OpenStack Agents
(Diagram of the OpenStack agents, courtesy of Edgar Magana @ Workday)
OpenStack OVS Networking Agents
• The basic free OpenStack software includes:
‒ An OVS plugin that runs as a mechanism driver in the Neutron server, and
‒ An OVS agent that runs on both the network and compute nodes
‒ No OpenFlow: just wrappers around the ovs-vsctl and ovs-ofctl CLIs
(Diagram: the Neutron server with its OVS mech driver and the Horizon UI on the controller node; OVS agents invoking ovs-*ctl on the network and compute nodes; RPC over the management network, VM traffic over the data network)
Distributed Virtual Routing
• Key feature that reduces bottlenecks at the network node
• View of one tenant's routing through namespaces on the compute node and the network node:
(Diagram: on the compute node, a qrouter namespace with qr and rfp ports sits between br-int/br-tun and the VMs, giving floating-IP traffic direct public access via br-ex and eth1; on the network node, an snat namespace with qr and qg ports behind br-ex handles non-floating-IP traffic; private traffic between nodes crosses br-tun over eth0)
OpenDaylight Controller
• Vendor-driven consortium (with Cisco, Brocade, and others) developing an open-source SDN controller platform
OpenStack Networking in OpenDaylight
• Overlay-based OpenStack networking is supported today
• All required features are offered by programming Open vSwitch
Docker Networking
Linux Containers
• Over the past few years, LXC came up as an alternative to VMs for running workloads on hosts
• Each container shares the host OS kernel but gets its own guest root filesystem
• Docker brought Linux containers to prominence
‒ Tracks application configuration and optionally archives images to DockerHub
(Diagram: containers 1-3 running apps X, Y, and Z, each with its own guest root, on a shared host OS)
Docker
• Excellent way to track application dependencies and configuration in a portable format
• For instance, the Dockerfile below can be used to spawn a container with an nginx LB, accessed at a host port:
$ docker build XYZ
$ docker images
$ docker run -d -i --name=nginx1 nginx
$ docker ps
$ docker inspect nginx1
# Pull base image.
FROM dockerfile/ubuntu

# Install Nginx.
RUN \
  add-apt-repository -y ppa:nginx/stable && \
  apt-get update && \
  apt-get install -y nginx && \
  rm -rf /var/lib/apt/lists/* && \
  echo "\ndaemon off;" >> /etc/nginx/nginx.conf && \
  chown -R www-data:www-data /var/lib/nginx

# Define mountable directories.
VOLUME ["/etc/nginx/sites-enabled", "/etc/nginx/certs", "/etc/nginx/conf.d", "/var/log/nginx"]

# Define working directory.
WORKDIR /etc/nginx

# Define default command.
CMD ["nginx"]

# Expose ports.
EXPOSE 80
EXPOSE 443
Networking Still in Early Stages
Today Docker usage is predominantly within a single laptop or host. The default network below is the one allocated to the nginx container we spawned. But folks are exploring connecting containers across hosts.
"NetworkSettings": {
    "Bridge": "docker0",
    "Gateway": "172.17.42.1",
    "IPAddress": "172.17.0.15",
    "IPPrefixLen": 16,
    "MacAddress": "02:42:ac:11:00:0f",
    "PortMapping": null,
    "Ports": {
        "443/tcp": [
            {
                "HostIp": "0.0.0.0",
                "HostPort": "49157"
            }
        ],
        "80/tcp": [
            {
                "HostIp": "0.0.0.0",
                "HostPort": "49158"
            }
        ]
    }
}
Many ways to network in Docker
• Many of these are similar to what we can do with VMs (except the Unix-domain-socket method of direct access)
• Options: direct host network; Unix-domain sockets and other IPC; the Docker0 Linux bridge; the Docker proxy with port mapping (using iptables); Open vSwitch
Mechanisms for Multi-Host Networking
• Option 1: Flat IP space (at the container level) with routing (and possibly NAT) done by the host
‒ Step 1: Assign a /24 subnet CIDR to each host for its containers
‒ Step 2: Set up ip route so that traffic to external subnets leaves from the host interface (e.g., eth0)
• Option 2: Create an overlay network
‒ Step 1: Create a parallel network for cross-host communication
‒ Step 2: Connect the hosts in the cluster using encapsulation tunnels
‒ Step 3: Plug containers into the appropriate virtual networks
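Option 1's step 1 can be sketched with Python's stdlib ipaddress module: carve a per-host /24 out of one supernet. The 172.17.0.0/16 supernet echoes the deck's examples; the host names are illustrative.

```python
import ipaddress

def assign_subnets(supernet: str, hosts: list):
    """Carve consecutive /24s out of the supernet, one per host, so each
    host's Docker daemon can be started with a non-overlapping CIDR."""
    subnets = ipaddress.ip_network(supernet).subnets(new_prefix=24)
    return dict(zip(hosts, subnets))

alloc = assign_subnets("172.17.0.0/16", ["host1", "host2"])
assert str(alloc["host1"]) == "172.17.0.0/24"
assert str(alloc["host2"]) == "172.17.1.0/24"
```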
Option 1: Flat IP space
Step 1: Choose the CIDR wisely when starting the Docker daemon; Docker manages the per-container allocations
Step 2: Add static routes to the other hosts' container subnets
On host 1 (eth0 192.168.50.16, Docker0 bridge 172.17.42.1, containers Nginx1 172.17.42.18 and Bash1 172.17.42.19):
route add -net 172.17.43.0/24 gw 192.168.50.17
On host 2 (eth0 192.168.50.17, Docker0 bridge 172.17.43.1, containers Nginx2 172.17.43.18 and Bash2 172.17.43.19):
route add -net 172.17.42.0/24 gw 192.168.50.16
Quiz: What IP address do packets on the wire have? (Hint: NAT rules are already in place to masquerade internal IP addresses.)
Option 2: Open vSwitch based Overlay
• Suggest creating a parallel network that decouples container networking from the underlying infrastructure
(Diagram: on each host, containers such as nginx1 and bash1 attach both to docker0 and to an Open vSwitch; the Open vSwitches are meshed with VxLAN tunnels to the other cluster hosts and reach the Internet via the hosts)
Container and VM networking unified
• Edge-based overlays are even more important in the container world
• Open vSwitch already supports network namespaces
• VxLAN provides:
‒ isolation,
‒ improved L2/L3 scalability,
‒ overlapping MAC/IP addresses
(Diagram: containers under a Docker Engine and VMs under OpenStack with its Neutron OVS agent, all attached to OVS bridges on a VxLAN-tunneled network; orchestration on the container side is still an open question)
Hands-on Exercise: Creating a Neutron-like Overlay
Goal for the tutorial: a preview of microsegmentation using VxLAN
• In this tutorial exercise, we will use the LorisPack toolkit, which makes it easy to create the parallel network and isolate container communication to its own pod/group
• Desired end goals:
1. Containers isolated into two virtual networks
2. c1 cannot access containers in a different virtual network
3. c1 can have an overlapping IP address
• Inter-host communication uses VxLAN encapsulation
(Diagram: host 1 runs c1 10.10.0.1 and c2 10.10.0.1; host 2 runs c3 10.10.0.3 and c4 10.10.0.4; c1 and c3 share virtual network 1, c2 and c4 share virtual network 2)
Setup 1: Installation
• Bring up two Linux VMs (preferably Ubuntu over VirtualBox) on your laptop with the following installed:
‒ Open vSwitch (version 2.1+)
‒ Docker (version 1.5+)
‒ LorisPack (git clone https://github.com/sdnhub/lorispack)
• The VMs should have a host-only adapter added as a second interface eth1 so that they can communicate with each other
• In my case:
‒ VM1 IP is 192.168.56.101
‒ VM2 IP is 192.168.56.102
Setup 2: Docker and networking
On VM 192.168.56.101, we run:
# docker run --name c1 -dit ubuntu /bin/bash
# docker run --name c2 -dit ubuntu /bin/bash
# loris init
# loris cluster 192.168.56.102
# loris connect c1 10.10.0.1/24 1
# loris connect c2 10.10.0.1/24 2
On VM 192.168.56.102, we run:
# docker run --name c3 -dit ubuntu /bin/bash
# docker run --name c4 -dit ubuntu /bin/bash
# loris init
# loris cluster 192.168.56.101
# loris connect c3 10.10.0.3/24 1
# loris connect c4 10.10.0.4/24 2
• Verify the Open vSwitch configuration for connecting the two nodes with VxLAN and connecting the containers to the OVS.
Port Configuration
# sudo ovs-vsctl show
873c293e-912d-4067-82ad-d1116d2ad39f
    Manager "pssl:6640"
    Bridge "br0"                          (equivalent to br-int)
        Port "br0"
            Interface "br0"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap3392"                    (c1 port)
            tag: 1000
            Interface "tap3392"
        Port "tap3483"                    (c2 port)
            tag: 1001
            Interface "tap3483"
    Bridge "br1"                          (equivalent to br-tun)
        Controller "pssl:6634"
        Port "vxlanc0a83866"              (VxLAN tunnel port)
            Interface "vxlanc0a83866"
                type: vxlan
                options: {in_key=flow, out_key=flow, remote_ip="192.168.56.102"}
        Port "br1"
            Interface "br1"
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.3.90"
Microsegmentation in Effect
• In our setup, we can verify reachability between containers using ping
• We observe that c1 is able to access c3, but not c4
• We observe that c4 is able to access c2 despite the IP overlap
On VM 1 (192.168.56.101):
# docker attach c1
root@c1:/# ping 10.10.0.3
Success!
root@c1:/# ping 10.10.0.4
Fails!
On VM 2 (192.168.56.102):
# docker attach c4
root@c4:/# ping 10.10.0.1
Success!
OVS rules to achieve this
• 18 rules configured in pipeline form, handling traffic using multiple match/action tables in Open vSwitch
• The LorisPack rules are exactly the same as the standard OVS Neutron plugin rules
• A potential debugging nightmare if using the standard OVS Neutron plugin!
# ovs-ofctl dump-flows br1 -OOpenFlow13
OFPST_FLOW reply (OF1.3) (xid=0x2):
 cookie=0x0, duration=941.134s, table=0, n_packets=128, n_bytes=11936, priority=0 actions=resubmit(,3)
 cookie=0x0, duration=941.146s, table=0, n_packets=106, n_bytes=10220, priority=1,in_port=1,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
 cookie=0x0, duration=941.142s, table=0, n_packets=41, n_bytes=2214, priority=1,in_port=1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,21)
 cookie=0x0, duration=941.131s, table=3, n_packets=0, n_bytes=0, priority=0 actions=drop
 cookie=0x0, duration=941.123s, table=3, n_packets=0, n_bytes=0, priority=1,tun_id=0x10ffd actions=push_vlan:0x8100,set_field:8189->vlan_vid,resubmit(,10)
 cookie=0x0, duration=313.581s, table=3, n_packets=14, n_bytes=1116, priority=1,tun_id=0x103e9 actions=push_vlan:0x8100,set_field:5097->vlan_vid,resubmit(,10)
 cookie=0x0, duration=305.662s, table=3, n_packets=114, n_bytes=10820, priority=1,tun_id=0x103e8 actions=push_vlan:0x8100,set_field:5096->vlan_vid,resubmit(,10)
 cookie=0x0, duration=941.139s, table=10, n_packets=128, n_bytes=11936, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
 cookie=0x0, duration=941.137s, table=20, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,21)
 cookie=0x0, duration=295.740s, table=20, n_packets=0, n_bytes=0, hard_timeout=300, priority=1,vlan_tci=0x03e9/0x0fff,dl_dst=7a:fd:84:90:33:23 actions=load:0->NXM_OF_VLAN_TCI[],load:0x103e9->NXM_NX_TUN_ID[],output:2
 cookie=0x0, duration=291.662s, table=20, n_packets=0, n_bytes=0, hard_timeout=300, priority=1,vlan_tci=0x03e9/0x0fff,dl_dst=96:38:ce:87:e9:40 actions=load:0->NXM_OF_VLAN_TCI[],load:0x103e9->NXM_NX_TUN_ID[],output:2
 cookie=0x0, duration=244.056s, table=20, n_packets=106, n_bytes=10220, hard_timeout=300, priority=1,vlan_tci=0x03e8/0x0fff,dl_dst=5e:fa:1a:ff:f7:53 actions=load:0->NXM_OF_VLAN_TCI[],load:0x103e8->NXM_NX_TUN_ID[],output:2
 cookie=0x0, duration=941.127s, table=21, n_packets=0, n_bytes=0, priority=0 actions=drop
 cookie=0x0, duration=941.119s, table=21, n_packets=0, n_bytes=0, priority=1,dl_vlan=4093 actions=pop_vlan,set_field:0x10ffd->tun_id,resubmit(,22)
 cookie=0x0, duration=313.578s, table=21, n_packets=7, n_bytes=558, priority=1,dl_vlan=1001 actions=pop_vlan,set_field:0x103e9->tun_id,resubmit(,22)
 cookie=0x0, duration=305.659s, table=21, n_packets=34, n_bytes=1656, priority=1,dl_vlan=1000 actions=pop_vlan,set_field:0x103e8->tun_id,resubmit(,22)
 cookie=0x0, duration=619.312s, table=22, n_packets=41, n_bytes=2214, priority=1 actions=output:2
 cookie=0x0, duration=941.110s, table=22, n_packets=0, n_bytes=0, priority=0 actions=drop
Inspect VxLAN traffic
• While a ping is running from c1 to c3, inspect the VxLAN traffic on the host
• Notice that traffic on the wire between the two hosts is encapsulated in a VxLAN header
# sudo tcpdump -enntti eth1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
1431461413.331687 08:00:27:fd:35:5e > 08:00:27:46:8c:6f, ethertype IPv4 (0x0800), length 148: 192.168.56.101.57727 > 192.168.56.102.4789: VXLAN, flags [I] (0x08), vni 66536
16:1d:ab:79:ad:f8 > 5e:fa:1a:ff:f7:53, ethertype IPv4 (0x0800), length 98: 10.10.10.3 > 10.10.10.1: ICMP echo request, id 23, seq 41, length 64
1431461413.332774 08:00:27:46:8c:6f > 08:00:27:fd:35:5e, ethertype IPv4 (0x0800), length 148: 192.168.56.102.57727 > 192.168.56.101.4789: VXLAN, flags [I] (0x08), vni 66536
5e:fa:1a:ff:f7:53 > 16:1d:ab:79:ad:f8, ethertype IPv4 (0x0800), length 98: 10.10.10.1 > 10.10.10.3: ICMP echo reply, id 23, seq 41, length 64
Lessons Learned
Debugging is a Challenge
Symptom: Two VMs unable to contact each other
• Plausible reason: improper subnet and access control policies
‒ Things to check: perform neutron client commands and verify the config; check iptables -L -t nat rules on both compute nodes
• Plausible reason: VM networking not configured right
‒ Things to check: ping from the VMs and check tcpdump; check neutron-debug ping-all, ssh, and
Symptom: Traffic from a VM is not reaching outside
• Plausible reason: DHCP failed because the subnet's dnsmasq is not accessible or is down
‒ Things to check: IP assignment and gateway in the VM; neutron-debug dhcping
• Plausible reason: network node inaccessible from the compute node
‒ Things to check: br-tun via ovs-vsctl to verify the VxLAN or GRE tunnels
• Plausible reason: SNAT router in the network node misbehaving
‒ Things to check: router configuration in OpenStack; router namespace using ip netns exec <id> route -n
Debugging is a Challenge
Symptom: Traffic from outside is not reaching the VM
• Plausible reason: floating IP not added to the VM
‒ Things to check: floating-IP assignment
• Plausible reason: NAT rules lost from the compute node
‒ Things to check: NAT rules on each compute node
• Plausible reason: DVR in the compute node misbehaving
‒ Things to check: router configuration in OpenStack; router namespace using ip netns exec <id> route -n; whether ip netns is able to ping the VM
• Plausible reason: MTU not correctly set in the network
‒ Things to check: perform iperf -m between endpoints to check the effective MTU, and check all interfaces
ping, tcpdump, ip netns, iptables, ovs-vsctl, ovs-ofctl, neutron-debug, and the neutron client will haunt your dreams!
Production Challenges: OpenStack
• The open-source version of OpenStack has challenges going to production without vendor support
‒ Overlay and underlay integration not available
‒ Lacks high availability for the agents
‒ Analytics, metering, and other operational tools are immature
‒ Debugging is a tricky art
Production Challenges: Docker
• Similar challenges plague Docker networking too. In addition:
‒ A fast-evolving, overwhelming ecosystem of cute-sounding DevOps tools that is going through "natural selection"
‒ Storage and networking are second-order problems
(Diagram: ClusterHQ's approach to migrating containers, such as an nginx container, across hosts)
Networking Redefined
Summary
• SDN brings the operational goodness of the computing world to the networking world
• Looking at service virtualization separately is not wise; we recommend a joint evaluation
• Architectures vary, and networking policy is increasingly compiled down from higher-level abstractions
• VM and container networking work with similar network abstractions
‒ But at different scale and velocity
‒ Docker and OpenStack networking are fairly similar
• Edge-based overlay intelligence using Open vSwitch is powerful
Thank you.
slideshare.net/sdnhub
CloudKC: Evolution of Network VirtualizationCynthia Thomas
 
VMworld 2013: Operational Best Practices for NSX in VMware Environments
VMworld 2013: Operational Best Practices for NSX in VMware Environments VMworld 2013: Operational Best Practices for NSX in VMware Environments
VMworld 2013: Operational Best Practices for NSX in VMware Environments VMworld
 
Summit 16: ARM Mini-Summit - NXP QorIQ NFV Solutions - NXP Semiconductors
Summit 16: ARM Mini-Summit - NXP QorIQ NFV Solutions - NXP SemiconductorsSummit 16: ARM Mini-Summit - NXP QorIQ NFV Solutions - NXP Semiconductors
Summit 16: ARM Mini-Summit - NXP QorIQ NFV Solutions - NXP SemiconductorsOPNFV
 
Windows Server 8 Hyper V Networking
Windows Server 8 Hyper V NetworkingWindows Server 8 Hyper V Networking
Windows Server 8 Hyper V NetworkingAidan Finn
 
VMworld 2013: vSphere Networking and vCloud Networking Suite Best Practices a...
VMworld 2013: vSphere Networking and vCloud Networking Suite Best Practices a...VMworld 2013: vSphere Networking and vCloud Networking Suite Best Practices a...
VMworld 2013: vSphere Networking and vCloud Networking Suite Best Practices a...VMworld
 
NSX: La Virtualizzazione di Rete e il Futuro della Sicurezza
NSX: La Virtualizzazione di Rete e il Futuro della SicurezzaNSX: La Virtualizzazione di Rete e il Futuro della Sicurezza
NSX: La Virtualizzazione di Rete e il Futuro della SicurezzaVMUG IT
 
Windows server 8 hyper v networking (aidan finn)
Windows server 8 hyper v networking (aidan finn)Windows server 8 hyper v networking (aidan finn)
Windows server 8 hyper v networking (aidan finn)hypervnu
 
MidoNet 101: Face to Face with the Distributed SDN
MidoNet 101: Face to Face with the Distributed SDNMidoNet 101: Face to Face with the Distributed SDN
MidoNet 101: Face to Face with the Distributed SDNMidoNet
 
Openstack v4 0
Openstack v4 0Openstack v4 0
Openstack v4 0sprdd
 
OVHcloud Hosted Private Cloud Platform Network use cases with VMware NSX
OVHcloud Hosted Private Cloud Platform Network use cases with VMware NSXOVHcloud Hosted Private Cloud Platform Network use cases with VMware NSX
OVHcloud Hosted Private Cloud Platform Network use cases with VMware NSXOVHcloud
 
Ligato - A platform for development of Cloud-Native VNF's - SDN/NFV London me...
Ligato - A platform for development of Cloud-Native VNF's - SDN/NFV London me...Ligato - A platform for development of Cloud-Native VNF's - SDN/NFV London me...
Ligato - A platform for development of Cloud-Native VNF's - SDN/NFV London me...Haidee McMahon
 
Tech Talk by John Casey (CTO) CPLANE_NETWORKS : High Performance OpenStack Ne...
Tech Talk by John Casey (CTO) CPLANE_NETWORKS : High Performance OpenStack Ne...Tech Talk by John Casey (CTO) CPLANE_NETWORKS : High Performance OpenStack Ne...
Tech Talk by John Casey (CTO) CPLANE_NETWORKS : High Performance OpenStack Ne...nvirters
 

Semelhante a Network and Service Virtualization tutorial at ONUG Spring 2015 (20)

Network Virtualization & Software-defined Networking
Network Virtualization & Software-defined NetworkingNetwork Virtualization & Software-defined Networking
Network Virtualization & Software-defined Networking
 
VMworld 2013: Advanced VMware NSX Architecture
VMworld 2013: Advanced VMware NSX Architecture VMworld 2013: Advanced VMware NSX Architecture
VMworld 2013: Advanced VMware NSX Architecture
 
NFV в сетях операторов связи
NFV в сетях операторов связиNFV в сетях операторов связи
NFV в сетях операторов связи
 
VMworld 2014: Advanced Topics & Future Directions in Network Virtualization w...
VMworld 2014: Advanced Topics & Future Directions in Network Virtualization w...VMworld 2014: Advanced Topics & Future Directions in Network Virtualization w...
VMworld 2014: Advanced Topics & Future Directions in Network Virtualization w...
 
PLNOG16: VXLAN Gateway, efektywny sposób połączenia świata wirtualnego z fizy...
PLNOG16: VXLAN Gateway, efektywny sposób połączenia świata wirtualnego z fizy...PLNOG16: VXLAN Gateway, efektywny sposób połączenia świata wirtualnego z fizy...
PLNOG16: VXLAN Gateway, efektywny sposób połączenia świata wirtualnego z fizy...
 
OpenStack Networking and Automation
OpenStack Networking and AutomationOpenStack Networking and Automation
OpenStack Networking and Automation
 
CloudKC: Evolution of Network Virtualization
CloudKC: Evolution of Network VirtualizationCloudKC: Evolution of Network Virtualization
CloudKC: Evolution of Network Virtualization
 
VMworld 2013: Operational Best Practices for NSX in VMware Environments
VMworld 2013: Operational Best Practices for NSX in VMware Environments VMworld 2013: Operational Best Practices for NSX in VMware Environments
VMworld 2013: Operational Best Practices for NSX in VMware Environments
 
Summit 16: ARM Mini-Summit - NXP QorIQ NFV Solutions - NXP Semiconductors
Summit 16: ARM Mini-Summit - NXP QorIQ NFV Solutions - NXP SemiconductorsSummit 16: ARM Mini-Summit - NXP QorIQ NFV Solutions - NXP Semiconductors
Summit 16: ARM Mini-Summit - NXP QorIQ NFV Solutions - NXP Semiconductors
 
Windows Server 8 Hyper V Networking
Windows Server 8 Hyper V NetworkingWindows Server 8 Hyper V Networking
Windows Server 8 Hyper V Networking
 
VMworld 2013: vSphere Networking and vCloud Networking Suite Best Practices a...
VMworld 2013: vSphere Networking and vCloud Networking Suite Best Practices a...VMworld 2013: vSphere Networking and vCloud Networking Suite Best Practices a...
VMworld 2013: vSphere Networking and vCloud Networking Suite Best Practices a...
 
NSX: La Virtualizzazione di Rete e il Futuro della Sicurezza
NSX: La Virtualizzazione di Rete e il Futuro della SicurezzaNSX: La Virtualizzazione di Rete e il Futuro della Sicurezza
NSX: La Virtualizzazione di Rete e il Futuro della Sicurezza
 
Windows server 8 hyper v networking (aidan finn)
Windows server 8 hyper v networking (aidan finn)Windows server 8 hyper v networking (aidan finn)
Windows server 8 hyper v networking (aidan finn)
 
MidoNet 101: Face to Face with the Distributed SDN
MidoNet 101: Face to Face with the Distributed SDNMidoNet 101: Face to Face with the Distributed SDN
MidoNet 101: Face to Face with the Distributed SDN
 
Openstack v4 0
Openstack v4 0Openstack v4 0
Openstack v4 0
 
OVHcloud Hosted Private Cloud Platform Network use cases with VMware NSX
OVHcloud Hosted Private Cloud Platform Network use cases with VMware NSXOVHcloud Hosted Private Cloud Platform Network use cases with VMware NSX
OVHcloud Hosted Private Cloud Platform Network use cases with VMware NSX
 
10 sdn-vir-6up
10 sdn-vir-6up10 sdn-vir-6up
10 sdn-vir-6up
 
nested-kvm
nested-kvmnested-kvm
nested-kvm
 
Ligato - A platform for development of Cloud-Native VNF's - SDN/NFV London me...
Ligato - A platform for development of Cloud-Native VNF's - SDN/NFV London me...Ligato - A platform for development of Cloud-Native VNF's - SDN/NFV London me...
Ligato - A platform for development of Cloud-Native VNF's - SDN/NFV London me...
 
Tech Talk by John Casey (CTO) CPLANE_NETWORKS : High Performance OpenStack Ne...
Tech Talk by John Casey (CTO) CPLANE_NETWORKS : High Performance OpenStack Ne...Tech Talk by John Casey (CTO) CPLANE_NETWORKS : High Performance OpenStack Ne...
Tech Talk by John Casey (CTO) CPLANE_NETWORKS : High Performance OpenStack Ne...
 

Último

Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century educationjfdjdjcjdnsjd
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024Rafal Los
 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...apidays
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyKhushali Kathiriya
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024The Digital Insurer
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonAnna Loughnan Colquhoun
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityPrincipled Technologies
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)wesley chun
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProduct Anonymous
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Miguel Araújo
 
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsTop 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsRoshan Dwivedi
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MIND CTI
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationSafe Software
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdflior mazor
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoffsammart93
 
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingRepurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingEdi Saputra
 

Último (20)

Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024
 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century education
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : Uncertainty
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsTop 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdf
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingRepurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
 

Network and Service Virtualization tutorial at ONUG Spring 2015

  • 9. Network Virtualization Requirements
• Traffic isolation across virtual networks
‒ VLAN, VxLAN, GRE support, allowing IP overlap across tenants
• End-to-end visibility of VM traffic
‒ Edge-based control of VM traffic and scalable host tracking
• Integration with the legacy network
‒ Support for bare-metal servers, appliances and gateways
• Troubleshooting support
‒ End-to-end visibility that maps virtual to physical scalably
• Orchestrating virtual L4-L7 services
‒ Provisioning and chaining of virtual services
• Application policy
‒ Application-level policy across and within virtual networks
9
  • 10. Trend #2: Service Virtualization (NFV)
• Step 1: Virtualizing network functions
• Step 2: Chaining/stitching them
10
  • 11. NFV in Data Centers
1. Virtualizing the L4-L7 network service appliance (e.g., load balancer)
2. Chaining services to ensure that traffic is routed through virtual appliances
3. Optimizing service delivery for applications
   • Increasing the number of virtual appliances
   • Increasing the CPU or memory of each appliance
   • Placement of virtual appliances
   • Offloading certain tasks to the NIC or switch
(Diagram: compute orchestration and SDN control working together; open-source options exist for both)
11
  • 12. Trend #3: New Infrastructure Tools 12
  • 15. Deployment mode #1: Underlay
• Tenant membership decided based on the {switch-port, MAC, IP} tuple in each flow
• VNet identified using VLANs, VxLANs or GRE
• Custom routing by the controller; VPN termination and L3 routing at the edge
(Diagram: a controller cluster driven via CLI, REST or GUI manages the fabric; hosts carry VMs whose IP and MAC addresses overlap across tenants, e.g. two VMs with IP 192.168.1.2 on different hosts)
15
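The per-flow tenant classification described above can be expressed as OpenFlow rules pushed by the controller. A hypothetical sketch using ovs-ofctl (the bridge name br0 and the port, MAC and VLAN values are illustrative, not from the slides):

```shell
# Classify traffic from switch-port 1 with a known source MAC and IP
# into tenant VLAN 100, then forward normally
ovs-ofctl add-flow br0 \
    "in_port=1,dl_src=00:00:00:00:00:01,ip,nw_src=192.168.1.2,actions=mod_vlan_vid:100,normal"

# Unclassified traffic is punted to the controller for a decision
ovs-ofctl add-flow br0 "priority=0,actions=controller"

# Inspect the installed rules
ovs-ofctl dump-flows br0
```

Each tenant gets its own set of such classification rules, which is exactly where the flow-table size limits discussed on the next slide start to bite.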
  • 16. Performance Limitations
• Problem: SDN switches have resource limitations
‒ Weak CPUs incapable of traffic summarization, frequent statistics reporting, and packet marking
‒ Flow-table limitations (e.g., 1500 exact-match entries)
‒ Switch-controller communication limits (e.g., 200 packet_in/sec)
‒ Firmware does not always expose the full capabilities of the chipset
• Solution:
‒ Next generation of hardware customized for OpenFlow
‒ New TCAMs with larger capacity
‒ Intelligent traffic aggregation
‒ Minimal offloading to vSwitches
16
  • 17. Deployment mode #2: Overlay
• vDP: Virtual Data Plane
• Tenant membership decided by the virtual interface on the vSwitch
• VM addressing masked from the fabric; hosts connected by tunnels
(Diagram: a controller cluster driven via CLI, REST or GUI; vDPs on each host linked by logical tunnels over the legacy L2-switched and L3-routed fabric, e.g. subnets 10.1.1.0/24, 10.1.2.0/24 and 10.2.1.0/24, with a virtual/physical gateway to the Internet)
17
  • 18. VxLAN Tunneling
• Tunnels run between VxLAN Tunnel End Points (VTEPs) in each host server
• Varying the UDP source port allows better ECMP hashing
• In the absence of an SDN control plane, IP multicast is used for layer-2 flooding (broadcasts, multicasts and unknown unicasts)
• Encapsulation: outer MAC header | outer IP header | outer UDP header (source port, VxLAN port, UDP length, checksum) | VxLAN header (flags, reserved, 24-bit VN ID, reserved) | original L2 packet
18
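To make the header layout above concrete, the 8-byte VxLAN header (flags byte 0x08 marking a valid VNI, 3 reserved bytes, the 24-bit VN ID, 1 reserved byte, per RFC 7348) can be rendered and decoded as hex in the shell. A sketch, using VNI 5001 as an arbitrary example:

```shell
# Render the 8-byte VxLAN header as hex groups:
# flags | 3 reserved bytes | 24-bit VNI | 1 reserved byte
vni=5001
hdr=$(printf '08 00 00 00 %06x 00' "$vni")
echo "$hdr"

# Recover the VNI: the 5th hex group is the 24-bit VN ID
echo $(( 0x$(echo "$hdr" | awk '{print $5}') ))
```

This is only a string rendering of the layout; on the wire the VTEP emits these bytes inside the outer UDP payload.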
  • 19. Performance Limitations
• Problem:
‒ Overlay mode is CPU hungry at high line rates and has anecdotally fared poorly in the real world
• Solution:
‒ Offload it to the top-of-rack leaf switch
‒ Use a hardware gateway

Tunneling mode | Throughput | Recv-side CPU | Send-side CPU
Linux Bridge   | 9.3 Gbps   | 85%           | 75%
OVS Bridge     | 9.4 Gbps   | 82%           | 70%
OVS-STT        | 9.5 Gbps   | 70%           | 70%
OVS-GRE        | 2.3 Gbps   | 75%           | 97%
Source: http://networkheresy.com/2012/06/08/the-overhead-of-software-tunneling/
19
  • 20. Deployment mode #3: Hybrid
• Combines overlay and underlay (fabric) to achieve:
‒ end-to-end visibility
‒ complete control
‒ the best mix of both worlds
• Also called P+V or Overlay-Underlay
‒ Vendors are converging towards this architecture
• The integration may need 1) link-local VLANs or 2) integration with the VM manager to detect the VM profile
20
  • 21. Deployment mode #3: Hybrid
• Decoupling elements inside the overlay and converging with the underlay to make the best of both worlds
• Current mode: a VxLAN overlay between hosts, with local VLANs at each edge
(Diagram: hosts A, B and C connected by a VxLAN overlay, each mapping into its own VLAN)
21
  • 22. Deployment mode #3: Hybrid
• Future mode:
‒ Traffic leaving the host carries a VLAN tag
‒ The VLAN + source MAC is mapped to a VxLAN
(Diagram: a controller and VTEP manager integrated with OpenStack or vCenter; hosts A, B and C on a VLAN trunk, bridged into VxLAN by a distributed virtual switch or VLAN-based overlay)
22
  • 23. Deploying Network Service Virtualization 23
  • 24. Typical Deployment Mode is Overlay
• vNF: Virtualized Network Function
• Services can be single-tenanted or multi-tenanted
(Diagram: network, service and compute controllers driven via CLI, REST or GUI; traffic to a VIP is steered to a vFirewall vNF and then on to the destination VM)
24
  • 25. Service Type: Stateful and Stateless
• Stateless service: no additional appliance needed; the vSwitch changes the header of traffic addressed to the VIP and forwards it to a specific VM
‒ Typically stateless load-balancing and distributed access control
• Stateful service: virtual function deployed in a VM or container; traffic is proxied to a specific VM
‒ Typically stateful LB, intrusion detection and SSL termination
25
  • 26. Service Scaling: Scale-out and Scale-up
• Scale-out:
‒ Deploy more network function instances
‒ Scale-out of the workload is also necessary
• Scale-up:
‒ Give more resources to each network function instance
‒ Offload simple tasks to the vSwitch, pSwitch or pAppliance
26
  • 27. Combined Solution
• L2-L7 service orchestration: service rollout and chaining
• DC network virtualization: policy/QoS, troubleshooting, UI/analytics
(Diagram: OpenStack orchestration, controller and network plumbing sitting above an L3 spine with VTEP-capable leaf switches; OVS on each host steering VIP-addressed traffic to specific VMs)
27
  • 29. OpenStack Platform
• The most common platform for standardizing an open API for networking, on top of which vendors innovate
• Neutron: high-level abstractions for creating and managing tenant virtual networks
‒ Flat L2 connectivity across the DC
‒ DHCP-enabled IP addressing
‒ Floating IPs (for outside-in access)
‒ L3 subnets and routers
‒ Gateway and VPN
‒ Load-balancer service
‒ Security groups
‒ ...
29
  • 30. OpenStack API
• Typical workflow:
1. Create a network
2. Associate a subnet with the network
3. Boot a VM and attach it to the network
4. Delete the VM
5. Delete any ports
6. Delete the network
(Diagram: orchestration invokes the northbound Neutron API; the ML2 plugin dispatches to vendor mechanism drivers; a network virtualization app on an SDN controller programs vSwitches and pSwitches southbound over OpenFlow and OVSDB)
30
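The workflow above corresponds roughly to these CLI calls (a sketch against the Kilo-era neutron and nova clients; names such as demo-net and the placeholder IDs are illustrative):

```shell
# 1-2. Create a network and associate a subnet with it
neutron net-create demo-net
neutron subnet-create --name demo-subnet demo-net 10.10.0.0/24

# 3. Boot a VM attached to the network (substitute the network UUID reported above)
nova boot --image cirros --flavor m1.tiny --nic net-id=<NET_ID> demo-vm

# 4-6. Tear down in reverse order: VM first, then ports, then the network
nova delete demo-vm
neutron port-delete <PORT_ID>
neutron net-delete demo-net
```

Whichever mechanism driver is configured under ML2, this API surface stays the same; only the southbound programming differs.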
  • 31. Basic Technology for OpenStack Networking
• Namespace: kernel-level network isolation at the process level, managed under /proc; primarily used to run a dnsmasq instance per tenant network
• dnsmasq: open-source DNS/DHCP agent run on every network node
• Linux bridge: L2/MAC-learning switch built into the kernel, used for forwarding
• Open vSwitch: advanced bridge that is programmable and supports tunneling
‒ ovs-vsctl is used to configure the bridge
‒ ovs-ofctl is used to configure the forwarding flow rules
• NAT: network address translators are intermediate entities that translate IP addresses and ports (types: SNAT, DNAT)
• iptables: policy engine in the kernel used for managing packet forwarding, firewall and NAT features
31
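A minimal sketch of how the first few building blocks fit together: a namespace wired to the host through a veth pair, then the OVS equivalents of bridge management (requires root; the names ns1, veth0/veth1 and br-int are arbitrary):

```shell
# Create a network namespace and a veth pair, pushing one end into it
ip netns add ns1
ip link add veth0 type veth peer name veth1
ip link set veth1 netns ns1

# Address both ends and bring them up
ip addr add 10.0.0.1/24 dev veth0 && ip link set veth0 up
ip netns exec ns1 ip addr add 10.0.0.2/24 dev veth1
ip netns exec ns1 ip link set veth1 up

# The namespace now has its own isolated stack, yet reaches the host
ip netns exec ns1 ping -c 1 10.0.0.1

# Open vSwitch: create a bridge and inspect its flow rules
ovs-vsctl add-br br-int
ovs-ofctl dump-flows br-int
```

This is exactly the plumbing the Neutron agents automate on every node.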
  • 33. OpenStack OVS Networking Agents
• Basic free OpenStack software includes:
‒ an OVS plugin that runs as a mechanism driver in the Neutron server, and
‒ an OVS agent that runs on both network and compute nodes
‒ No OpenFlow; just wrappers around the ovs-vsctl and ovs-ofctl CLIs
(Diagram: the Neutron server with OVS mechanism driver and Horizon UI on the controller; OVS agents on the network and compute nodes, reached over RPC on the management network, drive local OVS bridges carrying VM traffic on the data network)
33
  • 34. Distributed Virtual Routing
• Key feature that reduces bottlenecks in the network node
• View of one tenant's routing through namespaces
(Diagram: on the compute node, a qrouter namespace with qr and rfp interfaces sits between br-int/br-tun and br-ex, giving VM1 public access via a floating IP; on the network node, a snat namespace with qr and qg interfaces on br-ex handles private access for the non-floating-IP VM2)
35
  • 35. OpenDaylight Controller
• Vendor-driven consortium (with Cisco, Brocade, and others) for developing an open-source SDN controller platform
36
  • 36. OpenStack Networking in OpenDaylight
• Overlay-based OpenStack networking supported today
• All required features offered using Open vSwitch programming
  • 38. Linux Containers
• Over the past few years, LXC has come up as an alternative to VMs for running workloads on hosts
• Each container looks like a clone of the host OS: it shares the host kernel but gets its own guest root filesystem
• Docker brought Linux containers to prominence
‒ Tracks application configuration and can archive images to Docker Hub
(Diagram: three containers running apps X, Y and Z, each with its own guest root, on a single host OS)
39
  • 39. Docker
• An excellent way to track application dependencies and configuration in a portable format
• For instance, the Dockerfile below can be used to spawn a container running an nginx LB, reachable at a host port:

$ docker build -t nginx XYZ
$ docker images
$ docker run -d --name=nginx1 nginx
$ docker ps
$ docker inspect nginx1

# Pull base image.
FROM dockerfile/ubuntu

# Install Nginx.
RUN add-apt-repository -y ppa:nginx/stable && \
    apt-get update && \
    apt-get install -y nginx && \
    rm -rf /var/lib/apt/lists/* && \
    echo "\ndaemon off;" >> /etc/nginx/nginx.conf && \
    chown -R www-data:www-data /var/lib/nginx

# Define mountable directories.
VOLUME ["/etc/nginx/sites-enabled", "/etc/nginx/certs", "/etc/nginx/conf.d", "/var/log/nginx"]

# Define working directory.
WORKDIR /etc/nginx

# Define default command.
CMD ["nginx"]

# Expose ports.
EXPOSE 80
EXPOSE 443
40
  • 40. Networking Still in Early Stages
• Today Docker usage is predominantly within a single laptop or host
• The default network below is allocated to the nginx container we spawned
• But folks are exploring connecting containers across hosts

"NetworkSettings": {
    "Bridge": "docker0",
    "Gateway": "172.17.42.1",
    "IPAddress": "172.17.0.15",
    "IPPrefixLen": 16,
    "MacAddress": "02:42:ac:11:00:0f",
    "PortMapping": null,
    "Ports": {
        "443/tcp": [ { "HostIp": "0.0.0.0", "HostPort": "49157" } ],
        "80/tcp":  [ { "HostIp": "0.0.0.0", "HostPort": "49158" } ]
    }
}
41
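Rather than eyeballing the full inspect dump, single fields of this structure can be pulled out with docker inspect's Go-template format flag (a sketch, assuming the nginx1 container spawned on the previous slide):

```shell
# Container IP on the docker0 bridge
docker inspect --format '{{ .NetworkSettings.IPAddress }}' nginx1

# Host port mapped to container port 80
docker inspect --format '{{ (index (index .NetworkSettings.Ports "80/tcp") 0).HostPort }}' nginx1
```

Scripts that wire containers into external networks typically start from exactly these two fields.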
  • 41. Many ways to network in Docker
• Options include: direct host networking, Unix-domain sockets and other IPC, the docker0 Linux bridge, the Docker proxy with port mapping (using iptables), and Open vSwitch
• Many of these are similar to what we can do with VMs (except the Unix-domain-socket method of direct access)
42
  • 42. Mechanisms for Multi-Host Networking
• Option 1: Flat IP space (at the container level) with routing (and possibly NAT) done by the host
‒ Step 1: Assign a /24 subnet CIDR to each host for its containers
‒ Step 2: Set up IP routes to ensure traffic to external subnets leaves from the host interface (e.g., eth0)
• Option 2: Create an overlay network
‒ Step 1: Create a parallel network for cross-host communication
‒ Step 2: Connect hosts in the cluster using encapsulation tunnels
‒ Step 3: Plug containers into the appropriate virtual networks
43
  • 43. Option 1: Flat IP space Step 1: Choose the CIDR wisely when starting the Docker daemon Step 2: Add static routes to the other hosts' container subnets 44 [Figure: Host 1 (eth0 192.168.50.16) runs Nginx1 172.17.42.18 and Bash1 172.17.42.19 behind the docker0 bridge at 172.17.42.1; Host 2 (eth0 192.168.50.17) runs Nginx2 172.17.43.18 and Bash2 172.17.43.19 behind docker0 at 172.17.43.1] Docker manages these allocations. On Host 1: route add -net 172.17.43.0/24 gw 192.168.50.17 On Host 2: route add -net 172.17.42.0/24 gw 192.168.50.16 Quiz: What IP address do packets on the wire have? NAT rules are already in place to masquerade internal IP addresses
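A sketch of the two steps using Docker 1.x daemon flags (the `--bip` values match the figure; by default Docker's masquerading NAT rules stay in place, so packets on the wire carry the host IP unless the daemon is started with `--ip-masq=false`):

```shell
# Host 1 (192.168.50.16): pin docker0's subnet for local containers.
docker -d --bip=172.17.42.1/24 &

# Host 2 (192.168.50.17): use a disjoint /24.
docker -d --bip=172.17.43.1/24 &

# Host 1: route the peer's container subnet via the peer's eth0 address.
route add -net 172.17.43.0/24 gw 192.168.50.17

# Host 2: the mirror-image route.
route add -net 172.17.42.0/24 gw 192.168.50.16
```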
  • 44. Option 2: Open vSwitch based Overlay We suggest creating a parallel network that decouples container networking from the underlying infrastructure 45 [Figure: nginx1 and ContainerX on Host 1 (192.168.50.16), bash1 and ContainerY on Host 2 (192.168.50.17); on each host, containers attach to docker0 and an Open vSwitch bridge, and the hosts are linked by vxlan tunnels to each other, to other cluster hosts, and out to the Internet]
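For readers who want to see the plumbing, here is a hand-rolled approximation of the per-host wiring such an overlay involves (bridge and interface names are illustrative; the LorisPack tool used later in the tutorial automates the equivalent steps):

```shell
# Create an OVS bridge and a VXLAN port pointing at the peer host
# (run on Host 1; Host 2 mirrors this with remote_ip=192.168.50.16).
ovs-vsctl add-br ovs-br0
ovs-vsctl add-port ovs-br0 vxlan0 -- \
    set interface vxlan0 type=vxlan options:remote_ip=192.168.50.17

# Connect the OVS bridge to the docker0 Linux bridge with a veth pair
# so container traffic can reach the tunnel.
ip link add veth-ovs type veth peer name veth-docker
ovs-vsctl add-port ovs-br0 veth-ovs
brctl addif docker0 veth-docker
ip link set veth-ovs up
ip link set veth-docker up
```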
  • 45. Container and VM networking unified • Edge-based overlays are even more important in the container world. • Open vSwitch already supports network namespaces • VxLAN provides: ‒ isolation, ‒ improved L2/L3 scalability, ‒ support for overlapping MAC/IP addresses [Figure: containers behind a Docker Engine and VMs on a hypervisor, each plugged into a local OVS with a Neutron OVS agent, joined by a VxLAN tunneled network; orchestration is OpenStack on the VM side and still an open question ("??") on the container side] 46
  • 46. Hands-on Exercise - Creating a neutron-like overlay 47
  • 47. • In this tutorial exercise, we will use the LorisPack toolkit, which makes it easy to create the parallel network and isolate container communication to its own pod/group • Desired end goals: 1. Containers isolated into two virtual networks 2. c1 cannot access containers in a different virtual network 3. c1 can have an overlapping IP address • Inter-host communication uses VxLAN encapsulation [Figure: Host 1 runs c1 (10.10.0.1) and c2 (10.10.0.1); Host 2 runs c3 (10.10.0.3) and c4 (10.10.0.4); c1 and c3 share Virtual Network 1, c2 and c4 share Virtual Network 2, and cross-network paths are blocked] Goal for Tutorial: Preview of Microsegmentation using VxLAN 48
  • 48. Setup 1: Installation • Bring up two Linux VMs (preferably Ubuntu on VirtualBox) on your laptop with the following installed: ‒ Open vSwitch (version 2.1+) ‒ Docker (version 1.5+) ‒ LorisPack (git clone https://github.com/sdnhub/lorispack) • The VMs should have a host-only adapter added as a second interface eth1 so that they can communicate with each other. • In my case, ‒ VM1 IP is 192.168.56.101 ‒ VM2 IP is 192.168.56.102 49
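On Ubuntu 14.04-era VMs the prerequisites can be pulled in roughly as follows (package names are the stock Ubuntu ones; check that the repo versions meet the 2.1+/1.5+ requirements above, or install from a PPA/source if not):

```shell
sudo apt-get update
sudo apt-get install -y openvswitch-switch docker.io git

# Fetch LorisPack and confirm the installed versions.
git clone https://github.com/sdnhub/lorispack
ovs-vsctl --version
docker --version
```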
  • 49. Setup 2: Docker and networking On VM 192.168.56.101, we run: # docker run --name c1 -dit ubuntu /bin/bash # docker run --name c2 -dit ubuntu /bin/bash # loris init # loris cluster 192.168.56.102 # loris connect c1 10.10.0.1/24 1 # loris connect c2 10.10.0.1/24 2 On VM 192.168.56.102, we run: # docker run --name c3 -dit ubuntu /bin/bash # docker run --name c4 -dit ubuntu /bin/bash # loris init # loris cluster 192.168.56.101 # loris connect c3 10.10.0.3/24 1 # loris connect c4 10.10.0.4/24 2
  • 50. • Verify the Open vSwitch configuration that connects the two nodes with VxLAN and plugs the two containers into OVS. Port Configuration 51 # sudo ovs-vsctl show 873c293e-912d-4067-82ad-d1116d2ad39f Manager "pssl:6640" Bridge "br0" Port "br0" Interface "br0" type: internal Port patch-tun Interface patch-tun type: patch options: {peer=patch-int} Port "tap3392" tag: 1000 Interface "tap3392" Port "tap3483" tag: 1001 Interface "tap3483" Bridge "br1" Controller "pssl:6634" Port "vxlanc0a83866" Interface "vxlanc0a83866" type: vxlan options: {in_key=flow, out_key=flow, remote_ip="192.168.56.102"} Port "br1" Interface "br1" type: internal Port patch-int Interface patch-int type: patch options: {peer=patch-tun} ovs_version: "2.3.90" Callouts: br0 is equivalent to br-int; br1 is equivalent to br-tun; vxlanc0a83866 is the VxLAN tunnel port; tap3392 and tap3483 are the c1 and c2 ports
  • 51. • In our setup, we can verify reachability between containers using ping • We observe that c1 is able to access c3, but not c4 • We observe that c4 is able to access c2 despite the IP overlap Microsegmentation in Effect 52 VM 2 192.168.56.102 VM 1 192.168.56.101 c1 10.10.0.1 c2 10.10.0.1 c3 10.10.0.3 c4 10.10.0.4 X # docker attach c1 root@c1:/# ping 10.10.0.3 Success! root@c1:/# ping 10.10.0.4 Fails! # docker attach c4 root@c4:/# ping 10.10.0.1 Success!
  • 52. # ovs-ofctl dump-flows br1 -OOpenFlow13
OFPST_FLOW reply (OF1.3) (xid=0x2):
 cookie=0x0, duration=941.134s, table=0, n_packets=128, n_bytes=11936, priority=0 actions=resubmit(,3)
 cookie=0x0, duration=941.146s, table=0, n_packets=106, n_bytes=10220, priority=1,in_port=1,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
 cookie=0x0, duration=941.142s, table=0, n_packets=41, n_bytes=2214, priority=1,in_port=1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,21)
 cookie=0x0, duration=941.131s, table=3, n_packets=0, n_bytes=0, priority=0 actions=drop
 cookie=0x0, duration=941.123s, table=3, n_packets=0, n_bytes=0, priority=1,tun_id=0x10ffd actions=push_vlan:0x8100,set_field:8189->vlan_vid,resubmit(,10)
 cookie=0x0, duration=313.581s, table=3, n_packets=14, n_bytes=1116, priority=1,tun_id=0x103e9 actions=push_vlan:0x8100,set_field:5097->vlan_vid,resubmit(,10)
 cookie=0x0, duration=305.662s, table=3, n_packets=114, n_bytes=10820, priority=1,tun_id=0x103e8 actions=push_vlan:0x8100,set_field:5096->vlan_vid,resubmit(,10)
 cookie=0x0, duration=941.139s, table=10, n_packets=128, n_bytes=11936, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
 cookie=0x0, duration=941.137s, table=20, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,21)
 cookie=0x0, duration=295.740s, table=20, n_packets=0, n_bytes=0, hard_timeout=300, priority=1,vlan_tci=0x03e9/0x0fff,dl_dst=7a:fd:84:90:33:23 actions=load:0->NXM_OF_VLAN_TCI[],load:0x103e9->NXM_NX_TUN_ID[],output:2
 cookie=0x0, duration=291.662s, table=20, n_packets=0, n_bytes=0, hard_timeout=300, priority=1,vlan_tci=0x03e9/0x0fff,dl_dst=96:38:ce:87:e9:40 actions=load:0->NXM_OF_VLAN_TCI[],load:0x103e9->NXM_NX_TUN_ID[],output:2
 cookie=0x0, duration=244.056s, table=20, n_packets=106, n_bytes=10220, hard_timeout=300, priority=1,vlan_tci=0x03e8/0x0fff,dl_dst=5e:fa:1a:ff:f7:53 actions=load:0->NXM_OF_VLAN_TCI[],load:0x103e8->NXM_NX_TUN_ID[],output:2
 cookie=0x0, duration=941.127s, table=21, n_packets=0, n_bytes=0, priority=0 actions=drop
 cookie=0x0, duration=941.119s, table=21, n_packets=0, n_bytes=0, priority=1,dl_vlan=4093 actions=pop_vlan,set_field:0x10ffd->tun_id,resubmit(,22)
 cookie=0x0, duration=313.578s, table=21, n_packets=7, n_bytes=558, priority=1,dl_vlan=1001 actions=pop_vlan,set_field:0x103e9->tun_id,resubmit(,22)
 cookie=0x0, duration=305.659s, table=21, n_packets=34, n_bytes=1656, priority=1,dl_vlan=1000 actions=pop_vlan,set_field:0x103e8->tun_id,resubmit(,22)
 cookie=0x0, duration=619.312s, table=22, n_packets=41, n_bytes=2214, priority=1 actions=output:2
 cookie=0x0, duration=941.110s, table=22, n_packets=0, n_bytes=0, priority=0 actions=drop
• 18 rules configured as a pipeline, handling traffic using multiple match/action tables in Open vSwitch • The LorisPack rules are exactly the same as the standard OVS Neutron plugin rules • A potential debugging nightmare if you are using the standard OVS Neutron plugin! OVS rules to achieve this 53
  • 53. • While ping is running from c1 to c3, inspect the VxLAN traffic on the host • Notice that the traffic on the wire between the two hosts is encapsulated in a VxLAN header Inspect VxLAN traffic 54 # sudo tcpdump -enntti eth1 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes 1431461413.331687 08:00:27:fd:35:5e > 08:00:27:46:8c:6f, ethertype IPv4 (0x0800), length 148: 192.168.56.101.57727 > 192.168.56.102.4789: VXLAN, flags [I] (0x08), vni 66536 16:1d:ab:79:ad:f8 > 5e:fa:1a:ff:f7:53, ethertype IPv4 (0x0800), length 98: 10.10.10.3 > 10.10.10.1: ICMP echo request, id 23, seq 41, length 64 1431461413.332774 08:00:27:46:8c:6f > 08:00:27:fd:35:5e, ethertype IPv4 (0x0800), length 148: 192.168.56.102.57727 > 192.168.56.101.4789: VXLAN, flags [I] (0x08), vni 66536 5e:fa:1a:ff:f7:53 > 16:1d:ab:79:ad:f8, ethertype IPv4 (0x0800), length 98: 10.10.10.1 > 10.10.10.3: ICMP echo reply, id 23, seq 41, length 64
  • 55. Debugging is a Challenge
Symptom: Two VMs unable to contact each other
 Plausible reasons: • Improper subnet and access-control policies • VM networking not configured right
 Things to check: • Perform neutron client commands and verify the config • Check the iptables -t nat -L rules on both compute nodes • Ping from the VMs and check tcpdump • Check neutron-debug ping-all and ssh reachability
Symptom: Traffic from a VM is not reaching outside
 Plausible reasons: • DHCP failed because the subnet's dnsmasq is not accessible or down • Network node inaccessible from the compute node • S-NAT router in the network node misbehaving
 Things to check: • Check the IP assignment and gateway in the VM • Check neutron-debug dhcping • Check ovs-vsctl show for the VxLAN or GRE tunnel ports on br-tun • Check the router configuration in OpenStack • Check the router namespace using ip netns exec <id> route -n
56
  • 56. Debugging is a Challenge
Symptom: Traffic from outside is not reaching the VM
 Plausible reasons: • Floating IP not added to the VM • NAT rules lost from the compute node • DVR in the compute node misbehaving • MTU not correctly set in the network
 Things to check: • Check the floating-IP assignment • Check the NAT rules on each compute node • Check the router configuration in OpenStack • Check the router namespace using ip netns exec <id> route -n • Check whether ip netns is able to ping the VM • Perform iperf -m between endpoints to check the effective MTU, and check all interfaces
ping, tcpdump, ip netns, iptables, ovs-vsctl, ovs-ofctl, neutron-debug, and the neutron client will haunt your dreams! 57
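The checks from the two tables can be strung together into a quick triage pass; `<router-id>`, `<vm-ip>`, and `<peer-ip>` are placeholders to fill in from your deployment:

```shell
# Confirm Neutron's view of the topology.
neutron net-list
neutron subnet-list
neutron router-list

# On a compute node: inspect NAT rules and look for vxlan/gre
# tunnel ports on br-tun.
iptables -t nat -L -n
ovs-vsctl show

# On the network node: probe the router namespace directly.
ip netns list
ip netns exec qrouter-<router-id> route -n
ip netns exec qrouter-<router-id> ping -c 2 <vm-ip>

# Check the effective MTU end to end.
iperf -c <peer-ip> -m
```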
  • 57. • Open-source version of OpenStack has challenges going to production without vendor support ‒ Overlay and underlay integration not available ‒ Lacks high availability for the agents ‒ Analytics, metering and other operational tools are immature ‒ Debugging is a tricky art Production Challenges: OpenStack 58
  • 58. • Similar challenges plague Docker networking too. In addition, ‒ A fast-evolving, overwhelming ecosystem of cute-sounding DevOps tools that is going through “natural selection” ‒ Storage and networking are second-order problems. Production Challenges: Docker 59 [Figure: ClusterHQ's approach to migrating containers (nginx) across hosts]
  • 60. Summary • SDN brings the operational goodness of the computing world to the networking world. • Looking at service virtualization separately is not wise; we recommend a joint evaluation. • Architectures vary, with networking policy being compiled down onto them. • VM and container networking work with similar network abstractions ‒ But at different scale and velocity ‒ Docker and OpenStack networking are fairly similar • Edge-based overlay intelligence using Open vSwitch is powerful. 61

Editor's Notes

  1. Netw
  2. Similar to server virtualization
  3. Broadcom Trident 2 support it
  4. Note: You can use Neutron + OVS to manage VLANs without requiring commercial s/w. The set of plugins included in the main Neutron distribution and supported by the Neutron community include: Open vSwitch Plugin, Cisco UCS/Nexus Plugin, Linux Bridge Plugin, Modular Layer 2 Plugin, Nicira Network Virtualization Platform (NVP) Plugin, Ryu OpenFlow Controller Plugin, NEC OpenFlow Plugin, Big Switch Controller Plugin, Cloudbase Hyper-V Plugin, MidoNet Plugin, Brocade Neutron Plugin, PLUMgrid Plugin. Additional plugins are available from other sources: OpenContrail Plugin, Extreme Networks Plugin, Ruijie Networks Plugin, Mellanox Neutron Plugin, Juniper Networks Neutron Plugin. If you have your own plugin, feel free to add it to this list.