Networking with Neutron is one of the most complex subsystems in OpenStack. In this talk we shed light on the key components of Neutron networking and the specifics of the relatively new ML2 core plugin.
1. OpenStack Networking and ML2
Attila Szlovencsák
Component Soft Ltd.
attila@componentsoft.eu
OpenStack Hungary Meetup Group
26 February, 2015
2. OpenStack Networking Concepts
● Network:
– an isolated L2 segment dedicated to a tenant (or shared between tenants)
– subnet: a block of IPv4/IPv6 addresses; more than one subnet is allowed per network
– external flag (Neutron only): marks a “public” network
● Port:
– a connection point for attaching a single device (e.g. the NIC of a virtual server) to a virtual network. A port has
● a fixed IP address (assigned via DHCP or injected)
● an associated security group
● optionally an associated floating IP address
● Security group:
– a group of L4 filter rules (no access is granted with the “default” group)
● Floating address:
– an IP address from an 'external' network, associated with a port (fixed address)
– can be moved between VMs
● Router (Neutron only):
– an L3 device that connects the 'external' network and the tenant networks
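As a concrete illustration of these concepts, here is a minimal sketch using the Juno-era neutron CLI; the names demo-net, demo-subnet, demo-router and ext-net, as well as the ID placeholders, are assumptions and not from the talk:
# create a tenant network with one subnet
neutron net-create demo-net
neutron subnet-create demo-net 10.0.0.0/24 --name demo-subnet
# create a router connecting the tenant network to the external one
neutron router-create demo-router
neutron router-gateway-set demo-router ext-net
neutron router-interface-add demo-router demo-subnet
# allocate a floating IP and associate it with a port (fixed address)
neutron floatingip-create ext-net
neutron floatingip-associate FLOATINGIP_ID PORT_ID
# the "default" security group grants no access; open SSH explicitly
neutron security-group-rule-create --direction ingress --protocol tcp \
  --port-range-min 22 --port-range-max 22 default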
3. Nova-network types (pre-Grizzly)
[Diagram: controller running nova-network and dnsmasq plus two compute nodes (nova-compute); VM1–VM4 all attach to the same br0 bridge on the fixed network 10.0.0.0/8]
● Flat mode:
– fixed addresses are injected into the image on launch
– all instances are attached to the same bridge
● Flat DHCP mode (same as flat, plus):
– the nova-network service runs a DHCP server (dnsmasq)
– fixed addresses are assigned via DHCP
● In both cases floating IPs are plumbed on eth0 and NATed to the fixed address
[Diagram labels: public net on eth0; flat_interface=eth1, public_interface=eth0]
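A minimal nova.conf sketch for Flat DHCP mode, assuming the interfaces and addressing from the diagram (the exact flag set varied between releases):
network_manager=nova.network.manager.FlatDHCPManager
flat_network_bridge=br0    # all instances attach to this bridge
flat_interface=eth1        # carries the fixed network
public_interface=eth0      # floating IPs are plumbed and NATed here
fixed_range=10.0.0.0/8     # the fixed network served by dnsmasq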
4. Nova-network types (pre-Grizzly)
● VLAN network mode (default):
– a separate VLAN and bridge are created for each project
– each project has a private IP range; IPs are assigned via DHCP
– access to instances is provided via NAT or CloudPipe on the nova-network node
[Diagram: nova-network node (controller, running one dnsmasq per project) and two compute nodes; VM1–VM4 attach to per-project bridges br1 (eth1.1, 10.1.0.0/16) and br2 (eth1.2, 10.2.0.0/16); the public net is reached via a router; vlan_interface=eth1, public_interface=eth0]
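The corresponding nova.conf sketch for VLAN mode; vlan_start is an illustrative value (nova creates the eth1.<N> subinterfaces and per-project bridges from these flags):
network_manager=nova.network.manager.VlanManager
vlan_interface=eth1        # VLAN subinterfaces eth1.1, eth1.2, ... are created here
vlan_start=1               # first VLAN ID assigned to a project
public_interface=eth0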
5. Why Neutron? (formerly Quantum)
● Problems:
– running out of VLAN IDs (12-bit ID, ~4096)
● double VLAN tagging (QinQ) might help
– with VLANs, compute nodes need to be on the same L2 segment
● use VLAN trunking across multiple switches
– two tenants cannot use the same address space for their private networks
● NAT and routing workarounds get complicated
● Solution:
– have a separate node (the network node) that acts as
● DHCP server and router for all tenant networks
● host for floating IPs
● host for security groups
– this allows
● other tenant segregation methods (GRE, VXLAN, with the ML2 plugin)
● complex tenant networks (SDN, overlapping networks; see the namespace sketch below)
● backward compatibility (with the LinuxBridge plugin)
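Overlapping address spaces work because each tenant router and DHCP server runs in its own Linux network namespace on the network node; a quick way to observe this (the UUIDs are placeholders):
# one namespace per tenant network / router
ip netns list
# qdhcp-<net-uuid>
# qrouter-<router-uuid>
# inspect a tenant router's own routing table, isolated from the host's
ip netns exec qrouter-<router-uuid> ip route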
11. The ML2 plugin
Why?
● Replace the monolithic plugins
– eliminates redundant code when developing a new plugin
● Drivers per functionality (see the sample config below)
– type drivers: the network segmentation type
● flat, GRE, VLAN, VXLAN
– mechanism drivers: the way your networks are managed
● reuse of existing plugins: OVSPlugin, LinuxBridgePlugin
– to be deprecated in favor of a Modular Agent combining OVS and LinuxBridge functionality
● optional features
– L2Population: avoids broadcast flooding via an ARP responder
● Support for heterogeneous deployments (use OVS for one set of compute nodes, LinuxBridge for others)
● l3router: moved to a service plugin (like the other *aaS plugins)
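A minimal ml2_conf.ini sketch showing how type and mechanism drivers are wired together; the driver lists and the VNI range are illustrative choices, not from the talk:
[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,linuxbridge,l2population

[ml2_type_vxlan]
vni_ranges = 1:1000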
13. ML2 features – L2 population (Icehouse)
● Network overhead with broadcast/multicast
– GRE sends unicast copies over each tunnel
– VXLAN sends to a common multicast group
● Avoid ARP broadcasts
– IP/MAC associations are known from the Neutron DB
– set up a local ARP proxy on each node (expanded with sample values below)
● LinuxBridge
ip neighbour add ... permanent
bridge fdb add
● OVS
ebtables -t nat ... -j arpreply
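Spelled out with hypothetical values (the VM IP, MAC, VXLAN device and tunnel endpoint below are placeholders), the proxy entries look roughly like this:
# LinuxBridge agent: static ARP entry plus a forwarding-database entry
ip neighbour add 10.0.0.5 lladdr fa:16:3e:11:22:33 dev vxlan-100 nud permanent
bridge fdb add fa:16:3e:11:22:33 dev vxlan-100 dst 192.168.0.12
# OVS agent: answer ARP requests for the remote VM locally
ebtables -t nat -A PREROUTING -p arp --arp-opcode Request \
  --arp-ip-dst 10.0.0.5 -j arpreply --arpreply-mac fa:16:3e:11:22:33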
[Diagram: a VM's ARP request is answered by the local ARP proxy on its compute node instead of being flooded to the other compute and network nodes]
14. ML2 features – L3 HA (Juno)
[Diagram (goo.gl/z4flP3): two network nodes, each hosting the qrouter-X and qdhcp-A namespaces on br-int; the qr-X-p1 router ports sit on the tenant network (VLAN2), while the ha-Y1/ha-Y2 ports sit on a dedicated tenant HA network, where keepalived (+conntrackd) instances talk VRRP; a compute node runs VM3 on the same tenant network]
● Floating IPs and the router IP are hosted on the active router only
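A sketch of the Juno-era configuration that enables this; the values shown are illustrative, not from the talk:
# neutron.conf on the server
l3_ha = True                       # create new routers as HA routers
max_l3_agents_per_router = 2       # network nodes hosting each HA router
l3_ha_net_cidr = 169.254.192.0/18  # CIDR pool for the hidden HA networks

# or explicitly per router
neutron router-create --ha True demo-router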
15. L3 HA facts
● Keepalived
– processes talk VRRP on a separate network
● assigned to a dedicated virtual tenant
– VRRP election process:
● hello packets go to multicast group 224.0.0.18
● one byte for the VRID in the VRRP header
– only 255 virtual routers per tenant
● after 3 missed hellos an election is started
– the lower router_id wins
– failback happens when the original active router comes back
● the new active instance sends a gratuitous ARP
– switches learn the new port for the MAC/IP
● Other *aaS
– HA for FWaaS and LBaaS is scheduled for the Kilo release
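As a quick check of where a router is active (demo-router and the router UUID are placeholders), only the active instance has the router's addresses plumbed in its namespace:
# list the L3 agents hosting the router
neutron l3-agent-list-hosting-router demo-router
# on a network node: the active node shows the qr-/floating addresses
ip netns exec qrouter-<router-uuid> ip -4 addr show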