3. Agenda
● The evolving architecture of application networking
● Docker networking
● Infrastructure design patterns
● Design Patterns when modernizing a traditional application
● [REDACTED]
● Summary and Q/A
5. Physically hosted applications
● Services and application components map 1:1 to network addresses and
the physical architecture.
● Often flat or simplistic networks, defined by physical network ports or
VLANs used to segregate applications on the network.
● High availability is provided by clustering software or DNS/load-balancer
across multiple deployments/sites.
7. Virtual (Machine) applications
● Services and applications are broken down into smaller VM allocations,
resulting in an explosion of network resources.
● The tight-packing of numerous VMs per host has resulted in numerous
networks being provisioned to every host.
● Virtual LANs are used as the method for providing segregation between
applications and application tiers.
10. Docker Networking
[dan@dockercon ~]$ docker network ls
NETWORK ID          NAME              DRIVER    SCOPE
4507d8b4dd86        bridge            bridge    local
8866a19c0751        docker_gwbridge   bridge    local
b88e79e31749        host              host      local
vlujsum8my0u        ingress           overlay   swarm
e12df2f39d06        none              null      local
ed60df3f6402        mac_net           macvlan   local
11. Host/Bridge Networking
[Diagram: three hosts (10.0.0.1, 10.0.0.2, 10.0.0.3), each running a Docker Engine with a bridge/NAT network at 172.17.0.1; the container's :80 is bound directly on the host]
[dan@dockercon ~]$ docker run --net=host nginx
● The --net=host flag starts the container
in the same network namespace as the
host itself, allowing the container to use
the host's networking stack directly.
● Provides near-metal speed, however it
can result in port conflicts.
12. Host/Bridge Networking
[Diagram: three hosts (10.0.0.1, 10.0.0.2, 10.0.0.3), each running a Docker Engine with a bridge/NAT network on 172.17.0.0/16, gateway 172.17.0.1]
[dan@dockercon ~]$ docker run dockerimage:latest
● Containers are connected by default to
the internal bridge network when they
start.
● By design these containers won't
expose any network connectivity to the
outside world, however they can speak
to one another whilst on the same host.
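The default bridge described above can be sketched with Python's `ipaddress` module. This is a conceptual illustration only (not Docker's implementation); the 172.17.0.0/16 range is Docker's common default, and the container addresses are the ones shown on the slide.

```python
# Sketch: the default bridge subnet (commonly 172.17.0.0/16) seen on each host.
# The gateway 172.17.0.1 is the bridge itself; containers get addresses after it.
import ipaddress

bridge = ipaddress.ip_network("172.17.0.0/16")
gateway = ipaddress.ip_address("172.17.0.1")

# Containers on the same host share this subnet and can reach each other.
container_a = ipaddress.ip_address("172.17.0.2")
container_b = ipaddress.ip_address("172.17.0.3")

assert gateway in bridge and container_a in bridge and container_b in bridge

# The subnet is private (RFC 1918) and only meaningful inside one host:
# another host reuses the same range, so these addresses are not reachable
# between hosts without NAT/port publishing.
print(bridge.is_private, bridge.num_addresses)  # True 65536
```

Because every host reuses the same private range, inter-host connectivity has to go through NAT and published ports, which is where the `-p` flag on the next slide comes in.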
13. Host/Bridge Networking
[Diagram: three hosts (10.0.0.1, 10.0.0.2, 10.0.0.3), each running a Docker Engine with a bridge/NAT network on 172.17.0.0/16; containers at 172.17.0.1 and 172.17.0.2 with host port :80 mapped to container port :80]
[dan@dockercon ~]$ docker run -p 80:80 nginx
● The -p flag will expose an external port
on the host and map it to a port on the
container.
● Only containers with services need to
expose their ports, potentially solving
port conflicts.
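A toy model of host-port allocation makes the contrast with host networking concrete: host networking claims the container's own port on the host, while `-p` lets each container pick a distinct host port. This is a sketch of the idea, not how Docker allocates ports internally.

```python
# Toy model of host-port allocation on one host: a host port can only be
# bound once, but many containers can share the same container-side port.
def publish(bindings, host_port, container_port):
    """Record a host_port -> container_port mapping; refuse duplicates."""
    if host_port in bindings:
        raise ValueError(f"port {host_port} already in use")
    bindings[host_port] = container_port
    return bindings

bindings = {}
publish(bindings, 80, 80)        # first nginx:  -p 80:80
publish(bindings, 8080, 80)      # second nginx: -p 8080:80, no conflict

try:
    publish(bindings, 80, 80)    # a third container also wanting host port 80
except ValueError as err:
    print(err)                   # port 80 already in use
```

With `--net=host` every container implicitly "publishes" its listening ports, so two web servers collide immediately; with `-p` the collision only happens if you explicitly pick the same host port twice.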
14. Swarm Overlay networking
[Diagram: three hosts (10.0.0.1, 10.0.0.2, 10.0.0.3), each running a Docker Engine joined to the same overlay network; :8080 is published on every host and maps to :80 on the two service tasks]
[dan@dockercon ~]$ docker service create --name web
--replicas 2
--publish 8080:80
nginx
● The overlay network makes use of
VXLAN in order to create an overlay
network on top of the underlying
network.
● The tunnel allows containers across
hosts to communicate.
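The VXLAN tunnel is not free: each inner frame is wrapped in outer Ethernet, IP, UDP and VXLAN headers, which is the commonly quoted 50-byte overhead for IPv4 (RFC 7348) and why overlay interfaces typically run with a 1450-byte MTU on a 1500-byte underlay. A quick arithmetic check:

```python
# VXLAN wraps each inner Ethernet frame in outer Ethernet + IPv4 + UDP + VXLAN
# headers (RFC 7348), so the usable MTU shrinks by the encapsulation overhead.
OUTER_ETHERNET = 14   # outer MAC header (no VLAN tag)
OUTER_IPV4 = 20       # outer IP header (no options)
UDP = 8               # VXLAN runs over UDP (port 4789)
VXLAN = 8             # VXLAN header carrying the 24-bit VNI

overhead = OUTER_ETHERNET + OUTER_IPV4 + UDP + VXLAN
underlay_mtu = 1500
overlay_mtu = underlay_mtu - overhead

print(overhead, overlay_mtu)  # 50 1450
```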
15. Swarm Overlay networking
[Diagram: three hosts (10.0.0.1, 10.0.0.2, 10.0.0.3) on the same overlay; the published port :8080 answers on every host and forwards to the tasks listening on :80]
● By default the overlay is encrypted with
the AES algorithm, and hosts will rotate
their keys every 12 hours.
● Publishing a port applies to all nodes in
the swarm cluster. Regardless of which
node is connected to, the request is
forwarded to a node running the task.
[dan@dockercon ~]$ docker service create --name web
--replicas 2
--publish 8080:80
nginx
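The ingress routing-mesh behaviour above can be sketched as a toy model: the published port answers on every node, and whichever node receives the request forwards it to some node actually running a task. The node addresses are from the slide; which two nodes hold the tasks is an illustrative assumption.

```python
import random

# Conceptual sketch of the swarm ingress mesh: the published port is open on
# every node, and a request landing on any node is forwarded to some node
# that actually runs a task of the service.
nodes = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
task_nodes = ["10.0.0.1", "10.0.0.3"]     # assume --replicas 2 landed here

def route(entry_node):
    assert entry_node in nodes            # :8080 answers on every node
    return random.choice(task_nodes)      # forwarded to a node with a task

# Regardless of the entry point, traffic ends up where the tasks run.
for entry in nodes:
    assert route(entry) in task_nodes
```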
16. Swarm Overlay networking
[Diagram: three hosts (10.0.0.1, 10.0.0.2, 10.0.0.3) on the same overlay, with service tasks listening on :80; the containers' overlay addresses 172.18.0.3 and 172.18.0.4 are paired with the host-side addresses 10.0.0.3 and 10.0.0.4]
● Each container gets a pair of IP
addresses.
● One IP address exists on the overlay
network; this allows all containers on
that network to communicate.
● The other IP address carries the tunnel
to other hosts in the cluster and
contains all the actual data that needs
to leave the host.
17. Macvlan driver
[Diagram: two hosts (10.0.0.1 and 10.0.0.2), each running a Docker Engine with containers addressed directly on the physical network at 10.1.0.1–10.1.0.4]
● The Macvlan driver provides a hardware
(MAC) address for each container, allowing
them to have a full TCP/IP stack.
● Allows containers to become part of the
traditional network, and use things like
external IPAM or VLAN trunking when
numerous networks are needed.
● No overhead from technologies such as
VXLAN or NAT.
18. Macvlan driver
[Diagram: two hosts (10.0.0.1 and 10.0.0.2) with containers on the macvlan network at 10.1.0.2–10.1.0.5]
[dan@dockercon ~]$ docker network create -d macvlan
--subnet=10.1.0.0/24
--gateway=10.1.0.1
-o parent=eth0 mac_net
● Create a network using the macvlan
driver and assign the range/gateway
and the parent adapter (or sub-adapter
for VLANs, e.g. eth0.120).
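The addressing behind that `docker network create` can be checked with a small `ipaddress` sketch: out of the 10.1.0.0/24 range, everything except the declared gateway is available for containers. This models the subnet arithmetic only, not Docker's actual IPAM driver.

```python
import ipaddress

# Sketch of the macvlan addressing from the slide: containers take real
# addresses out of the physical subnet, alongside the existing gateway.
subnet = ipaddress.ip_network("10.1.0.0/24")
gateway = ipaddress.ip_address("10.1.0.1")

# Candidate container addresses: every usable host address except the gateway.
assignable = [ip for ip in subnet.hosts() if ip != gateway]

print(len(assignable), assignable[0])  # 253 10.1.0.2
```

This is also why macvlan shifts work onto the network team: each of those 253 addresses is a real host on the physical subnet that external IPAM, DHCP exclusions and firewall rules now have to account for.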
19. Macvlan driver
[Diagram: two hosts (10.0.0.1 and 10.0.0.2) with containers on the macvlan network at 10.1.0.2–10.1.0.5]
[dan@dockercon ~]$ docker run --net=mac_net
--ip=10.1.0.2
nginx
● When starting a container you can apply
a physical IP address on that network.
● The container is effectively another host
on the underlay network.
20. Macvlan driver
[Diagram: containers addressed directly on the physical network, 10.1.0.1–10.1.0.16]
● The use of the macvlan driver essentially
makes a Docker container a first class citizen
on the network.
● This functionality however carries additional
overhead in terms of network management, as
each container will now exist on the network
as its own entity.
21. Networking plugins
[Diagram: two hosts (10.0.0.1 and 10.0.0.2), each running a Docker Engine with a vendor networking plugin pushing configuration out to the network]
● Docker networking plugins allow vendors to extend the functionality of their network devices and technologies
into the Docker Engine.
● They provide features such as vendor-specific IP address management, or enable the network to configure itself to
provide functionality to containers through their lifecycle (overlays/QoS/load balancing).
23. Separate data/control planes
[Diagram: two hosts with control-plane addresses 10.0.0.1 and 10.0.0.2]
[dan@dockercon ~]$ docker swarm init
--advertise-addr eth0
--data-path-addr eth1
[Diagram: the overlay data plane runs over a second set of adapters, 10.1.0.1 and 10.1.0.2]
● When initially configuring a Docker swarm
cluster on hosts with multiple NICs there is
the option of separating the data and control
planes.
● This provides physical and logical separation
of traffic leaving the host.
24. Separate data/control planes
[Diagram: two hosts with control-plane addresses 10.0.0.1 and 10.0.0.2]
[dan@dockercon ~]$ docker swarm join
--token XYZ --advertise-addr eth0
--data-path-addr eth1
10.0.0.1:2377
[Diagram: the overlay data plane spans both hosts over 10.1.0.1 and 10.1.0.2]
● Joining additional nodes to the swarm cluster
takes two additional flags to specify the traffic
carried by a particular adapter.
● Any services created will then be part of the
data plane and have traffic segregated from
the control plane.
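The data/control split can be sketched as a subnet check: management traffic stays on one subnet, overlay/VXLAN traffic on another. The subnets below are assumptions matching the addresses shown on the slides (eth0 on 10.0.0.0/24, eth1 on 10.1.0.0/24); this models the separation, not swarm's internal routing.

```python
import ipaddress

# Sketch of the split: control-plane (swarm management) traffic stays on one
# subnet and data-plane (overlay/VXLAN) traffic on another, mirroring the
# --advertise-addr eth0 / --data-path-addr eth1 flags in the commands above.
control_net = ipaddress.ip_network("10.0.0.0/24")   # assumed eth0 subnet
data_net = ipaddress.ip_network("10.1.0.0/24")      # assumed eth1 subnet

def plane(ip):
    """Classify which plane a source address belongs to."""
    addr = ipaddress.ip_address(ip)
    if addr in control_net:
        return "control"
    if addr in data_net:
        return "data"
    return "unknown"

print(plane("10.0.0.2"), plane("10.1.0.2"))  # control data
```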
26. Docker Enterprise Edition
● Docker Enterprise Edition provides a full
CaaS platform (Containers as a Service).
● Comes with integrated container
orchestration, a management platform
and increased security (RBAC, image
scanning, etc.)
● Enterprise supported platform for
production deployments.
27. Universal Control Plane
● The Docker UCP provides a clustered enterprise
grade management platform for Docker.
● A centralized platform for managing and monitoring
swarm container clusters and container
infrastructure.
● Extends the functionality of the Docker
platform, making it easier to deploy
applications at scale.
● Can be controlled through the UI, the
CLI (client bundle) or the Docker APIs.
28. Docker Trusted Registry
● Enterprise grade storage for all your Docker
Images, allowing users to host their images
locally.
● Can become part of the CI/CD processes
simplifying the process to build, ship and run
your applications.
● Images can be automatically scanned for
vulnerabilities ensuring that only compliant
images can be deployed.
29. Application Architecture
[Diagram: a load balancer in front of two VM hosts, each spanning VLAN101 (F/E) and VLAN102 (app), with DB host(s) on VLAN103 (DB)]
30. “Behind the scenes the
developers and application
maintainers have repackaged
our applications into containers”
31. Application Architecture
[Diagram: a load balancer in front of two VM hosts, each spanning VLAN101 (F/E) and VLAN102 (app), with DB host(s) on VLAN103 (DB)]
● The explosion of VMs also drove the
explosion of VLANs, which were a
recommended network architectural
choice in order to provide segregation
of tiers of virtual infrastructure.
● However, we can simplify the network
greatly by making use of overlays
(VXLAN), which not only provide
segregation but also encryption.
32. Front-End with HRM
[Diagram: workers 1…X on Docker Engines sharing overlays; the HRM listens on :80 and routes www.petstore.com and api.petstore.com to their respective overlays]
● Docker EE provides the HTTP Routing Mesh
capability, which simplifies the routing
between services.
● The HRM will inspect the hostname that has
been requested and route the traffic to that
particular service.
● This allows multiple overlays to exist in
harmony and traffic to be routed to them as
requests hit the HRM port.
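The hostname inspection the HRM performs can be sketched as a lookup table: one shared entry port, with the requested Host header deciding which service receives the traffic. The hostnames come from the slide; the service names (`store-service`, `api-service`) are illustrative assumptions, not real UCP configuration.

```python
# Conceptual sketch of the HTTP Routing Mesh: one shared entry port, with the
# requested hostname deciding which service (and overlay) gets the traffic.
routes = {
    "www.petstore.com": "store-service",   # hypothetical service names
    "api.petstore.com": "api-service",
}

def hrm(host_header):
    """Inspect the requested hostname and pick the matching backend service."""
    return routes.get(host_header, "404: no route")

print(hrm("www.petstore.com"))   # store-service
print(hrm("api.petstore.com"))   # api-service
```

Because routing keys on the hostname rather than the port, many services can share the single HRM entry port while living on separate overlays.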
33. Scalable services
[Diagram: workers 1…X running App Service and Store Service tasks across a shared overlay]
● Taking the existing and now packaged
applications, we can deploy them as services.
● We can deploy and scale them up as needed
across our cluster.
● Exposing service ports will provide load
balancing across service tasks and ensure
traffic is routed to where those tasks are
running.
34. Application Architecture
[Diagram: a load balancer in front of two VM hosts spanning VLAN101 (F/E) and VLAN102 (app), with DB host(s) on VLAN103 (DB)]
● Some elements of an application require
direct access to the network to provide
low-level services.
● Other elements may have a requirement
that they be part of an existing network
or VLAN to provide direct access to
other services.
● Some elements are also based upon
fixed or hard-coded IP addresses, and in
some cases a licensing restriction.
35. Preserving existing integrations
[Diagram: workers 1 and X; a container at 10.1.0.47 joins VLAN103 alongside the database servers at 10.20.0.19 and 10.20.0.20]
● The use of macvlan allows a container
with specific requirements, such as
packet inspection, to sit directly on the
network.
● Custom singleton applications that are
hardcoded to interact with databases can
make use of their original IP addresses
and be part of the same segregated
VLAN in which the database server(s)
reside.
36. Design Patterns
● Where possible, there is a great opportunity to simplify networking.
● The use of overlays (VXLAN) is all handled in software, providing software-defined networking "as
code". This also has the additional benefit of simplifying network device configurations.
● Overlay-provided load balancing is again specified as part of the service design, simplifying the
application and the network architecture design.
● Cases where VLANs or hard-pinned IP connectivity are required can be met through the use of
containers attached through macvlan.
37. Explore the hands on labs in the
experience centre for some real
experience.
👍🏼
42. UCP Architecture
UCP Node(s)
Service Swarm
Docker Engine(s)
Service Kube
Docker Engine(s)
Ingress-Controller
swarm.dockercon.com
kube.dockercon.com
43. Summary
● Applications that can be re-homed on a network can make use of Docker networking features that will
simplify their deployment and their scaling.
● Overlay networks provide the capability to place workloads through the cluster without the headache of
having to be aware of task location.
● Services that are tied or hard coded to specific network requirements can still be deployed in
containers.
44. Interested in MTA?
● Stop by the booth (MTA pod)
● Download the kit www.docker.com/mta
● Look for an MTA Roadshow near you
● Contact your Account Team
45. Docker EE
Hosted Demo
docker.com/trial
● Free 4 Hour Demo
● No Servers Required
● Full Docker EE
Cluster Access
I've been with Docker for four months; however, I've been a user of and contributor to Docker projects for the last eighteen months.
Prior to Docker I've worked at numerous companies in roles ranging from System Administrator, Infrastructure Engineer and Architect to Consultant... so I've pretty much managed the full set so far :)
I'm also quite passionate about automation and things "as code", so I'm the person that has left scripts and small utilities behind at various organisations that "hopefully" have kept the lights on (but no one can fathom out how they work). (sorry)
There are still a lot of people and businesses wanting to learn the basics of container/Docker networking.
There are also a lot of people who are starting to look towards migrating their applications into services, and who want to learn about some of the design patterns they can apply when migrating an application to a new containerised network.
This is the agenda for the Practical Design Patterns session. If you were in yesterday's keynote then you may have an idea what the [redacted] topic will be... and if not, well, you'll just have to stay and find out.
We're starting off with having a look at how application networking has changed and evolved as technologies have forced changes in application design.
TODO: Density / IP /
That was a quick tour of the landscape of "legacy applications"; there are still a lot of places where that will be the architecture hosting these applications. Indeed, there are many places that have only just started down the path to virtualisation.
This is comparable to how a lot of home networks are configured. Typically, with the router provided by your Internet Service Provider, all devices connected to it inside the house (laptops/smartphones/toasters) are part of that internal bridge: they can use network address translation to go outbound, yet they're protected because they don't expose any network connectivity to the outside world.
Which leads on to exposing connectivity
Only expose the ports required, all containers that are part of the Bridge will be able to communicate
As the Docker engine spins up the overlay network, all hosts in the cluster will be added to a VXLAN tunnel which as the name overlay suggests is essentially a network on top of a network. When a container needs to speak to another container (and that container is on a different host) then the traffic will enter the VXLAN tunnel and be transmitted to the VTEP (VXLAN tunnel endpoint) on the host where the container is currently running.
The one caveat being that Overlay networking encryption isn’t currently available on Windows
The key concept here is that we can make use of overlays in order to provide:
- Simplification of networking architecture
- Simpler networking configuration on the actual networking devices
- Security through isolation of services with VXLAN
- Security through encryption of overlays
I'm providing only a high-level overview of how networking will be provided through UCP + Swarm/Kubernetes; the reason behind this is really to show the design patterns moving forward.
So, other than providing the high level overview I’m not going to be able to answer any questions about the integration in detail.
(CLICK)
If you do have questions though, I’d like to collect them afterwards so that we can start to look at putting together a FAQ.