Presentation from Container Camp London 2015 comparing the network performance of Docker containers on AWS and Azure. The SDN solutions included in these tests are Flannel, Weave and Project Calico.
1. Docker network performance in
the public cloud
Arjan Schaaf - Luminis Technologies
container.camp London
September 11th 2015
2. Cloud RTI
• Luminis Technologies
• Founded in The Netherlands
• amdatu.com PaaS
• both public and private clouds
• cloud provider independent
3. Cloud RTI
• CoreOS
• Docker
• Kubernetes
• Load balancing, Data Stores, ELK
• Highly available, scalable applications with centralised
logging, monitoring and metrics
4. Choose your cloud wisely
• comparing cloud VMs based on price or hardware
specification isn’t enough
• cloud providers throttle their VMs differently
• don’t trust specifications on ‘paper’
5. Azure vs AWS
AZURE AWS
INSTANCE TYPE PRICE NETWORK INSTANCE TYPE PRICE NETWORK
A0 $0.018 5 Mbps t2.micro $0.014 Low to Moderate
A1 $0.051 100 Mbps t2.medium $0.056 Low to Moderate
D1 $0.084 unknown m4.large $0.139 Moderate
D2 $0.168 unknown m4.xlarge $0.278 High
A8 $1.97 40 Gbit/s InfiniBand m4.10xlarge $2.78 10 Gbit/s
6. Native Network Test Setup
• qperf: short-running test
• iperf3: longer-running test using parallel connections
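The native test above can be sketched roughly as follows; `SERVER_IP`, the 60-second duration and the 4 parallel streams are illustrative assumptions, not values stated in the slides:

```shell
# On the server VM: start both listeners
qperf &            # qperf control port listens on TCP 19765 by default
iperf3 -s &        # iperf3 server listens on TCP 5201

# On the client VM (SERVER_IP is a placeholder):
qperf SERVER_IP tcp_bw tcp_lat    # short bandwidth + latency probe
iperf3 -c SERVER_IP -t 60 -P 4    # longer run with 4 parallel connections
```

qperf reports bandwidth and round-trip latency per test; iperf3's parallel streams help saturate higher-bandwidth instance types.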
14. Docker Networking
• Connect containers over the host interface (use
ambassadors!)
• Use an SDN to connect your Docker cluster nodes
• weave
• flannel
• Project Calico
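The host-interface-with-ambassadors option can be sketched like this, following the ambassador pattern from the Docker documentation of the time; the Redis service, `HOST_A_IP` and the `svendowideit/ambassador` image are illustrative:

```shell
# Host A: run the real service, published on the host interface
docker run -d --name redis -p 6379:6379 redis

# Host B: an ambassador container forwards the port to host A
docker run -d --name redis_ambassador -p 6379:6379 \
  -e REDIS_PORT_6379_TCP=tcp://HOST_A_IP:6379 \
  svendowideit/ambassador

# Clients on host B link to the local ambassador, not to host A directly
docker run -it --link redis_ambassador:redis alpine sh
```

The ambassador decouples clients from the remote host's address: moving the service only requires restarting the ambassador.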
15. Before Docker 1.7
• Approach depended on the SDN
• replace the Docker bridge
• proxy in front of the Docker daemon
16. Docker libnetwork
• Announced along with Docker 1.7 as an experimental
feature
• Networking Plugins: batteries included but swappable
• Included batteries are based on Socketplane
• Other plugins announced by: Weave, Project Calico,
Cisco, VMware and others
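Under the experimental libnetwork release, the built-in "batteries" were exercised roughly like this (syntax per the Docker 1.7 experimental docs; it changed in later releases, and the network/container names are examples):

```shell
# Create a multi-host network with the built-in overlay driver
docker network create -d overlay mynet

# Attach a container to that network at run time
docker run -d --net=mynet --name web nginx
```

The "swappable batteries" idea is that `-d overlay` can be replaced by a third-party plugin driver without changing the rest of the workflow.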
17. Choose your SDN wisely
• Functional features like encryption & DNS
• Support for libnetwork, Kubernetes, etc.
• Implementations can be fundamentally different
• overlay networks like Flannel & Weave
• different overlay backend implementations (for example
UDP)
• L2/L3 based networks like Project Calico
18. Flannel
• Created by CoreOS
• Easy to setup
• Different backends
• UDP
• VXLAN
• AWS VPC (uses VPC routing table)
• GCE (uses Network routing table)
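Flannel reads its configuration from etcd, so switching backends is a one-line change. A minimal sketch, with key names per the CoreOS flannel docs of the time and an example subnet:

```shell
# Store the network config in etcd; flanneld on each node picks it up
etcdctl set /coreos.com/network/config \
  '{ "Network": "10.1.0.0/16", "Backend": { "Type": "vxlan" } }'

# Backend alternatives: {"Type":"udp"}, {"Type":"aws-vpc"}, {"Type":"gce"}
```

This is part of why Flannel is easy to set up: per-node configuration is limited to pointing flanneld at etcd.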
19. Weave
• Tests used Weave 1.0.3; 1.1 was released this week
• DNS
• Proxy based approach
• Different backends
• pcap (default)
• VXLAN (fast-datapath-preview)
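The proxy-based approach looks roughly like this under Weave ~1.0 (the CLI changed across releases, and `HOST_A_IP` is a placeholder):

```shell
# Launch the weave router on each host, peering host B with host A
weave launch            # on host A
weave launch HOST_A_IP  # on host B

# Start the weave proxy and point the docker client at it, so new
# containers are attached to the weave network transparently
weave launch-proxy
eval $(weave env)
docker run -d --name web nginx   # now on the weave network, with DNS
```

The proxy intercepts `docker run`, which is what lets Weave add DNS registration and network attachment without changes to images or commands.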
20. Project Calico
• Uses vRouters connected over BGP routes
• No additional overlay when running on a L2 or L3
network (think datacentre!)
• Won’t run on public clouds like AWS without an IPIP
tunnel
• Extensive and simple network policies (tenant isolation!)
• Very promising integration with Kubernetes
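The IPIP workaround for AWS was, per the calicoctl v0.x docs of the time, a per-pool setting (the address pool below is an example):

```shell
# Enable IP-in-IP encapsulation (and outbound NAT) on the pool so that
# Calico traffic can cross AWS's L3 fabric between instances
calicoctl pool add 192.168.0.0/16 --ipip --nat-outgoing
```

On a datacentre L2/L3 network this flag is unnecessary, which is where Calico avoids overlay overhead entirely.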
21. Docker Network Test Setup
• exactly the same as the “native” test, but this time use
the IP address or DNS name of the container!
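A rough sketch of that container-to-container variant; the `networkstatic/iperf3` image is an assumption (any image with iperf3 works), and where the SDN-assigned address appears in `docker inspect` varies by SDN:

```shell
# Host A: iperf3 server in a container
docker run -d --name iperf-server networkstatic/iperf3 -s

# Look up the container's address (field path varies per SDN setup)
SERVER_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' iperf-server)

# Host B: client container targets the container address directly
docker run --rm networkstatic/iperf3 -c $SERVER_IP -t 60 -P 4
```

Comparing these numbers against the native runs isolates the overhead of the SDN data path itself.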
28. Native vs SDN performance
& CPU load client + server
(IPERF = throughput delta vs native; C/S = client/server CPU load)
INSTANCE TYPE FLANNEL UDP FLANNEL VXLAN WEAVE PCAP CALICO
IPERF C S IPERF C S IPERF C S IPERF C S
T2.MICRO -16% 62.7% 29% -2% 11.7% 23.2% -14% 59.7% 89.5% -14% 26% 57%
T2.MEDIUM -1% 28.7% 20.2% -1% 20.6% 18.7% -3% 52.6% 33.1% -3% 17% 37%
M4.LARGE -1% 15.4% 12.7% -1% 10% 10% -1% 34.1% 24.8% -1% 21% 21%
M4.XLARGE -0% 9.4% 7.9% -1% 6.6% 7.3% -1% 22.9% 18.9% -1% 12% 10%
M4.10XLARGE -55% 2.8% 5.0% -20% 2.7% 3.4% -79% 14.8% 13.5% -32% 3% 4%
29. CPU load compared to native
test results
INSTANCE TYPE FLANNEL UDP FLANNEL VXLAN WEAVE PCAP CALICO
C S C S C S C S
T2.MEDIUM 95% 57% 40% 45% 258% 157% 15% 184%
M4.LARGE 108% 46% 35% 15% 361% 185% 177% 140%
M4.XLARGE 92% 44% 35% 33% 367% 244% 141% 82%
30. Conclusion
• Happy with the choice for Flannel VXLAN
• Interested in Project Calico in combination with
Kubernetes
31. Conclusion
• synthetic tests are a great starting point
• don’t forget to validate the results with “real-life” load
tests on your application(s)