1. Performance Evaluation of Container-based Virtualization
for High Performance Computing Environments
Miguel G. Xavier, Marcelo V. Neves, Fabio D. Rossi, Tiago C. Ferreto, Timoteo Lange, Cesar A. F. De Rose
miguel.xavier@acad.pucrs.br
Faculty of Informatics, PUCRS
Porto Alegre, Brazil
February 27, 2013
3. Introduction
• Virtualization
• Hardware independence, availability, isolation and security
• Better manageability
• Widely used in datacenters/cloud computing
• Total cost of ownership is reduced
• HPC and Virtualization
• Usage scenarios
• Better resource sharing
• Custom environments
• However, hypervisor-based technologies have traditionally been avoided in HPC environments
5. Evaluation
• Experimental Environment
• Cluster composed of 4 nodes
• Two processors with 8 cores (without hyper-threading)
• 16GB of memory
• Evaluations
• Performance analysis
• Through micro-benchmarks (CPU, disk, memory, network) on a single node
• Through macro-benchmarks (HPC applications)
• Isolation analysis
• Through the Isolation Benchmark Suite (IBS)
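Throughout the evaluation, each system's results are compared against the native (non-virtualized) baseline. A minimal sketch of that comparison in Python; the numbers are placeholders, not measurements from this study:

def overhead_pct(native, system, higher_is_better=True):
    """Relative overhead of `system` with respect to the native result, in percent."""
    if higher_is_better:
        return (native - system) / native * 100.0
    return (system - native) / native * 100.0

# Hypothetical throughput results (higher is better); not measured data.
results = {"Native": 1000.0, "LXC": 995.0, "OpenVZ": 990.0,
           "VServer": 993.0, "Xen": 957.0}

baseline = results["Native"]
for name, value in results.items():
    print(f"{name:8s} overhead vs. native: {overhead_pct(baseline, value):5.1f}%")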
6. CPU Evaluation
• All container-based systems obtained performance results similar to native
• No influence of the different CPU schedulers when a single CPU-intensive process runs on a single processor
• Xen presents an average overhead of 4.3% (a rough LINPACK-style sketch follows the figure below)
[Figure: LINPACK performance (MFLOPS) for Native, LXC, OpenVZ, VServer and Xen]
LINPACK Benchmark (source: http://www.netlib.org/linpack/)
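As a rough stand-in for what LINPACK measures, the sketch below times the solution of a dense linear system with NumPy and reports MFLOPS using the conventional LINPACK operation count. The problem size is an arbitrary assumption; this is not the official benchmark.

import time
import numpy as np

n = 2000                                   # problem size (assumption)
A = np.random.rand(n, n)
b = np.random.rand(n)

t0 = time.perf_counter()
x = np.linalg.solve(A, b)                  # LU factorization + triangular solves
elapsed = time.perf_counter() - t0

flops = (2.0 / 3.0) * n**3 + 2.0 * n**2    # conventional LINPACK flop count
print(f"n={n}: {elapsed:.3f} s, {flops / elapsed / 1e6:.1f} MFLOPS")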
7. Memory Bandwidth Evaluation
STREAM Benchmark (source: https://www.cs.virginia.edu/stream/)
• Container-based systems have the ability to return unused memory to the host and other containers
• Xen presented a 31% performance overhead compared to the native throughput
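A STREAM-style "triad" kernel (a[i] = b[i] + q*c[i]) gives a feel for what the memory-bandwidth test measures. A minimal NumPy sketch with an arbitrary array length, not the official STREAM benchmark:

import time
import numpy as np

n = 20_000_000                  # array length (assumption): ~160 MB per array
b = np.random.rand(n)
c = np.random.rand(n)
q = 3.0

t0 = time.perf_counter()
a = b + q * c                   # triad: two reads and one write per element
elapsed = time.perf_counter() - t0

bytes_moved = 3 * n * 8         # three arrays of 8-byte doubles
print(f"Triad bandwidth: {bytes_moved / elapsed / 1e9:.2f} GB/s")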
8. Disk Evaluation
IOZone Benchmark (source: https://www.iozone.org)
• LXC and Linux-VServer use the "deadline" Linux I/O scheduler
• OpenVZ uses the CFQ scheduler in order to provide its per-container disk priority functionality
• Xen uses virtualized drivers, which are not yet able to achieve high performance
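On Linux, the I/O scheduler active on a block device can be read from sysfs, where the active scheduler appears in brackets. A small sketch, assuming a Linux host and a device named sda:

from pathlib import Path

def active_scheduler(device="sda"):
    """Return the I/O scheduler currently active for the given block device."""
    text = Path(f"/sys/block/{device}/queue/scheduler").read_text()
    # e.g. "noop deadline [cfq]" -> "cfq"
    return text.split("[")[1].split("]")[0]

print(active_scheduler("sda"))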
9. Network Evaluation
[Figures: NetPIPE latency (microseconds) and bandwidth (Mbps) vs. message size (bytes) for Native, LXC, OpenVZ, VServer and Xen]
NETPIPE Benchmark (source: http://www.scl.ameslab.gov/netpipe/)
• Xen obtained the worst network performance among the virtualization systems, probably due to its virtualized network driver
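In the spirit of NetPIPE's ping-pong test, the sketch below bounces a fixed-size message over TCP and reports the half round-trip time. Host, port, message size and iteration count are assumptions, and this is not the NetPIPE tool itself; server() would run in one container (or natively) and client() in another, with HOST pointing at the server.

import socket
import time

HOST, PORT, SIZE, ITERS = "127.0.0.1", 5555, 64, 1000   # all assumptions

def recv_exact(sock, n):
    """Receive exactly n bytes (TCP may deliver a message in pieces)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def server():
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            for _ in range(ITERS):
                conn.sendall(recv_exact(conn, SIZE))    # echo each message back

def client():
    with socket.create_connection((HOST, PORT)) as sock:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        msg = b"x" * SIZE
        t0 = time.perf_counter()
        for _ in range(ITERS):
            sock.sendall(msg)
            recv_exact(sock, SIZE)
        half_rtt = (time.perf_counter() - t0) / ITERS / 2
        print(f"{SIZE}-byte latency: {half_rtt * 1e6:.1f} microseconds")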
10. HPC Evaluation
NAS-MPI Benchmark (source: http://www.nas.nasa.gov/publications/npb.html)
• At this point it is possible to observe that all container-based systems slightly exceed the native performance
• All HPC benchmarks, when run on Xen, suffered even higher overheads due to the network penalties (a sketch of timing one NPB kernel follows the figure below)
[Figure: NAS Parallel Benchmarks (bt, cg, ep, ft, is, lu, mg) execution time in seconds for Native, LXC, OpenVZ, VServer and Xen]
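A minimal sketch of timing one NPB-MPI kernel from the outside. The binary naming convention (kernel.CLASS.NPROCS), problem class, process count and path are assumptions, and the benchmarks must already be built for the local MPI installation:

import subprocess
import time

def run_npb(kernel, npb_class="B", nprocs=16, bindir="./NPB3.3-MPI/bin"):
    """Run one NPB kernel under mpirun and return its wall-clock time in seconds."""
    binary = f"{bindir}/{kernel}.{npb_class}.{nprocs}"
    t0 = time.perf_counter()
    subprocess.run(["mpirun", "-np", str(nprocs), binary], check=True)
    return time.perf_counter() - t0

for kernel in ("bt", "cg", "ep", "ft", "is", "lu", "mg"):
    print(f"{kernel}: {run_npb(kernel):.1f} s")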
11. Isolation
Isolation Benchmark Suite (source: http://web2.clarkson.edu/class/cs644/isolation/)
• The results represent how much the application's performance is impacted by different stress tests running in another VM/container
• DNR means that the application was not able to run
• All container-based systems showed some isolation impact
Stress test        LXC     OpenVZ   VServer   Xen
CPU                0       0        0         0
Memory Bomb        88.2%   89.3%    20.6%     0.9%
Disk Stress        9%      39%      48.8%     0
Fork Bomb          DNR     DNR      DNR       0
Network Receiver   2.2%    4.5%     13.6%     0.9%
Network Sender     10.3%   35.4%    8.2%      0.3%
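To give a concrete feel for the stress-test side of this methodology, below is a bounded sketch of a "memory bomb" that would run in one container while the measured workload runs in another; the degradation is then the drop against the unstressed baseline. The 4 GB cap and chunk size are assumptions added so the sketch stays bounded, and this is not the IBS code itself.

import time

CHUNK_MB = 64                # allocation step (assumption)
CAP_MB = 4096                # safety cap (assumption)

blocks = []
for _ in range(CAP_MB // CHUNK_MB):
    block = bytearray(CHUNK_MB * 1024 * 1024)
    for i in range(0, len(block), 4096):   # touch one byte per page so it stays resident
        block[i] = 1
    blocks.append(block)
    time.sleep(0.01)                       # build up the pressure gradually

print(f"holding ~{len(blocks) * CHUNK_MB} MB; run the measured workload now")
input("press Enter to release the memory")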
12. Conclusions
• All container-based systems have near-native performance for CPU, memory, disk and network
• The only resource that could be successfully isolated was CPU. All three container-based systems showed poor performance isolation for memory, disk and network
• Considering the HPC applications tested so far, LXC demonstrates to be the most suitable container-based system for HPC due to its ease of use and management