How to Design Scalable HPC, Deep Learning and Cloud
Middleware for Exascale Systems?
Dhabaleswar K. (DK) Panda
The Ohio State University
E-mail: panda@cse.ohio-state.edu
http://www.cse.ohio-state.edu/~panda
Keynote Talk at HPCAC Stanford Conference (Feb ‘19)
Increasing Usage of HPC, Big Data and Deep Learning
• Big Data (Hadoop, Spark, HBase, Memcached, etc.)
• Deep Learning (Caffe, TensorFlow, BigDL, etc.)
• HPC (MPI, RDMA, Lustre, etc.)
• Convergence of HPC, Big Data, and Deep Learning!
• Increasing need to run these applications on the Cloud!!
Can We Run Big Data and Deep Learning Jobs on Existing HPC Infrastructure?
• Spark, Hadoop, and Deep Learning jobs running side by side on the same physical compute infrastructure
HPC, Deep Learning and Cloud
• Traditional HPC
– Message Passing Interface (MPI), including MPI + OpenMP
– Exploiting Accelerators
• Deep Learning
– Caffe, CNTK, TensorFlow, and many more
• Cloud for HPC
– Virtualization with SR-IOV and Containers
Parallel Programming Models Overview
• Shared Memory Model: processes (P1, P2, P3) operate on a single shared memory – SHMEM, DSM
• Distributed Memory Model: each process has its own memory and communicates by messages – MPI (Message Passing Interface)
• Partitioned Global Address Space (PGAS): logical shared memory layered over distributed memories – Global Arrays, UPC, Chapel, X10, CAF, …
• Programming models provide abstract machine models
• Models can be mapped on different types of systems
– e.g. Distributed Shared Memory (DSM), MPI within a node, etc.
• PGAS models and hybrid MPI+PGAS models are gradually gaining importance
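To make the message-passing model above concrete, here is a minimal C sketch (a generic illustration, not code from the talk) in which rank 0 sends a value to rank 1 with standard MPI point-to-point calls; each rank owns its memory and data moves only through explicit messages.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int value = 0;
        if (rank == 0 && size > 1) {
            value = 42;                                   /* data lives only in rank 0's memory */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);        /* explicit message moved it to rank 1 */
        }

        MPI_Finalize();
        return 0;
    }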
Supporting Programming Models for Multi-Petaflop and Exaflop Systems: Challenges
• Application Kernels/Applications
• Programming Models: MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenMP, OpenACC, Cilk, Hadoop (MapReduce), Spark (RDD, DAG), etc.
• Middleware: communication library or runtime for the programming models
– Point-to-point communication, collective communication, energy-awareness, synchronization and locks, I/O and file systems, fault tolerance
• Networking Technologies (InfiniBand, 40/100GigE, Aries, and Omni-Path)
• Multi-/Many-core Architectures and Accelerators (GPU and FPGA)
• Co-design opportunities and challenges across the various layers: performance, scalability, resilience
Broad Challenges in Designing Runtimes for (MPI+X) at Exascale
• Scalability for million to billion processors
– Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)
– Scalable job start-up
– Low memory footprint
• Scalable Collective communication
– Offload
– Non-blocking
– Topology-aware
• Balancing intra-node and inter-node communication for next generation nodes (128-1024 cores)
– Multiple end-points per node
• Support for efficient multi-threading
• Integrated Support for Accelerators (GPGPUs and FPGAs)
• Fault-tolerance/resiliency
• QoS support for communication and I/O
• Support for Hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + OpenSHMEM, MPI + UPC++, CAF, …)
• Virtualization
• Energy-Awareness
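As a small illustration of the non-blocking collectives and communication/computation overlap called out above, the following hedged C sketch (generic MPI-3 code, not from the talk) posts an MPI_Iallreduce, performs independent work, and then waits for completion.

    #include <mpi.h>
    #include <stdio.h>

    /* Placeholder for computation that does not depend on the reduction result. */
    static void do_independent_work(void) { /* ... */ }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        double local = 1.0, global = 0.0;
        MPI_Request req;

        /* Post the collective without blocking (MPI-3 non-blocking collectives). */
        MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD, &req);

        do_independent_work();          /* overlap computation with the reduction */

        MPI_Wait(&req, MPI_STATUS_IGNORE);
        printf("global sum = %f\n", global);

        MPI_Finalize();
        return 0;
    }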
Overview of the MVAPICH2 Project
• High Performance open-source MPI Library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)
– MVAPICH (MPI-1), MVAPICH2 (MPI-2.2 and MPI-3.1), Started in 2001, First version available in 2002
– MVAPICH2-X (MPI + PGAS), Available since 2011
– Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), Available since 2014
– Support for Virtualization (MVAPICH2-Virt), Available since 2015
– Support for Energy-Awareness (MVAPICH2-EA), Available since 2015
– Support for InfiniBand Network Analysis and Monitoring (OSU INAM) since 2015
– Used by more than 2,950 organizations in 86 countries
– More than 522,000 (> 0.5 million) downloads from the OSU site directly
– Empowering many TOP500 clusters (Nov ‘18 ranking)
• 3rd ranked 10,649,640-core cluster (Sunway TaihuLight) at NSC, Wuxi, China
• 14th, 556,104 cores (Oakforest-PACS) in Japan
• 17th, 367,024 cores (Stampede2) at TACC
• 27th, 241,108-core (Pleiades) at NASA and many others
– Available with software stacks of many vendors and Linux Distros (RedHat, SuSE, and OpenHPC)
– http://mvapich.cse.ohio-state.edu
• Empowering Top500 systems for over a decade
Partner in the upcoming TACC Frontera System
MVAPICH2 Release Timeline and Downloads
• Chart: cumulative number of downloads from the OSU site from Sep-04 to Nov-18, annotated with release milestones from MV 0.9.4 and MV2 0.9.0 through MV2 2.3, MV2-X 2.3rc1, MV2-GDR 2.3, MV2-MIC 2.0, MV2-Virt 2.2, and OSU INAM 0.9.4
Architecture of MVAPICH2 Software Family
• High Performance Parallel Programming Models: Message Passing Interface (MPI), PGAS (UPC, OpenSHMEM, CAF, UPC++), and Hybrid MPI + X (MPI + PGAS + OpenMP/Cilk)
• High Performance and Scalable Communication Runtime with diverse APIs and mechanisms: point-to-point primitives, collectives algorithms, energy-awareness, remote memory access, I/O and file systems, fault tolerance, virtualization, active messages, job startup, introspection & analysis
• Support for modern networking technology (InfiniBand, iWARP, RoCE, Omni-Path) and modern multi-/many-core architectures (Intel-Xeon, OpenPower, Xeon-Phi, ARM, NVIDIA GPGPU)
• Transport protocols: RC, XRC, UD, DC; modern features: UMR, ODP, SR-IOV, Multi-Rail
• Transport mechanisms: Shared Memory, CMA, IVSHMEM, XPMEM; modern features: MCDRAM*, NVLink*, CAPI* (* upcoming)
MVAPICH2 Software Family
Requirements Library
MPI with IB, iWARP, Omni-Path, and RoCE MVAPICH2
Advanced MPI Features/Support, OSU INAM, PGAS and MPI+PGAS with IB, Omni-Path, and RoCE MVAPICH2-X
MPI with IB, RoCE & GPU and Support for Deep Learning MVAPICH2-GDR
HPC Cloud with MPI & IB MVAPICH2-Virt
Energy-aware MPI with IB, iWARP and RoCE MVAPICH2-EA
MPI Energy Monitoring Tool OEMT
InfiniBand Network Analysis and Monitoring OSU INAM
Microbenchmarks for Measuring MPI and PGAS Performance OMB
Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
• Scalability for million to billion processors
– Support for highly-efficient inter-node and intra-node communication
– Scalable Start-up
– Optimized Collectives using SHArP and Multi-Leaders
– Optimized CMA-based and XPMEM-based Collectives
– Asynchronous Progress
• Exploiting Accelerators (NVIDIA GPGPUs)
• Optimized MVAPICH2 for OpenPower (with/ NVLink) and ARM
• Application Scalability and Best Practices
One-way Latency: MPI over IB with MVAPICH2
• Small-message latency chart: annotated values of 1.11, 1.19, 0.98, 1.15, and 1.04 us across the five platforms below
• Large-message latency chart (latency in us vs. message size in bytes) over the same platforms
• Platforms: TrueScale-QDR, ConnectIB-Dual FDR, ConnectX-5-EDR, and Omni-Path on 3.1 GHz deca-core (Haswell) Intel nodes with PCIe Gen3; ConnectX-3-FDR on a 2.8 GHz deca-core (IvyBridge) Intel node with PCIe Gen3; each with the corresponding IB or Omni-Path switch
Bandwidth: MPI over IB with MVAPICH2
• Unidirectional bandwidth chart (MBytes/sec vs. message size): annotated peaks of 3,373, 6,356, 12,358, 12,366, and 12,590 across TrueScale-QDR, ConnectX-3-FDR, ConnectIB-Dual FDR, ConnectX-5-EDR, and Omni-Path
• Bidirectional bandwidth chart: annotated peaks of 6,228, 12,161, 21,983, 22,564, and 24,136 MBytes/sec for the same adapters
• Same Haswell/IvyBridge PCIe Gen3 platforms as in the latency results above
Startup Performance on KNL + Omni-Path
• MPI_Init and Hello World time vs. number of processes (64 to 230K) on TACC Stampede2 and Oakforest-PACS; annotated points include 22s, 57s, 5.8s, and 21s
• MPI_Init takes 22 seconds on 231,936 processes on 3,624 KNL nodes (Stampede2 – Full scale)
• At 64K processes, MPI_Init and Hello World takes 5.8s and 21s respectively (Oakforest-PACS)
• All numbers reported with 64 processes per node, MVAPICH2-2.3a
• Designs integrated with mpirun_rsh, available for srun (SLURM launcher) as well
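The startup numbers above are essentially the time spent in MPI_Init at scale. A simple way to reproduce such a measurement is sketched below (a generic example using the POSIX clock_gettime, since MPI_Wtime is unavailable before MPI_Init; this is not the actual benchmark used for these results).

    #include <mpi.h>
    #include <stdio.h>
    #include <time.h>

    /* Wall-clock seconds from a monotonic clock. */
    static double now_sec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec * 1e-9;
    }

    int main(int argc, char **argv)
    {
        double t0 = now_sec();
        MPI_Init(&argc, &argv);
        double t1 = now_sec();               /* time spent in MPI_Init on this rank */

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Reduce to the slowest rank, which is what matters for job start-up. */
        double local = t1 - t0, max_init;
        MPI_Reduce(&local, &max_init, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("max MPI_Init time: %.3f s\n", max_init);

        MPI_Finalize();
        return 0;
    }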
Cooperative Rendezvous Protocols
• Use both sender and receiver CPUs to progress communication concurrently
• Dynamically select the rendezvous protocol based on communication primitives and sender/receiver availability (load balancing)
• Up to 2x improvement in large-message latency and bandwidth
• Up to 19% improvement for Graph500 at 1536 processes; the CoMD and MiniGhost charts are annotated with 16% and 10% improvements
• Execution-time charts for Graph500, CoMD, and MiniGhost from 28 to 1536 processes, comparing MVAPICH2, Open MPI, and the proposed design
• Platform: 2x14-core Broadwell 2680 (2.4 GHz) with Mellanox EDR ConnectX-5 (100 Gbps); baselines: MVAPICH2-X 2.3rc1 and Open MPI v3.1.0
• Will be available in the upcoming MVAPICH2-X 2.3rc2
Cooperative Rendezvous Protocols for Improved Performance and Overlap, S. Chakraborty, M. Bayatpour, J. Hashmi, H. Subramoni, and D. K. Panda, SC ‘18 (Best Student Paper Award Finalist)
Benefits of SHARP Allreduce at Application Level
• Average DDOT Allreduce time of HPCG vs. (number of nodes, PPN) at (4,28), (8,28), and (16,28): up to 12% improvement with SHARP over default MVAPICH2
• SHARP support available since MVAPICH2 2.3a
• Runtime parameter MV2_ENABLE_SHARP=1 enables SHARP-based collectives (disabled by default)
• Configure flag --enable-sharp enables SHARP support at build time (disabled by default)
• Refer to Running Collectives with Hardware based SHARP support section of MVAPICH2 user guide for more information
• http://mvapich.cse.ohio-state.edu/static/media/mvapich/mvapich2-2.3-userguide.html#x1-990006.26
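For reference, the kind of call that SHARP offloads is an ordinary MPI_Allreduce. The hedged C sketch below shows a DDOT-style reduction; the comment notes the MV2_ENABLE_SHARP=1 parameter from the table above, and the code itself is a generic illustration rather than code from the talk.

    #include <mpi.h>
    #include <stdio.h>

    /* A DDOT-style reduction: each rank contributes a partial dot product and
     * all ranks need the global sum. With MVAPICH2, launching the job with
     * MV2_ENABLE_SHARP=1 (see the table above) lets allreduces like this be
     * offloaded to the switch via SHARP; the MPI code itself does not change. */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        double x[1000], y[1000], local_dot = 0.0, global_dot = 0.0;
        for (int i = 0; i < 1000; i++) { x[i] = 1.0; y[i] = 2.0; }
        for (int i = 0; i < 1000; i++) local_dot += x[i] * y[i];

        MPI_Allreduce(&local_dot, &global_dot, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) printf("global dot product = %f\n", global_dot);

        MPI_Finalize();
        return 0;
    }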
MPI_Allreduce on KNL + Omni-Path (10,240 Processes)
• OSU Micro Benchmark, 64 PPN: latency (us) vs. message size for MVAPICH2, MVAPICH2-OPT, and IMPI, plotted for 4 bytes to 4 KB (small messages) and 8 KB to 256 KB (large messages)
• For MPI_Allreduce latency with 32K bytes, MVAPICH2-OPT can reduce the latency by 2.4X
M. Bayatpour, S. Chakraborty, H. Subramoni, X. Lu, and D. K. Panda, Scalable Reduction Collectives with Data Partitioning-based
Multi-Leader Design, SuperComputing '17.
Available since MVAPICH2-X 2.3b
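The latencies above come from the OSU Micro Benchmarks. The following is a hedged, OMB-style timing loop (a sketch, not the actual osu_allreduce source) that averages MPI_Allreduce time at a single message size such as the 32 KB point highlighted above.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define ITERATIONS 1000

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        size_t count = 32 * 1024 / sizeof(float);        /* 32 KB message */
        float *sendbuf = malloc(count * sizeof(float));
        float *recvbuf = malloc(count * sizeof(float));
        for (size_t i = 0; i < count; i++) sendbuf[i] = 1.0f;

        MPI_Barrier(MPI_COMM_WORLD);
        double start = MPI_Wtime();
        for (int i = 0; i < ITERATIONS; i++)
            MPI_Allreduce(sendbuf, recvbuf, (int)count, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
        double elapsed = MPI_Wtime() - start;

        if (rank == 0)
            printf("avg MPI_Allreduce latency: %.2f us\n", 1e6 * elapsed / ITERATIONS);

        free(sendbuf); free(recvbuf);
        MPI_Finalize();
        return 0;
    }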
Optimized CMA-based Collectives for Large Messages
• Performance of MPI_Gather on KNL nodes (64 PPN): latency vs. message size (1 KB to 4 MB) on 2 nodes/128 procs, 4 nodes/256 procs, and 8 nodes/512 procs, comparing MVAPICH2-2.3a, Intel MPI 2017, OpenMPI 2.1.0, and the tuned CMA design (up to ~2.5x, ~3.2x, ~4x, and ~17x better at the annotated points)
• Significant improvement over the existing implementation for Scatter/Gather with 1 MB messages (up to 4x on KNL, 2x on Broadwell, 14x on OpenPower)
• New two-level algorithms for better scalability
• Improved performance for other collectives (Bcast, Allgather, and Alltoall)
• Available since MVAPICH2-X 2.3b
S. Chakraborty, H. Subramoni, and D. K. Panda, Contention Aware Kernel-Assisted MPI Collectives for Multi/Many-core Systems, IEEE Cluster ’17, Best Paper Finalist
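For context, the operation benchmarked above is a plain MPI_Gather of large per-rank blocks; a hedged, generic example (not from the MVAPICH2 sources) is sketched below.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Large-message MPI_Gather: every rank sends a 1 MB block and the root
     * collects the blocks contiguously, rank by rank. */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int block = 1 << 20;                        /* 1 MB per rank */
        char *sendbuf = malloc(block);
        char *recvbuf = NULL;
        if (rank == 0)
            recvbuf = malloc((size_t)block * size);       /* root needs room for all blocks */

        for (int i = 0; i < block; i++) sendbuf[i] = (char)rank;

        MPI_Gather(sendbuf, block, MPI_CHAR,
                   recvbuf, block, MPI_CHAR, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("gathered %d blocks of %d bytes\n", size, block);

        free(sendbuf);
        if (rank == 0) free(recvbuf);
        MPI_Finalize();
        return 0;
    }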
Shared Address Space (XPMEM)-based Collectives Design
• OSU_Allreduce and OSU_Reduce on Broadwell, 256 processes: latency vs. message size (16 KB to 4 MB) for MVAPICH2-2.3b, IMPI 2017v1.132, and MVAPICH2-X 2.3rc1
• “Shared address space”-based true zero-copy reduction collective designs in MVAPICH2
• Offloads computation/communication to peer ranks in the reduction collective operation
• Up to 4X improvement for 4 MB Reduce and up to 1.8X improvement for 4 MB Allreduce
• Available in MVAPICH2-X 2.3rc1
J. Hashmi, S. Chakraborty, M. Bayatpour, H. Subramoni, and D. Panda, Designing Efficient Shared Address Space Reduction Collectives for Multi-/Many-cores, International Parallel & Distributed Processing Symposium (IPDPS '18), May 2018.
Application-Level Benefits of XPMEM-Based Collectives
• MiniAMR (Broadwell, 16 PPN) and CNTK AlexNet training (Broadwell, default batch size, 50 iterations, 28 PPN): execution time vs. number of processes for Intel MPI, MVAPICH2, and MVAPICH2-XPMEM
• Up to 20% benefit over IMPI (annotated 9% over MVAPICH2) for CNTK DNN training using Allreduce
• Up to 27% benefit over IMPI and up to 15% improvement over MVAPICH2 for the MiniAMR application kernel
Efficient Zero-copy MPI Datatypes for Emerging Architectures
• New designs for efficient zero-copy-based MPI derived datatype processing
• Efficient schemes mitigate datatype translation, packing, and exchange overheads
• Demonstrated benefits over prevalent MPI libraries for various application kernels: up to 5X for a 3D-stencil datatype kernel on Broadwell (2x14 core), up to 19X for the MILC datatype kernel on KNL 7250 in flat-quadrant mode (64 cores), and up to 3X for the NAS-MG datatype kernel on OpenPOWER (20 cores), comparing MVAPICH2X-2.3, IMPI 2018/2019, and MVAPICH2X-Opt
• To be available in the upcoming MVAPICH2-X release
FALCON: Efficient Designs for Zero-copy MPI Datatype Processing on Emerging Architectures, J. Hashmi, S. Chakraborty, M. Bayatpour, H. Subramoni, D. K. (DK) Panda, 33rd IEEE International Parallel & Distributed Processing Symposium (IPDPS ’19), May 2019.
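To illustrate what "derived datatype processing" refers to, here is a hedged, generic C sketch (not the FALCON design itself) that sends one non-contiguous column of a matrix using MPI_Type_vector, leaving the packing/unpacking work to the MPI library.

    #include <mpi.h>
    #include <stdio.h>

    #define ROWS 8
    #define COLS 8

    /* Sends one column of a ROWS x COLS row-major matrix as a single message
     * using a derived datatype, so the library handles the non-contiguous
     * layout (the packing work the zero-copy designs above try to eliminate). */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double a[ROWS][COLS];
        MPI_Datatype column;
        MPI_Type_vector(ROWS, 1, COLS, MPI_DOUBLE, &column);   /* ROWS blocks of 1, stride COLS */
        MPI_Type_commit(&column);

        if (rank == 0) {
            for (int i = 0; i < ROWS; i++)
                for (int j = 0; j < COLS; j++)
                    a[i][j] = i * COLS + j;
            MPI_Send(&a[0][2], 1, column, 1, 0, MPI_COMM_WORLD);  /* send column 2 */
        } else if (rank == 1) {
            MPI_Recv(&a[0][2], 1, column, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received column 2, a[3][2] = %g\n", a[3][2]);
        }

        MPI_Type_free(&column);
        MPI_Finalize();
        return 0;
    }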
Benefits of the New Asynchronous Progress Design: Broadwell + InfiniBand
• P3DFFT (time per loop, lower is better) and High Performance Linpack (GFLOPS, higher is better) at 112-896 processes, 28 PPN, comparing MVAPICH2 Async, MVAPICH2 Default, and IMPI 2019 (Default/Async)
• Up to 33% performance improvement in the P3DFFT application with 448 processes
• Up to 29% performance improvement in the HPL application with 896 processes
• Memory consumption = 69%
• Available in MVAPICH2-X 2.3rc1
A. Ruhela, H. Subramoni, S. Chakraborty, M. Bayatpour, P. Kousha, and D.K. Panda, Efficient Asynchronous Communication Progress for MPI without Dedicated Resources, EuroMPI 2018
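The application-side pattern that benefits from asynchronous progress is non-blocking communication overlapped with computation. The hedged C sketch below (a generic halo exchange on a ring, not code from the paper above) shows that pattern; runtime-side asynchronous progress lets the transfers advance while the compute routine runs.

    #include <mpi.h>
    #include <stdio.h>

    #define N 4096

    static void compute_interior(double *u, int n) { for (int i = 1; i < n - 1; i++) u[i] *= 0.5; }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        int left  = (rank - 1 + size) % size;      /* ring neighbors */
        int right = (rank + 1) % size;

        static double u[N];
        for (int i = 0; i < N; i++) u[i] = rank;

        /* Post the halo exchange without blocking, compute on interior data,
         * then wait for completion. */
        MPI_Request reqs[4];
        MPI_Irecv(&u[0],     1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Irecv(&u[N - 1], 1, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &reqs[1]);
        MPI_Isend(&u[1],     1, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &reqs[2]);
        MPI_Isend(&u[N - 2], 1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[3]);

        compute_interior(u, N);                    /* overlapped with communication */

        MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);
        if (rank == 0) printf("halo exchange complete, u[0] = %g\n", u[0]);

        MPI_Finalize();
        return 0;
    }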
Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
• Scalability for million to billion processors
• Exploiting Accelerators (NVIDIA GPGPUs)
• Optimized MVAPICH2 for OpenPower (with/ NVLink) and ARM
• Application Scalability and Best Practices
GPU-Aware (CUDA-Aware) MPI Library: MVAPICH2-GPU
• At Sender: MPI_Send(s_devbuf, size, …);
• At Receiver: MPI_Recv(r_devbuf, size, …);
• The GPU data movement is handled inside MVAPICH2
• Standard MPI interfaces used for unified data movement
• Takes advantage of Unified Virtual Addressing (>= CUDA 4.0)
• Overlaps data movement from GPU with RDMA transfers
• High Performance and High Productivity
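Expanding the device-buffer MPI_Send/MPI_Recv shown above, the hedged sketch below passes cudaMalloc'd pointers directly to MPI. It assumes a CUDA-aware MPI build such as MVAPICH2-GDR and is a generic illustration rather than code from the talk; compile with mpicc and link against the CUDA runtime.

    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <stdio.h>

    #define COUNT (1 << 20)

    /* A CUDA-aware MPI accepts GPU device pointers directly and moves the
     * data between GPU memories without an explicit staging copy in the app. */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        float *devbuf;
        cudaMalloc((void **)&devbuf, COUNT * sizeof(float));

        if (rank == 0) {
            cudaMemset(devbuf, 0, COUNT * sizeof(float));
            MPI_Send(devbuf, COUNT, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);   /* s_devbuf */
        } else if (rank == 1) {
            MPI_Recv(devbuf, COUNT, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,    /* r_devbuf */
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d floats into GPU memory\n", COUNT);
        }

        cudaFree(devbuf);
        MPI_Finalize();
        return 0;
    }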
CUDA-Aware MPI: MVAPICH2-GDR 1.8-2.3 Releases
• Support for MPI communication from NVIDIA GPU device memory
• High performance RDMA-based inter-node point-to-point communication
(GPU-GPU, GPU-Host and Host-GPU)
• High performance intra-node point-to-point communication for multi-GPU
adapters/node (GPU-GPU, GPU-Host and Host-GPU)
• Taking advantage of CUDA IPC (available since CUDA 4.1) in intra-node
communication for multiple GPU adapters/node
• Optimized and tuned collectives for GPU device buffers
• MPI datatype support for point-to-point and collective communication from
GPU device buffers
• Unified memory
Optimized MVAPICH2-GDR Design
• GPU-GPU inter-node latency as low as 1.85 us with MVAPICH2-GDR 2.3 (annotated 11X improvement over MV2 without GDR)
• GPU-GPU inter-node bandwidth and bi-directional bandwidth charts (1 byte to 4-8 KB shown), annotated with 9x and 10x improvements of MV2-GDR-2.3 over MV2 (NO-GDR)
• Platform: MVAPICH2-GDR 2.3, Intel Haswell (E5-2687W @ 3.10 GHz) node with 20 cores, NVIDIA Volta V100 GPU, Mellanox Connect-X4 EDR HCA, CUDA 9.0, Mellanox OFED 4.0 with GPU-Direct-RDMA
Application-Level Evaluation (HOOMD-blue)
• Average time steps per second (TPS) for 64K and 256K particles at 4-32 processes: MV2+GDR delivers up to 2X higher TPS than MV2
• Platform: Wilkes (Intel Ivy Bridge + NVIDIA Tesla K20c + Mellanox Connect-IB); HOOMD-blue version 1.0.5
• GDRCOPY enabled: MV2_USE_CUDA=1 MV2_IBA_HCA=mlx5_0 MV2_IBA_EAGER_THRESHOLD=32768 MV2_VBUF_TOTAL_SIZE=32768 MV2_USE_GPUDIRECT_LOOPBACK_LIMIT=32768 MV2_USE_GPUDIRECT_GDRCOPY=1 MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=16384
Application-Level Evaluation (COSMO) and Weather Forecasting in Switzerland
• Normalized execution time on the Wilkes GPU cluster (4-32 GPUs) and the CSCS GPU cluster (16-96 GPUs), comparing default, callback-based, and event-based designs
• 2X improvement on 32 GPU nodes
• 30% improvement on 96 GPU nodes (8 GPUs/node)
• On-going collaboration with CSCS and MeteoSwiss (Switzerland) in co-designing MV2-GDR and the COSMO application
• COSMO model: http://www2.cosmo-model.org/content/tasks/operational/meteoSwiss/
C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee, H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non-Contiguous Data Movement Processing on Modern GPU-enabled Systems, IPDPS’16
MVAPICH2-GDR: Enhanced Derived Datatype Processing
• Kernel-based and GDRCOPY-based one-shot packing for inter-socket and inter-node communication
• Zero-copy (packing-free) for GPUs with peer-to-peer direct access over PCIe/NVLink
• GPU-based DDTBench mimicking the MILC communication kernel: speedup over problem sizes [6,8,8,8,8] to [6,16,16,16,16] for OpenMPI 4.0.0, MVAPICH2-GDR 2.3, and MVAPICH2-GDR-Next on an Nvidia DGX-2 (16 Volta GPUs connected with NVSwitch, CUDA 9.2)
• Communication kernel of the COSMO model: execution time on 8-64 GPUs, with MVAPICH2-GDR-Next improved by ~10X over MVAPICH2-GDR 2.3 on a Cray CS-Storm (8 NVIDIA Tesla K80 GPUs per node, CUDA 8.0)
Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
• Scalability for million to billion processors
• Exploiting Accelerators (NVIDIA GPGPUs)
• Optimized MVAPICH2 for OpenPower (with/ NVLink) and ARM
• Application Scalability and Best Practices
Intra-node Point-to-Point Performance on OpenPower
• Intra-socket small-message latency: 0.30 us with MVAPICH2-2.3, compared with SpectrumMPI-10.1.0.2 and OpenMPI-3.0.0
• Intra-socket large-message latency, bandwidth, and bi-directional bandwidth charts for the same three libraries (messages up to 2 MB)
• Platform: two OpenPOWER (Power8-ppc64le) nodes with a Mellanox EDR (MT4115) HCA
MVAPICH2-GDR: Performance on OpenPOWER (NVLink + Volta)
• Intra-node latency: 5.77 us (without GDRCopy); intra-node bandwidth: 63.7 GB/sec (NVLink)
• Inter-node latency: 5.77 us (without GDRCopy); inter-node bandwidth: 23.7 GB/sec (2-port EDR)
• Charts: intra-node and inter-node latency (small and large messages) and bandwidth, with intra-socket and inter-socket curves
• Platform: OpenPOWER (ppc64le) nodes equipped with a dual-socket CPU, 4 Volta V100-SXM2 GPUs, and 2-port EDR InfiniBand interconnect
• Available since MVAPICH2-GDR 2.3a
Optimized Allreduce with XPMEM on OpenPOWER
• MPI_Allreduce latency (16 KB to 2 MB) for MVAPICH2-GDR-Next, SpectrumMPI-10.1.0, and OpenMPI-3.0.0 at (Nodes=1, PPN=20) and (Nodes=2, PPN=20); annotated gains of 2X, 3X, 3.3X, 4X, 34%, and 48% at various message sizes
• Optimized MPI Allreduce design in MVAPICH2: up to 2X performance improvement over Spectrum MPI and 4X over OpenMPI for intra-node
• Optimized runtime parameters: MV2_CPU_BINDING_POLICY=hybrid MV2_HYBRID_BINDING_POLICY=bunch
Intra-node Point-to-point Performance on ARM Cortex-A72
• Small-message latency with MVAPICH2-2.3: 0.27 us (1 byte)
• Large-message latency, bandwidth, and bi-directional bandwidth charts with MVAPICH2-2.3
• Platform: ARM Cortex-A72 (aarch64) dual-socket CPU with 64 cores (32 cores per socket)
Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
• Scalability for million to billion processors
• Exploiting Accelerators (NVIDIA GPGPUs)
• Optimized MVAPICH2 for OpenPower (with/ NVLink) and ARM
• Application Scalability and Best Practices
SPEC MPI 2007 Benchmarks: Broadwell + InfiniBand
• Execution time for MILC, Leslie3D, POP2, LAMMPS, WRF2, and LU with Intel MPI 18.1.163 vs. MVAPICH2-X 2.3rc1
• MVAPICH2-X outperforms Intel MPI by up to 31% (annotated per-benchmark deltas: 31%, 29%, 5%, -12%, 1%, 11%)
• Configuration: 448 processes on 16 Intel E5-2680v4 (Broadwell) nodes with 28 PPN, interconnected with 100Gbps Mellanox MT4115 EDR ConnectX-4 HCAs
Application Scalability on Skylake and KNL (Stampede2)
• MiniFE (1300x1300x1300, ~910 GB): execution time with MVAPICH2 at 2048-8192 processes on KNL (64 PPN) and Skylake (48 PPN)
• NEURON (YuEtAl2012) and Cloverleaf (bm64; MPI+OpenMP, NUM_OMP_THREADS=2): execution-time scaling charts with MVAPICH2 on Skylake (48 PPN) and KNL (64/68 PPN) up to several thousand processes
• Runtime parameters: MV2_SMPI_LENGTH_QUEUE=524288 PSM2_MQ_RNDV_SHM_THRESH=128K PSM2_MQ_RNDV_HFI_THRESH=128K
• Courtesy: Mahidhar Tatineni @SDSC, Dong Ju (DJ) Choi @SDSC, and Samuel Khuvis @OSC; testbed: TACC Stampede2 using MVAPICH2-2.3b
Applications-Level Tuning: Compilation of Best Practices
• MPI runtime has many parameters
• Tuning a set of parameters can help you to extract higher performance
• Compiled a list of such contributions through the MVAPICH Website
– http://mvapich.cse.ohio-state.edu/best_practices/
• Initial list of applications
– Amber
– HoomDBlue
– HPCG
– Lulesh
– MILC
– Neuron
– SMG2000
– Cloverleaf
– SPEC (LAMMPS, POP2, TERA_TF, WRF2)
• Soliciting additional contributions, send your results to mvapich-help at cse.ohio-state.edu.
• We will link these results with credits to you.
HPC, Deep Learning and Cloud
• Traditional HPC
– Message Passing Interface (MPI), including MPI + OpenMP
– Exploiting Accelerators
• Deep Learning
– Caffe, CNTK, TensorFlow, and many more
• Cloud for HPC and BigData
– Virtualization with SR-IOV and Containers
Deep Learning: New Challenges for MPI Runtimes
• Deep Learning frameworks are a different game altogether
– Unusually large message sizes (order of megabytes)
– Most communication based on GPU buffers
• Existing state-of-the-art
– cuDNN, cuBLAS, NCCL --> scale-up performance
– NCCL2, CUDA-Aware MPI --> scale-out performance (for small and medium message sizes only!)
• Proposed: can we co-design the MPI runtime (MVAPICH2-GDR) and the DL framework (Caffe) to achieve both?
– Efficient overlap of computation and communication
– Efficient large-message communication (reductions)
– What application co-designs are needed to exploit communication-runtime co-designs?
• Scale-up vs. scale-out performance chart positioning cuDNN, MKL-DNN, NCCL1/NCCL2, MPI, gRPC, Hadoop, and the desired design point
A. A. Awan, K. Hamidouche, J. M. Hashmi, and D. K. Panda, S-Caffe: Co-designing MPI Runtimes and Caffe for Scalable Deep Learning on Modern GPU Clusters. In Proceedings of the 22nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '17)
Exploiting CUDA-Aware MPI for TensorFlow (Horovod)
• Images/second (higher is better) on 1-16 GPUs for Horovod-MPI, Horovod-NCCL2, Horovod-MPI-Opt (proposed), and the ideal curve
• MVAPICH2-GDR offers excellent performance via advanced designs for MPI_Allreduce
• Up to 11% better performance on the RI2 cluster (16 GPUs): MVAPICH2-GDR 2.3 (MPI-Opt) is up to 11% faster than MVAPICH2 2.3 (basic CUDA support)
• Near-ideal scaling: 98% scaling efficiency
A. A. Awan et al., “Scalable Distributed DNN Training using TensorFlow and CUDA-Aware MPI: Characterization, Designs, and Performance Evaluation”, Under Review, https://arxiv.org/abs/1810.11112
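At its core, Horovod-style data-parallel training issues an allreduce over a flat gradient buffer each step. The hedged C sketch below (a generic illustration, not Horovod's implementation) shows that operation, which is exactly what the MPI_Allreduce designs above accelerate; with GPU-resident gradients, a CUDA-aware MPI accepts device pointers directly.

    #include <mpi.h>
    #include <stdlib.h>

    /* Data-parallel training step (sketch): each rank computes gradients on its
     * mini-batch, then all ranks average them with a single MPI_Allreduce. */
    void average_gradients(float *grads, int count, MPI_Comm comm)
    {
        int world_size;
        MPI_Comm_size(comm, &world_size);

        /* Sum gradients across ranks in place, then scale to the average. */
        MPI_Allreduce(MPI_IN_PLACE, grads, count, MPI_FLOAT, MPI_SUM, comm);
        for (int i = 0; i < count; i++)
            grads[i] /= (float)world_size;
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int count = 1 << 22;                       /* e.g., ~16 MB of float gradients */
        float *grads = calloc(count, sizeof(float));

        /* ... a forward/backward pass would fill grads here ... */
        average_gradients(grads, count, MPI_COMM_WORLD);

        free(grads);
        MPI_Finalize();
        return 0;
    }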
MVAPICH2-GDR: Allreduce Comparison with Baidu and OpenMPI
• 16 GPUs (4 nodes): MVAPICH2-GDR vs. Baidu-allreduce and OpenMPI 3.0, latency vs. message size from 4 bytes to 512 MB
• Small and medium messages: MVAPICH2-GDR is ~30X better than OpenMPI and ~2X better than Baidu
• Large messages: ~10X and ~4X better at the annotated points; OpenMPI is ~5X slower than Baidu
• Available since MVAPICH2-GDR 2.3a
MVAPICH2-GDR vs. NCCL2 – Allreduce Operation
• MPI_Allreduce (MVAPICH2-GDR) vs. ncclAllreduce (NCCL2) on 16 GPUs: latency vs. message size, with MVAPICH2-GDR ~3X better for small messages and ~1.2X better for large messages at the annotated points
• Optimized designs in MVAPICH2-GDR 2.3 offer better/comparable performance for most cases
• Platform: Intel Xeon (Broadwell) nodes equipped with a dual-socket CPU, one K-80 GPU, and EDR InfiniBand interconnect
MVAPICH2-GDR vs. NCCL2 – Allreduce Operation (DGX-2)
• MPI_Allreduce (MVAPICH2-GDR-Next) vs. ncclAllreduce (NCCL-2.3) on one DGX-2 node (16 Volta GPUs): ~2.5X better for small messages (8 bytes to 128 KB) and ~1.7X better for large messages at the annotated points
• Optimized designs in the upcoming MVAPICH2-GDR offer better/comparable performance for most cases
• Platform: Nvidia DGX-2 system (16 Nvidia Volta GPUs connected with NVSwitch), CUDA 9.2
MVAPICH2-GDR vs. NCCL2 – ResNet-50 Training
• ResNet-50 training using the TensorFlow benchmark on one DGX-2 node (8 Volta GPUs): images/second and scaling efficiency on 1-8 GPUs for NCCL-2.3 vs. MVAPICH2-GDR-Next
• MVAPICH2-GDR-Next delivers up to 7.5% higher throughput
• Scaling efficiency = (actual throughput / ideal throughput at scale) x 100%
• Platform: Nvidia DGX-2 system (16 Nvidia Volta GPUs connected with NVSwitch), CUDA 9.2
OSU-Caffe: Scalable Deep Learning
• Caffe: a flexible and layered Deep Learning framework
• Benefits and weaknesses
– Multi-GPU training within a single node
– Performance degradation for GPUs across different sockets
– Limited scale-out
• OSU-Caffe: MPI-based parallel training
– Enables scale-up (within a node) and scale-out (across multi-GPU nodes)
– Scale-out on 64 GPUs for training the CIFAR-10 network on the CIFAR-10 dataset
– Scale-out on 128 GPUs for training the GoogLeNet network on the ImageNet dataset
• GoogLeNet (ImageNet) training time on 8-128 GPUs for Caffe, OSU-Caffe (1024), and OSU-Caffe (2048); plain Caffe is marked as an invalid use case at this scale
• OSU-Caffe publicly available from http://hidl.cse.ohio-state.edu/
RDMA-TensorFlow Distribution
• High-Performance Design of TensorFlow over RDMA-enabled Interconnects
– High performance RDMA-enhanced design with native InfiniBand support at the verbs-level for gRPC and TensorFlow
– RDMA-based data communication
– Adaptive communication protocols
– Dynamic message chunking and accumulation
– Support for RDMA device selection
– Easily configurable for different protocols (native InfiniBand and IPoIB)
• Current release: 0.9.1
– Based on Google TensorFlow 1.3.0
– Tested with
• Mellanox InfiniBand adapters (e.g., EDR)
• NVIDIA GPGPU K80
• Tested with CUDA 8.0 and CUDNN 5.0
– http://hidl.cse.ohio-state.edu
Performance Benefit for RDMA-TensorFlow (Inception3)
• TensorFlow Inception3 images/second at batch sizes 16, 32, and 64 on 4 nodes (8 GPUs), 8 nodes (16 GPUs), and 12 nodes (24 GPUs), comparing default gRPC (IPoIB-100Gbps), Verbs (RDMA-100Gbps), MPI (RDMA-100Gbps), and AR-gRPC (RDMA-100Gbps)
• Up to 20% performance speedup over default gRPC (IPoIB) for 8 GPUs
• Up to 34% performance speedup over default gRPC (IPoIB) for 16 GPUs
• Up to 37% performance speedup over default gRPC (IPoIB) for 24 GPUs
HPC, Deep Learning and Cloud
• Traditional HPC
– Message Passing Interface (MPI), including MPI + OpenMP
– Exploiting Accelerators
• Deep Learning
– Caffe, CNTK, TensorFlow, and many more
• Cloud for HPC and BigData
– Virtualization with SR-IOV and Containers
Can HPC and Virtualization be Combined?
• Virtualization has many benefits
– Fault-tolerance
– Job migration
– Compaction
• Has not been very popular in HPC due to the overhead associated with virtualization
• New SR-IOV (Single Root I/O Virtualization) support available with Mellanox InfiniBand adapters changes the field
• Enhanced MVAPICH2 support for SR-IOV
• MVAPICH2-Virt 2.2 supports OpenStack, Docker, and Singularity
J. Zhang, X. Lu, J. Jose, R. Shi and D. K. Panda, Can Inter-VM Shmem Benefit MPI Applications on SR-IOV based Virtualized InfiniBand Clusters? EuroPar'14
J. Zhang, X. Lu, J. Jose, M. Li, R. Shi and D. K. Panda, High Performance MPI Library over SR-IOV enabled InfiniBand Clusters, HiPC’14
J. Zhang, X. Lu, M. Arnold and D. K. Panda, MVAPICH2 Over OpenStack with SR-IOV: an Efficient Approach to build HPC Clouds, CCGrid’15
Application-Level Performance on Chameleon
• Graph500 (execution time in ms vs. problem size (Scale, Edgefactor)) and SPEC MPI 2007 (milc, leslie3d, pop2, GAPgeofem, zeusmp2, lu) with MV2-SR-IOV-Def, MV2-SR-IOV-Opt, and MV2-Native
• 32 VMs, 6 cores/VM
• Compared to native, 2-5% overhead for Graph500 with 128 processes
• Compared to native, 1-9.5% overhead for SPEC MPI 2007 with 128 processes
• A release for Azure coming soon
Application-Level Performance on Singularity with MVAPICH2
• Graph500 BFS execution time (problem sizes (22,16) to (26,20)) and NPB Class D (CG, EP, FT, IS, LU, MG) execution time, Singularity vs. native
• 512 processes across 32 nodes
• Less than 7% and 6% overhead for NPB and Graph500, respectively
J. Zhang, X. Lu and D. K. Panda, Is Singularity-based Container Technology Ready for Running MPI Applications on HPC Clouds?, UCC ’17, Best Student Paper Award
MVAPICH2-GDR on Container with Negligible Overhead
• GPU-GPU inter-node latency, bandwidth, and bi-directional bandwidth: Docker vs. native curves are nearly identical
• Platform: MVAPICH2-GDR 2.3a, Intel Haswell (E5-2687W @ 3.10 GHz) node with 20 cores, NVIDIA Volta V100 GPU, Mellanox Connect-X4 EDR HCA, CUDA 9.0, Mellanox OFED 4.0 with GPU-Direct-RDMA
• Works with NVIDIA HPC Container Maker
• https://github.com/NVIDIA/hpc-container-maker/blob/master/recipes/hpcbase-pgi-mvapich2.py
Commercial Support for MVAPICH2, HiBD, and HiDL Libraries
• Supported through X-ScaleSolutions (http://x-scalesolutions.com)
• Benefits:
– Help and guidance with installation of the library
– Platform-specific optimizations and tuning
– Timely support for operational issues encountered with the library
– Web portal interface to submit issues and track their progress
– Advanced debugging techniques
– Application-specific optimizations and tuning
– Obtaining guidelines on best practices
– Periodic information on major fixes and updates
– Information on major releases
– Help with upgrading to the latest release
– Flexible Service Level Agreements
• Support provided to Lawrence Livermore National Laboratory (LLNL) for the last two years
Funding Acknowledgments
• Funding support and equipment support provided by multiple sponsors (sponsor logos shown on the original slide)
Personnel Acknowledgments
Current Students (Graduate)
– A. Awan (Ph.D.)
– M. Bayatpour (Ph.D.)
– S. Chakraborthy (Ph.D.)
– C.-H. Chu (Ph.D.)
– S. Guganani (Ph.D.)
Past Students
– A. Augustine (M.S.)
– P. Balaji (Ph.D.)
– R. Biswas (M.S.)
– S. Bhagvat (M.S.)
– A. Bhat (M.S.)
– D. Buntinas (Ph.D.)
– L. Chai (Ph.D.)
– B. Chandrasekharan (M.S.)
– N. Dandapanthula (M.S.)
– V. Dhanraj (M.S.)
– T. Gangadharappa (M.S.)
– K. Gopalakrishnan (M.S.)
– R. Rajachandrasekar (Ph.D.)
– G. Santhanaraman (Ph.D.)
– A. Singh (Ph.D.)
– J. Sridhar (M.S.)
– S. Sur (Ph.D.)
– H. Subramoni (Ph.D.)
– K. Vaidyanathan (Ph.D.)
– A. Vishnu (Ph.D.)
– J. Wu (Ph.D.)
– W. Yu (Ph.D.)
– J. Zhang (Ph.D.)
Past Research Scientist
– K. Hamidouche
– S. Sur
Past Post-Docs
– D. Banerjee
– X. Besseron
– H.-W. Jin
– W. Huang (Ph.D.)
– W. Jiang (M.S.)
– J. Jose (Ph.D.)
– S. Kini (M.S.)
– M. Koop (Ph.D.)
– K. Kulkarni (M.S.)
– R. Kumar (M.S.)
– S. Krishnamoorthy (M.S.)
– K. Kandalla (Ph.D.)
– M. Li (Ph.D.)
– P. Lai (M.S.)
– J. Liu (Ph.D.)
– M. Luo (Ph.D.)
– A. Mamidala (Ph.D.)
– G. Marsh (M.S.)
– V. Meshram (M.S.)
– A. Moody (M.S.)
– S. Naravula (Ph.D.)
– R. Noronha (Ph.D.)
– X. Ouyang (Ph.D.)
– S. Pai (M.S.)
– S. Potluri (Ph.D.)
– J. Hashmi (Ph.D.)
– A. Jain (Ph.D.)
– K. S. Khorassani (Ph.D.)
– P. Kousha (Ph.D.)
– D. Shankar (Ph.D.)
– J. Lin
– M. Luo
– E. Mancini
Current Research Asst. Professor
– X. Lu
Past Programmers
– D. Bureddy
– J. Perkins
Current Research Specialist
– J. Smith
– S. Marcarelli
– J. Vienne
– H. Wang
Current Post-doc
– A. Ruhela
– K. Manian
Current Students (Undergraduate)
– V. Gangal (B.S.)
– M. Haupt (B.S.)
– N. Sarkauskas (B.S.)
– A. Yeretzian (B.S.)
Past Research Specialist
– M. Arnold
Current Research Scientist
– H. Subramoni
Thank You!
Network-Based Computing Laboratory
http://nowlab.cse.ohio-state.edu/
panda@cse.ohio-state.edu
The High-Performance MPI/PGAS Project
http://mvapich.cse.ohio-state.edu/
The High-Performance Deep Learning Project
http://hidl.cse.ohio-state.edu/
The High-Performance Big Data Project
http://hibd.cse.ohio-state.edu/
Strategies for Landing an Oracle DBA Job as a Fresher
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 
Evaluating the top large language models.pdf
Evaluating the top large language models.pdfEvaluating the top large language models.pdf
Evaluating the top large language models.pdf
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdf
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024
 

How to Design Scalable HPC, Deep Learning, and Cloud Middleware for Exascale Systems

  • 10. HPCAC-Stanford (Feb ‘19) 10Network Based Computing Laboratory • Scalability for million to billion processors – Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided) – Scalable job start-up – Low memory footprint • Scalable Collective communication – Offload – Non-blocking – Topology-aware • Balancing intra-node and inter-node communication for next generation nodes (128-1024 cores) – Multiple end-points per node • Support for efficient multi-threading • Integrated Support for Accelerators (GPGPUs and FPGAs) • Fault-tolerance/resiliency • QoS support for communication and I/O • Support for Hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + OpenSHMEM, MPI+UPC++, CAF, …) • Virtualization • Energy-Awareness Broad Challenges in Designing Runtimes for (MPI+X) at Exascale
  • 11. HPCAC-Stanford (Feb ‘19) 11Network Based Computing Laboratory Overview of the MVAPICH2 Project • High Performance open-source MPI Library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE) – MVAPICH (MPI-1), MVAPICH2 (MPI-2.2 and MPI-3.1), Started in 2001, First version available in 2002 – MVAPICH2-X (MPI + PGAS), Available since 2011 – Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), Available since 2014 – Support for Virtualization (MVAPICH2-Virt), Available since 2015 – Support for Energy-Awareness (MVAPICH2-EA), Available since 2015 – Support for InfiniBand Network Analysis and Monitoring (OSU INAM) since 2015 – Used by more than 2,950 organizations in 86 countries – More than 522,000 (> 0.5 million) downloads from the OSU site directly – Empowering many TOP500 clusters (Nov ‘18 ranking) • 3rd ranked 10,649,640-core cluster (Sunway TaihuLight) at NSC, Wuxi, China • 14th, 556,104 cores (Oakforest-PACS) in Japan • 17th, 367,024 cores (Stampede2) at TACC • 27th, 241,108-core (Pleiades) at NASA and many others – Available with software stacks of many vendors and Linux Distros (RedHat, SuSE, and OpenHPC) – http://mvapich.cse.ohio-state.edu • Empowering Top500 systems for over a decade Partner in the upcoming TACC Frontera System
  • 12. HPCAC-Stanford (Feb ‘19) 12 Network Based Computing Laboratory MVAPICH2 Release Timeline and Downloads [chart: cumulative number of downloads (0–600,000) versus timeline from Sep 2004 to Nov 2018, annotated with releases from MV 0.9.4/MV2 0.9.0 through MV2 2.3, MV2-X 2.3rc1, MV2-Virt 2.2, MV2-GDR 2.3, MV2-MIC 2.0, and OSU INAM 0.9.4]
  • 13. HPCAC-Stanford (Feb ‘19) 13Network Based Computing Laboratory Architecture of MVAPICH2 Software Family High Performance Parallel Programming Models Message Passing Interface (MPI) PGAS (UPC, OpenSHMEM, CAF, UPC++) Hybrid --- MPI + X (MPI + PGAS + OpenMP/Cilk) High Performance and Scalable Communication Runtime Diverse APIs and Mechanisms Point-to- point Primitives Collectives Algorithms Energy- Awareness Remote Memory Access I/O and File Systems Fault Tolerance Virtualization Active Messages Job Startup Introspection & Analysis Support for Modern Networking Technology (InfiniBand, iWARP, RoCE, Omni-Path) Support for Modern Multi-/Many-core Architectures (Intel-Xeon, OpenPower, Xeon-Phi, ARM, NVIDIA GPGPU) Transport Protocols Modern Features RC XRC UD DC UMR ODP SR- IOV Multi Rail Transport Mechanisms Shared Memory CMA IVSHMEM Modern Features MCDRAM* NVLink* CAPI* * Upcoming XPMEM
  • 14. HPCAC-Stanford (Feb ‘19) 14Network Based Computing Laboratory MVAPICH2 Software Family Requirements Library MPI with IB, iWARP, Omni-Path, and RoCE MVAPICH2 Advanced MPI Features/Support, OSU INAM, PGAS and MPI+PGAS with IB, Omni-Path, and RoCE MVAPICH2-X MPI with IB, RoCE & GPU and Support for Deep Learning MVAPICH2-GDR HPC Cloud with MPI & IB MVAPICH2-Virt Energy-aware MPI with IB, iWARP and RoCE MVAPICH2-EA MPI Energy Monitoring Tool OEMT InfiniBand Network Analysis and Monitoring OSU INAM Microbenchmarks for Measuring MPI and PGAS Performance OMB
  • 15. HPCAC-Stanford (Feb ‘19) 15Network Based Computing Laboratory • Scalability for million to billion processors – Support for highly-efficient inter-node and intra-node communication – Scalable Start-up – Optimized Collectives using SHArP and Multi-Leaders – Optimized CMA-based and XPMEM-based Collectives – Asynchronous Progress • Exploiting Accelerators (NVIDIA GPGPUs) • Optimized MVAPICH2 for OpenPower (with/ NVLink) and ARM • Application Scalability and Best Practices Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
  • 16. HPCAC-Stanford (Feb ‘19) 16 Network Based Computing Laboratory One-way Latency: MPI over IB with MVAPICH2 [charts: small- and large-message latency versus message size; small-message latencies of 0.98–1.19 us across the five configurations]. Platforms: TrueScale-QDR, ConnectX-3-FDR, ConnectIB-Dual FDR, ConnectX-5-EDR, and Omni-Path — 3.1 GHz deca-core (Haswell) Intel nodes with PCI Gen3 (2.8 GHz deca-core IvyBridge for ConnectX-3-FDR), each with the corresponding IB or Omni-Path switch
  • 17. HPCAC-Stanford (Feb ‘19) 17 Network Based Computing Laboratory Bandwidth: MPI over IB with MVAPICH2 [charts: unidirectional bandwidth up to 12,590 MB/s and bidirectional bandwidth up to 24,136 MB/s for TrueScale-QDR, ConnectX-3-FDR, ConnectIB-Dual FDR, ConnectX-5-EDR, and Omni-Path, on the same platforms as the latency results]
  • 18. HPCAC-Stanford (Feb ‘19) 18Network Based Computing Laboratory 0 20 40 60 80 64 128 256 512 1K 2K 4K 8K 16K 32K 64K 128K 180K 230K Time(seconds) Number of Processes TACC Stampede2 MPI_Init Hello World Startup Performance on KNL + Omni-Path 0 5 10 15 20 25 64 128 256 512 1K 2K 4K 8K 16K 32K 64K Time(seconds) Number of Processes Oakforest-PACS MPI_Init Hello World 22s 5.8s 21s 57s • MPI_Init takes 22 seconds on 231,936 processes on 3,624 KNL nodes (Stampede2 – Full scale) • At 64K processes, MPI_Init and Hello World takes 5.8s and 21s respectively (Oakforest-PACS) • All numbers reported with 64 processes per node, MVAPICH2-2.3a • Designs integrated with mpirun_rsh, available for srun (SLURM launcher) as well
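
A minimal sketch of the kind of MPI_Init / "Hello World" timing program these startup measurements refer to (an assumed shape, not the exact benchmark behind the slide); as noted above, such a job can be launched with mpirun_rsh or with srun:

```c
/* hello_init.c: minimal sketch of an MPI_Init / "Hello World" startup
 * measurement (assumed benchmark shape). Build: mpicc hello_init.c -o hello_init */
#include <mpi.h>
#include <stdio.h>
#include <sys/time.h>

static double now(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec * 1e-6;
}

int main(int argc, char **argv)
{
    double t0 = now();
    MPI_Init(&argc, &argv);
    double t_init = now() - t0;        /* per-process MPI_Init time */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double t_max;                      /* slowest MPI_Init across ranks */
    MPI_Reduce(&t_init, &t_max, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("Hello World from %d processes; max MPI_Init = %.2f s\n",
               size, t_max);

    MPI_Finalize();
    return 0;
}
```

An illustrative launch is "mpirun_rsh -np 64 -hostfile hosts ./hello_init"; exact flags depend on the site setup.
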
  • 19. HPCAC-Stanford (Feb ‘19) 19Network Based Computing Laboratory Cooperative Rendezvous Protocols Platform: 2x14 core Broadwell 2680 (2.4 GHz) Mellanox EDR ConnectX-5 (100 GBps) Baseline: MVAPICH2X-2.3rc1, Open MPI v3.1.0 Cooperative Rendezvous Protocols for Improved Performance and Overlap S. Chakraborty, M. Bayatpour,, J Hashmi, H. Subramoni, and DK Panda, SC ‘18 (Best Student Paper Award Finalist) 19% 16% 10% • Use both sender and receiver CPUs to progress communication concurrently • Dynamically select rendezvous protocol based on communication primitives and sender/receiver availability (load balancing) • Up to 2x improvement in large message latency and bandwidth • Up to 19% improvement for Graph500 at 1536 processes 0 500 1000 1500 28 56 112 224 448 896 1536 Time(Seconds) Number of Processes Graph500 MVAPICH2 Open MPI Proposed 0 200 400 28 56 112 224 448 896 1536 Time(Seconds) Number of Processes CoMD MVAPICH2 Open MPI Proposed 0 50 100 28 56 112 224 448 896 1536 Time(Seconds) Number of Processes MiniGhost MVAPICH2 Open MPI Proposed Will be available in upcoming MVAPICH2-X 2.3rc2
  • 20. HPCAC-Stanford (Feb ‘19) 20Network Based Computing Laboratory 0 0.1 0.2 0.3 0.4 (4,28) (8,28) (16,28) Latency(seconds) (Number of Nodes, PPN) MVAPICH2 Benefits of SHARP Allreduce at Application Level 12% Avg DDOT Allreduce time of HPCG SHARP support available since MVAPICH2 2.3a Parameter Description Default MV2_ENABLE_SHARP=1 Enables SHARP-based collectives Disabled --enable-sharp Configure flag to enable SHARP Disabled • Refer to Running Collectives with Hardware based SHARP support section of MVAPICH2 user guide for more information • http://mvapich.cse.ohio-state.edu/static/media/mvapich/mvapich2-2.3-userguide.html#x1-990006.26
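
SHARP acceleration requires no application changes: the offload is enabled through the configure flag and runtime parameter listed above (--enable-sharp, MV2_ENABLE_SHARP=1), while the application issues an ordinary MPI_Allreduce. A hedged sketch of the small, latency-sensitive reduction this targets (a DDOT-style reduction of one double, as in HPCG):

```c
/* allreduce_sharp.c: sketch of the small MPI_Allreduce that the SHARP
 * offload above accelerates; no SHARP-specific code appears in the
 * application. Run (illustrative): MV2_ENABLE_SHARP=1 mpirun_rsh -np 112 -hostfile hosts ./allreduce_sharp */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local_dot = (double)(rank + 1);   /* dummy local partial result */
    double global_dot = 0.0;

    /* Small, latency-sensitive reduction -- the case SHARP targets */
    MPI_Allreduce(&local_dot, &global_dot, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    if (rank == 0)
        printf("global dot = %f\n", global_dot);

    MPI_Finalize();
    return 0;
}
```
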
  • 21. HPCAC-Stanford (Feb ‘19) 21Network Based Computing Laboratory MPI_Allreduce on KNL + Omni-Path (10,240 Processes) 0 50 100 150 200 250 300 4 8 16 32 64 128 256 512 1024 2048 4096 Latency(us) Message Size MVAPICH2 MVAPICH2-OPT IMPI 0 200 400 600 800 1000 1200 1400 1600 1800 2000 8K 16K 32K 64K 128K 256K Message Size MVAPICH2 MVAPICH2-OPT IMPI OSU Micro Benchmark 64 PPN 2.4X • For MPI_Allreduce latency with 32K bytes, MVAPICH2-OPT can reduce the latency by 2.4X M. Bayatpour, S. Chakraborty, H. Subramoni, X. Lu, and D. K. Panda, Scalable Reduction Collectives with Data Partitioning-based Multi-Leader Design, SuperComputing '17. Available since MVAPICH2-X 2.3b
  • 22. HPCAC-Stanford (Feb ‘19) 22Network Based Computing Laboratory Optimized CMA-based Collectives for Large Messages 1 10 100 1000 10000 100000 1000000 1K 2K 4K 8K 16K 32K 64K 128K 256K 512K 1M 2M 4M Message Size KNL (2 Nodes, 128 Procs) MVAPICH2-2.3a Intel MPI 2017 OpenMPI 2.1.0 Tuned CMA Latency(us) 1 10 100 1000 10000 100000 1000000 1K 2K 4K 8K 16K 32K 64K 128K 256K 512K 1M 2M Message Size KNL (4 Nodes, 256 Procs) MVAPICH2-2.3a Intel MPI 2017 OpenMPI 2.1.0 Tuned CMA 1 10 100 1000 10000 100000 1000000 1K 2K 4K 8K 16K 32K 64K 128K 256K 512K 1M Message Size KNL (8 Nodes, 512 Procs) MVAPICH2-2.3a Intel MPI 2017 OpenMPI 2.1.0 Tuned CMA • Significant improvement over existing implementation for Scatter/Gather with 1MB messages (up to 4x on KNL, 2x on Broadwell, 14x on OpenPower) • New two-level algorithms for better scalability • Improved performance for other collectives (Bcast, Allgather, and Alltoall) ~ 2.5x Better ~ 3.2x Better ~ 4x Better ~ 17x Better S. Chakraborty, H. Subramoni, and D. K. Panda, Contention Aware Kernel-Assisted MPI Collectives for Multi/Many-core Systems, IEEE Cluster ’17, BEST Paper Finalist Performance of MPI_Gather on KNL nodes (64PPN) Available since MVAPICH2-X 2.3b
  • 23. HPCAC-Stanford (Feb ‘19) 23Network Based Computing Laboratory Shared Address Space (XPMEM)-based Collectives Design 1 10 100 1000 10000 100000 16K 32K 64K 128K 256K 512K 1M 2M 4M Latency(us) Message Size MVAPICH2-2.3b IMPI-2017v1.132 MVAPICH2-X-2.3rc1 OSU_Allreduce (Broadwell 256 procs) • “Shared Address Space”-based true zero-copy Reduction collective designs in MVAPICH2 • Offloaded computation/communication to peers ranks in reduction collective operation • Up to 4X improvement for 4MB Reduce and up to 1.8X improvement for 4M AllReduce 73.2 1.8X 1 10 100 1000 10000 100000 16K 32K 64K 128K 256K 512K 1M 2M 4M Message Size MVAPICH2-2.3b IMPI-2017v1.132 MVAPICH2-2.3rc1 OSU_Reduce (Broadwell 256 procs) 4X 36.1 37.9 16.8 J. Hashmi, S. Chakraborty, M. Bayatpour, H. Subramoni, and D. Panda, Designing Efficient Shared Address Space Reduction Collectives for Multi-/Many-cores, International Parallel & Distributed Processing Symposium (IPDPS '18), May 2018. Available in MVAPICH2-X 2.3rc1
  • 24. HPCAC-Stanford (Feb ‘19) 24Network Based Computing Laboratory Application-Level Benefits of XPMEM-Based Collectives MiniAMR (Broadwell, ppn=16) • Up to 20% benefits over IMPI for CNTK DNN training using AllReduce • Up to 27% benefits over IMPI and up to 15% improvement over MVAPICH2 for MiniAMR application kernel 0 200 400 600 800 28 56 112 224 ExecutionTime(s) No. of Processes Intel MPI MVAPICH2 MVAPICH2-XPMEM CNTK AlexNet Training (Broadwell, B.S=default, iteration=50, ppn=28) 0 20 40 60 80 16 32 64 128 256 ExecutionTime(s) No. of Processes Intel MPI MVAPICH2 MVAPICH2-XPMEM20% 9% 27% 15%
  • 25. HPCAC-Stanford (Feb ‘19) 25Network Based Computing Laboratory Efficient Zero-copy MPI Datatypes for Emerging Architectures • New designs for efficient zero-copy based MPI derived datatype processing • Efficient schemes mitigate datatype translation, packing, and exchange overheads • Demonstrated benefits over prevalent MPI libraries for various application kernels • To be available in the upcoming MVAPICH2-X release 0.1 1 10 100 2 4 8 16 28 LogscaleLatency (milliseconds) No. of Processes MVAPICH2X-2.3 IMPI 2018 IMPI 2019 MVAPICH2X-Opt 5X 0.1 1 10 100 1000 Grid Dimensions (x, y, z, t) MVAPICH2X-2.3 IMPI 2018 MVAPICH2X-Opt 19X 0.01 0.1 1 10 Grid Dimensions (x, y, z, t) MVAPICH2X-2.3 IMPI 2018 MVAPICH2X-Opt 3X 3D-Stencil Datatype Kernel on Broadwell (2x14 core) MILC Datatype Kernel on KNL 7250 in Flat-Quadrant Mode (64-core) NAS-MG Datatype Kernel on OpenPOWER (20-core) FALCON: Efficient Designs for Zero-copy MPI Datatype Processing on Emerging Architectures, J. Hashmi, S. Chakraborty, M. Bayatpour, H. Subramoni, D. K. (DK) Panda, 33rd IEEE International Parallel & Distributed Processing Symposium (IPDPS ’19), May 2019.
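
The datatype kernels above exchange non-contiguous grid faces described by MPI derived datatypes, which the zero-copy designs process without intermediate packing. A small assumed sketch of that application-side pattern, using MPI_Type_vector to describe one strided column of a row-major grid:

```c
/* ddt_face.c: hedged sketch of exchanging a strided grid face via an MPI
 * derived datatype -- the application-side pattern the zero-copy designs
 * above optimize (not the actual benchmark code). Run with 2 ranks. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NX 512
#define NY 512

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *grid = calloc((size_t)NX * NY, sizeof(double));

    /* One column of an NX x NY row-major grid: NX blocks of one double,
     * separated by a stride of NY doubles -- non-contiguous in memory. */
    MPI_Datatype column;
    MPI_Type_vector(NX, 1, NY, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);

    if (size >= 2) {
        if (rank == 0) {
            for (int i = 0; i < NX; i++) grid[(size_t)i * NY] = (double)i;
            MPI_Send(grid, 1, column, 1, 0, MPI_COMM_WORLD);   /* first column */
        } else if (rank == 1) {
            MPI_Recv(grid, 1, column, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received column, grid[NY] = %.1f\n", grid[NY]);
        }
    }

    MPI_Type_free(&column);
    free(grid);
    MPI_Finalize();
    return 0;
}
```
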
  • 26. HPCAC-Stanford (Feb ‘19) 26Network Based Computing Laboratory 0 5000 10000 15000 20000 25000 30000 224 448 896 PerformanceinGFLOPS Number of Processes MVAPICH2 Async MVAPICH2 Default IMPI 2019 Default 0 1 2 3 4 5 6 7 8 9 112 224 448 Timeperloopinseconds Number of Processes MVAPICH2 Async MVAPICH2 Default IMPI 2019 Async Benefits of the New Asynchronous Progress Design: Broadwell + InfiniBand Up to 33% performance improvement in P3DFFT application with 448 processes Up to 29% performance improvement in HPL application with 896 processes Memory Consumption = 69% P3DFFT High Performance Linpack (HPL) 26% 27% Lower is better Higher is better A. Ruhela, H. Subramoni, S. Chakraborty, M. Bayatpour, P. Kousha, and D.K. Panda, Efficient Asynchronous Communication Progress for MPI without Dedicated Resources, EuroMPI 2018 Available in MVAPICH2-X 2.3rc1 PPN=28 33% 29% 12% PPN=28 8%
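
The application-side pattern that benefits from asynchronous progress is the usual overlap of nonblocking communication with computation, sketched below under simplifying assumptions (P3DFFT and HPL are of course far more involved):

```c
/* overlap.c: sketch of the compute/communication overlap that the
 * asynchronous progress design above accelerates. Run with >= 2 ranks. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 20)   /* 1M doubles (~8 MB) per direction */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *sendbuf = malloc(N * sizeof(double));
    double *recvbuf = malloc(N * sizeof(double));
    for (int i = 0; i < N; i++) sendbuf[i] = rank;

    int right = (rank + 1) % size, left = (rank - 1 + size) % size;
    MPI_Request reqs[2];

    /* Post large nonblocking transfers ... */
    MPI_Irecv(recvbuf, N, MPI_DOUBLE, left, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, N, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... and compute on independent data while they are in flight. With
     * the asynchronous progress design, the transfers advance even though
     * the application makes no MPI calls inside this loop. */
    double acc = 0.0;
    for (long i = 0; i < 50L * N; i++) acc += (double)(i % 7);

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    if (rank == 0)
        printf("overlap done, acc=%f recv[0]=%f\n", acc, recvbuf[0]);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```
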
  • 27. HPCAC-Stanford (Feb ‘19) 27Network Based Computing Laboratory • Scalability for million to billion processors • Exploiting Accelerators (NVIDIA GPGPUs) • Optimized MVAPICH2 for OpenPower (with/ NVLink) and ARM • Application Scalability and Best Practices Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
  • 28. HPCAC-Stanford (Feb ‘19) 28Network Based Computing Laboratory At Sender: At Receiver: MPI_Recv(r_devbuf, size, …); inside MVAPICH2 • Standard MPI interfaces used for unified data movement • Takes advantage of Unified Virtual Addressing (>= CUDA 4.0) • Overlaps data movement from GPU with RDMA transfers High Performance and High Productivity MPI_Send(s_devbuf, size, …); GPU-Aware (CUDA-Aware) MPI Library: MVAPICH2-GPU
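
To make the slide's s_devbuf/r_devbuf fragment concrete, here is a self-contained, hedged sketch of GPU-to-GPU communication with a CUDA-aware MPI: device pointers are passed straight to MPI_Send/MPI_Recv and the library handles the data movement (with MVAPICH2-GDR, CUDA support is enabled at run time via MV2_USE_CUDA=1, as listed on a later slide):

```c
/* cuda_aware.c: sketch of GPU-to-GPU communication through a CUDA-aware
 * MPI such as MVAPICH2-GDR. Run with 2 ranks, one GPU each.
 * Build (illustrative): mpicc cuda_aware.c -o cuda_aware -lcudart */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

#define COUNT (1 << 20)   /* 1M floats (~4 MB), a typical large message */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    float *devbuf;
    cudaMalloc((void **)&devbuf, COUNT * sizeof(float));
    cudaMemset(devbuf, 0, COUNT * sizeof(float));

    if (size >= 2) {
        if (rank == 0) {
            /* At Sender: device buffer handed directly to MPI */
            MPI_Send(devbuf, COUNT, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* At Receiver: data lands in GPU memory, moved inside the
             * library via GPUDirect RDMA or pipelined staging */
            MPI_Recv(devbuf, COUNT, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }
    }

    cudaFree(devbuf);
    MPI_Finalize();
    return 0;
}
```
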
  • 29. HPCAC-Stanford (Feb ‘19) 29Network Based Computing Laboratory CUDA-Aware MPI: MVAPICH2-GDR 1.8-2.3 Releases • Support for MPI communication from NVIDIA GPU device memory • High performance RDMA-based inter-node point-to-point communication (GPU-GPU, GPU-Host and Host-GPU) • High performance intra-node point-to-point communication for multi-GPU adapters/node (GPU-GPU, GPU-Host and Host-GPU) • Taking advantage of CUDA IPC (available since CUDA 4.1) in intra-node communication for multiple GPU adapters/node • Optimized and tuned collectives for GPU device buffers • MPI datatype support for point-to-point and collective communication from GPU device buffers • Unified memory
  • 30. HPCAC-Stanford (Feb ‘19) 30 Network Based Computing Laboratory Optimized MVAPICH2-GDR Design [charts: GPU-GPU inter-node latency, bandwidth, and bi-bandwidth, MV2-(NO-GDR) vs. MV2-GDR 2.3; 1.85 us small-message latency (11X improvement) and roughly 9x–10x higher bandwidth/bi-bandwidth]. Setup: MVAPICH2-GDR-2.3, Intel Haswell (E5-2687W @ 3.10 GHz) node with 20 cores, NVIDIA Volta V100 GPU, Mellanox Connect-X4 EDR HCA, CUDA 9.0, Mellanox OFED 4.0 with GPU-Direct-RDMA
  • 31. HPCAC-Stanford (Feb ‘19) 31Network Based Computing Laboratory • Platform: Wilkes (Intel Ivy Bridge + NVIDIA Tesla K20c + Mellanox Connect-IB) • HoomdBlue Version 1.0.5 • GDRCOPY enabled: MV2_USE_CUDA=1 MV2_IBA_HCA=mlx5_0 MV2_IBA_EAGER_THRESHOLD=32768 MV2_VBUF_TOTAL_SIZE=32768 MV2_USE_GPUDIRECT_LOOPBACK_LIMIT=32768 MV2_USE_GPUDIRECT_GDRCOPY=1 MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=16384 Application-Level Evaluation (HOOMD-blue) 0 500 1000 1500 2000 2500 4 8 16 32 AverageTimeStepsper second(TPS) Number of Processes MV2 MV2+GDR 0 500 1000 1500 2000 2500 3000 3500 4 8 16 32 AverageTimeStepsper second(TPS) Number of Processes 64K Particles 256K Particles 2X 2X
  • 32. HPCAC-Stanford (Feb ‘19) 32Network Based Computing Laboratory Application-Level Evaluation (Cosmo) and Weather Forecasting in Switzerland 0 0.2 0.4 0.6 0.8 1 1.2 16 32 64 96NormalizedExecutionTime Number of GPUs CSCS GPU cluster Default Callback-based Event-based 0 0.2 0.4 0.6 0.8 1 1.2 4 8 16 32 NormalizedExecutionTime Number of GPUs Wilkes GPU Cluster Default Callback-based Event-based • 2X improvement on 32 GPUs nodes • 30% improvement on 96 GPU nodes (8 GPUs/node) C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee , H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non-Contiguous Data Movement Processing on Modern GPU-enabled Systems, IPDPS’16 On-going collaboration with CSCS and MeteoSwiss (Switzerland) in co-designing MV2-GDR and Cosmo Application Cosmo model: http://www2.cosmo-model.org/content /tasks/operational/meteoSwiss/
  • 33. HPCAC-Stanford (Feb ‘19) 33Network Based Computing Laboratory MVAPICH2-GDR: Enhanced Derived Datatype Processing • Kernel-based and GDRCOPY-based one-shot packing for inter-socket and inter-node communication • Zero-copy (packing-free) for GPUs with peer-to-peer direct access over PCIe/NVLink 0 5 10 15 20 25 [6, 8,8,8,8] [6, 8,8,8,16] [6, 8,8,16,16] [6, 16,16,16,16] MILC Speedup Problem size GPU-based DDTBench mimics MILC communication kernel OpenMPI 4.0.0 MVAPICH2-GDR 2.3 MVAPICH2-GDR-Next Platform: Nvidia DGX-2 system (16 NVIDIA Volta GPUs connected with NVSwitch), CUDA 9.2 0 1 2 3 4 5 8 16 32 64 ExecutionTime(s) Number of GPUs Communication Kernel of COSMO Model MVAPICH2-GDR 2.3 MVAPICH2-GDR-Next Platform: Cray CS-Storm (8 NVIDIA Tesla K80 GPUs per node), CUDA 8.0 Improved ~10X
  • 34. HPCAC-Stanford (Feb ‘19) 34Network Based Computing Laboratory • Scalability for million to billion processors • Exploiting Accelerators (NVIDIA GPGPUs) • Optimized MVAPICH2 for OpenPower (with/ NVLink) and ARM • Application Scalability and Best Practices Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
  • 35. HPCAC-Stanford (Feb ‘19) 35 Network Based Computing Laboratory Intra-node Point-to-Point Performance on OpenPower [charts: intra-socket small- and large-message latency, bandwidth, and bi-directional bandwidth for MVAPICH2-2.3 vs. SpectrumMPI-10.1.0.2 and OpenMPI-3.0.0; 0.30 us intra-socket small-message latency]. Platform: two nodes of OpenPOWER (Power8-ppc64le) CPU using a Mellanox EDR (MT4115) HCA
  • 36. HPCAC-Stanford (Feb ‘19) 36 Network Based Computing Laboratory MVAPICH2-GDR: Performance on OpenPOWER (NVLink + Volta) [charts: intra-node latency (small/large, intra- vs. inter-socket), intra-node bandwidth, inter-node latency (small/large), and inter-node bandwidth versus message size]. Intra-node latency: 5.77 us (without GDRCopy); intra-node bandwidth: 63.7 GB/sec (NVLINK); inter-node latency: 5.77 us (without GDRCopy); inter-node bandwidth: 23.7 GB/sec (2-port EDR). Available since MVAPICH2-GDR 2.3a. Platform: OpenPOWER (ppc64le) nodes equipped with a dual-socket CPU, 4 Volta V100-SXM2 GPUs, and a 2-port EDR InfiniBand interconnect
  • 37. HPCAC-Stanford (Feb ‘19) 37Network Based Computing Laboratory 0 1000 2000 16K 32K 64K 128K 256K 512K 1M 2M Latency(us) MVAPICH2-GDR-Next SpectrumMPI-10.1.0 OpenMPI-3.0.0 3X 0 2000 4000 16K 32K 64K 128K 256K 512K 1M 2M Latency(us) Message Size MVAPICH2-GDR-Next SpectrumMPI-10.1.0 OpenMPI-3.0.0 34% 0 1000 2000 3000 4000 16K 32K 64K 128K 256K 512K 1M 2M Latency(us) Message Size MVAPICH2-GDR-Next SpectrumMPI-10.1.0 OpenMPI-3.0.0 0 1000 2000 16K 32K 64K 128K 256K 512K 1M 2M Latency(us) MVAPICH2-GDR-Next SpectrumMPI-10.1.0 OpenMPI-3.0.0 Optimized All-Reduce with XPMEM on OpenPOWER(Nodes=1,PPN=20) Optimized Runtime Parameters: MV2_CPU_BINDING_POLICY=hybrid MV2_HYBRID_BINDING_POLICY=bunch • Optimized MPI All-Reduce Design in MVAPICH2 – Up to 2X performance improvement over Spectrum MPI and 4X over OpenMPI for intra-node 2X (Nodes=2,PPN=20) 4X 48% 3.3X 2X 2X
  • 38. HPCAC-Stanford (Feb ‘19) 38 Network Based Computing Laboratory Intra-node Point-to-point Performance on ARM Cortex-A72 [charts: small- and large-message latency, bandwidth, and bi-directional bandwidth for MVAPICH2-2.3; 0.27 us latency at 1 byte]. Platform: ARM Cortex-A72 (aarch64) dual-socket CPU with 64 cores, 32 cores per socket
  • 39. HPCAC-Stanford (Feb ‘19) 39Network Based Computing Laboratory • Scalability for million to billion processors • Exploiting Accelerators (NVIDIA GPGPUs) • Optimized MVAPICH2 for OpenPower (with/ NVLink) and ARM • Application Scalability and Best Practices Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale
  • 40. HPCAC-Stanford (Feb ‘19) 40Network Based Computing Laboratory 0 20 40 60 80 100 120 140 160 MILC Leslie3D POP2 LAMMPS WRF2 LU ExecutionTimein(s) Intel MPI 18.1.163 MVAPICH2-X-2.3rc1 31% SPEC MPI 2007 Benchmarks: Broadwell + InfiniBand MVAPICH2-X outperforms Intel MPI by up to 31% Configuration: 448 processes on 16 Intel E5-2680v4 (Broadwell) nodes having 28 PPN and interconnected with 100Gbps Mellanox MT4115 EDR ConnectX-4 HCA 29% 5% -12% 1% 11%
  • 41. HPCAC-Stanford (Feb ‘19) 41Network Based Computing Laboratory Application Scalability on Skylake and KNL (Stamepede2) MiniFE (1300x1300x1300 ~ 910 GB) Runtime parameters: MV2_SMPI_LENGTH_QUEUE=524288 PSM2_MQ_RNDV_SHM_THRESH=128K PSM2_MQ_RNDV_HFI_THRESH=128K 0 50 100 150 2048 4096 8192 ExecutionTime(s) No. of Processes (KNL: 64ppn) MVAPICH2 0 20 40 60 2048 4096 8192 ExecutionTime(s) No. of Processes (Skylake: 48ppn) MVAPICH2 0 500 1000 1500 48 96 192 384 768 No. of Processes (Skylake: 48ppn) MVAPICH2 NEURON (YuEtAl2012) Courtesy: Mahidhar Tatineni @SDSC, Dong Ju (DJ) Choi@SDSC, and Samuel Khuvis@OSC ---- Testbed: TACC Stampede2 using MVAPICH2-2.3b 0 1000 2000 3000 4000 64 128 256 512 1024 2048 4096 No. of Processes (KNL: 64ppn) MVAPICH2 0 500 1000 1500 68 136 272 544 1088 2176 4352 No. of Processes (KNL: 68ppn) MVAPICH2 0 500 1000 1500 2000 48 96 192 384 768 1536 3072 No. of Processes (Skylake: 48ppn) MVAPICH2 Cloverleaf (bm64) MPI+OpenMP, NUM_OMP_THREADS = 2
  • 42. HPCAC-Stanford (Feb ‘19) 42Network Based Computing Laboratory • MPI runtime has many parameters • Tuning a set of parameters can help you to extract higher performance • Compiled a list of such contributions through the MVAPICH Website – http://mvapich.cse.ohio-state.edu/best_practices/ • Initial list of applications – Amber – HoomDBlue – HPCG – Lulesh – MILC – Neuron – SMG2000 – Cloverleaf – SPEC (LAMMPS, POP2, TERA_TF, WRF2) • Soliciting additional contributions, send your results to mvapich-help at cse.ohio-state.edu. • We will link these results with credits to you. Applications-Level Tuning: Compilation of Best Practices
  • 43. HPCAC-Stanford (Feb ‘19) 43Network Based Computing Laboratory • Traditional HPC – Message Passing Interface (MPI), including MPI + OpenMP – Exploiting Accelerators • Deep Learning – Caffe, CNTK, TensorFlow, and many more • Cloud for HPC and BigData – Virtualization with SR-IOV and Containers HPC, Deep Learning and Cloud
  • 44. HPCAC-Stanford (Feb ‘19) 44Network Based Computing Laboratory • Deep Learning frameworks are a different game altogether – Unusually large message sizes (order of megabytes) – Most communication based on GPU buffers • Existing State-of-the-art – cuDNN, cuBLAS, NCCL --> scale-up performance – NCCL2, CUDA-Aware MPI --> scale-out performance • For small and medium message sizes only! • Proposed: Can we co-design the MPI runtime (MVAPICH2- GDR) and the DL framework (Caffe) to achieve both? – Efficient Overlap of Computation and Communication – Efficient Large-Message Communication (Reductions) – What application co-designs are needed to exploit communication-runtime co-designs? Deep Learning: New Challenges for MPI Runtimes Scale-upPerformance Scale-out Performance A. A. Awan, K. Hamidouche, J. M. Hashmi, and D. K. Panda, S-Caffe: Co-designing MPI Runtimes and Caffe for Scalable Deep Learning on Modern GPU Clusters. In Proceedings of the 22nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '17) cuDNN gRPC Hadoop MPI MKL-DNN DesiredNCCL1 NCCL2
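
At its core, the communication in data-parallel DNN training is one large MPI_Allreduce over GPU-resident gradient buffers per iteration, which is exactly the case the co-designed runtime targets. A simplified, assumed sketch of that pattern (not S-Caffe or Horovod code; chunking, overlap, and the 1/N scaling step are omitted):

```c
/* grad_allreduce.c: sketch of the large-message, GPU-buffer gradient
 * summation performed each training iteration (assumed, simplified
 * pattern). Requires a CUDA-aware MPI. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

#define NPARAMS (25 * 1000 * 1000)   /* ~100 MB of float gradients */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    float *d_grads;
    cudaMalloc((void **)&d_grads, NPARAMS * sizeof(float));
    cudaMemset(d_grads, 0, NPARAMS * sizeof(float)); /* stand-in for backprop output */

    /* Sum gradients across all ranks directly on device buffers; each rank
     * would then scale by 1/size, typically in a small GPU kernel or fused
     * into the optimizer step (omitted here). */
    MPI_Allreduce(MPI_IN_PLACE, d_grads, NPARAMS, MPI_FLOAT, MPI_SUM,
                  MPI_COMM_WORLD);

    if (rank == 0)
        printf("allreduced a %d-float gradient buffer across %d ranks\n",
               NPARAMS, size);

    cudaFree(d_grads);
    MPI_Finalize();
    return 0;
}
```
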
  • 45. HPCAC-Stanford (Feb ‘19) 45Network Based Computing Laboratory • MVAPICH2-GDR offers excellent performance via advanced designs for MPI_Allreduce. • Up to 11% better performance on the RI2 cluster (16 GPUs) • Near-ideal – 98% scaling efficiency Exploiting CUDA-Aware MPI for TensorFlow (Horovod) 0 100 200 300 400 500 600 700 800 900 1000 1 2 4 8 16 Images/second(Higherisbetter) No. of GPUs Horovod-MPI Horovod-NCCL2 Horovod-MPI-Opt (Proposed) Ideal MVAPICH2-GDR 2.3 (MPI-Opt) is up to 11% faster than MVAPICH2 2.3 (Basic CUDA support) A. A. Awan et al., “Scalable Distributed DNN Training using TensorFlow and CUDA-Aware MPI: Characterization, Designs, and Performance Evaluation”, Under Review, https://arxiv.org/abs/1810.11112
  • 46. HPCAC-Stanford (Feb ‘19) 46Network Based Computing Laboratory 0 10000 20000 30000 40000 50000 512K 1M 2M 4M Latency(us) Message Size (Bytes) MVAPICH2 BAIDU OPENMPI 0 1000000 2000000 3000000 4000000 5000000 6000000 8388608 16777216 33554432 67108864 134217728 268435456 536870912 Latency(us) Message Size (Bytes) MVAPICH2 BAIDU OPENMPI 1 10 100 1000 10000 100000 4 16 64 256 1024 4096 16384 65536 262144 Latency(us) Message Size (Bytes) MVAPICH2 BAIDU OPENMPI • 16 GPUs (4 nodes) MVAPICH2-GDR vs. Baidu-Allreduce and OpenMPI 3.0 MVAPICH2-GDR: Allreduce Comparison with Baidu and OpenMPI *Available since MVAPICH2-GDR 2.3a ~30X better MV2 is ~2X better than Baidu ~10X better OpenMPI is ~5X slower than Baidu ~4X better
  • 47. HPCAC-Stanford (Feb ‘19) 47Network Based Computing Laboratory MVAPICH2-GDR vs. NCCL2 – Allreduce Operation • Optimized designs in MVAPICH2-GDR 2.3 offer better/comparable performance for most cases • MPI_Allreduce (MVAPICH2-GDR) vs. ncclAllreduce (NCCL2) on 16 GPUs 1 10 100 1000 10000 100000 Latency(us) Message Size (Bytes) MVAPICH2-GDR NCCL2 ~1.2X better Platform: Intel Xeon (Broadwell) nodes equipped with a dual-socket CPU, 1 K-80 GPUs, and EDR InfiniBand Inter-connect 1 10 100 1000 4 8 16 32 64 128 256 512 1K 2K 4K 8K 16K 32K 64K Latency(us) Message Size (Bytes) MVAPICH2-GDR NCCL2 ~3X better
  • 48. HPCAC-Stanford (Feb ‘19) 48Network Based Computing Laboratory MVAPICH2-GDR vs. NCCL2 – Allreduce Operation (DGX-2) • Optimized designs in upcoming MVAPICH2-GDR offer better/comparable performance for most cases • MPI_Allreduce (MVAPICH2-GDR) vs. ncclAllreduce (NCCL2) on 1 DGX-2 node (16 Volta GPUs) 1 10 100 1000 10000 Latency(us) Message Size (Bytes) MVAPICH2-GDR-Next NCCL-2.3 ~1.7X better Platform: Nvidia DGX-2 system (16 Nvidia Volta GPUs connected with NVSwitch), CUDA 9.2 0 10 20 30 40 50 60 8 16 32 64 128 256 512 1K 2K 4K 8K 16K 32K 64K 128K Latency(us) Message Size (Bytes) MVAPICH2-GDR-Next NCCL-2.3 ~2.5X better
  • 49. HPCAC-Stanford (Feb ‘19) 49Network Based Computing Laboratory MVAPICH2-GDR vs. NCCL2 – ResNet-50 Training • ResNet-50 Training using TensorFlow benchmark on 1 DGX-2 node (8 Volta GPUs) 0 500 1000 1500 2000 2500 3000 1 2 4 8 Imagepersecond Number of GPUs NCCL-2.3 MVAPICH2-GDR-Next 7.5% higher Platform: Nvidia DGX-2 system (16 Nvidia Volta GPUs connected with NVSwitch), CUDA 9.2 75 80 85 90 95 100 1 2 4 8 ScalingEfficiency(%) Number of GPUs NCCL-2.3 MVAPICH2-GDR-Next Scaling Efficiency = Actual throughput Ideal throughput at scale × 100%
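
The scaling-efficiency metric used above is simply the measured throughput divided by the ideal throughput at that scale, times 100%. A tiny illustrative helper with hypothetical numbers (not taken from the slide):

```c
/* scaling_efficiency.c: the slide's scaling-efficiency formula as code
 * (illustrative helper, not part of the benchmark suite). */
#include <stdio.h>

/* efficiency (%) = actual throughput / (single-unit throughput * n) * 100 */
static double scaling_efficiency(double actual_tput, double single_tput, int n)
{
    return actual_tput / (single_tput * n) * 100.0;
}

int main(void)
{
    /* Hypothetical numbers purely for illustration */
    double one_gpu = 350.0;       /* images/s on 1 GPU */
    double eight_gpus = 2744.0;   /* images/s on 8 GPUs */
    printf("scaling efficiency at 8 GPUs: %.1f%%\n",
           scaling_efficiency(eight_gpus, one_gpu, 8));
    return 0;
}
```
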
  • 50. HPCAC-Stanford (Feb ‘19) 50Network Based Computing Laboratory • Caffe : A flexible and layered Deep Learning framework. • Benefits and Weaknesses – Multi-GPU Training within a single node – Performance degradation for GPUs across different sockets – Limited Scale-out • OSU-Caffe: MPI-based Parallel Training – Enable Scale-up (within a node) and Scale-out (across multi-GPU nodes) – Scale-out on 64 GPUs for training CIFAR-10 network on CIFAR-10 dataset – Scale-out on 128 GPUs for training GoogLeNet network on ImageNet dataset OSU-Caffe: Scalable Deep Learning 0 50 100 150 200 250 8 16 32 64 128 TrainingTime(seconds) No. of GPUs GoogLeNet (ImageNet) on 128 GPUs Caffe OSU-Caffe (1024) OSU-Caffe (2048) Invalid use case OSU-Caffe publicly available from http://hidl.cse.ohio-state.edu/
  • 51. HPCAC-Stanford (Feb ‘19) 51Network Based Computing Laboratory • High-Performance Design of TensorFlow over RDMA-enabled Interconnects – High performance RDMA-enhanced design with native InfiniBand support at the verbs-level for gRPC and TensorFlow – RDMA-based data communication – Adaptive communication protocols – Dynamic message chunking and accumulation – Support for RDMA device selection – Easily configurable for different protocols (native InfiniBand and IPoIB) • Current release: 0.9.1 – Based on Google TensorFlow 1.3.0 – Tested with • Mellanox InfiniBand adapters (e.g., EDR) • NVIDIA GPGPU K80 • Tested with CUDA 8.0 and CUDNN 5.0 – http://hidl.cse.ohio-state.edu RDMA-TensorFlow Distribution
  • 52. HPCAC-Stanford (Feb ‘19) 52Network Based Computing Laboratory 0 50 100 150 200 16 32 64 Images/Second Batch Size gRPPC (IPoIB-100Gbps) Verbs (RDMA-100Gbps) MPI (RDMA-100Gbps) AR-gRPC (RDMA-100Gbps) Performance Benefit for RDMA-TensorFlow (Inception3) • TensorFlow Inception3 performance evaluation on an IB EDR cluster – Up to 20% performance speedup over Default gRPC (IPoIB) for 8 GPUs – Up to 34% performance speedup over Default gRPC (IPoIB) for 16 GPUs – Up to 37% performance speedup over Default gRPC (IPoIB) for 24 GPUs 4 Nodes (8 GPUS) 8 Nodes (16 GPUS) 12 Nodes (24 GPUS) 0 200 400 600 16 32 64 Images/Second Batch Size gRPPC (IPoIB-100Gbps) Verbs (RDMA-100Gbps) MPI (RDMA-100Gbps) AR-gRPC (RDMA-100Gbps) 0 100 200 300 400 16 32 64Images/Second Batch Size gRPPC (IPoIB-100Gbps) Verbs (RDMA-100Gbps) MPI (RDMA-100Gbps) AR-gRPC (RDMA-100Gbps)
  • 53. HPCAC-Stanford (Feb ‘19) 53Network Based Computing Laboratory • Traditional HPC – Message Passing Interface (MPI), including MPI + OpenMP – Exploiting Accelerators • Deep Learning – Caffe, CNTK, TensorFlow, and many more • Cloud for HPC and BigData – Virtualization with SR-IOV and Containers HPC, Deep Learning and Cloud
  • 54. HPCAC-Stanford (Feb ‘19) 54Network Based Computing Laboratory • Virtualization has many benefits – Fault-tolerance – Job migration – Compaction • Have not been very popular in HPC due to overhead associated with Virtualization • New SR-IOV (Single Root – IO Virtualization) support available with Mellanox InfiniBand adapters changes the field • Enhanced MVAPICH2 support for SR-IOV • MVAPICH2-Virt 2.2 supports: – OpenStack, Docker, and singularity Can HPC and Virtualization be Combined? J. Zhang, X. Lu, J. Jose, R. Shi and D. K. Panda, Can Inter-VM Shmem Benefit MPI Applications on SR-IOV based Virtualized InfiniBand Clusters? EuroPar'14 J. Zhang, X. Lu, J. Jose, M. Li, R. Shi and D.K. Panda, High Performance MPI Library over SR-IOV enabled InfiniBand Clusters, HiPC’14 J. Zhang, X .Lu, M. Arnold and D. K. Panda, MVAPICH2 Over OpenStack with SR-IOV: an Efficient Approach to build HPC Clouds, CCGrid’15
  • 55. HPCAC-Stanford (Feb ‘19) 55Network Based Computing Laboratory 0 50 100 150 200 250 300 350 400 milc leslie3d pop2 GAPgeofem zeusmp2 lu ExecutionTime(s) MV2-SR-IOV-Def MV2-SR-IOV-Opt MV2-Native 1% 9.5% 0 1000 2000 3000 4000 5000 6000 22,20 24,10 24,16 24,20 26,10 26,16 ExecutionTime(ms) Problem Size (Scale, Edgefactor) MV2-SR-IOV-Def MV2-SR-IOV-Opt MV2-Native 2% • 32 VMs, 6 Core/VM • Compared to Native, 2-5% overhead for Graph500 with 128 Procs • Compared to Native, 1-9.5% overhead for SPEC MPI2007 with 128 Procs Application-Level Performance on Chameleon SPEC MPI2007Graph500 5% A Release for Azure Coming Soon
  • 56. HPCAC-Stanford (Feb ‘19) 56Network Based Computing Laboratory 0 500 1000 1500 2000 2500 3000 22,16 22,20 24,16 24,20 26,16 26,20 BFSExecutionTime(ms) Problem Size (Scale, Edgefactor) Graph500 Singularity Native 0 50 100 150 200 250 300 CG EP FT IS LU MG ExecutionTime(s) NPB Class D Singularity Native • 512 Processes across 32 nodes • Less than 7% and 6% overhead for NPB and Graph500, respectively Application-Level Performance on Singularity with MVAPICH2 7% 6% J. Zhang, X .Lu and D. K. Panda, Is Singularity-based Container Technology Ready for Running MPI Applications on HPC Clouds?, UCC ’17, Best Student Paper Award
  • 57. HPCAC-Stanford (Feb ‘19) 57Network Based Computing Laboratory 0 5000 10000 15000 20000 Bandwidth(MB/s) Message Size (Bytes) GPU-GPU Inter-node Bi-Bandwidth Docker Native 0 2000 4000 6000 8000 10000 12000 Bandwidth(MB/s) Message Size (Bytes) GPU-GPU Inter-node Bandwidth Docker Native 1 10 100 1000 1 4 16 64 256 1K 4K 16K 64K 256K 1M 4M Latency(us) Message Size (Bytes) GPU-GPU Inter-node Latency Docker Native MVAPICH2-GDR-2.3a Intel Haswell (E5-2687W @ 3.10 GHz) node - 20 cores NVIDIA Volta V100 GPU Mellanox Connect-X4 EDR HCA CUDA 9.0 Mellanox OFED 4.0 with GPU-Direct-RDMA MVAPICH2-GDR on Container with Negligible Overhead Works with NVIDIA HPC Container Maker https://github.com/NVIDIA/hpc-container-maker/blob/master/recipes/hpcbase-pgi-mvapich2.py
  • 58. HPCAC-Stanford (Feb ‘19) 58Network Based Computing Laboratory • Supported through X-ScaleSolutions (http://x-scalesolutions.com) • Benefits: – Help and guidance with installation of the library – Platform-specific optimizations and tuning – Timely support for operational issues encountered with the library – Web portal interface to submit issues and tracking their progress – Advanced debugging techniques – Application-specific optimizations and tuning – Obtaining guidelines on best practices – Periodic information on major fixes and updates – Information on major releases – Help with upgrading to the latest release – Flexible Service Level Agreements • Support provided to Lawrence Livermore National Laboratory (LLNL) for the last two years Commercial Support for MVAPICH2, HiBD, and HiDL Libraries
  • 59. HPCAC-Stanford (Feb ‘19) 59Network Based Computing Laboratory Funding Acknowledgments Funding Support by Equipment Support by
  • 60. HPCAC-Stanford (Feb ‘19) 60Network Based Computing Laboratory Personnel Acknowledgments Current Students (Graduate) – A. Awan (Ph.D.) – M. Bayatpour (Ph.D.) – S. Chakraborthy (Ph.D.) – C.-H. Chu (Ph.D.) – S. Guganani (Ph.D.) Past Students – A. Augustine (M.S.) – P. Balaji (Ph.D.) – R. Biswas (M.S.) – S. Bhagvat (M.S.) – A. Bhat (M.S.) – D. Buntinas (Ph.D.) – L. Chai (Ph.D.) – B. Chandrasekharan (M.S.) – N. Dandapanthula (M.S.) – V. Dhanraj (M.S.) – T. Gangadharappa (M.S.) – K. Gopalakrishnan (M.S.) – R. Rajachandrasekar (Ph.D.) – G. Santhanaraman (Ph.D.) – A. Singh (Ph.D.) – J. Sridhar (M.S.) – S. Sur (Ph.D.) – H. Subramoni (Ph.D.) – K. Vaidyanathan (Ph.D.) – A. Vishnu (Ph.D.) – J. Wu (Ph.D.) – W. Yu (Ph.D.) – J. Zhang (Ph.D.) Past Research Scientist – K. Hamidouche – S. Sur Past Post-Docs – D. Banerjee – X. Besseron – H.-W. Jin – W. Huang (Ph.D.) – W. Jiang (M.S.) – J. Jose (Ph.D.) – S. Kini (M.S.) – M. Koop (Ph.D.) – K. Kulkarni (M.S.) – R. Kumar (M.S.) – S. Krishnamoorthy (M.S.) – K. Kandalla (Ph.D.) – M. Li (Ph.D.) – P. Lai (M.S.) – J. Liu (Ph.D.) – M. Luo (Ph.D.) – A. Mamidala (Ph.D.) – G. Marsh (M.S.) – V. Meshram (M.S.) – A. Moody (M.S.) – S. Naravula (Ph.D.) – R. Noronha (Ph.D.) – X. Ouyang (Ph.D.) – S. Pai (M.S.) – S. Potluri (Ph.D.) – J. Hashmi (Ph.D.) – A. Jain (Ph.D.) – K. S. Khorassani (Ph.D.) – P. Kousha (Ph.D.) – D. Shankar (Ph.D.) – J. Lin – M. Luo – E. Mancini Current Research Asst. Professor – X. Lu Past Programmers – D. Bureddy – J. Perkins Current Research Specialist – J. Smith – S. Marcarelli – J. Vienne – H. Wang Current Post-doc – A. Ruhela – K. Manian Current Students (Undergraduate) – V. Gangal (B.S.) – M. Haupt (B.S.) – N. Sarkauskas (B.S.) – A. Yeretzian (B.S.) Past Research Specialist – M. Arnold Current Research Scientist – H. Subramoni
  • 61. HPCAC-Stanford (Feb ‘19) 61Network Based Computing Laboratory Thank You! Network-Based Computing Laboratory http://nowlab.cse.ohio-state.edu/ panda@cse.ohio-state.edu The High-Performance MPI/PGAS Project http://mvapich.cse.ohio-state.edu/ The High-Performance Deep Learning Project http://hidl.cse.ohio-state.edu/ The High-Performance Big Data Project http://hibd.cse.ohio-state.edu/

Editor's Notes

  1. This one can be updated with the latest version of GDR
  2. Available now
  3. To get the highest (bi-)bandwidth on Volta, use “MV2_GPUDIRECT_LIMIT=16384 MV2_CUDA_BLOCK_SIZE=1048576”. Reference can be added later.