Overview of the MVAPICH Project:
Latest Status and Future Roadmap
MVAPICH2 User Group (MUG) Meeting
by
Dhabaleswar K. (DK) Panda
The Ohio State University
E-mail: panda@cse.ohio-state.edu
http://www.cse.ohio-state.edu/~panda
High-End Computing (HEC): PetaFlop to ExaFlop
• 100 PFlops reached in 2017
• 143 PFlops in 2018
• Expected to have an ExaFlop (1 EFlops) system in 2020-2021
Supporting Programming Models for Multi-Petaflop and Exaflop Systems: Challenges
• Application kernels/applications (HPC and DL)
• Programming models: MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenMP, OpenACC, Cilk, Hadoop (MapReduce), Spark (RDD, DAG), etc.
• Communication library or runtime for programming models: point-to-point communication, collective communication, energy-awareness, synchronization and locks, I/O and file systems, fault tolerance
• Underlying hardware: networking technologies (InfiniBand, 40/100/200GigE, Aries, and Omni-Path), multi-/many-core architectures, and accelerators (GPU and FPGA)
• Middleware co-design opportunities and challenges across the various layers: performance, scalability, and resilience
• Scalability for million to billion processors
– Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)
– Scalable job start-up
– Low memory footprint
• Scalable Collective communication
– Offload
– Non-blocking
– Topology-aware
• Balancing intra-node and inter-node communication for next generation nodes (128-1024 cores)
– Multiple end-points per node
• Support for efficient multi-threading
• Integrated Support for Accelerators (GPGPUs and FPGAs)
• Fault-tolerance/resiliency
• QoS support for communication and I/O
• Support for Hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + OpenSHMEM,
MPI+UPC++, CAF, …)
• Virtualization
• Energy-Awareness
Designing (MPI+X) at Exascale
Overview of the MVAPICH2 Project
• High Performance open-source MPI Library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)
– MVAPICH (MPI-1), MVAPICH2 (MPI-2.2 and MPI-3.1), Started in 2001, First version available in 2002
– MVAPICH2-X (MPI + PGAS), Available since 2011
– Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), Available since 2014
– Support for Virtualization (MVAPICH2-Virt), Available since 2015
– Support for Energy-Awareness (MVAPICH2-EA), Available since 2015
– Support for InfiniBand Network Analysis and Monitoring (OSU INAM) since 2015
– Used by more than 3,025 organizations in 89 countries
– More than 564,000 (> 0.5 million) downloads from the OSU site directly
– Empowering many TOP500 clusters (Nov ‘18 ranking)
• 3rd, 10,649,600-core (Sunway TaihuLight) at National Supercomputing Center in Wuxi, China
• 5th, 448,448 cores (Frontera) at TACC
• 8th, 391,680 cores (ABCI) in Japan
• 15th, 570,020 cores (Nurion) in South Korea and many others
– Available with software stacks of many vendors and Linux Distros (RedHat, SuSE, and OpenHPC)
– http://mvapich.cse.ohio-state.edu
• Empowering Top500 systems for over a decade
Partner in the TACC Frontera System
MVAPICH Project Timeline
(Timeline figure: introduction of MVAPICH (Oct-02, later reaching EOL), OMB, MVAPICH2, MVAPICH2-X, MVAPICH2-GDR, MVAPICH2-MIC, MVAPICH2-Virt, MVAPICH2-EA, and OSU-INAM between Oct-02 and Sep-15)
MVAPICH2 Release Timeline and Downloads
(Figure: cumulative number of downloads from the OSU site, Sep-04 through May-19, growing from near zero to roughly 600,000, annotated with release points from MV 0.9.4 and MV2 0.9.0 through MV2 2.3.2, MV2-X 2.3rc2, MV2-GDR 2.3.2, MV2-Virt 2.2, OSU INAM 0.9.3, MV2-Azure 2.3.2, and MV2-AWS 2.3)
Architecture of MVAPICH2 Software Family (for HPC and DL)
• High-performance parallel programming models: Message Passing Interface (MPI), PGAS (UPC, OpenSHMEM, CAF, UPC++), and Hybrid MPI + X (MPI + PGAS + OpenMP/Cilk)
• High-performance and scalable communication runtime with diverse APIs and mechanisms: point-to-point primitives, collective algorithms, energy-awareness, remote memory access, I/O and file systems, fault tolerance, virtualization, active messages, job startup, and introspection & analysis
• Support for modern networking technology (InfiniBand, iWARP, RoCE, Omni-Path) and modern multi-/many-core architectures (Intel Xeon, OpenPOWER, Xeon Phi, ARM, NVIDIA GPGPU)
• Transport protocols and modern features: RC, XRC, UD, DC, SHARP2*, ODP, SR-IOV, Multi-Rail
• Transport mechanisms: shared memory, CMA, XPMEM, IVSHMEM
• Modern features: MCDRAM*, NVLink, CAPI*
(* upcoming)
• Research is done to explore new designs
• Designs are first presented in conference/journal publications
• Best performing designs are incorporated into the codebase
• Rigorous QA procedure before making a release
– Exhaustive unit testing
– Various test procedures on diverse range of platforms and interconnects
– Test 19 different benchmarks and applications including, but not limited to
• OMB, IMB, MPICH Test Suite, Intel Test Suite, NAS, ScaLAPACK, and SPEC
– Spend about 18,000 core hours per commit
– Performance regression and tuning
– Applications-based evaluation
– Evaluation on large-scale systems
• All versions (alpha, beta, RC1 and RC2) go through the above testing
Strong Procedure for Design, Development and Release
MVAPICH2 Software Family
Requirements and corresponding library:
• MVAPICH2 – MPI with support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE (v1/v2)
• MVAPICH2-Azure – Optimized support for the Microsoft Azure platform with InfiniBand
• MVAPICH2-X – Advanced MPI features/support (UMR, ODP, DC, Core-Direct, SHArP, XPMEM) and OSU INAM (InfiniBand Network Monitoring and Analysis)
• MVAPICH2-X-AWS – Advanced MPI features (SRD and XPMEM) with support for the Amazon Elastic Fabric Adapter (EFA)
• MVAPICH2-GDR – Optimized MPI for clusters with NVIDIA GPUs and for GPU-enabled Deep Learning applications
• MVAPICH2-EA – Energy-aware MPI with support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE (v1/v2)
• OEMT – MPI energy monitoring tool
• OSU INAM – InfiniBand network analysis and monitoring
• OMB – Microbenchmarks for measuring MPI and PGAS performance
• Released on 08/09/2019
• Major Features and Enhancements
– Improved performance for inter-node communication
– Improved performance for Gather, Reduce, and Allreduce with cyclic hostfile
• Thanks to X-ScaleSolutions for the patch
– Improved performance for intra-node point-to-point communication
– Add support for Mellanox HDR adapters
– Add support for Cascade Lake systems
– Add support for Microsoft Azure platform
• Enhanced point-to-point and collective tuning for Microsoft Azure
– Add support for new NUMA-aware hybrid binding policy
– Add support for AMD EPYC Rome architecture
– Improved multi-rail selection logic
– Enhanced heterogeneity detection logic
– Enhanced point-to-point and collective tuning for AMD EPYC Rome, Frontera@TACC, Mayer@Sandia, Pitzer@OSC,
Summit@ORNL, Lassen@LLNL, and Sierra@LLNL systems
– Add multiple PVARs and CVARs for point-to-point and collective operations
MVAPICH2 2.3.2
• Support for highly-efficient inter-node and intra-node communication
• Scalable Start-up
• Optimized Collectives using SHArP and Multi-Leaders
• Support for OpenPOWER and ARM architectures
• Performance Engineering with MPI-T
• Application Scalability and Best Practices
Highlights of MVAPICH2 2.3.2-GA Release
One-way Latency: MPI over IB with MVAPICH2
(Figures: small- and large-message latency vs. message size; small-message latencies of roughly 1.01-1.19 us across the six platforms below)
• TrueScale-QDR: 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch
• ConnectX-3-FDR: 2.8 GHz Deca-core (IvyBridge) Intel PCI Gen3 with IB switch
• ConnectIB-Dual FDR: 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch
• ConnectX-4-EDR: 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch
• Omni-Path: 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with Omni-Path switch
• ConnectX-6-HDR: 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch
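The latency numbers above come from OSU Micro-Benchmarks (OMB) runs such as osu_latency. As a rough, hedged illustration of what that kind of test measures (this is not the OMB source; MSG_SIZE and ITERS are arbitrary illustrative values), a minimal ping-pong sketch in C:

    /* Minimal ping-pong latency sketch in the spirit of osu_latency. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define MSG_SIZE 8      /* bytes per message */
    #define ITERS    1000   /* round trips to average over */

    int main(int argc, char **argv) {
        int rank;
        char *buf = malloc(MSG_SIZE);

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Barrier(MPI_COMM_WORLD);

        double t0 = MPI_Wtime();
        for (int i = 0; i < ITERS; i++) {
            if (rank == 0) {
                MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)   /* one-way latency = half the average round-trip time */
            printf("avg one-way latency: %.2f us\n", (t1 - t0) * 1e6 / (2.0 * ITERS));

        free(buf);
        MPI_Finalize();
        return 0;
    }

Run with two ranks, one per node, to mimic the inter-node setup used above.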
Bandwidth: MPI over IB with MVAPICH2
(Figures: unidirectional and bidirectional bandwidth vs. message size on the same six platforms as above; peak unidirectional bandwidth ranges from 3,373 to 24,532 MBytes/sec and peak bidirectional bandwidth from 6,228 to 48,027 MBytes/sec)
Towards High Performance and Scalable Startup at Exascale
• Near-constant MPI and OpenSHMEM initialization time at any process count
• 10x and 30x improvement in startup time of MPI and OpenSHMEM respectively at 16,384 processes
• Memory consumption reduced for remote endpoint information by O(processes per node)
• 1 GB memory saved per node with 1M processes and 16 processes per node
(Figure: job startup performance and memory required to store endpoint information for state-of-the-art PGAS (P) and MPI (M) versus the optimized PGAS/MPI design (O), using PMIX_Ring, PMIX_Ibarrier, PMIX_Iallgather, shmem-based PMI, and on-demand connection management)
• On-demand Connection Management for OpenSHMEM and OpenSHMEM+MPI. S. Chakraborty, H. Subramoni, J. Perkins, A. A. Awan, and D. K. Panda, 20th International Workshop on High-level Parallel Programming Models and Supportive Environments (HIPS '15)
• PMI Extensions for Scalable MPI Startup. S. Chakraborty, H. Subramoni, A. Moody, J. Perkins, M. Arnold, and D. K. Panda, Proceedings of the 21st European MPI Users' Group Meeting (EuroMPI/Asia '14)
• Non-blocking PMI Extensions for Fast MPI Startup. S. Chakraborty, H. Subramoni, A. Moody, A. Venkatesh, J. Perkins, and D. K. Panda, 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid '15)
• SHMEMPMI – Shared Memory based PMI for Improved Performance and Scalability. S. Chakraborty, H. Subramoni, J. Perkins, and D. K. Panda, 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid '16)
Startup Performance on KNL + Omni-Path
(Figure: MPI_Init and Hello World time vs. number of processes, 64 to 64K, on Oakforest-PACS with MVAPICH2-2.3a)
• MPI_Init takes 22 seconds on 229,376 processes on 3,584 KNL nodes (Stampede2 – full scale)
• 8.8 times faster than Intel MPI at 128K processes (Courtesy: TACC)
• At 64K processes, MPI_Init and Hello World take 5.8s and 21s respectively (Oakforest-PACS)
• All numbers reported with 64 processes per node
• New designs available since MVAPICH2-2.3a and as a patch for SLURM 15.08.8 and SLURM 16.05.1
A minimal timing sketch is shown below.
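A hedged sketch of how MPI_Init / "Hello World" startup time can be measured (this is not the benchmark used for the numbers above; MPI_Wtime is unavailable before MPI_Init, so clock_gettime is used instead):

    #include <mpi.h>
    #include <stdio.h>
    #include <time.h>

    static double now_sec(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec * 1e-9;
    }

    int main(int argc, char **argv) {
        double t0 = now_sec();
        MPI_Init(&argc, &argv);                 /* startup cost being measured */
        double t_init = now_sec() - t0;

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Barrier(MPI_COMM_WORLD);            /* all ranks have said "hello" */
        double t_hello = now_sec() - t0;

        if (rank == 0)
            printf("MPI_Init: %.3f s, init + barrier (hello world): %.3f s\n",
                   t_init, t_hello);

        MPI_Finalize();
        return 0;
    }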
Startup Performance on TACC Frontera
(Figure: MPI_Init time in milliseconds on 56 to 57,344 processes, Intel MPI 2019 vs. MVAPICH2 2.3.2; annotated values of 4.5s and 3.9s)
• MPI_Init takes 3.9 seconds on 57,344 processes on 1,024 nodes
• All numbers reported with 56 processes per node
• New designs available in MVAPICH2 2.3.2
Startup Performance on TACC Frontera: MVAPICH2 2.3.1 vs. 2.3.2
(Figures: osu_init and osu_hello execution time in milliseconds on 1 to 128 nodes at 56 ppn, MVAPICH2 2.3.2 vs. MVAPICH2 2.3.1)
• MVAPICH2 2.3.2 significantly improves startup performance on top of MVAPICH2 2.3.1
• All numbers reported with 56 processes per node
Benefits of SHARP Allreduce at Application Level
(Figure: average DDOT Allreduce time of HPCG at (4,28), (8,28), and (16,28) (nodes, PPN); MVAPICH2-SHArP is up to 12% faster than MVAPICH2)
• SHARP support available since MVAPICH2 2.3a
• MV2_ENABLE_SHARP=1 – enables SHARP-based collectives (default: disabled)
• --enable-sharp – configure flag to enable SHARP (default: disabled)
• Refer to the "Running Collectives with Hardware based SHARP support" section of the MVAPICH2 user guide for more information
• http://mvapich.cse.ohio-state.edu/static/media/mvapich/mvapich2-2.3-userguide.html#x1-990006.26
A minimal usage sketch is shown below.
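On the application side nothing changes: the reduction is an ordinary MPI_Allreduce, and the SHARP offload is requested at launch with the variable listed above. A minimal sketch (the launch line in the comment is an assumption; adapt it to your launcher and site):

    /* Assumed launch: MV2_ENABLE_SHARP=1 mpirun_rsh -np 448 -hostfile hosts ./a.out */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank;
        double local, global = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        local = (double)rank;
        /* Small Allreduce of the kind HPCG's DDOT performs repeatedly;
         * with SHARP enabled it can be offloaded to the switch hardware. */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of ranks = %.0f\n", global);
        MPI_Finalize();
        return 0;
    }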
Intra-node Point-to-Point Performance on OpenPOWER
(Figures: intra-socket small-message latency (as low as 0.22 us), large-message latency, bandwidth, and bi-directional bandwidth, MVAPICH2-2.3.1 vs. SpectrumMPI-2019.02.07)
• Platform: two OpenPOWER (POWER9-ppc64le) nodes using a Mellanox EDR (MT4121) HCA
Intra-node Point-to-Point Performance on ARM Cortex-A72
(Figures: small-message latency (0.27 us for 1-byte messages), large-message latency, bandwidth, and bi-directional bandwidth for MVAPICH2-2.3)
• Platform: ARM Cortex-A72 (aarch64) dual-socket CPU with 64 cores; each socket contains 32 cores
● Enhance existing support for MPI_T in MVAPICH2 to expose a richer set of
performance and control variables
● Get and display MPI Performance Variables (PVARs) made available by the runtime
in TAU
● Control the runtime’s behavior via MPI Control Variables (CVARs)
● Introduced support for new MPI_T based CVARs to MVAPICH2
○ MPIR_CVAR_MAX_INLINE_MSG_SZ, MPIR_CVAR_VBUF_POOL_SIZE,
MPIR_CVAR_VBUF_SECONDARY_POOL_SIZE
● TAU enhanced with support for setting MPI_T CVARs in a non-interactive mode for
uninstrumented applications
● S. Ramesh, A. Maheo, S. Shende, A. Malony, H. Subramoni, and D. K. Panda, MPI
Performance Engineering with the MPI Tool Interface: the Integration of MVAPICH
and TAU, EuroMPI/USA ‘17, Best Paper Finalist
● More details in Sameer Shende’s talk today and poster presentations
Performance Engineering Applications using MVAPICH2 and TAU
(Figures: VBUF usage without and with CVAR-based tuning, as displayed by ParaProf; a small MPI_T sketch is shown below)
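For reference, the CVARs named above are also reachable directly through the standard MPI_T interface, without TAU. A hedged sketch that looks up MPIR_CVAR_VBUF_POOL_SIZE (the name listed above; it may change across releases) and prints its current value; error checking is omitted for brevity:

    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv) {
        int provided, ncvar;
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
        MPI_Init(&argc, &argv);

        MPI_T_cvar_get_num(&ncvar);
        for (int i = 0; i < ncvar; i++) {
            char name[256], desc[256];
            int name_len = sizeof(name), desc_len = sizeof(desc);
            int verbosity, bind, scope, count;
            MPI_Datatype dtype;
            MPI_T_enum enumtype;

            MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &dtype, &enumtype,
                                desc, &desc_len, &bind, &scope);
            if (strcmp(name, "MPIR_CVAR_VBUF_POOL_SIZE") == 0) {
                MPI_T_cvar_handle handle;
                int value;
                MPI_T_cvar_handle_alloc(i, NULL, &handle, &count);
                MPI_T_cvar_read(handle, &value);      /* assumes an integer CVAR */
                printf("MPIR_CVAR_VBUF_POOL_SIZE = %d\n", value);
                MPI_T_cvar_handle_free(&handle);
            }
        }

        MPI_Finalize();
        MPI_T_finalize();
        return 0;
    }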
Application Scalability on Skylake and KNL
(Figures: MVAPICH2 execution time vs. process count for MiniFE (1300x1300x1300, ~910 GB) at 2048-8192 processes, NEURON (YuEtAl2012), and Cloverleaf (bm64, MPI+OpenMP, NUM_OMP_THREADS=2) on the KNL (64/68 ppn) and Skylake (48 ppn) partitions)
• Runtime parameters: MV2_SMPI_LENGTH_QUEUE=524288 PSM2_MQ_RNDV_SHM_THRESH=128K PSM2_MQ_RNDV_HFI_THRESH=128K
• Courtesy: Mahidhar Tatineni @SDSC, Dong Ju (DJ) Choi @SDSC, and Samuel Khuvis @OSC. Testbed: TACC Stampede2 using MVAPICH2-2.3b
• Released on 08/16/2019
• Major Features and Enhancements
– Based on MVAPICH2-2.3.2
– Enhanced tuning for point-to-point and collective operations
– Targeted for Azure HB & HC virtual machine instances
– Flexibility for 'one-click' deployment
– Tested with Azure HB & HC VM instances
MVAPICH2-Azure 2.3.2
Performance of Radix
(Figures: total execution time on Azure HC instances at 16 (1x16) to 352 (8x44) processes and HB instances at 60 (1x60) to 240 (4x60) processes, MVAPICH2-X vs. HPCx, lower is better; MVAPICH2-X is up to 3x faster on HC and 38% faster on HB)
Performance of FDS (HC)
(Figures: total execution time on a single node at 16-44 processes and on multiple nodes at 88 (2x44) and 176 (4x44) processes, MVAPICH2-X vs. HPCx, lower is better; MVAPICH2-X is up to 1.11x better)
• Part of input parameter: MESH IJK=5,5,5, XB=-1.0,0.0,-1.0,0.0,0.0,1.0, MULT_ID='mesh array'
• Integration of SHARP2 and associated Collective Optimizations
• Communication optimizations on upcoming architectures
– Intel Cooper Lake
– AMD Rome
– ARM
• Dynamic and Adaptive Communication Protocols
MVAPICH2 Upcoming Features
Dynamic and Adaptive MPI Point-to-point Communication Protocols
Desired eager threshold for an example communication pattern (processes 0-3 on node 1 paired with processes 4-7 on node 2):
• Pair 0 – 4: 32 KB
• Pair 1 – 5: 64 KB
• Pair 2 – 6: 128 KB
• Pair 3 – 7: 32 KB
Eager threshold chosen by different designs:
• Default: 16 KB for every pair – poor overlap, low memory requirement (low performance, high productivity)
• Manually tuned: 128 KB for every pair – good overlap, high memory requirement (high performance, low productivity)
• Dynamic + Adaptive: 32/64/128/32 KB per pair – good overlap, optimal memory requirement (high performance, high productivity)
(Figures: execution time and relative memory consumption of Amber at 128-1K processes for Default, Threshold=17K, Threshold=64K, Threshold=128K, and Dynamic Threshold)
H. Subramoni, S. Chakraborty, D. K. Panda, Designing Dynamic & Adaptive MPI Point-to-Point Communication Protocols for Efficient Overlap of Computation & Communication, ISC'17 – Best Paper
A small overlap sketch is shown below.
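For comparison with the manual tuning above, MVAPICH2's eager/rendezvous switch can also be forced by hand through MV2_IBA_EAGER_THRESHOLD (the same variable appears in the HOOMD-blue settings later in this deck). Below is a hedged sketch of a non-blocking exchange whose computation/communication overlap depends on which side of that threshold MSG_BYTES falls; the launch line in the comment is illustrative only:

    /* Assumed launch (even number of ranks):
     *   MV2_IBA_EAGER_THRESHOLD=131072 mpirun_rsh -np 8 -hostfile hosts ./a.out */
    #include <mpi.h>
    #include <stdlib.h>

    #define MSG_BYTES (64 * 1024)

    static void do_compute(void) { /* application work to overlap */ }

    int main(int argc, char **argv) {
        int rank;
        char *sbuf = malloc(MSG_BYTES), *rbuf = malloc(MSG_BYTES);
        MPI_Request reqs[2];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        int peer = rank ^ 1;   /* pair ranks 0-1, 2-3, ... */

        /* Below the eager threshold the send is buffered and overlaps well;
         * above it a rendezvous handshake is needed, which the dynamic and
         * adaptive design tunes per process pair. */
        MPI_Irecv(rbuf, MSG_BYTES, MPI_CHAR, peer, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(sbuf, MSG_BYTES, MPI_CHAR, peer, 0, MPI_COMM_WORLD, &reqs[1]);
        do_compute();                         /* overlap window */
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

        free(sbuf); free(rbuf);
        MPI_Finalize();
        return 0;
    }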
MVAPICH2 Software Family (overview table repeated; see the earlier listing)
MVAPICH2-X for Hybrid MPI + PGAS Applications
• Current model – separate runtimes for OpenSHMEM/UPC/UPC++/CAF and MPI
– Possible deadlock if both runtimes are not progressed
– Consumes more network resources
• Unified communication runtime for MPI, UPC, UPC++, OpenSHMEM, CAF
– Available since 2012 (starting with MVAPICH2-X 1.9)
– http://mvapich.cse.ohio-state.edu
• Architecture: high-performance parallel programming models (MPI, PGAS (UPC, OpenSHMEM, CAF, UPC++), and hybrid MPI + X (MPI + PGAS + OpenMP/Cilk)) on a high-performance and scalable unified communication runtime offering optimized point-to-point primitives, blocking and non-blocking collective algorithms, remote memory access, fault tolerance, active messages, scalable job startup, and introspection & analysis with OSU INAM, with support for modern networking technologies (InfiniBand, iWARP, RoCE, Omni-Path…), modern multi-/many-core architectures (Intel Xeon, Intel Xeon Phi, OpenPOWER, ARM…), and efficient intra-node communication (POSIX SHMEM, CMA, LiMIC, XPMEM…)
A minimal hybrid MPI + OpenSHMEM sketch is shown below.
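To make the hybrid model concrete, here is a heavily hedged sketch of a program that mixes OpenSHMEM one-sided puts with an MPI collective, which is the combination the unified runtime is designed to support. The init/finalize ordering shown is an assumption; the exact rules are in the MVAPICH2-X user guide:

    #include <mpi.h>
    #include <shmem.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        /* Assumed ordering: MPI first, then OpenSHMEM (check the user guide). */
        MPI_Init(&argc, &argv);
        shmem_init();

        int me = shmem_my_pe();
        int npes = shmem_n_pes();

        /* One-sided OpenSHMEM put into symmetric memory on the right neighbor */
        int *flag = shmem_malloc(sizeof(int));
        *flag = -1;
        shmem_barrier_all();
        shmem_int_put(flag, &me, 1, (me + 1) % npes);
        shmem_barrier_all();

        /* MPI collective in the same program */
        int sum = 0;
        MPI_Allreduce(&me, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
        if (me == 0)
            printf("npes=%d sum of PE ids=%d flag=%d\n", npes, sum, *flag);

        shmem_free(flag);
        shmem_finalize();
        MPI_Finalize();
        return 0;
    }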
• Released on 03/01/2019
• Major Features and Enhancements
– MPI Features
– Based on MVAPICH2 2.3.1
• OFA-IB-CH3, OFA-IB-RoCE, PSM-CH3, and PSM2-CH3 interfaces
– MPI (Advanced) Features
• Improved performance of large message communication
• Support for advanced co-operative (COOP) rendezvous protocols in SMP
channel
– OFA-IB-CH3 and OFA-IB-RoCE interfaces
• Support for RGET, RPUT, and COOP protocols for CMA and XPMEM
– OFA-IB-CH3 and OFA-IB-RoCE interfaces
• Support for load balanced and dynamic rendezvous protocol selection
– OFA-IB-CH3 and OFA-IB-RoCE interfaces
• Support for XPMEM-based MPI collective operations (Broadcast, Gather,
Scatter, Allgather)
– OFA-IB-CH3, OFA-IB-RoCE, PSM-CH3, and PSM2-CH3 interfaces
• Extend support for XPMEM-based MPI collective operations (Reduce and All-Reduce) for PSM-CH3 and PSM2-CH3 interfaces
MVAPICH2-X 2.3rc2
• Improved connection establishment for DC transport
– OFA-IB-CH3 interface
• Add improved Alltoallv algorithm for small messages
• OFA-IB-CH3, OFA-IB-RoCE, PSM-CH3, and PSM2-CH3 interfaces
– OpenSHMEM Features
• Support for XPMEM-based collective operations (Broadcast, Collect,
Reduce_all, Reduce, Scatter, Gather)
– UPC Features
• Support for XPMEM-based collective operations (Broadcast, Collect,
Scatter, Gather)
– UPC++ Features
• Support for XPMEM-based collective operations (Broadcast, Collect,
Scatter, Gather)
– Unified Runtime Features
• Based on MVAPICH2 2.3.1 (OFA-IB-CH3 interface). All the runtime
features enabled by default in OFA-IB-CH3 and OFA-IB-RoCE interface
of MVAPICH2 2.3.1 are available in MVAPICH2-X 2.3rc2
MVAPICH2-X Feature Table
• Feature tiers for InfiniBand (OFA-IB-CH3) and RoCE (OFA-RoCE-CH3): Basic, Basic-XPMEM, Intermediate, Advanced
• Features (availability per tier shown in the original table):
– Architecture-specific point-to-point and collective optimizations for x86, OpenPOWER, and ARM
– Optimized support for PGAS models (UPC, UPC++, OpenSHMEM, CAF) and hybrid MPI+PGAS models
– CMA-aware collectives
– Optimized asynchronous progress*
– InfiniBand hardware multicast-based MPI_Bcast*+
– OSU InfiniBand Network Analysis and Monitoring (INAM)*+
– XPMEM-based point-to-point and collectives
– Direct Connected (DC) transport protocol*+
– User mode Memory Registration (UMR)*+
– On Demand Paging (ODP)*+
– Core-Direct based collective offload*+
– SHARP-based collective offload*+
• * indicates disabled by default at runtime; use the appropriate environment variable from the MVAPICH2-X user guide to enable it
• + indicates features only tested with InfiniBand network
• Direct Connect (DC) Transport
• Co-operative Rendezvous Protocol
• Advanced All-reduce with SHARP
• CMA-based Collectives
• Asynchronous Progress
• XPMEM-based Reduction Collectives
• XPMEM-based Non-reduction Collectives
• Optimized Collective Communication and Advanced Transport Protocols
• PGAS and Hybrid MPI+PGAS Support
Overview of Some of the MVAPICH2-X Features
Minimizing Memory Footprint by Direct Connect (DC) Transport
• Constant connection cost (one QP for any peer)
• Full feature set (RDMA, atomics, etc.)
• Separate objects for send (DC Initiator) and receive (DC Target)
– DC Target identified by "DCT Number"
– Messages routed with (DCT Number, LID)
– Requires the same "DC Key" to enable communication
• Available since MVAPICH2-X 2.2a
(Figures: normalized execution time of NAMD (Apoa1, large data set) at 160-620 processes and connection memory (KB) for Alltoall at 80-640 processes, comparing RC, DC-Pool, UD, and XRC; RC connection memory grows from 10 KB to 97 KB while DC-Pool stays close to UD/XRC levels)
H. Subramoni, K. Hamidouche, A. Venkatesh, S. Chakraborty and D. K. Panda, Designing MPI Library with Dynamic Connected Transport (DCT) of InfiniBand: Early Experiences. IEEE International Supercomputing Conference (ISC '14)
A launch sketch for enabling DC is shown below.
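DC is selected purely at run time, so application code does not change. A hedged sketch of an Alltoall test of the kind the memory-footprint figure measures, with an assumed launch line that uses the DC variables listed later in this deck (Neuron and Alltoallv slides):

    /* Assumed launch:
     *   MV2_USE_DC=1 MV2_NUM_DC_TGT=64 MV2_SMALL_MSG_DC_POOL=96 \
     *   MV2_LARGE_MSG_DC_POOL=96 mpirun_rsh -np 640 -hostfile hosts ./a.out */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* One int to/from every peer: with RC this touches one connection per
         * peer, while DC shares a small pool of connection objects. */
        int *sbuf = malloc(size * sizeof(int));
        int *rbuf = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++) sbuf[i] = rank;

        MPI_Alltoall(sbuf, 1, MPI_INT, rbuf, 1, MPI_INT, MPI_COMM_WORLD);

        free(sbuf); free(rbuf);
        MPI_Finalize();
        return 0;
    }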
Impact of DC Transport Protocol on Neuron
• Up to 76% benefit over MVAPICH2 for Neuron using the Direct Connected transport protocol at scale (annotated improvements of 10%, 39%, and 76%)
– Neuron VERSION 7.6.2 master (f5a1284) 2018-08-15
• Numbers taken on bbpv2.epfl.ch
– Knights Landing nodes with 64 ppn
– ./x86_64/special -mpi -c stop_time=2000 -c is_split=1 parinit.hoc
– Used the "runtime" reported by the execution to measure performance
• Environment variables used: MV2_USE_DC=1, MV2_NUM_DC_TGT=64, MV2_SMALL_MSG_DC_POOL=96, MV2_LARGE_MSG_DC_POOL=96, MV2_USE_RDMA_CM=0
• The overhead of the RC protocol comes from connection establishment and communication
• Available from MVAPICH2-X 2.3rc2 onwards
(Figure: Neuron (YuEtAl2012) execution time at 512-4096 processes, MVAPICH2 vs. MVAPICH2-X)
Cooperative Rendezvous Protocols
• Use both sender and receiver CPUs to progress communication concurrently
• Dynamically select the rendezvous protocol based on the communication primitive and sender/receiver availability (load balancing)
• Up to 2x improvement in large-message latency and bandwidth
• Up to 19% improvement for Graph500 at 1536 processes
(Figures: execution time of Graph500, CoMD, and MiniGhost at 28-1536 processes for MVAPICH2, Open MPI, and the proposed design; annotated improvements of 19%, 16%, and 10%)
• Platform: 2x14-core Broadwell 2680 (2.4 GHz), Mellanox EDR ConnectX-5 (100 Gbps); baseline: MVAPICH2-X 2.3rc1 and Open MPI v3.1.0
Cooperative Rendezvous Protocols for Improved Performance and Overlap. S. Chakraborty, M. Bayatpour, J. Hashmi, H. Subramoni, and D. K. Panda, SC '18 (Best Student Paper Award Finalist)
• Available in MVAPICH2-X 2.3rc2
Advanced Allreduce Collective Designs Using SHArP and Multi-Leaders
• Socket-based design can reduce the communication latency by 23% and 40% on Broadwell + IB-EDR nodes
• Support is available since MVAPICH2-X 2.3b
(Figures: OSU micro-benchmark MPI_Allreduce latency on 16 nodes at 28 PPN for 4 bytes-4K, and HPCG communication latency at 56-448 processes (28 PPN), comparing MVAPICH2, Proposed-Socket-Based, and MVAPICH2+SHArP; lower is better)
M. Bayatpour, S. Chakraborty, H. Subramoni, X. Lu, and D. K. Panda, Scalable Reduction Collectives with Data Partitioning-based Multi-Leader Design, Supercomputing '17
Performance of MPI_Allreduce On Stampede2 (10,240 Processes)
• MPI_Allreduce latency with 32K bytes reduced by 2.4X
(Figures: OSU micro-benchmark MPI_Allreduce latency at 64 PPN for 4-4096 bytes and 8K-256K bytes, comparing MVAPICH2, MVAPICH2-OPT, and IMPI)
Optimized CMA-based Collectives for Large Messages
(Figures: MPI_Gather latency on KNL nodes at 64 PPN for 1K-4M byte messages on 2 nodes/128 procs, 4 nodes/256 procs, and 8 nodes/512 procs, comparing MVAPICH2-2.3a, Intel MPI 2017, OpenMPI 2.1.0, and MVAPICH2-X; MVAPICH2-X is roughly 2.5x to 17x better)
• Significant improvement over existing implementation for Scatter/Gather with 1MB messages (up to 4x on KNL, 2x on Broadwell, 14x on OpenPOWER)
• New two-level algorithms for better scalability
• Improved performance for other collectives (Bcast, Allgather, and Alltoall)
S. Chakraborty, H. Subramoni, and D. K. Panda, Contention Aware Kernel-Assisted MPI Collectives for Multi/Many-core Systems, IEEE Cluster '17, Best Paper Finalist
• Available since MVAPICH2-X 2.3b
Benefits of the New Asynchronous Progress Design: Broadwell + InfiniBand
• Up to 33% performance improvement in P3DFFT application with 448 processes
• Up to 29% performance improvement in HPL application with 896 processes
• Memory consumption = 69%
(Figures: P3DFFT time per loop at 112-448 processes (lower is better) and High Performance Linpack (HPL) performance in GFLOPS at 224-896 processes (higher is better), PPN=28, comparing MVAPICH2 Async, MVAPICH2 Default, IMPI 2019 Default, and IMPI 2019 Async; annotated improvements range from 8% to 33%)
A. Ruhela, H. Subramoni, S. Chakraborty, M. Bayatpour, P. Kousha, and D. K. Panda, "Efficient design for MPI Asynchronous Progress without Dedicated Resources", Parallel Computing 2019
• Available since MVAPICH2-X 2.3rc1
• Direct Connect (DC) Transport
• Co-operative Rendezvous Protocol
• Advanced All-reduce with SHARP
• CMA-based Collectives
• Asynchronous Progress
• XPMEM-based Reduction Collectives
• XPMEM-based Non-reduction Collectives
• Optimized Collective Communication and Advanced Transport Protocols
• PGAS and Hybrid MPI+PGAS Support
Overview of Some of the MVAPICH2-X Features
Shared Address Space (XPMEM)-based Collectives Design
• "Shared address space"-based true zero-copy reduction collective designs in MVAPICH2
• Offloads computation/communication to peer ranks in reduction collective operations
• Up to 4X improvement for 4MB Reduce and up to 1.8X improvement for 4MB Allreduce
(Figures: OSU_Allreduce and OSU_Reduce latency on Broadwell with 256 processes for 16K-4M byte messages, comparing MVAPICH2-2.3b, IMPI-2017v1.132, and MVAPICH2-X-2.3rc1; 1.8X better for Allreduce and 4X better for Reduce at 4MB)
J. Hashmi, S. Chakraborty, M. Bayatpour, H. Subramoni, and D. K. Panda, Designing Efficient Shared Address Space Reduction Collectives for Multi-/Many-cores, International Parallel & Distributed Processing Symposium (IPDPS '18), May 2018
• Available since MVAPICH2-X 2.3rc1
Reduction Collectives on IBM OpenPOWER
• Two POWER8 dual-socket nodes each with 20 ppn
• Up to 2X improvement for Allreduce and 3X improvement for Reduce at 4MB message
• Used osu_reduce and osu_allreduce from OSU Microbenchmarks v5.5
(Figures: MPI_Allreduce and MPI_Reduce latency for small (4K-256K) and large (128K-16M) message ranges, comparing MVAPICH2-2.3rc1, SpectrumMPI-10.1.0, OpenMPI-3.0.0, and MVAPICH2-XPMEM; annotated speedups of 2X-5X for MVAPICH2-XPMEM)
Application Level Benefits of XPMEM-based Designs
• Intel Xeon CPU E5-2687W v3 @ 3.10GHz (10-core, 2-socket)
• Up to 20% benefits over IMPI for CNTK DNN training using AllReduce
• Up to 27% benefits over IMPI and up to 15% improvement over MVAPICH2 for MiniAMR application kernel
(Figures: CNTK AlexNet training execution time (B.S=default, iteration=50, ppn=28) at 28-224 processes and MiniAMR (dual-socket, ppn=16) execution time at 16-256 processes, comparing Intel MPI, MVAPICH2, and MVAPICH2-XPMEM)
Impact of XPMEM-based Designs on MiniAMR
• Two POWER8 dual-socket nodes each with 20 ppn
• MiniAMR application execution time comparing MVAPICH2-2.3rc1 and the optimized All-Reduce design
– MiniAMR application for weak-scaling workload on up to three POWER8 nodes
– Up to 45% improvement over MVAPICH2-2.3rc1 in mesh-refinement time
(Figure: OpenPOWER weak scaling on 3 nodes, ppn=20; execution time at 10-60 processes, MVAPICH-2.3rc1 vs. MVAPICH2-XPMEM, with improvements of 36%-45%)
Performance of Non-Reduction Collectives with XPMEM
• 28 MPI processes on a single dual-socket Broadwell E5-2680v4, 2x14-core processor
• Used osu_bcast from OSU Microbenchmarks v5.5
(Figures: Broadcast and Gather latency for 4K-4M byte messages, comparing Intel MPI 2018, OpenMPI 3.0.1, MV2X-2.3rc1 (CMA Coll), and MV2X-2.3rc2 (XPMEM Coll); up to 5X over OpenMPI for Broadcast and 3X over OpenMPI for Gather)
Impact of Optimized Small Message MPI_Alltoallv Algorithm
• Optimized designs in MVAPICH2-X offer significantly improved performance for small message MPI_Alltoallv
• Up to 5X benefit over HPE-MPI using the optimized Alltoallv algorithm and the Direct Connected transport protocol
• Numbers taken on bbpv2.epfl.ch
– 96 KNL nodes with 64 ppn (6,144 processes)
– osu_alltoallv from OSU Micro Benchmarks
• Environment variables used: MV2_USE_DC=1, MV2_NUM_DC_TGT=64, MV2_SMALL_MSG_DC_POOL=96, MV2_LARGE_MSG_DC_POOL=96, MV2_USE_RDMA_CM=0
• Courtesy: Pramod Shivaji Kumbhar @EPFL
• Available from MVAPICH2-X 2.3rc2 onwards
(Figure: MPI_Alltoallv latency for 1-256 byte messages, MVAPICH2-X vs. HPE-MPI; ~5X better)
Application Level Performance with Graph500 and Sort
• Performance of Hybrid (MPI+OpenSHMEM) Graph500 design
– 8,192 processes: 2.4X improvement over MPI-CSR, 7.6X improvement over MPI-Simple
– 16,384 processes: 1.5X improvement over MPI-CSR, 13X improvement over MPI-Simple
(Figure: Graph500 execution time at 4K-16K processes for MPI-Simple, MPI-CSC, MPI-CSR, and Hybrid (MPI+OpenSHMEM))
• Performance of Hybrid (MPI+OpenSHMEM) Sort application
– 4,096 processes, 4 TB input size: MPI – 2408 sec (0.16 TB/min); Hybrid – 1172 sec (0.36 TB/min); 51% improvement over the MPI design
(Figure: Sort execution time from 500GB-512 to 4TB-4K (input data - no. of processes), MPI vs. Hybrid)
J. Jose, S. Potluri, K. Tomko and D. K. Panda, Designing Scalable Graph500 Benchmark with Hybrid MPI+OpenSHMEM Programming Models, International Supercomputing Conference (ISC '13), June 2013
J. Jose, K. Kandalla, M. Luo and D. K. Panda, Supporting Hybrid MPI and OpenSHMEM over InfiniBand: Design and Performance Evaluation, Int'l Conference on Parallel Processing (ICPP '12), September 2012
J. Jose, K. Kandalla, S. Potluri, J. Zhang and D. K. Panda, Optimizing Collective Communication in OpenSHMEM, Int'l Conference on Partitioned Global Address Space Programming Models (PGAS '13), October 2013
• Released on 08/12/2019
• Major Features and Enhancements
– Based on MVAPICH2-X 2.3
– New design based on Amazon EFA adapter's Scalable Reliable Datagram (SRD) transport protocol
– Support for XPMEM based intra-node communication for point-to-point and collectives
– Enhanced tuning for point-to-point and collective operations
– Targeted for AWS instances with Amazon Linux 2 AMI and EFA support
– Tested with c5n.18xlarge instance
MVAPICH2-X-AWS 2.3
Point-to-Point Performance
• Both UD and SRD show similar latency for small messages
• SRD shows higher message rate due to lack of software reliability overhead
• SRD is faster for large messages due to larger MTU size
Collective Performance: MPI Gatherv
• Up to 33% improvement with SRD compared to UD
• Root does not need to send explicit acks to non-root processes
• Non-roots can exit as soon as the message is sent (no need to wait for acks)
Collective Performance: MPI Allreduce
• Up to 18% improvement with SRD compared to UD
• Bidirectional communication pattern allows piggybacking of acks
• Modest improvement compared to asymmetric communication patterns
Application Performance
• Up to 10% performance improvement for MiniGhost on 8 nodes
• Up to 27% better performance with CloverLeaf on 8 nodes
(Figures: miniGhost and CloverLeaf execution time at 72 (2x36), 144 (4x36), and 288 (8x36) processes, comparing MVAPICH2-X (UD and SRD) and OpenMPI; 10% and 27.5% better respectively)
S. Chakraborty, S. Xu, H. Subramoni and D. K. Panda, Designing Scalable and High-Performance MPI Libraries on Amazon Elastic Fabric Adapter, Hot Interconnects, 2019
• XPMEM-based MPI Derived Datatype Designs
• Exploiting Hardware Tag Matching
MVAPICH2-X Upcoming Features
Efficient Zero-copy MPI Datatypes for Emerging Architectures
• New designs for efficient zero-copy based MPI derived datatype processing
• Efficient schemes mitigate datatype translation, packing, and exchange overheads
• Demonstrated benefits over prevalent MPI libraries for various application kernels
• To be available in the upcoming MVAPICH2-X release
(Figures: logscale latency of a 3D-stencil datatype kernel on Broadwell (2x14 core, 2-28 processes), the MILC datatype kernel on KNL 7250 in flat-quadrant mode (64-core), and the NAS-MG datatype kernel on OpenPOWER (20-core), comparing MVAPICH2X-2.3, IMPI 2018, IMPI 2019, and MVAPICH2X-Opt; MVAPICH2X-Opt is 5X, 19X, and 3X better respectively)
A short derived-datatype example is shown below.
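As background for what "derived datatype processing" means here, the sketch below describes one column of a row-major matrix with MPI_Type_vector and sends it as a single non-contiguous message; any packing is done by the MPI library, which is exactly the cost the zero-copy designs above target:

    #include <mpi.h>
    #include <stdio.h>

    #define ROWS 4
    #define COLS 6

    int main(int argc, char **argv) {
        int rank;
        double a[ROWS][COLS];
        MPI_Datatype column;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* ROWS blocks of one double, COLS doubles apart = one matrix column */
        MPI_Type_vector(ROWS, 1, COLS, MPI_DOUBLE, &column);
        MPI_Type_commit(&column);

        if (rank == 0) {
            for (int i = 0; i < ROWS; i++)
                for (int j = 0; j < COLS; j++)
                    a[i][j] = i * COLS + j;
            MPI_Send(&a[0][2], 1, column, 1, 0, MPI_COMM_WORLD);   /* column 2 */
        } else if (rank == 1) {
            MPI_Recv(&a[0][2], 1, column, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("a[3][2] received as %g\n", a[3][2]);
        }

        MPI_Type_free(&column);
        MPI_Finalize();
        return 0;
    }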
• Offloads the processing of point-to-point MPI messages from the host processor
to HCA
• Enables zero copy of MPI message transfers
– Messages are written directly to the user's buffer without extra buffering and copies
• Provides rendezvous progress offload to HCA
– Increases the overlap of communication and computation
Hardware Tag Matching Support
Impact of Zero Copy MPI Message Passing using HW Tag Matching
(Figures: osu_latency in the eager message range (0-16K bytes) and the rendezvous message range (32K-4M bytes), MVAPICH2 vs. MVAPICH2+HW-TM)
• Removal of intermediate buffering/copies can lead to up to 35% performance improvement in the latency of medium messages
Impact of Rendezvous Offload using HW Tag Matching
(Figures: osu_iscatterv latency for 16K-512K byte messages at 640 and 1,280 processes, MVAPICH2 vs. MVAPICH2+HW-TM; 1.7X and 1.8X better)
• The increased overlap can lead to 1.8X performance improvement in the total latency of osu_iscatterv
A non-blocking scatter sketch of the pattern this benchmark exercises is shown below.
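The pattern osu_iscatterv measures is a non-blocking scatter overlapped with computation; hardware tag matching lets the HCA progress the transfer during the compute phase. A hedged sketch (CHUNK is an arbitrary illustrative size, not the benchmark's value):

    #include <mpi.h>
    #include <stdlib.h>

    #define CHUNK 65536                     /* bytes per receiver */

    static void do_compute(void) { /* work overlapped with the scatter */ }

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        char *sendbuf = NULL;
        int *counts = NULL, *displs = NULL;
        char *recvbuf = malloc(CHUNK);
        MPI_Request req;

        if (rank == 0) {                    /* counts/displs only matter at root */
            sendbuf = malloc((size_t)size * CHUNK);
            counts  = malloc(size * sizeof(int));
            displs  = malloc(size * sizeof(int));
            for (int i = 0; i < size; i++) { counts[i] = CHUNK; displs[i] = i * CHUNK; }
        }

        MPI_Iscatterv(sendbuf, counts, displs, MPI_CHAR,
                      recvbuf, CHUNK, MPI_CHAR, 0, MPI_COMM_WORLD, &req);
        do_compute();                       /* overlap window */
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        if (rank == 0) { free(sendbuf); free(counts); free(displs); }
        free(recvbuf);
        MPI_Finalize();
        return 0;
    }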
MVAPICH2 Software Family (overview table repeated; see the earlier listing)
GPU-Aware (CUDA-Aware) MPI Library: MVAPICH2-GPU
• Standard MPI interfaces used for unified data movement
– At sender: MPI_Send(s_devbuf, size, …);
– At receiver: MPI_Recv(r_devbuf, size, …);
– Device-buffer handling happens inside MVAPICH2
• Takes advantage of Unified Virtual Addressing (>= CUDA 4.0)
• Overlaps data movement from GPU with RDMA transfers
• High performance and high productivity
A slightly fuller sketch is shown below.
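Filling in the fragment above, a hedged sketch of a CUDA-aware point-to-point transfer: device pointers are handed directly to MPI_Send/MPI_Recv and MVAPICH2-GDR moves the data (e.g. over GPUDirect RDMA). Compiling against the CUDA runtime and enabling CUDA support at launch (e.g. MV2_USE_CUDA=1, as in the HOOMD-blue settings later in this deck) are assumed:

    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <stdio.h>

    #define N (1 << 20)                     /* 1M floats, illustrative size */

    int main(int argc, char **argv) {
        int rank;
        float *devbuf;                      /* the s_devbuf / r_devbuf of the slide */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        cudaSetDevice(0);                   /* or map rank to a local GPU */
        cudaMalloc((void **)&devbuf, N * sizeof(float));
        cudaMemset(devbuf, 0, N * sizeof(float));

        if (rank == 0) {
            MPI_Send(devbuf, N, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(devbuf, N, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d floats into GPU memory\n", N);
        }

        cudaFree(devbuf);
        MPI_Finalize();
        return 0;
    }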
CUDA-Aware MPI: MVAPICH2-GDR 1.8-2.3.2 Releases
• Support for MPI communication from NVIDIA GPU device memory
• High performance RDMA-based inter-node point-to-point
communication (GPU-GPU, GPU-Host and Host-GPU)
• High performance intra-node point-to-point communication for multi-
GPU adapters/node (GPU-GPU, GPU-Host and Host-GPU)
• Taking advantage of CUDA IPC (available since CUDA 4.1) in intra-node
communication for multiple GPU adapters/node
• Optimized and tuned collectives for GPU device buffers
• MPI datatype support for point-to-point and collective communication
from GPU device buffers
• Unified memory
• Released on 08/08/2019
• Major Features and Enhancements
– Based on MVAPICH2 2.3.1
– Support for CUDA 10.1
– Support for PGI 19.x
– Enhanced intra-node and inter-node point-to-point performance
– Enhanced MPI_Allreduce performance for DGX-2 system
– Enhanced GPU communication support in MPI_THREAD_MULTIPLE mode
– Enhanced performance of datatype support for GPU-resident data
• Zero-copy transfer when P2P access is available between GPUs through NVLink/PCIe
– Enhanced GPU-based point-to-point and collective tuning
• OpenPOWER systems such as ORNL Summit and LLNL Sierra, ABCI system @AIST, Owens and Pitzer systems @Ohio Supercomputer Center
– Scaled Allreduce to 24,576 Volta GPUs on Summit
– Enhanced intra-node and inter-node point-to-point performance for DGX-2 and IBM POWER8 and IBM POWER9 systems
– Enhanced Allreduce performance for DGX-2 and IBM POWER8/POWER9 systems
– Enhanced small message performance for CUDA-Aware MPI_Put and MPI_Get
– Flexible support for running TensorFlow (Horovod) jobs
MVAPICH2-GDR 2.3.2
Optimized MVAPICH2-GDR Design
(Figures: GPU-GPU inter-node latency (1.85 us, ~11X better), inter-node bandwidth, and inter-node bi-bandwidth (~9x-10x better) for MV2 without GDR vs. MV2-GDR-2.3, message sizes up to 8K bytes)
• Platform: Intel Haswell (E5-2687W @ 3.10 GHz) node with 20 cores, NVIDIA Volta V100 GPU, Mellanox ConnectX-4 EDR HCA, CUDA 9.0, Mellanox OFED 4.0 with GPU-Direct-RDMA
Device-to-Device Performance on OpenPOWER (NVLink2 + Volta)
(Figures: intra-node and inter-node latency for small and large messages, plus intra-node and inter-node bandwidth, intra-socket vs. inter-socket)
• Intra-node latency: 5.36 us (without GDRCopy); intra-node bandwidth: 70.4 GB/sec for 128MB (via NVLink2)
• Inter-node latency: 5.66 us (without GDRCopy); inter-node bandwidth: 23.7 GB/sec (2-port EDR)
• Platform: OpenPOWER (POWER9-ppc64le) nodes equipped with a dual-socket CPU, 4 Volta V100 GPUs, and 2-port EDR InfiniBand interconnect
• Available since MVAPICH2-GDR 2.3a
Application-Level Evaluation (HOOMD-blue)
(Figures: average time steps per second (TPS) for 64K and 256K particles at 4-32 processes, MV2 vs. MV2+GDR; 2X improvement)
• Platform: Wilkes (Intel Ivy Bridge + NVIDIA Tesla K20c + Mellanox Connect-IB)
• HOOMD-blue version 1.0.5
• GDRCOPY enabled: MV2_USE_CUDA=1 MV2_IBA_HCA=mlx5_0 MV2_IBA_EAGER_THRESHOLD=32768 MV2_VBUF_TOTAL_SIZE=32768 MV2_USE_GPUDIRECT_LOOPBACK_LIMIT=32768 MV2_USE_GPUDIRECT_GDRCOPY=1 MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=16384
Application-Level Evaluation (COSMO) and Weather Forecasting in Switzerland
(Figures: normalized execution time on the Wilkes GPU cluster (4-32 GPUs) and the CSCS GPU cluster (16-96 GPUs) for Default, Callback-based, and Event-based designs)
• 2X improvement on 32 GPU nodes
• 30% improvement on 96 GPU nodes (8 GPUs/node)
• On-going collaboration with CSCS and MeteoSwiss (Switzerland) in co-designing MV2-GDR and the COSMO application
• COSMO model: http://www2.cosmo-model.org/content/tasks/operational/meteoSwiss/
C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee, H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non-Contiguous Data Movement Processing on Modern GPU-enabled Systems, IPDPS '16
• Deep Learning frameworks are a different game
altogether
– Unusually large message sizes (order of megabytes)
– Most communication based on GPU buffers
• Existing State-of-the-art
– cuDNN, cuBLAS, NCCL --> scale-up performance
– NCCL2, CUDA-Aware MPI --> scale-out performance
• For small and medium message sizes only!
• Proposed: Can we co-design the MPI runtime (MVAPICH2-
GDR) and the DL framework (Caffe) to achieve both?
– Efficient Overlap of Computation and Communication
– Efficient Large-Message Communication (Reductions)
– What application co-designs are needed to exploit
communication-runtime co-designs?
Deep Learning: New Challenges for MPI Runtimes
(Figure: positioning of cuDNN, cuBLAS, NCCL, NCCL2, gRPC, Hadoop, MPI, and the proposed co-designs along scale-up and scale-out performance axes)
A. A. Awan, K. Hamidouche, J. M. Hashmi, and D. K. Panda, S-Caffe: Co-designing MPI Runtimes and Caffe for Scalable Deep Learning on Modern GPU Clusters. In Proceedings of the 22nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '17)
• Efficient Allreduce is crucial for Horovod’s
overall training performance
– Both MPI and NCCL designs are available
• We have evaluated Horovod extensively
and compared across a wide range of
designs using gRPC and gRPC extensions
• MVAPICH2-GDR achieved up to 90%
scaling efficiency for ResNet-50 Training
on 64 Pascal GPUs
Scalable TensorFlow using Horovod, MPI, and NCCL
Awan et al., “Scalable Distributed DNN Training using TensorFlow and CUDA-Aware MPI:
Characterization, Designs, and Performance Evaluation”, CCGrid ‘19.
https://arxiv.org/abs/1810.11112
MVAPICH2-GDR vs. NCCL2 – Allreduce Operation
• Optimized designs in MVAPICH2-GDR 2.3 offer better/comparable performance for most cases
• MPI_Allreduce (MVAPICH2-GDR) vs. ncclAllreduce (NCCL2) on 16 GPUs
(Figures: latency for 4 bytes-64K (~3X better) and 128K-256M (~1.2X better), MVAPICH2-GDR vs. NCCL2)
• Platform: Intel Xeon (Broadwell) nodes equipped with a dual-socket CPU, 1 K-80 GPU, and EDR InfiniBand interconnect
MVAPICH2-GDR vs. NCCL2 – Allreduce Operation (DGX-2)
• Optimized designs in upcoming MVAPICH2-GDR offer better/comparable performance for most cases
• MPI_Allreduce (MVAPICH2-GDR) vs. ncclAllreduce (NCCL2) on 1 DGX-2 node (16 Volta GPUs)
(Figures: latency for 8 bytes-128K (~5.8X better) and 256K-256M (~2.5X better), MVAPICH2-GDR-Next vs. NCCL-2.4)
• Platform: NVIDIA DGX-2 system (16 NVIDIA Volta GPUs connected with NVSwitch), CUDA 9.2
MVAPICH2-GDR: Enhanced MPI_Allreduce at Scale
• Optimized designs in upcoming MVAPICH2-GDR offer better performance for most cases
• MPI_Allreduce (MVAPICH2-GDR) vs. ncclAllreduce (NCCL2) up to 1,536 GPUs
(Figures: latency on 1,536 GPUs for 4 bytes-16K (1.6X better), bandwidth on 1,536 GPUs for 32M-256M (1.7X better), and 128MB-message bandwidth scaling from 24 to 1,536 GPUs (1.7X better), comparing MVAPICH2-GDR-2.3.2, NCCL 2.4, SpectrumMPI 10.2.0.11, and OpenMPI 4.0.1)
• Platform: dual-socket IBM POWER9 CPU, 6 NVIDIA Volta V100 GPUs, and 2-port InfiniBand EDR interconnect
Distributed Training with TensorFlow and MVAPICH2-GDR
• ResNet-50 training using the TensorFlow benchmark on 1 DGX-2 node (16 Volta GPUs)
(Figures: images per second (9% higher for MVAPICH2-GDR-2.3.2 than NCCL-2.4) and scaling efficiency (%) on 1-16 GPUs)
• Scaling efficiency = (actual throughput / ideal throughput at scale) x 100%
• Platform: NVIDIA DGX-2 system (16 NVIDIA Volta GPUs connected with NVSwitch), CUDA 9.2
Distributed Training with TensorFlow and MVAPICH2-GDR
• ResNet-50 training using the TensorFlow benchmark on Summit – 1,536 Volta GPUs
• ImageNet-1k has 1,281,167 (~1.2 million) images
• Time/epoch ≈ 3.6 seconds; total time for 90 epochs ≈ 332 seconds ≈ 5.5 minutes
• MVAPICH2-GDR reaches ~0.35 million images per second for ImageNet-1k
(Figure: images per second (thousands) on 1-1,536 GPUs, NCCL-2.4 vs. MVAPICH2-GDR-2.3.2; errors were observed for NCCL2 beyond 96 GPUs)
• Platform: the Summit supercomputer (#1 on Top500.org) – 6 NVIDIA Volta GPUs per node connected with NVLink, CUDA 9.2
• Scalable Host-based Collectives
• Enhanced Derived Datatype
• Integrated Collective Support with SHArP from GPU Buffers
• Optimization for PyTorch and MXNET
MVAPICH2-GDR Upcoming Features for HPC and DL
Scalable Host-based Collectives on OpenPOWER (Intra-node Reduce & Alltoall, Nodes=1, PPN=20)
• Up to 5X and 3X performance improvement by MVAPICH2 for small and large messages respectively
(Figures: Reduce and Alltoall latency for small (4 bytes-4K) and large (8K-1M) messages, comparing MVAPICH2-GDR, SpectrumMPI-10.1.0.2, and OpenMPI-3.0.0; annotated speedups range from 1.2X to 5.2X)
MVAPICH2-GDR: Enhanced Derived Datatype
• Kernel-based and GDRCOPY-based one-shot packing for inter-socket and inter-node communication
• Zero-copy (packing-free) for GPUs with peer-to-peer direct access over PCIe/NVLink
(Figures: GPU-based DDTBench mimicking the MILC communication kernel (speedup vs. problem size) on an NVIDIA DGX-2 system (NVIDIA Volta GPUs connected with NVSwitch, CUDA 9.2), and the communication kernel of the COSMO model (https://github.com/cosunae/HaloExchangeBenchmarks) on Cray CS-Storm (16 NVIDIA Tesla K80 GPUs per node, CUDA 8.0), comparing OpenMPI 4.0.0, MVAPICH2-GDR 2.3.1, and MVAPICH2-GDR-Next; annotated improvements of 3.4X and 15X)
MVAPICH User Group Meeting (MUG) 2019 77Network Based Computing Laboratory
MVAPICH2 Software Family
Requirements and corresponding library:
– MPI with support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE (v1/v2): MVAPICH2
– Optimized support for the Microsoft Azure platform with InfiniBand: MVAPICH2-Azure
– Advanced MPI features/support (UMR, ODP, DC, Core-Direct, SHArP, XPMEM) and OSU INAM (InfiniBand Network Monitoring and Analysis): MVAPICH2-X
– Advanced MPI features (SRD and XPMEM) with support for Amazon Elastic Fabric Adapter (EFA): MVAPICH2-X-AWS
– Optimized MPI for clusters with NVIDIA GPUs and for GPU-enabled Deep Learning applications: MVAPICH2-GDR
– Energy-aware MPI with support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE (v1/v2): MVAPICH2-EA
– MPI energy monitoring tool: OEMT
– InfiniBand network analysis and monitoring: OSU INAM
– Microbenchmarks for measuring MPI and PGAS performance: OMB
MVAPICH User Group Meeting (MUG) 2019 78Network Based Computing Laboratory
Overview of OSU INAM
• A network monitoring and analysis tool that is capable of analyzing traffic on the InfiniBand network with inputs from the MPI runtime
– http://mvapich.cse.ohio-state.edu/tools/osu-inam/
• Monitors IB clusters in real time by querying various subnet management entities and gathering input from the MPI runtimes
• Capability to analyze and profile node-level, job-level and process-level activities for MPI communication
– Point-to-Point, Collectives and RMA
• Ability to filter data based on the type of counters using a drop-down list
• Remotely monitor various metrics of MPI processes at user specified granularity
• "Job Page" to display jobs in ascending/descending order of various performance metrics in conjunction with MVAPICH2-X
• Visualize the data transfer happening in a “live” or “historical” fashion for entire network, job or set of nodes
• OSU INAM 0.9.4 released on 11/10/2018
– Enhanced performance for fabric discovery using optimized OpenMP-based multi-threaded designs
– Ability to gather InfiniBand performance counters at sub-second granularity for very large (>2000 nodes) clusters
– Redesign database layout to reduce database size
– Enhanced fault tolerance for database operations
• Thanks to Trey Dockendorf @ OSC for the feedback
– OpenMP-based multi-threaded designs to handle database purge, read, and insert operations simultaneously
– Improved database purging time by using bulk deletes
– Tune database timeouts to handle very long database operations
– Improved debugging support by introducing several debugging levels
MVAPICH User Group Meeting (MUG) 2019 79Network Based Computing Laboratory
OSU INAM Features
• Show network topology of large clusters
• Visualize traffic pattern on different links
• Quickly identify congested links/links in error state
• See the history unfold – play back historical state of the network
Comet@SDSC --- Clustered View (1,879 nodes, 212 switches, 4,377 network links)
Finding Routes Between Nodes
MVAPICH User Group Meeting (MUG) 2019 80Network Based Computing Laboratory
OSU INAM Features (Cont.)
Visualizing a Job (5 Nodes)
• Job level view
• Show different network metrics (load, error, etc.) for any live job
• Play back historical data for completed jobs to identify bottlenecks
• Node level view - details per process or per node
• CPU utilization for each rank/node
• Bytes sent/received for MPI operations (pt-to-pt, collective, RMA)
• Network metrics (e.g. XmitDiscard, RcvError) per rank/node
Estimated Process Level Link Utilization
• Estimated Link Utilization view
• Classify data flowing over a network link at different granularity (job level and process level) in conjunction with MVAPICH2-X 2.2rc1
More details in the Tutorial/Demo session tomorrow
MVAPICH User Group Meeting (MUG) 2019 81Network Based Computing Laboratory
OSU Microbenchmarks
• Available since 2004
• Suite of microbenchmarks to study communication performance of various programming models
• Benchmarks available for the following programming models
– Message Passing Interface (MPI)
– Partitioned Global Address Space (PGAS)
• Unified Parallel C (UPC)
• Unified Parallel C++ (UPC++)
• OpenSHMEM
• Benchmarks available for multiple accelerator based architectures
– Compute Unified Device Architecture (CUDA)
– OpenACC Application Program Interface
• Part of various national resource procurement suites like NERSC-8 / Trinity Benchmarks
• Continuing to add support for newer primitives and features
• Please visit the following link for more information
– http://mvapich.cse.ohio-state.edu/benchmarks/
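For context, the fragment below sketches the ping-pong pattern that a latency benchmark such as osu_latency measures; the actual OMB code adds warm-up iterations, configurable message sizes, and options for accelerator buffers, and the benchmarks are launched on two processes with the usual MPI launcher (e.g. mpirun_rsh or mpiexec). This is an illustrative sketch only.

  #include <mpi.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(int argc, char **argv) {
      const int iters = 1000, size = 8;   /* 8-byte messages */
      int rank;
      char *buf;
      double start, end;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      buf = (char *)malloc(size);

      MPI_Barrier(MPI_COMM_WORLD);
      start = MPI_Wtime();
      for (int i = 0; i < iters; i++) {
          if (rank == 0) {          /* rank 0 sends, then waits for the echo */
              MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
              MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          } else if (rank == 1) {   /* rank 1 echoes each message back */
              MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
              MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
          }
      }
      end = MPI_Wtime();

      if (rank == 0)
          printf("Average one-way latency: %.2f us\n",
                 (end - start) * 1e6 / (2.0 * iters));

      free(buf);
      MPI_Finalize();
      return 0;
  }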
MVAPICH User Group Meeting (MUG) 2019 82Network Based Computing Laboratory
Applications-Level Tuning: Compilation of Best Practices
• The MPI runtime has many parameters
• Tuning a set of parameters can help you extract higher performance
• Compiled a list of such contributions through the MVAPICH Website
– http://mvapich.cse.ohio-state.edu/best_practices/
• Initial list of applications
– Amber
– HoomDBlue
– HPCG
– Lulesh
– MILC
– Neuron
– SMG2000
– Cloverleaf
– SPEC (LAMMPS, POP2, TERA_TF, WRF2)
• Soliciting additional contributions, send your results to mvapich-help at cse.ohio-state.edu.
• We will link these results with credits to you.
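Most MVAPICH2 parameters are set through MV2_* environment variables at job launch. In addition, the MPI Tools Information Interface (MPI_T, standardized in MPI 3.0) lets a program discover which control variables a particular build exposes. The sketch below uses only standard MPI_T calls; the number and names of the variables it reports depend on the MPI library build.

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv) {
      int provided, num_cvars;

      MPI_Init(&argc, &argv);
      /* MPI_T has its own init/finalize, separate from MPI_Init/MPI_Finalize */
      MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
      MPI_T_cvar_get_num(&num_cvars);
      printf("This MPI runtime exposes %d control variables\n", num_cvars);

      for (int i = 0; i < num_cvars; i++) {
          char name[256], desc[256];
          int name_len = sizeof(name), desc_len = sizeof(desc);
          int verbosity, binding, scope;
          MPI_Datatype dtype;
          MPI_T_enum enumtype;

          /* Query the name and description of each control variable */
          MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &dtype,
                              &enumtype, desc, &desc_len, &binding, &scope);
          printf("  CVAR %d: %s\n", i, name);
      }

      MPI_T_finalize();
      MPI_Finalize();
      return 0;
  }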
MVAPICH User Group Meeting (MUG) 2019 83Network Based Computing Laboratory
MVAPICH2 – Plans for Exascale
• Performance and Memory scalability toward 1-10M cores
• Hybrid programming (MPI + OpenSHMEM, MPI + UPC, MPI + CAF …)
• MPI + Task*
• Enhanced Optimization for GPU Support and Accelerators
• Taking advantage of advanced features of Mellanox InfiniBand
• Tag Matching*
• Adapter Memory*
• Enhanced communication schemes for upcoming architectures
• Intel Optane*
• BlueField*
• CAPI*
• Extended topology-aware collectives
• Extended Energy-aware designs and Virtualization Support
• Extended Support for MPI Tools Interface (as in MPI 3.0)
• Extended FT support
• Support for * features will be available in future MVAPICH2 Releases
MVAPICH User Group Meeting (MUG) 2019 84Network Based Computing Laboratory
Commercial Support for MVAPICH2, HiBD, and HiDL Libraries
• Supported through X-ScaleSolutions (http://x-scalesolutions.com)
• Benefits:
– Help and guidance with installation of the library
– Platform-specific optimizations and tuning
– Timely support for operational issues encountered with the library
– Web portal interface to submit issues and track their progress
– Advanced debugging techniques
– Application-specific optimizations and tuning
– Obtaining guidelines on best practices
– Periodic information on major fixes and updates
– Information on major releases
– Help with upgrading to the latest release
– Flexible Service Level Agreements
• Support provided to Lawrence Livermore National Laboratory (LLNL) for the last two years
MVAPICH User Group Meeting (MUG) 2019 85Network Based Computing Laboratory
Silver ISV Member for the OpenPOWER Consortium + Products
• X-ScaleSolutions has joined the OpenPOWER Consortium as a silver ISV member
• Provides flexibility:
– To have the MVAPICH2, HiDL, and HiBD libraries integrated into the OpenPOWER software stack
– To be a part of the OpenPOWER ecosystem
– To participate with different vendors in the bidding, installation, and deployment process
• Introduced two new integrated products with support for OpenPOWER systems
(Presented yesterday at the OpenPOWER North America Summit)
– X-ScaleHPC
– X-ScaleAI
– Send an e-mail to contactus@x-scalesolutions.com for a free trial!
MVAPICH User Group Meeting (MUG) 2019 86Network Based Computing Laboratory
Funding Acknowledgments
Funding Support by
Equipment Support by
MVAPICH User Group Meeting (MUG) 2019 87Network Based Computing Laboratory
Personnel Acknowledgments
Current Students (Graduate)
– A. Awan (Ph.D.)
– M. Bayatpour (Ph.D.)
– C.-H. Chu (Ph.D.)
– J. Hashmi (Ph.D.)
– A. Jain (Ph.D.)
– K. S. Kandadi (M.S.)
Past Students
– A. Augustine (M.S.)
– P. Balaji (Ph.D.)
– R. Biswas (M.S.)
– S. Bhagvat (M.S.)
– A. Bhat (M.S.)
– D. Buntinas (Ph.D.)
– L. Chai (Ph.D.)
– B. Chandrasekharan (M.S.)
– S. Chakraborthy (Ph.D.)
– N. Dandapanthula (M.S.)
– V. Dhanraj (M.S.)
– R. Rajachandrasekar (Ph.D.)
– D. Shankar (Ph.D.)
– G. Santhanaraman (Ph.D.)
– A. Singh (Ph.D.)
– J. Sridhar (M.S.)
– S. Sur (Ph.D.)
– H. Subramoni (Ph.D.)
– K. Vaidyanathan (Ph.D.)
– A. Vishnu (Ph.D.)
– J. Wu (Ph.D.)
– W. Yu (Ph.D.)
– J. Zhang (Ph.D.)
Past Research Scientist
– K. Hamidouche
– S. Sur
– X. Lu
Past Post-Docs
– D. Banerjee
– X. Besseron
– H.-W. Jin
– T. Gangadharappa (M.S.)
– K. Gopalakrishnan (M.S.)
– W. Huang (Ph.D.)
– W. Jiang (M.S.)
– J. Jose (Ph.D.)
– S. Kini (M.S.)
– M. Koop (Ph.D.)
– K. Kulkarni (M.S.)
– R. Kumar (M.S.)
– S. Krishnamoorthy (M.S.)
– K. Kandalla (Ph.D.)
– M. Li (Ph.D.)
– P. Lai (M.S.)
– J. Liu (Ph.D.)
– M. Luo (Ph.D.)
– A. Mamidala (Ph.D.)
– G. Marsh (M.S.)
– V. Meshram (M.S.)
– A. Moody (M.S.)
– S. Naravula (Ph.D.)
– R. Noronha (Ph.D.)
– X. Ouyang (Ph.D.)
– S. Pai (M.S.)
– S. Potluri (Ph.D.)
– Kamal Raj (M.S.)
– K. S. Khorassani (Ph.D.)
– P. Kousha (Ph.D.)
– A. Quentin (Ph.D.)
– B. Ramesh (M. S.)
– S. Xu (M.S.)
– J. Lin
– M. Luo
– E. Mancini
Past Programmers
– D. Bureddy
– J. Perkins
Current Research Specialist
– J. Smith
– S. Marcarelli
– J. Vienne
– H. Wang
Current Post-doc
– M. S. Ghazimeersaeed
– A. Ruhela
– K. Manian
Current Students (Undergraduate)
– V. Gangal (B.S.)
– N. Sarkauskas (B.S.)
Past Research Specialist
– M. Arnold
Current Research Scientist
– H. Subramoni
– Q. Zhou (Ph.D.)
MVAPICH User Group Meeting (MUG) 2019 88Network Based Computing Laboratory
Thank You!
Network-Based Computing Laboratory
http://nowlab.cse.ohio-state.edu/
The MVAPICH2 Project
http://mvapich.cse.ohio-state.edu/

  • 1. Overview of the MVAPICH Project: Latest Status and Future Roadmap MVAPICH2 User Group (MUG) Meeting by Dhabaleswar K. (DK) Panda The Ohio State University E-mail: panda@cse.ohio-state.edu http://www.cse.ohio-state.edu/~panda
  • 2. MVAPICH User Group Meeting (MUG) 2019 2Network Based Computing Laboratory High-End Computing (HEC): PetaFlop to ExaFlop Expected to have an ExaFlop system in 2020-2021! 100 PFlops in 2017 1 EFlops in 2020-2021? 143 PFlops in 2018
  • 3. MVAPICH User Group Meeting (MUG) 2019 3Network Based Computing Laboratory Supporting Programming Models for Multi-Petaflop and Exaflop Systems: Challenges Programming Models MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenMP, OpenACC, Cilk, Hadoop (MapReduce), Spark (RDD, DAG), etc. Application Kernels/Applications (HPC and DL) Networking Technologies (InfiniBand, 40/100/200GigE, Aries, and Omni-Path) Multi-/Many-core Architectures Accelerators (GPU and FPGA) Middleware Co-Design Opportunities and Challenges across Various Layers Performance Scalability Resilience Communication Library or Runtime for Programming Models Point-to-point Communication Collective Communication Energy- Awareness Synchronization and Locks I/O and File Systems Fault Tolerance
  • 4. MVAPICH User Group Meeting (MUG) 2019 4Network Based Computing Laboratory • Scalability for million to billion processors – Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided) – Scalable job start-up – Low memory footprint • Scalable Collective communication – Offload – Non-blocking – Topology-aware • Balancing intra-node and inter-node communication for next generation nodes (128-1024 cores) – Multiple end-points per node • Support for efficient multi-threading • Integrated Support for Accelerators (GPGPUs and FPGAs) • Fault-tolerance/resiliency • QoS support for communication and I/O • Support for Hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + OpenSHMEM, MPI+UPC++, CAF, …) • Virtualization • Energy-Awareness Designing (MPI+X) at Exascale
  • 5. MVAPICH User Group Meeting (MUG) 2019 5Network Based Computing Laboratory Overview of the MVAPICH2 Project • High Performance open-source MPI Library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE) – MVAPICH (MPI-1), MVAPICH2 (MPI-2.2 and MPI-3.1), Started in 2001, First version available in 2002 – MVAPICH2-X (MPI + PGAS), Available since 2011 – Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), Available since 2014 – Support for Virtualization (MVAPICH2-Virt), Available since 2015 – Support for Energy-Awareness (MVAPICH2-EA), Available since 2015 – Support for InfiniBand Network Analysis and Monitoring (OSU INAM) since 2015 – Used by more than 3,025 organizations in 89 countries – More than 564,000 (> 0.5 million) downloads from the OSU site directly – Empowering many TOP500 clusters (Nov ‘18 ranking) • 3rd, 10,649,600-core (Sunway TaihuLight) at National Supercomputing Center in Wuxi, China • 5th, 448, 448 cores (Frontera) at TACC • 8th, 391,680 cores (ABCI) in Japan • 15th, 570,020 cores (Neurion) in South Korea and many others – Available with software stacks of many vendors and Linux Distros (RedHat, SuSE, and OpenHPC) – http://mvapich.cse.ohio-state.edu • Empowering Top500 systems for over a decade Partner in the TACC Frontera System
  • 6. MVAPICH User Group Meeting (MUG) 2019 6Network Based Computing Laboratory Timeline Jan-04 Jan-10 Nov-12 MVAPICH2-X OMB MVAPICH2 MVAPICH Oct-02 Nov-04 Apr-15 EOL MVAPICH2-GDR MVAPICH2-MIC MVAPICH Project Timeline Jul-15 MVAPICH2-Virt Aug-14 Aug-15 Sep-15 MVAPICH2-EA OSU-INAM
  • 7. MVAPICH User Group Meeting (MUG) 2019 7Network Based Computing Laboratory 0 100000 200000 300000 400000 500000 600000 Sep-04 Jan-05 May-05 Sep-05 Jan-06 May-06 Sep-06 Jan-07 May-07 Sep-07 Jan-08 May-08 Sep-08 Jan-09 May-09 Sep-09 Jan-10 May-10 Sep-10 Jan-11 May-11 Sep-11 Jan-12 May-12 Sep-12 Jan-13 May-13 Sep-13 Jan-14 May-14 Sep-14 Jan-15 May-15 Sep-15 Jan-16 May-16 Sep-16 Jan-17 May-17 Sep-17 Jan-18 May-18 Sep-18 Jan-19 May-19 NumberofDownloads Timeline MV0.9.4 MV20.9.0 MV20.9.8 MV21.0 MV1.0 MV21.0.3 MV1.1 MV21.4 MV21.5 MV21.6 MV21.7 MV21.8 MV21.9 MV2-GDR2.0b MV2-MIC2.0 MV2-GDR2.3.2 MV2-X2.3rc2 MV2Virt2.2 MV22.3.2 OSUINAM0.9.3 MV2-Azure2.3.2MV2-AWS2.3 MVAPICH2 Release Timeline and Downloads
  • 8. MVAPICH User Group Meeting (MUG) 2019 8Network Based Computing Laboratory Architecture of MVAPICH2 Software Family (for HPC and DL) High Performance Parallel Programming Models Message Passing Interface (MPI) PGAS (UPC, OpenSHMEM, CAF, UPC++) Hybrid --- MPI + X (MPI + PGAS + OpenMP/Cilk) High Performance and Scalable Communication Runtime Diverse APIs and Mechanisms Point-to- point Primitives Collectives Algorithms Energy- Awareness Remote Memory Access I/O and File Systems Fault Tolerance Virtualization Active Messages Job Startup Introspectio n & Analysis Support for Modern Networking Technology (InfiniBand, iWARP, RoCE, Omni-Path) Support for Modern Multi-/Many-core Architectures (Intel-Xeon, OpenPower, Xeon-Phi, ARM, NVIDIA GPGPU) Transport Protocols Modern Features RC XRC UD DC SHARP2* ODP SR- IOV Multi Rail Transport Mechanisms Shared Memory CMA IVSHMEM Modern Features MCDRAM* NVLink CAPI* * Upcoming XPMEM
  • 9. MVAPICH User Group Meeting (MUG) 2019 9Network Based Computing Laboratory • Research is done for exploring new designs • Designs are first presented to conference/journal publications • Best performing designs are incorporated into the codebase • Rigorous Q&A procedure before making a release – Exhaustive unit testing – Various test procedures on diverse range of platforms and interconnects – Test 19 different benchmarks and applications including, but not limited to • OMB, IMB, MPICH Test Suite, Intel Test Suite, NAS, ScalaPak, and SPEC – Spend about 18,000 core hours per commit – Performance regression and tuning – Applications-based evaluation – Evaluation on large-scale systems • All versions (alpha, beta, RC1 and RC2) go through the above testing Strong Procedure for Design, Development and Release
  • 10. MVAPICH User Group Meeting (MUG) 2019 10Network Based Computing Laboratory MVAPICH2 Software Family Requirements Library MPI with Support for InfiniBand, Omni-Path, Ethernet/iWARP and, RoCE (v1/v2) MVAPICH2 Optimized Support for Microsoft Azure Platform with InfiniBand MVAPICH2-Azure Advanced MPI features/support (UMR, ODP, DC, Core-Direct, SHArP, XPMEM), OSU INAM (InfiniBand Network Monitoring and Analysis), MVAPICH2-X Advanced MPI features (SRD and XPMEM) with support for Amazon Elastic Fabric Adapter (EFA) MVAPICH2-X-AWS Optimized MPI for clusters with NVIDIA GPUs and for GPU-enabled Deep Learning Applications MVAPICH2-GDR Energy-aware MPI with Support for InfiniBand, Omni-Path, Ethernet/iWARP and, RoCE (v1/v2) MVAPICH2-EA MPI Energy Monitoring Tool OEMT InfiniBand Network Analysis and Monitoring OSU INAM Microbenchmarks for Measuring MPI and PGAS Performance OMB
  • 11. MVAPICH User Group Meeting (MUG) 2019 11Network Based Computing Laboratory • Released on 08/09/2019 • Major Features and Enhancements – Improved performance for inter-node communication – Improved performance for Gather, Reduce, and Allreduce with cyclic hostfile – - Thanks to X-ScaleSolutions for the patch – Improved performance for intra-node point-to-point communication – Add support for Mellanox HDR adapters – Add support for Cascade lake systems – Add support for Microsoft Azure platform • Enhanced point-to-point and collective tuning for Microsoft Azure – Add support for new NUMA-aware hybrid binding policy – Add support for AMD EPYC Rome architecture – Improved multi-rail selection logic – Enhanced heterogeneity detection logic – Enhanced point-to-point and collective tuning for AMD EPYC Rome, Frontera@TACC, Mayer@Sandia, Pitzer@OSC, Summit@ORNL, Lassen@LLNL, and Sierra@LLNL systems – Add multiple PVARs and CVARs for point-to-point and collective operations MVAPICH2 2.3.2
  • 12. MVAPICH User Group Meeting (MUG) 2019 12Network Based Computing Laboratory • Support for highly-efficient inter-node and intra-node communication • Scalable Start-up • Optimized Collectives using SHArP and Multi-Leaders • Support for OpenPOWER and ARM architectures • Performance Engineering with MPI-T • Application Scalability and Best Practices Highlights of MVAPICH2 2.3.2-GA Release
  • 13. MVAPICH User Group Meeting (MUG) 2019 13Network Based Computing Laboratory One-way Latency: MPI over IB with MVAPICH2 0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 Small Message Latency Message Size (bytes) Latency(us) 1.11 1.19 1.01 1.15 1.04 1.1 TrueScale-QDR - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch ConnectX-3-FDR - 2.8 GHz Deca-core (IvyBridge) Intel PCI Gen3 with IB switch ConnectIB-Dual FDR - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch ConnectX-4-EDR - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB Switch Omni-Path - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with Omni-Path switch ConnectX-6-HDR - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB Switch 0 20 40 60 80 100 120 TrueScale-QDR ConnectX-3-FDR ConnectIB-DualFDR ConnectX-4-EDR Omni-Path ConnectX-6 HDR Large Message Latency Message Size (bytes) Latency(us)
  • 14. MVAPICH User Group Meeting (MUG) 2019 14Network Based Computing Laboratory Bandwidth: MPI over IB with MVAPICH2 0 5000 10000 15000 20000 25000 30000 4 16 64 256 1024 4K 16K 64K 256K 1M Unidirectional Bandwidth Bandwidth(MBytes/sec) Message Size (bytes) 12,590 3,373 6,356 12,083 12,366 24,532 0 10000 20000 30000 40000 50000 60000 4 16 64 256 1024 4K 16K 64K 256K 1M TrueScale-QDR ConnectX-3-FDR ConnectIB-DualFDR ConnectX-4-EDR Omni-Path ConnectX-6 HDR Bidirectional Bandwidth Bandwidth(MBytes/sec) Message Size (bytes) 21,227 12,161 21,983 6,228 48,027 24,136 TrueScale-QDR - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch ConnectX-3-FDR - 2.8 GHz Deca-core (IvyBridge) Intel PCI Gen3 with IB switch ConnectIB-Dual FDR - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch ConnectX-4-EDR - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB Switch Omni-Path - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with Omni-Path switch ConnectX-6-HDR - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB Switch
  • 15. MVAPICH User Group Meeting (MUG) 2019 15Network Based Computing Laboratory • Near-constant MPI and OpenSHMEM initialization time at any process count • 10x and 30x improvement in startup time of MPI and OpenSHMEM respectively at 16,384 processes • Memory consumption reduced for remote endpoint information by O(processes per node) • 1GB Memory saved per node with 1M processes and 16 processes per node Towards High Performance and Scalable Startup at Exascale P M O Job Startup Performance MemoryRequiredtoStore EndpointInformation a b c d eP M PGAS – State of the art MPI – State of the art O PGAS/MPI – Optimized PMIX_Ring PMIX_Ibarrier PMIX_Iallgather Shmem based PMI b c d e a On-demand Connection On-demand Connection Management for OpenSHMEM and OpenSHMEM+MPI. S. Chakraborty, H. Subramoni, J. Perkins, A. A. Awan, and D K Panda, 20th International Workshop on High-level Parallel Programming Models and Supportive Environments (HIPS ’15) PMI Extensions for Scalable MPI Startup. S. Chakraborty, H. Subramoni, A. Moody, J. Perkins, M. Arnold, and D K Panda, Proceedings of the 21st European MPI Users' Group Meeting (EuroMPI/Asia ’14) Non-blocking PMI Extensions for Fast MPI Startup. S. Chakraborty, H. Subramoni, A. Moody, A. Venkatesh, J. Perkins, and D K Panda, 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid ’15) SHMEMPMI – Shared Memory based PMI for Improved Performance and Scalability. S. Chakraborty, H. Subramoni, J. Perkins, and D K Panda, 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid ’16) a b c d e
  • 16. MVAPICH User Group Meeting (MUG) 2019 16Network Based Computing Laboratory Startup Performance on KNL + Omni-Path 0 5 10 15 20 25 64 128 256 512 1K 2K 4K 8K 16K 32K 64K TimeTaken(Seconds) Number of Processes MPI_Init & Hello World - Oakforest-PACS Hello World (MVAPICH2-2.3a) MPI_Init (MVAPICH2-2.3a) • MPI_Init takes 22 seconds on 229,376 processes on 3,584 KNL nodes (Stampede2 – Full scale) • 8.8 times faster than Intel MPI at 128K processes (Courtesy: TACC) • At 64K processes, MPI_Init and Hello World takes 5.8s and 21s respectively (Oakforest-PACS) • All numbers reported with 64 processes per node 5.8s 21s 22s New designs available since MVAPICH2-2.3a and as patch for SLURM-15.08.8 and SLURM-16.05.1
  • 17. MVAPICH User Group Meeting (MUG) 2019 17Network Based Computing Laboratory Startup Performance on TACC Frontera • MPI_Init takes 3.9 seconds on 57,344 processes on 1,024 nodes • All numbers reported with 56 processes per node 4.5s 3.9s New designs available in MVAPICH2-2.3.2 0 500 1000 1500 2000 2500 3000 3500 4000 4500 5000 56 112 224 448 896 1792 3584 7168 14336 28672 57344 TimeTaken(Milliseconds) Number of Processes MPI_Init on Frontera Intel MPI 2019 MVAPICH2 2.3.2
  • 18. MVAPICH User Group Meeting (MUG) 2019 18Network Based Computing Laboratory 0 1000 2000 3000 4000 5000 6000 1 2 4 8 16 32 64 128 ExecutionTime(ms) Number of nodes (56 ppn) osu_init MVAPICH2.3.2 MVAPICH2.3.1 0 1000 2000 3000 4000 5000 6000 7000 8000 1 2 4 8 16 32 64 128 ExecutionTime(ms) Number of nodes (56 ppn) osu_hello MVAPICH2.3.2 MVAPICH2.3.1 Startup Performance on TACC Frontera MVAPICH2 2.3.1 vs 2.3.2 • MVAPICH2 2.3.2 significantly improves performance on top of MVAPICH2 2.3.1 • All numbers reported with 56 processes per node
  • 19. MVAPICH User Group Meeting (MUG) 2019 19Network Based Computing Laboratory 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 (4,28) (8,28) (16,28)Latency(seconds) (Number of Nodes, PPN) MVAPICH2 MVAPICH2-SHArP Benefits of SHARP Allreduce at Application Level 12% Avg DDOT Allreduce time of HPCG SHARP support available since MVAPICH2 2.3a Parameter Description Default MV2_ENABLE_SHARP=1 Enables SHARP-based collectives Disabled --enable-sharp Configure flag to enable SHARP Disabled • Refer to Running Collectives with Hardware based SHARP support section of MVAPICH2 user guide for more information • http://mvapich.cse.ohio-state.edu/static/media/mvapich/mvapich2-2.3-userguide.html#x1-990006.26
  • 20. MVAPICH User Group Meeting (MUG) 2019 20Network Based Computing Laboratory 0 0.5 1 0 1 2 4 8 16 32 64 128 256 512 1K 2K Latency(us) MVAPICH2-2.3.1 SpectrumMPI-2019.02.07 Intra-node Point-to-Point Performance on OpenPower Platform: Two nodes of OpenPOWER (POWER9-ppc64le) CPU using Mellanox EDR (MT4121) HCA Intra-Socket Small Message Latency Intra-Socket Large Message Latency Intra-Socket Bi-directional BandwidthIntra-Socket Bandwidth 0.22us 0 20 40 60 80 100 4K 8K 16K 32K 64K 128K 256K 512K 1M 2M Latency(us) MVAPICH2-2.3.1 SpectrumMPI-2019.02.07 0 10000 20000 30000 40000 1 8 64 512 4K 32K 256K 2M Bandwidth(MB/s) MVAPICH2-2.3.1 SpectrumMPI-2019.02.07 0 20000 40000 1 8 64 512 4K 32K 256K 2M Bi-Bandwidth(MB/s) MVAPICH2-2.3.1 SpectrumMPI-2019.02.07
  • 21. MVAPICH User Group Meeting (MUG) 2019 21Network Based Computing Laboratory 0 0.2 0.4 0.6 0.8 1 1.2 0 1 2 4 8 16 32 64 128 256 512 1K 2K 4K Latency(us) MVAPICH2-2.3 Intra-node Point-to-point Performance on ARM Cortex-A72 0 2000 4000 6000 8000 10000 1 2 4 8 16 32 64 128 256 512 1K 2K 4K 8K 32K 64K 128K 256K 512K 1M 2M 4M Bandwidth(MB/s) MVAPICH2-2.3 0 2000 4000 6000 8000 10000 12000 14000 16000 1 4 16 64 256 1K 4K 16K 64K 256K 1M 4M BidirectionalBandwidth MVAPICH2-2.3 Platform: ARM Cortex A72 (aarch64) processor with 64 cores dual-socket CPU. Each socket contains 32 cores. Small Message Latency Large Message Latency Bi-directional BandwidthBandwidth 0.27 micro-second (1 bytes) 0 100 200 300 400 500 600 700 8K 16K 32K 64K 128K 256K 512K 1M 2M 4M Latency(us) MVAPICH2-2.3
  • 22. MVAPICH User Group Meeting (MUG) 2019 22Network Based Computing Laboratory ● Enhance existing support for MPI_T in MVAPICH2 to expose a richer set of performance and control variables ● Get and display MPI Performance Variables (PVARs) made available by the runtime in TAU ● Control the runtime’s behavior via MPI Control Variables (CVARs) ● Introduced support for new MPI_T based CVARs to MVAPICH2 ○ MPIR_CVAR_MAX_INLINE_MSG_SZ, MPIR_CVAR_VBUF_POOL_SIZE, MPIR_CVAR_VBUF_SECONDARY_POOL_SIZE ● TAU enhanced with support for setting MPI_T CVARs in a non-interactive mode for uninstrumented applications ● S. Ramesh, A. Maheo, S. Shende, A. Malony, H. Subramoni, and D. K. Panda, MPI Performance Engineering with the MPI Tool Interface: the Integration of MVAPICH and TAU, EuroMPI/USA ‘17, Best Paper Finalist ● More details in Sameer Shende’s talk today and poster presentations Performance Engineering Applications using MVAPICH2 and TAU VBUF usage without CVAR based tuning as displayed by ParaProf VBUF usage with CVAR based tuning as displayed by ParaProf
  • 23. MVAPICH User Group Meeting (MUG) 2019 23Network Based Computing Laboratory Application Scalability on Skylake and KNL MiniFE (1300x1300x1300 ~ 910 GB) Runtime parameters: MV2_SMPI_LENGTH_QUEUE=524288 PSM2_MQ_RNDV_SHM_THRESH=128K PSM2_MQ_RNDV_HFI_THRESH=128K 0 20 40 60 80 100 120 140 2048 4096 8192 ExecutionTime(s) No. of Processes (KNL: 64ppn) MVAPICH2 0 10 20 30 40 50 60 2048 4096 8192 ExecutionTime(s) No. of Processes (Skylake: 48ppn) MVAPICH2 0 200 400 600 800 1000 1200 48 96 192 384 768 No. of Processes (Skylake: 48ppn) MVAPICH2 NEURON (YuEtAl2012) Courtesy: Mahidhar Tatineni @SDSC, Dong Ju (DJ) Choi@SDSC, and Samuel Khuvis@OSC ---- Testbed: TACC Stampede2 using MVAPICH2-2.3b 0 500 1000 1500 2000 2500 3000 3500 64 128 256 512 1024 2048 4096 No. of Processes (KNL: 64ppn) MVAPICH2 0 200 400 600 800 1000 1200 1400 68 136 272 544 1088 2176 4352 No. of Processes (KNL: 68ppn) MVAPICH2 0 500 1000 1500 2000 48 96 192 384 768 1536 3072 No. of Processes (Skylake: 48ppn) MVAPICH2 Cloverleaf (bm64) MPI+OpenMP, NUM_OMP_THREADS = 2
  • 24. MVAPICH User Group Meeting (MUG) 2019 24Network Based Computing Laboratory • Released on 08/16/2019 • Major Features and Enhancements – Based on MVAPICH2-2.3.2 – Enhanced tuning for point-to-point and collective operations – Targeted for Azure HB & HC virtual machine instances – Flexibility for 'one-click' deployment – Tested with Azure HB & HC VM instances MVAPICH2-Azure 2.3.2
  • 25. MVAPICH User Group Meeting (MUG) 2019 25Network Based Computing Laboratory Performance of Radix 0 5 10 15 20 25 30 16(1x16) 32(1x32) 44(1X44) 88(2X44) 176(4X44) 352(8x44) ExecutionTime(Seconds) Number of Processes (Nodes X PPN) Total Execution Time on HC (Lower is better) MVAPICH2-X HPCx 3x faster 0 5 10 15 20 25 60(1X60) 120(2X60) 240(4X60) ExecutionTime(Seconds) Number of Processes (Nodes X PPN) Total Execution Time on HB (Lower is better) MVAPICH2-X HPCx 38% faster
  • 26. MVAPICH User Group Meeting (MUG) 2019 26Network Based Computing Laboratory Performance of FDS (HC) 0 1 2 3 4 5 6 7 8 9 10 16(1x16) 32(1x32) 44(1X44) ExecutionTime(Seconds) Processes (Nodes X PPN) Single Node Total Execution Time (Lower is better) MVAPICH2-X HPCx 0 100 200 300 400 500 600 88(2X44) 176(4X44) ExecutionTime(Seconds) Processes (Nodes X PPN) Multi-Node Total Execution Time (Lower is better) MVAPICH2-X HPCx Part of input parameter: MESH IJK=5,5,5, XB=-1.0,0.0,-1.0,0.0,0.0,1.0, MULT_ID='mesh array' 1.11x better
  • 27. MVAPICH User Group Meeting (MUG) 2019 27Network Based Computing Laboratory • Integration of SHARP2 and associated Collective Optimizations • Communication optimizations on upcoming architectures – Intel Cooper Lake – AMD Rome – ARM • Dynamic and Adaptive Communication Protocols MVAPICH2 Upcoming Features
  • 28. MVAPICH User Group Meeting (MUG) 2019 28Network Based Computing Laboratory Dynamic and Adaptive MPI Point-to-point Communication Protocols Process on Node 1 Process on Node 2 Eager Threshold for Example Communication Pattern with Different Designs 0 1 2 3 4 5 6 7 Default 16 KB 16 KB 16 KB 16 KB 0 1 2 3 4 5 6 7 Manually Tuned 128 KB 128 KB 128 KB 128 KB 0 1 2 3 4 5 6 7 Dynamic + Adaptive 32 KB 64 KB 128 KB 32 KB H. Subramoni, S. Chakraborty, D. K. Panda, Designing Dynamic & Adaptive MPI Point-to-Point Communication Protocols for Efficient Overlap of Computation & Communication, ISC'17 - Best Paper 0 100 200 300 400 500 600 128 256 512 1K WallClockTime(seconds) Number of Processes Execution Time of Amber Default Threshold=17K Threshold=64K Threshold=128K Dynamic Threshold 0 1 2 3 4 5 6 7 8 9 128 256 512 1K RelativeMemoryConsumption Number of Processes Relative Memory Consumption of Amber Default Threshold=17K Threshold=64K Threshold=128K Dynamic Threshold Default Poor overlap; Low memory requirement Low Performance; High Productivity Manually Tuned Good overlap; High memory requirement High Performance; Low Productivity Dynamic + Adaptive Good overlap; Optimal memory requirement High Performance; High Productivity Process Pair Eager Threshold (KB) 0 – 4 32 1 – 5 64 2 – 6 128 3 – 7 32 Desired Eager Threshold
  • 29. MVAPICH User Group Meeting (MUG) 2019 29Network Based Computing Laboratory MVAPICH2 Software Family Requirements Library MPI with Support for InfiniBand, Omni-Path, Ethernet/iWARP and, RoCE (v1/v2) MVAPICH2 Optimized Support for Microsoft Azure Platform with InfiniBand MVAPICH2-Azure Advanced MPI features/support (UMR, ODP, DC, Core-Direct, SHArP, XPMEM), OSU INAM (InfiniBand Network Monitoring and Analysis), MVAPICH2-X Advanced MPI features (SRD and XPMEM) with support for Amazon Elastic Fabric Adapter (EFA) MVAPICH2-X-AWS Optimized MPI for clusters with NVIDIA GPUs and for GPU-enabled Deep Learning Applications MVAPICH2-GDR Energy-aware MPI with Support for InfiniBand, Omni-Path, Ethernet/iWARP and, RoCE (v1/v2) MVAPICH2-EA MPI Energy Monitoring Tool OEMT InfiniBand Network Analysis and Monitoring OSU INAM Microbenchmarks for Measuring MPI and PGAS Performance OMB
  • 30. MVAPICH User Group Meeting (MUG) 2019 30Network Based Computing Laboratory MVAPICH2-X for Hybrid MPI + PGAS Applications • Current Model – Separate Runtimes for OpenSHMEM/UPC/UPC++/CAF and MPI – Possible deadlock if both runtimes are not progressed – Consumes more network resource • Unified communication runtime for MPI, UPC, UPC++, OpenSHMEM, CAF – Available with since 2012 (starting with MVAPICH2-X 1.9) – http://mvapich.cse.ohio-state.edu High Performance and Scalable Unified Communication Runtime Support for Modern Multi-/Many-core Architectures (Intel-Xeon, Intel-Xeon Phi, OpenPOWER, ARM…) High Performance Parallel Programming Models MPI Message Passing Interface PGAS (UPC, OpenSHMEM, CAF, UPC++) Hybrid --- MPI + X (MPI + PGAS + OpenMP/Cilk) Diverse APIs and Mechanisms Optimized Point-to- point Primitives Collectives Algorithms (Blocking and Non- Blocking) Remote Memory Access Fault Tolerance Active Messages Scalable Job Startup Introspection & Analysis with OSU INAM Support for Modern Networking Technologies (InfiniBand, iWARP, RoCE, Omni-Path…) Support for Efficient Intra-node Communication (POSIX SHMEM, CMA, LiMIC, XPMEM…)
  • 31. MVAPICH User Group Meeting (MUG) 2019 31Network Based Computing Laboratory • Released on 03/01/2019 • Major Features and Enhancements – MPI Features – Based on MVAPICH2 2.3.1 • OFA-IB-CH3, OFA-IB-RoCE, PSM-CH3, and PSM2-CH3 interfaces – MPI (Advanced) Features • Improved performance of large message communication • Support for advanced co-operative (COOP) rendezvous protocols in SMP channel – OFA-IB-CH3 and OFA-IB-RoCE interfaces • Support for RGET, RPUT, and COOP protocols for CMA and XPMEM – OFA-IB-CH3 and OFA-IB-RoCE interfaces • Support for load balanced and dynamic rendezvous protocol selection – OFA-IB-CH3 and OFA-IB-RoCE interfaces • Support for XPMEM-based MPI collective operations (Broadcast, Gather, Scatter, Allgather) – OFA-IB-CH3, OFA-IB-RoCE, PSM-CH3, and PSM2-CH3 interfaces • Extend support for XPMEM-based MPI collective operations (Reduce and All-Reduce for PSM-CH3 and PSM2-CH3 interfaces MVAPICH2-X 2.3rc2 • Improved connection establishment for DC transport – OFA-IB-CH3 interface • Add improved Alltoallv algorithm for small messages • OFA-IB-CH3, OFA-IB-RoCE, PSM-CH3, and PSM2-CH3 interfaces – OpenSHMEM Features • Support for XPMEM-based collective operations (Broadcast, Collect, Reduce_all, Reduce, Scatter, Gather) – UPC Features • Support for XPMEM-based collective operations (Broadcast, Collect, Scatter, Gather) – UPC++ Features • Support for XPMEM-based collective operations (Broadcast, Collect, Scatter, Gather) – Unified Runtime Features • Based on MVAPICH2 2.3.1 (OFA-IB-CH3 interface). All the runtime features enabled by default in OFA-IB-CH3 and OFA-IB-RoCE interface of MVAPICH2 2.3.1 are available in MVAPICH2-X 2.3rc2
  • 32. MVAPICH User Group Meeting (MUG) 2019 32Network Based Computing Laboratory MVAPICH2-X Feature Table • * indicates disabled by default at runtime. Must use appropriate environment variable in MVAPICH2-X user guide to enable it. • + indicates features only tested with InfiniBand network Features for InfiniBand (OFA-IB-CH3) and RoCE (OFA-RoCE-CH3) Basic Basic-XPMEM Intermediate Advanced Architecture Specific Point-to-point and Collective Optimizations for x86, OpenPOWER, and ARM Optimized Support for PGAS models (UPC, UPC++, OpenSHMEM, CAF) and Hybrid MPI+PGAS models CMA-Aware Collectives Optimized Asynchronous Progress* InfiniBand Hardware Multicast-based MPI_Bcast*+ OSU InfiniBand Network Analysis and Monitoring (INAM)*+ XPMEM-based Point-to-Point and Collectives Direct Connected (DC) Transport Protocol*+ User mode Memory Registration (UMR)*+ On Demand Paging (ODP)*+ Core-direct based Collective Offload*+ SHARP-based Collective Offload*+
  • 33. MVAPICH User Group Meeting (MUG) 2019 33Network Based Computing Laboratory • Direct Connect (DC) Transport • Co-operative Rendezvous Protocol • Advanced All-reduce with SHARP • CMA-based Collectives • Asynchronous Progress • XPMEM-based Reduction Collectives • XPMEM-based Non-reduction Collectives • Optimized Collective Communication and Advanced Transport Protocols • PGAS and Hybrid MPI+PGAS Support Overview of Some of the MVAPICH2-X Features
  • 34. MVAPICH User Group Meeting (MUG) 2019 34Network Based Computing Laboratory Minimizing Memory Footprint by Direct Connect (DC) Transport Node 0 P1 P0 Node 1 P3 P2 Node 3 P7 P6 Node 2 P5 P4IB Network • Constant connection cost (One QP for any peer) • Full Feature Set (RDMA, Atomics etc) • Separate objects for send (DC Initiator) and receive (DC Target) – DC Target identified by “DCT Number” – Messages routed with (DCT Number, LID) – Requires same “DC Key” to enable communication • Available since MVAPICH2-X 2.2a 0 0.2 0.4 0.6 0.8 1 1.2 160 320 620 NormalizedExecutionTime Number of Processes NAMD - Apoa1: Large data set RC DC-Pool UD XRC 10 22 47 97 1 1 1 2 10 10 10 10 1 1 3 5 1 10 100 80 160 320 640 ConnectionMemory(KB) Number of Processes Memory Footprint for Alltoall RC DC-Pool UD XRC H. Subramoni, K. Hamidouche, A. Venkatesh, S. Chakraborty and D. K. Panda, Designing MPI Library with Dynamic Connected Transport (DCT) of InfiniBand : Early Experiences. IEEE International Supercomputing Conference (ISC ’14)
  • 35. MVAPICH User Group Meeting (MUG) 2019 35Network Based Computing Laboratory Impact of DC Transport Protocol on Neuron • Up to 76% benefits over MVAPICH2 for Neuron using Direct Connected transport protocol at scale – VERSION 7.6.2 master (f5a1284) 2018-08-15 • Numbers taken on bbpv2.epfl.ch – Knights Landing nodes with 64 ppn – ./x86_64/special -mpi -c stop_time=2000 -c is_split=1 parinit.hoc – Used “runtime” reported by execution to measure performance • Environment variables used – MV2_USE_DC=1 – MV2_NUM_DC_TGT=64 – MV2_SMALL_MSG_DC_POOL=96 – MV2_LARGE_MSG_DC_POOL=96 – MV2_USE_RDMA_CM=0 0 200 400 600 800 1000 1200 1400 1600 512 1024 2048 4096 ExecutionTime(s) No. of Processes MVAPICH2 MVAPICH2-X Neuron with YuEtAl2012 10% 76% 39% Overhead of RC protocol for connection establishment and communication Available from MVAPICH2-X 2.3rc2 onwards
  • 36. MVAPICH User Group Meeting (MUG) 2019 36Network Based Computing Laboratory Cooperative Rendezvous Protocols Platform: 2x14 core Broadwell 2680 (2.4 GHz) Mellanox EDR ConnectX-5 (100 GBps) Baseline: MVAPICH2X-2.3rc1, Open MPI v3.1.0 Cooperative Rendezvous Protocols for Improved Performance and Overlap S. Chakraborty, M. Bayatpour, J Hashmi, H. Subramoni, and DK Panda, SC ‘18 (Best Student Paper Award Finalist) 19% 16% 10% • Use both sender and receiver CPUs to progress communication concurrently • Dynamically select rendezvous protocol based on communication primitives and sender/receiver availability (load balancing) • Up to 2x improvement in large message latency and bandwidth • Up to 19% improvement for Graph500 at 1536 processes -100 100 300 500 700 900 1100 1300 1500 28 56 112 224 448 896 1536 Time(Seconds) Number of Processes Graph500 MVAPICH2 Open MPI Proposed 0 100 200 300 400 500 28 56 112 224 448 896 1536 Time(Seconds) Number of Processes CoMD MVAPICH2 Open MPI Proposed 0 20 40 60 80 100 28 56 112 224 448 896 1536 Time(Seconds) Number of Processes MiniGhost MVAPICH2 Open MPI Proposed Available in MVAPICH2-X 2.3rc2
  • 37. MVAPICH User Group Meeting (MUG) 2019 37Network Based Computing Laboratory Advanced Allreduce Collective Designs Using SHArP and Multi-Leaders • Socket-based design can reduce the communication latency by 23% and 40% on Broadwell + IB-EDR nodes • Support is available since MVAPICH2-X 2.3b HPCG (28 PPN) 0 0.1 0.2 0.3 0.4 0.5 0.6 56 224 448 CommunicationLatency(Seconds) Number of Processes MVAPICH2 Proposed-Socket-Based MVAPICH2+SHArP 0 10 20 30 40 50 60 4 8 16 32 64 128 256 512 1K 2K 4K Latency(us) Message Size (Byte) MVAPICH2 Proposed-Socket-Based MVAPICH2+SHArP OSU Micro Benchmark (16 Nodes, 28 PPN) 23% 40% Lowerisbetter M. Bayatpour, S. Chakraborty, H. Subramoni, X. Lu, and D. K. Panda, Scalable Reduction Collectives with Data Partitioning-based Multi- Leader Design, Supercomputing '17.
  • 38. MVAPICH User Group Meeting (MUG) 2019 38Network Based Computing Laboratory Performance of MPI_Allreduce on Stampede2 (10,240 Processes) [Figures: OSU micro-benchmark MPI_Allreduce latency (us) vs. message size (4 B-4 KB and 8 KB-256 KB) at 64 PPN, comparing MVAPICH2, MVAPICH2-OPT, and IMPI; 2.4X improvement] • MPI_Allreduce latency with 32K bytes reduced by 2.4X
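For context, results like these are typically gathered with an averaged, timed Allreduce loop; the sketch below is a simplified stand-in for an osu_allreduce-style measurement (the 32 KB message size and 1,000 iterations are illustrative assumptions, not the benchmark's actual defaults).

```c
/* Simplified osu_allreduce-style timing loop (sketch).
 * The 32 KB message size and 1000 iterations are illustrative assumptions. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int count = 32 * 1024 / sizeof(float);   /* 32 KB of floats */
    const int iters = 1000;
    float *sendbuf = calloc(count, sizeof(float));
    float *recvbuf = calloc(count, sizeof(float));

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++)
        MPI_Allreduce(sendbuf, recvbuf, count, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("Avg MPI_Allreduce latency: %.2f us\n", (t1 - t0) * 1e6 / iters);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```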
  • 39. MVAPICH User Group Meeting (MUG) 2019 39Network Based Computing Laboratory Optimized CMA-based Collectives for Large Messages [Figures: MPI_Gather latency (us) vs. message size (1 KB-4 MB) on KNL nodes at 64 PPN, for 2 nodes/128 procs, 4 nodes/256 procs, and 8 nodes/512 procs, comparing MVAPICH2-2.3a, Intel MPI 2017, OpenMPI 2.1.0, and MVAPICH2-X; ~2.5x, ~3.2x, ~4x, and up to ~17x better] • Significant improvement over the existing implementation for Scatter/Gather with 1MB messages (up to 4x on KNL, 2x on Broadwell, 14x on OpenPOWER) • New two-level algorithms for better scalability • Improved performance for other collectives (Bcast, Allgather, and Alltoall) S. Chakraborty, H. Subramoni, and D. K. Panda, Contention Aware Kernel-Assisted MPI Collectives for Multi/Many-core Systems, IEEE Cluster ’17, Best Paper Finalist Available since MVAPICH2-X 2.3b
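CMA-based collectives rely on the Linux cross-memory-attach system calls to copy data directly between process address spaces on a node, avoiding intermediate shared-memory staging. The sketch below illustrates that underlying copy primitive only; it is not MVAPICH2's internal code, and the pid/address exchange and 1 MB payload are illustrative assumptions (the copy also requires same-node ranks and ptrace permission, and at least two ranks).

```c
/* Sketch of the kernel-assisted copy that CMA-based collectives build on:
 * process_vm_readv() copies directly from a peer's address space with a
 * single kernel crossing and no shared-memory staging buffer. */
#define _GNU_SOURCE
#include <mpi.h>
#include <sys/uio.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    enum { N = 1 << 20 };            /* 1 MB payload */
    static char buf[N];

    if (rank == 1) {
        /* Expose this rank's pid and buffer address to rank 0 */
        uint64_t info[2] = { (uint64_t)getpid(), (uint64_t)(uintptr_t)buf };
        MPI_Send(info, 2, MPI_UINT64_T, 0, 0, MPI_COMM_WORLD);
        MPI_Barrier(MPI_COMM_WORLD);          /* keep buf alive until copied */
    } else if (rank == 0) {
        uint64_t info[2];
        MPI_Recv(info, 2, MPI_UINT64_T, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        struct iovec local  = { .iov_base = buf, .iov_len = N };
        struct iovec remote = { .iov_base = (void *)(uintptr_t)info[1], .iov_len = N };
        ssize_t n = process_vm_readv((pid_t)info[0], &local, 1, &remote, 1, 0);
        printf("copied %zd bytes directly from rank 1\n", n);
        MPI_Barrier(MPI_COMM_WORLD);
    } else {
        MPI_Barrier(MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```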
  • 40. MVAPICH User Group Meeting (MUG) 2019 40Network Based Computing Laboratory Benefits of the New Asynchronous Progress Design: Broadwell + InfiniBand [Figures: P3DFFT time per loop (seconds) vs. number of processes (112-448) and High Performance Linpack (HPL) performance (GFLOPS) vs. number of processes (224-896), PPN=28, comparing MVAPICH2 Async, MVAPICH2 Default, and IMPI 2019 Default/Async; lower is better for P3DFFT, higher is better for HPL; additional annotated gains of 26%, 27%, 12%, and 8%; Memory Consumption = 69%] Up to 33% performance improvement in the P3DFFT application with 448 processes Up to 29% performance improvement in the HPL application with 896 processes A. Ruhela, H. Subramoni, S. Chakraborty, M. Bayatpour, P. Kousha, and D. K. Panda, “Efficient Design for MPI Asynchronous Progress without Dedicated Resources”, Parallel Computing 2019 Available since MVAPICH2-X 2.3rc1
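Applications gain from asynchronous progress when they post non-blocking transfers and then compute while the data is in flight, letting the library advance the rendezvous handshake and data movement without further MPI calls. Below is a minimal sketch of that pattern; the message size and the compute placeholder are illustrative assumptions and it is not taken from P3DFFT or HPL.

```c
/* Sketch of the communication/computation overlap pattern that benefits
 * from asynchronous progress: non-blocking transfers are posted, useful
 * work runs while they are in flight, completion is checked at the end. */
#include <mpi.h>
#include <stdlib.h>

static void compute(double *a, int n)        /* stand-in for application work */
{
    for (int i = 0; i < n; i++) a[i] = a[i] * 0.5 + 1.0;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 4 * 1024 * 1024;           /* 4M doubles (32 MB) */
    double *sendbuf = calloc(n, sizeof(double));
    double *recvbuf = calloc(n, sizeof(double));
    double *work    = calloc(n, sizeof(double));

    MPI_Request req[2];
    MPI_Irecv(recvbuf, n, MPI_DOUBLE, (rank + size - 1) % size, 0, MPI_COMM_WORLD, &req[0]);
    MPI_Isend(sendbuf, n, MPI_DOUBLE, (rank + 1) % size, 0, MPI_COMM_WORLD, &req[1]);

    /* With asynchronous progress, the library can advance these transfers
     * here, while the application is busy computing. */
    compute(work, n);

    MPI_Waitall(2, req, MPI_STATUSES_IGNORE);

    free(sendbuf);
    free(recvbuf);
    free(work);
    MPI_Finalize();
    return 0;
}
```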
  • 41. MVAPICH User Group Meeting (MUG) 2019 41Network Based Computing Laboratory • Direct Connect (DC) Transport • Co-operative Rendezvous Protocol • Advanced All-reduce with SHARP • CMA-based Collectives • Asynchronous Progress • XPMEM-based Reduction Collectives • XPMEM-based Non-reduction Collectives • Optimized Collective Communication and Advanced Transport Protocols • PGAS and Hybrid MPI+PGAS Support Overview of Some of the MVAPICH2-X Features
  • 42. MVAPICH User Group Meeting (MUG) 2019 42Network Based Computing Laboratory Shared Address Space (XPMEM)-based Collectives Design [Figures: OSU_Allreduce and OSU_Reduce latency (us) vs. message size (16 KB-4 MB) on Broadwell with 256 processes, comparing MVAPICH2-2.3b, IMPI-2017v1.132, and MVAPICH2-X-2.3rc1; up to 1.8X (Allreduce) and 4X (Reduce) improvement] • “Shared address space”-based true zero-copy reduction collective designs in MVAPICH2 • Offloads computation/communication to peer ranks in reduction collective operations • Up to 4X improvement for 4MB Reduce and up to 1.8X improvement for 4MB Allreduce J. Hashmi, S. Chakraborty, M. Bayatpour, H. Subramoni, and D. Panda, Designing Efficient Shared Address Space Reduction Collectives for Multi-/Many-cores, International Parallel & Distributed Processing Symposium (IPDPS '18), May 2018. Available since MVAPICH2-X 2.3rc1
  • 43. MVAPICH User Group Meeting (MUG) 2019 43Network Based Computing Laboratory Reduction Collectives on IBM OpenPOWER [Figures: MPI_Allreduce and MPI_Reduce latency (us) vs. message size (4 KB up to 16 MB), comparing MVAPICH2-2.3rc1, SpectrumMPI-10.1.0, OpenMPI-3.0.0, and MVAPICH2-XPMEM; annotated gains of 2X/3.7X (Allreduce) and 3X/5X (Reduce)] • Two POWER8 dual-socket nodes, each with 20 ppn • Up to 2X improvement for Allreduce and 3X improvement for Reduce at 4MB message size • Used osu_reduce and osu_allreduce from OSU Microbenchmarks v5.5
  • 44. MVAPICH User Group Meeting (MUG) 2019 44Network Based Computing Laboratory Application Level Benefits of XPMEM-based Designs [Figures: CNTK AlexNet training (B.S=default, iteration=50, ppn=28) and MiniAMR (dual-socket, ppn=16) execution time (s) vs. number of processes, comparing Intel MPI, MVAPICH2, and MVAPICH2-XPMEM; 20%/9% (CNTK) and 27%/15% (MiniAMR) improvements] • Intel Xeon CPU E5-2687W v3 @ 3.10GHz (10-core, 2-socket) • Up to 20% benefits over IMPI for CNTK DNN training using AllReduce • Up to 27% benefits over IMPI and up to 15% improvement over MVAPICH2 for the MiniAMR application kernel
  • 45. MVAPICH User Group Meeting (MUG) 2019 45Network Based Computing Laboratory Impact of XPMEM-based Designs on MiniAMR [Figure: MiniAMR execution time (s) vs. number of processes (10-60) on OpenPOWER (weak-scaling, 3 nodes, ppn=20), comparing MVAPICH2-2.3rc1 and MVAPICH2-XPMEM; 36-45% improvements] • Two POWER8 dual-socket nodes, each with 20 ppn • MiniAMR application execution time comparing MVAPICH2-2.3rc1 and the optimized All-Reduce design – MiniAMR application with a weak-scaling workload on up to three POWER8 nodes – Up to 45% improvement over MVAPICH2-2.3rc1 in mesh-refinement time
  • 46. MVAPICH User Group Meeting (MUG) 2019 46Network Based Computing Laboratory Performance of Non-Reduction Collectives with XPMEM [Figures: Broadcast and Gather latency (us) vs. message size (4 KB-4 MB), comparing Intel MPI 2018, OpenMPI 3.0.1, MV2X-2.3rc1 (CMA Coll), and MV2X-2.3rc2 (XPMEM Coll); up to 5X (Broadcast) and 3X (Gather) over OpenMPI] • 28 MPI processes on a single dual-socket Broadwell E5-2680 v4 (2x14-core) node • Used osu_bcast from OSU Microbenchmarks v5.5
  • 47. MVAPICH User Group Meeting (MUG) 2019 47Network Based Computing Laboratory Impact of Optimized Small Message MPI_Alltoallv Algorithm • Optimized designs in MVAPICH2-X offer significantly improved performance for small-message MPI_Alltoallv [Figure: MPI_Alltoallv latency (us) vs. message size (1-256 bytes) for MVAPICH2-X and HPE-MPI; ~5X better] • Up to 5X benefit over HPE-MPI using the optimized Alltoallv algorithm and the Direct Connected transport protocol • Numbers taken on bbpv2.epfl.ch – 96 KNL nodes with 64 ppn (6,144 processes) – osu_alltoallv from OSU Micro Benchmarks • Environment variables used – MV2_USE_DC=1 – MV2_NUM_DC_TGT=64 – MV2_SMALL_MSG_DC_POOL=96 – MV2_LARGE_MSG_DC_POOL=96 – MV2_USE_RDMA_CM=0 Courtesy: Pramod Shivaji Kumbhar @ EPFL Available from MVAPICH2-X 2.3rc2 onwards
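As a reminder of the communication pattern being optimized, MPI_Alltoallv lets every rank send a potentially different (here, very small) amount of data to every other rank. The sketch below sets up such an exchange; the 8-byte per-peer payload is an illustrative assumption.

```c
/* Minimal MPI_Alltoallv sketch with small, per-peer message sizes.
 * The 8-byte payload per destination is an illustrative assumption. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *scounts = malloc(size * sizeof(int));
    int *rcounts = malloc(size * sizeof(int));
    int *sdispls = malloc(size * sizeof(int));
    int *rdispls = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++) {
        scounts[i] = rcounts[i] = 8;      /* 8 chars to/from every peer */
        sdispls[i] = rdispls[i] = 8 * i;
    }
    char *sendbuf = calloc((size_t)8 * size, 1);
    char *recvbuf = calloc((size_t)8 * size, 1);

    MPI_Alltoallv(sendbuf, scounts, sdispls, MPI_CHAR,
                  recvbuf, rcounts, rdispls, MPI_CHAR, MPI_COMM_WORLD);

    free(scounts); free(rcounts); free(sdispls); free(rdispls);
    free(sendbuf); free(recvbuf);
    MPI_Finalize();
    return 0;
}
```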
  • 48. MVAPICH User Group Meeting (MUG) 2019 48Network Based Computing Laboratory Application Level Performance with Graph500 and Sort Graph500 Execution Time J. Jose, S. Potluri, K. Tomko and D. K. Panda, Designing Scalable Graph500 Benchmark with Hybrid MPI+OpenSHMEM Programming Models, International Supercomputing Conference (ISC’13), June 2013 J. Jose, K. Kandalla, M. Luo and D. K. Panda, Supporting Hybrid MPI and OpenSHMEM over InfiniBand: Design and Performance Evaluation, Int'l Conference on Parallel Processing (ICPP '12), September 2012 [Figure: Graph500 execution time (s) vs. number of processes (4K-16K) for MPI-Simple, MPI-CSC, MPI-CSR, and Hybrid (MPI+OpenSHMEM); 7.6X and 13X improvements] • Performance of the Hybrid (MPI+OpenSHMEM) Graph500 design • 8,192 processes: 2.4X improvement over MPI-CSR, 7.6X improvement over MPI-Simple • 16,384 processes: 1.5X improvement over MPI-CSR, 13X improvement over MPI-Simple J. Jose, K. Kandalla, S. Potluri, J. Zhang and D. K. Panda, Optimizing Collective Communication in OpenSHMEM, Int'l Conference on Partitioned Global Address Space Programming Models (PGAS '13), October 2013. Sort Execution Time [Figure: Sort execution time (seconds) vs. input data size and number of processes (500GB-512 to 4TB-4K) for MPI and Hybrid; 51% improvement] • Performance of the Hybrid (MPI+OpenSHMEM) Sort application • 4,096 processes, 4 TB input size: MPI – 2408 sec (0.16 TB/min); Hybrid – 1172 sec (0.36 TB/min); 51% improvement over the MPI design
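The hybrid model referenced above mixes two-sided MPI with one-sided OpenSHMEM operations in a single program over MVAPICH2-X's unified runtime. The following is a minimal sketch of that style, not the Graph500 or Sort codes themselves; the counter update is illustrative, and the init/finalize ordering shown is one common pattern.

```c
/* Minimal hybrid MPI + OpenSHMEM sketch (illustrative only). */
#include <mpi.h>
#include <shmem.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    shmem_init();

    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    /* Symmetric heap allocation, visible to one-sided puts/gets */
    long *counter = shmem_malloc(sizeof(long));
    *counter = 0;
    shmem_barrier_all();

    /* One-sided update on the right neighbor, then a two-sided MPI collective */
    long one = 1;
    shmem_long_put(counter, &one, 1, (me + 1) % npes);
    shmem_barrier_all();                      /* completes the remote updates */

    long sum = 0;
    MPI_Allreduce(counter, &sum, 1, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);
    if (me == 0) printf("sum = %ld (expected %d)\n", sum, npes);

    shmem_free(counter);
    shmem_finalize();
    MPI_Finalize();
    return 0;
}
```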
  • 49. MVAPICH User Group Meeting (MUG) 2019 49Network Based Computing Laboratory • Released on 08/12/2019 • Major Features and Enhancements – Based on MVAPICH2-X 2.3 – New design based on Amazon EFA adapter's Scalable Reliable Datagram (SRD) transport protocol – Support for XPMEM based intra-node communication for point-to-point and collectives – Enhanced tuning for point-to-point and collective operations – Targeted for AWS instances with Amazon Linux 2 AMI and EFA support – Tested with c5n.18xlarge instance MVAPICH2-X-AWS 2.3
  • 50. MVAPICH User Group Meeting (MUG) 2019 50Network Based Computing Laboratory Point-to-Point Performance • Both UD and SRD show similar latency for small messages • SRD shows a higher message rate due to the lack of software reliability overhead • SRD is faster for large messages due to the larger MTU size
  • 51. MVAPICH User Group Meeting (MUG) 2019 51Network Based Computing Laboratory Collective Performance: MPI Gatherv • Up to 33% improvement with SRD compared to UD • Root does not need to send explicit acks to non-root processes • Non-roots can exit as soon as the message is sent (no need to wait for acks)
  • 52. MVAPICH User Group Meeting (MUG) 2019 52Network Based Computing Laboratory Collective Performance: MPI Allreduce • Up to 18% improvement with SRD compared to UD • Bidirectional communication pattern allows piggybacking of acks • Modest improvement compared to asymmetric communication patterns
  • 53. MVAPICH User Group Meeting (MUG) 2019 53Network Based Computing Laboratory Application Performance [Figures: miniGhost and CloverLeaf execution time (seconds) vs. processes (72 = 2x36, 144 = 4x36, 288 = 8x36), comparing MV2X (UD/SRD) and OpenMPI; 10% and 27.5% better] • Up to 10% performance improvement for MiniGhost on 8 nodes • Up to 27% better performance with CloverLeaf on 8 nodes S. Chakraborty, S. Xu, H. Subramoni and D. K. Panda, Designing Scalable and High-Performance MPI Libraries on Amazon Elastic Fabric Adapter, Hot Interconnects, 2019
  • 54. MVAPICH User Group Meeting (MUG) 2019 54Network Based Computing Laboratory • XPMEM-based MPI Derived Datatype Designs • Exploiting Hardware Tag Matching MVAPICH2-X Upcoming Features
  • 55. MVAPICH User Group Meeting (MUG) 2019 55Network Based Computing Laboratory Efficient Zero-copy MPI Datatypes for Emerging Architectures • New designs for efficient zero-copy based MPI derived datatype processing • Efficient schemes mitigate datatype translation, packing, and exchange overheads • Demonstrated benefits over prevalent MPI libraries for various application kernels • To be available in the upcoming MVAPICH2-X release [Figures: log-scale latency (milliseconds) for the 3D-Stencil datatype kernel on Broadwell (2x14-core) vs. number of processes (5X), the MILC datatype kernel on KNL 7250 in Flat-Quadrant mode (64-core) vs. grid dimensions (x, y, z, t) (19X), and the NAS-MG datatype kernel on OpenPOWER (20-core) vs. grid dimensions (3X), comparing MVAPICH2X-2.3, IMPI 2018, IMPI 2019, and MVAPICH2X-Opt]
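Derived datatypes are what make these zero-copy schemes possible: the application describes its non-contiguous layout once and hands MPI a single (buffer, datatype) pair instead of packing manually. A minimal sketch using MPI_Type_vector for a strided column exchange follows; the grid dimensions and the even-rank-count pairing are illustrative assumptions.

```c
/* Sketch: describing a non-contiguous halo column with a derived datatype
 * so the MPI library (rather than the application) handles the packing.
 * Assumes an even number of ranks; dimensions are illustrative. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int nx = 128, ny = 128;            /* local 2D slab, row-major [ny][nx] */
    double *grid = calloc((size_t)nx * ny, sizeof(double));

    /* One column of the slab: ny blocks of 1 double, stride nx */
    MPI_Datatype column;
    MPI_Type_vector(ny, 1, nx, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);

    int peer = rank ^ 1;                      /* pair up even/odd ranks */
    MPI_Sendrecv(&grid[nx - 1], 1, column, peer, 0,   /* send rightmost column */
                 &grid[0],      1, column, peer, 0,   /* recv into leftmost column */
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Type_free(&column);
    free(grid);
    MPI_Finalize();
    return 0;
}
```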
  • 56. MVAPICH User Group Meeting (MUG) 2019 56Network Based Computing Laboratory • Offloads the processing of point-to-point MPI messages from the host processor to HCA • Enables zero copy of MPI message transfers – Messages are written directly to the user's buffer without extra buffering and copies • Provides rendezvous progress offload to HCA – Increases the overlap of communication and computation Hardware Tag Matching Support
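The pattern that hardware tag matching accelerates is a pre-posted receive: because the HCA holds the posted tag, it can match an incoming message and deliver the payload straight into the user buffer while the host is busy elsewhere. A minimal sketch is shown below; the message size, tag value, and dummy compute loop are illustrative assumptions.

```c
/* Sketch of the receive-side pattern that hardware tag matching helps:
 * the receive is pre-posted, so matching and delivery can proceed on the
 * HCA while the host computes. Requires at least 2 ranks. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int count = 1 << 20;                 /* 1M ints */
    int *buf = calloc(count, sizeof(int));
    double acc = 0.0;

    if (rank == 0) {
        MPI_Request req;
        MPI_Irecv(buf, count, MPI_INT, 1, 42, MPI_COMM_WORLD, &req);   /* pre-posted */
        for (int i = 0; i < 100000000; i++) acc += 1e-9 * i;           /* overlapped work */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        MPI_Send(buf, count, MPI_INT, 0, 42, MPI_COMM_WORLD);
    }

    free(buf);
    MPI_Finalize();
    return (int)(acc < 0);                      /* keep 'acc' live; always 0 */
}
```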
  • 57. MVAPICH User Group Meeting (MUG) 2019 57Network Based Computing Laboratory Impact of Zero Copy MPI Message Passing using HW Tag Matching [Figures: osu_latency in the eager message range (0-16 KB) and the rendezvous message range (32 KB-4 MB), latency (us) vs. message size (bytes), comparing MVAPICH2 and MVAPICH2+HW-TM; 35%] Removal of intermediate buffering/copies can lead to up to 35% performance improvement in the latency of medium messages
  • 58. MVAPICH User Group Meeting (MUG) 2019 58Network Based Computing Laboratory Impact of Rendezvous Offload using HW Tag Matching [Figures: osu_iscatterv total latency (us) vs. message size (16 KB-512 KB) at 640 processes (1.7X) and 1,280 processes (1.8X), comparing MVAPICH2 and MVAPICH2+HW-TM] The increased overlap can lead to 1.8X performance improvement in the total latency of osu_iscatterv
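The osu_iscatterv numbers above reflect a non-blocking collective whose progress can be pushed toward the HCA, so the host can compute while the scatter completes. The sketch below reproduces that shape; the chunk size and the dummy work loop are illustrative assumptions.

```c
/* Sketch of an osu_iscatterv-style pattern: post a non-blocking scatterv,
 * overlap independent work, then complete it with MPI_Wait. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int chunk = 64 * 1024;               /* 64K ints per rank */
    int *counts = malloc(size * sizeof(int));
    int *displs = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++) { counts[i] = chunk; displs[i] = i * chunk; }

    int *sendbuf = (rank == 0) ? calloc((size_t)chunk * size, sizeof(int)) : NULL;
    int *recvbuf = calloc(chunk, sizeof(int));

    MPI_Request req;
    MPI_Iscatterv(sendbuf, counts, displs, MPI_INT,
                  recvbuf, chunk, MPI_INT, 0, MPI_COMM_WORLD, &req);

    volatile double acc = 0.0;                  /* overlapped computation */
    for (int i = 0; i < 50000000; i++) acc += 1e-9 * i;

    MPI_Wait(&req, MPI_STATUS_IGNORE);

    free(counts); free(displs); free(recvbuf);
    if (sendbuf) free(sendbuf);
    MPI_Finalize();
    return 0;
}
```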
  • 59. MVAPICH User Group Meeting (MUG) 2019 59Network Based Computing Laboratory MVAPICH2 Software Family (Library – Requirements) • MVAPICH2 – MPI with support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE (v1/v2) • MVAPICH2-Azure – Optimized support for the Microsoft Azure platform with InfiniBand • MVAPICH2-X – Advanced MPI features/support (UMR, ODP, DC, Core-Direct, SHArP, XPMEM) and OSU INAM (InfiniBand network monitoring and analysis) • MVAPICH2-X-AWS – Advanced MPI features (SRD and XPMEM) with support for Amazon Elastic Fabric Adapter (EFA) • MVAPICH2-GDR – Optimized MPI for clusters with NVIDIA GPUs and for GPU-enabled Deep Learning applications • MVAPICH2-EA – Energy-aware MPI with support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE (v1/v2) • OEMT – MPI energy monitoring tool • OSU INAM – InfiniBand network analysis and monitoring • OMB – Microbenchmarks for measuring MPI and PGAS performance
  • 60. MVAPICH User Group Meeting (MUG) 2019 60Network Based Computing Laboratory GPU-Aware (CUDA-Aware) MPI Library: MVAPICH2-GPU At Sender: MPI_Send(s_devbuf, size, …); At Receiver: MPI_Recv(r_devbuf, size, …); inside MVAPICH2 • Standard MPI interfaces used for unified data movement • Takes advantage of Unified Virtual Addressing (>= CUDA 4.0) • Overlaps data movement from GPU with RDMA transfers High Performance and High Productivity
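A minimal CUDA-aware point-to-point sketch of that idea follows: device pointers are handed directly to MPI_Send/MPI_Recv, and the library stages or RDMAs the data as appropriate (with MVAPICH2-GDR this path is enabled via MV2_USE_CUDA=1, as shown on the HOOMD-blue slide later). The 4 MB payload and two-rank pattern are illustrative assumptions.

```c
/* Sketch of CUDA-aware point-to-point transfer: device pointers are
 * passed straight to MPI, which moves the data (pipelining or GPUDirect
 * RDMA, depending on what the platform offers). Requires 2 ranks. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int nbytes = 4 * 1024 * 1024;        /* 4 MB */
    void *devbuf;
    cudaMalloc(&devbuf, nbytes);
    cudaMemset(devbuf, 0, nbytes);

    if (rank == 0)
        MPI_Send(devbuf, nbytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(devbuf, nbytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    cudaFree(devbuf);
    MPI_Finalize();
    return 0;
}
```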
  • 61. MVAPICH User Group Meeting (MUG) 2019 61Network Based Computing Laboratory CUDA-Aware MPI: MVAPICH2-GDR 1.8-2.3.2 Releases • Support for MPI communication from NVIDIA GPU device memory • High performance RDMA-based inter-node point-to-point communication (GPU-GPU, GPU-Host and Host-GPU) • High performance intra-node point-to-point communication for multi-GPU adapters/node (GPU-GPU, GPU-Host and Host-GPU) • Taking advantage of CUDA IPC (available since CUDA 4.1) in intra-node communication for multiple GPU adapters/node • Optimized and tuned collectives for GPU device buffers • MPI datatype support for point-to-point and collective communication from GPU device buffers • Unified memory
  • 62. MVAPICH User Group Meeting (MUG) 2019 62Network Based Computing Laboratory • Released on 08/08/2019 • Major Features and Enhancements – Based on MVAPICH2 2.3.1 – Support for CUDA 10.1 – Support for PGI 19.x – Enhanced intra-node and inter-node point-to-point performance – Enhanced MPI_Allreduce performance for DGX-2 system – Enhanced GPU communication support in MPI_THREAD_MULTIPLE mode – Enhanced performance of datatype support for GPU-resident data • Zero-copy transfer when P2P access is available between GPUs through NVLink/PCIe – Enhanced GPU-based point-to-point and collective tuning • OpenPOWER systems such as ORNL Summit and LLNL Sierra • ABCI system @AIST, Owens and Pitzer systems @Ohio Supercomputer Center – Scaled Allreduce to 24,576 Volta GPUs on Summit – Enhanced intra-node and inter-node point-to-point performance for DGX-2 and IBM POWER8 and IBM POWER9 systems – Enhanced Allreduce performance for DGX-2 and IBM POWER8/POWER9 systems – Enhanced small message performance for CUDA-Aware MPI_Put and MPI_Get – Flexible support for running TensorFlow (Horovod) jobs MVAPICH2-GDR 2.3.2
  • 63. MVAPICH User Group Meeting (MUG) 2019 63Network Based Computing Laboratory Optimized MVAPICH2-GDR Design [Figures: GPU-GPU inter-node latency (us), bandwidth (MB/s), and bi-directional bandwidth (MB/s) vs. message size (1 B-8 KB), comparing MV2-(NO-GDR) and MV2-GDR 2.3; 1.85 us small-message latency, 11X latency improvement, and 9x/10x bandwidth and bi-bandwidth improvements] MVAPICH2-GDR-2.3, Intel Haswell (E5-2687W @ 3.10 GHz) node - 20 cores, NVIDIA Volta V100 GPU, Mellanox Connect-X4 EDR HCA, CUDA 9.0, Mellanox OFED 4.0 with GPU-Direct-RDMA
  • 64. MVAPICH User Group Meeting (MUG) 2019 64Network Based Computing Laboratory Device-to-Device Performance on OpenPOWER (NVLink2 + Volta) [Figures: intra-node and inter-node latency (us) for small and large messages and intra-node/inter-node bandwidth (GB/sec) vs. message size, with intra-socket and inter-socket curves] Platform: OpenPOWER (POWER9-ppc64le) nodes equipped with a dual-socket CPU, 4 Volta V100 GPUs, and 2-port EDR InfiniBand interconnect Intra-node Bandwidth: 70.4 GB/sec for 128MB (via NVLINK2) Intra-node Latency: 5.36 us (without GDRCopy) Inter-node Latency: 5.66 us (without GDRCopy) Inter-node Bandwidth: 23.7 GB/sec (2-port EDR) Available since MVAPICH2-GDR 2.3a
  • 65. MVAPICH User Group Meeting (MUG) 2019 65Network Based Computing Laboratory Application-Level Evaluation (HOOMD-blue) [Figures: average time steps per second (TPS) vs. number of processes (4-32) for 64K and 256K particles, comparing MV2 and MV2+GDR; 2X improvement] • Platform: Wilkes (Intel Ivy Bridge + NVIDIA Tesla K20c + Mellanox Connect-IB) • HoomDBlue Version 1.0.5 • GDRCOPY enabled: MV2_USE_CUDA=1 MV2_IBA_HCA=mlx5_0 MV2_IBA_EAGER_THRESHOLD=32768 MV2_VBUF_TOTAL_SIZE=32768 MV2_USE_GPUDIRECT_LOOPBACK_LIMIT=32768 MV2_USE_GPUDIRECT_GDRCOPY=1 MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=16384
  • 66. MVAPICH User Group Meeting (MUG) 2019 66Network Based Computing Laboratory Application-Level Evaluation (Cosmo) and Weather Forecasting in Switzerland [Figures: normalized execution time vs. number of GPUs on the CSCS GPU cluster (16-96 GPUs) and the Wilkes GPU cluster (4-32 GPUs), comparing Default, Callback-based, and Event-based designs] • 2X improvement on 32 GPU nodes • 30% improvement on 96 GPU nodes (8 GPUs/node) C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee, H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non-Contiguous Data Movement Processing on Modern GPU-enabled Systems, IPDPS’16 On-going collaboration with CSCS and MeteoSwiss (Switzerland) in co-designing MV2-GDR and the Cosmo application Cosmo model: http://www2.cosmo-model.org/content/tasks/operational/meteoSwiss/
  • 67. MVAPICH User Group Meeting (MUG) 2019 67Network Based Computing Laboratory Deep Learning: New Challenges for MPI Runtimes • Deep Learning frameworks are a different game altogether – Unusually large message sizes (order of megabytes) – Most communication based on GPU buffers • Existing state-of-the-art – cuDNN, cuBLAS, NCCL --> scale-up performance – NCCL2, CUDA-Aware MPI --> scale-out performance • For small and medium message sizes only! • Proposed: Can we co-design the MPI runtime (MVAPICH2-GDR) and the DL framework (Caffe) to achieve both? – Efficient overlap of computation and communication – Efficient large-message communication (reductions) – What application co-designs are needed to exploit communication-runtime co-designs? [Figure: scale-up vs. scale-out performance landscape placing cuDNN, cuBLAS, NCCL, NCCL2, gRPC, Hadoop, MPI, and the proposed co-designs] A. A. Awan, K. Hamidouche, J. M. Hashmi, and D. K. Panda, S-Caffe: Co-designing MPI Runtimes and Caffe for Scalable Deep Learning on Modern GPU Clusters. In Proceedings of the 22nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '17)
  • 68. MVAPICH User Group Meeting (MUG) 2019 68Network Based Computing Laboratory • Efficient Allreduce is crucial for Horovod’s overall training performance – Both MPI and NCCL designs are available • We have evaluated Horovod extensively and compared across a wide range of designs using gRPC and gRPC extensions • MVAPICH2-GDR achieved up to 90% scaling efficiency for ResNet-50 Training on 64 Pascal GPUs Scalable TensorFlow using Horovod, MPI, and NCCL Awan et al., “Scalable Distributed DNN Training using TensorFlow and CUDA-Aware MPI: Characterization, Designs, and Performance Evaluation”, CCGrid ‘19. https://arxiv.org/abs/1810.11112
  • 69. MVAPICH User Group Meeting (MUG) 2019 69Network Based Computing Laboratory MVAPICH2-GDR vs. NCCL2 – Allreduce Operation • Optimized designs in MVAPICH2-GDR 2.3 offer better/comparable performance for most cases • MPI_Allreduce (MVAPICH2-GDR) vs. ncclAllreduce (NCCL2) on 16 GPUs [Figures: Allreduce latency (us) vs. message size for small (4 B-64 KB, ~3X better) and large (128 KB-256 MB, ~1.2X better) messages, comparing MVAPICH2-GDR and NCCL2] Platform: Intel Xeon (Broadwell) nodes equipped with a dual-socket CPU, 1 K-80 GPU, and EDR InfiniBand interconnect
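From the application's point of view, the MVAPICH2-GDR path above is simply an MPI_Allreduce on GPU-resident buffers, which is the operation deep-learning frameworks issue for gradient averaging. A minimal sketch follows; the 8M-float buffer size is an illustrative assumption.

```c
/* Sketch: MPI_Allreduce directly on GPU buffers, the pattern compared
 * against ncclAllreduce above. Element count is an illustrative assumption. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    const int count = 8 * 1024 * 1024;         /* 8M floats (32 MB) */
    float *d_grad, *d_sum;
    cudaMalloc((void **)&d_grad, count * sizeof(float));
    cudaMalloc((void **)&d_sum,  count * sizeof(float));
    cudaMemset(d_grad, 0, count * sizeof(float));

    /* A CUDA-aware MPI library accepts the device pointers directly */
    MPI_Allreduce(d_grad, d_sum, count, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);

    cudaFree(d_grad);
    cudaFree(d_sum);
    MPI_Finalize();
    return 0;
}
```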
  • 70. MVAPICH User Group Meeting (MUG) 2019 70Network Based Computing Laboratory MVAPICH2-GDR vs. NCCL2 – Allreduce Operation (DGX-2) • Optimized designs in the upcoming MVAPICH2-GDR offer better/comparable performance for most cases • MPI_Allreduce (MVAPICH2-GDR) vs. ncclAllreduce (NCCL2) on 1 DGX-2 node (16 Volta GPUs) [Figures: Allreduce latency (us) vs. message size for small (8 B-128 KB, ~5.8X better) and large (256 KB-256 MB, ~2.5X better) messages, comparing MVAPICH2-GDR-Next and NCCL-2.4] Platform: Nvidia DGX-2 system (16 Nvidia Volta GPUs connected with NVSwitch), CUDA 9.2
  • 71. MVAPICH User Group Meeting (MUG) 2019 71Network Based Computing Laboratory MVAPICH2-GDR: Enhanced MPI_Allreduce at Scale • Optimized designs in the upcoming MVAPICH2-GDR offer better performance for most cases • MPI_Allreduce (MVAPICH2-GDR) vs. ncclAllreduce (NCCL2) up to 1,536 GPUs [Figures: latency (us) vs. message size (4 B-16 KB) on 1,536 GPUs (1.6X better), bandwidth (GB/s) vs. message size (32-256 MB) on 1,536 GPUs (1.7X better), and bandwidth for a 128 MB message vs. number of GPUs (24-1536, 1.7X better), comparing MVAPICH2-GDR-2.3.2, NCCL 2.4, SpectrumMPI 10.2.0.11, and OpenMPI 4.0.1] Platform: dual-socket IBM POWER9 CPU, 6 NVIDIA Volta V100 GPUs, and 2-port InfiniBand EDR interconnect
  • 72. MVAPICH User Group Meeting (MUG) 2019 72Network Based Computing Laboratory Distributed Training with TensorFlow and MVAPICH2-GDR • ResNet-50 training using the TensorFlow benchmark on 1 DGX-2 node (16 Volta GPUs) [Figures: images per second (9% higher) and scaling efficiency (%) vs. number of GPUs (1-16), comparing NCCL-2.4 and MVAPICH2-GDR-2.3.2] Scaling Efficiency = (Actual throughput / Ideal throughput at scale) × 100% Platform: Nvidia DGX-2 system (16 Nvidia Volta GPUs connected with NVSwitch), CUDA 9.2
  • 73. MVAPICH User Group Meeting (MUG) 2019 73Network Based Computing Laboratory Distributed Training with TensorFlow and MVAPICH2-GDR • ResNet-50 training using the TensorFlow benchmark on SUMMIT -- 1,536 Volta GPUs! [Figure: images per second (thousands) vs. number of GPUs (1-1536), comparing NCCL-2.4 and MVAPICH2-GDR-2.3.2] • ImageNet-1k has 1,281,167 (1.2 million) images • MVAPICH2-GDR reaches ~0.35 million images per second for ImageNet-1k! • Time/epoch ≈ 3.7 seconds • Total time (90 epochs) ≈ 3.7 × 90 ≈ 330 seconds ≈ 5.5 minutes! Platform: The Summit Supercomputer (#1 on Top500.org) – 6 NVIDIA Volta GPUs per node connected with NVLink, CUDA 9.2 *We observed errors for NCCL2 beyond 96 GPUs
  • 74. MVAPICH User Group Meeting (MUG) 2019 74Network Based Computing Laboratory • Scalable Host-based Collectives • Enhanced Derived Datatype • Integrated Collective Support with SHArP from GPU Buffers • Optimization for PyTorch and MXNET MVAPICH2-GDR Upcoming Features for HPC and DL
  • 75. MVAPICH User Group Meeting (MUG) 2019 75Network Based Computing Laboratory Scalable Host-based Collectives on OpenPOWER (Intra-node Reduce & Alltoall) (Nodes=1, PPN=20) [Figures: Reduce and Alltoall latency (us) vs. message size (4 B-4 KB small, 8 KB-1 MB large), comparing MVAPICH2-GDR, SpectrumMPI-10.1.0.2, and OpenMPI-3.0.0; annotated improvements of 5.2X, 3.6X, 3.3X, and 3.2X for small messages and 3.2X, 1.4X, 1.3X, and 1.2X for large messages] Up to 5X and 3x performance improvement by MVAPICH2 for small and large messages, respectively
  • 76. MVAPICH User Group Meeting (MUG) 2019 76Network Based Computing Laboratory MVAPICH2-GDR: Enhanced Derived Datatype • Kernel-based and GDRCOPY-based one-shot packing for inter-socket and inter-node communication • Zero-copy (packing-free) for GPUs with peer-to-peer direct access over PCIe/NVLink [Figure: MILC speedup vs. problem size ([6, 8,8,8,8] to [6, 16,16,16,16]) for the GPU-based DDTBench MILC communication kernel, comparing OpenMPI 4.0.0, MVAPICH2-GDR 2.3.1, and MVAPICH2-GDR-Next; improved 3.4X. Platform: Nvidia DGX-2 system (NVIDIA Volta GPUs connected with NVSwitch), CUDA 9.2] [Figure: execution time (s) vs. number of GPUs (16-64) for the communication kernel of the COSMO model (https://github.com/cosunae/HaloExchangeBenchmarks), comparing MVAPICH2-GDR 2.3.1 and MVAPICH2-GDR-Next; improved 15X. Platform: Cray CS-Storm (16 NVIDIA Tesla K80 GPUs per node), CUDA 8.0]
  • 77. MVAPICH User Group Meeting (MUG) 2019 77Network Based Computing Laboratory MVAPICH2 Software Family (Library – Requirements) • MVAPICH2 – MPI with support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE (v1/v2) • MVAPICH2-Azure – Optimized support for the Microsoft Azure platform with InfiniBand • MVAPICH2-X – Advanced MPI features/support (UMR, ODP, DC, Core-Direct, SHArP, XPMEM) and OSU INAM (InfiniBand network monitoring and analysis) • MVAPICH2-X-AWS – Advanced MPI features (SRD and XPMEM) with support for Amazon Elastic Fabric Adapter (EFA) • MVAPICH2-GDR – Optimized MPI for clusters with NVIDIA GPUs and for GPU-enabled Deep Learning applications • MVAPICH2-EA – Energy-aware MPI with support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE (v1/v2) • OEMT – MPI energy monitoring tool • OSU INAM – InfiniBand network analysis and monitoring • OMB – Microbenchmarks for measuring MPI and PGAS performance
  • 78. MVAPICH User Group Meeting (MUG) 2019 78Network Based Computing Laboratory Overview of OSU INAM • A network monitoring and analysis tool that is capable of analyzing traffic on the InfiniBand network with inputs from the MPI runtime – http://mvapich.cse.ohio-state.edu/tools/osu-inam/ • Monitors IB clusters in real time by querying various subnet management entities and gathering input from the MPI runtimes • Capability to analyze and profile node-level, job-level and process-level activities for MPI communication – Point-to-Point, Collectives and RMA • Ability to filter data based on type of counters using “drop down” list • Remotely monitor various metrics of MPI processes at user specified granularity • "Job Page" to display jobs in ascending/descending order of various performance metrics in conjunction with MVAPICH2-X • Visualize the data transfer happening in a “live” or “historical” fashion for entire network, job or set of nodes • OSU INAM 0.9.4 released on 11/10/2018 – Enhanced performance for fabric discovery using optimized OpenMP-based multi-threaded designs – Ability to gather InfiniBand performance counters at sub-second granularity for very large (>2000 nodes) clusters – Redesign database layout to reduce database size – Enhanced fault tolerance for database operations • Thanks to Trey Dockendorf @ OSC for the feedback – OpenMP-based multi-threaded designs to handle database purge, read, and insert operations simultaneously – Improved database purging time by using bulk deletes – Tune database timeouts to handle very long database operations – Improved debugging support by introducing several debugging levels
  • 79. MVAPICH User Group Meeting (MUG) 2019 79Network Based Computing Laboratory OSU INAM Features • Show network topology of large clusters • Visualize traffic pattern on different links • Quickly identify congested links/links in error state • See the history unfold – play back historical state of the network Comet@SDSC --- Clustered View (1,879 nodes, 212 switches, 4,377 network links) Finding Routes Between Nodes
  • 80. MVAPICH User Group Meeting (MUG) 2019 80Network Based Computing Laboratory OSU INAM Features (Cont.) Visualizing a Job (5 Nodes) • Job level view • Show different network metrics (load, error, etc.) for any live job • Play back historical data for completed jobs to identify bottlenecks • Node level view - details per process or per node • CPU utilization for each rank/node • Bytes sent/received for MPI operations (pt-to-pt, collective, RMA) • Network metrics (e.g. XmitDiscard, RcvError) per rank/node Estimated Process Level Link Utilization • Estimated Link Utilization view • Classify data flowing over a network link at different granularity in conjunction with MVAPICH2-X 2.2rc1 • Job level and • Process level More Details in Tutorial/Demo Session Tomorrow
  • 81. MVAPICH User Group Meeting (MUG) 2019 81Network Based Computing Laboratory • Available since 2004 • Suite of microbenchmarks to study communication performance of various programming models • Benchmarks available for the following programming models – Message Passing Interface (MPI) – Partitioned Global Address Space (PGAS) • Unified Parallel C (UPC) • Unified Parallel C++ (UPC++) • OpenSHMEM • Benchmarks available for multiple accelerator based architectures – Compute Unified Device Architecture (CUDA) – OpenACC Application Program Interface • Part of various national resource procurement suites like NERSC-8 / Trinity Benchmarks • Continuing to add support for newer primitives and features • Please visit the following link for more information – http://mvapich.cse.ohio-state.edu/benchmarks/ OSU Microbenchmarks
  • 82. MVAPICH User Group Meeting (MUG) 2019 82Network Based Computing Laboratory • MPI runtime has many parameters • Tuning a set of parameters can help you to extract higher performance • Compiled a list of such contributions through the MVAPICH Website – http://mvapich.cse.ohio-state.edu/best_practices/ • Initial list of applications – Amber – HoomDBlue – HPCG – Lulesh – MILC – Neuron – SMG2000 – Cloverleaf – SPEC (LAMMPS, POP2, TERA_TF, WRF2) • Soliciting additional contributions, send your results to mvapich-help at cse.ohio-state.edu. • We will link these results with credits to you. Applications-Level Tuning: Compilation of Best Practices
  • 83. MVAPICH User Group Meeting (MUG) 2019 83Network Based Computing Laboratory MVAPICH2 – Plans for Exascale • Performance and Memory scalability toward 1-10M cores • Hybrid programming (MPI + OpenSHMEM, MPI + UPC, MPI + CAF …) • MPI + Task* • Enhanced Optimization for GPU Support and Accelerators • Taking advantage of advanced features of Mellanox InfiniBand • Tag Matching* • Adapter Memory* • Enhanced communication schemes for upcoming architectures • Intel Optane* • BlueField* • CAPI* • Extended topology-aware collectives • Extended Energy-aware designs and Virtualization Support • Extended Support for MPI Tools Interface (as in MPI 3.0) • Extended FT support • Support for * features will be available in future MVAPICH2 Releases
  • 84. MVAPICH User Group Meeting (MUG) 2019 84Network Based Computing Laboratory • Supported through X-ScaleSolutions (http://x-scalesolutions.com) • Benefits: – Help and guidance with installation of the library – Platform-specific optimizations and tuning – Timely support for operational issues encountered with the library – Web portal interface to submit issues and tracking their progress – Advanced debugging techniques – Application-specific optimizations and tuning – Obtaining guidelines on best practices – Periodic information on major fixes and updates – Information on major releases – Help with upgrading to the latest release – Flexible Service Level Agreements • Support provided to Lawrence Livermore National Laboratory (LLNL) for the last two years Commercial Support for MVAPICH2, HiBD, and HiDL Libraries
  • 85. MVAPICH User Group Meeting (MUG) 2019 85Network Based Computing Laboratory • Has joined the OpenPOWER Consortium as a silver ISV member • Provides flexibility: – To have MVAPICH2, HiDL and HiBD libraries getting integrated into the OpenPOWER software stack – A part of the OpenPOWER ecosystem – Can participate with different vendors for bidding, installation and deployment process • Introduced two new integrated products with support for OpenPOWER systems (Presented yesterday at the OpenPOWER North America Summit) – X-ScaleHPC – X-ScaleAI – Send an e-mail to contactus@x-scalesolutions.com for free trial!! Silver ISV Member for the OpenPOWER Consortium + Products
  • 86. MVAPICH User Group Meeting (MUG) 2019 86Network Based Computing Laboratory Funding Acknowledgments Funding Support by Equipment Support by
  • 87. MVAPICH User Group Meeting (MUG) 2019 87Network Based Computing Laboratory Personnel Acknowledgments Current Students (Graduate) – A. Awan (Ph.D.) – M. Bayatpour (Ph.D.) – C.-H. Chu (Ph.D.) – J. Hashmi (Ph.D.) – A. Jain (Ph.D.) – K. S. Kandadi (M.S.) Past Students – A. Augustine (M.S.) – P. Balaji (Ph.D.) – R. Biswas (M.S.) – S. Bhagvat (M.S.) – A. Bhat (M.S.) – D. Buntinas (Ph.D.) – L. Chai (Ph.D.) – B. Chandrasekharan (M.S.) – S. Chakraborthy (Ph.D.) – N. Dandapanthula (M.S.) – V. Dhanraj (M.S.) – R. Rajachandrasekar (Ph.D.) – D. Shankar (Ph.D.) – G. Santhanaraman (Ph.D.) – A. Singh (Ph.D.) – J. Sridhar (M.S.) – S. Sur (Ph.D.) – H. Subramoni (Ph.D.) – K. Vaidyanathan (Ph.D.) – A. Vishnu (Ph.D.) – J. Wu (Ph.D.) – W. Yu (Ph.D.) – J. Zhang (Ph.D.) Past Research Scientist – K. Hamidouche – S. Sur – X. Lu Past Post-Docs – D. Banerjee – X. Besseron – H.-W. Jin – T. Gangadharappa (M.S.) – K. Gopalakrishnan (M.S.) – W. Huang (Ph.D.) – W. Jiang (M.S.) – J. Jose (Ph.D.) – S. Kini (M.S.) – M. Koop (Ph.D.) – K. Kulkarni (M.S.) – R. Kumar (M.S.) – S. Krishnamoorthy (M.S.) – K. Kandalla (Ph.D.) – M. Li (Ph.D.) – P. Lai (M.S.) – J. Liu (Ph.D.) – M. Luo (Ph.D.) – A. Mamidala (Ph.D.) – G. Marsh (M.S.) – V. Meshram (M.S.) – A. Moody (M.S.) – S. Naravula (Ph.D.) – R. Noronha (Ph.D.) – X. Ouyang (Ph.D.) – S. Pai (M.S.) – S. Potluri (Ph.D.) – Kamal Raj (M.S.) – K. S. Khorassani (Ph.D.) – P. Kousha (Ph.D.) – A. Quentin (Ph.D.) – B. Ramesh (M. S.) – S. Xu (M.S.) – J. Lin – M. Luo – E. Mancini Past Programmers – D. Bureddy – J. Perkins Current Research Specialist – J. Smith – S. Marcarelli – J. Vienne – H. Wang Current Post-doc – M. S. Ghazimeersaeed – A. Ruhela – K. Manian Current Students (Undergraduate) – V. Gangal (B.S.) – N. Sarkauskas (B.S.) Past Research Specialist – M. Arnold Current Research Scientist – H. Subramoni– Q. Zhou (Ph.D.)
  • 88. MVAPICH User Group Meeting (MUG) 2019 88Network Based Computing Laboratory Thank You! Network-Based Computing Laboratory http://nowlab.cse.ohio-state.edu/ The MVAPICH2 Project http://mvapich.cse.ohio-state.edu/