Parallel Programming using Message Passing Interface (MPI)

metu-ceng
ts@TayfunSen.com
25 April 2008
Outline
• What is MPI?
• MPI Implementations
• OpenMPI
• MPI – API and usage
• References
• Q&A
What is MPI?
• A standard with many implementations
  (LAM/MPI and MPICH, evolving into
  OpenMPI and MVAPICH)
• A message-passing API
• A library for programming clusters
• Needs to be high-performing, scalable,
  portable ...
MPI Implementations
• Is it up for the challenge? MPI does not
  have many alternatives (what about OpenMP,
  MapReduce etc.?).
• Many implementations out there.
• The programming interface is the same across
  implementations, but they differ in what
  they support in terms of connectivity, fault
  tolerance etc.
• On ceng-hpc, both MVAPICH and OpenMPI are
  installed.
OpenMPI
• We'll use OpenMPI for this presentation.
• It's open source, MPI-2 compliant, portable,
  has fault tolerance, and combines best
  practices of a number of other MPI
  implementations.
• To install it, for example on Debian/Ubuntu,
  type:
  # apt-get install openmpi-bin libopenmpi-dev openmpi-doc
MPI – General Information
• Functions start with MPI_ to distinguish
  them from application code.
• MPI defines its own data types to abstract
  machine-dependent representations (MPI_CHAR,
  MPI_INT, MPI_BYTE etc.)
MPI – API and other stuff
• Housekeeping (initialization, termination,
  header file)
• Two types of communication: point-to-point
  and collective
• Communicators
Housekeeping
• Include the header mpi.h.
• Initialize using MPI_Init(&argc, &argv) and
  shut MPI down using MPI_Finalize().
• Demo time – “hello world!” using MPI (a
  reconstructed sketch follows)
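The slides don't include the demo source, so here is a minimal
reconstruction of what the “hello world!” MPI program looks like:

    /* hello.c – a minimal MPI program (a sketch, not the original demo) */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);               /* start up MPI */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total process count */

        printf("hello world from rank %d of %d\n", rank, size);

        MPI_Finalize();                       /* shut down MPI */
        return 0;
    }

Compile with mpicc hello.c -o hello and launch with, for example,
mpirun -np 4 ./hello (both come with OpenMPI).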
Point-to-point communication
• Related definitions – source, destination,
  communicator, tag, buffer, data type, count
• man MPI_Send, MPI_Recv
  int MPI_Send(void *buf, int count, MPI_Datatype datatype,
               int dest, int tag, MPI_Comm comm)
• A blocking send – the process does nothing
  else until the message is sent
P2P Communication (cont.)
• int MPI_Recv(void *buf, int count, MPI_Datatype datatype,
               int source, int tag, MPI_Comm comm,
               MPI_Status *status)
• Source, tag and communicator have to be
  correct for the message to be received
• Demo time – simple send (see the sketch
  below)
• One last thing – you can use the wildcards
  MPI_ANY_SOURCE and MPI_ANY_TAG in place of
  source and tag
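The demo source is not in the slides; a minimal reconstruction of
the simple send might look like this:

    /* simple_send.c – a sketch, assuming rank 0 sends one int to rank 1 */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            /* one MPI_INT to rank 1, tag 0 */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* MPI_ANY_SOURCE / MPI_ANY_TAG would also match here */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }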
P2P Communication (cont.)
• The receiver does not actually know how much
  data it received – it posts a buffer big
  enough for the largest message it expects.
• To be sure of how much was received, one can
  use:
  int MPI_Get_count(MPI_Status *status, MPI_Datatype dtype,
                    int *count);
• Demo time – change simple send to check the
  received message size (sketched below)
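A hedged sketch of that change, dropping into the receiver branch of
the simple send above (the buffer size is an illustrative assumption):

    int buf[100];
    int received;
    MPI_Status status;

    MPI_Recv(buf, 100, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
             MPI_COMM_WORLD, &status);
    MPI_Get_count(&status, MPI_INT, &received);  /* elements actually received */
    printf("got %d ints from rank %d (tag %d)\n",
           received, status.MPI_SOURCE, status.MPI_TAG);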
P2P Communication (cont.)
• For a receive operation, communication ends
  when the message has been copied into the
  local variables.
• For a send operation, communication is
  complete when the message has been handed to
  MPI for sending (so that the buffer can be
  reused).
• Blocking operations return only when the
  communication has completed.
• Beware – there are some intricacies; check
  [2] for more information.
P2P Communication (cont.)
• For blocking communications, deadlock is a
  possibility:
  if( myrank == 0 ) {
      /* Receive, then send a message */
      MPI_Recv( b, 100, MPI_DOUBLE, 1, 19, MPI_COMM_WORLD, &status );
      MPI_Send( a, 100, MPI_DOUBLE, 1, 17, MPI_COMM_WORLD );
  }
  else if( myrank == 1 ) {
      /* Receive, then send a message */
      MPI_Recv( b, 100, MPI_DOUBLE, 0, 17, MPI_COMM_WORLD, &status );
      MPI_Send( a, 100, MPI_DOUBLE, 0, 19, MPI_COMM_WORLD );
  }
  Both ranks block in MPI_Recv, each waiting for
  a message the other has not sent yet.
• How to remove the deadlock? (one fix is
  sketched below)
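One possible fix (a sketch, not from the slides): reverse the order
on one rank so that every receive has a matching send in flight:

    if( myrank == 0 ) {
        /* Send first, then receive */
        MPI_Send( a, 100, MPI_DOUBLE, 1, 17, MPI_COMM_WORLD );
        MPI_Recv( b, 100, MPI_DOUBLE, 1, 19, MPI_COMM_WORLD, &status );
    }
    else if( myrank == 1 ) {
        /* Receive first, then send */
        MPI_Recv( b, 100, MPI_DOUBLE, 0, 17, MPI_COMM_WORLD, &status );
        MPI_Send( a, 100, MPI_DOUBLE, 0, 19, MPI_COMM_WORLD );
    }

MPI_Sendrecv combines both calls into one and avoids this kind of
deadlock altogether.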
P2P Communication (cont.)
• When non-blocking communication is used, the
  program continues its execution.
• A program can use a blocking send while the
  receiver uses a non-blocking receive, or
  vice versa.
• Very similar function calls:
  int MPI_Isend(void *buf, int count, MPI_Datatype dtype, int dest,
                int tag, MPI_Comm comm, MPI_Request *request);
• The request handle can be used later,
  e.g. with MPI_Wait, MPI_Test ...
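A minimal sketch of how the pieces fit together (the work-overlap
function is a hypothetical placeholder):

    MPI_Request request;
    MPI_Status status;
    int data = 42;

    /* start the send and return immediately */
    MPI_Isend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);

    do_something_useful();           /* hypothetical: overlap computation */

    MPI_Wait(&request, &status);     /* only now is `data` safe to reuse */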
P2P Communication (cont.)
• Demo time – non_blocking
• There are other modes of sending (but not
  receiving!) – check out the documentation
  for the synchronous, buffered and ready-mode
  sends in addition to the standard one we
  have seen here.
P2P Communication (cont.)
• Keep in mind that each send/receive is
  costly – try to piggyback.
• You can send different data types at the
  same time – e.g. integers, floats,
  characters, doubles ... using MPI_Pack. This
  function fills an intermediate buffer, which
  you then send (see the sketch below).
  int MPI_Pack(void *inbuf, int incount, MPI_Datatype datatype,
               void *outbuf, int outsize, int *position,
               MPI_Comm comm)
  MPI_Send(buffer, count, MPI_PACKED, dest, tag,
           MPI_COMM_WORLD);
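A hedged sketch of packing an int and a double into one message
(buffer size, dest and tag are illustrative assumptions):

    char buffer[100];
    int position = 0;
    int n = 5;
    double x = 3.14;
    int dest = 1, tag = 0;

    /* each call appends to buffer and advances position */
    MPI_Pack(&n, 1, MPI_INT, buffer, sizeof(buffer), &position,
             MPI_COMM_WORLD);
    MPI_Pack(&x, 1, MPI_DOUBLE, buffer, sizeof(buffer), &position,
             MPI_COMM_WORLD);

    /* position is now the packed size in bytes */
    MPI_Send(buffer, position, MPI_PACKED, dest, tag, MPI_COMM_WORLD);

The receiver does an MPI_Recv with type MPI_PACKED, then calls
MPI_Unpack in the same order.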
P2P Communication (cont.)
• You can also send your own structs
  (user-defined types). See the documentation.
Collective Communication
• Works like point-to-point, except you send
  to all other processors.
• MPI_Barrier(comm) blocks until every
  processor calls it – synchronizes everyone.
• The broadcast operation MPI_Bcast copies the
  data value in one processor to the others.
• Demo time – bcast_example (sketched below)
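A minimal reconstruction of what bcast_example does (rank obtained
as in the hello world sketch):

    int value = 0;
    if (rank == 0)
        value = 42;                  /* only the root has the data */

    /* root (rank 0) broadcasts; every rank passes the same arguments */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* after the call, value == 42 on every rank */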
Collective Communication
• MPI_Reduce collects data from the other
  processors, performs a reduction operation
  on them and returns a single value to the
  root.
• Demo time – reduce_op example (sketched
  below)
• There are MPI-defined reduce operations, but
  you can define your own.
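A hedged sketch of a reduce_op-style example, using the built-in
MPI_SUM operation (the per-rank contribution is an illustrative
assumption):

    int local = rank + 1;            /* each rank's contribution */
    int total = 0;

    /* combine every rank's `local` into `total` on rank 0 */
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over all ranks: %d\n", total);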
Collective Communication – MPI_Gather
• gather and scatter operations – like what
  their names imply
• Gather – like every process sending its send
  buffer and the root process receiving them
  all
• Demo time – gather_example (sketched below)
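A minimal reconstruction of gather_example (the array bound is an
illustrative assumption; rank as in the hello world sketch):

    int mine = rank * rank;          /* each rank's contribution */
    int all[64];                     /* assumes at most 64 ranks */

    /* everyone sends one int; rank 0 receives them in rank order */
    MPI_Gather(&mine, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* on rank 0, all[i] now holds rank i's value */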
Collective Communication – MPI_Scatter
• Similar to MPI_Gather, but here data is sent
  from the root to the other processors.
• Like gather, you could accomplish it by
  having the root call MPI_Send repeatedly and
  the others call MPI_Recv.
• Demo time – scatter_example (sketched below)
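A minimal reconstruction of scatter_example (array bound and values
are illustrative assumptions; rank and size as in the hello world
sketch):

    int chunks[64];                  /* assumes at most 64 ranks */
    int mine, i;

    if (rank == 0)
        for (i = 0; i < size; i++)
            chunks[i] = 10 * i;      /* data to hand out */

    /* rank 0 sends one element to each rank, itself included */
    MPI_Scatter(chunks, 1, MPI_INT, &mine, 1, MPI_INT, 0,
                MPI_COMM_WORLD);

    /* every rank now holds its own element in `mine` */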
Collective Communication – More functionality
• Many more functions to lift the hard work
  from you: MPI_Allreduce, MPI_Gatherv,
  MPI_Scan, MPI_Reduce_Scatter ...
• Check out the API documentation.
• The manual pages are your best friend.
Communicators
• Communicators group processors.
• The basic communicator MPI_COMM_WORLD is
  defined for all processors.
• You can create your own communicators to
  group processors, so you can send messages
  to only a subset of all processors (see the
  sketch below).
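A hedged sketch (not from the slides) using MPI_Comm_split to split
MPI_COMM_WORLD into even-rank and odd-rank communicators:

    MPI_Comm halfcomm;
    int color = rank % 2;            /* 0 = even ranks, 1 = odd ranks */

    /* ranks with the same color land in the same new communicator */
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &halfcomm);

    /* collectives on halfcomm involve only half of the processes */
    MPI_Barrier(halfcomm);
    MPI_Comm_free(&halfcomm);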
More Advanced Stuff
• Parallel I/O – having a single node do all
  the reading from disk is slow; you can have
  each node use its local disk.
• One-sided communications – remote memory
  access.
• Both are MPI-2 capabilities. Check your MPI
  implementation to see how much of them is
  implemented.
References
[1] Wikipedia articles in general, including but not limited to:
    http://en.wikipedia.org/wiki/Message_Passing_Interface
[2] An excellent guide at NCSA (National Center for
    Supercomputing Applications):
    http://webct.ncsa.uiuc.edu:8900/public/MPI/
[3] OpenMPI official web site:
    http://www.open-mpi.org/
The End

Thanks for your time. Any questions?