MPI
Rohit Banga, Prakher Anand, K Swagat, Manoj Gupta
Advanced Computer Architecture, Spring 2010
ORGANIZATION
GOALS
MESSAGE PASSING INTERFACE
GOALS OF MPI SPECIFICATION
REASONS FOR USING MPI
BASIC MODEL
COMMUNICATORS
COMMUNICATORS AND GROUPS
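A minimal sketch of how new communicators are typically derived from a group of processes using MPI_Comm_split; the even/odd split and the variable names here are illustrative, not from the original deck:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Processes passing the same "color" land in the same new communicator;
       the "key" argument orders ranks inside it. Here: even ranks vs. odd ranks. */
    int color = world_rank % 2;
    MPI_Comm subcomm;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &subcomm);

    int sub_rank;
    MPI_Comm_rank(subcomm, &sub_rank);
    printf("world rank %d -> rank %d in sub-communicator %d\n",
           world_rank, sub_rank, color);

    MPI_Comm_free(&subcomm);
    MPI_Finalize();
    return 0;
}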
VIRTUAL TOPOLOGIES
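A minimal sketch of a Cartesian virtual topology, assuming MPI_Init has already been called; the 2-D grid and the variable names are illustrative:

    int dims[2] = {0, 0};      /* zeros let MPI_Dims_create choose the factorization */
    int periods[2] = {0, 0};   /* non-periodic in both dimensions */
    int p, grid_rank, coords[2];
    MPI_Comm grid_comm;

    MPI_Comm_size(MPI_COMM_WORLD, &p);
    MPI_Dims_create(p, 2, dims);            /* e.g. 6 processes -> a 3 x 2 grid */
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &grid_comm);  /* reorder allowed */
    MPI_Comm_rank(grid_comm, &grid_rank);
    MPI_Cart_coords(grid_comm, grid_rank, 2, coords);   /* my (row, column) position */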
SEMANTICS
Format: rc = MPI_Xxxxx(parameter, ...)
Example: rc = MPI_Bsend(&buf, count, type, dest, tag, comm)
Error code: returned as rc; equal to MPI_SUCCESS if the call succeeded.
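For instance, the return code can be tested against MPI_SUCCESS; a sketch built on the slide's MPI_Bsend example (the error handling shown is illustrative):

    int rc = MPI_Bsend(&buf, count, type, dest, tag, comm);
    if (rc != MPI_SUCCESS) {
        fprintf(stderr, "MPI_Bsend failed with error code %d\n", rc);
        MPI_Abort(comm, rc);   /* bring down all processes in the communicator */
    }

Note that by default MPI aborts on error (MPI_ERRORS_ARE_FATAL), so return-code checks only fire once the error handler is changed to MPI_ERRORS_RETURN.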
MPI PROGRAM STRUCTURE
MPI FUNCTIONS – MINIMAL SUBSET
MPI_Init, MPI_Comm_size, MPI_Comm_rank, MPI_Send, MPI_Recv, MPI_Finalize
CLASSIFICATION OF MPI ROUTINES
MPI_INIT
Initializes the MPI execution environment; call it once, before any other MPI routine.
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    …
}
MPI_COMM_SIZE
Returns the number of processes in the given communicator.
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int p;
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    …
}
MPI_COMM_RANK
Returns the rank (from 0 to size-1) of the calling process within the communicator.
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int p;
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    …
}
MPI_FINALIZE
Terminates the MPI execution environment; it should be the last MPI call in the program.
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int p;
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("no. of processes: %d rank: %d\n", p, rank);
    MPI_Finalize();
}
HOW TO COMPILE THIS
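The exact command depends on the MPI distribution, but common implementations (MPICH, Open MPI) install a C compiler wrapper named mpicc; assuming the program above is saved as hello.c:

    mpicc -o hello hello.c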
HOW TO RUN THIS
MPIRUN
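A typical launch line, assuming the hello executable built above; the process count is illustrative:

    mpirun -np 4 ./hello

Open MPI also accepts -hostfile <file> to name the machines on which to run.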
A NOTE ON IMPLEMENTATION
[Diagram: two processes, each calling MPI_Init, which starts that process's MPI thread]
SOME MORE FUNCTIONS
POINT TO POINT COMMUNICATION
BLOCKING SEND/RECEIVE
MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
A blocking send returns only when the application buffer is safe to reuse; a blocking receive returns only after the data has arrived in the application buffer.
BLOCKING SEND/RECEIVE (CONTD.)
[Figure: data path from the sending process's application buffer on processor 1, through the system buffers, to the receiving process's application buffer on processor 2]
A WORD ABOUT SPECIFICATION
BLOCKING SEND/RECEIVE (CONTD.)
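A minimal sketch of a blocking exchange between ranks 0 and 1; the buffer name and tag value are illustrative:

    int rank, number;
    MPI_Status status;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        number = 42;
        /* returns once the send buffer is safe to reuse */
        MPI_Send(&number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* blocks until the matching message has arrived */
        MPI_Recv(&number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 received %d\n", number);
    }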
NON-BLOCKING SEND/RECEIVE
MPI_Isend(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)
MPI_Irecv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Request *request)
Both return immediately; completion must be checked later with MPI_Wait or MPI_Test, and the buffer must not be touched until then.
NON-BLOCKING SEND/RECEIVE (CONTD.)
[Figure: as in the blocking case, data moves from the sending process's application buffer through the system buffers to the receiving process's application buffer, while the application continues executing]
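A minimal sketch of the same exchange with non-blocking calls; the work overlapped with communication is illustrative:

    int rank, number = 0;
    MPI_Request request;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        number = 42;
        MPI_Isend(&number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);
        /* ... do useful work while the send progresses ... */
        MPI_Wait(&request, MPI_STATUS_IGNORE);   /* buffer reusable only after this */
    } else if (rank == 1) {
        MPI_Irecv(&number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &request);
        /* ... do useful work while the receive progresses ... */
        MPI_Wait(&request, MPI_STATUS_IGNORE);   /* data valid only after this */
        printf("rank 1 received %d\n", number);
    }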
STANDARD MODE
MPI decides whether the outgoing message is buffered: MPI_Send may return before a matching receive is posted (if the message is buffered), or it may block until the receive starts.
SYNCHRONOUS MODE
MPI_Ssend completes only after the matching receive has started, so completion tells the sender that the receiver has reached its receive call; no extra buffering is required.
BUFFERED MODE
MPI_Bsend copies the message into a user-supplied buffer and returns immediately; it is an error if insufficient buffer space has been attached.
BUFFER MANAGEMENT
A buffer must be attached before MPI_Bsend can use it, and it can be detached and re-attached:
MPI_Buffer_attach(malloc(BUFFSIZE), BUFFSIZE);
/* a buffer of BUFFSIZE bytes can now be used by MPI_Bsend */
MPI_Buffer_detach(&buff, &size);
/* buffer size reduced to zero */
MPI_Buffer_attach(buff, size);
/* buffer of BUFFSIZE bytes available again */
READY MODE
MPI_Rsend may be called only when the matching receive has already been posted; this tells the system it can skip the usual handshake, but the behavior is undefined if no receive is pending.
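All four send modes share MPI_Send's argument list and differ only in completion semantics; a sketch (buf, count, and the other arguments as declared elsewhere):

    MPI_Send (buf, count, datatype, dest, tag, comm);   /* standard: MPI chooses buffering        */
    MPI_Ssend(buf, count, datatype, dest, tag, comm);   /* synchronous: waits for the receive     */
    MPI_Bsend(buf, count, datatype, dest, tag, comm);   /* buffered: needs MPI_Buffer_attach      */
    MPI_Rsend(buf, count, datatype, dest, tag, comm);   /* ready: receive must already be posted  */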
ORDER AND FAIRNESS
Messages sent between the same pair of processes on the same communicator are non-overtaking: they are matched in the order they were sent. MPI makes no fairness guarantee when several senders compete for one receiver.
EXAMPLE OF NON-OVERTAKING MESSAGES
CALL MPI_COMM_RANK(comm, rank, ierr)
IF (rank.EQ.0) THEN
    CALL MPI_BSEND(buf1, count, MPI_REAL, 1, tag, comm, ierr)
    CALL MPI_BSEND(buf2, count, MPI_REAL, 1, tag, comm, ierr)
ELSE ! rank.EQ.1
    CALL MPI_RECV(buf1, count, MPI_REAL, 0, MPI_ANY_TAG, comm, status, ierr)
    CALL MPI_RECV(buf2, count, MPI_REAL, 0, tag, comm, status, ierr)
END IF
! buf1 is guaranteed to arrive before buf2: messages between the same pair of processes do not overtake each other.
EXAMPLE OF INTERTWINED MESSAGES
CALL MPI_COMM_RANK(comm, rank, ierr)
IF (rank.EQ.0) THEN
    CALL MPI_BSEND(buf1, count, MPI_REAL, 1, tag1, comm, ierr)
    CALL MPI_SSEND(buf2, count, MPI_REAL, 1, tag2, comm, ierr)
ELSE ! rank.EQ.1
    CALL MPI_RECV(buf1, count, MPI_REAL, 0, tag2, comm, status, ierr)
    CALL MPI_RECV(buf2, count, MPI_REAL, 0, tag1, comm, status, ierr)
END IF
! Non-overtaking does not constrain this case: the receives match by tag, so they complete in the opposite order from the sends.
DEADLOCK EXAMPLE
CALL MPI_COMM_RANK(comm, rank, ierr)
IF (rank.EQ.0) THEN
    CALL MPI_RECV(recvbuf, count, MPI_REAL, 1, tag, comm, status, ierr)
    CALL MPI_SEND(sendbuf, count, MPI_REAL, 1, tag, comm, ierr)
ELSE ! rank.EQ.1
    CALL MPI_RECV(recvbuf, count, MPI_REAL, 0, tag, comm, status, ierr)
    CALL MPI_SEND(sendbuf, count, MPI_REAL, 0, tag, comm, ierr)
END IF
! Both processes block in MPI_RECV waiting for a message that neither has sent yet: deadlock.
EXAMPLE OF BUFFERING
CALL MPI_COMM_RANK(comm, rank, ierr)
IF (rank.EQ.0) THEN
    CALL MPI_SEND(buf1, count, MPI_REAL, 1, tag, comm, ierr)
    CALL MPI_RECV(recvbuf, count, MPI_REAL, 1, tag, comm, status, ierr)
ELSE ! rank.EQ.1
    CALL MPI_SEND(sendbuf, count, MPI_REAL, 0, tag, comm, ierr)
    CALL MPI_RECV(buf2, count, MPI_REAL, 0, tag, comm, status, ierr)
END IF
! Both processes send first, so this completes only if the standard-mode sends are buffered; it may deadlock otherwise.
COLLECTIVE COMMUNICATIONS
COLLECTIVE ROUTINES
Collective routines involve all processes in a communicator, and every process must make the same call. Common examples: MPI_Barrier (synchronization), MPI_Bcast (one-to-all), MPI_Scatter and MPI_Gather (distribute and collect data), MPI_Reduce and MPI_Allreduce (combine values with an operation).
MPI OPERATIONS
MPI_OP        Operator
MPI_MIN       minimum
MPI_SUM       sum
MPI_PROD      product
MPI_MAX       maximum
MPI_LAND      logical and
MPI_BAND      bitwise and
MPI_LOR       logical or
MPI_BOR       bitwise or
MPI_LXOR      logical xor
MPI_BXOR      bitwise xor
MPI_MAXLOC    max value and location
MPI_MINLOC    min value and location
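A minimal sketch of a collective reduction using one of the operations above; the values and the choice of root are illustrative:

    int rank;
    double local = 1.0, global = 0.0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Combine one double from every process with MPI_SUM; rank 0 gets the result */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum over all ranks = %f\n", global);

    /* Broadcast the result from rank 0 back to every process */
    MPI_Bcast(&global, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);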
Learn by Examples
Parallel Trapezoidal Rule

Output: estimate of the integral of f(x) from a to b using the trapezoidal rule with n trapezoids.

Algorithm:
1. Each process calculates "its" interval of integration.
2. Each process estimates the integral of f(x) over its interval using the trapezoidal rule.
3a. Each process != 0 sends its integral to process 0.
3b. Process 0 sums the integrals received from the individual processes and prints the result.

Notes:
1. f(x), a, b, and n are all hardwired.
2. The number of processes (p) should evenly divide the number of trapezoids (n = 1024).
Parallelizing the Trapezoidal Rule

#include <stdio.h>
#include "mpi.h"

int main(int argc, char** argv) {
    int        my_rank;    /* My process rank              */
    int        p;          /* The number of processes      */
    double     a = 0.0;    /* Left endpoint                */
    double     b = 1.0;    /* Right endpoint               */
    int        n = 1024;   /* Number of trapezoids         */
    double     h;          /* Trapezoid base length        */
    double     local_a;    /* Left endpoint, my process    */
    double     local_b;    /* Right endpoint, my process   */
    int        local_n;    /* Number of trapezoids for my calculation */
    double     integral;   /* Integral over my interval    */
    double     total;      /* Total integral               */
    int        source;     /* Process sending its integral */
    int        dest = 0;   /* All messages go to 0         */
    int        tag = 0;
    MPI_Status status;
Continued…

    double Trap(double local_a, double local_b, int local_n, double h);  /* Calculate local integral */

    MPI_Init(&argc, &argv);
    MPI_Barrier(MPI_COMM_WORLD);
    double elapsed_time = -MPI_Wtime();
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    h = (b - a) / n;    /* h is the same for all processes */
    local_n = n / p;    /* So is the number of trapezoids  */

    /* Each process' interval of integration has length local_n*h, so my interval starts at: */
    local_a = a + my_rank * local_n * h;
    local_b = local_a + local_n * h;
    integral = Trap(local_a, local_b, local_n, h);
Continued…

    /* Add up the integrals calculated by each process */
    if (my_rank == 0) {
        total = integral;
        for (source = 1; source < p; source++) {
            MPI_Recv(&integral, 1, MPI_DOUBLE, source, tag, MPI_COMM_WORLD, &status);
            total = total + integral;
        } /* End for */
    } else {
        MPI_Send(&integral, 1, MPI_DOUBLE, dest, tag, MPI_COMM_WORLD);
    }

    MPI_Barrier(MPI_COMM_WORLD);
    elapsed_time += MPI_Wtime();

    /* Print the result */
    if (my_rank == 0) {
        printf("With n = %d trapezoids, our estimate ", n);
        printf("of the integral from %lf to %lf = %lf\n", a, b, total);
        printf("time taken: %lf\n", elapsed_time);
    }
Continued…

    /* Shut down MPI */
    MPI_Finalize();
    return 0;
} /* main */

double Trap(double local_a, double local_b, int local_n, double h) {
    double integral;    /* Store result in integral */
    double x;
    int    i;
    double f(double x); /* function we're integrating */

    integral = (f(local_a) + f(local_b)) / 2.0;
    x = local_a;
    for (i = 1; i <= local_n - 1; i++) {
        x = x + h;
        integral = integral + f(x);
    }
    integral = integral * h;
    return integral;
} /* Trap */
Continued…

double f(double x) {
    double return_val;
    /* f(x) = 4/(1+x^2); its integral from 0 to 1 is pi, so the program estimates pi. */
    return_val = 4 / (1 + x * x);
    return return_val;
} /* f */
Program 2: each process other than the root generates a random value less than 1 and sends it to the root; the root sums the values and displays the result.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int myrank, p;
    int tag = 0, dest = 0;
    double randIn, randOut;
    int source;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    if (myrank == 0) { /* I am the root */
        double total = 0, average = 0;
        for (source = 1; source < p; source++) {
            MPI_Recv(&randIn, 1, MPI_DOUBLE, source, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
            printf("Root: received number %f from process %d\n", randIn, source);
            total += randIn;
        } /* End for */
        average = total / (p - 1);
        printf("Sum = %f, average = %f\n", total, average);  /* display the result */
    } /* End if */
    else { /* I am other than the root */
        srand48((long int) myrank);   /* seed the generator with my rank */
        randOut = drand48();          /* uniform random value in [0,1)   */
        printf("randOut = %f, myrank = %d\n", randOut, myrank);
        MPI_Send(&randOut, 1, MPI_DOUBLE, dest, tag, MPI_COMM_WORLD);
    } /* End if-else */

    MPI_Finalize();
    return 0;
}
THANK YOU

Editor's Notes

1. Can work with shared-memory architectures also.
2. Why MPI is still being used.
3. Many vendors can compete to provide better implementations.
4. Boring topic, but fundamental for understanding the basics.
5. Safe: different libraries can work together.
6. Different return codes for different functions.
7. To start coding we need to use these functions.
8. Mention the case where the buffer space might not be available.
9. The buffer is used only during buffered-mode communication.
10. A ready call indicates to the system that a receive has already been posted.
11. Built-in collective operations: Reduce, Bcast, datatypes.