MPI: Message Passing Interface
Talk Overview
Background on MPI
Documentation
Hello world in MPI
Basic communications
Simple send and receive program
Background on MPI
MPI - Message Passing Interface
Library standard defined by a committee of
vendors, implementers, and parallel
programmers
Used to create parallel programs based on
message passing
100% portable: one standard, many
implementations
Available on almost all parallel machines in C
and Fortran
Over 100 routines in the standard, but only 6 are
needed for basic message passing
Documentation
MPI home page (contains the library
standard): www.mcs.anl.gov/mpi
Books
Tutorials: many online, just do a search
MPI Implementations
Most parallel supercomputer vendors provide
optimized implementations
Others:
www.lam-mpi.org
www-unix.mcs.anl.gov/mpi/mpich
GLOBUS:
www.globus.org/mpi/
Messages
Simplest message: an array of data of one
type.
Predefined types correspond to commonly
used types in a given language
MPI_REAL (Fortran), MPI_FLOAT (C)
MPI_DOUBLE_PRECISION (Fortran),
MPI_DOUBLE (C)
MPI_INTEGER (Fortran), MPI_INT (C)
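For example (a sketch in C, not from the original slides), the MPI datatype passed to a call must describe the element type of the buffer:

/* The MPI datatype argument describes the element type of the buffer */
double values[8];  /* passed to MPI as (values, 8, MPI_DOUBLE) */
int    flags[4];   /* passed to MPI as (flags, 4, MPI_INT)     */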
Communicators
Communicator
A collection of processors working on some
part of a parallel job
Used as a parameter for most MPI calls
MPI_COMM_WORLD includes all of the
processors in your job
Processors within a communicator are
assigned numbers (ranks) 0 to n-1
Can create subsets of MPI_COMM_WORLD
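One standard way to create such a subset is MPI_Comm_split; a minimal sketch in C (the parity split is illustrative, not from the slides):

/* Split MPI_COMM_WORLD into two communicators by rank parity.
   Processors passing the same "color" land in the same new communicator;
   "iam" is the rank in MPI_COMM_WORLD, as in the examples below. */
MPI_Comm half;
int color = iam % 2;
MPI_Comm_split(MPI_COMM_WORLD, color, iam, &half);
/* ... use "half" wherever a communicator is expected ... */
MPI_Comm_free(&half);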
Include files
The MPI include file
C: mpi.h
Fortran: mpif.h (an f90 module is a good
place for this)
Defines many constants used within MPI
programs
In C, also defines the interfaces for the functions
Compilers know where to find the include files
/* the MPI include file */
#include <mpi.h>

int nPEs, ierr, iam;

/* Initialize MPI */
ierr = MPI_Init(&argc, &argv);

/* How many processors (nPEs) are there? */
ierr = MPI_Comm_size(MPI_COMM_WORLD, &nPEs);

/* What processor am I (what is my rank)? */
ierr = MPI_Comm_rank(MPI_COMM_WORLD, &iam);

...

ierr = MPI_Finalize();
! MPI include file
include 'mpif.h'

integer nPEs, ierr, iam

! Initialize MPI
call MPI_Init(ierr)

! How many processors (nPEs) are there?
call MPI_Comm_size(MPI_COMM_WORLD, nPEs, ierr)

! What processor am I (what is my rank)?
call MPI_Comm_rank(MPI_COMM_WORLD, iam, ierr)

...

call MPI_Finalize(ierr)
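Putting the calls above together gives a complete "hello world" program; this C version is a minimal sketch assembled from the snippets on the previous slides:

/* hello.c: minimal MPI hello world */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int nPEs, iam;

    MPI_Init(&argc, &argv);               /* initialize MPI       */
    MPI_Comm_size(MPI_COMM_WORLD, &nPEs); /* how many processors? */
    MPI_Comm_rank(MPI_COMM_WORLD, &iam);  /* what is my rank?     */

    printf("Hello from processor %d of %d\n", iam, nPEs);

    MPI_Finalize();                       /* shut down MPI        */
    return 0;
}

Typically compiled with mpicc hello.c -o hello and launched with mpirun -np 4 ./hello, though compiler wrappers and launchers vary by implementation.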
Basic Communication
Data values are transferred from one
processor to another
One processor sends the data
Another receives the data
Synchronous
Call does not return until the message is
sent or received
Asynchronous
Call indicates a start of send or receive, and
another call is made to determine if finished
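A sketch of the asynchronous style in C, using the standard MPI_Isend/MPI_Wait pair (these routines are not covered on these slides):

/* Start a non-blocking send, do other work, then wait for completion.
   "destination" and "tag" are assumed to be declared ints. */
MPI_Request request;
MPI_Status  status;
double buffer[100];

MPI_Isend(buffer, 100, MPI_DOUBLE, destination, tag,
          MPI_COMM_WORLD, &request);
/* ... useful computation can overlap the communication here ... */
MPI_Wait(&request, &status);   /* returns once the send has finished */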
Synchronous Send
C
MPI_Send(&buffer, count, datatype, destination,
         tag, communicator);
Fortran
call MPI_Send(buffer, count, datatype,
              destination, tag, communicator, ierr)
call MPI_Send(buffer, count, datatype,
              destination, tag, communicator, ierr)
Buffer: The data array to be sent
Count: Length of data array (in elements, 1 for scalars)
Datatype: Type of data, for example MPI_DOUBLE_PRECISION, MPI_INT, etc.
Destination: Destination processor number (within given communicator)
Tag: Message type (arbitrary integer)
Communicator: Your set of processors
Ierr: Error return (Fortran only)
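A concrete call in C might look like this (a sketch; the buffer name and values are illustrative):

/* Send 10 doubles to processor 1 with message tag 99 */
double work[10];
MPI_Send(work, 10, MPI_DOUBLE, 1, 99, MPI_COMM_WORLD);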
Synchronous Receive
C
MPI_Recv(&buffer, count, datatype, source,
         tag, communicator, &status);
Fortran
call MPI_Recv(buffer, count, datatype,
              source, tag, communicator, status, ierr)
Fortran status declaration:
integer status(MPI_STATUS_SIZE)
call MPI_Recv(buffer, count, datatype,
              source, tag, communicator,
              status, ierr)
Buffer: The data array to be received
Count: Maximum length of data array (in elements, 1 for scalars)
Datatype: Type of data, for example MPI_DOUBLE_PRECISION, MPI_INT, etc.
Source: Source processor number (within given communicator)
Tag: Message type (arbitrary integer)
Communicator: Your set of processors
Status: Information about the message
Ierr: Error return (Fortran only)
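In C, the status argument can be inspected after the call returns; this sketch uses the standard MPI_ANY_SOURCE, MPI_ANY_TAG, and MPI_Get_count facilities, which are not covered on the slide:

/* Receive up to 100 doubles from any sender, then ask the status
   who sent the message and how many elements actually arrived. */
double inbox[100];
MPI_Status status;
int nreceived;

MPI_Recv(inbox, 100, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
         MPI_COMM_WORLD, &status);
MPI_Get_count(&status, MPI_DOUBLE, &nreceived);
/* status.MPI_SOURCE and status.MPI_TAG hold the sender's rank and the tag */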
Summary
call MPI_Init(ierr)
call MPI_Comm_rank(MPI_COMM_WORLD, myid, ierr)
call MPI_Comm_size(MPI_COMM_WORLD, numprocs, ierr)
call MPI_Send(buffer, count, MPI_INTEGER, destination,
              tag, MPI_COMM_WORLD, ierr)
call MPI_Recv(buffer, count, MPI_INTEGER, source, tag,
              MPI_COMM_WORLD, status, ierr)
call MPI_Finalize(ierr)
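The overview also promised a simple send and receive program; this C version is a minimal sketch combining the six basic calls (the value and tag are illustrative):

/* sendrecv.c: processor 0 sends one integer to processor 1 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int nPEs, iam, value, tag = 42;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nPEs);
    MPI_Comm_rank(MPI_COMM_WORLD, &iam);

    if (iam == 0) {                /* sender */
        value = 123;
        MPI_Send(&value, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);
    } else if (iam == 1) {         /* receiver */
        MPI_Recv(&value, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
        printf("Processor 1 received %d from processor 0\n", value);
    }

    MPI_Finalize();
    return 0;
}

Run with at least two processors, e.g. mpirun -np 2 ./sendrecv.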