
Parallel Programming

Using Basic MPI


Presented by
Timothy H. Kaiser, Ph.D.
San Diego Supercomputer Center

Talk Overview

Background on MPI
Documentation
Hello world in MPI
Basic communications
Simple send and receive program

Background on MPI
MPI - Message Passing Interface
Library standard defined by a committee of
vendors, implementers, and parallel
programmers
Used to create parallel programs based on
message passing
100% portable: one standard, many
implementations
Available on almost all parallel machines in C
and Fortran
Over 100 routines in the standard, but only 6 basic calls are needed to get started

Documentation
MPI home page (contains the library standard): www.mcs.anl.gov/mpi

Books
"MPI: The Complete Reference" by Snir, Otto, Huss-Lederman, Walker, and Dongarra, MIT Press (also available in PostScript and HTML)
"Using MPI" by Gropp, Lusk, and Skjellum, MIT Press

Tutorials
Many are available online; just do a search

MPI Implementations
Most parallel supercomputer vendors provide
optimized implementations
Others:
www.lam-mpi.org
www-unix.mcs.anl.gov/mpi/mpich
GLOBUS:
www.globus.org/mpi/

Key Concepts of MPI


Used to create parallel programs based on
message passing
Normally the same program is running on
several different processors
Processors communicate using message
passing
Typical methodology:
start job on n processors
do i = 1 to j
  each processor does some calculation
  pass messages between processors
end do
end job

Messages
Simplest message: an array of data of one type
Predefined types correspond to commonly used types in a given language:
MPI_REAL (Fortran), MPI_FLOAT (C)
MPI_DOUBLE_PRECISION (Fortran), MPI_DOUBLE (C)
MPI_INTEGER (Fortran), MPI_INT (C)
Users can define more complex (derived) types and send them as packages
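As a rough sketch of a user-defined type, MPI_Type_contiguous can describe a small package of contiguous elements; the count, the variable names, and the use of rank 1 and tag 99 below are purely illustrative, and the fragment assumes MPI has already been initialized:

MPI_Datatype fivedoubles;
double x[5] = {0};
/* build and register a type describing five contiguous doubles */
MPI_Type_contiguous(5, MPI_DOUBLE, &fivedoubles);
MPI_Type_commit(&fivedoubles);
/* send one "package" of that type to processor 1 with tag 99 */
MPI_Send(x, 1, fivedoubles, 1, 99, MPI_COMM_WORLD);
MPI_Type_free(&fivedoubles);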

Communicators
Communicator
A collection of processors working on some
part of a parallel job
Used as a parameter for most MPI calls
MPI_COMM_WORLD includes all of the
processors in your job
Processors within a communicator are
assigned numbers (ranks) 0 to n-1
Can create subsets of MPI_COMM_WORLD
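As a sketch of creating such a subset, MPI_Comm_split partitions an existing communicator by a "color" value; the even/odd split below is only an illustration and assumes MPI has already been initialized:

int iam, newrank;
MPI_Comm newcomm;
MPI_Comm_rank(MPI_COMM_WORLD, &iam);
/* processors that pass the same color end up in the same new communicator */
MPI_Comm_split(MPI_COMM_WORLD, iam % 2, iam, &newcomm);
/* within the subset, ranks again run from 0 to n-1 */
MPI_Comm_rank(newcomm, &newrank);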

Include files
The MPI include file
C: mpi.h
Fortran: mpif.h (an f90 module is a good place for this)
Defines many constants used within MPI programs
In C, it also defines the interfaces for the functions
The compiler wrappers supplied with MPI implementations (for example, mpicc and mpif90) know where to find the include files

Minimal MPI program


Every MPI program needs these
C version

/* the mpi include file */
#include <mpi.h>
int nPEs, ierr, iam;
/* Initialize MPI */
ierr = MPI_Init(&argc, &argv);
/* How many processors (nPEs) are there? */
ierr = MPI_Comm_size(MPI_COMM_WORLD, &nPEs);
/* What processor am I (what is my rank)? */
ierr = MPI_Comm_rank(MPI_COMM_WORLD, &iam);
...
ierr = MPI_Finalize();

In C, MPI routines are functions and return an error value

Minimal MPI program


Every MPI program needs these
Fortran version

! MPI include file
include 'mpif.h'
integer nPEs, ierr, iam
! Initialize MPI
call MPI_Init(ierr)
! How many processors (nPEs) are there?
call MPI_Comm_size(MPI_COMM_WORLD, nPEs, ierr)
! What processor am I (what is my rank)?
call MPI_Comm_rank(MPI_COMM_WORLD, iam, ierr)
...
call MPI_Finalize(ierr)

In Fortran, MPI routines are subroutines, and the last parameter is an error value

Exercise 1 : Hello World


Write a parallel hello world program
Initialize MPI
Have each processor print out Hello,
World and its processor number (rank)
Quit MPI
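One possible solution, as a minimal sketch in C (the file name, variable names, and printf format are illustrative, not part of the exercise):

/* hello.c: each processor prints a greeting and its rank */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int nPEs, iam;
    MPI_Init(&argc, &argv);                /* initialize MPI */
    MPI_Comm_size(MPI_COMM_WORLD, &nPEs);  /* how many processors */
    MPI_Comm_rank(MPI_COMM_WORLD, &iam);   /* which one am I (my rank) */
    printf("Hello, World from processor %d of %d\n", iam, nPEs);
    MPI_Finalize();                        /* quit MPI */
    return 0;
}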


Basic Communication
Data values are transferred from one
processor to another
One processor sends the data
Another receives the data
Synchronous
Call does not return until the message is
sent or received
Asynchronous
Call indicates a start of send or receive, and
another call is made to determine if finished
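The asynchronous style uses the nonblocking routines MPI_Isend and MPI_Irecv plus a completion call such as MPI_Wait; the fragment below is only a sketch (buffer name, size, destination, and tag are illustrative, and MPI is assumed to be initialized), with the argument lists mirroring MPI_Send and MPI_Recv shown on the following slides:

MPI_Request request;
MPI_Status status;
int work[1000];
/* start the send; the call returns immediately */
MPI_Isend(work, 1000, MPI_INT, 1, 7, MPI_COMM_WORLD, &request);
/* ... do other computation while the message is in transit ... */
MPI_Wait(&request, &status);   /* now it is safe to reuse work[] */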


Synchronous Send
C
MPI_Send(&buffer, count, datatype, destination,
         tag, communicator);

Fortran
call MPI_Send(buffer, count, datatype,
              destination, tag, communicator, ierr)

The call blocks until the message is on its way

call MPI_Send(buffer, count, datatype,
              destination, tag, communicator, ierr)

Buffer: The data array to be sent
Count: Length of the data array (in elements, 1 for scalars)
Datatype: Type of data, for example MPI_DOUBLE_PRECISION, MPI_INT, etc.
Destination: Destination processor number (within the given communicator)
Tag: Message type (arbitrary integer)
Communicator: Your set of processors
Ierr: Error return (Fortran only)
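For example, a C call sending 100 double-precision values to processor 1 with tag 42 might look like this (the array name and sizes are illustrative):

double a[100];
/* send 100 doubles to rank 1 using tag 42 on the default communicator */
MPI_Send(a, 100, MPI_DOUBLE, 1, 42, MPI_COMM_WORLD);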

Synchronous Receive

C
MPI_Recv(&buffer, count, datatype, source,
         tag, communicator, &status);

Fortran
call MPI_Recv(buffer, count, datatype,
              source, tag, communicator, status, ierr)

The call blocks the program until the message is in the buffer

Status contains information about the incoming message
C
MPI_Status status;

Fortran
integer status(MPI_STATUS_SIZE)

call MPI_Recv(buffer, count, datatype,
              source, tag, communicator,
              status, ierr)

Buffer: The data array to be received
Count: Maximum length of the data array (in elements, 1 for scalars)
Datatype: Type of data, for example MPI_DOUBLE_PRECISION, MPI_INT, etc.
Source: Source processor number (within the given communicator)
Tag: Message type (arbitrary integer)
Communicator: Your set of processors
Status: Information about the message
Ierr: Error return (Fortran only)
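The status argument is most useful with wildcards such as MPI_ANY_SOURCE or MPI_ANY_TAG; a C sketch of querying it (the buffer name and size are illustrative, and MPI is assumed to be initialized):

int n, src, tag_in, buffer[100];
MPI_Status status;
/* accept a message from any sender with any tag */
MPI_Recv(buffer, 100, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
src = status.MPI_SOURCE;               /* which processor actually sent it */
tag_in = status.MPI_TAG;               /* which tag it arrived with */
MPI_Get_count(&status, MPI_INT, &n);   /* actual number of elements received */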

Exercise 2 : Basic Send and Receive


Write a parallel program to send & receive
data
Initialize MPI
Have processor 0 send an integer to
processor 1
Have processor 1 receive an integer from
processor 0
Both processors print the data
Quit MPI
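One possible solution, as a minimal sketch in C (the tag value, the data value 42, and the variable names are illustrative):

/* sendrecv.c: processor 0 sends an integer to processor 1; both print it */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int nPEs, iam, value = 0, tag = 10;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nPEs);
    MPI_Comm_rank(MPI_COMM_WORLD, &iam);
    if (iam == 0) {
        value = 42;                                           /* illustrative data */
        MPI_Send(&value, 1, MPI_INT, 1, tag, MPI_COMM_WORLD); /* send to rank 1 */
    } else if (iam == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status); /* receive from rank 0 */
    }
    if (iam < 2)
        printf("processor %d has value %d\n", iam, value);
    MPI_Finalize();
    return 0;
}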


Summary

MPI is used to create parallel programs based on message passing
Usually the same program is run on multiple processors
The 6 basic calls in MPI are:

MPI_INIT(ierr)
MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)
MPI_COMM_SIZE(MPI_COMM_WORLD, numprocs, ierr)
MPI_Send(buffer, count, MPI_INTEGER, destination, tag, MPI_COMM_WORLD, ierr)
MPI_Recv(buffer, count, MPI_INTEGER, source, tag, MPI_COMM_WORLD, status, ierr)
MPI_FINALIZE(ierr)
