
Introduction to the Message Passing Interface (MPI)


C. El Amrani

Pr. C. El Amrani - 2024


Introduction
• MPI is the standard programming environment for distributed-memory parallel
computers.
• MPI includes message passing (one process packages information into a message
and sends that message to another process), routines to synchronize processes,
sum numbers distributed among a collection of processes, scatter data across a
collection of processes, and much more.
• MPI was created in the early 1990s. It is distributed in the form of a library. The official specification defines bindings for C and Fortran (C++ bindings were part of earlier versions of the standard); unofficial bindings also exist for languages such as Java and Python.
• The first version, MPI 1.0, was released in May 1994; the latest version, MPI 4.0, was released in June 2021.
• There are several implementations of MPI in common use, the most widely used being Open MPI and MPICH (the older LAM/MPI has since been superseded by Open MPI). They support a wide range of parallel computers, including Linux clusters, NUMA computers, and SMPs.
• http://www.mcs.anl.gov/research/projects/mpi/
• https://www.mpi-forum.org/



Concepts
• The basic idea of passing a message is deceptively simple:
One process sends a message and another one receives it.
• The MPI approach is based on two core elements: process groups and a communication context.
• A process group is a set of processes involved in the computation. As the computation proceeds, the programmer can divide the processes into subgroups and control how the groups interact.
• A communication context provides a mechanism for grouping together sets of related communications.
• At program startup, the runtime system creates a common communicator, containing all the processes, called MPI_COMM_WORLD
• There are 6 fundamental MPI routines



The 6 fundamental MPI routines
MPI_Init()
MPI_Comm_rank()
MPI_Comm_size()
MPI_Send()
MPI_Recv()
MPI_Finalize()



MPI Hello World
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);                  /* starts MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* get current process id (rank) */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* get number of processes */
    printf("Hello world from process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
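To try it, compile with mpicc and launch with mpiexec (both commands are shown on the last slide). The run below is only an illustration, assuming the source file is named hello.c and 4 processes are used; the output lines may appear in any order because the processes run concurrently:

mpicc hello.c -o hello
mpiexec -np 4 ./hello
Hello world from process 0 of 4
Hello world from process 2 of 4
Hello world from process 1 of 4
Hello world from process 3 of 4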



Basic Point-To-Point Message Passing
The point-to-point message-passing routines
in MPI send a message from one process to
another. There are more than 21 functions for
point-to-point communication. The most
commonly used message-passing functions
are the blocking send/receive functions:
MPI_Send() and MPI_Recv().



Basic Point-To-Point Message Passing

- How will data be described?
- How will processes be identified?
- What will it mean for these operations to complete?

Message passing requires the cooperation of the sender and the receiver.



Basic Point-To-Point Message Passing
int MPI_Send(buff, count, MPI_type, dest, tag, Comm);
int MPI_Recv(buff, count, MPI_type, source, tag, Comm, &stat);

- buff: Pointer to a buffer holding items of a type compatible with MPI_type
- int count: The number of items of the indicated type contained in buff
- MPI_type: The type of the items in the buffers. The most commonly used types are:
MPI_DOUBLE, MPI_INT, MPI_LONG and MPI_FLOAT.
- int source: The rank of the process sending the message. The constant MPI_ANY_SOURCE can
be used by MPI_Recv to accept a message from any source.
- int dest: The rank of the process receiving the message.
- int tag: An integer to identify the message involved in the communication. The constant
MPI_ANY_TAG is a wildcard that matches any tag value.
- MPI_Status stat: A structure holding status information on receipt of the message
- MPI_Comm Comm: The MPI communicator; this is usually the default MPI_COMM_WORLD
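As a concrete illustration of how these arguments fit together, here is a minimal sketch, not taken from the slides, that sends a single double from process 0 to process 1 (it must be run with at least 2 processes; the variable names are illustrative):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;
    double x = 0.0;
    MPI_Status stat;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if(rank == 0){
        x = 3.14;
        MPI_Send(&x, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);        /* dest = 1, tag = 0 */
    }
    else if(rank == 1){
        MPI_Recv(&x, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &stat); /* source = 0, tag = 0 */
        printf("Process 1 received %f\n", x);
    }
    MPI_Finalize();
    return 0;
}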



Example
This program is sometimes called a "ping pong" program.

#include <stdio.h>
#include <stdlib.h>
//#include "memory.h" //include file with function prototypes for memory management
#include "mpi.h"
int main(int argc, char **argv){
    int Tag1 = 1, Tag2 = 2;
    int num_procs;                 /* number of processes */
    int ID;                        /* rank of this process */
    int buffer_count = 10;
    long *buffer;
    int i;
    MPI_Status stat;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &ID);
    MPI_Comm_size(MPI_COMM_WORLD, &num_procs);
    if(num_procs != 2) MPI_Abort(MPI_COMM_WORLD, 1);   /* this example needs exactly 2 processes */
    buffer = (long *)malloc(buffer_count * sizeof(long));
    for(i = 0; i < buffer_count; i++)
        buffer[i] = (long) i;



Example (continued)
    if(ID == 0){
        MPI_Send(buffer, buffer_count, MPI_LONG, 1, Tag1, MPI_COMM_WORLD);
        printf("Tag1 sent from 0 to 1\n");
        MPI_Recv(buffer, buffer_count, MPI_LONG, 1, Tag2, MPI_COMM_WORLD, &stat);
        printf("Tag2 received by 0 from 1\n");
    }
    else{
        /* process 1 receives first, matching the send from process 0, so the two processes do not deadlock */
        MPI_Recv(buffer, buffer_count, MPI_LONG, 0, Tag1, MPI_COMM_WORLD, &stat);
        printf("Tag1 received by 1 from 0\n");
        MPI_Send(buffer, buffer_count, MPI_LONG, 0, Tag2, MPI_COMM_WORLD);
        printf("Tag2 sent from 1 to 0\n");
    }
    free(buffer);
    MPI_Finalize();
    return 0;
}



Example
#include "mpi.h"
#include<stdio.h>

int main(int argc, char *argv[])
{
    int id, nb, type = 666, code;
    int message, rec = 0;
    MPI_Status stat;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nb);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);
    code = id * 11;
    if(id == 0)                    /* process 0 collects one message from every other process */
        while(rec++ < nb - 1)
        {   MPI_Recv(&message, 1, MPI_INT, MPI_ANY_SOURCE, type, MPI_COMM_WORLD, &stat);
            printf("I received %d and I am the process %d\n", message, id);
        }
    else                           /* every other process sends its code to process 0 */
        MPI_Send(&code, 1, MPI_INT, 0, type, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
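For instance, in a hypothetical run with 4 processes (this output is illustrative, not from the slides), processes 1, 2 and 3 send the values 11, 22 and 33, and process 0 might print:

I received 22 and I am the process 0
I received 11 and I am the process 0
I received 33 and I am the process 0

Because MPI_ANY_SOURCE is used, the order in which the messages are received is not deterministic.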



Collective Operations
In addition to the point-to-point message-passing routines, MPI includes a
set of operations in which all the processes in the group work together to
carry out a complex communication.



Collective Operations
The most commonly used collective operations include the following:
- MPI_Barrier: A barrier defines a synchronization point at which all
processes must arrive before any of them are allowed to proceed.
- MPI_Bcast: A broadcast sends a message from one process to all the
processes in a group.
- MPI_Reduce: A reduction operation takes a set of values spread out
across a process group and combines them using the indicated binary
operation.
Note: the time spent in each process can be measured using the MPI timing
function double MPI_Wtime().
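The two examples that follow illustrate MPI_Barrier and MPI_Bcast. As a complement, here is a minimal sketch, not taken from the slides, that uses MPI_Reduce to sum one integer per process onto process 0 and MPI_Wtime to measure the time spent in the reduction:

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int id, nb, value, sum;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nb);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);
    value = id * 11;                       /* each process contributes one value */
    t0 = MPI_Wtime();
    MPI_Reduce(&value, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);  /* combine with + onto process 0 */
    t1 = MPI_Wtime();
    if(id == 0)
        printf("Sum = %d, computed in %f seconds\n", sum, t1 - t0);
    MPI_Finalize();
    return 0;
}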



Example
Barriers: synchronize processes; no data is exchanged; no process can proceed past the barrier until all processes have reached it.
#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int id, nb, type = 666, i, code;
    int message;
    MPI_Status stat;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nb);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);
    code = id * 11;
    if(id == 0)
        for(i = 1; i < nb; i++)
        {   MPI_Recv(&message, 1, MPI_INT, i, type, MPI_COMM_WORLD, &stat);
            printf("Received %d and I am process: %d\n", message, id);
        }
    else
        MPI_Send(&code, 1, MPI_INT, 0, type, MPI_COMM_WORLD);
    MPI_Barrier(MPI_COMM_WORLD);   /* every process waits here until all have arrived */
    MPI_Finalize();
    return 0;
}
Questions:
- Test the program with and without MPI_Barrier(). What do you notice?
- Use the time command to measure the execution time in both cases. What do you observe?
Example
Broadcast: one-to-all communication; one process (the root) sends a message to all the other processes in the group.

#include "mpi.h"
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    int id, nb, len;
    char message[10];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nb);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);
    strcpy(message, "Hello");
    len = strlen(message) + 1;     /* +1 to include the terminating '\0' */
    MPI_Bcast(message, len, MPI_CHAR, 0, MPI_COMM_WORLD);   /* process 0 is the root of the broadcast */
    printf("I am the process: %d and say: %s\n", id, message);
    MPI_Finalize();
    return 0;
}



Using MPI
Compiling:
mpicc test1.c -o test1

Execution:
mpiexec -np 4 ./test1 (to execute the program with 4 processes)

Exercise:
Trapezoidal Integration (serial implementation and the corresponding MPI implementation); one possible sketch of the MPI version follows.
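As a starting point for the exercise, here is one possible sketch of the MPI implementation, not a reference solution. The integrand f(x) = x*x on [0, 1], the number of trapezoids n, and all variable names are illustrative assumptions; each process applies the trapezoidal rule to its own sub-interval and MPI_Reduce sums the partial results on process 0.

#include <stdio.h>
#include "mpi.h"

double f(double x) { return x * x; }          /* illustrative integrand */

int main(int argc, char *argv[])
{
    int id, nb, i, local_n, n = 1024;         /* n: total number of trapezoids */
    double a = 0.0, b = 1.0;                  /* global interval [a, b] */
    double h, local_a, local_b, local_sum, total;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nb);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);

    h = (b - a) / n;                          /* width of one trapezoid */
    local_n = n / nb;                         /* trapezoids per process (assumes nb divides n) */
    local_a = a + id * local_n * h;           /* left end of this process's sub-interval */
    local_b = local_a + local_n * h;

    local_sum = (f(local_a) + f(local_b)) / 2.0;
    for(i = 1; i < local_n; i++)
        local_sum += f(local_a + i * h);
    local_sum *= h;

    MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if(id == 0)
        printf("Integral of f on [%f, %f] is approximately %f\n", a, b, total);
    MPI_Finalize();
    return 0;
}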

