
MPI Basics and Examples Explained

This document summarizes John Burkardt's presentation on using MPI (Message Passing Interface). The presentation introduces the basic MPI functions including MPI_Init, MPI_Finalize, MPI_Comm_Rank, MPI_Comm_Size, MPI_Send, and MPI_Recv. It provides examples of how to use these functions, such as a prime number summation example where processors compute partial sums and send them to the master processor. The document also discusses how messages are sent and received between processors in MPI and gives an example of performing a matrix-vector multiplication in parallel using MPI.


8: Using MPI

John Burkardt
Virginia Tech

10-12 June 2008



Introductory MPI

Your First Six Words in MPI
How Messages Are Sent and Received
Prime Sum in C+MPI
Matrix*Vector in Fortran77+MPI
Conclusion



Your First Six “Words” in MPI

You can write useful programs using the six fundamental routines:

MPI_Init
MPI_Finalize
MPI_Comm_rank
MPI_Comm_size
MPI_Send
MPI_Recv
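To see these routines in context before the detailed lessons, here is a minimal hedged sketch (not taken from the original slides; the message text is my own choice) of a complete C program that uses four of the six:

# include <stdio.h>
# include "mpi.h"

int main ( int argc, char *argv[] )
{
  int id, p;

  MPI_Init ( &argc, &argv );              /*  Must be the first MPI call.     */

  MPI_Comm_size ( MPI_COMM_WORLD, &p );   /*  How many processes are there?   */
  MPI_Comm_rank ( MPI_COMM_WORLD, &id );  /*  Which one am I?                 */

  printf ( "  Process %d of %d says hello.\n", id, p );

  MPI_Finalize ( );                       /*  Must be the last MPI call.      */
  return 0;
}

Launched with, say, mpirun -np 4, each of the four processes prints its own line; MPI_Send and MPI_Recv only enter the picture when the processes need to exchange data.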



MPI Language Lesson: MPI_Init

MPI_Init ( &argc, &argv )

&argc, the address of the program argument counter;
&argv, the address of the program argument list.
Must be the first MPI routine called.



MPI Language Lesson: MPI_Finalize

MPI_Finalize ( )
Must be the last MPI routine called.



MPI Language Lesson: MPI_Comm_rank

MPI_Comm_rank ( communicator, &id )

communicator, set this to MPI_COMM_WORLD;
&id, returns the MPI ID of this process.
This is how a processor figures out its ID.



MPI Language Lesson: MPI_Comm_size

MPI_Comm_size ( communicator, &p )

communicator, set this to MPI_COMM_WORLD;
&p, returns the number of processors available.
This is how a processor finds out how many processors (including itself) are taking part in the computation.



MPI Language Lesson: MPI_Send

MPI_Send ( data, count, type, to, tag, communicator )

data, the address of the data;
count, the number of data items;
type, the data type (MPI_INT, MPI_FLOAT...);
to, the processor ID to which the data is sent;
tag, a message identifier ("0", "1", "1492", etc.);
communicator, set this to MPI_COMM_WORLD.



MPI Language Lesson: MPI_Recv

MPI_Recv ( data, count, type, from, tag, communicator, status )

data, the address of the data;
count, the number of data items;
type, the data type (must match what is sent);
from, the processor ID from which the data is received (must match the sender, or, if you don't care, MPI_ANY_SOURCE);
tag, the message identifier (must match what is sent, or, if you don't care, MPI_ANY_TAG);
communicator, the communicator (must match what is sent);
status, auxiliary diagnostic information.
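To make the pairing of these two calls concrete, here is a small hedged sketch (not from the original slides; the value 42 and the tag 99 are arbitrary choices) in which process 1 sends a single integer to process 0:

# include <stdio.h>
# include "mpi.h"

int main ( int argc, char *argv[] )
{
  int id, value;
  MPI_Status status;

  MPI_Init ( &argc, &argv );
  MPI_Comm_rank ( MPI_COMM_WORLD, &id );

  if ( id == 1 )
  {
    value = 42;
    /*  Send one MPI_INT to process 0, tagged with 99.  */
    MPI_Send ( &value, 1, MPI_INT, 0, 99, MPI_COMM_WORLD );
  }
  else if ( id == 0 )
  {
    /*  The count, type, source, tag and communicator must match the send.  */
    MPI_Recv ( &value, 1, MPI_INT, 1, 99, MPI_COMM_WORLD, &status );
    printf ( "  Process 0 received the value %d\n", value );
  }

  MPI_Finalize ( );
  return 0;
}

The program needs at least two processes; any extra processes simply call MPI_Init and MPI_Finalize and do nothing in between.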



How Messages Are Sent and Received

The main feature of MPI is the use of messages to send data between processors.
There is a family of routines for sending messages, but the simplest is the pair MPI_Send and MPI_Recv.
Two processors must be in a common "communicator group" in order to communicate. This is simply a way for the user to organize processors into sub-groups. All processors can communicate in the shared group known as MPI_COMM_WORLD.
In order for data to be transferred by a message, there must be a sending program that wants to send the data, and a receiving program that expects to receive it.



How Messages Are Sent and Received
The sender calls MPI_Send, specifying the data, as well as an identifier for the message, and the name of the communicator group it is using.
On executing the call to MPI_Send, the sending program pauses, the message is transferred to a buffer on the receiving computer system, and the MPI system there prepares to deliver it to the receiving program.
The receiving program must be expecting to receive a message; that is, it must execute a call to MPI_Recv and be waiting for a response. The message it receives must correspond in size, arithmetic precision, message identifier, and communicator group.
Once the message is received, the receiving process can proceed. The sending process gets a response that the message was received, and it can proceed as well.
How Messages Are Sent and Received

If an error occurs during the message transfer, both the sender and the receiver return a nonzero flag value, either as the function value (in C and C++) or in the final ierr argument in the FORTRAN version of the MPI routines.
When the receiving program finishes the call to MPI_Recv, the extra parameter status includes information about the message transfer.
The status variable is not usually of interest with simple Send/Recv pairs, but for other kinds of message transfers, it can contain important information.
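For instance, when a receive uses MPI_ANY_SOURCE or MPI_ANY_TAG, the status argument records who actually sent the message and with what tag, and MPI_Get_count reports how many items arrived. A hedged C fragment (my own illustration, not from the slides; it assumes the usual MPI_Init/MPI_Finalize scaffolding of the earlier examples and an invented receive buffer of 100 integers):

int buffer[100], count;
MPI_Status status;

/*  Accept up to 100 integers, from any sender, with any tag.  */
MPI_Recv ( buffer, 100, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
  MPI_COMM_WORLD, &status );

/*  Ask the status structure what actually happened.  */
MPI_Get_count ( &status, MPI_INT, &count );

printf ( "  Received %d integers from process %d, with tag %d\n",
  count, status.MPI_SOURCE, status.MPI_TAG );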



The Prime Sum Example in MPI

Let's do the PRIME SUM problem in MPI. Here we want to add up the prime numbers from 2 to N.
Each of P processors will simply take about 1/P of the range of numbers to check, and add up the primes it finds locally.
When it's done, it will send the partial result to processor 0.
So processors 1 through P-1 each send a single message (simple), and processor 0 has to expect P-1 messages in all.



Prime Sum Example: Page 1

# include <stdio.h>
# include <stdlib.h>
# include "mpi.h"

int main ( int argc, char *argv[] )
{
  int i, id, j, master = 0, n = 1000, n_hi, n_lo;
  int p, prime, total, total_local;
  MPI_Status status;
  double wtime;

  MPI_Init ( &argc, &argv );

  MPI_Comm_size ( MPI_COMM_WORLD, &p );
  MPI_Comm_rank ( MPI_COMM_WORLD, &id );



Prime Sum Example: Page 2

  n_lo = ( ( p - id     ) * 1 + (   id   ) * n ) / p + 1;
  n_hi = ( ( p - id - 1 ) * 1 + ( id + 1 ) * n ) / p;

  wtime = MPI_Wtime ( );

  total_local = 0;
  for ( i = n_lo; i <= n_hi; i++ ) {
    prime = 1;
    for ( j = 2; j < i; j++ ) {
      if ( i % j == 0 ) {
        prime = 0;
        break; } }
    if ( prime == 1 ) total_local = total_local + i;
  }
  wtime = MPI_Wtime ( ) - wtime;
Prime Sum Example: Page 3

  if ( id != master ) {
    MPI_Send ( &total_local, 1, MPI_INT, master, 1,
      MPI_COMM_WORLD ); }
  else {
    total = total_local;
    for ( i = 1; i < p; i++ ) {
      MPI_Recv ( &total_local, 1, MPI_INT, MPI_ANY_SOURCE,
        1, MPI_COMM_WORLD, &status );
      total = total + total_local; } }

  if ( id == master ) printf ( " Total is %d\n", total );

  MPI_Finalize ( );
  return 0;
}



Prime Sum Example: Output

n825(0): PRIME_SUM - Master process:


n825(0): Add up the prime numbers from 2 to 1000.
n825(0): Compiled on Apr 21 2008 at 14:44:07.
n825(0):
n825(0): The number of processes available is 4.
n825(0):
n825(0): P0 [ 2, 250] Total = 5830 Time = 0.000137
n826(2): P2 [ 501, 750] Total = 23147 Time = 0.000507
n826(2): P3 [ 751, 1000] Total = 31444 Time = 0.000708
n825(0): P1 [ 251, 500] Total = 15706 Time = 0.000367
n825(0):
n825(0): The total sum is 76127

All nodes terminated successfully.



The Prime Sum Example in MPI

Having all the processors compute partial results, which then have to be collected together, is another example of a reduction operation.
Just as with OpenMP, MPI recognizes this common operation, and has a special function call which can replace all the sending and receiving code we just saw.



Prime Sum Example: Page 3 REVISED

  MPI_Reduce ( &total_local, &total, 1, MPI_INT, MPI_SUM,
    master, MPI_COMM_WORLD );

  if ( id == master ) printf ( " Total is %d\n", total );

  MPI_Finalize ( );
  return 0;



MPI Language Lesson: MPI_Reduce

MPI_Reduce ( local_data, reduced_value, count, type, operation, to, communicator )

local_data, the address of the local data;
reduced_value, the address of the variable to hold the result;
count, the number of data items;
type, the data type;
operation, the reduction operation: MPI_SUM, MPI_PROD, MPI_MAX...;
to, the processor ID which collects the local data into the reduced data;
communicator, set this to MPI_COMM_WORLD.
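As a second illustration of the operation argument (my own hedged sketch, not from the slides), the same call can compute a maximum instead of a sum; here each process contributes its own elapsed time, assuming wtime_local already holds a timing measured with MPI_Wtime and id holds the rank, as in the prime sum example:

  double wtime_local, wtime_max;

  /*  Every process contributes wtime_local; process 0 receives the maximum.  */
  MPI_Reduce ( &wtime_local, &wtime_max, 1, MPI_DOUBLE, MPI_MAX,
    0, MPI_COMM_WORLD );

  /*  Only the "to" processor (here, processor 0) holds the reduced value.  */
  if ( id == 0 ) printf ( "  Slowest process took %f seconds.\n", wtime_max );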



Matrix * Vector Example

We will now consider an example in which a matrix-vector multiplication is carried out using MPI.
This is an artificial example, so don't worry about why we're dividing the task up this way. Concentrate on how we do it.
We are going to compute A ∗ x = b.
We start with the entire matrix A and the vector x sitting on the "master processor" (whichever processor has lucky number 0).
We need to send some of this data to other processors; they carry out their part of the task, and processor 0 collects the results back.



Matrix * Vector Example

Because one processor will be special, directing the work, this program will be an example of the "master-workers" model.
Entry b_i is the dot product of row i of the matrix with x:

  b_i = Σ_{j=1}^{N} A_{ij} x_j

So if there were N workers available, then each one could do one entry of b.
There are only P processors available, and only P-1 can be workers, so we'll have to do the job in batches.



Matrix * Vector Example

Give all the workers a copy of x.
Then send row i of A to processor i.
When processor i returns b_i, send the next available row of A.
The way we are setting up this algorithm allows processors to finish their work in any order. This approach is flexible.
In consequence, the master process doesn't know which processor will be sending a response. It has to keep careful track of what data comes in, and when everything is done.



Matrix * Vector Example

In a master-worker model, you can really see how an MPI program, which is supposed to be a single program running on all machines, can end up looking more like two programs.



Matrix * Vector: Master Pseudocode

If I am the master:
  SEND N to all workers.
  SEND X to all workers.
  SEND out first batch of rows.
  While ( any entries of B not returned )
    RECEIVE message, entry ? of B, from processor ?.
    If ( any rows of A not sent )
      SEND row ? of A to processor ?.
    else
      SEND "FINALIZE" message to processor ?.
    end
  end
  FINALIZE



Matrix * Vector: Worker Pseudocode

else if I am a worker
  RECEIVE N.
  RECEIVE X.
  do
    RECEIVE message.
    if ( message is "FINALIZE" ) then
      FINALIZE
    else
      it's a row of A, so compute dot product with X.
      SEND result to master.
    end
  end
end
Matrix * Vector: An example algorithm

Compute A ∗ x = b.

a "task" is to multiply one row of A times x;
we can assign one task to each processor. Whenever a processor is done, give it another task.
each processor needs a copy of x at all times; for each task, it needs a copy of the corresponding row of A.
processor 0 will do no tasks; instead, it will pass out tasks and accept results.



Matrix * Vector in FORTRAN77 (Page 1)

      if ( my_id == master ) then

        numsent = 0
c
c  BROADCAST X to all the workers.
c
        call MPI_BCAST ( x, cols, MPI_DOUBLE_PRECISION, master,
     &    MPI_COMM_WORLD, ierr )
c
c  SEND row I to worker process I; tag the message with the row number.
c
        do i = 1, min ( num_procs - 1, rows )

          do j = 1, cols
            buffer(j) = a(i,j)
          end do

          call MPI_SEND ( buffer, cols, MPI_DOUBLE_PRECISION, i,
     &      i, MPI_COMM_WORLD, ierr )

          numsent = numsent + 1

        end do



Matrix * Vector in FORTRAN77 (Page 2)
c
c  Wait to receive a result back from any processor;
c  If more rows to do, send the next one back to that processor.
c
        do i = 1, rows

          call MPI_RECV ( ans, 1, MPI_DOUBLE_PRECISION,
     &      MPI_ANY_SOURCE, MPI_ANY_TAG,
     &      MPI_COMM_WORLD, status, ierr )

          sender = status(MPI_SOURCE)
          anstype = status(MPI_TAG)
          b(anstype) = ans

          if ( numsent .lt. rows ) then

            numsent = numsent + 1

            do j = 1, cols
              buffer(j) = a(numsent,j)
            end do

            call MPI_SEND ( buffer, cols, MPI_DOUBLE_PRECISION,
     &        sender, numsent, MPI_COMM_WORLD, ierr )

          else

            call MPI_SEND ( MPI_BOTTOM, 0, MPI_DOUBLE_PRECISION,
     &        sender, 0, MPI_COMM_WORLD, ierr )

          end if

        end do


Matrix * Vector in FORTRAN77 (Page 3)
c
c  Workers receive X, then compute dot products until
c  the "done" message is received.
c
      else

        call MPI_BCAST ( x, cols, MPI_DOUBLE_PRECISION, master,
     &    MPI_COMM_WORLD, ierr )

   90   continue

        call MPI_RECV ( buffer, cols, MPI_DOUBLE_PRECISION, master,
     &    MPI_ANY_TAG, MPI_COMM_WORLD, status, ierr )

        if ( status(MPI_TAG) .eq. 0 ) then
          go to 200
        end if

        row = status(MPI_TAG)

        ans = 0.0
        do i = 1, cols
          ans = ans + buffer(i) * x(i)
        end do

        call MPI_SEND ( ans, 1, MPI_DOUBLE_PRECISION, master,
     &    row, MPI_COMM_WORLD, ierr )

        go to 90

  200   continue

      end if


Matrix * Vector: Using BROADCAST

In some cases, the communication that is to be carried out doesn't involve a pair of processors talking to each other, but rather one processor "announcing" some information to all the others.
This is often the case when the program is written using the master/worker model, in which case one processor (usually the one with ID 0) is put in charge. It takes care of interacting with the user, doing I/O, collecting results from the other processors, handling reduction operations, and so on.
There is a "broadcast" function in MPI that makes it easy for the master process to send information to all other processors.
In this case, the same function is used both for the sending and the receiving!



MPI Language Lesson: MPI_Bcast

MPI_Bcast ( data, count, type, from, communicator )

data, the address of the data;
count, the number of data items;
type, the data type;
from, the processor ID which sends the data;
communicator, set this to MPI_COMM_WORLD.
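A minimal hedged sketch (my own, not from the slides; the array length 50 is arbitrary, and the usual MPI_Init/MPI_Finalize scaffolding is assumed): every process executes the identical call, and when it returns, each process holds a copy of the values that the "from" processor (here processor 0) supplied:

  double x[50];
  int i, id;

  MPI_Comm_rank ( MPI_COMM_WORLD, &id );

  if ( id == 0 )
  {
    /*  Only the sending process fills in the data before the broadcast.  */
    for ( i = 0; i < 50; i++ ) x[i] = ( double ) i;
  }

  /*  Senders and receivers all make the same call; afterwards every
      process has the same 50 values in x.  */
  MPI_Bcast ( x, 50, MPI_DOUBLE, 0, MPI_COMM_WORLD );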



Conclusion

One of MPI's strongest features is that it is well suited to modern clusters of 100 or 1,000 processors.
In most cases, an MPI implementation of an algorithm is quite different from the serial implementation.
In MPI, communication is explicit, and you have to take care of it. This means you have more control; you also have new kinds of errors and inefficiencies to watch out for.
MPI can be difficult to use when you want tasks of different kinds to be going on.
MPI and OpenMP can be used together; for instance, on a cluster of multicore servers.



References: Books

Gropp, Using MPI;
Openshaw, High Performance Computing;
Pacheco, Parallel Programming with MPI;
Petersen, Introduction to Parallel Computing;
Quinn, Parallel Programming in C with MPI and OpenMP;
Snir, MPI: The Complete Reference.

