Introduction to High-Performance Computing:
Guest Lecture and Lab by Simon Levy
Part I: A simple MPI program
This page has a simple
illustration of how to write an MPI program, using the classic "Hello World"
example taught to generations of computer science students. To get this program working,
first log on to the HBAR cluster. Then create a new directory (folder) for this lab, and
change (move) to it. Here (in bold) is how to do this, using my account as an example:
[levys@HBAR]$ mkdir mpi_tutorial
[levys@HBAR]$ cd mpi_tutorial
Now use the vi program to create the file hello.c:
[levys@HBAR mpi_tutorial]$ vi hello.c
This will open up an empty file, into which you can insert text by typing a single i,
putting you in insert mode. Now you can copy-and-paste from the
example into your
vi session. You can just copy-and-paste everything between the first two paragraphs -- i.e., from the
/*The Parallel Hello World Program*/ comment line through the closing }.
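In case the linked example is unavailable, here is a minimal sketch of the classic program (the original may differ slightly in details, but this version should compile and behave as described, assuming a standard MPI installation):

```c
/* The Parallel Hello World Program */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int node;

    MPI_Init(&argc, &argv);              /* start up the MPI environment   */
    MPI_Comm_rank(MPI_COMM_WORLD, &node); /* which process (node) am I?     */

    printf("Hello World from Node %d\n", node);

    MPI_Finalize();                      /* shut down the MPI environment  */
    return 0;
}
```

When run with mpirun, every process executes this same program; the only thing that differs is the rank (node number) each one gets back from MPI_Comm_rank.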
Note the similarity between some parts of this program (like the #include lines)
and the files you've been working with in Gromacs: Gromacs is obviously working closely with
the C programming language and the Unix operating system!
Once you're done pasting, hit ESC to get out of insert mode, hit : (colon, i.e.,
shift-semicolon) to get into command mode, and type wq to write (save) the file and quit.
High-performance languages like C use a compiler to convert your human-readable program
into low-level commands that can be executed optimally on a particular architecture (like HBAR).
To compile your program, issue the following command:
[levys@HBAR mpi_tutorial]$ mpicc -o hello hello.c
The mpicc command invokes the MPI C Compiler. Again, note the similarity between this
command and the Gromacs commands: the -o option specifies the name of the output file,
which is usually the same as the name of the .c file, but without the .c extension.
Now you are ready to run your program. As you did in the
Production MD part of the Lysozyme in Water tutorial, you will use mpirun to
run the program on a specified number of processors; for example:
[levys@HBAR mpi_tutorial]$ mpirun -np 8 hello
If everything is working, you should see an output something like this:
Hello World from Node 0
Hello World from Node 3
Hello World from Node 4
Hello World from Node 5
Hello World from Node 1
Hello World from Node 2
Hello World from Node 6
Hello World from Node 7
(The output may be preceded and followed by some warnings or other messages, but don't worry about
those right now.) Repeat the run several times (using the up-arrow key is fastest!), and you'll
see that the node order isn't always the same. This is because MPI executes each process
(copy of the program) concurrently (simultaneously and independently), with no guarantee of
which process finishes first.
Part II: Timing
Now we will explore some non-trivial aspects of parallel computing using MPI. Use vi
to make a copy of cpi.c, which computes an approximate value of π. Compile
the program as you did with the hello example. To run the new program, you will specify not
just the number of processors but also the number of intervals over which to compute the
value; for example:
[levys@HBAR mpi_tutorial]$ mpirun -np 4 cpi 1000
uses four processes and 1000 intervals. Now you can experiment to see how long it takes
to execute the program under various combinations of these parameters, using the Unix
time command; for example:
[levys@HBAR mpi_tutorial]$ time mpirun -np 4 cpi 1000
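If you are curious how cpi.c divides the work, here is a sketch of the usual approach (your copy of cpi.c may differ in details): each process sums every size-th interval of the midpoint-rule integral of 4/(1+x²) on [0,1], and MPI_Reduce combines the partial sums on process 0.

```c
/* Approximate pi by integrating 4/(1+x^2) from 0 to 1 with the midpoint rule.
   A sketch of the idea behind cpi.c, not necessarily the exact file. */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, i, n;
    double h, sum, x, mypi, pi;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's number    */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    n = (argc > 1) ? atoi(argv[1]) : 1000; /* number of intervals       */
    h = 1.0 / (double)n;                   /* width of each interval    */
    sum = 0.0;

    /* each process handles every size-th interval, starting at its rank */
    for (i = rank; i < n; i += size) {
        x = h * ((double)i + 0.5);         /* midpoint of interval i    */
        sum += 4.0 / (1.0 + x * x);
    }
    mypi = h * sum;

    /* add up the partial results on process 0 */
    MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi is approximately %.16f\n", pi);

    MPI_Finalize();
    return 0;
}
```

Notice that more intervals means more work per process, while more processes means each one does a smaller share of that work -- but also that the MPI_Reduce step requires communication among all the processes.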
One interesting experiment is to start with just one processor and increase the
number of intervals by a factor of 10 until it starts to take a non-trivial amount of time
(several seconds) to compute the result. Then add more processors and see what happens.
Conversely, you can keep the number of intervals small (like 1000) and see what happens as you
increase the number of processors. Does using more processors always give you a faster result?
Why not? If you're feeling ambitious, create some plots of your results, illustrating what you find.
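One convenient way to run such an experiment is a small shell loop (a sketch; it assumes the compiled cpi program is in your current directory):

```shell
# time cpi with 1, 2, 4, and 8 processes at a fixed interval count
for np in 1 2 4 8; do
    echo "np = $np"
    time mpirun -np $np cpi 10000000
done
```

Comparing the reported times across values of np (and across interval counts) gives you the data for the plots suggested above.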