Computer Science 296
Parallel Programming
Due October 1, midnight.
In this assignment you are to implement a parallel version of the SPEC benchmark tomcatv. Tomcatv is a mesh generation program used to create non-uniform meshes at the boundaries of objects. A C version of the serial program is available here. Note that the program has a few oddities due to a direct conversion from Fortran to C (such as some wasted entries in arrays and loops starting at index 2). You can compile and execute the program as follows:
% gcc -o tomcatv tomcatv.c
% ./tomcatv 50
The only argument is the number of iterations.
The initialization does not need to be parallel; please parallelize the main loop. Use the output of the serial program to verify the correct operation of your MPI implementation; you should get identical results. Also, please run for 100 iterations to ensure correct operation across all 100 iterations.
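One common approach (offered only as a sketch, not a requirement) is a row-block decomposition: each rank owns a contiguous band of rows plus one ghost row on each side, and exchanges halo rows with its neighbors every iteration. In the sketch below, the names x, local_rows, and N are hypothetical; x is a flattened (local_rows + 2) x N array whose first and last rows are the ghost rows.

#include <mpi.h>

/* Sketch: exchange one-row halos between neighboring ranks in a
 * row-block decomposition. MPI_PROC_NULL turns the sends/receives
 * at the top and bottom boundary ranks into no-ops. */
void exchange_halos(double *x, int local_rows, int N, int rank, int nprocs)
{
    int up   = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
    int down = (rank < nprocs - 1) ? rank + 1 : MPI_PROC_NULL;

    /* Send my first interior row up; receive my bottom ghost row from below. */
    MPI_Sendrecv(&x[1 * N], N, MPI_DOUBLE, up, 0,
                 &x[(local_rows + 1) * N], N, MPI_DOUBLE, down, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* Send my last interior row down; receive my top ghost row from above. */
    MPI_Sendrecv(&x[local_rows * N], N, MPI_DOUBLE, down, 1,
                 &x[0 * N], N, MPI_DOUBLE, up, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}

Whatever decomposition you choose, you can check your results against the serial program, e.g. (assuming the programs print their results to standard output and your parallel binary is named tomcatv_mpi):
% ./tomcatv 100 > serial.out
% mpiexec -machinefile mpd.hosts -n 4 ./tomcatv_mpi 100 > parallel.out
% diff serial.out parallel.out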
You will be running your jobs on the CS department linux cluster. To use MPI you have to perform two main actions:
1) start MPI daemons on the machines you plan to use (note this only needs to be performed once)
2) initiate execution of your MPI job
The following steps show how to get started. You can use the example hello_world program available here (a minimal sketch of such a program appears after the steps).
1) ssh to linux8.cs.duke.edu
2) create the file ~/.mpd.conf with the contents MPD_SECRETWORD=someword
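For example (someword is a placeholder for a secret word of your choosing; note that mpd typically refuses to start unless this file is readable only by you):
% echo "MPD_SECRETWORD=someword" > ~/.mpd.conf
% chmod 600 ~/.mpd.conf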
3) mkdir ~/mpi, then cd to ~/mpi
4) save this file to mpd.hosts in ~/mpi
5) make sure that mpdboot and mpiexec are being picked up from /usr/bin (run the command which mpdboot; it should return /usr/bin/mpdboot).
6) start the MPI daemons with the following command (this only needs to be done once)
% mpdboot -r rsh -n 4 -f $HOME/mpi/mpd.hosts -v
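You can confirm the daemons are up with mpdtrace, which should list the nodes currently running an mpd daemon:
% mpdtrace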
7) Compile your program using the special MPI front end to the compiler, mpicc. I've found it best to compile on the same node from which I initiate execution (step 8 below).
% mpicc -o hello hello.c
8) Run your program
% mpiexec -machinefile mpd.hosts -n 4 ./hello
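For reference, a minimal MPI hello-world along the lines of hello.c might look like the sketch below (the course-provided file may differ):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}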
You can run your program as many times as needed. When you are no longer using MPI, please execute the command mpdallexit to shut down all the MPI daemons that you started. You can change the machines in the mpd.hosts file to any of linux<1-21>, but I have not tested all nodes, only those in the mpd.hosts file above. For both mpdboot and mpiexec, the -n option specifies the number of nodes to use. You can start daemons on N nodes with mpdboot and then execute on a subset of those nodes with mpiexec (e.g., mpdboot with -n 8 while mpiexec uses -n 3).
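For example (reusing the hello binary from above):
% mpdboot -r rsh -n 8 -f $HOME/mpi/mpd.hosts -v
% mpiexec -machinefile mpd.hosts -n 3 ./hello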
Each file submitted needs to contain your name and email address.
What to submit:
1. Source code for parallel version of program (well commented).
2. Binary for parallel version of program.
3. README with documentation on how to compile and run your programs and an explanation of your general approach to parallelizing the algorithm.
How to submit:
On any Computer Science linux or solaris machine (e.g., linux.cs.duke.edu, login.cs.duke.edu), run the program submit_cps296.3 prog2 <file1> <file2>…
I recommend that you submit a single file that is a gzipped tar file. (Note: depending on which tar you use, this can be done in one command; see the example after the submit step.)
% tar -cvf tomcatv.tar <prog_directory>
% gzip tomcatv.tar
% submit_cps296.3 prog2 tomcatv.tar.gz
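With GNU tar, the tar and gzip steps can be combined into one command using the -z flag:
% tar -czvf tomcatv.tar.gz <prog_directory>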