MPI

ethan on 2003-10-25T11:12:31

I am currently doing a lab at the Institute for Scientific Computing. It was quite a coincidence that I ended up in this one. Essentially it was the only lab offered this term where I wouldn't have to do boring things such as UML or Java. Instead they promised that it could all be done in C, which eventually outweighed my worries that I don't really have a clue about scientific computing (the lab deals with parallelizing numerical mathematics, such as calculating the eigenvalues of a matrix).

It turns out that it's real fun. I now have access to an impressive Sun midframe with 96 CPUs. Furthermore, we are only six people overall. The best thing however is that I learnt about MPI. The Message Passing Interface works equally well on a single-CPU machine. Yesterday I compiled and installed MPICH and played quite a bit with it. The amazing thing about it is that it can be used as a quick-n-dirty drop-in replacement for fork(). Whenever I'd like to write a parallel link-checker, I could probably do it very easily this way. Whenever I want to increase the amount of parallelization, I just pass a bigger process count to mpirun and my program is distributed over 1,000 processes. MPICH takes care of creating and shutting down the processes properly. The only thing I can't do is change the number of processes at runtime (unlike with fork()). But of course, I can still use fork() in each of the processes if I want to.
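
To give an idea of what I mean, here is a minimal sketch of such a program in C. The number of processes is fixed at launch time, e.g. "mpirun -np 8 ./hello", and the program itself only ever asks for its rank and the total process count:

    /* Minimal MPI program: each process learns its rank and the
     * total number of processes.  The count is chosen at startup,
     * e.g. "mpirun -np 8 ./hello". */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                /* start up the MPI environment  */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I?           */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many processes are there? */

        printf("process %d of %d\n", rank, size);

        MPI_Finalize();                        /* shut everything down cleanly  */
        return 0;
    }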

The documentation of MPICH is quite good, maybe a little terse in its description of what the various functions do. But I found out that I can simply use the documentation of Parallel::MPI::Simple. It covers the basic functions and explains what they do, and from that I can derive all the rest that is needed. This is because the MPI specification defines a beautiful and very consistent interface. Using it from C is a real pleasure.
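
As a rough illustration of that consistency, here is what the basic point-to-point calls look like (values picked arbitrarily for the sketch): every communication function follows the same buffer, count, datatype, peer, tag, communicator pattern, so once you know one, you know them all:

    /* Sketch of the basic point-to-point calls: rank 0 sends one
     * integer to rank 1. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            /* buffer, count, datatype, destination, tag, communicator */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* same pattern, plus a status for the incoming message */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }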