#include "mpi.h" int MPI_Comm_split ( MPI_Comm comm, int color, int key, MPI_Comm *comm_out )
comm  | communicator (handle)
color | control of subset assignment (nonnegative integer)
key   | control of rank assignment (integer)
All MPI objects (e.g., MPI_Datatype, MPI_Comm) are of type INTEGER in Fortran.
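The following sketch (not part of the original page) shows a typical use of MPI_Comm_split: MPI_COMM_WORLD is divided into an "even" and an "odd" communicator, with the world rank used as the key so that the original ordering is preserved within each new communicator.

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int world_rank, sub_rank, color;
    MPI_Comm subcomm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* color selects the subset; key (here the world rank) controls the
     * ordering of ranks within each new communicator */
    color = world_rank % 2;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &subcomm);

    MPI_Comm_rank(subcomm, &sub_rank);
    printf("world rank %d -> color %d, new rank %d\n",
           world_rank, color, sub_rank);

    MPI_Comm_free(&subcomm);
    MPI_Finalize();
    return 0;
}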
The current algorithm has quite a few inefficiencies that could be removed. Here is what we do for now; a simplified sketch follows the list.
1) A table is built of colors and keys (it has a next field also).
2) The tables of all processes are merged using MPI_Allreduce.
3) Two contexts are allocated for all the comms to be created. These same two contexts can be used for all created communicators since the communicators will not overlap.
4) If the local process has a color of MPI_UNDEFINED, it can return a NULL comm.
5) The table entries that match the local process color are sorted by key/rank.
6) A group is created from the sorted list, and a communicator is created with this group and the previously allocated contexts.
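The sketch below is an illustrative reconstruction of those steps, not the code in comm_split.c: it merges the (color, key) tables with MPI_Allgather rather than MPI_Allreduce, and it lets MPI_Comm_create allocate the contexts (relying on the MPI-2.2 rule that processes may pass disjoint groups). The helper name naive_comm_split is hypothetical.

#include "mpi.h"
#include <stdlib.h>

/* sort {key, rank} entries by key, breaking ties by original rank */
static int cmp_entry(const void *a, const void *b)
{
    const int *x = (const int *) a, *y = (const int *) b;
    if (x[0] != y[0]) return (x[0] < y[0]) ? -1 : 1;
    return (x[1] < y[1]) ? -1 : (x[1] > y[1]) ? 1 : 0;
}

int naive_comm_split(MPI_Comm comm, int color, int key, MPI_Comm *comm_out)
{
    int size, i, nmembers = 0;
    int mine[2], *table, *entries, *ranks;
    MPI_Group old_group, new_group;

    MPI_Comm_size(comm, &size);

    /* 1)-2) build the local (color, key) entry and merge all tables */
    mine[0] = color;
    mine[1] = key;
    table   = (int *) malloc(2 * size * sizeof(int));
    entries = (int *) malloc(2 * size * sizeof(int));
    ranks   = (int *) malloc(size * sizeof(int));
    MPI_Allgather(mine, 2, MPI_INT, table, 2, MPI_INT, comm);

    /* 5) keep the entries whose color matches ours, sorted by key/rank */
    if (color != MPI_UNDEFINED) {
        for (i = 0; i < size; i++) {
            if (table[2 * i] == color) {
                entries[2 * nmembers]     = table[2 * i + 1];  /* key  */
                entries[2 * nmembers + 1] = i;                 /* rank */
                nmembers++;
            }
        }
        qsort(entries, nmembers, 2 * sizeof(int), cmp_entry);
        for (i = 0; i < nmembers; i++)
            ranks[i] = entries[2 * i + 1];
    }

    /* 4) and 6) every process calls MPI_Comm_create (it is collective over
     * comm); a process whose color is MPI_UNDEFINED passes the empty group
     * and therefore receives MPI_COMM_NULL */
    MPI_Comm_group(comm, &old_group);
    MPI_Group_incl(old_group, nmembers, ranks, &new_group);
    MPI_Comm_create(comm, new_group, comm_out);

    if (new_group != MPI_GROUP_EMPTY)
        MPI_Group_free(&new_group);
    MPI_Group_free(&old_group);
    free(ranks);
    free(entries);
    free(table);
    return MPI_SUCCESS;
}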
All MPI routines (except MPI_Wtime and MPI_Wtick) return an error value; C routines return it as the value of the function and Fortran routines in the last argument. Before the value is returned, the current MPI error handler is called. By default, this error handler aborts the MPI job. The error handler may be changed with MPI_Errhandler_set; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
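For example, a caller can install MPI_ERRORS_RETURN on the communicator and then test the return code of MPI_Comm_split. The sketch below (the helper name is made up for illustration) shows this pattern.

#include "mpi.h"
#include <stdio.h>

void split_with_error_check(MPI_Comm comm, int color, int key)
{
    MPI_Comm newcomm;
    char msg[MPI_MAX_ERROR_STRING];
    int err, msglen;

    /* return error codes to the caller instead of aborting the job */
    MPI_Errhandler_set(comm, MPI_ERRORS_RETURN);

    err = MPI_Comm_split(comm, color, key, &newcomm);
    if (err != MPI_SUCCESS) {
        MPI_Error_string(err, msg, &msglen);
        fprintf(stderr, "MPI_Comm_split failed: %s\n", msg);
    }
}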
Location: comm_split.c