Provided by: openmpi-doc_2.1.1-8_all

NAME

       MPI_Comm_split  - Creates new communicators based on colors and keys.

SYNTAX

C Syntax

       #include <mpi.h>
       int MPI_Comm_split(MPI_Comm comm, int color, int key,
            MPI_Comm *newcomm)

Fortran Syntax

       INCLUDE 'mpif.h'
       MPI_COMM_SPLIT(COMM, COLOR, KEY, NEWCOMM, IERROR)
            INTEGER   COMM, COLOR, KEY, NEWCOMM, IERROR

C++ Syntax

       #include <mpi.h>
       MPI::Intercomm MPI::Intercomm::Split(int color, int key) const

       MPI::Intracomm MPI::Intracomm::Split(int color, int key) const

INPUT PARAMETERS

       comm      Communicator (handle).

       color     Control of subset assignment (nonnegative integer).

       key       Control of rank assignment (integer).

OUTPUT PARAMETERS

       newcomm   New communicator (handle).

       IERROR    Fortran only: Error status (integer).

DESCRIPTION

       This function partitions the group associated with comm into disjoint subgroups, one for
       each value of color. Each subgroup contains all processes of the same color. Within each
       subgroup, the processes are ranked in the order defined by the value of the argument key,
       with ties broken according to their rank in the old group. A new communicator is created
       for each subgroup and returned in newcomm. A process may supply the color value
       MPI_UNDEFINED, in which case MPI_COMM_NULL is returned in newcomm. This is a collective
       call, but each process is permitted to provide different values for color and key.
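
       For illustration, a minimal sketch of a typical call (the color and key choices below
       are examples, not required by the interface): each process joins a subgroup according
       to the parity of its world rank, and the key reverses the order within each subgroup.

           #include <mpi.h>
           #include <stdio.h>

           int main(int argc, char **argv)
           {
               int world_rank, world_size, new_rank, new_size;
               MPI_Comm newcomm;

               MPI_Init(&argc, &argv);
               MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
               MPI_Comm_size(MPI_COMM_WORLD, &world_size);

               /* Even-ranked processes choose color 0, odd-ranked choose color 1.  Using
                  world_size - world_rank as the key reverses the relative order of the
                  processes within each subgroup. */
               int color = world_rank % 2;
               int key   = world_size - world_rank;
               MPI_Comm_split(MPI_COMM_WORLD, color, key, &newcomm);

               MPI_Comm_rank(newcomm, &new_rank);
               MPI_Comm_size(newcomm, &new_size);
               printf("world rank %d -> color %d, rank %d of %d in newcomm\n",
                      world_rank, color, new_rank, new_size);

               MPI_Comm_free(&newcomm);
               MPI_Finalize();
               return 0;
           }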

       When MPI_Comm_split is called on an inter-communicator, the processes on the left side
       that share a color with processes on the right side combine to create a new
       inter-communicator. The key argument determines the relative rank of the processes on
       each side of the new inter-communicator. The function returns MPI_COMM_NULL for colors
       that are specified on only one side of the inter-communicator, and for processes that
       supply MPI_UNDEFINED as the color.
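
       A sketch of the inter-communicator case (the division into halves, the leader ranks, and
       the colors below are illustrative; at least two processes are assumed): MPI_COMM_WORLD is
       first divided into two halves, an inter-communicator is built between them with
       MPI_Intercomm_create, and that inter-communicator is then split.

           #include <mpi.h>

           int main(int argc, char **argv)
           {
               MPI_Comm local_comm, inter_comm, new_inter;
               int world_rank, world_size;

               MPI_Init(&argc, &argv);
               MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
               MPI_Comm_size(MPI_COMM_WORLD, &world_size);

               /* Divide the world into a "left" half and a "right" half. */
               int side = (world_rank < world_size / 2) ? 0 : 1;
               MPI_Comm_split(MPI_COMM_WORLD, side, world_rank, &local_comm);

               /* Build an inter-communicator between the two halves; the remote leader
                  is the lowest world rank on the other side. */
               int remote_leader = (side == 0) ? world_size / 2 : 0;
               MPI_Intercomm_create(local_comm, 0, MPI_COMM_WORLD, remote_leader,
                                    99, &inter_comm);

               /* Split the inter-communicator: processes with the same color on the left
                  and right sides form a new inter-communicator.  A color that occurs on
                  only one side yields MPI_COMM_NULL. */
               int color = world_rank % 2;
               MPI_Comm_split(inter_comm, color, 0, &new_inter);

               if (new_inter != MPI_COMM_NULL)
                   MPI_Comm_free(&new_inter);
               MPI_Comm_free(&inter_comm);
               MPI_Comm_free(&local_comm);
               MPI_Finalize();
               return 0;
           }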

       A   call   to   MPI_Comm_create(comm,   group,   newcomm)  is  equivalent  to  a  call  to
       MPI_Comm_split(comm, color, key, newcomm), where all members of group provide  color  =  0
       and  key  = rank in group, and all processes that are not members of group provide color =
       MPI_UNDEFINED. The function MPI_Comm_split allows more general  partitioning  of  a  group
       into one or more subgroups with optional reordering.
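
       A sketch of this equivalence (the choice of group, the first half of MPI_COMM_WORLD, is
       illustrative; at least two processes are assumed): both calls below yield communicators
       over the same processes with the same ranks.

           #include <mpi.h>

           int main(int argc, char **argv)
           {
               MPI_Group world_group, half_group;
               MPI_Comm comm_create, comm_split;
               int world_rank, world_size;

               MPI_Init(&argc, &argv);
               MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
               MPI_Comm_size(MPI_COMM_WORLD, &world_size);

               /* Group containing the first half of MPI_COMM_WORLD. */
               int range[1][3] = { { 0, world_size / 2 - 1, 1 } };
               MPI_Comm_group(MPI_COMM_WORLD, &world_group);
               MPI_Group_range_incl(world_group, 1, range, &half_group);
               MPI_Comm_create(MPI_COMM_WORLD, half_group, &comm_create);

               /* Equivalent split: members of the group use color 0 and key = rank in
                  the group; all other processes use MPI_UNDEFINED. */
               int in_group = (world_rank < world_size / 2);
               int color = in_group ? 0 : MPI_UNDEFINED;
               int key   = in_group ? world_rank : 0;
               MPI_Comm_split(MPI_COMM_WORLD, color, key, &comm_split);

               /* comm_create and comm_split now describe the same group of processes
                  with the same ranks; non-members receive MPI_COMM_NULL from both. */

               if (comm_create != MPI_COMM_NULL) MPI_Comm_free(&comm_create);
               if (comm_split  != MPI_COMM_NULL) MPI_Comm_free(&comm_split);
               MPI_Group_free(&half_group);
               MPI_Group_free(&world_group);
               MPI_Finalize();
               return 0;
           }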

       The value of color must be nonnegative or MPI_UNDEFINED.

NOTES

       This is an extremely powerful mechanism for dividing a single communicating group of
       processes into k subgroups, with k chosen implicitly by the user (by the number of colors
       asserted over all the processes). The resulting communicators do not overlap. Such a
       division could be useful for defining a hierarchy of computations, such as for multigrid
       or linear algebra.

       Multiple calls to MPI_Comm_split can be used to overcome the restriction that the
       communicators resulting from a single call do not overlap (each process supplies only one
       color per call). In this way, multiple overlapping communication structures can be
       created. Creative use of the color and key in such splitting operations is encouraged.
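
       For example, two successive splits can give every process both a row communicator and a
       column communicator of a process grid (the grid shape below is illustrative and its size
       must match the number of processes):

           #include <mpi.h>

           #define NROWS 2     /* illustrative grid shape; NROWS * NCOLS must */
           #define NCOLS 2     /* equal the number of processes               */

           int main(int argc, char **argv)
           {
               MPI_Comm row_comm, col_comm;
               int rank;

               MPI_Init(&argc, &argv);
               MPI_Comm_rank(MPI_COMM_WORLD, &rank);

               int row = rank / NCOLS;
               int col = rank % NCOLS;

               /* First split: processes in the same row share a color. */
               MPI_Comm_split(MPI_COMM_WORLD, row, col, &row_comm);

               /* Second split: processes in the same column share a color.  Each process
                  now belongs to two overlapping communicators. */
               MPI_Comm_split(MPI_COMM_WORLD, col, row, &col_comm);

               MPI_Comm_free(&col_comm);
               MPI_Comm_free(&row_comm);
               MPI_Finalize();
               return 0;
           }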

       Note that, for a fixed color, the keys need not be unique. It is MPI_Comm_split's
       responsibility to sort processes in ascending order according to this key, and to break
       ties in a consistent way. If all the keys are specified in the same way, then all the
       processes in a given color will have the same relative rank order as they did in their
       parent group. (In general, they will have different ranks.)

       Essentially, making the key value zero for all processes of a given color means  that  one
       needn't really pay attention to the rank-order of the processes in the new communicator.
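
       A fragment illustrating this point (comm and color are assumed to be defined elsewhere):

           /* Every process of a given color passes key = 0; ties are broken by rank in
              comm, so the original relative order is preserved. */
           MPI_Comm newcomm;
           MPI_Comm_split(comm, color, 0, &newcomm);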

ERRORS

       Almost all MPI routines return an error value; C routines as the value of the function and
       Fortran routines in the last argument. C++ functions do not return errors. If the  default
       error  handler  is  set  to  MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception
       mechanism will be used to throw an MPI::Exception object.

       Before the error value is returned, the current MPI error handler is called.  By  default,
       this  error  handler aborts the MPI job, except for I/O function errors. The error handler
       may   be   changed   with   MPI_Comm_set_errhandler;   the   predefined   error    handler
       MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not
       guarantee that an MPI program can continue past an error.
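
       A fragment sketching the use of MPI_ERRORS_RETURN with MPI_Comm_split (color and key are
       assumed to be defined elsewhere, and <stdio.h> to be included):

           MPI_Comm newcomm;
           MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
           int rc = MPI_Comm_split(MPI_COMM_WORLD, color, key, &newcomm);
           if (rc != MPI_SUCCESS) {
               char msg[MPI_MAX_ERROR_STRING];
               int len;
               MPI_Error_string(rc, msg, &len);
               fprintf(stderr, "MPI_Comm_split failed: %s\n", msg);
           }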

SEE ALSO

       MPI_Comm_create
       MPI_Intercomm_create
       MPI_Comm_dup
       MPI_Comm_free