MPI_Intercomm_create(3)                                                          MPI_Intercomm_create(3)

Provided by: mpich-doc_4.2.0-14_all

NAME

       MPI_Intercomm_create -  Creates an intercommunicator from two intracommunicators

SYNOPSIS

       int MPI_Intercomm_create(MPI_Comm local_comm, int local_leader,
       MPI_Comm peer_comm, int remote_leader, int tag,
       MPI_Comm *newintercomm)

INPUT PARAMETERS

       local_comm
              - local intra-communicator (handle)
       local_leader
              - rank of local group leader in local_comm (integer)
       peer_comm
              - peer communicator; significant only at the local_leader (handle)
       remote_leader
              - rank of remote group leader in peer_comm; significant only at the local_leader (integer)
       tag    - "safe" tag used by the two leaders in their point-to-point communication on peer_comm
              (integer)

OUTPUT PARAMETERS

       newintercomm
              - new inter-communicator (handle)

NOTES

       peer_comm is significant only for the process designated the local_leader in local_comm.

       The MPI 1.1 Standard contains two mutually exclusive comments on the input  intracommunicators.   One says
       that their respective groups must be disjoint; the other that the leaders can be the same process.  After
       some  discussion  by  the MPI Forum, it has been decided that the groups must be disjoint.  Note that the
       reason given for this in the standard is not the reason for this choice; rather, the other operations  on
       intercommunicators (like MPI_Intercomm_merge ) do not make sense if the groups are not disjoint.
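       The following sketch (not part of the original page) illustrates one common usage pattern: splitting
       MPI_COMM_WORLD into two disjoint halves and joining them with MPI_Intercomm_create.  Rank 0 of each
       half acts as that group's local leader, MPI_COMM_WORLD serves as the peer communicator, and the tag
       value 99 is an arbitrary choice.

```c
/* Sketch: build an intercommunicator from two disjoint halves of
 * MPI_COMM_WORLD.  Run with at least two processes, e.g.
 *   mpicc example.c -o example && mpiexec -n 4 ./example
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        fprintf(stderr, "needs at least 2 processes\n");
        MPI_Finalize();
        return 1;
    }

    /* Split into two disjoint intracommunicators: color 0 = lower half,
     * color 1 = upper half.  The groups MUST be disjoint (see NOTES). */
    int color = (rank < size / 2) ? 0 : 1;
    MPI_Comm local_comm;
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &local_comm);

    /* Rank 0 of each half is its local leader.  In MPI_COMM_WORLD (the
     * peer communicator), the remote leader is rank size/2 for the lower
     * half and rank 0 for the upper half.  peer_comm and remote_leader
     * are significant only at the local leader. */
    int remote_leader = (color == 0) ? size / 2 : 0;
    MPI_Comm intercomm;
    MPI_Intercomm_create(local_comm, 0, MPI_COMM_WORLD, remote_leader,
                         /* tag = */ 99, &intercomm);

    int remote_size;
    MPI_Comm_remote_size(intercomm, &remote_size);
    printf("rank %d (group %d): remote group has %d processes\n",
           rank, color, remote_size);

    MPI_Comm_free(&intercomm);
    MPI_Comm_free(&local_comm);
    MPI_Finalize();
    return 0;
}
```

       After the call, each process can address the processes of the other group through intercomm (for
       example via MPI_Comm_remote_size or MPI_Intercomm_merge ).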

THREAD AND INTERRUPT SAFETY

       This routine is thread-safe.  This means that this routine may be safely used by multiple threads without
       the need for any user-provided thread locks.  However, the routine is  not  interrupt  safe.   Typically,
       this  is  due to the use of memory allocation routines such as malloc or other non-MPICH runtime routines
       that are themselves not interrupt-safe.

NOTES FOR FORTRAN

       All MPI routines in Fortran (except for MPI_WTIME and MPI_WTICK ) have an additional argument ierr at the
       end of the argument list.  ierr is an integer and has the same meaning as the return value of the routine
       in C.  In Fortran, MPI routines are subroutines, and are invoked with the call statement.

       All MPI objects (e.g., MPI_Datatype , MPI_Comm ) are of type INTEGER in Fortran.

ERRORS

       All MPI routines (except MPI_Wtime and MPI_Wtick ) return an error value; C routines as the value of  the
       function  and Fortran routines in the last argument.  Before the value is returned, the current MPI error
       handler is called.  By default, this error handler aborts the MPI job.  The error handler may be  changed
       with    MPI_Comm_set_errhandler   (for   communicators),   MPI_File_set_errhandler   (for   files),   and
       MPI_Win_set_errhandler (for RMA windows).  The MPI-1 routine MPI_Errhandler_set may be used but  its  use
       is  deprecated.   The  predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be
       returned.  Note that MPI does not guarantee that an MPI program can continue past an error; however,  MPI
       implementations will attempt to continue whenever possible.

       MPI_SUCCESS
              - No error; MPI routine completed successfully.
       MPI_ERR_ARG
              -  Invalid  argument.   Some  argument  is invalid and is not identified by a specific error class
              (e.g., MPI_ERR_RANK ).
       MPI_ERR_COMM
              - Invalid communicator.  A common error is to use a null communicator in a call (not even  allowed
              in MPI_Comm_rank ).
       MPI_ERR_TAG
              -  Invalid  tag  argument.   Tags must be non-negative; tags in a receive ( MPI_Recv , MPI_Irecv ,
              MPI_Sendrecv , etc.) may also be MPI_ANY_TAG .  The largest tag value is  available  through  the
              attribute MPI_TAG_UB .

       MPI_ERR_OTHER
              - Other error; use MPI_Error_string to get more information about this error code.
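       The error-handling behavior described above can be sketched as follows (not part of the original
       page): install the predefined handler MPI_ERRORS_RETURN so that errors come back as return codes
       instead of aborting the job, then decode a failing code with MPI_Error_string.  The negative tag
       is chosen here only to provoke MPI_ERR_TAG deliberately.

```c
/* Sketch: make errors on MPI_COMM_WORLD return codes rather than abort,
 * then translate a code into a human-readable message. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    /* Deliberately pass an invalid (negative) tag; with the default
     * MPI_ERRORS_ARE_FATAL handler this would abort the job. */
    int buf = 0;
    int err = MPI_Send(&buf, 1, MPI_INT, 0, /* tag = */ -2, MPI_COMM_WORLD);

    if (err != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int len;
        MPI_Error_string(err, msg, &len);
        fprintf(stderr, "MPI error: %s\n", msg);
    }

    MPI_Finalize();
    return 0;
}
```

       Note that even with MPI_ERRORS_RETURN , MPI does not guarantee that the program can continue past
       an error; the returned code is primarily useful for diagnostics and orderly shutdown.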

SEE ALSO

       MPI_Intercomm_merge, MPI_Comm_free, MPI_Comm_remote_group, MPI_Comm_remote_size

                                                    2/9/2024                             MPI_Intercomm_create(3)