MPI_Allgather(3)                                                                                 MPI_Allgather(3)

Provided by: mpich-doc_4.2.0-14_all

NAME

       MPI_Allgather - Gathers data from all tasks and distributes the combined data to all tasks

SYNOPSIS

       int MPI_Allgather(const void *sendbuf, int sendcount, MPI_Datatype sendtype,
                         void *recvbuf, int recvcount, MPI_Datatype recvtype,
                         MPI_Comm comm)

       int MPI_Allgather_c(const void *sendbuf, MPI_Count sendcount,
                           MPI_Datatype sendtype, void *recvbuf, MPI_Count recvcount,
                           MPI_Datatype recvtype, MPI_Comm comm)

INPUT PARAMETERS

       sendbuf
              - starting address of send buffer (choice)
       sendcount
              - number of elements in send buffer (non-negative integer)
       sendtype
              - data type of send buffer elements (handle)
       recvcount
              - number of elements received from any process (non-negative integer)
       recvtype
              - data type of receive buffer elements (handle)
       comm   - communicator (handle)

OUTPUT PARAMETERS

       recvbuf
              - address of receive buffer (choice)
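
EXAMPLE

       A minimal sketch of a typical call: every process contributes its rank as one
       MPI_INT, and after the call every process holds the ranks of all processes.  The
       use of MPI_COMM_WORLD and a count of one element per process are illustrative
       assumptions, not requirements of the routine.

           #include <mpi.h>
           #include <stdio.h>
           #include <stdlib.h>

           int main(int argc, char **argv)
           {
               int rank, size;

               MPI_Init(&argc, &argv);
               MPI_Comm_rank(MPI_COMM_WORLD, &rank);
               MPI_Comm_size(MPI_COMM_WORLD, &size);

               /* Each process sends one int; the receive buffer must hold
                  recvcount elements from each of the size processes. */
               int sendval = rank;
               int *recvbuf = malloc(size * sizeof(int));

               MPI_Allgather(&sendval, 1, MPI_INT,
                             recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

               /* Every process now holds {0, 1, ..., size-1}. */
               if (rank == 0) {
                   for (int i = 0; i < size; i++)
                       printf("recvbuf[%d] = %d\n", i, recvbuf[i]);
               }

               free(recvbuf);
               MPI_Finalize();
               return 0;
           }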

NOTES

       The MPI standard (1.0 and 1.1) says that

       The jth block of data sent from each process is received by every process and placed in the jth block of
       the buffer recvbuf.

       This is misleading; a better description is

       The block of data sent from the jth process is received by every process and placed in the jth block of
       the buffer recvbuf.

       This text was suggested by Rajeev Thakur and has been adopted as a clarification by the MPI Forum.
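
       Concretely, if recvcount elements of recvtype are received from each process, the
       block contributed by the process with rank j starts j*recvcount elements into
       recvbuf.  A short sketch, assuming MPI_INT data and the names used above:

           /* The block from rank j occupies recvbuf[j*recvcount] through
              recvbuf[(j+1)*recvcount - 1]. */
           const int *block_from_rank_j = (const int *)recvbuf + j * recvcount;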

THREAD AND INTERRUPT SAFETY

       This routine is thread-safe.  This means that this routine may be safely used by multiple threads without
       the need for any user-provided thread locks.  However, the routine is not interrupt safe.  Typically,
       this is due to the use of memory allocation routines such as malloc or other non-MPICH runtime routines
       that are themselves not interrupt-safe.
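
       Calling MPI_Allgather from several threads additionally requires that MPI was
       initialized with full thread support, and the MPI standard does not allow
       concurrent collective calls on the same communicator.  A short sketch of the
       initialization, assuming the implementation grants MPI_THREAD_MULTIPLE:

           int provided;
           MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
           if (provided < MPI_THREAD_MULTIPLE) {
               /* Thread support is weaker than requested; confine MPI
                  calls to a single thread. */
           }
           /* Threads that issue collectives concurrently must each use
              their own communicator (e.g., created with MPI_Comm_dup). */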

NOTES FOR FORTRAN

       All MPI routines in Fortran (except for MPI_WTIME and MPI_WTICK) have an additional argument ierr at the
       end of the argument list.  ierr is an integer and has the same meaning as the return value of the routine
       in C.  In Fortran, MPI routines are subroutines, and are invoked with the call statement.

       All MPI objects (e.g., MPI_Datatype, MPI_Comm) are of type INTEGER in Fortran.

ERRORS

       All MPI routines (except MPI_Wtime and MPI_Wtick) return an error value; C routines as the value of the
       function and Fortran routines in the last argument.  Before the value is returned, the current MPI error
       handler is called.  By default, this error handler aborts the MPI job.  The error handler may be changed
       with MPI_Comm_set_errhandler (for communicators), MPI_File_set_errhandler (for files), and
       MPI_Win_set_errhandler (for RMA windows).  The MPI-1 routine MPI_Errhandler_set may be used but its use
       is deprecated.  The predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be
       returned.  Note that MPI does not guarantee that an MPI program can continue past an error; however, MPI
       implementations will attempt to continue whenever possible.

       MPI_SUCCESS
              - No error; MPI routine completed successfully.
       MPI_ERR_BUFFER
              - Invalid buffer pointer.  Usually a null buffer where one is not valid.
       MPI_ERR_COMM
               - Invalid communicator.  A common error is to use a null communicator in a call (not even allowed
               in MPI_Comm_rank).
       MPI_ERR_COUNT
              - Invalid count argument.  Count arguments must be non-negative; a count of zero is often valid.
       MPI_ERR_TYPE
               - Invalid datatype argument.  Additionally, this error can occur if an uncommitted MPI_Datatype
               (see MPI_Type_commit) is used in a communication call.
       MPI_ERR_OTHER
              - Other error; use MPI_Error_string to get more information about this error code.
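
       A short error-handling sketch, assuming MPI_ERRORS_RETURN has been installed on the
       communicator so that errors are returned rather than aborting the job; sendbuf and
       recvbuf are placeholders, and <stdio.h> is assumed to be included:

           char msg[MPI_MAX_ERROR_STRING];
           int err, msglen;

           /* Return error codes instead of invoking the aborting default handler. */
           MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

           err = MPI_Allgather(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);
           if (err != MPI_SUCCESS) {
               MPI_Error_string(err, msg, &msglen);
               fprintf(stderr, "MPI_Allgather failed: %s\n", msg);
           }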

                                                    2/9/2024                                    MPI_Allgather(3)