MPI_COMM_RANK(3)

Provided by: openmpi-doc_5.0.7-1_all

SYNTAX

   C Syntax
          #include <mpi.h>

          int MPI_Comm_rank(MPI_Comm comm, int *rank)

   Fortran Syntax
          USE MPI
          ! or the older form: INCLUDE 'mpif.h'
          MPI_COMM_RANK(COMM, RANK, IERROR)
               INTEGER COMM, RANK, IERROR

   Fortran 2008 Syntax
          USE mpi_f08
          MPI_Comm_rank(comm, rank, ierror)
               TYPE(MPI_Comm), INTENT(IN) :: comm
               INTEGER, INTENT(OUT) :: rank
               INTEGER, OPTIONAL, INTENT(OUT) :: ierror

INPUT PARAMETERS

comm: Communicator (handle).

OUTPUT PARAMETERS

rank: Rank of the calling process in group of comm (integer).

ierror: Fortran only: Error status (integer).

DESCRIPTION

       This  function  gives the rank of the process in the particular communicator’s group. It is equivalent to
       accessing the communicator’s group with MPI_Comm_group, computing the rank using MPI_Group_rank, and then
       freeing the temporary group via MPI_Group_free.
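
       For illustration, a hedged C sketch of that equivalence; the helper name rank_via_group is an
       assumption for this example, not part of the MPI API:

          #include <mpi.h>

          /* Illustrative helper: compute the caller's rank in comm the long
             way, via the communicator's group. The result always matches what
             MPI_Comm_rank(comm, &rank) returns directly. */
          static int rank_via_group(MPI_Comm comm)
          {
              MPI_Group group;
              int rank;

              MPI_Comm_group(comm, &group);  /* access the group of comm    */
              MPI_Group_rank(group, &rank);  /* rank of the caller in group */
              MPI_Group_free(&group);        /* free the temporary group    */
              return rank;
          }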

       Many programs are written with the manager-worker model, where one process (such as the rank-zero
       process) plays a supervisory role and the other processes serve as workers that carry out the
       computation. In this framework, MPI_Comm_size and MPI_Comm_rank are useful for determining the
       roles of the various processes of a communicator.
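
       As a minimal, self-contained sketch of this pattern (the printed messages are illustrative only):

          #include <mpi.h>
          #include <stdio.h>

          int main(int argc, char *argv[])
          {
              int rank, size;

              MPI_Init(&argc, &argv);
              MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
              MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank       */

              if (rank == 0) {
                  printf("Manager: coordinating %d worker(s)\n", size - 1);
              } else {
                  printf("Worker %d of %d ready\n", rank, size - 1);
              }

              MPI_Finalize();
              return 0;
          }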

ERRORS

       Almost all MPI routines return an error value: C routines return it as the value of the function,
       and Fortran routines return it in the last argument.

       Before the error value is returned, the current MPI error handler associated with the communication
       object (e.g., communicator, window, file) is called. If no communication object is associated with
       the MPI call, then the call is considered attached to MPI_COMM_SELF and will call the associated
       MPI error handler. When MPI_COMM_SELF is not initialized (i.e., before MPI_Init/MPI_Init_thread,
       after MPI_Finalize, or when using the Sessions Model exclusively), the error is raised to the
       initial error handler. The initial error handler can be changed by calling MPI_Comm_set_errhandler
       on MPI_COMM_SELF when using the World model, by passing the mpi_initial_errhandler CLI argument to
       mpiexec, or by passing the same info key to MPI_Comm_spawn/MPI_Comm_spawn_multiple. If no other
       appropriate error handler has been set, then the MPI_ERRORS_RETURN error handler is called for MPI
       I/O functions and the MPI_ERRORS_ABORT error handler is called for all other MPI functions.

       Open MPI includes three predefined error handlers that can be used:

       • MPI_ERRORS_ARE_FATAL Causes the program to abort all connected MPI processes.

       • MPI_ERRORS_ABORT An error handler that can be invoked on a communicator, window, file, or
         session. When called on a communicator, it acts as if MPI_Abort was called on that communicator.
         If called on a window or file, it acts as if MPI_Abort was called on a communicator containing
         the group of processes in the corresponding window or file. If called on a session, it aborts
         only the local process.

       • MPI_ERRORS_RETURN Returns an error code to the application (a usage sketch follows this list).
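
       For example, a hedged sketch that installs MPI_ERRORS_RETURN on MPI_COMM_WORLD and decodes a
       returned error code; the out-of-range destination rank exists only to provoke an error:

          #include <mpi.h>
          #include <stdio.h>

          int main(int argc, char *argv[])
          {
              char msg[MPI_MAX_ERROR_STRING];
              int rc, len, size, payload = 0;

              MPI_Init(&argc, &argv);

              /* Return error codes to the caller instead of aborting. */
              MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

              MPI_Comm_size(MPI_COMM_WORLD, &size);
              /* Destination rank 'size' is one past the last valid rank. */
              rc = MPI_Send(&payload, 1, MPI_INT, size, 0, MPI_COMM_WORLD);
              if (rc != MPI_SUCCESS) {
                  MPI_Error_string(rc, msg, &len);
                  fprintf(stderr, "MPI_Send failed: %s\n", msg);
              }

              MPI_Finalize();
              return 0;
          }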

       MPI applications can also implement their own error handlers by calling, for the relevant object
       type (a minimal sketch follows this list):

       • MPI_Comm_create_errhandler then MPI_Comm_set_errhandler

       • MPI_File_create_errhandler then MPI_File_set_errhandler

       • MPI_Session_create_errhandler then MPI_Session_set_errhandler, or at MPI_Session_init

       • MPI_Win_create_errhandler then MPI_Win_set_errhandler
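
       A minimal sketch of the first pairing; the handler name warn_and_continue is hypothetical, and a
       real handler may also inspect implementation-specific variadic arguments:

          #include <mpi.h>
          #include <stdio.h>

          /* Hypothetical handler: print the error text and return, letting
             the application attempt to continue. */
          static void warn_and_continue(MPI_Comm *comm, int *errcode, ...)
          {
              char msg[MPI_MAX_ERROR_STRING];
              int len;

              MPI_Error_string(*errcode, msg, &len);
              fprintf(stderr, "MPI error reported: %s\n", msg);
          }

          int main(int argc, char *argv[])
          {
              MPI_Errhandler eh;

              MPI_Init(&argc, &argv);
              MPI_Comm_create_errhandler(warn_and_continue, &eh);
              MPI_Comm_set_errhandler(MPI_COMM_WORLD, eh);
              MPI_Errhandler_free(&eh);  /* the attached handler remains in effect */
              MPI_Finalize();
              return 0;
          }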

       Note that MPI does not guarantee that an MPI program can continue past an error.

       See the MPI man page for a full list of MPI error codes.

       See the Error Handling section of the MPI-3.1 standard for more information.

SEE ALSO

       MPI_Comm_group(3), MPI_Comm_size(3), MPI_Comm_compare(3)

       2003-2025, The Open MPI Community

                                                  Feb 17, 2025                                  MPI_COMM_RANK(3)