MPI_RECV(3)                            Open MPI                            MPI_RECV(3)

Provided by: openmpi-doc_5.0.7-1_all

SYNTAX

   C Syntax
          #include <mpi.h>

          int MPI_Recv(void *buf, int count, MPI_Datatype datatype,
               int source, int tag, MPI_Comm comm, MPI_Status *status)

   Fortran Syntax
          USE MPI
          ! or the older form: INCLUDE 'mpif.h'
          MPI_RECV(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, STATUS, IERROR)
               <type>  BUF(*)
               INTEGER COUNT, DATATYPE, SOURCE, TAG, COMM
               INTEGER STATUS(MPI_STATUS_SIZE), IERROR

   Fortran 2008 Syntax
          USE mpi_f08
          MPI_Recv(buf, count, datatype, source, tag, comm, status, ierror)
               TYPE(*), DIMENSION(..) :: buf
               INTEGER, INTENT(IN) :: count, source, tag
               TYPE(MPI_Datatype), INTENT(IN) :: datatype
               TYPE(MPI_Comm), INTENT(IN) :: comm
               TYPE(MPI_Status) :: status
               INTEGER, OPTIONAL, INTENT(OUT) :: ierror

INPUT PARAMETERS

       • count: Maximum number of elements to receive (integer).

       • datatype: Datatype of each receive buffer entry (handle).

       • source: Rank of source (integer).

       • tag: Message tag (integer).

       • comm: Communicator (handle).

OUTPUT PARAMETERS

       • buf: Initial address of receive buffer (choice).

       • status: Status object (status).

       • ierror: Fortran only: Error status (integer).

DESCRIPTION

       This  basic  receive  operation, MPI_Recv, is blocking: it returns only after the receive buffer contains
       the newly received message. A receive can complete before the matching send has completed (of course,  it
       can complete only after the matching send has started).

       The  blocking  semantics  of  this  call  are  described  in the “Communication Modes” section of the MPI
       Standard.

       The receive buffer contains a number (defined by the value of count) of consecutive elements.  The  first
       element in the set is located at the address given by buf. The type of each of these elements is specified
       by datatype.

       The length of the received message must be less than or equal to the length of the receive buffer. If the
       incoming message is longer than the buffer, an overflow has occurred and an MPI_ERR_TRUNCATE error is
       returned.

       If  a  message  that  is shorter than the length of the receive buffer arrives, then only those locations
       corresponding to the (shorter) received message are modified.
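
       As an illustration, here is a minimal sketch (not part of the original
       page; the buffer layout and ranks are assumptions) in which rank 0
       sends ten integers to rank 1, which receives them with MPI_Recv:

          #include <mpi.h>
          #include <stdio.h>

          int main(int argc, char **argv)
          {
              int rank, data[10];
              MPI_Status status;

              MPI_Init(&argc, &argv);
              MPI_Comm_rank(MPI_COMM_WORLD, &rank);

              if (rank == 0) {
                  for (int i = 0; i < 10; ++i)
                      data[i] = i;
                  MPI_Send(data, 10, MPI_INT, 1, 0, MPI_COMM_WORLD);
              } else if (rank == 1) {
                  /* count = 10 is an upper bound; a shorter matching
                     message would also be accepted. */
                  MPI_Recv(data, 10, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
                  printf("rank 1 received data[9] = %d\n", data[9]);
              }

              MPI_Finalize();
              return 0;
          }

       Run with at least two processes, e.g. mpirun -n 2 ./a.out.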

NOTES

       The count argument indicates the maximum number of entries of type datatype that can  be  received  in  a
       message.  Once  a  message  is received, use the MPI_Get_count function to determine the actual number of
       entries within that message.
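
       For example (a sketch; the helper name and buffer size are
       illustrative, and MPI is assumed to be initialized):

          #include <mpi.h>
          #include <stdio.h>

          static void recv_and_count(void)
          {
              int buf[100], n;
              MPI_Status status;

              MPI_Recv(buf, 100, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
              MPI_Get_count(&status, MPI_INT, &n);  /* actual entries, n <= 100 */
              printf("message carried %d ints\n", n);
          }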

       To receive messages of unknown length, use the MPI_Probe function.  For more information about  MPI_Probe
       and MPI_Cancel, see their respective man pages and the “Probe and Cancel” section of the MPI Standard.
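
       A common pattern (sketched below; the helper name is illustrative,
       and MPI is assumed to be initialized) is to probe first, size the
       buffer from the status, and then receive exactly that many elements:

          #include <mpi.h>
          #include <stdlib.h>

          static int *recv_unknown_length(int source, int tag, int *out_n)
          {
              MPI_Status status;
              int n;

              /* Blocks until a matching message is pending, without
                 receiving it. */
              MPI_Probe(source, tag, MPI_COMM_WORLD, &status);
              MPI_Get_count(&status, MPI_INT, &n);

              int *buf = malloc(n * sizeof *buf);
              MPI_Recv(buf, n, MPI_INT, source, tag, MPI_COMM_WORLD,
                       MPI_STATUS_IGNORE);
              *out_n = n;
              return buf;
          }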

       A message can be received by a receive operation only if it is addressed to the receiving process, and if
       its source, tag, and communicator (comm) values match the source, tag, and comm values specified  by  the
       receive  operation.  The receive operation may specify a wildcard value for source and/or tag, indicating
       that any source and/or tag are acceptable. The wildcard value for source is source = MPI_ANY_SOURCE.  The
       wildcard  value  for  tag  is  tag = MPI_ANY_TAG. There is no wildcard value for comm. The scope of these
       wildcards is limited to the processes in the group of the specified communicator.
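
       For instance (a sketch, assuming MPI is initialized), a wildcard
       receive recovers the actual sender and tag from the status object:

          #include <mpi.h>
          #include <stdio.h>

          static void recv_from_anyone(void)
          {
              double x;
              MPI_Status status;

              MPI_Recv(&x, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                       MPI_COMM_WORLD, &status);
              printf("got %g from rank %d with tag %d\n",
                     x, status.MPI_SOURCE, status.MPI_TAG);
          }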

       The message tag is specified by the tag argument of the receive operation.

       The argument source, if different from MPI_ANY_SOURCE, is specified as a rank within  the  process  group
       associated with that same communicator (remote process group, for intercommunicators). Thus, the range of
       valid values for the source argument is {0,…,n-1} ∪ {MPI_ANY_SOURCE}, where n is the number of processes in
       this group.

       Note  the  asymmetry between send and receive operations: A receive operation may accept messages from an
       arbitrary sender; on the other hand, a send operation must specify a  unique  receiver.  This  matches  a
       “push”  communication  mechanism,  where  data  transfer  is effected by the sender (rather than a “pull”
       mechanism, where data transfer is effected by the receiver).

       Source = destination is allowed, that is, a process can send a message to  itself.  However,  it  is  not
       recommended  for  a  process  to  send  messages to itself using the blocking send and receive operations
       described above, since this may lead to deadlock.  See the “Semantics of Point-to-Point Communication”
       section of
       the MPI Standard for more details.
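
       One deadlock-free way to send to oneself (a sketch, not prescribed by
       this page) is to post a nonblocking receive before the blocking send:

          #include <mpi.h>

          static void send_to_self(int rank)
          {
              int out = 42, in;
              MPI_Request req;

              /* Posting the receive first means the blocking send to self
                 cannot wait forever for a matching receive. */
              MPI_Irecv(&in, 1, MPI_INT, rank, 0, MPI_COMM_WORLD, &req);
              MPI_Send(&out, 1, MPI_INT, rank, 0, MPI_COMM_WORLD);
              MPI_Wait(&req, MPI_STATUS_IGNORE);
          }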

       If  your  application  does  not  need  to  examine the status field, you can save resources by using the
       predefined constant MPI_STATUS_IGNORE as a special value for the status argument.
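
       For example (a one-line sketch; buf and its extent are assumed
       declared as in the sketches above):

          MPI_Recv(buf, 100, MPI_INT, 0, 0, MPI_COMM_WORLD,
                   MPI_STATUS_IGNORE);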

ERRORS

       Almost all MPI routines return an error value: C routines return it as the function's return value, and
       Fortran routines return it in the last argument.

       Before  the  error  value  is  returned,  the current MPI error handler associated with the communication
       object (e.g., communicator, window, file) is called.  If no communication object is associated  with  the
       MPI  call,  then  the call is considered attached to MPI_COMM_SELF and will call the associated MPI error
       handler.  When  MPI_COMM_SELF  is  not  initialized   (i.e.,   before   MPI_Init/MPI_Init_thread,   after
       MPI_Finalize,  or  when using the Sessions Model exclusively) the error raises the initial error handler.
       The initial error handler can be changed by calling MPI_Comm_set_errhandler on MPI_COMM_SELF  when  using
       the World model, or by passing the mpi_initial_errhandler CLI argument to mpiexec or the corresponding
       info key to MPI_Comm_spawn or MPI_Comm_spawn_multiple.  If no other appropriate error handler has been
       set, then the MPI_ERRORS_RETURN
       error  handler  is  called for MPI I/O functions and the MPI_ERRORS_ABORT error handler is called for all
       other MPI functions.

       Open MPI includes three predefined error handlers that can be used:

       • MPI_ERRORS_ARE_FATAL Causes the program to abort all connected MPI processes.

       • MPI_ERRORS_ABORT An error handler that can be invoked on a communicator, window, file, or session. When
         called  on  a  communicator,  it  acts  as if MPI_Abort was called on that communicator. If called on a
         window or file, acts as if MPI_Abort was called on a communicator containing the group of processes  in
         the corresponding window or file. If called on a session, aborts only the local process.

       • MPI_ERRORS_RETURN Returns an error code to the application.

       MPI applications can also implement their own error handlers by calling:

       • MPI_Comm_create_errhandler then MPI_Comm_set_errhandler

       • MPI_File_create_errhandler then MPI_File_set_errhandler

       • MPI_Session_create_errhandler then MPI_Session_set_errhandler, or at MPI_Session_init

       • MPI_Win_create_errhandler then MPI_Win_set_errhandler
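
       A minimal sketch of the communicator case (the handler and helper
       names are illustrative; the signature follows the standard
       MPI_Comm_errhandler_function):

          #include <mpi.h>
          #include <stdio.h>

          static void warn_handler(MPI_Comm *comm, int *code, ...)
          {
              char msg[MPI_MAX_ERROR_STRING];
              int len;

              MPI_Error_string(*code, msg, &len);
              fprintf(stderr, "MPI error: %s\n", msg);
          }

          static void install_handler(void)
          {
              MPI_Errhandler eh;

              MPI_Comm_create_errhandler(warn_handler, &eh);
              MPI_Comm_set_errhandler(MPI_COMM_WORLD, eh);
              MPI_Errhandler_free(&eh);  /* handler stays attached */
          }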

       Note that MPI does not guarantee that an MPI program can continue past an error.

       See the MPI man page for a full list of MPI error codes.

       See the Error Handling section of the MPI-3.1 standard for more information.

       Note  that  per  the  “Return  Status”  section  in the “Point-to-Point Communication” chapter in the MPI
       Standard, MPI errors on messages received by MPI_Recv do  not  set  the  status.MPI_ERROR  field  in  the
       returned status.  The error code is always passed to the back-end error handler and may be passed back to
       the caller through the return  value  of  MPI_Recv  if  the  back-end  error  handler  returns  it.   The
       predefined MPI error handler MPI_ERRORS_RETURN exhibits this behavior, for example.
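
       A sketch of checking the return value under MPI_ERRORS_RETURN
       (assuming MPI is initialized; the helper name is illustrative):

          #include <mpi.h>
          #include <stdio.h>

          static void recv_checked(void)
          {
              int buf[4], rc, len;
              char msg[MPI_MAX_ERROR_STRING];

              MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
              rc = MPI_Recv(buf, 4, MPI_INT, 0, 0, MPI_COMM_WORLD,
                            MPI_STATUS_IGNORE);
              if (rc != MPI_SUCCESS) {
                  /* status.MPI_ERROR is not set by MPI_Recv; the code
                     comes back through the return value instead. */
                  MPI_Error_string(rc, msg, &len);
                  fprintf(stderr, "MPI_Recv failed: %s\n", msg);
              }
          }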

SEE ALSO

       • MPI_Irecv

       • MPI_Probe

       2003-2025, The Open MPI Community

                                                  Feb 17, 2025                                       MPI_RECV(3)