
Provided by: openmpi-doc_5.0.7-1_all

SYNTAX

   C Syntax
          #include <mpi.h>

          int MPI_Unpack(const void *inbuf, int insize, int *position,
               void *outbuf, int outcount, MPI_Datatype datatype,
               MPI_Comm comm)

   Fortran Syntax
          USE MPI
          ! or the older form: INCLUDE 'mpif.h'
          MPI_UNPACK(INBUF, INSIZE, POSITION, OUTBUF, OUTCOUNT,
               DATATYPE, COMM, IERROR)
               <type>  INBUF(*), OUTBUF(*)
               INTEGER INSIZE, POSITION, OUTCOUNT, DATATYPE,
                       COMM, IERROR

   Fortran 2008 Syntax
          USE mpi_f08
          MPI_Unpack(inbuf, insize, position, outbuf, outcount, datatype, comm,
                       ierror)
               TYPE(*), DIMENSION(..), INTENT(IN) :: inbuf
               TYPE(*), DIMENSION(..) :: outbuf
               INTEGER, INTENT(IN) :: insize, outcount
               INTEGER, INTENT(INOUT) :: position
               TYPE(MPI_Datatype), INTENT(IN) :: datatype
               TYPE(MPI_Comm), INTENT(IN) :: comm
               INTEGER, OPTIONAL, INTENT(OUT) :: ierror

INPUT PARAMETERS

        • inbuf: Input buffer start (choice).

       • insize: Size of input buffer, in bytes (integer).

       • outcount: Number of items to be unpacked (integer).

       • datatype: Datatype of each output data item (handle).

       • comm: Communicator for packed message (handle).

INPUT/OUTPUT PARAMETER

        • position: Current position in bytes (integer).

OUTPUT PARAMETERS

        • outbuf: Output buffer start (choice).

       • ierror: Fortran only: Error status (integer).

DESCRIPTION

       Unpacks  a  message into the receive buffer specified by outbuf, outcount, datatype from the buffer space
       specified by inbuf and insize. The output buffer can be any communication buffer allowed in MPI_Recv. The
       input  buffer  is a contiguous storage area containing insize bytes, starting at address inbuf. The input
       value of position is the first location in the input buffer occupied by the packed message.  position  is
       incremented by the size of the packed message, so that the output value of position is the first location
       in the input buffer after the  locations  occupied  by  the  message  that  was  unpacked.  comm  is  the
       communicator used to receive the packed message.
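
        As a minimal sketch of this round trip within a single process (the 64-byte buffer and the packed values
        are illustrative choices; MPI_Pack_size reports the exact buffer size a real program should use):

          #include <mpi.h>
          #include <stdio.h>

          int main(int argc, char **argv)
          {
              MPI_Init(&argc, &argv);

              char buf[64];              /* assumed large enough for this sketch */
              int position = 0;
              int    i_in = 7,   i_out;
              double d_in = 3.5, d_out;

              /* Pack an int and a double; position advances past each packed item. */
              MPI_Pack(&i_in, 1, MPI_INT,    buf, (int) sizeof(buf), &position, MPI_COMM_WORLD);
              MPI_Pack(&d_in, 1, MPI_DOUBLE, buf, (int) sizeof(buf), &position, MPI_COMM_WORLD);

              /* Unpack in the same order, starting again from position = 0. */
              position = 0;
              MPI_Unpack(buf, (int) sizeof(buf), &position, &i_out, 1, MPI_INT,    MPI_COMM_WORLD);
              MPI_Unpack(buf, (int) sizeof(buf), &position, &d_out, 1, MPI_DOUBLE, MPI_COMM_WORLD);

              printf("%d %g\n", i_out, d_out);   /* prints: 7 3.5 */

              MPI_Finalize();
              return 0;
          }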

NOTES

       Note  the  difference  between  MPI_Recv  and  MPI_Unpack:  In MPI_Recv, the count argument specifies the
       maximum number of items that can be received. The actual number of items received is  determined  by  the
       length  of  the  incoming message. In MPI_Unpack, the count argument specifies the actual number of items
       that are to be unpacked; the “size” of the corresponding message is the increment in position. The reason
       for  this change is that the “incoming message size” is not predetermined since the user decides how much
       to unpack; nor is it easy to determine the “message size” from the number of items to be unpacked.

       To understand the behavior of pack and unpack, it is convenient to think of the data part of a message as
       being  the  sequence  obtained  by  concatenating  the  successive  values sent in that message. The pack
       operation stores this sequence in the buffer space, as if sending the message to that buffer. The  unpack
       operation  retrieves  this sequence from buffer space, as if receiving a message from that buffer. (It is
       helpful to think of internal Fortran files or sscanf in C for a similar function.)

       Several messages can be successively packed into one packing unit. This is effected by several successive
       related  calls  to  MPI_Pack, where the first call provides position = 0, and each successive call inputs
        the value of position that was output by the previous call, and the same values for outbuf, outsize, and
       comm.  This packing unit now contains the equivalent information that would have been stored in a message
       by one send call with a send buffer that is the “concatenation” of the individual send buffers.

       A packing unit can be sent using type MPI_PACKED. Any point-to-point or collective communication function
       can be used to move the sequence of bytes that forms the packing unit from one process to another. This
       packing unit can now be received using any receive operation, with any datatype: the type-matching rules
       are relaxed for messages sent with type MPI_PACKED.

       A message sent with any type (including MPI_PACKED) can be received using the type MPI_PACKED. Such a
       message can then be unpacked by calls to MPI_Unpack.

       A packing unit (or a message created by a regular, “typed” send) can be unpacked into several  successive
       messages.  This  is  effected  by  several  successive  related calls to MPI_Unpack, where the first call
       provides position = 0, and each successive call inputs the value of  position  that  was  output  by  the
       previous call, and the same values for inbuf, insize, and comm.
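
        The following sketch puts these pieces together between two processes (the tag, buffer size, and array
        length are illustrative assumptions, not requirements): rank 0 builds a packing unit with two related
        MPI_Pack calls and sends it with type MPI_PACKED; rank 1 receives the raw bytes and recovers the values
        with two related MPI_Unpack calls, threading position through both.

          #include <mpi.h>
          #include <stdio.h>

          int main(int argc, char **argv)
          {
              MPI_Init(&argc, &argv);

              int rank;
              MPI_Comm_rank(MPI_COMM_WORLD, &rank);

              char unit[128];            /* illustrative size; use MPI_Pack_size in real code */
              int position = 0;

              if (rank == 0) {
                  int    n = 4;
                  double x[4] = {1.0, 2.0, 3.0, 4.0};

                  /* First call starts at position = 0; the second continues where it left off. */
                  MPI_Pack(&n, 1, MPI_INT,    unit, (int) sizeof(unit), &position, MPI_COMM_WORLD);
                  MPI_Pack(x,  4, MPI_DOUBLE, unit, (int) sizeof(unit), &position, MPI_COMM_WORLD);

                  /* position now equals the size of the packing unit in bytes. */
                  MPI_Send(unit, position, MPI_PACKED, 1, 0, MPI_COMM_WORLD);
              } else if (rank == 1) {
                  int    n;
                  double x[4];

                  MPI_Recv(unit, (int) sizeof(unit), MPI_PACKED, 0, 0, MPI_COMM_WORLD,
                           MPI_STATUS_IGNORE);

                  /* Same related sequence of calls, starting again from position = 0. */
                  MPI_Unpack(unit, (int) sizeof(unit), &position, &n, 1, MPI_INT,    MPI_COMM_WORLD);
                  MPI_Unpack(unit, (int) sizeof(unit), &position, x,  n, MPI_DOUBLE, MPI_COMM_WORLD);

                  printf("received %d doubles, first = %g\n", n, x[0]);
              }

              MPI_Finalize();
              return 0;
          }

        Run the sketch with at least two processes, for example mpirun -n 2 ./a.out.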

       The concatenation of two packing units is not necessarily a packing unit; nor is a substring of a packing
       unit necessarily a packing unit.  Thus, one cannot concatenate two packing  units  and  then  unpack  the
       result  as one packing unit; nor can one unpack a substring of a packing unit as a separate packing unit.
       Each packing unit that was created by a related sequence of pack calls or  by  a  regular  send  must  be
       unpacked as a unit, by a sequence of related unpack calls.

ERRORS

       Almost  all  MPI  routines  return  an  error  value; C routines as the return result of the function and
       Fortran routines in the last argument.

       Before the error value is returned, the current MPI  error  handler  associated  with  the  communication
       object  (e.g.,  communicator, window, file) is called.  If no communication object is associated with the
       MPI call, then the call is considered attached to MPI_COMM_SELF and will call the  associated  MPI  error
       handler.   When   MPI_COMM_SELF   is   not  initialized  (i.e.,  before  MPI_Init/MPI_Init_thread,  after
       MPI_Finalize, or when using the Sessions Model exclusively) the error raises the initial  error  handler.
       The  initial  error handler can be changed by calling MPI_Comm_set_errhandler on MPI_COMM_SELF when using
        the World model, or the mpi_initial_errhandler CLI argument to mpiexec or info key to
        MPI_Comm_spawn/MPI_Comm_spawn_multiple. If no other appropriate error handler has been set, then the MPI_ERRORS_RETURN
       error handler is called for MPI I/O functions and the MPI_ERRORS_ABORT error handler is  called  for  all
       other MPI functions.

       Open MPI includes three predefined error handlers that can be used:

       • MPI_ERRORS_ARE_FATAL Causes the program to abort all connected MPI processes.

       • MPI_ERRORS_ABORT An error handler that can be invoked on a communicator, window, file, or session. When
         called on a communicator, it acts as if MPI_Abort was called on  that  communicator.  If  called  on  a
         window  or file, acts as if MPI_Abort was called on a communicator containing the group of processes in
         the corresponding window or file. If called on a session, aborts only the local process.

       • MPI_ERRORS_RETURN Returns an error code to the application.

       MPI applications can also implement their own error handlers by calling:

        • MPI_Comm_create_errhandler then MPI_Comm_set_errhandler

        • MPI_File_create_errhandler then MPI_File_set_errhandler

        • MPI_Session_create_errhandler then MPI_Session_set_errhandler or at MPI_Session_init

        • MPI_Win_create_errhandler then MPI_Win_set_errhandler

       Note that MPI does not guarantee that an MPI program can continue past an error.
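
        For example, a fragment that selects the predefined MPI_ERRORS_RETURN handler on MPI_COMM_WORLD and then
        checks the return code of MPI_Unpack explicitly (the arguments stand in for the parameters described in
        the SYNTAX section above, and <stdio.h> is assumed to be included):

          MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

          int rc = MPI_Unpack(inbuf, insize, &position, outbuf, outcount,
                              datatype, MPI_COMM_WORLD);
          if (rc != MPI_SUCCESS) {
              char msg[MPI_MAX_ERROR_STRING];
              int len;
              MPI_Error_string(rc, msg, &len);   /* convert the error code to readable text */
              fprintf(stderr, "MPI_Unpack failed: %s\n", msg);
          }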

       See the MPI man page for a full list of MPI error codes.

       See the Error Handling section of the MPI-3.1 standard for more information.

SEE ALSO

        MPI_Pack, MPI_Pack_size

       2003-2025, The Open MPI Community

                                                  Feb 17, 2025                                     MPI_UNPACK(3)