MPI_SCAN(3)

Provided by: openmpi-doc_5.0.7-1_all

SYNTAX

   C Syntax
          #include <mpi.h>

          int MPI_Scan(const void *sendbuf, void *recvbuf, int count,
                       MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)

          int MPI_Iscan(const void *sendbuf, void *recvbuf, int count,
                        MPI_Datatype datatype, MPI_Op op, MPI_Comm comm,
                        MPI_Request *request)

          int MPI_Scan_init(const void *sendbuf, void *recvbuf, int count,
                            MPI_Datatype datatype, MPI_Op op, MPI_Comm comm,
                            MPI_Info info, MPI_Request *request)

   Fortran Syntax
          USE MPI
          ! or the older form: INCLUDE 'mpif.h'
          MPI_SCAN(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, COMM, IERROR)
               <type>  SENDBUF(*), RECVBUF(*)
               INTEGER COUNT, DATATYPE, OP, COMM, IERROR

          MPI_ISCAN(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, COMM, REQUEST, IERROR)
               <type>  SENDBUF(*), RECVBUF(*)
               INTEGER COUNT, DATATYPE, OP, COMM, REQUEST, IERROR

          MPI_SCAN_INIT(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, COMM, INFO, REQUEST, IERROR)
               <type>  SENDBUF(*), RECVBUF(*)
               INTEGER COUNT, DATATYPE, OP, COMM, INFO, REQUEST, IERROR

   Fortran 2008 Syntax
          USE mpi_f08
          MPI_Scan(sendbuf, recvbuf, count, datatype, op, comm, ierror)
               TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
               TYPE(*), DIMENSION(..) :: recvbuf
               INTEGER, INTENT(IN) :: count
               TYPE(MPI_Datatype), INTENT(IN) :: datatype
               TYPE(MPI_Op), INTENT(IN) :: op
               TYPE(MPI_Comm), INTENT(IN) :: comm
               INTEGER, OPTIONAL, INTENT(OUT) :: ierror

          MPI_Iscan(sendbuf, recvbuf, count, datatype, op, comm, request, ierror)
               TYPE(*), DIMENSION(..), INTENT(IN), ASYNCHRONOUS :: sendbuf
               TYPE(*), DIMENSION(..), ASYNCHRONOUS :: recvbuf
               INTEGER, INTENT(IN) :: count
               TYPE(MPI_Datatype), INTENT(IN) :: datatype
               TYPE(MPI_Op), INTENT(IN) :: op
               TYPE(MPI_Comm), INTENT(IN) :: comm
               TYPE(MPI_Request), INTENT(OUT) :: request
               INTEGER, OPTIONAL, INTENT(OUT) :: ierror

          MPI_Scan_init(sendbuf, recvbuf, count, datatype, op, comm, info, request, ierror)
               TYPE(*), DIMENSION(..), INTENT(IN), ASYNCHRONOUS :: sendbuf
               TYPE(*), DIMENSION(..), ASYNCHRONOUS :: recvbuf
               INTEGER, INTENT(IN) :: count
               TYPE(MPI_Datatype), INTENT(IN) :: datatype
               TYPE(MPI_Op), INTENT(IN) :: op
               TYPE(MPI_Comm), INTENT(IN) :: comm
               TYPE(MPI_Info), INTENT(IN) :: info
               TYPE(MPI_Request), INTENT(OUT) :: request
               INTEGER, OPTIONAL, INTENT(OUT) :: ierror

INPUT PARAMETERS

       • sendbuf: Send buffer (choice).

       • count: Number of elements in input buffer (integer).

       • datatype: Data type of elements of input buffer (handle).

       • op: Operation (handle).

       • comm: Communicator (handle).

       • info: Info (handle, persistent only).

OUTPUT PARAMETERS

       • recvbuf: Receive buffer (choice).

       • request: Request (handle, non-blocking only).

       • ierror: Fortran only: Error status (integer).

DESCRIPTION

       MPI_Scan  is  used  to  perform  an  inclusive  prefix  reduction  on data distributed across the calling
       processes. The operation returns, in the recvbuf of the process with rank i,  the  reduction  (calculated
       according  to the function op) of the values in the sendbufs of processes with ranks 0, …, i (inclusive).
       The type of operations supported, their semantics, and the constraints on send and receive buffers are as
       for MPI_Reduce.
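
        For illustration, the following minimal sketch (error handling omitted) computes a running sum of
        the ranks with the predefined MPI_SUM operation; the process with rank i receives 0 + 1 + … + i in
        its receive buffer:

           #include <mpi.h>
           #include <stdio.h>

           int main(int argc, char *argv[])
           {
                   int rank, prefix;

                   MPI_Init(&argc, &argv);
                   MPI_Comm_rank(MPI_COMM_WORLD, &rank);

                   /* inclusive prefix sum over ranks 0 .. rank */
                   MPI_Scan(&rank, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

                   printf("rank %d: prefix sum = %d\n", rank, prefix);

                   MPI_Finalize();
                   return 0;
           }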

EXAMPLE

       This  example  uses  a  user-defined  operation  to produce a segmented scan.  A segmented scan takes, as
       input, a set of values and a set of logicals, where the logicals delineate the various  segments  of  the
       scan. For example,

          values     v1      v2      v3      v4      v5      v6      v7      v8
          logicals   0       0       1       1       1       0       0       1
          result     v1    v1+v2     v3    v3+v4  v3+v4+v5   v6    v6+v7     v8

       The result for rank j is thus the sum v(i) + … + v(j), where i is the lowest rank such that for all ranks
       n, i <= n <= j, logical(n) = logical(j). The operator that produces this effect is

                 [ u ]     [ v ]     [ w ]
                 [   ]  o  [   ]  =  [   ]
                 [ i ]     [ j ]     [ j ]

           where

                 w = u + v    if i = j
                 w = v        if i != j

       Note that this is a noncommutative operator. C code that implements it is given below.

          typedef struct {
                  double val;
                  int log;
          } SegScanPair;

          /*
           * the user-defined function
           */
          void segScan(SegScanPair *in, SegScanPair *inout, int *len,
                  MPI_Datatype *dptr)
          {
                  int i;
                  SegScanPair c;

                   /* element-wise: inout[k] = in[k] o inout[k] */
                   for (i = 0; i < *len; ++i) {
                          if (in->log == inout->log)
                                  c.val = in->val + inout->val;
                          else
                                  c.val = inout->val;

                          c.log = inout->log;
                          *inout = c;
                          in++;
                          inout++;
                  }
          }

       Note that the inout argument to the user-defined function corresponds to the right-hand  operand  of  the
       operator.  When  using  this operator, we must be careful to specify that it is noncommutative, as in the
       following:

           int             i;
           MPI_Aint        base;
           SegScanPair     a, answer;
           MPI_Op          myOp;
           MPI_Datatype    type[2] = {MPI_DOUBLE, MPI_INT};
           MPI_Aint        disp[2];
           int             blocklen[2] = {1, 1};
           MPI_Datatype    sspair;

           /*
            * explain to MPI how type SegScanPair is defined
            */
           MPI_Get_address(&a, disp);
           MPI_Get_address(&a.log, disp + 1);
           base = disp[0];
           for (i = 0; i < 2; ++i)
                   disp[i] -= base;
           MPI_Type_create_struct(2, blocklen, disp, type, &sspair);
           MPI_Type_commit(&sspair);

           /*
            * create the segmented-scan user-op
            * noncommutative - set commute (arg 2) to 0
            */
           MPI_Op_create((MPI_User_function *)segScan, 0, &myOp);
           ...
           MPI_Scan(&a, &answer, 1, sspair, myOp, comm);

USE OF IN-PLACE OPTION

       When the communicator is an intracommunicator, you can perform a scanning operation in place (the  output
       buffer  is used as the input buffer). Use the variable MPI_IN_PLACE as the value of the sendbuf argument.
       The input data is taken from the receive buffer and replaced by the output data.
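
        For example, a minimal sketch of an in-place running sum over ranks (the input value is stored in
        the receive buffer before the call and is overwritten with the result):

           int prefix = rank;   /* rank previously obtained from MPI_Comm_rank */

           MPI_Scan(MPI_IN_PLACE, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);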

NOTES ON COLLECTIVE OPERATIONS

       The reduction functions of type MPI_Op do not return an error value. As a result, if the functions detect
       an  error,  all  they  can  do  is either call MPI_Abort or silently skip the problem. Thus, if the error
       handler is changed from MPI_ERRORS_ARE_FATAL to something else (e.g., MPI_ERRORS_RETURN), then  no  error
       may be indicated.

       The  reason for this is the performance problems in ensuring that all collective routines return the same
       error value.
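
        For example, a user-defined reduction function that detects a problem can do little more than call
        MPI_Abort, since it cannot return an error code (a minimal sketch, assuming <mpi.h> is included):

           /*
            * element-wise sum that aborts if applied to a datatype other
            * than MPI_DOUBLE, because it has no way to report the error
            */
           void checkedSum(void *invec, void *inoutvec, int *len, MPI_Datatype *dptr)
           {
                   double *in = (double *)invec;
                   double *inout = (double *)inoutvec;
                   int i;

                   if (*dptr != MPI_DOUBLE)
                           MPI_Abort(MPI_COMM_WORLD, 1);

                   for (i = 0; i < *len; ++i)
                           inout[i] += in[i];
           }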

ERRORS

       Almost all MPI routines return an error value; C routines as  the  return  result  of  the  function  and
       Fortran routines in the last argument.

       Before  the  error  value  is  returned,  the current MPI error handler associated with the communication
       object (e.g., communicator, window, file) is called.  If no communication object is associated  with  the
       MPI  call,  then  the call is considered attached to MPI_COMM_SELF and will call the associated MPI error
       handler.  When  MPI_COMM_SELF  is  not  initialized   (i.e.,   before   MPI_Init/MPI_Init_thread,   after
       MPI_Finalize,  or  when using the Sessions Model exclusively) the error raises the initial error handler.
       The initial error handler can be changed by calling MPI_Comm_set_errhandler on MPI_COMM_SELF  when  using
        the World model, or the mpi_initial_errhandler CLI argument to mpiexec or info key to MPI_Comm_spawn/
        MPI_Comm_spawn_multiple.  If no other appropriate error handler has been set, then the MPI_ERRORS_RETURN
       error  handler  is  called for MPI I/O functions and the MPI_ERRORS_ABORT error handler is called for all
       other MPI functions.

       Open MPI includes three predefined error handlers that can be used:

       • MPI_ERRORS_ARE_FATAL Causes the program to abort all connected MPI processes.

       • MPI_ERRORS_ABORT An error handler that can be invoked on a communicator, window, file, or session. When
         called on a communicator, it acts as if MPI_Abort was called on that communicator. If called on a
         window or file, it acts as if MPI_Abort was called on a communicator containing the group of processes
         in the corresponding window or file. If called on a session, it aborts only the local process.

       • MPI_ERRORS_RETURN Returns an error code to the application.

       MPI applications can also implement their own error handlers by calling:

       • MPI_Comm_create_errhandler then MPI_Comm_set_errhandler

       • MPI_File_create_errhandler then MPI_File_set_errhandler

       • MPI_Session_create_errhandler then MPI_Session_set_errhandler or at MPI_Session_init

       • MPI_Win_create_errhandler then MPI_Win_set_errhandler
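
        For example, a minimal sketch that installs the predefined MPI_ERRORS_RETURN handler on a
        communicator and then checks the return code of MPI_Scan directly:

           int rank, prefix, rc;

           MPI_Comm_rank(MPI_COMM_WORLD, &rank);
           MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

           rc = MPI_Scan(&rank, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
           if (rc != MPI_SUCCESS) {
                   /* handle or report the failure instead of aborting */
           }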

       Note that MPI does not guarantee that an MPI program can continue past an error.

       See the MPI man page for a full list of MPI error codes.

       See the Error Handling section of the MPI-3.1 standard for more information.

        SEE ALSO: MPI_Exscan, MPI_Op_create, MPI_Reduce

       2003-2025, The Open MPI Community

                                                  Feb 17, 2025                                       MPI_SCAN(3)