Provided by: mpich-doc_3.3~a2-4_all
NAME
MPI_Rget_accumulate - Perform an atomic, one-sided read-and-accumulate operation and return a request handle for the operation.
SYNOPSIS
int MPI_Rget_accumulate(const void *origin_addr, int origin_count,
                        MPI_Datatype origin_datatype, void *result_addr,
                        int result_count, MPI_Datatype result_datatype,
                        int target_rank, MPI_Aint target_disp, int target_count,
                        MPI_Datatype target_datatype, MPI_Op op, MPI_Win win,
                        MPI_Request *request)

MPI_Rget_accumulate is similar to MPI_Get_accumulate, except that it allocates a communication request object and associates it with the request handle (the argument request), which can be used to wait or test for completion. The completion of an MPI_Rget_accumulate operation indicates that the data is available in the result buffer and the origin buffer is free to be updated. It does not indicate that the operation has been completed at the target window.
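A minimal sketch of this request-based completion model follows, assuming a window win has already been created and exposed by all processes; the buffer names and the choice of MPI_SUM are illustrative, not prescribed by this page.

    #include <mpi.h>

    /* Sketch: atomically add 'origin' to an integer in the target window
       and fetch its previous value, using the request-based interface.
       Assumes 'win' was created over at least one MPI_INT per process. */
    void fetch_and_add_example(MPI_Win win, int target_rank)
    {
        int origin = 1;     /* value added at the target element    */
        int result = 0;     /* receives the target's previous value */
        MPI_Request req;

        MPI_Win_lock(MPI_LOCK_SHARED, target_rank, 0, win);

        MPI_Rget_accumulate(&origin, 1, MPI_INT, &result, 1, MPI_INT,
                            target_rank, 0 /* target_disp */, 1, MPI_INT,
                            MPI_SUM, win, &req);

        /* Local completion: 'result' is valid and 'origin' may be reused.
           Completion at the target is ensured when the epoch is closed. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        MPI_Win_unlock(target_rank, win);
    }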
INPUT PARAMETERS
origin_addr
       - initial address of buffer (choice)
origin_count
       - number of entries in buffer (nonnegative integer)
origin_datatype
       - datatype of each buffer entry (handle)
result_addr
       - initial address of result buffer (choice)
result_count
       - number of entries in result buffer (nonnegative integer)
result_datatype
       - datatype of each entry in result buffer (handle)
target_rank
       - rank of target (nonnegative integer)
target_disp
       - displacement from start of window to beginning of target buffer (nonnegative integer)
target_count
       - number of entries in target buffer (nonnegative integer)
target_datatype
       - datatype of each entry in target buffer (handle)
op
       - predefined reduce operation (handle)
win
       - window object (handle)
OUTPUT PARAMETERS
request
       - RMA request (handle)
NOTES
This operation is atomic with respect to other "accumulate" operations. The get and accumulate steps are executed atomically for each basic element in the datatype (see MPI 3.0 Section 11.7 for details). The predefined operation MPI_REPLACE provides fetch-and-set behavior. The basic components of both the origin and target datatypes must be the same predefined datatype (e.g., all MPI_INT or all MPI_DOUBLE_PRECISION ).
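As a concrete illustration of the fetch-and-set behavior of MPI_REPLACE noted above, the sketch below atomically writes a new value into a target element and retrieves the value it replaced; it assumes the same window, target rank, and open access epoch as the earlier sketch.

    int new_val = 42;   /* value installed at the target element */
    int old_val;        /* receives the value that was replaced  */
    MPI_Request req;

    /* MPI_REPLACE makes the accumulate step a plain store, so the call
       behaves as an atomic fetch-and-set on the target element.       */
    MPI_Rget_accumulate(&new_val, 1, MPI_INT, &old_val, 1, MPI_INT,
                        target_rank, 0, 1, MPI_INT,
                        MPI_REPLACE, win, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);   /* old_val now holds the prior value */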
NOTES FOR FORTRAN
All MPI routines in Fortran (except for MPI_WTIME and MPI_WTICK ) have an additional argument ierr at the end of the argument list. ierr is an integer and has the same meaning as the return value of the routine in C. In Fortran, MPI routines are subroutines, and are invoked with the call statement. All MPI objects (e.g., MPI_Datatype , MPI_Comm ) are of type INTEGER in Fortran.
ERRORS
All MPI routines (except MPI_Wtime and MPI_Wtick ) return an error value; C routines as the value of the function and Fortran routines in the last argument. Before the value is returned, the current MPI error handler is called. By default, this error handler aborts the MPI job. The error handler may be changed with MPI_Comm_set_errhandler (for communicators), MPI_File_set_errhandler (for files), and MPI_Win_set_errhandler (for RMA windows). The MPI-1 routine MPI_Errhandler_set may be used, but its use is deprecated. The predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error; however, MPI implementations will attempt to continue whenever possible.

MPI_SUCCESS
       - No error; MPI routine completed successfully.
MPI_ERR_ARG
       - Invalid argument. Some argument is invalid and is not identified by a specific error class (e.g., MPI_ERR_RANK ).
MPI_ERR_COUNT
       - Invalid count argument. Count arguments must be non-negative; a count of zero is often valid.
MPI_ERR_RANK
       - Invalid source or destination rank. Ranks must be between zero and the size of the communicator minus one; ranks in a receive ( MPI_Recv , MPI_Irecv , MPI_Sendrecv , etc.) may also be MPI_ANY_SOURCE .
MPI_ERR_TYPE
       - Invalid datatype argument. Additionally, this error can occur if an uncommitted MPI_Datatype (see MPI_Type_commit ) is used in a communication call.
MPI_ERR_WIN
       - Invalid MPI window object.
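The fragment below sketches one way to receive error codes from MPI_Rget_accumulate instead of aborting: it installs MPI_ERRORS_RETURN on the window and checks the return value. It reuses the variables from the earlier sketches and assumes the usual stdio.h include; the error-reporting style is illustrative.

    /* Make RMA errors on 'win' return error codes instead of aborting. */
    MPI_Win_set_errhandler(win, MPI_ERRORS_RETURN);

    int err = MPI_Rget_accumulate(&origin, 1, MPI_INT, &result, 1, MPI_INT,
                                  target_rank, 0, 1, MPI_INT,
                                  MPI_SUM, win, &req);
    if (err != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int len;
        MPI_Error_string(err, msg, &len);
        fprintf(stderr, "MPI_Rget_accumulate failed: %s\n", msg);
    }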
SEE ALSO
MPI_Get_accumulate MPI_Fetch_and_op