NAME
MPI_Rget_accumulate - Perform an atomic, one-sided read-and-accumulate operation
SYNOPSIS
int MPI_Rget_accumulate(const void *origin_addr, int origin_count,
                        MPI_Datatype origin_datatype, void *result_addr,
                        int result_count, MPI_Datatype result_datatype,
                        int target_rank, MPI_Aint target_disp, int target_count,
                        MPI_Datatype target_datatype, MPI_Op op, MPI_Win win,
                        MPI_Request *request)

int MPI_Rget_accumulate_c(const void *origin_addr, MPI_Count origin_count,
                          MPI_Datatype origin_datatype, void *result_addr,
                          MPI_Count result_count, MPI_Datatype result_datatype,
                          int target_rank, MPI_Aint target_disp,
                          MPI_Count target_count, MPI_Datatype target_datatype,
                          MPI_Op op, MPI_Win win, MPI_Request *request)
INPUT PARAMETERS
origin_addr - initial address of buffer (choice)
origin_count - number of entries in origin buffer (non-negative integer)
origin_datatype - datatype of each entry in origin buffer (handle)
result_count - number of entries in result buffer (non-negative integer)
result_datatype - datatype of entries in result buffer (handle)
target_rank - rank of target (non-negative integer)
target_disp - displacement from start of window to beginning of target buffer (non-negative integer)
target_count - number of entries in target buffer (non-negative integer)
target_datatype - datatype of each entry in target buffer (handle)
op - reduce operation (handle)
win - window object (handle)
OUTPUT PARAMETERS
result_addr - initial address of result buffer (choice)
request - RMA request (handle)
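The complete program below is a minimal sketch of how these parameters fit together: every process exposes one integer in a window, and each rank atomically adds 1 to a counter on rank 0 while fetching the counter's previous value. The window layout, target rank, and variable names are illustrative choices, not part of this interface.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, counter = 0, one = 1, prev = 0;
        MPI_Win win;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* every process exposes one int; rank 0's copy serves as the counter */
        MPI_Win_create(&counter, sizeof(int), sizeof(int), MPI_INFO_NULL,
                       MPI_COMM_WORLD, &win);

        /* request-based RMA operations require a passive-target epoch */
        MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
        MPI_Rget_accumulate(&one, 1, MPI_INT,   /* origin buffer: the operand */
                            &prev, 1, MPI_INT,  /* result buffer: prior value */
                            0,                  /* target_rank */
                            0,                  /* target_disp */
                            1, MPI_INT,         /* target count and datatype */
                            MPI_SUM, win, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);      /* completes the request locally */
        MPI_Win_unlock(0, win);

        printf("rank %d: counter was %d before my increment\n", rank, prev);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }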
NOTES
This operation is atomic with respect to other "accumulate" operations. The get and accumulate steps are executed atomically for each basic element in the datatype (see MPI 3.0 Section 11.7 for details). The predefined operation MPI_REPLACE provides fetch-and-set behavior. The basic components of both the origin and target datatype must be the same predefined datatype (e.g., all MPI_INT or all MPI_DOUBLE_PRECISION).
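As a sketch of the fetch-and-set behavior mentioned above, the fragment below atomically swaps a new value into the target location and returns the old one. It assumes an existing window win, target rank 1, and displacement 0, all chosen for illustration:

    int newval = 42, oldval;
    MPI_Request req;

    MPI_Win_lock(MPI_LOCK_SHARED, 1, 0, win);
    /* atomic swap with MPI_REPLACE: oldval receives the previous contents
       of the target location, and newval replaces them */
    MPI_Rget_accumulate(&newval, 1, MPI_INT, &oldval, 1, MPI_INT,
                        1, 0, 1, MPI_INT, MPI_REPLACE, win, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    MPI_Win_unlock(1, win);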
NOTES FOR FORTRAN
All MPI routines in Fortran (except for MPI_WTIME and MPI_WTICK) have an additional argument ierr at the end of the argument list. ierr is an integer and has the same meaning as the return value of the routine in C. In Fortran, MPI routines are subroutines and are invoked with the call statement. All MPI objects (e.g., MPI_Datatype, MPI_Comm) are of type INTEGER in Fortran.
ERRORS
All MPI routines (except MPI_Wtime and MPI_Wtick) return an error value; C routines as the value of the function and Fortran routines in the last argument. Before the value is returned, the current MPI error handler is called. By default, this error handler aborts the MPI job. The error handler may be changed with MPI_Comm_set_errhandler (for communicators), MPI_File_set_errhandler (for files), and MPI_Win_set_errhandler (for RMA windows). The MPI-1 routine MPI_Errhandler_set may be used, but its use is deprecated. The predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error; however, MPI implementations will attempt to continue whenever possible.

MPI_SUCCESS - No error; MPI routine completed successfully.
MPI_ERR_ARG - Invalid argument. Some argument is invalid and is not identified by a specific error class (e.g., MPI_ERR_RANK).
MPI_ERR_BUFFER - Invalid buffer pointer. Usually a null buffer where one is not valid.
MPI_ERR_COUNT - Invalid count argument. Count arguments must be non-negative; a count of zero is often valid.
MPI_ERR_DISP - Invalid displacement argument.
MPI_ERR_RANK - Invalid source or destination rank. Ranks must be between zero and the size of the communicator minus one; ranks in a receive (MPI_Recv, MPI_Irecv, MPI_Sendrecv, etc.) may also be MPI_ANY_SOURCE.
MPI_ERR_TYPE - Invalid datatype argument. Additionally, this error can occur if an uncommitted MPI_Datatype (see MPI_Type_commit) is used in a communication call.
MPI_ERR_WIN - Invalid MPI window object.
MPI_ERR_OTHER - Other error; use MPI_Error_string to get more information about this error code.
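A program that prefers to handle failures itself can, for example, install MPI_ERRORS_RETURN on the window and decode a nonzero return code with MPI_Error_string. The fragment below is a sketch that assumes declarations like those in the example above (win, one, prev, req) plus <stdio.h>:

    char msg[MPI_MAX_ERROR_STRING];
    int err, msglen;

    /* return error codes from RMA calls on this window instead of aborting */
    MPI_Win_set_errhandler(win, MPI_ERRORS_RETURN);

    err = MPI_Rget_accumulate(&one, 1, MPI_INT, &prev, 1, MPI_INT,
                              0, 0, 1, MPI_INT, MPI_SUM, win, &req);
    if (err != MPI_SUCCESS) {
        MPI_Error_string(err, msg, &msglen);
        fprintf(stderr, "MPI_Rget_accumulate: %s\n", msg);
    }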
SEE ALSO
MPI_Get_accumulate MPI_Fetch_and_op