Provided by: mpich-doc_4.0.2-3build2_all

NAME

       MPI_Win_create_c -  Create an MPI Window object for one-sided communication

SYNOPSIS

       int MPI_Win_create_c(void *base, MPI_Aint size, MPI_Aint disp_unit, MPI_Info info, MPI_Comm comm,
       MPI_Win *win)

INPUT PARAMETERS

       base   - initial address of window (choice)
       size   - size of window in bytes (non-negative integer)
       disp_unit
              - local unit size for displacements, in bytes (positive integer)
       info   - info argument (handle)
       comm   - intra-communicator (handle)

OUTPUT PARAMETERS

       win    - window object (handle)

NOTES

       The displacement unit argument is provided to facilitate address arithmetic in RMA
       operations: the target displacement argument of an RMA operation is scaled by the factor
       disp_unit that the target process specified at window creation.
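
       For example, the following sketch (not part of the standard text) creates a window over an
       array of doubles with disp_unit set to sizeof(double), so that target displacements in
       subsequent RMA calls count elements rather than bytes:

           #include <mpi.h>
           #include <stdlib.h>

           int main(int argc, char **argv)
           {
               MPI_Init(&argc, &argv);

               /* Expose 100 doubles; disp_unit = sizeof(double) lets target
                  displacements in MPI_Put/MPI_Get count elements, not bytes. */
               MPI_Aint nelems = 100;
               double *buf = malloc((size_t)nelems * sizeof(double));
               MPI_Win win;
               MPI_Win_create_c(buf, nelems * (MPI_Aint)sizeof(double),
                                (MPI_Aint)sizeof(double), MPI_INFO_NULL,
                                MPI_COMM_WORLD, &win);

               /* ... synchronization and RMA operations on win ... */

               MPI_Win_free(&win);
               free(buf);
               MPI_Finalize();
               return 0;
           }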

       The  info  argument  provides  optimization  hints to the runtime about the expected usage
       pattern of the window. The following info keys are predefined.

       no_locks
              - If  set  to  true,  then  the  implementation  may  assume  that  passive  target
              synchronization  (i.e.,  MPI_Win_lock  , MPI_Win_lock_all ) will not be used on the
               given window. This implies that this window is not used for 3-party communication,
               and RMA can be implemented with less (or no) asynchronous agent activity at this
               process.

       accumulate_ordering
              - Controls the ordering of accumulate  operations  at  the  target.   The  argument
              string  should  contain a comma-separated list of the following read/write ordering
              rules, where e.g. "raw" means read-after-write: "rar,raw,war,waw".

       accumulate_ops
              - If set to same_op, the implementation will assume that all concurrent  accumulate
              calls  to  the  same  target  address  will  use  the  same  operation.  If  set to
              same_op_no_op, then the implementation will assume that all  concurrent  accumulate
              calls  to  the same target address will use the same operation or MPI_NO_OP .  This
              can eliminate the need to protect access for  certain  operation  types  where  the
              hardware can guarantee atomicity. The default is same_op_no_op.
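
       As an illustration only (this sketch and its particular hint values are assumptions about
       one usage pattern, not part of the standard text), the hints above can be attached to a
       window at creation time through an MPI_Info object:

           #include <mpi.h>

           /* Hypothetical helper: create a window whose info hints promise no
              passive-target locking, full accumulate ordering, and a single
              accumulate operation (or MPI_NO_OP) per target address. */
           static MPI_Win create_hinted_win(void *base, MPI_Aint size_bytes,
                                            MPI_Aint disp_unit, MPI_Comm comm)
           {
               MPI_Info info;
               MPI_Win win;

               MPI_Info_create(&info);
               MPI_Info_set(info, "no_locks", "true");
               MPI_Info_set(info, "accumulate_ordering", "rar,raw,war,waw");
               MPI_Info_set(info, "accumulate_ops", "same_op_no_op");

               MPI_Win_create_c(base, size_bytes, disp_unit, info, comm, &win);
               MPI_Info_free(&info);   /* the window keeps its own copy of the hints */
               return win;
           }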

THREAD AND INTERRUPT SAFETY

       This  routine is thread-safe.  This means that this routine may be safely used by multiple
       threads without the need for any user-provided thread locks.  However, the routine is  not
       interrupt  safe.   Typically, this is due to the use of memory allocation routines such as
       malloc or other non-MPICH runtime routines that are themselves not interrupt-safe.

NOTES FOR FORTRAN

       All MPI routines in Fortran (except for MPI_WTIME  and  MPI_WTICK  )  have  an  additional
       argument  ierr  at  the  end  of  the  argument list.  ierr is an integer and has the same
       meaning as the  return  value  of  the  routine  in  C.   In  Fortran,  MPI  routines  are
       subroutines, and are invoked with the call statement.

       All MPI objects (e.g., MPI_Datatype , MPI_Comm ) are of type INTEGER in Fortran.

ERRORS

       All  MPI  routines  (except MPI_Wtime and MPI_Wtick ) return an error value; C routines as
       the value of the function and Fortran routines in the last argument.  Before the value  is
       returned,  the current MPI error handler is called.  By default, this error handler aborts
       the MPI  job.   The  error  handler  may  be  changed  with  MPI_Comm_set_errhandler  (for
       communicators),  MPI_File_set_errhandler  (for files), and MPI_Win_set_errhandler (for RMA
       windows).  The MPI-1 routine MPI_Errhandler_set may be used but  its  use  is  deprecated.
       The  predefined  error  handler  MPI_ERRORS_RETURN may be used to cause error values to be
       returned.  Note that MPI does not guarantee that an  MPI  program  can  continue  past  an
       error; however, MPI implementations will attempt to continue whenever possible.
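
       For example (a sketch with a hypothetical helper, not part of the standard text), errors
       raised during window creation are reported through the error handler attached to the
       communicator, so a program can switch that handler to MPI_ERRORS_RETURN and test the return
       code:

           #include <mpi.h>
           #include <stdio.h>

           /* Hypothetical helper: attempt window creation and report failure
              instead of aborting the job. */
           static int try_create(void *base, MPI_Aint size, MPI_Aint disp_unit,
                                 MPI_Comm comm, MPI_Win *win)
           {
               int err, len;
               char msg[MPI_MAX_ERROR_STRING];

               MPI_Comm_set_errhandler(comm, MPI_ERRORS_RETURN);
               err = MPI_Win_create_c(base, size, disp_unit, MPI_INFO_NULL,
                                      comm, win);
               if (err != MPI_SUCCESS) {
                   MPI_Error_string(err, msg, &len);
                   fprintf(stderr, "MPI_Win_create_c failed: %s\n", msg);
               }
               return err;
           }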

       MPI_SUCCESS
              - No error; MPI routine completed successfully.

       MPI_ERR_ARG
              -  Invalid  argument.  Some argument is invalid and is not identified by a specific
              error class (e.g., MPI_ERR_RANK ).
       MPI_ERR_COMM
              - Invalid communicator.  A common error is to use a null  communicator  in  a  call
              (not even allowed in MPI_Comm_rank ).
       MPI_ERR_DISP
              - Invalid displacement unit argument; disp_unit must be a positive integer.
       MPI_ERR_INFO
              - Invalid info argument (handle).
       MPI_ERR_SIZE
              - Invalid size argument; the window size must be non-negative.
       MPI_ERR_OTHER
              - Other error; use MPI_Error_string to get more information about this error code.

SEE ALSO

       MPI_Win_allocate MPI_Win_allocate_shared MPI_Win_create_dynamic MPI_Win_free

                                            2/22/2022                         MPI_Win_create_c(3)