
Provided by: openmpi-doc_5.0.7-1_all

SYNTAX

   C Syntax
          #include <mpi.h>

          int MPI_Win_allocate_shared (MPI_Aint size, int disp_unit, MPI_Info info,
                                       MPI_Comm comm, void *baseptr, MPI_Win *win)

   Fortran Syntax
          USE MPI
          ! or the older form: INCLUDE 'mpif.h'
          MPI_WIN_ALLOCATE_SHARED(SIZE, DISP_UNIT, INFO, COMM, BASEPTR, WIN, IERROR)
               INTEGER(KIND=MPI_ADDRESS_KIND) SIZE, BASEPTR
               INTEGER DISP_UNIT, INFO, COMM, WIN, IERROR

   Fortran 2008 Syntax
          USE mpi_f08
          MPI_Win_allocate_shared(size, disp_unit, info, comm, baseptr, win, ierror)
               USE, INTRINSIC :: ISO_C_BINDING, ONLY : C_PTR
               INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN) :: size
               INTEGER, INTENT(IN) :: disp_unit
               TYPE(MPI_Info), INTENT(IN) :: info
               TYPE(MPI_Comm), INTENT(IN) :: comm
               TYPE(C_PTR), INTENT(OUT) :: baseptr
               TYPE(MPI_Win), INTENT(OUT) :: win
               INTEGER, OPTIONAL, INTENT(OUT) :: ierror

INPUT PARAMETERS

       • size: Size of window in bytes (nonnegative integer).

       • disp_unit: Local unit size for displacements, in bytes (positive integer).

       • info: Info argument (handle).

       • comm: Communicator (handle).

OUTPUT PARAMETERS

       • baseptr: Initial address of window.

       • win: Window object returned by the call (handle).

       • ierror: Fortran only: Error status (integer).

DESCRIPTION

       MPI_Win_allocate_shared is a collective call executed by all processes in the  group  of  comm.  On  each
       process,  it  allocates  memory  of  at  least size bytes that is shared among all processes in comm, and
       returns a pointer to the locally allocated segment in baseptr that can be used for load/store accesses on
       the  calling  process.  The  locally  allocated memory can be the target of load/store accesses by remote
       processes; the base pointers for other processes can be queried using the function  MPI_Win_shared_query.
       The  call  also  returns  a  window  object  that  can  be  used  by all processes in comm to perform RMA
       operations. The size argument may be different at each process and size = 0 is valid. It  is  the  user’s
       responsibility  to  ensure  that  the communicator comm represents a group of processes that can create a
       shared memory segment that can be accessed by all processes in the group. The discussions  of  rationales
       for  MPI_Alloc_mem  and  MPI_Free_mem  in  MPI-3.1  section 8.2 also apply to MPI_Win_allocate_shared; in
       particular, see the rationale in MPI-3.1 section 8.2 for an explanation of the type used for baseptr. The
       allocated  memory  is  contiguous  across  process  ranks  unless  the info key alloc_shared_noncontig is
       specified. Contiguous across process ranks means that the first address in the memory segment of  process
       i  is  consecutive with the last address in the memory segment of process i - 1. This may enable the user
       to calculate remote address offsets with local information only.
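
       For example, a minimal sketch of the typical usage pattern. The communicator is first restricted to
       processes that can share memory via MPI_Comm_split_type, since MPI_COMM_WORLD may span several nodes;
       the segment size of 100 doubles is arbitrary:

          #include <mpi.h>
          #include <stdio.h>

          int main(int argc, char *argv[])
          {
              MPI_Comm node_comm;
              MPI_Win win;
              double *my_base, *left_base;
              MPI_Aint left_size;
              int left_disp_unit, rank;

              MPI_Init(&argc, &argv);

              /* Restrict the communicator to processes that can create a
               * shared memory segment. */
              MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                                  MPI_INFO_NULL, &node_comm);
              MPI_Comm_rank(node_comm, &rank);

              /* Each process contributes 100 doubles to the shared segment. */
              MPI_Win_allocate_shared(100 * sizeof(double), sizeof(double),
                                      MPI_INFO_NULL, node_comm, &my_base, &win);
              my_base[0] = (double)rank;

              /* Query the base pointer of the left neighbor's segment. */
              if (rank > 0) {
                  MPI_Win_shared_query(win, rank - 1, &left_size,
                                       &left_disp_unit, &left_base);
              }

              /* Synchronize before load/store accesses to remote segments. */
              MPI_Win_fence(0, win);
              if (rank > 0) {
                  printf("rank %d reads %g from rank %d\n",
                         rank, left_base[0], rank - 1);
              }
              MPI_Win_fence(0, win);

              MPI_Win_free(&win);   /* also frees the shared memory */
              MPI_Comm_free(&node_comm);
              MPI_Finalize();
              return 0;
          }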

       The following info keys are supported:

       alloc_shared_noncontig
              If not set to true, the allocation strategy is to allocate contiguous memory across process ranks.
              This  may limit the performance on some architectures because it does not allow the implementation
              to modify the data layout (e.g., padding to reduce access latency).

       blocking_fence
              If set to true, the osc/sm component will use MPI_Barrier for MPI_Win_fence. If  set  to  false  a
              condition  variable and counter will be used instead. The default value is false. This info key is
              Open MPI specific.

       For additional supported info keys see MPI_Win_create.
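
       For example, a fragment that requests a noncontiguous layout (assuming node_comm, baseptr, and win are
       declared as in the example above):

          MPI_Info info;
          MPI_Info_create(&info);
          /* Let the implementation place each process's segment
           * independently (e.g., page-aligned) instead of contiguously. */
          MPI_Info_set(info, "alloc_shared_noncontig", "true");
          MPI_Win_allocate_shared(100 * sizeof(double), sizeof(double), info,
                                  node_comm, &baseptr, &win);
          MPI_Info_free(&info);

       With this key set, remote base addresses must be obtained through MPI_Win_shared_query rather than
       computed from local offsets.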

NOTES

       Common choices for disp_unit are 1 (no scaling) and, in C syntax, sizeof(type) for a window that
       consists of an array of elements of type type. The latter choice allows array indices in RMA calls to
       be scaled correctly to byte displacements, even in a heterogeneous environment.
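
       For example, a fragment in which the target displacement is a plain array index (assuming win was
       allocated over node_comm with disp_unit = sizeof(double) as above, and at least two processes exist):

          double value = 3.14;
          MPI_Win_fence(0, win);
          /* Write into element 5 of rank 1's segment; the displacement is
           * scaled by the target's disp_unit to obtain a byte offset. */
          MPI_Put(&value, 1, MPI_DOUBLE, 1 /* target rank */,
                  5 /* element index */, 1, MPI_DOUBLE, win);
          MPI_Win_fence(0, win);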

       Calling MPI_Win_free will  deallocate  the  memory  allocated  by  MPI_Win_allocate_shared.  It  is  thus
       erroneous to manually free baseptr.

C NOTES

       Although baseptr is declared as a void * so that any pointer object can be passed conveniently, the
       argument is effectively of type void **: pass the address of a pointer variable, and the base address
       of the allocated segment is returned through it.
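
       For example (a fragment; bytes and node_comm are assumed to be defined):

          double *base;      /* pointer object of any type */
          MPI_Win win;
          /* Pass the address of the pointer variable; the implementation
           * stores the segment's base address through it. */
          MPI_Win_allocate_shared(bytes, sizeof(double), MPI_INFO_NULL,
                                  node_comm, &base, &win);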

ERRORS

       Almost all MPI routines return an error value; C routines return it as the value of the function and
       Fortran routines return it in the last argument.

       Before  the  error  value  is  returned,  the current MPI error handler associated with the communication
       object (e.g., communicator, window, file) is called.  If no communication object is associated  with  the
       MPI  call,  then  the call is considered attached to MPI_COMM_SELF and will call the associated MPI error
       handler.  When  MPI_COMM_SELF  is  not  initialized   (i.e.,   before   MPI_Init/MPI_Init_thread,   after
       MPI_Finalize,  or  when using the Sessions Model exclusively) the error raises the initial error handler.
       The initial error handler can be changed by calling MPI_Comm_set_errhandler on MPI_COMM_SELF when
       using the World model, or via the mpi_initial_errhandler CLI argument to mpiexec or the corresponding
       info key to MPI_Comm_spawn or MPI_Comm_spawn_multiple. If no other appropriate error handler has been
       set, then the MPI_ERRORS_RETURN
       error  handler  is  called for MPI I/O functions and the MPI_ERRORS_ABORT error handler is called for all
       other MPI functions.

       Open MPI includes three predefined error handlers that can be used:

       • MPI_ERRORS_ARE_FATAL Causes the program to abort all connected MPI processes.

       • MPI_ERRORS_ABORT An error handler that can be invoked on a communicator, window, file, or session.
         When called on a communicator, it acts as if MPI_Abort was called on that communicator. If called
         on a window or file, it acts as if MPI_Abort was called on a communicator containing the group of
         processes in the corresponding window or file. If called on a session, it aborts only the local
         process.

       • MPI_ERRORS_RETURN Returns an error code to the application.

       MPI applications can also implement their own error handlers by calling:

       • MPI_Comm_create_errhandler then MPI_Comm_set_errhandler

       • MPI_File_create_errhandler then MPI_File_set_errhandler

       • MPI_Session_create_errhandler then MPI_Session_set_errhandler, or at MPI_Session_init

       • MPI_Win_create_errhandler then MPI_Win_set_errhandler
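
       For example, a sketch of attaching the predefined MPI_ERRORS_RETURN handler to a window so that a
       failing RMA call reports an error code instead of aborting (a fragment; assumes win was created as
       above and <stdio.h> is included):

          MPI_Win_set_errhandler(win, MPI_ERRORS_RETURN);
          int rc = MPI_Win_fence(0, win);
          if (rc != MPI_SUCCESS) {
              char msg[MPI_MAX_ERROR_STRING];
              int len;
              MPI_Error_string(rc, msg, &len);
              fprintf(stderr, "MPI_Win_fence failed: %s\n", msg);
          }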

       Note that MPI does not guarantee that an MPI program can continue past an error.

       See the MPI man page for a full list of MPI error codes.

       See the Error Handling section of the MPI-3.1 standard for more information.

SEE ALSO

       MPI_Alloc_mem, MPI_Free_mem, MPI_Win_allocate, MPI_Win_create, MPI_Win_shared_query, MPI_Win_free

       2003-2025, The Open MPI Community

                                                  Feb 17, 2025                        MPI_WIN_ALLOCATE_SHARED(3)