
Provided by: openmpi-doc_5.0.7-1_all

SYNTAX

   C Syntax
          #include <mpi.h>

          int MPI_Type_create_darray(int size, int rank, int ndims,
               const int array_of_gsizes[], const int array_of_distribs[],
               const int array_of_dargs[], const int array_of_psizes[],
               int order, MPI_Datatype oldtype, MPI_Datatype *newtype)

   Fortran Syntax
          USE MPI
          ! or the older form: INCLUDE 'mpif.h'
          MPI_TYPE_CREATE_DARRAY(SIZE, RANK, NDIMS, ARRAY_OF_GSIZES,
               ARRAY_OF_DISTRIBS, ARRAY_OF_DARGS, ARRAY_OF_PSIZES, ORDER,
               OLDTYPE, NEWTYPE, IERROR)

               INTEGER SIZE, RANK, NDIMS, ARRAY_OF_GSIZES(*), ARRAY_OF_DISTRIBS(*),
                       ARRAY_OF_DARGS(*), ARRAY_OF_PSIZES(*), ORDER, OLDTYPE,
                       NEWTYPE, IERROR

   Fortran 2008 Syntax
          USE mpi_f08
          MPI_Type_create_darray(size, rank, ndims, array_of_gsizes,
               array_of_distribs, array_of_dargs, array_of_psizes, order,
               oldtype, newtype, ierror)
               INTEGER, INTENT(IN) :: size, rank, ndims, array_of_gsizes(ndims),
               array_of_distribs(ndims), array_of_dargs(ndims),
               array_of_psizes(ndims), order
               TYPE(MPI_Datatype), INTENT(IN) :: oldtype
               TYPE(MPI_Datatype), INTENT(OUT) :: newtype
               INTEGER, OPTIONAL, INTENT(OUT) :: ierror

INPUT PARAMETERS

       • size: Size of process group (positive integer).

       • rank: Rank in process group (nonnegative integer).

       • ndims: Number of array dimensions as well as process grid dimensions (positive integer).

       • array_of_gsizes:  Number  of  elements  of  type  oldtype  in  each dimension of global array (array of
         positive integers).

       • array_of_distribs: Distribution of array in each dimension (array of state).

       • array_of_dargs: Distribution argument in each dimension (array of positive integers).

       • array_of_psizes: Size of process grid in each dimension (array of positive integers).

       • order: Array storage order flag (state).

       • oldtype: Old data type (handle).

OUTPUT PARAMETERS

       • newtype: New data type (handle).

       • ierror: Fortran only: Error status (integer).

DESCRIPTION

       MPI_Type_create_darray can be used to generate the data types corresponding to  the  distribution  of  an
       ndims-dimensional  array  of oldtype elements onto an ndims-dimensional grid of logical processes. Unused
       dimensions of array_of_psizes should be set to 1.  For a call to MPI_Type_create_darray  to  be  correct,
       the equation

           array_of_psizes[0] * array_of_psizes[1] * ... * array_of_psizes[ndims-1] = size

       must  be  satisfied.  The ordering of processes in the process grid is assumed to be row-major, as in the
       case of virtual Cartesian process topologies in the MPI Standard.
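
        For example, MPI_Dims_create can be used to build a process grid that satisfies this constraint. A
        minimal sketch, assuming a 2-dimensional grid:

           int size, ndims = 2;
           int array_of_psizes[2] = {0, 0};  /* zero entries let MPI choose the factorization */

           MPI_Comm_size(MPI_COMM_WORLD, &size);
           MPI_Dims_create(size, ndims, array_of_psizes);
           /* Now array_of_psizes[0] * array_of_psizes[1] == size, as required. */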

       Each dimension of the array can be distributed in one of three ways:

       • MPI_DISTRIBUTE_BLOCK - Block distribution

       • MPI_DISTRIBUTE_CYCLIC - Cyclic distribution

       • MPI_DISTRIBUTE_NONE - Dimension not distributed.

       The constant  MPI_DISTRIBUTE_DFLT_DARG  specifies  a  default  distribution  argument.  The  distribution
       argument  for  a  dimension  that  is  not  distributed  is  ignored.  For  any  dimension i in which the
        distribution is MPI_DISTRIBUTE_BLOCK, it is erroneous to specify
        array_of_dargs[i] * array_of_psizes[i] < array_of_gsizes[i].

       For  example,  the  HPF layout ARRAY(CYCLIC(15)) corresponds to MPI_DISTRIBUTE_CYCLIC with a distribution
       argument of 15, and the HPF layout ARRAY(BLOCK) corresponds to MPI_DISTRIBUTE_BLOCK with  a  distribution
       argument of MPI_DISTRIBUTE_DFLT_DARG.

       The order argument is used as in MPI_Type_create_subarray to specify the storage order. Therefore, arrays
       described by this type constructor may be stored in Fortran (column-major) or C (row-major) order.  Valid
       values for order are MPI_ORDER_FORTRAN and MPI_ORDER_C.
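
        As a concrete illustration (a minimal sketch; the 100x100 extent and the use of MPI_Dims_create are
        illustrative, not mandated), the following builds a datatype describing a 100x100 global array of
        MPI_INT, block-distributed in both dimensions over all processes of MPI_COMM_WORLD and stored in C
        order:

           #include <mpi.h>

           void build_darray(MPI_Datatype *darray)
           {
               int size, rank;
               int gsizes[2]   = {100, 100};
               int distribs[2] = {MPI_DISTRIBUTE_BLOCK, MPI_DISTRIBUTE_BLOCK};
               int dargs[2]    = {MPI_DISTRIBUTE_DFLT_DARG, MPI_DISTRIBUTE_DFLT_DARG};
               int psizes[2]   = {0, 0};   /* let MPI_Dims_create pick the grid shape */

               MPI_Comm_size(MPI_COMM_WORLD, &size);
               MPI_Comm_rank(MPI_COMM_WORLD, &rank);
               MPI_Dims_create(size, 2, psizes);

               MPI_Type_create_darray(size, rank, 2, gsizes, distribs, dargs,
                                      psizes, MPI_ORDER_C, MPI_INT, darray);
               MPI_Type_commit(darray);
               /* The committed type is typically passed as the filetype argument of
                * MPI_File_set_view for parallel I/O on the global array. */
           }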

       This  routine creates a new MPI data type with a typemap defined in terms of a function called “cyclic()”
       (see below).

       Without loss of generality, it suffices to define the typemap for the  MPI_DISTRIBUTE_CYCLIC  case  where
       MPI_DISTRIBUTE_DFLT_DARG is not used.

       MPI_DISTRIBUTE_BLOCK  and  MPI_DISTRIBUTE_NONE  can  be  reduced  to  the  MPI_DISTRIBUTE_CYCLIC case for
       dimension i as follows.

       MPI_DISTRIBUTE_BLOCK  with  array_of_dargs[i]  equal  to  MPI_DISTRIBUTE_DFLT_DARG   is   equivalent   to
       MPI_DISTRIBUTE_CYCLIC with array_of_dargs[i] set to

          (array_of_gsizes[i] + array_of_psizes[i] - 1)/array_of_psizes[i]
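
        For example (an illustrative computation), with array_of_gsizes[i] = 100 and array_of_psizes[i] = 4,
        the implied block size is (100 + 4 - 1)/4 = 25, so the 100 elements split into four blocks of 25.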

        If array_of_dargs[i] is not MPI_DISTRIBUTE_DFLT_DARG, then MPI_DISTRIBUTE_BLOCK and
        MPI_DISTRIBUTE_CYCLIC are equivalent.

       MPI_DISTRIBUTE_NONE   is   equivalent   to   MPI_DISTRIBUTE_CYCLIC   with   array_of_dargs[i]   set    to
       array_of_gsizes[i].

       Finally,  MPI_DISTRIBUTE_CYCLIC with array_of_dargs[i] equal to MPI_DISTRIBUTE_DFLT_DARG is equivalent to
       MPI_DISTRIBUTE_CYCLIC with array_of_dargs[i] set to 1.

NOTES

       For both Fortran and C arrays, the ordering of processes in the process grid is assumed to be  row-major.
       This  is consistent with the ordering used in virtual Cartesian process topologies in MPI. To create such
       virtual process topologies, or to find the coordinates of a process in the process grid, etc., users  may
       use the corresponding functions provided in MPI.
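
        For example, a minimal sketch (reusing the 2-dimensional psizes and rank from the example above) that
        recovers a process's row-major grid coordinates with the Cartesian topology routines:

           int periods[2] = {0, 0};   /* non-periodic in both dimensions */
           int coords[2];
           MPI_Comm cart;

           MPI_Cart_create(MPI_COMM_WORLD, 2, psizes, periods, 0 /* no reorder */, &cart);
           MPI_Cart_coords(cart, rank, 2, coords);
           /* coords[] now holds this rank's (row, column) position, in the same
            * row-major ordering assumed by MPI_Type_create_darray. */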

ERRORS

        Almost all MPI routines return an error value: C routines return it as the result of the function, and
        Fortran routines return it in the last argument.

       Before the error value is returned, the current MPI  error  handler  associated  with  the  communication
       object  (e.g.,  communicator, window, file) is called.  If no communication object is associated with the
       MPI call, then the call is considered attached to MPI_COMM_SELF and will call the  associated  MPI  error
       handler.   When   MPI_COMM_SELF   is   not  initialized  (i.e.,  before  MPI_Init/MPI_Init_thread,  after
       MPI_Finalize, or when using the Sessions Model exclusively) the error raises the initial  error  handler.
       The  initial  error handler can be changed by calling MPI_Comm_set_errhandler on MPI_COMM_SELF when using
        the World model, or the mpi_initial_errhandler CLI argument to mpiexec, or the info key to
        MPI_Comm_spawn/MPI_Comm_spawn_multiple. If no other appropriate error handler has been set, then the MPI_ERRORS_RETURN
       error handler is called for MPI I/O functions and the MPI_ERRORS_ABORT error handler is  called  for  all
       other MPI functions.

       Open MPI includes three predefined error handlers that can be used:

       • MPI_ERRORS_ARE_FATAL Causes the program to abort all connected MPI processes.

        • MPI_ERRORS_ABORT An error handler that can be invoked on a communicator, window, file, or session. When
          called on a communicator, it acts as if MPI_Abort were called on that communicator. If called on a
          window or file, it acts as if MPI_Abort were called on a communicator containing the group of processes
          in the corresponding window or file. If called on a session, it aborts only the local process.

       • MPI_ERRORS_RETURN Returns an error code to the application.

       MPI applications can also implement their own error handlers by calling:

        • MPI_Comm_create_errhandler then MPI_Comm_set_errhandler

        • MPI_File_create_errhandler then MPI_File_set_errhandler

        • MPI_Session_create_errhandler then MPI_Session_set_errhandler, or at MPI_Session_init

        • MPI_Win_create_errhandler then MPI_Win_set_errhandler
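
        For example, a minimal sketch (variable names carried over from the earlier example) that selects
        MPI_ERRORS_RETURN on MPI_COMM_WORLD and checks a return code directly:

           MPI_Datatype darray;
           int err;

           MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
           err = MPI_Type_create_darray(size, rank, 2, gsizes, distribs, dargs,
                                        psizes, MPI_ORDER_C, MPI_INT, &darray);
           if (err != MPI_SUCCESS) {
               char msg[MPI_MAX_ERROR_STRING];
               int len;
               MPI_Error_string(err, msg, &len);
               /* report msg; MPI does not guarantee the program can continue */
           }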

       Note that MPI does not guarantee that an MPI program can continue past an error.

       See the MPI man page for a full list of MPI error codes.

       See the Error Handling section of the MPI-3.1 standard for more information.

       2003-2025, The Open MPI Community

                                                  Feb 17, 2025                         MPI_TYPE_CREATE_DARRAY(3)