MPI_Comm_split_type(3) — Open MPI

Provided by: openmpi-doc_5.0.7-1_all

SYNTAX

   C Syntax
          #include <mpi.h>

          int MPI_Comm_split_type(MPI_Comm comm, int split_type, int key,
               MPI_Info info, MPI_Comm *newcomm)

   Fortran Syntax
          USE MPI
          ! or the older form: INCLUDE 'mpif.h'
          MPI_COMM_SPLIT_TYPE(COMM, SPLIT_TYPE, KEY, INFO, NEWCOMM, IERROR)
               INTEGER COMM, SPLIT_TYPE, KEY, INFO, NEWCOMM, IERROR

   Fortran 2008 Syntax
          USE mpi_f08
          MPI_Comm_split_type(comm, split_type, key, info, newcomm, ierror)
               TYPE(MPI_Comm), INTENT(IN) :: comm
               INTEGER, INTENT(IN) :: split_type, key
               TYPE(MPI_Info), INTENT(IN) :: info
               TYPE(MPI_Comm), INTENT(OUT) :: newcomm
               INTEGER, OPTIONAL, INTENT(OUT) :: ierror

INPUT PARAMETERS

       • comm: Communicator (handle).

       • split_type: Type of processes to be grouped together (integer).

       • key: Control of rank assignment (integer).

       • info: Info argument (handle).

OUTPUT PARAMETERS

       • newcomm: New communicator (handle).

       • ierror: Fortran only: Error status (integer).

DESCRIPTION

       This  function  partitions  the  group  associated  with  comm into disjoint subgroups, based on the type
       specified by split_type. Each subgroup contains all processes of the same type. Within each subgroup, the
       processes are ranked in the order defined by the value of the argument key, with ties broken according to
       their rank in the old group. A new communicator is created for each subgroup  and  returned  in  newcomm.
       This  is a collective call; all processes must provide the same split_type, but each process is permitted
       to provide different values for key. An exception to this rule is that a  process  may  supply  the  type
       value MPI_UNDEFINED, in which case newcomm returns MPI_COMM_NULL.
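
       For example, a minimal sketch (using only the standard C bindings shown above) that creates one
       sub-communicator per shared-memory node and reports each process's node-local rank:

          #include <mpi.h>
          #include <stdio.h>

          int main(int argc, char *argv[])
          {
              MPI_Comm node_comm;
              int world_rank, node_rank, node_size;

              MPI_Init(&argc, &argv);
              MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

              /* Group processes that can share memory (i.e., processes on the same node).
               * Using world_rank as the key preserves the original rank ordering. */
              MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED,
                                  world_rank, MPI_INFO_NULL, &node_comm);

              MPI_Comm_rank(node_comm, &node_rank);
              MPI_Comm_size(node_comm, &node_size);
              printf("world rank %d is node-local rank %d of %d\n",
                     world_rank, node_rank, node_size);

              MPI_Comm_free(&node_comm);
              MPI_Finalize();
              return 0;
          }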

SPLIT TYPES

       MPI_COMM_TYPE_SHARED
              This  type splits the communicator into subcommunicators, each of which can create a shared memory
              region.

       OMPI_COMM_TYPE_NODE
              Synonym for MPI_COMM_TYPE_SHARED.

       OMPI_COMM_TYPE_HWTHREAD
              This type splits the communicator into  subcommunicators,  each  of  which  belongs  to  the  same
              hardware thread.

       OMPI_COMM_TYPE_CORE
              This  type  splits  the  communicator  into  subcommunicators,  each  of which belongs to the same
              core/processing unit.

       OMPI_COMM_TYPE_L1CACHE
              This type splits the communicator into subcommunicators, each of which  belongs  to  the  same  L1
              cache.

       OMPI_COMM_TYPE_L2CACHE
              This  type  splits  the  communicator  into subcommunicators, each of which belongs to the same L2
              cache.

       OMPI_COMM_TYPE_L3CACHE
              This type splits the communicator into subcommunicators, each of which  belongs  to  the  same  L3
              cache.

       OMPI_COMM_TYPE_SOCKET
              This type splits the communicator into subcommunicators, each of which belongs to the same socket.

       OMPI_COMM_TYPE_NUMA
              This  type  splits  the  communicator  into  subcommunicators,  each  of which belongs to the same
              NUMA-node.

       OMPI_COMM_TYPE_BOARD
              This type splits the communicator into subcommunicators, each of which belongs to the same board.

       OMPI_COMM_TYPE_HOST
              This type splits the communicator into subcommunicators, each of which belongs to the same host.

       OMPI_COMM_TYPE_CU
              This type splits the communicator into  subcommunicators,  each  of  which  belongs  to  the  same
              computational unit.

       OMPI_COMM_TYPE_CLUSTER
              This  type  splits  the  communicator  into  subcommunicators,  each  of which belongs to the same
              cluster.

NOTES

       The split types denoted with an OMPI_ prefix instead of an MPI_ prefix are specific to Open MPI and are
       not part of the MPI standard. Their use should be protected by the OPEN_MPI C preprocessor macro.
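
       As an illustrative sketch (the NUMA-level split and the node-level fallback chosen here are assumptions,
       not requirements of this man page), an application might guard an Open MPI-specific split type as follows:

          MPI_Comm numa_comm;
          int rank;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          /* OMPI_COMM_TYPE_NUMA is Open MPI-specific, so use it only when compiling
           * against Open MPI; otherwise fall back to the portable, node-level
           * MPI_COMM_TYPE_SHARED grouping. */
          #ifdef OPEN_MPI
          MPI_Comm_split_type(MPI_COMM_WORLD, OMPI_COMM_TYPE_NUMA,
                              rank, MPI_INFO_NULL, &numa_comm);
          #else
          MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED,
                              rank, MPI_INFO_NULL, &numa_comm);
          #endif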

ERRORS

       Almost  all  MPI  routines  return  an  error  value; C routines as the return result of the function and
       Fortran routines in the last argument.

       Before the error value is returned, the current MPI  error  handler  associated  with  the  communication
       object  (e.g.,  communicator, window, file) is called.  If no communication object is associated with the
       MPI call, then the call is considered attached to MPI_COMM_SELF and will call the  associated  MPI  error
       handler.   When   MPI_COMM_SELF   is   not  initialized  (i.e.,  before  MPI_Init/MPI_Init_thread,  after
       MPI_Finalize, or when using the Sessions Model exclusively) the error raises the initial  error  handler.
       The  initial  error handler can be changed by calling MPI_Comm_set_errhandler on MPI_COMM_SELF when using
       the World model, or the mpi_initial_errhandler CLI argument to mpiexec or info key to
       MPI_Comm_spawn/MPI_Comm_spawn_multiple. If no other appropriate error handler has been set, then the MPI_ERRORS_RETURN
       error handler is called for MPI I/O functions and the MPI_ERRORS_ABORT error handler is  called  for  all
       other MPI functions.

       Open MPI includes three predefined error handlers that can be used:

       • MPI_ERRORS_ARE_FATAL Causes the program to abort all connected MPI processes.

       • MPI_ERRORS_ABORT An error handler that can be invoked on a communicator, window, file, or session. When
         called on a communicator, it acts as if MPI_Abort was called on that communicator. If called on a
         window or file, it acts as if MPI_Abort was called on a communicator containing the group of processes
         in the corresponding window or file. If called on a session, it aborts only the local process.

       • MPI_ERRORS_RETURN Returns an error code to the application.
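
       For instance, a brief sketch (the error-reporting policy shown is an assumption, not mandated by MPI) of
       selecting MPI_ERRORS_RETURN and checking the code returned by MPI_Comm_split_type:

          int rc, msg_len;
          char msg[MPI_MAX_ERROR_STRING];
          MPI_Comm newcomm;

          /* Return error codes to the caller instead of aborting. */
          MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

          rc = MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED,
                                   0, MPI_INFO_NULL, &newcomm);
          if (rc != MPI_SUCCESS) {
              MPI_Error_string(rc, msg, &msg_len);
              fprintf(stderr, "MPI_Comm_split_type failed: %s\n", msg);  /* needs <stdio.h> */
          }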

       MPI applications can also implement their own error handlers by calling:

       • MPI_Comm_create_errhandler then MPI_Comm_set_errhandler

       • MPI_File_create_errhandler then MPI_File_set_errhandler

       • MPI_Session_create_errhandler then MPI_Session_set_errhandler, or at MPI_Session_init

       • MPI_Win_create_errhandler then MPI_Win_set_errhandler
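
       A minimal sketch of the communicator variant (the handler body and its message format are assumptions):

          /* User-defined handler matching the MPI_Comm_errhandler_function prototype. */
          static void warn_on_error(MPI_Comm *comm, int *errcode, ...)
          {
              char msg[MPI_MAX_ERROR_STRING];
              int len;
              MPI_Error_string(*errcode, msg, &len);
              fprintf(stderr, "MPI error reported on communicator: %s\n", msg);
          }

          /* During setup: */
          MPI_Errhandler errh;
          MPI_Comm_create_errhandler(warn_on_error, &errh);
          MPI_Comm_set_errhandler(MPI_COMM_WORLD, errh);
          MPI_Errhandler_free(&errh);  /* the handler remains attached to the communicator */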

       Note that MPI does not guarantee that an MPI program can continue past an error.

       See the MPI man page for a full list of MPI error codes.

       See the Error Handling section of the MPI-3.1 standard for more information.

SEE ALSO

       MPI_Comm_create, MPI_Intercomm_create, MPI_Comm_dup, MPI_Comm_free, MPI_Comm_split

       2003-2025, The Open MPI Community

                                                  Feb 17, 2025                            MPI_COMM_SPLIT_TYPE(3)