Provided by: lam4-dev_7.1.4-3.1_amd64

NAME

       MPI_Comm_spawn_multiple - Spawn dynamic MPI processes from multiple executables

SYNOPSIS

       #include <mpi.h>
       int
       MPI_Comm_spawn_multiple(int count, char **commands, char ***argvs,
                             int *maxprocs, MPI_Info *infos, int root,
                             MPI_Comm comm, MPI_Comm *intercomm,
                             int *errcodes)

INPUT PARAMETERS

       count  - number of commands (only significant at root)
       commands
              - commands to be executed (only significant at root)
       argvs  - arguments for commands (only significant at root)
       maxprocs
              - max number of processes for each command (only significant at root)
       infos  - startup hints for each command
       root   - rank of process to perform the spawn
       comm   - parent intracommunicator

OUTPUT PARAMETERS

       intercomm
              - child intercommunicator containing spawned processes
       errcodes
              - one code per process

DESCRIPTION

       A  group of processes can create another group of processes with MPI_Comm_spawn_multiple .
       This function is a collective operation over the parent  communicator.   The  child  group
       starts  up like any MPI application.  The processes must begin by calling MPI_Init , after
       which the pre-defined communicator, MPI_COMM_WORLD , may be used.  This world communicator
       contains  only  the child processes.  It is distinct from the MPI_COMM_WORLD of the parent
       processes.

       MPI_Comm_spawn_multiple is used to manually specify a group of different executables and
       arguments to spawn.  MPI_Comm_spawn is used to specify one executable and one set of
       arguments (although a LAM/MPI appschema(5) can be provided to MPI_Comm_spawn via the
       "file" info key).
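
       As an illustration, a minimal parent-side sketch is shown below.  The executable names,
       argument strings, and process counts are hypothetical, and error checking is omitted.

           #include <mpi.h>

           int main(int argc, char *argv[])
           {
               /* Two hypothetical executables, spawned with 2 and 4 processes */
               char *commands[2]  = { "worker_a", "worker_b" };
               char *args_a[]     = { "--mode", "fast", NULL };  /* args for worker_a */
               char *args_b[]     = { NULL };                    /* no args for worker_b */
               char **argvs[2]    = { args_a, args_b };
               int maxprocs[2]    = { 2, 4 };
               MPI_Info infos[2]  = { MPI_INFO_NULL, MPI_INFO_NULL };
               int errcodes[6];   /* one slot per potential process: 2 + 4 */
               MPI_Comm intercomm;

               MPI_Init(&argc, &argv);
               MPI_Comm_spawn_multiple(2, commands, argvs, maxprocs, infos,
                                       0, MPI_COMM_WORLD, &intercomm, errcodes);
               /* ... communicate with the children via intercomm ... */
               MPI_Finalize();
               return 0;
           }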

       Communication With Spawned Processes

       The  natural  communication  mechanism  between  two groups is the intercommunicator.  The
       second communicator argument to MPI_Comm_spawn_multiple returns an intercommunicator whose
       local  group  contains  the parent processes (same as the first communicator argument) and
       whose remote group contains child processes. The  child  processes  can  access  the  same
       intercommunicator  by  using  the  MPI_Comm_get_parent call.  The remote group size of the
       parent communicator is zero if the process was created by mpirun (1) instead of one of the
       spawn  functions.   Both  groups  can  decide  to  merge  the  intercommunicator  into  an
       intracommunicator (with the MPI_Intercomm_merge () function) and take advantage  of  other
       MPI  collective  operations.  They can then use the merged intracommunicator to create new
       communicators and reach other processes in the MPI application.
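
       For example, a spawned child might obtain the parent intercommunicator and merge it into
       an intracommunicator as follows (a sketch; error checking omitted):

           #include <mpi.h>

           int main(int argc, char *argv[])
           {
               MPI_Comm parent, merged;

               MPI_Init(&argc, &argv);
               MPI_Comm_get_parent(&parent);
               if (parent != MPI_COMM_NULL) {
                   /* "high" is nonzero so the children rank after the
                      parents in the merged intracommunicator */
                   MPI_Intercomm_merge(parent, 1, &merged);
                   /* ... ordinary collective operations over merged ... */
                   MPI_Comm_free(&merged);
               }
               MPI_Finalize();
               return 0;
           }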

       Resource Allocation

       Note   that   no   MPI_Info   keys   are   recognized   by    this    implementation    of
       MPI_Comm_spawn_multiple  .   To  use  the  "file" info key to specify an appschema(5), use
       LAM's MPI_Comm_spawn .  This may  be  preferable  to  MPI_Comm_spawn_multiple  because  it
       allows the arbitrary specification of what nodes and/or CPUs should be used to launch jobs
       (either SPMD or MPMD).  See MPI_Comm_spawn(3) for more details.
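
       For instance, an info object carrying the "file" key might be built as follows (the
       appschema filename is hypothetical; see MPI_Comm_spawn(3) for how the key is consumed):

           MPI_Info info;

           MPI_Info_create(&info);
           MPI_Info_set(info, "file", "my.appschema");  /* hypothetical appschema file */
           /* ... pass info to MPI_Comm_spawn ... */
           MPI_Info_free(&info);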

       The value of MPI_INFO_NULL should be given for each element of infos (the infos array is
       not currently examined by LAM/MPI, so supplying non-NULL values is not harmful).  LAM
       schedules the given number of processes onto LAM nodes by starting with CPU 0 (or the
       lowest numbered CPU), and continuing through higher CPU numbers, placing one process on
       each CPU.  If the process count is greater than the CPU count, the procedure repeats.

       Process Termination

       Note that the process(es) spawned by MPI_COMM_SPAWN (and MPI_COMM_SPAWN_MULTIPLE )
       effectively become orphans.  That is, the spawning MPI application does not wait for the
       spawned application to finish.  Hence, there is no guarantee that the spawned application
       has finished when the spawning application completes.  Similarly, killing the spawning
       application has no effect on the spawned application.

       User applications can achieve this kind of synchronization by executing an MPI_BARRIER
       across the spawning and spawned processes before MPI_FINALIZE .
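
       One possible arrangement (a sketch, assuming both sides first merge the
       intercommunicator as described above):

           /* Both sides, before MPI_Finalize(); intercomm comes from the
              spawn call on the parents and MPI_Comm_get_parent() on the
              children (children pass 1 for "high") */
           MPI_Comm merged;

           MPI_Intercomm_merge(intercomm, 0, &merged);
           MPI_Barrier(merged);            /* rendezvous of both groups */
           MPI_Comm_free(&merged);
           MPI_Finalize();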

       Note that lamclean will kill *all* MPI processes.

       Process Count

       The maxprocs array parameter to MPI_Comm_spawn_multiple specifies the exact number of
       processes to be started for each command.  If it is not possible to start the desired
       number of processes, MPI_Comm_spawn_multiple will return an error code.  Note that even
       though maxprocs is only relevant on the root, all ranks must have an errcodes array long
       enough to hold an integer error code for every process that tries to launch, or must
       pass the MPI constant MPI_ERRCODES_IGNORE for the errcodes argument.  While this appears
       to be a contradiction, it is per the MPI-2 standard.  :-\
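
       A sketch of the two options (the counts are hypothetical and continue the example
       above):

           /* Option 1: every rank supplies room for all 2 + 4 potential processes */
           int errcodes[6];
           MPI_Comm_spawn_multiple(2, commands, argvs, maxprocs, infos,
                                   0, MPI_COMM_WORLD, &intercomm, errcodes);

           /* Option 2: ignore the per-process codes entirely */
           MPI_Comm_spawn_multiple(2, commands, argvs, maxprocs, infos,
                                   0, MPI_COMM_WORLD, &intercomm,
                                   MPI_ERRCODES_IGNORE);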

       Frequently, an application wishes to choose a process count so as to fill all  processors
       available  to  a job.  MPI indicates the maximum number of processes recommended for a job
       in the pre-defined attribute, MPI_UNIVERSE_SIZE , which is cached on MPI_COMM_WORLD .

       The typical usage is to subtract the number of processes currently in the job from the
       value of MPI_UNIVERSE_SIZE and spawn the difference.  LAM sets MPI_UNIVERSE_SIZE to the
       number of CPUs in the user's LAM session (as defined in the boot schema [bhost(5)] via
       lamboot (1)).
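
       A sketch of this usage (error checking omitted; note that in C the attribute value is
       returned as a pointer to the cached int):

           int *usize, wsize, flag;

           MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_UNIVERSE_SIZE, &usize, &flag);
           MPI_Comm_size(MPI_COMM_WORLD, &wsize);
           if (flag) {
               int nspawn = *usize - wsize;  /* processes needed to fill the universe */
               /* ... spawn nspawn more processes ... */
           }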

       See MPI_Init(3) for other pre-defined attributes that are helpful when spawning.

       Locating an Executable Program

       The executable program file must be located on the node(s) where the process(es) will run.
       On any node, the directories  specified  by  the  user's  PATH  environment  variable  are
       searched to find the program.

       All MPI runtime options selected by mpirun (1) in the initial application launch remain in
       effect for all child processes created by the spawn functions.

       Command-line Arguments

       The argvs array parameter to MPI_Comm_spawn_multiple should not contain the program
       names, since those are given in the commands parameter.  The command line that is passed
       to a newly launched program will be the program name followed by the strings in the
       corresponding entry of the argvs array.
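
       For example, given the hypothetical command "worker_a" from the sketch above, the first
       argvs entry below yields the child command line "worker_a -v input.dat":

           char *args_a[]  = { "-v", "input.dat", NULL };  /* no program name here */
           char *args_b[]  = { NULL };                     /* no extra arguments */
           char **argvs[2] = { args_a, args_b };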

USAGE WITH IMPI EXTENSIONS

       The  IMPI  standard  only supports MPI-1 functions.  Hence, this function is currently not
       designed to operate within an IMPI job.

ERRORS

       If an error occurs in an MPI function, the current MPI error handler is called  to  handle
       it.   By default, this error handler aborts the MPI job.  The error handler may be changed
       with MPI_Errhandler_set ; the predefined error handler MPI_ERRORS_RETURN may  be  used  to
       cause error values to be returned (in C and Fortran; this error handler is less useful
       with the C++ MPI bindings.  The predefined error handler MPI::ERRORS_THROW_EXCEPTIONS
       should  be  used in C++ if the error value needs to be recovered).  Note that MPI does not
       guarantee that an MPI program can continue past an error.

       All MPI routines (except MPI_Wtime and MPI_Wtick ) return an error value;  C  routines  as
       the value of the function and Fortran routines in the last argument.  The C++ bindings for
       MPI do not return error  values;  instead,  error  values  are  communicated  by  throwing
       exceptions of type MPI::Exception (but not by default).  Exceptions are only thrown if the
       error value is not MPI::SUCCESS .

       Note that if the MPI::ERRORS_RETURN handler is set in C++, while MPI functions will return
       upon an error, there will be no way to recover what the actual error value was.
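
       For example, to have spawn failures reported as return values rather than aborting the
       job (a sketch, reusing the hypothetical variables from the examples above):

           int err;

           MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
           err = MPI_Comm_spawn_multiple(2, commands, argvs, maxprocs, infos,
                                         0, MPI_COMM_WORLD, &intercomm, errcodes);
           if (err != MPI_SUCCESS) {
               /* inspect errcodes[] for the per-process failures */
           }
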
       MPI_SUCCESS
              - No error; MPI routine completed successfully.
       MPI_ERR_COMM
              -  Invalid  communicator.   A  common error is to use a null communicator in a call
              (not even allowed in MPI_Comm_rank ).
       MPI_ERR_SPAWN
              - Spawn error; one or more of the applications attempting to  be  launched  failed.
              Check the returned error code array.
       MPI_ERR_ARG
              -  Invalid  argument.  Some argument is invalid and is not identified by a specific
              error class.  This is typically a NULL pointer or other such error.
       MPI_ERR_ROOT
              - Invalid root.  The root must be specified as a rank in the  communicator.   Ranks
              must be between zero and the size of the communicator minus one.
       MPI_ERR_OTHER
              - Other error; use MPI_Error_string to get more information about this error code.
       MPI_ERR_INTERN
              - An internal error has been detected.  This is fatal.  Please send a bug report to
              the LAM mailing list (see http://www.lam-mpi.org/contact.php ).

SEE ALSO

       appschema(5), bhost(5), lamboot(1), MPI_Comm_get_parent(3), MPI_Intercomm_merge(3),
       MPI_Comm_spawn(3), MPI_Info_create(3), MPI_Info_set(3), MPI_Info_delete(3),
       MPI_Info_free(3), MPI_Init(3), mpirun(1)

MORE INFORMATION

       For more information, please see the official MPI Forum web site, which contains the  text
       of both the MPI-1 and MPI-2 standards.  These documents contain detailed information about
       each MPI function (most of which is not duplicated in these man pages).

       http://www.mpi-forum.org/

ACKNOWLEDGEMENTS

       The LAM Team would like to thank the MPICH Team for the handy  program  to  generate  man
       pages   ("doctext"  from  ftp://ftp.mcs.anl.gov/pub/sowing/sowing.tar.gz  ),  the  initial
       formatting, and some initial text for most of the MPI-1 man pages.

LOCATION

       spawnmult.c