FFINDEX_APPLY_MPI(1)                                                                       FFINDEX_APPLY_MPI(1)

Provided by: ffindex_0.9.9.7-4_amd64

NAME

       ffindex_apply_mpi - apply a program to each FFindex entry (MPI-enhanced)

DESCRIPTION

       ffindex_apply_mpi runs a given program once for every entry of an FFindex database, distributing the
       entries across MPI processes.  An FFindex database packs many small files into a single data file plus
       an index; each entry's data is passed to the program on its standard input, so any filter that reads
       standard input can be applied to an entire database in parallel.
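
       A typical invocation is sketched below.  The -d/-i output options and the "--" separator that ends
       option parsing reflect the usage string printed by ffindex 0.9.9.x builds and should be checked
       against the usage output of the installed binary; the database names and the process count are
       placeholders.

              # Apply "wc -c" to every entry of db.ffdata/db.ffindex with 4 MPI processes;
              # each entry's standard output is collected into out.ffdata/out.ffindex.
              mpirun -np 4 ffindex_apply_mpi -d out.ffdata -i out.ffindex \
                     db.ffdata db.ffindex -- wc -c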

       ffindex_apply_mpi performs no work unless the MPI runtime initializes successfully.  If Open MPI's
       MCA parameter "plm_rsh_agent" is set to a path that cannot be found, each process aborts during
       MPI_Init with diagnostics like the following (here from a singleton start, i.e. a run without
       mpirun, as the references to ess_singleton_module.c indicate):

              --------------------------------------------------------------------------
              The value of the MCA parameter "plm_rsh_agent" was set to a path that
              could not be found:

                     plm_rsh_agent: ssh : rsh

              Please either unset the parameter, or check that the path is correct
              --------------------------------------------------------------------------
              [lcy01-05:11100] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a
              daemon on the local node in file ess_singleton_module.c at line 582
              [lcy01-05:11100] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a
              daemon on the local node in file ess_singleton_module.c at line 166
              --------------------------------------------------------------------------
              It looks like orte_init failed for some reason; your parallel process is
              likely to abort.  There are many reasons that a parallel process can
              fail during orte_init; some of which are due to configuration or
              environment problems.  This failure appears to be an internal failure;
              here's some additional information (which may only be relevant to an
              Open MPI developer):

                     orte_ess_init failed
                     --> Returned value Unable to start a daemon on the local node
                         (-127) instead of ORTE_SUCCESS
              --------------------------------------------------------------------------
              --------------------------------------------------------------------------
              It looks like MPI_INIT failed for some reason; your parallel process is
              likely to abort.  There are many reasons that a parallel process can
              fail during MPI_INIT; some of which are due to configuration or
              environment problems.  This failure appears to be an internal failure;
              here's some additional information (which may only be relevant to an
              Open MPI developer):

                     ompi_mpi_init: ompi_rte_init failed
                     --> Returned "Unable to start a daemon on the local node" (-127)
                         instead of "Success" (0)
              --------------------------------------------------------------------------
              *** An error occurred in MPI_Init
              *** on a NULL communicator
              *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
              ***    and potentially your MPI job)
              [lcy01-05:11100] Local abort before MPI_INIT completed successfully, but
              am not able to aggregate error messages, and not able to guarantee that
              all other processes were killed!
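
       Both remedies the message suggests can be expressed on the command line.  The sketch below uses only
       standard Open MPI mechanisms (MCA parameters map to OMPI_MCA_* environment variables and can be
       overridden with mpirun's --mca option); the process count and database names are again placeholders.

              # Remedy 1: unset the parameter, if it was set through the environment.
              env -u OMPI_MCA_plm_rsh_agent \
                  mpirun -np 4 ffindex_apply_mpi db.ffdata db.ffindex -- wc -c

              # Remedy 2: point the launch agent at an ssh binary that exists.
              mpirun --mca plm_rsh_agent /usr/bin/ssh -np 4 \
                  ffindex_apply_mpi db.ffdata db.ffindex -- wc -c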

BUGS

       User feedback is welcome, especially reports of bugs and performance problems, and, not least,
       comments on the convenience of the programs and API.

       Email Andreas Hauser <hauser@genzentrum.lmu.de>.

ffindex_apply_mpi                                  June 2017                              FFINDEX_APPLY_MPI(1)