Provided by: openmpi-bin_1.4.3-2.1ubuntu3_i386
orterun, mpirun, mpiexec - Execute serial and parallel jobs in Open MPI.
Note: mpirun, mpiexec, and orterun are all synonyms for each other.
Using any of the names will produce the same behavior.
Single Process Multiple Data (SPMD) Model:
mpirun [ options ] <program> [ <args> ]
Multiple Instruction Multiple Data (MIMD) Model:
mpirun [ global_options ]
[ local_options1 ] <program1> [ <args1> ] :
[ local_options2 ] <program2> [ <args2> ] :
[ local_optionsN ] <programN> [ <argsN> ]
Note that in both models, invoking mpirun via an absolute path name is
equivalent to specifying the --prefix option with a <dir> value
equivalent to the directory where mpirun resides, minus its last
subdirectory. For example:
% /usr/local/bin/mpirun ...
is equivalent to
% mpirun --prefix /usr/local
If you are simply looking for how to run an MPI application, you
probably want to use a command line of the following form:
% mpirun [ -np X ] [ --hostfile <filename> ] <program>
This will run X copies of <program> in your current run-time
environment (if running under a supported resource manager, Open MPI's
mpirun will usually automatically use the corresponding resource
manager process starter, as opposed to, for example, rsh or ssh, which
require the use of a hostfile, or will default to running all X copies
on the localhost), scheduling (by default) in a round-robin fashion by
CPU slot. See the rest of this page for more details.
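For example (using a hypothetical hostfile named my_hostfile), the following runs four copies of a.out across the hosts listed in that file:
% mpirun -np 4 --hostfile my_hostfile ./a.out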
mpirun will send the name of the directory where it was invoked on the
local node to each of the remote nodes, and attempt to change to that
directory. See the "Current Working Directory" section below for further details.
<program> The program executable. This is identified as the first non-
recognized argument to mpirun.
<args> Pass these run-time arguments to every new process. These
must always be the last arguments to mpirun. If an app
context file is used, <args> will be ignored.
-h, --help
Display help for this command.
-q, --quiet
Suppress informative messages from orterun during application execution.
-V, --version
Print version number. If no other arguments are given, this
will also cause orterun to exit.
To specify which hosts (nodes) of the cluster to run on:
-H, -host, --host <host1,host2,...,hostN>
List of hosts on which to invoke processes.
-hostfile, --hostfile <hostfile>
Provide a hostfile to use.
-machinefile, --machinefile <machinefile>
Synonym for -hostfile.
To specify the number of processes to launch:
-c, -n, --n, -np <#>
Run this many copies of the program on the given nodes. This
option indicates that the specified file is an executable
program and not an application context. If no value is provided
for the number of copies to execute (i.e., neither the "-np" nor
its synonyms are provided on the command line), Open MPI will
automatically execute a copy of the program on each process slot
(see below for description of a "process slot"). This feature,
however, can only be used in the SPMD model and will return an
error (without beginning execution of the application) otherwise.
-npersocket, --npersocket <#persocket>
On each node, launch this many processes times the number of
processor sockets on the node. The -npersocket option also
turns on the -bind-to-socket option.
-npernode, --npernode <#pernode>
On each node, launch this many processes.
-pernode, --pernode
On each node, launch one process -- equivalent to -npernode 1.
To map processes to nodes:
-loadbalance, --loadbalance
Uniform distribution of ranks across all nodes. See more
detailed description below.
-nolocal, --nolocal
Do not run any copies of the launched application on the same
node as orterun is running. This option will override listing
the localhost with --host or any other host-specifying mechanism.
-nooversubscribe, --nooversubscribe
Do not oversubscribe any nodes; error (without starting any
processes) if the requested number of processes would cause
oversubscription. This option implicitly sets "max_slots" equal
to the "slots" value for each node.
-bynode, --bynode
Launch processes one per node, cycling by node in a round-robin
fashion. This spreads processes evenly among nodes and assigns
ranks in a round-robin, "by node" manner.
For process binding:
-bycore, --bycore
Associate processes with successive cores if used with one of
the -bind-to-* options.
-bysocket, --bysocket
Associate processes with successive processor sockets if used
with one of the -bind-to-* options.
-cpus-per-proc, --cpus-per-proc <#perproc>
Use the number of cores per process if used with one of the -bind-to-* options.
-cpus-per-rank, --cpus-per-rank <#perrank>
Alias for -cpus-per-proc.
-bind-to-core, --bind-to-core
Bind processes to cores.
-bind-to-socket, --bind-to-socket
Bind processes to processor sockets.
-bind-to-none, --bind-to-none
Do not bind processes. (Default.)
-report-bindings, --report-bindings
Report any bindings for launched processes.
-slot-list, --slot-list <slots>
List of processor IDs to be used for binding MPI processes. The
specified bindings will be applied to all MPI processes. See
explanation below for syntax.
-rf, --rankfile <rankfile>
Provide a rankfile file.
To manage standard I/O:
-output-filename, --output-filename <filename>
Redirect the stdout, stderr, and stddiag of all ranks to a rank-
unique version of the specified filename. Any directories in the
filename will automatically be created. Each output file will
consist of filename.rank, where the rank will be left-filled
with zero's for correct ordering in listings.
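For example, with a hypothetical filename prefix of out/app, the following would create the out directory if needed and write each rank's output to its own out/app.<rank> file, with the rank zero-filled as described above:
% mpirun -np 4 --output-filename out/app ./a.out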
-stdin, --stdin <rank>
The MPI rank that is to receive stdin. The default is to forward
stdin to rank=0, but this option can be used to forward stdin to
any rank. It is also acceptable to specify none, indicating that
no ranks are to receive stdin.
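For example, the following launches four processes with no rank receiving stdin, using the none keyword described above:
% mpirun -np 4 --stdin none ./a.out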
-tag-output, --tag-output
Tag each line of output to stdout, stderr, and stddiag with
[jobid, rank]<stdxxx> indicating the process jobid and rank that
generated the output, and the channel which generated it.
-timestamp-output, --timestamp-output
Timestamp each line of output to stdout, stderr, and stddiag.
-xml, --xml
Provide all output to stdout, stderr, and stddiag in an xml format.
-xterm, --xterm <ranks>
Display the specified ranks in separate xterm windows. The ranks
are specified as a comma-separated list of ranges, with a -1
indicating all. A separate window will be created for each
specified rank. Note: In some environments, xterm may require
that the executable be in the user's path, or be specified in
absolute or relative terms. Thus, it may be necessary to specify
a local executable as "./foo" instead of just "foo". If xterm
fails to find the executable, mpirun will hang, but still
respond correctly to a ctrl-c. If this happens, please check
that the executable is being specified correctly and try again.
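For example (the rank choices are illustrative), the following would open separate xterm windows for ranks 0 and 2 only, using a locally qualified executable name as suggested above:
% mpirun -np 4 -xterm 0,2 ./foo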
To manage files and runtime environment:
-path, --path <path>
<path> that will be used when attempting to locate the requested
executables. This is used prior to using the local PATH setting.
--prefix <dir>
Prefix directory that will be used to set the PATH and
LD_LIBRARY_PATH on the remote node before invoking Open MPI or
the target process. See the "Remote Execution" section, below.
-preload-binary, --preload-binary
Copy the specified executable(s) to remote machines prior to
starting remote processes. The executables will be copied to the
Open MPI session directory and will be deleted upon completion
of the job.
-preload-files, --preload-files <files>
Preload the comma separated list of files to the current working
directory of the remote machines where processes will be
launched prior to starting those processes.
-preload-files-dest-dir, --preload-files-dest-dir <path>
The destination directory to be used for preload-files, if other
than the current working directory. By default, the absolute and
relative paths provided by --preload-files are used.
-tmpdir, --tmpdir <dir>
Set the root for the session directory tree for mpirun only.
-wd <dir>
Synonym for -wdir.
-wdir <dir>
Change to the directory <dir> before the user's program
executes. See the "Current Working Directory" section for notes
on relative paths. Note: If the -wdir option appears both on
the command line and in an application context, the context will
take precedence over the command line.
-x <env>
Export the specified environment variables to the remote nodes
before executing the program. Only one environment variable can
be specified per -x option. Existing environment variables can
be specified or new variable names specified with corresponding
values. For example:
% mpirun -x DISPLAY -x OFILE=/tmp/out ...
The parser for the -x option is not very sophisticated; it does
not even understand quoted values. Users are advised to set
variables in the environment, and then use -x to export (not define) them.
Setting MCA parameters:
-gmca, --gmca <key> <value>
Pass global MCA parameters that are applicable to all contexts.
<key> is the parameter name; <value> is the parameter value.
-mca, --mca <key> <value>
Send arguments to various MCA modules. See the "MCA" section, below.
-debug, --debug
Invoke the user-level debugger indicated by the
orte_base_user_debugger MCA parameter.
-debugger, --debugger <args>
Sequence of debuggers to search for when --debug is used (i.e.
a synonym for orte_base_user_debugger MCA parameter).
-tv, --tv
Launch processes under the TotalView debugger. Deprecated
backwards compatibility flag. Synonym for --debug.
There are also other options:
-aborted, --aborted <#>
Set the maximum number of aborted processes to display.
-app, --app <appfile>
Provide an appfile, ignoring all other command line options.
-cf, --cartofile <cartofile>
Provide a cartography file.
-hetero, --hetero
Indicates that multiple app_contexts are being provided that are
a mix of 32/64-bit binaries.
-leave-session-attached, --leave-session-attached
Do not detach OmpiRTE daemons used by this application. This
allows error messages from the daemons as well as the underlying
environment (e.g., when failing to launch a daemon) to be output.
-ompi-server, --ompi-server <uri or file>
Specify the URI of the Open MPI server, or the name of the file
(specified as file:filename) that contains that info. The Open
MPI server is used to support multi-application data exchange
via the MPI-2 MPI_Publish_name and MPI_Lookup_name functions.
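For example, assuming the server's URI has been saved to a hypothetical file named server.uri, it could be passed using the file: form described above:
% mpirun --ompi-server file:server.uri -np 2 ./a.out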
-wait-for-server, --wait-for-server
Pause mpirun before launching the job until ompi-server is
detected. This is useful in scripts where ompi-server may be
started in the background, followed immediately by an mpirun
command that wishes to connect to it. Mpirun will pause until
either the specified ompi-server is contacted or the server-
wait-time is exceeded.
-server-wait-time, --server-wait-time <secs>
The max amount of time (in seconds) mpirun should wait for the
ompi-server to start. The default is 10 seconds.
The following options are useful for developers; they are not generally
useful to most ORTE and/or MPI users:
-d, --debug-devel
Enable debugging of the OmpiRTE (the run-time layer in Open
MPI). This is not generally useful for most users.
-debug-daemons, --debug-daemons
Enable debugging of any OmpiRTE daemons used by this application.
-debug-daemons-file, --debug-daemons-file
Enable debugging of any OmpiRTE daemons used by this
application, storing output in files.
-launch-agent, --launch-agent
Name of the executable that is to be used to start processes on
the remote nodes. The default is "orted". This option can be
used to test new daemon concepts, or to pass options back to the
daemons without having mpirun itself see them. For example,
specifying a launch agent of orted -mca odls_base_verbose 5
allows the developer to ask the orted for debugging output
without clutter from mpirun itself.
--noprefix
Disable the automatic --prefix behavior.
There may be other options listed with mpirun --help.
One invocation of mpirun starts an MPI application running under Open
MPI. If the application is single process multiple data (SPMD), the
application can be specified on the mpirun command line.
If the application is multiple instruction multiple data (MIMD),
comprising multiple programs, the set of programs and arguments can
be specified in one of two ways: Extended Command Line Arguments, and
Application Context.
An application context describes the MIMD program set including all
arguments in a separate file. This file essentially contains multiple
mpirun command lines, less the command name itself. The ability to
specify different options for different instantiations of a program is
another reason to use an application context.
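As a sketch (with hypothetical file and program names), an appfile contains one mpirun command line per program, minus the command name itself, and is passed with the --app option:
% cat my_appfile
-np 2 -host aa ./master
-np 4 -host bb,cc ./worker
% mpirun --app my_appfile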
Extended command line arguments allow for the description of the
application layout on the command line using colons (:) to separate the
specification of programs and arguments. Some options are globally set
across all specified programs (e.g. --hostfile), while others are
specific to a single program (e.g. -np).
Specifying Host Nodes
Host nodes can be identified on the mpirun command line with the -host
option or in a hostfile.
mpirun -H aa,aa,bb ./a.out
launches two processes on node aa and one on bb.
Or, consider the hostfile
% cat myhostfile
aa slots=2
bb slots=2
cc slots=2
Here, we list both the host names (aa, bb, and cc) but also how many
"slots" there are for each. Slots indicate how many processes can
potentially execute on a node. For best performance, the number of
slots may be chosen to be the number of cores on the node or the number
of processor sockets. If the hostfile does not provide slots
information, a default of 1 is assumed. When running under resource
managers (e.g., SLURM, Torque, etc.), Open MPI will obtain both the
hostnames and the number of slots directly from the resource manager.
mpirun -hostfile myhostfile ./a.out
will launch two processes on each of the three nodes.
mpirun -hostfile myhostfile -host aa ./a.out
will launch two processes, both on node aa.
mpirun -hostfile myhostfile -host dd ./a.out
will find no hosts to run on and abort with an error. That is, the
specified host dd is not in the specified hostfile.
Specifying Number of Processes
As we have just seen, the number of processes to run can be set using
the hostfile. Other mechanisms exist.
The number of processes launched can be specified as a multiple of the
number of nodes or processor sockets available. For example,
mpirun -H aa,bb -npersocket 2 ./a.out
launches processes 0-3 on node aa and processes 4-7 on node bb, where
aa and bb are both dual-socket nodes. The -npersocket option also
turns on the -bind-to-socket option, which is discussed in a later section.
mpirun -H aa,bb -npernode 2 ./a.out
launches processes 0-1 on node aa and processes 2-3 on node bb.
mpirun -H aa,bb -npernode 1 ./a.out
launches one process per host node.
mpirun -H aa,bb -pernode ./a.out
is the same as -npernode 1.
Another alternative is to specify the number of processes with the -np
option. Consider now the hostfile
% cat myhostfile
aa slots=4
bb slots=4
cc slots=4
mpirun -hostfile myhostfile -np 6 ./a.out
will launch ranks 0-3 on node aa and ranks 4-5 on node bb. The
remaining slots in the hostfile will not be used since the -np
option indicated that only 6 processes should be launched.
Mapping Processes to Nodes: Using Policies
The examples above illustrate the default mapping of process ranks to
nodes. This mapping can also be controlled with various mpirun options
that describe mapping policies.
Consider the same hostfile as above, again with -np 6:
                         node aa       node bb       node cc
mpirun                   0 1 2 3       4 5
mpirun -loadbalance      0 1           2 3           4 5
mpirun -bynode           0 3           1 4           2 5
mpirun -nolocal                        0 1 2 3       4 5
The -loadbalance option tries to spread processes out fairly among the nodes.
The -bynode option does likewise but numbers the processes "by node" in a
round-robin fashion.
The -nolocal option prevents any processes from being mapped onto the
local host (in this case node aa). While mpirun typically consumes few
system resources, -nolocal can be helpful for launching very large jobs
where mpirun may actually need to use noticeable amounts of memory
and/or processing time.
Just as -np can specify fewer processes than there are slots, it can
also oversubscribe the slots. For example, with the same hostfile:
mpirun -hostfile myhostfile -np 14 ./a.out
will launch processes 0-3 on node aa, 4-7 on bb, and 8-11 on cc.
It will then add the remaining two processes to whichever nodes it chooses.
One can also specify limits to oversubscription. For example, with the same hostfile:
mpirun -hostfile myhostfile -np 14 -nooversubscribe ./a.out
will produce an error since -nooversubscribe prevents oversubscription.
Limits to oversubscription can also be specified in the hostfile
% cat myhostfile
aa slots=4 max_slots=4
bb         max_slots=4
cc slots=4
The max_slots field specifies such a limit. When it does, the slots
value defaults to the limit. Now:
mpirun -hostfile myhostfile -np 14 ./a.out
causes the first 12 processes to be launched as before, but the
remaining two processes will be forced onto node cc. The other two
nodes are protected by the hostfile against oversubscription by
Using the --nooversubscribe option can be helpful since Open MPI
currently does not get "max_slots" values from the resource manager.
Of course, -np can also be used with the -H or -host option. For example,
mpirun -H aa,bb -np 8 ./a.out
launches 8 processes. Since only two hosts are specified, after
the first two processes are mapped, one to aa and one to bb, the
remaining processes oversubscribe the specified hosts.
And here is a MIMD example:
mpirun -H aa -np 1 hostname : -H bb,cc -np 2 uptime
will launch process 0 running hostname on node aa and processes 1
and 2 each running uptime on nodes bb and cc, respectively.
Mapping Processes to Nodes: Using Arbitrary Mappings
The mapping of process ranks to nodes can be prescribed not just with
general policies but also, if necessary, using arbitrary mappings that
cannot be described by a simple policy. One can use the "sequential
mapper," which reads the hostfile line by line, assigning processes to
nodes in whatever order the hostfile specifies. Use the -mca rmaps seq
option. For example, using the same hostfile as before
mpirun -hostfile myhostfile -mca rmaps seq ./a.out
will launch three processes, on nodes aa, bb, and cc, respectively.
The slot counts don't matter; one process is launched per line on
whatever node is listed on the line.
Another way to specify arbitrary mappings is with a rank file, which
gives you detailed control over process binding as well. Rank files
are discussed below.
Process Binding
Processes may be bound to specific resources on a node. This can
improve performance if the operating system is placing processes
suboptimally. For example, it might oversubscribe some multi-core
processor sockets, leaving other sockets idle; this can lead processes
to contend unnecessarily for common resources. Or, it might spread
processes out too widely; this can be suboptimal if application
performance is sensitive to interprocess communication costs. Binding
can also keep the operating system from migrating processes
excessively, regardless of how optimally those processes were placed to begin with.
To bind processes, one must first associate them with the resources on
which they should run. For example, the -bycore option associates the
processes on a node with successive cores. Or, -bysocket associates
the processes with successive processor sockets, cycling through the
sockets in a round-robin fashion if necessary. And -cpus-per-proc
indicates how many cores to bind per process.
But, such association is meaningless unless the processes are actually
bound to those resources. The binding option specifies the granularity
of binding -- say, with -bind-to-core or -bind-to-socket. One can also
turn binding off with -bind-to-none, which is typically the default.
Finally, -report-bindings can be used to report bindings.
As an example, consider a node with two processor sockets, each
comprising four cores. We run mpirun with -np 4 -report-bindings and
the following additional options:
% mpirun ... -bycore -bind-to-core
[...] ... binding child [...,0] to cpus 0001
[...] ... binding child [...,1] to cpus 0002
[...] ... binding child [...,2] to cpus 0004
[...] ... binding child [...,3] to cpus 0008
% mpirun ... -bysocket -bind-to-socket
[...] ... binding child [...,0] to socket 0 cpus 000f
[...] ... binding child [...,1] to socket 1 cpus 00f0
[...] ... binding child [...,2] to socket 0 cpus 000f
[...] ... binding child [...,3] to socket 1 cpus 00f0
% mpirun ... -cpus-per-proc 2 -bind-to-core
[...] ... binding child [...,0] to cpus 0003
[...] ... binding child [...,1] to cpus 000c
[...] ... binding child [...,2] to cpus 0030
[...] ... binding child [...,3] to cpus 00c0
% mpirun ... -bind-to-none
Here, -report-bindings shows the binding of each process as a mask. In
the first case, the processes bind to successive cores as indicated by
the masks 0001, 0002, 0004, and 0008. In the second case, processes
bind to all cores on successive sockets as indicated by the masks 000f
and 00f0. The processes cycle through the processor sockets in a
round-robin fashion as many times as are needed. In the third case,
the masks show us that 2 cores have been bound per process. In the
fourth case, binding is turned off and no bindings are reported.
Open MPI's support for process binding depends on the underlying
operating system. Therefore, process binding may not be available
on every system.
Process binding can also be set with MCA parameters. Their usage is
less convenient than that of mpirun options. On the other hand, MCA
parameters can be set not only on the mpirun command line, but
alternatively in a system or user mca-params.conf file or as
environment variables, as described in the MCA section below. The
correspondences are:

mpirun option          MCA parameter key            value

-bycore                rmaps_base_schedule_policy   core
-bysocket              rmaps_base_schedule_policy   socket
-bind-to-core          orte_process_binding         core
-bind-to-socket        orte_process_binding         socket
-bind-to-none          orte_process_binding         none
The orte_process_binding value can also take on the :if-avail
attribute. This attribute means that processes will be bound only if
this is supported on the underlying operating system. Without the
attribute, if there is no such support, the binding request results in
an error. For example, you could have
% cat $HOME/.openmpi/mca-params.conf
rmaps_base_schedule_policy = socket
orte_process_binding = socket:if-avail
Rankfiles
Rankfiles provide a means for specifying detailed information about how
process ranks should be mapped to nodes and how they should be bound.
Consider the following:
% cat myrankfile
rank 0=aa slot=1:0-2
rank 1=bb slot=0:0,1
rank 2=cc slot=1-2
mpirun -H aa,bb,cc,dd -rf myrankfile ./a.out
So that
Rank 0 runs on node aa, bound to socket 1, cores 0-2.
Rank 1 runs on node bb, bound to socket 0, cores 0 and 1.
Rank 2 runs on node cc, bound to cores 1 and 2.
Application Context or Executable Program?
To distinguish the two different forms, mpirun looks on the command
line for the --app option. If it is specified, then the file named on the
command line is assumed to be an application context. If it is not
specified, then the file is assumed to be an executable program.
Locating Files
If no relative or absolute path is specified for a file, Open MPI will
first look for files by searching the directories specified by the
--path option. If there is no --path option set or if the file is not
found at the --path location, then Open MPI will search the user's PATH
environment variable as defined on the source node(s).
If a relative directory is specified, it must be relative to the
initial working directory determined by the specific starter used. For
example when using the rsh or ssh starters, the initial directory is
$HOME by default. Other starters may set the initial directory to the
current working directory from the invocation of mpirun.
Current Working Directory
The -wdir mpirun option (and its synonym, -wd) allows the user to
change to an arbitrary directory before the program is invoked. It can
also be used in application context files to specify working
directories on specific nodes and/or for specific applications.
If the -wdir option appears both in a context file and on the command
line, the context file directory will override the command line value.
If the -wdir option is specified, Open MPI will attempt to change to
the specified directory on all of the remote nodes. If this fails,
mpirun will abort.
If the -wdir option is not specified, Open MPI will send the directory
name where mpirun was invoked to each of the remote nodes. The remote
nodes will try to change to that directory. If they are unable (e.g.,
if the directory does not exist on that node), then Open MPI will use
the default directory determined by the starter.
All directory changing occurs before the user's program is invoked; it
does not wait until MPI_INIT is called.
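For example, assuming a hypothetical directory /scratch/run1 that exists on every node, the working directory for all processes could be set with:
% mpirun -np 2 -wdir /scratch/run1 ./a.out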
Standard I/O
Open MPI directs UNIX standard input to /dev/null on all processes
except the MPI_COMM_WORLD rank 0 process. The MPI_COMM_WORLD rank 0
process inherits standard input from mpirun. Note: The node that
invoked mpirun need not be the same as the node where the
MPI_COMM_WORLD rank 0 process resides. Open MPI handles the redirection
of mpirun's standard input to the rank 0 process.
Open MPI directs UNIX standard output and error from remote nodes to
the node that invoked mpirun and prints it on the standard output/error
of mpirun. Local processes inherit the standard output/error of mpirun
and transfer to it directly.
Thus it is possible to redirect standard I/O for Open MPI applications
by using the typical shell redirection procedure on mpirun.
% mpirun -np 2 my_app < my_input > my_output
Note that in this example only the MPI_COMM_WORLD rank 0 process will
receive the stream from my_input on stdin. The stdin on all the other
nodes will be tied to /dev/null. However, the stdout from all nodes
will be collected into the my_output file.
Signal Propagation
When orterun receives a SIGTERM or SIGINT, it will attempt to kill the
entire job by sending all processes in the job a SIGTERM, waiting a
small number of seconds, then sending all processes in the job a
SIGKILL. SIGUSR1 and SIGUSR2 signals received by orterun are propagated to all
processes in the job.
One can turn on forwarding of SIGSTOP and SIGCONT to the program
executed by mpirun by setting the MCA parameter
orte_forward_job_control to 1. A SIGTSTP signal to mpirun will then
cause a SIGSTOP signal to be sent to all of the programs started by
mpirun and likewise a SIGCONT signal to mpirun will cause a SIGCONT
signal to be sent.
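For example, forwarding of SIGSTOP and SIGCONT could be enabled for a single run with:
% mpirun -mca orte_forward_job_control 1 -np 2 ./a.out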
Other signals are not currently propagated by orterun.
Process Termination / Signal Handling
During the run of an MPI application, if any rank dies abnormally
(either exiting before invoking MPI_FINALIZE, or dying as the result of
a signal), mpirun will print out an error message and kill the rest of
the MPI application.
User signal handlers should probably avoid trying to cleanup MPI state
(Open MPI is currently not async-signal-safe; see MPI_Init_thread(3)
for details about MPI_THREAD_MULTIPLE and thread safety). For example,
if a segmentation fault occurs in MPI_SEND (perhaps because a bad
buffer was passed in) and a user signal handler is invoked, if this
user handler attempts to invoke MPI_FINALIZE, Bad Things could happen
since Open MPI was already "in" MPI when the error occurred. Since
mpirun will notice that the process died due to a signal, it is
probably best (and safest) for the user to clean up only non-MPI state.
Process Environment
Processes in the MPI application inherit their environment from the
Open RTE daemon upon the node on which they are running. The
environment is typically inherited from the user's shell. On remote
nodes, the exact environment is determined by the boot MCA module used.
The rsh launch module, for example, uses rsh or ssh to launch the
Open RTE daemon on remote nodes, and typically executes one or more of
the user's shell-setup files before launching the Open RTE daemon.
When running dynamically linked applications which require the
LD_LIBRARY_PATH environment variable to be set, care must be taken to
ensure that it is correctly set when booting Open MPI.
See the "Remote Execution" section for more details.
Remote Execution
Open MPI requires that the PATH environment variable be set to find
executables on remote nodes (this is typically only necessary in rsh-
or ssh-based environments -- batch/scheduled environments typically
copy the current environment to the execution of remote jobs, so if the
current environment has PATH and/or LD_LIBRARY_PATH set properly, the
remote nodes will also have it set properly). If Open MPI was compiled
with shared library support, it may also be necessary to have the
LD_LIBRARY_PATH environment variable set on remote nodes as well
(especially to find the shared libraries required to run user MPI applications).
However, it is not always desirable or possible to edit shell startup
files to set PATH and/or LD_LIBRARY_PATH. The --prefix option is
provided for some simple configurations where this is not possible.
The --prefix option takes a single argument: the base directory on the
remote node where Open MPI is installed. Open MPI will use this
directory to set the remote PATH and LD_LIBRARY_PATH before executing
any Open MPI or user applications. This allows running Open MPI jobs
without having pre-configured the PATH and LD_LIBRARY_PATH on the remote nodes.
Open MPI adds the basename of the current node's "bindir" (the
directory where Open MPI's executables are installed) to the prefix and
uses that to set the PATH on the remote node. Similarly, Open MPI adds
the basename of the current node's "libdir" (the directory where Open
MPI's libraries are installed) to the prefix and uses that to set the
LD_LIBRARY_PATH on the remote node. For example:
Local bindir: /local/node/directory/bin
Local libdir: /local/node/directory/lib64
If the following command line is used:
% mpirun --prefix /remote/node/directory
Open MPI will add "/remote/node/directory/bin" to the PATH and
"/remote/node/directory/lib64" to the D_LIBRARY_PATH on the remote node
before attempting to execute anything.
Note that --prefix can be set on a per-context basis, allowing for
different values for different nodes.
The --prefix option is not sufficient if the installation paths on the
remote node are different than the local node (e.g., if "/lib" is used
on the local node, but "/lib64" is used on the remote node), or if the
installation paths are something other than a subdirectory under a common prefix.
Note that executing mpirun via an absolute pathname is equivalent to
specifying --prefix without the last subdirectory in the absolute
pathname to mpirun. For example:
% /usr/local/bin/mpirun ...
is equivalent to
% mpirun --prefix /usr/local
Exported Environment Variables
All environment variables that are named in the form OMPI_* will
automatically be exported to new processes on the local and remote
nodes. The -x option to mpirun can be used to export specific
environment variables to the new processes. While the syntax of the -x
option allows the definition of new variables, note that the parser for
this option is currently not very sophisticated - it does not even
understand quoted values. Users are advised to set variables in the
environment and use -x to export them; not to define them.
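For example, in a Bourne-style shell, a variable can be set in the environment and then exported to the launched processes (the variable name and value are illustrative):
% export OFILE=/tmp/out
% mpirun -x OFILE -np 2 ./a.out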
Setting MCA Parameters
The -mca switch allows the passing of parameters to various MCA
(Modular Component Architecture) modules. MCA modules have direct
impact on MPI programs because they allow tunable parameters to be set
at run time (such as which BTL communication device driver to use, what
parameters to pass to that BTL, etc.).
The -mca switch takes two arguments: <key> and <value>. The <key>
argument generally specifies which MCA module will receive the value.
For example, the <key> "btl" is used to select which BTL to be used for
transporting MPI messages. The <value> argument is the value that is
passed. For example:
mpirun -mca btl tcp,self -np 1 foo
Tells Open MPI to use the "tcp" and "self" BTLs, and to run a
single copy of "foo" an allocated node.
mpirun -mca btl self -np 1 foo
Tells Open MPI to use the "self" BTL, and to run a single copy of
"foo" an allocated node.
The -mca switch can be used multiple times to specify different <key>
and/or <value> arguments. If the same <key> is specified more than
once, the <value>s are concatenated with a comma (",") separating them.
Note that the -mca switch is simply a shortcut for setting environment
variables. The same effect may be accomplished by setting
corresponding environment variables before running mpirun. The form of
the environment variables that Open MPI sets is:
OMPI_MCA_<key>=<value>
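For example, in a Bourne-style shell, the btl selection shown earlier could equivalently be made through the environment before invoking mpirun:
% export OMPI_MCA_btl=tcp,self
% mpirun -np 1 foo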
Thus, the -mca switch overrides any previously set environment
variables. The -mca settings similarly override MCA parameters set in
the $OPAL_PREFIX/etc/openmpi-mca-params.conf or $HOME/.openmpi/mca-params.conf files.
Unknown <key> arguments are still set as environment variables -- they
are not checked (by mpirun) for correctness. Illegal or incorrect
<value> arguments may or may not be reported -- it depends on the
specific MCA module.
To find the available component types under the MCA architecture, or to
find the available parameters for a specific component, use the
ompi_info command. See the ompi_info(1) man page for detailed
information on the command.
Be sure also to see the examples throughout the sections above.
mpirun -np 4 -mca btl ib,tcp,self prog1
Run 4 copies of prog1 using the "ib", "tcp", and "self" BTL's for
the transport of MPI messages.
mpirun -np 4 -mca btl tcp,sm,self
--mca btl_tcp_if_include eth0 prog1
Run 4 copies of prog1 using the "tcp", "sm" and "self" BTLs for the
transport of MPI messages, with TCP using only the eth0 interface
to communicate. Note that other BTLs have similar if_include MCA parameters.
mpirun returns 0 if all ranks started by mpirun exit after calling
MPI_FINALIZE. A non-zero value is returned if an internal error
occurred in mpirun, or one or more ranks exited before calling
MPI_FINALIZE. If an internal error occurred in mpirun, the
corresponding error code is returned. In the event that one or more
ranks exit before calling MPI_FINALIZE, the return value of the rank of
the process that mpirun first notices died before calling MPI_FINALIZE
will be returned. Note that, in general, this will be the first rank
that died but is not guaranteed to be so.