
NAME

       clush - execute shell commands on a cluster

SYNOPSIS

       clush -a | -g group | -w nodes  [OPTIONS]

       clush -a | -g group | -w nodes  [OPTIONS] command

       clush  -a  |  -g  group | -w nodes  [OPTIONS] --copy file | dir [ file | dir ...] [ --dest
       path ]

       clush -a | -g group | -w nodes  [OPTIONS] --rcopy file | dir [ file | dir  ...]  [  --dest
       path ]

DESCRIPTION

       clush is a program for executing commands in parallel on a cluster and for gathering
       their results. clush can execute commands interactively or can be used within shell
       scripts and other applications. It is a partial front-end to the ClusterShell library,
       providing a light, unified and robust parallel command execution framework that allows
       traditional shell scripts to benefit from some of the library features. By default,
       clush uses the Ssh worker of ClusterShell, which only requires ssh(1) (OpenSSH SSH
       client).

INVOCATION

       clush can be started non-interactively to run a shell command, or can be invoked as an
       interactive shell. To start a clush interactive session, invoke the clush command
       without providing a command.

       Non-interactive mode
              When  clush  is started non-interactively, the command is executed on the specified
              remote hosts in parallel. If option -b or --dshbak is specified,  clush  waits  for
              command completion and then displays gathered output results.

               The -w option allows you to specify remote hosts using the ClusterShell NodeSet
               syntax, including the @group special syntax for node groups and the Extended
               Patterns syntax to benefit from basic NodeSet arithmetic (like @Agroup\&@Bgroup).
               See EXTENDED PATTERNS in nodeset(1) and also groups.conf(5) for more information.
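
               For example, assuming node groups @Agroup and @Bgroup are defined, the following
               would run uname -r only on the nodes that belong to both groups:

                  # clush -w @Agroup\&@Bgroup -b uname -r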

               Unless option --nostdin is specified, clush detects when its standard input is
               connected to a terminal (as determined by isatty(3)). If it is, clush listens to
               standard input while commands are running: pressing the Enter key displays the
               status of the current nodes. If standard input is not connected to a terminal,
               clush binds the standard input of the remote commands to its own standard input,
               allowing scripting methods like:
                 # echo foo | clush -w node[40-42] -b cat
                 ---------------
                 node[40-42]
                 ---------------
                 foo

               See the EXAMPLES section below for more examples.

       Interactive session
              If a command is not specified, and its standard input is connected to  a  terminal,
              clush runs interactively. In this mode, clush uses the GNU readline library to read
              command lines. Readline provides commands for searching through the command history
              for  lines containing a specified string. For instance, type Control-R to search in
              the history for the next entry matching the search string typed so far.  clush also
               recognizes special single-character prefixes that allow the user to see and modify
              the current nodeset (the nodes where the commands are executed).

              Single-character interactive commands are:

                     clush> ?
                            show current nodeset

                     clush> =<NODESET>
                            set current nodeset

                     clush> +<NODESET>
                            add nodes to current nodeset

                     clush> -<NODESET>
                            remove nodes from current nodeset

                     clush> !COMMAND
                            execute COMMAND on the local system

                     clush> =
                            toggle the output format (gathered or standard mode)

              To leave an interactive session, type quit or Control-D.
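
               For instance, a short interactive session might look like this (nodes and output
               are illustrative):

                  clush> =node[11-12]
                  clush> uname -r
                  node11: 5.14.0-162.el9.x86_64
                  node12: 5.14.0-162.el9.x86_64
                  clush> +node13
                  clush> ?
                  node[11-13]
                  clush> quit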

       Local execution ( --worker=exec or -R exec )
               Instead of running the provided command on remote nodes, clush can use the
               dedicated exec worker to launch the command locally, once per node. Patterns can
               be used in the command line to build a different command for each node: %h or
               %host is replaced by the node name and %r or %rank by the remote rank [0-N] (to
               get a literal %, use %%).
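
               For example, the following runs one echo command locally per node, substituting
               the node name and rank (output illustrative):

                  # clush -w node[1-2] -R exec echo %host is rank %r
                  node1: node1 is rank 0
                  node2: node2 is rank 1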

       File copying mode ( --copy )
               When clush is started with the -c or --copy option, it will attempt to copy the
               specified files and/or directories to the provided target cluster nodes. If the
               --dest option is specified, the copied files are placed there.

       Reverse file copying mode ( --rcopy )
               When clush is started with the --rcopy option, it will attempt to retrieve the
               specified files and/or directories from the provided cluster nodes. If the --dest
               option is specified, it must be a directory path where the files will be stored
               with their hostname appended. If the destination path is not specified, the
               directory of the first source file or dir is used as the local destination.

OPTIONS

       --version
              show clush version number and exit

       -s GROUPSOURCE, --groupsource=GROUPSOURCE
              optional groups.conf(5) group source to use

       --nostdin
              do not watch for possible input from stdin

       -O <KEY=VALUE>, --option=<KEY=VALUE>
              override any key=value clush.conf(5) options (repeat as needed)
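
               For example, to override the configured fanout value from clush.conf(5) for a
               single run:

                  # clush -O fanout=16 -w node[1-32] uname -r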

       Selecting target nodes:

              -w NODES
                     nodes where to run the command

              -x NODES
                     exclude nodes from the node list

              -a, --all
                     run command on all nodes

              -g GROUP, --group=GROUP
                     run command on a group of nodes

              -X GROUP
                     exclude nodes from this group

               --hostfile=FILE, --machinefile=FILE
                      path to a file containing a list of single hosts, node sets or node groups,
                      separated by spaces and/or newlines (may be specified multiple times, one
                      per file)
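
                      For example, assuming a file named hosts.txt (hypothetical) containing:

                         node1 node[2-5]
                         @io

                      the following would run uname -r on all of these nodes:

                         # clush --hostfile hosts.txt -b uname -r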

              --topology=FILE
                     topology configuration file to use for tree mode

       Output behaviour:

              -q, --quiet
                     be quiet, print essential output only

              -v, --verbose
                     be verbose, print informative messages

              -d, --debug
                     output more messages for debugging purpose

              -G, --groupbase
                     do not display group source prefix

               -L     disable header block and order output by nodes; additionally, when used in
                      conjunction with -b/-B, it enables "live gathering" of results in line
                      mode, so that the next line is displayed as soon as possible (e.g. when all
                      nodes have sent the line)

              -N     disable labeling of command line

              -P, --progress
                     show  progress during command execution; if writing is performed to standard
                     input, the live progress indicator will display the global bandwidth of data
                     written to the target nodes

              -b, --dshbak
                     display gathered results in a dshbak-like way

              -B     like -b but including standard error

              -r, --regroup
                     fold nodeset using node groups

              -S     return the largest of command return codes

               --color=WHENCOLOR
                      whether to use ANSI colors to surround node or nodeset prefix/header with
                      escape sequences to display them in color on the terminal. WHENCOLOR is
                      never, always or auto (which uses color if standard output/error refer to
                      a terminal). Colors are set to ESC[34m (blue foreground text) for stdout
                      and ESC[31m (red foreground text) for stderr, and cannot be modified.

               --diff show diff between common outputs (the best reference output is chosen by
                      favoring the largest nodeset and the smallest command return code)

       File copying:

              -c, --copy
                     copy local file or directory to remote nodes

              --rcopy
                     copy file or directory from remote nodes

              --dest=DEST_PATH
                     destination file or directory on the nodes (optional: use the  first  source
                     directory path when not specified)

              -p     preserve modification times and modes

       Connection options:

              -f FANOUT, --fanout=FANOUT
                      use a specified maximum fanout size (i.e. do not execute more than FANOUT
                      commands at the same time), useful to limit resource usage

              -l USER, --user=USER
                     execute remote command as user

              -o OPTIONS, --options=OPTIONS
                      can be used to give ssh options, e.g. -o "-p 2022 -i ~/.ssh/myidrsa";
                      these options are added first to ssh and override default ones

              -t CONNECT_TIMEOUT, --connect_timeout=CONNECT_TIMEOUT
                     limit time to connect to a node

              -u COMMAND_TIMEOUT, --command_timeout=COMMAND_TIMEOUT
                     limit time for command to run on the node

              -R WORKER, --worker=WORKER
                     worker name to use for connection (exec, ssh, rsh, pdsh), default is ssh

       For a short explanation of these options, see -h, --help.

EXIT STATUS

       By  default,  an  exit  status of zero indicates success of the clush command but gives no
       information about the remote  commands  exit  status.  However,  when  the  -S  option  is
       specified,  the  exit  status  of clush is the largest value of the remote commands return
       codes.

       For failed remote commands whose exit status is non-zero, and unless  the  combination  of
       options -qS is specified, clush displays messages similar to:

       clush: node[40-42]: exited with exit code 1
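
       For example, running a failing command with -S (output illustrative):

          # clush -w node[40-42] -S false; echo $?
          clush: node[40-42]: exited with exit code 1
          1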

EXAMPLES

   Remote parallel execution
       # clush -w node[3-5,62] uname -r
              Run command uname -r in parallel on nodes: node3, node4, node5 and node62

   Local parallel execution
       # clush -w node[1-3] --worker=exec ping -c1 %host
               Run locally, in parallel, a ping command for nodes: node1, node2 and node3. You
               may also use -R exec as the shorter and pdsh-compatible option.

   Display features
       # clush -w node[3-5,62] -b uname -r
               Run command uname -r on node[3-5,62] and display gathered output results
               (integrated dshbak-like).

       # clush -w node[3-5,62] -bL uname -r
               Line mode: run command uname -r on node[3-5,62] and display gathered output
               results without the default header block.

       # ssh node32 find /etc/yum.repos.d -type f | clush -w node[40-42] -b xargs ls -l
               Search for files in /etc/yum.repos.d on node32, then use clush to list the
               matching files on node[40-42], with -b to display gathered results.

       # clush -w node[3-5,62] --diff dmidecode -s bios-version
               Run this Linux command to get the BIOS version on node[3-5,62] and show version
               differences (if any).

   All nodes
       # clush -a uname -r
               Run command uname -r on all cluster nodes, see groups.conf(5) to set up all
               cluster nodes (all: field).

       # clush -a -x node[5,7] uname -r
              Run command uname -r on all cluster nodes except on nodes node5 and node7.

       # clush -a --diff cat /some/file
              Run command cat /some/file on all cluster nodes and show differences (if any), line
              by line, between common outputs.

   Node groups
       # clush -w @oss modprobe lustre
               Run command modprobe lustre on nodes from the node group named oss, see
               groups.conf(5) to set up node groups (map: field).

       # clush -g oss modprobe lustre
               Same as previous example but using -g to avoid the @ group prefix.

       # clush -w @mds,@oss modprobe lustre
              You  may  specify  several  node  groups by separating them with commas (please see
              EXTENDED PATTERNS in nodeset(1) and also groups.conf(5) for more information).

   Copy files
       # clush -w node[3-5,62] --copy /etc/motd
              Copy local file /etc/motd to remote nodes node[3-5,62].

       # clush -w node[3-5,62] --copy /etc/motd --dest /tmp/motd2
              Copy local file /etc/motd to remote nodes node[3-5,62] at path /tmp/motd2.

       # clush -w node[3-5,62] -c /usr/share/doc/clustershell
              Recursively copy local directory /usr/share/doc/clustershell to the  same  path  on
              remote nodes node[3-5,62].

       # clush -w node[3-5,62] --rcopy /etc/motd --dest /tmp
               Copy /etc/motd from remote nodes node[3-5,62] to the local /tmp directory, each
               file having its remote hostname appended, e.g. /tmp/motd.node3.

FILES

       /etc/clustershell/clush.conf
              System-wide clush configuration file.

       ~/.clush.conf
              This is the per-user clush configuration file.

       ~/.clush_history
              File in which interactive clush command history is saved.

SEE ALSO

       clubak(1), nodeset(1), readline(3), clush.conf(5), groups.conf(5).

BUG REPORTS

       Use the following URL to submit a bug report or feedback:
              https://github.com/cea-hpc/clustershell/issues

AUTHOR

       Stephane Thiell <sthiell@stanford.edu>

COPYRIGHT

       CeCILL-C V1