Provided by: ganeti2_2.1.2.1-2_all

NAME

       gnt-cluster - ganeti administration, cluster-wide

SYNOPSIS

       gnt-cluster  command [ arguments... ]

DESCRIPTION

        The gnt-cluster command is used for cluster-wide administration in
        the ganeti system.

COMMANDS

   ADD-TAGS
       add-tags [ --from file ] tag ...

       Add  tags  to  the  cluster.  If  any  of  the  tags  contains  invalid
       characters, the entire operation will abort.

        If the --from option is given, the list of tags will be extended
        with the contents of that file (each line becomes a tag). In this
        case, there is no need to pass tags on the command line (if you do,
        both sources will be used). A file name of - will be interpreted as
        stdin.
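        For example, assuming a file /tmp/cluster-tags containing one tag
        per line (the file name and tags here are only illustrative):

                  # gnt-cluster add-tags --from /tmp/cluster-tags staging

        This adds every tag listed in the file plus the staging tag given
        on the command line.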

   COMMAND
       command [ -n node ] command

        Executes a command on all nodes. If the option -n is not given, the
        command will be executed on all nodes; otherwise it will be
        executed only on the specified node(s). Use the option multiple
        times to run it on multiple nodes, like:

                 # gnt-cluster command -n node1.example.com -n node2.example.com date

       The  command  is executed serially on the selected nodes. If the master
       node is present in the list, the command will be executed last  on  the
       master.  Regarding  the  other  nodes,  the execution order is somewhat
       alphabetic,  so   that   node2.example.com   will   be   earlier   than
       node10.example.com but after node1.example.com.

       So given the node names node1, node2, node3, node10, node11, with node3
       being the master, the order will  be:  node1,  node2,  node10,  node11,
       node3.

       The  command  is  constructed  by  concatenating all other command line
       arguments. For example, to list the contents of the /etc  directory  on
       all nodes, run:

                 # gnt-cluster command ls -l /etc

        and the command which will be executed will be "ls -l /etc".

   COPYFILE
       copyfile [ --use-replication-network ] [ -n node ] file

       Copies  a  file  to  all  or  to some nodes. The argument specifies the
       source file (on the current system),  the  -n  argument  specifies  the
       target  node,  or nodes if the option is given multiple times. If -n is
       not given at all, the file will be copied to all  nodes.   Passing  the
       --use-replication-network  option  will  cause the copy to be done over
       the replication network (only matters if the primary/secondary IPs  are
       different).  Example:

                  # gnt-cluster copyfile -n node1.example.com -n node2.example.com /tmp/test

       This  will  copy  the  file  /tmp/test from the current node to the two
       named nodes.

   DESTROY
       destroy --yes-do-it

       Remove all configuration files related to the cluster, so that  a  gnt-
       cluster init can be done again afterwards.

       Since  this  is  a  dangerous  command,  you  are  required to pass the
       argument --yes-do-it.

   GETMASTER
       getmaster

       Displays the current master node.

   INFO
       info

       Shows runtime cluster information: cluster name, architecture (32 or 64
       bit), master node, node list and instance list.

   INIT
       init
           [ -s secondary_ip ]
           [ -g vg-name ]
            [ --master-netdev interface-name ]
           [ -m mac-prefix ]
           [ --no-lvm-storage ]
           [ --no-etc-hosts ]
           [ --no-ssh-init ]
           [ --file-storage-dir dir ]
           [ --enabled-hypervisors hypervisors ]
           [ -t hypervisor name ]
           [    --hypervisor-parameters   hypervisor:hv-param=value   [   ,hv-
       param=value ... ] ]
           [ --backend-parameters be-param=value [ ,be-param=value ... ] ]
           [ --nic-parameters nic-param=value [ ,nic-param=value ... ] ]
           [ --maintain-node-health  { yes | no } ]
           [ --uid-pool user-id pool definition ]
           clustername

        This command is run only once initially, on the first node of the
        cluster. It will initialize the cluster configuration, set up the
        ssh keys, and more.

        Note that the clustername is not an arbitrary name. It has to be
        resolvable to an IP address using DNS, and it is best if you give
        the fully-qualified domain name. This hostname must resolve to an
        IP address reserved exclusively for this purpose.

        The cluster can run in two modes: single-homed or dual-homed. In
        the first case, all traffic (public traffic, inter-node traffic and
        data replication traffic) goes over the same interface. In the
        dual-homed case, the data replication traffic goes over the second
        network. The -s option here marks the cluster as dual-homed and its
        parameter represents this node’s address on the second network. If
        you initialise the cluster with -s, all nodes added must have a
        secondary IP as well.

       Note that for Ganeti it doesn’t matter  if  the  secondary  network  is
       actually  a separate physical network, or is done using tunneling, etc.
       For performance reasons, it’s recommended to use a separate network, of
       course.
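        For example, to initialize a dual-homed cluster, passing this
        node’s address on the replication network (the host name and
        address below are purely illustrative):

                  # gnt-cluster init -s 192.0.2.10 cluster.example.com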

        The -g option will let you specify a volume group different than
        ’xenvg’ for ganeti to use when creating instance disks. This volume
        group must have the same name on all nodes. Once the cluster is
        initialized, this setting can be changed with the modify command.
        If you don’t want to use lvm storage at all, use the
        --no-lvm-storage option.

       The  --master-netdev  option  is  useful  for  specifying  a  different
       interface on which the master  will  activate  its  IP  address.   It’s
       important that all nodes have this interface because you’ll need it for
       a master failover.

       The -m option will let you specify a three byte prefix under which  the
       virtual  MAC  addresses of your instances will be generated. The prefix
       must be specified in the format XX:XX:XX and the default is aa:00:00.

       The --no-lvm-storage  option  allows  you  to  initialize  the  cluster
       without  lvm  support.  This  means  that only instances using files as
       storage backend will  be  possible  to  create.  Once  the  cluster  is
       initialized you can change this setup with the modify command.

       The  --no-etc-hosts option allows you to initialize the cluster without
       modifying the /etc/hosts file.

       The --no-ssh-init option allows you to initialize the  cluster  without
       creating or distributing SSH key pairs.

        The --file-storage-dir option allows you to set the directory to
        use for storing the instance disk files when using file storage as
        the backend for instance disks.

       The  --enabled-hypervisors  option  allows  you  to  set  the  list  of
       hypervisors that will be enabled for this cluster. Instance hypervisors
       can  only be chosen from the list of enabled hypervisors, and the first
       entry of this list will be used by default.  Currently,  the  following
       hypervisors are available:

       xen-pvm
              Xen PVM hypervisor

       xen-hvm
              Xen HVM hypervisor

       kvm    Linux KVM hypervisor

       chroot a  simple chroot manager that starts chroot based on a script at
              the root of the filesystem holding the chroot

       fake   fake hypervisor for development/testing

       Either a single hypervisor name or a comma-separated list of hypervisor
       names  can be specified. If this option is not specified, only the xen-
       pvm hypervisor is enabled by default.

        The --hypervisor-parameters option allows you to set default
        hypervisor specific parameters for the cluster. The format of this
        option is the name of the hypervisor, followed by a colon and a
        comma-separated list of key=value pairs. The keys available for
        each hypervisor are detailed in the gnt-instance(8) man page, under
        the add command, plus the following parameters which are only
        configurable globally (at cluster level):

       migration_port
              Valid for the Xen PVM and KVM hypervisors.

               This option specifies the TCP port to use for
               live-migration.  For Xen, the same port should be configured
               on all nodes in the /etc/xen/xend-config.sxp file, under the
               key ‘‘xend-relocation-port’’.
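        For example, to enable both the Xen PVM and KVM hypervisors and set
        the KVM live-migration port (the port number and cluster name are
        arbitrary examples):

                  # gnt-cluster init --enabled-hypervisors xen-pvm,kvm \
                        --hypervisor-parameters kvm:migration_port=8102 \
                        cluster.example.com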

       The  --backend-parameters  option allows you to set the default backend
       parameters for the cluster. The parameter format is  a  comma-separated
       list of key=value pairs with the following supported keys:

        vcpus  Number of VCPUs to set for an instance by default; must be
               an integer, will be set to 1 if not specified.

       memory Amount of memory to allocate for an instance by default, can  be
              either  an  integer  or  an  integer  followed  by a unit (M for
              mebibytes and G for gibibytes are supported),  will  be  set  to
              128M if not specified.

       auto_balance
              Value  of the auto_balance flag for instances to use by default,
              will be set to true if not specified.
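        For example, to default new instances to 2 VCPUs and 512 MiB of
        memory (the values and cluster name are illustrative):

                  # gnt-cluster init --backend-parameters vcpus=2,memory=512M \
                        cluster.example.com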

       The  --nic-parameters  option  allows  you  to  set  the  default   nic
       parameters  for  the cluster. The parameter format is a comma-separated
       list of key=value pairs with the following supported keys:

       mode   The default nic mode, ’routed’ or ’bridged’.

        link   In bridged mode the default NIC bridge. In routed mode it
               represents a hypervisor-vif-script dependent value to allow
               different instance groups. For example, under the KVM
               default network script it is interpreted as a routing table
               number or name.
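        For example, to default NICs to bridged mode on the bridge xen-br0
        (the bridge name is an example; use whichever bridge exists on your
        nodes):

                  # gnt-cluster init --nic-parameters mode=bridged,link=xen-br0 \
                        cluster.example.com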

        The option --maintain-node-health allows you to enable or disable
        automatic maintenance actions on nodes. Currently these include
        automatic shutdown of instances and deactivation of DRBD devices
        on offline nodes; in the future it might be extended to automatic
        removal of unknown LVM volumes, etc.

       The --uid-pool option initializes the user-id pool.  The  user-id  pool
       definition  can  contain  a  list  of user-ids and/or a list of user-id
       ranges. The parameter format is a comma-separated list of numeric user-
       ids  or  user-id  ranges.  The ranges are defined by a lower and higher
       boundary, separated by a dash. The boundaries are  inclusive.   If  the
       --uid-pool  option  is not supplied, the user-id pool is initialized to
       an empty list. An empty list means that the  user-id  pool  feature  is
       disabled.
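        For example, to initialize the pool with the user-ids 1000 through
        1999 (inclusive) plus the single user-id 2500 (the values are
        illustrative):

                  # gnt-cluster init --uid-pool 1000-1999,2500 cluster.example.com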

   LIST-TAGS
       list-tags

       List the tags of the cluster.

   MASTERFAILOVER
       masterfailover [ --no-voting ]

       Failover the master role to the current node.

        The --no-voting option skips the remote node agreement checks. This
        is dangerous, but necessary in some cases (for example, failing
        over the master role in a 2-node cluster with the original master
        down). If the original master then comes up, it won’t be able to
        start its master daemon because it won’t have enough votes, but
        neither will the new master, if the master daemon ever needs a
        restart. You can pass --no-voting to ganeti-masterd on the new
        master to solve this problem, and run gnt-cluster redist-conf to
        make sure the cluster is consistent again.
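        For example, on the surviving node of a two-node cluster whose
        original master is down:

                  # gnt-cluster masterfailover --no-voting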

   MODIFY
       modify
           [ -g vg-name ]
           [ --no-lvm-storage ]
           [ --enabled-hypervisors hypervisors ]
           [   --hypervisor-parameters   hypervisor:hv-param=value   [    ,hv-
       param=value ... ] ]
           [ --backend-parameters be-param=value [ ,be-param=value ... ] ]
           [ --nic-parameters nic-param=value [ ,nic-param=value ... ] ]
           [ --uid-pool user-id pool definition ]
           [ --add-uids user-id pool definition ]
           [ --remove-uids user-id pool definition ]
           [ -C candidate_pool_size ]
           [ --maintain-node-health  { yes | no } ]

       Modify the options for the cluster.

        The -g, --no-lvm-storage, --enabled-hypervisors,
        --hypervisor-parameters, --backend-parameters, --nic-parameters,
        --maintain-node-health and --uid-pool options are described in the
        init command.

        The -C option specifies the candidate_pool_size cluster parameter.
        This is the number of nodes that the master will try to keep as
        master_candidates. For more details about this role and other node
        roles, see ganeti(7). If you increase the size, the master will
        automatically promote as many nodes as required and possible to
        reach the intended number.

       The --add-uids and --remove-uids options can  be  used  to  modify  the
       user-id pool by adding/removing a list of user-ids or user-id ranges.
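        For example, to raise the candidate pool size to 10 and extend the
        user-id pool with an additional range (the values are
        illustrative):

                  # gnt-cluster modify -C 10 --add-uids 3000-3999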

   QUEUE
        queue { drain | undrain | info }

       Change job queue properties.

       The drain option sets the drain flag on the job queue. No new jobs will
       be accepted, but jobs already in the queue will be processed.

        The undrain option unsets the drain flag on the job queue. New
        jobs will be accepted.

       The info option shows the properties of the job queue.
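        For example, to drain the queue before maintenance and undrain it
        afterwards:

                  # gnt-cluster queue drain
                  # gnt-cluster queue undrain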

   WATCHER
       watcher { pause duration | continue | info }

       Make the watcher pause or let it continue.

       The pause option causes the watcher to pause for duration seconds.

       The continue option will let the watcher continue.

       The info option shows whether the watcher is currently paused.
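        For example, to pause the watcher for one hour (3600 seconds) and
        then check its state:

                  # gnt-cluster watcher pause 3600
                  # gnt-cluster watcher info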

   REDIST-CONF
       redist-conf [ --submit ]

        This command forces a full push of configuration files from the
        master node to the other nodes in the cluster. This is normally
        not needed, but can be run if the verify command complains about
        configuration mismatches.

       The --submit option is used to send the job to the  master  daemon  but
       not wait for its completion. The job ID will be shown so that it can be
       examined via gnt-job info.

   REMOVE-TAGS
       remove-tags [ --from file ] tag ...

        Remove tags from the cluster. If any of the tags do not exist on
        the cluster, the entire operation will abort.

        If the --from option is given, the list of tags will be extended
        with the contents of that file (each line becomes a tag). In this
        case, there is no need to pass tags on the command line (if you do,
        both sources will be used). A file name of - will be interpreted as
        stdin.

   RENAME
       rename [ -f ] name

       Renames the cluster and in the process updates the master IP address to
       the  one  the  new name resolves to. At least one of either the name or
       the IP address must be  different,  otherwise  the  operation  will  be
       aborted.

       Note that since this command can be dangerous (especially when run over
       SSH), the command will require confirmation  unless  run  with  the  -f
       option.

   RENEW-CRYPTO
       renew-crypto [ -f ]
           [ --new-cluster-certificate ] [ --new-confd-hmac-key ]
           [ --new-rapi-certificate ] [ --rapi-certificate rapi-cert ]

        This command will stop all Ganeti daemons in the cluster and start
        them again once the new certificates and keys are replicated. The
        options --new-cluster-certificate and --new-confd-hmac-key can be
        used to regenerate the cluster-internal SSL certificate and the
        HMAC key used by ganeti-confd(8), respectively. To generate a new
        self-signed RAPI certificate (used by ganeti-rapi(8)) specify
        --new-rapi-certificate. If you want to use your own certificate,
        e.g. one signed by a certificate authority (CA), pass its filename
        to --rapi-certificate.
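        For example, to replace the RAPI certificate with one signed by
        your own CA, skipping confirmation (the certificate path is
        illustrative):

                  # gnt-cluster renew-crypto -f --rapi-certificate /etc/ganeti/rapi.pem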

   REPAIR-DISK-SIZES
       repair-disk-sizes [ instance ... ]

        This command checks that the recorded size of the given instances’
        disks matches the actual size and updates any mismatches found.
        This is needed if the Ganeti configuration is no longer consistent
        with reality, as it will impact some disk operations. If no
        arguments are given, all instances will be checked.

       Note that only active disks can be checked by this command; in  case  a
       disk  cannot  be  activated  it’s advised to use gnt-instance activate-
       disks --ignore-size ... to  force  activation  without  regard  to  the
       current size.

        When all disk sizes are consistent, the command will produce no
        output. Otherwise it will log details about the inconsistencies in
        the configuration.

   SEARCH-TAGS
       search-tags pattern

        Searches the tags on all objects in the cluster (the cluster
        itself, the nodes and the instances) for a given pattern. The
        pattern is interpreted as a regular expression and a search will be
        done on it (i.e. the given pattern is not anchored to the beginning
        of the string; if you want that, prefix the pattern with ^).

        If no tags match the pattern, the exit code of the command will be
        one. If there is at least one match, the exit code will be zero.
        Each match is listed on one line, the object and the tag separated
        by a space. The cluster will be listed as /cluster, a node as
        /nodes/name, and an instance as /instances/name. Example:

       # gnt-cluster search-tags time
       /cluster ctime:2007-09-01
       /nodes/node1.example.com mtime:2007-10-04

   VERIFY
       verify [ --no-nplus1-mem ]

       Verify  correctness of cluster configuration. This is safe with respect
       to running instances, and incurs no downtime of the instances.

        If the --no-nplus1-mem option is given, ganeti won’t check whether,
        if it loses a node, it could restart all the instances on their
        secondaries (and report an error otherwise).

   VERIFY-DISKS
       verify-disks

       The command  checks  which  instances  have  degraded  DRBD  disks  and
       activates the disks of those instances.

       This  command  is  run  from  the ganeti-watcher tool, which also has a
       different, complementary algorithm  for  doing  this  check.  Together,
       these two should ensure that DRBD disks are kept consistent.

   VERSION
       version

       Show the cluster version.

REPORTING BUGS

       Report  bugs  to  <URL:http://code.google.com/p/ganeti/> or contact the
       developers using the ganeti mailing list <ganeti@googlegroups.com>.

SEE ALSO

       Ganeti  overview  and  specifications:  ganeti(7)  (general  overview),
       ganeti-os-interface(7) (guest OS definitions).

       Ganeti  commands:  gnt-cluster(8)  (cluster-wide  commands), gnt-job(8)
       (job-related  commands),  gnt-node(8)  (node-related  commands),   gnt-
       instance(8)  (instance  commands),  gnt-os(8) (guest OS commands), gnt-
       backup(8)  (instance  import/export  commands),   gnt-debug(8)   (debug
       commands).

       Ganeti   daemons:  ganeti-watcher(8)  (automatic  instance  restarter),
       ganeti-cleaner(8) (job queue cleaner), ganeti-noded(8)  (node  daemon),
       ganeti-masterd(8)  (master daemon), ganeti-rapi(8) (remote API daemon).

COPYRIGHT

       Copyright (C) 2006, 2007, 2008, 2009 Google Inc. Permission is  granted
       to  copy,  distribute  and/or modify under the terms of the GNU General
       Public License as published by the  Free  Software  Foundation;  either
       version 2 of the License, or (at your option) any later version.

       On  Debian systems, the complete text of the GNU General Public License
       can be found in /usr/share/common-licenses/GPL.