NAME

       gnt-cluster - ganeti administration, cluster-wide

SYNOPSIS

       gnt-cluster  command [ arguments... ]

DESCRIPTION

       The gnt-cluster command is used for cluster-wide administration in
       the ganeti system.

COMMANDS

   ADD-TAGS
       add-tags [ --from file ] tag ...

       Add  tags  to  the  cluster.  If  any  of  the  tags  contains  invalid
       characters, the entire operation will abort.

       If the --from option is given, the list of tags will be extended
       with the contents of that file (each line becomes a tag). In this
       case, there is no need to pass tags on the command line (if you do,
       both sources will be used). A file name of - will be interpreted as
       stdin.
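
       For example, to add the tags listed in a file (one per line)
       together with a tag passed on the command line (the file name and
       tag below are only placeholders):

                  # gnt-cluster add-tags --from /tmp/cluster-tags staging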

   COMMAND
       command [ -n node ] command

       Executes a command on the cluster nodes. If the option -n is not
       given, the command will be executed on all nodes; otherwise it will
       be executed only on the node(s) specified. Use the option multiple
       times to run it on several specific nodes, like:

                 # gnt-cluster command -n node1.example.com -n node2.example.com date

       The  command  is executed serially on the selected nodes. If the master
       node is present in the list, the command will be executed last  on  the
       master.  Regarding  the  other  nodes,  the execution order is somewhat
       alphabetic,  so   that   node2.example.com   will   be   earlier   than
       node10.example.com but after node1.example.com.

       So given the node names node1, node2, node3, node10, node11, with node3
       being the master, the order will  be:  node1,  node2,  node10,  node11,
       node3.

       The  command  is  constructed  by  concatenating all other command line
       arguments. For example, to list the contents of the /etc  directory  on
       all nodes, run:

                 # gnt-cluster command ls -l /etc

       and the command that will be executed will be "ls -l /etc".

   COPYFILE
       copyfile [ -n node ] file

       Copies  a  file  to  all  or  to some nodes. The argument specifies the
       source file (on the current system),  the  -n  argument  specifies  the
       target  node,  or nodes if the option is given multiple times. If -n is
       not given at all, the file will be copied to all nodes.  Example:

                  # gnt-cluster copyfile -n node1.example.com -n node2.example.com /tmp/test

       This will copy the file /tmp/test from the  current  node  to  the  two
       named nodes.

   DESTROY
       destroy --yes-do-it

       Remove  all  configuration files related to the cluster, so that a gnt-
       cluster init can be done again afterwards.

       Since this is a  dangerous  command,  you  are  required  to  pass  the
       argument --yes-do-it.
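
       Example:

                  # gnt-cluster destroy --yes-do-it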

   GETMASTER
       getmaster

       Displays the current master node.

   INFO
       info

       Shows runtime cluster information: cluster name, architecture (32 or 64
       bit), master node, node list and instance list.

   INIT
       init
           [ -s secondary_ip ]
           [ -b bridge ]
           [ -g vg-name ]
           [ --master-netdev interface-name ]
           [ -m mac-prefix ]
           [ --no-lvm-storage ]
           [ --file-storage-dir dir ]
           [ --enabled-hypervisors hypervisors ]
           [ -t hypervisor-name ]
           [ --hypervisor-parameters hypervisor:hv-param=value [ ,hv-param=value ... ] ]
           [ --backend-parameters be-param=value [ ,be-param=value ... ] ]
           clustername

       This command is only run once initially, on the first node of the
       cluster. It will initialize the cluster configuration and set up the
       SSH keys, among other things.

       Note that the clustername cannot be an arbitrary name. It has to be
       resolvable to an IP address using DNS, and it is best if you  give  the
       fully-qualified  domain  name.  This  hostname  must  resolve  to an IP
       address reserved exclusively for this purpose.

       The cluster can run in two modes: single-homed or dual-homed. In the
       first case, all traffic (public traffic, inter-node traffic and data
       replication traffic) goes over the same interface. In the dual-homed
       case, the data replication traffic goes over the second network. The
       -s option here marks the cluster as dual-homed and its parameter
       represents this node’s address on the second network. If you
       initialize the cluster with -s, all nodes added must have a
       secondary IP as well.

       Note  that  for  Ganeti  it  doesn’t matter if the secondary network is
       actually a separate physical network, or is done using tunneling,  etc.
       For performance reasons, it’s recommended to use a separate network, of
       course.

       The -b option specifies the default bridge for instances.

       The -g option will let you specify a volume group different from
       ’xenvg’ for ganeti to use when creating instance disks. This volume
       group must have the same name on all nodes. If you don’t want to use
       lvm storage at all, use the --no-lvm-storage option. Once the
       cluster is initialized you can change this setup with the modify
       command.

       The  --master-netdev  option  is  useful  for  specifying  a  different
       interface  on  which  the  master  will  activate its IP address.  It’s
       important that all nodes have this interface because you’ll need it for
       a master failover.

       The -m option will let you specify a three-byte prefix under which the
       virtual MAC addresses of your instances will be generated.  The  prefix
       must be specified in the format XX:XX:XX and the default is aa:00:00.

       The --no-lvm-storage option allows you to initialize the cluster
       without lvm support. This means that only instances using file-based
       storage backends can be created. Once the cluster is initialized you
       can change this setup with the modify command.

       The --file-storage-dir option allows you to set the directory to use for
       storing  the instance disk files when using file storage as backend for
       instance disks.

       The  --enabled-hypervisors  option  allows  you  to  set  the  list  of
       hypervisors that will be enabled for this cluster. Instance hypervisors
       can only be chosen from the list of enabled hypervisors. Currently,
       the following hypervisors are available:

       xen-pvm
              Xen PVM hypervisor

       xen-hvm
              Xen HVM hypervisor

       kvm    Linux KVM hypervisor

       fake   fake hypervisor for development/testing

       Either a single hypervisor name or a comma-separated list of hypervisor
       names can be specified. If this option is not specified, only the  xen-
       pvm hypervisor is enabled by default.

       With  the  -t option, the default hypervisor can be set. It has to be a
       member of the list of enabled hypervisors. If not specified, the  first
       entry on the list of enabled hypervisors will be used by default.

       The  --backend-parameters  option allows you to set the default backend
       parameters for the cluster. The parameter format is  a  comma-separated
       list of key=value pairs with the following supported keys:

        vcpus  Number of VCPUs to set for an instance by default; must be
               an integer, will be set to 1 if not specified.

       memory Amount of memory to allocate for an instance by default, can  be
              either  an  integer  or  an  integer  followed  by a unit (M for
              mebibytes and G for gibibytes are supported),  will  be  set  to
              128M if not specified.

       auto_balance
              Value  of the auto_balance flag for instances to use by default,
              will be set to true if not specified.

       The --hypervisor-parameters option allows you to set default
       hypervisor specific parameters for the cluster. The format of this
       option is the name of the hypervisor, followed by a colon and a
       comma-separated list of key=value pairs. The keys available for each
       hypervisor are detailed in the gnt-instance(8) man page, in the add
       command.
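
       For example, to initialize a dual-homed cluster with both Xen PVM
       and KVM enabled (the secondary IP, parameter values and cluster name
       below are only placeholders):

                  # gnt-cluster init -s 192.0.2.10 \
                      --enabled-hypervisors xen-pvm,kvm \
                      --backend-parameters vcpus=2,memory=512M \
                      cluster1.example.com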

   LIST-TAGS
       list-tags

       List the tags of the cluster.

   MASTERFAILOVER
       masterfailover [ --no-voting ]

       Failover the master role to the current node.

       The --no-voting option skips the remote node agreement checks. This
       is dangerous, but necessary in some cases (for example failing over
       the master role in a 2 node cluster with the original master down).
       If the original master then comes up, it won’t be able to start its
       master daemon because it won’t have enough votes, but neither will
       the new master, if the master daemon ever needs a restart. You can
       pass --no-voting to ganeti-masterd on the new master to solve this
       problem, and run gnt-cluster redist-conf to make sure the cluster is
       consistent again.
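
       For example, to make the current node the master when the remote
       agreement checks cannot be satisfied (such as the 2 node case
       described above):

                  # gnt-cluster masterfailover --no-voting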

   MODIFY
       modify
           [ -g vg-name ]
           [ --no-lvm-storage ]
           [ --enabled-hypervisors hypervisors ]
           [ --hypervisor-parameters hypervisor:hv-param=value [ ,hv-param=value ... ] ]
           [ --backend-parameters be-param=value [ ,be-param=value ... ] ]
           [ -C candidate_pool_size ]

       Modify the options for the cluster.

       The -g, --no-lvm-storage, --enabled-hypervisors,
       --hypervisor-parameters and --backend-parameters options are
       described in the init command.

       The -C option specifies the candidate_pool_size cluster parameter.
       This is the number of nodes that the master will try to keep as
       master_candidates. For more details about this role and other node
       roles, see ganeti(7). If you increase the size, the master will
       automatically promote as many nodes as required and possible to
       reach the intended number.
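
       For example, to raise the candidate pool size to 10 (an arbitrary
       value, chosen here only for illustration):

                  # gnt-cluster modify -C 10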

   QUEUE
       queue drain | undrain | info

       Change job queue properties.

       The drain option sets the drain flag on the job queue. No new jobs will
       be accepted, but jobs already in the queue will be processed.

       The undrain option will unset the drain flag on the job queue. New
       jobs will be accepted.

       The info option shows the properties of the job queue.
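
       For example, to stop accepting new jobs before maintenance, inspect
       the queue state, and re-enable submissions afterwards:

                  # gnt-cluster queue drain
                  # gnt-cluster queue info
                  # gnt-cluster queue undrain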

   REDIST-CONF
       redist-conf [ --submit ]

       This  command forces a full push of configuration files from the master
       node to the other nodes in the cluster. This is  normally  not  needed,
       but can be run if the verify command complains about configuration
       mismatches.

       The --submit option is used to send the job to the  master  daemon  but
       not wait for its completion. The job ID will be shown so that it can be
       examined via gnt-job info.
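
       For example, to push the configuration in the background and examine
       the resulting job (jobid below stands for the job ID printed by the
       first command):

                  # gnt-cluster redist-conf --submit
                  # gnt-job info jobid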

   REMOVE-TAGS
       remove-tags [ --from file ] tag ...

       Remove tags from the cluster. If any of the tags do not exist on the
       cluster, the entire operation will abort.

       If the --from option is given, the list of tags will be extended
       with the contents of that file (each line becomes a tag). In this
       case, there is no need to pass tags on the command line (if you do,
       both sources will be used). A file name of - will be interpreted as
       stdin.
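
       For example, assuming list-tags prints one tag per line, all tags
       currently set on the cluster can be removed by feeding its output
       back through stdin:

                  # gnt-cluster list-tags | gnt-cluster remove-tags --from -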

   RENAME
       rename [ -f ] name

       Renames the cluster and in the process updates the master IP address to
       the  one  the  new name resolves to. At least one of either the name or
       the IP address must be  different,  otherwise  the  operation  will  be
       aborted.

       Note that since this command can be dangerous (especially when run over
       SSH), the command will require confirmation  unless  run  with  the  -f
       option.

   REPAIR-DISK-SIZES
       repair-disk-sizes [ instance ... ]

       This command checks that the recorded size of the given instances’
       disks matches the actual size and updates any mismatches found. This is
       needed  if  the  Ganeti  configuration  is  no  longer  consistent with
       reality, as it will impact some disk operations. If  no  arguments  are
       given, all instances will be checked.

       Note  that  only active disks can be checked by this command; in case a
       disk cannot be activated it’s advised  to  use  gnt-instance  activate-
       disks  --ignore-size  ...  to  force  activation  without regard to the
       current size.

       When all disk sizes are consistent, the command will return no
       output. Otherwise it will log details about the inconsistencies in
       the configuration.
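
       For example, to check and repair the recorded disk sizes of a single
       instance (the instance name below is a placeholder):

                  # gnt-cluster repair-disk-sizes instance1.example.com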

   SEARCH-TAGS
       search-tags pattern

       Searches the tags on all objects in the cluster  (the  cluster  itself,
       the  nodes  and  the  instances)  for  a  given pattern. The pattern is
       interpreted as a regular expression and a search will  be  done  on  it
       (i.e. the given pattern is not anchored to the beginning of the string;
       if you want that, prefix the pattern with ^).

       If no tags match the pattern, the exit code of the command will
       be  one.  If  there  is at least one match, the exit code will be zero.
       Each match is listed on one line, the object and the tag separated by a
       space. The cluster will be listed as /cluster, a node will be listed as
       /nodes/name, and an instance as /instances/name.  Example:

       # gnt-cluster search-tags time
       /cluster ctime:2007-09-01
       /nodes/node1.example.com mtime:2007-10-04

   VERIFY
       verify [ --no-nplus1-mem ]

       Verify correctness of cluster configuration. This is safe with  respect
       to running instances, and incurs no downtime of the instances.

       If the --no-nplus1-mem option is given, ganeti won’t check whether,
       if it loses a node, it can restart all the instances on their
       secondaries (and report an error otherwise).
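
       For example, to run the verification but skip the N+1 memory
       redundancy checks:

                  # gnt-cluster verify --no-nplus1-mem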

   VERIFY-DISKS
       verify-disks

       The  command  checks  which  instances  have  degraded  DRBD  disks and
       activates the disks of those instances.

       This command is run from the ganeti-watcher  tool,  which  also  has  a
       different,  complementary  algorithm  for  doing  this check. Together,
       these two should ensure that DRBD disks are kept consistent.

   VERSION
       version

       Show the cluster version.

REPORTING BUGS

       Report bugs to  <URL:http://code.google.com/p/ganeti/> or  contact  the
       developers using the ganeti mailing list <ganeti@googlegroups.com>.

SEE ALSO

       Ganeti  overview  and  specifications:  ganeti(7)  (general  overview),
       ganeti-os-interface(7) (guest OS definitions).

       Ganeti commands:  gnt-cluster(8)  (cluster-wide  commands),  gnt-job(8)
       (job-related   commands),  gnt-node(8)  (node-related  commands),  gnt-
       instance(8) (instance commands), gnt-os(8) (guest  OS  commands),  gnt-
       backup(8)   (instance   import/export  commands),  gnt-debug(8)  (debug
       commands).

       Ganeti  daemons:  ganeti-watcher(8)  (automatic  instance   restarter),
       ganeti-cleaner(8)  (job  queue cleaner), ganeti-noded(8) (node daemon),
       ganeti-masterd(8) (master daemon), ganeti-rapi(8) (remote API  daemon).

COPYRIGHT

       Copyright  (C) 2006, 2007, 2008, 2009 Google Inc. Permission is granted
       to copy, distribute and/or modify under the terms of  the  GNU  General
       Public  License  as  published  by the Free Software Foundation; either
       version 2 of the License, or (at your option) any later version.

       On Debian systems, the complete text of the GNU General Public  License
       can be found in /usr/share/common-licenses/GPL.