Provided by: ganeti_1.2.7-1_all

NAME

       gnt-instance - ganeti instance administration

SYNOPSIS

       gnt-instance  command [ arguments... ]

DESCRIPTION

       The gnt-instance command is used for instance administration in the
       ganeti system.

COMMANDS

   CREATION/REMOVAL/QUERYING
   ADD
       add [ -s disksize ] [ --swap-size disksize ] [ -m memsize ]
           [ -b bridge ] [ --mac MAC-address ]
           [ --hvm-boot-order boot-order ] [ --hvm-acpi ACPI-support ]
           [ --hvm-pae PAE-support ]
           [ --hvm-cdrom-image-path cdrom-image-path ]
           [ --hvm-nic-type NICTYPE ]
           [ --hvm-disk-type DISKTYPE ]
           [ --vnc-bind-address vnc-bind-address ]
           [ --kernel { default | kernel_path } ]
           [ --initrd { default | none | initrd_path } ]
           [ --cpu vcpus ] [ --ip { default | none | ip-address } ]
            [ --auto-balance AUTO_BALANCE ] [ --no-wait-for-sync ]
            [ --no-start ] [ --no-ip-check ]
           -t { diskless | plain | local_raid1 | remote_raid1 | drbd }
           { -n node[:secondary-node] | --iallocator name }
           -o os-type
           instance

        Creates a new instance on the specified host. instance must be in DNS
        and resolve to an IP in the same network as the nodes in the cluster.

        The -s option specifies the disk size for the instance, in mebibytes
        (defaults to 20480MiB = 20GiB). You can also use one of the suffixes
        m, g or t to specify the units used; these suffixes map to mebibytes,
        gibibytes and tebibytes.

       The  --swap-size option specifies the swap disk size (in mebibytes) for
       the instance (the one presented as /dev/sdb). The default  is  4096MiB.
       As for the disk size, you can specify other suffixes.

       The  -m option specifies the memory size for the instance, in mebibytes
       (defaults to 128 MiB). Again, you can use other suffixes (e.g. 2g).

        The -o option specifies the operating system to be installed. The
        available operating systems can be listed with gnt-os list.

        The -b option specifies the bridge to which the instance will be
        connected (defaults to the cluster-wide default bridge specified at
        cluster initialization time).

       The  --mac  option  specifies the MAC address of the ethernet interface
       for the instance. If this option is not specified, a new MAC address is
       generated  randomly  with  the  configured  MAC  prefix.  The  randomly
       generated MAC address is guaranteed to be unique among the instances of
       this cluster.

       The --hvm-boot-order option specifies the boot device order for Xen HVM
       instances. The boot order is a  string  of  letters  listing  the  boot
       devices, with valid device letters being:

       a      floppy drive

       c      hard disk

       d      CDROM drive

       n      network boot (PXE)

        The default is not to set an HVM boot order, which is interpreted as
        ’dc’. This option, like all options starting with ’hvm’, is only
        relevant for Xen HVM instances and is ignored by all other instance
        types.

       The --hvm-acpi option specifies if Xen should enable ACPI  support  for
       this HVM instance. Valid values are true or false. The default value is
       false, disabling ACPI support for this instance.

        The --hvm-pae option specifies if Xen should enable PAE support for
        this HVM instance. Valid values are true or false. The default is
        false, disabling PAE support for this instance.

       The --hvm-cdrom-image-path option specifies the path to  the  file  Xen
       uses  to  emulate  a  virtual  CDROM drive for this HVM instance. Valid
       values are either an absolute path to an existing file or  None,  which
       disables  virtual CDROM support for this instance. The default is None,
       disabling virtual CDROM support.

        The --hvm-nic-type option specifies the NIC type Xen should use for
        this HVM instance. Valid choices are rtl8139, ne2k_pci, ne2k_isa and
        paravirtual with rtl8139 as the default. The paravirtual setting is
        intended for use with the GPL PV drivers inside HVM Windows instances.

        The --hvm-disk-type option specifies the disk type Xen should use for
        the HVM instance. Valid choices are ioemu and paravirtual with ioemu
        as the default. The paravirtual setting is intended for use with the
        GPL PV drivers inside HVM Windows instances.

       The --vnc-bind-address  option  specifies  the  address  that  the  VNC
       listener  for  this  instance  should  bind  to.  Valid values are IPv4
       addresses. Use the address 0.0.0.0 to bind to all available  interfaces
       (this  is  the default) or specify the address of one of the interfaces
       on the node to restrict listening to that interface.

       The --iallocator option specifies the instance allocator plugin to use.
       If  you  pass  in  this option the allocator will select nodes for this
       instance automatically, so you don’t need to  pass  them  with  the  -n
       option.  For  more  information  please refer to the instance allocator
       documentation.

       The --kernel option allows the instance to use a custom  kernel  (if  a
       filename    is    passed)    or    to    use    the    default   kernel
       (/boot/vmlinuz-2.6-xenU), if the string default is passed.

        The --initrd option is similar: it allows the instance to use a
        custom initrd (if a filename is passed), to use the default initrd
        (/boot/initrd-2.6-xenU) if the string default is passed, or to
        disable the use of an initrd if the string none is passed. Note that
        if the instance is set to use the default initrd and it doesn’t
        exist, the setting will be silently ignored; if the instance is set
        to use a custom initrd and it doesn’t exist, this will be treated as
        an error and will prevent the startup of the instance.

       The  --auto-balance  option  specifies  whether  the memory size of the
       instance will be considered in cluster verify checks; in the future, it
       might be used for automated cluster operations, but currently it is not
       used anywhere else. It defaults to true.

        The -t option specifies the disk layout type for the instance. The
        available choices are:

       diskless
               This creates an instance with no disks. It is useful for
               testing only (or other special cases).

       plain  Disk devices will be logical volumes.

       local_raid1
              Disk devices will be md raid1  arrays  over  two  local  logical
              volumes.

       remote_raid1
              Disk devices will be md raid1 arrays with one component (so it’s
              not  actually  raid1):  a  drbd  (0.7.x)  device   between   the
              instance’s  primary  node and the node given by the second value
              of the --node option.

       drbd   Disk devices will be drbd (version 8.x) on top of  lvm  volumes.
              They  are  equivalent  in functionality to remote_raid1, but are
              recommended for new instances (if you have drbd 8.x  installed).

        The optional second value of the --node option is used for the
        remote raid template type and specifies the remote node.

       If you do not want gnt-instance to wait  for  the  disk  mirror  to  be
       synced, use the --no-wait-for-sync option.

       Use the --cpu option to set the number of virtual CPUs.

       To  pass  an  IPv4  address to the hypervisor, specify the --ip option.
       Note that this IP address will not  be  used  by  the  OS  scripts  and
       changing  it  later  will  change  the  address  that the instance will
       actually use.

       In case you don’t want the new instance to  be  automatically  started,
       specify the --no-start option.

        Ganeti will not check whether the instance’s IP address is already
        in use if the --no-ip-check option is specified.

       Example:

       # gnt-instance add -t plain -s 30g -m 512 -o debian-etch \
         -n node1.example.com instance1.example.com
       # gnt-instance add -t remote_raid1 -s 30g -m 512 -o debian-etch \
         -n node1.example.com:node2.example.com instance2.example.com

   REMOVE
       remove [ --ignore-failures ] instance

        Remove an instance. This will remove all data from the instance and
        there is no way back. If you are not sure whether you will use the
        instance again, use shutdown first and leave it in the shutdown state
        for a while.

       The --ignore-failures option will cause the removal to proceed even  in
       the  presence of errors during the removal of the instance (e.g. during
       the shutdown or the disk removal). If this option  is  not  given,  the
       command will stop at the first error.

       Example:

       # gnt-instance remove instance1.example.com

   LIST
       list [ --no-headers ] [ --separator=SEPARATOR ] [ -o [+]FIELD,... ]

       Shows the currently configured instances with memory usage, disk usage,
       the node they are running on, and the CPU  time,  counted  in  seconds,
       used by each instance since its latest restart.

       The  --no-headers  option  will  skip  the  initial  header  line.  The
       --separator option takes an argument which denotes what  will  be  used
       between the output fields. Both these options are to help scripting.

       The  -o  option  takes  a  comma-separated  list  of output fields. The
       available fields and their meaning are:

       name   the instance name

       os     the OS of the instance

       pnode  the primary node of the instance

       snodes comma-separated  list  of  secondary  nodes  for  the  instance;
              usually this will be just one node

       admin_state
              the desired state of the instance (either "yes" or "no" denoting
              the instance should run or not)

       admin_ram
              the desired memory for the instance

       disk_template
              the disk template of the instance

       oper_state
              the actual state of the instance;  can  be  one  of  the  values
              "running", "stopped", "(node down)"

        status combined form of admin_state and oper_state; this can be one
               of: ERROR_nodedown if the node of the instance is down,
               ERROR_down if the instance should run but is down, ERROR_up if
               the instance should be stopped but is actually running,
               ADMIN_down if the instance has been stopped (and is stopped)
               and running if the instance is set to be running (and is
               running)

       oper_ram
              the  actual  memory  usage  of  the  instance  as  seen  by  the
              hypervisor

       ip     the ip address ganeti recognizes as associated with the instance
              interface

       mac    the instance interface MAC address

       bridge bridge the instance is connected to

       sda_size
              the size of the instance’s first disk

       sdb_size
              the size of the instance’s second disk

       vcpus  the number of VCPUs allocated to the instance

        tags   comma-separated list of the instance’s tags

        If the value of the option starts with the character +, the new
        fields will be added to the default list. This allows you to quickly
        see the default list plus a few other fields, instead of retyping the
        entire list of fields.

        There is a subtle grouping among the available output fields: all
        fields except for oper_state, oper_ram and status are configuration
        values and not run-time values. So if you don’t select any of these
        fields, the query will be satisfied instantly from the cluster
        configuration, without having to ask the remote nodes for the data.
        This can be helpful for big clusters when you only want some data and
        it makes sense to specify a reduced set of output fields.

       The  default  output  field  list  is:  name,  os,  pnode, admin_state,
       oper_state, oper_ram.
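        Example (instance and field names are illustrative; the list after +
        extends the default field list):

```shell
# gnt-instance list
# gnt-instance list -o +ip,bridge
# gnt-instance list --no-headers --separator=: -o name,status
```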

   INFO
       info [ -s | --static ] [ instance ... ]

       Show  detailed  information  about  the  (given)  instances.  This   is
       different  from  list  as  it  shows detailed data about the instance’s
       disks  (especially  useful  for  remote  raid  templates)   and   other
       parameters.

       If   the   option  -s  is  used,  only  information  available  in  the
       configuration file is returned,  without  querying  nodes,  making  the
       operation faster.
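        Example (instance name is illustrative; the second form queries only
        the configuration file):

```shell
# gnt-instance info instance1.example.com
# gnt-instance info -s instance1.example.com
```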

   MODIFY
        modify [ -m memsize ] [ -p vcpus ] [ -i ip ] [ -b bridge ]
            [ --mac MAC-address ]
            [ --hvm-boot-order boot-order ] [ --hvm-acpi ACPI-support ]
            [ --hvm-pae PAE-support ]
            [ --hvm-cdrom-image-path cdrom-image-path ]
            [ --hvm-nic-type NICTYPE ]
            [ --hvm-disk-type DISKTYPE ]
            [ --vnc-bind-address vnc-bind-address ]
            [ --kernel { default | kernel_path } ]
            [ --initrd { default | none | initrd_path } ]
            [ --auto-balance AUTO_BALANCE ] instance

       Modify the memory size, number of vcpus, ip address, MAC address and/or
       bridge for an instance.

       The memory size is given in MiB. Note that you need to  give  at  least
       one of the arguments, otherwise the command complains.

       The  --kernel,  --initrd  and --hvm-boot-order options are described in
       the add command.

       Additionally, the HVM boot order can be reset to the default values  by
       using --hvm-boot-order=default.

       The  --hvm-acpi  option specifies if Xen should enable ACPI support for
       this HVM instance. Valid values are true or false.

        The --hvm-pae option specifies if Xen should enable PAE support for
        this HVM instance. Valid values are true or false.

        The --hvm-cdrom-image-path option specifies the path to the file Xen
        uses to emulate a virtual CDROM drive for this HVM instance. Valid
        values are either an absolute path to an existing file or None, which
        disables virtual CDROM support for this instance.

       The --hvm-nic-type option specifies the NIC type  Xen  should  use  for
       this  HVM  instance.  Valid choices are rtl8139, ne2k_pci, ne2k_isa and
       paravirtual with rtl8139 as the default.  The  paravirtual  setting  is
       intended  for use with the GPL PV drivers inside HVM Windows instances.

       The --hvm-disk-type option specifies the disk type Xen should  use  for
       the HVM instance. Valid choices are ioemu and paravirtual with ioemu as
       the default. The paravirtual setting is intended for use with  the  GPL
       PV drivers inside HVM Windows instances.

       The  --vnc-bind-address  option  specifies  the  address  that  the VNC
       listener for this instance  should  bind  to.  Valid  values  are  IPv4
       addresses. Use the address 0.0.0.0 to bind to all available interfaces.

       The --auto-balance option specifies whether  the  memory  size  of  the
       instance will be considered in cluster verify checks; in the future, it
       might be used for automated cluster operations, but currently it is not
       used anywhere else.

        All the changes take effect at the next restart. If the instance is
        running, there is no immediate effect on the instance.
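        Example (instance name and new memory size are illustrative; the
        second command resets the HVM boot order to the default):

```shell
# gnt-instance modify -m 512 instance1.example.com
# gnt-instance modify --hvm-boot-order=default instance1.example.com
```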

   REINSTALL
       reinstall [ -o os-type ] [ -f force ] [ --select-os ] instance

        Reinstalls the operating system on the given instance. The instance
        must be stopped when running this command. If the -o option is
        specified, the operating system is changed.

       The --select-os option switches to an  interactive  OS  reinstall.  The
       user  is  prompted to select the OS template from the list of available
       OS templates.
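        Example (instance and OS names are illustrative; the instance must be
        stopped first):

```shell
# gnt-instance shutdown instance1.example.com
# gnt-instance reinstall -o debian-etch instance1.example.com
```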

   RENAME
       rename [ --no-ip-check ] instance new_name

       Renames the given instance. The instance must be stopped  when  running
       this  command.  The  requirements  for the new name are the same as for
       adding an instance: the new name must  be  resolvable  and  the  IP  it
       resolves  to  must  not be reachable (in order to prevent duplicate IPs
       the next time the instance is started). The IP test can be  skipped  if
       the --no-ip-check option is passed.
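        Example (both names are illustrative; the new name must already
        resolve in DNS):

```shell
# gnt-instance rename instance1.example.com instance3.example.com
```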

   STARTING/STOPPING/CONNECTING TO CONSOLE
   STARTUP
       startup [ --extra=PARAMS ] [ --force ]
           [ --instance | --node | --primary | --secondary | --all ]
           [ name ... ]

        Starts one or more instances, depending on the following options. The
        five available modes are:

       --instance
              will start the  instances  given  as  arguments  (at  least  one
              argument required); this is the default selection

       --node will  start  the  instances  who  have  the given node as either
              primary or secondary

       --primary
              will start all instances whose primary node is in  the  list  of
              nodes passed as arguments (at least one node required)

       --secondary
              will  start all instances whose secondary node is in the list of
              nodes passed as arguments (at least one node required)

       --all  will start all instances in the cluster (no arguments accepted)

       Note that although you can pass more than  one  selection  option,  the
       last  one wins, so in order to guarantee the desired result, don’t pass
       more than one such option.

        The --extra option is used to pass additional arguments to the
        instance’s kernel for this start only. Currently there is no way to
        specify a persistent set of arguments (besides the hardcoded ones).
        Note that this may not apply to all virtualization types.

       Use --force to start even if secondary disks are failing.

       Example:

        # gnt-instance startup instance1.example.com
        # gnt-instance startup --extra single test1.example.com
        # gnt-instance startup --node node1.example.com node2.example.com
        # gnt-instance startup --all

   SHUTDOWN
       shutdown
           [ --instance | --node | --primary | --secondary | --all ]
           [ name ... ]

        Stops one or more instances. If an instance cannot be cleanly stopped
        within a hardcoded interval (currently 2 minutes), it will be
        forcibly stopped (equivalent to switching off the power on a physical
        machine).

        The --instance, --node, --primary, --secondary and --all options are
        similar to those of the startup command and they influence the actual
        instances being shut down.

       Example:

       # gnt-instance shutdown instance1.example.com
       # gnt-instance shutdown --all

   REBOOT
       reboot
           [ --extra=PARAMS ]
           [ --type=REBOOT-TYPE ]
           [ --ignore-secondaries ]
           [ --force-multiple ]
           [ --instance | --node | --primary | --secondary | --all ]
           [ name ... ]

        Reboots one or more instances. The type of reboot depends on the
        value of --type. A soft reboot does a hypervisor reboot; a hard
        reboot does an instance stop, recreates the hypervisor configuration
        for the instance and starts the instance; a full reboot does the
        equivalent of gnt-instance shutdown && gnt-instance startup. The
        default is a hard reboot.

        For a hard reboot, the --ignore-secondaries option ignores errors on
        the secondary node while re-assembling the instance disks.

        The --instance, --node, --primary, --secondary and --all options are
        similar to those of the startup command and they influence the actual
        instances being rebooted.

       Use  the  --force-multiple  option to keep gnt-instance from asking for
       confirmation when more than one instance is affected.

       Example:

       # gnt-instance reboot instance1.example.com
       # gnt-instance reboot --type=full instance1.example.com

   CONSOLE
       console instance

       Connects to the console of the given instance. If the instance  is  not
       up, an error is returned.

        For HVM instances, this will attempt to connect to the serial console
        of the instance. To connect to the virtualized "physical" console of
        an HVM instance, use a VNC client with the connection info from
        gnt-instance info.

       Example:

       # gnt-instance console instance1.example.com

   DISK MANAGEMENT
   REPLACE-DISKS
       replace-disks { --new-secondary NODE | --iallocator name } instance

       replace-disks { --iallocator name | --new-secondary NODE }
           [ -s ] instance

       replace-disks [ -s | -p ] instance

       This command is a generalized form for adding and replacing disks.

       The first form is usable with the remote_raid1 disk template. This will
       replace  the  disks  on  both  the  primary  and  secondary  node,  and
       optionally will change the secondary node to a new one if you pass  the
       --new-secondary option.

       The  second and third forms are usable with the drbd disk template. The
       second form will do a secondary replacement,  but  as  opposed  to  the
       remote_raid1  will  not  replace the disks on the primary, therefore it
       will execute faster. The third form will replace the  disks  on  either
       the  primary  (-p)  or  the  secondary  (-s) node of the instance only,
       without changing the node.

        Specifying --iallocator enables secondary node replacement and makes
        the new secondary be selected automatically by the specified
        allocator plugin.
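        Example (instance and node names are illustrative; the first command
        replaces the disks on the secondary node of a drbd instance in place,
        the second moves the secondary to a new node):

```shell
# gnt-instance replace-disks -s instance1.example.com
# gnt-instance replace-disks --new-secondary node3.example.com \
  instance1.example.com
```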

   ADD-MIRROR
       add-mirror -b sdX -n node instance

       Adds a new mirror to the disk layout of the instance, if  the  instance
       has  a  remote raid disk layout.  The new mirror member will be between
       the instance’s primary node and the node given with the -n option.

   REMOVE-MIRROR
        remove-mirror -b sdX -p id instance

        Removes a mirror component from the disk layout of the instance, if
        the instance has a remote raid disk layout.

        You need to specify the disk to act on using the -b option (either
        sda or sdb) and the mirror component, which is identified by the -p
        option. You can find the list of valid identifiers with the info
        command.

   ACTIVATE-DISKS
       activate-disks instance

       Activates the block devices of the given instance. If  successful,  the
       command will show the location and name of the block devices:

       node1.example.com:sda:/dev/md0
       node1.example.com:sdb:/dev/md1

       In this example, node1.example.com is the name of the node on which the
       devices have been activated. The sda and sdb are the names of the block
       devices inside the instance. /dev/md0 and /dev/md1 are the names of the
       block devices as visible on the node.

       Note that it is safe to run this command while the instance is  already
       running.

   DEACTIVATE-DISKS
       deactivate-disks instance

        De-activates the block devices of the given instance. Note that if
        you run this command for a remote raid instance type while it is
        running, it will not be able to shut down the block devices on the
        primary node, but it will shut down the block devices on the
        secondary nodes, thus breaking the replication.
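        Example (instance name is illustrative; normally run on a stopped
        instance):

```shell
# gnt-instance deactivate-disks instance1.example.com
```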

   GROW-DISK
       grow-disk [ --no-wait-for-sync ] instance disk amount

       Grows  an instance’s disk. This is only possible for instances having a
       plain or drbd disk template.

        Note that this command only changes the block device size; it will
        not grow the actual filesystems, partitions, etc. that live on that
        disk. Usually, you will need to:

       1. use gnt-instance grow-disk

       2. reboot the instance (later, at a convenient time)

       3. use a filesystem resizer, such as ext2online(8) or xfs_growfs(8)  to
          resize the filesystem, or use fdisk(8) to change the partition table
          on the disk

        The disk argument is either sda or sdb. The amount argument is given
        either as a plain number (the amount, in mebibytes, by which to grow
        the disk) or with a suffix denoting the unit, as for the arguments of
        the create instance operation.

       Note  that  the disk grow operation might complete on one node but fail
       on the other; this will leave the instance with different-sized LVs  on
       the  two  nodes,  but  this will not create problems (except for unused
       space).

       If you do not want gnt-instance to wait for the new disk region  to  be
       synced, use the --no-wait-for-sync option.

       Example (increase sda for instance1 by 16GiB):

       # gnt-instance grow-disk instance1.example.com sda 16g

        Also note that disk shrinking is not supported; use gnt-backup
        export and then gnt-backup import to reduce the disk size of an
        instance.

   RECOVERY
   FAILOVER
       failover [ -f ] [ --ignore-consistency ] instance

        Failover will fail the instance over to its secondary node. This
        works only for instances having a remote raid disk layout.

       Normally the failover will check the consistency of  the  disks  before
       failing over the instance. If you are trying to migrate instances off a
       dead node, this will fail. Use the --ignore-consistency option for this
       purpose.  Note  that this option can be dangerous as errors in shutting
       down the instance will be ignored, resulting  in  possibly  having  the
       instance  running  on  two  machines  in parallel (on disconnected DRBD
       drives).

       Example:

       # gnt-instance failover instance1.example.com

   MIGRATE
       migrate [ -f ] --cleanup instance

       migrate [ -f ] [ --non-live ] instance

       Migrate will move the instance to its secondary node without  shutdown.
       It only works for instances having the drbd8 disk template type.

        The migration command needs a perfectly healthy instance, as it
        relies on the dual-master capability of drbd8, and the disks of the
        instance are not allowed to be degraded.

       The --non-live option will switch (for the hypervisors that support it)
       between a  "fully  live"  (i.e.  the  interruption  is  as  minimal  as
       possible)  migration and one in which the instance is frozen, its state
       saved and transported to the remote node, and then resumed there.  This
       all depends on the hypervisor support for two different methods. In any
       case, it is not an error to  pass  this  parameter  (it  will  just  be
       ignored if the hypervisor doesn’t support it).

       If the --cleanup option is passed, the operation changes from migration
       to attempting recovery from a failed previous migration. In this  mode,
        ganeti checks if the instance runs on the correct node (and updates
        its configuration if not) and ensures the instance’s disks are
        configured correctly. In this mode, the --non-live option is ignored.

       The option -f will skip the prompting for confirmation.

       Example (and expected output):

       # gnt-instance migrate instance1
       Migrate will happen to the instance instance1. Note that migration is
       **experimental** in this version. This might impact the instance if
       anything goes wrong. Continue?
       y/[n]/?: y
       * checking disk consistency between source and target
       * ensuring the target is in secondary mode
       * changing disks into dual-master mode
        - INFO: Waiting for instance instance1 to sync disks.
        - INFO: Instance instance1’s disks are in sync.
       * migrating instance to node2.example.com
       * changing the instance’s disks on source node to secondary
        - INFO: Waiting for instance instance1 to sync disks.
        - INFO: Instance instance1’s disks are in sync.
       * changing the instance’s disks to single-master
       #

   TAGS
   ADD-TAGS
       add-tags [ --from file ] instancename tag ...

       Add  tags  to  the  given instance. If any of the tags contains invalid
       characters, the entire operation will abort.

        If the --from option is given, the list of tags will be extended with
        the contents of that file (each line becomes a tag). In this case,
        there is no need to pass tags on the command line (if you do, both
        sources will be used). A file name of - will be interpreted as stdin.

   LIST-TAGS
       list-tags instancename

       List the tags of the given instance.

   REMOVE-TAGS
       remove-tags [ --from file ] instancename tag ...

        Remove tags from the given instance. If any of the tags do not exist
        on the instance, the entire operation will abort.

        If the --from option is given, the list of tags will be extended with
        the contents of that file (each line becomes a tag). In this case,
        there is no need to pass tags on the command line (if you do, both
        sources will be used). A file name of - will be interpreted as stdin.

REPORTING BUGS

       Report   bugs   to   http://code.google.com/p/ganeti/  or  contact  the
       developers using the ganeti mailing list <ganeti@googlegroups.com>.

SEE ALSO

       Ganeti  overview  and  specifications:  ganeti(7)  (general  overview),
       ganeti-os-interface(7) (guest OS definitions).

       Ganeti  commands:  gnt-cluster(8)  (cluster-wide commands), gnt-node(8)
       (node-related commands), gnt-instance(8) (instance commands), gnt-os(8)
       (guest  OS commands).  gnt-backup(8) (instance import/export commands).

       Ganeti  daemons:  ganeti-watcher(8)  (automatic  instance   restarter),
       ganeti-noded(8)  (node  daemon),  ganeti-master(8)  (the master startup
       script), ganeti-rapi(8) (remote API daemon).

COPYRIGHT

       Copyright (C) 2006, 2007, 2008 Google Inc.  Permission  is  granted  to
       copy,  distribute  and/or  modify  under  the  terms of the GNU General
       Public License as published by the  Free  Software  Foundation;  either
       version 2 of the License, or (at your option) any later version.

       On  Debian systems, the complete text of the GNU General Public License
       can be found in /usr/share/common-licenses/GPL.