Provided by: system-storage-manager_0.3-1_all

NAME

       ssm - System Storage Manager: a single tool to manage your storage

SYNOPSIS

       ssm [-h] [--version] [-v] [-f] [-b BACKEND] {check,resize,create,list,add,remove,snapshot}
       ...

       ssm create [-h] [-s SIZE] [-n NAME] [--fstype  FSTYPE]  [-r  LEVEL]  [-I  STRIPESIZE]  [-i
       STRIPES] [-p POOL] [device [device ...]] [mount]

       ssm list [-h] [{volumes,vol,dev,devices,pool,pools,fs,filesystems,snap,snapshots}]

       ssm remove [-h] [-a] [items [items ...]]

       ssm resize [-h] [-s SIZE] volume [device [device ...]]

       ssm check [-h] device [device ...]

       ssm snapshot [-h] [-s SIZE] [-d DEST | -n NAME] volume

       ssm add [-h] [-p POOL] device [device ...]

DESCRIPTION

       System Storage Manager provides an easy-to-use command-line interface to manage your
       storage using various technologies such as lvm, btrfs, encrypted volumes and more.

       In more sophisticated enterprise storage environments, management with Device Mapper (dm),
       Logical Volume Manager (LVM), or Multiple Devices (md) is becoming increasingly
       difficult.  With file systems added to the mix, the number of tools needed to configure
       and manage storage has grown so large that it is simply not user-friendly.  With so many
       options for a system administrator to consider, the opportunity for errors and problems
       is large.

       The  btrfs  administration  tools have shown us that storage management can be simplified,
       and we are working to bring that ease of use to Linux filesystems in general.

OPTIONS

       -h, --help
              show this help message and exit

       --version
              show program's version number and exit

       -v, --verbose
               Show additional information while executing.

       -f, --force
              Force execution in the case where ssm has some doubts or questions.

       -b BACKEND, --backend BACKEND
               Choose the backend to use. Currently you can choose from lvm and btrfs.

SYSTEM STORAGE MANAGER COMMANDS

   Introduction
        System Storage Manager has several commands that you can specify on the command line as
        the first argument to ssm. They all have a specific use and their own arguments, but
        global ssm arguments are propagated to all commands.

   Create command
       ssm create [-h] [-s SIZE] [-n NAME] [--fstype  FSTYPE]  [-r  LEVEL]  [-I  STRIPESIZE]  [-i
       STRIPES] [-p POOL] [device [device ...]] [mount]

        This command creates a new volume with the defined parameters. If a device is provided,
        it will be used to create the volume, hence it will be added into the pool prior to the
        volume creation (see the Add command section). More devices can be used to create a
        volume.

        If the device is already used in a different pool, then ssm will ask you whether you
        want to remove it from the original pool. If you decline, or the removal fails, then the
        volume creation fails if the SIZE was not provided. On the other hand, if the SIZE is
        provided and some devices can not be added to the pool, the volume creation might still
        succeed if there is enough space in the pool.

        A POOL name can be specified as well. If the pool exists, the new volume will be created
        from that pool (optionally adding the device into the pool). However, if the POOL does
        not exist, ssm will attempt to create a new pool with the provided device and then
        create a new volume from this pool. If the --backend argument is omitted, the default
        ssm backend will be used. The default backend is lvm.

        ssm also supports creating a RAID configuration; however, some back-ends might not
        support all the levels, or might not support RAID at all. In that case, volume creation
        will fail.

        If a mount point is provided, ssm will attempt to mount the volume after it is created.
        However, this will fail if a mountable file system is not present on the volume.
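
        For example, the following creates a 10GB ext4 volume named media in the pool my_pool,
        adding two devices to the pool first (the device, pool and volume names here are
        illustrative):

        # ssm create -s 10G -n media --fstype ext4 -p my_pool /dev/sdc /dev/sdd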

       -h, --help
              show this help message and exit

        -s SIZE, --size SIZE
               Gives the size to allocate for the new logical volume. A size suffix K|k, M|m,
               G|g, T|t, P|p, E|e can be used to define 'power of two' units. If no unit is
               provided, it defaults to kilobytes. This is optional; if not given, the maximum
               possible size will be used.

        -n NAME, --name NAME
               The name for the new logical volume. This is optional; if omitted, the name will
               be generated by the corresponding backend.

        --fstype FSTYPE
               Gives the file system type to create on the new logical volume. Supported file
               systems are ext3, ext4, xfs and btrfs. This is optional; if not given, no file
               system will be created.

        -r LEVEL, --raid LEVEL
               Specify a RAID level you want to use when creating a new volume. Note that some
               backends might not implement all supported RAID levels. This is optional; if not
               specified, a linear volume will be created. You can choose from the following
               list of supported levels: 0, 1 and 10.

        -I STRIPESIZE, --stripesize STRIPESIZE
               Gives the number of kilobytes for the granularity of stripes. This is optional;
               if not given, the backend default will be used. Note that you have to specify a
               RAID level as well.

        -i STRIPES, --stripes STRIPES
               Gives the number of stripes. This is equal to the number of physical volumes to
               scatter the logical volume over. This is optional; if stripesize is set and
               multiple devices are provided, the number of stripes is determined automatically
               from the number of devices. Note that you have to specify a RAID level as well.

       -p POOL, --pool POOL
              Pool to use to create the new volume.

   List command
       ssm list [-h] [{volumes,vol,dev,devices,pool,pools,fs,filesystems,snap,snapshots}]

        Lists information about all detected devices, pools, volumes and snapshots found in the
        system. The list command can be used either alone to list all of the information, or
        you can request a specific section only.

        The following sections can be specified:

       {volumes | vol}
              List information about all volumes found in the system.

        {devices | dev}
               List information about all devices found in the system. Some devices are
               intentionally hidden, such as cdrom or DM/MD devices, since those are actually
               listed as volumes.

       {pools | pool}
              List information about all pools found in the system.

       {filesystems | fs}
              List information about all volumes containing filesystems found in the system.

        {snapshots | snap}
               List information about all snapshots found in the system. Note that some
               back-ends do not support snapshotting and some can not distinguish between a
               snapshot and a regular volume. In that case, ssm will try to recognize the
               volume name in order to identify a snapshot, but if the ssm regular expression
               does not match the snapshot pattern, the snapshot will not be recognized.

       -h, --help
              show this help message and exit

   Remove command
       ssm remove [-h] [-a] [items [items ...]]

        This command removes an item from the system. Multiple items can be specified. If an
        item can not be removed for some reason, it will be skipped.

        An item can represent:

        device Remove a device from the pool. Note that this can not be done in some cases
               where the device is used by the pool. You can use the -f argument to force
               removal. If the device does not belong to any pool, it will be skipped.

       pool   Remove  the  pool  from  the system. This will also remove all volumes created from
              that pool.

        volume Remove the volume from the system. Note that this will fail if the volume is
               mounted, and it can not be forced with -f.

       -h, --help
              show this help message and exit

       -a, --all
              Remove all pools in the system.
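
               For example, the following removes every pool in the system, forcing past any
               questions ssm would otherwise ask (use with caution):

               # ssm -f remove --all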

   Resize command
       ssm resize [-h] [-s SIZE] volume [device [device ...]]

        Change the size of the volume and file system. If there is no file system, only the
        volume itself will be resized. You can specify a device to add into the volume pool
        prior to the resize. Note that the device will only be added into the pool if the
        volume size is going to grow.

        If the device is already used in a different pool, then ssm will ask you whether you
        want to remove it from the original pool.

        In some cases the file system has to be mounted in order to resize. This will be
        handled by ssm automatically by mounting the volume temporarily.
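
        For example, the following sets the volume /dev/lvm_pool/lvol001 to an absolute size
        of 200GB (the volume name is illustrative):

        # ssm resize -s 200G /dev/lvm_pool/lvol001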

       -h, --help
              show this help message and exit

       -s SIZE, --size SIZE
              New size of the volume. With the + or - sign the value is added  to  or  subtracted
              from the actual size of the volume and without it, the value will be set as the new
              volume size. A size suffix of [k|K] for kilobytes, [m|M] for megabytes,  [g|G]  for
              gigabytes,  [t|T]  for  terabytes or [p|P] for petabytes is optional. If no unit is
              provided the default is kilobytes.

   Check command
       ssm check [-h] device [device ...]

        Check the file system consistency on the volume. You can specify multiple volumes to
        check. If there is no file system on the volume, the volume will be skipped.

        In some cases the file system has to be mounted in order to check it. This will be
        handled by ssm automatically by mounting the volume temporarily.
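
        For example, the following checks the file systems on two volumes (the volume names
        are illustrative):

        # ssm check /dev/lvm_pool/lvol001 /dev/lvm_pool/lvol002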

       -h, --help
              show this help message and exit

   Snapshot command
       ssm snapshot [-h] [-s SIZE] [-d DEST | -n NAME] volume

        Take a snapshot of an existing volume. This operation will fail if the back-end to
        which the volume belongs does not support snapshotting. Note that you can not specify
        both NAME and DEST since those options are mutually exclusive.

        In some cases the file system has to be mounted in order to take a snapshot of the
        volume. This will be handled by ssm automatically by mounting the volume temporarily.
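
        For example, the following takes a snapshot of an lvm volume with an explicit snapshot
        name (the volume and snapshot names are illustrative):

        # ssm snapshot -n backup_snap /dev/lvm_pool/lvol001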

       -h, --help
              show this help message and exit

        -s SIZE, --size SIZE
               Gives the size to allocate for the new snapshot volume. A size suffix K|k, M|m,
               G|g, T|t, P|p, E|e can be used to define 'power of two' units. If no unit is
               provided, it defaults to kilobytes. This is optional; if not given, the size
               will be determined automatically.

        -d DEST, --dest DEST
               Destination of the snapshot, specified as an absolute path, to be used for the
               new snapshot. This is optional; if not specified, the default backend policy
               will be used.

        -n NAME, --name NAME
               Name of the new snapshot. This is optional; if not specified, the default
               backend policy will be used.

   Add command
       ssm add [-h] [-p POOL] device [device ...]

        This command adds a device into the pool. The device will not be added if it is already
        part of a different pool. When multiple devices are provided, all of them are added
        into the pool. If one of the devices can not be added into the pool for some reason, it
        will be skipped. If no pool is specified, the default pool will be chosen. If the pool
        does not exist, it will be created using the provided devices.
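
        For example, the following adds one device into the default pool and two devices into
        the pool my_pool (the device and pool names are illustrative):

        # ssm add /dev/sdc
        # ssm add -p my_pool /dev/sdd /dev/sde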

       -h, --help
              show this help message and exit

        -p POOL, --pool POOL
               Pool to add the device into. If not specified, the default pool is used.

BACK-ENDS

   Introduction
        Ssm aims to create a unified user interface for various technologies like Device Mapper
        (dm), the btrfs file system, Multiple Devices (md) and possibly more. In order to do
        so, we have a core abstraction layer in ssmlib/main.py. This abstraction layer should
        ideally know nothing about the underlying technology, but rather comply with the
        device, pool and volume abstractions.

        Various backends can be registered in ssmlib/main.py in order to handle a specific
        storage technology, implementing methods like create, snapshot, or remove for volumes
        and pools. The core will then call these methods to manage the storage without needing
        to know what lies underneath. There are already several backends registered in ssm.

   Btrfs backend
        Btrfs is a file system with many advanced features, including volume management. This
        is the reason why btrfs is handled differently than other conventional file systems in
        ssm: it is used as a volume management back-end.

        Pools, volumes and snapshots can be created with the btrfs backend, and here is what
        that means from the btrfs point of view:

        pool   A pool is actually a btrfs file system itself, because it can be extended by
               adding more devices, or shrunk by removing devices from it. Subvolumes and
               snapshots can also be created. When a new btrfs pool is to be created, ssm
               simply creates a btrfs file system, which means that every new btrfs pool has
               one volume of the same name as the pool itself, which can not be removed without
               removing the entire pool. The default btrfs pool name is btrfs_pool.

               When creating a new btrfs pool, the name of the pool is used as the file system
               label. If there is an already existing btrfs file system in the system without a
               label, a btrfs pool name will be generated for internal use in the following
               format: "btrfs_{device base name}".

               A btrfs pool is created when the create or add command is used with devices
               specified and a non-existing pool name.

        volume A volume in the btrfs back-end is actually just a btrfs subvolume, with the
               exception of the first volume created on btrfs pool creation, which is the file
               system itself. Subvolumes can only be created on a btrfs file system when it is
               mounted, but the user does not have to worry about that, since ssm will
               automatically mount the file system temporarily in order to create a new
               subvolume.

               The volume name is used as the subvolume path in the btrfs file system, and
               every object in this path must exist in order to create a volume. The volume
               name for internal tracking and for presenting to the user is generated in the
               format "{pool_name}:{volume name}", but volumes can also be referenced by their
               mount point.

               Btrfs volumes are only shown in the list output when the file system is mounted,
               with the exception of the main btrfs volume, the file system itself.

               A new btrfs volume can be created with the create command.
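
               For example, the following creates a new subvolume in the pool btrfs_pool; it
               will then be listed as btrfs_pool:my_volume (the names are illustrative):

               # ssm create -p btrfs_pool -n my_volume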

        snapshot
               The btrfs file system supports subvolume snapshotting, so you can take a
               snapshot of any btrfs volume in the system with ssm. However, btrfs does not
               distinguish between subvolumes and snapshots, because a snapshot is actually
               just a subvolume with some blocks shared with a different subvolume. This means
               that ssm is not able to recognize a btrfs snapshot directly; instead, it tries
               to recognize a special name format of the btrfs volume. However, if a NAME which
               does not match the special pattern is specified when creating the snapshot, the
               snapshot will not be recognized by ssm and will be listed as a regular btrfs
               volume.

               A new btrfs snapshot can be created with the snapshot command.

       device Btrfs does not require any special device to be created on.

   Lvm backend
        Pools, volumes and snapshots can be created with lvm, and they pretty much match the
        lvm abstractions.

        pool   An lvm pool is simply a volume group in lvm terminology. It groups devices, and
               new logical volumes can be created out of the lvm pool. The default lvm pool
               name is lvm_pool.

               An lvm pool is created when the create or add command is used with devices
               specified and a non-existing pool name.

        volume An lvm volume is simply a logical volume in lvm terminology. An lvm volume can
               be created with the create command.

        snapshot
               Lvm volumes can be snapshotted as well. When a snapshot is created from an lvm
               volume, a new snapshot volume is created, which can be handled like any other
               lvm volume. Unlike btrfs, lvm is able to distinguish a snapshot from a regular
               volume, so there is no need for the snapshot name to match a special pattern.

        device Lvm requires a physical volume to be created on the device, but with ssm this is
               transparent to the user.

   Crypt backend
        The crypt backend in ssm is currently limited to gathering information about encrypted
        volumes in the system. You can not create or manage encrypted volumes or pools yet, but
        this will be extended in the future.

   MD backend
        The MD backend in ssm is currently limited to gathering information about MD volumes in
        the system. You can not create or manage MD volumes or pools yet, but this will be
        extended in the future.

EXAMPLES

       List system storage information:

       # ssm list

       List all pools in the system:

       # ssm list pools

        Create a new 100GB volume with the default lvm backend using /dev/sda and /dev/sdb,
        with an xfs file system:

       # ssm create --size 100G --fs xfs /dev/sda /dev/sdb

        Create a new volume with the btrfs backend using /dev/sda and /dev/sdb, and let the
        volume be RAID 1:

       # ssm -b btrfs create --raid 1 /dev/sda /dev/sdb

        Using the lvm backend, create a RAID 0 volume with devices /dev/sda and /dev/sdb, with
        a 128kB stripe size and an ext4 file system, and mount it on /home:

        # ssm create --raid 0 --stripesize 128k --fs ext4 /dev/sda /dev/sdb /home

       Extend btrfs volume btrfs_pool by 500GB and use /dev/sdc and /dev/sde to cover the resize:

       # ssm resize -s +500G btrfs_pool /dev/sdc /dev/sde

       Shrink volume /dev/lvm_pool/lvol001 by 1TB:

       # ssm resize -s-1t /dev/lvm_pool/lvol001

        Remove the /dev/sda device from its pool, remove the btrfs_pool pool, and also remove
        the volume /dev/lvm_pool/lvol001:

       # ssm remove /dev/sda btrfs_pool /dev/lvm_pool/lvol001

       Take a snapshot of the btrfs volume btrfs_pool:my_volume:

       # ssm snapshot btrfs_pool:my_volume

       Add devices /dev/sda and /dev/sdb into the btrfs_pool pool:

       # ssm add -p btrfs_pool /dev/sda /dev/sdb

ENVIRONMENT VARIABLES

       SSM_DEFAULT_BACKEND
               Specify which backend will be used by default. This can be overridden by
               specifying the -b or --backend argument. Currently only lvm and btrfs are
               supported.

       SSM_LVM_DEFAULT_POOL
              Name of the default lvm pool to be used if -p or --pool argument is omitted.

       SSM_BTRFS_DEFAULT_POOL
              Name of the default btrfs pool to be used if -p or --pool argument is omitted.
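
        For example, to make btrfs the default backend and set a default lvm pool name for the
        current shell session (the pool name my_pool is illustrative):

        # export SSM_DEFAULT_BACKEND=btrfs
        # export SSM_LVM_DEFAULT_POOL=my_pool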

LICENCE

       (C)2011 Red Hat, Inc., Lukas Czerner <lczerner@redhat.com>

       This program is free software: you can redistribute it and/or modify it under the terms of
       the  GNU  General  Public  License  as  published  by the Free Software Foundation, either
       version 2 of the License, or (at your option) any later version.

       This program is distributed in the hope that it will be useful, but WITHOUT ANY  WARRANTY;
       without  even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
       See the GNU General Public License for more details.

       You should have received a copy of the GNU General Public License along with this program.
       If not, see <http://www.gnu.org/licenses/>.

REQUIREMENTS

        Python 2.6 or higher is required to run this tool. System Storage Manager can only be
        run as root since most of the commands require root privileges.

        There are other requirements listed below, but note that you do not necessarily need
        all dependencies for all backends; however, if some of the tools required by a backend
        are missing, that backend will not work.

   Python modules
       · os

       · re

       · sys

       · stat

       · argparse

       · datetime

       · threading

       · subprocess

   System tools
       · tune2fs

       · fsck.SUPPORTED_FS

       · resize2fs

       · xfs_db

       · xfs_check

       · xfs_growfs

       · mkfs.SUPPORTED_FS

       · which

       · mount

       · blkid

       · wipefs

   Lvm backend
       · lvm2 binaries

   Btrfs backend
       · btrfs progs

   Crypt backend
       · dmsetup

       · cryptsetup

AVAILABILITY

        System Storage Manager is available from http://storagemanager.sourceforge.net. You can
        subscribe to storagemanager-devel@lists.sourceforge.net to follow the current
        development.

AUTHOR

       Lukáš Czerner <lczerner@redhat.com>

COPYRIGHT

       2012, Red Hat, Inc., Lukáš Czerner <lczerner@redhat.com>