Provided by: zfsutils-linux_0.6.5.6-0ubuntu30_amd64

NAME

       zpool - configures ZFS storage pools

SYNOPSIS

       zpool [-?]

       zpool add [-fgLnP] [-o property=value] pool vdev ...

       zpool attach [-f] [-o property=value] pool device new_device

       zpool clear pool [device]

       zpool create [-fnd] [-o property=value] ... [-O file-system-property=value]
            ... [-m mountpoint] [-R root] [-t tname] pool vdev ...

       zpool destroy [-f] pool

       zpool detach pool device

       zpool events [-vHfc] [pool] ...

       zpool export [-a] [-f] pool ...

       zpool get [-pH] "all" | property[,...] pool ...

       zpool history [-il] [pool] ...

       zpool import [-d dir | -c cachefile] [-D]

       zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile]
            [-D] [-f] [-m] [-N] [-R root] [-F [-n] [-X] [-T]] -a

       zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile]
            [-D] [-f] [-m] [-R root] [-F [-n] [-X] [-T]] [-t] pool | id [newpool]

       zpool iostat [-T d | u] [-gLPvy] [pool] ... [interval [count]]

       zpool labelclear [-f] device

       zpool list [-T d | u] [-HgLPv] [-o property[,...]] [pool] ...
            [interval [count]]

       zpool offline [-t] pool device ...

       zpool online [-e] pool device ...

       zpool reguid pool

       zpool reopen pool

       zpool remove pool device ...

       zpool replace [-f] [-o property=value] pool device [new_device]

       zpool scrub [-s] pool ...

       zpool set property=value pool

       zpool split [-gLnP] [-R altroot] [-o property=value] pool newpool [device ...]

       zpool status [-gLPvxD] [-T d | u] [pool] ... [interval [count]]

       zpool upgrade

       zpool upgrade -v

       zpool upgrade [-V version] -a | pool ...

DESCRIPTION

       The  zpool command configures ZFS storage pools. A storage pool is a collection of devices
       that provides physical storage and data replication for ZFS datasets.

       All datasets within a storage pool share the same space. See  zfs(8)  for  information  on
       managing datasets.

   Virtual Devices (vdevs)
       A  "virtual  device"  describes  a  single  device  or  a  collection of devices organized
       according to certain performance and fault characteristics. The following virtual  devices
       are supported:

       disk      A block device, typically located under /dev. ZFS can use individual partitions,
                 though the recommended mode of operation is to use whole disks. A  disk  can  be
                 specified by a full path, or it can be a shorthand name (the relative portion of
                 the path under "/dev"). For example, "sda" is equivalent to "/dev/sda". A  whole
                 disk  can be specified by omitting the partition designation. When given a whole
                 disk, ZFS automatically labels the disk, if necessary.

       file      A regular file. The use of files as a backing store is strongly discouraged.  It
                 is  designed  primarily  for  experimental purposes, as the fault tolerance of a
                 file is only as good as the file system of which it is a part. A  file  must  be
                 specified by a full path.

       mirror    A  mirror  of  two  or  more devices. Data is replicated in an identical fashion
                 across all components of a mirror. A mirror with N disks of size X  can  hold  X
                 bytes  and  can  withstand  (N-1)  devices  failing  before  data  integrity  is
                 compromised.

       raidz     A variation on  RAID-5  that  allows  for  better  distribution  of  parity  and
       raidz1    eliminates the "RAID-5 write hole" (in which data and parity become inconsistent
       raidz2    after a power loss). Data and parity are striped across all disks within a raidz
       raidz3    group.

                  A raidz group can have single-, double-, or triple-parity, meaning that the
                 raidz group can sustain one,  two,  or  three  failures,  respectively,  without
                 losing any data. The raidz1 vdev type specifies a single-parity raidz group; the
                 raidz2 vdev type specifies a double-parity raidz group; and the raidz3 vdev type
                 specifies  a  triple-parity  raidz  group.  The  raidz vdev type is an alias for
                 raidz1.

                 A raidz group with N disks of size X with P parity disks can hold  approximately
                 (N-P)*X  bytes  and  can  withstand P device(s) failing before data integrity is
                 compromised. The minimum number of devices in a raidz group is one more than the
                 number  of  parity  disks.  The  recommended  number  is between 3 and 9 to help
                 increase performance.
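
                  For example, the following creates a pool with a single double-parity
                  raidz group of six disks (pool and device names are illustrative):

                    # zpool create tank raidz2 sda sdb sdc sdd sde sdf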

       spare     A special pseudo-vdev which keeps track of available hot spares for a pool.  For
                 more information, see the "Hot Spares" section.

       log       A separate intent log device. If more than one log device is specified, then
                 writes are load-balanced between devices. Log devices can be mirrored.  However,
                 raidz vdev types are not supported for the intent log. For more information, see
                 the "Intent Log" section.

       cache     A device used to cache storage pool data. A cache device cannot be configured as
                 a mirror or raidz group. For more information, see the "Cache Devices" section.

       Virtual  devices  cannot  be  nested, so a mirror or raidz virtual device can only contain
       files or disks. Mirrors of mirrors (or other combinations) are not allowed.

       A pool can have any number of virtual devices at the top of the  configuration  (known  as
       "root  vdevs").  Data  is  dynamically distributed across all top-level devices to balance
       data among devices. As new virtual devices are added, ZFS automatically places data on the
       newly available devices.

       Virtual  devices are specified one at a time on the command line, separated by whitespace.
       The keywords "mirror" and "raidz" are used to distinguish where a group ends  and  another
       begins. For example, the following creates two root vdevs, each a mirror of two disks:

         # zpool create mypool mirror sda sdb mirror sdc sdd

   Device Failure and Recovery
       ZFS supports a rich set of mechanisms for handling device failure and data corruption. All
       metadata and data is checksummed, and ZFS automatically repairs bad data from a good  copy
       when corruption is detected.

       In  order  to  take  advantage  of  these  features,  a pool must make use of some form of
       redundancy, using either mirrored or raidz groups. While ZFS supports running  in  a  non-
       redundant  configuration,  where each root vdev is simply a disk or file, this is strongly
       discouraged. A single case of  bit  corruption  can  render  some  or  all  of  your  data
       unavailable.

       A  pool's health status is described by one of three states: online, degraded, or faulted.
       An online pool has all devices operating normally. A degraded pool is one in which one  or
       more   devices  have  failed,  but  the  data  is  still  available  due  to  a  redundant
       configuration. A faulted pool has corrupted metadata, or one or more faulted devices,  and
       insufficient replicas to continue functioning.

       The health of the top-level vdev, such as a mirror or raidz device, is potentially impacted
       by the state of its associated vdevs, or component devices. A top-level vdev or  component
       device is in one of the following states:

       DEGRADED    One  or  more  top-level  vdevs  is  in the degraded state because one or more
                   component  devices  are  offline.  Sufficient  replicas  exist   to   continue
                   functioning.

                   One  or  more  component  devices  is  in  the  degraded or faulted state, but
                   sufficient replicas exist to continue functioning. The  underlying  conditions
                   are as follows:

                       o      The  number  of  checksum  errors exceeds acceptable levels and the
                              device is degraded as an indication that something  may  be  wrong.
                              ZFS continues to use the device as necessary.

                       o      The  number  of  I/O  errors  exceeds acceptable levels. The device
                              could not be marked  as  faulted  because  there  are  insufficient
                              replicas to continue functioning.

       FAULTED     One  or  more  top-level  vdevs  is  in  the faulted state because one or more
                   component  devices  are  offline.  Insufficient  replicas  exist  to  continue
                   functioning.

                   One  or  more  component  devices  is  in  the faulted state, and insufficient
                   replicas exist to continue  functioning.  The  underlying  conditions  are  as
                   follows:

                       o      The device could be opened, but the contents did not match expected
                              values.

                       o      The number of I/O errors exceeds acceptable levels and  the  device
                              is faulted to prevent further use of the device.

       OFFLINE     The device was explicitly taken offline by the "zpool offline" command.

       ONLINE      The device is online and functioning.

       REMOVED     The device was physically removed while the system was running. Device removal
                   detection is hardware-dependent and may not be supported on all platforms.

       UNAVAIL     The device could not be opened. If a  pool  is  imported  when  a  device  was
                   unavailable, then the device will be identified by a unique identifier instead
                   of its path since the path was never correct in the first place.

       If a device is removed and later re-attached to the system, ZFS attempts to put the device
       online  automatically.  Device  attach  detection  is  hardware-dependent and might not be
       supported on all platforms.

   Hot Spares
       ZFS allows devices to be associated with pools as "hot  spares".  These  devices  are  not
       actively  used  in the pool, but when an active device fails, it is automatically replaced
       by a hot spare. To create a pool with hot spares, specify a "spare" vdev with  any  number
       of devices. For example,

         # zpool create pool mirror sda sdb spare sdc sdd

       Spares  can be shared across multiple pools, and can be added with the "zpool add" command
       and removed with the "zpool remove" command. Once a spare replacement is initiated, a  new
       "spare" vdev is created within the configuration that will remain there until the original
       device is replaced. At this point, the hot spare becomes available again.
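
        For example, the following adds an illustrative disk sde to an existing pool as a
        hot spare and later removes it again:

          # zpool add pool spare sde
          # zpool remove pool sde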

        If a pool has a shared spare that is currently being used, the pool cannot be exported
       since other pools may use this shared spare, which may lead to potential data corruption.

       An  in-progress  spare  replacement  can  be  cancelled by detaching the hot spare. If the
       original faulted device is  detached,  then  the  hot  spare  assumes  its  place  in  the
       configuration, and is removed from the spare list of all active pools.

       Spares cannot replace log devices.

   Intent Log
       The  ZFS  Intent  Log (ZIL) satisfies POSIX requirements for synchronous transactions. For
       instance, databases often require their transactions to be on stable storage devices  when
       returning  from  a  system call. NFS and other applications can also use fsync() to ensure
       data stability. By default, the intent log is allocated from blocks within the main  pool.
       However,  it might be possible to get better performance using separate intent log devices
       such as NVRAM or a dedicated disk. For example:

         # zpool create pool sda sdb log sdc

       Multiple log devices can also be specified, and they can be  mirrored.  See  the  EXAMPLES
       section for an example of mirroring multiple log devices.

       Log  devices can be added, replaced, attached, detached, and imported and exported as part
       of the larger pool. Mirrored log devices can be removed by specifying the top-level mirror
       for the log.
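
        For example, the following adds a mirrored log to an existing pool, and later
        removes it by naming the log's top-level mirror as reported by "zpool status"
        (the device names, and the mirror name mirror-1, are illustrative):

          # zpool add pool log mirror sdc sdd
          # zpool remove pool mirror-1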

   Cache Devices
       Devices  can  be  added  to  a  storage  pool as "cache devices." These devices provide an
       additional layer of caching between main memory and disk. For read-heavy workloads,  where
       the  working  set  size is much larger than what can be cached in main memory, using cache
        devices allows much more of this working set to be served from low latency media. Using
       cache  devices  provides the greatest performance improvement for random read-workloads of
       mostly static content.

       To create a pool with cache devices, specify a "cache" vdev with any  number  of  devices.
       For example:

         # zpool create pool sda sdb cache sdc sdd

       Cache  devices  cannot  be  mirrored  or part of a raidz configuration. If a read error is
       encountered on a cache device, that read I/O is reissued  to  the  original  storage  pool
       device, which might be part of a mirrored or raidz configuration.

       The  content of the cache devices is considered volatile, as is the case with other system
       caches.
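
        Cache devices can likewise be added to or removed from an existing pool (sde is an
        illustrative device name):

          # zpool add pool cache sde
          # zpool remove pool sde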

   Properties
       Each pool has several  properties  associated  with  it.  Some  properties  are  read-only
       statistics  while  others  are  configurable  and  change  the  behavior  of the pool. The
       following are read-only properties:

       available           Amount of storage available within the pool. This property can also be
                           referred to by its shortened column name, "avail".

       capacity            Percentage  of  pool space used. This property can also be referred to
                           by its shortened column name, "cap".

       expandsize          Amount of uninitialized space within the pool or device  that  can  be
                           used  to increase the total capacity of the pool.  Uninitialized space
                           consists of any space on an  EFI  labeled  vdev  which  has  not  been
                           brought  online  (i.e. zpool online -e).  This space occurs when a LUN
                           is dynamically expanded.

       fragmentation       The amount of fragmentation in the pool.

       free                The amount of free space available in the pool.

       freeing             After a file system or snapshot is destroyed, the space it  was  using
                           is returned to the pool asynchronously. freeing is the amount of space
                           remaining to be reclaimed. Over time freeing will decrease while  free
                           increases.

       health              The  current  health  of the pool. Health can be "ONLINE", "DEGRADED",
                           "FAULTED", " OFFLINE", "REMOVED", or "UNAVAIL".

       guid                A unique identifier for the pool.

       size                Total size of the storage pool.

       unsupported@feature_guid
                           Information about unsupported features that are enabled on  the  pool.
                           See zpool-features(5) for details.

       used                Amount of storage space used within the pool.

       The space usage properties report actual physical space available to the storage pool. The
       physical space can be different from the total amount of space that any contained datasets
       can  actually  use.  The  amount  of  space  used  in a raidz configuration depends on the
       characteristics of the data being written.  In  addition,  ZFS  reserves  some  space  for
       internal accounting that the zfs(8) command takes into account, but the zpool command does
       not. For non-full pools of a reasonable size, these effects should be invisible. For small
       pools,  or  pools  that are close to being completely full, these discrepancies may become
       more noticeable.

       The following property can be set at creation time:

       ashift

           Pool sector size exponent, to the power of 2 (internally referred to as "ashift"). I/O
           operations will be aligned to the specified size boundaries. Additionally, the minimum
           (disk) write size will be set to the specified size, so this represents  a  space  vs.
           performance  trade-off. The typical case for setting this property is when performance
           is important and the underlying disks use 4KiB sectors but report 512B sectors to  the
           OS (for compatibility reasons); in that case, set ashift=12 (which is 1<<12 = 4096).

           For  optimal  performance, the pool sector size should be greater than or equal to the
           sector size of the underlying disks. Since the property cannot be changed  after  pool
           creation,  if  in  a given pool, you ever want to use drives that report 4KiB sectors,
           you must set ashift=12 at pool creation time.

            Keep in mind that the ashift is vdev specific and is not a pool global.  This means
           that when adding new vdevs to an existing pool you may need to specify the ashift.
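
            For example, to create a pool aligned to 4KiB sectors and preserve that
            alignment when later adding a vdev (pool and device names are illustrative):

              # zpool create -o ashift=12 tank mirror sda sdb
              # zpool add -o ashift=12 tank mirror sdc sdd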

       The following property can be set at creation time and import time:

       altroot

           Alternate  root  directory.  If  set,  this directory is prepended to any mount points
           within the pool. This can be used when examining  an  unknown  pool  where  the  mount
           points cannot be trusted, or in an alternate boot environment, where the typical paths
           are not valid. altroot is not a persistent property. It is valid only while the system
           is up. Setting altroot defaults to using cachefile=none, though this may be overridden
           using an explicit setting.
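
            For example, the following imports a pool with its mount points re-rooted
            under /mnt, using the -R shorthand of the "zpool import" subcommand (the pool
            name is illustrative):

              # zpool import -R /mnt tank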

       The following property can only be set at import time:

       readonly=on | off

           If set to on, the pool will be imported in read-only mode:  Synchronous  data  in  the
           intent  log  will  not  be  accessible,  properties of the pool can not be changed and
           datasets of the pool can only be mounted read-only.   The  readonly  property  of  its
           datasets will be implicitly set to on.

           It can also be specified by its column name of rdonly.

            To write to a read-only pool, an export and re-import of the pool are required.
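
            For example (the pool name is illustrative):

              # zpool import -o readonly=on tank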

       The  following  properties  can be set at creation time and import time, and later changed
       with the zpool set command:

       autoexpand=on | off

           Controls automatic pool expansion when the underlying LUN is grown. If set to on,  the
           pool  will  be  resized according to the size of the expanded device. If the device is
           part of a mirror or raidz then all devices within  that  mirror/raidz  group  must  be
           expanded  before  the new space is made available to the pool. The default behavior is
           off. This property can also be referred to by its shortened column name, expand.
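
            For example, to let an existing pool grow automatically when its LUNs are
            expanded (the pool name is illustrative):

              # zpool set autoexpand=on tank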

       autoreplace=on | off

           Controls automatic device replacement. If set to "off",  device  replacement  must  be
           initiated  by  the administrator by using the "zpool replace" command. If set to "on",
           any new device, found in the same  physical  location  as  a  device  that  previously
           belonged to the pool, is automatically formatted and replaced. The default behavior is
           "off". This property can also be referred to by its shortened column name, "replace".

       bootfs=pool/dataset

           Identifies the default bootable dataset for the root pool. This property  is  expected
           to be set mainly by the installation and upgrade programs.

       cachefile=path | none

           Controls the location of where the pool configuration is cached. Discovering all pools
           on system startup requires a cached copy of the configuration data that is  stored  on
           the  root  file  system.  All  pools in this cache are automatically imported when the
           system boots. Some environments, such as install and clustering, need  to  cache  this
           information  in  a  different  location  so that pools are not automatically imported.
           Setting this property caches the pool configuration in a different location  that  can
           later  be  imported  with  "zpool  import  -c". Setting it to the special value "none"
           creates a temporary pool that is never cached, and the special value '' (empty string)
           uses the default location.

           Multiple  pools  can  share  the  same  cache  file.  Because  the kernel destroys and
           recreates this file when pools are added  and  removed,  care  should  be  taken  when
           attempting  to  access  this file. When the last pool using a cachefile is exported or
           destroyed, the file is removed.

       comment=text

           A text string consisting of printable ASCII characters that will be stored  such  that
           it  is  available  even  if  the  pool  becomes faulted.  An administrator can provide
           additional information about a pool using this property.

       dedupditto=number

           Threshold for the number  of  block  ditto  copies.  If  the  reference  count  for  a
           deduplicated  block  increases  above  this  number, a new ditto copy of this block is
           automatically stored. The default setting is 0 which causes  no  ditto  copies  to  be
                            created for deduplicated blocks.  The minimum legal nonzero setting is 100.

       delegation=on | off

           Controls  whether  a  non-privileged  user  is  granted  access  based  on the dataset
           permissions defined on the dataset. See zfs(8) for more information on  ZFS  delegated
           administration.

       failmode=wait | continue | panic

           Controls the system behavior in the event of catastrophic pool failure. This condition
           is typically a result of a loss of connectivity to the underlying storage device(s) or
           a  failure of all devices within the pool. The behavior of such an event is determined
           as follows:

           wait        Blocks all I/O access until the device connectivity is recovered  and  the
                       errors are cleared. This is the default behavior.

           continue    Returns  EIO  to any new write I/O requests but allows reads to any of the
                       remaining healthy  devices.  Any  write  requests  that  have  yet  to  be
                       committed to disk would be blocked.

           panic       Prints out a message to the console and generates a system crash dump.
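
            For example, to make a pool return errors rather than block when the pool
            fails (the pool name is illustrative):

              # zpool set failmode=continue tank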

       feature@feature_name=enabled
           The  value of this property is the current state of feature_name. The only valid value
           when setting this property is enabled which moves feature_name to the  enabled  state.
           See zpool-features(5) for details on feature states.

       listsnaps=on | off

           Controls  whether information about snapshots associated with this pool is output when
           "zfs list" is run without the -t option. The default value is "off".

       version=version

           The current on-disk version of the pool. This can be increased, but  never  decreased.
           The  preferred  method  of  updating pools is with the "zpool upgrade" command, though
           this  property  can  be  used  when  a  specific  version  is  needed  for   backwards
           compatibility.  Once  feature flags are enabled on a pool this property will no longer
           have a value.

   Subcommands
       All subcommands that modify state are logged persistently to the pool  in  their  original
       form.

       The  zpool  command provides subcommands to create and destroy storage pools, add capacity
       to storage  pools,  and  provide  information  about  the  storage  pools.  The  following
       subcommands are supported:

       zpool -?

           Displays a help message.

       zpool add [-fgLnP] [-o property=value] pool vdev ...

           Adds  the  specified  virtual  devices  to  the  given pool. The vdev specification is
           described in the "Virtual Devices" section. The behavior of the  -f  option,  and  the
           device checks performed are described in the "zpool create" subcommand.

           -f    Forces  use  of  vdevs,  even  if  they  appear  in use or specify a conflicting
                 replication level. Not all devices can be overridden in this manner.

           -g    Display vdev GUIDs instead of the normal device names. These GUIDs can  be  used
                 in place of device names for the zpool detach/offline/remove/replace commands.

           -L    Display  real  paths for vdevs resolving all symbolic links. This can be used to
                 look up the current block device name regardless of the /dev/disk/ path used  to
                 open it.

           -n    Displays the configuration that would be used without actually adding the vdevs.
                 The actual pool creation can still fail due to insufficient privileges or device
                 sharing.

           -P    Display  full  paths  for  vdevs instead of only the last component of the path.
                 This can be used in conjunction with the -L flag.

           -o property=value

               Sets the given pool properties. See the "Properties" section for a list  of  valid
               properties  that  can be set. The only property supported at the moment is ashift.
               Do note that some properties (among them ashift) are not inherited from a previous
               vdev. They are vdev specific, not pool specific.

           Do  not add a disk that is currently configured as a quorum device to a zpool. After a
           disk is in the pool, that disk can then be configured as a quorum device.

       zpool attach [-f] [-o property=value] pool device new_device

           Attaches new_device to an existing zpool device. The existing device cannot be part of
           a  raidz  configuration.  If device is not currently part of a mirrored configuration,
           device automatically transforms into a two-way mirror of  device  and  new_device.  If
           device  is  part of a two-way mirror, attaching new_device creates a three-way mirror,
           and so on. In either case, new_device begins to resilver immediately.

            -f    Forces use of new_device, even if it appears to be in use. Not all devices can
                 be overridden in this manner.

           -o property=value

               Sets  the  given pool properties. See the "Properties" section for a list of valid
               properties that can be set. The only property supported at the moment is "ashift".
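
            For example, the following converts the single disk sda in pool tank into a
            two-way mirror (pool and device names are illustrative):

              # zpool attach tank sda sdb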

       zpool clear pool [device] ...

           Clears device errors in a pool. If no  arguments  are  specified,  all  device  errors
           within  the  pool  are cleared. If one or more devices is specified, only those errors
           associated with the specified device or devices are cleared.
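
            For example, to clear only the errors recorded against device sda in pool
            tank (pool and device names are illustrative):

              # zpool clear tank sda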

       zpool create [-fnd] [-o property=value] ... [-O file-system-property=value] ... [-m
       mountpoint] [-R root] [-t tname] pool vdev ...

           Creates  a  new  storage  pool containing the virtual devices specified on the command
           line. The pool name must begin with  a  letter,  and  can  only  contain  alphanumeric
           characters  as  well  as  underscore ("_"), dash ("-"), period ("."), colon (":"), and
           space (" "). The pool names "mirror", "raidz", "spare" and "log" are reserved, as  are
           names  beginning with the pattern "c[0-9]". The vdev specification is described in the
           "Virtual Devices" section.

           The command verifies that each device specified is accessible and not currently in use
           by  another  subsystem.  There  are  some  uses,  such  as being currently mounted, or
            specified as the dedicated dump device, that prevent a device from ever being used by
           ZFS.  Other uses, such as having a preexisting UFS file system, can be overridden with
           the -f option.

           The command also checks that the replication strategy for the pool is  consistent.  An
           attempt  to  combine  redundant  and non-redundant storage in a single pool, or to mix
           disks and files, results in an error unless -f is specified. The  use  of  differently
           sized devices within a single raidz or mirror group is also flagged as an error unless
           -f is specified.

           Unless the -R option is specified, the default mount point is "/pool". The mount point
           must  not exist or must be empty, or else the root dataset cannot be mounted. This can
           be overridden with the -m option.

           By default all supported features are enabled on the new pool unless the -d option  is
           specified.

           -f

               Forces  use  of  vdevs,  even  if  they  appear  in  use  or specify a conflicting
               replication level. Not all devices can be overridden in this manner.

           -n

               Displays the configuration that would be used without actually creating the  pool.
               The  actual  pool creation can still fail due to insufficient privileges or device
               sharing.

           -d

               Do not enable any features on the new pool. Individual features can be enabled  by
               setting  their  corresponding properties to enabled with the -o option. See zpool-
               features(5) for details about feature properties.

           -o property=value [-o property=value] ...

               Sets the given pool properties. See the "Properties" section for a list  of  valid
               properties that can be set.

           -O file-system-property=value
           [-O file-system-property=value] ...

               Sets the given file system properties in the root file system of the pool. See the
               "Properties" section of zfs(8) for a list of valid properties that can be set.

           -R root

               Equivalent to "-o cachefile=none,altroot=root"

           -m mountpoint

               Sets the mount point for the root dataset. The default mount point is  "/pool"  or
               "altroot/pool"  if altroot is specified. The mount point must be an absolute path,
               "legacy", or "none". For more information on dataset mount points, see zfs(8).

           -t tname

               Sets the in-core pool name to "tname" while the on-disk  name  will  be  the  name
               specified as the pool name "pool". This will set the default cachefile property to
               none. This is intended to handle name space collisions  when  creating  pools  for
               other  systems,  such as virtual machines or physical machines whose pools live on
               network block devices.

       zpool destroy [-f] pool

           Destroys the given pool, freeing up any devices for other use. This command  tries  to
           unmount any active datasets before destroying the pool.

           -f    Forces any active datasets contained within the pool to be unmounted.

       zpool detach pool device

            Detaches device from a mirror. The operation is refused if there are no other valid
            replicas of the data. If the device may be re-added to the pool later, consider using
            the "zpool offline" command instead.

       zpool events [-vHfc] [pool] ...

            Displays events generated by the ZFS kernel modules. See zfs-events(5) for more
            information about the subclasses and event payloads that can be generated.

            -v    Display full details of the events and the information available about them.

           -H    Scripted  mode.  Do  not  display  headers,  and separate fields by a single tab
                 instead of arbitrary space.

           -f    Follow mode.

           -c    Clear all previous events.

       zpool export [-a] [-f] pool ...

           Exports the given pools from the system. All devices are marked as exported,  but  are
           still  considered in use by other subsystems. The devices can be moved between systems
           (even those of different endianness) and imported as long as a  sufficient  number  of
           devices are present.

           Before  exporting the pool, all datasets within the pool are unmounted. A pool can not
           be exported if it has a shared spare that is currently being used.

           For pools to be portable, you must give  the  zpool  command  whole  disks,  not  just
           partitions,  so that ZFS can label the disks with portable EFI labels. Otherwise, disk
           drivers on platforms of different endianness will not recognize the disks.

           -a    Exports all pools imported on the system.

           -f    Forcefully unmount all datasets, using the "unmount -f" command.

                 This command will forcefully export the pool even if it has a shared spare  that
                 is currently being used. This may lead to potential data corruption.

        zpool get [-pH] "all" | property[,...] pool ...

           Retrieves  the  given  list of properties (or all properties if "all" is used) for the
           specified storage pool(s). These properties are displayed with the following fields:

                name          Name of storage pool
                property      Property name
                value         Property value
                source        Property source, either 'default' or 'local'.

           See the "Properties" section for more information on the available pool properties.

           -p    Display numbers in parseable (exact) values.

           -H    Scripted mode. Do not display headers, and  separate  fields  by  a  single  tab
                 instead of arbitrary space.
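
            For example, to print the exact size and capacity of a pool in scripted form
            (the pool name is illustrative):

              # zpool get -pH size,capacity tank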

       zpool history [-il] [pool] ...

           Displays  the  command  history  of  the  specified  pools  or all pools if no pool is
           specified.

           -i    Displays internally logged ZFS events in addition to user initiated events.

            -l    Displays log records in long format, which in addition to the standard format
                  includes the user name, the hostname, and the zone in which the operation was
                  performed.
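
            For example, to show the long-format history of a pool, including internally
            logged events (the pool name is illustrative):

              # zpool history -il tank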

       zpool import [-d dir | -c cachefile] [-D]

           Lists pools available to import. If the -d  option  is  not  specified,  this  command
           searches for devices in "/dev". The -d option can be specified multiple times, and all
           directories are searched. If the device appears to be part of an exported  pool,  this
           command  displays  a  summary  of  the  pool  with  the  name  of  the pool, a numeric
           identifier, as well as the vdev layout and current  health  of  the  device  for  each
           device  or file. Destroyed pools, pools that were previously destroyed with the "zpool
           destroy" command, are not listed unless the -D option is specified.

           The numeric identifier is unique, and can be  used  instead  of  the  pool  name  when
           multiple exported pools of the same name are available.

           -c cachefile    Reads configuration from the given cachefile that was created with the
                           "cachefile" pool property. This cachefile is used instead of searching
                           for devices.

           -d dir          Searches  for  devices or files in dir. The -d option can be specified
                           multiple times.

           -D              Lists destroyed pools only.
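
            For example, to search for importable pools using the persistent device names
            under /dev/disk/by-id (a common choice on Linux):

              # zpool import -d /dev/disk/by-id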

        zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile] [-D] [-f] [-m]
        [-N] [-R root] [-F [-n] [-X] [-T]] -a

           Imports  all pools found in the search directories. Identical to the previous command,
           except that all pools with a sufficient number  of  devices  available  are  imported.
           Destroyed  pools,  pools  that  were  previously  destroyed  with  the "zpool destroy"
           command, will not be imported unless the -D option is specified.

           -o mntopts           Comma-separated list  of  mount  options  to  use  when  mounting
                                datasets within the pool. See zfs(8) for a description of dataset
                                properties and mount options.

           -o property=value    Sets the  specified  property  on  the  imported  pool.  See  the
                                "Properties"  section  for more information on the available pool
                                properties.

           -c cachefile         Reads configuration from the given  cachefile  that  was  created
                                with  the  "cachefile"  pool  property.  This  cachefile  is used
                                instead of searching for devices.

           -d dir               Searches for devices or files  in  dir.  The  -d  option  can  be
                                specified multiple times. This option is incompatible with the -c
                                option.

           -D                   Imports destroyed pools only. The -f option is also required.

           -f                   Forces import, even if the pool appears to be potentially active.

           -F
                                Recovery mode for a non-importable pool. Attempt  to  return  the
                                pool   to   an  importable  state  by  discarding  the  last  few
                                transactions. Not all damaged pools can  be  recovered  by  using
                                this   option.   If  successful,  the  data  from  the  discarded
                                transactions is irretrievably lost. This option is ignored if the
                                pool is importable or already imported.

           -a                   Searches for and imports all pools found.

           -m
                                Allows a pool to import when there is a missing log device.

           -R root              Sets  the  "cachefile"  property  to  "none"  and  the  "altroot"
                                property to "root".

           -N
                                Import the pool without mounting any file systems.

           -n
                                Used with the -F  recovery  option.  Determines  whether  a  non-
                                importable  pool  can  be  made  importable  again,  but does not
                                actually perform the pool recovery. For more details  about  pool
                                recovery mode, see the -F option, above.

           -X
                                Used  with  the  -F  recovery  option. Determines whether extreme
                                measures to find a valid txg should take place.  This allows  the
                                pool  to be rolled back to a txg which is no longer guaranteed to
                                be consistent.  Pools imported at an inconsistent txg may contain
                                uncorrectable  checksum  errors.   For  more  details  about pool
                                recovery mode, see the -F option, above.   WARNING:  This  option
                                can  be extremely hazardous to the health of your pool and should
                                only be used as a last resort.

           -T
                                Specify the txg to use  for  rollback.   Implies  -FX.  For  more
                                details  about  pool  recovery  mode,  see  the -X option, above.
                                WARNING: This option can be extremely hazardous to the health  of
                                your pool and should only be used as a last resort.

        zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile] [-D] [-f] [-m]
        [-R root] [-F [-n] [-X] [-T]] [-t] pool | id [newpool]

           Imports a specific pool. A  pool  can  be  identified  by  its  name  or  the  numeric
           identifier.  If  newpool  is  specified,  the pool is imported using the name newpool.
           Otherwise, it is imported with the same name as its exported name.

           If a device is removed from a system without running "zpool export" first, the  device
           appears as potentially active. It cannot be determined if this was a failed export, or
           whether the device is really in use from another host. To import a pool in this state,
           the -f option is required.

           -o mntopts

               Comma-separated  list  of  mount  options to use when mounting datasets within the
               pool. See zfs(8) for a description of dataset properties and mount options.

           -o property=value

               Sets the specified property on the imported pool. See the "Properties" section for
               more information on the available pool properties.

           -c cachefile

               Reads configuration from the given cachefile that was created with the "cachefile"
               pool property. This cachefile is used instead of searching for devices.

           -d dir

               Searches for devices or files in dir. The -d  option  can  be  specified  multiple
               times. This option is incompatible with the -c option.

           -D

               Imports destroyed pool. The -f option is also required.

           -f

               Forces import, even if the pool appears to be potentially active.

           -F

               Recovery  mode  for  a  non-importable  pool.  Attempt  to  return  the pool to an
               importable state by discarding the last few transactions. Not  all  damaged  pools
               can  be recovered by using this option. If successful, the data from the discarded
               transactions is irretrievably  lost.  This  option  is  ignored  if  the  pool  is
               importable or already imported.

           -R root

               Sets the "cachefile" property to "none" and the "altroot" property to "root".

           -n

               Used  with the -F recovery option. Determines whether a non-importable pool can be
               made importable again, but does not actually perform the pool recovery.  For  more
               details about pool recovery mode, see the -F option, above.

           -X

               Used  with  the  -F recovery option. Determines whether extreme measures to find a
               valid txg should take place.  This allows the pool to be  rolled  back  to  a  txg
               which is no longer guaranteed to be consistent.  Pools imported at an inconsistent
               txg may contain uncorrectable  checksum  errors.   For  more  details  about  pool
               recovery  mode,  see  the -F option, above.  WARNING: This option can be extremely
               hazardous to the health of your pool and should only be used as a last resort.

           -T

               Specify the txg to use for rollback.  Implies -FX. For  more  details  about  pool
               recovery  mode,  see  the -X option, above.  WARNING: This option can be extremely
               hazardous to the health of your pool and should only be used as a last resort.

           -t

               Used with "newpool". Specifies that "newpool" is temporary. Temporary  pool  names
               last  until  export. Ensures that the original pool name will be used in all label
               updates and therefore is retained upon export. Will  also  set  -o  cachefile=none
               when not explicitly specified.

           -m

               Allows a pool to import when there is a missing log device.
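
            For example, the following imports the exported pool tank under the new name
            storage (pool names are illustrative):

              # zpool import tank storage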

        zpool iostat [-T d | u] [-gLPvy] [pool] ... [interval [count]]

           Displays  I/O  statistics  for the given pools. When given an interval, the statistics
           are printed every interval seconds until Ctrl-C is pressed. If no pools are specified,
            statistics for every pool in the system are shown. If count is specified, the command
           exits after count reports are printed.

            -T d | u    Display a time stamp.

                       Specify u for a printed representation of the internal  representation  of
                       time. See time(2). Specify d for standard date format. See date(1).

           -g          Display  vdev GUIDs instead of the normal device names. These GUIDs can be
                       used in place of device names for the zpool  detach/offline/remove/replace
                       commands.

           -L          Display  real  paths  for  vdevs resolving all symbolic links. This can be
                       used to look up the current block device name regardless of the /dev/disk/
                       path used to open it.

           -P          Display  full  paths  for  vdevs instead of only the last component of the
                       path.  This can be used in conjunction with the -L flag.

           -v          Verbose statistics. Reports usage statistics for individual  vdevs  within
                       the pool, in addition to the pool-wide statistics.

           -y          Omit statistics since boot.  Normally the first line of output reports the
                       statistics since boot.  This option suppresses that first line of output.
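
            For example, to print per-vdev statistics for a pool every 5 seconds, exiting
            after 3 reports (the pool name is illustrative):

              # zpool iostat -v tank 5 3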

       zpool labelclear [-f] device

           Removes ZFS label information from the specified device. The device must not  be  part
           of an active pool configuration.

           -f          Treat exported or foreign devices as inactive.

        zpool list [-T d | u] [-HgLPv] [-o props[,...]] [pool] ... [interval [count]]

           Lists  the  given  pools  along  with a health status and space usage. If no pools are
           specified, all pools in the system are listed. When given an interval, the information
           is  printed every interval seconds until Ctrl-C is pressed. If count is specified, the
           command exits after count reports are printed.

           -H          Scripted mode. Do not display headers, and separate fields by a single tab
                       instead of arbitrary space.

           -g          Display  vdev GUIDs instead of the normal device names. These GUIDs can be
                       used in place of device names for the zpool  detach/offline/remove/replace
                       commands.

           -L          Display  real  paths  for  vdevs resolving all symbolic links. This can be
                       used to look up the current block device name regardless of the /dev/disk/
                       path used to open it.

           -P          Display  full  paths  for  vdevs instead of only the last component of the
                       path.  This can be used in conjunction with the -L flag.

           -T d | u    Display a time stamp.

                       Specify u for a printed representation of the internal  representation  of
                       time. See time(2). Specify d for standard date format. See date(1).

           -o props    Comma-separated  list  of  properties  to  display.  See  the "Properties"
                       section for a list of valid properties. The default list is  "name,  size,
                       used,  available, fragmentation, expandsize, capacity, dedupratio, health,
                       altroot"

            -v          Verbose statistics. Reports usage statistics for individual vdevs within
                        the pool, in addition to the pool-wide statistics.
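
            For example, to list only the name, size, and health of every pool in
            scripted form:

              # zpool list -H -o name,size,health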

       zpool offline [-t] pool device ...

           Takes  the  specified physical device offline. While the device is offline, no attempt
           is made to read or write to the device.

           This command is not applicable to spares or cache devices.

           -t    Temporary. Upon reboot, the specified physical device reverts  to  its  previous
                 state.

       zpool online [-e] pool device...

           Brings the specified physical device online.

           This command is not applicable to spares or cache devices.

           -e    Expand  the device to use all available space. If the device is part of a mirror
                 or raidz then all devices must be expanded before  the  new  space  will  become
                 available to the pool.
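
            For example, to take device sdb offline until the next reboot, and later
            bring it back online and expand it to its full size (pool and device names
            are illustrative):

              # zpool offline -t tank sdb
              # zpool online -e tank sdb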

       zpool reguid pool

           Generates  a  new  unique identifier for the pool. You must ensure that all devices in
           this pool are online and healthy before performing this action.

       zpool reopen pool

           Reopen all the vdevs associated with the pool.

       zpool remove pool device ...

           Removes the specified device from the  pool.  This  command  currently  only  supports
           removing  hot  spares, cache, and log devices. A mirrored log device can be removed by
           specifying the top-level mirror for the log.  Non-log  devices  that  are  part  of  a
           mirrored  configuration  can  be removed using the zpool detach command. Non-redundant
           and raidz devices cannot be removed from a pool.

       zpool replace [-f] [-o property=value] pool old_device [new_device]

           Replaces old_device with new_device.  This  is  equivalent  to  attaching  new_device,
           waiting for it to resilver, and then detaching old_device.

           The  size  of  new_device must be greater than or equal to the minimum size of all the
           devices in a mirror or raidz configuration.

           new_device is required if the pool is not redundant. If new_device is  not  specified,
           it  defaults  to old_device. This form of replacement is useful after an existing disk
           has failed and has been physically replaced. In this case, the new disk may  have  the
           same  /dev  path  as  the old device, even though it is actually a different disk. ZFS
           recognizes this.

            -f    Forces use of new_device, even if it appears to be in use. Not all devices can
                 be overridden in this manner.

           -o property=value
                 Sets the given pool properties. See the "Properties" section for a list of valid
                 properties that can be set. The only property supported at the moment is ashift.
                 Do  note  that  some  properties  (among  them  ashift) are not inherited from a
                 previous vdev. They are vdev specific, not pool specific.
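
            For example, the following replaces the failed device sdb in pool tank with
            the new device sdc (pool and device names are illustrative):

              # zpool replace tank sdb sdc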

       zpool scrub [-s] pool ...

           Begins a scrub. The scrub examines all data in the specified pools to verify  that  it
           checksums  correctly.  For  replicated  (mirror  or  raidz) devices, ZFS automatically
           repairs any damage discovered during the scrub. The "zpool status" command reports the
           progress of the scrub and summarizes the results of the scrub upon completion.

           Scrubbing  and  resilvering  are  very  similar  operations.  The  difference  is that
           resilvering only examines data that ZFS knows to be out of  date  (for  example,  when
           attaching a new device to a mirror or replacing an existing device), whereas scrubbing
           examines all data to discover silent errors due to hardware faults or disk failure.

           Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows one at
           a time. If a scrub is already in progress, the "zpool scrub" command terminates it and
           starts a new scrub. If a resilver is in progress, ZFS does not allow  a  scrub  to  be
           started until the resilver completes.

           -s    Stop scrubbing.
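
            For example, to start a scrub of a pool, and to stop it again (the pool name
            is illustrative):

              # zpool scrub tank
              # zpool scrub -s tank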

       zpool set property=value pool

           Sets  the  given property on the specified pool. See the "Properties" section for more
           information on what properties can be set and acceptable values.

       zpool split [-gLnP] [-R altroot] [-o property=value] pool newpool [device ...]

           Split devices off pool creating newpool. All vdevs in pool must  be  mirrors  and  the
           pool must not be in the process of resilvering. At the time of the split, newpool will
           be a replica of pool. By default, the last device in each mirror is split from pool to
           create newpool.

            The optional device specification causes the specified device(s) to be included in the
            new pool. For any mirror left unspecified, the last device in that mirror is used, as
            it would be by default.

           -g    Display  vdev  GUIDs instead of the normal device names. These GUIDs can be used
                 in place of device names for the zpool detach/offline/remove/replace commands.

           -L    Display real paths for vdevs resolving all symbolic links. This can be  used  to
                 look  up the current block device name regardless of the /dev/disk/ path used to
                 open it.

           -n

                Do a dry run; do not actually perform the split. Print out the expected
                configuration of newpool.

           -P    Display  full  paths  for  vdevs instead of only the last component of the path.
                 This can be used in conjunction with the -L flag.

           -R altroot

                Set altroot for newpool and automatically import it. This can be useful to avoid
               mountpoint collisions if newpool is imported on the same filesystem as pool.

           -o property=value

                Sets the specified property for newpool. See the "Properties" section for more
               information on the available pool properties.
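
            For example, the following splits the last device of each mirror in pool tank
            off into a new pool named tank2 (pool names are illustrative):

              # zpool split tank tank2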

       zpool status [-gLPvxD] [-T d | u] [pool] ... [interval [count]]

           Displays the detailed health status for the given pools. If no pool is specified, then
           the  status  of each pool in the system is displayed. For more information on pool and
           device health, see the "Device Failure and Recovery" section.

           If a scrub or resilver is in progress, this command reports the  percentage  done  and
           the  estimated  time  to  completion.  Both of these are only approximate, because the
           amount of data in the pool and the other workloads on the system can change.

            -g          Display vdev GUIDs instead of the normal device names. These GUIDs can be
                        used in place of device names for the zpool detach/offline/remove/replace
                        commands.

           -L          Display real paths for vdevs resolving all symbolic  links.  This  can  be
                       used to look up the current block device name regardless of the /dev/disk/
                       path used to open it.

           -P          Display full paths for vdevs instead of only the  last  component  of  the
                       path.  This can be used in conjunction with the -L flag.

           -v          Displays  verbose  data error information, printing out a complete list of
                       all data errors since the last complete pool scrub.

           -x          Only display status for pools that are exhibiting errors or are  otherwise
                       unavailable. Warnings about pools not using the latest on-disk format will
                       not be included.

           -D          Display a histogram of deduplication  statistics,  showing  the  allocated
                       (physically  present  on disk) and referenced (logically referenced in the
                       pool) block counts and sizes by reference count.

           -T d | u    Display a time stamp.

                       Specify u for a printed representation of the internal  representation  of
                       time. See time(2). Specify d for standard date format. See date(1).
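
           Combining these options (pool name, interval, and count are illustrative), the
           following would print the status of tank every 5 seconds, 10 times, preceding each
           report with a date-formatted time stamp:

             # zpool status -T d tank 5 10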

       zpool upgrade

           Displays pools which do not have all supported features enabled and pools formatted
           using a legacy ZFS version number. These pools can continue to be used, but some
           features may not be available. Use "zpool upgrade -a" to enable all features on all
           pools.

       zpool upgrade -v

           Displays legacy ZFS versions supported by the current software. See
           zpool-features(5) for a description of the feature flags supported by the current
           software.

       zpool upgrade [-V version] -a | pool ...

           Enables all supported features on the given pool. Once this is done, the pool will
           no longer be accessible on systems that do not support feature flags. See
           zpool-features(5) for details on compatibility with systems that support feature
           flags, but do not support all features enabled on the pool.

           -a            Enables all supported features on all pools.

           -V version    Upgrade to the specified legacy version. If the -V flag is specified,
                         no features will be enabled on the pool. This option can only be used
                         to increase the version number up to the last supported legacy version
                         number.
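
           As an illustration (the pool name and version number here are hypothetical), a pool
           could be upgraded only as far as legacy version 28, without enabling any feature
           flags, with:

             # zpool upgrade -V 28 tank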

EXAMPLES

       Example 1 Creating a RAID-Z Storage Pool

       The following command creates a pool with a single raidz root vdev that  consists  of  six
       disks.

         # zpool create tank raidz sda sdb sdc sdd sde sdf

       Example 2 Creating a Mirrored Storage Pool

       The  following  command  creates  a  pool with two mirrors, where each mirror contains two
       disks.

         # zpool create tank mirror sda sdb mirror sdc sdd

       Example 3 Creating a ZFS Storage Pool by Using Partitions

       The following command creates an unmirrored pool using two disk partitions.

         # zpool create tank sda1 sdb2

       Example 4 Creating a ZFS Storage Pool by Using Files

       The following command creates an unmirrored pool using files.  While  not  recommended,  a
       pool based on files can be useful for experimental purposes.

         # zpool create tank /path/to/file/a /path/to/file/b

       Example 5 Adding a Mirror to a ZFS Storage Pool

       The  following  command  adds  two  mirrored  disks to the pool tank, assuming the pool is
       already made up of two-way mirrors. The additional space is immediately available  to  any
       datasets within the pool.

         # zpool add tank mirror sda sdb

       Example 6 Listing Available ZFS Storage Pools

       The following command lists all available pools on the system. In this case, the pool zion
       is faulted due to a missing device.

       The results from this command are similar to the following:

         # zpool list
         NAME    SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
         rpool  19.9G  8.43G  11.4G    33%         -    42%  1.00x  ONLINE  -
         tank   61.5G  20.0G  41.5G    48%         -    32%  1.00x  ONLINE  -
         zion       -      -      -      -         -      -      -  FAULTED -

       Example 7 Destroying a ZFS Storage Pool

       The following command destroys the pool tank and any datasets contained within.

         # zpool destroy -f tank

       Example 8 Exporting a ZFS Storage Pool

       The following command exports the devices in pool tank so that they can  be  relocated  or
       later imported.

         # zpool export tank

       Example 9 Importing a ZFS Storage Pool

       The  following command displays available pools, and then imports the pool tank for use on
       the system.

       The results from this command are similar to the following:

         # zpool import
           pool: tank
             id: 15451357997522795478
          state: ONLINE
         action: The pool can be imported using its name or numeric identifier.
         config:

                 tank        ONLINE
                   mirror    ONLINE
                     sda     ONLINE
                     sdb     ONLINE

         # zpool import tank

       Example 10 Upgrading All ZFS Storage Pools to the Current Version

       The following command upgrades all ZFS storage pools to the current version of  the
       software.

         # zpool upgrade -a
         This system is currently running ZFS pool version 28.

       Example 11 Managing Hot Spares

       The following command creates a new pool with an available hot spare:

         # zpool create tank mirror sda sdb spare sdc

       If  one  of  the  disks were to fail, the pool would be reduced to the degraded state. The
       failed device can be replaced using the following command:

         # zpool replace tank sda sdd

       Once the data has been  resilvered,  the  spare  is  automatically  removed  and  is  made
       available for use should another device fail. The hot spare can be  permanently  removed
       from the pool using the following command:

         # zpool remove tank sdc

       Example 12 Creating a ZFS Pool with Mirrored Separate Intent Logs

       The following command creates a ZFS storage pool consisting of  two  two-way  mirrors  and
       mirrored log devices:

         # zpool create pool mirror sda sdb mirror sdc sdd log mirror \
            sde sdf

       Example 13 Adding Cache Devices to a ZFS Pool

       The following command adds two disks for use as cache devices to a ZFS storage pool:

         # zpool add pool cache sdc sdd

       Once added, the cache devices gradually fill with content from main memory.  Depending  on
       the size of your cache devices, it could take over an hour for them to fill. Capacity  and
       reads can be monitored using the iostat subcommand as follows:

         # zpool iostat -v pool 5

       Example 14 Removing a Mirrored Log Device

       The following command removes the mirrored log device mirror-2.

       Given this configuration:

            pool: tank
           state: ONLINE
           scrub: none requested
         config:

                  NAME        STATE     READ WRITE CKSUM
                  tank        ONLINE       0     0     0
                    mirror-0  ONLINE       0     0     0
                      sda     ONLINE       0     0     0
                      sdb     ONLINE       0     0     0
                    mirror-1  ONLINE       0     0     0
                      sdc     ONLINE       0     0     0
                      sdd     ONLINE       0     0     0
                  logs
                    mirror-2  ONLINE       0     0     0
                      sde     ONLINE       0     0     0
                      sdf     ONLINE       0     0     0

       The command to remove the mirrored log mirror-2 is:

         # zpool remove tank mirror-2

       Example 15 Displaying expanded space on a device

       The following command displays the detailed information for the pool data.  This  pool  is
       composed of a single raidz vdev where one of its devices increased its capacity  by  10GB.
       In this example, the pool will not be able to utilize this extra capacity  until  all  the
       devices under the raidz vdev have been expanded.

         # zpool list -v data
         NAME         SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
         data        23.9G  14.6G  9.30G    48%         -    61%  1.00x  ONLINE  -
           raidz1    23.9G  14.6G  9.30G    48%         -
             c1t1d0      -      -      -      -         -
             c1t2d0      -      -      -      -       10G
             c1t3d0      -      -      -      -         -

EXIT STATUS

       The following exit values are returned:

       0    Successful completion.

       1    An error occurred.

       2    Invalid command line options were specified.

ENVIRONMENT VARIABLES

       ZFS_ABORT
              Cause zpool to dump core on exit for the purposes of running ::findleaks.

       ZPOOL_IMPORT_PATH
              The search path for devices or files to  use  with  the  pool.  This  is  a  colon-
              separated  list  of  directories  in  which zpool looks for device nodes and files.
              Similar to the -d option in zpool import.

       ZPOOL_VDEV_NAME_GUID
              Cause zpool subcommands  to  output  vdev  guids  by  default.   This  behavior  is
              identical to the zpool status -g command line option.

       ZPOOL_VDEV_NAME_FOLLOW_LINKS
              Cause  zpool  subcommands to follow links for vdev names by default.  This behavior
              is identical to the zpool status -L command line option.

       ZPOOL_VDEV_NAME_PATH
              Cause zpool subcommands to output full vdev path names by default.   This  behavior
              is identical to the zpool status -P command line option.
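
       For example, assuming (as the descriptions above suggest) that these variables need  only
       be set and that their value is not significant, full vdev path names could  be  requested
       for a single invocation with a command such as the following (the pool name  is  hypothe‐
       tical):

         # ZPOOL_VDEV_NAME_PATH=1 zpool status tank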

SEE ALSO

       zfs(8), zpool-features(5), zfs-events(5)