Provided by: zfsutils-linux_0.7.5-1ubuntu16.12_amd64 

NAME
zpool-features - ZFS pool feature descriptions
DESCRIPTION
ZFS pool on-disk format versions are specified via "features", which replace the old on-disk format
numbers (the last supported on-disk format number is 28). To enable a feature on a pool, use the upgrade
subcommand of the zpool(8) command, or set the feature@feature_name property to enabled.
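For example, assuming a pool named tank (a placeholder name used in the examples throughout this page), the async_destroy feature could be enabled with:
      # zpool set feature@async_destroy=enabled tank
Alternatively, zpool upgrade tank enables every feature supported by the running software.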
The pool format does not affect file system version compatibility or the ability to send file systems
between pools.
Since most features can be enabled independently of each other, the on-disk format of the pool is
specified by the set of all features marked as active on the pool. If the pool was created by another
software version, this set may include unsupported features.
Identifying features
Every feature has a guid of the form com.example:feature_name. The reverse DNS name ensures that the
feature's guid is unique across all ZFS implementations. When unsupported features are encountered on a
pool they will be identified by their guids. Refer to the documentation for the ZFS implementation that
created the pool for information about those features.
Each supported feature also has a short name. By convention a feature's short name is the portion of its
guid which follows the ':' (e.g. com.example:feature_name would have the short name feature_name),
however a feature's short name may differ across ZFS implementations if following the convention would
result in name conflicts.
Feature states
Features can be in one of three states:
active
This feature's on-disk format changes are in effect on the pool. Support for this feature is
required to import the pool in read-write mode. If this feature is not read-only compatible,
support is also required to import the pool in read-only mode (see "Read-only
compatibility").
enabled
An administrator has marked this feature as enabled on the pool, but the feature's on-disk
format changes have not been made yet. The pool can still be imported by software that does
not support this feature, but changes may be made to the on-disk format at any time which
will move the feature to the active state. Some features may support returning to the enabled
state after becoming active. See feature-specific documentation for details.
disabled
This feature's on-disk format changes have not been made and will not be made unless an
administrator moves the feature to the enabled state. Features cannot be disabled once they
have been enabled.
The state of supported features is exposed through pool properties of the form feature@short_name.
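For example, continuing with the hypothetical pool tank, the state of a single feature can be queried with zpool get; the output resembles the following (exact values are illustrative):
      # zpool get feature@async_destroy tank
      NAME  PROPERTY               VALUE    SOURCE
      tank  feature@async_destroy  enabled  local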
Read-only compatibility
Some features may make on-disk format changes that do not interfere with other software's ability to read
from the pool. These features are referred to as "read-only compatible". If all unsupported features on a
pool are read-only compatible, the pool can be imported in read-only mode by setting the readonly
property during import (see zpool(8) for details on importing pools).
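For example, a pool named tank whose unsupported features are all read-only compatible could be imported with:
      # zpool import -o readonly=on tank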
Unsupported features
For each unsupported feature enabled on an imported pool, a pool property named unsupported@feature_guid
indicates why the import was allowed despite the unsupported feature. Possible values for this
property are:
inactive
The feature is in the enabled state and therefore the pool's on-disk format is still
compatible with software that does not support this feature.
readonly
The feature is read-only compatible and the pool has been imported in read-only mode.
Feature dependencies
Some features depend on other features being enabled in order to function properly. Enabling a feature
will automatically enable any features it depends on.
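For example, on the hypothetical pool tank, enabling the bookmarks feature (which depends on extensible_dataset, described below) also enables that dependency:
      # zpool set feature@bookmarks=enabled tank
      # zpool get feature@extensible_dataset tank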
FEATURES
The following features are supported on this system:
async_destroy
GUID com.delphix:async_destroy
READ-ONLY COMPATIBLE yes
DEPENDENCIES none
Destroying a file system requires traversing all of its data in order to return its used space to the
pool. Without async_destroy the file system is not fully removed until all space has been reclaimed.
If the destroy operation is interrupted by a reboot or power outage the next attempt to open the pool
will need to complete the destroy operation synchronously.
When async_destroy is enabled the file system's data will be reclaimed by a background process,
allowing the destroy operation to complete without traversing the entire file system. The background
process is able to resume interrupted destroys after the pool has been opened, eliminating the need
to finish interrupted destroys as part of the open operation. The amount of space remaining to be
reclaimed by the background process is available through the freeing property.
This feature is only active while freeing is non-zero.
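For example, assuming a pool named tank, the space still to be reclaimed can be checked with:
      # zpool get freeing tank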
empty_bpobj
GUID com.delphix:empty_bpobj
READ-ONLY COMPATIBLE yes
DEPENDENCIES none
This feature increases the performance of creating and using a large number of snapshots of a single
filesystem or volume, and also reduces the disk space required.
When there are many snapshots, each snapshot uses many Block Pointer Objects (bpobjs) to track
blocks associated with that snapshot. However, in common use cases, most of these bpobjs are empty.
This feature allows us to create each bpobj on demand, thus eliminating the empty bpobjs.
This feature is active while there are any filesystems, volumes, or snapshots which were created
after enabling this feature.
filesystem_limits
GUID com.joyent:filesystem_limits
READ-ONLY COMPATIBLE yes
DEPENDENCIES extensible_dataset
This feature enables filesystem and snapshot limits. These limits can be used to control how many
filesystems and/or snapshots can be created at the point in the tree on which the limits are set.
This feature is active once either of the limit properties has been set on a dataset. Once activated
the feature is never deactivated.
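For example, assuming a dataset named tank/home, limits might be applied with:
      # zfs set filesystem_limit=100 tank/home
      # zfs set snapshot_limit=200 tank/home
See zfs(8) for details of the filesystem_limit and snapshot_limit properties.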
lz4_compress
GUID org.illumos:lz4_compress
READ-ONLY COMPATIBLE no
DEPENDENCIES none
lz4 is a high-performance real-time compression algorithm that features significantly faster
compression and decompression as well as a higher compression ratio than the older lzjb compression.
Typically, lz4 compression is approximately 50% faster on compressible data and 200% faster on
incompressible data than lzjb. It is also approximately 80% faster on decompression, while giving
approximately 10% better compression ratio.
When the lz4_compress feature is set to enabled, the administrator can turn on lz4 compression on any
dataset on the pool using the zfs(8) command. Please note that doing so will immediately activate the
lz4_compress feature on the underlying pool, and all newly written metadata will be compressed with the
lz4 algorithm. Since this feature is not read-only compatible, this operation will render the pool
unimportable on systems without support for the lz4_compress feature.
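For example, assuming a dataset named tank/data, lz4 compression could be turned on with:
      # zfs set compression=lz4 tank/data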
Booting off of lz4-compressed root pools is supported.
This feature becomes active as soon as it is enabled and will never return to being enabled.
spacemap_histogram
GUID com.delphix:spacemap_histogram
READ-ONLY COMPATIBLE yes
DEPENDENCIES none
This feature allows ZFS to maintain more information about how free space is organized within the
pool. If this feature is enabled, ZFS will set this feature to active when a new space map object is
created or an existing space map is upgraded to the new format. Once the feature is active, it will
remain in that state until the pool is destroyed.
multi_vdev_crash_dump
GUID com.joyent:multi_vdev_crash_dump
READ-ONLY COMPATIBLE no
DEPENDENCIES none
This feature allows a dump device to be configured with a pool comprised of multiple vdevs. Those
vdevs may be arranged in any mirrored or raidz configuration.
When the multi_vdev_crash_dump feature is set to enabled, the administrator can use the dumpadm(1M)
command to configure a dump device on a pool comprised of multiple vdevs.
Under Linux this feature is registered for compatibility but not used. New pools created under Linux
will have the feature enabled but will never transition to active. This functionality is not
required in order to support crash dumps under Linux. Existing pools where this feature is active
can be imported.
extensible_dataset
GUID com.delphix:extensible_dataset
READ-ONLY COMPATIBLE no
DEPENDENCIES none
This feature allows more flexible use of internal ZFS data structures, and exists for other features
to depend on.
This feature will be active when the first dependent feature uses it, and will be returned to the
enabled state when all datasets that use this feature are destroyed.
bookmarks
GUID com.delphix:bookmarks
READ-ONLY COMPATIBLE yes
DEPENDENCIES extensible_dataset
This feature enables use of the zfs bookmark subcommand.
This feature is active while any bookmarks exist in the pool. All bookmarks in the pool can be
listed by running zfs list -t bookmark -r poolname.
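For example, assuming a filesystem tank/fs with an existing snapshot named snap, a bookmark could be created and then listed with:
      # zfs bookmark tank/fs@snap tank/fs#mark
      # zfs list -t bookmark -r tank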
enabled_txg
GUID com.delphix:enabled_txg
READ-ONLY COMPATIBLE yes
DEPENDENCIES none
Once this feature is enabled ZFS records the transaction group number in which new features are
enabled. This has no user-visible impact, but other features may depend on this feature.
This feature becomes active as soon as it is enabled and will never return to being enabled.
hole_birth
GUID com.delphix:hole_birth
READ-ONLY COMPATIBLE no
DEPENDENCIES enabled_txg
This feature improves performance of incremental sends ("zfs send -i") and receives for objects with
many holes. The most common case of hole-filled objects is zvols.
An incremental send stream from snapshot A to snapshot B contains information about every block that
changed between A and B. Blocks which did not change between those snapshots can be identified and
omitted from the stream using a piece of metadata called the 'block birth time', but birth times are
not recorded for holes (blocks filled only with zeroes). Since holes created after A cannot be
distinguished from holes created before A, information about every hole in the entire filesystem or
zvol is included in the send stream.
For workloads where holes are rare this is not a problem. However, when incrementally replicating
filesystems or zvols with many holes (for example a zvol formatted with another filesystem) a lot of
time will be spent sending and receiving unnecessary information about holes that already exist on
the receiving side.
Once the hole_birth feature has been enabled the block birth times of all new holes will be recorded.
Incremental sends between snapshots created after this feature is enabled will use this new metadata
to avoid sending information about holes that already exist on the receiving side.
This feature becomes active as soon as it is enabled and will never return to being enabled.
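For example, assuming snapshots A and B of a zvol tank/vol, and a receiving pool backup that already holds snapshot A of backup/vol, an incremental replication that benefits from this feature might look like:
      # zfs send -i tank/vol@A tank/vol@B | zfs receive backup/vol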
embedded_data
GUID com.delphix:embedded_data
READ-ONLY COMPATIBLE no
DEPENDENCIES none
This feature improves the performance and compression ratio of highly-compressible blocks. Blocks
whose contents can compress to 112 bytes or smaller can take advantage of this feature.
When this feature is enabled, the contents of highly-compressible blocks are stored in the block
"pointer" itself (a misnomer in this case, as it contains the compressed data, rather than a pointer
to its location on disk). Thus the space of the block (one sector, typically 512 bytes or 4KB) is
saved, and no additional i/o is needed to read and write the data block.
This feature becomes active as soon as it is enabled and will never return to being enabled.
large_blocks
GUID org.open-zfs:large_blocks
READ-ONLY COMPATIBLE no
DEPENDENCIES extensible_dataset
The large_blocks feature allows the record size on a dataset to be set larger than 128KB.
This feature becomes active once a recordsize property has been set larger than 128KB, and will
return to being enabled once all filesystems that have ever had their recordsize larger than 128KB
are destroyed.
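For example, assuming a filesystem tank/fs, a larger record size could be requested with:
      # zfs set recordsize=1M tank/fs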
large_dnode
GUID org.zfsonlinux:large_dnode
READ-ONLY COMPATIBLE no
DEPENDENCIES extensible_dataset
The large_dnode feature allows the size of dnodes in a dataset to be set larger than 512B.
This feature becomes active once a dataset contains an object with a dnode larger than 512B, which
occurs as a result of setting the dnodesize dataset property to a value other than legacy. The
feature will return to being enabled once all filesystems that have ever contained a dnode larger
than 512B are destroyed. Large dnodes allow more data to be stored in the bonus buffer, thus
potentially improving performance by avoiding the use of spill blocks.
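For example, assuming a filesystem tank/fs, larger dnodes could be requested with:
      # zfs set dnodesize=auto tank/fs
See zfs(8) for the accepted dnodesize values (legacy, auto, and explicit sizes such as 1k through 16k).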
sha512
GUID org.illumos:sha512
READ-ONLY COMPATIBLE no
DEPENDENCIES extensible_dataset
This feature enables the use of the SHA-512/256 truncated hash algorithm (FIPS 180-4) for checksum
and dedup. The native 64-bit arithmetic of SHA-512 provides an approximate 50% performance boost over
SHA-256 on 64-bit hardware and is thus a good minimum-change replacement candidate for systems where
hash performance is important but which cannot, for whatever reason, use the faster skein and edonr
algorithms.
When the sha512 feature is set to enabled, the administrator can turn on the sha512 checksum on any
dataset using the zfs set checksum=sha512 command (see zfs(8)). This feature becomes active once a checksum
property has been set to sha512, and will return to being enabled once all filesystems that have ever
had their checksum set to sha512 are destroyed.
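For example, assuming a dataset named tank/fs:
      # zfs set checksum=sha512 tank/fs
The skein and edonr features described below are enabled and activated in the same way, using checksum=skein or checksum=edonr respectively.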
Booting off of pools utilizing SHA-512/256 is supported (provided that the updated GRUB stage2 module
is installed).
skein
GUID org.illumos:skein
READ-ONLY COMPATIBLE no
DEPENDENCIES extensible_dataset
This feature enables the use of the Skein hash algorithm for checksum and dedup. Skein is a high-
performance secure hash algorithm that was a finalist in the NIST SHA-3 competition. It provides a
very high security margin and high performance on 64-bit hardware (80% faster than SHA-256). This
implementation also utilizes the new salted checksumming functionality in ZFS, which means that the
checksum is pre-seeded with a secret 256-bit random key (stored on the pool) before being fed the
data block to be checksummed. Thus the produced checksums are unique to a given pool, preventing hash
collision attacks on systems with dedup.
When the skein feature is set to enabled, the administrator can turn on the skein checksum on any
dataset using the zfs set checksum=skein command (see zfs(8)). This feature becomes active once a checksum
property has been set to skein, and will return to being enabled once all filesystems that have ever
had their checksum set to skein are destroyed.
Booting off of pools using skein is NOT supported -- any attempt to enable skein on a root pool will
fail with an error.
edonr
GUID org.illumos:edonr
READ-ONLY COMPATIBLE no
DEPENDENCIES extensible_dataset
This feature enables the use of the Edon-R hash algorithm for checksum, including for nopwrite (if
compression is also enabled, an overwrite of a block whose checksum matches the data being written
will be ignored). In an abundance of caution, Edon-R can not be used with dedup (without
verification).
Edon-R is a very high-performance hash algorithm that was part of the NIST SHA-3 competition. It
provides extremely high hash performance (over 350% faster than SHA-256), but was not selected
because of its unsuitability as a general purpose secure hash algorithm. This implementation
utilizes the new salted checksumming functionality in ZFS, which means that the checksum is pre-
seeded with a secret 256-bit random key (stored on the pool) before being fed the data block to be
checksummed. Thus the produced checksums are unique to a given pool.
When the edonr feature is set to enabled, the administrator can turn on the edonr checksum on any
dataset using the zfs set checksum=edonr command (see zfs(8)). This feature becomes active once a checksum
property has been set to edonr, and will return to being enabled once all filesystems that have ever
had their checksum set to edonr are destroyed.
Booting off of pools using edonr is NOT supported -- any attempt to enable edonr on a root pool will
fail with an error.
userobj_accounting
GUID org.zfsonlinux:userobj_accounting
READ-ONLY COMPATIBLE yes
DEPENDENCIES extensible_dataset
This feature allows administrators to account for object usage by user and group.
This feature becomes active as soon as it is enabled and will never return to being enabled. Each
filesystem will be upgraded automatically when remounted, or when new files are created under that
filesystem. The upgrade can also be started manually on filesystems by running `zfs set
version=current <pool/fs>`. The upgrade process runs in the background and may take a while to
complete for filesystems containing a large number of files.
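For example, assuming a filesystem tank/fs, per-user object counts can be inspected with something like the following (field names are as implemented by this release's zfs userspace subcommand; see zfs(8)):
      # zfs userspace -o type,name,used,objused tank/fs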
SEE ALSO
zpool(8)
Aug 27, 2013 ZPOOL-FEATURES(5)