Provided by: zfsutils-linux_2.2.6-1ubuntu1_amd64

NAME

     zfs — tuning of the ZFS kernel module

DESCRIPTION

     The ZFS module supports these parameters:
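
     On Linux, most of these tunables are exposed as files under /sys/module/zfs/parameters/, and
     the runtime-changeable ones can be adjusted by writing to those files as root (or set
     persistently through a modprobe configuration).  The following Python sketch illustrates the
     idea; the parameter name used is only an example:

           # Minimal sketch: read and (as root) set a ZFS module parameter on Linux.
           from pathlib import Path

           PARAM_DIR = Path("/sys/module/zfs/parameters")

           def get_param(name: str) -> str:
               """Return the current value of a ZFS module parameter."""
               return (PARAM_DIR / name).read_text().strip()

           def set_param(name: str, value: str) -> None:
               """Write a new value; only runtime-changeable parameters accept this."""
               (PARAM_DIR / name).write_text(f"{value}\n")

           print("zfs_arc_max =", get_param("zfs_arc_max"))
           # set_param("zfs_arc_max", str(8 * 1024**3))   # example: cap the ARC at 8 GiB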

     dbuf_cache_max_bytes=UINT64_MAXB (u64)
              Maximum size in bytes of the dbuf cache.  The target size is the smaller of this
              value and 1/2^dbuf_cache_shift (1/32nd) of the target ARC size.  The behavior of the
             dbuf cache and its associated settings can be observed via the
             /proc/spl/kstat/zfs/dbufstats kstat.
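
              A minimal sketch of how the target size is derived; the ARC target size below is an
              assumed value used purely for illustration:

                    # Target dbuf cache size: the smaller of the explicit cap and a
                    # 1/2^dbuf_cache_shift fraction of the ARC target size.
                    arc_c = 16 * 1024**3               # assumed ARC target size: 16 GiB
                    dbuf_cache_max_bytes = 2**64 - 1   # default: effectively uncapped
                    dbuf_cache_shift = 5               # default: 1/32nd of the ARC target
                    target = min(dbuf_cache_max_bytes, arc_c >> dbuf_cache_shift)
                    print(target // 2**20, "MiB")      # 512 MiB for a 16 GiB ARC target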

     dbuf_metadata_cache_max_bytes=UINT64_MAXB (u64)
              Maximum size in bytes of the metadata dbuf cache.  The target size is the smaller
              of this value and 1/2^dbuf_metadata_cache_shift (1/64th) of the target ARC size.  The
             behavior of the metadata dbuf cache and its associated settings can be observed via
             the /proc/spl/kstat/zfs/dbufstats kstat.

     dbuf_cache_hiwater_pct=10% (uint)
             The percentage over dbuf_cache_max_bytes when dbufs must be evicted directly.

     dbuf_cache_lowater_pct=10% (uint)
             The percentage below dbuf_cache_max_bytes when the evict thread stops evicting
             dbufs.

     dbuf_cache_shift=5 (uint)
             Set the size of the dbuf cache (dbuf_cache_max_bytes) to a log2 fraction of the
             target ARC size.

     dbuf_metadata_cache_shift=6 (uint)
             Set the size of the dbuf metadata cache (dbuf_metadata_cache_max_bytes) to a log2
             fraction of the target ARC size.

     dbuf_mutex_cache_shift=0 (uint)
             Set the size of the mutex array for the dbuf cache.  When set to 0 the array is
             dynamically sized based on total system memory.

     dmu_object_alloc_chunk_shift=7 (128) (uint)
             dnode slots allocated in a single operation as a power of 2.  The default value
             minimizes lock contention for the bulk operation performed.

     dmu_prefetch_max=134217728B (128 MiB) (uint)
             Limit the amount we can prefetch with one call to this amount in bytes.  This helps
             to limit the amount of memory that can be used by prefetching.

     ignore_hole_birth (int)
             Alias for send_holes_without_birth_time.

     l2arc_feed_again=1|0 (int)
             Turbo L2ARC warm-up.  When the L2ARC is cold the fill interval will be set as fast
             as possible.

     l2arc_feed_min_ms=200 (u64)
             Min feed interval in milliseconds.  Requires l2arc_feed_again=1 and only applicable
             in related situations.

     l2arc_feed_secs=1 (u64)
             Seconds between L2ARC writing.

     l2arc_headroom=2 (u64)
             How far through the ARC lists to search for L2ARC cacheable content, expressed as a
             multiplier of l2arc_write_max.  ARC persistence across reboots can be achieved with
             persistent L2ARC by setting this parameter to 0, allowing the full length of ARC
             lists to be searched for cacheable content.

     l2arc_headroom_boost=200% (u64)
             Scales l2arc_headroom by this percentage when L2ARC contents are being successfully
             compressed before writing.  A value of 100 disables this feature.

     l2arc_exclude_special=0|1 (int)
             Controls whether buffers present on special vdevs are eligible for caching into
             L2ARC.  If set to 1, exclude dbufs on special vdevs from being cached to L2ARC.

     l2arc_mfuonly=0|1|2 (int)
             Controls whether only MFU metadata and data are cached from ARC into L2ARC.  This
             may be desired to avoid wasting space on L2ARC when reading/writing large amounts of
             data that are not expected to be accessed more than once.

             The default is 0, meaning both MRU and MFU data and metadata are cached.  When
             turning off this feature (setting it to 0), some MRU buffers will still be present
             in ARC and eventually cached on L2ARC.  If l2arc_noprefetch=0, some prefetched
             buffers will be cached to L2ARC, and those might later transition to MRU, in which
             case the l2arc_mru_asize arcstat will not be 0.

             Setting it to 1 means to L2 cache only MFU data and metadata.

              Setting it to 2 means to L2 cache all metadata (MRU+MFU) but only MFU data (i.e.,
              MRU data are not cached).  This can be the right setting to cache as much metadata
              as possible even with a high data turnover.

             Regardless of l2arc_noprefetch, some MFU buffers might be evicted from ARC, accessed
             later on as prefetches and transition to MRU as prefetches.  If accessed again they
             are counted as MRU and the l2arc_mru_asize arcstat will not be 0.

             The ARC status of L2ARC buffers when they were first cached in L2ARC can be seen in
             the l2arc_mru_asize, l2arc_mfu_asize, and l2arc_prefetch_asize arcstats when
             importing the pool or onlining a cache device if persistent L2ARC is enabled.

              The evict_l2_eligible_mru arcstat does not take this option into account; the
              information provided by the evict_l2_eligible_m[rf]u arcstats can be used to decide
              whether toggling this option is appropriate for the current workload.

     l2arc_meta_percent=33% (uint)
             Percent of ARC size allowed for L2ARC-only headers.  Since L2ARC buffers are not
             evicted on memory pressure, too many headers on a system with an irrationally large
             L2ARC can render it slow or unusable.  This parameter limits L2ARC writes and
             rebuilds to achieve the target.

     l2arc_trim_ahead=0% (u64)
             Trims ahead of the current write size (l2arc_write_max) on L2ARC devices by this
             percentage of write size if we have filled the device.  If set to 100 we TRIM twice
             the space required to accommodate upcoming writes.  A minimum of 64 MiB will be
             trimmed.  It also enables TRIM of the whole L2ARC device upon creation or addition
             to an existing pool or if the header of the device is invalid upon importing a pool
             or onlining a cache device.  A value of 0 disables TRIM on L2ARC altogether and is
             the default as it can put significant stress on the underlying storage devices.
              This will vary depending on how well the specific device handles these commands.

     l2arc_noprefetch=1|0 (int)
             Do not write buffers to L2ARC if they were prefetched but not used by applications.
             In case there are prefetched buffers in L2ARC and this option is later set, we do
             not read the prefetched buffers from L2ARC.  Unsetting this option is useful for
              caching sequential reads from the disks to L2ARC and serving those reads from L2ARC
             later on.  This may be beneficial in case the L2ARC device is significantly faster
             in sequential reads than the disks of the pool.

             Use 1 to disable and 0 to enable caching/reading prefetches to/from L2ARC.

     l2arc_norw=0|1 (int)
             No reads during writes.

     l2arc_write_boost=8388608B (8 MiB) (u64)
             Cold L2ARC devices will have l2arc_write_max increased by this amount while they
             remain cold.

     l2arc_write_max=8388608B (8 MiB) (u64)
             Max write bytes per interval.

     l2arc_rebuild_enabled=1|0 (int)
             Rebuild the L2ARC when importing a pool (persistent L2ARC).  This can be disabled if
             there are problems importing a pool or attaching an L2ARC device (e.g. the L2ARC
             device is slow in reading stored log metadata, or the metadata has become somehow
             fragmented/unusable).

     l2arc_rebuild_blocks_min_l2size=1073741824B (1 GiB) (u64)
              Minimum size of an L2ARC device required in order to write log blocks in it.  The
             log blocks are used upon importing the pool to rebuild the persistent L2ARC.

             For L2ARC devices less than 1 GiB, the amount of data l2arc_evict() evicts is
             significant compared to the amount of restored L2ARC data.  In this case, do not
             write log blocks in L2ARC in order not to waste space.

     metaslab_aliquot=1048576B (1 MiB) (u64)
             Metaslab granularity, in bytes.  This is roughly similar to what would be referred
             to as the "stripe size" in traditional RAID arrays.  In normal operation, ZFS will
             try to write this amount of data to each disk before moving on to the next top-level
             vdev.

     metaslab_bias_enabled=1|0 (int)
             Enable metaslab group biasing based on their vdevs' over- or under-utilization
             relative to the pool.

     metaslab_force_ganging=16777217B (16 MiB + 1 B) (u64)
             Make some blocks above a certain size be gang blocks.  This option is used by the
             test suite to facilitate testing.

     metaslab_force_ganging_pct=3% (uint)
              For blocks that could be forced to be a gang block (due to metaslab_force_ganging),
              force this percentage of them to be gang blocks.

     brt_zap_prefetch=1|0 (int)
             Controls prefetching BRT records for blocks which are going to be cloned.

     brt_zap_default_bs=12 (4 KiB) (int)
             Default BRT ZAP data block size as a power of 2. Note that changing this after
             creating a BRT on the pool will not affect existing BRTs, only newly created ones.

     brt_zap_default_ibs=12 (4 KiB) (int)
             Default BRT ZAP indirect block size as a power of 2. Note that changing this after
             creating a BRT on the pool will not affect existing BRTs, only newly created ones.

     ddt_zap_default_bs=15 (32 KiB) (int)
             Default DDT ZAP data block size as a power of 2. Note that changing this after
             creating a DDT on the pool will not affect existing DDTs, only newly created ones.

     ddt_zap_default_ibs=15 (32 KiB) (int)
             Default DDT ZAP indirect block size as a power of 2. Note that changing this after
             creating a DDT on the pool will not affect existing DDTs, only newly created ones.

     zfs_default_bs=9 (512 B) (int)
             Default dnode block size as a power of 2.

     zfs_default_ibs=17 (128 KiB) (int)
             Default dnode indirect block size as a power of 2.

     zfs_history_output_max=1048576B (1 MiB) (u64)
             When attempting to log an output nvlist of an ioctl in the on-disk history, the
             output will not be stored if it is larger than this size (in bytes).  This must be
             less than DMU_MAX_ACCESS (64 MiB).  This applies primarily to
             zfs_ioc_channel_program() (cf. zfs-program(8)).

     zfs_keep_log_spacemaps_at_export=0|1 (int)
             Prevent log spacemaps from being destroyed during pool exports and destroys.

     zfs_metaslab_segment_weight_enabled=1|0 (int)
             Enable/disable segment-based metaslab selection.

     zfs_metaslab_switch_threshold=2 (int)
             When using segment-based metaslab selection, continue allocating from the active
             metaslab until this option's worth of buckets have been exhausted.

     metaslab_debug_load=0|1 (int)
             Load all metaslabs during pool import.

     metaslab_debug_unload=0|1 (int)
             Prevent metaslabs from being unloaded.

     metaslab_fragmentation_factor_enabled=1|0 (int)
             Enable use of the fragmentation metric in computing metaslab weights.

     metaslab_df_max_search=16777216B (16 MiB) (uint)
             Maximum distance to search forward from the last offset.  Without this limit,
              fragmented pools can see >100,000 iterations and metaslab_block_picker() becomes the
             performance limiting factor on high-performance storage.

             With the default setting of 16 MiB, we typically see less than 500 iterations, even
             with very fragmented ashift=9 pools.  The maximum number of iterations possible is
             metaslab_df_max_search / 2^(ashift+1).  With the default setting of 16 MiB this is
             16*1024 (with ashift=9) or 2*1024 (with ashift=12).
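
              A worked check of the iteration bound quoted above, in Python:

                    # Maximum iterations = metaslab_df_max_search / 2^(ashift + 1).
                    metaslab_df_max_search = 16 * 1024**2       # 16 MiB (default)
                    for ashift in (9, 12):
                        print(ashift, metaslab_df_max_search // 2**(ashift + 1))
                    # ashift=9  -> 16384 (16*1024) iterations
                    # ashift=12 ->  2048 ( 2*1024) iterations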

     metaslab_df_use_largest_segment=0|1 (int)
             If not searching forward (due to metaslab_df_max_search, metaslab_df_free_pct, or
             metaslab_df_alloc_threshold), this tunable controls which segment is used.  If set,
             we will use the largest free segment.  If unset, we will use a segment of at least
             the requested size.

     zfs_metaslab_max_size_cache_sec=3600s (1 hour) (u64)
             When we unload a metaslab, we cache the size of the largest free chunk.  We use that
             cached size to determine whether or not to load a metaslab for a given allocation.
             As more frees accumulate in that metaslab while it's unloaded, the cached max size
             becomes less and less accurate.  After a number of seconds controlled by this
             tunable, we stop considering the cached max size and start considering only the
             histogram instead.

     zfs_metaslab_mem_limit=25% (uint)
             When we are loading a new metaslab, we check the amount of memory being used to
             store metaslab range trees.  If it is over a threshold, we attempt to unload the
             least recently used metaslab to prevent the system from clogging all of its memory
             with range trees.  This tunable sets the percentage of total system memory that is
             the threshold.

     zfs_metaslab_try_hard_before_gang=0|1 (int)
             If unset, we will first try normal allocation.
             If that fails then we will do a gang allocation.
             If that fails then we will do a "try hard" gang allocation.
             If that fails then we will have a multi-layer gang block.

             If set, we will first try normal allocation.
             If that fails then we will do a "try hard" allocation.
             If that fails we will do a gang allocation.
             If that fails we will do a "try hard" gang allocation.
             If that fails then we will have a multi-layer gang block.

     zfs_metaslab_find_max_tries=100 (uint)
             When not trying hard, we only consider this number of the best metaslabs.  This
             improves performance, especially when there are many metaslabs per vdev and the
             allocation can't actually be satisfied (so we would otherwise iterate all
             metaslabs).

     zfs_vdev_default_ms_count=200 (uint)
             When a vdev is added, target this number of metaslabs per top-level vdev.

     zfs_vdev_default_ms_shift=29 (512 MiB) (uint)
             Default lower limit for metaslab size.

     zfs_vdev_max_ms_shift=34 (16 GiB) (uint)
             Default upper limit for metaslab size.

     zfs_vdev_max_auto_ashift=14 (uint)
             Maximum ashift used when optimizing for logical → physical sector size on new top-
             level vdevs.  May be increased up to ASHIFT_MAX (16), but this may negatively impact
             pool space efficiency.

     zfs_vdev_min_auto_ashift=ASHIFT_MIN (9) (uint)
             Minimum ashift used when creating new top-level vdevs.

     zfs_vdev_min_ms_count=16 (uint)
             Minimum number of metaslabs to create in a top-level vdev.

     vdev_validate_skip=0|1 (int)
             Skip label validation steps during pool import.  Changing is not recommended unless
             you know what you're doing and are recovering a damaged label.

     zfs_vdev_ms_count_limit=131072 (128k) (uint)
             Practical upper limit of total metaslabs per top-level vdev.

     metaslab_preload_enabled=1|0 (int)
             Enable metaslab group preloading.

     metaslab_preload_limit=10 (uint)
              Maximum number of metaslabs per group to preload.

     metaslab_preload_pct=50 (uint)
              Percentage of CPUs to use for the metaslab preload taskq.

     metaslab_lba_weighting_enabled=1|0 (int)
             Give more weight to metaslabs with lower LBAs, assuming they have greater bandwidth,
             as is typically the case on a modern constant angular velocity disk drive.

     metaslab_unload_delay=32 (uint)
             After a metaslab is used, we keep it loaded for this many TXGs, to attempt to reduce
             unnecessary reloading.  Note that both this many TXGs and metaslab_unload_delay_ms
             milliseconds must pass before unloading will occur.

     metaslab_unload_delay_ms=600000ms (10 min) (uint)
             After a metaslab is used, we keep it loaded for this many milliseconds, to attempt
             to reduce unnecessary reloading.  Note, that both this many milliseconds and
             metaslab_unload_delay TXGs must pass before unloading will occur.

     reference_history=3 (uint)
             Maximum reference holders being tracked when reference_tracking_enable is active.

     reference_tracking_enable=0|1 (int)
             Track reference holders to refcount_t objects (debug builds only).

     send_holes_without_birth_time=1|0 (int)
             When set, the hole_birth optimization will not be used, and all holes will always be
             sent during a zfs send.  This is useful if you suspect your datasets are affected by
             a bug in hole_birth.

     spa_config_path=/etc/zfs/zpool.cache (charp)
             SPA config file.

     spa_asize_inflation=24 (uint)
             Multiplication factor used to estimate actual disk consumption from the size of data
             being written.  The default value is a worst case estimate, but lower values may be
             valid for a given pool depending on its configuration.  Pool administrators who
             understand the factors involved may wish to specify a more realistic inflation
             factor, particularly if they operate close to quota or capacity limits.

     spa_load_print_vdev_tree=0|1 (int)
             Whether to print the vdev tree in the debugging message buffer during pool import.

     spa_load_verify_data=1|0 (int)
             Whether to traverse data blocks during an "extreme rewind" (-X) import.

             An extreme rewind import normally performs a full traversal of all blocks in the
             pool for verification.  If this parameter is unset, the traversal skips non-metadata
             blocks.  It can be toggled once the import has started to stop or start the
             traversal of non-metadata blocks.

     spa_load_verify_metadata=1|0 (int)
             Whether to traverse blocks during an "extreme rewind" (-X) pool import.

             An extreme rewind import normally performs a full traversal of all blocks in the
             pool for verification.  If this parameter is unset, the traversal is not performed.
             It can be toggled once the import has started to stop or start the traversal.

     spa_load_verify_shift=4 (1/16th) (uint)
             Sets the maximum number of bytes to consume during pool import to the log2 fraction
             of the target ARC size.

     spa_slop_shift=5 (1/32nd) (int)
             Normally, we don't allow the last 3.2% (1/2^spa_slop_shift) of space in the pool to
             be consumed.  This ensures that we don't run the pool completely out of space, due
             to unaccounted changes (e.g. to the MOS).  It also limits the worst-case time to
             allocate space.  If we have less than this amount of free space, most ZPL operations
             (e.g. write, create) will return ENOSPC.
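
              A sketch of the reserved "slop" fraction, using an assumed pool size purely for
              illustration (the implementation may apply additional bounds):

                    pool_size = 10 * 1024**4           # assumed: 10 TiB pool
                    spa_slop_shift = 5                 # default: reserve 1/2^5 = 1/32 ≈ 3.2%
                    slop = pool_size >> spa_slop_shift
                    print(slop // 2**30, "GiB held in reserve")   # 320 GiB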

     spa_upgrade_errlog_limit=0 (uint)
             Limits the number of on-disk error log entries that will be converted to the new
             format when enabling the head_errlog feature.  The default is to convert all log
             entries.

     vdev_removal_max_span=32768B (32 KiB) (uint)
             During top-level vdev removal, chunks of data are copied from the vdev which may
             include free space in order to trade bandwidth for IOPS.  This parameter determines
             the maximum span of free space, in bytes, which will be included as "unnecessary"
             data in a chunk of copied data.

             The default value here was chosen to align with zfs_vdev_read_gap_limit, which is a
             similar concept when doing regular reads (but there's no reason it has to be the
             same).

     vdev_file_logical_ashift=9 (512 B) (u64)
             Logical ashift for file-based devices.

     vdev_file_physical_ashift=9 (512 B) (u64)
             Physical ashift for file-based devices.

     zap_iterate_prefetch=1|0 (int)
             If set, when we start iterating over a ZAP object, prefetch the entire object (all
             leaf blocks).  However, this is limited by dmu_prefetch_max.

     zap_micro_max_size=131072B (128 KiB) (int)
             Maximum micro ZAP size.  A micro ZAP is upgraded to a fat ZAP, once it grows beyond
             the specified size.

     zfetch_hole_shift=2 (uint)
             Log2 fraction of holes in speculative prefetch stream allowed for it to proceed.

     zfetch_min_distance=4194304B (4 MiB) (uint)
             Min bytes to prefetch per stream.  Prefetch distance starts from the demand access
             size and quickly grows to this value, doubling on each hit.  After that it may grow
              further by 1/8 per hit, but only if some prefetches issued since the last time have
              not completed in time to satisfy the demand request, i.e. the prefetch depth did not
              cover the read latency or the pool got saturated.

     zfetch_max_distance=67108864B (64 MiB) (uint)
             Max bytes to prefetch per stream.

     zfetch_max_idistance=67108864B (64 MiB) (uint)
             Max bytes to prefetch indirects for per stream.

     zfetch_max_reorder=16777216B (16 MiB) (uint)
             Requests within this byte distance from the current prefetch stream position are
              considered parts of the stream, reordered due to parallel processing.  Such requests
              do not advance the stream position immediately unless the zfetch_hole_shift fill
              threshold is reached, but are saved to fill holes in the stream later.

     zfetch_max_streams=8 (uint)
             Max number of streams per zfetch (prefetch streams per file).

     zfetch_min_sec_reap=1 (uint)
              Min time before an inactive prefetch stream can be reclaimed.

     zfetch_max_sec_reap=2 (uint)
              Max time before an inactive prefetch stream can be deleted.

     zfs_abd_scatter_enabled=1|0 (int)
              Enable the use of scatter/gather lists for ARC data buffers.  When disabled, all
              allocations are forced to be linear in kernel memory.  Disabling can improve
              performance in some code paths at the expense of fragmented kernel memory.

     zfs_abd_scatter_max_order=MAX_ORDER-1 (uint)
             Maximum number of consecutive memory pages allocated in a single block for
             scatter/gather lists.

             The value of MAX_ORDER depends on kernel configuration.

     zfs_abd_scatter_min_size=1536B (1.5 KiB) (uint)
             This is the minimum allocation size that will use scatter (page-based) ABDs.
             Smaller allocations will use linear ABDs.

     zfs_arc_dnode_limit=0B (u64)
             When the number of bytes consumed by dnodes in the ARC exceeds this number of bytes,
             try to unpin some of it in response to demand for non-metadata.  This value acts as
              a ceiling to the amount of dnode metadata, and defaults to 0, which indicates that
              zfs_arc_dnode_limit_percent of the ARC meta buffers may be used for dnodes.

     zfs_arc_dnode_limit_percent=10% (u64)
             Percentage that can be consumed by dnodes of ARC meta buffers.

             See also zfs_arc_dnode_limit, which serves a similar purpose but has a higher
             priority if nonzero.

     zfs_arc_dnode_reduce_percent=10% (u64)
             Percentage of ARC dnodes to try to scan in response to demand for non-metadata when
             the number of bytes consumed by dnodes exceeds zfs_arc_dnode_limit.

     zfs_arc_average_blocksize=8192B (8 KiB) (uint)
             The ARC's buffer hash table is sized based on the assumption of an average block
             size of this value.  This works out to roughly 1 MiB of hash table per 1 GiB of
             physical memory with 8-byte pointers.  For configurations with a known larger
             average block size, this value can be increased to reduce the memory footprint.
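
              A rough sketch of the sizing rule (one 8-byte pointer per assumed average-sized
              block of RAM; the kernel code may round differently), using a hypothetical 64 GiB
              machine:

                    physmem = 64 * 1024**3                   # assumed physical memory
                    zfs_arc_average_blocksize = 8192         # default: 8 KiB
                    table = (physmem // zfs_arc_average_blocksize) * 8
                    print(table // 2**20, "MiB of hash table")   # 64 MiB, i.e. ~1 MiB per GiB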

     zfs_arc_eviction_pct=200% (uint)
             When arc_is_overflowing(), arc_get_data_impl() waits for this percent of the
             requested amount of data to be evicted.  For example, by default, for every 2 KiB
             that's evicted, 1 KiB of it may be "reused" by a new allocation.  Since this is
             above 100%, it ensures that progress is made towards getting arc_size under arc_c.
             Since this is finite, it ensures that allocations can still happen, even during the
             potentially long time that arc_size is more than arc_c.

     zfs_arc_evict_batch_limit=10 (uint)
              Number of ARC headers to evict per sub-list before proceeding to another sub-list.
             This batch-style operation prevents entire sub-lists from being evicted at once but
             comes at a cost of additional unlocking and locking.

     zfs_arc_grow_retry=0s (uint)
              If set to a non-zero value, it will replace the arc_grow_retry value with this
             value.  The arc_grow_retry value (default 5s) is the number of seconds the ARC will
             wait before trying to resume growth after a memory pressure event.

     zfs_arc_lotsfree_percent=10% (int)
             Throttle I/O when free system memory drops below this percentage of total system
             memory.  Setting this value to 0 will disable the throttle.

     zfs_arc_max=0B (u64)
             Max size of ARC in bytes.  If 0, then the max size of ARC is determined by the
             amount of system memory installed.  Under Linux, half of system memory will be used
             as the limit.  Under FreeBSD, the larger of all_system_memory - 1 GiB and 5/8 ×
             all_system_memory will be used as the limit.  This value must be at least 67108864B
             (64 MiB).

             This value can be changed dynamically, with some caveats.  It cannot be set back to
             0 while running, and reducing it below the current ARC size will not cause the ARC
             to shrink without memory pressure to induce shrinking.
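
              A sketch of the platform defaults described in the first paragraph above, using an
              assumed 32 GiB of installed memory:

                    physmem = 32 * 1024**3                              # assumed system memory
                    linux_limit = physmem // 2                          # half of system memory
                    freebsd_limit = max(physmem - 1 * 1024**3,          # all_system_memory - 1 GiB
                                        physmem * 5 // 8)               # versus 5/8 of it
                    print(linux_limit // 2**30, freebsd_limit // 2**30) # 16 GiB and 31 GiB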

     zfs_arc_meta_balance=500 (uint)
             Balance between metadata and data on ghost hits.  Values above 100 increase metadata
             caching by proportionally reducing effect of ghost data hits on target data/metadata
             rate.

     zfs_arc_min=0B (u64)
             Min size of ARC in bytes.  If set to 0, arc_c_min will default to consuming the
             larger of 32 MiB and all_system_memory / 32.

     zfs_arc_min_prefetch_ms=0ms(≡1s) (uint)
             Minimum time prefetched blocks are locked in the ARC.

     zfs_arc_min_prescient_prefetch_ms=0ms(≡6s) (uint)
             Minimum time "prescient prefetched" blocks are locked in the ARC.  These blocks are
             meant to be prefetched fairly aggressively ahead of the code that may use them.

     zfs_arc_prune_task_threads=1 (int)
             Number of arc_prune threads.  FreeBSD does not need more than one.  Linux may
             theoretically use one per mount point up to number of CPUs, but that was not proven
             to be useful.

     zfs_max_missing_tvds=0 (int)
             Number of missing top-level vdevs which will be allowed during pool import (only in
             read-only mode).

      zfs_max_nvlist_src_size=0 (u64)
             Maximum size in bytes allowed to be passed as zc_nvlist_src_size for ioctls on
             /dev/zfs.  This prevents a user from causing the kernel to allocate an excessive
             amount of memory.  When the limit is exceeded, the ioctl fails with EINVAL and a
             description of the error is sent to the zfs-dbgmsg log.  This parameter should not
             need to be touched under normal circumstances.  If 0, equivalent to a quarter of the
             user-wired memory limit under FreeBSD and to 134217728B (128 MiB) under Linux.

     zfs_multilist_num_sublists=0 (uint)
             To allow more fine-grained locking, each ARC state contains a series of lists for
             both data and metadata objects.  Locking is performed at the level of these "sub-
             lists".  This parameters controls the number of sub-lists per ARC state, and also
             applies to other uses of the multilist data structure.

             If 0, equivalent to the greater of the number of online CPUs and 4.

     zfs_arc_overflow_shift=8 (int)
             The ARC size is considered to be overflowing if it exceeds the current ARC target
             size (arc_c) by thresholds determined by this parameter.  Exceeding by (arc_c >>
             zfs_arc_overflow_shift) / 2 starts ARC reclamation process.  If that appears
             insufficient, exceeding by (arc_c >> zfs_arc_overflow_shift) × 1.5 blocks new buffer
              allocation until the reclaim thread catches up.  Once started, the reclamation
              process continues until the ARC size returns below the target size.

             The default value of 8 causes the ARC to start reclamation if it exceeds the target
             size by 0.2% of the target size, and block allocations by 0.6%.
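
              A worked check of the 0.2% and 0.6% figures quoted above, assuming a 16 GiB ARC
              target purely for illustration:

                    arc_c = 16 * 1024**3                                # assumed ARC target size
                    zfs_arc_overflow_shift = 8
                    start_reclaim = (arc_c >> zfs_arc_overflow_shift) // 2
                    block_allocs  = (arc_c >> zfs_arc_overflow_shift) * 3 // 2
                    print(round(100 * start_reclaim / arc_c, 3),        # ≈0.195%, i.e. ~0.2%
                          round(100 * block_allocs / arc_c, 3))         # ≈0.586%, i.e. ~0.6%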

     zfs_arc_shrink_shift=0 (uint)
             If nonzero, this will update arc_shrink_shift (default 7) with the new value.

     zfs_arc_pc_percent=0% (off) (uint)
             Percent of pagecache to reclaim ARC to.

             This tunable allows the ZFS ARC to play more nicely with the kernel's LRU pagecache.
             It can guarantee that the ARC size won't collapse under scanning pressure on the
             pagecache, yet still allows the ARC to be reclaimed down to zfs_arc_min if
             necessary.  This value is specified as percent of pagecache size (as measured by
             NR_FILE_PAGES), where that percent may exceed 100.  This only operates during memory
             pressure/reclaim.

     zfs_arc_shrinker_limit=10000 (int)
             This is a limit on how many pages the ARC shrinker makes available for eviction in
             response to one page allocation attempt.  Note that in practice, the kernel's
             shrinker can ask us to evict up to about four times this for one allocation attempt.

             The default limit of 10000 (in practice, 160 MiB per allocation attempt with 4 KiB
             pages) limits the amount of time spent attempting to reclaim ARC memory to less than
             100 ms per allocation attempt, even with a small average compressed block size of ~8
             KiB.

             The parameter can be set to 0 (zero) to disable the limit, and only applies on
             Linux.
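
              A worked check of the "160 MiB per allocation attempt" figure quoted above,
              assuming 4 KiB pages:

                    zfs_arc_shrinker_limit = 10000
                    page_size = 4096                  # assumed page size
                    overshoot = 4                     # kernel may ask for up to ~4x the limit
                    print(zfs_arc_shrinker_limit * page_size * overshoot // 2**20, "MiB")  # ~156 MiB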

     zfs_arc_sys_free=0B (u64)
             The target number of bytes the ARC should leave as free memory on the system.  If
             zero, equivalent to the bigger of 512 KiB and all_system_memory/64.

     zfs_autoimport_disable=1|0 (int)
             Disable pool import at module load by ignoring the cache file (spa_config_path).

     zfs_checksum_events_per_second=20/s (uint)
             Rate limit checksum events to this many per second.  Note that this should not be
             set below the ZED thresholds (currently 10 checksums over 10 seconds) or else the
             daemon may not trigger any action.

     zfs_commit_timeout_pct=10% (uint)
             This controls the amount of time that a ZIL block (lwb) will remain "open" when it
             isn't "full", and it has a thread waiting for it to be committed to stable storage.
             The timeout is scaled based on a percentage of the last lwb latency to avoid
             significantly impacting the latency of each individual transaction record (itx).

     zfs_condense_indirect_commit_entry_delay_ms=0ms (int)
             Vdev indirection layer (used for device removal) sleeps for this many milliseconds
             during mapping generation.  Intended for use with the test suite to throttle vdev
             removal speed.

     zfs_condense_indirect_obsolete_pct=25% (uint)
             Minimum percent of obsolete bytes in vdev mapping required to attempt to condense
             (see zfs_condense_indirect_vdevs_enable).  Intended for use with the test suite to
             facilitate triggering condensing as needed.

     zfs_condense_indirect_vdevs_enable=1|0 (int)
             Enable condensing indirect vdev mappings.  When set, attempt to condense indirect
             vdev mappings if the mapping uses more than zfs_condense_min_mapping_bytes bytes of
             memory and if the obsolete space map object uses more than
             zfs_condense_max_obsolete_bytes bytes on-disk.  The condensing process is an attempt
             to save memory by removing obsolete mappings.

     zfs_condense_max_obsolete_bytes=1073741824B (1 GiB) (u64)
             Only attempt to condense indirect vdev mappings if the on-disk size of the obsolete
             space map object is greater than this number of bytes (see
             zfs_condense_indirect_vdevs_enable).

     zfs_condense_min_mapping_bytes=131072B (128 KiB) (u64)
             Minimum size vdev mapping to attempt to condense (see
             zfs_condense_indirect_vdevs_enable).

     zfs_dbgmsg_enable=1|0 (int)
             Internally ZFS keeps a small log to facilitate debugging.  The log is enabled by
             default, and can be disabled by unsetting this option.  The contents of the log can
             be accessed by reading /proc/spl/kstat/zfs/dbgmsg.  Writing 0 to the file clears the
             log.

             This setting does not influence debug prints due to zfs_flags.

     zfs_dbgmsg_maxsize=4194304B (4 MiB) (uint)
             Maximum size of the internal ZFS debug log.

     zfs_dbuf_state_index=0 (int)
             Historically used for controlling what reporting was available under
             /proc/spl/kstat/zfs.  No effect.

     zfs_deadman_enabled=1|0 (int)
             When a pool sync operation takes longer than zfs_deadman_synctime_ms, or when an
             individual I/O operation takes longer than zfs_deadman_ziotime_ms, then the
             operation is considered to be "hung".  If zfs_deadman_enabled is set, then the
             deadman behavior is invoked as described by zfs_deadman_failmode.  By default, the
             deadman is enabled and set to wait which results in "hung" I/O operations only being
             logged.  The deadman is automatically disabled when a pool gets suspended.

     zfs_deadman_failmode=wait (charp)
             Controls the failure behavior when the deadman detects a "hung" I/O operation.
             Valid values are:
                 wait      Wait for a "hung" operation to complete.  For each "hung" operation a
                           "deadman" event will be posted describing that operation.
                 continue  Attempt to recover from a "hung" operation by re-dispatching it to the
                           I/O pipeline if possible.
                 panic     Panic the system.  This can be used to facilitate automatic fail-over
                           to a properly configured fail-over partner.

     zfs_deadman_checktime_ms=60000ms (1 min) (u64)
             Check time in milliseconds.  This defines the frequency at which we check for hung
             I/O requests and potentially invoke the zfs_deadman_failmode behavior.

     zfs_deadman_synctime_ms=600000ms (10 min) (u64)
             Interval in milliseconds after which the deadman is triggered and also the interval
             after which a pool sync operation is considered to be "hung".  Once this limit is
             exceeded the deadman will be invoked every zfs_deadman_checktime_ms milliseconds
             until the pool sync completes.

     zfs_deadman_ziotime_ms=300000ms (5 min) (u64)
             Interval in milliseconds after which the deadman is triggered and an individual I/O
             operation is considered to be "hung".  As long as the operation remains "hung", the
             deadman will be invoked every zfs_deadman_checktime_ms milliseconds until the
             operation completes.

     zfs_dedup_prefetch=0|1 (int)
             Enable prefetching dedup-ed blocks which are going to be freed.

     zfs_delay_min_dirty_percent=60% (uint)
             Start to delay each transaction once there is this amount of dirty data, expressed
             as a percentage of zfs_dirty_data_max.  This value should be at least
             zfs_vdev_async_write_active_max_dirty_percent.  See ZFS TRANSACTION DELAY.

     zfs_delay_scale=500000 (int)
             This controls how quickly the transaction delay approaches infinity.  Larger values
             cause longer delays for a given amount of dirty data.

             For the smoothest delay, this value should be about 1 billion divided by the maximum
             number of operations per second.  This will smoothly handle between ten times and a
             tenth of this number.  See ZFS TRANSACTION DELAY.

             zfs_delay_scale × zfs_dirty_data_max must be smaller than 2^64.
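
              A sketch of the sizing guidance above, assuming a pool that sustains about 2000
              write operations per second (a hypothetical figure):

                    max_ops_per_sec = 2000                              # assumed workload
                    zfs_delay_scale = 1_000_000_000 // max_ops_per_sec
                    print(zfs_delay_scale)                              # 500000, the default
                    zfs_dirty_data_max = 4 * 1024**3                    # example value
                    assert zfs_delay_scale * zfs_dirty_data_max < 2**64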

     zfs_disable_ivset_guid_check=0|1 (int)
             Disables requirement for IVset GUIDs to be present and match when doing a raw
             receive of encrypted datasets.  Intended for users whose pools were created with
             OpenZFS pre-release versions and now have compatibility issues.

     zfs_key_max_salt_uses=400000000 (4*10^8) (ulong)
             Maximum number of uses of a single salt value before generating a new one for
             encrypted datasets.  The default value is also the maximum.

     zfs_object_mutex_size=64 (uint)
             Size of the znode hashtable used for holds.

             Due to the need to hold locks on objects that may not exist yet, kernel mutexes are
             not created per-object and instead a hashtable is used where collisions will result
             in objects waiting when there is not actually contention on the same object.

     zfs_slow_io_events_per_second=20/s (int)
             Rate limit delay and deadman zevents (which report slow I/O operations) to this many
             per second.

     zfs_unflushed_max_mem_amt=1073741824B (1 GiB) (u64)
             Upper-bound limit for unflushed metadata changes to be held by the log spacemap in
             memory, in bytes.

     zfs_unflushed_max_mem_ppm=1000ppm (0.1%) (u64)
             Part of overall system memory that ZFS allows to be used for unflushed metadata
             changes by the log spacemap, in millionths.

     zfs_unflushed_log_block_max=131072 (128k) (u64)
             Describes the maximum number of log spacemap blocks allowed for each pool.  The
             default value means that the space in all the log spacemaps can add up to no more
             than 131072 blocks (which means 16 GiB of logical space before compression and ditto
             blocks, assuming that blocksize is 128 KiB).
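
              A worked check of the 16 GiB figure quoted above:

                    zfs_unflushed_log_block_max = 131072
                    log_blocksize = 128 * 1024          # assumed 128 KiB log blocks
                    print(zfs_unflushed_log_block_max * log_blocksize // 2**30, "GiB")   # 16 GiB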

             This tunable is important because it involves a trade-off between import time after
             an unclean export and the frequency of flushing metaslabs.  The higher this number
             is, the more log blocks we allow when the pool is active which means that we flush
             metaslabs less often and thus decrease the number of I/O operations for spacemap
             updates per TXG.  At the same time though, that means that in the event of an
             unclean export, there will be more log spacemap blocks for us to read, inducing
              overhead in the import time of the pool.  The lower the number, the more flushing
              occurs, destroying log blocks quicker as they become obsolete faster, which leaves
              fewer blocks to be read during import after a crash.

             Each log spacemap block existing during pool import leads to approximately one extra
             logical I/O issued.  This is the reason why this tunable is exposed in terms of
             blocks rather than space used.

     zfs_unflushed_log_block_min=1000 (u64)
             If the number of metaslabs is small and our incoming rate is high, we could get into
             a situation that we are flushing all our metaslabs every TXG.  Thus we always allow
             at least this many log blocks.

     zfs_unflushed_log_block_pct=400% (u64)
             Tunable used to determine the number of blocks that can be used for the spacemap
             log, expressed as a percentage of the total number of unflushed metaslabs in the
             pool.

     zfs_unflushed_log_txg_max=1000 (u64)
             Tunable limiting maximum time in TXGs any metaslab may remain unflushed.  It
             effectively limits maximum number of unflushed per-TXG spacemap logs that need to be
             read after unclean pool export.

     zfs_unlink_suspend_progress=0|1 (uint)
             When enabled, files will not be asynchronously removed from the list of pending
             unlinks and the space they consume will be leaked.  Once this option has been
             disabled and the dataset is remounted, the pending unlinks will be processed and the
             freed space returned to the pool.  This option is used by the test suite.

     zfs_delete_blocks=20480 (ulong)
              This is used to define a large file for the purposes of deletion.  Files containing
              more than this many blocks will be deleted asynchronously, while smaller
             files are deleted synchronously.  Decreasing this value will reduce the time spent
             in an unlink(2) system call, at the expense of a longer delay before the freed space
             is available.  This only applies on Linux.

     zfs_dirty_data_max= (int)
             Determines the dirty space limit in bytes.  Once this limit is exceeded, new writes
             are halted until space frees up.  This parameter takes precedence over
             zfs_dirty_data_max_percent.  See ZFS TRANSACTION DELAY.

             Defaults to physical_ram/10, capped at zfs_dirty_data_max_max.

     zfs_dirty_data_max_max= (int)
             Maximum allowable value of zfs_dirty_data_max, expressed in bytes.  This limit is
             only enforced at module load time, and will be ignored if zfs_dirty_data_max is
             later changed.  This parameter takes precedence over zfs_dirty_data_max_max_percent.
             See ZFS TRANSACTION DELAY.

             Defaults to min(physical_ram/4, 4GiB), or min(physical_ram/4, 1GiB) for 32-bit
             systems.

     zfs_dirty_data_max_max_percent=25% (uint)
             Maximum allowable value of zfs_dirty_data_max, expressed as a percentage of physical
             RAM.  This limit is only enforced at module load time, and will be ignored if
             zfs_dirty_data_max is later changed.  The parameter zfs_dirty_data_max_max takes
             precedence over this one.  See ZFS TRANSACTION DELAY.

     zfs_dirty_data_max_percent=10% (uint)
             Determines the dirty space limit, expressed as a percentage of all memory.  Once
             this limit is exceeded, new writes are halted until space frees up.  The parameter
             zfs_dirty_data_max takes precedence over this one.  See ZFS TRANSACTION DELAY.

             Subject to zfs_dirty_data_max_max.

     zfs_dirty_data_sync_percent=20% (uint)
             Start syncing out a transaction group if there's at least this much dirty data (as a
             percentage of zfs_dirty_data_max).  This should be less than
             zfs_vdev_async_write_active_min_dirty_percent.

     zfs_wrlog_data_max= (int)
             The upper limit of write-transaction zil log data size in bytes.  Write operations
             are throttled when approaching the limit until log data is cleared out after
             transaction group sync.  Because of some overhead, it should be set at least 2 times
             the size of zfs_dirty_data_max to prevent harming normal write throughput.  It also
             should be smaller than the size of the slog device if slog is present.

              Defaults to zfs_dirty_data_max × 2.

     zfs_fallocate_reserve_percent=110% (uint)
             Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be
             preallocated for a file in order to guarantee that later writes will not run out of
             space.  Instead, fallocate(2) space preallocation only checks that sufficient space
             is currently available in the pool or the user's project quota allocation, and then
             creates a sparse file of the requested size.  The requested space is multiplied by
             zfs_fallocate_reserve_percent to allow additional space for indirect blocks and
             other internal metadata.  Setting this to 0 disables support for fallocate(2) and
             causes it to return EOPNOTSUPP.
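
              A sketch of the reservation check described above, using a hypothetical 10 GiB
              fallocate(2) request:

                    requested = 10 * 1024**3                            # assumed request size
                    zfs_fallocate_reserve_percent = 110
                    required = requested * zfs_fallocate_reserve_percent // 100
                    print(required / 1024**3, "GiB must currently be available")   # 11.0 GiB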

     zfs_fletcher_4_impl=fastest (string)
             Select a fletcher 4 implementation.

             Supported selectors are: fastest, scalar, sse2, ssse3, avx2, avx512f, avx512bw, and
             aarch64_neon.  All except fastest and scalar require instruction set extensions to
             be available, and will only appear if ZFS detects that they are present at runtime.
             If multiple implementations of fletcher 4 are available, the fastest will be chosen
             using a micro benchmark.  Selecting scalar results in the original CPU-based
             calculation being used.  Selecting any option other than fastest or scalar results
             in vector instructions from the respective CPU instruction set being used.

     zfs_bclone_enabled=1|0 (int)
             Enable the experimental block cloning feature.  If this setting is 0, then even if
             feature@block_cloning is enabled, attempts to clone blocks will act as though the
             feature is disabled.

     zfs_bclone_wait_dirty=0|1 (int)
             When set to 1 the FICLONE and FICLONERANGE ioctls wait for dirty data to be written
             to disk.  This allows the clone operation to reliably succeed when a file is
             modified and then immediately cloned.  For small files this may be slower than
             making a copy of the file.  Therefore, this setting defaults to 0 which causes a
             clone operation to immediately fail when encountering a dirty block.

     zfs_blake3_impl=fastest (string)
             Select a BLAKE3 implementation.

             Supported selectors are: cycle, fastest, generic, sse2, sse41, avx2, avx512.  All
             except cycle, fastest and generic require instruction set extensions to be
             available, and will only appear if ZFS detects that they are present at runtime.  If
             multiple implementations of BLAKE3 are available, the fastest will be chosen using a
             micro benchmark. You can see the benchmark results by reading this kstat file:
             /proc/spl/kstat/zfs/chksum_bench.

     zfs_free_bpobj_enabled=1|0 (int)
             Enable/disable the processing of the free_bpobj object.

     zfs_async_block_max_blocks=UINT64_MAX (unlimited) (u64)
             Maximum number of blocks freed in a single TXG.

     zfs_max_async_dedup_frees=100000 (10^5) (u64)
             Maximum number of dedup blocks freed in a single TXG.

     zfs_vdev_async_read_max_active=3 (uint)
             Maximum asynchronous read I/O operations active to each device.  See ZFS I/O
             SCHEDULER.

     zfs_vdev_async_read_min_active=1 (uint)
             Minimum asynchronous read I/O operation active to each device.  See ZFS I/O
             SCHEDULER.

     zfs_vdev_async_write_active_max_dirty_percent=60% (uint)
             When the pool has more than this much dirty data, use
             zfs_vdev_async_write_max_active to limit active async writes.  If the dirty data is
             between the minimum and maximum, the active I/O limit is linearly interpolated.  See
             ZFS I/O SCHEDULER.

     zfs_vdev_async_write_active_min_dirty_percent=30% (uint)
             When the pool has less than this much dirty data, use
             zfs_vdev_async_write_min_active to limit active async writes.  If the dirty data is
             between the minimum and maximum, the active I/O limit is linearly interpolated.  See
             ZFS I/O SCHEDULER.
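
              A sketch of the linear interpolation described in the two entries above (not the
              exact kernel code; the defaults are shown as assumed values):

                    def async_write_active(dirty_pct, min_active=2, max_active=10, lo=30, hi=60):
                        # lo/hi are zfs_vdev_async_write_active_{min,max}_dirty_percent.
                        if dirty_pct <= lo:
                            return min_active
                        if dirty_pct >= hi:
                            return max_active
                        frac = (dirty_pct - lo) / (hi - lo)
                        return round(min_active + frac * (max_active - min_active))

                    print([async_write_active(p) for p in (10, 30, 45, 60, 80)])  # [2, 2, 6, 10, 10]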

     zfs_vdev_async_write_max_active=10 (uint)
             Maximum asynchronous write I/O operations active to each device.  See ZFS I/O
             SCHEDULER.

     zfs_vdev_async_write_min_active=2 (uint)
             Minimum asynchronous write I/O operations active to each device.  See ZFS I/O
             SCHEDULER.

             Lower values are associated with better latency on rotational media but poorer
             resilver performance.  The default value of 2 was chosen as a compromise.  A value
             of 3 has been shown to improve resilver performance further at a cost of further
             increasing latency.

     zfs_vdev_initializing_max_active=1 (uint)
             Maximum initializing I/O operations active to each device.  See ZFS I/O SCHEDULER.

     zfs_vdev_initializing_min_active=1 (uint)
             Minimum initializing I/O operations active to each device.  See ZFS I/O SCHEDULER.

     zfs_vdev_max_active=1000 (uint)
             The maximum number of I/O operations active to each device.  Ideally, this will be
             at least the sum of each queue's max_active.  See ZFS I/O SCHEDULER.

     zfs_vdev_open_timeout_ms=1000 (uint)
             Timeout value to wait before determining a device is missing during import.  This is
             helpful for transient missing paths due to links being briefly removed and recreated
             in response to udev events.

     zfs_vdev_rebuild_max_active=3 (uint)
             Maximum sequential resilver I/O operations active to each device.  See ZFS I/O
             SCHEDULER.

     zfs_vdev_rebuild_min_active=1 (uint)
             Minimum sequential resilver I/O operations active to each device.  See ZFS I/O
             SCHEDULER.

     zfs_vdev_removal_max_active=2 (uint)
             Maximum removal I/O operations active to each device.  See ZFS I/O SCHEDULER.

     zfs_vdev_removal_min_active=1 (uint)
             Minimum removal I/O operations active to each device.  See ZFS I/O SCHEDULER.

     zfs_vdev_scrub_max_active=2 (uint)
             Maximum scrub I/O operations active to each device.  See ZFS I/O SCHEDULER.

     zfs_vdev_scrub_min_active=1 (uint)
             Minimum scrub I/O operations active to each device.  See ZFS I/O SCHEDULER.

     zfs_vdev_sync_read_max_active=10 (uint)
             Maximum synchronous read I/O operations active to each device.  See ZFS I/O
             SCHEDULER.

     zfs_vdev_sync_read_min_active=10 (uint)
             Minimum synchronous read I/O operations active to each device.  See ZFS I/O
             SCHEDULER.

     zfs_vdev_sync_write_max_active=10 (uint)
             Maximum synchronous write I/O operations active to each device.  See ZFS I/O
             SCHEDULER.

     zfs_vdev_sync_write_min_active=10 (uint)
             Minimum synchronous write I/O operations active to each device.  See ZFS I/O
             SCHEDULER.

     zfs_vdev_trim_max_active=2 (uint)
             Maximum trim/discard I/O operations active to each device.  See ZFS I/O SCHEDULER.

     zfs_vdev_trim_min_active=1 (uint)
             Minimum trim/discard I/O operations active to each device.  See ZFS I/O SCHEDULER.

     zfs_vdev_nia_delay=5 (uint)
             For non-interactive I/O (scrub, resilver, removal, initialize and rebuild), the
             number of concurrently-active I/O operations is limited to zfs_*_min_active, unless
             the vdev is "idle".  When there are no interactive I/O operations active
             (synchronous or otherwise), and zfs_vdev_nia_delay operations have completed since
             the last interactive operation, then the vdev is considered to be "idle", and the
             number of concurrently-active non-interactive operations is increased to
             zfs_*_max_active.  See ZFS I/O SCHEDULER.

     zfs_vdev_nia_credit=5 (uint)
             Some HDDs tend to prioritize sequential I/O so strongly, that concurrent random I/O
             latency reaches several seconds.  On some HDDs this happens even if sequential I/O
              operations are submitted one at a time, and so setting zfs_*_max_active=1 does not
             help.  To prevent non-interactive I/O, like scrub, from monopolizing the device, no
             more than zfs_vdev_nia_credit operations can be sent while there are outstanding
             incomplete interactive operations.  This enforced wait ensures the HDD services the
             interactive I/O within a reasonable amount of time.  See ZFS I/O SCHEDULER.

     zfs_vdev_queue_depth_pct=1000% (uint)
             Maximum number of queued allocations per top-level vdev expressed as a percentage of
             zfs_vdev_async_write_max_active, which allows the system to detect devices that are
             more capable of handling allocations and to allocate more blocks to those devices.
             This allows for dynamic allocation distribution when devices are imbalanced, as
             fuller devices will tend to be slower than empty devices.

             Also see zio_dva_throttle_enabled.

     zfs_vdev_def_queue_depth=32 (uint)
             Default queue depth for each vdev IO allocator.  Higher values allow for better
             coalescing of sequential writes before sending them to the disk, but can increase
             transaction commit times.

     zfs_vdev_failfast_mask=1 (uint)
             Defines if the driver should retire on a given error type.  The following options
             may be bitwise-ored together:

             ┌───────────────────────────────────────────────────────────────┐
             │    Value   Name        Description                            │
             ├───────────────────────────────────────────────────────────────┤
              │        1   Device      No driver retries on device errors.    │
             │        2   Transport   No driver retries on transport errors. │
             │        4   Driver      No driver retries on driver errors.    │
             └───────────────────────────────────────────────────────────────┘
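
              A sketch of combining these bit values (the constant names are illustrative only):

                    DEVICE, TRANSPORT, DRIVER = 1, 2, 4
                    zfs_vdev_failfast_mask = DEVICE | TRANSPORT   # no retries on device or transport errors
                    print(zfs_vdev_failfast_mask)                 # 3
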
     zfs_vdev_disk_max_segs=0 (uint)
             Maximum number of segments to add to a BIO (min 4).  If this is higher than the
             maximum allowed by the device queue or the kernel itself, it will be clamped.
             Setting it to zero will cause the kernel's ideal size to be used.  This parameter
             only applies on Linux.  This parameter is ignored if zfs_vdev_disk_classic=1.

     zfs_vdev_disk_classic=0|1 (uint)
              Controls the method used to submit IO to the Linux block layer (default: 1, "classic").

             If set to 1, the "classic" method is used.  This is the method that has been in use
             since the earliest versions of ZFS-on-Linux.  It has known issues with highly
              fragmented IO requests and is less efficient on many workloads, but it is well known
             and well understood.

             If set to 0, the "new" method is used.  This method is available since 2.2.4 and
             should resolve all known issues and be far more efficient, but has not had as much
             testing.  In the 2.2.x series, this parameter defaults to 1, to use the "classic"
             method.

             It is not recommended that you change it except on advice from the OpenZFS
             developers.  If you do change it, please also open a bug report describing why you
             did so, including the workload involved and any error messages.

             This parameter and the "classic" submission method will be removed in a future
             release of OpenZFS once we have total confidence in the new method.

             This parameter only applies on Linux, and can only be set at module load time.

     zfs_expire_snapshot=300s (int)
             Time before expiring .zfs/snapshot.

     zfs_admin_snapshot=0|1 (int)
             Allow the creation, removal, or renaming of entries in the .zfs/snapshot directory
             to cause the creation, destruction, or renaming of snapshots.  When enabled, this
             functionality works both locally and over NFS exports which have the no_root_squash
             option set.

     zfs_flags=0 (int)
             Set additional debugging flags.  The following flags may be bitwise-ored together:
             ┌──────────────────────────────────────────────────────────────────────────────────────────────────────────┐
             │    Value   Name                         Description                                                      │
             ├──────────────────────────────────────────────────────────────────────────────────────────────────────────┤
             │        1   ZFS_DEBUG_DPRINTF            Enable dprintf entries in the debug log.                         │
             │*       2   ZFS_DEBUG_DBUF_VERIFY        Enable extra dbuf verifications.                                 │
             │*       4   ZFS_DEBUG_DNODE_VERIFY       Enable extra dnode verifications.                                │
             │        8   ZFS_DEBUG_SNAPNAMES          Enable snapshot name verification.                               │
             │*      16   ZFS_DEBUG_MODIFY             Check for illegally modified ARC buffers.                        │
             │       64   ZFS_DEBUG_ZIO_FREE           Enable verification of block frees.                              │
             │      128   ZFS_DEBUG_HISTOGRAM_VERIFY   Enable extra spacemap histogram verifications.                   │
             │      256   ZFS_DEBUG_METASLAB_VERIFY    Verify space accounting on disk matches in-memory range_trees.   │
             │      512   ZFS_DEBUG_SET_ERROR          Enable SET_ERROR and dprintf entries in the debug log.           │
             │     1024   ZFS_DEBUG_INDIRECT_REMAP     Verify split blocks created by device removal.                   │
             │     2048   ZFS_DEBUG_TRIM               Verify TRIM ranges are always within the allocatable range tree. │
             │     4096   ZFS_DEBUG_LOG_SPACEMAP       Verify that the log summary is consistent with the spacemap log  │
             │                                                and enable zfs_dbgmsgs for metaslab loading and flushing. │
             └───────────────────────────────────────────────────────────────────────────────────────────────────────────┘
                 * Requires debug build.
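
             As an illustration of combining these values (a small Python sketch, not part of
             the module interface; the constant names are copied from the table above):

                   # Combine debug flags into a single zfs_flags value by bitwise OR.
                   ZFS_DEBUG_SNAPNAMES = 8
                   ZFS_DEBUG_SET_ERROR = 512

                   zfs_flags = ZFS_DEBUG_SNAPNAMES | ZFS_DEBUG_SET_ERROR
                   print(zfs_flags)   # 520: snapshot-name verification plus SET_ERROR logging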

ZFS I/O SCHEDULER

     ZFS issues I/O operations to leaf vdevs to satisfy and complete I/O operations.  The
     scheduler determines when and in what order those operations are issued.  The scheduler
     divides operations into five I/O classes, prioritized in the following order: sync read,
     sync write, async read, async write, and scrub/resilver.  Each queue defines the minimum and
     maximum number of concurrent operations that may be issued to the device.  In addition, the
     device has an aggregate maximum, zfs_vdev_max_active.  Note that the sum of the per-queue
     minima must not exceed the aggregate maximum.  If the sum of the per-queue maxima exceeds
     the aggregate maximum, then the number of active operations may reach zfs_vdev_max_active,
     in which case no further operations will be issued, regardless of whether all per-queue
     minima have been met.

     For many physical devices, throughput increases with the number of concurrent operations,
     but latency typically suffers.  Furthermore, physical devices typically have a limit at
     which more concurrent operations have no effect on throughput or can actually cause it to
     decrease.

     The scheduler selects the next operation to issue by first looking for an I/O class whose
     minimum has not been satisfied.  Once all are satisfied and the aggregate maximum has not
     been hit, the scheduler looks for classes whose maximum has not been satisfied.  Iteration
     through the I/O classes is done in the order specified above.  No further operations are
     issued if the aggregate maximum number of concurrent operations has been hit, or if there
     are no operations queued for an I/O class that has not hit its maximum.  Every time an I/O
     operation is queued or an operation completes, the scheduler looks for new operations to
     issue.
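
     The selection rules above can be sketched as follows (an illustrative Python model, not
     the in-kernel implementation; the function and its arguments are invented for the example,
     with max_total standing in for zfs_vdev_max_active):

           # I/O classes in priority order.
           CLASSES = ["sync_read", "sync_write", "async_read", "async_write", "scrub"]

           def pick_next_class(queued, active, min_active, max_active, max_total):
               # queued/active/min_active/max_active are per-class dictionaries.
               if sum(active.values()) >= max_total:
                   return None                      # aggregate maximum reached
               # First pass: classes whose minimum has not yet been satisfied.
               for c in CLASSES:
                   if queued[c] > 0 and active[c] < min_active[c]:
                       return c
               # Second pass: classes whose maximum has not yet been reached.
               for c in CLASSES:
                   if queued[c] > 0 and active[c] < max_active[c]:
                       return c
               return None                          # nothing eligible to issue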

     In general, smaller max_actives will lead to lower latency of synchronous operations.
     Larger max_actives may lead to higher overall throughput, depending on underlying storage.

     The ratio of the queues' max_actives determines the balance of performance between reads,
     writes, and scrubs.  For example, increasing zfs_vdev_scrub_max_active will cause the scrub
     or resilver to complete more quickly, but reads and writes to have higher latency and lower
     throughput.

     All I/O classes have a fixed maximum number of outstanding operations, except for the async
     write class.  Asynchronous writes represent the data that is committed to stable storage
     during the syncing stage for transaction groups.  Transaction groups enter the syncing state
     periodically, so the number of queued async writes will quickly burst up and then bleed down
     to zero.  Rather than servicing them as quickly as possible, the I/O scheduler changes the
     maximum number of active async write operations according to the amount of dirty data in the
     pool.  Since both throughput and latency typically increase with the number of concurrent
     operations issued to physical devices, reducing the burstiness in the number of simultaneous
     operations also stabilizes the response time of operations from other queues, in particular
     synchronous ones.  In broad strokes, the I/O scheduler will issue more concurrent operations
     from the async write queue as there is more dirty data in the pool.

   Async Writes
     The number of concurrent operations issued for the async write I/O class follows a piece-
     wise linear function defined by a few adjustable points:

            |              o---------| <-- zfs_vdev_async_write_max_active
       ^    |             /^         |
       |    |            / |         |
     active |           /  |         |
      I/O   |          /   |         |
     count  |         /    |         |
            |        /     |         |
            |-------o      |         | <-- zfs_vdev_async_write_min_active
           0|_______^______|_________|
            0%      |      |       100% of zfs_dirty_data_max
                    |      |
                    |      `-- zfs_vdev_async_write_active_max_dirty_percent
                    `--------- zfs_vdev_async_write_active_min_dirty_percent

     Until the amount of dirty data exceeds a minimum percentage of the dirty data allowed in the
     pool, the I/O scheduler will limit the number of concurrent operations to the minimum.  As
     that threshold is crossed, the number of concurrent operations issued increases linearly to
     the maximum at the specified maximum percentage of the dirty data allowed in the pool.
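
     The same interpolation can be written as a short sketch (illustrative Python, not the
     in-kernel code; the argument names mirror the tunables discussed above):

           def async_write_max_active(dirty_pct, min_active, max_active,
                                      min_dirty_pct, max_dirty_pct):
               # dirty_pct: dirty data as a percentage of zfs_dirty_data_max.
               # min_active/max_active mirror zfs_vdev_async_write_{min,max}_active;
               # min_dirty_pct/max_dirty_pct mirror
               # zfs_vdev_async_write_active_{min,max}_dirty_percent.
               if dirty_pct <= min_dirty_pct:
                   return min_active
               if dirty_pct >= max_dirty_pct:
                   return max_active
               slope = (max_active - min_active) / (max_dirty_pct - min_dirty_pct)
               return min_active + slope * (dirty_pct - min_dirty_pct)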

     Ideally, the amount of dirty data on a busy pool will stay in the sloped part of the
     function between zfs_vdev_async_write_active_min_dirty_percent and
     zfs_vdev_async_write_active_max_dirty_percent.  If it exceeds the maximum percentage, this
     indicates that the rate of incoming data is greater than the rate that the backend storage
     can handle.  In this case, we must further throttle incoming writes, as described in the
     next section.

ZFS TRANSACTION DELAY

     We delay transactions when we've determined that the backend storage isn't able to
     accommodate the rate of incoming writes.

     If there is already a transaction waiting, we delay relative to when that transaction will
     finish waiting.  This way the calculated delay time is independent of the number of threads
     concurrently executing transactions.

     If we are the only waiter, wait relative to when the transaction started, rather than the
     current time.  This credits the transaction for "time already served", e.g. reading indirect
     blocks.

     The minimum time for a transaction to take is calculated as
           min_time = min(zfs_delay_scale × (dirty - min) / (max - dirty), 100ms)
     where dirty is the current amount of dirty data, max is zfs_dirty_data_max, and min is the
     threshold at which delays begin (zfs_delay_min_dirty_percent of zfs_dirty_data_max).

     The delay has two degrees of freedom that can be adjusted via tunables.  The percentage of
     dirty data at which we start to delay is defined by zfs_delay_min_dirty_percent.  This
     should typically be at or above zfs_vdev_async_write_active_max_dirty_percent, so that we
     only start to delay after writing at full speed has failed to keep up with the incoming
     write rate.  The scale of the curve is defined by zfs_delay_scale.  Roughly speaking, this
     variable determines the amount of delay at the midpoint of the curve.
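
     Putting the formula and both tunables together (an illustrative Python sketch, not the
     in-kernel code; it treats zfs_delay_scale as a value in nanoseconds, consistent with the
     500 µs midpoint example below):

           def tx_delay_ns(dirty, dirty_data_max, delay_min_dirty_percent, delay_scale_ns):
               # dirty and dirty_data_max are byte counts; the remaining arguments
               # mirror zfs_delay_min_dirty_percent and zfs_delay_scale.
               delay_min = dirty_data_max * delay_min_dirty_percent // 100
               if dirty <= delay_min:
                   return 0                        # below the threshold: no delay
               if dirty >= dirty_data_max:
                   return 100_000_000              # clamp at the 100 ms cap
               raw = delay_scale_ns * (dirty - delay_min) / (dirty_data_max - dirty)
               return min(raw, 100_000_000)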

     delay
      10ms +-------------------------------------------------------------*+
           |                                                             *|
       9ms +                                                             *+
           |                                                             *|
       8ms +                                                             *+
           |                                                            * |
       7ms +                                                            * +
           |                                                            * |
       6ms +                                                            * +
           |                                                            * |
       5ms +                                                           *  +
           |                                                           *  |
       4ms +                                                           *  +
           |                                                           *  |
       3ms +                                                          *   +
           |                                                          *   |
       2ms +                                              (midpoint) *    +
           |                                                  |    **     |
       1ms +                                                  v ***       +
           |             zfs_delay_scale ---------->     ********         |
         0 +-------------------------------------*********----------------+
           0%                    <- zfs_dirty_data_max ->               100%

     Note that, since the delay is added to the outstanding time remaining on the most recent
     transaction, it's effectively the inverse of IOPS.  Here, the midpoint of 500 µs translates
     to 2000 IOPS.  The shape of the curve was chosen such that small changes in the amount of
     accumulated dirty data in the first three quarters of the curve yield relatively small
     differences in the amount of delay.
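
     The arithmetic behind that example (illustrative Python):

           added_delay_s = 500e-6      # 500 µs added to each transaction at the midpoint
           print(1 / added_delay_s)    # 2000.0 transactions per second, i.e. 2000 IOPS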

     The effects can be easier to understand when the amount of delay is represented on a
     logarithmic scale:

     delay
     100ms +-------------------------------------------------------------++
           +                                                              +
           |                                                              |
           +                                                             *+
      10ms +                                                             *+
           +                                                           ** +
           |                                              (midpoint)  **  |
           +                                                  |     **    +
       1ms +                                                  v ****      +
           +             zfs_delay_scale ---------->        *****         +
           |                                             ****             |
           +                                          ****                +
     100us +                                        **                    +
           +                                       *                      +
           |                                      *                       |
           +                                     *                        +
      10us +                                     *                        +
           +                                                              +
           |                                                              |
           +                                                              +
           +--------------------------------------------------------------+
           0%                    <- zfs_dirty_data_max ->               100%

     Note here that only as the amount of dirty data approaches its limit does the delay start to
     increase rapidly.  The goal of a properly tuned system should be to keep the amount of dirty
     data out of that range by first ensuring that the appropriate limits are set for the I/O
     scheduler to reach optimal throughput on the back-end storage, and then by changing the
     value of zfs_delay_scale to increase the steepness of the curve.