Provided by: zfsutils-linux_0.7.5-1ubuntu16.12_amd64

NAME

       zfs-module-parameters - ZFS module parameters

DESCRIPTION

       Description of the different parameters to the ZFS module.
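
        Most of these parameters are exposed under /sys/module/zfs/parameters/ once the module is
        loaded and, where writable, can be changed at runtime.  A minimal sketch (zfs_txg_timeout
        is used purely as an example; any of the parameters below can be substituted):

               # Read the current value of a parameter.
               cat /sys/module/zfs/parameters/zfs_txg_timeout
               # Change it at runtime (requires root; not every parameter is writable).
               echo 10 > /sys/module/zfs/parameters/zfs_txg_timeout

        To make a setting persistent across module loads, an options line can be placed in a
        modprobe configuration file such as /etc/modprobe.d/zfs.conf:

               options zfs zfs_txg_timeout=10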

   Module parameters
       ignore_hole_birth (int)
                   When  set,  the  hole_birth  optimization will not be used, and all holes will
                   always be sent on zfs send. Useful if you suspect your datasets  are  affected
                   by a bug in hole_birth.

                   Use 1 for on (default) and 0 for off.

       l2arc_feed_again (int)
                   Turbo  L2ARC  warm-up. When the L2ARC is cold the fill interval will be set as
                   fast as possible.

                   Use 1 for yes (default) and 0 to disable.

       l2arc_feed_min_ms (ulong)
                    Min feed interval in milliseconds.  Requires l2arc_feed_again=1 and only
                    takes effect while the L2ARC is warming up (see l2arc_feed_again).

                   Default value: 200.

       l2arc_feed_secs (ulong)
                   Seconds between L2ARC writing

                   Default value: 1.

       l2arc_headroom (ulong)
                   How far through the ARC lists to search for L2ARC cacheable content, expressed
                   as a multiplier of l2arc_write_max

                   Default value: 2.

       l2arc_headroom_boost (ulong)
                   Scales l2arc_headroom  by  this  percentage  when  L2ARC  contents  are  being
                   successfully compressed before writing. A value of 100 disables this feature.

                   Default value: 200.

       l2arc_nocompress (int)
                   Skip compressing L2ARC buffers

                   Use 1 for yes and 0 for no (default).

       l2arc_noprefetch (int)
                   Do  not  write  buffers  to  L2ARC  if  they  were  prefetched but not used by
                   applications

                   Use 1 for yes (default) and 0 to disable.

       l2arc_norw (int)
                   No reads during writes

                   Use 1 for yes and 0 for no (default).

       l2arc_write_boost (ulong)
                   Cold L2ARC devices will have l2arc_write_max increased by  this  amount  while
                   they remain cold.

                   Default value: 8,388,608.

       l2arc_write_max (ulong)
                   Max write bytes per interval

                   Default value: 8,388,608.

       metaslab_aliquot (ulong)
                   Metaslab  granularity,  in  bytes.  This  is  roughly similar to what would be
                   referred to as the  "stripe  size"  in  traditional  RAID  arrays.  In  normal
                   operation,  ZFS  will  try  to  write  this amount of data to a top-level vdev
                   before moving on to the next one.

                   Default value: 524,288.

       metaslab_bias_enabled (int)
                   Enable metaslab group biasing based on its vdev's over-  or  under-utilization
                   relative to the pool.

                   Use 1 for yes (default) and 0 for no.

       zfs_metaslab_segment_weight_enabled (int)
                   Enable/disable segment-based metaslab selection.

                   Use 1 for yes (default) and 0 for no.

       zfs_metaslab_switch_threshold (int)
                   When  using  segment-based  metaslab  selection,  continue allocating from the
                   active metaslab until zfs_metaslab_switch_threshold worth of buckets have been
                   exhausted.

                   Default value: 2.

       metaslab_debug_load (int)
                   Load all metaslabs during pool import.

                   Use 1 for yes and 0 for no (default).

       metaslab_debug_unload (int)
                   Prevent metaslabs from being unloaded.

                   Use 1 for yes and 0 for no (default).

       metaslab_fragmentation_factor_enabled (int)
                   Enable use of the fragmentation metric in computing metaslab weights.

                   Use 1 for yes (default) and 0 for no.

       metaslabs_per_vdev (int)
                   When a vdev is added, it will be divided into approximately (but no more than)
                   this number of metaslabs.

                   Default value: 200.

       metaslab_preload_enabled (int)
                   Enable metaslab group preloading.

                   Use 1 for yes (default) and 0 for no.

       metaslab_lba_weighting_enabled (int)
                   Give more weight to metaslabs with lower  LBAs,  assuming  they  have  greater
                   bandwidth  as is typically the case on a modern constant angular velocity disk
                   drive.

                   Use 1 for yes (default) and 0 for no.

       spa_config_path (charp)
                   SPA config file

                   Default value: /etc/zfs/zpool.cache.

       spa_asize_inflation (int)
                   Multiplication factor used to estimate actual disk consumption from  the  size
                   of  data  being written. The default value is a worst case estimate, but lower
                   values may be valid for a given pool depending  on  its  configuration.   Pool
                   administrators  who understand the factors involved may wish to specify a more
                   realistic inflation factor, particularly if they operate  close  to  quota  or
                   capacity limits.

                   Default value: 24.

       spa_load_verify_data (int)
                   Whether to traverse data blocks during an "extreme rewind" (-X) import.  Use 0
                   to disable and 1 to enable.

                   An extreme rewind import normally performs a full traversal of all  blocks  in
                   the pool for verification.  If this parameter is set to 0, the traversal skips
                   non-metadata blocks.  It can be toggled once the import has started to stop or
                   start the traversal of non-metadata blocks.

                   Default value: 1.

       spa_load_verify_metadata (int)
                   Whether to traverse blocks during an "extreme rewind" (-X) pool import.  Use 0
                   to disable and 1 to enable.

                   An extreme rewind import normally performs a full traversal of all  blocks  in
                   the  pool  for  verification.  If this parameter is set to 0, the traversal is
                   not performed.  It can be toggled once the import has started to stop or start
                   the traversal.

                   Default value: 1.
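
                    For example, to stop the block traversal while a long-running extreme rewind
                    import is in progress (a sketch using the runtime parameter path):

                          echo 0 > /sys/module/zfs/parameters/spa_load_verify_metadata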

       spa_load_verify_maxinflight (int)
                   Maximum  concurrent  I/Os  during  the  traversal performed during an "extreme
                   rewind" (-X) pool import.

                   Default value: 10000.

       spa_slop_shift (int)
                   Normally, we don't allow the last 3.2% (1/(2^spa_slop_shift)) of space in  the
                   pool  to  be consumed.  This ensures that we don't run the pool completely out
                   of space, due to unaccounted changes (e.g. to the MOS).  It  also  limits  the
                   worst-case  time  to allocate space.  If we have less than this amount of free
                   space, most ZPL operations (e.g. write, create) will return ENOSPC.

                   Default value: 5.
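
                    As a worked example of the formula above: with the default spa_slop_shift=5
                    the reserved fraction is 1/2^5 = 1/32 of the pool; raising the shift to 6
                    halves the reservation to 1/64.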

       zfetch_array_rd_sz (ulong)
                   If prefetching is enabled, disable prefetching  for  reads  larger  than  this
                   size.

                   Default value: 1,048,576.

       zfetch_max_distance (uint)
                   Max bytes to prefetch per stream (default 8MB).

                   Default value: 8,388,608.

       zfetch_max_streams (uint)
                   Max number of streams per zfetch (prefetch streams per file).

                   Default value: 8.

       zfetch_min_sec_reap (uint)
                   Min time before an active prefetch stream can be reclaimed

                   Default value: 2.

       zfs_arc_dnode_limit (ulong)
                    When the number of bytes consumed by dnodes in the ARC exceeds this number
                    of bytes, try to unpin some of it in response to demand for non-metadata.
                    This value acts as a ceiling to the amount of dnode metadata, and defaults to
                    0, which indicates that a percentage of the ARC meta buffers, determined by
                    zfs_arc_dnode_limit_percent, may be used for dnodes.

                   See  also  zfs_arc_meta_prune  which serves a similar purpose but is used when
                   the amount of metadata in the ARC exceeds zfs_arc_meta_limit  rather  than  in
                   response to overall demand for non-metadata.

                   Default value: 0.

       zfs_arc_dnode_limit_percent (ulong)
                    Percentage of ARC meta buffers that can be consumed by dnodes.

                   See  also  zfs_arc_dnode_limit which serves a similar purpose but has a higher
                   priority if set to nonzero value.

                   Default value: 10.

       zfs_arc_dnode_reduce_percent (ulong)
                   Percentage of ARC dnodes to try to scan in response to demand for non-metadata
                   when the number of bytes consumed by dnodes exceeds zfs_arc_dnode_limit.

                   Default value: 10% of the number of dnodes in the ARC.

       zfs_arc_average_blocksize (int)
                   The  ARC's  buffer  hash  table is sized based on the assumption of an average
                   block size of zfs_arc_average_blocksize  (default  8K).   This  works  out  to
                   roughly  1MB  of  hash  table per 1GB of physical memory with 8-byte pointers.
                   For configurations with a known larger average block size this  value  can  be
                   increased to reduce the memory footprint.

                   Default value: 8192.

       zfs_arc_evict_batch_limit (int)
                    Number of ARC headers to evict per sub-list before proceeding to another sub-
                   list.  This batch-style operation prevents entire sub-lists from being evicted
                   at once but comes at a cost of additional unlocking and locking.

                   Default value: 10.

       zfs_arc_grow_retry (int)
                    If set to a non-zero value, it will replace the arc_grow_retry value with this
                   value.  The arc_grow_retry value (default 5) is the number of seconds the  ARC
                   will wait before trying to resume growth after a memory pressure event.

                   Default value: 0.

       zfs_arc_lotsfree_percent (int)
                   Throttle  I/O  when  free  system  memory drops below this percentage of total
                   system memory.  Setting this value to 0 will disable the throttle.

                   Default value: 10.

       zfs_arc_max (ulong)
                    Maximum size of the ARC in bytes.  If set to 0 then it will consume 1/2 of
                    system RAM.  This value must be at least 67108864 (64 megabytes).

                   This value can be changed dynamically with some caveats. It cannot be set back
                   to 0 while running and reducing it below the current ARC size will  not  cause
                   the ARC to shrink without memory pressure to induce shrinking.

                   Default value: 0.
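
                    For example, to cap the ARC at 4 GiB at runtime (a sketch; the value is
                    arbitrary but must be at least 67108864):

                          echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max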

       zfs_arc_meta_adjust_restarts (ulong)
                    The number of restart passes to make while scanning the ARC, attempting to
                    free buffers in order to stay below the zfs_arc_meta_limit.  This value should
                   not need to be tuned but is available to facilitate performance analysis.

                   Default value: 4096.

       zfs_arc_meta_limit (ulong)
                    The maximum size in bytes that metadata buffers are allowed to consume in
                    the ARC.  When this limit is reached metadata buffers will be reclaimed even
                    if the overall arc_c_max has not been reached.  This value defaults to 0,
                    which indicates that a percentage of the ARC, determined by
                    zfs_arc_meta_limit_percent, may be used for metadata.

                    This value may be changed dynamically except that it cannot be set back to 0
                   for a specific percent of the ARC; it must be set to an explicit value.

                   Default value: 0.

       zfs_arc_meta_limit_percent (ulong)
                   Percentage of ARC buffers that can be used for meta data.

                   See also zfs_arc_meta_limit which serves a similar purpose but  has  a  higher
                   priority if set to nonzero value.

                   Default value: 75.

       zfs_arc_meta_min (ulong)
                    The minimum allowed size in bytes that metadata buffers may consume in the
                    ARC.  This value defaults to 0 which disables a floor on the amount of the
                    ARC devoted to metadata.

                   Default value: 0.

       zfs_arc_meta_prune (int)
                   The  number of dentries and inodes to be scanned looking for entries which can
                   be dropped.  This may be required when the ARC reaches the  zfs_arc_meta_limit
                   because dentries and inodes can pin buffers in the ARC.  Increasing this value
                    will cause the dentry and inode caches to be pruned more aggressively.  Setting
                   this value to 0 will disable pruning the inode and dentry caches.

                   Default value: 10,000.

       zfs_arc_meta_strategy (int)
                   Define the strategy for ARC meta data buffer eviction (meta reclaim strategy).
                   A value of 0 (META_ONLY) will evict only the ARC meta data buffers.   A  value
                   of  1 (BALANCED) indicates that additional data buffers may be evicted if that
                    is required in order to evict the required number of meta data buffers.

                   Default value: 1.

       zfs_arc_min (ulong)
                    Minimum size of the ARC in bytes.  If set to 0 then arc_c_min will default
                    to consuming the larger of 32M or 1/32 of total system memory.

                   Default value: 0.

       zfs_arc_min_prefetch_lifespan (int)
                   Minimum time prefetched blocks are locked in the ARC, specified in jiffies.  A
                   value of 0 will default to 1 second.

                   Default value: 0.

       zfs_multilist_num_sublists (int)
                   To allow more fine-grained locking, each ARC state contains a series of  lists
                   for  both  data  and  meta data objects.  Locking is performed at the level of
                    these "sub-lists".  This parameter controls the number of sub-lists per ARC
                   state, and also applies to other uses of the multilist data structure.

                    Default value: 4 or the number of online CPUs, whichever is greater.

       zfs_arc_overflow_shift (int)
                   The  ARC  size  is  considered to be overflowing if it exceeds the current ARC
                   target size  (arc_c)  by  a  threshold  determined  by  this  parameter.   The
                   threshold  is  calculated  as  a fraction of arc_c using the formula "arc_c >>
                   zfs_arc_overflow_shift".

                   The default value of 8 causes the ARC to be considered to be overflowing if it
                    exceeds the target size by 1/256th (about 0.4%) of the target size.

                   When  the  ARC  is  overflowing,  new buffer allocations are stalled until the
                   reclaim thread catches up and the overflow condition no longer exists.

                   Default value: 8.

       zfs_arc_p_min_shift (int)
                    If set to a non-zero value, this will update arc_p_min_shift (default 4) with
                    the new value.  arc_p_min_shift is used as a shift of arc_c when calculating
                    both the minimum and maximum arc_p.

                   Default value: 0.

       zfs_arc_p_aggressive_disable (int)
                   Disable aggressive arc_p growth

                   Use 1 for yes (default) and 0 to disable.

       zfs_arc_p_dampener_disable (int)
                   Disable arc_p adapt dampener

                   Use 1 for yes (default) and 0 to disable.

       zfs_arc_shrink_shift (int)
                    If set to a non-zero value, this will update arc_shrink_shift (default 7) with
                   the new value.

                   Default value: 0.

       zfs_arc_pc_percent (uint)
                   Percent of pagecache to reclaim arc to

                   This  tunable  allows  ZFS  arc  to  play  more  nicely  with the kernel's LRU
                   pagecache. It can guarantee that the arc size won't  collapse  under  scanning
                   pressure  on  the  pagecache,  yet  still  allows  arc to be reclaimed down to
                   zfs_arc_min if necessary. This value is specified as percent of pagecache size
                   (as  measured  by  NR_FILE_PAGES) where that percent may exceed 100. This only
                   operates during memory pressure/reclaim.

                   Default value: 0 (disabled).

       zfs_arc_sys_free (ulong)
                   The target number of bytes the ARC should leave as free memory on the  system.
                   Defaults  to  the  larger  of  1/64  of physical memory or 512K.  Setting this
                   option to a non-zero value will override the default.

                   Default value: 0.

       zfs_autoimport_disable (int)
                   Disable pool import at module load  by  ignoring  the  cache  file  (typically
                   /etc/zfs/zpool.cache).

                   Use 1 for yes (default) and 0 for no.

       zfs_dbgmsg_enable (int)
                   Internally  ZFS keeps a small log to facilitate debugging.  By default the log
                   is disabled, to enable it set this option to 1.  The contents of the  log  can
                   be accessed by reading the /proc/spl/kstat/zfs/dbgmsg file.  Writing 0 to this
                   proc file clears the log.

                   Default value: 0.
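
                    For example (a sketch using the paths described above):

                          # enable the internal debug log
                          echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable
                          # read the log contents
                          cat /proc/spl/kstat/zfs/dbgmsg
                          # clear the log
                          echo 0 > /proc/spl/kstat/zfs/dbgmsg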

       zfs_dbgmsg_maxsize (int)
                   The maximum size in bytes of the internal ZFS debug log.

                   Default value: 4M.

       zfs_dbuf_state_index (int)
                    This parameter is currently unused.  It is intended to control what reporting
                    is available under /proc/spl/kstat/zfs.

                   Default value: 0.

       zfs_deadman_enabled (int)
                   When   a   pool  sync  operation  takes  longer  than  zfs_deadman_synctime_ms
                   milliseconds, a "slow spa_sync" message  is  logged  to  the  debug  log  (see
                   zfs_dbgmsg_enable).   If zfs_deadman_enabled is set, all pending IO operations
                   are also checked and if any haven't completed  within  zfs_deadman_synctime_ms
                   milliseconds,  a  "SLOW  IO"  message is logged to the debug log and a "delay"
                   system event with the details of the hung IO is posted.

                   Use 1 (default) to enable the slow IO check and 0 to disable.

       zfs_deadman_checktime_ms (int)
                   Once a pool sync  operation  has  taken  longer  than  zfs_deadman_synctime_ms
                   milliseconds,    continue    to    check    for    slow    operations    every
                   zfs_deadman_checktime_ms milliseconds.

                   Default value: 5,000.

       zfs_deadman_synctime_ms (ulong)
                   Interval in milliseconds after which the deadman is  triggered  and  also  the
                   interval   after  which  an  IO  operation  is  considered  to  be  "hung"  if
                   zfs_deadman_enabled is set.

                   See zfs_deadman_enabled.

                   Default value: 1,000,000.

       zfs_dedup_prefetch (int)
                   Enable prefetching dedup-ed blks

                   Use 1 for yes and 0 to disable (default).

       zfs_delay_min_dirty_percent (int)
                   Start to delay each transaction once there  is  this  amount  of  dirty  data,
                   expressed  as  a  percentage  of  zfs_dirty_data_max.  This value should be >=
                   zfs_vdev_async_write_active_max_dirty_percent.    See   the    section    "ZFS
                   TRANSACTION DELAY".

                   Default value: 60.

       zfs_delay_scale (int)
                   This  controls  how quickly the transaction delay approaches infinity.  Larger
                   values cause longer delays for a given amount of dirty data.

                   For the smoothest delay, this value should be about 1 billion divided  by  the
                   maximum  number  of  operations per second.  This will smoothly handle between
                   10x and 1/10th this number.

                   See the section "ZFS TRANSACTION DELAY".

                   Note: zfs_delay_scale * zfs_dirty_data_max must be < 2^64.

                   Default value: 500,000.
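
                    As a worked example of the guideline above: for a pool that can sustain
                    roughly 2,000 operations per second, 1,000,000,000 / 2,000 = 500,000, which
                    is the default value.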

       zfs_delete_blocks (ulong)
                    This is used to define a large file for the purposes of delete.  Files
                    containing more than zfs_delete_blocks blocks will be deleted asynchronously
                    while smaller files are deleted synchronously.  Decreasing this value will reduce
                   the  time  spent  in an unlink(2) system call at the expense of a longer delay
                   before the freed space is available.

                   Default value: 20,480.

       zfs_dirty_data_max (int)
                   Determines the dirty space limit in bytes.  Once this limit is  exceeded,  new
                   writes  are  halted until space frees up. This parameter takes precedence over
                   zfs_dirty_data_max_percent.  See the section "ZFS TRANSACTION DELAY".

                   Default value: 10 percent of all memory, capped at zfs_dirty_data_max_max.

       zfs_dirty_data_max_max (int)
                   Maximum allowable value of zfs_dirty_data_max, expressed in bytes.  This limit
                   is   only   enforced   at   module   load   time,   and  will  be  ignored  if
                   zfs_dirty_data_max is later changed.  This  parameter  takes  precedence  over
                   zfs_dirty_data_max_max_percent. See the section "ZFS TRANSACTION DELAY".

                   Default value: 25% of physical RAM.

       zfs_dirty_data_max_max_percent (int)
                   Maximum  allowable  value  of zfs_dirty_data_max, expressed as a percentage of
                   physical RAM.  This limit is only enforced at module load time,  and  will  be
                   ignored    if    zfs_dirty_data_max   is   later   changed.    The   parameter
                   zfs_dirty_data_max_max takes precedence over this one. See  the  section  "ZFS
                   TRANSACTION DELAY".

                   Default value: 25.

       zfs_dirty_data_max_percent (int)
                   Determines  the  dirty  space  limit, expressed as a percentage of all memory.
                   Once this limit is exceeded, new writes are halted until space frees up.   The
                   parameter  zfs_dirty_data_max takes precedence over this one.  See the section
                   "ZFS TRANSACTION DELAY".

                   Default value: 10%, subject to zfs_dirty_data_max_max.

       zfs_dirty_data_sync (int)
                   Start syncing out a transaction group if there is at  least  this  much  dirty
                   data.

                   Default value: 67,108,864.

       zfs_fletcher_4_impl (string)
                   Select a fletcher 4 implementation.

                   Supported  selectors  are:  fastest,  scalar,  sse2, ssse3, avx2, avx512f, and
                   aarch64_neon.   All  of  the  selectors  except  fastest  and  scalar  require
                   instruction set extensions to be available and will only appear if ZFS detects
                   that they are present at runtime. If multiple implementations  of  fletcher  4
                   are  available,  the fastest will be chosen using a micro benchmark. Selecting
                    scalar results in the original, CPU-based calculation being used.  Selecting
                   any  option  other than fastest and scalar results in vector instructions from
                   the respective CPU instruction set being used.

                   Default value: fastest.
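
                    For example (a sketch; depending on the ZFS build the parameter file may live
                    under /sys/module/zfs/parameters/ or /sys/module/zcommon/parameters/):

                          echo scalar > /sys/module/zcommon/parameters/zfs_fletcher_4_impl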

       zfs_free_bpobj_enabled (int)
                   Enable/disable the processing of the free_bpobj object.

                   Default value: 1.

       zfs_free_max_blocks (ulong)
                   Maximum number of blocks freed in a single txg.

                   Default value: 100,000.

       zfs_vdev_async_read_max_active (int)
                   Maximum asynchronous read I/Os active to each device.  See  the  section  "ZFS
                   I/O SCHEDULER".

                   Default value: 3.

       zfs_vdev_async_read_min_active (int)
                   Minimum  asynchronous  read  I/Os active to each device.  See the section "ZFS
                   I/O SCHEDULER".

                   Default value: 1.

       zfs_vdev_async_write_active_max_dirty_percent (int)
                   When the  pool  has  more  than  zfs_vdev_async_write_active_max_dirty_percent
                   dirty  data, use zfs_vdev_async_write_max_active to limit active async writes.
                   If the dirty data is between min and max, the active  I/O  limit  is  linearly
                   interpolated. See the section "ZFS I/O SCHEDULER".

                   Default value: 60.

       zfs_vdev_async_write_active_min_dirty_percent (int)
                   When  the  pool  has  less  than zfs_vdev_async_write_active_min_dirty_percent
                   dirty data, use zfs_vdev_async_write_min_active to limit active async  writes.
                   If  the  dirty  data  is between min and max, the active I/O limit is linearly
                   interpolated. See the section "ZFS I/O SCHEDULER".

                   Default value: 30.

       zfs_vdev_async_write_max_active (int)
                   Maximum asynchronous write I/Os active to each device.  See the  section  "ZFS
                   I/O SCHEDULER".

                   Default value: 10.

       zfs_vdev_async_write_min_active (int)
                   Minimum  asynchronous  write I/Os active to each device.  See the section "ZFS
                   I/O SCHEDULER".

                   Lower values are associated with better latency on rotational media but poorer
                   resilver  performance.  The  default  value of 2 was chosen as a compromise. A
                   value of 3 has been shown to improve resilver performance further at a cost of
                   further increasing latency.

                   Default value: 2.

       zfs_vdev_max_active (int)
                   The  maximum  number  of I/Os active to each device.  Ideally, this will be >=
                   the sum of each queue's max_active.  It must be  at  least  the  sum  of  each
                   queue's min_active.  See the section "ZFS I/O SCHEDULER".

                   Default value: 1,000.

       zfs_vdev_scrub_max_active (int)
                   Maximum  scrub  I/Os  active  to  each  device.   See  the  section  "ZFS  I/O
                   SCHEDULER".

                   Default value: 2.

       zfs_vdev_scrub_min_active (int)
                   Minimum  scrub  I/Os  active  to  each  device.   See  the  section  "ZFS  I/O
                   SCHEDULER".

                   Default value: 1.

       zfs_vdev_sync_read_max_active (int)
                   Maximum synchronous read I/Os active to each device.  See the section "ZFS I/O
                   SCHEDULER".

                   Default value: 10.

       zfs_vdev_sync_read_min_active (int)
                   Minimum synchronous read I/Os active to each device.  See the section "ZFS I/O
                   SCHEDULER".

                   Default value: 10.

       zfs_vdev_sync_write_max_active (int)
                   Maximum  synchronous  write  I/Os active to each device.  See the section "ZFS
                   I/O SCHEDULER".

                   Default value: 10.

       zfs_vdev_sync_write_min_active (int)
                   Minimum synchronous write I/Os active to each device.  See  the  section  "ZFS
                   I/O SCHEDULER".

                   Default value: 10.

       zfs_vdev_queue_depth_pct (int)
                   Maximum  number  of  queued  allocations  per  top-level  vdev  expressed as a
                    percentage of zfs_vdev_async_write_max_active, which allows the system to
                   detect  devices  that are more capable of handling allocations and to allocate
                   more blocks to those devices.  It allows for dynamic  allocation  distribution
                   when  devices  are  imbalanced  as  fuller devices will tend to be slower than
                   empty devices.

                   See also zio_dva_throttle_enabled.

                   Default value: 1000.

       zfs_disable_dup_eviction (int)
                   Disable duplicate buffer eviction

                   Use 1 for yes and 0 for no (default).

       zfs_expire_snapshot (int)
                   Seconds to expire .zfs/snapshot

                   Default value: 300.

       zfs_admin_snapshot (int)
                   Allow the creation, removal, or  renaming  of  entries  in  the  .zfs/snapshot
                   directory  to cause the creation, destruction, or renaming of snapshots.  When
                   enabled this functionality works both locally and over NFS exports which  have
                   the 'no_root_squash' option set. This functionality is disabled by default.

                   Use 1 for yes and 0 for no (default).
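
                    When enabled, snapshots can be managed with ordinary directory operations,
                    for example (a sketch; /tank/fs stands in for a dataset mountpoint):

                          mkdir /tank/fs/.zfs/snapshot/mysnap    # create snapshot "mysnap"
                          rmdir /tank/fs/.zfs/snapshot/mysnap    # destroy it again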

       zfs_flags (int)
                   Set  additional  debugging  flags.  The  following  flags  may be bitwise-or'd
                   together.

                   ┌─────────────────────────────────────────────────────────────────────┐
                   │Value   Symbolic Name                                                │
                   │        Description                                                  │
                   ├─────────────────────────────────────────────────────────────────────┤
                   │    1   ZFS_DEBUG_DPRINTF                                            │
                   │        Enable dprintf entries in the debug log.                     │
                   ├─────────────────────────────────────────────────────────────────────┤
                   │    2   ZFS_DEBUG_DBUF_VERIFY *                                      │
                   │        Enable extra dbuf verifications.                             │
                   ├─────────────────────────────────────────────────────────────────────┤
                   │    4   ZFS_DEBUG_DNODE_VERIFY *                                     │
                   │        Enable extra dnode verifications.                            │
                   ├─────────────────────────────────────────────────────────────────────┤
                   │    8   ZFS_DEBUG_SNAPNAMES                                          │
                   │        Enable snapshot name verification.                           │
                   ├─────────────────────────────────────────────────────────────────────┤
                   │   16   ZFS_DEBUG_MODIFY                                             │
                   │        Check for illegally modified ARC buffers.                    │
                   ├─────────────────────────────────────────────────────────────────────┤
                   │   32   ZFS_DEBUG_SPA                                                │
                   │        Enable spa_dbgmsg entries in the debug log.                  │
                   ├─────────────────────────────────────────────────────────────────────┤
                   │   64   ZFS_DEBUG_ZIO_FREE                                           │
                   │        Enable verification of block frees.                          │
                   ├─────────────────────────────────────────────────────────────────────┤
                   │  128   ZFS_DEBUG_HISTOGRAM_VERIFY                                   │
                   │        Enable extra spacemap histogram verifications.               │
                   ├─────────────────────────────────────────────────────────────────────┤
                   │  256   ZFS_DEBUG_METASLAB_VERIFY                                    │
                   │        Verify space accounting on disk matches in-core range_trees. │
                   ├─────────────────────────────────────────────────────────────────────┤
                   │  512   ZFS_DEBUG_SET_ERROR                                          │
                   │        Enable SET_ERROR and dprintf entries in the debug log.       │
                   └─────────────────────────────────────────────────────────────────────┘
                   * Requires debug build.

                   Default value: 0.
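
                    For example, to enable snapshot name verification together with checks for
                    illegally modified ARC buffers, the flag values are bitwise-or'd: 8 | 16 =
                    24, so zfs_flags would be set to 24.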

       zfs_free_leak_on_eio (int)
                   If destroy encounters an EIO while reading metadata  (e.g.  indirect  blocks),
                   space  referenced  by  the  missing  metadata can not be freed.  Normally this
                   causes the background destroy to become "stalled", as it  is  unable  to  make
                   forward  progress.   While  in this stalled state, all remaining space to free
                   from the error-encountering filesystem is "temporarily leaked".  Set this flag
                   to cause it to ignore the EIO, permanently leak the space from indirect blocks
                   that can not be read, and continue to free everything else that it can.

                   The default, "stalling" behavior is useful  if  the  storage  partially  fails
                   (i.e.  some but not all i/os fail), and then later recovers.  In this case, we
                   will be able to continue pool operations while it  is  partially  failed,  and
                   when  it recovers, we can continue to free the space, with no leaks.  However,
                   note that this case is actually fairly rare.

                   Typically pools either (a) fail completely (but perhaps  temporarily,  e.g.  a
                   top-level  vdev  going offline), or (b) have localized, permanent errors (e.g.
                   disk returns the wrong data due to bit flip or firmware bug).   In  case  (a),
                   this  setting  does not matter because the pool will be suspended and the sync
                   thread will not be able to make forward progress  regardless.   In  case  (b),
                   because  the error is permanent, the best we can do is leak the minimum amount
                   of space, which  is  what  setting  this  flag  will  do.   Therefore,  it  is
                   reasonable  for  this  flag  to  normally  be  set,  but  we  chose  the  more
                   conservative approach of not setting it, so that there is  no  possibility  of
                   leaking space in the "partial temporary" failure case.

                   Default value: 0.

       zfs_free_min_time_ms (int)
                   During  a  zfs destroy operation using feature@async_destroy a minimum of this
                   much time will be spent working on freeing blocks per txg.

                   Default value: 1,000.

       zfs_immediate_write_sz (long)
                   Largest data block to write to zil. Larger blocks will be treated  as  if  the
                   dataset being written to had the property setting logbias=throughput.

                   Default value: 32,768.

       zfs_max_recordsize (int)
                   We  currently  support  block  sizes  from 512 bytes to 16MB.  The benefits of
                   larger blocks, and thus larger IO, need to be  weighed  against  the  cost  of
                   COWing  a giant block to modify one byte.  Additionally, very large blocks can
                   have an impact on i/o latency, and also potentially on the  memory  allocator.
                   Therefore,   we   do   not   allow  the  recordsize  to  be  set  larger  than
                   zfs_max_recordsize (default 1MB).  Larger blocks can be  created  by  changing
                   this  tunable,  and  pools with larger blocks can always be imported and used,
                   regardless of this setting.

                   Default value: 1,048,576.

       zfs_mdcomp_disable (int)
                   Disable meta data compression

                   Use 1 for yes and 0 for no (default).

       zfs_metaslab_fragmentation_threshold (int)
                   Allow metaslabs to keep their active state  as  long  as  their  fragmentation
                   percentage  is  less  than  or  equal  to  this value. An active metaslab that
                   exceeds this threshold will no longer keep its active status  allowing  better
                   metaslabs to be selected.

                   Default value: 70.

       zfs_mg_fragmentation_threshold (int)
                   Metaslab groups are considered eligible for allocations if their fragmentation
                   metric (measured as a percentage) is less than or equal to this  value.  If  a
                   metaslab  group  exceeds  this  threshold  then  it will be skipped unless all
                   metaslab groups within the metaslab class have also crossed this threshold.

                   Default value: 85.

       zfs_mg_noalloc_threshold (int)
                   Defines  a  threshold  at  which  metaslab  groups  should  be  eligible   for
                   allocations.   The  value  is  expressed  as a percentage of free space beyond
                   which a metaslab group is always eligible  for  allocations.   If  a  metaslab
                   group's  free space is less than or equal to the threshold, the allocator will
                   avoid allocating to that group unless all groups in the pool have reached  the
                   threshold.  Once all groups have reached the threshold, all groups are allowed
                   to accept allocations.  The default value of 0 disables the feature and causes
                   all metaslab groups to be eligible for allocations.

                   This  parameter  allows one to deal with pools having heavily imbalanced vdevs
                   such as would be the case when  a  new  vdev  has  been  added.   Setting  the
                   threshold  to  a  non-zero percentage will stop allocations from being made to
                   vdevs that aren't filled to the specified percentage and allow  lesser  filled
                   vdevs  to  acquire  more  allocations  than they otherwise would under the old
                   zfs_mg_alloc_failures facility.

                   Default value: 0.

       zfs_multihost_history (int)
                   Historical statistics for the last N multihost updates will  be  available  in
                   /proc/spl/kstat/zfs/<pool>/multihost

                   Default value: 0.

       zfs_multihost_interval (ulong)
                   Used to control the frequency of multihost writes which are performed when the
                   multihost pool property is on.  This is  one  factor  used  to  determine  the
                   length of the activity check during import.

                   The   multihost   write   period   is   zfs_multihost_interval   /  leaf-vdevs
                   milliseconds.  This means that on average a multihost write will be issued for
                   each  leaf  vdev  every zfs_multihost_interval milliseconds.  In practice, the
                   observed period can vary with the I/O load and  this  observed  value  is  the
                   delay which is stored in the uberblock.

                   On  import  the  activity  check  waits a minimum amount of time determined by
                   zfs_multihost_interval * zfs_multihost_import_intervals.  The  activity  check
                   time  may  be  further  extended  if  the value of mmp delay found in the best
                   uberblock indicates actual multihost updates happened at longer intervals than
                   zfs_multihost_interval.  A minimum value of 100ms is enforced.

                   Default value: 1000.
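
                    As a worked example with the defaults and a hypothetical pool of 8 leaf
                    vdevs: the multihost write period is 1000 / 8 = 125 milliseconds, and an
                    import waits at least 1000 * 10 (zfs_multihost_import_intervals) = 10,000
                    milliseconds for the activity check.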

       zfs_multihost_import_intervals (uint)
                   Used  to  control the duration of the activity test on import.  Smaller values
                   of zfs_multihost_import_intervals will reduce the import time but increase the
                   risk  of  failing  to detect an active pool.  The total activity check time is
                   never allowed to drop below one second.  A value of 0 is ignored  and  treated
                    as if it were set to 1.

                   Default value: 10.

       zfs_multihost_fail_intervals (uint)
                   Controls the behavior of the pool when multihost write failures are detected.

                   When  zfs_multihost_fail_intervals  =  0  then  multihost  write  failures are
                   ignored.  The failures will still be reported to the ZED  which  depending  on
                   its  configuration  may take action such as suspending the pool or offlining a
                   device.

                   When zfs_multihost_fail_intervals > 0 then sequential multihost write failures
                   will    cause    the    pool    to    be    suspended.    This   occurs   when
                   zfs_multihost_fail_intervals * zfs_multihost_interval milliseconds have passed
                   since  the last successful multihost write.  This guarantees the activity test
                   will see multihost writes if the pool is imported.

                   Default value: 5.

       zfs_no_scrub_io (int)
                   Set for no scrub I/O. This results in scrubs not actually scrubbing  data  and
                   simply doing a metadata crawl of the pool instead.

                   Use 1 for yes and 0 for no (default).

       zfs_no_scrub_prefetch (int)
                   Set to disable block prefetching for scrubs.

                   Use 1 for yes and 0 for no (default).

       zfs_nocacheflush (int)
                   Disable  cache  flush operations on disks when writing. Beware, this may cause
                   corruption if disks re-order writes.

                   Use 1 for yes and 0 for no (default).

       zfs_nopwrite_enabled (int)
                   Enable NOP writes

                   Use 1 for yes (default) and 0 to disable.

       zfs_dmu_offset_next_sync (int)
                    Enable forcing txg sync to find holes.  When enabled, ZFS acts like prior
                    versions when SEEK_HOLE or SEEK_DATA flags are used: if a dnode is dirty,
                    txgs are synced so that the hole or data information can be found.

                   Use 1 for yes and 0 to disable (default).

       zfs_pd_bytes_max (int)
                    The number of bytes which should be prefetched during a pool traversal (e.g.
                    zfs send or other data crawling operations).

                   Default value: 52,428,800.

        zfs_per_txg_dirty_frees_percent (ulong)
                    Tunable to control the percentage of dirtied blocks from frees in one TXG.
                    After this threshold is crossed, additional dirty blocks from frees wait
                    until the next TXG.  A value of zero will disable this throttle.

                    Default value: 30.

       zfs_prefetch_disable (int)
                   This  tunable  disables  predictive prefetch.  Note that it leaves "prescient"
                   prefetch (e.g. prefetch for zfs send)  intact.   Unlike  predictive  prefetch,
                   prescient prefetch never issues i/os that end up not being needed, so it can't
                   hurt performance.

                   Use 1 for yes and 0 for no (default).

       zfs_read_chunk_size (long)
                   Bytes to read per chunk

                   Default value: 1,048,576.

       zfs_read_history (int)
                   Historical  statistics  for  the  last  N   reads   will   be   available   in
                   /proc/spl/kstat/zfs/<pool>/reads

                   Default value: 0 (no data is kept).

       zfs_read_history_hits (int)
                   Include cache hits in read history

                   Use 1 for yes and 0 for no (default).

       zfs_recover (int)
                   Set  to  attempt  to  recover from fatal errors. This should only be used as a
                   last resort, as it typically results in leaked space, or worse.

                   Use 1 for yes and 0 for no (default).

       zfs_resilver_delay (int)
                   Number of ticks to delay prior to issuing a resilver I/O operation when a non-
                   resilver or non-scrub I/O operation has occurred within the past zfs_scan_idle
                   ticks.

                   Default value: 2.

       zfs_resilver_min_time_ms (int)
                   Resilvers are processed by the sync thread. While resilvering it will spend at
                   least this much time working on a resilver between txg flushes.

                   Default value: 3,000.

       zfs_scan_idle (int)
                   Idle  window  in clock ticks.  During a scrub or a resilver, if a non-scrub or
                   non-resilver I/O operation has occurred during this window, the next scrub  or
                    resilver operation is delayed by, respectively, zfs_scrub_delay or
                   zfs_resilver_delay ticks.

                   Default value: 50.

       zfs_scan_min_time_ms (int)
                   Scrubs are processed by the sync thread. While  scrubbing  it  will  spend  at
                   least this much time working on a scrub between txg flushes.

                   Default value: 1,000.

       zfs_scrub_delay (int)
                   Number  of  ticks  to delay prior to issuing a scrub I/O operation when a non-
                   scrub or non-resilver I/O operation has occurred within the past zfs_scan_idle
                   ticks.

                   Default value: 4.

       zfs_send_corrupt_data (int)
                   Allow sending of corrupt data (ignore read/checksum errors when sending data)

                   Use 1 for yes and 0 for no (default).

       zfs_sync_pass_deferred_free (int)
                   Flushing of data to disk is done in passes. Defer frees starting in this pass

                   Default value: 2.

       zfs_sync_taskq_batch_pct (int)
                   This  controls  the  number of threads used by the dp_sync_taskq.  The default
                   value of 75% will create a maximum of one thread per cpu.

                   Default value: 75.

       zfs_sync_pass_dont_compress (int)
                   Don't compress starting in this pass

                   Default value: 5.

       zfs_sync_pass_rewrite (int)
                   Rewrite new block pointers starting in this pass

                   Default value: 2.

       zfs_top_maxinflight (int)
                   Max concurrent I/Os per top-level  vdev  (mirrors  or  raidz  arrays)  allowed
                   during scrub or resilver operations.

                   Default value: 32.

       zfs_txg_history (int)
                   Historical   statistics   for   the   last   N   txgs  will  be  available  in
                   /proc/spl/kstat/zfs/<pool>/txgs

                   Default value: 0.

       zfs_txg_timeout (int)
                   Flush dirty data to disk at least every N seconds (maximum txg duration)

                   Default value: 5.

       zfs_vdev_aggregation_limit (int)
                   Max vdev I/O aggregation size

                   Default value: 131,072.

       zfs_vdev_cache_bshift (int)
                    Shift size to inflate reads to

                   Default value: 16 (effectively 65536).

       zfs_vdev_cache_max (int)
                   Inflate reads smaller than this value to meet the  zfs_vdev_cache_bshift  size
                   (default 64k).

                   Default value: 16384.

       zfs_vdev_cache_size (int)
                   Total size of the per-disk cache in bytes.

                   Currently  this feature is disabled as it has been found to not be helpful for
                   performance and in some cases harmful.

                   Default value: 0.

       zfs_vdev_mirror_rotating_inc (int)
                    A number by which the balancing algorithm increments the load calculation
                    for the purpose of selecting the least busy mirror member when an I/O
                    immediately follows its predecessor on rotational vdevs.

                   Default value: 0.

       zfs_vdev_mirror_rotating_seek_inc (int)
                   A  number by which the balancing algorithm increments the load calculation for
                   the purpose of selecting the least  busy  mirror  member  when  an  I/O  lacks
                    locality as defined by zfs_vdev_mirror_rotating_seek_offset.  I/Os within
                    this distance that do not immediately follow the previous I/O are incremented
                    by half of this value.

                   Default value: 5.

       zfs_vdev_mirror_rotating_seek_offset (int)
                   The  maximum distance for the last queued I/O in which the balancing algorithm
                   considers an I/O to have locality.  See the section "ZFS I/O SCHEDULER".

                   Default value: 1048576.

       zfs_vdev_mirror_non_rotating_inc (int)
                   A number by which the balancing algorithm increments the load calculation  for
                   the  purpose of selecting the least busy mirror member on non-rotational vdevs
                   when I/Os do not immediately follow one another.

                   Default value: 0.

       zfs_vdev_mirror_non_rotating_seek_inc (int)
                   A number by which the balancing algorithm increments the load calculation  for
                   the  purpose  of  selecting  the  least  busy  mirror member when an I/O lacks
                    locality as defined by zfs_vdev_mirror_rotating_seek_offset.  I/Os within
                    this distance that do not immediately follow the previous I/O are incremented
                    by half of this value.

                   Default value: 1.

       zfs_vdev_read_gap_limit (int)
                   Aggregate read I/O operations if the gap on-disk between them is  within  this
                   threshold.

                   Default value: 32,768.

       zfs_vdev_scheduler (charp)
                   Set  the  Linux  I/O  scheduler  on  whole disk vdevs to this scheduler. Valid
                   options are noop, cfq, bfq & deadline

                   Default value: noop.
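
                    For example, to select the deadline scheduler for whole disk vdevs (a sketch;
                    typically set at module load time via a modprobe configuration file such as
                    /etc/modprobe.d/zfs.conf):

                          options zfs zfs_vdev_scheduler=deadline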

       zfs_vdev_write_gap_limit (int)
                   Aggregate write I/O over gap

                   Default value: 4,096.

       zfs_vdev_raidz_impl (string)
                   Parameter for selecting raidz parity implementation to use.

                   Options marked (always) below may be selected  on  module  load  as  they  are
                   supported  on  all  systems.   The remaining options may only be set after the
                   module is loaded, as they  are  available  only  if  the  implementations  are
                   compiled in and supported on the running system.

                   Once       the       module      is      loaded,      the      content      of
                   /sys/module/zfs/parameters/zfs_vdev_raidz_impl  will  show  available  options
                   with the currently selected one enclosed in [].  Possible options are:
                     fastest  - (always) implementation selected using built-in benchmark
                     original - (always) original raidz implementation
                     scalar   - (always) scalar raidz implementation
                     sse2     - implementation using SSE2 instruction set (64bit x86 only)
                     ssse3    - implementation using SSSE3 instruction set (64bit x86 only)
                     avx2     - implementation using AVX2 instruction set (64bit x86 only)
                     avx512f  - implementation using AVX512F instruction set (64bit x86 only)
                     avx512bw  -  implementation using AVX512F & AVX512BW instruction sets (64bit
                   x86 only)
                     aarch64_neon - implementation using NEON (Aarch64/64 bit ARMv8 only)
                     aarch64_neonx2 - implementation using NEON with more  unrolling  (Aarch64/64
                   bit ARMv8 only)

                   Default value: fastest.
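
                    For example (a sketch using the sysfs path described above):

                          # list available implementations; the selected one is shown in []
                          cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
                          # select a specific implementation
                          echo avx2 > /sys/module/zfs/parameters/zfs_vdev_raidz_impl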

       zfs_zevent_cols (int)
                   When zevents are logged to the console use this as the word wrap width.

                   Default value: 80.

       zfs_zevent_console (int)
                   Log events to the console

                   Use 1 for yes and 0 for no (default).

       zfs_zevent_len_max (int)
                   Max  event  queue length. A value of 0 will result in a calculated value which
                   increases with the number of CPUs in the system (minimum 64 events). Events in
                   the queue can be viewed with the zpool events command.

                   Default value: 0.

       zfs_zil_clean_taskq_maxalloc (int)
                   The  maximum number of taskq entries that are allowed to be cached.  When this
                   limit is exceeded itx's will be cleaned synchronously.

                   Default value: 1048576.

       zfs_zil_clean_taskq_minalloc (int)
                   The number of taskq entries that are pre-populated when  the  taskq  is  first
                   created and are immediately available for use.

                   Default value: 1024.

       zfs_zil_clean_taskq_nthr_pct (int)
                   This  controls  the  number  of  threads  used by the dp_zil_clean_taskq.  The
                   default value of 100% will create a maximum of one thread per cpu.

                   Default value: 100.

       zil_replay_disable (int)
                   Disable intent logging replay. Can be disabled for recovery from corrupted ZIL

                   Use 1 for yes and 0 for no (default).

       zil_slog_bulk (ulong)
                   Limit SLOG write size per commit  executed  with  synchronous  priority.   Any
                   writes above that will be executed with lower (asynchronous) priority to limit
                    potential SLOG device abuse by a single active ZIL writer.

                   Default value: 786,432.

       zio_delay_max (int)
                   A zevent will be logged if a ZIO operation takes more than N  milliseconds  to
                   complete.  Note  that  this  is  only  a  logging  facility,  not a timeout on
                   operations.

                   Default value: 30,000.

       zio_dva_throttle_enabled (int)
                   Throttle block allocations in  the  ZIO  pipeline.  This  allows  for  dynamic
                   allocation  distribution  when  devices  are  imbalanced.   When  enabled, the
                   maximum number of  pending  allocations  per  top-level  vdev  is  limited  by
                   zfs_vdev_queue_depth_pct.

                   Default value: 1.

       zio_requeue_io_start_cut_in_line (int)
                   Prioritize requeued I/O

                   Default value: 0.

       zio_taskq_batch_pct (uint)
                   Percentage  of  online CPUs (or CPU cores, etc) which will run a worker thread
                   for IO. These workers are responsible for IO  work  such  as  compression  and
                   checksum calculations. Fractional number of CPUs will be rounded down.

                   The default value of 75 was chosen to avoid using all CPUs, which can result
                   in latency issues and inconsistent application performance, especially when
                   high compression is enabled.

                   Default value: 75.

       zvol_inhibit_dev (uint)
                   Do  not  create  zvol  device nodes. This may slightly improve startup time on
                   systems with a very large number of zvols.

                   Use 1 for yes and 0 for no (default).

       zvol_major (uint)
                   Major number for zvol block devices

                   Default value: 230.

       zvol_max_discard_blocks (ulong)
                   Discard (aka TRIM) operations on zvols will be done in batches of this many
                   blocks, where block size is determined by the volblocksize property of a
                   zvol.

                   Default value: 16,384.

       zvol_prefetch_bytes (uint)
                   When adding a zvol to the system, prefetch zvol_prefetch_bytes from the
                   start and end of the volume. Prefetching these regions of the volume is
                   desirable because they are likely to be accessed immediately by blkid(8) or
                   by the kernel scanning for a partition table.

                   Default value: 131,072.

       zvol_request_sync (uint)
                   When processing I/O requests for a zvol, submit them synchronously. This
                   effectively limits the queue depth to 1 for each I/O submitter. When set to
                   0, requests are handled asynchronously by a thread pool. The number of
                   requests which can be handled concurrently is controlled by zvol_threads.

                   Default value: 0.

       zvol_threads (uint)
                   Max number of threads which can handle zvol I/O requests concurrently.

                   Default value: 32.

       zvol_volmode (uint)
                   Defines zvol block device behaviour when volmode is set to default. Valid
                   values are 1 (full), 2 (dev) and 3 (none).

                   Default value: 1.

       zfs_qat_disable (int)
                   This tunable disables qat hardware acceleration for gzip compression. It is
                   available only if qat acceleration is compiled in and the qat driver is
                   present.

                   Use 1 for yes and 0 for no (default).

ZFS I/O SCHEDULER

       ZFS issues I/O operations to leaf vdevs to satisfy and complete I/Os.  The  I/O  scheduler
       determines  when and in what order those operations are issued.  The I/O scheduler divides
       operations into five I/O classes prioritized in  the  following  order:  sync  read,  sync
       write,  async  read,  async write, and scrub/resilver.  Each queue defines the minimum and
       maximum number of concurrent operations that may be issued to the  device.   In  addition,
       the  device  has  an aggregate maximum, zfs_vdev_max_active. Note that the sum of the per-
       queue minimums must not exceed the  aggregate  maximum.   If  the  sum  of  the  per-queue
       maximums  exceeds  the  aggregate  maximum,  then  the  number  of  active  I/Os may reach
       zfs_vdev_max_active, in which case no further I/Os will be issued  regardless  of  whether
       all per-queue minimums have been met.

       For  many physical devices, throughput increases with the number of concurrent operations,
       but latency typically suffers. Further, physical devices typically have a limit  at  which
       more  concurrent  operations  have  no  effect  on  throughput or can actually cause it to
       decrease.

       The scheduler selects the next operation to issue by first looking for an I/O class  whose
       minimum  has  not been satisfied. Once all are satisfied and the aggregate maximum has not
       been hit, the scheduler looks for classes whose maximum has not been satisfied.  Iteration
       through  the  I/O  classes is done in the order specified above. No further operations are
       issued if the aggregate maximum number of concurrent operations has been hit or  if  there
       are no operations queued for an I/O class that has not hit its maximum.  Every time an I/O
       is queued or an operation completes, the I/O scheduler looks for new operations to issue.
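
       As a rough illustration of the two-pass selection just described, here is a minimal
       Python sketch; the class names and the dictionary-based bookkeeping are invented for
       the example and do not correspond to the in-kernel data structures:

            IO_CLASSES = ["sync_read", "sync_write", "async_read", "async_write", "scrub"]

            def pick_class(queued, active, min_active, max_active, aggregate_max):
                """Return the I/O class to issue from next, or None if nothing is eligible.

                queued, active, min_active and max_active are dicts keyed by class name;
                aggregate_max stands in for zfs_vdev_max_active."""
                if sum(active.values()) >= aggregate_max:
                    return None                  # aggregate limit reached
                # First pass: classes still below their minimum, in priority order.
                for c in IO_CLASSES:
                    if queued[c] and active[c] < min_active[c]:
                        return c
                # Second pass: classes with queued work and headroom below their maximum.
                for c in IO_CLASSES:
                    if queued[c] and active[c] < max_active[c]:
                        return c
                return None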

       In general, smaller max_active values will lead to lower latency of synchronous
       operations. Larger max_active values may lead to higher overall throughput, depending
       on the underlying storage.

       The ratio of the queues' max_active values determines the balance of performance
       between reads, writes, and scrubs. E.g., increasing zfs_vdev_scrub_max_active will
       cause the scrub or resilver to complete more quickly, but will cause reads and writes
       to have higher latency and lower throughput.

       All I/O classes have a fixed maximum number of outstanding operations except for the async
       write class. Asynchronous writes represent the data that is committed  to  stable  storage
       during  the  syncing  stage  for  transaction groups. Transaction groups enter the syncing
       state periodically so the number of queued async writes will quickly  burst  up  and  then
       bleed  down  to zero. Rather than servicing them as quickly as possible, the I/O scheduler
       changes the maximum number of active async write I/Os according to  the  amount  of  dirty
       data in the pool.  Since both throughput and latency typically increase with the number of
       concurrent operations issued to physical devices, reducing the burstiness in the number of
       concurrent operations also stabilizes the response time of operations from other -- and in
       particular synchronous -- queues. In broad strokes, the  I/O  scheduler  will  issue  more
       concurrent operations from the async write queue as there's more dirty data in the pool.

       Async Writes

       The  number of concurrent operations issued for the async write I/O class follows a piece-
       wise linear function defined by a few adjustable points.

              |              o---------| <-- zfs_vdev_async_write_max_active
         ^    |             /^         |
         |    |            / |         |
       active |           /  |         |
        I/O   |          /   |         |
       count  |         /    |         |
              |        /     |         |
              |-------o      |         | <-- zfs_vdev_async_write_min_active
             0|_______^______|_________|
              0%      |      |       100% of zfs_dirty_data_max
                      |      |
                      |      `-- zfs_vdev_async_write_active_max_dirty_percent
                      `--------- zfs_vdev_async_write_active_min_dirty_percent

       Until the amount of dirty data exceeds a minimum percentage of the dirty data  allowed  in
       the pool, the I/O scheduler will limit the number of concurrent operations to the minimum.
       As that threshold is  crossed,  the  number  of  concurrent  operations  issued  increases
       linearly  to  the maximum at the specified maximum percentage of the dirty data allowed in
       the pool.
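
       As a rough numerical sketch of the piece-wise linear function above, in Python; the
       break points and active counts below are placeholder arguments standing in for the
       zfs_vdev_async_write_* tunables and zfs_dirty_data_max, not their actual defaults:

            def async_write_max_active(dirty, dirty_max, min_active=1, max_active=10,
                                       min_dirty_pct=30, max_dirty_pct=60):
                """Scale the async write queue depth with the pool's dirty data."""
                pct = 100.0 * dirty / dirty_max
                if pct <= min_dirty_pct:
                    return min_active            # flat segment at the minimum
                if pct >= max_dirty_pct:
                    return max_active            # flat segment at the maximum
                # Linear interpolation between the two break points.
                frac = (pct - min_dirty_pct) / (max_dirty_pct - min_dirty_pct)
                return min_active + (max_active - min_active) * frac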

       Ideally, the amount of dirty data on a busy pool will stay  in  the  sloped  part  of  the
       function         between         zfs_vdev_async_write_active_min_dirty_percent         and
       zfs_vdev_async_write_active_max_dirty_percent. If it exceeds the maximum percentage,  this
       indicates that the rate of incoming data is greater than the rate that the backend storage
       can handle. In this case, we must further throttle incoming writes, as  described  in  the
       next section.

ZFS TRANSACTION DELAY

       We  delay  transactions  when  we've  determined  that  the  backend storage isn't able to
       accommodate the rate of incoming writes.

       If there is already a transaction waiting, we delay relative to when that transaction will
       finish  waiting.   This  way  the  calculated  delay  time is independent of the number of
       threads concurrently executing transactions.

       If we are the only waiter, wait relative to when the transaction started, rather than  the
       current  time.   This  credits  the  transaction  for  "time already served", e.g. reading
       indirect blocks.

       The minimum time for a transaction to take is calculated as:
           min_time = zfs_delay_scale * (dirty - min) / (max - dirty)
           min_time is then capped at 100 milliseconds.

       The delay has two degrees of freedom that can be adjusted via tunables.  The percentage of
       dirty  data  at  which  we  start to delay is defined by zfs_delay_min_dirty_percent. This
       should typically be at or above zfs_vdev_async_write_active_max_dirty_percent so  that  we
       only  start  to  delay after writing at full speed has failed to keep up with the incoming
       write rate. The scale of the curve is defined by zfs_delay_scale. Roughly  speaking,  this
       variable determines the amount of delay at the midpoint of the curve.
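
       A numeric sketch of this calculation, assuming zfs_delay_scale is expressed in
       nanoseconds (its default of 500,000 yields the 500us midpoint discussed below); the
       argument names are illustrative, not the in-kernel identifiers:

            def tx_delay_ns(dirty, dirty_max, delay_scale=500_000, min_dirty_pct=60):
                """dirty and dirty_max are bytes of dirty data; min_dirty_pct stands in
                for zfs_delay_min_dirty_percent, delay_scale for zfs_delay_scale."""
                min_bytes = dirty_max * min_dirty_pct // 100
                if dirty <= min_bytes:
                    return 0                         # below the threshold: no delay
                if dirty >= dirty_max:
                    dirty = dirty_max - 1            # avoid division by zero at 100%
                delay = delay_scale * (dirty - min_bytes) / (dirty_max - dirty)
                return min(delay, 100_000_000)       # capped at 100 milliseconds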

       delay
        10ms +-------------------------------------------------------------*+
             |                                                             *|
         9ms +                                                             *+
             |                                                             *|
         8ms +                                                             *+
             |                                                            * |
         7ms +                                                            * +
             |                                                            * |
         6ms +                                                            * +
             |                                                            * |
         5ms +                                                           *  +
             |                                                           *  |
         4ms +                                                           *  +
             |                                                           *  |
         3ms +                                                          *   +
             |                                                          *   |
         2ms +                                              (midpoint) *    +
             |                                                  |    **     |
         1ms +                                                  v ***       +
             |             zfs_delay_scale ---------->     ********         |
           0 +-------------------------------------*********----------------+
             0%                    <- zfs_dirty_data_max ->               100%

       Note  that  since  the delay is added to the outstanding time remaining on the most recent
       transaction, the delay is effectively the inverse of IOPS.  Here  the  midpoint  of  500us
       translates  to 2000 IOPS. The shape of the curve was chosen such that small changes in the
       amount of accumulated dirty data in the first 3/4 of  the  curve  yield  relatively  small
       differences in the amount of delay.
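
       The arithmetic behind that figure is simply the reciprocal of the per-transaction
       delay, as a small illustration shows:

            # 500 microseconds of added delay per transaction limits a single writer
            # to about 1 second / 500 us = 2000 transactions per second.
            midpoint_delay_us = 500
            effective_iops = 1_000_000 / midpoint_delay_us   # = 2000.0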

       The  effects  can be easier to understand when the amount of delay is represented on a log
       scale:

       delay
       100ms +-------------------------------------------------------------++
             +                                                              +
             |                                                              |
             +                                                             *+
        10ms +                                                             *+
             +                                                           ** +
             |                                              (midpoint)  **  |
             +                                                  |     **    +
         1ms +                                                  v ****      +
             +             zfs_delay_scale ---------->        *****         +
             |                                             ****             |
             +                                          ****                +
       100us +                                        **                    +
             +                                       *                      +
             |                                      *                       |
             +                                     *                        +
        10us +                                     *                        +
             +                                                              +
             |                                                              |
             +                                                              +
             +--------------------------------------------------------------+
             0%                    <- zfs_dirty_data_max ->               100%

       Note here that only as the amount of dirty data approaches its limit does the delay  start
       to  increase  rapidly. The goal of a properly tuned system should be to keep the amount of
       dirty data out of that range by first ensuring that the appropriate limits are set for the
       I/O scheduler to reach optimal throughput on the backend storage, and then by changing the
       value of zfs_delay_scale to increase the steepness of the curve.

                                           Oct 28, 2017                  ZFS-MODULE-PARAMETERS(5)