Provided by: zfsutils-linux_0.6.5.6-0ubuntu30_amd64

NAME

       zfs-module-parameters - ZFS module parameters

DESCRIPTION

       Description of the different parameters to the ZFS module.

   Module parameters
       l2arc_feed_again (int)
                   Turbo L2ARC warmup

                   Use 1 for yes (default) and 0 to disable.

       l2arc_feed_min_ms (ulong)
                   Min feed interval in milliseconds

                   Default value: 200.

       l2arc_feed_secs (ulong)
                   Seconds between L2ARC writing

                   Default value: 1.

       l2arc_headroom (ulong)
                   Number of max device writes to precache

                   Default value: 2.

       l2arc_headroom_boost (ulong)
                   Compressed l2arc_headroom multiplier

                   Default value: 200.

       l2arc_nocompress (int)
                   Skip compressing L2ARC buffers

                   Use 1 for yes and 0 for no (default).

       l2arc_noprefetch (int)
                   Skip caching prefetched buffers

                   Use 1 for yes (default) and 0 to disable.

       l2arc_norw (int)
                   No reads during writes

                   Use 1 for yes and 0 for no (default).

       l2arc_write_boost (ulong)
                   Extra write bytes during device warmup

                   Default value: 8,388,608.

       l2arc_write_max (ulong)
                   Max write bytes per interval

                   Default value: 8,388,608.

       metaslab_aliquot (ulong)
                   Metaslab  granularity,  in  bytes.  This  is  roughly similar to what would be
                   referred to as the  "stripe  size"  in  traditional  RAID  arrays.  In  normal
                   operation,  ZFS  will  try  to  write  this amount of data to a top-level vdev
                   before moving on to the next one.

                   Default value: 524,288.

       metaslab_bias_enabled (int)
                   Enable metaslab group biasing based on its vdev's over-  or  under-utilization
                   relative to the pool.

                   Use 1 for yes (default) and 0 for no.

       metaslab_debug_load (int)
                   Load all metaslabs during pool import.

                   Use 1 for yes and 0 for no (default).

       metaslab_debug_unload (int)
                   Prevent metaslabs from being unloaded.

                   Use 1 for yes and 0 for no (default).

       metaslab_fragmentation_factor_enabled (int)
                   Enable use of the fragmentation metric in computing metaslab weights.

                   Use 1 for yes (default) and 0 for no.

       metaslabs_per_vdev (int)
                   When a vdev is added, it will be divided into approximately (but no more than)
                   this number of metaslabs.

                   Default value: 200.

       metaslab_preload_enabled (int)
                   Enable metaslab group preloading.

                   Use 1 for yes (default) and 0 for no.

       metaslab_lba_weighting_enabled (int)
                   Give more weight to metaslabs with lower  LBAs,  assuming  they  have  greater
                   bandwidth  as is typically the case on a modern constant angular velocity disk
                   drive.

                   Use 1 for yes (default) and 0 for no.

       spa_config_path (charp)
                   SPA config file

                   Default value: /etc/zfs/zpool.cache.

       spa_asize_inflation (int)
                   Multiplication factor used to estimate actual disk consumption from  the  size
                   of  data  being written. The default value is a worst case estimate, but lower
                   values may be valid for a given pool depending  on  its  configuration.   Pool
                   administrators  who understand the factors involved may wish to specify a more
                   realistic inflation factor, particularly if they operate  close  to  quota  or
                   capacity limits.

                    Default value: 24.

       spa_load_verify_data (int)
                   Whether to traverse data blocks during an "extreme rewind" (-X) import.  Use 0
                   to disable and 1 to enable.

                   An extreme rewind import normally performs a full traversal of all  blocks  in
                   the pool for verification.  If this parameter is set to 0, the traversal skips
                   non-metadata blocks.  It can be toggled once the import has started to stop or
                   start the traversal of non-metadata blocks.

                    Default value: 1.

       spa_load_verify_metadata (int)
                   Whether to traverse blocks during an "extreme rewind" (-X) pool import.  Use 0
                   to disable and 1 to enable.

                   An extreme rewind import normally performs a full traversal of all  blocks  in
                    the pool for verification.  If this parameter is set to 0, the traversal is
                   not performed.  It can be toggled once the import has started to stop or start
                   the traversal.

                    Default value: 1.

       spa_load_verify_maxinflight (int)
                   Maximum  concurrent  I/Os  during  the  traversal performed during an "extreme
                   rewind" (-X) pool import.

                    Default value: 10,000.

       spa_slop_shift (int)
                    Normally, we don't allow the last 1/(2^spa_slop_shift) of space in the
                    pool (about 3.1% with the default) to be consumed.  This ensures that
                    we don't run the pool completely out
                   of space, due to unaccounted changes (e.g. to the MOS).  It  also  limits  the
                   worst-case  time  to allocate space.  If we have less than this amount of free
                   space, most ZPL operations (e.g. write, create) will return ENOSPC.

                    Default value: 5.
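
                    As an illustration of the 1/(2^spa_slop_shift) rule above, the following
                    Python sketch (the helper name and pool size are purely hypothetical)
                    computes the reserved slop space:

                        def slop_bytes(pool_size_bytes, spa_slop_shift=5):
                            # Reserve 1/(2**spa_slop_shift) of the pool as slop space.
                            return pool_size_bytes >> spa_slop_shift

                        # A 1 TiB pool with the default shift of 5 reserves 32 GiB (~3.1%).
                        print(slop_bytes(1 << 40))   # 34359738368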

       zfetch_array_rd_sz (ulong)
                   If prefetching is enabled, disable prefetching  for  reads  larger  than  this
                   size.

                   Default value: 1,048,576.

       zfetch_block_cap (uint)
                   Max number of blocks to prefetch at a time

                   Default value: 256.

       zfetch_max_streams (uint)
                   Max number of streams per zfetch (prefetch streams per file).

                   Default value: 8.

       zfetch_min_sec_reap (uint)
                   Min time before an active prefetch stream can be reclaimed

                   Default value: 2.

       zfs_arc_average_blocksize (int)
                   The  ARC's  buffer  hash  table is sized based on the assumption of an average
                   block size of zfs_arc_average_blocksize  (default  8K).   This  works  out  to
                   roughly  1MB  of  hash  table per 1GB of physical memory with 8-byte pointers.
                   For configurations with a known larger average block size this  value  can  be
                   increased to reduce the memory footprint.

                   Default value: 8192.
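
                    As a rough back-of-the-envelope sketch of the sizing rule described
                    above (illustrative Python, not the kernel's actual implementation):

                        def arc_hash_table_bytes(physmem_bytes, avg_blocksize=8192, ptr_size=8):
                            # One hash bucket (an 8-byte pointer) per average-sized block.
                            return (physmem_bytes // avg_blocksize) * ptr_size

                        # 1 GiB of RAM with the 8K default works out to about 1 MiB of table.
                        print(arc_hash_table_bytes(1 << 30))   # 1048576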

       zfs_arc_evict_batch_limit (int)
                    Number of ARC headers to evict per sub-list before proceeding to another sub-
                   list.  This batch-style operation prevents entire sub-lists from being evicted
                   at once but comes at a cost of additional unlocking and locking.

                   Default value: 10.

       zfs_arc_grow_retry (int)
                   Seconds before growing arc size

                   Default value: 5.

       zfs_arc_lotsfree_percent (int)
                   Throttle  I/O  when  free  system  memory drops below this percentage of total
                   system memory.  Setting this value to 0 will disable the throttle.

                   Default value: 10.

       zfs_arc_max (ulong)
                   Max arc size

                   Default value: 0.

       zfs_arc_meta_limit (ulong)
                    The maximum size in bytes that meta data buffers are allowed to consume
                    in the ARC.  When this limit is reached, meta data buffers will be
                    reclaimed even if the overall arc_c_max has not been reached.  This value
                    defaults to 0, which indicates that 3/4 of the ARC may be used for meta data.

                   Default value: 0.

       zfs_arc_meta_min (ulong)
                   The  minimum  allowed  size in bytes that meta data buffers may consume in the
                    ARC.  This value defaults to 0, which disables a floor on the amount of
                    the ARC devoted to meta data.

                   Default value: 0.

       zfs_arc_meta_prune (int)
                   The  number of dentries and inodes to be scanned looking for entries which can
                   be dropped.  This may be required when the ARC reaches the  zfs_arc_meta_limit
                   because dentries and inodes can pin buffers in the ARC.  Increasing this value
                    will cause the dentry and inode caches to be pruned more aggressively.  Setting
                   this value to 0 will disable pruning the inode and dentry caches.

                   Default value: 10,000.

       zfs_arc_meta_adjust_restarts (ulong)
                    The number of restart passes to make while scanning the ARC attempting to
                   free buffers in order to stay below the zfs_arc_meta_limit.  This value should
                   not need to be tuned but is available to facilitate performance analysis.

                   Default value: 4096.

       zfs_arc_min (ulong)
                   Min arc size

                   Default value: 100.

       zfs_arc_min_prefetch_lifespan (int)
                   Min life of prefetch block

                   Default value: 100.

       zfs_arc_num_sublists_per_state (int)
                   To  allow more fine-grained locking, each ARC state contains a series of lists
                   for both data and meta data objects.  Locking is performed  at  the  level  of
                   these  "sub-lists".   This parameters controls the number of sub-lists per ARC
                   state.

                    Default value: 1 or the number of online CPUs, whichever is greater.

       zfs_arc_overflow_shift (int)
                   The ARC size is considered to be overflowing if it  exceeds  the  current  ARC
                   target  size  (arc_c)  by  a  threshold  determined  by  this  parameter.  The
                   threshold is calculated as a fraction of arc_c using  the  formula  "arc_c  >>
                   zfs_arc_overflow_shift".

                   The default value of 8 causes the ARC to be considered to be overflowing if it
                    exceeds the target size by 1/256th (about 0.4%) of the target size.

                   When the ARC is overflowing, new buffer  allocations  are  stalled  until  the
                   reclaim thread catches up and the overflow condition no longer exists.

                   Default value: 8.
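
                    A minimal sketch of the threshold calculation (the function name and the
                    4 GiB arc_c value are hypothetical):

                        def arc_overflow_threshold(arc_c, zfs_arc_overflow_shift=8):
                            # The overflow threshold is arc_c / 2**zfs_arc_overflow_shift.
                            return arc_c >> zfs_arc_overflow_shift

                        # With a 4 GiB target size and the default shift of 8, the ARC is
                        # considered overflowing once it exceeds arc_c by 16 MiB.
                        print(arc_overflow_threshold(4 << 30))   # 16777216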

       zfs_arc_p_min_shift (int)
                   arc_c shift to calc min/max arc_p

                   Default value: 4.

       zfs_arc_p_aggressive_disable (int)
                   Disable aggressive arc_p growth

                   Use 1 for yes (default) and 0 to disable.

       zfs_arc_p_dampener_disable (int)
                   Disable arc_p adapt dampener

                   Use 1 for yes (default) and 0 to disable.

       zfs_arc_shrink_shift (int)
                   log2(fraction of arc to reclaim)

                   Default value: 5.

       zfs_arc_sys_free (ulong)
                   The  target number of bytes the ARC should leave as free memory on the system.
                   Defaults to the larger of 1/64 of  physical  memory  or  512K.   Setting  this
                   option to a non-zero value will override the default.

                   Default value: 0.

       zfs_autoimport_disable (int)
                   Disable  pool  import  at  module  load  by ignoring the cache file (typically
                   /etc/zfs/zpool.cache).

                   Use 1 for yes (default) and 0 for no.

       zfs_dbgmsg_enable (int)
                   Internally ZFS keeps a small log to facilitate debugging.  By default the  log
                   is  disabled,  to enable it set this option to 1.  The contents of the log can
                   be accessed by reading the /proc/spl/kstat/zfs/dbgmsg file.  Writing 0 to this
                   proc file clears the log.

                   Default value: 0.

       zfs_dbgmsg_maxsize (int)
                   The maximum size in bytes of the internal ZFS debug log.

                   Default value: 4M.

       zfs_dbuf_state_index (int)
                   Calculate arc header index

                   Default value: 0.

       zfs_deadman_enabled (int)
                   Enable deadman timer

                   Use 1 for yes (default) and 0 to disable.

       zfs_deadman_synctime_ms (ulong)
                    Expiration time in milliseconds. This value has two meanings. First, it
                    is used to determine when the spa_deadman() logic should fire.  By
                    default, spa_deadman() will fire if spa_sync() has not completed in
                    1000 seconds.  Second, the value determines whether an I/O is considered
                    "hung".  Any I/O that
                   has not completed in zfs_deadman_synctime_ms is considered "hung" resulting in
                   a zevent being logged.

                   Default value: 1,000,000.

       zfs_dedup_prefetch (int)
                   Enable prefetching dedup-ed blks

                   Use 1 for yes and 0 to disable (default).

       zfs_delay_min_dirty_percent (int)
                   Start to delay each transaction once there  is  this  amount  of  dirty  data,
                   expressed  as  a  percentage  of  zfs_dirty_data_max.  This value should be >=
                   zfs_vdev_async_write_active_max_dirty_percent.    See   the    section    "ZFS
                   TRANSACTION DELAY".

                   Default value: 60.

       zfs_delay_scale (int)
                   This  controls  how quickly the transaction delay approaches infinity.  Larger
                   values cause longer delays for a given amount of dirty data.

                    For the smoothest delay, this value should be about 1 billion divided by
                    the maximum number of operations per second; for example, storage capable
                    of roughly 2,000 IOPS corresponds to the default of 500,000.  This will
                    smoothly handle between 10x and 1/10th this number.

                   See the section "ZFS TRANSACTION DELAY".

                   Note: zfs_delay_scale * zfs_dirty_data_max must be < 2^64.

                   Default value: 500,000.

       zfs_dirty_data_max (int)
                   Determines the dirty space limit in bytes.  Once this limit is  exceeded,  new
                   writes  are  halted until space frees up. This parameter takes precedence over
                   zfs_dirty_data_max_percent.  See the section "ZFS TRANSACTION DELAY".

                   Default value: 10 percent of all memory, capped at zfs_dirty_data_max_max.

       zfs_dirty_data_max_max (int)
                   Maximum allowable value of zfs_dirty_data_max, expressed in bytes.  This limit
                   is   only   enforced   at   module   load   time,   and  will  be  ignored  if
                   zfs_dirty_data_max is later changed.  This  parameter  takes  precedence  over
                   zfs_dirty_data_max_max_percent. See the section "ZFS TRANSACTION DELAY".

                   Default value: 25% of physical RAM.

       zfs_dirty_data_max_max_percent (int)
                   Maximum  allowable  value  of zfs_dirty_data_max, expressed as a percentage of
                   physical RAM.  This limit is only enforced at module load time,  and  will  be
                   ignored    if    zfs_dirty_data_max   is   later   changed.    The   parameter
                   zfs_dirty_data_max_max takes precedence over this one. See  the  section  "ZFS
                   TRANSACTION DELAY".

                    Default value: 25.

       zfs_dirty_data_max_percent (int)
                   Determines  the  dirty  space  limit, expressed as a percentage of all memory.
                   Once this limit is exceeded, new writes are halted until space frees up.   The
                   parameter  zfs_dirty_data_max takes precedence over this one.  See the section
                   "ZFS TRANSACTION DELAY".

                   Default value: 10%, subject to zfs_dirty_data_max_max.

       zfs_dirty_data_sync (int)
                   Start syncing out a transaction group if there is at  least  this  much  dirty
                   data.

                   Default value: 67,108,864.

       zfs_free_max_blocks (ulong)
                   Maximum number of blocks freed in a single txg.

                   Default value: 100,000.

       zfs_vdev_async_read_max_active (int)
                    Maximum asynchronous read I/Os active to each device.  See the section "ZFS I/O
                   SCHEDULER".

                   Default value: 3.

       zfs_vdev_async_read_min_active (int)
                   Minimum asynchronous read I/Os active to each device.  See  the  section  "ZFS
                   I/O SCHEDULER".

                   Default value: 1.

       zfs_vdev_async_write_active_max_dirty_percent (int)
                   When  the  pool  has  more  than zfs_vdev_async_write_active_max_dirty_percent
                   dirty data, use zfs_vdev_async_write_max_active to limit active async  writes.
                   If  the  dirty  data  is between min and max, the active I/O limit is linearly
                   interpolated. See the section "ZFS I/O SCHEDULER".

                   Default value: 60.

       zfs_vdev_async_write_active_min_dirty_percent (int)
                   When the  pool  has  less  than  zfs_vdev_async_write_active_min_dirty_percent
                   dirty  data, use zfs_vdev_async_write_min_active to limit active async writes.
                   If the dirty data is between min and max, the active  I/O  limit  is  linearly
                   interpolated. See the section "ZFS I/O SCHEDULER".

                   Default value: 30.

       zfs_vdev_async_write_max_active (int)
                    Maximum asynchronous write I/Os active to each device.  See the section "ZFS
                   I/O SCHEDULER".

                   Default value: 10.

       zfs_vdev_async_write_min_active (int)
                   Minimum asynchronous write I/Os active to each device.  See the  section  "ZFS
                   I/O SCHEDULER".

                   Default value: 1.

       zfs_vdev_max_active (int)
                   The  maximum  number  of I/Os active to each device.  Ideally, this will be >=
                   the sum of each queue's max_active.  It must be  at  least  the  sum  of  each
                   queue's min_active.  See the section "ZFS I/O SCHEDULER".

                   Default value: 1,000.

       zfs_vdev_scrub_max_active (int)
                    Maximum scrub I/Os active to each device.  See the section "ZFS I/O SCHEDULER".

                   Default value: 2.

       zfs_vdev_scrub_min_active (int)
                   Minimum  scrub  I/Os  active  to  each  device.   See  the  section  "ZFS  I/O
                   SCHEDULER".

                   Default value: 1.

       zfs_vdev_sync_read_max_active (int)
                    Maximum synchronous read I/Os active to each device.  See the section "ZFS I/O
                   SCHEDULER".

                   Default value: 10.

       zfs_vdev_sync_read_min_active (int)
                   Minimum synchronous read I/Os active to each device.  See the section "ZFS I/O
                   SCHEDULER".

                   Default value: 10.

       zfs_vdev_sync_write_max_active (int)
                    Maximum synchronous write I/Os active to each device.  See the section "ZFS I/O
                   SCHEDULER".

                   Default value: 10.

       zfs_vdev_sync_write_min_active (int)
                   Minimum  synchronous  write  I/Os active to each device.  See the section "ZFS
                   I/O SCHEDULER".

                   Default value: 10.

       zfs_disable_dup_eviction (int)
                   Disable duplicate buffer eviction

                   Use 1 for yes and 0 for no (default).

       zfs_expire_snapshot (int)
                   Seconds to expire .zfs/snapshot

                   Default value: 300.

       zfs_admin_snapshot (int)
                   Allow the creation, removal, or  renaming  of  entries  in  the  .zfs/snapshot
                   directory  to cause the creation, destruction, or renaming of snapshots.  When
                   enabled this functionality works both locally and over NFS exports which  have
                   the 'no_root_squash' option set. This functionality is disabled by default.

                   Use 1 for yes and 0 for no (default).

       zfs_flags (int)
                   Set  additional  debugging  flags.  The  following  flags  may be bitwise-or'd
                   together.

                   ┌───────────────────────────────────────────────────────┐
                   │Value   Symbolic Name                                  │
                   │        Description                                    │
                   ├───────────────────────────────────────────────────────┤
                   │    1   ZFS_DEBUG_DPRINTF                              │
                   │        Enable dprintf entries in the debug log.       │
                   ├───────────────────────────────────────────────────────┤
                   │    2   ZFS_DEBUG_DBUF_VERIFY *                        │
                   │        Enable extra dbuf verifications.               │
                   ├───────────────────────────────────────────────────────┤
                   │    4   ZFS_DEBUG_DNODE_VERIFY *                       │
                   │        Enable extra dnode verifications.              │
                   ├───────────────────────────────────────────────────────┤
                   │    8   ZFS_DEBUG_SNAPNAMES                            │
                   │        Enable snapshot name verification.             │
                   ├───────────────────────────────────────────────────────┤
                   │   16   ZFS_DEBUG_MODIFY                               │
                   │        Check for illegally modified ARC buffers.      │
                   ├───────────────────────────────────────────────────────┤
                   │   32   ZFS_DEBUG_SPA                                  │
                   │        Enable spa_dbgmsg entries in the debug log.    │
                   ├───────────────────────────────────────────────────────┤
                   │   64   ZFS_DEBUG_ZIO_FREE                             │
                   │        Enable verification of block frees.            │
                   ├───────────────────────────────────────────────────────┤
                   │  128   ZFS_DEBUG_HISTOGRAM_VERIFY                     │
                   │        Enable extra spacemap histogram verifications. │
                   └───────────────────────────────────────────────────────┘
                   * Requires debug build.

                   Default value: 0.
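
                    As a minimal example of combining the flag values above, the following
                    Python snippet ORs two of them together; the resulting integer is what
                    would be assigned to zfs_flags:

                        ZFS_DEBUG_DPRINTF = 1
                        ZFS_DEBUG_MODIFY  = 16

                        # Enable dprintf entries and checks for illegally modified ARC buffers.
                        zfs_flags = ZFS_DEBUG_DPRINTF | ZFS_DEBUG_MODIFY
                        print(zfs_flags)   # 17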

       zfs_free_leak_on_eio (int)
                   If destroy encounters an EIO while reading metadata  (e.g.  indirect  blocks),
                   space  referenced  by  the  missing  metadata can not be freed.  Normally this
                   causes the background destroy to become "stalled", as it  is  unable  to  make
                   forward  progress.   While  in this stalled state, all remaining space to free
                   from the error-encountering filesystem is "temporarily leaked".  Set this flag
                    to cause the destroy to ignore the EIO, permanently leak the space from indirect blocks
                   that can not be read, and continue to free everything else that it can.

                   The default, "stalling" behavior is useful  if  the  storage  partially  fails
                    (i.e. some but not all I/Os fail), and then later recovers.  In this case, we
                   will be able to continue pool operations while it  is  partially  failed,  and
                   when  it recovers, we can continue to free the space, with no leaks.  However,
                   note that this case is actually fairly rare.

                   Typically pools either (a) fail completely (but perhaps  temporarily,  e.g.  a
                   top-level  vdev  going offline), or (b) have localized, permanent errors (e.g.
                   disk returns the wrong data due to bit flip or firmware bug).   In  case  (a),
                   this  setting  does not matter because the pool will be suspended and the sync
                   thread will not be able to make forward progress  regardless.   In  case  (b),
                   because  the error is permanent, the best we can do is leak the minimum amount
                   of space, which  is  what  setting  this  flag  will  do.   Therefore,  it  is
                   reasonable  for  this  flag  to  normally  be  set,  but  we  chose  the  more
                   conservative approach of not setting it, so that there is  no  possibility  of
                   leaking space in the "partial temporary" failure case.

                   Default value: 0.

       zfs_free_min_time_ms (int)
                   Min millisecs to free per txg

                   Default value: 1,000.

       zfs_immediate_write_sz (long)
                   Largest data block to write to zil

                   Default value: 32,768.

       zfs_max_recordsize (int)
                   We  currently  support  block  sizes  from 512 bytes to 16MB.  The benefits of
                   larger blocks, and thus larger IO, need to be  weighed  against  the  cost  of
                   COWing  a giant block to modify one byte.  Additionally, very large blocks can
                    have an impact on I/O latency, and also potentially on the memory allocator.
                   Therefore,   we   do   not   allow  the  recordsize  to  be  set  larger  than
                   zfs_max_recordsize (default 1MB).  Larger blocks can be  created  by  changing
                   this  tunable,  and  pools with larger blocks can always be imported and used,
                   regardless of this setting.

                   Default value: 1,048,576.

       zfs_mdcomp_disable (int)
                   Disable meta data compression

                   Use 1 for yes and 0 for no (default).

       zfs_metaslab_fragmentation_threshold (int)
                   Allow metaslabs to keep their active state  as  long  as  their  fragmentation
                   percentage  is  less  than  or  equal  to  this value. An active metaslab that
                   exceeds this threshold will no longer keep its active status  allowing  better
                   metaslabs to be selected.

                   Default value: 70.

       zfs_mg_fragmentation_threshold (int)
                    Metaslab groups are considered eligible for allocations if their fragmentation
                   metric (measured as a percentage) is less than or equal to this  value.  If  a
                   metaslab  group  exceeds  this  threshold  then  it will be skipped unless all
                   metaslab groups within the metaslab class have also crossed this threshold.

                   Default value: 85.

       zfs_mg_noalloc_threshold (int)
                   Defines  a  threshold  at  which  metaslab  groups  should  be  eligible   for
                   allocations.   The  value  is  expressed  as a percentage of free space beyond
                   which a metaslab group is always eligible  for  allocations.   If  a  metaslab
                    group's free space is less than or equal to the threshold, the allocator
                   will avoid allocating to that group unless all groups in the pool have reached
                   the  threshold.   Once  all  groups have reached the threshold, all groups are
                   allowed to accept allocations.  The default value of 0  disables  the  feature
                   and causes all metaslab groups to be eligible for allocations.

                    This parameter makes it possible to deal with pools having heavily imbalanced vdevs, such
                   as would be the case when a new vdev has been added.  Setting the threshold to
                   a  non-zero  percentage  will  stop  allocations from being made to vdevs that
                   aren't filled to the specified percentage and allow  lesser  filled  vdevs  to
                   acquire   more   allocations   than   they   otherwise  would  under  the  old
                   zfs_mg_alloc_failures facility.

                   Default value: 0.

       zfs_no_scrub_io (int)
                   Set for no scrub I/O

                   Use 1 for yes and 0 for no (default).

       zfs_no_scrub_prefetch (int)
                   Set for no scrub prefetching

                   Use 1 for yes and 0 for no (default).

       zfs_nocacheflush (int)
                   Disable cache flushes

                   Use 1 for yes and 0 for no (default).

       zfs_nopwrite_enabled (int)
                   Enable NOP writes

                   Use 1 for yes (default) and 0 to disable.

       zfs_pd_bytes_max (int)
                   The number of bytes which should be prefetched.

                   Default value: 52,428,800.

       zfs_prefetch_disable (int)
                   Disable all ZFS prefetching

                   Use 1 for yes and 0 for no (default).

       zfs_read_chunk_size (long)
                   Bytes to read per chunk

                   Default value: 1,048,576.

       zfs_read_history (int)
                   Historic statistics for the last N reads

                   Default value: 0.

       zfs_read_history_hits (int)
                   Include cache hits in read history

                   Use 1 for yes and 0 for no (default).

       zfs_recover (int)
                   Set to attempt to recover from fatal errors. This should only  be  used  as  a
                   last resort, as it typically results in leaked space, or worse.

                   Use 1 for yes and 0 for no (default).

       zfs_resilver_delay (int)
                   Number of ticks to delay prior to issuing a resilver I/O operation when a non-
                   resilver or non-scrub I/O operation has occurred within the past zfs_scan_idle
                   ticks.

                   Default value: 2.

       zfs_resilver_min_time_ms (int)
                   Min millisecs to resilver per txg

                   Default value: 3,000.

       zfs_scan_idle (int)
                   Idle  window  in clock ticks.  During a scrub or a resilver, if a non-scrub or
                    non-resilver I/O operation has occurred during this window, the next scrub or
                    resilver operation is delayed by zfs_scrub_delay or zfs_resilver_delay
                    ticks, respectively.

                   Default value: 50.

       zfs_scan_min_time_ms (int)
                   Min millisecs to scrub per txg

                   Default value: 1,000.

       zfs_scrub_delay (int)
                   Number of ticks to delay prior to issuing a scrub I/O operation  when  a  non-
                   scrub or non-resilver I/O operation has occurred within the past zfs_scan_idle
                   ticks.

                   Default value: 4.

       zfs_send_corrupt_data (int)
                    Allow sending corrupt data (ignore read/checksum errors when sending data)

                   Use 1 for yes and 0 for no (default).

       zfs_sync_pass_deferred_free (int)
                   Defer frees starting in this pass

                   Default value: 2.

       zfs_sync_pass_dont_compress (int)
                   Don't compress starting in this pass

                   Default value: 5.

       zfs_sync_pass_rewrite (int)
                   Rewrite new bps starting in this pass

                   Default value: 2.

       zfs_top_maxinflight (int)
                   Max I/Os per top-level vdev during scrub or resilver operations.

                   Default value: 32.

       zfs_txg_history (int)
                   Historic statistics for the last N txgs

                   Default value: 0.

       zfs_txg_timeout (int)
                   Max seconds worth of delta per txg

                   Default value: 5.

       zfs_vdev_aggregation_limit (int)
                   Max vdev I/O aggregation size

                   Default value: 131,072.

       zfs_vdev_cache_bshift (int)
                    Shift size to inflate reads to

                   Default value: 16.

       zfs_vdev_cache_max (int)
                    Inflate reads smaller than this value

       zfs_vdev_cache_size (int)
                   Total size of the per-disk cache

                   Default value: 0.

       zfs_vdev_mirror_switch_us (int)
                   Switch mirrors every N usecs

                   Default value: 10,000.

       zfs_vdev_read_gap_limit (int)
                   Aggregate read I/O over gap

                   Default value: 32,768.

       zfs_vdev_scheduler (charp)
                   I/O scheduler

                   Default value: noop.

       zfs_vdev_write_gap_limit (int)
                   Aggregate write I/O over gap

                   Default value: 4,096.

       zfs_zevent_cols (int)
                   Max event column width

                   Default value: 80.

       zfs_zevent_console (int)
                   Log events to the console

                   Use 1 for yes and 0 for no (default).

       zfs_zevent_len_max (int)
                   Max event queue length

                   Default value: 0.

       zil_replay_disable (int)
                   Disable intent logging replay

                   Use 1 for yes and 0 for no (default).

       zil_slog_limit (ulong)
                   Max commit bytes to separate log device

                   Default value: 1,048,576.

       zio_delay_max (int)
                   Max zio millisec delay before posting event

                   Default value: 30,000.

       zio_requeue_io_start_cut_in_line (int)
                   Prioritize requeued I/O

                   Default value: 0.

       zio_taskq_batch_pct (uint)
                   Percentage of online CPUs (or CPU cores, etc) which will run a  worker  thread
                   for  IO.  These  workers  are  responsible for IO work such as compression and
                    checksum calculations. A fractional number of CPUs will be rounded down.

                   The default value of 75 was chosen to avoid using all CPUs which can result in
                   latency  issues and inconsistent application performance, especially when high
                   compression is enabled.

                   Default value: 75.
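
                    A simple sketch of the rounding behaviour described above (illustrative
                    only; the helper is not part of the module):

                        def zio_taskq_threads(online_cpus, zio_taskq_batch_pct=75):
                            # Fractional CPUs are rounded down.
                            return (online_cpus * zio_taskq_batch_pct) // 100

                        print(zio_taskq_threads(6))   # 4 worker threads on a 6-CPU system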

       zvol_inhibit_dev (uint)
                   Do not create zvol device nodes

                   Use 1 for yes and 0 for no (default).

       zvol_major (uint)
                   Major number for zvol device

                   Default value: 230.

       zvol_max_discard_blocks (ulong)
                   Max number of blocks to discard at once

                   Default value: 16,384.

       zvol_prefetch_bytes (uint)
                    When adding a zvol to the system, prefetch zvol_prefetch_bytes from the start
                   and  end  of the volume.  Prefetching these regions of the volume is desirable
                   because they are likely to be accessed  immediately  by  blkid(8)  or  by  the
                   kernel scanning for a partition table.

                   Default value: 131,072.

ZFS I/O SCHEDULER

       ZFS  issues  I/O operations to leaf vdevs to satisfy and complete I/Os.  The I/O scheduler
       determines when and in what order those operations are issued.  The I/O scheduler  divides
       operations  into  five  I/O  classes  prioritized  in the following order: sync read, sync
       write, async read, async write, and scrub/resilver.  Each queue defines  the  minimum  and
       maximum  number  of  concurrent operations that may be issued to the device.  In addition,
       the device has an aggregate maximum, zfs_vdev_max_active. Note that the sum  of  the  per-
       queue  minimums  must  not  exceed  the  aggregate  maximum.   If the sum of the per-queue
       maximums exceeds the  aggregate  maximum,  then  the  number  of  active  I/Os  may  reach
       zfs_vdev_max_active,  in  which  case no further I/Os will be issued regardless of whether
       all per-queue minimums have been met.

       For many physical devices, throughput increases with the number of concurrent  operations,
       but  latency  typically suffers. Further, physical devices typically have a limit at which
       more concurrent operations have no effect on  throughput  or  can  actually  cause  it  to
       decrease.

       The  scheduler selects the next operation to issue by first looking for an I/O class whose
       minimum has not been satisfied. Once all are satisfied and the aggregate maximum  has  not
       been  hit, the scheduler looks for classes whose maximum has not been satisfied. Iteration
       through the I/O classes is done in the order specified above. No  further  operations  are
       issued  if  the aggregate maximum number of concurrent operations has been hit or if there
       are no operations queued for an I/O class that has not hit its maximum.  Every time an I/O
       is queued or an operation completes, the I/O scheduler looks for new operations to issue.

        In general, smaller max_active values will lead to lower latency of synchronous
        operations.  Larger max_active values may lead to higher overall throughput, depending
        on underlying storage.

        The ratio of the queues' max_active values determines the balance of performance between
        reads, writes, and scrubs.  E.g., increasing zfs_vdev_scrub_max_active will cause the
        scrub or resilver to complete more quickly, but will cause reads and writes to have
        higher latency and lower throughput.

       All I/O classes have a fixed maximum number of outstanding operations except for the async
       write  class.  Asynchronous  writes represent the data that is committed to stable storage
       during the syncing stage for transaction groups.  Transaction  groups  enter  the  syncing
       state  periodically  so  the  number of queued async writes will quickly burst up and then
       bleed down to zero. Rather than servicing them as quickly as possible, the  I/O  scheduler
       changes  the  maximum  number  of active async write I/Os according to the amount of dirty
       data in the pool.  Since both throughput and latency typically increase with the number of
       concurrent operations issued to physical devices, reducing the burstiness in the number of
       concurrent operations also stabilizes the response time of operations from other -- and in
       particular  synchronous  --  queues.  In  broad strokes, the I/O scheduler will issue more
       concurrent operations from the async write queue as there's more dirty data in the pool.

       Async Writes

       The number of concurrent operations issued for the async write I/O class follows a  piece-
       wise linear function defined by a few adjustable points.

              |              o---------| <-- zfs_vdev_async_write_max_active
         ^    |             /^         |
         |    |            / |         |
       active |           /  |         |
        I/O   |          /   |         |
       count  |         /    |         |
              |        /     |         |
              |-------o      |         | <-- zfs_vdev_async_write_min_active
             0|_______^______|_________|
              0%      |      |       100% of zfs_dirty_data_max
                      |      |
                      |      `-- zfs_vdev_async_write_active_max_dirty_percent
                      `--------- zfs_vdev_async_write_active_min_dirty_percent

       Until  the  amount of dirty data exceeds a minimum percentage of the dirty data allowed in
       the pool, the I/O scheduler will limit the number of concurrent operations to the minimum.
       As  that  threshold  is  crossed,  the  number  of  concurrent operations issued increases
       linearly to the maximum at the specified maximum percentage of the dirty data  allowed  in
       the pool.
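
        As a minimal illustration, the piece-wise linear function above can be modelled by the
        following Python sketch (an assumed helper using the default tunable values, not ZFS
        code):

            def async_write_max_active(dirty_pct, min_active=1, max_active=10,
                                       min_dirty_pct=30, max_dirty_pct=60):
                # Below the minimum dirty threshold, stay at the minimum.
                if dirty_pct <= min_dirty_pct:
                    return min_active
                # Above the maximum dirty threshold, allow the maximum.
                if dirty_pct >= max_dirty_pct:
                    return max_active
                # In between, interpolate linearly between the two break points.
                span = max_dirty_pct - min_dirty_pct
                return min_active + (max_active - min_active) * (dirty_pct - min_dirty_pct) / span

            print(async_write_max_active(45))   # 5.5, halfway up the slope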

       Ideally,  the  amount  of  dirty  data  on a busy pool will stay in the sloped part of the
       function         between         zfs_vdev_async_write_active_min_dirty_percent         and
       zfs_vdev_async_write_active_max_dirty_percent.  If it exceeds the maximum percentage, this
       indicates that the rate of incoming data is greater than the rate that the backend storage
       can  handle.  In  this case, we must further throttle incoming writes, as described in the
       next section.

ZFS TRANSACTION DELAY

       We delay transactions when we've  determined  that  the  backend  storage  isn't  able  to
       accommodate the rate of incoming writes.

       If there is already a transaction waiting, we delay relative to when that transaction will
       finish waiting.  This way the calculated delay  time  is  independent  of  the  number  of
       threads concurrently executing transactions.

       If  we are the only waiter, wait relative to when the transaction started, rather than the
       current time.  This credits the  transaction  for  "time  already  served",  e.g.  reading
       indirect blocks.

       The minimum time for a transaction to take is calculated as:
           min_time = zfs_delay_scale * (dirty - min) / (max - dirty)
           min_time is then capped at 100 milliseconds.
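
        As an illustration of this formula, the following Python sketch (the helper name and
        the 1 GiB zfs_dirty_data_max figure are assumptions for the example) returns the delay
        in nanoseconds, consistent with the 500us midpoint noted below:

            def tx_min_time_ns(dirty, zfs_delay_scale=500000,
                               dirty_min=int(0.60 * (1 << 30)),   # zfs_delay_min_dirty_percent of max
                               dirty_max=(1 << 30)):              # assumed zfs_dirty_data_max
                if dirty <= dirty_min:
                    return 0
                delay = zfs_delay_scale * (dirty - dirty_min) / (dirty_max - dirty)
                return min(delay, 100 * 1000 * 1000)              # capped at 100 milliseconds

            # At the midpoint of the curve (80% dirty) the delay equals zfs_delay_scale,
            # i.e. 500,000 ns = 500us, matching the plots below.
            print(tx_min_time_ns(int(0.80 * (1 << 30))))          # ~500000.0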

       The delay has two degrees of freedom that can be adjusted via tunables.  The percentage of
       dirty data at which we start to delay  is  defined  by  zfs_delay_min_dirty_percent.  This
       should  typically  be at or above zfs_vdev_async_write_active_max_dirty_percent so that we
       only start to delay after writing at full speed has failed to keep up  with  the  incoming
       write  rate.  The scale of the curve is defined by zfs_delay_scale. Roughly speaking, this
       variable determines the amount of delay at the midpoint of the curve.

       delay
        10ms +-------------------------------------------------------------*+
             |                                                             *|
         9ms +                                                             *+
             |                                                             *|
         8ms +                                                             *+
             |                                                            * |
         7ms +                                                            * +
             |                                                            * |
         6ms +                                                            * +
             |                                                            * |
         5ms +                                                           *  +
             |                                                           *  |
         4ms +                                                           *  +
             |                                                           *  |
         3ms +                                                          *   +
             |                                                          *   |
         2ms +                                              (midpoint) *    +
             |                                                  |    **     |
         1ms +                                                  v ***       +
             |             zfs_delay_scale ---------->     ********         |
           0 +-------------------------------------*********----------------+
             0%                    <- zfs_dirty_data_max ->               100%

       Note that since the delay is added to the outstanding time remaining on  the  most  recent
       transaction,  the  delay  is  effectively the inverse of IOPS.  Here the midpoint of 500us
       translates to 2000 IOPS. The shape of the curve was chosen such that small changes in  the
       amount  of  accumulated  dirty  data  in the first 3/4 of the curve yield relatively small
       differences in the amount of delay.

       The effects can be easier to understand when the amount of delay is represented on  a  log
       scale:

       delay
       100ms +-------------------------------------------------------------++
             +                                                              +
             |                                                              |
             +                                                             *+
        10ms +                                                             *+
             +                                                           ** +
             |                                              (midpoint)  **  |
             +                                                  |     **    +
         1ms +                                                  v ****      +
             +             zfs_delay_scale ---------->        *****         +
             |                                             ****             |
             +                                          ****                +
       100us +                                        **                    +
             +                                       *                      +
             |                                      *                       |
             +                                     *                        +
        10us +                                     *                        +
             +                                                              +
             |                                                              |
             +                                                              +
             +--------------------------------------------------------------+
             0%                    <- zfs_dirty_data_max ->               100%

       Note  here that only as the amount of dirty data approaches its limit does the delay start
       to increase rapidly. The goal of a properly tuned system should be to keep the  amount  of
       dirty data out of that range by first ensuring that the appropriate limits are set for the
       I/O scheduler to reach optimal throughput on the backend storage, and then by changing the
       value of zfs_delay_scale to increase the steepness of the curve.

                                           Nov 16, 2013                  ZFS-MODULE-PARAMETERS(5)