
NAME

       varnishd - HTTP accelerator daemon

SYNOPSIS

       varnishd
              [-a [name=][listen_address[,PROTO]]] [-b [host[:port]|path]] [-C] [-d] [-F] [-f
              config] [-h type[,options]] [-I clifile] [-i identity] [-j jail[,jailoptions]]  [-l
              vsl]   [-M   address:port]   [-n   workdir]   [-P   file]   [-p   param=value]  [-r
              param[,param...]]  [-S secret-file] [-s [name=]kind[,options]] [-T  address[:port]]
              [-t TTL] [-V] [-W waiter]

       varnishd [-x parameter|vsl|cli|builtin|optstring]

       varnishd [-?]

DESCRIPTION

       The varnishd daemon accepts HTTP requests from clients, passes them on to a backend server
       and caches the returned documents to better satisfy future requests for the same document.

OPTIONS

   Basic options
       -a <[name=][listen_address[,PROTO]]>
               Listen for client requests on the specified listen_address (see below).

               Name is referenced in logs. If name is not specified, "a0", "a1", etc. are used.

               PROTO can be "HTTP" (the default) or "PROXY".  Both versions 1 and 2 of the PROXY
               protocol are supported.

              Multiple -a arguments are allowed.

              If  no  -a  argument  is given, the default -a :80 will listen to all IPv4 and IPv6
              interfaces.

       -a <[name=][ip_address][:port][,PROTO]>
              The ip_address can be a host name ("localhost"), an IPv4 dotted-quad  ("127.0.0.1")
              or an IPv6 address enclosed in square brackets ("[::1]")

              If port is not specified, port 80 (http) is used.

              At least one of ip_address or port is required.

       -a <[name=][path][,PROTO][,user=name][,group=name][,mode=octal]>
               (VCL 4.1 and higher)

              Accept   connections   on   a   Unix   domain   socket.    Path  must  be  absolute
              ("/path/to/listen.sock") or  "@"  followed  by  the  name  of  an  abstract  socket
              ("@myvarnishd").

              The  user,  group  and mode sub-arguments may be used to specify the permissions of
              the socket file -- use names for user and group, and  a  3-digit  octal  value  for
              mode. These sub-arguments do not apply to abstract sockets.
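
               For example, varnishd might be started with both a TCP listen address and a
               Unix domain socket accepting PROXY protocol connections; the names and paths
               below are purely illustrative:

                  # "http", "proxy" and the paths below are example values
                  varnishd -a http=:80 \
                           -a proxy=/run/varnish/proxy.sock,PROXY,user=varnish,mode=660 \
                           -f /etc/varnish/default.vcl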

       -b <[host[:port]|path]>
              Use  the specified host as backend server. If port is not specified, the default is
              8080.

              If the value of -b begins with /, it is interpreted as the absolute path of a  Unix
              domain socket to which Varnish connects. In that case, the value of -b must satisfy
              the conditions required for the .path field of a backend declaration,  see  vcl(7).
              Backends with Unix socket addresses may only be used with VCL versions >= 4.1.

               -b can be used only once, and not together with -f.

       -f config
              Use  the  specified  VCL  configuration  file  instead of the builtin default.  See
              vcl(7) for details on VCL syntax.

              If a single -f option is used, then the VCL instance loaded from the file is  named
              "boot"  and immediately becomes active. If more than one -f option is used, the VCL
              instances are named "boot0", "boot1" and so forth, in the  order  corresponding  to
              the -f arguments, and the last one is named "boot", which becomes active.

              Either  -b  or  one  or  more  -f options must be specified, but not both, and they
              cannot both be left out, unless -d is used to start varnishd in debugging mode.  If
              the  empty  string is specified as the sole -f option, then varnishd starts without
              starting the worker process, and the management process will accept  CLI  commands.
              You  can  also combine an empty -f option with an initialization script (-I option)
              and the child process will be started if there is an active VCL at the end  of  the
              initialization.

              When  used  with  a  relative  file name, config is searched in the vcl_path. It is
              possible to set this path prior to using  -f  options  with  a  -p  option.  During
              startup,   varnishd   doesn't   complain   about   unsafe  VCL  paths:  unlike  the
              varnish-cli(7) that could later be accessed remotely,  starting  varnishd  requires
              local privileges.
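
               As an illustration, either of the following invocations starts varnishd with a
               VCL file or, alternatively, with a single backend (the file name and backend
               address are examples only):

                  # example VCL path and backend address
                  varnishd -f /etc/varnish/default.vcl
                  varnishd -b 127.0.0.1:8080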

       -n workdir
              Runtime directory for the shared memory, compiled VCLs etc.

              In  performance  critical  applications,  this  directory should be on a RAM backed
              filesystem.

              Relative paths will be appended to /var/run/ (NB: Binary packages  of  Varnish  may
              have adjusted this to the platform.)

              The default value is /var/run/varnishd (NB: as above.)

   Documentation options
       For  these  options,  varnishd  prints information to standard output and exits. When a -x
       option is used, it must be the only option (it outputs documentation in  reStructuredText,
       aka RST).

       -?
          Print the usage message.

       -x parameter
              Print documentation of the runtime parameters (-p options), see List of Parameters.

       -x vsl Print documentation of the tags used in the Varnish shared memory log, see vsl(7).

       -x cli Print documentation of the command line interface, see varnish-cli(7).

       -x builtin
              Print the contents of the default VCL program builtin.vcl.

       -x optstring
              Print the optstring parameter to getopt(3) to help writing wrapper scripts.
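
        For example, the runtime parameter documentation might be exported for offline reading
        (the output file name is arbitrary):

           varnishd -x parameter > varnishd-parameters.rst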

   Operations options
       -F     Do  not  fork, run in the foreground. Only one of -F or -d can be specified, and -F
              cannot be used together with -C.

       -T <address[:port]>
              Offer a management interface on the specified address and port. See  varnish-cli(7)
              for  documentation of the management commands.  To disable the management interface
              use none.

       -M <address:port>
               Connect to this port and offer the command line interface.  Think of it as a
               reverse shell.  When running with -M and no backend is defined, the child process
               (the cache) will not start initially.

       -P file
              Write the PID of the process to the specified file.

       -i identity
              Specify  the  identity  of  the  Varnish  server.  This  can  be   accessed   using
              server.identity from VCL.

              The  server  identity is used for the received-by field of Via headers generated by
              Varnish. For this reason, it must be a valid token as defined by the HTTP grammar.

              If not specified the output of gethostname(3) is used, in which case the syntax  is
              assumed to be correct.

       -I clifile
               Execute the management commands in the file given as clifile before the worker
              process starts, see CLI Command File.
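
        As an illustration, the following sketch combines a management interface, a PID file, a
        CLI command file and an explicit identity; all addresses, paths and names are examples
        only:

           # addresses, paths and the identity below are example values
           varnishd -f /etc/varnish/default.vcl \
                    -T 127.0.0.1:6082 \
                    -P /run/varnishd.pid \
                    -I /etc/varnish/start.cli \
                    -i cache-eu-1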

   Tuning options
       -t TTL Specifies the default time to live (TTL) for cached objects. This is a shortcut for
              specifying the default_ttl run-time parameter.

       -p <param=value>
              Set the parameter specified by param to the specified value, see List of Parameters
              for details. This option can be used multiple times to specify multiple parameters.

       -s <[name=]type[,options]>
              Use the specified storage backend. See Storage Backend section.

              This option can be used multiple times to specify multiple storage files.  Name  is
               referenced in logs, VCL, statistics, etc. If name is not specified, "s0", "s1" and
               so forth are used.

       -l <vsl>
              Specifies size of the space for the VSL records, shorthand for -p  vsl_space=<vsl>.
              Scaling  suffixes like 'K' and 'M' can be used up to (G)igabytes. See vsl_space for
              more information.
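
        As an illustration, the tuning shortcuts above might be combined as follows; the values
        are examples, not recommendations:

           # example values only
           varnishd -f /etc/varnish/default.vcl \
                    -t 120 \
                    -p default_grace=10 \
                    -s malloc,1G \
                    -l 128m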

   Security options
       -r <param[,param...]>
              Make the listed parameters read only. This gives the system administrator a way  to
              limit  what the Varnish CLI can do.  Consider making parameters such as cc_command,
              vcc_allow_inline_c and vmod_path read only as these  can  potentially  be  used  to
              escalate privileges from the CLI.

       -S secret-file
              Path  to  a  file containing a secret used for authorizing access to the management
              port. To disable authentication use none.

              If this argument is not provided, a secret drawn  from  the  system  PRNG  will  be
              written to a file called _.secret in the working directory (see opt_n) with default
               ownership and permissions of the user that started varnish.

              Thus, users wishing to delegate control over varnish will probably want to create a
               custom secret file with appropriate permissions (i.e. readable by a Unix group to
               which control is delegated).
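
               As a sketch, a group-readable secret file might be created and used together with
               read-only parameters like this (the paths and the group name are examples only):

                  # "varnishadmin" is an example group name
                  dd if=/dev/urandom of=/etc/varnish/secret count=1 bs=512
                  chgrp varnishadmin /etc/varnish/secret
                  chmod 640 /etc/varnish/secret
                  varnishd -f /etc/varnish/default.vcl \
                           -S /etc/varnish/secret \
                           -r cc_command,vcc_allow_inline_c,vmod_path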

       -j <jail[,jailoptions]>
              Specify the jailing mechanism to use. See Jail section.

   Advanced, development and debugging options
       -d     Enables debugging mode: The parent process  runs  in  the  foreground  with  a  CLI
              connection on stdin/stdout, and the child process must be started explicitly with a
              CLI command. Terminating the parent process will also terminate the child.

              Only one of -d or -F can be specified, and -d cannot be used together with -C.

       -C     Print VCL code compiled to C language and exit. Specify the  VCL  file  to  compile
              with  the  -f  option.  Either -f or -b must be used with -C, and -C cannot be used
              with -F or -d.

       -V     Display the version number and exit. This must be the only option.

       -h <type[,options]>
              Specifies the hash algorithm. See Hash Algorithm section for a  list  of  supported
              algorithms.

       -W waiter
              Specifies the waiter type to use.

   Hash Algorithm
       The following hash algorithms are available:

       -h critbit
               A self-scaling tree structure, the default hash algorithm in Varnish Cache 2.1 and
               onwards. In comparison to a more traditional B tree, the critbit tree is almost
               completely lockless. Do not change this unless you are certain of what you are doing.

       -h simple_list
              A simple doubly-linked list.  Not recommended for production use.

       -h <classic[,buckets]>
              A  standard  hash  table.  The hash key is the CRC32 of the object's URL modulo the
              size of the hash table.  Each table entry points to a list of elements which  share
              the  same  hash  key.  The buckets parameter specifies the number of entries in the
              hash table.  The default is 16383.
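
               For example, a larger table might be requested like this (the bucket count is
               illustrative):

                  varnishd -f /etc/varnish/default.vcl -h classic,65536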

   Storage Backend
       The argument format to define storage backends is:

        -s <[name=]kind[,options]>
              If  name  is  omitted,  Varnish  will  name  storages  sN,  starting  with  s0  and
              incrementing N for every new storage.

              For kind and options see details below.

        Storages can be used in VCL as storage.name, so, for example, if myStorage was defined by
       -s myStorage=malloc,5G, it could be used in VCL like so:

          set beresp.storage = storage.myStorage;

        A special name is Transient, which is the default storage for uncacheable objects
        resulting from a pass, hit-for-miss or hit-for-pass.

       If no -s options are given, the default is:

          -s default,100m

       If  no  Transient  storage  is  defined,  the  default is an unbound default storage as if
       defined as:

          -s Transient=default

       The following storage types and options are available:

       -s <default[,size]>
              The default storage type resolves to umem where available and malloc otherwise.

       -s <malloc[,size]>
              malloc is a memory based backend.

       -s <umem[,size]>
              umem is a storage backend which is more efficient than malloc on platforms where it
              is available.

              See  the section on umem in chapter Storage backends of The Varnish Users Guide for
              details.

       -s <file,path[,size[,granularity[,advice]]]>
               The file backend stores data in a file on disk. The file will be accessed using
               mmap. Note that this storage provides no cache persistence.

              The  path  is  mandatory.  If  path points to a directory, a temporary file will be
              created  in  that  directory  and  immediately  unlinked.  If  path  points  to   a
              non-existing file, the file will be created.

              If  size  is  omitted, and path points to an existing file with a size greater than
              zero, the size of that file will be used. If not, an error is reported.

              Granularity sets the allocation block size. Defaults to the system page size or the
              filesystem block size, whichever is larger.

              Advice  tells the kernel how varnishd expects to use this mapped region so that the
              kernel can choose the  appropriate  read-ahead  and  caching  techniques.  Possible
              values are normal, random and sequential, corresponding to MADV_NORMAL, MADV_RANDOM
              and MADV_SEQUENTIAL madvise() advice argument, respectively. Defaults to random.

       -s <persistent,path,size>
              Persistent storage. Varnish will store objects in a file  in  a  manner  that  will
              secure  the  survival of most of the objects in the event of a planned or unplanned
               shutdown of Varnish. The persistent storage backend has multiple issues and will
               likely be removed in a future version of Varnish.
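
        As an illustration, a memory storage, a named file storage and a bounded Transient
        storage might be combined like this (the names, paths and sizes are examples only):

           # "disk", the file path and the sizes are example values
           varnishd -f /etc/varnish/default.vcl \
                    -s malloc,4G \
                    -s disk=file,/var/lib/varnish/cache.bin,50G \
                    -s Transient=malloc,512m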

   Jail
       Varnish  jails  are  a generalization over various platform specific methods to reduce the
       privileges of varnish processes. They may have specific options. Available jails are:

       -j <solaris[,worker=`privspec`]>
              Reduce privileges(5) for varnishd and sub-processes to the minimally required  set.
              Only available on platforms which have the setppriv(2) call.

              The  optional  worker  argument  can be used to pass a privilege-specification (see
              ppriv(1)) by which to extend the effective set of the varnish worker process. While
              extended privileges may be required by custom vmods, not using the worker option is
              always more secure.

              Example to grant basic privileges to the worker process:

                 -j solaris,worker=basic

       -j <unix[,user=`user`][,ccgroup=`group`][,workuser=`user`]>
              Default on all other platforms when varnishd is started with an effective uid of  0
              ("as root").

              With  the unix jail mechanism activated, varnish will switch to an alternative user
              for subprocesses and change the  effective  uid  of  the  master  process  whenever
              possible.

              The  optional user argument specifies which alternative user to use. It defaults to
              varnish.

              The optional ccgroup argument specifies a group  to  add  to  varnish  subprocesses
              requiring access to a c-compiler. There is no default.

              The  optional workuser argument specifies an alternative user to use for the worker
              process. It defaults to vcache.

              The users given for the user and workuser arguments need to have the  same  primary
              ("login") group.

              To  set up a system for the default users with a group name varnish, shell commands
              similar to these may be used:

                 groupadd varnish
                 useradd -g varnish -d /nonexistent -s /bin/false \
                   -c "Varnish-Cache Daemon User" varnish
                 useradd -g varnish -d /nonexistent -s /bin/false \
                   -c "Varnish-Cache Worker User" vcache
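
               An explicit selection of the unix jail, with an illustrative group granting the
               subprocesses access to a C compiler, might then look like:

                  # "varnishcc" is an example group name
                  varnishd -f /etc/varnish/default.vcl \
                           -j unix,user=varnish,ccgroup=varnishcc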

       -j none
               Last-resort jail choice: with jail mechanism none, varnish will run all processes
               with the privileges it was started with.

   Management Interface
       If the -T option was specified, varnishd will offer a command-line management interface on
       the specified address and port.  The recommended way of  connecting  to  the  command-line
       management interface is through varnishadm(1).

       The commands available are documented in varnish-cli(7).
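
        For example, assuming the daemon was started with -T 127.0.0.1:6082 and -S
        /etc/varnish/secret (both illustrative values), the interface could be reached with:

           varnishadm -T 127.0.0.1:6082 -S /etc/varnish/secret status
           varnishadm -T 127.0.0.1:6082 -S /etc/varnish/secret vcl.list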

   CLI Command File
       The  -I  option  makes  it  possible to run arbitrary management commands when varnishd is
       launched, before the worker process is started. In particular, this is  the  way  to  load
       configurations,  apply  labels  to  them,  and  make a VCL instance active that uses those
       labels on startup:

          vcl.load panic /etc/varnish_panic.vcl
          vcl.load siteA0 /etc/varnish_siteA.vcl
          vcl.load siteB0 /etc/varnish_siteB.vcl
          vcl.load siteC0 /etc/varnish_siteC.vcl
          vcl.label siteA siteA0
          vcl.label siteB siteB0
          vcl.label siteC siteC0
          vcl.load main /etc/varnish_main.vcl
          vcl.use main

        Every line in the file, including the last line, must be terminated by a newline or
        carriage return; otherwise it is considered truncated, which is a fatal error.

       If a command in the file is prefixed with '-', failure will not abort the startup.

       Note  that  it  is  necessary  to  include an explicit vcl.use command to select which VCL
       should be the active VCL when relying on CLI Command File to load  the  configurations  at
       startup.
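
        As a sketch, a CLI command file may be combined with an empty -f option, in which case
        the worker process only starts if the command file leaves an active VCL (the paths below
        are examples only):

           # example paths and storage size
           varnishd -a :80 -s malloc,256m -f '' -I /etc/varnish/start.cli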

RUN TIME PARAMETERS

       Runtime  parameters  can  either be set during startup with the -p command line option for
       varnishd(1) or through the CLI using the param.set or param.reset commands.  They  can  be
       locked during startup with the -r command line option.
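
        For example, a parameter might be set and locked at startup, or adjusted and later reset
        through varnishadm(1) (the parameters and values are illustrative):

           varnishd -f /etc/varnish/default.vcl -p default_ttl=300 -r default_ttl
           varnishadm param.set default_grace 30
           varnishadm param.reset default_grace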

   Run Time Parameter Units
       There  are  different  types  of  parameters that may accept a list of specific values, or
       optionally take a unit suffix.

   bool
       A boolean parameter accepts the values on and off.

       It will also recognize the following values:

        • yes and no

        • true and false

        • enable and disable

   bytes
       A bytes parameter requires one of the following units suffixes:

       • b (bytes)

       • k (kibibytes, 1024 bytes)

       • m (mebibytes, 1024 kibibytes)

       • g (gibibytes, 1024 mebibytes)

       • t (tebibytes, 1024 gibibytes)

       • p (pebibytes, 1024 tebibytes)

        The multiplier suffixes may also be written with a trailing b; for example, 32k is
        equivalent to 32kb. Byte units are case-insensitive.

   seconds
       A duration parameter may accept the following units suffixes:

       • ms (milliseconds)

       • s (seconds)

       • m (minutes)

       • h (hours)

       • d (days)

       • w (weeks)

       • y (years)

       If  the  parameter  is a timeout or a deadline, a value of "never" (when allowed) disables
       the effect of the parameter.
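
        For example, byte and duration parameters might be set with unit suffixes like this (the
        parameter choices and values are illustrative):

           varnishadm param.set workspace_client 96k
           varnishadm param.set backend_idle_timeout 2m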

   Run Time Parameter Flags
       Runtime parameters are marked with shorthand flags to avoid repeating the same  text  over
        and over in the table below. The meanings of the flags are:

       • experimental

         We  have no solid information about good/bad/optimal values for this parameter. Feedback
          with experience and observations is most welcome.

       • delayed

         This parameter can be changed on the fly, but will not take effect immediately.

       • restart

          The worker process must be stopped and restarted before this parameter takes effect.

       • reload

         The VCL programs must be reloaded for this parameter to take effect.

       • wizard

         Do not touch unless you really know what you're doing.

       • only_root

         Only works if varnishd is running as root.

   Default Value Exceptions on 32 bit Systems
       Be aware that on 32 bit systems, certain default or maximum values are reduced relative to
       the values listed below, in order to conserve VM space:

       • workspace_client: 24k

       • workspace_backend: 20k

       • http_resp_size: 8k

       • http_req_size: 12k

       • gzip_buffer: 4k

       • vsl_buffer: 4k

       • vsl_space: 1G (maximum)

       • thread_pool_stack: 64k

   List of Parameters
       This  text  is  produced  from  the  same  text  you  will  find in the CLI if you use the
       param.show command:

   accept_filter
       NB: This parameter depends on a feature which is not available on all platforms.

          • Units: bool

          • Default: on (if your platform supports accept filters)

       Enable kernel accept-filters. This may require a kernel module to be  loaded  to  have  an
       effect when enabled.

        Enabling accept_filter may prevent some requests from reaching Varnish in the first
        place.  Malformed requests may go unnoticed and not increase the client_req_400 counter.
        GET or HEAD requests with a body may be blocked altogether.

   acceptor_sleep_decay
          • Default: 0.9

          • Minimum: 0

          • Maximum: 1

          • Flags: experimental

       If  we run out of resources, such as file descriptors or worker threads, the acceptor will
        sleep between accepts.  This parameter (multiplicatively) reduces the sleep duration for
        each successful accept (i.e. 0.9 = reduce by 10%).

   acceptor_sleep_incr
          • Units: seconds

          • Default: 0.000

          • Minimum: 0.000

          • Maximum: 1.000

          • Flags: experimental

       If  we run out of resources, such as file descriptors or worker threads, the acceptor will
        sleep between accepts.  This parameter controls how much longer we sleep each time we
        fail to accept a new connection.

   acceptor_sleep_max
          • Units: seconds

          • Default: 0.050

          • Minimum: 0.000

          • Maximum: 10.000

          • Flags: experimental

       If  we run out of resources, such as file descriptors or worker threads, the acceptor will
       sleep between accepts.  This parameter limits how long it can sleep  between  attempts  to
       accept new connections.

   auto_restart
          • Units: bool

          • Default: on

       Automatically restart the child/worker process if it dies.

   backend_idle_timeout
          • Units: seconds

          • Default: 60.000

          • Minimum: 1.000

       Timeout before we close unused backend connections.

   backend_local_error_holddown
          • Units: seconds

          • Default: 10.000

          • Minimum: 0.000

          • Flags: experimental

        When connecting to backends, certain error codes (EADDRNOTAVAIL, EACCES, EPERM) signal a
       local resource shortage or configuration issue for which retrying connection attempts  may
       worsen the situation due to the complexity of the operations involved in the kernel.  This
       parameter prevents repeated connection attempts for the configured duration.

   backend_remote_error_holddown
          • Units: seconds

          • Default: 0.250

          • Minimum: 0.000

          • Flags: experimental

       When connecting to  backends,  certain  error  codes  (ECONNREFUSED,  ENETUNREACH)  signal
       fundamental  connection  issues  such  as the backend not accepting connections or routing
        problems for which repeated connection attempts are considered useless. This parameter
       prevents repeated connection attempts for the configured duration.

   ban_cutoff
          • Units: bans

          • Default: 0

          • Minimum: 0

          • Flags: experimental

       Expurge  long  tail  content from the cache to keep the number of bans below this value. 0
       disables.

       When this parameter is set to a non-zero value, the ban lurker continues to work  the  ban
       list  as  usual  top  to  bottom, but when it reaches the ban_cutoff-th ban, it treats all
       objects as if they matched a ban and expurges them from cache. As  actively  used  objects
       get  tested against the ban list at request time and thus are likely to be associated with
       bans near the top of the ban list, with ban_cutoff, least recently accessed  objects  (the
       "long tail") are removed.

       This  parameter  is  a  safety net to avoid bad response times due to bans being tested at
       lookup time. Setting a cutoff trades response time for cache efficiency.  The  recommended
       value  is  proportional to rate(bans_lurker_tests_tested) / n_objects while the ban lurker
       is working, which is the number of bans the system can sustain. The additional latency due
       to request ban testing is in the order of ban_cutoff / rate(bans_lurker_tests_tested). For
       example, for rate(bans_lurker_tests_tested) = 2M/s and a tolerable  latency  of  100ms,  a
       good value for ban_cutoff may be 200K.

   ban_dups
          • Units: bool

          • Default: on

       Eliminate  older  identical  bans  when  a new ban is added.  This saves CPU cycles by not
       comparing objects to identical bans.  This is a waste of time if you have many bans  which
       are never identical.

   ban_lurker_age
          • Units: seconds

          • Default: 60.000

          • Minimum: 0.000

       The  ban lurker will ignore bans until they are this old.  When a ban is added, the active
       traffic will be tested against it as part of object  lookup.   Because  many  applications
       issue  bans  in  bursts,  this  parameter holds the ban-lurker off until the rush is over.
       This should be set to the approximate time which a ban-burst takes.

   ban_lurker_batch
          • Default: 1000

          • Minimum: 1

       The ban lurker sleeps ${ban_lurker_sleep} after examining this many objects.  Use this  to
       pace the ban-lurker if it eats too many resources.

   ban_lurker_holdoff
          • Units: seconds

          • Default: 0.010

          • Minimum: 0.000

          • Flags: experimental

       How long the ban lurker sleeps when giving way to lookup due to lock contention.

   ban_lurker_sleep
          • Units: seconds

          • Default: 0.010

          • Minimum: 0.000

       How  long  the ban lurker sleeps after examining ${ban_lurker_batch} objects.  Use this to
       pace the ban-lurker if it eats too many resources.  A value of zero will disable  the  ban
       lurker entirely.

   between_bytes_timeout
          • Units: seconds

          • Default: 60.000

          • Minimum: 0.000

          • Flags: timeout

       We  only  wait for this many seconds between bytes received from the backend before giving
        up the fetch.  VCL values, per backend or per backend request, take precedence.  This
        parameter does not apply to piped requests.

   cc_command
       NB:  The  actual default value for this parameter depends on the Varnish build environment
       and options.

          • Default: exec $CC $CFLAGS %w -shared -o %o %s

          • Flags: must_reload

       The command used for compiling the C source code  to  a  dlopen(3)  loadable  object.  The
       following expansions can be used:

       • %s: the source file name

       • %o: the output file name

       • %w: the cc_warnings parameter

       • %d: the raw default cc_command

       • %D: the expanded default cc_command

       • %n: the working directory (-n option)

       • %%: a percent sign

       Unknown  percent  expansion  sequences  are ignored, and to avoid future incompatibilities
       percent characters should be escaped with a double percent sequence.

       The %d and %D expansions allow passing the parameter's default value to a  wrapper  script
       to perform additional processing.
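
        As a sketch, the %D expansion might hand the expanded default command to a hypothetical
        wrapper script for further processing:

           # /usr/local/bin/vcl-cc-wrapper is a hypothetical wrapper script
           varnishd -f /etc/varnish/default.vcl \
                    -p 'cc_command=exec /usr/local/bin/vcl-cc-wrapper %D'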

   cc_warnings
       NB:  The  actual default value for this parameter depends on the Varnish build environment
       and options.

          • Default: -Wall -Werror

          • Flags: must_reload

       Warnings used when compiling the C source code with the cc_command parameter. By  default,
       VCL is compiled with the same set of warnings as Varnish itself.

   cli_limit
          • Units: bytes

          • Default: 64k

          • Minimum: 128b

          • Maximum: 99999999b

       Maximum  size of CLI response.  If the response exceeds this limit, the response code will
       be 201 instead of 200 and the last line will indicate the truncation.

   cli_timeout
          • Units: seconds

          • Default: 60.000

          • Minimum: 0.000

          • Flags: timeout

       Timeout for the child's replies to CLI requests.

   clock_skew
          • Units: seconds

          • Default: 10

          • Minimum: 0

        How much clock skew we are willing to accept between the backend and our own clock.

   clock_step
          • Units: seconds

          • Default: 1.000

          • Minimum: 0.000

       How much observed clock step we are willing to accept before we panic.

   connect_timeout
          • Units: seconds

          • Default: 3.500

          • Minimum: 0.000

          • Flags: timeout

       Default connection timeout for backend connections. We only try to connect to the  backend
       for  this  many  seconds  before  giving  up. VCL can override this default value for each
       backend and backend request.

   critbit_cooloff
          • Units: seconds

          • Default: 180.000

          • Minimum: 60.000

          • Maximum: 254.000

          • Flags: wizard

       How long the critbit hasher keeps deleted objheads on the cooloff list.

   debug
          • Default: none

       Enable/Disable various kinds of debugging.

          none   Disable all debugging

       Use +/- prefix to set/reset individual bits:

          req_state
                 VSL Request state engine

          workspace
                 VSL Workspace operations

          waitinglist
                 VSL Waitinglist events

          syncvsl
                 Make VSL synchronous

          hashedge
                 Edge cases in Hash

          vclrel Rapid VCL release

          lurker VSL Ban lurker

          esi_chop
                 Chop ESI fetch to bits

          flush_head
                 Flush after http1 head

          vtc_mode
                 Varnishtest Mode

          witness
                 Emit WITNESS lock records

          vsm_keep
                 Keep the VSM file on restart

          slow_acceptor
                 Slow down Acceptor

          h2_nocheck
                 Disable various H2 checks

          vmod_so_keep
                 Keep copied VMOD libraries

          processors
                 Fetch/Deliver processors

          protocol
                 Protocol debugging

          vcl_keep
                 Keep VCL C and so files

          lck    Additional lock statistics
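
        For example, individual debug bits might be toggled at runtime through the CLI (the
        chosen bits are illustrative):

           varnishadm param.set debug +syncvsl,+vcl_keep
           varnishadm param.set debug none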

   default_grace
          • Units: seconds

          • Default: 10s

          • Minimum: 0.000

          • Flags: obj_sticky

       Default grace period.  We will deliver an object this long after it has expired,  provided
       another thread is attempting to get a new copy.

   default_keep
          • Units: seconds

          • Default: 0s

          • Minimum: 0.000

          • Flags: obj_sticky

       Default  keep period.  We will keep a useless object around this long, making it available
       for conditional backend fetches.  That means that the object  will  be  removed  from  the
       cache at the end of ttl+grace+keep.

   default_ttl
          • Units: seconds

          • Default: 2m

          • Minimum: 0.000

          • Flags: obj_sticky

       The TTL assigned to objects if neither the backend nor the VCL code assigns one.

   experimental
          • Default: none

       Enable/Disable experimental features.

          none   Disable all experimental features

       Use +/- prefix to set/reset individual bits:

          drop_pools
                 Drop thread pools

   feature
          • Default: none,+validate_headers,+vcl_req_reset

       Enable/Disable various minor features.

          default
                 Set default value (deprecated: use param.reset)

          none   Disable all features.

       Use +/- prefix to enable/disable individual feature:

          http2  Enable HTTP/2 protocol support.

          short_panic
                 Short panic message.

          no_coredump
                 No coredumps.  Must be set before child process starts.

          https_scheme
                 Extract host from full URI in the HTTP/1 request line, if the scheme is https.

          http_date_postel
                 Tolerate non compliant timestamp headers like Date, Last-Modified, Expires etc.

          esi_ignore_https
                  Convert <esi:include src="https://... to http://...

          esi_disable_xml_check
                 Allow ESI processing on non-XML ESI bodies

          esi_ignore_other_elements
                 Ignore XML syntax errors in ESI bodies.

          esi_remove_bom
                 Ignore UTF-8 BOM in ESI bodies.

          esi_include_onerror
                 Parse the onerror attribute of <esi:include> tags.

          wait_silo
                 Wait for persistent silos to completely load before serving requests.

          validate_headers
                 Validate all header set operations to conform to RFC7230.

          busy_stats_rate
                 Make busy workers comply with thread_stats_rate.

          trace  Enable  VCL  tracing  by  default  (enable  (be)req.trace). Required for tracing
                  vcl_init / vcl_fini.

          vcl_req_reset
                 Stop  processing  client  VCL  once  the  client  is  gone.  When  this  happens
                 MAIN.req_reset is incremented.
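
        For example, HTTP/2 support might be switched on at runtime without touching the other
        feature bits:

           varnishadm param.set feature +http2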

   fetch_chunksize
          • Units: bytes

          • Default: 16k

          • Minimum: 4k

          • Flags: experimental

        The default chunksize used by the fetcher. This should be bigger than the majority of
        objects with short TTLs.  Internal limits in the storage_file module make increases above
        128kb a dubious idea.

   fetch_maxchunksize
          • Units: bytes

          • Default: 0.25G

          • Minimum: 64k

          • Flags: experimental

       The maximum chunksize we attempt to allocate from storage. Making this too large may cause
       delays and storage fragmentation.

   first_byte_timeout
          • Units: seconds

          • Default: 60.000

          • Minimum: 0.000

          • Flags: timeout

       Default timeout for receiving first byte from backend. We only wait for this many  seconds
       for the first byte before giving up.  VCL can override this default value for each backend
        and backend request.  This parameter does not apply to piped requests.

   gzip_buffer
          • Units: bytes

          • Default: 32k

          • Minimum: 2k

          • Flags: experimental

        Size of malloc buffer used for gzip processing.  These buffers are used for in-transit
        data, for instance gunzip'ed data being sent to a client. Making this space too small
        results in more overhead, writes to sockets etc.; making it too big is probably just a
        waste of memory.

   gzip_level
          • Default: 6

          • Minimum: 0

          • Maximum: 9

       Gzip compression level: 0=debug, 1=fast, 9=best

   gzip_memlevel
          • Default: 8

          • Minimum: 1

          • Maximum: 9

       Gzip memory level 1=slow/least, 9=fast/most compression.  Memory impact is 1=1k, 2=2k, ...
       9=256k.

   h2_header_table_size
          • Units: bytes

          • Default: 4k

          • Minimum: 0b

       HTTP2 header table size.  This is the size  that  will  be  used  for  the  HPACK  dynamic
       decoding table.

   h2_initial_window_size
          • Units: bytes

          • Default: 65535b

          • Minimum: 65535b

          • Maximum: 2147483647b

       HTTP2 initial flow control window size.

   h2_max_concurrent_streams
          • Units: streams

          • Default: 100

          • Minimum: 0

       HTTP2  Maximum  number  of concurrent streams.  This is the number of requests that can be
       active at the same time for a single HTTP2 connection.

   h2_max_frame_size
          • Units: bytes

          • Default: 16k

          • Minimum: 16k

          • Maximum: 16777215b

       HTTP2 maximum per frame payload size we are willing to accept.

   h2_max_header_list_size
          • Units: bytes

          • Default: 2147483647b

          • Minimum: 0b

       HTTP2 maximum size of an uncompressed header list.

   h2_rapid_reset
          • Units: seconds

          • Default: 1.000

          • Minimum: 0.000

          • Flags: delayed, experimental

        The upper threshold for how soon an HTTP/2 RST_STREAM frame has to be parsed after a
       HEADERS  frame  for it to be treated as suspect and subjected to the rate limits specified
       by h2_rapid_reset_limit and h2_rapid_reset_period.  Changes to this parameter  affect  the
       default for new HTTP2 sessions. vmod_h2(3) can be used to adjust it from VCL.

   h2_rapid_reset_limit
          • Default: 100

          • Minimum: 0

          • Flags: delayed, experimental

       HTTP2  RST  Allowance.   Specifies the maximum number of allowed stream resets issued by a
       client over a time period before the connection is closed.  Setting this  parameter  to  0
       disables  the limit.  Changes to this parameter affect the default for new HTTP2 sessions.
       vmod_h2(3) can be used to adjust it from VCL.

   h2_rapid_reset_period
          • Units: seconds

          • Default: 60.000

          • Minimum: 1.000

          • Flags: delayed, experimental, wizard

       HTTP2 sliding window duration for h2_rapid_reset_limit.  Changes to this parameter  affect
       the default for new HTTP2 sessions. vmod_h2(3) can be used to adjust it from VCL.

   h2_rx_window_increment
          • Units: bytes

          • Default: 1M

          • Minimum: 1M

          • Maximum: 1G

          • Flags: wizard

        HTTP2 Receive Window Increments.  The size of the credits we send in WINDOW_UPDATE
        frames.  Only affects incoming request bodies (i.e. POST, PUT etc.).

   h2_rx_window_low_water
          • Units: bytes

          • Default: 10M

          • Minimum: 65535b

          • Maximum: 1G

          • Flags: wizard

        HTTP2 Receive Window low water mark.  We try to keep the window at least this big.  Only
        affects incoming request bodies (i.e. POST, PUT etc.).

   h2_rxbuf_storage
          • Default: Transient

          • Flags: must_restart

       The name of the storage backend that HTTP/2 receive buffers should be allocated from.

   h2_window_timeout
          • Units: seconds

          • Default: 5.000

          • Minimum: 0.000

          • Flags: timeout, wizard

       HTTP2  time  limit  without  window  credits. How long a stream may wait for the client to
       credit the window and allow for more DATA frames to be sent.

   http1_iovs
          • Units: struct iovec (=16 bytes)

          • Default: 64

          • Minimum: 5

          • Maximum: 1024

          • Flags: wizard

        Number of io vectors to allocate for HTTP1 protocol transmission.  An HTTP1 header needs
        7, plus 2 per HTTP header field.  Allocated from workspace_thread.  This parameter affects
        only io vectors used for client delivery.  For backend fetches, the maximum number of io
        vectors (up to IOV_MAX) is allocated from available workspace_thread memory.

   http_gzip_support
          • Units: bool

          • Default: on

        Enable gzip support. When enabled, Varnish requests compressed objects from the backend
        and stores them compressed. If a client does not support gzip encoding, Varnish will
        uncompress compressed objects on demand. Varnish will also rewrite the Accept-Encoding
        header of clients indicating support for gzip to:
              Accept-Encoding: gzip

       Clients that do not support gzip will have their Accept-Encoding header removed. For  more
       information  on  how  gzip  is  implemented  please see the chapter on gzip in the Varnish
       reference.

       When gzip support is disabled the variables beresp.do_gzip and  beresp.do_gunzip  have  no
       effect in VCL.

   http_max_hdr
          • Units: header lines

          • Default: 64

          • Minimum: 32

          • Maximum: 65535

       Maximum  number of HTTP header lines we allow in {req|resp|bereq|beresp}.http (obj.http is
       autosized to the exact number of headers).   Cheap,  ~20  bytes,  in  terms  of  workspace
       memory.  Note that the first line occupies five header lines.

   http_range_support
          • Units: bool

          • Default: on

       Enable support for HTTP Range headers.

   http_req_hdr_len
          • Units: bytes

          • Default: 8k

          • Minimum: 40b

        Maximum length of any HTTP client request header we will allow.  The limit includes its
        continuation lines.

   http_req_size
          • Units: bytes

          • Default: 32k

          • Minimum: 0.25k

       Maximum number of bytes of HTTP client request we will deal with.  This is a limit on  all
       bytes up to the double blank line which ends the HTTP request.  The memory for the request
       is allocated from the client workspace (param: workspace_client) and this parameter limits
       how much of that the request is allowed to take up.

   http_resp_hdr_len
          • Units: bytes

          • Default: 8k

          • Minimum: 40b

        Maximum length of any HTTP backend response header we will allow.  The limit includes
        its continuation lines.

   http_resp_size
          • Units: bytes

          • Default: 32k

          • Minimum: 0.25k

       Maximum number of bytes of HTTP backend response we will deal with.  This is  a  limit  on
       all  bytes  up  to the double blank line which ends the HTTP response.  The memory for the
       response is allocated from the  backend  workspace  (param:  workspace_backend)  and  this
       parameter limits how much of that the response is allowed to take up.

   idle_send_timeout
          • Units: seconds

          • Default: 60.000

          • Minimum: 0.000

          • Maximum: 3600.000

          • Flags: timeout, delayed

       Send  timeout  for  individual  pieces  of data on client connections. May get extended if
       'send_timeout' applies.

       When this timeout is hit, the session is closed.

       See the man page for setsockopt(2) or socket(7) under SO_SNDTIMEO for more information.

   listen_depth
          • Units: connections

          • Default: 1024

          • Minimum: 0

          • Flags: must_restart

       Listen queue depth.

   lru_interval
          • Units: seconds

          • Default: 2.000

          • Minimum: 0.000

          • Flags: experimental

       Grace period before object moves on LRU list.  Objects are only moved to the front of  the
       LRU  list  if  they  have  not  been moved there already inside this timeout period.  This
       reduces the amount of lock operations necessary for LRU list access.

   max_esi_depth
          • Units: levels

          • Default: 5

          • Minimum: 0

       Maximum depth of esi:include processing.

   max_restarts
          • Units: restarts

          • Default: 4

          • Minimum: 0

       Upper limit on how many times a request can restart.

   max_retries
          • Units: retries

          • Default: 4

          • Minimum: 0

       Upper limit on how many times a backend fetch can retry.

   max_vcl
          • Default: 100

          • Minimum: 0

       Threshold  of  loaded  VCL  programs.   (VCL   labels   are   not   counted.)    Parameter
       max_vcl_handling determines behaviour.

   max_vcl_handling
          • Default: 1

          • Minimum: 0

          • Maximum: 2

       Behaviour when attempting to exceed max_vcl loaded VCL.

       • 0 - Ignore max_vcl parameter.

       • 1 - Issue warning.

       • 2 - Refuse loading VCLs.

   nuke_limit
          • Units: allocations

          • Default: 50

          • Minimum: 0

          • Flags: experimental

        Maximum number of objects we attempt to nuke in order to make space for an object body.

   pcre2_depth_limit
          • Default: 20

          • Minimum: 1

       The recursion depth-limit for the internal match logic in a pcre2_match().

       (See: pcre2_set_depth_limit() in pcre2 docs.)

       This  puts  an  upper  limit  on  the amount of stack used by PCRE2 for certain classes of
       regular expressions.

       We have set the default value low in order to prevent crashes, at  the  cost  of  possible
       regexp matching failures.

       Matching failures will show up in the log as VCL_Error messages.

   pcre2_jit_compilation
          • Units: bool

          • Default: on

       Use the pcre2 JIT compiler if available.

   pcre2_match_limit
          • Default: 10000

          • Minimum: 1

       The limit for the number of calls to the internal match logic in pcre2_match().

       (See: pcre2_set_match_limit() in pcre2 docs.)

       This parameter limits how much CPU time regular expression matching can soak up.

   ping_interval
          • Units: seconds

          • Default: 3

          • Minimum: 0

          • Flags: must_restart

       Interval  between  pings  from parent to child.  Zero will disable pinging entirely, which
       makes it possible to attach a debugger to the child.

   pipe_sess_max
          • Units: connections

          • Default: 0

          • Minimum: 0

       Maximum number of sessions dedicated to pipe transactions.

   pipe_task_deadline
          • Units: seconds

          • Default: 0.000

          • Minimum: 0.000

          • Flags: timeout

       Deadline for PIPE sessions. Regardless of activity in either  direction  after  this  many
       seconds, the session is closed.

   pipe_timeout
          • Units: seconds

          • Default: 60.000

          • Minimum: 0.000

          • Flags: timeout

        Idle timeout for PIPE sessions. If nothing has been received in either direction for this
       many seconds, the session is closed.

   pool_req
          • Default: 10,100,10

       Parameters for per worker pool request memory pool.

       The three numbers are:

          min_pool
                 minimum size of free pool.

          max_pool
                 maximum size of free pool.

          max_age
                 max age of free element.

   pool_sess
          • Default: 10,100,10

       Parameters for per worker pool session memory pool.

       The three numbers are:

          min_pool
                 minimum size of free pool.

          max_pool
                 maximum size of free pool.

          max_age
                 max age of free element.

   pool_vbo
          • Default: 10,100,10

       Parameters for backend object fetch memory pool.

       The three numbers are:

          min_pool
                 minimum size of free pool.

          max_pool
                 maximum size of free pool.

          max_age
                 max age of free element.

   prefer_ipv6
          • Units: bool

          • Default: off

       Prefer IPv6 address when connecting to backends which have both IPv4 and IPv6 addresses.

   rush_exponent
          • Units: requests per request

          • Default: 3

          • Minimum: 2

          • Flags: experimental

        How many parked requests we start for each completed request on the object.  NB: Even
        with the implicit delay of delivery, this parameter controls an exponential increase in
        the number of worker threads.

   send_timeout
          • Units: seconds

          • Default: 600.000

          • Minimum: 0.000

          • Flags: timeout, delayed

       Total timeout for ordinary HTTP1 responses. Does not apply to  some  internally  generated
       errors and pipe mode.

       When  'idle_send_timeout'  is hit while sending an HTTP1 response, the timeout is extended
       unless the total time already taken for sending the response in its entirety exceeds  this
       many seconds.

        When this timeout is hit, the session is closed.

   shortlived
          • Units: seconds

          • Default: 10.000

          • Minimum: 0.000

       Objects  created  with  (ttl+grace+keep)  shorter  than  this  are always put in transient
       storage.

   sigsegv_handler
          • Units: bool

          • Default: on

          • Flags: must_restart

       Install a signal handler which tries to dump debug information on segmentation faults, bus
       errors and abort signals.

   startup_timeout
          • Units: seconds

          • Default: 0.000

          • Minimum: 0.000

          • Flags: timeout

       Alternative timeout for the initial worker process startup.  If cli_timeout is longer than
       startup_timeout, it is used instead.

   syslog_cli_traffic
          • Units: bool

          • Default: on

       Log all CLI traffic to syslog(LOG_INFO).

   tcp_fastopen
       NB: This parameter depends on a feature which is not available on all platforms.

          • Units: bool

          • Default: off

          • Flags: must_restart

       Enable TCP Fast Open extension.

   tcp_keepalive_intvl
       NB: This parameter depends on a feature which is not available on all platforms.

          • Units: seconds

          • Default: platform dependent

          • Minimum: 1.000

          • Maximum: 100.000

          • Flags: experimental

       The number of seconds between TCP keep-alive probes. Ignored for Unix domain sockets.

   tcp_keepalive_probes
       NB: This parameter depends on a feature which is not available on all platforms.

          • Units: probes

          • Default: platform dependent

          • Minimum: 1

          • Maximum: 100

          • Flags: experimental

       The maximum number of TCP keep-alive probes to send  before  giving  up  and  killing  the
       connection if no response is obtained from the other end. Ignored for Unix domain sockets.

   tcp_keepalive_time
       NB: This parameter depends on a feature which is not available on all platforms.

          • Units: seconds

          • Default: platform dependent

          • Minimum: 1.000

          • Maximum: 7200.000

          • Flags: experimental

       The  number  of  seconds  a  connection  needs  to  be  idle before TCP begins sending out
       keep-alive probes. Ignored for Unix domain sockets.

   thread_pool_add_delay
          • Units: seconds

          • Default: 0.000

          • Minimum: 0.000

          • Flags: experimental

       Wait at least this long after creating a thread.

       Some (buggy) systems may need a short (sub-second) delay between  creating  threads.   Set
       this to a few milliseconds if you see the 'threads_failed' counter grow too much.

       Setting this too high results in insufficient worker threads.

   thread_pool_destroy_delay
          • Units: seconds

          • Default: 1.000

          • Minimum: 0.010

          • Flags: delayed, experimental

       Wait this long after destroying a thread.

       This controls the decay of thread pools when idle(-ish).

   thread_pool_fail_delay
          • Units: seconds

          • Default: 0.200

          • Minimum: 0.010

          • Flags: experimental

       Wait  at  least  this  long after a failed thread creation before trying to create another
       thread.

        Failure to create a worker thread is often a sign that the end is near, because the
        process is running out of some resource.  This delay tries not to rush the end
        needlessly.

       If thread creation failures are a problem, check that thread_pool_max is not too high.

       It may also help to increase thread_pool_timeout and thread_pool_min, to reduce  the  rate
        at which threads are destroyed and later recreated.

   thread_pool_max
          • Units: threads

          • Default: 5000

          • Minimum: thread_pool_min

          • Flags: delayed

       The maximum number of worker threads in each pool.

       Do  not  set this higher than you have to, since excess worker threads soak up RAM and CPU
       and generally just get in the way of getting work done.

   thread_pool_min
          • Units: threads

          • Default: 100

          • Minimum: 5

          • Maximum: thread_pool_max

          • Flags: delayed

       The minimum number of worker threads in each pool.

       Increasing this may help ramp up faster from low load  situations  or  when  threads  have
       expired.

        The technical minimum is 5 threads, but it is strongly recommended to set this parameter
        to at least 10.

   thread_pool_reserve
          • Units: threads

          • Default: 0 (auto-tune: 5% of thread_pool_min)

          • Maximum: 95% of thread_pool_min

          • Flags: delayed

       The number of worker threads reserved for vital tasks in each pool.

       Tasks may require other tasks to  complete  (for  example,  client  requests  may  require
       backend requests, http2 sessions require streams, which require requests). This reserve is
       to ensure that lower priority tasks do not prevent higher priority tasks from running even
       under high load.

       The  effective value is at least 5 (the number of internal priority classes), irrespective
       of this parameter.

   thread_pool_stack
          • Units: bytes

          • Default: 80k

          • Minimum: sysconf(_SC_THREAD_STACK_MIN)

          • Flags: delayed

       Worker thread stack size.  This will likely be rounded up to a multiple of 4k (or whatever
       the page_size might be) by the kernel.

       The required stack size is primarily driven by the depth of the call-tree. The most common
       relevant determining factors in varnish core code are GZIP (un)compression, ESI processing
       and  regular  expression matches. VMODs may also require significant amounts of additional
       stack.  The  nesting  depth  of  VCL  subs  is  another  factor,  although  typically  not
       predominant.

       The  stack  size  is  per  thread,  so the maximum total memory required for worker thread
       stacks is in the order of size = thread_pools x thread_pool_max x thread_pool_stack.

       Thus, in particular for setups with many threads, keeping the  stack  size  at  a  minimum
       helps reduce the amount of memory required by Varnish.

       On  the  other  hand,  thread_pool_stack  must  be  large  enough under all circumstances,
       otherwise varnish will crash due to a stack overflow. Usually, a stack overflow  manifests
       itself  as  a  segmentation fault (aka segfault / SIGSEGV) with the faulting address being
       near the stack pointer (sp).

       Unless stack usage can be reduced,  thread_pool_stack  must  be  increased  when  a  stack
       overflow  occurs.  Setting it in 150%-200% increments is recommended until stack overflows
       cease to occur.
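
        As a rough worked example, with the defaults of 2 pools, 5000 threads per pool and an 80k
        stack, the worst case for thread stacks is about 2 x 5000 x 80 KiB, roughly 780 MiB. A
        first increase after a stack overflow, following the recommendation above, might be:

           varnishadm param.set thread_pool_stack 128k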

   thread_pool_timeout
          • Units: seconds

          • Default: 300.000

          • Minimum: 10.000

          • Flags: delayed, experimental

       Thread idle threshold.

       Threads in excess of thread_pool_min, which have been idle for at least this long, will be
       destroyed.

   thread_pool_watchdog
          • Units: seconds

          • Default: 60.000

          • Minimum: 0.100

          • Flags: experimental

       Thread queue stuck watchdog.

        If no queued work has been released for this long, the worker process panics itself.

   thread_pools
          • Units: pools

          • Default: 2

          • Minimum: 1

          • Maximum: 32

          • Flags: delayed, experimental

       Number of worker thread pools.

       Increasing the number of worker pools decreases lock contention. Each worker pool also has
       a thread accepting new connections, so for very high rates of incoming new connections  on
       systems with many cores, increasing the worker pools may be required.

       Too  many  pools  waste CPU and RAM resources, and more than one pool for each CPU is most
       likely detrimental to performance.

       Can be increased on the fly, but decreases require a restart to take  effect,  unless  the
       drop_pools experimental debug flag is set.
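
       For example, an increase could be applied at runtime through the CLI (the pool count
       is illustrative only):

          $ varnishadm param.set thread_pools 4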

   thread_queue_limit
          • Units: requests

          • Default: 20

          • Minimum: 0

          • Flags: experimental

       Permitted request queue length per thread-pool.

       This sets the number of requests that will be queued while waiting for an available
       thread. Above this limit, sessions are dropped instead of queued.

   thread_stats_rate
          • Units: requests

          • Default: 10

          • Minimum: 0

          • Flags: experimental

       Worker threads accumulate statistics and dump them into the global stats counters if
       the lock is free when they finish a job (request/fetch etc.). This parameter defines
       the maximum number of jobs a worker thread may handle before it is forced to dump its
       accumulated stats into the global counters.

   timeout_idle
          • Units: seconds

          • Default: 5.000

          • Minimum: 0.000

          • Maximum: 3600.000

       Idle timeout for client connections.

       A connection is considered idle until we have received the full request headers.

       This parameter is particularly relevant for HTTP/1 keepalive connections, which are
       closed unless the next request is received before this timeout is reached.

   timeout_linger
          • Units: seconds

          • Default: 0.050

          • Minimum: 0.000

          • Flags: experimental

       How long the worker thread lingers on an idle  session  before  handing  it  over  to  the
       waiter.   When  sessions are reused, as much as half of all reuses happen within the first
       100 msec of the previous request completing.  Setting this too high results in worker
       threads not doing anything for their keep; setting it too low just means that more
       sessions take a detour around the waiter.

   transit_buffer
          • Units: bytes

          • Default: 0b

          • Minimum: 0b

       The number of bytes which Varnish buffers for uncacheable backend streaming fetches -
       in other words, how many bytes Varnish reads from the backend ahead of what has been
       sent to the client.  A zero value means no limit; the object is fetched as fast as
       possible.

       When dealing with slow clients, setting this parameter to a non-zero value can prevent
       large uncacheable objects from being stored in full when the intent is simply to
       stream them to the client. As a result, a slow client transaction holds onto a backend
       connection until the end of the delivery.

       This parameter is the default for the VCL variable beresp.transit_buffer, which can
       be used to control the transit buffer per backend request.
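
       As an illustrative sketch (the value is an assumption, not a recommendation), a
       site-wide 1 megabyte transit buffer could be configured through the CLI:

          $ varnishadm param.set transit_buffer 1M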

   vary_notice
          • Units: variants

          • Default: 10

          • Minimum: 1

       How many variants need to be evaluated to log a  Notice  that  there  might  be  too  many
       variants.

   vcc_allow_inline_c
       Deprecated alias for the vcc_feature parameter.

   vcc_err_unref
       Deprecated alias for the vcc_feature parameter.

   vcc_feature
          • Default: none,+err_unref,+unsafe_path

       Enable/Disable various VCC behaviors.

          default
                 Set default value (deprecated: use param.reset)

          none   Disable all behaviors.

       Use a +/- prefix to enable or disable individual behaviors (see the example below):

          err_unref
                 Unreferenced VCL objects result in error.

          allow_inline_c
                 Allow inline C code in VCL.

          unsafe_path
                 Allow '/' in vmod & include paths. Allow 'import ... from ...'.
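
       For example, inline C could be permitted while keeping the remaining defaults (shown
       as a sketch, not a recommendation):

          $ varnishadm param.set vcc_feature +allow_inline_c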

   vcc_unsafe_path
       Deprecated alias for the vcc_feature parameter.

   vcl_cooldown
          • Units: seconds

          • Default: 600.000

          • Minimum: 1.000

       How  long  a  VCL  is  kept  warm  after  being  replaced  as  the active VCL (granularity
       approximately 30 seconds).

   vcl_path
       NB: The actual default value for this parameter depends on the Varnish  build  environment
       and options.

          • Default: ${sysconfdir}/varnish:${datadir}/varnish/vcl

       Directory  (or  colon  separated  list  of  directories) from which relative VCL filenames
       (vcl.load and include) are to be found.  By default Varnish searches VCL files in both the
       system configuration and shared data directories to allow packages to drop their VCL files
       in a standard location where relative includes would work.
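
       An explicit colon-separated list can be passed at startup, for example (the
       directories and VCL file name are placeholders):

          $ varnishd -p vcl_path=/etc/varnish:/usr/share/varnish/vcl -f example.vcl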

   vmod_path
       NB: The actual default value for this parameter depends on the Varnish  build  environment
       and options.

          • Default: ${libdir}/varnish/vmods

       Directory (or colon separated list of directories) where VMODs are to be found.

   vsl_buffer
          • Units: bytes

          • Default: 16k

          • Minimum: vsl_reclen + 12 bytes

       Bytes of (req-/backend-)workspace dedicated to buffering VSL records.  When this parameter
       is adjusted, most likely workspace_client and workspace_backend will have to  be  adjusted
       by the same amount.

       Setting this too high costs memory; setting it too low will cause more VSL flushes
       and likely increase lock contention on the VSL mutex.
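
       As a sketch based on the defaults listed in this manual (the figures are illustrative
       only), growing the VSL buffer by 16k and both workspaces by the same amount could look
       like this:

          $ varnishadm param.set vsl_buffer 32k
          $ varnishadm param.set workspace_client 112k
          $ varnishadm param.set workspace_backend 112k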

   vsl_mask
          • Default:
            all,-Debug,-ObjProtocol,-ObjStatus,-ObjReason,-ObjHeader,-ExpKill,-WorkThread,-Hash,-VfpAcct,-H2RxHdr,-H2RxBody,-H2TxHdr,-H2TxBody,-VdpAcct

       Mask individual VSL messages from being logged.

          all    Enable all tags

          default
                 Set default value (deprecated: use param.reset)

       Use +/- prefix in front of VSL tag name to unmask/mask individual VSL messages. See vsl(7)
       for possible values.
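
       For example, the Hash records that are masked by default could be re-enabled like
       this (illustrative only):

          $ varnishadm param.set vsl_mask +Hash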

   vsl_reclen
          • Units: bytes

          • Default: 255b

          • Minimum: 16b

          • Maximum: vsl_buffer - 12 bytes

       Maximum number of bytes in SHM log record.

   vsl_space
          • Units: bytes

          • Default: 80M

          • Minimum: 1M

          • Maximum: 4G

          • Flags: must_restart

       The amount of space to allocate for the VSL fifo buffer in the VSM memory segment.  If you
       make this too small, varnish{ncsa|log} etc will not be able to keep  up.   Making  it  too
       large just costs memory resources.
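
       Because of the must_restart flag, this parameter is typically set on the command line
       when the daemon is started, for example (the value and VCL path are placeholders):

          $ varnishd -p vsl_space=160M -f /etc/varnish/default.vcl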

   vsm_free_cooldown
          • Units: seconds

          • Default: 60.000

          • Minimum: 10.000

          • Maximum: 600.000

       How  long  VSM  memory  is  kept  warm  after  a deallocation (granularity approximately 2
       seconds).

   workspace_backend
          • Units: bytes

          • Default: 96k

          • Minimum: 1k

          • Flags: delayed

       Bytes of HTTP protocol workspace for backend HTTP req/resp.  If  larger  than  4k,  use  a
       multiple of 4k for VM efficiency.

   workspace_client
          • Units: bytes

          • Default: 96k

          • Minimum: 9k

          • Flags: delayed

       Bytes of HTTP protocol workspace for client HTTP req/resp.  Use a multiple of 4k for
       VM efficiency.  For HTTP/2 compliance this must be at least 20k, in order to receive
       full-size (16k) frames from the client.  That usually happens only in POST/PUT bodies;
       for other traffic patterns smaller values work just fine.

   workspace_session
          • Units: bytes

          • Default: 0.75k

          • Minimum: 384b

          • Flags: delayed

       Allocation size for session structure and workspace.    The workspace  is  primarily  used
       for TCP connection addresses.  If larger than 4k, use a multiple of 4k for VM efficiency.

   workspace_thread
          • Units: bytes

          • Default: 2k

          • Minimum: 0.25k

          • Maximum: 8k

          • Flags: delayed

       Bytes  of  auxiliary  workspace  per thread.  This workspace is used for certain temporary
       data structures during the operation of a worker thread.  One use is  for  the  IO-vectors
       used  during  delivery. Setting this parameter too low may increase the number of writev()
       syscalls, setting it too high just wastes  space.   ~0.1k  +  UIO_MAXIOV  *  sizeof(struct
       iovec)  (typically  =  ~16k  for 64bit) is considered the maximum sensible value under any
       known circumstances (excluding exotic vmod use).

EXIT CODES

       Varnish and bundled tools will, in most cases, exit with one of the following codes:

       • 0 OK

       • 1 Some error which could be system-dependent and/or transient

       • 2 Serious configuration / parameter error -  retrying  with  the  same  configuration  /
         parameters is most likely useless

       The varnishd master process may also OR its exit code (a decoding sketch follows this
       list):

       • with 0x20 when the varnishd child process died,

       • with 0x40 when the varnishd child process was terminated by a signal and

       • with 0x80 when a core was dumped.
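
       A minimal shell sketch for decoding such a status, assuming it was captured in $?
       immediately after the varnishd master process exited:

          status=$?
          echo "base code: $(( status & 0x1f ))"      # 0, 1 or 2 as listed above
          (( status & 0x20 )) && echo "child process died"
          (( status & 0x40 )) && echo "child terminated by signal"
          (( status & 0x80 )) && echo "core dumped"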

SEE ALSO

       varnishlog(1), varnishhist(1), varnishncsa(1), varnishstat(1), varnishtop(1),
       varnish-cli(7), vcl(7)

HISTORY

       The varnishd daemon was developed by Poul-Henning Kamp in cooperation with Verdens Gang AS
       and Varnish Software.

       This manual page was written by Dag-Erling Smørgrav with updates by Stig Sandbeck Mathisen
       <ssm@debian.org>, Nils Goroll and others.

COPYRIGHT

       This  document  is  licensed  under  the  same  licence as Varnish itself. See LICENCE for
       details.

       • Copyright (c) 2007-2015 Varnish Software AS
