Provided by: hyperfine_1.12.0-3ubuntu0.1_amd64

NAME

       hyperfine - A command-line benchmarking tool

DESCRIPTION

       hyperfine 1.12.0 - A command-line benchmarking tool.

   USAGE:
              hyperfine [OPTIONS] <command>...

   OPTIONS:
       -w, --warmup <NUM>

              Perform  NUM  warmup  runs  before  the  actual benchmark. This can be used to fill
              (disk) caches for I/O-heavy programs.
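
        Example (illustrative; the benchmarked command and path are placeholders):
               hyperfine --warmup 3 'grep -R TODO ./src'

               The three warmup runs populate the disk cache before timing begins, so the
               timed runs measure warm-cache performance of the I/O-heavy search.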

       -m, --min-runs <NUM>

              Perform at least NUM runs for each command (default: 10).

       -M, --max-runs <NUM>

              Perform at most NUM runs for each command. By default, there is no limit.
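
        Example (illustrative):
               hyperfine --min-runs 20 --max-runs 100 'sleep 0.05'

               This performs at least 20 and at most 100 timing runs of the placeholder
               'sleep 0.05' command.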

       -r, --runs <NUM>

              Perform exactly NUM runs for  each  command.  If  this  option  is  not  specified,
              hyperfine automatically determines the number of runs.
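
        Example (illustrative):
               hyperfine --runs 5 'sleep 0.3'

               This performs exactly five timing runs instead of letting hyperfine choose
               the run count automatically.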

       -p, --prepare <CMD>...

              Execute  CMD  before  each timing run. This is useful for clearing disk caches, for
              example.  The --prepare option can be specified once for all commands  or  multiple
              times,  once for each command. In the latter case, each preparation command will be
              run prior to the corresponding benchmark command.
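
        Example (illustrative):
               hyperfine --prepare 'make clean' 'make'

               Running 'make clean' before each timing run ensures that every run measures
               a full build from a clean state.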

       -c, --cleanup <CMD>

              Execute CMD after the completion of  all  benchmarking  runs  for  each  individual
              command to be benchmarked. This is useful if the commands to be benchmarked produce
              artifacts that need to be cleaned up.
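
        Example (illustrative; file names are placeholders):
               hyperfine --cleanup 'rm -f out.tmp' 'sort data.txt -o out.tmp'

               The temporary file produced by the benchmarked command is removed once all
               of its timing runs have finished.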

       -P, --parameter-scan <VAR> <MIN> <MAX>

              Perform benchmark runs for each value in the range MIN..MAX.  Replaces  the  string
              '{VAR}' in each command by the current parameter value.

       Example:
              hyperfine -P threads 1 8 'make -j {threads}'

               This performs benchmarks for 'make -j 1', 'make -j 2', ..., 'make -j 8'.

               To have the value increase following different patterns, use shell arithmetic.

              Example: hyperfine -P size 0 3 'sleep $((2**{size}))'

               This performs benchmarks with power-of-2 increases: 'sleep 1', 'sleep 2',
               'sleep 4', and 'sleep 8'. The exact syntax may vary depending on your shell
               and OS.

       -D, --parameter-step-size <DELTA>

              This argument requires --parameter-scan to be specified as well. Traverse the range
              MIN..MAX in steps of DELTA.

       Example:
              hyperfine -P delay 0.3 0.7 -D 0.2 'sleep {delay}'

              This performs benchmarks for 'sleep 0.3', 'sleep 0.5' and 'sleep 0.7'.

       -L, --parameter-list <VAR> <VALUES>

              Perform benchmark runs for each value in the comma-separated list VALUES.  Replaces
              the string '{VAR}' in each command by the current parameter value.

       Example:
              hyperfine -L compiler gcc,clang '{compiler} -O2 main.cpp'

              This performs benchmarks for 'gcc -O2 main.cpp' and 'clang -O2 main.cpp'.

              The option can be specified multiple times  to  run  benchmarks  for  all  possible
              parameter combinations.
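
        Example (illustrative; compilers and source file are placeholders):
               hyperfine -L compiler gcc,clang -L level 2,3 \
                   '{compiler} -O{level} main.cpp'

               This performs benchmarks for all four compiler/optimization-level
               combinations.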

       -s, --style <TYPE>

               Set output style type (default: auto). Possible values:

                      basic    Disable output coloring and interactive elements.
                      full     Enable all effects even if no interactive terminal was
                               detected.
                      nocolor  Keep the interactive output without any colors.
                      color    Keep the colors without any interactive output.
                      none     Disable all output of the tool.
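
        Example (illustrative):
               hyperfine --style basic 'sleep 0.3'

               The 'basic' style is useful when redirecting output to a file or running in
               a CI environment without an interactive terminal.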

       -S, --shell <SHELL>

              Set the shell to use for executing benchmarked commands.
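
        Example (illustrative; assumes zsh is installed):
               hyperfine --shell zsh 'for i in {1..100}; do echo $i; done'

               This runs the loop under zsh, which supports the {1..100} brace expansion,
               instead of the default shell.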

       -i, --ignore-failure

              Ignore non-zero exit codes of the benchmarked programs.
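
        Example (illustrative; the file name is a placeholder):
               hyperfine --ignore-failure 'grep -q pattern missing.txt'

               grep exits with a non-zero status when the pattern or file is not found;
               without --ignore-failure, hyperfine would abort the benchmark.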

       -u, --time-unit <UNIT>

              Set the time unit to be used. Possible values: millisecond, second.
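
        Example (illustrative):
               hyperfine --time-unit second 'sleep 2'

               Results are reported in seconds rather than the unit hyperfine would
               otherwise choose.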

       --export-asciidoc <FILE>

              Export the timing summary statistics as an AsciiDoc table to the given FILE.

       --export-csv <FILE>

              Export  the  timing  summary  statistics  as CSV to the given FILE. If you need the
              timing results for each individual run, use the JSON export format.

       --export-json <FILE>

              Export the timing summary statistics and timings of individual runs as JSON to  the
              given FILE.

       --export-markdown <FILE>

              Export the timing summary statistics as a Markdown table to the given FILE.
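
        Example (illustrative; output file names are placeholders):
               hyperfine --export-markdown results.md --export-json results.json \
                   'sleep 0.5' 'sleep 1'

               Several export formats can be written in the same invocation; the JSON file
               additionally contains the timings of every individual run.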

       --show-output

              Print  the  stdout and stderr of the benchmark instead of suppressing it. This will
              increase the time it takes for benchmarks to run, so it should  only  be  used  for
              debugging purposes or when trying to benchmark output speed.
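
        Example (illustrative):
               hyperfine --show-output 'make'

               Showing the build output can help verify that the benchmarked command
               actually does what you expect before relying on its timings.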

       -n, --command-name <NAME>...

               Give a meaningful name to a command. This option can be specified multiple
               times if several commands are benchmarked.
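
        Example (illustrative; the './sort' binary and its flags are placeholders):
               hyperfine -n quicksort -n mergesort \
                   './sort --quick data' './sort --merge data'

               Each -n occurrence names the corresponding command in the output and in any
               exported results.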

       -h, --help

              Print this help message.

       -V, --version

              Show version information.

   ARGS:
              <command>...

               The command(s) to benchmark. If more than one command is given, hyperfine
               compares their relative speed.