Provided by: likwid_5.2.2+dfsg1-1_amd64

NAME

       likwid-perfctr  -  configure  and  read  out hardware performance counters on x86, ARM and
       POWER CPUs and Nvidia GPUs

SYNOPSIS

       likwid-perfctr   [-vhHmaiefO]    [-c    core_list]    [-C    core_list_for_pinning]    [-g
       performance_group     or    performance_event_string]    [-t    timeline_frequency]    [-S
       monitoring_time]  [-T  group_switch_frequency]  [-V  verbosity]   [-M   access_mode]   [-o
       output_file]  [-s skip_mask] [-E search_str] [-G gpu_list(*)] [-W gpu_performance_group or
       gpu_performance_event_string(*)] [--stats]

DESCRIPTION

       likwid-perfctr is a lightweight  command  line  application  to  configure  and  read  out
       hardware performance monitoring data on supported x86, ARM and POWER processors and Nvidia
       GPUs. It can measure either as a wrapper, without changes to  the  measured  application,
       or  with  marker API functions inside the code, which turn the counters on and off. There
       are preconfigured performance groups  with  useful  event  sets  and  derived  metrics.
       Additionally, arbitrary events can be measured with custom event sets.  The  marker  API
       can measure multiple named regions, and the results are accumulated over multiple region
       calls.
       (*) Option only available if built with Nvidia GPU support

OPTIONS

       -v, --version
              prints version information to standard output, then exits.

       -h, --help
              prints a help message to standard output, then exits.

       -H     prints the group help message (use together with the -g switch).

       -V <level>, --verbose <level>
              verbose output during execution for debugging: 0 for errors  only,  1  for
              informational output, 2 for detailed output and 3 for developer output.

       -m, --marker
              run in marker API mode

       -a     print available performance groups for the current processor, then exit.

       -e     print available counters and performance events of the current  processor  and  (if
              available) Nvidia GPUs.

       -o, --output <filename>
              store all output to a file instead of stdout. The following  placeholders  are
              supported in the filename: %j for PBS_JOBID, %r for the MPI rank  (only  Intel
              MPI at the moment), %h for the host name and %p for the  process  PID.  The
              placeholders must be separated by underscores, e.g. -o test_%h_%p.  You  must
              specify a suffix for the filename. For txt the output is written to the file as
              is. Other suffixes trigger a filter on the output. Available  filters  are
              currently csv (comma separated values), xml and json.

       -O     print     output     in     CSV     format    (conform    to    RFC    4180,    see
              https://tools.ietf.org/html/rfc4180 for details).

       -i, --info
              print cpuid information about processor  and  about  Intel  Performance  Monitoring
              features, then exit.

       -c <cpu expression>
              specify  a  numerical  list  of  processors.  The  list may contain multiple items,
              separated by comma, and ranges. For example 0,3,9-11.

       -C <cpu expression>
              specify a numerical list of  processors.  The  list  may  contain  multiple  items,
              separated  by  comma,  and ranges. For example 0,3,9-11. This variant will also pin
              the threads to the cores. Also logical numberings can be used.

       -g, --group <performance group> or <performance event set string>
              specify which performance group to measure. This can be one of the tags output with
              the -a flag.  Also a custom event set can be specified by a comma separated list of
              events. Each event has the format eventId:register, with the register being one
              of the performance counter registers supported by the architecture.

       -t <frequency of measurements>
              timeline  mode  for  time  resolved  measurements.  The  time unit must be given on
              command line, e.g. 4s, 500ms or 900us.

       -S <waittime between measurements>
              End-to-end measurement using likwid-perfctr  but  sleep  instead  of  executing  an
              application. The time unit must be given on command line, e.g. 4s, 500ms or 900us.

       -T <time between group switches>
              Frequency to switch groups if multiple are given on the command line; the default
              is 2s. The value is ignored for a single event set, where a default frequency of
              30s is used to catch overflows. The time unit must be given on the command line,
              e.g. 4s, 500ms or 900us.

       -s, --skip <mask>
              Specify skip mask as HEX number. For each  set  bit  the  corresponding  thread  is
              skipped.

       -f, --force
              Force writing of registers even if they are in use.

       -E <search_str>
              Print only events and corresponding counters matching <search_str>

       -G, --gpus <gpu_list>
              specify a numerical list of GPU IDs. The list may contain multiple items, separated
              by comma, and ranges. For example 0,3,9-11.

       -W, --gpugroup <gpu performance group> or <gpu performance event set string>
              specify which performance group to measure on the specified Nvidia GPUs.  This  can
              be one of the tags output with the -a flag in the GPU section.  Also a custom event
              set can be specified by a comma separated list of events. Each event has the format
              eventId:GPUx (x=0,1,2,...). Events can be added to the string until the available
              counters are exhausted.

       --stats
              Always print statistics table

EXAMPLE

       Because likwid-perfctr measures on processors and not single applications it is  necessary
       to ensure that processes and threads are pinned to dedicated resources. You can either pin
       the application yourself or use the builtin pin functionality.

       1.  As wrapper with performance group:

       likwid-perfctr -C 0-2 -g TLB ./cacheBench -n 2 -l 1048576 -i 100 -t Stream

       The parent process is pinned to processor 0, Thread 0 to  processor  1  and  Thread  1  to
       processor 2.

       2.  As wrapper with custom event set on AMD:

       likwid-perfctr    -C    0-4    -g   INSTRUCTIONS_RETIRED_SSE:PMC0,CPU_CLOCKS_UNHALTED:PMC3
       ./cacheBench

       This measures the event INSTRUCTIONS_RETIRED_SSE on counter  PMC0  and  the  event
       CPU_CLOCKS_UNHALTED on counter PMC3.  It is possible to calculate the run  time  of
       all threads based on the CPU_CLOCKS_UNHALTED event. If you want this,  you  have  to
       include this event in your custom event string as shown above.

       3.  As wrapper with custom event set on Intel:

       likwid-perfctr -C 0 -g INSTR_RETIRED_ANY:FIXC0,CPU_CLK_UNHALTED_CORE:FIXC1 ./stream-icc

       On   Intel  processors  fixed  events  are  measured  on  dedicated  counters.  These  are
       INSTR_RETIRED_ANY and CPU_CLK_UNHALTED_CORE.   If  you  configure  these  fixed  counters,
       likwid-perfctr will calculate the run time and CPI metrics for your run.

       4.  Using  the  marker  API to measure only parts of your code (this can be used both with
           groups or custom event sets):

       likwid-perfctr  -m  -C   0-4   -g   INSTRUCTIONS_RETIRED_SSE:PMC0,CPU_CLOCKS_UNHALTED:PMC3
       ./cacheBench

       You have to link your code against liblikwid.so and use the marker API calls.  Examples
       can be found in the examples folder /usr/share/likwid/examples.  The following code
       snippet shows
       the necessary calls:

       #include <likwid-marker.h>

       /* only one thread calls init */
       LIKWID_MARKER_INIT;

       /* Can be called by each thread that should
        * perform measurements. It is only needed
        * if the pinning feature of LIKWID failed
        * and the threads need to be pinned explicitly.
        *
        * If you place it in the same parallel
        * region as LIKWID_MARKER_START, perform a
        * barrier between the statements to avoid
        * timing problems.
        */
       LIKWID_MARKER_THREADINIT;

       /* If you run the code region only once, register
        * the region tag previously to reduce the overhead
        * of START and STOP calls. Call it once for each
        * thread in parallel environment.
        * Note: No whitespace characters are allowed in the region tags
        * This call is optional but RECOMMENDED, START will do the same operations.
        */
       LIKWID_MARKER_REGISTER("name");

       /* Start measurement
        * Note: No whitespace characters are allowed in the region tags
        */
       LIKWID_MARKER_START("name");
       /*
        * Your code to be measured is here
        * You can also nest named regions
        * No whitespaces are allowed in the region names!
        */
       LIKWID_MARKER_STOP("name");

       /* If you want to measure multiple groups/event sets
        * Switches through groups in round-robin fashion
        */
       LIKWID_MARKER_SWITCH;

       /* If you want to get the data of a region inside your application
        * nevents is an (int*) and used as input length of the events array. After the
        * call, nevents contains the actual amount of events
        * events is an array of doubles (double*), time is a pointer to double to
        * retrieve the measured runtime of the region and count is a pointer to int
        * and is filled with the call count of the region.
        */
       LIKWID_MARKER_GET("name", nevents, events, time, count);

       /* If you want to reset the counts for a region
        */
       LIKWID_MARKER_RESET("name");

       /* Finally */
       LIKWID_MARKER_CLOSE;

       5.  Using likwid in timeline mode:

       likwid-perfctr -c 0-3 -g FLOPS_DP -t 300ms ./cacheBench > out.txt

       This will read out the counters every 300ms on physical hardware threads 0-3 and write the
       results to out.txt.  The application is not pinned to the CPUs. For custom event sets,
       the output syntax of timeline mode is:

       <groupID> <numberOfEvents> <numberOfThreads> <Timestamp> <Event1_Thread1> <Event2_Thread1>
       ... <Event1_Thread2> ... <EventN_ThreadM>

       For  performance  groups  with  metrics:  <groupID>  <numberOfMetrics>   <numberOfThreads>
       <Timestamp> <Metric1_Thread1> <Metric2_Thread1> ... <Metric1_Thread2> ...<MetricN_ThreadM>

       For timeline mode there is a frontend application likwid-perfscope(1),  which  enables
       live plotting of selected events. Please be aware that with high frequencies  (<100ms),
       the values may deviate from the real results, but their overall behavior is still valid.

       6.  Using likwid in stethoscope mode:

       likwid-perfctr -c 0-3 -g FLOPS_DP -S 2s

       This  will  start the counters and read them out after 2s on physical hardware threads 0-3
       and write the results to stdout.

       7.  Using likwid with counter options:

       likwid-perfctr -c S0:1@S1:1 -g LLC_LOOKUPS_DATA_READ:CBOX0C0:STATE=0x9 ./cacheBench

       This will program the counter CBOX0C0 (the counter 0 of the LLC cache box  0)  to  measure
       the  event  LLC_LOOKUPS_DATA_READ  and  filter the increments by the state of a cacheline.
       STATE=0x9 for this event means all <invalid> and <modified> cachelines. Which options  are
       allowed for which box is listed in LIKWID's html documentation. The values for the
       options can be found in the vendor's performance monitoring documentation. Likwid
       measures the first CPU of socket 0 and the first CPU of socket 1. See likwid-pin(1)
       for details
       regarding the cpu expressions.  For more code examples have a  look  at  the  likwid  WIKI
       pages and LIKWID's html documentation.

       8.  Using likwid with GPU events and the NvMarkerAPI. The CUDA and  CUPTI  libraries
           must be reachable (path in LD_LIBRARY_PATH).

       likwid-perfctr -G 0,1 -W FLOPS_DP -m ./cudaApp

       This runs the application in NvMarkerAPI mode on GPUs 0  and  1  and  measures  the
       double-precision flops. The NvMarkerAPI is similar  to  the  CPU  MarkerAPI  (compile
       -DLIKWID_NVMON):

       #include <likwid-marker.h>

       /* Initialize the library and add configured eventset */
       LIKWID_NVMARKER_INIT;

       /* If you run the code region only once, register
        * the region tag previously to reduce the overhead
        * of START and STOP calls. Call it before calling START() for
        * the region the first time.
        *
        * Place it around your CUDA kernel call.
        *
        * Note: No whitespace characters are allowed in the region tags
        * This call is optional but RECOMMENDED, START will do the same operations.
        */
       LIKWID_NVMARKER_REGISTER("name");

       /* Start measurement on Nvidia GPUs
        * Note: No whitespace characters are allowed in the region tags
        */
       LIKWID_NVMARKER_START("name");
       /*
        * Your code to be measured is here
        * You can also nest named regions
        */

       /* Stop measurement on Nvidia GPUs
        * No whitespaces are allowed in the region names!
        */
       LIKWID_NVMARKER_STOP("name");

       /* If you want to measure multiple groups/event sets
        * Switches through groups in round-robin fashion.
        */
       LIKWID_NVMARKER_SWITCH;

       /* If you want to get the data of a region inside your application
        * nevents is an (int*) and used as input length of the events array. After the
        * call, nevents contains the actual amount of events. Same for ngpus.
        * events is an array of doubles (double*), time is a pointer to double to
        * retrieve the measured runtime of the region and count is a pointer to int
        * and is filled with the call count of the region.
        */
       LIKWID_NVMARKER_GET("name", ngpus, nevents, events, time, count);

       /* If you want to reset the counts for a region
        */
       LIKWID_NVMARKER_RESET("name");

       /* Finally */
       LIKWID_NVMARKER_CLOSE;

AUTHOR

       Written by Thomas Gruber <thomas.roehl@googlemail.com>.

BUGS

       Report Bugs on <https://github.com/RRZE-HPC/likwid/issues>.

SEE ALSO

       likwid-topology(1), likwid-perfscope(1), likwid-pin(1), likwid-bench(1)