
NAME

       test_server_ctrl - This module provides a low level interface to the Test Server.

DESCRIPTION

       The  test_server_ctrl  module  provides  a  low  level  interface to the Test Server. This
       interface is normally not used directly by the tester, but through a  framework  built  on
       top of test_server_ctrl.

       Common  Test  is  such  a framework, well suited for automated black box testing of target
       systems of any kind (not necessarily implemented in Erlang). Common Test is  also  a  very
       useful  tool  for  white  box testing Erlang programs and OTP applications. Please see the
       Common Test User's Guide and reference manual for more information.

       If you want to write your own framework, some more information can be found in the chapter
       "Writing  your  own  test server framework" in the Test Server User's Guide. Details about
       the interface provided by test_server_ctrl follow below.

EXPORTS

       start() -> Result

              Types:

                  Result = ok | {error, {already_started, pid()}}

              This function starts the test server.

       stop() -> ok

              This stops the test server and all its activity. The running test  suite  (if  any)
              will be halted.

       add_dir(Name, Dir) -> ok
       add_dir(Name, Dir, Pattern) -> ok
       add_dir(Name, [Dir|Dirs]) -> ok
       add_dir(Name, [Dir|Dirs], Pattern) -> ok

              Types:

                 Name = term()
                   The jobname for this directory.
                 Dir = term()
                   The directory to scan for test suites.
                 Dirs = [term()]
                   List of directories to scan for test suites.
                 Pattern = term()
                   Suite match pattern. Directories will be scanned for Pattern_SUITE.erl files.

               Puts a collection of suites matching (*_SUITE) in the given directories into the
               job queue. Name is an arbitrary name for the job; it can be any Erlang term. If
               Pattern is given, only modules matching Pattern* will be added.
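
               As a minimal sketch, a job covering two hypothetical directories, and another
               one restricted to suites matching a hypothetical pattern, could be queued like
               this:

                 %% queue all *_SUITE modules found in two directories as one job
                 ok = test_server_ctrl:add_dir(app_tests,
                                               ["/ldisk/tests/app1", "/ldisk/tests/app2"]),

                 %% queue only suites matching the pattern "db" from one directory
                 ok = test_server_ctrl:add_dir(db_tests, "/ldisk/tests/app1", "db").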

       add_module(Mod) -> ok
       add_module(Name, [Mod|Mods]) -> ok

              Types:

                 Mod = atom()
                 Mods = [atom()]
                   The name(s) of the module(s) to add.
                 Name = term()
                   Name for the job.

               This function adds a module, or a list of modules, to the test server's job
               queue. Name may be any Erlang term. When Name is not given, the job gets the
               name of the module.
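
               For example (the module and job names below are hypothetical):

                 ok = test_server_ctrl:add_module(my_app_SUITE),
                 ok = test_server_ctrl:add_module(app_suites, [my_app_SUITE, my_lib_SUITE]).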

       add_case(Mod, Case) -> ok

              Types:

                 Mod = atom()
                   Name of the module the test case is in.
                 Case = atom()
                   Function name of the test case to add.

              This  function  will  add one test case to the job queue. The job will be given the
              module's name.

       add_case(Name, Mod, Case) -> ok

              Types:

                 Name = string()
                   Name to use for the test job.

              Equivalent to add_case/2, but the test job will get the specified name.

       add_cases(Mod, Cases) -> ok

              Types:

                 Mod = atom()
                   Name of the module the test case is in.
                 Cases = [Case]
                 Case = atom()
                   Function names of the test cases to add.

              This function will add one or more test cases to the job queue.  The  job  will  be
              given the module's name.

       add_cases(Name, Mod, Cases) -> ok

              Types:

                 Name = string()
                   Name to use for the test job.

              Equivalent to add_cases/2, but the test job will get the specified name.

       add_spec(TestSpecFile) -> ok | {error, nofile}

              Types:

                 TestSpecFile = string()
                   Name of the test specification file

              This  function will add the content of the given test specification file to the job
              queue. The job will be given the name of the test specification file, e.g.  if  the
              file is called test.spec, the job will be called test.

              See the reference manual for the test server application for details about the test
              specification file.

       add_dir_with_skip(Name, [Dir|Dirs], Skip) -> ok
       add_dir_with_skip(Name, [Dir|Dirs], Pattern, Skip) -> ok
       add_module_with_skip(Mod, Skip) -> ok
       add_module_with_skip(Name, [Mod|Mods], Skip) -> ok
       add_case_with_skip(Mod, Case, Skip) -> ok
       add_case_with_skip(Name, Mod, Case, Skip) -> ok
       add_cases_with_skip(Mod, Cases, Skip) -> ok
       add_cases_with_skip(Name, Mod, Cases, Skip) -> ok

              Types:

                 Skip = [SkipItem]
                   List of items to be skipped from the test.
                 SkipItem = {Mod,Comment} | {Mod,Case,Comment} | {Mod,Cases,Comment}
                 Mod = atom()
                   Test suite name.
                 Comment = string()
                   Reason why suite or case is being skipped.
                 Cases = [Case]
                 Case = atom()
                   Name of test case function.

              These functions add test jobs just  like  the  add_dir,  add_module,  add_case  and
              add_cases  functions  above, but carry an additional argument, Skip. Skip is a list
              of items that should be skipped in the current test run. Test job items that  occur
              in the Skip list will be logged as SKIPPED with the associated Comment.
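
               As an illustration, the following sketch (with hypothetical names) queues a
               directory job but skips one whole suite and one individual test case:

                 Skip = [{flaky_SUITE, "suite being reworked"},
                         {db_SUITE, restart_case, "known to hang on this host"}],
                 ok = test_server_ctrl:add_dir_with_skip(app_tests,
                                                         ["/ldisk/tests/app1"], Skip).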

       add_tests_with_skip(Name, Tests, Skip) -> ok

              Types:

                 Name = term()
                   The jobname for this directory.
                 Tests = [TestItem]
                   List of jobs to add to the run queue.
                 TestItem = {Dir,all,all} | {Dir,Mods,all} | {Dir,Mod,Cases}
                 Dir = term()
                   The directory to scan for test suites.
                 Mods = [Mod]
                 Mod = atom()
                   Test suite name.
                 Cases = [Case]
                 Case = atom()
                   Name of test case function.
                 Skip = [SkipItem]
                   List of items to be skipped from the test.
                 SkipItem = {Mod,Comment} | {Mod,Case,Comment} | {Mod,Cases,Comment}
                 Comment = string()
                   Reason why suite or case is being skipped.

               This function adds various test jobs to the test_server_ctrl job queue. These
               jobs can be of different types (all or specific suites in one directory, all or
               specific cases in one suite, etc.). It is also possible to have particular items
               skipped by passing them along in the Skip list (see the add_*_with_skip
               functions above).
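
               A sketch with hypothetical names, mixing the different kinds of test items:

                 Tests = [{"/ldisk/tests/app1", all, all},
                          {"/ldisk/tests/app2", [db_SUITE, net_SUITE], all},
                          {"/ldisk/tests/app3", ui_SUITE, [start_case, stop_case]}],
                 Skip  = [{net_SUITE, "not runnable on this host"}],
                 ok = test_server_ctrl:add_tests_with_skip(app_tests, Tests, Skip).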

       abort_current_testcase(Reason) -> ok | {error,no_testcase_running}

              Types:

                 Reason = term()
                   The reason for stopping the test case, which will be printed in the log.

              When calling this function, the currently executing test case will be  aborted.  It
              is  the  user's  responsibility  to  know  for  sure  which  test case is currently
              executing. The function is therefore only safe to call from a  function  which  has
              been called (or synchronously invoked) by the test case.

       set_levels(Console, Major, Minor) -> ok

              Types:

                 Console = integer()
                   Level for I/O to be sent to console.
                 Major = integer()
                   Level for I/O to be sent to the major logfile.
                 Minor = integer()
                   Level for I/O to be sent to the minor logfile.

               Determines where I/O from the test suites and the test server will go. All text
               output from test suites and the test server is tagged with a priority value
               ranging from 0 to 100, where 100 is the most detailed (see the section about log
               files in the user's guide). Output from the test cases (using io:format/2) has a
               detail level of 50. Depending on the levels set by this function, this I/O may
               be sent to the console, the major log file (for the whole test suite) or the
               minor log file (separate for each test case).

              All output with detail level:

                * Less than or equal to Console is displayed on the screen (default 1)

                * Less than or equal to Major is logged in the major log file (default 19)

                * Greater than or equal to Minor is logged in the minor log files (default 10)

              To view the currently set thresholds, use the get_levels/0 function.
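
               For example, setting the thresholds back to the defaults described above:

                 ok = test_server_ctrl:set_levels(1, 19, 10),
                 {1, 19, 10} = test_server_ctrl:get_levels().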

       get_levels() -> {Console, Major, Minor}

              Returns the current levels. See set_levels/3 for types.

       jobs() -> JobQueue

              Types:

                 JobQueue = [{list(), pid()}]

              This function will return all the jobs currently in the job queue.

       multiply_timetraps(N) -> ok

              Types:

                 N = integer() | infinity

               This function should be called before starting a test that requires extended
               timetraps, e.g. if extensive tracing is used. All timetraps started after this
               call will be multiplied by N.

       scale_timetraps(Bool) -> ok

              Types:

                 Bool = true | false

              This function should be called before a test is started. The parameter specifies if
              test_server should attempt to automatically scale the timetrap value  in  order  to
              compensate for delays caused by e.g. the cover tool.

       get_timetrap_parameters() -> {N,Bool}

              Types:

                 N = integer() | infinity
                 Bool = true | false

              This  function  may  be  called  to read the values set by multiply_timetraps/1 and
              scale_timetraps/1.

       cover(Application,Analyse) -> ok
       cover(CoverFile,Analyse) -> ok
       cover(App,CoverFile,Analyse) -> ok

              Types:

                 Application = atom()
                   OTP application to cover compile
                 CoverFile = string()
                   Name of file listing modules to exclude from or include in cover  compilation.
                   The filename must include full path to the file.
                 Analyse = details | overview

               This function informs the test_server controller that the next test shall run
               with code coverage analysis. All timetraps will automatically be multiplied by
               10 when cover is run.

               Application and CoverFile indicate what to cover compile. If Application is
               given, the default is that all modules in the ebin directory of the application
               will be cover compiled. The ebin directory is found by adding ebin to
               code:lib_dir(Application).

              A CoverFile can have the following entries:

              {exclude, all | ExcludeModuleList}.
              {include, IncludeModuleList}.
              {cross, CrossCoverInfo}.

              Note  that  each  line  must  end  with  a   full   stop.   ExcludeModuleList   and
              IncludeModuleList are lists of atoms, where each atom is a module name.

              CrossCoverInfo  is  used  when  collecting  cover data over multiple tests. Modules
              listed here are compiled, but they will not be analysed when the test is  finished.
              See  cross_cover_analyse/2 for more information about the cross cover mechanism and
              the format of CrossCoverInfo.

               If both an Application and a CoverFile are given, all modules in the application
               are cover compiled, except for the modules listed in ExcludeModuleList. The
               modules in IncludeModuleList are also cover compiled.

              If a CoverFile is given, but no Application, only the modules in  IncludeModuleList
              are cover compiled.
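
               A small cover file could, for instance, look like this (the module names are
               hypothetical):

                 %% skip two generated modules when cover compiling the application,
                 %% and additionally cover compile one module from outside it
                 {exclude, [my_app_parser, my_app_scanner]}.
                 {include, [my_lib_util]}.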

              Analyse  indicates  the  detail  level of the cover analysis. If Analyse = details,
              each cover compiled  module  will  be  analysed  with  cover:analyse_to_file/1.  If
              Analyse  =  overview  an overview of all cover compiled modules is created, listing
              the number of covered and not covered lines for each module.

              If  the  test  following  this  call  starts  any  slave   or   peer   nodes   with
              test_server:start_node/3, the same cover compiled code will be loaded on all nodes.
              If the loading fails, e.g. if the node runs an old version of OTP,  the  node  will
              simply  not  be a part of the coverage analysis. Note that slave or peer nodes must
              be stopped with test_server:stop_node/1 for the node to be  part  of  the  coverage
              analysis,  else  the  test  server will not be able to fetch coverage data from the
              node.

              When the test is finished, the coverage analysis is automatically  completed,  logs
              are  created  and the cover compiled modules are unloaded. If another test is to be
              run with coverage analysis, test_server_ctrl:cover/2/3 must be called again.

       cross_cover_analyse(Level, Tests) -> ok

              Types:

                 Level = details | overview
                 Tests = [{Tag,LogDir}]
                 Tag = atom()
                   Test identifier.
                 LogDir = string()
                    Log directory for the test identified by Tag. This can either be the
                    run.<timestamp> directory or the parent directory of this (in which case
                    the latest run.<timestamp> directory is chosen).

               Analyse cover data collected from multiple tests. The modules analysed are the
               ones listed in cross statements in the cover files. These are modules that are
               heavily used by tests other than the one where they belong or are explicitly
               tested. They should then be listed as cross modules in the cover file for the
               test where they are used but do not belong. See the example below.

              This function should be run after all tests are completed, and the result  will  be
              stored  in  a  file called cross_cover.html in the run.<timestamp> directory of the
              test the modules belong to.

              Note that the function can be executed  on  any  node,  and  it  does  not  require
              test_server_ctrl to be started first.

              The cross statement in the cover file must be like this:

              {cross,[{Tag,Modules}]}.

              where Tag is the same as Tag in the Tests parameter to this function and Modules is
              a list of module names (atoms).

              Example:

               If the module m1 belongs to system s1 but is also heavily used in the tests for
               another system s2, then the cover files for the two systems' tests could look
               like this:

              s1.cover:
                {include,[m1]}.

              s2.cover:
                {include,[....]}. % modules belonging to system s2
                {cross,[{s1,[m1]}]}.

              When the tests for both s1 and s2 are completed, run

              test_server_ctrl:cross_cover_analyse(Level,[{s1,S1LogDir},{s2,S2LogDir}])

              and   the   accumulated   cover    data    for    m1    will    be    written    to
              S1LogDir/[run.<timestamp>/]cross_cover.html.

              Note  that  the  m1 module will also be presented in the normal coverage log for s1
              (due to the include statement in s1.cover), but that  only  includes  the  coverage
              achieved by the s1 test itself.

               The Tag in the cross statement in the cover file has no other purpose than
               mapping the list of modules ([m1] in the example above) to the correct log
               directory where it should be included in the cross_cover.html file (S1LogDir in
               the example above). That is, the value of Tag has no meaning in itself; it could
               be foo as well as s1 above, as long as the same Tag is used in the cover file
               and in the call to this function.

       trc(TraceInfoFile) -> ok | {error, Reason}

              Types:

                 TraceInfoFile = atom() | string()
                   Name of a file defining which functions to trace and how

              This  function  starts  call  trace  on  target and on slave or peer nodes that are
              started or will be started by the test suites.

              Timetraps  are   not   extended   automatically   when   tracing   is   used.   Use
              multiply_timetraps/1 if necessary.

              Note  that  the  trace  support  in the test server is in a very early stage of the
              implementation, and thus not yet as powerful as one might wish for.

              The trace information file specified by the TraceInfoFile argument is a  text  file
              containing one or more of the following elements:

                * {SetTP,Module,Pattern}.

                * {SetTP,Module,Function,Pattern}.

                * {SetTP,Module,Function,Arity,Pattern}.

                * ClearTP.

                * {ClearTP,Module}.

                * {ClearTP,Module,Function}.

                * {ClearTP,Module,Function,Arity}.

                SetTP = tp | tpl:
                   This maps to the corresponding functions in the ttb module in the observer
                  application. tp means set trace pattern on global function calls. tpl means set
                  trace pattern on local and global function calls.

                ClearTP = ctp | ctpl | ctpg:
                   This maps to the corresponding functions in the ttb module in the observer
                  application. ctp means clear trace pattern (i.e. turn off) on global and  local
                  function calls. ctpl means clear trace pattern on local function calls only and
                  ctpg means clear trace pattern on global function calls only.

                Module = atom():
                  The module to trace

                Function = atom():
                  The name of the function to trace

                Arity = integer():
                  The arity of the function to trace

                Pattern = [] | match_spec():
                  The trace pattern to set for the module or function. For a description  of  the
                  match_spec()  syntax,  please  turn  to the User's guide for the runtime system
                  (erts). The chapter "Match Specification in Erlang" explains the general  match
                  specification language.
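
               As an illustration, a small trace information file could look like this (the
               module and function names are hypothetical):

                 {tp, my_app_server, []}.
                 {tpl, my_app_lib, encode, 2, []}.
                 {ctp, my_app_server, handle_info}.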

              The  trace  result will be logged in a (binary) file called NodeName-test_server in
              the current directory of the test server controller node. The log must be formatted
              using ttb:format/1/2.

       stop_trace() -> ok | {error, not_tracing}

              This  function  stops  tracing  on  target,  and  on  slave  or peer nodes that are
              currently running. New slave or peer nodes will no longer be traced after this.

FUNCTIONS INVOKED FROM COMMAND LINE

        The following functions are supposed to be invoked from the command line using the -s
        option when starting the Erlang node.

EXPORTS

       run_test(CommandLine) -> ok

              Types:

                 CommandLine = FlagList

               This function is supposed to be invoked from the command line. It starts the
               test server, interprets the arguments supplied on the command line, runs the
               specified tests and, when all tests are done, stops the test server and returns
               to the Erlang prompt.

              The CommandLine argument is a  list  of  command  line  flags,  typically  ['KEY1',
              Value1, 'KEY2', Value2, ...]. The valid command line flags are listed below.

              Under a UNIX command prompt, this function can be invoked like this:
              erl  -noshell  -s  test_server_ctrl  run_test KEY1 Value1 KEY2 Value2 ... -s erlang
              halt

              Or make an alias (this is for unix/tcsh)
              alias erl_test 'erl -noshell -s test_server_ctrl run_test \!* -s erlang halt'

              And then use it like this
              erl_test KEY1 Value1 KEY2 Value2 ...

              The valid command line flags are

                DIR dir:
                  Adds all test modules in the directory dir to the job queue.

                MODULE mod:
                  Adds the module mod to the job queue.

                CASE mod case:
                  Adds the case case in module mod to the job queue.

                SPEC spec:
                  Runs the test specification file spec.

                SKIPMOD mod:
                   Skips all test cases in the module mod.

                SKIPCASE mod case:
                  Skips the test case case in module mod.

                NAME name:
                   Names the test suite something other than the default name. This does not
                   apply to SPEC, which keeps its names.

                COVER app cover_file analyse:
                   Indicates that the test should be run with cover analysis. app, cover_file
                   and analyse correspond to the parameters of test_server_ctrl:cover/3. If no
                   cover file is used, the atom none should be given.

                TRACE traceinfofile:
                  Specifies  a trace information file. When this option is given, call tracing is
                  started on the target node and all slave or peer nodes that  are  started.  The
                  trace  information file specifies which modules and functions to trace. See the
                  function trc/1 above for more information about the syntax of this file.
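
               For instance, a complete invocation using the DIR and NAME flags could look
               like this (the directory name is hypothetical):

               erl -noshell -s test_server_ctrl run_test DIR /ldisk/tests/app1 NAME app1 -s erlang halt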

FRAMEWORK CALLBACK FUNCTIONS

       A  test  server  framework  can  be  defined   by   setting   the   environment   variable
        TEST_SERVER_FRAMEWORK to a module name. This module will then be the framework callback
        module, and it must export the following functions:

EXPORTS

       get_suite(Mod,Func) -> TestCaseList

              Types:

                 Mod = atom()
                   Test suite name.
                 Func = atom()
                   Name of test case.
                 TestCaseList = [SubCase]
                   List of test cases.
                 SubCase = atom()
                   Name of a case.

              This function is called before a test case is started. The purpose is to retrieve a
              list  of  subcases.  The  default  behaviour  of  this  function  should be to call
              Mod:Func(suite) and return the result from this call.
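
               A minimal sketch of such a callback, implementing the default behaviour
               described above:

                 get_suite(Mod, Func) ->
                     %% ask the test suite itself for its list of subcases
                     Mod:Func(suite).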

       init_tc(Mod,Func,Args0) -> {ok,Args1} | {skip,ReasonToSkip} |  {auto_skip,ReasonToSkip}  |
       {fail,ReasonToFail}

              Types:

                 Mod = atom()
                   Test suite name.
                 Func = atom()
                   Name of test case or configuration function.
                 Args0 = Args1 = [tuple()]
                   Normally Args = [Config]
                 ReasonToSkip = term()
                   Reason to skip the test case or configuration function.
                 ReasonToFail = term()
                   Reason to fail the test case or configuration function.

              This  function is called before a test case or configuration function starts. It is
              called on the process executing the function Mod:Func. Typical use of this function
              can  be  to  alter  the input parameters to the test case function (Args) or to set
              properties for the executing process.

              By  returning  {skip,Reason},  Func  gets  skipped.  Func  also  gets  skipped   if
              {auto_skip,Reason}  is  returned, but then gets an auto skipped status (rather than
              user skipped).

              To fail Func immediately instead of executing it, return {fail,ReasonToFail}.
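
               A minimal sketch of an init_tc/3 callback that skips one hypothetical test case
               and otherwise passes the arguments through unchanged:

                 init_tc(_Mod, not_yet_implemented, _Args) ->
                     {skip, "test case not implemented yet"};
                 init_tc(_Mod, _Func, Args) ->
                     {ok, Args}.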

       end_tc(Mod,Func,Status) -> ok | {fail,ReasonToFail}

              Types:

                 Mod = atom()
                   Test suite name.
                 Func = atom()
                   Name of test case or configuration function.
                 Status = {Result,Args} | {TCPid,Result,Args}
                   The status of the test case or configuration function.
                 ReasonToFail = term()
                   Reason to fail the test case or configuration function.
                 Result = ok | Skip | Fail
                   The final result of the test case or configuration function.
                 TCPid = pid()
                   Pid of the process executing Func
                 Skip = {skip,SkipReason}
                 SkipReason = term() | {failed,{Mod,init_per_testcase,term()}}
                   Reason why the function was skipped.
                 Fail  =  {error,term()}  |  {'EXIT',term()}  |  {timetrap_timeout,integer()}   |
                 {testcase_aborted,term()}   |  testcase_aborted_or_killed  |  {failed,term()}  |
                 {failed,{Mod,end_per_testcase,term()}}
                   Reason why the function failed.
                 Args = [tuple()]
                   Normally Args = [Config]

              This function is called when a test case, or a configuration function, is finished.
              It  is  normally  called  on  the  process  where  the  function  Mod:Func has been
              executing, but if not, the pid of the test case process is passed with  the  Status
              argument.

              Typical use of the end_tc/3 function can be to clean up after init_tc/3.

              If  Func  is  a  test case, it is possible to analyse the value of Result to verify
              that init_per_testcase/2 and end_per_testcase/2 executed successfully.

              It is possible with  end_tc/3  to  fail  an  otherwise  successful  test  case,  by
              returning {fail,ReasonToFail}. The test case Func will be logged as failed with the
              provided term as reason.
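
               A sketch of an end_tc/3 callback that fails one hypothetical test case
               unconditionally and accepts everything else:

                 end_tc(_Mod, must_not_pass_yet, _Status) ->
                     {fail, "feature not released, case must not pass yet"};
                 end_tc(_Mod, _Func, _Status) ->
                     ok.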

       report(What,Data) -> ok

              Types:

                 What = atom()
                 Data = term()

              This function is called in order to keep the framework up-to-date with the progress
              of  the  test.  This  is  useful  e.g.  if the framework implements a GUI where the
              progress information is constantly updated. The following can be reported:

              What = tests_start, Data = {Name,NumCases}
              What = loginfo, Data = [{topdir,TestRootDir},{rundir,CurrLogDir}]
              What = tests_done, Data = {Ok,Failed,{UserSkipped,AutoSkipped}}
              What = tc_start, Data = {{Mod,{Func,GroupName}},TCLogFile}
              What = tc_done, Data = {Mod,{Func,GroupName},Result}
              What = tc_user_skip, Data = {Mod,{Func,GroupName},Comment}
              What = tc_auto_skip, Data = {Mod,{Func,GroupName},Comment}
              What = framework_error, Data = {{FWMod,FWFunc},Error}

               Note that for a test case function that does not belong to a group, GroupName
               has the value undefined; otherwise it is the name of the test case group.
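
               A sketch of a report/2 callback that only reacts to one of the events above:

                 report(tests_done, {Ok, Failed, {UserSkipped, AutoSkipped}}) ->
                     io:format("done: ~p ok, ~p failed, ~p+~p skipped~n",
                               [Ok, Failed, UserSkipped, AutoSkipped]);
                 report(_What, _Data) ->
                     ok.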

       error_notification(Mod, Func, Args, Error) -> ok

              Types:

                 Mod = atom()
                   Test suite name.
                 Func = atom()
                   Name of test case or configuration function.
                 Args = [tuple()]
                   Normally Args = [Config]
                 Error = {Reason,Location}
                 Reason = term()
                   Reason for termination.
                 Location = unknown | [{Mod,Func,Line}]
                   Last known position in Mod before termination.
                 Line = integer()
                   Line number in file Mod.erl.

              This  function  is called as the result of function Mod:Func failing with Reason at
              Location. The function is intended mainly to aid specific logging or error handling
              in  the framework application. Note that for Location to have relevant values (i.e.
              other than unknown), the line macro or test_server_line  parse  transform  must  be
              used.  For  details,  please  see  the section about test suite line numbers in the
              test_server reference manual page.

       warn(What) -> boolean()

              Types:

                 What = processes | nodes

               The test server checks the number of processes and nodes before and after the
               test is executed. With this function, the framework is asked whether the test
               server should warn when the number of processes or nodes has changed during the
               test execution. If true is returned, a warning will be written in the test case
               minor log file.
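
               For example, a framework that wants warnings about leaked processes but not
               about nodes could implement it like this:

                 warn(processes) -> true;
                 warn(nodes)     -> false.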

       target_info() -> InfoStr

              Types:

                 InfoStr = string() | ""

              The test server will ask the framework for information about the test target system
              and print InfoStr in the test case log file below the host information.