Provided by: libtest-unit-perl_0.25-2_all

NAME

       Test::Unit::TestCase - unit testing framework base class

SYNOPSIS

           package FooBar;
           use base qw(Test::Unit::TestCase);

           sub new {
               my $self = shift()->SUPER::new(@_);
               # your state for fixture here
               return $self;
           }

           sub set_up {
               # provide fixture
           }
           sub tear_down {
               # clean up after test
           }
           sub test_foo {
               my $self = shift;
               my $obj = ClassUnderTest->new(...);
               $self->assert_not_null($obj);
               $self->assert_equals('expected result', $obj->foo);
               $self->assert(qr/pattern/, $obj->foobar);
           }
           sub test_bar {
               # test the bar feature
           }

DESCRIPTION

       Test::Unit::TestCase is the 'workhorse' of the PerlUnit framework.  When writing tests,
       you generally subclass Test::Unit::TestCase, write "set_up" and "tear_down" functions if
       you need them, add a bunch of "test_*" test methods, and then do

           $ TestRunner.pl My::TestCase::Class

       and watch as your tests fail/succeed one after another. Or, if you want your tests to work
       under Test::Harness and the standard perlish 'make test', you'd write a t/foo.t that
       looked like:

           use Test::Unit::HarnessUnit;
           my $r = Test::Unit::HarnessUnit->new();
           $r->start('My::TestCase::Class');

   How To Use Test::Unit::TestCase
       (Taken from the JUnit TestCase class documentation)

        A test case defines the "fixture" (the resources needed for testing) used to run
        multiple tests. To define a test case:

       1.  implement a subclass of TestCase

       2.  define instance variables that store the state of the fixture (I suppose if you are
           using Class::MethodMaker this is possible...)

       3.  initialize the fixture state by overriding "set_up()"

       4.  clean-up after a test by overriding "tear_down()".

        Implement your tests as methods.  By default, all methods that match the regex "/^test/"
        are taken to be test methods (see "list_tests()" and "get_matching_methods()").  Note
        that, by default, all the tests defined in the current class and all of its parent
        classes will be run.  To change this behaviour, see "NOTES".
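
        For example (a minimal sketch; the class and method names are hypothetical), a subclass
        runs its own "test_*" methods as well as those inherited from its parent test case:

            package BaseTest;
            use base qw(Test::Unit::TestCase);

            sub test_base_behaviour {
                my $self = shift;
                $self->assert(1);
            }

            package DerivedTest;
            use base qw(BaseTest);

            sub test_derived_behaviour {
                my $self = shift;
                $self->assert(1);
            }

            # Running DerivedTest executes both test_base_behaviour and
            # test_derived_behaviour.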

       By default, each test runs in its own fixture so there can be no side effects among test
       runs. Here is an example:

            package MathTest;
            use base qw(Test::Unit::TestCase);

            sub new {
                my $self = shift()->SUPER::new(@_);
                $self->{value_1} = 0;
                $self->{value_2} = 0;
                return $self;
            }

            sub set_up {
                my $self = shift;
                $self->{value_1} = 2;
                $self->{value_2} = 3;
            }

       For each test implement a method which interacts with the fixture.  Verify the expected
       results with assertions specified by calling "$self->assert()" with a boolean value.

            sub test_add {
                my $self = shift;
                my $result = $self->{value_1} + $self->{value_2};
                $self->assert($result == 5);
            }

        Once the methods are defined you can run them. The normal way to do this uses reflection
        to implement "run_test": it dynamically finds and invokes the test method whose name was
        given to the test case instance. The tests to be run can be collected into a TestSuite.
        The framework provides different test runners, which can run a test suite and collect
        the results. A test runner either expects a "suite()" method as the entry point for
        getting a test to run, or it will extract the suite automatically.
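
        For instance, a "suite()" entry point might look like the following minimal sketch,
        where "MathTest" is the example class above and the suite name is arbitrary:

            use Test::Unit::TestSuite;

            # Typically placed in the test case class itself, or in an
            # aggregating suite class.
            sub suite {
                my $class = shift;

                # An empty, named suite to fill by hand ...
                my $suite = Test::Unit::TestSuite->empty_new("math tests");

                # ... plus every test_* method that MathTest defines or inherits.
                $suite->add_test(Test::Unit::TestSuite->new("MathTest"));

                return $suite;
            }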

   Writing Test Methods
       The return value of your test method is completely irrelevant. The various test runners
       assume that a test is executed successfully if no exceptions are thrown. Generally, you
       will not have to deal directly with exceptions, but will write tests that look something
       like:

           sub test_something {
               my $self = shift;
               # Execute some code which gives some results.
               ...
               # Make assertions about those results
               $self->assert_equals('expected value', $resultA);
               $self->assert_not_null($result_object);
               $self->assert(qr/some_pattern/, $resultB);
           }

       The assert methods throw appropriate exceptions when the assertions fail, which will
       generally stringify nicely to give you sensible error reports.
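
        The basic "assert()" call also accepts an optional trailing message, which is included
        in the failure report if the assertion fails. A minimal sketch (the test name is
        hypothetical):

            sub test_with_message {
                my $self = shift;
                my $answer = 6 * 7;
                # The message is only used if the assertion fails.
                $self->assert($answer == 42, "expected 42, got $answer");
            }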

       Test::Unit::Assert has more details on the various different "assert" methods.

       Test::Unit::Exception describes the Exceptions used within the "Test::Unit::*" framework.

   Helper methods
        make_test_from_coderef (CODEREF, [NAME])
            Takes a coderef and an optional name, and returns a test case that inherits from the
            object on which it was called and has the coderef installed as its "run_test"
            method. Class::Inner has more details on how this is generated.

       list_tests
           Returns the list of test methods in this class and its parents. You can override this
           in your own classes, but remember to call "SUPER::list_tests" in there too.  Uses
           "get_matching_methods".

       get_matching_methods (REGEXP)
           Returns the list of methods in this class matching REGEXP.

       set_up
       tear_down
           If you don't have any setup or tear down code that needs to be run, we provide a
           couple of null methods. Override them if you need to.

        annotate (MESSAGE)
            You can accumulate helpful debugging information for each test method via this
            method; it is only output if the test fails or encounters an error (see the sketch
            below).
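
        A short sketch of "annotate" and "make_test_from_coderef" in use ("MathTest" is the
        example class from above; the other names are hypothetical):

            # Inside your test case class (e.g. MathTest):
            sub test_fixture_sanity {
                my $self = shift;
                # Accumulated note; printed only if this test fails or errors.
                $self->annotate("value_1 is $self->{value_1}\n");
                $self->assert($self->{value_1} < 10);
            }

            # Elsewhere, e.g. in a suite() method: build an extra test on the
            # fly and collect it into a suite.
            use Test::Unit::TestSuite;
            my $generated = MathTest->make_test_from_coderef(
                sub { my $self = shift; $self->assert_equals(2, $self->{value_1}) },
                'test_generated',
            );
            my $suite = Test::Unit::TestSuite->empty_new("extra tests");
            $suite->add_test($generated);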

   How it All Works
        The PerlUnit framework is achingly complex. The basic idea is that you get to write your
        tests independently of the manner in which they will be run, whether via a "make test"
        style script or through one of the provided TestRunners; the framework handles all of
        that for you. And it does. So for the purposes of someone writing tests, in the majority
        of cases the answer is 'It just does.'.

        Of course, if you're trying to extend the framework, life gets a little more tricky. The
        core class that you should try and grok is probably Test::Unit::Result, which, in tandem
        with whichever TestRunner is being used, mediates the process of running tests, stashes
        the results and generally sits at the centre of everything.

       Better docs will be forthcoming.

NOTES

        Here are a few things to remember when you're writing your test suite:

        Tests are run in 'random' order; the list of tests in your TestCase is generated
        automagically from its symbol table, which is a hash, so the methods come back in no
        particular order.

       If you need to specify the test order, you can do one of the following:

       •   Set @TESTS

             our @TESTS = qw(my_test my_test_2);

           This is the simplest, and recommended way.

        •   Override the "list_tests()" method

            to return an ordered list of method names (see the sketch after this list).

       •   Provide a "suite()" method

           which returns a Test::Unit::TestSuite.
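
        As a sketch of the second option (the method names here are hypothetical), an override
        that does not call "SUPER::list_tests" restricts the run to exactly the tests it names,
        in that order:

          sub list_tests {
              # Run these tests, and only these, in exactly this order.
              return qw(test_connect test_query test_disconnect);
          }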

        However, even if you do manage to specify the test order, be careful: object data will
        not be retained from one test to another.  If you want to use persistent data you'll
        have to use package lexicals or globals.  (Yes, this is probably a bug.)

       If you only need to restrict which tests are run, there is a filtering mechanism
       available.  Override the "filter()" method in your testcase class to return a hashref
       whose keys are filter tokens and whose values are either arrayrefs of test method names or
       coderefs which take the method name as the sole parameter and return true if and only if
       it should be filtered, e.g.

         sub filter {{
             slow => [ qw(my_slow_test my_really_slow_test) ],
             matching_foo => sub {
                 my $method = shift;
                 return $method =~ /foo/;
             }
         }}

       Then, set the filter state in your runner before the test run starts:

         # @filter_tokens = ( 'slow', ... );
         $runner->filter(@filter_tokens);
         $runner->start(@args);

       This interface is public, but currently undocumented (see doc/TODO).

BUGS

       See "NOTES" (in particular, the fact that object data is not retained between tests) for
       at least one bug that's got me scratching my head.  There's bound to be others.

AUTHOR

       Copyright (c) 2000-2002, 2005 the PerlUnit Development Team (see Test::Unit or the AUTHORS
       file included in this distribution).

       All rights reserved. This program is free software; you can redistribute it and/or modify
       it under the same terms as Perl itself.

SEE ALSO

       •   Test::Unit::Assert

       •   Test::Unit::Exception

       •   Test::Unit::TestSuite

       •   Test::Unit::TestRunner

       •   Test::Unit::TkTestRunner

       •   For further examples, take a look at the framework self test collection
           (t::tlib::AllTests).