Subject: [boost] Boost.Test updates in trunk: need for (mini) review?
From: Gennadiy Rozenal (rogeeff_at_[hidden])
Date: 2012-10-19 23:03:57


Hi,

It's been a long while since I merged any changes into the boost release branch,
and by now there is a whole bunch of new features in trunk. Some are small, but
others are big enough to change the way people use the library. I'd like to
list them here (excluding bug fixes and removal of deprecated interfaces) and
ask what we should do with them. Specifically, let me know if you think these
require some kind of (mini) review from the community, but any other comments
are welcome as well.

So here we go:

I. New testing tool BOOST_CHECKA

This tool is based on an excellent idea from Kevlin Henney. I chose the name
CHECKA for this tool due to the lack of a better name, but I am open to
suggestions. This testing tool is capable of replacing a whole bunch of other
existing tools like BOOST_CHECK_EQUAL, BOOST_CHECK_GT, etc. Usage is the most
natural you could wish for:

BOOST_CHECKA( var1 - var2 >= 12 );

And the output will include as much information as we can get:

 error: in "foo": check var1 - var2 >= 12 failed [23-15<12]
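
For comparison, here is a minimal sketch of what this one tool subsumes (foo,
x and y are made-up names for illustration, not part of the library):

// Before: the relation is encoded in the tool name
BOOST_CHECK_EQUAL( foo(x), 42 );
BOOST_CHECK_GT( x, y );

// After: a single tool deduces the relation and operand values
// from the expression itself
BOOST_CHECKA( foo(x) == 42 );
BOOST_CHECKA( x > y );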

II. New "data driven test case" subsystem

The new data driven test case subsystem represents a generalization of the
parameterized test case and the test case template, and eventually will replace
both. The idea is to allow the user to specify an arbitrary (monomorphic or
polymorphic) set of samples and run a test case on each of the samples. Samples
can have different arity, thus you can have test cases with multiple parameters
of different types. For now we support the following dataset kinds (aside from
generators, none of the dataset construction routines performs *any* copying):

a) singleton - dataset constructed out of a single sample

data::make(10) - singleton dataset with an integer sample
data::make("qwerty") - singleton dataset with a string sample

b) array - dataset constructed out of a C array

int a[] = {1,2,3,4,5,6};
data::make(a) - dataset with 6 integer values

c) collection - dataset constructed out of a C++ forward-iterable collection

std::vector<double> v{1.2, 2.3, 3.4, 5.6};
data::make(v) - dataset with 4 double values

d) join - dataset constructed by joining 2 datasets of the same type

int a[] = {1,2,3};
int b[] = {7,8,9};
data::make(a) + data::make(b) - dataset with 6 integer values

e) zip - dataset constructed by zipping 2 datasets of the same size, but not
         necessarily the same type

This dataset has an arity which is the sum of the argument datasets' arities.

int a[] = {1,2,3};
const char* b[] = {"qwe", "asd", "zxc"};

data::make(a) ^ data::make(b) - dataset with 3 samples which are pairs of
int and const char*.

f) grid - dataset constructed by "multiplying" 2 datasets of possibly
          different sizes and types

This dataset has an arity which is the sum of the argument datasets' arities.

int a[] = {1,2,3};
const char* b[] = {"qwe", "asd"};
double c[] = {1.1, 2.2};

data::make(a) * data::make(b) * data::make(c) - dataset with 12 samples
which are tuples of int, const char* and double.

g) xrange - generator dataset which produces samples in some range

data::xrange( 0., 3., 0.4 ) - dataset with 8 double samples
data::xrange<int>((data::begin=9, data::end=15)) - dataset with 6 int samples
data::xrange( 1., 7.5 ) - dataset with 7 double samples
data::xrange( 5, 0, -1 ) - dataset with 5 int samples

h) random - generator dataset with an unlimited number of random samples

data::random( data::distribution = std::normal_distribution<>(5.,2.) )
 - dataset with random double numbers following the specified distribution

data::random(( data::engine = std::minstd_rand(),
               data::distribution = std::discrete_distribution<>(),
               data::seed = 20UL ))
 - dataset with random int numbers following the specified distribution

While all these interfaces can be used on their own to build complex datasets
for various purposes, the primary use case they were developed for is the new
data driven test case interface:

BOOST_DATA_TEST_CASE( test_name, dataset, parameter_names... )

Here are a couple of examples:

int samples1[] = {1,2,3};
BOOST_DATA_TEST_CASE( t1, samples1, sample )
{
   BOOST_CHECKA( foo(sample) > 0 );
}

The above test case is going to be executed 3 times with different sample values.

const char* strs[] = {"qwe", "asd", "zxc", "mkl" };

BOOST_DATA_TEST_CASE( t2, data::xrange(4) ^ strs ^ data::random(),
                      intval, str, dblval )
{
   MyObj obj( dblval, str );

   BOOST_CHECKA( obj.goo() == intval );
}

The above test case will be executed 4 times with different values of the
parameters intval, str, and dblval.

Polymorphic datasets are still being developed, but should appear soon(ish).

III. Auto-registered test unit decorators.

Previously it was not possible and/or convenient to assign attributes to
automatically registered test units. To alleviate this I've introduced the
notion of a test unit decorator. These can be "attached" to any test unit,
similarly to how it is done in other languages.

The following decorators are already implemented:
    label - adds labels to a test unit
    expected_failures - sets expected failures for a test unit
    timeout - sets a timeout for a test unit
    description - sets a test unit description
    depends_on - sets a test unit dependency
    enable_if/disable_if - facilitates a test unit status change
    fixture - assigns a fixture to a test unit

Test unit description is a new test unit attribute, which is reported by the
new list_content command line argument described below. Usage of labels is
covered below as well. enable_if/disable_if allow new, much more flexible test
management: by adding enable_if/disable_if decorators to a test unit you can
conditionally select which test units to run at construction time, based on
compile-time or run-time parameters. And finally we have suite level fixtures,
which are set by attaching a fixture decorator to a test suite (suite level
fixtures are executed once per test suite).

Attaching a decorator is done with BOOST_TEST_DECORATOR. Note that you can use
any of the '+', '-', '*' symbols to attach a decorator (and any number of '*'):

BOOST_TEST_DECORATOR(
+ unittest::fixture<suite_fixture>()
)
BOOST_AUTO_TEST_SUITE( my_suite1 )
...
BOOST_AUTO_TEST_SUITE_END()

BOOST_TEST_DECORATOR(
- unittest::timeout( 100 )
- unittest::expected_failures( 1 )
- unittest::enable_if( 100 < 50 )
)
BOOST_AUTO_TEST_CASE( my_test5 )

BOOST_TEST_DECORATOR(
**** unittest::label( "L3" )
**** unittest::description( "suite description" )
**** unittest::depends_on( "my_suite2/my_test7" )
)
BOOST_AUTO_TEST_CASE( my_test11 )

IV. Support for "run by label" plus more improvements for filtered runs

Previously you only had the ability to filter the test units to execute by
name. Now you can attach labels to test units and thus create collections to
run which are located in arbitrary positions in the test tree. For example, you
can have a special label for performance test cases, attached to all test units
responsible for testing the performance of your various components. Or you
might want to collect exception safety tests, etc. To filter test units by
label you still use the --run_test CLA. Labels are denoted by the @ prefix:

test.exe --run_test=@performance

You can now repeat the --run_test argument to specify multiple conditions:

test.exe --run_test=@performance,@exception_safety --run_test=prod/subsystem2

In addition, run by name/label now recognizes dependencies. So if test unit A
depends on test unit B, and test unit B is disabled or is not part of the
current run, test unit A will not run either.

Finally, you now have the ability to specify "negative" conditions by prefixing
a name or label with '!':

test.exe --run_test=!@performance

This will run all test units which are not labeled with "performance".
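
For instance, combining this with the label decorator from section III (the
test case name here is a made-up illustration):

BOOST_TEST_DECORATOR(
+ unittest::label( "performance" )
)
BOOST_AUTO_TEST_CASE( lookup_speed )
{
    // timing checks for the component under test go here
}

Now test.exe --run_test=@performance picks this test case up wherever it is
located in the test tree.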

V. Support for failure context

In many scenarios it is desirable to specify an additional information to a
failure, but you do not need to see it if run is successful. Thus using
regular print statements is undesirable. This becomes especially important
when you develop common routine, which performs your testing and invoke it
from multiple test units. In this case failure location is no help at all.
To alleviate this problem two new tools are introduced:
  BOOST_TEST_INFO
  BOOST_TEST_CONTEXT

BOOST_TEST_INFO attaches context to the next assertion which is executed.
For example:

BOOST_TEST_INFO( "a=" << a );
BOOST_CHECKA( foo(a) == 0 );

BOOST_TEST_CONTEXT attaches context to all assertions within a scope:

BOOST_TEST_CONTEXT( "Starting test foo from subset " << ss ) {

BOOST_CHECKA( foo(ss, a) == 1 );
BOOST_CHECKA( foo(ss, b) == 2 );

}
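
As a sketch of the "common routine" scenario mentioned above (check_foo and
the test case name are made up for illustration):

void check_foo( int a )    // shared checker invoked from multiple test units
{
    BOOST_TEST_INFO( "a=" << a );   // reported only if the next check fails
    BOOST_CHECKA( foo(a) == 0 );
}

BOOST_AUTO_TEST_CASE( foo_small_inputs )
{
    for( int a = 0; a < 10; ++a )
        check_foo( a );
}

On failure the report now tells you which value of 'a' broke, not just the
failing line inside check_foo.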

VI. Two new command line arguments: list_content, wait_for_debugger

Using list_content you can see the full or partial test tree of your test
module. wait_for_debugger allows you to force the test module to wait until
you attach a debugger.
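
For example (the exact boolean value syntax here is an assumption on my part):

test.exe --list_content
test.exe --wait_for_debugger=yes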

VII. Colored output.

Using the new CLA --color_output you can turn on colored output for error and
information messages.
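
For example (again, the exact boolean value syntax is an assumption):

test.exe --color_output=yes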

VIII. New production testing tools interface

This feature was covered in detail in my presentation at BoostCon 2010. The
gist of it is the ability to use the Boost.Test testing tools interfaces in
production (nothing to do with testing) code. A user-supplied implementation
is plugged in.

IX. A number of smaller improvements:

* Notion of framework shutdown. Allows eliminating some fake memory leak reports
* Added checkpoints at fixture entry points, test case entry point and test case
exit point for auto-registered test cases
* New portable FPE interfaces introduced, and FPE handling is separated from
system error handling. You can detect FPEs even if catch_system_error is false
* Added the ability to erase a registered exception translator
* execution_monitor: new interface vexecute - to be used to monitor nullary
functions with no result values
* test_tree_visitor interface extended to facilitate visitors applying the same
action to all test units
* execution_monitor uses typeid to report the "real" exception type if possible
* New ability to redirect the leak report into a file

Thank you for your time,
Gennadiy

