From: Gennadiy Rozental (gennadiy.rozental_at_[hidden])
Date: 2002-10-03 23:02:14
> This looks like a good start syntactically. The tests have to give output
> and report failure for each test which doesn't meet its expectations (where
> in the above case, all known MSVC versions are expected to fail the test,
> so if any pass, there should be diagnostic output and a failure should be
> reported).
>
> However, I'm a little nervous about having expected failure tests for
> certain compiler versions unless we can get information about what actually
> works and what doesn't into the docs or the status tables. I don't think
> it's acceptable to say "this passes the test, but if you want to know what
> actually works you need to look at our regression test source".
OK, here is another proposal. It is more complex, but I think it will work
better in the long term.
Let's say we introduce the notion of a 'results pattern file' that stores the
expected result of every assertion in every test case for every compiler. The
exact format is not important; it would be a Boost.Test implementation detail.
Then let's introduce a method

void results_pattern( string const& file_name )

to the interface of class unit_test_result (a usage sketch follows the value
descriptions below). Finally, let's introduce a parameter recognized by the
framework, named

--results_pattern <value>

where value = check|save|report_short|report_detailed
'check' - the default value. It makes the test program validate the test
results against the results pattern file (if one is supplied).
'save' - makes the test program save the results pattern for the current
compiler into the file (if one is supplied).
'report_short' - generates a test results report based on the information in
the supplied file (it causes an error if none is supplied), in a human-readable
format (or XML, but "XML output" is a completely separate topic that I will
discuss later). This report contains test-case-based information like this:
Compiler1:
    test_is_same ..... pass
    test_is_array .... fail
    ...
Compiler2:
    test_is_same ..... fail
    test_is_array .... pass
    ...
'report_detailed' - does the same as report_short, but prints an
assertion-based report.
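
For illustration, a test module might use the proposed interface roughly like
this. This is a sketch only: results_pattern() is the method proposed above and
does not exist yet, the singleton-style accessor unit_test_result::instance(),
the pattern file name and the program name are my assumptions, and the command
lines at the end use the proposed --results_pattern parameter.

#include <boost/test/unit_test.hpp>

using namespace boost::unit_test_framework;

test_suite*
init_unit_test_suite( int, char* [] )
{
    // proposed interface: associate this module with its results pattern file
    unit_test_result::instance().results_pattern( "type_traits.pattern" );

    test_suite* test = new test_suite( "type_traits tests" );
    // ... add test cases as usual ...
    return test;
}

// Then, for example (proposed parameter, hypothetical program name):
//   type_traits_test --results_pattern save    <= record this compiler's results
//   type_traits_test --results_pattern check   <= validate against the pattern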
> It also seems to me that we're likely to find that some compilers just
> can't compile some of the tests. When that happens, I guess we need to
> factor the tests into a separate file.
Now about failing compilation. Unfortunately there is no way to put an #ifdef
inside a macro definition. Splitting the test program into several files could
also be very inconvenient, especially when one compiler fails one assertion in
one test case and two in another, while a different compiler fails a different
set of assertions. There is one way, though, to solve it.
Let's say we introduce a unique numeric configuration id for EVERY
configuration we support. By configuration I mean a specific compiler, its
version, and maybe something else. The best place for these would be the
Boost.Config headers, though I could place them in the Boost.Test config
instead. For example (all names below are tentative):
#define BOOST_CONFIG_GCC_291 1
#define BOOST_CONFIG_GCC_295 2
#define BOOST_CONFIG_GCC_301 3
#define BOOST_CONFIG_MSVC_65 4
#define BOOST_CONFIG_MSVC_70 5
#define BOOST_CONFIG_MSVC_71 6
....
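
The framework would also need to know which of these ids describes the current
translation unit. One possible way, a sketch only, is to map the compiler
detection macros that Boost.Config already relies on onto the tentative ids
above; BOOST_CURRENT_CONFIG is a made-up name for illustration:

#if defined(__GNUC__) && __GNUC__ == 2 && __GNUC_MINOR__ == 95
#  define BOOST_CURRENT_CONFIG BOOST_CONFIG_GCC_295
#elif defined(__GNUC__) && __GNUC__ == 3 && __GNUC_MINOR__ == 0
#  define BOOST_CURRENT_CONFIG BOOST_CONFIG_GCC_301
#elif defined(_MSC_VER) && _MSC_VER == 1300
#  define BOOST_CURRENT_CONFIG BOOST_CONFIG_MSVC_70
#elif defined(_MSC_VER) && _MSC_VER == 1310
#  define BOOST_CURRENT_CONFIG BOOST_CONFIG_MSVC_71
#endif
// ... and so on for the rest of the supported configurations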
Now I can introduce a tool into Boost.Test and define it like this (pseudo
code):

BOOST_CHECK_EXCLUDE_COMPILER( predicate, exclusion_list ) =
    if( current config id is in exclusion_list )  <= I think BOOST_PP should allow me to do this
        BOOST_ERROR( #predicate " does not compile" )
    else
        BOOST_CHECK( predicate )
Using this tool I could specify the configurations that do not compile the
code under test. For those configurations an error will be generated,
signaling that the code does not compile.
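
For illustration, here is roughly what the macro would have to be equivalent
to at a single use site, written out by hand with the tentative ids from above
(a sketch only; a real implementation would hide this behind BOOST_PP
machinery, and is_array is just an example predicate):

#if BOOST_CURRENT_CONFIG == BOOST_CONFIG_MSVC_65 \
 || BOOST_CURRENT_CONFIG == BOOST_CONFIG_MSVC_70
    // excluded configuration: report the failure without compiling the expression
    BOOST_ERROR( "boost::is_array<int[3]>::value does not compile" );
#else
    // every other configuration: perform the normal check
    BOOST_CHECK( ::boost::is_array<int[3]>::value );
#endif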
BTW, these unique ids would also be helpful for the first part of my proposal.
What do you think?
Gennadiy.