From: David Abrahams (dave_at_[hidden])
Date: 2002-10-04 07:47:01
From: "Gennadiy Rozental" <gennadiy.rozental_at_[hidden]>
> > However, I'm a little nervous about having expected failure tests for
> > certain compiler versions unless we can get information about what
> > works and what doesn't into the docs or the status tables. I don't think
> > it's acceptable to say "this passes the test, but if you want to know what
> > actually works you need to look at our regression test source".
> Ok, here is another proposition. More complex, but I think it will work
> better in the long term.
> Let's say we introduce the notion of a 'results pattern file' that will
> store the information about the expected result of every assertion in
> every test case for every compiler.
Expected results are an important feature of any testing system, not just
for assertions but also for printed output.
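Checking printed output against a stored pattern can be as simple as this
(a minimal sketch, independent of Boost.Test; compare_pattern and the
pattern file name are hypothetical):

#include <fstream>
#include <sstream>
#include <string>
#include <iostream>

// Hypothetical helper: true if the produced output matches the stored
// pattern file byte for byte.
bool compare_pattern( std::string const& output,
                      std::string const& pattern_file )
{
    std::ifstream in( pattern_file.c_str() );
    std::ostringstream pattern;
    pattern << in.rdbuf();              // slurp the whole pattern file
    return in && output == pattern.str();
}

int main()
{
    std::ostringstream out;
    out << "is_pointer<int*> : true\n"; // the output under test
    std::cout << ( compare_pattern( out.str(), "expected_output.pat" )
                       ? "match" : "mismatch" ) << '\n';
    return 0;
}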
> The exact format is not important; it will be a Boost.Test implementation
> detail. Then let's introduce a method
> void results_pattern( string const& file_name )
> in the unit_test_result class interface. Finally, let's introduce a
> parameter recognized by the framework, named
> --results_pattern <value>
> where value = check|save|report_short|report_detailed
> 'check' - the default value. It will make the test program validate the
> testing results against the results pattern file (if one is supplied).
> 'save' - will make the test program save the results pattern for the
> current compiler into the file (if one is supplied).
> 'report_short' - will generate the test results report, based on the
> information in the file supplied (causing an error if one is not
> supplied), in a human-readable (or XML, but "XML output" is a completely
> separate topic that I will discuss later) format. This report contains
> test-case-based information like
> test_is_same ......... pass
> test_is_array ........ fail
> test_is_same ......... fail
> test_is_array ........ pass
> 'report_detailed' - will do the same as report_short but will print an
> assertion-based report.
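To make that concrete, the proposed pieces might look like this (a rough
sketch; only results_pattern and the --results_pattern values come from
the proposal above, the rest is guesswork):

#include <string>

// Sketch of the proposed addition to the unit_test_result interface.
class unit_test_result {
public:
    // Names the results pattern file to validate against, save to, or
    // report from, depending on the --results_pattern mode.
    void results_pattern( std::string const& file_name );
    // ... rest of the existing interface ...
};

// Hypothetical invocations of a test program built on top of this:
//   my_test --results_pattern check            validate against the pattern
//   my_test --results_pattern save             record this compiler's results
//   my_test --results_pattern report_short     per-test-case pass/fail report
//   my_test --results_pattern report_detailed  per-assertion report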
I like the above very much. However, it's going to be difficult to use for
some cases, like those in the type traits library, which run a suite of
checks with a variety of types. The problem is that on some compilers, a
given trait will work fine, e.g., for everything but pointers to
const-volatile-qualified member functions. That doesn't make the trait
completely broken on that platform -- not at all. The problem is that type
traits are SO useful for getting around the limitations of broken compilers
that we don't want to just say "this one works"/"this one doesn't".
Instead, it's important to tease apart just what the behavior of each one is.
You can find another similar example in call_traits. Part of the problem is
with the docs: it's nearly impossible to understand what types it's
actually generating for any given case. When you add compiler limitations
to the mix, it gets to be really unpredictable.
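For concreteness, here is the kind of split result I mean (a sketch; the
struct X is illustrative, and whether the second check even compiles
depends on the compiler):

#include <boost/type_traits.hpp>
#include <boost/static_assert.hpp>

struct X { void f() const volatile; };

// Passes nearly everywhere: an ordinary pointer to member function.
BOOST_STATIC_ASSERT(
    boost::is_member_function_pointer<void (X::*)()>::value );

// On some compilers only this corner case fails: a pointer to a
// const-volatile-qualified member function. The trait is still useful
// for everything else, so "works"/"doesn't work" is too coarse a verdict.
BOOST_STATIC_ASSERT(
    boost::is_member_function_pointer<void (X::*)() const volatile>::value );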
> > It also seems to me that we're likely to find that some compilers just
> > can't compile some of the tests. When that happens, I guess we need to
> > factor the tests into a separate file.
> Now about failing compilation. Unfortunately there is no way to put an
> #ifdef inside a macro definition. Separating the test program into several
> files would also be very inconvenient, especially if you have one assertion
> in one test case and two in another failing for one compiler, and a
> different set of assertions failing for another. There is one way, though,
> to solve it.
> Let's say we introduce a unique numeric configuration id for EVERY
> configuration we support. By configuration I mean a specific compiler, its
> version, and maybe something else. The best place for these would be the
> boost::config headers, though I could place this info in the Boost.Test
> config. For example (all names below are tentative):
> #define BOOST_CONFIG_GCC_291 1
> #define BOOST_CONFIG_GCC_295 2
> #define BOOST_CONFIG_GCC_301 3
> #define BOOST_CONFIG_MSVC_65 4
> #define BOOST_CONFIG_MSVC_70 5
> #define BOOST_CONFIG_MSVC_71 6
> Now I can introduce a tool into Boost.Test and define it like this
> (pseudocode):
> BOOST_CHECK_EXCLUDE_COMPILER( predicate, exclusion_list ) =
>     if( current config id is in exclusion_list ) <= I think BOOST_PP
>       should allow me to do this
>         BOOST_ERROR( #predicate " does not compile" )
>     else
>         BOOST_CHECK( predicate )
> Using this tool I could specify the configurations that do not compile
> the code under test. For those configurations an error will be generated
> signaling that the code does not compile.
> BTW these unique IDs would also be helpful for the first part of my
> proposal.
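For the record, the membership test does look expressible with BOOST_PP,
assuming small integer config ids; a sketch (BOOST_CURRENT_CONFIG is
hypothetical, and a predicate containing an unparenthesized comma would
defeat the macro):

#include <boost/preprocessor/seq/fold_left.hpp>
#include <boost/preprocessor/comparison/equal.hpp>
#include <boost/preprocessor/logical/or.hpp>
#include <boost/preprocessor/control/iif.hpp>
#include <boost/test/test_tools.hpp>    // BOOST_CHECK, BOOST_ERROR

// Config ids as proposed above (normally they would live in boost::config):
#define BOOST_CONFIG_GCC_291 1
#define BOOST_CONFIG_MSVC_65 4

// Hypothetical: the id of the configuration currently being compiled.
#define BOOST_CURRENT_CONFIG BOOST_CONFIG_MSVC_65

// Folds the exclusion sequence down to 0 or 1: is the current id in it?
#define IN_LIST_OP( s, state, id ) \
    BOOST_PP_OR( state, BOOST_PP_EQUAL( BOOST_CURRENT_CONFIG, id ) )
#define IS_EXCLUDED( seq ) BOOST_PP_SEQ_FOLD_LEFT( IN_LIST_OP, 0, seq )

// If excluded, the predicate's tokens are discarded by the preprocessor
// and never reach the compiler; BOOST_ERROR is emitted instead.
#define BOOST_CHECK_EXCLUDE_COMPILER( predicate, exclusion_seq ) \
    BOOST_PP_IIF( IS_EXCLUDED( exclusion_seq ),                  \
        BOOST_ERROR( #predicate " does not compile" ),           \
        BOOST_CHECK( predicate ) )

// Usage:
//   BOOST_CHECK_EXCLUDE_COMPILER( boost::is_pointer<int*>::value,
//       (BOOST_CONFIG_GCC_291)(BOOST_CONFIG_MSVC_65) )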
I'm not as fond of this part of your proposal. One problem is that if
something happens which causes a previously-non-compiling test to start
compiling, nothing will tell you because nothing ever tries to compile it.
I think we'd better deal with this differently, and we'll need to take a
long-term view. Probably some kind of script-driven system will be most
appropriate... but let's see if we can deal with the other issues first.
David Abrahams * Boost Consulting
dave_at_[hidden] * http://www.boost-consulting.com