From: David Abrahams (dave_at_[hidden])
Date: 2002-10-03 16:53:32
From: "Rozental, Gennadiy" <gennadiy.rozental_at_[hidden]>
> > Now that Aleksey has refactored type_traits, it would be
> > great if someone
> > could refactor the tests in such a way that one could determine the
> > behavior of individual traits on a given platform. Volunteers?
> Do you mean that we need a way to specify, for every assertion, whether it
> is expected to pass or not?
> I had in mind to add something to support this in Boost.Test. What about
> this interface:
> BOOST_<some name - please propose>( predicate, tuple of conditions under
> which it should fail ).
> For example:
> BOOST_...( boost::is_same<T1,T2>::value, ( BOOST_MSVC <
> 0x500 ))
> I am open to any proposition. Once we decide the interface I can implement
> it in Boost.Test and apply it to the type_traits unit tests.
This looks like a good start syntactically. The tests have to give output
and report failure for each test which doesn't meet its expectations (where,
in the above case, all known MSVC versions are expected to fail the test,
so if any pass, there should be diagnostic output and a failure should be
reported).
However, I'm a little nervous about having expected failure tests for
certain compiler versions unless we can get information about what actually
works and what doesn't into the docs or the status tables. I don't think
it's acceptable to say "this passes the test, but if you want to know what
actually works you need to look at our regression test source".
It also seems to me that we're likely to find that some compilers just
can't compile some of the tests at all. When that happens, I guess we need
to factor those tests out into a separate file.
David Abrahams * Boost Consulting
dave_at_[hidden] * http://www.boost-consulting.com
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk