
From: Gennadiy Rozental (gennadiy.rozental_at_[hidden])
Date: 2002-10-06 15:55:37


> > As it stands IMO no: the tests are both compile time and run time tests, and
> > yes this really does make a difference (gory details and compiler bugs
> > omitted here...).
>
> Not obvious to me why, without the gories.
It is not clear to me either, especially since I do not understand what you
mean by compile time tests, given that you are not using BOOST_STATIC_ASSERT.
In general, mixing compile time tests ('pass' means the compiler is able to
compile the code) and runtime tests in the same module does not look like a
good idea, because if even one compile time test fails (the compiler fails to
compile), all the runtime tests fail automatically along with it.
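To illustrate the distinction I have in mind (just a sketch; the trait used
and the exact Boost headers are only an example and may differ between Boost
versions):

    #include <boost/static_assert.hpp>
    #include <boost/type_traits.hpp>
    #include <boost/test/test_tools.hpp>

    void some_trait_test()
    {
        // compile time check: a failure here stops compilation of the whole
        // module and takes every runtime check below down with it
        BOOST_STATIC_ASSERT( boost::is_array<int[2]>::value );

        // runtime check: a failure here is reported by the framework and
        // the remaining checks still execute
        BOOST_CHECK_EQUAL( (int)boost::is_array<int[2]>::value, 1 );
    }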

>
> > However we can fairly easily construct our own tests on
> > top of BOOST_ERROR or BOOST_CHECK_MESSAGE. Gennadiy: would you be
> > interested in:
>
> Are those Boost.Test macros?
Yes, those are Boost.Test macros.

>
> > BOOST_CHECK_INTEGRAL_CONSTANT(value, expected), performs both compile time
> > and runtime checks, and outputs extended error info on fail.

Again, what does "compile time check" mean here? How is it different from

BOOST_CHECK_EQUAL( value, expected )?
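If the intention is to check the value both at compile time and at run time,
I could imagine something roughly like the sketch below (purely illustrative;
the names are made up and this is not necessarily how your macro works):

    #include <boost/test/test_tools.hpp>

    // hypothetical probe: instantiating it forces 'value' to be a genuine
    // compile time constant, so a non-constant argument breaks the build
    template <int Value>
    struct integral_constant_probe { static const int value = Value; };

    #define CHECK_INTEGRAL_CONSTANT_SKETCH( value, expected )              \
        BOOST_CHECK_EQUAL( (int)integral_constant_probe< (value) >::value, \
                           (int)(expected) )

    // usage:
    // CHECK_INTEGRAL_CONSTANT_SKETCH( boost::is_array<int[2]>::value, 1 );

The BOOST_CHECK_EQUAL part still reports a mismatch at run time with both
values printed.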

> >
> > and,
> >
> > BOOST_CHECK_TYPEID(type, expected), verifies that the two types are the same
>
> Uhhh if I understand your naming, then not by itself, it doesn't. A
> conforming compiler will strip lots of type information out of
> typeid(T).
>
> typeid(T&) == typeid(T) == typeid(T const) == typeid(T const&)
>
> We could use my extended typeid implementation from Boost.Python, but
> of course that relies on type traits ;-).

I would also try not to rely on RTTI. What I would propose instead is
BOOST_SAME_TYPE( T1, T2 )
It would use boost::is_same for conforming compilers (plus print the type
names via RTTI when the types do not match). For nonconforming compilers it
would fall back on other methods (such as the typeid-based one you are
using). But it would need to be checked that it works as expected in all
cases.
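Something along these lines for the conforming branch (just a sketch; the
macro name is made up, and whether BOOST_CHECK_MESSAGE accepts the message in
this form may depend on the Boost.Test version):

    #include <string>
    #include <typeinfo>
    #include <boost/type_traits.hpp>
    #include <boost/test/test_tools.hpp>

    // the comparison itself is done by boost::is_same at compile time;
    // RTTI is used only to name the types in the failure message
    #define SAME_TYPE_SKETCH( T1, T2 )                                     \
        BOOST_CHECK_MESSAGE( (boost::is_same< T1, T2 >::value),            \
            std::string( "types differ: " ) + typeid(T1).name()            \
                + " != " + typeid(T2).name() )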

>
> > (note that this is not as simple as just checking is_same<>::value,
> > because we can't rely on that template working without partial
> > specialization support....
>
> Really?! That surprises me. For which cases does it fail?
It looks like the nonconforming (non-MSVC) version may have a problem with the void type.

>
> > Probably the thing to do is to refactor type_traits_tests.hpp to provide
> > these and then move them to boost.test later if Gennadiy is happy?
> >
> > 2) What should we do with tests that fail?
> > ~~~~~~~~~~~~~~~~~~~~~~~~~~~
> >
> > Historically we have allowed an expected number of failures, but
> > without them all the tests would have failed on all compilers, so
> > even though most of the tests pass for most compilers, different
> > compilers fail in different cases.
> >
> > We now have so many more fixes checked in that there are many fewer
> > failures, however we still need to deal with those that do occur,
> > particularly for the less common compilers that have had less effort
> > put into workarounds than say Visual C++.
> >
> > Normally in cases like this, one would have one file per test, and
> > just let them fail if they are going to fail. However the problem
> > here is the sheer quantity of data - we could easily put together a
> > code generator that produces one file per test, but there would be
> > about 1500 or so of them. I don't think any of the regression
> > testers would welcome putting that quantity of tests into the main
> > regression tests.

If you want to use a code generator, you could generate one file with many
test cases, unless of course you are uncomfortable with using the unit test
framework.
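A sketch of what such a generated file could look like (the test function
names are just placeholders; the exact headers and the namespace differ
between Boost.Test versions):

    #include <boost/test/unit_test_suite.hpp>
    #include <boost/test/test_tools.hpp>
    #include <boost/type_traits.hpp>

    using namespace boost::unit_test_framework;

    // generated test functions, one per trait check (placeholders)
    void test_is_array_int() { BOOST_CHECK( !boost::is_array<int>::value ); }
    void test_is_array_arr() { BOOST_CHECK( boost::is_array<int[2]>::value ); }
    void test_is_void_void() { BOOST_CHECK( boost::is_void<void>::value ); }

    test_suite* init_unit_test_suite( int, char* [] )
    {
        test_suite* test = BOOST_TEST_SUITE( "type_traits test suite" );

        // the generator emits one add() per case, so a failing case is
        // reported individually instead of hiding all the others
        test->add( BOOST_TEST_CASE( &test_is_array_int ) );
        test->add( BOOST_TEST_CASE( &test_is_array_arr ) );
        test->add( BOOST_TEST_CASE( &test_is_void_void ) );

        return test;
    }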

> > I guess we could try and refactor as:
> >
> > One test file per trait, for "easy" tests that are always expected to pass.
> > One or more files per trait for "difficult" tests that often fail.

Unless we have problems with compilation, I am afraid that using several
files for the same test (like test_is_array_simple, test_is_array_advanced1,
test_is_array_advanced2, ...) is more of a burden than a help.

> >
> > And then just let the tests fail if they fail (i.e. no expected failures).
> >
> > Thoughts?

You did not express an opinion on my proposal of a result pattern file. I
think it would let you manage expected failures transparently without
needing to change the structure of your tests.

> I don't mind expected failures as long as they don't cause
> compile-time errors, and as long as the output very clearly denotes
> which cases are failing. However, your approach sounds cleaner. In
> general, "expected failure" should be a technique reserved for things
> which are actually /supposed/ to fail, on a perfectly conforming
> compiler (e.g. BOOST_STATIC_ASSERT() tests). Otherwise, things just
> become way too confusing.

1. If things are supposed to fail *always*, check for the reverse condition
instead. For example, if an operation is supposed to throw an exception, you
could use BOOST_CHECK_THROW to make sure that the exception really is thrown;
see the sketch below this list.
2. Could you clarify how BOOST_STATIC_ASSERT tests would allow you to check a
condition that is supposed to fail? I always thought it requires that some
compile time condition is always true.
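The BOOST_CHECK_THROW idiom looks roughly like this (the function and the
exception type are just placeholders for the sake of the example):

    #include <stdexcept>
    #include <boost/test/test_tools.hpp>

    // placeholder operation that is documented to throw on bad input
    int parse_positive( int value )
    {
        if( value <= 0 )
            throw std::invalid_argument( "value must be positive" );
        return value;
    }

    void check_failure_is_reported()
    {
        // the check passes exactly when the expected exception is thrown,
        // so "supposed to fail" is expressed as a positive runtime test
        BOOST_CHECK_THROW( parse_positive( -1 ), std::invalid_argument );
    }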

Gennadiy.

