From: John Maddock (jm_at_[hidden])
Date: 2002-10-08 06:27:12
> This is not clear to me either, especially since I do not understand what
> you mean by compile time tests, since you are not using
> In general, mixing compile time tests ('pass' means the compiler is able to
> compile the code) and runtime tests in the same module does not look like a
> good idea, because if even one compile time test fails (the compiler
> fails to compile), all the runtime ones fail automatically.
BOOST_STATIC_ASSERT would cause the whole file to fail to compile, and it
doesn't give the greatest error messages. The current tests combine use of the
values within integral constant expressions with runtime output. See my other
message for some of the gory details.
> Again, what does "compile time check" mean? How is it different from
> BOOST_CHECK_EQUAL( value, expected )?
BOOST_CHECK_EQUAL doesn't use the value in an integral constant expression.
> If you want to use a code generator you could generate one file with many
> test cases, unless of course you are uncomfortable with use of unit test
> > > I guess we could try and refactor as:
> > >
> > > One test file per trait, for "easy" tests that are always expected to
> > > One or more files per trait for "difficult" tests that often fail.
> Unless we have problems with compilation, I'm afraid that using several
> files for the same test (like test_is_array_simple,
> test_is_array_advanced2 ...) is more of a burden than a help.
Yes, we have problems with compilation in many of the cases. The failures in
general aren't so numerous (or else they fall into particular categories)
that we will end up with too many separate files, IMO. Using one big file
(even with the unit test framework) would be a step backwards IMO; we need to
get finer-grained information back to the user so that they know which
traits may have problems with their compiler.
> You did not express an opinion on my proposal for a result pattern file. I
> think it will allow you to transparently manage expected failures without
> changing the structure of your tests.
I'm saying that the concept of expected failure is flawed (and yes, I know I
put them in there in the first place!); basically, using expected failures in
this way is just rigging the tests so that they succeed. It doesn't get
the information back to the end user, IMO.
> 1. If things are supposed to fail *always*, check for the reverse condition
> instead. For example, if an operation is supposed to throw an exception you
> could use BOOST_CHECK_THROW to make sure that an exception is thrown.
> 2. Could you clarify how BOOST_STATIC_ASSERT tests would allow you to test a
> condition that is supposed to fail? I always thought it allows you to require
> that compile time conditions always be true.
We have some things that can fail "safely", and/or which can only be made to
work with the help of compiler extensions: is_POD for example. I think Dave
is right here, these are the only ones where we should tolerate failure.
Perhaps we could use BOOST_MESSAGE and/or BOOST_WARN_MESSAGE here.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk