From: John Maddock (jm_at_[hidden])
Date: 2002-10-08 06:27:51
> > 1) Should we use boost.test?
> > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> >
> > As it stands IMO no: the tests are both compile time and run time tests,
> > and yes this really does make a difference (gory details and compiler
> > bugs omitted here...).
>
> Not obvious to me why, without the gories.
OK, here are some of the gory details: there are situations where using
::boost::some_trait<T>::value *not* as an integral constant expression
succeeds, but either:
1) The code doesn't compile when the trait is used as an integral constant
expression - for example, some is_convertible cases with Borland C++ 5.51.
2) The trait has a garbage value when used as an integral constant
expression - for example, this occurs if enums rather than static const
integrals are used with Borland C++ (all versions, I think). Typically this
would show itself as an error if someone forgot to use BOOST_STATIC_CONSTANT
and used an enum instead, on the grounds that "enums are always safe".
Given that these traits are almost always used in integral constant
expressions in "real life", it's important to catch these possible errors;
using BOOST_CHECK wouldn't do that, as it doesn't use the value within an
integral constant expression.
> > However we can fairly easily construct our own tests on
> > top of BOOST_ERROR or BOOST_CHECK_MESSAGE. Gennadiy: would you be
> > interested in:
>
> Are those Boost.Test macros?
Yep.
> > (note that this is not as simple as just checking is_same<>::value,
> > because we can't rely on that template working without partial
> > specialisation support....
>
> Really?! That surprises me. For which cases does it fail?
That was a really bad choice of name, and you are right about is_same - it's
probably as good as it gets, including for broken compilers. We do need
extended error info, though, so that we know why the test failed if it did.
Outputting something similar to the current tests would be good; currently
we get:
checking type of ::boost::add_const<UDT const>::type...failed
evaluating: type_checker<UDT const,::boost::add_const<UDT const>::type>
expected: type_checker<UDT const,UDT const>
but got: type_checker<const UDT,UDT>
Note that we can't just output typeid(result_type).name() because that
drops reference and cv-qualifiers, and these are often the cause of the
failure.
> I don't mind expected failures as long as they don't cause
> compile-time errors, and as long as the output very clearly denotes
> which cases are failing. However, your approach sounds cleaner. In
> general, "expected failure" should be a technique reserved for things
> which are actually /supposed/ to fail, on a perfectly conforming
> compiler (e.g. BOOST_STATIC_ASSERT() tests). Otherwise, things just
> become way too confusing.
Agreed.
I also notice that the current "expected failures" aren't showing up in the
new regression test output, another reason for getting rid of them IMO.
> The refactoring affected which is the preferred header to find a trait
> in. Anything else?
I hope not.
> > In particular the compiler requirements column is now almost
> > meaningless. The thing is we now have so many fixes checked in that
> > most of the traits work most of the time with most compilers. I
> > guess we could remove the PCD legends and replace with comments
> > containing broad generalisations.
>
> I think that the category "requires compiler support" is still
> useful. However, there are cases where a trait is not an atom, but
> composed of other traits which need compiler support. In those cases,
> it's very helpful to know that you just need to
> specialize the atomic traits in order to make the others work.
Good point, but doesn't the dependency tree vary depending upon the
compiler?
> And then there's the issue of traits such as is_POD for which we (I
> hope!) provide specializations for some of the built-ins. I think
> it's important for the user to be able to know, "may give a false
> negative without compiler support; will never give a false
> positive". That information can make the trait useful for
> optimization, even if we can't always count on it.
>
> > We could even generate special
> > compiler status tables showing the traits that may fail for each
> > compiler, but again the problem is one of "too much information",
> > it's not clear to me how we can handle this without telling people
> > to "try it and see".
>
> I think, actually, what's important to me is to know, for a boolean
> trait, whether it gives false positives or false negatives when it
> fails, and for which categories of types it may fail on.
>
> For a type-valued trait, I'd like to know something similar: in what
> way does it fail when it fails? Also what warnings it's useful for
> suppressing. For example:
>
> remove_reference<T>::type - On compilers that don't support
> partial specialization, returns T. Otherwise, if T == U&, returns
> U.
>
>
> add_reference<T>::type - returns T& if T is not a reference
> type. Use instead of T& for compilers which don't implement core
> DR #whatever which allows silent reference collapse.
All good points, and a lot of work!
John Maddock
http://ourworld.compuserve.com/homepages/john_maddock/index.htm
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk