
From: David Abrahams (dave_at_[hidden])
Date: 2002-10-06 08:58:14


"John Maddock" <jm_at_[hidden]> writes:

> Sorry for being a little late to jump in here. I think I will have some time
> next week to do some refactoring of the tests; however, there are a number of
> unresolved issues:
>
> 1) Should we use boost.test?
> ~~~~~~~~~~~~~~~~~~~
>
> As it stands IMO no: the tests are both compile time and run time tests, and
> yes this really does make a difference (gory details and compiler bugs
> omitted here...).

Not obvious to me why, without the gories.

> However we can fairly easily construct our own tests on
> top of BOOST_ERROR or BOOST_CHECK_MESSAGE. Gennadiy: would you be
> interested in:

Are those Boost.Test macros?

> BOOST_CHECK_INTEGRAL_CONSTANT(value, expected), performs both compile time
> and runtime checks, and outputs extended error info on fail.
>
> and,
>
> BOOST_CHECK_TYPEID(type, expected), verifies that the two types are the same

Uhhh if I understand your naming, then not by itself, it doesn't. A
conforming compiler will strip lots of type information out of
typeid(T).

typeid(T&) == typeid(T) == typeid(T const) == typeid(T const&)
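
To see the stripping in action (just standard C++, purely illustrative):

    #include <cassert>
    #include <typeinfo>

    // On a conforming compiler every one of these assertions holds,
    // because typeid strips references and top-level cv-qualifiers.
    int main()
    {
        assert(typeid(int&) == typeid(int));
        assert(typeid(int const) == typeid(int));
        assert(typeid(int const&) == typeid(int));
        return 0;
    }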

We could use my extended typeid implementation from Boost.Python, but
of course that relies on type traits ;-).

> (note that this is not as simple as just checking is_same<>::value,
> because we can't rely on that template working without partial
> specialisation support....

Really?! That surprises me. For which cases does it fail?

> Probably the thing to do is to refactor type_traits_tests.hpp to provide
> these and then move them to boost.test later if Gennadiy is happy?
>
> 2) What should we do with tests that fail?
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> Historically we have allowed an expected number of failures; without
> that allowance the tests would have failed on every compiler, because
> even though most of the tests pass on most compilers, different
> compilers fail in different cases.
>
> We now have so many more fixes checked in that there are many fewer
> failures; however, we still need to deal with those that do occur,
> particularly for the less common compilers that have had less effort
> put into workarounds than, say, Visual C++.
>
> Normally in cases like this, one would have one file per test, and
> just let them fail if they are going to fail. However, the problem
> here is the sheer quantity of data - we could easily put together a
> code generator that produces one file per test, but there would be
> about 1500 or so of them. I don't think any of the regression
> testers would welcome putting that quantity of tests into the main
> regression tests.

Agreed.

> We could run them separately, I suppose, but I
> would be concerned that they would just be forgotten about. To put
> this in perspective, boost.config has had separate tests (two per
> macro) from the outset, and these can tell you things that the
> consolidated tests can't (for example if a defect macro is defined
> when it need not be); however, I suspect that I'm the only person to
> have ever run these :-(

Are you sure? I keep seeing posts from people who try to run the
configuration tests.

> I guess we could try and refactor as:
>
> One test file per trait, for "easy" tests that are always expected to pass.
> One or more files per trait for "difficult" tests that often fail.
>
> And then just let the tests fail if they fail (i.e. no expected failures).
>
> Thoughts?

I don't mind expected failures as long as they don't cause
compile-time errors, and as long as the output very clearly denotes
which cases are failing. However, your approach sounds cleaner. In
general, "expected failure" should be a technique reserved for things
which are actually /supposed/ to fail, on a perfectly conforming
compiler (e.g. BOOST_STATIC_ASSERT() tests). Otherwise, things just
become way too confusing.
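
For instance, a deliberate compile-fail test along these lines (just a
sketch, not an actual test from the suite) is /supposed/ to be
rejected, so marking it as an expected failure is genuinely meaningful:

    // Sketch of a compile-fail test: a conforming compiler must
    // reject this translation unit, so "failure" is the expected and
    // correct outcome.
    #include <boost/static_assert.hpp>
    #include <boost/type_traits.hpp>

    void should_not_compile()
    {
        // int is not a reference type, so this assertion must fire
        BOOST_STATIC_ASSERT(::boost::is_reference<int>::value);
    }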

> 3) The documentation:
> ~~~~~~~~~~~~~~~
>
> It's clear that this is now way out of date with the implementation,
> especially after Aleksey's refactoring.

The refactoring changed which header is the preferred place to find a
given trait. Anything else?

> In particular the compiler requirements column is now almost
> meaningless. The thing is we now have so many fixes checked in that
> most of the traits work most of the time with most compilers. I
> guess we could remove the PCD legends and replace with comments
> containing broad generalisations.

I think that the category "requires compiler support" is still
useful. However, there are cases where a trait is not an atom, but is
composed of other traits which need compiler support. In those cases,
it's very helpful to know that you just need to specialize the atomic
traits in order to make the others work.
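
For example, something along these lines ought to be enough (a rough
sketch; the exact specialization mechanism here is my assumption, not
necessarily the documented interface):

    #include <boost/config.hpp>
    #include <boost/type_traits.hpp>

    struct my_pod { int x; int y; };

    // Sketch: tell the library directly that my_pod is a POD, so that
    // traits composed from is_POD can give the right answer even on a
    // compiler without the underlying support.
    namespace boost {
        template <> struct is_POD<my_pod>
        { BOOST_STATIC_CONSTANT(bool, value = true); };
    }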

And then there's the issue of traits such as is_POD for which we (I
hope!) provide specializations for some of the built-ins. I think
it's important for the user to be able to know, "may give a false
negative without compiler support; will never give a false
positive". That information can makes the trait useful for
optimization, even if we can't always count on it.
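
That guarantee is what makes the usual dispatch-to-memcpy optimization
safe. Roughly (illustrative only, not real library code):

    #include <cstring>
    #include <cstddef>
    #include <boost/type_traits.hpp>

    // A false negative from is_POD just means we take the safe
    // element-wise path; a false positive would turn the memcpy into a
    // correctness bug, which is why "never a false positive" matters.
    template <class T>
    void copy_array(const T* from, T* to, std::size_t n)
    {
        if (::boost::is_POD<T>::value)
            std::memcpy(to, from, n * sizeof(T));
        else
            for (std::size_t i = 0; i != n; ++i)
                to[i] = from[i];
    }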

> We could even generate special
> compiler status tables showing the traits that may fail for each
> compiler, but again the problem is one of "too much information";
> it's not clear to me how we can handle this without telling people
> to "try it and see".

I think, actually, what's important to me is to know, for a boolean
trait, whether it gives false positives or false negatives when it
fails, and which categories of types it may fail on.

For a type-valued trait, I'd like to know something similar: in what
way does it fail when it fails? Also what warnings it's useful for
suppressing. For example:

    remove_reference<T>::type - On compilers that don't support
    partial specialization, returns T. Otherwise, if T == U&, returns
    U.

    add_reference<T>::type - returns T& if T is not a reference
    type. Use it instead of T& on compilers which don't implement
    core DR #whatever, which allows silent reference collapse.
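
For instance (hypothetical usage, not taken from any particular
library component):

    #include <boost/type_traits.hpp>

    // add_reference<T>::type is T& for non-reference T and just T when
    // T is already a reference, so the typedef below is safe even when
    // the template is instantiated with a reference type.
    template <class T>
    struct holder
    {
        typedef typename boost::add_reference<T>::type reference;
    };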

-Dave

