From: John Maddock (jm_at_[hidden])
Date: 2002-10-06 06:23:28


Sorry for being a little late to jump in here. I think I will have some time
next week to do some refactoring of the tests; however, there are a number of
unresolved issues:

1) Should we use boost.test?
~~~~~~~~~~~~~~~~~~~

As it stands, IMO no: the tests are both compile-time and run-time tests, and
yes, this really does make a difference (gory details and compiler bugs
omitted here...). However, we can fairly easily construct our own tests on
top of BOOST_ERROR or BOOST_CHECK_MESSAGE. Gennadiy: would you be
interested in:

BOOST_CHECK_INTEGRAL_CONSTANT(value, expected), which performs both compile-time
and run-time checks, and outputs extended error information on failure.

and,

BOOST_CHECK_TYPEID(type, expected), which verifies that the two types are the
same (note that this is not as simple as just checking is_same<>::value,
because we can't rely on that template working without partial specialisation
support...).
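
Roughly, I have in mind something along these lines (an untested sketch only;
the macro bodies and header names are my first guess at how the two checks
could sit on top of BOOST_STATIC_ASSERT and BOOST_CHECK_MESSAGE, not the code
that will end up in type_traits_tests.hpp):

    // untested sketch: layering the two checks on existing macros
    #include <boost/static_assert.hpp>
    #include <boost/test/test_tools.hpp>   // BOOST_CHECK_MESSAGE
    #include <typeinfo>

    // compile-time check via BOOST_STATIC_ASSERT, run-time check via
    // BOOST_CHECK_MESSAGE, naming the failing expression in the message:
    #define BOOST_CHECK_INTEGRAL_CONSTANT(value, expected)                  \
       do {                                                                 \
          BOOST_STATIC_ASSERT((value) == (expected));                       \
          BOOST_CHECK_MESSAGE((value) == (expected),                        \
             "integral constant check failed: " #value " != " #expected);   \
       } while(0)

    // naive version only: typeid ignores top-level cv-qualifiers and
    // references, so the real macro would need extra machinery here:
    #define BOOST_CHECK_TYPEID(type, expected)                              \
       BOOST_CHECK_MESSAGE(typeid(type) == typeid(expected),                \
          "type check failed: " #type " is not the same type as " #expected)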

Probably the thing to do is to refactor type_traits_tests.hpp to provide
these and then move them to boost.test later if Gennadiy is happy?

2) What should we do with tests that fail?
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Historically we have allowed an expected number of failures; without them the
tests would have failed on all compilers, because even though most of the
tests pass for most compilers, different compilers fail in different cases.

We now have so many more fixes checked in that there are many fewer
failures; however, we still need to deal with those that do occur,
particularly for the less common compilers, which have had less effort put
into workarounds than, say, Visual C++.

Normally in cases like this, one would have one file per test, and just let
them fail if they are going to fail. However, the problem here is the sheer
quantity of data: we could easily put together a code generator that
produces one file per test, but there would be about 1500 or so of them. I
don't think any of the regression testers would welcome putting that
quantity of tests into the main regression run. We could run them
separately, I suppose, but I would be concerned that they would just be
forgotten about.

To put this in perspective, boost.config has had separate tests (two per
macro) from the outset, and these can tell you things that the consolidated
tests can't (for example, whether a defect macro is defined when it need not
be); however, I suspect that I'm the only person who has ever run them :-(

I guess we could try to refactor as:

One test file per trait, for "easy" tests that are always expected to pass.
One or more files per trait for "difficult" tests that often fail.

And then just let the tests fail if they fail (i.e. no expected failures). A
sketch of what an "easy" per-trait file might look like follows below.
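
For instance, a hypothetical "easy" file for is_void, built on the checking
macro sketched above (file name and contents are illustrative only; test_main
is the usual Boost.Test entry point):

    // is_void_test.cpp - illustrative only: one "easy" trait test per file
    #include <boost/type_traits.hpp>
    #include "type_traits_tests.hpp"  // would supply BOOST_CHECK_INTEGRAL_CONSTANT

    int test_main(int, char*[])
    {
       // checks that should pass on effectively every compiler:
       BOOST_CHECK_INTEGRAL_CONSTANT(::boost::is_void<void>::value, true);
       BOOST_CHECK_INTEGRAL_CONSTANT(::boost::is_void<int>::value, false);
       BOOST_CHECK_INTEGRAL_CONSTANT(::boost::is_void<void*>::value, false);
       return 0;
    }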

Thoughts?

3) The documentation:
~~~~~~~~~~~~~~~

It's clear that this is now way out of date with the implementation,
especially after Aleksey's refactoring. In particular, the compiler
requirements column is now almost meaningless: we now have so many fixes
checked in that most of the traits work most of the time with most
compilers. I guess we could remove the PCD legends and replace them with
comments containing broad generalisations. We could even generate special
compiler status tables showing the traits that may fail for each compiler,
but again the problem is one of "too much information"; it's not clear to me
how we can handle this without telling people to "try it and see".

John Maddock
http://ourworld.compuserve.com/homepages/john_maddock/index.htm

