From: Robert Ramey (ramey_at_[hidden])
Date: 2006-05-26 19:45:20
David Abrahams wrote:
> "Robert Ramey" <ramey_at_[hidden]> writes:
> * I didn't run all the tests in every configuration; that's simply
> impractical. Robert, you need to make the library's testing more
> selective; it's occupying vastly more testing resources than most
> other libraries, and way more than required to effectively find
> problems. It shouldn't be necessary to test the cross-product of
> all tests with all archives and all build configurations.
Actually, I'm thinking we should drop daily testing of serialization
entirely. It just takes too long to do every day. It's useless
to me, since I only check in some change maybe once a month
or so, and I do that only after running the complete test
suite in both debug and release modes. I doubt that anyone else
finds it useful either. That is, if an error is detected because some
interface-breaking change occurs in another library, the other
library's author doesn't find out about it until I tell him. And
I only check the results when I make some change. So it can't
be much help to anyone else either.
It is useful when someone adds a new compiler or something
like that. But it would be just as good for the party
interested in the compiler to run the complete serialization tests
and forward the results to me. I don't need to install python
or anything else. I just run bjam -test from a shell script
and generate the results with an enhanced version of compiler_status
(in the vault). In any case, this is also an infrequent
occurrence. This release cycle we've got a new Borland
compiler and a new STLPort platform.
As for not running the cross product of the various
combinations - the problem is that failures now occur
in just a couple of select situations that can't be
predicted in advance. BCB failed on text archives
only in some cases; STLPort 5.0 failed when
using mapped collections; etc. Paring down the
test suite would likely have missed these failures.
The selected tests would all pass - thereby giving the
false impression that the library was bug free.
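Just to make the scale of the cross product concrete, here is a minimal sketch (the counts are purely illustrative assumptions, not the library's actual numbers):

```shell
#!/bin/sh
# Illustrative only: assume 50 test programs, 5 archive types
# (text, binary, XML, wide-text, wide-XML), and 2 build variants
# (debug, release). The full cross product is the product of the three.
tests=50
archives=5
variants=2
echo $((tests * archives * variants))   # prints 500 test runs per toolset
```

Multiply that again by the number of toolsets being tested, and it's clear why any pared-down subset risks missing the handful of cells where the real failures sit.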
Oh, and there's the issue of release mode. A number of
compilers fail to instantiate code in release mode
under some circumstances. Our current testing
doesn't address that in any systematic way. I'm not
even sure that the results table shows whether the tests
are run in release mode or debug mode.
Then there is the issue of the stability of the Boost Test
library. It does a fine job and has lots of features. But when
we started testing in January, I got a whole raft
of failures because some compilers couldn't handle the test
code itself. So I really need a simpler "plain vanilla - idiot proof
- never fails" system. I've started making this change,
but it's taking me some time to make the transition.
So, I think you're on the right track here - just take
it to its logical conclusion.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk