From: David Abrahams (david.abrahams_at_[hidden])
Date: 2002-07-18 18:39:05
----- Original Message -----
From: "Aleksey Gurtovoy" <agurtovoy_at_[hidden]>
> Yep. Actually, better - more of the new tests are passing with the new
> headers than with the old ones.
> > Yeah, testing this library is difficult. I'd be interested in
> > discussing what an improved test would look like.
> Me too. First thing I would do (and have done) is to split the test files
> the same way the headers are split - one test per trait. Somehow it
> makes you want to put more work into testing each particular trait :).
Yes, of course. The big problem is how to deal with the expected failures.
I want to know exactly which sub-checks in a test are expected to
fail/succeed on a given compiler.
In the overall results, I want to see:
FAIL  if any of the expected successes fails,
PASS  if it passes all tests,
PASS* if it meets expectations exactly, but there are expected failures,
PASS? if all expected successes and some expected failures pass.
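The classification above can be sketched as a small status function. This is only an illustration of the scheme described in this message, not actual Boost code; the names SubCheck and overall_status are hypothetical.

```cpp
#include <string>
#include <vector>

// Hypothetical record of one sub-check: what was expected on this
// compiler, and what actually happened.
struct SubCheck {
    bool expected_to_pass; // recorded expectation for this compiler
    bool passed;           // actual outcome
};

// FAIL  : some expected success failed
// PASS  : every sub-check passed and no failures were expected
// PASS* : matched expectations exactly, but expected failures exist
// PASS? : all expected successes passed, and some expected failures
//         passed too (better than expected)
std::string overall_status(const std::vector<SubCheck>& checks) {
    bool any_expected_failure = false;
    bool any_surprise_pass = false;
    for (const SubCheck& c : checks) {
        if (c.expected_to_pass && !c.passed)
            return "FAIL"; // an expected success failed
        if (!c.expected_to_pass) {
            any_expected_failure = true;
            if (c.passed)
                any_surprise_pass = true;
        }
    }
    if (!any_expected_failure)
        return "PASS";
    return any_surprise_pass ? "PASS?" : "PASS*";
}
```

Note that FAIL dominates: one failed expected success marks the whole test FAIL regardless of how the expected failures turned out.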
I also want a link to a list of expected failures and at the end of a link