
From: Stefan Seefeld (seefeld_at_[hidden])
Date: 2005-12-07 14:18:59


Daryle Walker wrote:
> On 12/6/05 6:00 PM, "Stefan Seefeld" <seefeld_at_[hidden]> wrote:
>
>
>>Daryle Walker wrote:
>>
>
> [SNIP a library that failed on every compiler]
>
>>>This could be another case to watch out for: when a library fails on every
>>>compiler (or at least a lot of them). Here it would be the library that's
>>>possibly broken.
>>
>>Isn't that exactly what these tests are supposed to measure? Or do you mean
>>that in case *all* tests fail, the report could just mention a single failure,
>>at a different granularity scale?
>
>
> Yes, I meant the latter. If we suddenly see a compiler fail on every
> library, or a library fail on every compiler, then we probably have a
> configuration issue. (But how do we implement "suddenly," since the tests
> don't keep a history of past results? We only want to block blooper runs,
> not compilers or libraries that legitimately fail everything.)

Indeed. One thing I have proposed in the past: some dummy tests (akin
to autoconf macros) that make sure a sane configuration/environment is
available at all. These tests could cover a specific toolchain, or even
some lightweight library-specific code. If they are known to be dependencies
of the rest of the test suite(s), quite a bit of computing power (and
mail bandwidth) could be spared whenever they fail.
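
To illustrate (just a sketch; the file name and setup are hypothetical,
not actual Boost infrastructure), such a gate could be as small as one
translation unit that must compile, link, and run before anything else:

    // config_check.cpp -- hypothetical toolchain sanity test (a sketch).
    // If even this fails to compile, link, or run, the toolchain or
    // environment is broken, and no library-specific result downstream
    // is meaningful.
    #include <iostream>
    #include <string>
    #include <vector>

    int main()
    {
        // Exercise the standard library minimally.
        std::vector<std::string> words;
        words.push_back("config");
        words.push_back("ok");

        for (std::vector<std::string>::const_iterator i = words.begin();
             i != words.end(); ++i)
            std::cout << *i << ' ';
        std::cout << '\n';

        return 0; // a non-zero exit would mark the environment unusable
    }

In Boost.Build terms, one could then make each library's test targets
depend on such a check, so a failing gate suppresses the thousands of
downstream compile/run attempts and the corresponding report mail.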

Regards,
                Stefan

