
From: Martin Wille (mw8329_at_[hidden])
Date: 2004-02-13 05:37:27


David Abrahams wrote:
> Martin Wille <mw8329_at_[hidden]> writes:
>
>
>>>>2. Some tests are known to fail for certain compilers.
>>>> If those tests are joined with other tests then we'll
>>>> lose information about these other tests.
>>>
>>>If it's runtime errors, you could rely on the "expected errors"
>>>feature. You are right about compile-time errors, though one could
>>>try to #ifdef part of the test program.
>>
>>I think #ifdeffing around known failures would already
>>help with the existing tests.
>
>
> The problem is that the tests would "appear" to work to people looking
> at the regression logs, and if in fact they ever did start working,
> we'd never know that the tests could be re-enabled.
>
>
>>However, I'd prefer those compiler version checks to be in the build
>>system.
>
>
> Could you be more specific?

I think you misunderstood me. The tests should not simply be
skipped. Instead, they should not be compiled/run, but they
should be marked as failures.
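
At the test level the effect would be roughly this (only a sketch;
the Borland version check and the use of Boost.Test's minimal
facility are just examples I made up, not what any particular test
does):

    // Instead of #ifdef-ing the known-broken check out entirely
    // (which would make the test appear to pass), compile a stub
    // that fails deliberately, so the regression log still records
    // the expected failure.
    #include <boost/test/minimal.hpp>

    int test_main( int, char*[] )
    {
    #if defined(__BORLANDC__) && __BORLANDC__ <= 0x564  // known-bad compiler (example)
        BOOST_ERROR( "known failure on this compiler; real check disabled" );
    #else
        BOOST_CHECK( 2 + 2 == 4 );  // the real check, run everywhere else
    #endif
        return 0;
    }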

The Jamfile for a test could contain a check for which OSs/compilers
the test should be run, and it could report a failure immediately for
those on which the test is known to fail. Doing so would be ugly, of
course, and setting it up would be a tedious job, but it would make
the test procedure faster, and expected test failures would also be
documented automatically.

We could also make those checks depend on an additional flag.
Setting the flag would disable the skipping and force the tests
to be compiled and run anyway. With this feature we should be
able to spot tests that (unexpectedly) start to work again.
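
In the test source the flag could look something like this (again
only a sketch; the macro name RUN_EXPECTED_FAILURES is made up here,
and the build system would presumably define it when the flag is
set):

    #include <boost/test/minimal.hpp>

    int test_main( int, char*[] )
    {
    // When the (hypothetical) RUN_EXPECTED_FAILURES macro is defined,
    // compile and run the real check even on compilers where it is
    // expected to fail, so we notice when it starts working again.
    #if defined(RUN_EXPECTED_FAILURES) \
        || !( defined(__BORLANDC__) && __BORLANDC__ <= 0x564 )
        BOOST_CHECK( 2 + 2 == 4 );  // the real check
    #else
        BOOST_ERROR( "known failure on this compiler; "
                     "define RUN_EXPECTED_FAILURES to re-enable" );
    #endif
        return 0;
    }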

Regards,
m

