Boost Testing:
From: Beman Dawes (bdawes_at_[hidden])
Date: 2007-12-13 11:01:36
David Abrahams wrote:
> on Tue Dec 11 2007, Aleksey Gurtovoy <agurtovoy-AT-meta-comm.com> wrote:
>
>> You also need a "regular" view of the codebase health, showing all the
>> libraries with their status, whatever it is.
>
> Just to make sure they're actually being tested at all?
Yep. Working on 1.35.0, we've run into at least these cases:
* Test runner didn't have enough disk space.
* Test runner was inadvertently using a local status/Jamfile that wasn't
being updated.
* New library author didn't correctly add a library to status/Jamfile.
* Two or three cases where incremental testers needed to clear out
stale results and run from scratch. The presence of test results that
shouldn't have been there at all alerted us to the problem.
* For testers who cycle irregularly, it is very helpful to see the date
and revision they last tested at. Currently, seeing passing tests does
that, although there are obviously other ways to achieve the same end.
In general, if the report generation checked for all tests that were
expected to be present, and reported tests that were expected but didn't
actually run, that would cut way down on the need to report routine
passing tests. It would be a lot more direct than having the reader of
the report infer that something is wrong because a test result isn't
present.
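The expected-vs-actual check described above amounts to a set difference between the tests listed as expected and the results that actually arrived. A minimal sketch (the test names and result table here are hypothetical, not Boost's actual report format):

```python
# Hypothetical sketch: report tests that were expected but produced no result.
# A missing result is surfaced explicitly instead of the reader having to
# notice an absent row in the report.

expected_tests = {"lib_a.test1", "lib_a.test2", "lib_b.test1"}
actual_results = {"lib_a.test1": "pass", "lib_b.test1": "fail"}

# Tests that never ran: expected names with no recorded result.
missing = sorted(expected_tests - actual_results.keys())

for name in missing:
    print(f"MISSING: {name} (expected but did not run)")
```

With the sample data this prints one MISSING line for lib_a.test2; routine passes need not be listed at all.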
--Beman