

From: Beman Dawes (bdawes_at_[hidden])
Date: 2002-01-21 11:01:34


I'm working on writing release procedures, and that sparked this train of
thought:

One of the most glaring weaknesses in our current regression test approach
is that if a test is failing on a compiler/platform not used by the
developer, it may be a long time before the test failure comes to the
developer's attention.

A second weakness is that the current test reporting by platform is hard to
use for those who want to see the results on all platforms.

One possibility would be to write a program which merges the regression
tests tables for all platforms into a single giant table, and then make
that table available daily (and on the web site for each release).

Would this be a good way to address the problem?

If so, will someone volunteer to write the table merge program (in C++,
bash, or Python)? It would have to cope with the fact that a given test row
may not be present in some of the input tables. Since the program would
work at the HTML file level, I assume it will continue to work when we cut
over to jam for regression testing.
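To make the request concrete, here is a rough sketch of what such a merge
might look like in Python. The file names, the row format (one <tr> holding a
test-name cell and a result cell), and the "n/a" marker for missing rows are
just assumptions for illustration, not a description of the actual report
pages:

#!/usr/bin/env python
# Sketch only. Assumes each per-platform results page contains rows like
#   <tr><td>test_name</td><td>Pass</td></tr>
# and that per-platform files are passed on the command line.

import re
import sys

ROW_RE = re.compile(r"<tr>\s*<td>([^<]+)</td>\s*<td>([^<]+)</td>\s*</tr>",
                    re.IGNORECASE)

def read_results(path):
    """Return {test_name: result} parsed from one platform's HTML table."""
    with open(path) as f:
        return {name.strip(): result.strip()
                for name, result in ROW_RE.findall(f.read())}

def merge(paths):
    """Merge per-platform tables into one table keyed by test name.

    A test missing from a platform's table is reported as 'n/a', so the
    merged table copes with rows absent from some inputs.
    """
    platforms = [(p, read_results(p)) for p in paths]
    all_tests = sorted(set().union(*(r for _, r in platforms)))

    print("<table border=1>")
    print("<tr><td>Test</td>" +
          "".join("<td>%s</td>" % p for p, _ in platforms) + "</tr>")
    for test in all_tests:
        cells = "".join("<td>%s</td>" % r.get(test, "n/a")
                        for _, r in platforms)
        print("<tr><td>%s</td>%s</tr>" % (test, cells))
    print("</table>")

if __name__ == "__main__":
    merge(sys.argv[1:])

Run as, say, "merge_tables.py cs-win32.html cs-linux.html > all.html"; the
column headers would simply be the input file names in this sketch.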

Does anyone have a better way to address the problem?

--Beman

PS: I'll mention the following, since someone else is bound to think of it,
but I don't really like it - too much effort, too likely to be abandoned as
a spam producer:

As a further refinement, we might associate an email address (or addresses)
with each regression test row, and email a notification if (1) the test
newly begins failing, or (2) the test fails at all. Presumably option (1)
would be run before a release, while option (2) would be run less
often. The point would be to notify developers without flooding them with
email they would soon hate.
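For what it's worth, the "newly begins failing" check in option (1) amounts
to diffing the current results against the previous run. A minimal sketch,
again with purely hypothetical data shapes (result dictionaries keyed by test
name, and an owners table mapping tests to addresses):

# Sketch only; 'previous' and 'current' are {test_name: result} mappings
# such as those produced by the merge program above, and 'owners' is a
# hypothetical {test_name: email} table.

def newly_failing(previous, current):
    """Tests that fail now but did not fail in the previous run."""
    return [t for t, r in current.items()
            if r == "Fail" and previous.get(t) != "Fail"]

def notify(owners, tests):
    # Placeholder for actually sending mail.
    for t in tests:
        addr = owners.get(t)
        if addr:
            print("would mail %s about %s" % (addr, t))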

