From: Beman Dawes (bdawes_at_[hidden])
Date: 2004-02-11 16:50:42
At 03:51 PM 2/11/2004, David Abrahams wrote:
>> Well, perhaps we should ask whose responsibility it is to watch the
>> Boost.Python regression logs for VC6.
>I doubt there's any assigned responsibility. I was watching, but then
>I had to travel and lost connectivity. I did expect that Beman was
>going to look things over and make sure there were no new regressions
>before the release went out. I hypothesize that part of the problem
>is that he's not looking at the meta-comm tests, which include
>Boost.Python and Spirit and show regressions against the previous
>release, rather than just the last test run. I have been worried for
>some time that test effectiveness is diluted by having two
>reporting/display systems... did it bite us here?
No, actually I do look at the meta-comm tests. In fact, I review every test
on every platform, which takes quite a while. I was also concerned about the
Python tests on Linux, and posted a query on January 27th:
>> Here are the three tests failing with gcc 3.3.1 and 3.3.2:
>> * iterator interoperable_fail
>> * python embedding
>This one worries me a little. I'll look into it.
Anyhow, I think your point about multiple reporting systems is a good one.
The volume of tests is just too high. Fewer, more comprehensive tests would
be easier to monitor. So would fewer compilers: do we really need to test
every version of GCC and VC++ from the last four years? If our testing were
more focused, we could also cycle the tests more often.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk