From: David Abrahams (dave_at_[hidden])
Date: 2004-02-12 08:30:08


Beman Dawes <bdawes_at_[hidden]> writes:

> At 03:51 PM 2/11/2004, David Abrahams wrote:
> >> Well, perhaps we should ask whose responsibility it is to watch the
> >> Boost.Python regression logs for VC6.
> >
> >I doubt there's any assigned responsibility. I was watching, but then
> >I had to travel and lost connectivity. I did expect that Beman was
> >going to look things over and make sure there were no new regressions
> >before the release went out. I hypothesize that part of the problem
> >is that he's not looking at the meta-comm tests, which include
> >Boost.Python and Spirit and show regressions against the previous
> >release, rather than just the last test run. I have been worried for
> >some time that test effectiveness is diluted by having two
> >reporting/display systems... did it bite us here?
>
> No, actually I do look at the meta-comm tests. In fact I review every
> test on every platform. It takes quite a while. I was also concerned
> about the Python tests on Linux, and posted a query on January 27th:
>
> >> Here are the three tests failing gcc 3.3.1 and 3.3.2:
> >>
> >> * iterator interoperable_fail

That one was expected.

> >> * python embedding
>
> You replied:
>
> >This one worries me a little. I'll look into it.

And IIRC I fixed that problem. The tests apparently failing in 1.31.0
are different ones.

> Anyhow, I think your point about multiple reporting is a good one. The
> volume of tests is just too high. Fewer, more comprehensive, tests
> would be easier to monitor. Also fewer compilers. Do we really need to
> test every version of GCC and VC++ for the last four years?

Yes, IMO, if people want to support those compilers, we do need to
test them.

> If our testing were more focused, we could cycle the tests more often
> too.

I'm really unconvinced that we need smaller tests on fewer compilers.
Human monitoring is just too error-prone. Why risk it? Why not have
comprehensive tests with automated notifications when something
breaks? It seems to me that less testing can only result in fewer
things working, and coupled with human monitoring it's just going to
make things worse, not better.
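
To make that concrete, here is a minimal sketch of the kind of
notification step I mean. It assumes test results get dumped as plain
"test-name PASS|FAIL" lines, one file per run, with the previous
release's results kept as a baseline; the file names, status strings,
and addresses below are made up for illustration, not anything our
tools actually produce.

    # Minimal sketch of an automated regression notifier.
    # Assumptions (not facts about the Boost test tools): results are
    # plain "test-name PASS|FAIL" lines, one file per run, and a mail
    # server is listening on localhost. File names and addresses are
    # hypothetical.

    import smtplib
    from email.message import EmailMessage

    def load_results(path):
        """Parse 'test-name PASS|FAIL' lines into a dict."""
        results = {}
        with open(path) as f:
            for line in f:
                name, _, status = line.strip().partition(" ")
                if name:
                    results[name] = status
        return results

    def new_regressions(baseline, current):
        """Tests that passed in the baseline but fail now."""
        return sorted(name for name, status in current.items()
                      if status == "FAIL" and baseline.get(name) == "PASS")

    if __name__ == "__main__":
        baseline = load_results("results-previous-release.txt")
        current = load_results("results-latest-run.txt")
        broken = new_regressions(baseline, current)
        if broken:
            msg = EmailMessage()
            msg["Subject"] = "%d new regression(s) since the last release" \
                             % len(broken)
            msg["From"] = "tester@example.org"      # hypothetical
            msg["To"] = "maintainer@example.org"    # hypothetical
            msg.set_content("\n".join(broken))
            with smtplib.SMTP("localhost") as s:    # assumes a local MTA
                s.send_message(msg)

Comparing against the previous release rather than just the last run is
exactly what makes the meta-comm reports useful; a script like this
would do that comparison without anyone having to eyeball every page.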

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com
