From: Douglas Gregor (doug.gregor_at_[hidden])
Date: 2007-06-04 12:15:26
On Jun 4, 2007, at 10:10 AM, Beman Dawes wrote:
> The idea is that merge requests would be something as simple as
> an email to some server which is running a script that automatically
> checks that all criteria are met and then does the merge.
> After some more thought, I decided that when a developer requests a test
> (via the "test on demand" mechanism), it should be possible to request
> tests on multiple libraries, so that a dependency sub-tree or any
> portion thereof can be tested.
> Rather than building dependency lists into the system (which is a
> heavyweight approach), it might be simpler to give developers a
> tool to
> find which libraries are dependent on their library, and then leave it
> up to the developer how much or how little they test against the
> tree. A developer who undertests runs the risk that a
> merge-into-stable request will fail, because merge-into-stable
> requests fail if they would cause any other library to fail.
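The "which libraries depend on mine?" tool Beman describes amounts to a reverse walk over the dependency graph. Here is a minimal sketch of that idea; the function name, the example library names, and the dependency map are all invented for illustration (a real tool would extract dependencies from the Boost source tree):

```python
# Hypothetical sketch of a reverse-dependency lookup tool. The names
# and the dependency map below are made up for illustration only.
from collections import deque

def reverse_dependents(deps, target):
    """Return every library that depends on `target`, directly or
    transitively. `deps` maps each library to its direct dependencies."""
    # Invert the dependency map: library -> libraries that use it.
    users = {}
    for lib, needed in deps.items():
        for d in needed:
            users.setdefault(d, set()).add(lib)
    # Breadth-first walk over the inverted graph to collect all
    # direct and transitive dependents.
    found, queue = set(), deque([target])
    while queue:
        lib = queue.popleft()
        for user in users.get(lib, ()):
            if user not in found:
                found.add(user)
                queue.append(user)
    return found

# Invented example dependency map:
deps = {
    "function": ["config", "type_traits"],
    "signals": ["function"],
    "config": [],
    "type_traits": ["config"],
}
print(sorted(reverse_dependents(deps, "config")))
# -> ['function', 'signals', 'type_traits']
```

A developer changing "config" would then decide how far down that dependent list to test before requesting a merge into stable.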
That's three new tools, some of which are non-trivial to develop. All
tools are non-trivial to maintain.
We clearly need tools to improve the Boost development and release
process. The problem is that while good tools can help the process,
poor tools can hurt us even more than no tools. We can't build new
tools until we've fixed or replaced the existing tools, and we can't
build new tools without a solid plan for maintaining those tools.
Look at the 1.34 release series... the thing that's been holding us
back most of all is that the testing and test reporting tools are
broken. 1.34.1 is stalled because we have failures on one platform,
but nobody can see what those failures actually are: the test
reporting system removed all of the important information.
I agree with most of Beman's write-up, but it presupposes a robust
testing system for Boost that just doesn't exist. I hypothesize that
the vast majority of the problems with our release process would go
away without a single change to our process, if only we had a robust
testing system. We have only so much volunteer time we can spend. At
this point, I think our best bet is to spend it making the regression
testing infrastructure work well; then we can move on to a new
process with its new tools.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk