From: Beman Dawes (bdawes_at_[hidden])
Date: 2007-06-05 17:42:47
Douglas Gregor wrote:
> On Jun 4, 2007, at 10:10 AM, Beman Dawes wrote:
>> The idea is that merge requests would be something as simple as
>> an email to some server which is running a script that automatically
>> checks that all criteria are met and then does the merge.
>> After some more thought, I decided that when a developer requests a
>> test (via the "test on demand" mechanism), it should be possible to request
>> tests on multiple libraries, so that a dependency sub-tree or any
>> portion thereof can be tested.
>> Rather than building dependency lists into the system (which is a
>> heavyweight approach), it might be simpler to give developers a tool
>> to find which libraries are dependent on their library, and then leave
>> it up to the developer how much or how little they test against the
>> tree. A developer who undertests runs the risk that a
>> merge-into-stable request will fail, because merge-into-stable
>> requests fail if they would cause any other library to fail.
> That's three new tools, some of which are non-trivial to develop. All
> tools are non-trivial to maintain.
Some tools, particularly those that are dependency based, are
interesting to speculate about but are not essential. They can be safely
ignored for now. And some of the needs may be met by off-the-shelf tools
we don't have to develop or maintain.
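The dependency-finding tool mentioned above could start out very simply: scan each library's sources for #include directives that pull in another library's headers, and invert that map. A minimal sketch, assuming a mapping of library names to source texts (the function name and input shape are hypothetical, not an actual Boost tool):

```python
import re

# Matches both <boost/filesystem.hpp> and <boost/filesystem/path.hpp>,
# capturing the library name ("filesystem" in either case).
INCLUDE_RE = re.compile(r'#\s*include\s*[<"]boost/([^/">]+)[/.]')

def dependents_of(target, sources):
    """sources maps library name -> list of source-file texts.
    Returns the sorted list of libraries that include headers
    from `target` (i.e., the libraries `target` could break)."""
    result = set()
    for lib, files in sources.items():
        if lib == target:
            continue
        for text in files:
            included = {m.group(1) for m in INCLUDE_RE.finditer(text)}
            if target in included:
                result.add(lib)
                break
    return sorted(result)
```

A developer could then run such a tool over the tree and decide how far down the resulting list to test.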
The most critical new (to us) tool would be test-on-demand. I've been
very deliberately focusing on figuring out what is needed rather than
where we get the tool or how the details work. Now that the needs seem
fairly firmly defined, we can start looking at what tools are available
to meet those needs.
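The gating rule quoted above (a merge-into-stable fails if it would cause any other library to fail) is simple enough to express directly. A hypothetical sketch of the decision a merge-request script might apply, assuming it already has test results in hand (the function and input shapes are invented for illustration):

```python
def merge_allowed(own_results, dependent_results):
    """Decide whether a merge-into-stable request should proceed.

    own_results: dict of test name -> passed? for the requesting library.
    dependent_results: dict of library -> (dict of test name -> passed?)
    for the dependent libraries that were tested on demand.

    The merge is refused if any test fails anywhere, since a merge into
    stable must not break any other library."""
    if not all(own_results.values()):
        return False
    return all(
        all(results.values()) for results in dependent_results.values()
    )
```

Everything else, such as receiving the request by email and running the tests, sits in front of a check like this.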
> We clearly need tools to improve the Boost development and release
> process. The problem is that while good tools can help the process,
> poor tools can hurt us even more than no tools. We can't build new
> tools until we've fixed or replaced the existing tools, and we can't
> build new tools without a solid plan for maintaining those tools.
I'm tired of waiting. For what I'm proposing, bjam is good enough as it
stands now. The downstream reporting system is orthogonal to test-on-demand.
> Look at the 1.34 release series... the thing that's been holding us
> back most of all is that the testing and test reporting tools are
> broken. 1.34.1 is stalled because we have failures on one platform,
> but nobody can see what those failures actually are: the test
> reporting system removed all of the important information.
> I agree with most of Beman's write-up, but it pre-supposes a robust
> testing system for Boost that just doesn't exist.
That may be true for the whole-system testing and reporting that release
managers care about, but for a developer wanting to test a single
library bjam works pretty well, and I suspect it will work well for
tests on a small number of dependent libraries too.
But regardless, the test-on-demand system should be independent of how
the testing is actually run. If we change to a different build system or
test execution framework, it would be nice if the procedures as seen by
a developer don't change much.
> I hypothesize that
> the vast majority of the problems with our release process would go
> away without a single change to our process, if only we had a robust
> testing system.
I think you have to change the process at least enough so that a stable
branch is always maintained and developers can test their library's
development branch on demand against the stable branch. The current
"wild-west" problems in the trunk would not go away just because the
testing system worked better, was more responsive, etc.
> We have only so much volunteer time we can spend. At
> this point, I think our best bet is to spend it making the regression
> testing infrastructure work well; then we can move on to a new
> process with its new tools.
I hope you aren't counting Subversion as a "new" tool! And what about
starting the next release from the last release, rather than branching
the development trunk? That is a new process, but it is one we can start
now.
In general, I'd like to move incrementally into new processes as we get
them figured out, and adopt new tools when we find something that will
better support our processes.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk