From: Doug Gregor (dgregor_at_[hidden])
Date: 2007-08-02 13:47:32


On Aug 2, 2007, at 12:51 PM, Vladimir Prus wrote:
> In other words, in the current system, if some other library breaks
> yours, you find out about that immediately and can take action. In
> the new system, you'll find out about it only when your feature is
> done -- which is more inconvenient. You can try to work around this
> by frequently merging from trunk, but it won't quite work. Trunk
> receives bulk updates. So, if the other developer made 100 changes
> on their branch and merged, you'll only have the chance to test all
> those 100 changes when they are merged to trunk.

Volodya is absolutely correct. Delaying integration of new work by
using more branches will not fix problems; it just delays them.

Frankly, I think this whole approach of "fixing the process" is
wrongheaded. We're in this mess because our *tools* are broken, not
our *process*. Lots of other projects, many larger than Boost, work
perfectly well with the same or similar processes because their tools
work better.

What doesn't work? Regression testing.

Thomas Witt has pointed out the myriad problems with our testing
setup that affected the Boost 1.34.0 release. He should know: he
managed the 1.34.x release series. I hit exactly the same problems
when I managed the 1.33.x release series. Report generation stalls
every few days, cycle times are horrible, it's impossible to isolate
which check-ins caused failures, and our testers can only really test
one thing at a time. So either we aren't stabilizing a release
(because we're testing the trunk) or the trunk has turned into an
untested wild west because we *are* stabilizing a release. That wild
west went on for a *year* while we were stabilizing the 1.34.0
release, so our trunk is, of course, a mess.

At one point, I thought we could fix this problem with a stable
branch based on 1.34.1, from which future releases would occur. Now,
I'm convinced that is absolutely the wrong approach. It means that
"trunk" and "stable" would be forever divergent and would rely on
manual merges to get features into stable. That's a recipe for
unwanted surprises, because library authors, who typically work from
the trunk, are going to forget to merge features and bug fixes
(including the test cases for those things) to the stable branch, and
BOOM! No progress. Requiring so many small merges is more work in the
long run, and it really is just a way to avoid doing what we must do:
fix the trunk. If our trunk were well tested, release branches would
be short-lived and the risk of divergence (features or fixes not
making it between branch and trunk) would be minimized. Plus,
developers wouldn't need to manually merge anything *except* the few
things that are needed for those short-lived release branches. And
since we now have Subversion, svnmerge.py can make dealing with those
merges relatively painless.
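
For the curious, the svnmerge.py workflow for one of those short-lived
release branches would look roughly like the sketch below. This is from
memory, so check the script's --help before relying on it; the branch
URL and the revision numbers are made up for illustration.

        # In a working copy of the release branch, record the trunk
        # as the merge source (svnmerge.py writes a commit message
        # file describing what it tracked).
        svnmerge.py init http://svn.boost.org/svn/boost/trunk
        svn commit -F svnmerge-commit-message.txt

        # Later: list trunk revisions not yet merged, pull over only
        # the few we actually want on the release branch, and commit.
        svnmerge.py avail
        svnmerge.py merge -r 38421,38430
        svn commit -F svnmerge-commit-message.txt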

        - Doug

P.S. Here are some of the many things I could have done that would
have been more productive than writing the message above:
        1) Made regression.py work with Subversion, so that we would be
           performing regression testing on the trunk.
        2) Looked at the changes made on the RC_1_34_0 branch to determine
           which ones can be merged back to the trunk.
        3) Fixed some of the current failures on the trunk.
        4) Set up a new nightly regression tester.
        5) Studied Dart2 to see how we can make it work for Boost.
        6) Investigated the problems with incremental testing.
        7) Improved the existing test reporting system to track the
           Subversion revisions associated with test runs, and to link to
           those revisions in Trac.
        8) Improved the existing test reporting system to track changes
           from day to day.

Before I reply to any messages in this thread, I'll be thinking about
that list. Will you?

P.P.S. I know I sound grumpy, because I am. The time we have
collectively spent discussing policies would have been far more
wisely spent improving the tools we have.

