From: Vladimir Prus (ghost_at_[hidden])
Date: 2007-08-02 15:02:19
Doug Gregor wrote:
> On Aug 2, 2007, at 12:51 PM, Vladimir Prus wrote:
>> In other words, in the current system, if some other library breaks
>> yours, you find out about that immediately and can take action. In the
>> new system, you'll find out about that only when your feature is
>> done -- which is more inconvenient. You can try to work around this by
>> frequently merging from trunk, but it won't quite work. Trunk receives
>> bulk updates. So, if the other developer made 100 changes on their
>> branch and merged, you'll only have the chance to test all those 100
>> changes when they are merged to trunk.
>
> Volodya is absolutely correct. Delaying integration of new work by
> using more branches will not fix problems; it just delays them.
>
> Frankly, I think this whole approach of "fixing the process" is
> wrongheaded. We're in this mess because our *tools* are broken, not
> our *process*. Lots of other projects, many larger than Boost, work
> perfectly well with the same or similar processes because their tools
> work better.
I'd disagree -- there's one bit where our process is not broken;
it's nonexistent. An important aspect of Boost is that we have
lots of automated tests, on lots of different configurations, and
the goal of no regressions. This is a very strict goal.
At the same time we don't have any equally strict, or even written
down, bug-triage-and-developer-pinging process -- a process that
makes sure that:
(1) Every issue found by a regression tester or reported
by a user is assigned to the right person and to the
right release.
(2) By the time that release is due, the issue is either
fixed, or it is made clear that it cannot be fixed.
(3) A critical bug has a higher chance of being fixed
than a minor nuisance.
The current process basically expects that library authors will do
all that. But:
1. We have issues with "None" as component and as owner.
2. Not all authors start the day by looking at issues in
Trac, so they might miss important issues.
3. An author might simply forget about an important issue,
or just disappear.
As a result, we have had regressions present for months
without any apparent work being done on them.
So, where is my proposal for a good process? There is none.
Many projects have such a bug-triage-and-developer-pinging process,
so we don't have to invent anything; it only takes a volunteer
to manage such a process.
Ah, and BTW -- if the branch-based proposal is adopted, somebody
should volunteer to integrate changes into the stable branch, and be
ready to integrate several patches per day.
> P.S. Here are some of the many things I could have done that would
> have been more productive than writing the message above:
> 1) Made regression.py work with Subversion, so that we would be
> performing regression testing on the trunk.
> 2) Looked at the changes made on the RC_1_34_0 branch to determine
> which ones can be merged back to the trunk.
> 3) Fixed some of the current failures on the trunk.
> 4) Set up a new nightly regression tester.
> 5) Studied Dart2 to see how we can make it work for Boost.
> 6) Investigated the problems with incremental testing.
That's something long overdue, and I can probably fix it. The
question is -- will process_jam_logs remain? If not, I'd rather
not spend time making changes that will have to be redone.
> 7) Improved the existing test reporting system to track Subversion
> revisions associated with test runs, and link to those
> revisions in the Trac.
> 8) Improved the existing test reporting system to track changes from
> day to day
I'd add:
8.1) Implemented a mechanism to record the revision in which a failure
first occurred, and the previous revision where the test passed.
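As a rough sketch of what I mean (purely illustrative -- the report
format, field names, and this helper are my assumptions, not the actual
reporting code), such a mechanism could compare each test run against
the previous one and record the revision boundary where a test flipped
from passing to failing:

  # Hypothetical sketch: remember the last-passing and first-failing
  # Subversion revision for each test.  The inputs are assumed to be
  # (revision, {test name: 'pass' or 'fail'}) pairs built from two
  # consecutive runs; the real report data looks different.
  def update_failure_records(records, prev_run, curr_run):
      prev_rev, prev_results = prev_run
      curr_rev, curr_results = curr_run
      for test, status in curr_results.items():
          if status == 'fail' and prev_results.get(test) == 'pass':
              # A new failure: record both sides of the boundary.
              records[test] = {'last_passed': prev_rev,
                               'first_failed': curr_rev}
          elif status == 'pass':
              # The test passes again; drop any stale record.
              records.pop(test, None)
      return records

With something like that, the report pages could link the "first failed"
revision directly to the corresponding changeset in Trac.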
> Before I reply to any messages in this thread, I'll be thinking about
> that list. Will you?
I think this is a good list -- it's likely to have more direct effect
than any process discussion.
- Volodya