Subject: Re: [boost] Process discussions
From: John Maddock (boost.regex_at_[hidden])
Date: 2011-01-31 05:10:02
> May I suggest that for some time, we outright ban freeform discussion
> about process, and instead restrict it to threads started by a Boost
> developer and saying this: "I am maintainer of X, and had N commits and
> M trac changes in the last year. I most hate P1, P2 and P3. I would
> propose that we use T1, T2, and T3 to fix that". Then, everybody could
> join to suggest better ways of fixing P1, P2 and P3 -- without making up
> other supposed problems.
OK, let me give my pet hates:
* The only tool comment I have is that SVN is awfully slow for big merges
(the Math lib docs, for example); I probably need to find a better way of
using the tool, though.
* OK, I have one more tool comment :-) When we changed from CVS to SVN I
suspect I lost about a month of "Boost time" changing over repositories,
figuring out how the heck to use the new tool, etc. It *was* worth it in
the end, but it wasn't pleasant at the time. In short: big-bang tool
changes are disruptive.
* I think we could organize the testing more efficiently for faster
turnaround and better integration testing, and much to my surprise I'm
coming round to Robert Ramey's suggestion that we reorganize testing on a
library-by-library basis, with each library tested against the current
release/stable branch.
* I think the release branch is closed for stabilization for too long, and
that betas are too short.
~~~~
Here's a concrete suggestion for how the testing workflow might work:
* Test machine pulls changes for lib X from version control (whatever tool
that is).
* If, and only if, there are changes (either to lib X or to release), run
the tests for that library against the current release branch.
* The tester's machine builds its own test-results pages; ideally these
should go into some form of version control as well, so we can roll back
and see what broke when.
* When a tester first starts testing, they would add a short
meta-description to a script, and run the script to generate the
test-results index pages; i.e. there would be no need for a separate
machine collecting and processing the results.
* The test script should run much of the above *in parallel* if requested.
The aim would be to speed up testing by reducing the cycle time (most
libraries, most of the time, don't need re-testing). A rough sketch of
such a driver follows this list.
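To make the cycle concrete, here is a minimal sketch of what such a
per-library driver might look like; Python is used purely for
illustration. The library list, the "last_tested" bookkeeping, and the
"b2" invocation are all assumptions of the sketch, not an actual Boost
tool (and "svn info --show-item" needs Subversion 1.9 or later):

    #!/usr/bin/env python
    # Sketch only: hypothetical per-library test driver, not an actual
    # Boost tool.  Assumes "svn" is on the PATH and that each library's
    # working copy lives under libs/<name>/.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    LIBS = ["regex", "math", "serialization"]  # libraries this machine tests

    def last_changed_revision(path):
        # "svn info --show-item" is available in Subversion 1.9+.
        out = subprocess.check_output(
            ["svn", "info", "--show-item", "last-changed-revision", path])
        return out.strip()

    def test_library(lib, last_tested):
        """Update lib's working copy, then test only if something changed."""
        path = "libs/" + lib
        subprocess.check_call(["svn", "update", path])
        rev = last_changed_revision(path)
        # (A full driver would also track the release branch's revision,
        # so a release-side change triggers a re-test; omitted for brevity.)
        if rev == last_tested.get(lib):
            return lib, "skipped"           # nothing changed: no re-test
        status = subprocess.call(["b2", path + "/test"])  # hypothetical invocation
        last_tested[lib] = rev
        return lib, "pass" if status == 0 else "FAIL"

    def main():
        last_tested = {}                    # would be persisted between runs
        # Run the per-library cycles in parallel, as suggested above.
        with ThreadPoolExecutor(max_workers=4) as pool:
            for lib, outcome in pool.map(
                    lambda lib: test_library(lib, last_tested), LIBS):
                print("%s: %s" % (lib, outcome))

    if __name__ == "__main__":
        main()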
The version control system used would be a tiny part of the above changes;
the open question is whether we would need to reorganize Trunk, more like
the sandbox, on a library-by-library basis in order to facilitate the new
testing script, i.e. a directory structure more like:
Trunk/
    Jamfile          // Facilitates integration testing by pointing
                     // to other libraries in Release.
    MyLib/
        libs/mylib/
        boost/mylib/
And yes, Trunk/MyLib could be an alias for some DVCS somewhere "out there";
I don't care, it's simply not part of the suggestion. It would work with
what we have now or some omnipotent VCS of the future.
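One nice side effect of that layout: the test script could discover what
to test by simple directory inspection, with no central manifest. A
minimal sketch, assuming the hypothetical Trunk/<Lib>/libs/ structure
shown above:

    import os

    # Hypothetical layout: each Trunk/<Lib>/ carries its own libs/ subtree.
    def find_libraries(trunk="Trunk"):
        for entry in sorted(os.listdir(trunk)):
            libdir = os.path.join(trunk, entry, "libs")
            if os.path.isdir(libdir):      # skips the top-level Jamfile etc.
                yield entry, libdir

    for name, libdir in find_libraries():
        print("would test %s (sources under %s)" % (name, libdir))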
I have one concern about this model: from time to time my stuff depends
on some bleeding-edge feature from another library or Boost tool.
Sometimes, too, development of that new feature goes hand in hand with my
usage, which is to say it's developed specifically to handle problem X, and
the only way to really shake down the new feature is to put it to work. For
example, Boost.Build's "check-target-builds" rule was developed for, and
tested with, Boost.Regex's ICU usage requirements. Development of
Boost.Build and Regex went hand in hand. I'm not sure how we deal with
this in the new model.
~~~~~
Release process:
How about this: once the release is frozen, we branch the release branch to
"Version-whatever" and then immediately reopen release. Late changes could
be added to "Version-whatever" via special pleading, as before, but normal
Boost development (including merges to release) would continue unabated.
That would also allow for a slightly longer beta-test period before release.
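In SVN terms the freeze would be a single cheap server-side copy. A hedged
sketch of the sequence, where the repository URL and the "Version-whatever"
branch name are placeholders:

    import subprocess

    REPO = "https://svn.boost.org/svn/boost"   # placeholder repository URL

    # Freeze: in SVN a branch is just a cheap server-side copy, so the
    # release branch can reopen for normal merges immediately afterwards.
    subprocess.check_call([
        "svn", "copy",
        REPO + "/branches/release",
        REPO + "/branches/Version-whatever",   # placeholder, as in the text
        "-m", "Branch release for final stabilization",
    ])
    # Late fixes go to Version-whatever via special pleading; everything
    # else keeps merging to branches/release unabated.

Because the branch is just a copy, reopening release costs nothing; the
only process change is where late fixes get committed.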
~~~~~~~
All of the above is more "thinking out loud" than solidly thought through,
but I would welcome feedback.
Regards, John.