Subject: Re: [boost] Process discussions
From: Daniel James (dnljms_at_[hidden])
Date: 2011-01-31 08:12:35
On 31 January 2011 10:10, John Maddock <boost.regex_at_[hidden]> wrote:
>
> * The only tool comment I have is that SVN is awfully slow for big merges
> (Math lib docs for example); I probably just need to find a better way of
> using the tool, though.
Maybe we could work on making boostbook generate more consistent
output. I'm not sure how much of a difference that would make.
Alternatively, you could just not check in the documentation, and put
the development and release versions somewhere convenient.
> * OK I have one more tool comment :-) When we changed from CVS to SVN I
> suspect I lost about a month of "Boost time": changing over repositories,
> figuring out how the heck to use this new tool, etc. It *was* worth it in
> the end, but it wasn't pleasant at the time. In short - big bang tool
> changes are disruptive.
Git is probably more disruptive than most. It's very quirky.
> * I think we could organize the testing more efficiently for faster
> turnaround and better integration testing, and much to my surprise I'm
> coming round to Robert Ramey's suggestion that we reorganize testing on a
> library-by-library basis, with each library tested against the current
> release/stable branch.
I mostly agree, but I'm not sure how workable Robert's suggestion is;
sometimes we need to make changes to more than one library at the same
time (ah sorry, you say that later and it'll take me too long to redo
my response).
> * I think the release branch is closed for stabilization for too long, and
> that betas are too short.
You might be right about this. By the way, I'm thinking about how to
have better website support for the beta. We really need the beta
information to appear on the main site during the beta, but at the
moment it's hard to do that without 'announcing' the final release.
> Here's a concrete suggestion for how the testing workflow might work:
>
> * Test machine pulls changes for lib X from version control (whatever tool
> that is).
The changes could be pulled from a branch, so that we could use a
single branch for multiple libraries.
> * Iff there are changes (either to lib X or to release), only then run the
> tests for that library against current release branch.
Sometimes we also need to test dependent libraries. As you know,
changes I make to unordered can cause failures in tr1. But maybe it's
acceptable if they only show up after integration (which is often the
case at the moment).
> * The tester's machine builds its own test results pages - ideally these
> should go into some form of version control as well so we can roll back and
> see what broke when.
> * When a tester first starts testing they would add a short meta-description
> to a script, and run the script to generate the test results index pages, i.e.
> there would be no need for a separate machine collecting and processing the
> results.
> * The test script should run much of the above *in parallel* if requested.
Is anyone willing to work on something like this? Everyone seems a bit
scared of the testing scripts, although I think Rene is working on a
new reporting system.
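Just to make the shape of that concrete, here's a rough sketch of what a
per-library driver might look like (purely illustrative; the library
list, dependency map, paths and bjam invocation are assumptions, not an
existing script):

#!/usr/bin/env python
# Illustrative per-library test driver; not an existing Boost script.
import subprocess

LIBS = ["regex", "unordered", "tr1"]   # libraries this machine tests (assumed)
DEPENDENTS = {"unordered": ["tr1"]}    # extra libraries to retest on a change (assumed)

def update(path):
    # "svn update" only prints "Updated to revision N." when it actually
    # pulled changes; otherwise it just reports "At revision N."
    out = subprocess.check_output(["svn", "update", path])
    return b"Updated to revision" in out

release_changed = update("release")    # shared checkout of the release branch (assumed path)

for lib in LIBS:
    if update("libs/" + lib) or release_changed:
        # Test this library against the current release branch, then
        # retest anything known to depend on it.
        for target in [lib] + DEPENDENTS.get(lib, []):
            subprocess.call(["bjam", "libs/%s/test" % target])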
> Trunk/
>     Jamfile        // Facilitates integration testing by pointing
>                    // to other libraries in Release.
>     MyLib/
>         libs/mylib/
>         boost/mylib/
I think this is a good idea. We would probably need some way to weave
together the headers for easy use (this could be in the release
scripts or as part of installation).
> And yes, Trunk/Mylib could be an alias for some DVCS somewhere "out there",
> I don't care, it's simply not part of the suggestion, it would work with
> what we have now or some omnipotent VCS of the future.
Exactly right.
> How about if once the release is frozen we branch the release branch to
> "Version-whatever" and then immediately reopen release. Late changes could
> be added to "version-whatever" via special pleading as before, but normal
> Boost development (including merges to release) would continue unabated.
> That would also allow for a slightly longer beta test time before release.
I do like this, but the problem is how we test these late changes.
Maybe we just accept that, on the less popular platforms, they're
tested against a slightly different version.
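For what it's worth, the freeze itself would just be a cheap server-side
copy, so reopening release immediately costs nothing; something along
these lines (repository URL, branch name and log message are
illustrative only):

# Illustrative: freeze the current release branch as "Version-whatever",
# leaving the release branch open for normal merges.
import subprocess
subprocess.check_call([
    "svn", "copy",
    "https://svn.boost.org/svn/boost/branches/release",
    "https://svn.boost.org/svn/boost/branches/Version-whatever",
    "-m", "Freeze release for the upcoming version",
])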
Daniel