Subject: Re: [boost] Process discussions
From: Rene Rivera (grafikrobot_at_[hidden])
Date: 2011-02-01 11:12:45


On 2/1/2011 3:15 AM, Vladimir Prus wrote:
> John Maddock wrote:
>
>>> May I suggest that, for some time, we outright ban freeform discussion
>>> about process, and instead restrict it to threads started by a Boost
>>> developer and saying this: "I am the maintainer of X, and had N commits
>>> and M Trac changes in the last year. What I hate most are P1, P2 and P3.
>>> I would propose that we use T1, T2, and T3 to fix that." Then everybody
>>> could join in to suggest better ways of fixing P1, P2 and P3 -- without
>>> making up other supposed problems.
>>
>> OK let me give my pet hates:
>>
>> * The only tool comment I have is that SVN is awfully slow for big merges
>> (the Math lib docs, for example), though I probably need to find a better
>> way of using the tool.
>
> I can't shake the feeling that the SVN performance problem is specific to
> our instance; at least, other SVN servers I use feel faster. It would be
> worthwhile to experiment with different setups, including using svn+ssh
> instead of https, or switching the server to the FSFS repository format
> (if it currently uses BDB).

There are various problems with SVN. One is the use of HTTPS, which is
known to be unstable. The contention between Trac and SVN is also
problematic: our Trac is heavily used, and it frequently conflicts with
regular SVN use. We do use the FSFS repo format, but it's not the latest
sharded structure. Both the HTTPS and the un-sharded aspects are things I
intend to change, but at the moment I'm giving priority to the test
reporting problems, since they seem the most critical.
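
For reference, getting to the sharded layout would, as far as I know, mean
a dump/load cycle into a repository created with a 1.5-or-newer svnadmin
(an in-place upgrade doesn't re-shard existing revisions). Roughly
something like the sketch below, with made-up paths, and with the repo
offline or read-only while it runs:

# Rough sketch: migrate an un-sharded FSFS repository to the sharded
# layout via dump/load. Paths are illustrative only.
import subprocess

OLD = "/svn/boost"          # existing un-sharded repository (assumed path)
NEW = "/svn/boost-sharded"  # fresh repository created by a 1.5+ svnadmin

subprocess.check_call(["svnadmin", "create", "--fs-type", "fsfs", NEW])
# Stream the dump straight into the load to avoid a huge intermediate file.
dump = subprocess.Popen(["svnadmin", "dump", "-q", OLD], stdout=subprocess.PIPE)
subprocess.check_call(["svnadmin", "load", "-q", NEW], stdin=dump.stdout)
dump.stdout.close()
if dump.wait() != 0:
    raise RuntimeError("svnadmin dump failed")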

> Alas, I am not sure anybody is in a position to try this.

As for trying the plain SVN server configuration... I'm also not sure we
can try it (and obviously I don't have time at the moment), as I don't
know what firewall or server management changes might need to happen, and
that's something I can't do.

>> * I think we could organize the testing more efficiently for faster
>> turnaround and better integration testing, and much to my surprise I'm
>> coming round to Robert Ramey's suggestion that we reorganize testing on a
>> library-by-library basis, with each library tested against the current
>> release/stable branch.
>>
>> * Test machine pulls changes for lib X from version control (whatever tool
>> that is).
>> * Iff there are changes (either to lib X or to release), only then run the
>> tests for that library against current release branch.
>> * The tester's machine builds its own test results pages - ideally these
>> should go into some form of version control as well, so we can roll back
>> and see what broke when.
>> * When a tester first starts testing they would add a short
>> meta-description to a script, and run the script to generate the test
>> results index pages, i.e. there would be no need for a separate machine
>> collecting and processing the results.
>> * The test script should run much of the above *in parallel* if requested.
>>
>> The aim would be to speed processing of testing by reducing the cycle time
>> (most libraries most of the time don't need re-testing).
>
> I suppose an alternative approach would be to just make incremental
> testing work. Boost.Build, obviously, can rebuild and rerun just the
> necessary tests, but the regression framework used to have issues, like
> reporting stale tests. I think it should give the same reduction in
> testing time, and I'm not really sure which approach is harder to
> implement.

Implementing the incremental testing is the easiest, assuming we are
reimplementing the test reporting. And it's a major reason why I'm
reimplementing the test reporting :-) The fix will be possible because the
new reporting will not rely on process_jam_log to get its information, but
will instead use the BBv2 XML output directly, which has much more
accurate information about the test results.
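
To give a flavour of what that means, a summary could be pulled straight
from the XML with something like the sketch below. The element and
attribute names ("action", "status", "name") are assumptions for
illustration only -- the actual tags in the bjam --out-xml output would
need to be checked:

# Sketch: summarize test results directly from bjam's --out-xml output,
# bypassing process_jam_log. Element/attribute names are assumptions.
import sys
import xml.etree.ElementTree as ET

def summarize(xml_path):
    root = ET.parse(xml_path).getroot()
    passed, failed = [], []
    for action in root.iter("action"):       # one entry per build/test action (assumed)
        name = action.findtext("name", "<unknown>")
        if action.get("status") == "0":      # assumed: status "0" means success
            passed.append(name)
        else:
            failed.append(name)
    return passed, failed

if __name__ == "__main__":
    ok, bad = summarize(sys.argv[1])
    print("passed: %d, failed: %d" % (len(ok), len(bad)))
    for name in bad:
        print("FAIL: " + name)

The same kind of summary could then feed whatever report pages the tester
generates locally.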

>> I have one concern about this model - from time to time my stuff depends
>> upon some bleeding-edge feature from another library or Boost tool -
>> sometimes, too, development of that new feature goes hand in hand with my
>> usage - which is to say it's developed specifically to handle problem X,
>> and the only way to really shake down the new feature is to put it to
>> work. For example, Boost.Build's "check-target-builds" rule was developed
>> for and tested with Boost.Regex's ICU usage requirements. Development of
>> Boost.Build and Regex went hand in hand. I'm not sure how we deal with
>> this in the new model?
>
> That's why I prefer the 'test whole trunk, incrementally' model to the
> 'test each library individually, against last release' model.

I tend to prefer both. That is, I don't think we can live without full
trunk testing, but we also want the partial-integration testing that
single-library-against-release provides. I'm perfectly fine with the
dependencies of those release-tested libraries not being available, and
with having them fail the pre-integration testing, as that would clearly
show which parts the library depends on. That may shock you ;-) But I'd
rather see failures that show likely integration hot-spots than try to be
ultra-smart about making a fully working, partially integrated release. So
to summarize, I'd like to see testing:

1. incremental full trunk
2. single-library against full release (incremental if tester disk space
allows it; see the sketch after the note below)
3. incremental fully integrated release

Note, "trunk" and "release" are just shorthands for the corresponding
concepts in our current procedures. So adjust for possible future
procedures as needed ;-)
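
For mode 2, the tester-side driver could be roughly the sketch below:
remember the last tested revision of the release working copy (which
includes the library under test), and only run that library's tests when
it has moved. The paths, library name, and state file are made up for
illustration; a real script would hook into whatever the regression
tooling ends up being:

# Sketch of mode 2: test one library against the full release tree, but
# only when something actually changed. Paths/names are illustrative.
import os
import subprocess

RELEASE_WC = "/home/tester/boost-release"        # working copy of the release branch
LIB = "regex"                                     # library under test (example)
STATE = os.path.join(RELEASE_WC, ".last-tested")  # remembers the last tested revision

def wc_revision(path):
    # svnversion prints the working-copy revision (or a mixed range)
    return subprocess.check_output(["svnversion", path]).decode().strip()

def run_once():
    subprocess.check_call(["svn", "update", RELEASE_WC])
    rev = wc_revision(RELEASE_WC)
    last = open(STATE).read().strip() if os.path.exists(STATE) else ""
    if rev == last:
        return                                    # nothing changed; skip this cycle
    test_dir = os.path.join(RELEASE_WC, "libs", LIB, "test")
    # incremental build/run of just this library's tests, XML results kept per run
    subprocess.call(["bjam", "--out-xml=results-%s.xml" % rev], cwd=test_dir)
    open(STATE, "w").write(rev)

if __name__ == "__main__":
    run_once()

Run from cron, once per library, that would give the "only re-test what
changed" behaviour John describes above.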

-- 
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org (msn) - grafik/redshift-software.com
-- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail
