Subject: Re: [boost] is the review system in place extremely slow? (was Re: [rfc] rcpp)
From: Vladimir Prus (vladimir_at_[hidden])
Date: 2010-02-28 14:44:58
Rene Rivera wrote:
> Vladimir Prus wrote:
>> Gennadiy Rozental wrote:
>>> Paul A. Bristow wrote:
>>>> But nobody has yet responded to the vital question of whether there are
>>>> resources to support a parallel tree to trunk,
>>>> in addition to sandbox, for what we are calling 'candidate' libraries. It
>>>> really needs to have an identical structure,
>>>> and tests which are run regularly, like trunk. This will encourage more users,
>>>> who have an important, and often informed, voice in reviews.
>>> IMO we do not need this. Candidate libraries should compile and test
>>> against the last release. We can't expect to sync all development efforts.
>> This is pretty straightforward to implement:
> Yeah, which as you mention is fairly easy. So this is for others who
> don't read bjam source as easily as Volodya and I do.
>> 1. Create a branch off the last release
>> 2. For each proposed library living in sandbox, add a couple of svn:externals
>> into the new branch.
> Which could be automated, and it was always my intent to do so via the
> test scripts; they already operate partially that way. But since we
> never agreed on a structure for sandbox libraries, it hasn't really
> been possible. However, since my suggestion of a sandbox structure from
> years ago is apparently the de facto standard now, perhaps it is
> possible.
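For concreteness, a minimal sketch of how (1) and (2) might look; the
URLs and the library name below are illustrative only, not actual
repository paths:

  # (1) branch off the last release
  svn copy https://svn.boost.org/svn/boost/tags/release/Boost_1_42_0 \
      https://svn.boost.org/svn/boost/branches/candidate \
      -m "Create branch for candidate libraries"

  # (2) pull each proposed library in from the sandbox via svn:externals
  svn checkout --depth=empty \
      https://svn.boost.org/svn/boost/branches/candidate candidate-wc
  cd candidate-wc
  svn propset svn:externals \
      "libs/proposed_lib https://svn.boost.org/svn/boost/sandbox/proposed_lib/libs/proposed_lib" .
  svn commit -m "Add svn:externals for proposed_lib"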
>> 3. Modify status/Jamfile.v2 to only run the tests for the new libraries.
>> 4. Have one or more people run tests on the new branch.
> 3 & 4 are already partially supported by status/Jamfile.v2 via the
> "--limit-tests=*" option. For example, to run only the tests for
> program_options: --limit-tests=program_options. And it would be really
> easy to add a "--run-tests=some_lib" option so that the list of libs
> doesn't need to be edited at all.
Oh, I did not realize this was already implemented!
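For the archives, a usage sketch (assuming a checkout of the candidate
branch; the --limit-tests option is as Rene describes above, the rest
is the usual status/ regression run):

  cd candidate-branch/status
  bjam --limit-tests=program_options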
>> 5. Adjust reporting process to produce one more set of tables.
>> Of these, (1) and (2) can be done in a matter of minutes. (3) requires
>> only minimal hacking. (4) requires a volunteer. I personally don't know
>> how to do (5), but it should not be hard either.
> The main problem is #5. And it's the main problem because the report
> system is not really designed for that. And it's a big resource hog. So
> perhaps the best alternative is to have separate results for each tested
> library. That way it's also easier to find someone to run the reports,
> as they won't take many resources.
Alternatively, find a volunteer to rewrite the reporting to not use XSLT. I
guess the display format itself is pretty good, and I have not seen any other
system that offers anything similar, but the use of XSLT is clearly a failed
experiment.
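To make that concrete, the direction I have in mind is generating the
HTML tables directly from the test results rather than going through
XSLT. A toy sketch in Python; the input format here (a flat XML file of
<test name="..." result="..."/> elements) is invented for illustration
and is not the actual runner output:

  # toy_report.py - hypothetical non-XSLT report generator (sketch only)
  import sys
  import xml.etree.ElementTree as ET

  def render(results_xml, out_html):
      # Invented input: <tests><test name="..." result="..."/></tests>
      root = ET.parse(results_xml).getroot()
      rows = []
      for test in root.iter("test"):
          name = test.get("name", "?")
          result = test.get("result", "unknown")
          # green cell for passes, red for anything else
          color = "#cfc" if result == "pass" else "#fcc"
          rows.append('<tr><td>%s</td>'
                      '<td style="background:%s">%s</td></tr>'
                      % (name, color, result))
      with open(out_html, "w") as f:
          f.write("<table border='1'><tr><th>Test</th><th>Result</th>"
                  "</tr>%s</table>" % "".join(rows))

  if __name__ == "__main__":
      render(sys.argv[1], sys.argv[2])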