From: Aleksey Gurtovoy (agurtovoy_at_[hidden])
Date: 2004-07-29 07:23:04
Joaquín Mª López Muñoz writes:
> Aleksey Gurtovoy wrote:
>> Now that we have Boost-wide reports up and running, I'd like to
>> encourage all regression runners whose results are not represented
>> there yet to take a tiny bit of time to join in.
> I'd like to add that some of the tests uploaded are difficult to
> identify because the toolset name is too generic (e.g. "gcc" or
> "cw"). It'd be great if those names could be more informative, like
> "gcc-3.2.3-sunos5" or whatever.
It's crucial, actually, because explicit failure markup is currently
based on toolset names alone, and marking a library as "unusable" with,
for example, 'gcc' is *way* too broad a claim.
> I may be wrong, but I think it
> only takes renaming the corresponding *-tools.jam file.
Or, better yet, inheriting from it as Rene has already shown. In
fact, if you are running tests through 'regression.py', all it takes
to automate this is to place something like the attached 'patch_boost'
script ('patch_boost.bat' on Windows) in the driver scripts' directory.
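For reference, inheriting a toolset in Boost.Build v1 typically amounts
to a one-line file. A minimal sketch, assuming the v1 'extends-toolset'
rule; the file name below is illustrative, not prescribed:

```jam
# gcc-3.2.3-sunos5-tools.jam (hypothetical name) -- reuse the stock
# gcc toolset under a more descriptive name, so the reports and the
# failure markup can tell this configuration apart from plain "gcc".
extends-toolset gcc ;
```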
> Slightly off-topic: is there any estimation for the branching date?
I'd like to target for Monday evening, but before committing to it we
*really* need to have an objective picture of the CVS state. In
particular, that means:
1) Having all "supported" platforms in the Boost-wide reports.
2) Having reports with "beta" libraries and non-required toolsets.
We hope to have the second one up and running sometime today.
> Any news about the new MPL?
It's still in the works :(, more for lack of time than any other reason.
> I've got the impression that the rate of fixes has decreased these days.
I believe that's partly because it's hard to see the progress behind
the new libraries and the compilers nobody cares about that are
populating the field with yellow cells. It's generally discouraging to
work on something and not see a visible improvement over a relatively
short period of time.
Other contributing factors are the long turnaround times (basically 24
hours) and the fact that many patches that could be committed
instantly are instead submitted to the list and have to be applied by
somebody with CVS access, consuming precious time on both sides (the
patch submitter's and the developer's).
Note that the problem with long regression cycles is *not* that it
takes too long to run the tests -- the Boost-wide reports effectively
solve that problem by enabling the testing to be highly distributed
without losing a bit of the results' informativeness. Our average
regression cycle is 24 hours because many of the regression runners
cannot afford to run the tests continuously rather than once daily.
I'm not sure what can be done about this besides finding more
volunteers with a machine to spare, and/or a greater number of
volunteers to run the tests in an interlaced fashion (e.g. if five
people who volunteer to test with gcc 3.2 on Linux can arrange to run
the tests once daily but at different times, the gcc cycle shrinks to
about 5 hours). In either case, it's going to take time to build this
up, and at the moment people who have local access to a particular
compiler are in the best position to fix things in an agile way.
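The interlacing arithmetic above can be checked with a
back-of-the-envelope sketch (illustrative numbers only: it assumes each
runner completes one full pass per day and the runs are staggered
evenly over 24 hours):

```python
def effective_cycle_hours(runners: int, day_hours: float = 24.0) -> float:
    """Worst-case wait for a fresh result when `runners` volunteers
    each test once daily at evenly spaced times of day."""
    return day_hours / runners

# One runner: the full 24-hour cycle.
print(effective_cycle_hours(1))  # 24.0

# Five gcc-3.2/Linux volunteers, staggered: roughly a 5-hour cycle.
print(effective_cycle_hours(5))  # 4.8
```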
As for the patches, I believe everybody will win if we grant write
access to the few people who have been actively contributing fixes --
those who want it, of course.
But I want to reiterate my original point -- all other factors
notwithstanding, the process of fixing regressions has to be
rewarding, and reports that make the progress visible and real play a
significant role in it. They also have to be representative, so, to
our precious regression runners whose results are not in the
Boost-wide reports yet -- please take the time to join in!
> so maybe it's about time to mark failures and pack.
Regressions (red cells) aside, that's basically what is going to be
done with the failures that are not resolved by the branch-for-release
date. I hope that by then most of them will already be marked up.
--
Aleksey Gurtovoy
MetaCommunications Engineering
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk