From: David Abrahams (dave_at_[hidden])
Date: 2005-03-20 18:19:33


"Jeff Garland" <jeff_at_[hidden]> writes:

> On Sat, 19 Mar 2005 20:12:55 -0500, David Abrahams wrote
>> "Jeff Garland" <jeff_at_[hidden]> writes:
>>
>> > Fair enough -- I was almost afraid to ask since I hate to make the release
>> > process any harder than it is now.
>>
>> Agreed, it's too hard, but that shouldn't stop us from talking about
>> what we would be doing in an ideal world. Accordingly:
>>
>> - A health report for the latest release should always be available
>> on the website.
>
> Yes, it is there -- in the middle of the front page, but even so I missed it.
> So I think the link belongs under 'regression tests', labeled something like
> 'current release'.

I think it should also be available from an obvious link when you go
to get information on or download the current release.

>> - Regressions from the previous release are nice to know but less
>> important. I realize we show both in one report, but this may
>> help us adjust our emphasis or coloring (maybe it's already
>> perfect in the user report; I don't know)
>
> In fact, I think from the user perspective the question really goes something
> like: "I'm using Intel 8.1 on Windows and Linux and I want to use Python,
> smart_ptr, and serialization -- can I expect these libraries to work with
> release xyz?" And several variations on that theme. So in an "ideal world"
> scenario I would have a web form where the user could enter her needs and a
> script would filter the regression results down to the set of interest for the
> user.

Okay, yeah; that would be an improvement.
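Sketching it further: once the results are flattened into something
machine-readable, the filtering script behind that form is trivial.
For example (the CSV layout here is invented):

    # filter_results.py -- hypothetical sketch; assumes the regression
    # results have been flattened into a CSV with these invented columns:
    #   library,toolset,platform,test,result
    import csv

    def filter_results(path, libraries, toolsets, platforms):
        """Yield only the rows matching the user's request."""
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if (row["library"] in libraries
                        and row["toolset"] in toolsets
                        and row["platform"] in platforms):
                    yield row

    for row in filter_results("release-xyz-results.csv",
                              libraries={"python", "smart_ptr",
                                         "serialization"},
                              toolsets={"intel-8.1"},
                              platforms={"windows", "linux"}):
        print(row["library"], row["toolset"], row["platform"],
              row["test"], row["result"])

The hard part is keeping the data behind it current, which brings us
back to the testing infrastructure.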

>> - A health report for the current state of the repository should
>> always be available on the website.
>>
>> - Regressions from the previous release are crucial to know also
>>
>> - When we branch for a release, we absolutely must track the
>> release branch, but we also should be continuing to display
>> the health of the trunk
>
> Agree -- I think the big blocker here is expanding the set of regression
> testers during this period.

This is part of why I think BuildBot is a good idea.
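At bottom, what I want from BuildBot is just: watch the repository, run
the tests whenever something changes, and publish the results. A toy
sketch of that loop -- not BuildBot's actual API, and the CVS and test
commands are placeholders:

    # toy continuous-testing loop in the spirit of BuildBot; the
    # version-control query and the test command are placeholders
    import subprocess, time

    def latest_checkin():
        # placeholder: ask CVS what the newest check-in is
        out = subprocess.run(["cvs", "history", "-c", "-a"],
                             capture_output=True, text=True)
        return out.stdout.splitlines()[-1] if out.stdout else None

    def run_tests():
        # placeholder: whatever invokes the regression suite
        return subprocess.run(["bjam", "test"]).returncode == 0

    last = None
    while True:
        newest = latest_checkin()
        if newest != last:           # someone checked in: kick off a run
            print("tests", "passed" if run_tests() else "FAILED")
            last = newest
        time.sleep(300)              # poll every five minutes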

> Another factor is that the set of regression compilers/platforms tested
> between releases is not really stable. It has been growing, so we
> now have 'new results' that can't really be diffed against the last
> release. For example, we now have a regression tester for Solaris
> which we didn't have in 1.32. I'm not sure that's obvious from the
> way we display results.

It probably isn't.
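One cheap fix would be to explicitly flag any toolset/platform that has
no baseline in the previous release, rather than letting it blend into
the diff. Roughly (the shape of the result data is invented):

    # flag toolsets with no previous-release baseline; each release's
    # results are modeled (inventedly) as
    #   {(toolset, platform): {test name: passed?}}
    def compare(previous, current):
        for key, tests in sorted(current.items()):
            if key not in previous:
                print(key, "-- NEW since last release, nothing to diff")
                continue
            regressions = [t for t, ok in tests.items()
                           if not ok and previous[key].get(t, False)]
            print(key, "--", len(regressions), "regression(s)")

    previous = {("gcc-3.4", "linux"): {"smart_ptr_test": True}}
    current = {("gcc-3.4", "linux"): {"smart_ptr_test": False},
               ("sunpro", "solaris"): {"smart_ptr_test": True}}
    compare(previous, current)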

>> - We ought to have a system for automatically notifying anyone who
>> checks in a regression, and displaying information about the
>> change responsible for the regression on the status page.
>
> Do we even have a way of tracking the check-ins?

CVS?

> That might be a good first step. I notice that sourceforge seems to
> be generating some sort of email when I check in, but I don't know
> of a way to subscribe to the changelist.

We can set up a mailing list for it to send to, if you want to see
those. But I don't think that would solve the problem by itself.
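Mechanically it's not hard: CVS will run an arbitrary command from
CVSROOT/loginfo on every commit, with the log message on standard
input. A sketch of such a hook (the addresses are placeholders, and
tying check-ins to the regressions they cause is left as future work):

    # notify_checkin.py -- hypothetical CVS loginfo hook: CVS pipes the
    # commit log to stdin; forward it to a check-ins mailing list
    import smtplib, sys
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "cvs@boost.example.org"            # placeholder
    msg["To"] = "boost-checkins@boost.example.org"   # placeholder list
    msg["Subject"] = "CVS check-in: " + " ".join(sys.argv[1:])
    msg.set_content(sys.stdin.read())                # the log message

    with smtplib.SMTP("localhost") as s:
        s.send_message(msg)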

>> - There should be a way for a developer to request testing of a
>> particular branch/set of revisions
>
> I'd put this high on my list. Without it there is no practical way for
> developers to regression test on a branch, which means that using branches
> for development isn't really practical.
>
>> - There should be enough computing power to handle all these tests
>> in a timely fashion.
>
> Guess it depends on what you consider timely -- 1 minute, 1 hour, 1 day? We
> are somewhere in the 2 day sort of range now. From the developer perspective,
> the ideal world would be 'right now'.

Yeah, I mean something very close to "right now."

> I've got these changes I'm working on,
> I've tested on my core compilers and I'm ready to see the results for other
> compilers. It seems like most testers run one regression test per day, while
> others run several. So depending on when you check in, it takes up to a
> couple of days to really see the results for all platforms/compilers. The only
> way I see us getting closer to the ideal is more machines really dedicated to
> just Boost testing...

Those irons are in the fire now; see my "Testing Farm" thread.

>> We also need to discuss how the main trunk will be treated. Gennadiy
>> has suggested in the past that checking in breaking changes to the
>> trunk is a perfectly legitimate technique for test-driven
>> development. I agree in principle, but that idea seems to generate a
>> lot of friction with other developers trying to stabilize their test
>> results.
>
> I agree with him about wanting to use the compiler to find breakage,
> but the problem is that his particular library is one that many
> libraries depend on. As a result, it really needs to stay stable
> during the release period to ensure that we don't have 2 days of
> downtime while something Boost-wide is broken.

If we could initiate tests on a branch by request we wouldn't have
this problem; he could run all those tests before merging.
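The request mechanism itself could be almost embarrassingly simple --
say, a queue directory that the test runners poll. Everything in this
sketch is invented:

    # request_branch_test.py -- invented protocol: drop a request file
    # into a shared queue directory; idle test runners poll it, check
    # out the named branch, and run the named libraries' tests
    import json, os, time

    QUEUE = "/shared/test-queue"        # hypothetical shared directory

    def request_test(branch, libraries):
        req = {"branch": branch, "libraries": libraries,
               "requested": time.time()}
        name = "%s-%d.json" % (branch.replace("/", "_"),
                               int(time.time()))
        with open(os.path.join(QUEUE, name), "w") as f:
            json.dump(req, f)

    request_test("unit_test_refactor",
                 ["test", "python", "serialization"])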

> So I really think
> that we need to start thinking about a dependency analysis of Boost
> and an added freeze date for 'core libraries' that need to stay
> stable during the release process.

I'd really like to avoid that.
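Though the dependency data itself would be cheap to get: a crude scan
of #include directives gets most of the way there. A rough sketch,
assuming a checkout with headers under boost/<library>/ (top-level
headers come out crude):

    # crude Boost dependency scan -- assumes a checkout at BOOST_ROOT
    # with headers under boost/<library>/...; every include of
    # <boost/x/...> is taken as a dependency edge
    import collections, os, re

    BOOST_ROOT = "/home/dave/boost"     # hypothetical checkout location
    INCLUDE = re.compile(r'#\s*include\s*[<"]boost/([^/">]+)')

    deps = collections.defaultdict(set)
    top = os.path.join(BOOST_ROOT, "boost")
    for dirpath, _, files in os.walk(top):
        lib = os.path.relpath(dirpath, top).split(os.sep)[0]
        for name in files:
            if not name.endswith((".hpp", ".h", ".ipp")):
                continue
            with open(os.path.join(dirpath, name),
                      errors="ignore") as f:
                for line in f:
                    m = INCLUDE.search(line)
                    if m and m.group(1) != lib:
                        deps[lib].add(m.group(1))

    for lib in sorted(deps):
        print(lib, "->", ", ".join(sorted(deps[lib])))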

> Developers will have to finish what they want in the release
> earlier. This could certainly be relaxed if branch-testing is
> available since a developer could be much more sure of avoiding
> mainline breakage...

Yup.

>> The ability to request testing of a branch might go a long
>> way toward eliminating that sort of problem.
>
> Agree completely.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com
