
Boost Testing:

From: Beman Dawes (bdawes_at_[hidden])
Date: 2007-10-08 09:10:33


Rene Rivera wrote:
> Beman Dawes wrote:
>> The release branch has been created, using Version_1_34_1 as the
>> starting point. The URL is http://svn.boost.org/svn/boost/branches/release.
>>
>> For now, no one except the release management team should commit to the
>> release branch.
>>
>> We need to set up the infrastructure and start testing on this branch.
>>
>> Rene has agreed to be testing manager; this is really just recognition
>> of the role he has already been playing. So this post is really a
>> question to Rene: who needs to do what to get us started?
>
> First task is to test that regression.py can handle testing the release
> branch. I.e. someone takes a shot at running one set of tests. This of
> course immediately means we have to merge into the release branch the
> testing infrastructure changes: bjam, Boost.Build (BB), and the
> regression tools.
> Which raises the question of how we should manage changes to the tools
> for the release? Bjam is easy, as I made its recent release so that it
> could be frozen for the Boost release. For the others, how should we
> approach the process? Develop on the trunk and merge to release? Or the
> other way around? Or should we move the tools out of the release tree,
> as I suggested some time ago?

For this release, I'd like to develop on trunk and then merge to release
once changes have been shown to be reliable.
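
In practice a merge would look something like this (the revision range
and tools path are only for illustration; substitute the actual change
sets being merged):

    cd release    # a working copy of branches/release
    # revision range and path below are illustrative only
    svn merge -r 40500:40520 \
        http://svn.boost.org/svn/boost/trunk/tools/regression tools/regression
    svn commit -m "Merged regression tool fixes from trunk"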

If that approach proves to be a problem, I'm open to moving to a
separate tree, but I'd like to defer that until we have more experience.

>> My preference is that testers currently testing on trunk don't have to
>> do anything unless they want to. We still need good test coverage on the
>> trunk. Any current tester who has the resources is welcome to also test
>> on the release branch, or switch to release branch testing.
>
> Hm, we are going to have to be a bit more formal than that. We need some
> real criteria for deciding which testers and platforms can go from trunk
> to release.

Agreed.
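
Mechanically, moving a tester from trunk to release should be cheap; as
far as I know it is mostly a matter of pointing the regression script at
the release tag, along these lines (the runner id and toolset are
placeholders, and the exact option names should be checked against the
regression docs):

    # runner id and toolset are placeholders; check the regression docs
    python regression.py --runner=YourRunnerId --tag=branches/release \
        --toolsets=gcc-4.1.2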

> In the main list you seem to be suggesting that release
> testers need to have consistent, frequent, and hence stable test setups.
> Some possible criteria:
>
> * Can run tests at least X times a day. Once a day is definitely the
> minimum, but we could require twice a day so that results are available
> roughly covering US and non-US time zones, and people have fresh results
> during regular working hours.

Yes. Once a day is minimal, but I'd like to see key platforms cycle more
often. For Windows and Linux, I'm planning to cycle tests three or four
times a day.
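
On the Linux side that is just a cron entry along these lines (the paths
and runner id are made up); on Windows the Task Scheduler can do the same:

    # times, paths, and runner id are illustrative; this cycles four times a day
    0 0,6,12,18 * * * cd /home/boost/regression && python regression.py --runner=YourRunnerId --tag=branches/release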

> * The tester can guarantee to be active on this list (meaning they
> respond to emails within a few hours) so we can resolve testing
> problems quickly.

Yep, although "within a few hours" is a bit too aggressive; it's OK if
they sleep once in a while. :-)

> Aside from testers themselves, I would feel much better about testing if
> we had redundancy in the result processing. Even though it looks like
> meta-comm is running the processing again, they seem to be totally
> missing from this list.

Good point. I've just built a new machine that I plan to dedicate to
Boost testing. It will run 24/7 until the release ships. It should have
plenty of capacity to also run result processing for the release branch.
Can the result processing run on either Windows or Linux? If so, is one
preferred over the other? The plan is to run Vista as the host OS, with
Ubuntu running virtualized under Virtual PC. Ask me in a couple of days
if that actually works. :-)

--Beman

