From: Stefan Seefeld (seefeld_at_[hidden])
Date: 2007-08-01 17:28:54


Beman Dawes wrote:
> Robert Ramey wrote:
>> Stefan Seefeld wrote:

>>>> c) Testing is applied to branches as requested.
>>> I believe how test runs are triggered most efficiently depends on
>>> the usage patterns. Ideally (i.e. with infinite resources), test runs
>>> would be triggered on each change. If that isn't possible, alternative
>>> approaches can be chosen, such as 'no earlier than x minutes after
>>> a checkin', to allow developers to make multiple connected checkins
>>> in a row (though with Subversion there shouldn't be any need for that,
>>> in contrast to CVS). Or, "triggered by checkins but no more frequent
>>> than once per day". Etc.
>>> (See http://buildbot.net/repos/release/docs/buildbot.html#Schedulers)
>> This is the missing piece. I believe it will be available in a relatively
>> short time. The mechanism will be that tests of library x will be run on
>> branch y by any tester interested in doing this. Tests can be run whenever
>> a tester wants to - but it will really only be necessary when a developer
>> requests it.
>
> Right, although as a practical matter most developers will want to test
> against "stable".

What are they testing? And what (and, more importantly, where) are they
developing?

>
> I've been trying the following procedure for the past six or eight weeks:

[...]

> So whenever I want to see if the code is working on non-Windows
> platforms, I sign on to the web sites, request that tests be run, and
> have the results in a couple of minutes.

For the avoidance of doubt: the tests are run on your 'c++0x' branch, right?
How many such branches do you expect to coexist? How many people do you
expect to collaborate on such branches? At what frequency do you expect
branch-specific testing requests to be issued?
Does the procedure scale?

Also, of course, such requests can only be issued for machines (platforms)
that are readily available, right?

I think this is where buildbot enters the picture. It allows one to set up
a set of schedulers that control the actual testing, e.g. by imposing
constraints on how often tests may be run. That will help manage the
available (and probably rather scarce) resources: the build slaves for the
various platforms.
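
To make this concrete, here is a minimal sketch of what such a
configuration could look like with the current buildbot API. The builder
names, branch name and timings are invented for illustration, not taken
from any real setup:

  # Fragment of a hypothetical buildbot master.cfg.
  from buildbot.scheduler import Scheduler, Nightly

  c = BuildmasterConfig = {}
  c['schedulers'] = [
      # Fire only once the tree has been quiet for ten minutes, i.e.
      # "no earlier than x minutes after a checkin".
      Scheduler(name="quick", branch="trunk", treeStableTimer=10*60,
                builderNames=["linux-gcc", "win32-msvc"]),
      # Cap the expensive builders at one run per day by scheduling
      # them at a fixed time each night.
      Nightly(name="daily", branch="trunk",
              builderNames=["darwin-gcc"], hour=3, minute=0),
  ]

A 'try' scheduler (buildbot.scheduler.Try_Userpass) could then serve the
test-on-request use case you describe, without giving every developer an
account on the test machines.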

> Although the process needs a lot of polishing, it already works well
> enough to demonstrate the value of the approach. The tools involved are
> mainly just Subversion and bjam. The same approach would work with other
> testing frameworks.
>
> The bottom line is that I know that code works *before* it gets merged
> into the stable branch. That's the critical point; the exact way the
> testing is done is important operationally, but those details don't
> matter as far as the big picture goes.

Right. Again, for the avoidance of doubt: do you expect the development
branch to be created from the stable branch, to make sure that a passing
test on the development branch translates to a passing test on stable
after a merge, correct?

I'm asking because this essentially means that stable becomes the only
reference throughout Boost development. In fact, not only a reference,
but a synchronization point. It becomes the developer's duty to backport
all changes that go into stable from other development efforts, making
sure the tests still pass, before forward-porting the local changes to
stable.
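
For illustration, that round trip could look roughly like the following,
driving the Subversion command-line client from a small Python script.
All URLs, branch names and revision numbers are placeholders, not the
actual Boost repository layout:

  # Hypothetical sketch of the branch-from-stable round trip.
  import subprocess

  REPO   = "http://svn.example.org/svn/boost"   # assumed repository root
  STABLE = REPO + "/branches/stable"            # the 'stable' reference
  DEV    = REPO + "/branches/dev/mylib"         # hypothetical dev branch

  def svn(*args):
      # Run one svn command; raise if it exits with an error.
      subprocess.check_call(("svn",) + args)

  # 1. Branch off stable, so that a passing run on the branch is
  #    meaningful for stable later.
  svn("copy", STABLE, DEV, "-m", "branch mylib work off stable")

  # 2. Backport: merge what landed on stable since the branch point
  #    into a working copy of the dev branch, then re-run the tests.
  svn("merge", "-r", "1000:HEAD", STABLE, "dev-wc")  # placeholder revisions

  # 3. Forward-port: once the branch tests pass, merge the branch into
  #    a working copy of stable, test again, and commit.
  svn("merge", "-r", "1000:HEAD", DEV, "stable-wc")

Note that steps 2 and 3 have to be repeated for every active branch.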

While I agree this sounds good, it also implies quite a bit of additional
work for every developer.

Thanks,
                Stefan

-- 
      ...ich hab' noch einen Koffer in Berlin...
