From: Stefan Seefeld (seefeld_at_[hidden])
Date: 2007-08-01 19:30:48


Robert Ramey wrote:
> Stefan Seefeld wrote:
>
>>> Right, although as a practical matter most developers will want to
>>> test against "stable".
>> What are they testing? And what (and, more importantly, where) are
>> they developing?
>
> They are testing changes to the libraries they are developing. They
> depend only on the last/next released version of Boost.

But no development is taking place on 'stable'. Why test against it
(for purposes other than preparing a release)?

>> For avoidance of doubt: the tests are run on your 'c++0x' branch,
>> right? How many such branches do you expect to coexist?
>
> approximately one per developer.

That doesn't answer my question, though: I'm wondering how many
build/test requests need to be dealt with. Where do the testing
resources come from?

>
>> How many people do
>> you expect to collaborate on such branches?
>
> one or two.
>
>> At what frequencies do
>> you expect branch-specific testing requests to be issued?
>
> as needed - depending on how hard one is working, it could
> be as often as once a day, but I would expect 5-10 times for
> each major library revision.

I must be misunderstanding something fundamental. What is
being tested? The code under development? Running tests
on stable won't tell me anything about my development branch.

>> Does the procedure scale?
>
> Very much so. Instead of testing the whole of Boost - whether
> anyone needs it or not - only one library is tested at a time.

Indeed. That will remove redundancy.

> Currently, the time to test grows quadratically: number of
> libraries x time to run a test, where the time to run a test
> itself grows with the number of libraries.

Huh? That's only true if each test is run stand-alone (as opposed
to incrementally, with an update instead of a fresh checkout). And
even then, if I only build a test, only its prerequisites should
be built. Since that shouldn't depend on the overall number of
libraries, I don't see how the time to test is quadratic.

> Under the new system, testing
> will grow only linearly with the number of libraries, as only
> the library branch is tested, on request. This is a fundamental
> motivation for this system.

Yes.
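
(To spell out the arithmetic as I understand your model - rough
numbers, not measurements: with N libraries, a full run executes N
test suites, and if each suite's build itself grows with N through
its dependencies, the total is roughly N * cN = cN^2, while testing
one library on request costs roughly cN. My point above is that
incremental builds should already remove much of that inner factor
of N.)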

>> Also, of course, such requests can only be issued for machines
>> (platforms) that are readily available, right?
>
> LOL - this is "secret sauce" which is still secret - at least from me.
> I presume it will be revealed to the "rest of us" when it's "ready".

You make me curious. Is someone setting up a build farm? All the more
reason to set up a buildbot harness. :-)

I think my main concern is that "on request" part. I believe there
needs to be some scheduling to manage the resources, no matter how
many there are, where they are located, and how they interact.

>> I think this is where buildbot enters the picture. It lets us set up
>> a set of schedulers that control the actual testing, e.g. by imposing
>> constraints on how often tests may be run. That will help manage
>> the available (and probably rather scarce) resources: build slaves
>> for the various platforms.
>
> Something that performs this function will be needed, but I doubt it
> will be as elaborate as you suggest. But who knows - it seems it's
> still being experimented with.

Why so secretly? Rene and I have been talking about a buildbot
harness for many months now. I would very much appreciate it if things
were handled a little more transparently, to avoid wasting effort.
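
To make the scheduling idea concrete, here is roughly what I have in
mind for a buildbot master.cfg. A sketch only: the builder names, the
port, and the credentials are made up, and the exact API depends on
the buildbot version.

# master.cfg fragment (names, port, and credentials invented)
from buildbot.scheduler import Scheduler, Try_Userpass

c = BuildmasterConfig = {}

c['schedulers'] = [
    # Change-driven testing of 'stable': wait until the tree has been
    # quiet for two hours, so a burst of checkins costs one build.
    Scheduler(name="stable",
              branch="stable",
              treeStableTimer=2*60*60,
              builderNames=["linux-gcc", "win-msvc"]),

    # On-request testing: a developer submits a 'try' job for his
    # library branch and gets results for just that change.
    Try_Userpass(name="try",
                 port=8031,
                 userpass=[("developer", "password")],
                 builderNames=["linux-gcc", "win-msvc"]),
]

The treeStableTimer is exactly the kind of constraint on how often
tests may run that I mean; the try scheduler would be the "on
request" part.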

>> Right. Again, for avoidance of doubt: do you expect the development
>> branch to be created from the stable branch, to make sure a passing
>> test on the development branch translates to a passing test on stable
>> after a merge, correct?
>
> Now you've hit upon the motivation for my original post. I was under
> the impression that the "trunk" would be the last released version. It
> turns out that it's not so. But no matter. With SVN there is no special
> status accorded "trunk"; we can just branch off the last release. The
> only thing we need is a set of "Best Practices" (or whatever one
> wants to call it) so we're all in sync.

We totally agree. That's what I was referring to as checkin policies for
all available branches, trunk and stable included.
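
(For avoidance of doubt, in SVN such a branch is a single copy, e.g.
"svn copy <repo>/tags/release/1.34.1 <repo>/branches/serialization-dev"
- repository paths invented for illustration - and the eventual
integration back into stable is an ordinary "svn merge".)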

>> I'm asking because this essentially means that stable becomes the only
>> reference throughout Boost development. In fact, not only a reference,
>> but a synchronization point. It becomes the developer's duty to backport
>> all changes that go into stable from other development efforts, making
>> sure the tests still pass, before forward-porting the local changes to
>> stable.
>
> Hallelujah - you've got it!!!
>
>> While I agree this sounds good, it also implies quite a bit of
>> additional work for every developer.
>
> It's A LOT LESS work for the developer. Under the current (old) system,
> every time a test failed I would have to investigate whether it was due
> to a new error in my library or some change/error in something
> that the library depended upon. It consumed waaaaay too much time. I gave
> up committing changes except on a very infrequent basis. It turned out
> that the failures still occurred, but I knew they weren't mine, so I could
> ignore them. Bottom line - testing was a huge waste of time, providing
> no value to a library developer.
>
> Don't even start on the effort trying to get a release out when everything
> is changing at once.

Don't worry. On that we very much agree, too. :-)

Regards,
                Stefan

-- 
      ...I still have a suitcase in Berlin...
