From: David Abrahams (dave_at_[hidden])
Date: 2007-04-02 08:15:13
on Sun Apr 01 2007, Stefan Seefeld <seefeld-AT-sympatico.ca> wrote:
> David Abrahams wrote:
>> on Sun Apr 01 2007, Stefan Seefeld <seefeld-AT-sympatico.ca> wrote:
>>
>>>> Unfortunately, I don't believe all build variants _are_ tested. But
>>>> anyway, I don't understand what you mean about modularity and
>>>> slowdowns. The tests are distributed across many machines. If you
>>>> dedicate a single testing machine to one toolchain and build variant,
>>>> you can't do this much faster. Many testers do incremental testing,
>>>> so only the changed stuff gets rebuilt.
>>> You are right, the total execution time is still the same. However,
>>> a more modular system is more easily parallelizable.
>>
>> I still don't know what you mean by "more modular"
>
> I mean a system that provides many smaller test suites,
We already have that. Each library has its own test suite.
What we don't have is any way to partition Boost among different
testers.
> so users
> can offer to run them individually. (That would give the additional
> benefit of test-suite-specific parametrization. For example, Boost.Python
> may be tested against different Python versions, something that doesn't
> make any sense for any other part of Boost.)
Agreed.
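For concreteness, that kind of per-suite parametrization might be driven by
something as simple as the sketch below. The library names, the Python
versions, and the bjam invocation are only illustrative; no such interface
exists in the current regression scripts.

    # Hypothetical driver: each library declares its own parameter matrix,
    # and a tester runs only the suites it has volunteered for.
    import subprocess

    # Illustrative matrix -- not the real regression configuration.
    test_matrix = {
        "python":     [{"python-version": v} for v in ("2.3", "2.4", "2.5")],
        "filesystem": [{}],   # no extra parameters make sense here
    }

    def run_suite(library, params):
        # Placeholder for whatever command the real harness would use to
        # run a single library's tests with the given parameters.
        cmd = ["bjam", "libs/%s/test" % library]
        cmd += ["%s=%s" % item for item in params.items()]
        return subprocess.call(cmd)

    # This tester only offered to run the Boost.Python suite.
    for params in test_matrix["python"]:
        run_suite("python", params)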
>>> More testers could contribute cycles, as the resource requirements
>>> wouldn't be quite as high. That, in turn, would shorten the cycle from
>>> check-in to the report containing the associated test results, helping
>>> to get fixes in more quickly. Etc. etc.
>>
>> Maybe you're suggesting that tests of the whole suite on a single
>> compiler and platform could be distributed across many machines? That
>> could be vulnerable to small platform differences, but maybe there's a
>> way around that.
>
> If there are potentially 'small platform differences', they need to be
> captured by the testing harness anyway. Right now we can get multiple test
> runs with the same label (i.e. the same toolchain / platform) but with
> differing results. One way to enhance the testing harness is to control
> more strictly the environment in which test suites are executed. buildbot
> (http://buildbot.sourceforge.net/) would provide excellent tools for moving
> in that direction. (Rene has been suggesting that for a long time, for
> example here:
> http://article.gmane.org/gmane.comp.lib.boost.devel/119457)
I hope you'll be at BoostCon; I'd like to get your ideas in the mix
for a discussion of (and maybe a sprint on) the testing architecture.
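To make the buildbot idea a bit more concrete, a master.cfg with one builder
per library might look roughly like the sketch below. This assumes the
buildbot 0.7-era configuration interface; the repository URL, slave names,
and test command are placeholders, not a working setup.

    # master.cfg fragment (sketch only)
    from buildbot.process import factory
    from buildbot.steps.source import SVN
    from buildbot.steps.shell import ShellCommand

    c = BuildmasterConfig = {}
    c['slaves'] = []        # testers would register their machines here
    c['slavePortnum'] = 9989
    c['builders'] = []

    def make_builder(library, slavename):
        f = factory.BuildFactory()
        # The master controls checkout and environment, so two runs with
        # the same label can't silently diverge.
        f.addStep(SVN(svnurl="http://svn.example.org/boost/trunk",
                      mode="update"))
        # Run only this library's tests, keeping the checkin-to-report
        # cycle short for that library.
        f.addStep(ShellCommand(command=["bjam", "libs/%s/test" % library],
                               description="testing %s" % library))
        return {'name': "%s-%s" % (library, slavename),
                'slavename': slavename,
                'builddir': "%s-%s" % (library, slavename),
                'factory': f}

    c['builders'].append(make_builder("python", "linux-gcc-box"))
    c['builders'].append(make_builder("filesystem", "win32-msvc-box"))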
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

Don't Miss BoostCon 2007! ==> http://www.boostcon.com