From: Misha Bergal (mbergal_at_[hidden])
Date: 2004-02-04 03:23:41
David Abrahams <dave_at_[hidden]> writes:
> Misha Bergal <mbergal_at_[hidden]> writes:
>> Beman Dawes <bdawes_at_[hidden]> writes:
>>> I think we need a major upgrade to our testing infrastructure. I'd
>>> like to see a machine (perhaps running both Win XP and Linux using a
>>> virtual machine manager) constantly running Boost regression
>>> tests. The tests should be segmented into sets, including an
>>> "everything we've got set", with some sets running more often than
>>> others. As previously discussed, one set should be a "quicky test"
>>> that runs very often, and that developers can temporarily add a test
>>> to that they are concerned about.
>> It seems to me that a lot of time is spent by Boost.Build
>> unnecessarily re-running tests that have been failing,
>> even though the files they depend on haven't changed.
> It used to work the other way, but it caused confusion.
>> If this were fixed, it would make sense to set up continuously running
>> regression tests: a clean build once a day, and incremental updates for the
>> rest of the day.
> We could make it optional and use it only for the Bots.
Agreed. Do you have a rough estimate of what needs to be done to
make it optional?
> There is also the problem that the type traits tests obfuscate their
> include files using macros, so some changes won't cause rebuilds.
> There is also a similar issue with libraries that use the PP library.
> We can customize Boost.Build to be aware of the special inclusion
> macros if necessary.
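To illustrate the problem Dave describes: a naive dependency scanner only
recognizes literal #include directives, so a header named through a macro is
invisible to it and edits to that header never trigger a rebuild. A minimal
sketch (the regex and the macro name BOOST_TT_HEADER are illustrative, not
the actual Boost.Build scanner or type traits macro):

```python
import re

# A naive scanner that only recognizes literal #include directives.
INCLUDE_RE = re.compile(r'#\s*include\s+[<"]([^>"]+)[>"]')

def scan_includes(source: str):
    """Return header names found by literal #include matching."""
    return INCLUDE_RE.findall(source)

# Plain include: the scanner sees the dependency.
plain = '#include <boost/type_traits/is_pod.hpp>\n'

# Macro-obfuscated include (hypothetical macro name): the header is named
# through a macro, so the literal scan finds nothing and no dependency
# is recorded.
obfuscated = (
    '#define BOOST_TT_HEADER <boost/type_traits/is_pod.hpp>\n'
    '#include BOOST_TT_HEADER\n'
)

print(scan_includes(plain))       # dependency detected
print(scan_includes(obfuscated))  # dependency missed
```

Teaching the build system about the specific inclusion macros would let it
expand them before scanning, restoring the missing edges in the dependency
graph.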
The dependency problems seem resolvable. So what is really
needed is to:
1. Implement BuildBot.
2. Change Boost.Build to have an option of not re-running the failed tests.
3. Implement regression test requests for branch/lib/toolset.
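As a rough illustration of step 1, a minimal BuildBot master configuration
for a "quicky" regression builder might look like the sketch below. This
uses the modern BuildBot Python API (which postdates this thread); the
repository URL, poll interval, builder and worker names, and the test
subset passed to b2 are all assumptions, not a working Boost setup:

```python
# master.cfg -- minimal sketch of a Boost quick-regression builder.
from buildbot.plugins import changes, schedulers, steps, util

c = BuildmasterConfig = {}

# Poll the Boost repository for changes (URL and interval are illustrative).
c['change_source'] = [changes.GitPoller(
    repourl='https://github.com/boostorg/boost.git',
    branch='develop', pollInterval=300)]

# A scheduler that fires on every change, per Beman's "quicky test" idea.
c['schedulers'] = [schedulers.SingleBranchScheduler(
    name='quick', change_filter=util.ChangeFilter(branch='develop'),
    builderNames=['quick-tests'])]

# Build factory: incremental update of the sources, then run the quick set.
factory = util.BuildFactory()
factory.addStep(steps.Git(repourl='https://github.com/boostorg/boost.git',
                          mode='incremental'))
factory.addStep(steps.ShellCommand(
    command=['b2', 'libs/config/test'],   # placeholder test subset
    name='run quick tests'))

c['builders'] = [util.BuilderConfig(
    name='quick-tests', workernames=['worker1'], factory=factory)]
```

The "everything we've got" set would be a second builder on a timed
(e.g. nightly) scheduler doing a clean build, leaving the change-triggered
builder for the quick set.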
-- Misha Bergal MetaCommunications Engineering
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk