Boost :
From: Beman Dawes (bdawes_at_[hidden])
Date: 2004-02-12 12:23:01
At 10:56 AM 2/12/2004, Samuel Krempp wrote:
>On Thu, 2004-02-12 at 11:06, Martin Wille wrote:
>> I'm under the impression that the test results aren't monitored
>> by the library maintainers unless we're close to a release.
>> So, it doesn't make much sense to run all the tests all the time.
>
>well, personally I find the batch of test results very useful all the
>time when I'm working on my library. (and that's not always in the last
>days before release)
>I commit, wait a bit, check the regression results of each and every
>compiler listed, try to understand what seems to be the issue with the ones
>failing, find a workaround, then proceed to the next planned commit, etc.
That's the development process I'm targeting when I say cycling tests more
often would be helpful.
The other case is where a developer isn't expecting any change, so isn't
even looking at the latest test results. Automatic email notification might
be very helpful in that case.
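
As a rough sketch of what such a notification could look like (this is not
part of the existing regression scripts; the sender address and the local
SMTP relay are assumptions), a runner could mail a maintainer only when a
test that previously passed starts failing:

    # Hypothetical sketch: mail a maintainer only about newly failing tests.
    # The sender address and the local SMTP relay are assumptions.
    import smtplib
    from email.message import EmailMessage

    def notify_new_failures(previous, current, maintainer):
        """previous/current: sets of failing test names from two regression runs."""
        new_failures = current - previous
        if not new_failures:
            return
        msg = EmailMessage()
        msg["Subject"] = "Newly failing Boost regression tests"
        msg["From"] = "regression-runner@example.org"  # assumed sender address
        msg["To"] = maintainer
        msg.set_content("Newly failing tests:\n" + "\n".join(sorted(new_failures)))
        with smtplib.SMTP("localhost") as s:           # assumed local mail relay
            s.send_message(msg)
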
>hmm, the real problem is that each test is run all the time even when no
>source file used by it has been modified, is that right? Would it be
>hard to recompile and re-launch only what needs to be?
That's already being done, and with good success. On my setup, with seven
compilers, it takes close to three hours to do a run from scratch. If the
only changes are to a single library that other libraries don't depend on,
it might take 15 minutes. But if nothing at all changed, it still takes,
say, 14 minutes. It would be nice to reduce that 14 minutes of "overhead",
where some tests are recompiled even though we know the recompiles will
fail, and some tests are re-executed even though we already know the
results. That being said, we don't want to reduce reliability, of course.
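
As an illustration of skipping that overhead (a sketch only, not the actual
regression tools; the state file and helper names are made up), a runner
could record a hash of each test's input files and re-run the test only
when that hash changes:

    # Hypothetical sketch: skip a test when none of its input files changed
    # since the last recorded run. STATE_FILE and the helpers are assumptions.
    import hashlib
    import json
    import os
    import subprocess

    STATE_FILE = "test_state.json"

    def inputs_digest(paths):
        """Combined hash of a test's source and dependency files."""
        h = hashlib.sha1()
        for p in sorted(paths):
            with open(p, "rb") as f:
                h.update(f.read())
        return h.hexdigest()

    def run_if_changed(test_name, command, input_paths):
        state = {}
        if os.path.exists(STATE_FILE):
            with open(STATE_FILE) as f:
                state = json.load(f)
        digest = inputs_digest(input_paths)
        if state.get(test_name) == digest:
            print(test_name, "- inputs unchanged, skipping")
            return
        subprocess.run(command)  # run the test; record the new state either way
        state[test_name] = digest
        with open(STATE_FILE, "w") as f:
            json.dump(state, f)

Caching the previous result alongside the hash would also cover the
re-execution case mentioned above, so a known outcome could be reported
without rebuilding or re-running anything.
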
--Beman