From: Martin Wille (mw8329_at_[hidden])
Date: 2004-02-12 05:06:33
Beman Dawes wrote:
> Anyhow, I think your point about multiple reporting is a good one. The
> volume of tests is just too high. Fewer, more comprehensive, tests would
> be easier to monitor.
This could be implemented by adding another build
target, as sketched below.
I'm under the impression that the test results aren't monitored
by the library maintainers unless we're close to a release.
So, it doesn't make much sense to run all the tests all the time.
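A reduced set of tests could then be driven nightly,
with the full suite reserved for release preparation.
A minimal sketch (Python; the "smoke-tests" target
name is made up, only the bjam invocation is real):

    import subprocess

    # Run only a hypothetical reduced "smoke-tests" target;
    # the full test suite remains a separate build target.
    subprocess.call(["bjam", "smoke-tests"])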
We had some trouble with process_jam_log recently.
If that tool had worked correctly, monitoring the
test results would have been much simpler.
> Also fewer compilers. Do we really need to test
> every version of GCC and VC++ for the last four years?
One problem is that different Linux distributions
prefer different compiler versions, and developers
in turn tend to use the compiler shipped by the
distribution they use. This results in a great
diversity of compiler versions being actively used.
Of course, if a decision were made not to support
very old compilers like gcc 2.95, then I'd happily
remove them from my list of toolsets.
> If our testing
> was more focused, we could cycle the tests more often too.
We could cycle more often if all test programs
compiled and ran quickly. However, that is
currently not the case.
Test authors are probably unaware of the compile
and run times their tests need on systems and
compilers they don't use themselves (e.g.
random_test takes _very_ long to compile here).
Perhaps it would be helpful to report compile and
run times in the test results. Slow tests could
then be identified easily by displaying the
results sorted by time.
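As a rough illustration, such a report could be
produced along these lines (Python; the test names
and compile commands are made up, and a real
harness would take its data from bjam or
process_jam_log rather than a hard-coded list):

    import subprocess
    import time

    # Placeholder (name, compile command, run command) entries;
    # purely illustrative, not the actual regression tests.
    TESTS = [
        ("random_test",
         ["g++", "-o", "random_test", "random_test.cpp"],
         ["./random_test"]),
        ("any_test",
         ["g++", "-o", "any_test", "any_test.cpp"],
         ["./any_test"]),
    ]

    def timed(cmd):
        # Run cmd and return its wall-clock duration in seconds.
        start = time.time()
        subprocess.call(cmd)
        return time.time() - start

    results = []
    for name, compile_cmd, run_cmd in TESTS:
        results.append((timed(compile_cmd) + timed(run_cmd), name))

    # Slowest tests first, so the worst offenders stand out.
    for total, name in sorted(results, reverse=True):
        print("%8.1fs  %s" % (total, name))

Sorting in descending order would put tests like
random_test right at the top of the report.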