Boost Testing :
From: Beman Dawes (bdawes_at_[hidden])
Date: 2007-09-15 19:20:53
Rene Rivera wrote:
> Beman Dawes wrote:
>> What can be done to ensure the problem does not reoccur?
> Hm, hard question to tackle, since managing multiple disparate processes
> is never easy.
>> What can be done to automatically detect and report the problem if it
>> does reoccur?
> The one idea that needs investigation, that Robert has mentioned a few
> times, is in doing regression testing for the testing system itself, and
> related tools. We have tests for some of the tools involved, mainly
> bjam, Booost.Build, and some others, but the coverage and depth is
> spotty. The one idea I've had is to create a mini-Boost branch that can
> track the current state but runs a minimal set of tests. I.e. black-box
> testing the testing system as a whole.
Hum... One simple change would at least have made the problem obvious a
lot sooner; that's the idea of showing the revision number in the report
column heading for each test runner. I had suggested that to make it
easy for developers to see if a late change had made it into a
particular test run, but it would also have been a red flag that
something was going wrong with the runner.
Your idea of including a file in the tarball that identifies the revision
number is a good one. The same file can be generated locally by
regression.py so both tarball and svn update approaches produce the
same information.
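A rough sketch of what such a helper might look like follows; the function
and file names here are hypothetical, not the actual regression.py
interface, and it assumes the Subversion command-line tools are installed:

```python
import subprocess
from pathlib import Path

def get_working_copy_revision(path="."):
    # Ask Subversion for the working copy's revision string; assumes the
    # svnversion tool is on PATH. Returns e.g. "41234" or "41234M".
    out = subprocess.run(["svnversion", path], capture_output=True, text=True)
    return out.stdout.strip()

def write_revision_file(revision, target="boost-revision.txt"):
    # Record the revision in a plain-text file so a tarball carries the
    # same marker an svn checkout would compute locally.
    Path(target).write_text(revision + "\n")
    return revision
```

The tarball-building step would call both functions before packaging,
while regression.py running against an svn checkout would call only the
first and write the file itself, so either path ends up with an identical
marker file the reporting tools can read.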
What program would have to be changed to display the revision number?
Maybe I'll give this a try.