
From: Jens Maurer (Jens.Maurer_at_[hidden])
Date: 2001-01-22 18:03:27


David Abrahams wrote:
> 1. There's really too much output to get a quick, clear view of what might
> have been broken. What I really want is to have the testing system compare
> the results I'm getting with the previously-checked-in state and tell me how
> many tests I've broken (and what I've fixed!)

Done. There's a new option "--diff" for the regression test program
which reads the current cs-XXX.html file before overwriting it.
The output style for differences is similar to Beman's script; please
adjust to taste. The difference mechanism can cope with new or removed
tests, but not with new or re-ordered compilers; in that case, it is
advisable not to use "--diff" and to generate a fresh cs-XXX.html file
instead.
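
The comparison boils down to something like the following sketch
(illustrative only: the function name and the bool pass/fail
simplification are made up for this mail, not the actual
regression.cpp code):

  #include <map>
  #include <string>
  #include <iostream>

  // Compare the previous run's results against the current one and
  // report every test whose status changed.  Results are keyed by
  // test name; true means "pass".
  void report_differences(const std::map<std::string, bool>& old_results,
                          const std::map<std::string, bool>& new_results)
  {
    typedef std::map<std::string, bool>::const_iterator iterator;
    for (iterator it = new_results.begin(); it != new_results.end(); ++it) {
      iterator old = old_results.find(it->first);
      if (old == old_results.end())
        std::cout << it->first << ": new test\n";
      else if (old->second != it->second)
        std::cout << it->first
                  << (it->second ? ": now passes\n" : ": now fails\n");
    }
  }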

We should check in only the non-highlighted versions; i.e. the
highlighting feature is just an internal debugging aid for the
individual developer.

> 2. Programs which fail at runtime typically give me no feedback about why
> they've failed,

Do you mean "core dump vs. normal failure detected by the application"?
system() is not an adequate interface to provide that amount of
information. On Unix, I know how to get that information, but I have
no idea about Windows.
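
On Unix, the extra detail is available from the child's wait status,
which system() discards; roughly (POSIX only, and purely a sketch, not
what the regression tool currently does):

  #include <sys/types.h>
  #include <sys/wait.h>
  #include <unistd.h>
  #include <cstdio>

  // Run the test executable and distinguish "terminated by a signal"
  // (e.g. a crash that dumps core) from an ordinary non-zero exit.
  int run_and_classify(const char* path)
  {
    pid_t pid = fork();
    if (pid == 0) {
      execl(path, path, (char*)0);
      _exit(127);                     // exec failed
    }
    int status = 0;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status))
      std::printf("terminated by signal %d\n", WTERMSIG(status));
    else if (WIFEXITED(status) && WEXITSTATUS(status) != 0)
      std::printf("normal failure, exit code %d\n", WEXITSTATUS(status));
    return status;
  }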

Retrieving more information about application-level test failures
requires a test protocol and a test framework. Otherwise, just have a
look at the message log of compiler and application output.
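
Such a protocol could be as simple as a fixed output format plus the
exit code; a minimal sketch (the TEST_CHECK macro is hypothetical, not
an existing Boost facility):

  #include <cstdio>

  static int errors = 0;

  // Report each failed check in a fixed, machine-parsable format and
  // signal overall failure through the exit code.
  #define TEST_CHECK(expr) \
    do { if (!(expr)) { ++errors; \
      std::printf("FAIL %s:%d: %s\n", __FILE__, __LINE__, #expr); } } while (0)

  int main()
  {
    TEST_CHECK(1 + 1 == 2);
    TEST_CHECK(sizeof(int) >= 8);   // fails on most platforms
    return errors ? 1 : 0;
  }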

Is it OK to revive some sort of annotation, such as a footnote
indicating compile, link, or run failure, for the failed tests?

> and I don't know how to get a test to run under the
> debugger.

Recompile your test with
  regression -o /dev/null --compiler XXX compile-link the_broken_test
and fire up your debugger on the resulting boosttmp.exe.
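
With gdb, for example (assuming a Unix setup; adjust for your
debugger of choice):

  gdb boosttmp.exe
  (gdb) run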
 
Jens Maurer

