
Boost Testing:

From: Stefan Seefeld (seefeld_at_[hidden])
Date: 2005-12-05 12:20:42


David Abrahams wrote:

>>I find it quite hard to interpret the regression test matrix since
>>it isn't obvious which test runs are really in sync with the latest
>>revision (or any revision, for that matter), and because there doesn't
>>appear to be an obvious way to distinguish real test failures from
>>failed prerequisites (incl. configuration errors, insufficient resources,
>>etc.)
>
>
> I don't think BuildBot fixes that anyway. Would you care to suggest
> some specific test matrix improvements (on the boost-testing list,
> preferably)?

Ok, I'll cross-post (the boost-testing page suggests that questions relating
to the boost.test framework should go to the main list).

Test runs on the test matrix displayed at http://engineering.meta-comm.com
are annotated by date, not by revision. Switching to Subversion should fix
that, or rather, enable the infrastructure to pull out a single revision
number for each checkout used in a test run.
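
To illustrate, here is a minimal Python sketch of what I have in mind;
the helper name and the annotation at the bottom are made up, and it
only assumes the standard 'svn info' output format:

    import re
    import subprocess

    def checkout_revision(path):
        # Ask Subversion about the working copy; 'svn info' prints
        # a line of the form "Revision: 31742".
        output = subprocess.run(['svn', 'info', path],
                                capture_output=True, text=True,
                                check=True).stdout
        match = re.search(r'^Revision: (\d+)$', output, re.MULTILINE)
        if match is None:
            raise RuntimeError('no Revision line in svn info output')
        return int(match.group(1))

    # A test run could then be stamped with the exact revision tested,
    # instead of (or in addition to) a date:
    # annotation = {'runner': 'meta-comm',
    #               'revision': checkout_revision('boost')}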

Another issue concerns result interpretation and result interdependencies.
(I was reminded of this by the recent mingw failure that issued a 'fail'
for every single test, instead of simply flagging the whole test run as
'untested' because some precondition (environment, configuration, whatever)
wasn't met.)

In a similar vein, people have reported in the past that test failures
were caused by the disk being full or similar. That, too, may be caught
up front, or otherwise flagged not as a test failure but as an external
precondition failure.
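
Again just a sketch, to make the distinction concrete (all names and
the 2 GiB threshold are made up; the disk check uses Python's
shutil.disk_usage):

    import shutil

    MIN_FREE_BYTES = 2 * 1024**3   # assumed threshold: 2 GiB

    def execute(test):
        # Placeholder for actually building and running one test.
        return 'pass'

    def run_tests(tests, build_dir):
        # Check external preconditions before running anything; more
        # checks (toolset present, configuration sane, ...) would go here.
        if shutil.disk_usage(build_dir).free < MIN_FREE_BYTES:
            # Don't emit a 'fail' per test: the run as a whole is
            # 'untested', since an external precondition wasn't met.
            return {test: 'untested' for test in tests}
        return {test: execute(test) for test in tests}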

What would it take in the current testing framework to support this?

Regards,
                Stefan

