From: Aleksey Gurtovoy (agurtovoy_at_[hidden])
Date: 2005-12-16 05:54:39
Stefan Seefeld <seefeld_at_[hidden]> writes:
> Ok, I'll cross-post (the boost-testing page suggests that questions relating
> to the boost.test framework should go to the main list).
Hmm, a bit of explanation: Boost.Test is a _C++ library_ that
"provides a matched set of components for writing test programs,
organizing tests into simple test cases and test suites, and
controlling their runtime execution". While I can see how the last
part could create an impression that Boost.Test covers the
infrastructure behind the regression results on the website, it
doesn't. "Boost testing infrastructure" would be a more accurate
name for the whole thing, and that's exactly what this list is
dedicated to.
> Test runs on the test matrix displayed at
> http://engineering.meta-comm.com are annotated by date, not
> revisions. Switching to subversion should fix that, or rather,
> enable the infrastructure to pull out a single revision number for
> each checkout that is used in the test run.
FWIW, you can still get any particular file's revision by the
date. Why is it an issue, anyway?
> Another issue is about result interpretation and result
> interdependencies (I was reminded of this issue by the recent mingw
> failure that issued a 'fail' for each single test, instead of simply
> flagging the whole test run as 'untested' as some precondition
> (environment, configuration, whatever) wasn't met.
Personally, I don't feel that this happens often enough to warrant a
special effort, but if there is a consensus that it's a real issue,
sure, we could set up a mechanism for manually marking up the whole set
of results as you suggest above. Meanwhile, there is always the
option of taking the results out of the FTP.
> In a similar vein, people have reported in the past that failures of
> tests were caused by the disk being full or similar. That, too, may
> be caught upfront,
Not really...
> or otherwise be flagged not as a test failure but
> some external precondition failure.
... but sure.
>
> What would it take in the current testing framework to enhance that?
In the case of a "disk full" situation, I guess it's easy enough to detect
and flag it automatically. Boost.Build could even terminate the
build/test process as soon as it discovered the first failure due to a
lack of space.
In the case of other environment/configuration problems, well, it depends
on how exactly you envision this and what we eventually agree upon.
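A minimal sketch of that idea (the function names and the 512 MB threshold are hypothetical, not part of Boost.Build or the actual regression scripts): check free space before launching each test, and once the disk fills up, mark the remaining tests "untested" rather than reporting a spurious "fail" for each one.

```python
import shutil

# Hypothetical threshold below which we consider the disk effectively full.
MIN_FREE_BYTES = 512 * 1024 * 1024  # 512 MB

def check_disk_space(path=".", min_free=MIN_FREE_BYTES):
    """Return True if enough free space remains to keep testing."""
    return shutil.disk_usage(path).free >= min_free

def run_tests(tests, runner, path=".", min_free=MIN_FREE_BYTES):
    """Run each test via `runner`; once the disk is full, flag the
    rest as 'untested' instead of letting them fail spuriously."""
    results = {}
    for name in tests:
        if not check_disk_space(path, min_free):
            # Precondition failed: record an explicit 'untested'
            # status rather than a misleading 'fail'.
            results[name] = "untested (disk full)"
            continue
        results[name] = "pass" if runner(name) else "fail"
    return results
```

The same structure would extend to other precondition checks (compiler present, environment variables set): probe once, and downgrade the whole run to "untested" instead of emitting a failure per test.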
-- 
Aleksey Gurtovoy
MetaCommunications Engineering
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk