From: Stefan Seefeld (seefeld_at_[hidden])
Date: 2005-12-16 08:44:14
Aleksey Gurtovoy wrote:
> Stefan Seefeld <seefeld_at_[hidden]> writes:
>
>>Ok, I'll cross-post (the boost-testing page suggests that questions relating
>>to the boost.test framework should go to the main list).
>
>
> Hmm, a bit of explanation: Boost.Test is a _C++ library_ that
> "provides a matched set of components for writing test programs,
> organizing tests into simple test cases and test suites, and
> controlling their runtime execution". While I can see how the last
> part could create an impression that Boost.Test covers the
> infrastructure behind the regression results on the website, it
> doesn't. "Boost testing infrastructure" would be a more accurate
> reference to the whole thing, and that's exactly what this list is
> dedicated to.
I see. In my mind, testing large C++ projects has at least two aspects:
On the one hand, there is an API that the to-be-tested C++ code uses to
formalize and unify fine-grained tests and the associated (log) output.
On the other hand, there is the infrastructure to actually run the tests,
manage results, etc.
Both parts should (IMO) not be mixed, for a variety of reasons (robustness,
for example).
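To make the first aspect concrete, here is a minimal sketch of the API side,
using Boost.Test's single-header usage; the exact header and macro names
depend on the Boost.Test variant and version, and the function under test is
just a placeholder:

#define BOOST_TEST_MODULE example
#include <boost/test/included/unit_test.hpp>

// code under test (placeholder)
int add(int a, int b) { return a + b; }

// a fine-grained test case; check results end up in the test log
BOOST_AUTO_TEST_CASE(add_is_commutative)
{
    BOOST_CHECK_EQUAL(add(2, 3), add(3, 2));
    BOOST_CHECK_EQUAL(add(2, 3), 5);
}

Everything beyond that (compiling it for each toolset, running it, collecting
and publishing the results) belongs to the second aspect, the infrastructure.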
>>Test runs on the test matrix displayed at
>>http://engineering.meta-comm.com are annotated by date, not
>>revisions. Switching to subversion should fix that, or rather,
>>enable the infrastructure to pull out a single revision number for
>>each checkout that is used in the test run.
>
>
> FWIW, you can still get any particular file's revision by the
> date. Why is it an issue, anyway?
Because, looking at the online test matrix, I can't identify which
file revision(s) were used for a given test run, and thus whether a
failure I see is still there after my check-in, or whether the results
simply haven't been updated since.
>>Another issue is about result interpretation and result
>>interdependencies (I was reminded of this issue by the recent mingw
>>failure that issued a 'fail' for every single test, instead of simply
>>flagging the whole test run as 'untested' because some precondition
>>(environment, configuration, whatever) wasn't met).
>
>
> Personally, I don't feel that this happens often enough to warrant a
> special effort, but if there is a consensus that it's a real issue,
> sure, we could set up a mechanism for manually marking up the whole set
> of results like you suggest above. Meanwhile, there is always an
> option of taking the results out of the FTP.
I'm not suggesting any manual intervention, but a mechanism to catch
such a situation before the actual test run is executed, i.e. as
a precondition.
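To illustrate, here is a rough sketch of such a pre-run gate. It is not part
of Boost.Test or of the current regression scripts; the checked preconditions,
the environment variable name, the exit-status convention and the disk-space
threshold are all made up for illustration, and the disk check is POSIX-only:

#include <sys/statvfs.h>   // POSIX; Windows would need a different call
#include <cstdio>
#include <cstdlib>

// Hypothetical convention: exit 0 = preconditions met, exit 2 = not met,
// so the driver can mark the whole run 'untested' instead of 'failed'.
int main()
{
    // 1. A required piece of configuration (variable name is made up).
    if (!std::getenv("BOOST_TEST_TOOLSET"))
    {
        std::fprintf(stderr, "precondition not met: toolset not configured\n");
        return 2;
    }

    // 2. A minimum amount of free disk space (threshold is arbitrary).
    struct statvfs vfs;
    if (statvfs(".", &vfs) != 0)
    {
        std::fprintf(stderr, "precondition check itself failed\n");
        return 2;
    }
    const unsigned long long free_bytes =
        static_cast<unsigned long long>(vfs.f_bavail) * vfs.f_frsize;
    if (free_bytes < 512ULL * 1024 * 1024)
    {
        std::fprintf(stderr, "precondition not met: %llu bytes free\n",
                     free_bytes);
        return 2;
    }

    return 0;   // preconditions met; proceed with the actual test run
}

The driver would run something like this before the build/test step and, on a
non-zero status, flag every result of the run as 'untested' rather than 'failed'.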
>>In a similar vein, people have reported in the past that failures of
>>tests were caused by the disk being full or similar. That, too, may
>>be caught upfront,
>
>
> Not really...
OK, not in the general sense. Maybe you do want to test how a particular
piece of code handles 'no space left on device' errors...
>
>
>>or otherwise be flagged not as a test failure but as
>>some external precondition failure.
>
>
> ... but sure.
>
>
>>What would it take in the current testing framework to enhance that?
>
>
> In case with "disk full" situation, I guess it's easy enough to detect
> and flag it automatically. Boost.Build could even terminate the
> build/test process as soon as it discovered the first failure due to a
> lack of space.
>
> In case of other environment/configuration problems, well, it depends
> on how exactly you envision this and what we eventually agree upon.
At this point I am only wondering what infrastructure is available to
deal with preconditions / requirements.
I'm developing and maintaining the QMTest test automation tool
(http://www.codesourcery.com/qmtest), and there we provide means
specifically to declare tests as dependent on certain 'resources' that
need to be set up first; if that setup fails, all dependent tests are
marked as 'untested'.
It is thus easy to conceive of a variety of resources that prepare the
testing environment reliably and efficiently.
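QMTest itself expresses this with Python resource classes; purely to
illustrate the dependency idea in the same language as the rest of this
discussion, here is a rough C++ sketch (written in present-day C++ for
brevity, with all names made up; this is not QMTest's actual API):

#include <functional>
#include <iostream>
#include <string>
#include <vector>

enum class Outcome { Passed, Failed, Untested };

// A 'resource' that dependent tests require; set_up() returns false on failure.
struct Resource
{
    std::string name;
    std::function<bool()> set_up;
};

// A single test; run() returns false on failure.
struct Test
{
    std::string name;
    std::function<bool()> run;
};

// Set up the resource once; if that fails, every dependent test is
// reported as 'untested' instead of 'failed'.
void run_dependents(const Resource& resource, const std::vector<Test>& tests)
{
    const bool ready = resource.set_up();
    for (const Test& test : tests)
    {
        const Outcome outcome = !ready     ? Outcome::Untested
                              : test.run() ? Outcome::Passed
                                           : Outcome::Failed;
        std::cout << test.name << ": "
                  << (outcome == Outcome::Passed ? "passed"
                      : outcome == Outcome::Failed ? "failed" : "untested")
                  << '\n';
    }
}

int main()
{
    Resource toolchain{"toolchain", [] { return true; /* e.g. probe the compiler */ }};
    std::vector<Test> tests = {{"smoke", [] { return 2 + 2 == 4; }}};
    run_dependents(toolchain, tests);
}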
Regards,
Stefan