From: Stefan Seefeld (seefeld_at_[hidden])
Date: 2007-08-19 12:50:10
David Abrahams wrote:
>>> I meant I want some kind of analogous statement about a way we could
>>> use it that you *are* convinced of.
>> * handle all test suite runs through QMTest
> too vague.
That's because QMTest is flexible: exactly what it does depends
on the test database in use.
>> * aggregate test results in a central place using QMTest,
> So QMTest stores results, OK.
>> and manage interpretation (including expectations)
> How would one do that?
Similarly to what is done now, you would set up an 'expectation database'
that manages expected outcomes for all tests / platforms, so QMTest can
tell for each result whether it is expected or not.
(In the simplest case you could just use an existing set of results from
a previous test run and use that as expectation.)
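As a plain-Python sketch of that idea (the test ids, outcomes, and the `classify` helper are invented for illustration; this is not QMTest's actual API):

```python
# Illustrative sketch (not QMTest's API): classify test outcomes
# against an expectation database seeded from a previous run.

# Outcomes from a previous run, reused as the expectation database.
expectations = {
    "libs/regex/test/basic": "PASS",
    "libs/python/test/embed": "FAIL",   # known failure on this platform
}

def classify(test_id, outcome):
    """Return whether an outcome matches what was expected."""
    expected = expectations.get(test_id, "PASS")  # default: expect a pass
    if outcome == expected:
        return "expected " + outcome.lower()
    return "unexpected " + outcome.lower()

# Results of the current run, compared against expectations.
results = {
    "libs/regex/test/basic": "PASS",     # expected pass
    "libs/python/test/embed": "PASS",    # an expected failure now passes
}

for test_id, outcome in sorted(results.items()):
    print(test_id, "->", classify(test_id, outcome))
```

The point is that "FAIL" by itself is uninteresting; only "unexpected fail" (a regression) or "unexpected pass" (an expectation to update) needs a human's attention.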
>> to generate test reports.
> Does QMTest generate reports?
Yes, where 'report' is an XML file, which presumably would be processed
by an XSLT stylesheet to generate an HTML report. (There is an XSLT
stylesheet that is provided with QMTest, but I'd expect some custom
layer to be added, to customize the generated HTML to fit into the
boost website style...)
As an alternative, QMTest can also be used as a server process from which
dynamic HTML can be obtained (QMTest has an HTTP/HTML-based GUI, built
on Zope).
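The report pipeline described above can be sketched without QMTest at all; the XML schema below is invented for the example (QMTest's actual results format differs), and stdlib ElementTree stands in for the XSLT step:

```python
# Illustrative sketch: turn an XML results file into a small HTML table.
# The element and attribute names are made up for this example; in
# practice an XSLT stylesheet would perform the transformation.
import xml.etree.ElementTree as ET

results_xml = """
<results>
  <result id="libs/regex/test/basic" outcome="PASS"/>
  <result id="libs/python/test/embed" outcome="FAIL"/>
</results>
"""

def to_html(xml_text):
    """Render each <result> element as a table row."""
    root = ET.fromstring(xml_text)
    rows = [
        "<tr><td>%s</td><td>%s</td></tr>" % (r.get("id"), r.get("outcome"))
        for r in root.iter("result")
    ]
    return "<table>\n" + "\n".join(rows) + "\n</table>"

print(to_html(results_xml))
```

A Boost-specific stylesheet would play the role of `to_html` here, emitting markup that fits the site's look.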
>>>>> There's no a priori reason that Boost.Build needs to maintain the test
>>>>> database, is there?
>>> So what are the alternatives to that arrangement?
>> A boost-specific test database implementation could work like this:
>> * by default, map each source file under libs/*/test/ to a test id,
>> and provide a default test type which
>> 1) compiles,
>> 2) runs the result,
>> 3) checks exit status and output.
>> Default compile and link options could be generated per component
>> (boost library).
>> * For cases where the above doesn't work, special test implementations
>> can be used that incorporate the special rules now part of the various
>> Jamfiles.
> I think I understand. Essentially, one would need to implement Python
> classes whose instances represent each test and know how to do the
> testing. One could generate Jamfiles for the difficult cases. But
> how would we represent the tests? Python code? An actual database?
QMTest ships with a set of built-in test classes for the most frequent cases:
execution of programs with checks on exit codes / output, compilation of
source code, interpretation of Python code, etc., so there is a fair chance
that only a little code needs to be added for customization purposes.
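As an illustration of what such a test class does, here is a plain-Python sketch (not QMTest's actual class API; `RunTest` and its parameters are invented, and the Python interpreter stands in for a compiled test program):

```python
# Illustrative sketch: a test class that runs a program and checks its
# exit status and output -- the shape of behaviour the built-in
# execution-test classes provide.
import subprocess
import sys

class RunTest:
    def __init__(self, argv, expected_status=0, expected_output=""):
        self.argv = argv
        self.expected_status = expected_status
        self.expected_output = expected_output

    def run(self):
        """Execute the program; return 'PASS' or 'FAIL'."""
        proc = subprocess.run(self.argv, capture_output=True, text=True)
        ok = (proc.returncode == self.expected_status
              and proc.stdout == self.expected_output)
        return "PASS" if ok else "FAIL"

# Use the Python interpreter as a portable stand-in for a test program.
test = RunTest([sys.executable, "-c", "print('hello')"],
               expected_status=0, expected_output="hello\n")
print(test.run())  # PASS
```

A real QMTest test class would additionally declare its arguments so the GUI can edit them, but the run/check core is as above.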
QMTest also has some built-in test databases, such as one that scans
a directory for files with a given extension (.cpp, say), and interprets
those as tests.
Again, I expect relatively little work to be needed to customize those for Boost.
(For the avoidance of doubt: I offer to do the customization to adapt QMTest
to Boost's needs, should you decide to give it a try.)
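The directory-scanning database can also be sketched in plain Python (the `scan_tests` helper and the directory layout are illustrative, not QMTest's implementation):

```python
# Illustrative sketch: a test "database" that scans a directory tree for
# .cpp files and maps each one to a test id, similar in spirit to the
# built-in directory-scanning database.
import os
import tempfile

def scan_tests(root, extension=".cpp"):
    """Map every source file under `root` to a dotted test id."""
    tests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            if name.endswith(extension):
                rel = os.path.relpath(os.path.join(dirpath, name), root)
                test_id = rel[: -len(extension)].replace(os.sep, ".")
                tests[test_id] = os.path.join(dirpath, name)
    return tests

# Build a tiny throwaway tree to scan.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "regex", "test"))
open(os.path.join(root, "regex", "test", "basic.cpp"), "w").close()

print(sorted(scan_tests(root)))  # ['regex.test.basic']
```

For Boost, `root` would be `libs/` and the mapping would restrict itself to each library's `test/` subdirectory.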
> As I see it right now, the most significant benefit available from
> QMTest is in the fact that it robustly controls the running of each
> test, capturing its results, and comparing those results with what's
> expected. Is that right?
Yes, the execution of the tests is certainly the most important thing,
but introspection of the test database (or expectations, results, etc.)
without running any tests is also part of it (something that is not
possible right now, IIUC). Finally, as I noted above, QMTest has a GUI
(Zope-based) that can be used as an alternative to the command-line
interface.
-- ...ich hab' noch einen Koffer in Berlin... ("...I still have a suitcase in Berlin...")
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk