From: Stefan Seefeld (seefeld_at_[hidden])
Date: 2007-08-14 04:19:00
David Abrahams wrote:
> on Wed Aug 08 2007, Stefan Seefeld <seefeld-AT-sympatico.ca> wrote:
>>> Of course we're in serious danger of getting into the tool writing business
>>> again here ....
>> I have been suggesting that we look into QMTest (http://www.codesourcery.com/qmtest, which
>> I happen to maintain) to drive the regression testing. It allows test results to be stored
>> in databases, and it keeps track of how test results evolve over time.
>> In fact, one of the more important features (which, unfortunately, makes it hard
>> to hook it up with boost.build) is that it allows introspection of a 'test database':
>> you can look at the test database (test suites, test sub-suites, as well as individual
>> tests, test results, etc.) without actually running anything. This is good for robustness,
>> and it makes it easy to generate reports along different dimensions (across test suites,
>> across platforms, across time, etc.).
>> Oh, and QMTest makes it easy to run tests in parallel, dispatching to a compile farm, etc.
> Can you give a brief summary of what QMTest actually does and how
> Boost might use it?
QMTest is a testing harness. Its concepts are captured in Python base classes ('Test', 'Suite',
'Resource', 'Target', etc.), which are then subclassed to capture domain-specific details.
(It is straightforward to customize QMTest by adding new test classes, for example.)
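To make the base-class idea concrete, here is a minimal sketch of the pattern. It is deliberately self-contained and does not use QMTest's actual API; the class and method names ('Test', 'Result', 'Run', 'ExecTest') are illustrative assumptions modeled on the description above.

```python
import subprocess

class Result:
    """Collects the outcome of a single test run."""
    PASS, FAIL = "PASS", "FAIL"

    def __init__(self, test_id):
        self.test_id = test_id
        self.outcome = Result.PASS
        self.annotations = {}

    def Fail(self, cause):
        self.outcome = Result.FAIL
        self.annotations["cause"] = cause

class Test:
    """Base class: subclasses declare their arguments and implement Run()."""
    arguments = {}

    def __init__(self, **args):
        # Merge declared defaults with per-test argument values.
        self.args = {**self.arguments, **args}

    def Run(self, result):
        raise NotImplementedError

class ExecTest(Test):
    """A domain-specific test class: run an executable, check its exit status."""
    arguments = {"program": None, "expected_status": 0}

    def Run(self, result):
        status = subprocess.call(self.args["program"])
        if status != self.args["expected_status"]:
            result.Fail("unexpected exit status %d" % status)
```

A new kind of test (a compile-and-run test, say) would be just another subclass overriding `Run()`, which is the customization point the paragraph above refers to.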
QMTest's central concept is that of a 'test database'. A test database organizes tests. It
lets users introspect tests (test types, test arguments, prerequisite resources, previous
test results, expectations, etc.) as well as run them (everything, or only specific sub-suites;
different 'target' implementations run tests either serially or in parallel, using multiple
threads, multiple processes, or even multiple hosts).
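The two halves of that idea, introspection without execution and execution delegated to interchangeable 'targets', can be sketched as follows. This is not QMTest code; every name here is an assumption chosen for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

class TestDatabase:
    """Organizes tests into suites; queries never execute anything."""

    def __init__(self):
        self._tests = {}   # test id -> callable
        self._suites = {}  # suite id -> list of test ids

    def add(self, test_id, func, suite="default"):
        self._tests[test_id] = func
        self._suites.setdefault(suite, []).append(test_id)

    # Introspection: inspect the database without running a single test.
    def suites(self):
        return sorted(self._suites)

    def tests_in(self, suite):
        return list(self._suites[suite])

    # Execution: the 'target' decides how the tests actually get run.
    def run(self, suite, target):
        return target(self._tests[t] for t in self._suites[suite])

def serial_target(tests):
    """Run each test in order, in the current process."""
    return [t() for t in tests]

def threaded_target(tests):
    """Dispatch tests to a thread pool; a stand-in for multi-process
    or multi-host targets."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(lambda t: t(), tests))
```

The point of the separation is that report generators can answer "what is in this suite?" from the database alone, while the choice of serial versus parallel execution is an independent, pluggable decision.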
Another important point is scalability: while some test suites are simple and small, we also
deal with test suites that hold many thousands of tests (QMTest is used for some of the GCC
test suites, for example). Running a test can mean executing a single (local) executable, or it
can require a compilation, an upload of the resulting executable to a target board, and
subsequent remote execution, or something fancier still.
Test results are written to 'result streams' (which, like most of QMTest, can be customized).
There is a 'report' command that merges the results from multiple test runs into a single
test report (XML), which can then be translated to whatever output medium is desired.
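The merge step can be sketched in a few lines. The XML element names below are made up for illustration and do not reflect QMTest's actual report schema.

```python
import xml.etree.ElementTree as ET

def merge_results(runs):
    """Merge several result streams into one XML report.

    runs: mapping of run name -> list of (test_id, outcome) pairs,
    e.g. one entry per platform or per revision.
    """
    report = ET.Element("report")
    for run_name, results in runs.items():
        run = ET.SubElement(report, "run", name=run_name)
        for test_id, outcome in results:
            ET.SubElement(run, "result", test=test_id, outcome=outcome)
    return ET.tostring(report, encoding="unicode")
```

Because the merged report is plain XML, downstream tooling (an XSLT stylesheet, a web dashboard, a plain-text summary) can render it however it likes, which is the "whatever output medium is desired" part.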
How could this be useful for Boost?
I have found that Boost's testing harness lacks robustness. There is no way to ask seemingly
simple questions such as "what tests constitute this test suite?" or "what revision / date /
runtime environment does this result correspond to?", which makes it hard to assess the
overall performance and quality of the software.
I believe the hardest part is the connection between QMTest and boost.build. Since boost.build
doesn't provide the level of introspection QMTest promises, a custom 'boost.build test database'
implementation would need some special hooks from the build system. I discussed that quite a bit
-- "...ich hab' noch einen Koffer in Berlin..." (...I still have a suitcase in Berlin...)
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk