
From: Stefan Seefeld (seefeld_at_[hidden])
Date: 2007-03-22 08:35:05


Gennadiy Rozental wrote:
> "Stefan Seefeld" <seefeld_at_[hidden]> wrote in message

>> Actually, I don't think the issue here is GUI vs. CLI. Instead, it's
>> about how robust and scalable the testing harness is. Think of it as
>> a multi-tier design, where the UI is just a simple 'frontend' layer.
>> Some layer underneath provides an API that lets you query what tests
>> exist (matching some suitable criteria, such as name pattern matching
>> or filtering by annotations), together with their metadata.
>
> Boost.Test UTF itself provides this information already (using test tree
> traversing interfaces). We may consider adding some simpler, straight-to-the-point
> interfaces, but even now you can get all you need.
>
>> That, together with other queries such as 'give me all platforms this
>> test is expected to fail on' would be very valuable for the release
>> process.
>
> Umm. I am not sure how you plan to maintain this information.

I'm not sure what you are referring to as 'this information' here.
The test cases are clearly already encoded in the file system, i.e. they can
be found by traversing the source tree (and possibly scanning for certain tokens).

Expectations are already encoded in some xml file.

I'm not sure whether and how 'platforms' are described, but that could
be a simple lookup table, too.

These three together form a database that can be queried easily and
should provide all the relevant information.
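
To make that a bit more concrete, here is a rough sketch in Python of how
such a 'database' could be assembled and queried. The token I scan for, the
markup file name (explicit-failures-markup.xml) and its element names are
written from memory and should be read as assumptions, not as a description
of what we actually have:

    # Hypothetical sketch: join the three sources above into one queryable
    # test-metadata collection.  File names, the scanned-for token, and the
    # XML layout are assumptions for illustration only.
    import os
    import re
    import fnmatch
    import xml.etree.ElementTree as ET

    def find_test_cases(root):
        # Walk the source tree, scanning .cpp files for an (assumed) test-case token.
        token = re.compile(r'BOOST_AUTO_TEST_CASE\s*\(\s*(\w+)\s*\)')
        tests = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                if name.endswith('.cpp'):
                    path = os.path.join(dirpath, name)
                    with open(path) as f:
                        for match in token.finditer(f.read()):
                            tests[match.group(1)] = path
        return tests

    def load_expected_failures(markup_file):
        # Read expected-failure markup; element/attribute names are assumed.
        failures = {}
        for mark in ET.parse(markup_file).getroot().iter('mark-expected-failures'):
            toolsets = [t.get('name') for t in mark.iter('toolset')]
            for test in mark.iter('test'):
                failures.setdefault(test.get('name'), []).extend(toolsets)
        return failures

    def query(tests, failures, name_pattern='*'):
        # Yield (test name, source file, expected-failure toolsets) for matching tests.
        for name in fnmatch.filter(tests, name_pattern):
            yield name, tests[name], failures.get(name, [])

    # No test is actually executed here; we only query the metadata:
    tests = find_test_cases('libs')
    failures = load_expected_failures('explicit-failures-markup.xml')
    for name, path, toolsets in query(tests, failures, 'regex_*'):
        print(name, path, 'expected to fail on:', toolsets)

The point is only that a query like 'all tests matching regex_*, together
with the toolsets they are expected to fail on' falls out of joining these
three sources, without executing a single test.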

>> (All this querying doesn't involve actually running any tests.)
>>
>>>> * An easy way to run subsets of tests.
>>>>
>>> The current Open Issues page mentions selectively running test cases by
>>> name, which I think fits into this. I've included it in my list of
>>> running ideas above.
>> Right, but it may also include sub-suites. To push a little further
>> (and deviate from the topic only a little bit), it would be good to
>> parametrize sub-testsuites differently. For example, the boost.python
>> tests may be run against different python versions, while that particular
>> parameter is entirely meaningless for, say, boost.filesystem.
>
> What do you mean by "parametrize"?

I realize that this, too, concerns more the boost.build system than
the testing harness. However, the effect I describe is seen in the test
report:

Each report lists test runs in columns. Each test run has a set of parameters,
such as toolchain and platform, as well as some other environment parameters
not accounted for in its description.
Some of these parameters, however, are only meaningful for a subset of the tests
in a run. For example, the python version is clearly only meaningful for
boost.python, but not for boost.regex.
Thus, instead of only running full test suites with all parameter combinations
(toolchain, platform, etc.), it seems more meaningful to modularize the whole
suite and then parametrize the parts individually.
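
A small sketch of what I mean by parametrizing the parts individually
(Python; the suite names, axes and values are invented for illustration and
are not how boost.build currently models this):

    # Hypothetical per-suite parametrization: each sub-suite lists only the
    # parameter axes that are meaningful for it, and the harness expands just
    # those into concrete run configurations.
    from itertools import product

    # Global axes every suite is run against.
    common_axes = {
        'toolset':  ['gcc-4.1', 'msvc-8.0'],
        'platform': ['linux', 'win32'],
    }

    # Extra axes that only make sense for particular sub-suites.
    suite_axes = {
        'python':     {'python-version': ['2.4', '2.5']},
        'filesystem': {},   # no suite-specific parameters
    }

    def configurations(suite):
        # Combine the common axes with the suite-specific ones and expand them.
        axes = dict(common_axes)
        axes.update(suite_axes.get(suite, {}))
        names, values = zip(*axes.items())
        for combo in product(*values):
            yield dict(zip(names, combo))

    for suite in suite_axes:
        for config in configurations(suite):
            print(suite, config)

Here a boost.python-like suite gets the extra python-version axis expanded,
while boost.filesystem is only run across the common toolset/platform
combinations.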

Thanks,
                Stefan

-- 
      ...ich hab' noch einen Koffer in Berlin...
