
From: Younes M (younes.m_at_[hidden])
Date: 2007-03-21 19:14:55


First, thank you all for the comments.

On 3/21/07, Stefan Seefeld <seefeld_at_[hidden]> wrote:
>
> There are a number of things I can think of that would make running boost tests
> more useful and convenient. Among them:
>
> * An easy way to introspect the test suite, i.e. display all tests, with
> metadata.

I agree. I think a big benefit of a GUI frontend will be displaying
results in a more digestible form than the current text output
allows. For example:

* We can display results chronologically, alongside any checkpoint and
general-purpose messages the developer has added (see the short
sketch after this list), or group results by source file, test case,
or type of failure.

* We can re-run individual test cases or groups of tests, as opposed
to the entire unit.

* We can display each group/test case/individual test in a widget that
can expand or collapse to show more or less information as required.
Clicking the widget would jump directly to the source file and line
where the failure occurred (if applicable).

* We can provide statistics on the number of passes and failures, and
on the number of failures per source file, test case, or type of
failure.

* We can keep a history of reports per unit and provide statistics
across reports, to let us better gauge progress.
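
To make the checkpoint/message point concrete, here is a minimal
sketch of the kind of test the frontend would be consuming. The
suite, case, and message names are made up for illustration, and the
exact macro spellings depend on the Boost.Test version (newer
releases spell them BOOST_TEST_MESSAGE and BOOST_TEST_CHECKPOINT):

  #define BOOST_TEST_MODULE frontend_demo
  #include <boost/test/included/unit_test.hpp>

  BOOST_AUTO_TEST_SUITE(parser_suite)

  BOOST_AUTO_TEST_CASE(parses_empty_input)
  {
      // general-purpose message the frontend could show in context
      BOOST_MESSAGE("parsing an empty document");
      // checkpoint reported alongside any subsequent failure
      BOOST_CHECKPOINT("about to run the check");
      // pass/fail result recorded with source file and line
      BOOST_CHECK_EQUAL(1 + 1, 2);
  }

  BOOST_AUTO_TEST_SUITE_END()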

>
> * An easy way to run subsets of tests.
>

The current Open Issues page mentions selectively running test cases
by name, which I think fits into this; I've included it in my list of
running ideas above.
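
For example, assuming the run_test parameter that the Boost.Test
runner already accepts, a frontend could re-run a single case or a
whole suite with something along these lines (the executable, suite,
and case names are the made-up ones from the sketch above):

  ./frontend_demo --run_test=parser_suite/parses_empty_input
  ./frontend_demo --run_test=parser_suite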

> * Enhanced test run annotations, to allow report generation to build a more
> meaningful test report (e.g. fold multiple equivalent test runs into one,
> only consider test runs from a given time window, or associated with a given
> revision, etc.)

One issue I foresee is keeping the GUI synchronized with the unit
test. So far I had only considered using the output of Boost.Test to
generate reports, but that means the reports can go stale until you
re-run. It also complicates the cross-report statistics idea above,
since comparing reports makes little sense once the unit test has
changed considerably.
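
To be concrete about what "the output of Boost.Test" means here: the
idea is that each re-run of the unit regenerates a machine-readable
log and report that the GUI then parses, invoked along these lines
(the exact spelling of the format values varies between Boost.Test
versions):

  ./frontend_demo --log_format=XML --log_level=all --report_format=XML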

If I've understood it correctly, your idea of taking revisions of the
test suite into account would go a long way towards solving such
issues. I'm not yet sure how to detect or delineate revisions, but
I'll give it some thought.

> * Support for running tests in parallel, or even on distributed hardware, if
> available.
>
> * Cross-testing support (i.e. compiling with a cross-compiler toolchain on host A
> for target B, then uploading the test executables to B and run it there)

I must admit that I don't usually find myself cross-compiling or
running on distributed hardware, so I might not appreciate some of the
issues and requirements involved.

---
Anyhow, with respect to some of the other replies: Gennadiy mentioned
that he's included much more support for external test runners, so
I'll have a look at 1.34.0.

Up until now I was considering a cross-platform standalone tool, but I
have no issue with looking into MS Visual Studio integration. If I
recall correctly, MSVS add-ins can be built in C++ using ATL, or in
C++/C# using the .NET Framework. If that's the case, I think I would
prefer to do this in C#, given that Mono is a viable option even when
using WinForms, while ATL is Windows-only. That would probably make it
easier to produce both a standalone application and an MSVS add-in
that share the bulk of the implementation. Given the schedule
constraints of the GSoC program, however, it may be that I would only
get started on one of these avenues.
