
From: Jason Sankey (jason_at_[hidden])
Date: 2007-09-13 11:21:44


David Abrahams wrote:
> on Wed Sep 12 2007, Jason Sankey <jason-AT-zutubi.com> wrote:
<snip>
>> I was hoping that
>> there would be a way to run a full build with one or more XML test
>> reports generated (since I know that Boost.Test supports XML
>> reporting).
>
> We can generate XML test reports, but it's not done by Boost.Test;
> it's done by process_jam_log.py

With some more digging and trial and error I went down this path.
Instead of adding support for Boost.Test XML reports, I have added
support for the test_log.xml files generated by process_jam_log. I was
expecting Boost.Test to be used because I had not considered the nature
of some Boost tests, e.g. those that only check "compilability". That
makes this a rather unusual testing setup, but at the end of the day,
once I was able to generate XML reports they were easy to integrate.
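
For reference, here is roughly the shape of the code I use to read one
of these files. Treat it as a sketch only: the element and attribute
names ("test-log", "compile", "link", "run", result="succeed") are my
assumptions based on the files I have looked at, not a documented
schema.

    import xml.etree.ElementTree as ET

    def summarise(path):
        # Root element is <test-log>; each build step (<compile>,
        # <link>, <run>) carries a result attribute such as
        # "succeed" or "fail".
        root = ET.parse(path).getroot()
        steps = {}
        for step in ("compile", "link", "run"):
            node = root.find(step)
            if node is not None:
                steps[step] = node.get("result")
        passed = all(r == "succeed" for r in steps.values())
        return root.get("test-name", "?"), steps, passed

Aggregating a whole run is then just a matter of walking the output
directories and collecting these summaries.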

What I have running at present (in development - I have a server I am
readying to transfer this to) is the "developer-centric" view you
mentioned earlier in the thread. That is, the result of each build is
binary, which is useful to Boost developers for picking up regressions
quickly. There are also developer-centric views, reports, notifications,
etc. All of this is less useful for reporting the status of each
platform from the Boost *user's* perspective. There are several possible
ways to approach that, probably best left until after you get an initial
look at what the heck Pulse does so far.

I do have a few issues, though:

- Some of the test_log.xml files cannot be parsed by the XML library I
am using, due to certain characters they contain. I have not yet looked
into whether the problem lies in the logs or in the library (see the
sketch after this list for one possible workaround).

- I have 49 failing cases (out of 2232 that can be parsed at the
moment). I guess some of these failures may belong to a certain class
of "expected" failures. I have yet to fully understand all the classes
of expected failure, in particular which classes are reported as
"succeed" in test_log.xml versus reported as "fail".

- I get a couple of warnings about capabilities that are not being built
due to external dependencies (GraphML and MPI support). These may be
easy to add; I just need to read a bit more.
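
On the first point, one workaround I may try is stripping the control
characters that XML 1.0 forbids before handing the text to the parser.
This is a guess that the offending characters are raw control bytes
captured from tool output; I have not confirmed that yet:

    import re
    import xml.etree.ElementTree as ET

    # XML 1.0 forbids most control characters (below 0x20 only tab,
    # newline and carriage return are allowed), so strip the rest.
    _ILLEGAL = re.compile('[\x00-\x08\x0b\x0c\x0e-\x1f]')

    def parse_leniently(path):
        with open(path, 'rb') as f:
            text = f.read().decode('utf-8', 'replace')
        return ET.fromstring(_ILLEGAL.sub('', text))

If the logs turn out to be fine and the library is at fault, I will
look at swapping the library instead.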

>> Looking more closely I see that the current regression testing process
>> uses the normal test report format,
>
> What "normal test report format" are you referring to?

Sorry, poor choice of words. I was just referring to what comes out of
the bjam build and is processed by process_jam_log (which I also did not
really understand at that point).

<snip>

Cheers,
Jason

