From: William E. Kempf (williamkempf_at_[hidden])
Date: 2002-07-11 14:51:14
----- Original Message -----
From: "Beman Dawes" <bdawes_at_[hidden]>
To: <boost_at_[hidden]>
Sent: Thursday, July 11, 2002 2:18 PM
Subject: Re: [boost] Regression test / compiler status progress
> At 05:00 PM 7/10/2002, William E. Kempf wrote:
> >Beman, if you'd like I could volunteer to do some of this for you. I'd
> >produce the XSLT file, and if it's easier I could even modify the
> >executables' output to be XML after you get a working HTML version.
> >Whatever is easier/best for you, and assuming you agree that XML is a
> >better output choice here.
> I'm interested, and it would probably be easy to modify the programs once
> they are stable to output XML.
> But step back a bit. I quickly scanned back over this thread, and didn't
> see anyone explaining the benefits from generating XML as they would apply
> to regression. In other words, if the sole use is to look at the output
> with a web browser, why bother?
I gave one example: viewing the data in a web browser in a different format.
To make this a little more concrete, let's think about a compiler vendor,
say Comeau C++. For marketing purposes it would be great if they could run
their own regression tests (possibly using more platforms and backends than
what we are using) and publish reports showing how well they can handle
Boost code (or even some other library). With the hard-coded HTML output
they'd have to jump through several hoops to get an accurate report that
shows only the data they find pertinent, in a format appropriate for marketing.
Or, they could use the XML generated by Boost, but combine the output from
the various platforms into one report about their compiler, something our
reports don't do for compilers that work on multiple platforms. (That's a
format that might be useful even for us.)
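As a rough sketch of the kind of reuse this would enable, a short script could merge per-platform result files into a single per-compiler view. (The element and attribute names here are purely hypothetical, since no XML schema exists yet.)

```python
import xml.etree.ElementTree as ET

def merge_results(paths):
    """Collect <test> elements from several per-platform XML result files,
    grouped by (library, test) so one row can cover every platform.
    Assumes a hypothetical schema: <results platform="...">
    containing <test library="..." name="..." result="..."/> elements."""
    merged = {}
    for path in paths:
        root = ET.parse(path).getroot()
        platform = root.get("platform")
        for test in root.iter("test"):
            key = (test.get("library"), test.get("name"))
            merged.setdefault(key, {})[platform] = test.get("result")
    return merged

# Hypothetical input files, one per platform a vendor tests on:
# results = merge_results(["linux.xml", "win32.xml", "solaris.xml"])
```

With hard-coded HTML, producing this cross-platform view would mean scraping the generated pages; with XML it is a few lines of code or a single XSLT stylesheet.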
Or, imagine a compiler vendor that's working on their C++ conformance and
using Boost as a test bed. Instead of an HTML report they could use the XML
to generate bug reports with whatever bug-tracking software they happen to
use, as part of an automated build process.
Theoretically (not really suggesting this, just brainstorming) the Boost
release process could even make use of this by e-mailing the owners of
specific libraries about which tests are failing on which
compilers/platforms. An automated script could even compare the current
results against previous results (pulled from CVS by the tag used for the
previous release) to pinpoint tests that are breaking now but never used to.
Once the work of creating the scripts was completed, this automation could
simplify your job as the administrator of Boost releases.
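That comparison step might look something like this (again using invented XML attributes, and assuming the previous release's result file has already been checked out of CVS):

```python
import xml.etree.ElementTree as ET

def load_failures(path):
    """Return the set of (library, test, compiler) triples that failed,
    assuming a hypothetical schema of <test library="..." name="..."
    compiler="..." result="..."/> elements."""
    root = ET.parse(path).getroot()
    return {(t.get("library"), t.get("name"), t.get("compiler"))
            for t in root.iter("test")
            if t.get("result") == "fail"}

def new_breakage(current_path, previous_path):
    """Tests failing now that did not fail in the previous release."""
    return load_failures(current_path) - load_failures(previous_path)
```

The resulting set could then be grouped by library and mailed to each library's maintainer, which is exactly the kind of processing that is painful against HTML but trivial against XML.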
I'm sure there are other things that people could think of. I don't think
that Boost itself will have a need beyond generating an HTML page for
the web (well, again, maybe some automation in releases), but Boost users
would benefit in many ways from a regression test system that created XML
files instead... especially if that system can be reused in their own processes.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk