From: David Abrahams (david.abrahams_at_[hidden])
Date: 2002-07-23 12:43:14
What are we going to do about the Python tests?
I think they should be included in official test runs, at least.
Ralf is already doing some nightly builds at http://cci.lbl.gov/boost/.
----- Original Message -----
From: "Beman Dawes" <bdawes_at_[hidden]>
To: <boost_at_[hidden]>; <boost_at_[hidden]>
Sent: Tuesday, July 23, 2002 1:32 PM
Subject: RE: [boost] Beta posted for new regression tests
> At 12:17 PM 7/23/2002, Jeff Garland wrote:
> >I wonder then if it would be of benefit to generate one .html file per
> >.xml file instead of combining them? I noticed that when I
> >hit one of the 'Fail' links and then returned to the summary page, all
> >the 'Fail' links were colored as if I had visited each
> >one. It would be nice if only the one I actually navigated to was
> >highlighted. Since web browsers only track visits at the file
> >level, this would require one html file per toolset per test.
> I considered that, but the management of all those files would be a
> problem in our current environment. Moving them across the Internet to a
> server is a problem. If we had control over the server, we could
> work something out, but that is hard to do as it stands now.
> >I also agree with Dave that an overall summary page by library would be
> >really nice. The page is already long, and as Boost
> >grows and adds more tests this is going to get out of hand. I'm
> >imagining something like:
> > Summary Results for 2002-07-23 15:24:00 for Win32 -- 3 toolsets:
> >   * GCC 3.1
> >   * Borland
> >   * Metrowerks
> >
> > library      #tests   failures   warn   missing
> > filesystem   3        0          1      0
> > thread       1        0          1      0
> > tokenizer    5        2          0      0
> >
> >Library names would then hyperlink to an html file which contains only
> >the results for that library.
> Yes, although we'll have to minimize the number of files unless we can
> figure out how to manage large numbers of them.
> >Finally, from the library author's perspective, is there a reference for
> >what we need to do to get our tests plugged into the system?
> Basically, just add your tests to status/Jamfile. It is fairly
> straightforward.
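
For anyone who hasn't looked at status/Jamfile yet, an entry is just a test
rule plus a source file. A minimal sketch, assuming the "run" and
"compile-fail" rules from the regression testing support; the library name
and paths below are hypothetical:

    # Hypothetical entries for a library's tests in status/Jamfile.
    # "run": the test must compile, link, and execute successfully.
    run libs/mylib/test/mylib_test.cpp ;
    # "compile-fail": the test is expected to fail to compile.
    compile-fail libs/mylib/test/mylib_bad.cpp ;
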
> >Great work!
> PS: I've just refreshed the files, in the process of testing the script
> so it can be run automatically. Changed to reporting only tests that
> didn't pass with all compilers. Still at http://www.esva.net/~beman/jam_regr.html