
From: Benjamin Kosnik (bkoz_at_[hidden])
Date: 2007-01-19 07:48:59


Hi!

I am somewhat stymied by the proper test procedure for the current
Boost release candidate (i.e., the RC_1_34_0 branch).

The documentation:
http://www.boost.org/tools/regression/xsl_reports/runner/instructions.html

Seems simple enough. However, it lies.

;)

First of all, the links to regression.py are dead. That's fine,
because there are

./tools/build/v2/test/regression.py
./tools/regression/xsl_reports/runner/regression.py

Great.
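For what it's worth, the documented workflow appears to be meant to be
invoked along these lines; the flag names here (--runner, --toolsets,
--tag) are my reading of that instructions page, so treat them as
assumptions rather than gospel:

python regression.py --runner=my-runner-id --toolsets=gcc --tag=RC_1_34_0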

In addition, I'm more interested in checking my local build after I
have built it, which is not really the workflow envisioned by this
script (as documented).

In boost build v1, there was a script:

tools/regression/run_regression.sh

This is what I had been using, but it apparently does not work with v2
and has not been updated. (Basics are wrong in that script, starting
with the location of the bjam sources.)

Now, there is even a third way:

"make check"

if you use the ./configure; make; make check approach. (Which I would
like to do!)
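
That is, the entire flow I am after is nothing more than:

./configure
make
make check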

However, that rule is wrong: there is no rule for "test", and the
invocation should pass --user-config=../user-config.jam. Omitting
"test" and just running bjam in the status directory runs the tests,
but then I have no summary.
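
Concretely, the closest I can get is something like the following,
which runs the tests but leaves me without a summary:

cd status
bjam --user-config=../user-config.jam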

So.

Before I start hacking up my own custom make check rules by
cannibalizing the old run_regression script (see the sketch below), I
feel I must ask the obvious question:

How are people running the regression tests for local builds so that
they get results in an easy-to-comprehend format? There are pretty
results on the Boost web page: is it possible for the rest of us to
generate these too?
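
(For reference, the cannibalized make check I have in mind would boil
down to roughly the sketch below. process_jam_log and compiler_status
are the tools the old v1 script drove; the paths and argument order
here are from memory, so double-check them against tools/regression
before relying on this.)

# Rough sketch of a hand-rolled "make check", cannibalized from the
# old v1 run_regression.sh; tool locations and arguments are my
# assumptions, not verified against RC_1_34_0.
BOOST_ROOT=$(pwd)
cd status
# Run the tests, keeping the raw bjam output for post-processing.
bjam --user-config=../user-config.jam 2>&1 | tee bjam.log
# Convert the bjam log into per-test result records.
process_jam_log $BOOST_ROOT < bjam.log
# Render the familiar HTML status table from those results.
compiler_status $BOOST_ROOT cs-summary.html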

best,
-benjamin

