From: Reece Dunn (msclrhd_at_[hidden])
Date: 2006-03-07 09:04:34
Vladimir Prus wrote:
>On Tuesday 07 March 2006 12:09, Reece Dunn wrote:
> > NOTE: In the V1 regression run, the iostream compression library tests
> > are skipped (leaving the entry white) if the libraries don't exist. This
> > could be done using something like:
>I'll see about this.
> > As I mentioned above, I have started another msvc run now. Let me know
> > when the other changes are in so I can restart it.
>There are fixes to two Python tests (as mentioned in the "Last Python
>It would be great if you could verify that those tests work before investing
>CPU cycles on a full test run.
I already started a run this morning. I will also run those tests
individually to verify that the fix works.
> > BTW: if there is a BBv2 problem (e.g. like the previous ptr_container
> > problem), this doesn't get logged by the regression script. It would be
> > useful if this could be captured and displayed on the regression results
> > table to save me or others from posting the issue directly.
>I'm not sure how to arrange this, and I'm not even sure where to report this
>enhancement request :-( Maybe the SF tracker can be used, or the Wiki;
>I need to check on boost-testing.
No worries. I don't mind posting a BBv2 failure; it is just harder to
capture. One thing that would help is for bjam to return a non-zero exit code
whenever BBv2 fails and generates a stack trace or some other error. That would
mean that regression.py would throw an exception saying that bjam failed, and
I could then detect that and look at what the bjam.log file says.
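A minimal sketch of what such detection might look like in a driver script
(the command name and log path are assumptions for illustration; this is not
the actual regression.py code):

```python
import subprocess
import sys

def run_build(cmd, log_path="bjam.log"):
    """Run a build command, capture its output into a log file, and
    raise if it exits non-zero.

    Sketch of the proposed behaviour: if BBv2 itself fails (stack trace,
    target name clash, ...), bjam would return a non-zero exit code, which
    a driver like regression.py could turn into an exception pointing the
    tester at the log file.
    """
    with open(log_path, "w") as log:
        result = subprocess.run(cmd, stdout=log, stderr=subprocess.STDOUT)
    if result.returncode != 0:
        raise RuntimeError(
            "bjam failed (exit code %d); see %s" % (result.returncode, log_path)
        )
    return result.returncode
```

In use, `run_build(["bjam", "gcc"])` would either complete quietly or raise,
so a failed BBv2 run can no longer be silently ignored.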
It would also help if the output of the various bjam errors (stack trace or
target name clash) were reported in a more consistent way, so that they
could be easily processed. E.g.
../Jamfile.v2(42): v2 error: duplicate target foo.cpp
$ bjam gcc | grep -F "v2 error" | wc -l
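With a consistent one-line format like that, a results script could pick the
errors out of a log with a simple regular expression. A sketch (the
`v2 error` marker is the hypothetical format from the example above, not an
existing bjam convention):

```python
import re

# Hypothetical one-line error format: <jamfile>(<line>): v2 error: <message>
V2_ERROR = re.compile(r"^(?P<file>.+)\((?P<line>\d+)\): v2 error: (?P<message>.+)$")

def extract_v2_errors(log_text):
    """Return (file, line, message) tuples for each 'v2 error' line
    found in the captured bjam output."""
    errors = []
    for line in log_text.splitlines():
        m = V2_ERROR.match(line)
        if m:
            errors.append((m.group("file"), int(m.group("line")), m.group("message")))
    return errors
```

A regression results table could then display these entries directly instead
of relying on someone spotting the failure in the raw log.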
Boost-Build list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk