From: Rene Rivera (grafikrobot_at_[hidden])
Date: 2006-10-07 18:11:53
Vladimir Prus wrote:
> On Saturday 07 October 2006 23:01, Rene Rivera wrote:
>> Vladimir Prus wrote:
>>> On Monday 18 September 2006 10:53, Vladimir Prus wrote:
>>> Sorry for bothering, but I noticed that your sequence.unique optimization
>>> causes a couple of tests to fail -- which is a bit annoying when I try to
>>> figure out whether some change of mine caused real regressions.
>> Yes. But since I can't run the tests in a meaningful way, it's
>> impossible for me to tell when such things happen :-(
> That's why I'm trying to find what's wrong ;-)
Not that I can tell all that well, since the test output is verbose, but
when I run the tests with the unique optimization either on or off, the
output looks about the same to me.
Which brings up another usage question: is there a way to have a brief
output mode for the tests? Something that outputs just the success or
failure indicators, so I can quickly, and more accurately, compare
whether there are differences between test runs.
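In the meantime, something like the following sketch could reduce two verbose runs to comparable status lines. The "passed"/"failed" markers and the test-name extraction are assumptions for illustration, not the actual output format of the Boost.Build test runner:

```python
# Hypothetical helper: reduce a verbose test log to one status line per test
# so two runs can be diffed quickly. The pass/fail markers matched here are
# assumed, not taken from the real test-runner output.
import re

def summarize(log_text):
    """Return sorted 'status test-name' lines extracted from a log."""
    summary = []
    for line in log_text.splitlines():
        m = re.match(r'\s*(passed|failed)\s+(\S+)', line, re.IGNORECASE)
        if m:
            summary.append(f"{m.group(1).lower()} {m.group(2)}")
    return sorted(summary)

def diff_runs(log_a, log_b):
    """Return the status lines that appear in only one of the two runs."""
    a, b = set(summarize(log_a)), set(summarize(log_b))
    return sorted(a ^ b)  # symmetric difference

if __name__ == "__main__":
    run_on  = "PASSED alias\nFAILED unique\nsome verbose noise\nPASSED chain"
    run_off = "PASSED alias\nPASSED unique\nPASSED chain"
    print(diff_runs(run_on, run_off))
    # -> ['failed unique', 'passed unique']
```

With the summaries reduced to one line per test, a plain `diff` of two runs would also show only the tests whose status changed.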
--
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org - grafik/redshift-software.com
-- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo
Boost-Build list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk