
Boost-Build :

From: David Abrahams (dave_at_[hidden])
Date: 2002-08-21 12:59:26


From: "bill_kempf" <williamkempf_at_[hidden]>

> > Weird. It seemed to work for me.
> > Are you sure nothing runs?
> > Just tried it again. The lines labelled "capture-run-output" are
> > the ones invoking your tests.
>
> Ahh... I'm used to seeing the run in the actual output. Having to
> inspect a text file to determine if the tests failed is a Bad Thing
> as well.

You don't. If the thing exits with a nonzero return status, the file is
printed to stdout. It's only silent when the run succeeds.
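
For what it's worth, the behaviour being described is essentially "capture
everything, replay it only on failure". A rough shell sketch of that idea
(just the shape of it, not the actual capture-run-output action, and
test_thread is a placeholder name):

    # Run the test, capturing stdout and stderr to a file.
    ./test_thread > test_thread.output 2>&1
    status=$?
    if [ $status -ne 0 ]; then
        # Failure: replay the captured output on stdout...
        cat test_thread.output
        # ...and propagate the exit status so bjam marks the target as failed.
        exit $status
    fi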

> > > > The unit-test rule is really not very sophisticated: it makes
> > > > the .exe target dependent on a successful run. Of course that
> > > > means that it gets removed if the run fails.
> > >
> > > Which is precisely what I want. :(
> >
> > No you don't. If the run fails, you shouldn't have to re-link the
> > executable the next time you try to build.
>
> If the run fails, I'll be fixing code that will hopefully make it
> not fail the next time around, so a re-link is almost certainly
> going to be necessary the next time I try to build anyway.

Unless the failure was due to an input file, for example.

> I guess what I want is a cross between the behavior of "run"
> and "unit-test". I want the output to be visible during bjam
> invocation (though it can still be captured in a file as well),

Do you really need to see the result of successful runs?

> and
> if the run fails I want the next bjam invocation to still run the
> executable (linking I suppose could be considered bad and can be
> removed).

Currently it will only run the executable again if something changed that
could cause it to succeed. If you need it to run regardless, pass
-sRUN_ALL_TESTS=1 on the bjam command line.
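
For example, a full re-run of the tests would look something like this
(the toolset and the "test" target name here are just placeholders; use
whatever your Jamfile actually declares):

    bjam -sTOOLS=gcc -sRUN_ALL_TESTS=1 test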

> Further, failed runs should be reported as failed target
> updates by bjam.

They are.
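
(A failing run shows up in the end-of-build summary that bjam prints,
roughly along these lines; the exact counts and wording depend on the bjam
version:

    ...failed updating 1 target(s)...
    ...updated 95 target(s)...
)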

> In any event, I don't see how the run targets are going to be useful
> to me. The results are just too difficult to inspect for test
> failures. Failed runs aren't reported by the bjam invocation at
> all, let alone which specific tests failed.

Sure they are! Did you ever fail a run with the new Jamfile?

> So I'd have to inspect
> all 96 output files, each buried in a deep tree structure, just to
> know if the tests passed or failed.

I don't think so.

> > > I don't understand why this is a problem only with the gcc
> > > toolset. The msvc, vc7 and borland toolsets all work flawlessly
> > > with unit-test, and they should all be doing the same thing in
> > > this case, no?
> >
> > They don't need the JAMSHELL workaround for the link step since
> > they can use command files.
>
> That makes sense. Unfortunately, that makes it sound like I'm not
> going to find an acceptable solution to my problem in a short time
> frame, huh?

It depends on how demanding you are ;-)
I could try to update the unit-test rule to eliminate the issue, but I
really hope the "run" rule actually works for you and that you are just
misinterpreting reality.

-----------------------------------------------------------
David Abrahams * Boost Consulting
dave_at_[hidden] * http://www.boost-consulting.com

 

