
Boost-Build :

From: bill_kempf (williamkempf_at_[hidden])
Date: 2002-08-21 14:01:07


--- In jamboost_at_y..., "David Abrahams" <dave_at_b...> wrote:
> From: "bill_kempf" <williamkempf_at_h...>
>
> > > Weird. It seemed to work for me.
> > > Are you sure nothing runs?
> > > Just tried it again. The lines labelled "capture-run-output" are
> > > the ones invoking your tests.
> >
> > Ahh... I'm used to seeing the run in the actual output. Having to
> > inspect a text file to determine if the tests failed is a Bad Thing
> > as well.
>
> You don't. If the thing exits with a nonzero return status, the file
> is printed to stdout. It's only silent when it succeeds.

If that's the behavior, that's usable.
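That success-silent behavior can be sketched in a few lines of Python. This is a hypothetical illustration of the idea, not the actual capture-run-output rule; the function name and file handling are my own:

```python
import subprocess
import sys

def capture_run_output(command, output_file):
    """Run a test executable, capture its output in a file, and echo
    the file to stdout only when the run fails (nonzero exit status)."""
    with open(output_file, "w") as f:
        result = subprocess.run(command, stdout=f, stderr=subprocess.STDOUT)
    if result.returncode != 0:
        # A failed run is noisy: dump the captured output for inspection.
        with open(output_file) as f:
            sys.stdout.write(f.read())
    return result.returncode
```

A passing test leaves only the file behind; a failing one also prints the file, so nothing needs to be dug out of the tree by hand.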

> > I guess what I want is a cross between the behavior of "run"
> > and "unit-test". I want the output to be visible during bjam
> > invocation (though it can still be captured in a file as well),
>
> Do you really need to see the result of successful runs?

Not really.

> > and
> > if the run fails I want the next bjam invocation to still run the
> > executable (linking I suppose could be considered bad and can be
> > removed).
>
> Currently it will only run the executable again if something changed
> which could cause it to succeed. If you need it to run regardless,
> pass -sRUN_ALL_TESTS=1

That's nice to know.
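The rerun-only-when-something-changed behavior can be illustrated with a small Python sketch. This is just a model of the idea (a timestamp comparison against a "last passed" stamp file); the function and file names are hypothetical, not bjam internals:

```python
import os

def needs_rerun(test_exe, stamp_file, force=False):
    """Decide whether a test must run again.

    force mimics passing -sRUN_ALL_TESTS=1: always rerun.  Otherwise
    rerun only when the executable is newer than the stamp file that
    recorded the last successful run.
    """
    if force or not os.path.exists(stamp_file):
        return True
    return os.path.getmtime(test_exe) > os.path.getmtime(stamp_file)
```

With this scheme, a failed run leaves no fresh stamp behind, so the next invocation tries the test again; a passed run is skipped until its inputs change.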

> > Further, failed runs should be reported as failed target
> > updates by bjam.
>
> They are.

That's also nice to know.

> > In any event, I don't see how the run targets are going to be useful
> > to me. The results are just too difficult to inspect for test
> > failures. Failed runs aren't reported by the bjam invocation at
> > all, let alone which specific tests failed.
>
> Sure they are! Did you ever fail a run with the new Jamfile?

Sort of. See below.

> > So I'd have to inspect
> > all 96 output files, each buried in a deep tree structure, just to
> > know if the tests passed or failed.
>
> I don't think so.

Sort of. See below.

> > > > I don't understand why this is a problem only with the gcc
> > > > toolset. The msvc, vc7 and borland toolsets all work flawlessly
> > > > with unit-test, and they should all be doing the same thing in
> > > > this case, no?
> > >
> > > They don't need the JAMSHELL workaround for the link step since
> > > they can use command files.
> >
> > That makes sense. Unfortunately, that makes it sound like I'm not
> > going to find an acceptable solution to my problem in a short time
> > frame, huh?
>
> It depends how demanding you are ;-)

Not very. But see below ;).

> I could try to update the unit-test rule to eliminate the issue, but I
> really hope the "run" rule actually works for you and you are
> misinterpreting reality.

What you describe as the actual behavior, and what I see when I use
the -a option (or do a clean build first), are in fact quite usable.
However, I'm experiencing several issues, some of which stem from the
confusion above. Let me describe them.

Just modifying a test to produce an error and then invoking:

bjam -sTOOLS=vc7

appeared to succeed, without any indication of failed targets. Careful
inspection did reveal there was no "capture-run-output" step, however,
so I'm not sure it ever tried to run the test. I'm guessing it only
rebuilt the tests without running them.

Deleting the bin directory entirely results in the same behavior:
i.e. everything compiles, no failed targets are reported, and
there's no "capture-run-output" in the output.

I can't explain any of that behavior.

Another thing I notice is that only one target (including its 4
variants) is being built, and it's named test_base.test. It appears
that this is the target based on test_thread.cpp. The other 5
targets aren't built or run at all.

Bill Kempf



Boost-Build list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk