From: Beman Dawes (bdawes_at_[hidden])
Date: 2002-07-11 14:54:16
At 03:04 PM 7/11/2002, David Abrahams wrote:
>From: "Beman Dawes" <bdawes_at_[hidden]>
>> Hum... I assumed that linking, but without forcing a build if the library
>> wasn't there, was doable. But I gather from other postings that this is
>> one of those things that will have to wait until version 2.
>> I'm sorry. It never occurred to me that jam would not be able to link a
>> library that it had built in a prior job step.
>Sure it can do that, if you know how to point it at the right library object
>for each target you're building.
Well, John's current tests use:
How do I change that to use the library but, if it doesn't exist, just fail
all dependencies rather than trying to build the library?
>I don't know why you'd want to suppress building the unbuilt library as
>part of the test, though. That seems completely backwards and unnatural to
>me. The test depends on the library. If I want to run the test, why
>shouldn't it try to build the library?
Because you said:
>We have a generalized DAG describing target dependencies. You want a
>message to appear when the first target on which some test target is
>dependent starts to build. Jam doesn't work that way. The make process
>proceeds by recursive descent through the build dependency tree, starting
>at the target you requested to build. It proceeds from dependents to
>dependencies. There's no provision for looking back up at all the
>dependents of a target. In fact, a target (such as a library) may well have
>multiple dependent test targets. When we start to build that target, which
>test are we starting?
By running the build first, as a separate jam invocation, we got a specific
target to tie the failure messages to without "looking back up at all the
dependents of a target".
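A minimal Python sketch of the recursive-descent build described in the quoted text above (this is not Jam code; the dependency graph and target names are invented for illustration). It shows why, when the library fails mid-descent, the failure cannot be attributed to any one test target:

```python
# Hypothetical dependency graph: two test targets depend on one library.
deps = {
    "test1": ["libfoo"],
    "test2": ["libfoo"],
    "libfoo": [],
}
broken = {"libfoo"}  # pretend the library fails to build


def build(target):
    """Recursive descent from dependents to dependencies, as jam does.

    A target builds only after its dependencies succeed; nothing here
    records which dependent test "owns" the library being built.
    """
    if any(not build(dep) for dep in deps.get(target, [])):
        return False  # a dependency failed, so this target fails too
    return target not in broken


# Requesting each test independently: both fail inside libfoo, and at
# that point there is no record of which test the failure belongs to.
print({t: build(t) for t in ("test1", "test2")})
# -> {'test1': False, 'test2': False}
```

Running the library build as a separate, earlier invocation gives the failure a concrete named target before any test is ever requested, which is what the two-step approach above exploits.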
>> Does that mean that until version 2 we should not report status on any
>> regression test that uses libraries? Or just report "It failed for unknown
>> reasons". Depressing.
>I don't know. It seems like you're trying to get expedient results without
>doing the hard work of thinking about how the system should ultimately
>work. As I said in a previous message, we need to understand what the
>system really needs to support testing well. Until we do that, IMO
>everything else is a kind of flailing about with ad-hoc approaches, and
OK, I'll try to write something. Hard to do because a lot of the
requirements are ingrained assumptions that "of course" a test system will
Boost-Build list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk