From: David Abrahams (david.abrahams_at_[hidden])
Date: 2002-04-18 11:11:08
----- Original Message -----
From: "Vladimir Prus" <ghost_at_[hidden]>
> > Where does the build request come from? Since you say "build request
> > 'of' target" above I assume you are saying that there's already a
> > build request attached to it somehow.
> Yes. It all starts with a build request given by the user, which applies
> to the main targets in the Jamfile in the invocation dir. Build requests are
> then propagated to other targets -- usually as-is, but it is possible to
> modify them explicitly using special syntax.
> Initialization code grabs the BUILD and TOOLS variables, formulates build
> requests and expands them. Then, for each variant, it calls the "generate" rule on the
> target corresponding to the project in the invocation dir.
Good! Please add all clarifications to the document in CVS. Hmm, maybe
the above stuff belongs in architecture.html...
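The expansion step described above can be sketched in Python (this is an illustrative sketch only, not Boost.Build code; the names `expand_build_request` and `generate` are made up for the example):

```python
from itertools import product

def expand_build_request(variants, toolsets):
    # Cross the requested variants with the requested toolsets,
    # producing one concrete property set per build variant.
    return [{"variant": v, "toolset": t} for v, t in product(variants, toolsets)]

def generate(project, properties):
    # Stand-in for calling the project target's "generate" rule
    # once per expanded variant.
    return "%s/%s/%s" % (project, properties["toolset"], properties["variant"])

requests = expand_build_request(["debug", "release"], ["gcc"])
results = [generate("app", ps) for ps in requests]
```

The point is just that one user request fans out into several generate calls, one per variant.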
> > > rule generating-rule ( target : sources * : property-set )
> > >
> > > Target and source names passed to the generating rule will be
> > > actual jam target names, with their dependencies and locations
> > > already set.
> > Is it important that dependencies and locations are already set?
> > If so, why?
> Yes, because dependencies and locations are handled by the build system. We
> don't want some generating rule to override those decisions.
Are you proposing to do away with binding via $(SEARCH) and $(LOCATE)?
> > > [I would like to retain the same interface for all rules that
> > > actually generate build actions. The current "flags" rule is good and
> > > should be carried over, but passing properties as well would make
> > > sure everything is possible.
> > I don't see a relationship to the flags rule, but I agree that the
> > generating rule should get build properties explicitly.
> When we have the flags rule, then the rule 'gcc-compile' may be empty, because every
> relevant variable will be set on targets thanks to the flags rule.
Ah, sorry: when I said I wanted to keep the flags rule, I meant that I
wanted to keep the basic interface. I don't think "flags" should do any
work itself, but should be declarative like everything else <wink>. So
gcc-compile should exploit the data generated by all of the appropriate
flags declarations.
> > > ]
> > >
> > > Names used in 'target' and 'sources' should be strictly
> > > actual file names. (E.g. no guessing of ".exe")
> > Except, I suppose, for NOTFILE targets.
> Well.... I don't think that the 'make' rule has anything to do with NOTFILE
> targets. You've previously remarked what it is similar to --
> does that rule allow NOTFILE targets?
I haven't thought about it. Maybe not, but I never explicitly ruled them out.
> > > The make rule would create a main target, which can be referred to from
> > > other jamfiles. Main targets defined in the Jamfile in the jam invocation
> > > dir will also be available as actual jam targets with plain names.
> > So, in order to produce the main target name, 'make' might strip the
> > suffix to produce a portable representation? Example, please!
> No, I don't envision any portable representation for make targets yet.
> make foo.exe : foo.obj : borland-link ;
> is given, then the main target will be named "foo.exe". Since make does not
> handle suffixes (like the current "exe" rule does), I don't think it would make sense
> to do any other transformation.
Then please explain what you mean by "will also be available as actual
jam targets with plain names".
> > > It is possible to use the 'make' rule several times with the same
> > > target. When deciding which path should be used when satisfying a
> > > build request, preference is given to 'make' invocations with a longer
> > > list of requirements.
> > I don't understand this. Can you give an example, and can you explain
> > why you think this is a good idea?
> I believe this is explained in
It doesn't explain the sentence about preferring longer requirements lists.
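For what it's worth, the selection rule being discussed could look something like this Python sketch (names and data shapes are invented for illustration; this is not the actual Jam implementation):

```python
def select_make_invocation(invocations, build_properties):
    # Keep only the 'make' declarations whose requirements are all
    # satisfied by the build request...
    satisfied = [inv for inv in invocations
                 if all(r in build_properties for r in inv["requirements"])]
    if not satisfied:
        return None
    # ...and prefer the one with the longest requirements list,
    # i.e. the most specific declaration wins.
    return max(satisfied, key=lambda inv: len(inv["requirements"]))

invocations = [
    {"sources": ["file1.obj", "file2.obj"], "requirements": ["<toolset>borland"]},
    {"sources": ["file3.obj"], "requirements": []},
]
chosen = select_make_invocation(invocations, ["<toolset>borland"])
```

With a borland build request, the borland-specific declaration is chosen over the generic one; any other toolset falls back to the declaration with no requirements.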
> > > [if we want to allow mutually exclusive requirements, we might want,
> > > in addition to <optimization>off, to allow <optimization>!off. For
> > > more contrived requirements, it probably makes sense to use executed rules.
> > > All this is way too complicated for the first milestone.
> > > ]
> > Needs clarification, still.
> For example, you want a different set of files on borland:
> make foo.exe : file1.obj file2.obj : borland-link : <toolset>borland ;
> make foo.exe : file3.obj : ??????? : <toolset>!borland ;
> Oops, in this particular case it makes no sense, because we specify a
> concrete rule in each case. It's probably more reasonable in the example from the
> message I've linked above.
I still don't get it, but I'm not sure it's important yet.
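The `!`-negated requirement syntax floated above (`<toolset>!borland`) is easy to sketch in Python (a hypothetical matcher, not actual Boost.Build code):

```python
def requirement_satisfied(req, properties):
    # req looks like "<toolset>borland" or, negated, "<toolset>!borland";
    # properties maps feature names to the value in the build request.
    feature, value = req[1:].split(">", 1)
    actual = properties.get("<" + feature + ">")
    if value.startswith("!"):
        # Negated requirement: satisfied by any value except the named one.
        return actual != value[1:]
    return actual == value
```

So `<toolset>!borland` matches a gcc build but not a borland one, which is exactly what would let two `make` declarations for `foo.exe` partition the toolset space.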
> > > All properties will be considered relevant for the generating rule, for
> > > the purpose of computing the subvariant identifier.
> > Now there's a slight contradiction. If you have to compute a
> > subvariant identifier, then you aren't starting with a true Jam target
> > name, because the target will have to acquire some grist to identify
> > the subvariant.
> Yes, actual jam targets will need grist. I don't understand the contradiction.
Earlier you took great pains to say that the first argument to make is
already a true Jam target name. Now you're implying that grist must be
added to get the target name. Which is it (or have I misunderstood?)
> > > Subvariant targets will be located under:
> > > $(jamfile-dir)/bin/main_target_name/$(subvariant-path)
> > I presume jamfile-dir/subvariant-path aren't actually meant to be
> > variables?
> No, they are not meant to be actual variables -- I wanted to emphasize that
> they're not a constant part... hmm, main_target_name is not constant either.
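The location scheme quoted above can be sketched as follows (a Python illustration under the assumption that each relevant property value contributes one path component; the function name and ordering are invented):

```python
def subvariant_location(jamfile_dir, main_target_name, properties):
    # Encode each relevant property value as one path component, in a
    # stable (sorted-by-feature) order, under
    # <jamfile-dir>/bin/<main-target-name>/<subvariant-path>.
    subvariant_path = "/".join(v for _, v in sorted(properties.items()))
    return "/".join([jamfile_dir, "bin", main_target_name, subvariant_path])

loc = subvariant_location("libs/foo", "foo.exe",
                          {"<toolset>": "gcc", "<variant>": "debug"})
```

Sorting by feature name is just one way to make the subvariant path deterministic regardless of how the property set was assembled.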
> > Yes, definitely. Otherwise, it will be hard for people to understand
> > why targets are skipped.
> Okay. It could just return an error string in that case, which can be
> distinguished from a correct return by the absence of grist, right?
A little too subtle. Why don't we pick some nice, identifiable first
element like '@error'?
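The '@error' convention suggested here is trivially sketched (Python illustration; the marker is from the discussion, everything else is invented):

```python
ERROR_MARKER = "@error"

def is_error(result):
    # A generate result whose first element is the marker is an error
    # report; anything else is a list of (gristed) target names.
    return len(result) > 0 and result[0] == ERROR_MARKER

ok = ["<gcc/debug>foo.exe"]
bad = [ERROR_MARKER, "no suitable make invocation for foo.exe"]
```

Compared to "no grist means error", an explicit first element is cheap to test and hard to produce by accident.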
> > > - Routine compose-requirements ( requirements1 : requirements2 )
> > >
> > > Returns a requirements set which is satisfied iff both
> > > 'requirements1' and 'requirements2' are satisfied, and
> > > an empty string otherwise.
> > > (Seems like it does exactly the same as
> > > apply-requirements. However, it's better to have another
> > > name, to avoid confusion)
> > I think it's not the same. I think the build request is a "soft"
> > property set. Some elements of the request may be ignored or overridden
> > according to the target requirements. However, I wonder about the need
> > for compose-requirements. Why not just dump the requirements together,
> > as in $(requirements1) $(requirements2)?
> Suppose a project's requirements and the requirements of its parent cannot be
> satisfied together. We'd need a decent error message in this case.
> BTW, what will happen if the parent requires <optimization>space and the
> current project <optimization>speed (link-compatible features)? Or
> <rtti>off and <rtti>on, respectively (link-incompatible features)? Maybe
> we don't understand something here.
I agree. Generally these won't be used as requirements, but as
default-build settings. However, I can see that they might be.
OK, <rtti>on/off - I think this is simple: we just refuse to build the
parent. In general, linking would fail anyway.
As far as <optimization>on/off is concerned, it seems to me that we
should respect the requirements of the child when building the child,
rather than propagating the parent requirements. Doesn't this sound very
much like "request compatibility" as discussed here:
> > > 2. Another possibility is to specify which features are
> > > compatible. But I believe that the number of truly
> > > incompatible features is low, and explicitly writing
> > > down all compatible combinations will be harder.
> > I agree. I think it makes sense not to write down which properties are
> > link-compatible with one another, as in gLINK_COMPATIBLE, but (if we
> > need to specify this at all) to write down which features are NOT
> > compatible with one another. Put simply, there should be sensible
> > defaults: the default for features is that they are link-compatible
> > with one another.
> Not sure I understand. Do you propose that features are by default
> link-compatible with one another, and that values of one feature are
> mutually exclusive?
Yes (except for free features, which may be multi-valued), that's the idea.
> I was proposing something different -- link-compatibility by default for all
> features. Will need to just count the number of link-compatible and
> link-incompatible properties in existing toolsets.
Aha! I am beginning to see the light. I think you are saying that values
of a given non-free feature should be considered link-compatible by
default, and that *some other mechanism* should be used to deal with
overriding features in build requests (i.e. the fact that the user can
override <optimization>off in his debug build by explicitly specifying a
different value).
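The "link-compatible by default" rule being converged on here could be sketched as (Python illustration; the sets of free and link-incompatible features are hypothetical examples, not the real toolset definitions):

```python
LINK_INCOMPATIBLE = {"<rtti>"}   # hypothetical explicit exception list
FREE_FEATURES = {"<define>"}     # free features may take many values

def link_compatible(props_a, props_b):
    # Default: differing values are link-compatible unless the feature is
    # explicitly listed as link-incompatible. Free features never count.
    for feature in set(props_a) & set(props_b):
        if feature in FREE_FEATURES:
            continue
        if feature in LINK_INCOMPATIBLE and props_a[feature] != props_b[feature]:
            return False
    return True
```

Under this default, `<optimization>space` links fine against `<optimization>speed`, while `<rtti>off` against `<rtti>on` is refused, matching the two cases discussed above.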
> > > ** Full implementation of abstract-target.generate **
> > Can you describe the signature for this rule? I don't see a class
> > "abstract-target" anywhere.
> See the class 'target', defined in "targets.jam" (which is in CVS). Note that
> the generate rule should take not a build request but an element of the
> build request -- i.e. a property set.
Okay. Separate nits:
1. Parse errors:
rule abstract-target-name ( jamfile-location : target-in-jamfile ? )
# Creates project target at the specified location
rule create-abstract-project-target ( jamfile-location )
# Class which represents a virtual target
rule virtual-target ( name : subvariant : project )
(no curly braces)
2. I can't get used to the C++-style separation of interface and
implementation, and frankly I don't think the Jam language lends itself
to it. It makes it especially hard for me to evaluate the code because
there aren't any comments next to the implementations to guide me. I
would really appreciate it if you'd just move the comments to the
implementations and eliminate the empty interface specifications in the
beginning of the file. Rene's system will help us get a coherent
interface document that doesn't include implementations.
> > > It should
> > > - filter build request with requirements
> > > - select appropriate variants of used targets
> > > - construct a dependency graph for the target
> > > - return it.
> > >
> > > The dependency graph construction at this stage will be
> > > implemented using a call to construct.construct
> > > rule construct ( target : target-type : sources * : properties )
> > Why pass the target-type? It seems to me that the abstract target
> > should know its target-type.
> The abstract target knows its type. But "construct" deals with dependency graph
> construction -- the abstract target would call it, passing its own target type.
Still seems redundant to me, since it passes itself as the first
argument. Not a big deal, though.
> > I don't understand the above. Sorry that I need so much explanation.
> Each main target has its generate rule called. It decides which make invocation
> for this main target name should be used.
Now I begin to get a hint of the "requirements length preference" thing
you described earlier.
> A target subvariant name is
> computed (assuming that all properties are relevant, we'd just use all
> properties). A virtual target is created for the target and for each source.
> And the action associated with the target will be just an invocation of the
> generating-rule given in the make invocation.
Very nice, thank you.
> > > [it is possible that we generate more than one target because some
> > > build actions create more than one. Handling of this is described
> > > in architecture.html and should be implemented.]
> > That document describes a very un-hardcoded system, so I can't reconcile
> > this with the previous sentence.
> I meant that we should be prepared that a target's generate rule will return a
> list of virtual targets separated by "@". (In fact, for a main target there
> should be only one virtual target, possibly followed by "@" and a list of
> other virtual targets that we can't avoid generating due to tool limitations.)
> However, it seems that we'll never use "@" in M1, so the above can
> be forgotten.
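Parsing such an "@"-separated result is straightforward; a hypothetical Python sketch of the convention (not actual Boost.Build code):

```python
def split_virtual_targets(result):
    # First element is the requested virtual target; the rest are extras
    # that the tools forced us to generate alongside it.
    parts = result.split("@")
    return parts[0], parts[1:]

main, extras = split_virtual_targets("foo.exe@foo.pdb")
```

A result with no "@" at all, which is apparently the only case for M1, yields the main target and an empty extras list.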
Boost-Build list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk