From: David Abrahams (david.abrahams_at_[hidden])
Date: 2002-01-09 12:37:45
----- Original Message -----
From: "Vladimir Prus" <ghost_at_[hidden]>
> I'm not sure what consequences that will have. In any case we'd need to
> ensure that other transformations won't become restricted in some aspects.
We see eye-to-eye.
> > > There might be the following transformations available
> > >
> > > type->type : requirements : rule
> > > C++->OBJ : <toolset>gcc : gcc-C++-compile
> > > C++->C : <toolset>auc : auc-C++-compile
> > > C->OBJ : <toolset>auc : auc-C-compile
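The table above can be sketched in code. A minimal model (Python; the types, toolset names, and rule names come from the table, everything else is illustrative and not Boost.Build's actual API): each transformation is an edge in a type graph, guarded by a set of required properties.

```python
# Hypothetical sketch: transformations as guarded edges in a type graph.
from collections import namedtuple

Transformation = namedtuple("Transformation", "source target requirements rule")

TRANSFORMATIONS = [
    Transformation("C++", "OBJ", {"<toolset>gcc"}, "gcc-C++-compile"),
    Transformation("C++", "C",   {"<toolset>auc"}, "auc-C++-compile"),
    Transformation("C",   "OBJ", {"<toolset>auc"}, "auc-C-compile"),
]

def applicable(source, target, properties):
    """Return the transformations from `source` to `target` whose
    requirements are all satisfied by the given property set."""
    return [t for t in TRANSFORMATIONS
            if t.source == source and t.target == target
            and t.requirements <= set(properties)]
```

With this representation, a build request's properties select which edges are usable: `applicable("C++", "OBJ", ["<toolset>gcc"])` yields only the gcc compile rule.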
> > I understand what you're doing here (and I understood your proposal on
> > Wiki). However, I think there are some important issues that it doesn't
> > address:
> > 1. The top-level target specification is something more like (for an
> > executable):
> > C++*,C*,OBJ*,LIB*->EXE
> > Some mechanism is needed to decompose that into the individual steps
> > (C++->OBJ, OBJ*,LIB*->EXE) which actually form the dependency graph.
> Hm, I don't understand. Up till now I thought we had a similar approach to
> decomposing into steps!
I don't really have an approach, but am looking for one. The approach you
had didn't seem satisfactory to me, for reasons already mentioned.
> Transformations that I'm talking about are edges in
> the transformation (or dependency) graph. I don't think that finding the
> sequence of transformation is in any way different from finding
> shortest/unique path.
There's a big difference between shortest and unique, though ;-).
> > > With all that said, I don't think we should look for the shortest path to
> > > find the transformation sequence -- we should look for a unique one.
> > I'm a little concerned about how that would affect the extensibility of the
> > system. Here's one simple example: when in "user mode" we might well want
> > to enable an executable to be generated directly from source files, without
> > intermediate .obj files, when the toolset supports it. It would be nice if
> > simply enabling those transitions could do the job.
> At this point we'd need to go back, I guess.
> 1. What are the semantics of build requests and subvariants? I'm not sure we
> agree on this point.
> One alternative is that the build request for a main target specifies all the
> variants (property-sets) of that target that we want to build. Initially,
> not all subvariants are known -- e.g. <runtime-link> might not be specified.
> By providing "active" features it's possible to change the relevance-set &c.
> Another alternative is that the build request constrains the possible
> transformations. For example, cpp may be compiled into obj by many toolsets,
> but a <toolset>gcc build request would allow only those compilations which
> use gcc as the toolset. Each allowed transformation path found creates a new
> subvariant. E.g. we might have a cpp->obj transformation with two possible
> property-sets:
> <toolset>gcc <runtime-link>dynamic
> <toolset>gcc <runtime-link>static
> As a result, we have two subvariants for each object file, and two
> subvariants for exe.
> For me, the second alternative is more attractive.
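The second alternative can be sketched as expanding a build request against the features relevant to a transformation: features fixed by the request stay fixed, and each relevant free feature multiplies the subvariants. A hedged Python sketch (feature and value names taken from the example above; the function itself is illustrative):

```python
from itertools import product

def expand_subvariants(request, relevant):
    """Expand a build request into concrete property-sets (subvariants):
    the request fixes some features; every relevant feature left free
    contributes one subvariant per possible value."""
    fixed = dict(request)
    free = [(f, vals) for f, vals in relevant.items() if f not in fixed]
    subvariants = []
    for combo in product(*(vals for _, vals in free)):
        ps = dict(fixed)
        ps.update(zip((f for f, _ in free), combo))
        subvariants.append(ps)
    return subvariants

# The example from the mail: <toolset>gcc fixed, <runtime-link> free.
subs = expand_subvariants(
    {"toolset": "gcc"},
    {"toolset": ["gcc", "auc"], "runtime-link": ["dynamic", "static"]},
)
# Two subvariants: {gcc, dynamic} and {gcc, static}.
```

This reproduces the two property-sets listed above: one subvariant per value of the free `<runtime-link>` feature, rather than one per toolset.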
I think I understand what you're getting at here. It doesn't sound as though
it supports feature defaults very well, though. Does this mean that if I
don't specify <struct-alignment> I end up with a subvariant for all 5 of its
possible values?
Does it solve the problem of intermediate target generation? If so, how? Can
you describe what happens in an example case? What controls when the
generators fire, and in what order?
> I don't like the idea of determining subvariants prior to finding all the
> transformation paths. I'm not sure this is feasible at all -- some tool in
> the middle of the path might have two ways of working, resulting in two
> subvariants, and we'll never know that unless the path is found.
Conversely, I don't like the idea of generating multiple build variants just
because the toolsets support a certain kind of variation. Normally, a user
will just want a release build, and a developer will just want a debug
build. One build.
I think it's reasonable to say this:
* the user gives us a set of property combinations (s)he wants
* we do some magic to expand the specified properties. This may include:
* expansion of compound properties
* executing rules associated with executable properties
* There is a procedure for running generators based on each property set.
This procedure should support a kind of overload resolution which allows us
to control precedence based on specificity. This part is complicated and I'd
rather have a simpler alternative, but I don't see one now.
* generators are run to build the dependency graph
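The "overload resolution" step in the list above could work roughly like this sketch (Python; the generator records and the specificity rule, requirement-set size with an ambiguity error, are my assumptions, not an agreed design):

```python
def most_specific(generators, properties):
    """Pick the generator whose requirements match `properties` and are
    most specific (largest requirement set), mimicking overload resolution.
    Raises on ambiguity, as C++ overload resolution would."""
    props = set(properties)
    viable = [g for g in generators if g["requires"] <= props]
    if not viable:
        return None
    best = max(len(g["requires"]) for g in viable)
    winners = [g for g in viable if len(g["requires"]) == best]
    if len(winners) > 1:
        raise ValueError("ambiguous generator selection")
    return winners[0]

generators = [
    {"name": "generic-compile", "requires": set()},
    {"name": "gcc-compile", "requires": {"<toolset>gcc"}},
]
chosen = most_specific(generators, {"<toolset>gcc", "<variant>debug"})
# gcc-compile wins: its requirements are a strict superset of generic-compile's.
```

A toolset could then shadow a generic generator simply by registering a more specific one, without touching the generic rule.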
> Once the transformations are found, I see no obvious need to use active
> features in order to find subvariants -- each transformation might well list
> the features relevant to it.
How do you "find transformations?" Oh, that's below.
> Note that I don't mean that active features are not needed, but that's a
> separate issue.
> 2. How are all those transformation paths found?
> This is a hard question. In Jambase, everything is simple -- for each source
> type, the sequence of transformations to make an object of that type is
> hardcoded. We'd like to support a more elaborate method.
> i) How precisely is the sequence of transformations found?
> ii) Which support is needed for target type hierarchy?
> iii) Which hooks will allow tools to do something special?
> iv) Do we need to support make style implicit transformation sequences?
> i) When there are no toolsets and subvariants, just shortest path will do.
> With toolsets, I think we can gather a set of shortest paths, each with a
> unique set of properties that are required on that path. I have yet to
> consider how such a search can be implemented. The idea is that toolset A
> might support
> C++->OBJ; OBJ*,C++*->EXE, and B might support
> C++->OBJ; OBJ*->EXE
> A naive search for shortest path will give C++*->EXE, which toolset B does
> not support.
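The failure mode described above is easy to demonstrate: breadth-first search over the *union* of all toolsets' edges finds the one-step path that only toolset A can perform. A small sketch (Python; edge sets taken from the example, everything else illustrative):

```python
from collections import deque

# Edges per toolset, from the example: A can link C++ sources directly
# into an EXE; B must go through OBJ.
EDGES = {
    "A": [("C++", "OBJ"), ("OBJ", "EXE"), ("C++", "EXE")],
    "B": [("C++", "OBJ"), ("OBJ", "EXE")],
}

def shortest_path(edges, source, target):
    """Breadth-first search: shortest transformation sequence."""
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for s, t in edges:
            if s == path[-1] and t not in seen:
                seen.add(t)
                queue.append(path + [t])
    return None

# Searching the union of all toolsets' edges finds the one-step path,
# which toolset B cannot actually perform:
assert shortest_path(EDGES["A"] + EDGES["B"], "C++", "EXE") == ["C++", "EXE"]
# Constraining the search to B's own edges gives the valid two-step path:
assert shortest_path(EDGES["B"], "C++", "EXE") == ["C++", "OBJ", "EXE"]
```

This is why the search has to be constrained per required-property set rather than run once over all edges.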
> ii) Let's start with the following question: is it okay to have a semantics
> where:
> - each derived target type initially inherits all the transformations that
> its parent has
What does it mean for a target type to "have" a transformation? Can you make
this concrete for me?
> - it's possible to define new transformations that apply to the derived
> type only. All the transformations with the same (or compatible?) set of
> required properties are then overridden.
> What I mean here is that we would be able to change transformation rules for
> a particular derived target type on a particular compiler without trashing
> the transformations for other compilers.
> If such a semantics is okay, then we've agreed. Of course, we'll probably
> need some optimization, but that's not a design issue.
I can't agree until I understand it ;^)
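One possible concrete reading of the proposed semantics, as a sketch (Python; the `MOC_CPP` derived type and all rule names are hypothetical, chosen only to illustrate inherit-then-override per requirement set):

```python
class TargetType:
    """Each target type inherits its parent's transformations; registering
    a transformation with the same requirement set overrides only that
    inherited entry, leaving other compilers' entries untouched."""
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.own = {}  # frozenset(requirements) -> rule name

    def register(self, requirements, rule):
        self.own[frozenset(requirements)] = rule

    def lookup(self, requirements):
        key = frozenset(requirements)
        t = self
        while t is not None:
            if key in t.own:
                return t.own[key]
            t = t.parent
        return None

cpp = TargetType("CPP")
cpp.register({"<toolset>gcc"}, "gcc-compile")
cpp.register({"<toolset>auc"}, "auc-compile")

# Hypothetical derived type that overrides the gcc transformation only:
moc_cpp = TargetType("MOC_CPP", parent=cpp)
moc_cpp.register({"<toolset>gcc"}, "gcc-moc-compile")
```

Under this reading, `moc_cpp` answers with its own rule for gcc, still inherits the auc rule unchanged, and the base `cpp` type is unaffected.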
> The first level of hook is that a new top-level rule can be written to do
> some special processing of its sources.
> Second, we might have toolset-specific rules that compute the transformation
> from source type to target type. I think the graph-based approach is fine
> for general use, but probably in some cases it makes sense to bypass it, or
> use it in an odd way.
> Third, there might be active features: they cause some rules to be run
> before paths are computed.
> Last, the set of relevant features for a transformation might be optionally
> computed via a rule invocation. Artificial example: some tool might happen
> to provide two very different modes of operation starting with certain
> properties, and a rule can take that into account.
> However, I'm not sure all those hooks are either needed or enough.
Yes, it sounds like an ad-hoc collection of "ways in" to the system - that
never inspires much confidence in me.
> So, I
> think we need to write down a set of use cases for source transformation.
> I've started doing it; maybe others can too. I think we're mostly interested
> in corner cases -- the more bizarre transformations you can imagine, the
> better.
I think I listed quite a few interesting cases in the mail you're responding
to.
Boost-Build list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk