Boost-Build :
From: David Abrahams (david.abrahams_at_[hidden])
Date: 2002-01-04 12:16:27
----- Original Message -----
From: "Vladimir Prus" <ghost_at_[hidden]>
> David Abrahams wrote:
> > This message is concerned with building the dependency graph. That job
> > includes the generation of intermediate targets (e.g. when building an exe
> > from sources, the object files are intermediate) and the invoking of rules
> > which establish dependency relationships and build actions.
>
> 1. Why do we need hierarchy? Aren't all transformations between leaf
> vertices in it? (I see a dotted line from "object" to "LIB" on your graph.
> Do you really mean that LIBs can be assembled into yet another library?)
Yes, absolutely I mean that. It is possible, and it's easy to imagine that
some people will want to do it. For example, we might build Python, Regex,
and Threads as separate libraries, then assemble them into a single Boost
library.
More importantly, I was just trying to formalize my mental model for targets
and their relationship. I usually find that the best approach when I don't
know how to approach a problem is to code something which corresponds to my
mental model. Unfortunately, I'm not sure my mental model is adequate yet ;-(
> 2. What is your definition of "toolset"? You say:
> "build jobs outside the domain of the toolset". For me, toolset is nothing
> but a named set of transformation rules.
We have a specific feature called "toolset" which refers to an important
case of a named set of capabilities (usually, compiling, linking, etc.). I
mean to distinguish that from the more general idea of a named set of
transformation rules.
> 3. My proposal on building semantics (still on the Wiki) differs from yours
> in that I have all the dotted edges (or transformations, or actions)
> annotated with requirements. If requirements are not met, path search simply
> won't consider that transformation.
I want to clarify that I was not yet making a proposal. I agree that some
mechanism to choose transition rules based on the properties of the target
will be important.
> To consider your example with a C++ frontend compiler:
>
> exe foo : foo.cpp ;
>
> There might be the following transformations available
>
> type->type : requirements : rule
> C++->OBJ : <toolset>gcc : gcc-C++-compile
> C++->C : <toolset>auc : auc-C++-compile
> C->OBJ : <toolset>auc : auc-C-compile
>
> Therefore, transformations for both gcc and auc can be found. To elaborate
> further, we should assume that every rule given to a toolset is
> deterministic: it performs all the transformations in which it's mentioned.
> Then, suppose we have to deal with g++'s template repositories (e.g. to
> clean them):
>
> C++->RPO : <toolset>gcc <template-repository>on : gcc-C++-compile ;
>
> Now it's easy for the build system to notice that when gcc-C++-compile is
> invoked and a template repository is used, one more file will be created.
I understand what you're doing here (and I understood your proposal on the
Wiki). However, I think there are some important issues that it doesn't
address:
1. The top-level target specification is something more like (for an exe):
C++*,C*,OBJ*,LIB*->EXE
Some mechanism is needed to decompose that into the individual steps
(C++->OBJ, OBJ*,LIB*->EXE) which actually form the dependency graph; the
sketch after point 3 below illustrates one possibility.
2. I find myself wanting to create new variations on existing target types.
PYD as a specialization of DLL is one example. I suppose it's possible that
this is sufficiently rare that it can be handled as a special case, but
somehow I doubt it (plugins with special requirements are a common idea).
This causes a significant amount of code repetition in the current system.
It would be best if these transition rules could choose the "best match", so
that, e.g., when building a PYD, both <target-type>PYD and <target-type>DLL
are in the property set and both kinds of transitions are available, but the
PYD-specific transitions take precedence. That is part of the motivation
behind the "shortest path" idea.
3. I worry a little bit about throwing all of these generators into a big
"soup" and letting them compete without any opportunity to control the
process. Will we paint ourselves into a corner where someone wants to extend
the capabilities of the system but can't get the results they want?
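To make points 1 and 2 concrete, here is a rough Python sketch of the shape I
have in mind: a registry of transformations annotated with requirements, and a
breadth-first (shortest-path) search that decomposes a source into a chain of
single steps. Everything here (TRANSFORMATIONS, find_chain, the tuple layout)
is made up for illustration, not a proposal for an actual interface:

# Hypothetical sketch, not a real interface: transformations represented
# as (source-type, target-type, required-properties, rule-name) tuples.
from collections import deque

TRANSFORMATIONS = [
    ("C++", "OBJ", {"<toolset>gcc"}, "gcc-C++-compile"),
    ("C++", "C",   {"<toolset>auc"}, "auc-C++-compile"),
    ("C",   "OBJ", {"<toolset>auc"}, "auc-C-compile"),
]

def find_chain(source_type, goal_type, properties):
    # Breadth-first search: the first chain found is a shortest one, so a
    # direct C++->OBJ beats C++->C->OBJ whenever both are enabled.
    queue = deque([(source_type, [])])
    seen = {source_type}
    while queue:
        current, chain = queue.popleft()
        if current == goal_type:
            return chain
        for src, dst, reqs, rule in TRANSFORMATIONS:
            # A transformation is viable only when its requirements are
            # a subset of the target's property set.
            if src == current and reqs <= properties and dst not in seen:
                seen.add(dst)
                queue.append((dst, chain + [rule]))
    return None

# Decomposing an exe's sources: each source gets its own chain to OBJ;
# the final OBJ*,LIB*->EXE link would be an explicit combining step.
for source in ["foo.cpp", "bar.c"]:
    src_type = "C++" if source.endswith(".cpp") else "C"
    print(source, find_chain(src_type, "OBJ", {"<toolset>auc"}))

The "best match" preference from point 2 could then become a tie-break among
equal-length chains: prefer the one whose requirements mention the more
specific properties (<target-type>PYD over <target-type>DLL). I'm not
claiming this search is the right one; it's just the smallest thing that
shows the idea.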
> Lex/Yacc seem simple as well:
>
> LEX->C : : lex-generate
>
> There's a question, though: how will the set of subvariants be computed?
> One option is to specify features that are relevant to each target type.
> But then, relevant features specific to a toolset won't work.
My current thinking is that:
1. A target has a large collection of properties, many of which are not
"relevant" in the sense that we use it, i.e. they will not contribute to
determining a unique subvariant identity. The relevant subset of the
target's property set is called its "relevance-set".
2. Some of the features in the target's property set are "active", in the
sense that they cause rules to be executed which modify attributes and
properties of the target.
3. Any of the "active" features of the target has an opportunity to add
features to the target's relevance-set.
That way, <target-type> can add relevant features and <toolset> can also add
relevant features.
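Here's a small sketch of what I mean, with made-up names (ACTIVE_FEATURES,
subvariant_identity) and made-up relevance additions; the real mechanism would
presumably be rules registered per feature:

# Hypothetical sketch: active features may mark further features as
# relevant; the subvariant identity is computed only from the relevant
# subset of the target's property set.

ACTIVE_FEATURES = {
    "<target-type>": lambda v: {"<shared-linkable>"} if v == "DLL" else set(),
    "<toolset>":     lambda v: {"<template-repository>"} if v == "gcc" else set(),
}

def subvariant_identity(properties):
    # properties: a feature -> value mapping for one target.
    relevant = {"<toolset>", "<target-type>"}   # baseline relevance
    for feature, value in properties.items():
        if feature in ACTIVE_FEATURES:          # active features add to
            relevant |= ACTIVE_FEATURES[feature](value)  # the relevance-set
    # Only relevant features contribute to the identity.
    return tuple(sorted((f, v) for f, v in properties.items()
                        if f in relevant))

props = {"<toolset>": "gcc", "<target-type>": "DLL",
         "<template-repository>": "on", "<define>": "NDEBUG"}
print(subvariant_identity(props))   # <define> is ignored: never relevant

Two targets differing only in an irrelevant feature would then get the same
identity and collapse into one subvariant.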
> I'd prefer simply
> considering all the alternative pathes, which can be used to generate a
FWIW, this is "paths"-------------^^^^^^, but it's pronounced "pathes" ;-)
> target, and to generate subvariants based on that information.
I don't understand what you mean above. Could you try to explain?
> Then there are combining actions like Archive and Link. I don't think
> allowing them inside implicitly generated transformations is needed or
> feasible. Instead, they should always be generated using explicit main
> targets. It should also be ensured that the path search algorithm doesn't
> accidentally pick those transformations. In all other respects, they can
> work in a similar fashion.
That's an interesting simplification. Maybe it will help.
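One way to realize that simplification (a sketch; the "implicit" flag is
invented): combining transformations simply never enter the implicit path
search, and are reachable only through an explicitly declared main target:

# Hypothetical sketch: combining actions carry a flag that keeps the
# implicit path search from ever selecting them; they are reachable only
# through an explicitly declared main target.

TRANSFORMATIONS = [
    # (source-types, target-type, rule-name, usable-implicitly?)
    (("C++",),       "OBJ", "gcc-C++-compile", True),
    (("OBJ", "LIB"), "EXE", "gcc-link",        False),  # explicit only
]

def implicit_candidates(source_type):
    # Only single-source transformations flagged implicit take part in
    # the path search.
    return [t for t in TRANSFORMATIONS
            if t[3] and t[0] == (source_type,)]

def explicit_candidates(target_type):
    # An explicit main target (e.g. "exe foo : ... ;") may use any
    # transformation producing its type, combining ones included.
    return [t for t in TRANSFORMATIONS if t[1] == target_type]

print(implicit_candidates("OBJ"))   # [] -- the search never sees gcc-link
print(explicit_candidates("EXE"))   # gcc-link, found only explicitly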
> EXE<-OBJ,LIB : <toolset>gcc <runtime-link>static : gcc-link
> EXE<-OBJ,LIB : <toolset>gcc <runtime-link>dynamic : gcc-link
> EXE<-OBJ,LIB : <toolset>auc : auc-link
>
> (Actually, there should be both "required requirements" and "relevant
> requirements" on a transformation, to avoid writing all possible values of
> relevant features, but this is a detail.)
>
> With all said, I don't think we should look for the shortest path to find
> a transformation sequence -- we should look for a unique one.
I'm a little concerned about how that would affect the extensibility of the
system. Here's one simple example: when in "user mode" we might well want to
enable an executable to be generated directly from source files, without
intermediate .obj files when the toolset supports it. It would be nice if
simply enabling those transitions could do the job.
Here's another example: the user has a target which depends on extracting
information from the assembly-language translation of one of the C++ files
he's using in his executable. There would be two paths to generating the OBJ
from that C++ file: C++->ASM->OBJ and C++->OBJ.
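To spell the ambiguity out (a sketch; names invented): enumerating every
acyclic chain from C++ to OBJ finds two, so a "unique path" policy has to
report an error exactly where a "shortest path" policy can still choose:

# Hypothetical sketch of the ASM ambiguity: with all three transformations
# enabled there are two chains from C++ to OBJ, so "unique path" must fail
# where "shortest path" can still pick the direct compilation.

TRANSFORMATIONS = [
    ("C++", "OBJ"),   # direct compilation
    ("C++", "ASM"),   # emit assembly for inspection
    ("ASM", "OBJ"),   # assemble the result
]

def all_chains(src, goal, seen=frozenset()):
    # Enumerate every acyclic transformation chain from src to goal.
    if src == goal:
        yield []
        return
    for s, d in TRANSFORMATIONS:
        if s == src and d not in seen:
            for rest in all_chains(d, goal, seen | {d}):
                yield [(s, d)] + rest

chains = list(all_chains("C++", "OBJ"))
print(chains)                        # [C++->OBJ] and [C++->ASM, ASM->OBJ]
if len(chains) > 1:
    print("shortest:", min(chains, key=len))  # unique-path would error here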
I don't have an example of this, but I can imagine that if someone wants to
extend the system with new build capabilities, some ambiguity in the possible
transformation sequences might be introduced, especially when you consider
things like code extractors, invoking existing makefiles, or, e.g., Python
distutils.
-Dave