Subject: Re: [Boost-build] RFC: Boost.Build Python Prototype
From: Stefan Seefeld (stefan_at_[hidden])
Date: 2016-11-15 11:43:24

On 15.11.2016 09:27, aaron_at_[hidden] wrote:
> > I didn't mean to drop any essential functionality. So perhaps we can
> > take the opportunity to spell out use-cases and see how they are
> > implemented in BB V2 as well as my prototype.
> > (Please note that my prototype is just that: a proof-of-concept. And
> > especially since I have attempted a "bottom-up" / layered approach, it
> > may appear as if I let all the low-level stuff (such as explicit
> > actions) creep into the interface, when what really matters are
> > high-level abstractions. I'm fully aware that even as a prototype this
> > is fairly incomplete, in particular as far as "composite targets" and
> > their interface is concerned. It wasn't my intent to present the current
> > state as a proposal for what to replace BBV2 with, but rather, how to
> > approach the development of the missing features.)
> I'm trying to keep in mind that this is just a prototype, but I'm also
> wondering
> if you've missed the point of metatargets and generators.

That is of course possible. On the other hand, as you say, this is a)
just a prototype, and b) built with a layered bottom-up approach, so
if composite targets don't capture some important functionality, perhaps
we can add an abstraction layer on top to generate the targets?

As I just indicated in a reply to Vladimir, I'm hoping that the
generator logic can be plugged into the composite targets' "expand"
mechanism, where actual target graphs are instantiated. That's something
to explore a bit, but it requires a few other additions, notably the
formalization of "features", which I'm not yet quite sure about (read: I
don't yet fully understand how features are defined and used in b2).

> I might be mistaken. If so, I apologize :D
> Metatargets are simply a way of describing an abstract relationship of
> targets to targets and the features a target can have; both are absolutely
> essential in supporting a multi-variant build. Toolsets do not matter
> at this
> point. Creating the first dependency tree with metatargets allows for any
> and all toolsets to be used in order to build the targets given by that
> dependency tree.

Again, I was hoping that my "composite target" approach would support
that. Right now the workflow looks somewhat like this:

  lib = library(name, source)

so, it seems like 'lib' would be a "metatarget" in your ontology, and
the lib.expand() call would then create the actual targets depending on
the "properties" argument ("features" in your ontology). Or am I missing
something there?
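
A minimal sketch of what I have in mind (all names here are
illustrative, not the prototype's actual API): the composite target
holds the abstract description, and expand() instantiates a concrete
target per property set:

```python
# Illustrative sketch only; class and function names are assumptions,
# not the prototype's actual API.

class Target:
    """A concrete, buildable artefact for one property set."""
    def __init__(self, name, properties):
        self.name = name
        self.properties = properties

class CompositeTarget:
    """Abstract description (a "metatarget"), expanded per property set."""
    def __init__(self, name, sources):
        self.name = name
        self.sources = sources

    def expand(self, properties):
        # Instantiate one concrete target for the requested variant;
        # the suffix encodes the variant, e.g. 'hello.debug'.
        variant = properties.get('variant', 'release')
        return Target('%s.%s' % (self.name, variant), properties)

def library(name, sources):
    return CompositeTarget(name, sources)

lib = library('hello', ['hello.cc'])
debug = lib.expand({'variant': 'debug'})
release = lib.expand({})
```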

> Some Comments on the Prototype:
> I very much dislike the idea of a "build script" file: a file that
> contains Python code,
> but isn't necessarily a Python module. Moreover, I dislike the idea of
> having
> additional functions injected into such scripts or expecting special
> variables
> to be defined in order to enable or disable certain functionality.
> Doing any of
> this is what I consider to be "magic" and, more often than not, will
> confuse even
> the most seasoned of Python developers, let alone C/C++ developers who
> "know" (or worse, do not know) Python. Having constructs such as these
> requires
> the author of such build scripts to heavily rely on documentation in
> order to know
> what the builtin functions are or to know not to create a variable
> named "default"
> because it will accidentally enable some functionality (or most
> likely, cause errors
> by providing a bad value).

OK, so let's drop "default" for the sake of focus. What is fundamentally
different between

  rule('hello.o', '', cxx.compile)

and the equivalent Jam code? Both inform the build tool about different
targets and the relationships among them. In both cases you need
to understand the language - or API - that such constructs use.

I can understand it being slightly disorienting to see the usage of
"scripts" like this, where certain calls have apparent side-effects
(such as registering targets), and others don't. This obviously needs to
be clearly documented.
The appeal of such scripts comes from the fact that they use a
"real" language, with well-established syntax and semantics, so it
becomes much simpler for users to use (and, notably, to extend), and for
the build-system maintainers to maintain.

> Rather than a "build script", I would rather see a Python module at
> the root of
> the project (like a Jamroot.jam) that delegates what is contained
> within the project
> much like how "use-project" works. Basically, this root module file
> will
> "install" all of the sub-projects by listing paths to each of the Python
> modules, loading them as needed.

I'm not sure what you are saying here, and how this differs from what
I'm proposing. A top-level directory may contain a "build.b3" file which
then "imports" (by virtue of the "module(subdir)" call) sub-projects.
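
For illustration, a rough sketch of such a loader - the function names
and the pre-bound environment are my assumptions; only the "build.b3"
file name and the module(subdir) call come from the discussion:

```python
# Rough sketch of a module(subdir) mechanism: load a sub-project's
# build.b3 script by executing it in a prepared namespace. The loader
# names and environment handling are assumptions, not the real API.
import os
import tempfile

def load_build_script(directory, env):
    """Execute <directory>/build.b3 with the names in 'env' pre-bound."""
    path = os.path.join(directory, 'build.b3')
    namespace = dict(env)          # scripts see the pre-defined symbols
    with open(path) as f:
        exec(f.read(), namespace)
    return namespace

def module(subdir, env=None):
    """'Import' a sub-project by loading its build script."""
    return load_build_script(subdir, env or {})

# Demo: fabricate a tiny sub-project on the fly.
subdir = tempfile.mkdtemp()
with open(os.path.join(subdir, 'build.b3'), 'w') as f:
    f.write("lib_name = prefix + 'hello'\n")
ns = module(subdir, {'prefix': 'lib'})
```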

> I get why the current implementation of Boost.Build
> uses lazy loading of projects, but I would much rather see loading
> everything
> at initialization.

One of the use-cases I have in mind are modular builds. It should be
possible to step into a directory containing a "build.b3" script and run
`b3` there to build that directory stand-alone. Having a project-level
config file (with default parameters etc.) is fine, as that isn't
essential to run the build.
But what you seem to suggest implies a monolithic build, which is
something I have been trying hard to get away from (and which is the
primary reason why I had to switch to SCons for Boost.Python, so I can
build it outside the Boost tree).

> I think constructing the metatarget tree should be lightning fast
> (so, no I/O hits in order to construct the metatargets). Declaring all
> types, features,
> generators, etc. at the beginning would make it much easier to query
> the state
> of the project and would prevent any sort of contention in creating
> each of
> the aforementioned.
> Additionally, I believe it's much better to be explicit about what a
> module needs.
> Don't inject functions or expect variables to be defined.

I don't. The only variable that has any effect is "default", but that
can easily be done differently, so please let's not get hung up on that.
No local function has any effect unless it is called and creates
targets. But otherwise the ability to define local functions is
very important for extensibility.

> Allow the user to import
> what they need and "set" special variables or enable functionality by
> making function
> calls to the build system itself. Firstly, by being explicit
> (requiring the user to
> import what they need), it provides a familiar or search-engine
> friendly way of
> getting functions. Secondly, it tells the user exactly where to look
> in the event
> that they would like to examine the source code of the function they
> are calling.
> That's the beauty of Python code: you can throw in print statements
> anywhere
> for quick and easy debugging.

Yes. As to "be explicit": I'm really following Python's own recipe:
certain names are bound even when you start from an empty script. You can
of course rebind those names to new objects, and recover the original
bindings by importing the "__builtin__" module. The exact same is true
in the environment I set up for build scripts; I just happen to pre-define
a handful of symbols most frequently used in build scripts. Please let's
not argue about names and what to include in that "builtin" set just yet.
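
A small sketch of that recipe (using Python 3's "builtins" module; the
pre-defined 'library' symbol and the exec-based environment are
hypothetical):

```python
# Sketch of the "pre-bound names" idea: a build script starts with a
# handful of symbols already bound (here a hypothetical 'library'
# function), yet plain Python rules still apply -- the script can
# rebind names and recover originals through the builtins module.

def library(name, sources):        # a hypothetical pre-defined symbol
    return ('library', name, list(sources))

script = """
lib = library('hello', ['hello.cc'])
print = None                       # a script may rebind any name...
import builtins
print = builtins.print             # ...and recover the original binding
"""

namespace = {'library': library}   # the pre-defined "builtin" set
exec(script, namespace)
```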

> I do like the idea that a "rule", as you've called it, actually
> returns the target it
> created. I like the idea of passing target instances around rather
> than target's
> names. This will make the Python API more like Jam in that you can create
> a variable to hold the target and use that rather than having to
> constantly
> create a string to refer to the target's ID.
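
In practice that style might look like the following sketch (all names
are illustrative stand-ins, not the prototype's actual API): rule()
returns a target object which can be passed directly as a source to the
next rule:

```python
# Illustrative sketch of "pass target instances, not names".

class Target:
    """Stand-in target object; not the prototype's actual class."""
    def __init__(self, name, sources=()):
        self.name = name
        self.sources = tuple(sources)

def rule(name, sources):
    # Accept Target instances or plain name strings as sources.
    deps = [s if isinstance(s, Target) else Target(s) for s in sources]
    return Target(name, deps)

obj = rule('hello.o', ['hello.cc'])
exe = rule('hello', [obj])   # pass the instance, not the string 'hello.o'
```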


      ...ich hab' noch einen Koffer in Berlin...

Boost-Build list run by bdawes at, david.abrahams at, gregod at, cpdaniel at, john at