
Boost-Build :

From: David Abrahams (dave_at_[hidden])
Date: 2003-06-16 13:47:20

I'm going to crosspost this to Jamboost now, along with a digest of
our conversation up to this point. I hope nobody minds.

Steven Knight <knight_at_[hidden]> writes:

> Hey David--
>> Wow, great discussion. I'm really sorry we didn't do this on the
>> Jamboost list now... anyone mind if we move?
> No problem on my account. Feel free to forward past messages of mine as
> appropriate.
>> >> 1. Features. Those are quite important and nifty things. I believe SCons is
>> >> using Environment for the same purpose?
>> >
>> > Yes (based on my quick scan through the documentation). An
>> > Environment is where you set up how you want one or more products to
>> > be built: use *this* compiler, *this* version of yacc, these flags,
>> > these include paths, these libraries, etc...
>> I always imagined that many features might end up being translated
>> into Environment settings, but I guess another possibility is that we
>> just bypass the Environment so that its "smarts" don't get in the way
>> :(.
> Hmm, maybe I gave you the wrong impression. Environments are actually
> pretty dumb, they're basically just dictionaries of values that get
> plugged in to how you build things. They're also *the* way to interact
> with the SCons build engine.

I'm sorry, I guess I meant the smarts of the things that turn
Environment settings into command-lines.

>> If we want people who specify features to have a uniform way to
>> express them, and if we don't think the Environment is going to cover
>> all of our needs, we may have to do that. I'd rather that we're all
>> able to capitalize on one another's knowledge of tools and platforms,
>> though.
> I think that's covered. The tools that we support are each in a module
> that contains the information about how that tool needs to be built.

I'm not talking about how tools are built at this point, only how they
are invoked.

> What we don't do right now is tie the tools in different tool chains
> together as tightly as I'd like. It's *theoretically* possible, for
> example, that a given build run will configure the MinGW compiler and
> the Visual Studio linker in the same Environment. In practice, it's not
> a problem because if your PATH finds the MinGW compiler first, it'd be
> really, really weird for it to not find the corresponding linker first,
> too...

We don't like the idea of relying on that sort of thing. In fact, we
allow/encourage building with multiple toolchains in a single build.

> Nevertheless, this is something I've wanted us to clean up, but it's
> working well enough now that it's never been a high priority.
>> >> 3. Main targets in Boost.Build can have "usage-requirements" --- properties
>> >> which are applied to dependents. That's covered in docs at
>> >>
>> >>
>> >
>> > If I understand this correctly, these would be best handled by
>> > building the libraries and dependent programs with the same
>> > Environment.
>> That won't happen except in special cases. In general there are
>> differences between how libraries and their dependents need to be
>> built. This is just a mechanism for propagating the known required
>> similarities.
> Cool, so your interface would probably just have to track how you want
> them propagated, and then set up Environments appropriately.

That sounds about right.
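
A sketch of what that might look like (all names here are illustrative, not actual SCons or Boost.Build API): the interface tracks each library's usage requirements and merges them into every dependent's Environment-like settings dictionary.

```python
# Hypothetical sketch: propagate a library's "usage requirements"
# (properties its dependents must build with) into a dependent's
# environment dictionary. List-valued settings are appended,
# scalar settings are overridden.

def apply_usage_requirements(dependent_env, usage_requirements):
    """Return a new environment dict with the library's usage
    requirements merged in."""
    env = dict(dependent_env)
    for key, value in usage_requirements.items():
        if isinstance(value, list):
            env[key] = list(env.get(key, [])) + value
        else:
            env[key] = value
    return env

# A library that requires its users to see its headers and a define:
library_usage = {
    "CPPPATH": ["libs/regex/include"],
    "CPPDEFINES": ["REGEX_DYN_LINK"],
}

app_env = {"CPPPATH": ["app/include"], "CCFLAGS": ["-O2"]}
app_env = apply_usage_requirements(app_env, library_usage)
```

The library's own build settings never leak in; only the declared usage requirements propagate.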

>> >> 1. How SCons finds the transformation from sources to desired
>> >> type. I've tried looking at, but did not understand
>> >> much.
>> >
>> > The target type isn't purely intuited from the source type; it's
>> > explicitly chosen by the builder that's called:
>> >
>> > env.Program('foo', [psources]) # build 'foo' or 'foo.exe'
>> > env.Library('foo', [lsources]) # build 'libfoo.a' or 'foo.lib'
>> >
>> > Does that answer the question, or did I misunderstand?
>> We do the same thing. I think Volodya is asking about how
>> intermediate targets are determined. This question is related closely
>> to the script I referenced earlier. IOW, when
>> psources contains '.cpp' files, what mechanism decides that '.o' files
>> will be built from them and chooses a particular linker to assemble the
>> executable?
> Builders have a 'src_builder' attribute that can be set to one or
> more other Builders that can be used to generate input source files.
> Builders also have 'suffix' and 'src_suffix' attributes that can be set
> to the target and (list of) source file suffixes.
> A Builder like the Program() builder has an Object() builder as its
> 'src_builder', and '.o' (or '.obj') as its 'src_suffix'. The Object()
> builder also has a list of known src_suffixes (.c, .cpp, .s, etc.) that
> get added to it as different tools are configured/discovered, and it can
> in turn have a list of 'src_builders' that know how to build .c files
> from .y or .l, etc.
> So you can basically hook up Builders arbitrarily using 'src_builder',
> and when env.Program() is invoked, we walk back through the list of
> src_builders until we find a chain that leads back to the specified
> source suffixes. So you end up just listing the input source files:
> env.Program('foo', ['f1.o', 'f2.c', 'f3.y', 'f4.s'])
> And the build engine works out the internal details based on how the
> builders are configured.
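
The walk Steven describes can be modeled in a few lines of plain Python. This is a toy model of the idea, not SCons's actual implementation; the builder and suffix names are just examples.

```python
# Toy model of SCons's src_builder chaining: each builder knows the
# suffix it produces, the source suffixes it consumes directly, and
# which other builders can manufacture its inputs. Given a source
# suffix, walk back through src_builders until some builder accepts it.

class Builder:
    def __init__(self, name, suffix, src_suffixes, src_builders=()):
        self.name = name
        self.suffix = suffix              # what this builder produces
        self.src_suffixes = src_suffixes  # what it consumes directly
        self.src_builders = src_builders  # builders that can make inputs

    def route(self, src_suffix):
        """Return the chain of builders that turns src_suffix into this
        builder's output, or None if no chain exists."""
        if src_suffix in self.src_suffixes:
            return [self]
        for b in self.src_builders:
            sub = b.route(src_suffix)
            if sub is not None:
                return sub + [self]
        return None

yacc    = Builder("Yacc",    ".c", [".y"])
obj     = Builder("Object",  ".o", [".c", ".cpp", ".s"], [yacc])
program = Builder("Program", "",   [".o"], [obj])

import os
for src in ["f1.o", "f2.c", "f3.y"]:
    chain = program.route(os.path.splitext(src)[1])
    print(src, "->", [b.name for b in chain])
```

So `f3.y` routes through Yacc, then Object, then Program, with no explicit intermediate targets named by the user.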

There are real scenarios where that procedure will find a suboptimal
dependency/transformation graph. The search prototype I pointed you
at is designed to find the graph which requires the minimal number of
transformations: you dump all the allowed transformations into a soup
and it just figures it out (and tells you if there's ambiguity).
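
The idea can be sketched as a breadth-first search over the soup of allowed transformations, stopping at the first (shortest) chain and raising an error when two minimal chains tie. This is a toy illustration of the approach, not the actual Boost.Build prototype.

```python
# Toy illustration: search a "soup" of (from_type, to_type)
# transformations for the chain requiring the fewest steps from
# source to target; report ambiguity if two minimal chains tie.

def find_chain(soup, source, target):
    """Return the unique shortest chain of types from source to
    target, None if no chain exists; raise ValueError on a tie."""
    frontier = [[source]]
    while frontier:
        done = [p for p in frontier if p[-1] == target]
        if done:
            if len(done) > 1:
                raise ValueError("ambiguous transformations: %r" % done)
            return done[0]
        # Expand every path by one allowed transformation (no cycles).
        frontier = [path + [to]
                    for path in frontier
                    for frm, to in soup
                    if frm == path[-1] and to not in path]
    return None

soup = [("YACC", "CPP"), ("CPP", "OBJ"), ("OBJ", "EXE")]
print(find_chain(soup, "YACC", "EXE"))
```

Adding a second same-length route (say, CPP -> ASM -> EXE alongside CPP -> OBJ -> EXE) makes the search raise instead of silently picking one.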

>> That was just an example. There are lots of other common options,
>> such as enabling/disabling debug symbols, optimizations, ... I think
>> Volodya's question is whether there's a general framework for handling
>> these things.
> Yes, there is a framework for this.
> We have a separate module for each of the tools that we support,
> each module with two interface functions: one searches for the tool
> and returns a value that says, "Yes, they have compiler X installed
> in a PATH that this Environment can get to;" the other actually
> initializes an Environment with all of the appropriate values so that
> the Environment can use the tool, creating any necessary Builders or
> doing anything else that's required to set up things properly.
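
In SCons those two functions are conventionally named exists() and generate(). A stripped-down tool module for a hypothetical compiler "mycc" might look like this (the env object is faked as a dict so the sketch runs standalone; the binary name and flags are invented):

```python
# Stripped-down shape of an SCons tool module for a hypothetical
# compiler "mycc". Real SCons tool modules expose these same two
# entry points; everything else here is illustrative.
import shutil

def exists(env):
    """Is the tool available in a PATH this Environment can see?"""
    return shutil.which("mycc") is not None  # hypothetical binary name

def generate(env):
    """Initialize the Environment with everything needed to use the
    tool: variables, flags, and (in real SCons) any Builders."""
    env["CC"] = "mycc"
    env["CCFLAGS"] = env.get("CCFLAGS", []) + ["-pipe"]
    env["CCCOM"] = "$CC $CCFLAGS -c -o $TARGET $SOURCE"

env = {}
generate(env)
```

exists() lets the build engine probe for the tool without committing to it; generate() does the actual Environment setup once the tool is chosen.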

Now you're talking about tool setup and configuration, which is a
separate topic. I was trying to describe the translation of abstract
concepts like "debugging enabled" into command-line options like "-g".
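
Roughly, I mean this kind of table-driven lookup, where the abstract property stays toolset-neutral and only the table knows the concrete spelling (toolset names and flags below are just examples):

```python
# Sketch of translating abstract build properties into tool-specific
# command-line options. The flag tables are illustrative examples.
FLAG_TABLE = {
    "gcc":  {("debug-symbols", "on"): ["-g"],
             ("optimization", "speed"): ["-O3"]},
    "msvc": {("debug-symbols", "on"): ["/Zi"],
             ("optimization", "speed"): ["/O2"]},
}

def flags_for(toolset, properties):
    """Map abstract (feature, value) pairs to concrete options for
    one toolset; unknown properties contribute nothing."""
    table = FLAG_TABLE[toolset]
    out = []
    for prop in properties:
        out += table.get(prop, [])
    return out

props = [("debug-symbols", "on"), ("optimization", "speed")]
print(flags_for("gcc", props))   # ['-g', '-O3']
print(flags_for("msvc", props))  # ['/Zi', '/O2']
```

The same property list produces different command lines per toolset, which is the whole point of keeping the request abstract.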

>> > For example, Stephen Kennedy has put a continuous-integration
>> > interface
>> What does that mean?
> Kind of like AntHill, something that reads up the configuration files
> (SConscripts) just once, and then can build and rebuild in response to,
> I don't know, signals or checkins or something. He clears out the state
> of the Nodes and does the next rebuild without having to restart and
> reread the configs all the time.


>> > Packaging-wise, we also distribute SCons in some "local" packages
>> > that are designed to be dropped in and delivered with other software
>> > that wants to use SCons but doesn't want to force everyone to
>> > install it. One way would be to just use that package to deliver
>> > your interface to your own schedule. Then you can upgrade the build
>> > engine when you choose, and not be bound by our release schedule.
>> > My guess is that would be the most politically palatable way...?
>> I guess I don't understand the implications. Could you be more
>> specific?
> Basically, we've created a package that's designed to be dropped in to
> other packages and used from your local directory, instead of being
> installed in a system-wide directory. You could baseline on SCons build
> engine version 0.90, say, by just dropping it in to your source tree and
> packaging and shipping it with everything else. Then you can upgrade
> the build engine version whenever it suits you, and not have to worry
> about *your* installation breaking just because we shipped a new version
> of SCons. You can keep shipping 0.90 as long as it suits your purposes.

OK, that's nice. A much more important issue for Boost would be "can
a build system be packaged in such a way that someone installing it
(or building it from scratch) doesn't get the sense he's installing
Python?" Lord knows why; I guess some companies are regressive in
that way.

Honestly, I'm not sure it's so important: if we could just provide
prebuilt executables for a few major platforms I bet we could get
away with telling everyone else to install Python.

>> > Alternatively, if you wanted to, I'd be happy to merge an integrated
>> > Boost.Build interface into the packages we deliver. Then anyone
>> > installing SCons would just have multiple interfaces to choose from
>> > (two--or more--for the price of one!). I'm already considering doing
>> > this with Asko Kauppi's Lua interface. If that's better because it saves
>> > time and effort, great.
>> >
>> > I guess it comes down to: I'm more than happy with whatever way makes
>> > the most sense for Boost.Build. We've tried to build in the right
>> > flexibility in the SCons architecture and packaging so we can make
>> > it work with any other software's requirements, be they technical,
>> > political, or otherwise...
>> >
>> > Let me know how I can continue helping. Thanks!
>> Wow, that is the true spirit of cooperation. Thanks for being you,
>> Steven.
> Well, I guess if somebody has to be me, it might as well be me... :-)
> Thanks for the interest. This sounds like it could be really cool to
> bring this stuff together. What's prompting the move in this direction,
> anyway?

No matter what we do to it, the Jam language still has limits in its
expressiveness, limits in speed due to representing everything as
lists of strings, etc. We have added one advantage that Python
doesn't have: optional type declarations. The core build engine of
Jam that we inherited from Perforce, its dependency evaluator, etc.,
is a pile of twisty 'C' code. From watching bug reports on their
list, Perforce doesn't seem to have regression/unit tests for their
code so we're reluctant to try to keep tracking Perforce's codebase.
We have evolved the core of Boost.Jam sufficiently far from what
Perforce is doing that it's difficult to gain much advantage from the
existence of another "supported" tool,...

...get the picture?

> (BTW, are you guys aware that Ralf W. Grosse-Kunstleve is already
> using or shipping Boost, I believe, with an SCons-based build system
> for cctbx, his Computational Crystallography ToolBox?


> I guess he has about twenty lines of Python code that parse up the
> .jam or .bjam or whatever files and turn them into calls into the
> SCons build engine. Might be a useful starting point to look at how
> someone else has done some of the basic stuff.)

Somehow, given the different levels of abstraction in the two
systems, I can't imagine he's handling everything, and I can't imagine
you end up with a cross-platform SConscript, even though you
start with a Jamfile that has those capabilities. You're right,
though: it's worth a look.

Dave Abrahams
Boost Consulting

Boost-Build list run by bdawes at, david.abrahams at, gregod at, cpdaniel at, john at