
Boost-Build :

From: David Abrahams (dave_at_[hidden])
Date: 2007-10-06 16:21:23

on Fri Oct 05 2007, Rene Rivera <> wrote:

> David Abrahams wrote:
>> on Thu Oct 04 2007, Rene Rivera <> wrote:
>>> David Abrahams wrote:
>>>> on Wed Oct 03 2007, Rene Rivera <> wrote:
>>>>> As I've mentioned before I have yet to hear an argument that makes
>>>>> meta-make systems, like Cmake, worth the effort.
>>>> I'm not convinced one way or the other. It seems like you have lots
>>>> of fears which may or may not be realized.
>>> We all have fears ;-) But I'm not speaking from fear. I'm only trying to
>>> be logical. So I have the simple question... How can we get away from
>>> having to test N+1 make systems when using Cmake?
>> The same way we get away from testing the assembler that your C++
>> compiler targets. Just say no.
>> Just ignore the fact that Cmake uses other tools to get its job done,
>> just like you ignore the fact that (abstractly, anyway) GCC generates
>> assembly language and uses an assembler to convert that into machine
>> code.
> Well you may ignore that, but I don't. It's foolish to think we can
> understand a system without understanding its parts, and taking those
> parts into consideration.

It's one thing to understand and consider the parts; quite another to
take responsibility for testing them and diagnosing problems in them.
When was the last time you had a problem because of your computer's
CPU chip? When you realized the problem, did you just report it to
the manufacturer, or did you try to diagnose which functional unit in
the CPU was at fault and direct a bug report at the team responsible?

>> Then you're just testing Cmake on each platform, if you really
>> feel that Kitware's extensive testing is inadequate. Frankly, one of
>> the main attractions of Cmake for me is that I know someone else is
>> doing the testing.
> That is what I see as a fallacy. Kitware tests Cmake... They can't test
> all uses of Cmake

Just like we can't test all uses of Boost. However, we do a pretty
good job covering everything, and people find fairly few bugs in the
field. They certainly find a whole lot fewer bugs than they would if
it weren't for our extensive multiplatform testing. That testing
represents troubleshooting that our users don't have to do, because
we're taking responsibility for it, right?

The process that accounts for all the green squares on Cmake's testing
dashboard represents troubleshooting that users of Cmake don't have to
do, because the Cmake developers take responsibility for that part.

Right now, Boost developers take responsibility for all the bjam/BBv2
troubleshooting. Why not trade most of that away to other people?

> (and as I mentioned elsewhere we are likely to stretch
> it past its limits). Regardless, I fail to see how we can assume we
> only test Cmake on each platform when we will have to invoke both Cmake
> and /make/ on each platform for testing.

Well, I don't know what you mean by "test Cmake." I don't expect to
test Cmake any more than we test C++ compilers. Sometimes, it's true,
flaws are revealed in C++ compilers because Boost's C++ code stretches
limits that the manufacturers' own test suites never reach. However,
Boost's software construction and testing requirements are much closer
to standard industry practice than Boost's use of the C++ language is.

> We will, at some point, run into problems when running /make/

Very plausibly, yes, a problem will appear eventually.

> which we will have to diagnose and decide if it was a problem in BB
> layer, or Cmake, or /make/.

Maybe. I think once we determine that the BB layer is OK we can
probably report it as a Cmake bug. Even if we can't, I hope we can
agree that bugs in make are about as likely (to within an order
of magnitude) to show up as bugs in the C compilers we currently use
to build bjam.

> Except that with Cmake that last one is different on each platform
> and hence we need to understand those N /make/ systems.

That's only true if you think we will run into problems with all N
make systems *and* you feel the need to do some of the Cmake team's
work for them. I don't feel that need. Any surprising nonuniformity in
the operation of a Cmake-based build across platforms is, from my
perspective, a Cmake bug. As I stated elsewhere, it's Cmake's job to
handle problems in the underlying make as long as Cmake claims to
support that tool.

>>>>> First what we get from the Cmake community is that we don't develop
>>>>> *part* of the build system ourselves.
>>>> Sounds significant to me; potentially *very* significant, since the
>>>> high-level facilities provided by CMake are much more powerful than
>>>> the very basic facilities provided by Perforce Jam.
>>> Well, I'm not talking about Perforce Jam, but Boost Jam. It has
>>> progressed in many ways by now. So which high-level facilities are you
>>> thinking of?
>> I'm thinking of the ones that find your python installation, find your
>> compilers, and build installers, just to name a few.
> You mean python installation*s*, right? I've had as many as 5 on my
> machine at one point.

Yes, I mean that. If you're implying that Cmake doesn't find multiple
installations for you, I'd say you're probably right. However, for
those of us who are outliers with multiple Pythons, editing the
configuration file that describes a Python installation to the build
system is not especially burdensome.
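For illustration, here is roughly what that configuration step looks
like on the Cmake side. This is just a sketch using Cmake's stock find
modules (the module and variable names are Cmake's, not anything
Boost-specific), not a proposed Boost build file:

```cmake
# Cmake ships find modules that locate a Python interpreter and the
# matching headers and libraries; the results land in the Cmake cache,
# where a user with several Pythons can override them once.
find_package(PythonInterp)
find_package(PythonLibs)
if(PYTHONLIBS_FOUND)
  include_directories(${PYTHON_INCLUDE_PATH})
  message(STATUS "Found Python: ${PYTHON_EXECUTABLE}")
endif()
```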

>>> How do they help in writing Boost.Build on top of Cmake?
>> You use them instead of writing complex toolset configuration.
> Are the Cmake equivalents not complex?

If they are, I don't care. Actually, I'm glad. That's complexity
that has to be managed by the Cmake team instead of me. Getting all
the auto-configuration stuff in python.jam working was a huge chore.
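To give a sense of the difference in scale: in Cmake, toolset
detection happens implicitly when you declare the project, so there is
nothing like python.jam's auto-configuration to write or maintain. A
minimal sketch (not Boost's actual build file):

```cmake
# Naming the languages here triggers Cmake's own compiler detection;
# no per-toolset configuration code is needed on our side.
project(example CXX)
message(STATUS "C++ compiler: ${CMAKE_CXX_COMPILER}")
```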

>>> How do they help in testing?
>> I don't understand the question. How could high-level Cmake features
>> possibly help in testing?
> Well you mentioned Python configuration. And on the testing list the
> subject of testing halting because the Python build Jamfile was
> non-functional when Python was not configured came up. So, for example,
> are you saying that Cmake would find the correct Python install on a
> tester's machine without intervention from the tester?

In general, yes.

> Are there other Cmake high level features that similarly help
> testing?

Well, sure. Cmake appears to do the same jobs as an
autotools-generated configure script does, so unless you've done
something very nonstandard, it can find all your tools and libraries
without intervention.
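Concretely, those checks are expressed with Cmake's built-in find
commands, each of which behaves like an autoconf probe. A small sketch
(the particular tool and library named here are just examples I chose,
not Boost requirements):

```cmake
# Each command probes the standard locations, like a configure test,
# and caches the result so a tester can override it if necessary.
find_program(DOXYGEN_EXECUTABLE doxygen)
find_path(ZLIB_INCLUDE_DIR zlib.h)
find_library(ZLIB_LIBRARY NAMES z zlib)
```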

>>> And to be clear... Volodya is talking about Boost.Build v2, not only
>>> bjam.
>> Yes, I realize that. Cmake would replace bjam, large chunks of BBv2
>> implementation, and some significant parts of BBv2's interface.
> Since much of the testing of Boost is built around the functionality of
> BB and bjam... You have to consider that you would need to replace *all*
> of bjam, BB, process_jam_log, and the whole of
> xsl_reports.

Yeah, well, I interpreted my reporting-tools analysis as showing that
those last three need to be replaced anyway.

>>> It seems like a foregone conclusion to me. But of course I will be
>>> the last to claim I know everything... or even just a tiny bit ;-)
>> Yes, I can tell you're very certain; I just don't see how you can be.
> Design structure complexity. I look at Cmake, and I see adding a myriad
> of additional independent components to the testing pipeline.

I don't get it. It seems to me you get a 1-1 mapping between the C
compilers that build bjam and the make systems used by Cmake.

> This immediately brings up flares in my head, as we are in a time
> when we are trying to reduce the number of components in the testing
> pipeline. My next reaction is to try and figure out why one would
> need those extra components, to determine if they are
> beneficial. But you've mentioned that Cmake uses those
> sub-components as essentially dumb build drivers. Which brings up
> the next set of flares in my head, as to why bother with components
> developed externally when it's just as easy to build those dumb
> features in.

But it isn't just as easy, especially if we need to maintain them. Do
you really understand bjam's code for satisfying dependencies (make.c,
make1.c, et al)?

> At which point I'm questioning how I can rely on something with such
> a flawed, IMO, design foundation.
> But that is all stuff I already knew categorically about meta-make
> systems. I.e. that IMO they are structurally an inferior design.

I do appreciate your honesty about the prejudices you carried into
this analysis. There is, however, a structural difference between
Cmake and the meta-makes that try to generate totally standalone
Makefiles. But even if *all* meta-make systems are structurally
inferior to the alternatives, Boost has criteria other than structure
to consider when making a build-system choice.

>>> * How much of the Boost.Build v2 functionality does it implement?
>> I don't know, but that may not be a relevant question for Boost at
>> large. I would ask how much of what we need for building and testing
>> Boost it implements. According to Doug, it does everything we need.
> If Doug is relying on Dart as the reporting system end to make that
> judgment. And if you've discounted Dart as viable for Boost test
> reporting needs. How can it implement everything Boost needs?

I'll assume all those periods are supposed to be commas, and I'm happy
to concede that the interface between the output of our build system
and our reporting system is still an open issue.

Dave Abrahams
Boost Consulting

Boost-Build list run by bdawes at, david.abrahams at, gregod at, cpdaniel at, john at