
From: David Abrahams (david.abrahams_at_[hidden])
Date: 2002-02-21 14:00:32

----- Original Message -----
From: "Steve M. Robbins" <steven.robbins_at_[hidden]>

> On Wed, Feb 20, 2002 at 04:10:55PM -0000, bill_kempf wrote:
> > Jam was never meant as a solution to autoconf, it was meant as a
> > solution to make.
> Precisely. That's the main point I was trying to make.
> My secondary point is that replacing make is the easy part.

When you've solved the same problems we're solving, I will consider you
qualified to make that pronouncement.

> If I came on strong, it's because I get the feeling that the boost
> community thinks that configuration has been solved, or is a
> non-issue. And because the responses to "why not autoconf" is often
> "we use jam", which is nonsense.

No, what happened is that you and a few others have asked "why Jam as
opposed to autoconf?"
I and others have tried to answer the questions asked. Calling the response
nonsense is not only illogical, but needlessly insulting.

> > A few points. First, many of us are living fine with the static
> > config headers in Boost today, so we haven't "outgrown" this usage
> > yet.
> I wonder if you over-generalize here.
> I'm building the libraries for packaging on Debian (linux). I did
> manage to learn enough Jam to get the 1.26 libraries to build on my
> machine, which is an i386. However, Debian releases packages on about
> a dozen different CPU architectures, and the next thing I know they
> are all failing to autobuild the package because of
> boost/detail/limits.hpp:
> #if defined(__sparc) || defined(__sparc__) || defined(__powerpc__) ||
> defined(__ppc__) || defined(__hppa) || defined(_MIPSEB)
> #elif defined(__i386__)
> #else
> #error The file boost/detail/limits.hpp needs to be set up for your CPU
> #endif
> This is nonsense.

It's not nonsense. These headers were developed before the boost config
library was introduced. In a large multi-contributor library it can take
some time for everything to be made consistent.

> Detecting endianness is one of the easiest things
> to do with a build-time check. It turns out that GNU libc has an
> <endian.h> header that I can use to get around this, so I patched
> the file to use it.

So contribute your patch, and a patch to the boost configure script if you
know how.

> But the boost headers are rife with things
> like
> #if (__BORLANDC__ == 0x550) || (__BORLANDC__ == 0x551)
> and
> #if __GNUC__ < 3 || __GNUC__ == 3 && __GNUC_MINOR__ == 0 &&
> This is nonsense.

It's not nonsense, for the same reasons cited earlier (and it just gets more
insulting every time you say it). Also, there are occasionally bugs which
are so compiler-specific that there's no point in making a corresponding
configuration macro. Finally, we're not sure there's a reasonable way for
users of MinGW and Borland to use a configure script.

> Debian builds on a number of architectures each
> with its own version of GNU CC, and libc, and attendant bugs. I don't
> believe that you are going to be able to track all these compiler bugs
> on all the various platforms. At best, you're going to be reacting to
> bug reports.
> It's far better to detect the quirks automatically at build time on
> the build system. That gives you a fighting chance to install it on a
> novel combination of OS/compiler/stdlib.

Have you looked at
If you don't like the status quo, stop complaining and submit patches for
the configure script and the corresponding boost headers.

> And it would help if there weren't stupidities like
> else if $(UNIX)
> PYTHON_ROOT ?= /usr/local ;
> buried deep in the jam files. I don't want to use version 1.5 and I
> don't put it into /usr/local. These are things that can and should be
> probed for. Ditto for the STLPort libraries that I'm using.

Calling our work stupid isn't going to buy you any sympathy around here.
Those are the default settings, and there are several ways to override them.
I use Python 2.2 most of the time, but probing doesn't always work:

1. I have to test against multiple Python versions
2. I work on systems where the Python version I want is installed in
non-standard places. If that Python isn't in my path, how is autoconf going
to find it without searching my entire network?
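The "several ways to override them" mentioned above can be sketched like this (paths and version numbers are examples only). Classic Jam accepts `-sVAR=value` on the command line and also imports variables from the environment:

```shell
# Sketch: overriding the Jamfile's PYTHON_ROOT default instead of probing.
# On the jam command line (example values):
#   jam -sPYTHON_ROOT=/opt/python2.2 -sPYTHON_VERSION=2.2

# Or via the environment, which jam also reads:
PYTHON_ROOT=/opt/python2.2
export PYTHON_ROOT
echo "PYTHON_ROOT=$PYTHON_ROOT"
```

Because `?=` in the Jamfile only assigns when the variable is still unset, either mechanism takes precedence over the `/usr/local` default.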

> > Second, autoconf can be used to dynamically set up the configuration
> > headers today. See
> True. I did give it a try when I was getting the 1.26 package into
> shape. It caused some of the libs to fail, so I backed out for now.

AFAIK, that's because there are some configuration settings that need to be
tuned by hand; we haven't got a way to detect them with autoconf.

> I'll give that a try again once I figure out how to get 1.27 built.

> And we haven't even touched on installation. In order to just reach
> where autotools is today, Jam needs to be taught how to build shared
> libraries on all the platforms, knowledge that is embodied in a 5000
> line shell script.

Yeah, that's ridiculous, hard to maintain, and hard to modify. I've solved
the problems on enough platforms to know that there's no reason it needs to
be in a monolithic 5000-line script. I have no confidence that developers
will be able to get the control they need if we use this script for
build-and-test, and even then it won't work for all the platforms we want to
support.
> In the end, I am not volunteering to do anything like reimplement a
> configure and build system. Please take these observations as simple
> desiderata. If they sound like criticism, it is only my frustration
> at not being able to build 1.27 showing through ;-)

Please keep your frustration under control in the future, and offer
constructive improvements instead.

> Regards, [and thanks for all the hard work]

The nature of your posts (especially this one) is such that we either have
to ignore them or spend lots of time dealing with unconstructive criticism.
That just makes for a lot more hard work for all of us. If you think you
know a better way, put your money where your mouth is.


Boost list run by bdawes at, gregod at, cpdaniel at, john at