
Boost Users :

From: David Abrahams (dave_at_[hidden])
Date: 2006-02-02 15:54:30

Michael Shapiro <mshapiro_at_[hidden]> writes:

> On Monday 30 January 2006 17:37, David Abrahams wrote:
>> Michael Shapiro <mshapiro_at_[hidden]> writes:
>> > Okay, I have whined long enough. What would be helpful would be
>> > a) Complete instructions for building Boost. (I worked from the getting
>> > started page.)
>> In what way is that page incomplete?
> This claim may be wrong. When I wrote it, I was under the
> impression that I needed compiler_status and process_jam_log (see
> below). However, it may not be wrong. If I ever get any of this to
> run, I'll know if there was something missing here.


>> > It was only after much hacking around that I found another
>> > page telling me I needed compiler_status and process_jam_log.
>> You did? Which page, please? You certainly don't need either one of
>> those in order to build Boost.
> This is in boost_1_32_0/more/regression.html. By then I was looking for ways
> to test the installation and sufficiently flummoxed to wonder if I needed
> these for the original build.
>> > b) Instructions on how to tell if your build is complete and
>> > correct.
>> Hmm, good one. When bjam completes it will tell you whether there
>> were errors. If there were no errors, you can assume it was complete
>> and correct. Or did you have something else in mind?
> I'm not sure what would work here. When bjam lists the number of targets
> found and then says it is making a smaller number of targets, it's unclear to
> the novice whether the targets that are being omitted should be omitted.

Yeah, we had a long discussion about that, and some people came up
with good ideas, but there was never a solid proposal that would
work. I would appreciate it if someone could come up with one.

Ultimately, the output of a tool used for the edit/build/debug cycle
is probably not appropriate for the build-to-install user.

> I suppose a static test of this would be in the form of a check to
> see that all files are where they should be. I guess this is
> platform dependent and might be a pain to maintain.


> Perhaps a better idea would be a test suite that builds and runs a set of
> basic examples from each of the libraries and checks their results. This
> would have the advantages of
> a) discovering whether the basics work, thus
> b) exonerating the installation and convicting me of my own ignorance if my
> examples don't work, and
> c) giving me the means to remedy my ignorance by looking at the examples
> embedded in the test suite.

We have a comprehensive test suite in the status/ subdirectory. If you run

        bjam test

there, it will test everything. If you go to libs/<library>/test and
issue the same command there, it will test just that library.
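Concretely, the two invocations look something like this (the paths are
illustrative, and "filesystem" stands in for whichever library you care
about -- substitute your own source tree location and library name):

```shell
# Sketch, assuming a Boost 1.32-era source tree and bjam on your PATH.
# Run the full test suite:
cd /path/to/boost_1_32_0/status
bjam test

# Or test a single library (here "filesystem" is just an example):
cd /path/to/boost_1_32_0/libs/filesystem/test
bjam test
```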

Here's the problem: the libraries as built for testing are not 100%
identical to those built for installation. They differ in location
and in name, for reasons that are hard to explain -- this has to do
again with the difference between edit/build/test and
build-to-install. The upshot is that bjam will be testing different
library files than the ones that result from your bjam install. It
would be interesting to be able to have an option that runs tests with
the libraries in their installed locations and using their installed
names. I'm not sure how difficult that would be.

>> > c) An example of how to compile and link the demo, either from the
>> > command line or via a make file.
>> That, again, sounds like something specific to the serialization
>> library. In other words, the serialization library author ought to
>> improve his docs. So, add that to your message with "[serialization]"
>> in the subject line.
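In the meantime, a minimal command-line sketch for compiling and linking
against an installed Boost might look like the following. Note that the
install prefix, the -lboost_serialization-gcc name, and the toolset
suffix are all assumptions here -- the actual library name varies by
compiler, platform, and build variant, so check what bjam install
actually produced on your system:

```shell
# Hypothetical example: compile and link a demo program against an
# installed Boost.Serialization.  Adjust the include path, library
# path, and library name to match your own installation.
g++ -I/usr/local/include/boost-1_32 demo.cpp \
    -L/usr/local/lib -lboost_serialization-gcc \
    -o demo
./demo
```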

Thanks again. I really hope you take my advice about that, because
otherwise your keen insight will be wasted.

Dave Abrahams
Boost Consulting

Boost-users list run by williamkempf at, kalb at, bjorn.karlsson at, gregod at, wekempf at