From: David Abrahams (dave_at_[hidden])
Date: 2003-06-17 09:53:49
comeau_at_[hidden] (Greg Comeau) writes:
> In article <u65n5kaxk.fsf_at_[hidden]>,
> David Abrahams <jamboost_at_[hidden]> wrote:
>>>In article <uptlgcdjo.fsf_at_[hidden]>,
>>>David Abrahams <jamboost_at_[hidden]> wrote:
>>>>My guess is that Comeau is being held to a higher standard because
>>>>people are motivated to get it to check their code for conformance.
>>> This just enhances my confusion then, since
>>> I'm not sure what http://boost.sourceforge.net/regression-logs
>>> and respective sublinks reflect [across tools].
>>They reflect whether the library works with a particular toolset given
>>the build instructions we have prepared for the library.
>>Often failures also correlate to problems in the toolset.
> My observation, which may be incorrect, is that the
> build instructions are each different
Different between libraries, or between toolsets?
Naturally, toolsets have different command-line options with different
meanings, so of course the final build instructions (command-lines)
need to differ across toolsets. Generally a library's Jamfile will
avoid using any toolset-specific settings.
Naturally, libraries have different requirements and, well, different
source files, so of course the build instructions for different
libraries will differ.
I guess I don't understand how you could expect it to be otherwise.
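To make the toolset point concrete, here is a minimal sketch of the
kind of Jamfile a library might use (Boost.Build v1 style; the library
name, paths, and macro are made up for illustration). The requirements
are stated in toolset-neutral terms:

    # Hypothetical Jamfile for a library "example" (illustrative only).
    subproject libs/example/build ;

    lib boost_example
        : ../src/example.cpp           # sources
        : <include>$(BOOST_ROOT)       # toolset-neutral requirements;
          <define>BOOST_EXAMPLE_SOURCE # no per-toolset flags here
        : debug release                # default build variants
        ;

From one such Jamfile, bjam generates the toolset-appropriate options
(e.g. -I/-D for gcc, /I and /D for msvc), which is why the final
command-lines differ across toolsets even when the Jamfile doesn't.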
> and so therefore, when I look at regression results, I see tables,
> through which it seems natural to compare them. But if the build
> instructions are different, then they are, at least partially,
> incomparable, or at least more information about the raw tables needs
> to be present and/or clarified.
It probably wouldn't hurt for us to capture the command-lines used,
either by making a separate "bjam -n -a" run part of the regression
process or simply by adding "-d+2" to the bjam command-line and
capturing the build instructions in the log. The second option would
only capture instructions for things that needed to be rebuilt, while
the first would be more verbose.
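For example, assuming the regression run is driven by an invocation
like "bjam test" (the target name here is illustrative), the two
approaches would look something like this:

    # Separate dry run: -n executes nothing, and -a treats every
    # target as out-of-date, so the complete set of build commands
    # is printed.
    bjam -n -a test >commands.log

    # Normal run with echoing: -d+2 prints each command as it runs,
    # but only out-of-date targets get rebuilt, so the log may be
    # incomplete after an incremental build.
    bjam -d+2 test >build.log 2>&1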
> As per my other posts, it's clear they are different
> because compilers are different, or options may be
> different. But it's one thing to expect the
> same results when compiling Boost with, say, optimization
> off and then with optimization on, but it seems to
> me that's on a different level when different definitions
> of C++ are being requested across instructions.
Across instructions? What do you mean by instructions?
> I realize that different bugs in different compilers
> lead to certain situations, options, etc., but those
> seem to be transparent to at least the casual observer.
I don't know what you mean by "transparent", nor can I tell whether
you are saying this is a good, bad, or neutral thing.
> On a more general note... what are the regression results for?
To indicate the health of each library and which compilers it's likely
to work with out of the box.
> Who is supposed to be their readers?
Developers and users.
> What information is one supposed to glean from perusing them?
How shiny the library is ;-)
You might also infer how good a compiler's language support is in
some cases, but that takes a much more sophisticated view.
> What should one walk away from them knowing or saying?
Thanks to everyone who works on Boost testing; it's great to be able
to tell which libraries are healthy and great to know that Boost
tries to maintain its libraries in good working order!
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com