Boost-Build :
From: Rene Rivera (grafikrobot_at_[hidden])
Date: 2007-10-06 18:33:36
David Abrahams wrote:
> on Fri Oct 05 2007, Rene Rivera <grafikrobot-AT-gmail.com> wrote:
>> David Abrahams wrote:
>>> on Thu Oct 04 2007, Rene Rivera <grafikrobot-AT-gmail.com> wrote:
>>>> David Abrahams wrote:
>>>>> on Wed Oct 03 2007, Rene Rivera <grafikrobot-AT-gmail.com> wrote:
>> Well, you may ignore that, but I don't. It's foolish to think we can
>> understand a system without understanding its parts, and taking those
>> parts into consideration.
>
> It's one thing to understand and consider the parts; quite another to
> take responsibility for testing them and diagnosing problems in them.
Of course... But does a user of your "whole" care which of the "parts"
is broken? When you force the "whole" on users, you effectively take
responsibility for the functioning of the "parts".
> When was the last time you had a problem because of your computer's
> CPU chip?
Well, RAM... and it was yesterday. And a CD-ROM drive a few days ago.
And a CD-ROM disc today.
> When you realized the problem, did you just report it to
> the manufacturer, or did you try to diagnose which functional unit in
> the CPU was at fault and direct a bug report at the team responsible?
Hm... For the RAM problem I had to diagnose it down to a fault in the
BIOS: its inability to correctly detect the timing of the RAM chips.
Unfortunately I don't have access to the BIOS source, so I can't fix it
that way. And the manufacturer no longer provides patches for that
particular motherboard, so I can't even report it to them with any hope
of getting a fix. There are also no alternative BIOS providers, so I
have no choice of implementation.
My point has been that, at minimum, we will have to diagnose which
component is at fault. This requires somewhat more understanding of the
components than mere recognition of their existence. In particular, it
requires either very precise definitions of the interfaces we use, or an
understanding of their internal operation. Hence someone, and likely
many someones, needs considerable knowledge of both the parts and the
whole to determine where the system breaks.
>>> Then you're just testing Cmake on each platform, if you really
>>> feel that Kitware's extensive testing is inadequate. Frankly, one of
>>> the main attractions of Cmake for me is that I know someone else is
>>> doing the testing.
>> That is what I see as a fallacy. Kitware tests Cmake... They can't test
>> all uses of Cmake
>
> Just like we can't test all uses of Boost. However, we do a pretty
> good job covering everything, and people find fairly few bugs in the
> field. They certainly find a whole lot fewer bugs than they would if
> it weren't for our extensive multiplatform testing. That testing
> represents troubleshooting that our users don't have to do, because
> we're taking responsibility for it, right?
No, it doesn't. A user will still go through the process of determining
where the line of responsibility lies, as they cannot immediately know
whether what they are doing is covered by a line we've already drawn.
> The process that accounts for all the green squares at
> http://www.cmake.org/Testing/Dashboard/20071006-0100-Nightly/TestOverviewByCount.html
> represents troubleshooting that users of Cmake don't have to do,
> because Cmake developers take responsibility for that part.
>
> Right now, Boost developers take responsibility for all the bjam/BBv2
> troubleshooting. Why not trade most of that away to other people?
No, there is no such trade.
First... Boost *library* authors take responsibility for their libraries
building correctly. Which means they take responsibility for the
Jamfiles working. And they take responsibility for determining whether
it is the Jamfiles that are broken, rather than the library or
bjam+BBv2. And they take responsibility for asking the bjam+BBv2
developers to fix problems. And they take responsibility for making sure
those problems are fixed... at best. At worst they have to go in and fix
the problems themselves. These are things that involve writing test
cases and understanding the build components.
Second... Switching to cmake only swaps one responsibility for another.
Authors will then be responsible for the same things as above, except
with cmake scripts, and for dealing with Kitware.
Third... The people you would trade aren't there to trade. Work on bjam
and BB won't stop, so this means adding other people to be responsible
for the cmake system.
>> (and as I mentioned elsewhere we are likely to stretch
>> it past its limits). Regardless, I fail to see how we can assume we
>> only test Cmake on each platform when we will have to invoke both Cmake
>> and /make/ on each platform for testing.
>
> Well, I don't know what you mean by "test Cmake."
I think you used the term first... But to me it means that I have to sit
at a computer and run the cmake meta-make building procedure for each
platform (OS+toolset combination) to make sure it works. And by "I have
to sit" I mean the testing procedures and/or release manager have to do
it. So I really mean "test our use of Cmake"; after all, I will not
assume things will work from a user's perspective.
> I don't expect to
> test Cmake any more than we test C++ compilers. Sometimes, it's true,
> flaws are revealed in C++ compilers due to the way Boost's C++ code
> stretches the limits of what the manufacturer's test suites
> do. However, Boost's software construction and testing requirements
> are very much closer to general industry standard existing practice
> than Boost's use of the C++ language is.
Hm, that sounds historically inconsistent. If Boost's requirements are
that close to industry practice, why did you start the development of
BBv1 in the first place? AFAIK there aren't that many companies doing
the kind of software construction and testing that Boost does. The one I
immediately think of is Sandia, and they are using bjam. Most of the
industry software development I'm familiar with doesn't do portable
library development with large-scale multi-platform testing.
>> which we will have to diagnose and decide if it was a problem in the BB
>> layer, or Cmake, or /make/.
>
> Maybe. I think once we determine that the BB layer is OK we can
> probably report it as a Cmake bug.
Of course. But what happens if they say it's not really their bug? Or if
our reports are insufficient?
> Even if we can't, I hope we can
> agree that bugs in make are about as likely (to within an order
> of magnitude) to show up as bugs in the C compilers we currently use
> to build bjam.
Sure.
>> Except that with Cmake that last one is different on each platform
>> and hence we need to understand those N /make/ systems.
>
> That's only true if you think we will run into problems with all N
> make systems *and* you feel the need to do some of the Cmake team's
> work for it. I don't feel that need. Any surprising nonuniformity in
> the operation of a Cmake-based build across platforms is, from my
> perspective, a Cmake bug. As I stated elsewhere, it's Cmake's job to
> handle problems in the underlying make as long as Cmake claims to
> support that tool.
I have problems justifying choices based on "feel" and "perspective".
>>>>>> First what we get from the Cmake community is that we don't develop
>>>>>> *part* of the build system ourselves.
>>>>> Sounds significant to me; potentially *very* significant, since the
>>>>> high-level facilities provided by CMake are much more powerful than
>>>>> the very basic facilities provided by Perforce Jam.
>>>> Well, I'm not talking about Perforce Jam, but Boost Jam. It has
>>>> progressed in many ways by now. So which high-level facilities are you
>>>> thinking of?
>>> I'm thinking of the ones that find your python installation, find your
>>> compilers, and build installers, just to name a few.
>> You mean python installation*s*, right? I've had as many as 5 in my
>> machine at one point.
>
> Yes, I mean that. If you're implying that Cmake doesn't find multiple
> installations for you, I'd say you're probably right. However, for
> those of us who are outliers with multiple Pythons, editing the
> configuration file that describes a python installation to the build
> system is not especially burdensome.
OK, but note that all Boost developers fall into that outlier group. And
given the portability goal of Boost libraries, Boost users are likely to
fall into it as well. How burdensome are we talking about? Do we expect
users to make such changes? Does CMake have comprehensive documentation
for making those changes? Will we need to write additional documentation
to help Boost users make those changes? Ditto for Boost authors.
Note, I'm asking all these CMake questions because I don't have the time
(or the inclination) to find out the answers for myself. But they are
questions that need consideration before switching.
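For concreteness, my unverified reading of the CMake documentation is
that the override amounts to pre-seeding a few cache variables, e.g. in
an initial-cache file passed with "cmake -C". The variable names below
are my guesses based on the stock FindPythonInterp/FindPythonLibs
modules, and the paths are made up:

  # my-python.cmake -- hypothetical initial cache file, loaded via:
  #   cmake -C my-python.cmake <source-dir>
  # Variable names assumed from FindPythonInterp/FindPythonLibs.
  set(PYTHON_EXECUTABLE   /opt/python25/bin/python          CACHE FILEPATH "Python interpreter")
  set(PYTHON_INCLUDE_PATH /opt/python25/include/python2.5   CACHE PATH     "Python headers")
  set(PYTHON_LIBRARY      /opt/python25/lib/libpython2.5.so CACHE FILEPATH "Python library")

If that really is all there is to it, it may be as painless as you say,
but someone who actually uses CMake should confirm it.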
>> Are there other Cmake high level features that similarly help
>> testing?
>
> Well, sure. Cmake appears to do the same jobs as an
> autotools-generated configure script does, so unless you've done
> something very nonstandard, it can find all your tools and libraries
> without intervention.
Figures; I tend to operate in nonstandard mode all the time :-) Anyway,
you should write down somewhere what all those features are. The wiki
comes to mind as a good place ;-)
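To give a rough picture from my own, unverified, skim of the docs: the
configure-like functionality appears to live in the Find* modules and a
few commands, so a library's CMakeLists.txt would presumably look
something like the sketch below. The names are my guesses at the
conventional usage, not something I have run:

  # CMakeLists.txt sketch -- hypothetical, based only on my reading of the docs
  project(example_lib CXX)                     # compiler detection happens here
  find_package(PythonLibs)                     # locate Python headers and library
  find_program(XSLTPROC_EXECUTABLE xsltproc)   # locate an arbitrary tool
  include_directories(${PYTHON_INCLUDE_PATH})
  add_library(example_lib SHARED example.cpp)
  target_link_libraries(example_lib ${PYTHON_LIBRARIES})

But an actual list of the facilities we would rely on still belongs on
the wiki.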
>> BB and bjam... You have to consider that you would need to replace *all*
>> of bjam, BB, process_jam_log, regression.py, and the whole of
>> xsl_reports.
>
> Yeah, well, I interpreted my reporting tools analysis as showing that
> those last three need to be replaced anyway.
Indeed. But it also means that getting the ideal testing and reporting I
mentioned, with immediate feedback of results into a database, will
likely require using something other than cmake/ctest, or understanding
it sufficiently to implement additions to it.
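For reference, my rough, and again unverified, picture of the ctest side
is that a nightly dashboard run is driven by a small CMake-language
script along these lines; this is the layer we would have to either
extend or replace (the site, paths, and build name below are invented):

  # boost_nightly.cmake -- hypothetical dashboard script, run as:
  #   ctest -S boost_nightly.cmake
  set(CTEST_SITE             "tester.example.org")   # invented machine name
  set(CTEST_BUILD_NAME       "linux-gcc41")          # invented build label
  set(CTEST_SOURCE_DIRECTORY "/home/tester/boost")   # assumed checkout location
  set(CTEST_BINARY_DIRECTORY "/home/tester/boost-bin")
  set(CTEST_CMAKE_GENERATOR  "Unix Makefiles")
  ctest_start(Nightly)
  ctest_configure()   # runs cmake to generate the native makefiles
  ctest_build()       # drives the native make
  ctest_test()        # runs the registered tests
  ctest_submit()      # posts the results to the dashboard server

Whether that submission/reporting step can be bent into the shape we
want is exactly the open question.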
>>>> It seems like a foregone conclusion to me. But of course I will be
>>>> the last to claim I know everything... or even just a tiny bit ;-)
>>> Yes, I can tell you're very certain; I just don't see how you can be.
>> Design structure complexity. I look at Cmake, and I see adding a myriad
>> of additional independent components to the testing pipeline.
>
> I don't get it. It seems like you get a 1-1 mapping between C
> compilers that build bjam and make systems used by Cmake.
I think of it visually, so for example... BB+bjam...
Jamfile --> BB --> bjam --> compiler1/OS1
                        --> compiler2/OS1
                        --> compiler3/OS2
CMake...
cmakefile --> cmake --> makefile1a --> make1/OS1 --> compiler1/OS1
                                                 --> compiler2/OS1
                    --> makefile2  --> make2/OS1 --> compiler1/OS1
                                                 --> compiler2/OS1
                    --> makefile1b --> make1/OS2 --> compiler3/OS2
                    --> makefile3  --> make3/OS2 --> compiler3/OS2
And I haven't even mentioned cross-compilation. Do you envision the
testing working in some other way? And if so, why?
>> This immediately brings up flares in my head, as we are in a time
>> when we are trying to reduce the number of components in the testing
>> pipeline. My next reaction is to try and figure out why one would
>> need those extra components, to determine if they are
>> beneficial. But you've mentioned that Cmake uses those
>> sub-components as essentially dumb build drivers. Which brings up
>> the next set of flares in my head, as to why bother with components
>> developed externally when it's just as easy to build those dumb
>> features in.
>
> But it isn't just as easy, especially if we need to maintain them. Do
> you really understand bjam's code for satisfying dependencies (make.c,
> make1.c, et al)?
Hm, I understand it sufficiently, IMO. At least enough to say that it
would be much easier to rewrite it with coroutines, since most of the
work it does is maintaining the push-down state of the DAG traversal for
the dependencies.
>> At which point I'm questioning how I can rely on something with such
>> a flawed, IMO, design foundation.
>>
>> But that is all stuff I already knew categorically about meta-make
>> systems. I.e. that IMO they are structurally an inferior design.
>
> I do appreciate your honesty about the prejudices you carried into
> this analysis. There is, however, a structural difference between
> Cmake and the meta-makes that try to generate totally standalone
> Makefiles. But even if *all* meta-make systems are structurally
> inferior to others, Boost has criteria other than structure to
> consider when making a build system choice.
Yes, which is why I avoided mentioning my preconceptions until you asked
me for them explicitly. I was trying to concentrate on the tangible
effects using cmake would have on testing, library author experience,
and user experience.
--
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org - grafik/redshift-software.com
-- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo