From: Beman Dawes (bdawes_at_[hidden])
Date: 2002-02-05 19:57:16
At 04:29 PM 2/5/2002, vesa_karvonen wrote:
>> * Increased errors by experienced programmers. Experienced
>> programmers recover immediately, but still complain of what is seen
>> as an unnecessary burden. The STL headers have been cited as
>> specific examples of overly fine-grained headers.
>...splitting existing headers does indeed cause some grief. That is
>why headers should be atomic in the first place. Then there will be
>no grief caused by change as is happening now with Boost.
>I'd like to see references to articles/whatever that complain about
>STL headers being overly fine-grained. Not that I would doubt such
>would exist, because I can certainly imagine some people complaining
>about the subject.
The C++ committee's LWG has been discussing headers recently. I think that
Bjarne was the most recent person to complain about the stl headers. There
is some discussion about providing a single <stl> to deal with that. There
is also discussion of a <std> header which includes all the C and C++
standard library headers.
>> * Difficulty teaching. Teachers apparently don't want to have to
>> explain much of the details of headers until later in a
>> programming course. Fine-grained headers get in the way of this.
>Perhaps. I must admit that I fail to see the difficulty of teaching
>how to use #include. It shouldn't take many minutes.
I think what the teachers are complaining about isn't the need to explain
the concept of a header, but rather the need to teach which headers must be
included to get which set of library features.
>> * Header dependency metric. Header dependency is sometimes
>> measured, with a high number of dependencies seen as bad
>> practice. Harder to avoid with fine-grained headers. Very similar
>> to intimidation factor.
>Dependencies should not be measured in numbers of headers. It is a
>poor metric.
But it is one that real programmers use, nevertheless.
>Ideally dependencies should be measured in how long it takes for the
>compiler to process the dependencies. Since lexing (and parsing)
>typically consumes most of the time taken by the processing of
>headers, an easy-to-compute, rather portable, and usable metric is the
>bytes-of-code metric.
It really depends on the system you are using. On some systems, I/O times
dominate, and the cost of opens is particularly high.
>My bet, which, I believe, is supported by John Lakos, is that the
>incremental development of large software systems can be made
>considerably more effective by using fine-grained headers than
>coarse-grained headers. What I'm ultimately getting at is that
>modification of a single header should cause the recompilation of a
>minimal number of translation units. This simply cannot be achieved
>by using coarse-grained headers.
But "large software systems" are only one use of C++. That's the point; if
you totally optimize headers (or anything else) for a particular scenario,
you are probably pessimizing for other environments. Thus a balanced
approach (mid-size granularity of headers) serves a wide variety of needs,
although it certainly isn't optimal for all needs.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk