
From: vesa_karvonen (vesa_karvonen_at_[hidden])
Date: 2002-02-06 03:51:24


--- In boost_at_y..., Beman Dawes <bdawes_at_a...> wrote:
[snip]
> The C++ committee's LWG has been discussing headers recently. I
> think that Bjarne was the most recent person to complain about the
> stl headers. There is some discussion about providing a single
> <stl> to deal with that. There is also discussion of a <std>
> header which includes all the C and C++ standard headers.

I have no problem with providing composite headers as long as it
remains possible to #include each component separately.
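
For example, both styles should remain legal; the composite <std> is
the hypothetical header discussed above, so it appears here only as
a comment:

    // #include <std>  // hypothetical: all C and C++ standard headers

    #include <string>  // fine-grained: pay only for what you use
    #include <vector>

    int main() {
        std::vector<std::string> v;
        v.push_back("hello");
    }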

> I think what the teachers are complaining about isn't the need to
> explain the concept of a header, but rather the need to teach which
> headers must be included to get which set of library features.

My estimate included the time required to teach how to search for
components:
- for standard headers, use a table
- for other headers, use grep
Ideally, library components are divided into headers so that the
header to include can easily be derived from the name of the
component. A library that fails to satisfy this criterion should be
refactored.
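
For example, Boost itself mostly satisfies it: the header to include
can be derived mechanically from the component name:

    #include <boost/any.hpp>    // component: boost::any
    #include <boost/regex.hpp>  // component: boost::regex

    int main() {
        boost::any a = 42;
        boost::regex re("h.*o");
    }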

>>Dependencies should not be measured in numbers of headers. It is a
>>useless metric.
>
> But it is one that real programmers use, nevertheless.

I'm sorry, but I'm not responsible for the stupid things real
programmers do. I'm only responsible for the stupid things I do.

I must also refuse to write software for dummies. If people always
programmed using only the techniques known to everyone, no
programming would ever get done.

"A reasonable man adapts himself to suit his environment. An
unreasonable man persists in attempting to adapt his environment to
suit himself. Therefore, all progress depends on the unreasonable
man." -- George Bernard Shaw

>>Ideally dependencies should be measured in how long it takes for the
>>compiler to process the dependencies. Since lexing (and parsing)
>>typically consumes most of the time taken by the processing of
>>headers, an easy to compute, rather portable and usable metric is
>>the bytes of code metric.
>
> It really depends on the system you are using. On some systems,
> I/O times dominate, and the cost of opens is particularly high.

I've heard that claim before and have even seen some experimental
results, but unfortunately I have never experienced it firsthand. The
systems I mainly use have relatively huge file system caches, and I/O
is relatively fast. I'd like to know which systems we are discussing.
If they are really old systems that are hardly ever used for software
development, then there is no point in optimizing for them.
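
To make the "bytes of code" metric concrete, here is a minimal
sketch of a counter (the names and simplifications are my own): it
ignores macros and conditional compilation, resolves every include
only against the directories given on the command line, and counts
each path once, as include guards would ensure:

    #include <fstream>
    #include <iostream>
    #include <set>
    #include <string>
    #include <vector>

    // Extract the header name from an '#include <...>' or
    // '#include "..."' line; returns "" if the line is not an include.
    std::string include_name(const std::string& line) {
        std::string::size_type i = line.find_first_not_of(" \t");
        if (i == std::string::npos || line[i] != '#') return "";
        i = line.find_first_not_of(" \t", i + 1);
        if (i == std::string::npos ||
            line.compare(i, 7, "include") != 0) return "";
        i = line.find_first_of("<\"", i + 7);
        if (i == std::string::npos) return "";
        std::string::size_type j = line.find_first_of(">\"", i + 1);
        if (j == std::string::npos) return "";
        return line.substr(i + 1, j - i - 1);
    }

    // Sum the size in bytes of 'file' plus everything it transitively
    // includes, resolving header names against 'dirs' only.
    unsigned long bytes_of_code(const std::string& file,
                                const std::vector<std::string>& dirs,
                                std::set<std::string>& seen) {
        if (!seen.insert(file).second) return 0; // already counted
        std::ifstream in(file.c_str());
        if (!in) return 0;
        unsigned long total = 0;
        std::string line;
        while (std::getline(in, line)) {
            total += line.size() + 1;            // crude byte count
            std::string name = include_name(line);
            if (name.empty()) continue;
            for (std::vector<std::string>::size_type d = 0;
                 d != dirs.size(); ++d) {
                std::string candidate = dirs[d] + "/" + name;
                std::ifstream test(candidate.c_str());
                if (test) {
                    total += bytes_of_code(candidate, dirs, seen);
                    break;
                }
            }
        }
        return total;
    }

    int main(int argc, char** argv) {
        if (argc < 2) {
            std::cerr << "usage: boc <source file> [include dirs...]\n";
            return 1;
        }
        std::vector<std::string> dirs(argv + 2, argv + argc);
        std::set<std::string> seen;
        std::cout << bytes_of_code(argv[1], dirs, seen) << " bytes\n";
        return 0;
    }

Running it over a translation unit with and without a composite
header gives a rough, portable measure of the dependency cost under
discussion.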

> But "large software systems" are only one use of C++. That's the
> point; if you totally optimize headers (or anything else) for a
> particular scenario, you are probably pessimizing for other
> environments. Thus a balanced approach (mid-size granularity of
> headers) serves a wide variety of needs, although it certainly
> isn't optimal for all needs.

I understand your point. However, I think you are fundamentally
wrong. The time wasted compiling small systems is insignificant
compared to the time wasted compiling large systems. It really
doesn't matter whether a small system takes 1 or even 10 seconds to
recompile after a change. Compile times become a problem when a
rebuild after a change starts to take several minutes. This is why we
must optimize for large systems.
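
To put rough, purely illustrative numbers on it: if a change to one
header forces 200 translation units to recompile, and over-coarse
headers add 1 MB of unnecessary text to each at a lexing rate of
10 MB/s, that alone adds 200 * 1 MB / 10 MB/s = 20 seconds to every
such rebuild. The same per-file overhead in a ten-file program is
lost in the noise.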

