

From: mfdylan (dylan_at_[hidden])
Date: 2002-02-06 00:50:39


--- In boost_at_y..., "Jeff Garland" <jeff_at_c...> wrote:
> However, I can tell you it was a top priority to keep include
> dependencies down; not by combining headers, but by splitting them.
> The reason is testing and release. Whenever you get things running
> (especially in 24X7 systems) it gets hard to release things b/c any
> change might break the working system. If the change you make
> recompiles one component/library testing will be reduced b/c the
> effects will be better understood. However, if you have to
> recompile/relink fifty libraries b/c of bad include structure
> release managers get very nervous b/c it feels like you are
> replacing the whole system -- even if you didn't change those
> libraries -- just recompiled them. So usually avoiding one event
> where you have to retest everything, or you can't release a feature
> b/c of the impact outweighs all the compile times, training issues,
> and other downsides.
>
> My 2 cents....
>

Unfortunately this is a problem whether headers are coarsely or
finely grained. You might have a header file that declares only one
class, but it is a class that is a fundamental part of the
application and hence included by almost everything. It may happen
to contain a single member function that is used in only two or
three places and requires a tweak to its definition. Because this
could cause a huge rebuild, most programmers are loath to do it and,
in my experience, tend to end up resorting to hacks to avoid a total
rebuild (and yes, I've done it myself!).
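
To make the scenario concrete (a rough sketch only; the class and
file names are purely illustrative):

// core.hpp - a hypothetical header included by almost everything
#ifndef CORE_HPP
#define CORE_HPP

class Core {
public:
    // Body in the header: tweaking it touches core.hpp, so every
    // translation unit that includes core.hpp must recompile.
    int scale(int x) const { return 2 * x; }

    // Body in core.cpp: tweaking it recompiles one .cpp file and
    // the rest of the system only relinks.
    int offset(int x) const;
};

#endif

// core.cpp
#include "core.hpp"
int Core::offset(int x) const { return x + 1; }
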
The problem really comes down to dumb make systems, as I commented
on c.l.c++.m recently. Make systems that work at file-level
granularity are simply inadequate in today's large development
environments. Unfortunately no one seems to have come up with
something that can easily replace already-in-use systems (there are
a few "total system" replacements, but often the risk and/or cost
involved is judged to be too high). If someone wrote makedepend/make
replacements that did enough parsing of source file dependencies and
changes to detect where changes really *were* significant, I suspect
they would quickly become very popular.
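
The crudest version of what I mean might look something like the
sketch below (not a real tool; it ignores string literals and does
nothing clever with declarations): hash a normalised view of each
header, with comments and whitespace stripped, and have the build
wrapper rebuild dependents only when that hash changes, so purely
cosmetic edits stop triggering full rebuilds.

// sighash.cpp - print a hash of a header's "significant" content
#include <cctype>
#include <cstddef>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

// Strip // and /* */ comments and all whitespace; whatever survives
// is treated as the part of the header that dependents care about.
std::string normalise(const std::string& text)
{
    std::string out;
    bool in_line = false, in_block = false;
    for (std::size_t i = 0; i < text.size(); ++i) {
        if (in_line)  { if (text[i] == '\n') in_line = false; continue; }
        if (in_block) {
            if (text[i] == '*' && i + 1 < text.size() && text[i + 1] == '/') {
                in_block = false;
                ++i;
            }
            continue;
        }
        if (text[i] == '/' && i + 1 < text.size() && text[i + 1] == '/') {
            in_line = true; ++i; continue;
        }
        if (text[i] == '/' && i + 1 < text.size() && text[i + 1] == '*') {
            in_block = true; ++i; continue;
        }
        if (!std::isspace(static_cast<unsigned char>(text[i])))
            out += text[i];
    }
    return out;
}

// FNV-1a style hash of the normalised text.
unsigned long hash_text(const std::string& s)
{
    unsigned long h = 2166136261UL;
    for (std::size_t i = 0; i < s.size(); ++i) {
        h ^= static_cast<unsigned char>(s[i]);
        h *= 16777619UL;
    }
    return h;
}

int main(int argc, char** argv)
{
    if (argc != 2) {
        std::cerr << "usage: sighash <header>\n";
        return 1;
    }
    std::ifstream in(argv[1]);
    std::ostringstream buf;
    buf << in.rdbuf();
    // A make wrapper could cache this value per header and rebuild
    // dependents only when it changes.
    std::cout << hash_text(normalise(buf.str())) << "\n";
    return 0;
}
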
Given that we don't have this (yet), obviously the coarser the
header file granularity, the more likely the problem is to occur.
To be honest I'm not sure where this conversation started from, but
IMO libraries should always provide finely grained headers along
with wrapper headers (preferably a single one for the whole library)
so that users can choose what works for them. With modern
precompiled header support, for a library that isn't likely to
change much I'd always use a single header file. For our own
libraries that are under constant development I'll #include as
little as possible. This seems such patently common sense that I
can't imagine there'd be so much disagreement over it.
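
For what it's worth, the arrangement I have in mind looks something
like this (library and header names invented for the example):

// mylib/date.hpp, mylib/string_util.hpp, ... - one small, focused
// header per component, so clients can pull in exactly what they use.

// mylib/mylib.hpp - wrapper header for the whole library
#ifndef MYLIB_MYLIB_HPP
#define MYLIB_MYLIB_HPP

#include "mylib/date.hpp"
#include "mylib/string_util.hpp"

#endif

// A client of a stable library includes the wrapper (ideally via a
// precompiled header):
//     #include "mylib/mylib.hpp"
//
// A client of a library under constant development includes only
// the components it actually uses:
//     #include "mylib/date.hpp"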

Dylan

