
Subject: Re: [boost] [unordered] Buffered functions?
From: Howard Hinnant (hinnant_at_[hidden])
Date: 2008-12-27 20:03:33

On Dec 27, 2008, at 6:57 PM, David Abrahams wrote:

> on Tue Nov 25 2008, Howard Hinnant <>
> wrote:
>> <nod> Believe it or not vector<unordered_set<A>> is a viable
>> container, even when many
>> of those inner containers are empty. If the
>> overhead(unordered_set<A>) can be 5
>> words instead of 9 words (or more), then the ability to fit a data
>> structure into
>> nearly half the space is a significant size *and* speed
>> optimization (for some
>> clients).
>> It is never the number of words that matter in shaving off the
>> space overhead of a
>> container. It is the percent difference in space overhead between
>> an optimized and
>> unoptimized implementation that matters. Because for the clients
>> who care, they have
>> a very large number of containers which statistically have a small
>> size(). If you can
>> cut the overhead(container) by 25% or more, it is a big win.
> One thing I've always wondered about this: in that case -- at least
> when
> you're not also trying to use the small object optimization -- why not
> always use a Pimpl idiom where an empty container (with no capacity)
> can
> be represented by a single null pointer?

I've simply had clients complain to me: why is sizeof(vector) 4 words
instead of 3? Please make it 3.

Given that complaint, I can attempt to generate reasonable use cases
which motivate such a complaint. But my basic premise is that those
complaining know their needs far better than I do.

I've never actually had anyone complain to me about the sizeof an
unordered_set. However it seems logical to me that if clients are
concerned about the overhead of one container, they could be concerned
about the overhead of another (and thus any) container.

Concerning your suggestion about
container<unique_ptr<unordered_set<A>>> as a substitute for
container<unordered_set<A>> (or equivalent), my best guess is that
there may be clients with use cases where the median size of the inner
container is small but non-zero, and thus the extra heap allocation
and indirection ends up costing more.

Suffice it to say that from a general purpose library point of view,
more overhead is never better if you can get the equivalent
functionality/performance out of less overhead. And historically, the
design of the std::containers has given precedence to speed and space
performance as opposed to turning basic exception safety into strong
exception safety (and I agree with that design philosophy for a
general purpose library).

>>> It seems like you think compressed pair can only compress away one
>>> member, but IIUC you can chain them and then derive your hash
>>> table from
>>> that to compress as much as you like. Of course it would be nice to
>>> have a compressed_tuple that handles it all for us, but I think you
>>> _can_ do it.
>> I have high hopes that std::tuple will become the compressed_pair of
>> the future. Technically, and especially with variadic template
>> language support, it is not that hard to do (tuple is a pain no
>> matter
>> what without variadic template language support). Standards-wise,
>> the
>> current C++0X draft (N2798) does not outlaw, nor mandate, the empty
>> member optimization for tuple,
> Except that there is no "empty member optimization," right? Unless
> this
> is one of the language changes I've failed to notice (and there are a
> few), I think what you mean is that tuple is free to use derivation to
> take full advantage of the EBO, right?
>> and I hope to keep it that way. Recent straw polls on the LWG have
>> been favorable to allowing the empty member optimization for
>> std::tuple.
> Until we get CWG to agree that the EMO exists, I doubt straw polls in
> LWG can make much difference ;-)

I used the term "empty member optimization" to refer to the library
technique for optimization as opposed to a language optimization.
Your assumption regarding my meaning referring to derivation under the
covers is correct. As long as we're on the subject, I should warn off
naive attempts at this optimization which look like:

namespace std {

template <class T, class A>
class vector
    : private A   // naive "EMO": derive the container from the allocator
{
    // ...
};

} // namespace std

Such an implementation of "EMO" is error prone as it associates the
namespace of the client-supplied A with that of the vector. I'm not
preaching to Dave with this statement as he has a proven track record
in this department with his implementation of boost::noncopyable. I
mention this warning aimed at the widest possible audience.
Instead one should use the techniques employed by
boost::compressed_pair, or go back to (as far as I know) the
originator of this technique, Nathan Myers.

