Boost Users :
Subject: Re: [Boost-users] Flyweight: wrapping shared_ptr
From: Joaquin M Lopez Munoz (joaquin_at_[hidden])
Date: 2014-10-10 16:02:12
Akim Demaille <akim <at> lrde.epita.fr> writes:
>
> Hi Joaquín,
>
> You are a tough one, I'm surprised (and very pleased) you still answer
> to this thread :)
I love C++ in general, and these low-level design problems in particular.
> Well, I couldn't get this to work. In fact, neither did scenario b. [...]
All the problems you describe are precisely the ones Boost.Flyweight
tries to solve :-) It is very hard to implement a flyweight-like
design (refcount semantics plus unique values) on top of shared_ptr.
In fact, Boost.Flyweight does not use shared_ptr's internally.
Also, Boost.Flyweight is not designed for easy and efficient casting
of flyweight<Derived> to flyweight<Base>: I'm afraid that, to do
scenario b right, you'll basically have to write everything from
scratch: factory, tracking, locking, etc.
> > I'd be interested in knowing your outcomes.
>
> Well, you won: the single store is roughly 20% faster in my
> experiment. I am surprised; I really expected the in-place
> allocation in the per-type factories to be much cheaper than
> the pointer and malloc agitation in the case of the flyweight
> of single-pointer type.
>
> Maybe my attempt to have a polymorphic wrapper around flyweight
> is slow, I don't know. But after all, it is true that most of
> the time these values are manipulated via the base type, which
> is cheap (well, gratis) in the single-factory case, while it
> requires quite some allocations in the per-type-factory case.
> So after all, it might make some sense anyway.
>
> The bench was as follows [...]
I think the main penalty in your per-type test comes from the fact that
you're maintaining two separate refcounting systems (flyweight and
shared_ptr<Holder>) and two virtual hierarchies (Exp_impl and
Holder) in parallel, which effectively doubles the bookkeeping
load of the program. On the other hand, I think the benchmark is
probably too lightweight to really discern whether per-type hashing
is beneficial or detrimental: the number of elements involved is
too low, so everything will fit in the working memory very comfortably,
and hash collision, either in the single or in the per-type case, is
almost certain not to occur. May I suggest you extend the benchmark
to deal with tens of thousands of unique values *at a given time*:
this would, I guess, yield fairer results.
Joaquín M López Muñoz
Telefónica
Boost-users list run by williamkempf at hotmail.com, kalb at libertysoft.com, bjorn.karlsson at readsoft.com, gregod at cs.rpi.edu, wekempf at cox.net