From: Karl Nelson (kenelson_at_[hidden])
Date: 2000-12-05 16:46:05
> In message <200012042317.PAA03457_at_[hidden]>, Karl Nelson
> <kenelson_at_[hidden]> writes
> >> Could you clarify the size issue? Do you mean sizeof footprint, memory
> >> allocated, ...?
> >There are three sizes to a program and the design of the system
> >affects all 3.
> > - static binary size (how much stuff gets compiled into the executable)
> > - runtime binary size (how much actually gets loaded)
> > - runtime data size (how big are the objects created at run time.)
> I tend to split the data view up in terms of
> - the sizeof of footprint, eg one pointer vs two pointers
> - the number of heap objects associated/owned
> - the sizes of the heap objects
> The last two -- or even all three, depending on your definition -- could
> be said to constitute the 'working set' of the object.
Well, in the cloned case, because we can't have variable-sized
objects, we inevitably need an indirection onto the heap.
Thus for the sake of argument let's assume that both sharing and
cloning have a stack size of 1 pointer; stack size is then
generally considered a wash.
In the cloned case the heap size is the size of the composite object,
which is the size of all the adaptors plus the size of the generic
functor plus the user-supplied functor.
If the internals of the functor are shared in segments, you get one counter
per segment plus the new vtable. Thus two adaptors and a functor
increase the heap size by 2*sizeof(vtable) + 3*sizeof(int).
(see diagrams in previous email.)
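Not having the diagrams to hand, the accounting above can be sketched roughly like this (hypothetical layout, the names are not sigc++'s): a cloned composite lives in one heap block with no counters, while a shared segmented design adds one ref-count int per segment plus a vtable pointer, since segments are destroyed polymorphically.

```cpp
#include <cassert>

struct UserFunctor { void (*fn)(int); };   // user-supplied functor

// Cloned case: one composite heap object, no counters, no extra vptrs.
struct ClonedComposite {
    UserFunctor user;    // user functor
    int bound_arg;       // data introduced by a bind adaptor
};

// Shared, segmented case: every segment carries its own counter
// (+sizeof(int)) and a vptr (virtual destructor) on top of its payload.
struct Segment {
    int ref_count = 1;
    virtual ~Segment() {}
};
struct UserSegment : Segment { UserFunctor user; };
struct BindSegment : Segment { Segment* next; int bound_arg; };
struct RetSegment  : Segment { Segment* next; };  // return-type adaptor
```

With two adaptor segments and one functor segment this is where the 2*sizeof(vtable) + 3*sizeof(int) of extra heap comes from.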
> The footprint for a reference counting instance using shared_ptr is the
> same as that for cloning. The working set, for a given object, is
> larger, but with increased sharing this decreases.
> >Of the things in my last summary, here are some of the effects
> > sharing (reference counting) - reduces runtime data size if many copies
> > are used, but increases if things can't be shared as you need extra
> > data (counters) This system also potentially increases binary size
> > as the counter dealing code may get scattered everywhere. This
> > competes against the copy code which we avoided.
> > cloning - every copy increases runtime data size, thus if the user copies
> > functors frequently, they will get loads of runtime data. Further,
> > each of those copies will instantiate its own operator=() so you get
> > runtime and static binaries which are larger.
> I'm not sure that I understand what you mean by operator= being
> implemented lots. Both the reference-counted and cloning designs must
> implement operator=, but in one case clone is called and in the other
> the count is incremented. However, roughly the same amount of code --
> which is small -- will be generated in each case for operator=.
Okay, let me explain this better... Assume we have 4 levels of indirection:
2 adaptors (one of which introduced some objects), the generic functor, and
then the user functor (a function pointer in this case).
(In sigc++ terms, this is a rather extreme case.)
s2=s1; // <-- this statement
In the case of cloning we have to have an
adp_rettype<int, adp_bind<A, func_functor<void,int,A> > >::copy,
where copy is either operator= or X::X(const X&) depending on the
implementation. If I call this 1000 times in a row, that would be
significant. Further, every different set of adaptors and
functors will generate a new assignment operator or copy
constructor. (Basically this is a type explosion.)
While in the case of sharing we get a reference-count increase,
but that is generic over the type of functor and thus it can be
factored out into a single, non-template implementation.
And this cost is fixed regardless of how many Slot0<type1>, Slot1<type,type>,
Slot2<type,type,type> I instantiate.
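A minimal sketch of this contrast (hypothetical names, not sigc++'s actual API): the shared design keeps the count in a non-template base, so s2=s1 compiles exactly once; the cloning design stamps out a fresh operator= for every composite type.

```cpp
#include <cassert>

// Shared design: one non-template base holds the count, so this
// assignment code is emitted once no matter how many Slot0<...>,
// Slot1<...>, Slot2<...> instantiations exist.
struct SlotRep {
    int ref_count = 1;
    virtual ~SlotRep() {}
};

struct SlotBase {                 // non-template: one copy of this code
    SlotRep* rep = nullptr;
    SlotBase() {}
    SlotBase(const SlotBase& o) : rep(o.rep) { if (rep) ++rep->ref_count; }
    SlotBase& operator=(const SlotBase& o) {
        if (o.rep) ++o.rep->ref_count;   // bump first: safe for self-assign
        release();
        rep = o.rep;
        return *this;
    }
    ~SlotBase() { release(); }
    void release() { if (rep && --rep->ref_count == 0) delete rep; }
};

template <class R> struct Slot0 : SlotBase {};   // adds no copy code

// Cloning design: operator= must be instantiated per composite type
// (e.g. adp_rettype<int, adp_bind<A, func_functor<void,int,A> > >),
// so each adaptor/functor combination emits its own copy code.
template <class Composite>
struct ClonedSlot {
    Composite* obj = nullptr;
    ClonedSlot() {}
    ClonedSlot& operator=(const ClonedSlot& o) {
        Composite* copy = o.obj ? new Composite(*o.obj) : nullptr;
        delete obj;
        obj = copy;               // fresh heap allocation per assignment
        return *this;
    }
    ~ClonedSlot() { delete obj; }
};
```

s2=s1 on a Slot0 touches only SlotBase::operator= and bumps a count; the same statement on a ClonedSlot allocates and deep-copies the whole composite.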
This has big effects on binary size. Cloning potentially
implements all of those copies inline, so the body of a function
which assigns several slots (all variables being callback<void,MyStruct>)
has the potential of being very large in terms of binary size and
CPU cycles spent allocating and copying heap items. In the case
of cloning the heap memory has doubled by the end of the
function, minus the old functors we throw away.
Had this been shared, the code would be 3 reference increases and
3 decreases; at worst the heap has to deallocate the old functors
whose counts drop to zero.
On the other hand, had this same code been written to construct the
functors each time through, we must create them anew. With sharing
we then get the extra overhead of counters which will never go above
1, which we avoid in cloning.
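The never-shared case can be sketched like so (hypothetical code, simplified to just the counter): a functor created, used once, and destroyed inside a loop pays for a ref-count that only ever holds 1, which is pure overhead relative to cloning.

```cpp
#include <cassert>

// Minimal shared functor: payload omitted, only the counter modeled.
struct Rep { int ref_count = 1; };

struct Shared {
    Rep* rep;
    Shared() : rep(new Rep) {}
    ~Shared() { if (--rep->ref_count == 0) delete rep; }
    Shared(const Shared&) = delete;            // never actually shared here
    Shared& operator=(const Shared&) = delete;
};

// Create a fresh functor each time through; record the largest count
// the counter ever reaches.
int max_count_over_loop() {
    int max_count = 0;
    for (int i = 0; i < 1000; ++i) {
        Shared f;                               // created, used, destroyed
        if (f.rep->ref_count > max_count) max_count = f.rep->ref_count;
    }
    return max_count;                           // stuck at 1: wasted space
}
```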
Depending on how the code is written, the shared version could use
considerably less heap; but if the functors are never shared, we
carry an int which will never hold a value other than 1 or 0, in which
case the cloned version will have less heap.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk