From: David Abrahams (david.abrahams_at_[hidden])
Date: 2002-03-10 01:18:42


----- Original Message -----
From: "Rainer Deyke" <root_at_[hidden]>
To: <boost_at_[hidden]>
Sent: Saturday, March 09, 2002 10:26 PM
Subject: Re: [boost] Interest in a cache class?

> ----- Original Message -----
> From: "David Abrahams" <david.abrahams_at_[hidden]>
> To: <boost_at_[hidden]>
> Sent: Saturday, March 09, 2002 2:49 PM
> Subject: Re: [boost] Interest in a cache class?
>
>
> > I'm interested if it interacts intelligently with shared_ptr.
> > What I want is a cache which can be incrementally flushed, such that
> > the only items that will be destroyed have a reference count of zero.
>
> My design uses 'cache<...>::handle' instead of 'boost::shared_ptr', and
> the existence of a 'cache<...>::handle' does not guarantee that the
> object is being kept around.
>
> My rationale is as follows: the existence of a handle to an object is
> not the best indicator of whether or not the object should be
> discarded.

What criteria do you use for flushing something from the cache? In my
applications, I've needed some way to say "you can't flush this", and
preferably also, "flush this only if there's no other alternative". One
very convenient way for me to represent that information is with
shared_ptrs, but I could use some other mechanism.
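
To make that concrete, here's roughly the kind of thing I have in mind
(just a sketch; flushable_cache and flush_unreferenced are names I'm
inventing for illustration, not a proposed interface):

#include <boost/shared_ptr.hpp>
#include <map>

template<class Key, class T>
class flushable_cache
{
    typedef std::map<Key, boost::shared_ptr<T> > map_type;

public:
    // A shared_ptr handed back to the caller doubles as the "you can't
    // flush this" marker; an empty return means the caller has to
    // re-create the object and insert() it.
    boost::shared_ptr<T> find(Key const& key)
    {
        typename map_type::iterator i = store_.find(key);
        return i == store_.end() ? boost::shared_ptr<T>() : i->second;
    }

    void insert(Key const& key, boost::shared_ptr<T> const& p)
    {
        store_[key] = p;
    }

    // Incremental flush: destroy only the entries nobody else holds,
    // i.e. those whose only remaining owner is the cache itself.
    void flush_unreferenced()
    {
        for (typename map_type::iterator i = store_.begin(); i != store_.end();)
        {
            if (i->second.use_count() == 1)
                store_.erase(i++);    // no outside owners: safe to drop
            else
                ++i;                  // someone is still using it: keep
        }
    }

private:
    map_type store_;
};

The "flush this only if there's no other alternative" case would need
something extra, perhaps a per-entry priority, but the basic criterion
is the reference count.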

> Searching the cache for an object is a relatively slow
> O(lg n) operation.

That's what I call "relatively fast".

> Therefore I want to perform this action only once
> at startup where possible.

Startup of what? The data I've wanted to cache get created on demand,
and are generally expensive to re-create.

> However, having lots of shared_ptrs
> keeping the cached objects alive defeats the purpose of the cache.

Maybe your purpose. When I've needed a cache, I've needed some way to
prevent certain objects from being flushed.

> Separating the handles from the mechanism for discarding old objects
> from the cache means the user never has to worry about that.

I guess I'd need to see a bit more detail to understand that statement.
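
If I'm guessing right about the shape of it, you mean something along
these lines (pure speculation on my part, with made-up names; the point
is only that handles never own the cached objects):

#include <cstddef>
#include <deque>

template<class T>
class cache
{
public:
    struct slot
    {
        slot() : object(0) {}
        T* object;                    // 0 once the cache has discarded it
    };

    class handle
    {
    public:
        explicit handle(slot* s) : slot_(s) {}
        T* get() const { return slot_->object; }   // may be 0
    private:
        slot* slot_;
    };

    handle add(T* object)
    {
        slots_.push_back(slot());
        slots_.back().object = object;
        return handle(&slots_.back());    // found once, kept from then on
    }

    // The discard policy lives entirely in the cache; outstanding handles
    // neither prevent this nor are invalidated by it.
    void discard_everything()
    {
        for (std::size_t i = 0; i < slots_.size(); ++i)
        {
            delete slots_[i].object;
            slots_[i].object = 0;
        }
    }

private:
    std::deque<slot> slots_;              // slot addresses stay stable
};

If that's the shape of it, my earlier question stands: where does the
user say "don't discard this one"?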

> Even without that, I would prefer a pointer with an intrusive
> reference count. The difference in efficiency (in both memory usage
> and execution time) seems very significant to me.

Have you done any measurements?
I did a simple experiment years ago, when people were complaining about
the speed of shared_ptr: I did count allocation from a deque<> whose
elements were a union of the count object and a free-list pointer, and
as I recall it showed little appreciable difference in speed versus the
intrusive version. I wonder if that optimization is still applicable to
shared_ptr since the redesign?
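
From memory, the setup was roughly the following (reconstructed for
illustration, so treat the names and details as approximate):

#include <deque>

struct count_block            // stand-in for shared_ptr's count object
{
    long use_count;
    long weak_count;
};

union count_slot
{
    count_block  block;
    count_slot*  next_free;   // unused slots are chained into a free list
};

class count_pool
{
public:
    count_pool() : free_list_(0) {}

    count_block* allocate()
    {
        if (free_list_)                      // reuse a slot freed earlier
        {
            count_slot* s = free_list_;
            free_list_ = s->next_free;
            return &s->block;
        }
        slots_.push_back(count_slot());      // deque never moves existing slots
        return &slots_.back().block;
    }

    void deallocate(count_block* p)
    {
        count_slot* s = reinterpret_cast<count_slot*>(p);
        s->next_free = free_list_;
        free_list_ = s;
    }

private:
    std::deque<count_slot> slots_;
    count_slot*            free_list_;
};

With the free list, allocating a count is just a pointer relink (or a
push_back), which is presumably why the intrusive version had no real
edge.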

-Dave

