From: Rainer Deyke (root_at_[hidden])
Date: 2002-03-10 11:16:03


----- Original Message -----
From: "David Abrahams" <david.abrahams_at_[hidden]>
To: <boost_at_[hidden]>
Sent: Saturday, March 09, 2002 11:18 PM
Subject: Re: [boost] Interest in a cache class?

>
> ----- Original Message -----
> From: "Rainer Deyke" <root_at_[hidden]>
> To: <boost_at_[hidden]>
> Sent: Saturday, March 09, 2002 10:26 PM
> Subject: Re: [boost] Interest in a cache class?
>
>
> > ----- Original Message -----
> > From: "David Abrahams" <david.abrahams_at_[hidden]>
> > To: <boost_at_[hidden]>
> > Sent: Saturday, March 09, 2002 2:49 PM
> > Subject: Re: [boost] Interest in a cache class?
> >
> >
> > > I'm interested if it interacts intelligently with shared_ptr.
> > > What I want is a cache which can be incrementally flushed, such
> > > that the only items that will be destroyed have a reference
> > > count of zero.
> >
> > My design uses 'cache<...>::handle' instead of 'boost::shared_ptr',
> > and the existence of a 'cache<...>::handle' does not guarantee
> > that the object is being kept around.
> >
> > My rationale is as follows: the existence of a handle to an object
> > is not the best indicator of whether or not the object should be
> > discarded.
>
> What criteria do you use for flushing something from the cache? In
> my applications, I've needed some way to say "you can't flush this",
> and preferably also, "flush this only if there's no other
> alternative". One very convenient way for me to represent that
> information is with shared_ptrs, but I could use some other
> mechanism.

Right now there is no way. This functionality could be added through
another smart pointer ('boost::shared_ptr' or otherwise) with a
conversion from 'cache::handle' to the other smart pointer.
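
For illustration, here is roughly what such a conversion could look
like. This is only a sketch; 'pinned_handle', 'cache_entry' and
'pin_count' are made-up names, not part of my current design. The
idea is that holding the converted pointer keeps a pin count
non-zero, and the flush pass skips pinned entries:

    // Sketch only: an entry whose pin_count is non-zero is skipped
    // by the cache's flush pass.
    template<class T>
    struct cache_entry {
        T* object;
        int pin_count;
    };

    // A "you can't flush this" pointer obtained from a handle.
    template<class T>
    class pinned_handle {
    public:
        explicit pinned_handle(cache_entry<T>* e) : entry_(e)
            { ++entry_->pin_count; }
        pinned_handle(const pinned_handle& other) : entry_(other.entry_)
            { ++entry_->pin_count; }
        ~pinned_handle() { --entry_->pin_count; }
        T& operator*() const { return *entry_->object; }
    private:
        pinned_handle& operator=(const pinned_handle&); // not implemented
        cache_entry<T>* entry_;
    };

The weaker "flush this only if there's no other alternative" case
could be a second counter that the flush pass overrides only as a
last resort.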

> > Searching the cache for an object is a relatively slow
> > O(lg n) operation.
>
> That's what I call "relatively fast".

A previous design that did a 'std::map' lookup on each access proved
to be too slow for my purposes. (I need to access the cache about
10000 to 100000 times a second.)
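
To make the cost difference concrete, here is a sketch with invented
names ('resource' and 'cache_index' are placeholders): looking the
object up on every access pays the O(lg n) 'std::map::find' each
time, while a handle that remembers the result of one lookup pays it
only once. A real cache would also have to cope with flushed entries,
which the sketch ignores:

    #include <map>
    #include <string>

    struct resource { /* the expensive cached type */ };

    std::map<std::string, resource> cache_index;

    // Previous design: O(lg n) lookup on every single access.
    resource* lookup_every_time(const std::string& key) {
        std::map<std::string, resource>::iterator i = cache_index.find(key);
        return i != cache_index.end() ? &i->second : 0;
    }

    // Handle-style access: the lookup happens once, later accesses
    // are plain dereferences of the stored iterator.
    class cached_handle {
    public:
        explicit cached_handle(const std::string& key)
            : pos_(cache_index.find(key)) {}  // assumes the key exists
        resource& operator*() const { return pos_->second; }
    private:
        std::map<std::string, resource>::iterator pos_;
    };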

> > Therefore I want to perform this action only once
> > at startup where possible.
>
> Startup of what?

Often the startup of the entire program; sometimes when other
objects are loaded into the cache.

> The data I've wanted to cache get created on demand,
> and are generally expensive to re-create.

That's why getting a cache handle doesn't actually load the object
yet. The object is loaded when the 'cache::handle' is dereferenced.
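
Roughly like this (a simplified sketch; in the real design the object
would live in the cache rather than be owned by the handle, and
'load_from_disk' is just a stand-in for whatever expensive creation
is needed):

    #include <string>

    // Sketch: obtaining the handle is cheap; the expensive load only
    // happens on the first dereference (or again after a flush).
    template<class T>
    class lazy_handle {
    public:
        explicit lazy_handle(const std::string& key)
            : key_(key), object_(0) {}

        T& operator*() {
            if (!object_)
                object_ = load_from_disk(key_);
            return *object_;
        }
    private:
        // Stand-in for the real, expensive creation of the object.
        static T* load_from_disk(const std::string&) { return new T(); }

        std::string key_;
        T* object_;   // null until first dereference
    };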

> > However, having lots of shared_ptrs
> > keeping the cached objects alive defeats the purpose of the cache.
>
> Maybe your purpose. When I've needed a cache, I've needed some way
> to prevent certain objects from being flushed.

Why?

> > Even without that, I would prefer a pointer with an intrusive
> > reference count. The difference in efficiency (in both memory
> > usage and execution time) seems very significant to me.
>
> Have you done any measurements?

No. Maybe the difference is less significant than I thought.
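
For what it's worth, the difference I had in mind is roughly the
following (a sketch, not a measurement; 'counted' and
'intrusive_handle' are made-up names): with an intrusive count the
counter is a member of the object, so the object itself is the only
allocation, whereas a non-intrusive reference-counted pointer needs a
separately allocated count.

    // Sketch of an intrusively counted base class: the count lives
    // inside the object, so no separate count block is allocated.
    class counted {
    public:
        counted() : refs_(0) {}
        void add_ref() { ++refs_; }
        void release() { if (--refs_ == 0) delete this; }
    protected:
        virtual ~counted() {}
    private:
        long refs_;
    };

    // Minimal pointer over such an object (illustration only).
    template<class T>
    class intrusive_handle {
    public:
        explicit intrusive_handle(T* p) : p_(p) { if (p_) p_->add_ref(); }
        intrusive_handle(const intrusive_handle& o) : p_(o.p_)
            { if (p_) p_->add_ref(); }
        ~intrusive_handle() { if (p_) p_->release(); }
        T* operator->() const { return p_; }
    private:
        intrusive_handle& operator=(const intrusive_handle&); // not impl.
        T* p_;
    };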

--
Rainer Deyke | root_at_[hidden] | http://rainerdeyke.com
