
From: Ian Bruntlett (Ian.Bruntlett_at_[hidden])
Date: 2000-04-10 03:38:24


Miki,

> Your class is basically an auto_ptr with a parametrised release function.
> Looking at this level, I would much prefer to wait for our
> scoped_ptr's and shared_ptr's to get the parametrised release
> function. There has been some talk about this, so I don't think it
> should be too long.

Yes, that would be useful. I think that would mean scoped_array<> would
effectively be a scoped_ptr<> with a release function object that does a
delete [].
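
Something like this rough sketch is what I have in mind (the names here are
invented, and this is not the real boost code - just an illustration of a
scoped_ptr with a parametrised release function object):

// Rough sketch only: a scoped_ptr that takes its release function object
// as a second template parameter.
template<typename T>
struct single_delete {
    void operator()(T* p) const { delete p; }
};

template<typename T>
struct array_delete {
    void operator()(T* p) const { delete [] p; }
};

template<typename T, typename Release = single_delete<T> >
class scoped_ptr_sketch {
public:
    explicit scoped_ptr_sketch(T* p = 0) : ptr_(p) {}
    ~scoped_ptr_sketch() { Release()(ptr_); }
    T& operator*() const  { return *ptr_; }
    T* operator->() const { return ptr_; }
    T* get() const        { return ptr_; }
private:
    T* ptr_;
    scoped_ptr_sketch(const scoped_ptr_sketch&);            // non-copyable
    scoped_ptr_sketch& operator=(const scoped_ptr_sketch&);
};

// scoped_array<T> would then effectively be
// scoped_ptr_sketch<T, array_delete<T> >.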

I feel that auto_ptr<> would only be usable for resources already wrapped
up in classes, because it provides operators -> and *.

> - The smart pointer does not store a pointer but the actual instance.
> This is an interesting special case usage of smart pointers. I would
> actually be tempted to generalize your class into something like
> auto_value, so it is not limited to resources.
Although I didn't intend it, I think it would be possible to use auto_res<>
as auto_value<>.
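
For illustration, an auto_value along those lines might look roughly like
this - the handle is stored by value and released with a function object
(all the names are invented for the sake of the sketch):

// Rough sketch only: the resource handle is held by value, not behind
// operator-> and operator*, and is released via a supplied function object.
template<typename T, typename Release>
class auto_value_sketch {
public:
    explicit auto_value_sketch(T value, Release rel = Release())
        : value_(value), release_(rel), owned_(true) {}
    ~auto_value_sketch() { if (owned_) release_(value_); }
    T get() const { return value_; }
    T release() { owned_ = false; return value_; }   // give up ownership
private:
    T value_;
    Release release_;
    bool owned_;
    auto_value_sketch(const auto_value_sketch&);             // non-copyable
    auto_value_sketch& operator=(const auto_value_sketch&);
};

// e.g. wrapping a C file handle:
//   struct close_file {
//       void operator()(std::FILE* f) const { if (f) std::fclose(f); }
//   };
//   auto_value_sketch<std::FILE*, close_file> log(std::fopen("log.txt", "w"));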

> Have you considered making shared_res?
I thought about that over the weekend. I basically split resource sharing
into 3 theoretical types.
1. Sharing implies use (object is shared by many sharers)
2. Sharing with optional use (object is shared by many users; if none are
really using it, it is freed up)
3. Constrained sharing with optional use (as 2 but there is an upper limit
on the number of sharers).

My thoughts on the interactions of shared resources and pooling are
incomplete, so bear with me...

Here's an explanation.
1. Covers most cases of sharing. Simply wrap the resource up in a class and
use a shared_ptr to an instance of that class. Each sharer of the resource
is considered an active user of the resource. Each shared_res<> would have a
pointer to a shared counter, NoOfSharers. shared_ptr<> uses a long for
that - I would have used a parameterised type Counter_t, defaulting to
unsigned int - this is so that, in a multi-threaded environment, someone
would have the option of providing an MT-safe counter type.
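
A minimal sketch of what I'm picturing for case 1 (hypothetical names
throughout):

// Case 1 sketch: every copy shares one NoOfSharers counter; the counter
// type is a template parameter so an MT-safe type could be substituted.
template<typename T, typename Release, typename Counter_t = unsigned int>
class shared_res_sketch {
public:
    explicit shared_res_sketch(T value, Release rel = Release())
        : value_(value), release_(rel), sharers_(new Counter_t(1)) {}
    shared_res_sketch(const shared_res_sketch& other)
        : value_(other.value_), release_(other.release_),
          sharers_(other.sharers_)
    { ++*sharers_; }
    ~shared_res_sketch()
    {
        if (--*sharers_ == 0) { release_(value_); delete sharers_; }
    }
    T get() const { return value_; }
private:
    T value_;
    Release release_;
    Counter_t* sharers_;   // the shared NoOfSharers counter
    shared_res_sketch& operator=(const shared_res_sketch&);
};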

2. As 1. except each user is only *optionally* using the resource. The
resource is considered used if a PickUp() or get() is invoked on the
shared_res. The resource is considered unused if a Drop() is invoked on the
shared_res. This would be tracked in each shared_res using a boolean flag.
Each shared_res<> has a pointer to a shared block of memory; that block
would hold two counters, SharerCount and UseCount. Whenever PickUp()
or get() is invoked, a PickUp function object would be invoked; whenever
Drop() is invoked, a Drop function object would be invoked. I'd envisage
this being used with scarce pooled resources - even though hundreds of
objects could be sharing a particular resource, if none of them are using
it, it could be put back in a pool. Remember, this is theoretical.
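
Again only a sketch, and I've chosen to fire the PickUp/Drop function
objects on the 0 <-> 1 transitions of UseCount, which is one way to read
the pooling idea (every name here is made up):

// Case 2 sketch: a shared block holds SharerCount and UseCount, and each
// shared_res tracks its own "in use" flag.
template<typename T, typename PickUpFn, typename DropFn,
         typename Counter_t = unsigned int>
class shared_res2_sketch {
    struct shared_block {
        Counter_t SharerCount;
        Counter_t UseCount;
        shared_block() : SharerCount(1), UseCount(0) {}
    };
public:
    explicit shared_res2_sketch(T value)
        : value_(value), block_(new shared_block), in_use_(false) {}
    shared_res2_sketch(const shared_res2_sketch& other)
        : value_(other.value_), block_(other.block_), in_use_(false)
    { ++block_->SharerCount; }
    ~shared_res2_sketch()
    {
        Drop();
        if (--block_->SharerCount == 0) delete block_;
    }
    T PickUp()                      // a.k.a. get(): this sharer now uses it
    {
        if (!in_use_) {
            in_use_ = true;
            if (++block_->UseCount == 1)
                PickUpFn()(value_); // e.g. fetch the resource from a pool
        }
        return value_;
    }
    void Drop()                     // this sharer no longer needs it
    {
        if (in_use_) {
            in_use_ = false;
            if (--block_->UseCount == 0)
                DropFn()(value_);   // e.g. hand the resource back to a pool
        }
    }
    Counter_t sharer_count() const { return block_->SharerCount; }
private:
    T value_;
    shared_block* block_;
    bool in_use_;                   // per-sharer "am I using it?" flag
    shared_res2_sketch& operator=(const shared_res2_sketch&);
};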

3. Is already dealt with by the implementation of (2): the shared
SharerCount is there, so enforcing an upper limit on sharers is just an
extra check.

To exercise the theoretical shared_res<> I thought of a Gizmos and Widgets
online ordering system. Apologies for the silly acronym disorder :). I
didn't think about it too hard, but for a really heavily used system,
pooling may be necessary. That introduced the need for a Drop() function
object. Then I considered, how would I try to cope with a DDoS (distributed
denial of service)? O.k., this is only theoretical but I was trying to break
shared_res<>. So I figured there may be a low priority thread running in the
system acting as a pulse, so that the system could possibly know how
overloaded it was and take varying measures of triage to deal with it. And
one of those would be: if the system is really overloaded and there are more
than N users of a resource, don't allow this resource to be shared with any
more users. That allows "load-bucketing", where excess work just ends up spilling
all over the floor. If "load-balancing" gets brought in, it would be useful
if the shared_ptr<> could be updated to move to a more suitable resource in
a pool. Some of this work is best handled by a "pool" smart pointer.

Hmm. If I can sort out shared_res<> and pool<>, I'll probably write it up
for Overload.

Ian

