From: Martin Wille (mw8329_at_[hidden])
Date: 2003-11-11 07:46:25
I wrote:
> What we, or you if you want to contribute, can do: find a way of
> storing many thread specific items without wasting that many thread
> keys. (I'll post separately on this)
Spirit already uses a technique for reducing the number of
tss keys used (the problem to solve was to go from
one key per grammar object to one key per grammar class). Maybe
some of that code can be turned into a general approach
for reducing the number of thread keys used by
boost::thread_specific_ptr instances.
Spirit uses some code that generates small numeric ids
that are unique for as long as they exist. Acquiring or
releasing such an id takes amortized O(1) time, with a
rather large constant factor (vector operations and a
lock are involved). The release operation offers the
nothrow guarantee.
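The id generator could look roughly like this (a hypothetical sketch, not the actual Spirit code, written in modern C++ for brevity; the names are invented). Released ids go onto a free list and get reused; acquire() reserves free-list capacity up front so that release() never needs to allocate and can be nothrow:

```cpp
#include <cassert>
#include <cstddef>
#include <mutex>
#include <vector>

// Sketch: hand out small unique ids; reuse released ones.
class id_allocator {
public:
    std::size_t acquire() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (!free_.empty()) {               // reuse a released id
            std::size_t id = free_.back();
            free_.pop_back();
            return id;
        }
        // Reserve capacity now so release() can push_back without
        // allocating (at most next_ ids can ever be on the free list).
        free_.reserve(next_ + 1);
        return next_++;                     // brand-new id
    }
    void release(std::size_t id) noexcept {
        std::lock_guard<std::mutex> lock(mutex_);
        free_.push_back(id);                // capacity reserved in acquire()
    }
private:
    std::mutex mutex_;
    std::vector<std::size_t> free_;
    std::size_t next_ = 0;
};
```

Both operations take the lock, which accounts for the large constant factor; the vector growth in acquire() is what makes the bound amortized.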
These ids are then used as an index into a vector that
exists once per thread. So, using the id, the thread
specific vector elements can be accessed in O(1) without
taking any locks (i.e. it is quite fast).
I think it is possible to implement a thread
specific pointer based on the same idea. Such
an implementation would only consume a few native
tss keys (probably only one) and it would overcome
the (possibly small) implementation defined limit on
the number of thread specific storage objects.
A downside of this approach is that operator->(),
operator*() and .reset(p) (for p != 0) for this
implementation could throw (due to the underlying vector
failing to be expanded). However, .get(), .release(),
.reset(0) and the dtor could be made nothrow.
Would you be interested in having such an implementation for
thread specific storage (which would have the same interface as
boost::thread_specific_ptr) available in Boost.Thread?
If there is interest, then I would rewrite the existing (Spirit
specific) code to match the interface of thread_specific_ptr.
(naming suggestions for that alternative tsp are welcome)
Jorge: FYI, this won't be available in Boost 1.31 or in Spirit 1.8.
Regards,
m
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk