
From: dmoore99atwork (dmoore_at_[hidden])
Date: 2002-03-13 21:02:54


--- In boost_at_y..., "danl_miller" <danl_miller_at_b...> wrote:
 
> Dave, because "violation of the contract" is an exceptional
> aberrant case, please throw an exception in each thread which is still
> pending on that now-defunct barrier.

Guaranteeing an exception throw seems difficult, because
barrier::~barrier would have to block until every wait()ing thread
has become unblocked and thrown its exception upon noticing the
pending destruction of the barrier. The potential is there to block
for a while in the destructor, which may in turn interfere with that
thread's own destruction/cancellation, etc.

If a wait()er hasn't cleared the function, then the underlying
primitives (mutex, condition, etc.) could be deallocated out from
under the wait()ing threads, and we're right back at undefined
behavior, probably of the core dump variety.

If you know of a specific technique for blocking destruction until
you *know* that all wait()ing threads have cleared the function,
please point me in the right direction!

> > --- In boost_at_y..., "dmoore99atwork" <dmoore_at_a...> wrote:
> > > I wanted to do some more reading on cancellation mechanics for
> > > barriers to see if some solution that would throw a controlled
> > > exception out of wait() in this case is possible.
>
> By the way, for the reason that we want thread-synchronization to be
> useful on multiprocessors too (not merely uniprocessors), please do
> not pursue that Mutex type-parameter on your template-based design
> for rwlocks. That Mutex template exposes the uniprocessor
> implementation guts in a way which would need to be ignored/defeated
> in multiprocessor environments which might want very much to use some
> underlying high-efficient highly-tuned operating-system primitive for
> barrier (or might want to use an operating system-primitive spinlock
> instead of some mutex which you are encouraging/mandating).

In stepping back and looking at the design+implementation of
rw_mutex, the "exposed" mutex was there to try to address the self-
deadlocking problem. It doesn't even do that in the case where
explicit scheduling is provided, so it is losing value quickly. As
you point out, it could also needlessly constrain implementors on
platforms with alternative mechanisms...

I still need to think and collect more ideas on self-deadlock
detection/prevention in rw_locks.

BTW - did you receive the prototype explicitly scheduled rw_mutex I
sent via email? Did it address your guaranteed scheduling concerns
(save for the exposed mutex mentioned here)? I am planning on
posting the revised design this weekend, and any early feedback
would be great.

Thanks!
Dave


Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk