
From: Pavel Vasiliev (pavel_at_[hidden])
Date: 2003-02-10 10:46:31

Alexander Terekhov wrote:

> Pavel Vasiliev wrote:
> > The true locking/unlocking, not InterlockedIncrement/Decrement()
>                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

> Nah, pthread_refcount_t. ;-)

> > even if available, is necessary to support weak references.
> > [...]

> It's probably a bit more exciting to take care of all possible races
> without "a true lock" protecting both counters. I'm not sure that the
> true locking is *necessary* to support weak references. Do you have
> an illustration, Pavel?

Maybe the following code answers the challenge? :-).

The "true lock" is still used to protect the deallocating code and to
ensure cache sync before "delete something". But the most frequently
used operations, acquire and release, are now "mutex-free" for both
counters; each takes only a single call to atomic_increment/decrement.

Provided that this example is correct, I see no need to remove the
mutex completely. First, it can be used to ensure cache sync. Second,
lock/unlock can be implemented with atomic_exchange, and the mutex
data with a single integer variable (a lightweight spinlock mutex).
All other implementations I can currently think of require at least
one additional flag plus atomic operations on it, which is no cheaper
than a lightweight mutex. Or do other solutions exist?
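Such a lightweight spinlock might be sketched as follows. This is only
an illustration in C++11 terms (std::atomic stands in for the
atomic_exchange primitive; the class name and the try_lock helper are
my own additions, not part of the example above):

```cpp
#include <atomic>

// Sketch of a "lightweight spinlock mutex": lock/unlock implemented
// by an atomic exchange on a single integer variable.
class spinlock_mutex
{
    std::atomic<int> locked{0}; // 0 = free, 1 = held

public:
    void lock()
    {
        // The exchange returns the previous value; spin until we are
        // the thread that flipped 0 -> 1.
        while (locked.exchange(1, std::memory_order_acquire) != 0)
            ; // busy-wait
    }

    bool try_lock()
    {
        // Hypothetical helper: acquire only if currently free.
        return locked.exchange(1, std::memory_order_acquire) == 0;
    }

    void unlock()
    {
        // The release store also publishes all writes made inside the
        // critical section to other processors ("cache sync").
        locked.store(0, std::memory_order_release);
    }
};
```

The release/acquire pairing is what provides the cache-sync guarantee
the text relies on before "delete something".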


// Weak reference control block example. Pseudocode.
// More source code comments in

// Acquires/releases strong count in weak_ref_control_block.
class strong_ptr;

// Acquires/releases weak count in weak_ref_control_block.
class weak_ref;

// Control block for an allocated object. Stores strong and weak counts.
// Allocated object is destructed when strong count drops to 0.
// Control block by itself is destructed when all strong and weak
// references are lost.
class weak_ref_control_block
{
public:
    // Called by existing strong_ptr.
    void acquire_strong();
    void release_strong();
    void acquire_weak_from_strong();

    // Called by existing weak_ref.
    void acquire_weak();
    void release_weak();
    bool acquire_strong_from_weak();

private:
    void strong_refs_lost();
    void weak_refs_lost();
    void destruct_allocated(); // "delete p_allocated_obj".
    void destruct_self(); // "delete this".

    atomic_int_type strong_count;
    atomic_int_type weak_count;
    mutex mutex_destruct;

    T *p_allocated_obj;
};

void weak_ref_control_block::acquire_strong()
{
    // Caller already holds a strong reference, so the object is alive.
    atomic_increment(&strong_count);
}

void weak_ref_control_block::release_strong()
{
    if(atomic_decrement(&strong_count) == 0)
        strong_refs_lost();
}

void weak_ref_control_block::acquire_weak()
{
    atomic_increment(&weak_count);
}

void weak_ref_control_block::release_weak()
{
    if(atomic_decrement(&weak_count) == 0)
        weak_refs_lost();
}

void weak_ref_control_block::acquire_weak_from_strong()
{
    atomic_increment(&weak_count);
}

bool weak_ref_control_block::acquire_strong_from_weak()
{
    scope_lock lock(mutex_destruct);
    if(atomic_increment(&strong_count) > 0)
        return true; // object still alive

    // Object already destructed; restore the "dead" marker.
    atomic_set(&strong_count, atomic_int_type_MIN);
    return false;
}

void weak_ref_control_block::strong_refs_lost()
{
    {
        scope_lock lock(mutex_destruct);
        if(atomic_query(&strong_count) != 0)
            return; // resurrected by acquire_strong_from_weak().

        atomic_set(&strong_count, atomic_int_type_MIN);
    }

    destruct_allocated(); // smp caches are in sync due to mutex above
    release_weak(); // fire destruct_self().
}

void weak_ref_control_block::weak_refs_lost()
{
    bool b_destruct;
    {
        scope_lock lock(mutex_destruct);
        b_destruct = atomic_query(&strong_count) < 0
                  && atomic_query(&weak_count) == 0;
    }

    if(b_destruct)
        destruct_self();
}
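For concreteness, here is my translation of the pseudocode above into
self-contained modern C++: std::atomic and std::mutex stand in for
atomic_int_type and the lightweight mutex, T is fixed to int, and all
strong references together are assumed to own one weak count (released
in strong_refs_lost()). A sketch, not a definitive implementation:

```cpp
#include <atomic>
#include <climits>
#include <mutex>

struct weak_ref_control_block
{
    std::atomic<int> strong_count;
    std::atomic<int> weak_count;
    std::mutex mutex_destruct;
    int *p_allocated_obj;

    // Starts with one strong reference; the strong references as a
    // group own one weak count.
    explicit weak_ref_control_block(int *p)
        : strong_count(1), weak_count(1), p_allocated_obj(p) {}

    void acquire_strong() { ++strong_count; }

    void release_strong()
    {
        if (--strong_count == 0)
            strong_refs_lost();
    }

    void acquire_weak() { ++weak_count; }

    void release_weak()
    {
        if (--weak_count == 0)
            weak_refs_lost();
    }

    void acquire_weak_from_strong() { ++weak_count; }

    bool acquire_strong_from_weak()
    {
        std::lock_guard<std::mutex> lock(mutex_destruct);
        if (++strong_count > 0)
            return true;        // object still alive
        strong_count = INT_MIN; // restore the "dead" marker
        return false;
    }

    void strong_refs_lost()
    {
        {
            std::lock_guard<std::mutex> lock(mutex_destruct);
            if (strong_count != 0)
                return;         // resurrected by acquire_strong_from_weak()
            strong_count = INT_MIN;
        }
        destruct_allocated();   // caches synced by the mutex above
        release_weak();         // may fire destruct_self()
    }

    void weak_refs_lost()
    {
        bool b_destruct;
        {
            std::lock_guard<std::mutex> lock(mutex_destruct);
            b_destruct = strong_count < 0 && weak_count == 0;
        }
        if (b_destruct)
            destruct_self();    // lock already released before delete
    }

    void destruct_allocated() { delete p_allocated_obj; p_allocated_obj = nullptr; }
    void destruct_self() { delete this; }
};
```

Note how weak_refs_lost() evaluates its condition under the lock but
performs "delete this" only after releasing it.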

