
From: Anthony Williams (anthony_w.geo_at_[hidden])
Date: 2006-10-31 16:19:14


Roland Schwarz <roland.schwarz_at_[hidden]> writes:

> In thread_rewrite there is an implementation from Anthony.
> This, however, is currently only for Windows, correct?

Yes. We need a POSIX version. For simple rw-locks, it should be a trivial
wrapper around the POSIX-supplied rw-locks. For upgradeable stuff, we will
need to wrap it.
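For the simple case, I'd expect something like the sketch below (illustrative
names and no real error handling, so treat it as a rough guess at the shape
rather than actual code):

#include <pthread.h>

// Rough sketch of a thin shim over the POSIX rw-lock. Error handling is
// omitted; the real thing would need to check the return codes.
class posix_rw_mutex
{
    pthread_rwlock_t rwlock;

    posix_rw_mutex(posix_rw_mutex const&);            // non-copyable
    posix_rw_mutex& operator=(posix_rw_mutex const&);
public:
    posix_rw_mutex()        { pthread_rwlock_init(&rwlock,0); }
    ~posix_rw_mutex()       { pthread_rwlock_destroy(&rwlock); }

    void lock_shareable()   { pthread_rwlock_rdlock(&rwlock); }
    void unlock_shareable() { pthread_rwlock_unlock(&rwlock); }

    void lock()             { pthread_rwlock_wrlock(&rwlock); }
    void unlock()           { pthread_rwlock_unlock(&rwlock); }
};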

> As I do understand the problem (please correct me if I am wrong)
> there is not a single reader-writer problem, but three:
>
> 1) reader-priority rw-access
> 2) writer-priority rw-access
> 3) starvation free rw-access
>
> which of these three cases applies is in the application domain
> and not in the domain of the library implementation.

IMO, 3 is the only way to go.

> Consequently a generally usable library would need to provide
> mutices for all three cases. I can see this as a policy that
> is fed to the ctor of the rwmutex in the original draft of Bill Kempf:
>
> writer_priority,
> reader_priority,
> alternating_many_reads,
> alternating_single_read
>
> I cannot see, however, a similar policy parameter in the thread_rewrite
> version. How is this controlled in this case?

It is not. After much discussion and various implementations from myself,
Peter, and Chris, as well as examining prior implementations from others, the
final algorithm is "fair". We all seemed to be in agreement that "fair" was
the way to go.

Reader-priority can potentially starve writers. Writer-priority can
potentially starve readers. The "fair" solution in the thread_rewrite
implementation is that a writer blocked in lock() stops additional readers
from acquiring the lock, but when the last reader (or the only writer)
unlocks, one writer and all the waiting readers are unblocked. These then
compete on equal terms. If a reader gets in first, the writer blocks again;
if that happens before the other readers have run, they too will block again.

> Glancing over Anthony's code I can see that he is making use of
> some atomic primitives and semaphore calls. So I think this should
> be fairly easy to port.

Yes, it should. The hardest part will be the WaitForMultipleObjects used in
lock() to ensure that a writer only unblocks when it can get the shared
semaphore AND the exclusive semaphore.
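For those not familiar with the Win32 call, the relevant part is roughly this
(placeholder handle names, not the actual thread_rewrite code):

#include <windows.h>

// Sketch only: with bWaitAll=TRUE the call returns only once BOTH handles
// have been signalled, so a writer never ends up holding one semaphore
// while still blocked on the other.
void wait_for_both(HANDLE shared_sem,HANDLE exclusive_sem)
{
    HANDLE handles[2]={shared_sem,exclusive_sem};
    WaitForMultipleObjects(2,handles,TRUE,INFINITE);
}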

(implements generic version)

This isn't too much of a problem, actually. If we use cond vars, then the
mutex is held when they are signalled, so both the writer and the readers are
all competing for the mutex in order to return from the wait.

> However, since rw-mutices can be made of a mutex and some condition
> variables, shouldn't we follow the old principle of getting it
> correct first and optimize later? I'd very much like to see this
> happen. Not only is the generic version valuable as a fallback, but
> also as a reference to measure performance and correctness against.

OK, I'll provide a generic implementation of my algorithm.... Attached.

> I propose the following roadmap:
>
> 0) Decide if we want the current (broken) version in 1.34.
> Please note that this is URGENT. (A previous poll about this
> interestingly did not get answered by anyone.)

I didn't reply because I had already expressed my opinion that it should be
removed.

> 1) Decide if the current interface proposed by Bill Kempf still is
> appropriate.

No, it's too complicated. I think we should follow the interface from Howard's
proposal to the C++ Standards committee (N2094).
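For context, the shape proposed there is essentially what later became
familiar as shared_mutex; roughly (my paraphrase, so the exact names may
differ from N2094):

// Rough paraphrase of the proposed interface, not a quote from N2094.
class shared_mutex
{
public:
    // exclusive ownership (writers)
    void lock();
    bool try_lock();
    void unlock();

    // shared ownership (readers)
    void lock_shared();
    bool try_lock_shared();
    void unlock_shared();
};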

> 2) Correct the bug that is present in Bill's current (generic)
> implementation.

No. This is not reasonably possible.

> 3) Optimize for each platform.

Optimization is important. As Peter says, the whole point of a rw-lock is that
it is an optimization, so if it's slow, then it loses much of the benefit.

I have now extended my implementation to include upgradeable stuff. I was
planning on working on try and timed variants next.
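A try variant against the state-based scheme would probably look something
like the members below; this is just my sketch of the shape, not code from
thread_rewrite, and it would slot into the read_write_mutex class attached
at the end:

// Hypothetical try variants for the class attached below; sketch only.
bool try_lock()
{
    boost::mutex::scoped_lock lock(state_change);
    if(state.shared_count || state.exclusive)
    {
        return false;           // readers active or another writer holds it
    }
    state.exclusive=true;       // free, so take it without waiting
    return true;
}

bool try_lock_shareable()
{
    boost::mutex::scoped_lock lock(state_change);
    if(state.exclusive || state.exclusive_waiting_blocked)
    {
        return false;           // a writer holds it, or one is already queued
    }
    ++state.shared_count;       // join the readers without blocking
    return true;
}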

I have attached a generic implementation of my basic rw algorithm using a
mutex and two cond vars, without the upgradeable stuff.
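For illustration, usage with the scoped lock types in the attached code would
look like this (the cache is just an example):

#include <map>
#include <string>

boost::read_write_mutex cache_mutex;
std::map<std::string,std::string> cache;

std::string lookup(std::string const& key)
{
    // many readers can hold the lock concurrently
    boost::read_write_mutex::scoped_read_lock lock(cache_mutex);
    std::map<std::string,std::string>::const_iterator const it=cache.find(key);
    return it!=cache.end()?it->second:std::string();
}

void store(std::string const& key,std::string const& value)
{
    // writers get exclusive access
    boost::read_write_mutex::scoped_write_lock lock(cache_mutex);
    cache[key]=value;
}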

Anthony

-- 
Anthony Williams
Software Developer
Just Software Solutions Ltd
http://www.justsoftwaresolutions.co.uk

#ifndef BOOST_THREAD_GENERIC_READ_WRITE_MUTEX_HPP
#define BOOST_THREAD_GENERIC_READ_WRITE_MUTEX_HPP

// (C) Copyright 2006 Anthony Williams
//
// Distributed under the Boost Software License, Version 1.0. (See
// accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)

#include <boost/assert.hpp>
#include <boost/static_assert.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition.hpp>

namespace boost
{
    class read_write_mutex
    {
    private:
        struct state_data
        {
            unsigned shared_count:10;             // readers currently holding the lock
            unsigned shared_waiting:10;           // readers blocked waiting for the lock
            unsigned exclusive:1;                 // a writer currently holds the lock
            unsigned exclusive_waiting:10;        // writers blocked waiting for the lock
            unsigned exclusive_waiting_blocked:1; // a waiting writer is holding back new readers

            friend bool operator==(state_data const& lhs,state_data const& rhs)
            {
                // The bit-fields pack into a single word; compare the raw
                // representation. (Unused by the generic implementation below.)
                BOOST_STATIC_ASSERT(sizeof(state_data)==sizeof(unsigned));
                return *reinterpret_cast<unsigned const*>(&lhs)==*reinterpret_cast<unsigned const*>(&rhs);
            }
        };
        

        state_data state;
        boost::mutex state_change;       // protects state
        boost::condition shared_cond;    // readers wait here
        boost::condition exclusive_cond; // writers wait here

        // Fairness: when the lock is released, wake one waiting writer and
        // all waiting readers, and let them compete on equal terms.
        void release_waiters(state_data old_state)
        {
            if(old_state.exclusive_waiting)
            {
                exclusive_cond.notify_one();
            }
                        
            if(old_state.shared_waiting || old_state.exclusive_waiting)
            {
                shared_cond.notify_all();
            }
        }
        

    public:
        read_write_mutex()
        {
            state_data state_={0};
            state=state_;
        }

        ~read_write_mutex()
        {
        }

        // Acquire a shared (read) lock. New readers queue up behind a waiting
        // writer (exclusive_waiting_blocked), so writers cannot be starved.
        void lock_shareable()
        {
            boost::mutex::scoped_lock lock(state_change);
                
            while(true)
            {
                state_data old_state=state;
                state_data new_state=old_state;
                if(new_state.exclusive || new_state.exclusive_waiting_blocked)
                {
                    ++new_state.shared_waiting;
                }
                else
                {
                    ++new_state.shared_count;
                }
                
                state=new_state;

                if(!(old_state.exclusive || old_state.exclusive_waiting_blocked))
                {
                    return;
                }
                
                shared_cond.wait(lock);
            }
        }

        // Release a shared lock. The last reader out wakes one waiting writer
        // (if any) and all waiting readers.
        void unlock_shareable()
        {
            boost::mutex::scoped_lock lock(state_change);
            state_data old_state=state;
            state_data new_state=old_state;
            bool const last_reader=!--new_state.shared_count;
                
            if(last_reader)
            {
                if(new_state.exclusive_waiting)
                {
                    --new_state.exclusive_waiting;
                    new_state.exclusive_waiting_blocked=false;
                }
                new_state.shared_waiting=0;
            }
            
            state=new_state;
            if(last_reader)
            {
                release_waiters(old_state);
            }
        }

        // Acquire an exclusive (write) lock. While the lock is held by readers
        // or another writer, register as waiting and block new readers.
        void lock()
        {
            boost::mutex::scoped_lock lock(state_change);
                
            while(true)
            {
                state_data old_state=state;

                state_data new_state=old_state;
                if(new_state.shared_count || new_state.exclusive)
                {
                    ++new_state.exclusive_waiting;
                    new_state.exclusive_waiting_blocked=true;
                }
                else
                {
                    new_state.exclusive=true;
                }
                
                state=new_state;

                if(!old_state.shared_count && !old_state.exclusive)
                {
                    break;
                }
                exclusive_cond.wait(lock);
            }
        }

        // Release the exclusive lock and wake one waiting writer plus all
        // waiting readers, who then compete on equal terms.
        void unlock()
        {
            boost::mutex::scoped_lock lock(state_change);
            state_data old_state=state;
            state_data new_state=old_state;
            new_state.exclusive=false;
            if(new_state.exclusive_waiting)
            {
                --new_state.exclusive_waiting;
                new_state.exclusive_waiting_blocked=false;
            }
            new_state.shared_waiting=0;

            state=new_state;
            release_waiters(old_state);
        }
        
        
        class scoped_read_lock
        {
            read_write_mutex& m;
        public:
            scoped_read_lock(read_write_mutex& m_):
                m(m_)
            {
                m.lock_shareable();
            }
            ~scoped_read_lock()
            {
                m.unlock_shareable();
            }
        };

        class scoped_write_lock
        {
            read_write_mutex& m;
            bool locked;
            
        public:
            scoped_write_lock(read_write_mutex& m_):
                m(m_),locked(false)
            {
                lock();
            }
            void lock()
            {
                m.lock();
                locked=true;
            }
            
            void unlock()
            {
                m.unlock();
                locked=false;
            }
            ~scoped_write_lock()
            {
                if(locked)
                {
                    unlock();
                }
            }
        };
    };
}

#endif

