On 17/03/13 13:22, Vicente J. Botet Escriba wrote:
On 17/03/13 11:32, Fredrik Orderud wrote:
On Fri, Feb 15, 2013 at 11:51 PM, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
On 15/02/13 15:34, Fredrik Orderud wrote:
On Tue, Feb 12, 2013 at 7:08 PM, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
I will start to analyze your patch this week.

I don't know if you have been following this ticket:
#7906 Very bad performance of generic implementation of shared_mutex on windows

From the performance figures presented, it seems that the Windows-specific implementation will be needed, so a Windows-specific implementation of the shared priority mutex may be needed as well. The problem is that the Windows implementation contains some bugs and missing features.

Thank you for the update, Vicente. I was not aware of the poor win32 performance of the generic shared_mutex, so that of course needs to be taken into consideration. I believe it should be feasible to refactor the pthread & win32 shared_mutex implementations to share state_data (the locking-policy logic) even though the locking "backends" are different.

Yes, but I don't know whether a generic backend for the priority shared mutex will be enough for Windows, so if you have some ideas on how to implement it on Windows, they are worth exploring. Anyway, it is better to have a generic implementation of the priority shared mutex than none.

I would like to integrate your patch soon, but lately there has been a lot of trouble and I have not been tempted to add more disturbance.

I'll let you know when I have some news.

Vicente,
have you had time to examine my patch yet?

Please let me know if there is something I can do to expedite the process. If preferable, I can e.g. break it up into a series of smaller patches. If accepted, I want to do the same refactoring for the win32 shared_mutex as well, and finally unify the state_data inner class between the pthread and win32 implementations, so that they behave consistently.
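As a rough sketch of the direction I have in mind (illustrative only; the member set is borrowed from the generic state_data, and the exact interface is an open question), the locking-policy logic would live in one shared class and only the blocking/notification code would stay platform-specific:

    // sketch: shared locking-policy state, usable by both the
    // pthread and win32 backends (no blocking primitives in here)
    struct state_data
    {
        unsigned shared_count;
        bool exclusive;
        bool upgrade;
        bool exclusive_waiting_blocked;

        bool can_lock () const
        {
            // assumed semantics: lockable when no readers and no writer
            return ! (shared_count || exclusive);
        }
        void lock ()
        {
            exclusive = true;
        }
        unsigned unlock_shared ()
        {
            return --shared_count;
        }
        // ... remaining policy operations ...
    };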

I have started today.

I'm not confident about this refactoring:

         void unlock_shared()
         {
             boost::unique_lock<boost::mutex> lk(state_change);
-            bool const last_reader=!--state.shared_count;
 
-            if(last_reader)
-            {
-                if(state.upgrade)
-                {
-                    state.upgrade=false;
-                    state.exclusive=true;
-                    upgrade_cond.notify_one();
-                }
-                else
-                {
-                    state.exclusive_waiting_blocked=false;
-                }
+            bool const last_reader = !state.unlock_shared();
+
+            if (last_reader) {
+                upgrade_cond.notify_one();

where

            unsigned unlock_shared () {
                bool const last_reader = !--shared_count;
           
                if (last_reader) {
                    if (upgrade) {
                        upgrade=false;
                        exclusive=true;
                    } else {
                        exclusive_waiting_blocked=false;
                    }
                }
           
                return shared_count;
            }

In the current code,

                upgrade_cond.notify_one();

is done only if !--state.shared_count and state.upgrade both hold.

After the patch,

                upgrade_cond.notify_one();

is done whenever !--state.shared_count holds, independently of the state of state.upgrade.
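Condensed to the notification condition alone, the change is:

    // before the patch: notify only when the last reader leaves
    // and an upgrader is waiting to be promoted
    if (last_reader && state.upgrade)
        upgrade_cond.notify_one();

    // after the patch: notify whenever the last reader leaves,
    // even when no upgrader is waiting
    if (last_reader)
        upgrade_cond.notify_one();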

I guess that unlock_shared() should instead return whether the mutex was downgraded:

            bool unlock_shared () {
                bool const last_reader = !--shared_count;
           
                if (last_reader) {
                    if (upgrade) {
                        upgrade=false;
                        exclusive=true;
                        return true;
                    } else {
                        exclusive_waiting_blocked=false;
                    }
                }
                return false;
            }

and the code is replaced by

+            bool const downgraded = state.unlock_shared();
+
+            if (downgraded) {
+                upgrade_cond.notify_one();


Actually it is a little bit worse than that; the solution should be something like

        void unlock_shared()
        {
            boost::unique_lock<boost::mutex> lk(state_change);
            state.unlock_shared();
            if (state.is_last_shared())
            {
              if (state.unlock_shared_downgrades())
              {
                upgrade_cond.notify_one();
              }
              release_waiters();
            }
        }

where

            bool is_last_shared () const
            {
                return !shared_count;
            }
            void unlock_shared ()
            {
                --shared_count;
            }
            bool unlock_shared_downgrades()
            {
                if (upgrade) {
                    upgrade=false;
                    exclusive=true;
                    return true;
                } else {
                    exclusive_waiting_blocked=false;
                    return false;
                }
            }
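For reference, release_waiters() above is the existing helper of the generic implementation that wakes any blocked exclusive and shared waiters (quoting from memory, so double-check against the current sources):

            void release_waiters()
            {
                exclusive_cond.notify_one();
                shared_cond.notify_all();
            }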


From what I have seen, the management of exclusive_waiting_blocked is not hidden by your patch.
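To hide it too, state_data would need intent-revealing operations around the flag, for instance something like this (hypothetical names, only to illustrate the idea):

            // hypothetical helpers; the names are illustrative only
            void exclusive_blocked (bool blocked)
            {
                exclusive_waiting_blocked = blocked;
            }
            bool can_lock_shared () const
            {
                // readers must also yield to a blocked writer
                return ! (exclusive || exclusive_waiting_blocked);
            }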

Another point: the following refactoring sets

  state.upgrade=false;

before being sure that it can move to the lock.

        void unlock_upgrade_and_lock()
        {
#if defined BOOST_THREAD_PROVIDES_INTERRUPTIONS
            boost::this_thread::disable_interruption do_not_disturb;
#endif
             boost::unique_lock<boost::mutex> lk(state_change);
-            --state.shared_count;
-            while(state.shared_count)
-            {
+
+            state.unlock_upgrade();
+            while (!state.can_lock())
                 upgrade_cond.wait(lk);
-            }
-            state.upgrade=false;
-            state.exclusive=true;
+           
+            state.lock();

In this case, while we are waiting, another thread could take the mutex with lock_upgrade, which should not be allowed.
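For reference, the pre-patch code avoids this because state.upgrade stays set for the whole wait; any state_data-based version would need to preserve that ordering. A sketch (the original body, with comments on the ordering):

        void unlock_upgrade_and_lock()
        {
#if defined BOOST_THREAD_PROVIDES_INTERRUPTIONS
            boost::this_thread::disable_interruption do_not_disturb;
#endif
            boost::unique_lock<boost::mutex> lk(state_change);
            --state.shared_count;      // stop counting as a reader,
            while(state.shared_count)  // but keep state.upgrade set so
            {                          // no other thread can take the
                upgrade_cond.wait(lk); // upgrade lock while we wait
            }
            state.upgrade=false;       // transfer ownership to the
            state.exclusive=true;      // exclusive lock only now
        }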


Vicente