On Sun, Mar 17, 2013 at 7:53 PM, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
After a deeper analysis, it seems to me that the encapsulated data and the condition variables are quite closely correlated; that is, changing the way the data is managed has an impact on which condition variable should be notified or waited on. For example, the implementation in boost/thread/v2/shared_mutex.hpp uses only two condition variables instead of the three in boost/thread/pthread/shared_mutex.hpp.

The pthread and win32 shared_mutex implementations are actually quite similar, both in their state parameterization (the pthread state_data is a subset of the win32 one) and in their use of three condition variables/semaphores. I therefore believe they can be refactored to share significant parts of their locking-policy code.
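For reference, here is my (simplified) reading of how the pthread implementation expresses that state and achieves writer priority. This is paraphrased from boost/thread/pthread/shared_mutex.hpp rather than copied verbatim, the class name is mine, and the unlock and upgrade paths are omitted:

#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>
#include <boost/thread/locks.hpp>

class pthread_style_shared_mutex  // illustrative name, not from Boost
{
    struct state_data
    {
        unsigned shared_count;
        bool exclusive;
        bool upgrade;
        bool exclusive_waiting_blocked;
        state_data()
          : shared_count(0), exclusive(false),
            upgrade(false), exclusive_waiting_blocked(false) {}
    };

    state_data state;
    boost::mutex state_change;
    boost::condition_variable shared_cond;
    boost::condition_variable exclusive_cond;
    // (the real implementation also has an upgrade_cond)

public:
    void lock_shared()
    {
        boost::unique_lock<boost::mutex> lk(state_change);
        // New readers are held back as soon as a writer is queued.
        while (state.exclusive || state.exclusive_waiting_blocked)
            shared_cond.wait(lk);
        ++state.shared_count;
    }

    void lock()
    {
        boost::unique_lock<boost::mutex> lk(state_change);
        while (state.shared_count || state.exclusive)
        {
            // This flag is what gives a waiting writer priority
            // over newly arriving readers.
            state.exclusive_waiting_blocked = true;
            exclusive_cond.wait(lk);
        }
        state.exclusive = true;
    }

    // The unlock paths clear exclusive_waiting_blocked and notify
    // both condition variables; they are left out here for brevity.
};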

v2/shared_mutex does indeed appear to be quite different. From what I can see, it lacks the exclusive_waiting_blocked field, which makes me question whether it can prioritize write-lock requests over read-lock requests the way the pthread and win32 equivalents do. What are your thoughts on this?
 
Could you tell me what the added value of doing this refactoring would be?

My motivations for the patch:
1: Separate the locking "policy" implementation from the locking "management". This should give simpler and more maintainable code.
2: Provide a step towards a shared locking-policy implementation for the pthread and win32 shared_mutex, so that they behave consistently (even though win32 uses semaphores and atomics). This is not unrealistic, since the pthread state_data is currently a subset of the win32 state_data; the two can therefore be harmonized either by extending the pthread state_data or by cutting down the win32 variant.
3: Simplify future extension of shared_mutex with a state_data template parameter for a reader/writer/unspecified priority policy. Such an extension is much simpler if the locking policy is split into a separate class (a rough sketch follows below).
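To make points 1-3 a bit more concrete, here is a rough sketch of the kind of split I have in mind. All the names (writer_priority_state, basic_shared_mutex, the can_*/did_* hooks) are placeholders for illustration only, not the names used in the patch, and upgrade support is left out:

#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>
#include <boost/thread/locks.hpp>

// The "policy" part: plain state plus the decisions that depend on it.
// A reader-priority or unspecified-priority policy would expose the
// same interface with different state and predicates.
struct writer_priority_state
{
    unsigned shared_count;
    bool exclusive;
    bool exclusive_waiting_blocked;

    writer_priority_state()
      : shared_count(0), exclusive(false), exclusive_waiting_blocked(false) {}

    bool can_lock_shared() const
    { return !exclusive && !exclusive_waiting_blocked; }
    void did_lock_shared()   { ++shared_count; }
    bool did_unlock_shared() { return --shared_count == 0; } // true if last reader

    bool can_lock() const    { return shared_count == 0 && !exclusive; }
    void writer_blocked()    { exclusive_waiting_blocked = true; }
    void did_lock()          { exclusive = true; }
    void did_unlock()        { exclusive = false; exclusive_waiting_blocked = false; }
};

// The "management" part: blocking and notification only.
// It never looks at the state fields directly.
template <class StatePolicy>
class basic_shared_mutex
{
    StatePolicy state;
    boost::mutex state_change;
    boost::condition_variable shared_cond;
    boost::condition_variable exclusive_cond;

public:
    void lock_shared()
    {
        boost::unique_lock<boost::mutex> lk(state_change);
        while (!state.can_lock_shared())
            shared_cond.wait(lk);
        state.did_lock_shared();
    }

    void unlock_shared()
    {
        boost::unique_lock<boost::mutex> lk(state_change);
        if (state.did_unlock_shared())
        {
            exclusive_cond.notify_one();
            shared_cond.notify_all();
        }
    }

    void lock()
    {
        boost::unique_lock<boost::mutex> lk(state_change);
        while (!state.can_lock())
        {
            state.writer_blocked();
            exclusive_cond.wait(lk);
        }
        state.did_lock();
    }

    void unlock()
    {
        boost::unique_lock<boost::mutex> lk(state_change);
        state.did_unlock();
        exclusive_cond.notify_one();
        shared_cond.notify_all();
    }
};

Whether the win32 management side (semaphores plus an atomically updated state word) can be expressed against exactly this interface is of course an open question; the sketch is only meant to show the direction.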
 
BTW, I had in mind to split the shared_mutex class into shared_mutex and an upgrade_mutex, following the design in boost/thread/v2/shared_mutex.hpp. This has the advantage of making it possible to improve the performance of shared_mutex, as there are fewer constraints.

I agree that such a split is (also) desirable, and it can be combined with separation of locking policy from management.
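If the hypothetical policy split from the sketch above were in place, I imagine the shared_mutex/upgrade_mutex split could largely fall out of it as two policy choices, for example:

// Purely illustrative, reusing the placeholder names from the sketch above.
// A shared-only policy needs no upgrade bookkeeping, which is presumably
// where the extra performance headroom for plain shared_mutex comes from.
typedef basic_shared_mutex<writer_priority_state> shared_mutex;
// typedef basic_shared_mutex<writer_priority_upgrade_state> upgrade_mutex;
//   (a hypothetical extended policy carrying the upgrade flag and its waiters)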

Regards,
Fredrik