Subject: Re: [Boost-users] [Interprocess] hang locking p_hdr->m_mutex
From: Ion Gaztañaga (igaztanaga_at_[hidden])
Date: 2011-08-28 12:08:31
On 27/08/2011 1:25, David Byron wrote:
> On 8/26/2011 9:56 AM, Ion Gaztañaga wrote:
>> On 26/08/2011 14:40, David Byron wrote:
>>
>>> If CreateMutex behaves the "right way" on windows, does it make sense to
>>> have the behavior differ across platforms?
>>
>> Portability is the most important goal for Interprocess :(
>
> Makes sense. I'm not hell-bent on changing it. I'd love to use it just
> as it is. I just can't figure out how to do it safely given that a
> process might die while holding the interprocess_mutex. I could easily
> be missing something. If someone could tell me whether that's the case,
> I'd be eternally grateful.
I'm integrating a patch kindly sent by Ross MacGregor that activates a
timeout when locking, enabled by a define. If a mutex can't be locked
within X milliseconds, a special exception is thrown. You should then
erase that resource (message queue or whatever), as it was likely left
corrupted by the crashed process.
I hope we can put this in Boost 1.48.
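For anyone who needs to cope with this before that patch lands, here is a
minimal sketch of the recovery pattern described above, not the patch
itself. It assumes the timeout surfaces as
boost::interprocess::interprocess_exception, and that the enabling macros
are BOOST_INTERPROCESS_ENABLE_TIMEOUT_WHEN_LOCKING and
BOOST_INTERPROCESS_TIMEOUT_WHEN_LOCKING_DURATION_MS as in later Boost
releases; check the names against your version. The queue name and sizes
are made up:

// Define these project-wide (before any Interprocess header is included)
// to enable timed locking; macro names assumed from later releases:
//   BOOST_INTERPROCESS_ENABLE_TIMEOUT_WHEN_LOCKING
//   BOOST_INTERPROCESS_TIMEOUT_WHEN_LOCKING_DURATION_MS=5000
#include <boost/interprocess/ipc/message_queue.hpp>
#include <boost/interprocess/exceptions.hpp>
#include <iostream>

namespace bip = boost::interprocess;

int main()
{
   const char *queue_name = "demo_queue";   // hypothetical name

   try {
      bip::message_queue mq(bip::open_or_create, queue_name, 100, 256);

      char buffer[256];
      bip::message_queue::size_type recvd_size = 0;
      unsigned int priority = 0;
      mq.receive(buffer, sizeof(buffer), recvd_size, priority);
   }
   catch (const bip::interprocess_exception &e) {
      // If another process died while holding the internal mutex, a timed
      // lock gives up here instead of hanging forever. The queue state may
      // be corrupt, so erase the resource as suggested above.
      std::cerr << "receive failed: " << e.what() << '\n';
      bip::message_queue::remove(queue_name);
      return 1;
   }
   return 0;
}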
>> And CreateMutex needs a name, you can't construct a named mutex in
>> shared memory, both are different beasts.
>
> From http://msdn.microsoft.com/en-us/library/ms682411%28v=vs.85%29.aspx:
>
> "Multiple processes can have handles of the same mutex object, enabling
> use of the object for interprocess synchronization."
>
> and then:
>
> "A process can specify a named mutex in a call to the OpenMutex or
> CreateMutex function to retrieve a handle to the mutex object."
>
> The name of the message queue seems OK, perhaps beginning with "Global\"
> on some versions of Windows.
>
> So I still think windows mutexes would work.
I repeat: you still have lifetime issues, so we can't keep message
queue lifetime semantics and use a plain CreateMutex.
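To make the trade-off concrete, here is a Windows-only sketch (plain Win32,
not Interprocess code) of the CreateMutex approach under discussion; the
mutex name is hypothetical. WAIT_ABANDONED is the behaviour David is after:
if the owning process dies, the next waiter is released instead of hanging.
The lifetime issue is that the kernel destroys a named mutex when the last
handle to it closes, so its lifetime follows open handles rather than the
shared memory segment that holds the message queue data, which is why it
can't simply replace the mutex embedded in the queue header.

#include <windows.h>
#include <iostream>

int main()
{
   // Named kernel mutex, shared across processes by name (name is made up).
   HANDLE h = ::CreateMutexA(NULL, FALSE, "Global\\demo_mq_mutex");
   if (!h) return 1;

   DWORD r = ::WaitForSingleObject(h, 5000);   // wait up to 5 seconds
   if (r == WAIT_OBJECT_0 || r == WAIT_ABANDONED) {
      if (r == WAIT_ABANDONED)
         std::cerr << "previous owner died; shared data may be inconsistent\n";
      // ... touch the protected resource here ...
      ::ReleaseMutex(h);
   }
   ::CloseHandle(h);
   return 0;
}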
Best,
Ion