Boost Users:
From: Lars Hagström (lars_at_[hidden])
Date: 2007-01-31 10:02:24
I tried it on another Gentoo box, which is not a VMware machine, and I
get the exact same result.
Cheers
Lars
Lars Hagström wrote:
> (oops, didn't send to the list, just to Ion. Sorry!)
>
> Okay, examples are always the best way to explain things... Should have
> gone that way from the start :)
>
> Receiver code (complete project is attached):
>
> #include <boost/interprocess/sync/named_semaphore.hpp>
> #include <iostream>
> #include <unistd.h>
>
> int main()
> {
>     std::cout << "Creating or opening semaphore" << std::endl;
>     boost::interprocess::named_semaphore
>         connectSem(boost::interprocess::open_or_create,
>                    "semtest-connect", 0);
>
>     std::cout << "Got semaphore, simulating shared memory setup "
>               << "(by sleeping 5 seconds)" << std::endl;
>     sleep(5);
>
>     // Shared memory is now "ready"; let the first sender in.
>     std::cout << "Done, allowing senders to connect "
>               << "(by posting semaphore)" << std::endl;
>     connectSem.post();
>
>     std::cout << "Continuing with simulated work" << std::endl;
>     for (;;)
>         sleep(1);
> }
>
> Sender code (also attached):
>
> #include <boost/interprocess/sync/named_semaphore.hpp>
> #include <iostream>
> #include <unistd.h>
>
> int main()
> {
>     std::cout << "Creating or opening semaphore" << std::endl;
>     boost::interprocess::named_semaphore
>         connectSem(boost::interprocess::open_or_create,
>                    "semtest-connect", 0);
>
>     // Block until the receiver (or the previous sender) posts.
>     std::cout << "Waiting on semaphore" << std::endl;
>     connectSem.wait();
>
>     std::cout << "Got semaphore, simulating some connection "
>               << "work (by sleeping 5 seconds)" << std::endl;
>     sleep(5);
>
>     // Let the next sender in.
>     std::cout << "Done, posting semaphore" << std::endl;
>     connectSem.post();
>
>     std::cout << "Continuing with simulated work" << std::endl;
>     for (;;)
>         sleep(1);
> }
>
> If I start off with no semaphore on disk, these programs work fine; it
> does not matter whether I start the sender or the receiver first.
> If I then kill the programs (and leave the semaphore on disk), it only
> works as long as I start the receiver before the senders.
> This is because the semaphore is left at 1 by the last sender to
> connect in the first run of the programs. When a sender starts (before
> the receiver) in a second run, its wait returns immediately (the
> semaphore is still at 1), so it thinks the receiver is already done
> with its setup.
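>
> (Just as an illustration, and not part of the attached project: a quick
> way to check whether the leftover count is non-zero would be something
> like this, using try_wait.)
>
> #include <boost/interprocess/sync/named_semaphore.hpp>
> #include <iostream>
>
> int main()
> {
>     // Open the leftover semaphore, or create it with a count of 0.
>     boost::interprocess::named_semaphore
>         sem(boost::interprocess::open_or_create, "semtest-connect", 0);
>
>     if (sem.try_wait())   // succeeds only if the stale count is > 0
>     {
>         std::cout << "stale count > 0: a sender would not block" << std::endl;
>         sem.post();       // restore the count we just consumed
>     }
>     else
>     {
>         std::cout << "count is 0: a sender would block as intended" << std::endl;
>     }
> }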
>
> If I try to use named_semaphore::remove on the semaphore before
> open_or_create-ing it (in both sender and receiver), the program starts
> behaving correctly on Windows (since the remove will fail if any
> process has the semaphore open), but not on Linux (since the semaphore
> will be unlinked and the next process to open_or_create it will then
> create a completely new semaphore).
> I've tried to think of a combination of create_only and open_only
> strategies that would make this work, but I can't think of one.
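>
> (For reference, the remove-based variant I mean looks roughly like
> this; a simplified sketch, not the exact attached code.)
>
> #include <boost/interprocess/sync/named_semaphore.hpp>
>
> int main()
> {
>     // Try to clean up a leftover semaphore before opening it. On
>     // Windows the remove fails while any process still has the
>     // semaphore open, so a live semaphore keeps its count. On Linux
>     // the name is simply unlinked, so the next open_or_create builds
>     // a brand-new semaphore and the processes can end up waiting on
>     // different semaphores.
>     boost::interprocess::named_semaphore::remove("semtest-connect");
>
>     boost::interprocess::named_semaphore
>         connectSem(boost::interprocess::open_or_create,
>                    "semtest-connect", 0);
>
>     // ... rest of the sender/receiver logic as above ...
> }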
>
> When I've tried (for example) ACE semaphores, I always get a semaphore
> that is initialized to 0 when an unused one (one that no process has a
> handle to) is re-opened. So in the above example, when a sender is
> started with a semaphore on disk (whose last value was 1), the
> semaphore gets initialized to 0, and the wait blocks until a receiver
> posts on the semaphore.
>
> My tests have been performed on Windows XP with VC80 and on Gentoo
> Linux (kernel 2.6.18-r3) with gcc-4.1.1. One thing that I hope is not
> confusing the issue is that the Linux system runs inside VMware.
>
> I hope I am clearer this time...
>
> Cheers
> Lars
>
> Ion Gaztañaga wrote:
>> Lars Hagström wrote:
>>> Hi,
>>>
>>> I am trying to use Boost.Interprocess to get N processes (senders) to
>>> send data to one receiver through shared memory.
>>> The receiver has to be the one that initializes the shared memory, so
>>> no senders can be "let in" before the receiver has completed its
>>> initialization. I can't guarantee the order in which the processes
>>> start.
>>> To do this I have both the receiver and the senders open_or_create a
>>> named_semaphore with 0 as the initial value.
>>> The senders then immediately do a wait on the semaphore, but the
>>> receiver goes ahead and initializes the memory and then does a post on
>>> the semaphore. This lets the first sender in, which can then initialize
>>> itself, do a post on the semaphore, and start running. The second
>>> sender is then let in (because of that post) and can get going, and
>>> so on.
>> I understand.
>>
>>> This approach works very well if the semaphore has been manually deleted
>>> from /dev/shm/ or c:\temp\boost_interprocess\, but will not work if
>>> there is an old semaphore around.
>> I understand that the semaphore will still have value 0, and the other
>> senders will continue to work fine. Unless a sender crashes during
>> initialization and can't post the semaphore, of course. In that case
>> you are lost.
>>
>> When do you have problems? When launching another receiver after the
>> first one crashes?
>>
>>> I can get around this on Windows by doing a named_semaphore::remove,
>>> since that will fail if a process has got it loaded. But this same code
>>> will not work on Linux, since the semaphore will be unlinked, and then
>>> all the processes will be able to delete the semaphore and will end up
>>> using different semaphores!
>> I don't understand what you mean here.
>>
>>> I am used to semaphores working so that if one is opened when no one
>>> has it loaded, it will be initialized to some known value, but this
>>> does not appear to be the case with Boost.Interprocess semaphores?
>> Still lost. In theory, if the semaphore is created you initialize it
>> with a value; if it's already created you get the old value. What do
>> you mean by "loaded"?
>>
>> I'm afraid I need a bit more information to help. I know I am missing
>> something ;-)
>>
>> Regards,
>>
>> Ion
>
>
>