
From: Zeljko Vrba (zvrba_at_[hidden])
Date: 2007-10-08 15:09:56


On Mon, Oct 08, 2007 at 02:27:53PM -0400, Scott Gifford wrote:
>
> It's not OK in my case, but according to pthread_signal(3), Linux
> threads will do the right thing accidentally (each thread has its own
> PID, so if I signal the PID the server had when it started, that will
> always be the first thread). Still, the point of using boost::thread
> is portability, so I should probably figure out something more robust.
>
NPTL implements signals differently. When you send a signal, either with
kill(2) or with ^C on the controlling terminal (which ultimately also goes
through kill(2); pthread_kill(2) cannot be used from outside the process),
SIGINT is delivered to the process _as a whole_. In that case, as POSIX
prescribes, an arbitrary thread that does not have the signal blocked is
picked to handle it.
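
If you want the same behaviour everywhere, the usual POSIX technique is to
block the signal in every thread and dedicate one thread to sigwait(), so it
no longer matters which thread the kernel would otherwise pick. A minimal
sketch (my own code, not from the original thread, assuming a POSIX platform):

// Block SIGINT everywhere and handle it in one dedicated thread.
#include <pthread.h>
#include <signal.h>
#include <cstdio>

static void* signal_thread(void* arg)
{
    sigset_t* set = static_cast<sigset_t*>(arg);
    int sig = 0;
    sigwait(set, &sig);              // blocks until SIGINT arrives
    std::printf("got signal %d, shutting down\n", sig);
    // ... tell the worker threads to stop (e.g. push "quit" messages) ...
    return 0;
}

int main()
{
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGINT);
    // Block SIGINT in the main thread *before* creating any other threads;
    // threads created afterwards inherit this signal mask.
    pthread_sigmask(SIG_BLOCK, &set, 0);

    pthread_t tid;
    pthread_create(&tid, 0, signal_thread, &set);

    // ... create worker threads and do the real work here ...

    pthread_join(tid, 0);
}

Because the mask is installed before any threads are created, the
boost::threads you spawn afterwards inherit it and are never interrupted by
SIGINT themselves.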

As for portability - what platforms do you care about? If it's only
POSIX and Win32, and you don't need a Win32 GUI, I suggest you look
into Cygwin or Microsoft's SFU (Services for UNIX), which is also
free (of charge). Personally I prefer the latter and, unless I'm
mistaken, it has received UNIX certification.

Yes, according to

http://en.wikipedia.org/wiki/POSIX

an NT kernel + SFU is fully POSIX compliant. And you get gcc in the
package too :)

[No, I'm *not* an MS advocate. But I do recognize and recommend a quality
solution when I see one. And I *do* have a good opinion of SFU.]

>
> I was under the impression it was necessary to avoid deadlocks.
>
Not deadlocks, but lost signals (i.e. race conditions).

>
> Basically: Thread 1 checks the condition before it is set, then Thread
> 2 notifies of a change before Thread 1 starts waiting, then Thread 1
> starts waiting for a change, but it will never see one, so it hangs
> forever.
>
Indeed, but that's a "lost signal", not a deadlock. Deadlock is a
completely different situation: either circular waiting on a chain of
locks, or a thread trying to lock a mutex that it _itself_ already
holds (as happens in your case; this is just a special case of
circular waiting).

>
> Is there some mechanism I'm not aware of that prevents this race from
> happening?
>
No. Hmm, it'd be best to avoid using shared data. Make a message queue
and send work to your worker threads as messages in the queue. If there
are no messages in the queue, a thread trying to read from it will just
sleep. When it is time to quit the application, just send N messages to
the queue (where N is the number of worker threads) and wait for them to
finish. [Thus, you can use the message queue as a condition variable with
"memory" - no signal (= message) ever gets lost.] Plus, message queues are
implicit synchronization points, so you don't need any additional mutexes
or CVs. I believe that POSIX MQs also support priorities, so your "quit"
messages could arrive ahead of any "normal" messages.
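
To make the idea concrete, here is a minimal in-process sketch of such a
queue (my own names and code, using Boost.Thread primitives). The mutex and
condition variable live inside the queue, and a message pushed before anyone
is waiting is never lost:

#include <boost/thread/mutex.hpp>
#include <boost/thread/condition.hpp>
#include <queue>

struct Message { bool quit; /* ... payload ... */ };

class MessageQueue {
public:
    void push(const Message& m) {
        boost::mutex::scoped_lock lock(mtx_);
        q_.push(m);
        cond_.notify_one();          // wake one sleeping consumer, if any
    }
    Message pop() {
        boost::mutex::scoped_lock lock(mtx_);
        while (q_.empty())           // predicate re-checked after every wakeup
            cond_.wait(lock);
        Message m = q_.front();
        q_.pop();
        return m;
    }
private:
    std::queue<Message> q_;
    boost::mutex mtx_;
    boost::condition cond_;
};

// Worker loop: exits when it receives a "quit" message. To shut down
// N workers, push N quit messages and then join the threads.
void worker(MessageQueue& mq) {
    for (;;) {
        Message m = mq.pop();
        if (m.quit) break;
        // ... process m ...
    }
}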

You might want to look into Boost.Interprocess for portable MQs (I haven't
personally used it so I have no idea what features it supports).
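
For what it's worth, based on its documentation the interface looks roughly
like this; this is an untested sketch, and the header path, types and queue
name are my assumptions:

// Rough sketch of boost::interprocess::message_queue usage.
#include <boost/interprocess/ipc/message_queue.hpp>

using namespace boost::interprocess;

void producer() {
    message_queue mq(open_or_create, "work_queue",
                     100,             // max number of messages
                     sizeof(int));    // max size of one message
    int work = 42;
    mq.send(&work, sizeof(work), 0);  // last argument is the priority
}

void consumer() {
    message_queue mq(open_or_create, "work_queue", 100, sizeof(int));
    int work;
    message_queue::size_type recvd;
    unsigned int priority;
    mq.receive(&work, sizeof(work), recvd, priority); // blocks if empty
}

As I understand it, higher-priority messages are delivered first, which would
give you the "quit messages overtake normal work" behaviour mentioned above.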

>
> > destructor for each) executing? Does your runtime library and/or compiler
> > guarantee that every global destructor is executed exactly once even in a
> > MT setting? (This sounds kinda the inverse of the threadsafe singleton
> > pattern.)
>
> I have no idea how I would go about looking for this guarantee, but it
> seems that if an environment doesn't provide it, it would be
> completely impossible to use global data reliably, making it too
> broken to use. I'm using g++ 4.1.2. Any pointers as to where to look
> for some sort of guarantee like this?
>
The best place would be the gcc mailing list.

