Boost :
From: williamkempf_at_[hidden]
Date: 2001-03-15 13:14:06
--- In boost_at_y..., terekhov_at_y... wrote:
> --- In boost_at_y..., williamkempf_at_h... wrote:
>
> > > interlocked (atomic) stuff is non-portable ("portable" version
> > > with mutex has completely different semantics)
> >
> > How are the semantics different? They pass the unit tests, which
> > should show the semantics to be the same. The usefulness of
> > the "portable version" is highly suspect, but many thought that
> > atomic operations were needed even if some platforms fell back on
> > the slower mutex implementation.
>
> interlocked calls (ops/instructions) are _usually_ used either
> in order to avoid relatively expensive synchronization or in order
> to implement synchronization primitives themselves - e.g.
> spinlocks/mutexes/semaphores/... (things like: load, calc_new,
> compare_and_swap - too late: try again..) and require careful
> use of other non-portable things such as memory barriers, etc..
That's why I said "the usefulness of the 'portable version' is
highly suspect". However, there are other uses for atomic integer
types. For a simple example, the ref-counting done in shared_ptr
needs to be thread safe. An atomic integer type allows this to be
done in an optimum fashion for platforms that support atomic
operations and in a fashion that's at least as fast as traditional
Thread Safe Object patterns for those platforms that do not have them.
I'm willing to remove atomic_t entirely, but for the reasons I just
gave many thought the type would be useful for inclusion.
> IMHO it is a misuse of concept to provide a library, which would
> just add full bloat lock/unlock to non-interlocked ops.
The goal is to provide implementations that DON'T do "full bloat
lock/unlock" whenever possible (which should be most of the time).
> > > POSIX CV impl. is incorrect (see comp.programming.threads)
> >
> > Care to explain this? comp.programming.threads is not much of a
> > pointer to find something like this, especially since you give no
> > clue as to why you find it "incorrect". Links to specific threads
> > would be more beneficial.
>
> ok. sorry. why I find it "incorrect" - link:
>
> http://sources.redhat.com/ml/pthreads-win32/2001/msg00015.html
>
> note: the fix is outdated; there are better solutions - see:
>
> specific threads:
>
> pthread_cond_wait() and WaitForSingleObject()
>
> and
>
> pthread_cond_* implementation questions
The implementation I used is a variation on the ACE implementation,
which is still considered the most accurate implementation available
on Win32. I'll address some points made in the link you gave above:
1) Spurious wakeups are allowed and expected. The documentation
clearly recommends usage of the Predicate version of waits and warns
that when not using them a loop needs to be used to deal with
spurious wakeups. Even the POSIX standard allows for such spurious
wakeups to occur with condition variables.
2) The unfairness is very closely related to spurious wakeups, and
is again accounted for both in the Boost.Threads documentation and in
the POSIX requirements for condition variables. In other words,
there's no guarantee of such fairness, and in practice it's not
needed.
I looked carefully at both the latest ACE and latest pthreads-win32
implementations during implementation here. Without knowing what
the "correct solution" is, I see little difference between the two.
I'll try to code up the example given in the link using Boost.Threads
to see if the reported problem is even reproducible, but if you've
got an implementation that's supposed to be more correct it would
save me time here.
> > > mutex impl. is looking quite strange (2 "real" mutexes + CV ??)
> >
> > This is simply necessary for ensuring either checked locking
> > semantics as found in boost::mutex (which is something I'm open to
> > discussing as to whether it should be checked instead of
> > unspecified) or recursive locking as found in
> > boost::recursive_mutex. Unless POSIX guaranteed such behavior in
> > some manner the "strange implementation" is necessary.
>
> the checks are only needed (if at all) in debug mode.
That's at least somewhat debatable. Many thread APIs perform these
checks anyway (pthreads specifically allows an implementation to do
so). The goal was to ensure that all implementations "did the same
thing" here, even though it added some overhead and complicated the
design. I'm willing to remove the checks for boost::mutex here... I
knew my decision would be controversial, I just thought it needed to
be brought out in discussion.
> POSIX threads standard (and coming SUS) does have recursive mutex.
It's my understanding that POSIX has only recently added the
recursive mutex to the standard and that implementations that
targeted an older version may not have them. Thus my implementation
was designed to hit the "lowest common denominator". However, I'm
not a pthreads expert and it's very likely that the pthreads
implementation is far from optimal.
> in general, i would suggest that you simply adopt POSIX threads
> programming model and just provide C++ wrappers on top of POSIX
> threads API (you may add things such as win events, etc on top
> as non-portable features) and spend some energy trying to bring
> pthreads-win32 (POSIX threads impl for win32) in "production
> ready" state so that the boost threads library would simply use
> pthreads-win32 under windows.
I don't agree with this approach at all. First, it violates the
Boost criterion of not being reliant on third-party libraries that
would need to be downloaded by users. Second, it adds another level
of overhead to the Win32 implementation which causes some noticeable
speed problems (though I must admit that it may be more due to my
pthreads implementation of Boost.Threads than to the extra level).
Third, the goal is for the interface to be implementable on numerous
platforms, many of which won't have pthreads support. By
implementing the Win32 version directly instead of using pthreads-
win32 I get some indication of whether or not that goal has been met.
Bill Kempf
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk