From: Darryl Green (green_at_[hidden])
Date: 2001-12-03 04:53:28
fwiw I once wrote a portable C++ threading library that dealt with a
fairly mixed set of underlying platforms by deciding that about as much
as could be standardised was the ability for a thread to block (I would
say suspend itself, but suspend has other connotations) and for any
other thread to allow a blocked thread to become runnable. Actually that
is a slight simplification: it was expedient to extend this to include a
timeout on the blocking, plus I used some form of critical section for
the library's "scheduler" itself and TLS to find the current/self thread
object. The library essentially did everything except schedule the
runnable threads; on one platform it was simply the library plus a
simple scheduler running as the kernel for an embedded system.
Conventional OSs don't really like to offer APIs that allow this level
of control directly, but by including a native synchronisation object as
part of the thread object and having the thread wait on that when it
needed to block, this facility was easily "emulated". The result was a
library that was really very easy to port (pretty much anything would do
for the synchronisation object, since there could only ever be one
thread waiting on it). No doubt this will cause the "but what about
efficiency" question to arise, but despite rumours to the contrary,
kernel writers still only get to have their code execute on the same
CPU(s) as the rest of us. Selecting the most efficient primitive to
schedule with was critical, but beyond that there wasn't a big issue. I
don't know if this is directly useful or comparable to what you are
trying to do in implementing fair mutexes etc. My objectives at the time
were probably somewhat different to the current Boost.Threads
objectives.
> -----Original Message-----
> From: Kevin S. Van Horn [mailto:kevin.vanhorn_at_[hidden]]
> Sent: Sunday, 2 December 2001 2:17 AM
> To: boost_at_[hidden]
> Subject: [boost] Belated reply on Boost.Threads
>
>
>
> William Kempf wrote:
>
> >> *** I have looked over the POSIX implementations of condition
> >> variables and the various Mutex types, and several of these are weak
> >> mutexes. ***
> >
> >From a documentation standpoint I can't really address this without
> >causing (theoretical at least) problems with portability. [...]
> >
> >> 2. Strong mutexes. [...]
> >
> >Again, I can't really mention this in the documentation.
> >
> >> 3. Fair mutexes. [...]
> >
> >Again, I don't think the documentation can do this without causing
> >problems with portability.
>
> I don't see that portability enters into the picture. Saying that a
> mutex is strong is a particular guarantee, and saying that it is weak
> is a lack of that guarantee; likewise, saying that it is fair is a
> particular guarantee, and saying that it is unfair is a lack of the
> same guarantee. So if you simply document that your mutexes are weak
> and unfair, there is no portability problem -- you've just refused to
> make certain guarantees, which gives you more implementation freedom.
>
> >However, starvation can occur even with "fair" mutexes (by one
> >definition of fair). POSIX mutexes, for instance, are open to
> >priority starvation problems if you aren't careful.
>
> By definition, "fair" means lack of starvation, assuming that every
> thread that acquires the mutex eventually releases it. This isn't just
> my definition; it's standard in the research literature.
>
> >> [...] "Users should assume that low-priority threads may wait
> >> indefinitely, [...]
> >> Comment: Does "indefinitely" mean "forever", or just "an
> >> unspecified but finite amount of time"? [...]
> >
> >It means what it says ;). There is no "forever" since the program
> >will execute in a finite amount of time, but the distinction isn't
> >worth arguing about. You can't write reasonable code regardless of
> >whether it's "forever" or "an unspecified but finite amount of time".
>
> You should read Lamport's classic paper, "'Sometime' is sometimes 'not
> never'"
> (http://research.microsoft.com/users/lamport/pubs/pubs.html#sometime).
> Some programs (e.g., servers) are intended to run indefinitely, and so
> are best analyzed as if they were going to run forever, even though we
> know that eventually (perhaps years from now) they will terminate. As
> to whether a notion of "eventually" without a specific bound is
> useful, computer scientists have in fact found it to be so for
> decades. One views lack of starvation as a correctness requirement,
> and the wait time as a performance measure. Obviously, you have to
> have reasonable performance, but the first step with any program is to
> make sure it is correct, i.e., does what is specified! Bounds on wait
> times are generally not specified because the analysis required to
> guarantee the bounds is too complicated, too platform-dependent, and
> too dependent on the vagaries of the thread/process scheduling
> algorithm. In summary, "fairness" (lack of starvation) is a property
> one may wish to guarantee without giving specific bounds on the wait
> time, and making sure the wait time is reasonable is then a
> quality-of-implementation issue.
> ---
>
> Let me move on to the issue of the current weak, unfair mutexes.
> Alexander Terekhov has pointed out that one might prefer weak mutexes
> over strong in order to allow the implementation to minimize context
> switches, and has pointed out situations in which unfair mutexes are
> OK (functionally identical threads), so I now think the problem isn't
> as bad as it at first appeared.
>
> I've been looking at how to make timed_mutex fair, assuming that the
> underlying mutexes and condition variables are fair. So far, my
> solutions have all had subtle bugs. The strategy I've been pursuing is
> to use aging. The idea is that the longer a thread waits, the higher
> some measure of priority becomes. At some point, if a thread
> determines that it has waited long enough and has the highest
> priority, then in the loop
>
>     while (m_locked) {
>         res = pthread_cond_wait(&m_condition, &m_mutex);
>     }
>
> it sets a flag indicating that it has "claimed" the mutex the next
> time it becomes available, and waits on a condition variable
> m_my_turn. (Obviously, it must not claim the next use of the mutex if
> the flag is already set.) The unlock method signals m_my_turn if the
> flag is set; otherwise it signals m_condition.
>
> Another approach is to have each waiting thread have its own condition
> variable as a local variable, and push onto a list for the mutex a
> pair consisting of a pointer to its condition variable and a number
> indicating the time at which it started waiting. The mutex unlock
> operation would then take a look at the list and decide who to signal.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk