From: terekhov_at_[hidden]
Date: 2001-11-14 13:35:09
--- In boost_at_y..., "Kevin S. Van Horn" <kevin.vanhorn_at_n...> wrote:
> On Wed, 14 Nov 2001, Alexander Terekhov wrote:
>
> > Ah, in other words, you want some threads to be given
> > "equitable" shares of everything they compete (mtx:lock,
> > condvar:wait&lock) for, right?
>
> No. You misunderstand what "fairness" means.
>
> > "Fairness" is relative, subjective, and
> > usually isn't actually what anyone wants when they ask for it.
>
> No, there is a precise technical meaning of "fairness" in the
> literature on correctness proofs for concurrent programs.
How about posting a URL?
> "Fairness" in this sense has nothing to do with getting a "fair"
> share of anything; it merely means lack of starvation, i.e., getting
> a *nonzero* share. If threads A and B are in loops competing for a
> mutex and thread A gets the mutex 10 times for every time B gets it,
> this is still fair. It becomes unfair if A manages to completely
> crowd out B so that B *never* gets the mutex again.
Are you saying that if A gets the mutex 100000000000000
times for every time B gets it, it would still be "fair"?
> You keep on claiming that fairness is inefficient
Butenhof (on c.p.t) keeps on claiming it. I agree, however.
> because of context
> switches. That may be an argument against strong mutexes, but as I
> mentioned in another message, it is possible to have weak but fair
> mutexes
hmm. You wrote:
"Note that mutexes that use FIFO scheduling are fair"
Why do you think so?
a) in POSIX, FIFO/RR priority scheduling only works
on uniprocessors ("scheduling allocation domains of
size equal to one"; "Conforming implementations on
multi-processors may map all or any subset of the
CPUs to one or multiple scheduling allocation
domains"; "The choice of scheduling allocation
domain size and the level of application control
over scheduling allocation domains is implementation-
defined. Conforming implementations may change
the size of scheduling allocation domains and
the binding of threads to scheduling allocation
domains at any time."); also, take a look at:
http://groups.google.com/groups?as_umsgid=32D116BE.31DF%40zko.dec.com
b) in POSIX, mutexes do not hand off lock ownership
to a selected waiter on unlock.
c) I have yet to see a mutex which would guarantee
starvation-free access to its "queue" of waiters.
AFAICT, without such a guarantee, it does not really
matter whether you hand off lock ownership to
a selected waiter or not.
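For what a hand-off design could look like, here is a minimal
ticket-style FIFO lock built from a POSIX mutex and condition variable
(the names fifo_mutex_t etc. are mine, purely illustrative). Note that
it does not escape point c): taking a ticket itself contends on the
internal pthread mutex, so the queue-entry step is only as
starvation-free as that underlying mutex.

```c
/* Sketch of a FIFO "ticket" lock: unlock hands ownership to the
 * longest-waiting ticket holder, in strict ticket order. */
#include <pthread.h>

typedef struct {
    pthread_mutex_t m;            /* protects the ticket counters */
    pthread_cond_t  cv;
    unsigned long   next_ticket;  /* next ticket to hand out */
    unsigned long   now_serving;  /* ticket currently allowed in */
} fifo_mutex_t;

void fifo_mutex_init(fifo_mutex_t *f)
{
    pthread_mutex_init(&f->m, NULL);
    pthread_cond_init(&f->cv, NULL);
    f->next_ticket = f->now_serving = 0;
}

void fifo_mutex_lock(fifo_mutex_t *f)
{
    pthread_mutex_lock(&f->m);
    unsigned long my = f->next_ticket++;   /* take a place in line */
    while (my != f->now_serving)           /* wait until it is our turn */
        pthread_cond_wait(&f->cv, &f->m);
    pthread_mutex_unlock(&f->m);
}

void fifo_mutex_unlock(fifo_mutex_t *f)
{
    pthread_mutex_lock(&f->m);
    ++f->now_serving;                      /* hand off to the next ticket */
    pthread_cond_broadcast(&f->cv);        /* wake waiters; only the one
                                              holding now_serving proceeds */
    pthread_mutex_unlock(&f->m);
}
```

The broadcast wakes every waiter just to let one proceed (a thundering
herd), which is part of the efficiency cost of strict FIFO ordering
being argued about in this thread.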
regards,
alexander.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk