From: William Kempf (sirwillard_at_[hidden])
Date: 2000-09-07 13:55:54
--- In boost_at_[hidden], "Bill Wade" <bill.wade_at_s...> wrote:
> > From: William Kempf [mailto:sirwillard_at_m...]
> >
> > A corner case indeed! It's very rarely a good idea to use node level
> > locking here. The granularity is much too small and will lead to
> > very poor performance.
>
> In practice you'd lock enough nodes at once (or make node processing
> expensive enough) that the amortized cost of the locks was less than
> the advantage gained by allowing multiple processors to attack the
> problem.
>
> If you have just one processor and are using MT because it is a
> good way to express the solution to the problem, it becomes much
> more likely that locking the entire list will be appropriate.
One of us isn't being clear enough. If all you're doing is traversing
the list, then all you're doing is reading, in which case the optimal
solution isn't node-level locking but a list-level read lock. Even
if, on a multiple-processor machine, you could convince me that node
locking were faster, we're only reading here, so the race condition I
mentioned is irrelevant and the locks need not overlap.
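
A minimal sketch of what I mean (illustrative names only, written
against modern C++'s std::shared_mutex rather than any actual
Boost.Threads interface): traversal takes a shared (read) lock on the
whole list, so any number of readers can proceed concurrently.

    #include <list>
    #include <mutex>
    #include <shared_mutex>

    // Illustrative wrapper: one reader/writer lock guards the entire list.
    // Readers (traversals) share the lock; structural writers exclude everyone.
    template <typename T>
    class shared_list {
    public:
        template <typename Fn>
        void for_each(Fn fn) const {
            std::shared_lock<std::shared_mutex> read_lock(mutex_); // shared (read) lock
            for (const T& item : items_)
                fn(item);
        }

        void push_back(const T& value) {
            std::unique_lock<std::shared_mutex> write_lock(mutex_); // exclusive (write) lock
            items_.push_back(value);
        }

    private:
        mutable std::shared_mutex mutex_;
        std::list<T> items_;
    };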
Now, if you're modifying the data contained in the list, the lock
more properly belongs on the data object, not the node, and the race
condition is again irrelevant.
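
Again a sketch only (the type and names are mine, purely for
illustration): the element carries its own mutex guarding its
payload, so a thread holding the list's read lock can still update an
individual value safely without touching the list structure.

    #include <mutex>

    // Illustrative element type: the mutex protects the value the element
    // holds, not the list structure, so updates need no node-level locking.
    struct counter_element {
        std::mutex data_mutex;  // guards 'value' only
        int value = 0;

        void increment() {
            std::lock_guard<std::mutex> guard(data_mutex);
            ++value;
        }
    };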
Now, if you're inserting or removing nodes, the race condition is
important, but only very rare uses of this would benefit in speed
from node-level locking, multiple CPUs or not. Why? Because either
the operation is so simple that a lock won't be held long enough to
cause any real contention, or it's complicated enough that you'll
have to lock several nodes in order to ensure you don't cause a race
condition with other insertions/deletions that are occurring. In
either case, a single list-level lock is usually (not always) faster.
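
For comparison, here's what the simple approach looks like (again an
illustrative sketch, not a Boost interface): structural changes take
one exclusive list-level lock, and the critical section is short
enough that contention on that single lock is normally cheaper than
acquiring (and correctly ordering) several node-level locks.

    #include <list>
    #include <mutex>
    #include <shared_mutex>

    // Illustrative sketch: one exclusive lock covers the whole structural change.
    template <typename T>
    class locked_list {
    public:
        void insert_sorted(const T& value) {
            std::unique_lock<std::shared_mutex> write_lock(mutex_);
            auto pos = items_.begin();
            while (pos != items_.end() && *pos < value)
                ++pos;
            items_.insert(pos, value);
        }

        void erase_all(const T& value) {
            std::unique_lock<std::shared_mutex> write_lock(mutex_);
            items_.remove(value);  // removes every element equal to 'value'
        }

    private:
        std::shared_mutex mutex_;
        std::list<T> items_;
    };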
That's why I call it a corner case. There are some very rare
situations in which you can eke out a little better performance. But
the parameters have to be just so (multiple CPUs, long transaction,
minimal number of locks needing to be acquired to prevent race
conditions, etc.) in order for this to be used efficiently. It's
going to be very rare that you've got something that fits this bill,
let alone fits it often enough to warrant this level of optimization.
If we didn't allow for this, we'd be wrong. But we do. We just make
it slightly more cumbersome to do (for the programmer, not for the
compiler). Non-issue, AFAIC.
Bill Kempf