From: Jens Maurer (Jens.Maurer_at_[hidden])
Date: 2001-03-26 14:50:38
williamkempf_at_[hidden] wrote:
> [Jens Maurer wrote:]
> > - I believe we need the possibility to specify the desired features
> > of a mutex in a much more fine-grained manner.
> See my reply to another response to this. I'm not at all sure that I
> agree with a generative approach and would rather see individual
> types defined.
The suggestions I saw weren't very nice and were possibly hard to implement,
but
    mutex_generator<recursive|timedlock>
looks ok to me (with recursive and timedlock being enums or ints used as
a bitmask). It's easy to remember and order-invariant.
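Something along these lines is what I have in mind. This is only a rough
sketch; the flag names, the placeholder implementation classes and the
generator template itself are made up for illustration, not a concrete
proposal:

    // Illustrative sketch only: none of these names exist in the current
    // Boost.Threads code.
    enum mutex_feature
    {
        recursive = 1 << 0,  // the owning thread may lock the mutex again
        timedlock = 1 << 1   // a timed lock operation is provided
    };

    // Placeholder classes standing in for the real implementations.
    struct plain_mutex_impl {};
    struct recursive_mutex_impl {};
    struct recursive_timed_mutex_impl {};

    // Primary template: plain mutex, no extra features.
    template<unsigned Features> struct mutex_generator
    {
        typedef plain_mutex_impl type;
    };

    // Specializations map each feature combination to an implementation.
    template<> struct mutex_generator<recursive>
    {
        typedef recursive_mutex_impl type;
    };
    template<> struct mutex_generator<recursive | timedlock>
    {
        typedef recursive_timed_mutex_impl type;
    };

    int main()
    {
        // Order-invariant: both lines name the same type, because the
        // flags are OR'ed into a single bitmask before the lookup.
        mutex_generator<recursive | timedlock>::type m1;
        mutex_generator<timedlock | recursive>::type m2;
        (void)m1; (void)m2;
        return 0;
    }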
> BTW, a recursive type created with pthread libraries
> should not be allowed to be used with boost::condition since it can
> not be used safely (pthreads leaves the behavior as undefined if you
> use a recursive mutex with a condition variable when the mutex has
> been locked recursively).
So there's another feature to choose from: usable_with_condition_variable
[Interfaces vs. performance]
> Unless you think the interface prevents creating
> efficient implementations
So I believe.
> > Note how boost::mutex is 3x slower than the native implementation,
> > because it's emulating recursive mutexes with a condition variable.
>
> boost::mutex does not emulate a recursive mutex. The condition
> variable exists only because of the timedlock.
Sorry for the misinterpretation, but my main point is that there *is*
a condition variable, and that is what makes things slow.
> > Also note that the pimpl-wrapping in boost::fast_mutex costs about
> > 10% performance.
>
> I don't see this evidenced in your timings above. How did you figure
> this out?
Your fast_mutex takes 0.25 usec, while even the slowest native mutex
takes only 0.22 usec. That's a difference of about 10% or even more
to me ((0.25 - 0.22) / 0.22 is roughly 14%).
> Several people have complained about the use of pimpl.
Agreed about postponing the pimpl issue.
> You must not be looking at the latest upload, which makes me question
> your timings above. The implementation changed drastically for
> pthreads and so they may (likely will) run much faster if you truly
> were using an older implementation. As for bool conversions, they've
> long since been replaced by void* conversions.
I'm confused. I've looked at
http://groups.yahoo.com/group/boost/files/threads/threads.zip
dated 2001-03-22, size 62080 bytes, which seems to be the latest
as of this writing. boost/thread/xlock.hpp contains three operator bool()
functions.
Here's an update with recursive mutexes (times in microseconds):
boost::fast_mutex: 0.243522
boost::mutex: 0.672946
boost::recursive: 0.687127
pthreads.fast: 0.210987
pthreads.normal: 0.196872
pthreads.recursive: 0.22889
pthreads.errorcheck: 0.213668
> > - The timedlock interface (milliseconds) should be changed to
> > the equivalent of POSIX timespec, i.e. seconds and nanoseconds to
> > be future-safe.
>
> A) I very much dislike using a "time out at a specified time"
> implementation preferring to "time out after a specified duration".
Sorry, I should have been more descriptive: I mean the "time out after
a specified duration" semantics are what they should be. However,
I advocate using seconds and nanoseconds to specify the duration.
Since the largest portable datatype is long (32 bits min. guaranteed),
we need to split the duration into a "long" for the nanosecond
(fractional) part of the duration and another "long" for the seconds
part. I mentioned "timespec" only to illustrate the kind of
representation I have in mind for the timeout.
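To make that concrete, here is a rough sketch of the kind of parameter I
mean (the names are made up, not a concrete interface proposal):

    // Hypothetical duration split into seconds and nanoseconds, in the
    // spirit of POSIX timespec; names are illustrative only.
    struct duration
    {
        long sec;   // whole seconds of the timeout duration
        long nsec;  // fractional part, 0 <= nsec < 1000000000
    };

    duration quarter_ms = { 0, 250000 };  // e.g. 250 microseconds

    // A timed lock would then take the duration instead of milliseconds:
    //   bool timed_lock(const duration& d);
    // An implementation is free to round d up to its internal timeout
    // resolution (e.g. one scheduler tick), but the interface itself no
    // longer precludes sub-millisecond timeouts.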
> C) It might be possible to define the waits in terms of seconds and
> nanoseconds, the basic options available with timespec, but not all
> platforms support this fine a grain of time duration. Milliseconds
> are more likely to be implementable on multiple platforms. This was
> the reasoning chosen at the time these interfaces were designed, but
> I'm not locked into this.
My reasoning is that the interface should not preclude more
precise timeouts. It should be implementation-defined what the
internal resolution of timeouts is (i.e. every platform defines
its own resolution). We could have a general "at least microsecond
resolution is available everywhere" clause, but that would be incorrect
at least on Linux/x86, which only has a 100 Hz scheduling frequency.
> > - atomic_t should say that value_type must be at least "int".
> > Otherwise, it's of not much use in the real world.
>
> I'm not sure I follow you here. Care to explain?
According to my reading, atomic_t's value_type is
"implementation defined", with no restriction on what the type may
be. So it could be a "bool", which would seriously limit the portable
use of atomic_t, because the assumptions I can portably make
about atomic_t would be rather limited. If you say it's at least an "int",
I can be sure it can hold at least 15 bits' worth of values
(INT_MAX >= 32767 is guaranteed by the C++ standard).
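For exposition, here is the kind of portable use I have in mind; the
atomic_t below is only a stand-in struct, not the real class, because the
point is the guaranteed range of value_type, not the exact interface:

    // Exposition only: "atomic_t" is a stand-in, not the real class.
    struct atomic_t
    {
        typedef int value_type;   // the guarantee I'm asking for
        value_type value;
    };

    int main()
    {
        atomic_t refcount = { 0 };
        // Portable only if value_type is guaranteed to hold at least
        // 0..32767; with a value_type of "bool" this counter would be
        // meaningless after the first increment.
        for (int i = 0; i < 32767; ++i)
            ++refcount.value;
        return refcount.value == 32767 ? 0 : 1;
    }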
> > - How do I put a variable number of mutexes in a dynamic data
> > structure? Say, I'd like to implement the "dining philosophers"
> > problem with a runtime configurable number of philosophers?
>
> I don't see the problem here either.
I don't remember all the details of the dining philosophers' problem,
but here's my current interpretation: There are n philosophers around
a round table, with n forks between them. There's a plate with
an unlimited amount of spaghetti in front of each philosopher,
but any philosopher needs the fork to its right *and* the one
to its left to start eating. A philosopher can decide to stop
eating (and return the forks) for a while. Everyone should have a
chance to eat once in a while. Assume there's no global
coordination (e.g. numbering the philosophers and having the
even numbered ones eat first and then the odd numbered ones and
so on).
For a trivial (possibly deadlock-prone) solution, I'd like to have
a std::vector<mutex>, one mutex for each fork (a fork is a shared
resource and thus needs to be protected). However, a mutex is
noncopyable and thus not usable in a standard container.
Here's the conceptual question: How can I have a runtime-configurable
number of mutexes (here: representing the forks)? (No, I don't want
to allocate these with "new" manually; that is too tedious and error-prone.)
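For illustration, here is roughly what I would like to write, together
with the kind of manual-allocation workaround I'd rather avoid (the
boost/thread header name is assumed from the current zip):

    #include <vector>
    #include <boost/smart_ptr.hpp>     // boost::shared_ptr
    #include <boost/thread/mutex.hpp>  // header name assumed

    int main()
    {
        int n = 5;  // number of philosophers/forks, known only at run time

        // What I'd like to write -- one mutex per fork -- is rejected by
        // the compiler, because boost::mutex is noncopyable and
        // std::vector requires a copyable element type:
        //
        //   std::vector<boost::mutex> forks(n);

        // The kind of workaround I'd rather not need: manual allocation
        // behind reference-counted pointers.
        std::vector< boost::shared_ptr<boost::mutex> > forks;
        for (int i = 0; i < n; ++i)
            forks.push_back(boost::shared_ptr<boost::mutex>(new boost::mutex));

        // Locking forks[i] and forks[(i + 1) % n] would then give the
        // naive, deadlock-prone philosopher described above.
        return 0;
    }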
> The only define really used outside of compiling Boost.Threads for a
> given platform is BOOST_HAS_THREADS. I imagine the other defines
> could be specified by the build process instead of being placed in a
> header at all.
Ok, so we don't expand config.hpp, but have a threads-private header
for the threads configuration, while we're waiting for the boost build
system to happen. (No need to depend on vaporware).
> I don't think we should use additional headers. Either specify these
> at build time, or leave them in config.hpp. Defining a new header
> for this is only going to complicate things for users.
Why? The user (of the boost library) never has anything to do with
these defines (and, in an ideal world, should have no business
with defines in config.hpp).
So having a new header will just clearly separate compiler/library
issues from OS/hardware things. For example, implementing atomic_t
on Linux will need to be done from scratch (as far as I can see
at the moment), so it's very much CPU dependent.
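As a purely hypothetical sketch, such a threads-private header could look
like this (every macro name except BOOST_HAS_THREADS is made up here):

    // Hypothetical boost/thread/detail/config.hpp -- illustration only.
    #ifndef BOOST_THREAD_DETAIL_CONFIG_HPP
    #define BOOST_THREAD_DETAIL_CONFIG_HPP

    #if defined(_WIN32)
    #  define BOOST_THREAD_USE_WIN32     // Win32 threads backend (illustrative)
    #elif defined(__unix__)
    #  define BOOST_THREAD_USE_PTHREADS  // POSIX threads backend (illustrative)
    #endif

    // CPU-specific decisions, e.g. how to implement atomic_t, would also
    // live here rather than in the general boost/config.hpp:
    #if defined(__i386__) || defined(_M_IX86)
    #  define BOOST_THREAD_ATOMIC_X86    // illustrative
    #endif

    #endif // BOOST_THREAD_DETAIL_CONFIG_HPP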
Jens Maurer