From: matt_at_[hidden]
Date: 2003-12-16 01:34:23
> On Behalf Of Beman Dawes
> Also, are there any others who would like to step forward? It might help
> if there were several people working together.
I'd be happy to contribute a few thoughts and code but time does not
permit full attention. I'll try and contribute as much as I can until I
get told to go away ;-)
I would like to see something like rw_mutex get up, but I would also like
to see a substantial change in direction for the interface style and
approach. There are several directions that could quite appropriately be
taken independently.
Firstly, and to comment on Howard's nice rw approach as well, I think the
term read/write is wrong, though I am in two minds, as it is so popular
that perhaps it should stay. The true meaning is really "shared" and
"exclusive" access rather than read and write; read and write is just the
common use case.
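To make the terminology concrete, here is a minimal sketch of what a
shared/exclusive naming might look like, using C++17's std::shared_mutex as
a stand-in for rw_mutex (the class and member names here are illustrative,
not from the attached code):

```cpp
#include <shared_mutex>

// Illustrative only: "shared"/"exclusive" naming over a stand-in rw mutex.
// "Read" and "write" are just the common use case of these two modes.
class shared_exclusive_mutex {
public:
    void lock_exclusive()   { m_.lock(); }          // the "write" case
    void unlock_exclusive() { m_.unlock(); }
    void lock_shared()      { m_.lock_shared(); }   // the "read" case
    void unlock_shared()    { m_.unlock_shared(); }
private:
    std::shared_mutex m_;
};
```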
Secondly, I can see two approaches to the direction of the library.
1. Explicit thread support, control and synchronisation primitives.
2. Architecturally neutral policy helpers.
A lot of 1 is already in place, though I would like to see some interface
changes to help 2. Some of the extra stuff in thread_dev would be handy
for 1, especially such things as barriers. Active objects and "futures"
are two obvious missing things. Message queues, workflow-type
processes, etc. would be another step.
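For what I mean by a "future", a minimal rendezvous sketch (written against
modern <mutex>/<condition_variable> for brevity; the type and member names
are mine, and none of this is the attached code) would be:

```cpp
#include <condition_variable>
#include <mutex>

// Illustrative "future": one thread sets the value, another blocks in
// get() until it is ready. Real code would want error states, move
// semantics, etc.; this shows only the rendezvous.
template <class T>
class simple_future {
public:
    void set(T value) {
        {
            std::lock_guard<std::mutex> lk(m_);
            value_ = value;
            ready_ = true;
        }
        cv_.notify_all();           // wake any waiting consumers
    }
    T get() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return ready_; });   // block until set()
        return value_;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    T value_{};
    bool ready_ = false;
};
```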
2 is interesting. There is no magic, in the A. C. Clarke sense, that will
make concurrency transparent and efficient. It would be nice to be able
to include a policy providing synchronisation primitives you can code to,
enabling synchronisation within the principle of "you don't pay for what
you don't use".
Along these lines, I've attached some untidy code that I use on win32 v7.1
with boost::thread to provide synchronisation primitives, where I've
profiled them against the no-code options and native critical sections to
ensure there is no abstraction penalty.
Note I use rw_mutex from thread_dev, et al., but wrap the mutexes to provide
a consistent interface. The basic principle is: code to the
shared/exclusive model and you get the "weaker concurrency" models for free.
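To illustrate the wrapping, here is a cut-down sketch of the policy side,
assuming std::mutex stands in for the boost::thread wrappers. The synch::*
names mirror the examples below, but the bodies here are mine, not the
attached code:

```cpp
#include <mutex>

namespace synch {

// no_synch: every primitive is a no-op, so single-threaded users pay
// nothing -- the compiler removes it all.
struct no_synch {
    struct mutex {};
    struct lock {
        explicit lock(mutex&) {}    // no-op scoped "guard"
    };
};

// simple: a plain exclusive mutex behind the identical interface.
struct simple {
    typedef std::mutex mutex;
    typedef std::lock_guard<std::mutex> lock;
};

} // namespace synch

// Usage: the same template code runs unchanged with either policy.
template <class S = synch::no_synch>
struct counter {
    long add(long n) {
        typename S::lock lk(guard_);   // no-op or real lock, per policy
        return total_ += n;
    }
    long total_ = 0;
    typename S::mutex guard_;
};
```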
Also, I've attached basic synch primitives for atomic ops, which are around
3 times faster than the mutex approaches. ACE has code for gcc / x86 with
inline assembly we can use. We should have a mutex approach for
non-specialized platforms.
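As a sketch of that fallback idea, with C++11 std::atomic standing in for
the inline-assembly routines (the names and signatures here are
illustrative; the attached code differs):

```cpp
#include <atomic>
#include <mutex>

namespace synch {

// Portable fallback: protect the increment with a mutex. Correct on any
// platform, but roughly the slower path measured above.
struct mutex_atomic_op {
    static long inc(long& x) {
        static std::mutex m;
        std::lock_guard<std::mutex> lk(m);
        return ++x;
    }
};

// Specialised platforms: a genuine atomic increment. std::atomic stands
// in here for the ACE gcc/x86 inline assembly.
struct native_atomic_op {
    static long inc(std::atomic<long>& x) {
        return x.fetch_add(1, std::memory_order_relaxed) + 1;
    }
};

} // namespace synch
```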
I'd imagine the future direction of 2 excluding explicit thread control
entirely. The threading control should be implicit from dispatching
parameters to a "sequence point" that is either a function, thread pool
queue, external process, message interface or whatever. This way you will
be able to change your application from single threaded to multithreaded
to distributed by changing policies. The best architecture is no
architecture at all. This fits in with the OMG model driven architecture
direction.
The other category of stuff I'd like to see, while I'm in brain-dump mode,
is extending the mutex policy to suit collections of objects. I am working
on this, and some libs such as Loki already have it in a primitive form.
Per class, per object and, importantly (because I use it a lot ;-) ), a
pool of mutexes per collection.
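The pool-per-collection idea can be sketched like this: rather than one
mutex per element (too many) or one for the whole collection (too coarse),
hash each element's key onto a fixed pool of locks. The class name and
default sizing are illustrative, not from my code:

```cpp
#include <cstddef>
#include <mutex>
#include <vector>

// Illustrative mutex pool: the same key always maps to the same mutex,
// so distinct elements rarely contend while the lock count stays bounded.
class mutex_pool {
public:
    explicit mutex_pool(std::size_t n = 16) : locks_(n) {}

    std::mutex& for_key(std::size_t key) {
        return locks_[key % locks_.size()];
    }

private:
    std::vector<std::mutex> locks_;   // sized once; mutexes are immovable
};
```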
Anyway, attached is some ugly trivial code that I now place into the
public domain to help get the ball rolling... You can add my name to the
"boost license anything I post" list if you wish.
Here is an example of using a policy based synch primitive:
// assumes: #include <vector>, <numeric>, <algorithm>,
// and namespace l = boost::lambda; from the attached code
template< class S = synch::no_synch >
struct dumb_adder
{
    dumb_adder(size_t size) : v_(size) {
        std::for_each(v_.begin(), v_.end(), l::_1 = 1.0);  // fill with 1.0
    }

    double operator() (size_t size)
    {
        double result = 0;
        for (size_t j = 0; j < size; ++j) {
            for (size_t i = 0; i < size; ++i) {
                {
                    // lock each time through just to be really expensive
                    typename S::lock lk(guard_);  // typename: S::lock is dependent
                    result += std::accumulate(v_.begin(), v_.end(), 0.0);
                }
            }
        }
        return result;
    }

private:
    typename S::mutex guard_;
    std::vector<double> v_;
};
Which you can then use like this:
dumb_adder<> ns(10);
dumb_adder<synch::simple> s (10);
dumb_adder<synch::recursive> r (10);
dumb_adder<synch::shareable> rw(10);
An example of the atomic_op usage would be:
template< class S = synch::no_synch >
struct atomic_inc
{
    long operator() (size_t size)
    {
        long result = 0;
        for (size_t j = 0; j < size; ++j) {
            for (size_t i = 0; i < size; ++i)
            {
                S::atomic_op::inc(result);
                // need to fool the optimizer for benching...
                if (result % 2 == 1) S::atomic_op::inc(result);
            }
        }
        return result;
    }
};
Then usage becomes:
atomic_inc<> ns;
atomic_inc<synch::simple> s;
atomic_inc<synch::recursive> r;
atomic_inc<synch::shareable> rw;
Also, I'm happy to contribute some message queue stuff I use...
Hope this $0.02 helps,
Matt Hurd.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk