Subject: Re: [boost] [threadpool] version 22 with default pool
From: Edouard A. (edouard_at_[hidden])
Date: 2009-03-08 18:11:58


> You also have to prevent deadlocks - so sometimes it would be faster to
> hold a lock, but doing so could raise a deadlock.

I figured you did that for a reason. Deadlocks are tricky, and one way to
avoid them is indeed to hold only one lock at a time. Another possibility
is to make sure you always take the locks in the same order, as in the
sketch below. You can also reduce the number of locks. I know this is not
straightforward and it takes a lot of time to find the right settings.
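
For illustration (this is not from the threadpool code), a minimal sketch
of the fixed-order approach using boost::mutex; the function names are
made up:

#include <boost/thread/mutex.hpp>

boost::mutex a; // rule: a is always taken before b, everywhere
boost::mutex b;

void op1()
{
    boost::mutex::scoped_lock la(a);
    boost::mutex::scoped_lock lb(b);
    // ... touch state guarded by both ...
}

void op2()
{
    // even if op2 "logically" wants b first, it still takes a
    // first - the global order is what rules out the a/b vs.
    // b/a deadlock
    boost::mutex::scoped_lock la(a);
    boost::mutex::scoped_lock lb(b);
    // ...
}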

How many different locks do you have?

> that's what threadpool does - you submit work and you get a handle
> (task) back for each item. the pool schedules and executes the work
> inside the worker-threads. the pool itself is not interested in the
> result of the work-items, nor should the pool have knowledge about the
> submitted work. This can only be done outside the pool, where you have
> enough context. So it makes no sense for the pool to wait for a subset
> of the submitted work.

I understand. This sounds logical. But... I don't want to sound
narrow-minded, but I really do think there are use cases where you simply
want to know that your pool has finished all the work you gave it
(without knowing what the work actually was). That would mean waiting for
both pending() and running() to be == 0, as in the sketch below.
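
Just to make the idea concrete - a sketch, assuming the pool exposes
pending() and running() as above; the wrapper and its names are
hypothetical, not part of threadpool:

#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>

boost::mutex idle_mtx;
boost::condition_variable idle_cond;

// 'pool' is any object exposing pending() and running(); for this
// to be race-free the pool would have to notify idle_cond (under
// idle_mtx) each time a task finishes
template <typename Pool>
void wait_until_idle(Pool & pool)
{
    boost::mutex::scoped_lock lk(idle_mtx);
    while (pool.pending() != 0 || pool.running() != 0)
        idle_cond.wait(lk);
}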

Of course when your threadpool is handling a lot of tasks coming from
different clients, that doesn't make sense anymore.

In that case it would be nice to have some sort of "root" task on which
the other tasks depend. You would only need to wait for the root task to
finish, making the code simpler to write (and maybe the waiting itself
more efficient?).
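
Something like this, to sketch what I mean - a hand-rolled root handle
built from a counter and a condition variable (all names hypothetical):

#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>

// children register themselves with the root; the client only
// ever waits on the root
class root_task
{
    boost::mutex mtx_;
    boost::condition_variable done_;
    unsigned children_;

public:
    root_task() : children_(0) {}

    // call before submitting each child to the pool
    void add_child()
    {
        boost::mutex::scoped_lock lk(mtx_);
        ++children_;
    }

    // the last thing each child does
    void child_finished()
    {
        boost::mutex::scoped_lock lk(mtx_);
        if (--children_ == 0)
            done_.notify_all();
    }

    // the only wait the client has to write
    void wait()
    {
        boost::mutex::scoped_lock lk(mtx_);
        while (children_ != 0)
            done_.wait(lk);
    }
};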

Alternatively, you can embed some sort of synchronization mechanism in
your tasks... But I think it's best to have the client write as little
synchronization code as possible.

> you could take a look at the future library - because a future is used
> to transfer the result between threads (using condition variables
> inside).

The problem is that sorting a container can generate lots of tasks, and
that means a lot of overhead with this approach. If I'm correct, wait_all
starts to become slow when you have many tasks. Maybe it's just a problem
on my platform; I would need to investigate this further.
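
To show where the overhead comes from, here is roughly what the waiting
boils down to (using C++11 std::future just for illustration):

#include <future>
#include <vector>

// waiting on N per-task futures is a linear pass, and every future
// drags its own shared state (mutex + condition variable) along -
// negligible per item, noticeable when a parallel sort spawns
// thousands of subtasks
void wait_all(std::vector<std::future<void> > & tasks)
{
    for (std::size_t i = 0; i < tasks.size(); ++i)
        tasks[i].wait();
}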

Regards.

-- 
EA
