Subject: Re: [boost] [threadpool] new version v12
From: vicente.botet (vicente.botet_at_[hidden])
Date: 2008-11-03 03:48:14
----- Original Message -----
From: <k-oli_at_[hidden]>
To: <boost_at_[hidden]>
Sent: Sunday, November 02, 2008 8:59 PM
Subject: Re: [boost] [threadpool] new version v12
> Using fibers doesn't prevent you from calling functions recursively in the
> task object.
You must have misunderstood my concern. What I mean is that the future's get()
function can *recursively* call into the worker-thread scheduler to schedule
one sub_task, or steal one from the other worker threads, instead of using
fibers. This is another possible design for the thread_pool library,
independent of whether the user has a recursive function.
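To make this concrete, here is a minimal sketch of what I mean, in plain C++
and with illustrative names only (tiny_pool, run_one_pending_task() and
wait_by_scheduling() are not the Boost.Threadpool API, and stealing from other
worker threads' queues is left out):

#include <chrono>
#include <deque>
#include <functional>
#include <future>
#include <iostream>
#include <memory>
#include <mutex>
#include <thread>

class tiny_pool {
    std::mutex mtx_;
    std::deque<std::function<void()>> queue_;
public:
    // Enqueue work and hand back a future for its completion.
    std::future<void> submit(std::function<void()> f) {
        auto task = std::make_shared<std::packaged_task<void()>>(std::move(f));
        std::future<void> fut = task->get_future();
        std::lock_guard<std::mutex> lk(mtx_);
        queue_.push_back([task] { (*task)(); });
        return fut;
    }

    // One scheduling step: run a single pending task, if any.
    bool run_one_pending_task() {
        std::function<void()> job;
        {
            std::lock_guard<std::mutex> lk(mtx_);
            if (queue_.empty()) return false;
            job = std::move(queue_.front());
            queue_.pop_front();
        }
        job();
        return true;
    }

    // Instead of blocking in get(), re-enter the scheduler one step at a
    // time until the awaited future becomes ready.
    void wait_by_scheduling(std::future<void>& fut) {
        while (fut.wait_for(std::chrono::seconds(0)) != std::future_status::ready) {
            if (!run_one_pending_task())
                std::this_thread::yield();
        }
    }
};

int main() {
    tiny_pool pool;
    std::future<void> inner;

    pool.submit([&] {
        inner = pool.submit([] { std::cout << "sub_task ran on the same worker\n"; });
        pool.wait_by_scheduling(inner);   // schedules the sub_task instead of blocking
        std::cout << "parent task resumed\n";
    });

    pool.run_one_pending_task();          // a single "worker" drains everything
}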
> The purpose of using fibers in Boost.Threadpool is to avoid blocking the
> parent-task until the sub-task becomes ready/fulfilled. The parent-task gets
> suspended and the worker-thread takes another task from the pool-queue.
> Later the suspended task is resumed.
> Boost.Threadpool allows fibers to be disabled -> tp::fibers_disabled.
Yes, I know all that. As you know, I'm interested in the fork_join-enabled
thread pool variant, which doesn't mean I'm in favour of a fiber
implementation. Both features are orthogonal.
>> > The problem with doing this (whether you use Fibers or just recurse on
>> > the same stack) is that the nested task inherits context from its
>> > parent: locked mutexes, thread-local data, etc. If the tasks are not
>> > prepared for this the results may not be as expected (e.g. thread
>> > waits on a task, resumes after waiting and finds all its thread-local
>> > variables have been munged).
>>
>> You are right, this reinforces my initial thinking. We need to distinguish
>> between tasks (root tasks) and sub_tasks (which will inherit context from
>> their parent task).
>
> I believe this separation is not necessary. If all fibers are processed by
> the same worker-thread we don't have to worry.
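Even when everything runs on the same worker-thread, the context-inheritance
problem described above remains. A minimal illustration in plain C++ (no
threadpool involved, just a nested task executed inline on the waiting
thread):

#include <functional>
#include <iostream>

thread_local int current_transaction_id = 0;    // per-thread context

// Stand-in for "the worker runs another task while the parent is waiting".
void wait_by_running_inline(const std::function<void()>& other_task) {
    other_task();   // the nested task executes on the parent's thread and stack
}

int main() {
    current_transaction_id = 42;                 // the parent task sets its context

    // While the parent "waits", the worker picks up an unrelated task inline.
    wait_by_running_inline([] {
        current_transaction_id = 7;              // the unrelated task reuses the slot
    });

    // The parent resumes and finds its thread-local context munged.
    std::cout << "expected 42, got " << current_transaction_id << "\n";
}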
<snip>
> What does this_task::submit do? Create a new task in the threadpool?
It creates a sub_task that inherits the context of its parent task.
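To make clear what I have in mind, a purely illustrative sketch:
this_task::submit does not exist yet, and the stub below simply runs the
sub_task inline so that the example compiles; a real pool would enqueue it on
the parent's worker-thread queue.

#include <future>
#include <iostream>
#include <utility>

namespace this_task {
    // Stub of the interface under discussion: a real implementation would
    // enqueue the sub_task on the current worker-thread's local queue.
    template <typename F>
    auto submit(F f) -> std::future<decltype(f())> {
        std::packaged_task<decltype(f())()> task(std::move(f));
        auto fut = task.get_future();
        task();                      // stub: execute immediately, inline
        return fut;
    }
}

int fib(int n) {
    if (n < 2) return n;
    // The sub_task inherits the parent task's context and stays on the
    // parent's worker thread rather than going to the global pool queue.
    auto lhs = this_task::submit([n] { return fib(n - 1); });
    int  rhs = fib(n - 2);
    return lhs.get() + rhs;   // get() could schedule other sub_tasks instead of blocking
}

int main() { std::cout << fib(10) << "\n"; }     // prints 55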
>> * Independently of whether the implementation uses fibers or a recursive
>> call to the worker-thread scheduler, there are other blocking functions
>> that could be wrapped to let the current worker thread schedule other
>> tasks/sub_tasks, doing a busy wait instead of a blocking wait.
>
> As I wrote above - Boost.Threadpool already does this (with the support of
> fibers). Currently it is encapsulated in task< R >::get() - but the
> interface can be extended to provide a this_working_thread::wait() function.
Yes, I know, but it does this only for the future's get() function. What I'm
asking is to explore the ability to do that for other blocking functions, even
blocking functions not yet defined, i.e. to open the interface. Do you plan to
open the interface with something like one_step_schedule()?
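Something along these lines is what I am after (illustrative only:
one_step_schedule() is the hypothetical hook I am asking about, and
scheduler_hook is a made-up stand-in for the worker-thread scheduler):

#include <atomic>
#include <thread>

struct scheduler_hook {
    // A real pool would run one queued task/sub_task here and report
    // whether there was anything to do; the stub always reports "nothing".
    bool one_step_schedule() { return false; }
};

// A generic wait: while the condition is not satisfied, keep the worker
// thread useful by handing control back to the scheduler one step at a time.
template <typename Condition>
void wait_by_scheduling(scheduler_hook& sched, Condition ready) {
    while (!ready()) {
        if (!sched.one_step_schedule())
            std::this_thread::yield();    // nothing queued, avoid burning CPU
    }
}

int main() {
    scheduler_hook sched;
    std::atomic<bool> done{false};
    std::thread t([&] { done = true; });  // any external blocking event
    wait_by_scheduling(sched, [&] { return done.load(); });
    t.join();
}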
>> BTW Oliver,
>> * could the interrupt function extract the task from the queue if the
>> task is not already running?
>
> This would be complicated because we have different queues; one global
> queue and local worker-queues. The task would have to maintain an iterator
> after insertion into one of the queues, etc.
> The current implementation stores an interrupt-flag so that the task is
> interrupted immediately after dequeuing.
This would be better than nothing. Could you tell me where the code doing
this is? Anyway, with the separation between task and sub_task we can avoid
the problem: a task would only ever be on the pool queue, and a sub_task only
on the internal worker-thread queue.
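For what it is worth, the flag-on-dequeue approach you describe could be
sketched like this (illustrative only, not the actual Boost.Threadpool code I
am asking to see):

#include <atomic>
#include <deque>
#include <functional>
#include <iostream>
#include <memory>

struct queued_task {
    std::function<void()> body;
    std::atomic<bool>     interrupted{false};
    void interrupt() { interrupted = true; }   // O(1), no queue search needed
};

int main() {
    std::deque<std::shared_ptr<queued_task>> queue;

    auto t = std::make_shared<queued_task>();
    t->body = [] { std::cout << "task ran\n"; };
    queue.push_back(t);

    t->interrupt();                            // interrupt before it ever runs

    // Worker loop: dequeue, then honour the flag immediately.
    while (!queue.empty()) {
        auto task = queue.front();
        queue.pop_front();
        if (task->interrupted) {
            std::cout << "task interrupted right after dequeuing\n";
            continue;
        }
        task->body();
    }
}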
>> * As a task can only be on one queue, maybe the use of intrusive
>> containers could improve performance.
>> * The fiber queue is a std::list and you use its size() function, which
>> can have O(n) complexity. This should be improved in some way
>> (Boost.Intrusive already provides a constant-time-size list
>> implementation).
>
> I chose std::list for its fast insertions/deletions - I'll take a look
> into intrusive containers.
I'm sure you will be convinced by the intrusive containers.
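For reference, a minimal Boost.Intrusive sketch (illustrative only): each task
carries its own list hook, so no per-node allocation is needed, and with the
constant_time_size<true> option size() is O(1):

#include <boost/intrusive/list.hpp>
#include <iostream>

struct task : public boost::intrusive::list_base_hook<> {
    int id;
    explicit task(int i) : id(i) {}
};

int main() {
    typedef boost::intrusive::list<
        task, boost::intrusive::constant_time_size<true> > task_list;

    task a(1), b(2), c(3);           // the container does not own its elements
    task_list queue;
    queue.push_back(a);
    queue.push_back(b);
    queue.push_back(c);

    std::cout << "queued tasks: " << queue.size() << "\n";   // O(1) size()

    queue.erase(queue.iterator_to(b));                       // O(1) removal
    std::cout << "after erase: " << queue.size() << "\n";
}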
Thanks,
Vicente