Subject: Re: [boost] [gsoc-2013] Boost.Thread/ThreadPool project
From: Vicente J. Botet Escriba (vicente.botet_at_[hidden])
Date: 2013-05-07 07:37:40
On 07/05/13 12:57, Dan Lincan wrote:
>> *The new proposal contains:*
>>
>> *Waitable tasks*
>>
>> This feature gives the user the option to wait for a task to finish and
>> retrieve its result.
>>
>> The drawback is that the user may not be interested in the result of the task.
>>
>> Interface
>>
>> template <class Function, class... Args>
>> boost::future<typename boost::result_of<Function(Args...)>::type>
>> submit(Function&& f, Args&&... args);
>>
>> My comments:
>>
>> what do you think of separating the responsibilities: the thread pool
>> schedules Callables (void()) and a free submit() function manages the futures:
>>
>> class thread_pool {
>> public:
>>   template <class Callable>
>>   void submit(Callable&&);
>>   ...
>> };
>>
>> template <class Executor, class Function, class... Args>
>> boost::future<typename boost::result_of<Function(Args...)>::type>
>> submit(Executor& e, Function&& f, Args&&... args);
>>
>> Implementing this function requires the same techniques as boost::async().
> What techniques from boost::async() are you referring to?
boost::async is able to build waiting futures that synchronize with the
end of the task.
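
To make this concrete, here is a minimal sketch of how such a free submit() could look, assuming a pool whose submit() only accepts void() callables (the thread_pool interface above) and a Boost.Thread configuration where packaged_task takes the R() signature form; extra Args... are left out for brevity:

#include <boost/thread/future.hpp>   // boost::future, boost::packaged_task
#include <memory>
#include <utility>

template <class Executor, class Function>
boost::future<typename boost::result_of<Function()>::type>
submit(Executor& ex, Function&& f)
{
    typedef typename boost::result_of<Function()>::type result_type;

    // Wrap the callable in a packaged_task so that a future can be obtained
    // before the task is handed to the pool (this is essentially what
    // boost::async does with its own thread).
    std::shared_ptr<boost::packaged_task<result_type()> > task =
        std::make_shared<boost::packaged_task<result_type()> >(
            std::forward<Function>(f));

    boost::future<result_type> result = task->get_future();

    // The pool only ever sees a void() callable; completion and the result
    // are observed solely through the returned future.
    ex.submit([task] { (*task)(); });

    return result;
}
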
>
>> *The new proposal contains:*
>>
>> *Task scheduling priority*
>>
>> The user would be allowed to give the tasks a priority, something similar to
>> the operating system process priority: low, medium, high etc.
>>
>> Performance is lost while the scheduling algorithm runs. Depending on the
>> implementation, this can be more than O(1), and thus a big overhead.
>> Another problem is fair scheduling. Low priority tasks have to run at some
>> point even if only high priority tasks are being submitted to the
>> threadpool.
>>
>> Interface
>>
>> template <class Function, class... Args, class Priority>
>> boost::future<typename boost::result_of<Function(Args...)>::type>
>> submit(Function&& f, Args&&... args, Priority p = default_priority());
>>
>> My comments:
>>
>> I don't think adding priorities in this way would improve performance.
> Well, I thought about adding priorities this way to achieve close to
> O(1) performance.
> I had in mind something similar to counting sort. As long as we know
> that we have at most x priority levels (in this case x=3: low,
> medium, high), we can store the tasks in an array of
> containers (queue/lock-free queue/other), so submitting a task to the
> threadpool can be done in O(1). The only remaining problem is fair
> scheduling, which can also be handled in O(1), so the overall
> performance stays O(1).
Ok. I see.
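
Just to be sure we mean the same thing, here is a rough sketch of that bucket scheme (three fixed levels; the plain std::deque and the simple anti-starvation counter are only placeholders for whatever synchronized or lock-free container the pool would really use):

#include <boost/function.hpp>
#include <cstddef>
#include <deque>

// Rough sketch of O(1) priority scheduling: one bucket per level, indexed
// directly by the priority value, counting-sort style. No locking is shown.
class priority_buckets
{
public:
    enum priority { high = 0, medium = 1, low = 2, levels = 3 };

    // O(1): push into the bucket that matches the task's priority.
    void push(const boost::function<void()>& task, priority p)
    {
        buckets_[p].push_back(task);
    }

    // Independent of the number of tasks: scan at most `levels` buckets.
    // Every fourth pick starts the scan from the low end so that
    // low-priority tasks still run while high-priority tasks keep arriving.
    bool pop(boost::function<void()>& task)
    {
        std::size_t start = (++picks_ % 4 == 0) ? levels - 1 : 0;
        for (std::size_t i = 0; i < levels; ++i) {
            std::size_t b = (start + i) % levels;
            if (!buckets_[b].empty()) {
                task = buckets_[b].front();
                buckets_[b].pop_front();
                return true;
            }
        }
        return false;   // every bucket is empty
    }

private:
    std::deque<boost::function<void()> > buckets_[levels];
    std::size_t picks_ = 0;
};
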
>
>> *The new proposal contains:*
>>
>> *Task execution order*
>>
>> For example, the user might want x tasks to be executed, and only after all
>> x have been executed can another y be executed, and so on. The user could
>> assign order 1 to the x tasks and order 2 to the y tasks. The threadpool will
>> know that level 2 tasks can only be executed after the level 1 tasks are done.
>>
>> Performance is lost when a task is chosen for execution. For example, with
>> priority queues the complexity would be O(log n), plus the locking, which
>> can slow down scheduling considerably.
>>
>> This feature can be used as a kind of serial executor by giving higher and
>> higher numbers to the order argument.
>>
>> Interface
>>
>> template <class Function, class... Args, class Order>
>> boost::future<typename boost::result_of<Function(Args...)>::type>
>> submit(Function&& f, Args&&... args, Order o = default_order());
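
As an aside, a purely illustrative sketch of how such level gating could be bookkept (the names are made up, no locking is shown, and a real pool would fold this into its existing queue protection):

#include <boost/function.hpp>
#include <deque>
#include <map>

// Illustrative sketch of level gating: tasks of order level k become
// runnable only once every task of a lower level has finished. push() is
// O(log n) here because of std::map, matching the cost noted above.
class ordered_queue
{
public:
    void push(const boost::function<void()>& task, int level)
    {
        levels_[level].tasks.push_back(task);
    }

    // Hand out tasks from the lowest level only; the next level stays
    // blocked while the current one has queued or still-running tasks.
    bool pop(boost::function<void()>& task, int& level)
    {
        if (levels_.empty() || levels_.begin()->second.tasks.empty())
            return false;
        bucket& b = levels_.begin()->second;
        task = b.tasks.front();
        b.tasks.pop_front();
        ++b.running;
        level = levels_.begin()->first;
        return true;
    }

    // Called by a worker when a task of the given level has completed;
    // erasing the drained bucket is what unblocks the next level.
    void done(int level)
    {
        bucket& b = levels_[level];
        if (--b.running == 0 && b.tasks.empty())
            levels_.erase(level);
    }

private:
    struct bucket
    {
        std::deque<boost::function<void()> > tasks;
        int running;
        bucket() : running(0) {}
    };
    std::map<int, bucket> levels_;
};
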
>>
>> *Callback functions / Continuations*
>>
>> The user can also set a callback function when submitting a task to the
>> threadpool. This function will run as soon as the task is finished by the
>> threadpool.
>>
>> Drawback: since the callback function is run by the thread which executes
>> the task, this thread can be blocked if the callback function blocks.
>> Also, the callback takes no parameters.
>>
>> Interface
>>
>> template <class Function, class... Args, class Callback>
>> boost::future<typename boost::result_of<Function(Args...)>::type>
>> submit(Function&& f, Args&&... args, Callback c = default_callback());
>>
>> My comments:
>>
>> I prefer continuations to callbacks, and I prefer an interface that
>> explicitly states the dependencies:
>>
>> when_all(submit(f), submit(g)).then(h);
> This is what I had in mind with this feature. I should have been
> clearer; that is the reason I added [1] as a reference.
Great.
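
For completeness, a rough sketch of the shape this takes with the free submit() from above and the N3558-style when_all/then combinators [1] (thread_pool, submit and the task functions are the hypothetical interfaces discussed in this thread, not existing Boost names; in Boost.Thread these combinators are only available behind experimental configuration macros):

#include <boost/thread/future.hpp>

int f() { return 1; }   // placeholder tasks
int g() { return 2; }

// h receives the future<tuple<future<int>, future<int>>> produced by
// when_all; the exact tuple type is left to template deduction so the
// sketch does not depend on a particular Boost.Thread configuration.
struct h_task
{
    template <class BothReady>
    void operator()(BothReady both) const
    {
        // Body of h: both.get() would yield the two ready futures of f and
        // g; calling get() on them here would not block, they are already set.
    }
};

void example(thread_pool& pool)   // thread_pool as proposed above
{
    // h_task runs only once both submitted tasks have completed; the
    // dependency is stated explicitly instead of via a callback argument.
    boost::when_all(submit(pool, f), submit(pool, g)).then(h_task());
}
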
>
>> *The new proposal contains:*
>>
>> *Task cancellation*
>>
>> With this feature the user can cancel a submitted task. If the task is
>> currently being executed, it will be interrupted. The implementation will
>> rely on boost::thread interruption.
>>
>> template <class Function, class... Args, class Task>
>> boost::future<typename boost::result_of<Function(Args...)>::type>
>> submit(Function&& f, Args&&... args, Task& task);
>>
>> Task has a cancel() function which cancels/interrupts the task when called.
>>
>> My comments:
>>
>> This is a little bit complex from the user's point of view.
>> I guess the best is to either:
>> * add to boost::future the possibility to interrupt the task, or
>> * return an interrupted_future<T> or task<T> that can be interrupted.
> Then I would prefer the first alternative, extending boost::future.
> For the user it would be really nice to use just result.interrupt()
> where result is a future.
Great.
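
As a very rough illustration of what that could involve (nothing below exists in Boost.Thread; interrupt_handle, interruptible_future and the registration of the worker thread are made-up names for the sketch):

#include <boost/shared_ptr.hpp>
#include <boost/thread/future.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/thread.hpp>

// Made-up sketch: the wrapper the pool executes registers the worker
// thread that runs the task, so interrupt() can forward to the
// boost::thread interruption mechanism mentioned above.
struct interrupt_handle
{
    boost::mutex mtx;
    boost::thread* worker;     // set by the pool while the task runs
    bool cancelled;            // interrupt() arrived before the task started
    interrupt_handle() : worker(0), cancelled(false) {}

    void interrupt()
    {
        boost::mutex::scoped_lock lock(mtx);
        if (worker)
            worker->interrupt();   // running: interrupt the worker thread
        else
            cancelled = true;      // still queued: the pool skips it later
    }
};

// What result.interrupt() could look like: an ordinary future paired with
// the shared handle created at submission time.
template <class T>
class interruptible_future
{
public:
    interruptible_future(boost::future<T>&& f,
                         boost::shared_ptr<interrupt_handle> h)
        : future_(boost::move(f)), handle_(h) {}

    void interrupt() { handle_->interrupt(); }

    // get() would then rethrow whatever the interrupted task left in the
    // future (typically boost::thread_interrupted).
    T get() { return future_.get(); }

private:
    boost::future<T> future_;
    boost::shared_ptr<interrupt_handle> handle_;
};
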
>
>> There is an additional requirement. The user must ensure that all the tasks
>> related to some object are executed sequentially, but the task submitter
>> doesn't know about the pending tasks.
>> Continuations don't solve the problem. What would you add to make the user's
>> life easier? (Please read the discussion about thread pools and ASIO; the
>> answer is somewhere in there.)
> I will search it and get back with an answer.
>
> [1] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3558.pdf
>
Best,
Vicente