Subject: Re: [Boost-users] Boost.Thread Continuation future with executor blocking in destructor
From: Konrad Zemek (konrad.zemek_at_[hidden])
Date: 2015-04-22 20:49:06


2015-04-23 0:32 GMT+02:00 Vicente J. Botet Escriba <vicente.botet_at_[hidden]>:
> On 22/04/15 14:28, Konrad Zemek wrote:
>>
>> Hi,
>>
>> When creating a continuation future with future<>::then(), and the
>> policy is boost::launch::async, the created future's destructor
>> blocks, like a future created by boost::async, and probably for the
>> same reasons (as outlined by the standardization committee in n3679).
>>
>> The same behavior is coded into continuation futures' destructor when
>> using an executor. Why is it needed here? The situation with
>> executor-run continuations is a little different, as even when the
>> future is destroyed, the lifetime of the running job is still bound to
>> the executor.
>
> An Executor can also create a thread for each task, and the Executor's
> destructor doesn't need to wait until all the tasks have finished (even
> if some executors may do so). I don't see how to distinguish the two
> cases without making the Executor concept more complex.

You're right, I erroneously assumed that executors have to join on
destruction much like a future returned from async. Re-reading the
proposal I see that join behavior is specified for a concrete
executor, not the general executor concept.
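
For concreteness, the pattern I'm referring to is roughly the following
(a minimal sketch, assuming Boost.Thread is built with BOOST_THREAD_VERSION
4 and executors enabled; basic_thread_pool stands in for my asio-based
executor):

    #define BOOST_THREAD_VERSION 4
    #define BOOST_THREAD_PROVIDES_EXECUTORS
    #include <boost/thread/future.hpp>
    #include <boost/thread/executors/basic_thread_pool.hpp>

    int main()
    {
        boost::basic_thread_pool pool(2); // stand-in executor

        boost::promise<int> reply;
        boost::future<int> raw = reply.get_future();

        // Transform the raw result on the executor's threads.
        boost::future<int> transformed =
            raw.then(pool, [](boost::future<int> f) { return f.get() * 2; });

        reply.set_value(21);

        // If `transformed` were allowed to go out of scope before the
        // continuation has run, its destructor would currently block, just
        // like a future returned by boost::async with boost::launch::async.
        return transformed.get() == 42 ? 0 : 1;
    }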

> Note, however, that I have on my todo list an implementation that
> doesn't block at all; it's just that everything needs more time than we
> have. This would mean a breaking change, and we all know how annoying it
> is to introduce breaking changes.
>>
>> My use case:
>> I've coded a network communication stack that returns to the caller a
>> future<NetworkResponse>. I've previously used
>> std::async(std::launch::deferred, ...) to transform the future's
>> content before returning it to the caller (bytes -> protobuf ->
>> internal object), and I consider such manipulation of a future value
>> to be a very powerful feature.
>> I've used std::launch::deferred to reduce the number of running
>> threads, but the downside is that the client can't wait for the future
>> value with a timeout. At the other end of the spectrum is
>> std::launch::async, which would run a new thread - per pending
>> communication - that would do little more than block.
>> boost::future<>::then is a fantastic fit for my use case, as I can use
>> my boost::asio::io_service as an executor and let my communication
>> stack's threads do the work without having them block on future.get()
>> to first retrieve the result. The caller can then call
>> future<>::wait_for() to wait for the reply with a custom timeout. This
>> being network communication, though, a reply message may never arrive
>> - the corresponding promise will eventually be destroyed by the stack,
>> but I can't have users block on the future's destructor until that
>> happens, after they have already decided that the reply is no longer
>> worth waiting for.
>
> I would expect the program to take care of the loss of communication and
> call set_exception on the promise before the promise is destroyed.
> However, the call to wait_for seems to be a good hint that the user knows
> what they are doing. I could consider having any call to a timed wait
> function disable blocking. An alternative could be to provide a way to
> request that the future not block.
> Please let me know if one of these options would cover your use case.

Either of these options would work for me, as I could emulate a "don't
block" request with a call to future<>::wait_for(0). I'd prefer the
latter to the former, though, if only because it fits my needs better
and would make it more explicit that the future's behavior is modified.
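
To illustrate the emulation I have in mind, here is a hypothetical sketch;
it assumes the first option were adopted (any timed wait disables the
blocking destructor), and NetworkResponse, send_request() and handle() are
stand-ins for my stack's real types and calls:

    #define BOOST_THREAD_VERSION 4
    #include <boost/thread/future.hpp>
    #include <boost/chrono.hpp>

    // Stand-ins; in the real code the returned future comes from
    // future<>::then() with an asio-based executor.
    struct NetworkResponse {};

    boost::future<NetworkResponse> send_request()
    {
        static boost::promise<NetworkResponse> pending; // reply never arrives
        return pending.get_future();
    }

    void handle(NetworkResponse const&) {}

    int main()
    {
        boost::future<NetworkResponse> reply = send_request();

        if (reply.wait_for(boost::chrono::seconds(5)) ==
            boost::future_status::ready) {
            handle(reply.get());
            return 0;
        }

        // The caller gives up on the reply here. Under the first option the
        // wait_for() call above would already count as a timed wait and
        // disable the blocking destructor; an explicit "don't block" request
        // could otherwise be emulated with a zero-length wait:
        reply.wait_for(boost::chrono::seconds(0));

        return 0; // `reply` goes out of scope without blocking (hypothetically)
    }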

> For the time being (and very temporarily), if you no longer want this
> blocking future, you can move it to a list of detached futures.
>>
>> Please advise if there's a workaround for this behavior that doesn't
>> involve me distributing a custom version of Boost.Thread with my
>> binaries. :)
>>
>>
>
> You can send any PR that you think improves the behavior of the library.

Of course; I just prefer to discuss it first to find out if others
consider such a change to be an improvement. :)
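
In the meantime I'll probably go with the detached-futures list you
describe; roughly along these lines (just a sketch, the container and the
pruning policy are my own choice):

    #define BOOST_THREAD_VERSION 4
    #include <boost/thread/future.hpp>
    #include <boost/chrono.hpp>
    #include <utility>
    #include <vector>

    // Holds futures the caller no longer cares about, so their (blocking)
    // destructors never run on the caller's path.
    template <typename T>
    class detached_futures
    {
    public:
        void detach(boost::future<T> f)
        {
            futures_.push_back(std::move(f));
        }

        // Called periodically (e.g. from the communication stack's own
        // thread) to drop futures whose shared state is already ready.
        void prune()
        {
            for (auto it = futures_.begin(); it != futures_.end();) {
                if (it->wait_for(boost::chrono::seconds(0)) ==
                    boost::future_status::ready)
                    it = futures_.erase(it);
                else
                    ++it;
            }
        }

    private:
        std::vector<boost::future<T>> futures_;
    };

That only moves the blocking off the callers' path rather than removing it,
but it would be enough for my purposes.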

Konrad

