
Subject: Re: [boost] [coroutine] interface suggestion
From: Giovanni Piero Deretta (gpderetta_at_[hidden])
Date: 2012-09-21 06:41:23


On Fri, Sep 21, 2012 at 7:02 AM, Vicente J. Botet Escriba <vicente.botet_at_[hidden]> wrote:

> Le 20/09/12 15:41, Giovanni Piero Deretta wrote:
>
>> On Thu, Sep 20, 2012 at 12:17 PM, Vicente J. Botet Escriba <vicente.botet_at_[hidden]> wrote:
>>
>>> Le 20/09/12 12:18, Giovanni Piero Deretta wrote:
>>>
>>>> On Thu, Sep 20, 2012 at 3:51 AM, Vicente J. Botet Escriba <vicente.botet_at_[hidden]> wrote:
>>>>
>>>>> Note that, as I commented in one of my first posts related to bind,
>>>>> you must take care of const and reference parameters in some tricky
>>>>> ways, as in order to be able to reassign them you would have to do
>>>>> some casts. I hope this will not cause some undefined behavior.
>>>>
>>>> How exactly would you rebind references? I can't see any sane way to
>>>> do it.
>>>
>>> Yes, references could not be rebound. We could rebind values and
>>> pointers, but not references.
>>> I guess the coroutine library could store a pointer to the caller
>>> object and allow obtaining them using get<>. But in this case the
>>> coroutine function could not follow the coroutine signature.
>>>
>>
>> You mean that caller_t::yield does not really model "int&()"? Well,
>> this is an issue. As far as I understand, Oliver is currently leaning
>> toward passing the parameters to the coroutine-fn, having yield()
>> return a tuple, and having the coroutine-fn return the final result.
>> This means that yield() will closely follow the signature and
>> everything is fine.
>>
> Well, this is the initial design that was the source of all the
> alternative proposals. I voted YES for the inclusion of the library
> without conditions, so if in the end the original interface is
> retained it will be OK with me.
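
(As an aside on the rebinding sub-thread above: the core obstacle is in
the language itself, since a C++ reference can never be reseated. A
minimal illustration:

    int main()
    {
        int a = 1, b = 2;
        int& r = a; // r is bound to a once, for its entire lifetime
        r = b;      // copies b's value into a; r still refers to a
    }

So the only way to get rebinding-like behavior is to hold a pointer, as
suggested above.)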

Pretty much everybody (including me) unconditionally voted yes for the
original interface. I guess we are having this discussion for two reasons:
1) getting the first inputs as arguments (and returning the last result
as the return value) breaks the symmetry of yield a bit. It works very
well for some scenarios but makes some others a bit awkward.
2) more importantly, the initial parameters may be left dangling if they
are references to objects that go out of scope during subsequent calls.
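
The hazard in 2) does not need any coroutine machinery to show up; it is
what you get from any resumable entity that caches its first argument by
reference. A plain-C++ analogue (the Consumer type here is hypothetical,
purely for illustration):

    #include <iostream>
    #include <string>

    // Stand-in for a coroutine that keeps its *initial* parameter by
    // reference across resumptions.
    struct Consumer {
        std::string const* first = nullptr;    // captured on the first call
        void operator()(std::string const& s) {
            if (!first) first = &s;            // bind to the initial argument
            std::cout << *first << '\n';       // dangles once that object dies
        }
    };

    int main()
    {
        Consumer c;
        c(std::string("one")); // the temporary dies here; c.first now dangles
        c(std::string("two")); // undefined behavior: reads the dead temporary
    }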

Probably 1) is just a matter of getting used to it, while 2) will need a
big warning in the documentation. Another option is to disallow implicit
references (as std::thread does) and require an explicit
reference_wrapper, but I'm not sure that is the best solution.
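
For comparison, the std::thread rule looks like this: passing an lvalue
to a reference parameter implicitly does not compile, and the caller
must opt in with std::ref:

    #include <functional>
    #include <thread>

    void inc(int& x) { ++x; }

    int main()
    {
        int n = 0;
        // std::thread t(inc, n);        // ill-formed: the thread passes a
        //                               // decayed copy, which int& rejects
        std::thread t(inc, std::ref(n)); // explicit opt-in via reference_wrapper
        t.join();                        // n == 1 here
    }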

BTW, if Oliver goes with the original interface, then I agree that
coroutines and generators should be separated, but the missing
additional *output* generator should be added to the library. It would
also be fine if only generators had range semantics. I expect that for
most use cases, people will only use generators.
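
For instance, an input generator with range semantics could read as
follows; the generator template and make_fibonacci are hypothetical
spellings, not the reviewed library's API:

    // Hypothetical: generator<int> models a range of yielded values.
    generator<int> fib = make_fibonacci();
    for (int v : fib)       // range semantics: iterate over successive yields
        std::cout << v << '\n';

An *output* generator would be the mirror image: a sink you push values
into rather than pull them from.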

>
> After a deeper analysis of the alternatives I have proposed, it seems
> that none of them makes user code easier to write:
> * bind cannot be used with references, and limiting the coroutine
> interface to not use references is not a solution.
> * accessing the coroutine parameters using get, while it ensures a
> uniform and safe way to obtain them, is not easier to read; and in
> this case the coroutine function wouldn't follow the coroutine
> signature, which for some of you makes the interface type-unsafe.

Wait, which type unsafety are you talking about here?

>
>> If instead he decides to go with the get<> interface, then maybe it
>> is time to abandon the coroutine-as-a-function model.
>
> Could you elaborate more on the alternative model you are suggesting?

Not much, really; just that a coroutine with get() shouldn't be
instantiated as coroutine<In(Out)>, as that would lead the user to
believe that it is similar to a function. Something like
coroutine<In, Out> might be more appropriate.
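
Spelled out (with a stand-in template, for exposition only):

    #include <string>

    // Stand-in for exposition only; not the reviewed library's definition.
    template<class...> struct coroutine {};

    int main()
    {
        coroutine<int(std::string)> c1; // function-type spelling: suggests it
                                        // can be called like int(std::string)
        coroutine<int, std::string> c2; // plain type list: no call-signature
                                        // implication, fits get<>()-style access
    }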

-- gpd

