From: David Abrahams (dave_at_[hidden])
Date: 2003-02-07 22:37:23


"William E. Kempf" <wekempf_at_[hidden]> writes:

> David Abrahams said:
>>>> ...and if it can't be default-constructed?
>>>
>>> That's what boost::optional<> is for ;).
>>
>> Yeeeh. Once the async_call returns, you have a value, and should be able
>> to count on it. You shouldn't get back an object whose invariant allows
>> there to be no value.
>
> I'm not sure I can interpret the "yeeeh" part. Do you think there's still
> an issue to discuss here?

Yes. Yeeeeh means I'm uncomfortable with asking people to get
involved with complicated state like "it's there or it isn't there"
for something as conceptually simple as a result returned from waiting
on a thread function to finish.
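
To make that concrete, here's a rough sketch of what I mean (not your
proposed interface, and with synchronization and copyability deliberately
left out) of a result slot that never exposes an "empty" state: the value
is copy-constructed in place when the call completes, so T needs neither a
default constructor nor an optional<> wrapper:

    #include <new>
    #include <cassert>

    template <typename T>
    class result_slot
    {
    public:
        result_slot() : ready_(false) {}
        ~result_slot() { if (ready_) ptr()->~T(); }

        // Called once by the executing thread: copy-construct in place,
        // so T never needs a default constructor.
        void set(const T& v) { new (storage_) T(v); ready_ = true; }

        // Called by the waiting thread only after the wait has finished,
        // at which point the invariant is "a value is always there".
        const T& get() const { assert(ready_); return *ptr(); }

    private:
        const T* ptr() const
            { return static_cast<const T*>(static_cast<const void*>(storage_)); }
        T* ptr()
            { return static_cast<T*>(static_cast<void*>(storage_)); }

        char storage_[sizeof(T)];  // a real version also needs correct alignment
        bool ready_;               // ...and must not be copyable as written
    };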

>>> It's not "thread-creation" in this case. You don't create threads
>>> when you use a thread_pool.
>>
>> OK, "thread acquisition", then.
>
> No, not even that. An RPC mechanism, for instance, isn't acquiring
> a thread.

Yes, but we don't have an RPC mechanism in Boost. It's important that a
generic interface be able to handle RPC, but for common tasks where
nobody's interested in RPC it's just as important to have something
reasonably convenient and uncomplicated.

Anyway, if you want to stretch this to cover RPC it's easy enough:
just call it "acquisition of an executor resource."

> And a message queue implementation wouldn't be acquiring
> a thread either.

But it _would_ be acquiring an execution resource.

> These are the two obvious (to me) alternatives, but the idea is to
> leave the call/execute portion orthogonal and open. Alexander was
> quite right that this is similar to the "Future" concept in his Java
> link. The "Future" holds the storage for the data to be returned
> and provides the binding mechanism for what actually gets called,
> while the "Executor" does the actual invocation. I've modeled the
> "Future" to use function objects for the binding, so the "Executor"
> can be any mechanism which can invoke a function object. This makes
> thread, thread_pool and other such classes "Executors".

Yes, it is a non-functional (stateful) model which allows efficient
re-use of result objects when they are large, but complicates simple
designs that could be better modeled as stateless functional programs.
When there is an argument for "re-using the result object", C++
programmers tend to write void functions and pass the "result" by
reference anyway. There's a good reason people write functions
returning non-void, though. There's no reason to force them to twist
their invocation model inside out just to achieve parallelism.
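
To illustrate, here's the same computation written both ways (async_call
and async_result are hypothetical names standing in for whatever we end
up with):

    #include <boost/bind.hpp>
    #include <boost/ref.hpp>
    #include <vector>

    std::vector<int> make_table(int n)                  // functional form
    {
        return std::vector<int>(n, 0);
    }

    void make_table_into(std::vector<int>& out, int n)  // inside-out form
    {
        out.assign(n, 0);
    }

    int main()
    {
        // The value-returning interface I'm arguing for would keep the
        // functional form usable, e.g. (hypothetically):
        //   async_result<std::vector<int> > r
        //       = async_call(boost::bind(&make_table, 1000));
        //   std::vector<int> table = r.get();

        // A nullary-void-only world forces the inside-out form: declare
        // the result up front and remember boost::ref.  Invoked inline
        // here just to show the binding:
        std::vector<int> table;
        boost::bind(&make_table_into, boost::ref(table), 1000)();
        return 0;
    }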

If you were designing a language from the ground up to support
parallelism, would you encourage or rule out a functional programming
model? I bet you can guess what the designers of Erlang
(http://www.erlang.org/) chose to do ;o)

>>> And there's other examples as well, such as RPC mechanisms.
>>
>> True.
>>
>>> And personally, I find passing such a "creation parameter" to be
>>> turning the design inside out.
>>
>> A bit, yes.

It turns _your_ design inside out, which might not be a bad thing for
quite a few use cases ;-)

>>> It might make things a little simpler for the default case, but it
>>> complicates usage for all the other cases. With the design I
>>> presented every usage is treated the same.
>>
>> There's a lot to be said for making "the default case" very easy.
>
> Only if you have a clearly defined "default case". Someone doing a lot of
> client/server development might argue with you about thread creation being
> a better default than RPC calling, or even thread_pool usage.

Yes, they certainly might. Check out the systems that have been
implemented in Erlang with great success and get back to me ;-)

>>> More importantly, if you really don't like the syntax of my design, it
>>> at least allows you to *trivially* implement your design.
>>
>> I doubt most users regard anything involving typesafe varargs as
>> "trivial to implement."
>
> Well, I'm not claiming to support variadic parameters here. I'm only
> talking about supporting a 0..N for some fixed N interface.

That's what I mean by "typesafe varargs"; it's the best we can do in
C++98/02.

> And with Boost.Bind already available, that makes other such
> interfaces "trivial to implement". At least usually.

For an expert in library design familiar with the workings of boost
idioms like ref(x), yes. For someone who just wants to accomplish a
task using threading, no.
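
For instance, even the "just bind it yourself" route assumes the user is
already comfortable with something like this (parse() is only a stand-in
for whatever they actually want to run):

    #include <boost/bind.hpp>
    #include <boost/function.hpp>
    #include <boost/ref.hpp>
    #include <iostream>
    #include <string>

    // An ordinary two-argument function the user wants to run elsewhere.
    int parse(const std::string& text, int radix) { return 42; }  // stub

    int main()
    {
        std::string text = "2a";

        // To feed this to a "nullary function objects only" primitive, the
        // user must know how to bind it down to zero arguments, and when
        // boost::cref / boost::ref are needed to avoid copies or to keep
        // reference semantics:
        boost::function<int ()> call
            = boost::bind(&parse, boost::cref(text), 16);

        std::cout << call() << '\n';   // invoked directly here
        return 0;
    }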

> The suggestion that the binding occur at the time of construction is
> going to complicate things for me, because it makes it much more
> difficult to handle the reference semantics required here.

a. What "required reference semantics?"

b. As a user, I don't really care (within reason) if I'm making it hard
   for the library provider. It's the library provider's job to make my
   life easier.

>>> Sometimes there's something to be said for being "lower level".
>>
>> Sometimes. I think users have complained all along that the
>> Boost.Threads library takes the "you can implement it yourself using our
>> primitives" line way too much. It's important to supply
>> simplifying high-level abstractions, especially in a domain as
>> complicated as threading.
>
> OK, I actually believe this is a valid criticism. But I also think
> it's wrong to start at the top of the design and work backwards. In
> other words, I expect that we'll take the lower level stuff I'm
> building now and use them as the building blocks for the higher
> level constructs later. If I'd started with the higher level stuff,
> there'd be things that you couldn't accomplish.

I don't buy it, at least not for this case. You could just spend the
time to find a high-level interface that meets all the needs. For
example, if you had an interface which takes an (optional?) "executor
resource supplier" and a function (object), and can return a value,
you can still use it to do your "return by reference" business with a
void-returning function taking a reference parameter.
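
Something along these lines, say. Every name here is illustrative, a real
version would take the "executor resource supplier" argument instead of
always spawning a thread, and error handling and synchronization details
are glossed over:

    #include <boost/bind.hpp>
    #include <boost/function.hpp>
    #include <boost/thread/thread.hpp>
    #include <iostream>
    #include <string>

    // Bare-bones sketch of a value-returning async call.
    template <class R>
    class async_result
    {
    public:
        explicit async_result(boost::function<R ()> work)
            : thread_(boost::bind(&async_result::run, this, work)) {}

        R get() { thread_.join(); return value_; }  // block, then yield the value

    private:
        void run(boost::function<R ()> work) { value_ = work(); }

        R value_;              // simplification: requires a default-constructible
                               // R; the in-place construction sketch above shows
                               // how to avoid that
        boost::thread thread_; // a real version would also accept void-returning
                               // work, for the "pass the result by reference" style
    };

    int parse(const std::string& text, int radix) { return 42; }   // stub

    int main()
    {
        async_result<int> r(boost::bind(&parse, std::string("2a"), 16));
        std::cout << r.get() << '\n';
        return 0;
    }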

>>>>>> That's what we mean by the terms "high-level" and "encapsulation"
>>>>>> ;-)
>>>>>
>>>>> Yes, but encapsulation shouldn't hide the implementation to the
>>>>> point that users aren't aware of what the operations actually are.
>>>>> ;)
>>>>
>>>> I don't think I agree with you, if you mean that the implementation
>>>> should be apparent from looking at the usage. Implementation details
>>>> that must be revealed should be shown in the documentation.
>>>
>>> I was referring to the fact that you have no idea if the "async call"
>>> is being done via a thread, a thread_pool, an RPC mechanism, a simple
>>> message queue, etc. Sometimes you don't care, but often you do.
>>
>> And for those cases you have a low-level interface, right?
>
> Where's the low level interface if I don't provide it? ;)

I never suggested that you should not supply the capabilities of your
low-level interface.

Well, I don't really feel like arguing about this much longer. I
certainly understand what you're saying: simple primitives allow for
lots of flexibility, and dealing only with functions that take no
parameters and return nothing certainly leaves lots of room to
maneuver.

I still think I'm onto something with the importance of being able to
do functional concurrent programming. The minimum requirement for that
is to be able to return a result; you can always bind all the arguments
to produce fully curried function objects that take no arguments, but
that seems needlessly limiting. 'Nuff said; if you can't see my point
now I'm gonna let it lie.

Cheers,
Dave

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com
