Subject: Re: [boost] [Boost-users] [compute] Review period starts today December 15, 2014, ends on December 24, 2014
From: Kyle Lutz (kyle.r.lutz_at_[hidden])
Date: 2014-12-17 23:38:36
On Wed, Dec 17, 2014 at 5:13 AM, Hartmut Kaiser
<hartmut.kaiser_at_[hidden]> wrote:
> All,
>
>> Review of the Compute library starts today on Mon 15th of December 2014
>> and will last for ten days.
>>
>> The Compute library provides a C++ interface to multi-core GPGPU and CPU
>> computing platforms based on OpenCL.
>
> Caveat: I have spent only a little time looking into this library, so I might be off by a large margin.
Thanks for taking a look! I've addressed your comments in-line below.
> Mainly, I have two comments wrt the API of this library:
>
> a) I would have expected the STL-like algorithms to be 100% aligned with N4105 (http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4105.pdf). I strongly believe we will see a Boost implementation of N4105, so Boost.Compute could nicely integrate (or even lay the foundation) by defining its own execution policies.
Well, the API can't be 100% aligned with the proposal, as Boost.Compute
supports C++03 compilers. Places where the APIs differ (e.g. slightly
different signatures/semantics for some of the new algorithms) are due
to what I consider shortcomings in the proposal (at least as far
as it applies to GPU/accelerator programming).
But anyway, I think Boost.Compute is perfectly suited to provide an
N4105-style ExecutionPolicy which could be used to execute algorithms
on the GPU using the proposed standard API. Also, I would be very much
in support of an implementation of the proposal for Boost and would be
interested in collaborating with anyone working on that.
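Purely as an illustration (this is not code from Boost.Compute; both the
gpu_policy type and the sort() overload below are made up), such an
N4105-style policy could be a thin wrapper that just carries a command
queue and forwards to the existing algorithm:

// Hypothetical sketch only: neither gpu_policy nor this sort() overload
// exists in Boost.Compute; it just shows how an N4105-style execution
// policy could dispatch to the existing boost::compute::sort() algorithm.
#include <boost/compute/algorithm/sort.hpp>
#include <boost/compute/command_queue.hpp>

// Made-up policy type carrying the command queue to run on.
struct gpu_policy
{
    boost::compute::command_queue &queue;
};

// Made-up N4105-style overload that forwards to the Boost.Compute algorithm.
template<class Iterator>
void sort(gpu_policy policy, Iterator first, Iterator last)
{
    boost::compute::sort(first, last, policy.queue);
}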
> b) As already mentioned elsewhere, I find it confusing for the library to expose two different mechanisms for dealing with asynchrony. There is the queue type representing OCL events, and the partially used future<> return type for the user to synchronize on. I believe this can be unified.
To be clear, there is only one mechanism for asynchrony: the command
queue abstraction provided by OpenCL. Calls that enqueue operations on
the command queue (e.g. copying between memory buffers or launching a
kernel) are non-blocking and return an event object which can be used
to track the progress of the operation or wait for its completion. The
future<> class in Boost.Compute simply wraps the returned OpenCL event
object and provides a standard C++ future API for it. This is merely
provided as a convenience.
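As a rough sketch of that pattern (names like copy_async() and
future<>::wait() are written from memory and should be checked against
the Boost.Compute headers):

// Minimal sketch of the convenience wrapper described above; the
// asynchronous copy is enqueued on the command queue and the returned
// future<> just wraps the corresponding OpenCL event.
#include <vector>
#include <boost/compute.hpp>

namespace compute = boost::compute;

int main()
{
    compute::device device = compute::system::default_device();
    compute::context context(device);
    compute::command_queue queue(context, device);

    std::vector<float> host(1024, 1.0f);
    compute::vector<float> device_vec(host.size(), context);

    // copy_async() is non-blocking: it enqueues the transfer and
    // returns a future wrapping the OpenCL event for the operation.
    compute::future<compute::vector<float>::iterator> f =
        compute::copy_async(host.begin(), host.end(),
                            device_vec.begin(), queue);

    // ... overlap other host work here ...

    // wait() blocks until the wrapped OpenCL event signals completion.
    f.wait();

    return 0;
}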
Hope this makes things clearer.
-kyle