Hello Kyle,

On 17 March 2014 00:03, Kyle Lutz <kyle.r.lutz@gmail.com> wrote:
I'm proud to announce the initial release (version 0.1) of
Boost.Compute! It is available on GitHub [1] and instructions for
using the library can be found in the documentation [2].

Boost.Compute is a GPGPU and parallel-programming library based on
OpenCL. It provides an STL-like API and implements many common
containers (e.g. vector<T>, array<T, N>) as well as many common
algorithms (e.g. sort(), accumulate(), transform()). A full list can
be found in the header reference [3].
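For a quick feel of the API, a minimal example along these lines
(the documentation [2] has complete, tested versions of this):

    #include <cstdlib>
    #include <vector>
    #include <algorithm>
    #include <boost/compute.hpp>

    namespace compute = boost::compute;

    int main()
    {
        // get the default compute device and set up a context and queue on it
        compute::device device = compute::system::default_device();
        compute::context context(device);
        compute::command_queue queue(context, device);

        // generate some random data on the host
        std::vector<float> host_vector(10000);
        std::generate(host_vector.begin(), host_vector.end(), rand);

        // create a vector on the device and copy the host data to it
        compute::vector<float> device_vector(host_vector.size(), context);
        compute::copy(host_vector.begin(), host_vector.end(),
                      device_vector.begin(), queue);

        // sort the data on the device
        compute::sort(device_vector.begin(), device_vector.end(), queue);

        // copy the sorted data back to the host
        compute::copy(device_vector.begin(), device_vector.end(),
                      host_vector.begin(), queue);

        return 0;
    }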

I hope to propose Boost.Compute for review in the next few months,
but for now I'm looking for more widespread testing and feedback from
the Boost community (please note the FAQ [4] and design rationale [5],
where I hope to have answered some common questions).

Thanks,
Kyle

[1] https://github.com/kylelutz/compute
[2] http://kylelutz.github.io/compute/
[3] http://kylelutz.github.io/compute/compute/reference.html
[4] http://kylelutz.github.io/compute/boost_compute/faq.html
[5] http://kylelutz.github.io/compute/boost_compute/design.html

I am looking forward to trying this out. I have a couple of questions:

- How do the algorithms compare, performance-wise, with similar CUDA libraries? I remember trying Boost.Compute in the early days and IIRC there was quite a performance gap. Would it be possible to add a performance section to the documentation?

- Are you planning any support for multi-device computations? In my experience, available memory can be quite a bottleneck on GPUs, and having support for multi-device computations (i.e., multiple GPUs, but also GPU/CPU hybrids) would be quite handy. At the moment I assume the partitioning has to be done by hand, roughly along the lines of the sketch below.
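Just to illustrate what I mean, a rough sketch of the manual approach
I have in mind (I haven't verified all of these calls against the
current headers, so take it as pseudo-code):

    #include <vector>
    #include <boost/compute.hpp>

    namespace compute = boost::compute;

    int main()
    {
        // enumerate all OpenCL devices on the system (GPUs and CPUs)
        std::vector<compute::device> devices = compute::system::devices();

        // one context and one command queue per device
        std::vector<compute::context> contexts;
        std::vector<compute::command_queue> queues;
        for(const compute::device &d : devices){
            contexts.push_back(compute::context(d));
            queues.push_back(compute::command_queue(contexts.back(), d));
        }

        // the data then has to be split by hand across the devices, with
        // one device vector per context and one algorithm call per queue
        // -- this is the bookkeeping I hope the library could eventually
        // take care of.
        return 0;
    }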

I am happy to see this kind of work happening in OpenCL and Boost land, and I really like the STL-like design of the library.

Cheers,

  Francesco.