Hi,

I am the developer of VexCL :). 


On Mon, Mar 17, 2014 at 7:43 PM, Rhys Ulerich <rhys.ulerich@gmail.com> wrote:
Hi Kyle

> Good point. That FAQ entry was written before VexCL added its CUDA
> back-end (which occurred relatively recently). Boost.Compute and VexCL
> have different aims and scopes. Boost.Compute is more similar to the
> C++ STL while VexCL is more similar to a linear algebra library like
> Eigen. Also see this StackOverflow question [1] entitled "Differences
> between VexCL, Thrust, and Boost.Compute".
>
> [1] http://stackoverflow.com/questions/20154179/differences-between-vexcl-thrust-and-boost-compute

Thank you for the information.

I have updated the answer on stackoverflow following Gonzalo's comment above about the lack of easy interaction with user-defined functors and lambdas. I'll duplicate it here for convenience:

Update: @gnzlbg commented that there is no support for C++ functors and lambdas in OpenCL-based libraries. And indeed, OpenCL is based on C99 and is compiled from sources stored in strings at runtime, so there is no easy way to fully interact with C++ classes. But to be fair, OpenCL-based libraries do support user-defined functions and even lambdas to some extent.
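
To give an idea of what "to some extent" means, here is a minimal sketch of a user-defined function in VexCL (the VEX_FUNCTION form below follows the current VexCL documentation; the exact macro signature may differ between versions):

    #include <vexcl/vexcl.hpp>

    int main() {
        vex::Context ctx(vex::Filter::DoublePrecision);

        const size_t n = 1024 * 1024;
        vex::vector<double> X(ctx, n), Y(ctx, n), Z(ctx, n);
        X = 1.0;
        Y = 2.0;

        // The body below is turned into OpenCL C source and compiled at
        // runtime, which is why the function is declared through a macro
        // rather than as an ordinary C++ functor or lambda.
        VEX_FUNCTION(double, squared_radius, (double, x)(double, y),
                return x * x + y * y;
                );

        // The user-defined function participates in vector expressions
        // just like a built-in one.
        Z = sqrt(squared_radius(X, Y));
    }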

Having said that, CUDA-based libraries (and maybe C++ AMP) have the obvious advantage of an actual compile-time compiler (can you even say that?), so the integration with user code can be much tighter.
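
As an illustration of what "tighter integration" means here, this is the standard saxpy example from the Thrust documentation, where an ordinary C++ functor is compiled by nvcc together with the library code and inlined into the generated kernel:

    #include <thrust/device_vector.h>
    #include <thrust/transform.h>

    // An ordinary C++ functor; nvcc sees its body at compile time and can
    // inline it directly into the kernel generated for thrust::transform.
    struct saxpy_functor {
        float a;
        saxpy_functor(float a) : a(a) {}

        __host__ __device__
        float operator()(float x, float y) const {
            return a * x + y;
        }
    };

    int main() {
        thrust::device_vector<float> x(1 << 20, 1.0f);
        thrust::device_vector<float> y(1 << 20, 2.0f);

        // y = a * x + y
        thrust::transform(x.begin(), x.end(), y.begin(), y.begin(),
                          saxpy_functor(2.0f));
    }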

Another point is that when you have an AMD GPU (which generally provides more performance per dollar), the more advanced CUDA compiler has zero advantages :).


I do believe that there is a place for a library (such as Boost.Compute) that would provide a set of standard accelerated algorithms. I missed such a library a few times while implementing VexCL. But Kyle, before you propose Boost.Compute for inclusion into Boost (I think you should really do that!), you should make sure that the provided algorithms perform on par with other libraries (e.g. Thrust) on the same hardware (I have not compared the performance myself, so this may already be the case).
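
For reference, this is roughly what such a standard accelerated algorithm looks like in Boost.Compute's STL-style interface (a sketch based on the Boost.Compute documentation; the header layout may differ between versions):

    #include <vector>
    #include <algorithm>
    #include <cstdlib>

    #include <boost/compute/core.hpp>
    #include <boost/compute/algorithm/copy.hpp>
    #include <boost/compute/algorithm/sort.hpp>
    #include <boost/compute/container/vector.hpp>

    namespace compute = boost::compute;

    int main() {
        // Default OpenCL device, context, and command queue.
        compute::device device = compute::system::default_device();
        compute::context context(device);
        compute::command_queue queue(context, device);

        std::vector<float> host(1 << 20);
        std::generate(host.begin(), host.end(), rand);

        // Copy to the device, sort there, copy back.
        compute::vector<float> dev(host.size(), context);
        compute::copy(host.begin(), host.end(), dev.begin(), queue);
        compute::sort(dev.begin(), dev.end(), queue);
        compute::copy(dev.begin(), dev.end(), host.begin(), queue);
    }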

--
Cheers,
Denis