Subject: Re: [boost] [compute] Review
From: Ioannis Papadopoulos (ipapadop_at_[hidden])
Date: 2014-12-31 12:57:53
On 12/30/2014 11:57 PM, Kyle Lutz wrote:
> On Tue, Dec 30, 2014 at 8:14 PM, Yiannis Papadopoulos
> <ipapadop_at_[hidden]> wrote:
>> Hi,
>>
>> This is my review of Boost.Compute:
>>
>> 2. What is your evaluation of the implementation?
>>
>> There is some code duplication (e.g. type traits) and various other bits and
>> pieces that can be moved to existing Boost components. I think there should
>> be some effort spent towards that.
>
> Could you let me know which type-traits you think are duplicated or
> should be moved elsewhere?
For example, is_fundamental<T> is already implemented in
Boost.TypeTraits. And type_traits/type_name.hpp may be able to leverage
Boost.TypeIndex?
>> 8. Do you think the library should be accepted as a Boost library?
>>
>> This will be a maybe. It is a well-written library with a few minor issues
>> that can be resolved.
>>
>> However, why would someone use Boost.Compute against what is out there?
>> Average users can resort to Bolt or Thrust. Power users will probably always
>> try to hand-tune their OpenCL or CUDA algorithm. How can we test it and
>> prove its performance?
>
> Yes, Thrust and Bolt are alternatives. The problem is that each is
> incompatible with the other. Thrust works on NVIDIA GPUs while Bolt
> only works on AMD GPUs. Choosing one will preclude your code from
> working on devices from the other.
>
> On the other hand, code written with Boost.Compute will work on any
> device with an OpenCL implementation. This includes NVIDIA GPUs, AMD
> GPUs/CPUs, Intel GPUs/CPUs as well as other more exotic architectures
> (Xeon Phi, FPGAs, Parallella Epiphany, etc.). Furthermore, unlike
> CUDA/Thrust, Boost.Compute requires no special compiler or
> compiler extensions in order to execute code on GPUs; it is a pure
> library-level solution which is compatible with any standard C++
> compiler.
>
> Also, Boost.Compute does allow for users to access the low-level APIs
> and execute their own hand-rolled kernels (and even interleave their
> custom operations with the high-level algorithms available in
> Boost.Compute). I think using Boost.Compute in this way allows for
> both rapid development and the ability to fully-optimize kernels for
> specific operations where necessary.
>
> Thanks for the review. Let me know if I can explain anything more clearly.
>
> -kyle
>
> [1] https://github.com/kylelutz/compute/tree/master/perf
I realize that, but what is the advantage of Boost.Compute over doing
something like:
template<class InputIterator, class EqualityComparable>
auto count(InputIterator first, InputIterator last,
           const EqualityComparable& value)
{
#ifdef THRUST
    return thrust::count(first, last, value);
#elif defined(BOLT)
    return bolt::cl::count(first, last, value);
#else
    return std::count(first, last, value);
#endif
}
where first and last are iterators into some vector<> that is #ifdef'ed
similarly (or just use some template machinery to invoke the right
algorithm based on the container type). I have this concern, and IMO
users will ask themselves the same question while shopping for GPU
libraries.
Just to be clear, I am not dissing your work: I really like it and your
positive attitude for addressing issues.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk