
Subject: Re: [boost] [compute] GPGPU Library - Request For Feedback
From: Ioannis Papadopoulos (ipapadop_at_[hidden])
Date: 2013-03-06 09:36:28


On 3/4/2013 8:47 PM, Kyle Lutz wrote:
> On Sun, Mar 3, 2013 at 9:15 PM, Ioannis Papadopoulos
> <ipapadop_at_[hidden]> wrote:
>> A comparison would be nice. Moreover, why not piggy-back on the libraries
>> that are already available (and they probably have better optimizations in
>> place) and simply write a nice wrapper around them (and maybe, crazy idea,
>> allow a single codebase to use both AMD and nVidia GPUs at the same time)?
>
> Boost.Compute does allow you to use both AMD and nVidia GPUs at the
> same time with the same codebase. In fact you can also throw in your
> multi-core CPU, Xeon Phi accelerator card and even a Playstation 3.
> Not such a crazy idea after all ;-).
>
> -kyle

Thanks for the comparison.

The only issue I see with Boost.Compute is that it will have trouble
supporting the well-known architectures well. Essentially, it throws out
all the optimizations that have been researched and developed for those
platforms for maximum efficiency.

For example, there is a wealth of CUDA algorithms highly optimized for
nVidia GPUs. These would all have to be reimplemented in OpenCL and then
tuned (ouch), possibly per device (ouch x 2). That is a massive task for
a single person, or for a small group of people working in their spare
time.

However, if Boost.Compute implemented something similar to
Boost.Multiprecision's multi-backend approach, then it could use Thrust,
Bolt, or whatever else is available underneath, and fall back to the
OpenCL code only when nothing else applies (or when the user explicitly
requests it).

The way I'd see something with the title Boost.Compute is as an
algorithm selection library: you have multiple backends, chosen through
automatic configuration at compile time and, at run time, based on the
type and size of your input data.

Supporting multiple backends from the start would be a good foundation.

