Subject: Re: [boost] Interest in a GPU computing library
From: Thomas Heller (thom.heller_at_[hidden])
Date: 2012-09-18 04:32:15


On 09/18/2012 03:53 AM, Kyle Lutz wrote:
> This is a call for interest in a GPU computing library for Boost. I
> have been working on the library in my spare time for the past month
> or so and it’s reached a point where I am ready for feedback. Below
> gives a brief overview, some notes on the design, and a small example
> of the library which I’ve named Boost.Compute.
>
> --- Overview ---
>
> * C++ library for general-purpose computing on GPUs/Accelerators
> * Based on OpenCL (Open Computing Language)
> * Header-only implementation
> * API inspired by the STL, Boost and Thrust
> * Boost dependencies: Config, Utility, Iterator, Exception,
> Preprocessor, TypeTraits, StaticAssert, MPL, Proto
>
> --- Design ---
>
> OpenCL is a framework for writing programs that run on parallel
> computing devices such as GPUs and multi-core CPUs. The OpenCL
> language is based on C99 with a few extensions to simplify writing
> parallel and vector-based code. More background:
> http://en.wikipedia.org/wiki/OpenCL.
>
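For anyone who has not seen OpenCL before, a kernel in that C99-based
language looks roughly like the following minimal SAXPY (shown here as a
C++ string constant purely for illustration; it is not taken from
Boost.Compute):

// OpenCL C kernel source computing y[i] = a * x[i] + y[i]
const char saxpy_source[] =
    "__kernel void saxpy(float a,\n"
    "                    __global const float* x,\n"
    "                    __global float* y)\n"
    "{\n"
    "    const size_t i = get_global_id(0);\n"
    "    y[i] = a * x[i] + y[i];\n"
    "}\n";
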
> The core of the Boost Compute library is a thin C++ wrapper over the
> OpenCL C API. It provides classes for creating and managing various
> OpenCL entities such as contexts, buffers, devices and kernels. These
> classes are written in a style consistent with Boost and the C++
> standard library.
>
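Just to make the "thin wrapper" idea concrete, I would picture the core
classes as small RAII wrappers around the corresponding OpenCL C handles,
something along these lines (purely my own sketch, not Kyle's actual API):

#include <CL/cl.h>
#include <stdexcept>

// Hypothetical RAII wrapper over cl_context, for illustration only.
class context
{
public:
    explicit context(cl_device_id device)
    {
        cl_int error = CL_SUCCESS;
        context_ = clCreateContext(0, 1, &device, 0, 0, &error);
        if (error != CL_SUCCESS)
            throw std::runtime_error("clCreateContext() failed");
    }

    ~context() { clReleaseContext(context_); }

    cl_context get() const { return context_; }

private:
    context(const context&);            // non-copyable
    context& operator=(const context&);

    cl_context context_;
};
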
> Written on top of the core library is a partial implementation of the
> C++ STL which includes common containers (e.g. vector<T>, array<T, N>)
> and algorithms (e.g. copy, find_if, sort) along with a few extensions
> (e.g. scatter, exclusive_scan, flat_set<T>).
>
> The aim of Boost.Compute’s STL API is to provide a familiar interface
> to developers wanting to easily write new code or port existing code
> to run on GPU devices. It also features a few “fancy” iterators
> inspired by the Boost.Iterator library such as transform_iterator<>,
> counting_iterator<>, and permutation_iterator<>.
>
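By analogy with Boost.Iterator, I would expect those fancy iterators to
compose with the algorithms roughly as follows (the exact spellings, e.g.
the counting_iterator constructor and the (size, context) vector
constructor, are my guesses and not confirmed API):

// fill a device vector with the squares 0, 1, 4, 9, ... entirely on the
// device, using the lambda placeholder syntax described just below;
// gpu_context is the context from the example further down
boost::compute::vector<int> squares(100, gpu_context);

boost::compute::transform(
    boost::compute::counting_iterator<int>(0),
    boost::compute::counting_iterator<int>(100),
    squares.begin(),
    _1 * _1
);
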
> Furthermore, a lambda expression library was written using Boost.Proto
> which allows for mathematical expressions to be defined at the call
> site of an algorithm and then be executed on the GPU. For example, to
> multiply each element in a vector by the square root of itself and
> then add four:
>
> transform(v.begin(), v.end(), v.begin(), _1 * sqrt(_1) + 4);
Nice work! Where can we find the code? The only things I miss from your
description are support for asynchronous operations and a way to extend
the lambda facilities. Are those based on Phoenix? How are user-defined
kernels integrated?
I am working on a very similar library
(https://github.com/sithhell/oclm); so far our efforts have gone into
making OpenCL programming easier, with a strong focus on supporting
asynchronous operations. It would be cool to see support for that in
your library as well. As far as I can tell, your work is far more
advanced than what my collaborators and I have done so far. It would be
great if I could give up the development of my own library and merge my
findings into your work!
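
To make the kernel-integration and asynchrony questions concrete: in raw
OpenCL this is roughly the boilerplate involved, and exactly what a
wrapper should hide (a sketch only, error handling omitted; "my_kernel"
stands for whatever entry point the user-supplied source defines):

#include <CL/cl.h>

// Build a user-defined kernel from source and launch it without blocking
// the host; the returned event lets the caller synchronize later or
// chain further commands.
cl_event run_user_kernel(cl_context ctx, cl_command_queue queue,
                         cl_device_id device, const char* source,
                         cl_mem buffer, size_t n)
{
    cl_int error = CL_SUCCESS;

    // compile the user-supplied kernel source
    cl_program program =
        clCreateProgramWithSource(ctx, 1, &source, 0, &error);
    clBuildProgram(program, 1, &device, 0, 0, 0);
    cl_kernel kernel = clCreateKernel(program, "my_kernel", &error);

    // bind arguments and enqueue asynchronously
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buffer);
    cl_event done = 0;
    clEnqueueNDRangeKernel(queue, kernel, 1, 0, &n, 0, 0, 0, &done);

    clReleaseKernel(kernel);
    clReleaseProgram(program);
    return done;  // e.g. clWaitForEvents(1, &done) when the result is needed
}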

>
> --- Example ---
>
> Below is a small example of using the Boost.Compute API to sort a
> vector of int values:
>
> // create vector of random values on the host
> std::vector<int> host_vector(10000);
> std::generate(host_vector.begin(), host_vector.end(), rand);
>
> // create a compute context for the default gpu device
> boost::compute::context gpu_context = boost::compute::default_gpu_context();
>
> // create a vector on the gpu
> boost::compute::vector<int> device_vector(gpu_context);
>
> // transfer the values to the device
> device_vector = host_vector;
>
> // sort the values on the device
> boost::compute::sort(device_vector.begin(), device_vector.end());
>
> // transfer the sorted values back to the host
> boost::compute::copy(device_vector.begin(), device_vector.end(),
> host_vector.begin());
>
> --- Conclusion ---
>
> The Boost Compute library provides a useful, intuitive, and familiar
> interface for running high-performance parallel code on GPU devices.
> Incorporating Boost.Compute into the Boost libraries would make GPU
> computing readily accessible to a large number of C++ developers.
>
> All comments and feedback are welcome and greatly appreciated.
>
> Thanks,
> Kyle
>

