
Subject: Re: [boost] [compute] GPGPU Library - Request For Feedback
From: Hartmut Kaiser (hartmut.kaiser_at_[hidden])
Date: 2013-03-03 18:37:52


> > Looks interesting. One question: what support does the library provide
> > to orchestrate parallelism, i.e. doing useful work while the GPGPU is
> > executing a kernel? Do you have something like:
> >
> > int main()
> > {
> >     // create data array on host
> >     int host_data[] = { 1, 3, 5, 7, 9 };
> >
> >     // create vector on device
> >     boost::compute::vector<int> device_vector(5);
> >
> >     // copy from host to device
> >     future<void> f = boost::compute::copy_async(host_data,
> >                                                 host_data + 5,
> >                                                 device_vector.begin());
> >
> >     // do other stuff
> >
> >     f.get(); // wait for transfer to be done
> >
> >     return 0;
> > }
> >
> > ?
> >
> > All libraries I have seen so far assume that the CPU has to idle while
> > waiting for the GPU; is yours different?
>
> Yes. The library allows for asynchronous computation and the API is almost
> exactly the same as your proposed example.
>
> See this example:
> http://kylelutz.github.com/compute/boost_compute/advanced_topics.html#boost_compute.advanced_topics.asynchronous_operations
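(For reference, the pattern from the linked tutorial page boils down to
roughly the following. This is a sketch only: the setup boilerplate and the
exact signatures, e.g. whether copy_async returns compute::future<void> and
takes a trailing command_queue argument, are assumptions based on the quoted
code and the tutorial, not verified against the headers.)

#include <boost/compute/algorithm/copy.hpp>
#include <boost/compute/async/future.hpp>
#include <boost/compute/container/vector.hpp>
#include <boost/compute/system.hpp>

namespace compute = boost::compute;

int main()
{
    // set up the default device, a context and a command queue
    compute::device gpu = compute::system::default_device();
    compute::context context(gpu);
    compute::command_queue queue(context, gpu);

    // create data array on host and vector on device
    int host_data[] = { 1, 3, 5, 7, 9 };
    compute::vector<int> device_vector(5, context);

    // enqueue the host-to-device copy without blocking the host
    compute::future<void> f = compute::copy_async(
        host_data, host_data + 5, device_vector.begin(), queue
    );

    // ... do other useful work on the CPU ...

    f.wait(); // block until the transfer has completed
    return 0;
}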

That's excellent (sorry, I had not seen this in the docs before). However, I
think it's not a good idea to create your own futures. A library-specific
future neither scales well nor composes with std::future or boost::future.
Is there a way to use whatever futures (or, more specifically, whatever
threading implementation) the user decides to use?
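
For concreteness, what I have in mind is an adapter along the following
lines (purely hypothetical, not something Boost.Compute provides): anything
with a blocking wait() can be funneled into a std::future<void>, so that a
GPU operation composes with whatever threading implementation the user has
chosen.

#include <future>
#include <memory>
#include <utility>

// Hypothetical adapter: wrap any library-specific future that offers a
// blocking wait() into a std::future<void>.
template <class ComputeFuture>
std::future<void> to_std_future(ComputeFuture f)
{
    // a shared_ptr keeps the wrapped future alive inside the C++11
    // lambda (generalized lambda captures only arrive with C++14)
    auto shared = std::make_shared<ComputeFuture>(std::move(f));

    // NOTE: std::async dedicates a thread to the blocking wait; a real
    // implementation would register an OpenCL event callback and fulfil
    // a std::promise instead, avoiding the extra thread.
    return std::async(std::launch::async, [shared] { shared->wait(); });
}

Usage would then be something like

    std::future<void> f =
        to_std_future(boost::compute::copy_async(first, last, result, queue));

after which f can be waited on, moved, or handed to whatever future-based
machinery the user already has.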

Regards Hartmut
---------------
http://boost-spirit.com
http://stellar.cct.lsu.edu

