Subject: Re: [boost] [compute] GPGPU Library - Request For Feedback
From: Michael Marcin (mike.marcin_at_[hidden])
Date: 2013-03-02 23:39:43
Kyle Lutz wrote:
>> Looks interesting. One question: what support does the library provide to
>> orchestrate parallelism, i.e. doing useful work while the GPGPU is executing
>> a kernel? Do you have something like:
>>
>> int main()
>> {
>>     // create data array on host
>>     int host_data[] = { 1, 3, 5, 7, 9 };
>>
>>     // create vector on device
>>     boost::compute::vector<int> device_vector(5);
>>
>>     // copy from host to device
>>     future<void> f = boost::compute::copy_async(host_data,
>>                                                 host_data + 5,
>>                                                 device_vector.begin());
>>
>>     // do other stuff
>>
>>     f.get(); // wait for transfer to be done
>>
>>     return 0;
>> }
>>
>> ?
>>
>> All libraries I have seen so far assume that the CPU has to idle while
>> waiting for the GPU. Is yours different?
>
> Yes. The library allows for asynchronous computation and the API is
> almost exactly the same as your proposed example.
>
> See this example:
> http://kylelutz.github.com/compute/boost_compute/advanced_topics.html#boost_compute.advanced_topics.asynchronous_operations
>
Very nice interface, very nice docs.
I'll put some more time into looking at it.
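
For anyone else skimming the thread, here is roughly how I read the asynchronous
copy from that docs page. Treat it as a sketch based on the linked example rather
than tested code; in particular, the exact template parameter of the returned
future and whether the queue argument can be left to a default may differ from
what I show here.

#include <vector>

#include <boost/compute/system.hpp>
#include <boost/compute/device.hpp>
#include <boost/compute/context.hpp>
#include <boost/compute/command_queue.hpp>
#include <boost/compute/algorithm/copy.hpp>
#include <boost/compute/container/vector.hpp>

namespace compute = boost::compute;

int main()
{
    // set up the default device, a context and a command queue
    compute::device device = compute::system::default_device();
    compute::context context(device);
    compute::command_queue queue(context, device);

    // data on the host
    std::vector<int> host_data = { 1, 3, 5, 7, 9 };

    // vector on the device
    compute::vector<int> device_vector(host_data.size(), context);

    // start the host-to-device transfer without blocking
    // (future<void> follows the docs example; the actual return type may differ)
    compute::future<void> f = compute::copy_async(
        host_data.begin(), host_data.end(), device_vector.begin(), queue
    );

    // ... do other useful work on the CPU here ...

    // block until the transfer has finished
    f.wait();

    return 0;
}

The key point for the original question is that copy_async returns immediately,
so the CPU is free to do other work between the call and the wait().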