From: Mateusz Loskot (mateusz_at_[hidden])
Date: 2019-10-11 22:22:38
On Thu, 10 Oct 2019 at 19:45, Olzhas Zhumabek wrote:
> I would like to attempt a GPU implementation of some image processing
> algorithms for my university project. I would like to be able to call as
> much of the original GIL as possible, minus the io extension.
I realise I may be jumping ahead, but do you think it could be integrated
as a CUDA extension for GIL, or do you expect extensive additions to the core?
> *Problem description*
> Memory is no longer uniform when a heterogeneous system is used. The CPU
> cannot write to the GPU's on-board VRAM without experiencing a slowdown.
> This makes a solution that involves only writing a custom allocator
> impossible, because some code in e.g. std::vector does writing of its own.
> The reverse is true as well: the GPU cannot write into RAM without a
> slowdown (I'm not sure it is even possible in this direction). One could
> create a function that copies the data around when needed, but there is an
> additional problem. Imagine copying a std::vector into GPU memory. The
> naive approach would be to copy the top-level std::vector contents and
> then copy whatever it points at as well. The problem is that the top-level
> representation in GPU memory will then still point to RAM that is not
> supposed to be used. One has to somehow rewrite that pointer when copying.
> *Candidate solution*
> I'm thinking about creating an allocator (...)
> Is there a better approach than this?
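The deep-copy-with-pointer-rewrite described above can be sketched roughly like this. Plain C++ is used, with std::malloc/std::memcpy standing in for cudaMalloc/cudaMemcpy so the sketch compiles without the CUDA toolkit; all names here are illustrative, not GIL or CUDA API:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <cstring>

// Stand-ins for cudaMalloc/cudaMemcpy so the sketch builds without CUDA;
// in real code these would be the CUDA runtime calls.
void* device_alloc(std::size_t n) { return std::malloc(n); }
void device_copy(void* dst, const void* src, std::size_t n) { std::memcpy(dst, src, n); }

// A vector-like descriptor: a size and a pointer into some memory space.
struct device_vector_view {
    std::size_t size;
    float* data;  // must point into device memory once copied over
};

// Deep copy in two steps: first copy the payload into device memory,
// then build a descriptor whose pointer is already rewritten to the
// device copy, and copy that descriptor into device memory as well.
device_vector_view* deep_copy_to_device(const float* host_data, std::size_t n) {
    float* d_data = static_cast<float*>(device_alloc(n * sizeof(float)));
    device_copy(d_data, host_data, n * sizeof(float));

    device_vector_view staged{n, d_data};  // pointer rewritten on the host side
    auto* d_view = static_cast<device_vector_view*>(device_alloc(sizeof(device_vector_view)));
    device_copy(d_view, &staged, sizeof(device_vector_view));
    return d_view;
}
```

The key point is that the pointer fix-up happens in the host-side staging copy before the descriptor itself is transferred, so the device never sees a pointer into host RAM.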
Disclaimer: I have close to zero hands-on experience with CUDA,
so I'm very happy to let Stefan lead any related developments.
I am, however, interested in learning and brainstorming about it :-)
Could CUDA streams be useful?
> Are there any other types besides gil::image and gil::kernel that deal with memory?
Yes, I think gil::image is the primary 'memory manager' in GIL;
it relies on an allocator. I'd focus on the image indeed.
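Since gil::image takes an allocator as a template parameter, one direction worth exploring (an assumption on my part, untested with CUDA) is a minimal allocator whose allocate/deallocate would call cudaMallocManaged/cudaFree; with managed (unified) memory, host and device dereference the same pointer, which would sidestep the pointer-rewriting problem. std::malloc/std::free stand in here so the sketch builds without CUDA:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <new>
#include <vector>

// Minimal allocator shape. std::malloc/std::free stand in for
// cudaMallocManaged/cudaFree; with managed memory the same pointer
// would be valid on both host and device.
template <class T>
struct managed_allocator {
    using value_type = T;

    managed_allocator() = default;
    template <class U>
    managed_allocator(const managed_allocator<U>&) {}

    T* allocate(std::size_t n) {
        void* p = std::malloc(n * sizeof(T));  // would be cudaMallocManaged
        if (!p) throw std::bad_alloc();
        return static_cast<T*>(p);
    }
    void deallocate(T* p, std::size_t) { std::free(p); }  // would be cudaFree
};

template <class T, class U>
bool operator==(const managed_allocator<T>&, const managed_allocator<U>&) { return true; }
template <class T, class U>
bool operator!=(const managed_allocator<T>&, const managed_allocator<U>&) { return false; }
```

It plugs into any allocator-aware container, e.g. `std::vector<unsigned char, managed_allocator<unsigned char>>`, and in principle into gil::image's allocator parameter, though how much of GIL's view machinery would then work from device code is exactly the open question.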
-- Mateusz Loskot, http://mateusz.loskot.net
Boost list run by Boost-Gil-Owners