Boost.MPI does not provide any high-level abstraction for CUDA devices, but if you need that functionality you can always fall back to MPI's C API: use Boost.MPI where it covers your needs, and the C API where it does not provide equivalent functionality. And if you do end up writing wrappers for C API features that Boost.MPI does not cover, please do contribute patches!
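
For what it's worth, a boost::mpi::communicator converts implicitly to an MPI_Comm, so mixing the two is straightforward. Below is a rough, untested sketch of how that might look with a CUDA-aware MPI build; the buffer size, tag, and buffer contents are just placeholders:

#include <boost/mpi.hpp>
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char* argv[])
{
    boost::mpi::environment env(argc, argv);
    boost::mpi::communicator world;

    // Boost.MPI for ordinary host-side messaging.
    int value = world.rank();
    boost::mpi::broadcast(world, value, 0);

    // Drop to the C API for device buffers: a CUDA-aware MPI accepts
    // device pointers directly, and the Boost.MPI communicator converts
    // implicitly to MPI_Comm.
    const int n = 1024;               // placeholder buffer size
    float* d_buf = 0;
    cudaMalloc(reinterpret_cast<void**>(&d_buf), n * sizeof(float));

    if (world.size() > 1) {
        if (world.rank() == 0)
            MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, world);
        else if (world.rank() == 1)
            MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, world, MPI_STATUS_IGNORE);
    }

    cudaFree(d_buf);
    return 0;
}

Whether the device-pointer path works at all is entirely down to the underlying MPI library being CUDA-aware (as in the MVAPICH2 and Open MPI versions mentioned below); Boost.MPI itself does nothing special either way.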


On Mon, Mar 31, 2014 at 8:30 PM, Christoph Winter <christoph.winter@stud.hs-regensburg.de> wrote:
Hi all,

I have a short question: does Boost.MPI support CUDA-aware MPI backends
(e.g. MVAPICH 1.8/1.9b, OpenMPI 1.7 (beta), ...), or might I face some
problems? I'd like to use the Boost.MPI abstraction, but I'm not sure
how Boost.MPI interfaces with the MPI standard and how CUDA-awareness
affects that.

Greetings,
Christoph