Subject: Re: [ublas] cublas bindings
From: OvermindDL1 (overminddl1_at_[hidden])
Date: 2010-07-20 21:01:22
On Tue, Jul 20, 2010 at 9:45 AM, Andrey Asadchev <asadchev_at_[hidden]> wrote:
> On Tue, Jul 20, 2010 at 1:54 AM, Rutger ter Borg <rutger_at_[hidden]> wrote:
>> Andrey Asadchev wrote:
>> > thank you
>> > I was not aware of the existing bindings.
>> > I will definitely take a look at them.
>> > I am trying to make the implementation a little more expression-like,
>> > rather than a direct mirror of its Fortran counterpart.
>> > Here is an example:
>> >     cublas::matrix<double> A;
>> >     cublas::matrix<double> B;
>> >     B = cublas::matrix<double>(5,5);
>> >     A = B; // device to device copy
>> >     ublas::matrix<double> h(cublas::host(B)); // device to host copy
>> >     ublas::matrix<double, ublas::column_major> f;
>> >     cublas::host(f) = A; // device to host copy
>> >     cublas::matrix<double> C = cublas::gemm(A, B);
>> >     B += cublas::gemm(A, C);
>> >     cublas::gemm(3.0, B, C, 6.0, A); // fortran style dgemm
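As a rough illustration of the deferred-evaluation style in the snippet above, here is a minimal CPU-only sketch (plain C++ with made-up names; the actual cublas bindings would dispatch to the cuBLAS gemm routine rather than a host loop). The idea is that `gemm(alpha, A, B)` only builds a lightweight expression object, and the BLAS-style update `C = alpha*A*B + beta*C` happens at assignment time:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical host-side matrix, row-major, for illustration only.
struct matrix {
    std::size_t rows, cols;
    std::vector<double> data;
    matrix(std::size_t r = 0, std::size_t c = 0)
        : rows(r), cols(c), data(r * c, 0.0) {}
    double&  operator()(std::size_t i, std::size_t j)       { return data[i * cols + j]; }
    double   operator()(std::size_t i, std::size_t j) const { return data[i * cols + j]; }
};

// gemm_expr captures the operands; nothing is computed until it is
// assigned into a target matrix (BLAS semantics: C = alpha*A*B + beta*C).
struct gemm_expr {
    double alpha;
    const matrix &a, &b;   // assumes a.cols == b.rows
    void eval_into(double beta, matrix& c) const {
        for (std::size_t i = 0; i < a.rows; ++i)
            for (std::size_t j = 0; j < b.cols; ++j) {
                double s = 0.0;
                for (std::size_t k = 0; k < a.cols; ++k)
                    s += a(i, k) * b(k, j);
                c(i, j) = alpha * s + beta * c(i, j);
            }
    }
};

inline gemm_expr gemm(double alpha, const matrix& a, const matrix& b) {
    return gemm_expr{alpha, a, b};
}

// "C = gemm(A, B)" maps to beta = 0; "C += gemm(A, B)" maps to beta = 1.
inline void assign(matrix& c, const gemm_expr& e)      { e.eval_into(0.0, c); }
inline void plus_assign(matrix& c, const gemm_expr& e) { e.eval_into(1.0, c); }
```

In the real bindings, `eval_into` would be the point at which a single `cublasDgemm` call is issued, so `B += cublas::gemm(A, C)` needs no temporary.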
>> We have chosen to put expression syntax and its handling in the hands of
>> higher-level libraries, e.g., MTL, GLAS, etc. In turn, some of these
>> libraries have chosen to use the bindings for their BLAS/LAPACK access.
>> One of the things I would like to achieve with the bindings is programs
>> that can be recompiled against (a mixture of) different backends, hybrid
>> and/or GPU backends included. Then, it would be nice if users were able to
>> write code as if a GPU is present, but, if the CPU-only compilation option
>> is chosen, all host/device copying is elided.
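One way to realize that recompilation goal, sketched here with made-up names (this is not anything in the bindings themselves), is a `host()` adapter whose meaning depends on the backend selected at compile time: under a GPU backend it would perform a device-to-host copy, while in a CPU-only build the "device" matrix already lives in host memory, so `host()` collapses to an identity pass-through and the copy disappears:

```cpp
#include <vector>

// Hypothetical sketch of backend selection. A real build system would
// define BACKEND_GPU when compiling against the CUDA backend.
using host_matrix = std::vector<double>;

#ifndef BACKEND_GPU

// CPU-only backend: "device" memory is just host memory.
using device_matrix = std::vector<double>;

// host() is an identity adapter -- no copy, same address space,
// so code written "as if a GPU is present" still compiles and runs.
inline const host_matrix& host(const device_matrix& m) {
    return m;
}

#else

// GPU backend (not compiled here): device_matrix would own device
// memory and host() would issue a device-to-host transfer, e.g. via
// cudaMemcpy, returning a freshly populated host_matrix.

#endif
```

The user-facing code `ublas::matrix<double> h(cublas::host(B));` then means "copy down from the device" or "borrow the host data directly" depending solely on which backend the translation unit was compiled against.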
> Hello Rutger,
> I think my requirements were a little bit different, as I need explicit GPU
> "semantics" (there is some domain cuda code).
> In any case, this is a learning experience for me.
> I am going to continue working on this approach (I really need it).
> Once I make some progress, and if the quality is okay, would you be willing
> to integrate my bindings somehow?
I just have a quick question out of curiosity. I was under the impression
that cuda was an nvidia-only construct that no one else supported, and
that it only supports programming the gpu, whereas opencl lets you do
the same things, but on the gpu, cpu, some dedicated vector board,
etc... Why would someone use cuda over opencl, or am I vastly
mistaken in my impression?