
Subject: Re: [ublas] sparse matrix dense matrix multiplication dilemma
From: Tarek Elsayed (t.elsayed_at_[hidden])
Date: 2011-09-26 04:57:58


I tried Eigen and MTL4. Neither could compete with ublas in its slowness!
If you are thinking of moving to GMM++, you could also try MTL4; I found it
very fast in sparse matrix computations. If you want even more speed, you can
use the Intel MKL library. I have used MKL and MTL4 together: MTL4 can
provide the raw pointers to pass to MKL functions, so you don't have
to rewrite your entire code. Since MTL4, Eigen, and GMM++ are all templated
libraries like ublas, you shouldn't have to change much of your code to
benchmark each of them.

On Sat, Sep 24, 2011 at 10:49 PM, Umut Tabak <u.tabak_at_[hidden]> wrote:

> Dear all,
>
> What is the most efficient way to do sparse matrix-dense matrix
> computations in ublas if any? Say sparse matrix is in csr format.
>
> In my tests with axpy_prod and prod, I got very bad results, and almost no
> results at all after a long time. I wonder whether these multiplications are
> optimized, or whether there are some tricks; apparently gmm++ outperforms
> ublas on these kinds of multiplications and many similar operations.
>
> I have started thinking about porting all my sparse matrices to gmm++;
> however, I do not want to miss any information that I might be
> overlooking.
>
> Your help and input is very much appreciated.
>
> Best regards,
> Umut
> _______________________________________________
> ublas mailing list
> ublas_at_[hidden]
> http://lists.boost.org/mailman/listinfo.cgi/ublas
> Sent to: t.elsayed_at_thphys.uni-heidelberg.de
>