I tried Eigen and MTL4; neither could compete with ublas in its slowness! If you are thinking of moving to GMM++, you can also try MTL4: I found it very fast in sparse matrix computations. If you want more speed, you can use the Intel MKL library. I have used MKL and MTL4 together; MTL4 can give you the raw pointers to pass to MKL functions, so you don't have to rewrite your entire code. Since MTL4, Eigen, and GMM++ are all templated libraries like ublas, you shouldn't have to change much of your code to benchmark each of them.

On Sat, Sep 24, 2011 at 10:49 PM, Umut Tabak <u.tabak@tudelft.nl> wrote:
Dear all,

What is the most efficient way to do sparse matrix-dense matrix computations in ublas, if any? Say the sparse matrix is in CSR format.

In my tests with axpy_prod and prod, I got very bad results, and in some cases no result at all after a long time. I wonder whether these multiplications are optimized, or whether there are some tricks. Apparently gmm++ outperforms ublas on these kinds of multiplications and many similar operations.

I have started thinking about completely porting all my sparse matrices to gmm++; however, I do not want to miss any information that I might be overlooking.

Your help and input are very much appreciated.

Best regards,
Umut
_______________________________________________
ublas mailing list
ublas@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/ublas