

Subject: Re: [ublas] eigen Vs. ublas
From: Karl Rupp (rupp_at_[hidden])
Date: 2011-04-11 08:32:09


Hi Umut,

we have recently added an interface for Eigen (v2.0.15) and MTL in
ViennaCL, where we have also compared iterative solver performance. For
an unpreconditioned CG with 100 iterations, we obtained the following
timings for our 65k sample matrix:
ViennaCL-CG with MTL: 0.67 sec
ViennaCL-CG with Eigen2: 0.27 sec
ViennaCL-CG with ublas: 0.52 sec
All three libraries are used "as is", without any further tuning.
NDEBUG is set. Timings will differ with other sparsity patterns, of
course ;-)
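
For reference, here is a minimal sketch of what such an unpreconditioned
CG looks like on top of plain ublas types; the function name cg_solve and
the tolerance are my own choices for illustration, not the actual
ViennaCL code. The per-iteration cost is dominated by the sparse
matrix-vector product prod(A, p), which is where the three backends
differ.

#include <boost/numeric/ublas/matrix_sparse.hpp>
#include <boost/numeric/ublas/vector.hpp>
#include <cmath>
#include <cstddef>

namespace ublas = boost::numeric::ublas;

// Unpreconditioned conjugate gradient for a symmetric positive definite A.
// Stops after max_iter iterations or when the residual norm drops below tol.
ublas::vector<double>
cg_solve(const ublas::compressed_matrix<double> &A,
         const ublas::vector<double> &b,
         std::size_t max_iter = 100, double tol = 1e-10)
{
  ublas::vector<double> x(ublas::zero_vector<double>(b.size()));
  ublas::vector<double> r(b);   // residual r = b - A*x, with x = 0
  ublas::vector<double> p(r);   // search direction
  double rr = ublas::inner_prod(r, r);

  for (std::size_t i = 0; i < max_iter && std::sqrt(rr) > tol; ++i) {
    ublas::vector<double> Ap(ublas::prod(A, p));  // sparse matrix-vector product
    double alpha = rr / ublas::inner_prod(p, Ap);
    x += alpha * p;
    r -= alpha * Ap;
    double rr_new = ublas::inner_prod(r, r);
    p = r + (rr_new / rr) * p;  // plain ublas assignment is alias-safe here
    rr = rr_new;
  }
  return x;
}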

Unfortunately, I can't give you any timings on pure matrix-vector
products, but since you are aiming at CG-like methods, the timings above
are probably more relevant anyway.

Best regards,
Karli

On 04/10/2011 11:56 PM, Umut Tabak wrote:
> Dear all,
>
> Since most people here are more knowledgeable and more experienced
> than me, it seemed good to ask for some advice and directions before
> testing myself.
>
> I have been looking at the eigen3 matrix library, which seems to have
> nice documentation and examples as well as interfaces to some solvers
> (though the sparse module does not seem as mature as the one in ublas;
> I am not sure about this, due to a lack of info ;) ). The main issue is
> that, looking at the benchmarks page here:
>
> http://eigen.tuxfamily.org/index.php?title=Benchmark
>
> it seems that eigen outperforms ublas and gmm++, especially for vector
> operations. I have had nice experiences with both as a user, though
> maybe not on serious problems. On the matrix-vector side, however, I had
> a hard time understanding the important differences in the benchmarks,
> and I guess these are provided for dense matrices, right?
>
> There might be mistakes in what I wrote; ublas is also highly
> optimized in many respects. So if there are users of both libraries,
> could someone draw some conclusions about the two, especially for
> sparse matrix operations?
>
> One more thing: did anyone try to interface boost sparse matrices,
> especially the CSR format, with the Intel MKL library for matrix-vector
> multiplications? (I remember reading something like that here some time
> ago but could not find the post.) If yes, what is the performance gain,
> if any? Since I have to test some conjugate gradient type methods, these
> matrix-vector products are really important for me, and moreover I have
> never used MKL.
>
> Any ideas and help are highly appreciated.
>
> Greetings,
> Umut
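
Regarding the MKL question in the quoted message: I can't offer timings
either, but interfacing is mostly a matter of handing the raw CSR arrays
of a zero-based, row-major compressed_matrix to MKL's sparse BLAS. Below
is a rough, untested sketch under those assumptions; the helper name
mkl_csr_mv is my own, mkl_cspblas_dcsrgemv is MKL's zero-based CSR
matrix-vector kernel, and the index copies are only needed because ublas
stores std::size_t indices while MKL expects MKL_INT.

#include <boost/numeric/ublas/matrix_sparse.hpp>
#include <boost/numeric/ublas/vector.hpp>
#include <mkl.h>
#include <vector>

namespace ublas = boost::numeric::ublas;

// y = A * x via MKL, using the CSR arrays of a zero-based, row-major
// ublas::compressed_matrix. y must already have A.size1() elements.
void mkl_csr_mv(const ublas::compressed_matrix<double> &A,
                const ublas::vector<double> &x,
                ublas::vector<double> &y)
{
  MKL_INT m   = static_cast<MKL_INT>(A.size1());
  MKL_INT nnz = static_cast<MKL_INT>(A.nnz());

  // ublas keeps its indices as std::size_t, MKL expects MKL_INT,
  // so the index arrays are copied once here.
  std::vector<MKL_INT> row_ptr(A.index1_data().begin(),
                               A.index1_data().begin() + m + 1);
  std::vector<MKL_INT> col_idx(A.index2_data().begin(),
                               A.index2_data().begin() + nnz);

  char trans = 'N';
  // const_casts only because some MKL headers declare the arguments
  // without const.
  mkl_cspblas_dcsrgemv(&trans, &m,
                       const_cast<double *>(&A.value_data()[0]), // nonzero values
                       &row_ptr[0], &col_idx[0],                 // row pointers, column indices
                       const_cast<double *>(&x[0]), &y[0]);
}

Whether this beats ublas' own prod() for your matrices is exactly the
kind of thing you would have to benchmark yourself.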