
Subject: Re: [ublas] eigen Vs. ublas
From: Nasos Iliopoulos (nasos_i_at_[hidden])
Date: 2011-04-11 07:42:02


Umut,
A partial answer to the items of your question:

The main performance difference between Eigen and uBLAS is that Eigen performs explicit vectorization: operands are placed in special wide registers so that a single instruction operates on multiple data elements at once (SIMD).

Modern CPUs have 128-bit SIMD registers, which can hold (and operate on) 4 floats or 2 doubles simultaneously. This means the performance gain for double precision (if you are using a linear algebra library for anything other than graphics programming, you probably want double precision) is at most about a factor of 2. That is the most you can expect from Eigen over uBLAS, but this is not the full story.
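
To make the explicit vectorization point concrete, here is a rough
sketch (purely illustrative, not Eigen's actual code) of an addition
loop written once in plain scalar form and once with SSE2 intrinsics;
each 128-bit register holds two doubles, which is where the
factor-of-two ceiling for double precision comes from:

#include <emmintrin.h>  // SSE2 intrinsics (128-bit registers)

// Scalar version: one double per addition.
void add_scalar(const double* a, const double* b, double* c, int n) {
    for (int i = 0; i < n; ++i)
        c[i] = a[i] + b[i];
}

// Explicitly vectorized version: two doubles per register, so each
// _mm_add_pd adds two elements at once.
void add_sse2(const double* a, const double* b, double* c, int n) {
    int i = 0;
    for (; i + 2 <= n; i += 2) {
        __m128d va = _mm_loadu_pd(a + i);
        __m128d vb = _mm_loadu_pd(b + i);
        _mm_storeu_pd(c + i, _mm_add_pd(va, vb));
    }
    for (; i < n; ++i)  // scalar tail when n is odd
        c[i] = a[i] + b[i];
}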

Compilers usually perform implicit vectorization: what Eigen does internally as a library is done automatically by the compiler (http://gcc.gnu.org/projects/tree-ssa/vectorization.html). I find that compilers are becoming better and better at this, and uBLAS performance is now better than it used to be precisely because compilers do these things better.
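
For comparison, a sketch of what the compiler side looks like (the
flags below are GCC's; I am assuming GCC here since that is what the
page above describes). A simple, dependence-free loop like this is
exactly what the auto-vectorizer picks up at -O3, where
-ftree-vectorize is enabled by default:

// g++ -O3 -msse2   (auto-vectorization is on at -O3;
// -ftree-vectorizer-verbose=N makes GCC report what it vectorized)
void add_auto(const double* __restrict a, const double* __restrict b,
              double* __restrict c, int n) {
    // __restrict promises no aliasing, which helps the vectorizer.
    for (int i = 0; i < n; ++i)
        c[i] = a[i] + b[i];
}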

I think (somebody correct me if I am wrong) that the Eigen benchmarks are for single precision arithmetic, and the picture is different when it comes to double precision.

The other thing that Eigen3 has (though when I checked a few months ago it was not working efficiently) is OpenMP support. It seems, however, that OpenMP-based parallel execution can be incorporated into uBLAS without too much trouble.
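
As a rough sketch of what I mean (a minimal illustration, not a tuned
implementation; the real work would go into the built-in kernels such
as axpy_prod), a dense matrix-vector product over uBLAS containers can
be parallelized with a single OpenMP pragma:

#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/vector.hpp>

namespace ublas = boost::numeric::ublas;

// y = A * x with the row loop split across threads.
// Compile with e.g. g++ -fopenmp -O3.
ublas::vector<double> omp_prod(const ublas::matrix<double>& A,
                               const ublas::vector<double>& x) {
    ublas::vector<double> y(A.size1());
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(A.size1()); ++i) {
        double sum = 0.0;
        for (std::size_t j = 0; j < A.size2(); ++j)
            sum += A(i, j) * x(j);
        y(i) = sum;
    }
    return y;
}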

Please also be aware that there are things that can be done to improve the performance of certain uBlas algorithms that are not optimal at the moment.

Best,
Nasos

> Date: Sun, 10 Apr 2011 23:56:07 +0200
> From: u.tabak_at_[hidden]
> To: ublas_at_[hidden]
> Subject: [ublas] eigen Vs. ublas
>
> Dear all,
>
> Since most of the people are more knowledgeable and more experienced
> than me here, before testing myself, it is good to ask for some advice
> and directions.
>
> I have been looking at the eigen3 matrix library, which seems to have
> nice documentation and examples, and interfaces to some solvers as
> well (however, the sparse module is not as mature as ublas, I guess;
> not sure what I am saying here ;) due to lack of info). The main
> issue is that, looking at the benchmarks page here:
>
> http://eigen.tuxfamily.org/index.php?title=Benchmark
>
> It seems that it outperforms ublas and gmm++, especially for vector
> operations. I have had good experiences with both as a user, though
> maybe not on serious problems. On the matrix-vector side, however, I
> had a hard time understanding the significant differences in the
> benchmarks, and I guess these are for dense matrices, right?
>
> There might be mistakes in what I wrote; ublas is also highly
> optimized in many senses. So if there are users of both libraries,
> could someone draw some conclusions about both, especially for sparse
> matrix operations?
>
> One more thing: did anyone try to interface boost sparse matrices,
> especially the CSR format, with the Intel MKL library for
> matrix-vector multiplications (I remember reading something like that
> here some time ago but could not find the post)? If yes, what is the
> performance gain, if any? Since I should test some conjugate gradient
> type methods, these matrix-vector products are really important for
> me, and moreover I have never used the MKL.
>
> Any ideas and help are highly appreciated.
>
> Greetings,
> Umut