Subject: Re: [ublas] eigen Vs. ublas
From: Ungermann, Jörn (j.ungermann_at_[hidden])
Date: 2011-04-11 09:43:34
Hi,
as a side note, Eigen uses a column-major compressed format by default, which is not that efficient for matrix-vector products (your mileage may vary). We use rather large, rather dense sparse matrices (~1 percent non-zeros), and Eigen does not perform that well for us, as it seems to have optimized only its column-major matrix-vector product.
Also, for precision and memory-bandwidth reasons, we use float for the matrix and double for the vectors, which I could not get to compile with Eigen3 (admittedly with low effort).
So here are the benchmarks for double-precision matrix-vector performance between Eigen, ublas, and a manually tuned (unrolled, SIMD, prefetch) compressed row-major matrix-vector product. The matrix is ~100,000 x 100,000 with ~80,000,000 non-zeros. Measured is the average of 100 products.
# test name, mean [s], stddev [s]
Vanilla ublas:
CompressedRowDouble, 0.313926, 0.000588 <- faster than either eigen product
CompressedColDouble, 0.526223, 0.000405
Manually optimized ublas:
CompressedRowDouble, 0.235913, 0.000136
Eigen:
CompressedRowDouble, 0.439802, 0.000153
CompressedColDouble, 0.375919, 0.000466 <- faster than ublas column-major product
As you can see, ublas actually outperforms Eigen3 for matrix-vector products if you choose row-major matrices (at least for my matrix on my machine). I think Eigen has not yet spent time optimizing that matrix type, even though its matrix-vector product is easier to optimize, as you can more readily use registers for the accumulation.
Regards,
Joern
> -----Original Message-----
> From: ublas-bounces_at_[hidden] [mailto:ublas-
> bounces_at_[hidden]] On Behalf Of Karl Rupp
> Sent: Monday, 11 April 2011 14:32
> To: ublas_at_[hidden]
> Subject: Re: [ublas] eigen Vs. ublas
>
> Hi Umut,
>
> we have recently added an interface for Eigen (v2.0.15) and MTL in
> ViennaCL, where we have also compared iterative solver performance. For
> an unpreconditioned CG with 100 iterations, we obtained the following
> timings for our 65k sample matrix:
> ViennaCL-CG with MTL: 0.67 sec
> ViennaCL-CG with Eigen2: 0.27 sec
> ViennaCL-CG with ublas: 0.52 sec
> All three libraries are used "as is" without further tuning anything.
> NDEBUG is set. Timings will differ with other sparsity patterns, of
> course ;-)
>
> Unfortunately, I can't give you any timings on pure matrix-vector
> products, but since you are aiming at CG-like methods, the above
> timings are probably more relevant anyway.
>
> Best regards,
> Karli
>
>
> On 04/10/2011 11:56 PM, Umut Tabak wrote:
> > Dear all,
> >
> > Since most people here are more knowledgeable and experienced than I
> > am, it seems wise to ask for some advice and directions before
> > testing myself.
> >
> > I have been looking at the eigen3 matrix library, which seems to have
> > nice documentation and examples, as well as interfaces to some
> > solvers (though I guess its sparse module is not as mature as ublas;
> > I am not sure about that due to lack of information). The main issue
> > is the benchmarks page here:
> >
> > http://eigen.tuxfamily.org/index.php?title=Benchmark
> >
> > It seems that eigen3 outperforms ublas and gmm++, especially for
> > vector operations. I have had good experiences with both as a user,
> > though maybe not on serious problems. On the matrix-vector side,
> > however, I had a hard time understanding the important differences in
> > the benchmarks, and I guess these are provided for dense matrices,
> > right?
> >
> > There might be mistakes in what I wrote; ublas is also highly
> > optimized in many senses. So if there are users of both libraries,
> > could someone draw some conclusions about both, especially for sparse
> > matrix operations?
> >
> > One more thing: did anyone try to interface boost sparse matrices,
> > especially the CSR format, with the Intel MKL library for
> > matrix-vector multiplications? (I remember reading something like
> > that here some time ago but could not find the post.) If yes, what is
> > the performance gain, if any? Since I need to test some conjugate
> > gradient type methods, these matrix-vector products are really
> > important for me, and moreover I have never used the MKL.
> >
> > Any ideas and help are highly appreciated.
> >
> > Greetings,
> > Umut
> > _______________________________________________
> > ublas mailing list
> > ublas_at_[hidden]
> > http://lists.boost.org/mailman/listinfo.cgi/ublas
> > Sent to: rupp_at_[hidden]
> >
>