From: Michael Lehn (michael.lehn_at_[hidden])
Date: 2008-02-11 03:21:26


On 09.02.2008 at 14:54, Gunter Winkler wrote:

> Michael Lehn wrote:
>> Hi
>>
>> I am currently preparing some benchmarks comparing uBLAS with FLENS.
>> So far these pages
>>
>> http://flens.sourceforge.net/session2/tut6.html
>> http://flens.sourceforge.net/session2/tut7.html
>>
>> are not yet linked on the FLENS site. I am pretty sure that there
>> is a lot of potential for improving the uBLAS implementations and
>> the compiler flags.
>>
>>
> Interesting results. I completely agree that ublas requires a lot of
> improvements. So the results are quite good ;-) I think the most
> performance critical point is that ublas completely ignores any
> optimized BLAS.
>
> BTW, did you run the comparison in 32- or 64-bit mode?

Do you mean the BLAS benchmarks (gemm, gemv, axpy, ...) on

        http://flens.sourceforge.net/session1/tut9.html

For these tests I used ATLAS and MKL in 64-bit mode.

For the sparse matrix-vector product I wrote a simple implementation,
crs_gemv, as shown at

        http://flens.sourceforge.net/session2/tut3.html
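
For reference, below is a minimal sketch of a plain CRS (compressed row
storage) matrix-vector product. It is only an illustration: the function
name, the argument list and the y = alpha*A*x + beta*y convention are
assumptions here, not necessarily the actual crs_gemv from the tutorial.

#include <cstddef>
#include <vector>

// Minimal sketch of a CRS matrix-vector product:
// computes y = alpha*A*x + beta*y for a matrix stored row-compressed.
void
crs_gemv(double alpha, double beta, std::size_t numRows,
         const std::vector<std::size_t> &rowPtr,  // size numRows+1
         const std::vector<std::size_t> &colIdx,  // size nnz
         const std::vector<double>      &values,  // size nnz
         const std::vector<double>      &x,
         std::vector<double>            &y)
{
    for (std::size_t i=0; i<numRows; ++i) {
        double sum = 0;
        for (std::size_t k=rowPtr[i]; k<rowPtr[i+1]; ++k) {
            sum += values[k]*x[colIdx[k]];
        }
        y[i] = alpha*sum + beta*y[i];
    }
}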

Therefore
    r = b - A*x  corresponds to a copy followed by crs_gemv (4.7s)
    r = A*x - b  corresponds to crs_gemv followed by axpy   (5.3s)

As the vectors are dense, BLAS gets used for copy and axpy, in this
case ATLAS, which (compared to MKL) is not so good at axpy.
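
Spelled out at the level of the individual calls, the two variants
decompose roughly as follows. This is only a sketch, assuming a
CBLAS-style interface for dcopy/daxpy and the crs_gemv sketch from
above; the function names are illustrative.

#include <cstddef>
#include <vector>
#include <cblas.h>   // CBLAS interface as provided by ATLAS, MKL, ...

// y = alpha*A*x + beta*y for a CRS matrix (see the sketch above).
void crs_gemv(double alpha, double beta, std::size_t numRows,
              const std::vector<std::size_t> &rowPtr,
              const std::vector<std::size_t> &colIdx,
              const std::vector<double> &values,
              const std::vector<double> &x,
              std::vector<double> &y);

// r = b - A*x:  dense copy followed by one sparse matrix-vector product
void
residual_copy_gemv(std::size_t n,
                   const std::vector<std::size_t> &rowPtr,
                   const std::vector<std::size_t> &colIdx,
                   const std::vector<double> &values,
                   const std::vector<double> &x,
                   const std::vector<double> &b,
                   std::vector<double> &r)
{
    cblas_dcopy(static_cast<int>(n), &b[0], 1, &r[0], 1);  // r = b   (BLAS copy)
    crs_gemv(-1.0, 1.0, n, rowPtr, colIdx, values, x, r);  // r = r - A*x
}

// r = A*x - b:  one sparse matrix-vector product followed by a dense axpy
void
residual_gemv_axpy(std::size_t n,
                   const std::vector<std::size_t> &rowPtr,
                   const std::vector<std::size_t> &colIdx,
                   const std::vector<double> &values,
                   const std::vector<double> &x,
                   const std::vector<double> &b,
                   std::vector<double> &r)
{
    crs_gemv(1.0, 0.0, n, rowPtr, colIdx, values, x, r);        // r = A*x
    cblas_daxpy(static_cast<int>(n), -1.0, &b[0], 1, &r[0], 1); // r = r - b (BLAS axpy)
}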

By the way: for small matrices and vectors the performance of uBLAS
is much better than that of ATLAS, MKL, ... So I think some hybrid
approach might be interesting ...

Regarding
        
        http://flens.sourceforge.net/session2/tut7.html

is there a better way to initialize a compressed sparse matrix in
arbitrary order?
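
For context, one possible workaround seems to be to fill a
coordinate_matrix, which accepts insertions in arbitrary order, and to
copy it into a compressed_matrix afterwards. Below is a minimal sketch,
assuming element assignment on coordinate_matrix behaves as in the uBLAS
documentation; whether this beats filling the compressed_matrix directly
is exactly the question.

#include <cstddef>
#include <boost/numeric/ublas/matrix_sparse.hpp>

int
main()
{
    using namespace boost::numeric::ublas;

    const std::size_t n = 5;

    // coordinate_matrix keeps (row, col, value) triplets and accepts
    // insertions in any order; the third argument is a capacity hint.
    coordinate_matrix<double> coo(n, n, 3*n);

    coo(4, 0) = 1.0;    // deliberately non-sorted insertion order
    coo(0, 4) = 2.0;
    coo(2, 2) = 3.0;

    // Copying into a compressed_matrix sorts and compresses the entries.
    compressed_matrix<double> crs(coo);

    return 0;
}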

cheers,

Michael

>
> kind regards,
> Gunter
>