From: Michael Lehn (michael.lehn_at_[hidden])
Date: 2008-02-11 03:21:26
On 09.02.2008, at 14:54, Gunter Winkler wrote:
> Michael Lehn wrote:
>> I am currently preparing some benchmarks comparing uBLAS with FLENS.
>> So far these pages
>> So far these pages are not linked on the FLENS site. I am pretty
>> sure that there is a lot of potential for improving the uBLAS
>> implementations and the compiler flags.
> Interesting results. I completely agree that ublas requires a lot of
> improvements. So the results are quite good ;-) I think the most
> performance critical point is that ublas completely ignores any
> optimized BLAS.
> BTW. Did you run the comparison in 32 or 64-Bit mode?
Do you mean the BLAS benchmarks (gemm, gemv, axpy, ...)?
For those tests I used ATLAS and MKL in 64-bit mode.
For the sparse matrix-vector product I wrote a simple implementation,
crs_gemv, as shown on
r = b-Ax is equivalent to copy and crs_gemv (4.7s)
r = Ax-b is equivalent to crs_gemv and axpy (5.3s)
As the vectors are dense, BLAS gets used for copy and axpy; in this
case that is ATLAS, which (compared to MKL) is not so good for axpy.
By the way: for small matrices and vectors the performance of uBLAS
is much better than that of ATLAS, MKL, ... So I think some hybrid
approach might be interesting ...
Is there a better way to initialize a compressed sparse matrix in
uBLAS?