
Ublas :

From: Preben Hagh Strunge Holm (preben_at_[hidden])
Date: 2007-03-13 03:46:07


>>>> Okay, I use 72x72 matrices and vectors of the size 72. So it might be
>>>> beneficial to use atlas-blas then!
>>> Yes, unless your matrices are sparse. "Standard" BLAS only supports dense
>>> (and sometimes packed, banded) matrices.
>> Hmm... most of them are actually sparse (compressed matrix).
>>
>> For future optimizations I better try these other proposed methods out.
>
> For size 72, the matrices would have to be *extremely* sparse to see a
> speedup over dense routines. Sparse operations lose a lot of efficiency on
> modern architectures from indirect pointer lookups, and possibly poor
> cache performance, whereas vectorized dense operations are getting
> relatively faster. If you try some performance benchmarks of BLAS vs
> sparse, it would be interesting to see the results.

I recall that one of my matrices is more "banded" than sparse, but
otherwise I sometimes have only 3 or 6 elements in the sparse matrices.
Sometimes there are quite a few more elements, but the matrices are still
quite sparse anyway.

To keep the software architecture as it is, I need to keep the matrices
sparse for better overall efficiency - it actually improves speed quite a bit.

I also found that a product written out by hand like this:

M(alpha, beta) = matrix(i,j)*v(i)*v(j)

was considerably more efficient than the corresponding product and inner
product functions.
I don't know whether this was in debug mode - I don't remember. But I was
quite surprised. The matrix and v were both dense and NOT sparse.
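
Roughly, the two variants look like this (just a minimal sketch, assuming
the library functions I compared against were uBLAS's prod and inner_prod;
A and v are placeholders for my actual matrix and vector):

#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/vector.hpp>

namespace ublas = boost::numeric::ublas;

// library version: v^T * A * v via prod and inner_prod
double quadratic_form_ublas(const ublas::matrix<double>& A,
                            const ublas::vector<double>& v)
{
    return ublas::inner_prod(v, ublas::prod(A, v));
}

// hand-written version: explicit summation over i and j
double quadratic_form_loops(const ublas::matrix<double>& A,
                            const ublas::vector<double>& v)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < A.size1(); ++i)
        for (std::size_t j = 0; j < A.size2(); ++j)
            sum += A(i, j) * v(i) * v(j);
    return sum;
}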

If I find the time, I'll definitely try more benchmarking, and I'll report
the results back here!
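
Something along these lines is what I have in mind for the dense-versus-sparse
comparison (only a sketch - the fill pattern, repeat count and std::chrono
timing are made up for illustration, not my actual test case):

#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/matrix_sparse.hpp>
#include <boost/numeric/ublas/vector.hpp>
#include <chrono>
#include <iostream>

namespace ublas = boost::numeric::ublas;

int main()
{
    const std::size_t n = 72;

    ublas::matrix<double> dense(n, n);
    dense.clear();                              // zero all entries
    ublas::compressed_matrix<double> sparse(n, n);
    ublas::vector<double> v(n);
    for (std::size_t i = 0; i < n; ++i)
        v(i) = 1.0;

    // put a handful of identical entries in both matrices
    for (std::size_t i = 0; i < n; i += 12) {
        dense(i, i) = 1.0;
        sparse(i, i) = 1.0;
    }

    const int repeats = 100000;
    ublas::vector<double> r(n);

    auto t0 = std::chrono::steady_clock::now();
    for (int k = 0; k < repeats; ++k)
        ublas::noalias(r) = ublas::prod(dense, v);
    auto t1 = std::chrono::steady_clock::now();
    for (int k = 0; k < repeats; ++k)
        ublas::noalias(r) = ublas::prod(sparse, v);
    auto t2 = std::chrono::steady_clock::now();

    std::cout << "dense : "
              << std::chrono::duration<double>(t1 - t0).count() << " s\n"
              << "sparse: "
              << std::chrono::duration<double>(t2 - t1).count() << " s\n";
}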

Best regards,

Preben Holm