
Subject: Ublas

From: Frank Astier (fastier_at_[hidden])
Date: 2005-09-21 13:04:14


I'm still trying to multiply a sparse matrix by a vector efficiently,
comparing ublas to just looping over the whole matrix.

For a square 256x256 matrix of floats with a non-zero fraction
between 10% and 90%, ublas does not seem to be efficient compared to
hand-crafted code that just carries out the naive matrix-vector
multiplication. axpy_prod performs worse than prod, and sparse_matrix
in 1_32 performs 3 times worse than compressed_matrix. I was hoping
ublas would be efficient even with matrices of that size, and not
that far off from the speed of the "naive" code.

I tried boost 1_32 and 1_33. Performance seems to be worse in 1_33 by
a factor of 4?!?

My source code and Makefile are attached (it's less than one page of
code, very easy). My results for boost 1_32 and 1_33 are below. It
seems that the amount of work performed by ublas does not depend on
the sparsity of the matrix... I was hoping it would do less work when
the matrix is sparser.

Of course, I hope I am wrong, or that I missed something. Or maybe
ublas is not particularly efficient with "small" 256x256 matrices and
only starts shining when the matrix is bigger? Is there a
boost::ublas guideline for when to use which part of ublas for best
speed?

Thanks,

Frank

Results for 256x256 compressed_matrix multiplication with a dense vector

Boost 1_33:
NZ fraction   ublas (ms)   naive multiplication (ms)
0.1           8.37         0.29
0.2           8.38         0.29
0.3           8.38         0.29
0.4           8.36         0.30
0.5           8.35         0.30
0.6           8.37         0.31
0.7           8.39         0.32
0.8           8.38         0.30
0.9           8.38         0.30

Boost 1_32:
NZ fraction   ublas (ms)   naive multiplication (ms)
0.1           2.35         0.25
0.2           2.34         0.25
0.3           2.32         0.26
0.4           2.33         0.26
0.5           2.35         0.25
0.6           2.36         0.25
0.7           2.35         0.25
0.8           2.33         0.26
0.9           2.34         0.25