

From: Paul C. Leopardi (leopardi_at_[hidden])
Date: 2002-10-24 19:34:41


Hi,
I posted the following message to the boost list and have so far had no response.
Maybe boost-users is the more appropriate mailing list?

Hi,
Thanks to Joerg Walter's original conversion of GluCat ( http://glucat.sf.net
) to uBLAS, I have fairly painlessly been able to get GluCat going with the
version of uBLAS now promoted to Boost. I'm using sparse matrices:

boost::numeric::ublas::sparse_matrix< Scalar_T,
                                      boost::numeric::ublas::row_major >

and comparing performance to MTL:

mtl::matrix< Scalar_T, mtl::rectangle<>, mtl::compressed<>,
             mtl::row_major >::type

The operation I'm performing is quite complicated. Basically, I am generating
a large number of sparse matrices and computing the inner product of a given
matrix with each of these matrices in turn.
(If you have a copy of GluCat, see basis_element() in matrix_multi_imp.h.)
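To make the pattern concrete, here is a stripped-down sketch of the kind of
loop I mean. It is not the actual GluCat code; the elementwise
(Frobenius-style) definition of inner() is a simplification on my part, and
the names make_basis_like() and inner() are just for the sketch, but it uses
the uBLAS type above:

  // Sketch only -- not the GluCat code. Build a permutation-like sparse
  // matrix with one non-zero per row, then accumulate an elementwise
  // (Frobenius-style) inner product with a fixed matrix A, touching only
  // the stored non-zeros.
  #include <cstddef>
  #include <boost/numeric/ublas/matrix_sparse.hpp>

  typedef double Scalar_T;
  typedef boost::numeric::ublas::sparse_matrix<
            Scalar_T, boost::numeric::ublas::row_major > sparse_t;

  // One non-zero per row: entry (i, perm[i]) = sign[i].
  sparse_t make_basis_like(std::size_t n, const std::size_t perm[],
                           const Scalar_T sign[])
  {
    sparse_t result(n, n);
    for (std::size_t i = 0; i != n; ++i)
      result(i, perm[i]) = sign[i];
    return result;
  }

  // Sum of A(i,j) * B(i,j) over the stored non-zeros of B.
  Scalar_T inner(const sparse_t& A, const sparse_t& B)
  {
    Scalar_T result = 0;
    for (sparse_t::const_iterator1 it1 = B.begin1(); it1 != B.end1(); ++it1)
      for (sparse_t::const_iterator2 it2 = it1.begin(); it2 != it1.end(); ++it2)
        result += A(it2.index1(), it2.index2()) * (*it2);
    return result;
  }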

Because these sparse matrices have only one non-zero per row, operations on an
n x n matrix should be roughly O(n), and that is what I appear to get with MTL:

Size vs. speed for MTL:
Size   Time (ms)
  32          10
  64          20
 128          80
 256         160
 512         700

But with uBLAS, essentially the same operations seem to be O(n^2):

Size   Time (ms)   Slowdown vs. MTL
  32          30                3.0
  64          70                3.5
 128         830               10.4
 256        1940               12.1
 512       31770               45.4

o Is there a more appropriate uBLAS data type for this kind of sparse matrix?
o Do you have any tips on where to tweak uBLAS to get better (i.e. O(n))
  performance?
o What is BOOST_UBLAS_USE_CANONICAL_ITERATOR, and what effect should it have
  on performance? (A guess at how it is meant to be set is sketched below.)
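
Purely an assumption on my part for that last question (I only know the macro
name from the uBLAS sources): I imagine it is a compile-time configuration
switch, set before any uBLAS header is included, along these lines:

  // Guesswork only: assuming BOOST_UBLAS_USE_CANONICAL_ITERATOR is a
  // user-settable configuration macro, presumably it has to be defined
  // before the first uBLAS include, or passed on the compiler command
  // line as -DBOOST_UBLAS_USE_CANONICAL_ITERATOR.
  #define BOOST_UBLAS_USE_CANONICAL_ITERATOR
  #include <boost/numeric/ublas/matrix_sparse.hpp>

What it actually changes internally, and whether it matters for sparse
matrices, is exactly what I'd like to know.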
Thanks

