Ublas:

From: Gunter Winkler (guwi17_at_[hidden])
Date: 2008-02-11 13:49:39


On Monday, 11 February 2008 09:21, Michael Lehn wrote:
> > BTW. Did you run the comparison in 32 or 64-Bit mode?
>
> For the sparse matrix-vector product I wrote a simple implementation
> crs_gemv as shown on
>
> http://flens.sourceforge.net/session2/tut3.html
>
> Therefore
> r = b-Ax is equivalent to copy and crs_gemv (4.7s)
> r = Ax-b is equivalent to crs_gemv and axpy (5.3s)
>
> As the vectors are dense, BLAS gets used for copy and axpy. In this
> case that is ATLAS, which (compared to MKL) is not so good for axpy.
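
For reference, the core of such a CRS matrix-vector kernel looks roughly
like this (only a sketch in the spirit of the crs_gemv linked above, not
the FLENS implementation; the name and signature are illustrative):

// Sketch of a CRS kernel computing y = alpha*A*x + beta*y.
// rowPtr/colIdx are the integer index arrays whose width
// (32 vs. 64 bit) matters for the question below.
template <typename IndexType, typename T>
void crs_gemv_sketch(IndexType numRows, T alpha,
                     const IndexType *rowPtr, const IndexType *colIdx,
                     const T *values, const T *x, T beta, T *y)
{
    for (IndexType i = 0; i < numRows; ++i) {
        T sum = T(0);
        for (IndexType k = rowPtr[i]; k < rowPtr[i + 1]; ++k) {
            sum += values[k] * x[colIdx[k]];
        }
        y[i] = alpha * sum + beta * y[i];
    }
}

With such a kernel, r = b - Ax is a copy (r = b) followed by one call
with alpha = -1, beta = 1, while r = Ax - b is one call with beta = 0
followed by an axpy, which matches the two variants timed above.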

This means you used 32-bit integers for the index arrays, right? uBLAS
by default uses std::size_t, which is 64 bits on a 64-bit platform.
Please try the following type, which is the usual row-major compressed
matrix, but with 32-bit indices.

// type of stiffness matrix: row-major CRS with 32-bit index arrays
typedef boost::numeric::ublas::compressed_matrix<
  double,                                                     // value type
  boost::numeric::ublas::basic_row_major<unsigned int, int>,  // layout: 32-bit size/difference types
  0,                                                          // index base
  boost::numeric::ublas::unbounded_array<unsigned int>,       // index arrays
  boost::numeric::ublas::unbounded_array<double>              // value array
> MY_STIMA;
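
A possible way to time the product with this type is sketched below
(only an illustration; axpy_prod comes from
boost/numeric/ublas/operation.hpp, and n, A, x, b, r are placeholder
names):

#include <boost/numeric/ublas/operation.hpp>   // axpy_prod
#include <boost/numeric/ublas/vector.hpp>

namespace ublas = boost::numeric::ublas;

void residual_example()
{
    const unsigned int n = 10000;
    MY_STIMA A(n, n);
    ublas::vector<double> x(n), b(n), r(n);
    // ... assemble A, fill x and b ...

    ublas::axpy_prod(A, x, r, true);   // r = A x  (true: r is zeroed first)
    r -= b;                            // r = A x - b, the gemv + axpy variant
}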

> By the way: for small matrices and vectors the performance of uBLAS
> is much better than that of ATLAS, MKL, ... So I think some hybrid
> approach might be interesting ...

For small fixed-size matrices I suggest http://tvmet.sourceforge.net/.
For medium-sized matrices uBLAS might be comparable to BLAS, but in my
last tests the limit was around 100x100 ...

>
> About
>
> http://flens.sourceforge.net/session2/tut7.html
>
> is there a better way to initialize a compressed sparse matrix in
> arbitrary order?

Personally, I use a coordinate matrix for assembly and then compress it.
Unfortunately the current implementation requires a copy of the data. I
did some experiments to compress the matrix 'in place' but the
performance gain was quite small compared to the additional complexity.
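
In code that pattern is roughly the following (just a sketch; sizes and
values are placeholders, and as far as I remember duplicate entries are
summed when the coordinate matrix gets sorted, which is what makes it
convenient for assembly):

#include <boost/numeric/ublas/matrix_sparse.hpp>

namespace ublas = boost::numeric::ublas;

void assemble_example()
{
    const std::size_t n = 1000;

    // assembly: insert in arbitrary order
    ublas::coordinate_matrix<double> K(n, n);
    K.append_element(0, 0, 1.0);
    K.append_element(5, 3, -2.0);
    K.append_element(0, 0, 0.5);   // same position again: contributions accumulate

    // compression: this step copies the data into CRS storage
    ublas::compressed_matrix<double> A(K);
}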

Alternatively one can use a vector of compressed or coordinate vectors.
The uBLAS type is

typedef boost::numeric::ublas::generalized_vector_of_vector<
  double,                                                // value type
  boost::numeric::ublas::row_major,                      // row-wise storage
  boost::numeric::ublas::vector<
     boost::numeric::ublas::compressed_vector<double> >  // one compressed vector per row
> MY_STIMA_ASS;

which is quite efficient for random insertions.
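
Used roughly like this (again only a sketch with placeholder names):

#include <boost/numeric/ublas/matrix_sparse.hpp>
#include <boost/numeric/ublas/vector_of_vector.hpp>

namespace ublas = boost::numeric::ublas;

void gvov_assembly_example()
{
    const std::size_t n = 1000;
    MY_STIMA_ASS K(n, n);

    // random-order insertion; each row is its own compressed vector
    K(3, 7) += 1.0;
    K(0, 0) += 2.5;
    K(3, 7) += 0.5;    // accumulating into an existing entry stays cheap

    // copy into a compressed matrix before the solver phase
    ublas::compressed_matrix<double> A(K);
}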

Regards,
Gunter