
Ublas :

From: Gunter Winkler (guwi17_at_[hidden])
Date: 2008-01-15 15:59:18


On Tuesday, 15 January 2008 10:00, dariomt_at_[hidden] wrote:
> I am trying to implement a tridiagonal_matrix (
> http://en.wikipedia.org/wiki/Tridiagonal_matrix).
> It is just a special case of a banded_matrix where there is only
> one diagonal above and one below the main diagonal.
>
> That is, a tridiagonal_matrix<T>(m, n) is functionally equivalent to
> a banded_matrix<T>(m, n, 1, 1). (I probably want to make it square,
> but that is another subject.)
>
> I'd like to take advantage of some very useful compile-time information:
> 1) all elements outside the diagonals are zero
> 2) there are only three (a number fixed at compile time!) diagonals
>
> e.g.:
>
> matrix<double> x;
> tridiagonal_matrix<double> y;
> matrix<double> z1 = prod(x,y);
> matrix<double> z2 = x + y;
>
> Both prod and operator+ should know that all values of y outside the
> three diagonals are zero, to save many operations compared to a
> (dense) matrix.
>
> Is this possible with ublas? How is this done?

The key to all operations is the speed of the (const) iterators. They
should have as little overhead as possible. A fast operator(i,j) is
good, too, but since most algorithms are iterator based, you have to
tune the iterators first.
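
As an illustration, the generic traversal pattern that the uBLAS
algorithms rely on looks roughly like this (a minimal sketch, assuming
Boost.uBLAS; with a banded_matrix these iterators should only walk the
in-band elements, and that walk is what a specialized tridiagonal type
would have to beat):

  #include <boost/numeric/ublas/banded.hpp>
  #include <algorithm>
  #include <cstddef>
  #include <iostream>

  int main() {
      namespace ublas = boost::numeric::ublas;
      // one sub- and one super-diagonal, i.e. a tridiagonal band
      ublas::banded_matrix<double> y(4, 4, 1, 1);
      for (std::size_t i = 0; i < y.size1(); ++i)
          for (std::size_t j = (i == 0 ? 0 : i - 1);
               j < std::min<std::size_t>(i + 2, y.size2()); ++j)
              y(i, j) = 10.0 * i + j;

      // the generic traversal pattern used throughout the uBLAS algorithms
      const ublas::banded_matrix<double>& cy = y;
      typedef ublas::banded_matrix<double>::const_iterator1 it1_t;
      typedef ublas::banded_matrix<double>::const_iterator2 it2_t;
      for (it1_t it1 = cy.begin1(); it1 != cy.end1(); ++it1)
          for (it2_t it2 = it1.begin(); it2 != it1.end(); ++it2)
              std::cout << "(" << it2.index1() << "," << it2.index2()
                        << ") = " << *it2 << "\n";
  }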

BTW, I would not expect a big difference between a banded matrix with a
compile-time-known bandwidth and the current implementation. I am still
waiting for someone to prove the opposite ;-)
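
For reference, the example from the question already maps onto the
current banded_matrix, with the bandwidth passed at run time; a minimal
sketch with placeholder sizes and fill values:

  #include <boost/numeric/ublas/banded.hpp>
  #include <boost/numeric/ublas/matrix.hpp>
  #include <boost/numeric/ublas/io.hpp>
  #include <cstddef>
  #include <iostream>

  int main() {
      namespace ublas = boost::numeric::ublas;
      const std::size_t n = 5;

      ublas::matrix<double> x(n, n);
      for (std::size_t i = 0; i < n; ++i)
          for (std::size_t j = 0; j < n; ++j)
              x(i, j) = 1.0;                         // placeholder values

      // the "tridiagonal" case: bandwidth 1/1, but only known at run time
      ublas::banded_matrix<double> y(n, n, 1, 1);
      for (std::size_t i = 0; i < n; ++i) {
          if (i > 0)     y(i, i - 1) = -1.0;
          y(i, i) = 2.0;
          if (i + 1 < n) y(i, i + 1) = -1.0;
      }

      ublas::matrix<double> z1 = ublas::prod(x, y);  // product with a dense matrix
      ublas::matrix<double> z2 = x + y;              // sum with a dense matrix
      std::cout << z1 << "\n" << z2 << "\n";
  }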

Do you already have a benchmark to measure the speed of banded_matrix?
Have you already tried axpy_prod from operation.hpp?
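
If not, something along these lines could serve as a starting point for
both; a rough sketch in which the problem size, the fill values, and the
std::clock based timing are all placeholders:

  #include <boost/numeric/ublas/banded.hpp>
  #include <boost/numeric/ublas/matrix.hpp>
  #include <boost/numeric/ublas/operation.hpp>   // axpy_prod
  #include <cstddef>
  #include <ctime>
  #include <iostream>

  int main() {
      namespace ublas = boost::numeric::ublas;
      const std::size_t n = 500;                  // placeholder problem size

      ublas::matrix<double> x(n, n);
      for (std::size_t i = 0; i < n; ++i)
          for (std::size_t j = 0; j < n; ++j)
              x(i, j) = 1.0 / (i + j + 1);

      ublas::banded_matrix<double> y(n, n, 1, 1); // tridiagonal band
      for (std::size_t i = 0; i < n; ++i) {
          if (i > 0)     y(i, i - 1) = -1.0;
          y(i, i) = 2.0;
          if (i + 1 < n) y(i, i + 1) = -1.0;
      }

      ublas::matrix<double> z1(n, n), z2(n, n);

      std::clock_t t0 = std::clock();
      ublas::noalias(z1) = ublas::prod(x, y);     // expression-template product
      std::clock_t t1 = std::clock();
      ublas::axpy_prod(x, y, z2, true);           // init = true: z2 is cleared first
      std::clock_t t2 = std::clock();

      std::cout << "prod:      " << double(t1 - t0) / CLOCKS_PER_SEC << " s\n"
                << "axpy_prod: " << double(t2 - t1) / CLOCKS_PER_SEC << " s\n";
  }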

Regards
Gunter