Boost :
From: boost (boost_at_[hidden])
Date: 2001-11-21 17:08:06
Hello,
On Wednesday 21 November 2001 11:58, walter_at_[hidden] wrote:
> You've lost me. Could you please explain or give a reference?
Please see below.
> > I'd be happy if I could replace (specialize) a few routines of ublas
> > by ATLAS or vendor supplied BLAS routines in order to perform
> > benchmarks, (mainly _x_gemm).
>
> I think, this is one of the next steps as already discussed with Toon
> Knapen. We'll also look at it.
That would really be a good thing for me, since I have to support several
platforms, and on some platforms vendor supplied BLAS might be superior.
Best wishes.
Peter
------------------------------------------------
Outline of my most cpu intensive part
------------------------------------------------
In my application I have to iteratively diagonalize
large sparse matrices, which involves the matrix-vector
product of a matrix C with a vector x.
To improve performance one can use a representation
of C where the vector space of C is a tensor product of two
vector spaces V and W.
C is now a sum of operators A_i \otimes B_i, where A_i (B_i) acts on
V (W) only. The total dimension of C is equal to the product of the
dimensions of V and W. A basis of the product space of
V and W can be represented by dyadic products of basis states of V and W,
i.e. v * w^T.
This representation has the advantage that the (sparse) matrix vector
multiplication can be represented by BLAS-3 operations instead of BLAS-2
operations using dense matrices.
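The trick can be sketched numerically: if the state vector x is reshaped into
a matrix X of shape dim(W) x dim(V), then (A \otimes B) x is the column-stacked
version of B X A^T, i.e. two dense GEMM calls instead of one matrix-vector
product with the full Kronecker matrix. A minimal NumPy sketch (the matrices
and dimensions here are made up for illustration, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # acts on V, dim(V) = 3
B = rng.standard_normal((4, 4))   # acts on W, dim(W) = 4
X = rng.standard_normal((4, 3))   # state reshaped as a dim(W) x dim(V) matrix
x = X.flatten(order='F')          # column-stacked vector in V tensor W

# Naive route: build the full 12x12 Kronecker matrix and do a matrix-vector
# product (a BLAS-2 gemv on a much larger matrix).
lhs = np.kron(A, B) @ x

# Tensor-product route: two small dense matrix-matrix products (BLAS-3 gemm).
rhs = (B @ X @ A.T).flatten(order='F')

assert np.allclose(lhs, rhs)
```

For a sum C = \sum_i A_i \otimes B_i one simply accumulates B_i X A_i^T over i,
so the whole matvec stays in dense GEMM calls.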
In case you're not lost again: the spaces V and W themselves can
be represented by direct sums of sub-spaces V_l (W_k).
The matrices A can now be represented by blocks A_lm, where
A_lm is a mapping from V_l to V_m.
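As an illustration of that block structure (a hypothetical sketch, not the
actual code behind the thesis), here is a block matrix-vector product in which
only the nonzero blocks A_lm are stored; the function name `block_matvec`, the
subspace dimensions, and the particular blocks are all invented for the example:

```python
import numpy as np

def block_matvec(blocks, x_parts, dims):
    """y_m = sum_l A_lm x_l; blocks[(l, m)] holds A_lm, a map from V_l to V_m.

    Blocks that are identically zero are simply not stored, so they cost
    nothing, while each stored block contributes one dense product.
    """
    y_parts = [np.zeros(d) for d in dims]
    for (l, m), A_lm in blocks.items():
        y_parts[m] += A_lm @ x_parts[l]
    return y_parts

rng = np.random.default_rng(1)
dims = [2, 3]  # dimensions of the sub-spaces V_0 and V_1
blocks = {
    (1, 0): rng.standard_normal((2, 3)),  # A_10: V_1 -> V_0, shape (2, 3)
    (1, 1): rng.standard_normal((3, 3)),  # A_11: V_1 -> V_1, shape (3, 3)
}
x_parts = [rng.standard_normal(d) for d in dims]
y_parts = block_matvec(blocks, x_parts, dims)

# Cross-check against the assembled 5x5 dense matrix (unstored blocks are zero).
A_dense = np.zeros((5, 5))
A_dense[0:2, 2:5] = blocks[(1, 0)]
A_dense[2:5, 2:5] = blocks[(1, 1)]
assert np.allclose(np.concatenate(y_parts), A_dense @ np.concatenate(x_parts))
```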
If you're still interested in details you may look at my thesis,
http://www.Physik.Uni-Augsburg.DE/~peters/thesis/index.html
chapter 5.5.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk