From: lums_at_[hidden]
Date: 2001-03-14 08:59:52
--- In boost_at_y..., walter_at_g... wrote:
> If I understand TNT correctly, it supports some sparse storage
> schemes. But I think another goal to achieve is also some performance
> enhancement using the right iterator concept. So if you require a
> matrix library to deal with dense and sparse matrices supporting
> expression templates too, things seem to get very complicated. I
> don't know whether any existing library implements all these
> concepts together.
Again at the risk of being somewhat tacky, MTL does in fact support
sparse and dense matrices in a unified fashion, with unified iterators
and very careful attention to performance. The present release does
not have expression templates. However, Jeremy has created a
high-performance expression template layer for MTL. So there is an
available library that implements most of these things, and one in
development that implements them all. (I am trying to get more
person-power to devote to this, so hopefully things will accelerate
soon.)
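To give a flavor of why unified iterators matter, here is a sketch
written against a hypothetical matrix concept (num_rows, row_begin/row_end,
entries with .column and .value) -- this is not MTL's actual interface.
The point is that one generic algorithm serves a dense matrix, which
reports every entry, and a sparse one, which reports only the stored
non-zeros.

#include <cstddef>
#include <vector>

// Hypothetical concept: A.num_rows(), and per-row iterators over entries
// exposing .column and .value.  Dense and sparse matrices can both model
// it; the algorithm below never cares which one it was handed.
template <class Matrix>
std::vector<double> matvec(const Matrix& A, const std::vector<double>& x)
{
    std::vector<double> y(A.num_rows(), 0.0);
    for (std::size_t i = 0; i < A.num_rows(); ++i)
        for (typename Matrix::row_iterator e = A.row_begin(i); e != A.row_end(i); ++e)
            y[i] += e->value * x[e->column];   // visits only stored entries
    return y;
}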
> I have got another problem with sparse matrices also, which I would
> like to mention here: IMHO dense matrices and classical linear
> equation solvers together form a pair, for example. Sparse matrices
> are normally used in the context of iterative solvers, which are
> quite subtle to handle AFAIK.
Sparse versus dense is solely an issue of underlying representation.
Both representations represent the same thing -- a matrix (or, more
precisely, a finite-dimensional linear operator). It is quite an easy
matter to write a linear solver in a few lines of code for a dense
matrix (the classic solver you mention), and one can even write a
stable solver in a few more lines of code. LU factorization followed
by forward and back substitution is collectively called a 'direct'
solution process.
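To make the "few lines of code" claim concrete, here is a minimal
sketch of dense LU factorization without pivoting, plus the forward
and back solves. Adding partial pivoting -- a row swap at each step --
is the "few more lines" that buys stability. The row-major flat-vector
storage is just for the example.

#include <cstddef>
#include <vector>

// In-place LU (Doolittle, no pivoting): L's multipliers overwrite the
// strictly lower triangle, U overwrites the upper triangle.  n is the order.
void lu_factor(std::vector<double>& A, std::size_t n)
{
    for (std::size_t k = 0; k < n; ++k)
        for (std::size_t i = k + 1; i < n; ++i) {
            double m = A[i*n + k] / A[k*n + k];   // multiplier, stored as L(i,k)
            A[i*n + k] = m;
            for (std::size_t j = k + 1; j < n; ++j)
                A[i*n + j] -= m * A[k*n + j];
        }
}

// Solve L*U*x = b using the packed factors; b is overwritten with x.
void lu_solve(const std::vector<double>& A, std::size_t n, std::vector<double>& b)
{
    for (std::size_t i = 1; i < n; ++i)           // forward solve with unit L
        for (std::size_t j = 0; j < i; ++j)
            b[i] -= A[i*n + j] * b[j];
    for (std::size_t i = n; i-- > 0; ) {          // back solve with U
        for (std::size_t j = i + 1; j < n; ++j)
            b[i] -= A[i*n + j] * b[j];
        b[i] /= A[i*n + i];
    }
}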
However, one can still perform a direct solution on a sparse system of
equations. In many cases, such an approach will give you performance
comparable to, or better than, an iterative solver. The drawback is
that sparse direct solvers are extraordinarily complicated. One has to
represent only the non-zero elements of the matrix and then, when
factorizing, add fill-in elements where required by the factorization
operations. To make this efficient, one has to first order the sparse
matrix for low fill (finding the ordering with absolute minimum fill
is an NP-complete problem). FWIW, there are a few heuristic ordering
algorithms in the BGL.
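For reference, the usual compressed sparse row (CSR) layout is one way
to "represent only the non-zero elements"; the sketch below is generic
and not tied to any particular library. Fill-in is painful precisely
because entries created during elimination do not fit this fixed
pattern, so a sparse direct solver has to predict and allocate the
fill up front (hence the ordering step).

#include <cstddef>
#include <vector>

struct csr_matrix {
    std::size_t         nrows;
    std::vector<double> values;    // the non-zero values, stored row by row
    std::vector<int>    columns;   // column index of each stored value
    std::vector<int>    row_start; // row i occupies [row_start[i], row_start[i+1])
};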
There are some domains where iterative solvers just don't work (or
haven't been made to work) because of the conditioning of the problem --
circuit simulation is one notable example.
When you start talking about iterative solvers you also have to start
talking about preconditioners (e.g., incomplete factorizations), a
black art unto itself. The subtlety in iterative solvers all comes
down to preconditioning -- the iterative solvers themselves are not
all that complicated. The Iterative Template Library has almost all
iterative solvers of consequence (and a number of preconditioners).
The problem is that no one has ever found a one-size-fits-all solver
(in fact, there is a very famous result by Faber and Manteuffel
showing that one solver can't fit all problems), and no one has ever
found a one-size-fits-all preconditioner. The best preconditioners are
those that exploit some part of the problem domain (think of the
preconditioner as an approximate solver). Usually a good
preconditioner will be almost as complicated as a direct solver.
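As an illustration of "the preconditioner is an approximate solver",
here is a minimal sketch (not ITL's actual interface) of a
preconditioned conjugate gradient iteration using the simplest
preconditioner of all, the Jacobi/diagonal one: z = M^{-1} r with
M = diag(A). Everything interesting happens when you replace that
diagonal solve with something stronger, such as an incomplete
factorization.

#include <cmath>
#include <cstddef>
#include <vector>

typedef std::vector<double> vec;

double dot(const vec& a, const vec& b)
{
    double s = 0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// MatVec: any callable computing q = A*p (dense, CSR, or matrix-free).
// diag: the diagonal of A, assumed nonzero (A symmetric positive definite).
template <class MatVec>
vec pcg(MatVec A, const vec& diag, const vec& b, double tol, int max_iter)
{
    vec x(b.size(), 0.0), r(b), z(b.size()), p(b.size()), q(b.size());
    for (std::size_t i = 0; i < z.size(); ++i) z[i] = r[i] / diag[i];  // z = M^{-1} r
    p = z;
    double rz = dot(r, z);
    for (int it = 0; it < max_iter; ++it) {
        A(p, q);                                   // q = A*p
        double alpha = rz / dot(p, q);
        for (std::size_t i = 0; i < x.size(); ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * q[i];
        }
        if (std::sqrt(dot(r, r)) < tol) break;     // converged on the residual
        for (std::size_t i = 0; i < z.size(); ++i) z[i] = r[i] / diag[i];
        double rz_new = dot(r, z);
        double beta = rz_new / rz;
        for (std::size_t i = 0; i < p.size(); ++i) p[i] = z[i] + beta * p[i];
        rz = rz_new;
    }
    return x;
}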
A broad over-generalization about performance: the performance of
direct vs. iterative methods is quite controversial. However, for
certain model problems (discretized elliptic PDEs), the rule of thumb
is that direct is a win for 1-D problems, it is a wash for 2-D, and an
iterative solver will be a win for 3-D. Iterative here means an
optimal solver (CG or GMRES) with an incomplete factorization and
natural ordering.
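For a rough sense of where that rule of thumb comes from, here is a
back-of-the-envelope operation count for the model Poisson problem on
a regular grid with n points per dimension (N = n^d unknowns), using
the standard banded-elimination and conjugate-gradient estimates --
treat these as orders of magnitude only:

  banded LU with natural ordering: about O(N * b^2) work, bandwidth b = n^(d-1)
      1-D: b = O(1)  ->  O(N)
      2-D: b = n     ->  O(N^2)
      3-D: b = n^2   ->  O(N^(7/3))

  preconditioned CG: O(N) work per iteration, with an iteration count
  that grows roughly like n for these problems (somewhat better with a
  good incomplete factorization), for a total of about O(N^(1 + 1/d))
      1-D: O(N^2),  2-D: O(N^(3/2)),  3-D: O(N^(4/3))

So direct wins easily in 1-D, the two are comparable in 2-D once
constants and fill-in memory are accounted for, and the iterative
solver pulls ahead in 3-D.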
In short: sparse matrices are used for efficiency of representation.
Direct methods can be used with sparse matrices, although ordering and
factorization are complicated and there will be some growth in memory
as a result of fill-in. Iterative solvers are easy to write, but
subtle and quick to anger -- use the right preconditioner.
For the curious, the link for ITL is
http://www.lsc.nd.edu/research/itl