Subject: Re: [boost] [GSOC2015] uBLAS Matrix Solver Project
From: Mikael Persson (mikael.s.persson_at_[hidden])
Date: 2015-03-06 16:37:04


Hi Raj,

I think you have a pretty good list there. It covers a solid set of
fundamental methods that are very useful and reasonably easy to
implement, and you are avoiding (for now) the more advanced methods
that are very tricky to implement (and that very few people have
implemented at all, since most people bind to legacy Fortran routines
instead).

Here are a few notes:

> 1) *QR Decomposition* - *(Must have)* For orthogonalization of column
> spaces and solutions to linear systems. (Bonus : Also rank revealing..)

QR decomp is obviously a must-have, but I would say that the
rank-revealing version is also a must-have. Ideally, we should be able
to cover the full trio for matrix inversion / linear solution: QR, RRQR
and SVD. In my experience, the best way to robustly do a general matrix
inversion or linear solution is to first try the simplest and fastest
method, which is QR decomposition. This can fail due to rank
deficiency, at which point you fall back to the next simplest and
fastest method that can deal with that deficiency, which is RRQR
decomposition. And because there are some rare cases where RRQR will
fail to correctly reveal the rank (push all the zeros down), there must
be an ultimate, albeit much slower, fall-back method that always works,
which is basically SVD (afaik). In some real-life situations (e.g.,
inverting the Jacobian of a system), I've had problems where the QR
would fail about 1 in 1,000 times, requiring a fall-back to RRQR, and
the RRQR would then fail about 1 in 1,000,000 times, requiring a
fall-back to SVD. I don't see how one could professionally rely on
Boost.uBLAS (alone, without binding to the older Fortran libraries) if
those three layers of fall-back are not present.
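
To make the idea concrete, here is a minimal structural sketch of that
layered fall-back, written against uBLAS types. The three solver
functions (solve_qr, solve_rrqr, solve_svd) are hypothetical
placeholders for the decomposition-based solvers the project would
provide, not existing uBLAS calls:

#include <optional>
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/vector.hpp>

namespace ublas = boost::numeric::ublas;
using Matrix = ublas::matrix<double>;
using Vector = ublas::vector<double>;

// Hypothetical solvers -- not existing uBLAS functions; each returns
// std::nullopt when it cannot produce a reliable solution.
std::optional<Vector> solve_qr(const Matrix& A, const Vector& b);
std::optional<Vector> solve_rrqr(const Matrix& A, const Vector& b);
Vector solve_svd(const Matrix& A, const Vector& b); // always succeeds

// Layered fall-back: try the cheapest method first, escalate on
// failure.
Vector robust_solve(const Matrix& A, const Vector& b) {
  if (auto x = solve_qr(A, b))   return *x; // fast full-rank path
  if (auto x = solve_rrqr(A, b)) return *x; // handles rank deficiency
  return solve_svd(A, b);                   // slow but always works
}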

> 2) *Cholesky Decomposition* - *(Must have)* For symmetric Positive
> Definite systems often encountered in PDE for FEM Systems...

For Cholesky, I think it is important to also include its variants,
especially LDL and a band-limited version (for band-symmetric
matrices). That's mainly because they are very easy to implement and
are also very useful, when applicable.
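
To illustrate how little code the basic variant takes, here is a
minimal LDL^T sketch for a dense symmetric matrix, with no pivoting, so
it assumes the factorization never hits a zero pivot (the function name
and interface are just for illustration):

#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/vector.hpp>

namespace ublas = boost::numeric::ublas;

// Factor A = L * diag(d) * L^T, with L unit lower-triangular. Assumes
// A is symmetric; unlike plain Cholesky, no square roots are needed.
void ldl_decompose(const ublas::matrix<double>& A,
                   ublas::matrix<double>& L,
                   ublas::vector<double>& d) {
  const std::size_t n = A.size1();
  L = ublas::identity_matrix<double>(n);
  d.resize(n);
  for (std::size_t j = 0; j < n; ++j) {
    double dj = A(j, j);
    for (std::size_t k = 0; k < j; ++k)
      dj -= L(j, k) * L(j, k) * d(k);
    d(j) = dj; // zero here means a pivoting variant is needed
    for (std::size_t i = j + 1; i < n; ++i) {
      double s = A(i, j);
      for (std::size_t k = 0; k < j; ++k)
        s -= L(i, k) * L(j, k) * d(k);
      L(i, j) = s / dj;
    }
  }
}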

> *EIGEN DECOMPOSITION MODULES (ONLY FOR DENSE MODULES)*:

That's where it gets trickier.

First, you have to implement matrix balancing with exact
floating-point arithmetic (i.e., scaling only by powers of two, so the
similarity transformation introduces no rounding error). This is a
deceptively simple algorithm to write, but don't underestimate its
difficulty, nor its importance later on.
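
For reference, here is a simplified balancing sketch in the spirit of
Osborne's method (as in the classic EISPACK balanc routine): it
equalizes row and column norms through a diagonal similarity transform,
restricting the scaling factors to powers of two so each scaling is
exact in binary floating point. It omits the usual detection of
isolated eigenvalues, so treat it as illustrative:

#include <cmath>
#include <boost/numeric/ublas/matrix.hpp>

namespace ublas = boost::numeric::ublas;

// Repeatedly scale row i by 1/f and column i by f (a similarity
// transform, so eigenvalues are preserved), with f a power of 2.
void balance(ublas::matrix<double>& A) {
  const std::size_t n = A.size1();
  bool converged = false;
  while (!converged) {
    converged = true;
    for (std::size_t i = 0; i < n; ++i) {
      double r = 0.0, c = 0.0; // off-diagonal row / column 1-norms
      for (std::size_t j = 0; j < n; ++j) {
        if (j == i) continue;
        r += std::abs(A(i, j));
        c += std::abs(A(j, i));
      }
      if (r == 0.0 || c == 0.0) continue;
      const double r0 = r, c0 = c;
      double f = 1.0;
      while (c < r / 2.0) { c *= 2.0; r /= 2.0; f *= 2.0; }
      while (c > r * 2.0) { c /= 2.0; r *= 2.0; f /= 2.0; }
      // apply only if it clearly shrinks the norms (avoids cycling)
      if (f != 1.0 && c + r < 0.95 * (c0 + r0)) {
        converged = false;
        for (std::size_t j = 0; j < n; ++j) {
          A(i, j) /= f; // row i scaled by 1/f (exact: f is 2^k)
          A(j, i) *= f; // column i scaled by f
        }
      }
    }
  }
}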

I think that in order to tackle eigen decompositions, you will need to
work your way up through the many building blocks. You have to make
sure to have good, streamlined and flexible code for applying the three
fundamental transforms (Householder reflections, Givens rotations, and
Jacobi rotations). Then, get your hands dirty with the Hessenberg
decompositions and reductions. Then, move to the different Schur
decompositions. And then, go on to the QR/QZ steps, whose difficulty
you should not underestimate. And then, you'll have your eigenvalue
solvers.
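
To give a flavor of those building blocks, here is a small sketch of a
Givens rotation kernel, using the standard overflow-safe formulation
(the function names are just for illustration). Chains of exactly such
rotations drive the Hessenberg reduction and the QR steps mentioned
above:

#include <cmath>
#include <boost/numeric/ublas/matrix.hpp>

namespace ublas = boost::numeric::ublas;

// Compute c, s such that [c s; -s c]^T * [a; b] = [r; 0], avoiding
// overflow by dividing by the larger of |a| and |b|.
void make_givens(double a, double b, double& c, double& s) {
  if (b == 0.0) { c = 1.0; s = 0.0; return; }
  if (std::abs(b) > std::abs(a)) {
    const double t = -a / b;
    s = 1.0 / std::sqrt(1.0 + t * t);
    c = s * t;
  } else {
    const double t = -b / a;
    c = 1.0 / std::sqrt(1.0 + t * t);
    s = c * t;
  }
}

// Apply the rotation to rows i and k of A, touching only the columns
// in [first, last) -- limiting the range is what keeps Hessenberg and
// QR-step updates cheap.
void apply_givens_rows(ublas::matrix<double>& A,
                       std::size_t i, std::size_t k, double c, double s,
                       std::size_t first, std::size_t last) {
  for (std::size_t j = first; j < last; ++j) {
    const double t1 = A(i, j), t2 = A(k, j);
    A(i, j) = c * t1 - s * t2;
    A(k, j) = s * t1 + c * t2;
  }
}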

In my opinion, if you are going to implement any kind of eigenvalue
decomposition, you should just do them all. These algorithms are all
very much related (Jacobi / QR / QZ, Hessenberg / tri-diagonal, Schur /
real Schur, etc.), so you'll be better off tackling them together.

As for any additional methods, I think that one easy thing to add
would be matrix exponentials (the Pade scaling-and-squaring algorithm),
which are sometimes useful in various contexts, especially in system
dynamics. But really, I think that for me, the holy grail is an
algebraic Riccati equation (ARE) solver. I implemented one in my own
library, so I know what it entails: it's really not easy to do, and my
implementation is still rough around the edges (when numerical
conditioning is bad on the matrices involved). If you think eigenvalue
decomposition is tricky, AREs are ten times worse (and as far as I
know, my implementation is one of two implementations that exist in the
world, the other being the Fortran code written by the original authors
of the algorithm itself). I don't think this is achievable within a
GSoC project, but there are a number of building blocks within it that
are useful on their own, like the Schur decompositions and reductions.
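
For the matrix exponential, the scaling-and-squaring structure itself
is simple even though a production implementation needs care: scale A
down by a power of two, approximate the exponential of the scaled
matrix, then square the result back up. A real implementation would use
a Pade approximant for the middle step; this sketch substitutes a short
truncated Taylor series for brevity, so treat it as illustrative only:

#include <algorithm>
#include <cmath>
#include <boost/numeric/ublas/matrix.hpp>

namespace ublas = boost::numeric::ublas;
using Matrix = ublas::matrix<double>;

// exp(A) = (exp(A / 2^s))^(2^s), with s chosen so the scaled matrix
// has a small norm and the series converges fast.
Matrix expm(const Matrix& A) {
  const std::size_t n = A.size1();
  // infinity norm (max absolute row sum) of A
  double nrm = 0.0;
  for (std::size_t i = 0; i < n; ++i) {
    double row = 0.0;
    for (std::size_t j = 0; j < n; ++j) row += std::abs(A(i, j));
    nrm = std::max(nrm, row);
  }
  // pick s so that ||A / 2^s|| <= 0.5
  int s = 0;
  while (nrm > 0.5) { nrm /= 2.0; ++s; }
  Matrix B = A / std::pow(2.0, s);

  // truncated Taylor series for exp(B); a Pade approximant with the
  // matching error analysis belongs here in a production version
  Matrix E = ublas::identity_matrix<double>(n);
  Matrix term = ublas::identity_matrix<double>(n);
  for (int k = 1; k <= 12; ++k) {
    term = Matrix(ublas::prod(term, B)) / double(k);
    E += term;
  }
  // undo the scaling by repeated squaring
  for (int k = 0; k < s; ++k)
    E = ublas::prod(E, E);
  return E;
}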

By the way, if you need additional inspiration, you might want to look at
my matrix numerical methods code:
https://github.com/mikael-s-persson/ReaK/tree/master/src/ReaK/core/lin_alg
Most of this code is a bit old, and still somewhat rough around the
edges, but I've been using it for years with great success.

I would also be happy to help in the mentorship of this GSoC project.
Things are in a state of flux for me right now (in my work), so I'm not
exactly sure how much time I could dedicate to that over the summer.

Cheers,
Mikael.

-- 
Sven Mikael Persson, M.Sc.(Tech.)
PhD Candidate and Vanier CGS Scholar,
Department of Mechanical Engineering,
McGill University,
