Subject: Re: [boost] [GSOC2015] uBLAS Matrix Solver Project
From: Rajaditya Mukherjee (rajaditya.mukherjee_at_[hidden])
Date: 2015-03-07 12:48:06


Hi Mikael and Thijs,

Thank you both for your responses. I would like to address each of you in
turn.

*Thijs* : I am not so sure what the algorithmic challenges of such an
undertaking are. More specifically, I am concerned about whether they would
be substantial enough for a GSoC project. If you think this is something
you could mentor me on, I am very open to the idea. I use Intel's
high-performance MKL libraries in my day-to-day research, and one of the
reasons Eigen is so attractive is that it can be made to call the
high-performance MKL routines in the backend with a single preprocessor
define. I am sure such functionality would go a long way toward enhancing
the usefulness of Boost. Let me know your thoughts on this.
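
For reference, the switch I mean in Eigen is just a macro defined before
the Eigen headers are included. A minimal sketch (assuming MKL is installed
and linked against):

    #define EIGEN_USE_MKL_ALL  // route supported kernels through Intel MKL
    #include <Eigen/Dense>

    int main() {
        Eigen::MatrixXd A = Eigen::MatrixXd::Random(512, 512);
        Eigen::MatrixXd B = Eigen::MatrixXd::Random(512, 512);
        Eigen::MatrixXd C = A * B;  // dispatched to MKL's dgemm
        return 0;
    }

Something similar in uBLAS, routing to a vendor BLAS behind a compile-time
switch, would be a big usability win.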

*Mikael* : After reading your observations, I believe a good way to write
the proposal would be to skip the sparse decompositions altogether and
focus on a subset of dense decompositions and methods, but test them
thoroughly to account for numerical stability. As you mentioned, such an
endeavour would require writing quite a lot of additional helper
decompositions (for example, an actually stable eigenvalue solver requires
implementing supporting algorithms such as the Schur, QR, Hessenberg, and
tridiagonal decompositions)...

With that in mind, I would like to propose for this project a complete set
of dense eigenvalue solvers with maturity comparable to the Eigen library.
What do you think? Regarding the mentorship, perhaps David and you can work
something out depending on your time commitments. If you have any more
queries about my experience with numerical solvers, please let me know.

*Rajaditya Mukherjee *
*3rd Year Graduate Student*
Dept. of Computer Science and Engineering
The Ohio State University
Columbus, Ohio
Tel :- +1-(614)-271-4439
email :- rajaditya.mukherjee_at_[hidden] <mukherjee.62_at_[hidden]>,
raj_at_[hidden] <mukherjr_at_[hidden]>

On Fri, Mar 6, 2015 at 4:37 PM, Mikael Persson <mikael.s.persson_at_[hidden]>
wrote:

> Hi Raj,
>
> I think you have a pretty good list there. It has a good set of fundamental
> methods that are very useful and reasonably easy to implement, and you are
> avoiding (for now) the more advanced methods that are very tricky to
> implement (and that very few people have implemented at all, since most
> people bind to legacy fortran routines instead).
>
> Here are a few notes:
>
> > 1) *QR Decomposition* - *(Must have)* For orthogonalization of column
> > spaces and solutions to linear systems. (Bonus : Also rank revealing..)
>
> QR decomp is obviously a must-have, but I would say that the rank-revealing
> version is also a must-have. Ideally, we should be able to cover the full
> trio of matrix inversion / linear solution: QR, RRQR and SVD.
> In my experience, the best way to do a general matrix inversion or linear
> solution robustly is to first try the simplest and fastest method, which is
> QR decomposition. This can fail due to rank deficiency, at which point you
> fall back to the next simplest and fastest method that can deal with that
> deficiency, which is RRQR decomposition. And because there are some rare
> cases where RRQR will fail to correctly reveal the rank (push all the
> zeros down), there must be an ultimate, albeit much slower, fall-back
> method that will always work, which is basically SVD (afaik). In some
> real-life situations (e.g., inverting the Jacobian of a system), I've had
> problems where QR would fail about 1 in 1,000 times, requiring a fall-back
> to RRQR, and RRQR in turn would fail about 1 in 1,000,000 times, requiring
> a fall-back to SVD. I don't see how one could professionally rely on
> Boost.uBLAS (alone, without binding to the older Fortran libraries) if
> those three layers of fall-back are not present.
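>
> In pseudo-code, the cascade I have in mind is just this (the function and
> exception names are made up for illustration; nothing like this exists in
> uBLAS yet):
>
>     // Hypothetical interface, to illustrate the three-layer fall-back:
>     template <typename Matrix, typename Vector>
>     Vector robust_solve(const Matrix& A, const Vector& b) {
>         try {
>             return solve_qr(A, b);          // fast path: plain QR
>         } catch (const rank_deficiency_error&) {
>             try {
>                 return solve_rrqr(A, b);    // fall-back: rank-revealing QR
>             } catch (const rank_deficiency_error&) {
>                 return solve_svd(A, b);     // last resort: SVD always works
>             }
>         }
>     }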
>
> > 2) *Cholesky Decomposition* - *(Must have)* For symmetric Positive
> > Definite systems often encountered in PDE for FEM Systems...
>
> For Cholesky, I think it is important to also include its variants,
> especially LDL^T and the band-limited version (for band-symmetric
> matrices). That's mainly because implementing them is very easy and they
> are also very useful when applicable.
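>
> To give a flavour of how little code the LDL^T variant takes, here is a
> rough sketch on top of uBLAS matrices (no pivoting, and it assumes a
> symmetric positive-definite input; a real implementation would add error
> checks):
>
>     #include <boost/numeric/ublas/matrix.hpp>
>     #include <cstddef>
>
>     using boost::numeric::ublas::matrix;
>
>     // Factor A = L * D * L^T, L unit lower-triangular, D diagonal.
>     void ldlt(const matrix<double>& A, matrix<double>& L, matrix<double>& D) {
>         const std::size_t n = A.size1();
>         L.resize(n, n); L.clear();
>         D.resize(n, n); D.clear();
>         for (std::size_t j = 0; j < n; ++j) {
>             double d = A(j, j);
>             for (std::size_t k = 0; k < j; ++k)
>                 d -= L(j, k) * L(j, k) * D(k, k);
>             D(j, j) = d;        // no pivoting: assumes d stays away from 0
>             L(j, j) = 1.0;
>             for (std::size_t i = j + 1; i < n; ++i) {
>                 double s = A(i, j);
>                 for (std::size_t k = 0; k < j; ++k)
>                     s -= L(i, k) * L(j, k) * D(k, k);
>                 L(i, j) = s / d;
>             }
>         }
>     }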
>
> > *EIGEN DECOMPOSITION MODULES (ONLY FOR DENSE MODULES):*
>
> That's where it gets trickier.
>
> First, you have to implement matrix balancing with exact floating-point
> arithmetic. This is a deceptively simple algorithm to write, but don't
> underestimate its difficulty, nor its importance later on.
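>
> For concreteness, the classic Parlett-Reinsch balancing pass looks roughly
> like this (a sketch only; the point is that scaling by powers of 2 is
> exact in binary floating point, so the eigenvalues are not perturbed):
>
>     #include <boost/numeric/ublas/matrix.hpp>
>     #include <cmath>
>     #include <cstddef>
>
>     using boost::numeric::ublas::matrix;
>
>     // Similarity-scale each row/column pair by a power of 2 until the
>     // off-diagonal row and column norms are roughly balanced.
>     void balance(matrix<double>& A) {
>         const std::size_t n = A.size1();
>         bool done = false;
>         while (!done) {
>             done = true;
>             for (std::size_t i = 0; i < n; ++i) {
>                 double c = 0.0, r = 0.0;
>                 for (std::size_t k = 0; k < n; ++k) {
>                     if (k == i) continue;
>                     c += std::fabs(A(k, i));   // column norm (off-diagonal)
>                     r += std::fabs(A(i, k));   // row norm (off-diagonal)
>                 }
>                 if (c == 0.0 || r == 0.0) continue;
>                 const double s = c + r;
>                 double f = 1.0;
>                 while (c < r / 2.0)  { c *= 2.0; r /= 2.0; f *= 2.0; }
>                 while (c >= 2.0 * r) { c /= 2.0; r *= 2.0; f /= 2.0; }
>                 if (f != 1.0 && c + r < 0.95 * s) {  // guard against cycling
>                     done = false;
>                     for (std::size_t k = 0; k < n; ++k) {
>                         A(i, k) /= f;   // apply D^{-1} A D with D(i,i) = f
>                         A(k, i) *= f;
>                     }
>                 }
>             }
>         }
>     }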
>
> I think that in order to tackle eigen decompositions, you will need to
> work your way up through its many building blocks. You have to make sure
> to have good, streamlined and flexible code for applying the three
> fundamental transforms (Householder reflections, Givens rotations, and
> Jacobi rotations). Then, get your hands dirty with the Hessenberg
> decompositions and reductions. Then, move to the different Schur
> decompositions. And then, go on to the QR/QZ steps, whose difficulty you
> should not underestimate. And then, you'll have your eigenvalue solvers.
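>
> As one example of those building blocks, a numerically careful Givens
> rotation is only a few lines (this uses one common convention; getting
> the overflow-safe branches right is the part that matters):
>
>     #include <cmath>
>
>     // Compute c, s such that  [ c  -s ] [a]   [r]
>     //                         [ s   c ] [b] = [0],
>     // dividing by the larger of |a|, |b| to avoid overflow.
>     void givens(double a, double b, double& c, double& s) {
>         if (b == 0.0) { c = 1.0; s = 0.0; return; }
>         if (std::fabs(b) > std::fabs(a)) {
>             const double t = -a / b;
>             s = 1.0 / std::sqrt(1.0 + t * t);
>             c = s * t;
>         } else {
>             const double t = -b / a;
>             c = 1.0 / std::sqrt(1.0 + t * t);
>             s = c * t;
>         }
>     }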
>
> In my opinion, if you are going to implement any kind of eigenvalue
> decomposition, you should just do them all. These algorithms are all very
> much related (Jacobi / QR / QZ, Hessenberg / tridiagonal, Schur /
> real-Schur, etc.); you'll be better off tackling them together.
>
> As far as additional methods go, I think one easy thing to add would be
> matrix exponentials (the Pade scaling-and-squaring algorithm), which are
> sometimes useful in various contexts, especially in system dynamics. But
> really, for me the holy grail is an algebraic Riccati equation (ARE)
> solver. I implemented one in my own library, so I know what it entails:
> it's really not easy to do, and my implementation is still rough around
> the edges (when the numerical conditioning of the matrices involved is
> bad). If you think eigenvalue decomposition is tricky, AREs are ten times
> worse (and as far as I know, my implementation is one of only two in
> existence, the other being the Fortran code written by the original
> authors of the algorithm). I don't think
> this is possible to achieve for a GSoC project, but there are a number of
> building blocks within it that are useful on their own, like Schur and
> reductions.
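>
> (For the matrix exponential, by the way, the scaling-and-squaring skeleton
> itself is simple; here is a toy version with a truncated series standing
> in for the Pade approximant, just to show the structure:)
>
>     #include <boost/numeric/ublas/matrix.hpp>
>     #include <cmath>
>     #include <cstddef>
>
>     using namespace boost::numeric::ublas;
>
>     matrix<double> expm(matrix<double> A) {
>         const std::size_t n = A.size1();
>         // 1. Scale A down until its norm is small.
>         int k = 0;
>         for (double nrm = norm_inf(A); nrm > 0.5; nrm /= 2.0) ++k;
>         A /= std::pow(2.0, k);
>         // 2. exp of the scaled matrix; a real implementation would use a
>         //    Pade approximant here rather than a truncated series.
>         matrix<double> E = identity_matrix<double>(n);
>         matrix<double> term = identity_matrix<double>(n);
>         for (int j = 1; j <= 12; ++j) {
>             matrix<double> tmp = prod(term, A);  // temp avoids aliasing
>             term = tmp / double(j);
>             E += term;
>         }
>         // 3. Undo the scaling: exp(A) = exp(A / 2^k) ^ (2^k).
>         for (int j = 0; j < k; ++j) {
>             matrix<double> tmp = prod(E, E);
>             E = tmp;
>         }
>         return E;
>     }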
>
> By the way, if you need additional inspiration, you might want to look at
> my matrix numerical methods code:
> https://github.com/mikael-s-persson/ReaK/tree/master/src/ReaK/core/lin_alg
> Most of this code is a bit old, and still somewhat rough around the edges,
> but I've been using it for years and with great success.
>
> I would also be happy to help in the mentorship of this GSoC project.
> Things are in a state of flux for me right now (in my work), so I'm not
> exactly sure how much time I could dedicate to that over the summer.
>
> Cheers,
> Mikael.
>
> --
> Sven Mikael Persson, M.Sc.(Tech.)
> PhD Candidate and Vanier CGS Scholar,
>
> Department of Mechanical Engineering,
> McGill University,
>

