
From: Philipp K. Janert (janert_at_[hidden])
Date: 2001-08-28 17:16:45


Dear All!

I would like to support those who suggested a SIMPLE
interface for matrix types. In my experience, special
problems in numerical analysis require special approaches
anyway. This is massively true for non-linear problems,
but is not that far off even in the linear case. A general
library should therefore attempt to make the "normal"
case as convenient as possible. Trying to provide a general
library for special problems is at best futile, at worst
misleading (because it gives a false sense of security).

As a start, I would like to provide a catalogue of the most
common operations that can be invoked on vectors and matrices
and some related issues (some of them have already been
mentioned in previous postings). Let's group them into four
categories:
  i) Basic vector space operations
 ii) Matrix decomposition and inversion
iii) Matrix element types
 iv) Special cases

i) Basic vector space operations
================================
There is actually a rather small set of operations
defined on vectors and matrices that might be considered
"standard".

1) The usual vector space operations for vectors and matrices
        (Addition and Subtraction, Multiplication and Division
        by a scalar)
2) Vector product, matrix product, matrix times vector
3) Absolute value of a vector
4) Transpose, Hermitian conjugate
5) Creation and Initialization (Unit matrix, Hilbert matrix,
        zero matrix, ...)
6) Element access and submatrices, rows and columns

I would consider it desirable to make these operations
available via operator overloading so that one can write
        matrix1 + matrix2
etc. (for the sake of consistency, do as the ints do).
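To make this concrete, here is a minimal sketch of such an operator-overloaded dense matrix (all names purely illustrative, not a proposed interface), covering just elementwise addition, scalar multiplication, and element access from the list above:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal dense matrix sketch: row-major storage, double elements only.
class Matrix {
public:
    Matrix(std::size_t rows, std::size_t cols, double init = 0.0)
        : rows_(rows), cols_(cols), data_(rows * cols, init) {}

    std::size_t rows() const { return rows_; }
    std::size_t cols() const { return cols_; }

    // Element access (operation 6 above), row-major layout.
    double& operator()(std::size_t i, std::size_t j) { return data_[i * cols_ + j]; }
    double  operator()(std::size_t i, std::size_t j) const { return data_[i * cols_ + j]; }

private:
    std::size_t rows_, cols_;
    std::vector<double> data_;
};

// matrix1 + matrix2, elementwise (operation 1 above).
Matrix operator+(const Matrix& a, const Matrix& b) {
    assert(a.rows() == b.rows() && a.cols() == b.cols());
    Matrix r(a.rows(), a.cols());
    for (std::size_t i = 0; i < a.rows(); ++i)
        for (std::size_t j = 0; j < a.cols(); ++j)
            r(i, j) = a(i, j) + b(i, j);
    return r;
}

// Multiplication by a scalar (also operation 1 above).
Matrix operator*(double s, const Matrix& a) {
    Matrix r(a.rows(), a.cols());
    for (std::size_t i = 0; i < a.rows(); ++i)
        for (std::size_t j = 0; j < a.cols(); ++j)
            r(i, j) = s * a(i, j);
    return r;
}
```

With these definitions, client code reads exactly like arithmetic on the built-in types: `Matrix c = a + b;` or `Matrix d = 2.0 * a;`.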

These six topics just about exhaust the general operations!

Note that matrix inversion is not included in the above
list - that's because there is no one preferred way to
compute it. It depends not only on the values in the
matrix, but also on the intended use (e.g., a QR decomposition
may be more advantageous for my current problem, since I
will be able to update it; if that's not an issue, LU is
probably better, etc.).

ii) Matrix decomposition and inversion
======================================
For real matrices only, there are a handful of standard
operations, practically all of them based on matrix
factorizations, to wit:
1) Cholesky (A=transpose(R) * R, where R: upper triangular)
2) LU (A=LU, where L is lower, U is upper triangular)
3) QR (A=QR, where Q is orthogonal, and R is triangular)
4) SVD (Singular Value Decomposition)
5) Spectral (U*diag*transpose(U))
There are special, very efficient algos available for
certain other kinds of matrices: tri- and band-diagonal,
Vandermonde, and Toeplitz. Spectral decomposition (i.e.,
eigenvalue calculation) is already a somewhat special
case, usually requiring the matrix to be symmetric for
the problem to be well posed (otherwise we get into
left- and right-eigenvector issues).

I think to provide a single "wrapper" over these various
algos would be the wrong approach - it will be necessary
for the programmer to be able to choose the right algo for
each problem individually. However, a group of functions
that co-operate well would be helpful. Questions of
co-operation become the main sticking point. Example: if
the matrix is very large, it may be beneficial to return
both parts of an LU decomposition as part of the same
matrix. However, this is an ugly pain in the neck, which
one does not want to deal with, unless necessary. Trade off?
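To illustrate the packed-storage trade-off, here is a sketch of an in-place LU decomposition (Doolittle form, no pivoting; real code needs partial pivoting, so treat this strictly as a sketch) in which L and U share one matrix: U occupies the diagonal and above, L (with an implicit unit diagonal) sits below.

```cpp
#include <cstddef>
#include <vector>

// In-place packed LU: a is an n-by-n matrix in row-major order.
// After the call, a holds U on and above the diagonal and the
// multipliers of L (unit diagonal implied) below it.
// NOTE: no pivoting, so this fails on matrices with small pivots.
void lu_packed(std::vector<double>& a, std::size_t n) {
    for (std::size_t k = 0; k < n; ++k) {
        for (std::size_t i = k + 1; i < n; ++i) {
            a[i * n + k] /= a[k * n + k];              // multiplier -> L(i,k)
            for (std::size_t j = k + 1; j < n; ++j)
                a[i * n + j] -= a[i * n + k] * a[k * n + j];  // update trailing block
        }
    }
}
```

The packing halves the storage, but the caller must now remember the implicit unit diagonal and unpack L and U by hand; exactly the kind of ugliness one only accepts when the matrix is large enough to force it.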

iii) Matrix element types
=========================
One thing that I believe would be very beneficial is
to define the requirements for the element types. If one
required that each type permitted as a matrix (or vector)
element support a certain set of operations (such as add,
sub, mul, ...), then algos (such as LU etc.), defined
exclusively in terms of those operations, should be generally
applicable. (Unfortunately, that's the theory; in practice
there are problems. Real and complex numbers are not
necessarily interchangeable in numerical computations, even
though both support the same set of operations.)
Candidates are: built-in numerical types, complex numbers,
multi-precision integer and floating point types, rational
numbers. Furthermore: matrices!
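A small sketch of what such a requirements-based algo could look like: a dot product written purely against the element-type operations (a default-constructed zero, `+=`, and `*`), so that any of the candidate types above can be plugged in.

```cpp
#include <complex>
#include <cstddef>
#include <vector>

// Generic dot product: T must provide a default-constructed
// additive identity, operator+=, and operator*. Nothing else is
// assumed, so double, std::complex, rationals, or even matrices
// (for block algorithms) all qualify.
template <typename T>
T dot(const std::vector<T>& x, const std::vector<T>& y) {
    T sum = T();                        // additive identity
    for (std::size_t i = 0; i < x.size(); ++i)
        sum += x[i] * y[i];             // only ring operations used
    return sum;
}
```

Note how the complex case illustrates the caveat above: `dot` compiles and runs for `std::complex<double>`, but a true complex inner product requires conjugating one argument; supporting the same operations does not mean the same algo is numerically (or mathematically) the right one.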

iv) Special cases
=================
are abundant. The most important ones are the various
forms of sparseness, usually combined with large (or
very large) size. Other examples: ill-conditioned,
possibly requiring iterative improvement or SVD editing.
And all the special cases of the eigenproblem.
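To show why sparseness resists a generic dense interface, here is a sketch of compressed-sparse-row (CSR) storage with its matrix-vector product (names illustrative only): element writes, submatrices, and most of the part i) operations simply do not map onto it cheaply.

```cpp
#include <cstddef>
#include <vector>

// Compressed sparse row storage: only nonzeros are kept.
struct CsrMatrix {
    std::size_t rows;
    std::vector<std::size_t> row_start;  // rows + 1 offsets into col/val
    std::vector<std::size_t> col;        // column index of each nonzero
    std::vector<double>      val;        // value of each nonzero
};

// y = A * x, touching only the stored nonzeros -- the operation
// sparse formats are designed around.
std::vector<double> multiply(const CsrMatrix& a, const std::vector<double>& x) {
    std::vector<double> y(a.rows, 0.0);
    for (std::size_t i = 0; i < a.rows; ++i)
        for (std::size_t k = a.row_start[i]; k < a.row_start[i + 1]; ++k)
            y[i] += a.val[k] * x[a.col[k]];
    return y;
}
```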

To summarize:
A generic interface comprising the operations of part i)
would be most helpful - at least it provides a starting
point. I am less optimistic about a generic interface
description for part ii) - I am sure it can be achieved,
but I am not so certain that it will be very useful.
Part iii) provides exciting prospects - but one should
note well the caveat: Numerical Analysis can be a
fickle art. As to part iv) - that might be better left
to specialised libraries.

I guess the most important point that this message
wants to convey is that Numerical Analysis is very
much about solving concrete problems. Any design
that focusses too much on abstract interfaces is likely
to miss the point (being anything from wastefully
inefficient to just plain wrong).

By the way, is there any active collaboration with OON?
No need to reinvent the wheel...

Best regards,

                        Ph.

-----------------------------------------------------------------

Philipp K. Janert, Ph.D. janert_at_[hidden]


Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk