From: Paul A. Bristow (pbristow_at_[hidden])
Date: 2001-03-21 07:46:12


I concur with this excellent summary of requirements,
but I would like to add cautionary
comments based on unpleasant experiences in trying to use these things in
anger.

1 The business of getting numbers from devices or files into suitable
structures is messy, tiresome, and error-prone, for starters.

2 There are nasty interactions between the choice of array type (vector,
valarray, matrix, etc.) and the real business of numerical work.

3 The zero-based versus one-based indexing difference between C and
FORTRAN (BLAS) is still a major practical barrier, mainly because it
makes every array access a potential minefield (a small indexing sketch
follows this list).

4 I have found that F2C produces diabolical and inscrutable code.
It is only useful for running a complete FORTRAN program for comparison
purposes. I have found it better to start writing in Std C++ using the
same algorithm and to check that the results are the same.

5 Non-standard compilers, especially MSVC6, are a very serious problem
if we make heavy use of templates, which are essential for speed.
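
To make point 3 concrete, here is a minimal sketch of the kind of
translation layer one ends up writing. The at_fortran helper is purely
illustrative and not part of any real BLAS binding, but the 1-based,
column-major convention it converts from is what FORTRAN BLAS/LAPACK
routines expect and report:

#include <cstddef>
#include <iostream>
#include <vector>

// FORTRAN stores matrices column-major and numbers rows and columns
// from 1; C++ code usually counts from 0, so every access across the
// language boundary needs a translation like this.
inline double& at_fortran(std::vector<double>& a, std::size_t lda,
                          std::size_t i, std::size_t j) // i, j are 1-based
{
    return a[(j - 1) * lda + (i - 1)]; // 0-based, column-major offset
}

int main()
{
    const std::size_t n = 3;
    std::vector<double> a(n * n, 0.0);     // column-major, as BLAS expects

    // Fill the diagonal using FORTRAN-style 1-based indices.
    for (std::size_t k = 1; k <= n; ++k)
        at_fortran(a, n, k, k) = static_cast<double>(k);

    // The same element seen from both conventions: FORTRAN's A(2,2) is
    // C++'s a[(2-1)*n + (2-1)]; forgetting the "- 1" is the classic trap.
    std::cout << at_fortran(a, n, 2, 2) << " == "
              << a[(2 - 1) * n + (2 - 1)] << '\n';
}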

I get the impression that more than half the BOOST effort is going into
alligator wrestling!

So perhaps this matrix proposal is very important, but premature?

Dr Paul A Bristow, hetp Chromatography
Prizet Farmhouse
Kendal, Cumbria
LA8 8AB UK
+44 1539 561830
mailto:pbristow_at_[hidden]

> --- In boost_at_y..., Jeremy Siek <jsiek_at_l...> wrote:
> > BTW, this kind of process led to the development of the
> > original BLAS... they are the basic operations needed in the direct
> dense
> > matrix solvers provided by LAPACK.
> >
> > I'd challenge those interested in boost numerics libraries to come
> forward
> > with concrete lists of problems and algorithms for a particular
> domain.
> > With those in hand we'll be able to determine scope and priority for
> > various pieces, and it will provide a firmer ground for discussing
> > interface and implementation issues.
> >
> > Personally, I'll contribute to the list of algorithms for the
> > traditional linear algebra domain.
>
> Let me take a broad stab at a couple of different levels. At the
> highest level, the canonical set of problem classes for numerical
> linear algebra consist of:
>
> o) Solving linear systems of equations
> o) Solving least-squares problems (e.g., over- and under-determined
> systems)
> o) SVD
> o) Eigensystems
>
> Within each of these problem classes, solution methods fall along the
> lines of application area and problem representation. E.g., there
> will be different algorithms for symmetric versus unsymmetric
> systems, dense or sparse, direct or iterative. I suspect that the
> result of this survey will be that it would be nice to have an
> algorithm for each possible combination of dense/sparse,
> symmetric/unsymmetric, and direct/iterative -- with the possible
> exception of dense+iterative -- although this is not unheard of.
> LAPACK, for example, covers all of these combinations for the dense case.
>
> Going one level deeper -- one would want to provide algorithms at all
> the leaf nodes of the following hierarchy. I list some example
> classical algorithms for each leaf -- note that there are a number of
> implementation variations (left-looking vs. right-looking Cholesky
> factorizations, e.g.). Note also that beyond linear systems, direct
> methods for sparse systems become more and more difficult because the
> popular factorizations tend to destroy sparsity. There are sparsity
> preserving QR factorizations, but I am not aware of anything
> equivalent for SVD or eigenproblems.
>
> I. Linear systems
> A. Direct
> 1. Symmetric
> a. Dense -- Cholesky factorization, forward + back solves
> b. Sparse -- Cholesky factorization, forward + back solves,
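
For the dense symmetric leaf above, a bare-bones textbook sketch of the
Cholesky factorization with forward and back substitution might look
roughly like this. It is only an illustration against nested
std::vector, not a proposed Boost interface, and it omits the checks,
blocking, and proper matrix type a real implementation would need:

#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

typedef std::vector<std::vector<double> > Matrix;

// Factor a symmetric positive definite A in place; on return the lower
// triangle (including the diagonal) holds L with A = L * L^T.
void cholesky(Matrix& a)
{
    const std::size_t n = a.size();
    for (std::size_t j = 0; j < n; ++j) {
        for (std::size_t k = 0; k < j; ++k)
            a[j][j] -= a[j][k] * a[j][k];
        a[j][j] = std::sqrt(a[j][j]);
        for (std::size_t i = j + 1; i < n; ++i) {
            for (std::size_t k = 0; k < j; ++k)
                a[i][j] -= a[i][k] * a[j][k];
            a[i][j] /= a[j][j];
        }
    }
}

// Solve A x = b given the factor L: forward solve L y = b, then back
// solve L^T x = y.  b is taken by value and overwritten with x.
std::vector<double> solve(const Matrix& l, std::vector<double> b)
{
    const std::size_t n = l.size();
    for (std::size_t i = 0; i < n; ++i) {          // forward substitution
        for (std::size_t k = 0; k < i; ++k)
            b[i] -= l[i][k] * b[k];
        b[i] /= l[i][i];
    }
    for (std::size_t i = n; i-- > 0; ) {           // back substitution
        for (std::size_t k = i + 1; k < n; ++k)
            b[i] -= l[k][i] * b[k];                // L^T(i,k) == L(k,i)
        b[i] /= l[i][i];
    }
    return b;
}

int main()
{
    // A small symmetric positive definite system, A x = b.
    Matrix a(3, std::vector<double>(3));
    double values[3][3] = { {4, 2, 2}, {2, 5, 3}, {2, 3, 6} };
    for (std::size_t i = 0; i < 3; ++i)
        for (std::size_t j = 0; j < 3; ++j)
            a[i][j] = values[i][j];

    std::vector<double> b(3);
    b[0] = 10; b[1] = 15; b[2] = 19;

    cholesky(a);
    std::vector<double> x = solve(a, b);
    for (std::size_t i = 0; i < 3; ++i)
        std::cout << x[i] << ' ';                  // expect 0.6875 1.375 2.25
    std::cout << '\n';
}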

