Date: 2001-03-14 09:12:56
--- In boost_at_y..., jerome_lecomte_at_c... wrote:
> The work already done on LAPACK is also huge. I mean HUGE: we're
> talking of 10 years of work; and the library is actually quite well
> designed once you get familiar with their cryptic naming scheme.
One has to wonder how much of that work was due to the limitations of
their development language. This is not to disparage the
intellectual contribution of LAPACK in any way -- I know Jack
Dongarra, Jack Dongarra is a friend of mine, and I am no Jack
Dongarra -- however, if you were to categorize what is there
algorithmically versus what is there in the implementation because
they could not perform very much re-use (except with BLAS of
course!), you would find a lot of bloat.
For instance, every algorithm has to be separately implemented four
distinct times -- once each for single-precision real, double-precision
real, single-precision complex, and double-precision complex (the S, D,
C, and Z routine prefixes). Then, some of the algorithms are implemented
several more times over for different storage formats (rectangular,
triangular, packed, etc.).
I suspect a generic implementation of LAPACK would be quite a bit
more compact than the existing LAPACK. It would still be a huge
undertaking to get the numerical stuff right -- and this is LAPACK's
primary contribution IMHO.
> FYI, I've never used Lapack in real work myself, but was very
> impressed with it (see doc on netlib). Feel free to correct me if I
> misunderstood anything.
One limitation of LAPACK, of course, is that it only works with dense
systems (and with matrices stored in column-major order). Many real-life
problems (at least in large-scale scientific and engineering
computing) are sparse. Although I have done lots and lots of
numerical computing in my life, I too have never had an opportunity
to use LAPACK -- it hasn't been applicable.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk