Glas :

Re: [glas] summary of functionalities

From: C J Kenneth Tan -- OptimaNumerics (cjtan_at_[hidden])
Date: 2005-02-09 13:59:28

Hi Karl and Yves,

Looking primarily at dense cases:

> Simple expressions can be translated automatically to BLAS calls for the dense
> case. Most expressions (in my experience) are a combination of BLAS
> expressions, so this is no problem. For the sparse case, the situation is
> different. There we need efficient implementations for specific operations.
> When data are stored on disk, we also need specific algorithms. Similarly, for
> structured matrices (e.g. Hankel), we need specific algorithms.
> Andrew asked for which algorithms we want to use GLAS. We have to make a list,
> which we have to provide, but it should be possible to add algorithms in the
> future. We cannot predict what is needed in 5 years. Also BLAS evolved over
> many years.

It would make sense to have code that simply translates to BLAS calls
and relies on an existing BLAS implementation. While it is easy to get
a working implementation of BLAS, it is not always easy to get good
BLAS performance. So it would make sense to have GLAS simply call
BLAS, to take advantage of tuned BLAS libraries.

> The minimum we need to provide is the functionality of the BLAS for dense and
> sparse and structured matrices. This allows for developing more complicated
> algorithms such as LU factorization, QR factorization etc. using these basic
> operations (as LAPACK is built upon BLAS).

What about using the same model as above (the BLAS case) for LAPACK as
well? Just have code that calls tuned LAPACK routines? In general,
LAPACK code is even more difficult to tune than BLAS code.

Kenneth Tan
News: OptimaNumerics Powers Russia's Fastest Supercomputer
C. J. Kenneth Tan, Ph.D.
OptimaNumerics Ltd.
E-mail: cjtan_at_[hidden] Telephone: +44 798 941 7838
Web: Facsimile: +44 289 066 3015