
From: jlecomte1972_at_[hidden]
Date: 2001-03-13 11:23:32


lums_at_l... wrote:
> One has to be quite careful here when talking about performance and
> Fortran and so forth. There is nothing per se about one language
> versus another that gives or takes away performance. Fortran
> compilers have an easier task with certain optimizations because of
> intrinsic complex data types and because of guarantees not to alias,
> but these can both be gotten around.

Agreed.
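
Just to make the aliasing point concrete, here is a minimal sketch. It
assumes a compiler that understands a restrict-style hint (restrict is
C99; __restrict is a common but non-standard C++ extension):

// With the no-alias promise the compiler may keep values in registers
// and vectorize/pipeline the loop, which is what Fortran gets for free;
// without it, a store to y[] could in principle change x[].
void daxpy(int n, double a,
           const double* __restrict x,
           double* __restrict y)
{
    for (int i = 0; i < n; ++i)
        y[i] += a * x[i];
}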

> If you just go to netlib and download the reference BLAS
> implementation you will not get very good performance (I am thinking
> particularly of that grand-daddy of all linear algebra performance
> metrics, DGEMM). To get high levels of performance (some significant
> fraction of machine peak) requires careful hand structuring of the
> code to do things like tiling, cache blocking, software pipelining
> and the like. However, all of these things can be done in C++ as
> well -- and they can be done in a much nicer way -- and they can be
> made much more tunable -- and they can be done with template meta-
> programs. The bottom line being that at least with KCC (and in the
> future with other compilers), you can write a very concise but very
> high performance library. MTL (cf Jeremy's thesis) achieves
> performance better than Fortran -- and better than vendor libraries
> (some written in Fortran, some in C I think).

I agree again. PETE, Blitz, and MTL will certainly be useful in the
future (and most likely already are for some). But I don't see why we
should start yet another template-based linear algebra package. Well,
to be honest, I guess the reason would be the license policy.
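
(As an aside, for anyone not familiar with the tiling/cache blocking
mentioned above, it is in spirit just this kind of loop restructuring.
A deliberately naive sketch, not what MTL or ATLAS actually generates,
with a made-up block size:)

const int BS = 64;  // hypothetical block size; real libraries tune it per machine

// Blocked C += A*B on n-by-n row-major matrices (C zeroed by the caller).
// Real kernels also block for registers, unroll, software-pipeline, etc.
void gemm_blocked(int n, const double* A, const double* B, double* C)
{
    for (int ii = 0; ii < n; ii += BS)
        for (int kk = 0; kk < n; kk += BS)
            for (int jj = 0; jj < n; jj += BS)
                for (int i = ii; i < ii + BS && i < n; ++i)
                    for (int k = kk; k < kk + BS && k < n; ++k)
                        for (int j = jj; j < jj + BS && j < n; ++j)
                            C[i*n + j] += A[i*n + k] * B[k*n + j];
}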

I feel, though, that it is simpler to wrap the existing Fortran code
base (i.e. ATLAS/LAPACK on Unices and MKL/LAPACK on Win32) than to
rewrite MTL from scratch. Both ATLAS and MKL are optimized
implementations of BLAS (I don't know how they compare to MTL) and
are, all in all, quite portable, but correct me if I am wrong (my
references are Win32/VC++ and Cygwin/gcc).
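
To be concrete about what I mean by wrapping: something along the
lines below. This is only a sketch; the trailing-underscore symbol and
by-reference arguments follow the usual Fortran binding, but the exact
convention varies a little with the platform and compiler.

// The Fortran DGEMM entry point, as most BLAS libraries export it.
extern "C" void dgemm_(const char* transa, const char* transb,
                       const int* m, const int* n, const int* k,
                       const double* alpha, const double* a, const int* lda,
                       const double* b, const int* ldb,
                       const double* beta, double* c, const int* ldc);

// Thin C++ wrapper: C = alpha*A*B + beta*C, column-major storage,
// with A m-by-k, B k-by-n, C m-by-n.
void gemm(int m, int n, int k, double alpha,
          const double* a, const double* b, double beta, double* c)
{
    const char no = 'N';
    dgemm_(&no, &no, &m, &n, &k, &alpha, a, &m, b, &k, &beta, c, &m);
}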

> Now, vendor-tuned libraries can give you good performance (which is
> where BLAS get their reputation). The problem with this is that a)
> the performance is not portable, b) you have to pay for them, c) they
> are highly optimized for only the subset of BLAS necessary to market
> their machines well, and d) the BLAS do not really cover everything
> that you would want to do in an efficient way. Sparse operations are
> not part of the classic BLAS, there are many many interesting cases
> of operations that are not covered, only single, double, complex, and
> double complex data types are covered (and no mixing them!). There
> is a soon to be released update to BLAS that covers some of these
> issues, but not all, and it is huge and unwieldy. In fact, we are
> using MTL to implement these new BLAS.

Uh? That I didn't know. I guess it doesn't make much sense to wrap
C++-based Fortran behind C++ then. How does it work license-wise?

I mean, I realize a lot of work has been done on these libraries, and
I don't want to push anybody to just give away their work to the
public. But MTL is being used to generate BLAS, which in turn is very
flexible in terms of licensing, I believe. Why not put MTL directly in
Boost then? I am sure I am missing something, so please let me know.
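
(To make my point about the fixed S/D/C/Z precisions concrete, this is
the kind of genericity a template kernel gives you essentially for
free. A toy example only, not MTL's actual interface:)

#include <complex>

// Toy generic axpy: y <- a*x + y, for any value types with the right
// operators, mixed precision included; the four BLAS variants fall
// out as instantiations.
template <typename Scalar, typename TX, typename TY>
void axpy(int n, Scalar a, const TX* x, TY* y)
{
    for (int i = 0; i < n; ++i)
        y[i] += a * x[i];
}

// e.g. axpy(n, 2.0, xf, yd);                        // float into double
//      axpy(n, std::complex<double>(0, 1), zx, zy); // ZAXPY-like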

> > (BLAS by the way is a standard that happens to have Fortran calling
> > convention for historical reasons but one could write BLAS in C).
>
> Many BLAS are written in C. Jack Dongarra's ATLAS package
> (automatically tuned BLAS) generates C code, e.g.
>

See ... :-) I knew it!!

