
From: lums_at_[hidden]
Date: 2001-03-13 10:24:05


--- In boost_at_y..., "Jerome Lecomte" <jlecomte_at_i...> wrote:

> > - efficiency
> That's where I don't quite follow. Isn't the best way to achieve
> Fortran speed to call Fortran? I don't see the point in pushing the
> expression template trend further... This is a nice concept, and
> something to develop for the future (when compilers better support the
> export keyword), but right now I would support something along the
> lines of lapack++ (commercial, by RogueWave), where the C++ is
> actually just an interface to Fortran code.

One has to be quite careful here when talking about performance and
Fortran. There is nothing about one language versus another per se
that gives or takes away performance. Fortran compilers have an
easier time with certain optimizations because of intrinsic complex
data types and because of guarantees that arguments do not alias, but
both of these can be worked around in C++.
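To make the aliasing point concrete, here is a small sketch (not from
MTL or any particular library) of how a C++ compiler can be handed the
same no-alias guarantee. __restrict is a common compiler extension
(GCC, Clang, MSVC), not standard C++, so treat it as illustrative:

    // daxpy-style loop; __restrict tells the compiler the two arrays
    // never overlap, recovering the guarantee Fortran gets for free.
    void daxpy(int n, double alpha,
               const double* __restrict x,
               double* __restrict y)
    {
        for (int i = 0; i < n; ++i)
            y[i] += alpha * x[i];   // easier to vectorize with no aliasing assumed
    }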

If you just go to netlib and download the reference BLAS
implementation, you will not get very good performance (I am thinking
particularly of that grand-daddy of all linear algebra performance
metrics, DGEMM). Getting a high level of performance (some significant
fraction of machine peak) requires careful hand-structuring of the
code to do things like tiling, cache blocking, software pipelining
and the like. However, all of these things can be done in C++ as
well -- and they can be done in a much nicer way -- they can be
made much more tunable -- and they can be done with template meta-
programs. The bottom line is that, at least with KCC (and in the
future with other compilers), you can write a very concise but very
high performance library. MTL (cf. Jeremy's thesis) achieves
performance better than Fortran -- and better than vendor libraries
(some written in Fortran, some in C, I think).
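For readers who have not seen it, here is a rough sketch of what
"tiling/cache blocking" means for DGEMM-style code. This is
illustrative only -- it is not MTL or ATLAS source, and the block
size is an invented placeholder that would normally be tuned per
machine:

    #include <algorithm>
    #include <cstddef>

    // C += A * B on row-major n x n matrices, processed in BS x BS
    // tiles so each tile of A, B and C stays resident in cache while
    // it is being reused.
    void gemm_blocked(std::size_t n, const double* A,
                      const double* B, double* C)
    {
        const std::size_t BS = 64;   // placeholder tile size
        for (std::size_t ii = 0; ii < n; ii += BS)
          for (std::size_t kk = 0; kk < n; kk += BS)
            for (std::size_t jj = 0; jj < n; jj += BS)
              for (std::size_t i = ii; i < std::min(ii + BS, n); ++i)
                for (std::size_t k = kk; k < std::min(kk + BS, n); ++k) {
                    const double aik = A[i * n + k];
                    for (std::size_t j = jj; j < std::min(jj + BS, n); ++j)
                        C[i * n + j] += aik * B[k * n + j];
                }
    }

A real high-performance kernel layers register blocking and software
pipelining on top of this, which is exactly the sort of structure that
template metaprograms can generate and tune.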

Now, vendor-tuned libraries can give you good performance (which is
where the BLAS get their reputation). The problem with this is that a)
the performance is not portable, b) you have to pay for them, c) they
are highly optimized for only the subset of the BLAS necessary to
market their machines well, and d) the BLAS do not really cover
everything that you would want to do in an efficient way. Sparse
operations are not part of the classic BLAS; many, many interesting
operations are not covered; and only single, double, complex, and
double complex data types are supported (and no mixing them!). There
is a soon-to-be-released update to the BLAS that covers some of these
issues, but not all, and it is huge and unwieldy. In fact, we are
using MTL to implement these new BLAS.
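As an illustration of the mixed-type gap: a generic C++ routine
handles any sensible combination of value types with one template,
where the classic BLAS would need a separate routine name for each
combination. The names below are invented for this post, not MTL's
actual interface:

    #include <complex>
    #include <cstddef>
    #include <vector>

    // Generic axpy: y += alpha * x, with all three value types free
    // to differ -- something the fixed S/D/C/Z routines cannot express.
    template <class Scalar, class T1, class T2>
    void axpy(Scalar alpha, const std::vector<T1>& x, std::vector<T2>& y)
    {
        for (std::size_t i = 0; i < x.size(); ++i)
            y[i] += alpha * x[i];
    }

    int main()
    {
        std::vector<float> x(100, 1.0f);
        std::vector<std::complex<double> > y(100);
        axpy(2.0, x, y);   // float data, double scalar, complex<double> result
    }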

> (BLAS, by the way, is a standard that happens to have Fortran
> calling conventions for historical reasons, but one could write
> BLAS in C.)

Many BLAS implementations are written in C. Jack Dongarra's ATLAS
package (automatically tuned BLAS), for example, generates C code.
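And since calling conventions came up: calling a Fortran-compiled BLAS
routine from C or C++ is mostly a matter of matching the symbol name
and passing every argument by reference. The trailing underscore below
is the common convention on many Unix Fortran compilers, but it is
platform dependent, so take this as a sketch rather than portable code:

    // Declaration of Fortran DGEMM as seen from C++ (name mangling varies).
    extern "C" void dgemm_(const char* transa, const char* transb,
                           const int* m, const int* n, const int* k,
                           const double* alpha, const double* a, const int* lda,
                           const double* b, const int* ldb,
                           const double* beta, double* c, const int* ldc);

    // C = A * B for column-major n x n matrices, delegated to Fortran.
    void multiply(int n, const double* a, const double* b, double* c)
    {
        const double one = 1.0, zero = 0.0;
        dgemm_("N", "N", &n, &n, &n, &one, a, &n, b, &n, &zero, c, &n);
    }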


