
Subject: Re: [ublas] Matrix multiplication performance
From: Karl Meerbergen (karl.meerbergen_at_[hidden])
Date: 2016-01-28 16:01:41

> On 28 Jan 2016, at 21:47, Michael Lehn <michael.lehn_at_[hidden]> wrote:
>> do you also do sparse linear algebra by chance?
> Sorry, not directly. I have only looked at libraries like SuperLU and Umfpack, and not as closely as at other BLAS libraries. But
> my impression is that this could also be done much more elegantly in C++. The big headache in these libraries is that they basically
> duplicate the same code for float, double, complex<float>, and complex<double>. Just using C++ as "C plus function templates" would
> make it much easier. And the performance-relevant part in these libraries is again a fast dense BLAS.
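To make the "C plus function templates" point concrete, here is a minimal sketch (the function name `scal` and its signature are illustrative, not taken from any actual library): one template covers all four scalar types for which a C codebase would carry four hand-written copies.

```cpp
// Sketch only: a single function template replacing four type-specific
// variants (float, double, complex<float>, complex<double>).
#include <complex>
#include <cstddef>

template <typename T>
void scal(std::size_t n, T alpha, T* x) {
    // Identical code path for every scalar type; the compiler
    // instantiates a specialized version per type on demand.
    for (std::size_t i = 0; i < n; ++i)
        x[i] *= alpha;
}
```

Calling `scal(n, 2.0, x)` on a `double*` and `scal(n, std::complex<float>(0, 1), z)` on a `std::complex<float>*` instantiates two separate, fully typed routines from the same source.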

Correct, but I would bet on MUMPS, which is, in my opinion, more advanced and still improving. They also use a 'template' mechanism in Fortran 90, based on the C preprocessor ;-) They have made it clear they will not redo more than 30 man-years of development in C++.
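The preprocessor-based 'template' idea can be sketched in a few lines of C-style code (MUMPS's actual Fortran macros are of course different; the macro and function names here are purely illustrative): a macro stamps out one typed routine per scalar type, which is the moral equivalent of a function template done by textual substitution.

```cpp
// Illustrative sketch of preprocessor-based genericity: one macro
// generates a typed routine per scalar type, BLAS-suffix style.
#include <complex>

#define DEFINE_SCAL(SUFFIX, TYPE)                       \
    void scal_##SUFFIX(int n, TYPE alpha, TYPE* x) {    \
        for (int i = 0; i < n; ++i)                     \
            x[i] *= alpha;                              \
    }

DEFINE_SCAL(s, float)                // scal_s
DEFINE_SCAL(d, double)               // scal_d
DEFINE_SCAL(c, std::complex<float>)  // scal_c
DEFINE_SCAL(z, std::complex<double>) // scal_z

#undef DEFINE_SCAL
```

Compared with real templates, this gives no type checking of the macro body until it is expanded and no overloading; each instantiation must be listed by hand, which is exactly the bookkeeping templates remove.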