Subject: Re: [ublas] Matrix multiplication performance
From: Nasos Iliopoulos (nasos_i_at_[hidden])
Date: 2016-01-29 09:51:04
I second this.
On 01/28/2016 04:01 PM, Karl Meerbergen wrote:
>> On 28 Jan 2016, at 21:47, Michael Lehn <michael.lehn_at_[hidden]> wrote:
>>
>>
>>
>>> do you also do sparse linear algebra by chance?
>> Sorry, not directly. I have just looked at libraries like SuperLU and Umfpack, though not as closely as at other BLAS
>> libraries. But my impression is that this, too, could be done much more elegantly in C++. The big headache in these
>> libraries is that they basically have the same code for float, double, complex<float> and complex<double>. Just using
>> C++ as "C plus function templates" would make it much easier. And the performance-relevant part in these libraries is
>> again a fast dense BLAS.
> Correct, but I would bet on MUMPS, which is, in my opinion, more advanced and still improving. They also use a template mechanism in Fortran 90, based on the C preprocessor ;-) They have made it clear that they will not redo their more than 30 man-years of development in C++.
>
> Best,
>
> Karl
>
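Just to illustrate the "C plus function templates" point quoted above: below is a minimal sketch of my own (the axpy name and the driver are made up for illustration, not taken from any of the libraries mentioned), showing how a single template covers the float, double, complex<float> and complex<double> variants that a C or Fortran 77 style library has to spell out four times.

// Minimal sketch of the "C plus function templates" idea: one generic
// kernel instead of four copies (saxpy/daxpy/caxpy/zaxpy).
#include <complex>
#include <cstddef>
#include <iostream>
#include <vector>

// Generic axpy: y := alpha*x + y, for any value type T.
template <typename T>
void axpy(std::size_t n, T alpha, const T *x, T *y)
{
    for (std::size_t i = 0; i < n; ++i) {
        y[i] += alpha * x[i];
    }
}

int main()
{
    std::vector<double>              xd{1, 2, 3}, yd{4, 5, 6};
    std::vector<std::complex<float>> xc{{1, 1}, {2, 0}}, yc{{0, 1}, {1, 1}};

    axpy(xd.size(), 2.0, xd.data(), yd.data());                       // "daxpy"
    axpy(xc.size(), std::complex<float>(0, 1), xc.data(), yc.data()); // "caxpy"

    std::cout << yd[0] << " " << yc[0] << "\n";  // prints 6 and (-1,2)
}

The same instantiation mechanism is essentially what the Fortran 90 + C preprocessor approach emulates by hand; with templates the compiler does the bookkeeping.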