
Subject: Re: [ublas] Matrix multiplication performance
From: Peter Schmitteckert (Peter.Schmitteckert_at_[hidden])
Date: 2016-01-24 11:02:00


Dear All,

First of all, I'm impressed by the recent results.

> We need both native implementations and bindings.

I completely agree.

> Native implementations are nice to try out a concept, bindings are needed for specific functionalities, sometimes for performance.

I have been using ublas for about 15 years in an HPC project that sometimes runs on over 1000 cores, 24/7, for months.
In this kind of application it is critical to get maximum performance; anything else just hurts.
For that I have to interface to MKL/ACML.
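To make that concrete, here is a rough sketch (not my actual interface, just the general idea) of forwarding C = A*B from ublas matrices to a vendor BLAS. It assumes row-major ublas::matrix<double> and a CBLAS header such as the one OpenBLAS or MKL provides:

#include <boost/numeric/ublas/matrix.hpp>
#include <cblas.h>   // OpenBLAS header; MKL users would include mkl_cblas.h instead

namespace ublas = boost::numeric::ublas;

// C = A * B, forwarded to the optimized BLAS instead of ublas::prod
void gemm(const ublas::matrix<double>& A,
          const ublas::matrix<double>& B,
          ublas::matrix<double>& C)
{
    const int m = static_cast<int>(A.size1());
    const int k = static_cast<int>(A.size2());
    const int n = static_cast<int>(B.size2());
    C.resize(m, n, false);

    // row-major storage, no transposes, alpha = 1, beta = 0
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                m, n, k,
                1.0, &A.data()[0], k,
                     &B.data()[0], n,
                0.0, &C.data()[0], n);
}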

However, there are also applications where I'm concerned with accuracy. I have had problems where I had to use
multiprecision arithmetic for matrices. In Eigen3 that worked amazingly well. Sadly, I can't use it for my
main code, as I use my BLAS interface for ublas directly in many places. (I wrote that interface before the bindings library existed.)

Then there are applications where we just want to play with funny ideas and need flexibility in data types beyond BLAS.
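As a sketch of what I mean by that flexibility (assuming Boost.Multiprecision is available): ublas is templated on the element type, so a 50-digit decimal type drops in directly, which a double-precision BLAS binding can never give you.

#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/io.hpp>
#include <boost/multiprecision/cpp_dec_float.hpp>
#include <iostream>

int main()
{
    namespace ublas = boost::numeric::ublas;
    using mp = boost::multiprecision::cpp_dec_float_50;  // 50 decimal digits

    ublas::matrix<mp> A(2, 2);
    A(0, 0) = 1;   A(0, 1) = mp(1) / 3;   // 1/3 kept to 50 digits
    A(1, 0) = 0;   A(1, 1) = 1;

    // generic expression-template product, no BLAS involved
    ublas::matrix<mp> C = ublas::prod(A, A);
    std::cout << C << '\n';
}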

> packages are set up, linking is pretty straightforward.

Sometimes there are projects where you spend more time on resolving linking issues than on doing the numerics.
I'd also like to remark that on today's machines we often only need a fast implementation of the BLAS operations,
not an optimal one.

The problem with a generic C++ matrix interface is that we have been waiting for it for decades now, while many
interesting ideas/projects/implementations have come and gone with the tides.

Best regards,
Peter

P.S.
On 24 Jan 2016, at 16:32, Michael Lehn <michael.lehn_at_[hidden]> wrote:
> So this would be something new in the C++ world: reduce the optimization level :-)
Well, it's not actually new; sometimes that happens.