Subject: Re: [ublas] Matrix multiplication performance
From: palik imre (imre_palik_at_[hidden])
Date: 2016-01-19 09:14:01
Hi Michael,
I cannot see any attachments ...
On Tuesday, 19 January 2016, 11:12, palik imre <imre_palik_at_[hidden]> wrote:
Is there a public git repo for ublas 2.0?
On Monday, 18 January 2016, 9:25, Oswin Krause <Oswin.Krause_at_[hidden]> wrote:
Hi Palik,
this is a known problem. In your case, you should already get better
performance by using axpy_prod instead of prod. There is ongoing work
towards a ublas 2.0, which should make this a non-problem in the
future.
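For reference, a minimal sketch of the matrix-matrix axpy_prod call
(the wrapper name and the drop-in interface are just illustrative):

    #include <cassert>
    #include <boost/numeric/ublas/matrix.hpp>
    #include <boost/numeric/ublas/operation.hpp>   // axpy_prod

    using namespace boost::numeric::ublas;

    matrix<double>
    matmul_axpy(const matrix<double> &lhs, const matrix<double> &rhs)
    {
        assert(lhs.size2() == rhs.size1());
        matrix<double> rv(lhs.size1(), rhs.size2());
        // rv = lhs * rhs; the trailing 'true' zeroes rv before accumulating.
        axpy_prod(lhs, rhs, rv, true);
        return rv;
    }

This keeps the same interface as your matmul_byrow below, so the two
should be easy to compare directly.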
On 2016-01-17 21:23, palik imre wrote:
> Hi all,
>
> It seems that the matrix multiplication in ublas ends up using the
> trivial algorithm. On my machine, even the following function
> outperforms it for square matrices bigger than 173*173 (by a huge
> margin for matrices bigger than 190*190), while not performing
> considerably worse for smaller matrices:
>
> matrix<double>
> matmul_byrow(const matrix<double> &lhs, const matrix<double> &rhs)
> {
>   assert(lhs.size2() == rhs.size1());
>   matrix<double> rv(lhs.size1(), rhs.size2());
>   matrix<double> r = trans(rhs);
>   for (unsigned c = 0; c < rhs.size2(); c++)
>     {
>       matrix_column<matrix<double> > out(rv, c);
>       matrix_row<matrix<double> > in(r, c);
>       out = prod(lhs, in);
>     }
>   return rv;
> }
>
>
> Is there anybody working on improving the matrix multiplication
> performance?
>
> If not, then I can try to find some spare cycles ...
>
> Cheers,
>
> Imre Palik
> _______________________________________________
> ublas mailing list
> ublas_at_[hidden]
> http://lists.boost.org/mailman/listinfo.cgi/ublas
> Sent to: Oswin.Krause_at_[hidden]