Ublas :

From: liu chang (liuchangjohn_at_[hidden])
Date: 2006-11-05 23:30:57


On 11/3/06, Gunter Winkler <guwi17_at_[hidden]> wrote:
> On Friday, 3 November 2006 at 08:45, liu chang wrote:
> > dv = prod(sm, sv); // operation 1
> > dv = prod(dm, sv); // operation 2
>
> > So is this a limitation of ublas, or am I doing something wrong? If
> > ublas isn't able to handle such a situation, can I write a routine
> > myself that specifically takes care of multiplying a dense matrix
> > by a sparse vector?
>
> I had a closer look at the dispatcher. The current mechanism chooses the
> wrong routine: it dispatches to the assignment of a sparse product, even
> though dense*sparse yields a dense result. I think the problem is the
> priority ordering it uses: dense < packed < sparse. (That is the best
> choice for evaluating x+y, but the worst choice for the result type of
> x*y.)
>
> The fastest solution would be a specialization of axpy_prod (see
> operation.hpp). I attach my benchmark example. The sparse vector has a
> nonzero entry at every 4th position.
>
> $ ./mulv_bench 4000 2>errors
> size of matrices - 4000x4000
>
>          ublas    axpy   block
> D+=RD     0.08    0.91    0.08
> D+=CD     0.72    0.64    0.16
> D+=RS     0.16    0.19    0.1
> D+=CS     0.49    0.01    0.16
>
> $ ./mulv_bench 8000 2>errors
> size of matrices - 8000x8000
>
>          ublas    axpy   block
> D+=RD     0.22    2.67    0.35
> D+=CD     4.31    2.61    0.59
> D+=RS     0.68    1.06    0.36
> D+=CS     1.97    0.06    0.6
>
> (AMD Athlon(tm) 64 Processor 3200+, g++-4.1 -DNDEBUG -O2)
>
> As you can see, the layout of the matrix determines the speed of the
> product. The fastest case is column major * sparse, which takes the
> expected 1/4 of the time of the dense*dense product.
>
> Regards,
> Gunter

Thank you very much, Gunter, for the very helpful reply. As I'm very new
to ublas, I hope I haven't misunderstood you. To rephrase: all I need
to do to get the speed-up is to replace

dv = prod(dm, sv);

by

axpy_prod(dm, sv, dv, true);

and optionally to change dm to column_major format as well?
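
In full, the snippet I have in mind looks like this (the size and the
double value type are just my guesses at a matching setup, and I'm reading
the axpy_prod signature from operation.hpp, so please correct me if I have
the call wrong):

  #include <cstddef>

  #include <boost/numeric/ublas/matrix.hpp>
  #include <boost/numeric/ublas/vector.hpp>
  #include <boost/numeric/ublas/vector_sparse.hpp>   // compressed_vector
  #include <boost/numeric/ublas/operation.hpp>       // axpy_prod

  namespace ublas = boost::numeric::ublas;

  int main()
  {
      const std::size_t n = 4000;

      // dense matrix, column_major as you suggest
      ublas::matrix<double, ublas::column_major> dm(n, n);

      // sparse vector and dense result vector
      ublas::compressed_vector<double> sv(n);
      ublas::vector<double> dv(n);

      // ... fill dm and sv ...

      // instead of dv = prod(dm, sv): the last argument (init = true)
      // clears dv first, so this computes dv = dm * sv
      ublas::axpy_prod(dm, sv, dv, true);

      return 0;
  }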
Thank you so much.

Liu Chang
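
P.S. For my own understanding, here is roughly how I picture the test you
describe - this is only my reconstruction, not your attached mulv_bench,
so the matrix size, the fill values and the timing code are my own guesses:

  #include <cstddef>
  #include <ctime>
  #include <iostream>

  #include <boost/numeric/ublas/matrix.hpp>
  #include <boost/numeric/ublas/vector.hpp>
  #include <boost/numeric/ublas/vector_sparse.hpp>   // compressed_vector
  #include <boost/numeric/ublas/operation.hpp>       // axpy_prod

  namespace ublas = boost::numeric::ublas;

  int main()
  {
      const std::size_t n = 4000;

      // the same dense matrix in both layouts
      ublas::matrix<double, ublas::row_major>    rm(n, n);
      ublas::matrix<double, ublas::column_major> cm(n, n);
      for (std::size_t i = 0; i < n; ++i)
          for (std::size_t j = 0; j < n; ++j)
              rm(i, j) = cm(i, j) = 1.0;

      // sparse vector with a nonzero entry at every 4th position
      ublas::compressed_vector<double> sv(n);
      for (std::size_t i = 0; i < n; i += 4)
          sv(i) = 1.0;

      ublas::vector<double> dv(n);

      // dense result = row_major dense matrix * sparse vector, via prod()
      std::clock_t t0 = std::clock();
      dv = prod(rm, sv);
      std::clock_t t1 = std::clock();

      // dense result = column_major dense matrix * sparse vector, via
      // axpy_prod() (init = true clears dv first, so dv = cm * sv)
      ublas::axpy_prod(cm, sv, dv, true);
      std::clock_t t2 = std::clock();

      std::cout << "prod,      row_major:    "
                << double(t1 - t0) / CLOCKS_PER_SEC << " s\n"
                << "axpy_prod, column_major: "
                << double(t2 - t1) / CLOCKS_PER_SEC << " s\n";
      return 0;
  }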