
From: Frank Astier (fastier_at_[hidden])
Date: 2005-09-22 15:56:40


Actually, I've now turned to ATLAS, and I want to see if I can get
better performance there. C or C++, about the same deal for me, I can
wrap C ATLAS in C++ classes easily, I think.

To reproduce your results, do I need to worry about
BOOST_UBLAS_CHECK? I think I tried axpy_prod and dismissed it (too
quickly), as on a par with prod for my specific problem. But I'd like
to try again with your settings.

Thanks,

Frank

On 2005/09/22, at 12:26, Michael Stevens wrote:

> On Thursday, 22 September 2005 01:53, Frank Astier wrote:
>
>> Ian -
>>
>> Thanks for the correction. It got rid of the bizarre effect that
>> ublas seemed to always do the same amount of work, regardless of
>> sparsity :-)
>>
>
> It is not all doom and gloom! In "operation.hpp" there is axpy_prod,
> which helps considerably. Using, in the test loop,
> axpy_prod (m, v, r1);
>
> I get:
> 0.1 0.32 0.42
> 0.2 0.35 0.42
> 0.3 0.46 0.43
> 0.4 0.56 0.42
> 0.5 0.56 0.42
> 0.6 0.79 0.43
> 0.7 0.95 0.42
> 0.8 1.07 0.42
> 0.9 1.17 0.42
>
> and looking in more detail:
> 0 0.02 0.49
> 0.02 0.12 0.42
> 0.04 0.12 0.42
> 0.06 0.13 0.41
> 0.08 0.12 0.42
> 0.1 0.24 0.42
> 0.12 0.24 0.42
> 0.14 0.23 0.42
> 0.16 0.23 0.42
> 0.18 0.24 0.42
> 0.2 0.24 0.42
> 0.22 0.35 0.42
> 0.24 0.34 0.42
> 0.26 0.35 0.43
> 0.28 0.35 0.42
>
>
>
> So even at 20% sparsity we are doing better. I think this is pretty
> good, if not glittering.
>
>
>> I implemented my own sparse matrix too, optimized for multiplication
>> with dense vectors (I didn't even write random access to elements, I
>> don't need it!). Below 10% non zeros, I get at least 3 times better
>> than the naive dense multiplication, and up to 40 times better than
>> ublas on small matrix sizes (around 256x256). The matrices I handle
>> don't have any structure I can take advantage of, unfortunately.
>>
>> I wish the ublas documentation had enough details about what to expect.
>>
>
> I think one could write several PhDs on matrix multiplications
> involving sparse
> matrices.
>
> That said, the normal ublas::prod is unlikely to be a good choice
> where sparse
> matrices are involved. There are many ways the basic element access
> loops can be
> ordered to achieve a multiplication. axpy_prod uses one ordering that
> iterates
> over the matrix.
> For prod to work in general expressions it uses an ordering which
> iterates over
> the elements in the result. For sparse matrices this is bad news, as
> we have
> to do a great deal of work to find where we are in the sparse matrix.
>
> That said, I would love to see this documented further. I have some
> material of
> my own with diagrams showing the element access patterns I could
> contribute.
>
> Michael
>
> ___________________________________
> Michael Stevens Systems Engineering
>
> 34128 Kassel, Germany
> Phone/Fax: +49 561 5218038
>
> Navigation Systems, Estimation and
> Bayesian Filtering
> http://bayesclasses.sf.net
> ___________________________________
> _______________________________________________
> ublas mailing list
> ublas_at_[hidden]
> http://lists.boost.org/mailman/listinfo.cgi/ublas
>