Hi Jörn,
Your answer correctly identified the problem. For my case, mapped_matrix is a much
better choice (by a factor of 110), if not the best one, as far as assembly goes. It also
does not appear to have hurt traversal that badly.
In the link you attached (thank you!), the author uses something called "sparse_matrix" in
boost::numeric::ublas - does this even exist? At least in version 1.34 it gives me a compile
error, saying no sparse_matrix type exists in boost::numeric::ublas (it was the first thing I
tested before going for mapped_matrix), and the online documentation does not mention
sparse_matrix as a sparse storage type at all.
Anyway, thanks a lot for all of your help!
Regards,
Sunil.
2010/4/21 Jörn Ungermann
<j.ungermann@fz-juelich.de>
Hi Sunil,
this is likely not a problem of uBLAS, but one of the principal problems
of using sparse matrices: depending on the matrix type, either random
access or multiplication is efficient, usually not both.
For the compressed_matrix, random access is rather costly *unless* you
can control the way in which elements are added to the matrix. If you
can assemble the (row_major) compressed_matrix row-by-row with ascending
column indices, this should take no time at all.
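For example, something like this untested sketch (a toy tridiagonal fill; the size and
values are placeholders, not your matrix):

************************************************************************************
#include <boost/numeric/ublas/matrix_sparse.hpp>
#include <iostream>

namespace ublas = boost::numeric::ublas;

int main() {
    const std::size_t n = 80000;                  // placeholder problem size
    ublas::compressed_matrix<double, ublas::row_major> A(n, n, 3 * n);

    // push_back appends directly to the CRS arrays, so it is cheap, but the
    // (row, column) pairs must arrive in strictly increasing (row-major) order.
    for (std::size_t i = 0; i < n; ++i) {
        if (i > 0)     A.push_back(i, i - 1, -1.0);
        A.push_back(i, i, 2.0);
        if (i + 1 < n) A.push_back(i, i + 1, -1.0);
    }

    std::cout << A(1, 1) << std::endl;            // reads are cheap afterwards
}
************************************************************************************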
If you can't do this, use a different matrix type for assembly, e.g.
mapped_matrix (which offers efficient random access, but bad
computational performance) and construct the compressed_matrix from
there.
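Roughly like this (just a sketch; uic1, uic2, trans and the sizes are stand-ins for the
names from your snippet):

************************************************************************************
#include <boost/numeric/ublas/matrix_sparse.hpp>

namespace ublas = boost::numeric::ublas;

ublas::compressed_matrix<double>
assemble(std::size_t n, std::size_t nnz,
         std::size_t uic1, std::size_t uic2, double trans)
{
    // mapped_matrix is backed by std::map, so random access and += in any
    // order stay cheap; arithmetic on it is slow, but we only assemble here.
    ublas::mapped_matrix<double> M(n, n, nnz);
    M(uic1, uic1) += -trans;
    M(uic2, uic2) += -trans;
    // ... rest of the assembly in whatever order is convenient ...

    // A single conversion at the end builds the CRS structure in order.
    return ublas::compressed_matrix<double>(M);
}
************************************************************************************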
See Gunter Winkler's page for details:
http://www.guwi17.de/ublas/matrix_sparse_usage.html
Kind regards,
Jörn
On Wed, 2010-04-21 at 03:09 +0200, Sunil Thomas wrote:
> Hi all,
>
> I've been using the boost 1.34 uBLAS library, especially the compressed_matrix class
> for sparse matrices in compressed row storage form. But I noticed that simply accessing
> an element of the matrix (to assign it a value, for example) slows my application down
> to unusable levels, for problems of the order of just 80,000 unknowns. I've verified
> that the problem is there, and yes, I am allocating the memory as I should be for the
> matrix - for example, here is a snippet (of the important lines):
>
> ************************************************************************************
> matrix_A = compressed_matrix(nelem_a(), nelem_a(), nonzeros()); // allocation
>
> matrix_A(uic1, uic1) += -trans; // assignment
>
> matrix_A(uic2, uic2) += -trans; // assignment
> ************************************************************************************
>
> where all variables (and/or functions), e.g. uic1, uic2, trans, nelem_a(), nonzeros(),
> etc., are well-defined (this has all been checked thoroughly). Commenting out the two
> assignment statements, for example, reduced my overall run time from 110 seconds to
> practically zero, for 80,000 runs. Has anyone encountered this problem and know of a
> solution? I've heard a lot of stories about how boost::ublas is just not up there in
> performance, and I certainly hope I am missing something trivial. Do later versions of
> boost address this better?
>
> Greatly appreciate any help.
>
> Thanks,
>
> Sunil.
>