
Subject: Re: [ublas] Question on ublas performance (resending as subscriber).
From: David Bellot (david.bellot_at_[hidden])
Date: 2010-04-26 07:56:52
Sorry for not coming back to that thread earlier.
I do agree about the lack of documentation, especially on sparse matrices.
But... tada... as I am now working on improving this documentation, I would
appreciate it if you could send me examples of nice tricks for using sparse
matrices of all sorts. Any examples are welcome, and I will compile them into
the new documentation.
If you have web pages and/or code that you can share, that would be
greatly appreciated too.
I must say that I use dense matrices most of the time, so contributions on
sparse matrices are most welcome.
In order not to clutter the mailing list, you can also send all the material
directly to my personal email address.
Thanks everybody.
Cheers,
David
On Thu, Apr 22, 2010 at 02:10, Sunil Thomas <sgthomas27_at_[hidden]> wrote:
> Hi Nasos,
>
> I see...thanks a lot for testing this out..really appreciate!
> I am using Boost 1.36.0 here (not 1.34 as I said before - sorry).
> And yes, I do have the -DNDEBUG option on all the time.
>
> Cheers,
> Sunil.
>
> On Wed, Apr 21, 2010 at 4:11 PM, Nasos Iliopoulos <nasos_i_at_[hidden]>wrote:
>
>> Thomas,
>>
>> I tested your code and it looks like it runs for both compressed and
>> mapped matrices (using -DNDEBUG, Boost v1.40). However, for mapped
>> matrices without -DNDEBUG it throws from an assertion in the iterator. It
>> looks like a bug in uBlas (maybe a wrong assertion), but I am not certain
>> about it. I will file a bug report if I find out more. The code I used to
>> test is:
>>
>> #define NDEBUG
>>
>> #include <boost/numeric/ublas/matrix_sparse.hpp>
>> #include <boost/numeric/ublas/io.hpp>
>>
>> //#define USE_MAPPED
>>
>> using namespace boost::numeric::ublas;
>>
>> #ifdef USE_MAPPED
>> typedef mapped_matrix<double> matrix_type;
>> #else
>> typedef compressed_matrix<double> matrix_type;
>> #endif
>>
>> int main(int argc, char *argv[])
>> {
>>     const std::size_t size = 10;
>>     matrix_type P(size, size);
>>
>>     for (std::size_t ii = 0; ii != P.size1(); ii++)
>>         for (std::size_t j = 0; j != P.size2(); j++)
>>             P(ii, j) = ii * j + ii + j;
>>
>>     std::cout << P << std::endl;
>>
>>     typedef matrix_type::reverse_iterator1 itmr1;
>>     typedef matrix_type::iterator2 itm2;
>>
>>     int i, j;
>>     for (itmr1 i1 = P.rbegin1(); i1 != P.rend1(); ++i1) {
>>         itm2 i2 = i1.begin();
>>         i = (int) i2.index1();
>>         for (; i2 != i1.end(); ++i2) {
>>             j = (int) i2.index2();
>>             if (j <= i) continue;
>>             //x(i) -= (*i2) * x(j);
>>         }
>>         std::cout << P(i, i) << " ";
>>         //x(i) *= P(i, i);
>>     }
>>     return 0;
>> }
>>
>>
>> Best
>> Nasos
>>
>> 
>> Date: Wed, 21 Apr 2010 15:11:48 -0700
>>
>> From: sgthomas27_at_[hidden]
>> To: ublas_at_[hidden]
>> Subject: Re: [ublas] Question on ublas performance (resending as
>> subscriber).
>>
>> Thanks Nasos - I was going to try that next anyhow. I will let you know
>> if I get a major improvement.
>>
>> In a related problem, I find that the reverse iterator is not working as
>> expected (for compressed_matrix and mapped_matrix). I really doubt that
>> it is a problem with my usage (but that is possible). Here is a snippet
>> of my usage from ILU back substitution (P is the matrix - compressed or
>> mapped):
>>
>> for (itmr1 i1 = P.rbegin1(); i1 != P.rend1(); ++i1) {
>>     itm2 i2 = i1.begin();
>>     i = (int) i2.index1();
>>     for (; i2 != i1.end(); ++i2) {
>>         j = (int) i2.index2();
>>         if (j <= i) continue;
>>         x(i) -= (*i2) * x(j);
>>     }
>>     x(i) *= P(i, i);
>> }
>> I would greatly appreciate it if anybody knows why this crashes as soon
>> as it enters the outer loop... let me know if you need any further info.
>> It's been a while since I used it; I had given up on it and used a
>> combination of a forward-iterator loop and a usual nested loop inside it.
>> I have no such problem with the forward iterator.
>>
>> Thanks in advance,
>> Sunil.
>>
>>
>>
>> On Wed, Apr 21, 2010 at 2:53 PM, Nasos Iliopoulos <nasos_i_at_[hidden]>wrote:
>>
>> Thomas,
>>
>> replace sparse_matrix with compressed_matrix. Also try the following:
>>
>> use a generalized vector of compressed_vector, A, as in the link, to
>> build the stiffness matrix; I think this is faster than every other
>> sparse container (mapped, compressed, coordinate). If you need a storage
>> type like compressed (most likely if you use a classical sparse solver),
>> just assign to it after the generalized one is filled by your
>> stiffness-matrix construction algorithm, like:
>> compressed_matrix<double> K = A;
>>
>> (I am unsure whether the above is optimal in earlier versions of uBlas;
>> the latest svn should be OK, though. What version are you using?)
>>
>> The downside of this approach is that it requires twice the memory.
>>
>> Let us know if it works for you.
>>
>> Best
>> Nasos
>>
>> 
>> Date: Wed, 21 Apr 2010 13:51:56 -0700
>> From: sgthomas27_at_[hidden]
>> To: ublas_at_[hidden]
>> Subject: Re: [ublas] Question on ublas performance (resending as
>> subscriber).
>>
>>
>> Hi Jorn,
>>
>> Your answer correctly identified the problem. It appears that for my
>> problem, mapped_matrix is much better (by a factor of 110), if not the
>> best choice, as far as assembly goes. It also doesn't seem to have hurt
>> traversal that badly.
>>
>> In the link you attached (thank you!), the author uses something called
>> "sparse_matrix" in boost::numeric::ublas - does this even exist? At
>> least in version 1.34 it gives me a compile error, saying no
>> sparse_matrix type exists in boost::numeric::ublas (it was the first
>> thing I tested before going for mapped_matrix), and the online
>> documentation doesn't mention sparse_matrix as a type of sparse storage
>> at all.
>>
>> Anyway, thanks a lot for all of your help!
>>
>> Regards,
>> Sunil.
>>
>> 2010/4/21 Jörn Ungermann <j.ungermann_at_[hidden]>
>>
>> Hi Sunil,
>>
>> this is likely not a problem of uBLAS, but one of the principal problems
>> of using sparse matrices: depending on the type of matrix, either random
>> access or multiplication is efficient, but not both.
>> For compressed_matrix, random access is rather costly *unless* you can
>> control the order in which elements are added to the matrix. If you can
>> assemble the (row_major) compressed_matrix row-by-row with ascending
>> column indices, this should take no time at all.
>> If you can't do this, use a different matrix type for assembly, e.g.
>> mapped_matrix (which offers efficient random access, but bad
>> computational performance), and construct the compressed_matrix from
>> there.
>>
>> See Gunter Winkler's page for details:
>> http://www.guwi17.de/ublas/matrix_sparse_usage.html
>>
>> Kind regards,
>> Jörn
>>
>> On Wed, 2010-04-21 at 03:09 +0200, Sunil Thomas wrote:
>> > Hi all,
>> >
>> > I've been using the boost 1.34 ublas library, especially the
>> > compressed_matrix class for sparse matrices in compressed-row-storage
>> > form. But I noticed that simply accessing an element of the matrix (to
>> > assign it a value, for example) slows my application down to unusable
>> > levels, for problems of the order of just 80,000 unknowns. I've
>> > identified where the problem is, and yes, I am allocating the memory
>> > as I should be for the matrix. For example, here is a snippet (of the
>> > important lines):
>> >
>> > **********************************************************************
>> > matrix_A = compressed_matrix<double>(nelem_a(), nelem_a(), nonzeros()); // allocation
>> > matrix_A(uic1, uic1) += trans; // assignment
>> > matrix_A(uic2, uic2) += trans; // assignment
>> > **********************************************************************
>> >
>> > where all variables (and/or functions), e.g. uic1, uic2, trans,
>> > nelem_a(), nonzeros(), etc., are all well-defined (this has all been
>> > checked thoroughly). Commenting out the two assignment statements, for
>> > example, reduced my overall run time from 110 seconds to 0
>> > (practically zero), for 80000 runs. Has anyone encountered this
>> > problem, and does anyone know of a solution? I've heard a lot of
>> > stories about how boost::ublas is just not up there in performance,
>> > and I certainly hope I am missing something trivial. Do later versions
>> > of boost address this better?
>> >
>> > Greatly appreciate any help.
>> >
>> > Thanks,
>> > Sunil.
>> >
>> >
>>
>>
>>
>> 
>>
>> 
>> Forschungszentrum Juelich GmbH
>> 52425 Juelich
>> Sitz der Gesellschaft: Juelich
>> Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
>> Vorsitzende des Aufsichtsrats: MinDir'in Baerbel Brumme-Bothe
>> Geschaeftsfuehrung: Prof. Dr. Achim Bachem (Vorsitzender),
>> Dr. Ulrich Krafft (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
>> Prof. Dr. Sebastian M. Schmidt
>>
>> 
>>
>> 
>> _______________________________________________
>> ublas mailing list
>> ublas_at_[hidden]
>> http://lists.boost.org/mailman/listinfo.cgi/ublas
>> Sent to: sgthomas27_at_[hidden]
>>
>>
>>
>> 
>>
>>
>>
>> 
>>
>
>
>
--
David Bellot, PhD
david.bellot_at_[hidden]
http://david.bellot.free.fr