Subject: Re: [ublas] Question on ublas performance (resending as subscriber).
From: Paul Leopardi (paul.leopardi_at_[hidden])
Date: 2010-04-27 21:01:22
On Monday 26 April 2010 21:56:52 David Bellot wrote:
> sorry for not coming back on that thread before.
> I do agree on the lack of documentation, especially on sparse matrices.
> But... tada... as I am working on improving this documentation now, I would
> appreciate it if you could send me examples of nice tricks when using
> sparse matrices of all sorts. Any examples are welcome and I will compile
> them into the new documentation.
> If you have web pages and/or code that you can share, that would be
> greatly appreciated too.
> I must say that I use dense matrices most of the time, so contributions are
> most welcome on sparse matrices.
> In order not to bother the mailing list, you can also send all the
> materials to my personal email address directly.
> Thanks everybody.
You would do well to look over the GluCat source code:
See especially the header files matrix.h, matrix_imp.h, matrix_multi.h,
matrix_multi_imp.h, generation.h and generation_imp.h. I don't know whether any
of my code constitutes "nice tricks", or whether there are much better ways of
doing what I am trying to accomplish. Some highlights:
1) The preprocessor flag _GLUCAT_USE_DENSE_MATRICES determines whether
ublas::matrix or ublas::compressed_matrix is used for the matrix_multi<>
template class. This flag is needed because ublas::compressed_matrix
performance for addition is (or was?) woeful for large matrices.
See thread including:
2) In either case, ublas::compressed_matrix is used for basis_matrix_t.
For this class, which is used in generation and in conversion between
framed_multi<> and matrix_multi<>, storage size is important. The function
basis_element() in matrix_multi_imp.h uses a basis_cache, which can become
large.
3) The function matrix::mono_prod() produces a product of monomial matrices,
matrix::mono_kron() produces a sparse Kronecker product of monomial matrices,
and so on. These functions aim to perform better than more naive
implementations would.
4) In manipulating matrices, the order of preference is: uBLAS operations on
whole matrices > ranges > iterators > matrix indices. I think this matches
the design of uBLAS.