Thanks for all the help, folks.

On Wed, Apr 21, 2010 at 3:38 AM, Andrea Cassioli <> wrote:
I completely agree that uBLAS suffers from a serious lack of
documentation, even in its most stable parts (for instance, the matrix
and vector concepts). Indeed, I think the very first step in improving
uBLAS would be a complete documentation revision; otherwise people
will never become regular users.

As for the sparse-matrix usage and performance problem, I have run
into a similar one: I need to compute simple matrix-vector products,
namely quadratic forms x'Qx, where Q can be either dense or very
sparse (even diagonal). Since this is a crucial step in my algorithm,
I decided to use axpy_prod, which is supposed to be quite efficient,
as it provides a specialization for compressed matrices.

Well, the result was quite disappointing. The axpy_prod specialization
for compressed_matrix, though much faster than using a dense matrix,
is still very slow. Looking at the code, I got the impression it is
quite complicated and hard for a compiler to optimize. The sparse-matrix
representation is in fact based on a vector of vectors, which must be
iterated with each row checked; that nested loop seems to be the
bottleneck. Why not use a standard pattern-based representation,
i.e. the usual (row, col, value) triplet form, possibly in contiguous
arrays?


On Wed, Apr 21, 2010 at 11:45 AM, Rui Maciel <> wrote:
> Jörn Ungermann wrote:
>> Hi Sunil,
>> this is likely not a problem of uBLAS, but one of the principal problems
>> of using sparse matrices. Depending on the type of matrix either random
>> access or multiplication performance is efficient.
>> For the compressed_matrix, random access is rather costly unless you
>> can control the way in which elements are added to the matrix. If you
>> can assemble the (row_major) compressed_matrix row-by-row with ascending
>> column indices, this should take no time at all.
>> If you can't do this, use a different matrix type for assembly, e.g.
>> mapped_matrix (which offers efficient random access, but bad
>> computational performance) and construct the compressed_matrix from
>> there.
>> See Gunter Winkler's page for details:
> You may be right in the technical details, that, when relying on uBLAS, using
> a certain matrix data type a certain way will end up being slower than some
> other alternative.
> Nonetheless, it is a uBLAS problem, and a serious one at that.  The
> documentation says nothing about how slow some operations on certain data
> types (which may even come across as the "right way" of implementing some
> problems) can be compared to the alternatives; it provides no test case or
> benchmark giving a quantitative assessment of how a method fares against
> the alternatives; and, probably most importantly, the example page you just
> provided suggests that the right way of doing this sort of task relies
> heavily on undocumented features.
> That absence of information is a serious problem with uBLAS.  It invariably
> leads uBLAS newbies to write their software completely unaware of how badly
> they are tailoring their program's performance.  The newbies may read the
> documentation from top to bottom and flawlessly implement the linear
> algebra bit but, as the documentation is incomplete and omits fundamental
> details that otherwise go unnoticed, they will only realize how painfully
> slow uBLAS can be well after they finish writing their program and start
> "stress-testing" it.  And by that time it may be too late to do anything
> about it.
> This is a very serious problem with serious implications for how uBLAS is
> used, how the programs that rely on uBLAS tend to perform and, as a direct
> consequence, how the reputation of uBLAS is built.
> Rui Maciel
> _______________________________________________
> ublas mailing list
> Sent to:

Andrea Cassioli