From: Michael Stevens (michael_at_[hidden])
Date: 2002-10-24 21:13:52
Recently I have spent a fair amount of time working on uBLAS, and particularly
the sparse_matrix type, so it is interesting to see your results. Firstly, I
think it is important to realise that uBLAS is in Boost because of its
elegant interface and architecture. It doesn't yet have a perfect
implementation.
>boost::numeric::ublas::sparse_matrix< Scalar_T, boost::numeric::ublas::row_major>
The orientation (row_major) is very important for the sparse_matrix type. The underlying implementation of the sparseness is a map of 1D <index, value> pairs. This map can be either a ublas::map_array (your case) or a std::map. The former is a sorted array, the latter typically some kind of balanced tree. Both can be accessed quickly sequentially (in index order), but both require a search when accessed out of order.
uBLAS uses the orientation to convert from the matrix row/column representation to the underlying 1D representation. It is therefore vitally important to get the orientation correct so that the underlying map is accessed sequentially.
This is particularly important when assigning new elements into a sparse vector/matrix based on map_array. If this is done out of sequence, map_array must insert into the middle of its array (copying elements and possibly resizing), which is very slow. std::map is clearly significantly more flexible in this regard.
>The operation I'm performing is quite complicated. Basically, I am generating
>a large number of sparse matrices, and computing the inner product of a given
>matrix with each of these matrices in turn.
Be aware that the prod(A, B) algorithm accesses A in row_major orientation and B in column_major orientation. This is in the nature of the matrix product. Therefore, if a matrix appears on the right-hand side of a product it should have column_major orientation. Of course, it may appear on both sides in different parts of your numerical algorithm :-) Sometimes it is more efficient to maintain two copies, one in each orientation!
One important point to remember when using the sparse types is to check your results against dense matrices. There are still a few bugs.
> But with uBLAS, essentially the same operations seem to be O(n2):
The O(n^2) scaling suggests the matrices have become non-sparse. Check non_zeros()!
uBLAS (the Boost version) will not itself lose sparseness when you use BLAS expressions. However, it is possible, through careless use of indexing on non-constant objects, to force the sparse types to create zero elements. If you do index over all elements, make sure you do so via a constant reference so that uBLAS knows not to create zeros. Alternatively, use iterators, which visit only the non-zeros.
> o Is there a more appropriate data type in uBLAS?
I know Joerg (and a few others) are working on this, in particular a row/column compressed matrix type.
> o Do you have any tips on where to tweak uBLAS to get better (ie. O(n) ) performance?
Try using std::map as the storage for your sparse types; I would be interested in the performance. The proxy used with map_array has its problems. I'm trying to convince Joerg to use std::map as the default storage type for the sparse types!!
> o What is BOOST_UBLAS_USE_CANONICAL_ITERATOR and what effect should it have on
Probably not a speed enhancement; your mileage may vary depending on the compiler. Joerg also recently remarked that this code path had not been checked recently.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk