Ublas:
From: Gunter Winkler (guwi17_at_[hidden])
Date: 2007-01-23 09:57:46
On Monday, 22 January 2007 at 12:29, Paul C. Leopardi wrote:
> OK, I see: with compressed_matrix and the current addition
> algorithm, we have a worst-case complexity of
> O((nnz(lhs)+nnz(rhs))*nnz(rhs)). But addition could use the same
> storage scheme with a different algorithm and achieve
> O(nnz(lhs)+nnz(rhs)), at the expense of some temporary storage.
>
> The way to achieve O(nnz(lhs)+nnz(rhs)) for addition is a 3-step
> algorithm:
> 1: Estimate nnz(lhs+rhs). This takes O(1) to O(nnz(lhs)+nnz(rhs)),
> depending on whether we want the worst case or an accurate estimate.
> 2: Allocate storage of size nnz(lhs+rhs) for the result.
> 3: Iterate over lhs and rhs, adding into the result. The iteration
> should be O(nnz(lhs)+nnz(rhs)), assuming that the lhs and rhs
> storage schemes are suitably sorted. Insertion into the result
> should be O(1).
Ok. Someone should implement a free function
"compressed_plus_assign(A, X)" which computes A += X more efficiently
than the current implementation does. Then we can think about how to
dispatch to this function automatically.
I think a good compromise is to do the summation with two passes over
the elements of A.
Pass 1: (forward) update all common elements and count the number of
elements to be inserted. (optional: remove these elements from X)
Pass 2: (backward) move elements of A (from last to first) to the new
positions and insert the elements of X accordingly.
(Perhaps the update could be shifted to the second pass.) The
advantage of knowing the exact new size is that we know exactly where
each element of A will be stored after the addition.
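
Here is a sketch of such a compressed_plus_assign on a single
compressed row, again using plain sorted (index, value) pairs in
place of the real compressed_matrix index and value arrays. Apart
from the function name proposed above, all names are illustrative:

#include <cstddef>
#include <utility>
#include <vector>

// One compressed row as (index, value) pairs, sorted by index.
typedef std::vector< std::pair<std::size_t, double> > sparse_row;

// Sketch of the proposed compressed_plus_assign: computes a += x
// with the two passes described above.
void compressed_plus_assign(sparse_row& a, sparse_row const& x)
{
    // Pass 1 (forward): update all common elements and count the
    // number of elements that have to be inserted.
    std::size_t n_new = 0;
    std::size_t i = 0;
    for (std::size_t j = 0; j < x.size(); ++j) {
        while (i < a.size() && a[i].first < x[j].first) ++i;
        if (i < a.size() && a[i].first == x[j].first)
            a[i].second += x[j].second;  // common element: update
        else
            ++n_new;                     // element of x not in a
    }
    if (n_new == 0) return;

    // Because the exact new size is known, every element of a has a
    // known final position, so no element is moved more than once.
    std::size_t const old_size = a.size();
    a.resize(old_size + n_new);

    // Pass 2 (backward): move elements of a (from last to first) to
    // their new positions and insert the elements of x in between.
    std::size_t dst = a.size();  // one past the next slot to fill
    std::size_t src = old_size;  // one past the next unmoved element of a
    std::size_t k = x.size();    // one past the next unmerged element of x
    while (dst > src) {
        // skip elements of x that were already merged in pass 1
        while (k > 0 && src > 0 && x[k - 1].first == a[src - 1].first)
            --k;
        if (k > 0 && (src == 0 || x[k - 1].first > a[src - 1].first))
            a[--dst] = x[--k];    // insert a new element from x
        else
            a[--dst] = a[--src];  // shift an existing element of a
    }
}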
What do you think?
Best regards,
Gunter