
From: Preben Hagh Strunge Holm (preben_at_[hidden])
Date: 2007-03-05 14:19:36


> Exactly. The matrix is empty (all zeros) after construction. You fill
> only elements that are nonzero. A call to clear() then removes all
> nonzeros again.

This is really nice. I like the speed improvements.
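
In code, I understand the pattern to be something like this (using a
compressed_matrix; the matrix name and the filled elements are only
illustrative):
------
#include <boost/numeric/ublas/matrix_sparse.hpp>

namespace ublas = boost::numeric::ublas;

int main() {
    // After construction the matrix is empty (all zeros); no storage
    // is used for the zero elements.
    ublas::compressed_matrix<double> M(72, 72);

    // Fill only the elements that are nonzero.
    M(3, 7) = 1.5;
    M(10, 10) = -2.0;

    // clear() removes all stored nonzeros again.
    M.clear();
}
------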

I did a little calculation of how many floating point operations I was doing
(the worst operation in the vector calculation).

I had a 72x72 matrix multiplied with a vector of 72, and at last took the
inner product of that result with a vector of 72. This was done 72 times per
iteration. This means I did 72^4 floating point operations: 26,873,856.

Let's say I've reduced the 72x72 matrix (on average) to an effective size
of 10x10. That means 10*10*72*72 = 518,400 floating point operations
are now done.
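
(Spelling the counts out, just as a sanity check of the numbers in this mail:)
------
#include <cstdio>

int main() {
    std::printf("%ld\n", 72L * 72 * 72 * 72);   // 26873856 (dense case)
    std::printf("%ld\n", 10L * 10 * 72 * 72);   // 518400   (reduced 10x10 case)
    std::printf("%ld\n", 10L * 10 * 10 * 72);   // 72000    (sparse intermediate, see below)
}
------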

Probably this could be optimized a bit more:
------
     ublas::vector<double> T(Msize);
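     // Compute T(i) = v^T * ddG[i] * v for each of the Msize matrices.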
     for (unsigned int i = 0; i < Msize; ++i) {
         T(i) = inner_prod(prod(ddG[i], v), v);
     }
------

(ddG[i] is one of the sparse 72x72 matrices, v is a vector of length 72,
T(i) is the i'th element of the result vector T, and Msize is 72.)
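
For context, the surrounding declarations are roughly like this (a sketch;
the actual storage of ddG may well differ):
------
#include <vector>
#include <boost/numeric/ublas/matrix_sparse.hpp>
#include <boost/numeric/ublas/vector.hpp>

namespace ublas = boost::numeric::ublas;

const unsigned int Msize = 72;

// 72 sparse 72x72 matrices and a dense vector of length 72.
std::vector<ublas::compressed_matrix<double> > ddG(
    Msize, ublas::compressed_matrix<double>(Msize, Msize));
ublas::vector<double> v(Msize);
------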

I don't know whether the operations done here can be optimized further.

If the result of prod(ddG[i], v) is a sparse vector as well, the count drops
to 10*10*10*72 = 72,000 floating point operations instead of 518,400!
What actually happens in the multiplication above?

Is it 72,000 or 518,400 floating point operations?
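
One way I could check this myself would be to give the intermediate an
explicit sparse type, roughly along these lines:
------
#include <boost/numeric/ublas/vector_sparse.hpp>

// ... same ddG, v, T and Msize as above ...
ublas::compressed_vector<double> tmp(Msize);
for (unsigned int i = 0; i < Msize; ++i) {
    // Materialise ddG[i] * v as a named sparse vector; tmp.nnz()
    // reports how many nonzeros it actually stores.
    tmp = prod(ddG[i], v);
    T(i) = inner_prod(tmp, v);
}
------
But whether the one-liner in the loop above already keeps the intermediate
sparse internally is exactly what I am unsure about.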

Thanks for the help,

Best regards,
Preben