From: choon (teochoonhui_at_[hidden])
Date: 2007-09-30 23:07:27
Gunter Winkler wrote:
> choon wrote:
>> FYI, I also tried the following with v2 as a sparse vector, but the speed
>> was about as slow:
>> 1) noalias(v2) = prod(v1,M)
>> 2) noalias(v2) = prod(trans(M),v1)
> did you try a column major matrix? My latest benchmarks show that
> prod(M, v);
> axpy_prod(M,v,y); // y dense
> is fastest using compressed_matrix<double, column_major> and
> compressed_vector or coordinate_vector. If the matrix is row major then
> the products are computed as a set of inner products of sparse vectors
> which is not fully optimized, yet.
> Can you explain what kind of iteration you have that prevents you from
> using dense vectors? IMHO using dense vectors and recompressing them into
> an archive is better than working with only sparse vectors.
Actually, I will use the sparse matrix M in both left- and right-hand mat-vec
multiplications, so storing M as column major will hurt the performance of
the other one. My application is a cutting-plane method, so it needs to store
some vectors (I am happy to describe this in more detail, if necessary).
Could you please explain what you mean by compressing a dense vector into an
archive? Also, I am thinking of doing the operation with a dense vector but
keeping the same vector in sparse format. Do you have any efficient way of
doing that, or is it just a sparse vector construction with a dense vector as
the argument?
--
View this message in context: http://www.nabble.com/Any-efficient-way-to-do-sparse-matrix-vector-multiplication--tf4543944.html#a12972737
Sent from the Boost - uBLAS mailing list archive at Nabble.com.