
Subject: Re: [ublas] request for sparse vector example
From: Gunter Winkler (guwi17_at_[hidden])
Date: 2008-12-18 17:23:18


Jose wrote:
> On Wed, Dec 17, 2008 at 10:27 PM, Gunter Winkler <guwi17_at_[hidden]> wrote:
>
>> Have you been successful with this cosine similarity?
>>
>
> Yes
>
> I define these two vectors:
>
> compressed_vector<double> doc(10000000, 100000);
> compressed_vector<double> q(10000000, 100000);
>
> If I want to find the similarity between q vector and e.g. 1000
> different doc vectors, is there any possible optimization to speed up
> the results ?
>
>
This depends on how much effort you want to invest. You could store the
1000 doc vectors in a matrix and use axpy_prod to compute all products
at once. However, there is no optimized version for sparse matrix *
sparse vector. Usually this is done with a column major (compressed)
matrix (or, better, a generalized_vector_of_vector< ... , mapped_vector<
compressed_vector<double> > >) whose rows are your doc vectors.

Then the multiplication of this matrix with a sparse vector gives a
dense vector:
    A * x = b
    where b = sum( x[i] * column(A, i) ) // sum over all nonzero elements of x

However, most of the time people try to use algorithms on (weighted)
graphs rather than sparse matrix computations ...

Regards
Gunter