From: Joerg Walter (jhr.walter_at_[hidden])
Date: 2002-07-01 01:25:36
----- Original Message -----
From: "David Abrahams" <david.abrahams_at_[hidden]>
To: "boost" <boost_at_[hidden]>
Sent: Sunday, June 30, 2002 6:06 PM
Subject: [boost] uBlas questions
> Hi,
>
> Carlos Coelho sent me these questions about uBlas for the review:
>
> -------
>
> I will try to take another look
> at the documentation before I go. My major "concerns"
> were:
>
> - Can (or are) the dense level 3 blas routines,
> especially matrix-matrix product evaluated using some
> form of blocking as in MTL?
No. Should be mentioned in the documentation.
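
For readers who haven't seen the technique, here is a rough sketch of what
loop blocking of a dense matrix-matrix product looks like. This is plain C++
for illustration only, not uBlas code; the block size and the names are
arbitrary choices:

#include <algorithm>
#include <cstddef>
#include <vector>

// Dense C += A * B for n x n row-major matrices, computed tile by tile so
// that each BS x BS block of A and B is reused while it is still in cache.
void blocked_prod(const std::vector<double>& A,
                  const std::vector<double>& B,
                  std::vector<double>& C,
                  std::size_t n)
{
    const std::size_t BS = 64; // illustrative block size
    for (std::size_t ib = 0; ib < n; ib += BS)
        for (std::size_t kb = 0; kb < n; kb += BS)
            for (std::size_t jb = 0; jb < n; jb += BS)
                for (std::size_t i = ib; i < std::min(ib + BS, n); ++i)
                    for (std::size_t k = kb; k < std::min(kb + BS, n); ++k) {
                        const double a_ik = A[i * n + k];
                        for (std::size_t j = jb; j < std::min(jb + BS, n); ++j)
                            C[i * n + j] += a_ik * B[k * n + j];
                    }
}
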
> - Can sparse_matrix store replicated/unsorted entries.
> I am not sure that this is a real issue for most
> people and dealing with it complicates most of the
> algorithms.
No. I haven't thought about a sparse matrix format with replicated entries
yet. A COO-compatible sparse matrix format should be added.
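
For reference, a rough sketch of what a COO-compatible (coordinate) format
amounts to; the names are illustrative, not a proposed uBlas interface. Since
the entries are just parallel arrays of (row, column, value) triples, they can
be stored unsorted and the same position can occur more than once, duplicates
usually being summed when the matrix is consumed:

#include <cstddef>
#include <vector>

// Coordinate (COO) storage: parallel arrays of (row, column, value) triples.
// Entries may be appended in any order and duplicates are allowed.
struct coo_matrix {
    std::size_t size1, size2;            // matrix dimensions
    std::vector<std::size_t> row_index;  // one entry per stored triple
    std::vector<std::size_t> col_index;
    std::vector<double> value;

    void append_element(std::size_t i, std::size_t j, double v) {
        row_index.push_back(i);
        col_index.push_back(j);
        value.push_back(v);
    }
};
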
> - Can the sparse structure of a sparse matrix be
> shared and fixed or is a non-const sparse matrix
> always assumed to allow having its sparse structure
> changed?
I'm unsure. The (still undocumented ;-() compressed_matrix (CRS/CCS format)
could be the basis for such functionality.
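
For readers unfamiliar with it, here is a rough sketch of the classical CRS
layout (this illustrates the storage scheme only, not the compressed_matrix
interface). The "sparse structure" is the two index arrays; the values live in
a separate array, so in principle the index arrays could be kept fixed or
shared while only the values are rewritten:

#include <cstddef>
#include <vector>

// Classical compressed row storage (CRS).  The structure is row_start plus
// col_index; value is the only part that changes when the matrix is refilled.
struct crs_layout {
    std::vector<std::size_t> row_start;  // size: rows + 1
    std::vector<std::size_t> col_index;  // size: number of nonzeros
    std::vector<double> value;           // size: number of nonzeros
};

// y = A * x, reading the fixed structure and the current values only.
void crs_mult(const crs_layout& A, const std::vector<double>& x,
              std::vector<double>& y)
{
    for (std::size_t i = 0; i + 1 < A.row_start.size(); ++i) {
        double s = 0.0;
        for (std::size_t k = A.row_start[i]; k < A.row_start[i + 1]; ++k)
            s += A.value[k] * x[A.col_index[k]];
        y[i] = s;
    }
}
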
>
> ------
>
> I think we know the current answer to the first question ("no, not yet").
>
> I think I can shed some light on the next 2 questions. Carlos and I used to
> work together on simulation software. After much consideration, we
> discovered that for some problems, log(N) random access to the elements of
> a sparse matrix row was much less important than being able to control the
> order in which the elements appeared. Being able to choose a non-sorted
> order allowed us to write the inner loop of our algorithm so that it simply
> marched through contiguous addresses, writing data to consecutive
> locations in memory. In fact, this loop never had to touch the sparse
> matrix structure at all.
>
> The last question arises because in these problems, it is very common to
> have to perform a calculation over and over again using the same sparse
> matrix structure with different values (think Newton's method). For maximal
> efficiency it's important not only to be able to re-use a sparse structure
> for the inputs to this algorithm, but also to compute the result's sparse
> structure just once, and re-use that.
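
To make that pattern concrete, here is a rough sketch under the assumptions of
the CRS layout above; compute_values() is a hypothetical stand-in for whatever
model evaluation produces the entries, in the element order fixed by the
structure. The structure is set up once before the iteration; each step only
streams new numbers into the contiguous value array:

#include <cstddef>
#include <vector>

// Hypothetical numeric phase: produce the nonzero values for the current
// iterate x, in exactly the element order fixed by the sparse structure.
std::vector<double> compute_values(const std::vector<double>& x,
                                   std::size_t nnz)
{
    return std::vector<double>(nnz, 1.0);  // placeholder for the real model
}

// The structure (index arrays) is set up once before calling this; every
// iteration only rewrites the contiguous value array.
void newton_style_loop(std::vector<double>& x, std::vector<double>& values)
{
    for (int it = 0; it < 10; ++it) {
        const std::vector<double> v = compute_values(x, values.size());
        for (std::size_t k = 0; k < values.size(); ++k)
            values[k] = v[k];  // consecutive writes, no structure access
        // ... factor / solve with the fixed structure and the new values,
        //     then update x ...
    }
}
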
Regards
Joerg