Thank you very much, Michael. I am working hard against a deadline and I hope your answer will help me save the day! For now I am just trying to get the code working, but later I would like it to perform as well as possible, even if that means writing custom algebra code (with or without ublas).
 
I have to compute the following product:
 
J * d * Jt
 
where J is a square sparse matrix;
d is a diagonal matrix;
Jt is the transpose of J.
 
What's the most efficient way to compute this product? Later I also need to use the computed matrix
to solve a linear system with SuperLU, which means I have to copy everything into three new memory buffers.
Where can I find a good reference on sparse matrix (multiplication) algorithms?
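
To make the question concrete, here is roughly what I have in mind. This is only a sketch, assuming compressed_matrix, diagonal_matrix and the sparse_prod helper from operation_sparse.hpp are the right tools here; the function name compute_JdJt is just for illustration.

#include <cstddef>
#include <boost/numeric/ublas/matrix_sparse.hpp>
#include <boost/numeric/ublas/banded.hpp>            // diagonal_matrix
#include <boost/numeric/ublas/operation_sparse.hpp>  // sparse_prod

namespace ublas = boost::numeric::ublas;

// R = J * d * Jt, computed in two sparse steps so that no dense
// temporary is ever formed.
ublas::compressed_matrix<double>
compute_JdJt (const ublas::compressed_matrix<double> &J,
              const ublas::diagonal_matrix<double> &d)
{
    const std::size_t n = J.size1 ();

    // T = J * d : only scales the columns of J, so T keeps at most
    // the sparsity pattern of J.
    ublas::compressed_matrix<double> T (n, n, J.nnz ());
    ublas::sparse_prod (J, d, T);

    // R = T * Jt : the real sparse-sparse product.
    ublas::compressed_matrix<double> R (n, n);
    ublas::sparse_prod (T, ublas::trans (J), R);
    return R;
}

Is this reasonable, or is there a better way? In particular I wonder whether iterating trans (J) column by column over a row-major compressed_matrix is slow, and whether I should build Jt explicitly in a second compressed_matrix. For the SuperLU step it looks like the three internal arrays of the compressed_matrix (value_data (), index1_data () and index2_data ()) are what I would have to copy into the new buffers.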

 
Thanks once again to Michael and the whole mailing list.
Luca

 
On 5/29/05, Michael Stevens <mail@michael-stevens.de> wrote:
Luca,

On Saturday 28 May 2005 22:32, luca regini wrote:
> I know of Gunter Winkler's page about ublas hints and tricks. Sadly my
> Visual Studio 7.1 compiler refuses to compile the examples that are
> found on his website.

Oh, Gunter's VC7.1 version does look rather wrong. The following corrected
version should help out.

#include <boost/numeric/ublas/matrix_sparse.hpp>

namespace ublas = boost::numeric::ublas;

void gunter_example ()
{
    typedef boost::numeric::ublas::sparse_matrix<double> Matr;
    Matr a;
    // .... fill matrix
    // iterate over the non-zero elements and zero them
    for ( Matr::iterator1 it1 = a.begin1 (); it1 != a.end1 (); ++it1 )
#ifndef BOOST_UBLAS_NO_NESTED_CLASS_RELATION
        for ( Matr::iterator2 it2 = it1.begin (); it2 != it1.end (); ++it2 )
#else
        // VC7.1 chokes on the nested iterator classes, so use the free
        // begin/end functions with an explicit iterator tag instead
        for ( Matr::iterator2 it2 = begin (it1, ublas::iterator1_tag ());
              it2 != end (it1, ublas::iterator1_tag ()); ++it2 )
#endif
            *it2 = 0.;
}

Gunter, any chance of correcting the web site?

Michael