From: Joerg Walter (jhr.walter_at_[hidden])
Date: 2003-02-17 15:48:16
> Thanks for all your info. I've run the tests with a Boost from CVS (from
> January 31st), compressed_matrix and axpy_prod, and the results give
> roughly the same speed as our implementation, and ca. 30% better memory
> efficiency. Great!
Kudos to the guys on groups.yahoo.com/group/ublas-dev. Without them sparse
matrices would be as bad as in boost_1_29_0.
> The -DNDEBUG flag also seems critical, without it
> performance is terrible (quadratic).
Oh yes. That's my paranoia. Without -DNDEBUG defined, ublas is in debug mode
and even double-checks sparse matrix computations against a dense control
computation. You can customize this with the BOOST_UBLAS_TYPE_CHECK macro.
> Alexei's proposed optimizations seem interesting. I tried the axpy_prod
> you provided, but it didn't give any significant change. I trust your
> figures however.
Yup. I didn't post the necessary dispatch logic. I'll update Boost CVS with my
current version later.
> I will propose that we start using ublas as soon as the linear complexity
> functions appear in the stable branch.
> I provide our benchmark below for reference (with the timing calls, and
> other dependencies stripped out):