From: Jason Kraftcheck (kraftche_at_[hidden])
Date: 2006-11-08 12:20:59
Oliver Koenig wrote:
> we recently investigated available C++ matrix libraries in the context
> of a new implementation of our Finite Element library eXperiment FeLyX.
> Within the element formulations (e.g. evaluation of stiffness matrices)
> of such FE libraries there are lots of fixed-size matrix operations
> where the sizes are known at compile time. Usually the matrix sizes are
> given through the actual element type used.
> We looked at MTL, UBLAS and TVMET (Tiny Vector Matrix library using
> Expression Templates, http://tvmet.sourceforge.net/), and finally
> decided to use TVMET.
> The reasons were:
> - As far as I understand your 2D-matrix template, TVMET offers exactly
> this type of fixed-size matrix library you are describing, providing
> compile-time checks for matrix sizes etc.
> - TVMET is fast. For matrices from size 3x3 up to approx. 60x60 we did a
> very basic performance comparison (matrix-matrix / matrix-vector
> products) with MTL, UBLAS and TVMET. We were really
> impressed by the performance of TVMET; in our basic test it was
> significantly faster than the other two.
> - The operator overloads of TVMET (implemented using expression
> templates) provide a very convenient and easy-to-use interface
> (especially compared to MTL, which we used before).
Thanks for the info. I looked at this implementation briefly, and I think my
implementation is better for the reasons I outline below. But as I said, I only
looked at it briefly, so it is possible that I missed or misunderstood some
portion of it. I'd greatly appreciate any corrections to my statements about it.
I think that most of my original operator implementations are equivalent to
those of TVMET. I later tuned the multiply operation. I'm sure that for the
specific case of multiplying 3x3 matrices on an Intel Netburst-architecture CPU
with G++ >= 4.0, my implementation will be faster than TVMET's. I think it will
be faster for many other cases also, but that's just a guess.
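For readers unfamiliar with the fixed-size approach both libraries take, a minimal sketch follows. This is not the actual implementation from either library; the `Matrix` class and its operators are hypothetical, shown only to illustrate how encoding the dimensions as template parameters turns a size mismatch into a compile error and lets the compiler fully unroll small products such as 3x3.

```cpp
#include <array>
#include <cstddef>

// Hypothetical minimal fixed-size matrix: rows and columns are template
// parameters, so every dimension is known at compile time.
template <typename T, std::size_t R, std::size_t C>
struct Matrix {
    std::array<T, R * C> data{};
    T& operator()(std::size_t r, std::size_t c) { return data[r * C + c]; }
    const T& operator()(std::size_t r, std::size_t c) const { return data[r * C + c]; }
};

// (R x K) * (K x C) yields (R x C). Multiplying matrices whose inner
// dimensions disagree simply fails to compile -- no runtime size check.
template <typename T, std::size_t R, std::size_t K, std::size_t C>
Matrix<T, R, C> operator*(const Matrix<T, R, K>& a, const Matrix<T, K, C>& b) {
    Matrix<T, R, C> out;
    for (std::size_t i = 0; i < R; ++i)
        for (std::size_t j = 0; j < C; ++j) {
            T sum = T{};
            for (std::size_t k = 0; k < K; ++k)
                sum += a(i, k) * b(k, j);
            out(i, j) = sum;
        }
    return out;
}
```

Because all three loop bounds are compile-time constants, the compiler is free to unroll the loops completely for small sizes, which is where the tuning mentioned above pays off.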
The TVMET library seems to define implementations of every libm function for
matrices (sin, pow, etc.). It implements them by applying the equivalent scalar
function to each matrix element. This kind of stuff results in a lot of noise
in the library, is unlikely to be very useful, and the theoretical meaning of
these operations is dubious. For example, mathematically, pow( M, 3 ) for a
matrix M means M * M * M, which is not the same as cubing each element of M.
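To make the distinction concrete, here is a small sketch contrasting the two meanings of "cubing" a matrix. The `Mat2` type and both helper functions are hypothetical illustrations, not code from TVMET: `mul` composes the true matrix product, while `elementwise_cube` does what applying the scalar `pow(x, 3)` to each entry would do.

```cpp
#include <array>

// Hypothetical 2x2 matrix type, just for illustration.
using Mat2 = std::array<std::array<double, 2>, 2>;

// True matrix product, so mul(mul(m, m), m) is the mathematical M * M * M.
Mat2 mul(const Mat2& a, const Mat2& b) {
    Mat2 r{};
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            for (int k = 0; k < 2; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

// What an elementwise pow(M, 3) computes: each entry cubed independently.
Mat2 elementwise_cube(const Mat2& m) {
    Mat2 r{};
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            r[i][j] = m[i][j] * m[i][j] * m[i][j];
    return r;
}
```

For M = [[1, 1], [0, 1]], the true cube M * M * M is [[1, 3], [0, 1]], while cubing each element leaves [[1, 1], [0, 1]] unchanged, so the two operations genuinely disagree.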
I didn't notice any common square matrix operations such as inverse,
determinant, etc. Did I miss these?
Also, see my other message about a matrix as an array of vectors vs. a vector as
a Nx1 matrix.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk