
Boost : 
From: John Phillips (phillips_at_[hidden])
Date: 2006-06-08 20:59:38
Andy Little wrote:
>
>
> If you work through the calcs using dimensional analysis you find that
> concatenating two matrices then gives you the same type of matrix (give or take
> promotion), which is kind of nice! Of course you can generalise the float by
> replacing it with typeof(length()/length()) too. (Then finally replace length by
> T .... Stick a single template param T on the front, giving matrix4x4<T> and
> Bob's your uncle!
>
> That is the basic idea, and similarly for complex, vect, etc. The only one I haven't
> tried is quaternion, but hopefully I can try it sometime!
>
> regards
> Andy Little
Maybe I'm not getting it yet, but I don't think this gets you out of
the woods.
The problem I see is that matrices are used for many things other
than just transformations. For example, a user might want to diagonalize
a matrix, find its eigenvalues, or do any number of other things. If so,
having units on the matrix elements becomes problematic, I think.
The old war horse of these routines is Gaussian elimination
(which goes by a number of different names). Though it has some issues as a
numeric routine, it provides a good example of the problem I think I see
with units being included in matrices. Because anyone who wants to
see the problem will need to remember the algorithm, I'll describe it as
I go. My apologies to those who don't need the reminder.
Gaussian elimination has two basic parts: the elimination step
itself, and an adjustment (pivoting) that avoids stability problems.
In the elimination phase, one row or column of the matrix is added to
another, scaled by a multiplicative factor, and the result replaces one
of the rows or columns in the sum. This is done to adjust the matrix so
that one element becomes zero. It looks like this:
  /a_11  a_12\        /a_11                      a_12                     \
  \a_21  a_22/  -->   \a_21 - a_11*(a_21/a_11)   a_22 - a_12*(a_21/a_11)  /

(The new a_21 entry is identically zero, which is the point of the step.)
The other operation is trading the contents of two rows or columns. This
is called pivoting, and it is done to make sure there are no zero (or
very small) divisors in the elimination step.
Both of these operations are valid because each is equivalent to
multiplying by a matrix (an elementary matrix) that has the same effect
as these apparently ad hoc adjustments without changing the values of
the eventual diagonal elements.
However, since both move values around in the matrix, it is quite
likely that they will cause type system errors in a matrix such as the
transformation matrices discussed above. The only way I see to avoid
this while enforcing the types is to actually perform the matrix products as
part of the algorithm. However, a full matrix product costs O(n^3) per
step, while the row operation it replaces costs only O(n), so there would
be a huge efficiency hit in large problems. This would be a game breaker
for scientific and engineering calculations, of course, so it just can't
be the route the library chooses.
The other choice is to develop a way to shut off the library for some
operations and extrapolate the resultant types after the fact. I would
guess that this can be done, but I don't currently know enough to even
begin to do it.
So my guess right now is that matrices and vectors will be a major
struggle.
John Phillips
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk