From: Janek Kozicki (janek_listy_at_[hidden])
Date: 2006-06-14 12:57:07
I'm quoting Leland Brown in full, so that this very insightful comment
now has sane text formatting. I fully agree with all his observations.
I hope we can put it together somewhere and make it a specification for
this upcoming vector/matrix-with-units library (part of pqs, in fact)?
My few comments are below.
Leland Brown said: (by the date of Wed, 14 Jun 2006 02:21:31 -0700 (PDT))
> There's been a lot of discussion on this issue of vectors and how/
> whether to deal with vectors or tuples of mixed units. I'd like to
> chime in with some of my thoughts: 1 - why I think mixed vectors are
> both sensible and important, and 2 - why I think this could be easy
> to implement, at least partially.
> 1. First, what's the real difference between a vector and a tuple?
> There are probably differences in the way they're visualized by
> various people, but I think we should be primarily concerned with the
> differences in semantics - what operations are meaningful and/or
> useful on each, which distinguish one kind from the other. The way I
> think about them, I see at least three differences:
> A. Vectors have operations like magnitude (length), dot
> products, cross products, angle between two vectors, etc. For
> tuples, in general, none of these functions have a meaningful
> definition. (This is probably why they are hard to "visualize" as
> vectors.)
> B. Vectors can be transformed to other vectors by matrix
> multiplication. Thus, it's useful to have them be compatible with or
> embedded in some sort of matrix library. Tuples typically are not
> suited for such use.
> C. Vectors, like matrices, can be indexed numerically (1st row,
> 2nd column, etc.), so it's easy to loop over the elements. This is
> part of what makes them suited for matrix calculations. Tuples are
> frequently referenced by name instead of number - e.g., the members
> of a struct.
> So what about "vectors" of mixed dimensions/units? As far as
> property A, they would not act like vectors. But for property B, there
> *are* many engineering applications that need mixed aggregates to
> have these operations. The bulk of my own work falls into this
> category, and this is, in fact, the situation for which I developed
> my dimensional analysis library in the first place. Janek Kozicki
> also commented on the need for matrix multiplication with vectors in
> phase space.
> In my work, I tend to visualize these mentally as tuples, not as
> vectors in some N-dimensional space. But it turns out the
> mathematics I need to perform requires them to be involved in lots of
> matrix calculations - which also means the matrices themselves have
> mixed units.
> In summary, I think it's important to allow vectors whose elements
> have different physical dimensions - even though certain operations
> like vector length will fail unless all the dimensions
> are the same.
> 2. The good news is that I think this is almost trivial to implement
> using the "t3_quantity" or "free_quantity" or whatever we decide to
> call it. And with the other two "quantities" I found it extremely
> difficult to implement in a general way, so I suggest we don't bother.
> If the user needs mixed vectors, he can use "free_quantity." (Or he
> can write his own matrix operations for his special case, or exit the
> strong typing and dimensions checking for the matrix computations.)
> We can do this if we define vectors as vector<N,T> like this:
> vector<2,double> // 2D dimensionless vector
> vector<3,pqs::length::km> // 3D position vector in km
> vector<6,pqs::free_quantity> // 6-element vector of mixed units
> // (e.g., phase space)
> And I agree that this is better than:
> Likewise with matrices, perhaps use matrix<M,N,T>,
> etc. FWIW, in my library I made the type parameter default to
> double, which allows simply vector<3> if you want a unitless vector.
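A minimal sketch of what such a template could look like (the names here are hypothetical, not the actual pqs interface):

```cpp
#include <array>
#include <cstddef>

// Hypothetical sketch of the vector<N,T> idea above: a fixed-size
// vector whose element type defaults to double, so that vector<3>
// is a plain unitless 3-vector.
template <std::size_t N, class T = double>
struct vector {
    std::array<T, N> elems{};

    T&       operator[](std::size_t i)       { return elems[i]; }
    const T& operator[](std::size_t i) const { return elems[i]; }
};
```

A unit type (or free_quantity) would then be supplied as T for vectors carrying physical dimensions.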
> > Template specialization is probably required to get all the lower
> > dimensional versions fast, but that's happily transparent to the user.
> > Specialization also allows the low dimension versions to have nice member
> > variables like x,y,z.
> I agree. Especially since cross products only exist for 3D, template
> specialization is probably needed for that case at least.
Also, specializations can distinguish whether T is free_quantity or
not, and depending on that they could provide category A operations
(dot product, cross product, magnitude, etc.), because such operations
work for a vector whose components all share the same type.
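As a sketch of that idea for the unitless case (names hypothetical), a 3D specialization could carry named members and the category A operations:

```cpp
#include <cmath>

// Hypothetical sketch: a 3D vector with named members x, y, z, plus
// the "category A" operations that only make sense when all three
// components share one type. (With a units library, dot and cross
// would return the squared dimension of T rather than T itself.)
template <class T>
struct vec3 {
    T x{}, y{}, z{};
};

template <class T>
T dot(const vec3<T>& a, const vec3<T>& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

template <class T>
vec3<T> cross(const vec3<T>& a, const vec3<T>& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

inline double magnitude(const vec3<double>& v) {
    return std::sqrt(dot(v, v));
}
```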
My second point is about matrix operations. The need to "bundle"
matrix operations within this library limits the possible use of
ublas/lapack. Possible solutions:
1. Include calls to external methods, transparent to the user, that
solve matrix problems like the most common Ax=b (the bare-bones
approach is to invert matrix A, but there are subtler and more
efficient methods). The user works within this library, and the
library uses an external backend while taking care of the units.
2. Taking care of units while calling external methods may be too
complicated. It would be simpler to implement the operations on our
own. Tempting, but can we provide "all" the functionality?
3. Don't do it at all. Just make matrix classes (like in the examples
above) that cannot do any operation other than multiplication with a
vector. The user decides how to handle the rest: write his own code to
solve Ax=b while taking care of units, or call lapack and temporarily
turn units "off".
To begin with we can take approach 3, because it minimizes the amount
of work and gives us a small library to start from. Later we can try
to improve towards 1 or 2.
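A bare-bones sketch of approach 3 (names hypothetical): a matrix type whose only built-in operation is multiplication with a vector, deliberately leaving Ax=b solvers to the user or an external backend:

```cpp
#include <array>
#include <cstddef>

// Hypothetical sketch of approach 3: the matrix supports nothing but
// matrix-times-vector; inversion and Ax=b solving are intentionally
// left out.
template <std::size_t M, std::size_t N, class T = double>
struct matrix {
    std::array<std::array<T, N>, M> rows{};
};

template <std::size_t M, std::size_t N, class T>
std::array<T, M> operator*(const matrix<M, N, T>& a,
                           const std::array<T, N>& x) {
    std::array<T, M> y{};  // value-initialized to zeros
    for (std::size_t i = 0; i < M; ++i)
        for (std::size_t j = 0; j < N; ++j)
            y[i] += a.rows[i][j] * x[j];
    return y;
}
```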
Another question: would quaternions be something like vector<4,double>?
To me that looks like a good idea. Template specialization can offer
category A operations specific to quaternions when someone works with
a unitless vector<4>.
It would be just like the template specialization that provides the
cross product for vector<3>, an operation specific to vector<3> only.
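For instance, the quaternion (Hamilton) product could be sketched like this for the unitless case, with components stored as (w, x, y, z); in the scheme above it would live in the vector<4> specialization (names hypothetical):

```cpp
#include <array>

// Hypothetical sketch: the Hamilton product on a unitless quaternion
// stored as (w, x, y, z). In the design discussed above this would be
// offered by a template specialization of vector<4, double>.
using quat = std::array<double, 4>;

quat qmul(const quat& a, const quat& b) {
    return {
        a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3],  // w
        a[0]*b[1] + a[1]*b[0] + a[2]*b[3] - a[3]*b[2],  // x
        a[0]*b[2] - a[1]*b[3] + a[2]*b[0] + a[3]*b[1],  // y
        a[0]*b[3] + a[1]*b[2] - a[2]*b[1] + a[3]*b[0],  // z
    };
}
```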
Besides quaternions are also known to be used together with
One last note: the above design does not allow resizing vectors and
matrices at runtime, which limits the library's applicability. Should
we add a slower, complementary library that allows resizing at
runtime? Any ideas? Personally I see only a limited need for that; the
one exception is working with FEM. But FEM would also certainly
require a working method that solves Ax=b.
So maybe it is better to focus first on vectors/matrices that are not
resizeable at runtime.
Heh, I just recognized a similarity between this problem and pqs - look:
  everything determined |                 | everything determined
  during compilation    |                 | during runtime
 -----------------------+-----------------+----------------------
  t1_quantity           | t2_quantity     | t3_quantity
  fixed_quantity        | scaled_quantity | free_quantity
  vector<3>             |                 | vector.resize(3)
  matrix<4,4>           | ?               | matrix.resize(4,4)
  fixed_vector ?        |                 | free_vector ?
--
Janek Kozicki
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk