Subject: Re: [ublas] Patch/proposal for overloading operator * in ublas
From: Gunter Winkler (guwi17_at_[hidden])
Date: 2009-09-03 18:39:24
Hello Jesse,
after silently following the discussion for a while, I'd like to comment on the findings.
On Monday, 17 August 2009, Jesse Perla wrote:
> As discussed in previous posts, people really can't get past the lack
> of * overloading in ublas. I know this was partially a
> philosophical/design decision, but it is extremely important to "joe
> user". Can this be easily remedied?
>
> Here is a simple patch for discussion that implements:
> matrix<double> A;
> vector<double> y;
>
> cout << 2.0 * y << endl;
> cout << y * 2 << endl;
> cout << A * 2 << endl;
> cout << 2 * A << endl;
So far I agree.
> cout << A * y << endl;
This is the common matrix-vector multiplication and should definitely be provided.
> cout << y * A << endl;
Do we really need this? I'd prefer to fix vectors as column vectors and
write (trans(y) * A) or (trans(A) * y).
Otherwise we would have to tag vectors with an orientation.
> cout << A * A << endl;
Here I also agree.
>
> The patch can be applied in the ublas directory. All it does is:
>
> - For the scalar * vector and scalar * matrix operator overloads,
> it uses boost::enable_if to remove anything from those overloads that
> doesn't fit boost::is_arithmetic. This is to prevent a clash with
> the new operator *.
> - Then, new * operators are added for matrix*vector, vector*matrix,
> and matrix*matrix, which forward directly to the appropriate prod()
> dispatcher.
At first glance the patch looks OK; a small illustration of the enable_if trick it uses follows below. I'll spend some time testing it.
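The example below is a self-contained toy of my own, not the patch itself: the scalar overload is restricted to arithmetic types via enable_if, so it can no longer swallow the separate matrix-vector overload. The real patch applies the same restriction to uBLAS's existing scalar operator* overloads and forwards the new operators to the prod() dispatcher.

#include <boost/utility/enable_if.hpp>
#include <boost/type_traits/is_arithmetic.hpp>
#include <iostream>

struct Vec { double x, y; };
struct Mat { double a, b, c, d; };

// scalar * vector: considered only when T is an arithmetic type,
// so this template never participates in Mat * Vec
template<class T>
typename boost::enable_if<boost::is_arithmetic<T>, Vec>::type
operator*(T alpha, const Vec& v)
{
    Vec r = { alpha * v.x, alpha * v.y };
    return r;
}

// matrix * vector: a separate overload that cannot clash with the
// restricted scalar overload above
Vec operator*(const Mat& m, const Vec& v)
{
    Vec r = { m.a * v.x + m.b * v.y, m.c * v.x + m.d * v.y };
    return r;
}

int main()
{
    Mat A = { 1, 2, 3, 4 };
    Vec y = { 1, 1 };
    Vec s = 2.0 * y;  // scalar overload
    Vec p = A * y;    // matrix-vector overload
    std::cout << s.x << " " << p.y << std::endl;
}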
>
> What is missing:
>
> - Is this as simple as I think it is? Is there any chance of
> overhead here?
There is a big problem: how do we compute the type of the result? If both
arguments are dense, then the answer is simply "dense". But what is the
type of "banded * dense" or "sparse * dense" (or, even worse, "packed
vector * sparse vector" and "compressed matrix * sparse vector")? Here we
either need a big effort to invent a useful traits mechanism, or we simply
declare that the "most dense" (or "most sparse") representation is used.
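Just to make the idea concrete, such a traits mechanism could look roughly like the sketch below. The name product_result and the "dense by default, sparse only for sparse*sparse" policy are invented here for illustration; only promote_traits is existing uBLAS machinery.

#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/matrix_sparse.hpp>
#include <boost/numeric/ublas/traits.hpp>

namespace ublas = boost::numeric::ublas;

// primary template: fall back to a dense, row-major result
template<class M1, class M2>
struct product_result
{
    typedef ublas::matrix<typename ublas::promote_traits<
        typename M1::value_type,
        typename M2::value_type>::promote_type> type;
};

// sparse * sparse: the one case where a sparse result is the obvious default
template<class T1, class T2>
struct product_result<ublas::compressed_matrix<T1>,
                      ublas::compressed_matrix<T2> >
{
    typedef ublas::compressed_matrix<typename
        ublas::promote_traits<T1, T2>::promote_type> type;
};

// operator* would then be declared roughly as
//   template<class M1, class M2>
//   typename product_result<M1, M2>::type
//   operator* (const M1& a, const M2& b);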
> - What about y * y for two vectors? I think we should hold off on
> this because of the inner_prod vs. outer_prod confusion. Perhaps
> with the extended numeric_traits created for the bindings this could
> be used to emulate some kind of matlab approach with explicitly
> stated orientation (row vs. column vectors), but it may be more
> confusing than not. I personally hate the conformance bugs that come
> out of * meaning both inner and outer product in matlab.
Here I'd suggest using the same solution as GLAS: we define that all
vectors are column vectors and use
x = trans(a) * b     // inner prod
x = a * trans(b)     // outer prod
x = prec(trans(a)*b) // prec inner prod
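For comparison, both cases can already be written explicitly with the current interface; the trans()-based operator* forms above would essentially have to forward to these (small usage sketch, values chosen arbitrarily):

#include <boost/numeric/ublas/vector.hpp>
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/io.hpp>
#include <iostream>

namespace ublas = boost::numeric::ublas;

int main()
{
    ublas::vector<double> a(3), b(3);
    for (unsigned i = 0; i < 3; ++i) { a(i) = 1.0; b(i) = i + 1.0; }

    double x = ublas::inner_prod(a, b);                  // trans(a) * b
    ublas::matrix<double> M = ublas::outer_prod(a, b);   // a * trans(b)

    std::cout << x << "\n" << M << std::endl;
}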
> - What about b *= A; and A *= B? Am I right to think this kind
> of aliased multiplication should be using axpy_prod(b, A, b, false);
> and axpy_prod(A, B, A, false); ? Should we just call this in the
> overload? This operation comes up all over the place in the
> statistical work that I use it for.
Yes, I think this is a good idea. However, we have to ensure that
axpy_prod correctly handles the aliasing. At the moment it happens to
work, but it is not actually required to work.
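Until that guarantee exists, the overloads could sidestep the aliasing question with an explicit temporary. The following is only a sketch of one possible implementation (the placement of operator*= and the swap trick are my own choices, not part of the patch):

#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/operation.hpp>

namespace ublas = boost::numeric::ublas;

// A *= B, written so that the result never aliases the operands
template<class T>
ublas::matrix<T>& operator*=(ublas::matrix<T>& A, const ublas::matrix<T>& B)
{
    ublas::matrix<T> tmp(A.size1(), B.size2());
    ublas::axpy_prod(A, B, tmp, true);  // tmp = A * B (init = true zeroes tmp first)
    A.swap(tmp);                        // write the result back only afterwards
    return A;
}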
> The big one, of course, is nested matrix multiplications. I don't
> entirely know how to do it, but here is my proposal:
>
> - Operator * will do a (relatively) inefficient nested matrix
> multiplication using a heap allocated, row-major matrix<T> as the
> temporary object. What type to use for T? Use some kind of type
> promotion on the value_type inside of the two argument types....
I think it could be helpful to have a default mechanism for creating
temporary types, plus some means of fine-tuning it. However, I expect this
to require weeks of development. (Do you have some students to assign the
task to?)
> - The docs tell people what it does, and how to have finer control
> with the prod(A, B, temp); approach.
This looks quite nice.
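For reference, the explicit-temporary route with the current interface looks roughly like this; the proposed operator* would simply generate the temporary automatically:

#include <boost/numeric/ublas/matrix.hpp>

namespace ublas = boost::numeric::ublas;

void nested_product(const ublas::matrix<double>& A,
                    const ublas::matrix<double>& B,
                    const ublas::matrix<double>& C,
                    ublas::matrix<double>& R)
{
    // materialise B*C once, then multiply by A; nesting prod() calls
    // directly is rejected at compile time by the complexity check
    ublas::matrix<double> temp = ublas::prod(B, C);
    R = ublas::prod(A, temp);
}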
> - In line 4834, etc. of matrix_expression it tests for the
> complexity at compile time. Instead, in the operator* overload, we
> could detect the complexity at compile time and tag-dispatch to a
> function that would create the appropriate temporary object based on
> the rules I mentioned above. It would then call the prod()
> dispatcher. We probably would have to do this for the vector*matrix
> operator overloads as well.
Does the product of three general matrices occur often? IMO it most often
appears as a symmetric product like trans(X)*A*X, or a scaled product
like invD * A * D. Here I'd prefer to provide specialized functions.
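Such a specialized function could be as simple as the following sketch (the name symm_prod and its signature are invented here purely for illustration):

#include <boost/numeric/ublas/matrix.hpp>

namespace ublas = boost::numeric::ublas;

// trans(X) * A * X with a single explicit temporary
template<class T>
ublas::matrix<T> symm_prod(const ublas::matrix<T>& X, const ublas::matrix<T>& A)
{
    ublas::matrix<T> tmp = ublas::prod(A, X);    // tmp = A * X
    return ublas::prod(ublas::trans(X), tmp);    // trans(X) * tmp
}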
> - Would this work? Any overhead or gotchas? I don't think I am
> capable of implementing it myself, but can help experts out with
> testing, etc?
Don't be shy about starting to develop uBLAS. Right now there is no active
developer. (Note to myself: spending a few hours per week does not count
as "active".)
Best regards,
Gunter