Thanks for the response, Gunter.  Everyone definitely appreciates the time you are able to put into the project.  See below:

On Thu, Sep 3, 2009 at 6:39 PM, Gunter Winkler <guwi17@gmx.de> wrote:


> >     cout << y * A  << endl;

> do we really need this? I'd prefer to fix vectors as column vectors and
> write (trans(y) * A) or (trans(A) * y)
> Otherwise we would have to tag vectors with an orientation.

I think you are probably right.  Having a standard orientation makes life a lot easier, and it eliminates the conformability errors that plague Matlab code (where a vector is an N-by-1 or 1-by-N matrix).

> at first glance the patch looks ok. I'll spend some time testing it.

My guess is that K.M.A.Chai's implementation is a much better place to start.  I don't trust my own code very much, and I think that implementation went a lot further.

 
> >
> > What is missing:

> There is a big problem: How to compute the type of the result? If both
> arguments are dense, then the answer is simply "dense". But what is the
> type of "banded*dense", "sparse*dense" (or even worse "packed vector *
> sparse vector", "compressed matrix * sparse vector"). Here we need a
> big effort to invent a useful traits mechanism or we simply state to
> use the "most dense" or "most sparse" representation.

Is this a problem with just the typical matrix-matrix operations, or only with nested ones?  For the existing ones, since the operator overload just forwards to the existing function, I think nothing really changes and it just uses the current traits with result_type.  Or are you saying that the current result type is not especially efficient?
 
But for the nested products, it certainly is an issue.  My proposal is the following:  If the library generates a temporary on its own, it chooses the most dense, most dynamic type possible for now (i.e. ublas::matrix<double>), etc.  And to find the value type, it does some kind of type promotion on the value_types of the arguments up to the biggest type (e.g. matrix<float> * compressed_matrix<double> -> matrix<double>).

The good thing about this approach is that people still have the old method to control the type of the temporary, and they should never count on the type of the temporary remaining the same between versions (which I don't think would bother anyone, since the temporary lives between two uBLAS calls)... so if we later put in a great traits system for multiplication, it would happen seamlessly.
 

> >    - What about y * y for two vectors?  I think we should hold off on
>
> here I'd suggest to use the same solution as GLAS. We define that all
> vectors are column vectors and use

> x = trans(a) * b     // inner prod
> x = a * trans(b)     // outer prod
>
> x = prec(trans(a)*b) // prec inner prod

Yup, GLAS compatibility is perfect.  The sooner we line up with GLAS, the happier I am.
 

> Does the product of three general matrices occur often? IMO most of the
> time it appears as a symmetric product like trans(X)*A*X, or a scaled
> product like invD * A * D. Here I'd prefer to provide specialized functions.

Yes, there is a lot of that, so it probably makes sense to have that specialization.  And if there are more efficient algorithms, then so much the better.  If we go down that route, there are a lot of great specializations to implement (for example, I would love a specific function for the quadratic form v' A v).

But the product of three general matrices comes up frequently enough that we would need it to work.  For example, anyone from the Matlab world will likely type "invD * A * D" directly, or invert to solve regressions... we can try to untrain bad Matlab habits, but we still need to allow these sorts of things.  Again, I think that having a slow and safe default temporary, as discussed above, is perfectly fine for this.

 
> >    - Would this work?  Any overhead or gotchas?  I don't think I am
> > capable of implementing it myself, but can help experts out with
> > testing, etc?

> Don't be shy to start developing uBLAS. Right now there is no active
> developer. (note to myself: Spending a few hours per week does not
> count as "active".)

I will do my best, but I am an economist, and this kind of library is extremely difficult to get right.  uBLAS is such an essential library that it is a shame there isn't more direct help on it, especially considering all the time people spend trying out the linear algebra libraries out there.  I think I will make a plea to the boost community...

-Jesse