Subject: Re: [boost] [gsoc] Proposal - Boost.Matrix - DRAFT RFC
From: Mike Tegtmeyer (tegtmeye_at_[hidden])
Date: 2009-04-01 12:35:35
On Wed, 1 Apr 2009, Kornel Kisielewicz wrote:
> On Wed, Apr 1, 2009 at 4:59 PM, Mike Tegtmeyer <tegtmeye_at_[hidden]>
> wrote:
>> So my .02 is that vectors and matrices need to be two different things.
>> Having them as a single entity is not a good idea because:
>>
>> - unnecessarily complicates implementation as there are many operations
>> that make sense on a vector but not on a matrix and vice versa
>
> Well thought out implementation will just branch into two at a given
> point.
Just because it can, doesn't necessarily mean that it should. If the
implementation branches into "1D matrix" and "everything else", then the
unification is in name only. Too many operations are meaningful only for
1D matrices and not for general matrices, and vice versa. I'm having a
hard time justifying to myself why they should fall under a common
abstraction.
To me, an analogy is std::vector and std::list: even though they have
similar operations, they have different usage patterns. I think it makes
much more sense that they are top-level entities, instead of having a
common std::sequence with a template argument that specifies the
underlying memory layout and branching the two implementations under the
hood.
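Roughly what I mean, with made-up names (this is a sketch of the design
trade-off, not a proposed interface):

    #include <cstddef>

    // Two top-level entities: each interface carries only the
    // operations that make sense for it.
    template <typename T, std::size_t N>
    class vector {
    public:
        T dot(const vector& rhs) const;        // vector-only operation
        vector cross(const vector& rhs) const; // only sensible for N == 3
    };

    template <typename T, std::size_t R, std::size_t C>
    class matrix {
    public:
        matrix<T, C, R> transpose() const;     // matrix-only operation
        T determinant() const;                 // only sensible for R == C
    };

    // versus one unified entity that must branch internally and expose
    // the union of both interfaces:
    // template <typename T, std::size_t R, std::size_t C> class mat;
    // does cross() apply when C == 1? does determinant() when it isn't?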
>
>> - vectors and matrices are _really_ disjoint when it comes to common
>> usage patterns. (in my world anyway)
>
> In CG it's multiplying vectors and matrices all the time, what do you
> suggest by disjoint?
I'm not saying that you couldn't, for example, multiply a vector and a
matrix. I'm speculating that one of the reasons a matrix interface is more
controversial than a vector interface is that how the intended audience
uses them is much more varied. Someone who uses matrices for transforms in
graphics is usually (broad brush here) different from folks who care about
triangular matrices, what makes a good triangular matrix implementation,
and how a matrix library useful to them would have compile-time
enforcement of symmetric matrices, etc.
I guess my 'disjoint' comment was that this disparity (er, religious
wars??) typically doesn't come up with 1D vectors.
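To make the compile-time-enforcement point concrete, here is the kind of
thing I have in mind (a rough sketch with made-up names, not anything from
an existing library):

    #include <array>
    #include <cstddef>

    struct general_tag {};
    struct symmetric_tag {};

    template <typename T, std::size_t N, typename Structure = general_tag>
    class sq_matrix;

    // Symmetric case: only the upper triangle is stored, and (i,j) and
    // (j,i) alias the same element, so symmetry can never be silently
    // violated at runtime.
    template <typename T, std::size_t N>
    class sq_matrix<T, N, symmetric_tag> {
        std::array<T, N * (N + 1) / 2> packed_{};
        static std::size_t index(std::size_t i, std::size_t j) {
            if (i > j) { std::size_t t = i; i = j; j = t; }
            return i * N - i * (i + 1) / 2 + j;   // packed upper-triangle offset
        }
    public:
        T& operator()(std::size_t i, std::size_t j) { return packed_[index(i, j)]; }
        const T& operator()(std::size_t i, std::size_t j) const { return packed_[index(i, j)]; }
    };

A graphics-transform user would never want to pay for that indexing;
someone doing numerics might insist on it.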
>
>> - Vector operations are much more likely to take advantage of SIMD than
>> matrix operations - no point attempting to unify the un-unifiable
>
> That's where specializations enter.
Again, see my comment above about the unification being in name only.
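That is exactly what I mean, though: once you specialize, the "vector"
case ends up sharing nothing but the name with the general case.
Something like this (illustrative only, and assuming an SSE-capable
target):

    #include <cstddef>
    #include <xmmintrin.h>   // SSE intrinsics, assuming an x86 target

    template <typename T, std::size_t R, std::size_t C>
    struct mat {
        T data[R][C];
        // general element-wise add: plain loops
        friend mat operator+(const mat& a, const mat& b) {
            mat r;
            for (std::size_t i = 0; i < R; ++i)
                for (std::size_t j = 0; j < C; ++j)
                    r.data[i][j] = a.data[i][j] + b.data[i][j];
            return r;
        }
    };

    // The float-4 "vector" specialization: a completely different body
    // built on intrinsics, unified with the general case in name only.
    template <>
    struct mat<float, 4, 1> {
        __m128 data;
        friend mat operator+(const mat& a, const mat& b) {
            return mat{ _mm_add_ps(a.data, b.data) };
        }
    };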
> It's not the first and not the second time I would be writing such
> classes, the main difference would be compatibility, a higher level of
> genericness and the need of implementing some more obscure operations
> that I never bothered with. I still think it's in scope, as long as I
> focus on what's important, and not week-long tries of reducing the
> constant of an algorithms complexity.
Sorry, I wasn't suggesting that you weren't up for the task. If you could
pull it off, fantastic. I would just caution that if your library gains
some interest, it is likely that you will get bogged down by the same
stylistic, "I could do better", and "this isn't elegant for _my_ usage
pattern" clashes that have plagued every other attempt to have a unified
matrix library.
> The across compilers thing here is the major problem, however that is
> to be addressed with the performance suite. Also, the goal for this
> SoC is not to write *the optimal* implementation. It's to write an
> implementation that will be optimal enough to be optimized without
> full rewriting.
Absolutely.
It's just that I have run into the same issues. In general, you really
have to justify any abstraction penalty to the community you are
attempting to target, and that abstraction penalty can very much be
compile time and learning curve. I'll argue that most of this community
knows how to write explicit SIMD instructions and needs convincing to use
a complex abstraction, trusting that it will 'do the right thing' and do
it as fast as hand-rolled code when speed is paramount.
In short, because of what this is and who the target audience is, simple
and concise is probably better than clever and complicated.
Again, just my .02,
Mike