From: David Abrahams (dave_at_[hidden])
Date: 2004-07-05 16:09:23
"Ralf W. Grosse-Kunstleve" <rwgk_at_[hidden]> writes:
> --- David Abrahams <dave_at_[hidden]> wrote:
>> Sorry, the "l" was a typo. I meant Return Value Optimization, where
>> values can be returned from functions without actually invoking the
>> copy ctor.
>
> That would be a cool thing to have, but searching for "RVO" and "Return Value
> Optimization" in the C++ ISO standard didn't produce any hits. As long as it
> isn't written down in the standard, it is impractical for us (as scientific
> application developers) to count on it.
It is written in the standard, but not by that name. Look for the
word "elision", and check section 12.8 ([class.copy]).
Still, all optimizations are optional by definition, so you may
decide that you don't want to count on it.
>> No, shape is separate. No point in me trying to explain here when
>> you can google for a good explanation from the MTL authors, though ;-)
>
> I looked, and in my mind Orientation only makes sense given a Shape, and even
> Storage is only concerned with indexing! So I still believe Orientation is a
> specialization of Shape. :-) And I cannot find the equivalent of my
> "MemoryModel" concept (stack, heap, etc.) at all. "dense, banded, packed,
> banded_view," etc., that's all just concerned with indexing.
This stuff is Jeremy's bailiwick, so I'll leave it to him to answer.
> Googling for "heap stack" at the MTL site didn't turn up a hit. So that
> fundamental property of an array seems to be magically abstracted away.
It's not fundamental because you can never force anything to always
be on the stack:
new some_matrix<double, stack_allocated>
will defeat it. Internal vs. external might be more appropriate terms.
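For instance (toy type, nothing to do with MTL's actual interface):

    struct small_matrix { double data[9]; };   // embedded ("internal") storage

    int main()
    {
        small_matrix a;                        // the array lives wherever "a" lives
        small_matrix* p = new small_matrix;    // same type, array now on the heap
        delete p;
    }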
>> Whatever you like. There will be well-formed concepts, so you can
>> build any storage policy you like if it isn't directly supplied.
>
> If that means I can choose between "heap" and "stack", "heap
> shared", etc., I am excited.
Then you're excited.
> My hope was that you'd cleanly separate MemoryModel from the rest of
> matrix properties, and that the general framework is reusable for
> other array types, i.e. n-dim arrays.
Should be.
>> > Does my generic array design look too daunting?
>>
>> Not necessarily... but I doubt I'm going to implement your design.
>> We have specific needs ;-).
>
> Right now it seems to me that MTL matrices have a specific memory
> model hardwired in (e.g. I see reference-counted std::vectors).
The old MTL codebase is not really worth looking at for the purposes
of this discussion. It's the concept taxonomy that I'm referring to.
> That makes the library a lot less interesting to me. Most of our
> matrices are small (3x3, sometimes 9x12, in that ballpark).
Specific support for small matrices including register-level blocking
is part of the plan.
> A heap-based array type is therefore of little interest. If "heap"
> or "stack" are not template parameters in some form, the whole matrix
> algebra code has to be replicated. That's not very generic.
Didn't I make it clear in my last message that you'll be able to use
any storage policy you like?
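Concretely, I mean something along these lines (the names here are
invented just to sketch the idea; they're not what the library will
actually provide):

    #include <cstddef>

    template <class T, std::size_t N>
    struct fixed_storage                 // data embedded in the object
    {
        T data[N];
    };

    template <class T, std::size_t N>
    struct heap_storage                  // data allocated dynamically
    {
        heap_storage() : data(new T[N]) {}
        ~heap_storage() { delete[] data; }
        T* data;
    private:
        heap_storage(heap_storage const&);             // non-copyable, for brevity
        heap_storage& operator=(heap_storage const&);
    };

    // One matrix template; the algebra code is written once, whatever
    // the memory model.
    template <class T, std::size_t Rows, std::size_t Cols,
              template <class, std::size_t> class Storage = fixed_storage>
    class matrix
    {
        Storage<T, Rows * Cols> store_;
    public:
        T& operator()(std::size_t r, std::size_t c)
        {
            return store_.data[r * Cols + c];
        }
    };

    // matrix<double, 3, 3>                   : small, no heap allocation
    // matrix<double, 100, 100, heap_storage> : large, heap-backed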
>> > A case of "premature optimization?"
>>
>> No, one of the main _goals_ of this project (from a CS/research
>> point-of-view) is to demonstrate that it can be done.
>
> Can't argue with that. Only that this is not what I, as a scientific
> application developer, am interested in. :-)
Fair enough.
>> Of course it is. If you're taking a performance hit of about two
>> orders of magnitude in your linear algebra code,
>
> No, of course that's all happening in C++.
>
>> I'll have to take your word for it. It seems to me that if you can
>> skip the brute force work in Python you can do it in C++, too.
>
> Sure, I could even do it in machine language... if only I'd live
> long enough. Python is very good at getting high-level bookkeeping
> done with relatively little effort compared to C++.
I think that's mostly due to the fact that you don't have the right
C++ libraries at your disposal. Of course you know I think there are
real advantages to hybrid development, but there's no reason that
Python and C++ should be so different aside from access to the right
libraries.
> This makes it practical to accumulate the knowledge that is required
> to selectively skip brute force number crunching work.
I can't really imagine how you do that selective skipping nor how
Python helps you get there. It all sounds very much like black magic
over here.
> It doesn't mean I do everything in Python. Thank heaven we have
> Boost.Python and I can quickly farm out the number crunching stuff
> to C++.
:)
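For anyone else following along, that division of labor with
Boost.Python looks roughly like this (the module and function are
invented for illustration):

    #include <boost/python.hpp>

    // Hypothetical number-crunching routine, kept in C++ for speed.
    double dot3(double a0, double a1, double a2,
                double b0, double b1, double b2)
    {
        return a0*b0 + a1*b1 + a2*b2;
    }

    BOOST_PYTHON_MODULE(crunch)   // imported from Python as "crunch"
    {
        boost::python::def("dot3", dot3);
    }

and from the Python side:

    >>> import crunch
    >>> crunch.dot3(1, 2, 3, 4, 5, 6)
    32.0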
--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com