
From: Ralf W. Grosse-Kunstleve (rwgk_at_[hidden])
Date: 2004-07-03 18:58:59

--- David Abrahams <dave_at_[hidden]> wrote:
> > You are hitting the nail on the head. Experience with my own
> > reference-counted array type tells me that the issue of
> > immutability/constness needs extreme care. My own solution has the
> > glaring shortcoming of not providing a const-reference-counted array
> > type (excuses available on request :-). But having said that, I found
> > it a lot worse not to be able to write a function that returns a new
> > array without incurring a deep copy.
> It seems quite a few compilers implement one or another form of RVO.

Sorry, I don't know what RVO is. Is it portable? Does code exploiting RVO
to achieve move semantics look intuitive?

> I guess you haven't read the MTL concept breakdown:

Right. When I started with my array library (two years ago) it was clear that
MTL was too specialized for my purposes.

> - Shape
> - Storage (I think this is what you're calling memory management)
> - Orientation (row-major, column-major, diagonal-major...)

Is "Orientation" a specialization of "Shape"? Couldn't both be viewed as
"indexing models"?

> > 1. I found the following memory management models for arrays to
> > be essential in practical applications (i.e. I implemented
> > them all, and make heavy use of all):
> >
> > stack, fixed size
> > stack, variable size, fixed capacity
> > heap, variable capacity, one owner (e.g. std::vector)
> > heap, variable capacity, shared ownership
> > reference to any of these (i.e. "no management")
> That's storage.

Right. Which of these are you going to support?

> > 3. A general purpose array library should provide a mechanism for
> > reusing memory management models and indexing models in
> > various combinations. This could lead to:
> >
> > memory_model<typename ValueType, typename IndexingModelType>
> > or
> > indexing_model<typename ValueType, typename MemoryModelType>
> I don't understand why you think one of them should take the other as
> a parameter.

That's what I could cope with at the time, with my 12 months of C++ experience
and Visual C++ 6 support in mind (brrrrrr). [I settled on memory_model<>; it
works OK for me; the biggest disadvantage is that I had to replicate the
algebras for each memory model.]

> > 4. Array algebras (unary and binary operators, functions) are a third
> > property that is ideally separated from memory models and, if
> > applicable, indexing models. This could lead to:

Does my generic array design look too daunting? Do we have to wait for another
language before array libraries can be implemented in such a generic way?

> > 5. Expression templates are an optimization on top of all this and
> > in my practical experience the least important of all the points
> > mentioned here. In all likelihood the 5-10% of code that matter for
> > runtime performance have to be hand-optimized anyway, usually
> > leading to distortions of the original, intuitive code beyond
> > recognition.
> The aim is to make the library smart enough to do those optimizations
> correctly so you don't have to do it by hand.

A case of "premature optimization"? I am still waiting for an example of
real-world code where the critical section is so large that the reduction in
code size/gain in maintainability due to the availability of expression
templates would justify the cost of implementing the ET library and the
increased compile times. -- Maybe I am just too spoiled by Python. :-) I am
constantly taking a performance hit of about two orders of magnitude, writing
only the critical code in C++, and it turns out to be very practical because
the old 90-10 rule is true! Sometimes I even win out compared to brute force
number crunching approaches because the very high abstraction level of Python
allows me to push bookkeeping and high-level decision making to a point where I
know how to skip much of the brute force work.



Boost list run by bdawes at, gregod at, cpdaniel at, john at