
From: David Abrahams (dave_at_[hidden])
Date: 2005-08-18 15:42:26


Bruno Martínez <br1_at_[hidden]> writes:

> On Thu, 18 Aug 2005 10:59:51 -0300, David Abrahams
> <dave_at_[hidden]> wrote:
>
>> Memoization has lots of weird effects like this. If you amortize your
>> "fast" access over touching all the elements of the list you clearly
>> are beating mpl::vector. But what happens when you only need to look
>> at the last 5 elements of a 100-element sequence? I wonder how likely
>> that is after all.
>
> It should be just as efficient, because there's no 'extra' work involved.
> The same number of elements are touched.

My point was that for mpl::vector that's O(1) while for your approach
it's O(N), so vector might win in that case.
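
For concreteness, a minimal sketch of the difference (illustrative
code only; five elements stand in for the hundred-element case):

    #include <boost/mpl/vector.hpp>
    #include <boost/mpl/list.hpp>
    #include <boost/mpl/at.hpp>

    namespace mpl = boost::mpl;

    typedef mpl::vector<int, char, long, double, float> vec;  // random access
    typedef mpl::list<int, char, long, double, float>   lst;  // forward access

    // On the vector, at_c resolves in O(1) instantiations.  On the
    // list, at_c has to advance begin<lst>::type one element at a
    // time, so the first lookup costs O(N) instantiations; memoization
    // only pays off if those intermediate iterators are reused later.
    typedef mpl::at_c<vec, 4>::type from_vec;  // float
    typedef mpl::at_c<lst, 4>::type from_lst;  // float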

> On the other hand, I haven't been able to think how to accelerate
> index_of<>, even if it takes a little speed hit.

Hmm.
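
For reference, the obvious formulation is find<> followed by
distance<>, both of which walk the sequence from the front -- just a
sketch (index_of isn't part of MPL itself), not the accelerated
version you're after:

    #include <boost/mpl/begin_end.hpp>
    #include <boost/mpl/find.hpp>
    #include <boost/mpl/distance.hpp>

    namespace mpl = boost::mpl;

    // Straightforward O(N) index_of: locate T, then measure how far
    // it is from begin<>.  Nothing here is shared across lookups of
    // different T the way memoized advance steps are.
    template <class Seq, class T>
    struct index_of
        : mpl::distance<
              typename mpl::begin<Seq>::type
            , typename mpl::find<Seq, T>::type
          >
    {
    };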

>>> I've been thinking the past days of other functions like drop that could
>>> be improved in a similar way, but they have eluded me. I didn't find
>>> mention of similar techniques in the MPL book, so I wonder if I'm making
>>> any sense.
>>
>> You're making plenty of sense; it's something I've thought about quite
>> a bit. But I never quite realized that you can make it "okay" to use
>> O(N) operations as though they were O(1). Maybe this demands a
>> re-think of metaprogram efficiency. If the MPL algorithms and
>> iterators could be reworked to take better advantage of memoization,
>> it might be that list becomes the most efficient data structure
>> overall, again.
>
> The patch to mpl::advance seems easy. I haven't done it locally because I
> don't know which version is the one my compiler actually sees.

That's easy enough to test by preprocessing your source.
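
For concreteness, the kind of change being discussed would look
roughly like this -- a sketch only, not the actual mpl::advance
source.  The point is that a step-at-a-time formulation names every
intermediate iterator as its own instantiation, which the compiler
then memoizes and can reuse:

    #include <boost/mpl/next_prior.hpp>

    namespace mpl = boost::mpl;

    // advance_sketch<It, N> reuses advance_sketch<It, N-1>, so once
    // any computation has walked N steps from It, every shorter
    // advance from It is already cached.
    template <class It, long N>
    struct advance_sketch
    {
        typedef typename mpl::next<
            typename advance_sketch<It, N - 1>::type
        >::type type;
    };

    template <class It>
    struct advance_sketch<It, 0>
    {
        typedef It type;
    };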

>> It would also be very interesting to see what happens to an
>> algorithm like sort when you take forced memoization into account.
>
> Indeed. I searched for theory on memoization before posting, without
> success. It would seem that C++, via its template system, is the first
> language with complete memoization.

Probably the first one where "you pay for memoization no matter what,
so you may as well use it."
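
The classic illustration, outside MPL: the compiler caches every
instantiation whether you ask for it or not, so the naively recursive
formulation is already the memoized one.

    // fib<40> shares fib<39> and fib<38>, which share their own
    // subproblems in turn, so the total work is O(N) instantiations
    // even though the same recursion would be exponential at run time.
    template <unsigned N>
    struct fib
    {
        static const unsigned long value = fib<N - 1>::value + fib<N - 2>::value;
    };

    template <> struct fib<1> { static const unsigned long value = 1; };
    template <> struct fib<0> { static const unsigned long value = 0; };

    static const unsigned long f40 = fib<40>::value;  // 102334155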

> It probably doesn't make sense to have a language like that purely
> on speed grounds. If I don't misunderstand the efficiency appendix
> of your book, even at<0, myvec> is O(N) because of the similar
> instantiations the compiler has to go through.

What led you to believe that?

> The difference is that this work is not interpreted.

<blink> Sorry, you lost me.

> Even without some weird O(N)->O(1) cases, memoization has a software
> engineering impact. As the book says, it's easier to avoid blobs,
> for example, because there's no speed hit.

The book says that?

> It seems to me that there should be several patterns and idioms
> appropriate for such a system treated in some paper somewhere.

Sounds like a good research topic for... someone like you! ;-)

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com
