Boost :
Subject: Re: [boost] [iterator] Efficiency of transform_iterator
From: Giovanni Piero Deretta (gpderetta_at_[hidden])
Date: 2008-09-04 09:12:06
On Thu, Sep 4, 2008 at 2:58 PM, Arno Schödl <aschoedl_at_[hidden]> wrote:
> Looking at other performance issues with stacked iterators: transform_iterators, in particular those that perform a calculation and return by value, may be dereferenced many times and so repeat the same calculation many times over. That seems bad to me, because it is so different from the behavior of trivial iterators, which are very cheap to dereference.
>
I do not know; most of my uses of transform iterators (though not all) have
been for projections, which are extremely cheap. In fact, I think that
adding a caching layer would actually slow down projections.
>
>
> I am sure this has been suggested many times, but why isn't there a
> cached_transform_iterator that contains an internal cache for its result? The actual
> calculation would be done in the ctor and after increment/decrement/advance. The
> cached_transform_iterator must know its end like a filter_iterator, because it must
> not dereference it. As an invariant, the cache could either always be constructed, which
> saves some checks but requires the value_type to be cheaply default-constructible, or
> constructed only when not at the end.
What about adding, as yet another adaptor, a memoizing iterator? That
would solve the problem in general.
-- gpd
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk