
Subject: [boost] [iterator] Efficiency of transform_iterator
From: Arno Schödl (aschoedl_at_[hidden])
Date: 2008-09-04 08:58:34

Looking at other performance issues with stacked iterators, it seems bad to me that transform_iterators, in particular those doing a calculation and returning by value, may get dereferenced many times and thus repeat the same calculation many times over. This is very different from the behavior of trivial iterators, which are very cheap to dereference.
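To make the issue concrete, here is a minimal hand-rolled sketch (not Boost's actual implementation, and simplified to a fixed int value_type) of a transform iterator whose dereference invokes the functor every time, so two dereferences of the same position do the work twice:

```cpp
#include <cassert>
#include <vector>

// A functor standing in for an expensive calculation; it counts its own calls.
struct expensive_square {
    static int calls;
    int operator()(int x) const { ++calls; return x * x; }
};
int expensive_square::calls = 0;

// Sketch of a transform iterator: dereference recomputes on every call.
template<class It, class F>
struct transform_it {
    It it;
    F f;
    int operator*() const { return f(*it); } // repeated work on each dereference
    transform_it& operator++() { ++it; return *this; }
};
```

Dereferencing the same position twice runs the calculation twice, which is exactly the cost trivial iterators do not have.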


I am sure this has been suggested many times, but why isn't there a cached_transform_iterator that contains an internal cache for its result? The actual calculation would be done in the constructor and after increment/decrement/advance. Like a filter_iterator, the cached_transform_iterator must know its end, because it must never dereference the end of the underlying range. As an invariant, the cache could either be always constructed, which saves some checks but requires the value_type to be cheaply default-constructible, or constructed only when not at the end.
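A rough sketch of what such a cached_transform_iterator might look like (the name and design are taken from the proposal above; this is not an existing Boost component, and the value_type is fixed to int for brevity). std::optional models the "cache only when not at end" invariant, avoiding the default-constructibility requirement:

```cpp
#include <cassert>
#include <optional>
#include <vector>

// Hypothetical cached_transform_iterator: the result is computed once per
// position, in the constructor and after each increment; the iterator knows
// its end so it never dereferences past-the-end.
template<class It, class F>
class cached_transform_it {
    It it_, end_;
    F f_;
    std::optional<int> cache_; // engaged iff it_ != end_
    void fill() { if (it_ != end_) cache_ = f_(*it_); else cache_.reset(); }
public:
    cached_transform_it(It it, It end, F f)
        : it_(it), end_(end), f_(f) { fill(); }
    const int& operator*() const { return *cache_; } // no recomputation
    cached_transform_it& operator++() { ++it_; fill(); return *this; }
    bool at_end() const { return it_ == end_; }
};
```

Repeated dereferences of the same position now cost only a cached read; the calculation runs exactly once per position.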


transform_iterators running on bidirectional iterators could still claim to be random access, so that when advance is called they skip N items of the underlying iterator and perform the computation only on the destination item. This may be problematic because algorithmic choices may be made based on the traversal category, and performance may suffer as a result. I am not sure how much of a problem that would be in practice.
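The advance idea can be sketched as follows (names are illustrative, not an existing interface): even over a bidirectional underlying iterator such as std::list's, operator+= steps the underlying iterator N times but runs the transformation only once, at the destination:

```cpp
#include <cassert>
#include <cstddef>
#include <iterator>
#include <list>

// Sketch: advancing over a bidirectional range with one computation at the end.
template<class It, class F>
struct skipping_transform_it {
    It it_;
    F f_;
    int cache_;
    skipping_transform_it(It it, F f) : it_(it), f_(f), cache_(f_(*it_)) {}
    const int& operator*() const { return cache_; }
    skipping_transform_it& operator+=(std::ptrdiff_t n) {
        std::advance(it_, n); // O(n) steps, but zero intermediate computations
        cache_ = f_(*it_);    // compute only on the destination item
        return *this;
    }
};
```

The traversal is still linear-time, which is why advertising random access here could mislead category-based algorithm dispatch, as noted above.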


It would be nice to do the same for forward_iterators, defining a forward_random_access_traversal category.



Dr. Arno Schoedl · aschoedl_at_[hidden] 
Technical Director 
think-cell Software GmbH · Invalidenstr. 34 · 10115 Berlin, Germany · phone +49-30-666473-10 · toll-free (US) +1-800-891-8091
Directors: Dr. Markus Hannebauer, Dr. Arno Schoedl · Amtsgericht Charlottenburg, HRB 85229
