Subject: Re: [boost] [Range] Range adaptor approach for temporary range lifetime issue
From: Neil Groves (neil_at_[hidden])
Date: 2012-07-04 06:39:52
On 29/06/12 12:37, Thorsten Ottosen wrote:
> On 28-06-2012 13:57, Dave Abrahams wrote:
>> on Sun Jun 24 2012, "Jeffrey Lee Hellrung, Jr."
>> <jeffrey.hellrung-AT-gmail.com> wrote:
>>> And, ultimately, if there's ultimately just one call to begin/end on
>>> the final adapted range (the common case?), both the present
>>> implementation of
>>> the Boost.Range adaptors and a lazy implementation would go through the
>>> same sequence of iterator constructions, right?
>> The lazy case is actually able to perform some "optimizations" like
>> eliminating double-reverses. It's not at all clear that these tricks
>> would improve performance in reality, though.
> Well, I think we would be going too far if we tried to do that in the
> library. There is a difference between optimizing common use-cases and,
> ahem, strange uses.
> That said, with some effort, it might be possible to join certain
> adaptors into one compound adaptor. For example
> range | transformed( &Trans ) | filtered( &When )
> may become one iterator type, hence simplifying the task for the
I agree that there is almost certainly room for improvement in this area
without breaking backward compatibility. I'll try to find some time to
look at this.
> Also, I'm inclined to think "lazy" is best for other reasons: it
> allows the cpu to work on one iterator object at a time, instead of
> interleaving construction of the two iterators all the time.
For sufficiently small iterators the interleaving mechanism produces
faster code on some architectures due to benefits in instruction
pipelining. The result of this change is not obvious to me. The only
sensible approach is to benchmark the alternatives.
> I guess it would be a problem if the iterators are not cached, but
> recomputed; probably not a problem in practice.
If we push the laziness to the extreme, whereby the begin()/end() calls
themselves create the iterators, we also change the exception semantics
by delaying throws. I therefore tend to think that we would be better
off achieving lazy range adaptation by having the adaptors create a
class convertible to a range, where the implicit conversion captures and
hence triggers the adaptor evaluation. I proposed this on the list a few
weeks ago. I see no reason to break backward compatibility by loosening
the performance guarantees of begin()/end().
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk