From: David Abrahams (dave_at_[hidden])
Date: 2008-06-26 09:34:17


Dean Michael Berris wrote:
> On Tue, Jun 24, 2008 at 2:22 PM, David Abrahams <dave_at_[hidden]> wrote:
>> Dean Michael Berris wrote:
>>> On Tue, Jun 24, 2008 at 4:02 AM, David Abrahams <dave_at_[hidden]> wrote:
>>>>> std::copy(fib.begin(), fib.end(),
>>>>>           ostream_iterator<int>(std::cout, " ")); // prints the infinite sequence
>>>> If you're using Binet's formula, that's a pretty inefficient way to
>>>> compute all those Fibs, isn't it? Or does your generator cache nearby Fibs?
>>>>
>>> Actually, the function object can be as stateful or stateless as you
>>> want. There's an iterative way of computing the nth Fibonacci number,
>> Well of course; that's the easy way to do it... but it makes random
>> access with fib[n] very inefficient.
>>
>
> The inefficiency is actually inherent in the underlying function
> implementation; it isn't a direct effect of using the generator to
> wrap the function.

I know; I was just trying to find out if solving that problem was part
of your domain.
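
For concreteness, the iterative computation being referred to presumably
looks something like the following sketch (illustrative only, not code from
the proposal). Each call walks up from the bottom, so random access via
fib(n) costs O(n) per call, while sequential generation could be O(1) per
step if the previous two values were kept around:

    #include <cassert>

    // Iterative nth-Fibonacci: O(n) work per call, no caching.
    unsigned long nth_fib(unsigned n)
    {
        unsigned long prev = 0, curr = 1;
        for (unsigned i = 0; i < n; ++i)
        {
            unsigned long next = prev + curr;
            prev = curr;
            curr = next;
        }
        return prev;  // F(0) == 0, F(1) == 1, F(2) == 1, ...
    }

    int main()
    {
        assert(nth_fib(10) == 55);
        return 0;
    }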

>>> but that's really beside the point here. ;-)
>> Well, _my_ point was to find out whether your proposal was going to
>> figure out how to handle both random access and sequential iteration
>> efficiently. Sounds like it isn't.
>>
>
> Accessing the generator wouldn't be any more or less efficient
> through operator[] than through sequential iteration. Unless you're
> thinking that the generator should cache previously computed results
> in an appropriately random-accessible container (like a std::vector)
> to avoid recomputing data that has already been accessed; would that
> be considered a value-add for a general-purpose utility?

No, I was just thinking that operator++ would see if there was a cache
of the previous two numbers and add them together.
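
A minimal sketch of that caching scheme, using boost::iterator_facade
(hypothetical names, not part of any existing library): the iterator
carries the previous two values, so incrementing is a single addition:

    #include <boost/iterator/iterator_facade.hpp>
    #include <iostream>

    // Hypothetical forward iterator over the Fibonacci sequence.
    // operator++ just adds the two cached values; nothing is ever
    // recomputed from scratch.  Random access is not addressed here.
    class fib_iterator
      : public boost::iterator_facade<
            fib_iterator
          , unsigned long                  // value type
          , boost::forward_traversal_tag
          , unsigned long                  // dereference by value
        >
    {
     public:
        explicit fib_iterator(unsigned n = 0)
          : n_(0), prev_(1), curr_(0)      // seeded so the first ++ yields F(1) == 1
        {
            while (n_ < n) increment();    // position the iterator; O(n) once
        }

     private:
        friend class boost::iterator_core_access;

        void increment()
        {
            unsigned long next = prev_ + curr_;
            prev_ = curr_;
            curr_ = next;
            ++n_;
        }

        bool equal(fib_iterator const& other) const { return n_ == other.n_; }

        unsigned long dereference() const { return curr_; }

        unsigned n_;
        unsigned long prev_, curr_;
    };

    int main()
    {
        for (fib_iterator i(0), end(10); i != end; ++i)
            std::cout << *i << ' ';        // 0 1 1 2 3 5 8 13 21 34
        std::cout << '\n';
        return 0;
    }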

>>> The make_*_iterator family works fine, but the problem becomes one of
>>> semantics/readability. If you just wanted iterators, the above would
>>> be sufficient. For 'prettier' syntax that models the sequence
>>> generators other programming languages have (think Haskell's list
>>> comprehensions/construction), making something look like a list may
>>> be important.
>>>
>>> The idea really is to abstract the function that generates the list
>>> from the (lazily generated?) list itself. Instead of writing:
>>>
>>> fib(n);
>>>
>>> something that looks like
>>>
>>> fib[n];
>>>
>>> may look more semantically correct.
>>
>> But... you can already use operator[] on a random access iterator. So
>> if you're not going to explicitly solve the problem of efficient
>> sequential and random-access traversals,
>>
>> boost::transform_iterator<
>>     boost::function<T(T)>, boost::counting_iterator<T>
>> >
>>
>> already does what you want, I think.
>>
>
> Almost, but not quite. Constructing an instance of the transform
> iterator would require instantiating the counting_iterator as well.

So?
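
Constructing the counting_iterator is a one-liner; the whole composition
can be spelled out roughly as follows (a sketch only, with nth_fib
standing in for whatever function actually computes the value):

    #include <boost/iterator/counting_iterator.hpp>
    #include <boost/iterator/transform_iterator.hpp>
    #include <boost/function.hpp>
    #include <iostream>

    // Stand-in for the user's function; iterative, O(n) per call.
    unsigned long nth_fib(unsigned n)
    {
        unsigned long prev = 0, curr = 1;
        for (unsigned i = 0; i < n; ++i)
        {
            unsigned long next = prev + curr;
            prev = curr;
            curr = next;
        }
        return prev;
    }

    int main()
    {
        // counting_iterator supplies the indices 0, 1, 2, ... and
        // transform_iterator applies the function to each.  The result
        // is random-access, so operator[] already works, at the cost
        // of recomputing from scratch on every access.
        typedef boost::transform_iterator<
            boost::function<unsigned long(unsigned)>
          , boost::counting_iterator<unsigned>
        > fib_iter;

        fib_iter fib(boost::counting_iterator<unsigned>(0), &nth_fib);

        std::cout << fib[10] << '\n';          // 55

        fib_iter end = fib + 10;
        for (fib_iter i = fib; i != end; ++i)
            std::cout << *i << ' ';            // 0 1 1 2 3 5 8 13 21 34
        std::cout << '\n';
        return 0;
    }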

>> So far this doesn't seem to have much of substance that isn't already in
>> the iterators library, but I could be missing something.
>>
>
> I think it's more semantics than anything -- something that looks and
> acts like a container can't just be an iterator.

You can always build an iterator_range.
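
For instance, roughly (a sketch; boost::iterator_range is from
Boost.Range, and nth_fib is again a stand-in for the generating function):

    #include <boost/range/iterator_range.hpp>
    #include <boost/iterator/counting_iterator.hpp>
    #include <boost/iterator/transform_iterator.hpp>
    #include <boost/function.hpp>
    #include <iostream>

    // Stand-in generator function, as in the previous sketch.
    unsigned long nth_fib(unsigned n)
    {
        unsigned long prev = 0, curr = 1;
        for (unsigned i = 0; i < n; ++i)
        {
            unsigned long next = prev + curr;
            prev = curr;
            curr = next;
        }
        return prev;
    }

    int main()
    {
        typedef boost::transform_iterator<
            boost::function<unsigned long(unsigned)>
          , boost::counting_iterator<unsigned>
        > fib_iter;

        // Wrapping a pair of iterators in an iterator_range gives
        // something that looks and acts like a container: begin()/end(),
        // size(), operator[], usable with range-aware algorithms.
        boost::iterator_range<fib_iter> fibs(
            fib_iter(boost::counting_iterator<unsigned>(0), &nth_fib)
          , fib_iter(boost::counting_iterator<unsigned>(20), &nth_fib));

        std::cout << fibs.size() << '\n';      // 20
        std::cout << fibs[10] << '\n';         // 55

        for (fib_iter i = fibs.begin(); i != fibs.end(); ++i)
            std::cout << *i << ' ';
        std::cout << '\n';
        return 0;
    }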

> I guess the notion of an iterator is somehow directly tied to
> collections and containers, but seeing just an iterator doesn't
> convey the same semantics -- at least as I see it.

OK

-- 
Dave Abrahams
BoostPro Computing
http://www.boostpro.com
