From: Joel de Guzman (joel_at_[hidden])
Date: 2005-03-25 10:49:26


David Abrahams wrote:
> Joel de Guzman <joel_at_[hidden]> writes:
>
>
>>David Abrahams wrote:
>>
>>
>>>Whether results should be lazy (as in Fusion) or greedy, as in
>>>standard algorithms, or whether both variants should be provided
>>>somehow, are interesting questions. The answers change the usage
>>>pattern, i.e.:
>>> copy(transform(s1,op), output)
>>>or output = transform(s1,op) // if output supports it
>>>vs.
>>> transform(s1,op,output)
>>
>>WRT Fusion, I am re-considering its laziness. MPL algorithms are
>>sequence-preserving. Fusion is not, due to laziness, i.e.:
>>
>> transform(tuple, op)
>>
>>does not return a sequence like tuple. Instead, it returns a
>>view (transform_view).
>>
>>However, I realize now that in most use cases I had to eagerly
>>generate the result into a tuple almost immediately after each
>>transform step.
>
>
> Really; both the compile-time part (type) and the runtime part
> (value)?

Yes. However, see below...
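
Concretely, the pattern I keep writing looks something like this
(a minimal sketch; treat the exact header and function spellings
as illustrative, with as_vector standing for the "realize this
view into a concrete sequence" step):

     #include <boost/fusion/include/vector.hpp>
     #include <boost/fusion/include/transform.hpp>
     #include <boost/fusion/include/as_vector.hpp>
     #include <boost/type_traits/remove_cv.hpp>
     #include <boost/type_traits/remove_reference.hpp>

     namespace fusion = boost::fusion;

     struct add_one
     {
         template <typename Sig> struct result;

         template <typename F, typename T>
         struct result<F(T)>
           : boost::remove_cv<
                 typename boost::remove_reference<T>::type>
         {};

         template <typename T>
         T operator()(T const& x) const { return x + 1; }
     };

     int main()
     {
         fusion::vector<int, double> s(1, 2.5);

         // transform(s, add_one()) is only a transform_view over s;
         // nothing is computed or stored yet. The eager step I keep
         // having to write is realizing that view back into a
         // concrete sequence:
         fusion::vector<int, double> r(
             fusion::as_vector(fusion::transform(s, add_one())));
         (void)r;
     }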

>>After leaving this for a while in my thought queue, and after
>>implementing phoenix2's lazy STL algorithms, I realized that
>>the lazy Fusion algorithms could just as well be implemented as
>>phoenix2's lazy fusion algorithms.
>
>
> Interesting. I'm not sure I understand what you mean yet, though.

Consider an immediate fusion algorithm:

     transform(s, f)

which immediately returns another sequence like s. Now consider a corresponding
phoenix lazy function:

     phoenix::transform(_1, f)

This version returns a curried (ahem... partially applied) function
that is evaluated lazily, later, when you pass in a sequence s.
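
To spell out what I mean by curried here, a hand-rolled sketch
(an STL stand-in, not the actual phoenix2 machinery; all names
are illustrative): bind the function now, supply the sequence
later, and forward to the immediate algorithm at that point.

     #include <algorithm>
     #include <vector>

     template <typename F>
     struct lazy_transform_t
     {
         explicit lazy_transform_t(F f) : f(f) {}

         // evaluation happens here, when the sequence finally arrives
         template <typename Sequence>
         Sequence operator()(Sequence s) const
         {
             std::transform(s.begin(), s.end(), s.begin(), f);
             return s;
         }

         F f;
     };

     template <typename F>
     lazy_transform_t<F> lazy_transform(F f)
     {
         return lazy_transform_t<F>(f);
     }

     int add_one(int x) { return x + 1; }

     int main()
     {
         // nothing is evaluated here; "later" just remembers add_one
         lazy_transform_t<int (*)(int)> later = lazy_transform(&add_one);

         std::vector<int> s(3, 1);
         std::vector<int> r = later(s);   // evaluated only now
         (void)r;
     }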

Yet...

Well, it turns out that I am wrong. Such a scheme will suffer
because, while the example above is as lazy as it can get,
the composition of algorithms such as:

     transform(remove_if(_1, f1), f2)

will turn out to be suboptimal, because the immediate functions
being forwarded to (the immediate versions of transform and
remove_if) will both return intermediate structures, unlike the
current fusion, where no intermediates are ever created, just
views.
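
For contrast, the same composition in the current fusion is only
a view over a view; the one and only sequence ever built is the
final one, when (and if) the result is realized (again a sketch,
with illustrative spellings; note that fusion's remove_if filters
on the element's type rather than taking a runtime predicate):

     #include <boost/fusion/include/vector.hpp>
     #include <boost/fusion/include/remove_if.hpp>
     #include <boost/fusion/include/transform.hpp>
     #include <boost/fusion/include/as_vector.hpp>
     #include <boost/mpl/placeholders.hpp>
     #include <boost/type_traits/is_same.hpp>
     #include <boost/type_traits/remove_cv.hpp>
     #include <boost/type_traits/remove_reference.hpp>

     namespace fusion = boost::fusion;

     struct add_one
     {
         template <typename Sig> struct result;

         template <typename F, typename T>
         struct result<F(T)>
           : boost::remove_cv<
                 typename boost::remove_reference<T>::type>
         {};

         template <typename T>
         T operator()(T const& x) const { return x + 1; }
     };

     int main()
     {
         fusion::vector<int, double, char> s(1, 2.5, 'a');

         // remove_if yields a filter_view over s; transform yields a
         // transform_view over that view. No intermediate sequence is
         // built -- only the final as_vector constructs one.
         fusion::vector<int, double> r(
             fusion::as_vector(
                 fusion::transform(
                     fusion::remove_if<
                         boost::is_same<boost::mpl::_, char> >(s)
                   , add_one())));
         (void)r;
     }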

> Well, the use of segmented iterators and hierarchical algorithms
> (http://lafstern.org/matt/segmented.pdf) has some interesting
> implications. You can't just use iterator adaptors such as
> transform_iterator to implement laziness unless either:
>
> 1. You want to pay a big efficiency hit
>
> or
>
> 2. The adaptors expose the same "extended properties"
> (e.g. segmented_iterator_traits) as the iterators they're
> adapting. That requires knowledge of all extended properties.
>
> It seems to me that in order to do the traversal optimally, more than
> laziness is required: you'd have to build expression templates and
> then somehow analyze the traversal properties of the leaf sequences in
> order to evaluate the expressions.

After reading the doc "segmented.pdf", I see what you mean.
I am not sure, but perhaps views that take advantage of the
segmentation information would work as well as ETs, because
such views can exploit the traversal properties of the sequences
they act on.
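
To make the hierarchical-traversal idea concrete, a minimal
sketch (the overload-based dispatch and all names here are my
own stand-ins, not the paper's segmented_iterator_traits
interface, and a vector of vectors stands in for a segmented
structure):

     #include <vector>

     // flat case: an ordinary element-by-element loop
     template <typename Sequence, typename F>
     void hierarchical_for_each(Sequence const& s, F f)
     {
         typedef typename Sequence::const_iterator iter;
         for (iter i = s.begin(); i != s.end(); ++i)
             f(*i);
     }

     // segmented case: recurse segment by segment so the inner loop
     // stays tight, instead of walking one "flattening" iterator that
     // must re-check the segment boundary on every increment
     template <typename T, typename F>
     void hierarchical_for_each(std::vector<std::vector<T> > const& s, F f)
     {
         typedef typename
             std::vector<std::vector<T> >::const_iterator seg_iter;
         for (seg_iter seg = s.begin(); seg != s.end(); ++seg)
             hierarchical_for_each(*seg, f);
     }

     // usage sketch:
     //
     //     std::vector<std::vector<int> > vv(2, std::vector<int>(3, 1));
     //     hierarchical_for_each(vv, some_unary_function);

A view that knows its underlying sequence is segmented could
dispatch to the segment-wise loop in the same way.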

Regards,

-- 
Joel de Guzman
http://www.boost-consulting.com
http://spirit.sf.net
