
From: Terje Slettebø (tslettebo_at_[hidden])
Date: 2002-07-30 23:15:29

From: "Paul Mensonides" <pmenso57_at_[hidden]>

> > Imagine this: You make a tree structure (like a binary tree), with
> > iterators for pre-, post- and in-order traversal. Then you may use that
> > directly with any of the algorithms and metafunctions in MPL. As he said,
> > you may find the size of the tree, the largest element, or whatever. All
> > without rewriting any of the algorithms. Powerful, indeed.
> Powerful, yes, but this doesn't answer Andrei's standing request for a
> real-world example.


> > If you used this, as a tree, in a search-algorithm, for example, you'd get
> > O(N), rather than O(log N), for a balanced tree. Loki's IndexOf is O(N), for
> > this reason.

By the way, there's another reason IndexOf is O(N): it has to compare each
type, as there's no ordering. So changing this to a tree wouldn't, in
itself, improve it. It would have to be a sorted tree, with the types
ordered by some criterion.

> If you process each element, the entire thing is still going to have linear
> qualities.


> The only times that this makes any sense is when you are directly
> processing many elements at one time (i.e. one per 'iteration')

I'm not sure what you mean here, could you explain?

> or when you are
> using a sequence of types to hold unrelated arbitrary types.

And here.

Like I said, if you store it as a binary search tree, i.e. you have
key/value pairs, then you may do an O(log N) rather than O(N) search.

> Otherwise, element access is the same. It takes O(1) to go from element A
> to element B in both a cons-style typelist and a vector (i.e. tuple-style
> list).

It depends on what A and B are. Surely you don't mean that going from the
beginning to the end of a cons-list is O(1), unless you provide something
like random-access iterators for it?

> > Dave Abrahams' point was that Loki's algorithms would work for the first
> > kind of list, but not the second. To use any of the Loki algorithms for the
> > second list, here, then, as he said, you'd either have to flatten it (make
> > it a linear list, like the first one), or have tailor-made algorithms for
> > each kind of container. That would be like the pre-STL era, where algorithms
> > only worked on one type of container.
> >
> > However, if you provide the right iterators for the container or structure,
> > you may use it in any algorithm, unchanged, and without changing the
> > algorithms.
> You can't gloss over this part so easily. Iterators are just a fancy way to
> "flatten" some kind of structure into a sequence.


> This means that you are
> already providing the algorithm to flatten it, but you are doing it _all over
> the place_ instead of in a single place.

All over the place? You mean because algorithms use iterators? For an
algorithm, using iterators is more or less the same as using a cons-list.
For containers, then, true, they would have to provide some way to flatten
the sequence. You could have this as an external algorithm if you like, too,
a sequence converter, as you suggest, such as e.g.:

// Convert tree to linear sequence, for in-order traversal


instead of:

// Return iterator for in-order traversal


I can't see how this makes a big difference to the argument about decoupling
sequences from algorithms. In the first case, you perform a conversion; in
the latter case, you provide access, making it possible to traverse the
sequence in the required order.

Specifically, the latter way doesn't need to mean more code than the former
way. So what do you mean by something having to be done "all over the
place"?
A difference is that the latter case is "lazy", performing the conversion
only when needed, while the former case transforms the whole sequence up
front. This could make a performance difference, especially if the sequence
itself is lazy, generating elements only when required.

Besides, couldn't you use the same argument for STL?

> > This means, like with STL, that with M containers and N algorithms, you
> > get M x N combinations, with M+N code. In addition, you may tailor it with
> > metafunction classes, quoted metafunctions, whatever they are called,
> > like function objects in STL. This means you may, for example, for a
> > compile-time accumulate/fold/reduce function (like STL's accumulate),
> > get very different behaviour, depending on the metafunction passed to it.
> It also means that, like the STL, you get M containers, N algorithms, _and_ M
> iterators. This is versus M containers, N algorithms, and M sequence converters
> implied by writing algorithms to operate on some *standard* sequence type.

I've also been considering such sequence converters. However, as you say
here, in both cases you would get M + N + O code (O being the number of
iterators, or sequence converters, as a sequence may have more than one
linear representation). Like I said above, operating on iterators and on
cons-lists is more or less the same, i.e.:

iter::type // Current type
iter::next // Next iterator

list::head // Current type
list::tail // Rest of list

So what would be the advantage of using sequence converters, instead of
iterators? Perhaps if the flattening is expensive, and you need to pass over
the sequence multiple times? However, do you have an example of expensive
flattening?
> > Then, there's function composition, bind, lambda, etc.
> I think that the MPL is a very interesting piece of work, and moreover,
> there are other Boost libraries that are waiting for (or already using) some
> of the metaprogramming facilities present in the MPL. *However*, I don't buy
> (and never have) the comparison to the STL. Andrei is simply asking for a
> real-world example where this kind of abstraction is actually *useful*
> (rather than *neat*) in template metaprogramming (without an assumptive
> reference to performance). He is asking that the MPL prove the usefulness of
> its abstractions purely in the context of template metaprogramming.

I guess we still haven't had many examples of that. I'm merely saying that
the design of MPL suggests that it's powerful. I agree that comparisons
between implementing things in MPL, and in Loki, could be useful. That
would let the "rubber meet the road". However, MPL is quite new, and Aleksey
has been quite busy making the library, in the first place, and lately the
documentation.


Boost list run by bdawes at, gregod at, cpdaniel at, john at