From: Paul Mensonides (pmenso57_at_[hidden])
Date: 2002-07-31 00:36:56
> > The only times that this makes any sense is when you are directly
> > processing many elements at one time (i.e. one per 'iteration')
>
> I'm not sure what you mean here, could you explain?
The O(n) complexity only applies when an algorithm operates on more than one
element of a sequence at the same time (i.e. in the same recursive
instantiation) rather than one at a time. For example:
template<class T> struct process {
    typedef ... type;
};

template<class> struct for_each;

template<class head, class tail>
struct for_each< typelist<head, tail> > {
    typedef typelist<
        typename process<head>::type,
        typename for_each<tail>::type
    > type;
};

template<class head>
struct for_each< typelist<head, null_t> > {
    typedef typelist<
        typename process<head>::type, null_t
    > type;
};
In the case above, the access time (from the beginning of a cons-style list) is
irrelevant. Because the algorithm operates on each element one at a time, the
access times are O(1).
This contrasts with something like this:
template<class A, class B, class C>
struct vector {
    typedef A type_a;
    typedef B type_b;
    typedef C type_c;
};

template<class A, class B, class C>
struct for_each< vector<A, B, C> > {
    typedef vector<
        typename process<A>::type,
        typename process<B>::type,
        typename process<C>::type
    > type;
};
An equivalent using a cons-style list would of course have an access time of
O(n):
template<class> struct for_each;

template<class A, class B, class C>
struct for_each< typelist<A, typelist<B, typelist<C, null_t> > > > {
    typedef typelist<
        typename process<A>::type,
        typelist<
            typename process<B>::type,
            typelist<
                typename process<C>::type,
                null_t
            >
        >
    > type;
};
The above also raises some issues regarding the supposed 'reuse' of algorithms.
You get _specializations_ of algorithms in many cases anyway. If so, what is
the point of using iterators--which, by the way, imply iterating over a
sequence one element at a time, thereby sidestepping the O(n) access time from
the beginning of a cons-style list?
> > or when you are
> > using a sequence of types to hold unrelated arbitrary types.
>
> And here.
I mean using a container of types to hold types that are not used in a related
fashion. For example, suppose I changed some template class to wrap all of the
template parameters into one type--rather than taking disparate template
parameters:
template<class T, class allocator>
-->
template<class args>
// where type_at<0, args>::type == T
// and type_at<1, args>::type == allocator
This, IMO, is bad design anyway.
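For reference, the type_at helper above isn't defined anywhere in this post;
over the cons-style typelist it might look something like this (a sketch,
written in the same pattern-matching style as the for_each examples):

template<int n, class list> struct type_at;

template<int n, class head, class tail>
struct type_at< n, typelist<head, tail> > {
    // peel off one node per step; reaching index n from the front
    // therefore costs n instantiations
    typedef typename type_at<n - 1, tail>::type type;
};

template<class head, class tail>
struct type_at< 0, typelist<head, tail> > {
    typedef head type;
};

Every access starts over from the head of the list, which is exactly the O(n)
access under discussion.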
> Like I said, if you store it as a binary search tree, i.e. you have
> key/value pairs, then you may do an O(log N) rather than O(N) search.
And you can do even better if you specialize the searching algorithm so that it
doesn't use iterators but instead operates directly on the tree. I see the
point that you are making, but I don't think that it applies in the same sense
as it does in the STL.
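To illustrate (a sketch of my own with hypothetical names--none of this is Loki
or MPL code): a search that operates directly on a compile-time binary search
tree needs only O(log n) instantiations for a balanced tree, with no iterators
involved:

struct nil; // hypothetical marker for an empty subtree

template<int k, class value, class left, class right>
struct node {
    enum { key = k };
    typedef value value_type;
    typedef left  left_type;
    typedef right right_type;
};

template<class tree, int key> struct tree_find;

// dispatch on the three-way comparison so that only the branch
// actually taken is ever instantiated
template<int cmp> struct tree_find_branch;

template<> struct tree_find_branch<0> {     // found
    template<class n, int key> struct apply {
        typedef typename n::value_type type;
    };
};
template<> struct tree_find_branch<-1> {    // descend left
    template<class n, int key> struct apply {
        typedef typename tree_find<typename n::left_type, key>::type type;
    };
};
template<> struct tree_find_branch<1> {     // descend right
    template<class n, int key> struct apply {
        typedef typename tree_find<typename n::right_type, key>::type type;
    };
};

template<class tree, int key>
struct tree_find {
    typedef typename tree_find_branch<
        (key < tree::key) ? -1 : (tree::key < key) ? 1 : 0
    >::template apply<tree, key>::type type;
};

(A key that isn't present eventually instantiates tree_find on the incomplete
'nil', which fails to compile; I've left that case unhandled here.)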
> > Otherwise, element access is the same. It takes O(1) to go from element A
> > to element B in both a cons-style typelist and a vector (i.e. tuple-style
> > list).
>
> It depends on what A and B are. Surely, you don't mean that going from the
> beginning to the end of a cons-list, unless you provide something like
> random access iterators for it, is O(1)?
If you step through a cons-style list or a tuple-style list one element at a
time, the total complexity is O(n) either way. Each individual step has an
access complexity of O(1)--in _both_ cases.
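To make that concrete (a small sketch using the typelist from the examples
above; 'tail' is a hypothetical helper, not something from an existing
library):

template<class list> struct tail;

template<class head, class t>
struct tail< typelist<head, t> > {
    typedef t type; // one pattern match: O(1)
};

Going from any element to the one after it is a single such match, whether an
algorithm does it directly or an 'iterator' does it; only restarting from the
head of the list costs O(n).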
> > This means that you are already providing the algorithm to flatten it,
> > but you are doing it _all over the place_ instead of in a single place.
>
> All over the place? You mean as algorithms use iterators? For an algorithm
> to use iterators, it's more or less the same as using a cons-list. For
> containers, then, true, they would have to provide some way to flatten the
> sequence. You could have this as an external algorithm if you like, too--a
> sequence converter, as you suggest, such as e.g.:
I mean defining generic iterators even when they don't apply to a large class
of algorithms.
> I can't see how this makes a big difference to the argument about decoupling
> sequences from algorithms. In the first case, you perform a conversion, in
> the latter case, you provide access, making it possible to access the
> sequence in the required order.
>
> Specifically, the latter way doesn't need to mean more code than the former
> way. So what do you mean with something having to be done "all over the
> place"?
I mean that it always has to be done in order to use the algorithms, even if
only a single algorithm applies to the structure.
> A difference is that the latter case is "lazy", performing the conversion
> only when needed, while the former case transforms the whole sequence,
> first. This could make a performance difference. Especially if the sequence,
> itself, is lazy, generating only elements when required.
This depends on how you design the 'default' sequence type that the algorithms
use. You could easily implement it lazily.
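For instance (a sketch of one possible approach, reusing the typelist, process,
and null_t from the examples above--this is not actual MPL code), a 'view' can
defer processing until an element is actually accessed:

template<class list> struct for_each_view;

template<class head, class tail>
struct for_each_view< typelist<head, tail> > {
    typedef typename process<head>::type head_type;
    // merely naming for_each_view<tail> does not instantiate it, so
    // process<> is applied to later elements only if something
    // actually uses them
    typedef for_each_view<tail> tail_type;
};

template<> struct for_each_view<null_t> { };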
>
> Besides, couldn't you use the same argument for STL?
Yes, but with the STL there are specific needs for specific container types,
such as access time, insertion time, memory use, etc.--various types of
optimization for various needs. These have come about over decades of study,
and the need for them has been proven in the field. Those same arguments do
not apply here. I don't think that we need multiple *sequence* types. Trees
or other structures, yes, but the number of algorithms that operate on a tree
as a sequence is rather small.
> So what would be the advantage of using sequence-converters, instead of
> iterators? Perhaps if the flattening is expensive, and you need to pass over
> the sequence multiple times? However, do you have an example of expensive
> flattening?
Sequence converters would rarely be needed. There are only so many types of
structures that would be useful. At compile time you don't need ten different
pure sequence types like you do at runtime. This leads me to believe that only
three or four different container types are _useful_ in total, and that only a
few of the algorithms that operate on a sequence would be applicable to
containers that are not sequences (such as trees).
> I guess we still haven't had many examples of that. I'm merely saying that
> the design of MPL suggests that it's powerful. I agree that comparisons
> between implementing things in MPL, and using Loki, could be useful. That
> would let the "rubber meet the road". However, MPL is quite new, and Aleksey
> has been quite busy making the library, in the first place, and lately the
> docs.
Oh, I agree; it *is* powerful. My point is that I think it is a little *too*
powerful (i.e. unnecessarily powerful). And, maybe more importantly than
anything else, it provides many samples of how to use the compile-time
language of C++. At this point, it is moot anyway; we need _something_
regardless.
By the way, I'm not saying that the design of the MPL is bad; in fact, I think
it is very clever. I also have no problem with the existence of the MPL.
However, I think that Boost needs a simpler subset that more complicated
libraries (such as the MPL) could be built on top of.
Paul Mensonides