From: Paul Mensonides (pmenso57_at_[hidden])
Date: 2002-08-13 19:06:50
----- Original Message -----
From: "Terje Slettebø" <tslettebo_at_[hidden]>
>Why is it frowned upon to reimplement your own string class? Why is it frowned
>upon to reimplement what is already done and freely available? Uniformity is
>the reason that the C++ standard library exists at all (except for the parts
>that are actually tightly integrated with the language itself!).
-> And MPL has a uniform iterator interface. _This_ kind of uniformity, a
-> uniformity of interface (rather than implementation, such as standardising
-> on _one_ sequence), ensures interoperability, removes duplication, and keeps
-> the design open-ended. Oh, and did I mention you get a free doll, as well?
-> :) And a picture of the company president. :) Furthermore, there's a 30-day
-> money-back guarantee. Uh, come to think of it, it's free already, never
-> mind. :)
-> I think you may confuse interface/concept uniformity, which may be an
-> advantage, with implementation uniformity, which may make it hard to develop
-> and extend a library.
This time I had to insert the quotes again! Who knows why?
No, I'm not confusing anything. The standard library exists so that people
don't have to rewrite these things. That is implementation && concept
uniformity. Also, this doesn't make it hard to develop the library at all.
>> If you go for a uniform way, that means people have to agree on a uniform
>> way. So far, there haven't been any compelling reasons for exclusively
>> selecting one sequence compared to another, which you have pointed out,
>> too, as they vary in their behaviour,
>I have pointed out nothing of the sort except to say that vectors outperform
>lists and that both are unrealistic at large sizes.
-> As mentioned in another posting, vectors more or less don't work on e.g.
-> Intel C++, except small ones, due to slow preprocessor, etc. Lists work.
-> This is some of what I meant.
If the preprocessor is causing that massive of a drain, then the MPL is not
using it properly. That has nothing to do with the validity of different
sequences.
>We have no compelling
>reasons (or example to that effect) to *have* more than one sequence type.
-> The Burton et al paper is another example. Doug (and the paper) showed in a
-> recent posting, that such immutable vectors may be good for some
-> applications, especially if large structures are needed. On the other hand,
-> lists or mutable vectors may be useful for other applications. How would you
-> operate on both kinds, without iterators? Creating sequence converters for
-> the large, immutable vector is largely out, because of its size. If you
-> create a lazy converter, you essentially have an iterator. Why make it hard
-> for yourself, by creating abstractions (like a fixed sequence type) that
-> require such workarounds all the time?
When does this happen all the time? That is a massive assumption that has
_literally_ no examples or theory to back it up.
Try running a 10,000 element sequence (no matter how it is implemented) through
an MPL algorithm. Then try running a 10,000 element sequence through an
algorithm specialized to work directly on that sequence. Also, a fixed vector
such as that is such a specialized thing that it doesn't even deserve to be used
as motivation for a library's generality.
-> To counter the argument in another posting of yours, about how you could
-> operate on such a big sequence in the first place, the answer is that
-> algorithms don't necessarily have to traverse the whole sequence. The size
-> operation you mentioned could be solved by storing the size in the
-> structure. This is what Burton et al did.
Nothing says that they have to, even without sequence abstractions.
>What is the purpose of Boost? Isn't it to create "existing practice"? Why
>would we want to create existing practice in an area that is largely
>experimental with absolutely no concrete data saying that it is worthwhile?
-> Boost, as I understand it, is also a testing ground for libraries.
Testing grounds relative to the C++ standard, not relative to practical use.
-> Furthermore, your argument goes against your own suggestion, as well. Why
-> should we standardise on one sequence, unless we know that to be the best
-> way? And in addition, there's the choice of which sequence, if any, to
-> standardise for.
No it doesn't. Making something generic for no reason at all is _bad_ design,
not good design. If you have that point of view, why do we even need the
template keyword anymore? Everything could just automatically be a template.
Generality for the sake of generality is over-complication of a simple concept
and a waste of time.
> So I think the argument that this is a new field, actually works against
> fixing the framework to things like specific sequences.
>Obviously. You and I have our viewpoints, and all that is happening is that
>we see things as we want to see things.
-> No. I find that an insulting characterisation. We may use different base
-> assumptions, and therefore arrive at different conclusions. This is _not_
-> the same as seeing things that we want to. I challenge you to substantiate
-> that claim, or take it back. I even published the timing result, showing
-> that vector was faster than list, even though I had expected it to come out
-> the other way around, and even though that meant that I couldn't use it as a
-> speed argument for list, purely in the interest of scientific honesty!
No I won't take it back. Everybody does this all the time *because* of basic
assumptions. The timing result is not relevant if all the time is spent in
the preprocessor. What is the result otherwise?
>> In addition, you have the mentioned advantages that David B. Held and
>> others have pointed out, that with the iterator abstraction, anything providing
>> iterators may be used with the algorithms, and that includes things like
>> generators, too. range_c works like that. Are you disregarding this, as well?
>Why would it be difficult to make a convertor for the specific and uncommon
>case of something like range_c? Input/output doesn't apply, random number
>generators don't apply, etc..
-> Who are you to say what would be the common or uncommon thing to use? If the
-> creators of STL thought like this, they may have standardised on one
-> sequence, as well, e.g. vector, because it would be the "common" thing to use.
I am the voice of reason. The STL creators *already* had a valid reason to
abstract. Also, there are *actual* reasons--as I've said (over and over and
over)--that different sequences are necessary at runtime. Those reasons *do not
apply* to metaprogramming. As far as commonality goes, it is fairly obvious
that vector and list are used more than range generators.
-> If you wanted something with the characteristics of list, you could
-> make a list, but then you'd need a vector converter, to use it in any of the
-> algorithms, because they expect vector. Sorry, but your argument goes both
-> ways, also towards STL. Would this be the kind of STL you like? It certainly
-> wouldn't be for me.
How many times do I have to say the obvious? What characteristics of list and
vector are you talking about? And I'm not talking about blind references to the
STL and runtime programming. I'm talking about *actual* characteristics where
one is better than the other in one area vs. the other in another area. Give me
an example that shows that, and I'll listen. Otherwise, this is a bunch of
hand-waving.
-> RNG may well apply. Sure, to get a different result, you need to supply a
-> different seed. However, getting a different result may not be that
-> necessary. You may for example use it for stochastic analysis, or to test
-> various sort implementations. In those cases, the starting seed doesn't
-> matter that much.
Why wouldn't you just use a random number instead? If you have to specify a
seed every time, what is the point?
-> Personally, I object to a library design that makes unnecessary limiting
-> decisions, because at the time, no other use was found for it.
In what way is it limiting? All of your "examples" in this regard are contrived
and motivated by no real world practicality.
-> If the creators of C++ thought like this, when designing the template
-> system, that it wouldn't be necessary to make it more advanced than
-> supporting containers of T, because they couldn't think of anything else,
-> then there would be no metaprogramming in C++.
The creators of C++, i.e. the standards body as a whole, *do* think this way.
That is the reason that we don't have a ton more language features and library
facilities than we do.
>> In my opinion, this is a strong argument for the iterator abstraction. It
>> enables extension and experimentation, in this new, exciting field. MPL
>> would be much less extensible, if algorithms and functions only worked on
>> a fixed set of sequences, that couldn't be extended.
>No it wouldn't be. It would simply be a case of defining convertors in the
>case (yet to be seen) where it is absolutely necessary. That doesn't make
>it less extensible, it just shifts the extensibility and simplifies the
>implementation.
-> As mentioned in other postings, that I'm sending now, in what way would it
-> be simpler to implement an algorithm to work on e.g. vector, than iterators?
-> Please show something concrete.
I don't need to show something concrete. You do. Personally, I think that
type-vectors are an implementation atrocity, but I think that it is more
important to emphasize one type of sequence that the library is optimized for
than to use what I personally like best. Otherwise, everyone is paying the
price for generality that is almost never necessary.
>> By the way, I later found that the example program I posted (sequence
>> timing), used the typeof() of g++, a non-standard extension. Doing some
>> simple testing on MSVC 6.0 (which doesn't have typeof()), vector and list
>> came out equally fast.
>What kind of test were you doing? In order to really see performance
>benefits with a vector, you'd have to do random access of many elements at
>once.
-> It was the same test as posted earlier. This is a little dodgy, though, as
-> some compilers may optimise away the nested types. It may be hard to make
-> accurate tests, factoring out the part that doesn't contribute to what is
-> being tested.
What I was specifically wondering about was this: was the algorithm operating
on one element at a time? If so, I doubt that you would get a massive
performance gain with vectors. Only random access in quantity would make it
worthwhile.
-> A reasonably good test might be sorting a random sequence, and trying it
-> with different sequences. From these preliminary tests, it appears that
-> vector generally is fastest. However, it has the problem that it may be very
-> slow on other compilers, such as the EDG based ones, especially if the
-> preprocessor is used.
As I said before, this is 1) irrelevant and 2) shows that the MPL is not using
the preprocessor in an efficient way. That is not a big deal and can be fixed.
-> In any case, as this varies across sequences, sizes, and implementations,
-> I've more focused on the flexibility in being able to select the sequence
-> that works best for you, and also making it possible to make new ones,
-> without having to convert them into vector, which in some cases (such as
-> large sequences, like the Burton et al ones) may be simply infeasible.
-> Rather, you may want to operate on the sequence in-place.
I know what you are saying. I disagree with it, that's all. :) Philosophically,
we simply differ.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk