
From: John E. Potter (jpotter_at_[hidden])
Date: 2000-12-03 00:46:41


On Thu, 30 Nov 2000, Jeremy Siek wrote:

> I just finished testing *all* the STL algorithms for
> their concept requirements:

WOW!

> Anyone interested in STL algorithm requirements, and several of the open
> issues for the C++ standard, would probably be interested in looking at
> this file.

Yes, indeed.

> What the file does is use the concept archetypes to try to determine what
> the minimal requirements are for various STL implementations, so far SGI
> STL (as distributed with g++) and the KAI STL (I think that's modified
> Modena?).

I think the file shows that these implementations meet the requirements
given. It may be very hard to get these down to the minimum that the
algorithms actually require.

> There were several things that surprised me, such as how
> using the conditional operator ?: in algorithm implementations affects
> the requirements.

The true and false types must be convertible to some common type to
use as the result? I would have been surprised too if I had not just
looked at that part of the standard for other reasons. Or did you
have something else in mind?
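
For anyone who has not hit this, here is a minimal sketch, with
made-up types A and B, of how ?: smuggles a convertibility
requirement into an implementation:

   struct B {};
   struct A { operator B() const { return B(); } };

   B pick(bool flag, const A& a, const B& b)
   {
     // legal only because A converts to B; with two unrelated class
     // types the conditional expression is ill-formed, so an if/else
     // version of the same code would demand less of its types
     return flag ? a : b;
   }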

Some comments:

The random_shuffle and random_selection algorithms which take a
random number generator did not compile with my g++ 2.95.2: there
were problems with the reference producer getting a
default_constructible type when a ptrdiff_t was expected.

I've hacked away at the binary_search part since that is of current
interest to me.

> {
> #if defined(__KCC)
>   // The KAI version of this uses a one-argument less-than function
>   // object.

I would like to see this code to learn how it changes the requirements.

>   typedef less_than_comparable_archetype<> FT;
>   typedef convertible_to_archetype<FT> T;
> #else
>   typedef less_than_op_first_archetype<> FT;
>   typedef less_than_op_second_archetype<> T;
> #endif
>   forward_iterator_archetype<FT> fi;
>   T value(dummy_cons);
>   fi = std::lower_bound(fi, fi, value);
> }

That is what I would expect.
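
For reference, the usual SGI-style lower_bound only ever evaluates
*it < value, which is exactly what those two archetypes express. A
sketch of that shape:

   #include <iterator>

   template <class Fwd, class T>
   Fwd lower_bound_sketch(Fwd first, Fwd last, const T& value)
   {
     typename std::iterator_traits<Fwd>::difference_type
       len = std::distance(first, last);
     while (len > 0) {
       typename std::iterator_traits<Fwd>::difference_type half = len / 2;
       Fwd mid = first;
       std::advance(mid, half);
       if (*mid < value) {     // the only comparison: *it < value
         first = ++mid;
         len -= half + 1;
       } else {
         len = half;
       }
     }
     return first;
   }

   // upper_bound is the mirror image: test value < *mid and swap the
   // two branches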

> {
>   typedef null_archetype<int> Arg1;
>   typedef null_archetype<char> Arg2;
>   typedef convertible_to_archetype<Arg1> FT;
>   typedef convertible_to_archetype<Arg2> T;
>   forward_iterator_archetype<FT> fi;
>   T value(dummy_cons);
>   binary_predicate_archetype<Arg1, Arg2> comp(dummy_cons);
>   fi = std::lower_bound(fi, fi, value, comp);
> }

I don't understand why the convertible_to archetype can be used here
but not above.

Does the binary_predicate_archetype exercise the case of calling the
algorithm with a function pointer?
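
What I mean is the plain-function case, something like this
hypothetical use:

   #include <algorithm>

   // a hypothetical heterogeneous compare as a plain function
   bool rec_less_key(int rec, long key) { return rec < key; }

   int* find_first(int* first, int* last, long key)
   {
     // a function pointer in the comp position instead of an object
     return std::lower_bound(first, last, key, rec_less_key);
   }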

> {
> #if defined(__GNUC__)
>   typedef less_than_op_first_archetype<
>     less_than_op_second_archetype<> > FT;
>   typedef less_than_op_second_archetype<
>     less_than_op_first_archetype<> > T;

These requirements are way too permissive.

> #else
>   typedef less_than_op_first_archetype<> FT;
>   typedef less_than_op_second_archetype<> T;

I can't believe this conforms. I can write one, but mine would take
lg(N) + 2 comparisons where the standard allows at most lg(N) + 1.
Does it really compile and work as required? Why does KAI not use a
function object here?

> #endif
>   forward_iterator_archetype<FT> fi;
>   T value(dummy_cons);
>   fi = std::upper_bound(fi, fi, value);
> }

Here is a version with fewer requirements which works for g++. Note
that it is just lower_bound with the types reversed. Lower_bound
needs *it < value and upper_bound needs value < *it, which puts the
equal elements on the correct side for each algorithm.

   {
     typedef less_than_op_first_archetype<> T;
     typedef less_than_op_second_archetype<> FT;
     forward_iterator_archetype<FT> fi;
     T value(dummy_cons);
     fi = std::upper_bound(fi, fi, value);
   }

> {
>   typedef null_archetype<int> Arg1;
>   typedef null_archetype<char> Arg2;
> #if defined(__GNUC__)
>   typedef convertible_to_archetype<Arg1,
>     convertible_to_archetype<Arg2> > FT;
>   typedef convertible_to_archetype<Arg2,
>     convertible_to_archetype<Arg1> > T;

As above.

> #else
>   typedef convertible_to_archetype<Arg1> FT;
>   typedef convertible_to_archetype<Arg2> T;

Again.

> #endif
>   forward_iterator_archetype<FT> fi;
>   T value(dummy_cons);
>   binary_predicate_archetype<Arg1, Arg2> comp(dummy_cons);
>   fi = std::upper_bound(fi, fi, value, comp);
> }

Here is a version below the stated minimum for g++.

   {
     typedef null_archetype<int> Arg1;
     typedef null_archetype<char> Arg2;
     typedef convertible_to_archetype<Arg1> T;
     typedef convertible_to_archetype<Arg2> FT;
     forward_iterator_archetype<FT> fi;
     T value(dummy_cons);
     binary_predicate_archetype<Arg1, Arg2> comp(dummy_cons);
     fi = std::upper_bound(fi, fi, value, comp);
   }

Here is why those types are too permissive.

> typedef less_than_op_first_archetype<
>   less_than_op_second_archetype<> > FT;
> typedef less_than_op_second_archetype<
>   less_than_op_first_archetype<> > T;

There are four possible compares:
  *it < v
  *it < *it
  v < v
  v < *it

The equal_range and binary_search algorithms require only the two
with different types. The above types allow all four. I guess
that this may be a challenge for the concept classes. Is it
possible to have types A and B such that A < B and B < A are
allowed, but A < A and B < B are not?
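
At the plain-type level, at least, it is; a minimal sketch with
made-up types where only the cross-type compares exist:

   struct A { int key; };
   struct B { int key; };
   bool operator<(const A& a, const B& b) { return a.key < b.key; }
   bool operator<(const B& b, const A& a) { return b.key < a.key; }

   // for A a; B b;  a < b and b < a compile,
   // but a < a and b < b do not: no such operator is declared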

Lest there be any doubt, I think this program is a great contribution.
My comments are intended to be constructive.

John

