Subject: Re: [boost] Boost.Algorithm design question
From: Stephan T. Lavavej (stl_at_[hidden])
Date: 2011-10-07 00:37:19


[Dave Abrahams]
> Well, no. I'm trying hard to represent the views and theory of the
> STL's original author. I could just as easily look at it your way. But
> I think there's powerful and underappreciated validity to viewing
> algorithms as abstractions, and you can't really do that if the
> implementation leaks out in the specification.

I think that algorithms with weak requirements can still be viewed as abstractions - and lower_bound() provides the best example.

If I'm explaining lower_bound() to someone, I'm going to tell them that it finds (in logarithmic time) the first location in a sorted sequence where a value could be inserted while preserving the ordering. That's a very clean, abstract view of the algorithm, and it's precise enough to handle the usual corner cases (the value exists in the sequence, multiple values exist in the sequence, the value doesn't exist but would be inserted at the beginning/middle/end).

And yet, lower_bound() is capable of more - which is unimportant to most users but very important to a few. It can handle weaker iterators than random-access (performing a linear number of iterator operations, but a logarithmic number of predicate applications), it can handle heterogeneous comparisons, and the sequence doesn't have to be sorted in any sense - merely partitioned with respect to an expression. The first and the third are extremely obscure, but the heterogeneous part is very popular.
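For instance, here's a minimal sketch of the heterogeneous case (the Employee type and the data are just illustrations): the sequence holds Employees sorted by id, the value is a plain int, and the comparator takes (Employee, int). The range only has to be partitioned with respect to the comparison.

#include <algorithm>
#include <iostream>
#include <vector>

struct Employee {
    int id;
    const char* name;
};

int main() {
    std::vector<Employee> v = { { 10, "Ada" }, { 20, "Bjarne" }, { 30, "Dennis" } };

    // Heterogeneous comparison: Employee on the left, int on the right.
    // lower_bound() only needs comp(element, value) to be valid and convertible to bool.
    auto it = std::lower_bound(v.begin(), v.end(), 20,
        [](const Employee& e, int id) { return e.id < id; });

    if (it != v.end()) {
        std::cout << it->name << "\n"; // prints "Bjarne"
    }
}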

[STL]
> The design philosophy that I'm advocating is "preserve the user's
> types whenever possible". Here, a range of T and a value of V are
> involved. #1 says "all I need is t == v to return something
> convertible to bool, and I'll tell you whether this is 'false' for all
> elements in the range." It doesn't attempt to convert T to V or V to
> T. It doesn't attempt to say v == t, or !=, or anything else. It
> passes no judgment on what t == v "means". Perhaps t == v does attempt
> to convert T to V or V to T, but that's up to the types involved.

[Dave Abrahams]
> Yeah, but do you see how this gets the user involved with the
> implementation details of the algorithm?

I think of STL algorithms as performing unspecified magic, except that they want very specific operations with specific semantics from the types that I provide (iterators, elements, functors).
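To make the #1 versus #2 contrast concrete, here is a rough sketch of the two shapes being discussed (the names none_equal_1/none_equal_2 are mine, not Boost.Algorithm's): #1 compares each element directly with the user's value, while #2 converts the value to the range's value type up front.

#include <iterator>

template <class InputIterator, class V>
bool none_equal_1(InputIterator first, InputIterator last, const V& val) {
    for (; first != last; ++first) {
        if (*first == val) {   // #1: heterogeneous t == v, whatever that means for T and V
            return false;
        }
    }
    return true;
}

template <class InputIterator, class V>
bool none_equal_2(InputIterator first, InputIterator last, const V& val) {
    typedef typename std::iterator_traits<InputIterator>::value_type T;
    T converted(val);          // #2: forces V -> T before any comparison happens
    for (; first != last; ++first) {
        if (*first == converted) {
            return false;
        }
    }
    return true;
}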

> There were algorithms that allowed all
> kinds of heterogeneity and conversions in their implementations, but
> wouldn't compile with the concept constraints given in the standard. If
> you tried to come up with concept constraints that actually would
> compile without breaking backward compatibility it turned out to be
> horrible. I think find_if might have been one of them. Doug Gregor
> knows for sure.

I agree that it's a problem (I remember seeing some of that on the reflector), but the solution is not to require homogeneous types everywhere.
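As an illustration of the kind of heterogeneous call that compiles and works today, but that a strictly homogeneous constraint would reject (this example is mine, not one from the reflector): finding a std::string element by comparing against a string literal.

#include <algorithm>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> names = { "alpha", "beta", "gamma" };

    // The value's type (const char*) differs from the element type (std::string);
    // all find() needs is for element == "beta" to be valid and convertible to bool.
    auto it = std::find(names.begin(), names.end(), "beta");
    return it != names.end() ? 0 : 1;
}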

[STL]
> Here's another example. Consider vector<unsigned char> and val =
> 0x1234UL. For all i, v[i] == 0x1234UL is false (the usual arithmetic
> conversions widen unsigned char to unsigned long). #1 will report that
> none of the unsigned chars are equal to 0x1234UL. But #2 smashes
> unsigned long to unsigned char, which must compile (possibly emitting
> a warning), and then the unsigned chars will be compared to 0x34.

[Dave Abrahams]
> Well, this is the algorithm needing to accommodate a weakness of the core
> language (implicit narrowing conversions). And since we don't live in
> an ideal world, maybe you have a point.

For the record, I dislike C's conversions. But if implicit narrowing were banned (or if the user enables the appropriate warning and compiles with warnings-as-errors), then #2 simply fails to compile while #1 compiles and works perfectly, because it doesn't attempt to convert the value type to the element type ahead of time.
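Here's a small driver (just an illustration; the loops stand in for #1's and #2's comparisons) showing the difference: the direct comparison never matches, while narrowing the value first makes it compare equal to 0x34. In #2 the narrowing would be implicit, which is why it compiles with at most a warning; the cast below merely makes it visible.

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<unsigned char> v = { 0x12, 0x34, 0x56 };
    unsigned long val = 0x1234UL;

    bool direct_match = false;
    for (std::size_t i = 0; i < v.size(); ++i) {
        if (v[i] == val) {      // usual arithmetic conversions widen v[i]; never equal to 0x1234
            direct_match = true;
        }
    }

    unsigned char narrowed = static_cast<unsigned char>(val); // 0x34, what #2 effectively compares against
    bool narrowed_match = false;
    for (std::size_t i = 0; i < v.size(); ++i) {
        if (v[i] == narrowed) {
            narrowed_match = true;
        }
    }

    std::cout << std::boolalpha << direct_match << ' ' << narrowed_match << '\n';
    // prints: false true
}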

STL

