
Subject: Re: [boost] [Review:Algorithms] Order of args to clamp
From: Phil Endecott (spam_from_boost_dev_at_[hidden])
Date: 2011-09-26 13:59:59

Phil Endecott wrote:
> template<typename V>
> V clamp ( V val, V lo, V hi );
>
> I have a clamp function that orders the args low - middle - high, which
> seems like a more natural ordering to me.

This is really just a brain-teaser rather than a serious suggestion,
but it has just occurred to me that

For all x,y,z . clamp(x,y,z) == clamp(y,x,z)

IFF clamp's precondition that lo<hi is removed and suitable behaviour
is defined for that case. In other words, the two possible argument
orderings can be equivalent. Do you believe me?
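To make that concrete, here is a minimal sketch of such a precondition-free clamp (the name clamp_unordered is mine, not a proposal): it returns the middle of its three arguments via a min/max identity, so it is symmetric in its first two arguments by construction.

```cpp
#include <algorithm>
#include <cassert>

// Sketch of an "order-free" clamp (hypothetical name): it returns the
// middle of its three arguments, so no lo < hi precondition is needed,
// and clamp_unordered(x,y,z) == clamp_unordered(y,x,z) by construction.
template <typename V>
V clamp_unordered(V x, V y, V z)
{
    return std::max(std::min(x, y), std::min(std::max(x, y), z));
}
```

With lo < hi this agrees with the usual clamp; swapping the first two arguments makes no difference.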

Here is my chain of thought:

It seems to me that for some trivial algorithms, the implementation can
be a direct expression of what we think that the algorithm means; in
such cases, if we try to come up with a specification that is
independent of the implementation we can find ourselves writing
something that is more complex and less obvious than the implementation
itself. For example:

clamp(V,L,H) -> R
pre L<H
post R == sort(V,L,H)[1] // i.e. the middle value after sorting V, L and H

I believe that that's a valid specification for the clamp problem, but
it's less useful to a user than simply writing
v < lo ? lo : v < hi ? v : hi
because some thought is required to see that "clamp" and "middle" get
the same result.
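For reference, that direct implementation spelled out as a function, with the argument order of the prototype quoted at the top:

```cpp
#include <cassert>

// The direct implementation of clamp; precondition: lo < hi.
template <typename V>
V clamp(V v, V lo, V hi)
{
    return v < lo ? lo : v < hi ? v : hi;
}
```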

Anyway, that "middle value" idea triggered my observation that, since
the order of the inputs to sort cannot change its output, the two
argument orderings to clamp must be equivalent. But clearly they are
not with the obvious implementation. This is because assuming the
other argument ordering can break the L<H precondition. So, by
removing the precondition, both argument orderings work. (I'm not
seriously suggesting doing that. I'm just pointing it out as a curiosity.)

A little more seriously, that made me wonder whether we should really
be defining a "middle" algorithm, which one could see as complementing
3-arg versions of min and max:

min(1,2,3) == 1
middle(1,2,3) == 2
max(1,2,3) == 3
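A sketch of what such a "middle" algorithm might look like by direct case analysis (the name and structure are my assumptions, not part of any proposal):

```cpp
#include <cassert>

// Hypothetical "middle" complementing three-argument min and max:
// returns the second-smallest of a, b, c.
template <typename T>
T middle(T a, T b, T c)
{
    if (a < b) {
        if (b < c) return b;   // a < b < c: b is in the middle
        return a < c ? c : a;  // c lies between a and b, or a does
    }
    if (a < c) return a;       // b <= a < c: a is in the middle
    return b < c ? c : b;      // c lies between b and a, or b does
}
```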

Anyway, this all reminded me that I had once considered writing a
"varargs sort" function; min, middle and max could all be written in
terms of it:

std::tr1::array<int,3> r = sort(1,2,3);

The idea is that knowing the number of elements to sort at compile time
might reduce the overhead of the "control" part of the algorithm enough
to make a significant run-time saving. Here's a quick example:

#include <algorithm> // std::swap
#include <tr1/array>

template <typename T>
std::tr1::array<T,3> sort(T a, T b, T c)
{
    if (a < b) {
        if (b < c) {
            // already sorted
        } else if (a < c) {
            std::swap(b,c);
        } else {
            std::swap(b,c);
            std::swap(a,b);
        }
    } else {
        if (a < c) {
            std::swap(a,b);
        } else if (b < c) {
            std::swap(a,b);
            std::swap(b,c);
        } else {
            std::swap(a,c);
        }
    }
    std::tr1::array<T,3> r = {{a,b,c}};
    return r;
}

That seems to be more than an order of magnitude faster than std::sort
(though I've not checked that it gets the right answer!).
Unfortunately, my attempts to extend it to N inputs (i.e. by recursion)
produce something much slower. This could be an interesting exercise
if anyone has nothing better to do.
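On the "right answer" point, a quick sketch of how one might check it: exhaustively compare the three-way sort against std::sort over every permutation of three distinct values. (I use std::array, the C++11 spelling of std::tr1::array, and rename the function sort3 to avoid shadowing std::sort; the body is otherwise unchanged.)

```cpp
#include <algorithm>
#include <array>
#include <cassert>

// The three-way sort from above, renamed and using std::array.
template <typename T>
std::array<T,3> sort3(T a, T b, T c)
{
    if (a < b) {
        if (b < c)      { /* already sorted */ }
        else if (a < c) { std::swap(b,c); }
        else            { std::swap(b,c); std::swap(a,b); }
    } else {
        if (a < c)      { std::swap(a,b); }
        else if (b < c) { std::swap(a,b); std::swap(b,c); }
        else            { std::swap(a,c); }
    }
    std::array<T,3> r = {{a,b,c}};
    return r;
}

// Exhaustive check: every permutation of three distinct values must
// agree with std::sort on the same inputs.
bool sort3_matches_std_sort()
{
    int v[3] = {1, 2, 3}; // must start sorted for next_permutation
    do {
        std::array<int,3> got  = sort3(v[0], v[1], v[2]);
        std::array<int,3> want = {{v[0], v[1], v[2]}};
        std::sort(want.begin(), want.end());
        if (got != want) return false;
    } while (std::next_permutation(v, v + 3));
    return true;
}
```

Duplicates are worth a spot check too, since the comparisons above are all strict.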

Anyway, that's all just an aside, for curiosity.

Regards, Phil.