
Subject: Re: [boost] [review][constrained_value] Review of Constrained Value Library begins today
From: Robert Kawulak (robert.kawulak_at_[hidden])
Date: 2008-12-08 18:39:24


> From: Stjepan Rajko

> > So what's the conclusion in the context of separation of
> > invariant and the test? That we may end up having a bounded float
> > with a value a bit greater than the upper bound, but that's fine,
> > because the difference will never exceed some user-defined
> > epsilon? Is the epsilon constant? The "delta" (difference between
> > extended and truncated value) may have a very big value for big
> > numbers and a very small one for small ones, so epsilon should
> > rather be scaled according to the magnitude of compared numbers.
>
> I know little about floats and what the values of the deltas are and
> how they depend on the value of the float, but:
>
> The invariant is still: x < y

Are we still talking about the case when test ==> invariant? I'm confused -- if
we allow a value (x) to be a "delta" bigger than the upper bound (y), then why
should the invariant be "x < y" rather than "x - delta < y"?

> If you think about it, you are already separating the test from the
> invariant in your advanced examples. Think about the object that uses
> the library to keep track of its min/max. The test checks for
> whether you have crossed the previous min/max. Sure, you could say
> the invariant is the same: "the object is between the min and max
> present in the constraint object". But really, what kind of guarantee
> is this? If I need to look at the constraint to figure out what I'm
> being guaranteed, I might as well look at the value itself and see
> where it stands. I would consider this as "no invariant". There, you
> already have docs for this case :-)

I would say the invariant is still there, but it is "inverted" -- for typical
bounded objects it is "the value always lies within the bounds", while here it
is "the bounds always contain the value". When the value is about to be
modified, the error policy ensures that the invariant is still upheld by
modifying the constraint (in contrast to the more common case, where it would
modify the value). The test here is always equal to the invariant, so it
doesn't seem to be a representative example of the test ==> invariant concept.
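
(To make the "inverted" case concrete, here is a rough standalone sketch; the
tracked_value class and its members are just made-up names for illustration,
not the library's actual interface:)

        // Hypothetical sketch: the "constraint" (min/max) follows the
        // value, so the invariant "min <= value <= max" is kept by
        // widening the bounds instead of rejecting or clipping the value.
        struct tracked_value
        {
            double value, min, max;

            explicit tracked_value(double v) : value(v), min(v), max(v) {}

            tracked_value & operator=(double v)
            {
                value = v;
                if (v < min) min = v;  // adjust the constraint...
                if (max < v) max = v;  // ...rather than the value
                return *this;
            }
        };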

> If you are sticking with test == invariant just for the sake of test
> == invariant (rather than a lack of time to investigate and document
> the other case), I think you are settling for selling your library way
> shorter than you could.

Please forgive my resistance, but I stick with test == invariant because I
believe that, as the person responsible for a library, I have to think 100
times and be really convinced before I add or change anything. I wouldn't have
so many doubts if I saw useful and general applications that would outweigh the
added complexity (I hope you agree that the test ==> invariant approach will be
more difficult to explain to the users?) and the extra work needed. So far
you've shown one application, dealing with the FP issue using an epsilon, but
we don't know yet whether this approach leads to the best (or any) solution of
the problem. Are there any other use cases that I should consider? Maybe it's
best to leave it as is for now, and once you have tested whether the approach
is really sound and useful, we could make the necessary changes (before the
first official release)?

> > And another issue is NaN -- it breaks the strict weak ordering, so
> > it may or may not be allowed as a valid value depending on the
> > direction of comparison ("<" or ">"). I guess NaN should not be an
> > allowed value in any case, but I have no idea yet how to enforce
> > this without a float-specific implementation of within_bounds.
> >
>
> I haven't taken a close look at bounded values, I'm just thinking of
> them as a specific case of constrained values. What is your invariant
> here? That ((min <= value) && (value <= max)) or that !((value < min)
> || (max < value))? Why do you need a strict weak ordering for either
> one? I believe NaN will fail the first test but pass the second one -
> if that is true, why is NaN a problem if you use the first test?
> (sorry if I'm missing something, like I said I'm not well versed in
> the details of floats)

The problem with NaN is that any ordering comparison involving this value
yields false. So:

        NaN < x == false
        NaN > x == false
        NaN <= x == false
        ... and so on.
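
(A tiny test program, assuming IEEE 754 doubles, confirms this -- every
comparison below prints 0:)

        #include <iostream>
        #include <limits>

        int main()
        {
            const double nan = std::numeric_limits<double>::quiet_NaN();
            const double x = 1.0;
            // All ordering comparisons with NaN evaluate to false:
            std::cout << (nan < x) << (nan > x)
                      << (nan <= x) << (nan >= x) << '\n';  // prints 0000
        }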

This violates the rules of strict weak ordering, which guarantee that we can
perform tests for bounds inclusion without surprises. For example, strict weak
ordering requires the "incomparable" relation (neither a < b nor b < a) to be
transitive, yet when x == NaN and l < u, the left-hand side of the following
"obvious" statement is true while the right-hand side is false:

        (!(l < x) && !(x < l) && !(x < u) && !(u < x))
            ==> (!(l < u) && !(u < l))

Maybe the requirement could be loosened if I find a generic way to implement the
bounds inclusion test which always returns false for NaN. Currently, to test x
for inclusion in a closed range [lower, upper], we have:

        !(x < lower) && !(upper < x)

While for an open range (lower, upper):

        (lower < x) && (x < upper)

Now, if we check whether NaN is within the closed range, we get true, while for
the open range we get false. So if we take a closed range nested inside a wider
open range -- say [2, 3] inside (1, 4) -- then NaN belongs to the subset but
does not belong to the superset, which is obviously a contradiction. I'm not
sure if such properties of NaN could lead to a broken invariant, but surely it
would be good to avoid such strange results.
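
(Assuming IEEE 754 doubles again, a minimal check of the two tests above, with
hypothetical helper functions in_closed and in_open, shows exactly this:)

        #include <iostream>
        #include <limits>

        // The two inclusion tests from above, written out for doubles
        // (in_closed/in_open are just illustrative names):
        bool in_closed(double x, double lower, double upper)
        { return !(x < lower) && !(upper < x); }

        bool in_open(double x, double lower, double upper)
        { return (lower < x) && (x < upper); }

        int main()
        {
            const double nan = std::numeric_limits<double>::quiet_NaN();
            std::cout << in_closed(nan, 2.0, 3.0)   // 1: NaN "inside" [2, 3]
                      << in_open(nan, 1.0, 4.0)     // 0: but "outside" (1, 4)
                      << '\n';
        }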

Best regards,
Robert

