
From: David Abrahams (dave_at_[hidden])
Date: 2005-12-08 07:33:53


Guillaume Melquiond <guillaume.melquiond_at_[hidden]> writes:

> Le mercredi 07 décembre 2005 à 10:59 -0500, David Abrahams a écrit :
>> I find this in the documentation, which sounds self-contradictory:
>>
>> > Unprotected rounding
>>
>> > As explained in this section, a good way to speed up computations
>> > when the base type is a basic floating-point type is to unprotect
>> > the intervals at the hot spots of the algorithm. This method is safe
                                                      ^^^^^^^^^^^^^^^^^^^
>> > and really an improvement for interval computations. But please
>> > remember that any basic floating-point operation executed inside the
>> > unprotection blocks will probably have an undefined behavior (but
>> > only for the current thread).
>>
>> a. That doesn't sound "safe."
>
> Indeed.

But you just said it was! Make up your mind ;-)

> This is the reason why it is not enabled for the whole program,
> contrary to what is done in a few other interval libraries. It can be
> restricted to a scope, and the user has to enable it explicitly.
>
>> 1. there's the potential undefined behavior.
>
> As soon as you break the assumption other parts of a program make about
> the rounding mode, you can lead them to invoke undefined behavior.
> sin(double) can easily return a value that is not between -1 and 1, if
> it is invoked in a scope where the rounding is not preserved.
>
>> 2. there's the whole notion of "unprotect"-ing the computation.
>> Don't I lose the value of interval computation? That is, will
>> my computed results still reflect the potential error due to
>> floating-point precision limits?
>
> The interval computations are fine. It is the floating-point
> computations that are not.

I figured out that's what you probably meant... eventually. But my
whole point is that the answer is very unclear from your docs. I had
to scratch my head about it and write a long email to this list before
that fact was apparent to me.

>> b. How am I going to do any useful computation in an unprotection
>> block without doing any basic floating-point operations?
>
> Interval computations are useful.

Only if it's clear to the reader that you can do them without causing
UB! ;-) Otherwise, they're just another illegal operation.

>> If it's not self-contradictory, could you explain what it means and,
>> if possible, improve the wording?
>>
>> From reading the docs, it's very unclear what optimization this
>> unprotection mechanism allows, and it's unclear when/how it's
>> mathematically valid to use the results (e.g. why not do all
>> computations that way if it's faster?) I get only a vague sense of
>> the answers to these questions from the docs. Yes, I read the Horner
>> example.
>
> Ideally, compilers should do this optimization themselves.

Now I'm really confused. What is the optimization, exactly? It seems
to say, "stop tracking computational error altogether," (as though you
were using a plain double) but maybe that's not what you mean? [read
to the end; I may have figured it out]

> Unfortunately, no compiler that I know of does it. In fact, they are
> not even able to properly handle the floating-point pragmas, so we are
> still years away from the time they will handle this optimization.
>
> We cannot just tell the users: "in ten years, your compiler will
> probably be able to optimize the code; just wait till then before
> using the library". Please note that this problem plagues all the
> interval libraries that do not benefit from dedicated compiler support
> (like the one the Sun compiler provides).
>
> So, in the meantime, we have provided a way for the user to emulate this
> optimization by manually designating program scopes where the rounding
> mode is not changed and restored at each interval computation.

Do you mean to say that the user delimits a region within which she
knows using a single rounding mode for all interval computations will
yield correct results (I shouldn't have to guess -- the answer should
be obvious from the docs)? If so, why would that invalidate
computation with ordinary doubles? The rounding mode isn't normally
changed by the compiler for ordinary FP calculation, is it?
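
Let me take a guess at what the optimization actually is, so you can
confirm or deny it. As I read it, a protected operation flips the
hardware rounding mode around each bound it computes and restores it
afterwards, whereas an unprotected one assumes the mode was set once
for the enclosing scope. In rough, hand-rolled terms (this is my own
sketch using C99's fenv.h, not the library's code):

    #include <fenv.h>   // fegetround, fesetround

    struct naive_interval { double lo, hi; };

    // "Protected" addition: every operation saves the caller's rounding
    // mode, switches it for each bound, and restores it afterwards.
    naive_interval add_protected(naive_interval a, naive_interval b)
    {
        naive_interval r;
        int saved = fegetround();
        fesetround(FE_DOWNWARD);  r.lo = a.lo + b.lo;
        fesetround(FE_UPWARD);    r.hi = a.hi + b.hi;
        fesetround(saved);        // leave the caller undisturbed
        return r;
    }

    // "Unprotected" addition: assumes the caller has already set the
    // mode (upward, say) for the whole scope, so the switches vanish
    // from the inner loop.  The lower bound uses a negation trick:
    // negation is exact, so rounding (-a.lo) + (-b.lo) upward and
    // negating the result gives a valid lower bound for a.lo + b.lo.
    naive_interval add_unprotected(naive_interval a, naive_interval b)
    {
        naive_interval r;
        r.lo = -((-a.lo) + (-b.lo));   // effectively rounds downward
        r.hi = a.hi + b.hi;            // rounds upward
        return r;
    }

If that's the right picture, then the repeated mode switches (which are
expensive on common hardware) are what gets optimized away, not the
interval bookkeeping itself.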

> The code can get a few orders of magnitude faster, if it does
> intensive interval computations.
>
>> Finally, the use of the term "unprotection block" looks extremely
>> misleading. It looks like you have unprotected datatypes, but "block"
>> implies that there's a lexical scope within which unprotection is in
>> effect. There does seem to be such a notion for rounding mode (by
>> declaring an auto variable of I::traits_type::rounding), but not so
>> for unprotect. Unless I'm gravely confused, which is possible, in
>> which case, again, the docs need to be upgraded.
>
> I agree the documentation should be clearer. As long as the variable of
> type I::traits_type::rounding is alive, we are in a scope that is
> protected.

Now you're changing terms again. I thought it was "unprotected!"

> In such a scope, floating-point computations will have strange
> behaviors, but computations involving unprotected intervals (they
> run a lot faster than computations involving correct intervals) are
> able to give correct results.

Because rounding for ordinary numbers is supposed to be
"round-to-nearest" rather than "round up" or "round down" (at least
one of which is needed for interval arithmetic)?
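
Concretely (my own illustration, using C99's fenv.h directly and
nothing from the library):

    #include <fenv.h>
    #include <stdio.h>

    int main()
    {
        // volatile defeats compile-time constant folding
        volatile double one = 1.0, three = 3.0;

        fesetround(FE_TONEAREST);
        double a = one / three;     // what ordinary code expects

        fesetround(FE_UPWARD);      // what an unprotected scope may leave set
        double b = one / three;     // same expression, now rounded upward

        fesetround(FE_TONEAREST);   // restore before calling anything else

        printf("%d\n", a == b);     // prints 0: the results differ by one ulp
        return 0;
    }

Strictly speaking, "#pragma STDC FENV_ACCESS ON" would be needed for the
compiler to promise not to move things across the mode changes, which
circles back to your point about compilers not handling the
floating-point pragmas.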

> Thanks to your comments, I now understand how speaking of "unprotected"
> intervals can be misleading. By this term, we intended to express that
> unprotected intervals lead to incorrect computations, when used outside
> of a scope protected by a variable of type I::traits_type::rounding.

That's helpful at least. The docs need a lot of help in this area, still.
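
Just so the next person searching the archives doesn't have to
reverse-engineer this the way I did, here is the idiom as I now
understand it. Assuming I'm reading the reference correctly, the two
pieces are interval_lib::unprotect<I>::type and a local object of type
I::traits_type::rounding; treat what follows as my reconstruction in
the spirit of the Horner example, not as an excerpt from the docs:

    #include <boost/numeric/interval.hpp>

    typedef boost::numeric::interval<double> I;

    // Hot spot: evaluate x + x^2 + ... + x^n with interval arithmetic,
    // running the inner loop on unprotected intervals.
    I power_sum(const I& x, int n)
    {
        // While this object is alive, the rounding mode is set up once
        // for the whole scope; ordinary double arithmetic should be
        // avoided in here.
        I::traits_type::rounding rnd;

        // Same interval type, minus the per-operation rounding protection.
        typedef boost::numeric::interval_lib::unprotect<I>::type R;

        R xu(x.lower(), x.upper());
        R term = xu, sum = xu;
        for (int i = 1; i < n; ++i) {
            term *= xu;            // no rounding-mode switches in here
            sum  += term;
        }

        // Hand the result back as an ordinary, protected interval before
        // the rounding guard goes out of scope.
        return I(sum.lower(), sum.upper());
    }

If that is wrong in some detail, please say so; it is exactly the kind
of thing the documentation should spell out.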

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com
