From: Guillaume Melquiond (guillaume.melquiond_at_[hidden])
Date: 2008-02-25 13:20:29
On Monday, 25 February 2008, at 18:25 +0100, Johan Råde wrote:
> > Due to the way the code is written (memcpy), it is compliant with the
> > following part of the C99 standard: "First, an argument represented in a
> > format wider than its semantic type is converted to its semantic type.
> > Then determination is based on the type of the argument." But it may
> > seem like a stroke of luck. So perhaps a big "careful!" comment should
> > be written inside the code, so that this property is not inadvertently
> > lost.
> I don't understand. I'm not a C++ guru.
> Could you please elaborate.
It does not have anything to do with C++, but with the way some systems
handle floating-point numbers. For instance, x87-style arithmetic uses
an internal exponent field of 15 bits. Let us suppose your code is:
double x = 1e300;
fpclassify(x * x);
For the processor, 1e600 is a normal number since it does not overflow
its internal representation, yet it has (supposedly) the type "double",
so it should be an infinity. That's why the standard asks for a
conversion to the "semantic" type beforehand.
Another example would be a system where all the computations are done in
double precision, yet single precision is available as a storage format.
So the compiler would store the intermediate values as double-precision
numbers, even if they are single precision from a type point of view.
In case it is still not clear, here is an example of code that does not
portably follow the C99 standard, but that works on any system where
the computation format is the same as the semantic type:
if (x != x) return FP_NAN;
else if (x - x == 0) /* x is finite: zero, subnormal, or normal */;
else return FP_INFINITE;
Once again, the reason is that x may be stored internally in a wider
format, so x - x will be 0 even though it should be NaN when x is an
infinity.
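A bit-level classification, like the memcpy-based code mentioned at the top of this message, sidesteps the problem entirely: copying the value into an integer forces it into its 64-bit semantic format, so a wider internal representation cannot leak through. A sketch for IEEE 754 doubles (the helper name is mine, not the library's):

```cpp
#include <cmath>
#include <cstdint>
#include <cstring>

// Classify a double by inspecting its IEEE 754 bit pattern.
// The memcpy forces the value into its 64-bit semantic format,
// so a wider internal (x87) representation cannot leak through.
int classify_bits(double x) {
    std::uint64_t bits;
    std::memcpy(&bits, &x, sizeof bits);
    std::uint64_t exponent = (bits >> 52) & 0x7ff;
    std::uint64_t mantissa = bits & 0xfffffffffffffULL;
    if (exponent == 0x7ff)
        return mantissa ? FP_NAN : FP_INFINITE;
    if (exponent == 0)
        return mantissa ? FP_SUBNORMAL : FP_ZERO;
    return FP_NORMAL;
}
```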
> > I am missing the point of the changesign function. What is its
> > rationale? The documentation talks about copysign(x, signbit(x) ? 1 :
> > -1) but not about -x. So I am a bit lost.
> Is -x guaranteed to change the sign of zero and NaN?
> If it is, then I agree that the function is superfluous.
For NaN, the question is a bit tricky, since the 754 standard explicitly
states that the sign bit is not part of the payload of NaN, so it may
just be a bit that the processor randomly flips over time. Fortunately,
processor designers do not add this kind of feature to their products (or
at least they do not anymore). Note that this isn't specific to negation;
the sign of NaN is a slippery topic in general.
Other than that, the negate operation is clearly defined by the 754
standard (just flip the sign bit). Now, the C/C++ standards do not
prevent compiler designers from being more imaginative. So they could slow
down the generated code just so that it does not follow the
floating-point standard. But I don't think the users of such a random
compiler really care about the sign of zero...
Back to the topic at hand, I don't know of any architecture/compiler
where negating a number does not flip the sign bit of the number, so -x
works both for zero and NaN.
> > What is the rationale for not enabling signed_zero by default?
> I think most users of the library will not want it enabled by default.
> They may not care much about the fine details of floating point arithmetic.
Instead they may just want to prevent crashes when deserializing text archives
> that contain (by design or by accident) infinity or NaN.
I should have added the following: On all the systems I have access to
(various *nix flavors), the default behavior is to display the sign of
negative zero. So I don't know if that is what the users expect. But I'm
quite sure that users don't expect a change in the way zero is
displayed, when they use a facet dedicated to "nonfinite" things.
So perhaps there should be three states: signed_zero, unsigned_zero, and
default_zero, with default_zero being the default state, which just
delegates the output to the underlying system.
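To make the suggestion concrete, here is a hypothetical sketch of the three states; the enum and function names are illustrative only, not part of any Boost API:

```cpp
// Hypothetical three-state zero handling; names are illustrative.
#include <cmath>
#include <sstream>
#include <string>

enum zero_style { default_zero, signed_zero, unsigned_zero };

std::string put_zero(double x, zero_style style) {
    std::ostringstream os;
    switch (style) {
    case signed_zero:    // always show the sign of -0.0
        if (std::signbit(x)) os << '-';
        os << '0';
        break;
    case unsigned_zero:  // never show it
        os << '0';
        break;
    case default_zero:   // delegate to the underlying stream
        os << x;
        break;
    }
    return os.str();
}
```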
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk