Subject: Re: [boost] Specific-Width Floating-Point Typedefs
From: Ion Gaztañaga (igaztanaga_at_[hidden])
Date: 2013-05-04 13:22:26
On 03/05/2013 13:05, Mathias Gaunard wrote:
> > 7. float64_t will not use 80 bits, but float80_t will.
>
> The C++ language allows the compiler to use higher precision for
> intermediate floating-point computations whenever it wants.
While searching for information on defining specific-width types for my
company, I've just read this thread and the proposal:
https://svn.boost.org/svn/boost/sandbox/precision/libs/precision/doc/html/index.html
First of all, thank you Paul, Christopher and John for your proposal. I
think floating-point typedefs are needed, and IMHO the paper is a big
step in the right direction.
I have some comments I'd like to share with the authors and Boosters.
Since the paper has been updated after the WG21/SG6 meeting, I guess
numeric experts have already reviewed it, so many of my doubts might
have been discussed before. Sorry if some comments are not accurate, as
I'm not an expert in floating-point representations.
* * *
I don't think defining non-portable types like float80_t will help. As
a programmer, I find the stdint.h "rules" quite easy to understand:
they only define fixed-width types that have a portable binary
representation (except for endianness). int32_t has no padding and is
2's complement. If float32_t is defined, I think it's OK to require it
to be IEEE single precision with no padding, guaranteeing that
CHAR_BIT * sizeof(float32_t) is exactly 32. But if float80_t is
defined, the number of padding bits and their position shouldn't be
implementation-defined, as that wouldn't be consistent with the integer
types. We already have float_least80_t for non-portable
representations.
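To illustrate, here is a minimal sketch of the guarantees I have in
mind, assuming a hypothetical float32_t typedef over the built-in
float (the typedef itself is my assumption, not part of the proposal):

  #include <climits>
  #include <limits>

  typedef float float32_t; // assumption: float is IEEE binary32 here

  // The stdint.h-style guarantees I'd expect float32_t to make:
  static_assert(std::numeric_limits<float32_t>::is_iec559,
                "IEEE 754 single precision");
  static_assert(sizeof(float32_t) * CHAR_BIT == 32,
                "exactly 32 bits, no padding");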
<Off-topic>: The latest C and C++ standards are not consistent with the
stdint.h guidelines, as they define the char16_t and char32_t typedefs
(and the _Char16_t and _Char32_t native types in C++) with no width or
padding guarantees. char_least16_t and char_least32_t (and "short char"
& "long char" for the native type names) would be more appropriate
names that maintain type consistency. Following the stdint.h rules, the
char16_t and char32_t types would be exactly 16 and 32 bits wide, with
UTF-16 and UTF-32 encoding. It's difficult to teach, and error-prone,
that uint32_t has exactly 32 bits but char16_t can have more than 16
bits.</Off-topic>
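(A compile-time illustration of that inconsistency, assuming a C++11
compiler:)

  #include <climits>
  #include <cstdint>

  // Guaranteed whenever uint32_t exists: exactly 32 bits, no padding.
  static_assert(sizeof(std::uint32_t) * CHAR_BIT == 32, "always holds");
  // Not guaranteed: char16_t is only "at least" 16 bits wide, so this
  // may fail on some implementations:
  // static_assert(sizeof(char16_t) * CHAR_BIT == 16, "may not hold");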
Since the C/C++ floating-point base/radix is implementation-defined,
float_least32_t does not offer much information about the base or the
precision (the number of base-b digits in the significand). With
int_least32_t I know that INT_LEAST32_MIN is -(2^31 - 1) or lower. I
have no guarantee that I can store INT32_MIN there, but at least I know
what can be portably stored. Assuming all current and future machines
are 2's complement (C & C++ only admit sign/magnitude, 1's complement,
and 2's complement), I could decide that int_least32_t can hold
INT32_MIN.
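Expressed as compile-time checks (the 2's complement assumption is
mine, not a standard guarantee):

  #include <cstdint>

  // Portably guaranteed: INT_LEAST32_MIN is -(2^31 - 1) or lower.
  static_assert(INT_LEAST32_MIN <= -2147483647LL, "standard guarantee");
  // Holds only under the 2's complement assumption, where the minimum
  // is -2^(M-1) with M >= 32, so the value -2^31 (INT32_MIN) fits:
  static_assert(INT_LEAST32_MIN <= -2147483647LL - 1,
                "assumes 2's complement");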
Since, according to the proposal, the float_leastN_t types are
optional, wouldn't it be better to require that the float_[least|max]N_t
types have binary base and at least the same precision as floatN_t
(which are IEEE types with well-known precision)? A DSP might only have
a binary64 floating-point type, but a programmer could still use
float_least32_t to store a binary32 value with no rounding. The same
applies to float_least80_t: a programmer can't know where the padding
is, but he/she knows that, e.g., 2^-16382 (the smallest normal binary80
value) can be safely stored there.
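As a sketch of the guarantee I'm proposing (the typedef to double is
just a placeholder for a DSP-like platform with only binary64):

  #include <limits>

  typedef double float_least32_t; // placeholder: binary64-only platform

  // Suggested guarantees: binary base, and at least binary32 precision
  // and range, so any binary32 value is stored with no rounding:
  static_assert(std::numeric_limits<float_least32_t>::radix == 2,
                "binary base");
  static_assert(std::numeric_limits<float_least32_t>::digits >= 24,
                "at least binary32 precision (24 significand bits)");
  static_assert(std::numeric_limits<float_least32_t>::min_exponent <= -125,
                "at least binary32 exponent range");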
Does this make sense?

Best,
Ion