
Subject: Re: [boost] Specific-Width Floating-Point Typedefs
From: Paul A. Bristow (pbristow_at_[hidden])
Date: 2013-05-03 10:38:32


> -----Original Message-----
> From: Boost [mailto:boost-bounces_at_[hidden]] On Behalf Of Mathias Gaunard
> Sent: Friday, May 03, 2013 12:06 PM
> To: boost_at_[hidden]
> Subject: Re: [boost] Specific-Width Floating-Point Typedefs
>
> On 09/04/13 11:32, Paul A. Bristow wrote:
>
> > In case anyone following this thread is interested, I attach a cross
> > posting of our replies to comments from Nick MacLaren from the British Standards WG21 subgroup.
>
> I haven't found a simple way to reply to that email, so sorry for the bad formatting.
>
> > 1 templates mean that it isn't POD
>
> That's not true. There is nothing in the definition of POD that relates to templates.
> What he probably meant to say is that a class template cannot be a fundamental type (while a
> fundamental type is a POD, a POD isn't necessarily a fundamental type). But then again, making it a
> template doesn't require a class template to be involved; it could be implemented with alias
> templates that forward to a fundamental type if needed. It's a QoI issue.
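
[A minimal sketch for illustration only (mine, not the proposal's wording, and the mappings are assumptions for a typical x86 target) of how the names can forward to fundamental types, so the result stays a fundamental type and a POD:]

typedef float       float32_t;   // IEC 559 binary32 (mapping assumed)
typedef double      float64_t;   // IEC 559 binary64 (mapping assumed)
typedef long double float80_t;   // x87 80-bit extended, where available

// A parameterised spelling can still forward to a fundamental type,
// so no class template need be involved.
namespace detail {
    template <int Bits> struct float_of;
    template <> struct float_of<32> { typedef float       type; };
    template <> struct float_of<64> { typedef double      type; };
    template <> struct float_of<80> { typedef long double type; };
}
template <int Bits>
using floatn_t = typename detail::float_of<Bits>::type;   // floatn_t is a hypothetical name

static_assert(sizeof(floatn_t<64>) == sizeof(double), "forwards to a fundamental type");
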
>
> > how to specify constants with higher precision than long double
>
> If the new extended literal mechanism doesn't allow this to be done as a library, then it should
> probably be fixed.
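
[For what it's worth, the C++11 raw literal operator does seem to cover this as a library: the whole token arrives as text, so no digits are lost to a double first. A minimal sketch follows; it is mine, and cpp_bin_float_quad is just an example type, not part of the proposal:]

#include <boost/multiprecision/cpp_bin_float.hpp>

typedef boost::multiprecision::cpp_bin_float_quad quad;   // 113-bit significand

quad operator""_q(const char* digits)   // raw literal: parses every digit itself
{
    return quad(digits);
}

int main()
{
    quad pi = 3.14159265358979323846264338327950288_q;   // more digits than long double holds
    (void)pi;
}
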
>
> > 3 Ours is a simple pragmatic solution using existing hardware and no new software. We don't
> > expect 'reproducible execution', but experience with Boost.Math's extensive test suite suggests
> > that it is jolly close. (It's the number of bits that makes the significant difference.) We're
> > specifying types, not strict semantics.
>
> From my experience, Boost.Math is lacking when it comes to denormals, NaNs, or infinities. Those
> are the tricky bits when different architectures are considered.
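
[For reference, a small self-contained sketch (mine; it says nothing about Boost.Math internals) of the special values in question:]

#include <cmath>
#include <cstdio>
#include <limits>

int main()
{
    const double samples[] = {
        std::numeric_limits<double>::denorm_min(),   // smallest subnormal (denormal)
        std::numeric_limits<double>::quiet_NaN(),
        std::numeric_limits<double>::infinity(),
        1.0
    };
    for (double v : samples)
        std::printf("%-14g subnormal=%d nan=%d inf=%d\n", v,
                    std::fpclassify(v) == FP_SUBNORMAL,
                    (int)std::isnan(v), (int)std::isinf(v));
}
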
>
> > 4 Almost all C++ uses the X86 FP
>
> What is "the X86 FP"? Results on x86 are highly variable depending on the microarchitecture,
> compilation flags, and the mood of the optimizer.
> (Arbitrary use of x87, SSE, FMA3, or FMA4 may all lead to different results.)
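
[A small illustration of that variability (my example, not from the proposal): whether the compiler contracts a*b + c into a fused multiply-add changes the rounding, so the same source can print different answers under different flags or microarchitectures:]

#include <cmath>
#include <cstdio>

int main()
{
    volatile double e = std::ldexp(1.0, -27);   // 2^-27
    double a = 1.0 + e, b = 1.0 - e, c = -1.0;

    // a*b is exactly 1 - 2^-54, which rounds to 1.0 in double, so rounding twice
    // gives 0.0; a single fused rounding gives -2^-54.
    double fused    = std::fma(a, b, c);
    double separate = a * b + c;   // result depends on whether an FMA is emitted
                                   // (e.g. -ffp-contract, -mfma)

    std::printf("fma(a,b,c) = %g\na*b + c    = %g\n", fused, separate);
}
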
>
> > 7 float64_t will not use 80 bits, but float80_t will.
>
> The C++ language allows the compiler to use higher precision for intermediate floating-point
> computations whenever it wants.
>
> The way the above is phrased, it could be misunderstood to mean that computations with float64_t
> will never use an 80-bit floating-point unit, even though that may well happen.
> This gives a false sense of security to people who write code that would only work with an
> IEEE 754 64-bit floating-point unit.
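
[To make that concrete, here is a small sketch (mine) where the visible answer depends on whether intermediates stay in an 80-bit x87 register or are rounded to 64 bits after every operation; the same float64_t-style source can legitimately print different results when compiled for x87 versus SSE2:]

#include <cfloat>
#include <cstdio>

int main()
{
    // FLT_EVAL_METHOD reports the evaluation precision used for intermediates:
    // 0 = each type's own precision, 2 = long double (classic x87), -1 = unknown.
    std::printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);

    volatile double a = 1e308, b = 10.0, c = 1e308;
    // At strict 64-bit precision a * b overflows to infinity before the divide;
    // kept in an 80-bit register, the product is representable and the result is ~10.
    double r = a * b / c;
    std::printf("1e308 * 10 / 1e308 = %g\n", r);
}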

Agree with all your comments.

The number of bits is by far the most important factor in precision.

This is a simple pragmatic proposal that will make things better - but not perfect.

Paul

---
Paul A. Bristow,
Prizet Farmhouse, Kendal LA8 8AB  UK
+44 1539 561830  07714330204
pbristow_at_[hidden]
