
Subject: Re: [boost] [Math] float16 on ARM

From: pbristow_at_[hidden]
Date: 2019-10-15 13:33:18


> -----Original Message-----
> From: Boost <boost-bounces_at_[hidden]> On Behalf Of Matt Hurd via Boost
> Sent: 15 October 2019 13:05
> To: boost_at_[hidden]
> Cc: Matt Hurd <matthurd_at_[hidden]>
> Subject: Re: [boost] [Math] float16 on ARM
>
> >
> >
> > Thanks for these useful references.
> >
> n.p.
>
>
> > Are bfloat16 and IEEE float16
> > I:\boost\libs\math\include\boost\math\cstdfloat\cstdfloat_types.hpp
> >
> > the two layouts that we need to consider?
> >
>
> Arm also supports another: it has two similar formats, __fp16 and
> _Float16 :-(
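
For reference, a minimal sketch of how those two layouts split their 16
bits (field widths from the formats' definitions; bit-field ordering is
implementation-defined, shown low-to-high as GCC lays it out on
little-endian targets). Arm's __fp16 shares the binary16 bit layout and
differs only in how exponent 31 is interpreted, per the quote below:

    // IEEE 754 binary16: 1 sign, 5 exponent (bias 15), 10 fraction bits.
    struct binary16_fields {
        unsigned fraction : 10;
        unsigned exponent : 5;
        unsigned sign     : 1;
    };

    // bfloat16: 1 sign, 8 exponent (bias 127), 7 fraction bits,
    // i.e. exactly the top 16 bits of an IEEE binary32.
    struct bfloat16_fields {
        unsigned fraction : 7;
        unsigned exponent : 8;
        unsigned sign     : 1;
    };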
>
> "ARM processors support (via a floating point control register
> <https://en.wikipedia.org/wiki/Control_register> bit) an "alternative half-
> precision" format, which does away with the special case for an exponent value of
> 31 (111112).[10] <https://en.wikipedia.org/wiki/Half-precision_floating-
> point_format#cite_note-10>
> It
> is almost identical to the IEEE format, but there is no encoding for infinity or NaNs;
> instead, an exponent of 31 encodes normalized numbers in the range 65536 to
> 131008." from wiki:
> https://en.wikipedia.org/wiki/Half-precision_floating-point_format
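
Working that through (a sketch from the quoted description, assuming the
usual binary16 bias of 15; not tested on hardware):

    #include <cstdint>
    #include <cmath>

    // Decode a 16-bit pattern under Arm's "alternative half-precision":
    // identical to IEEE binary16 except that exponent 31 encodes normal
    // numbers rather than infinity/NaN.
    double decode_alt_fp16(std::uint16_t bits)
    {
        int sign     = (bits >> 15) & 0x1;
        int exponent = (bits >> 10) & 0x1F;  // 5-bit exponent, bias 15
        int fraction =  bits        & 0x3FF; // 10-bit fraction
        double magnitude;
        if (exponent == 0)   // zero / subnormals, as in IEEE
            magnitude = std::ldexp(fraction / 1024.0, -14);
        else                 // normals, now including exponent == 31
            magnitude = std::ldexp(1.0 + fraction / 1024.0, exponent - 15);
        return sign ? -magnitude : magnitude;
    }

    // decode_alt_fp16(0x7C00) == 65536 and decode_alt_fp16(0x7FFF) == 131008,
    // matching the 65536 to 131008 range quoted above.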
>
> Arm reference:
> http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.100067_0612_00_en/sex1519040854421.html
> gcc ref: https://gcc.gnu.org/onlinedocs/gcc/Half-Precision.html
>
> Strange complexity within Arm: "The ARM target provides hardware support for
> conversions between __fp16 and float values as an extension to VFP and NEON
> (Advanced SIMD), and from ARMv8-A provides hardware support for conversions
> between __fp16 and double values. GCC generates code using these hardware
> instructions if you compile with options to select an FPU that provides them; for
> example, -mfpu=neon-fp16 -mfloat-abi=softfp, in addition to the -mfp16-format
> option to select a half-precision format."
>
> Unpleasant, sorry.

I really, really don't wish to know all that! 😉 Paul

>
> More bfloat16 FWIW (hardware support list is longer):
> https://en.wikichip.org/wiki/brain_floating-point_format
>
> So that makes at least 3 I guess.
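
Part of bfloat16's appeal is that a software fallback is trivial, since
it is just the top half of a binary32. A sketch (plain truncation; real
implementations usually round to nearest even instead):

    #include <cstdint>
    #include <cstring>

    // float -> bfloat16: keep the high 16 bits of the binary32 encoding.
    std::uint16_t to_bfloat16(float f)
    {
        std::uint32_t bits;
        std::memcpy(&bits, &f, sizeof bits);
        return static_cast<std::uint16_t>(bits >> 16);
    }

    // bfloat16 -> float: widen back with zeros in the low bits (exact).
    float from_bfloat16(std::uint16_t h)
    {
        std::uint32_t bits = static_cast<std::uint32_t>(h) << 16;
        float f;
        std::memcpy(&f, &bits, sizeof f);
        return f;
    }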
>
> Facebook has been experimenting with an alternate format, Gustafson's
> posits, which is quite neat and rational, and perhaps better than IEEE
> at 64 bits too:
> https://www.nextplatform.com/2019/07/08/new-approach-could-sink-floating-point-computation/
> Facebook reference:
> https://engineering.fb.com/ai-research/floating-point-math/
> posit "land": https://posithub.org/news
>
> But posits only live in FPGA land AFAICT, and are not yet something to worry about.

