
Subject: Re: [boost] [math] Support for libquadmath/ __float128
From: Christopher Kormanyos (e_float_at_[hidden])
Date: 2013-02-25 13:25:36

>>> So this means that the suffixes L and l are the "longest"
>>> floating-point suffixes supported by the language and that the
>>> suffixes Q and q are non-portable language extensions. Is this right?
>> Correct.
>>> So the only way to add Q or q suffixes would be to find a way to query
>>> the environment if it is supported. I wonder if GCC with
>>> --enable-libquadmath has a kind of query for this.
>> I haven't found one - gcc-4.7.2 has all sorts of __SIZEOF_XXX__ defines for every last type
>> except __float128 and __float80 :-(

> It won't help now, but is 'more precise types' an issue to raise
> with WG21 for the next C++ standard?
> Or is Multiprecision the way to go?
> Paul

I would welcome both.

Remember what a relief it was to finally be able to use
specified fixed-size integers such as std::uint16_t,
std::uint32_t, and the like? These standardized types
solved a host of portability problems for integer algorithms.
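To illustrate the point with a small sketch: with the exact-width typedefs, bit-level code carries no per-platform assumptions (the rotation helper below is illustrative only, not from any library):

```cpp
// std::uint32_t is exactly 32 bits wherever it is defined, so shift
// and mask arithmetic behaves identically on every conforming platform.
#include <cstdint>

static_assert(sizeof(std::uint32_t) == 4, "uint32_t must be 32 bits");

// Illustrative helper: portable 32-bit left rotation.
std::uint32_t rotl32(std::uint32_t v, unsigned n)
{
    n &= 31u;
    return (v << n) | (v >> ((32u - n) & 31u));
}
```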

Having fixed-precision floating-point types such as
std::float32_t, std::float64_t, std::float128_t would,
in my opinion, solve an even grander host of portability
issues for floating-point algorithms. We have all grudgingly
struggled with these for 30 years.
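A minimal sketch of what such types could sit on top of today, using hypothetical alias names (the static_asserts verify, rather than assume, the IEEE-754 layouts of the built-in types):

```cpp
// Hypothetical aliases in the spirit of the proposed std::floatNN_t
// types, mapped onto what the implementation already provides.
#include <limits>

namespace fixed_fp {  // hypothetical namespace, not a real library
    static_assert(std::numeric_limits<float>::is_iec559
                      && sizeof(float) == 4,
                  "float is not IEEE-754 binary32 here");
    static_assert(std::numeric_limits<double>::is_iec559
                      && sizeof(double) == 8,
                  "double is not IEEE-754 binary64 here");
    using float32_t = float;   // exactly 32-bit IEEE on this target
    using float64_t = double;  // exactly 64-bit IEEE on this target
}
```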

(Where has my long lost love Fortran77's REAL*16
from the '80s gone?)

Although multiprecision can, and possibly should, be
addressed, I would not rely on multiprecision alone
for a potential std::float128_t. The reason is that
a potential std::float128_t can be well-implemented
*on-the-metal* on many systems, with all the high
performance that implies. Multiprecision will always
suffer large performance losses (unless it gets onto a GPU).

Just my $.02.

Sincerely, Chris.
