Subject: Re: [boost] RFC: Multiprecision arithmetic library
Date: 2012-01-03 07:11:58
> On 12/26/2011 04:00 PM, Andrii Sydorchuk wrote:
>>> I like that idea. We have also been implementing our own floating point
>>> numbers with a base of 10, because of requirements in the financial
>>> business. I think it should be feasible to write a general wrapper for
>>> floating point numbers with an arbitrary base as well. Taking all that
>>> together we could have e.g. floating point numbers to a base of 77 with
>>> five byte mantissa and 42 bytes exponent, if needed anywhere.
>> While I agree that this functionality is useful, especially for financial
>> problems, it is probably not a good idea to try to unify it with the
>> problem I mentioned.
>> An implementation of an extended-exponent wrapper around the double/float
>> types could be very efficient, since in most cases it uses the native double
>> operators (e.g. +, -, *, /, sqrt) plus
>> some additional handling of the exponent bits, which are stored in an int64
>> for double (or an int32 for float). It also satisfies the IEEE 754
>> requirement for correct rounding of the operations
>> +, -, *, /, sqrt without any additional overhead.
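The extended-exponent idea above can be sketched roughly as follows. This is a hypothetical illustration, not the poster's actual code: the significand is kept normalized into [0.5, 1.0) with std::frexp, so a native double multiply delivers the correctly rounded significand and only the int64 exponents need extra handling.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Hypothetical sketch of a double with its exponent widened to int64_t.
struct ext_double {
    double frac;      // significand in [0.5, 1.0), or 0
    std::int64_t exp; // extended binary exponent

    static ext_double from(double d) {
        int e = 0;
        double f = std::frexp(d, &e); // d == f * 2^e
        return {f, e};
    }

    // Only valid when exp fits back into the native double range.
    double to_double() const {
        return std::ldexp(frac, static_cast<int>(exp));
    }
};

// Multiply significands with the native double operator (correctly
// rounded per IEEE 754), add exponents, then renormalize the product.
ext_double mul(ext_double a, ext_double b) {
    int e = 0;
    double f = std::frexp(a.frac * b.frac, &e);
    return {f, a.exp + b.exp + e};
}
```

Addition would need the same exponent-alignment step before using the native operator; overflow of the int64 exponent is ignored here for brevity.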
> Why not use the built-in quadruple precision support of the compiler?
> GCC, for example, has a __float128 type that implements the IEEE754 binary128 format.
For financial applications we would need an exact representation of numbers like 1.1 or 0.03551. This can be achieved by combining an integer (11, 3551) with a decimal exponent. This is somewhat orthogonal to the number of bits of precision.
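As a minimal sketch of that representation (hypothetical names, not a proposed interface): 1.1 becomes the pair {11, -1} and 0.03551 becomes {3551, -5}, so no binary rounding error is ever introduced; addition only has to align the decimal exponents first.

```cpp
#include <cassert>
#include <cstdint>

// Exact decimal value: integer coefficient times a power of ten.
// E.g. 1.1 is {11, -1} and 0.03551 is {3551, -5}.
struct decimal {
    std::int64_t coeff; // integer significand
    int exp;            // power-of-ten exponent

    // Addition after aligning exponents (overflow ignored for brevity).
    friend decimal operator+(decimal a, decimal b) {
        while (a.exp > b.exp) { a.coeff *= 10; --a.exp; }
        while (b.exp > a.exp) { b.coeff *= 10; --b.exp; }
        return {a.coeff + b.coeff, a.exp};
    }
};
```

For example, {11, -1} + {3551, -5} yields {113551, -5}, i.e. exactly 1.13551, which a binary format cannot represent exactly.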
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk