From: Gennaro Prota (gennaro_prota_at_[hidden])
Date: 2006-08-04 08:14:12
On Fri, 4 Aug 2006 12:16:03 +0100, "Paul A Bristow"
<pbristow_at_[hidden]> wrote:
>And in reply to Gennaro's question,
>
>| stream.precision(2 + std::numeric_limits<Source>::digits * 301/1000);
>
>| Does that work for non IEEE 754 floating points too?
>| I guess not. And in that case we have to check.
>
>It should be as good as you can get - certainly MUCH better than just using
>the default of 6 decimal digits.
>This is because numeric_limits<>::digits is the precision of the significand
>in bits, and the formula multiplies by 301/1000 ~= log10(2) (i.e. divides by
>log2(10) ~= 3.32) to convert binary digits to decimal digits, and adds 2 to
>ensure that a change of 1 in the least significant bit still shows up.
Paul, I understand the formula :-) And I'm aware of your N2005 and
N1171. My question was just whether it (the formula) requires IEEE 754
or not. Of course max_digits10 is different in that respect, as it
doesn't need to be calculated from digits. Here we are calculating it
instead, and I'm not sure the formula is valid *in general*; probably
because I haven't thought enough about it.
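For concreteness, here is a minimal sketch of what the formula selects;
this is illustration only (not lexical_cast code), and the values in the
comments assume the common IEC 559 formats:

    #include <iostream>
    #include <limits>

    // Illustration only: print the precision the formula selects for
    // each built-in floating type on this implementation.
    template<class T>
    void show(const char* name)
    {
        const int prec = 2 + std::numeric_limits<T>::digits * 301 / 1000;
        std::cout << name
                  << ": digits = " << std::numeric_limits<T>::digits
                  << ", chosen precision = " << prec
                  << ", digits10 = " << std::numeric_limits<T>::digits10
                  << '\n';
    }

    int main()
    {
        show<float>("float");              // 24 bits -> 9 on IEC 559
        show<double>("double");            // 53 bits -> 17 on IEC 559
        show<long double>("long double");  // implementation-dependent
    }

For IEC 559 double that gives 2 + 53*301/1000 = 17, which matches what
max_digits10 would be.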
As to the docs, this is what I would add. Feedback welcome.
lexical_cast<> offers the following guarantees:
- in the absence of overflow/underflow,
  * if a "decimal string" with at most numeric_limits<T>::digits10
    significant digits is converted to a floating-point type T and back
    to a string, the result will equal the original string value;
  * if a floating-point value of type F is converted to a string
    allowing at least max_digits10 [or our formula here] significant
    decimal digits and back to F, the result will be the original
    number.
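Just to make the second guarantee concrete, here is a minimal
self-contained sketch of the round trip it describes, using a plain
stringstream with the precision formula above rather than lexical_cast
itself (and assuming correctly rounded stream conversions, as IEEE 754
implementations with a conforming library provide):

    #include <cassert>
    #include <limits>
    #include <sstream>

    int main()
    {
        const double original = 0.1;  // not exactly representable in binary

        // Write with enough decimal digits that no information is lost
        // (the formula discussed above; 17 for IEEE 754 double).
        std::ostringstream out;
        out.precision(2 + std::numeric_limits<double>::digits * 301 / 1000);
        out << original;

        // Read it back: with that many digits the value must survive
        // the round trip exactly.
        std::istringstream in(out.str());
        double restored = 0.0;
        in >> restored;

        assert(restored == original);
    }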
--
[ Gennaro Prota, C++ developer for hire ]