Whilst devising tests for the Boost lexical_cast function, I have encountered (for one compiler version) some surprising (to me) results when outputting floats to decimal digit strings and reading them back in. Provided you use enough decimal digits, I expected the result of this 'loopback' to be the same, for example:

  float f = any_float_value; // Similarly for other types, UDTs even?
  std::stringstream s;
  s.precision(float_significant_decimal_digits); // 9 decimal digits should be enough (see Appendix below).
  s << f;   // Output to decimal digit string(stream).
  float rf;
  s >> rf;  // Read string back into float.

My expectation is that f == rf for ALL possible float values (and indeed this WAS true for an exhaustive test with a previous version of a well-known compiler, and for a randomish sample of double and long double values).

A recent version outputs the same decimal digit strings - BUT the value read back in differs by 1 least significant binary digit - suspiciously, for only 1/3 of the float values. (Nor does increasing the number of decimal digits output via s.precision() change this.)

But perhaps this is not a Standard expectation?

Paul Bristow

Appendix

For float, the number of significant binary digits is

  int float_significand_digits = std::numeric_limits<float>::digits; // FLT_MANT_DIG == 24 for 32-bit FP

The number of _guaranteed_ accurate decimal digits is given by

  int float_guaranteed_decimal_digits = std::numeric_limits<float>::digits10;

and is 6 for the MSVC 32-bit floating-point format.

The maximum number of digits that _can_ be significant is given by the formula

  float const log10Two = 0.30102999566398119521373889472449F; // log10(2.)
  int float_significant_digits = int(ceil(1 + float_significand_digits * log10Two));

Note that a C++ compiler will NOT evaluate log10(2.) at compile time, nor a floating-point division, but WILL perform an integer division, so 301/1000 can be used as an approximation.
3010/10000 is the nearest approximation using short int (10000 < max of 32767), but it is conveniently numerically equivalent to

  int const float_significant_decimal_digits = 2 + std::numeric_limits<float>::digits * 3010/10000;

which CAN be calculated at compile time, and is 9 decimal digits for the IEEE 32-bit floating-point format.

To demonstrate, the following test asserts:

  #include <sstream>
  #include <limits>
  #include <cassert>
  #include <iomanip>

  using std::setprecision;

  int main()
  {
    int const float_significant_decimal_digits = 2 + std::numeric_limits<float>::digits * 3010/10000; // == 9
    float f = 3.1459F; // a test value - on a hemidemisemi-random test 1/3 fail.
    float rf; // for recalculated value.
    std::stringstream s;
    s.precision(float_significant_decimal_digits); // 9 decimal digits is enough.
    s << f;  // Output to string.
    s >> rf; // Read back in to float.
    assert(f == rf); // Check we get back the same value.
    return 0;
  } // int main()