From: Paul A Bristow (pbristow_at_[hidden])
Date: 2005-03-16 04:18:50
| -----Original Message-----
| From: boost-bounces_at_[hidden]
| [mailto:boost-bounces_at_[hidden]] On Behalf Of Gennadiy Rozental
| Sent: 15 March 2005 18:49
| To: boost_at_[hidden]
| Subject: [boost] Re: No tests on como?
|
| > template<typename T>
| > struct print_log_value {
| >     void operator()( std::ostream& ostr, T const& t )
| >     {
| > #if !BOOST_WORKAROUND(BOOST_MSVC,BOOST_TESTED_AT(1310))
| >         // Show all possibly significant digits (for example, 17 for 64-bit double).
| >         if( std::numeric_limits<T>::is_specialized &&
| >             std::numeric_limits<T>::radix == 2 )
| >             ostr.precision( 2 + std::numeric_limits<T>::digits * 301/1000 );
| > #endif
| >         ostr << t; // by default print the value
| >     }
| > };
|
|
| Would a solution like the following cover all FPTs?
|
| template<typename T, bool is_spec>
| struct set_log_precision {
|     static void _( std::ostream& ostr ) {}
| };
|
| // Show all possibly significant digits (for example, 17 for 64-bit double).
| template<typename T>
| struct set_log_precision<T, true>
| {
|     static void _( std::ostream& ostr )
|     {
|         ostr.precision( 2 + std::numeric_limits<T>::digits * 301/1000 );
|     }
| };
|
| template<typename T>
| struct print_log_value {
|     void operator()( std::ostream& ostr, T const& t )
|     {
|         set_log_precision<T, std::numeric_limits<T>::is_specialized &&
|                              std::numeric_limits<T>::radix == 2>::_( ostr );
|
|         ostr << t;
|     }
| };
|
And for a UDT this relies on a specialization of numeric_limits being
provided.
For example, NTL, a popular collection of very high precision types,
provides no numeric_limits specialization at all.
And if radix is something funny like 10, this formula

  2 + std::numeric_limits<T>::digits * 301/1000

is wrong: the factor 301/1000 approximates log10(2), so the formula only
converts a count of binary digits into decimal digits.
So I think you need a fall-back default precision.
It could be the stream default of 6, but that is not enough to show all
the possibly significant digits even for a 32-bit float, which needs
9 decimal digits (6 is only the number of guaranteed accurate decimal
digits).
Or it could be something like 17, which is enough for the popular 64-bit
double (BUT the least significant 2 digits are noisy, so you would get a
lot of uninformative decimal digits for values that are not exactly
representable in the binary format).
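Something along these lines is what I have in mind (just a sketch; the
helper name is mine, not anything in Boost.Test):

    #include <limits>
    #include <ostream>

    // Sketch only: set_safe_precision is an invented name.
    template<typename T>
    void set_safe_precision( std::ostream& ostr )
    {
        if( std::numeric_limits<T>::is_specialized
            && std::numeric_limits<T>::radix == 2 )
        {
            // 301/1000 approximates log10(2), converting binary digits to
            // decimal digits: double has 53, giving 2 + 53 * 301/1000 == 17;
            // float has 24, giving 9.
            ostr.precision( 2 + std::numeric_limits<T>::digits * 301/1000 );
        }
        else
        {
            // Fall-back for UDTs with no numeric_limits specialization
            // (the NTL types, for example) or a non-binary radix:
            // an explicit choice such as 17 rather than the default 6.
            ostr.precision( 17 );
        }
    }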
Some comments in the code here would be helpful.
Paul
PS The code in lexical_cast STILL has a weakness in this area,
and doesn't even use the
  2 + std::numeric_limits<T>::digits * 301/1000
formula.
So if you convert to a decimal string and back again you lose LOTS of
precision.
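For instance (an illustration of the effect only, not lexical_cast's
actual code):

    #include <iostream>
    #include <sstream>

    // Round-trip a double through a decimal string at the default stream
    // precision (6) and at 17 significant digits.
    int main()
    {
        double x = 1.0 / 3.0;

        std::stringstream s6;            // default precision is 6
        s6 << x;
        double y6;
        s6 >> y6;                        // reads back 0.333333, not x

        std::stringstream s17;
        s17.precision( 17 );             // 2 + 53 * 301/1000
        s17 << x;
        double y17;
        s17 >> y17;                      // reads back the original value

        std::cout << (x == y6) << ' ' << (x == y17) << std::endl;
        // should print "0 1": only the 17-digit string round-trips exactly
    }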
PPS When inputting floats, MSVC 8.0 introduced a loss of 1 least
significant binary bit of precision in a third of values (doubles are
exactly correct).
As you will see from the attached float_input.txt, MS say this is now a
'feature'.
You can guess my view on this.
Paul A Bristow
Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB
+44 1539 561830 +44 7714 330204
mailto: pbristow_at_[hidden]