
From: Greg Chicares (chicares_at_[hidden])
Date: 2001-02-20 19:18:20

Kevlin Henney wrote:
> In message <3A90B900.E6B07CE5_at_[hidden]>, Greg Chicares
> <chicares_at_[hidden]> writes
> >Would it be worthwhile to modify lexical_cast to reflect the inherent
> >precision of floating-point numbers?
> Possibly, if it can be done consistently and uniformly.

[snip some discussion]

> issues to address: (1) built-in casts do not always have value-
> preserving semantics (and hence are not always inverses),

Quite right. Conversion from double to int must truncate (4.9/1);
that is its nature. Yet conversion from int to double must be
value preserving if possible (4.9/2). Following that, I would
prefer that a lexical_cast from double to std::string be value
preserving insofar as that is possible. I see this particular
conversion--an inverse of atof(), if you will--as an especially
important application of lexical_cast.

> and (2) how
> practical is it to root out all of the edge cases?

Do you mean denormals, NaNs, and infinities? Is it too cavalier
to say "let them throw if they will"?

> >I believe it suffices to add
> > const int prec0 = std::numeric_limits<Source>::digits10;
> > const int prec1 = std::numeric_limits<Target>::digits10;
> > interpreter.precision(1 + max(prec0, prec1)); // see Notes
> >to lexical_cast.hpp right after 'interpreter' is defined. Thus, for
> >double d, this statement would be true:
> > d == lexical_cast<double>(lexical_cast<std::string>(d));
> >except in degenerate cases like NANs.
> This works fine for float, double and long double, but have you tried it
> with std::complex? The results are unfortunate :-(

Thanks for pointing that out. Fixed below.

> digits10 is 0 for any non-specialised use of numeric_limits, which is
> the case for std::complex, which means that the stream is given an
> output precision of 1. So, other numeric types suffer as a result

Unacceptable, I agree.

> That said, there may be a solution if you select how you set the
> precision based on numeric_limits<>::is_specialized. This would
> discriminate in favour of built-ins, but at least would not actively
> discriminate against other types. However, I have not tried this
> approach out.

An approach that I believe is equivalent and simpler is to set the
precision to no less than the default of 6. This leads me to the
revised suggestion:

        interpreter.precision
            (std::max
                (static_cast<std::streamsize>(6)
                ,1 + static_cast<std::streamsize>
                    (std::max
                        (std::numeric_limits<Source>::digits10
                        ,std::numeric_limits<Target>::digits10
                        )
                    )
                )
            );

> A couple of other implementation issues you might also want to consider:

I think they can be addressed by adding these includes:

# include <boost/pending/limits.hpp>
# include <limits>

# include <algorithm>

> - std::max is not defined for MSVC, therefore must be done by hand.

I don't have that compiler, so I tried to simulate its behavior by
defining the corresponding configuration macro before <boost/config.hpp>
is included. It appears that the config file supplies a correct
implementation of std::max in that case.

> - <limits> is not defined for g++, so there is no support for this on g++.

I had grabbed <limits> from libstdc++-v3, but "modify your system headers"
isn't a good general approach.

I first thought to write a <limits> header, or at least the digits10
part (just wrapping DBL_DIG, DBL_MAX, and so on), but I notice that's
already been done in /boost/boost_1_20_2/boost/pending/limits.hpp .
Was that copyrighted file intentionally included? If it's OK, then it
could be used as a stand-in on compilers whose standard libraries
lack <limits> .
