
Boost : 
From: Lutz Kettner (kettner_at_[hidden])
Date: 2000-11-28 12:47:48
Hi,
I guess there is a culture clash at work. My background is
computational geometry and we rely on exact arithmetic. A rational
number class and exactness is pretty fundamental to us. So, my
apologies if the following turns out to be a bit harsh.
> In my view, suitability boils down to "do what I mean".
> "do what I mean, don't do what I actually said"
You don't really mean that, do you? ;) ("I mean" is subjective here,
and that doesn't work well when you program one day and debug
the next ;)
Let's look at your example:
> I may be overcomplicating the issues, but in my view, someone who writes
>
> rational<int> rat = 0.1;
>
> wants rat to have numerator 1 and denominator 10. Yes, this is not the exact
> value of the double which is being passed as an argument to the rational
> constructor, but that isn't the programmer's fault.
So, you mean "1/10". Let's take the example apart into its equivalent form:
double d = 0.1;
rational<int> rat = d;
Can you explain to somebody just reading the line
rational<int> rat = d;
what you meant, and that you prefer a denominator < 11?
My point is, with simple examples such as

    for (int i = 1; i <= 100; ++i) {
        for (int j = 1; j <= 100; ++j) {
            rational<int> r = (static_cast<double>(i) /
                               static_cast<double>(j));
            // these can hold only where gcd(i,j) == 1,
            // since rationals are stored reduced
            assert (i == r.numerator());
            assert (j == r.denominator());
        }
    }
You are biased towards a rather simplistic model of where the
double comes from. Consider a bigger application doing some numerical
simulation and then switching over to some combinatorial or geometric
algorithms where you need exact rationals. In order for the results
to make sense for the original data, you would rather not round
the doubles when converting them to rationals. That the original doubles
are approximations to begin with isn't the point here. The point is
that a combinatorial statement computed and proven with rational
arithmetic could no longer be validated against the original double data set.
(Sorry, I should just have a simple example here)
And sure, it is the programmer's fault. If a programmer works with a double,
the programmer is supposed to know that rounding occurs. But
if a double gets assigned to a rational, there would be no need
for further rounding. So I would consider any rounding here a rather
big surprise.
If the programmer's intent is 1/10, the programmer could have
written exactly that:
rational<int> rat( 1, 10);
For me, a constructor rational(double) has to be well defined
without any context at all. For me (maybe here comes the
culture clash, but I think this is also in the spirit of other
number conversions in C++), this conversion is supposed to be
as lossless as possible, and since base-2 floats can be converted
without any loss at all, that's the choice.
Thinking of "reverting a lossy conversion" doesn't fit the purpose
here, since you don't know whether a conversion from some decimal
representation happened or not.
I think the continued-fractions method is a very nice one to
have. I would like to see it as a function, and not used for
the constructor.
> (For a more obvious
> example, suppose that the 0.1 came from an input stream, as end user
> input...)
I don't think that makes it more obvious. If you want the exact
rational value of a decimal floating-point representation, you need
to write a scanner for it.
I think you are saying that yourself here:
> Put that way, option (2) sounds unreasonable. Maybe it is. (It's effectively
> attempting to invert a lossy conversion).
Another remark:
> I agree that we're in the area of "do what I mean, don't do what I actually
> said", but that's the point - floating point is hard, and rationals are
> "nicer" in some sense. Exposing floating point representation issues to
> naive users of rationals is, IMHO, not helpful.
What does a constructor expose if it simply converts the exact float
value? (Implementing it portably in the rational class might be an
issue, though.)
> To make my position explicit, I believe that
> rational<int> r1 = 0.1;
> rational<int> r2(1,10);
> should result in r1 == r2.
Again, I would be surprised if they are equal. :)
Yet another remark:
> I agree, that r1 should be the same as r2. But what I was wondering
> about is the following: When I give my compiler the following code
>
> cout << r1;
>
> it prints '0.1' - which is what I *mean*, not the exact value of r1!
> Therefore, some algorithms to handle the conversion must be already
> included in 'cout'.
cout << double uses a fixed-precision floating-point format for the
output. The default precision is 6 significant digits, but you can
change it. The truncated output gets rounded in the last digit.
You can do a funny experiment and increase that precision to
30 digits and more. Try printing 0.1 or 1.0/3.0. Doubles have about
15 meaningful digits, but the output keeps showing more.
So, the conversion from binary representation to decimal representation
is exact, but 2^-53 (the value of the last bit of an IEEE double with
1/2 < double <= 1) does not terminate in decimal at digit 15, even
though the accuracy of doubles is about 15 digits, so the exact
representation trails off somewhat.
Best regards,
Lutz Kettner

UNC Computer Science email: kettner_at_[hidden]
CB 3175, Sitterson Hall phone: (919) 962-1700 x7759
Chapel Hill, NC 27599-3175, USA fax: (919) 962-1799

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk