From: Paul Moore (gustav_at_[hidden])
Date: 2000-11-28 17:04:37
From: Lutz Kettner [mailto:kettner_at_[hidden]]
> I guess there is a culture clash at work. My background is
> computational geometry and we rely on exact arithmetic. A rational
> number class and exactness is pretty fundamental to us. So, my
> apologies if the following turns out to be a bit harsh.
Far from it. I appreciate the comments, and in particular I'm very glad to
get some input from someone who feels that construction from doubles is not
natural or necessary.
As I've got sucked into attempting to implement this, I ought to take a step
back - I didn't include a constructor from double in the initial class,
deliberately. I am looking at adding it because a number of people had asked
for it. I'm very glad to hear from people who *don't* think it's a good
idea. It restores the balance...
> > In my view, suitability boils down to "do what I mean".
>
> > "do what I mean, don't do what I actually said"
>
> You don't mean that really, do you? ;-) ("I mean" is subjective here,
> and that doesn't work well when you're programming one day and
> debugging the next ;-)
No, I probably don't :-) It was more my characterisation of how I saw people
looking at construction from double. Generally, the examples people gave
didn't seem to see a difficulty with rational<int> r = 0.1. I was trying to
say that people's requests seemed to want this to give r = 1/10, rather than
r = 7205759403792794/72057594037927936 (Thanks to Matt Austern for
calculating that for me :-)
> You are biased towards a rather simplistic model of where the
> double comes from. Consider a bigger application doing some numerical
> simulation and then switching over to some combinatorial or geometric
> algorithms where you need exact rationals. In order for the results
> to make sense on the original data, you rather prefer not to round
> the doubles when converting to rational. That the original doubles
> are approximations to begin with isn't the point here. The point is
> that a combinatorial statement computed and proven with rational
> arithmetic cannot be validated anymore in the original double data set.
> (Sorry, I should just have a simple example here)
If simple examples made the point, I suspect we wouldn't be having this
discussion in the first place :-) But the example is a good one. We
definitely don't want construction from double to depend on the "history" of
where the double came from, as well as on its value!
> And sure, it is the programmer's fault. If a programmer works with a
> double, the programmer is supposed to know that rounding occurs. But
> if a double gets assigned to a rational, there would be no need
> for further rounding. So I would consider any rounding here a rather
> big surprise.
I think that part of the problem is that many programmers DON'T remember
that working with doubles isn't simple...
> If the programmer's intent is 1/10, the programmer could have
> written that:
>
> rational<int> rat( 1, 10);
I agree entirely. I wish someone who wants a constructor from double would
provide a good example of how it would be used...
> For me, a constructor rational(double) has to be well defined
> without any context at all. For me (maybe here comes the
> culture clash, but I think that is also in the spirit of other
> number conversions in C++) this conversion is supposed to be
> as lossless as possible, and since base 2 floats can be converted
> without any loss at all, that's the choice.
I agree up to a point. But I take a slightly more extreme view, that there
should be no construction from double at all. This is based on my not seeing
any use for it, admittedly. Your example seems to indicate that there is a
(rare) use for an exact conversion, but in general I feel that it would be
surprising more often than useful.
> I think, the continued fractions method is a very nice method to
> have. I would like to see that as a function, and not used for
> the constructor.
I am rapidly coming to the conclusion that you are right. There remains a
question as to whether such "utility" functions should be bundled in with
the boost rational library. There's an argument for (otherwise, finding such
algorithms is hard), but there's also a (practical) argument against -
namely that, as the maintainer, I'm not competent to evaluate or maintain
such code...
I think that including Reggie Seagraves' algorithm as a utility function may
be a reasonable idea. But I'm not sure. How often will it be used? Is the
need to specify a maximum denominator reasonable in a general usage context?
Is the algorithm correct (no criticism of Reggie, but I sure can't tell!!!)?
Once again, thanks for the helpful comments.
Paul
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk