From: Andras Erdei (aerdei_at_[hidden])
Date: 2005-03-29 02:52:23


Peter Dimov wrote:

> Since rounding is a subset of unspecified, it can only be worse if it
> carries an unacceptable overhead when overflow does not occur. I don't
> think that this is the case here.

yes!

> Exception on overflow is only better than rounding if no result is
> better than an approximate result, that is, when the answer has to be
> correct no matter what. This requirement is better served by
> rational<unlimited> which does deliver a correct answer. I don't see how
> rational<limited>+exception-on-overflow is better.

after i failed to convince people that the current boost::rational is
useless, i agreed to make an exception-throwing version, in the secret hope
that it can be made the default -- and since in practice it will always
throw and never give a result, it will discourage people from using it

so far i have tried the following arguments:

- did a boost::rational implementation (moved to www.ccg.hu/pub/src/old/rat.cpp)
  which does +, -, * and / _at the same speed_ as int +, -, * and / -- if
  rational computation (without hardware support) can run at the same speed
  as integer arithmetic, then why was floating point (which is much worse
  than rational) ever introduced in the first place? isn't this suspicious?

- did a test with boost::rational, executing +, -, * and / with 10000 random
  numbers; + and - gave the correct result in only 44 cases, and * and / in
  79 cases -- does that sound usable? (a sketch of such a test follows this
  list)

- gave an example with small numbers (1.055.. + 1.021.. results in -0.229..),
  and argued that when you use floating point, 1.0/5.0 "overflows" in the
  same sense, but no-one would ever accept a float that only worked on
  numbers whose denominators are powers of 2 -- then why is it acceptable
  for us? (a snippet demonstrating the 1.0/5.0 case also follows this list)

- tried to argue that the mental image of "overflow" is in this case faulty:
  the overflow in the representation is a detail of our _implementation_
  (and 1.0/5.0 "overflows" in exactly the same sense, but no-one ever called
  _that_ an overflow) -- even worse, users will never be able to tell in
  advance when this overflow will happen (and thus avoid it), even if they
  know our implementation: the criterion is something like "the distinct
  prime factors of the numerator of one side and the denominator of the
  other side are small and few" (the addition sketch after this list shows
  where that criterion comes from)

what really makes this scary is that the very same rational proposal is
before the committee, and if it makes it through we will have a standard
component that cannot be made to work for fixed precision (builtins) and
cannot be implemented efficiently for unlimited precision (bigints)

sorry, but feeling frustrated :O)
andras

