Subject: Re: [boost] [multiprecision] General/design query about which conversions should be implicit/explicit
From: Marc Glisse (marc.glisse_at_[hidden])
Date: 2014-05-31 08:37:16
On Sat, 31 May 2014, John Maddock wrote:
> I have an open bug report https://svn.boost.org/trac/boost/ticket/10082 that
> requests that conversions from floating point to rational
> multiprecision types be made implicit (currently they're explicit).
But the doc says they are implicit.
> Now on the one hand the bug report is correct: these are non-lossy
> conversions, so there's no harm in them being implicit. However, it still
> sort of feels wrong to me; the only arguments against it I can come up with are:
> 1) An implicit conversion lets you assign values such as 0.1 to a rational
> (which actually leads to 3602879701896397/36028797018963968, not 1/10), whereas
> making the conversion explicit at least forces you to use a cast (or an
> explicit construction).
The problem is with people writing 0.1. If they mean the exact value, they
have already lost. What Boost.Multiprecision does later is not that
relevant, and it seems wrong to me to penalize a library because some
users don't understand the basics (and their program is likely broken for
a number of other reasons).
> 2) Floating point values can result in arbitrarily large integer parts to the
> rational, effectively running the machine out of memory. Arguably the
> converting constructor should guard against that, though frankly exactly how
> is less clear :-(
Er, that might be true if you include mpfr numbers in "floating point",
but if you only consider double, the maximum size of the numerator is
extremely limited (roughly 128 bytes). Even for a binary128 it can't be very big (about 2 KB).
There could be good reasons for not making it implicit, but I am not
convinced by these two.
-- Marc Glisse
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk