
From: Guillaume Melquiond (guillaume.melquiond_at_[hidden])
Date: 2008-01-15 11:00:31


On Tuesday 15 January 2008 at 10:10 +0000, John Maddock wrote:
> Guillaume Melquiond wrote:
> > I'm a bit reluctant to apply your fix though, since it will not work
> > with big unsigned values. I need to think a bit more about it.
>
> Right, if the unsigned value is greater than the max signed value then bad
> things happen; the only thing I can think of is to check for that case, and
> temporarily change the rounding mode rather than negating for that special
> case.

This seems like a good solution. And in order to avoid this branch, a
bigger signed type could be used instead for small unsigned types. I
will have to look into the template machinery for selecting the
conversion functions.
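
For what it is worth, here is a minimal sketch of the combination
discussed above (plain <cfenv> code with a made-up function name, not the
library's actual rounding policies): keep the usual negate-and-convert
path for values that fit in the signed type, and fall back to a temporary
rounding mode change only for the unsigned values that do not. For
unsigned types smaller than long long, widening to long long first would
remove the branch entirely.

#include <cfenv>
#include <climits>

// Sketch only: convert an unsigned long long to a double rounded toward
// -infinity, assuming the FPU is otherwise kept in round-toward-+infinity
// (so downward rounding is normally obtained by converting the negated
// value and negating the result).  Real code would also need FENV_ACCESS.
double conv_down(unsigned long long u)
{
    if (u <= static_cast<unsigned long long>(LLONG_MAX)) {
        // Safe to negate: -(-u rounded up) is exactly u rounded down.
        long long s = static_cast<long long>(u);
        volatile double t = static_cast<double>(-s);
        return -t;
    }
    // Too big to negate: change the rounding mode for this one conversion.
    const int old_mode = std::fegetround();
    std::fesetround(FE_DOWNWARD);
    volatile double d = static_cast<double>(u);
    std::fesetround(old_mode);
    return d;
}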

> I put those in a separate file for now: see
> http://svn.boost.org/trac/boost/browser/sandbox/interval_math_toolkit/boost/math/bindings/interval.hpp
>
> It contains interval versions of:
>
> floor
> ceil
> fabs
> pow
> ldexp
> frexp

Hmm... floor and ceil are two functions we did not consider for
standardization. They would make sense though; I will have to take a
look at that. All the other functions (and some more from C99),
however, are in the proposal and in the new design of the library.

Concerning pow, I notice that your function does not handle negative
inputs for the lhs argument. Are you mathematically defining pow(x,y) as
exp(y*ln(x))?
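
(For reference, the naive interval version built that way, sketched below
for a generic interval type that is assumed to provide exp, log and
multiplication, is only meaningful when the lower bound of x is positive;
a negative left-hand side would need separate treatment, e.g. integer
exponents by repeated squaring.)

// Sketch only: pow(x, y) defined as exp(y * ln(x)); exp and log are
// assumed to be interval functions found by argument-dependent lookup.
template <class I>
I naive_pow(const I& x, const I& y)
{
    return exp(y * log(x));
}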

> Of these fabs, ldexp and frexp are trivial. pow is fairly straightforward I
> hope (but requires changes to the traits concept as previously mentioned).
> The open question with floor and ceil is what should happen if the
> floor/ceil of the two bounds don't match? Currently it just returns the new
> interval, but I can imagine some situations where this would be an error.

From an interval arithmetic point of view, this is never an error, since
the functions on real numbers are well defined, and so are their
interval extensions.

The only issue lies in the discontinuity when the two bounds are
different, which impedes the applicability of Brouwer's theorem (or any
of its variants). The proposal solves this issue by providing two
interval functions whenever the real function has discontinuities. The
second function takes a boolean reference as an additional parameter
and sets it to true whenever a discontinuity is encountered.

This makes it more flexible than an exception, since the interval result
is nonetheless meaningful and can be used in later computations.
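
Concretely, the scheme looks something like the following sketch (the
interval type and its members are stand-ins for illustration, not the
proposal's actual wording):

#include <cmath>

// A hypothetical interval-of-double type with public bounds.
struct interval_d { double lo, hi; };

// Plain interval extension of floor.
interval_d floor(interval_d x)
{
    return { std::floor(x.lo), std::floor(x.hi) };
}

// Discontinuity-reporting version: the flag is set to true when a jump of
// floor lies inside x, i.e. when the two rounded bounds differ.  It is
// only ever set, so a single flag can accumulate over a whole computation.
interval_d floor(interval_d x, bool& discontinuous)
{
    interval_d r = { std::floor(x.lo), std::floor(x.hi) };
    if (r.lo != r.hi)
        discontinuous = true;
    return r;
}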

> > These two policies are mainly for use with some "smart" mathematical
> > libraries that properly compute elementary functions according to the
> > current rounding mode. As surprising as it may seem, such libraries do
> > exist. There is one from Sun for instance.
>
> Nod, I realise that. BTW it would be nice to have ready-made traits classes
> for these available :-)

This is something I have yet to tackle in the new design. All the other
policies have been subsumed by simpler mechanisms, but not this one.

> > When using an elementary function from the standard libm, there could
> > be an additional y*(1+eps)+eta in order to ensure that the computed
> > values are rounded outwards. The cost would be negligible with respect to the
> > rounding mode change.
>
> Yep, assuming the std lib is actually accurate to 1eps of course ;-)

Right (which is a dangerous assumption). I was actually thinking of
generic eps and eta values that would depend on the implementation.
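
As an illustration, the correction amounts to something like the sketch
below; the eps and eta constants are placeholders, to be replaced by
implementation-dependent bounds that also absorb the rounding of the
correction itself.

#include <cmath>
#include <utility>

// Sketch: enclose exp(x) using an ordinary round-to-nearest libm call,
// assuming its result lies within a relative error eps and an absolute
// error eta of the true value.
std::pair<double, double> exp_enclosure(double x)
{
    const double eps = 0x1p-51;    // placeholder relative error bound
    const double eta = 0x1p-1070;  // placeholder absolute error bound
    const double y = std::exp(x);
    return { y * (1 - eps) - eta,    // candidate lower bound
             y * (1 + eps) + eta };  // candidate upper bound
}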

> But this would be fairly trivial to add with nextbefore/nextafter I guess?

Good point. For 1 ulp changes, they could indeed be used. It just
happens that I am not fond of these two functions (does nextbefore even
exist?), since they take a second parameter which I have a hard time
filling in. The IEEE-754R revision defines the nextup and nextdown
functions, which are a lot more useful in my opinion.
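
For the 1-ulp variant, the nextafter-based code would look something like
this sketch; the infinities are precisely the second parameter being
complained about, and nextup/nextdown would make the direction implicit.

#include <cmath>
#include <limits>
#include <utility>

// Sketch: widen a nearest-rounded result by one ulp in each direction.
std::pair<double, double> widen_one_ulp(double y)
{
    const double inf = std::numeric_limits<double>::infinity();
    return { std::nextafter(y, -inf),    // one ulp down
             std::nextafter(y, +inf) };  // one ulp up
}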

> > In the proposal for standardization, we only support floating-point
> > bounds, conversions are done through the constructors, and they are
> > "explicit" when there is a precision loss (that is a potential
> > widening).
> >
> > Perhaps this could be a way to do it. Only allow conversions in Boost if
> > there is no loss. For example, int and unsigned could not be used with
> > interval<float>, but they could be used with interval<double> on
> > computers where they are 32 bits wide.
>
> This still seems overly restrictive for generic code, consider this use
> case:
>
> I have polynomial coefficients that are integers, sometimes small integers
> (int's), sometimes big ones (long long's). How should I best store those
> coefficients and evaluate the polynomial?
>
> To me it's a no-brainer that integers should be stored as integers: there
> is no more accurate way to represent them.
>
> Neither is there any more space-efficient way to represent them IMO.

This I agree with. Storage should be performed with whatever type fits
the data. But that does not mean that computations should directly be
performed on storage types, in my opinion. Requiring a conversion to a
computable type beforehand does not seem like a bad design.

> If the code is called with a type wider than the integer-coefficient type
> then we have no problem, if it's called with a narrower type then we have
> rounding loss, but crucially, there is still no better way of performing the
> arithmetic, assuming it is the user's intent that it be carried out at the
> precision given. Indeed for built in floating point types:
>
> int * T
>
> may well be carried out at double precision internally in the FPU even if T
> is float, whereas depending on the compiler settings:
>
> static_cast<T>(int) * T
>
> may be evaluated by converting the int to a float first.

I am not sure I get your point, since this uncertainty in the evaluation
scheme seems like a very good reason for putting as many hints to the
compiler as possible in the code.

> I guess an explicit conversion is still better than nothing, but would
> require me to obfuscate my code with a large number of type-casts :-(
>
> Given that an implicit conversion from an integer type will result in an
> exact interval representation even if there is precision loss, what do we
> lose by making this conversion implicit? It's no worse than the existing
> implicit conversion from long double to interval<double> is it? Or am I
> missing something?

No, you are right. But I was referring to the proposal and not to the
existing library. In particular, the proposal fixes a few design
"issues" people encountered in the library. For instance, people felt
it was dangerous to implicitly get non-singleton (although correct)
intervals from scalar values. So the proposal states:

template<> class interval<double> {
public:
  interval(double);               // implicit: the conversion is exact
  explicit interval(long double); // explicit: potential widening
};
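
In practice, and assuming the declarations above, that gives:

interval<double> a = 2.5;    // fine: the conversion from double is exact
interval<double> b = 0.1L;   // rejected: copy-initialization cannot use
                             // the explicit constructor
interval<double> c(0.1L);    // accepted, but the widening is now explicit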

Best regards,

Guillaume

