
From: John Maddock (john_at_[hidden])
Date: 2008-01-15 05:10:11


Guillaume Melquiond wrote:
> I'm a bit reluctant to apply your fix though, since it will not work
> with big unsigned values. I need to think a bit more about it.

Right: if the unsigned value is greater than the maximum signed value then
bad things happen. The only thing I can think of is to check for that case
and temporarily change the rounding mode, rather than negating, for that
special case.
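Something along these lines is the kind of thing I have in mind (just a
sketch, the function name is my own invention, and it assumes the
integer-to-double conversion honours the current rounding mode, which in
practice needs FENV_ACCESS handled with care):

#include <fenv.h>
#include <boost/numeric/interval.hpp>

// Build an interval<double> guaranteed to contain u, without negating
// (negation can't work once u exceeds the largest signed value).
boost::numeric::interval<double> from_unsigned(unsigned long long u)
{
   int const saved = fegetround();
   fesetround(FE_DOWNWARD);
   double lo = static_cast<double>(u); // rounded towards -infinity
   fesetround(FE_UPWARD);
   double hi = static_cast<double>(u); // rounded towards +infinity
   fesetround(saved);
   return boost::numeric::interval<double>(lo, hi);
}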

>> The floor/ceil functions are the problematic ones here; these should
>> probably throw in the case that the two bounds don't yield the same
>> result :-)
>
> I didn't see anything related to floor and ceil in your patch. Which
> issues are you referring to?

I put those in a separate file for now: see
http://svn.boost.org/trac/boost/browser/sandbox/interval_math_toolkit/boost/math/bindings/interval.hpp

It contains interval versions of:

floor
ceil
fabs
pow
ldexp
frexp

Of these, fabs, ldexp and frexp are trivial. pow is fairly straightforward,
I hope (but requires changes to the traits concept, as previously
mentioned). The open question with floor and ceil is what should happen if
the floor/ceil of the two bounds don't match? Currently it just returns the
new interval, but I can imagine some situations where this would be an error.
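To make the question concrete, the shape of the code is roughly this (a
sketch only, not the exact contents of interval.hpp):

#include <boost/numeric/interval.hpp>
#include <cmath>

template <class T, class Policies>
boost::numeric::interval<T, Policies>
floor(const boost::numeric::interval<T, Policies>& x)
{
   using std::floor;
   T lo = floor(x.lower());
   T hi = floor(x.upper());
   // Open question: when lo != hi the argument straddles an integer
   // boundary. Should we return [lo, hi] as below, or treat it as an
   // error?
   return boost::numeric::interval<T, Policies>(lo, hi);
}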

There's also an interval version of boost::math::fpclassify in there (and
hence isnan, isinf etc.), which broadly follows the same classification
rules that C99 suggests for complex numbers in its appendix.
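Very roughly the idea is something like this (again only a sketch, the
precise rules in the file may differ):

#include <boost/math/special_functions/fpclassify.hpp>
#include <boost/numeric/interval.hpp>
#include <cmath>

template <class T, class Policies>
int fpclassify(const boost::numeric::interval<T, Policies>& x)
{
   int lo = (boost::math::fpclassify)(x.lower());
   int hi = (boost::math::fpclassify)(x.upper());
   if (lo == FP_INFINITE || hi == FP_INFINITE)
      return FP_INFINITE; // an infinite bound dominates, as in C99's
                          // treatment of complex numbers
   if (lo == FP_NAN || hi == FP_NAN)
      return FP_NAN;
   if (lo == FP_ZERO && hi == FP_ZERO)
      return FP_ZERO;
   if (lo == FP_SUBNORMAL || hi == FP_SUBNORMAL)
      return FP_SUBNORMAL;
   return FP_NORMAL;
}

isnan, isinf and friends then just test the result of that.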

>> The most important addition is for pow, which requires modifying the
>> transcendental function traits class support.
>
> That is fine. The support class was also extended for some new
> functions like expm1 and the like.

OK good.

> These two policies are mainly for use with some "smart" mathematical
> libraries that properly compute elementary functions according to the
> current rounding mode. As surprising as it may seem, such libraries do
> exist. There is one from Sun for instance.

Nod, I realise that. BTW it would be nice to have ready-made traits classes
for these available :-)

>> * Using rounded_transc_exact as a "dummy" class with
>> rounded_arith_opp produced quite erroneous results with std::exp; I
>> gave up testing it after that. I assume it could be used in
>> conjunction with rounded_transc_std OK, but then normal arithmetic
>> would be slower?
>
> This policy is meant to be used when the _exact one is used for the
> arithmetic operations. For example, when performing non-guaranteed
> computations on wide intervals, or when using an arbitrary precision
> type for bounds.
>
> As a summary, mixing rounding policies probably leads to unexpected
> results, and calling default elementary functions with a non-exact
> policy may also lead to unexpected results. I should redo the
> documentation, since these points may not be that clear.

OK.

>> So providing a traits class that resets the rounding mode to nearest
>> before making a std lib call seemed sensible, on the assumption that
>> these calls occur much less frequently than normal arithmetic, and
>> that the cost of the call will swamp the cost of the rounding mode
>> change anyway. I haven't investigated using a correctly rounded
>> solution yet, but that will come :-)
>
> Actually, there is code in the new version in order to support the
> CRlibm library for computing correctly-rounded results of elementary
> functions. For efficiency reasons, this library only works in rounding
> to nearest (while the result is rounded in the wanted direction), so we
> are using a scheme similar to the one you are describing here.
>
> When using an elementary function from the standard libm, there could be
> an additional y*(1+eps)+eta in order to ensure that the computed values
> are rounded outwards. The cost would be negligible with respect to the
> rounding mode change.

Yep, assuming the std lib is actually accurate to 1 eps of course ;-)

But this would be fairly trivial to add with nextbefore/nextafter I guess?
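Something like this presumably (a sketch only, with exp standing in for
whatever libm call is being wrapped, and the one-ulp widening is only sound
if that call really is accurate to within an ulp):

#include <math.h>
#include <limits>
#include <boost/numeric/interval.hpp>

// Widen a nearest-rounded libm result outwards by one ulp in each
// direction so that the true value is (hopefully) enclosed.
inline boost::numeric::interval<double> exp_outward(double x)
{
   double y = exp(x); // rounded to nearest by the libm
   double lo = nextafter(y, -std::numeric_limits<double>::infinity());
   double hi = nextafter(y, std::numeric_limits<double>::infinity());
   return boost::numeric::interval<double>(lo, hi);
}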

>> Let me know if you want any more information about these patches.
>> I'm not sure how these fit with the revised std lib design though?
>
> Except for the test_input.hpp part, they would probably apply with
> almost no changes. But I am still not sure about conversions.
>
> In the proposal for standardization, we only support floating-point
> bounds, conversions are done through the constructors, and they are
> "explicit" when there is a precision loss (that is, a potential widening).
>
> Perhaps this could be a way to do it. Only allow conversions in Boost if
> there is no loss. For example, int and unsigned could not be used with
> interval<float>, but they could be used with interval<double> on
> computers where they are 32 bits wide.

This still seems overly restrictive for generic code; consider this use
case:

I have polynomial coefficients that are integers, sometimes small integers
(ints), sometimes big ones (long longs). How should I best store those
coefficients and evaluate the polynomial?

To me it's a no-brainer that integers should be stored as integers: there
is no more accurate way to represent them.

Neither is there any more space-efficient way to represent them IMO.

If the code is called with a type wider than the integer-coefficient type
then we have no problem; if it's called with a narrower type then we have
rounding loss, but crucially there is still no better way of performing the
arithmetic, assuming it is the user's intent that it be carried out at the
precision given. Indeed, for built-in floating-point types:

int * T

may well be carried out at double precision internally in the FPU even when
T is float, whereas, depending on the compiler settings:

static_cast<T>(int) * T

may be evaluated by converting the int to a float first.
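In other words, what I'd like to be able to write is simply something like
this (the names are mine, purely to illustrate the use case):

#include <cstddef>

// Horner evaluation with the coefficients left in their original
// integer type: the c[k - 1] terms mix integers directly with T,
// which is the kind of int/T arithmetic under discussion.
template <class T, class Coefficient, std::size_t N>
T evaluate_polynomial(const Coefficient (&c)[N], const T& x)
{
   T result(c[N - 1]);
   for (std::size_t k = N - 1; k > 0; --k)
      result = result * x + c[k - 1];
   return result;
}

For built-in T that compiles today and lets the compiler/FPU carry out the
arithmetic at whatever precision it sees fit; the question is what the
equivalent should look like when T is an interval type.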

I guess an explicit conversion is still better than nothing, but it would
require me to obfuscate my code with a large number of type-casts :-(

Given that an implicit conversion from an integer type will still result in
an interval that exactly encloses the value even when there is precision
loss, what do we lose by making this conversion implicit? It's no worse
than the existing implicit conversion from long double to interval<double>,
is it? Or am I missing something?

Just trying to understand the issues. Yours, John.

