
From: Alberto Ganesh Barbati (abarbati_at_[hidden])
Date: 2006-04-23 21:27:10


John Maddock wrote:
>
>> 2) What are the comparison algorithms to include?
>
> I've already made a start, well not really a start, just an "I needed this
> functionality to move on", throw-it-together sort of start, here:
> http://freespace.virgin.net/boost.regex/toolkit/libs/math/doc/html/math_toolkit/toolkit.html#math_toolkit.toolkit.relative_error_and_testing

Thanks a lot for the link. Very interesting.

>
> Relative error in particular is a tricky one, you have to deal with:
>
> One or both values may be zero.
> One or both values may be denorms.
> One or both values may be infinities.

Sure. The advantage of having a library solution is that it can handle
such corner cases better than a naive implementation would.
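
Just to make the discussion concrete, here is the kind of thing I have
in mind: a rough, hypothetical sketch (invented names, not the code
behind your link) of a relative-error measure that deals with those
corner cases explicitly:

    #include <algorithm>
    #include <cmath>
    #include <limits>

    template <class T>
    T relative_error(T a, T b)
    {
        const T tiny = (std::numeric_limits<T>::min)();  // smallest normal value
        const T huge = (std::numeric_limits<T>::max)();

        // Both values zero or denormal: treat them as equal rather than
        // dividing by (almost) nothing.
        if (std::fabs(a) < tiny && std::fabs(b) < tiny)
            return T(0);
        // Only one of them is effectively zero: worst possible error.
        if (std::fabs(a) < tiny || std::fabs(b) < tiny)
            return huge;
        // Infinities compare equal only to an infinity of the same sign.
        if (std::isinf(a) || std::isinf(b))
            return a == b ? T(0) : huge;
        // Ordinary case: scale the difference by the smaller magnitude,
        // i.e. report the more pessimistic of the two relative errors.
        return std::fabs(a - b) / (std::min)(std::fabs(a), std::fabs(b));
    }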

>
> So I guess if you wanted a really super-duper handle it all version you
> would make it policy based:

I agree that we should provide as much customization as possible, but
no more than necessary. In particular, I would use policy classes only
where they are really needed, preferring several small functors to a
single functor with multiple policies. The task is perceived by a lot
of programmers as trivial (although it isn't), and if the class is too
complicated people might get discouraged and not use it at all.
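
For instance (again just a hypothetical sketch, reusing the
relative_error helper from above), I would rather offer two small,
separately named functors than one functor configured through policy
parameters:

    #include <cmath>

    // relative_error<T> as sketched earlier in this message.

    template <class T>
    struct close_by_relative_error
    {
        explicit close_by_relative_error(T tolerance) : tol(tolerance) {}
        bool operator()(T a, T b) const { return relative_error(a, b) <= tol; }
        T tol;
    };

    template <class T>
    struct close_by_relative_or_absolute_error
    {
        close_by_relative_or_absolute_error(T tolerance, T zero_threshold)
            : tol(tolerance), zero_tol(zero_threshold) {}
        bool operator()(T a, T b) const
        {
            // Near zero a relative error is meaningless, so fall back to
            // an absolute check.
            if (std::fabs(a) < zero_tol && std::fabs(b) < zero_tol)
                return std::fabs(a - b) <= zero_tol;
            return relative_error(a, b) <= tol;
        }
        T tol, zero_tol;
    };

Each name says what the comparison does, which I suspect matters more
to the average user than the flexibility of a policy parameter.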

>
> * Values below a certain threshold are regarded as zero (so all denorms are
> equivalent for example).

Good point.

> * Values above a threshold are regarded as effectively infinity.

Is this necessary? I mean, it might be useful if we really wanted to
compute a meaningful estimate of the relative error, but in this case
we simply want to check whether the relative error is smaller than a
(hopefully small) value.

> * Absolute errors are used if the values are small enough.

That might be a different algorithm. I would keep both: plain relative
error, and relative error with a fall-back to absolute error when the
values are small enough. They both sound right, in different use cases.
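
To show what I mean by different use cases, a small usage sketch
(reusing the hypothetical functors from my previous sketch):

    #include <cassert>

    int main()
    {
        // Plain relative check: fine when the expected value is well away
        // from zero and we care about matching significant digits.
        close_by_relative_error<double> close(1e-12);
        assert(close(3.141592653589793, 3.1415926535897936));

        // Relative-or-absolute check: better when the expected value may
        // legitimately be (near) zero, where a relative error says nothing.
        close_by_relative_or_absolute_error<double> close_or_zero(1e-12, 1e-300);
        assert(close_or_zero(0.0, 1e-305));
        return 0;
    }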

>
> Which of these you actually want to use in your application depend on the
> QOI of the component you're testing I guess.
>
> Personally I treat all denorms as zeros because if the result is a denorm
> there probably isn't enough information content throughout the calculation
> that led to the denorm to get an accurate value (if you see what I mean).

I think I understand, and I partially agree. However, it's not the fact
that denormals are probably already inaccurate that worries me (the
library should not make assumptions about how the data was obtained),
but rather that dividing by such a small number might produce an
inaccurate, and therefore useless, result.
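
As a small illustration of what worries me (assuming IEEE denormals and
no flush-to-zero), deep in the denormal range so few significand bits
are left that differences get quantized away and the computed ratio can
be completely off:

    #include <cmath>
    #include <cstdio>

    int main()
    {
        double a = 1e-320;          // denormal: roughly 11 significand bits
        double b = a * (1 + 1e-5);  // meant to differ from a by 0.001%...
        // ...but that difference cannot be represented at denormal
        // precision, so b ends up bit-identical to a and the computed
        // relative error is 0 instead of roughly 1e-5.
        double rel = std::fabs(a - b) / std::fabs(a);
        std::printf("a == b: %d, relative error = %g\n", int(a == b), rel);
        return 0;
    }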

>
> Anyway, just my 2c worth, please do feel free to jump in with something
> better if you want!
>

Thanks for your feedback,

Ganesh

