
From: Fernando Cacciola (fernando_cacciola_at_[hidden])
Date: 2003-07-08 09:13:19


Rozental, Gennadiy <gennadiy.rozental_at_[hidden]> wrote in message
news:1373D6342FA1D4119A5100E029437F6405E1F9CC_at_clifford.devo.ilx.com...
> > A half-way solution is to have something like:
> >
> > BOOST_CHECK_EQUAL_NUMBERS(x,y,IsEqual)
> >
> > and let users specify their own Predicates.
>
> There is BOOST_CHECK_PREDICATE
>
Yes, I know.
My point was that with BOOST_CHECK_EQUAL_NUMBERS() the test library
could output something readable of the form:

"numbers x and y are not approximately equal"

It could even add to the output something of the form:

" according to " << Pred ;

which would use the comparator's operator<< so it can
output the relevant information, such as epsilon, scale,
etc.

> > By default, the Test library could provide
> > a straight-forward ABSOLUTE-ERROR comparator:
>
> By default, the Test library provides a relative error comparator, which,
> according to my understanding, is more correct.
>
But there is no such thing as a "more correct" way to compare
FP values at the context-free level of a test library.
You know already that relative errors have to be scaled to be
meaningful, but choosing the right scaling is the complex part.
A default semantic that simply scales epsilon() by either of the
arguments will be simply unusable for most practical tests,
because actual errors will easily exceed that; yet OTOH,
supplying a factor to increase the scaling will mainly lead users
to the problematic Illusion of Simplicity that brought us
to this discussion.

A comparison based on absolute errors is pessimistic, but for unbiased
comparisons it often results in what is expected, much more often
than relative-error based comparisons do.
It isn't smart, but it is easy to understand.

BTW: The default comparator I showed before might better be named
"DifferAtMostBy"

Fernando Cacciola


Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk