Subject: [boost] [Math] Rationale Behind Epsilon Values
From: Axel Ismirlian (aismirl_at_[hidden])
Date: 2014-05-27 12:19:50
Hello,
I am new to this mailing list, so it's possible the format of my question
is a little off. My team has been working on getting Boost to run on
PPC64-LE. Many tests in Boost rely on the machine epsilon value. Why was
this value chosen to determine the success or failure of a particular
test, given that it is hardware specific? More specifically, the long
double type is very different on the two platforms.
However, by forcing the x86 long double epsilon value on the PPC machine we
were able to get many of the previously failing tests to pass. This change
was inspired by the fact that a similar fix already exists in the code for
the Darwin platform, although the value returned for Darwin is not the x86
value. Also, LDBL_MANT_DIG is the same on both the Darwin platform and our
platform (PPC 64-LE), and is equal to 106. How was the long double epsilon
value for the Darwin platform determined? For reference, the actual code is
in boost/math/tools/precision.hpp. We were also wondering whether this is a
valid solution to the problem; part of us feels it doesn't address the
underlying problem.
Sincerely,
-Axel
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk