From: Kalle Rutanen (kalle_rutanen_at_[hidden])
Date: 2006-11-29 01:20:10
(This is my first posting, hope I fit the etiquette)
I was looking at the code for the computation of the sinc function in
sinc.hpp. If the sinc is evaluated at a certain distance away from zero,
then it is simply computed sin(x)/x. Near zero the computation is done by
using a Taylor series expansion of sinc at zero:
sin(x) = x - x^3/3! + x^5/5! - ...
sinc(x) = sin(x)/x = 1 - x^2/3! + x^4/5! - ...
The implementation cuts the series to a quartic (fourth degree)
sinc(x) ~= 1 - x^2/6 + x^4/120
All fine this far. Then it has been noticed that near zero the quadratic and
quartic terms become smaller than the smallest floating point number and
thus evaluate to zero, so they need not be computed. This is a reasonable
optimization. The implementation uses the following bounds:
e0 = smallest floating point number
e2 = sqrt(e0)
e4 = sqrt(e2)
They are used in the following manner:
if |x| < e0 then return 1
if |x| < e2 then return 1 - x^2/6
if |x| < e4 then return 1 - x^2/6 + x^4/120
otherwise return sin(x)/x;
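The scheme above can be sketched like this (the identifiers are my own, not the actual sinc.hpp ones, and I take e0 to be machine epsilon, which is one plausible reading of "smallest floating point number"):

```cpp
#include <cmath>
#include <limits>

// Sketch of the piecewise sinc evaluation described above.
// Assumption: e0 = machine epsilon; the real implementation may differ.
double sinc_approx(double x)
{
    static const double e0 = std::numeric_limits<double>::epsilon();
    static const double e2 = std::sqrt(e0);
    static const double e4 = std::sqrt(e2);

    const double ax = std::fabs(x);
    if (ax < e0) return 1.0;                       // all terms negligible
    if (ax < e2) return 1.0 - x * x / 6.0;         // quartic term negligible
    if (ax < e4) return 1.0 - x * x / 6.0
                        + (x * x) * (x * x) / 120.0;
    return std::sin(x) / x;                        // far enough from zero
}
```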
But are these bounds right?
I'd derive them in the following manner:
|x^2/6| < e0
|x| < sqrt(e0 * 6)
=> e2' = sqrt(e0 * 6)
|x^4/120| < e0
|x| < pow(e0 * 120, 1/4)
=> e4' = pow(e0 * 120, 1/4)
The connection between the old and new values is:
e2' = e2 * sqrt(6) ~= e2 * 2.45
e4' = e4 * pow(120, 1/4) ~= e4 * 3.31
Clearly, because e2 < e2' and e4 < e4', this under-estimation does not cause
any wrong results. But since this bound check is meant as an optimization,
why not use bounds as large as possible?
What do you think: is this on purpose or by accident?
-- Kalle Rutanen http://kaba.hilvi.org
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk