From: Geoffrey Irving (irving_at_[hidden])
Date: 2006-06-09 21:19:58
On Sat, Jun 10, 2006 at 02:37:25AM +0200, Janek Kozicki wrote:
> Geoffrey Irving said: (by the date of Fri, 9 Jun 2006 09:09:25 -0700)
>
> > We use single precision floats, so 1e40 overflows. Here, the choice is
> > use O(1) numbers, sacrifice performance, or break completely.
>
> benchmark performance with double on your system. On mine (AMD X2, but
> running on 32bit platform) double is faster.
Indeed. A quick test on nocona has doubles beating floats by about 4% on a
small example. I'll have to run more tests to see whether that still holds
once things no longer fit in cache.
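For reference, the timing loop I have in mind is roughly this (a throwaway
sketch, not our benchmark code):

    #include <cstddef>
    #include <cstdio>
    #include <ctime>
    #include <vector>

    // Sum of squares over a large array, repeated to get measurable time.
    template<class T>
    T sum_sq(const std::vector<T>& x) {
        T s = 0;
        for (std::size_t i = 0; i < x.size(); ++i) s += x[i]*x[i];
        return s;
    }

    template<class T>
    double time_it(std::size_t n, int reps) {
        std::vector<T> x(n, T(1));
        T sink = 0;
        std::clock_t t0 = std::clock();
        for (int r = 0; r < reps; ++r) sink += sum_sq(x);
        double secs = double(std::clock() - t0) / CLOCKS_PER_SEC;
        std::printf("  (sink=%g)\n", double(sink)); // defeat dead-code elimination
        return secs;
    }

    int main() {
        const std::size_t n = std::size_t(1) << 24; // well past cache for doubles
        std::printf("float:  %g s\n", time_it<float>(n, 10));
        std::printf("double: %g s\n", time_it<double>(n, 10));
        return 0;
    }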
The main reason we switched to floats was memory and cache footprint
(on a large example, switching to floats sometimes gets us back under
32G), but superlinear time complexity seems to be kicking in again these
days, so maybe it's time to reconsider.
I just hope I don't have to doubly templatize all our code so that it stores
data as floats for cache throughput but computes in doubles.
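If it came to that, I'd expect something along these lines (a rough sketch;
the names are made up):

    #include <cstddef>
    #include <vector>

    // Storage type and compute type as separate template parameters.
    template<class StoreT, class ComputeT>
    struct mixed_array {
        std::vector<StoreT> data;   // narrow type in memory for cache throughput

        ComputeT sum() const {
            ComputeT s = 0;         // wide type in registers for accuracy
            for (std::size_t i = 0; i < data.size(); ++i)
                s += ComputeT(data[i]); // widen each element on load
            return s;
        }
    };

    typedef mixed_array<float, double> array_t; // floats in memory, doubles in arithmetic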
As for the original topic, I probably can't salvage my example unless I
cheat (say, by unrolling the power method and dropping the intermediate
normalizations, or by applying some sort of extremely naive high-order
polynomial regression). All the nice Taylor series examples seem to be unitless.
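For concreteness, the unnormalized power method cheat would look roughly
like this (a made-up 2x2 sketch; with enough steps the intermediates grow
like lambda^k and blow past single precision, which is the point):

    #include <cmath>
    #include <cstdio>

    int main() {
        // Dominant eigenvalue of A is (5+sqrt(5))/2 ~ 3.618.
        double A[2][2] = {{2, 1}, {1, 3}};
        double x[2] = {1, 1};
        // 100 unnormalized steps: entries grow like 3.618^100 ~ 7e55,
        // fine in double but far past single precision's ~3.4e38 limit.
        for (int k = 0; k < 100; ++k) {
            double y0 = A[0][0]*x[0] + A[0][1]*x[1];
            double y1 = A[1][0]*x[0] + A[1][1]*x[1];
            x[0] = y0; x[1] = y1;
        }
        // Normalize once at the end instead of every iteration.
        double norm = std::sqrt(x[0]*x[0] + x[1]*x[1]);
        std::printf("dominant eigenvector ~ (%g, %g)\n", x[0]/norm, x[1]/norm);
        return 0;
    }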
Thanks,
Geoffrey