From: John Maddock (john_at_[hidden])
Date: 2008-04-24 12:03:38
Johan Råde wrote:
> A typical data mining scenario might be to calculate the cdf for the
> t- or F-distribution for
> each value in an array of say 100,000 single or double precision
> floating point numbers.
> (I tend to use double precision.)
> Anything that could speed up that task would be interesting.
Nod, the question is what combinations of arguments actually get passed to
the incomplete beta: if the data isn't unduly sensitive, what would be really
useful is a log of those values, so we can see which parts of the
implementation are getting hammered the most.
> SSE parallelism, if possible, would be interesting.
> Multi-core parallelism is less interesting,
> that is already easy to do, for instance using OpenMP.
> Then there are the issues brought up by Stephen Nuchia:
Nod, there's a lot that can be done, but ground is going to be won a yard at
a time :-(
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk