
Boost : 
From: Samuel Krempp (krempp_at_[hidden])
Date: 2004-08-23 08:35:31
Looking inside boost::random, I noticed a few things that I think could be
done better, and since I could not find any mention of them in the list's
archive, I'll ask here.
1. the case brange / _range <= 4
This is implemented by direct rejection.
But the average number of rejects could be decreased (by a factor of up to 4)
by rescaling before rejection, i.e. (pseudo-code):
. k = (brange+1) / (_range+1)
. rejection_thresh = k * (_range+1)
. keep generating random numbers until result < rejection_thresh
. return result / k;
The probability of a reject is then less than _range/brange, and also always
less than 1/2, and it can reach 0 (when (brange+1) is a multiple of (_range+1)).
Was this possible speedup considered not worthwhile?
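The pseudo-code above could be sketched like this (a minimal illustration using std::mt19937 as the base generator; the function and variable names are mine, not Boost's, and all arithmetic is done in 64 bits so that brange+1 cannot overflow for a 32-bit base generator):

```cpp
#include <cstdint>
#include <random>

// Rescaled rejection: the base generator produces uniform values in
// [0, brange]; we want uniform values in [0, range], with range <= brange.
std::uint64_t rescaled_rejection(std::mt19937& gen, std::uint64_t range)
{
    const std::uint64_t brange =
        static_cast<std::uint64_t>(gen.max()) - gen.min();
    // k = (brange+1) / (range+1); each output value corresponds to
    // exactly k raw values below the rejection threshold.
    const std::uint64_t k = (brange + 1) / (range + 1);
    const std::uint64_t thresh = k * (range + 1); // rejection_thresh
    for (;;) {
        const std::uint64_t r =
            static_cast<std::uint64_t>(gen()) - gen.min();
        if (r < thresh)
            return r / k; // uniform over [0, range], no bias
    }
}
```

Since thresh is at least (brange+1)/2, each iteration accepts with probability above 1/2, so the expected number of draws is below 2 regardless of range.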
2. the case brange / _range > 4
Since direct rejection could mean very many retries in this case, it is
handled by uniform_smallint (which always uses exactly one number
generation).
But with the rejection scheme above there is no need for this special case,
since the average number of number generations will be fairly close to 1,
and the generator's speed will be equivalent to uniform_smallint's.
Moreover, a rejection scheme completely avoids any added bias (contrary to
uniform_smallint, whose added bias can reach 1/32, making it unsuitable for
high-precision statistics programs).
Why not use such a "rescaled" rejection scheme as I describe rather than
uniform_smallint's possibly biased computation?
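To make the bias concrete, here is a toy example of my own (not Boost's actual code; uniform_smallint applies a scaling formula rather than a modulo, but the same kind of bias arises whenever brange+1 is not a multiple of _range+1). Take brange = 31 and map each raw value r in [0, 31] to r % 3 to get results in [0, 2]:

```cpp
#include <array>
#include <cstdint>

// Count how often each output in [0, 2] occurs when every raw value
// in [0, 31] is mapped by a non-rejecting modulo reduction.
std::array<int, 3> modulo_bias_counts()
{
    std::array<int, 3> counts{};
    for (std::uint64_t r = 0; r <= 31; ++r)
        ++counts[r % 3];
    // Because 32 is not a multiple of 3, the counts come out 11, 11, 10:
    // the three outputs cannot all be equally likely.
    return counts;
}
```

A rejection scheme would instead discard the two leftover raw values (30 and 31), leaving exactly 10 raw values per output and hence an exactly uniform result.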
 Samuel
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk