RE: [Boost-Users] Re: help with mersenne_twister

-----Original Message----- From: Jon Agiato [mailto:JonAgiato@nyc.rr.com]
Thanks again for all the help! I was looking through random-generators.html and see that, in addition to the Mersenne Twister, Linear Congruential, and Lagged Fibonacci, there are five others: const_mod, additive_combined, shuffle_output, rand48, and inversive_congruential. I am assuming these are all good RNGs to use for comparison; am I wrong?
const_mod: not an RNG; it is an implementation helper used by other RNGs.

rand48: uses the Linear Congruential algorithm.

additive_combined, shuffle_output: modifications of the Linear Congruential algorithm intended to improve it; both therefore take Linear Congruential generator(s) as input.
And lastly (don't want to wear out my welcome.. lol), do you think there would be any benefit to using the advanced RNG classes in my research?
Actually, I think it might be a good idea to do so for your project. The specializations provided by Boost.Random are well-known RNGs that have been published in research papers and used extensively. There are undoubtedly many other good specializations out there, but few are used (mainly because it is very hard to define what a "good RNG" is).
Anyway, what I was thinking is that you might write some specializations of your own that are known to be bad RNGs, and discuss why they are bad. Here is where my knowledge about RNGs ends; I'm sure there are known bad RNGs, but I don't know what they are.
-Steve
That's a great idea, Steve, thanks! I am rather new to RNG research but agree that a comparison of sorts would be good. Could you or anyone here provide assistance, perhaps a short example of how one would use one of the advanced RNGs Boost provides? I use Boost a lot, and I think one of the things it could really benefit from is more in-depth documentation, especially for those of us trying to pick up the library as quickly as possible. Then again, I know that 1_30_0 should be out soon, so the developers already have enough on their plates.
Look at the docs again; they explain what is necessary to specialize the generic algorithms. Note that you should be familiar with the generic RNG algorithms before attempting to specialize them. For example, looking at the Linear Congruential RNG (the only RNG algorithm I'm familiar with), the algorithm is, as specified in the Boost.Random docs:

    x(n+1) := (a * x(n) + c) mod m

and boost::random::linear_congruential is declared as:

    template<class IntType, IntType a, IntType c, IntType m, IntType val>
    class linear_congruential;

with a constructor of:

    explicit linear_congruential(IntType x0 = 1);

Given this, the specialization boost::minstd_rand, declared as:

    typedef random::linear_congruential<long, 48271L, 0, 2147483647L,
        399268537L> minstd_rand;

uses the specific Linear Congruential algorithm:

    x(n+1) := (48271 * x(n)) mod 2147483647

Again, x(0) is given by the constructor. (The last template argument, val, is used for implementation validation and does not affect the algorithm.)

You can experiment with adding your own specializations, or with ones defined by other people that are not found in Boost.Random. For example, here's one from "Numerical Recipes in C":

    typedef boost::random::linear_congruential<long, 69621L, 0,
        2147483647L, 0> ParkMiller;

As far as the Boost.Random docs go, I think they are quite good (with the exception that the "val" template parameter is inconsistently documented). Most users of this library just need a random number generator, and the comparison chart, with its general recommendation of mt19937, fits the bill for anyone who wants to use the library quickly. What you want to do is not just quickly use a single RNG; you have much more extensive needs, covering a larger range of RNG algorithms. Naturally, more time will be necessary to learn all the algorithms and the Boost implementations of them.

-Steve
participants (1)

scleary＠jerviswebb.com