
From: Mark Borgerding (mborgerding_at_[hidden])
Date: 2000-02-18 09:55:38

Jens Maurer <jmaure-_at_[hidden]> wrote:
> Back in November last year, Jeet Sukumaran posted a proposal for a
> random number library framework. The original proposal was based
> on a polymorphic class hierarchy, but most of the people who commented
> felt that some template-based solution might be better.
> Unfortunately, I haven't received anything about that subject since.
> Thus, I have created a new directory "random" in the doc vault and
> put up my proposal for a framework:

Just an observation about random_device -- if you use buffering, you
may get higher throughput in benchmark tests, but I think there are a
couple of reasons why using ifstream("/dev/random"), or a buffered
FILE* on /dev/random, is a bad idea:

1) Responsiveness -- a class user who only wants a single byte would
need to wait for the entire buffer to fill (this will likely take
several seconds, since /dev/random blocks while entropy is gathered).
2) Loss of system entropy -- randomness is gathered fairly slowly, so
filling up a buffer with random data that is not really needed will
quickly drain the entropy pool.
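For comparison, an unbuffered read pulls only the bytes actually
requested. Here is a minimal POSIX sketch (my own illustration, not
part of the proposal; it reads /dev/urandom so it never blocks, but
the same call works on /dev/random):

```cpp
#include <fcntl.h>   // open
#include <unistd.h>  // read, close
#include <cstddef>

// Read exactly as many bytes as the caller asked for, with no
// stdio/iostream buffering, so no extra entropy is drained.
// (Error handling is minimal for brevity.)
long read_random(unsigned char* buf, std::size_t n)
{
    int fd = open("/dev/urandom", O_RDONLY); // or "/dev/random"
    if (fd < 0)
        return -1;
    long got = read(fd, buf, n);
    close(fd);
    return got;
}
```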

Using /dev/random is great for systems that have it, but what about the
other systems? What do you think of adding a default random_device
implementation for systems that do not have /dev/random? The following
is platform-independent code, but unfortunately the quality of the
numbers it creates is somewhat platform dependent. Perhaps it can be
improved. The idea is based upon an algorithm from Schneier's Applied
Cryptography: count until some system parameter changes (e.g.
clock()), and then use the LSB of the counter as a random bit.

#include <ctime> // clock(), clock_t

unsigned char rand_byte(int nbits)
{
   unsigned char result = 0;
   if (nbits > 8)
      nbits = 8; // a byte can hold at most 8 bits

   for (int i = 0; i < nbits; ++i) {
      unsigned int counter = 0;
      clock_t start = clock();
      do {
         ++counter; // keep counting until the clock changes
      } while (start == clock());

      result <<= 1; // left shift 1 bit
      result |= (counter & 1);
      // put the LSB from the counter into the LSB of the result
   }
   return result;
}
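To connect this back to the random_device question, such a function
could back a fallback device on systems without /dev/random. A minimal
sketch (the class name and interface are my own invention, not from
the proposal; rand_byte is repeated so the sketch stands alone):

```cpp
#include <ctime> // clock(), clock_t

// rand_byte as above, repeated so this sketch is self-contained.
unsigned char rand_byte(int nbits)
{
    unsigned char result = 0;
    if (nbits > 8)
        nbits = 8;
    for (int i = 0; i < nbits; ++i) {
        unsigned int counter = 0;
        clock_t start = clock();
        do {
            ++counter; // count until the clock ticks over
        } while (start == clock());
        result <<= 1;
        result |= (counter & 1);
    }
    return result;
}

// Hypothetical fallback device: produces one timing-derived byte
// per call, mimicking a nondeterministic-generator interface.
class counting_random_device {
public:
    typedef unsigned char result_type;
    result_type operator()() { return rand_byte(8); }
};
```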

Another idea (not sure where I heard it first, maybe Schneier also) is
to have a rng that the user can churn with randomness from the system.
This randomness would get added into the mix.

This won't compile as written, since it depends upon having some hash
function:

unsigned char churn(const void * pData = 0, size_t bytes = 0)
{
  const int hash_result_size = 20; // this would be true for SHA-1
  static unsigned char entropy_pool[hash_result_size];

  hasher h;
  // some class that does a cryptographic hash,
  // or at least a long CRC
  // (hasher and its update()/final() interface are hypothetical)

  if (bytes && pData)
     h.update(pData, bytes); // mix in the caller's randomness

  h.update(entropy_pool, hash_result_size); // always hash the pool itself
  h.final(entropy_pool); // write the digest back into the pool

  return entropy_pool[0];
}

So if no new randomness is added, the function acts like a PRNG,
hashing a single buffer over and over again. When randomness is added
by some true random source, the entropy is absorbed into the buffer by
hashing the buffer as well as the new random data. In this way, a
randomness generator can be strengthened, even if it has some bias.

Perhaps the two ideas above could be combined to make a decent RNG that
is completely platform independent.
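As a rough sketch of that combination (entirely my own illustration:
the names are invented, and a toy 32-bit FNV-1a mix stands in for the
cryptographic hash a real implementation would want), the timing trick
can supply the churn input:

```cpp
#include <cstddef>
#include <ctime>

// Toy stand-in for the cryptographic hash: 32-bit FNV-1a.
// A real implementation should use SHA-1 or similar.
unsigned long fnv1a(const unsigned char* data, std::size_t len,
                    unsigned long seed = 2166136261UL)
{
    unsigned long h = seed;
    for (std::size_t i = 0; i < len; ++i) {
        h ^= data[i];
        h = (h * 16777619UL) & 0xffffffffUL; // multiply by FNV prime
    }
    return h;
}

// One timing-derived bit: count until clock() ticks over and
// keep the counter's LSB, as in rand_byte above.
unsigned char timing_bit()
{
    unsigned int counter = 0;
    clock_t start = clock();
    do {
        ++counter;
    } while (start == clock());
    return (unsigned char)(counter & 1);
}

// Combined generator: a small pool rehashed on every request,
// churned with fresh timing bits on demand.
class timing_rng {
public:
    timing_rng() : pool_(2166136261UL) { churn(); }

    // Mix eight fresh timing bits into the pool.
    void churn()
    {
        unsigned char byte = 0;
        for (int i = 0; i < 8; ++i)
            byte = (unsigned char)((byte << 1) | timing_bit());
        pool_ = fnv1a(&byte, 1, pool_);
    }

    // Produce one output byte by rehashing the pool state.
    unsigned char next_byte()
    {
        unsigned char state[4];
        for (int i = 0; i < 4; ++i)
            state[i] = (unsigned char)((pool_ >> (8 * i)) & 0xff);
        pool_ = fnv1a(state, 4, pool_);
        return (unsigned char)(pool_ & 0xff);
    }

private:
    unsigned long pool_;
};
```

Output from such a generator is only as good as the timing jitter
feeding it, so this is a portability fallback, not a replacement for
a real /dev/random.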

What do you think?


Boost list run by bdawes at, gregod at, cpdaniel at, john at