
From: Greg Colvin (gcolvin_at_[hidden])
Date: 2000-02-18 11:00:25


From: Mark Borgerding <mborgerding_at_[hidden]>
> ...
> Just an observation about random_device -- if you use buffering, you
> may get higher throughput for your benchmark tests, but I think there
> are a couple of reasons it is a bad idea to use ifstream("/dev/random"),
> or FILE* with /dev/random:
>
> 1) Responsiveness -- a class user who only wants a single byte would
> need to wait for the entire buffer to fill (this will likely be several
> K).
> 2) Loss of system entropy -- randomness is gathered fairly slowly;
> filling up a buffer with random data that is not really needed will
> quickly drain the entropy pool.

Yes, but if you can do it when otherwise idle it might store up entropy
for when you need it.
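
Something along these lines, perhaps -- just a rough, untested sketch.
The entropy_pool name and interface are only for illustration, and a
real version would have to deal with /dev/random blocking when the
kernel pool runs low:

#include <cstddef>
#include <fstream>
#include <vector>

// Keep a small pool that the application tops up from /dev/random
// whenever it is otherwise idle, so a later request for a byte or two
// does not have to wait for a fresh read.
class entropy_pool {
public:
    explicit entropy_pool(std::size_t capacity = 256) : capacity_(capacity) {}

    // Call from an idle loop; reads at most `chunk` bytes per call so it
    // never hogs /dev/random for long.
    void top_up(std::size_t chunk = 16) {
        if (pool_.size() >= capacity_) return;
        std::ifstream dev("/dev/random", std::ios::binary);
        char byte;
        while (chunk-- && pool_.size() < capacity_ && dev.get(byte))
            pool_.push_back(static_cast<unsigned char>(byte));
    }

    // Hand out a stored byte if we have one; returns false when empty.
    bool get(unsigned char& out) {
        if (pool_.empty()) return false;
        out = pool_.back();
        pool_.pop_back();
        return true;
    }

private:
    std::size_t capacity_;
    std::vector<unsigned char> pool_;
};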

> Using /dev/random is great for systems that have it, but what about the
> other systems? What do you think of adding a default random_device
> implementation for systems that do not have /dev/random? The following
> is platform-independent code, but unfortunately the quality of the
> numbers it creates is somewhat platform dependent. Perhaps it can be
> improved. The idea is based upon an algo from Schneier's Applied
> Cryptography. Count until some system parameter changes (e.g. clock()),
> and then use the LSB of the counter as a random bit.
>
> #include <ctime>   // clock(), clock_t
>
> unsigned char rand_byte(int nbits)
> {
>     unsigned char result = 0;
>     if (nbits > 8)
>         nbits = 8;
>
>     for (int i = 0; i < nbits; ++i) {
>         unsigned int counter = 0;
>         clock_t start = clock();
>         do {
>             ++counter;   // keep counting until the clock changes
>         } while (start == clock());
>
>         result <<= 1;             // left shift 1 bit
>         result |= (counter & 1);  // put the LSB of the counter into the LSB of the result
>     }
>     return result;
> }

Have you tested this against any of the usual tests for randomness?
I suspect that on an otherwise unloaded system this might not be
too random.

A non-standard but useful test is just to feed the output to a
compression program like pkzip or gzip and see if it gets smaller.
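
In code that could look roughly like this -- an untested sketch using
zlib's compress2() instead of an external program (output that shrinks
noticeably is a bad sign, though incompressible output by itself proves
nothing):

#include <cstdio>
#include <vector>
#include <zlib.h>

unsigned char rand_byte(int nbits);   // the candidate generator above

// Compress a sample at maximum effort and report the size ratio;
// output from a decent generator should stay close to 100%.
double compression_ratio(const std::vector<unsigned char>& data)
{
    uLongf out_len = compressBound(data.size());
    std::vector<unsigned char> out(out_len);
    if (compress2(&out[0], &out_len, &data[0], data.size(),
                  Z_BEST_COMPRESSION) != Z_OK)
        return 0.0;
    return double(out_len) / double(data.size());
}

int main()
{
    std::vector<unsigned char> sample;
    for (int i = 0; i < 4096; ++i)
        sample.push_back(rand_byte(8));

    std::printf("compressed to %.0f%% of original size\n",
                100.0 * compression_ratio(sample));
}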

> Another idea (not sure where I heard it first, maybe Schneier also) is
> to have an RNG that the user can churn with randomness from the system.
> This randomness would get added into the mix.
>
> This won't compile, since it depends upon having some hash function
> class.

Another candidate for Boost -- I think there are some MD5 implementations
in the public domain.

> unsigned char churn(const void * pData = 0, size_t bytes = 0) {
>     const int hash_result_size = 20;   // this would be true for SHA-1
>     static unsigned char entropy_pool[hash_result_size];
>
>     hasher h;
>     // some class that does a cryptographic hash,
>     // or at least a long CRC
>
>     h.add(entropy_pool, sizeof(entropy_pool));
>     if (bytes && pData)
>         h.add(pData, bytes);
>
>     h.get_result(entropy_pool);
>     return entropy_pool[0];
> }
>
> So if no new randomness is added, the function acts like a PRNG,
> hashing a single buffer over and over again. When randomness is added
> by some true random source, the entropy is absorbed into the buffer by
> hashing the buffer as well as the new random data. In this way, a
> randomness generator can be strengthened, even if it has some bias.
>
> Perhaps the two ideas above could be combined to make a decent RNG that
> is completely platform independent.
>
> What do you think?

Go for it.
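
If it helps, the combination I would picture is roughly this -- untested,
with the hasher class still the missing piece, and random_byte just a
name for illustration:

#include <cstddef>   // size_t

unsigned char rand_byte(int nbits);                     // clock-skew sampler above
unsigned char churn(const void* pData, size_t bytes);   // hash-based pool above

// Feed the slow clock-skew bits into the hashed pool and hand out the
// pool's output; the hash smooths over any bias in rand_byte.
unsigned char random_byte()
{
    unsigned char seed = rand_byte(8);
    return churn(&seed, sizeof(seed));
}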

