Consider an algorithm to test the probability that a certain number is picked from a set of N unique numbers after a specific number of tries (for example, with N=2, what's the chance that a particular one of the two numbers has come up after a given number of tries?). Run against glibc's random(), such a test shows an unexpected dip in the distribution at 31 tries.
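A minimal harness for such a test might look like the following (a sketch, not necessarily how the original experiment was written, assuming a glibc-based system; SET_SIZE is 36 to match the modulus used in the explanation below, and TARGET, TRIALS and MAX_TRIES are arbitrary choices):

```c
#include <stdio.h>
#include <stdlib.h>

#define SET_SIZE  36       /* N: how many distinct numbers can be picked  */
#define TARGET    0        /* the number whose appearance we are testing  */
#define TRIALS    1000000  /* number of independent experiments           */
#define MAX_TRIES 50       /* longest run of tries recorded in the table  */

int main(void)
{
    long histogram[MAX_TRIES + 1] = {0};

    srandom(1);
    for (long t = 0; t < TRIALS; t++) {
        /* Count how many picks it takes until TARGET comes up. */
        long tries = 1;
        while (random() % SET_SIZE != TARGET)
            tries++;
        if (tries <= MAX_TRIES)
            histogram[tries]++;
    }

    /* An ideal generator gives a geometric distribution here; with
     * glibc's random() the bucket for 31 tries should come out
     * noticeably low. */
    for (int k = 1; k <= MAX_TRIES; k++)
        printf("%2d tries: %.5f\n", k, (double)histogram[k] / TRIALS);
    return 0;
}
```

Each trial resumes the same random() stream, so every trial after the first starts immediately after a 0, which is exactly the situation analysed below.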
This is due to glibc's random() function not being random enough. According to this page, the numbers o_i returned by random() satisfy
o_i = (o_{i-3} + o_{i-31}) % 2^31
or:
o_i = (o_{i-3} + o_{i-31} + 1) % 2^31.
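This relation can be checked with a small model of the generator (a sketch resting on the assumption that glibc's default random() keeps 31 32-bit state words, updates them as s_i = s_{i-3} + s_{i-31} mod 2^32, and returns s_i >> 1; the seed words below are arbitrary filler rather than glibc's real srandom() initialization, so it does not reproduce random()'s actual stream, only the recurrence):

```c
#include <stdio.h>
#include <stdint.h>

#define LAG_SHORT 3
#define LAG_LONG  31
#define N         1000     /* how many outputs to check */

int main(void)
{
    uint32_t s[N];          /* internal 32-bit state values */

    /* Arbitrary nonzero filler standing in for the real seeded state. */
    for (int i = 0; i < LAG_LONG; i++)
        s[i] = 2654435761u * (uint32_t)(i + 1);

    for (int i = LAG_LONG; i < N; i++) {
        s[i] = s[i - LAG_SHORT] + s[i - LAG_LONG];    /* wraps mod 2^32 */

        uint32_t o    = s[i] >> 1;                    /* what random() returns */
        uint32_t o_s  = s[i - LAG_SHORT] >> 1;        /* o_{i-3}  */
        uint32_t o_l  = s[i - LAG_LONG]  >> 1;        /* o_{i-31} */
        uint32_t sum0 = (o_s + o_l)     & 0x7fffffff; /* (o_{i-3}+o_{i-31})   % 2^31 */
        uint32_t sum1 = (o_s + o_l + 1) & 0x7fffffff; /* (o_{i-3}+o_{i-31}+1) % 2^31 */

        if (o != sum0 && o != sum1) {
            printf("relation violated at i = %d\n", i);
            return 1;
        }
    }
    printf("every output matched one of the two equations\n");
    return 0;
}
```

The check should pass for every output; which of the two forms applies depends on the low bits of the two operands and on whether the 32-bit addition wraps.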
Now take x_i = o_i % 36, and suppose the first equation above is the one used (this happens with a 50% chance for each number generated). If x_{i-31} = 0 and x_{i-3} != 0, then the chance that x_i = 0 is less than 1/36. This is because about 50% of the time o_{i-3} + o_{i-31} is less than 2^31, so the % 2^31 changes nothing, and when that happens
x_i = o_i % 36 = (o_{i-3} + o_{i-31}) % 36 = o_{i-3} % 36 = x_{i-3},
where o_{i-31} drops out because x_{i-31} = 0 means o_{i-31} is a multiple of 36. So x_i equals x_{i-3}, which is nonzero by assumption. This is what causes the dip you see 31 samples after a sample of 0.
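The dip can also be measured directly (again a sketch assuming a glibc-based system; the sample count, seed, and maximum lag are arbitrary): the conditional frequency of a 0 exactly 31 samples after a 0 should come out visibly below 1/36 ≈ 0.028, while the other lags stay close to it.

```c
#include <stdio.h>
#include <stdlib.h>

#define SAMPLES 10000000
#define MAXLAG  40

int main(void)
{
    int  window[MAXLAG + 1];                 /* circular buffer of recent samples    */
    long after_zero[MAXLAG + 1]      = {0};  /* samples seen k places after a 0      */
    long zero_after_zero[MAXLAG + 1] = {0};  /* ... of which were themselves 0       */

    srandom(1);
    for (long i = 0; i < SAMPLES; i++) {
        int x = random() % 36;
        if (i >= MAXLAG) {
            for (int k = 1; k <= MAXLAG; k++) {
                if (window[(i - k) % (MAXLAG + 1)] == 0) {
                    after_zero[k]++;
                    if (x == 0)
                        zero_after_zero[k]++;
                }
            }
        }
        window[i % (MAXLAG + 1)] = x;
    }

    /* For an ideal generator every line prints roughly 1/36 = 0.0278;
     * glibc's random() should show a clear dip at lag 31. */
    for (int k = 1; k <= MAXLAG; k++)
        printf("lag %2d: P(x_i = 0 | x_{i-k} = 0) = %.4f\n",
               k, (double)zero_after_zero[k] / after_zero[k]);
    return 0;
}
```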