> I have been going through prime number generation in Python using the sieve of Eratosthenes, and the solutions which people tout as a relatively fast option, such as those in …
You should only use the "postponed" variant of that algorithm. Comparing a test run of your code up to upper limits of 10 and 20 million, as
...
print(len([2] + [i*2 + 1 for i, v in
           enumerate(sieve_for_primes_to(10000000)) if v and i > 0]))
with the postponed sieve, run for the corresponding counts of 664,579 and 1,270,607 primes to produce, as
...
print(list(islice((p for p in postponed_sieve()), n-1, n+1)))  # n = 664579 or 1270607
shows your code running "only" 3.1x–3.3x faster. :) Not 36x faster, as your timings show for some reason.
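For context, `sieve_for_primes_to` is not reproduced in this post. An odds-only formulation consistent with the `i*2+1` decoding above might look like the following; this is my reconstruction under that assumption, not necessarily the asker's exact code:

```python
def sieve_for_primes_to(n):
    """Odds-only sieve for primes below n.

    Slot i stands for the odd number 2*i + 1; a True at i > 0 means
    2*i + 1 is prime. Slot 0 (the number 1) is filtered out by the caller.
    """
    size = n // 2
    sieve = [True] * size
    # only odd candidates up to sqrt(n) need to strike out multiples
    for i in range(1, int(n ** 0.5) // 2 + 1):
        if sieve[i]:
            val = 2 * i + 1
            start = 2 * i * (i + 1)          # index of val*val
            # strike out odd multiples of val, starting at val*val
            sieve[start::val] = [False] * len(range(start, size, val))
    return sieve
```

Decoding it exactly as in the snippet above recovers the primes, e.g. `[2] + [2*i+1 for i, v in enumerate(sieve_for_primes_to(50)) if v and i > 0]`.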
I don't think anyone ever claimed it's an "ideal" prime generator, just that it is a conceptually clean and clear one. All these prime-generation functions are toys, really; the real stuff works with very big numbers, using completely different algorithms anyway.
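That conceptually clean generator can be sketched as follows. This is the postponed sieve idea (a prime `p`'s multiples are entered into the dict only once `p*p` is reached, so the dict stays small); the exact `postponed_sieve` from the linked answer may differ in details:

```python
from itertools import count, islice

def postponed_sieve():
    """Postponed sieve of Eratosthenes, as an unbounded generator.

    The dict maps each upcoming composite to the step of the prime
    that generates it; a prime p's multiples are added only when the
    candidate stream reaches p*p.
    """
    yield 2; yield 3; yield 5; yield 7
    sieve = {}                       # composite -> step of its prime
    ps = postponed_sieve()           # lazy inner supply of base primes
    p = next(ps) and next(ps)        # skip 2, take p = 3
    q = p * p                        # 9: first composite to postpone
    for c in count(9, 2):            # odd candidates from 9 up
        if c in sieve:
            s = sieve.pop(c)         # c is composite; reuse its step
        elif c < q:
            yield c                  # c is prime
            continue
        else:                        # c == q: bring in p's multiples
            s = 2 * p
            p = next(ps)
            q = p * p
        m = c + s
        while m in sieve:            # find the next free slot
            m += s
        sieve[m] = s
```

Being a generator, it composes naturally with `islice`, exactly as in the timing snippet above.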
Here in the low range, what matters is the time complexity of the algorithm, which should be around `~ n^(1+a)`, `a < 0.1...0.2`, in empirical orders of growth — which both of them indeed seem to achieve. Having a toy generator with `~ n^1.5` or even `~ n^2` orders of growth is just no fun to play with.