I'm trying to find a counterexample to the Pólya Conjecture which will be somewhere in the 900 millions. I'm using a very efficient algorithm that doesn't even require an
What do you mean by "won't allow"? You're probably getting an OutOfMemoryError, so add more heap space with the -Xmx command-line option.
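For instance (the class names and the 4 GiB figure below are just illustrative), you can confirm the flag took effect by printing the heap ceiling the JVM actually granted:

```java
// Launch with a larger heap, e.g.:
//   java -Xmx4g SieveRunner        (SieveRunner is a hypothetical main class)
public class HeapCheck {
    public static void main(String[] args) {
        // maxMemory() reports the largest heap this JVM will attempt to use
        long maxMiB = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxMiB + " MiB");
    }
}
```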
If you don't need it all loaded in memory at once, you could segment it into files and store it on disk.
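A minimal sketch of that idea, assuming fixed-size chunks and made-up file names (chunk0.bin, chunk1.bin, ...): each slice of the logical array lives in its own file, and only the slice being worked on needs to be in memory.

```java
import java.io.*;

// Sketch: a huge logical int array split into fixed-size chunks on disk.
// CHUNK_SIZE and the file naming scheme are illustrative choices.
public class ChunkedIntArray {
    static final int CHUNK_SIZE = 10_000_000; // ints per chunk (~40 MB each)

    static void writeChunk(int index, int[] data) throws IOException {
        try (DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream("chunk" + index + ".bin")))) {
            for (int v : data) out.writeInt(v);
        }
    }

    static int[] readChunk(int index) throws IOException {
        int[] data = new int[CHUNK_SIZE];
        try (DataInputStream in = new DataInputStream(
                new BufferedInputStream(new FileInputStream("chunk" + index + ".bin")))) {
            for (int i = 0; i < CHUNK_SIZE; i++) data[i] = in.readInt();
        }
        return data;
    }
}
```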
Java arrays are indexed by int, so they allow up to 2,147,483,647 (Integer.MAX_VALUE) entries, roughly 2 billion. It's your machine (and your limited memory) that can't handle such a large amount.
I second @sfossen's idea and @Aaron Digulla's. I'd go for disk access. If your algorithm can take a List interface rather than a plain array, you could write an adapter from the List to a memory-mapped file.
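A rough sketch of such an adapter, assuming a pre-sized file and java.nio memory mapping (the class and file names are mine, not from the answer):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.IntBuffer;
import java.nio.channels.FileChannel;
import java.util.AbstractList;

// Sketch: exposes a memory-mapped file of ints through the List interface,
// so algorithms written against List<Integer> can run over disk-backed data.
public class MappedIntList extends AbstractList<Integer> {
    private final IntBuffer ints;
    private final int size;

    public MappedIntList(String fileName, int size) throws IOException {
        this.size = size;
        try (RandomAccessFile file = new RandomAccessFile(fileName, "rw")) {
            // The mapping stays valid after the file is closed.
            this.ints = file.getChannel()
                    .map(FileChannel.MapMode.READ_WRITE, 0, (long) size * Integer.BYTES)
                    .asIntBuffer();
        }
    }

    @Override public Integer get(int index) { return ints.get(index); }
    @Override public Integer set(int index, Integer value) {
        int old = ints.get(index);
        ints.put(index, value);
        return old;
    }
    @Override public int size() { return size; }
}
```

Note that a single mapped buffer tops out at 2 GiB, so covering all 900 million ints (about 3.35 GiB) would take two or more mappings stitched together.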
900 million 32-bit ints with no further overhead (and there will always be more overhead) would require a little over 3.35 GiB. The only way to get that much memory is with a 64-bit JVM (on a machine with at least 8 GB of RAM) or to use some disk-backed cache.
I wrote a version of the Sieve of Eratosthenes for Project Euler that works on one chunk of the search space at a time. It processes the first 1M integers (for example), keeping each prime it finds in a table. Once a chunk has been fully sieved, the array is re-initialised for the next chunk, and the primes already found are used to mark it before the search for new primes resumes.
The table maps each prime to its 'offset' from the start of the array, i.e. where its next multiple falls in the next processing iteration.
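What's described here is essentially a segmented sieve; a minimal sketch under that reading (the chunk size and all names are my own):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a segmented Sieve of Eratosthenes: one chunk of the range is sieved
// at a time, and a table carries each prime's marking position into the next chunk.
public class SegmentedSieve {
    static final int CHUNK = 1_000_000;

    public static List<Long> primesUpTo(long limit) {
        List<Long> primes = new ArrayList<>();
        // prime -> the next multiple of it still to be marked
        // (the 'offset' from the text is this value minus the chunk's base)
        Map<Long, Long> next = new HashMap<>();
        boolean[] composite = new boolean[CHUNK];

        for (long base = 2; base <= limit; base += CHUNK) {
            Arrays.fill(composite, false);            // re-initialise for this chunk
            long end = Math.min(base + CHUNK, limit + 1);

            // Mark multiples of every prime found so far, resuming where we left off.
            for (Map.Entry<Long, Long> e : next.entrySet()) {
                long p = e.getKey(), m = e.getValue();
                for (; m < end; m += p) composite[(int) (m - base)] = true;
                e.setValue(m);                        // where to resume in the next chunk
            }

            // Anything still unmarked is a new prime; record it and seed its entry.
            for (long n = base; n < end; n++) {
                if (!composite[(int) (n - base)]) {
                    primes.add(n);
                    long m = n * n;                   // first multiple not yet covered
                    for (; m < end; m += n) composite[(int) (m - base)] = true;
                    next.put(n, m);
                }
            }
        }
        return primes;
    }
}
```

Calling primesUpTo(10_000_000) would return the primes below ten million while never allocating more than one chunk-sized boolean array for the sieve itself.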
This is similar in concept (if not in implementation) to the way functional programming languages perform lazy evaluation of lists (although in larger steps). Allocating all the memory up-front isn't necessary, since you're only interested in the parts of the array that pass your test for primeness. Keeping the non-primes hanging around isn't useful to you.
This method also provides memoisation for later iterations over prime numbers: it's faster than rescanning a sparse sieve data structure for the set bits every time.