For a library, I need to store all prime numbers up to a limit L. This collection must have O(1) lookup time (to check whether a number is prime or not) and it must be as memory-efficient as possible.
Maybe a trie data structure which contains only the primes is what you're looking for. Instead of using characters as indexes you could use the integer digits. Judy arrays are an implementation of this idea.
Although they do not meet your O(1) requirement, they are extremely memory-efficient for similar keys (as most parts of the numbers are) and pretty fast to look up, at O(m) (m = key length) at most.
If you look up a prime in the pre-generated tree, you can walk the tree until you find it, or until you reach the node that sits between the preceding and following prime.
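A minimal sketch of the digit-trie idea in Python (plain dicts stand in for the far more compact Judy arrays; function names are my own):

```python
# Digit trie holding primes: each edge is one decimal digit,
# and an "end" marker flags a complete prime.

def build_trie(primes):
    root = {}
    for p in primes:
        node = root
        for d in str(p):
            node = node.setdefault(d, {})
        node["end"] = True  # marks a complete prime
    return root

def is_prime_in_trie(root, n):
    """Walk the tree digit by digit; O(m) in the number of digits."""
    node = root
    for d in str(n):
        if d not in node:
            return False
        node = node[d]
    return "end" in node

trie = build_trie([2, 3, 5, 7, 11, 13])
```

Shared prefixes (e.g. 11 and 13 both hang off the `1` node) are what makes this representation compact for dense key sets.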
If you can figure out which ones are Mersenne or other easily represented prime numbers, you might be able to save a few bits by using that representation with a flag for applicable numbers.
Also, how about storing each number as the difference from the previous prime? Then the size shouldn't grow as fast (but lookup would be slow). Combining this with the approach above, you could store Mersenne primes and the difference from the last Mersenne prime.
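A minimal sketch of the difference idea (the function name is my own); reconstructing a prime means summing gaps from the start, which is exactly why lookup gets slow:

```python
# Store each prime as the gap from the previous one.
primes = [2, 3, 5, 7, 11, 13, 17, 19]
gaps = [primes[0]] + [b - a for a, b in zip(primes, primes[1:])]
# gaps == [2, 1, 2, 2, 4, 2, 4, 2] -- small values that fit in a few bits

def nth_prime(gaps, i):
    """Rebuild the i-th prime by summing gaps -- O(i), hence slow lookup."""
    total = 0
    for g in gaps[:i + 1]:
        total += g
    return total
```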
Check the topcoder tutorial on prime numbers: http://community.topcoder.com/tc?module=Static&d1=tutorials&d2=math_for_topcoders
An alternative to packed bitmaps and wheels - but equally efficient in certain contexts - is storing the differences between consecutive primes. If you leave out the number 2 as usual then all differences are even. Storing difference/2 you can get up to 2^40ish regions (just before 1999066711391) using byte-sized variables.
The primes up to 2^32 require only 194 MByte, compared to 256 MByte for an odds-only packed bitmap. Iterating over delta-stored primes is much faster than iterating over wheeled storage, which includes the mod-2 wheel known as the odds-only bitmap.
For ranges from 1999066711391 onwards, a bigger cell size or variable-length storage is needed. The latter can be extremely efficient even with very simple schemes (e.g. keep adding bytes until a byte < 255 has been added, as in LZ4-style compression), because of the extremely low frequency of half-gaps exceeding 255 (i.e. gaps longer than 510).
For efficiency's sake it is best to divide the range into sections (pages) and manage them B-Tree style.
Entropy-coding the differences (Huffman or arithmetic coding) cuts permanent storage requirements to a bit less than half, which is close to the theoretical optimum and better than lists or wheels compressed with the best available packers.
If the data is stored uncompressed, it is still much more compact than files of binary or textual numbers, by an order of magnitude or more. With a B-Tree-style index in place it is easy to map sections into memory as needed and iterate over them at blazing speed.
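A hedged sketch of the difference/2 byte scheme described above, including the simple variable-length escape (keep emitting 255-bytes until a byte < 255 terminates the value); function names are illustrative:

```python
# Since 2 is left out, all gaps between stored primes are even, so we
# store gap // 2 in one byte. A half-gap >= 255 is spread over several
# bytes: the decoder keeps summing until it sees a byte < 255.

def encode_half_gaps(primes):
    """primes: ascending odd primes starting at 3."""
    out = bytearray()
    prev = primes[0]
    for p in primes[1:]:
        half = (p - prev) // 2
        while half >= 255:
            out.append(255)
            half -= 255
        out.append(half)
        prev = p
    return bytes(out)

def decode_half_gaps(data, first=3):
    primes = [first]
    acc = 0
    for b in data:
        acc += b
        if b < 255:  # a byte < 255 terminates the value
            primes.append(primes[-1] + 2 * acc)
            acc = 0
    return primes
```

Ordinary gaps cost exactly one byte each; only the extremely rare gaps above 510 need extra bytes, which is why the simple escape loses almost nothing.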
How about an Interval Tree? http://www.geeksforgeeks.org/interval-tree/
It may not be O(1) but it's really fast, maybe O(log(p(n))), where p(n) is the number of primes up to n. This way the memory you need will be proportional to the number of primes only, greatly cutting the memory cost.
For example, suppose you find a prime at p1 and the next one at p2. Insert the interval (p1, p2), and so on. When you run a search for any number in that range, it will return this interval, and you can return p2, which would be the answer in your case.
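Since the intervals between consecutive primes are disjoint and sorted, a plain binary search over the sorted primes achieves the same O(log p(n)) lookup as a full interval tree; a sketch under that assumption (the function name is my own):

```python
import bisect

# Consecutive primes define disjoint, sorted intervals (p1, p2), so a
# binary search over the sorted primes stands in for an interval tree.
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23]

def lookup(n):
    """Return (is_prime, next_prime_or_None) for n."""
    i = bisect.bisect_left(primes, n)
    if i < len(primes) and primes[i] == n:
        return True, primes[i]
    # n falls inside the interval (primes[i-1], primes[i]); report its end.
    return False, primes[i] if i < len(primes) else None
```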
You can explicitly check more prime numbers to remove redundancy.
At the moment you do this only for two: you check divisibility by two explicitly, and then store primality only for the odd numbers.
For 2 and 3 you get remainders 0 to 5, of which only 1 and 5 are not divisible by two or three and can lead to a prime number, so you are down to 1/3.
For 2, 3, and 5 you get 8 numbers out of 30, which is nice to store in a byte.
This is explained in more detail here.
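A sketch of the 2-3-5 wheel just described: modulo 30, only the eight residues {1, 7, 11, 13, 17, 19, 23, 29} can be prime, so one bit per residue packs a block of 30 numbers into a single byte (function names and the naive trial-division filler are my own):

```python
# One byte covers 30 consecutive numbers: bit i of byte n//30 says
# whether the number with residue RESIDUES[i] in that block is prime.
RESIDUES = (1, 7, 11, 13, 17, 19, 23, 29)
BIT = {r: i for i, r in enumerate(RESIDUES)}

def build_wheel_bitmap(limit):
    bitmap = bytearray((limit // 30) + 1)

    def is_prime(n):  # naive check, just to fill the sketch
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    for n in range(limit + 1):
        r = n % 30
        if r in BIT and is_prime(n):
            bitmap[n // 30] |= 1 << BIT[r]
    return bitmap

def wheel_is_prime(bitmap, n):
    """Only valid for n > 5; 2, 3 and 5 need an explicit check."""
    r = n % 30
    if r not in BIT:
        return False  # divisible by 2, 3 or 5
    return bool(bitmap[n // 30] & (1 << BIT[r]))
```

Compared to an odds-only bitmap (1/2 of the numbers stored), this stores 8/30, roughly halving the memory again.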