> I have an algorithm which currently allocates a very large array of doubles, which it updates and searches frequently. The size of the array is N^2/2, where N is the number of …
If you are starting to run out of available memory, then you will probably also soon run out of available array indexes: a Java array is limited to Integer.MAX_VALUE elements (in practice a few fewer, depending on the JVM), which for an array of doubles is "only" about 17 GB (16 GiB).
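A quick back-of-the-envelope check of that limit (the class name here is just for illustration):

```java
public class MaxArraySize {
    public static void main(String[] args) {
        // Largest possible double[]: Integer.MAX_VALUE elements, 8 bytes each.
        long maxBytes = (long) Integer.MAX_VALUE * Double.BYTES;
        System.out.printf("%,d bytes = %.1f GB%n", maxBytes, maxBytes / 1e9);
        // prints: 17,179,869,176 bytes = 17.2 GB
    }
}
```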
Getting a machine with 32 GB of memory (enough to hold such an array plus everything else) is expensive, but probably not as expensive as your time to modify the algorithm and do all of the associated testing.
However, if the client is already running up against the limits of memory and their datasets are still growing, then it makes sense to bite the bullet now and change the algorithm to use less memory at any given time, since they will likely outgrow a single array soon anyway.
The other option that you have, assuming that the array is somewhat sparsely filled, is to use one of the various sparse array data structures, although these tend to be beneficial only if your array is less than about 20% full. A sketch of the idea is below.
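A minimal sketch of a sparse array, assuming that absent entries should read as 0.0. Note that a boxed HashMap entry costs far more than the 8 bytes of a raw double, which is why a primitive-keyed map library (Trove, fastutil, etc.) is a better fit in practice; the class name here is hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

/** Sparse "array" of doubles where unset entries read as the default 0.0. */
public class SparseDoubleArray {
    private final Map<Long, Double> entries = new HashMap<>();

    public double get(long index) {
        // Absent entries are treated as the default value 0.0.
        return entries.getOrDefault(index, 0.0);
    }

    public void set(long index, double value) {
        if (value == 0.0) {
            entries.remove(index); // never store the default value
        } else {
            entries.put(index, value);
        }
    }
}
```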
Edit: Since it seems that you have already investigated the alternatives, the MappedByteBuffer may well be the way to go. Obviously this will have a performance impact; however, if you mostly do sequential reads and writes over the array, it should not be too bad. If you are doing random reads and writes, then this is going to get very slow very fast. Or very slow very slowly... depending on how you look at these things ;-)
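A minimal sketch of how that could look, assuming the data lives in a single file split into 1 GB mapped segments (each MappedByteBuffer is itself limited to Integer.MAX_VALUE bytes, so one mapping cannot cover the whole array). The class name and segment size are illustrative choices, not from any library:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

/** A large double "array" backed by memory-mapped segments of a file. */
public class MappedDoubleArray {
    // Each MappedByteBuffer maps at most Integer.MAX_VALUE bytes,
    // so split the file into fixed-size 1 GB segments.
    private static final long SEGMENT_BYTES = 1L << 30;
    private static final long DOUBLES_PER_SEGMENT = SEGMENT_BYTES / Double.BYTES;

    private final MappedByteBuffer[] segments;

    public MappedDoubleArray(String path, long length) throws IOException {
        long totalBytes = length * Double.BYTES;
        int count = (int) ((totalBytes + SEGMENT_BYTES - 1) / SEGMENT_BYTES);
        segments = new MappedByteBuffer[count];
        try (RandomAccessFile file = new RandomAccessFile(path, "rw");
             FileChannel channel = file.getChannel()) {
            for (int i = 0; i < count; i++) {
                long pos = i * SEGMENT_BYTES;
                long size = Math.min(SEGMENT_BYTES, totalBytes - pos);
                // READ_WRITE mapping grows the file as needed; the mapping
                // stays valid after the channel is closed.
                segments[i] = channel.map(FileChannel.MapMode.READ_WRITE, pos, size);
            }
        }
    }

    public double get(long index) {
        return segments[(int) (index / DOUBLES_PER_SEGMENT)]
                .getDouble((int) ((index % DOUBLES_PER_SEGMENT) * Double.BYTES));
    }

    public void set(long index, double value) {
        segments[(int) (index / DOUBLES_PER_SEGMENT)]
                .putDouble((int) ((index % DOUBLES_PER_SEGMENT) * Double.BYTES), value);
    }
}
```

Since the OS pages the file in and out on demand, sequential access patterns stay reasonably fast, while random access will thrash the page cache, which is exactly the caveat above.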