Per the documentation for Guava's MapMaker.softValues():
Warning: in most circumstances it is better to set a per-cache maximum size instead of using soft references.
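For illustration, here is a rough sketch of both approaches using Guava's CacheBuilder (the successor to MapMaker for caching; in older Guava versions softValues() lived on MapMaker itself). The bound of 1000 is an arbitrary example value:

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;

    public class CacheExamples {
        public static void main(String[] args) {
            // Soft-valued cache: the GC may discard entries whenever memory is tight.
            Cache<String, byte[]> softCache = CacheBuilder.newBuilder()
                    .softValues()
                    .build();

            // What the warning recommends instead: an explicit per-cache
            // maximum size (1000 here is an arbitrary example value).
            Cache<String, byte[]> boundedCache = CacheBuilder.newBuilder()
                    .maximumSize(1000)
                    .build();
        }
    }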
One of the practical problems with using SoftReferences is that they tend to be discarded all at once. The reason you have a cache is to provide good performance most of the time.
However, using SoftReferences for a cache can mean that after your application has paused for a GC, it runs slowly until the cache is rebuilt, i.e. just when you need the application to catch up.
Note: You can use a LinkedHashMap as an LRU cache; it doesn't have to be complex.
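For example, a minimal sketch (the capacity is whatever you choose): constructing LinkedHashMap in access-order mode and overriding removeEldestEntry gives you LRU eviction in a few lines:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class LruCache<K, V> extends LinkedHashMap<K, V> {
        private final int maxEntries;

        public LruCache(int maxEntries) {
            // accessOrder = true: iteration order runs from least- to
            // most-recently accessed, which is exactly LRU order.
            super(16, 0.75f, true);
            this.maxEntries = maxEntries;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            // Evict the least-recently-accessed entry once the bound is exceeded.
            return size() > maxEntries;
        }
    }

(Wrap it with Collections.synchronizedMap if it will be shared between threads, since LinkedHashMap itself is not thread-safe.)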
I think all they are alluding to is that you should be prepared for maximum memory usage, and potentially more GC activity, if you use a soft-reference map, since references are only collected as memory needs to be freed up.
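To make that concrete, here is a plain-Java sketch of the pattern a soft-reference cache relies on; whether get() still returns the value depends entirely on how much memory pressure there has been:

    import java.lang.ref.SoftReference;

    public class SoftRefDemo {
        public static void main(String[] args) {
            // The referent stays reachable until the JVM is under memory pressure.
            SoftReference<byte[]> ref = new SoftReference<>(new byte[1024 * 1024]);

            byte[] value = ref.get();  // non-null while memory is plentiful
            if (value == null) {
                // The GC cleared the reference to free memory; a real cache
                // would have to recompute or reload the value here.
            }
        }
    }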
If you know you only need the last n values in the cache, then an LRU cache is a leaner approach, with more predictable resource usage for a running application.
Furthermore, according to this, it seems there are subtle differences in behaviour between the -server and -client JVMs:
The Sun JRE does treat SoftReferences differently from WeakReferences. We attempt to hold on to objects referenced by a SoftReference if there isn't pressure on the available memory. One detail: the policies for the "-client" and "-server" JREs are different: the -client JRE tries to keep your footprint small by preferring to clear SoftReferences rather than expand the heap, whereas the -server JRE tries to keep your performance high by preferring to expand the heap (if possible) rather than clear SoftReferences. One size does not fit all.