I am wondering about the parameters for constructing a ConcurrentHashMap:
initialCapacity is 16 by default (understood).
loadFactor: controls when the implementation decides to resize the hash table (a resize is triggered once the number of entries exceeds capacity × loadFactor). Too low a value wastes space and causes more frequent, expensive resize operations; too high a value saves space but increases lookup cost as buckets accumulate collisions. The default of 0.75 is a reasonable trade-off for most uses.
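To make the interaction between initialCapacity and loadFactor concrete, here is a minimal sketch: if you know roughly how many entries the map will hold, you can size the initial capacity so that no resize ever happens. The entry count used here is a made-up figure for illustration.

```java
import java.util.concurrent.ConcurrentHashMap;

public class SizingSketch {
    public static void main(String[] args) {
        int expectedEntries = 1000;   // hypothetical expected size
        float loadFactor = 0.75f;     // the default

        // The table resizes once size > capacity * loadFactor,
        // so pre-size the capacity to fit all expected entries.
        int initialCapacity = (int) (expectedEntries / loadFactor) + 1;

        ConcurrentHashMap<String, Integer> map =
                new ConcurrentHashMap<>(initialCapacity, loadFactor);

        for (int i = 0; i < expectedEntries; i++) {
            map.put("key-" + i, i);
        }
        System.out.println(map.size()); // 1000
    }
}
```

The same sizing idiom applies to plain HashMap; for ConcurrentHashMap it additionally avoids resize work contending with concurrent writers.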
concurrencyLevel: a hint telling the implementation to optimize for the given number of concurrently writing threads. According to the API docs, being off by up to a factor of ten in either direction shouldn't have much effect on performance.
The allowed concurrency among update operations is guided by the optional concurrencyLevel constructor argument (default 16), which is used as a hint for internal sizing. The table is internally partitioned to try to permit the indicated number of concurrent updates without contention. Because placement in hash tables is essentially random, the actual concurrency will vary. Ideally, you should choose a value to accommodate as many threads as will ever concurrently modify the table. Using a significantly higher value than you need can waste space and time, and a significantly lower value can lead to thread contention. But overestimates and underestimates within an order of magnitude do not usually have much noticeable impact.
A good hashCode() implementation will distribute the hash values uniformly over any interval. If the set of keys is known in advance, it is possible to define a "perfect" hash function that assigns a unique hash value to each key.
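As a minimal illustration of that last point, the sketch below defines a perfect hash for a small, fixed key set (the keys here are made up): each key maps to its position in the set, so by construction no two keys collide.

```java
import java.util.HashSet;
import java.util.Set;

public class PerfectHashSketch {
    // Hypothetical key set known in advance.
    static final String[] KEYS = {"north", "south", "east", "west"};

    // A "perfect" hash for this key set: each key gets a distinct
    // value (its index), so there are no collisions by construction.
    static int perfectHash(String key) {
        for (int i = 0; i < KEYS.length; i++) {
            if (KEYS[i].equals(key)) {
                return i;
            }
        }
        throw new IllegalArgumentException("unknown key: " + key);
    }

    public static void main(String[] args) {
        Set<Integer> hashes = new HashSet<>();
        for (String k : KEYS) {
            hashes.add(perfectHash(k));
        }
        // As many distinct hash values as keys: the hash is perfect.
        System.out.println(hashes.size()); // 4
    }
}
```

Real perfect-hash generators precompute a constant-time function rather than scanning, but the property being demonstrated is the same: one unique hash value per key.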