Question
I understand that there are two ways a hash collision can occur in Java's HashMap:
1. hashCode() for the key object produces the same hash value as one already produced (even if the hash bucket is not full yet).
2. The hash bucket is already full, so the new Entry has to go at an existing index.
In the case of Java's HashMap, situation #2 would be really rare due to the large number of allowed entries and automatic resizing (see my other question).
Am I correct in my understanding?
But for the sake of theoretical knowledge: do programmers or the JVM do anything, or can they do anything, to avoid scenario #2? Or is allowing the hash bucket array to grow as large as possible and then continuously resizing it the only strategy (as is done in HashMap)?
I guess, as a programmer, I should focus only on writing a good hashCode() and not worry about scenario #2 (since that is already taken care of by the API).
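To make scenario #1 concrete, here is a minimal sketch (the `BadKey` class and its field are hypothetical, invented for illustration) where every key deliberately returns the same hash code, forcing all entries into one bucket. HashMap still behaves correctly because it falls back on equals() to distinguish colliding keys:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical key class whose hashCode() always collides, so every
// entry lands in the same bucket (scenario #1 taken to the extreme).
class BadKey {
    private final String name;

    BadKey(String name) {
        this.name = name;
    }

    @Override
    public int hashCode() {
        return 42; // every key collides on purpose
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof BadKey)) return false;
        return name.equals(((BadKey) o).name);
    }
}

public class CollisionDemo {
    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>();
        map.put(new BadKey("a"), 1);
        map.put(new BadKey("b"), 2);
        // Both entries survive despite identical hash codes; HashMap
        // resolves the collision with equals(), so lookups stay correct,
        // though performance inside that one bucket degrades.
        System.out.println(map.get(new BadKey("a"))); // prints 1
        System.out.println(map.get(new BadKey("b"))); // prints 2
    }
}
```
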
Answer 1:
I think #2 is a special case of #1; it's really the same thing. When HashMap decides where to put the new element, it does so not because everything else is full, but because the hashCode is the same as for an element that's already in the map.
I agree, you should focus on hashCode(), see: Creating a hashCode() Method - Java
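As a sketch of what "a good hashCode()" can look like, here is a hypothetical `Point` class (invented for illustration) using the common 31-multiplier recipe: equal objects produce equal hash codes, and both fields contribute so values spread across buckets:

```java
// A minimal sketch of a well-behaved equals()/hashCode() pair for a
// hypothetical immutable Point class.
class Point {
    private final int x;
    private final int y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override
    public int hashCode() {
        // Combine the fields with the conventional prime multiplier 31;
        // java.util.Objects.hash(x, y) does the same with boxing overhead.
        int result = Integer.hashCode(x);
        result = 31 * result + Integer.hashCode(y);
        return result;
    }
}
```

The key property is the contract: whenever equals() says two objects are equal, hashCode() must return the same value for both, otherwise HashMap cannot find the entry again.
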
Source: https://stackoverflow.com/questions/35082909/hash-collisions-in-hashmap