Question
Why does String.hashCode() have so many conflicts?
I'm reading String.hashCode() in JDK 1.6; below is the code:
public int hashCode() {
    int h = hash;
    if (h == 0) {
        int off = offset;
        char val[] = value;
        int len = count;

        for (int i = 0; i < len; i++) {
            h = 31*h + val[off++];
        }
        hash = h;
    }
    return h;
}
This looks quite confusing to me because it has so many collisions; although hash codes aren't required to be unique (we can still rely on equals()), fewer collisions mean better performance, since we avoid walking the entries in a linked list.
Suppose we have two characters. Then as long as we can find two strings satisfying the equation below, they will have the same hashCode():

a * 31 + b = c * 31 + d

It is easy to conclude that (a - c) * 31 = d - b; an easy example is to make a - c = 1 and d - b = 31.
So I wrote the code below as a simple test:
public void testHash() {
    System.out.println("A:" + (int)'A');
    System.out.println("B:" + (int)'B');
    System.out.println("a:" + (int)'a');

    System.out.println("Aa".hashCode() + "," + "BB".hashCode());
    System.out.println("Ba".hashCode() + "," + "CB".hashCode());
    System.out.println("Ca".hashCode() + "," + "DB".hashCode());
    System.out.println("Da".hashCode() + "," + "EB".hashCode());
}
It prints the results below, which means all these string pairs have the same hashCode(), and it's easy to generate more of them in a loop:
A:65
B:66
a:97
2112,2112
2143,2143
2174,2174
2205,2205
Even worse, suppose we have 4 characters in the string. According to the algorithm, if the first 2 characters produce hash a2 and the last 2 characters produce hash b2, the overall hash code will be a2 * 31^2 + b2. Thus, whenever a2 and b2 are each equal between two strings, we get even more strings with hashCode() collisions. Examples are "AaAa", "BBBB", and so on; the pattern extends to 6 characters, 8 characters, and beyond, as the sketch below shows.
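For illustration, here is a minimal sketch of that compounding effect: since "Aa" and "BB" both hash to 2112, concatenating n such blocks yields 2^n strings that all share one hashCode(). (The method name collidingStrings is mine, used only for this illustration.)

// Build 2^blocks strings that all share one hashCode() by concatenating
// the colliding two-character blocks "Aa" and "BB" (both hash to 2112).
// hashCode() is positional: hash(xy) = hash(x) * 31^len(y) + hash(y),
// so equal-hash blocks of equal length always compose into equal hashes.
public static java.util.List<String> collidingStrings(int blocks) {
    java.util.List<String> result = new java.util.ArrayList<String>();
    result.add("");
    for (int i = 0; i < blocks; i++) {
        java.util.List<String> next = new java.util.ArrayList<String>();
        for (String s : result) {
            next.add(s + "Aa");
            next.add(s + "BB");
        }
        result = next;
    }
    return result;
}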
Suppose that most of the time we use ASCII characters in strings that will be used as keys in a HashMap or Hashtable; then the chosen prime number 31 here is definitely too small.
One easy fix is to use a larger prime number (luckily, 257 is prime), which avoids this kind of collision. Of course, choosing too big a number will cause the returned int value to overflow if the string is very long, but I assume that most of the time a string used as a key is not that big. The method could also return a long value to avoid this, as sketched below.
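For completeness, here is a minimal sketch of that "return a long" idea: the same polynomial hash accumulated in a 64-bit long, so overflow only begins with much longer strings. (The name longHash and the factor 257 follow the suggestion above; this is not a JDK API.)

public static long longHash(String s) {
    long h = 0;
    for (int i = 0; i < s.length(); i++) {
        h = 257 * h + s.charAt(i);   // 64-bit accumulation delays overflow
    }
    return h;
}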
Below is my modified version, betterHash(), which solves such collisions easily. Running the test prints the values below, which shows the collisions are gone:
16802,17028
17059,17285
17316,17542
17573,17799
But why doesn't the JDK fix this? Thanks.
@Test
public void testBetterhash() {
    System.out.println(betterHash("Aa") + "," + betterHash("BB"));
    System.out.println(betterHash("Ba") + "," + betterHash("CB"));
    System.out.println(betterHash("Ca") + "," + betterHash("DB"));
    System.out.println(betterHash("Da") + "," + betterHash("EB"));
}

public static int betterHash(String s) {
    int h = 0;
    int len = s.length();
    for (int i = 0; i < len; i++) {
        h = 257*h + s.charAt(i);
    }
    return h;
}
Answer 1:
I just hashed 58 thousand English-language words (found here), both all-lowercase and with the first letter capitalized. Know how many collided? Two: "Siblings" and "Teheran" (an alternate spelling of "Tehran").
Just like you, I took a subdomain (in my case a likely one) of possible strings and analyzed the hashCode collision rate for it, and found it to be exemplary. Who is to say that your arbitrary subdomain of possible strings is a better choice to optimize for than mine?
The people that wrote this class had to do so knowing that they couldn't predict (nor therefore optimize) the subdomain in which their users would be using Strings as keys. So they chose a hash function which distributes evenly over the entire domain of strings.
If you're interested, here's my code (it uses Guava):
import com.google.common.base.Predicate;
import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.Maps;
import com.google.common.collect.Multimap;
import com.google.common.io.CharStreams;
import java.io.InputStreamReader;
import java.util.Collection;
import java.util.List;
import java.util.Map;

List<String> words = CharStreams.readLines(
        new InputStreamReader(StringHashTester.class.getResourceAsStream("corncob_lowercase.txt")));

// map each hash code to every word that produced it
Multimap<Integer, String> wordMap = ArrayListMultimap.create();
for (String word : words) {
    wordMap.put(word.hashCode(), word);
    String capitalizedWord = word.substring(0, 1).toUpperCase() + word.substring(1);
    wordMap.put(capitalizedWord.hashCode(), capitalizedWord);
}

// keep only the hash codes shared by more than one word
Map<Integer, Collection<String>> collisions =
        Maps.filterValues(wordMap.asMap(), new Predicate<Collection<String>>() {
            @Override
            public boolean apply(Collection<String> strings) {
                return strings.size() > 1;
            }
        });

System.out.println("Number of collisions: " + collisions.size());
for (Collection<String> collision : collisions.values()) {
    System.out.println(collision);
}
Edit
By the way, if you're curious, the same test with your hash function had 13 collisions, compared to String.hashCode's 1.
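For reference, here is a minimal sketch of how that comparison can be reproduced, assuming the words list from the code above and the asker's betterHash method: key the multimap on the alternative hash function instead of String.hashCode().

Multimap<Integer, String> altMap = ArrayListMultimap.create();
for (String word : words) {
    altMap.put(betterHash(word), word);   // swap in any hash function here
    String capitalizedWord = word.substring(0, 1).toUpperCase() + word.substring(1);
    altMap.put(betterHash(capitalizedWord), capitalizedWord);
}
// then filter and print exactly as before to count the collisions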
Answer 2:
I'm sorry, but we need to throw some cold water on this idea.
Your analysis is way too simplistic. You seem to have cherry-picked a subset of Strings that is designed to prove your point. This is not evidence that the number of collisions is (statistically) higher than expected across the domain of all strings.
Nobody in their right mind would expect String.hashCode to be highly collision-free [1]. It is simply not designed with that in mind. (If you want highly collision-free hashing, then use a crypto hash algorithm ... and pay the cost; a sketch follows below.) String.hashCode() is designed to be reasonably good across the domain of all Strings ... and fast.
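To make that trade-off concrete, here is a minimal sketch of the crypto-hash alternative using the JDK's standard MessageDigest API; the method name cryptoHash and the choice of SHA-256 are illustrative, not anything the JDK does for strings.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public static int cryptoHash(String s) throws NoSuchAlgorithmException {
    // a full SHA-256 digest, folded down to the int that hashCode() requires
    byte[] d = MessageDigest.getInstance("SHA-256")
            .digest(s.getBytes(StandardCharsets.UTF_8));
    return ((d[0] & 0xff) << 24) | ((d[1] & 0xff) << 16)
            | ((d[2] & 0xff) << 8) | (d[3] & 0xff);
}

Collisions become astronomically unlikely, but every call now pays for a full digest, which is exactly the cost alluded to above.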
Assuming that you could state a stronger case, this is not the place to state it. You need to raise this issue with the people who matter - Oracle's Java engineering team.
The current algorithm for String::hashCode has been part of the javadoc specification for String since Java 1.2. (And the algorithm almost certainly goes back to Java 1.0 and earlier.) If the algorithm were changed, it would be a breaking change for some applications. That is probably enough to kill the idea. The Java engineering team would have to weigh the advantages of such a change against the costs of implementing it, for them and for every user of Java.
The costs to users would include dealing with various potential performance and security issues, and the migration of any stored data that has dependencies on hashcodes. Or the cost of having more legacy applications that cannot realistically be ported to the latest version of Java.
1 - "Highly collision free hashing", is an idea / term that I pulled out of the air for the purposes of this answer. Sorry. However, the gist is that the probability of a hashcode collision for 2 strings should be independent of how related they are. So for instance "AA" and "bz" are related by virtue of having the same length. Obviously, this idea needs more thought. And it is also obvious that "relatedness" in the sense I'm talking about is not measurable ... sort of like Kolmogorov Complexity.)
Answer 3:
Collisions are inevitable when hashing. The hashCode() method returns an integer which is used as an index into an array, where each slot is a bucket for all objects with the same hash code. The equals(Object) method is then used to compare the target object with each one in the bucket to identify the exactly matching object, if it exists.

Ultimately, the hashCode() method just needs to be fast and not too weak (i.e. not causing too many collisions), where "too weak" is quite a fuzzy metric. A sketch of this two-step lookup follows.
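To illustrate the contract described above, here is a minimal sketch of the two-step lookup; it is not the real java.util.HashMap code, just the shape of it: hashCode() picks the bucket, equals() picks the entry within it.

// Hypothetical bucket-array lookup, not java.util.HashMap's implementation.
static <K, V> V lookup(java.util.List<java.util.Map.Entry<K, V>>[] buckets, K key) {
    int index = (key.hashCode() & 0x7fffffff) % buckets.length; // pick the bucket
    for (java.util.Map.Entry<K, V> entry : buckets[index]) {
        if (entry.getKey().equals(key)) {    // resolve collisions via equals()
            return entry.getValue();
        }
    }
    return null;                             // no matching key in this bucket
}

The more collisions a hash function produces, the longer those bucket scans become, which is exactly the performance concern raised in the question.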
Answer 4:
It is pretty efficient but also simple. All possible lower-case (ASCII) words up to six letters, and all numbers up to six digits long, have a unique hashCode(); i.e. the hashCode works like a base-31 number. Using a larger number has its own issues: a factor of 257 would leave every 8th bit not particularly random, as all ASCII characters have a 0 top bit, and an even larger factor would produce duplicate hashcodes for five- and six-digit/letter words. You can check the uniqueness claim with the brute-force sketch below.
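For the curious, here is a minimal brute-force sketch that checks the numbers part of that claim by hashing every number below one million and counting duplicate hash codes:

// Count duplicate hash codes over all numbers up to six digits.
java.util.Set<Integer> seen = new java.util.HashSet<Integer>();
int duplicates = 0;
for (int i = 0; i < 1000000; i++) {
    if (!seen.add(String.valueOf(i).hashCode())) {
        duplicates++;
    }
}
System.out.println("duplicates: " + duplicates); // 0 if the claim holds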
What is perhaps a bigger problem is that you can't change the hashing algorithm: whatever approach you take, there can be a case where it is a very bad choice, and it's likely to be sub-optimal for some particular use case.
Perhaps the biggest problem is denial-of-service attacks that make pathological cases, normally very rare, quite common. For example, a way to attack a web server is to fill a cache with keys that all have the same hashCode, e.g. 0, which is recomputed on every call (the cached hash field in the code above is only trusted when it is non-zero). This causes the HashMap to degenerate into a linked list.
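As a minimal sketch of that degeneration, one can reuse the collidingStrings helper sketched in the question section above to flood a HashMap with equal-hash keys. (Note that JDK 8+ mitigates this by treeifying large buckets of Comparable keys, but the JDK 6 map under discussion does not.)

// Every key lands in the same bucket, so each put/get scans a growing list.
java.util.Map<String, Integer> map = new java.util.HashMap<String, Integer>();
for (String key : collidingStrings(16)) {   // 65,536 keys, one shared hashCode
    map.put(key, key.length());
}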
A simple way around this is to make the hash algorithm unknown, and possibly changing. As it stands, the best option might be to use a TreeMap (which supports a custom Comparator, though the default would be fine in this case), as sketched below.
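Here is a minimal usage sketch of that TreeMap suggestion: lookups use compareTo rather than hashCode, so hash collisions are irrelevant and the worst case stays O(log n).

java.util.TreeMap<String, Integer> map = new java.util.TreeMap<String, Integer>();
map.put("Aa", 1);
map.put("BB", 2);                  // collides with "Aa" under hashCode(), harmless here
System.out.println(map.get("BB")); // prints 2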
Source: https://stackoverflow.com/questions/9406775/why-does-string-hashcode-in-java-have-many-conflicts