I was just reading the book Clean Code and came across this statement:
"When Java was young Doug Lea wrote the seminal book[8] Concurrent Programming in Java."
Doug Lea is extremely good at these things, so I wouldn't be surprised if at one time his ConcurrentHashMap performed better than Joshua Bloch's HashMap. However, as of Java 7, the first @author listed on HashMap is Doug Lea too. Obviously, there is now no reason HashMap should be any slower than its concurrent cousin.
Out of curiosity, I ran a benchmark anyway, under Java 7. The more entries there are, the closer the performance gets; eventually ConcurrentHashMap is within 3% of HashMap, which is quite remarkable. The real bottleneck is memory access; as the saying goes, "memory is the new disk (and disk is the new tape)". If the entries fit in the cache, both maps are fast; if they don't, both are slow. In real applications, a map doesn't have to be big to compete with other data for cache residency. If a map is used often, it stays cached; if not, it doesn't, and that is the real determining factor, not the implementation (given that both are implemented by the same expert).
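The setup: FakeMap (shown after the listing) is a map that stores nothing, so t0 is pure loop-and-Random overhead; t1 and t2 are the HashMap and ConcurrentHashMap times; and the printed diff is (t2 - t1) as a percentage of (t1 - t0), i.e. ConcurrentHashMap's extra time relative to HashMap's time once that overhead is subtracted. Each test() call does one put per entry and then RW_RATIO read passes over the same keys (the body of test() past its first loop header is sketched below under that assumption).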
import java.util.HashMap;
import java.util.Map;
import java.util.Random;
import java.util.concurrent.ConcurrentHashMap;

public static void main(String[] args)
{
    for(int i = 1; i < 100; i++)   // start at 1 so an empty run can't make (t1 - t0) zero below
    {
        System.out.println();
        int entries = i * 100 * 1000;
        long t0 = test( entries, new FakeMap() );            // baseline: loop + Random overhead only
        long t1 = test( entries, new HashMap() );
        long t2 = test( entries, new ConcurrentHashMap() );
        long diff = (t2 - t1) * 100 / (t1 - t0);             // CHM's extra time as % of HashMap's time net of overhead
        System.out.printf("entries=%,d time diff= %d%% %n", entries, diff);
    }
}

static long test(int ENTRIES, Map map)
{
    long SEED = 0;
    Random random = new Random(SEED);
    int RW_RATIO = 10;
    long t0 = System.nanoTime();
    // assumed benchmark body: one put per entry, then RW_RATIO read passes over the same keys
    for(int i = 0; i < ENTRIES; i++)
        map.put( random.nextLong(), random.nextLong() );
    for(int r = 0; r < RW_RATIO; r++)
    {
        random.setSeed(SEED);                    // replay the same key sequence
        for(int i = 0; i < ENTRIES; i++)
        {
            map.get( random.nextLong() );        // look up an existing key
            random.nextLong();                   // skip the value that was generated with it
        }
    }
    return System.nanoTime() - t0;
}
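For completeness, FakeMap needs to be a map that stores nothing, so that t0 above captures only the cost of the loop, the boxing, and the Random calls. A minimal sketch (extending HashMap and overriding put/get is just the shortest way to write a do-nothing Map; assume something equivalent):

// A do-nothing Map used purely as a baseline: a test() run against it measures
// only the surrounding loop and Random overhead, nothing map-related.
static class FakeMap extends HashMap
{
    public Object put(Object key, Object value) { return null; }   // discard writes
    public Object get(Object key) { return null; }                 // nothing is ever stored
}

Without that baseline, the fixed per-iteration cost would be counted as "map time" for both contestants and would shrink the apparent gap; subtracting it keeps the 3% figure about the maps themselves.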