
ConcurrentHashMap

◇◆丶佛笑我妖孽 submitted on 2019-11-30 16:20:29
I. Background (Why)
1. In a multithreaded environment, HashMap's put can trigger a resize; in JDK 1.7 a concurrent resize could corrupt the table into a circular linked list, causing an infinite loop and 100% CPU usage.
2. Hashtable or Collections.synchronizedMap(hashMap) can be used to solve the multithreading problem.
3. However, Hashtable and Collections.synchronizedMap(hashMap) guard reads and writes with a single global lock: while one thread reads or writes the map, all other threads must wait, so performance is poor.
4. ConcurrentHashMap also locks, but each lock covers only part of the map; other threads can keep reading and writing the parts that are not locked, which improves performance.
II. The JDK 1.7 implementation (good to know): Segment locks + HashEntry + ReentrantLock
1. HashEntry is the key-value pair used to store data.
2. A bucket is a linked list made of HashEntry nodes.
3. A Segment lock extends ReentrantLock; each Segment object locks a group of buckets.
4. A ConcurrentHashMap instance contains an array of Segment objects.
5. HashEntry -> bucket -> Segment -> ConcurrentHashMap, nested layer by layer.
III. The JDK 1.8 implementation (understand this one): Node + CAS + synchronized, with finer-grained locking
1. Data structure
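The point in item 4 above — other threads can keep writing to parts of the map that are not locked — can be sketched with a small demo: several threads put disjoint keys into one ConcurrentHashMap with no external lock, and no entry is lost. (The thread and key counts here are arbitrary example values.)

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentPutDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<Integer, Integer> map = new ConcurrentHashMap<>();
        Thread[] threads = new Thread[4];
        for (int t = 0; t < 4; t++) {
            final int base = t * 1000;                // disjoint key range per thread
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 1000; i++) map.put(base + i, i);
            });
            threads[t].start();
        }
        for (Thread th : threads) th.join();
        System.out.println(map.size()); // 4000: no lost updates
    }
}
```

Running the same loop against a plain HashMap may lose entries or, on JDK 1.7, hang during a resize.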

亿达信息

孤人 submitted on 2019-11-30 16:01:46
1. Difference between HashMap and TreeMap
=> HashMap is an unordered collection; TreeMap is ordered: it implements the SortedMap interface and keeps entries sorted by key. Because of this, HashMap access is generally faster than TreeMap.
=> HashMap is backed by a hash table; TreeMap is backed by a red-black tree.
2. What does HashSet do, and what is it built on?
=> HashSet is an unordered collection implementing the Set interface; it rejects duplicate elements and is internally backed by a HashMap. It is often used for workloads with heavy inserts where order doesn't matter and lookups are rare.
3. Why does ConcurrentHashMap handle concurrency better than Hashtable?
=> ConcurrentHashMap was added in JDK 1.5, lives in the java.util.concurrent package, and is designed specifically for concurrent scenarios.
=> Hashtable locks the entire structure on every synchronized operation, whereas ConcurrentHashMap locks at a finer granularity: in JDK 1.7 it splits the table into 16 segments by default and locks only the segment currently in use. So where previously only one thread could enter at a time, now up to 16 writer threads can proceed concurrently, and throughput improves.
=> Traditional collections throw ConcurrentModificationException if the collection changes during iteration; ConcurrentHashMap's iterators are instead weakly consistent: they never throw that exception, and they reflect the state of the map at some point at or after the iterator's creation, so concurrent modification does not disturb an iteration in progress.
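The iterator difference in point 3 is easy to show: modifying a HashMap mid-iteration fails fast with ConcurrentModificationException, while the same pattern on a ConcurrentHashMap completes without an exception.

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class IteratorDemo {
    public static void main(String[] args) {
        Map<Integer, Integer> hashMap = new HashMap<>();
        for (int i = 0; i < 10; i++) hashMap.put(i, i);
        try {
            for (Integer k : hashMap.keySet()) {
                if (k == 0) hashMap.put(100, 100); // structural change mid-iteration
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("HashMap: CME");     // fail-fast iterator
        }

        Map<Integer, Integer> chm = new ConcurrentHashMap<>();
        for (int i = 0; i < 10; i++) chm.put(i, i);
        for (Integer k : chm.keySet()) {
            if (k == 0) chm.put(100, 100);          // allowed: weakly consistent iterator
        }
        System.out.println("ConcurrentHashMap: no CME");
    }
}
```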

What is the difference between Segment of ConcurrentHashMap and buckets of HashMap theoretically?

余生长醉 submitted on 2019-11-30 15:29:34
I understand that in HashMap, entries (key, value) are placed in buckets based on hash(key.hashCode()) — the index that denotes the bucket location. If an entry is already placed at that location, a linked list is created and the new entry (if it has a different key, as determined by the equals() method) is placed at the beginning of the linked list. Can I correlate this concept with that of ConcurrentHashMap, where instead of buckets there are Segments on which individual threads hold a lock, and instead of Entries there are HashEntry(ies)? In similar fashion, a linked list is created and if
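The relationship the question asks about can be sketched as two index computations: a hash first selects a Segment (the lock unit), and then, inside that segment's table, the usual bucket index. This is an illustrative sketch of the JDK 1.7 scheme — the names and constants below are simplified, not the actual OpenJDK source.

```java
public class SegmentIndexSketch {
    static final int SEGMENT_COUNT = 16;       // default concurrencyLevel
    static final int SEGMENT_SHIFT = 32 - 4;   // use the top 4 bits to pick a segment
    static final int SEGMENT_MASK = SEGMENT_COUNT - 1;

    // Which Segment (and thus which lock) guards this hash.
    static int segmentFor(int hash) {
        return (hash >>> SEGMENT_SHIFT) & SEGMENT_MASK;
    }

    // Within the chosen segment's table: same bucket rule as HashMap.
    static int bucketFor(int hash, int tableLength) {
        return hash & (tableLength - 1);
    }

    public static void main(String[] args) {
        int h = "someKey".hashCode();
        System.out.println("segment=" + segmentFor(h) + ", bucket=" + bucketFor(h, 16));
    }
}
```

So a Segment is not a bucket: it is a lock that owns a whole sub-table of buckets, and collisions within a bucket still form a linked list of HashEntry nodes exactly as in HashMap.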

Interview Rapid-Fire Series (9): Why ConcurrentHashMap Is Thread-Safe

和自甴很熟 submitted on 2019-11-30 14:49:51
Why is ConcurrentHashMap thread-safe?
In JDK 1.7, ConcurrentHashMap uses lock striping: the data is split into segments, and each segment gets its own lock. While one thread holds the lock to access one segment's data, the data in the other segments can still be accessed by other threads.
Then how does the JDK 1.7 Segment work?
Those "segments" are Segment objects. Segment extends ReentrantLock, so it can acquire and release a lock. A ConcurrentHashMap has 16 Segments by default, and the Segment array itself never grows (although each segment's internal table can still resize), so at most 16 threads can write concurrently.
How does JDK 1.8's ConcurrentHashMap achieve thread safety?
JDK 1.8 dropped lock striping in favor of CAS plus synchronized. Taking put as an example: when the target bin is empty, the new node is installed with a CAS; otherwise, synchronized on the bin's head node protects the update of the linked list (or tree).
What are the benefits of the JDK 1.8 approach?
Lower memory overhead. Suppose reentrant locks were used instead: every node would need to carry AQS-based lock state, yet not every node needs synchronization support — only the head node of each linked list (or the root of each red-black tree) does — so per-node locks would waste an enormous amount of memory.
JVM support. A reentrant lock is, after all, an API-level construct, leaving little room for further performance tuning. synchronized is supported directly by the JVM, which can apply optimizations at runtime: lock coarsening, lock elision, adaptive spinning, and so on.
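The user-visible payoff of the "lock only the touched bin" design above is that per-key atomic updates such as merge() scale across threads with no external lock. A minimal demo (the thread and iteration counts are arbitrary example values):

```java
import java.util.concurrent.ConcurrentHashMap;

public class MergeCounterDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        Thread[] workers = new Thread[4];
        for (int t = 0; t < 4; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < 10_000; i++) {
                    counts.merge("hits", 1, Integer::sum); // atomic per key
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        System.out.println(counts.get("hits")); // 40000: no increments lost
    }
}
```

The equivalent read-modify-write with get() followed by put() would race and lose increments.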

ConcurrentHashMap constructor parameters?

≡放荡痞女 submitted on 2019-11-30 06:37:43
I am wondering about the parameters for constructing a ConcurrentHashMap: initialCapacity is 16 by default (understood). loadFactor is 0.75 by default. concurrencyLevel is 16 by default. My questions are: What criteria should be used to adjust loadFactor up or down? How do we establish the number of concurrently updating threads? What criteria should be used to adjust concurrencyLevel up or down? Additionally: what are the hallmarks of a good hashCode implementation? (If an SO question addresses this, just link to it.) Thank you! The short answer: set "initial capacity" to roughly how many
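For reference, the three-argument constructor the question is about looks like this in use; the concrete values here are arbitrary illustrations, not recommendations.

```java
import java.util.concurrent.ConcurrentHashMap;

public class CtorDemo {
    public static void main(String[] args) {
        int initialCapacity = 1000;   // example: roughly the expected entry count
        float loadFactor = 0.75f;     // default: resize when ~75% full
        int concurrencyLevel = 8;     // example: estimated concurrent writer threads
        ConcurrentHashMap<String, Integer> map =
            new ConcurrentHashMap<>(initialCapacity, loadFactor, concurrencyLevel);
        map.put("a", 1);
        System.out.println(map.size()); // 1
    }
}
```

Note that in JDK 1.8 concurrencyLevel is only a sizing hint, since the segment structure it once configured no longer exists.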

When should I use ConcurrentSkipListMap?

泪湿孤枕 submitted on 2019-11-30 06:12:35
Question: In Java, ConcurrentHashMap is there as a better multithreading solution. So when should I use ConcurrentSkipListMap? Is it redundant? Are the multithreading aspects of these two the same? Answer 1: These two classes vary in a few ways. ConcurrentHashMap does not guarantee* the runtime of its operations as part of its contract. It also allows tuning for certain load factors (roughly, the number of threads concurrently modifying it). ConcurrentSkipListMap, on the other hand, guarantees
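The practical distinction is ordering: ConcurrentSkipListMap keeps keys sorted and supports navigation and range views that ConcurrentHashMap cannot offer, at the cost of O(log n) operations. A small demo:

```java
import java.util.concurrent.ConcurrentNavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class SkipListDemo {
    public static void main(String[] args) {
        ConcurrentNavigableMap<Integer, String> map = new ConcurrentSkipListMap<>();
        map.put(3, "c");
        map.put(1, "a");
        map.put(2, "b");
        System.out.println(map.firstKey());          // 1: keys stay sorted
        System.out.println(map.headMap(3).keySet()); // [1, 2]: concurrent range view
    }
}
```

So it is not redundant: pick ConcurrentHashMap for raw keyed access, ConcurrentSkipListMap when you need sorted traversal or firstKey/headMap-style queries under concurrency.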

How to Use HashMap in a Thread-Safe Way

夙愿已清 submitted on 2019-11-30 06:04:47
Why is HashMap not thread-safe?
Everyone keeps saying HashMap is not thread-safe — unsafe, unsafe, unsafe — but why exactly? To answer that, we first need a quick look at the storage structure used in the HashMap source (the Java 8 source is quoted here; Java 7 differs) and at its resize mechanism.
HashMap's internal storage structure
This is the storage structure HashMap uses:

transient Node<K,V>[] table;

static class Node<K,V> implements Map.Entry<K,V> {
    final int hash;
    final K key;
    V value;
    Node<K,V> next;
}

As you can see, HashMap's internal storage is a Node array (default size 16), and the Node class holds a next field of type Node — effectively a linked list. All keys with the same hash (i.e., colliding keys) are stored in the same list, roughly as in the diagram below (incidentally, Creately is a handy online diagramming site).
Note that in Java 8, when the number of keys with the same hash exceeds a threshold (8 by default), a balanced tree replaces the linked list, which improves the performance of get() from O(n) to O(log n). For details, see my other post on how HashMap and LinkedHashMap resolve collisions in Java 8: http://yemengying.com/2016/02
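The question in the title — how to use HashMap thread-safely — is usually answered with one of three standard JDK options; the sketch below just shows all three being constructed and used.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SafeMapOptions {
    public static void main(String[] args) {
        Map<String, Integer> m1 = new Hashtable<>();                            // legacy: one global lock
        Map<String, Integer> m2 = Collections.synchronizedMap(new HashMap<>()); // wrapper: one global lock
        Map<String, Integer> m3 = new ConcurrentHashMap<>();                    // fine-grained locking
        m1.put("a", 1);
        m2.put("a", 1);
        m3.put("a", 1);
        System.out.println(m1.get("a") + m2.get("a") + m3.get("a")); // 3
    }
}
```

The first two serialize every operation on a single lock; ConcurrentHashMap is the usual choice when the map is actually contended.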

Is read operation in ConcurrentHashMap reliable regarding the return value?

痞子三分冷 submitted on 2019-11-30 05:40:46
Question: I read in a book that a read from a ConcurrentHashMap does not guarantee the most recently updated state and can sometimes give a close (stale) value. Is this correct? I have read its javadocs and many blogs which seem to say otherwise (i.e. that it is accurate). Which one is true? Answer 1: Intuitively, a ConcurrentHashMap should behave like a set of volatile variables, the map keys being the variable addresses. get(key) and put(key, value) should behave like a volatile read and write. That is not explicitly
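The volatile-like behaviour the answer describes can be exercised with a minimal visibility demo: a writer thread publishes a value via put(), and a reader spins on get() with no other synchronization; the happens-before edge between put and a subsequent get guarantees the reader eventually sees the value.

```java
import java.util.concurrent.ConcurrentHashMap;

public class VisibilityDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        Thread writer = new Thread(() -> map.put("flag", 42)); // publish
        writer.start();
        Integer v;
        while ((v = map.get("flag")) == null) {                // spin until visible
            Thread.onSpinWait();
        }
        System.out.println(v); // 42
    }
}
```

This shows visibility, not recency: a get() that races with an in-flight put() may legitimately return the previous value, which is presumably what the book meant.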

ThreadLocal HashMap vs ConcurrentHashMap for thread-safe unbound caches

巧了我就是萌 submitted on 2019-11-30 03:45:43
Question: I'm creating a memoization cache with the following characteristics:
- a cache miss will result in computing and storing an entry
- this computation is very expensive
- this computation is idempotent
- unbounded (entries are never removed) since:
  - the inputs would result in at most 500 entries
  - each stored entry is very small
  - the cache is relatively short-lived (typically less than an hour)
  - overall, memory usage isn't an issue
- there will be thousands of reads - over the cache's lifetime, I expect 99.9%+
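A cache with the properties listed above maps naturally onto ConcurrentHashMap.computeIfAbsent: the mapping function runs at most once per key per map, and concurrent readers contend only on the touched bin. A minimal sketch, with an illustrative stand-in (squaring) for the expensive idempotent computation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MemoCache {
    private final Map<Integer, Long> cache = new ConcurrentHashMap<>();

    // computeIfAbsent runs the function atomically, at most once per key.
    long expensiveSquare(int n) {
        return cache.computeIfAbsent(n, k -> (long) k * k);
    }

    public static void main(String[] args) {
        MemoCache memo = new MemoCache();
        System.out.println(memo.expensiveSquare(7)); // 49, computed
        System.out.println(memo.expensiveSquare(7)); // 49, served from the cache
    }
}
```

One caveat for this use case: the mapping function holds the bin lock while it runs, so a very slow computation briefly blocks other writers that hash to the same bin, which is usually acceptable for a 500-entry cache.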