In a hashmap, the addition of a new element to the internal linked list of a bucket is always at the end. Why?

Submitted by 醉酒当歌 on 2021-02-07 05:49:27

Question


In a HashMap, when keys have the same hash code, their entries are stored in that bucket as a linked list, which is later converted to a tree of TreeNodes once the list grows long enough. Every new entry with a colliding hash code is appended at the end of that linked list. So my question is: why don't we add the new element as the first element of the bucket's internal linked list? Why do we traverse to the last element and only then add the new element?

Time taken by a linked list to:

Insert a new element at the start: O(1)

Insert a new element at the end: O(n)
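
For reference, here is a minimal sketch in Java (a hypothetical standalone list, not HashMap's internal code) illustrating why inserting at the head is O(1) while appending without a tail pointer requires walking the whole list:

class SinglyLinkedList<T> {
    static class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private Node<T> head;

    // O(1): only the head reference changes.
    void addFirst(T value) {
        Node<T> node = new Node<>(value);
        node.next = head;
        head = node;
    }

    // O(n): must walk to the last node before linking in the new one.
    void addLast(T value) {
        Node<T> node = new Node<>(value);
        if (head == null) {
            head = node;
            return;
        }
        Node<T> current = head;
        while (current.next != null) {
            current = current.next;
        }
        current.next = node;
    }
}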


A probable answer could be that, since HashMap is not thread-safe, concurrent reading and writing of elements at a single position can lead to anomalies. For example, consider two operations:

T1 -- adding a new entry to the map whose hash code is already present in the HashMap.

T2 -- reading an entry from the map, again with a hash code already present in the HashMap.

If these two operations take place simultaneously and new elements are added at the start instead of the end, there might be a slightly higher chance that the read operation is affected, because the changes are being made at the first position.

If anyone has better insight on this, please comment.


Answer 1:


Because you can't just append the element when the hash code is the same. When hash codes collide, the map still has to call equals() on the key of every node in the linked list to check whether an equal key is already present (in which case the existing value is replaced rather than a new node being added). Since the whole list has to be traversed anyway, adding the new node at the end costs nothing extra compared to adding it at the head.
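
To illustrate, here is a minimal sketch in Java of how a bucket insert could work. It is a simplified, hypothetical version (the class and method names are made up), not the actual OpenJDK source, which additionally caches hashes, handles null keys, resizes, and treeifies long bins:

class SimpleBucketMap<K, V> {
    static class Node<K, V> {
        final K key;
        V value;
        Node<K, V> next;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    @SuppressWarnings("unchecked")
    private final Node<K, V>[] table = (Node<K, V>[]) new Node[16];

    public V put(K key, V value) {
        int index = (table.length - 1) & key.hashCode();
        Node<K, V> head = table[index];
        if (head == null) {                         // empty bucket: no traversal needed
            table[index] = new Node<>(key, value);
            return null;
        }
        Node<K, V> node = head;
        while (true) {
            if (node.key.equals(key)) {             // equal key found: replace the value
                V old = node.value;
                node.value = value;
                return old;
            }
            if (node.next == null) {                // reached the tail without a match:
                node.next = new Node<>(key, value); // link the new node here
                return null;
            }
            node = node.next;
        }
    }
}

By the time the tail is reached without finding an equal key, the traversal cost has already been paid, so linking the new node there is only O(1) extra work; inserting at the head would not avoid that traversal.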



Source: https://stackoverflow.com/questions/57921060/in-a-hashmap-the-addition-of-a-new-element-to-the-internal-linked-list-of-a-buc
