In computer science, it is said that the insert, delete and search operations for hash tables have a complexity of O(1), which is the best possible. So I was wondering: why don't we use hash tables for everything?
A HashTable is not the answer for everything. If your hash function does not distribute your keys well, a HashMap may degenerate into a linked list in the worst case, and then insertion, deletion, and search all take O(N).
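A minimal sketch of this degradation (the class `BadKey` is hypothetical, invented for the demo): every key hashes to the same value, so all entries pile into one bucket. In a classic chained hash table each lookup then walks the whole chain; note that Java 8+ mitigates this by converting large buckets into balanced trees, capping lookups at O(log N) rather than O(N).

```java
import java.util.HashMap;
import java.util.Map;

public class BadHashDemo {
    // Hypothetical key type whose hashCode() sends every key to the same bucket.
    static final class BadKey {
        final int id;
        BadKey(int id) { this.id = id; }
        @Override public int hashCode() { return 42; } // constant: every key collides
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).id == id;
        }
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 10_000; i++) map.put(new BadKey(i), i);
        // All 10,000 entries share a single bucket, so get() must search
        // that bucket instead of jumping straight to the entry in O(1).
        System.out.println(map.get(new BadKey(9_999))); // prints 9999
    }
}
```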
A HashMap also has a significant memory footprint, so in use cases where memory is more precious than time complexity, a HashMap may not be the best choice.
A HashMap is not an answer for range queries or prefix queries. That is why most database vendors implement indexing with a B-tree rather than with hashing alone: an ordered structure answers range and prefix queries efficiently, while a hash table scatters adjacent keys.
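To illustrate the ordered-index point with in-memory structures (a sketch, using Java's `TreeMap`, a red-black tree, as a stand-in for a database B-tree index):

```java
import java.util.NavigableMap;
import java.util.TreeMap;

public class RangeQueryDemo {
    public static void main(String[] args) {
        // An ordered map answers range queries directly; a HashMap
        // would have to scan every key to find those in the range.
        TreeMap<Integer, String> index = new TreeMap<>();
        index.put(10, "a"); index.put(20, "b");
        index.put(30, "c"); index.put(40, "d");

        // All entries with key in [15, 35]: O(log N + K) in a tree,
        // but necessarily O(N) in a hash table.
        NavigableMap<Integer, String> range = index.subMap(15, true, 35, true);
        System.out.println(range.keySet()); // prints [20, 30]
    }
}
```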
HashTables in general exhibit poor locality of reference: the data to be accessed is distributed seemingly at random in memory, which leads to frequent cache misses.
For certain string processing applications, such as spellchecking, hash tables may be less efficient than tries, finite automata, or Judy arrays. Also, if each key is represented by a small enough number of bits, then, instead of a hash table, one may use the key directly as the index into an array of values. Note that there are no collisions in this case.
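The direct-indexing idea can be sketched as follows: when keys are bytes (0..255), the key itself serves as the array index, so there is no hash function and no possibility of collisions.

```java
public class DirectIndexDemo {
    public static void main(String[] args) {
        // Keys fit in 8 bits, so an array indexed by the key itself
        // replaces the hash table entirely: no hashing, no collisions.
        int[] counts = new int[256];
        for (byte b : "hello".getBytes()) {
            counts[b & 0xFF]++; // the key IS the index
        }
        System.out.println(counts['l']); // prints 2
    }
}
```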