Why not use hashing/hash tables for everything?

Tags: backend · unresolved · 5 answers · 560 views
Asked by 梦如初夏 on 2020-12-29 03:13

In computer science, it is said that the insert, delete, and search operations on hash tables have a complexity of O(1), which is the best possible. So I was wondering: why do we use any other data structure at all?

5 Answers
  • 2020-12-29 03:43

    Hash tables, on average, do have excellent time complexity for insertion, retrieval, and deletion. BUT:

    1. Big-O complexity isn't everything. The constant factor is also very important. You could use hashtables in place of arrays, with the array indexes as hash keys. In either case, the time complexity of retrieving an item is O(1). But the constant factor is way higher for the hash table as opposed to the array.

    2. Memory consumption may be much higher. This is certainly true if you use hash tables to replace arrays. (Of course, if the array is sparse, then the hash table may take less memory.)

    3. There are some operations which are not efficiently supported by hash tables, such as iterating over all the elements whose keys are within a certain range, finding the element with the largest key or smallest key, and so on.

    All of that aside, you do still have a good point. Hashtables have an extraordinarily broad range of suitable use cases. That's why they are the primary built-in data structure in some scripting languages, like Lua.
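Point 3 above is easy to see in Python (a small sketch with made-up keys): a dict answers exact-key lookups in O(1), but a range query like "all keys in [3, 8)" forces a scan over every entry, while a sorted list answers the same query with binary search.

```python
import bisect

data = {k: k * k for k in [5, 1, 9, 3, 7]}

# With the hash table, a range query is an O(n) scan over all keys:
in_range_dict = sorted(k for k in data if 3 <= k < 8)

# A sorted list supports the same query in O(log n) via binary search:
keys = sorted(data)                  # [1, 3, 5, 7, 9]
lo = bisect.bisect_left(keys, 3)
hi = bisect.bisect_left(keys, 8)
in_range_sorted = keys[lo:hi]        # [3, 5, 7]

assert in_range_dict == in_range_sorted == [3, 5, 7]
```

The same asymmetry applies to "smallest key" and "largest key": a balanced tree or sorted structure keeps them cheap, a hash table does not.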

  • 2020-12-29 03:46

    The potential security issues of hash tables on the web should also be pointed out. If someone knows the hash function, they can mount a denial-of-service attack by creating lots of keys with the same hash code, forcing every operation into one bucket.
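    A toy sketch of this "hash flooding" attack, using a deliberately weak (hypothetical) hash function that depends only on a key's first character, so attacker-chosen keys all collide into one bucket and lookups degrade from O(1) to O(n):

    ```python
    NUM_BUCKETS = 64

    def weak_hash(key: str) -> int:
        # Hypothetical vulnerable hash: depends only on the first character.
        return ord(key[0]) % NUM_BUCKETS

    buckets = [[] for _ in range(NUM_BUCKETS)]
    for i in range(1000):
        key = "a" + str(i)          # attacker-chosen: all share the first char
        buckets[weak_hash(key)].append(key)

    longest = max(len(b) for b in buckets)
    assert longest == 1000          # every insert landed in a single bucket
    ```

    This is why modern runtimes (CPython, Rust, Ruby) seed their string hashes randomly per process, e.g. with SipHash, so an attacker cannot precompute colliding keys.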

  • 2020-12-29 03:47

    You can use a hash table to look up an element, but you cannot use it for things like quickly finding the largest key; you should choose the data structure suited to the specific problem. Hashing cannot solve every problem.
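    For example (a sketch with arbitrary values): finding the maximum in a hash table is always a full O(n) scan, while a heap keeps the extreme element at the top.

    ```python
    import heapq

    values = {"a": 42, "b": 7, "c": 99}

    # With a hash table, finding the largest value requires scanning everything:
    largest = max(values.values())        # O(n) every time

    # A heap exposes the extreme element in O(1), with O(log n) updates
    # (max-heap simulated by negating, since heapq is a min-heap):
    heap = [-v for v in values.values()]
    heapq.heapify(heap)
    assert -heap[0] == largest == 99
    ```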

  • 2020-12-29 04:04
    1. Hash tables are not sorted (use a tree map)
    2. Hash tables are not best for head/tail inserts (use a linked list or deque)
    3. Hash tables carry overhead to support searching (compared with a plain vector/array)
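    Point 2 in a quick sketch: a deque gives O(1) inserts at both ends, while a hash table has no notion of a "front" or "back" at all.

    ```python
    from collections import deque

    d = deque([2, 3])
    d.appendleft(1)   # O(1) head insert
    d.append(4)       # O(1) tail insert
    assert list(d) == [1, 2, 3, 4]
    ```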
  • 2020-12-29 04:08
    • A hash table is not the answer for everything. If your hash function does not distribute your keys well, a HashMap may degenerate into a linked list, in which case insertion, deletion, and search all take O(N) in the worst case.

    • A HashMap has a significant memory footprint, so in use cases where memory is more precious than time complexity, a HashMap may not be the best choice.

    • A HashMap is not an answer for range queries or prefix queries. That is why most database vendors implement indexing with B-trees rather than hashing alone.

    • Hash tables in general exhibit poor locality of reference: the data to be accessed is distributed seemingly at random in memory, which hurts cache performance.

    • For certain string-processing applications, such as spell checking, hash tables may be less efficient than tries, finite automata, or Judy arrays. Also, if each key is represented by a small enough number of bits, then instead of a hash table one may use the key directly as an index into an array of values. Note that there are no collisions in this case.
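    The last bullet in a minimal sketch: when keys fit in a small integer range (here, byte values 0–255), direct indexing into an array replaces the hash table outright, with no collisions possible.

    ```python
    data = b"hello"
    counts = [0] * 256            # one slot per possible byte value
    for byte in data:
        counts[byte] += 1         # the key IS the array index

    assert counts[ord("l")] == 2
    assert counts[ord("h")] == 1
    ```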
