nested dictionaries or tuples for key?

自闭症患者 2020-12-15 04:49

Suppose there is a structure like this:

{'key1' : { 'key2' : { .... { 'keyn' : 'value' } ... } } }

Using Python, I'm trying to determine which is more efficient: keeping the dictionaries nested like this, or flattening them into a single dictionary keyed by tuples, i.e. {('key1', 'key2', ..., 'keyn') : 'value'}, in terms of memory consumption and lookup/insertion cost.

4 Answers
  •  甜味超标
    2020-12-15 04:56

    Without going into details (which are highly implementation-dependent anyway and may be invalidated by the next genius to come along and tweak the dictionary implementation):

    • For memory overhead: Each object has some overhead (e.g. refcount and type; an empty object is 8 bytes and an empty tuple is 28 bytes), but hash tables need to store hash, key and value for each entry, and usually use more buckets than currently needed to avoid collisions. Tuples, on the other hand, can't be resized and don't have collisions, i.e. an N-tuple can simply allocate N pointers to the contained objects and be done. This leads to noticeable differences in memory consumption.
    • For lookup and insertion complexity (the two are identical in this regard): Be it a string or a tuple, collisions are rather unlikely in CPython's dict implementation, and resolved very efficiently. More keys (because you flatten the key space by combining the keys into tuples) may seem to increase the likelihood of collisions, but more keys also lead to more buckets (AFAIK the current implementation tries to keep the load factor below 2/3), which in turn makes collisions less likely. Also, you don't need more hashing (well, one more function call and some C-level xor-ing for the tuple hash, but that's negligible) to get to a value. A rough timing comparison is sketched after this list.
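
    As a minimal sketch (my own addition, not from the original answer; assuming CPython 3.x), the two lookup styles can be timed side by side with timeit:

        import timeit

        nested = {'key1': {'key2': {'key3': 'value'}}}
        flat = {('key1', 'key2', 'key3'): 'value'}

        # Chained lookups: one hash table probe per nesting level.
        t_nested = timeit.timeit(lambda: nested['key1']['key2']['key3'],
                                 number=1_000_000)

        # Single lookup: one tuple hash (a C-level combination of the
        # element hashes) plus one probe.
        t_flat = timeit.timeit(lambda: flat[('key1', 'key2', 'key3')],
                               number=1_000_000)

        print(f"nested: {t_nested:.3f}s  flat: {t_flat:.3f}s")

    On typical builds the two come out in the same ballpark, which is the point being made above.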

    You see, there shouldn't be any noticeable difference in performance, though there may be some difference in memory consumption. The latter won't be notable though, I think. A one-element dict is 140 bytes, and a ten-element tuple is 140 bytes as well (according to Python 3.2's sys.getsizeof). So even with the (already unrealistic, says my gut feeling) ten-level nesting, you'd have slightly more than one kB of difference - possibly less if the nested dicts have multiple items (depends on the exact load factor). That's too much for a data-crunching application that has hundreds of such data structures in memory, but most objects aren't created that frequently.
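
    Those numbers are easy to re-check; as a small sketch (my addition), sys.getsizeof reports the shallow size of each container, i.e. it does not include the objects a dict or tuple refers to:

        import sys

        print(sys.getsizeof({}))                # empty dict
        print(sys.getsizeof({'a': 1}))          # one-element dict
        print(sys.getsizeof(()))                # empty tuple
        print(sys.getsizeof(tuple(range(10))))  # ten-element tuple

    The exact values vary with Python version and platform, so treat the figures quoted above as illustrative rather than exact.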

    You should simply ask yourself which model is more appropriate for your problem. Consider that the second way (tuple keys) requires you to have all keys for a value available at once, while the first (nested dicts) allows getting there incrementally.
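
    A short illustration of that difference (my own sketch, not from the original answer):

        nested = {'key1': {'key2': {'key3': 'value'}}}
        flat = {('key1', 'key2', 'key3'): 'value'}

        # Nested dicts: descend one key at a time and hold an
        # intermediate sub-dictionary until the remaining keys are known.
        level = nested['key1']
        value = level['key2']['key3']

        # Tuple keys: the complete key must be assembled before the lookup.
        value_flat = flat[('key1', 'key2', 'key3')]

        assert value == value_flat == 'value'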
