Suppose there is a structure like this:
{'key1': {'key2': { ... {'keyn': 'value'} ... }}}
Using Python, I'm trying to determine whether it is more efficient to use nested dictionaries like this or a single dictionary keyed by tuples, i.e. {('key1', 'key2', ..., 'keyn'): 'value'}.
Without going into details (which are highly implementation-dependent anyway and may be invalidated by the next genius to come along and tweak the dictionary implementation):
There shouldn't be any noticeable difference in performance, though there is some difference in memory use. That difference won't be significant, however: a one-element dict is 140 bytes, and a ten-element tuple is 140 bytes as well (according to sys.getsizeof on Python 3.2). So even with ten levels of nesting (already unrealistic, my gut feeling says), you'd see slightly more than one kB of difference per structure, possibly less if the nested dicts hold multiple items (it depends on the exact load factor). That could add up in a data-crunching application holding hundreds of such structures in memory, but most applications don't create objects that frequently.
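You can check the figures on your own interpreter; a minimal sketch (the exact byte counts are implementation- and platform-dependent, so they will likely differ from the Python 3.2 numbers quoted above):

```python
import sys

# Per-level overhead of the nested approach: one extra dict per level.
one_item_dict = {'key1': 'value'}

# One-off overhead of the flat approach: a single tuple key.
ten_item_tuple = ('k0', 'k1', 'k2', 'k3', 'k4', 'k5', 'k6', 'k7', 'k8', 'k9')

print(sys.getsizeof(one_item_dict))   # e.g. 140 on Python 3.2; varies by build
print(sys.getsizeof(ten_item_tuple))
```

Note that sys.getsizeof is shallow: it counts only the container itself, not the keys and values it references, which is exactly what we want when comparing the structural overhead of the two layouts.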
You should simply ask yourself which model is more appropriate for your problem. Note that the tuple-key approach requires you to have all the keys for a value available at once, while nesting lets you get there incrementally.