On this page, I see something interesting:
Note that there is a fast-path for dicts that (in practice) only deal with str keys; this doesn't affect the algorithmic complexity, but it can significantly affect the constant factors.
Since this only affects the constant factor, it's unlikely to matter at all. The only time you really need to optimise is when you are working with very large data sets - and this does nothing to help there.
What this does mean is that where you have small dictionaries with strings as keys, Python will be quick - this is a common usage, so it has been optimised for.
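To give a sense of why str-keyed dicts are the common case (this is my own illustration, not from the linked page): keyword arguments, instance attributes and module namespaces are all ordinary dicts whose keys are strings.

def report(**kwargs):
    # **kwargs arrives as a plain dict with str keys
    return sorted(kwargs)

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

print(report(height=180, weight=75))  # ['height', 'weight']
print(vars(Point(1, 2)))              # {'x': 1, 'y': 2} - the instance __dict__, str keys again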
As Ignacio Vazquez-Abrams points out, converting your key to a string will likely cost (far) more than the slight speed-up you gain from the dict containing only string keys.
In short, use whatever is natural for your situation - optimisation should only be done where there is a need for it, not before.
Some tests:
python -m timeit -s "a={key: 1 for key in range(1000)}" "a[500]"
10000000 loops, best of 3: 0.0773 usec per loop
python -m timeit -s "a={str(key): 1 for key in range(1000)}" "a[\"500\"]"
10000000 loops, best of 3: 0.0452 usec per loop
python -m timeit -s "a={str(key): 1 for key in range(1000)}" "a[str(500)]"
1000000 loops, best of 3: 0.244 usec per loop
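If you would rather reproduce this from within Python than fight with shell quoting (this sketch is mine, not part of the original tests), the timeit module does the same job; the absolute numbers will vary with machine and Python version, but the relative ordering is the point:

import timeit

int_keys = "a = {key: 1 for key in range(1000)}"
str_keys = "a = {str(key): 1 for key in range(1000)}"

# int key lookup
print(timeit.timeit("a[500]", setup=int_keys, number=10000000))
# str key lookup (hits the str-only fast path)
print(timeit.timeit('a["500"]', setup=str_keys, number=10000000))
# str key lookup plus the cost of converting the key each time
print(timeit.timeit("a[str(500)]", setup=str_keys, number=1000000))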
As you can see, while the string-keyed dict is faster to look up, converting the key is very expensive by comparison, wiping out the gain (and then some).
So yes: if the data you are using is only ever used as dictionary keys, and it doesn't matter what format you store it in, then strings are preferable in a small dictionary. In practice, that is a very rare case (and you'd probably be using strings already).