Question
I found an interesting thing: partition is faster than split when getting the whole substring after the separator. I have tested this in Python 3.5 and 3.6 (CPython):
In [1]: s = 'validate_field_name'
In [2]: s.partition('_')[-1]
Out[2]: 'field_name'
In [3]: s.split('_', maxsplit=1)[-1]
Out[3]: 'field_name'
In [4]: %timeit s.partition('_')[-1]
220 ns ± 1.12 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
In [5]: %timeit s.split('_', maxsplit=1)[-1]
745 ns ± 48.8 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
In [6]: %timeit s[s.find('_')+1:]
340 ns ± 1.44 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
I looked through the CPython source code and found that partition uses the FASTSEARCH algorithm (see here), while split only uses FASTSEARCH when the separator string is longer than one character (see here). But I also tested with a separator longer than one character and got the same result. My guess is that partition is faster because it returns a three-element tuple instead of a list. I want to know more details.
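To make this easier to reproduce outside IPython, here is a minimal standalone version of the benchmark using the timeit module directly; the '::' separator case is my addition to exercise the longer-separator path mentioned above, and absolute numbers will of course vary by machine and CPython version.

import timeit

setup = "s = 'validate_field_name'; t = 'validate::field::name'"

cases = [
    ("partition",           "s.partition('_')[-1]"),
    ("split, maxsplit=1",   "s.split('_', maxsplit=1)[-1]"),
    ("find + slice",        "s[s.find('_') + 1:]"),
    # Separator longer than one character, to hit split's FASTSEARCH path:
    ("partition, long sep", "t.partition('::')[-1]"),
    ("split, long sep",     "t.split('::', 1)[-1]"),
]

for label, stmt in cases:
    # With number=1000000 the total in seconds equals microseconds per loop.
    secs = timeit.timeit(stmt, setup=setup, number=1000000)
    print("{:22s} {:.3f} usec per loop".format(label, secs))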
Answer 1:
Microbenchmarks can be misleading
py -m timeit "'validate_field_name'.split('_', maxsplit=1)[-1]"
1000000 loops, best of 3: 0.568 usec per loop
py -m timeit "'validate_field_name'.split('_', 1)[-1]"
1000000 loops, best of 3: 0.317 usec per loop
Just passing the argument as positional versus keyword changes the time significantly. So I would guess another reason partition is faster is that it does not need a second argument at all...
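A sketch of the same comparison done in-process with timeit.repeat, to isolate the keyword-versus-positional overhead (illustrative only; the exact gap depends on the interpreter build):

import timeit

setup = "s = 'validate_field_name'"
stmts = [
    ("split, keyword maxsplit", "s.split('_', maxsplit=1)[-1]"),
    ("split, positional",       "s.split('_', 1)[-1]"),
    ("partition, no extra arg", "s.partition('_')[-1]"),
]

for label, stmt in stmts:
    # Best of 5 repeats; 1000000 loops, so the total in seconds reads as usec per loop.
    best = min(timeit.repeat(stmt, setup=setup, number=1000000, repeat=5))
    print("{:26s} best of 5: {:.3f} usec per loop".format(label, best))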
Source: https://stackoverflow.com/questions/47898526/python-why-partitionsep-is-faster-than-splitsep-maxsplit-1