I was optimising some Python code, and tried the following experiment:
    import time
    start = time.clock()
    x = 0
    for i in range(10000000):
        x += 1
    end = time.clock()
    print(end - start)
    $ python -m timeit -s "x=0" "x+=1"
    10000000 loops, best of 3: 0.151 usec per loop
    $ python -m timeit -s "x=0" "x-=-1"
    10000000 loops, best of 3: 0.154 usec per loop
Looks like you've got some measurement bias.
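One way to check for that (a minimal sketch, not part of the original measurements; the `repeat` and `number` values are arbitrary) is to take several independent samples of each statement with `timeit.repeat` and look at the spread, rather than comparing a single best-of-3 figure:

    # Sketch: sample each statement several times and compare the spread.
    # If the run-to-run variation is as large as the gap between the two
    # statements, the 0.151 vs 0.154 usec difference is likely noise.
    import timeit

    samples_add = timeit.repeat(stmt="x += 1", setup="x = 0",
                                repeat=5, number=10000000)
    samples_sub = timeit.repeat(stmt="x -= -1", setup="x = 0",
                                repeat=5, number=10000000)

    print("x += 1 :", [round(t, 3) for t in samples_add])
    print("x -= -1:", [round(t, 3) for t in samples_sub])

If the two sets of timings overlap, there is no real performance difference to explain.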