Why does division become faster with very large numbers


Question


I wanted to see how constant-time the division operation is, so I ran this code:

import time

def di(n):
    n / 101  # do one division and discard the result

i = 10
while i < 1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000:
    start = time.clock()  # per-call wall-clock sample; time.clock() was removed in Python 3.8
    di(i)
    end = time.clock()
    print("On " + str(i) + " " + str(end-start))
    i *= 10000

And this was the output I got:

On 10 5.98714047756e-06
On 100000 4.7041818038e-06
On 1000000000 2.56591734753e-06
On 10000000000000 2.99357023878e-06
On 100000000000000000 2.99357023878e-06
On 1000000000000000000000 2.99357023879e-06
On 10000000000000000000000000 2.99357023878e-06
On 100000000000000000000000000000 3.42122313003e-06
On 1000000000000000000000000000000000 3.42122313003e-06
On 10000000000000000000000000000000000000 3.84887602128e-06
On 100000000000000000000000000000000000000000 3.42122313003e-06
On 1000000000000000000000000000000000000000000000 3.84887602128e-06
On 10000000000000000000000000000000000000000000000000 1.71061156501e-06
On 100000000000000000000000000000000000000000000000000000 1.71061156502e-06
On 1000000000000000000000000000000000000000000000000000000000 1.71061156501e-06
On 10000000000000000000000000000000000000000000000000000000000000 2.13826445628e-06
On 100000000000000000000000000000000000000000000000000000000000000000 1.71061156502e-06
On 1000000000000000000000000000000000000000000000000000000000000000000000 2.13826445628e-06
On 10000000000000000000000000000000000000000000000000000000000000000000000000 2.13826445628e-06
On 100000000000000000000000000000000000000000000000000000000000000000000000000000 2.13826445626e-06
On 1000000000000000000000000000000000000000000000000000000000000000000000000000000000 2.13826445626e-06

Why does the division generally get faster later on, even though the first two inputs are the smallest?
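A single call sits near the resolution of time.clock(), so the samples above are noisy. A minimal sketch of a steadier measurement with the standard timeit module (its globals argument exists since Python 3.5; the exponent range here is illustrative, not taken from the run above):

import timeit

# Time n / 101 for ints of increasing size. timeit runs the statement
# many times and reports the total, which smooths out single-sample noise.
for exp in range(1, 94, 4):
    n = 10 ** exp
    t = timeit.timeit('n / 101', globals={'n': n}, number=100000)
    print('On 10**%d: %f' % (exp, t))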


Answer 1:


Why does the division generally get faster later on, even though the first two inputs are the smallest?

It doesn't, actually. If I replace di with:

def di(n):
    for i in range(10000000): n / 101  # repeat so the division dominates timer overhead

Then I get (Python 3.5, which I presume you are using):

On 10 0.546889
On 100000 0.545004
On 1000000000 0.5454929999999998
On 10000000000000 0.5519709999999998
On 100000000000000000 1.330797
On 1000000000000000000000 1.31053
On 10000000000000000000000000 1.3393129999999998
On 100000000000000000000000000000 1.3524339999999997
On 1000000000000000000000000000000000 1.3817269999999997
On 10000000000000000000000000000000000000 1.3412670000000002
On 100000000000000000000000000000000000000000 1.3358929999999987
On 1000000000000000000000000000000000000000000000 1.3773859999999996
On 10000000000000000000000000000000000000000000000000 1.3326890000000002
On 100000000000000000000000000000000000000000000000000000 1.3704769999999993
On 1000000000000000000000000000000000000000000000000000000000 1.3235019999999995
On 10000000000000000000000000000000000000000000000000000000000000 1.357647
On 100000000000000000000000000000000000000000000000000000000000000000 1.3341190000000012
On 1000000000000000000000000000000000000000000000000000000000000000000000 1.326544000000002
On 10000000000000000000000000000000000000000000000000000000000000000000000000 1.3671139999999973
On 100000000000000000000000000000000000000000000000000000000000000000000000000000 1.3630120000000012
On 1000000000000000000000000000000000000000000000000000000000000000000000000000000000 1.3600200000000022
On 10000000000000000000000000000000000000000000000000000000000000000000000000000000000000 1.3189189999999975
On 100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 1.3503469999999993

As you can see, there are roughly two timings: one for smaller numbers and one for larger numbers. In Python 3.5, / does floating-point (true) division, so you might expect it to take about the same amount of time regardless of the size of the numbers.
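A quick interactive check of that point; / returns a float in both regimes (Python 3):

>>> 10 / 101                 # true division always yields a float
0.09900990099009901
>>> type(10 ** 100 / 101)    # even when the dividend has 101 digits
<class 'float'>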

However, there is that gap from small to large. The same gap shows up in Python 2.7, using the following function to preserve the float-division semantics:

def di(n):
    for i in xrange(10000000): n / 101.0  # dividing by 101.0 forces float division in Python 2

On the same machine, I get:

On 10 0.617427
On 100000 0.61805
On 1000000000 0.6366
On 10000000000000 0.620919
On 100000000000000000 0.616695
On 1000000000000000000000 0.927353
On 10000000000000000000000000 1.007156
On 100000000000000000000000000000 0.98597
On 1000000000000000000000000000000000 0.99258
On 10000000000000000000000000000000000000 0.966753
On 100000000000000000000000000000000000000000 0.992684
On 1000000000000000000000000000000000000000000000 0.991711
On 10000000000000000000000000000000000000000000000000 0.994703
On 100000000000000000000000000000000000000000000000000000 0.978877
On 1000000000000000000000000000000000000000000000000000000000 0.982035
On 10000000000000000000000000000000000000000000000000000000000000 0.973266
On 100000000000000000000000000000000000000000000000000000000000000000 0.977911
On 1000000000000000000000000000000000000000000000000000000000000000000000 0.996857
On 10000000000000000000000000000000000000000000000000000000000000000000000000 0.972555
On 100000000000000000000000000000000000000000000000000000000000000000000000000000 0.985676
On 1000000000000000000000000000000000000000000000000000000000000000000000000000000000 0.987412
On 10000000000000000000000000000000000000000000000000000000000000000000000000000000000000 0.997207
On 100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 0.970129

This has something to do with converting the integer to a floating-point number: the conversion, and with it the division, is slower once the integer grows past a certain size, and when that point is crossed the division starts taking longer. The threshold is consistent with the point where an int stops fitting exactly in a double's 53-bit mantissa in Python 3, and with sys.maxint (the machine-integer limit) in Python 2, though the exact specifics remain to be pinned down.
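If that is right, the cutoff in Python 3 should sit near 53 bits, the width of a double's mantissa: reading CPython's long_true_divide, int / int division appears to take a fast path when both operands convert to a double exactly, and a slower correctly-rounded path otherwise. A minimal sketch to probe that hypothesis (the test values 2**52 and 2**54 are my choice):

import timeit

# Compare division just below and just above the 53-bit point where an
# int can no longer be represented exactly as a double.
for n in (2 ** 52, 2 ** 54):
    t = timeit.timeit('n / 101', globals={'n': n}, number=1000000)
    print('n with %d bits: %f s' % (n.bit_length(), t))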




Answer 2:


"Division generally gets faster" is not an accurate description of your observations. It looks like they get faster, then slower, then faster, then slower, then faster, then slower, then faster. I tried two modifications to your script:

  1. Inside the while loop, instead of calling di just once, call it 10 times and print the min, max, and mean times (a sketch of this modification follows the list). The max times roughly show the pattern you're seeing, and the min times were 0.0, so perhaps the interpreter caches the result of the function call as an optimization.
  2. Reversed the order of the inputs, starting at a big value of i and dividing it by 10000 each time. The max times showed a similar pattern: the first few calculations took more time than the rest. That is, whichever calculation ran first took longer, regardless of whether the program started with small numbers or with large ones.
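A minimal sketch of modification 1, reusing the question's di and loop shape (the repeat count of 10 matches the description above; the loop bound stands in for the question's huge literal):

import time

def di(n):
    n / 101

i = 10
while i < 10 ** 94:  # stand-in for the question's large bound
    samples = []
    for _ in range(10):  # call di ten times instead of once
        start = time.clock()
        di(i)
        end = time.clock()
        samples.append(end - start)
    print("On %d min=%g max=%g mean=%g"
          % (i, min(samples), max(samples), sum(samples) / len(samples)))
    i *= 10000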


Source: https://stackoverflow.com/questions/35418072/why-does-division-become-faster-with-very-large-numbers
