I have been reading about division and integer division in Python and the differences between division in Python 2 vs Python 3. For the most part it all makes sense. Python 2 uses integer division only when both values are integers. Python 3 always performs true division. Python 2.2+ introduced the // operator for integer division.
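For instance, on Python 2:
>>> 1 / 2     # both operands are ints, so / floors
0
>>> 1.0 / 2   # a float operand makes / perform true division
0.5
while on Python 3:
>>> 1 / 2     # / is always true division
0.5
>>> 1 // 2    # // floors in both versions
0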
Examples other programmers have offered work out nice and neat, such as:
>>> 1.0 // 2.0 # floors result, returns float
0.0
>>> -1 // 2 # negatives are still floored
-1
How is // implemented? Why does the following happen:
>>> import math
>>> x = 0.5
>>> y = 0.1
>>> x / y
5.0
>>> math.floor(x/y)
5.0
>>> x // y
4.0
Shouldn't x // y == math.floor(x/y)? These results were produced on Python 2.7, but since x and y are both floats the results should be the same on Python 3+. If there is some floating point error where x/y is actually 4.999999999999999 and math.floor(4.999999999999999) == 4.0, wouldn't that error be reflected in x/y as well?
The following similar cases, however, aren't affected:
>>> (.5*10) // (.1*10)
5.0
>>> .1 // .1
1.0
I didn't find the other answers satisfying. Sure, .1 has no finite binary expansion, so our hunch is that representation error is the culprit. But that hunch alone doesn't really explain why math.floor(.5/.1) yields 5.0 while .5 // .1 yields 4.0.
The punchline is that a // b is actually doing floor((a - (a % b))/b), as opposed to simply floor(a/b).
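A quick sanity check of that claim, using the values from this question (math.floor returns an int on Python 3 and a float on 2.7, but the value is the same):
>>> import math
>>> a, b = 0.5, 0.1
>>> a // b
4.0
>>> math.floor(a / b)
5
>>> math.floor((a - (a % b)) / b)   # the formula // actually follows
4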
.5 / .1 is exactly 5.0
First of all, note that the result of .5 / .1 is exactly 5.0 in Python. This is the case even though .1 cannot be exactly represented. Take this code, for instance:
from decimal import Decimal
num = Decimal(.5)
den = Decimal(.1)
res = Decimal(.5/.1)
print('num: ', num)
print('den: ', den)
print('res: ', res)
And the corresponding output:
num: 0.5
den: 0.1000000000000000055511151231257827021181583404541015625
res: 5
This shows that .5 can be represented with a finite binary expansion, but .1 cannot. But it also shows that despite this, the result of .5 / .1 is exactly 5.0. This is because floating point division rounds its result, and the tiny amount by which den differs from .1 is lost in that rounding.
That's why math.floor(.5 / .1) works as you might expect: since .5 / .1 is 5.0, writing math.floor(.5 / .1) is just the same as writing math.floor(5.0).
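You can check that exactness directly; as_integer_ratio() gives the exact numerator and denominator of a float:
>>> .5 / .1
5.0
>>> .5 / .1 == 5.0
True
>>> (.5 / .1).as_integer_ratio()   # the quotient is exactly 5/1
(5, 1)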
So why doesn't .5 // .1 result in 5?
One might assume that .5 // .1 is shorthand for floor(.5 / .1), but this is not the case. As it turns out, the semantics differ. This is even though the PEP (PEP 238) says:
Floor division will be implemented in all the Python numeric types, and will have the semantics of a // b == floor(a/b)
As it turns out, the semantics of .5 // .1 are actually equivalent to:
floor((.5 - mod(.5, .1)) / .1)
where mod is the floating point remainder of .5 / .1 rounded towards zero. This is made clear by reading the Python source code (the float_divmod routine in Objects/floatobject.c).
This is where the fact that .1 can't be exactly represented by binary expansion causes the problem. The floating point remainder of .5 / .1 is not zero:
>>> .5 % .1
0.09999999999999998
and it makes sense that it isn't. Since the binary expansion of .1 is ever-so-slightly greater than the actual decimal .1, the largest integer alpha such that alpha * .1 <= .5 (in our finite precision math) is alpha = 4. So mod(.5, .1) is nonzero, and is roughly .1. Hence floor((.5 - mod(.5, .1)) / .1) becomes floor((.5 - .1) / .1), which becomes floor(.4 / .1), which equals 4.
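Those steps can be traced directly in the interpreter (math.fmod is the C fmod the source code uses; for these operands it agrees with %):
>>> import math
>>> mod = math.fmod(.5, .1)     # remainder of .5 / .1, rounded towards zero
>>> mod
0.09999999999999998
>>> .5 - mod                    # exactly four times the binary value stored for .1
0.4
>>> (.5 - mod) / .1             # so this division comes out to exactly 4.0
4.0
>>> math.floor((.5 - mod) / .1)
4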
And that's why .5 // .1 == 4.
Why does // do that?
The behavior of a // b may seem strange, but there's a reason for its divergence from math.floor(a/b). In his blog on the history of Python, Guido writes:
The integer division operation (//) and its sibling, the modulo operation (%), go together and satisfy a nice mathematical relationship (all variables are integers):
a/b = q with remainder r
such that
b*q + r = a and 0 <= r < b
(assuming a and b are >= 0).
Now, Guido assumes that all variables are integers, but that relationship will still hold if a and b are floats, provided q = a // b. If q = math.floor(a/b), the relationship won't hold in general. And so // might be preferred because it satisfies this nice mathematical relationship.
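A quick check with the floats from this question shows the invariant holding for // and breaking for math.floor:
>>> import math
>>> a, b = 0.5, 0.1
>>> q, r = a // b, a % b         # q = 4.0, r = 0.09999999999999998
>>> b * q + r == a               # b*q + r == a holds with q = a // b
True
>>> b * math.floor(a / b) + r    # with q = floor(a/b) = 5 the sum overshoots
0.6
>>> b * math.floor(a / b) + r == a
False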
That's because
>>> .1
0.10000000000000001
.1 cannot be precisely represented in binary.
You can also see that
>>> .5 / 0.10000000000000001
5.0
The issue is that Python will round the output as described here. Since 0.1 cannot be represented exactly in binary, the result is something like 4.999999999999999722444243844000. Naturally this becomes 5.0 when not using format.
This isn't correct, I'm afraid. .5 / .1 is 5.0 exactly. See: (.5/.1).as_integer_ratio(), which yields (5,1).
Yes, 5 can be represented as 5/1, that is true. But in order to see the fraction behind the actual result Python computes due to the inexact representation, follow along.
First, import:
from decimal import *
from fractions import Fraction
Variables for easy usage:
# as_integer_ratio() returns a tuple
xa = Decimal((.5).as_integer_ratio()[0])
xb = Decimal((.5).as_integer_ratio()[1])
ya = Decimal((.1).as_integer_ratio()[0])
yb = Decimal((.1).as_integer_ratio()[1])
Yields the following values:
xa = 1
xb = 2
ya = 3602879701896397
yb = 36028797018963968
Naturally, 1/2 == 0.5 and 3602879701896397 / 36028797018963968 == 0.1000000000000000055511151231.
So what happens when we divide?
>>> print (xa/xb)/(ya/yb)
4.999999999999999722444243845
But when we want the integer ratio...
>>> print float((xa/xb)/(ya/yb)).as_integer_ratio()
(5, 1)
As stated earlier, 5 is 5/1 of course. That's where Fraction comes in:
>>> print Fraction((xa/xb)/(ya/yb))
999999999999999944488848769/200000000000000000000000000
And Wolfram Alpha confirms that this is indeed 4.999999999999999722444243845.
Why don't you just do Fraction(.5/.1) or Fraction(Decimal(.5)/Decimal(.1))?
The former will give us the same 5/1 result, since .5/.1 is already the float 5.0. The latter will give us 1249999999999999930611060961/250000000000000000000000000 instead, which is 4.999999999999999722444243844, similar but not the same result.
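A shorter route to the exact rational quotient, for anyone who wants to double-check, is to do the division with Fraction itself, which performs exact rational arithmetic on the two stored binary values:
>>> from fractions import Fraction
>>> exact = Fraction(0.5) / Fraction(0.1)   # exact division, no rounding
>>> exact
Fraction(18014398509481984, 3602879701896397)
>>> float(exact)    # rounding the exact quotient back to a double lands on 5.0
5.0
>>> exact == 5
False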
Source: https://stackoverflow.com/questions/32123583/why-is-math-floorx-y-x-y-for-two-evenly-divisible-floats-in-python