Is integer division always equal to the floor of regular division?


The reason the quotients in your test case are not equal is that in the math.floor(a/b) case, the result is calculated with floating point arithmetic (IEEE-754 64-bit), which has limited precision. The quotient you have there is larger than the 2^53 limit above which floating point is no longer accurate to the unit.

With the integer division however, Python uses its unlimited integer range, and so that result is correct.
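A minimal sketch of the difference, using one of the large values that appears later in this thread:

import math

a = 648705536316023400   # larger than 2**53, so float(a) and a / 7 are not exact
b = 7

print(a // b)             # 92672219473717628, computed exactly with integer arithmetic
print(math.floor(a / b))  # a / b is rounded to the nearest double, so this differs
print(a // b == math.floor(a / b))  # False at this magnitude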

See also "Semantics of True Division" in PEP 238:

Note that for int and long arguments, true division may lose information; this is in the nature of true division (as long as rationals are not in the language). Algorithms that consciously use longs should consider using //, as true division of longs retains no more than 53 bits of precision (on most platforms).
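In other words, when exactness matters, stay in integer arithmetic; // and divmod never go through floating point. A small sketch:

a = 648705536316023400
b = 7

q, r = divmod(a, b)      # exact quotient and remainder, no floating point involved
assert q == a // b and r == a % b
assert q * b + r == a    # this identity always holds exactly for ints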


You may be dealing with integral values that are too large to express exactly as floats. Your number is significantly larger than 2^53, which is where the gaps between adjacent floating point doubles start to get bigger than 1. So you lose some precision when doing the floating point division.

The integer division, on the other hand, is computed exactly.
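A quick way to see those gaps (math.ulp is available from Python 3.9 onward):

import math

print(float(2**53) == float(2**53 + 1))  # True: 2**53 + 1 has no exact double representation
print(math.ulp(float(2**53)))            # 2.0: adjacent doubles are 2 apart at this magnitude
print(math.ulp(648705536316023400.0))    # 128.0: here the gaps are already 128 wide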


Your problem is that, despite the fact that "/" is sometimes called the "true division operator" and its method name is __truediv__, its behavior on integers is not "true mathematical division". Instead it produces a floating point result which inevitably has limited precision.

For sufficiently large numbers, even the integral part of a value can suffer from floating point rounding errors. When 648705536316023400 is converted to a Python float (IEEE double) it gets rounded to 648705536316023424¹.
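You can confirm that rounding yourself; Decimal shows the exact value stored in the double:

from decimal import Decimal

n = 648705536316023400
print(int(float(n)))      # 648705536316023424 -- not the original value
print(Decimal(float(n)))  # the exact value of the double, again 648705536316023424
print(float(n) == n)      # False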

I can't seem to find authoritative documentation on the exact behavior of these operators on the built-in types in current Python. The original PEP that introduced the feature states that "/" is equivalent to converting the integers to floating point and then performing floating point division. However, a quick test in Python 3.5 shows that not to be the case. If it were, the following code would produce no output.

import math

for i in range(648705536316023400, 648705536316123400):
    if math.floor(i/7) != math.floor(float(i)/7):
        print(i)

But at least for me it does produce output.

Instead it seems to me that Python is performing the division on the numbers as presented and rounding the result to fit in a floating point number. Taking an example from that program's output:

648705536316123383 // 7                   == 92672219473731911
math.floor(648705536316123383 / 7)        == 92672219473731904
math.floor(float(648705536316123383) / 7) == 92672219473731920
int(float(92672219473731911))             == 92672219473731904

The Python standard library does provide a Fraction type, and dividing a Fraction by an int does perform "true mathematical division".

math.floor(Fraction(648705536316023400) / 7) == 92672219473717628
math.floor(Fraction(648705536316123383) / 7) == 92672219473731911

However, you should be aware of the potentially severe performance and memory implications of using the Fraction type: a fraction's storage requirements can keep growing even while its magnitude does not.
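A small sketch of that effect: repeated division keeps the value tiny while the denominator, and hence the memory needed to store it, keeps growing.

from fractions import Fraction

x = Fraction(1)
for _ in range(100):
    x /= 7                          # the value shrinks towards zero...
print(x.denominator.bit_length())   # 281: ...but the denominator is now a 281-bit integer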


To further test my theory of "one rounding vs two", I ran the following code.

#!/usr/bin/python3
from fractions import Fraction

edt = 0
eft = 0
base = 1000000000010000000000
top = base + 1000000
for i in range(base, top):
    ex = Fraction(i) / 7     # exact rational quotient
    di = i / 7               # one rounding: correctly rounded true division
    fl = float(i) / 7        # two roundings: int -> float, then float division
    ed = abs(ex - Fraction(di))
    ef = abs(ex - Fraction(fl))
    edt += ed
    eft += ef
print(edt / 10000000000)     # both totals are scaled by the same constant,
print(eft / 10000000000)     # so only their relative size matters

And the average error magnitude was substantially smaller when the division was performed directly on the integers than when they were converted to float first, supporting the one-rounding-vs-two theory.

¹ Note that printing a float directly does not show its exact value; instead it shows the shortest decimal number that rounds to that value (allowing lossless round-trip conversion from float to string and back to float).
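For example, repr rounds while decimal.Decimal exposes the exact stored value:

from decimal import Decimal

print(0.1)           # 0.1 -- the shortest decimal that round-trips to this double
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625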