Why is 2 * x * x faster than 2 * ( x * x ) in Python 3.x, for integers?



First of all, note that we don't see the same thing in Python 2.x:

>>> timeit("for i in range(1000): 2*i*i")51.00784397125244>>> timeit("for i in range(1000): 2*(i*i)")50.48330092430115

So this leads us to believe that this is due to how integers changed in Python 3: specifically, Python 3 uses long (arbitrarily large integers) everywhere.
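You can check the type unification yourself in the interpreter. In Python 2:

>>> type(1), type(2**100)
(<type 'int'>, <type 'long'>)

whereas Python 3 has only the arbitrary-precision type:

>>> type(1), type(2**100)
(<class 'int'>, <class 'int'>)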

For small enough integers (including the ones we're considering here), CPython actually just uses the O(MN) grade-school, digit-by-digit multiplication algorithm (for larger integers it switches to the Karatsuba algorithm). You can see this yourself in the source.

The number of digits in x*x is roughly twice that of 2*x or x (since log(x²) = 2 log(x)). Note that a "digit" in this context is not a base-10 digit, but a 30-bit value (these are treated as single digits in CPython's implementation). Hence, 2 is a single-digit value, and x and 2*x are single-digit values for all iterations of the loop, but x*x is two-digit for x >= 2**15. Hence, for x >= 2**15, 2*x*x only requires single-by-single-digit multiplications whereas 2*(x*x) requires a single-by-single and a single-by-double-digit multiplication (since x*x has two 30-bit digits).
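You can also confirm the digit size and the crossover point from Python itself; this is a minimal sketch, and the value 30 assumes a typical 64-bit CPython build (some platforms use 15-bit digits):

>>> import sys
>>> sys.int_info.bits_per_digit
30
>>> x = 2**15
>>> x.bit_length(), (2*x).bit_length(), (x*x).bit_length()
(16, 17, 31)

So x and 2*x still fit in a single 30-bit digit, while x*x has just spilled into a second one.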

Here's a direct way to see this (Python 3):

>>> timeit("a*b", "a,b = 2, 123456**2", number=100000000)5.796971936999967>>> timeit("a*b", "a,b = 2*123456, 123456", number=100000000)4.3559221399999615

Again, compare this to Python 2, which doesn't use arbitrary-length integers everywhere:

>>> timeit("a*b", "a,b = 2, 123456**2", number=100000000)3.0912468433380127>>> timeit("a*b", "a,b = 2*123456, 123456", number=100000000)3.1120400428771973

(One interesting note: if you look at the source, you'll see that the algorithm actually has a special case for squaring numbers (which we're doing here), but even that is not enough to overcome the fact that 2*(x*x) simply requires processing more digits.)
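If you want to poke at the squaring special case, here is a sketch: the fast path only triggers when both operands are the very same object, so comparing x*x against x*y for an equal-valued but distinct y may show a small gap (exact numbers depend on your machine and CPython version):

    from timeit import timeit

    # y has the same value as x but is a distinct object, so x*y cannot
    # take the same-object squaring path that x*x does
    setup = "x = 123456**7; y = int(str(x))"
    print(timeit("x*x", setup))
    print(timeit("x*y", setup))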


Python's internal representation of integers is special: it uses slots of 30 bits:

In [6]: sys.getsizeof(2**30-1)
Out[6]: 28  # one slot + header
In [7]: sys.getsizeof(2**30)
Out[7]: 32  # two slots

So everything happens as if Python counts in base B = 2**30 = 1 073 741 824 ≈ 1 billion.
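You can pull the slots out of an integer yourself with divmod; a small sketch (the getsizeof figure assumes the same 64-bit build as above):

    import sys

    B = 2**30
    n = 7*B + 5
    print(divmod(n, B))      # (7, 5): the two base-2**30 "digits" of n
    print(sys.getsizeof(n))  # 32: header + two slots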

For a human who wants to calculate 2*4*4 in base 10, there are two ways:

  • (2*4)*4 = 8*4 = 32 = 30 + 2 is immediate if you know your times tables.
  • 2*(4*4) = 2*16 = 2*10 + 2*6 = (2*10 + 10) + 2 = 30 + 2 takes longer, since we have to write the operation out digit by digit.

Python has the same problem. If x is a number such that 2x < B < x², then x² = aB + b, with a, b < B, is stored in two slots, which I'll write (a|b). Computations lead to (without managing carries here):

   (x*x)*2  =>  (a|b)*2  =>  (2*a|2*b)
   (2*x)*x  =>  (2x)*x   =>  (2a|2b)

In the first case, the multiplication by 2 is performed twice (once per slot), versus only once in the second case. That explains the difference.
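Here is the same bookkeeping as a minimal Python sketch, with x chosen so that 2*x fits in one slot while x*x needs two (carries are ignored, as above, since they don't change the value):

    B = 2**30
    x = 2**15 + 3            # 2*x < B <= x*x
    a, b = divmod(x*x, B)    # x*x is stored as (a|b)

    # 2*(x*x): doubling touches both slots -> two small multiplications
    assert 2*(x*x) == (2*a)*B + 2*b
    # (2*x)*x: 2*x is still one slot -> one small-by-small multiplication
    assert 2*(x*x) == (2*x)*x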


If your benchmark is right (I didn't check), it may come from the fact that Python integers can be two different things: native integers when they are small (quick computation), and big integers when they grow in size (slower computation). The first syntax keeps the size smaller after the first operation, while the second syntax may lead to two operations involving big integers.
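One cheap sanity check of this hypothesis is to compare the sizes of the intermediates, reusing the 123456 operand from the answer above (sizes assume the same build as the getsizeof outputs earlier):

    import sys

    x = 123456
    print(sys.getsizeof(2*x))   # 28: the intermediate of 2*x*x stays one slot
    print(sys.getsizeof(x*x))   # 32: the intermediate of 2*(x*x) needs two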