
Why is pow(a, d, n) so much faster than a**d % n?


See the Wikipedia article on modular exponentiation. Basically, when you do a**d % n, you actually have to calculate a**d, which could be quite large. But there are ways of computing a**d % n without having to compute a**d itself, and that is what pow does. The ** operator can't do this because it can't "see into the future" to know that you are going to immediately take the modulus.
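A quick way to see the difference (a hypothetical benchmark; the values of a, d, and n are arbitrary and the exact timings will vary by machine):

```python
import timeit

a, d, n = 2, 2_000_003, 1_000_000_007  # arbitrary example values

# pow(a, d, n) does modular exponentiation and stays fast;
# a ** d % n has to materialize the full a**d integer first.
t_pow = timeit.timeit(lambda: pow(a, d, n), number=100)
t_exp = timeit.timeit(lambda: a ** d % n, number=100)

assert pow(a, d, n) == a ** d % n  # both give the same result
print(f"pow(a, d, n): {t_pow:.4f}s   a**d % n: {t_exp:.4f}s")
```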


BrenBarn answered your main question. For your aside:

why is it almost twice as fast when run with Python 2 or 3 than PyPy, when usually PyPy is much faster?

If you read PyPy's performance page, this is exactly the kind of thing PyPy is not good at—in fact, the very first example they give:

Bad examples include doing computations with large longs – which is performed by unoptimizable support code.

Theoretically, turning a huge exponentiation followed by a mod into a modular exponentiation (at least after the first pass) is a transformation a JIT might be able to make… but not PyPy's JIT.

As a side note, if you need to do calculations on huge integers, you may want to look at third-party modules like gmpy. It can be much faster than CPython's native implementation for operations outside the mainstream uses, and it also provides a lot of additional functionality that you'd otherwise have to write yourself, at the cost of being less convenient to use.
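A minimal sketch, assuming the gmpy2 package (the maintained successor to gmpy) is installed; its powmod function is a dedicated modular-exponentiation routine:

```python
import gmpy2  # assumes gmpy2 is installed (pip install gmpy2)

a, d, n = 2, 2_000_003, 1_000_000_007  # arbitrary example values
print(gmpy2.powmod(a, d, n))           # same result as pow(a, d, n)
```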


There are shortcuts to doing modular exponentiation: for instance, you can compute a**(2**i) % n by repeated squaring for every i from 0 up to log2(d), and then multiply together (mod n) just the intermediate results that correspond to the set bits of d. A dedicated modular-exponentiation function like three-argument pow() can use such tricks because it knows you're doing modular arithmetic. Python can't apply this optimization to the bare expression a**d % n, because ** is evaluated on its own before % is applied, so it performs the full calculation (which takes much longer).
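A minimal sketch of that square-and-multiply idea; the built-in pow(a, d, n) uses a routine along these lines (plus further optimizations), so this is only for illustration:

```python
def modexp(a, d, n):
    """Compute a**d % n by repeated squaring, never building a**d itself."""
    result = 1
    base = a % n
    while d > 0:
        if d & 1:                    # this bit of d is set: fold in a**(2**i)
            result = result * base % n
        base = base * base % n       # square: a**(2**i) -> a**(2**(i+1))
        d >>= 1
    return result

# Agrees with the built-in three-argument pow:
assert modexp(7, 123456, 1000003) == pow(7, 123456, 1000003)
```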