
Efficient memoization in Python


The different styles of variable access have already been timed and compared at http://code.activestate.com/recipes/577834-compare-speeds-of-different-kinds-of-access-to-var. Here's a quick summary: local access beats nonlocal access (nested scopes), which beats global access (module scope), which beats access to builtins.
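If you want to see that ordering for yourself, here is a rough, illustrative timing sketch (not the linked recipe; absolute numbers vary by machine and interpreter):

# Illustrative comparison of the four name-lookup styles.
from timeit import timeit

g = len  # a module-level (global) alias

def local_lookup():
    f = len  # local alias: fastest lookup
    for _ in range(1000):
        f

def make_nonlocal_lookup():
    f = len  # becomes a cell variable for the nested function

    def inner():
        for _ in range(1000):
            f  # nonlocal (nested-scope) lookup
    return inner

def global_lookup():
    for _ in range(1000):
        g  # module-scope lookup

def builtin_lookup():
    for _ in range(1000):
        len  # misses globals, then falls back to builtins

print("local   ", timeit(local_lookup, number=10000))
print("nonlocal", timeit(make_nonlocal_lookup(), number=10000))
print("global  ", timeit(global_lookup, number=10000))
print("builtin ", timeit(builtin_lookup, number=10000))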

Your solution #2 (with local access) should win. Solution #3 has a slow dotted lookup (which requires a dictionary lookup). Solution #1 uses nonlocal (nested-scope) access, which relies on cell variables (faster than a dict lookup but slower than locals).
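For concreteness, here is a minimal sketch of the three styles; the question's actual solutions aren't shown here, so the names and details below are illustrative:

# Solution #2 style: local access via a default-argument dict (assumed form).
def fib_local(n, memo={}):
    if n in memo:  # 'memo' is a plain local lookup
        return memo[n]
    result = n if n < 2 else fib_local(n - 1) + fib_local(n - 2)
    memo[n] = result
    return result

# Solution #1 style: nonlocal (nested-scope) access through a closure cell.
def make_fib():
    memo = {}
    def fib(n):
        if n in memo:  # cell-variable lookup
            return memo[n]
        result = n if n < 2 else fib(n - 1) + fib(n - 2)
        memo[n] = result
        return result
    return fib

# Solution #3 style: dotted attribute access, which costs a dict lookup.
def fib_attr(n):
    memo = fib_attr.memo  # attribute lookup on the function object
    if n in memo:
        return memo[n]
    result = n if n < 2 else fib_attr(n - 1) + fib_attr(n - 2)
    memo[n] = result
    return result
fib_attr.memo = {}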

Also note that the KeyError exception class is a global lookup and can be sped up by localizing it. You could replace the try/except entirely and use memo.get(n, sentinel) instead, and even that can be sped up by pre-binding the method. Of course, your easiest speed boost may just come from trying out PyPy :-)
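A sketch of those tweaks, assuming a simple recursive Fibonacci (the names here are illustrative, not from the original question):

# Keep the try/except but localize the exception class via a default argument.
def fib_try(n, memo={}, KeyError=KeyError):
    try:
        return memo[n]
    except KeyError:
        result = n if n < 2 else fib_try(n - 1) + fib_try(n - 2)
        memo[n] = result
        return result

# Or drop try/except: use a sentinel with dict.get, pre-bound as a default
# argument so the hot path does no global or attribute lookups per call.
_memo = {}
_sentinel = object()

def fib_get(n, memo=_memo, memo_get=_memo.get, sentinel=_sentinel):
    result = memo_get(n, sentinel)  # bound method, no dotted lookup per call
    if result is sentinel:
        result = n if n < 2 else fib_get(n - 1) + fib_get(n - 2)
        memo[n] = result
    return result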

In short, there are many ways to tweak this code. Just make sure it's worth it.


For the benefit of people who stumble on this question while looking for a way to do memoization in Python, I recommend fastcache.

It works on Python 2 and 3, is faster than any of the methods described above, and offers the option to limit the cache size so that it does not inadvertently grow too large:

from fastcache import clru_cache

@clru_cache(maxsize=128, typed=False)
def foo(cat_1, cat_2, cat_3):
    return cat_1 + cat_2 + cat_3
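Since clru_cache is modeled on functools.lru_cache, the decorated function should also expose cache statistics; a usage sketch (assuming that lru_cache-compatible API):

print(foo(1, 2, 3))      # computed, then cached
print(foo(1, 2, 3))      # served from the cache
print(foo.cache_info())  # hit/miss counts and current cache size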

Installing fastcache is simple, using pip:

pip install fastcache

or conda:

conda install fastcache