Are tuples more efficient than lists in Python?


Summary

Tuples tend to perform better than lists in almost every category:

1) Tuples can be constant folded.

2) Tuples can be reused instead of copied.

3) Tuples are compact and don't over-allocate.

4) Tuples directly reference their elements.

Tuples can be constant folded

Tuples of constants can be precomputed by Python's peephole optimizer or AST-optimizer. Lists, on the other hand, get built-up from scratch:

    >>> from dis import dis
    >>> dis(compile("(10, 'abc')", '', 'eval'))
      1           0 LOAD_CONST               2 ((10, 'abc'))
                  3 RETURN_VALUE
    >>> dis(compile("[10, 'abc']", '', 'eval'))
      1           0 LOAD_CONST               0 (10)
                  3 LOAD_CONST               1 ('abc')
                  6 BUILD_LIST               2
                  9 RETURN_VALUE

Tuples do not need to be copied

Running tuple(some_tuple) immediately returns the same object. Since tuples are immutable, they do not have to be copied:

    >>> a = (10, 20, 30)
    >>> b = tuple(a)
    >>> a is b
    True
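In CPython this reuse goes further than tuple(): whole-tuple slices and shallow copies also hand back the original object. A short sketch of that implementation detail (not a language guarantee):

```python
import copy

a = (10, 20, 30)

# Slicing an entire tuple returns the original object, not a copy.
print(a[:] is a)          # CPython: True

# copy.copy() treats tuples as immutable and returns them unchanged.
print(copy.copy(a) is a)  # CPython: True

# Lists are mutable, so the same operation must really copy.
b = [10, 20, 30]
print(b[:] is b)          # False
```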

In contrast, list(some_list) requires all the data to be copied to a new list:

    >>> a = [10, 20, 30]
    >>> b = list(a)
    >>> a is b
    False

Tuples do not over-allocate

Since a tuple's size is fixed, it can be stored more compactly than lists, which need to over-allocate so that append() operations stay efficient.

This gives tuples a nice space advantage:

    >>> import sys
    >>> sys.getsizeof(tuple(iter(range(10))))
    128
    >>> sys.getsizeof(list(iter(range(10))))
    200
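The over-allocation can be observed directly: appending one element at a time makes getsizeof() jump in occasional steps rather than on every append. A quick sketch (the exact sizes are a CPython implementation detail and vary between versions):

```python
import sys

lst = []
sizes = []
for i in range(20):
    lst.append(i)
    sizes.append(sys.getsizeof(lst))

# Sizes only change at reallocation points, so long runs of the
# same value appear between growth steps.
print(sizes)
```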

Here is the comment from Objects/listobject.c that explains what lists are doing:

    /* This over-allocates proportional to the list size, making room
     * for additional growth.  The over-allocation is mild, but is
     * enough to give linear-time amortized behavior over a long
     * sequence of appends() in the presence of a poorly-performing
     * system realloc().
     * The growth pattern is:  0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ...
     * Note: new_allocated won't overflow because the largest possible value
     *       is PY_SSIZE_T_MAX * (9 / 8) + 6 which always fits in a size_t.
     */

Tuples refer directly to their elements

References to objects are incorporated directly in a tuple object. In contrast, lists have an extra layer of indirection to an external array of pointers.

This gives tuples a small speed advantage for indexed lookups and unpacking:

    $ python3.6 -m timeit -s 'a = (10, 20, 30)' 'a[1]'
    10000000 loops, best of 3: 0.0304 usec per loop
    $ python3.6 -m timeit -s 'a = [10, 20, 30]' 'a[1]'
    10000000 loops, best of 3: 0.0309 usec per loop
    $ python3.6 -m timeit -s 'a = (10, 20, 30)' 'x, y, z = a'
    10000000 loops, best of 3: 0.0249 usec per loop
    $ python3.6 -m timeit -s 'a = [10, 20, 30]' 'x, y, z = a'
    10000000 loops, best of 3: 0.0251 usec per loop

Here is how the tuple (10, 20) is stored:

    typedef struct {
        Py_ssize_t ob_refcnt;
        struct _typeobject *ob_type;
        Py_ssize_t ob_size;
        PyObject *ob_item[2];     /* store a pointer to 10 and a pointer to 20 */
    } PyTupleObject;

Here is how the list [10, 20] is stored:

    PyObject *arr[2];             /* store a pointer to 10 and a pointer to 20 */

    typedef struct {
        Py_ssize_t ob_refcnt;
        struct _typeobject *ob_type;
        Py_ssize_t ob_size;
        PyObject **ob_item = arr; /* store a pointer to the two-pointer array */
        Py_ssize_t allocated;
    } PyListObject;

Note that the tuple object incorporates the two data pointers directly while the list object has an additional layer of indirection to an external array holding the two data pointers.


In general, you might expect tuples to be slightly faster. However, you should definitely test your specific case if the difference might impact the performance of your program -- remember, "premature optimization is the root of all evil".

Python makes this very easy: timeit is your friend.

    $ python -m timeit "x=(1,2,3,4,5,6,7,8)"
    10000000 loops, best of 3: 0.0388 usec per loop
    $ python -m timeit "x=[1,2,3,4,5,6,7,8]"
    1000000 loops, best of 3: 0.363 usec per loop

and...

    $ python -m timeit -s "x=(1,2,3,4,5,6,7,8)" "y=x[3]"
    10000000 loops, best of 3: 0.0938 usec per loop
    $ python -m timeit -s "x=[1,2,3,4,5,6,7,8]" "y=x[3]"
    10000000 loops, best of 3: 0.0649 usec per loop

So in this case, instantiation is almost an order of magnitude faster for the tuple, but item access is actually somewhat faster for the list! So if you're creating a few tuples and accessing them many many times, it may actually be faster to use lists instead.

Of course if you want to change an item, the list will definitely be faster since you'd need to create an entire new tuple to change one item of it (since tuples are immutable).
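A minimal sketch of that difference: updating one slot in a list is an in-place write, while the "same" update on a tuple has to build a brand-new object:

```python
lst = [10, 20, 30]
lst[1] = 99                       # in-place write, O(1)

tup = (10, 20, 30)
tup = tup[:1] + (99,) + tup[2:]   # rebuilds the whole tuple, O(n)

print(lst)   # [10, 99, 30]
print(tup)   # (10, 99, 30)
```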


The dis module disassembles the byte code for a function and is useful to see the difference between tuples and lists.

In this case, you can see that accessing an element generates identical code for both, but that creating the tuple is a single LOAD_CONST of a pre-built constant, while creating the list loads each element individually and assembles them with BUILD_LIST.

    >>> def a():
    ...     x=[1,2,3,4,5]
    ...     y=x[2]
    ...
    >>> def b():
    ...     x=(1,2,3,4,5)
    ...     y=x[2]
    ...
    >>> import dis
    >>> dis.dis(a)
      2           0 LOAD_CONST               1 (1)
                  3 LOAD_CONST               2 (2)
                  6 LOAD_CONST               3 (3)
                  9 LOAD_CONST               4 (4)
                 12 LOAD_CONST               5 (5)
                 15 BUILD_LIST               5
                 18 STORE_FAST               0 (x)

      3          21 LOAD_FAST                0 (x)
                 24 LOAD_CONST               2 (2)
                 27 BINARY_SUBSCR
                 28 STORE_FAST               1 (y)
                 31 LOAD_CONST               0 (None)
                 34 RETURN_VALUE
    >>> dis.dis(b)
      2           0 LOAD_CONST               6 ((1, 2, 3, 4, 5))
                  3 STORE_FAST               0 (x)

      3           6 LOAD_FAST                0 (x)
                  9 LOAD_CONST               2 (2)
                 12 BINARY_SUBSCR
                 13 STORE_FAST               1 (y)
                 16 LOAD_CONST               0 (None)
                 19 RETURN_VALUE
