
copy.deepcopy vs pickle


Problem is, pickle+unpickle can be faster (in the C implementation) because it's less general than deepcopy: many objects can be deepcopied but not pickled. Suppose for example that your class A were changed to...:

class A(object):
    class B(object):
        pass
    def __init__(self):
        self.b = self.B()
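For reference, copy1, copy2 and copy3 below denote the question's three copying strategies; a plausible reconstruction (only copy3's body is confirmed by the traceback that follows):

import copy, pickle, cPickle

def copy1(d):
    return copy.deepcopy(d)

def copy2(d):
    return pickle.loads(pickle.dumps(d, -1))

def copy3(d):
    return cPickle.loads(cPickle.dumps(d, -1))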

now, copy1 still works fine (A's complexity slows it down but absolutely doesn't stop it); copy2 and copy3 break, and the end of the stack trace says...:

  File "./c.py", line 20, in copy3    return cPickle.loads(cPickle.dumps(d, -1))PicklingError: Can't pickle <class 'c.B'>: attribute lookup c.B failed

I.e., pickling always assumes that classes and functions are top-level entities in their modules, and so pickles them "by name" -- deepcopying makes absolutely no such assumptions.
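A minimal illustration of that "by name" rule (outer and inner are hypothetical names; Python 2, to match the code above):

import copy, cPickle

def outer():
    def inner():  # not reachable as a top-level name of its module
        pass
    return inner

f = outer()
copy.deepcopy(f)      # fine: deepcopy treats functions as atomic and returns f itself
cPickle.dumps(f, -1)  # raises PicklingError: it's not found as __main__.inner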

So if you have a situation where the speed of "somewhat deep-copying" is absolutely crucial, every millisecond matters, AND you want to take advantage of special limitations that you KNOW apply to the objects you're duplicating (such as those that make pickling applicable, or ones favoring yet other forms of serialization and shortcuts), then by all means go ahead. But if you do, you MUST be aware that you're constraining your system to live by those limitations forevermore, and you should document that design decision very clearly and explicitly for the benefit of future maintainers.

For the NORMAL case, where you want generality, use deepcopy!-)


You should be using deepcopy because it makes your code more readable. Using a serialization mechanism to copy objects in memory is at the very least confusing to another developer reading your code. Using deepcopy also means you get to reap the benefits of future optimizations in deepcopy.

First rule of optimization: don't.


It is not always the case that cPickle is faster than deepcopy(). While cPickle is probably always faster than pickle, whether it is faster than deepcopy depends on

  • the size and nesting level of the structures to be copied,
  • the type of contained objects, and
  • the size of the pickled string representation.

If something can be pickled, it can obviously be deepcopied, but the opposite is not the case: In order to pickle something, it needs to be fully serialized; this is not the case for deepcopying. In particular, you can implement __deepcopy__ very efficiently by copying a structure in memory (think of extension types), without being able to save everything to disk. (Think of suspend-to-RAM vs. suspend-to-disk.)
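For illustration, a minimal sketch of such an in-memory __deepcopy__ (Buffer is a hypothetical stand-in for an extension type wrapping a block of memory):

import copy

class Buffer(object):
    """Toy stand-in for an extension type wrapping a block of memory."""
    def __init__(self, size):
        self._mem = bytearray(size)

    def __deepcopy__(self, memo):
        # Duplicate the raw memory directly (one memcpy-like step),
        # without ever producing a serialized byte-string.
        clone = Buffer.__new__(Buffer)
        clone._mem = bytearray(self._mem)
        memo[id(self)] = clone
        return clone

b = Buffer(1024)
c = copy.deepcopy(b)  # fast in-memory duplication, no serialization round-trip
assert c._mem == b._mem and c._mem is not b._mem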

A well-known extension type that fulfills these conditions is NumPy's ndarray, and indeed it serves as a good counterexample to your observation: with d = numpy.arange(100000000), your code gives very different runtimes:

In [1]: import copy, pickle, cPickle, numpy

In [2]: d = numpy.arange(100000000)

In [3]: %timeit pickle.loads(pickle.dumps(d, -1))
1 loops, best of 3: 2.95 s per loop

In [4]: %timeit cPickle.loads(cPickle.dumps(d, -1))
1 loops, best of 3: 2.37 s per loop

In [5]: %timeit copy.deepcopy(d)
1 loops, best of 3: 459 ms per loop

If __deepcopy__ is not implemented, copy and pickle share common infrastructure (cf. the copy_reg module, discussed in "Relationship between pickle and deepcopy").
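A short sketch of that shared infrastructure (Point and reduce_point are hypothetical names; copy_reg is spelled copyreg in Python 3): registering a single reduce function teaches both pickle and deepcopy how to handle the type.

import copy, copy_reg

class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

def reduce_point(p):
    # A reduce function: (callable, args) used to rebuild the object.
    return (Point, (p.x, p.y))

# One registration serves both worlds: pickle consults copy_reg's
# dispatch table, and so does copy.deepcopy when __deepcopy__ is absent.
copy_reg.pickle(Point, reduce_point)

p = Point(1, 2)
q = copy.deepcopy(p)  # rebuilt via the registered reducer
assert (q.x, q.y) == (1, 2) and q is not p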