How do I release memory used by a pandas dataframe?

How do I release memory used by a pandas dataframe?


Reducing memory usage in Python is difficult, because Python does not actually release memory back to the operating system. If you delete objects, then the memory is available to new Python objects, but not free()'d back to the system (see this question).

If you stick to numeric numpy arrays, those are freed, but boxed objects are not.

>>> import os, psutil, numpy as np  # psutil may need to be installed
>>> def usage():
...     process = psutil.Process(os.getpid())
...     return process.memory_info()[0] / float(2 ** 20)
...
>>> usage()  # initial memory usage
27.5

>>> arr = np.arange(10 ** 8)  # create a large array without boxing
>>> usage()
790.46875
>>> del arr
>>> usage()
27.52734375  # numpy just free()'d the array

>>> arr = np.arange(10 ** 8, dtype='O')  # create lots of objects
>>> usage()
3135.109375
>>> del arr
>>> usage()
2372.16796875  # numpy frees the array, but python keeps the heap big

Reducing the Number of Dataframes

Python keeps our memory at its high watermark, but we can reduce the total number of dataframes we create. When modifying your dataframe, prefer inplace=True so you don't create copies.
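For example, a minimal sketch of the difference (the dataframe and column names here are placeholders of mine, not from the answer):

import pandas as pd

df = pd.DataFrame({'foo': [1, 2, 3], 'bar': [4, 5, 6]})

# modifies df directly, so no second dataframe is created
df.drop(columns=['bar'], inplace=True)

# by contrast, df = df.drop(columns=['bar']) briefly holds both the old
# and the new dataframe in memory before the old one can be collected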

Another common gotcha is holding on to copies of previously created dataframes in ipython:

In [1]: import pandas as pd

In [2]: df = pd.DataFrame({'foo': [1,2,3,4]})

In [3]: df + 1
Out[3]:
   foo
0    2
1    3
2    4
3    5

In [4]: df + 2
Out[4]:
   foo
0    3
1    4
2    5
3    6

In [5]: Out  # Still has all our temporary DataFrame objects!
Out[5]:
{3:    foo
 0    2
 1    3
 2    4
 3    5, 4:    foo
 0    3
 1    4
 2    5
 3    6}

You can fix this by typing %reset Out to clear your history. Alternatively, you can adjust how much history ipython keeps with ipython --cache-size=5 (default is 1000).

Reducing Dataframe Size

Wherever possible, avoid using object dtypes.

>>> df.dtypes
foo    float64  # 8 bytes per value
bar      int64  # 8 bytes per value
baz     object  # at least 48 bytes per value, often more

Values with an object dtype are boxed, which means the numpy array just contains a pointer and you have a full Python object on the heap for every value in your dataframe. This includes strings.

Whilst numpy supports fixed-size strings in arrays, pandas does not (it's caused user confusion). This can make a significant difference:

>>> import numpy as np
>>> arr = np.array(['foo', 'bar', 'baz'])
>>> arr.dtype
dtype('S3')
>>> arr.nbytes
9

>>> import sys; import pandas as pd
>>> s = pd.Series(['foo', 'bar', 'baz'])
>>> s.dtype
dtype('O')
>>> sum(sys.getsizeof(x) for x in s)
120

You may want to avoid using string columns, or find a way of representing string data as numbers.
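One way to do that (a suggestion of mine, not part of the original answer) is to store repeated strings as a categorical, which keeps compact integer codes plus a single lookup table of unique values:

import pandas as pd

s = pd.Series(['low', 'high', 'low', 'medium'] * 1_000_000)

# object dtype: one Python str object per row
print(s.memory_usage(deep=True))

# category dtype: small integer codes plus a tiny table of unique strings
print(s.astype('category').memory_usage(deep=True))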

If you have a dataframe that contains many repeated values (NaN is very common), then you can use a sparse data structure to reduce memory usage:

>>> df1.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 39681584 entries, 0 to 39681583
Data columns (total 1 columns):
foo    float64
dtypes: float64(1)
memory usage: 605.5 MB

>>> df1.shape
(39681584, 1)

>>> df1.foo.isnull().sum() * 100. / len(df1)
20.628483479893344  # so 20% of values are NaN

>>> df1.to_sparse().info()
<class 'pandas.sparse.frame.SparseDataFrame'>
Int64Index: 39681584 entries, 0 to 39681583
Data columns (total 1 columns):
foo    float64
dtypes: float64(1)
memory usage: 543.0 MB
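Note that to_sparse() has since been removed from pandas; on recent versions the same idea is expressed with sparse dtypes. A minimal sketch (the column name and data are just placeholders of mine):

import numpy as np
import pandas as pd

df1 = pd.DataFrame({'foo': [1.0, np.nan, np.nan, 2.0] * 1_000_000})

# store only the non-NaN values plus their positions
sparse_df = df1.astype(pd.SparseDtype('float64', np.nan))

print(df1.memory_usage(deep=True).sum())
print(sparse_df.memory_usage(deep=True).sum())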

Viewing Memory Usage

You can view the memory usage (docs):

>>> df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 39681584 entries, 0 to 39681583
Data columns (total 14 columns):
...
dtypes: datetime64[ns](1), float64(8), int64(1), object(4)
memory usage: 4.4+ GB

As of pandas 0.17.1, you can also do df.info(memory_usage='deep') to see memory usage including objects.
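For a per-column breakdown (an addition of mine, not from the answer above), DataFrame.memory_usage returns the bytes used by each column:

import pandas as pd

df = pd.DataFrame({'foo': [1.0, 2.0, 3.0], 'baz': ['a', 'b', 'c']})

# deep=True also counts the Python objects behind object columns
print(df.memory_usage(deep=True))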


As noted in the comments, there are some things to try: gc.collect() (suggested by @EdChum) may free memory, for example. At least in my experience, these things sometimes work and often don't.

There is one thing that always works, however, because it is done at the OS, not language, level.

Suppose you have a function that creates an intermediate huge DataFrame, and returns a smaller result (which might also be a DataFrame):

def huge_intermediate_calc(something):
    ...
    huge_df = pd.DataFrame(...)
    ...
    return some_aggregate

Then if you do something like

import multiprocessing

result = multiprocessing.Pool(1).map(huge_intermediate_calc, [something_])[0]

Then the function is executed in a different process. When that process completes, the OS reclaims all the resources it used. There's really nothing Python, pandas, or the garbage collector can do to prevent that.
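A self-contained sketch of the same pattern (the dataframe contents and the aggregation are placeholders of mine):

import multiprocessing

import numpy as np
import pandas as pd

def huge_intermediate_calc(n_rows):
    # the large intermediate dataframe only ever exists inside the worker process
    huge_df = pd.DataFrame({'foo': np.random.rand(n_rows)})
    return huge_df['foo'].sum()  # only this small result is sent back

if __name__ == '__main__':
    with multiprocessing.Pool(1) as pool:
        result = pool.map(huge_intermediate_calc, [10_000_000])[0]
    # the worker has exited, so the OS has reclaimed all of its memory
    print(result)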


This solves the problem of releasing the memory for me:

import gc
import pandas as pd

del [[df_1, df_2]]
gc.collect()
df_1 = pd.DataFrame()
df_2 = pd.DataFrame()

In the statements above, the dataframes are explicitly reset: first the references to df_1 and df_2 are deleted, so the dataframes are no longer reachable from Python; then gc.collect() is called so the garbage collector reclaims them; and finally both names are rebound to empty dataframes.

How the garbage collector works is explained in more detail at https://stackify.com/python-garbage-collection/