
Why does a NumPy ndarray read from a file consume so much memory?


I'm using NumPy 1.9.0, and the memory inefficiency of np.loadtxt() and np.genfromtxt() seems to be directly related to the fact that they use temporary lists to store the data:

  • see here for np.loadtxt()
  • and here for np.genfromtxt()

If you know the shape of your array beforehand, you can write a file reader that consumes an amount of memory very close to the theoretical amount (3.2 GB for this case), by storing the data directly into an array of the corresponding dtype:

import numpy as np

def read_large_txt(path, delimiter=None, dtype=None):
    with open(path) as f:
        # First pass: count the rows, then peek at the first line for the column count.
        nrows = sum(1 for line in f)
        f.seek(0)
        ncols = len(next(f).split(delimiter))
        # Allocate the full array once, then fill it row by row.
        out = np.empty((nrows, ncols), dtype=dtype)
        f.seek(0)
        for i, line in enumerate(f):
            out[i] = line.split(delimiter)
    return out
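For example (the file name below is just a placeholder), a call with an explicit dtype might look like:

# Hypothetical usage: read a whitespace-delimited file of 64-bit floats.
data = read_large_txt('large_file.txt', delimiter=' ', dtype=np.float64)
print(data.shape, data.nbytes / 1e9)  # shape and memory footprint in GB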


I think you should try pandas to handle big data (text files). pandas is like Excel in Python, and it internally uses NumPy to represent the data.
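A minimal sketch, assuming a whitespace-delimited text file of floats (the file name and dtype are placeholders):

import pandas as pd

# Read a large delimited text file with a fixed dtype, then take the
# underlying NumPy array from the DataFrame.
df = pd.read_csv('big_file.txt', sep=r'\s+', header=None, dtype='float64')
arr = df.values  # NumPy array backing the DataFrame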

HDF5 files are another way to save big data, stored as an HDF5 binary file.
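A minimal sketch using pandas' HDF5 support (requires PyTables; the file name and key below are placeholders):

import numpy as np
import pandas as pd

# Write a DataFrame to an HDF5 file and read it back.
df = pd.DataFrame(np.random.rand(1000, 10))
df.to_hdf('data.h5', key='table', mode='w')
df2 = pd.read_hdf('data.h5', key='table')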

This question gives some ideas about how to handle big files: "Large data" workflows using pandas