Pandas, large data, HDF tables and memory usage when calling a function


I think I have found the answer: yes and no, it depends on how you load your Pandas DataFrame.

As with the read_table() method, there is an "iterator" argument (along with a related "chunksize" argument) that lets you get a generator-like object which loads only a chunk of records at a time, as explained here: http://pandas.pydata.org/pandas-docs/dev/io.html#iterator
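For example, a chunked read might look like the sketch below (the file name, key and chunk size are just placeholders, and for HDF5 this assumes the data was stored in table format):

    import pandas as pd

    # Read a flat file in chunks: each iteration yields an ordinary
    # DataFrame of at most `chunksize` rows, so only one chunk is in
    # memory at a time.
    for chunk in pd.read_table("big_file.txt", chunksize=100000):
        print(len(chunk))

    # The same pattern works for an HDF5 table written by pandas/PyTables
    # (the object must have been stored with format='table').
    for chunk in pd.read_hdf("store.h5", "df", chunksize=100000):
        print(len(chunk))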

Now, I don't know how functions like .mean() and .apply() would work with these generators.

If someone has more info/experience, feel free to share!
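In the meantime, my understanding is that the iterator hands you plain DataFrames chunk by chunk, so a reduction like a mean has to be accumulated manually across the chunks. A rough sketch, with a made-up file and column name:

    import pandas as pd

    total = 0.0
    count = 0

    # Accumulate the sum and the non-NaN count per chunk,
    # then divide at the end to get the overall mean.
    for chunk in pd.read_table("big_file.txt", chunksize=100000):
        total += chunk["value"].sum()
        count += chunk["value"].count()

    mean = total / count if count else float("nan")
    print(mean)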

About HDF5 overhead:

HDF5 keeps a B-tree in memory that is used to map chunk structures on disk. The more chunks that are allocated for a dataset the larger the B-tree. Large B-trees take memory and cause file storage overhead as well as more disk I/O and higher contention for the metadata cache. Consequently, it's important to balance between memory and I/O overhead (small B-trees) and time to access data (big B-trees).

http://pytables.github.com/usersguide/optimization.html
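On the writing side, pandas lets you pass a size hint through to PyTables so it can choose a reasonable chunk layout up front. A minimal sketch (file name, key and row count are made up; expectedrows is simply forwarded to PyTables):

    import numpy as np
    import pandas as pd

    df = pd.DataFrame(np.random.randn(1000000, 4), columns=list("abcd"))

    with pd.HDFStore("store.h5") as store:
        # format='table' makes the data queryable and readable in chunks;
        # expectedrows hints at the final table size so PyTables can pick
        # a sensible chunk size (and hence B-tree size).
        store.append("df", df, format="table", expectedrows=1000000)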