Pandas, large data, HDF tables and memory usage when calling a function


Question


Short question

When Pandas works on an HDFStore (e.g. .mean() or .apply()), does it load the full data into memory as a DataFrame, or does it process it record-by-record as a Series?

Long description

I have to work on large data files, and I can specify the output format of the data file.

I intend to use Pandas to process the data, and I would like to set up the best format so that it maximizes performance.

I have seen that pandas.read_table() has come a long way, but it still takes at least as much memory (in fact at least twice as much) as the size of the original file we want to read and turn into a DataFrame. This may work for files up to 1 GB, but beyond that? It may be hard, especially on shared online machines.
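For illustration, a minimal sketch of the two approaches (the file name, separator and column are hypothetical); with chunksize only one chunk of rows lives in memory at a time:

    import pandas as pd

    # Loading everything at once: the whole file is parsed into one in-memory DataFrame.
    df = pd.read_csv("big_file.tsv", sep="\t")   # hypothetical file
    print(df["value"].mean())                    # hypothetical column

    # Chunked alternative: only `chunksize` rows are held in memory at any moment.
    for chunk in pd.read_csv("big_file.tsv", sep="\t", chunksize=100_000):
        print(chunk.shape)                       # each chunk is a small, ordinary DataFrame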

However, I have seen that Pandas now seems to support HDF tables via PyTables.

My question is: how does Pandas manage memory when we perform an operation on a whole HDF table, for example a .mean() or .apply()? Does it first load the entire table into a DataFrame, or does it apply the function to the data directly from the HDF file without storing it all in memory?
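To make this concrete, here is a minimal sketch of the kind of operation I mean (the store path and key are made up):

    import pandas as pd

    store = pd.HDFStore("big_data.h5")   # hypothetical store written in "table" format

    df = store["records"]                # does this pull the whole table into memory...
    print(df["value"].mean())            # ...before the reduction even starts?

    store.close()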

Side question: is the HDF5 format compact in terms of disk usage? That is, is it verbose like XML, or more like JSON? (I know there are indexes and such, but here I am interested in the bare description of the data.)


Answer 1:


I think I have found the answer: yes and no, it depends on how you load your Pandas DataFrame.

As with the read_table() method, you have an "iterator" argument which lets you get a generator object that retrieves only one record at a time, as explained here: http://pandas.pydata.org/pandas-docs/dev/io.html#iterator
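For example, something along these lines (a sketch; the file name, key and chunk size are assumptions, and the data must have been stored in "table" format) yields chunks instead of one big DataFrame:

    import pandas as pd

    # read_hdf (and HDFStore.select) accept iterator=True / chunksize=...
    chunks = pd.read_hdf("big_data.h5", key="records", chunksize=50_000)

    for chunk in chunks:
        print(chunk.shape)   # each chunk is a regular, small DataFrame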

Now, I don't know how functions like .mean() and .apply() would work with these generators.
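As far as I can tell the chunk iterator itself does not expose .mean() or .apply(), so you would have to aggregate across chunks by hand, roughly like this (same hypothetical file, key and column as above):

    import pandas as pd

    total = 0.0
    count = 0
    for chunk in pd.read_hdf("big_data.h5", key="records", chunksize=50_000):
        total += chunk["value"].sum()   # hypothetical numeric column
        count += len(chunk)

    print("overall mean:", total / count)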

If someone has more info/experience, feel free to share!

About HDF5 overhead:

HDF5 keeps a B-tree in memory that is used to map chunk structures on disk. The more chunks that are allocated for a dataset the larger the B-tree. Large B-trees take memory and cause file storage overhead as well as more disk I/O and higher contention for the metadata cache. Consequently, it’s important to balance between memory and I/O overhead (small B-trees) and time to access data (big B-trees).

http://pytables.github.com/usersguide/optimization.html
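In practice, one way to act on this (and on the disk-compactness side question) seems to be to hint the final table size and enable compression when writing; a sketch, where the store name, key, column and row counts are made up, and expectedrows is simply passed through to PyTables:

    import numpy as np
    import pandas as pd

    # Compression makes the on-disk file more compact; expectedrows lets PyTables
    # pick a chunk size so the B-tree stays small for the eventual table size.
    store = pd.HDFStore("big_data.h5", complevel=9, complib="blosc")

    df = pd.DataFrame({"value": np.random.randn(1_000_000)})
    store.append("records", df, expectedrows=10_000_000)

    store.close()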



Source: https://stackoverflow.com/questions/15692984/pandas-large-data-hdf-tables-and-memory-usage-when-calling-a-function
