MemoryError with python/pandas and large left outer joins

Submitted by 折月煮酒 on 2019-12-04 17:51:31

Why not just read your right file into pandas (or even into a simple dictionary), then loop through your left file using the csv module to read, extend, and write each row? Is processing time a significant constraint (vs your development time)?
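For what it's worth, here is a minimal sketch of that row-at-a-time idea, assuming hypothetical files left.csv and right.csv where column 0 of each is the join key and the right file has one extra value column:

import csv

# Build an in-memory lookup from the (smaller) right file: join key -> value.
with open("right.csv", newline="") as f:
    lookup = {row[0]: row[1] for row in csv.reader(f)}

# Stream the left file one row at a time, extend each row, and write it out,
# so only the right-hand lookup table ever has to fit in memory.
with open("left.csv", newline="") as fin, \
        open("joined.csv", "w", newline="") as fout:
    writer = csv.writer(fout)
    for row in csv.reader(fin):
        # Unmatched keys get "NaN", which makes this a left outer join.
        writer.writerow(row + [lookup.get(row[0], "NaN")])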

James T

This approach ended up working. Here's a model of my code:

import csv

# Python 3: open in text mode with newline="" for the csv module
# (the original used the Python 2 "rU" universal-newline mode).
idata = open("KEY_ABC.csv", newline="")
odata = open("KEY_XYZ.csv", newline="")

leftdata = csv.reader(idata)
rightdata = csv.reader(odata)

def gen_chunks(reader, chunksize=1000000):
    """Yield successive lists of up to chunksize rows from the reader."""
    chunk = []
    for i, line in enumerate(reader):
        if i % chunksize == 0 and i > 0:
            yield chunk
            chunk = []  # fresh list; del chunk[:] would clear the list just yielded
        chunk.append(line)
    yield chunk

count = 0

# One dictionary per non-key column of the right file, each mapping
# the join key (column 3) to a single value column.
d1 = {row[3]: row[0] for row in rightdata}
odata.seek(0)
d2 = {row[3]: row[1] for row in rightdata}
odata.seek(0)
d3 = {row[3]: row[2] for row in rightdata}

for chunk in gen_chunks(leftdata):
    # Append the three right-hand columns, matching on column 6 of the left
    # file; unmatched keys get "NaN", making this a left outer join.
    res = [[k[0], k[1], k[2], k[3], k[4], k[5], k[6],
                d1.get(k[6], "NaN")] for k in chunk]
    res1 = [[k[0], k[1], k[2], k[3], k[4], k[5], k[6], k[7],
                d2.get(k[6], "NaN")] for k in res]
    res2 = [[k[0], k[1], k[2], k[3], k[4], k[5], k[6], k[7], k[8],
                d3.get(k[6], "NaN")] for k in res1]
    count += 1
    filename = "FINAL_" + str(count) + ".csv"
    # Python 3: text mode with newline="" (the original used "wb").
    with open(filename, "w", newline="") as csvfile:
        output = csv.writer(csvfile)
        output.writerows(res2)

idata.close()
odata.close()

By splitting the left dataset into chunks, turning the right dataset into one dictionary per non-key column, and adding columns to the left dataset (filled from the dictionaries via the key match), the script completed the whole left join in about four minutes with no memory issues.

Thanks also to user miku, who provided the chunk generator code in a comment on this post.

That said, I highly doubt this is the most efficient way of doing it; if anyone has suggestions to improve this approach, fire away. (One possible refinement is sketched below.)
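One plausible refinement, sketched under the same column layout as the code above (right-file key in column 3 with values in columns 0-2, left-file key in column 6): build a single dictionary mapping each key to all three value columns, so the right file is read once and each left row costs one lookup instead of three list rebuilds. FINAL_joined.csv is a hypothetical single output file.

import csv

# One pass over the right file: key (column 3) -> all three value columns.
with open("KEY_XYZ.csv", newline="") as f:
    lookup = {row[3]: [row[0], row[1], row[2]] for row in csv.reader(f)}

# Stream the left file and append the matched columns row by row.
with open("KEY_ABC.csv", newline="") as fin, \
        open("FINAL_joined.csv", "w", newline="") as fout:
    writer = csv.writer(fout)
    for row in csv.reader(fin):
        writer.writerow(row + lookup.get(row[6], ["NaN", "NaN", "NaN"]))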

MicPie

As suggested in the related question ""Large data" work flows using pandas", dask (http://dask.pydata.org) could be an easy option.

Simple example

import dask.dataframe as dd

df1 = dd.read_csv('df1.csv')
df2 = dd.read_csv('df2.csv')
# dask is lazy: the merge only runs on .compute() or when writing output
df_merge = dd.merge(df1, df2, how='left')
result = df_merge.compute()
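In practice you would usually name the join column explicitly, just as in pandas; assuming a hypothetical shared key column key_col:

df_merge = dd.merge(df1, df2, on='key_col', how='left')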