Using pandas to efficiently read in a large CSV file without crashing

野趣味 2020-12-17 05:55

I am trying to read a .csv file called ratings.csv from http://grouplens.org/datasets/movielens/20m/. The file is 533.4 MB on my computer.

This is what I am writing in Jupyter:

2 Answers
  •  清歌不尽
    2020-12-17 06:11

    You should consider using the chunksize parameter of read_csv when reading your data: with chunksize set, read_csv returns a TextFileReader object that iterates over the file in chunks, and you can pass that reader to pd.concat to assemble the chunks into a single DataFrame.

    import pandas as pd

    # chunksize makes read_csv return a TextFileReader (an iterator of DataFrame chunks)
    chunksize = 100000
    tfr = pd.read_csv('./movielens/ratings.csv', chunksize=chunksize, iterator=True)
    df = pd.concat(tfr, ignore_index=True)  # reassemble the chunks into one DataFrame
    

    If you just want to process each chunk individually, so that the whole file never has to sit in memory at once, use:

    chunksize = 20000
    # Iterating over the reader yields one DataFrame of `chunksize` rows at a time,
    # so memory usage stays bounded regardless of the file size.
    for chunk in pd.read_csv('./movielens/ratings.csv',
                             chunksize=chunksize,
                             iterator=True):
        do_something_with_chunk(chunk)
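    For instance, here is a minimal sketch of such a loop that counts rows and computes the overall mean rating chunk by chunk; the 'rating' column name is an assumption based on the MovieLens ratings.csv layout, not something stated in the question.

    import pandas as pd

    total_rows = 0
    rating_sum = 0.0

    # Stream the file in 20,000-row chunks and accumulate simple aggregates
    # instead of ever materializing the full DataFrame.
    for chunk in pd.read_csv('./movielens/ratings.csv', chunksize=20000):
        total_rows += len(chunk)
        rating_sum += chunk['rating'].sum()  # 'rating' column assumed from the MovieLens layout

    print(f"rows: {total_rows}, mean rating: {rating_sum / total_rows:.3f}")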
    
