Python - How to gzip a large text file without MemoryError?

温柔的废话 · 2020-12-16 01:21

I use the following simple Python script to compress a large text file (say, 10GB) on an EC2 m3.large instance. However, I always get a MemoryError.
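
The script itself was not preserved in this post, but as the accepted answer below infers, it read the file line by line. A minimal sketch of that pattern (a hypothetical reconstruction; the filenames are borrowed from the answer's example):

    import gzip

    # Hypothetical reconstruction of the failing approach: copy line by line.
    with open('test_large.csv', 'rb') as f_in:
        with gzip.open('test_out.csv.gz', 'wb') as f_out:
            for line in f_in:       # iteration splits on b'\n' only, so a file
                f_out.write(line)   # with no newlines is one giant "line"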

3 Answers
  •  一整个雨季 · 2020-12-16 02:08

    The problem here has nothing to do with gzip, and everything to do with reading line by line from a 10GB file with no newlines in it. As the question's follow-up note puts it:

    "As an additional note, the file I used to test the Python gzip functionality is generated by fallocate -l 10G bigfile_file."

    That gives you a 10GB sparse file made entirely of 0 bytes. Meaning there are no newline bytes. Meaning the first line is 10GB long. Meaning it will take 10GB of memory to read the first line. (Or possibly even 20 or 40GB, if you're using pre-3.3 Python and trying to read it as Unicode.)
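
    To see the same effect at a safe scale, here is a small assumed demonstration (not from the original answer): a file containing no newline bytes comes back from readline() as one giant line.

    data = b'\x00' * (1024 * 1024)          # 1MB of zero bytes, no b'\n' anywhere

    with open('no_newlines.bin', 'wb') as f:
        f.write(data)

    with open('no_newlines.bin', 'rb') as f:
        first_line = f.readline()           # must buffer until b'\n' or EOF
        print(len(first_line))              # 1048576: the entire file

    # Scale the file up to 10GB and that single readline() needs ~10GB of RAM.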

    If you want to copy binary data, don't copy line by line. Whether it's a normal file, a GzipFile that's decompressing for you on the fly, a socket.makefile(), or anything else, you will have the same problem.

    The solution is to copy chunk by chunk. Or just use shutil.copyfileobj, which does that for you automatically:

    import gzip
    import shutil

    with open('test_large.csv', 'rb') as f_in:
        with gzip.open('test_out.csv.gz', 'wb') as f_out:
            # Stream in fixed-size chunks; memory use stays flat
            # no matter how large the input file is.
            shutil.copyfileobj(f_in, f_out)
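
    For reference, a hand-rolled chunk-by-chunk copy looks like the sketch below (the 64KB chunk size is an arbitrary illustrative choice); copyfileobj does essentially this for you.

    import gzip

    CHUNK_SIZE = 64 * 1024                      # 64KB, an arbitrary choice

    with open('test_large.csv', 'rb') as f_in:
        with gzip.open('test_out.csv.gz', 'wb') as f_out:
            while True:
                chunk = f_in.read(CHUNK_SIZE)   # at most CHUNK_SIZE bytes in memory
                if not chunk:                   # b'' signals EOF
                    break
                f_out.write(chunk)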
    

    By default, copyfileobj uses a chunk size chosen to be often very good and never very bad. In this case, you might actually want a smaller size, or a larger one; it's hard to predict which a priori.* So test it with timeit, passing different bufsize arguments (say, powers of 4 from 1KB to 8MB) to copyfileobj; see the sketch after the footnote. But the default 16KB will probably be good enough unless you're doing a lot of this.

    * If the buffer size is too big, you may end up alternating long chunks of I/O and long chunks of processing. If it's too small, you may end up needing multiple reads to fill a single gzip block.
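
    A sketch of that timeit comparison, reusing the filenames from the example above (number=1, since each run compresses the whole file):

    import gzip
    import shutil
    import timeit

    def compress(bufsize):
        with open('test_large.csv', 'rb') as f_in:
            with gzip.open('test_out.csv.gz', 'wb') as f_out:
                shutil.copyfileobj(f_in, f_out, bufsize)

    # Buffer sizes in powers of 4, from 1KB up to 4MB.
    for bufsize in [1024 * 4 ** n for n in range(7)]:
        seconds = timeit.timeit(lambda: compress(bufsize), number=1)
        print(f'{bufsize:>8} bytes: {seconds:.2f}s')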
