Read file in chunks - RAM usage, reading strings from binary files


Question


I'd like to understand the difference in RAM usage between these two methods when reading a large file in Python.

Version 1, found here on Stack Overflow:

def read_in_chunks(file_object, chunk_size=1024):
    while True:
        data = file_object.read(chunk_size)
        if not data:
            break
        yield data


f = open(file, 'rb')
for piece in read_in_chunks(f):
    process_data(piece)          
f.close()

Version 2, which I used before I found the code above:

f = open(file, 'rb')
while True:
    piece = f.read(1024)      
    process_data(piece)        
f.close()

The file is read piece by piece in both versions, and each current piece can be processed as it is read. In the second example, piece gets new content in every cycle, so I thought this would be enough to avoid loading the complete file into memory...?
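One way to check this empirically is Python's standard tracemalloc module. A minimal sketch, assuming path points at a large test file (the variable name is a placeholder):

import tracemalloc

tracemalloc.start()
with open(path, 'rb') as f:          # `path` is a placeholder file name
    while True:
        piece = f.read(1024)
        if not piece:
            break
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print('peak traced memory:', peak, 'bytes')  # stays small no matter how large the file is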

But I don't really understand what yield does, and I'm pretty sure I got something wrong here. Could anyone explain that to me?


There is something else that puzzles me, besides the method used:

The content of the piece I read is defined by the chunk size, 1 KB in the examples above. But... what if I need to look for strings in the file? Something like "ThisIsTheStringILikeToFind"?

Depending on where in the file the string occurs, one piece might contain the part "ThisIsTheStr" and the next piece "ingILikeToFind". With such a method it's not possible to detect the whole string in any single piece.

Is there a way to read a file in chunks while still accounting for such strings?

Any help or ideas are welcome,

greets!


Answer 1:


yield is the keyword in Python used to define generator functions. Each time the resulting generator is iterated, execution resumes at the exact point where it left off the last time yield was reached. The two versions read the file in the same chunked fashion; the only practical difference is that the first one uses a tiny bit more call-stack space than the second. However, the first one is far more reusable, so from a program-design standpoint it is actually better.
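To make that concrete, here is a tiny generator that has nothing to do with files, just to show how yield pauses and resumes:

def count_up_to(n):
    i = 1
    while i <= n:
        yield i   # execution pauses here and resumes on the next iteration
        i += 1

for value in count_up_to(3):
    print(value)  # prints 1, then 2, then 3; only one value exists at a time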

EDIT: Also, one other difference is that the first one stops reading once all the data has been read, the way it should, but the second one never terminates on its own: at end of file, f.read() just keeps returning an empty bytes object, so the loop only ends if process_data() happens to raise an exception. To have the second one work properly, you need to modify it like so:

f = open(file, 'rb')
while True:
    piece = f.read(1024)  
    if not piece:
        break
    process_data(piece)
f.close()
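The second part of the question, strings spanning chunk boundaries, is not addressed above. A common technique is to keep the last len(pattern) - 1 bytes of each chunk and prepend them to the next one, so a match can never be cut in half. A minimal sketch of that idea (the helper name find_in_chunks is illustrative, and pattern is assumed to be a non-empty bytes object):

def find_in_chunks(file_object, pattern, chunk_size=1024):
    # Yield the absolute byte offset of every occurrence of `pattern`,
    # even when it straddles a chunk boundary.
    overlap = len(pattern) - 1
    buffer = b''
    offset = 0  # absolute file position of buffer[0]
    while True:
        data = file_object.read(chunk_size)
        if not data:
            break
        buffer += data
        pos = buffer.find(pattern)
        while pos != -1:
            yield offset + pos
            pos = buffer.find(pattern, pos + 1)
        # Keep only the tail that could still be the start of a match;
        # a full match never fits in len(pattern) - 1 bytes, so nothing
        # is reported twice.
        if len(buffer) > overlap:
            offset += len(buffer) - overlap
            buffer = buffer[-overlap:] if overlap else b''

with open(file, 'rb') as f:
    for position in find_in_chunks(f, b'ThisIsTheStringILikeToFind'):
        print('found at byte', position)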


Source: https://stackoverflow.com/questions/17056382/read-file-in-chunks-ram-usage-read-strings-from-binary-files
