Reading Huge File in Python


If you only need 200 of 50 million lines, then reading all of them into memory is a waste. I would sort the list of search keys and then apply a binary search to the file using seek() or something similar. That way you would not read the entire file into memory, which should speed things up.

A slight optimization of S.Lott's answer:

from collections import defaultdict

keyValues = defaultdict(list)
targetKeys = ['12', '34']  # some list of keys, as strings (placeholder values)
for line in fin:           # fin is the already-open input file
    key, value = line.split()
    if key in targetKeys:
        keyValues[key].append(value)

Since we're using a dictionary rather than a list, the keys don't have to be numbers. This saves the map() operation and a string-to-integer conversion for each line. If you want the keys to be numbers, do the conversion at the end, when you only have to do it once per key rather than once for each of the 50 million lines.
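For example, the deferred conversion might look like this (a sketch, assuming the keyValues dict built above):

# convert the string keys to integers once per key instead of once per line
intKeyValues = {int(key): values for key, values in keyValues.items()}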

It's not clear what "list[pointer]" is all about. Consider this, however.

from collections import defaultdict

keyValues = defaultdict(list)
targetKeys = [12, 34]  # some list of keys (placeholder values)
for line in fin:       # fin is the already-open input file
    key, value = map(int, line.split())
    if key in targetKeys:
        keyValues[key].append(value)

I would use memory-mapping: http://docs.python.org/library/mmap.html.
This way you can use the file as if it were stored in memory, while the OS decides which pages actually need to be read from disk.
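A minimal sketch of that approach (the filename and key set here are placeholders):

import mmap

targetKeys = {b'12', b'34'}        # placeholder keys; mmap yields bytes
with open('huge.txt', 'rb') as f:  # placeholder filename
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # iterate line by line; the OS pages the file in on demand
    for line in iter(mm.readline, b''):
        key, value = line.split()
        if key in targetKeys:
            print(key, value)
    mm.close()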

Here is a recursive binary search on the text file:

import os

class IntegerKeyTextFile(object):
    def __init__(self, filename):
        self.filename = filename
        self.f = open(self.filename, 'r')
        self.getStatinfo()

    def getStatinfo(self):
        self.statinfo = os.stat(self.filename)
        self.size = self.statinfo.st_size

    def parse(self, line):
        # each line holds "<integer key> <integer value>"
        key, value = line.split()
        return (int(key), int(value))

    def __getitem__(self, key):
        return self.findKey(key)

    def findKey(self, keyToFind, startpoint=0, endpoint=None):
        "Recursively binary-search a sorted text file"

        if endpoint is None:
            endpoint = self.size

        currentpoint = (startpoint + endpoint) // 2

        self.f.seek(currentpoint)
        if currentpoint != 0:
            # may not have landed on a line break! Discard the partial line.
            self.f.readline()

        linestart = self.f.tell()
        keyatpoint = self.f.readline()

        if not keyatpoint:
            # readline returned an empty string - end of file
            raise KeyError('key %d not found' % (keyToFind,))

        k, v = self.parse(keyatpoint)

        if k == keyToFind:
            print('key found at', linestart, 'with value', v)
            return v

        if endpoint - startpoint <= 1:
            # the search window can no longer shrink
            raise KeyError('key %d not found' % (keyToFind,))

        if k > keyToFind:
            return self.findKey(keyToFind, startpoint, currentpoint)
        else:
            return self.findKey(keyToFind, currentpoint, endpoint)

Testing against a sample text file created in jEdit seems to work:

>>> i = integertext.IntegerKeyTextFile('c:\\sampledata.txt')
>>> i[1]
key found at 0 with value 345
345

It could definitely be improved by caching found keys and using the cache to determine future starting seek points.
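One hedged sketch of that caching idea (illustrative names, not part of the class above): remember the byte offset of each key found so far, and use the nearest cached neighbours to bound a later search:

import bisect

foundKeys = []     # sorted list of keys located so far
foundOffsets = []  # byte offset of each key's line, parallel to foundKeys

def remember(key, offset):
    i = bisect.bisect_left(foundKeys, key)
    foundKeys.insert(i, key)
    foundOffsets.insert(i, offset)

def seekBounds(key, filesize):
    # narrow the (startpoint, endpoint) window using cached neighbours
    i = bisect.bisect_left(foundKeys, key)
    start = foundOffsets[i - 1] if i > 0 else 0
    end = foundOffsets[i] if i < len(foundOffsets) else filesize
    return start, end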

If you have any control over the format of the file, the "sort and binary search" responses are right. The catch is that this only works easily with fixed-length records, because only then can you compute any record's byte offset directly from its index.

With fixed-length records, you can easily seek() around the sorted file to find your keys.
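For instance, a sketch under the assumption of sorted, fixed-length records with a zero-padded 10-digit key at the start of each one (the sizes here are made up):

import os

RECORD_SIZE = 32  # assumed fixed record length, newline included

def find_record(f, target):
    f.seek(0, os.SEEK_END)
    lo, hi = 0, f.tell() // RECORD_SIZE  # number of records
    while lo < hi:
        mid = (lo + hi) // 2
        f.seek(mid * RECORD_SIZE)
        record = f.read(RECORD_SIZE)
        key = int(record[:10])  # assumed zero-padded 10-digit key
        if key == target:
            return record
        elif key < target:
            lo = mid + 1
        else:
            hi = mid
    return None  # key not present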

One possible optimization is to do a bit of buffering using the sizehint argument of file.readlines(..). This lets you load multiple lines into memory at once, totalling approximately sizehint bytes.
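For example (process() is a hypothetical stand-in for whatever per-line work you do):

with open('huge.txt') as f:  # placeholder filename
    while True:
        # read a batch of whole lines totalling roughly 1 MB
        lines = f.readlines(1024 * 1024)
        if not lines:
            break
        for line in lines:
            process(line)  # hypothetical per-line handler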

You need to implement a binary search using seek().
