sorting large text data

跟風遠走 submitted on 2019-12-30 06:14:32

Question


I have a large file (100 million lines of tab-separated values, about 1.5 GB in size). What is the fastest known way to sort this based on one of the fields?

I have tried Hive. I would like to see if this can be done faster using Python.


Answer 1:


Have you considered using the *nix sort program? In raw terms, it'll probably be faster than most Python scripts.

Use -t $'\t' to specify that it's tab-separated, -k n to specify the field, where n is the field number, and -o outputfile if you want to output the result to a new file. Example:

sort -t $'\t' -k 4 -o sorted.txt input.txt

This will sort input.txt on its 4th field and write the result to sorted.txt. Note that -k 4 starts the sort key at field 4 and extends it to the end of the line; use -k 4,4 to sort on field 4 alone.
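If you still want to drive this from Python, one option is simply to shell out to sort with subprocess. A minimal sketch along those lines, reusing the input.txt and sorted.txt names from the example above (subprocess passes the literal tab itself, so the $'\t' shell quoting isn't needed):

import subprocess

# Sort input.txt on its 4th tab-separated field into sorted.txt,
# delegating the heavy lifting to the external sort program.
subprocess.run(
    ['sort', '-t', '\t', '-k', '4,4', '-o', 'sorted.txt', 'input.txt'],
    check=True,
)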




Answer 2:


You want to build an in-memory index for the file:

  1. create an empty list
  2. open the file
  3. read it line by line using f.readline(), and store in the list a tuple consisting of the value on which you want to sort (extracted with line.split('\t')[sort_col].strip()) and the offset of the line in the file (which you can get by calling f.tell() before calling f.readline())
  4. close the file
  5. sort the list

Then to print the sorted file, reopen the file and for each element of your list, use f.seek(offset) to move the file pointer to the beginning of the line, f.readline() to read the line and print the line.

Optimization: you may want to store the length of the line in the list, so that you can use f.read(length) in the printing phase.

Sample code (optimized for readability, not speed):

def build_index(filename, sort_col):
    index = []
    # Binary mode so tell()/seek() work with plain byte offsets.
    with open(filename, 'rb') as f:
        while True:
            offset = f.tell()
            line = f.readline()
            if not line:
                break
            length = len(line)
            col = line.split(b'\t')[sort_col].strip()
            index.append((col, offset, length))
    index.sort()
    return index

def print_sorted(filename, sort_col):
    index = build_index(filename, sort_col)
    with open(filename, 'rb') as f:
        for col, offset, length in index:
            f.seek(offset)
            print(f.read(length).decode().rstrip('\n'))

if __name__ == '__main__':
    filename = 'somefile.txt'
    sort_col = 2
    print_sorted(filename, sort_col)



Answer 3:


Split the input into files that can each be sorted in memory. Sort each file in memory, then merge the resulting files.

Merge by reading a portion of each file to be merged: the same amount from each, leaving enough memory for the merged result. Save each merged block as it is produced, repeatedly appending blocks of merged data onto the output file.

This minimises file I/O and seeking back and forth on the disk.
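A minimal sketch of this split/sort/merge approach using heapq.merge from the standard library; the chunk size and the choice of the 4th tab-separated field as the key are assumptions, not part of the answer:

import heapq
import itertools
import tempfile

def sort_key(line):
    # Assumption: sort on the 4th tab-separated field.
    return line.split('\t')[3]

def external_sort(infile, outfile, lines_per_chunk=1_000_000):
    chunk_files = []
    with open(infile) as f:
        while True:
            chunk = list(itertools.islice(f, lines_per_chunk))
            if not chunk:
                break
            chunk.sort(key=sort_key)            # sort this piece in memory
            tmp = tempfile.TemporaryFile('w+')  # holds one sorted chunk
            tmp.writelines(chunk)
            tmp.seek(0)
            chunk_files.append(tmp)
    with open(outfile, 'w') as out:
        # heapq.merge streams the pre-sorted chunks, keeping only one
        # line per chunk in memory at a time.
        out.writelines(heapq.merge(*chunk_files, key=sort_key))
    for tmp in chunk_files:
        tmp.close()

heapq.merge does the k-way merge lazily, so peak memory stays proportional to the number of chunks rather than the size of the file.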




Answer 4:


I would store the file in a good relational database, index it on the field you are interested in, and then read the rows in order.
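For example, with sqlite3 from the standard library. This is only a sketch under assumed specifics the answer doesn't give: four text columns, the sort key in column c4, and made-up names data.db and somefile.txt:

import csv
import sqlite3

conn = sqlite3.connect('data.db')  # hypothetical database file
conn.execute('CREATE TABLE rows (c1 TEXT, c2 TEXT, c3 TEXT, c4 TEXT)')
with open('somefile.txt', newline='') as f:
    # Bulk-load the tab-separated rows straight from the csv reader.
    conn.executemany('INSERT INTO rows VALUES (?, ?, ?, ?)',
                     csv.reader(f, delimiter='\t'))
conn.execute('CREATE INDEX idx_c4 ON rows (c4)')  # index the sort field
conn.commit()

# With the index in place, the database can return rows in key order.
for row in conn.execute('SELECT * FROM rows ORDER BY c4'):
    print('\t'.join(row))
conn.close()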



Source: https://stackoverflow.com/questions/7079473/sorting-large-text-data
