reportlab low performance

Submitted by 社会主义新天地 on 2019-12-12 21:26:47

Question


I'm using reportlab to convert a big library (plain text in Russian) into PDF format. When the original file is small enough (say, about 10-50 kB), it works fine. But when I try to convert big texts (above 500 kB), it takes reportlab a very long time to finish. Does anyone know what the problem could be?

from reportlab.lib.styles import getSampleStyleSheet
from reportlab.lib.units import inch
from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer

styles = getSampleStyleSheet()

BYTES_TO_READ = 10000

def go(text):
    doc = SimpleDocTemplate("output.pdf")
    Story = [Spacer(1, 2*inch)]
    style = styles["Normal"]
    # The entire input is handed to reportlab as a single Paragraph
    p = Paragraph(text, style)
    Story.append(p)
    doc.build(Story)

def get_text_from_file():
    source_file = open("book.txt", "r")
    text = source_file.read(BYTES_TO_READ)
    source_file.close()
    return text

go(get_text_from_file())

So, when I try to set the BYTES_TO_READ variable to more than 200-300 thousand (i.e., just to see what happens, reading only part of the book rather than all of it), it takes a HUGE amount of time.


Answer 1:


Let me preface by saying that I don't have much experience with reportlab at all; this is just a general suggestion. It also does not deal with exactly how you should parse and format the text you are reading into proper structures. I am simply continuing to use the Paragraph class to write text.

In terms of performance, I think your problem comes from reading one huge string and passing it to reportlab as a single paragraph. If you think about it, what real paragraph is 500 kB long?

What you probably want to do is read in smaller chunks and build up your document:

def go_chunked(limit=500000, chunk=4096):

    BYTES_TO_READ = chunk

    doc = SimpleDocTemplate("output.pdf")
    Story = [Spacer(1, 2*inch)]
    style = styles["Normal"]

    written = 0

    with open("book.txt", "r") as source_file:
        while written < limit:
            text = source_file.read(BYTES_TO_READ)
            if not text:
                break
            # Each chunk becomes its own (arbitrary) paragraph in the story
            p = Paragraph(text, style)
            Story.append(p)
            written += BYTES_TO_READ

    doc.build(Story)

When processing a total of 500k bytes:

%timeit go_chunked(limit=500000, chunk=4096)
1 loops, best of 3: 1.88 s per loop

%timeit go(get_text_from_file())
1 loops, best of 3: 64.1 s per loop
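
For reference, the same comparison can be reproduced without IPython's %timeit using the standard timeit module. This is just a minimal sketch; number=1 because each call writes out a complete PDF:

import timeit

# Time one run of each approach; building the PDF dominates the cost
print(timeit.timeit(lambda: go_chunked(limit=500000, chunk=4096), number=1))
print(timeit.timeit(lambda: go(get_text_from_file()), number=1))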

Again, this is obviously just splitting your text into arbitrary paragraphs of BYTES_TO_READ bytes each, but that is not much different from one huge paragraph. Ultimately, you might want to parse the text you are reading into a buffer and determine your own paragraph boundaries, or just split on lines if that is the format of your original source:

def go_lines(limit=500000):

    doc = SimpleDocTemplate("output.pdf")
    Story = [Spacer(1, 2*inch)]
    style = styles["Normal"]

    written = 0

    with open("book.txt", "r") as source_file:
        while written < limit:
            text = source_file.readline()
            if not text:
                break
            # Each physical line of the source becomes one Paragraph
            text = text.strip()
            p = Paragraph(text, style)
            Story.append(p)
            written += len(text)

    doc.build(Story)

Performance:

%timeit go_lines()
1 loops, best of 3: 1.46 s per loop
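
If the original source separates paragraphs with blank lines, the buffer-based approach mentioned above might look roughly like the following. This is a minimal sketch, not part of the original answer: the blank-line convention and the go_paragraphs name are assumptions.

def go_paragraphs(limit=500000):

    doc = SimpleDocTemplate("output.pdf")
    Story = [Spacer(1, 2*inch)]
    style = styles["Normal"]

    written = 0
    buf = []  # lines of the paragraph currently being assembled

    with open("book.txt", "r") as source_file:
        for line in source_file:
            if line.strip():
                buf.append(line.strip())
                continue
            # A blank line marks the end of the current paragraph
            if buf:
                text = " ".join(buf)
                Story.append(Paragraph(text, style))
                written += len(text)
                buf = []
            if written >= limit:
                break
        if buf:
            # Flush a final paragraph that has no trailing blank line
            Story.append(Paragraph(" ".join(buf), style))

    doc.build(Story)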


Source: https://stackoverflow.com/questions/12290583/reportlab-low-performance
