Trouble parsing a very large XML file with Python

醉酒成梦 2021-01-15 05:55

I have a large XML file (about 84MB) in this form:


    ...
    ....
    ...
         


        
3 Answers
  •  轮回少年  2021-01-15 06:13

    I would strongly recommend using a SAX parser here. I wouldn't recommend using minidom on any XML document larger than a few megabytes; I've seen it use about 400MB of RAM reading in an XML document that was about 10MB in size. I suspect the problems you are having are being caused by minidom requesting too much memory.

    Python comes with an XML SAX parser. To use it, do something like the following.

    from xml.sax.handler import ContentHandler
    from xml.sax import parse

    class MyContentHandler(ContentHandler):
        # Override the ContentHandler methods you need (startElement,
        # characters, endElement, ...) to respond to parsing events.
        pass

    handler = MyContentHandler()
    parse("mydata.xml", handler)
    

    Your ContentHandler subclass will override various methods of ContentHandler (such as startElement, startElementNS, endElement, endElementNS, or characters). These methods handle the events generated by the SAX parser as it reads your XML document.

    SAX is a more 'low-level' way to handle XML than DOM: in addition to pulling the relevant data out of the document, your ContentHandler has to keep track of which elements it is currently inside. On the upside, because SAX parsers don't keep the whole document in memory, they can handle XML documents of practically any size, including ones far larger than yours.
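
    As a concrete illustration, here is a minimal sketch of that kind of bookkeeping. It assumes a hypothetical document containing <title> elements whose text you want to collect; the element name is a placeholder, so adapt it to whatever your actual file uses.

    from xml.sax.handler import ContentHandler
    from xml.sax import parse

    class TitleHandler(ContentHandler):
        # Collects the text of every <title> element in the document.
        # <title> is an assumed placeholder name, not taken from the question.

        def __init__(self):
            super().__init__()
            self.in_title = False    # are we currently inside a <title>?
            self.chunks = []         # text pieces of the current <title>
            self.titles = []         # every title collected so far

        def startElement(self, name, attrs):
            if name == "title":
                self.in_title = True
                self.chunks = []

        def characters(self, content):
            # characters() can fire several times per element, so
            # accumulate the pieces rather than assuming a single call.
            if self.in_title:
                self.chunks.append(content)

        def endElement(self, name):
            if name == "title":
                self.titles.append("".join(self.chunks))
                self.in_title = False

    handler = TitleHandler()
    parse("mydata.xml", handler)
    print(len(handler.titles), "titles found")

    A boolean flag plus an accumulator list is the simplest form of this bookkeeping; for more deeply nested documents you would typically push and pop element names on a stack instead.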

    I haven't tried DOM parsers such as lxml on XML documents of this size, but I suspect lxml would still take considerable time and memory to parse your document. That could slow down your development if you have to wait for an 84MB file to be read in every time you run your code.

    Finally, I don't believe the Greek, Spanish and Arabic characters you mention will cause a problem.
