Best way to process large XML in PHP [duplicate]


For a large file, you'll want to use a SAX parser rather than a DOM parser.

A DOM parser reads in the whole file and loads it into an object tree in memory. A SAX parser reads the file sequentially and calls your user-defined callback functions to handle the data (start tags, end tags, CDATA, etc.).

With a SAX parser you'll need to maintain state yourself (e.g. which tag you are currently in), which makes it a bit more complicated, but for a large file it is much more memory-efficient.
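For illustration, here is a minimal sketch of that state-keeping pattern using PHP's built-in Expat-based SAX functions. The file name and the `title` element (assumed to contain only text) are placeholders for the example, not anything from the question.

```php
<?php
// Minimal SAX sketch: stream the file and print the text of every <title>
// element without ever loading the whole document into memory.
// The element and file names are placeholders for this example.

$currentTag = '';   // the state we have to maintain ourselves
$titleText  = '';

$parser = xml_parser_create();
xml_parser_set_option($parser, XML_OPTION_CASE_FOLDING, false);

xml_set_element_handler(
    $parser,
    function ($parser, $name, $attrs) use (&$currentTag, &$titleText) {
        $currentTag = $name;
        if ($name === 'title') {
            $titleText = '';
        }
    },
    function ($parser, $name) use (&$currentTag, &$titleText) {
        if ($name === 'title') {
            echo trim($titleText), PHP_EOL;   // a complete <title> was read
        }
        $currentTag = '';
    }
);

xml_set_character_data_handler(
    $parser,
    function ($parser, $data) use (&$currentTag, &$titleText) {
        if ($currentTag === 'title') {
            $titleText .= $data;
        }
    }
);

// Feed the file to the parser in small chunks.
$handle = fopen('myLargeXmlFile.xml', 'r');
while (!feof($handle)) {
    xml_parse($parser, fread($handle, 8192), feof($handle));
}
fclose($handle);
xml_parser_free($parser);
```

Memory use stays roughly constant regardless of file size, because only the current chunk and your own state variables are held in memory at any time.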

My take on it:

https://github.com/prewk/XmlStreamer

A simple class that will extract all children of the XML root element while streaming the file. Tested on a 108 MB XML file from pubmed.com.

// Extends the XmlStreamer base class from the repository above.
class SimpleXmlStreamer extends XmlStreamer {
    // Called for each child of the root element as it is streamed.
    public function processNode($xmlString, $elementName, $nodeIndex) {
        $xml = simplexml_load_string($xmlString);

        // Do something with your SimpleXML object

        return true;
    }
}

$streamer = new SimpleXmlStreamer("myLargeXmlFile.xml");
$streamer->parse();

When using a DOMDocument with large XML files, don't forget to pass the LIBXML_PARSEHUGE flag in the options of the load() method. (The same applies to the other load methods of the DOMDocument object.)

    $checkDom = new \DOMDocument('1.0', 'UTF-8');
    $checkDom->load($filePath, LIBXML_PARSEHUGE);

(This works with a 120 MB XML file.)

A SAX parser, as Eric Petroelje recommends, would be better for large XML files. A DOM parser loads the entire XML file into memory and lets you run XPath queries against it; a SAX (Simple API for XML) parser reads the document sequentially and gives you hook points for processing as it goes.
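For contrast, this is roughly what the DOM-plus-XPath side looks like; the file name and the `//item/title` query are made up for the example. It is convenient, but the whole document sits in memory while you query it.

```php
<?php
// DOM approach: the entire document is parsed into memory first,
// after which arbitrary XPath queries can be run against it.
// File name and query are placeholders for this example.
$dom = new DOMDocument();
$dom->load('data.xml');

$xpath = new DOMXPath($dom);
foreach ($xpath->query('//item/title') as $node) {
    echo $node->textContent, PHP_EOL;
}
```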

It really depends on what you want to do with the data. Do you need it all in memory to work with it effectively?

6.5 MB is not that big by today's standards. You could, for example, raise the limit with ini_set('memory_limit', '128M');

However, if your data can be streamed, you may want to look at using a SAX parser. It really depends on your usage needs.

A SAX parser is the way to go. I've found that SAX parsing can get messy if you don't stay organised, though.

I use an approach based on STX (Streaming Transformations for XML) to parse large XML files. I use the SAX methods to build a SimpleXML object that keeps track of the data in the current context (i.e. just the nodes between the root and the current node). Other functions are then used to process the SimpleXML document.
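A stripped-down sketch of that idea is below, assuming the records of interest are non-nested `<record>` elements (the element name and file name are assumptions): the SAX handlers re-assemble the markup of the record currently being read, and a complete record is handed to SimpleXML once its end tag fires. The original STX-based implementation is not shown; this is just the SAX-to-SimpleXML bridge.

```php
<?php
// Sketch: while streaming with SAX, rebuild the markup of the current
// <record> and convert each complete record to a SimpleXMLElement.
// Assumes <record> elements do not nest inside each other.

$inRecord = false;
$xmlBuf   = '';

$parser = xml_parser_create();
xml_parser_set_option($parser, XML_OPTION_CASE_FOLDING, false);

xml_set_element_handler(
    $parser,
    function ($p, $name, $attrs) use (&$inRecord, &$xmlBuf) {
        if ($name === 'record') {
            $inRecord = true;
            $xmlBuf   = '';
        }
        if ($inRecord) {
            $xmlBuf .= '<' . $name;
            foreach ($attrs as $k => $v) {
                $xmlBuf .= ' ' . $k . '="' . htmlspecialchars($v, ENT_QUOTES | ENT_XML1) . '"';
            }
            $xmlBuf .= '>';
        }
    },
    function ($p, $name) use (&$inRecord, &$xmlBuf) {
        if (!$inRecord) {
            return;
        }
        $xmlBuf .= '</' . $name . '>';
        if ($name === 'record') {
            $inRecord = false;
            // A full record has been re-assembled; switch to SimpleXML for it.
            $record = simplexml_load_string($xmlBuf);
            // ... work with $record (a SimpleXMLElement) here ...
        }
    }
);

xml_set_character_data_handler(
    $parser,
    function ($p, $data) use (&$inRecord, &$xmlBuf) {
        if ($inRecord) {
            $xmlBuf .= htmlspecialchars($data, ENT_QUOTES | ENT_XML1);
        }
    }
);

$handle = fopen('myLargeXmlFile.xml', 'r');
while (!feof($handle)) {
    xml_parse($parser, fread($handle, 8192), feof($handle));
}
fclose($handle);
xml_parser_free($parser);
```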

I needed to parse a large XML file that happened to have an element on each line (the StackOverflow data dump). In this specific case it was sufficient to read the file one line at a time and parse each line using SimpleXML. For me this had the advantage of not having to learn anything new.
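A sketch of that line-by-line approach, assuming (as in the Stack Overflow data dump) that every record is a self-contained element such as `<row ... />` on its own line; the file name and attribute name are placeholders.

```php
<?php
// Line-by-line sketch: only valid when each record is a complete element
// on a single line, e.g. the <row ... /> lines of the Stack Overflow dump.
$handle = fopen('posts.xml', 'r');
while (($line = fgets($handle)) !== false) {
    $line = trim($line);
    // Skip the XML declaration and the opening/closing root tags.
    if (strpos($line, '<row') !== 0) {
        continue;
    }
    $row = simplexml_load_string($line);
    if ($row !== false) {
        // Attributes are available as $row['Id'], $row['CreationDate'], etc.
        echo (string) $row['Id'], PHP_EOL;
    }
}
fclose($handle);
```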
