I have an XML file like this:
hello
world
Use iterparse:
```python
import lxml.etree as ET

for event, elem in ET.iterparse(filelike_object):
    if elem.tag == "a":
        process_a(elem)
        for child in elem:
            process_child(child)
        elem.clear()  # destroy all child elements
    elif elem.tag != "b":
        elem.clear()
```
Note that this doesn't free all of the memory — the cleared elements themselves stay attached to the tree as empty placeholders — but I've been able to wade through XML streams of well over a GB using this technique.
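To reclaim those empty placeholders as well, a common companion pattern with lxml is to delete the already-processed preceding siblings from the parent after each `clear()`. A minimal runnable sketch (the sample XML, the tag name `a`, and the collected `texts` list are illustrative assumptions, not from the original):

```python
import io
import lxml.etree as ET

# Hypothetical sample document; the tag name "a" mirrors the answer's code.
xml = b"<root><a>hello</a><a>world</a></root>"

texts = []
# tag="a" is an lxml-specific filter: only <a> elements trigger events.
for event, elem in ET.iterparse(io.BytesIO(xml), tag="a"):
    texts.append(elem.text)   # stand-in for real per-element processing
    elem.clear()              # drop the element's children and text
    # Also delete the now-empty preceding siblings the root still references,
    # so memory use stays flat no matter how long the stream is:
    while elem.getprevious() is not None:
        del elem.getparent()[0]
```

`getprevious()` and `getparent()` exist only in lxml, not in the standard-library ElementTree, which is why this trick is tied to `lxml.etree`.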
Try `import xml.etree.cElementTree as ET` ... it comes with Python, and its iterparse is faster than the `lxml.etree` iterparse, according to the lxml docs. (Note: since Python 3.3, plain `xml.etree.ElementTree` uses the C accelerator automatically, and the separate `cElementTree` module was removed in Python 3.9.)
"""For applications that require a high parser throughput of large files, and that do little to no serialization, cET is the best choice. Also for iterparse applications that extract small amounts of data or aggregate information from large XML data sets that do not fit into memory. If it comes to round-trip performance, however, lxml tends to be multiple times faster in total. So, whenever the input documents are not considerably larger than the output, lxml is the clear winner."""