I'm trying to parse a large (> 2 GB) file of structured XML data, and there isn't enough memory to hold it all. Which XML parsing class is best suited to this situation?
Most DOM libraries, ElementTree included, build the entire document model in memory. Traditionally, when your document is too large to fit in memory at once, you need a more stream-oriented parser such as xml.sax.
This is often harder than you might expect, especially if you are used to higher-order operations such as working with the entire DOM at once.
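As a minimal sketch of the stream-oriented approach: with `xml.sax` you register a handler whose callbacks fire as each element is encountered, so nothing larger than the current element needs to live in memory. The `record` tag name here is hypothetical; substitute whatever element your file actually repeats. For a real 2 GB file you would pass a filename to `xml.sax.parse()` instead of the inline bytes used for demonstration.

```python
import xml.sax

class RecordCounter(xml.sax.ContentHandler):
    """Counts occurrences of one element without building a tree."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def startElement(self, name, attrs):
        # Called once per opening tag as the parser streams through the file.
        if name == "record":  # hypothetical element name
            self.count += 1

handler = RecordCounter()
# Inline document for demonstration; xml.sax.parse("big.xml", handler)
# streams a file instead.
xml.sax.parseString(b"<root><record/><record/><record/></root>", handler)
print(handler.count)  # 3
```

The trade-off is that any state you care about (current path, accumulated text) has to be tracked by hand in the handler, which is where the extra difficulty comes from.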
Is it possible that your XML document is rather simple, like
...
...
which would allow you to work on subsets of the data in a more ElementTree-friendly manner?
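If the file is a flat sequence of similar records like that, `xml.etree.ElementTree.iterparse` offers a middle ground: it streams the file but hands you fully built subtrees one at a time, which you can discard after processing to keep memory bounded. A sketch, with a hypothetical `item` tag and an in-memory buffer standing in for the large file:

```python
import io
import xml.etree.ElementTree as ET

# Stand-in for a large file on disk; iterparse also accepts a filename.
data = io.BytesIO(b"<root><item>a</item><item>b</item></root>")

texts = []
for event, elem in ET.iterparse(data, events=("end",)):
    if elem.tag == "item":       # hypothetical record element
        texts.append(elem.text)  # work on one complete subtree at a time
        elem.clear()             # free the processed subtree to bound memory
print(texts)  # ['a', 'b']
```

This keeps the familiar ElementTree API for each record while never holding more than one record's subtree (plus the spine of ancestor elements) in memory.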