I'm using XPath to query my XML file, which is currently about 100KB.
I'm iterating over an array and querying for every value.
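If I understand the setup correctly, you are doing something along these lines for each array element (the element names, file name and keys below are my own placeholders, not taken from your code):

```java
// Sketch of the assumed pattern: one XPath evaluation per array element.
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class XPathLoopExample {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse("data.xml"); // the ~100KB file

        XPath xpath = XPathFactory.newInstance().newXPath();
        String[] keys = {"a", "b", "c"};

        for (String key : keys) {
            // Every iteration walks the whole document again - this is what gets slow.
            String value = (String) xpath.evaluate(
                    "//item[@id='" + key + "']/name/text()", doc, XPathConstants.STRING);
            System.out.println(key + " -> " + value);
        }
    }
}
```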
XPath itself is not very efficient when it comes to iterating through big XML documents. In my own experience, parsing values out of a ~200KB XML file took around 10 seconds on a low-end device.
After that I reimplemented the parser with a SAXParser and saw a huge performance increase of roughly two orders of magnitude. So I would suggest you try the SAXParser. It is actually not too hard to implement, and there are a couple of tutorials out there.
There is also a question on Stack Overflow that compares the various parsing methods: SAX vs. DOM vs. XPath
I also assume that the evaluation finishes almost instantly when you don't request a NodeSet, because it only has to look for a single node and can return as soon as it finds a match.
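For illustration, with the standard javax.xml.xpath API the two return types look like this (the file name and expressions are just examples):

```java
// Illustration only: file name and expressions are placeholders.
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class SingleNodeVsNodeSet {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse("data.xml");
        XPath xpath = XPathFactory.newInstance().newXPath();

        // Asking for a single NODE: the engine can stop at the first match.
        Node first = (Node) xpath.evaluate("//item[@id='a']", doc, XPathConstants.NODE);

        // Asking for a NODESET: every matching node has to be collected first.
        NodeList all = (NodeList) xpath.evaluate("//item", doc, XPathConstants.NODESET);

        System.out.println("first match: " + (first != null ? first.getNodeName() : "none"));
        System.out.println("total matches: " + all.getLength());
    }
}
```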
EDIT:
Parsing the XML document with SAX means that you iterate through it once and store the information you are looking for in objects. Take a look at this tutorial: SAX Tutorial
There the author parses staff information and turns it into objects, so I guess that's exactly what you need.
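As a rough sketch of that approach (the element names and the ItemSaxHandler class are my own placeholders, not taken from the tutorial), a handler that collects elements into a list could look something like this:

```java
// Minimal SAX sketch, assuming a layout like <items><item id="a"><name>Foo</name></item></items>.
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class ItemSaxHandler extends DefaultHandler {
    private final List<String[]> items = new ArrayList<>(); // [id, name] pairs
    private String currentId;
    private StringBuilder text;

    @Override
    public void startElement(String uri, String localName, String qName, Attributes attrs) {
        if ("item".equals(qName)) {
            currentId = attrs.getValue("id");
        } else if ("name".equals(qName)) {
            text = new StringBuilder();
        }
    }

    @Override
    public void characters(char[] ch, int start, int length) {
        if (text != null) {
            text.append(ch, start, length);
        }
    }

    @Override
    public void endElement(String uri, String localName, String qName) {
        if ("name".equals(qName) && text != null) {
            items.add(new String[] {currentId, text.toString()});
            text = null;
        }
    }

    public List<String[]> getItems() {
        return items;
    }

    public static void main(String[] args) throws Exception {
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        ItemSaxHandler handler = new ItemSaxHandler();
        parser.parse("data.xml", handler); // single pass over the file
        System.out.println(handler.getItems().size() + " items parsed");
    }
}
```

The important point is that the file is read only once, no matter how many values you need, instead of being traversed again for every XPath query.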