I have a Lucene index with the following documents:
doc1 := { caldari, jita, shield, planet }
doc2 := { gallente, dodixie, armor, planet }
doc3 := { amarr, laser }
Go here: http://lucene.apache.org/java/3_0_1/api/core/index.html and check this method
org.apache.lucene.index.IndexReader.getTermFreqVectors(int docno);
You will have to know the document id. This is an internal Lucene id, and it usually changes on every index update that involves deletes :-).
I believe there is a similar method for Lucene 2.x.x.
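For example, with the 3.0.x API it could look roughly like this (the index path and the doc id are placeholders; term vectors are only available if the field was indexed with Field.TermVector.YES, otherwise the method returns null):

    import java.io.File;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.TermFreqVector;
    import org.apache.lucene.store.FSDirectory;

    public class DumpTermFreqs {
        public static void main(String[] args) throws Exception {
            // Placeholder index location and internal Lucene document id.
            IndexReader reader = IndexReader.open(FSDirectory.open(new File("/path/to/index")));
            try {
                int docNo = 0;
                // Returns null if no term vectors were stored for this document.
                TermFreqVector[] vectors = reader.getTermFreqVectors(docNo);
                if (vectors != null) {
                    for (TermFreqVector vector : vectors) {
                        String[] terms = vector.getTerms();
                        int[] freqs = vector.getTermFrequencies();
                        for (int i = 0; i < terms.length; i++) {
                            System.out.println(vector.getField() + ": " + terms[i] + " -> " + freqs[i]);
                        }
                    }
                }
            } finally {
                reader.close();
            }
        }
    }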
I don't know Lucene; however, your naive implementation will scale, provided you don't read the entire document into memory at one time (i.e. use an on-line parser). English text is about 83% redundant, so your biggest document will have a map with about 85,000 entries in it. Use one map per thread (and one thread per file, pooled obviously) and you will scale just fine.
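A minimal sketch of that streaming approach in Java (the whitespace/punctuation tokenisation here is naive and just for illustration):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.HashMap;
    import java.util.Map;

    public class StreamingTermCount {
        // Count term frequencies one line at a time instead of reading the
        // whole file into memory; use one instance of this map per thread/file.
        public static Map<String, Integer> countTerms(String path) throws Exception {
            Map<String, Integer> counts = new HashMap<String, Integer>();
            BufferedReader in = new BufferedReader(new FileReader(path));
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    for (String token : line.toLowerCase().split("\\W+")) {
                        if (token.length() == 0) {
                            continue;
                        }
                        Integer current = counts.get(token);
                        counts.put(token, current == null ? 1 : current + 1);
                    }
                }
            } finally {
                in.close();
            }
            return counts;
        }
    }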
Update: If your term list does not change frequently, you might try building a search tree out of the characters in your term list, or building a perfect hash function (http://www.gnu.org/software/gperf/), to speed up file parsing (mapping from search terms to target strings). Probably just a big HashMap would perform about as well.
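If you go the plain HashMap/HashSet route, the lookup while parsing could be as simple as the sketch below (the term list is just the example terms from the question; build it once up front and reuse it):

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    public class TermListCounter {
        // Fixed term list built once; only tokens that appear in it are counted.
        private static final Set<String> TERMS = new HashSet<String>(
                Arrays.asList("caldari", "jita", "shield", "armor", "planet"));

        public static void countIfKnown(String token, Map<String, Integer> counts) {
            if (TERMS.contains(token)) {
                Integer current = counts.get(token);
                counts.put(token, current == null ? 1 : current + 1);
            }
        }
    }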