How to count term frequency for a set of documents?

梦如初夏 2020-12-17 02:00

I have a Lucene index with the following documents:

doc1 := { caldari, jita, shield, planet }
doc2 := { gallente, dodixie, armor, planet }
doc3 := { amarr, laser
2 Answers
  • 2020-12-17 02:09

    Go here: http://lucene.apache.org/java/3_0_1/api/core/index.html and check this method

    org.apache.lucene.index.IndexReader.getTermFreqVectors(int docno);
    

    You will have to know the document id. This is an internal Lucene id, and it usually changes on every index update that involves deletes :-).

    I believe there is a similar method in Lucene 2.x.x.
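    A minimal sketch of that approach for Lucene 3.0.x, assuming the field (called "content" here, a placeholder) was indexed with Field.TermVector.YES and that the index path is illustrative:

        import java.io.File;

        import org.apache.lucene.index.IndexReader;
        import org.apache.lucene.index.TermFreqVector;
        import org.apache.lucene.store.FSDirectory;

        public class TermFreqDump {
            public static void main(String[] args) throws Exception {
                // Open an existing index; the path is a placeholder.
                IndexReader reader = IndexReader.open(FSDirectory.open(new File("/path/to/index")));
                try {
                    for (int docId = 0; docId < reader.maxDoc(); docId++) {
                        if (reader.isDeleted(docId)) {
                            continue; // internal ids of deleted docs are unusable
                        }
                        // Returns null if the field was not indexed with term vectors.
                        TermFreqVector tfv = reader.getTermFreqVector(docId, "content");
                        if (tfv == null) {
                            continue;
                        }
                        String[] terms = tfv.getTerms();
                        int[] freqs = tfv.getTermFrequencies();
                        for (int i = 0; i < terms.length; i++) {
                            System.out.println("doc " + docId + ": " + terms[i] + " -> " + freqs[i]);
                        }
                    }
                } finally {
                    reader.close();
                }
            }
        }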

  • 2020-12-17 02:26

    I don't know Lucene; however, a naive implementation will scale, provided you don't read the entire document into memory at one time (i.e. use a streaming parser). English text is about 83% redundant, so even your biggest document should yield a map with roughly 85,000 entries. Use one map per thread (and one thread per file, pooled, obviously) and you will scale just fine.

    Update: If your term list does not change frequently, you might try building a search tree out of the characters in your term list, or building a perfect hash function (http://www.gnu.org/software/gperf/), to speed up file parsing (mapping from search terms to target strings). A plain big HashMap would probably perform about as well.
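    A minimal sketch of the naive streaming approach in plain Java, one HashMap per file; the file path and the whitespace/punctuation tokenization rule are illustrative assumptions:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.util.HashMap;
        import java.util.Map;

        public class NaiveTermCounter {
            // Counts term frequencies for one file, reading it line by line
            // instead of loading the whole document into memory.
            public static Map<String, Integer> countTerms(String path) throws Exception {
                Map<String, Integer> counts = new HashMap<String, Integer>();
                BufferedReader reader = new BufferedReader(new FileReader(path));
                try {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        for (String token : line.toLowerCase().split("\\W+")) {
                            if (token.isEmpty()) {
                                continue;
                            }
                            Integer current = counts.get(token);
                            counts.put(token, current == null ? 1 : current + 1);
                        }
                    }
                } finally {
                    reader.close();
                }
                return counts;
            }
        }

    Run one such counter per file from a pooled thread, and merge or keep the per-file maps depending on whether you need corpus-wide or per-document counts.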
