Question
Maybe this question is a little strange... but I'll try to ask it anyway.
Everyone who has written applications using the Lucene API has seen something like this:
import java.io.IOException;
import java.io.StringReader;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.core.StopFilter;
import org.apache.lucene.analysis.standard.ClassicTokenizer;
import org.apache.lucene.analysis.standard.StandardFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public static String removeStopWordsAndGetNorm(String text, String[] stopWords, Normalizer normalizer) throws IOException
{
    // tokenize, remove stop words, lowercase
    TokenStream tokenStream = new ClassicTokenizer(Version.LUCENE_44, new StringReader(text));
    tokenStream = new StopFilter(Version.LUCENE_44, tokenStream, StopFilter.makeStopSet(Version.LUCENE_44, stopWords, true));
    tokenStream = new LowerCaseFilter(Version.LUCENE_44, tokenStream);
    tokenStream = new StandardFilter(Version.LUCENE_44, tokenStream);
    // fetch the term attribute once; it is updated in place by incrementToken()
    CharTermAttribute token = tokenStream.addAttribute(CharTermAttribute.class);
    StringBuilder result = new StringBuilder();
    tokenStream.reset();
    try
    {
        while (tokenStream.incrementToken())
        {
            try
            {
                // normalizer.getNormalForm(...) - stemmer or lemmatizer
                result.append(normalizer.getNormalForm(token.toString())).append(' ');
            }
            catch (Exception e)
            {
                // if normalization fails, skip this token
            }
        }
        tokenStream.end();
    }
    finally
    {
        tokenStream.close();
    }
    return result.toString();
}
Is it possible to rewrite this word normalization using an RDD? Maybe someone has an example of such a transformation, or can point me to a web resource about it?
Thank you.
Answer 1:
I recently used a similar example for a talk. It shows how to remove the stop words. It has no normalization phase, but if that normalizer.getNormalForm comes from a library that can be reused, it should be easy to integrate.
This code could be a starting point:
// source text
val rdd = sc.textFile(...)
// stop words src
val stopWordsRdd = sc.textFile(...)

// bring the stop words to the driver and broadcast them
// => more efficient than rdd.subtract(stopWordsRdd)
val stopWords = stopWordsRdd.collect.toSet
val stopWordsBroadcast = sc.broadcast(stopWords)

// tokenize on non-word characters, dropping the empty strings the split can produce
val words = rdd.flatMap(line => line.split("\\W").filter(_.nonEmpty).map(_.toLowerCase))

// read the broadcast value once per partition and filter out the stop words
val cleaned = words.mapPartitions { iterator =>
  val stopWordsSet = stopWordsBroadcast.value
  iterator.filter(elem => !stopWordsSet.contains(elem))
}

// plug the normalizer function here
val normalized = cleaned.map(normalForm(_))
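
If the normalizer is a stateful object rather than a plain function, mapPartitions can be used again so that it is created once per partition instead of being serialized with the closure. A minimal sketch, assuming a hypothetical Normalizer class with a getNormalForm(String): String method like the one in the question:

// minimal sketch (assumption: Normalizer is the stemmer/lemmatizer wrapper
// from the question and can be instantiated on the executors)
val normalized = cleaned.mapPartitions { iterator =>
  val normalizer = new Normalizer() // one instance per partition
  iterator.flatMap { word =>
    // mirror the question's try/catch: drop tokens the normalizer rejects
    try Some(normalizer.getNormalForm(word))
    catch { case _: Exception => None }
  }
}

This keeps the same error-handling behavior as the Java version: a token that fails to normalize is silently dropped instead of failing the whole job.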
Note: this is all from the Spark point of view; I'm not familiar with Lucene.
Source: https://stackoverflow.com/questions/26944216/words-normalization-using-rdd