Word normalization using RDD

Question


Maybe this question is a little strange... but I'll try to ask it.

Everyone who has written applications using the Lucene API has seen something like this:

public static String removeStopWordsAndGetNorm(String text, String[] stopWords, Normalizer normalizer) throws IOException
{
    TokenStream tokenStream = new ClassicTokenizer(Version.LUCENE_44, new StringReader(text));
    tokenStream = new StopFilter(Version.LUCENE_44, tokenStream, StopFilter.makeStopSet(Version.LUCENE_44, stopWords, true));
    tokenStream = new LowerCaseFilter(Version.LUCENE_44, tokenStream);
    tokenStream = new StandardFilter(Version.LUCENE_44, tokenStream);
    // grab the term attribute once; it is updated in place on each incrementToken()
    CharTermAttribute token = tokenStream.addAttribute(CharTermAttribute.class);
    tokenStream.reset();
    StringBuilder result = new StringBuilder();
    while (tokenStream.incrementToken())
    {
        try
        {
            // normalizer.getNormalForm(...) - stemmer or lemmatizer
            result.append(normalizer.getNormalForm(token.toString())).append(' ');
        }
        catch (Exception e)
        {
            // skip tokens the normalizer cannot handle
        }
    }
    tokenStream.end();
    tokenStream.close();
    return result.toString();
}

Is it possible to rewrite this word normalization using RDDs? Maybe someone has an example of this transformation, or can point to a web resource about it?

Thank you.


Answer 1:


I recently used a similar example for a talk. It shows how to remove stop words. It has no normalization phase, but if that normalizer.getNormalForm comes from a library that can be reused, it should be easy to integrate.

This code could be a starting point:

// source text
val rdd = sc.textFile(...)  
// stop words source
val stopWordsRdd = sc.textFile(...) 
// bring the stop words to the driver and broadcast them => more efficient than rdd.subtract(stopWordsRdd)
val stopWords = stopWordsRdd.collect.toSet
val stopWordsBroadcast = sc.broadcast(stopWords)
// split("\\W") yields empty strings between consecutive non-word characters, so filter them out
val words = rdd.flatMap(line => line.split("\\W").map(_.toLowerCase).filter(_.nonEmpty))
val cleaned = words.mapPartitions { iterator => 
    val stopWordsSet = stopWordsBroadcast.value
    iterator.filter(elem => !stopWordsSet.contains(elem))
}
// plug the normalizer function here
val normalized = cleaned.map(normalForm(_)) 
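
If the normalizer itself is not serializable (common for NLP libraries), a variant of the same mapPartitions pattern builds one instance per partition instead of capturing it in the closure. A minimal sketch, assuming a hypothetical Normalizer class with a no-arg constructor and a getNormalForm(String): String method like the one in the question:

// Normalizer is hypothetical here; it stands in for any stemmer/lemmatizer
// that is expensive to construct or not serializable
val normalized = cleaned.mapPartitions { iterator =>
    val normalizer = new Normalizer()  // one instance per partition
    iterator.map(word => normalizer.getNormalForm(word))
}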

Note: this is from the Spark point of view; I'm not familiar with Lucene.
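
That said, nothing prevents running the question's Lucene pipeline itself inside the RDD, one TokenStream per line. This is only a sketch, assuming the Lucene 4.4 jars are on the executors' classpath and reusing the stopWordsBroadcast variable from above:

import java.io.StringReader
import scala.collection.mutable.ArrayBuffer
import org.apache.lucene.analysis.TokenStream
import org.apache.lucene.analysis.core.{LowerCaseFilter, StopFilter}
import org.apache.lucene.analysis.standard.ClassicTokenizer
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute
import org.apache.lucene.util.Version

val tokenized = rdd.mapPartitions { lines =>
    // build the stop set once per partition from the broadcast value
    val stopSet = StopFilter.makeStopSet(Version.LUCENE_44, stopWordsBroadcast.value.toArray, true)
    lines.map { line =>
        var stream: TokenStream = new ClassicTokenizer(Version.LUCENE_44, new StringReader(line))
        stream = new StopFilter(Version.LUCENE_44, stream, stopSet)
        stream = new LowerCaseFilter(Version.LUCENE_44, stream)
        val term = stream.addAttribute(classOf[CharTermAttribute])
        stream.reset()
        val tokens = ArrayBuffer.empty[String]
        while (stream.incrementToken()) tokens += term.toString
        stream.end()
        stream.close()
        tokens.mkString(" ")
    }
}

Building the stop set once per partition avoids re-parsing it for every line; the normalization step can then be mapped over tokenized exactly as above.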



Source: https://stackoverflow.com/questions/26944216/words-normalization-using-rdd
