Using WordNet to determine semantic similarity between two texts?

Submitted by 孤者浪人 on 2019-12-09 13:36:53

Question


How can you determine the semantic similarity between two texts in python using WordNet?

The obvious preprocessing would be removing stop words and stemming, but then what?

The only way I can think of would be to calculate the WordNet path distance between each pair of words across the two texts. That is standard for unigrams, but these are large (400-word) natural-language documents, whose words are in no particular order or structure (other than what English grammar imposes). So which words would you compare between the texts, and how would you do this in Python?


Answer 1:


One thing that you can do is:

  1. Kill the stop words
  2. Find as many words as possible whose sets of synonyms and antonyms have maximal intersections with those of the other words in the same doc. Let's call these "the important words"
  3. Check how close the set of important words of each document is to the other's. The closer the sets are, the more semantically similar your documents are.

There is another way: compute sentence trees from the sentences in each doc, then compare the two forests. I did some similar work for a course a long time ago. Here's the code (keep in mind this was a long time ago and it was for a class, so the code is extremely hacky, to say the least).

Hope this helps.



Source: https://stackoverflow.com/questions/11463396/using-wordnet-to-determine-semantic-similarity-between-two-texts
