Using Python NLTK to find similarity between two web pages?

Backend · Open · 2 answers · 1659 views
遥遥无期 2020-12-25 10:08

I want to find out whether two web pages are similar. Can someone suggest whether Python NLTK with the WordNet similarity functions would be helpful, and how? What is the best similarity function to use?

2 Answers
  • 2020-12-25 10:28

    Consider implementing SpotSigs.

  • 2020-12-25 10:35

    The SpotSigs paper mentioned by joyceschan addresses content-duplication detection, and it contains plenty of food for thought.

    If you are looking for a quick comparison of key terms, NLTK's standard functions might suffice.

    With NLTK you can pull synonyms of your terms by looking up the synsets contained in WordNet:

    >>> from nltk.corpus import wordnet
    
    >>> wordnet.synsets('donation')
    [Synset('contribution.n.02'), Synset('contribution.n.03')]
    
    >>> wordnet.synsets('donations')
    [Synset('contribution.n.02'), Synset('contribution.n.03')]
    

    It understands plurals, and it also tells you which part of speech each synonym corresponds to.

    Synsets are stored in a tree, with more specific terms at the leaves and more general ones toward the root. A synset's more general ancestors are called its hypernyms.

    You can measure similarity by how close the terms are to their common hypernym.

    Watch out for different parts of speech: according to the NLTK cookbook they don't have overlapping paths, so you shouldn't try to measure similarity between them.

    Say you have the two terms donation and gift. You could get them from synsets, but in this example I initialize them directly:

    >>> d = wordnet.synset('donation.n.01')
    >>> g = wordnet.synset('gift.n.01')
    

    The cookbook recommends the Wu-Palmer similarity method:

    >>> d.wup_similarity(g)
    0.93333333333333335
    

    This approach gives you a quick way to determine whether the terms used correspond to related concepts. Take a look at Natural Language Processing with Python to see what else you can do to help your analysis of text.
