two whole texts similarity using levenshtein distance [closed]


Question


I have two text files which I'd like to compare. What I did is:

  1. I've split both of them into sentences.
  2. I've measured the Levenshtein distance between each of the sentences from one file and each of the sentences from the second file.

I'd like to calculate the average similarity between those two text files, but I'm having trouble producing a meaningful value - obviously the arithmetic mean (sum of all the [normalized] distances divided by the number of comparisons) is a bad idea.

How to interpret such results?

Edit: the distance values are normalized.
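The two steps described above can be sketched as follows (a minimal illustration: splitting on "." is a naive stand-in for real sentence segmentation, and `levenshtein` is a plain dynamic-programming implementation, not from any particular library):

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance, O(len(a) * len(b))."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def split_sentences(text):
    """Naive sentence splitter - good enough for a sketch."""
    return [s.strip() for s in text.split(".") if s.strip()]

def distance_matrix(text_a, text_b):
    """Normalized distance (0 = identical, 1 = nothing shared) per sentence pair."""
    sa, sb = split_sentences(text_a), split_sentences(text_b)
    return [[levenshtein(x, y) / max(len(x), len(y)) for y in sb] for x in sa]
```

The question is then how to collapse this matrix into one number, which the answer below addresses.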


Answer 1:


The Levenshtein distance has a maximum value: the length of the longer of the two input strings. It cannot get worse than that. So a normalized similarity index (0 = no match, 1 = exact match) for two strings a and b can be calculated as 1 - distance(a, b) / max(a.length, b.length).
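As a sketch, that formula looks like this (the `levenshtein` helper is a standard dynamic-programming implementation, included only so the snippet is self-contained):

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def similarity(a, b):
    """Normalized similarity index: 1.0 = identical, 0.0 = nothing in common."""
    if not a and not b:
        return 1.0  # two empty strings are trivially identical
    return 1 - levenshtein(a, b) / max(len(a), len(b))
```

For example, "kitten" vs "sitting" has distance 3 and longer length 7, so the similarity index is 1 - 3/7 ≈ 0.57.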

Take one sentence from File A. You said you'd compare it to each sentence of File B; presumably you are looking for the sentence in B with the smallest distance (i.e. the highest similarity index).

Simply average all of those best-match similarity indexes. This should give you a rough estimate of the similarity of the two texts.
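A minimal sketch of that averaging, assuming the texts are already split into sentence lists (the helpers are plain implementations, repeated here so the snippet stands alone):

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]

def similarity(a, b):
    """Normalized similarity index: 1.0 = identical, 0.0 = nothing in common."""
    if not a and not b:
        return 1.0
    return 1 - levenshtein(a, b) / max(len(a), len(b))

def average_best_similarity(sentences_a, sentences_b):
    """For each sentence in A, take its best match in B, then average."""
    best = [max(similarity(a, b) for b in sentences_b) for a in sentences_a]
    return sum(best) / len(best)
```

Note that this measure is asymmetric - matching A against B need not give the same value as matching B against A; averaging the two directions is one simple way to symmetrize it.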

But what makes you think that two similar texts might have their sentences shuffled? My personal opinion is that you should also introduce stop-word lists, synonyms, and the like.

Nevertheless: please also look at trigram matching, which might be another good approach for what you are trying to do.
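For reference, here is a simple character-trigram similarity (Jaccard overlap of trigram sets). The leading two-space/trailing one-space padding mimics the convention used by PostgreSQL's pg_trgm; that choice is an assumption of this sketch, not something from the answer:

```python
def trigrams(s):
    """Set of character trigrams; padding lets short strings still produce trigrams."""
    s = "  " + s + " "
    return {s[i:i + 3] for i in range(len(s) - 2)}

def trigram_similarity(a, b):
    """Jaccard similarity of the two trigram sets: 1.0 = identical, 0.0 = disjoint."""
    ta, tb = trigrams(a), trigrams(b)
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)
```

Unlike edit distance, this is cheap to index (store each sentence's trigram set once) and tolerant of word reordering, which is why it is often used for fuzzy text matching.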



Source: https://stackoverflow.com/questions/15612475/two-whole-texts-similarity-using-levenshtein-distance
