Vectorizing or Speeding up Fuzzywuzzy String Matching on PANDAS Column

孤城傲影 2020-12-18 07:57

I am trying to look for potential matches in a PANDAS column full of organization names. I am currently using iterrows() but it is extremely slow on a dataframe with ~70,000

3 Answers
  •  北海茫月
    2020-12-18 08:49

    Given your task, you are comparing 70k strings with each other using fuzz.WRatio, so you have a total of 4,900,000,000 comparisons, with each of these comparisons using the Levenshtein distance inside FuzzyWuzzy, which is an O(N*M) operation. fuzz.WRatio is a combination of multiple different string matching ratios that have different weights; it selects the best ratio among them, so it even has to calculate the Levenshtein distance multiple times. One goal should therefore be to reduce the search space by excluding some possibilities using a far faster matching algorithm.

    Another issue is that the strings are preprocessed to remove punctuation and to lowercase them. While this is required for the matching (so that e.g. an uppercased word becomes equal to a lowercased one), we can do this ahead of time, so we only have to preprocess the 70k strings once. I will use RapidFuzz instead of FuzzyWuzzy here, since it is quite a bit faster (I am the author).
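To illustrate the ahead-of-time preprocessing, here is a rough pure-Python stand-in for what RapidFuzz's utils.default_process does; this is an approximation for demonstration, and the real function's handling of whitespace and Unicode may differ in detail:

```python
import re

def preprocess(s):
    # rough stand-in for rapidfuzz.utils.default_process (assumption):
    # non-alphanumeric characters become spaces, the text is lowercased,
    # and whitespace is normalized
    return " ".join(re.sub(r"[\W_]", " ", s).lower().split())

# preprocess each of the 70k names exactly once, before any matching
orgs = ["Apple Inc.", "APPLE, Inc", "Microsoft Corp."]
processed = {org: preprocess(org) for org in orgs}
print(processed["Apple Inc."])   # -> apple inc
print(processed["APPLE, Inc"])   # -> apple inc
```

After this step, differently punctuated or cased spellings of the same organisation compare as identical strings, and no comparison ever has to repeat the cleanup.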

    The following version ran more than 10 times as fast as your previous solution in my experiments and applies the following improvements:

    1) it generates a dict mapping the organisations to the preprocessed organisations, so this does not have to be done in each run

    2) it passes a score_cutoff to extractOne so it can skip calculations where it already knows they cannot reach this ratio

    import pandas as pd
    from rapidfuzz import process, utils
    
    org_list = df['org_name']
    
    # preprocess every organisation name once, keeping the original as the key
    processed_orgs = {org: utils.default_process(org) for org in org_list}
    
    for i, (query, processed_query) in enumerate(processed_orgs.items()):
        # compare against every other organisation except the current one
        choices = processed_orgs.copy()
        del choices[query]
        # processor=None since the strings are already preprocessed;
        # score_cutoff lets extractOne skip hopeless candidates early
        match = process.extractOne(processed_query, choices, processor=None, score_cutoff=93)
        if match:
            # for a dict of choices extractOne returns (choice, score, key)
            df.loc[i, 'fuzzy_match'] = match[2]
            df.loc[i, 'fuzzy_match_score'] = match[1]
    

    Here is a list of the most relevant improvements of RapidFuzz to make it faster than FuzzyWuzzy in this example:

    1) It is implemented fully in C++ while a big part of FuzzyWuzzy is implemented in Python

    2) When calculating the Levenshtein distance it takes the score_cutoff into account and exits early when the score cannot be reached. This way it can exit in O(1) when the length difference between the strings is too big, or in O(N) when there are too many uncommon characters between the two strings, while calculating the full Levenshtein distance has a time complexity of O(N*M)
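The O(1) length-based early exit can be sketched in pure Python. This is a simplified illustration, not RapidFuzz's actual implementation: the "full" calculation here is a plain LCS-based indel similarity, and only the length check before it is the point being demonstrated:

```python
def indel_ratio(s1, s2, score_cutoff=0):
    """Simplified sketch of a normalized indel similarity with early exit."""
    total = len(s1) + len(s2)
    if total == 0:
        return 100
    # O(1) early exit: even if every character of the shorter string
    # matched, the score could not exceed this upper bound
    upper_bound = 100 * 2 * min(len(s1), len(s2)) / total
    if upper_bound < score_cutoff:
        return 0
    # the expensive O(N*M) part the early exit avoids:
    # longest common subsequence via dynamic programming
    m = [[0] * (len(s2) + 1) for _ in range(len(s1) + 1)]
    for i, c1 in enumerate(s1):
        for j, c2 in enumerate(s2):
            m[i + 1][j + 1] = m[i][j] + 1 if c1 == c2 else max(m[i][j + 1], m[i + 1][j])
    score = 100 * 2 * m[-1][-1] / total
    return score if score >= score_cutoff else 0
```

With score_cutoff=93, comparing "hello" against "hello world!!!" returns 0 from the length check alone, without ever entering the O(N*M) loop.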

    3) fuzz.WRatio combines the results of multiple other string matching algorithms like fuzz.ratio, fuzz.token_sort_ratio and fuzz.token_set_ratio and takes the maximum result after weighting them. While fuzz.ratio has a weighting of 1, fuzz.token_sort_ratio and fuzz.token_set_ratio have one of 0.95. When the score_cutoff is bigger than 95, fuzz.token_sort_ratio and fuzz.token_set_ratio are not calculated anymore, since their results are guaranteed to be smaller than the score_cutoff
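The skipping logic from 3) can be sketched as follows. This is a simplified illustration with the scorers passed in as plain functions, not RapidFuzz's real fuzz.WRatio, which also adapts its strategy to the length ratio of the two strings:

```python
def wratio_sketch(s1, s2, ratio, token_sort_ratio, token_set_ratio, score_cutoff=0):
    # fuzz.ratio has weight 1.0, so it is always worth calculating
    best = ratio(s1, s2)
    # the token-based scorers are weighted by 0.95, so they can never
    # score above 95; skip them entirely when the cutoff is higher
    if score_cutoff <= 95:
        best = max(best,
                   0.95 * token_sort_ratio(s1, s2),
                   0.95 * token_set_ratio(s1, s2))
    return best if best >= score_cutoff else 0
```

With a score_cutoff above 95, only one of the three scorers runs at all, which is why a high cutoff pays off twice: fewer candidates and cheaper comparisons.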

    4) since extractOne only searches for the best match, it uses the ratio of the current best match as the score_cutoff for the following elements. This way it can quickly discard many elements in many cases, by using the improvements to the Levenshtein distance calculation from 2)
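This tightening of the cutoff can be sketched like this. It is a simplified stand-in for process.extractOne, and difflib provides a toy scorer purely for the demonstration (a real scorer would use score_cutoff internally to exit early, as described in 2)):

```python
from difflib import SequenceMatcher

def toy_scorer(query, choice, score_cutoff):
    # stand-in scorer for the demo; ignores score_cutoff, while a real
    # one would use it to abort the calculation early
    return 100 * SequenceMatcher(None, query, choice).ratio()

def extract_one_sketch(query, choices, scorer, score_cutoff=0):
    best = None
    for key, choice in choices.items():
        score = scorer(query, choice, score_cutoff)
        if score >= score_cutoff and (best is None or score > best[1]):
            best = (choice, score, key)
            # every following candidate now has to beat the current best,
            # so the scorer can discard it even faster
            score_cutoff = score
    return best  # (choice, score, key) or None if nothing reached the cutoff
```

As soon as one good match is found, every later comparison runs against that higher bar, so most of the remaining 70k candidates are rejected by the cheap O(1) or O(N) checks.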
