how to use word_tokenize in data frame

南笙 2020-12-23 12:23

I have recently started using the nltk module for text analysis. I am stuck at a point. I want to use word_tokenize on a dataframe, so as to obtain all the words used in a particular row of the dataframe.

4 Answers
  •  无人及你
    2020-12-23 13:15

    pandas.Series.apply is faster than pandas.DataFrame.apply

    import time

    import nltk
    import pandas as pd

    # word_tokenize needs the punkt tokenizer models: nltk.download("punkt")
    df = pd.read_csv("/path/to/file.csv")

    # Series.apply: tokenize a single column directly
    start = time.time()
    df["unigrams"] = df["verbatim"].apply(nltk.word_tokenize)
    print("series.apply", time.time() - start)

    # DataFrame.apply: call word_tokenize once per row via a lambda
    start = time.time()
    df["unigrams2"] = df.apply(lambda row: nltk.word_tokenize(row["verbatim"]), axis=1)
    print("dataframe.apply", time.time() - start)
    
    

    On a sample 125 MB CSV file:

    series.apply 144.428858995

    dataframe.apply 201.884778976
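
    Once the unigrams column exists, the words for a particular row (which is what the question asks for) can be read straight out of it. A minimal sketch, assuming the same df as above:

    # tokens for one particular row, e.g. the first
    print(df["unigrams"].iloc[0])

    # or flatten the column into one list of all words
    all_words = [word for tokens in df["unigrams"] for word in tokens]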

    Edit: You might think that the DataFrame df is larger after series.apply(nltk.word_tokenize), which could affect the runtime of the subsequent operation, dataframe.apply(nltk.word_tokenize).

    Pandas optimizes under the hood for such a scenario; I got a similar runtime of about 200 s when performing only dataframe.apply(nltk.word_tokenize) on its own.
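
    To reproduce that check, time dataframe.apply on a freshly loaded frame, with no series.apply run beforehand. A minimal sketch, assuming the same file and verbatim column as above:

    import time

    import nltk
    import pandas as pd

    df = pd.read_csv("/path/to/file.csv")

    # dataframe.apply alone, on a frame that has no unigrams column yet
    start = time.time()
    df["unigrams2"] = df.apply(lambda row: nltk.word_tokenize(row["verbatim"]), axis=1)
    print("dataframe.apply alone", time.time() - start)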
