Passing a pandas dataframe column to an NLTK tokenizer

Submitted by 纵然是瞬间 on 2021-02-18 12:59:15

Question


I have a pandas dataframe raw_df with 2 columns, ID and sentences. I need to convert each sentence to a string. The code below produces no errors and says datatype of rule is "object."

raw_df['sentences'] = raw_df.sentences.astype(str)
raw_df.sentences.dtypes

Out: dtype('O')

Then I try to tokenize the sentences and get a TypeError saying the method expects a string or bytes-like object. What am I doing wrong?

raw_sentences=tokenizer.tokenize(raw_df)

Same TypeError for

raw_sentences = nltk.word_tokenize(raw_df)

Answer 1:


I'm assuming this is an NLTK tokenizer. I believe these work by taking sentences as input and returning tokenised words as output.

What you're passing is raw_df - a pd.DataFrame object, not a str. You cannot expect the tokenizer to apply itself row-wise without being told to; pandas provides the apply method for exactly that.

raw_df['tokenized_sentences'] = raw_df['sentences'].apply(tokenizer.tokenize)

Assuming this works without any hitches, tokenized_sentences will be a column of lists.
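A minimal, self-contained sketch of the fix above (the sample data is made up for illustration; TreebankWordTokenizer is used here because it is regex-based and needs no corpus download, unlike nltk.word_tokenize):

```python
import pandas as pd
from nltk.tokenize import TreebankWordTokenizer  # regex-based, no punkt download needed

# Hypothetical stand-in for the asker's raw_df
raw_df = pd.DataFrame({
    "ID": [1, 2],
    "sentences": ["Pandas makes data easy.", "NLTK tokenizes text."],
})
raw_df["sentences"] = raw_df.sentences.astype(str)

tokenizer = TreebankWordTokenizer()
# apply() calls the tokenizer once per cell, so each call receives a str,
# not the whole DataFrame - which is what caused the TypeError
raw_df["tokenized_sentences"] = raw_df["sentences"].apply(tokenizer.tokenize)

print(raw_df["tokenized_sentences"].iloc[0])
# ['Pandas', 'makes', 'data', 'easy', '.']
```

Passing `raw_df` itself to `tokenizer.tokenize` fails because the tokenizer receives a DataFrame rather than a string; `apply` hands it one cell at a time.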

Since you're performing text processing on DataFrames, I'd recommend taking a look at another answer of mine here: Applying NLTK-based text pre-processing on a pandas dataframe



Source: https://stackoverflow.com/questions/48363461/passing-a-pandas-dataframe-column-to-an-nltk-tokenizer
