how to use word_tokenize on a dataframe

Backend · Unresolved · 4 answers · 2205 views

南笙 2020-12-23 12:23

I have recently started using the nltk module for text analysis. I am stuck at a point. I want to use word_tokenize on a dataframe, so as to obtain all the words used in a particular row of the dataframe.

4 Answers
  •  失恋的感觉
    2020-12-23 13:19

    I will show you an example. Suppose you have a data frame named twitter_df that stores sentiment and text columns. First, extract the text column as a pandas Series:

     tweetText = twitter_df['text']
    

    Then, to tokenize each entry:

     import nltk
     from nltk.tokenize import word_tokenize

     nltk.download('punkt')  # word_tokenize needs the punkt tokenizer models

     # apply word_tokenize to every value in the Series, one row at a time
     tweetText = tweetText.apply(word_tokenize)
     tweetText.head()
    

    I think this will help you.
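
    For reference, here is a minimal end-to-end sketch of the same approach. The sample tweets and the tokens column name are made up for illustration; the real twitter_df would come from your own data.

     import pandas as pd
     import nltk
     from nltk.tokenize import word_tokenize

     nltk.download('punkt')  # tokenizer models used by word_tokenize

     # a tiny stand-in for twitter_df; in practice you would load your own data
     twitter_df = pd.DataFrame({
         'sentiment': ['positive', 'negative'],
         'text': ['I love this library!', 'Tokenizing text is harder than it looks.'],
     })

     # run word_tokenize on every row of the 'text' column,
     # producing a Series whose values are lists of tokens
     twitter_df['tokens'] = twitter_df['text'].apply(word_tokenize)
     print(twitter_df[['text', 'tokens']])

    Storing the result in a new tokens column (instead of overwriting text) keeps the original tweets available next to their token lists.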
