I have a bunch of sentences in a list and I want to use the nltk library to stem them. I am able to stem one sentence at a time, but I am having issues stemming sentences from a list.
You're passing a whole list to word_tokenize, but it expects a single string. The solution is to wrap your logic in an outer for-loop that iterates over the sentences:
from nltk import tokenize
from nltk.stem import PorterStemmer

ps = PorterStemmer()

data_list = ['the gamers playing games', 'higher scores', 'sports']
for sentence in data_list:
    words = tokenize.word_tokenize(sentence)
    for w in words:
        print(ps.stem(w))
This prints:

the
gamer
play
game
higher
score
sport
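If you want the stemmed sentences back as a list rather than printed word by word, a list comprehension works too. This is a minimal sketch that uses str.split() as a lightweight stand-in for word_tokenize (which requires the punkt model to be downloaded first); swap word_tokenize back in for proper tokenization:

```python
from nltk.stem import PorterStemmer

ps = PorterStemmer()

data_list = ['the gamers playing games', 'higher scores', 'sports']

# Stem every word in each sentence, then join the stems back
# into one string per sentence.
stemmed = [' '.join(ps.stem(w) for w in sentence.split())
           for sentence in data_list]

print(stemmed)  # ['the gamer play game', 'higher score', 'sport']
```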