Is there a more efficient way of doing this? My code reads a text file and extracts all nouns.
    import nltk
    File = open(fileName)   # open the file
    lines = File.read()     # read the whole file into one string
    import nltk

    lines = 'lines is some string of words'
    tokenized = nltk.word_tokenize(lines)
    nouns = [word for (word, pos) in nltk.pos_tag(tokenized) if pos[:2] == 'NN']
    print(nouns)
Just simplified it a bit more.
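The noun-filtering step can also be pulled into a small helper and tested on its own, independent of the tokenizer. This is just a sketch: `extract_nouns` and the sample `tagged` list are illustrative names, and it assumes `nltk.pos_tag` returns `(word, tag)` pairs using Penn Treebank tags, where every noun tag ('NN', 'NNS', 'NNP', 'NNPS') starts with 'NN'.

```python
# Hypothetical helper: filter nouns out of (word, tag) pairs,
# assuming the Penn Treebank tagset that nltk.pos_tag produces.
def extract_nouns(tagged_words):
    """Keep words whose POS tag starts with 'NN' (all noun tags do)."""
    return [word for word, pos in tagged_words if pos.startswith('NN')]

# Sample input shaped like nltk.pos_tag output:
tagged = [('The', 'DT'), ('quick', 'JJ'), ('fox', 'NN'),
          ('jumps', 'VBZ'), ('over', 'IN'), ('fences', 'NNS')]

print(extract_nouns(tagged))  # ['fox', 'fences']
```

Using `pos.startswith('NN')` instead of `pos[:2] == 'NN'` reads a little more clearly and behaves the same for these tags.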