Percentage Count Verb, Noun using Spacy?
Question: I want to count the percentage split of POS tags in a sentence using spaCy, similar to "Count verbs, nouns, and other parts of speech with python's NLTK". I am currently able to detect and count POS tags. How do I find the percentage split?

```python
from __future__ import unicode_literals
from collections import Counter
import spacy, en_core_web_sm

nlp = en_core_web_sm.load()
print Counter([token.pos_ for token in nlp('The cat sat on the mat.')])
```

Current output:

```
Counter({u'NOUN': 2, u'DET': 2, u'VERB': 1, u'ADP': 1, u'PUNCT': 1})
```
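One way to get the percentage split from the `Counter` above is to divide each tag's count by the total number of tokens. A minimal sketch (the helper name `pos_percentages` is my own; with spaCy loaded, you would pass it `[token.pos_ for token in nlp(text)]`):

```python
from collections import Counter


def pos_percentages(pos_tags):
    """Return the percentage share of each POS tag in a list of tags."""
    counts = Counter(pos_tags)
    total = sum(counts.values())
    return {pos: round(count / total * 100, 2) for pos, count in counts.items()}


# The tag list spaCy produces for 'The cat sat on the mat.'
# via [token.pos_ for token in nlp('The cat sat on the mat.')]
tags = ['DET', 'NOUN', 'VERB', 'ADP', 'DET', 'NOUN', 'PUNCT']
print(pos_percentages(tags))
# {'DET': 28.57, 'NOUN': 28.57, 'VERB': 14.29, 'ADP': 14.29, 'PUNCT': 14.29}
```

Note the example uses Python 3 (`print()`, true division); under the question's Python 2 setup you would need `from __future__ import division, print_function`.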