wordnet

All synonyms for word in python? [duplicate]

Submitted by 别说谁变了你拦得住时间么 on 2019-11-26 16:36:31

Question: This question already has an answer here: How to get synonyms from nltk WordNet Python (5 answers). The code to get the synonyms of a word in Python is, say:

from nltk.corpus import wordnet
dog = wordnet.synset('dog.n.01')
print dog.lemma_names
>> ['dog', 'domestic_dog', 'Canis_familiaris']

However, dog.n.02 gives different words, and for an arbitrary word I can't know in advance how many senses there are. How can I return all of the synonyms for a word?

Answer: Using wn.synset('dog.n.1').lemma_names is the correct way to access the synonyms of a sense. This is because a word has many senses, and it's more appropriate to list…

Convert words between verb/noun/adjective forms

Submitted by 喜欢而已 on 2019-11-26 16:22:38

I would like a Python library function that translates/converts across different parts of speech. Sometimes it should output multiple words (e.g. "coder" and "code" are both nouns from the verb "to code"; one is the subject, the other the object):

# :: String => List of String
print verbify('writer') # => ['write']
print nounize('written') # => ['writer']
print adjectivate('write') # => ['written']

I mostly care about verbs <=> nouns, for a note-taking program I want to write. I.e., I can write "caffeine antagonizes A1" or "caffeine is an A1 antagonist", and with some NLP it can figure out they…

How to check if a word is an English word with Python?

Submitted by 江枫思渺然 on 2019-11-26 12:00:55

I want to check in a Python program whether a word is in the English dictionary. I believe the nltk wordnet interface might be the way to go, but I have no clue how to use it for such a simple task.

def is_english_word(word):
    pass # how do I implement is_english_word?

is_english_word(token.lower())

In the future, I might want to check whether the singular form of a word is in the dictionary (e.g., properties -> property -> English word). How would I achieve that?

Answer (Katriel): For (much) more power and flexibility, use a dedicated spellchecking library like PyEnchant. There's a tutorial, or you could just dive…

Extract Word from Synset using Wordnet in NLTK 3.0

Submitted by 半腔热情 on 2019-11-26 11:21:50

Question: Some time ago, someone on SO asked how to retrieve a list of words for a given synset using NLTK's wordnet wrapper. Here is one of the suggested responses:

for synset in wn.synsets('dog'):
    print synset.lemmas[0].name

Running this code with NLTK 3.0 yields TypeError: 'instancemethod' object is not subscriptable. I tried each of the previously proposed solutions (each of the solutions described on the page linked above), but each throws an error. I therefore wanted to ask: Is it possible…

Stemmers vs Lemmatizers

Submitted by 老子叫甜甜 on 2019-11-26 11:10:34

Natural Language Processing (NLP), especially for English, has evolved to the stage where stemming would become an archaic technology if "perfect" lemmatizers existed. That's because stemmers change the surface form of a word/token into meaningless stems. Then again, the definition of a "perfect" lemmatizer is questionable, because different NLP tasks require different levels of lemmatization. E.g., Convert words between verb/noun/adjective forms.

Stemmers: [in]: having [out]: hav
Lemmatizers: [in]: having [out]: have

So the question is: are English stemmers of any use at all today?

How to get synonyms from nltk WordNet Python

Submitted by 落爺英雄遲暮 on 2019-11-26 11:01:42

Question: WordNet is great, but I'm having a hard time getting synonyms in nltk. If you search "similar to" for the word 'small' like here, it shows all of the synonyms. Basically, I just need to know the following: wn.synsets('word')[i].option(), where option can be hypernyms or antonyms, but what is the option for getting synonyms?

Answer 1: If you want the synonyms in the synset (aka the lemmas that make up the set), you can get them with lemma_names():

>>> for ss in wn.synsets('small'):
>>>     print(ss…

wordnet lemmatization and pos tagging in python

Submitted by 醉酒当歌 on 2019-11-26 09:06:31

Question: I wanted to use the wordnet lemmatizer in Python, and I have learnt that the default pos tag is NOUN and that it does not output the correct lemma for a verb unless the pos tag is explicitly specified as VERB. My question is: what is the best way to perform the above lemmatization accurately? I did the pos tagging using nltk.pos_tag, and I am lost in integrating the Treebank pos tags into wordnet-compatible pos tags. Please help.

from nltk.stem.wordnet import WordNetLemmatizer
lmtzr =…
