stanford-nlp

nltk interface to stanford parser [duplicate]

孤街醉人 submitted on 2019-12-24 04:37:07
Question: This question already has answers here: Stanford Parser and NLTK (18 answers). Closed 3 years ago. I am having problems accessing the Stanford parser through Python NLTK (they developed an interface for NLTK):

import nltk.tag.stanford
Traceback (most recent call last):
  File "", line 1, in
ImportError: No module named stanford

Answer 1: You can use the Stanford parser from NLTK. See http://www.nltk.org/api/nltk.tag.html#module-nltk.tag.stanford for how to use it. I guess it isn't a problem with…

Invalid Stream header with Stanford nlp library

我与影子孤独终老i submitted on 2019-12-24 03:43:41
Question: I am working through this Stanford POS tagger tutorial. I am doing it in Scala, but I do not think that matters. The line that produces the error is:

val tagger = new MaxentTagger("/Users/user1/Documents/taggers/left3words-wsj-0-18.tagger")

and the error is:

edu.stanford.nlp.io.RuntimeIOException: java.io.StreamCorruptedException: invalid stream header: 0003CBE8

The file path is correct.

Answer 1: By default the tagger treats the model file path as a classpath-relative resource path, but it also…

Stanford Parser as a Google App Engine Service

荒凉一梦 submitted on 2019-12-24 03:23:12
Question: I'm new to Google App Engine. I'm struggling to find a way to use the Stanford Parser as a backend for a mobile app (iOS, Android). Is it possible to run the parser as a service on GAE, so that the app can send the string on which the parsing will be done and, after processing, receive a JSON response with the results? If yes, are there any hints or tutorials you can direct me to? Thank you.

Answer 1: I can't answer your exact question, but I'm also very interested in this. Have you tried running the parser…

CoreNLP SemanticGraph - search for edges with specific lemmas

a 夏天 submitted on 2019-12-24 02:45:13
Question: I'm using Stanford CoreNLP's dependency parser and wondering how to do a generic search for SemanticEdge(s) with a specific head lemma, dependent lemma, and lexical relationship. For example, if I have an actual dependency like this:

dobj(discover-4, insights-6)

how do I search for it using lemmas instead of the literal word and the index? Basically, I want to be able to pattern-match parts of the dependency graph using generic rules…

Answer 1: You could do this with semgrex. See http://nlp…
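To make the lemma-based idea concrete: semgrex describes each node by attributes such as lemma rather than a literal word and index, so a pattern along these lines should match the example edge regardless of token position or inflection (a sketch; the exact lemma strings, e.g. insight vs. insights, are an assumption about what the lemmatizer produces):

```
{lemma:discover} >dobj {lemma:insight}
```

In CoreNLP, such a pattern is compiled with SemgrexPattern.compile(...) and run against a SemanticGraph via the pattern's matcher.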

Why does the CoreNLP ner tagger join separated numbers together?

|▌冷眼眸甩不掉的悲伤 submitted on 2019-12-24 01:43:06
Question: Here is the code snippet:

In [390]: t
Out[390]: ['my', 'phone', 'number', 'is', '1111', '1111', '1111']

In [391]: ner_tagger.tag(t)
Out[391]: [('my', 'O'), ('phone', 'O'), ('number', 'O'), ('is', 'O'), ('1111\xa01111\xa01111', 'NUMBER')]

What I expect is:

Out[391]: [('my', 'O'), ('phone', 'O'), ('number', 'O'), ('is', 'O'), ('1111', 'NUMBER'), ('1111', 'NUMBER'), ('1111', 'NUMBER')]

As you can see, the artificial phone number is joined by \xa0, which is a non-breaking space. Can I…
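As a post-processing workaround (a minimal sketch, independent of the tagger itself), the glued tokens can be split back apart in Python, since U+00A0 is an ordinary character from Python's point of view:

```python
def split_joined_tokens(tagged):
    """Split tokens that the tagger glued together with U+00A0
    (non-breaking space), repeating the tag for each piece."""
    result = []
    for token, tag in tagged:
        for part in token.split('\xa0'):
            result.append((part, tag))
    return result

tagged = [('my', 'O'), ('phone', 'O'), ('number', 'O'), ('is', 'O'),
          ('1111\xa01111\xa01111', 'NUMBER')]
print(split_joined_tokens(tagged))
# → [('my', 'O'), ('phone', 'O'), ('number', 'O'), ('is', 'O'),
#    ('1111', 'NUMBER'), ('1111', 'NUMBER'), ('1111', 'NUMBER')]
```

This restores the one-tag-per-input-token shape without touching the server-side number normalization that caused the join.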

Setting max Length for Sentence in StanfordCoreNLP

我与影子孤独终老i submitted on 2019-12-24 01:23:57
Question: I am trying to restrict the maximum sentence length in StanfordCoreNLP, but for some reason it does not seem to honor this property. The flag is part of the LexicalizedParser, but I am using a StanfordCoreNLP instance in my class and am wondering what the right way to set it is:

Properties properties = new Properties();
properties.put("annotators", "tokenize,ssplit,pos,lemma,ner");
properties.put("-maxLength", "100"); // does not work
StanfordCoreNLP nap = new StanfordCoreNLP(properties);

Answer 1: …
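For what it's worth, StanfordCoreNLP pipeline options are annotator-scoped property names rather than command-line flags with a leading dash, so a sketch of the equivalent configuration might look like this (pos.maxlen and parse.maxlen are the per-annotator length limits documented for CoreNLP; parse.maxlen only applies if the parse annotator is actually in the pipeline):

```properties
annotators = tokenize,ssplit,pos,lemma,ner
# no leading dash; scope the property to the annotator it belongs to
pos.maxlen = 100
# parse.maxlen = 100   (only meaningful if "parse" is among the annotators)
```

The same keys can be set programmatically, e.g. properties.setProperty("pos.maxlen", "100"), before constructing the StanfordCoreNLP instance.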

Stanford NER: AbstractSequenceClassifier vs NamedEntityTagAnnotation

我的未来我决定 submitted on 2019-12-24 00:49:55
Question: How do I load a custom properties file using AbstractSequenceClassifier? e.g.:

Master's Degree\tDEGREE
MBA\tDEGREE

What are the benefits/drawbacks of each approach (AbstractSequenceClassifier vs NamedEntityTagAnnotation)? Is there any accessible documentation or tutorial on the internet? I can play with the demo code and read the Javadocs, but a good tutorial would save me and many others a lot of time. During my perusal of the Stanford NER documentation, I have encountered two Java examples.…
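Worth noting: a tab-separated phrase-to-label file like the one in the question is the format RegexNER consumes, so a third option besides the two Java examples is to skip CRF training entirely and wire the file into a pipeline via the regexner annotator (a sketch; degrees.tab is a hypothetical file name):

```properties
annotators = tokenize,ssplit,pos,lemma,ner,regexner
# degrees.tab holds one "phrase<TAB>LABEL" pair per line, e.g.
#   Master's Degree	DEGREE
#   MBA	DEGREE
regexner.mapping = degrees.tab
```

This suits fixed gazetteer-style entities such as degree names, whereas a trained classifier generalizes to unseen phrases.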

psutil.AccessDenied Error while trying to load StanfordCoreNLP

喜夏-厌秋 submitted on 2019-12-23 18:12:57
Question: I'm trying to load the StanfordCoreNLP package to get the correct parsing for the movie reviews presented on their page (https://nlp.stanford.edu/sentiment/treebank.html). (I'm using a Mac.)

nlp = StanfordCoreNLP("/Users//NLP_models/stanford-corenlp-full-2018-01-31")

But I get the error:

Traceback (most recent call last):
  File "/Users/anaconda3/lib/python3.6/site-packages/psutil/_psosx.py", line 295, in wrapper
    return fun(self, *args, **kwargs)
  File "/Users/anaconda3/lib/python3.6/site-packages…

extract NP-VP-NP from Stanford dependency parse tree

别说谁变了你拦得住时间么 submitted on 2019-12-23 15:40:25
Question: I need to extract triplets of the form NP-VP-NP from the dependency parse tree produced as the output of lexicalized parsing in the Stanford Parser. What's the best way to do this? e.g., if the parse tree is as follows:

(ROOT (S (S (NP (NNP Exercise)) (VP (VBZ reduces) (NP (NN stress))) (. .)) (NP (JJ Regular) (NN exercise)) (VP (VBZ maintains) (NP (JJ mental) (NN fitness))) (. .)))

I need to extract two triplets: Exercise-reduces-stress and Regular exercise-maintains-mental fitness. Any ideas?

Answer 1: …
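One dependency-free way to attack this is to parse the bracketed tree string into a nested structure and look for an NP immediately followed by a VP under the same clause. This is a minimal sketch (the triplet rule "adjacent NP + VP, take the VP's verb and NP object" is a simplification, not the Stanford API):

```python
def parse_tree(s):
    """Parse a bracketed (Penn-style) tree string into nested lists:
    (NP (NNP Exercise)) -> ['NP', ['NNP', 'Exercise']]."""
    tokens = s.replace('(', ' ( ').replace(')', ' ) ').split()
    pos = 0
    def read():
        nonlocal pos
        pos += 1                  # skip '('
        node = [tokens[pos]]      # constituent label
        pos += 1
        while tokens[pos] != ')':
            if tokens[pos] == '(':
                node.append(read())
            else:
                node.append(tokens[pos])
                pos += 1
        pos += 1                  # skip ')'
        return node
    return read()

def leaves(node):
    """Collect the terminal words under a node, in order."""
    if isinstance(node, str):
        return [node]
    out = []
    for child in node[1:]:
        out.extend(leaves(child))
    return out

def extract_triplets(node, triplets=None):
    """For each NP immediately followed by a VP among a node's children,
    emit (subject words, verb, object words); recurse into all children."""
    if triplets is None:
        triplets = []
    if isinstance(node, str):
        return triplets
    children = node[1:]
    for a, b in zip(children, children[1:]):
        if (not isinstance(a, str) and a[0] == 'NP'
                and not isinstance(b, str) and b[0] == 'VP'):
            verbs = [leaves(c) for c in b[1:]
                     if not isinstance(c, str) and c[0].startswith('VB')]
            objs = [leaves(c) for c in b[1:]
                    if not isinstance(c, str) and c[0] == 'NP']
            if verbs and objs:
                triplets.append((' '.join(leaves(a)),
                                 ' '.join(verbs[0]),
                                 ' '.join(objs[0])))
    for child in children:
        extract_triplets(child, triplets)
    return triplets

tree = parse_tree("(ROOT (S (S (NP (NNP Exercise)) (VP (VBZ reduces) "
                  "(NP (NN stress))) (. .)) (NP (JJ Regular) (NN exercise)) "
                  "(VP (VBZ maintains) (NP (JJ mental) (NN fitness))) (. .)))")
print(extract_triplets(tree))
# → [('Regular exercise', 'maintains', 'mental fitness'),
#    ('Exercise', 'reduces', 'stress')]
```

The rule is intentionally shallow; for copulas, passives, or clausal objects it would need extra cases, and tools like CoreNLP's tregex patterns cover those more robustly.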

How to use serialized CRFClassifier with StanfordCoreNLP prop 'ner'

烂漫一生 submitted on 2019-12-23 12:27:26
Question: I'm using the StanfordCoreNLP API interface to do some basic NLP programmatically. I need to train a model on my own corpus, and I'd like to use the StanfordCoreNLP interface to do it, because it handles a lot of the dry mechanics behind the scenes and I don't need much specialization there. I've trained a CRFClassifier that I'd like to use for NER, serialized to a file. Based on the documentation, I'd think the following would work, but it doesn't seem to find my model and instead barfs on…
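Assuming the standard CoreNLP property naming (a sketch; the model path is hypothetical), the usual way to point the ner annotator at a serialized CRFClassifier is the ner.model property. Note that setting it replaces the default model list, so the stock models must be listed too, comma-separated, if they are still wanted:

```properties
annotators = tokenize,ssplit,pos,lemma,ner
# custom serialized CRF; replaces the default NER models unless they
# are listed here as well, comma-separated
ner.model = /path/to/my-custom-ner.ser.gz
```

With only the custom model listed, the pipeline will tag exactly the classes that model was trained on and nothing else.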