German CoreNLP model defaulting to English models

Submitted by 让人想犯罪 on 2019-12-12 04:34:14

Question


I use the following command to start a CoreNLP server with the German language models, which are downloaded as a JAR on the classpath, but it does not output German tags or parses and loads only the English models:

 java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer   -props ./german.prop

german.prop contents:

annotators = tokenize, ssplit, pos, depparse, parse

tokenize.language = de

pos.model = edu/stanford/nlp/models/pos-tagger/german/german-hgc.tagger

ner.model = edu/stanford/nlp/models/ner/german.hgc_175m_600.crf.ser.gz
ner.applyNumericClassifiers = false
ner.useSUTime = false

parse.model = edu/stanford/nlp/models/lexparser/germanFactored.ser.gz
depparse.model = edu/stanford/nlp/models/parser/nndep/UD_German.gz

client command:

wget --post-data 'Meine Mutter ist aus Wuppertal' 'localhost:9000/?properties={"tokenize.whitespace":"true","annotators":"tokenize, ssplit, pos, depparse, parse","outputFormat":"text","tokenize.language":"de",
"pos.model":"edu/stanford/nlp/models/pos-tagger/german/german-hgc.tagger",
"depparse.model":"edu/stanford/nlp/models/parser/nndep/UD_German.gz",
"parse.model":"edu/stanford/nlp/models/lexparser/germanFactored.ser.gz"}' -O -

I get the following incorrect output:

 {"dep":"dep","governor":4,"governorGloss":"aus","dependent":5,"dependentGloss":"Wuppertal"}],"openie":[{"subject":"Wuppertal","subjectSpan":[4,5],"relation":"is ist aus of","relationSpan":[2,4],"object":"Meine Mutter","objectSpan":[0,2]}],"tokens":[{"index":1,"word":"Meine","originalText":"Meine","lemma":"Meine","characterOffsetBegin":1,"characterOffsetEnd":6,"pos":"NNP","ner":"PERSON","speaker":"PER0","before":" ","after":" "},{"index":2,"word":"Mutter","originalText":"Mutter","lemma":"Mutter","characterOffsetBegin":7,"characterOffsetEnd":13,"pos":"NNP","ner":"PERSON","speaker":"PER0","before":" ","after":" "},{"index":3,"word":"ist","originalText":"ist","lemma":"ist","characterOffsetBegin":14,"characterOffsetEnd":17,"pos":"NN","ner":"O","speaker":"PER0","before":" ","after":" "},{"index":4,"word":"aus","originalText":"aus","lemma":"aus","characterOffsetBegin":18,"characterOffsetEnd":21,"pos":"NN","ner":"O","speaker":"PER0","before":" ","after":" "},{"index":5,"word":"Wuppertal","originalText":"Wuppertal","lemma":"Wuppertal","characterOffsetBegin":22,"characterOffsetEnd":31,"pos":"NNP","ner":"LOCATI100%[==========================================================================>] 2,

In the server log I see that it loads the English models, even though it lists the German models on startup:

pos.model=edu/stanford/nlp/models/pos-tagger/ge...
parse.model=edu/stanford/nlp/models/lexparser/ger...
tokenize.language=de
depparse.model=edu/stanford/nlp/models/parser/nndep/...
annotators=tokenize, ssplit, pos, depparse, parse
Starting server on port 9000 with timeout of 5000 milliseconds.
StanfordCoreNLPServer listening at /0:0:0:0:0:0:0:0:9000
[/203.:61563] API call w/annotators tokenize,ssplit,pos,depparse
Die Katze liegt auf der Matte.
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokenize
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.TokenizerAnnotator - TokenizerAnnotator: No tokenizer type provided. Defaulting to PTBTokenizer.
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ssplit
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator pos
Reading POS tagger model from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [1.5 sec].
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator depparse
Loading depparse model file: edu/stanford/nlp/models/parser/nndep/english_UD.gz ...
PreComputed 100000, Elapsed Time: 1.396 (s)

The question below, about the same error with the French models, points to the same problem, but even after following it, the issue is not resolved for the server case. I am able to get the correct output without the server, using the plain edu.stanford.nlp.pipeline.StanfordCoreNLP command; it is the server command, edu.stanford.nlp.pipeline.StanfordCoreNLPServer, that defaults to English: French dependency parsing using CoreNLP
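
For reference, the non-server invocation that does load the German models looks roughly like the sketch below; input.txt is a placeholder for a file containing the German text, and the flags are the standard StanfordCoreNLP batch options rather than anything taken from the original post:

 java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLP -props german.prop -file input.txt -outputFormat text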


Answer 1:


There have been some issues with getting the foreign-language models to work on the server.

If you use the latest release available at our GitHub site, it should work.

The GitHub site is here: https://github.com/stanfordnlp/CoreNLP

That link has instructions for building a jar with the latest version of the code.

I ran this command on some sample German text and it looks like it works fine:

wget --post-data '<sample german text>' 'localhost:9000/?properties={"pipelineLanguage":"german","annotators":"tokenize,ssplit,pos,ner,parse", "parse.model":"edu/stanford/nlp/models/lexparser/germanFactored.ser.gz","tokenize.language":"de","pos.model":"edu/stanford/nlp/models/pos-tagger/german/german-hgc.tagger", "ner.model":"edu/stanford/nlp/models/ner/german.hgc_175m_600.crf.ser.gz", "ner.applyNumericClassifiers":"false", "ner.useSUTime":"false"}' -O -
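
If the properties JSON gets mangled by shell quoting or by characters that are awkward in a raw URL, the server can end up ignoring it and falling back to the defaults. A minimal sketch of the same request with the JSON URL-encoded first (assuming python3 is available for the encoding step; the sample sentence is only an illustration):

 PROPS='{"pipelineLanguage":"german","annotators":"tokenize,ssplit,pos,ner,parse","outputFormat":"text"}'
 ENCODED=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1]))' "$PROPS")
 wget --post-data 'Die Katze liegt auf der Matte.' "localhost:9000/?properties=$ENCODED" -O -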

I should note that the neural-net German dependency parser is currently broken (we are working on a fix), so for now you should just use the German settings specified in that command.

More info on the server can be found here: http://stanfordnlp.github.io/CoreNLP/corenlp-server.html
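
One more option, assuming a recent enough server build (the flag is described in the server documentation linked above): pass the German properties file when starting the server, so clients do not have to repeat every model path in each request:

 java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -serverProperties german.prop -port 9000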



Source: https://stackoverflow.com/questions/39688652/german-corenlp-model-defaulting-to-english-models
