Running Stanford CoreNLP server with custom models

Submitted by 孤街浪徒 on 2019-12-03 20:59:52

Yes, the server should (in theory) support all the functionality of the regular pipeline. The properties GET parameter is translated into the Properties object you would normally pass into StanfordCoreNLP. Therefore, if you'd like the server to load a custom model, you can just call it via, e.g.:

wget \
  --post-data 'the quick brown fox jumped over the lazy dog' \
  'localhost:9000/?properties={"parse.model": "/path/to/model/on/server/computer", "annotators": "tokenize,ssplit,pos", "outputFormat": "json"}' -O -
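For comparison, here is a minimal local sketch of the same configuration in Java, assuming CoreNLP is on the classpath; the class name is arbitrary and the model path is the placeholder from the request above. The outputFormat entry is omitted since it only controls how the server serializes its response:

import java.util.Properties;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

public class CustomModelPipeline {
    public static void main(String[] args) {
        // The same settings the server reads from the "properties" GET
        // parameter, written as the Properties object you would normally
        // pass to StanfordCoreNLP. The model path is a placeholder.
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize,ssplit,pos");
        props.setProperty("parse.model", "/path/to/model/on/server/computer");

        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
        Annotation document = new Annotation("the quick brown fox jumped over the lazy dog");
        pipeline.annotate(document);
    }
}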

Note that the server won't garbage-collect this model afterwards though, so if you load too many models there's a good chance you'll run into out-of-memory errors...
