Executing and testing Stanford CoreNLP example

Submitted by 谁说胖子不能爱 on 2019-11-29 07:49:11

Question


I downloaded the Stanford CoreNLP packages and tried to test them on my machine.

Using the command: java -cp "*" -mx1g edu.stanford.nlp.sentiment.SentimentPipeline -file input.txt

I got a sentiment result in the form of positive or negative. input.txt contains the sentence to be tested.

One more command, java -cp stanford-corenlp-3.3.0.jar;stanford-corenlp-3.3.0-models.jar;xom.jar;joda-time.jar -Xmx600m edu.stanford.nlp.pipeline.StanfordCoreNLP -annotators tokenize,ssplit,pos,lemma,parse -file input.txt, when executed gives the following output:

H:\Drive E\Stanford\stanfor-corenlp-full-2013~>java -cp stanford-corenlp-3.3.0.j
ar;stanford-corenlp-3.3.0-models.jar;xom.jar;joda-time.jar -Xmx600m edu.stanford
.nlp.pipeline.StanfordCoreNLP -annotators tokenize,ssplit,pos,lemma,parse -file
input.txt
Adding annotator tokenize
Adding annotator ssplit
Adding annotator pos
Reading POS tagger model from edu/stanford/nlp/models/pos-tagger/english-left3wo
rds/english-left3words-distsim.tagger ... done [36.6 sec].
Adding annotator lemma
Adding annotator parse
Loading parser from serialized file edu/stanford/nlp/models/lexparser/englishPCF
G.ser.gz ... done [13.7 sec].

Ready to process: 1 files, skipped 0, total 1
Processing file H:\Drive E\Stanford\stanfor-corenlp-full-2013~\input.txt ... wri
ting to H:\Drive E\Stanford\stanfor-corenlp-full-2013~\input.txt.xml {
  Annotating file H:\Drive E\Stanford\stanfor-corenlp-full-2013~\input.txt [13.6
81 seconds]
} [20.280 seconds]
Processed 1 documents
Skipped 0 documents, error annotating 0 documents
Annotation pipeline timing information:
PTBTokenizerAnnotator: 0.4 sec.
WordsToSentencesAnnotator: 0.0 sec.
POSTaggerAnnotator: 1.8 sec.
MorphaAnnotator: 2.2 sec.
ParserAnnotator: 9.1 sec.
TOTAL: 13.6 sec. for 10 tokens at 0.7 tokens/sec.
Pipeline setup: 58.2 sec.
Total time for StanfordCoreNLP pipeline: 79.6 sec.

H:\Drive E\Stanford\stanfor-corenlp-full-2013~>

I couldn't make sense of this. There is no informative (sentiment) result in it.

I found one example at: stanford core nlp java output

import java.io.*;
import java.util.*;

import edu.stanford.nlp.io.*;
import edu.stanford.nlp.ling.*;
import edu.stanford.nlp.pipeline.*;
import edu.stanford.nlp.trees.*;
import edu.stanford.nlp.util.*;

public class StanfordCoreNlpDemo {

  public static void main(String[] args) throws IOException {
    // Write human-readable output to the file given as args[1], or to stdout.
    PrintWriter out;
    if (args.length > 1) {
      out = new PrintWriter(args[1]);
    } else {
      out = new PrintWriter(System.out);
    }
    // Optionally also write XML output to the file given as args[2].
    PrintWriter xmlOut = null;
    if (args.length > 2) {
      xmlOut = new PrintWriter(args[2]);
    }

    // Build a pipeline with the default annotators from the bundled properties.
    StanfordCoreNLP pipeline = new StanfordCoreNLP();
    Annotation annotation;
    if (args.length > 0) {
      annotation = new Annotation(IOUtils.slurpFileNoExceptions(args[0]));
    } else {
      annotation = new Annotation("Kosgi Santosh sent an email to Stanford University. He didn't get a reply.");
    }

    pipeline.annotate(annotation);
    pipeline.prettyPrint(annotation, out);
    if (xmlOut != null) {
      pipeline.xmlPrint(annotation, xmlOut);
    }
    // An Annotation is a Map and you can get and use the various analyses individually.
    // For instance, this gets the parse tree of the first sentence in the text.
    List<CoreMap> sentences = annotation.get(CoreAnnotations.SentencesAnnotation.class);
    if (sentences != null && sentences.size() > 0) {
      CoreMap sentence = sentences.get(0);
      Tree tree = sentence.get(TreeCoreAnnotations.TreeAnnotation.class);
      out.println();
      out.println("The first sentence parsed is:");
      tree.pennPrint(out);
    }
  }

}

I tried to execute it in NetBeans, including the necessary libraries, but it always gets stuck partway or throws an exception: Exception in thread "main" java.lang.OutOfMemoryError: Java heap space

This happens even though I set the memory to be allocated in the Properties > Run > VM Options box.

Any idea how I can run the above Java example from the command line?

I want to get the sentiment score of the example.
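For context, compiling and running the StanfordCoreNlpDemo class above from the command line would look roughly like this (a sketch, assuming all the CoreNLP jars sit in the working directory, using the Windows ";" classpath separator, and raising the heap above 600m to avoid the OutOfMemoryError):

javac -cp "*" StanfordCoreNlpDemo.java
java -cp "*;." -Xmx2g StanfordCoreNlpDemo input.txt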

UPDATE

Output of: java -cp "*" -mx1g edu.stanford.nlp.sentiment.SentimentPipeline -file input.txt

Output of: java -cp stanford-corenlp-3.3.0.jar;stanford-corenlp-3.3.0-models.jar;xom.jar;joda-time.jar -Xmx600m edu.stanford.nlp.pipeline.StanfordCoreNLP -annotators tokenize,ssplit,pos,lemma,parse -file input.txt


Answer 1:


You need to add the "sentiment" annotator to the list of annotators:

-annotators tokenize,ssplit,pos,lemma,parse,sentiment

This will add a "sentiment" property to each sentence node in your XML.
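For example, the pipeline command from the question becomes (a sketch, assuming the same jars are on the classpath; the sentiment model needs noticeably more heap than 600m, see Answer 3 below):

java -cp "*" -Xmx2g edu.stanford.nlp.pipeline.StanfordCoreNLP -annotators tokenize,ssplit,pos,lemma,parse,sentiment -file input.txt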




Answer 2:


You can do the following in your code:

String text = "I am feeling very sad and frustrated.";
Properties props = new Properties();
props.setProperty("annotators", "tokenize, ssplit, pos, lemma, parse, sentiment");
StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
// <...>
Annotation annotation = pipeline.process(text);
List<CoreMap> sentences = annotation.get(CoreAnnotations.SentencesAnnotation.class);
for (CoreMap sentence : sentences) {
  String sentiment = sentence.get(SentimentCoreAnnotations.SentimentClass.class);
  System.out.println(sentiment + "\t" + sentence);
}

It will print the sentiment of the sentence along with the sentence itself; for "I am feeling very sad and frustrated." the output is:

Negative    I am feeling very sad and frustrated.
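The snippet above assumes imports along these lines (class names as in CoreNLP 3.5 and later, where the sentiment label is exposed as SentimentCoreAnnotations.SentimentClass):

import java.util.List;
import java.util.Properties;

import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.sentiment.SentimentCoreAnnotations;
import edu.stanford.nlp.util.CoreMap;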



Answer 3:


Per the example here, you need to run the sentiment analysis pipeline:

java -cp "*" -mx5g edu.stanford.nlp.sentiment.SentimentPipeline -file input.txt

Apparently this is a memory-expensive operation; it may not complete with only 1 gigabyte of heap. You can then use the "Evaluation Tool":

java -cp "*" edu.stanford.nlp.sentiment.Evaluate edu/stanford/nlp/models/sentiment/sentiment.ser.gz input.txt



Answer 4:


This works fine for me.

Maven dependencies:

        <dependency>
            <groupId>edu.stanford.nlp</groupId>
            <artifactId>stanford-corenlp</artifactId>
            <version>3.5.2</version>
            <classifier>models</classifier>
        </dependency>
        <dependency>
            <groupId>edu.stanford.nlp</groupId>
            <artifactId>stanford-corenlp</artifactId>
            <version>3.5.2</version>
        </dependency>
        <dependency>
            <groupId>edu.stanford.nlp</groupId>
            <artifactId>stanford-parser</artifactId>
            <version>3.5.2</version>
        </dependency>

Java code:

public static void main(String[] args) throws IOException {
        String text = "This World is an amazing place";
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize, ssplit, pos, lemma, parse, sentiment");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        Annotation annotation = pipeline.process(text);
        List<CoreMap> sentences = annotation.get(CoreAnnotations.SentencesAnnotation.class);
        for (CoreMap sentence : sentences) {
            String sentiment = sentence.get(SentimentCoreAnnotations.SentimentClass.class);
            System.out.println(sentiment + "\t" + sentence);
        }
    }

Results:

Very positive    This World is an amazing place



Source: https://stackoverflow.com/questions/20359346/executing-and-testing-stanford-core-nlp-example
