Question
I have been following the instructions at http://wiki.apache.org/nutch/Nutch2Tutorial to see if I can get a Nutch installation running with Elasticsearch. I have successfully done a crawl with no real issues, but when I try to load the results into Elasticsearch I run into trouble.
I issue the command:
bin/nutch elasticindex <cluster name> -all
It waits around for a long time and then comes back with an error:
Exception in thread "main" java.lang.RuntimeException: job failed: name=elastic-index [ocpnutch], jobid=job_local_0001
If I look in the logs at:
~/apache-nutch-2.1/runtime/local/logs/hadoop.log
I see several errors like this:
Exception caught on netty layer [[id: 0x569764bd, /192.168.17.39:52554 => /192.168.17.60:9300]] java.lang.OutOfMemoryError: Java heap space
There is nothing in the logs on the Elasticsearch side.
I have tried setting elastic.max.bulk.docs and elastic.max.bulk.size to small values and allocating several GB of heap to Nutch, but to no avail.
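These are the settings I mean, in conf/nutch-site.xml (the values here are just examples, not recommendations):

<!-- Illustrative values only; tune for your setup. -->
<property>
  <name>elastic.max.bulk.docs</name>
  <value>100</value>
</property>
<property>
  <name>elastic.max.bulk.size</name>
  <value>1000000</value>
</property>

For the heap, I was exporting NUTCH_HEAPSIZE (in MB) before running bin/nutch, which I understand is how the script sets -Xmx for the local job.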
The JVM is: Java(TM) SE Runtime Environment (build 1.7.0_21-b11).
Does anyone have any idea what I am doing wrong? What other diagnostic information would help solve this problem?
Answer 1:
I had exactly the same problem, working with Elasticsearch 0.90.2. I found a solution: with Elasticsearch 0.19.4 it works!
Answer 2:
I had a similar problem caused by incompatible versions of HBase and Elasticsearch. Using HBase 0.90.4 and Elasticsearch 0.90.9 worked for me.
I made some configuration changes: in ~/apache-nutch-2.2.1/ivy/ivy.xml the revision of the elasticsearch dependency must be set to 0.90.9.
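The dependency entry then looks roughly like this (the org/name coordinates here are the standard elasticsearch artifact; check your ivy.xml for the exact line and conf mapping):

<dependency org="org.elasticsearch" name="elasticsearch" rev="0.90.9" conf="*->default"/>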
In the file ElasticWriter.java, on line 104, the statement:
if (item.failed())
had to be changed to:
if (item.isFailed())
Then it worked for me.
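To show where that change sits, here is a rough, simplified sketch of that kind of per-item failure check against the 0.90.x Java client API. This is not the actual ElasticWriter source; the helper method name checkBulkResponse is made up for the example.

import java.io.IOException;
import org.elasticsearch.action.bulk.BulkItemResponse;
import org.elasticsearch.action.bulk.BulkResponse;

// Illustrative only: the real ElasticWriter differs, but against the
// 0.90.x client the per-item check is isFailed() rather than failed().
void checkBulkResponse(BulkResponse response) throws IOException {
  for (BulkItemResponse item : response.getItems()) {
    if (item.isFailed()) {
      throw new IOException("Bulk indexing failed: " + item.getFailureMessage());
    }
  }
}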
Source: https://stackoverflow.com/questions/16729940/outofmemoryerror-for-bin-nutch-elasticindex-cluser-all-nutch-2-1