Nutch not crawling URLs except the one specified in seed.txt


You may try tweaking the properties available in conf/nutch-default.xml, for example controlling the number of outlinks you want followed per page or adjusting the fetch properties. If you decide to override any property, copy its entry into conf/nutch-site.xml and put the new value there. A sketch of such an override is shown below.
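As a minimal sketch, assuming the standard Nutch 1.x property names (db.max.outlinks.per.page for the number of outlinks kept per page, and fetcher.server.delay for the politeness delay between requests to the same host; the values shown are illustrative), such an override in conf/nutch-site.xml might look like this:

<?xml version="1.0"?>
<configuration>
  <!-- Process all outlinks found on a page; the default keeps only the first 100. -->
  <property>
    <name>db.max.outlinks.per.page</name>
    <value>-1</value>
  </property>
  <!-- Wait 2 seconds between successive requests to the same server. -->
  <property>
    <name>fetcher.server.delay</name>
    <value>2.0</value>
  </property>
</configuration>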

Got it working after trying multiple things over the last two days. Here is the solution:

Since the pages on the website I was crawling were very large, the http.content.limit property in nutch-default.xml was truncating the downloaded content to the default of 65536 bytes. The links I wanted to crawl unfortunately fell outside the truncated part, so Nutch never saw them. When I removed the limit by putting the following property in nutch-site.xml, it started crawling my pages:

<property>
  <name>http.content.limit</name>
  <value>-1</value>
  <description>The length limit for downloaded content using the http://
  protocol, in bytes. If this value is nonnegative (>=0), content longer
  than it will be truncated; otherwise, no truncation at all. Do not
  confuse this setting with the file.content.limit setting.
  </description>
</property>
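Note that this property entry has to sit inside the top-level <configuration> element of conf/nutch-site.xml. As a minimal sketch (the exact bin/crawl arguments vary between Nutch releases, and the urls/ seed directory and TestCrawl output directory are placeholder names), the complete file and a re-run of the crawl might look like this:

<?xml version="1.0"?>
<configuration>
  <!-- Do not truncate fetched HTTP content, so outlinks near the end of large pages are still parsed. -->
  <property>
    <name>http.content.limit</name>
    <value>-1</value>
  </property>
</configuration>

bin/crawl -i -s urls/ TestCrawl 2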