How to bypass robots.txt with Apache Nutch 2.2.1


Question


Can anyone please tell me if there is any way for Apache Nutch to ignore or bypass robots.txt while crawling? I am using Nutch 2.2.1. I found that RobotRulesParser.java (full path: src/plugin/lib-http/src/java/org/apache/nutch/protocol/http/api/RobotRulesParser.java) is responsible for reading and parsing robots.txt. Is there any way to modify this file so that robots.txt is ignored and crawling continues?

Or is there any other way to achieve the same?
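
For background on what "ignoring robots.txt" means at the code level: Nutch in this version range reportedly delegates robots.txt parsing to the crawler-commons library (worth verifying against your exact source tree), where an "allow everything" decision is simply a SimpleRobotRules object in ALLOW_ALL mode. A minimal standalone illustration of that idea, not Nutch's own code:

    import crawlercommons.robots.BaseRobotRules;
    import crawlercommons.robots.SimpleRobotRules;
    import crawlercommons.robots.SimpleRobotRules.RobotRulesMode;

    public class AllowAllRulesSketch {
        public static void main(String[] args) {
            // An ALLOW_ALL rule set answers true for every URL, which is what
            // "ignoring robots.txt" effectively means at the parser level.
            BaseRobotRules rules = new SimpleRobotRules(RobotRulesMode.ALLOW_ALL);
            System.out.println(rules.isAllowed("http://example.com/any/page.html")); // true
        }
    }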


Answer 1:


  1. First of all, you should respect the robots.txt file if you are crawling any external site. Otherwise you put yourself at risk: your IP may be banned, or worse, there could be legal consequences.

  2. If your site is internal and not exposed to the outside world, then you should change the robots.txt file itself to allow your crawler.

  3. If your site is exposed to the Internet and the data is confidential, then you can try the following option, because in this case you cannot take the risk of modifying the robots.txt file: an external crawler could impersonate your crawler's agent name and crawl the site.

    In the Fetcher.java file:

    if (!rules.isAllowed(fit.u.toString())) { }
    

    This is the block that is responsible for blocking URLs that robots.txt disallows. You can play around with this code block to resolve your issue; see the sketch below.
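
    A minimal standalone sketch of that control flow, for illustration only: the FetchItem class, the shouldFetch helper, and the ignoreRobots flag are hypothetical stand-ins, not the actual Fetcher.java code; only the isAllowed() call mirrors the snippet quoted above.

    import crawlercommons.robots.BaseRobotRules;
    import crawlercommons.robots.SimpleRobotRules;
    import crawlercommons.robots.SimpleRobotRules.RobotRulesMode;

    public class FetcherCheckSketch {

        // Hypothetical stand-in for Nutch's fetch item; only the URL matters here.
        static class FetchItem {
            final String url;
            FetchItem(String url) { this.url = url; }
        }

        // Hypothetical helper mirroring the shape of the quoted check: when
        // ignoreRobots is true, the isAllowed() test is never consulted.
        static boolean shouldFetch(FetchItem fit, BaseRobotRules rules, boolean ignoreRobots) {
            if (!ignoreRobots && !rules.isAllowed(fit.url)) {
                // In Nutch this branch drops the URL (marks it robots-denied);
                // making it unreachable is what "bypassing robots.txt" amounts to.
                return false;
            }
            return true;
        }

        public static void main(String[] args) {
            BaseRobotRules denyAll = new SimpleRobotRules(RobotRulesMode.ALLOW_NONE);
            FetchItem fit = new FetchItem("http://intranet.example.com/report.html");
            System.out.println(shouldFetch(fit, denyAll, false)); // false: blocked by the rules
            System.out.println(shouldFetch(fit, denyAll, true));  // true: the check is bypassed
        }
    }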



Source: https://stackoverflow.com/questions/24058899/how-to-bypass-robots-txt-with-apache-nutch-2-2-1
