Crawler4j with authentication


Question


I'm trying to run crawler4j against a personal Redmine instance for testing purposes. I want to authenticate and then crawl several levels of depth into the application.

I followed this tutorial from the crawler4j FAQ and created the following snippet:

import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.parser.HtmlParseData;
import edu.uci.ics.crawler4j.url.WebURL;

public class CustomWebCrawler extends WebCrawler {

    @Override
    public void visit(final Page pPage) {
        // Print every page that was actually fetched and parsed as HTML
        if (pPage.getParseData() instanceof HtmlParseData) {
            System.out.println("URL: " + pPage.getWebURL().getURL());
        }
    }

    @Override
    public boolean shouldVisit(final Page pPage, final WebURL pUrl) {
        // Never follow the logout link, otherwise the session is closed
        WebURL webUrl = new WebURL();
        webUrl.setURL(Test.URL_LOGOUT);
        if (pUrl.equals(webUrl)) {
            return false;
        }
        // Only crawl URLs on the Redmine host
        return Test.MY_REDMINE_HOST.equals(pUrl.getDomain());
    }
}

In this class I extend WebCrawler and, on every visit, just print the visited URL, checking that the URL is on the same domain and never following the logout URL.
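
Since WebURL.equals depends on how crawler4j normalizes the stored URL internally, a plain string comparison may be a more robust way to skip the logout link. A minimal sketch of that variant (same class, only shouldVisit changes):

    @Override
    public boolean shouldVisit(final Page pPage, final WebURL pUrl) {
        // Compare raw URL strings instead of WebURL instances
        if (Test.URL_LOGOUT.equals(pUrl.getURL())) {
            return false;
        }
        return Test.MY_REDMINE_HOST.equals(pUrl.getDomain());
    }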

I also have a test class that configures this crawler with the authentication info and the site to crawl:

import org.apache.log4j.ConsoleAppender;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.crawler.authentication.AuthInfo;
import edu.uci.ics.crawler4j.crawler.authentication.BasicAuthInfo;
import edu.uci.ics.crawler4j.crawler.authentication.FormAuthInfo;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;

public class Test {

    private static Logger rootLogger;

    public final static String MY_REDMINE_HOST = "my-redmine-server";
    public final static String URL_LOGOUT = "http://"+MY_REDMINE_HOST+"/redmine/logout";

    public static void main(String[] args) throws Exception {
        configureLogger();

        // Create the configuration
        CrawlConfig config = new CrawlConfig();
        String frontier = "/tmp/webCrawler/tmp_" + System.currentTimeMillis();
        config.setCrawlStorageFolder(frontier);

        //Starting point to crawl
        String seed = "http://"+MY_REDMINE_HOST+"/redmine/";

        // Data for the authentication methods
        String userName = "my-user";
        String password = "my-passwd";
        String urlLogin = "http://"+MY_REDMINE_HOST+"/redmine/login";
        String nameUsername = "username";
        String namePassword = "password";
        AuthInfo authInfo1 = new FormAuthInfo(userName, password, urlLogin,
                nameUsername, namePassword);
        config.addAuthInfo(authInfo1);
        AuthInfo authInfo2 = new BasicAuthInfo(userName, password, urlLogin);
        config.addAuthInfo(authInfo2);
        config.setMaxDepthOfCrawling(3);

        PageFetcher pageFetcher = new PageFetcher(config);
        RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
        RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig,
                pageFetcher);
        CrawlController controller = new CrawlController(config, pageFetcher,
                robotstxtServer);

        controller.addSeed(seed);

        controller.start(CustomWebCrawler.class, 5);
        controller.shutdown();

    }

    private static void configureLogger() {
        // This is the root logger provided by log4j
        rootLogger = Logger.getRootLogger();
        rootLogger.setLevel(Level.INFO);

        // Define log pattern layout
        PatternLayout layout = new PatternLayout(
                "%d{ISO8601} [%t] %-5p %c %x - %m%n");

        // Add console appender to root logger
        if (!rootLogger.getAllAppenders().hasMoreElements()) {
            rootLogger.addAppender(new ConsoleAppender(layout));
        }
    }
}

I was expecting the crawler to log in to the application and then continue crawling the rest of the site, but it doesn't: the crawl only visits the links on the starting page.

I don't know if something is missing or wrong, or whether my whole approach to configuring the crawl is mistaken.
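
To narrow down whether the login actually succeeds, one option is to log the HTTP status of every fetched page. This is only a sketch, assuming crawler4j 4.x, where WebCrawler exposes a handlePageStatusCode hook; the override would go into the CustomWebCrawler shown above:

    @Override
    protected void handlePageStatusCode(final WebURL pUrl, final int pStatusCode,
            final String pStatusDescription) {
        // A protected page answering with a 302 redirect to /login would
        // indicate that the form authentication did not take effect
        System.out.println(pStatusCode + " " + pStatusDescription
                + " <- " + pUrl.getURL());
    }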

Source: https://stackoverflow.com/questions/30509805/crawler4j-with-authentication
