Use crawler4j to download js files

Submitted by 半世苍凉 on 2019-12-24 08:58:40

Question


I'm trying to use crawler4j to download some websites. The only problem I have is that even though I return true for all .js files in the shouldVisit function, they never get downloaded.

@Override
public boolean shouldVisit(WebURL url) {
    // Accept every URL, so .js files should be visited as well
    return true;
}

@Override
public void visit(Page page) {
    // Print each URL that actually gets crawled
    String url = page.getWebURL().getURL();
    System.out.println("URL: " + url);
}

The URLs for .js files never get printed out.


Answer 1:


I noticed that <script> tags do not get processed by crawler4j, and that is where all of the .js files occurred. So I don't think the problem is limited to .js files; it is anything within <script> tags (which usually happens to be .js files).

It initially looks like modifying HtmlContentHandler's Enumeration and startElement() method would solve the problem. I tried that, and it did not work. While debugging, I observed that either the Tika parser or TagSoup (which Tika uses) is not picking up the script tags, so they never even reach crawler4j to be processed.

As a workaround, I used JSoup in my visit() method to parse the HTML for all <script> tags, and then scheduled a crawl on those files.
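For reference, here is a minimal sketch of that workaround. It assumes the raw HTML is available through crawler4j's HtmlParseData, and that you keep a reference to the running CrawlController (called controller below, which is not provided by WebCrawler and has to be passed into your crawler yourself) to schedule the extracted URLs:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.parser.HtmlParseData;

@Override
public void visit(Page page) {
    String url = page.getWebURL().getURL();
    System.out.println("URL: " + url);

    if (page.getParseData() instanceof HtmlParseData) {
        String html = ((HtmlParseData) page.getParseData()).getHtml();
        // Re-parse the raw HTML ourselves, since the script tags never
        // survive the Tika/TagSoup pass described above
        Document doc = Jsoup.parse(html, url);
        for (Element script : doc.select("script[src]")) {
            String jsUrl = script.attr("abs:src"); // resolve relative src values
            controller.addSeed(jsUrl);             // assumed CrawlController reference
        }
    }
}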

I think the real solution is identifying why Tika (or TagSoup) is not picking up the script tags. It could be the way in which it is getting called by crawler4j. Once that is resolved, then modifying the HtmlContentHandler will work.




Answer 2:


A look at the source shows that the reason lies in the HtmlContentHandler class.

This class is responsible for extracting links from downloaded web pages. The <script> tag is never processed.

If you want to download .js files, I suggest you clone the project and extend this class, which is quite simple. You also need to modify the WebCrawler class that calls HtmlContentHandler; a rough sketch of the extension follows.
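As an illustration of that idea (not the project's actual code): the sketch below assumes HtmlContentHandler is a SAX ContentHandler whose startElement() can be overridden, and recordScriptUrl() is a hypothetical helper standing in for however the handler records outgoing links in the version you clone:

import java.util.ArrayList;
import java.util.List;

import org.xml.sax.Attributes;
import org.xml.sax.SAXException;

import edu.uci.ics.crawler4j.parser.HtmlContentHandler;

public class ScriptAwareContentHandler extends HtmlContentHandler {

    private final List<String> scriptUrls = new ArrayList<>();

    @Override
    public void startElement(String uri, String localName, String qName,
                             Attributes attributes) throws SAXException {
        // Treat <script src="..."> like any other link-bearing tag
        if ("script".equalsIgnoreCase(localName)) {
            String src = attributes.getValue("src");
            if (src != null) {
                recordScriptUrl(src);
            }
        }
        super.startElement(uri, localName, qName, attributes);
    }

    // Hypothetical helper: in the real class you would add src to the
    // handler's outgoing-URL list (the field name varies between versions)
    private void recordScriptUrl(String src) {
        scriptUrls.add(src);
    }

    public List<String> getScriptUrls() {
        return scriptUrls;
    }
}

You would then wire this handler into wherever WebCrawler sets up HtmlContentHandler, which is the WebCrawler modification mentioned above.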



Source: https://stackoverflow.com/questions/14413965/use-crawler4j-to-download-js-files
