Scrapy: crawl all websites in start_urls even if redirected

Submitted by 南笙酒味 on 2019-12-12 03:08:15

Question


I am trying to crawl a long list of websites. Some of the websites in the start_urls list redirect (301). I want Scrapy to crawl the redirected websites as if they were also on the allowed_domains list (which they are not). For example, example.com is in my start_urls and allowed_domains lists, and it redirects to foo.com; I want foo.com to be crawled as well.

DEBUG: Redirecting (301) to <GET http://www.foo.com/> from <GET http://www.example.com>
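
For context, here is a minimal sketch of the kind of spider involved (the domain names are the illustrative ones from above, and CrawlSpider is assumed since parse_start_url is used later; import paths assume a modern Scrapy):

from scrapy.spiders import CrawlSpider

class ExampleSpider(CrawlSpider):
    name = 'example'
    # example.com is allowed, but it 301-redirects to foo.com,
    # which the offsite middleware then filters out.
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']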

I tried dynamically appending to allowed_domains in the parse_start_url method and returning a Request object, so that Scrapy would go back and scrape the redirected website once its domain was on the allowed list, but I still get:

 DEBUG: Filtered offsite request to 'www.foo.com'

Here is my attempt at dynamically adding to allowed_domains:

# Requires at module level: import tldextract; from scrapy.http import Request
def parse_start_url(self, response):
    # registered_domain gives e.g. 'foo.com' for 'http://www.foo.com/'
    domain = tldextract.extract(str(response.request.url)).registered_domain
    if domain not in self.allowed_domains:
        self.allowed_domains.append(domain)
        # Re-issue the request so it is re-filtered under the updated list
        return Request(response.url, callback=self.parse_callback)
    else:
        return self.parse_it(response, 1)

My other idea was to create a function in the offsite.py spider middleware that dynamically adds allowed_domains for redirected websites that originated from start_urls, but I have not been able to get that solution to work either.


Answer 1:


I figured out the answer to my own question.

I edited the offsite middleware so that it refreshes the list of allowed domains before it filters, and I dynamically add to allowed_domains in the parse_start_url method.
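
For background on why this is necessary: the stock OffsiteMiddleware compiles allowed_domains into a regex only once, when the spider opens, so later appends are invisible to the filter. Roughly (paraphrasing Scrapy's offsite middleware; the exact module path varies by version):

def spider_opened(self, spider):
    # Compiled once at startup; appending to spider.allowed_domains
    # after this point has no effect on filtering by itself.
    self.host_regex = self.get_host_regex(spider)
    self.domains_seen = set()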

I added this function to OffsiteMiddleware:

def update_regex(self, spider):
    # Rebuild the host-matching regex from the spider's current allowed_domains
    self.host_regex = self.get_host_regex(spider)

I also edited this function inside OffsiteMiddleware:

def should_follow(self, request, spider):
    # Custom code: refresh the regex so domains added mid-crawl are honoured
    self.update_regex(spider)

    regex = self.host_regex
    # hostname can be None for wrong urls (like javascript links)
    host = urlparse_cached(request).hostname or ''
    return bool(regex.search(host))
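
One caveat: as written, should_follow recompiles the regex on every request. A minimal guard, assuming domains are only ever appended (the _num_domains attribute name is illustrative, not part of Scrapy), could look like:

def should_follow(self, request, spider):
    # Recompile only when allowed_domains has grown since the last check
    allowed = getattr(spider, 'allowed_domains', None) or []
    if len(allowed) != getattr(self, '_num_domains', -1):
        self._num_domains = len(allowed)
        self.host_regex = self.get_host_regex(spider)
    # hostname can be None for wrong urls (like javascript links)
    host = urlparse_cached(request).hostname or ''
    return bool(self.host_regex.search(host))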

Lastly, for my use case, I added this code to my spider:

def parse_start_url(self, response):
    # Register the (possibly redirected) domain before parsing the page
    domain = tldextract.extract(str(response.request.url)).registered_domain
    if domain not in self.allowed_domains:
        self.allowed_domains.append(domain)
    return self.parse_it(response, 1)

This code adds the target domain whenever a start_urls entry gets redirected, and the redirected sites are then crawled.
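
If you would rather not patch Scrapy's installed source, the same edits can live in a subclass of OffsiteMiddleware registered via settings.py. A sketch, where the module path myproject.middlewares.DynamicOffsiteMiddleware is an assumption and the built-in middleware's import path depends on your Scrapy version:

# settings.py
SPIDER_MIDDLEWARES = {
    # Disable the built-in offsite middleware...
    'scrapy.spidermiddlewares.offsite.OffsiteMiddleware': None,
    # ...and enable the customized subclass at the same priority
    'myproject.middlewares.DynamicOffsiteMiddleware': 500,
}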



Source: https://stackoverflow.com/questions/27988931/scrapy-crawl-all-websites-in-start-url-even-if-redirect
