Dynamically adding domains to Scrapy CrawlSpider deny_domains list


Question


I am currently using Scrapy's CrawlSpider to look for specific information across a list of multiple start_urls. What I would like to do is stop scraping a given start_url's domain once I've found the information I'm looking for, so the spider stops hitting that domain and only hits the remaining start_urls.

Is there a way to do this? I have tried appending to deny_domains like so:

deniedDomains = []
...
rules = [Rule(SgmlLinkExtractor(..., deny_domains=(etc), ...))]
...
def parseURL(self, response):
    ...
    self.deniedDomains.append(specificDomain)

Appending doesn't seem to stop the crawling, but if I start the spider with the intended specificDomain already in the list, it stops as requested. So I'm assuming you can't change the deny_domains list after the spider has started?


Answer 1:


The best way to do this is to maintain your own dynamic_deny_domain list in your Spider class:

  • write a simple downloader middleware,
  • it's a simple class with a single method implemented: process_request(request, spider):
  • raise IgnoreRequest if the request's domain is in your spider.dynamic_deny_domain list; return None otherwise (see the sketch after this answer).

Then add your downloader middleware to the DOWNLOADER_MIDDLEWARES dict in your Scrapy settings, at first position: 'myproject.downloadermiddleware.IgnoreDomainMiddleware': 50.

Should do the trick.
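
A minimal sketch of such a middleware, assuming Python 3, a module path of myproject.downloadermiddleware, and a spider attribute named dynamic_deny_domain (all of these names are illustrative):

from urllib.parse import urlparse

from scrapy.exceptions import IgnoreRequest

class IgnoreDomainMiddleware(object):
    def process_request(self, request, spider):
        # Drop any request whose host was added to the spider's
        # dynamic deny list at runtime; everything else passes through.
        domain = urlparse(request.url).netloc
        if domain in getattr(spider, 'dynamic_deny_domain', []):
            raise IgnoreRequest("dynamically denied domain: %s" % domain)
        return None

and in settings.py:

DOWNLOADER_MIDDLEWARES = {
    'myproject.downloadermiddleware.IgnoreDomainMiddleware': 50,
}

Note the check is an exact match on the netloc (including any "www." prefix or port); matching subdomains as well would need an endswith-style comparison instead.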




Answer 2:


Something like this?

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class MySpider(CrawlSpider):
    name = "foo"
    allowed_domains = ["example.org"]
    start_urls = ["http://www.example.org/foo/"]

    rules = (
        Rule(SgmlLinkExtractor(
                allow=('/foo/[^/]+',),           # follow links under /foo/
                deny_domains=('example.com',)),  # static deny list
             callback='parseURL'),
    )

    def parseURL(self, response):
        # here the rest of your code
        pass
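
As written, though, this only fixes the deny list once, when the class is defined. To stop a domain mid-crawl, the callback could feed the dynamic list that the middleware from Answer 1 checks. A sketch, assuming dynamic_deny_domain = [] is defined on the spider, urlparse is imported from urllib.parse, the middleware is enabled in settings, and extract_info is a hypothetical extraction helper:

    def parseURL(self, response):
        item = self.extract_info(response)  # hypothetical: returns None until found
        if item is not None:
            # Found what we were looking for on this domain: deny it for
            # the rest of the crawl so the spider moves on to the other
            # start_urls; the middleware drops all further requests to it.
            self.dynamic_deny_domain.append(urlparse(response.url).netloc)
            yield item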


Source: https://stackoverflow.com/questions/10657006/dynamically-adding-domains-to-scrapy-crawlspider-deny-domains-list
