Python Scrapy 301 redirects

Submitted by 不打扰是莪最后的温柔 on 2021-02-17 20:53:23

Question


I'm having trouble printing the redirected URLs (the new URLs after a 301 redirect) when crawling a given website. My idea is to only print them, not scrape them. My current code is:

import scrapy
import os
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class MySpider(CrawlSpider):
    name = 'rust'
    allowed_domains = ['example.com']
    start_urls = ['http://example.com']

    rules = (
        # Follow every extracted link and parse each response with
        # the spider's parse_item method.
        Rule(LinkExtractor(), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # if response.status == 301:
        print(response.url)

However, this does not print the redirected URLs. Any help would be appreciated.

Thank you.


Answer 1:


To have your callbacks parse responses whose status is not 200, you need to do one of the following:

Project-wide

You can set HTTPERROR_ALLOWED_CODES = [301, 302, ...] in your settings.py file. Or, if you want to allow every status code, set HTTPERROR_ALLOW_ALL = True instead.
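For example, a minimal settings.py sketch (the exact list of allowed codes is up to you):

# settings.py

# Let responses with these statuses reach your spider callbacks instead
# of being filtered out by HttpErrorMiddleware.
HTTPERROR_ALLOWED_CODES = [301, 302]

# Or allow every status code through:
# HTTPERROR_ALLOW_ALL = True

# Note: RedirectMiddleware follows redirects before HttpErrorMiddleware
# gets a say, so to actually see raw 301 responses project-wide you may
# also need to disable redirects:
# REDIRECT_ENABLED = False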

Spider-wide

Add a handle_httpstatus_list attribute to your spider. In your case, something like:

class MySpider(scrapy.Spider):
    handle_httpstatus_list = [301]

Note that handle_httpstatus_all exists only as a Request.meta key (see below), not as a spider attribute.

Request-wide

You can set the meta key handle_httpstatus_list = [301, 302, ...] on individual requests, or handle_httpstatus_all = True to allow every status code for that request:

scrapy.Request('http://url.com', meta={'handle_httpstatus_list': [301]})
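This per-request form is handy when you only care about redirects for a few specific URLs while the rest of the crawl is filtered as usual. RedirectMiddleware honours the same meta keys, so for such a request the redirect is not followed and the raw 301 response is delivered to your callback.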

To learn more, see HttpErrorMiddleware.
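Putting it together with the spider from the question, a minimal sketch might look like this (untested; since handle_httpstatus_list is also honoured by RedirectMiddleware, the 301 responses reach parse_item instead of being followed):

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class MySpider(CrawlSpider):
    name = 'rust'
    allowed_domains = ['example.com']
    start_urls = ['http://example.com']

    # Allow 301 responses through to the callback.
    handle_httpstatus_list = [301]

    rules = (
        Rule(LinkExtractor(), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        if response.status == 301:
            # The redirect target is in the Location header;
            # response.url is the URL that answered with the 301.
            target = response.headers.get('Location', b'').decode()
            print(response.url, '->', target)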



Source: https://stackoverflow.com/questions/38658247/python-scrapy-301-redirects
