Error 403: HTTP status code is not handled or not allowed in Scrapy

Submitted by 喜你入骨 on 2020-05-02 04:20:52

Question


This is the code I have written to scrape the justdial website.

import scrapy
from scrapy.http.request import Request

class JustdialSpider(scrapy.Spider):
    name = 'justdial'
    # handle_httpstatus_list = [400]
    # headers={'User-Agent': "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1"}
    # handle_httpstatus_list = [403, 404]
    allowed_domains = ['justdial.com']
    start_urls = ['https://www.justdial.com/Delhi-NCR/Chemists/page-1']
    # def  start_requests(self):
    #     headers= {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:48.0) Gecko/20100101 Firefox/48.0'}
    #     for url in self.start_urls:
    #         self.log("I just visited :---------------------------------- "+url)
    #         yield Request(url, headers=headers)
    def parse(self, response):
        self.log("I just visited the site:---------------------------------------------- " + response.url)
        urls = response.xpath('//a/@href').extract()
        self.log("Urls-------: " + str(urls))

This is the error showing in the terminal:

2017-08-18 18:32:25 [scrapy.core.engine] INFO: Spider opened
2017-08-18 18:32:25 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-08-18 18:32:25 [scrapy.extensions.httpcache] DEBUG: Using filesystem cache storage in D:\scrapy\justdial\.scrapy\httpcache
2017-08-18 18:32:25 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-08-18 18:32:25 [scrapy.core.engine] DEBUG: Crawled (403) <GET https://www.justdial.com/robots.txt> (referer: None) ['cached']
2017-08-18 18:32:25 [scrapy.core.engine] DEBUG: Crawled (403) <GET https://www.justdial.com/Delhi-NCR/Chemists/page-1> (referer: None) ['cached']
2017-08-18 18:32:25 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <403 https://www.justdial.com/Delhi-NCR/Chemists/page-1>: HTTP status code is not handled or not allowed

I have seen similar questions on Stack Overflow and tried everything; you can see in the commented-out code above what I tried:

  • Changed the user agents

  • Set handle_httpstatus_list = [400]

Note: This website (https://www.justdial.com/Delhi-NCR/Chemists/page-1) is not blocked on my system; when I open it in Chrome/Firefox, it loads fine. I get the same error with https://www.practo.com/bangalore#doctor-search as well.


Answer 1:


When you set the user agent via the user_agent spider attribute, it starts to work. Setting the request headers is probably not enough, as they get overridden by the default user agent string. So set the spider attribute

user_agent = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1"

(the same way you set start_urls) and try it.
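Putting that together with the spider from the question, a minimal sketch might look like the following (same site and XPath as in the question; the browser string is just an example and is not guaranteed to keep working against justdial.com):

import scrapy


class JustdialSpider(scrapy.Spider):
    name = 'justdial'
    allowed_domains = ['justdial.com']
    start_urls = ['https://www.justdial.com/Delhi-NCR/Chemists/page-1']

    # Spider-level attribute picked up by Scrapy's UserAgentMiddleware;
    # it is applied to every request this spider makes.
    user_agent = ('Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 '
                  '(KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1')

    def parse(self, response):
        self.log("I just visited the site: " + response.url)
        urls = response.xpath('//a/@href').extract()
        self.log("Urls: " + str(urls))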




Answer 2:


As Tomáš Linhart mentioned, we have to add a user agent setting in settings.py, as shown below:

  • USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1'
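For completeness, a sketch of what that looks like inside the project's settings.py (the browser string is the one from the question; any realistic desktop browser string should behave the same):

# settings.py -- project-wide default user agent, used for every spider
# unless a spider overrides it with its own user_agent attribute.
USER_AGENT = ('Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 '
              '(KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1')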




Answer 3:


Your investigation shows that the issue lies with the HTTP client (Scrapy) rather than with the network (firewall, IP ban).

Read the Scrapy documentation to turn on debug logging. You want to see the content of the HTTP requests Scrapy makes; they may include a cookie that was set by the website while the user agent was still Scrapy's default.

https://doc.scrapy.org/en/latest/topics/debug.html

https://doc.scrapy.org/en/latest/faq.html?highlight=cookies#how-can-i-see-the-cookies-being-sent-and-received-from-scrapy
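As a rough illustration of what those pages suggest, the relevant settings (assumed to live in the project's settings.py) could be:

# settings.py -- debugging aids described in the linked docs.
LOG_LEVEL = 'DEBUG'    # log every request/response Scrapy makes
COOKIES_DEBUG = True   # log the Cookie / Set-Cookie headers of each request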



Source: https://stackoverflow.com/questions/45758194/error-403-http-status-code-is-not-handled-or-not-allowed-in-scrapy
