Get the proxy IP address Scrapy is using to crawl

有刺的猬 2021-01-03 03:43

I use Tor to crawl web pages. I started the Tor and Polipo services and added a downloader middleware to my Scrapy project:

    class ProxyMiddleware(object):
        # overwrite process_request to route every request through the local Polipo proxy
        def process_request(self, request, spider):
            request.meta['proxy'] = "http://127.0.0.1:8123"

How can I check which proxy IP address Scrapy is actually using?
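
For reference, a middleware like this only takes effect once it is enabled in settings.py. A minimal sketch follows; the module path myproject.middlewares and the priority value 100 are assumptions, not taken from the question:

    # settings.py - register the custom proxy middleware (sketch)
    DOWNLOADER_MIDDLEWARES = {
        "myproject.middlewares.ProxyMiddleware": 100,
    }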


        
2 Answers
  •  天命终不由人    2021-01-03 04:12

    The fastest option is to use the scrapy shell and check that the request meta contains the proxy.

    Start it from the project root:

    $ scrapy shell http://google.com
    >>> request.meta
    {'handle_httpstatus_all': True, 'redirect_ttl': 20, 'download_timeout': 180, 'proxy': 'http://127.0.0.1:8123', 'download_latency': 0.4804518222808838, 'download_slot': 'google.com'}
    >>> response.meta
    {'download_timeout': 180, 'handle_httpstatus_all': True, 'redirect_ttl': 18, 'redirect_times': 2, 'redirect_urls': ['http://google.com', 'http://www.google.com/'], 'depth': 0, 'proxy': 'http://127.0.0.1:8123', 'download_latency': 1.5814828872680664, 'download_slot': 'google.com'}
    

    This way you can verify that the middleware is configured correctly and that requests are going through the proxy.
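
    Beyond the shell, you can also log the proxy from inside a spider callback. Here is a minimal sketch, assuming the same middleware is enabled; the spider name and URL are illustrative only:

    import scrapy

    class ProxyCheckSpider(scrapy.Spider):
        # hypothetical spider used only to verify the proxy setting
        name = "proxy_check"
        start_urls = ["http://example.com"]

        def parse(self, response):
            # response.meta carries the proxy set by the downloader middleware
            self.logger.info("proxy used: %s", response.meta.get("proxy"))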
