Rotating Proxies for web scraping

Submitted by 爱⌒轻易说出口 on 2019-12-03 02:52:18

Question


I've got a Python web crawler and I want to distribute the download requests among many different proxy servers, probably running Squid (though I'm open to alternatives). For example, it could work in a round-robin fashion, where request1 goes to proxy1, request2 to proxy2, and eventually looping back around. Any idea how to set this up?

To make it harder, I'd also like to be able to dynamically change the list of available proxies, bring some down, and add others.

If it matters, IP addresses are assigned dynamically.

Thanks :)
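For reference, here is a minimal sketch of the round-robin rotation being asked about, assuming the third-party requests library; the proxy addresses are placeholders, not real servers:

```python
import itertools
import requests

# Placeholder proxy URLs -- replace with your real proxy endpoints (e.g. Squid).
PROXIES = [
    "http://proxy1.example.com:3128",
    "http://proxy2.example.com:3128",
]
proxy_cycle = itertools.cycle(PROXIES)

def fetch(url):
    # request1 -> proxy1, request2 -> proxy2, ..., then loop back around.
    proxy = next(proxy_cycle)
    return requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=10)

print(fetch("https://httpbin.org/ip").text)
```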


Answer 1:


Make your crawler keep a list of proxies and, with each HTTP request, use the next proxy from the list in round-robin fashion. However, this will prevent you from using HTTP/1.1 persistent connections. Modifications to the proxy list take effect gradually: a newly added proxy will eventually start receiving requests, and a removed one will eventually stop.
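One way to make the list modifiable at runtime is to guard it with a lock so the crawler's threads always see a consistent view. A sketch, with illustrative class and method names:

```python
import threading

class ProxyPool:
    """Round-robin proxy pool whose membership can change while crawling."""

    def __init__(self, proxies):
        self._proxies = list(proxies)
        self._index = 0
        self._lock = threading.Lock()

    def next(self):
        with self._lock:
            if not self._proxies:
                return None  # empty pool: caller falls back to a direct connection
            proxy = self._proxies[self._index % len(self._proxies)]
            self._index += 1
            return proxy

    def add(self, proxy):
        with self._lock:
            self._proxies.append(proxy)

    def remove(self, proxy):
        with self._lock:
            self._proxies.remove(proxy)

pool = ProxyPool(["http://proxy1.example.com:3128"])
pool.add("http://proxy2.example.com:3128")  # takes effect on an upcoming request
print(pool.next(), pool.next(), pool.next())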

Or have several connections open in parallel, one to each proxy, and distribute your crawling requests among the open connections. Dynamic changes can be implemented by having each connector register itself with the request dispatcher.
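A sketch of that dispatcher pattern, again assuming the requests library: a shared queue acts as the dispatcher, and one worker (connector) per proxy pulls from it. Since each worker keeps its own Session bound to a single proxy, persistent connections are preserved, and adding a proxy is just starting another worker:

```python
import queue
import threading
import requests

url_queue = queue.Queue()  # the dispatcher: workers register by consuming from it

def worker(proxy):
    session = requests.Session()  # reuses the connection to this one proxy
    session.proxies = {"http": proxy, "https": proxy}
    while True:
        url = url_queue.get()
        if url is None:  # sentinel: shut this worker down
            url_queue.task_done()
            break
        try:
            print(proxy, session.get(url, timeout=10).status_code)
        except requests.RequestException as exc:
            print(proxy, "failed:", exc)
        url_queue.task_done()

proxies = ["http://proxy1.example.com:3128", "http://proxy2.example.com:3128"]
threads = [threading.Thread(target=worker, args=(p,)) for p in proxies]
for t in threads:
    t.start()
for u in ["https://httpbin.org/ip"] * 4:
    url_queue.put(u)
for _ in threads:
    url_queue.put(None)
for t in threads:
    t.join()
```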




Answer 2:


I've set up rotating proxies using HAProxy + DeleGate + multiple Tor instances. With Tor you don't get good control over bandwidth and latency, but it's useful for web scraping. I've just published an article on the subject: Running Your Own Anonymous Rotating Proxies
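For illustration, a minimal haproxy.cfg along those lines, assuming each Tor instance has already been exposed as a local HTTP proxy (e.g., through DeleGate); all addresses and ports here are placeholders, not taken from the article:

```
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend rotating_proxy
    bind 127.0.0.1:3128          # the crawler points at this single address
    default_backend tor_pool

backend tor_pool
    balance roundrobin           # each request goes to the next Tor instance
    server tor1 127.0.0.1:8118 check
    server tor2 127.0.0.1:8119 check
```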




Answer 3:


Edit: There is even a Python wrapper for gimmeproxy: https://github.com/ericfourrier/gimmeproxy-api

If you don't mind Node, you can use proxy-lists to collect public proxies and check-proxy to check them. That's exactly how https://gimmeproxy.com works.
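For example, a Python crawler can ask gimmeproxy's public API for a fresh proxy per request. The endpoint and the "curl" response field below are my recollection of its documented API, so verify them against the current docs:

```python
import requests

def fetch_proxy():
    # Ask gimmeproxy for a working public proxy (the free tier is rate-limited).
    resp = requests.get("https://gimmeproxy.com/api/getProxy",
                        params={"protocol": "http"}, timeout=10)
    resp.raise_for_status()
    # "curl" is a ready-to-use proxy URL such as "http://1.2.3.4:8080"
    # (field name assumed from the public API docs).
    return resp.json()["curl"]

proxy = fetch_proxy()
print(requests.get("https://httpbin.org/ip",
                   proxies={"http": proxy, "https": proxy},
                   timeout=10).text)
```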



Source: https://stackoverflow.com/questions/1934088/rotating-proxies-for-web-scraping
