Web Crawler - Ignore Robots.txt file?

Submitted by 风格不统一 on 2019-12-03 16:32:08

Question


Some servers have a robots.txt file to tell web crawlers not to crawl (parts of) their websites. Is there a way to make a web crawler ignore the robots.txt file? I am using mechanize for Python.


Answer 1:


The documentation for mechanize has this sample code:

import mechanize

br = mechanize.Browser()
# Ignore robots.txt.  Do not do this without thought and consideration.
br.set_handle_robots(False)

That does exactly what you want.
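For context, here is a minimal sketch of that setting in use, fetching a page with robots.txt checking disabled. The URL is a placeholder of my own, not something from the original answer:

import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)   # skip fetching and obeying /robots.txt

# Placeholder URL for illustration only.
response = br.open("http://example.com/")
html = response.read()        # raw bytes of the page
print(html[:200])

Without set_handle_robots(False), mechanize fetches /robots.txt first and refuses to open URLs it disallows.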




Answer 2:


This looks like what you need:

from mechanize import Browser
br = Browser()

# Ignore robots.txt
br.set_handle_robots(False)

…but make sure you know what you're doing.
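As a related aside (my own assumption, not claimed by either answer): sites with a restrictive robots.txt often also reject mechanize's default User-Agent, so crawlers that disable robots handling commonly set a browser-like header too, via the browser's addheaders attribute:

from mechanize import Browser

br = Browser()
br.set_handle_robots(False)
# Hypothetical User-Agent string; replace with whatever identifies your crawler.
br.addheaders = [("User-Agent", "Mozilla/5.0 (compatible; MyCrawler/1.0)")]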



Source: https://stackoverflow.com/questions/8386481/web-crawler-ignore-robots-txt-file
