Screen scraping: getting around “HTTP Error 403: request disallowed by robots.txt”

借酒劲吻你 asked on 2020-12-12 17:15

Is there a way to get around the following?

httperror_seek_wrapper: HTTP Error 403: request disallowed by robots.txt

Is the only way around this to contact the site owner?
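
For reference, httperror_seek_wrapper is the exception class the Python mechanize library wraps HTTP errors in, so the traceback presumably comes from a mechanize.Browser call along these lines (the URL is a placeholder):

    import mechanize

    br = mechanize.Browser()
    # mechanize fetches robots.txt by default and refuses disallowed
    # paths, raising httperror_seek_wrapper: HTTP Error 403.
    br.open("http://example.com/disallowed/page")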

8 Answers
  •  抹茶落季 answered on 2020-12-12 17:41

    From the look of it, you actually have to do less work to bypass robots.txt, not more, at least according to this article. So you may just have to remove the code that honors the filter.
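
    A minimal sketch of what that might look like, assuming the Python mechanize library is in use (the URL and User-agent string are placeholders):

        import mechanize

        br = mechanize.Browser()
        br.set_handle_robots(False)  # skip the robots.txt check entirely
        br.addheaders = [("User-agent", "Mozilla/5.0")]  # some sites also reject the default UA
        response = br.open("http://example.com/disallowed/page")
        print(response.read())

    With set_handle_robots(False), mechanize never fetches robots.txt, so the 403 wrapper above is not raised for disallowed paths.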
