Is there a way to get around the following?
httperror_seek_wrapper: HTTP Error 403: request disallowed by robots.txt
Is the only way around this to ignore robots.txt?
As it turns out, you have to do less work to bypass robots.txt than to honor it, at least according to this article. So you may just need to remove (or disable) the code that enables the robots.txt filter.
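For instance, the `httperror_seek_wrapper` in your error suggests you're using the mechanize library. Assuming that's the case, here is a minimal sketch of turning off its robots.txt handling via `set_handle_robots(False)` (the URL is a placeholder):

    import mechanize

    br = mechanize.Browser()
    # Tell mechanize not to fetch or honor robots.txt; with handling
    # enabled, disallowed pages raise the 403 error shown above.
    br.set_handle_robots(False)

    # Placeholder URL, for illustration only.
    response = br.open("http://www.example.com/")
    print(response.read())

Keep in mind that robots.txt exists so site owners can opt out of automated crawling, so ignoring it may violate the site's terms of use.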