Is there a way to get around the following error?
httperror_seek_wrapper: HTTP Error 403: request disallowed by robots.txt
Is the only way around this to ignore robots.txt?
You need to tell mechanize to ignore robots.txt:

br = mechanize.Browser()
br.set_handle_robots(False)
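
For context, here is a minimal sketch of the whole flow. The URL and the User-Agent string are placeholders; some sites also reject mechanize's default User-Agent, so spoofing a browser header is often needed alongside the robots.txt setting:

import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)  # skip the robots.txt check that raises the 403
# Placeholder browser User-Agent; some servers block the mechanize default.
br.addheaders = [('User-agent',
                  'Mozilla/5.0 (Windows NT 10.0; Win64; x64)')]

response = br.open('http://example.com/')  # placeholder URL
print(response.read())

Note that bypassing robots.txt means the site has asked not to be crawled, so check the site's terms of service before doing this.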