Scraping attempts keep getting a 403 Forbidden error
I am trying to scrape a website and I get a 403 Forbidden error no matter what I try:

- wget
- cURL (command line and PHP)
- Perl WWW::Mechanize
- PhantomJS

I tried all of the above with and without proxies, changing the user agent, and adding a referrer header. I even copied the request headers from my Chrome browser and sent them with my PHP cURL request, and I still get a 403 Forbidden error. Any input or suggestions on what is triggering the website to block the request, and how I might get around it, would be appreciated.
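For reference, here is a minimal sketch of the kind of PHP cURL request I'm sending. The URL and header values below are placeholders standing in for the real site and the headers I copied from Chrome's network inspector:

```php
<?php
// Placeholder target URL and browser-like headers (roughly what Chrome sends).
$url = 'https://example.com/page';

$headers = [
    'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
    'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language: en-US,en;q=0.9',
    'Referer: https://www.google.com/',
];

$ch = curl_init($url);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,  // return the body instead of printing it
    CURLOPT_FOLLOWLOCATION => true,  // follow redirects
    CURLOPT_HTTPHEADER     => $headers,
    CURLOPT_ENCODING       => '',    // accept any encoding curl supports (gzip, etc.)
]);

$body   = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

echo "HTTP status: $status\n";  // always prints 403 for me
```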