I wrote an XML grabber to receive and decode XML files from a website. It mostly works fine, but it always returns this error:
"The remote server returned an e
The URL clearly works from a browser; it just doesn't work from the code. It appears the server is accepting or rejecting requests based on the user agent, probably as a very basic way of blocking crawlers.
To get through, set the request's UserAgent property to a value the server will recognize, for instance:
webRequest.UserAgent = @"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.106 Safari/537.36";
That does seem to work.
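For context, here is a minimal sketch of how that property fits into a complete HttpWebRequest download. The URL is a placeholder, and the `Console.WriteLine` stands in for whatever XML decoding you do afterwards:

```csharp
using System;
using System.IO;
using System.Net;

class XmlGrabber
{
    static void Main()
    {
        // Placeholder URL -- substitute the feed you are actually fetching
        var url = "https://example.com/feed.xml";

        var webRequest = (HttpWebRequest)WebRequest.Create(url);

        // Present a browser-like User-Agent so the server does not
        // reject the request as coming from an unknown client
        webRequest.UserAgent = @"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.106 Safari/537.36";

        using (var response = (HttpWebResponse)webRequest.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            string xml = reader.ReadToEnd();
            Console.WriteLine(xml); // your XML decoding would go here
        }
    }
}
```

If the server still refuses the request, it may be checking other headers too (Accept, Referer, cookies), but a recognizable User-Agent is the usual first hurdle.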