I am working on a link checker. In general I can perform HEAD requests, but some sites seem to disable this verb, so on failure I need to fall back to a GET request as well.
When you do a GET, the server will start sending data from the start of the file to the end, unless you interrupt it. Granted, at 10 Mb/sec that's roughly a megabyte per second, so if the file is small you'll have received the whole thing before you can stop it. You can minimize the amount you actually download in a couple of ways.
First, you can call request.Abort after getting the response and before calling response.Close. That ensures the underlying code doesn't try to download the whole thing before closing the response. Whether this helps on small files, I don't know. I do know that it will prevent your application from hanging when it's trying to download a multi-gigabyte file.
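A minimal sketch of that pattern, assuming HttpWebRequest and a placeholder method name (LinkIsAlive is not a real API, just an example):

```
using System.Net;

static bool LinkIsAlive(string url)
{
    // Sketch only: check the link with a GET, but abort before the body downloads.
    var request = (HttpWebRequest)WebRequest.Create(url);
    request.Method = "GET";

    // GetResponse throws a WebException for 4xx/5xx; a real checker would catch it.
    var response = (HttpWebResponse)request.GetResponse();
    bool alive = response.StatusCode == HttpStatusCode.OK;

    // Abort first so Close doesn't try to drain the rest of the body.
    request.Abort();
    response.Close();
    return alive;
}
```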
The other thing you can do is request a range, rather than the entire file. See the AddRange method and its overloads. You could, for example, write request.AddRange(0, 511), which requests only the first 512 bytes of the file (the single-argument request.AddRange(512) would instead ask for everything from byte 512 to the end). This depends, of course, on the server supporting range requests. Most do. But then, most support HEAD requests, too.
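A sketch of the range approach; checking for 206 Partial Content is my assumption about how you'd confirm the server actually honored the range:

```
using System.Net;

static bool SupportsRangedGet(string url)
{
    var request = (HttpWebRequest)WebRequest.Create(url);
    request.Method = "GET";
    request.AddRange(0, 511);      // sends "Range: bytes=0-511"

    var response = (HttpWebResponse)request.GetResponse();

    // 206 means the server honored the range; 200 means it ignored it and is
    // sending the whole file, in which case Abort still matters.
    bool ranged = response.StatusCode == HttpStatusCode.PartialContent;

    request.Abort();
    response.Close();
    return ranged;
}
```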
You'll probably end up having to write a method that tries things in sequence:
1. A HEAD request.
2. If that fails, a GET with a range header, asking for only the first few hundred bytes.
3. If the server doesn't support ranges either, a plain GET, calling request.Abort after GetResponse returns.
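Something along these lines (the method name, the attempt labels, and the simplified status handling are placeholders, not a drop-in implementation):

```
using System.Net;

static bool CheckLink(string url)
{
    // Try HEAD first, then a ranged GET, then a plain GET aborted once headers arrive.
    foreach (var attempt in new[] { "HEAD", "GET-RANGE", "GET" })
    {
        try
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.Method = attempt == "HEAD" ? "HEAD" : "GET";
            if (attempt == "GET-RANGE")
                request.AddRange(0, 511);   // ask for the first 512 bytes only

            var response = (HttpWebResponse)request.GetResponse();
            request.Abort();                // don't let Close drain the body
            response.Close();
            return true;                    // GetResponse succeeded, so the link resolves
        }
        catch (WebException)
        {
            // 405 Method Not Allowed and friends land here; fall through to the next attempt.
        }
    }
    return false;
}
```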