I once wrote a crawler in .NET. To improve its scalability, I tried to take advantage of the asynchronous APIs of .NET.
System.Net.HttpWebRequest has asynchronous counterparts for its blocking calls (BeginGetResponse/EndGetResponse), so you can have many requests in flight without dedicating a thread to each one.
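For illustration, here's a minimal sketch of fetching one page with the Begin/End pair (the URL is a placeholder):

```csharp
using System;
using System.IO;
using System.Net;

class AsyncFetchExample
{
    static void Main()
    {
        // Placeholder URL for illustration.
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/");
        // BeginGetResponse issues the request without blocking the calling thread;
        // the callback fires on a thread-pool thread when the response arrives.
        request.BeginGetResponse(OnResponse, request);
        Console.ReadLine(); // keep the process alive while the callback runs
    }

    static void OnResponse(IAsyncResult ar)
    {
        var request = (HttpWebRequest)ar.AsyncState;
        using (var response = (HttpWebResponse)request.EndGetResponse(ar))
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            string body = reader.ReadToEnd();
            Console.WriteLine("Fetched {0} characters", body.Length);
        }
    }
}
```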
You obviously want to limit the number of concurrent requests, whether your crawler is synchronous or asynchronous. That limit isn't fixed; it depends on your hardware, your network, and so on.
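One simple way to enforce such a cap is to gate requests behind a Semaphore; this is just a sketch (the cap of 10 is an arbitrary value, not a recommendation). Note also that .NET throttles connections per host via ServicePointManager.DefaultConnectionLimit (default 2 for client apps), so you usually have to raise that as well:

```csharp
using System;
using System.Net;
using System.Threading;

class ThrottledFetcher
{
    // The cap of 10 is an arbitrary illustration value; tune it by measuring.
    static readonly Semaphore Gate = new Semaphore(10, 10);

    static void Fetch(string url)
    {
        Gate.WaitOne(); // block until one of the slots frees up
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.BeginGetResponse(ar =>
        {
            try
            {
                using (var response = (HttpWebResponse)request.EndGetResponse(ar))
                {
                    Console.WriteLine("{0} -> {1}", url, response.StatusCode);
                }
            }
            catch (WebException ex)
            {
                Console.WriteLine("{0} failed: {1}", url, ex.Status);
            }
            finally
            {
                Gate.Release(); // free the slot on success and failure alike
            }
        }, null);
    }

    static void Main()
    {
        // Without this, at most 2 concurrent connections per host would be
        // allowed and the semaphore cap above would be moot for one-host crawls.
        ServicePointManager.DefaultConnectionLimit = 10;
        Fetch("http://example.com/"); // placeholder URL
        Console.ReadLine();
    }
}
```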
I'm not sure what your question is here, as the .NET implementation of HTTP/sockets is "OK". There are some holes (see my post about controlling timeouts properly), but it gets the job done (we have a production crawler that fetches hundreds of pages per second).
BTW, we use synchronous IO, just for convenience's sake. Every task has a thread, and we limit the number of concurrent threads. For thread management, we use the Microsoft CCR (Concurrency and Coordination Runtime).
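CCR itself is a separate library, so here's a sketch of the same thread-per-task pattern using only plain .NET threads and a Semaphore to cap concurrency (the cap of 8 is arbitrary; this is an illustration of the approach, not our actual CCR code):

```csharp
using System;
using System.IO;
using System.Net;
using System.Threading;

class SyncCrawler
{
    // Arbitrary illustration value: at most 8 fetch threads at once.
    static readonly Semaphore Gate = new Semaphore(8, 8);

    static void Crawl(string url)
    {
        Gate.WaitOne(); // wait for a free slot before spawning a new worker
        var worker = new Thread(() =>
        {
            try
            {
                // Plain blocking IO: simple to write, step through, and debug.
                var request = (HttpWebRequest)WebRequest.Create(url);
                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    Console.WriteLine("{0}: {1} chars", url, reader.ReadToEnd().Length);
                }
            }
            finally
            {
                Gate.Release();
            }
        });
        worker.IsBackground = true;
        worker.Start();
    }
}
```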