I understand that with threadless async there are more threads available to service inputs (e.g. an HTTP request), but I don't understand how that doesn't potentially cause thread starvation.
The fact is, the notion that async/await "saves threads" is a mixture of truth and bullshit. It's true that it doesn't generally involve creating more threads just to service a particular task, but it happily glosses over the fact that under the covers the runtime creates a number of threads that sit waiting for events on IO completion ports. The number of completion port threads is roughly the number of processor cores in the system, so on a machine with eight cores, there are around eight threads waiting for IO completion events. In an application that goes nuts with async IO, that's great, but in an application that doesn't do much IO, they're mostly just sitting there eating resources, not "saving threads" by any stretch of the imagination.
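The "pool sized from the core count" idea isn't unique to .NET. As a rough analogue, here is a Python sketch: `concurrent.futures.ThreadPoolExecutor` with no explicit `max_workers` sizes itself from the core count (`min(32, cores + 4)` on Python 3.8+). The sizing formula differs from .NET's, and peeking at the private `_max_workers` attribute is just for demonstration, but the principle is the same: a small, core-count-derived number of threads is created for you whether or not you keep them busy.

```python
import os
from concurrent.futures import ThreadPoolExecutor

cores = os.cpu_count()

# With max_workers omitted, the pool sizes itself from the core count:
# min(32, cores + 4) on Python 3.8+. These threads exist (up to that cap,
# created lazily as work arrives) regardless of how much IO you actually do.
pool = ThreadPoolExecutor()
print(f"{cores} cores -> up to {pool._max_workers} pool threads")
pool.shutdown()
```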
When an async IO operation completes, one of those threads "wakes up" and eventually invokes the continuation on whatever task is relevant. If all of the completion threads are busy executing continuations (perhaps because the developer has made the mistake of doing a lot of CPU-intensive work in continuations) when another IO operation completes, that completion will not be handled until one of the completion threads is freed up and is able to handle it. This is what is referred to as "thread starvation", and it is why it is recommended to start more threads than the number of processor cores in applications that make heavy use of async IO.
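The starvation mechanism can be sketched in Python with a deliberately tiny thread pool standing in for the runtime's completion threads. Nothing here is .NET-specific; the pool size and sleep durations are made up for illustration. Two "continuations" hog both threads with simulated CPU work, so a third, trivially cheap continuation sits in the queue until a thread frees up:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for the runtime's completion-thread pool:
# a small, fixed number of threads services every completion callback.
POOL_SIZE = 2  # imagine a two-core machine

def cpu_heavy_continuation():
    time.sleep(0.5)  # simulates CPU-bound work hogging a completion thread

def quick_continuation():
    return "handled"

with ThreadPoolExecutor(max_workers=POOL_SIZE) as pool:
    # Two completions arrive and their continuations occupy both threads...
    for _ in range(POOL_SIZE):
        pool.submit(cpu_heavy_continuation)
    # ...so a third completion, however trivial, must wait its turn.
    start = time.monotonic()
    f = pool.submit(quick_continuation)
    f.result()
    waited = time.monotonic() - start

print(f"quick continuation waited {waited:.2f}s")  # ~0.5s, not ~0
```

The quick continuation does no work of its own, yet it pays the full cost of whatever is already running on the pool; that queuing delay is the starvation the answer describes.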
The problem with .NET, async IO, and the blanket notion that async IO "saves threads" is that many developers don't understand what's actually happening under the covers, which makes it all too easy to misuse the async/await pattern in ways that starve the completion thread pool.
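The usual remedy is the same in any async runtime: don't run CPU-bound work on the threads that service completions; hand it off somewhere else. As a hedged Python analogue (using `asyncio.to_thread`, available since Python 3.9; the heartbeat task and durations are invented for the demo), compare running CPU work inline on the event loop against offloading it. The heartbeat stands in for other completions that need timely service:

```python
import asyncio
import time

def cpu_work():
    time.sleep(0.5)  # stand-in for a CPU-bound computation

async def heartbeat(ticks, stop):
    # Simulates other completions that need timely service.
    while not stop.is_set():
        ticks.append(time.monotonic())
        await asyncio.sleep(0.05)

async def run(worker):
    ticks = []
    stop = asyncio.Event()
    hb = asyncio.create_task(heartbeat(ticks, stop))
    await worker()
    stop.set()
    await hb
    return len(ticks)

async def inline():
    cpu_work()  # runs on the event loop thread: everything else stalls

async def offloaded():
    await asyncio.to_thread(cpu_work)  # loop stays free for other work

blocked_ticks = asyncio.run(run(inline))
free_ticks = asyncio.run(run(offloaded))
print(f"inline: {blocked_ticks} ticks, offloaded: {free_ticks} ticks")
```

With the work inline, the heartbeat barely runs at all; offloaded, it keeps ticking throughout. The .NET equivalent of the design choice is pushing CPU-bound work onto `Task.Run` (or a dedicated worker) rather than doing it inside an IO continuation.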
In any case, "threadless" is not a term that makes any sense whatsoever here.