I am interested in knowing the limitations/disadvantages of writing applications in GC'd languages. Assuming the GC implementation is optimal, I am wondering whether GC limits the optimization decisions we can make.
My belief is that automatic memory management imposes a glass ceiling on efficiency, but I have no evidence to substantiate that. In particular, today's GC algorithms offer either high throughput or low latency, but not both simultaneously. Production systems like .NET and the HotSpot JVM incur significant pauses precisely because they are optimized for throughput. Specialized GC algorithms like Staccato achieve much lower latency, but at the cost of much lower minimum mutator utilisation and, therefore, lower throughput.
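One crude way to see the latency side of this trade-off for yourself is to run an allocation-heavy loop on HotSpot and record the largest gap between iterations; outliers well above the loop's nominal cost are usually (though not always, since OS scheduling can also interfere) GC pauses. This is only a suggestive sketch, not a rigorous benchmark, and the class and method names below are made up for illustration:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PauseProbe {
    // Returns the largest gap (in microseconds) observed between iterations
    // of an allocation-heavy loop. With a throughput-oriented collector,
    // the worst-case gap is typically dominated by GC pauses.
    static long measureMaxPauseMicros() {
        Deque<byte[]> retained = new ArrayDeque<>();
        long maxGapMicros = 0;
        long prev = System.nanoTime();
        for (int i = 0; i < 500_000; i++) {
            retained.addLast(new byte[1024]);                      // allocate steadily
            if (retained.size() > 10_000) retained.removeFirst();  // churn old objects
            long now = System.nanoTime();
            maxGapMicros = Math.max(maxGapMicros, (now - prev) / 1_000);
            prev = now;
        }
        return maxGapMicros;
    }

    public static void main(String[] args) {
        System.out.println("max inter-iteration gap: " + measureMaxPauseMicros() + " us");
    }
}
```

Running this under different collectors (e.g. `-XX:+UseParallelGC` versus a low-pause collector) should show the trade-off directly: the throughput-oriented collector tends to finish the loop faster but with a larger worst-case gap.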