Note: I may have chosen the wrong word in the title; perhaps I'm really talking about polynomial growth here. See the benchmark.
If James and the others are correct that the types are being created at runtime, then performance is limited by the speed of type creation. So why is type creation exponentially slow? I think that, by definition, the types differ from one another, so every new type produces a slightly different pattern of memory allocations and deallocations. The speed is then limited by how efficiently the GC manages memory. There are some aggressive allocation sequences that will slow down any memory manager, no matter how good it is: the GC and the allocator spend more and more time searching for suitably sized pieces of free memory for each new allocation.
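To make that concrete, here is a minimal Python sketch (the question's actual language isn't shown here, so Python and the `make_types` helper and counts are purely illustrative assumptions) that creates many distinct types at runtime and times how the cost grows from batch to batch:

```python
import time
import tracemalloc

def make_types(n):
    """Create n distinct classes at runtime; each one is a unique object
    with its own attribute dict that the allocator and GC must track."""
    created = []
    for i in range(n):
        # type(name, bases, namespace) builds a brand-new class object,
        # so none of these allocations can be shared or reused.
        created.append(type(f"Generated_{i}", (object,), {"index": i}))
    return created

if __name__ == "__main__":
    tracemalloc.start()
    for n in (1_000, 10_000, 50_000):
        start = time.perf_counter()
        make_types(n)
        elapsed = time.perf_counter() - start
        _, peak = tracemalloc.get_traced_memory()
        print(f"n={n:>6}: {elapsed:6.3f} s, peak traced memory {peak / 1e6:7.1f} MB")
        tracemalloc.reset_peak()
```

If the per-type cost were constant, the timings would scale linearly with n; anything steeper points at the allocator/GC doing increasing amounts of bookkeeping per new type.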
Answer:
Because you found one very aggressive sequence, one that fragments memory so badly and so quickly that the GC simply cannot keep up.
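For intuition, the kind of sequence meant here looks roughly like the sketch below (a made-up illustration, not the benchmark from the question): blocks of varying sizes are allocated and every other one is kept alive, so free memory degenerates into many differently-sized holes that the allocator has to search through on each new request.

```python
import random

def fragmenting_sequence(steps=50_000):
    """A hostile allocation pattern: allocate blocks of varying sizes and
    keep every other one alive, so freed memory turns into many
    differently-sized holes between the survivors."""
    survivors = []
    for i in range(steps):
        block = bytearray(random.randrange(64, 8192))  # varying block sizes
        if i % 2 == 0:
            survivors.append(block)  # kept alive, leaving a hole beside it
        # odd-indexed blocks go out of scope immediately and are freed
    return survivors
```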
The lesson to take from this is that really fast real-world apps (for example, algorithmic stock trading apps) are very plain pieces of straight-line code built on static data structures that are allocated once, at startup, for the whole run of the application.
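As an illustration of that style (a minimal sketch with made-up function names and sizes, not taken from any real trading system), compare allocating a working buffer on every tick against allocating it once up front and reusing it:

```python
import array
import time

N = 1_000_000      # size of the working buffer
TICKS = 200        # number of processing iterations

def per_tick_allocation():
    """Build a fresh buffer on every tick: constant allocator/GC churn."""
    total = 0.0
    for tick in range(TICKS):
        prices = [0.0] * N            # new heap object each iteration
        prices[tick % N] = 1.0
        total += prices[tick % N]
    return total

def preallocated():
    """Allocate one flat buffer up front and reuse it for the whole run."""
    prices = array.array("d", [0.0]) * N   # single allocation, fixed size
    total = 0.0
    for tick in range(TICKS):
        prices[tick % N] = 1.0
        total += prices[tick % N]
    return total

if __name__ == "__main__":
    for fn in (per_tick_allocation, preallocated):
        start = time.perf_counter()
        fn()
        print(f"{fn.__name__:20s} {time.perf_counter() - start:.3f} s")
```

The second version does the same work but gives the memory manager nothing to do after startup, which is exactly the "allocate once, then run straight-line code" pattern described above.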