I'm curious as to why Haskell implementations use a GC.
I can't think of a case where GC would be necessary in a pure language. Is it just an optimization to reduce copying, or is it actually necessary?
Let's take a trivial example. Given this call

```haskell
f (x, y)
```
you need to allocate the pair (x, y) somewhere before calling f. When can you deallocate that pair? You have no idea. It cannot be deallocated when f returns, because f might have stored the pair in a data structure (e.g., f p = [p]), so the pair may have to outlive the call to f. Now, say the pair was put in a list: can whoever takes the list apart deallocate the pair? No, because the pair might be shared (e.g., let p = (x, y) in (f p, p)). So it's really difficult to tell when the pair can be deallocated.
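To make those two failure modes concrete, here's a minimal, self-contained sketch built from the examples above (the names f and g are just illustrative):

```haskell
-- f stores its argument, so the pair outlives the call:
-- freeing (x, y) when f returns would leave a dangling reference.
f :: (Int, Int) -> [(Int, Int)]
f p = [p]

-- The pair is shared: the list returned by f and the second
-- component of the result both point at the same allocation,
-- so neither consumer alone may safely free it.
g :: Int -> Int -> ([(Int, Int)], (Int, Int))
g x y = let p = (x, y) in (f p, p)
```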
The same holds for almost all allocations in Haskell. That said, it's possible to have an analysis (region analysis) that gives an upper bound on lifetimes. This works reasonably well in strict languages, but less so in lazy languages, because lazy implementations tend to do a lot more mutation than strict ones under the hood (e.g., overwriting a thunk with its value once it has been forced).
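As a rough illustration of why call-by-need implies mutation, here's a toy model of a thunk as a mutable cell that is overwritten with its result the first time it is forced. This is a sketch for intuition only; Cell, newThunk, and force are made-up names, not GHC's actual representation:

```haskell
import Data.IORef

-- A toy model of a call-by-need thunk: either a pending
-- computation or a value that has already been computed.
data Cell a = Unevaluated (IO a) | Evaluated a

newThunk :: IO a -> IO (IORef (Cell a))
newThunk act = newIORef (Unevaluated act)

force :: IORef (Cell a) -> IO a
force ref = do
  cell <- readIORef ref
  case cell of
    Evaluated v     -> pure v              -- already computed, no work
    Unevaluated act -> do
      v <- act                             -- compute at most once
      writeIORef ref (Evaluated v)         -- mutate the cell in place
      pure v
```

Forcing the same thunk twice runs the computation only once; the second force just reads the updated cell. It's exactly this pervasive in-place update that makes static lifetime analyses harder in lazy languages.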
So I'd like to turn the question around: why do you think Haskell does not need a GC? How would you suggest memory allocation and deallocation be done?