I have long been wondering why lazy evaluation is useful. I have yet to have anyone explain it to me in a way that makes sense; mostly it ends up boiling down to "trust me".
Lazy evaluation is a poor man's equational reasoning (which, ideally, would let us deduce properties of code from the properties of the types and operations involved).
Example where it works quite well: sum . take 10 $ [1..10000000000]. We don't mind this being reduced to a sum of 10 numbers, instead of just one direct and simple numeric calculation. Without lazy evaluation this would create a gigantic list in memory just to use its first 10 elements; it would certainly be very slow, and might cause an out-of-memory error.
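Here is a minimal runnable sketch of that first example (the Integer annotation is mine, just to make the element type explicit):

```haskell
-- 'take 10' demands elements one at a time, so only the first 10 numbers
-- of the enumeration are ever built; the remaining ~10^10 elements never
-- exist in memory.
--
--   sum . take 10 $ [1..10000000000]
--     ~> sum (1 : take 9 [2..10000000000])
--     ~> ...
--     ~> sum [1,2,3,4,5,6,7,8,9,10]
--     ~> 55
main :: IO ()
main = print (sum . take 10 $ [1 .. 10000000000 :: Integer])  -- prints 55
```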
Example where it's not as great as we'd like: sum . take 1000000 . drop 500 $ cycle [1..20]. This will actually sum the 1 000 000 numbers, even if in a loop rather than in a list (i.e. after the deforestation optimization); ideally it would be reduced to just one direct numeric calculation, with a few conditionals and a few formulas, which would be a lot better than summing up 1 000 000 numbers.
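For comparison, here is a hedged sketch of the kind of "one direct numeric calculation" meant above; the helper names (cycleSumTo, directSum) and the closed form are mine, not something laziness or GHC derives for you:

```haskell
-- Sum of the first m elements of   cycle [1..20]   in O(1):
-- q full cycles (each summing to 210 = sum [1..20]) plus a partial prefix.
cycleSumTo :: Integer -> Integer
cycleSumTo m = q * 210 + r * (r + 1) `div` 2
  where (q, r) = m `divMod` 20

-- sum . take n . drop d $ cycle [1..20],  computed directly.
directSum :: Integer -> Integer -> Integer
directSum d n = cycleSumTo (d + n) - cycleSumTo d

main :: IO ()
main = do
  print (directSum 500 1000000)                                      -- 10500000, O(1)
  print (sum . take 1000000 . drop 500 $ cycle [1 .. 20 :: Integer]) -- same answer, 10^6 additions
```

Lazy evaluation alone won't discover this reduction; it only saves you from materializing the intermediate list.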
Another thing is that it makes it possible to code in the tail recursion modulo cons style, and it just works.
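A small illustration of that style (myMap is just the usual definition of map, renamed here): the recursive call sits under the (:) constructor, and laziness means the consumer drives the recursion, so there is no stack build-up even on an infinite input.

```haskell
-- "Tail recursion modulo cons" for free: the cons cell is returned
-- immediately, and the recursive call is only forced when (and if) the
-- caller looks at the tail.
myMap :: (a -> b) -> [a] -> [b]
myMap _ []       = []
myMap f (x : xs) = f x : myMap f xs   -- guarded recursion

main :: IO ()
main = print (take 5 (myMap (* 2) [1 :: Integer ..]))  -- [2,4,6,8,10]
```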
cf. related answer.