I have long wondered why lazy evaluation is useful. No one has yet explained it to me in a way that makes sense; mostly it ends up boiling down to "trust me".
It can boost efficiency. This is the obvious-looking benefit, but it's not actually the most important one. (Note that laziness can kill efficiency too, and this is not immediately obvious: by storing up lots of unevaluated thunks rather than computing results right away, a program can use up a huge amount of RAM.)
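A minimal sketch of that memory trap, assuming GHC's standard libraries (the names `lazySum` and `strictSum` are just illustrative): the lazy left fold builds a long chain of unevaluated `(+)` thunks before anything is added, while the strict variant reduces as it goes.

```haskell
import Data.List (foldl')

lazySum :: [Int] -> Int
lazySum = foldl (+) 0     -- builds up thunks: (((0+1)+2)+3)+...

strictSum :: [Int] -> Int
strictSum = foldl' (+) 0  -- forces the accumulator at each step

main :: IO ()
main = do
  print (strictSum [1 .. 1000000])  -- runs in constant space
  print (lazySum   [1 .. 1000000])  -- may need a huge amount of memory
                                    -- (depending on optimisation settings)
```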
It lets you define flow-control constructs in normal user-level code, rather than having them hard-coded into the language. (E.g., Java has `for` loops; Haskell has a `for` function. Java has exception handling; Haskell has various kinds of exception monad. C# has `goto`; Haskell has the continuation monad...)
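As a rough sketch of the idea (the names `if'` and `while` below are invented for illustration, not standard library functions): because arguments are not evaluated until they are needed, an ordinary function can behave like a built-in control construct. In a strict language the recursive call would be evaluated before `if'` could choose a branch, and the loop would never terminate.

```haskell
-- A user-level conditional: only the chosen branch is ever evaluated.
if' :: Bool -> a -> a -> a
if' True  t _ = t
if' False _ e = e

-- A user-level "while loop" over a value, built from plain recursion.
while :: (a -> Bool) -> (a -> a) -> a -> a
while p step x = if' (p x) (while p step (step x)) x

main :: IO ()
main = print (while (< 100) (* 2) 1)  -- keeps doubling until >= 100: prints 128
```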
It lets you decouple the algorithm for generating data from the algorithm for deciding how much data to generate. You can write one function that produces a notionally infinite list of results, and another function that consumes as much of that list as it decides it needs. Better still, you can have five generator functions and five consumer functions and efficiently produce any combination, instead of manually coding 5 x 5 = 25 functions that combine both actions at once. We all know decoupling is a good thing.
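A small sketch of that decoupling (the generator and consumer names here are made up for the example): each generator produces a notionally infinite list, each consumer decides how much of it to demand, and any pairing works without writing a fused function for each combination.

```haskell
naturals, squares :: [Integer]
naturals = [1 ..]                 -- generator 1: all the naturals
squares  = map (^ 2) naturals     -- generator 2: all the squares

firstTen :: [Integer] -> [Integer]
firstTen = take 10                -- consumer 1: just the first ten elements

untilBig :: [Integer] -> [Integer]
untilBig = takeWhile (< 1000)     -- consumer 2: everything below 1000

main :: IO ()
main = do
  print (firstTen naturals)   -- [1..10]
  print (untilBig squares)    -- squares below 1000
  print (firstTen squares)    -- any generator works with any consumer
```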
It more or less forces you to design a pure functional language. It's always tempting to take shortcuts, but in a lazy language the slightest impurity makes your code wildly unpredictable, which is a strong deterrent against cutting corners.
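To give a flavour of that unpredictability, here is a contrived sketch using `Debug.Trace.trace`, which deliberately smuggles a side effect into pure code (the name `noisy` is just for illustration): whether, when, and how often the effect fires depends entirely on what the program happens to demand.

```haskell
import Debug.Trace (trace)

noisy :: Int
noisy = trace "noisy was evaluated" (2 + 2)

main :: IO ()
main = do
  print (length [noisy, noisy])  -- no message: length never forces the elements
  print (noisy + noisy)          -- the message appears once (sharing), not twice
```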