Lazy Evaluation and Time Complexity

走了就别回头了 2020-12-04 14:05

I was looking around Stack Overflow at Non-Trivial Lazy Evaluation, which led me to Keegan McAllister's presentation Why learn Haskell. In slide 8, he shows the minimum function, defined as minimum = head . sort.

7 Answers
  •  不思量自难忘°
    2020-12-04 14:50

    You've gotten a good number of answers that tackle the specifics of head . sort. I'll just add a couple of more general statements.

    With eager evaluation, the computational complexities of various algorithms compose in a simple manner. For example, the least upper bound (LUB) for f . g is at most the sum of the LUBs for f and g. Thus you can treat f and g as black boxes and reason exclusively in terms of their LUBs.

    With lazy evaluation, however, f . g can have a LUB better than the sum of f and g's LUBs. You can't use black-box reasoning to prove the LUB; you must analyze the implementations and their interaction.
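The head . sort case is the standard illustration of this. Here is a minimal sketch using a naive selection sort (selectSort and lazyMinimum are names chosen here for illustration, not the library's Data.List.sort): demanding only the head of the lazily built result forces a single minimum pass, so the composition runs in O(n) even though the full sort is O(n²).

```haskell
import Data.List (delete)

-- A naive selection sort: O(n^2) if the whole result is demanded.
selectSort :: Ord a => [a] -> [a]
selectSort []             = []
selectSort xs             = m : selectSort (delete m xs)
  where m = minimum xs

-- Under lazy evaluation, 'head' forces only the first cons cell, i.e.
-- one 'minimum' pass over the input: O(n) total. The recursive call
-- that would sort the rest of the list is never evaluated.
lazyMinimum :: Ord a => [a] -> a
lazyMinimum = head . selectSort
```

So lazyMinimum [3,1,2] performs one O(n) scan and returns 1; the sum-of-LUBs bound (O(n²) + O(1)) wildly overestimates the composition.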

    Hence the often-cited fact that the complexity of lazy evaluation is much harder to reason about than that of eager evaluation. Just think about the following. Suppose you're trying to improve the asymptotic performance of a piece of code whose form is f . g. In an eager language, there's an obvious strategy you can follow: pick the more complex of f and g, and improve that one first. If you succeed at that, you succeed at the f . g task.

    In a lazy language, on the other hand, you can have these situations:

    • You improve the more complex of f and g, but f . g doesn't improve (or even gets worse).
    • You can improve f . g in ways that don't help (or even worsen) f or g.
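An extreme case of this breakdown of black-box reasoning: composing with take gives the pipeline a finite cost even though one stage, taken alone, would run forever. A small sketch (firstSquares is a hypothetical name for illustration):

```haskell
-- 'map (^ 2)' applied to the infinite list [1..] never terminates on
-- its own, yet composing it with 'take n' yields an O(n) pipeline:
-- laziness demands only the first n elements of each stage.
firstSquares :: Int -> [Integer]
firstSquares n = take n (map (^ 2) [1 ..])
```

For example, firstSquares 4 evaluates to [1,4,9,16] in O(n) work, while no finite LUB exists for the map stage by itself.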
