Optimization of Function Calls in Haskell

Submitted by 醉酒当歌 on 2019-12-22 02:52:30

Question


Not sure what exactly to google for this question, so I'll post it directly to SO:

  1. Variables in Haskell are immutable
  2. Pure functions return the same value for the same arguments

From these two points it's possible to deduce that if you call somePureFunc somevar1 somevar2 in your code twice, it only makes sense to compute the value during the first call. The resulting value could be stored in some sort of giant hash table (or something like that) and looked up during subsequent calls to the function (a minimal sketch follows the questions below). I have two questions:

  1. Does GHC actually do this kind of optimization?
  2. If it does, what is the behaviour in the case when it's actually cheaper to repeat the computation than to look up the results?
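
For concreteness, here is a minimal sketch of the scenario (somePureFunc and the arguments are just placeholders, not real code from anywhere):

-- Placeholder pure function and arguments, only to make the scenario concrete.
somePureFunc :: Int -> Int -> Int
somePureFunc x y = x ^ 2 + y ^ 2

main :: IO ()
main = do
  print (somePureFunc 3 4)  -- computed here
  print (somePureFunc 3 4)  -- same arguments again: could GHC reuse the first result?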

Thanks.


Answer 1:


GHC doesn't do automatic memoization. See the GHC FAQ on Common Subexpression Elimination (not exactly the same thing, but my guess is that the reasoning is the same) and the answer to this question.

If you want to do memoization yourself, then have a look at Data.MemoCombinators.
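
As a minimal sketch of what that looks like (adapted from the package's own documentation; assumes the data-memocombinators package is available, and the fib/fib' names are just illustrative):

import qualified Data.MemoCombinators as Memo

-- Memoized Fibonacci: Memo.integral caches results keyed on the argument.
-- The recursive calls go through the memoized 'fib', not the raw 'fib'',
-- so each subproblem is computed at most once.
fib :: Integer -> Integer
fib = Memo.integral fib'
  where
    fib' 0 = 0
    fib' 1 = 1
    fib' n = fib (n - 1) + fib (n - 2)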

Another way to get memoization is to use laziness. For example, you can define a list in terms of itself. The definition below gives an infinite list of all the Fibonacci numbers (taken from the Haskell Wiki):

fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

Because the list is realized lazily, each element is computed at most once, which is similar to having precomputed (memoized) the previous values. E.g. fibs !! 10 forces evaluation of the elements up to index 10, so a subsequent fibs !! 11 is much faster.
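
The same trick extends from a list to a function. A common idiom (a sketch, not part of the original answer) is to back an Int-indexed function with a lazily evaluated list, so each result is stored after its first computation:

-- memoFib is a function, but its results live in the lazy list 'table',
-- so each index is computed at most once and then shared.
memoFib :: Int -> Integer
memoFib = (table !!)
  where
    table = map fib [0 ..]
    fib 0 = 0
    fib 1 = 1
    fib n = memoFib (n - 1) + memoFib (n - 2)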




Answer 2:


Saving every function call result (cf. hash consing) is valid, but it can be a giant space leak and in general it also slows your program down a lot: it often costs more to check whether a result is already in the table than to simply recompute it.
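
When you know a particular result will be reused, the cheap alternative is explicit sharing with a let binding rather than a global table. A minimal sketch (sumTwice is an illustrative name, not from the original answer):

-- Explicit sharing: 's' names the result once; both uses refer to the
-- same thunk, so the sum is computed at most once, with no table lookup.
sumTwice :: [Int] -> Int
sumTwice xs = let s = sum xs in s + s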



Source: https://stackoverflow.com/questions/6086836/optimization-of-function-calls-in-haskell
