Memoization

Memoize a curried function

Submitted by 笑着哭i on 2019-12-03 01:18:55
const f = (arg1) => (arg2) => { /* returns something */ }

Is it possible to memoize f with regard to the 2 arguments, namely:

    f(1)(2);
    f(1)(3); // Cache not hit
    f(4)(2); // Cache not hit
    f(1)(2); // Cache hit

Nina Scholz: You could take a Map as cache and use nested maps for all following arguments. This cache works for an arbitrary count of arguments and reuses the values from former calls. It works by taking a curried function and an optional Map. If the map is not supplied, a new map is created, which serves as the base cache for all other calls of the returned closure or the final result. The
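The original answer is in JavaScript; a minimal sketch of the same nested-cache idea in Python (the helper name memoize_curried is illustrative, not from the answer):

```python
def memoize_curried(f):
    """Memoize a curried two-argument function f(a)(b) using nested dicts.

    The outer dict maps the first argument to an inner dict; the inner
    dict maps the second argument to the cached result.
    """
    cache = {}

    def outer(a):
        inner_cache = cache.setdefault(a, {})

        def inner(b):
            if b not in inner_cache:
                inner_cache[b] = f(a)(b)  # cache miss: compute and store
            return inner_cache[b]

        return inner

    return outer


calls = []
f = lambda a: lambda b: calls.append((a, b)) or a + b
g = memoize_curried(f)
g(1)(2); g(1)(3); g(4)(2); g(1)(2)
# calls == [(1, 2), (1, 3), (4, 2)]: the repeated g(1)(2) was a cache hit
```

Using setdefault on the first argument means every call sharing that argument reuses the same inner dict, mirroring the nested-Map structure the answer describes.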

What's the difference between recursion, memoization & dynamic programming? [duplicate]

Submitted by 匆匆过客 on 2019-12-03 00:12:05
Possible Duplicate: Dynamic programming and memoization: top-down vs bottom-up approaches

I have gone through a lot of articles on this but can't seem to make sense of it. At times recursion and dynamic programming look the same, and at other times memoization and dynamic programming look alike. Can someone explain to me what the difference is? P.S. It would also be helpful if you could point me to some code using the three approaches on
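For the kind of side-by-side code the asker requests, the Fibonacci numbers are the usual vehicle; a sketch of the three approaches in Python:

```python
def fib_rec(n):
    # Plain recursion: recomputes the same subproblems, exponential time.
    return n if n < 2 else fib_rec(n - 1) + fib_rec(n - 2)


def fib_memo(n, memo=None):
    # Memoization (top-down dynamic programming): the same recursion,
    # but each subproblem is computed once and cached in a dict.
    if memo is None:
        memo = {}
    if n not in memo:
        memo[n] = n if n < 2 else fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]


def fib_dp(n):
    # Bottom-up dynamic programming: fill in results from small to large,
    # keeping only the last two values of the table.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

All three return the same values; they differ in whether subproblem results are recomputed (recursion), cached on demand (memoization), or tabulated in a fixed order (bottom-up DP).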

Should I use recursion or memoization for an algorithm?

Submitted by 天大地大妈咪最大 on 2019-12-02 23:35:20
If I have a choice between recursion and memoization to solve a problem, which should I use? In other words, if both are viable solutions, in that they give the correct output and can be reasonably expressed in the code I'm using, when would I use one over the other?

I pick memoization because it's usually possible to access more heap memory than stack memory. That is, if your algorithm runs on a lot of data, in most languages you'll run out of stack space recursing before you run out of heap space saving data.

They are not mutually exclusive. You can use them both. Personally, I'd
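The stack-versus-heap point can be made concrete in Python, where the interpreter enforces an explicit recursion limit (factorial is just a stand-in problem here):

```python
import sys

def fact_rec(n):
    # Recursive: each call consumes a stack frame.
    return 1 if n == 0 else n * fact_rec(n - 1)

def fact_iter(n):
    # Iterative with an explicit table on the heap: no deep call stack.
    table = [1]
    for i in range(1, n + 1):
        table.append(table[-1] * i)
    return table[n]

try:
    # Far deeper than the call stack allows: Python raises RecursionError.
    fact_rec(10 * sys.getrecursionlimit())
    blew_stack = False
except RecursionError:
    blew_stack = True
# fact_iter on the same input is limited only by heap memory and time.
```

The two functions compute the same values for small inputs; only the resource they exhaust at scale differs, which is the trade-off the answer describes.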

Efficient memoization in Python

Submitted by 守給你的承諾、 on 2019-12-02 20:42:41
I have a task to solve, and the most important part at the moment is to make the script as time-efficient as possible. One of the elements I am trying to optimize is memoization within one of the functions. So my question is: which of the following 3-4 methods is the most efficient/fastest way to implement memoization in Python? I have provided code only as an example - if one of the methods is more efficient, but not in the case I mentioned, please share what you know.

Solution 1 - using a mutable variable from the outer scope. This solution is often shown as the example memoization, but I
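The question's numbered solutions are not reproduced in this excerpt, but the usual candidates look roughly like this (a sketch using a square function as a stand-in; functools.lru_cache is the standard-library option):

```python
from functools import lru_cache

# Approach 1: a mutable cache captured from the enclosing scope.
def make_memo_square():
    cache = {}
    def square(x):
        if x not in cache:
            cache[x] = x * x
        return cache[x]
    return square

# Approach 2: a dict stored as an attribute on the function itself.
def square_attr(x):
    cache = square_attr.cache
    if x not in cache:
        cache[x] = x * x
    return cache[x]
square_attr.cache = {}

# Approach 3: the standard-library decorator (C-accelerated in CPython 3).
@lru_cache(maxsize=None)
def square_lru(x):
    return x * x
```

Which is fastest depends on the Python version and call pattern, so timing with timeit on the real workload is the only reliable comparison.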

How to memoize **kwargs?

Submitted by 杀马特。学长 韩版系。学妹 on 2019-12-02 20:37:52
I haven't seen an established way to memoize a function that takes keyword arguments, i.e. something of the form def f(*args, **kwargs), since typically a memoizer has a dict to cache results for a given set of input parameters, and kwargs is a dict and hence unhashable. I have tried, following discussions here, using (args, frozenset(kwargs.items())) as the key into the cache dict, but this only works if the values in kwargs are hashable. Furthermore, as pointed out in the answers below, frozenset is not an ordered data structure. Therefore this solution might be safer: (args, tuple(sorted(kwargs
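A sketch of the sorted-tuple approach the question arrives at, wrapped up as a decorator (still limited to hashable argument values, as the question notes):

```python
from functools import wraps

def memoize(f):
    """Memoize f(*args, **kwargs) by sorting kwargs into a canonical tuple.

    Sorting the items gives a deterministic key regardless of the order
    in which the keyword arguments were passed; it still requires all
    argument values (and keyword names) to be hashable.
    """
    cache = {}

    @wraps(f)
    def wrapper(*args, **kwargs):
        key = (args, tuple(sorted(kwargs.items())))
        if key not in cache:
            cache[key] = f(*args, **kwargs)
        return cache[key]

    return wrapper


calls = []

@memoize
def g(a, b=0, c=0):
    calls.append((a, b, c))
    return a + b + c
```

With this, g(1, b=2, c=3) and g(1, c=3, b=2) share one cache entry; note that g(1, 2, 3), passed positionally, produces a different key even though the call is equivalent, a known limitation of keying on the raw call shape.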

Efficient table for Dynamic Programming in Haskell

Submitted by ≡放荡痞女 on 2019-12-02 17:46:46
I've coded up the 0-1 Knapsack problem in Haskell. I'm fairly proud of the laziness and level of generality achieved so far. I start by providing functions for creating and dealing with a lazy 2d matrix.

    mkList f = map f [0..]
    mkTable f = mkList (\i -> mkList (\j -> f i j))

    tableIndex table i j = table !! i !! j

I then make a specific table for a given knapsack problem:

    knapsackTable = mkTable f
      where f 0 _ = 0
            f _ 0 = 0
            f i j | ws!!i > j = leaveI
                  | otherwise = max takeI leaveI
              where takeI  = tableIndex knapsackTable (i-1) (j-(ws!!i)) + vs!!i
                    leaveI = tableIndex knapsackTable (i-1) j
    --
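For comparison, the same recurrence can be sketched in Python with explicit memoization; the lazy table the Haskell version gets for free becomes an lru_cache here (the ws/vs data below is made-up example input, with index 0 unused as in the table above):

```python
from functools import lru_cache

ws = [0, 2, 3, 4, 5]   # item weights; index 0 is a dummy entry
vs = [0, 3, 4, 5, 6]   # item values

@lru_cache(maxsize=None)
def knapsack(i, j):
    # Best total value using items 1..i with remaining capacity j,
    # matching the Haskell table's f i j.
    if i == 0 or j == 0:
        return 0
    leave = knapsack(i - 1, j)          # leaveI: skip item i
    if ws[i] > j:
        return leave                    # item i does not fit
    take = knapsack(i - 1, j - ws[i]) + vs[i]  # takeI: include item i
    return max(take, leave)
```

For capacity 5 the best choice is items 1 and 2 (weights 2+3, values 3+4), so knapsack(4, 5) evaluates to 7.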

Which Ruby memoize pattern does ActiveSupport::Memoizable refer to?

Submitted by 岁酱吖の on 2019-12-02 17:15:17
So in Rails 3.2, ActiveSupport::Memoizable has been deprecated. The message reads:

    DEPRECATION WARNING: ActiveSupport::Memoizable is deprecated and will be removed in future releases, simply use Ruby memoization pattern instead.

It refers to the "Ruby memoization pattern" (singular) as if there's one pattern we should all know and refer to... I presume they mean something like:

    def my_method
      @my_method ||= # ... go get the value
    end

or:

    def my_method
      return @my_method if defined?(@my_method)
      @my_method = # ... go get the value
    end

Is there something else I've missed? Here is the commit (and

How do you make a generic memoize function in Haskell?

Submitted by 三世轮回 on 2019-12-02 17:13:10
I've seen the other post about this, but is there a clean way of doing this in Haskell? As a second part, can it also be done without making the function monadic?

This largely follows http://www.haskell.org/haskellwiki/Memoization. You want a function of type (a -> b). If it doesn't call itself, then you can just write a simple wrapper that caches the return values. The best way to store this mapping depends on what properties of a you can exploit. Ordering is pretty much a minimum. With integers you can construct an infinite lazy list or tree holding the values.

    type Cacher a b = (a -> b) -> a
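The answer's two cases, a simple wrapper for a function that doesn't call itself, and something more for one that does, can be illustrated in Python (the open-recursion trick below stands in for the knot-tying the Haskell wiki does lazily; names are illustrative):

```python
def memoize(f):
    # Works for any one-argument function with hashable arguments; if f
    # does not call itself, this wrapper is all the memoization needed.
    cache = {}
    def wrapped(x):
        if x not in cache:
            cache[x] = f(x)
        return cache[x]
    return wrapped

# For a function that *does* call itself, write it in "open recursion"
# style, taking its own recursive reference as a parameter:
def fib_open(rec, n):
    return n if n < 2 else rec(n - 1) + rec(n - 2)

def fix_memo(f):
    # Tie the knot through the cache, so recursive calls are memoized too.
    cache = {}
    def g(x):
        if x not in cache:
            cache[x] = f(g, x)
        return cache[x]
    return g

fib = fix_memo(fib_open)
```

Wrapping a self-recursive function with plain memoize would only cache top-level calls; fix_memo routes the inner calls through the cache as well, which is the distinction the answer is drawing.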

Recursion with versus without memoization

Submitted by 孤街浪徒 on 2019-12-02 16:16:22
I got homework in school to calculate the Catalan numbers with recursion.

1st, without memoization:

    def catalan_rec(n):
        res = 0
        if n == 0:
            return 1
        else:
            for i in range(n):
                res += (catalan_rec(i)) * (catalan_rec(n-1-i))
            return res

2nd, with:

    def catalan_mem(n, memo = None):
        if memo == None:
            memo = {0: 1}
        res = 0
        if n not in memo:
            for i in range(n):
                res += (catalan_mem(i)) * (catalan_mem(n-1-i))
            memo[n] = res
        return memo[n]

The weirdest thing happened to me: the memoized version takes twice as much time! When
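Note that the memoized version above never passes memo into its recursive calls, so every inner call starts a fresh cache and the function degenerates to plain recursion plus dict overhead. A sketch that threads the same cache through the recursion:

```python
def catalan_mem(n, memo=None):
    # Catalan numbers via the recurrence C(n) = sum C(i)*C(n-1-i).
    if memo is None:
        memo = {0: 1}
    if n not in memo:
        res = 0
        for i in range(n):
            # Pass the same memo dict down so results are shared.
            res += catalan_mem(i, memo) * catalan_mem(n - 1 - i, memo)
        memo[n] = res
    return memo[n]
```

With the cache shared, each C(i) is computed once, giving quadratic rather than exponential work.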

What's the difference between recursion, memoization & dynamic programming? [duplicate]

Submitted by 拟墨画扇 on 2019-12-02 13:57:52
Possible Duplicate: Dynamic programming and memoization: top-down vs bottom-up approaches

I have gone through a lot of articles on this but can't seem to make sense of it. At times recursion and dynamic programming look the same, and at other times memoization and dynamic programming look alike. Can someone explain to me what the difference is? P.S. It would also be helpful if you could point me to some code using the three approaches on the same problem. (E.g. the Fibonacci series problem; I think every article I read used recursion but referred to it as dynamic programming.)

Consider calculating the