Replacing Functions with Table Lookups

Brian has just shown that functions are data too. A function in general is just a mapping from one set to another: y = f(x) maps the set of inputs {x} to the set of outputs {y}, written f: X -> Y. Tables are mappings as well: [x1, x2, ..., xn] -> [y1, y2, ..., yn].

If a function operates on a finite set (this is the case in programming), then it can be replaced with a table that represents that mapping. As Brian mentioned, every imperative programmer goes through the phase of realizing that functions can be replaced with table lookups, if only for performance reasons.

But that doesn't mean all functions can easily, or should, be replaced with tables. It only means that you theoretically can do that for every function. So the conclusion is that functions are data, because tables are (in the context of programming, of course).
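As a concrete illustration (my own sketch, not something from Brian's talk), here is a JavaScript function on a finite domain and the table that could replace it:

// A function defined on the finite domain {0, ..., 6} (days of the week)...
function isWeekend(day) {
  return day === 0 || day === 6;
}

// ...and the very same mapping written out as data:
var isWeekendTable = [true, false, false, false, false, false, true];

console.log(isWeekend(6));      // true
console.log(isWeekendTable[6]); // true -- same mapping, precomputed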

There is a lovely trick in Mathematica that creates a table as a side effect of evaluating function-calls-as-rewrite-rules. Consider the classic slow Fibonacci:

fib[1] = 1
fib[2] = 1
fib[n_] := fib[n-1] + fib[n-2]

The first two lines create table entries for the inputs 1 and 2. This is exactly the same as saying

var fibTable = {};
fibTable[1] = 1;
fibTable[2] = 1;

in JavaScript. The third line of Mathematica says "please install a rewrite rule that will replace any occurrence of fib[n_], after substituting the pattern variable n_ with the actual argument of the occurrence, with fib[n-1] + fib[n-2]." The rewriter will iterate this procedure, and eventually produce the value of fib[n] after an exponential number of rewrites. This is just like the recursive function-call form that we get in JavaScript with

function fib(n) {
  // Use the table entry if one exists; otherwise recurse.
  var result = fibTable[n] || (fib(n - 1) + fib(n - 2));
  return result;
}

Notice it checks the table first for the two values we have explicitly stored before making the recursive calls. The Mathematica evaluator does this check automatically, because the order of presentation of the rules is important -- Mathematica checks the more specific rules first and the more general rules later. That's why Mathematica has two assignment forms, = and :=. The former is for specific rules whose right-hand sides can be evaluated at the time the rule is defined; the latter is for general rules whose right-hand sides must be evaluated when the rule is applied.

Now, in Mathematica, if we say

fib[4]

it gets rewritten to

fib[3] + fib[2]

then to

fib[2] + fib[1] + 1

then to

1 + 1 + 1

and finally to 3, which does not change on the next rewrite. You can imagine that if we say fib[35], we will generate enormous expressions, fill up memory, and melt the CPU. But the trick is to replace the final rewrite rule with the following:

fib[n_] := fib[n] = fib[n-1] + fib[n-2]

This says "please replace every occurrence of fib[n_] with an expression that will install a new specific rule for the value of fib[n] and also produce the value." This one runs much faster because it expands the rule-base -- the table of values! -- at run time.

We can do likewise in JavaScript:

function fib(n) {
  // Use the table entry if one exists; otherwise recurse...
  var result = fibTable[n] || (fib(n - 1) + fib(n - 2));
  // ...and install the freshly computed value as a new table entry.
  fibTable[n] = result;
  return result;
}

This runs MUCH faster than the prior definition of fib.
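To see just how much faster, here is a quick sketch (my addition, assuming the same base cases fib(1) = fib(2) = 1) that counts the calls made by a table-less version:

var calls = 0;
function naiveFib(n) {
  calls++;
  return n <= 2 ? 1 : naiveFib(n - 1) + naiveFib(n - 2);
}

naiveFib(25);
console.log(calls); // 150049 -- the naive recursion makes 2*fib(n) - 1 calls

The table-backed version, by contrast, computes each value only once, so its number of calls grows linearly with n.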

This technique is called "automemoization" [sic -- not "memorization" but "memoization", as in creating a memo for yourself].
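The trick generalizes beyond fib. Here is a sketch of a generic memoize helper -- a hypothetical utility of my own, not part of the original answer -- that wraps any one-argument function and grows its lookup table at run time:

function memoize(f) {
  var table = {}; // the table of values, built up as calls are made
  return function (n) {
    if (!(n in table)) {
      table[n] = f(n); // install a new entry on first use
    }
    return table[n];
  };
}

var fibMemo = memoize(function (n) {
  return n <= 2 ? 1 : fibMemo(n - 1) + fibMemo(n - 2);
});

console.log(fibMemo(35)); // 9227465, without melting the CPU

Note that the recursive calls go through fibMemo itself, so every intermediate value lands in the table.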

Of course, in the real world, you must manage the sizes of the tables that get created. To inspect the tables in Mathematica, do

DownValues[fib]

To inspect them in JavaScript, do just

fibTable

in a REPL such as the one provided by Node.js.

In the context of functional programming, there is the concept of referential transparency. A function that is referentially transparent can be replaced with its value for any given argument (or set of arguments), without changing the behaviour of the program.

Referential Transparency

For example, consider a function F that takes 1 argument, n. F is referentially transparent, so F(n) can be replaced with the value of F evaluated at n. It makes no difference to the program.

In C#, this would look like:

using System;

public class Square
{
    public static int apply(int n)
    {
        return n * n;
    }

    public static void Main()
    {
        // Should print 4
        Console.WriteLine(Square.apply(2));
    }
}

(I'm not very familiar with C#, coming from a Java background, so you'll have to forgive me if this example isn't quite syntactically correct).

It's obvious here that the function apply cannot have any value other than 4 when called with an argument of 2, since it just returns the square of its argument. The value of the function depends only on its argument, n; in other words, referential transparency.

I ask you, then, what the difference is between Console.WriteLine(Square.apply(2)) and Console.WriteLine(4). The answer is: there's no difference at all, for all intents and purposes. We could go through the entire program, replacing every instance of Square.apply(n) with the value returned by Square.apply(n), and the results would be exactly the same.

So what did Brian Beckman mean by his statement about replacing function calls with table lookups? He was referring to this property of referentially transparent functions. If Square.apply(2) can be replaced with 4 with no impact on program behaviour, then why not cache the value when the first call is made and put it in a table indexed by the function's arguments? A lookup table for values of Square.apply(n) would look something like this:

              n: 0 1 2 3 4  5  ...
Square.apply(n): 0 1 4 9 16 25 ...

And for any call to Square.apply(n), instead of calling the function we can simply find the cached value for n in the table and replace the function call with that value. It's fairly obvious that this will most likely bring a large speed increase to the program.
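In JavaScript terms (staying in one language here, though the example above is C#), building such a cache is just a loop:

// Precompute the table once; afterwards every "call" is an index lookup.
var squareTable = [];
for (var n = 0; n <= 5; n++) {
  squareTable[n] = n * n;
}

console.log(squareTable[2]); // 4 -- the same value Square.apply(2) returns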
