How does prolog run through recursive queries using succ?


Question


Can someone explain to me why this Prolog query works the way it does? The definition is:

add(0,Y,Y). 
add(succ(X),Y,succ(Z)):- add(X,Y,Z).

Given this:

?-  add(succ(succ(succ(0))),  succ(succ(0)),  R).

Here's the trace of the query:

Call:  (6)  add(succ(succ(succ(0))),  succ(succ(0)),  R) 

Call:  (7)  add(succ(succ(0)),  succ(succ(0)),  _G648) 

Call:  (8)  add(succ(0),  succ(succ(0)),  _G650) 

Call:  (9)  add(0,  succ(succ(0)),  _G652) 

Exit:  (9)  add(0,  succ(succ(0)),  succ(succ(0))) 

Exit:  (8)  add(succ(0),  succ(succ(0)),  succ(succ(succ(0)))) 

Exit:  (7)  add(succ(succ(0)),  succ(succ(0)), 
                                              succ(succ(succ(succ(0))))) 

Exit:  (6)  add(succ(succ(succ(0))),  succ(succ(0)), 
                                                succ(succ(succ(succ(succ(0))))))

The part that confused me the most about that tutorial was the fact that in the first argument, the succ is stripped, and it recurses. While recursing, though, R gains a succ... HOW?! Also, where does the zero come from at the first exit (9)? I am new to Prolog, and I am trying to understand the language for a homework assignment. Any help much appreciated.

Note: for anyone interested, the link to this tutorial is http://www.learnprolognow.org/lpnpage.php?pagetype=html&pageid=lpn-htmlse9


Answer 1:


"where does the zero come from at the first exit (9)?"

The call add(0, succ(succ(0)), _G652) unifies with the first clause, which says that when the first argument of add is zero, the second and third arguments are the same. In this particular situation the variable _G652 becomes succ(succ(0)).
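You can see the base case in isolation by posing it as its own query (a minimal sketch at the Prolog top level, assuming the two clauses above are loaded):

?- add(0, succ(succ(0)), Z).
Z = succ(succ(0)).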

"While recursing though, R gains a succ... HOW?!"

This is the result of applying the second clause. That clause states (roughly) that you first strip a succ from the first argument, then call add recursively, and finally wrap another "layer" of succ around the third argument coming back from that recursive call.
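A single recursive step shows both effects at once (again a sketch, using only the definition above): one succ is removed from the first argument, and one succ is wrapped around the result of the smaller call.

?- add(succ(0), succ(succ(0)), R).
R = succ(succ(succ(0))).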

The predicate add is nothing but a direct implementation of addition in Peano arithmetic: http://en.wikipedia.org/wiki/Peano_axioms#Addition
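If it helps to relate the succ terms to ordinary integers, here is a small hypothetical helper, peano_int/2 (not part of the original post), that converts a Peano numeral into an integer:

% peano_int(+Peano, -Int): convert a numeral built from 0 and succ/1
% into the corresponding integer (hypothetical helper, for illustration).
peano_int(0, 0).
peano_int(succ(P), N) :-
    peano_int(P, M),
    N is M + 1.

For example, ?- peano_int(succ(succ(succ(succ(succ(0))))), N). yields N = 5, i.e. the 3 + 2 computed above.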




Answer 2:


You see, Call and Exit are verbs: actions the interpreter takes while attempting to solve the query you pose. The trace exposes the details of the actual work done and lets you view it in chronological order.

When Prolog must choose a rule (a call), it uses the name you gave it (the so-called functor) and attempts to unify each argument with the head of the rule. We usually say that Prolog also considers the arity, i.e. the number of arguments, when selecting a clause.

Unification attempts to "make equal" two terms, and the noteworthy results are the so-called bindings of variables. You already know that variables are the names starting with an uppercase letter. Such names identify unspecified values in rules, i.e. they are placeholders for arguments. To avoid confusion, when Prolog shows the trace, variables are renamed so that we can identify them, because the relevant detail is the bindings established during the proof.

That is why you see symbols such as _G648 in the trace. They stand for the not-yet-instantiated arguments of the called goal, namely R and Z. R is unique (it occurs only in the top-level call), so Prolog kindly keeps the user-friendly name, but Z comes from the program, potentially occurs multiple times, and therefore gets renamed.

To answer this query

?-  add(succ(succ(succ(0))),  succ(succ(0)),  R).

Prolog first attempts to match

add(0,Y,Y). 

and fails, because succ(succ(succ(0))) can't be made equal to 0. It then attempts

add(succ(X),Y,succ(Z)) :- add(X,Y,Z).

thus it must establish these bindings (the caller's terms are on the left):

succ(succ(succ(0))) = succ(X)
succ(succ(0)) = Y
R = Z
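You can check these unifications directly at the top level with =/2, plain unification (a sketch of what a standard Prolog would answer):

?- succ(succ(succ(0))) = 0.
false.

?- succ(succ(succ(0))) = succ(X).
X = succ(succ(0)).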

You can see why X becomes succ(succ(0)), and we have a new goal to prove, i.e. the rule body add(X,Y,Z) with the bindings just established:

add(succ(succ(0)),succ(succ(0)),_G648)

and so on... until X becomes 0 and the goal matches

add(0,Y,Y).

Then Y becomes succ(succ(0)), and, notably, this also gives a value to Z in the calling rule.
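If you want to watch this unfold yourself, the trace from the question can be reproduced at the top level with the standard debugger (trace/0 in SWI-Prolog); pressing Enter at each port steps through the Call/Exit lines shown above:

?- trace, add(succ(succ(succ(0))), succ(succ(0)), R).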

HTH




Answer 3:


Here is a (hopefully) more legible trace, with more annotations:

 (6) Call:  add(succ(succ(succ(0))),  succ(succ(0)),  R)          % R     = succ(_G648)
      (7) Call:  add(succ(succ(0)),  succ(succ(0)),  _G648)       % _G648 = succ(_G650) 
           (8) Call:  add(succ(0),  succ(succ(0)),  _G650)        % _G650 = succ(_G652) 
                (9) Call:  add(0,  succ(succ(0)),  _G652)         % _G652 = succ(succ(0))
                (9) Exit:  add(0,  succ(succ(0)),  succ(succ(0)))       
           (8) Exit:  add(succ(0),  succ(succ(0)),  succ(succ(succ(0)))) 
      (7) Exit:  add(succ(succ(0)),  succ(succ(0)), succ(succ(succ(succ(0))))) 
 (6) Exit:  add(succ(succ(succ(0))),  succ(succ(0)), succ(succ(succ(succ(succ(0))))))

As you can see, the four exits are redundant; the final answer is already known at the point of (9) Exit, so a single exit would be enough at that point:

     %     R = succ(_G648)
     %              _G648 = succ(_G650) 
     %                           _G650 = succ(_G652) 
     %                                        _G652 = succ(succ(0))
     % thus,
     %     R = succ(        succ(        succ(        succ(succ(0)) )))

That is indeed what happens under tail-call optimization: the predicate definition is tail recursive, and the result in R is built in a top-down manner, with the "hole" gradually being filled in by instantiation of the logical variables. So the succ( _ ) is not added after the recursive call, but is established before it. This is also the essence of the tail recursion modulo cons optimization.
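One way to convince yourself of this is an instrumented variant (a hypothetical add_traced/3, not from the original post) that prints the third argument just before recursing; the outer succ/1 is already in place while its argument is still an unbound variable:

% Same clauses as add/3, but print the partially built result
% before the recursive call (hypothetical, for illustration only).
add_traced(0, Y, Y).
add_traced(succ(X), Y, succ(Z)) :-
    format('result so far: succ(~w)~n', [Z]),   % Z is still a fresh variable here
    add_traced(X, Y, Z).

Calling ?- add_traced(succ(succ(0)), succ(0), R). prints two "result so far" lines with unbound variables inside the succ, and only afterwards reports R = succ(succ(succ(0))).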



Source: https://stackoverflow.com/questions/10308557/how-does-prolog-run-through-recursive-queries-using-succ
