Understanding A* heuristics for single goal maze

Submitted by 旧城冷巷雨未停 on 2020-01-02 04:04:42

Question


I have a maze like the following:

||||||||||||||||||||||||||||||||||||
|                                 P|
| ||||||||||||||||||||||| |||||||| |
| ||   |   |      |||||||   ||     |
| || | | | | |||| ||||||||| || |||||
| || | | | |             || ||     |
| || | | | | | ||||  |||    |||||| |
| |  | | |   |    || ||||||||      |
| || | | |||||||| ||        || |||||
| || |   ||       ||||||||| ||     |
|    |||||| |||||||      || |||||| |
||||||      |       |||| || |      |
|      |||||| ||||| |    || || |||||
| ||||||      |       ||||| ||     |
|        |||||| ||||||||||| ||  || |
||||||||||                  |||||| |
|+         ||||||||||||||||        |
||||||||||||||||||||||||||||||||||||

The goal is for P to find +, with sub-goals of

  • The path to + is of least cost (each move adds 1 to the cost)
  • The number of cells searched (nodes expanded) is minimized

I'm trying to understand why my A* heuristic is performing so much worse than an implementation I have for Greedy Best First. Here are the two bits of code for each:

#Greedy Best First -- Manhattan Distance
self.heuristic = abs(goalNodeXY[1] - self.xy[1]) + abs(goalNodeXY[0] - self.xy[0])

#A* -- Manhattan Distance + Path Cost from 'startNode' to 'currentNode'
return abs(goalNodeXY[1] - self.xy[1]) + abs(goalNodeXY[0] - self.xy[0]) + self.costFromStart

In both algorithms, I'm using a heapq, prioritizing based on the heuristic value. The primary search loop is the same for both:

theFrontier = []
heapq.heappush(theFrontier, (stateNode.heuristic, stateNode)) #populate frontier with 'start copy' as only available Node

#while !goal and frontier !empty
while not GOAL_STATE and theFrontier:
    stateNode = heapq.heappop(theFrontier)[1] #heappop returns tuple of (weighted-idx, data)
    CHECKED_NODES.append(stateNode.xy)
    while stateNode.moves and not GOAL_STATE:
        EXPANDED_NODES += 1
        moveDirection = heapq.heappop(stateNode.moves)[1]

        nextNode = Node()
        nextNode.setParent(stateNode)
        #this makes a call to setHeuristic
        nextNode.setLocation((stateNode.xy[0] + moveDirection[0], stateNode.xy[1] + moveDirection[1]))
        if nextNode.xy not in CHECKED_NODES and not isInFrontier(nextNode):
            if nextNode.checkGoal(): break
            nextNode.populateMoves()
            heapq.heappush(theFrontier, (nextNode.heuristic,nextNode))

So now we come to the issue. While A* finds the optimal path, it's pretty expensive at doing so: to find the optimal path of cost 68, it expands (navigates and searches through) 452 nodes.

The Greedy Best-First implementation, meanwhile, finds a sub-optimal path (cost 74) in only 160 expansions.

I'm really trying to understand where I'm going wrong here. I realize that Greedy Best-First algorithms can behave like this naturally, but the gap in node expansions is so large that I feel something has to be wrong. Any help would be appreciated. I'm happy to add details if what I've pasted above is unclear in some way.


Answer 1:


A* provides the optimal answer to the problem; greedy best-first search provides some solution, with no optimality guarantee.

It's expected that A* has to do more work.

If you want a variation of A* that is no longer optimal but returns a solution much faster, look at weighted A*. It simply multiplies the heuristic by a weight (weight > 1). In practice, it gives you a huge performance increase.

For example, you could try this:

return 2*(abs(goalNodeXY[1] - self.xy[1]) + abs(goalNodeXY[0] - self.xy[0])) + self.costFromStart
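To illustrate the idea end to end, here is a minimal, self-contained weighted-A* sketch on a grid of '|' walls and ' ' floors. This is a hypothetical standalone helper, not the OP's Node class; the `weight` parameter applied to the heuristic is the only change from plain A*:

```python
import heapq

def weighted_astar(grid, start, goal, weight=2.0):
    """Weighted A* on a 4-connected grid (list of strings, '|' = wall).
    With weight > 1 the search is greedier: faster, but the returned
    path may be up to `weight` times longer than optimal."""
    def h(p):  # Manhattan distance from p to the goal
        return abs(goal[0] - p[0]) + abs(goal[1] - p[1])

    frontier = [(weight * h(start), 0, start)]  # (f, g, cell)
    best_g = {start: 0}
    expanded = 0
    while frontier:
        f, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return g, expanded  # path cost, nodes expanded
        expanded += 1
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[r]) and grid[r][c] != '|':
                ng = g + 1
                if ng < best_g.get((r, c), float('inf')):
                    best_g[(r, c)] = ng
                    heapq.heappush(frontier, (ng + weight * h((r, c)), ng, (r, c)))
    return None, expanded  # goal unreachable
```

With weight=1 this is ordinary A* (optimal); raising the weight typically cuts the expansion count at the cost of path quality.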



Answer 2:


A* search attempts to find the best possible solution to a problem, while greedy best-first just tries to find any solution at all. A* has a much, much harder task, and it has to put a lot of work into exploring every single path that could possibly be the best, while the greedy best-first algorithm just goes straight for the option that looks closest to the goal.




Answer 3:


Since this hasn't been resolved, and even though the "something wrong" the OP asked about could be solved with Fezvez's answer, I feel I need to point out what is wrong and why Fezvez's answer takes care of it: have you checked the heuristic values of all your nodes with the A* algorithm and noticed something odd? Aren't they all equal? Because even though your heuristic is correct for a best-first algorithm, it doesn't directly fit your A* algorithm. I made a similar project in Java and ran into this same issue, which is why I'm asking. For instance, suppose you have these points of interest:

  • Start(P) - (0,0)
  • End(+) - (20,20)
  • P1 - (2,2) -> (Your heuristic) + (path cost) = ((20-2) + (20-2)) + ((2-0) + (2-0)) = 40
  • P2 - (4,3) -> (Your heuristic) + (path cost) = ((20-4) + (20-3)) + ((4-0) + (3-0)) = 40

And, if I'm not mistaken, this will hold for every point on any path that moves monotonically toward the goal in your maze. Now, considering that an A* algorithm is normally implemented much like a breadth-first algorithm with heuristics (and path costs), since your evaluation always gives the same total (F = h + g), it degenerates in effect into a breadth-first search. That still gives you the best possible solution, but in practice it is slower than A* would normally be. Now, as Fezvez suggested, giving a weight to your heuristic mixes the best of both worlds (best-first and breadth-first), and with the points given above it would look like this:

  • Start(P) - (0,0)
  • End(+) - (20,20)
  • P1 - (2,2) -> 2*(Your heuristic) + (path cost) = 2*((20-2) + (20-2)) + ((2-0) + (2-0)) = 76
  • P2 - (4,3) -> 2*(Your heuristic) + (path cost) = 2*((20-4) + (20-3)) + ((4-0) + (3-0)) = 73
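The arithmetic above can be checked in a few lines. The helper `f` below is hypothetical, and g is taken as the Manhattan distance from the start at (0,0), which matches a path moving monotonically toward the goal:

```python
goal = (20, 20)

def f(cell, weight=1):
    g = abs(cell[0]) + abs(cell[1])                       # path cost from (0, 0)
    h = abs(goal[0] - cell[0]) + abs(goal[1] - cell[1])   # Manhattan distance to goal
    return weight * h + g

print(f((2, 2)), f((4, 3)))        # plain A*: 40 and 40 -- ties give no ordering
print(f((2, 2), 2), f((4, 3), 2))  # weighted: 76 vs 73 -- P2 is now preferred
```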


Source: https://stackoverflow.com/questions/28666629/understanding-a-heuristics-for-single-goal-maze
