Why does Monte Carlo Tree Search reset the tree?


Question


I had a small but potentially stupid question about Monte Carlo Tree Search. I understand most of it but have been looking at some implementations and noticed that after the MCTS is run for a given state and a best move returned, the tree is thrown away. So for the next move, we have to run MCTS from scratch on this new state to get the next best position.

I was just wondering why we don't retain some of the information from the old tree. It seems like there is valuable information about the states in the old tree, especially given that the best move is one where the MCTS has explored most. Is there any particular reason we can't use this old information in some useful way?


Answer 1:


Some implementations do indeed retain the information.

For example, the AlphaGo Zero paper says:

The search tree is reused at subsequent time-steps: the child node corresponding to the played action becomes the new root node; the subtree below this child is retained along with all its statistics, while the remainder of the tree is discarded
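In code, this reuse might look like the sketch below. It is a minimal, hypothetical illustration, not from any particular library: the `Node` class and the `advance_root` helper are assumptions made for the example. After a move is played, the matching child is promoted to root and the rest of the tree is dropped.

```python
# Minimal sketch of subtree reuse, assuming a hypothetical Node class;
# not taken from any particular MCTS implementation.

class Node:
    def __init__(self, parent=None):
        self.parent = parent
        self.children = {}      # action -> Node
        self.visit_count = 0
        self.total_value = 0.0

def advance_root(root, played_action):
    """Promote the child reached by `played_action` to be the new root.

    Its subtree and statistics are kept; everything else in the old
    tree becomes unreachable and can be garbage-collected.
    """
    child = root.children.get(played_action)
    if child is None:
        # The played action was never expanded during search: start fresh.
        child = Node()
    child.parent = None         # detach from the discarded tree
    return child
```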




Answer 2:


Well, the reason may be the following.

Rollouts are truncated value estimations; contributions after the maximum length are discarded.

Assume that the maximum rollout depth is N.

Consider an environment where the average reward is != 0 (let's say > 0).

After an action is taken and an observation is obtained, a child node of the tree can be selected as the new root.

Now the maximum length of the branches, and of the rollouts that participated in evaluating a node's value, is N-1, since the old root node has been discarded.

However, the new simulations will still have length N, yet they will have to be combined with simulations of length N-1.

The longer simulations will have a biased value, since the average reward is != 0.

This means that nodes evaluated with mixed-length simulations will have a bias that depends on the ratio of simulations of different lengths.
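A toy arithmetic example makes the gap concrete. The assumptions here are mine for illustration: undiscounted rollouts and a constant average per-step reward r > 0.

```python
# Toy arithmetic for the mixed-length bias (assumptions: undiscounted
# rollouts, constant average per-step reward r > 0).
r = 0.1   # average reward per step
N = 50    # maximum rollout depth

fresh_rollout  = N * r        # expected return of a new, length-N rollout
reused_rollout = (N - 1) * r  # expected return of a recycled length-(N-1) rollout

# A node averaging both kinds of estimate is pulled away from the true
# length-N value by an amount that depends on the mixing ratio.
print(fresh_rollout, reused_rollout)   # 5.0 4.9 (approximately)
```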

Another reason why recycling old, shorter simulations is avoided is the bias it induces on the sampling. Imagine a T-maze where the left arm has a maximum reward of R/2 at depth d, while the right arm has a maximum reward of R at depth d+1. With a recycled tree, all the paths to the left that managed to reach the R/2 reward at depth d during the first step will be favoured during the second step, while paths to the right will be sampled less often, so there is a higher chance of never reaching the reward R. Starting from an empty tree gives both sides of the maze the same probability.

Alpha Go Zero (see Peter de Rivaz's answer) actually does not use rollouts but a value approximation (generated by a deep network), and these values are not truncated estimations. Thus Alpha Go Zero is not affected by this branch-length bias.

Alpha Go, the predecessor of Alpha Go Zero, combined rollouts with the value approximation and also reused the tree, but the new version does not use rollouts, maybe for this reason. Also, both Alpha Go Zero and Alpha Go do not use the value of an action but the number of times it was selected during search; this count may be less affected by the length bias, at least in the case where the average reward is negative.
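For illustration, picking the move by visit count rather than by mean value could look like this sketch, reusing the hypothetical `Node` class from the earlier example:

```python
# Illustrative only: choose the move to play from root visit counts,
# not from mean values (Node is the hypothetical class sketched above).

def select_move(root):
    # The most-visited child is the move the search explored most;
    # visit counts are less sensitive to the rollout-length bias
    # discussed above than the averaged values are.
    action, _child = max(root.children.items(),
                         key=lambda kv: kv[1].visit_count)
    return action
```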

Hope this is clear.



Source: https://stackoverflow.com/questions/47389700/why-does-monte-carlo-tree-search-reset-tree
