alpha-beta-pruning

Is there something wrong with my quiescence search?

Submitted by 断了今生、忘了曾经 on 2021-02-10 20:14:23
Question: I keep getting weird behavior in my negamax-based AI when I try to implement quiescence search. I based it on the pseudo-code from here:

    int Quiesce( int alpha, int beta ) {
        int stand_pat = Evaluate();
        if( stand_pat >= beta )
            return beta;
        if( alpha < stand_pat )
            alpha = stand_pat;

        until( every_capture_has_been_examined ) {
            MakeCapture();
            score = -Quiesce( -beta, -alpha );
            TakeBackMove();
            if( score >= beta )
                return beta;
            if( score > alpha )
                alpha = score;
        }
        return alpha;
    }

And this is my code:
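The pseudo-code above can be sketched as a small runnable negamax quiescence search. The position format used here (a dict holding a static evaluation from the side to move's point of view, plus a list of capture successors) is invented purely for illustration; a real engine would use its own board type and capture generator.

```python
# Sketch of the quiescence pseudo-code above. The dict-based "position"
# is a stand-in invented for this example, not a real engine structure.

def quiesce(pos, alpha, beta):
    """Fail-hard negamax quiescence search over capture moves only."""
    stand_pat = pos["eval"]                     # static evaluation ("stand pat")
    if stand_pat >= beta:
        return beta                             # beta cutoff on the static score
    if stand_pat > alpha:
        alpha = stand_pat                       # side to move can at least stand pat
    for child in pos["captures"]:               # examine every capture
        score = -quiesce(child, -beta, -alpha)  # negamax: negate and swap window
        if score >= beta:
            return beta
        if score > alpha:
            alpha = score
    return alpha
```

For example, a root where the only capture leads to a position scored -5 for the opponent yields +5 for the side to move, while a capture leading to +3 for the opponent is refuted by standing pat at 0.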

Chess: Extracting the principal variation from the transposition table

Submitted by 耗尽温柔 on 2020-01-15 05:00:26
Question: Earlier, I was having an issue involving my principal variation becoming truncated by an alpha-beta search. Indeed, this appears to be a common issue. From the authors of Crafty: "Another solution with even worse properties is to extract the full PV from the transposition table, and avoid using the triangular array completely. If the transposition table is large enough so that nothing gets overwritten at all, this would almost work. But there is a hidden 'gotcha'. Once you search the PV, you
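The table-walking approach the Crafty authors describe can be sketched as follows. All names here (`extract_pv`, a table mapping position keys to best moves, `apply_move`, `key`) are invented for illustration; the point is that the extracted PV ends exactly where an entry has been overwritten or is missing, which is why this method truncates.

```python
# Illustrative sketch: a transposition table reduced to {position key: best move}.
# Real tables also store depth, score, and bound type; those are omitted here.

def extract_pv(tt, position, apply_move, key, max_length=64):
    """Walk the transposition table from `position`, collecting best moves."""
    pv = []
    seen = set()                      # guard against repetition cycles
    while len(pv) < max_length:
        k = key(position)
        if k in seen or k not in tt:  # entry absent/overwritten: PV ends here
            break
        seen.add(k)
        move = tt[k]
        pv.append(move)
        position = apply_move(position, move)
    return pv
```

With a toy table where positions are integers and each move advances the position by one, the walk recovers the stored line and stops at the first missing entry.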

MiniMax Algorithm for Tic Tac Toe failure

Submitted by 杀马特。学长 韩版系。学妹 on 2019-12-25 04:56:21
Question: I'm trying to implement a minimax algorithm for tic-tac-toe with alpha-beta pruning. Right now the program runs, but it does not seem to be working: whenever I run it, it fills all the squares with garbage. I've implemented it so that my minimax function takes a board state and modifies that state, so that when it is finished, the board state contains the next best move. Then, I set 'this' to equal the modified board. Here are my functions for the minimax algorithm: void
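A frequent cause of "garbage in all the squares" is mutating the shared board during the recursion instead of copying or undoing moves. A minimal non-mutating sketch for tic-tac-toe (board as a 9-element list of "X", "O", or None; names invented for this example) that returns both the score and the best move:

```python
from math import inf

# All winning lines on a 3x3 board, indexed 0..8 row by row.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player, alpha, beta):
    """Return (score, move); X maximizes (+1 win), O minimizes (-1 win)."""
    w = winner(board)
    if w is not None:
        return (1 if w == "X" else -1), None
    moves = [i for i, v in enumerate(board) if v is None]
    if not moves:
        return 0, None                      # draw
    best_move = None
    if player == "X":                       # maximizing side
        best = -inf
        for m in moves:
            child = board[:]                # copy instead of mutating the argument
            child[m] = "X"
            score, _ = minimax(child, "O", alpha, beta)
            if score > best:
                best, best_move = score, m
            alpha = max(alpha, best)
            if alpha >= beta:
                break                       # beta cutoff
    else:                                   # minimizing side
        best = inf
        for m in moves:
            child = board[:]
            child[m] = "O"
            score, _ = minimax(child, "X", alpha, beta)
            if score < best:
                best, best_move = score, m
            beta = min(beta, best)
            if alpha >= beta:
                break                       # alpha cutoff
    return best, best_move
```

On a board where X has two in a row, the search returns the winning square; from the empty board it evaluates to 0, the well-known draw under perfect play.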

Tic-Tac-Toe - Iterative implementation of alpha beta tree search

Submitted by 懵懂的女人 on 2019-12-24 16:12:24
Question: Having issues trying to decipher the principal variation (PV) results. "The principal variation is a path from the root to a leaf node, in which every node has the same value. This leaf node, whose value determines the minimax value of the root, is called the principal leaf." The game demo below PV (move,eval) shows this line: 4,0 7,0 6,0 5,0 2,1 How can this be a valid PV, since not ALL eval nodes have the same value? The AI never loses, but since the dizzying PV seems bogus, it casts a dark

How is the alpha value in alpha-beta pruning algorithm used and updated?

Submitted by 我的梦境 on 2019-12-24 13:17:08
Question: I was looking at the post Strange behaviour in a function while implementing the alpha-beta pruning algorithm and the accepted answer, where it is stated: "Your rootAlphaBeta doesn't update the alpha value". I was wondering what the necessary addition to the code was. Answer 1: For alpha-beta pruning to work, the alpha value needs to get propagated up to the top level of the depth first search. This can be achieved by initializing a variable to store alpha outside of the loop over the potential
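The fix the answer describes can be sketched as a root loop that keeps alpha in a variable outside the loop over moves, so each sibling is searched with the bound established by the siblings before it. The toy tree format (a leaf is an int scored from the side to move's view, an inner node is a list of children) and the function names are invented for this example:

```python
from math import inf

def alphabeta(node, alpha, beta):
    """Negamax alpha-beta over a toy tree: int leaf, or list of children."""
    if isinstance(node, int):
        return node
    best = -inf
    for child in node:
        best = max(best, -alphabeta(child, -beta, -alpha))
        alpha = max(alpha, best)      # propagate the improved lower bound
        if alpha >= beta:
            break                     # cutoff
    return best

def root_alphabeta(children):
    """Root loop: alpha lives OUTSIDE the loop, as the answer requires."""
    alpha = -inf                      # the variable the answer refers to
    best_index = None
    for i, child in enumerate(children):
        # Each child gets the window narrowed by earlier siblings.
        score = -alphabeta(child, -inf, -alpha)
        if score > alpha:             # update alpha at the top level too
            alpha, best_index = score, i
    return alpha, best_index
```

Without the `alpha = ...` update at the root, every child would be searched with the full window and no top-level cutoffs would ever occur.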

Return a move from alpha-beta

Submitted by 无人久伴 on 2019-12-24 08:45:07
Question: I'm trying to use the minimax algorithm with alpha-beta pruning to return a valid move from my board. The algorithm returns the correct value, but I have no idea how I would return the move as well. In the case of this code, I would want to return the child in get_successor_states when the value of bestValue is more than the current alpha. I thought about returning two values at the end of the max and min like return bestValue, child but I have no idea how I would get that to work with the other
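The "return two values" idea works if every level returns a `(value, move)` pair and the callers simply discard the move component of recursive results. A sketch under assumed stand-ins: `get_successor_states` and `evaluate` below are invented toy functions (integer states, children `2n` and `2n+1`, leaves scored by `state % 5`), not the asker's API.

```python
from math import inf

# Toy stand-ins for the question's board API, invented for illustration.
def get_successor_states(state):
    return [state * 2, state * 2 + 1] if state < 8 else []

def evaluate(state):
    return state % 5

def max_value(state, alpha, beta):
    children = get_successor_states(state)
    if not children:
        return evaluate(state), None
    best_value, best_child = -inf, None
    for child in children:
        value, _ = min_value(child, alpha, beta)   # discard grandchild's move
        if value > best_value:
            best_value, best_child = value, child  # remember the move too
        if best_value >= beta:
            break                                  # beta cutoff
        alpha = max(alpha, best_value)
    return best_value, best_child

def min_value(state, alpha, beta):
    children = get_successor_states(state)
    if not children:
        return evaluate(state), None
    best_value, best_child = inf, None
    for child in children:
        value, _ = max_value(child, alpha, beta)
        if value < best_value:
            best_value, best_child = value, child
        if best_value <= alpha:
            break                                  # alpha cutoff
        beta = min(beta, best_value)
    return best_value, best_child
```

At the top level, `max_value(root, -inf, inf)` then yields both the minimax value and the successor state to play; pruning never changes the root value because the root is searched with a full window.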

Conversion of minimax with alpha beta pruning to negamax

Submitted by 大憨熊 on 2019-12-18 17:00:11
Question: I've written a minimax algorithm with alpha-beta pruning for the game Checkers, and now I'm trying to rewrite it using the negamax approach. I'm expecting the two to be equivalent, since negamax is just a technique for writing the minimax. But for some reason my two algorithms behave differently. When I run them both on the same input, the negamax version seems to evaluate more states, so I think something must be wrong with the alpha beta pruning. The code below shows both algorithms (minimax
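The two formulations can be checked against each other on random game trees. A sketch (toy tree format invented here: an int leaf scored for the maximizer, a list for an inner node): the negamax version must negate the child's result AND swap-and-negate the window, i.e. pass `(-beta, -alpha)`; forgetting either half is the usual cause of the versions diverging.

```python
from math import inf
import random

def minimax(node, alpha, beta, maximizing):
    """Classic two-branch minimax with alpha-beta pruning."""
    if isinstance(node, int):
        return node                           # leaf: score for the maximizer
    if maximizing:
        best = -inf
        for child in node:
            best = max(best, minimax(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:
                break
        return best
    best = inf
    for child in node:
        best = min(best, minimax(child, alpha, beta, True))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

def negamax(node, alpha, beta, color):
    """Negamax form; color is +1 for the maximizer, -1 for the minimizer."""
    if isinstance(node, int):
        return color * node                   # leaf from side-to-move's view
    best = -inf
    for child in node:
        # The crucial conversion step: negate the result and the window.
        best = max(best, -negamax(child, -beta, -alpha, -color))
        alpha = max(alpha, best)
        if alpha >= beta:
            break
    return best

def random_tree(depth, rng):
    """Random toy game tree with branching factor 1-3."""
    if depth == 0:
        return rng.randint(-10, 10)
    return [random_tree(depth - 1, rng) for _ in range(rng.randint(1, 3))]
```

Searching many random trees with a full root window, the two must always agree on the root value; if negamax visits more nodes than minimax, compare the window handling and the `alpha >= beta` cutoff condition between the two versions.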

Adding Alpha Beta pruning to Negamax in Java

Submitted by 早过忘川 on 2019-12-14 03:55:24
Question: I am making a chess game in Java and (I think) have successfully implemented negamax for the AI player. I am having some trouble adding alpha-beta pruning to this to improve the algorithm. I have tried following tutorials and example code but just can't get my head around how it works. Below is the code I currently have to get the best move:

    private Move getBestMove() {
        System.out.println("Getting best move");
        System.out.println("Thinking...");
        List<Move> validMoves = generateMoves(true);
        int

Finding the best move using MinMax with Alpha-Beta pruning

Submitted by ▼魔方 西西 on 2019-12-13 11:36:49
Question: I'm working on an AI for a game and I want to use the MinMax algorithm with Alpha-Beta pruning. I have a rough idea of how it works, but I'm still not able to write the code from scratch, so I've spent the last two days looking for some kind of pseudocode online. My problem is that every pseudocode I've found online seems to be based on finding the value of the best move, while I need to return the best move itself and not a number. My current code is based on this pseudocode (source)