artificial-intelligence

Using minimax search for card games with imperfect information

﹥>﹥吖頭↗ Submitted on 2019-12-09 09:30:38
Question: I want to use minimax search (with alpha-beta pruning), or rather negamax search, to make a computer program play a card game. The card game actually has 4 players, so in order to be able to use minimax and the like, I simplify the game to "me" against the "others". After each "move", you can objectively read the current state's evaluation from the game itself. When all 4 players have placed their cards, the highest card wins them all, and the cards' values count. As you don't know how the
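A minimal negamax-with-alpha-beta sketch of the kind of search the question describes. The GameState interface (legal_moves, apply, is_terminal, evaluate) is hypothetical and would have to encode the "me vs. others" simplification; this is an illustration, not the asker's implementation.

    # Negamax with alpha-beta pruning; GameState and its methods
    # (legal_moves, apply, is_terminal, evaluate) are hypothetical stand-ins
    # for however the card game is modelled.
    def negamax(state, depth, alpha, beta, color):
        # color is +1 for "me", -1 for the combined "others"
        if depth == 0 or state.is_terminal():
            return color * state.evaluate()  # evaluation from "my" point of view
        best = float("-inf")
        for move in state.legal_moves():
            value = -negamax(state.apply(move), depth - 1, -beta, -alpha, -color)
            best = max(best, value)
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cut-off
        return best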

What algorithm would you use to solve a very large tic-tac-toe game?

早过忘川 Submitted on 2019-12-09 07:03:25
Question: A small (3x3, 4x4) tic-tac-toe game can easily be solved by considering all the cases. But suppose you have a 30x30 tic-tac-toe board: what algorithm would you use to decide the next best move in that case? Minimax with alpha-beta pruning is one way that I know. Is there another way that is more efficient, or not more efficient but cooler? I know it would not be a very interesting game to play; I said 30x30 just to ask what I really want to know, i.e. which algorithms work best at this sort of game where the
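On a board this large, exhaustive minimax is hopeless, so a common practical adjustment (an illustration, not something claimed by the question) is to prune the branching factor by only considering empty cells near existing marks, and to cut the search off at a fixed depth with a heuristic evaluation. A sketch of the move-generation part, assuming the board is a list of lists with '.' for empty cells:

    # Hypothetical helper: on a huge board, only consider empty cells within
    # `radius` of an occupied cell, so the branching factor stays manageable.
    def candidate_moves(board, radius=2):
        rows, cols = len(board), len(board[0])
        occupied = {(r, c) for r in range(rows) for c in range(cols)
                    if board[r][c] != '.'}
        moves = set()
        for (r, c) in occupied:
            for dr in range(-radius, radius + 1):
                for dc in range(-radius, radius + 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and board[rr][cc] == '.':
                        moves.add((rr, cc))
        # On an empty board, fall back to the centre square.
        return moves or {(rows // 2, cols // 2)}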

Clustering Algorithm with discrete and continuous attributes?

帅比萌擦擦* Submitted on 2019-12-09 05:27:55
Question: Does anyone know a good algorithm for performing clustering on both discrete and continuous attributes? I am working on the problem of identifying a group of similar customers, and each customer has both discrete and continuous attributes (think type of customer, amount of revenue generated by this customer, geographic location, etc.). Traditionally, algorithms like K-means or EM work for continuous attributes; what if we have a mix of continuous and discrete attributes? Answer 1: If I remember
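One standard way to handle mixed attributes (the Gower-distance / k-prototypes family, which the excerpt itself does not name) is to combine a numeric distance on the continuous attributes with a simple matching distance on the discrete ones. A sketch, assuming the continuous attributes are already scaled to [0, 1] and the weight gamma is tuned by hand:

    import numpy as np

    # Gower-style dissimilarity between two customers; `num_*` are continuous
    # attributes scaled to [0, 1], `cat_*` are raw discrete labels.
    # The split into two parts and the gamma weight are assumptions.
    def mixed_distance(num_a, num_b, cat_a, cat_b, gamma=1.0):
        numeric_part = np.abs(np.asarray(num_a) - np.asarray(num_b)).mean()
        mismatch = np.mean([a != b for a, b in zip(cat_a, cat_b)])
        return numeric_part + gamma * mismatch

A dissimilarity like this can then be fed to any method that accepts a precomputed distance matrix, for example hierarchical clustering or k-medoids.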

How to utilize Hebbian learning?

与世无争的帅哥 Submitted on 2019-12-09 04:08:26
Question: I want to upgrade my evolution simulator to use Hebbian learning, like this one. I basically want small creatures to be able to learn how to find food. I achieved that with basic feedforward networks, but I'm stuck at understanding how to do it with Hebbian learning. The basic principle of Hebbian learning is that if two neurons fire together, they wire together. So the weights are updated like this: weight_change = learning_rate * input * output. The information I've found on how this can be
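A direct NumPy transcription of the update rule quoted above, for a whole weight matrix at once. The decay term is an assumption added here to keep weights from growing without bound; it is not part of the rule in the question.

    import numpy as np

    def hebbian_update(weights, inputs, outputs, learning_rate=0.01, decay=0.001):
        # Plain Hebb rule: neurons that fire together wire together.
        # delta_w[i, j] = learning_rate * input[i] * output[j]
        delta = learning_rate * np.outer(inputs, outputs)
        # Small weight decay (an assumption, not from the question) to bound growth.
        return weights + delta - decay * weights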

neuralnet prediction returns the same values for all predictions

这一生的挚爱 Submitted on 2019-12-09 03:11:09
Question: I'm trying to build a neural net with the neuralnet package and I'm having some trouble with it. I've been successful with the nnet package, but no luck with the neuralnet one. I have read the package's whole documentation and can't find the solution, or maybe I'm not able to spot it. The training command I'm using is nn <- neuralnet(V15 ~ V1 + V2 + V3 + V4 + V5 + V6 + V7 + V8 + V9 + V10 + V11 + V12 + V13 + V14, data=test.matrix, lifesign="full", lifesign.step=100, hidden=8) and for prediction result<

Combining heuristics when ranking social network news feed items

痞子三分冷 Submitted on 2019-12-08 15:44:44
Question: We have a news feed, and we want to surface items to the user based on a number of criteria. Certain items will be surfaced because of factor A, others because of factor B, and yet others because of factor C. We can create individual heuristics for each factor, but we then need to combine these heuristics in a way that promotes the best content for each factor while still giving a mix of content from all of the factors. Our naive approach is to load the top n from each factor, take
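One simple way to combine per-factor heuristics (an illustration, not the asker's approach) is to normalise each factor's raw score to [0, 1] and rank by a weighted sum. A sketch, assuming each factor produces a raw score per item and items missing from a factor get that factor's minimum:

    def combine_scores(items, factor_scores, weights):
        # factor_scores: {factor_name: {item_id: raw_score}}
        # weights: {factor_name: weight}
        # Per-factor min-max normalisation is an assumption made for this sketch.
        combined = {}
        for factor, scores in factor_scores.items():
            lo, hi = min(scores.values()), max(scores.values())
            span = (hi - lo) or 1.0
            for item in items:
                norm = (scores.get(item, lo) - lo) / span
                combined[item] = combined.get(item, 0.0) + weights[factor] * norm
        return sorted(items, key=lambda i: combined[i], reverse=True)

Adjusting the weights then trades off how strongly each factor influences the final mix.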

Watson Conversation: What is lost when restoring a Workspace from a JSON “dump”-file?

筅森魡賤 Submitted on 2019-12-08 11:37:53
Question: What is "lost", or what kind of measurable impact does it have, when restoring a previously heavily trained Watson Conversation workspace from its JSON dump file? As far as I can tell from a small example workspace, the bot runs again after the restore, but most probably not as well as before. Considering a much larger workspace in the future: Is there a way to quantify and/or measure such lost quality? To "retrain" the bot (restore the original bot quality after restoring a trained workspace from its dump)? And when

What are some common admissible heuristics for distance? [closed]

自闭症网瘾萝莉.ら Submitted on 2019-12-08 11:15:41
Question: What are the most common heuristics used to estimate distance in intelligent search problems? In particular, I'm interested in metrics that can (usually) be used as admissible heuristics for A* search. I came across straight-line distance and Manhattan distance, but are there any
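The two heuristics named in the excerpt, plus two other common grid heuristics (Chebyshev and octile distance), written as plain functions over (x, y) coordinates. Which of these is admissible depends on the movement model, as noted in the comments.

    import math

    # Common grid heuristics for A*; points are (x, y) tuples.
    def euclidean(a, b):
        # "Straight line" distance; admissible whenever moves cannot
        # cover more than their geometric length.
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def manhattan(a, b):
        # Admissible on 4-connected grids (no diagonal moves).
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def chebyshev(a, b):
        # 8-connected grids where diagonal moves cost the same as straight ones.
        return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

    def octile(a, b):
        # 8-connected grids where diagonal moves cost sqrt(2).
        dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
        return (dx + dy) + (math.sqrt(2) - 2) * min(dx, dy)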

Why when using this simple model with multiple outputs does Keras complain about a lack of gradients?

筅森魡賤 Submitted on 2019-12-08 07:14:56
Question: So this problem occurs in the context of a larger project, but I've assembled a minimal working example. Consider the following:

    input_1 = Input((5,))
    hidden_a = Dense(2)(input_1)
    hidden_b = Dense(2)(input_1)
    m1 = Model(input_1, [hidden_a, hidden_b])

    input_2 = Input((2,))
    output = Dense(1)(input_2)
    m2 = Model(input_2, output)

    m3 = Model(input_1, m2(m1(input_1)[0]))
    print(m3.summary())

    m3.compile(optimizer='adam', loss='mse')
    x = np.random.random(size=(10,5))
    y = np.random.random(size=(10

Tensorflow converging but bad predictions

混江龙づ霸主 Submitted on 2019-12-08 06:41:05
Question: I posted a similar question the other day here, but I have since fixed the bugs I found, and the problem of bad predictions remains. I have two networks: one with 3 conv layers and another with 3 conv layers followed by 3 deconv layers. Both take a 200x200 input image. The output is at the same 200x200 resolution, but it has two classifications (either a zero or a one, since it's a segmentation network), so the network's prediction dimensions are 200x200x2 (plus batch_size). Let's talk about
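For reference, a hedged Keras sketch of the general shape of the second network described (3 conv layers followed by 3 transposed-conv layers, 200x200 input, 2-class per-pixel output). The question uses raw TensorFlow, and the filter counts and strides here are assumptions, not the asker's values.

    from tensorflow.keras import layers, models

    # Hypothetical sketch: 3 conv layers (each halving resolution) followed by
    # 3 deconv layers back up to 200x200, with 2 per-pixel class scores.
    def build_segmenter():
        inp = layers.Input((200, 200, 1))
        x = inp
        for filters in (32, 64, 128):
            x = layers.Conv2D(filters, 3, strides=2, padding="same",
                              activation="relu")(x)
        for filters in (64, 32, 2):
            x = layers.Conv2DTranspose(filters, 3, strides=2, padding="same",
                                       activation=None if filters == 2 else "relu")(x)
        out = layers.Softmax(axis=-1)(x)  # two-class per-pixel probabilities
        return models.Model(inp, out)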