artificial-intelligence

A* Algorithm with Manhattan Distance Heuristic

Submitted by 半世苍凉 on 2019-12-10 11:22:33
Question: I've been working on a 15-puzzle solver in C, and I've had some issues with the huge amount of memory my code uses. I won't be posting my code because it's too long... I've implemented most of the libraries I'm using, and it would probably just confuse you. Let's start with the basics. The things that I'm using right now are (all of them implemented in C): - Fibonacci Heap: /* Struct for the Fibonacci Heap */ typedef struct _fiboheap { int size; // Number of nodes in the heap node
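As an illustration of the heuristic named in the title (not the poster's code, and in Python rather than C), here is a minimal sketch of the Manhattan-distance heuristic for a 15-puzzle stored as a flat list of 16 ints, with 0 as the blank:

```python
# Minimal sketch: Manhattan-distance heuristic for the 15-puzzle.
# The board layout and function name are assumptions, not the poster's code.
def manhattan(board, width=4):
    total = 0
    for index, tile in enumerate(board):
        if tile == 0:                # the blank does not contribute
            continue
        goal = tile - 1              # tile t belongs at flat index t - 1
        total += abs(index // width - goal // width)   # row distance
        total += abs(index % width - goal % width)     # column distance
    return total

solved = list(range(1, 16)) + [0]
assert manhattan(solved) == 0        # the goal state has heuristic 0
```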

Saving a Python dictionary in an external file?

Submitted by 杀马特。学长 韩版系。学妹 on 2019-12-10 11:17:10
Question: I'm working on code that is essentially a super-basic AI system (basically a simple Python version of Cleverbot). As part of the code, I've got a starting dictionary with a couple of keys whose values are lists. As the file runs, the dictionary is modified: keys are created and items are added to the associated lists. So what I want to do is have the dictionary saved as an external file in the same folder, so that the program doesn't have to "re-learn" the data each time I start
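One common approach, sketched below under the assumption that the values are JSON-serializable (e.g. lists of strings): dump the dictionary with the json module whenever it changes and reload it on startup. The file name memory.json is made up.

```python
# Sketch: persist the dictionary to a JSON file so learned data survives restarts.
# "memory.json" is a hypothetical file name.
import json, os

MEMORY_FILE = "memory.json"

def load_memory(default):
    if os.path.exists(MEMORY_FILE):
        with open(MEMORY_FILE, "r", encoding="utf-8") as f:
            return json.load(f)
    return default

def save_memory(memory):
    with open(MEMORY_FILE, "w", encoding="utf-8") as f:
        json.dump(memory, f, indent=2)

memory = load_memory({"greetings": ["hi", "hello"]})
memory.setdefault("farewells", []).append("bye")
save_memory(memory)
```

pickle works the same way if the values are not JSON-serializable, at the cost of a file that is not human-readable.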

Tron lightcycles AI in Prolog

Submitted by 梦想与她 on 2019-12-10 11:15:34
Question: I need to write the AI for a game (like Tron light cycles). I wrote all the graphics and movement in C using ncurses. Now I need to write the bot's AI in Prolog (I'm using SWI-Prolog). I save the current game field (the whole matrix), the current human position, and the current bot position (as matrix cells i, j). They are saved as predicates in the .pl file from C. My game field is a matrix that contains 1s and 0s (1 = visited, 0 = unvisited). Like this: human_current_position(0,1). bot_current
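The question targets Prolog, but as a rough illustration of one simple bot policy (written in Python to match the other sketches here, and not the poster's code): flood-fill the free space reachable from each candidate move and pick the move that leaves the most room.

```python
# Illustrative sketch only (Python, not Prolog): choose the neighbouring free cell
# with the largest reachable region. Grid cells: 1 = visited, 0 = unvisited.
from collections import deque

def reachable_area(grid, start):
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([start])
    while queue:
        i, j = queue.popleft()
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < rows and 0 <= nj < cols and grid[ni][nj] == 0 \
                    and (ni, nj) not in seen:
                seen.add((ni, nj))
                queue.append((ni, nj))
    return len(seen)

def choose_move(grid, bot_pos):
    i, j = bot_pos
    free = [(ni, nj) for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1))
            if 0 <= ni < len(grid) and 0 <= nj < len(grid[0]) and grid[ni][nj] == 0]
    # Prefer the move that keeps the largest open region for the bot.
    return max(free, key=lambda cell: reachable_area(grid, cell), default=None)
```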

Search algorithm with avoiding repeated states

Submitted by 倖福魔咒の on 2019-12-10 10:59:27
Question: With reference to Section 3.5 of Russell and Norvig: On a grid, each state has four successors, so the search tree including repeated states has 4^d leaves; but there are only about 2d^2 distinct states within d steps of any given state. What is the meaning of distinct states here? Can someone explain it to me by considering various values of d, say 1, 2, 3, 4? Answer 1: What is the meaning of distinct states here? A distinct state is a unique cell: you count each cell in the grid only once.
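A quick way to see the figure: the cells reachable in at most d moves are exactly those within Manhattan distance d, and there are 2d^2 + 2d + 1 of them, which is about 2d^2 for large d, versus 4^d root-to-leaf paths in the tree that keeps repeated states. A small sketch that counts them for d = 1, 2, 3, 4:

```python
# Count distinct grid cells within d moves (Manhattan distance <= d) and compare
# with the 4**d leaves of the search tree that includes repeated states.
def distinct_states(d):
    return sum(1 for x in range(-d, d + 1)
                 for y in range(-d, d + 1)
                 if abs(x) + abs(y) <= d)

for d in (1, 2, 3, 4):
    print(d, distinct_states(d), 2 * d * d + 2 * d + 1, 4 ** d)
# d=1: 5 cells vs 4 leaves; d=2: 13 vs 16; d=3: 25 vs 64; d=4: 41 vs 256.
```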

AI: Partial Unification in Open-World Reference Resolution

Submitted by 做~自己de王妃 on 2019-12-10 10:59:22
Question: When performing reference resolution on predicates describing the semantics of dialogue expressions, I need to be able to allow for partial unification because I am working in an open world. For example, consider the following scenario: There is a blue box in front of you. We refer to this blue box using the id 3. A set of predicates box(x)^blue(x) can easily resolve to the blue box you know about; making this query will return 3. A set of predicates ball(x)^yellow(x) will not resolve to anything.
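Not the poster's system, but a small illustration of the idea in Python: score the query predicates against what is known about each entity and report a partial match instead of failing outright. All names and the data layout below are made up.

```python
# Illustrative sketch (all names hypothetical): each entity id maps to the set of
# unary predicates known to hold for it; a query resolves fully, partially, or not at all.
known_entities = {
    3: {"box", "blue"},   # the blue box in front of you, referred to by id 3
}

def resolve(query_predicates, entities):
    best_id, best_matched = None, set()
    for entity_id, facts in entities.items():
        matched = query_predicates & facts
        if len(matched) > len(best_matched):
            best_id, best_matched = entity_id, matched
    unmatched = query_predicates - best_matched
    return best_id, best_matched, unmatched

print(resolve({"box", "blue"}, known_entities))     # full match: id 3, nothing unmatched
print(resolve({"ball", "yellow"}, known_entities))  # no match: None, both predicates unmatched
```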

Converting Minimax to Negamax (python)

Submitted by 耗尽温柔 on 2019-12-10 10:44:19
Question: I'm making an Othello player and implemented a minimax algorithm with alpha-beta pruning. Then I did a bunch of research on the best players online and kept hearing about a "negamax" algorithm that they all use. It seems like most people think negamax is faster than minimax (I think because it doesn't switch between the min and max player?), so I'd like to turn my minimax algorithm into negamax if that's not too difficult. I was wondering if people had any insight on how much faster using negamax
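For reference, a minimal negamax-with-alpha-beta sketch; legal_moves, apply, and evaluate are placeholder names for whatever the Othello engine provides (evaluate scores the position from the +1 player's point of view), not a definitive implementation:

```python
# Minimal negamax sketch with alpha-beta pruning. `color` is +1 for the side to
# move and -1 for the opponent; the helper names are placeholders.
def negamax(state, depth, alpha, beta, color):
    moves = legal_moves(state, color)
    if depth == 0 or not moves:
        return color * evaluate(state)          # score from the mover's point of view
    best = float("-inf")
    for move in moves:
        child = apply(state, move, color)
        score = -negamax(child, depth - 1, -beta, -alpha, -color)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                       # same cutoff as alpha-beta minimax
            break
    return best
```

Note that negamax explores the same nodes as a correct minimax with alpha-beta; the gain is mostly that one function replaces the separate min and max branches, not an asymptotic speedup.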

How to correctly calculate the tf.nn.weighted_cross_entropy_with_logits pos_weight variable

Submitted by 瘦欲@ on 2019-12-09 23:47:19
Question: I am using a convolutional neural network. My data is quite imbalanced; I have two classes. My first class contains 551,462 image files. My second class contains 52,377 image files. I want to use weighted_cross_entropy_with_logits, but I'm not sure I'm calculating the pos_weight variable correctly. Right now I'm using classes_weights = tf.constant([0.0949784, 1.0]) cross_entropy = tf.reduce_mean(tf.nn.weighted_cross_entropy_with_logits(logits=logits, targets=y_, pos_weight=classes_weights)) train
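One common convention is pos_weight_c = (number of examples not in class c) / (number of examples in class c) for each class logit, so that errors on the rare class are up-weighted. With the counts above that is roughly 52377 / 551462 ≈ 0.095 for the majority class and 551462 / 52377 ≈ 10.5 for the minority class. A hedged sketch, keeping the TF1-style call from the question; the logits and y_ below are stand-ins for the poster's tensors:

```python
# Hedged sketch: derive per-class pos_weight from the raw counts instead of
# hard-coding it. Assumes y_ is one-hot with class 0 the majority class.
import tensorflow as tf

count_class0 = 551462.0   # majority class
count_class1 = 52377.0    # minority class
total = count_class0 + count_class1

# pos_weight per class: (examples NOT in the class) / (examples in the class)
classes_weights = tf.constant([(total - count_class0) / count_class0,   # ~0.095
                               (total - count_class1) / count_class1])  # ~10.5

logits = tf.zeros([8, 2])                                  # stand-in network output
y_ = tf.one_hot(tf.zeros([8], dtype=tf.int32), depth=2)    # stand-in one-hot labels

cross_entropy = tf.reduce_mean(
    tf.nn.weighted_cross_entropy_with_logits(
        logits=logits, targets=y_, pos_weight=classes_weights))
```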

Programming a chess AI

Submitted by 时光怂恿深爱的人放手 on 2019-12-09 14:30:43
Question: I'm looking to try and write a chess AI. Is there something I can use on the .NET Framework (or maybe even a chess program scripted in Lua) that will let me write and test a chess AI without worrying about actually making a chess game? Answer 1: Not sure what you are trying to do. If you are looking for a ready-to-use chess GUI, you can use WinBoard. It is completely decoupled from the underlying chess engine(s), thanks to an established communication protocol. Your chess engine thus becomes
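For context, WinBoard talks to an engine as a separate process over a text protocol on standard input/output (its native xboard/CECP protocol; UCI engines can also be hosted through an adapter such as PolyGlot). As a rough illustration only, and in Python rather than .NET, a stub engine loop speaking UCI, so the GUI handles the board and rules while you only supply the move choice:

```python
# Rough illustration, not WinBoard-specific code: a stub engine speaking the UCI
# protocol over stdin/stdout. A real engine would parse "position" and search on
# "go"; here the reply is hard-coded just to show the handshake.
import sys

def uci_loop():
    while True:
        line = sys.stdin.readline()
        if not line:
            break
        cmd = line.strip()
        if cmd == "uci":
            print("id name StubEngine")
            print("uciok")
        elif cmd == "isready":
            print("readyok")
        elif cmd.startswith("position"):
            pass                        # track the position the GUI sends
        elif cmd.startswith("go"):
            print("bestmove e2e4")      # the search / evaluation would go here
        elif cmd == "quit":
            break
        sys.stdout.flush()

if __name__ == "__main__":
    uci_loop()
```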

What is the difference between Deep Learning and traditional Artificial Neural Network machine learning? [closed]

Submitted by 二次信任 on 2019-12-09 12:33:34
Question: Closed. This question needs to be more focused and is not currently accepting answers. Closed 7 months ago. Can you offer a concise explanation of the differences between Deep Learning and traditional machine learning that utilizes neural networks? How many levels are needed to make a neural network "deep"? Is this all just marketing hype? Answer 1: I beg to differ with @Frank Puffer's answer.

Combining Weak Learners into a Strong Classifier

Submitted by 狂风中的少年 on 2019-12-09 12:13:28
Question: How do I combine a few weak learners into a strong classifier? I know the formula, but the problem is that in every paper about AdaBoost that I've read there are only formulas without any example. I mean, I've got the weak learners and their weights, so I can do what the formula tells me to do (multiply a learner by its weight, add another one multiplied by its weight, and so on), but how exactly do I do that? My weak learners are decision stumps. They have an attribute and a threshold, so what do
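Concretely, the strong classifier is the weighted vote H(x) = sign(Σ_t α_t · h_t(x)), where each h_t is a stump that outputs +1 or -1. A small sketch; the attribute/threshold/alpha fields and the sign convention are assumptions about the poster's representation, not their code:

```python
# Sketch of AdaBoost's final classifier H(x) = sign(sum_t alpha_t * h_t(x)),
# where each weak learner is a decision stump returning +1 or -1.
def stump_predict(stump, x):
    # +1 if the chosen attribute exceeds the threshold, otherwise -1
    return 1 if x[stump["attribute"]] > stump["threshold"] else -1

def strong_classify(stumps, x):
    score = sum(s["alpha"] * stump_predict(s, x) for s in stumps)
    return 1 if score >= 0 else -1

stumps = [
    {"attribute": 0, "threshold": 2.5, "alpha": 0.9},
    {"attribute": 1, "threshold": 7.0, "alpha": 0.4},
    {"attribute": 0, "threshold": 5.0, "alpha": 0.3},
]
print(strong_classify(stumps, [3.0, 6.0]))  # weighted vote of the three stumps
```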