artificial-intelligence

Predictive Logic in Programming?

泪湿孤枕 submitted on 2019-12-06 12:41:32
Question: I was thinking about how, in the probably distant future, many people think we won't rely on physical input (i.e. a keyboard) as much, because technology that reads brain waves (which already exists to some extent) will be available. Kind of scares me... anyway, while I was daydreaming about this, the idea came to me: what if a programmer could implement logic in their code to accurately predict the user's intentions and then carry out the intended operation with no need for human

A* (A-star) implementation in AS3

若如初见. submitted on 2019-12-06 09:33:56
I am putting together a project for a class that requires me to put AI into a top-down tactical strategy game in Flash AS3. I decided to use a node-based pathfinding approach because the game is based on a circular movement scheme. When a player moves a unit, he essentially draws a series of connected line segments that the unit will follow. I am trying to build a similar operation for the AI units in our game by creating a list of nodes to traverse to a target node, hence my use of A* (the resulting path can be used to create this line). Here is my algorithm
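The asker's own algorithm is cut off above; for comparison, here is a minimal A* sketch on a 4-connected grid (in Python rather than AS3, and the grid layout, Manhattan heuristic, and function names are illustrative assumptions, not the asker's setup):

```python
import heapq

def astar(start, goal, walls, width, height):
    """Minimal A* on a 4-connected grid; returns a list of (x, y) cells."""
    def h(p):  # Manhattan-distance heuristic (admissible on a grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start, None)]  # (f, g, cell, parent)
    came_from = {}
    g_cost = {start: 0}
    while open_heap:
        _, g, cur, parent = heapq.heappop(open_heap)
        if cur in came_from:          # already expanded via a cheaper path
            continue
        came_from[cur] = parent
        if cur == goal:               # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height and nxt not in walls:
                if g + 1 < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = g + 1
                    heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None  # no path exists
```

Feeding the resulting node list to the game's line-drawing routine would then reproduce the player's segment-based movement for AI units.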

Iterative Deepening A Star (IDA*) to solve n-puzzle (sliding puzzle) in Java

我怕爱的太早我们不能终老 submitted on 2019-12-06 09:32:11
Question: I've implemented a program able to solve the n-puzzle problem with A*. Since the state space is too big, I cannot precompute it and have to generate the possible states at runtime. This way A* works fine for a 3-puzzle, but for a 4-puzzle it can take too long. Using Manhattan distance adjusted with linear conflicts, it is still fast when the optimal solution requires around 25 moves; around 35 takes 10 seconds, and 40 takes 180 seconds. I haven't tried more yet. I think that's because I
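The question is truncated, but since its title names IDA*, a generic sketch of the algorithm may help: a depth-first search bounded by f = g + h, where the bound is raised to the smallest exceeded f value each iteration. The `neighbors` and `h` callbacks are illustrative assumptions; a real n-puzzle would pass its move generator and heuristic.

```python
def ida_star(start, goal, neighbors, h):
    """Iterative Deepening A*: DFS bounded by f = g + h; the bound grows
    to the smallest f that exceeded it on the previous iteration."""
    bound = h(start)
    path = [start]

    def search(g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f          # report the exceeded f to raise the bound
        if node == goal:
            return True
        minimum = float("inf")
        for nxt in neighbors(node):
            if nxt not in path:   # avoid cycles along the current path
                path.append(nxt)
                t = search(g + 1, bound)
                if t is True:
                    return True
                minimum = min(minimum, t)
                path.pop()
        return minimum

    while True:
        t = search(0, bound)
        if t is True:
            return path
        if t == float("inf"):
            return None           # goal unreachable
        bound = t
```

Because IDA* stores only the current path, its memory footprint stays linear in the solution depth, which is exactly what makes it attractive when A*'s open list blows up on the 4x4 puzzle.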

A* Algorithm with Manhattan Distance Heuristic

我们两清 submitted on 2019-12-06 09:14:38
I've been working on a 15-puzzle solver in C, and I've had some issues with the huge amount of memory my code uses. I won't be posting my code because it's too long... I've implemented most of the libraries I'm using myself, and posting them would probably just confuse you. Let's start with the basics. The things I'm using right now are (all of them implemented in C):

- Fibonacci heap:

/* Struct for the Fibonacci Heap */
typedef struct _fiboheap {
    int size;           // Number of nodes in the heap
    node min;           // Pointer to the minimum element in the heap
    funCompare compare; // Function to compare within
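Not the asker's code, but one common way to curb A*'s memory use on the 15-puzzle is to key the closed set on a compact, hashable board encoding rather than storing whole node structs. A small sketch of the idea in Python (the tuple encoding and function names are illustrative assumptions):

```python
def encode(board):
    """Flatten a 4x4 board (list of row lists) into a hashable tuple key."""
    return tuple(tile for row in board for tile in row)

closed = set()

def seen_before(board):
    """Return True if this state was already expanded; record it otherwise."""
    key = encode(board)
    if key in closed:
        return True
    closed.add(key)
    return False
```

Storing only these 16-element keys in the closed set, instead of full heap nodes, is often enough to keep duplicate states from dominating memory.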

Neural network not converging

蹲街弑〆低调 submitted on 2019-12-06 07:54:56
I'm new to neural networks, and to programming generally. I've written a neural network in Java, and I'm looking at football data. I have two inputs: 1) home team win % over n games, 2) away team win % over n games. Using 'standard statistical models' one can predict the number of goals that will occur in a match from these two numbers alone, with a reasonable degree of accuracy. However, when I attempt to train my NN to predict the number of goals, it simply doesn't converge :( I'm using a genetic algorithm to train the network; here is the fittest individual from the first few generations with a
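Not a diagnosis of this particular network, but a frequent cause of non-convergence on data like this is unscaled targets: the win-percentage inputs live in [0, 1] while goal counts do not, so a squashing output activation can never reach the targets. A minimal sketch of scaling targets into the output range and back (the maximum-goals constant is an illustrative assumption):

```python
MAX_GOALS = 10.0  # assumed upper bound on goals in a match

def scale_target(goals):
    """Map a goal count into [0, 1] so a sigmoid output can represent it."""
    return min(goals, MAX_GOALS) / MAX_GOALS

def unscale_output(y):
    """Map a network output in [0, 1] back to a goal count."""
    return y * MAX_GOALS
```

Training against `scale_target(goals)` and reporting `unscale_output(prediction)` keeps inputs and targets on comparable scales, which tends to help whether the weights are fit by backpropagation or by a genetic algorithm.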

AI: Partial Unification in Open-World Reference Resolution

那年仲夏 submitted on 2019-12-06 06:40:19
When performing reference resolution on predicates describing the semantics of dialogue expressions, I need to allow for partial unification because I am working in an open world. For example, consider the following scenario: There is a blue box in front of you. We refer to this blue box using the id 3. A set of predicates box(x)^blue(x) can easily resolve to the blue box you know about; making this query will return 3. A set of predicates ball(x)^yellow(x) will not resolve to anything. This is fine. But now consider ball(x)^yellow(x)^box(y)^blue(y)^behind(x,y), that is, the yellow ball
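A minimal sketch of the open-world idea for the unary predicates above: bind what the knowledge base can satisfy, and give any leftover variable a fresh hypothetical id instead of failing the whole conjunction. The fact format and id scheme are illustrative assumptions, not any particular framework's API, and consistency checking across conjuncts (and binary relations like behind) is omitted.

```python
import itertools

known = {("box", 3), ("blue", 3)}  # facts about the known object, id 3
fresh = itertools.count(100)       # ids for hypothesized (unseen) objects

def resolve(predicates):
    """Partially unify (predicate, variable) pairs against known facts.
    An unmatched variable gets a fresh id, modeling a hypothetical object."""
    bindings = {}
    for name, var in predicates:
        if var in bindings:
            continue  # variable already bound by an earlier conjunct
        matches = [obj for (p, obj) in known if p == name]
        bindings[var] = matches[0] if matches else next(fresh)
    return bindings
```

Under this sketch, ball(x)^yellow(x)^box(y)^blue(y) binds y to the known box 3 and x to a hypothetical new id, which is the partial-unification behavior the question asks for.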

Converting Minimax to Negamax (python)

[亡魂溺海] submitted on 2019-12-06 06:31:04
I'm making an Othello player and have implemented a minimax algorithm with alpha-beta pruning. Then I did a bunch of research on the best players online and keep hearing about a "negamax" algorithm that they all use. It seems most people think negamax is faster than minimax (I think because it doesn't switch between the min and max player?), so I'd like to turn my minimax algorithm into negamax if that's not too difficult. I was wondering whether people have any insight into how much faster negamax is, and any tips or code on turning my minimax code into a negamax algorithm would be appreciated!
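Worth noting: negamax explores the same tree as minimax with alpha-beta, so it is not asymptotically faster; its appeal is that max(a, b) = -min(-a, -b) lets one body replace the separate min and max branches. A generic sketch of the transformation (the evaluate/moves/apply_move callbacks are illustrative assumptions, not the asker's Othello code):

```python
def negamax(state, depth, alpha, beta, color, evaluate, moves, apply_move):
    """Negamax with alpha-beta pruning. `color` is +1 for the root player,
    -1 for the opponent; scores are always from the mover's point of view."""
    legal = moves(state)
    if depth == 0 or not legal:
        return color * evaluate(state)
    best = float("-inf")
    for m in legal:
        # negate: the child's best score is the opponent's, seen from our side;
        # the (alpha, beta) window is negated and swapped for the same reason
        score = -negamax(apply_move(state, m), depth - 1,
                         -beta, -alpha, -color, evaluate, moves, apply_move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:   # cutoff: the opponent will never allow this line
            break
    return best
```

To convert existing minimax code, delete the minimizing branch, negate the recursive call, and swap-and-negate the alpha-beta window as above; with a symmetric evaluation function the move chosen is identical to minimax's.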

Search algorithm with avoiding repeated states

心已入冬 submitted on 2019-12-06 05:13:36
With reference to Section 3.5 of Russell and Norvig: On a grid, each state has four successors, so the search tree including repeated states has 4^d leaves; but there are only about 2d^2 distinct states within d steps of any given state. What is the meaning of distinct states here? Can someone explain it to me by considering various values of d, say 1, 2, 3, 4?

"What is the meaning of distinct states here."

A distinct state is a unique cell: you count each cell in the grid only once. Crude upper bound on the number of distinct states: first, look at a subgrid of size (2d+1) x (2d+1), and you start
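The contrast in the quoted passage can be checked numerically: the number of grid cells within Manhattan distance d of a start cell is exactly 2d^2 + 2d + 1 (about 2d^2 for large d), while the tree without repeated-state checking has 4^d leaves. A quick sketch for d = 1..4:

```python
def distinct_states(d):
    """Cells within Manhattan distance d of the origin on an infinite grid."""
    return sum(1 for x in range(-d, d + 1)
                 for y in range(-d, d + 1)
                 if abs(x) + abs(y) <= d)

for d in (1, 2, 3, 4):
    # distinct cells vs. the closed-form count vs. tree leaves 4^d
    print(d, distinct_states(d), 2 * d * d + 2 * d + 1, 4 ** d)
```

For d = 4 this gives 41 distinct cells against 256 tree leaves, showing why eliminating repeated states matters even at small depths.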

Order Crossover (OX) - genetic algorithm

旧时模样 submitted on 2019-12-06 05:09:16
Question: Can someone explain to me how Order Crossover works? I will give this example, and I want to understand it in a generic way so I can implement it afterwards.

Parent 1 = 1 2 3 | 4 5 6 7 | 8 9
Parent 2 = 4 5 2 | 1 8 7 6 | 9 3

and the solution is two children:

Child 1 = 2 1 8 | 4 5 6 7 | 9 3
Child 2 = 3 4 5 | 1 8 7 6 | 9 2

I understand some parts but not others. Thanks

Answer 1: One such solution for Ordered Crossover is detailed in this post. This answer provides some sample Java code with documentation
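A sketch of Order Crossover (OX) that reproduces the example: copy the middle slice from one parent, then read the other parent's genes starting after the second cut point (wrapping around), skip any gene already copied, and fill the empty slots after the slice first, wrapping to the front. The fixed cut points 3 and 7 match the example; a real GA would pick them at random.

```python
def order_crossover(p1, p2, a, b):
    """One OX child: slice p1[a:b] is kept; the rest comes from p2,
    read from index b with wraparound, preserving p2's relative order."""
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    kept = set(child[a:b])
    order = p2[b:] + p2[:b]                 # p2 read from the 2nd cut, wrapping
    fill = [g for g in order if g not in kept]
    # fill positions after the slice first, wrapping around to the front
    positions = list(range(b, len(p1))) + list(range(a))
    for pos, gene in zip(positions, fill):
        child[pos] = gene
    return child
```

Calling it with the parents swapped produces the second child, exactly as in the example above.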

How to train neural network incrementally in Matlab?

。_饼干妹妹 submitted on 2019-12-06 04:22:52
Question: Suppose I have a very big training set, so that Matlab hangs while training or there is insufficient memory to hold the training set. Is it possible to split the training set into parts and train the network part by part? Is it possible to train the network with one sample at a time (one by one)?

Answer 1: You can just manually divide the dataset into batches and train on them one after another:

for bn = 1:num_batches
    inputs = <get batch bn inputs>;
    targets = <get batch bn targets>;
    net = train(net, inputs, targets);
end