artificial-intelligence

What's the difference between uniform-cost search and Dijkstra's algorithm?

孤者浪人 submitted on 2019-12-02 16:29:48
I was wondering what the difference is between uniform-cost search and Dijkstra's algorithm. They seem to be the same algorithm.

Wikipedia describes the relationship like this: "Dijkstra's algorithm, which is perhaps better-known, can be regarded as a variant of uniform-cost search, where there is no goal state and processing continues until all nodes have been removed from the priority queue, i.e. until shortest paths to all nodes (not just a goal node) have been determined." (http://en.wikipedia.org/wiki/Uniform-cost_search#Relationship_to_other_algorithms)

RobertoR
Dijkstra's algorithm searches for shortest paths from the root to every other node
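The relationship is easy to see in code. Below is a minimal illustrative sketch (my own Python, not from the answer): with a goal argument it behaves as uniform-cost search and stops as soon as the goal is settled; with no goal it runs until the queue empties, which is exactly Dijkstra's algorithm.

```python
import heapq

def uniform_cost_search(graph, start, goal=None):
    # graph: dict mapping node -> list of (neighbor, edge_cost) pairs.
    # With a goal, this is uniform-cost search: stop when the goal is popped.
    # With goal=None, it is Dijkstra's algorithm: run until the priority
    # queue is empty and return shortest-path costs to every reachable node.
    dist = {}
    frontier = [(0, start)]
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node in dist:
            continue              # already settled with a cheaper cost
        dist[node] = cost
        if node == goal:
            return cost           # UCS: stop at the goal
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in dist:
                heapq.heappush(frontier, (cost + edge_cost, neighbor))
    return dist                   # Dijkstra: costs to all reachable nodes
```

The only difference between the two behaviors is the early return; everything else (the priority queue, the settled set) is shared.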

SVM and Neural Network

a 夏天 submitted on 2019-12-02 16:23:12
What is the difference between an SVM and a neural network? Is it true that a linear SVM is the same as a NN, and that for non-linearly separable problems a NN adds hidden layers while an SVM changes the space dimensions?

There are two parts to this question. The first part is "what is the form of function learned by these methods?" For a NN and an SVM this is typically the same. For example, a single hidden layer neural network uses exactly the same form of model as an SVM. That is, given an input vector x, the output is:

    output(x) = sum_over_all_i weight_i * nonlinear_function_i(x)

Generally the nonlinear functions will
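To make the shared form concrete, here is an illustrative sketch of both models written as that same weighted sum. The tanh hidden units and the RBF kernel are my assumptions (common choices, not stated in the answer), and bias terms are omitted for brevity.

```python
import math

def nn_output(x, hidden_weights, output_weights):
    # One-hidden-layer NN: nonlinear_function_i(x) = tanh(v_i . x),
    # where v_i is a learned projection; output = sum_i w_i * tanh(v_i . x).
    return sum(w * math.tanh(sum(vj * xj for vj, xj in zip(v, x)))
               for v, w in zip(hidden_weights, output_weights))

def svm_output(x, support_vectors, alphas, gamma=1.0):
    # RBF-kernel SVM decision value: nonlinear_function_i(x) is a kernel
    # centered on support vector s_i; the alphas play the role of weights.
    return sum(a * math.exp(-gamma * sum((xj - sj) ** 2
                                         for xj, sj in zip(x, s)))
               for s, a in zip(support_vectors, alphas))
```

Both are the same outer structure; they differ in where the nonlinear basis functions come from (learned projections vs. kernels anchored at support vectors) and in how the weights are trained.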

How to train a neural network to supervised data set using pybrain black-box optimization?

隐身守侯 submitted on 2019-12-02 16:21:26
I have played around a bit with pybrain and understand how to generate neural networks with custom architectures and train them on supervised data sets using the backpropagation algorithm. However, I am confused by the optimization algorithms and the concepts of tasks, learning agents and environments. For example: how would I implement a neural network such as (1) to classify the XOR dataset using pybrain's genetic algorithm (2)?

(1) pybrain.tools.shortcuts.buildNetwork(2, 3, 1)
(2) pybrain.optimization.GA()

I finally worked it out! It's always easy once you know how! Essentially the first arg to
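Independent of pybrain's API (the exact argument order there is cut off above), the idea a GA trainer implements can be sketched in plain Python: treat the network's weight vector as the genome and evolve it against the XOR error. Everything below, including the 2-3-1 weight packing, population size and mutation scale, is an illustrative choice of mine, not pybrain's.

```python
import math
import random

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # 2-3-1 network with biases: 13 floats packed into a flat genome.
    # Indices 0-8: three hidden units (2 weights + bias each);
    # indices 9-12: output weights + output bias.
    h = [math.tanh(w[3*i] * x[0] + w[3*i + 1] * x[1] + w[3*i + 2])
         for i in range(3)]
    return math.tanh(w[9]*h[0] + w[10]*h[1] + w[11]*h[2] + w[12])

def error(w):
    # Fitness = squared error over the four XOR patterns (lower is better).
    return sum((forward(w, x) - y) ** 2 for x, y in XOR)

def evolve(pop_size=50, generations=200, sigma=0.4, seed=0):
    # Elitist GA: keep the best fifth, refill with mutated copies of elites.
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(13)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)
        elite = pop[:pop_size // 5]
        pop = elite + [[g + rng.gauss(0, sigma) for g in rng.choice(elite)]
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=error)
```

Because the elites are copied unchanged into each next generation, the best error can only improve over time; that is the property the test below checks.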

Tracing and Returning a Path in Depth First Search

天大地大妈咪最大 submitted on 2019-12-02 16:16:57
So I have a problem that I want to use depth-first search to solve, returning the first path that DFS finds. Here is my (incomplete) DFS function:

    start = problem.getStartState()
    stack = Stack()
    visited = []
    stack.push(start)
    if problem.isGoalState(problem.getStartState()):
        return something
    while stack:
        parent = stack.pop()
        if parent in visited:
            continue
        if problem.isGoalState(parent):
            return something
        visited.append(parent)
        children = problem.getSuccessors(parent)
        for child in children:
            stack.push(child[0])

The startState and goalState variables are simply a tuple of x, y coordinates. problem
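For reference, one standard way to make an iterative DFS return the path is to push (state, path) pairs instead of bare states, so the path to each node travels with it on the stack. This is a generic sketch, not tied to the problem API above:

```python
def dfs_path(start, successors, is_goal):
    # Iterative DFS that returns the first root-to-goal path found.
    # successors(state) -> iterable of child states;
    # is_goal(state) -> bool. Each stack entry carries its own path.
    stack = [(start, [start])]
    visited = set()
    while stack:
        state, path = stack.pop()
        if state in visited:
            continue
        if is_goal(state):
            return path
        visited.add(state)
        for child in successors(state):
            if child not in visited:
                stack.append((child, path + [child]))
    return None
```

The trade-off is extra memory for the per-entry path copies; for small search problems that is usually fine.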

What are some impressive algorithms or software in the world of AI?

妖精的绣舞 submitted on 2019-12-02 16:04:49
I have always loved the idea of AI and evolutionary algorithms. Unfortunately, as we all know, the field hasn't developed nearly as fast as was expected in its early days. What I am looking for are some examples that have the "wow" factor:

- Self-directed learning systems that adapted in unexpected ways.
- Game agents that were particularly dynamic and produced unexpected strategies.
- Symbolic representation systems that actually produced some meaningful and insightful output.
- Interesting emergent behavior in multi-agent systems.

Let's not get into the semantics of what defines AI. If it looks or

When to use a certain Reinforcement Learning algorithm?

末鹿安然 submitted on 2019-12-02 15:54:30
I'm studying reinforcement learning and reading Sutton's book for a university course. Besides the classic DP, MC, TD and Q-Learning algorithms, I'm reading about policy gradient methods and genetic algorithms for solving decision problems. I have never had experience in this topic before, and I'm having trouble understanding when one technique should be preferred over another. I have a few ideas, but I'm not sure about them. Can someone briefly explain, or point me to a source covering the typical situations in which a certain method should be used? As far as I understand:
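As one concrete anchor for the comparison, here is a minimal tabular Q-learning sketch on a toy chain MDP. The environment, step sizes and episode count are illustrative choices of mine, not from the question:

```python
import random

def q_learning(n_states=4, episodes=300, alpha=0.5, gamma=0.9,
               eps=0.2, seed=0):
    # Toy chain MDP: states 0..n-1, actions 0 (left) / 1 (right);
    # reaching the last state yields reward 1 and ends the episode.
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda act: Q[s][act])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: bootstrap on the best action in s2.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

After training, the learned values prefer the "right" action in every state, which is the optimal policy on this chain.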

How to utilize Hebbian learning?

。_饼干妹妹 submitted on 2019-12-02 15:52:34
I want to upgrade my evolution simulator to use Hebbian learning, like this one. I basically want small creatures to be able to learn how to find food. I achieved that with basic feedforward networks, but I'm stuck on understanding how to do it with Hebbian learning. The basic principle of Hebbian learning is that if two neurons fire together, they wire together. So the weights are updated like this:

    weight_change = learning_rate * input * output

The information I've found on how this can be useful is pretty scarce, and I don't get it. In my current version of the simulator, the weights between
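The rule itself is a one-liner per weight. A hedged sketch: the decay term below is a common stabilizer I've added (the pure rule grows weights without bound); it is not part of the question's formula.

```python
def hebbian_update(weights, inputs, output, learning_rate=0.1, decay=0.01):
    # Plain Hebb rule: delta_w_i = learning_rate * input_i * output.
    # The decay term gently shrinks weights each step so they stay bounded;
    # set decay=0.0 to recover the pure rule from the question.
    return [w + learning_rate * x * output - decay * w
            for w, x in zip(weights, inputs)]
```

Note the update is local and unsupervised: each weight changes using only the activity on its two ends, with no error signal, which is exactly why it feels so different from backpropagation.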

Using Markov chains (or something similar) to produce an IRC-bot

两盒软妹~` submitted on 2019-12-02 15:38:47
I tried Google and found little that I could understand. I understand Markov chains at a very basic level: it's a mathematical model that depends only on the previous state to change states... so, sort of an FSM with weighted random transitions instead of different criteria? I've heard that you can use them to generate semi-intelligent nonsense, given sentences of existing words to use as a dictionary of sorts. I can't think of search terms to find this, so can anyone link me or explain how I could produce something that gives a semi-intelligent answer? (If you asked it about pie, it would not start going
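The standard construction is: split a corpus into words, map each state (the last k words) to the list of words observed to follow it, then walk the chain with random picks. Choosing uniformly from the follower list naturally weights by observed frequency, since frequent continuations appear in the list more often. A minimal sketch:

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    # Map each tuple of `order` consecutive words to the words
    # observed to follow it in the corpus.
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=10, seed=None):
    # Random-walk the chain from a random starting state.
    rng = random.Random(seed)
    state = rng.choice(list(chain))
    out = list(state)
    for _ in range(length - len(state)):
        followers = chain.get(state)
        if not followers:
            break              # dead end: no observed continuation
        nxt = rng.choice(followers)
        out.append(nxt)
        state = state[1:] + (nxt,)
    return ' '.join(out)
```

With order=1 the output is mostly nonsense; raising the order (2-3 words of context) makes it read more like the source at the cost of originality. For an IRC bot, the corpus would be the channel's chat log.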

Chess Optimizations

♀尐吖头ヾ submitted on 2019-12-02 15:38:37
OK, so I have been working on my chess program for a while and I am beginning to hit a wall. I have done all of the standard optimizations (NegaScout, iterative deepening, killer moves, history heuristic, quiescence search, pawn position evaluation, some search extensions) and I'm all out of ideas! I am looking to make it multi-threaded soon, and that should give me a good boost in performance, but aside from that, are there any other nifty tricks you have come across? I have considered switching to MTD(f), but I have heard it is a hassle and isn't really worth it. What I would be most
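For reference, the NegaScout (principal variation search) idea mentioned above can be sketched on a static game tree: search the first child with a full window, the rest with a null window, and re-search only on a fail-high. This is illustrative code of mine, not from the question:

```python
def negascout(node, alpha, beta, color=1):
    # node: either a numeric leaf value (from the first player's view)
    # or a list of child nodes. color flips sign at each ply (negamax).
    if not isinstance(node, list):
        return color * node
    score = float('-inf')
    first = True
    for child in node:
        if first:
            # Principal variation: full-window search.
            s = -negascout(child, -beta, -alpha, -color)
            first = False
        else:
            # Null-window probe: cheap test that the child can't beat alpha.
            s = -negascout(child, -alpha - 1, -alpha, -color)
            if alpha < s < beta:
                # Fail-high: probe was wrong, re-search with a real window.
                s = -negascout(child, -beta, -s, -color)
        score = max(score, s)
        alpha = max(alpha, s)
        if alpha >= beta:
            break                  # beta cutoff
    return score
```

The win over plain alpha-beta comes from good move ordering: when the first move really is best, the null-window probes on the siblings cut off almost immediately, which is why it combines so well with killer moves and the history heuristic.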

Rush Hour - Solving the game

一个人想着一个人 submitted on 2019-12-02 15:10:27
Rush Hour, if you're not familiar with it: the game consists of a collection of cars of varying sizes, set either horizontally or vertically, on an NxM grid that has a single exit. Each car can move forward/backward in the direction it's set in, as long as another car is not blocking it. You can never change the direction of a car. There is one special car, usually the red one. It's set in the same row as the exit, and the objective of the game is to find a series of moves (a move being moving a car N steps back or forward) that will allow the red car to drive out of the maze. I've
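Since every move costs the same, a breadth-first search over board states finds a shortest solution directly. Below is a minimal sketch under simplifying assumptions of mine (square board, exit on the right edge of the red car's row, each car encoded by the position of its top-left cell, moves of one step at a time):

```python
from collections import deque

def occupied(cars, specs):
    # cars: (row, col) top-left cell per car; specs: (length, horizontal?).
    cells = set()
    for (r, c), (length, horiz) in zip(cars, specs):
        for i in range(length):
            cells.add((r, c + i) if horiz else (r + i, c))
    return cells

def solve(cars, specs, size=6):
    # BFS over board states; car 0 is the red car, and the goal is for its
    # rightmost cell to reach the right edge of the grid (the exit).
    start = tuple(cars)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        r0, c0 = state[0]
        if c0 + specs[0][0] == size:       # red car touches the exit column
            return path                    # list of (car_index, direction)
        for i, ((r, c), (length, horiz)) in enumerate(zip(state, specs)):
            others = occupied(state[:i] + state[i+1:],
                              specs[:i] + specs[i+1:])
            for d in (-1, 1):              # slide one cell back or forward
                nr, nc = (r, c + d) if horiz else (r + d, c)
                new_cells = {(nr, nc + k) if horiz else (nr + k, nc)
                             for k in range(length)}
                if all(0 <= rr < size and 0 <= cc < size
                       for rr, cc in new_cells) and not (new_cells & others):
                    nxt = state[:i] + ((nr, nc),) + state[i+1:]
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, path + [(i, d)]))
    return None
```

Because BFS explores states in order of move count and each board state is visited once, the first path that reaches the goal is a minimum-length solution; the `seen` set is what keeps the state space tractable.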