artificial-intelligence

Completeness of depth-first search

情到浓时终转凉″ submitted on 2019-12-01 02:22:32
I quote from Artificial Intelligence: A Modern Approach: The properties of depth-first search depend strongly on whether the graph-search or tree-search version is used. The graph-search version, which avoids repeated states and redundant paths, is complete in finite state spaces because it will eventually expand every node. The tree-search version, on the other hand, is not complete [...]. Depth-first tree search can be modified at no extra memory cost so that it checks new states against those on the path from the root to the current node; this avoids infinite loops in finite state spaces
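
A minimal sketch (not the book's code) of the modification quoted above: a depth-first tree search that rejects any successor already on the current root-to-node path, so it cannot loop forever in a finite state space. The goal_test and successors callbacks are assumptions for illustration.

def dfs_tree_search(state, goal_test, successors, path=()):
    # 'path' holds the states on the root-to-node path; checking membership in it
    # is the cycle check described in the quote.
    path = path + (state,)
    if goal_test(state):
        return list(path)
    for nxt in successors(state):
        if nxt in path:          # already on the current path: skip to avoid a loop
            continue
        result = dfs_tree_search(nxt, goal_test, successors, path)
        if result is not None:
            return result
    return None                  # no solution below this node

# Toy usage on a graph with a cycle between "a" and "b":
graph = {"a": ["b"], "b": ["a", "c"], "c": []}
print(dfs_tree_search("a", lambda s: s == "c", lambda s: graph[s]))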

Why is a bias neuron necessary for a backpropagating neural network that recognizes the XOR operator?

此生再无相见时 submitted on 2019-11-30 20:45:49
I posted a question yesterday regarding issues that I was having with my backpropagating neural network for the XOR operator. I did a little more work and realized that it may have to do with not having a bias neuron. My question is: what is the role of the bias neuron in general, and what is its role in a backpropagating neural network that recognizes the XOR operator? Is it possible to create one without a bias neuron? Kiril: It's possible to create a neural network without a bias neuron... it would work just fine, but for more information I would recommend you see the answers to this
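
A small illustration (weights and bias values are made up) of what the bias buys you: without a bias term, a sigmoid unit's decision boundary w · x = 0 must pass through the origin, so the hidden units cannot place the separating lines that XOR needs.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.0, 0.0])          # the (0, 0) input of XOR
w = np.array([2.0, -3.0])         # arbitrary weights

no_bias = sigmoid(np.dot(w, x))            # always 0.5 at the origin, whatever w is
with_bias = sigmoid(np.dot(w, x) - 1.5)    # the bias shifts the boundary off the origin

print(no_bias, with_bias)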

Interesting NLP/machine-learning style project — analyzing privacy policies

懵懂的女人 submitted on 2019-11-30 20:34:08
I wanted some input on an interesting problem I've been assigned. The task is to analyze hundreds, and eventually thousands, of privacy policies and identify their core characteristics. For example: do they take the user's location? Do they share or sell information with third parties? And so on. I've talked to a few people, read a lot about privacy policies, and thought about this myself. Here is my current plan of attack: First, read a lot of privacy policies and find the major "cues" or indicators that a certain characteristic is met. For example, if hundreds of privacy policies have the same line: "We will take
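
A rough sketch of the "cue" idea described above; the characteristic names and cue phrases are invented for illustration, and a real system would need many more cues per characteristic.

CUES = {
    "collects_location": ["your location", "geolocation", "gps data"],
    "shares_with_third_parties": ["third parties", "share your information", "sell your data"],
}

def characterize(policy_text):
    # Mark a characteristic as present if any of its cue phrases appears in the policy.
    text = policy_text.lower()
    return {feature: any(cue in text for cue in cues)
            for feature, cues in CUES.items()}

print(characterize("We may share your information with trusted third parties."))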

Manhattan distance in A*

…衆ロ難τιáo~ submitted on 2019-11-30 20:10:06
I am implementing an NxN puzzle solver using the A* search algorithm with Manhattan distance as the heuristic, and I've run into a curious bug (?) which I can't wrap my head around. Consider these puzzles (the 0 element being the blank space): (initial) 1 0 2 7 5 4 8 6 3 (goal) 1 2 3 4 5 6 7 8 0. The minimum number of moves to reach the solution from the initial state is 11. However, my solver reaches the goal in 17 moves. And therein lies the problem: my puzzle solver mostly solves the solvable puzzles in a correct (minimum) number of moves, but for this particular puzzle, my solver overshoots the minimum number of
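
For reference, a sketch of the Manhattan-distance heuristic for an NxN sliding puzzle (the board encoding as a flat row-major tuple is an assumption). One common cause of non-optimal solutions is counting the blank (0) tile in the heuristic, which makes it inadmissible; this version skips the blank.

def manhattan(board, n):
    # 'board' is a flat tuple of length n*n; the goal is 1..n*n-1 followed by 0.
    dist = 0
    for idx, tile in enumerate(board):
        if tile == 0:
            continue                       # never count the blank tile
        goal_row, goal_col = divmod(tile - 1, n)
        row, col = divmod(idx, n)
        dist += abs(row - goal_row) + abs(col - goal_col)
    return dist

# For the initial board above this gives 9, which never exceeds the true 11 moves.
print(manhattan((1, 0, 2, 7, 5, 4, 8, 6, 3), 3))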

Simulated Binary Crossover (SBX) crossover operator in Scala genetic algorithm (GA) library

佐手、 submitted on 2019-11-30 17:46:09
Question: I work on a very small research team creating/adapting a Genetic Algorithm library in Scala for distributed computation with a Scientific Workflow System; in our case we use the open source OpenMole software (http://www.openmole.org/). Recently, I tried to understand and re-implement the SBX crossover operator written in the JMetal Metaheuristics library (http://jmetal.sourceforge.net/) to adapt it to a functional version in our Scala library. I wrote some code, but I need your advice or your validation
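
For orientation, a sketch of the basic (unbounded) SBX recombination for a single real-valued gene, written in Python purely to show the arithmetic; it is not the JMetal code, which also handles variable bounds and per-variable crossover probabilities.

import random

def sbx_pair(x1, x2, eta=15.0):
    # Draw the spread factor beta from the SBX distribution controlled by eta.
    u = random.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
    # Children are symmetric blends of the two parents around their midpoint.
    c1 = 0.5 * ((1.0 + beta) * x1 + (1.0 - beta) * x2)
    c2 = 0.5 * ((1.0 - beta) * x1 + (1.0 + beta) * x2)
    return c1, c2

print(sbx_pair(1.0, 3.0))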

Algorithm: shortest path between all points

删除回忆录丶 submitted on 2019-11-30 17:39:59
Suppose I have 10 points. I know the distance between every pair of points. I need to find the shortest possible route passing through all points. I have tried a couple of algorithms (Dijkstra, Floyd-Warshall, ...), and they all give me the shortest path between start and end, but they don't make a route with all points on it. Permutations work fine, but they are too resource-expensive. What algorithms can you advise me to look into for this problem? Or is there a documented way to do this with the above-mentioned algorithms? Have a look at the travelling salesman problem. You may want to look into some of
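
As a simple starting point, a sketch of the nearest-neighbour heuristic for the travelling salesman problem: fast, but only approximate. The pairwise distance matrix 'dist' is an assumed input.

def nearest_neighbour_route(dist, start=0):
    # Greedily visit the closest unvisited point until all points are on the route.
    n = len(dist)
    route = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        last = route[-1]
        nxt = min(unvisited, key=lambda j: dist[last][j])
        route.append(nxt)
        unvisited.remove(nxt)
    return route

dist = [[0, 2, 9], [2, 0, 6], [9, 6, 0]]   # toy 3-point example
print(nearest_neighbour_route(dist))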

Tensorflow accuracy at .99 but predictions awful

孤街浪徒 submitted on 2019-11-30 16:06:19
Maybe I'm making predictions wrong? Here's the project... I have a greyscale input image that I am trying to segment. The segmentation is a simple binary classification (think of foreground vs. background). So the ground truth (y) is a matrix of 0's and 1's -- so there are 2 classes. Oh, and the input image is a square, so I just use one variable called n_input. My accuracy essentially converges to 0.99, but when I make a prediction I get all zeros. EDIT --> there is a single 1 in each output matrix, both in the same place... Here's my session code (everything else is working)... with tf
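
A quick sanity check (shapes and class ratio are illustrative, not from the poster's data) showing why pixel-wise accuracy can sit near 0.99 while the network predicts almost all zeros: if only about 1% of the ground-truth pixels are foreground, the all-zero prediction already scores about 0.99.

import numpy as np

y_true = np.zeros((100, 100), dtype=int)
y_true[40:50, 40:50] = 1            # 1% of pixels are foreground

y_pred = np.zeros_like(y_true)      # the "all zeros" prediction
print("accuracy:", (y_pred == y_true).mean())   # about 0.99 despite a useless prediction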

What is a Heuristic Function

倾然丶 夕夏残阳落幕 submitted on 2019-11-30 15:35:41
Question: Can someone explain in very simple words what it is? Also provide an example. So, for example, if you have to find the heuristic function of something, what is it supposed to look like? Take as an example the problem: For the water jug problem http://www.math.tamu.edu/~dallen/hollywood/diehard/diehard.htm devise and explain an admissible heuristic function (h) [not the trivial h(n) = 0]. The cost of an action is defined as 1 unit for performing the action, an additional 1 unit for moving each
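
A minimal illustration of a non-trivial but very weak admissible heuristic for the Die Hard jug puzzle: if no jug already holds the target amount, at least one more action (costing at least 1 unit) is needed, so h = 1 never overestimates the remaining cost. The state encoding (gallons in each jug) is an assumption for the sketch.

def h(state, target=4):
    a, b = state                      # gallons currently in the two jugs
    return 0 if target in (a, b) else 1

print(h((5, 0)), h((4, 3)))           # 1 for a non-goal state, 0 for a goal state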

How do you get Python to write down the code of a function it has in memory?

守給你的承諾、 submitted on 2019-11-30 15:18:49
When I pass options to the program (a computational biology experiment), I usually pass them through a .py file. So I have this .py file that reads like:

starting_length=9
starting_cell_size=1000
LengthofExperiments=5000000

Then I execute the file and get the data. Since the program is all on my machine and no one else has access to it, it is secure in a trivial way. I can also write a similar file very easily:

def writeoptions(directory):
    options=""
    options+="starting_length=%s%s"%(starting_length,os.linesep)
    options+="starting_cell_size=%s%s"%(starting_cell_size,os.linesep)
    options+=
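
One standard way to get Python to write down the code of a function it has in memory is inspect.getsource, which returns the text of the function's definition (it requires the function to have been defined in a file, so it does not work for functions typed into the interactive interpreter). The example function below is illustrative.

import inspect

def starting_options():
    starting_length = 9
    starting_cell_size = 1000
    return starting_length, starting_cell_size

# Prints the source text of starting_options exactly as written in this file.
print(inspect.getsource(starting_options))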

How to calculate the threshold value for numeric attributes in Quinlan's C4.5 algorithm?

纵饮孤独 submitted on 2019-11-30 14:56:35
I am trying to find out how the C4.5 algorithm determines the threshold value for numeric attributes. I have researched this and cannot understand it; in most places I've found this information: The training samples are first sorted on the values of the attribute Y being considered. There are only a finite number of these values, so let us denote them in sorted order as {v1, v2, …, vm}. Any threshold value lying between vi and vi+1 will have the same effect of dividing the cases into those whose value of the attribute Y lies in {v1, v2, …, vi} and those whose value is in {vi+1, vi+2, …, vm}. There are thus
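
A sketch of how a C4.5-style split point for a numeric attribute can be chosen in code: sort the values, consider a threshold between each pair of adjacent distinct values (the midpoint is a common choice), and keep the one with the highest information gain. The toy data at the bottom is made up.

import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_threshold(values, labels):
    pairs = sorted(zip(values, labels))
    base = entropy(labels)
    best = None
    for i in range(len(pairs) - 1):
        if pairs[i][0] == pairs[i + 1][0]:
            continue                          # no threshold between equal values
        t = (pairs[i][0] + pairs[i + 1][0]) / 2.0
        left = [l for v, l in pairs if v <= t]
        right = [l for v, l in pairs if v > t]
        gain = (base
                - (len(left) / len(pairs)) * entropy(left)
                - (len(right) / len(pairs)) * entropy(right))
        if best is None or gain > best[1]:
            best = (t, gain)
    return best                               # (threshold, information gain)

print(best_threshold([64, 65, 68, 69, 70, 71], ["no", "no", "yes", "yes", "yes", "no"]))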