artificial-intelligence

Weight Initialisation

杀马特。学长 韩版系。学妹 Submitted on 2019-12-04 01:22:31
Question: I plan to use the Nguyen-Widrow algorithm for an NN with multiple hidden layers. While researching it, I found a lot of ambiguities that I would like to clarify. The following is pseudocode for the Nguyen-Widrow algorithm: Initialize all weights of the hidden layers with random values. For each hidden layer { beta = 0.7 * Math.pow(hiddenNeurons, 1.0 / numberOfInputs); For each synapse { For each weight { Adjust the weight by dividing by the norm of the weights for the neuron and multiplying by the beta value } } } Just
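The pseudocode above can be sketched in NumPy. This is a minimal interpretation, assuming inputs scaled to [-1, 1] and that each neuron's weight vector is rescaled to have norm beta; the function and variable names are mine, not from the original question:

```python
import numpy as np

def nguyen_widrow_init(n_inputs, n_hidden, rng=None):
    """Sketch of Nguyen-Widrow initialization for a single hidden layer.

    Assumes inputs are scaled to [-1, 1]. Returns a (n_hidden, n_inputs)
    weight matrix and a bias vector.
    """
    rng = np.random.default_rng(rng)
    # Step 1: start from small random weights
    w = rng.uniform(-0.5, 0.5, size=(n_hidden, n_inputs))
    # Step 2: the scaling factor beta = 0.7 * H^(1/I)
    beta = 0.7 * n_hidden ** (1.0 / n_inputs)
    # Step 3: rescale each neuron's weight vector so its norm equals beta
    norms = np.linalg.norm(w, axis=1, keepdims=True)
    w = beta * w / norms
    # Biases are typically drawn uniformly in [-beta, beta]
    b = rng.uniform(-beta, beta, size=n_hidden)
    return w, b
```

For multiple hidden layers, the same rescaling would be applied per layer, with "number of inputs" meaning that layer's fan-in.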

How to convert n-ary CSP to binary CSP using dual graph transformation

蓝咒 Submitted on 2019-12-04 00:27:36
While reading the book Artificial Intelligence: A Modern Approach, I came across the following sentence describing how to convert an n-ary Constraint Satisfaction Problem (CSP) into a binary one: Another way to convert an n-ary CSP to a binary one is the dual graph transformation: create a new graph in which there will be one variable for each constraint in the original graph, and one binary constraint for each pair of constraints in the original graph that share variables. For example, if the original graph has variables {X, Y, Z} and constraints ⟨(X, Y, Z), C1⟩ and ⟨(X, Y), C2⟩ then the dual
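The construction described in that sentence can be sketched directly: one dual variable per constraint, one edge per pair of constraints with overlapping scopes. The representation below (name/scope pairs, edges annotated with the shared variables) is my own choice, not from the book:

```python
def dual_graph(constraints):
    """Build the dual graph of an n-ary CSP.

    `constraints` is a list of (name, scope) pairs, e.g. ("C1", {"X", "Y", "Z"}).
    Returns (nodes, edges): one node per constraint, and one edge for each
    pair of constraints whose scopes share at least one original variable.
    Each edge carries the shared variables, on which the binary constraint
    must enforce agreement.
    """
    nodes = [name for name, _ in constraints]
    edges = []
    for i in range(len(constraints)):
        for j in range(i + 1, len(constraints)):
            shared = set(constraints[i][1]) & set(constraints[j][1])
            if shared:
                edges.append((constraints[i][0], constraints[j][0], shared))
    return nodes, edges
```

On the book's example, C1 over (X, Y, Z) and C2 over (X, Y) produce a single edge between C1 and C2 sharing {X, Y}.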

How to implement the Gaussian mutation operator for a genetic algorithm in Java

99封情书 Submitted on 2019-12-04 00:25:53
I am trying to learn and implement a simple genetic algorithm library for my project. Evolution and population selection are ready, and I am now trying to implement a simple, effective mutation operator like the Gaussian mutation operator (GMO) for my genetic evolution engine in Java and Scala. I found some information on the Gaussian mutation operator in the paper A mutation operator based on a Pareto ranking for multi-objective evolutionary algorithms (P. M. Mateo, I. Alberto), pages 6 and 7. But I am having trouble finding other information on how to implement this Gaussian mutation operator
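The core of a Gaussian mutation operator is small enough to sketch. This is a generic version for real-valued genomes, not the specific Pareto-ranking variant from the cited paper; the parameter names (`sigma`, `rate`, the clamping bounds) are illustrative defaults:

```python
import random

def gaussian_mutate(genome, sigma=0.1, rate=0.1, lo=0.0, hi=1.0, rng=None):
    """Sketch of a Gaussian mutation operator for real-valued genomes.

    Each gene mutates with probability `rate` by adding N(0, sigma^2)
    noise, then clamping back into [lo, hi]. Small sigma makes local
    refinements; larger sigma makes bigger exploratory jumps.
    """
    rng = rng or random.Random()
    return [min(hi, max(lo, g + rng.gauss(0.0, sigma)))
            if rng.random() < rate else g
            for g in genome]
```

The same structure translates directly to Java (`java.util.Random.nextGaussian()`) or Scala (`scala.util.Random.nextGaussian()`).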

Programming a chess AI

我怕爱的太早我们不能终老 Submitted on 2019-12-04 00:25:25
I'm looking to try to write a chess AI. Is there something I can use on the .NET framework (or maybe even a chess program scripted in Lua) that will let me write and test a chess AI without worrying about actually making a chess game? Not sure what you are trying to do. If you are looking for a ready-to-use chess GUI, you can use WinBoard. It is completely decoupled from the underlying chess engine(s), thanks to an established communication protocol. Your chess engine thus becomes a console app exchanging commands with the GUI. A more modern alternative following the same concept is
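The "console app exchanging commands" idea can be illustrated with a toy responder for a few commands from the xboard/WinBoard protocol. This is a hedged sketch of the command/response shape only, not a working engine; the hard-coded `move e2e4` reply and the function names are mine:

```python
def handle_xboard_command(cmd):
    """Toy responder for a handful of xboard protocol commands.

    A real engine would track the board and search for a move; here the
    reply to "go" is hard-coded purely to show the message shapes.
    """
    if cmd == "xboard":
        return ""                       # acknowledged silently
    if cmd.startswith("protover"):
        return "feature sigint=0 done=1"  # declare supported features
    if cmd == "go":
        return "move e2e4"              # a real engine searches here
    if cmd == "quit":
        return None                     # signal the loop to exit
    return ""                           # ignore unhandled commands

def engine_loop(read, write):
    """Console loop: the GUI talks to the engine over stdin/stdout."""
    for line in read:
        reply = handle_xboard_command(line.strip())
        if reply is None:
            break
        if reply:
            write(reply + "\n")
```

In a real engine, `read` would be `sys.stdin` and `write` would flush to stdout; keeping them as parameters makes the loop testable.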

keras error on predict

落爺英雄遲暮 Submitted on 2019-12-04 00:11:06
Question: I am trying to use a Keras neural network to recognize canvas images of drawn digits and output the digit. I have saved the neural network and use Django to run the web interface. But whenever I run it, I get an internal server error and an error in the server-side code. The error says Exception: Error when checking : expected dense_input_1 to have shape (None, 784) but got array with shape (784, 1). My only main view is from django.shortcuts import render from django.http import
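That error message usually means the input is a column vector where Keras expects a batch of flattened rows: `(None, 784)` is (batch_size, features). A minimal sketch of the fix, with `pixels` standing in for the decoded canvas data and the `model.predict` line left commented because the loaded model is not shown in the question:

```python
import numpy as np

# The model was trained on batches of flattened 28x28 images, so
# predict() expects shape (batch_size, 784). A single image arriving
# as a (784, 1) column must be reshaped into a 1-row batch first.
pixels = np.zeros((784, 1))          # stand-in for the decoded canvas data
batch = pixels.reshape(1, 784)       # now matches (None, 784)
# prediction = model.predict(batch)  # `model` is the loaded Keras network
```

`reshape(1, -1)` would work equally well and avoids hard-coding 784.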

How can I prevent my program from getting stuck at a local maximum (Feed forward artificial neural network and genetic algorithm)

ぐ巨炮叔叔 Submitted on 2019-12-03 22:15:06
I'm working on a feed-forward artificial neural network (FFANN) that will take input in the form of a simple calculation and return the result (acting as a pocket calculator). The outcome won't be exact. The network is trained using a genetic algorithm on the weights. Currently my program gets stuck at a local maximum at: 5-6% correct answers with 1% error margin; 30% correct answers with 10% error margin; 40% correct answers with 20% error margin; 45% correct answers with 30% error margin; 60% correct answers with 40% error margin. I currently use two different genetic algorithms: The
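One standard escape from a local maximum in a GA is to restart the population when fitness stalls, keeping only the best genome found so far. The sketch below shows that pattern in isolation; all parameters (population size, stall limit, mutation sigma) are illustrative, not tuned for the questioner's network:

```python
import random

def evolve_with_restarts(fitness, genome_len, pop_size=30, generations=200,
                         stall_limit=25, rng=None):
    """Restart-based escape from local maxima (illustrative parameters).

    When the best fitness has not improved for `stall_limit` generations,
    the population is reseeded randomly, keeping only the champion.
    """
    rng = rng or random.Random(0)
    new = lambda: [rng.uniform(-1.0, 1.0) for _ in range(genome_len)]
    pop = [new() for _ in range(pop_size)]
    best, best_fit, stall = None, float("-inf"), 0
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        top_fit = fitness(pop[0])
        if top_fit > best_fit:
            best, best_fit, stall = pop[0][:], top_fit, 0
        else:
            stall += 1
        if stall >= stall_limit:
            # stuck: keep the champion, reseed everything else
            pop = [best[:]] + [new() for _ in range(pop_size - 1)]
            stall = 0
            continue
        # elitist survival: top half survives, rest are mutated copies
        half = pop[:pop_size // 2]
        pop = half + [[g + rng.gauss(0.0, 0.2) for g in p] for p in half]
    return best, best_fit
```

Other common remedies that combine well with this: increase the mutation rate when diversity drops, or run several islands with occasional migration.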

AI algorithm for multi dimension solution optimization / prediction

ⅰ亾dé卋堺 Submitted on 2019-12-03 21:59:26
I have 6 int parameters ranging from 0 to 100. The total number of combinations is 100^6, and each combination gives a result ranging from approximately -10000 to 100000 or even more. Input data example: MySimulation(57, 78, 20, 10, 90, 50) = 300 <- Best Result; MySimulation(50, 80, 10, 90, 35, 8) = 200; MySimulation(4, 55, 40, 99, 40, 50) = -50 <- Worst Result. The higher the result, the better the combination of numbers is. I already have the calculation which gives a result; I only need AI to find a better combination of numbers which gives a higher result. Output data example: 55, 70, 25, 15,
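With a black-box score over a small integer space like this, random-restart hill climbing is a reasonable first attempt before reaching for heavier machinery. A sketch, where `simulate` plays the role of MySimulation and all step sizes and counts are illustrative:

```python
import random

def hill_climb(simulate, n_params=6, lo=0, hi=100, restarts=20, steps=500,
               rng=None):
    """Random-restart hill climbing over integer parameters.

    Each step nudges one randomly chosen parameter by a small delta and
    keeps the change only if the score improves. Restarts guard against
    getting trapped on a single local peak.
    """
    rng = rng or random.Random(0)
    best, best_score = None, float("-inf")
    for _ in range(restarts):
        x = [rng.randint(lo, hi) for _ in range(n_params)]
        score = simulate(x)
        for _ in range(steps):
            i = rng.randrange(n_params)
            cand = x[:]
            cand[i] = min(hi, max(lo, cand[i] + rng.choice([-5, -1, 1, 5])))
            s = simulate(cand)
            if s > score:
                x, score = cand, s
        if score > best_score:
            best, best_score = x, score
    return best, best_score
```

If the score surface is very rugged, swapping the acceptance rule for simulated annealing (occasionally accepting worse candidates) uses the same skeleton.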

How to implement efficient Alpha-Beta pruning Game Search Tree?

白昼怎懂夜的黑 Submitted on 2019-12-03 21:51:35
I'm trying to learn about artificial intelligence and how to implement it in a program. The easiest place to start is probably with simple games (in this case Tic-Tac-Toe) and game search trees (recursive calls, not an actual data structure). I found a very useful lecture video on the topic. The problem I'm having is that the first call to the algorithm takes an extremely long time (about 15 seconds) to execute. I've placed debugging log output throughout the code, and it seems to be calling parts of the algorithm an excessive number of times. Here's the method
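For reference, a complete negamax alpha-beta search for tic-tac-toe fits in a few lines; the slow first call described above is the classic symptom of either missing cutoffs or re-searching already-decided positions. This sketch (my own, not the questioner's code) shows where the cutoff belongs:

```python
def alphabeta(board, player, alpha=-2, beta=2):
    """Negamax alpha-beta sketch for tic-tac-toe.

    `board` is a 9-element list of 'X', 'O', or ' '. Returns the value of
    the position for `player`: +1 win, 0 draw, -1 loss. The cutoff prunes
    branches that cannot change the final choice.
    """
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
    opp = 'O' if player == 'X' else 'X'
    for a, b, c in lines:
        if board[a] == board[b] == board[c] != ' ':
            return 1 if board[a] == player else -1
    if ' ' not in board:
        return 0                      # full board, no winner: draw
    best = -2
    for i in range(9):
        if board[i] == ' ':
            board[i] = player
            val = -alphabeta(board, opp, -beta, -alpha)
            board[i] = ' '
            if val > best:
                best = val
            alpha = max(alpha, val)
            if alpha >= beta:         # cutoff: opponent avoids this line
                break
    return best
```

Even so, the very first call still explores the most nodes; adding a transposition table or move ordering (try the center first) shrinks it further.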

algorithm to detect time, date and place from invitation text

与世无争的帅哥 Submitted on 2019-12-03 17:19:54
I am researching some Natural Language Processing algorithms to read a piece of text, and if the text seems to be suggesting a meeting request, set up that meeting automatically. For example, if an email reads: "Let's meet tomorrow someplace in Downtown at 7pm". The algorithm should be able to detect the time, date, and place of the event. Does anyone know of existing NLP algorithms that I could use for this purpose? I have been researching some NLP resources (like NLTK and some tools in R), but did not have much success. Thanks. This is an application of
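Before reaching for a full NER pipeline, simple patterns already catch many informal cases like the example sentence. A rule-based sketch (real systems would use a trained named-entity recognizer or a dedicated date parser; the patterns below are illustrative and deliberately narrow):

```python
import re

# Minimal patterns for the three slots in the example sentence.
TIME = re.compile(r'\b(\d{1,2}(?::\d{2})?\s?(?:am|pm))\b', re.I)
DAY = re.compile(r'\b(today|tomorrow|monday|tuesday|wednesday|thursday|'
                 r'friday|saturday|sunday)\b', re.I)
PLACE = re.compile(r'\b(?:in|at)\s+([A-Z][a-zA-Z]+)')  # capitalized word

def extract_meeting(text):
    """Return (time, day, place) strings, or None for any missing slot."""
    time = TIME.search(text)
    day = DAY.search(text)
    place = PLACE.search(text)
    return (time.group(1) if time else None,
            day.group(1) if day else None,
            place.group(1) if place else None)
```

The relative day ("tomorrow") would still need resolving against the email's date; that is where a proper temporal-expression tagger earns its keep.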

Multinomial classification using neuralnet package

微笑、不失礼 Submitted on 2019-12-03 16:37:58
This question ought to be really simple. But the documentation isn't helping. I am using R. I must use the neuralnet package for a multinomial classification problem. All the examples are for binomial or linear output. I could do a one-vs-all implementation using binomial output. But I believe I should be able to do this by having 3 units in the output layer, where each is a binomial (i.e. the probability of that class being the correct output). No? This is what I would do using nnet (which I believe does what I want): data(iris) library(nnet) m1 <- nnet(Species ~ ., iris, size = 3) table(predict(m1, iris,
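The 3-output-unit approach described above is library-agnostic: encode the class labels as one-hot indicator columns (one target column per class), train against those, and decode predictions by taking the strongest output unit. A language-neutral sketch of the encoding and decoding (shown in Python rather than R, so it is the recipe, not the neuralnet call itself):

```python
import numpy as np

def one_hot(labels, classes):
    """Encode labels as rows of 0/1 indicators: one output unit per class,
    which is exactly what a 3-unit binomial output layer is trained on."""
    idx = {c: i for i, c in enumerate(classes)}
    out = np.zeros((len(labels), len(classes)))
    for r, lab in enumerate(labels):
        out[r, idx[lab]] = 1.0
    return out

def decode(outputs, classes):
    """Pick the class whose output unit fired strongest (row-wise argmax)."""
    return [classes[i] for i in np.argmax(outputs, axis=1)]
```

In R the same encoding is commonly built with model.matrix or by cbinding three indicator columns and listing them on the left side of the neuralnet formula.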