artificial-intelligence

Looping through training data in Neural Networks Backpropagation Algorithm

99封情书 submitted on 2019-12-05 02:09:04
How many times do I use a sample of the training data in one training cycle? Say I have 60 rows of training data. I go through the 1st row, do a forward pass, and adjust the weights using the results from the backward pass, with the sigmoid function as below:

Forward pass: Si = sum of (Wi * Uj), Ui = f(Si) = 1 / (1 + e^(-Si))

Backward pass (output cell): delta = (expected - Ui) * f'(Si), where f'(Si) = Ui * (1 - Ui)

Do I then go through the 2nd row and repeat the same process, or do I keep going over the 1st row until its error is small enough? I hope someone can help, please. Training the network: You should use each instance of the training
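To make the usual answer concrete (visit every row once per pass, then repeat the whole pass for many epochs, rather than looping on one row until it converges), here is a minimal sketch of per-sample training for a single sigmoid cell. The AND data, learning rate, and epoch count are illustrative assumptions, not from the question:

```python
import math
import random

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def train(rows, targets, n_inputs, lr=0.5, epochs=100):
    """Online (per-sample) training of a single sigmoid output cell.
    Each epoch visits every training row once, in order."""
    w = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
    for _ in range(epochs):
        for u, expected in zip(rows, targets):
            s = sum(wi * uj for wi, uj in zip(w, u))    # Si = sum(Wi * Uj)
            out = sigmoid(s)                            # Ui = f(Si)
            delta = (expected - out) * out * (1 - out)  # (expected - Ui) * f'(Si)
            w = [wi + lr * delta * uj for wi, uj in zip(w, u)]
    return w

# Learn logical AND; the last column is a fixed bias input of 1.
random.seed(0)
rows = [[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]]
targets = [0, 0, 0, 1]
w = train(rows, targets, n_inputs=3, lr=2.0, epochs=2000)
```

The key point is that every row gets used exactly once per pass; the error shrinks over repeated passes, not by hammering on one row.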

Part 2 Resilient backpropagation neural network

谁都会走 submitted on 2019-12-04 23:18:27
Question: This is a follow-on question to this post. For a given neuron, I'm unclear as to how to take the partial derivative of its error and the partial derivative of its weight. Working from this web page, it's clear how the propagation works (although I'm dealing with Resilient Propagation). For a feedforward neural network, we have to 1) while moving forwards through the neural net, trigger the neurons, 2) from the output-layer neurons, calculate a total error. Then 3) moving backwards, propagate that
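Since the question is about Resilient Propagation: Rprop differs from plain backprop in that only the sign of the partial derivative dE/dw is used, while the step size adapts per weight. A minimal Python sketch of one Rprop update; the constants eta+ = 1.2 and eta- = 0.5 are the standard values from Riedmiller and Braun, and the sign-flip branch follows the iRprop- variant (an assumption here, since the question does not say which variant is in use):

```python
import math

def rprop_step(grad, prev_grad, step, weight,
               eta_plus=1.2, eta_minus=0.5,
               step_max=50.0, step_min=1e-6):
    """One Rprop update for a single weight.
    Only the *sign* of dE/dw matters; the per-weight step size grows
    while the sign is stable and shrinks when it flips."""
    if grad * prev_grad > 0:            # same sign: accelerate
        step = min(step * eta_plus, step_max)
        weight -= math.copysign(step, grad)
        prev_grad = grad
    elif grad * prev_grad < 0:          # sign flip: overshot, back off
        step = max(step * eta_minus, step_min)
        prev_grad = 0.0                 # skip this update (iRprop- rule)
    else:                               # previous gradient was zeroed
        if grad != 0:
            weight -= math.copysign(step, grad)
        prev_grad = grad
    return weight, step, prev_grad
```

In a full network you would keep one `step` and `prev_grad` per weight and call this after accumulating the batch gradient.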

Can somebody explain the Manhattan distance for the 8-puzzle in Java for me?

天大地大妈咪最大 submitted on 2019-12-04 21:33:50
I am writing an A* algorithm which can solve the 8-puzzle in Java; so far I have implemented DFS, BFS, and A* using the number of tiles out of place, and I just need to implement it using the Manhattan-distance heuristic. As you are probably aware, the Manhattan distance is the sum of each tile's displacement between its current position and its position in the goal state. I have googled around and found these Stack Overflow topics: Calculating Manhattan Distance, Manhattan distance in A*, which returned the following code: int manhattanDistanceSum = 0; for (int x = 0; x < N; x++) // x
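The heuristic itself is short enough to sketch in full (in Python for brevity, though the question targets Java; the flat-tuple board representation and the goal layout 1..8 with the blank last are assumptions): for every tile except the blank, add the horizontal plus vertical distance between its current cell and its goal cell.

```python
def manhattan_distance(board, n=3):
    """Sum of each tile's row + column distance from its goal cell.
    `board` is a flat tuple of length n*n; 0 is the blank (not counted).
    Goal state is assumed to be 1..8 followed by the blank."""
    total = 0
    for idx, tile in enumerate(board):
        if tile == 0:
            continue
        goal = tile - 1  # index of this tile in the goal state
        total += abs(idx // n - goal // n) + abs(idx % n - goal % n)
    return total
```

Unlike tiles-out-of-place, this heuristic distinguishes a tile that is one move away from one that is four moves away, so A* expands fewer nodes with it.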

Datasets to test Nonlinear SVM

喜欢而已 submitted on 2019-12-04 21:08:13
Question: I'm implementing a nonlinear SVM and I want to test my implementation on simple data that is not linearly separable. Google didn't help me find what I want. Can you please advise me where I can find such data? Or at least, how can I generate such data manually? Thanks. Answer 1: Well, SVMs are two-class classifiers, i.e., these classifiers place data on either side of a single decision boundary. Therefore, I would suggest a data set comprised of just two classes (that's not strictly necessary because
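One easy way to generate such data manually (a sketch, not taken from the answer): two concentric rings, which no straight line can separate but which an RBF- or polynomial-kernel SVM handles easily; XOR-patterned points are the other classic choice. The radii and noise level below are arbitrary assumptions:

```python
import math
import random

def make_rings(n_per_class=100, r_inner=1.0, r_outer=3.0, noise=0.1, seed=0):
    """Two concentric noisy rings: class 0 at radius r_inner,
    class 1 at radius r_outer. Returns (points, labels)."""
    rng = random.Random(seed)
    X, y = [], []
    for label, r in ((0, r_inner), (1, r_outer)):
        for _ in range(n_per_class):
            theta = rng.uniform(0, 2 * math.pi)
            rr = r + rng.gauss(0, noise)  # jitter the radius
            X.append((rr * math.cos(theta), rr * math.sin(theta)))
            y.append(label)
    return X, y

X, y = make_rings()
```

A linear SVM will score near chance on this set, while any reasonable nonlinear kernel should separate it almost perfectly, which makes it a good smoke test.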

Which computer vision library & algorithm(s), for human behaviour analysis?

只谈情不闲聊 submitted on 2019-12-04 20:51:42
Objective: Detect/determine human actions, such as picking up / lifting items to read the label and putting them back on the rack (in a store), or sitting on / mounting / climbing atop objects such as a chair, bench, ladder, etc. Environment: a store/shop, which is mostly well lit. Cameras (VGA -> 1MP), fixed (i.e. not PTZ). Constraints: presence of known and unknown human beings. Possible rearrangement of objects (items for sale) in the store over a period of time. Possible changes in lighting over time. For example: frontal areas of the store might get ample sunlight during the day, which changes to artificial light at

Predictive Logic in Programming?

跟風遠走 submitted on 2019-12-04 19:41:58
I was thinking about how, in the probably distant future, many people think we won't rely on physical input (i.e. a keyboard) as much, because technology that reads brain waves (which already exists to some extent) will be available. Kinda scares me... anyway, while I was daydreaming about this, an idea came to me: what if a programmer could implement logic in their code to accurately predict the user's intentions and then carry out the intended operation with no need for human interaction? I am not looking for anything specific, I'm just a little curious as to what anyone's thoughts

Finding minimum cut-sets between bounded subgraphs

佐手、 submitted on 2019-12-04 19:15:27
Question: If a game map is partitioned into subgraphs, how do you minimize the edges between subgraphs? I have a problem: I'm trying to make A* searches through a grid-based game like Pacman or Sokoban, but I need to find "enclosures". What do I mean by enclosures? Subgraphs with as few cut edges as possible, given a maximum and minimum number of vertices for each subgraph that act as soft constraints. Alternatively, you could say I am looking to find bridges between subgraphs, but it's generally
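Bridges specifically (single edges whose removal disconnects the graph) can be found in O(V + E) with Tarjan's low-link algorithm; a minimal sketch follows, with the adjacency-dict format as an assumption. Note that general minimum cut-sets between size-bounded regions are a graph-partitioning problem (Kernighan-Lin or METIS-style methods), not just bridge finding:

```python
def find_bridges(adj):
    """Tarjan's bridge-finding via DFS low-link values, O(V + E).
    `adj` maps each vertex to a list of neighbours (undirected graph).
    Returns the list of bridge edges as (parent, child) DFS pairs."""
    disc, low, bridges = {}, {}, []
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                 # back edge
                low[u] = min(low[u], disc[v])
            else:                         # tree edge
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:      # v's subtree can't reach above u
                    bridges.append((u, v))

    for u in adj:
        if u not in disc:
            dfs(u, None)
    return bridges
```

On a grid map, each bridge is a natural "door" between two enclosures; the subgraphs on either side of the bridges are candidate regions for hierarchical A*.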

Algorithm for rating the monotonicity of an array (i.e. judging the “sortedness” of an array)

我们两清 submitted on 2019-12-04 18:56:41
Question: EDIT: Wow, many great responses. Yes, I am using this as a fitness function for judging the quality of a sort performed by a genetic algorithm, so cost of evaluation is important (i.e., it has to be fast, preferably O(n)). As part of an AI application I am toying with, I'd like to be able to rate a candidate array of integers based on its monotonicity, aka its "sortedness". At the moment, I'm using a heuristic that calculates the longest sorted run and then divides that by the length of
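One simple O(n) formulation (a sketch of a common alternative, not the poster's longest-run heuristic): score the fraction of adjacent pairs that are already in order. It rewards partial progress smoothly, which tends to suit GA fitness landscapes:

```python
def sortedness(arr):
    """Fraction of adjacent pairs in non-decreasing order.
    1.0 for a sorted array, 0.0 for a strictly decreasing one.
    Single pass, O(n): cheap enough for a GA fitness function."""
    if len(arr) < 2:
        return 1.0
    ordered = sum(1 for a, b in zip(arr, arr[1:]) if a <= b)
    return ordered / (len(arr) - 1)
```

A caveat worth knowing: adjacent-pair counting ignores long-range disorder (e.g. a sorted array rotated by one scores highly), so inversions-based measures are stricter but cost O(n log n).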

F# and Fuzzy Logic

人盡茶涼 submitted on 2019-12-04 18:48:08
Question: I know it might sound strange, but I would like to know one thing about this new world that Microsoft Visual F# is getting into. There are many applications of this language, which I am going to learn, regarding parsing, functional programming, structured programming... But what about artificial intelligence? Are there any applications for fuzzy logic? Is F# a good language to use for fuzzy-logic applications? At university we are studying Prolog and similar languages. Prolog is able to create

How to correctly calculate the tf.nn.weighted_cross_entropy_with_logits pos_weight variable

别等时光非礼了梦想. submitted on 2019-12-04 18:22:28
I am using a convolutional neural network. My data is quite imbalanced; I have two classes. My first class contains 551,462 image files; my second class contains 52,377 image files. I want to use weighted_cross_entropy_with_logits, but I'm not sure I'm calculating the pos_weight variable correctly. Right now I'm using: classes_weights = tf.constant([0.0949784, 1.0]) cross_entropy = tf.reduce_mean(tf.nn.weighted_cross_entropy_with_logits(logits=logits, targets=y_, pos_weight=classes_weights)) train_step = tf.train.AdamOptimizer(LEARNING_RATE, epsilon=1e-03).minimize(cross_entropy, global_step=global
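For reference, the TensorFlow convention is that pos_weight multiplies the positive term of the loss, so balancing an imbalanced set means weighting the minority (positive) class up by the negative-to-positive ratio, and pos_weight is typically a single scalar rather than a per-class vector. A sketch using the counts from the question, assuming the 52,377-image class is labelled as positive:

```python
# Class counts from the question.
n_negative = 551_462   # majority class (assumed to carry the negative label)
n_positive = 52_377    # minority class (assumed to carry the positive label)

# weighted_cross_entropy_with_logits multiplies the positive part of the
# loss by pos_weight; a value > 1 penalizes missed positives more, which
# is what an imbalanced minority class needs.
pos_weight = n_negative / n_positive   # ≈ 10.53, not 0.095 (that is the inverse)
```

The call would then be `tf.nn.weighted_cross_entropy_with_logits(logits=logits, targets=y_, pos_weight=pos_weight)` with this scalar, under the assumption above that `y_` is 1 for the minority class.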