backpropagation

Generating predictions using a back-propagation neural network model in R returns the same value for all observations

只谈情不闲聊 submitted on 2019-12-24 01:45:06
Question: I'm trying to generate predictions on a new data set from a back-propagation neural network trained with the neuralnet package. I used the compute function, but I end up with the same value for all observations. What did I do wrong?

# the data
Var1 <- runif(50, 0, 100)
sqrt.data <- data.frame(Var1, Sqrt=sqrt(Var1))

# training the model
backnet = neuralnet(Sqrt~Var1, sqrt.data, hidden=2, err.fct="sse",
                    linear.output=FALSE, algorithm="backprop",
                    learningrate=0.01)
print(backnet)
Call: neuralnet
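A likely cause, offered here as an assumption since the thread's answer is not included above: with linear.output=FALSE, neuralnet applies the logistic activation to the output neuron as well, so predictions are confined to (0, 1), while the Sqrt targets run up to 10; under SSE loss the output unit saturates and compute returns a near-constant value. A minimal NumPy sketch of the effect:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Targets like sqrt(Var1) range over [0, 10], but a logistic output
# unit is bounded in (0, 1): whatever the weights learn, every
# prediction is squashed into the same narrow band.
targets = np.sqrt(np.random.uniform(0, 100, size=50))
outputs = sigmoid(np.linspace(-5.0, 5.0, 50))
print(targets.max(), outputs.max())  # e.g. ~10 vs. < 1
```

If this is indeed the cause, setting linear.output=TRUE (or rescaling the targets into (0, 1)) should restore distinct predictions.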

How to test if my implementation of a back-propagation neural network is correct

我们两清 submitted on 2019-12-23 15:37:58
Question: I am working on an implementation of the back-propagation algorithm. What I have implemented so far seems to work, but I can't be sure that the algorithm is well implemented. Here is what I have noticed during training tests of my network. Specification of the implementation: a data set containing almost 100000 rows (3 variables as input, the sine of the sum of those three variables as the expected output). The network has 7 layers; all the layers use the sigmoid activation
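The standard way to validate a hand-written backward pass is numerical gradient checking: compare every analytic gradient against a centered finite difference of the loss. A minimal sketch, where loss_fn and backprop_grad are hypothetical stand-ins for whatever your implementation exposes:

```python
import numpy as np

def numerical_grad(loss_fn, params, eps=1e-6):
    # Centered finite-difference gradient of a scalar loss
    # with respect to a flat parameter vector.
    grad = np.zeros_like(params)
    for i in range(params.size):
        params[i] += eps
        plus = loss_fn(params)
        params[i] -= 2 * eps
        minus = loss_fn(params)
        params[i] += eps                     # restore the parameter
        grad[i] = (plus - minus) / (2 * eps)
    return grad

# Usage: flatten the network's weights into `params`, let `loss_fn`
# run a forward pass and return the scalar loss, then compare:
#   analytic = backprop_grad(params)             # your backward pass
#   numeric  = numerical_grad(loss_fn, params)
#   rel_err  = np.linalg.norm(analytic - numeric) / (
#              np.linalg.norm(analytic) + np.linalg.norm(numeric))
# A relative error near 1e-7 suggests the backward pass is correct;
# around 1e-2 or worse usually points to a bug.
```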

Why do I get good accuracy on the IRIS dataset with a single hidden node?

耗尽温柔 submitted on 2019-12-23 14:02:11
Question: I have a minimal example of a neural network with a back-propagation trainer, which I am testing on the IRIS data set. I started off with 7 hidden nodes and it worked well. I lowered the number of nodes in the hidden layer to 1 (expecting it to fail), but was surprised to see that the accuracy went up. I set up the experiment in Azure ML just to validate that it wasn't my code. Same thing there: 98.3333% accuracy with a single hidden node. Can anyone explain to me what is happening here?
Answer 1: First,
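A plausible explanation, offered as an assumption since the answer above is cut off: IRIS is nearly linearly separable (one class perfectly so), so a single hidden unit, which projects the four features onto one learned direction, already carries enough information to separate the classes. A quick scikit-learn sketch to reproduce the effect:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)

# One hidden node: the network learns a single projection of the features.
clf = MLPClassifier(hidden_layer_sizes=(1,), max_iter=5000, random_state=0)
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))  # typically well above 0.9 despite one hidden node
```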

How does TensorFlow handle non-differentiable nodes during gradient calculation?

非 Y 不嫁゛ submitted on 2019-12-22 06:48:16
Question: I understand the concept of automatic differentiation, but I couldn't find any explanation of how TensorFlow calculates the error gradient for non-differentiable functions, for example tf.where in my loss function or tf.cond in my graph. It works just fine, but I would like to understand how TensorFlow backpropagates the error through such nodes, since there is no formula to calculate the gradient for them.
Answer 1: In the case of tf.where, you have a function with three inputs: condition C,
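To make the idea concrete (completing the thought with TensorFlow's documented behaviour): tf.where(C, x, y) is piecewise differentiable; the boolean condition C receives no gradient, while the incoming gradient is routed element-wise to x where C is true and to y where it is false. A minimal check:

```python
import tensorflow as tf

x = tf.Variable([1.0, 2.0, 3.0])
y = tf.Variable([10.0, 20.0, 30.0])

with tf.GradientTape() as tape:
    # The branch selection itself is not differentiated; the gradient
    # is simply routed to whichever branch was selected per element.
    out = tf.where(x > 1.5, x * x, y * y)
    loss = tf.reduce_sum(out)

dx, dy = tape.gradient(loss, [x, y])
print(dx.numpy())  # [ 0.  4.  6.]  -- d(x^2)/dx only where x > 1.5
print(dy.numpy())  # [20.  0.  0.]  -- d(y^2)/dy only where x <= 1.5
```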

Looping through training data in the Neural Network Backpropagation Algorithm

落爺英雄遲暮 submitted on 2019-12-22 04:03:27
Question: How many times do I use a sample of training data in one training cycle? Say I have 60 training examples. I go through the 1st row, do a forward pass, and adjust the weights using the results from the backward pass, using the sigmoidal function as below:

Forward pass:
S_i = sum_j (W_ij * U_j)
U_i = f(S_i) = 1 / (1 + e^(-S_i))

Backward pass:
Output cell: delta = (expected - U_i) * f'(S_i), where f'(S_i) = U_i * (1 - U_i)

Do I then go through the 2nd row and do the same process as the 1st, or do I go around the 1st row until the error is
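For reference (stating standard practice, since the thread's answer is not shown): in online/stochastic training you use each row once per pass, updating the weights after every row, and then repeat whole passes (epochs) over all 60 rows until the total error converges; you do not hammer on a single row until its own error vanishes. A sketch of the loop structure, where train_on_row is a hypothetical wrapper around your forward and backward pass:

```python
import random

def train(data, max_epochs, train_on_row, tolerance=1e-3):
    # Online training: one weight update per row, repeated over epochs.
    for epoch in range(max_epochs):
        random.shuffle(data)                   # reshuffling each epoch usually helps
        total_error = 0.0
        for row in data:                       # each of the 60 rows used once per epoch
            total_error += train_on_row(row)   # forward pass, backward pass, update
        if total_error < tolerance:            # stop when the summed error is small
            return epoch
    return max_epochs
```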

How to convert a deep learning gradient descent equation into Python

对着背影说爱祢 submitted on 2019-12-22 01:05:11
Question: I've been following an online tutorial on deep learning. It has a practical question on gradient descent and cost calculations where I have been struggling to reproduce the given answers once they were converted to Python code. I hope you can kindly help me get the correct answer. Please see the following link for the equations used: Click here to see the equations used for the calculations. Following is the function given to calculate the gradient descent, cost, etc. The values need to be found without
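The linked equations are not reproduced here, so the sketch below assumes the usual logistic-regression propagate step such tutorials use: a forward pass computing the sigmoid activations and cross-entropy cost, and a backward pass computing the gradients dw and db. All names follow that convention, not the thread itself:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def propagate(w, b, X, Y):
    # Forward and backward propagation for logistic regression.
    # w: weights (n, 1); b: scalar bias
    # X: inputs (n, m);  Y: labels (1, m)
    m = X.shape[1]
    A = sigmoid(np.dot(w.T, X) + b)                        # forward: predictions
    cost = -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m
    dw = np.dot(X, (A - Y).T) / m                          # backward: gradient in w
    db = np.sum(A - Y) / m                                 # backward: gradient in b
    return {"dw": dw, "db": db}, float(cost)
```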

How to include a custom filter in a Keras-based CNN?

瘦欲@ submitted on 2019-12-22 00:46:41
Question: I am working on a fuzzy convolution filter for CNNs. I have the function ready: it takes in the 2D input matrix and the 2D kernel/weight matrix, and it outputs the convolved feature, or activation map. Now I want to use Keras to build the rest of the CNN, which will have the standard 2D convolution filters too. Is there any way I can insert my custom filter into the Keras model in such a way that the kernel matrix is updated by the built-in libraries of the Keras backend?
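One common route, sketched here as a suggestion rather than the thread's accepted answer: subclass tf.keras.layers.Layer, register the kernel with add_weight so Keras treats it as trainable, and write the forward pass in TensorFlow ops so gradients reach the kernel automatically. An ordinary convolution stands in for the fuzzy one:

```python
import tensorflow as tf

class FuzzyConv2D(tf.keras.layers.Layer):
    """Custom filter whose kernel Keras updates during training."""

    def __init__(self, kernel_size=3, **kwargs):
        super().__init__(**kwargs)
        self.kernel_size = kernel_size

    def build(self, input_shape):
        in_channels = int(input_shape[-1])
        # add_weight registers the kernel as a trainable variable,
        # so the built-in optimizers update it via backpropagation.
        self.kernel = self.add_weight(
            name="kernel",
            shape=(self.kernel_size, self.kernel_size, in_channels, 1),
            initializer="glorot_uniform",
            trainable=True,
        )

    def call(self, inputs):
        # Swap this for the fuzzy convolution, expressed in tf ops,
        # and gradients will still flow back into self.kernel.
        return tf.nn.conv2d(inputs, self.kernel, strides=1, padding="SAME")
```

Such a layer can then be mixed freely with stock Conv2D layers in a tf.keras model.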

Backpropagation Algorithm Implementation

拥有回忆 submitted on 2019-12-22 00:09:04
Question: Dear All, I am trying to implement a neural network which uses backpropagation. So far I have got to the stage where each neuron receives weighted inputs from all neurons in the previous layer, calculates the sigmoid function based on their sum, and distributes the result across the following layer. Finally, the entire network produces an output O. I then calculate the error as E = 1/2 * (D - O)^2, where D is the desired value. At this point, with every neuron across the network having its individual output, and the
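For a sigmoid output unit with E = 1/2 * (D - O)^2, the next step is the output delta and weight update; a minimal sketch of that rule, written as the standard derivation rather than the post's own continuation:

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def update_output_weights(weights, u, D, lr=0.1):
    # One backprop step for a single sigmoid output neuron.
    # weights: incoming weights (n,); u: previous-layer outputs (n,)
    # D: desired value; lr: learning rate
    S = np.dot(weights, u)
    O = sigmoid(S)
    # dE/dS = -(D - O) * f'(S) with f'(S) = O * (1 - O);
    # stepping against the gradient gives the classic delta rule:
    delta = (D - O) * O * (1 - O)
    weights += lr * delta * u
    return weights, O
```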

Backpropagating gradients through nested tf.map_fn

好久不见. submitted on 2019-12-21 19:49:22
Question: I would like to map a TensorFlow function onto each vector corresponding to the depth channel of every pixel in a tensor of shape [batch_size, H, W, n_channels]. In other words, for every image of size H x W that I have in the batch, I extract some feature maps F_k (n_channels of them) with the same size H x W (hence, the feature maps together form a tensor of shape [H, W, n_channels]); then, I wish to apply a custom function to the vector v_ij that is associated with the i
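One differentiable way to do this without nesting map_fn per spatial position, offered as a sketch of a common workaround rather than the thread's answer: flatten the batch and spatial axes so a single tf.map_fn runs over the per-pixel depth vectors, then reshape back. Gradients flow through tf.map_fn as long as the mapped function uses differentiable ops:

```python
import tensorflow as tf

def map_over_pixels(x, fn):
    # Apply fn to every depth vector [n_channels] of x: [B, H, W, C].
    B, H, W, C = x.shape
    flat = tf.reshape(x, [-1, C])        # one row per pixel: [B*H*W, C]
    mapped = tf.map_fn(fn, flat)         # a single map instead of nested ones
    return tf.reshape(mapped, [B, H, W, -1])

x = tf.Variable(tf.random.normal([2, 4, 4, 3]))
with tf.GradientTape() as tape:
    y = map_over_pixels(x, lambda v: tf.reduce_sum(v * v, keepdims=True))
    loss = tf.reduce_sum(y)
print(tape.gradient(loss, x).shape)      # (2, 4, 4, 3): gradients pass through
```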