backpropagation

Neural network backpropagation algorithm not working in Python

孤者浪人 asked on 2019-12-21 01:18:37
Question: I am writing a neural network in Python, following the example here. The backpropagation algorithm doesn't seem to be working, given that the network fails to produce the right value (within a margin of error) even after being trained 10,000 times. Specifically, I am training it to compute the sine function in the following example:

    import numpy as np

    class Neuralnet:
        def __init__(self, neurons):
            self.weights = []
            self.inputs = []
            self.outputs = []
            self.errors = []
            self.rate = .1
            for
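A minimal working sketch of the setup described (the architecture and hyperparameters are assumptions, not the asker's code): a 1-16-1 network with tanh hidden units, trained by full-batch gradient descent to fit sin(x) on [-pi, pi].

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-np.pi, np.pi, (256, 1))
    Y = np.sin(X)

    W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
    W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
    rate = 0.1

    for epoch in range(10000):
        H = np.tanh(X @ W1 + b1)          # forward pass, hidden layer
        out = H @ W2 + b2                 # forward pass, linear output
        err = out - Y                     # gradient of 0.5*MSE w.r.t. out
        dW2 = H.T @ err / len(X)          # backward pass through output layer
        dH = err @ W2.T * (1 - H**2)      # tanh'(x) = 1 - tanh(x)**2
        dW1 = X.T @ dH / len(X)
        W2 -= rate * dW2; b2 -= rate * err.mean(0)
        W1 -= rate * dW1; b1 -= rate * dH.mean(0)

    print(np.abs(out - Y).mean())         # mean error should be small by now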

How to implement the Softmax derivative independently from any loss function?

最后都变了- asked on 2019-12-20 12:19:14
Question: For a neural network library I implemented some activation functions and loss functions along with their derivatives. They can be combined arbitrarily, and the derivative at the output layer just becomes the product of the loss derivative and the activation derivative. However, I failed to implement the derivative of the Softmax activation function independently from any loss function. Due to the normalization, i.e. the denominator in the equation, changing a single input activation changes all
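The key point is that the softmax derivative is not element-wise: because of the shared denominator, it is a full Jacobian, ds_i/dz_j = s_i * (delta_ij - s_j). Independently of the loss, the backward pass is then a matrix-vector product with the upstream gradient. A minimal NumPy sketch (function names are illustrative):

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())                 # shift for numerical stability
        return e / e.sum()

    def softmax_backward(z, grad_output):
        s = softmax(z)
        jacobian = np.diag(s) - np.outer(s, s)  # J[i][j] = s[i]*(delta_ij - s[j])
        return jacobian @ grad_output           # J is symmetric, so J.T @ g == J @ g

    z = np.array([1.0, 2.0, 3.0])
    upstream = np.array([0.1, -0.2, 0.1])       # dLoss/ds from any loss function
    print(softmax_backward(z, upstream))        # dLoss/dz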

Pytorch ValueError: optimizer got an empty parameter list

梦想与她 asked on 2019-12-20 03:14:05
Question: When trying to create a neural network and optimize it using PyTorch, I am getting

    ValueError: optimizer got an empty parameter list

Here is the code:

    import torch.nn as nn
    import torch.nn.functional as F
    from os.path import dirname
    from os import getcwd
    from os.path import realpath
    from sys import argv

    class NetActor(nn.Module):
        def __init__(self, args, state_vector_size, action_vector_size, hidden_layer_size_list):
            super(NetActor, self).__init__()
            self.args = args
            self.state_vector_size =
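Since the excerpt is truncated, one common cause of this error is worth noting (an assumption about this asker's situation): building layers inside a plain Python list, which Module.parameters() cannot see. Wrapping the list in nn.ModuleList registers the sub-modules. A minimal sketch with assumed layer sizes:

    import torch
    import torch.nn as nn

    class Actor(nn.Module):
        def __init__(self, sizes):            # e.g. sizes = [8, 64, 64, 2]
            super().__init__()
            self.layers = nn.ModuleList(      # NOT self.layers = [nn.Linear(...), ...]
                nn.Linear(a, b) for a, b in zip(sizes[:-1], sizes[1:])
            )

        def forward(self, x):
            for layer in self.layers[:-1]:
                x = torch.relu(layer(x))
            return self.layers[-1](x)

    net = Actor([8, 64, 64, 2])
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)   # parameter list is no longer empty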

Calculate the error using a sigmoid function in backpropagation

↘锁芯ラ asked on 2019-12-19 19:38:43
Question: I have a quick question regarding backpropagation. I am looking at the following paper: http://www4.rgu.ac.uk/files/chapter3%20-%20bp.pdf It says to calculate the error of a neuron as

    Error = Output(i) * (1 - Output(i)) * (Target(i) - Output(i))

The part of the equation I don't understand is the Output(i) * (1 - Output(i)) factor. The paper says this term is needed because of the sigmoid function, but I still don't understand why it would be necessary. What
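The factor comes from the chain rule: for the sigmoid s(x) = 1 / (1 + e^-x), the derivative is s'(x) = s(x) * (1 - s(x)). Since Output(i) = s(x), multiplying the raw error (Target - Output) by the activation's derivative yields exactly Output * (1 - Output) * (Target - Output). A quick numerical check of the derivative identity:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    x, h = 0.7, 1e-6
    analytic = sigmoid(x) * (1.0 - sigmoid(x))            # s(x) * (1 - s(x))
    numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h) # central difference
    print(analytic, numeric)                              # agree to ~1e-10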

keras combining two losses with adjustable weights

喜你入骨 asked on 2019-12-18 13:26:22
Question: So here is the detailed description. I have a Keras functional model with two layers with outputs x1 and x2:

    x1 = Dense(1, activation='relu')(prev_inp1)
    x2 = Dense(2, activation='relu')(prev_inp2)

I need to use these x1 and x2, merge/add them, and come up with a weighted loss function like in the attached image, propagating the same loss into both branches. Alpha is flexible to vary with iterations.

Answer 1: It seems that propagating the "same loss" into both branches will not take effect, unless alpha
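One common way to make the mixing weight adjustable during training (a sketch of one approach, not the accepted answer's code; model shapes and the schedule are assumptions) is to hold alpha in a backend variable that both loss functions close over, then update it from a callback:

    from tensorflow import keras
    from tensorflow.keras import layers, backend as K

    alpha = K.variable(0.5)                    # mixing weight, changeable at run time

    inp = keras.Input(shape=(16,))
    x1 = layers.Dense(1, activation='relu')(inp)
    x2 = layers.Dense(1, activation='relu')(inp)
    model = keras.Model(inp, [x1, x2])

    def loss1(y_true, y_pred):
        return alpha * K.mean(K.square(y_true - y_pred))

    def loss2(y_true, y_pred):
        return (1.0 - alpha) * K.mean(K.square(y_true - y_pred))

    model.compile(optimizer='adam', loss=[loss1, loss2])

    class AlphaScheduler(keras.callbacks.Callback):
        def on_epoch_end(self, epoch, logs=None):
            K.set_value(alpha, max(0.1, 0.5 * 0.9 ** epoch))  # example schedule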

XOR neural network error stops decreasing during training

不羁岁月 asked on 2019-12-18 12:22:33
Question: I'm training an XOR neural network via back-propagation using stochastic gradient descent. The weights of the neural network are initialized to random values between -0.5 and 0.5. The network successfully trains itself around 80% of the time. However, sometimes it gets "stuck" while backpropagating. By "stuck", I mean that I start seeing a decreasing rate of error correction. For example, during a successful training run the total error decreases rather quickly as the network learns, like
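One common workaround (an assumption, not the asker's code) is to treat a stalled error curve as a sign of a local minimum or flat region and restart from fresh random weights, since a new draw from [-0.5, 0.5] usually escapes. A sketch, where train_epoch and init_weights are hypothetical helpers:

    def plateaued(errors, window=500, tol=1e-4):
        """True when total error improved less than `tol` over the last `window` epochs."""
        return len(errors) >= window and errors[-window] - errors[-1] < tol

    # usage inside a training loop:
    # weights, errors = init_weights(), []
    # for epoch in range(max_epochs):
    #     errors.append(train_epoch(weights))
    #     if plateaued(errors):
    #         weights, errors = init_weights(), []   # restart with a fresh draw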

Multithreaded backpropagation

拥有回忆 asked on 2019-12-18 09:35:18
Question: I have written a back-propagation class in VB.NET (it works well) and I'm using it in a C# artificial intelligence project. But I have an AMD Phenom X3 at home and an Intel i5 at school, and my neural network is not multi-threaded. How do I convert that back-propagation class to a multithreaded algorithm? Or how could I use GPGPU programming for it? Or should I use a third-party library that has a multithreaded back-propagation neural network?

Answer 1: JeffHeaton has recommended that you use resilient
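The question concerns a VB.NET class, but the parallelization pattern is language-independent: batch back-propagation splits the training set across workers, each worker computes the gradient over its shard, and the shard gradients are summed before a single weight update. A minimal NumPy sketch under assumed shapes (threads shown because NumPy releases the GIL during large matrix operations):

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def shard_gradient(W, X, y):
        # Squared-error gradient for a linear layer y_hat = X @ W,
        # summed (not averaged) over the shard so shards can be combined.
        return X.T @ (X @ W - y)

    def parallel_gradient(W, X, y, n_workers=3):
        shards = np.array_split(np.arange(len(X)), n_workers)
        with ThreadPoolExecutor(max_workers=n_workers) as pool:
            grads = pool.map(lambda idx: shard_gradient(W, X[idx], y[idx]), shards)
        return sum(grads) / len(X)            # average over the whole batch

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(600, 4)), rng.normal(size=(600, 1))
    W = np.zeros((4, 1))
    W -= 0.01 * parallel_gradient(W, X, y)    # one gradient-descent step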

How does a back-propagation training algorithm work?

瘦欲@ asked on 2019-12-17 18:03:34
Question: I've been trying to learn how back-propagation works with neural networks, but have yet to find a good explanation from a less technical perspective. How does back-propagation work? How does it learn from a provided training dataset? I will have to code this, but until then I need to gain a stronger understanding of it.

Answer 1: Back-propagation works with logic very similar to that of feed-forward; the difference is the direction of data flow. In the feed-forward step, you have the inputs and the output
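A toy numeric illustration of that reverse flow (all numbers assumed): one sigmoid neuron, where the forward pass turns the input into an output, and the backward pass carries the error back through the same connection to update the weight.

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    w, b = 0.5, 0.1            # current weight and bias
    x, target = 1.0, 0.8       # one training pair

    out = sigmoid(w * x + b)                    # forward: input -> output
    delta = (out - target) * out * (1 - out)    # backward: error * sigmoid'
    grad_w = delta * x                          # chain rule back to the weight
    w -= 0.1 * grad_w                           # gradient-descent update
    print(out, grad_w, w)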

Data Encoding for Training in Neural Network

放肆的年华 asked on 2019-12-14 03:38:12
Question: I have converted 349,900 words from a dictionary file to MD5 hashes. Samples are below:

    74b87337454200d4d33f80c4663dc5e5
    594f803b380a41396ed63dca39503542
    0b4e7a0e5fe84ad35fb5f95b9ceeac79
    5d793fc5b00a2348c3fb9ab59e5ca98a
    3dbe00a167653a1aaee01d93e77e730e
    ffc32e9606a34d09fca5d82e3448f71f
    2fa9f0700f68f32d2d520302906e65ce
    1c9b32ff1b53bd892b87578a11cbd333
    26a10043bba821303408ebce568a2746
    c3c32ff3481e9745e10defa7ce5b511e

I want to train a neural network to decrypt a hash using just a simple architecture
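Whatever the architecture, the hashes first need a numeric encoding. A minimal sketch of one option (an illustration of encoding only, saying nothing about whether the hash-to-word mapping is learnable): one-hot encode each of the 32 hex digits, giving a fixed 32 x 16 = 512-dimensional input vector.

    import numpy as np

    HEX = '0123456789abcdef'

    def encode_md5(digest):
        vec = np.zeros((32, 16), dtype=np.float32)
        for i, ch in enumerate(digest):
            vec[i, HEX.index(ch)] = 1.0      # one-hot per hex digit
        return vec.ravel()                   # shape (512,)

    x = encode_md5('74b87337454200d4d33f80c4663dc5e5')
    print(x.shape, x.sum())                  # (512,) 32.0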

Multilayer perceptron - backpropagation

北城以北 asked on 2019-12-13 12:21:58
Question: I have a school project to program a multilayer perceptron that classifies data into three classes. I have implemented the backpropagation algorithm from http://home.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html. I have checked my algorithm (by manually calculating each step of backpropagation) to verify that it really follows the explained steps, and it does. For classification I am using one-hot coding, and I have inputs consisting of vectors with 2 values and three output neurons (one for each class). After
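A minimal sketch of the one-hot scheme described (array contents are assumptions): each of the three classes gets its own output neuron, targets are one-hot vectors, and the predicted class is the index of the largest output.

    import numpy as np

    def one_hot(label, n_classes=3):
        t = np.zeros(n_classes)
        t[label] = 1.0
        return t

    targets = np.array([one_hot(c) for c in [0, 2, 1]])   # labels -> one-hot rows
    outputs = np.array([[0.8, 0.1, 0.1],                  # hypothetical network outputs
                        [0.2, 0.3, 0.5],
                        [0.1, 0.7, 0.2]])
    predicted = outputs.argmax(axis=1)                    # array([0, 2, 1])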