Strange convergence in simple Neural Network
Question: I've been struggling for some time with building a simple NN in Java. I've been working on and off on this project for a few months and I want to finish it. My main issue is that I don't know how to implement backpropagation correctly (all the sources I've found use Python, heavy math notation, or explain the idea too briefly). Today I tried working out the logic myself, and the rule I'm using is: weight update = error * sigmoidDerivative(error) * the weight itself; error = output - actual for the last layer; error =
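In code, the rule I'm describing looks roughly like this, a minimal sketch for a single sigmoid output neuron. The names (`sigmoid`, `sigmoidDerivative`, `weights`, `learningRate`) are illustrative assumptions, not my actual code, and the update is written literally as stated above:

```java
public class OutputLayerUpdate {

    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    // Derivative of the sigmoid written in terms of the sigmoid's output y:
    // sigma'(x) = y * (1 - y). In conventional backpropagation this is
    // evaluated at the neuron's activation, not at the error.
    static double sigmoidDerivative(double y) {
        return y * (1.0 - y);
    }

    public static void main(String[] args) {
        double[] inputs  = {0.5, -0.3};  // activations feeding the output neuron
        double[] weights = {0.8,  0.2};  // weights to be updated
        double target = 1.0;             // the "actual" value from the question
        double learningRate = 0.1;

        // Forward pass: weighted sum, then sigmoid
        double net = 0.0;
        for (int i = 0; i < inputs.length; i++) {
            net += inputs[i] * weights[i];
        }
        double output = sigmoid(net);

        // Last-layer error as defined above: error = output - actual
        double error = output - target;

        // The rule as described in the question, taken literally:
        //   weight update = error * sigmoidDerivative(error) * weight itself
        // (The conventional rule would instead use sigmoidDerivative(output)
        //  and multiply by the incoming activation, not the weight.)
        for (int i = 0; i < weights.length; i++) {
            double update = error * sigmoidDerivative(error) * weights[i];
            weights[i] -= learningRate * update;
        }

        System.out.printf("output=%.4f error=%.4f w=[%.4f, %.4f]%n",
                output, error, weights[0], weights[1]);
    }
}
```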