XOR neural network error stops decreasing during training

不思量自难忘° · 2020-12-30 06:55

I'm training an XOR neural network via back-propagation using stochastic gradient descent. The weights of the neural network are initialized to random values between -0.5 and …

3 Answers
  • 天命终不由人 · 2020-12-30 07:29

    I encountered the same issue and found that using the activation function 1.7159*tanh(2/3*x) described in LeCun's "Efficient Backprop" paper helps. This is presumably because that function does not saturate around the target values {-1, 1}, whereas regular tanh does.
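    A minimal sketch of what this answer suggests: a small 2-4-1 XOR network trained with per-sample SGD, using LeCun's scaled activation `f(x) = 1.7159 * tanh(2/3 * x)` and targets in {-1, 1}, with weights initialized uniformly in [-0.5, 0.5] as in the question. The hidden-layer size, learning rate, and epoch count here are illustrative assumptions, not values from the original post.

    ```python
    import numpy as np

    def f(x):
        # LeCun's scaled tanh from "Efficient BackProp": f(+/-1) ~= +/-1,
        # so the function is not saturated at the targets {-1, 1}.
        return 1.7159 * np.tanh(2.0 / 3.0 * x)

    def f_prime(x):
        # d/dx [1.7159 * tanh(2x/3)] = 1.7159 * (2/3) * (1 - tanh(2x/3)^2)
        t = np.tanh(2.0 / 3.0 * x)
        return 1.7159 * (2.0 / 3.0) * (1.0 - t * t)

    rng = np.random.default_rng(0)

    # XOR with inputs and targets encoded in {-1, 1}
    X = np.array([[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0]])
    T = np.array([-1.0, 1.0, 1.0, -1.0])

    # Weights initialized uniformly in [-0.5, 0.5], as described in the question
    H = 4  # hidden units (assumption)
    W1 = rng.uniform(-0.5, 0.5, size=(H, 2))
    b1 = rng.uniform(-0.5, 0.5, size=H)
    W2 = rng.uniform(-0.5, 0.5, size=H)
    b2 = rng.uniform(-0.5, 0.5)

    lr = 0.05
    for epoch in range(10000):
        for i in rng.permutation(len(X)):  # stochastic: one sample at a time
            x, t = X[i], T[i]
            # Forward pass
            z1 = W1 @ x + b1
            h = f(z1)
            z2 = W2 @ h + b2
            y = f(z2)
            # Backward pass for squared-error loss 0.5 * (y - t)^2
            delta2 = (y - t) * f_prime(z2)
            delta1 = delta2 * W2 * f_prime(z1)
            W2 -= lr * delta2 * h
            b2 -= lr * delta2
            W1 -= lr * np.outer(delta1, x)
            b1 -= lr * delta1

    pred = f(W2 @ f(X @ W1.T + b1).T + b2)
    print(np.sign(pred))  # typically recovers the XOR targets
    ```

    With plain `tanh`, hitting targets of exactly -1 or 1 requires the pre-activation to go to infinity, so gradients shrink and the error plateaus; the 1.7159 scaling puts the targets in the activation's near-linear region instead.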
