Neural networks: avoid bias in any direction for output

Submitted by 醉酒当歌 on 2019-12-24 12:05:17

Question


I'm having difficulties with the CartPole problem.

The cart takes either 0 or 1 as input: move left or move right.

Let's say we have a net with 4 inputs plus a bias, 3 hidden layers with 1 neuron each, and 1 output, where all weights are random floats between 0 and 1 and the inputs are random floats between -10 and 10.

Because I chose everything at random, I expect the output to be approximately 0.5 on average, so that the cart goes right about as often as it goes left.

This is not the case; I get approximately 0.63 on average. This causes a big problem, because the cart never decides to go left. The skew also seems to depend on the number of neurons per hidden layer.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class NeuralNetwork(object):
    def __init__(self):
        self.inputLayerSize = 4
        self.hiddenLayerCount = 3
        self.hiddenLayerSize = 1
        self.outputLayerSize = 1

        # Initialize weights with uniform random values in [0, 1)
        self.W = []
        self.W.append(np.random.rand(self.inputLayerSize + 1, self.hiddenLayerSize))
        for _ in range(self.hiddenLayerCount - 1):
            self.W.append(np.random.rand(self.hiddenLayerSize, self.hiddenLayerSize))
        self.W.append(np.random.rand(self.hiddenLayerSize, self.outputLayerSize))

    def forward(self, data):
        layers = []
        data = np.append(data, [1])   # add the bias input
        layers.append(data)
        for h in range(self.hiddenLayerCount + 1):
            z = np.dot(layers[h], self.W[h])
            a = sigmoid(z)
            layers.append(a)

        # final output (note: layers[-1] has already passed through sigmoid)
        return sigmoid(layers[self.hiddenLayerCount + 1])
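For reference, a minimal driver (my own test harness, not part of the original setup, assuming the class above) that reproduces the skew by averaging the output over many random inputs:

import numpy as np

net = NeuralNetwork()
# 10,000 random input vectors, each with 4 features in [-10, 10]
inputs = np.random.uniform(-10, 10, size=(10000, 4))
outputs = np.array([net.forward(x) for x in inputs])
print(outputs.mean())   # consistently well above 0.5, around 0.63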

I can fix the problem by subtracting 0.1 from the output, but this is obviously cheating; I see no mathematical reason to use 0.1 as some sort of magic number.

I believe I'm approaching the problem wrong, or have messed up some of my code. Any help would be appreciated!


Answer 1:


There's at least one problem with your neural network that skews the output probabilities: the model output is the sigmoid of the last layer, which is itself already a sigmoid activation.

This means that your logit (i.e., the raw score fed to the final sigmoid) lies in [0, 1], so the final probability is computed over a [0, 1] domain instead of (-inf, inf).

Since sigmoid(x) >= 0.5 for all x >= 0, a logit confined to [0, 1] yields a probability in roughly [0.5, 0.731], which is always at least 0.5 and is consistent with the ~0.63 average you observe.
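A quick numerical check of this (a standalone sketch, not part of the original code):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# the only logit values the last layer can produce are in [0, 1]
logits = np.linspace(0.0, 1.0, 101)
probs = sigmoid(logits)
print(probs.min(), probs.max())   # ~0.5 and ~0.731: the cart always leans right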

Solution: try the same network without the last sigmoid.
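In code, that means returning the last activation directly (a sketch against the class above, keeping everything else unchanged):

def forward(self, data):
    layers = []
    data = np.append(data, [1])   # add the bias input
    layers.append(data)
    for h in range(self.hiddenLayerCount + 1):
        z = np.dot(layers[h], self.W[h])
        layers.append(sigmoid(z))
    # the last layer is already a sigmoid activation in (0, 1),
    # so do not squash it a second time
    return layers[-1]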



Source: https://stackoverflow.com/questions/48138475/neural-networks-avoid-bias-in-any-direction-for-output
