Solving XOR with a single-layer perceptron

Submitted by ╄→尐↘猪︶ㄣ on 2020-01-31 18:43:12

Question


I've always heard that the XOR problem cannot be solved by a single-layer perceptron (one without a hidden layer), since it is not linearly separable. I understand that there is no linear function that can separate the classes.

However, what if we use a non-monotonic activation function such as sin() or cos()? Is this still the case? I would imagine these kinds of functions might be able to separate the classes.


Answer 1:


Yes, a single layer neural network with a non-monotonic activation function can solve the XOR problem. More specifically, a periodic function would cut the XY plane more than once. Even an Abs or Gaussian activation function will cut it twice.

Try it yourself: W1 = W2 = 100, Wb = -100, activation = exp(-(Wx)^2), where Wx denotes the weighted sum W1*x1 + W2*x2 + Wb (the bias input is fixed at 1):

  • exp(-(100 * 0 + 100 * 0 - 100 * 1)^2) = ~0
  • exp(-(100 * 0 + 100 * 1 - 100 * 1)^2) = 1
  • exp(-(100 * 1 + 100 * 0 - 100 * 1)^2) = 1
  • exp(-(100 * 1 + 100 * 1 - 100 * 1)^2) = ~0
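The four cases above can be checked with a short Python sketch (the function name here is just illustrative):

```python
import math

def gaussian_unit(x1, x2, w1=100, w2=100, wb=-100):
    """Single unit with Gaussian activation exp(-(Wx)^2)."""
    z = w1 * x1 + w2 * x2 + wb  # weighted sum; bias input fixed at 1
    return math.exp(-z ** 2)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, round(gaussian_unit(x1, x2)))  # prints the XOR truth table: 0, 1, 1, 0
```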

Or with the abs activation: W1 = -1, W2 = 1, Wb = 0 (yes, you can solve it even without a bias)

  • abs(-1 * 0 + 1 * 0) = 0
  • abs(-1 * 0 + 1 * 1) = 1
  • abs(-1 * 1 + 1 * 0) = 1
  • abs(-1 * 1 + 1 * 1) = 0
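The same check for the abs variant, which needs no bias at all:

```python
def abs_unit(x1, x2, w1=-1, w2=1):
    """Single unit with absolute-value activation; no bias term."""
    return abs(w1 * x1 + w2 * x2)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, abs_unit(x1, x2))  # prints 0, 1, 1, 0
```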

Or with sine: W1 = W2 = -PI/2, Wb = -PI

  • sin(-PI/2 * 0 - PI/2 * 0 - PI * 1) = 0
  • sin(-PI/2 * 0 - PI/2 * 1 - PI * 1) = 1
  • sin(-PI/2 * 1 - PI/2 * 0 - PI * 1) = 1
  • sin(-PI/2 * 1 - PI/2 * 1 - PI * 1) = 0
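And the sine variant; the results are exact up to floating-point error (sin(-pi) evaluates to roughly -1.2e-16 rather than exactly 0), so the sketch rounds the outputs:

```python
import math

def sin_unit(x1, x2):
    """Single unit with sine activation and the weights from the answer."""
    w1 = w2 = -math.pi / 2
    wb = -math.pi
    return math.sin(w1 * x1 + w2 * x2 + wb)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, round(sin_unit(x1, x2)))  # prints 0, 1, 1, 0
```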



Answer 2:


No, not without "hacks"

The reason we need a hidden layer becomes intuitively apparent when the XOR problem is illustrated graphically.

You cannot draw a single sine or cosine function to separate the two colors. You need an additional line (hidden layer), as depicted in the following figure:

[Figure: XOR points plotted on the plane with the separating lines; not reproduced in this copy.]

Source: https://stackoverflow.com/questions/30412427/solving-xor-with-single-layer-perceptron
