Neural Networks: What does “linearly separable” mean?

野趣味  2020-12-23 16:54

I am currently reading the Machine Learning book by Tom Mitchell. When talking about neural networks, Mitchell states:

\"Although the perceptron rule

3 Answers
  •  遥遥无期
    2020-12-23 17:26

    Look at the following two data sets:

    ^                         ^
    |   X    O                |  AA    /
    |                         |  A    /
    |                         |      /   B
    |   O    X                |  A  /   BB
    |                         |    /   B
    +----------->             +----------->
    

    The left data set is not linearly separable (without using a kernel). The right one is separable into two parts, `A` and `B`, by the indicated line.

    I.e., you cannot draw a straight line in the left image such that all the X are on one side and all the O are on the other. That is why it is called "not linearly separable": there exists no hyperplane (in two dimensions, a straight line) separating the two classes.
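
    To make this concrete, here is a minimal sketch (assuming scikit-learn and NumPy are available; the coordinates are made up to mimic the two pictures above): a perceptron fits the right data set perfectly, but can never reach full accuracy on the XOR-like left one.

        import numpy as np
        from sklearn.linear_model import Perceptron

        # Left data set: X and O sit on alternating corners (the XOR pattern).
        left_X = np.array([[0.0, 1.0], [1.0, 0.0], [0.0, 0.0], [1.0, 1.0]])
        left_y = np.array([1, 1, 0, 0])          # 1 = X, 0 = O

        # Right data set: the A points lie on one side of a line,
        # the B points on the other.
        right_X = np.array([[1.0, 3.0], [1.0, 2.0], [1.5, 2.5],
                            [3.0, 1.0], [3.5, 1.5], [3.0, 0.5]])
        right_y = np.array([0, 0, 0, 1, 1, 1])   # 0 = A, 1 = B

        for name, X, y in [("left (XOR-like)", left_X, left_y),
                           ("right", right_X, right_y)]:
            clf = Perceptron(max_iter=1000, tol=None).fit(X, y)
            print(name, "training accuracy:", clf.score(X, y))

        # The right set reaches 1.0; the left set cannot, because no straight
        # line puts all the X on one side and all the O on the other.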

    Now the famous kernel trick (which is likely discussed later in the book) allows many linear methods to be used on non-linear problems: it implicitly adds extra dimensions in which a non-linearly-separable problem becomes linearly separable.
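
    As a sketch of that idea (using an explicit feature map rather than a true kernel, and again assuming scikit-learn): lifting the XOR-like data into three dimensions with the product feature x1*x2 makes it linearly separable, and the very same perceptron now fits it perfectly.

        import numpy as np
        from sklearn.linear_model import Perceptron

        X = np.array([[0.0, 1.0], [1.0, 0.0], [0.0, 0.0], [1.0, 1.0]])
        y = np.array([1, 1, 0, 0])  # 1 = X, 0 = O

        # Explicit feature map phi(x1, x2) = (x1, x2, x1 * x2). A kernel
        # method would achieve the effect of such a map implicitly, through
        # inner products, but the geometric idea is the same.
        X_lifted = np.column_stack([X, X[:, 0] * X[:, 1]])

        clf = Perceptron(max_iter=1000, tol=None).fit(X_lifted, y)
        print("accuracy in the lifted space:", clf.score(X_lifted, y))  # 1.0

    For example, the plane x1 + x2 - 2*(x1*x2) = 0.5 separates the two classes in the lifted space, which is exactly the kind of solution the perceptron can now find.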
