Question
I am currently studying neural network theory, and everywhere I read that a network consists of the following layers:
- Input Layer
- Hidden Layer(s)
- Output Layer
I have seen some graphical descriptions that show the input layer as real nodes in the net, while others show this layer as just a vector of values [x1, x2, ..., xn].
What is the correct structure?
Is the "input layer" a real layer of neurons? Or is this just abstractly named as layer, while it really is just the input vector?
Here are two contradictory and confusing images I found on the web:
Here it looks like the input layer consists of neurons:
Here it looks like the input layer is just an input vector:
Answer 1:
Let me answer your question with some mathematical notation, which will make this easier to understand than random images. First, recall the perceptron.
The task of the perceptron is to find a decision function that separates the points of a given set into two classes, according to the sign of its output. It does so with a function
f : R^n -> R,  f(X) = <W, X> + b
where W is the weight vector and X is the input point. For example, if you have a line defined by the equation 3x + y = 0, then W is (3, 1) and X is (x, y).
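To make the notation concrete, here is a minimal sketch of that decision function in plain Python. The line example (3x + y = 0, so W = (3, 1)) comes from the text above; the sample points are made up for illustration.

```python
def f(W, X, b=0.0):
    """Perceptron decision function: the inner product <W, X> plus a bias b."""
    return sum(w * x for w, x in zip(W, X)) + b

# The line 3x + y = 0 corresponds to W = (3, 1) and b = 0.
W = (3, 1)

print(f(W, (1, -3)))  #  0.0 -> the point lies on the line
print(f(W, (1, 1)))   #  4.0 -> positive side of the line
print(f(W, (-1, 0)))  # -3.0 -> negative side of the line
```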
A neural network can be thought of as a graph in which each vertex is a simple perceptron: each node is nothing but a function that takes in some values and outputs a new one, which can then be fed to the next node. In your second image, these are the two hidden layers.
What do these nodes need as input? Weight and input vectors, W and X, which in your image are written as x0, x1, ..., xn and w0, w1, ..., wn.
Ultimately, we can conclude that what a neural network needs in order to function is an input vector together with a set of weight vectors.
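As a hedged sketch of that graph view, with made-up weights and layer sizes, here is a tiny feed-forward pass in plain Python: each node applies <W, X> + b followed by an activation, and its output becomes an input value for the next layer.

```python
import math

def neuron(W, X, b):
    """One vertex of the graph: a perceptron with a sigmoid activation."""
    return 1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(W, X)) + b)))

def layer(weights, biases, X):
    """Apply every perceptron of one layer to the same input vector X."""
    return [neuron(W, X, b) for W, b in zip(weights, biases)]

# Made-up example: 2 inputs -> 2 hidden neurons -> 1 output neuron.
hidden_W = [[0.5, -0.2], [0.1, 0.4]]
hidden_b = [0.0, -0.1]
output_W = [[1.0, -1.0]]
output_b = [0.2]

X = [1.0, 2.0]                    # the input vector
h = layer(hidden_W, hidden_b, X)  # hidden layer: new values for the next nodes
y = layer(output_W, output_b, h)  # output layer
print(y)
```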
My overall advice would be to pick one source for your learning and stick to it, rather than hopping between conflicting images across the internet.
Answer 2:
Is the "input layer" a real layer of neurons? Or is this just abstractly named as layer, while it really is just the input vector?
Yes, it's both, depending on the abstraction. On paper, the network has input neurons. At the implementation level you have to organize this data (usually in arrays/vectors), which is why you speak of an input vector:
An input vector holds the input neuron values (representing the input layer).
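A minimal sketch of that implementation view, assuming NumPy and made-up sizes: the "input layer" is literally just an array of values, and the first actual computation happens in the first hidden layer.

```python
import numpy as np

# The "input layer" is just this vector of values [x1, x2, ..., xn]:
x = np.array([0.5, -1.2, 3.0])  # input neuron values (the input layer)

# The first real computation happens at the first hidden layer;
# the input "neurons" themselves compute nothing.
W1 = np.random.randn(4, 3)      # made-up weights: 3 inputs -> 4 hidden units
b1 = np.zeros(4)
h1 = np.tanh(W1 @ x + b1)
print(h1.shape)                 # (4,)
```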
If you're familiar with the basics of graph theory or image processing, it's the same principle: for example, you can call an image a matrix (the technical view) or a field of pixels (the more abstract view).
Source: https://stackoverflow.com/questions/28288489/neural-networks-does-the-input-layer-consist-of-neurons