Question:
## what my model looks like
# required imports for the snippet below (Keras 2 standalone API)
import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

# defining the model architecture
model = Sequential()
# 1st conv layer
model.add(Conv2D(32, (5, 5), activation='relu', input_shape=x_ip_shape))
# 1st max pool
model.add(MaxPooling2D(pool_size=(2, 2)))
# 2nd conv layer
model.add(Conv2D(64, (7, 7), activation='relu'))
# 2nd max pool
model.add(MaxPooling2D(pool_size=(2, 2)))
# Flattening the input
model.add(Flatten())
# 1st Fully connected layer
model.add(Dense(10, activation='relu'))
# Adding dropout
model.add(Dropout(0.25))
# softmax layer
model.add(Dense(classes_out, activation='softmax'))
# defining loss, optimizer (with learning rate) and metric
model.compile(loss='categorical_crossentropy', optimizer=keras.optimizers.Adam(1e-4), metrics=['accuracy'])
## prediction
scores = model.evaluate(test_x, test_labels, verbose=0)
QUESTION:
Instead, how can I get the output of a forward pass at the 1st fully connected layer, i.e. model.add(Dense(10, activation='relu'))?
I went through the example in the Keras FAQ, but it confused me. In this:
get_3rd_layer_output = K.function([model.layers[0].input, K.learning_phase()], [model.layers[3].output])
Where do I pass the input data? What does model.layers[0].input mean? Does the already trained model store the input?
Answer 1:
get_3rd_layer_output is a backend function (Theano or TensorFlow, depending on your Keras backend). You only need to make a few modifications to it.
model.layers[0].input stays as it is if you want the output (of any layer) given the input of the first layer in the network. In other words, if you wanted the output of some layer given the 4th layer's input instead, you would change this to model.layers[4].input.
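model.layers[0].input is just the model's symbolic input tensor (a placeholder), not stored data; a trained model does not keep its training inputs. A minimal sketch to convince yourself:

print(model.layers[0].input)   # prints a symbolic placeholder tensor, not actual data
print(model.input)             # same tensor, exposed directly on a Sequential model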
K.learning_phase() indicates whether you want the output in the training phase or in the testing phase. There will be some differences between the two, because layers such as Dropout behave differently at train and test time. Pass 0 if you want output consistent with predict().
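One thing worth noting for the model in the question: Dropout comes after the first Dense layer, so the phase flag only changes outputs taken from layers downstream of Dropout, e.g. the final softmax. A minimal sketch, assuming x_batch is an input batch of shape (1,) + x_ip_shape:

from keras import backend as K

# x_batch: assumed input batch of shape (1,) + x_ip_shape (not defined in the question)
# output of the final softmax layer, which sits after the Dropout layer
get_final_output = K.function([model.layers[0].input, K.learning_phase()],
                              [model.layers[-1].output])

probs_test = get_final_output([x_batch, 0])[0]   # 0 = test phase: Dropout is a no-op, consistent with predict()
probs_train = get_final_output([x_batch, 1])[0]  # 1 = train phase: Dropout is active, so the output is stochastic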
model.layers[3].output: this is where you need to make a modification. Find the index of the layer whose output you want. If you use an IDE (e.g. PyCharm), click on the model variable and inspect the indices of its layers (remember they start from zero). If not, assign a name to the layer, or list all the layers via model.layers and read the index off that list. For example, if you want the output of the layer at index 10, change this to model.layers[10].output.
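If you do not want to rely on an IDE, enumerating model.layers is enough; a minimal sketch using only standard Keras attributes:

# list every layer with its index, name and output shape
for i, layer in enumerate(model.layers):
    print(i, layer.name, layer.output_shape)

For the model in the question, the layers come out as Conv2D (0), MaxPooling2D (1), Conv2D (2), MaxPooling2D (3), Flatten (4), Dense(10) (5), Dropout (6), Dense(classes_out) (7), so the first fully connected layer is at index 5.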
How to call this?
Again, this is a backend function, i.e. a symbolic one. You have to pass values in and evaluate it, as follows:
out = get_3rd_layer_output([X, 0])[0] # test mode
Remember, even if X is a single data point, its shape should be (1,) + x_ip_shape (a batch of one).
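Putting this together for the question's model, here is a minimal end-to-end sketch that pulls the output of the first fully connected layer (Dense(10), index 5 as listed above); x_single stands for a hypothetical single sample of shape x_ip_shape:

import numpy as np
from keras import backend as K

# x_single: assumed single sample of shape x_ip_shape (not defined in the question)
# backend function: input of the first layer -> output of the Dense(10) layer (index 5)
get_dense_output = K.function([model.layers[0].input, K.learning_phase()],
                              [model.layers[5].output])

x_batch = np.expand_dims(x_single, axis=0)     # shape becomes (1,) + x_ip_shape
dense_out = get_dense_output([x_batch, 0])[0]  # 0 = test mode
print(dense_out.shape)                         # (1, 10)

The Keras FAQ also shows an equivalent approach that avoids the backend function entirely: build an intermediate model with keras.models.Model(inputs=model.input, outputs=model.layers[5].output) and call .predict(x_batch) on it.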
Source: https://stackoverflow.com/questions/42919436/keras-getting-output-of-intermidate-layers