I need help calculating derivatives of a Keras model's output with respect to its inputs.
I want to add a regularization term to the loss function. The regularizer contains the derivative of the classifier function, so I tried to take the derivative of the model output with respect to the input. The model is an MLP with one hidden layer, trained on MNIST. When I compile the model and take the derivative, I get [None] as the result instead of the derivative tensor.
I have seen a similar post, but didn't find an answer there either: Taking derivative of Keras model wrt to inputs is returning all zeros
Here is my code. Please help me solve the problem.
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras import backend as K

num_hiddenNodes = 1024
num_classes = 10

# Load MNIST, flatten the images to 784-dimensional vectors, and scale to [0, 1]
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(-1, 28 * 28)
X_train = X_train.astype('float32')
X_train /= 255
y_train = keras.utils.to_categorical(y_train, num_classes)

# MLP with a single hidden layer
model = Sequential()
model.add(Dense(num_hiddenNodes, activation='softplus', input_shape=(784,)))
model.add(Dense(num_classes, activation='softmax'))

# Compile the model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

logits = model.output
# logits = model.layers[-1].output
print(logits)

# Wrap the training data in a tensor and try to differentiate with respect to it
X = K.identity(X_train)
# X = tf.placeholder(dtype=tf.float32, shape=(None, 784))
print(X)
print(K.gradients(logits, X))
Here is the output of the code. The two printed objects are tensors, but K.gradients returns [None]:
Tensor("dense_2/Softmax:0", shape=(?, 10), dtype=float32)
Tensor("Identity:0", shape=(60000, 784), dtype=float32)
[None]
You are computing the gradients with respect to X_train, which is not an input variable of the computation graph. Instead, you need to get the symbolic input tensor of the model, so try something like:
grads = K.gradients(model.output, model.input)
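To get numeric values out of that symbolic tensor, you can wrap it in a backend function and feed it a batch. A minimal sketch, assuming Keras 2 on the TensorFlow 1.x backend (the batch of 32 images is an arbitrary illustrative choice):
get_grads = K.function([model.input], grads)   # compile a callable that evaluates the gradient
grad_values = get_grads([X_train[:32]])[0]     # numeric gradients for a batch of 32 images
print(grad_values.shape)                       # (32, 784)
Note that K.gradients sums over the 10 output components; to differentiate a single class score, slice model.output (e.g. model.output[:, 0]) before taking the gradient.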
In order to calculate gradients with respect to the model's weights, you first need to get the trainable variables. Here is how you do it:
outputs = model.output
trainable_variables = model.trainable_weights
Now calculate the gradients as:
gradients = K.gradients(outputs, trainable_variables)
As a side note, the gradients are part of your computation graph, whose execution depends on your backend. If you are using TensorFlow, you may need to create a session and evaluate the gradient tensors inside it.
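For instance, a minimal sketch of that evaluation with the TensorFlow 1.x backend (K.get_session() returns the session Keras is already using, so the weights are initialized; the batch of 32 images is an arbitrary choice):
sess = K.get_session()                         # the session Keras already uses
grad_values = sess.run(gradients, feed_dict={model.input: X_train[:32]})
for g in grad_values:
    print(g.shape)                             # one array per trainable weight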
Source: https://stackoverflow.com/questions/49312989/keras-calculating-derivatives-of-model-output-wrt-input-returns-none