How to compute loss gradient w.r.t. model inputs in a Keras model?

Submitted by 。_饼干妹妹 on 2020-08-26 02:11:36

Question


What I want to achieve is to compute the gradient of the cross entropy with respect to the input values x. In TensorFlow I had no trouble with that:

ce_grad = tf.gradients(cross_entropy, x)

But as my networks grew bigger and bigger I switched to Keras to build them faster. However, now I don't really know how to achieve the above. Is there a way to extract the cross entropy and input tensors from the model variable that stores my whole model?

Just for clarity my cross_entropy is:

cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels = y_, logits=y_conv))
<tf.Tensor 'Mean:0' shape=() dtype=float32>

and x:

x = tf.placeholder(tf.float32, shape = [None,784])
<tf.Tensor 'Placeholder:0' shape=(?, 784) dtype=float32>
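For context, the TF1-style graph setup the question describes can be sketched end to end as follows. The single-layer logits network (`W`, `b`, `y_conv`) is a hypothetical stand-in for the asker's actual model, just to make the snippet self-contained and runnable under `tf.compat.v1`:

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# placeholders for inputs and one-hot labels
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])

# hypothetical single-layer network standing in for y_conv
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y_conv = tf.matmul(x, W) + b

cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))

# gradient of the loss w.r.t. the inputs; same shape as x
ce_grad = tf.gradients(cross_entropy, x)[0]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    g = sess.run(ce_grad, feed_dict={
        x: np.random.rand(4, 784).astype("float32"),
        y_: np.eye(10)[np.random.randint(0, 10, 4)].astype("float32"),
    })

print(g.shape)  # (4, 784)
```

Note that `tf.gradients` returns a list (one gradient per input tensor), hence the `[0]` to get the gradient array itself.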

Answer 1:


We can write a backend function to do that: use K.categorical_crossentropy to compute the loss and K.gradients to take its gradient with respect to the model inputs:

from keras import backend as K
from keras.layers import Input

# an input layer to feed labels
y_true = Input(shape=labels_shape)
# compute loss based on model's output and true labels
ce = K.mean(K.categorical_crossentropy(y_true, model.output))
# compute gradient of loss with respect to inputs
grad_ce = K.gradients(ce, model.inputs)
# create a function to be able to run this computation graph
func = K.function(model.inputs + [y_true], grad_ce)

# usage
output = func([model_input_array(s), true_labels])
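In TensorFlow 2, where eager execution makes `K.gradients` unavailable, the same computation can be sketched with `tf.GradientTape`. The small two-layer model below is an assumption purely for illustration, not the asker's network:

```python
import numpy as np
import tensorflow as tf

# hypothetical toy model, just so the snippet is self-contained
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
loss_fn = tf.keras.losses.CategoricalCrossentropy()

def input_gradients(x, y_true):
    """Return d(loss)/d(x) for a batch of inputs."""
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)  # x is not a tf.Variable, so it must be watched explicitly
        loss = loss_fn(y_true, model(x))
    return tape.gradient(loss, x)

# usage: the gradient has the same shape as the input batch
x = np.random.rand(4, 784).astype("float32")
y = tf.one_hot(np.random.randint(0, 10, size=4), 10)
grads = input_gradients(x, y)
print(grads.shape)  # (4, 784)
```

The `tape.watch(x)` call is the key step: by default the tape only tracks trainable variables, so gradients with respect to raw inputs require watching them explicitly.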


Source: https://stackoverflow.com/questions/53649837/how-to-compute-loss-gradient-w-r-t-to-model-inputs-in-a-keras-model
