Why “softmax_cross_entropy_with_logits_v2” backprops into labels

Submitted by 孤人 on 2019-12-01 17:02:16

Question


I am wondering why, in TensorFlow version 1.5.0 and later, softmax_cross_entropy_with_logits_v2 defaults to backpropagating into both labels and logits. What are some applications/scenarios where you would want to backprop into labels?


Answer 1:


I saw the GitHub issue below asking the same question; you might want to follow it for future updates.

https://github.com/tensorflow/minigo/issues/37

I don't speak for the developers who made this decision, but I would surmise that they made this the default because it is indeed used often, and for most applications where you aren't backpropagating into the labels, the labels are a constant anyway and won't be adversely affected.
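For reference, here is a minimal TF 1.x sketch (with made-up label and logit values) showing that gradients are defined for both arguments by default, and that passing the labels through tf.stop_gradient blocks the label path, which is what the TensorFlow documentation suggests when you want the old behaviour:

```python
import tensorflow as tf  # TF 1.x style, as in the question

# Made-up soft labels and logits; both are variables so gradients could flow into either.
labels = tf.Variable([[0.1, 0.9]], dtype=tf.float32)
logits = tf.Variable([[2.0, 1.0]], dtype=tf.float32)

loss = tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels, logits=logits)

# By default the v2 op defines gradients for both inputs, so neither entry is None.
grad_labels, grad_logits = tf.gradients(loss, [labels, logits])

# To disallow backprop into the labels, stop the gradient on the labels first.
loss_fixed_labels = tf.nn.softmax_cross_entropy_with_logits_v2(
    labels=tf.stop_gradient(labels), logits=logits)
```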

Two common use cases for backpropagating into labels are:

  • Creating adversarial examples

There is a whole field of study around building adversarial examples that fool a neural network. Many of the approaches involve training a network, then holding the network fixed and backpropagating into the labels (the original image) to tweak it (usually under some constraints) so that the network misclassifies the image; a rough sketch of this pattern appears after this list.

  • Visualizing the internals of a neural network.

I also recommend watching the DeepViz toolkit video on YouTube; you'll learn a ton about the internal representations learned by a neural network.

https://www.youtube.com/watch?v=AgkfIQ4IGaM

If you keep digging into that and find the original paper, you'll see that they also backpropagate into the labels to generate images that strongly activate certain filters in the network, in order to understand them.
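Both use cases reduce to the same mechanical pattern: hold the trained network fixed and differentiate the loss with respect to the thing you want to change (here, the input). Below is a rough TF 1.x sketch; model_logits is a hypothetical stand-in for a trained, frozen network, and the single FGSM-style step is only illustrative:

```python
import tensorflow as tf  # TF 1.x style sketch

# Hypothetical stand-in for a trained network; in practice the weights
# would be restored from a checkpoint and kept fixed.
def model_logits(x):
    w = tf.constant([[0.5, -0.3], [0.2, 0.8], [-0.1, 0.4]], dtype=tf.float32)
    return tf.matmul(x, w)

# The "input image" is the thing we optimize; the network weights stay fixed.
x = tf.Variable([[0.2, 0.7, 0.1]], dtype=tf.float32)
target = tf.constant([[0.0, 1.0]])  # class we want the network to predict

loss = tf.nn.softmax_cross_entropy_with_logits_v2(labels=target,
                                                  logits=model_logits(x))

# Gradient of the loss with respect to the input, not the weights.
grad_x = tf.gradients(loss, x)[0]

# FGSM-style step: nudge the input in the direction that lowers the loss
# for the target class, i.e. toward a misclassification.
epsilon = 0.01
x_adv = x - epsilon * tf.sign(grad_x)
```

The same loop, run for many steps instead of one, is essentially what activation-maximization visualizations do: optimize the input so that a chosen unit or class score becomes large while the network itself never changes.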



Source: https://stackoverflow.com/questions/49100865/why-softmax-cross-entropy-with-logits-v2-backprops-into-labels
