Why doesn't Keras need the gradient of a custom loss function?

Submitted on 2020-05-29 04:19:25

Question


To my understanding, in order to update model parameters through gradient descent, the algorithm needs at some point to calculate the derivative of the error function E with respect to the output y: dE/dy. Nevertheless, I've seen that if you want to use a custom loss function in Keras, you only need to define E itself; you don't need to define its derivative. What am I missing?

Each loss function has a different derivative, for example:

If the loss function is the mean squared error, E = (y_true - y)^2: dE/dy = -2(y_true - y)

If the loss function is cross entropy, E = -y_true * log(y): dE/dy = -y_true / y

Again, how is it possible that the model does not ask me what the derivative is? How does the model compute the gradient of the loss function with respect to the parameters from the definition of E alone?
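As a sanity check on the analytic derivatives quoted above (for E = (y_true - y)^2, dE/dy = -2(y_true - y); for E = -y_true * log(y), dE/dy = -y_true / y), a quick stdlib-only sketch compares them against a central finite difference; the sample values y = 0.4 and y_true = 1.0 are arbitrary choices for illustration:

```python
import math

def mse(y, y_true):
    # squared error for a single sample: E = (y_true - y)^2
    return (y_true - y) ** 2

def cross_entropy(y, y_true):
    # cross entropy for a single sample: E = -y_true * log(y)
    return -y_true * math.log(y)

def numeric_grad(f, y, y_true, h=1e-6):
    # central finite difference: dE/dy ~= (E(y+h) - E(y-h)) / (2h)
    return (f(y + h, y_true) - f(y - h, y_true)) / (2 * h)

y, y_true = 0.4, 1.0
print(numeric_grad(mse, y, y_true))            # ~ -2 * (1.0 - 0.4) = -1.2
print(numeric_grad(cross_entropy, y, y_true))  # ~ -1.0 / 0.4 = -2.5
```

Both numeric estimates match the analytic formulas, confirming the signs of the derivatives.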

Thanks


Answer 1:


To my understanding, as long as every operator you use in your error function already has a predefined gradient, the underlying framework can compute the gradient of your loss function automatically. This is reverse-mode automatic differentiation: the framework records the graph of primitive operations used to build E, and applies the chain rule through that graph, so only the forward definition of E is needed.
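The mechanism described above can be illustrated with a minimal, stdlib-only sketch of reverse-mode automatic differentiation (this is a toy model of the idea, not the actual Keras/TensorFlow machinery): each primitive operation carries its own local gradient, so defining only the forward computation of E is enough to recover dE/dy by the chain rule. The `Var` class and sample values here are hypothetical names for illustration:

```python
class Var:
    """A scalar value that records how it was computed."""

    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs of (parent Var, local gradient)
        self.grad = 0.0

    def __sub__(self, other):
        # d(a - b)/da = 1, d(a - b)/db = -1
        return Var(self.value - other.value,
                   parents=((self, 1.0), (other, -1.0)))

    def __mul__(self, other):
        # d(a * b)/da = b, d(a * b)/db = a
        return Var(self.value * other.value,
                   parents=((other, self.value), (self, other.value)))

    def backward(self, seed=1.0):
        # Accumulate the chain-rule contribution along each path to the
        # inputs (a toy recursion; real frameworks use topological order).
        self.grad += seed
        for parent, local_grad in self.parents:
            parent.backward(seed * local_grad)

# Mean squared error for one sample, written only as forward operations:
# no derivative is supplied anywhere.
y_true = Var(1.0)
y = Var(0.4)
diff = y_true - y
E = diff * diff            # E = (y_true - y)^2 = 0.36

E.backward()               # chain rule over the recorded graph
print(E.value, y.grad)     # y.grad = dE/dy = -2 * (1.0 - 0.4) = -1.2
```

Frameworks like TensorFlow (which backs Keras) do essentially this at scale: every built-in op ships with its local gradient, so any loss composed from those ops is differentiable without the user writing dE/dy.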



Source: https://stackoverflow.com/questions/48219296/why-doesnt-keras-need-the-gradient-of-a-custom-loss-function
