Question
I was recently reading some PyTorch code and came across the loss.backward() and optimizer.step() functions. Are there any equivalents of these in TensorFlow/Keras?
Answer 1:
The equivalent of loss.backward() in TensorFlow is tf.GradientTape(). TensorFlow provides the tf.GradientTape API for automatic differentiation, i.e. computing the gradient of a computation with respect to its input variables. TensorFlow "records" all operations executed inside the context of a tf.GradientTape onto a "tape". TensorFlow then uses that tape and the gradients associated with each recorded operation to compute the gradients of the "recorded" computation using reverse-mode differentiation.
The equivalent of optimizer.step() in TensorFlow is minimize(), which minimizes the loss by updating the variable list. Calling minimize() takes care of both computing the gradients and applying them to the variables.
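A minimal sketch of minimize(), reusing the same toy model as above and assuming a TensorFlow 2.x Keras optimizer where the loss is passed as a callable so the optimizer can build the tape itself:

```python
import tensorflow as tf

# Toy model parameters and data (illustrative only).
w = tf.Variable(2.0)
b = tf.Variable(0.5)
x = tf.constant(3.0)
y_true = tf.constant(10.0)

optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

# minimize() computes the gradients and applies them in one call,
# roughly covering loss.backward() + optimizer.step() together.
def loss_fn():
    y_pred = w * x + b
    return tf.square(y_true - y_pred)

optimizer.minimize(loss_fn, var_list=[w, b])
print(w.numpy(), b.numpy())  # variables have been updated in place
```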
If you want to process the gradients before applying them, you can instead use the optimizer in three steps (see the sketch after this list):
- Compute the gradients with tf.GradientTape.
- Process the gradients as you wish.
- Apply the processed gradients with apply_gradients().
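A minimal sketch of that three-step variant, again assuming the same toy model, with gradient clipping standing in as the "processing" step (an illustrative choice, not prescribed by the original answer):

```python
import tensorflow as tf

# Toy model parameters and data (illustrative only).
w = tf.Variable(2.0)
b = tf.Variable(0.5)
x = tf.constant(3.0)
y_true = tf.constant(10.0)

optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

# 1. Compute the gradients with tf.GradientTape.
with tf.GradientTape() as tape:
    y_pred = w * x + b
    loss = tf.square(y_true - y_pred)
grads = tape.gradient(loss, [w, b])

# 2. Process the gradients as you wish (here: clip by global norm).
grads, _ = tf.clip_by_global_norm(grads, 1.0)

# 3. Apply the processed gradients.
optimizer.apply_gradients(zip(grads, [w, b]))
```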
Hope this answers your question. Happy Learning.
Source: https://stackoverflow.com/questions/61623722/pytorch-equivalent-features-in-tensorflow