What is the purpose of the Tensorflow Gradient Tape?

天涯浪人 2020-12-12 14:39

I watched the Tensorflow Developer's summit video on Eager Execution in Tensorflow, and the presenter gave an introduction to "Gradient Tape." Now I understand that Gradient Tape tracks the automatic differentiation that occurs in a TF model, but I am not sure why I would use it. What is its purpose?

3 Answers
  •  無奈伤痛
    2020-12-12 14:53

    With eager execution enabled, Tensorflow calculates the values of tensors as they occur in your code, rather than precomputing a static graph whose inputs are fed in through placeholders. To backpropagate errors, you therefore have to keep track of the gradients of your computation yourself and then apply those gradients to an optimiser.
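
    For example, here is a minimal sketch of that workflow, assuming the TF 2.x tf.GradientTape API; the single weight and squared-error loss are invented for illustration:

        import tensorflow as tf

        w = tf.Variable(3.0)  # a single trainable weight

        with tf.GradientTape() as tape:
            # Operations on trainable variables inside the tape are
            # recorded as they execute, so they can be differentiated later.
            loss = (w * 2.0 - 4.0) ** 2

        # Ask the tape for d(loss)/d(w), then hand the gradient to an optimiser.
        grad = tape.gradient(loss, w)
        opt = tf.keras.optimizers.SGD(learning_rate=0.1)
        opt.apply_gradients([(grad, w)])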

    This is very different from running without eager execution, where you would build a graph up front, hand the loss straight to an optimiser, and then simply call sess.run to execute training steps.
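
    In that style, the gradient ops are wired into the graph for you. A rough sketch of the old workflow, assuming the TF1 API (tf.compat.v1 in TF2) with made-up placeholder shapes and variable names:

        import tensorflow as tf

        tf.compat.v1.disable_eager_execution()

        # The graph is declared up front; inputs arrive through a placeholder.
        x = tf.compat.v1.placeholder(tf.float32, shape=[None, 1])
        w = tf.compat.v1.get_variable("w", shape=[1, 1])
        loss = tf.reduce_mean((tf.matmul(x, w) - 1.0) ** 2)

        # minimize() adds the gradient ops to the static graph; no tape needed.
        train_op = tf.compat.v1.train.GradientDescentOptimizer(0.1).minimize(loss)

        with tf.compat.v1.Session() as sess:
            sess.run(tf.compat.v1.global_variables_initializer())
            _, loss_val = sess.run([train_op, loss], feed_dict={x: [[2.0]]})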

    Fundamentally, because tensors are evaluated immediately, you don't have a graph from which to calculate gradients, so you need a gradient tape. It is not so much a visualisation tool; rather, you cannot implement gradient descent in eager mode without it.
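
    To make that concrete, here is a hand-written gradient-descent loop in eager mode; the variable, learning rate, and loss are all made up for the sketch:

        import tensorflow as tf

        w = tf.Variable(5.0)
        lr = 0.1

        for step in range(10):
            with tf.GradientTape() as tape:
                loss = (w - 2.0) ** 2   # minimised at w == 2
            grad = tape.gradient(loss, w)
            w.assign_sub(lr * grad)     # w <- w - lr * dL/dw

    Without the tape recording the forward pass, there would be nothing left to differentiate once the loss tensor has been evaluated.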

    Obviously, Tensorflow could just keep track of every gradient for every computation on every tf.Variable, but that could be a huge performance bottleneck. Instead, it exposes a gradient tape so that you can control which areas of your code need gradient information. Note that in non-eager mode this is statically determined from the computational branches that are descendants of your loss, but in eager mode there is no static graph and so no way of knowing.
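
    The knobs for that control are the tape's watch_accessed_variables argument and its watch() method; a small sketch, assuming the TF 2.x API (persistent=True just lets the tape be queried more than once):

        import tensorflow as tf

        x = tf.constant(3.0)   # plain tensors are not watched by default
        v = tf.Variable(2.0)   # trainable variables normally are

        # Turn off automatic recording so that only explicitly watched
        # tensors contribute to the gradient bookkeeping.
        with tf.GradientTape(watch_accessed_variables=False,
                             persistent=True) as tape:
            tape.watch(x)      # opt in to tracking x only
            y = x * x + v * x  # v is used but never recorded

        print(tape.gradient(y, x))  # dy/dx = 2x + v = 8.0
        print(tape.gradient(y, v))  # None: v was never watched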
