Setting up variables for an optimizer in TensorFlow eager execution

Anonymous (unverified), submitted 2019-12-03 01:25:01

Question:

x = tfe.Variable(np.random.uniform(size=[166,]), name='x')

optimizer = tf.train.AdamOptimizer()
optimizer.minimize(lambda: compute_cost(normed_data[:10], x))

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-28-9ff2a070e305> in <module>()
     23
     24 optimizer = tf.train.AdamOptimizer()
---> 25 optimizer.minimize(lambda: compute_cost(normed_data[:10], x))

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/optimizer.py in minimize(self, loss, global_step, var_list, gate_gradients, aggregation_method, colocate_gradients_with_ops, name, grad_loss)
    398         aggregation_method=aggregation_method,
    399         colocate_gradients_with_ops=colocate_gradients_with_ops,
--> 400         grad_loss=grad_loss)
    401
    402     vars_with_grad = [v for g, v in grads_and_vars if g is not None]

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/optimizer.py in compute_gradients(self, loss, var_list, gate_gradients, aggregation_method, colocate_gradients_with_ops, grad_loss)
    471       if var_list is None:
    472         var_list = tape.watched_variables()
--> 473       grads = tape.gradient(loss_value, var_list, grad_loss)
    474       return list(zip(grads, var_list))
    475

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/eager/backprop.py in gradient(self, target, sources, output_gradients)
    856     flat_grad = imperative_grad.imperative_grad(
    857         _default_vspace, self._tape, nest.flatten(target), flat_sources,
--> 858         output_gradients=output_gradients)
    859
    860     if not self._persistent:

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/eager/imperative_grad.py in imperative_grad(vspace, tape, target, sources, output_gradients)
     61   """
     62   return pywrap_tensorflow.TFE_Py_TapeGradient(
---> 63       tape._tape, vspace, target, sources, output_gradients)  # pylint: disable=protected-access

AttributeError: 'numpy.ndarray' object has no attribute '_id'

Can someone explain why I'm getting this error? x is the only stateful variable/weight I have for my model/loss function here (which is the MLE of a joint pdf). compute_cost works fine in its own unit test.

Answer 1:

My guess is that your compute_cost function uses NumPy operations instead of TensorFlow operations, and TensorFlow cannot differentiate through those.

For example, consider the following:

import tensorflow as tf
import numpy as np

tf.enable_eager_execution()

v = tf.contrib.eager.Variable(2.0)

# x * v^2
def f(x):
  return np.multiply(x, np.multiply(v, v))

with tf.GradientTape() as tape:
  y = f(10.0)

# This next line will raise an error similar to what you observed.
print(tape.gradient(y, v))

# However, replacing the `np` operations with their equivalent
# `tf` operations will allow things to complete.
def f(x):
  return tf.multiply(x, tf.multiply(v, v))

with tf.GradientTape() as tape:
  y = f(10.0)

print(tape.gradient(y, v))  # Correctly prints 40.0
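One quick way to locate the offending operation (a debugging sketch, not part of the original answer) is to print the types of intermediate values: TensorFlow ops return EagerTensor objects, while anything that has passed through NumPy comes back as a plain numpy.ndarray, and the tape loses track of it from that point on.

import tensorflow as tf
import numpy as np

tf.enable_eager_execution()

v = tf.contrib.eager.Variable(2.0)

a = tf.multiply(v, v)  # TensorFlow op: stays on the gradient tape
b = np.multiply(v, v)  # NumPy op: silently converts the result to an ndarray

print(type(a))  # an EagerTensor -- still differentiable
print(type(b))  # <class 'numpy.ndarray'> -- gradients stop here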

So it's most likely something similar: somewhere in your compute_cost function, a NumPy operation converts the tensors to plain arrays and breaks the gradient tape.
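The fix is to rewrite compute_cost so that every step from normed_data and x to the scalar cost uses tf ops. Here is a minimal sketch of the whole setup with a stand-in sum-of-squares cost (the actual joint pdf from the question isn't shown, so this cost function is purely illustrative):

import numpy as np
import tensorflow as tf

tf.enable_eager_execution()
tfe = tf.contrib.eager

# Illustrative stand-in for the question's compute_cost: built only
# from tf ops, so gradients can flow back to x.
def compute_cost(data, x):
  return tf.reduce_sum(tf.square(data - x))

normed_data = tf.constant(np.random.uniform(size=[100, 166]))
x = tfe.Variable(np.random.uniform(size=[166]), name='x')

optimizer = tf.train.AdamOptimizer()
for _ in range(100):
  optimizer.minimize(lambda: compute_cost(normed_data[:10], x))

print(compute_cost(normed_data[:10], x))  # cost decreases across steps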

That said, this error message certainly has room for improvement, so you should consider filing a bug asking for a clearer message.

Hope that helps.


