autodiff

Getting the gradient of a vectorized function in PyTorch

Posted by 久未见 on 2021-02-20 00:46:41
Question: I am brand new to PyTorch and want to do what I assume is a very simple thing, but I am having a lot of difficulty. I have the function sin(x) * cos(x) + x^2 and I want to get the derivative of that function at any point. If I do this with one point it works perfectly:

x = torch.autograd.Variable(torch.Tensor([4]), requires_grad=True)
y = torch.sin(x)*torch.cos(x)+torch.pow(x,2)
y.backward()
print(x.grad) # outputs tensor([7.8545])

However, I want to be able to pass in a vector as x and for it …
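A minimal sketch of the usual fix (mine, not from the thread): backward() needs a scalar, so sum the elementwise outputs; since each y_i depends only on x_i, the gradient of the sum recovers every elementwise derivative at once. Note that torch.autograd.Variable is deprecated; a plain tensor with requires_grad=True suffices.

import torch

# Each y_i depends only on x_i, so d(sum y)/dx_i == dy_i/dx_i.
x = torch.tensor([1.0, 2.0, 3.0, 4.0], requires_grad=True)
y = torch.sin(x) * torch.cos(x) + x.pow(2)
y.sum().backward()  # backward() requires a scalar output
print(x.grad)       # cos(2x) + 2x elementwise; the last entry is 7.8545

Passing torch.ones_like(x) to y.backward() gives the same result; both amount to a vector-Jacobian product with an all-ones vector.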

What types of operations will/will not be added to the computational graph in TensorFlow 2?

Posted by 有些话、适合烂在心里 on 2021-01-05 07:21:33
Question: Based on this post, we can change the tf.Variable:

threshold = 3
a = np.array([1,2,3,4,5,6])
b = tf.Variable(a)
b = tf.where(b >= threshold, 199, b)

Will b = tf.where(b >= 3, 199, b) be added to the computational graph and affect the gradient of b in backpropagation? Or, more generally: what types of operations will/will not be added to the computational graph in TensorFlow 2? Source: https://stackoverflow.com/questions/65449945/what-types-of-operations-will-will-not-plugin-the
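A sketch of how to check this empirically (mine, not from the linked post): in TF2, any tf operation executed under a GradientTape is recorded, tf.where included; rebinding the Python name b merely points it at the new output tensor. The original int64 array must be made float for gradients, and the gradient of tf.where flows only to whichever branch was selected per element:

import numpy as np
import tensorflow as tf

threshold = 3.0
a = np.array([1., 2., 3., 4., 5., 6.], dtype=np.float32)
v = tf.Variable(a)

with tf.GradientTape() as tape:
    b = tf.where(v >= threshold, 199.0, v)  # recorded like any other tf op
    loss = tf.reduce_sum(b ** 2)

# Elements replaced by the constant 199.0 get zero gradient;
# elements that kept v flow through normally.
print(tape.gradient(loss, v).numpy())  # [2. 4. 0. 0. 0. 0.]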

How does tf.gradients manage complex functions?

Posted by 徘徊边缘 on 2020-03-22 03:49:06
Question: I am working with complex-valued neural networks, for which Wirtinger calculus is normally used. The definition of the derivative is then (taking into account that the functions are non-holomorphic because of Liouville's theorem):

∂f/∂z = (1/2) (∂f/∂x − i ∂f/∂y),  ∂f/∂z̄ = (1/2) (∂f/∂x + i ∂f/∂y)

Akira Hirose's book "Complex-Valued Neural Networks: Advances and Applications", Chapter 4, equation 4.9 gives a definition [equation image not reproduced], where the partial derivative is also calculated using Wirtinger calculus, of course. Is this the case for TensorFlow, or is it …
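A small experiment (my sketch, not from the question) that probes TensorFlow's convention: for a real-valued f, tape.gradient returns ∂f/∂x + i ∂f/∂y, which equals 2 ∂f/∂z̄ in Wirtinger terms, i.e. a Wirtinger-style steepest-ascent gradient.

import tensorflow as tf

# f(z) = |z|^2 = z * conj(z): real-valued and non-holomorphic.
z = tf.Variable(tf.complex(3.0, 4.0))

with tf.GradientTape() as tape:
    f = tf.math.real(z * tf.math.conj(z))

# Wirtinger: df/dconj(z) = z, and TF returns 2 * df/dconj(z) = 2z.
print(tape.gradient(f, z))  # (6+8j)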

How can I tell if a tf op has a gradient or not?

Posted by 拜拜、爱过 on 2019-12-30 04:40:08
Question: I am interested in using a SparseTensor in TensorFlow; however, I often get LookupError: No gradient defined for operation ... Apparently gradient computation is not defined for many ops on sparse tensors. Are there any easy ways to check whether an op has a gradient before actually writing and running my code? Answer 1: There is a get_gradient_function function in tensorflow.python.framework.ops. It accepts an op and returns the corresponding gradient function. Example:

import tensorflow as tf
from …
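A sketch completing the truncated answer, with the caveat that tensorflow.python.framework.ops is a private module and this lookup applies to graph-mode ops: get_gradient_function returns the registered gradient function, returns None for ops explicitly marked non-differentiable, and raises LookupError when nothing is registered at all.

import tensorflow as tf
from tensorflow.python.framework import ops

with tf.Graph().as_default():
    x = tf.constant([[1.0, 2.0]])
    y = tf.matmul(x, x, transpose_b=True)  # MatMul has a registered gradient
    s = tf.size(x)                         # Size is marked non-differentiable

    print(ops.get_gradient_function(y.op))  # a gradient function -> differentiable
    print(ops.get_gradient_function(s.op))  # None -> no gradient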

Eigen's AutoDiffJacobian, need some help getting a learning example to work

Posted by 随声附和 on 2019-12-12 22:51:57
Question: I have been using Eigen's AutoDiffScalar with much success and would now like to move on to AutoDiffJacobian instead of doing this myself. So, having studied AutoDiffJacobian.h, I created a learning example, but something is wrong. Functor:

template <typename Scalar> struct adFunctor {
  typedef Eigen::Matrix<Scalar, 3, 1> InputType;
  typedef Eigen::Matrix<Scalar, 2, 1> ValueType;
  typedef Eigen::Matrix<Scalar, ValueType::RowsAtCompileTime, InputType::RowsAtCompileTime> …
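For reference, a minimal C++ sketch (mine, not the thread's resolution; details vary between Eigen versions) of a functor that the unsupported AutoDiffJacobian module can consume. The key points are the InputType/ValueType/JacobianType typedefs and an operator() templated on the scalar, so AutoDiffJacobian can re-run it with AutoDiffScalar:

#include <unsupported/Eigen/AutoDiff>
#include <iostream>

struct adFunctor {
  // AutoDiffJacobian reads these typedefs to size its output.
  typedef Eigen::Matrix<double, 3, 1> InputType;
  typedef Eigen::Matrix<double, 2, 1> ValueType;
  typedef Eigen::Matrix<double, 2, 3> JacobianType;

  // Templated so it works both for double and for Eigen::AutoDiffScalar.
  template <typename T>
  void operator()(const Eigen::Matrix<T, 3, 1>& x,
                  Eigen::Matrix<T, 2, 1>* v) const {
    (*v)(0) = x(0) * x(1) + x(2);
    (*v)(1) = x(0) * x(2);
  }
};

int main() {
  Eigen::AutoDiffJacobian<adFunctor> f;
  adFunctor::InputType x(1.0, 2.0, 3.0);
  adFunctor::ValueType v;
  adFunctor::JacobianType jac;
  f(x, &v, &jac);  // evaluates the functor and fills the 2x3 Jacobian
  std::cout << v << "\n\n" << jac << "\n";
}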

How to alternate train ops in TensorFlow?

Posted by ↘锁芯ラ on 2019-12-05 04:37:17
Question: I am implementing an alternating training scheme. The graph contains two training ops, and training should alternate between them (this is relevant for research like this or this). Below is a small example, but it seems to update both ops at every step. How can I explicitly alternate between them?

from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

# Import data
mnist = input_data.read_data_sets('/tmp/tensorflow/mnist/input_data', one_hot=True)
# Create the …
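A stripped-down sketch of the standard fix (assuming TF1-style graph execution as in the excerpt; the MNIST plumbing is omitted): build one train op per objective with its own var_list, then fetch only one op per sess.run call. Ops that are not in the fetch list are simply not executed.

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

w1 = tf.get_variable("w1", initializer=1.0)
w2 = tf.get_variable("w2", initializer=1.0)
loss1 = tf.square(w1 - 3.0)
loss2 = tf.square(w2 + 2.0)

opt = tf.train.GradientDescentOptimizer(0.1)
train_op1 = opt.minimize(loss1, var_list=[w1])  # updates only w1
train_op2 = opt.minimize(loss2, var_list=[w2])  # updates only w2

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(200):
        # Fetching one train op never runs the other.
        sess.run(train_op1 if step % 2 == 0 else train_op2)
    print(sess.run([w1, w2]))  # converges to approx [3.0, -2.0]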