tensor

Deriving the structure of a PyTorch network

橙三吉。 submitted on 2021-02-04 08:03:03

Question: For my use case, I need to be able to take a PyTorch module and interpret the sequence of layers in the module, so that I can create a "connection" between the layers in some file format. Now let's say I have a simple module like the one below:

    class mymodel(nn.Module):
        def __init__(self, input_channels):
            super(mymodel, self).__init__()
            self.fc = nn.Linear(input_channels, input_channels)

        def forward(self, x):
            out = self.fc(x)
            out += x
            return out

    if __name__ == "__main__":
        net = mymodel(5)
        for mod in ...
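A common starting point for this kind of introspection (a minimal sketch, not from the question; the helper name describe is made up) is nn.Module.named_modules(), which walks the registered submodule tree:

    import torch.nn as nn

    def describe(model: nn.Module):
        # named_modules() yields (qualified_name, module) pairs for the model
        # and every registered submodule, in registration order.
        for name, mod in model.named_modules():
            if name:  # the root module itself has the empty string as its name
                print(f"{name}: {mod.__class__.__name__}")

Note that this reflects registration order, not the data flow inside forward(): a residual connection such as out += x is invisible to it, which is why recovering true layer-to-layer connections generally requires tracing the graph (e.g. with torch.jit.trace) rather than just listing submodules.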

torch.optim returns “ValueError: can't optimize a non-leaf Tensor” for multidimensional tensor

梦想的初衷 submitted on 2021-01-29 08:42:16

Question: I am trying to optimize the translations of the vertices of a scene with torch.optim.Adam. It is a code piece from the redner tutorial series, which works fine with the initial settings. It tries to optimize a scene by shifting all the vertices by the same value, called translation. Here is the original code:

    vertices = []
    for obj in base:
        vertices.append(obj.vertices.clone())

    def model(translation):
        for obj, v in zip(base, vertices):
            obj.vertices = v + translation
        # Assemble the 3D scene.
        ...
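For context, PyTorch optimizers only accept leaf tensors: tensors created directly by the user, not produced by an operation. The error typically appears when requires_grad is set before the tensor is moved to another device or otherwise transformed. A minimal sketch of the failure mode and the usual fix (variable names and values are illustrative, not from the tutorial):

    import torch

    # Fails: .cuda() is an operation, so its result is a NON-leaf tensor.
    # translation = torch.tensor([0.0, 0.0, 0.0], requires_grad=True).cuda()
    # torch.optim.Adam([translation])  # ValueError: can't optimize a non-leaf Tensor

    # Works: create the tensor on its final device, then flag it as a leaf.
    translation = torch.zeros(3, device="cuda", requires_grad=True)
    optimizer = torch.optim.Adam([translation], lr=0.5)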

Alternative function for tf.contrib.layers.flatten(x) in TensorFlow

家住魔仙堡 submitted on 2021-01-29 00:12:50

Question: I am using TensorFlow version 0.8.0 on a Jetson TK1 with CUDA 6.5 on a 32-bit ARM architecture. Because of that I can't upgrade the TensorFlow version, and I am having trouble with the flatten function:

    x = tf.placeholder(dtype=tf.float32, shape=[None, 28, 28])
    y = tf.placeholder(dtype=tf.int32, shape=[None])
    images_flat = tf.contrib.layers.flatten(x)

The error I am getting at this point is AttributeError: 'module' object has no attribute 'flatten'. Is there any alternative to this function that may be ...
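Since the static shape here is known to be [None, 28, 28], one drop-in replacement (a sketch under that assumption; tf.reshape has been in TensorFlow since the earliest releases) is a plain reshape that keeps the batch dimension and collapses the rest:

    import tensorflow as tf

    x = tf.placeholder(dtype=tf.float32, shape=[None, 28, 28])

    # Equivalent of tf.contrib.layers.flatten(x) for this shape: -1 lets
    # TensorFlow infer the batch dimension, 28 * 28 collapses the rest.
    images_flat = tf.reshape(x, [-1, 28 * 28])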

Why is the PyTorch Dropout layer affecting all values, not only the ones set to zero?

核能气质少年 submitted on 2021-01-28 18:42:41

Question: The Dropout layer from PyTorch changes the values that are not set to zero. Using the example from PyTorch's documentation (source):

    import torch
    import torch.nn as nn

    m = nn.Dropout(p=0.5)
    input = torch.ones(5, 5)
    print(input)

    tensor([[1., 1., 1., 1., 1.],
            [1., 1., 1., 1., 1.],
            [1., 1., 1., 1., 1.],
            [1., 1., 1., 1., 1.],
            [1., 1., 1., 1., 1.]])

Then I pass it through a dropout layer:

    output = m(input)
    print(output)

    tensor([[0., 0., 2., 2., 0.],
            [2., 0., 2., 0., 0.],
            [0., 0., 0., 0., 2.],
            [2., 2., 2., ...
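The behaviour in the question is the documented "inverted dropout" scheme: during training, surviving elements are scaled by 1 / (1 - p) so the expected activation stays constant, and in eval mode dropout becomes a no-op. A short sketch illustrating both modes:

    import torch
    import torch.nn as nn

    m = nn.Dropout(p=0.5)
    x = torch.ones(5, 5)

    # Training mode: kept values are scaled by 1 / (1 - 0.5) = 2.0,
    # so the expected value of each element is still 1.
    print(m(x))

    # Eval mode: dropout is disabled and values pass through unscaled.
    m.eval()
    print(m(x))  # all ones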

Idris: proof about concatenation of vectors

对着背影说爱祢 submitted on 2021-01-28 05:05:47

Question: Assume I have the following Idris source code:

    module Source

    import Data.Vect

    -- in order to avoid compiler confusion between Prelude.List.(++),
    -- Prelude.String.(++) and Data.Vect.(++)
    infixl 0 +++

    (+++) : Vect n a -> Vect m a -> Vect (n+m) a
    v +++ w = v ++ w

    -- NB: further down in the question I'll assume this definition isn't
    -- needed, because the compiler will have enough context to disambiguate
    -- between these and figure out that Data.Vect.(++) is the "correct"
    -- one to use.

    lemma : reverse ...

What is uninitialized data in the torch.empty function?

笑着哭i submitted on 2021-01-27 05:33:13

Question: I was going through the PyTorch tutorial and came across the torch.empty function. It was mentioned that empty can be used for uninitialized data. But when I printed it, I got a value. What is the difference between this and torch.rand, which also generates data (I know that rand generates values between 0 and 1)? Below is the code I tried:

    a = torch.empty(3, 4)
    print(a)

Output:

    tensor([[ 8.4135e-38,  0.0000e+00,  6.2579e-41,  5.4592e-39],
            [-5.6345e-08,  2.5353e+30,  5.0447e-44,  1.7020e-41],
            [ 1.4000e-38,  5...
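The short version, as a hedged sketch: torch.empty only allocates memory and never writes to it, so the printed values are whatever bytes happened to be at that address, while torch.rand allocates and then fills the storage with uniform samples. empty is useful when every element is about to be overwritten anyway:

    import torch

    a = torch.empty(3, 4)  # allocates storage, writes nothing: contents are
                           # whatever garbage bytes were already in memory
    b = torch.rand(3, 4)   # allocates storage, then fills it with samples
                           # drawn uniformly from [0, 1)

    # Typical use of empty(): a pre-allocated output buffer whose initial
    # contents do not matter because they are fully overwritten.
    out = torch.empty(3, 4)
    torch.add(a, b, out=out)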

Understanding input shape to PyTorch LSTM

心不动则不痛 submitted on 2021-01-19 06:21:32

Question: This seems to be one of the most common questions about LSTMs in PyTorch, but I am still unable to figure out what the input shape to a PyTorch LSTM should be. Even after following several posts (1, 2, 3) and trying out the solutions, it doesn't seem to work. Background: I have encoded text sequences (variable length) in a batch of size 12, and the sequences are padded and packed using the pad_packed_sequence functionality. MAX_LEN for each sequence is 384, and each token (or word) in the sequence ...
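For reference, nn.LSTM expects input of shape (seq_len, batch, input_size) by default, or (batch, seq_len, input_size) with batch_first=True. A minimal sketch using the sizes from the question (batch 12, MAX_LEN 384; the embedding size 768 and hidden size 256 are made-up values for illustration):

    import torch
    import torch.nn as nn
    from torch.nn.utils.rnn import pack_padded_sequence

    batch_size, max_len, emb_dim, hidden = 12, 384, 768, 256

    lstm = nn.LSTM(input_size=emb_dim, hidden_size=hidden, batch_first=True)

    x = torch.randn(batch_size, max_len, emb_dim)  # (batch, seq_len, input_size)
    lengths = torch.randint(1, max_len + 1, (batch_size,))  # true sequence lengths

    # Packing tells the LSTM to skip the padded positions of each sequence.
    packed = pack_padded_sequence(x, lengths, batch_first=True,
                                  enforce_sorted=False)
    output, (h_n, c_n) = lstm(packed)
    # h_n has shape (num_layers * num_directions, batch, hidden_size)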