tensor

Create a d-dimensional tensor dynamically

心不动则不痛 Submitted on 2019-12-10 19:20:17
Question: I would like to create a d-dimensional tensor, using d as an input, without an if statement like the one below:

```matlab
if d == 2
    B = zeros(r,r);
    for i = 1:r
        B(i,i) = 1;
    end
elseif d == 3
    B = zeros(r,r,r);
    for i = 1:r
        B(i,i,i) = 1;
    end
end
```

Is there a more efficient way?

Answer 1: You can use accumarray:

```matlab
f = @(d,r) accumarray(repmat((1:r).', 1, d), 1);
```

```matlab
>> f(2,5)

ans =

     1     0     0     0     0
     0     1     0     0     0
     0     0     1     0     0
     0     0     0     1     0
     0     0     0     0     1
```

Here is the basic signature of accumarray:

```matlab
accumarray(subs, val)
```

Using accumarray we can create an n…
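The answer breaks off above. For comparison, here is a NumPy analogue of the same idea; an illustrative sketch, not part of the original answer (the helper name diag_tensor is made up):

```python
import numpy as np

def diag_tensor(d, r):
    # d-dimensional r x r x ... x r array with ones on the main diagonal
    B = np.zeros((r,) * d)
    idx = np.arange(r)
    B[(idx,) * d] = 1  # fancy indexing sets B[i, i, ..., i] = 1
    return B

print(diag_tensor(2, 5))  # prints the 5 x 5 identity matrix
```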

[tensorflow] Processing images with convolution

筅森魡賤 Submitted on 2019-12-10 18:46:35
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf

image = plt.imread('./caise.jpg')
plt.imshow(image)
image.shape  # (388, 690, 3)

data = image.reshape(1, 388, 690, 3).astype(np.float32)

# a filter / kernel tensor of shape
# [filter_height, filter_width, in_channels, out_channels]
filter_ = np.array([[1/27] * 81]).reshape(3, 3, 3, 3)

conv = tf.nn.conv2d(data, filter_, [1, 1, 1, 1], 'SAME')
with tf.Session() as sess:
    image = sess.run(conv)
```
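The snippet breaks off after sess.run. A plausible continuation, assumed rather than taken from the original post, drops the batch dimension and displays the blurred result:

```python
# continuing from the variables above: remove the leading batch dimension
# and cast back to uint8 so plt.imshow treats it as an ordinary image
blurred = np.squeeze(image).astype(np.uint8)
plt.imshow(blurred)
```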

Resize PyTorch Tensor

天大地大妈咪最大 Submitted on 2019-12-10 14:27:06
Question: I am currently using the tensor.resize() function to resize a tensor to a new shape, t = t.resize(1, 2, 3). This gives me a deprecation warning:

non-inplace resize is deprecated

Hence, I wanted to switch over to the tensor.resize_() function, which seems to be the appropriate in-place replacement. However, this leaves me with a cannot resize variables that require grad error. I can fall back to

```python
from torch.autograd._functions import Resize
Resize.apply(t, (1, 2, 3))
```

which is what tensor…
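The question is truncated above. As a minimal sketch of the usual ways around the error, assuming the goal is simply a tensor with the new shape (this is not the original answer): reshape stays autograd-friendly when the element count is unchanged, while resizing the .data attribute sidesteps the check entirely:

```python
import torch

t = torch.zeros(6, requires_grad=True)

# same number of elements: reshape (or view) keeps gradient tracking intact
u = t.reshape(1, 2, 3)

# resizing the underlying .data avoids the "cannot resize variables that
# require grad" error, but it also bypasses autograd, so use it with care
t.data.resize_(1, 2, 3)
```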

Loop over a tensor and apply function to each element

南笙酒味 Submitted on 2019-12-10 10:12:32
Question: I want to loop over a tensor that contains a list of int and apply a function to each element. In the function, every element gets its value from a Python dict. I have tried the easy way with tf.map_fn, which works for an add function, as in the following code:

```python
import tensorflow as tf

def trans_1(x):
    return x + 10

a = tf.constant([1, 2, 3])
b = tf.map_fn(trans_1, a)
with tf.Session() as sess:
    res = sess.run(b)
    print(str(res))
# output: [11 12 13]
```

But the following code throws…
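The question is cut off above, but the usual stumbling block is that the function passed to tf.map_fn receives symbolic tensors, so a plain Python dict lookup inside it fails. A minimal sketch of the common tf.py_func workaround, with a made-up mapping dict:

```python
import numpy as np
import tensorflow as tf

mapping = {1: 10, 2: 20, 3: 30}  # hypothetical lookup table

def trans_2(x):
    # runs as ordinary Python on a concrete array, so dict lookups work
    return np.array([mapping[int(v)] for v in x], dtype=np.int64)

a = tf.constant([1, 2, 3], dtype=tf.int64)
b = tf.py_func(trans_2, [a], tf.int64)

with tf.Session() as sess:
    print(sess.run(b))  # [10 20 30]
```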

adding data to decoder in autoencoder during learning

谁说胖子不能爱 Submitted on 2019-12-10 10:06:20
Question: I want to implement an autoencoder using Keras. The structure is a large network in which some operations are performed on the output of the autoencoder, and two losses should then be considered. I attached an image that shows my proposed structure; the link is below.

autoencoder structure

w has the same size as the input image, and since I do not use max pooling in this autoencoder, the output of each phase has the same size as the input image. I want to send w and the latent space representation to the decoder part…
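A minimal sketch of the kind of functional-API wiring this calls for; the shapes, layer sizes, and loss choices below are all assumptions, since the question is truncated. The decoder receives both the encoder output and the extra input w, and the model is compiled with two weighted losses:

```python
from tensorflow import keras
from tensorflow.keras import layers

inp = keras.Input(shape=(64, 64, 1), name="image")
w = keras.Input(shape=(64, 64, 1), name="w")

# encoder: no pooling, so the spatial size is preserved throughout
z = layers.Conv2D(8, 3, padding="same", activation="relu")(inp)

# the decoder sees the latent maps and w together
merged = layers.Concatenate()([z, w])
recon = layers.Conv2D(1, 3, padding="same", name="recon")(merged)

# a second head standing in for "operations on the autoencoder output"
aux = layers.Conv2D(1, 3, padding="same", name="aux")(recon)

model = keras.Model([inp, w], [recon, aux])
model.compile(optimizer="adam",
              loss={"recon": "mse", "aux": "mse"},
              loss_weights={"recon": 1.0, "aux": 0.5})
```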

pytorch how to remove cuda() from tensor

£可爱£侵袭症+ Submitted on 2019-12-10 03:52:26
Question: I got TypeError: expected torch.LongTensor (got torch.cuda.FloatTensor). How do I convert torch.cuda.FloatTensor to torch.LongTensor?

```
Traceback (most recent call last):
  File "train_v2.py", line 110, in <module>
    main()
  File "train_v2.py", line 81, in main
    model.update(batch)
  File "/home/Desktop/squad_vteam/src/model.py", line 131, in update
    loss_adv = self.adversarial_loss(batch, loss, self.network.lexicon_encoder.embedding.weight, y)
  File "/home/Desktop/squad_vteam/src/model.py", line 94,
```
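The traceback is truncated above, but the conversion itself is a one-liner: .to() changes device and dtype in a single call. A minimal sketch, assuming the tensor should end up on the CPU:

```python
import torch

t = torch.randn(3, device='cuda' if torch.cuda.is_available() else 'cpu')

# change device and dtype together ...
t_long = t.to(device='cpu', dtype=torch.long)
# ... or chain the older-style methods
t_long2 = t.cpu().long()
```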

Indexing a multi-dimensional tensor with a tensor in PyTorch

梦想与她 Submitted on 2019-12-10 03:38:02
Question: I have the following code:

```python
a = torch.randint(0, 10, [3, 3, 3, 3])
b = torch.LongTensor([1, 1, 1, 1])
```

I have a multi-dimensional index b and want to use it to select a single cell in a. If b wasn't a tensor, I could do:

a[1, 1, 1, 1]

which returns the correct cell, but:

a[b]

doesn't work, because it just selects a[1] four times. How can I do this? Thanks

Answer 1: A more elegant (and simpler) solution might be to simply cast b as a tuple:

```python
a[tuple(b)]
# Out[10]: tensor(5.)
```

I was curious to see how this works…
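The answer is cut short, but the mechanism is simple: tuple(b) turns the 1-D tensor into a tuple of four scalar tensors, which PyTorch indexes exactly like a[1, 1, 1, 1]. A quick check, not from the original answer:

```python
import torch

a = torch.randint(0, 10, [3, 3, 3, 3])
b = torch.LongTensor([1, 1, 1, 1])

# tuple(b) == (tensor(1), tensor(1), tensor(1), tensor(1)), giving one
# index per dimension instead of four lookups along the first dimension
assert a[tuple(b)] == a[1, 1, 1, 1]
```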

Computing IoU with PyTorch

故事扮演 Submitted on 2019-12-10 02:46:51
IoU is a basic building block of object detection. I came across someone else's code for it that seemed quite efficient, so I am noting it down here.

torch.Tensor.expand

torch.Tensor.expand(*sizes) → Tensor

This is a PyTorch function. sizes is the shape you want after expansion; any dimension of the original tensor whose size is 1 can be expanded to an arbitrary value, and the operation does not allocate new memory.

Example:

```python
>>> x = torch.tensor([[1], [2], [3]])
>>> x.size()
torch.Size([3, 1])
>>> x.expand(3, 4)
tensor([[1, 1, 1, 1],
        [2, 2, 2, 2],
        [3, 3, 3, 3]])
>>> x.expand(-1, 4)  # -1 means not changing the size of that dimension
tensor([[1, 1, 1, 1],
        [2, 2, 2, 2],
        [3, 3, 3, 3]])
```

torch.unsqueeze()

torch.unsqueeze(input, dim, out=None) → Tensor
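The excerpt ends before the IoU code itself. Below is a minimal sketch of how unsqueeze and expand are typically combined for pairwise IoU; the (x1, y1, x2, y2) box format and the function name are assumptions, not taken from the original post:

```python
import torch

def pairwise_iou(boxes_a, boxes_b):
    # boxes_a: [N, 4], boxes_b: [M, 4], both in (x1, y1, x2, y2) format
    N, M = boxes_a.size(0), boxes_b.size(0)
    # broadcast both sets to [N, M, 4]; expand allocates no new memory
    a = boxes_a.unsqueeze(1).expand(N, M, 4)
    b = boxes_b.unsqueeze(0).expand(N, M, 4)
    lt = torch.max(a[..., :2], b[..., :2])  # intersection top-left
    rb = torch.min(a[..., 2:], b[..., 2:])  # intersection bottom-right
    wh = (rb - lt).clamp(min=0)             # zero out empty intersections
    inter = wh[..., 0] * wh[..., 1]
    area_a = (a[..., 2] - a[..., 0]) * (a[..., 3] - a[..., 1])
    area_b = (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area_a + area_b - inter)
```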

Using tf.split

烈酒焚心 Submitted on 2019-12-09 22:05:39
```python
import tensorflow as tf

tensor = [[[1, 2, 3],
           [4, 5, 6],
           [7, 8, 9]]]
with tf.Session() as sess:
    # split the tensor along axis 2 (the last axis) into slices of width 1 and 2
    tensor1, tensor2 = tf.split(tensor, [1, 2], 2)
    print(tensor2.eval())
```

Output:

```
[[[2 3]
  [5 6]
  [8 9]]]
```

Source: CSDN. Author: 不喝牛奶的里昂. Link: https://blog.csdn.net/qq_34895059/article/details/103465782
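As an aside not covered in the original post, passing a plain integer as the second argument makes tf.split divide the axis into that many equal slices:

```python
import tensorflow as tf

tensor = [[[1, 2, 3],
           [4, 5, 6],
           [7, 8, 9]]]
# an integer second argument splits axis 2 into three equal width-1 slices
p1, p2, p3 = tf.split(tensor, 3, 2)
with tf.Session() as sess:
    print(p1.eval())  # the first slice, shape (1, 3, 1)
```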

Pytorch: Why is the memory occupied by the `tensor` variable so small?

谁都会走 Submitted on 2019-12-09 18:58:40
Question: In Pytorch 1.0.0, I found that a tensor variable occupies very little memory. I wonder how it stores so much data. Here's the code:

```python
import sys
import numpy as np
import torch

a = np.random.randn(1, 1, 128, 256)
b = torch.tensor(a, device=torch.device('cpu'))

a_size = sys.getsizeof(a)
b_size = sys.getsizeof(b)
```

a_size is 262288. b_size is 72.

Answer 1: The answer is in two parts. From the documentation of sys.getsizeof, firstly:

All built-in objects will return correct results, but this does not have to hold true for third-party extensions as it is implementation specific.
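The excerpt is cut off, but the practical point is that sys.getsizeof sees only the small Python wrapper object, not the tensor's data buffer. The buffer can be measured directly; a quick check, not from the original answer:

```python
import numpy as np
import torch

b = torch.tensor(np.random.randn(1, 1, 128, 256))
# elements times bytes per element gives the real buffer size
# (8 bytes each, since np.random.randn yields float64)
print(b.element_size() * b.nelement())  # 262144 == 1 * 1 * 128 * 256 * 8
```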