tensor

pytorch how to remove cuda() from tensor

Submitted by ↘锁芯ラ on 2019-12-05 06:30:43
I got TypeError: expected torch.LongTensor (got torch.cuda.FloatTensor). How do I convert a torch.cuda.FloatTensor to a torch.LongTensor?

Traceback (most recent call last):
  File "train_v2.py", line 110, in <module>
    main()
  File "train_v2.py", line 81, in main
    model.update(batch)
  File "/home/Desktop/squad_vteam/src/model.py", line 131, in update
    loss_adv = self.adversarial_loss(batch, loss, self.network.lexicon_encoder.embedding.weight, y)
  File "/home/Desktop/squad_vteam/src/model.py", line 94, in adversarial_loss
    adv_embedding = torch.LongTensor(adv_embedding)
TypeError: expected torch.LongTensor
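A minimal sketch of the usual fix, not taken from the asker's code: cast and move the tensor instead of calling the torch.LongTensor constructor on a CUDA tensor (adv_embedding below is a stand-in):

import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
adv_embedding = torch.randn(4, 8, device=device)   # stand-in for the real adv_embedding

# .long() casts to torch.int64, .cpu() moves it off the GPU;
# detach() drops it from the autograd graph before the conversion.
adv_embedding_long = adv_embedding.detach().cpu().long()
print(adv_embedding_long.type())                   # torch.LongTensor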

How to cast a 1-d IntTensor to int in Pytorch

Submitted by 爷,独闯天下 on 2019-12-05 04:42:43
I get a 1-D IntTensor, but I want to convert it to an integer. I tried:

print(dictionary[IntTensor.int()])

but got an error:

KeyError: Variable containing: 423 [torch.IntTensor of size 1]

Thanks~

You can use:

print(dictionary[IntTensor.data[0]])

The key you're using is an object of type autograd.Variable. .data gives the tensor, and the index 0 can be used to access the element.

The simplest and cleanest method I know:

IntTensor.item()

From the PyTorch docs: "Returns the value of this tensor as a standard Python number. This only works for tensors with one element. For other cases,
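A short self-contained illustration of the .item() answer (the dictionary and key are made up):

import torch

dictionary = {423: 'some value'}
key = torch.IntTensor([423])    # 1-element tensor

print(dictionary[key.item()])   # .item() returns a plain Python int, so the lookup works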

Indexing a multi-dimensional tensor with a tensor in PyTorch

Submitted by 旧时模样 on 2019-12-05 02:57:15
I have the following code:

a = torch.randint(0,10,[3,3,3,3])
b = torch.LongTensor([1,1,1,1])

I have a multi-dimensional index b and want to use it to select a single cell in a. If b wasn't a tensor, I could do:

a[1,1,1,1]

which returns the correct cell, but:

a[b]

doesn't work, because it just selects a[1] four times. How can I do this? Thanks

A more elegant (and simpler) solution might be to simply cast b as a tuple:

a[tuple(b)]
Out[10]: tensor(5.)

I was curious to see how this works with "regular" numpy, and found a related article explaining this quite well here. You can split b into 4
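A minimal runnable version of the tuple-cast answer (the printed values will differ because the tensor is random):

import torch

a = torch.randint(0, 10, [3, 3, 3, 3])
b = torch.LongTensor([1, 1, 1, 1])

print(a[1, 1, 1, 1])   # scalar tensor at that cell
print(a[tuple(b)])     # same cell, selected through the index tensor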

PyTorch fancy Tensor operations

Submitted by 元气小坏坏 on 2019-12-05 02:34:09
I. Tensor dimension operations

1. squeeze & unsqueeze

x = torch.rand(5,1,2,1)
x = torch.squeeze(x)        # remove the size-1 dimensions, x.shape = (5,2)
x = torch.unsqueeze(x, 2)   # the inverse of squeeze: insert a size-1 third dimension, x.shape = (5,2,1)

2. Tensor expansion: expand enlarges the original tensor to the given size along the specified dimensions. For example, if x is 3×1 and size is [3, 4], it can be expanded to 3×4, where the 4 entries per row are copies of the original single element.

x = x.expand(*size)

3. Transposition: torch.transpose can only swap two dimensions; permute has no such restriction.

x = torch.transpose(x, 1, 2)  # swap dimensions 1 and 2
x = x.permute(1, 2, 3, 0)     # reorder the dimensions arbitrarily

4. Changing shape: view & reshape. They do the same job; the difference is that when merging dimensions, view cannot do it and raises an error if the tensor is not stored contiguously in memory, while reshape still works.

x = x.view(1, 2, -1)     # lays the tensor's data out in row-major order (this is why contiguous storage is required), then regroups it into the requested dimensions
x = x.reshape(1, 2, -1)

5. Tensor concatenation: cat & stack

torch.cat(a
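A short runnable recap of the operations above; the shapes in the comments are what PyTorch actually produces:

import torch

x = torch.rand(5, 1, 2, 1)
x = torch.squeeze(x)             # drop all size-1 dims -> (5, 2)
x = torch.unsqueeze(x, 2)        # insert a size-1 dim at position 2 -> (5, 2, 1)

y = torch.rand(3, 1)
y = y.expand(3, 4)               # replicate the size-1 dim -> (3, 4)

z = torch.rand(2, 3, 4, 5)
z_t = torch.transpose(z, 1, 2)   # swap dims 1 and 2 -> (2, 4, 3, 5)
z_p = z.permute(1, 2, 3, 0)      # arbitrary reordering -> (3, 4, 5, 2)

w = torch.rand(2, 3, 4)
print(w.view(1, 2, -1).shape)       # (1, 2, 12); view needs contiguous memory
print(z_t.reshape(1, 2, -1).shape)  # (1, 2, 60); reshape copies if the layout is not contiguous

a = torch.rand(2, 3)
b = torch.rand(2, 3)
print(torch.cat([a, b], dim=0).shape)    # (4, 3): joins along an existing dim
print(torch.stack([a, b], dim=0).shape)  # (2, 2, 3): adds a new dim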

TensorFlow function: tf.image.crop_and_resize

Submitted by 两盒软妹~` on 2019-12-05 02:31:39
tf.image.crop_and_resize(
  image,
  boxes,
  box_ind,
  crop_size,
  method='bilinear',
  extrapolation_value=0,
  name=None
)

Extracts crops from the input image tensor and bilinearly resizes them (the aspect ratio may change) to a common output size specified by crop_size. This is more general than the crop_to_bounding_box op, which extracts a fixed-size slice from the input image and allows no resizing or aspect-ratio change.

It returns a tensor of crops taken from the input image at the bounding-box positions defined in boxes (the second argument). The cropped boxes are all resized to a fixed size = [crop_height, crop_width]. The result is a 4-D tensor [num_boxes, crop_height, crop_width, depth]. The resizing is corner aligned: if boxes = [[0,0,1,1]], the method gives the same result as tf.image.resize_bilinear() with align_corners=True.

Arguments:
image: a Tensor, a 4-D tensor of shape [batch, image_height, image_width, depth]; image_height and image_width must be positive.
boxes:
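A minimal sketch of a call, using the TF 1.x signature documented above; the image and boxes are made-up values:

import tensorflow as tf

image = tf.random_uniform([2, 100, 100, 3])     # [batch, image_height, image_width, depth]
boxes = tf.constant([[0.0, 0.0, 0.5, 0.5],
                     [0.2, 0.2, 0.9, 0.9]])     # normalized [y1, x1, y2, x2] per box
box_ind = tf.constant([0, 1])                   # which batch element each box comes from
crops = tf.image.crop_and_resize(image, boxes, box_ind, crop_size=[32, 32])

with tf.Session() as sess:
    print(sess.run(crops).shape)                # (2, 32, 32, 3) = [num_boxes, crop_height, crop_width, depth]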

1. Tensor data types

Submitted by 亡梦爱人 on 2019-12-05 01:47:44
1. Data containers
 ① list: a list can hold mixed data types, e.g. [1, 1.2, 'hello', (1,2)]
 ② np.array: numpy arrays are mainly for computing on data of a single type; they support neither automatic differentiation nor GPU computation
 ③ tf.Tensor:
   ▪ scalar: 1.1
   ▪ vector: [1.1], [1.1,2.2,…]
   ▪ matrix: [[1.1,2.2],[3.3,4.4],[5.5,6.6]]
   ▪ tensor: rank > 2

2. TF is a computation library, very close to numpy, with the following main data types:
 (1) int, float, double
 (2) bool
 (3) string

3. Checking a tensor's type:
 (1) isinstance
 (2) is_tensor
 (3) dtype

4. Data type conversion:
 (1) convert_to_tensor
 (2) cast: converting between bool and int; converting a tensor to numpy

5. tf.Variable: defines an optimizable parameter, i.e. a variable.

Source: https://www.cnblogs.com/pengzhonglian/p/11895590.html
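A small illustration of points 3 to 5 above, assuming TF 2.x eager mode; the example values are arbitrary:

import numpy as np
import tensorflow as tf

a = tf.constant(1.1)
print(isinstance(a, tf.Tensor))        # True
print(tf.is_tensor(a))                 # True
print(a.dtype)                         # <dtype: 'float32'>

b = tf.convert_to_tensor(np.ones(3))   # numpy array -> tensor
c = tf.cast(b, tf.int32)               # dtype conversion
flags = tf.cast(c, tf.bool)            # int -> bool (nonzero becomes True)
print(flags.numpy())                   # tensor -> numpy array

v = tf.Variable(1.0)                   # an optimizable (trainable) parameter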

AttributeError: 'module' object has no attribute '_rebuild_tensor_v2'

Submitted by 非 Y 不嫁゛ on 2019-12-04 22:00:21
PyTorch error: AttributeError: 'module' object has no attribute '_rebuild_tensor_v2'

Cause: the model was trained (saved) with a newer version of PyTorch but is being loaded with an older version.

Fix: add the following code at the top of the file:

import torch._utils

try:
    torch._utils._rebuild_tensor_v2
except AttributeError:
    def _rebuild_tensor_v2(storage, storage_offset, size, stride, requires_grad, backward_hooks):
        tensor = torch._utils._rebuild_tensor(storage, storage_offset, size, stride)
        tensor.requires_grad = requires_grad
        tensor._backward_hooks = backward_hooks
        return tensor
    torch._utils._rebuild_tensor_v2 = _rebuild_tensor_v2

Source: CSDN, author: summer2day

Fixing AttributeError: 'Tensor' object has no attribute 'assign'

Submitted by ☆樱花仙子☆ on 2019-12-04 21:58:56
Problem description:

a = tf.ones(shape=[1,2])
tf.assign(a, -1)

raises:

AttributeError: 'Tensor' object has no attribute 'assign'

The reason is that a constant Tensor cannot be assigned to; only a Variable has an assign method.

Workaround: suppose I have a 3-D tensor ee and I want to change the values of one column along its second dimension. It can be done like this:

bb = tf.ones_like(ee[:, index:index+1, :]) * some_value
left = ee[:, :index, :]
right = ee[:, index+1:, :]
new_ee = tf.concat(values=[left, bb, right], axis=1)

Here index is the index of the column you want to replace, some_value is the value to fill in, and left and right are the parts kept on either side; concatenating them gives the result. The same idea applies if you want to replace a single value rather than a whole column: locate its index and splice the replacement in the same way, so no separate example is given here.

Source: CSDN
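For completeness, a minimal sketch of the Variable route (TF 1.x graph mode, as in the post); assign only works on a tf.Variable, not on a constant Tensor:

import tensorflow as tf

a = tf.Variable(tf.ones(shape=[1, 2]))
assign_op = tf.assign(a, -tf.ones(shape=[1, 2]))   # OK: a is a Variable

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(assign_op)
    print(sess.run(a))                             # [[-1. -1.]]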

Keras -- AttributeError: 'Tensor' object has no attribute '_keras_history'

Submitted by 旧城冷巷雨未停 on 2019-12-04 21:58:45
While modifying a network in Keras I hit AttributeError: 'Tensor' object has no attribute '_keras_history'.

The cause is indexing the input tensor directly. The offending code:

x = img_input[:,:,:,0:3]
x_art = img_input[:,:,:,3:6]
x_nc = img_input[:,:,:,6:9]

This raises the error above. Fix: wrap the slicing in Lambda layers:

x = Lambda(lambda img_input: img_input[:,:,:,0:3])(img_input)
x_art = Lambda(lambda img_input: img_input[:,:,:,3:6])(img_input)
x_nc = Lambda(lambda img_input: img_input[:,:,:,6:9])(img_input)

Done!

The error comes from mixing Keras and TensorFlow: a Tensor defined in Keras is not the same type as a raw TensorFlow Tensor.

Reference strategies:

Strategy 1: convert a TensorFlow tensor to a Keras tensor

1. Converting an indexing operation:

# before
x = self.x[:, :, :, :]
# after
x = Lambda(lambda x: x[:, :, :, :])(self.x)

2
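A self-contained sketch of the fix wired into a model, assuming standalone Keras 2 as in the post; the input shape and the layers after the slices are hypothetical:

from keras.layers import Input, Lambda, Concatenate, Conv2D
from keras.models import Model

img_input = Input(shape=(64, 64, 9))

# Each Lambda output is a proper Keras tensor, so it carries _keras_history.
x = Lambda(lambda t: t[:, :, :, 0:3])(img_input)
x_art = Lambda(lambda t: t[:, :, :, 3:6])(img_input)
x_nc = Lambda(lambda t: t[:, :, :, 6:9])(img_input)

merged = Concatenate(axis=-1)([x, x_art, x_nc])
out = Conv2D(1, (3, 3), padding='same')(merged)

model = Model(inputs=img_input, outputs=out)
model.summary()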

[Translation] Stanford CS 20SI: TensorFlow for Deep Learning Research course notes, Lecture note 2: TensorFlow Ops

Submitted by 天涯浪子 on 2019-12-04 19:24:19
"CS 20SI: TensorFlow for Deep Learning Research"
Prepared by Chip Huyen, reviewed by Danijar Hafner
Lecture note 2: TensorFlow Ops

This is a personal translation and some parts are abridged, so reading the original note alongside it is recommended.

1. Using TensorBoard

Example code:

import tensorflow as tf

a = tf.constant(2)
b = tf.constant(3)
x = tf.add(a, b)

with tf.Session() as sess:
    print sess.run(x)

Before training, once the graph has been built, run the following to activate TensorBoard:

writer = tf.summary.FileWriter(logs_dir, sess.graph)

This stores your events in logs_dir. Then, in a terminal, run

tensorboard --logdir=logs_dir

to open TensorBoard. a, b, x are the variable names used in the code; in TensorBoard, nodes are named with name=#, for example:

a = tf.constant([2, 2], name="a")
b = tf.constant([3,
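A runnable version of the snippet above (TF 1.x); logs_dir is a placeholder path, and the values for b are filled in arbitrarily because the excerpt is cut off:

import tensorflow as tf

a = tf.constant([2, 2], name="a")
b = tf.constant([3, 6], name="b")
x = tf.add(a, b, name="add")

logs_dir = "./graphs"                                # placeholder output directory
with tf.Session() as sess:
    writer = tf.summary.FileWriter(logs_dir, sess.graph)
    print(sess.run(x))                               # [5 8]
writer.close()

# then, in a terminal:
#   tensorboard --logdir=./graphs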