tensor

torch Notes 1

人走茶凉 submitted on 2020-01-14 19:43:47
tensor

1. The difference between torch.Tensor(…) and torch.Tensor([…]): the former's arguments give the tensor's dimensions, while the latter's arguments are the data the tensor will hold.
2. The difference between size and shape: shape is an attribute, while size is a function; both describe the same tensor.
3. A trailing _ on a method name means the operation is applied to the tensor in place (e.g. X.t_() transposes X in place).
4. The torch.tensor class: torch.tensor(data, dtype=None, device=None, requires_grad=False)
data: can be a list, tuple, numpy array, scalar, or another type
dtype: selects the desired tensor type
device: selects the device of the returned tensor, "cuda" or "cpu"
requires_grad: selects whether operations on the tensor are recorded in the graph; defaults to False
5. The difference between rand() and randn(): rand() fills the tensor with samples from the uniform distribution on [0, 1), while randn() fills it with samples from the standard normal distribution.

Source: CSDN  Author: 二八  Link: https://blog.csdn.net/qq_34977841
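A short sketch (mine, not from the note) illustrating points 1, 3 and 5; the shapes in the comments are what torch actually reports:

    import torch

    a = torch.Tensor(2, 3)    # arguments are dimensions: an uninitialized 2x3 tensor
    b = torch.Tensor([2, 3])  # arguments are data: tensor([2., 3.])
    print(a.shape, b.shape)   # torch.Size([2, 3]) torch.Size([2])

    a.t_()                    # trailing underscore: transpose a in place
    print(a.shape)            # torch.Size([3, 2])

    u = torch.rand(2, 2)      # uniform samples in [0, 1)
    n = torch.randn(2, 2)     # standard normal samples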

[Repost] Defining tensors on CUDA in PyTorch, and how to reduce CPU work

安稳与你 submitted on 2020-01-14 12:57:45
[Repost] Defining tensors on CUDA in PyTorch, and how to reduce CPU work

Source: https://blog.csdn.net/u013548568/article/details/84350638

Defining a tensor on CUDA:

    a = torch.ones(1000, 1000, 3).cuda()

Defining it on a specific GPU:

    cuda1 = torch.device('cuda:1')  # this statement selects which GPU to use
    b = torch.randn((1000, 1000, 1000), device=cuda1)

Deleting a variable:

    del a

Defining a tensor on the CPU and then moving it to the GPU:

    torch.zeros(1000, 1000, 3).cuda()

Defining it directly on the GPU instead, which cuts out the CPU work:

    torch.cuda.FloatTensor(batch_size, self.hidden_dim, self.height, self.width).fill_(0)

Source: https://www.cnblogs.com/jiading/p/12191353.html
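A minimal sketch (mine; it assumes a CUDA-capable machine and falls back to CPU otherwise) contrasting the two allocation styles; the device= argument is the modern equivalent of torch.cuda.FloatTensor:

    import torch

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    # allocate on the CPU first, then copy over (extra CPU work plus a transfer)
    x = torch.zeros(1000, 1000).to(device)

    # allocate directly on the target device (no CPU-side buffer at all)
    y = torch.zeros(1000, 1000, device=device)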

How to create a Keras Custom Layer using functions not included in the Backend, to perform tensor sampling?

試著忘記壹切 submitted on 2020-01-14 06:17:20
Question: I'm trying to create a custom layer in Keras. This layer should perform sampling over the input tensor (according to a probability distribution) and output a tensor of the same size, keeping only the values that have been sampled and setting the rest to zero. However, to my knowledge no sampling functions are available in keras.backend. Note that this layer has no trainable parameters; I just want a function that modifies the previous output. For now I'm trying to convert the input tensor from a Tensor
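The excerpt breaks off here. One possible approach (my own sketch, not from the question; it assumes the inputs already lie in [0, 1] and can be read as keep-probabilities, and the layer name SampleMask is made up) is to skip keras.backend and use raw TensorFlow ops inside a custom layer:

    import tensorflow as tf

    class SampleMask(tf.keras.layers.Layer):
        """Keeps each value with probability equal to the value itself; zeros the rest."""
        def call(self, inputs):
            # Bernoulli sampling: draw uniform noise of the same shape and
            # keep an entry wherever the noise falls below its probability.
            noise = tf.random.uniform(tf.shape(inputs))
            mask = tf.cast(noise < inputs, inputs.dtype)
            return inputs * mask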

How to reshape a tensor with multiple `None` dimensions?

醉酒当歌 submitted on 2020-01-13 11:13:59
Question: I encountered a problem reshaping an intermediate 4-D TensorFlow tensor X into a 3-D tensor Y, where
X is of shape (batch_size, nb_rows, nb_cols, nb_filters)
Y is of shape (batch_size, nb_rows*nb_cols, nb_filters)
batch_size = None
Of course, when nb_rows and nb_cols are known integers, I can reshape X without any problem. However, in my application I need to deal with the case nb_rows = nb_cols = None. What should I do? I tried Y = tf.reshape(X, (-1, -1, nb_filters)) but it clearly fails
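tf.reshape allows at most one -1, which is why that attempt fails. A common fix (my own sketch, not from the post) reads the dynamic shape at run time, so None dimensions are resolved per batch:

    import tensorflow as tf

    def flatten_rows_cols(x, nb_filters):
        # tf.shape(x) returns the runtime shape, so None dims become concrete here
        s = tf.shape(x)
        return tf.reshape(x, (s[0], s[1] * s[2], nb_filters))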

How to use torch.stack function

空扰寡人 submitted on 2020-01-13 08:29:27
Question: I have a question about torch.stack. I have two tensors, a.shape = (2, 3, 4) and b.shape = (2, 3). How do I stack them without an in-place operation?

Answer 1: torch.stack requires tensors of identical shape, so these two cannot be stacked directly. One way would be to unsqueeze b and concatenate along the last dimension. For example:

    a.size()  # 2, 3, 4
    b.size()  # 2, 3
    b = torch.unsqueeze(b, dim=2)  # 2, 3, 1
    # torch.unsqueeze(b, dim=-1) does the same thing
    torch.cat([a, b], dim=2)       # 2, 3, 5

Source: https://stackoverflow.com/questions/52288635/how-to-use-torch-stack-function
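For contrast, a small sketch (mine, not from the answer) of what torch.stack itself does: it takes tensors of identical shape and adds a new dimension, while torch.cat widens an existing one:

    import torch

    a = torch.randn(2, 3, 4)
    c = torch.randn(2, 3, 4)
    s = torch.stack([a, c], dim=0)  # shape (2, 2, 3, 4): a brand-new leading dim
    d = torch.cat([a, c], dim=2)    # shape (2, 3, 8): same dims, last one widened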

Creating Tensors: Direct Creation

时光毁灭记忆、已成空白 submitted on 2020-01-13 08:19:39
Creating Tensors: Direct Creation

2. Tensor: direct creation

(1) torch.tensor()

    import torch
    import numpy as np

    torch.manual_seed(1)
    # create a tensor via torch.tensor
    arr = np.ones((3, 3))
    print("ndarray dtype:", arr.dtype)
    t = torch.tensor(arr)
    print(t)

Output: the ndarray dtype is float64, and t is a 3x3 tensor of ones with dtype=torch.float64.

To put the tensor on the GPU for acceleration, change its device to cuda:

    t = torch.tensor(arr, device='cuda')

(2) torch.from_numpy(ndarray) — create a tensor from a numpy array.
Note: a tensor created via torch.from_numpy shares memory with the original ndarray; when you modify the data in one of them, the other changes as well.

    arr = np.array([[1, 2, 3], [4, 5, 6]])
    t = torch.from_numpy(arr)

    arr[0, 0] = 0  # modifying arr also changes the tensor
    t[0, 0] = 100  # modifying the tensor also changes arr

Source: CSDN  Author: Major_s  Link: https://blog.csdn.net/qq_41375318
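A quick check (mine, not in the post) contrasting the two constructors: torch.tensor copies the data, while torch.from_numpy shares it:

    import numpy as np
    import torch

    arr = np.ones((2, 2))
    t_copy = torch.tensor(arr)      # copies the data out of arr
    t_view = torch.from_numpy(arr)  # shares memory with arr

    arr[0, 0] = -1
    print(t_copy[0, 0])  # tensor(1., dtype=torch.float64) - unaffected
    print(t_view[0, 0])  # tensor(-1., dtype=torch.float64) - follows arr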

Tensorflow Day1

╄→гoц情女王★ submitted on 2020-01-12 23:55:11
Tensor_flow The Second Lesson

2-1 Creating and launching a graph

Some explanations and operations for graphs (Graphs), sessions (Session), tensors (Tensor), and variables (Variable):
use a graph to represent the computation task
execute the graph in a context called a session
use tensors to represent data
maintain state through variables
use feed and fetch to assign values to, or retrieve data from, arbitrary operations

TensorFlow is a programming system that represents computations as graphs. The nodes of a graph are called ops (operations); an op takes zero or more Tensors, performs a computation, and produces zero or more Tensors. A Tensor can be viewed as an n-dimensional array or list. A graph must be launched inside a session.

    import tensorflow as tf

    m1 = tf.constant([[3, 3]])   # define two constant ops
    m2 = tf.constant([[2], [3]])
    product = tf.matmul(m1, m2)  # create a matmul op and feed it m1 and m2
    print(product)               # the session is not started, so this prints a Tensor
    sess = tf.Session()          # define a session; this launches the default graph
    result = sess.run(product)   #
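The excerpt is cut off at the sess.run line. A short continuation (my own, using the TF 1.x idiom from the snippet) with the value it computes:

    with tf.Session() as sess:
        result = sess.run(product)
        print(result)  # [[15]], since [[3, 3]] x [[2], [3]] = [[3*2 + 3*3]]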

Creating Tensors: Creation from Given Values

爷,独闯天下 submitted on 2020-01-12 03:06:31
Creating Tensors: Creation from Given Values

3. Tensor: creation from given values

(1) All-zeros tensor: torch.zeros() has five parameters:
size: the shape of the tensor, e.g. (3, 224, 224)
out: the output tensor
layout: the layout in memory, strided by default; when the stored matrix is sparse it can be set to sparse_coo, which improves lookup efficiency
device: the device the tensor lives on
requires_grad: whether a gradient is needed

    zeros = torch.zeros((3, 3), out=out_t)

zeros and out_t share the same memory.

(2) torch.zeros_like(): create an all-zeros tensor with the shape of input
(3) All-ones tensor: torch.ones()
(4) torch.ones_like()
(5) Tensor with a custom value: torch.full(size, fill_value)

    torch.full((6, 6), 666)

(6) torch.full_like()
(7) Arithmetic sequence: torch.arange() creates a 1-D arithmetic-progression tensor over the interval [start, end), where end is excluded and step is the common difference, 1 by default

    t = torch.arange(2, 10, 2)
    t

(8) Evenly spaced sequence: torch.linspace() creates a 1-D evenly spaced tensor over the closed interval [start, end]

    t = torch
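A quick sketch (mine, not from the post) running these factories, with the values they produce:

    import torch

    z = torch.zeros((3, 3))       # all zeros
    o = torch.ones_like(z)        # all ones, same shape as z
    f = torch.full((2, 2), 666)   # every element is 666
    a = torch.arange(2, 10, 2)    # tensor([2, 4, 6, 8]) - end excluded
    l = torch.linspace(2, 10, 5)  # tensor([2., 4., 6., 8., 10.]) - end included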

Learning TensorFlow: converting between tensor and numpy data, as seen in image processing

两盒软妹~` submitted on 2020-01-11 01:50:32
Reference: https://blog.csdn.net/xiaosongshine/article/details/84955891

Problem 1: conversion between tensor and numpy data

When using TensorFlow you constantly come into contact with numpy, and while writing code you barely feel a difference; but the network's outputs are still tensors, and taking those results into operations meant for numpy data, such as matplotlib, produces some strange errors, for example:

    import os
    import matplotlib.pyplot as plt
    import matplotlib
    # matplotlib.use("Qt5Agg")
    import tensorflow as tf
    import cv2

    file_name = 'F:\\gaohu.bmp'
    image_raw = tf.gfile.FastGFile(file_name, 'rb').read()
    image_data = tf.image.decode_bmp(image_raw)
    # squeeze the dimensions, making it 2-D
    image_data = tf.squeeze(image_data)
    image_data = tf.image.convert_image_dtype(image_data, dtype=tf.
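The excerpt breaks off mid-call. The usual remedy for the tensor-vs-numpy mismatch (my own sketch, assuming the TF 1.x style used above) is to evaluate the tensor in a session before handing it to matplotlib:

    with tf.Session() as sess:
        image_np = sess.run(image_data)  # Tensor -> numpy ndarray
    plt.imshow(image_np, cmap='gray')
    plt.show()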