
PyTorch Basics

左心房为你撑大大i submitted on 2019-12-19 00:46:45
Infi-chu: http://www.cnblogs.com/Infi-chu/

torch.FloatTensor: creates a Tensor of floating-point type; the argument can be either a list or a shape.

    import torch
    a = torch.FloatTensor(3, 4)          # 3 rows, 4 columns (uninitialized)
    a = torch.FloatTensor([2, 3, 4, 5])  # from a list

torch.IntTensor: creates a Tensor of integer type; the argument can be either a list or a shape.

    a = torch.IntTensor(3, 4)            # 3 rows, 4 columns
    a = torch.IntTensor([3, 4, 5, 6])    # from a list

torch.rand: creates a float Tensor of the given shape, similar to NumPy's numpy.random.rand; the values are drawn uniformly from the interval [0, 1).

    a = torch.rand(2, 3)

torch.randn: creates a random float Tensor of the given shape, similar to NumPy's numpy.random.randn; the values follow a normal distribution with mean 0 and variance 1.

    a = torch.randn(2, 2)

torch.range: creates a float Tensor over a custom range of values; it takes three arguments: start, end, and step.

    a = torch.range(1, 20, 1)
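As a quick sketch of the constructors above (note that torch.range is deprecated in current PyTorch in favor of torch.arange, whose end point is exclusive):

```python
import torch

# Uninitialized 3x4 float tensor (contents are arbitrary memory)
a = torch.FloatTensor(3, 4)

# Uniform samples in [0, 1)
u = torch.rand(2, 3)

# Standard normal samples (mean 0, variance 1)
n = torch.randn(2, 2)

# torch.arange replaces torch.range; end is exclusive,
# so this yields the integers 1..20 with step 1
r = torch.arange(1, 21, 1)
```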

Is .data still useful in pytorch?

為{幸葍}努か submitted on 2019-12-18 19:18:26
Question: I'm new to PyTorch. I have read a lot of PyTorch code that makes heavy use of a tensor's .data member, but searching for .data in the official documentation and on Google turns up very little. I guess .data contains the data in the tensor, but when do we need it and when not?

Answer 1: .data was an attribute of Variable (an object wrapping a Tensor with history tracking, e.g. for automatic updates), not of Tensor. Specifically, .data gave access to the Variable's underlying Tensor. However, since PyTorch version 0.4.0,
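A minimal sketch of the distinction in post-0.4 PyTorch: both .data and the now-preferred .detach() return a view that shares storage with the original tensor but is excluded from the autograd graph; the difference is that autograd can detect unsafe in-place edits made through .detach(), while .data bypasses that check silently.

```python
import torch

x = torch.ones(3, requires_grad=True)

# Both share storage with x but carry no gradient history
d_legacy = x.data      # legacy Variable-era accessor
d_safe = x.detach()    # preferred since 0.4.0
```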

Tensorflow, how to multiply a 2D tensor (matrix) by corresponding elements in a 1D vector

≯℡__Kan透↙ submitted on 2019-12-18 17:56:49
Question: I have a 2D matrix M of shape [batch x dim] and a vector V of shape [batch]. How can I multiply each of the columns in the matrix by the corresponding element in V? I know an inefficient NumPy implementation would look like this:

    import numpy as np

    M = np.random.uniform(size=(4, 10))
    V = np.random.randint(10, size=4)  # one element per row of M

    def tst(M, V):
        rows = []
        for i in range(len(M)):
            col = []
            for j in range(len(M[i])):
                col.append(M[i][j] * V[i])
            rows.append(col)
        return np.array(rows)

In tensorflow,
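The double loop above can be replaced by broadcasting: reshape V to a column of shape [batch, 1], so row i of M is scaled by V[i]. A minimal NumPy sketch (the TensorFlow equivalent expands V's dimensions the same way, e.g. with tf.expand_dims):

```python
import numpy as np

M = np.random.uniform(size=(4, 10))
V = np.random.uniform(size=4)

# V[:, None] has shape (4, 1); broadcasting scales row i of M by V[i]
out = M * V[:, None]

# Reference: the explicit double loop from the question
expected = np.array([[M[i, j] * V[i] for j in range(M.shape[1])]
                     for i in range(M.shape[0])])
```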

doParallel performance on a tensor in R

浪尽此生 submitted on 2019-12-18 09:48:10
Question: I need to perform some operations on a tensor and I would like to make them parallel. Consider the following example:

    # first part, without doParallel
    N = 8192
    M = 128
    F = 64
    ma <- function(x, n = 5) { filter(x, rep(1/n, n), sides = 2) }
    m <- array(rexp(N*M*F), dim = c(N, M, F))
    new_m <- array(0, dim = c(N, M, F))
    system.time(
      for (i in 1:N) {
        for (j in 1:F) {
          ma_r <- ma(m[i,,j], 2)
          ma_r <- c(ma_r[-length(ma_r)], ma_r[(length(ma_r)-1)])
          new_m[i,,j] <- ma_r
        }
      }
    )

This takes around 38 seconds on my laptop. The

What is a batch in TensorFlow?

强颜欢笑 submitted on 2019-12-17 23:47:34
Question: The introductory documentation, which I am reading (TOC here), introduces the term here without having defined it. [1] https://www.tensorflow.org/get_started/ [2] https://www.tensorflow.org/tutorials/mnist/tf/

Answer 1: Let's say you want to do digit recognition (MNIST) and you have defined the architecture of your network (a CNN). Now you could start feeding the images from the training data one by one into the network, getting a prediction (up to this step it's called performing inference), computing the
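To make the term concrete, here is a hypothetical sketch of slicing a training set into mini-batches (the array and function names are illustrative, not from any particular API): a batch is simply the group of examples fed through the network together in one forward/backward pass.

```python
import numpy as np

def batches(data, batch_size):
    """Yield successive mini-batches from data."""
    for start in range(0, len(data), batch_size):
        yield data[start:start + batch_size]

# e.g. 100 MNIST-sized images, processed 32 at a time
images = np.zeros((100, 28, 28))
sizes = [len(b) for b in batches(images, 32)]
```

With 100 examples and a batch size of 32, the network sees three full batches and one final partial batch of 4.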

AttributeError: 'Tensor' object has no attribute 'numpy'

谁说胖子不能爱 submitted on 2019-12-17 20:43:17
Question: How can I fix this error? I downloaded this code from GitHub:

    predicted_id = tf.multinomial(tf.exp(predictions), num_samples=1)[0][0].numpy()

It throws the error AttributeError: 'Tensor' object has no attribute 'numpy'. Please help me fix this! I tried:

    sess = tf.Session()
    with sess.as_default():
        predicted_id = tf.multinomial(tf.exp(predictions), num_samples=1)[0][0].eval()

and I get this error. Someone help me, I just want it to work, why is this so hard?

    D:\Python>python TextGenOut.py File

tf.shape() gets the wrong shape in TensorFlow

一个人想着一个人 submitted on 2019-12-17 15:22:43
Question: I define a tensor like this:

    x = tf.get_variable("x", [100])

But when I try to print the shape of the tensor:

    print(tf.shape(x))

I get Tensor("Shape:0", shape=(1,), dtype=int32). Why isn't the output shape=(100,)?

Answer 1: tf.shape(input, name=None) returns a 1-D integer tensor representing the shape of input. You're looking for x.get_shape(), which returns the TensorShape of the x variable.

Update: I wrote an article to clarify dynamic vs. static shapes in TensorFlow because of this

Best way to save a trained model in PyTorch?

蓝咒 submitted on 2019-12-17 06:19:15
Question: I was looking for alternative ways to save a trained model in PyTorch. So far, I have found two alternatives:

1. torch.save() to save a model and torch.load() to load a model.
2. model.state_dict() to save a trained model and model.load_state_dict() to load the saved model.

I have come across this discussion, where approach 2 is recommended over approach 1. My question is: why is the second approach preferred? Is it only because torch.nn modules have those two functions and we are encouraged to
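A minimal sketch of approach 2: persist only the parameter dict, then rebuild the architecture in code and restore the weights into it (an in-memory buffer is used here for self-containment; in practice you would pass a file path):

```python
import io
import torch
import torch.nn as nn

model = nn.Linear(4, 2)

# Approach 2: save only the parameters, not the whole pickled module
buf = io.BytesIO()
torch.save(model.state_dict(), buf)

# Later: reconstruct the same architecture, then load the weights
buf.seek(0)
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load(buf))
```

Because only tensors are serialized, the saved file does not depend on the class definitions or directory layout of the original project, which is a large part of why this approach is recommended.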

Commonly used PyTorch functions

爷,独闯天下 submitted on 2019-12-16 03:21:44
torch.cat(), torch.squeeze(), torch.unsqueeze(), torch.stack(), torch.sum()

    torch.sum(input, dim, out=None) → Tensor
    # input (Tensor) – the input tensor
    # dim (int) – the dimension to reduce
    # out (Tensor, optional) – the output tensor

torch.multinomial()

    torch.multinomial(input, num_samples, replacement=False, out=None) → LongTensor

This function draws num_samples samples from each row of input; the output tensor holds the column index of input selected at each draw. Sampling is weighted by the element values: the larger the value, the more likely that index is drawn early. If there are zero-valued elements, they will not be drawn until all nonzero elements have been exhausted. replacement controls whether sampling is done with replacement: True means with replacement, False without. This function can be used to implement negative sampling in the word2vec algorithm. Below is an example from the official documentation:

    >>> weights = torch.Tensor([0, 10, 3, 0])  # create a Tensor of
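Continuing the documentation example above, a quick sketch of sampling without replacement: with weights [0, 10, 3, 0] and two draws, the zero-weight indices 0 and 3 cannot appear until the nonzero entries are exhausted, so the two draws must be indices 1 and 2 (in an order determined by the weights).

```python
import torch

weights = torch.tensor([0., 10., 3., 0.])

# Two draws without replacement; only indices 1 and 2 have
# nonzero weight, so they are drawn before any zero-weight index
idx = torch.multinomial(weights, 2)
```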

[Repost] tf.Print() (TensorFlow's print function)

只谈情不闲聊 submitted on 2019-12-14 11:38:59
Original article: https://blog.csdn.net/weixin_36670529/article/details/100191674

When debugging a program, you often need to inspect intermediate values. These are usually local variables defined inside the model or some other function, and because TensorFlow requires the computation graph to be built before it is run, you cannot simply print them after defining them. TensorFlow provides a function for this, tf.Print():

    tf.Print(input, data, message=None, first_n=None, summarize=None, name=None)

It requires at least two inputs, input and data: input is the tensor to pass through, and data is a list of the tensors whose contents should be printed.

Parameters:
message is a string prepended to the printed output
first_n logs only the first n invocations
summarize is the number of entries to print per tensor; if None, only 3 elements of each input tensor are printed
name is the name of the op

Note that tf.Print() only builds an op; nothing is printed until it is run. Example:

    x=tf.constant([2,3,4