tensor

TensorFlow inference graph performance optimization

Submitted by 和自甴很熟 on 2019-12-23 12:11:59
Question: I am trying to understand some surprising results I see when implementing a TensorFlow graph. The graph I am working with is just a forest (a bunch of trees). It is a plain forward inference graph, with nothing related to training. I am sharing the snippets for the two implementations.

Code snippet 1:

    with tf.name_scope("main"):
        def get_tree_output(offset):
            loop_vars = (offset,)
            leaf_indice = tf.while_loop(cond, body, loop_vars,
                                        back_prop=False,
                                        parallel_iterations=1,
                                        name="while_loop")
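The question's actual cond and body are not shown above, so here is a minimal, self-contained tf.while_loop sketch (TF 1.x graph mode, as in the snippet) with placeholder loop logic; it only illustrates the back_prop=False / parallel_iterations=1 call shape and is not the poster's tree-walking code:

    import tensorflow as tf  # TF 1.x graph-mode API, as in the question

    # Placeholder cond/body: increment an integer counter until it reaches 10.
    i0 = tf.constant(0)
    cond = lambda i: tf.less(i, 10)
    body = lambda i: tf.add(i, 1)

    result = tf.while_loop(cond, body, [i0],
                           back_prop=False,
                           parallel_iterations=1,
                           name="while_loop")

    with tf.Session() as sess:
        print(sess.run(result))  # 10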

Broadcasting np.dot vs tf.matmul for tensor-matrix multiplication (Shape must be rank 2 but is rank 3 error)

Submitted by 丶灬走出姿态 on 2019-12-23 03:48:11
Question: Let's say I have the following tensors:

    X = np.zeros((3, 201, 340))
    Y = np.zeros((340, 28))

Taking a dot product of X and Y succeeds with NumPy and yields a tensor of shape (3, 201, 28). With TensorFlow, however, I get the following error: "Shape must be rank 2 but is rank 3" ... Minimal code example:

    X = np.zeros((3, 201, 340))
    Y = np.zeros((340, 28))
    print(np.dot(X, Y).shape)  # successful: (3, 201, 28)
    tf.matmul(X, Y)            # erroneous

Any idea how to achieve the same result with TensorFlow?

Answer 1
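The answer body is cut off above. As a hedged illustration of one common way to get np.dot-style behaviour for a 3-D x 2-D product in TensorFlow (not necessarily the approach taken in the original answer), tf.tensordot contracts the last axis of X against the first axis of Y:

    import numpy as np
    import tensorflow as tf

    X = np.zeros((3, 201, 340), dtype=np.float32)
    Y = np.zeros((340, 28), dtype=np.float32)

    # Contract the last axis of X with the first axis of Y,
    # mirroring np.dot's behaviour for this case.
    Z = tf.tensordot(X, Y, axes=1)   # or: tf.einsum('abc,cd->abd', X, Y)
    print(Z.shape)                   # (3, 201, 28)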

torch.nn.functional.normalize explained in detail

Submitted by 瘦欲@ on 2019-12-22 16:01:49
torch.nn.functional.normalize

    torch.nn.functional.normalize(input, p=2, dim=1, eps=1e-12, out=None)

Purpose: divide the values along a given dimension by that dimension's norm (the 2-norm by default):

    v = \frac{v}{\max(\lVert v \rVert_p, \epsilon)}

This post covers the following three cases.

Input is a 1-D Tensor:

    a = torch.Tensor([1, 2, 3])
    torch.nn.functional.normalize(a, dim=0)
    # tensor([0.2673, 0.5345, 0.8018])

Each element is divided by the norm of this Tensor: \sqrt{1^2 + 2^2 + 3^2} = 3.7416.

Input is a 2-D Tensor:

    b = torch.Tensor([[1, 2, 3], [4, 5, 6]])
    torch.nn.functional.
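The excerpt is truncated at the 2-D example above. Purely as a hedged sketch of how that case typically plays out (with dim=1 each row is divided by its own 2-norm; the printed values are my own approximate illustration, not taken from the original post):

    import torch
    import torch.nn.functional as F

    b = torch.Tensor([[1, 2, 3], [4, 5, 6]])

    # dim=1: every row is divided by its own 2-norm,
    # i.e. sqrt(1+4+9) for the first row and sqrt(16+25+36) for the second.
    print(F.normalize(b, dim=1))
    # tensor([[0.2673, 0.5345, 0.8018],
    #         [0.4558, 0.5698, 0.6838]])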

Porting PyTorch code from CPU to GPU

Submitted by [亡魂溺海] on 2019-12-22 13:06:08
Question: Following the tutorial at https://github.com/spro/practical-pytorch/blob/master/seq2seq-translation/seq2seq-translation.ipynb, there is a USE_CUDA flag that switches the variable and tensor types between CPU (when False) and GPU (when True). Using the data from en-fr.tsv and converting the sentences to variables:

    import unicodedata
    import string
    import re
    import random
    import time
    import math
    from gensim.corpora.dictionary import Dictionary
    import torch
    import torch.nn as nn
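The question is cut off above. As a hedged sketch of the kind of USE_CUDA pattern that tutorial-era (pre-0.4) PyTorch code typically used - an assumption on my part, not the notebook's exact code - the flag usually just gates a .cuda() call on each tensor/Variable:

    import torch
    from torch.autograd import Variable  # old-style API from the tutorial era

    USE_CUDA = torch.cuda.is_available()

    def to_variable(data):
        # Wrap the data in a Variable, moving it to the GPU only when USE_CUDA is set.
        tensor = torch.LongTensor(data)
        if USE_CUDA:
            tensor = tensor.cuda()
        return Variable(tensor)

    indexes = to_variable([1, 2, 3, 4])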

SSD object-detection loss: a detailed walkthrough of the MultiBox loss PyTorch source - addressing class imbalance

Submitted by 拜拜、爱过 on 2019-12-22 11:10:56
I recently used an object-detection algorithm for a course project and had to deal with class imbalance. Besides data augmentation, an obvious lever on the loss-function side is to double the penalty for under-represented classes whenever they are misclassified. I start from the MultiBox loss here; I won't repeat its formula, since plenty of good explanations of the theory already exist. What I want to walk through is the source code. There are in fact many self-styled source-code walkthroughs, but they gloss over the important parts, sometimes just translating the comments and moving on. The annotations below combine help from various people plus some of my own understanding; thanks to an author on Zhihu (whose name I forget) and another blogger on CSDN. The code below includes a small section that addresses class imbalance. I normally write Java, so the Python may be ugly; I also simply take the first label, because I don't know how the multi-label case should be handled, so images with multiple annotations may need extra processing. One more remark: a lot of Tensor syntax really tripped me up as a complete Python beginner, for example indexing a Tensor with the result of comparing another Tensor against 0, and other puzzling operations. The fix is just to experiment until the behaviour clicks. Good luck to us all.

    def forward(self, predictions, targets):
        """Multibox Loss
        Args:
            predictions (tuple): A tuple containing loc preds,
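The MultiBox loss source itself is cut off above. As a hedged illustration of the "double the penalty for under-represented classes" idea described in the intro - a generic sketch using the per-class weighting built into PyTorch's cross entropy, not the post's actual MultiBox-loss modification - weights can be passed straight to F.cross_entropy:

    import torch
    import torch.nn.functional as F

    num_classes = 21                      # e.g. 20 object classes + background (assumed)
    class_weights = torch.ones(num_classes)
    class_weights[3] = 2.0                # hypothetical rare class: double its penalty

    conf_pred = torch.randn(8, num_classes)          # confidence scores for 8 matched priors
    conf_target = torch.randint(0, num_classes, (8,))

    # Misclassifying a sample whose true label is the rare class now costs twice as much.
    loss_c = F.cross_entropy(conf_pred, conf_target, weight=class_weights)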

How to apply a custom function to specific columns in a matrix in PyTorch

Submitted by 社会主义新天地 on 2019-12-22 05:32:02
Question: I have a tensor of size [150, 182, 91]; the first dimension is just the batch size, while the matrix I am interested in is the 182x91 one. I need to run a function on the 182x91 matrix for each of the 150 batch entries separately. I need to get a diagonal matrix stripe of the 182x91 matrix, and the function I am using is the following one (based on my previous question: Getting diagonal matrix stripe automatically in numpy or pytorch):

    def stripe(a):
        i, j = a.size()
        assert (i >= j)
        out = torch.zeros((i
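The stripe helper is cut off above. Purely as a hedged sketch - an assumed completion based on the strided-view trick from the linked question plus a plain loop over the batch dimension, not necessarily the poster's final code - applying it per batch entry could look like:

    import torch

    def stripe(a):
        # Assumed completion: read the diagonals starting at rows 0..i-j as a
        # (i - j + 1, j) view, without copying (requires i >= j).
        i, j = a.size()
        assert i >= j
        row_stride, col_stride = a.stride()
        return a.as_strided((i - j + 1, j), (row_stride, row_stride + col_stride))

    batch = torch.randn(150, 182, 91)
    # Apply the per-matrix function to each of the 150 batch entries and restack.
    out = torch.stack([stripe(m) for m in batch])
    print(out.shape)  # torch.Size([150, 92, 91])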

[No longer updated] I didn't quite understand 'tf.stop_gradient' in the 莫烦python tutorials, so I'm reposting this as a bookmark [No longer updated]

Submitted by 独自空忆成欢 on 2019-12-22 00:20:36
I didn't quite understand the 'tf.stop_gradient()' call in the DDPG code from 莫烦python, so I'm reposting this good article. I still don't fully get it, but it seems worth recording for later study. Everything below is reposted content, with minor edits.

Intro: Why does DQN apply stop_gradient to q_target? This function is quite important in TensorFlow, so we use the DQN code as an example to explain what it does. The key code is analyzed below.

No stop_gradient: this is the version people write most often. Without further ado, the code:

    ...
    self.q_target = tf.placeholder(tf.float32, [None, self.n_actions], name='Q_target')  # for calculating loss
    ...
    with tf.variable_scope('loss'):
        self.loss = tf.reduce_mean(tf.squared_difference(self.q_target, self.q_eval))
    with tf.variable_scope('train'):
        self._train_op = tf.train.
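The repost is cut off before the contrasting version. As a hedged sketch of the general idea - an assumption about what the stop_gradient variant typically looks like, not the article's exact code - when q_target is built inside the graph from the target network instead of being fed through a placeholder, tf.stop_gradient keeps the loss from backpropagating into the target branch:

    import tensorflow as tf  # TF 1.x, matching the reposted snippet

    # A toy evaluation "network": one linear layer over a 4-feature state.
    states = tf.placeholder(tf.float32, [None, 4], name='state')
    w = tf.Variable(tf.random_normal([4, 1]), name='eval_w')
    q_eval_wrt_a = tf.squeeze(tf.matmul(states, w), axis=1)

    # Target-network outputs and rewards, fed from outside for simplicity.
    reward = tf.placeholder(tf.float32, [None], name='reward')
    q_next = tf.placeholder(tf.float32, [None, 3], name='q_next')
    gamma = 0.9

    # Build the TD target in-graph, then block gradients through it so the
    # loss only updates the evaluation network, never the target branch.
    q_target = tf.stop_gradient(reward + gamma * tf.reduce_max(q_next, axis=1))

    loss = tf.reduce_mean(tf.squared_difference(q_target, q_eval_wrt_a))
    train_op = tf.train.RMSPropOptimizer(0.01).minimize(loss)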

Common tensor syntax in PyTorch

Submitted by 为君一笑 on 2019-12-21 20:07:40
I am collecting the commonly used Tensor math operations here, so that I don't forget which arguments these methods take when running PyTorch experiments. The methods covered include:

Tensor sum and indexed sum: torch.sum() and torch.Tensor.index_add_()
Product of Tensor elements: torch.prod(input)
Mean, variance, and extrema of a Tensor: torch.mean(), torch.var(), torch.max(), torch.min()
Reciprocal square root of a Tensor: torch.rsqrt(input)
Linear interpolation of Tensors: torch.lerp(start, end, weight)
Hyperbolic tangent of a Tensor: torch.tanh(input, out=None)

Element-wise sum

torch.sum(input) → Tensor

Returns the sum of all elements of the input tensor.

Parameters: input (Tensor) - the input tensor

Example:

torch.sum(input, dim, keepdim=False, out=None) → Tensor

Returns a new tensor containing the sum over each row of the input tensor in the given dimension dim. If keepdim is True, the dim dimension being reduced is kept with size 1 in the output tensor, and all other dimensions match the input tensor. Otherwise
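The example under torch.sum is missing from the excerpt above; as a minimal hedged sketch of the dim/keepdim behaviour just described (my own illustration, not the blog's original example):

    import torch

    x = torch.Tensor([[1, 2, 3],
                      [4, 5, 6]])

    print(torch.sum(x))                       # tensor(21.)
    print(torch.sum(x, dim=1))                # tensor([ 6., 15.])
    print(torch.sum(x, dim=1, keepdim=True))  # tensor([[ 6.], [15.]]) - dim 1 kept with size 1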

Computing the mean of a tensor in TensorFlow

Submitted by 守給你的承諾、 on 2019-12-21 14:09:39
https://www.tensorflow.org/versions/r0.12/api_docs/python/math_ops.html#reduce_mean

tf.reduce_mean(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)

Computes the mean of elements across the dimensions of a tensor. The tensor is reduced along the dimensions given in axis. If keep_dims is set to False, the rank of the result is reduced by 1 for each reduced dimension. If axis is not given, the mean of all elements is returned.

Example:

    # 'x' is [[1., 1.]
    #         [2., 2.]]
    tf.reduce_mean(x)     ==> 1.5
    tf.reduce_mean(x, 0)  ==> [1.5, 1.5]
    tf.reduce_mean(x, 1)  ==> [1., 2.]

Source: https://www.cnblogs.com/huangshiyu13/p/6534264.html
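The excerpt above does not show keep_dims in action; as a small hedged sketch of that flag (my own example, assuming the TF 1.x API the linked docs describe):

    import tensorflow as tf

    x = tf.constant([[1., 1.],
                     [2., 2.]])

    mean_drop = tf.reduce_mean(x, 0)                  # shape (2,)   -> [1.5, 1.5]
    mean_keep = tf.reduce_mean(x, 0, keep_dims=True)  # shape (1, 2) -> [[1.5, 1.5]]

    with tf.Session() as sess:
        print(sess.run([mean_drop, mean_keep]))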