tensor

Multidimensional sparse array (3-way tensor) in R

Submitted by 主宰稳场 on 2019-12-03 23:53:37
Using the Matrix package I can create a two-dimensional sparse matrix. Can someone suggest a package that would let me create a multidimensional (specifically three-dimensional) sparse array, technically a three-way tensor, in R? The slam package has a simple_sparse_array class: http://finzi.psych.upenn.edu/R/library/slam/html/array.html , although it only supports indexing and coercion (if you wanted to do tensor operations or elementwise arithmetic without converting back to a regular dense array, you'd have to implement them yourself ...). I found this by doing library
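The question is about R, but to illustrate what an N-dimensional sparse (COO) structure looks like in practice, here is a minimal sketch using Python's pydata `sparse` package; the package choice and all values are my own illustration, not part of the original question or answer.

```python
import numpy as np
import sparse  # pydata/sparse: N-dimensional COO arrays

# One column of coords per nonzero entry: (axis0, axis1, axis2) indices.
coords = np.array([[0, 1, 2],
                   [0, 5, 9],
                   [3, 3, 3]])
data = np.array([1.0, 2.0, 3.0])

x = sparse.COO(coords, data, shape=(3, 10, 4))  # a 3-way sparse tensor
print(x.shape, x.nnz)        # (3, 10, 4) 3

y = 2 * x                    # elementwise arithmetic stays sparse
print(y.todense()[1, 5, 3])  # 4.0
```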

Dive into Deep Learning 7: implementing softmax classification from scratch

Submitted by 百般思念 on 2019-12-03 14:37:11
Outline: getting and reading the data; initializing model parameters; implementing the softmax operation; defining the model; defining the loss function; computing classification accuracy; training the model; summary.

import torch
import torchvision
import numpy as np
import sys
import torchvision.transforms as transforms
sys.path.append('..')
import d2lzh_pytorch as d2l

Getting and reading the data: we will use the Fashion-MNIST dataset and set the batch size to 256.

batch_size = 256
mnist_train = torchvision.datasets.FashionMNIST(root='~/Datasets/FashionMNIST', download=True, train=True, transform=transforms.ToTensor())
mnist_test = torchvision.datasets.FashionMNIST(root='~/Datasets/FashionMNIST', download=True, train=False, transform=transforms.ToTensor())
if sys.platform.startswith('win'):
    num_worker = 0  #
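The snippet above is cut off at the Windows check. As a hedged sketch of how this step typically continues in the d2l material (variable names and worker counts are assumptions on my part), the datasets are wrapped in DataLoaders and the softmax that gives the chapter its title is written out by hand:

```python
import sys
import torch
from torch.utils.data import DataLoader

# Assumed continuation: choose worker processes, then build the iterators.
if sys.platform.startswith('win'):
    num_workers = 0   # extra worker processes tend to be problematic on Windows
else:
    num_workers = 4

train_iter = DataLoader(mnist_train, batch_size=batch_size,
                        shuffle=True, num_workers=num_workers)
test_iter = DataLoader(mnist_test, batch_size=batch_size,
                       shuffle=False, num_workers=num_workers)

# Softmax implemented from scratch: exponentiate, then normalize each row.
def softmax(X):
    X_exp = X.exp()
    partition = X_exp.sum(dim=1, keepdim=True)
    return X_exp / partition   # broadcasting divides every row by its sum
```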

Matlab - Accessing a part of a multidimensional array

Submitted by 巧了我就是萌 on 2019-12-03 14:36:37
I'm trying to access a part of a multidimensional array in Matlab, which could be done like this: X(2:3, 1:20, 5, 4:7). However, neither the number of dimensions nor the index ranges are fixed, so I want to supply the indices from arrays; for the example above they would be ind1 = [2 1 5 4]; ind2 = [3 20 5 7]; For a fixed number of dimensions this is not a problem ( X(ind1(1):ind2(1),...) ), but since it is not fixed, I'm not sure how to implement this in Matlab. Is there a way, or should I approach this differently? Using comma-separated lists you can make this quicker and friendlier: % some test data ind1 =
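This is not the Matlab answer the post goes on to give; it is only an analogous sketch in numpy (array shape and values assumed) showing the same idea of building the index ranges programmatically when their number is not fixed:

```python
import numpy as np

X = np.random.rand(5, 20, 6, 8)
ind1 = [1, 0, 4, 3]    # 0-based lower bounds, one per dimension
ind2 = [2, 19, 4, 6]   # 0-based upper bounds (inclusive)

# One slice object per dimension, then index with the whole tuple at once.
idx = tuple(slice(lo, hi + 1) for lo, hi in zip(ind1, ind2))
part = X[idx]
print(part.shape)      # (2, 20, 1, 4)
```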

Eigen::Tensor, how to access matrix from Tensor

Submitted by 对着背影说爱祢 on 2019-12-03 13:56:43
Question: I have the following Eigen tensor: Eigen::Tensor<float, 3> m(3,10,10); I want to access the first matrix. In numpy I would write m[0, :, :]. How would I do this in Eigen?

Answer 1: You can access parts of a tensor using .slice(...) or .chip(...). To access the first matrix, equivalent to numpy's m[0, :, :]:

Eigen::Tensor<double, 3> m(3, 10, 10);        // Initialize
m.setRandom();                                // Set random values
Eigen::array<long, 3> offset = {0, 0, 0};     // Starting point
Eigen::array<long, 3> extent = {1, 10, 10};   // Extent: one 10x10 matrix
Eigen::Tensor<double, 3> first = m.slice(offset, extent);   // shape 1x10x10
// Alternatively, m.chip(0, 0) drops the first dimension and yields a 10x10 tensor.

PyTorch and TensorFlow

Submitted by 爷，独闯天下 on 2019-12-03 11:16:37
Take a simple computation graph as an example and implement it once in TensorFlow and once in PyTorch. Both versions build the forward graph, feed in data, and compute gradients, but the PyTorch code is clearly more concise. I have not used TensorFlow much, so in this post I mainly want to share some of my experience learning PyTorch, partly as a summary for myself and partly so that anyone who stumbles on this article can use it as a reference.

A good framework should really provide three things: convenient construction of large computation graphs, automatic differentiation of variables, and straightforward execution on GPUs. PyTorch delivers all three, yet most companies today use TensorFlow, while PyTorch, being more flexible, is more common in academic research. My guess is that Google simply started earlier; Facebook came later, and although PyTorch is flexible, many companies are already invested in TensorFlow and migrating everything away would take a lot of effort. In addition, TensorFlow is stronger at distributed GPU computation and somewhat more efficient than PyTorch when the data volume is huge, which I think is another important reason.

Enough digression; here is a summary of my PyTorch notes. First, PyTorch has three levels: tensor, Variable, and Module. A tensor is a multidimensional array; since its operations are matrix operations, it is very well suited to running on a GPU. Why is a tensor alone not enough, and why introduce a Variable? In fact, a Variable is just a wrapper around a tensor
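As a hedged illustration of the tensor / Variable / Module layering described above (in current PyTorch the Variable wrapper has been merged into Tensor, so gradient tracking is just a tensor flag), a minimal sketch:

```python
import torch
import torch.nn as nn

# Tensor level: raw data, optionally tracking gradients (the old Variable role).
x = torch.randn(4, 3, requires_grad=True)
y = (x * 2).sum()
y.backward()                 # autograd fills in x.grad
print(x.grad.shape)          # torch.Size([4, 3])

# Module level: parameters plus a forward computation, composable into models.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(3, 2)

    def forward(self, inp):
        return torch.relu(self.fc(inp))

net = TinyNet()
print(net(torch.randn(4, 3)).shape)   # torch.Size([4, 2])
```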

'tensorboard' is not recognized as an internal or external command,

Submitted by 无人久伴 on 2019-12-03 09:08:46
Question: I just started using TensorFlow, but I am not able to use the tensorboard command from cmd; it gives this error:

C:\Users\tushar\PycharmProjects>tensorboard --logdir="NewTF"
'tensorboard' is not recognized as an internal or external command,
operable program or batch file.

I am using Windows 10 and have installed the tensorboard library.

Answer 1: I had the same problem with tensorflow 1.5.0 on Windows 10. Following the TensorBoard documentation ("Launching TensorBoard" section), you can try: python -m
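If the tensorboard executable is not on the Windows PATH, another workaround is to launch it from Python instead of cmd. This is a sketch that assumes a reasonably recent tensorboard package exposing the program module; it is not from the original answer:

```python
# Start TensorBoard programmatically when the CLI entry point is not on PATH.
from tensorboard import program

tb = program.TensorBoard()
tb.configure(argv=[None, "--logdir", "NewTF"])  # first argv slot is a dummy program name
url = tb.launch()                               # starts the server in this process
print("TensorBoard listening on", url)
```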

tf.unstack with dynamic shape

Submitted by Anonymous (unverified) on 2019-12-03 09:02:45
Question: I'm trying to unstack a tensor because I need a sequence as input for an RNN. I am using variable sequence lengths, which prevents me from using tf.unstack correctly.

def MapToSequences(x):
    # x.get_shape().as_list() = [64, 1, None, 512]
    x = tf.squeeze(x)
    # tf.shape(x) = [None, None, None], at runtime would be [64, seqlen, 512]
    x = tf.transpose(x, perm=[1, 0, 2])  # [seqlen, 64, 512]
    # Here I'd like to unstack with seqlen as num
    x = tf.unstack(x)  # Cannot infer num from shape (?, ?, ?)
    return x

I tried using tf.shape(x) to infer the seqlen
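tf.unstack needs a static num, so with a runtime-only sequence length a common workaround (a sketch under TF 1.x assumptions, not from the original post) is to skip unstacking entirely and feed the time-major tensor straight to tf.nn.dynamic_rnn, which handles dynamic lengths itself:

```python
import tensorflow as tf  # TF 1.x style API assumed

def run_rnn(x, seq_lengths, num_units=512):
    # x: [batch, 1, seqlen, 512] with seqlen unknown until runtime
    x = tf.squeeze(x, axis=1)             # [batch, seqlen, 512]
    x = tf.transpose(x, perm=[1, 0, 2])   # time-major: [seqlen, batch, 512]
    cell = tf.nn.rnn_cell.LSTMCell(num_units)
    outputs, state = tf.nn.dynamic_rnn(
        cell, x,
        sequence_length=seq_lengths,      # per-example true lengths
        time_major=True,
        dtype=tf.float32)
    return outputs, state
```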

LSTM with Attention

Submitted by 风格不统一 on 2019-12-03 09:02:29
I am trying to add an attention mechanism to the stacked-LSTM implementation at https://github.com/salesforce/awd-lstm-lm . All the examples online use an encoder-decoder architecture, which I do not want to use (do I have to, for an attention mechanism?). Basically, I have used https://webcache.googleusercontent.com/search?q=cache:81Q7u36DRPIJ:https://github.com/zhedongzheng/finch/blob/master/nlp-models/pytorch/rnn_attn_text_clf.py+&cd=2&hl=en&ct=clnk&gl=uk

def __init__(self, rnn_type, ntoken, ninp, nhid, nlayers, dropout=0.5, dropouth=0.5,
             dropouti=0.5, dropoute=0.1, wdrop=0, tie_weights=False):
    super
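Attention does not require an encoder-decoder setup; for classification (or a pooled context on top of a language model) you can attend over the LSTM's own outputs. A minimal, hedged PyTorch sketch with names and sizes of my own choosing, not taken from awd-lstm-lm:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnLSTMClassifier(nn.Module):
    def __init__(self, ntoken, ninp, nhid, nlayers, nclasses, dropout=0.5):
        super().__init__()
        self.embed = nn.Embedding(ntoken, ninp)
        self.lstm = nn.LSTM(ninp, nhid, nlayers, dropout=dropout, batch_first=True)
        self.attn_score = nn.Linear(nhid, 1)    # one scalar score per timestep
        self.out = nn.Linear(nhid, nclasses)

    def forward(self, tokens):                  # tokens: [batch, seqlen]
        h, _ = self.lstm(self.embed(tokens))    # h: [batch, seqlen, nhid]
        scores = self.attn_score(h).squeeze(-1)           # [batch, seqlen]
        weights = F.softmax(scores, dim=1)                # attention weights
        context = (weights.unsqueeze(-1) * h).sum(dim=1)  # [batch, nhid]
        return self.out(context)                          # [batch, nclasses]
```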

Printing a generator in Python TensorFlow

Submitted by Anonymous (unverified) on 2019-12-03 08:52:47
Question: I am trying to follow the TensorFlow tutorial described in this link. I am trying to print the predicted result as described:

print("Predicted %d, Label: %d" % (classifier.predict(test_data[0]), test_labels[0]))

But I am not able to print the result; I get the following error:

TypeError: %d format: a number is required, not generator

How do I print the generator in Python? I tried to write a loop and
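classifier.predict in this API returns a generator rather than a number, so materialize it before formatting. A hedged sketch reusing the names from the question (whether each element is a bare class id or a dict depends on which estimator the tutorial uses):

```python
# Turn the generator into a list (or call next()) before using %d formatting.
predictions = list(classifier.predict(test_data[0]))
pred = predictions[0]
# If the estimator yields dicts, unwrap first, e.g.:
#   pred = int(pred["classes"])  or  pred = np.argmax(pred["probabilities"])
print("Predicted %d, Label: %d" % (pred, test_labels[0]))
```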

Gradient clipping appears to choke on None

Submitted by Anonymous (unverified) on 2019-12-03 07:50:05
Question: I'm trying to add gradient clipping to my graph. I used the approach recommended here: How to effectively apply gradient clipping in tensor flow?

optimizer = tf.train.GradientDescentOptimizer(learning_rate)
if gradient_clipping:
    gradients = optimizer.compute_gradients(loss)
    clipped_gradients = [(tf.clip_by_value(grad, -1, 1), var) for grad, var in gradients]
    opt = optimizer.apply_gradients(clipped_gradients, global_step=global_step)
else:
    opt = optimizer.minimize(loss, global_step=global_step)

But when I turn on gradient clipping, I get the
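compute_gradients returns (None, var) pairs for variables the loss does not depend on, and tf.clip_by_value cannot accept None. A hedged sketch of the usual guard (TF 1.x API, reusing the names from the snippet above):

```python
import tensorflow as tf  # TF 1.x style API assumed

optimizer = tf.train.GradientDescentOptimizer(learning_rate)
gradients = optimizer.compute_gradients(loss)

# Leave None gradients untouched; clip only real gradients.
clipped_gradients = [
    (tf.clip_by_value(grad, -1.0, 1.0), var) if grad is not None else (grad, var)
    for grad, var in gradients
]
opt = optimizer.apply_gradients(clipped_gradients, global_step=global_step)
```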