pytorch

Calling super's forward() method

Submitted by 与世无争的帅哥 on 2020-12-30 05:44:49
Question: What is the most appropriate way to call the forward() method of a parent Module? For example, if I subclass the nn.Linear module, I might do the following: class LinearWithOtherStuff(nn.Linear): def forward(self, x): y = super().forward(x) z = do_other_stuff(y) return z However, the docs say not to call the forward() method directly: "Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since…"
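A hedged sketch of the pattern being asked about, with torch.relu standing in for do_other_stuff since the original helper isn't shown: calling super().forward(x) inside your own forward is fine; the documentation's warning is about invoking module.forward(x) from outside instead of module(x), which would bypass registered hooks.

```python
import torch
import torch.nn as nn

class LinearWithOtherStuff(nn.Linear):
    def forward(self, x):
        # Reuse the parent's affine computation (x @ W.T + b) ...
        y = super().forward(x)
        # ... then apply extra processing; torch.relu stands in for do_other_stuff.
        return torch.relu(y)

layer = LinearWithOtherStuff(4, 2)
out = layer(torch.randn(3, 4))  # call the instance, not .forward(), so hooks still run
print(out.shape)  # torch.Size([3, 2])
```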

pytorch, AttributeError: module 'torch' has no attribute 'Tensor'

Submitted by 半世苍凉 on 2020-12-29 08:59:26
Question: I'm working with Python 3.5.1 on a computer running CentOS Linux 7.3.1611 (Core). I'm trying to use PyTorch and I'm getting started with this tutorial. Unfortunately, line #4 of the example causes trouble: >>> torch.Tensor(5, 3) Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: module 'torch' has no attribute 'Tensor' I cannot understand this error... of course in PyTorch the 'torch' module does have an attribute 'Tensor'. The same command…
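One common cause of this error (an assumption here, since the excerpt is cut off before the resolution) is a local file or directory named torch shadowing the installed package; a quick check:

```python
import torch

# If this prints a path inside your working directory rather than
# site-packages, a local torch.py (or torch/ folder) is shadowing
# the real PyTorch installation; rename or remove it.
print(torch.__file__)
print(getattr(torch, "__version__", "no __version__ attribute"))
```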

Understanding the Convolution in Convolutional Networks

Submitted by 谁都会走 on 2020-12-28 10:07:00
Convolutional neural networks are a specialized neural-network architecture that underpins computer-vision applications such as self-driving cars and face-recognition systems; in them, the basic matrix multiplication is replaced by the convolution operation. They are designed for data with a grid-like topology: time-series data can be viewed as a one-dimensional grid of samples, and image data as a two-dimensional grid of pixels.

History

The convolutional neural network was first introduced by Kunihiko Fukushima in 1980 under the name Neocognitron. It was inspired by the hierarchical model of the visual nervous system proposed by Hubel and Wiesel. The model did not catch on, however, because of its complex unsupervised training algorithm. In 1989, Yann LeCun combined backpropagation with ideas from the Neocognitron to propose an architecture called LeNet, which was used by postal services in the United States and Europe for handwritten zip-code recognition. LeCun continued this line of work and in 1998 released LeNet-5, the first modern convolutional neural network, which introduced several of the basic concepts we still use in CNNs today. He also released the MNIST handwritten-digit dataset, probably the most famous benchmark dataset in machine learning. In the 1990s the computer-vision community shifted its focus, and many researchers stopped working on CNN architectures. Neural-network research went through a long winter until 2012, when a group of researchers from the University of Toronto entered a CNN-based model (AlexNet) in the famous ImageNet challenge and won it with a 16.4% error rate. Since then, convolutional neural networks have kept advancing, and CNN-based architectures have kept winning ImageNet; in 2015…

How to use multiple GPUs in pytorch?

Submitted by 浪子不回头ぞ on 2020-12-28 07:00:57
Question: I am learning pytorch and following this tutorial: https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html I use this command to select a GPU: device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") But I want to use two GPUs in jupyter, like this: device = torch.device("cuda:0,1" if torch.cuda.is_available() else "cpu") Of course, this is wrong. So how can I do this? Answer 1: Using multiple GPUs is as simple as wrapping a model in DataParallel and increasing the…
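A minimal sketch of the DataParallel approach the answer describes; MyModel is a hypothetical stand-in for whatever network is being trained:

```python
import torch
import torch.nn as nn

class MyModel(nn.Module):  # hypothetical placeholder network
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)

model = MyModel()
if torch.cuda.device_count() > 1:
    # Replicate the model on GPUs 0 and 1; each call model(inputs) splits
    # the batch across the devices and gathers the outputs back on cuda:0.
    model = nn.DataParallel(model, device_ids=[0, 1])
model.to(torch.device("cuda:0" if torch.cuda.is_available() else "cpu"))
```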

Saving and reload huggingface fine-tuned transformer

Submitted by 我们两清 on 2020-12-26 11:11:18
Question: I am trying to reload a fine-tuned DistilBertForTokenClassification model. I am using transformers 3.4.0 and pytorch version 1.6.0+cu101. After using the Trainer to train the downloaded model, I save the model with trainer.save_model(), and in my troubleshooting I also save to a different directory via model.save_pretrained(). I am using Google Colab and saving the model to my Google Drive. After testing the model I also evaluated it on my test set, getting great results; however, when I return…
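A sketch of the save-and-reload round trip described above, assuming model and tokenizer are the fine-tuned objects from the question; the Drive path is hypothetical:

```python
from transformers import DistilBertForTokenClassification, DistilBertTokenizerFast

save_dir = "/content/drive/MyDrive/distilbert-token-clf"  # hypothetical path

# Save the weights/config and the tokenizer so the directory is
# self-contained for reloading in a fresh session.
model.save_pretrained(save_dir)
tokenizer.save_pretrained(save_dir)

# Later (e.g., after restarting Colab and remounting Drive):
model = DistilBertForTokenClassification.from_pretrained(save_dir)
tokenizer = DistilBertTokenizerFast.from_pretrained(save_dir)
```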

How can I chunk a PyTorch tensor into a specified bucket size with overlap?

Submitted by 点点圈 on 2020-12-26 11:07:43
Question: Specifically, I have a tensor of shape torch.Size([1, 16]), and I want to bucket it into 7 overlapping buckets of 4 elements each. Example: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16] should become: [[1, 2, 3, 4], [3, 4, 5, 6], [5, 6, 7, 8], [7, 8, 9, 10], [9, 10, 11, 12], [11, 12, 13, 14], [13, 14, 15, 16]] How can I achieve this with PyTorch? Answer 1: This looks like a job for unfold: t.unfold(0, 4, 2) Output: tensor([[ 1., 2., 3., 4.], [ 3., 4., 5., 6.], [ 5., 6., 7., 8.], [ 7., 8., 9., 10.], [ 9., 10., 11., 12.],…
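A self-contained version of the unfold answer; since the question's tensor has a leading batch dimension of 1, it is flattened first here (an assumption about the intended layout):

```python
import torch

t = torch.arange(1.0, 17.0).reshape(1, 16)  # shape [1, 16], values 1..16

# unfold(dimension, size, step): size-4 windows sliding by 2 along the
# flattened tensor yield 7 overlapping buckets, shape [7, 4].
buckets = t.flatten().unfold(0, 4, 2)
print(buckets)
print(buckets.shape)  # torch.Size([7, 4])
```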

What are the differences between Python and R for data science?

Submitted by 前提是你 on 2020-12-25 09:11:14
Python and R are both popular programming languages with strong ecosystems and communities, and both are widely embraced. So what are the differences between Python and R when it comes to data science? Let's take a look.

Most deep-learning research is done in Python, so tools such as Keras and PyTorch have Python-first development; you can learn about these topics in an introduction to deep learning with Keras. Another area where Python has an edge over R is deploying models into other software: Python is a general-purpose programming language, so if you write your application in Python, embedding a Python-based model in it is seamless. Deploying models and building data-engineering pipelines in Python are covered in courses on designing machine-learning workflows in Python. Python is generally regarded as a general-purpose language with an easy-to-read syntax.

A great deal of statistical-modeling research is done in R, so there is a wide choice of models. Another big selling point of R is how easily dashboards can be built with Shiny: even people without much technical experience can create, publish, and share dashboards. R's features were developed with statisticians' problems in mind, giving it domain-specific strengths, for example in data visualization.

Python was originally a programming language for software development, so it tends to be easier for people with a computer-science or software-development background to pick up, and transitioning to Python from another language is simpler than transitioning to R.

Source: oschina Link: https://my.oschina.net/u/4408222/blog/4839636