torch

PyTorch's GPU memory release mechanism: torch.cuda.empty_cache()

老子叫甜甜 submitted on 2021-02-16 09:46:35
PyTorch can automatically reclaim GPU memory we are no longer using, in a way similar to Python's reference counting: once the data in a region of memory is no longer referenced by any variable, that region is freed. One caveat: when part of the GPU memory has been released this way, the release is not visible through the nvidia-smi command. For example:

```python
import torch

device = torch.device('cuda:0')

# Define two tensors
dummy_tensor_4 = torch.randn(120, 3, 512, 512).float().to(device)  # 120*3*512*512*4 bytes ≈ 377.49 MB
dummy_tensor_5 = torch.randn(80, 3, 512, 512).float().to(device)   # 80*3*512*512*4 bytes ≈ 251.66 MB

# Now release them
dummy_tensor_4 = dummy_tensor_4.cpu()
dummy_tensor_5 = dummy_tensor_5.cpu()
# The GPU memory above has been released, yet nvidia-smi still shows it as occupied

torch.cuda.empty_cache()
# Only after this call does nvidia-smi report the memory as freed
```

The PyTorch developers have commented on this as well: the memory released this way can still be used …
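A quick way to watch the difference between memory held by live tensors and memory merely cached by the allocator is PyTorch's own counters; a minimal sketch, assuming a PyTorch version where `torch.cuda.memory_reserved` exists (older releases call it `memory_cached`):

```python
import torch

device = torch.device('cuda:0')
x = torch.randn(120, 3, 512, 512, device=device)   # ~377 MB of float32

print(torch.cuda.memory_allocated(device))  # bytes held by live tensors
print(torch.cuda.memory_reserved(device))   # bytes held by the caching allocator (what nvidia-smi reflects)

del x                                       # returned to PyTorch's cache, not to the driver
print(torch.cuda.memory_allocated(device))  # drops to ~0
print(torch.cuda.memory_reserved(device))   # unchanged until empty_cache()

torch.cuda.empty_cache()                    # hand the cached blocks back to the driver
print(torch.cuda.memory_reserved(device))   # now ~0
```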

PyTorch: Dataloader for time series task

馋奶兔 submitted on 2021-02-16 08:35:42
Question: I have a Pandas dataframe with n rows and k columns loaded into memory. I would like to get batches for a forecasting task where the first training example of a batch should have shape (q, k), with q referring to a number of consecutive rows from the original dataframe (e.g. rows 0:128). The next example should be rows 128:256, again of shape (q, k), and so on. Ultimately, one batch should have shape (32, q, k), with 32 being the batch size. Since TensorDataset from data_utils does not work here, I am wondering …
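A minimal sketch of one way to serve such windows with a plain `torch.utils.data.Dataset` (the window length q=128 and batch size 32 come from the question; the dataframe `df` here is a random stand-in):

```python
import numpy as np
import pandas as pd
import torch
from torch.utils.data import Dataset, DataLoader

class WindowDataset(Dataset):
    """Serves consecutive, non-overlapping windows of q rows from a dataframe."""
    def __init__(self, df: pd.DataFrame, q: int = 128):
        self.data = torch.from_numpy(df.to_numpy(dtype=np.float32))
        self.q = q

    def __len__(self):
        return len(self.data) // self.q  # drop the last partial window

    def __getitem__(self, i):
        return self.data[i * self.q:(i + 1) * self.q]  # shape (q, k)

# Hypothetical dataframe: n rows, k columns
df = pd.DataFrame(np.random.randn(10_000, 5))
loader = DataLoader(WindowDataset(df, q=128), batch_size=32, shuffle=False)

for batch in loader:
    print(batch.shape)  # torch.Size([32, 128, 5]), i.e. (batch, q, k)
    break
```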

Detectron: some follow-ups on PyTorch, Caffe2, and Detectron

牧云@^-^@ submitted on 2021-02-15 02:26:52
The PyTorch website, http://pytorch.org/, only offers Ubuntu and Mac builds of PyTorch, blatantly snubbing low-end Windows users.

1. Caffe2 source: understanding Caffe2 storage. The storage hierarchy in Caffe2, from top to bottom, is Workspace, Blob, Tensor. A Workspace stores all the Blobs and instantiated Nets at runtime. A Blob can be viewed as a wrapper class around an arbitrary type, e.g. wrapping a Tensor, a float, a string, and so on. A Tensor is simply a multi-dimensional array, analogous to a Blob in Caffe(1). The calls that actually allocate storage in Caffe2 live in the Context, which splits into CPUContext and CUDAContext. The storage allocation process is then analyzed from the bottom up: Context, Tensor, Blob, Workspace, followed by a summary. [The original post includes a flow diagram of how an Operator goes from creating a Blob to actually allocating storage.]

2. Using Caffe2 Detectron, first steps. On InferImage: on an NVIDIA Tesla P100 GPU, inference takes roughly 130-140 ms per image, though this of course depends on the configured input image size.

3. Detectron training: a brief introduction to training a model on the COCO Dataset, using …
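To make the Workspace/Blob relationship concrete, a small sketch using the caffe2.python bindings (assuming a working Caffe2 install; `FeedBlob`/`FetchBlob` are its standard workspace calls):

```python
import numpy as np
from caffe2.python import workspace

# Feed a numpy array into the default Workspace; it is stored as a Blob
# named "x" whose payload is a Tensor.
workspace.FeedBlob("x", np.random.rand(3, 4).astype(np.float32))

print(workspace.Blobs())      # ['x'] -- every Blob the Workspace currently holds
x = workspace.FetchBlob("x")  # copy the Tensor back out as a numpy array
print(x.shape)                # (3, 4)
```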

Installing PyTorch on Windows

家住魔仙堡 submitted on 2021-02-13 07:22:03
Environment: Windows 10, Python 3.5, CUDA 8.0. The installation command for each version can be looked up on the official site. Here we install via pip, using the following commands:

```
pip3 install http://download.pytorch.org/whl/cu80/torch-0.4.0-cp35-cp35m-win_amd64.whl
pip3 install torchvision
```

After installation, verify it in Python:

```
Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:18:55) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> import torchvision
>>>
```

A problem that may come up: `import torch` fails with

```
from torch._C import *
ImportError: DLL load failed: The specified module could not be found
```

The cause is a few missing DLL files, which can be resolved with these steps:

1. Download win-64/intel-openmp-2018.0.0-8.tar …

Wind speed forecasting with a PyTorch LSTM recurrent neural network

独自空忆成欢 submitted on 2021-02-12 07:16:19
# Time-series forecasting uses the characteristics of an event over a past period to predict its characteristics over a future period. This is a relatively complex class of predictive modeling problems: unlike regression models, a time-series model depends on the order in which events occur, and feeding the same values into the model in a different order produces different results.

# The most common and most powerful tool for time-series modeling is the recurrent neural network (RNN). Unlike an ordinary neural network, whose computations are independent of one another, each hidden-layer computation in an RNN depends both on the current input and on the previous hidden-layer result. This gives the RNN's outputs a memory of the preceding steps.

# LSTM (Long Short-Term Memory) is a variant of the RNN that works around the limitations of the plain RNN model.

# Here we implement an LSTM in PyTorch to forecast future wind speed.

```python
# Imports (all of them are used)
import torch
from torch.autograd import Variable
import torch.nn as nn
import pandas as pd
from pandas import DataFrame
import matplotlib.pyplot as plt
import numpy as np
```

# Raw data: this is a time-series problem, so the timestamp column is not fed into training or testing and can be dropped. The preceding rows are used to predict the next row. Put plainly, [1…
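A minimal sketch of the kind of LSTM regressor such a post builds (the hidden size, window length, and feature count below are illustrative assumptions, not the post's exact values):

```python
import torch
import torch.nn as nn

class WindLSTM(nn.Module):
    """LSTM regressor: a window of past feature rows in, one wind speed out."""
    def __init__(self, num_features: int, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(num_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):             # x: (batch, seq_len, num_features)
        out, _ = self.lstm(x)         # out: (batch, seq_len, hidden_size)
        return self.head(out[:, -1])  # predict from the last time step: (batch, 1)

model = WindLSTM(num_features=4)
window = torch.randn(8, 24, 4)        # batch of 8 windows, 24 time steps, 4 features
print(model(window).shape)            # torch.Size([8, 1])
```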

Classification with pretrained pytorch vgg16 model and its classes

大兔子大兔子 submitted on 2021-02-11 15:54:33
Question: I wrote an image classification model with PyTorch's pretrained vgg16 model.

```python
import matplotlib.pyplot as plt
import numpy as np
import torch
from PIL import Image
import urllib
from skimage.transform import resize
from skimage import io
import yaml

# Downloading the ImageNet 1000-classes list
file = urllib.request.urlopen("https://gist.githubusercontent.com/yrevar/942d3a0ac09ec9e5eb3a/raw/238f720ff059c1f82f368259d1ca4ffa5dd8f9f5/imagenet1000_clsidx_to_labels.txt")
classes = ''
for f in file:
```
…
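The excerpt cuts off mid-loop; a hedged sketch of how such a classifier typically finishes (torchvision's vgg16 and its usual preprocessing are standard, but the class-list parsing and the input file `dog.jpg` are assumptions on my part, not the asker's code):

```python
import ast
import urllib.request
import torch
from PIL import Image
from torchvision import models, transforms

# The gist stores a Python dict literal {0: 'tench, Tinca tinca', ...}; parse it.
url = ("https://gist.githubusercontent.com/yrevar/942d3a0ac09ec9e5eb3a/raw/"
       "238f720ff059c1f82f368259d1ca4ffa5dd8f9f5/imagenet1000_clsidx_to_labels.txt")
classes = ast.literal_eval(urllib.request.urlopen(url).read().decode())

model = models.vgg16(pretrained=True).eval()
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("dog.jpg")).unsqueeze(0)  # hypothetical input image
with torch.no_grad():
    idx = model(img).argmax(dim=1).item()
print(classes[idx])  # human-readable ImageNet label
```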


PyTorch allocates memory for a small tensor on CPU and GPU but gets an error on a node with more than 400 GB

南笙酒味 submitted on 2021-02-11 14:57:38
Question: I would like to build a torch.nn.Embedding with tensors on Databricks (the node is p2.8xlarge), using Python 3. My code:

```python
import numpy as np
import torch
from torch import nn

num_embedding, num_dim = 14000, 300
embedding = nn.Embedding(num_embedding, num_dim)

row, col = 800000, 302
t = [[x for x in range(col)] for _ in range(row)]
t1 = torch.tensor(t)
print(t1.shape)          # torch.Size([800000, 302])
t1.dtype, t1.nelement()  # torch.int64, 241600000
type(t1), t1.device, (t1.nelement() * t1.element_size())/
```
…
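For scale, the int64 tensor alone is about 1.93 GB, and the 241-million-element Python list of lists built first costs far more than that; a small sketch of the arithmetic, plus a list-free construction (my suggestion, not the asker's fix):

```python
import torch

row, col = 800_000, 302
print(row * col * 8 / 1e9)  # int64 is 8 bytes/element -> ~1.93 GB for the tensor alone

# Build the same tensor without materializing a huge Python list of lists:
t1 = torch.arange(col, dtype=torch.int64).repeat(row, 1)
print(t1.shape)             # torch.Size([800000, 302])
```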

How to convert a list of tensors into a torch::Tensor?

我的未来我决定 submitted on 2021-02-10 04:24:22
Question: I'm trying to convert the following Python code into its libtorch equivalent:

```python
tfm = np.float32([[A[0, 0], A[1, 0], A[2, 0]],
                  [A[0, 1], A[1, 1], A[2, 1]]])
```

In PyTorch we could simply use torch.stack, or simply use torch.tensor() like below:

```python
tfm = torch.tensor([[A_tensor[0, 0], A_tensor[1, 0], 0],
                    [A_tensor[0, 1], A_tensor[1, 1], 0]])
```

However, in libtorch this doesn't hold; that is, I cannot simply do:

```cpp
auto tfm = torch::tensor({{A.index({0,0}), A.index({1,0}), A.index({2,0})},
                          {A.index({0,1}), A…
```
