pytorch

CUDA and PyTorch memory usage

。_饼干妹妹 submitted on 2021-02-11 16:41:29
Question: I am using CUDA and PyTorch 1.4.0. When I try to increase batch_size, I get the following error: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 2.74 GiB already allocated; 7.80 MiB free; 2.96 GiB reserved in total by PyTorch). I haven't found anything about PyTorch memory usage. Also, I don't understand why I have only 7.80 MiB available. Should I just use a video card with better performance, or can I free some memory? FYI, I have a GTX 1050 Ti and Python 3.7.
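On a 4 GiB card this error usually comes down to batch size and cached allocations. Below is a minimal sketch of the usual checks and mitigations; the batch_size value is illustrative and not taken from the question.

```python
import torch

# Memory actually held by live tensors in this process, in MiB
print(torch.cuda.memory_allocated() / 1024**2, "MiB allocated by tensors")

# Release cached blocks held by PyTorch's allocator back to the driver
# (this does not free memory held by live tensors)
torch.cuda.empty_cache()

# The usual fixes: a smaller batch_size, and no autograd bookkeeping at eval time
batch_size = 8  # illustrative value; lower it until training fits in 4 GiB
with torch.no_grad():
    pass  # run validation / inference here so activations are not kept for backward
```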

Is there any way to convert a PyTorch tensor to a TensorFlow tensor

我是研究僧i submitted on 2021-02-11 16:39:22
Question: https://github.com/taoshen58/BiBloSA/blob/ec67cbdc411278dd29e8888e9fd6451695efc26c/context_fusion/self_attn.py#L29 I need to use multi_dimensional_attention from the above link, which is implemented in TensorFlow, but I am using PyTorch. Can I convert a PyTorch tensor to a TensorFlow tensor, or do I have to implement it in PyTorch? In the code I am trying to use, I have to pass 'rep_tensor' as a TensorFlow tensor type, but I have a PyTorch tensor: def multi_dimensional_attention(rep_tensor, rep_mask
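There is no direct conversion between the two frameworks, but a tensor can be moved through NumPy. A minimal sketch, assuming eager TensorFlow 2.x; the tensor contents are illustrative. Note that gradients do not flow across this bridge, so if the attention block must be trained end-to-end it still needs a PyTorch port.

```python
import torch
import tensorflow as tf

rep_tensor_pt = torch.randn(2, 5, 8)  # illustrative stand-in for the real rep_tensor

# Bridge through NumPy: detach from autograd and move to CPU first
rep_tensor_tf = tf.convert_to_tensor(rep_tensor_pt.detach().cpu().numpy())

# ...and back to PyTorch once the TensorFlow op has produced a result
# (.numpy() on a tf.Tensor requires eager execution, i.e. TF 2.x)
back_to_pt = torch.from_numpy(rep_tensor_tf.numpy())
```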

Classification with the pretrained PyTorch VGG16 model and its classes

大兔子大兔子 submitted on 2021-02-11 15:54:33
Question: I wrote an image classification model with PyTorch's pretrained VGG16 model. import matplotlib.pyplot as plt import numpy as np import torch from PIL import Image import urllib from skimage.transform import resize from skimage import io import yaml # Downloading imagenet 1000 classes list file = urllib.request.urlopen("https://gist.githubusercontent.com/yrevar/942d3a0ac09ec9e5eb3a/raw/238f720ff059c1f82f368259d1ca4ffa5dd8f9f5/imagenet1000_clsidx_to_labels.txt") classes = '' for f in file:
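A hedged sketch of the usual torchvision route: load the pretrained VGG16, apply the standard ImageNet preprocessing, and map the top logit to the class list downloaded in the question. The image filename is illustrative, and `classes` is assumed to be the dict parsed from the gist above.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing expected by torchvision's pretrained VGG16
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.vgg16(pretrained=True)
model.eval()

img = Image.open("dog.jpg")           # illustrative input image
batch = preprocess(img).unsqueeze(0)  # shape: [1, 3, 224, 224]

with torch.no_grad():
    logits = model(batch)
idx = logits.argmax(dim=1).item()     # index into the 1000 ImageNet classes
# look the label up in the dict parsed from the gist, e.g. classes[idx]
```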

TypeError: h5py objects cannot be pickled

≯℡__Kan透↙ submitted on 2021-02-11 15:48:11
Question: I am trying to run a PyTorch implementation of some code that is supposed to work on the SBD dataset. The training labels are originally available in .bin files, which are then converted to HDF5 (.h5) files. Upon running the algorithm, I get the error: "TypeError: h5py objects cannot be pickled". I think the error is stemming from torch.utils.data.DataLoader. Any idea if I am missing any concept here? I read that pickling is generally not preferred, but as of now my dataset is in HDF5 format
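The usual cause is that the Dataset keeps an open h5py.File handle as an attribute, and the DataLoader's worker processes then try to pickle it. A common workaround is to open the file lazily inside each worker (or simply set num_workers=0). The sketch below assumes a hypothetical file path and a 'labels' dataset name, not the actual SBD layout.

```python
import h5py
import torch
from torch.utils.data import Dataset, DataLoader

class H5Dataset(Dataset):
    """Opens the HDF5 file lazily so no h5py handle has to be pickled."""
    def __init__(self, path):
        self.path = path
        self.file = None                      # opened per worker, on first access
        with h5py.File(path, "r") as f:       # read only metadata up front
            self.length = len(f["labels"])    # 'labels' is an assumed dataset name

    def __getitem__(self, idx):
        if self.file is None:
            self.file = h5py.File(self.path, "r")
        return torch.as_tensor(self.file["labels"][idx])

    def __len__(self):
        return self.length

loader = DataLoader(H5Dataset("train.h5"), batch_size=32, num_workers=2)
```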

How to write a scikit-learn estimator in PyTorch

若如初见. submitted on 2021-02-11 15:41:15
Question: I developed an estimator in scikit-learn, but because of performance issues (both speed and memory usage) I am thinking of making the estimator run on a GPU. One way I can think of to do this is to write the estimator in PyTorch (so I can use GPU processing) and then use Google Colab to leverage their cloud GPUs and memory capacity. What would be the best way to write an estimator that is already scikit-learn compatible in PyTorch? Any pointers or hints pointing to the right
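One common route is to keep the scikit-learn estimator API (fit/predict, plus get_params/set_params inherited from BaseEstimator) and do the numerical work inside with PyTorch tensors on the GPU; the skorch library packages exactly this pattern. Below is a minimal hand-rolled sketch; the MLP architecture, hyperparameters, and class name are all illustrative assumptions, not the questioner's estimator.

```python
import torch
from torch import nn, optim
from sklearn.base import BaseEstimator, ClassifierMixin

class TorchMLPClassifier(BaseEstimator, ClassifierMixin):
    """Minimal sklearn-compatible wrapper around a PyTorch MLP (illustrative)."""
    def __init__(self, hidden_dim=64, lr=1e-3, epochs=10, device="cpu"):
        self.hidden_dim = hidden_dim
        self.lr = lr
        self.epochs = epochs
        self.device = device  # pass "cuda" on a GPU machine / Colab

    def fit(self, X, y):
        X = torch.as_tensor(X, dtype=torch.float32, device=self.device)
        y = torch.as_tensor(y, dtype=torch.long, device=self.device)
        n_classes = int(y.max().item()) + 1
        self.model_ = nn.Sequential(
            nn.Linear(X.shape[1], self.hidden_dim),
            nn.ReLU(),
            nn.Linear(self.hidden_dim, n_classes),
        ).to(self.device)
        opt = optim.Adam(self.model_.parameters(), lr=self.lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(self.epochs):          # simple full-batch training loop
            opt.zero_grad()
            loss = loss_fn(self.model_(X), y)
            loss.backward()
            opt.step()
        return self

    def predict(self, X):
        X = torch.as_tensor(X, dtype=torch.float32, device=self.device)
        with torch.no_grad():
            return self.model_(X).argmax(dim=1).cpu().numpy()
```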

PyTorch allocates memory for a small tensor on CPU and GPU but gets an error on a node with more than 400 GB

南笙酒味 submitted on 2021-02-11 14:57:38
Question: I would like to build a torch.nn.Embedding with tensors on Databricks (the node is a p2.8xlarge) with Python 3. My code: import numpy as np import torch from torch import nn num_embedding, num_dim = 14000, 300 embedding = nn.Embedding(num_embedding, num_dim) row, col = 800000, 302 t = [[x for x in range(col)] for _ in range(row)] t1 = torch.tensor(t) print(t1.shape) # torch.Size([800000, 302]) t1.dtype, t1.nelement() # torch.int64, 241600000 type(t1), t1.device, (t1.nelement() * t1.element_size())/
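The index tensor itself is modest, but the embedding lookup materialises a row × col × num_dim float32 tensor, which is what exhausts even a node with more than 400 GB of memory once gradients and copies are added. A back-of-the-envelope sketch (my arithmetic, not output from the question):

```python
# Rough memory estimate for embedding(t1), using the sizes from the question
row, col, num_dim = 800_000, 302, 300

index_bytes  = row * col * 8            # t1 is int64
output_bytes = row * col * num_dim * 4  # embedding(t1) is float32

print(index_bytes  / 1024**3)   # ~1.8 GiB just for the index tensor
print(output_bytes / 1024**3)   # ~270 GiB for the lookup result, before gradients

# A practical workaround is to look rows up in chunks rather than all at once:
# for chunk in t1.split(10_000):   # hypothetical chunk size
#     emb = embedding(chunk)
```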

PyTorch RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_select

帅比萌擦擦* submitted on 2021-02-11 14:37:48
Question: I am training a model that takes tokenized strings, which are passed through an embedding layer and then an LSTM. However, there seems to be an error in the input, as it does not pass through the embedding layer. class DrugModel(nn.Module): def __init__(self, input_dim, output_dim, hidden_dim, drug_embed_dim, lstm_layer, lstm_dropout, bi_lstm, linear_dropout, char_vocab_size, char_embed_dim, char_dropout, dist_fn, learning_rate, binary, is_mlp, weight_decay, is_graph, g_layer, g
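That RuntimeError almost always means the embedding weights live on the GPU while the index tensor is still on the CPU (or vice versa). A minimal sketch of keeping both on the same device; the module and shapes are illustrative, not the DrugModel from the question.

```python
import torch
from torch import nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

embed = nn.Embedding(num_embeddings=1000, embedding_dim=64).to(device)  # weights on GPU
tokens = torch.randint(0, 1000, (8, 20))                                # still on CPU

# Passing CPU indices to a CUDA embedding raises the _th_index_select error;
# move the batch to the same device as the module before the forward pass.
out = embed(tokens.to(device))
print(out.shape)  # torch.Size([8, 20, 64])
```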