pytorch

How do I list all currently available GPUs with pytorch?

霸气de小男生 submitted on 2021-01-01 09:08:08
Question: I know I can access the current GPU using torch.cuda.current_device(), but how can I get a list of all the currently available GPUs?

Answer 1: You can list all the available GPUs by doing:

    >>> import torch
    >>> available_gpus = [torch.cuda.device(i) for i in range(torch.cuda.device_count())]
    >>> available_gpus
    [<torch.cuda.device object at 0x7f2585882b50>]

Source: https://stackoverflow.com/questions/64776822/how-do-i-list-all-currently-available-gpus-with-pytorch
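If you also want human-readable device names rather than device objects, a minimal sketch using torch.cuda.get_device_name (a standard PyTorch call):

    import torch

    # Enumerate every visible CUDA device and print its index and name.
    for i in range(torch.cuda.device_count()):
        print(f"cuda:{i} -> {torch.cuda.get_device_name(i)}")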

Truncated SVD decomposition of PyTorch tensor without transferring to CPU

耗尽温柔 submitted on 2020-12-31 20:03:53
Question: I'm training a model in PyTorch and I want to use a truncated SVD decomposition of the input. To compute the SVD, I transfer the input, which is a PyTorch CUDA tensor, to the CPU, perform the truncation with TruncatedSVD from scikit-learn, and then transfer the result back to the GPU. The following is the code for my model:

    class ImgEmb(nn.Module):
        def __init__(self, input_size, hidden_size):
            super(ImgEmb, self).__init__()
            self.input_size = input_size
            self.hidden_size = hidden_size
            self.drop = nn.Dropout(0.2)
            ...
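The answer is cut off in this scrape, but PyTorch itself ships a low-rank SVD that stays on the GPU, avoiding the CPU round-trip entirely. A minimal sketch using torch.svd_lowrank (available in recent PyTorch releases); the input shape and the rank q=64 below are placeholders:

    import torch

    x = torch.randn(256, 1024, device="cuda")  # hypothetical input batch

    # Truncated (low-rank) SVD computed directly on the GPU;
    # q is the number of singular values/vectors to keep.
    U, S, V = torch.svd_lowrank(x, q=64)

    # Rank-64 projection of x, analogous to TruncatedSVD.fit_transform:
    x_reduced = x @ V  # shape: (256, 64)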

How to load the saved tokenizer from pretrained model in Pytorch

混江龙づ霸主 submitted on 2020-12-30 08:37:28
Question: I fine-tuned a pretrained BERT model in PyTorch using the Hugging Face transformers library. All of the training/validation is done on a GPU in the cloud. At the end of training, I save the model and tokenizer like below:

    best_model.save_pretrained('./saved_model/')
    tokenizer.save_pretrained('./saved_model/')

This creates the following files in the saved_model directory:

    config.json
    added_token.json
    special_tokens_map.json
    tokenizer_config.json
    vocab.txt
    pytorch_model.bin

Now, I download the saved_model directory in …
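The rest of the question and its answer are cut off here, but reloading is symmetric with saving. A minimal sketch, relying on the fact that from_pretrained accepts a local directory path; the Auto classes below are one reasonable choice, and you would swap AutoModel for the concrete class you fine-tuned (e.g. BertForSequenceClassification):

    from transformers import AutoModel, AutoTokenizer

    # Both calls accept a local directory created by save_pretrained().
    tokenizer = AutoTokenizer.from_pretrained('./saved_model/')
    model = AutoModel.from_pretrained('./saved_model/')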

ValueError: Target size (torch.Size([16])) must be the same as input size (torch.Size([16, 1]))

流过昼夜 submitted on 2020-12-30 06:32:26
Question:

    ValueError                                Traceback (most recent call last)
    <ipython-input-30-33821ccddf5f> in <module>
         23     output = model(data)
         24     # calculate the batch loss
    ---> 25     loss = criterion(output, target)
         26     # backward pass: compute gradient of the loss with respect to model parameters
         27     loss.backward()

    C:\Users\mnauf\Anaconda3\envs\federated_learning\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
        487             result = self._slow_forward(*input, **kwargs)
        488         else:
    --> 489             result = self
    ...
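The traceback is cut off before the ValueError itself, but the title gives the mismatch: targets of shape (16,) against an output of shape (16, 1). That pairing is typical of BCE-style losses, which require input and target to have identical shapes. A minimal sketch of the usual fix, assuming criterion is nn.BCEWithLogitsLoss (the actual loss class is not shown in the source):

    import torch
    import torch.nn as nn

    criterion = nn.BCEWithLogitsLoss()
    output = torch.randn(16, 1)                   # model output: (batch, 1)
    target = torch.randint(0, 2, (16,)).float()   # labels: (batch,)

    # Align the shapes before computing the loss:
    loss = criterion(output, target.unsqueeze(1))      # target -> (16, 1)
    # equivalently: loss = criterion(output.squeeze(1), target)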

CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`

爷,独闯天下 submitted on 2020-12-30 06:12:46
Question: I got the following error when I ran my PyTorch deep learning model in Colab:

    /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
       1370         ret = torch.addmm(bias, input, weight.t())
       1371     else:
    -> 1372         output = input.matmul(weight.t())
       1373         if bias is not None:
       1374             output += bias

    RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`

I even reduced the batch size from 128 to 64, i.e. halved it, but I still got this error …
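The answer is not preserved here, but in many reported cases this error is not really about memory: because CUDA executes asynchronously, an out-of-range index elsewhere (for example a class label >= the final layer's output size, or a token id >= the embedding size) can surface later as this cuBLAS failure. A small debugging sketch; model and loader below are placeholder names, and the classifier attribute is an assumption about the model's structure:

    import os
    # Make CUDA calls synchronous so the real failing operation is reported.
    # Must be set before any CUDA work happens in the process.
    os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

    # Hypothetical names: adapt to your own model and DataLoader.
    num_classes = model.classifier.out_features
    for data, target in loader:
        assert 0 <= target.min() and target.max() < num_classes, \
            f"label out of range: {target.max().item()} vs {num_classes} classes"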

Calling super's forward() method

戏子无情 submitted on 2020-12-30 05:47:03
Question: What is the most appropriate way to call the forward() method of a parent Module? For example, if I subclass the nn.Linear module, I might do the following:

    class LinearWithOtherStuff(nn.Linear):
        def forward(self, x):
            y = super().forward(x)
            z = do_other_stuff(y)
            return z

However, the docs say not to call the forward() method directly: "Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them."
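The answer is missing from this scrape, but the docs' warning is aimed at code that calls a module from outside: layer.forward(x) would skip any registered hooks, whereas layer(x) runs them via Module.__call__. Inside your own forward, delegating to super().forward(x) is the standard pattern, since the hooks were already handled by the __call__ that invoked your forward. A minimal runnable sketch, with do_other_stuff replaced by a stand-in:

    import torch
    import torch.nn as nn

    class LinearWithOtherStuff(nn.Linear):
        def forward(self, x):
            # Fine inside forward: hooks already ran in the outer Module.__call__.
            y = super().forward(x)
            return y * 2  # stand-in for do_other_stuff(y)

    layer = LinearWithOtherStuff(4, 3)
    out = layer(torch.randn(2, 4))  # call the instance, not layer.forward()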
