pytorch

AutoTokenizer.from_pretrained fails to load locally saved pretrained tokenizer (PyTorch)

Submitted by ◇◆丶佛笑我妖孽 on 2020-12-15 09:05:40
Question: I am new to PyTorch and have recently been trying to work with Transformers, using the pretrained tokenizers provided by HuggingFace. I can download and run them successfully, but if I save one and load it again, an error occurs. Downloading a tokenizer with AutoTokenizer.from_pretrained works: [1]: tokenizer = AutoTokenizer.from_pretrained('distilroberta-base') text = "Hello there" enc = tokenizer.encode_plus(text) enc.keys() Out[1]: dict_keys(['input_ids'
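
The intended way to persist a tokenizer locally is save_pretrained followed by from_pretrained on the saved directory. A minimal sketch of that round trip (the directory name ./my_tokenizer is an illustrative placeholder):

    from transformers import AutoTokenizer

    # Download the pretrained tokenizer, then write all of its files
    # (vocab, merges, tokenizer config) into a local directory
    tokenizer = AutoTokenizer.from_pretrained('distilroberta-base')
    tokenizer.save_pretrained('./my_tokenizer')

    # Reload from the local directory instead of the model hub
    tokenizer = AutoTokenizer.from_pretrained('./my_tokenizer')
    enc = tokenizer.encode_plus("Hello there")
    print(enc.keys())

Loading from a directory written by save_pretrained avoids the pitfalls of saving individual vocabulary files by hand, which is a common cause of the reload error described above.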

pytorch can't shuffle the dataset [closed]

Submitted by 梦想的初衷 on 2020-12-15 03:52:23
Question (closed as not reproducible or caused by typos): I am trying to build an AI with the MNIST dataset from torchvision using PyTorch, but when I type the code that shuffles the data and run it, it says: trainset = torch.utils.data.Dataloader(train, batch_size=10, shuffle=True) AttributeError:
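
The truncated AttributeError is almost certainly about capitalization: the class is torch.utils.data.DataLoader, with a capital L, and the module has no Dataloader attribute. A minimal corrected sketch (the empty dataset root and batch size of 10 follow the question):

    import torch
    from torchvision import datasets, transforms

    # 'train' as in the question: the MNIST training split
    train = datasets.MNIST("", train=True, download=True,
                           transform=transforms.ToTensor())

    # The class is DataLoader (capital D, capital L)
    trainset = torch.utils.data.DataLoader(train, batch_size=10, shuffle=True)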

A fast and accurate computational model: AI that can simulate the appearance of the universe

Submitted by 僤鯓⒐⒋嵵緔 on 2020-12-14 04:51:38
Artificial intelligence can now also help humans explore the universe. D3M, the Deep Density Displacement Model, was built by researchers from UC Berkeley, Japan's Kavli Institute, the University of British Columbia, and Carnegie Mellon University. The model can quickly and accurately simulate the appearance of the universe and how it changes when specific parameters are altered.

This is the first time astrophysicists have used artificial intelligence to run 3D simulations of the universe. The researchers say that D3M can, through parameter adjustments, quickly and accurately simulate what the universe looks like, letting scientists explore questions such as the existence of dark matter and how the universe evolves under various conditions. Because each scenario requires thousands of simulations and a great deal of computing time, developing fast and accurate computational models has become one of the important goals of modern astrophysics.

Astrophysicists focus on gravity because it is the most important force shaping the universe, but an accurate cosmological simulation must compute the motion of billions of particles under gravity over long timescales, which takes roughly 300 computing hours per simulation. There is a faster simulation method that compresses the time to about 2 minutes, but it greatly reduces accuracy.

D3M, published in the Proceedings of the National Academy of Sciences, can rapidly simulate how gravity shapes the universe. The research team used the PyTorch deep learning framework and GPUs to train a deep neural network on 8,000 different simulations produced by another, high-accuracy model. Once D3M was complete, the researchers simulated a box-shaped universe 600 million light-years across and compared the results against high-accuracy models that need hundreds of hours of computation and the fast model that needs a few minutes; D3M's computation took only 30 milliseconds.

CVPR 2020 | UDVD: A Unified Dynamic Convolutional Network for Super-Resolution with Variational Degradations

Submitted by 我是研究僧i on 2020-12-13 11:04:37
CVPR 2020 paper: Unified Dynamic Convolutional Network for Super-Resolution with Variational Degradations. Paper: https://arxiv.org/pdf/2004.06965.pdf

In recent years, CNN-based methods have shown excellent performance on image super-resolution. However, most methods assume a single degradation or a fixed combination of degradations, or even train a dedicated model for each specific degradation process. A more practical approach is therefore to train a single model that handles diverse, variational degradations. To achieve this, the paper proposes a unified network that adapts to both cross-image variations (between images) and spatial variations (within an image). How? The paper first introduces dynamic convolutions and, building on them, proposes the Unified Dynamic Convolutional Network for Variational Degradations (UDVD). As Figure 1 shows, UDVD produces good results across different degradations, whereas RCAN and ZSSR cannot cope well with multiple degradation processes.

Figure 1. Comparison of image details generated by UDVD, RCAN, and ZSSR

Generating degraded LR images: to train the model, HR images must first be processed to generate degraded LR images. The degradation process can be defined as $y = (x \otimes k)\downarrow_s + n$, where $k$ denotes the blur kernel, $\downarrow_s$ the downsampling operation, and $n$ the noise, and $y$ and $x$ are the low-resolution and original high-resolution images, respectively. The paper chooses isotropic Gaussian blur kernels and additive white Gaussian noise
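
A minimal PyTorch sketch of that degradation pipeline (not the authors' code; the kernel size, scale factor, and noise level below are illustrative assumptions):

    import torch
    import torch.nn.functional as F

    def gaussian_kernel(size=21, sigma=2.0):
        # Isotropic Gaussian blur kernel k, normalized to sum to 1
        ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
        xx, yy = torch.meshgrid(ax, ax, indexing="ij")
        k = torch.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        return (k / k.sum()).view(1, 1, size, size)

    def degrade(hr, scale=4, sigma=2.0, noise_std=0.01):
        # y = (x conv k) downsampled by s, plus additive white Gaussian noise
        c = hr.shape[1]
        k = gaussian_kernel(sigma=sigma).repeat(c, 1, 1, 1)
        blurred = F.conv2d(hr, k, padding=k.shape[-1] // 2, groups=c)
        lr = blurred[:, :, ::scale, ::scale]          # decimation downsampling
        return lr + noise_std * torch.randn_like(lr)  # AWGN term n

    hr = torch.rand(1, 3, 96, 96)   # toy HR batch
    lr = degrade(hr)                # -> shape (1, 3, 24, 24)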

BERT-based NER model giving inconsistent prediction when deserialized

Submitted by 倖福魔咒の on 2020-12-13 04:02:17
Question: I am trying to train an NER model using the HuggingFace transformers library on Colab cloud GPUs, pickle it, and load the model on my own CPU to make predictions. Code: the model is the following: from transformers import BertForTokenClassification model = BertForTokenClassification.from_pretrained( "bert-base-cased", num_labels=NUM_LABELS, output_attentions = False, output_hidden_states = False ) I am using this snippet to save the model on Colab import torch torch.save(model.state_dict(),
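
Two usual culprits for inconsistent predictions in this setup are loading a GPU-trained state dict without remapping devices and leaving the model in training mode (active dropout). A sketch of the standard save/load round trip (NUM_LABELS and the file name are placeholders):

    import torch
    from transformers import BertForTokenClassification

    NUM_LABELS = 9  # placeholder; must match the label set used in training

    model = BertForTokenClassification.from_pretrained(
        "bert-base-cased", num_labels=NUM_LABELS)

    # On Colab (GPU side): persist only the weights
    torch.save(model.state_dict(), "ner_model.pt")

    # On the local CPU machine: rebuild the same architecture, then load the
    # weights with map_location so CUDA tensors are remapped onto the CPU
    state = torch.load("ner_model.pt", map_location=torch.device("cpu"))
    model.load_state_dict(state)
    model.eval()  # disable dropout; otherwise predictions vary between runs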

Pytorch equivalent features in tensorflow?

Submitted by 跟風遠走 on 2020-12-13 03:37:48
Question: I was recently reading some PyTorch code and came across the loss.backward() and optimizer.step() functions. Are there any equivalents of these in tensorflow/keras? Answer 1: The TensorFlow equivalent of loss.backward() is tf.GradientTape. TensorFlow provides the tf.GradientTape API for automatic differentiation, i.e. computing the gradient of a computation with respect to its input variables. TensorFlow "records" all operations executed inside the context of a tf.GradientTape onto a "tape". TensorFlow then
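
A minimal sketch of the TensorFlow counterpart to the PyTorch backward/step pair (the model, data shapes, and learning rate are toy assumptions):

    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
    loss_fn = tf.keras.losses.MeanSquaredError()

    x = tf.random.normal((8, 4))
    y = tf.random.normal((8, 1))

    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x))  # forward pass, recorded on the tape

    # ~ loss.backward(): compute gradients of the loss w.r.t. the variables
    grads = tape.gradient(loss, model.trainable_variables)
    # ~ optimizer.step(): apply the gradients to update the variables
    optimizer.apply_gradients(zip(grads, model.trainable_variables))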

torch.utils.data.dataloader outputs TypeError: 'module' object is not callable

Submitted by 一个人想着一个人 on 2020-12-13 02:55:19
Question: I am trying to learn PyTorch, and I got this code from a tutorial; it is just there to import an MNIST dataset, but it outputs "TypeError: 'module' object is not callable". In the tutorial "dataloader" was written as "Dataloader", but when I run it like that it outputs "AttributeError: module 'torch.utils.data' has no attribute 'Dataloader'". The data downloaded into an mnist folder, but I don't know if it is complete. import torch import torch.nn as nn import torch.nn.functional as F import torch
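
Both errors point at the same name: torch.utils.data.dataloader (all lowercase) is a module, so calling it raises the TypeError, while Dataloader does not exist, hence the AttributeError; the class is DataLoader. A corrected sketch of the usual tutorial setup (the "mnist" root folder and batch size are illustrative):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torch.utils.data import DataLoader   # capital D, capital L
    from torchvision import datasets, transforms

    train = datasets.MNIST("mnist", train=True, download=True,
                           transform=transforms.ToTensor())
    trainset = DataLoader(train, batch_size=10, shuffle=True)

    # Quick sanity check that the download is complete and iterable
    images, labels = next(iter(trainset))
    print(images.shape)  # torch.Size([10, 1, 28, 28])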