pytorch

Calculate covariance matrix for complex data in two channels (no complex data type)

好久不见 · Submitted on 2021-01-20 20:31:23
Question: I have complex-valued data given in 2 channels of a matrix (one holds the real part, the other the imaginary part), so the matrix dimensions are (height, width, 2), since PyTorch does not have a native complex data type. I now want to calculate the covariance matrix. The stripped-down numpy calculation adapted for PyTorch is this: def cov(m, y=None): if m.ndimension() > 2: raise ValueError("m has more than 2 dimensions") if y.ndimension() > 2: raise ValueError('y has more than 2 dimensions') X = m if X
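The excerpt cuts off before the complete function, so the following is only a minimal sketch of one way to compute the complex covariance in pure real arithmetic. It assumes the observations are stacked as a tensor of shape (N, D, 2), with the last dimension holding (real, imaginary); the function name cov_complex and that layout are illustrative assumptions, not the original poster's code.

import torch

def cov_complex(m):
    # m: (N, D, 2) -- N observations of a D-dimensional complex vector,
    # last dim holding (real, imag). Shape convention is an assumption.
    # Returns the real and imaginary parts of cov = E[(x - mu)(x - mu)^H],
    # each of shape (D, D).
    re, im = m[..., 0], m[..., 1]            # (N, D) each
    re = re - re.mean(dim=0, keepdim=True)   # center real part
    im = im - im.mean(dim=0, keepdim=True)   # center imaginary part
    n = m.shape[0] - 1                       # unbiased normalization, like numpy.cov
    # (a + ib)(c - id) expanded into real arithmetic:
    cov_re = (re.t() @ re + im.t() @ im) / n
    cov_im = (im.t() @ re - re.t() @ im) / n
    return cov_re, cov_im

# tiny usage example
x = torch.randn(100, 4, 2)
cr, ci = cov_complex(x)
print(cr.shape, ci.shape)   # torch.Size([4, 4]) torch.Size([4, 4])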

Multi label classification in pytorch

此生再无相见时 · Submitted on 2021-01-20 19:18:49
Question: I have a multi-label classification problem. I have 11 classes and around 4k examples. Each example can have from 1 to 4-5 labels. At the moment, I'm training a separate classifier for each class with log_loss. As you can expect, it is taking quite some time to train 11 classifiers, and I would like to try another approach and train only 1 classifier. The idea is that the last layer of this classifier would have 11 nodes and would output a real number per class, which would be converted to a
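Picking up where the excerpt is cut off: the single-classifier approach is typically implemented as one network emitting 11 logits, trained with nn.BCEWithLogitsLoss, which applies a per-class sigmoid plus binary cross-entropy. The sketch below is an illustration under that assumption; the hidden size and feature dimension are placeholders, not values from the question.

import torch
import torch.nn as nn

num_classes = 11
in_features = 300                # placeholder feature dimension (assumption)

# One classifier with 11 output nodes; each node is an independent
# binary decision, so BCEWithLogitsLoss is applied per class.
model = nn.Sequential(
    nn.Linear(in_features, 128),
    nn.ReLU(),
    nn.Linear(128, num_classes),   # raw logits, one per class
)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# dummy batch: multi-hot targets, roughly 1 to 4-5 active labels per example
x = torch.randn(32, in_features)
y = (torch.rand(32, num_classes) > 0.7).float()

optimizer.zero_grad()
logits = model(x)
loss = criterion(logits, y)
loss.backward()
optimizer.step()

# at inference time, threshold the per-class sigmoid probabilities
preds = (torch.sigmoid(logits) > 0.5).int()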

Difference between 1 LSTM with num_layers = 2 and 2 LSTMs in pytorch

我们两清 · Submitted on 2021-01-20 16:39:02
Question: I am new to deep learning and currently working on using LSTMs for language modeling. I was looking at the PyTorch documentation and was confused by it. If I create an nn.LSTM(input_size, hidden_size, num_layers) where hidden_size = 4 and num_layers = 2, I think I will have an architecture something like: op0 op1 .... LSTM -> LSTM -> h3 LSTM -> LSTM -> h2 LSTM -> LSTM -> h1 LSTM -> LSTM -> h0 x0 x1 ..... If I do something like nn.LSTM(input_size, hidden_size, 1) nn.LSTM(input_size, hidden_size
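For reference, here is a small sketch contrasting the two constructions the question compares: a single nn.LSTM with num_layers=2 versus two single-layer nn.LSTM modules chained by hand. The input size, hidden size, sequence length, and batch size are made-up values for illustration only.

import torch
import torch.nn as nn

input_size, hidden_size = 10, 4
x = torch.randn(5, 3, input_size)    # (seq_len, batch, input_size)

# Option A: one module with two stacked layers; stacking is handled internally.
stacked = nn.LSTM(input_size, hidden_size, num_layers=2)
out_a, (h_a, c_a) = stacked(x)       # out_a: (5, 3, 4), h_a: (2, 3, 4)

# Option B: two separate single-layer LSTMs chained manually.
# The second layer's input_size must equal the first layer's hidden_size.
lstm1 = nn.LSTM(input_size, hidden_size, num_layers=1)
lstm2 = nn.LSTM(hidden_size, hidden_size, num_layers=1)
out1, _ = lstm1(x)
out_b, (h_b, c_b) = lstm2(out1)      # out_b: (5, 3, 4), h_b: (1, 3, 4)

print(out_a.shape, h_a.shape)   # torch.Size([5, 3, 4]) torch.Size([2, 3, 4])
print(out_b.shape, h_b.shape)   # torch.Size([5, 3, 4]) torch.Size([1, 3, 4])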

How can I access layers in a pytorch module by index?

北慕城南 · Submitted on 2021-01-20 11:40:33
Question: I am trying to write a PyTorch module with multiple layers. Since I need the intermediate outputs, I cannot put them all in a Sequential as usual. On the other hand, since there are many layers, what I have in mind is to put the layers in a list and access them by index in a loop. Below describes what I am trying to achieve: import torch import torch.nn as nn import torch.optim as optim class MyModel(nn.Module): def __init__(self): super(MyModel, self).__init__() self.layer_list = [] self.layer
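The excerpt is truncated, but the idiomatic way to keep an indexable list of layers in PyTorch is nn.ModuleList, which registers each layer's parameters with the module (a plain Python list does not). The following is a minimal sketch along those lines; the layer types, widths, and forward logic are illustrative assumptions rather than the original poster's code.

import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self, num_layers=4, width=16):   # sizes are placeholders (assumption)
        super().__init__()
        # nn.ModuleList (instead of a plain Python list) makes the layers
        # show up in model.parameters() and supports indexing/iteration.
        self.layer_list = nn.ModuleList(
            [nn.Linear(width, width) for _ in range(num_layers)]
        )

    def forward(self, x):
        intermediates = []
        for layer in self.layer_list:      # self.layer_list[i] also works by index
            x = torch.relu(layer(x))
            intermediates.append(x)        # keep every intermediate output
        return x, intermediates

model = MyModel()
out, feats = model(torch.randn(2, 16))
print(out.shape, len(feats))    # torch.Size([2, 16]) 4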

ACL 2021 submission: a guide to avoiding common pitfalls

故事扮演 · Submitted on 2021-01-19 12:32:33
This article is reposted from the Joint Laboratory of HIT and iFLYTEK Research (HFL). Original link: https://mp.weixin.qq.com/s/0cMM2MHUhsn0MKZGIMhyVw

Note: a discussion group is linked at the end of the article. I have been busy with the ACL deadline lately, so many people who added me did not get a reply before the request expired; please add me again, and requests with a proper note will definitely get a reply. Thank you for your understanding.

Recently, the ACL 2021 organizers released the second call for papers. There is one week left until the ACL 2021 abstract deadline and two weeks until the full-paper deadline. The HFL editorial team has put together a detailed walkthrough of the key points of this year's ACL 2021 submission process, hoping to help readers who are preparing ACL 2021 papers.

ACL 2021 call for papers: https://2021.aclweb.org/calls/papers/

Most important of all: two-stage submission. This year ACL uses a two-stage submission process: an "abstract submission" first, followed by a "full-paper submission". Be aware that both stages are mandatory; you cannot skip the abstract submission. Also note that long and short papers share the same deadlines.

Abstract submission deadline: January 25, 2021, 23:59 (Beijing time: January 26, 19:59)
Full-paper submission deadline: February 1, 2021, 23:59 (Beijing time: February 2, 19:59)
Note: the official times are in the UTC-12 timezone; Beijing time is UTC+8.

Friendly reminder: do not wait until the last moment to submit. Judging by previous years, the system is very likely to be extremely slow right before the deadline, and your submission might not go through.