How to create variable names in loop for layers in pytorch neural network

暗喜 2020-12-22 11:45

I am implementing a straightforward feedforward neural network in PyTorch. However, I am wondering if there is a nicer way to add a flexible number of layers to the network?

1 Answer
  • 2020-12-22 12:35

    You can put your layers in a ModuleList container:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    
    class Net(nn.Module):
    
        def __init__(self, input_dim, output_dim, hidden_dim):
            super(Net, self).__init__()
            self.input_dim = input_dim
            self.output_dim = output_dim
            self.hidden_dim = hidden_dim
            current_dim = input_dim
            self.layers = nn.ModuleList()
            for hdim in hidden_dim:
                self.layers.append(nn.Linear(current_dim, hdim))
                current_dim = hdim
            self.layers.append(nn.Linear(current_dim, output_dim))
    
        def forward(self, x):
            for layer in self.layers[:-1]:
                x = F.relu(layer(x))
            out = F.softmax(self.layers[-1](x), dim=-1)
            return out    
    
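    For example, the class above could be instantiated with a list of hidden sizes and run on a batch of inputs. The dimensions below (784 inputs, two hidden layers, 10 outputs) are hypothetical, just to illustrate the pattern:

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Net(nn.Module):
        # same class as in the answer above
        def __init__(self, input_dim, output_dim, hidden_dim):
            super().__init__()
            current_dim = input_dim
            self.layers = nn.ModuleList()
            for hdim in hidden_dim:
                self.layers.append(nn.Linear(current_dim, hdim))
                current_dim = hdim
            self.layers.append(nn.Linear(current_dim, output_dim))

        def forward(self, x):
            for layer in self.layers[:-1]:
                x = F.relu(layer(x))
            return F.softmax(self.layers[-1](x), dim=-1)

    # hypothetical sizes: 784 inputs, hidden layers of 128 and 64, 10 outputs
    net = Net(input_dim=784, output_dim=10, hidden_dim=[128, 64])
    x = torch.randn(32, 784)   # a batch of 32 samples
    out = net(x)
    print(out.shape)           # torch.Size([32, 10])
    ```

    Adding or removing hidden layers is then just a matter of changing the `hidden_dim` list.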

    It is very important to use PyTorch containers such as `nn.ModuleList` for the layers, and not a plain Python list; otherwise the layer parameters are not registered with the module, so the optimizer never sees them. Please see this answer to know why.
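    A quick sketch of the difference (the class names here are made up for illustration): parameters held in a plain list are invisible to `parameters()`, while those in an `nn.ModuleList` are registered:

    ```python
    import torch.nn as nn

    class WithPlainList(nn.Module):
        def __init__(self):
            super().__init__()
            # plain Python list: submodules are NOT registered
            self.layers = [nn.Linear(4, 4) for _ in range(3)]

    class WithModuleList(nn.Module):
        def __init__(self):
            super().__init__()
            # nn.ModuleList: submodules ARE registered
            self.layers = nn.ModuleList(nn.Linear(4, 4) for _ in range(3))

    print(len(list(WithPlainList().parameters())))   # 0 -- optimizer would see nothing
    print(len(list(WithModuleList().parameters())))  # 6 -- 3 weights + 3 biases
    ```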
