In PyTorch there are two ways to apply dropout:
torch.nn.Dropout and torch.nn.functional.dropout.
I struggle to see the difference between them.
If you look at the source code of nn.Dropout and functional.dropout, you can see that the functional API provides the stateless operations, and the nn modules are thin wrappers that call them.
Look at the implementations in the nn module:
    from .. import functional as F

    class Dropout(_DropoutNd):
        def forward(self, input):
            return F.dropout(input, self.p, self.training, self.inplace)

    class Dropout2d(_DropoutNd):
        def forward(self, input):
            return F.dropout2d(input, self.p, self.training, self.inplace)

And so on.
And the implementations in the functional module:

    def dropout(input, p=0.5, training=False, inplace=False):
        return _functions.dropout.Dropout.apply(input, p, training, inplace)

    def dropout2d(input, p=0.5, training=False, inplace=False):
        return _functions.dropout.FeatureDropout.apply(input, p, training, inplace)
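So nn.Dropout is a thin module wrapper around F.dropout: the module carries the training flag itself, while the functional form takes it as an argument. A minimal check of that consequence (assuming a recent PyTorch, where dropout is a no-op when training is off):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(4, 8)

# Module version: train()/eval() flips self.training, which the module
# forwards to F.dropout internally.
drop = nn.Dropout(p=0.5)
drop.eval()                       # self.training is now False
assert torch.equal(drop(x), x)    # dropout is a no-op in eval mode

# Functional version: you must pass the training flag yourself.
assert torch.equal(F.dropout(x, p=0.5, training=False), x)
```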
Look at the example below to understand:
    class Net(nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
            self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
            self.conv2_drop = nn.Dropout2d()
            self.fc1 = nn.Linear(320, 50)
            self.fc2 = nn.Linear(50, 10)

        def forward(self, x):
            x = F.relu(F.max_pool2d(self.conv1(x), 2))
            x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
            x = x.view(-1, 320)
            x = F.relu(self.fc1(x))
            x = F.dropout(x, training=self.training)
            x = self.fc2(x)
            return F.log_softmax(x, dim=1)
There is an F.dropout call in the forward() method and an nn.Dropout module in __init__(). Here is the explanation:
In PyTorch you define your models as subclasses of torch.nn.Module.
In the __init__ method you initialize the layers you want to use. Unlike Keras, PyTorch is more low-level: you have to specify the sizes of your layers so that everything matches.
In the forward method you specify the connections between your layers. You use the layers you already initialized, so the same layer (with the same weights) is re-used on every forward pass.
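This is where the practical difference bites: the nn.Dropout module picks up the training flag from model.train()/model.eval() automatically, while the functional call only respects it if you pass self.training yourself. A minimal sketch (the class name Tiny is illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tiny(nn.Module):
    def __init__(self):
        super().__init__()
        self.drop = nn.Dropout(p=0.5)   # module: tracks self.training

    def forward(self, x):
        a = self.drop(x)                                  # uses the module's flag
        b = F.dropout(x, p=0.5, training=self.training)   # flag passed by hand
        return a, b

net = Tiny()
net.eval()          # recursively sets training=False on net and self.drop
x = torch.ones(3, 3)
a, b = net(x)
assert torch.equal(a, x) and torch.equal(b, x)   # both are no-ops in eval mode
```

If you called F.dropout(x) in forward without the training argument, the module's eval() switch would have no effect on it.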
torch.nn.functional contains useful operations such as activation functions and convolutions that you can call directly. However, these are not full layers, so if you want to define a layer of any kind you should use torch.nn.Module.
You would use the torch.nn.functional conv operations to define a custom layer, for example one built around a convolution operation, but not to define a standard convolution layer.
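As a sketch of that last point (the class name and weight initialization here are made up for illustration), a custom layer can own its weight as an nn.Parameter and apply the functional op in forward:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MyConv(nn.Module):
    """Custom layer: holds its own weight, applies the functional conv op."""
    def __init__(self, in_ch, out_ch, k):
        super().__init__()
        # Registering the weight as a Parameter makes it learnable
        # and visible to optimizers via net.parameters().
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)

    def forward(self, x):
        return F.conv2d(x, self.weight)   # no padding, stride 1

layer = MyConv(1, 4, 3)
out = layer(torch.randn(2, 1, 8, 8))
assert out.shape == (2, 4, 6, 6)   # 8 - 3 + 1 = 6 in each spatial dim
```

For a standard convolution layer you would just use nn.Conv2d, which does exactly this bookkeeping for you (plus bias, padding, stride, and proper initialization).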