This is the model I defined; it is a simple LSTM with two fully connected layers.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F
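The model definition itself appears to be cut off here. As a rough sketch of what such a model might look like (the class name, layer sizes, and dropout placement below are my assumptions, not the original code):

    class LSTMClassifier(nn.Module):
        # Hypothetical reconstruction: an LSTM followed by two fully
        # connected layers, with dropout between them
        def __init__(self, input_size, hidden_size, num_classes, dropout=0.5):
            super().__init__()
            self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
            self.dropout = nn.Dropout(dropout)
            self.fc1 = nn.Linear(hidden_size, hidden_size)
            self.fc2 = nn.Linear(hidden_size, num_classes)

        def forward(self, x):
            out, _ = self.lstm(x)            # (batch, seq_len, hidden_size)
            out = self.dropout(out[:, -1])   # keep the last time step
            out = F.relu(self.fc1(out))
            return self.fc2(out)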
I'm adding this answer because I'm currently facing the same issue while trying to reproduce Deep Bayesian active learning through dropout disagreement. If you need to keep dropout active (for example, to bootstrap a set of different predictions for the same test instances), you just need to leave the model in training mode; there is no need to define your own dropout layer.
Since in PyTorch you define your own prediction function, you can simply add a parameter to it, like this:
def predict_class(model, test_instance, active_dropout=False):
    if active_dropout:
        # Training mode keeps dropout layers active, so repeated
        # calls yield different stochastic predictions
        model.train()
    else:
        model.eval()
    with torch.no_grad():
        logits = model(test_instance)
    return logits.argmax(dim=-1)
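To get a set of stochastic predictions for the same instance, call the function repeatedly with active_dropout=True. A minimal usage sketch (assuming the hypothetical LSTMClassifier above; the shapes and sample count are arbitrary):

    model = LSTMClassifier(input_size=10, hidden_size=32, num_classes=5)
    test_instance = torch.randn(1, 20, 10)   # (batch, seq_len, input_size)
    # Each call samples a different dropout mask, so predictions can disagree
    predictions = [predict_class(model, test_instance, active_dropout=True)
                   for _ in range(10)]

One caveat worth noting: model.train() switches every layer to training behavior, including BatchNorm if the model has any. If that is a concern, you can put the model in eval mode and switch only the dropout layers back to training mode:

    model.eval()
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()   # only dropout stays stochastic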