I am trying to implement a simple autoencoder using PyTorch. My dataset consists of 256 x 256 x 3 images. I have built a torch.utils.data.DataLoader over this dataset.
If your input is 3 x 256 x 256, you need to flatten it to B x N before passing it through the linear layer nn.Linear(3*256*256, 128), where B is the batch size and N = 3*256*256 = 196608 is the number of input features the linear layer expects.
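As a minimal sketch of the batched case (the batch size 8 and code size 128 here are illustrative assumptions, not values from your setup):

import torch
import torch.nn as nn

encoder_in = nn.Linear(3 * 256 * 256, 128)  # N = 196608 input features, 128-dim code

batch = torch.randn(8, 3, 256, 256)         # B x C x H x W, as yielded by a DataLoader
flat = batch.view(batch.size(0), -1)        # flatten to B x 196608
codes = encoder_in(flat)                    # B x 128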
If you are feeding one image at a time, you can convert an input tensor of shape 3 x 256 x 256 to shape 1 x (3*256*256) as follows.
img = img.view(1, -1) # converts [3 x 256 x 256] to 1 x 196608
output = model(img) # forward pass on the flattened input
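As an alternative to flattening outside the model, here is a minimal end-to-end sketch that does the reshaping inside forward (the class name, layer sizes, and activation are illustrative assumptions, not your actual architecture):

import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features=3 * 256 * 256, code_size=128):
        super().__init__()
        self.encoder = nn.Linear(n_features, code_size)
        self.decoder = nn.Linear(code_size, n_features)

    def forward(self, x):
        x = x.view(x.size(0), -1)         # flatten B x 3 x 256 x 256 to B x 196608
        code = torch.relu(self.encoder(x))
        out = self.decoder(code)
        return out.view(-1, 3, 256, 256)  # reshape back to image dimensions

model = AutoEncoder()
img = torch.randn(1, 3, 256, 256)         # one image with a batch dimension
output = model(img)                       # 1 x 3 x 256 x 256 reconstruction

Keeping the view calls inside forward means the model accepts image-shaped batches directly from your DataLoader and returns image-shaped reconstructions, so you can compare output against the input with a loss like nn.MSELoss without any extra reshaping.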