Data Augmentation in PyTorch

太阳男子 2020-12-07 20:20

I am a little bit confused about the data augmentation performed in PyTorch. As far as I know, when we perform data augmentation, we are KEEPING our original data and also adding transformed versions of it (flipping, cropping, etc.), so the dataset grows in size. But that does not seem to be what happens in PyTorch.

3 Answers
  •  执笔经年
    2020-12-07 21:00

    I assume you are asking whether these data augmentation transforms (e.g. RandomHorizontalFlip) actually increase the size of the dataset, or whether they are applied to each item in the dataset one by one without adding to its size.

    Running the simple code snippet below, we can observe that the latter is true: if you have a dataset of 8 images and create a PyTorch dataset object for it, then when you iterate through the dataset the transform is called on each data point and the transformed data point is returned. So, for example, with random flipping some of the data points are returned flipped and some are returned as the original (e.g. 4 flipped and 4 original). In other words, one iteration through the dataset still yields 8 data points, some flipped and some not. This is at odds with the conventional understanding of augmenting a dataset (i.e. ending up with 16 data points in the augmented dataset); a sketch of that approach is given at the end of this answer.

    import torch
    from torch.utils.data import Dataset
    from torchvision import transforms


    class experimental_dataset(Dataset):

        def __init__(self, data, transform):
            self.data = data
            self.transform = transform

        def __len__(self):
            # number of samples along the first dimension
            return self.data.shape[0]

        def __getitem__(self, idx):
            # the transform is applied on the fly, each time the item is fetched
            item = self.data[idx]
            item = self.transform(item)
            return item


    transform = transforms.Compose([
        transforms.ToPILImage(),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor()
    ])

    x = torch.rand(8, 1, 2, 2)  # dummy dataset of 8 tiny single-channel "images"
    print(x)

    dataset = experimental_dataset(x, transform)

    for item in dataset:
        print(item)

    Results (the small differences in the floating-point values are caused by converting to a PIL image and back; see the quick check after the results):

    Original dummy dataset:

    tensor([[[[0.1872, 0.5518],
              [0.5733, 0.6593]]],


            [[[0.6570, 0.6487],
              [0.4415, 0.5883]]],


            [[[0.5682, 0.3294],
              [0.9346, 0.1243]]],


            [[[0.1829, 0.5607],
              [0.3661, 0.6277]]],


            [[[0.1201, 0.1574],
              [0.4224, 0.6146]]],


            [[[0.9301, 0.3369],
              [0.9210, 0.9616]]],


            [[[0.8567, 0.2297],
              [0.1789, 0.8954]]],


            [[[0.0068, 0.8932],
              [0.9971, 0.3548]]]])
    

    Transformed dataset:

    tensor([[[0.1843, 0.5490],
             [0.5725, 0.6588]]])
    tensor([[[0.6549, 0.6471],
             [0.4392, 0.5882]]])
    tensor([[[0.5647, 0.3255],
             [0.9333, 0.1216]]])
    tensor([[[0.5569, 0.1804],
             [0.6275, 0.3647]]])
    tensor([[[0.1569, 0.1176],
             [0.6118, 0.4196]]])
    tensor([[[0.9294, 0.3333],
             [0.9176, 0.9608]]])
    tensor([[[0.8549, 0.2275],
             [0.1765, 0.8941]]])
    tensor([[[0.8902, 0.0039],
             [0.3529, 0.9961]]])
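
    As a quick check of the note above about the small floating-point differences: ToPILImage converts the float tensor to an 8-bit image and ToTensor scales it back to [0, 1], so every value comes back as (approximately) an integer multiple of 1/255. Below is a minimal sketch of that round trip; x_single and roundtrip are just illustrative names.

    import torch
    from torchvision import transforms

    x_single = torch.rand(1, 2, 2)  # one tiny single-channel "image"
    roundtrip = transforms.ToTensor()(transforms.ToPILImage()(x_single))

    print(x_single)
    print(roundtrip)  # values are quantized to multiples of 1/255
    # check: scaling back up by 255 gives (nearly exact) integers
    print(torch.allclose(roundtrip * 255, (roundtrip * 255).round()))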
    

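    If you do want the conventional, physically enlarged dataset (16 items here: the 8 originals plus 8 flipped copies), one possible approach is to concatenate two views of the same underlying data, one with a deterministic "identity" transform and one that always flips. Below is a minimal sketch of that idea, assuming the experimental_dataset class and the x tensor defined above are still in scope; identity_transform, flip_transform and doubled_dataset are just illustrative names.

    from torch.utils.data import ConcatDataset

    # only the tensor -> PIL -> tensor round trip, no flipping
    identity_transform = transforms.Compose([
        transforms.ToPILImage(),
        transforms.ToTensor()
    ])

    # flips every image (p=1.0 instead of the default 0.5)
    flip_transform = transforms.Compose([
        transforms.ToPILImage(),
        transforms.RandomHorizontalFlip(p=1.0),
        transforms.ToTensor()
    ])

    doubled_dataset = ConcatDataset([
        experimental_dataset(x, identity_transform),
        experimental_dataset(x, flip_transform),
    ])

    print(len(doubled_dataset))  # 16: the originals first, then the flipped copies

    for item in doubled_dataset:
        print(item)

    That said, the on-the-fly behaviour shown above is usually preferable in practice: over several epochs the model still sees both versions of each image, without the extra memory or bookkeeping of a physically duplicated dataset.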