Why use same padding with max pooling?

Question: While going through the autoencoder tutorial on the Keras blog, I saw that the author uses same padding in the max pooling layers of the Convolutional Autoencoder part, as shown below:

    x = MaxPooling2D((2, 2), padding='same')(x)

Could someone explain the reason behind this? With max pooling we want to reduce the height and width, so why is same padding, which keeps the height and width the same, used here? In addition, the result of this code halves the dimensions by 2, so the same padding doesn't seem to have any effect.
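For reference, a minimal sketch of where the two padding modes actually differ for pooling (assuming TensorFlow 2.x with its bundled Keras API; the 7x7 input size is just an illustration). With padding='same' the output size is ceil(input / stride), while with padding='valid' it is floor((input - pool_size) / stride) + 1, so the two only diverge on inputs that are not a multiple of the stride:

    from tensorflow.keras.layers import Input, MaxPooling2D

    # Odd spatial size: this is where 'same' and 'valid' give different results.
    x = Input(shape=(7, 7, 1))

    same_out = MaxPooling2D((2, 2), padding='same')(x)    # pads so no input row/column is dropped
    valid_out = MaxPooling2D((2, 2), padding='valid')(x)  # discards the leftover row/column

    print(same_out.shape)   # (None, 4, 4, 1) -> ceil(7 / 2) = 4
    print(valid_out.shape)  # (None, 3, 3, 1) -> floor((7 - 2) / 2) + 1 = 3

On an even-sized input such as 28x28, both modes produce 14x14, which matches the observation in the question that the dimensions are simply halved.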