Negative dimension size caused by subtracting 3 from 1 for 'conv2d_2/convolution'

情书的邮戳 2020-12-14 02:33

I got this error message when declaring the input layer in Keras.

ValueError: Negative dimension size caused by subtracting 3 from 1 for 'conv2d_2/convolution'

6 Answers
  • 2020-12-14 03:15

I had the same problem, but the solutions proposed in this thread did not help me. In my case, a different problem caused this error:


    Code

    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Flatten

    imageSize = 32
    classifier = Sequential()

    classifier.add(Conv2D(64, (3, 3), input_shape=(imageSize, imageSize, 3), activation='relu'))
    classifier.add(MaxPooling2D(pool_size=(2, 2)))

    classifier.add(Conv2D(64, (3, 3), activation='relu'))
    classifier.add(MaxPooling2D(pool_size=(2, 2)))

    classifier.add(Conv2D(64, (3, 3), activation='relu'))
    classifier.add(MaxPooling2D(pool_size=(2, 2)))

    classifier.add(Conv2D(64, (3, 3), activation='relu'))
    classifier.add(MaxPooling2D(pool_size=(2, 2)))

    classifier.add(Conv2D(64, (3, 3), activation='relu'))
    classifier.add(MaxPooling2D(pool_size=(2, 2)))

    classifier.add(Flatten())
    

    Error

    The input image is 32 by 32. The first convolutional layer (3x3 kernel, default 'valid' padding) reduces it to 30 by 30, since the output size is input size - kernel size + 1.

    The pooling layer then halves that, giving 15 by 15, and so on.

    You can see where this is going: after a few blocks, the feature map becomes so small that a 3x3 convolution (or a 2x2 pooling window) no longer fits over it, and that is what raises the error.
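    The shrinking feature map can be traced with a short sketch (plain Python, no Keras needed; `conv_out` and `pool_out` are hypothetical helpers using the standard 'valid'-padding and stride-2 pooling formulas):

```python
# Trace the feature-map size through five Conv2D(3x3, 'valid') + MaxPooling2D(2x2) blocks.
def conv_out(size, kernel=3):
    # 'valid' padding: output = input - kernel + 1
    return size - kernel + 1

def pool_out(size, pool=2):
    # 2x2 pooling with stride 2 floors the division
    return size // pool

size = 32
trace = []
for block in range(5):
    size = conv_out(size)
    trace.append(('conv', size))
    if size <= 0:
        break  # this is the point where Keras raises the negative-dimension error
    size = pool_out(size)
    trace.append(('pool', size))

print(trace)
# [('conv', 30), ('pool', 15), ('conv', 13), ('pool', 6), ('conv', 4), ('pool', 2), ('conv', 0)]
```

    By the fourth conv layer the map is only 2x2, too small for a 3x3 kernel, so the computed output dimension is no longer positive.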


    Solution

    The easy fix for this error is to either use larger input images or remove some of the convolutional/pooling layers.

  • 2020-12-14 03:17

    Keras is compatible with the following backends:

    TensorFlow (Google), Theano (developed by the LISA lab), CNTK (Microsoft)

    Whenever you see an error involving a shape like [?, X, X, X] or [X, Y, Z, X], it is a channel-ordering issue. To fix it, set the image dimension ordering explicitly:

    Import

    from keras import backend as K
    K.set_image_dim_ordering('th')
    

    "tf" format means that the convolutional kernels will have the shape (rows, cols, input_depth, depth)

    This usually resolves the mismatch.

  • 2020-12-14 03:23

    You can instead preserve the spatial dimensions of the volume, so that the output size matches the input size, by setting the padding to "same": use padding='same' on your convolutional layers.
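    As a sanity check on the difference, here is a small sketch (plain Python, hypothetical helper name `out_size`) of the standard output-size formulas for 'valid' vs 'same' padding:

```python
import math

def out_size(size, kernel, stride=1, padding='valid'):
    # standard output-size formulas used by Keras/TensorFlow
    if padding == 'same':
        return math.ceil(size / stride)  # spatial size preserved at stride 1
    return math.ceil((size - kernel + 1) / stride)  # 'valid' shrinks the map

# a 2x2 feature map hit by a 3x3 kernel:
print(out_size(2, 3, padding='valid'))  # 0 -> triggers the negative-dimension error
print(out_size(2, 3, padding='same'))   # 2 -> still usable
```

    With padding='same', the feature map can pass through arbitrarily many stride-1 convolutions without shrinking.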

  • 2020-12-14 03:31
    # define the model as a class
    from tensorflow.keras import backend, layers, models

    class LeNet:

        '''
        In a sequential model, we stack layers sequentially.
        So, each layer has a unique input and output, and those inputs and
        outputs then also come with a unique input shape and output shape.
        '''

        @staticmethod  # can be called on the class without instantiating it
        def init(numChannels, imgRows, imgCols, numClasses, weightsPath=None):

            # if we are using channels-first ordering, we have to reorder the input shape
            if backend.image_data_format() == "channels_first":
                inputShape = (numChannels, imgRows, imgCols)
            else:
                inputShape = (imgRows, imgCols, numChannels)

            # initialize the model
            model = models.Sequential()

            # define the first set of CONV => ACTIVATION => POOL layers
            model.add(layers.Conv2D(filters=6, kernel_size=(5, 5), strides=(1, 1),
                                    padding="valid", activation='relu',
                                    kernel_initializer='he_uniform',
                                    input_shape=inputShape))
            model.add(layers.AveragePooling2D(pool_size=(2, 2), strides=(2, 2)))

            return model

    I hope it helps :)

    See code : Fashion_Mnist_Using_LeNet_CNN

  • 2020-12-14 03:32

    By default, Convolution2D (https://keras.io/layers/convolutional/) expects the input to be in the format (samples, rows, cols, channels), which is "channels-last". Your data seems to be in the format (samples, channels, rows, cols). You should be able to fix this using the optional keyword data_format = 'channels_first' when declaring the Convolution2D layer.

    model.add(Convolution2D(32, (3, 3), activation='relu', input_shape=(1,28,28), data_format='channels_first'))
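    Alternatively, the data itself can be converted to channels-last order before feeding it to the model; a minimal NumPy sketch (assuming a batch of 1-channel 28x28 images):

```python
import numpy as np

# batch of 10 images in (samples, channels, rows, cols) order
x = np.zeros((10, 1, 28, 28))

# move the channel axis to the end -> (samples, rows, cols, channels)
x_last = np.moveaxis(x, 1, -1)
print(x_last.shape)  # (10, 28, 28, 1)
```

    After this, the default channels-last layers work without any data_format argument.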
    
  • 2020-12-14 03:36

    Use the following:

    from keras import backend
    backend.set_image_data_format('channels_last')
    

    Depending on your preference, you can use 'channels_first' or 'channels_last' to set the image data format. (Source)

    If this does not work and your image size is small, try reducing the depth of your CNN, as previous posters mentioned.

    Hope it helps!
