Question
Hi, I am trying out a simple autoencoder in Python 3.5 using the Keras library. The issue I face is: ValueError: Error when checking input: expected input_40 to have 2 dimensions, but got array with shape (32, 256, 256, 3). My dataset is very small (60 RGB images of dimension 256*256, plus one image of the same type for validation). I am a bit new to Python. Please help.
import matplotlib.pyplot as plt
from keras.layers import Input, Dense
from keras.models import Model
# Declaring the model
encoding_dim = 32
input_img = Input(shape=(65536,))
encoded = Dense(encoding_dim, activation='relu')(input_img)
decoded = Dense(65536, activation='sigmoid')(encoded)
autoencoder = Model(input_img, decoded)
encoder = Model(input_img, encoded)
encoded_input = Input(shape=(encoding_dim,))
decoder_layer = autoencoder.layers[-1]
decoder = Model(encoded_input, decoder_layer(encoded_input))
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
# Constructing a data generator iterator
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
training_set = train_datagen.flow_from_directory('C:\\Users\\vlsi\\Desktop\\train',
                                                 batch_size = 32,
                                                 class_mode = 'binary')
test_set = test_datagen.flow_from_directory('C:\\Users\\vlsi\\Desktop\\validation',
                                            batch_size = 32,
                                            class_mode = 'binary')
# Fitting data
autoencoder.fit_generator(training_set,
                          steps_per_epoch = 80,
                          epochs = 25,
                          validation_data = test_set,
                          validation_steps = 20)
import numpy as np
from keras.preprocessing import image

test_image = image.load_img('C:\\Users\\vlsi\\Desktop\\validation\\validate\\apple1.jpg')
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis = 0)
# Displaying output
encoded_imgs = encoder.predict(test_image)
decoded_imgs = decoder.predict(encoded_imgs)
plt.imshow(decoded_imgs)
Answer 1:
The problem is here:
input_img = Input(shape=(65536,))
You told Keras that the input to the network would have 65,536 dimensions, i.e. a vector of shape (samples, 65536), but your actual inputs have shape (samples, 256, 256, 3). An easy solution is to declare the real input shape and have the network perform the necessary reshaping:
from keras.layers import Flatten, Reshape

input_img = Input(shape=(256, 256, 3))
flattened = Flatten()(input_img)
encoded = Dense(encoding_dim, activation='relu')(flattened)
decoded = Dense(256 * 256 * 3, activation='sigmoid')(encoded)
decoded = Reshape((256, 256, 3))(decoded)
autoencoder = Model(input_img, decoded)
encoder = Model(input_img, encoded)
encoded_input = Input(shape=(encoding_dim,))
decoder_layer = autoencoder.layers[-1]
decoder = Model(encoded_input, decoder_layer(encoded_input))
Note that I added a Flatten layer to flatten each image into a vector of 256 * 256 * 3 = 196608 values, and a Reshape layer to take the decoded vector back to the shape (256, 256, 3).
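To see why the original Input(shape=(65536,)) can never match the generator's batches, here is a quick NumPy-only sketch of the shapes involved (the random array is a stand-in for one generator batch, not real image data):

```python
import numpy as np

# Stand-in for one generator batch: 32 RGB images of 256x256.
batch = np.random.rand(32, 256, 256, 3)

# Flattening each sample, as Flatten() does, yields
# 256 * 256 * 3 = 196608 values per image...
flat = batch.reshape(batch.shape[0], -1)
print(flat.shape)  # (32, 196608)

# ...whereas 65536 = 256 * 256 only accounts for a single
# (grayscale) channel, hence the dimension mismatch.
print(256 * 256)   # 65536
```

The same arithmetic is why the fixed decoder ends in Dense(256 * 256 * 3) followed by Reshape((256, 256, 3)).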
Source: https://stackoverflow.com/questions/52790068/value-error-with-dimensions-in-designing-a-simple-autoencoder