Input 0 is incompatible with layer lstm_93: expected ndim=3, found ndim=2

只愿长相守 · Submitted on 2019-12-11 23:50:24

Question


My X_train shape is (171, 10, 1) and my y_train shape is (171,), containing values from 1 to 19. The output should be the probability of each of the 19 classes. I am trying to use an RNN for classification into 19 classes.

from sklearn.preprocessing import LabelEncoder,OneHotEncoder
label_encoder_X=LabelEncoder()
label_encoder_y=LabelEncoder()

y_train=label_encoder_y.fit_transform(y_train)
y_train=np.array(y_train)

X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))


from keras.models import Sequential
from keras.layers import Dense,Flatten
from keras.layers import LSTM
from keras.layers import Dropout


regressor = Sequential()

regressor.add(LSTM(units = 100, return_sequences = True,
                   input_shape = (X_train.shape[1], 1)))
regressor.add(Dropout(rate=0.15))

regressor.add(LSTM(units = 100, return_sequences = False))  # False caused the ndim exception
regressor.add(Dropout(rate=0.15))


regressor.add(Flatten())
regressor.add(Dense(units= 19,activation='sigmoid'))
regressor.compile(optimizer = 'rmsprop', loss = 'mean_squared_error')

regressor.fit(X_train, y_train, epochs = 250, batch_size = 16)

Answer 1:


When you set return_sequences=False in the second LSTM layer, its output shape is (None, 100), which no longer needs Flatten(). Either set return_sequences=True in the second LSTM layer or remove regressor.add(Flatten()), depending on your needs.
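To make the shape behaviour concrete, here is a small pure-Python sketch (no Keras needed) of how return_sequences changes an LSTM layer's output shape. Note that lstm_output_shape is a hypothetical helper written for illustration, not a Keras API:

```python
def lstm_output_shape(input_shape, units, return_sequences):
    """Compute the output shape of an LSTM layer.

    input_shape: (batch, timesteps, features) -- the ndim=3 input LSTM expects.
    With return_sequences=True the layer keeps the time axis;
    with False it emits only the last timestep's hidden state.
    """
    batch, timesteps, _ = input_shape
    if return_sequences:
        # Still ndim=3, so another LSTM (or Flatten) can follow.
        return (batch, timesteps, units)
    # ndim=2: feeding this into another LSTM raises "expected ndim=3, found ndim=2".
    return (batch, units)

# Shapes for the model in the question:
print(lstm_output_shape((171, 10, 1), 100, True))   # (171, 10, 100)
print(lstm_output_shape((171, 10, 1), 100, False))  # (171, 100)
```

This is also why Flatten() becomes redundant after return_sequences=False: there is no time axis left to flatten.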

In addition, if you want the probability of each of the 19 classes, your label data should be in one-hot form. Use keras.utils.to_categorical:

from keras.utils import to_categorical

one_hot_labels = to_categorical(y_train, num_classes=19)  # shape (None, 19)
regressor.fit(X_train, one_hot_labels, epochs=250, batch_size=16)
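For reference, here is a minimal NumPy sketch of what to_categorical does, assuming the labels are already 0-indexed integers (which LabelEncoder produces); to_one_hot is an illustrative helper, not a library function:

```python
import numpy as np

def to_one_hot(labels, num_classes):
    """One-hot encode integer labels: shape (N,) -> (N, num_classes)."""
    labels = np.asarray(labels)
    one_hot = np.zeros((labels.shape[0], num_classes))
    # Set a 1.0 in the column matching each label.
    one_hot[np.arange(labels.shape[0]), labels] = 1.0
    return one_hot

y = np.array([0, 3, 18])
print(to_one_hot(y, 19).shape)  # (3, 19)
```

Each row then sums to 1 with a single 1.0 in the position of its class, which is the target format a 19-unit output layer expects.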


Source: https://stackoverflow.com/questions/53874013/input-0-is-incompatible-with-layer-lstm-93-expected-ndim-3-found-ndim-2
