Why is my LSTM model repeating the previous values?


Question


I built a simple LSTM model in Keras as shown below:

import keras
from keras.models import Sequential

model = Sequential()
model.add(keras.layers.LSTM(hidden_nodes, input_dim=num_features, input_length=window, consume_less="mem"))
model.add(keras.layers.Dense(num_features, activation='sigmoid'))
optimizer = keras.optimizers.SGD(lr=learning_rate, decay=1e-6, momentum=0.9, nesterov=True)
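
(For reference, consume_less is Keras 1 syntax. On Keras 2 the same model would look roughly like this, a sketch assuming the same hidden_nodes, window, num_features and learning_rate variables; the 'mse' loss is only a placeholder, the actual custom loss is in Edit 2 below:)

import keras
from keras.models import Sequential

model = Sequential()
# Keras 2 replaces input_dim/input_length with input_shape,
# and consume_less with the implementation argument
model.add(keras.layers.LSTM(hidden_nodes, input_shape=(window, num_features), implementation=1))
model.add(keras.layers.Dense(num_features, activation='sigmoid'))
optimizer = keras.optimizers.SGD(lr=learning_rate, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer=optimizer, loss='mse')  # placeholder loss; see Edit 2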

When I apply the model to some data, I observe this particular behaviour:

In the plot, the orange line represents the predicted values and the blue one the ground truth.

As you can see, the network repeats previous values, but that is not what I want. I have several features (not only the one shown in the picture), and I want the network to take into account the dependencies between the time series instead of just looking at the past data of a single series and repeating it.

I hope the question is clear enough!

My data
I have 36 time series (categorical and numerical data). I use a window of length W and reshape the data to create a NumPy array in the form required by Keras: (num_samples, window, num_features).
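
In essence, for a single contiguous series the reshaping works like this (a minimal sketch; the name make_windows is mine, and the full version that also handles multiple concatenated dumps is in Edit 2):

import numpy as np

def make_windows(series, window):
    # series: 2D array of shape (num_rows, num_features)
    num_rows, num_features = series.shape
    # each sample is a window of consecutive rows...
    X = np.stack([series[i:i + window] for i in range(num_rows - window)])
    # ...and its target is the row immediately after the window
    T = series[window:]
    return X, T  # X: (num_rows - window, window, num_features)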

Edit 1
Sample of data:

0.5, 0.1, 0.4, 1, 0.74
0.1, 0.1, 0.8, 0.9, 0.8
0.2, 0.3, 0.5, 1, 0.85

I have one categorical and two numerical attributes. The first three columns refer to the categorical one (its one-hot encoding); the last two refer to the two numerical attributes.
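
For instance, a single row is built by concatenating the one-hot encoding of the categorical attribute with the two numerical values (the categories and numbers here are made up):

import numpy as np

categories = ['A', 'B', 'C']  # hypothetical category values

def encode_row(category, num1, num2):
    # one-hot encode the categorical attribute, then append the numerics
    one_hot = np.zeros(len(categories))
    one_hot[categories.index(category)] = 1.0
    return np.concatenate([one_hot, [num1, num2]])

encode_row('B', 1.0, 0.74)  # -> array([0., 1., 0., 1., 0.74])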

I build the training and test sets as shown in the picture: from the data matrix M I build the input windows X and the targets T, where each row of T is the row of M that immediately follows the corresponding window of X.

So I execute model.fit(X, T).

I've also tried with a low number of hidden nodes, but the result is the same.

Edit 2
Here is the custom loss function that takes into account the mix of numerical and categorical features:

import numpy as np
import keras

def mixed_num_cat_loss_backend(y_true, y_pred, signals_splits):
    # allow plain NumPy inputs for testing
    if isinstance(y_true, np.ndarray):
        y_true = keras.backend.variable(y_true)
    if isinstance(y_pred, np.ndarray):
        y_pred = keras.backend.variable(y_pred)

    # squared error on the numerical columns (before the first split)
    y_true_mse = y_true[:, :signals_splits[0]]
    y_pred_mse = y_pred[:, :signals_splits[0]]
    mse_loss_v = keras.backend.square(y_true_mse - y_pred_mse)

    # cross-entropy on each one-hot categorical block;
    # (output, target) is the Keras 1 backend argument order
    categ_loss_v = [keras.backend.categorical_crossentropy(
                        y_pred[:, signals_splits[i-1]:signals_splits[i]],
                        y_true[:, signals_splits[i-1]:signals_splits[i]],
                        from_logits=False)  # force Keras to normalize
                    for i in range(1, len(signals_splits))]

    losses_v = keras.backend.concatenate([mse_loss_v, keras.backend.stack(categ_loss_v, 1)], 1)

    return losses_v

I use signals_splits in order to know where the numerical features are (in the matrix).
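
Since Keras only accepts losses of the form loss(y_true, y_pred), signals_splits has to be bound before compiling; a minimal sketch (the split points here are just an example: two numerical columns followed by one three-value categorical block):

from functools import partial

signals_splits = [2, 5]  # example: columns 0-1 numerical, columns 2-4 one categorical block
loss = partial(mixed_num_cat_loss_backend, signals_splits=signals_splits)
model.compile(optimizer=optimizer, loss=loss)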

This is the function that prepares the data, starting from a 2D NumPy array, as shown in the picture with M, T and X:

def prepare_training_data(data_matrix, boundaries, window=5):

    num_rows, num_columns = data_matrix.shape
    # boundaries holds the number of rows of each dump stacked in data_matrix;
    # a dump shorter than the window yields no training samples
    effective_sizes = [max(0, nrows - window) for nrows in boundaries]
    total_training_rows = sum(effective_sizes)

    print(" - Skipped dumps because smaller than window:", sum(z == 0 for z in effective_sizes))

    # prepare target variables: for each window, the row that follows it
    T = data_matrix[window:boundaries[0], :]

    start_row = boundaries[0]
    for good_rows, total_rows in zip(effective_sizes[1:], boundaries[1:]):
        if good_rows > 0:
            T = np.vstack((T, data_matrix[start_row + window:start_row + total_rows, :]))
        start_row += total_rows

    # training input to the LSTM: one window of rows per sample
    X = np.zeros((total_training_rows, window, num_columns))
    curr_row = 0
    curr_boundary = 0
    for good_rows, total_rows in zip(effective_sizes, boundaries):
        for i in range(good_rows):
            X[curr_row] = data_matrix[curr_boundary + i:curr_boundary + i + window, :]
            curr_row += 1
        curr_boundary += total_rows

    return X, T, effective_sizes
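
Putting it together (the dump sizes here are just an example; the 4-row dump is shorter than the window, so it is skipped):

X, T, effective_sizes = prepare_training_data(data_matrix, boundaries=[100, 4, 250], window=5)
model.fit(X, T)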

Source: https://stackoverflow.com/questions/47618285/why-my-lstm-model-is-repeating-the-previous-values
