Sequence to Sequence - for time series prediction

孤街浪徒 2020-12-17 02:12

I've tried to build a sequence-to-sequence model to predict a sensor signal over time based on its first few inputs (see figure below).

The model works OK, but I want to add an attention layer to it and I'm not sure how to do that.

2 Answers
  •  清酒与你  2020-12-17 03:04

    The attention layer in Keras is not a trainable layer (unless you use the scale parameter); it only computes matrix operations. In my opinion, this layer can produce some mistakes if applied directly to a time series, but let's proceed in order...
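
    To make that concrete, here is a minimal sketch (assuming the TF 2.x API, with made-up shapes) showing that the stock layer only gains a trainable weight when its scale option is switched on:

    import tensorflow as tf

    # the default layer has nothing to learn; use_scale=True adds a single
    # trainable scalar that multiplies the scores before the softmax
    plain = tf.keras.layers.Attention()
    scaled = tf.keras.layers.Attention(use_scale=True)

    q = tf.random.uniform((2, 10, 16))    # query: (batch, time_steps, features)
    v = tf.random.uniform((2, 1, 16))     # value: (batch, 1, features)
    plain([q, v]); scaled([q, v])         # calling the layers builds their weights

    print(len(plain.trainable_weights))   # 0
    print(len(scaled.trainable_weights))  # 1 (the scale scalar)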

    The most natural choice to replicate the attention mechanism for our time-series problem is to adopt the solution presented here and explained again here. It is the classical application of attention in an encoder-decoder structure in NLP.

    Following the TF implementation, for our attention layer we need query, value and key tensors in 3D format. We obtain these values directly from our recurrent layer: more specifically, we use the output sequence and the hidden state. These are all we need to build an attention mechanism (see the quick shape check after the list below):

    • query is the output sequence [batch_dim, time_step, features]
    • value is the hidden state [batch_dim, features], to which we add a temporal dimension for the matrix operation: [batch_dim, 1, features]
    • as the key, we use the hidden state again, so key = value
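
    As a quick shape check (a minimal sketch with arbitrary dimensions, just to show where these two tensors come from):

    import tensorflow as tf
    from tensorflow.keras.layers import GRU

    # toy input: 5 samples, 20 time steps, 50 features (arbitrary numbers)
    x = tf.random.uniform((5, 20, 50))
    seq, state = GRU(32, return_sequences=True, return_state=True)(x)

    print(seq.shape)                       # (5, 20, 32) -> query
    print(tf.expand_dims(state, 1).shape)  # (5, 1, 32)  -> value (and key)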

    In the above definition and implementation I found two problems:

    • the scores are calculated with softmax(dot(sequence, hidden)). The dot product is fine, but the softmax, following the Keras implementation, is computed on the last dimension and not on the temporal dimension. This implies the scores are all 1, so they are useless
    • the attention output is dot(scores, hidden) and not dot(scores, sequences) as we need

    Here is an example that reproduces the problem:

    import numpy as np
    import tensorflow as tf

    def attention_keras(query_value):
        # replicates what the Keras Attention layer computes with these inputs
        query, value = query_value # key == value
        score = tf.matmul(query, value, transpose_b=True) # (batch, time_steps, 1)
        score = tf.nn.softmax(score) # softmax on the -1 axis ==> score always = 1 !!!
        print((score.numpy() != 1).any()) # False ==> score always = 1 !!!
        score = tf.matmul(score, value) # (batch, time_steps, features)
        return score

    np.random.seed(33)
    time_steps = 20
    features = 50
    sample = 5

    X = np.random.uniform(0,5, (sample,time_steps,features))
    state = np.random.uniform(0,5, (sample,features))
    attention_keras([X, tf.expand_dims(state,1)]) # ==> the same as Attention(dtype='float64')([X, tf.expand_dims(state,1)])
    

    For this reason, for time-series attention I propose this solution:

    def attention_seq(query_value, scale):

        query, value = query_value
        score = tf.matmul(query, value, transpose_b=True) # (batch, time_steps, 1)
        score = scale*score # scale with a fixed number (it can be fine-tuned or learned during training)
        score = tf.nn.softmax(score, axis=1) # softmax on the time_steps axis
        score = score*query # (batch, time_steps, features): the weighted output sequence
        return score
    
    np.random.seed(33)
    time_steps = 20
    features = 50
    sample = 5
    
    X = np.random.uniform(0,5, (sample,time_steps,features))
    state = np.random.uniform(0,5, (sample,features))
    attention_seq([X,tf.expand_dims(state,1)], scale=0.05)
    

    As before, query is the output sequence [batch_dim, time_step, features] and value is the hidden state [batch_dim, features], expanded to [batch_dim, 1, features] for the matrix operation.

    The weights are calculated with softmax(scale*dot(sequence, hidden)). The scale parameter is a scalar that scales the scores before applying the softmax operation, and here the softmax is computed correctly on the time dimension. The attention output is the input sequence weighted by these scores. I use the scale parameter as a fixed value, but it can be tuned or inserted as a learnable weight in a custom layer (like the scale parameter in the Keras Attention layer).
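
    A quick check (reusing the toy X and state defined above) that the weights now vary over time and sum to 1 along the time axis:

    w = tf.nn.softmax(0.05 * tf.matmul(X, tf.expand_dims(state, 1), transpose_b=True), axis=1)
    print((w.numpy() != 1).any())                      # True: the weights are no longer all 1
    print(np.allclose(tf.reduce_sum(w, axis=1), 1.0))  # True: they sum to 1 over the time axis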

    In terms of network implementation, these are the two possibilities available:

    # imports used here and in the full models below
    from tensorflow.keras.layers import Input, GRU, Dense, Dropout, Attention, Lambda
    from tensorflow.keras.models import Model

    ######### KERAS #########
    inp = Input((time_steps, features))
    seq, state = GRU(32, return_state=True, return_sequences=True)(inp)
    att = Attention()([seq, tf.expand_dims(state,1)])

    ######### CUSTOM #########
    inp = Input((time_steps, features))
    seq, state = GRU(32, return_state=True, return_sequences=True)(inp)
    att = Lambda(attention_seq, arguments={'scale': 0.05})([seq, tf.expand_dims(state,1)])
    

    CONCLUSION

    I don't know how much added value introducing an attention layer brings to simple problems. If you have short sequences, I suggest you leave everything as it is. What I reported here is an answer where I express my considerations; I'll welcome comments or observations about possible mistakes or misunderstandings.


    In your model, these solutions can be embedded in this way:

    ######### KERAS #########
    # n_features, n_steps, n_units and n_steps_out come from the question's setup
    inp = Input((n_features, n_steps))
    seq, state = GRU(n_units, activation='relu',
                     return_state=True, return_sequences=True)(inp)
    att = Attention()([seq, tf.expand_dims(state,1)])
    x = GRU(n_units, activation='relu')(att)
    x = Dense(64, activation='relu')(x)
    x = Dropout(0.5)(x)
    out = Dense(n_steps_out)(x)
    
    model = Model(inp, out)
    model.compile(optimizer='adam', loss='mse', metrics=['mse'])
    model.summary()
    
    ######### CUSTOM #########
    inp = Input((n_features, n_steps))
    seq, state = GRU(n_units, activation='relu',
                     return_state=True, return_sequences=True)(inp)
    att = Lambda(attention_seq, arguments={'scale': 0.05})([seq, tf.expand_dims(state,1)])
    x = GRU(n_units, activation='relu')(att)
    x = Dense(64, activation='relu')(x)
    x = Dropout(0.5)(x)
    out = Dense(n_steps_out)(x)
    
    model = Model(inp, out)
    model.compile(optimizer='adam', loss='mse', metrics=['mse'])
    model.summary()
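
    For completeness, a minimal sketch of how either variant could be trained; the dimensions and random data below are made up, so use the real sensor series and shapes from your question:

    # hypothetical placeholder values, to be set before building the model above
    n_features, n_steps, n_units, n_steps_out = 20, 50, 32, 10

    # dummy data only to verify that the model compiles and fits end to end
    X_train = np.random.uniform(size=(128, n_features, n_steps))
    y_train = np.random.uniform(size=(128, n_steps_out))
    model.fit(X_train, y_train, epochs=2, batch_size=16, verbose=0)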
    
