Whenever I try out LSTM models in Keras, training takes so long that the model is effectively impossible to train.
For instance, a model like this takes 80 seconds per epoch:
If you are using a GPU, replace all LSTM layers with CuDNNLSTM layers. You can import it from keras.layers:
from keras.layers import CuDNNLSTM
from keras.layers import Input, Dense, Concatenate, CuDNNLSTM
from keras.models import Model

def create_model(self):
    inputs = {}
    inputs['input'] = []
    lstm = []
    # One input and one CuDNNLSTM branch per timeframe
    for tf, v in self.env.timeframes.items():
        inputs[tf] = Input(shape=v['shape'], name=tf)
        lstm.append(CuDNNLSTM(8)(inputs[tf]))
        inputs['input'].append(inputs[tf])
    account = Input(shape=(3,), name='account')
    account_ = Dense(8, activation='relu')(account)
    dt = Input(shape=(7,), name='dt')
    dt_ = Dense(16, activation='relu')(dt)
    inputs['input'].extend([account, dt])
    data = Concatenate(axis=1)(lstm)
    data = Dense(128, activation='relu')(data)
    # Merge the Dense-projected branches (account_, dt_); the original code
    # computed them but concatenated the raw inputs instead, leaving those
    # Dense layers unused.
    y = Concatenate(axis=1)([data, account_, dt_])
    y = Dense(256, activation='relu')(y)
    y = Dense(64, activation='relu')(y)
    y = Dense(16, activation='relu')(y)
    output = Dense(3, activation='linear')(y)
    model = Model(inputs=inputs['input'], outputs=output)
    model.compile(loss='mse', optimizer='adam', metrics=['mae'])
    return model
Here is more information: https://keras.io/layers/recurrent/#cudnnlstm
This will speed up training significantly =)
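If the same script also has to run where CuDNNLSTM cannot be imported (it was removed in later Keras versions), a guarded import keeps the rest of the model code unchanged. This is just a sketch; note that even when the import succeeds, CuDNNLSTM still requires a GPU at run time, so the guard handles a missing import, not missing hardware:

```python
# Sketch: use the cuDNN-fused layer when it can be imported, otherwise fall
# back to the standard (CPU-compatible) LSTM layer. The fallback does NOT
# detect whether a GPU is present -- CuDNNLSTM will still fail at run time
# on a CPU-only machine even though the import works.
try:
    from keras.layers import CuDNNLSTM as FastLSTM
except ImportError:
    from keras.layers import LSTM as FastLSTM

# Drop-in usage, matching the layers in the model above:
lstm_layer = FastLSTM(8)
```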