I ran the model successfully with LSTM as the first layer. But out of curiosity, I replaced LSTM with CuDNNLSTM, and after model.fit it raised the following error message:
For TensorFlow 2.x, one solution is:
import tensorflow as tf

# Allocate GPU memory on demand instead of reserving all of it at startup.
physical_devices = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], enable=True)
Then you can build your Keras model as usual:
from tensorflow.keras.models import Model
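Since the question mentions CuDNNLSTM: in TF 2.x there is no standalone CuDNNLSTM layer; tf.keras.layers.LSTM dispatches to the fused cuDNN kernel automatically when a GPU is visible and the layer keeps its default arguments. A minimal sketch (layer and input sizes are illustrative, not from the question):

from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense

# With the default arguments (tanh activation, sigmoid recurrent activation,
# recurrent_dropout=0) and a visible GPU, this LSTM runs on the cuDNN kernel.
inputs = Input(shape=(100, 32))   # (timesteps, features); sizes are illustrative
x = LSTM(64)(inputs)
outputs = Dense(1)(x)
model = Model(inputs, outputs)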
Documentation: tf.config.experimental.set_memory_growth
This memory-growth fix worked for me; note that it enables memory growth for only the first GPU in the list.
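If the machine has more than one GPU, a minimal variant using the same API, looping over all detected devices, would be:

import tensorflow as tf

# Enable on-demand allocation for every visible GPU. This must run before
# any GPU is initialized (i.e. before building the model), otherwise
# TensorFlow raises a RuntimeError.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)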