Sentence similarity using Keras

Submitted by ぃ、小莉子 on 2019-11-29 07:42:27

Question


I'm trying to implement a sentence similarity architecture based on this work, using the STS dataset. Labels are normalized similarity scores from 0 to 1, so it is assumed to be a regression model.

My problem is that the loss goes directly to NaN starting from the first epoch. What am I doing wrong?

I have already tried updating to the latest Keras and Theano versions.

The code for my model is:

from keras.models import Sequential, Model
from keras.layers import Input, Embedding, LSTM, Dropout, Lambda, Reshape, merge
from keras.optimizers import RMSprop
from keras import backend as K

def create_lstm_nn(input_dim):
    seq = Sequential()
    # embed using a pretrained 300-d embedding
    seq.add(Embedding(vocab_size, emb_dim, mask_zero=True, weights=[embedding_weights]))
    # encode via LSTM
    seq.add(LSTM(128))
    seq.add(Dropout(0.3))
    return seq

lstm_nn = create_lstm_nn(input_dim)

input_a = Input(shape=(input_dim,))
input_b = Input(shape=(input_dim,))

processed_a = lstm_nn(input_a)
processed_b = lstm_nn(input_b)

cos_distance = merge([processed_a, processed_b], mode='cos', dot_axes=1)
cos_distance = Reshape((1,))(cos_distance)
distance = Lambda(lambda x: 1-x)(cos_distance)

model = Model(input=[input_a, input_b], output=distance)

# train
rms = RMSprop()
model.compile(loss='mse', optimizer=rms)
model.fit([X1, X2], y, validation_split=0.3, batch_size=128, nb_epoch=20)

I also tried using a simple Lambda instead of the merge layer, but it gives the same result.

def cosine_distance(vests):
    x, y = vests
    x = K.l2_normalize(x, axis=-1)
    y = K.l2_normalize(y, axis=-1)
    return -K.mean(x * y, axis=-1, keepdims=True)

def cos_dist_output_shape(shapes):
    shape1, shape2 = shapes
    return (shape1[0],1)

distance = Lambda(cosine_distance, output_shape=cos_dist_output_shape)([processed_a, processed_b])

Answer 1:


NaN is a common issue in deep-learning regression. Since you are using a Siamese network, you can try the following (see the sketch after this list for points 2-4):

  1. Check your data: does it need to be normalized?
  2. Try adding a Dense layer to your network as the last layer, but be careful picking an activation function, e.g. relu.
  3. Try another loss function, e.g. contrastive loss.
  4. Lower your learning rate, e.g. 0.0001.
  5. The cos merge mode does not carefully handle division by zero, which might be the cause of the NaN.
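
As a rough, untested sketch of points 2-4 applied to the question's model (reusing its processed_a/processed_b tensors; the sigmoid head and the 0.0001 learning rate are illustrative choices, not something from the original post):

from keras import backend as K
from keras.layers import Dense, Lambda, Reshape, merge
from keras.models import Model
from keras.optimizers import RMSprop

cos_distance = merge([processed_a, processed_b], mode='cos', dot_axes=1)
cos_distance = Reshape((1,))(cos_distance)
distance = Lambda(lambda x: 1 - x)(cos_distance)

# point 2: a final Dense layer; sigmoid keeps predictions in [0, 1],
# matching the normalized similarity labels (relu is another option)
prediction = Dense(1, activation='sigmoid')(distance)

# point 3: contrastive loss (as in the Keras Siamese example); note it
# assumes binary same/different labels, so the 0-1 similarity scores
# would need to be thresholded before using it
def contrastive_loss(y_true, y_pred):
    margin = 1.0
    return K.mean(y_true * K.square(y_pred) +
                  (1 - y_true) * K.square(K.maximum(margin - y_pred, 0)))

model = Model(input=[input_a, input_b], output=prediction)
# point 4: a smaller learning rate often stops early NaNs
model.compile(loss='mse', optimizer=RMSprop(lr=0.0001))  # or loss=contrastive_loss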

It is not easy to make deep learning work perfectly.




Answer 2:


I didn't run into the NaN issue, but my loss wouldn't change. I found this fix; check it out:

def cosine_distance(shapes):
    y_true, y_pred = shapes
    def l2_normalize(x, axis):
        norm = K.sqrt(K.sum(K.square(x), axis=axis, keepdims=True))
        # clamp both numerator and denominator away from zero so the
        # division can never produce NaN or Inf
        return K.sign(x) * K.maximum(K.abs(x), K.epsilon()) / K.maximum(norm, K.epsilon())
    y_true = l2_normalize(y_true, axis=-1)
    y_pred = l2_normalize(y_pred, axis=-1)
    return K.mean(1 - K.sum((y_true * y_pred), axis=-1))
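
For reference, a sketch (my adaptation, untested) of wiring this guarded normalization into the question's model as the Lambda merge. The names pairwise_cosine_distance and l2_normalize_safe are mine, and keepdims=True keeps one distance per sample instead of the batch-wide mean the snippet above returns:

from keras import backend as K
from keras.layers import Lambda

def l2_normalize_safe(x, axis):
    # guarded normalization from the snippet above: the epsilon floors
    # prevent the division by zero that produces NaN
    norm = K.sqrt(K.sum(K.square(x), axis=axis, keepdims=True))
    return K.sign(x) * K.maximum(K.abs(x), K.epsilon()) / K.maximum(norm, K.epsilon())

def pairwise_cosine_distance(tensors):
    a, b = tensors
    a = l2_normalize_safe(a, axis=-1)
    b = l2_normalize_safe(b, axis=-1)
    # keepdims so each sample gets its own distance, shape (batch, 1)
    return 1 - K.sum(a * b, axis=-1, keepdims=True)

distance = Lambda(pairwise_cosine_distance,
                  output_shape=lambda shapes: (shapes[0][0], 1))([processed_a, processed_b])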


Source: https://stackoverflow.com/questions/39289050/sentence-similarity-using-keras
