Results not reproducible with Keras and TensorFlow in Python

春和景丽 2020-12-16 18:27

I have a problem: I am not able to reproduce my results with Keras and TensorFlow.

It seems that a workaround has recently been published on the Keras
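A toy, framework-free sketch of the underlying issue (the `init_weights` helper below is hypothetical, standing in for what a Keras weight initializer does internally):

```python
import numpy as np

def init_weights(n_in, n_out, seed=None):
    # Stand-in for a layer initializer: draws "weights" from a
    # seeded (or unseeded) random stream, much like Keras initializers
    # draw from the global NumPy/TF random state.
    rng = np.random.RandomState(seed)
    return rng.uniform(-0.05, 0.05, size=(n_in, n_out))

a = init_weights(4, 2, seed=0)
b = init_weights(4, 2, seed=0)
assert np.array_equal(a, b)   # same seed -> identical initial weights

c = init_weights(4, 2)        # no seed -> weights differ between runs
```

If the initial weights differ between runs, the trained model (and hence the evaluation scores) will differ too, which is why every seeding recipe starts by pinning these random streams.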

4 Answers
  •  春和景丽
    2020-12-16 19:10

    I had exactly the same problem and managed to solve it by closing and restarting the TensorFlow session every time I ran the model. In your case it would look like this:

    import numpy as np
    import tensorflow as tf
    from keras import backend as K
    from keras.models import Sequential
    from keras.layers import Embedding, LSTM, Dense
    
    # START A NEW TF SESSION (TF 1.x API; in TF 2.x use tf.random.set_seed)
    np.random.seed(0)
    tf.set_random_seed(0)
    sess = tf.Session(graph=tf.get_default_graph())
    K.set_session(sess)
    
    embedding_vector_length = 32
    neurons = 91
    epochs = 1
    model = Sequential()
    model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length))
    model.add(LSTM(neurons))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='mean_squared_logarithmic_error', optimizer='adam', metrics=['accuracy'])
    print(model.summary())
    model.fit(X_train, y_train, epochs=epochs, batch_size=64)
    # Final evaluation of the model
    scores = model.evaluate(X_test, y_test, verbose=0)
    print("Accuracy: %.2f%%" % (scores[1] * 100))
    
    # CLOSE TF SESSION
    K.clear_session()
    

    I ran the following code and obtained reproducible results on the GPU with the TensorFlow backend:

    from datetime import datetime
    import numpy as np
    import tensorflow as tf
    from keras import backend as K
    from keras.models import Model
    from keras.layers import Input, Dense
    
    print(datetime.now())
    for i in range(10):
        # Fresh seeds and a fresh session for every run, so each of the
        # ten evaluations below starts from the same random state.
        np.random.seed(0)
        tf.set_random_seed(0)
        sess = tf.Session(graph=tf.get_default_graph())
        K.set_session(sess)
    
        n_classes = 3
        n_epochs = 20
        batch_size = 128
    
        task = Input(shape=x.shape[1:])
        h = Dense(100, activation='relu', name='shared')(task)
        h1 = Dense(100, activation='relu', name='single1')(h)
        output1 = Dense(n_classes, activation='softmax')(h1)
    
        model = Model(task, output1)
        model.compile(loss='categorical_crossentropy', optimizer='Adam')
        model.fit(x_train, y_train_onehot, batch_size=batch_size, epochs=n_epochs, verbose=0)
        print(model.evaluate(x=x_test, y=y_test_onehot, batch_size=batch_size, verbose=0))
        K.clear_session()
    

    And obtained this output:

    2017-10-23 11:27:14.494482
    0.489712882132
    0.489712893813
    0.489712892765
    0.489712854426
    0.489712882132
    0.489712864011
    0.486303713004
    0.489712903398
    0.489712892765
    0.489712903398
    

    My understanding is that if you don't close and restart your TF session (which you effectively do when you run in a new kernel), repeated runs keep drawing from the same already-seeded random stream instead of replaying it from the seed, so the results drift.
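The "seeded stream" behaviour is easy to see with NumPy alone (a minimal sketch, independent of TF):

```python
import numpy as np

np.random.seed(0)
first = np.random.rand(3)    # first draw from the seeded stream

np.random.seed(0)            # reseeding replays the stream from the start
replay = np.random.rand(3)

cont = np.random.rand(3)     # no reset: the stream just advances

assert np.allclose(first, replay)    # reseeding reproduces the draw
assert not np.allclose(first, cont)  # continuing the stream does not
```

Restarting the session and reseeding before each run plays the same role as the second `np.random.seed(0)` call here.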
