Sequential model gives a different result at every run

Submitted by ♀尐吖头ヾ on 2019-12-11 06:09:25

Question


I have a Python script that builds a Keras Sequential model. Every time I run it I get different results, without any change to the script. Kindly have a look at the script and tell me where I am going wrong. Please help.

import numpy as np
import pandas
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Embedding, SpatialDropout1D, GaussianNoise, LSTM, Dense, Activation
from keras.regularizers import l2
from keras.optimizers import Adam

thedata = pandas.read_csv("C:/User/Downloads/LSTM/data.csv", sep=',', header='infer', names=None)

np.random.seed(1337)

x = thedata['Review']
y = thedata['Polarity_Numeral']
x = x.iloc[:].values
y = y.iloc[:].values

tk = Tokenizer(num_words=40000, lower=True, split=" ")
tk.fit_on_texts(x)
x = tk.texts_to_sequences(x)    
max_len = 120
x = pad_sequences(x, maxlen=max_len)
max_features = 40000
testx = x[51000:52588]
print (testx)
testy = y[51000:52588]
x = x[0:50999]
y = y[0:50999]


model = Sequential()
model.add(Embedding(max_features, 128, input_length=max_len))
model.add(SpatialDropout1D(0.3))
model.add(GaussianNoise(0.2))
model.add(LSTM(128, dropout=0.3, recurrent_dropout=0.3, return_sequences=False))
model.add(Dense(1, kernel_regularizer=l2(0.2)))
model.add(Activation('sigmoid'))
model.summary()
adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.00)
model.compile(loss='binary_crossentropy', optimizer=adam,metrics = ['accuracy'] )
model_history = model.fit(x, y=y, batch_size=64, epochs=1, verbose=1,validation_split = 0.2)
model.save('C:/User/Downloads/model.h5')
model.save_weights('C:/User/Downloads/weight_model.h5')

predictions = model.predict(testx)
print (predictions)

On the first run I get about 57%, on the second about 53%, on the third about 55%. Every time the result changes randomly. Thanks for the help!
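A minimal sketch of one way such a percentage could be computed on the held-out slice defined above (assuming the compiled accuracy metric is what is being reported):

# Sketch: measure accuracy on the held-out slice with the compiled metric.
loss, acc = model.evaluate(testx, testy, batch_size=64, verbose=0)
print('test accuracy: %.2f%%' % (acc * 100))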


Answer 1:


This code is for the TensorFlow backend.

This happens because the weights are initialised using random numbers, so you will get different results every time; this is expected behaviour. To get reproducible results you need to set the random seeds as follows:

import os
import random as rn

import numpy as np
import tensorflow as tf

os.environ['PYTHONHASHSEED'] = '0'

# Setting the seed for numpy-generated random numbers
np.random.seed(37)

# Setting the seed for python random numbers
rn.seed(1254)

# Setting the graph-level random seed.
tf.set_random_seed(89)

from keras import backend as K

session_conf = tf.ConfigProto(
      intra_op_parallelism_threads=1,
      inter_op_parallelism_threads=1)

#Force Tensorflow to use a single thread
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)

K.set_session(sess)

# Rest of the code follows from here on ...



Answer 2:


If you are running "exactly" that code, be aware that you are creating an entirely new model on every run.

You're not loading a model, you're not adding your own weights to the model. You're simply creating a new model, with an entirely new random set of weights.

So, yes, it will produce different results. There is nothing wrong.


You should probably use some kind of "load saved model" step (perhaps model.load_weights()) if you want to keep using the same model (in case you have the model saved somewhere).

Or you should call set_weights() at some point after creating the model (if you know which weights you want, or if you have them saved).

Or you can use the initializers in each layer (as mentioned in another answer), if you want a new model with known weights.
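For instance, a minimal sketch of the "load saved model" route, reusing the files the question's script already writes (the paths and the testx variable are taken from the question):

# Sketch: reload the trained model saved earlier, so repeated predictions
# come from the same weights instead of a freshly initialised model.
from keras.models import load_model

model = load_model('C:/User/Downloads/model.h5')   # restores architecture + weights
predictions = model.predict(testx)                 # deterministic for fixed weights

# Or rebuild the architecture yourself and restore only the weights:
# model.load_weights('C:/User/Downloads/weight_model.h5')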




Answer 3:


With a quick look I don't see anything wrong. You should remember that when you compile your model, Keras randomly initializes all the weights in it (you can also specify how you would like this to be done, or make it non-random, but the default is usually fine). So every time you compile you will get different weights and different results; given enough epochs they should all converge to the same result.
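For example, a minimal sketch of specifying that initialization explicitly, assuming Keras 2 initializer arguments with a fixed seed:

# Sketch: seed the initializers so a newly built layer always starts from
# the same weights, making a freshly built model reproducible.
from keras.layers import Dense
from keras.initializers import glorot_uniform

fixed_dense = Dense(1,
                    kernel_initializer=glorot_uniform(seed=42),  # seeded weight init
                    bias_initializer='zeros')                    # deterministic bias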



Source: https://stackoverflow.com/questions/44305760/sequential-model-give-a-different-result-at-every-run
