Extremely slow model load with keras


QUESTION


I have a set of Keras models (30) that I trained and saved using:

 model.save('model{0}.h5'.format(n_model))

When I try to load them using load_model, the time required for each model is quite large and grows with every additional load. The loading is done as:

import time
from keras.models import load_model

models = {}
for i in range(30):
    start = time.time()
    models[i] = load_model('model{0}.h5'.format(i))
    end = time.time()
    print("Model {0}: seconds {1}".format(i, end - start))

And the output is:

...
Model 9: seconds 7.38966012001
Model 10: seconds 9.99283003807
Model 11: seconds 9.7262301445
Model 12: seconds 9.17000102997
Model 13: seconds 10.1657290459
Model 14: seconds 12.5914049149
Model 15: seconds 11.652477026
Model 16: seconds 12.0126030445
Model 17: seconds 14.3402299881
Model 18: seconds 14.3761711121
...

Each model is really simple: 2 hidden layers with 10 neurons each (size ~50 KB). Why does loading take so long, and why does the time keep increasing? Am I missing something (e.g. a close function for the model)?

SOLUTION

I found out that, to speed up model loading, it is better to store the network structure and the weights in two distinct files.

The saving part:

# Save the weights and the architecture in two separate files
model.save_weights('model.h5')
model_json = model.to_json()
with open('model.json', 'w') as json_file:
    json_file.write(model_json)

The loading part:

from keras.models import model_from_json

with open('model.json', 'r') as json_file:
    loaded_model_json = json_file.read()
model = model_from_json(loaded_model_json)
model.load_weights('model.h5')

Answer 1:


I solved the problem by clearing the Keras session before each load:

from keras import backend as K
from keras.models import load_model

for i in range(...):
    K.clear_session()
    model = load_model(...)



Answer 2:


I tried K.clear_session(), and it does speed up loading each time.
However, models loaded this way cannot use the model.predict function, failing with the following error:
ValueError: Tensor Tensor("Sigmoid_2:0", shape=(?, 17), dtype=float32) is not an element of this graph.
GitHub #2397 provides a detailed discussion of this. The best solution for now is to run prediction right after loading each model, instead of loading dozens of models at the same time. After each prediction you can call K.clear_session() to release the GPU, so that the next load won't take more time.
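
A sketch of that load-predict-release pattern (the input array X and its shape are placeholders, not taken from the original post):

import numpy as np
from keras import backend as K
from keras.models import load_model

# Sketch: load each model, predict immediately, then clear the session so
# graph state from previous models does not slow down the next load.
X = np.random.rand(5, 10)  # placeholder input; the shape is an assumption
predictions = {}
for i in range(30):
    K.clear_session()  # release the previous graph / GPU memory first
    model = load_model('model{0}.h5'.format(i))
    predictions[i] = model.predict(X)  # predict while this model's graph is still alive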




Answer 3:


I did it this way:

from keras.models import load_model
from keras_contrib.layers import CRF
from keras_contrib.losses import crf_loss
from keras_contrib.metrics import crf_viterbi_accuracy

# To save the model
model.save('my_model_01.hdf5')

# Custom objects needed to load a persisted model that uses the CRF layer
custom_objects = {'CRF': CRF, 'crf_loss': crf_loss, 'crf_viterbi_accuracy': crf_viterbi_accuracy}

# To load the model
model1 = load_model("/home/abc/my_model_01.hdf5", custom_objects=custom_objects)
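
For completeness, a sketch of how such a model could be built and compiled before saving, so that the custom_objects above match the layer, loss and metric actually used (the architecture and all sizes below are made up for illustration):

from keras.models import Sequential
from keras.layers import Embedding
from keras_contrib.layers import CRF
from keras_contrib.losses import crf_loss
from keras_contrib.metrics import crf_viterbi_accuracy

# Sketch: a minimal sequence-tagging model ending in a CRF layer; the vocabulary
# size, embedding size, sequence length and tag count are assumptions.
n_words, n_tags = 1000, 17
model = Sequential()
model.add(Embedding(n_words, 32, input_length=50))
model.add(CRF(n_tags))
model.compile(optimizer='rmsprop', loss=crf_loss, metrics=[crf_viterbi_accuracy])
model.save('my_model_01.hdf5')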


Source: https://stackoverflow.com/questions/47455397/extremely-slow-model-load-with-keras
