Question
We deploy a lot of our models from TF1 by saving them through graph freezing:
tf.train.write_graph(self.session.graph_def, some_path, "graph.pbtxt")  # write_graph also needs a filename; "graph.pbtxt" is a placeholder

# get graph definition with weights
output_graph_def = tf.graph_util.convert_variables_to_constants(
    self.session,                       # The session is used to retrieve the weights
    self.session.graph.as_graph_def(),  # The graph_def is used to retrieve the nodes
    output_nodes,                       # The output node names are used to select the useful nodes
)

# optimize graph
if optimize:
    output_graph_def = optimize_for_inference_lib.optimize_for_inference(
        output_graph_def, input_nodes, output_nodes, tf.float32.as_datatype_enum
    )

with open(path, "wb") as f:
    f.write(output_graph_def.SerializeToString())
and then loading them through:
with tf.Graph().as_default() as graph:
    with graph.device("/" + args[name].processing_unit):
        tf.import_graph_def(graph_def, name="")
        for key, value in inputs.items():
            self.input[key] = graph.get_tensor_by_name(value + ":0")
We would like to save TF2 models in a similar way: one protobuf file that includes both the graph and the weights. How can I achieve this?
I know that there are some methods for saving:
keras.experimental.export_saved_model(model, 'path_to_saved_model')
Which is experimental and creates multiple files :(.
model.save('path_to_my_model.h5')
Which saves in h5 format :(.
tf.saved_model.save(self.model, "test_x_model")
Which again saves multiple files :(.
Answer 1:
I use TF2 to convert a model like this:
- pass keras.callbacks.ModelCheckpoint(save_weights_only=True) to model.fit and save a checkpoint while training (see the sketch after the code below);
- after training, self.model.load_weights(self.checkpoint_path) loads the checkpoint, and self.model.save(h5_path, overwrite=True, include_optimizer=False) converts it to h5;
- convert h5 to pb:
import logging
import tensorflow as tf
from tensorflow.compat.v1 import graph_util
from tensorflow.python.keras import backend as K
from tensorflow import keras

# necessary !!!
tf.compat.v1.disable_eager_execution()

h5_path = '/path/to/model.h5'
model = keras.models.load_model(h5_path)
model.summary()

# save pb
with K.get_session() as sess:
    output_names = [out.op.name for out in model.outputs]
    input_graph_def = sess.graph.as_graph_def()
    for node in input_graph_def.node:
        node.device = ""
    graph = graph_util.remove_training_nodes(input_graph_def)
    graph_frozen = graph_util.convert_variables_to_constants(sess, graph, output_names)
    # tf.io.write_graph takes the directory and the filename as separate arguments
    tf.io.write_graph(graph_frozen, '/path/to/pb', 'model.pb', as_text=False)
logging.info("save pb successfully!")
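To illustrate the first two steps (checkpointing during fit, then exporting h5), here is a minimal self-contained sketch; the toy model, dummy data, and paths are placeholders, not part of the original answer:

import numpy as np
import tensorflow as tf

# placeholder model and data, just to make the sketch runnable
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')
x_train = np.random.rand(32, 4).astype('float32')
y_train = np.random.rand(32, 1).astype('float32')

# step 1: save weights-only checkpoints while training
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    '/path/to/checkpoint', save_weights_only=True)
model.fit(x_train, y_train, epochs=2, callbacks=[checkpoint_cb])

# step 2: restore the checkpoint and export h5 without the optimizer
model.load_weights('/path/to/checkpoint')
model.save('/path/to/model.h5', overwrite=True, include_optimizer=False)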
Answer 2:
The way I do it at the moment is TF2 -> SavedModel (via keras.experimental.export_saved_model) -> frozen_graph.pb (via the freeze_graph tool, which can take a SavedModel as input). I don't know if this is the "recommended" way to do this, though.
Also, I still don't know how to load back the frozen model and run inference "the TF2 way" (aka no graphs, sessions, etc).
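For the loading side, one recipe that avoids explicit graphs and sessions at the call site is tf.compat.v1.wrap_function; a sketch, where the path and the tensor names 'x:0' / 'Identity:0' are placeholders you would replace with your own input/output names:

import tensorflow as tf

def wrap_frozen_graph(graph_def, inputs, outputs):
    # import the GraphDef inside a wrapped tf.function, then prune it
    # down to a callable with the given input/output tensors
    def _imports_graph_def():
        tf.compat.v1.import_graph_def(graph_def, name="")
    wrapped = tf.compat.v1.wrap_function(_imports_graph_def, [])
    return wrapped.prune(
        tf.nest.map_structure(wrapped.graph.as_graph_element, inputs),
        tf.nest.map_structure(wrapped.graph.as_graph_element, outputs),
    )

graph_def = tf.compat.v1.GraphDef()
with open('/path/to/pb/model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# 'x:0' / 'Identity:0' are placeholder tensor names from the frozen graph
infer = wrap_frozen_graph(graph_def, inputs='x:0', outputs='Identity:0')
# infer is a ConcreteFunction, callable eagerly: result = infer(tf.constant(batch))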
You may also take a look at keras.save_model('path', save_format='tf'), which seems to produce checkpoint files (you still need to freeze them, though, so I personally think the SavedModel path is better).
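If you want the freezing step itself on TF2 APIs, there is also convert_variables_to_constants_v2 (note that it lives in an internal tensorflow.python module, so treat this as an unofficial sketch); it assumes a Keras model with a single input, and the paths are placeholders:

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2,
)

model = tf.keras.models.load_model('/path/to/model.h5')  # placeholder path

# get a ConcreteFunction with a fixed input signature from the Keras model
full_model = tf.function(lambda x: model(x))
concrete_func = full_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))

# fold the variables into the graph as constants
frozen_func = convert_variables_to_constants_v2(concrete_func)

# single .pb file containing both the graph and the (now constant) weights
tf.io.write_graph(frozen_func.graph.as_graph_def(),
                  logdir='/path/to/pb', name='frozen_graph.pb', as_text=False)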
Source: https://stackoverflow.com/questions/58119155/freezing-graph-to-pb-in-tensorflow2