I have exported a SavedModel
and now I wish to load it back in and make a prediction. It was trained with the following features and labels:
F1
Once the graph is loaded, it is available in the current context and you can feed input data through it to obtain predictions. Each use-case is rather different, but the addition to your code will look something like this:
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        "/job/export/Servo/1503723455"
    )
    prediction = sess.run(
        'prefix/predictions/Identity:0',
        feed_dict={
            'Placeholder:0': [20.9],
            'Placeholder_1:0': [1.8],
            'Placeholder_2:0': [0.9]
        }
    )
    print(prediction)
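To make that snippet concrete, here is a self-contained round trip you can actually run: it exports a trivial model with three unnamed placeholders, then loads it back exactly as shown above. The path, tensor names, and values are illustrative (they stand in for whatever your own export produced), and the 1.x-style APIs are reached through tf.compat.v1 so the sketch also runs under TensorFlow 2.x.

```python
import os
import tempfile

import tensorflow as tf

tf1 = tf.compat.v1  # the 1.x APIs used in this answer live here in TF 2.x

export_dir = os.path.join(tempfile.mkdtemp(), "1")

# Build a trivial graph with three unnamed placeholders, so they receive
# the default names Placeholder, Placeholder_1 and Placeholder_2.
g = tf1.Graph()
with g.as_default():
    a = tf1.placeholder(tf.float32, shape=[None])
    b = tf1.placeholder(tf.float32, shape=[None])
    c = tf1.placeholder(tf.float32, shape=[None])
    out = tf.identity(a + b + c, name="predictions/Identity")
    with tf1.Session(graph=g) as sess:
        tf1.saved_model.simple_save(
            sess, export_dir,
            inputs={"a": a, "b": b, "c": c},
            outputs={"out": out},
        )

# Load it back into a fresh graph and feed the inputs by tensor name,
# just as in the snippet above.
with tf1.Session(graph=tf1.Graph()) as sess:
    tf1.saved_model.loader.load(
        sess, [tf1.saved_model.tag_constants.SERVING], export_dir
    )
    prediction = sess.run(
        "predictions/Identity:0",
        feed_dict={
            "Placeholder:0": [20.9],
            "Placeholder_1:0": [1.8],
            "Placeholder_2:0": [0.9],
        },
    )
    print(prediction)  # the three inputs summed
```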
Here, you need to know the names of your prediction input tensors. If you did not give them a name in your serving_fn, they default to Placeholder_n, where n indexes the feature (the first unnamed placeholder is named simply Placeholder, the second Placeholder_1, and so on).
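This default naming is easy to verify: TensorFlow uniquifies unnamed ops by appending _1, _2, and so on, with the first unnamed placeholder getting no suffix at all. A small check (again via tf.compat.v1 for TF 2.x compatibility):

```python
import tensorflow as tf

tf1 = tf.compat.v1

g = tf1.Graph()
with g.as_default():
    # Three placeholders created without an explicit name argument.
    p0 = tf1.placeholder(tf.float32)
    p1 = tf1.placeholder(tf.float32)
    p2 = tf1.placeholder(tf.float32)

# The ":0" suffix selects the op's first output tensor.
print(p0.name, p1.name, p2.name)  # Placeholder:0 Placeholder_1:0 Placeholder_2:0
```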
The first string argument to sess.run is the name of the prediction target tensor. This will vary based on your use case.
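Rather than guessing the target name, you can read it from the SavedModel's signature: loader.load returns the MetaGraphDef, whose signature_def maps the logical input/output keys to the concrete tensor names you pass to sess.run. A sketch under made-up names (the throwaway model, keys, and tensor names here are purely illustrative; tf.compat.v1 is used for TF 2.x compatibility):

```python
import os
import tempfile

import tensorflow as tf

tf1 = tf.compat.v1

# Export a throwaway model so the example is self-contained.
export_dir = os.path.join(tempfile.mkdtemp(), "1")
g = tf1.Graph()
with g.as_default():
    x = tf1.placeholder(tf.float32, shape=[None], name="x")
    y = tf.identity(x * 2.0, name="predictions/Identity")
    with tf1.Session(graph=g) as sess:
        tf1.saved_model.simple_save(
            sess, export_dir, inputs={"x": x}, outputs={"y": y}
        )

with tf1.Session(graph=tf1.Graph()) as sess:
    # loader.load returns the MetaGraphDef of the loaded model.
    meta_graph_def = tf1.saved_model.loader.load(
        sess, [tf1.saved_model.tag_constants.SERVING], export_dir
    )
    sig = meta_graph_def.signature_def["serving_default"]
    # Each entry maps a logical key to the tensor name to pass to sess.run.
    for key, tensor_info in sig.outputs.items():
        print(key, "->", tensor_info.name)  # y -> predictions/Identity:0
```

From the shell, saved_model_cli show --dir <export_dir> --all prints the same signature information.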