How to make correct predictions for a JPEG image in Cloud ML

CloudML requires you to feed the graph with a batch of images.

I'm pretty sure this is the issue with reusing retrain.py. See the sess.run call in that code: it feeds a single image at a time. Compare that with the batched JPEG placeholder in the flowers sample; a rough sketch of the idea follows.
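For illustration only (this is not the exact flowers-sample code; the tensor names and the 299x299 size are assumptions), a batched JPEG input looks roughly like this:

import tensorflow as tf

# A batch of undecoded JPEG strings: shape (None,) lets Cloud ML feed
# one or more instances per request.
image_bytes = tf.placeholder(tf.string, shape=[None], name='input_jpeg')

def decode_and_resize(jpeg):
  # Decode one JPEG and resize it to the size the model expects.
  image = tf.image.decode_jpeg(jpeg, channels=3)
  image = tf.image.convert_image_dtype(image, tf.float32)
  return tf.image.resize_images(image, [299, 299])

# Apply the per-image decoding to every element of the batch.
images = tf.map_fn(decode_and_resize, image_bytes, dtype=tf.float32)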

Note that three slightly different TF graphs need to be constructed: Training, Evaluation, and Prediction. See this recent blog post for details. The Training and Evaluation graphs directly consume the embeddings produced by preprocessing, so they do not contain an Inception graph. The Prediction graph, on the other hand, needs to take image bytes as input and run Inception to extract the embeddings.

For online prediction, you need to export the prediction graph. You should also specify the outputs and a key for the inputs.

To build the prediction graph:

def build_prediction_graph(self):
  """Builds prediction graph and registers appropriate endpoints."""
  tensors = self.build_graph(None, 1, GraphMod.PREDICT)
  keys_placeholder = tf.placeholder(tf.string, shape=[None])
  inputs = {
      'key': keys_placeholder.name,
      'image_bytes': tensors.input_jpeg.name
  }

  tf.add_to_collection('inputs', json.dumps(inputs))

  # To extract the id, we need to add the identity function.
  keys = tf.identity(keys_placeholder)
  outputs = {
      'key': keys.name,
      'prediction': tensors.predictions[0].name,
      'scores': tensors.predictions[1].name
  }
  tf.add_to_collection('outputs', json.dumps(outputs))
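As far as I understand, these JSON-serialized 'inputs' and 'outputs' collections are what the prediction service uses at serving time to map the request fields ("key", "image_bytes") and the response fields ("key", "prediction", "scores") to the corresponding tensors.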

To export the prediction graph:

def export(self, last_checkpoint, output_dir):
  # Build and save prediction meta graph and trained variable values.
  with tf.Session(graph=tf.Graph()) as sess:        
    self.build_prediction_graph()
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    self.restore_from_checkpoint(sess, self.inception_checkpoint_file,
                                 last_checkpoint)
    saver = tf.train.Saver()
    saver.export_meta_graph(filename=os.path.join(output_dir, 'export.meta'))
    saver.save(sess, os.path.join(output_dir, 'export'), write_meta_graph=False)

last_checkpoint must point to the latest checkpoint file from training:

self.model.export(tf.train.latest_checkpoint(self.train_path), self.model_path)

In your post, you indicated that your inputs collection has only the "image_bytes" tensor alias. However, in the code where you build the request, you are including two inputs: "key" and "image_bytes". So, my suggestion would be to either remove "key" from the request or add "key" to the inputs collection, as in the code above.
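For reference, and assuming you keep both aliases, an online prediction request body could be built roughly like this (the file name is just a placeholder):

import base64
import json

with open('image.jpg', 'rb') as f:
  encoded = base64.b64encode(f.read()).decode('utf-8')

# Byte inputs whose alias ends in "_bytes" are sent base64-encoded under a
# "b64" key; the "key" value is passed through to the corresponding output.
request = {'instances': [{'key': '0', 'image_bytes': {'b64': encoded}}]}
print(json.dumps(request))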

The second issue is that the shape of 'DecodeJpeg/contents:0' is (). For Cloud ML, the input needs a shape like (None,) so that a batch of images can be fed in.

There are some suggestions in other answers to your question here on how you might follow the public posts to modify your graph, but offhand these are the two issues I can spot.

Let us know if you encounter any further issues.
