Using Instance Keys for Batch Prediction with TensorFlow

*爱你&永不变心* submitted on 2021-01-28 06:08:31

Question


I am trying to figure out how to do batch prediction using Google Cloud. Specifically, I'm looking to do object detection, going from a Faster R-CNN TensorFlow checkpoint to a frozen graph/SavedModel.

My issue is that I need to be able to recover some kind of ID for my input images, perhaps an index or a filename. I'm not entirely sure how to do this in my situation, since this link mentions using instance keys, and the only relevant examples I've found regarding instance keys use JSON as the input format. As I am supposed to use TFRecords for input to my saved model, this would seem to be an issue. I also consulted the prediction guide, but was still confused.

In short, does anybody have any tips as to what file(s) I should edit (export_inference_graph.py?) to preserve some sort of indices/ordering of my input images for batch prediction? I am using the Object Detection API for reference. Thanks!


Answer 1:


Batch Prediction doesn't support instance keys by itself. You have to change the inference graph so that it outputs something from the input as a key, which means you need to include keys in your inputs, such as an image ID or index. One way to do this is to change your input from TFRecord to JSON and add an ID as the key. E.g., your input would then look like:

{"key": 1, "image": {"b64": "base64encodedstringabce"}}
{"key": 2, "image": {"b64": "base64encodedstringfg1d"}}
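For illustration, a JSON Lines input in that shape could be generated with a short script like the one below (a sketch; the function name and the idea of using a running integer as the key are my own choices, not anything mandated by the service):

```python
import base64
import json


def make_prediction_input(image_paths):
    """Build a JSON Lines string for batch prediction, one instance per image.

    Each instance carries an integer "key" so that predictions can be
    matched back to the source image afterwards.
    """
    lines = []
    for key, path in enumerate(image_paths, start=1):
        with open(path, "rb") as f:
            # The "b64" wrapper tells the prediction service the value
            # is base64-encoded binary data.
            encoded = base64.b64encode(f.read()).decode("utf-8")
        lines.append(json.dumps({"key": key, "image": {"b64": encoded}}))
    return "\n".join(lines)
```

Each line is one instance; once the graph forwards the key (as described below), the same key appears in the corresponding output record.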

This would of course make your input much larger. Alternatively, if you use the tf.Example proto in your TFRecord, you can add an extra feature whose value is passed through unchanged from the input to the output.

Here is a way to change the inference graph to pass a feature through from input to output: https://github.com/GoogleCloudPlatform/cloudml-samples/pull/158
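The general pattern is to add a key placeholder to the serving inputs and forward it to the outputs with an identity op. A minimal sketch (the function and tensor names here are illustrative, using TF 1.x-style graph APIs via tf.compat.v1, which is what the Object Detection API export path of that era relied on):

```python
import tensorflow as tf

# Placeholders require graph mode, not eager execution.
tf.compat.v1.disable_eager_execution()


def add_key_passthrough(inputs, outputs):
    """Add a "key" placeholder that is copied verbatim to the outputs.

    `inputs` and `outputs` are the tensor dicts you would pass to
    tf.compat.v1.saved_model.predict_signature_def when exporting
    the SavedModel.
    """
    key = tf.compat.v1.placeholder(tf.string, shape=[None], name="key")
    inputs = dict(inputs, key=key)
    # tf.identity makes the key an explicit graph output, so the
    # prediction service returns it alongside the real predictions.
    outputs = dict(outputs, key=tf.identity(key, name="key_out"))
    return inputs, outputs
```

Because the key is just another batched tensor, each output record carries the key of the instance that produced it, regardless of how the service orders or shards the batch.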



Source: https://stackoverflow.com/questions/50843427/using-instance-keys-for-batch-prediction-w-tensorflow
