Converting pretrained model from tfhub to tflite


Question


I'm trying to convert openimages_v4/ssd/mobilenet_v2 to tflite using:

$ pip3 install tensorflow==2.4.0
$ tflite_convert --saved_model_dir=openimages_v4_ssd_mobilenet_v2_1 --output_file=/tmp/openimages_v4_ssd_mobilenet_v2_1.tflite

but it's giving this error:

<stacktrace snipped ..>
RuntimeError: MetaGraphDef associated with tags {'serve'} could not be found in SavedModel. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: `saved_model_cli`
available_tags: [set()]

The output from saved_model_cli:

# saved_model_cli show --dir openimages_v4_ssd_mobilenet_v2_1 --all
2021-01-09 23:32:57.635704: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-01-09 23:32:57.635772: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.

MetaGraphDef with tag-set: '' contains the following SignatureDefs:

signature_def['default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['images'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, -1, -1, 3)
        name: hub_input/image_tensor:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['detection_boxes'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 4)
        name: hub_input/strided_slice:0
    outputs['detection_class_entities'] tensor_info:
        dtype: DT_STRING
        shape: (-1, 1)
        name: hub_input/index_to_string_Lookup:0
    outputs['detection_class_labels'] tensor_info:
        dtype: DT_INT64
        shape: (-1, 1)
        name: hub_input/strided_slice_2:0
    outputs['detection_class_names'] tensor_info:
        dtype: DT_STRING
        shape: (-1, 1)
        name: hub_input/index_to_string_1_Lookup:0
    outputs['detection_scores'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 1)
        name: hub_input/strided_slice_1:0
  Method name is:

I also tried with tensorflow 1.15.0 and got the same error.

Would retraining the openimages_v4/ssd/mobilenet_v2 model with a newer version of tensorflow help? How can I find the original code or tensorflow version used to train that model?


Answer 1:


The default value for tags is 'serve', and the default value for signature_keys is 'serving_default'. You can override them using the tags and signature_keys params in the Python API; see https://www.tensorflow.org/lite/api_docs/python/tf/lite/TFLiteConverter#from_saved_model

EDIT: Adding details on the failure after passing the correct tags and signature keys.
EDIT2: Updated sample code.

This looks like an old model that was saved with an older TensorFlow version.

First, let's fix this SavedModel version issue by re-saving the model:

import tensorflow as tf

MODEL_DIR = 'model_path'
SIGNATURE_KEYS = ['default']
SIGNATURE_TAGS = set()  # this model's MetaGraphDef has an empty tag-set

# Load the old model with its empty tag-set, then re-save it so the
# signatures end up under the standard 'serve' tag.
saved_model = tf.saved_model.load(MODEL_DIR, tags=SIGNATURE_TAGS)
tf.saved_model.save(saved_model, 'new_model_path', signatures=saved_model.signatures)

# You can now convert like this.
converter = tf.lite.TFLiteConverter.from_saved_model(
      'new_model_path', signature_keys=SIGNATURE_KEYS, tags=['serve'])

Now if you try converting, you won't see this problem, but you will see a new issue :) From the error message log there are two points in the summary:

Some ops are not supported by the native TFLite runtime, you can enable TF kernels fallback using TF Select. See instructions: https://www.tensorflow.org/lite/guide/ops_select
Flex ops: TensorArrayGatherV3, TensorArrayReadV3, TensorArrayScatterV3, TensorArraySizeV3, TensorArrayV3, TensorArrayWriteV3

and

Some ops in the model are custom ops, See instructions to implement custom ops: https://www.tensorflow.org/lite/guide/ops_custom
Custom ops: HashTableV2, LookupTableFindV2, LookupTableImportV2

The new issue is that this model uses ops that TFLite doesn't support at the moment, for example TensorArray and HashTable ops.

Some of these ops can be supported using TF Select mode (see the ops_select guide linked above). The other ops, HashTableV2, LookupTableFindV2, LookupTableImportV2, are available in TFLite as custom ops; see this answer on how to enable them.
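Putting both points together, here is a minimal sketch of what the converter setup could look like, continuing from the converter object created above. The exact flag choices are an assumption on my part, not part of the original answer, and the custom hash-table ops still need to be provided at runtime as described in the linked answer.

# Hedged sketch: allow TF kernels (Flex ops) for the TensorArray* ops and
# pass the hash-table ops through as custom ops.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # regular TFLite builtin ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # TF Select fallback for unsupported ops
]
converter.allow_custom_ops = True    # let HashTableV2 / LookupTable* ops through

tflite_model = converter.convert()
with open('/tmp/openimages_v4_ssd_mobilenet_v2_1.tflite', 'wb') as f:
    f.write(tflite_model)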

Also, the TFLite team is working on adding support for hash tables as builtin ops, so soon you won't need these extra steps.



Source: https://stackoverflow.com/questions/65650859/converting-pretrained-model-from-tfhub-to-tflite
