tensorflow-lite

Converting saved_model.pb to model.tflite

旧街凉风 submitted on 2021-01-28 08:48:43
Question: TensorFlow version: 2.2.0, OS: Windows 10. I am trying to convert a saved_model.pb to a tflite file. Here is the code I am running:

import tensorflow as tf

# Convert the SavedModel directory to a TFLite flatbuffer.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir='C:\Data\TFOD\models\ssd_mobilenet_v2_quantized')
tflite_model = converter.convert()
fo = open("model.tflite", "wb")
fo.write(tflite_model)
fo.close()

This code gives an error while converting: File "C:\Users\Mr.Ace\AppData\Roaming\Python\Python38\site-packages
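For reference, a minimal way to sanity-check a converted model is to load it back with the TFLite interpreter and run one dummy inference; the sketch below only assumes the model.tflite produced above.

import numpy as np
import tensorflow as tf

# Load the converted model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

# Feed a dummy input of the right shape and run one inference.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']).shape)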

Reducing .tflite model size

吃可爱长大的小学妹 submitted on 2021-01-28 08:17:25
Question: None of the zoo .tflite models I see is more than 3 MB in size, and on an Edge TPU they run fine. However, when I train my own object detection model, the .pb file is 60 MB and the .tflite is also huge at 20 MB! It is also quantized as per below. The end result is segmentation faults on an Edge TPU object_detection model. What's causing this file to be so large? Could non-resized images being fed into the model cause it to be large (some photos were 4096×2160 and not resized)? From object
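For context, post-training full-integer quantization is the usual way to get an Edge TPU-compatible model at roughly a quarter of the float size; a minimal sketch is below, where the SavedModel path, input size, and the random calibration data are placeholders you would replace with your own.

import numpy as np
import tensorflow as tf

def representative_dataset():
    # Placeholder calibration data: replace with ~100 real preprocessed images.
    for _ in range(100):
        yield [np.random.rand(1, 300, 300, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("exported_model")  # assumed path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer ops so the Edge TPU compiler can map the model.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)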

Converting model to tflite with SELECT_TF_OPS cannot convert ops HashTableV2 + others

孤者浪人 submitted on 2021-01-28 05:40:31
Question: I'm trying to convert openimages_v4/ssd/mobilenet_v2 to tflite using the following code, as suggested here:

import tensorflow as tf

MODEL_DIR = 'openimages_v4_ssd_mobilenet_v2_1'
SIGNATURE_KEYS = ['default']
SIGNATURE_TAGS = set()

saved_model = tf.saved_model.load(MODEL_DIR, tags=SIGNATURE_TAGS)
tf.saved_model.save(saved_model, 'new_model_path', signatures=saved_model.signatures)
converter = tf.lite.TFLiteConverter.from_saved_model('new_model_path', signature_keys=SIGNATURE_KEYS, tags=['serve'
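For reference, the TF-op fallback is enabled on the converter roughly as sketched below; whether HashTableV2 and the other missing ops are actually covered by the Flex delegate depends on the TensorFlow version, so this shows the pattern, not a guaranteed fix, and the output file name is a placeholder.

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('new_model_path', signature_keys=['default'])
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,   # regular TFLite kernels
    tf.lite.OpsSet.SELECT_TF_OPS,     # fall back to TensorFlow kernels (Flex delegate)
]
converter.allow_custom_ops = True     # let unmapped ops through, to be resolved at runtime
tflite_model = converter.convert()
with open('openimages_ssd_mobilenet_v2.tflite', 'wb') as f:
    f.write(tflite_model)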

TensorFlow Lite model requests a bigger buffer than necessary

和自甴很熟 submitted on 2021-01-28 05:22:19
Question: I created a custom model using Keras in TensorFlow. The version I used was TensorFlow nightly 1.13.1. I used the official tool to build the TensorFlow Lite model (the method tf.lite.TFLiteConverter.from_keras_model_file). After I created the model I reviewed the input shape and nothing seems wrong. The input and output shapes in the TensorFlow Lite model are: [{'name': 'input_1', 'index': 59, 'shape': array([ 1, 240, 240, 3], dtype=int32), 'dtype': , 'quantization': (0.0, 0)}] [{'name':
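For reference, the shape listings quoted above come from the interpreter's introspection methods; a minimal sketch is below, where the model path is a placeholder.

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="custom_model.tflite")  # placeholder path
interpreter.allocate_tensors()

# These are the calls that produce the dicts quoted in the question:
# name, index, shape, dtype and quantization parameters per tensor.
print(interpreter.get_input_details())
print(interpreter.get_output_details())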

How can I convert YOLO weights to a tflite file

不羁的心 submitted on 2021-01-28 05:19:27
Question: I want to use YOLO weights on Android, so I plan to convert the YOLO weights file to a tflite file. I run this code in the Anaconda prompt because I installed the Keras library in an env:

activate env
python convert.py yolov3.cfg yolov3.weights model_data/yolo.h5

Finally, it finished: Saved Keras model to model_data/yolo.h5. Now I'm going to convert this h5 file to a tflite file in a Jupyter notebook with this code:

model = tf.keras.models.load_model("./yolo/yolo.h5", compile=False)
converter = tf.lite.TFLiteConverter.from
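For reference, with TensorFlow 2.x the usual pattern is to load the .h5 and hand the in-memory Keras model to the converter; the input path matches the question, the output path is a placeholder, and this is only a sketch of the generic flow.

import tensorflow as tf

# Load the Keras model produced by convert.py (compile=False: only the graph is needed).
model = tf.keras.models.load_model("./yolo/yolo.h5", compile=False)

# TF 2.x: convert the in-memory model directly.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("model_data/yolo.tflite", "wb") as f:
    f.write(tflite_model)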

Convert TensorFlow model to pb

▼魔方 西西 submitted on 2021-01-27 19:02:02
Question: I have a pretrained model that I need to convert to pb. I have the following files in the folder: bert_config.json, model.ckpt-1000data, model.ckpt-10000.index, model.ckpt-1000.meta, vocab.txt. How can I convert this to pb format? Thanks.

Answer 1: You can freeze the model: TensorFlow: How to freeze a model and serve it with a python API

import os, argparse
import tensorflow as tf

# The original freeze_graph function
# from tensorflow.python.tools.freeze_graph import freeze_graph
dir = os.path.dirname(os
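A minimal sketch of the freezing step the answer refers to, using the TF 1.x-style APIs; the checkpoint prefix and the output node name below are assumptions that must be adjusted to the actual graph.

import tensorflow as tf

ckpt_prefix = "model.ckpt-1000"     # assumed checkpoint prefix
output_nodes = ["output_node"]      # hypothetical output node name(s) in the graph

with tf.compat.v1.Session() as sess:
    # Rebuild the graph from the .meta file and restore the variables.
    saver = tf.compat.v1.train.import_meta_graph(ckpt_prefix + ".meta")
    saver.restore(sess, ckpt_prefix)
    # Bake the variables into constants and write a single frozen .pb.
    frozen_graph = tf.compat.v1.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_nodes)
    with tf.io.gfile.GFile("frozen_model.pb", "wb") as f:
        f.write(frozen_graph.SerializeToString())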

Reducing TFLite model size?

一世执手 submitted on 2021-01-27 12:41:44
Question: I'm currently making a multi-label image classification model by following this guide (it uses Inception as the base model): https://towardsdatascience.com/multi-label-image-classification-with-inception-net-cbb2ee538e30 After converting from .pb to .tflite, the model is only approximately 0.3 MB smaller. Here is my conversion code:

toco \
  --graph_def_file=optimized_graph.pb \
  --output_file=output/optimized_graph.tflite \
  --output_format=TFLITE \
  --input_shape=1,299,299,3 \
  --input_array=Mul \
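If size is the goal, weight-only (dynamic-range) quantization typically shrinks the file to roughly a quarter. A sketch of the Python-side equivalent for this frozen graph is below; "Mul" comes from the toco flags above, while the output tensor name is a placeholder, and on older TF 1.x versions the flag is converter.post_training_quantize = True instead of optimizations.

import tensorflow as tf

# TF 1.x-style converter for a frozen GraphDef.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="optimized_graph.pb",
    input_arrays=["Mul"],
    output_arrays=["final_result"],          # placeholder output tensor name
    input_shapes={"Mul": [1, 299, 299, 3]})
# Dynamic-range quantization: weights stored as int8, ~4x smaller file.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
with open("output/optimized_graph_quant.tflite", "wb") as f:
    f.write(tflite_model)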

tflite quantized inference very slow

◇◆丶佛笑我妖孽 submitted on 2021-01-27 04:14:36
Question: I am trying to convert a trained model from a checkpoint file to tflite. I am using tf.lite.TFLiteConverter. The float conversion went fine, with reasonable inference speed, but the inference speed of the INT8 conversion is very slow. I tried to debug by feeding in a very small network and found that inference speed for the INT8 model is generally slower than for the float model. In the INT8 tflite file, I found some tensors called ReadVariableOp, which don't exist in TensorFlow's official MobileNet tflite
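For what it's worth, a quick way to compare the two conversions is to time invoke() on each file directly on the target machine; the file names in the sketch below are placeholders.

import time
import numpy as np
import tensorflow as tf

def benchmark(model_path, runs=50):
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    data = np.zeros(inp['shape'], dtype=inp['dtype'])
    interpreter.set_tensor(inp['index'], data)
    interpreter.invoke()                      # warm-up run
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.set_tensor(inp['index'], data)
        interpreter.invoke()
    return (time.perf_counter() - start) / runs

print("float:", benchmark("model_float.tflite"))   # placeholder file names
print("int8 :", benchmark("model_int8.tflite"))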

Quantizing MobileFaceNet with TFLite failed

跟風遠走 submitted on 2021-01-05 12:50:20
Question: I am trying to find a solution to run face recognition on an AI camera, and found that MobileFaceNet (code from sirius-ai) is great as a light model! I succeeded in converting to TFLITE with the F32 format with good accuracy. However, I failed when quantizing to uint8 with the following command:

tflite_convert --output_file tf-lite/MobileFacenet_uint8_128.tflite --graph_def_file tf-lite/MobileFacenet.pb --input_arrays "input" --input_shapes "1,112,112,3" --output_arrays output --output_format TFLITE -
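For comparison, a sketch of a Python-side equivalent of that tflite_convert call is below. It assumes the graph contains fake-quant nodes from quantization-aware training, and the (mean, std) input stats are placeholders that must match the preprocessing used at training time; attribute names can differ slightly across TF versions.

import tensorflow as tf

converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="tf-lite/MobileFacenet.pb",
    input_arrays=["input"],
    output_arrays=["output"],
    input_shapes={"input": [1, 112, 112, 3]})
converter.inference_type = tf.uint8
converter.quantized_input_stats = {"input": (127.5, 127.5)}  # placeholder (mean, std)
tflite_model = converter.convert()
with open("tf-lite/MobileFacenet_uint8_128.tflite", "wb") as f:
    f.write(tflite_model)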

How to get weights in tflite using c++ api?

筅森魡賤 submitted on 2021-01-05 06:40:16
Question: I am using a .tflite model on device. The last layer is a ConditionalRandomField layer, and I need the weights of that layer to do prediction. How do I get the weights with the C++ API? Related: How can I view weights in a .tflite file? Netron or flatc doesn't meet my needs; they are too heavy on device. It seems TfLiteNode stores weights in void* user_data or void* builtin_data. How do I read them? UPDATE: Conclusion: the .tflite doesn't store the CRF weights while the .h5 does. (Maybe because they do not affect the output.) WHAT
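This is not the C++ answer the question asks for, but a quick Python-side way to check which weight tensors a .tflite file actually contains (which is how one would confirm the CRF weights were dropped); the model path is a placeholder.

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # placeholder path
interpreter.allocate_tensors()

# List every tensor in the flatbuffer; constant weight tensors show up here too.
for detail in interpreter.get_tensor_details():
    print(detail['index'], detail['name'], detail['shape'], detail['dtype'])

# A specific tensor's contents (e.g. a weights tensor) can then be read by index:
# weights = interpreter.get_tensor(some_index)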