tensorflow-lite

TensorFlow Lite quantization fails to improve inference latency

Submitted by ぐ巨炮叔叔 on 2019-12-06 14:06:17
TensorFlow's website claims that quantization provides up to 3x lower latency on mobile devices: https://www.tensorflow.org/lite/performance/post_training_quantization I tried to verify this claim and found that quantized models are 45%-75% SLOWER than float models, despite being almost 4 times smaller in size. Needless to say, this is very disappointing and conflicts with Google's claims. My test uses Google's official MnasNet model: https://storage.googleapis.com/mnasnet/checkpoints/mnasnet-a1.tar.gz Here is the average latency based on 100 inference operations on a freshly rebooted phone:
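
For reference, one quick way to compare float and quantized latency is to time the Python interpreter directly. This is a minimal sketch, assuming `tf.lite.Interpreter` is available and using placeholder model file names; desktop timings will not match on-device numbers, but the relative ordering can be checked the same way:

```python
# Minimal latency comparison sketch; model paths are placeholders, not the
# exact files from the question. Timings on a desktop CPU are only indicative.
import time
import numpy as np
import tensorflow as tf

def benchmark(model_path, runs=100):
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    # Random data with the model's expected shape and dtype is enough for timing.
    dummy = np.random.random_sample(inp['shape']).astype(inp['dtype'])
    start = time.time()
    for _ in range(runs):
        interpreter.set_tensor(inp['index'], dummy)
        interpreter.invoke()
    return (time.time() - start) / runs * 1000.0  # milliseconds per inference

print('float latency (ms):', benchmark('mnasnet_float.tflite'))
print('quant latency (ms):', benchmark('mnasnet_quant.tflite'))
```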

Converting .tflite to .pb

Submitted by 人走茶凉 on 2019-12-06 13:17:46
Problem: How can I convert a .tflite (serialized flat buffer) to .pb (frozen model)? The documentation only talks about one-way conversion. The use case is: I have a model that has been trained and converted to .tflite, but unfortunately I do not have details of the model and I would like to inspect the graph. How can I do that? I don't think there is a way to restore a tflite model back to pb, as some information is lost during conversion. An indirect way I found to get a glimpse of what is inside a tflite model is to read back each of its tensors: interpreter = tf.contrib.lite.Interpreter(model_path=model_path)
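
Continuing that idea, here is a rough sketch of inspecting a .tflite file by reading back tensor metadata through the interpreter; the model path is a placeholder, and `tf.contrib.lite.Interpreter` is the TF 1.x location mentioned above (later releases expose it as `tf.lite.Interpreter`):

```python
# Sketch: dump tensor metadata from a .tflite model to get an idea of its graph.
import tensorflow as tf

model_path = 'model.tflite'  # placeholder path
interpreter = tf.contrib.lite.Interpreter(model_path=model_path)
interpreter.allocate_tensors()

# Every tensor's index, name, shape, and dtype.
for detail in interpreter.get_tensor_details():
    print(detail['index'], detail['name'], detail['shape'], detail['dtype'])

# Inputs and outputs are reported separately.
print(interpreter.get_input_details())
print(interpreter.get_output_details())
```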

Error trying to convert from saved model to tflite format

Submitted by 吃可爱长大的小学妹 on 2019-12-06 06:41:08
Question: While trying to convert a saved model to a tflite file, I get the following error: F tensorflow/contrib/lite/toco/tflite/export.cc:363] Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.contrib.lite.toco_convert(). Here is a list of operators for which you will need custom implementations: AsString,
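
As a point of reference, this is a hedged sketch of the allow_custom_ops route the error message mentions, using the TF 1.x converter API (the class is named TocoConverter in older releases; the SavedModel path is a placeholder). Note that this only silences the conversion error: the unsupported ops such as AsString still need custom implementations at runtime.

```python
# Sketch: convert a SavedModel while allowing custom (unsupported) ops.
import tensorflow as tf

converter = tf.contrib.lite.TFLiteConverter.from_saved_model('/path/to/saved_model')
converter.allow_custom_ops = True  # unsupported ops must be provided at runtime
tflite_model = converter.convert()
open('model.tflite', 'wb').write(tflite_model)
```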

Error when I convert the frozen_pb file to tflite file using toco

Submitted by 北慕城南 on 2019-12-04 21:04:40
I use the MobileNet pre-trained model for object detection. I already have the frozen_graph file, and I used a tool to find the input_arrays and output_arrays. This is my command: bazel-bin/tensorflow/contrib/lite/toco/toco \ --input_file=$(pwd)/mobilenet_v1_1.0_224/frozen_graph.pb \ --input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE \ --output_file=/tmp/mobilenet_v1_1.0_224.tflite --inference_type=FLOAT \ --input_type=FLOAT --input_arrays=image_tensor \ --output_arrays=detection_boxes,detection_scores,detection_classes,num_detections --input_shapes=1,224,224,3 While I run the command, the
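
For comparison, the same conversion can be attempted through the TF 1.x Python API. This is only a sketch: the array names and shape are copied from the command above and must correspond to tensors that actually exist in frozen_graph.pb, which is often where object-detection graphs fail to convert.

```python
# Sketch: Python-API equivalent of the toco command above (TF 1.x).
import tensorflow as tf

converter = tf.contrib.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='mobilenet_v1_1.0_224/frozen_graph.pb',
    input_arrays=['image_tensor'],
    output_arrays=['detection_boxes', 'detection_scores',
                   'detection_classes', 'num_detections'],
    input_shapes={'image_tensor': [1, 224, 224, 3]})
tflite_model = converter.convert()
open('/tmp/mobilenet_v1_1.0_224.tflite', 'wb').write(tflite_model)
```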

Error converting Facenet model .pb file to TFLITE format

Submitted by 久未见 on 2019-12-04 20:36:58
I'm trying to convert a pre-trained frozen .pb based on Inception ResNet, which I got from David Sandberg's GitHub, with the TensorFlow Lite Converter on Ubuntu using the following command: /home/nils/.local/bin/tflite_convert --output_file=/home/nils/Documents/frozen.tflite --graph_def_file=/home/nils/Documents/20180402-114759/20180402-114759.pb --input_arrays=input --output_arrays=embeddings --input_shapes=1,160,160,3 However, I get the following error: 2018-12-03 15:03:16.807431: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not
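
Before running tflite_convert, it can help to confirm that the names passed to --input_arrays and --output_arrays really are node names in the frozen graph. A small sketch, assuming TF 1.x and the .pb path from the command above:

```python
# Sketch: list candidate input/output nodes in a frozen GraphDef.
import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile('/home/nils/Documents/20180402-114759/20180402-114759.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    # Print placeholders (likely inputs) and the names we intend to use.
    if node.op == 'Placeholder' or node.name in ('input', 'embeddings'):
        print(node.name, node.op)
```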

How to use outputs of posenet model in tflite

Submitted by *爱你&永不变心* on 2019-12-04 18:41:28
I am using the tflite model for posenet from here. It takes a 1*353*257*3 input image and returns 4 arrays of dimensions 1*23*17*17, 1*23*17*34, 1*23*17*64 and 1*23*17*1. The model has an output stride of 16. How can I get the coordinates of all 17 pose points on my input image? I have tried printing the confidence scores from the heatmap of the out1 array, but I get values close to 0.00 for each pixel. Code is given below: public class MainActivity extends AppCompatActivity { private static final int CAMERA_REQUEST = 1888; private ImageView imageView; private static final int MY_CAMERA_PERMISSION
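
For orientation, this is a hedged Python sketch of the usual single-pose decoding over the heatmap and offset arrays described above. The sigmoid step and the offset layout are assumptions based on standard PoseNet post-processing rather than code from the question (which is Android/Java); raw heatmap values are logits, which is why they print near 0.00 before applying a sigmoid.

```python
# Sketch: decode 17 keypoints from PoseNet heatmaps [1,H,W,17] and offsets [1,H,W,34].
import numpy as np

OUTPUT_STRIDE = 16  # stated output stride of the model

def decode_keypoints(heatmaps, offsets):
    heatmaps = 1.0 / (1.0 + np.exp(-heatmaps[0]))  # logits -> confidence scores
    offsets = offsets[0]
    num_kp = heatmaps.shape[-1]
    keypoints = []
    for k in range(num_kp):
        hm = heatmaps[:, :, k]
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        # Offsets are assumed stored as 17 y-offsets followed by 17 x-offsets.
        py = y * OUTPUT_STRIDE + offsets[y, x, k]
        px = x * OUTPUT_STRIDE + offsets[y, x, k + num_kp]
        keypoints.append((px, py, hm[y, x]))  # (x, y, confidence) in input-image pixels
    return keypoints
```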

Tensorflow Lite toco --mean_values --std_values?

Submitted by 爷,独闯天下 on 2019-12-04 18:16:12
So I have trained a TensorFlow model with fake quantization and froze it into a .pb file. Now I want to feed this .pb file to TensorFlow Lite toco for full quantization and get the .tflite file. I am using this TensorFlow example: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro/examples/micro_speech The part I have a question about: bazel run tensorflow/lite/toco:toco -- \ --input_file=/tmp/tiny_conv.pb --output_file=/tmp/tiny_conv.tflite \ --input_shapes=1,49,43,1 --input_arrays=Reshape_1 --output_arrays='labels_softmax' \ --inference_type
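
As a hedged sketch of where --mean_values and --std_values fit, here is the equivalent conversion through the TF 1.x Python API. The (mean, std) pair describes how uint8 input values map back to real values, roughly real_value = (quantized_value - mean_value) / std_dev_value; the (0., 1.) values below are placeholders, not the right numbers for the micro_speech model.

```python
# Sketch: fully quantized conversion of the fake-quantized graph (TF 1.x).
import tensorflow as tf

converter = tf.contrib.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='/tmp/tiny_conv.pb',
    input_arrays=['Reshape_1'],
    output_arrays=['labels_softmax'],
    input_shapes={'Reshape_1': [1, 49, 43, 1]})
converter.inference_type = tf.contrib.lite.constants.QUANTIZED_UINT8
# (mean, std_dev) per input; placeholders here, chosen to match your real input range.
converter.quantized_input_stats = {'Reshape_1': (0., 1.)}
tflite_model = converter.convert()
open('/tmp/tiny_conv.tflite', 'wb').write(tflite_model)
```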

Error trying to convert from saved model to tflite format

Submitted by 荒凉一梦 on 2019-12-04 15:20:02
While trying to convert a saved model to a tflite file I get the following error: F tensorflow/contrib/lite/toco/tflite/export.cc:363] Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.contrib.lite.toco_convert(). Here is a list of operators for which you will need custom implementations: AsString, ParseExample .\nAborted (core dumped)\n' None I am using the DNN premade Estimator. from __future__ import
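
One possible angle, offered only as an assumption: with premade Estimators, ParseExample and AsString often come from the default serving input function, which expects serialized tf.Example protos. A hedged sketch of exporting with a raw-tensor serving input function instead, so the exported graph avoids those ops (feature names and shapes below are placeholders, not from the question):

```python
# Sketch: export a premade Estimator with raw tensor inputs (TF 1.x),
# avoiding ParseExample/AsString in the serving graph.
import tensorflow as tf

def raw_serving_input_fn():
    # Placeholder feature spec: one float feature vector of width 10.
    features = {'x': tf.placeholder(tf.float32, shape=[None, 10], name='x')}
    return tf.estimator.export.ServingInputReceiver(features, features)

# estimator = tf.estimator.DNNClassifier(...)  # the premade Estimator in question
# estimator.export_savedmodel('/tmp/export', raw_serving_input_fn)
```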

tensorflow lite model gives very different accuracy value compared to python model

Submitted by 痴心易碎 on 2019-12-04 09:04:10
Question: I am using TensorFlow 1.10 and Python 3.6. My code is based on the premade iris classification model provided by TensorFlow. This means I am using a TensorFlow premade DNN classifier, with the following differences: 10 features instead of 4, and 5 classes instead of 3. The test and training files can be downloaded from the following link: https://www.dropbox.com/sh/nmu8i2i8xe6hvfq/AADQEOIHH8e-kUHQf8zmmDMDa?dl=0 I have written code to export this classifier to the tflite format; however, the accuracy in the
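
To localize this kind of discrepancy, it helps to run the exact same test rows through the tflite interpreter and compare against the Estimator's accuracy. A minimal sketch, assuming the 10-feature/5-class layout described above; file names, the CSV layout, and the float32 input dtype are placeholders:

```python
# Sketch: measure tflite accuracy on the same test set used for the Estimator.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='classifier.tflite')  # placeholder
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

data = np.loadtxt('test.csv', delimiter=',', skiprows=1)  # placeholder CSV layout
features = data[:, :10].astype(np.float32)
labels = data[:, 10].astype(np.int64)

correct = 0
for x, y in zip(features, labels):
    interpreter.set_tensor(inp['index'], x.reshape(inp['shape']))
    interpreter.invoke()
    probs = interpreter.get_tensor(out['index'])
    correct += int(np.argmax(probs) == y)
print('tflite accuracy:', correct / len(labels))
```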

How to import the tensorflow lite interpreter in Python?

Submitted by 泄露秘密 on 2019-12-04 04:25:46
I'm developing a Tensorflow embedded application using TF lite on the Raspberry Pi 3b, running Raspbian Stretch. I've converted the graph to a flatbuffer (lite) format and have built the TFLite static library natively on the Pi. So far so good. But the application is Python and there seems to be no Python binding available. The Tensorflow Lite development guide ( https://www.tensorflow.org/mobile/tflite/devguide ) states "There are plans for Python bindings and a demo app." Yet there is wrapper code in /tensorflow/contrib/lite/python/interpreter_wrapper that has all the needed interpreter
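
Once an interpreter binding is available, usage from Python looks roughly like the sketch below. This is a hedged example: the import path depends on how TensorFlow was built (tf.contrib.lite.Interpreter in 1.x contrib builds, tf.lite.Interpreter in later releases), and the model path and zero-filled input are placeholders.

```python
# Sketch: run a .tflite model from Python on the Pi, assuming an interpreter
# binding is importable.
import numpy as np
import tensorflow as tf

interpreter = tf.contrib.lite.Interpreter(model_path='model.tflite')  # placeholder
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed a dummy input with the expected shape/dtype and read back the output.
interpreter.set_tensor(inp['index'], np.zeros(inp['shape'], dtype=inp['dtype']))
interpreter.invoke()
print(interpreter.get_tensor(out['index']))
```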