tensorflow-lite

Convert Keras model to quantized TensorFlow Lite model that can be used on Edge TPU

Submitted by 一笑奈何 on 2020-01-02 21:58:16
Question: I have a Keras model that I want to run on the Coral Edge TPU device. To do this, it needs to be a TensorFlow Lite model with full integer quantization. I was able to convert the model to a TFLite model: model.save('keras_model.h5') converter = tf.lite.TFLiteConverter.from_keras_model_file("keras_model.h5") tflite_model = converter.convert() open("converted_model.tflite", "wb").write(tflite_model) But when I run edgetpu_compiler converted_model.tflite, I get this error: Edge TPU Compiler
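A likely cause (not stated in the excerpt) is that the conversion above produces a float model, while the Edge TPU compiler only accepts fully integer-quantized models. A minimal post-training full-integer quantization sketch, assuming a 224x224x3 input and a representative-data generator of your own; the exact converter attributes vary slightly between TensorFlow versions:

import numpy as np
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model_file("keras_model.h5")

# Calibration data: yield a few hundred real (or realistic) input batches so the
# converter can estimate activation ranges. The 224x224x3 shape is hypothetical.
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Require that every op be quantized to int8; conversion fails otherwise.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# The Edge TPU expects integer input and output tensors.
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open("converted_model_quant.tflite", "wb") as f:
    f.write(tflite_model)

With a model converted this way, edgetpu_compiler should at least get past the stage where it rejects an unquantized model.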

TensorFlow Lite quantization fails to improve inference latency

Submitted by 馋奶兔 on 2020-01-02 11:05:51
Question: The TensorFlow website claims that quantization provides up to 3x lower latency on mobile devices: https://www.tensorflow.org/lite/performance/post_training_quantization I tried to verify this claim and found that quantized models are 45%-75% SLOWER than float models, despite being almost 4 times smaller in size. Needless to say, this is very disappointing and conflicts with Google's claims. My test uses Google's official MnasNet model: https://storage.googleapis.com/mnasnet/checkpoints
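One way to check the claim on a particular machine is to time interpreter.invoke() for both variants. A minimal benchmarking sketch; the .tflite file names are hypothetical:

import time
import numpy as np
import tensorflow as tf

def benchmark(tflite_path, runs=50):
    # Average invoke() latency in milliseconds for a single random input.
    interpreter = tf.lite.Interpreter(model_path=tflite_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    data = np.random.random_sample(inp["shape"]).astype(inp["dtype"])
    interpreter.set_tensor(inp["index"], data)
    interpreter.invoke()  # warm-up run
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.invoke()
    return (time.perf_counter() - start) / runs * 1000.0

print("float     :", benchmark("mnasnet_float.tflite"), "ms")
print("quantized :", benchmark("mnasnet_quant.tflite"), "ms")

Note that the advertised speedups assume the optimized ARM integer kernels; running the same comparison on a desktop x86 CPU is one common reason quantized models come out slower.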

How to use outputs of posenet model in tflite

Submitted by 孤街浪徒 on 2020-01-01 19:24:08
Question: I am using the TFLite model for PoseNet from here. It takes a 1*353*257*3 input image and returns 4 arrays of dimensions 1*23*17*17, 1*23*17*34, 1*23*17*64 and 1*23*17*1. The model has an output stride of 16. How can I get the coordinates of all 17 pose points on my input image? I have tried printing the confidence scores from the heatmap of the out1 array, but I get values near 0.00 for each pixel. The code is given below: public class MainActivity extends AppCompatActivity { private static final
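The usual PoseNet decoding (not shown in the excerpt) is to apply a sigmoid to the raw heatmap scores, take the highest-scoring grid cell per keypoint, and map it back to image coordinates using the output stride plus the offset tensor. A Python sketch of that idea, assuming the heatmap and offset arrays have the shapes quoted above:

import numpy as np

OUTPUT_STRIDE = 16

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_keypoints(heatmaps, offsets):
    # heatmaps: (1, 23, 17, 17) raw scores (logits) per grid cell per keypoint
    # offsets:  (1, 23, 17, 34) per-keypoint y offsets then x offsets, in pixels
    scores = sigmoid(heatmaps[0])
    offsets = offsets[0]
    num_keypoints = scores.shape[-1]
    keypoints = []
    for k in range(num_keypoints):
        # Grid cell with the highest score for keypoint k.
        y, x = np.unravel_index(np.argmax(scores[:, :, k]), scores[:, :, k].shape)
        y_img = y * OUTPUT_STRIDE + offsets[y, x, k]
        x_img = x * OUTPUT_STRIDE + offsets[y, x, k + num_keypoints]
        keypoints.append((x_img, y_img, scores[y, x, k]))
    return keypoints  # (x, y, confidence) in input-image pixel coordinates

If the raw heatmap values are logits, they only become meaningful confidence scores after the sigmoid step.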

Calculation operations with the parameters of a TFLite quantized model

Submitted by 為{幸葍}努か on 2019-12-31 05:15:07
Question: I am trying to implement image classification in hardware using the quantized MobileNetV2 model taken from here. To do that, I first need to reproduce the inference process from beginning to end to make sure I understand the calculations/operations that are performed on the data. The first target is the Conv function. I can see how it is being calculated, but there are several arguments passed to this function whose origin I would like to understand: output_offset,
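For context, the arguments the question asks about are derived from the scale and zero point stored with the input, weight, and output tensors of the .tflite file (real_value = scale * (quantized_value - zero_point)). A sketch of that derivation, with hypothetical scale/zero-point values, mirroring the fixed-point multiplier representation TFLite uses:

# Hypothetical quantization parameters read from the model's tensors.
input_scale,  input_zero_point  = 0.0078125, 128
filter_scale, filter_zero_point = 0.02,      0
output_scale, output_zero_point = 0.0235,    115

# Offsets passed to the reference Conv kernel are (negated) zero points.
input_offset  = -input_zero_point
filter_offset = -filter_zero_point
output_offset =  output_zero_point

# The float rescale factor is encoded as an int32 multiplier plus a shift.
real_multiplier = (input_scale * filter_scale) / output_scale

def quantize_multiplier(m):
    # Represent m as (int32 multiplier) * 2**shift with the multiplier in [2**30, 2**31).
    shift = 0
    while m < 0.5:
        m *= 2.0
        shift -= 1
    while m >= 1.0:
        m /= 2.0
        shift += 1
    return int(round(m * (1 << 31))), shift

output_multiplier, output_shift = quantize_multiplier(real_multiplier)
print(output_offset, output_multiplier, output_shift)

The accumulator for each output pixel is then sum((input + input_offset) * (filter + filter_offset)) + bias, rescaled by output_multiplier and output_shift, with output_offset added before clamping to the uint8 range.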

Could not resolve org.tensorflow:tensorflow-lite:0.0.0-nightly

Submitted by 匆匆过客 on 2019-12-25 18:14:48
Question: I downloaded the latest TensorFlow Lite demo, and it shows: Unable to resolve dependency for ':app@debug/compileClasspath': Could not resolve org.tensorflow:tensorflow-lite:0.0.0-nightly. Can you help me? Answer 1: Try compile 'org.tensorflow:tensorflow-lite:1.12.0'. For more information, please visit https://jcenter.bintray.com/org/tensorflow/tensorflow-lite/maven-metadata.xml Answer 2: When I opened it again, I was able to build it successfully. Maybe because of the network wall in China, I had missed some files. I downloaded it in

tflite_diff_example_test fails to invoke interpreter

Submitted by 独自空忆成欢 on 2019-12-24 12:06:14
Question: I've been trying TensorFlow Lite and I've been having issues with detection on Android, so I'm trying to test my .pb and .tflite models to see if there is a difference, using tflite_diff_example_test. I've retrained a mobilenet_v1_100_224 that I converted to .tflite. I'm running the following on macOS 10.13.3: bazel build tensorflow/contrib/lite/testing/tflite_diff_example_test.cc bazel-bin/tensorflow/contrib/lite/testing/tflite_diff_example_test --tensorflow_model=../new_training_dir

How to do batching with TensorFlow Lite?

Submitted by 自古美人都是妖i on 2019-12-24 11:44:51
Question: I have a custom CNN model that I have converted to .tflite format and deployed in my Android app. However, I can't figure out how to do batched inference with TensorFlow Lite. From this Google doc, it seems you have to set the input format of your model. However, this doc uses a code example with the Firebase API, which I'm not planning on using. To be more specific: I want to run inference on multiple 100x100x3 images at once, so the input size is Nx100x100x3. Question: How to do this
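One approach (a sketch, not taken from the original thread) is to resize the model's input tensor to the desired batch size before allocating tensors. In the Python API that looks roughly like this; the file name and batch size are hypothetical:

import numpy as np
import tensorflow as tf

BATCH = 8

interpreter = tf.lite.Interpreter(model_path="custom_cnn.tflite")
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]

# Grow the input from 1x100x100x3 to BATCHx100x100x3, then re-allocate buffers.
interpreter.resize_tensor_input(input_index, [BATCH, 100, 100, 3])
interpreter.allocate_tensors()

images = np.random.rand(BATCH, 100, 100, 3).astype(np.float32)
interpreter.set_tensor(input_index, images)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)  # one row per image

The Java Interpreter exposes a corresponding resizeInput() method, so the same idea should carry over to the Android app.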

What are the parameters input_arrays and output_arrays that are needed to convert a frozen model '.pb' file to a '.tflite' file?

Submitted by 白昼怎懂夜的黑 on 2019-12-24 08:07:08
Question: I need to convert my .pb TensorFlow model, together with my .ckpt file, to a TFLite model so it can run on mobile devices. Is there any straightforward way to find out which parameters I should use for input_arrays and output_arrays? import tensorflow as tf graph_def_file = "/path/to/Downloads/mobilenet_v1_1.0_224/frozen_graph.pb" input_arrays = ["input"] output_arrays = ["MobilenetV1/Predictions/Softmax"] converter = tf.lite.TFLiteConverter.from_frozen_graph( graph_def
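One way to discover those names (a sketch, not from the original question; the path is hypothetical) is to load the frozen GraphDef and list its Placeholder ops as input candidates and its unconsumed nodes as output candidates:

import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("/path/to/frozen_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Collect every node name that feeds some other node.
consumed = set()
for node in graph_def.node:
    for inp in node.input:
        consumed.add(inp.split(":")[0].lstrip("^"))

# Inputs are usually Placeholders; outputs are nodes nothing else consumes.
inputs = [n.name for n in graph_def.node if n.op == "Placeholder"]
outputs = [n.name for n in graph_def.node
           if n.name not in consumed and n.op not in ("Const", "NoOp")]

print("candidate input_arrays :", inputs)
print("candidate output_arrays:", outputs)

Alternatively, opening the .pb file in a graph viewer such as Netron shows the same input and output node names.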

How do I export a TensorFlow model as a .tflite file?

Submitted by 主宰稳场 on 2019-12-24 03:02:04
Question: Background information: I have written a TensorFlow model very similar to the premade iris classification model provided by TensorFlow. The differences are relatively minor: I am classifying football exercises, not iris species. I have 10 features and one label, not 4 features and one label. I have 5 different exercises, as opposed to 3 iris species. My trainData contains around 3500 rows, not only 120. My testData contains around 330 rows, not only 30. I am using a DNN classifier with n
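One route (not from the original question; the feature column, hyperparameters, and file names below are hypothetical) is to export the trained Estimator as a SavedModel with raw tensor inputs and then convert that directory. A sketch for a TF 1.x canned DNNClassifier:

import tensorflow as tf

feature_columns = [tf.feature_column.numeric_column("x", shape=[10])]
classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns, hidden_units=[30, 10], n_classes=5)
# ... classifier.train(...) runs here ...

# 1. Export a SavedModel with a raw float input (avoids the ParseExample op,
#    which TensorFlow Lite does not support).
serving_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(
    {"x": tf.placeholder(tf.float32, shape=[None, 10], name="x")})
export_dir = classifier.export_saved_model("export", serving_input_fn)

# 2. Convert the exported SavedModel to a .tflite flatbuffer.
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir.decode("utf-8"))
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

If the converter complains about unsupported ops in the classification head (for example string-valued class labels), it may be necessary to restrict the conversion to the logits or probabilities tensor as the output.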