tensorflow-lite

How to fix “TOCO failed. Check failed: dim >= 1 (0 vs. 1)” error while converting a frozen graph into a tensorflow_lite model

Submitted by 丶灬走出姿态 on 2019-12-11 23:38:28
Question: I have trained an object detection model. Now I'm trying to speed it up for inference using the quantization provided by the TensorFlow Lite graph converter. But when I call the tf.lite.TFLiteConverter.from_frozen_graph method, I run into an error. I also found a similar, unanswered question asked almost a year ago and was wondering whether TFLite's support has improved since then. Here is what I'm calling: converter = tf.lite.TFLiteConverter.from_frozen_graph(model_path, input_arrays=['input_1'…
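The question text is cut off above. For context, here is a minimal sketch of the kind of call it describes; the file path, tensor names, and shape are placeholders, and passing an explicit input_shapes argument is one commonly suggested way to avoid shape-related TOCO failures when a dimension is undefined, not a confirmed fix from the original thread.

```python
import tensorflow as tf  # TF 1.x-style API, matching the question's tf.lite.TFLiteConverter usage

# Hypothetical path and tensor names -- replace with the values from your own graph.
model_path = "frozen_inference_graph.pb"

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    model_path,
    input_arrays=["input_1"],
    output_arrays=["predictions/Softmax"],       # placeholder output tensor name
    input_shapes={"input_1": [1, 300, 300, 3]},  # pins down any undefined (0-sized) dimension
)
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```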

Build a model with the Keras Functional API for TensorFlow Lite: merge one classifier and one regressor

Submitted by 做~自己de王妃 on 2019-12-11 19:05:14
Question: I need to build a model that, based on the classification output, selects one model for regression. In my example there are 3 independent regressors and 1 classifier, and the regressor is selected based on the previous classification. I would like to get a single integrated model so I can compile it and use the TensorFlow Lite interpreter on Android. In this example I get the class in y_class and select the model models[y_class] to make the final prediction (regression): y_prob = clf…
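One way to keep the selection inside a single convertible graph (a sketch, not the original poster's solution) is to build all four heads in one functional model and weight the regressor outputs by the classifier probabilities. This is a soft, probability-weighted selection rather than the hard argmax pick in the question; the input size and layer widths below are placeholders.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Hypothetical input size -- adjust to the real problem.
inp = layers.Input(shape=(16,))

# Classifier head: probabilities over the 3 regressors.
cls = layers.Dense(32, activation="relu")(inp)
cls = layers.Dense(3, activation="softmax", name="class_probs")(cls)

# Three independent regressor heads, each producing one scalar.
reg_outputs = []
for i in range(3):
    h = layers.Dense(32, activation="relu")(inp)
    reg_outputs.append(layers.Dense(1, name=f"regressor_{i}")(h))

# Stack the regressor outputs to shape (batch, 3) and weight them by the
# classifier probabilities, so the "selection" is part of the graph and the
# whole model converts to a single .tflite file.
stacked = layers.Concatenate()(reg_outputs)
selected = layers.Dot(axes=1)([stacked, cls])

model = Model(inputs=inp, outputs=[cls, selected])
model.summary()
```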

TFLite Conversion changing model weights

Submitted by 我的梦境 on 2019-12-11 18:48:32
Question: I have a custom-built TensorFlow graph implementing MobileNetV2-SSDLite, which I implemented myself. It works fine on the PC. However, when I convert the model to TFLite (all float, no quantization), the model weights change drastically. To give an example, a filter which was initially (0.13172674179077148, 2.3185202252437188e-32, -0.003990101162344217) becomes (4.165565013885498, -2.3981268405914307, -1.1919032335281372). The large weight values are completely throwing off my on…
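A sketch for investigating this, under stated assumptions: the model path and the "depthwise" name filter below are hypothetical, and one common, benign reason stored filter values differ after a float conversion is that the converter folds batch normalization into the preceding convolution weights, so the numbers change even though the layer output does not.

```python
import numpy as np
import tensorflow as tf

# Load the converted model and list its tensors so weights can be compared
# against the original graph.  "model.tflite" and the layer name are placeholders.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_tensor_details():
    if "depthwise" in detail["name"]:          # hypothetical filter of interest
        weights = interpreter.get_tensor(detail["index"])
        print(detail["name"], weights.shape, weights.flatten()[:3])
```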

How to read the output from a TensorFlow model in Java

Submitted by 孤街醉人 on 2019-12-11 18:43:03
Question: I am trying to use TensorFlow Lite with the ssdlite_mobilenet_v2_coco model from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md, converted to a tflite file, to detect objects from the camera stream in my Android app (Java). I execute interpreter.run(input, output); where input is an image converted to a ByteBuffer and output is a float array of size [1][10][4] to match the tensor. How can I convert this float array into readable output, e.g. to get the coordinates of…
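For reference, a Python sketch of what the detection outputs contain; the same layout applies when reading them from the Java interpreter. "detect.tflite" is a placeholder path, and the four-output layout below assumes the standard SSD post-processing op used by the detection model zoo exports.

```python
import numpy as np
import tensorflow as tf

# Inspect the detection model's outputs in Python before wiring up the Java side.
interpreter = tf.lite.Interpreter(model_path="detect.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])  # dummy image
interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()

outputs = interpreter.get_output_details()
boxes   = interpreter.get_tensor(outputs[0]["index"])  # [1, 10, 4]: ymin, xmin, ymax, xmax (normalized)
classes = interpreter.get_tensor(outputs[1]["index"])  # [1, 10]: class indices
scores  = interpreter.get_tensor(outputs[2]["index"])  # [1, 10]: confidences
count   = interpreter.get_tensor(outputs[3]["index"])  # [1]: number of valid detections
print(boxes[0][: int(count[0])])
```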

Trying to convert a TensorFlow model to TensorFlow Lite: running toco --help gives me an error

Submitted by 别等时光非礼了梦想. on 2019-12-11 17:49:12
Question: I am on Windows 10, Python 2.7, TensorFlow 1.7. When attempting to call toco ("toco --help"), I get the following error: File "appdata\local\programs\python\python36\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "appdata\local\programs\python\python36\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "AppData\Local\Programs\Python\Python36\Scripts\toco.exe\__main__.py", line 5, in <module> ModuleNotFoundError: No module named 'tensorflow.contrib.lite…
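A sketch of one way to sidestep the broken CLI entry point, with assumptions stated: the traceback above points at a Python 3.6 install even though the question mentions Python 2.7, so the first check is which interpreter and TensorFlow the toco script is actually bound to. The conversion call below assumes a newer TensorFlow 1.x where tf.lite.TFLiteConverter exists (tensorflow.contrib.lite was moved across 1.x releases); paths and tensor names are placeholders.

```python
import sys
import tensorflow as tf

# Which Python and which TensorFlow is actually being used?
print(sys.executable, tf.__version__)

# Drive the converter in-process instead of via the "toco" console script.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    "frozen_graph.pb", input_arrays=["input"], output_arrays=["output"])
with open("model.tflite", "wb") as f:
    f.write(converter.convert())
```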

Keras: how to preprocess the input signal

Submitted by 折月煮酒 on 2019-12-11 17:31:46
Question: I want to preprocess the input of my Keras model with certain signal-processing functions, as below. I want these to be part of the model because I will (hopefully) convert it to TF Lite or Core ML, so I don't have to rewrite this functionality in the mobile app. But I couldn't figure out how and where I should add these to my model so the inputs are preprocessed. # method to preprocess the model input, when called def getMfcss(): stfts = tf.contrib.signal.stft(signals, frame_length=frame…
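One common pattern (not from the original thread) is to wrap the DSP ops in a Lambda layer so they become part of the converted graph. The sketch below assumes a 1-second, 16 kHz signal and arbitrary frame sizes, and uses a log-magnitude spectrogram rather than full MFCCs; STFT/MFCC op support in the TF Lite and Core ML converters varies by version.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def log_spectrogram(signals):
    # Signal-processing steps baked into the graph instead of the app code.
    stfts = tf.signal.stft(signals, frame_length=400, frame_step=160, fft_length=512)
    magnitudes = tf.abs(stfts)
    return tf.math.log(magnitudes + 1e-6)

signal_input = layers.Input(shape=(16000,))            # 1 second of 16 kHz audio (assumed)
features = layers.Lambda(log_spectrogram)(signal_input)
x = layers.Flatten()(features)
output = layers.Dense(10, activation="softmax")(x)
model = Model(signal_input, output)
model.summary()
```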

Obtaining quantized activations in TensorFlow Lite

Submitted by 好久不见. on 2019-12-11 15:57:43
Question: I'm trying to get intermediate feature-map values in TF Lite. I load the quantized MobileNet v1 224 tflite model using the interpreter and call invoke with sample input data. The network output seems correct, but when I look at the output of get_tensor for intermediate tensors (written out as images) some of them appear corrupted, as if overwritten by later ops (see sample images). Is there a way to retrieve the correct quantized outputs for all layers? I built the current latest TF 1.10.1 Conv2d_1…
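The corruption described here is consistent with the interpreter reusing intermediate tensor buffers. A minimal sketch follows, assuming a recent TensorFlow (roughly 2.5 or later) where the interpreter exposes experimental_preserve_all_tensors; on the TF 1.10 build in the question, the usual workaround was instead to mark the intermediate tensors as extra model outputs before conversion. The model path and the "Conv2d_1" name filter are placeholders.

```python
import tensorflow as tf

# Keep every intermediate buffer alive so get_tensor() returns valid data.
interpreter = tf.lite.Interpreter(
    model_path="mobilenet_v1_224_quant.tflite",      # placeholder path
    experimental_preserve_all_tensors=True,
)
interpreter.allocate_tensors()
# ... set the input and call interpreter.invoke() as usual, then:
for detail in interpreter.get_tensor_details():
    if "Conv2d_1" in detail["name"]:                 # hypothetical layer of interest
        print(detail["name"], interpreter.get_tensor(detail["index"]).shape)
```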

How to set an image as input for TensorFlow Lite in C++?

Submitted by 浪子不回头ぞ on 2019-12-11 12:44:46
Question: I am trying to move our TensorFlow model from the Python+Keras version to TensorFlow Lite with C++ on an embedded platform. It looks like I don't know how to properly set the input for the interpreter. The input shape should be (1, 224, 224, 3). As input I take an image with OpenCV and convert it with CV_BGR2RGB. std::unique_ptr<tflite::FlatBufferModel> model_stage1 = tflite::FlatBufferModel::BuildFromFile("model1.tflite"); TFLITE_MINIMAL_CHECK(model_stage1 != nullptr); // Build the interpreter tflite::ops:…
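As a cross-check for the C++ port, a Python sketch of the same pipeline is shown below; it makes explicit what the input buffer has to contain (RGB, 224x224, NHWC, batch dimension first). The image path is a placeholder, the /255 normalization is an assumption about how the model was trained, and a float (non-quantized) model is assumed.

```python
import cv2
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model1.tflite")
interpreter.allocate_tensors()
input_detail = interpreter.get_input_details()[0]

img = cv2.imread("example.jpg")                      # placeholder image
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)           # same BGR -> RGB step as the question
img = cv2.resize(img, (224, 224)).astype(np.float32) / 255.0  # assumed scaling
interpreter.set_tensor(input_detail["index"], img[np.newaxis, ...])  # add batch dim -> (1, 224, 224, 3)
interpreter.invoke()
print(interpreter.get_tensor(interpreter.get_output_details()[0]["index"]))
```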

Different predictions when running Keras model in TensorFlow Lite

Submitted by 非 Y 不嫁゛ on 2019-12-11 12:18:39
Question: Trying out TensorFlow Lite with a pretrained Keras image classifier, I'm getting worse predictions after converting the H5 model to the tflite format. Is this intended behaviour (e.g. weight quantization), a bug, or am I forgetting something when using the interpreter? Example: from imagesoup import ImageSoup from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions from tensorflow.keras.preprocessing.image import load_img, img_to_array # Load an example image.…
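A sanity-check harness (my sketch, not from the original post): run the exact same preprocessed array through Keras and through the converted interpreter. The image path is a placeholder, and from_keras_model assumes the TF 2.x converter API rather than the H5-file conversion the question used; for a plain float conversion the two outputs should agree closely, and a large gap usually means the two paths were fed differently preprocessed images.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing.image import load_img, img_to_array

model = ResNet50(weights="imagenet")
x = img_to_array(load_img("elephant.jpg", target_size=(224, 224)))   # placeholder image
x = preprocess_input(np.expand_dims(x, axis=0)).astype(np.float32)

keras_preds = model.predict(x)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
interpreter = tf.lite.Interpreter(model_content=converter.convert())
interpreter.allocate_tensors()
interpreter.set_tensor(interpreter.get_input_details()[0]["index"], x)
interpreter.invoke()
lite_preds = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])

print(decode_predictions(keras_preds, top=3))
print(decode_predictions(lite_preds, top=3))
print("max abs diff:", np.abs(keras_preds - lite_preds).max())
```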

How to convert a retrained model to tflite format?

Submitted by 喜你入骨 on 2019-12-11 08:37:45
Question: I have retrained an image-classifier model on MobileNet and I have these files. I then used toco to compress the retrained model and convert it to .lite format, but I need it in .tflite format. Is there any way I can get to the tflite format from the existing files? Answer 1: You can rename the .lite model to .tflite and it should work just fine. Alternatively, with toco, you can name the output that way as it is created: toco \ --input_file=tf_files/retrained_graph.pb \ --output_file=tf_files/optimized…
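To illustrate the renaming answer (the paths below are placeholders): the extension is only a naming convention, so renaming the existing converter output is enough.

```python
import os

# The flatbuffer contents are identical; only the file extension changes.
os.rename("tf_files/retrained_graph.lite", "tf_files/retrained_graph.tflite")
```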