tensorflow-lite

TensorFlow Lite model gives very different accuracy value compared to Python model

自闭症网瘾萝莉.ら submitted on 2019-12-03 01:52:27
I am using TensorFlow 1.10 and Python 3.6. My code is based on the premade iris classification model provided by TensorFlow; that is, I am using a TensorFlow premade DNN classifier, with the following differences: 10 features instead of 4, and 5 classes instead of 3. The test and training files can be downloaded from the following link: https://www.dropbox.com/sh/nmu8i2i8xe6hvfq/AADQEOIHH8e-kUHQf8zmmDMDa?dl=0 I have written code to export this classifier to the tflite format; however, the accuracy of the Python model is higher than 75%, but when exported, the accuracy decreases to approximately 45%…
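A quick way to locate such a drop is to score the exported flatbuffer on the same test set the estimator was evaluated on. Below is a minimal sketch, assuming a TF version that exposes tf.lite.Interpreter (older 1.x releases expose it as tf.contrib.lite.Interpreter) and a hypothetical file name and column layout (10 feature columns followed by an integer label):

```python
import numpy as np
import tensorflow as tf

# Hypothetical test file layout: 10 feature columns, then an integer label (0..4).
data = np.loadtxt("test.csv", delimiter=",", skiprows=1)
x_test, y_test = data[:, :10].astype(np.float32), data[:, 10].astype(int)

interpreter = tf.lite.Interpreter(model_path="iris_like_classifier.tflite")  # hypothetical name
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

correct = 0
for features, label in zip(x_test, y_test):
    # The converted model typically expects a batch of one sample.
    interpreter.set_tensor(input_details[0]['index'],
                           features.reshape(input_details[0]['shape']))
    interpreter.invoke()
    probs = interpreter.get_tensor(output_details[0]['index'])
    correct += int(np.argmax(probs) == label)

print("tflite accuracy: %.3f" % (correct / len(y_test)))
```

If per-sample predictions differ, rather than just the aggregate accuracy, the usual suspects are feature ordering, dtype, and any preprocessing the estimator's input_fn applied that the tflite path skips.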

Issues when converting tensorflow/keras model to tensorflow lite model

倾然丶 夕夏残阳落幕 submitted on 2019-12-02 04:10:12
While experimenting with TensorFlow I've come across an issue when converting a Keras model into a TensorFlow Lite model. This is my setup:

```python
model = models.Sequential()
model.add(layers.Dense(128, activation='relu', input_shape=(1,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(17, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
...
model.save('Foo.h5')
...
converter = tf.contrib.lite.TFLiteConverter.from_keras_model_file("Foo.h5")
tflite_model = converter.convert()
open("Foo_converted.tflite", "wb").write(tflite_model)
```
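If the conversion succeeds but the converted model misbehaves, a quick sanity check is to push one sample through the file and compare with model.predict on the same input. A minimal sketch, assuming a TF version that exposes tf.lite.Interpreter (older 1.x releases expose it as tf.contrib.lite.Interpreter):

```python
import numpy as np
import tensorflow as tf

# Run one sample through the converted flatbuffer.
interpreter = tf.lite.Interpreter(model_path="Foo_converted.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

sample = np.array([[0.5]], dtype=np.float32)  # shape (1, 1) matches input_shape=(1,)
interpreter.set_tensor(input_details[0]['index'], sample)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))  # 17 softmax scores
```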

TensorFlow: convert a .pb file to TFLite using Python

戏子无情 submitted on 2019-12-01 06:50:25
I'm new to working with TensorFlow. I have a model saved after training as a .pb file; I want to use TensorFlow Mobile, and it's important to work with a TFLite file. The problem is that most of the examples I found after googling for converters are commands for the terminal or cmd. Can you please share an example of converting to tflite files using Python code? Thanks. You can convert to tflite directly in Python. You have to freeze the graph and use toco_convert. The input and output names and shapes need to be determined ahead of calling the API, just as in the command-line case. An example:
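A minimal sketch of that flow under TF 1.x, where tf.contrib.lite.toco_convert and tf.graph_util.convert_variables_to_constants are available; the tiny graph below is a stand-in for a real model:

```python
import tensorflow as tf

# A trivial graph standing in for a real model: one placeholder, one variable.
img = tf.placeholder(name="img", dtype=tf.float32, shape=(1, 64, 64, 3))
var = tf.get_variable("weights", dtype=tf.float32, shape=(1, 64, 64, 3))
out = tf.identity(img + var, name="out")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Freeze: bake current variable values into constants, keeping only
    # the subgraph needed to compute "out".
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["out"])
    # Convert the frozen GraphDef to a TFLite flatbuffer.
    tflite_model = tf.contrib.lite.toco_convert(frozen_graph_def, [img], [out])
    open("model.tflite", "wb").write(tflite_model)
```

With an existing .pb, the same call works once you load the GraphDef and look up the input/output tensors by name.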

Why is TensorFlow Lite slower than TensorFlow on desktop?

北城余情 submitted on 2019-12-01 06:30:22
I'm currently working on single-image super-resolution, and I've managed to freeze an existing checkpoint file and convert it into TensorFlow Lite. However, when performing inference using the .tflite file, the time taken to upsample one image is at least 4 times that when restoring the model from the .ckpt file. Inference using the .ckpt file is done with session.run(), while inference using the .tflite file is done with interpreter.invoke(). Both operations were done on an Ubuntu 18 VM running on a typical PC. What I did to find out more about the issue was to run top in a separate terminal…
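A common explanation is that TFLite's kernels are tuned for ARM mobile CPUs (and often run single-threaded by default), so an x86 desktop is not the hardware they are optimized for. To isolate the tflite side of the timing, here is a minimal sketch, assuming a hypothetical model path and a TF version that exposes tf.lite.Interpreter:

```python
import time
import numpy as np
import tensorflow as tf

# Hypothetical converted model of the super-resolution network.
interpreter = tf.lite.Interpreter(model_path="sr_model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Random input matching the model's expected shape and dtype.
input_data = np.random.random_sample(input_details[0]['shape']).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)

start = time.perf_counter()
interpreter.invoke()
print("invoke() took %.3f s" % (time.perf_counter() - start))
```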

Understanding tf.contrib.lite.TFLiteConverter quantization parameters

你离开我真会死。 submitted on 2019-12-01 06:12:15
I'm trying to use UINT8 quantization while converting a TensorFlow model to a tflite model. With post_training_quantize = True, the model size is 4x smaller than the original fp32 model, so I assume the model weights are uint8, but when I load the model and get the input type via interpreter_aligner.get_input_details()[0]['dtype'], it is float32. The outputs of the quantized model are about the same as those of the original model.

```python
converter = tf.contrib.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='tflite-models/tf_model.pb',
    input_arrays=input_node_names,
    output_arrays=output_node_names)
converter.post_training_quantize = True
```
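What the question observes is expected: post_training_quantize = True quantizes only the stored weights, while the input/output tensors stay float32 and weights are dequantized at inference time. To get actual uint8 input/output in TF 1.x, the converter has to be asked for fully quantized inference, roughly as sketched below; this reuses input_node_names/output_node_names from the question's snippet, and the (mean, std_dev) stats and default ranges are placeholders. Without fake-quantization ranges in the graph, you generally must supply default_ranges_stats:

```python
import tensorflow as tf

converter = tf.contrib.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='tflite-models/tf_model.pb',
    input_arrays=input_node_names,
    output_arrays=output_node_names)

# Ask for fully quantized inference, not just quantized weight storage.
converter.inference_type = tf.contrib.lite.constants.QUANTIZED_UINT8
# (mean, std_dev) mapping float inputs to uint8; placeholder values.
converter.quantized_input_stats = {input_node_names[0]: (127.5, 127.5)}
# Fallback activation ranges for graphs without fake-quant nodes; placeholder values.
converter.default_ranges_stats = (-6, 6)

tflite_model = converter.convert()
open('tflite-models/tf_model_uint8.tflite', 'wb').write(tflite_model)
```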

How to know a TensorFlow Lite model's input/output feature info?

。_饼干妹妹 submitted on 2019-12-01 00:15:58
I'm a mobile developer, and I want to use various TensorFlow Lite models (.tflite) with ML Kit. But there is an issue: I have no idea how to find out a .tflite model's input/output feature info (these will be parameters for setup). Is there any way to know that? Sorry for my bad English, and thanks. Update (18.06.13): I found this site: https://lutzroeder.github.io/Netron/ . It visualizes the graph based on your uploaded model (like .mlmodel or .tflite, etc.) so you can find the input/output form. Here is an example screenshot: https://lutzroeder.github.io/Netron example If you already have a tflite model that you did…
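The TFLite Python interpreter can also report this metadata directly. A minimal sketch, with a hypothetical model path:

```python
import tensorflow as tf

# Print the name, shape, and dtype of every input and output tensor.
interpreter = tf.lite.Interpreter(model_path="your_model.tflite")  # hypothetical path
interpreter.allocate_tensors()

for detail in interpreter.get_input_details():
    print("input: ", detail['name'], detail['shape'], detail['dtype'])
for detail in interpreter.get_output_details():
    print("output:", detail['name'], detail['shape'], detail['dtype'])
```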

How to load a tflite model in a script?

女生的网名这么多〃 submitted on 2019-11-30 09:26:58
I have converted the .pb file to a tflite file using bazel. Now I want to load this tflite model in my Python script, just to test whether it gives me the correct output or not. You can use the TensorFlow Lite Python interpreter to load the tflite model in a Python shell and test it with your input data. The code will be like this:

```python
import numpy as np
import tensorflow as tf

# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
```
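Assuming the standard TFLite Python API, the usual continuation of that snippet fetches the output details, feeds random data shaped like the input, and reads the result back:

```python
# Continuing the snippet above.
output_details = interpreter.get_output_details()

# Test the model on random input data matching the expected shape.
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)

interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
```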

How can I test a .tflite model to prove that it behaves as the original model using the same Test Data?

心不动则不痛 submitted on 2019-11-29 05:48:41
I have generated a .tflite model based on a trained model, and I would like to verify that the tflite model gives the same results as the original model when both are given the same test data. You may use the TensorFlow Lite Python interpreter to test your tflite model. It allows you to feed input data in a Python shell and read the output directly, just as if you were using a normal TensorFlow model. I have answered this question here, and you can read this TensorFlow Lite official guide for detailed information. I also found a very good visualization tool which can load a .tflite file…
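A minimal comparison sketch, assuming the original model is a Keras .h5 file and using hypothetical file names; real test data would replace the random sample:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Hypothetical file names for the original model and its converted counterpart.
keras_model = keras.models.load_model("model.h5")
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# One random sample shaped like the model input (batch size 1).
x = np.random.random_sample(input_details[0]['shape']).astype(np.float32)

y_keras = keras_model.predict(x)

interpreter.set_tensor(input_details[0]['index'], x)
interpreter.invoke()
y_tflite = interpreter.get_tensor(output_details[0]['index'])

print("max abs diff:", np.max(np.abs(y_keras - y_tflite)))
print("close:", np.allclose(y_keras, y_tflite, atol=1e-5))
```

Small float differences are normal; exact equality is only expected when no quantization was applied and the kernels match.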