tensorflow-lite

Keras LSTM model - a TF 1.15 equivalent that works with TFLite

Posted by 喜夏-厌秋 on 2021-02-11 12:44:49
Question: TL;DR: How can I implement this model using tf.lite.experimental.nn.TFLiteLSTMCell and tf.lite.experimental.nn.dynamic_rnn instead of keras.layers.LSTM? I have this network in Keras:

    inputs = keras.Input(shape=(1, 52))
    state_1_h = keras.Input(shape=(200,))
    state_1_c = keras.Input(shape=(200,))
    x1, state_1_h_out, state_1_c_out = layers.LSTM(
        200, return_sequences=True, input_shape=(sequence_length, 52),
        return_state=True)(inputs, initial_state=[state_1_h, state_1_c])
    output = layers.Dense(13)(x1)
    model …
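A minimal sketch of what the TF 1.15 equivalent could look like, built from the experimental TFLite RNN ops; the placeholder shapes and the time-major layout are assumptions, and the explicit initial state is omitted for brevity, so this is not a confirmed answer:

    # TF 1.15 sketch: the same topology with the TFLite-convertible LSTM ops.
    # Assumptions: batch size 1, one timestep, 52 features, as in the Keras model.
    import tensorflow as tf

    num_units, feature_dim, num_classes = 200, 52, 13

    # tf.lite.experimental.nn.dynamic_rnn expects time-major input: [time, batch, features]
    inputs = tf.placeholder(tf.float32, [1, 1, feature_dim], name="inputs")
    cell = tf.lite.experimental.nn.TFLiteLSTMCell(num_units, name="lstm_cell")
    outputs, state = tf.lite.experimental.nn.dynamic_rnn(
        cell, inputs, dtype=tf.float32, time_major=True)
    logits = tf.layers.dense(outputs, num_classes, name="output")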

A problem when running a tflite model (the result of the tflite model is NaN)

Posted by 老子叫甜甜 on 2021-02-10 22:42:24
Question: I trained a model to convert sketch pictures to color pictures. [image: the middle is the ground truth, the left is the original, and the right is the prediction] The result shown was produced by model.h5, but the result is wrong when I run the program using model.tflite. This is my conversion command:

    tflite_convert --keras_model_file=G:/pix2pix/generator.h5 --output_file=G:/pix2pix/convert.tflite

This is the result produced by model.tflite: [[[[nan nan nan] …

Answer 1: I found a solution to this question. Just change the …
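Since the accepted fix is cut off above, the only addition here is a hedged sketch of the same conversion done through the TF 1.x Python API instead of the tflite_convert CLI; the paths are the asker's own, and the Python route makes converter settings easier to inspect when hunting NaN outputs:

    # Convert the Keras .h5 via the Python API (TF 1.x style) so converter
    # options can be examined or adjusted while debugging NaN outputs.
    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_keras_model_file("G:/pix2pix/generator.h5")
    tflite_model = converter.convert()
    with open("G:/pix2pix/convert.tflite", "wb") as f:
        f.write(tflite_model)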

Tensorflow Lite - ValueError: Cannot set tensor: Dimension mismatch

Posted by 拜拜、爱过 on 2021-02-10 20:47:46
Question: This is probably going to be a stupid question, but I am new to machine learning and TensorFlow. I am trying to run the object detection API on a Raspberry Pi using TensorFlow Lite. I am modifying my code with the help of this example: https://github.com/freedomtan/tensorflow/blob/deeplab_tflite_python/tensorflow/contrib/lite/examples/python/object_detection.py This piece of code detects objects in an image, but instead of an image I want to detect objects in real time through the Pi camera. I …
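A minimal sketch of feeding live camera frames to the interpreter, assuming frames arrive as numpy arrays (for example from OpenCV) and the model takes a single [1, H, W, 3] uint8 input; the model path is illustrative. The dimension mismatch typically means a frame was not resized to the model's input shape or is missing the batch dimension:

    import numpy as np
    import cv2  # assumption: frames are grabbed with OpenCV
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="detect.tflite")  # illustrative path
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    _, height, width, _ = input_details[0]["shape"]

    def detect(frame):
        # resize the camera frame to the model's expected spatial size
        resized = cv2.resize(frame, (width, height))
        # add the batch dimension: [H, W, 3] -> [1, H, W, 3]
        input_data = np.expand_dims(resized, axis=0).astype(np.uint8)
        interpreter.set_tensor(input_details[0]["index"], input_data)
        interpreter.invoke()
        return [interpreter.get_tensor(d["index"])
                for d in interpreter.get_output_details()]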

Save TensorFlowJS MobileNet + KNN to TFLite

Posted by 风流意气都作罢 on 2021-02-10 14:31:12
Question: I have trained a KNN on top of MobileNet logits using TensorFlowJS, and I want to know how I can export the MobileNet + KNN result to a TFLite model.

    const knn = knnClassifier.create()
    const net = await mobilenet.load()
    const handleTrain = (imgEl, label) => {
      const image = tf.browser.fromPixels(imgEl);
      const activation = net.infer(image, true);
      knn.addExample(activation, label)
    }

Answer 1: 1. Save the model. This example saves the file to the native file system, or if …
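The answer above is truncated, so as a hedged sketch of one possible route only: a KNN classifier is not a TensorFlow graph and cannot be converted directly, but 1-nearest-neighbour matching over the stored embeddings can be re-expressed as tensor ops and then converted. All file names and shapes below are illustrative assumptions:

    # Re-express 1-NN over exported KNN embeddings as plain tensor ops so the
    # matching step itself becomes convertible to TFLite (TF 2.x style).
    import numpy as np
    import tensorflow as tf

    # assumption: embeddings and labels were exported from the TFJS KNN classifier
    stored_embeddings = np.load("knn_embeddings.npy").astype(np.float32)  # [N, D]
    stored_labels = np.load("knn_labels.npy").astype(np.int32)            # [N]

    emb = tf.constant(stored_embeddings)
    labels = tf.constant(stored_labels)

    @tf.function(input_signature=[tf.TensorSpec([1, stored_embeddings.shape[1]], tf.float32)])
    def predict(query):
        # squared Euclidean distance from the query embedding to every stored example
        dists = tf.reduce_sum(tf.square(emb - query), axis=1)
        return tf.gather(labels, tf.argmin(dists))

    converter = tf.lite.TFLiteConverter.from_concrete_functions(
        [predict.get_concrete_function()])
    tflite_model = converter.convert()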

How can I use a GPU for running a tflite model (*.tflite) with tf.lite.Interpreter (in Python)?

Posted by 大城市里の小女人 on 2021-02-10 07:25:36
Question: I have converted a TensorFlow inference graph to a tflite model file (*.tflite), following the instructions at https://www.tensorflow.org/lite/convert. I tested the tflite model on my GPU server, which has 4 Nvidia TITAN GPUs. I used tf.lite.Interpreter to load and run the tflite model file. It works like the former TensorFlow graph; however, the problem is that inference has become too slow. When I looked into the reason, I found that GPU utilization is simply 0% while tf.lite.Interpreter is …
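For context, the stock Python tf.lite.Interpreter executes on the CPU, so 0% GPU utilization is expected out of the box. Below is a hedged sketch of attaching a GPU delegate via load_delegate; the delegate library name is an assumption and would have to be built for the platform, since TensorFlow does not ship a desktop GPU delegate binary by default:

    import tensorflow as tf

    # assumption: a TFLite GPU delegate shared library has been built for this machine
    gpu_delegate = tf.lite.experimental.load_delegate("libtensorflowlite_gpu_delegate.so")
    interpreter = tf.lite.Interpreter(
        model_path="model.tflite",               # illustrative path
        experimental_delegates=[gpu_delegate],   # route supported ops to the GPU
    )
    interpreter.allocate_tensors()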

TensorFlow Lite (tflite) invoke error after resizing the input dimension

Posted by 一世执手 on 2021-02-08 10:46:42
Question: I am using mobilenet_ssd.tflite as the model, from the official TensorFlow GitHub. Code below:

    int input = interpreter->inputs()[0];
    interpreter->ResizeInputTensor(input, sizes);

This causes an error when calling interpreter->AllocateTensors(). If I comment out the interpreter->ResizeInputTensor(input, sizes); call, then everything is fine. Any suggestions? Another question that I asked: change the input image size for mobilenet_ssd using tensorflow

Answer 1: ResizeInputTensor is restricted by the …
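For reference, a hedged sketch of the same flow through the Python API; whether the resize is accepted depends on the model, and SSD graphs with fixed-shape anchor and post-processing ops typically reject new input sizes when the tensors are allocated. The new shape below is illustrative:

    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="mobilenet_ssd.tflite")
    input_index = interpreter.get_input_details()[0]["index"]
    # request a new input shape; this only marks the tensor for resizing
    interpreter.resize_tensor_input(input_index, [1, 512, 512, 3])
    # the resize is validated here and fails if downstream ops cannot adapt
    interpreter.allocate_tensors()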

TensorFlow per-channel quantization

Posted by 六眼飞鱼酱① on 2021-02-08 04:41:59
Question: Using the current TensorFlow quantization ops, how would I go about simulating per-channel quantization during inference? This paper defines per-layer quantization as "We can specify a single quantizer (defined by the scale and zero-point) for an entire tensor referred to as per-layer quantization" and per-channel quantization as "Per-channel quantization has a different scale and offset for each convolutional kernel." Let's assume we have this subgraph:

    import tensorflow as tf
    x = np.random …
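A minimal sketch of simulating per-channel weight quantization with the per-channel fake-quant op; the kernel shape is illustrative, and the op requires the min/max vectors to match the size of the tensor's last dimension, which for an HWIO convolution kernel is the output channels:

    import numpy as np
    import tensorflow as tf

    # illustrative HWIO conv kernel: 3x3, 16 input channels, 32 output channels
    weights = tf.constant(np.random.randn(3, 3, 16, 32).astype(np.float32))

    # one (min, max) range per output channel, i.e. per slice of the last axis
    w_min = tf.reduce_min(weights, axis=[0, 1, 2])
    w_max = tf.reduce_max(weights, axis=[0, 1, 2])

    # simulate 8-bit per-channel quantization of the weights
    fq_weights = tf.quantization.fake_quant_with_min_max_vars_per_channel(
        weights, min=w_min, max=w_max, num_bits=8)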

How to reduce the size of a tflite model, or download and set it programmatically?

Posted by 旧街凉风 on 2021-02-07 13:21:45
Question: In my app I am trying to implement face recognition using a FaceNet model converted to tflite, which comes to about 93 MB; this model ends up increasing the size of my APK, so I am trying to find alternative ways to deal with it. The first option I can think of is to compress it in some way and then uncompress it when the app is installed. Another way is to upload the model to a server and, after it has been downloaded, load it in my application. However, I do not seem …
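On the size question, a hedged sketch of post-training dynamic-range quantization, which typically shrinks a float model by roughly 4x (so roughly 93 MB toward 23 MB); the file names are illustrative:

    import tensorflow as tf

    # TF 1.x style: load the Keras FaceNet model and quantize its weights
    converter = tf.lite.TFLiteConverter.from_keras_model_file("facenet.h5")  # illustrative path
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_quant_model = converter.convert()
    with open("facenet_quant.tflite", "wb") as f:
        f.write(tflite_quant_model)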
