tensorflow-lite

Does “tf.config.experimental.set_synchronous_execution” make the Python TensorFlow Lite interpreter use multiprocessing?

瘦欲@ submitted on 2020-04-18 05:46:59
Question: I am using Python to do object detection on a video stream. I have a TensorFlow Lite model which takes a relatively long time to evaluate. Using interpreter.invoke(), each evaluation takes about 500 ms. I'd like to use parallelism to get more evaluations per second. I see that I can call the TensorFlow config function tf.config.experimental.set_synchronous_execution. I was hoping that setting this would magically cause the interpreter to run in multiple processes. However, running help(tf.lite
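
Note: tf.config.experimental.set_synchronous_execution controls TensorFlow's eager execution behaviour and, as far as I know, has no effect on the TFLite interpreter. A common workaround is to run several independent Interpreter instances, one per worker process. The sketch below is only an illustration, not the asker's code; the detect.tflite file name and the 1x300x300x3 float input are assumptions.

import multiprocessing as mp
import numpy as np
import tensorflow as tf

MODEL_PATH = "detect.tflite"  # hypothetical model file

_interpreter = None

def _init_worker():
    # Each worker process builds its own interpreter; a single
    # tf.lite.Interpreter instance cannot be shared across processes.
    global _interpreter
    _interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
    _interpreter.allocate_tensors()

def _run_inference(frame):
    inp = _interpreter.get_input_details()[0]
    out = _interpreter.get_output_details()[0]
    _interpreter.set_tensor(inp['index'], frame.astype(inp['dtype']))
    _interpreter.invoke()
    return _interpreter.get_tensor(out['index'])

if __name__ == "__main__":
    # Dummy frames; the shape is an assumption for illustration only.
    frames = [np.random.rand(1, 300, 300, 3).astype(np.float32) for _ in range(8)]
    with mp.Pool(processes=4, initializer=_init_worker) as pool:
        results = pool.map(_run_inference, frames)

Recent TensorFlow releases also accept a num_threads argument when constructing tf.lite.Interpreter, which parallelises a single invocation instead of running several invocations in flight.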

ValueError: None is only supported in the 1st dimension. Tensor 'flatbuffer_data' has invalid shape '[None, None, 1, 512]'

妖精的绣舞 submitted on 2020-04-12 02:15:23
Question: I am trying to convert my TensorFlow (2.0) model to TensorFlow Lite format. My model has two input layers, as follows:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import load_model
from tensorflow.keras.layers import Lambda, Input, add, Dot, multiply, dot
from tensorflow.keras.backend import dot, transpose, expand_dims
from tensorflow.keras.models import Model
r1 = Input(shape=[None, 1, 512], name='flatbuffer_data') # I want to take a variable amount of
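
For background, the TF 2.0-era converter only accepts None in the batch dimension, so the usual fix is to pin every other axis before converting. Below is a minimal, self-contained sketch; the sequence length of 20 and the second input are assumptions standing in for the asker's real model.

import tensorflow as tf
from tensorflow.keras.layers import Input, Flatten, Concatenate
from tensorflow.keras.models import Model

# Fix every non-batch dimension so the converter only sees None in dim 0.
r1 = Input(shape=[20, 1, 512], name='flatbuffer_data')   # 20 is an arbitrary placeholder
r2 = Input(shape=[1, 512], name='flatbuffer_query')      # hypothetical second input
out = Concatenate()([Flatten()(r1), Flatten()(r2)])      # stand-in for the real graph
model = Model(inputs=[r1, r2], outputs=out)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open('model.tflite', 'wb').write(tflite_model)

# If the length really must vary at run time, interpreter.resize_tensor_input()
# before allocate_tensors() can sometimes help, but whether it works depends
# on the ops in the converted graph.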

TF-Lite model test fails with run-time error

无人久伴 submitted on 2020-03-23 07:23:11
Question: I have created a TF-Lite model for MNIST classification (I am using TF 1.12.0 and running this on Google Colab) and I want to test it using the TensorFlow Lite Python interpreter, as shown in https://github.com/freedomtan/tensorflow/blob/deeplab_tflite_python/tensorflow/contrib/lite/examples/python/label_image.py But I am getting this error when I try to invoke the interpreter:
RuntimeError Traceback (most recent call last)
<ipython-input-138-7d35ed1dfe14> in <module>()
----> 1 interpreter.invoke
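
The traceback is cut off above, so for reference here is the standard invocation sequence from that label_image-style example, which is a useful baseline to compare against. The mnist.tflite file name is an assumption; on TF 1.12 the interpreter class lives under tf.contrib.lite rather than tf.lite.

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="mnist.tflite")  # tf.contrib.lite.Interpreter on TF 1.12
interpreter.allocate_tensors()                                # must run before invoke()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy image with exactly the dtype and shape the model expects.
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))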

How to fix “There is at least 1 reference to internal data in the interpreter in the form of a numpy array or slice” and run inference on tf.lite

老子叫甜甜 submitted on 2020-03-18 12:17:23
Question: I'm trying to run inference using tf.lite on an MNIST Keras model that I optimized by doing post-training quantization according to this, and I get: RuntimeError: There is at least 1 reference to internal data in the interpreter in the form of a numpy array or slice. Be sure to only hold the function returned from tensor() if you are using raw data access. It happens after I resize either the images to be in 4 dimensions, or the interpreter itself, as seen in the commented line; since the error before
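
The error means a numpy view obtained from the interpreter (e.g. via tensor()) was still alive when its buffers were resized or reallocated. Below is a sketch of the usual pattern that avoids it, resizing before allocation and using the copying set_tensor/get_tensor accessors; the file name and the 1x28x28x1 shape are assumptions.

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="mnist_quant.tflite")  # assumed file name

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Resize to a 4-D batch BEFORE allocate_tensors(), while no numpy views
# into the interpreter's internal buffers are being held.
interpreter.resize_tensor_input(input_details['index'], [1, 28, 28, 1])
interpreter.allocate_tensors()

image = np.zeros([1, 28, 28, 1], dtype=input_details['dtype'])  # dummy input
# set_tensor()/get_tensor() copy data, so no internal reference stays alive.
interpreter.set_tensor(input_details['index'], image)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details['index'])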

Tensorflow: Determine the output stride of a pretrained CNN model

被刻印的时光 ゝ submitted on 2020-02-25 04:16:17
Question: I have downloaded and am implementing an ML application using the TensorFlow Lite PoseNet model. The output of this model is a heatmap, a part of CNNs that I am new to. One piece of information required to process the output is the "output stride". It is used to calculate the original coordinates of the keypoints found in the original image: keypointPositions = heatmapPositions * outputStride + offsetVectors. But the documentation doesn't specify the output stride. Is there information or
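
The stride is not stored as metadata in the .tflite file, but it can be recovered from the tensor shapes, since heatmapSize = (imageSize - 1) / outputStride + 1 for the PoseNet models. A sketch, assuming the downloaded PoseNet .tflite file name and that the first output is the heatmap:

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="posenet_mobilenet_v1.tflite")  # assumed file name
interpreter.allocate_tensors()

image_size = interpreter.get_input_details()[0]['shape'][1]     # e.g. 257
# Pick the heatmap output (shape [1, h, w, 17]); the index may differ per model.
heatmap_size = interpreter.get_output_details()[0]['shape'][1]  # e.g. 9
output_stride = (image_size - 1) // (heatmap_size - 1)          # e.g. (257 - 1) / (9 - 1) = 32
print(output_stride)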

How do I convert a TensorFlow 2.0 estimator model to TensorFlow Lite?

喜你入骨 submitted on 2020-02-02 03:06:42
Question: The code I have below produces the regular TensorFlow model, but when I try to convert it to TensorFlow Lite it doesn't work. I followed this documentation:
https://www.tensorflow.org/tutorials/estimator/linear1
https://www.tensorflow.org/lite/guide/get_started
export_dir = "tmp"
serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
    tf.feature_column.make_parse_example_spec(feat_cols))
estimator.export_saved_model(export_dir, serving_input_fn) #
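
One possible way to bridge the gap is sketched below: estimator.export_saved_model() writes a timestamped sub-directory under export_dir, and that sub-directory (not export_dir itself) is what tf.lite.TFLiteConverter.from_saved_model() expects. Note also that build_parsing_serving_input_receiver_fn produces a signature that consumes serialized tf.Example strings, which TensorFlow Lite cannot handle; exporting with tf.estimator.export.build_raw_serving_input_receiver_fn and plain numeric tensors is one possible workaround, though whether that applies here depends on the model's features.

import glob
import tensorflow as tf

# Pick the newest timestamped export under "tmp" (the export_dir in the question).
saved_model_dir = sorted(glob.glob("tmp/*"))[-1]

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
open("model.tflite", "wb").write(tflite_model)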

Xamarin tf.lite input objects

心已入冬 submitted on 2020-01-30 03:29:48
Question: I'm trying to reproduce TensorFlow object detection on Xamarin.
private MappedByteBuffer LoadModelFile()
{
    AssetFileDescriptor fileDescriptor = Assets.OpenFd("detect.tflite");
    FileInputStream inputStream = new FileInputStream(fileDescriptor.FileDescriptor);
    FileChannel fileChannel = inputStream.Channel;
    long startOffset = fileDescriptor.StartOffset;
    long declaredLength = fileDescriptor.DeclaredLength;
    return fileChannel.Map(FileChannel.MapMode.ReadOnly, startOffset, declaredLength);
}
View

TFLite prediction is totally different from frozen inference graph prediction

冷暖自知 submitted on 2020-01-25 10:10:09
Question: I worked on an eye region localisation project and trained my own custom dataset to create a model using the TensorFlow library. I produced .ckpt files (the model) and got acceptable results. I converted this model to a .pb frozen inference graph, tested the accuracy of the frozen model on my webcam, and it works fine. The problem is when I convert the .pb model to a tflite model: I get very bad results using an Android application and an ML Kit Firebase custom model. I have posted this issue on GitHub (
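
Before blaming the conversion, it helps to run the .tflite file in Python on the exact frame that the frozen .pb handles well: if the result is good there, the conversion is fine and the Android/ML Kit pre- or post-processing is the likely culprit; if it is bad there too, the converter settings are at fault. A sketch, where the file names and the (x - 127.5) / 127.5 normalisation are assumptions that must be matched to whatever the frozen-graph test used:

import cv2
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="eye_region.tflite")  # assumed file name
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

h, w = int(inp['shape'][1]), int(inp['shape'][2])
frame = cv2.imread("test_frame.png")                               # assumed test image
frame = cv2.cvtColor(cv2.resize(frame, (w, h)), cv2.COLOR_BGR2RGB)
# Use exactly the same normalisation as the frozen-graph test used.
x = ((frame.astype(np.float32) - 127.5) / 127.5)[np.newaxis, ...]

interpreter.set_tensor(inp['index'], x)
interpreter.invoke()
print(interpreter.get_tensor(out['index']))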

SSD mobilenet v1 with tflite giving bad output

痞子三分冷 submitted on 2020-01-25 06:47:26
Question: Background: I'm using source code from TensorFlow's object detection, as well as Firebase's MLInterpreter. I'm trying to stick closely to the prescribed steps in the documentation. During training, I can see on TensorBoard that the model is training properly, but somehow I am not exporting and wiring things up correctly for inference. Here are the details. Commands I used, from training through the .tflite file: First, I submit the training job using a ssd_mobilenet_v1 config file. The config
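
For reference, when the SSD graph is exported with the TFLite post-processing op, the converted model has four outputs: boxes, classes, scores and num_detections, with boxes given as normalised [ymin, xmin, ymax, xmax]. Forgetting to scale the boxes back to pixels, or feeding an image that is not normalised the way the model expects, are common causes of garbage detections. A sketch (the file name, input size, and [-1, 1] normalisation are assumptions, and the output ordering can vary):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="detect.tflite")  # assumed file name
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()

image = np.random.rand(1, 300, 300, 3).astype(np.float32) * 2 - 1  # dummy [-1, 1] input
interpreter.set_tensor(inp['index'], image)
interpreter.invoke()

boxes = interpreter.get_tensor(outs[0]['index'])[0]    # [N, 4] normalised [ymin, xmin, ymax, xmax]
classes = interpreter.get_tensor(outs[1]['index'])[0]  # label-map indices (often offset by 1)
scores = interpreter.get_tensor(outs[2]['index'])[0]
num = int(interpreter.get_tensor(outs[3]['index'])[0])
for i in range(num):
    if scores[i] > 0.5:
        print(classes[i], scores[i], boxes[i])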