tensorflow-lite

Cannot convert between a TensorFlowLite tensor with type UINT8 and a Java object

ぃ、小莉子 posted on 2020-05-30 06:47:07
Question: I am using ML Kit to load a custom TensorFlow model. When running inference, I get the following error: java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite tensor with type UINT8 and a Java object of type [[[[F (which is compatible with the TensorFlowLite type FLOAT32). I am using the code below for object detection with the .tflite file:

```kotlin
private fun bitmapToInputArray(bitmap: Bitmap): Array<Array<Array<FloatArray>>> {
    var bitmap = bitmap
    bitmap = Bitmap.createScaledBitmap(bitmap, …
```
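The error means the model's input tensor is quantized (UINT8) while the code feeds it float arrays. A minimal Python sketch of how to confirm what the model expects, assuming "model.tflite" as a placeholder path; if the dtype is uint8, the Android code must feed uint8 pixel data (e.g. a ByteBuffer) rather than nested FloatArrays:

```python
import numpy as np
import tensorflow as tf

# Inspect the model's input tensor ("model.tflite" is a placeholder path).
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

detail = interpreter.get_input_details()[0]
print(detail["dtype"])  # e.g. <class 'numpy.uint8'> for a quantized model

# Feed data of exactly the dtype and shape the model declares.
dummy = np.zeros(detail["shape"], dtype=detail["dtype"])
interpreter.set_tensor(detail["index"], dummy)
interpreter.invoke()
```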

How to install TensorFlow on the Coral Dev Board?

北慕城南 posted on 2020-05-28 11:55:30
Question: How do I install TensorFlow on the Coral Dev Board? Following this on the board, I get errors such as compile.sh not found. Please give a detailed explanation. Answer 1: It is really not going to be possible to help if you don't give details on what you've done or what errors you ran into while trying to install it. However, since the objective is to install TensorFlow on the board, you can just do this using this pre-built package:

```
$ wget https://github.com/lhelontra/tensorflow-on-arm/releases/download/v2.0.0 …
```
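Once a wheel from that release has been pip-installed (the exact wheel filename is truncated above), a quick sanity check on the board might look like this sketch:

```python
# Verify the install; the version string should match the wheel you chose.
import tensorflow as tf

print(tf.__version__)
print(tf.lite.Interpreter)  # confirms the TFLite interpreter is importable
```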

Speed up TFLite inference in Python with a multiprocessing pool

…衆ロ難τιáo~ posted on 2020-05-26 06:12:09
Question: I was playing with tflite and observed on my multicore CPU that it is not heavily stressed during inference. I eliminated the IO bottleneck by creating random input data with numpy beforehand (random matrices resembling images), but tflite still doesn't utilize the full potential of the CPU. The documentation mentions the possibility of tweaking the number of threads used, but I was not able to find out how to do that in the Python API. And since I have seen people using multiple …
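In recent TensorFlow releases, the Python interpreter accepts a num_threads argument directly. A minimal sketch, assuming "model.tflite" as a placeholder path and a float32 input model:

```python
import numpy as np
import tensorflow as tf

# num_threads is accepted by tf.lite.Interpreter in newer TF versions;
# check your release if the keyword is rejected.
interpreter = tf.lite.Interpreter(model_path="model.tflite", num_threads=4)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
dummy = np.random.rand(*inp["shape"]).astype(np.float32)  # random image-like input
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()
```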

How to build TensorFlow Lite as a static library and link to it from a separate (CMake) project?

佐手、 posted on 2020-05-25 23:46:29
Question: I've successfully built a simple C++ app running a TF Lite model by adding my sources to tensorflow/lite/examples, similarly to what the official C++ TF guide suggests for full TF. Now I want to build it as a separate project (shared library) linking to TF Lite statically and using CMake as the build system. I tried to add a custom target to my CMakeLists.txt which would build TF Lite with Bazel:

```cmake
set(TENSORFLOW_DIR ${CMAKE_SOURCE_DIR}/thirdparty/tensorflow)
add_custom_target(TFLite COMMAND …
```

TensorFlow.js vs TensorFlow Lite

旧时模样 posted on 2020-05-16 03:52:10
Question: Quite an open-ended question. I'm just pretty curious what the current differences are if I want to deploy a machine learning (object detection) model in the browser, perhaps in a web app to begin with (to be viewed on a phone). From what I know, both TensorFlow.js and TensorFlow Lite are compatible with such a deployment. (I've heard TensorFlow Lite is superior, but I'm curious to find the pros and cons, if any.) What are the main differences between them? Would TensorFlow.js be a good choice too? Answer 1: main …

How can I view weights in a .tflite file?

↘锁芯ラ posted on 2020-05-13 06:33:10
Question: I have the pre-trained .pb file of MobileNet and find it's not quantized, while the fully quantized model should be converted into the .tflite format. Since I'm not familiar with tools for mobile app development, how can I get the fully quantized weights of MobileNet from a .tflite file? More precisely, how can I extract the quantized parameters and view their numerical values? Answer 1: The Netron model viewer has a nice view and export of data, as well as a nice network diagram view. https://github.com …
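Beyond Netron, the TFLite Python API can also dump tensor metadata and raw weight values. A small sketch, assuming "mobilenet_quant.tflite" as a placeholder path:

```python
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="mobilenet_quant.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_tensor_details():
    # "quantization" holds the (scale, zero_point) pair for quantized tensors.
    print(detail["index"], detail["name"], detail["dtype"], detail["quantization"])

# Raw values of a constant tensor (e.g. a weight matrix, index chosen from the
# listing above); get_tensor() is only meaningful for materialized tensors.
print(interpreter.get_tensor(1))
```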

What is the correct way to create a representative dataset for TFLiteConverter?

半城伤御伤魂 posted on 2020-05-13 04:38:50
Question: I am trying to run inference on tinyYOLO-V2 with INT8 weights and activations. I can convert the weights to INT8 with TFLiteConverter. For INT8 activations, I have to provide a representative dataset to estimate the scaling factors. My method of creating such a dataset seems wrong. What is the correct procedure?

```python
def rep_data_gen():
    a = []
    for i in range(160):
        inst = anns[i]
        file_name = inst['filename']
        img = cv2.imread(img_dir + file_name)
        img = cv2.resize(img, (NORM_H, NORM_W))
        img = img / 255.0
        img = img …
```
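The converter expects representative_dataset to be a callable that yields, per calibration sample, a list of input arrays (one per model input, batch dimension included) rather than one accumulated array. A sketch under those assumptions, with calib_images and "saved_model_dir" as hypothetical stand-ins for the question's annotation-driven loading:

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # calib_images: hypothetical list of preprocessed HxWxC float images.
    # Yield one sample at a time: a list with one array per model input.
    for img in calib_images[:160]:
        yield [np.expand_dims(img.astype(np.float32), axis=0)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
tflite_model = converter.convert()
```

For a TF1 frozen graph (as tinyYOLO-V2 often ships), tf.lite.TFLiteConverter.from_frozen_graph is the analogous entry point.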

Screen size of the camera in the TensorFlow Lite object detection example

北战南征 posted on 2020-04-30 07:10:13
Question: In the TensorFlow Lite object detection example, the camera preview doesn't take up the whole screen, just a part of it. I tried to find a relevant constant in the CameraActivity, CameraConnectionFragment, and Size classes, but with no results. I just want a way to make the camera fill the whole screen, or an explanation. Thank you. Answer 1: I just found the solution; it's in the CameraConnectionFragment class:

```java
protected static Size chooseOptimalSize(final Size[] choices, final int width, final int height) {
    final int minSize = …
```

Does “tf.config.experimental.set_synchronous_execution” make the Python tensorflow lite interpreter use multiprocessing?

蓝咒 posted on 2020-04-18 05:47:39
Question: I am using Python to do object detection in a video stream. I have a TensorFlow Lite model which takes a relatively long time to evaluate. Using interpreter.invoke(), it takes about 500 ms per evaluation. I'd like to use parallelism to get more evaluations per second. I see that I can call the TensorFlow config function tf.config.experimental.set_synchronous_execution. I was hoping that setting this would magically cause the interpreter to run in multiple processes. However, running help(tf.lite …
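Whatever that flag does for TensorFlow's own op execution, one way to get parallel evaluations is explicit multiprocessing with one interpreter per worker process. A sketch, assuming "model.tflite" as a placeholder path and a hypothetical 1x300x300x3 float32 input:

```python
import multiprocessing as mp

import numpy as np
import tensorflow as tf

def init_worker():
    # Each worker process builds its own interpreter; TFLite interpreters
    # are not safely shared across processes.
    global interpreter, input_index, output_index
    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()
    input_index = interpreter.get_input_details()[0]["index"]
    output_index = interpreter.get_output_details()[0]["index"]

def infer(frame):
    interpreter.set_tensor(input_index, frame)
    interpreter.invoke()
    return interpreter.get_tensor(output_index)

if __name__ == "__main__":
    # Stand-in frames; in practice these come from the video stream.
    frames = [np.random.rand(1, 300, 300, 3).astype(np.float32) for _ in range(8)]
    with mp.Pool(processes=4, initializer=init_worker) as pool:
        results = pool.map(infer, frames)
```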