tf-lite

How to convert a model trained on a custom dataset for the Edge TPU board?

て烟熏妆下的殇ゞ Posted on 2020-06-17 15:20:24
Question: I have trained my custom dataset using the TensorFlow Object Detection API. I run my "prediction" script and it works fine on the GPU. Now, I want to convert the model to TensorFlow Lite and run it on the Google Coral Edge TPU board to detect my custom objects. I have gone through the documentation that the Google Coral board website provides, but I found it very confusing. How do I convert the model and run it on the Google Coral Edge TPU board? Thanks

Answer 1: Without reading the documentation, it will be very hard to
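The documentation boils down to one requirement: the Edge TPU only runs fully integer-quantized TFLite models, which are then compiled with edgetpu_compiler. A minimal sketch of the conversion step, assuming a TF 2.x SavedModel export; the model path and the 300x300 input size are assumptions, not taken from the question:

```python
# Sketch only: full-integer post-training quantization, which the Edge TPU
# requires. The SavedModel path and 300x300 input size are assumptions.
import numpy as np

def representative_dataset(num_samples=100, height=300, width=300):
    # Calibration images let the converter choose int8 ranges per tensor.
    # In practice, yield real preprocessed frames from your dataset here.
    for _ in range(num_samples):
        yield [np.random.rand(1, height, width, 3).astype(np.float32)]

def convert_for_edgetpu(saved_model_dir="exported/saved_model"):
    import tensorflow as tf  # imported lazily; written against TF 2.x
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    # Every op must run in int8 for the Edge TPU compiler to map it.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8
    return converter.convert()
```

After writing the result to e.g. model_quant.tflite, running `edgetpu_compiler model_quant.tflite` on a Linux host produces the `_edgetpu.tflite` file the board actually runs.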

Getting EXCEPTION CAUGHT BY WIDGETS LIBRARY & Failed assertion: line 3289 pos 12: '!_debugLocked': is not true

扶醉桌前 Posted on 2020-06-17 09:40:46
Question: In the code below, the widget _buildResultsWidget has a condition under which it should navigate to a new page; the new page should stay on screen for 2 seconds and then return me back, but an error appears the moment the condition is activated. import '../main.dart'; class DetectScreen extends StatefulWidget { DetectScreen({Key key, this.title}) : super(key: key); final String title; @override _DetectScreenPageState createState() =>

Inference problem using a tflite model java.lang.IllegalArgumentException: Invalid output Tensor index: 1

不打扰是莪最后的温柔 Posted on 2020-06-01 05:43:25
Question: Input and output shape Java exception. How did you solve this exception, please: java.lang.IllegalArgumentException: Invalid output Tensor index: 1. I converted a yolov3-tiny model and changed NUM_DETECTION to 2535 (NUM_DETECTION=2535) because the input shape is (1,416,416,6) and the output shape is (1,2535,6). I trained the model on license plates so it can detect them. As I said, I worked with the yolov3-tiny version with darknet, so I converted it to a pb file and then a tflite file so I
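This exception usually means the wrapper code asks the interpreter for more output tensors than the converted graph actually has: SSD-style demo code reads four outputs (indices 0..3), while a converted yolov3-tiny graph typically exposes a single output. A pure-Python sketch of the bounds check behind the message; the dicts are illustrative, not a real model:

```python
# Sketch of the guard behind "Invalid output Tensor index: 1".
# A yolov3-tiny conversion typically yields ONE output tensor of shape
# (1, 2535, 6); code copied from an SSD example that reads outputs 0..3
# will trip this check on index 1.
def get_output(output_details, index):
    # Mirrors the interpreter's guard: index must address a real output.
    if not 0 <= index < len(output_details):
        raise ValueError("Invalid output Tensor index: %d" % index)
    return output_details[index]

# Illustrative output metadata for a converted yolov3-tiny model:
yolo_outputs = [{"index": 0, "shape": (1, 2535, 6)}]
get_output(yolo_outputs, 0)    # fine: the single YOLO output
# get_output(yolo_outputs, 1)  # raises, matching the Java exception
```

Before wiring up the Java side, inspecting the model with `tf.lite.Interpreter(model_path=...).get_output_details()` in Python, or `interpreter.getOutputTensorCount()` in Java, shows how many outputs actually exist.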

How to convert tflite_graph.pb to detect.tflite properly

∥☆過路亽.° Posted on 2020-06-01 05:17:07
Question: I am using the TensorFlow object detection API to train a custom model based on ssdlite_mobilenet_v2_coco_2018_05_09 from the TensorFlow model zoo. I successfully trained the model and tested it using a script provided in this tutorial. Here is the problem: I need a detect.tflite to use on my target machine (an embedded system). But when I actually make a tflite out of my model, it outputs almost nothing, and when it does, it's a wrong detection. To make the .tflite file, I first used export
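For SSD models the usual fix is to export with export_tflite_ssd_graph.py (not export_inference_graph.py) and then convert the resulting tflite_graph.pb with the detection postprocess outputs wired up. A hedged TF 1.x sketch; the file names and the 300x300 input shape are assumptions:

```python
# Sketch: converting tflite_graph.pb (produced by export_tflite_ssd_graph.py)
# into detect.tflite with TF 1.x. The array names below are the ones that
# exporter creates for SSD models.
INPUT_ARRAY = "normalized_input_image_tensor"
OUTPUT_ARRAYS = [
    "TFLite_Detection_PostProcess",    # boxes
    "TFLite_Detection_PostProcess:1",  # classes
    "TFLite_Detection_PostProcess:2",  # scores
    "TFLite_Detection_PostProcess:3",  # number of detections
]

def convert(graph_def_file="tflite_graph.pb", out_path="detect.tflite"):
    import tensorflow as tf  # imported lazily; written against TF 1.x
    converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
        graph_def_file,
        input_arrays=[INPUT_ARRAY],
        output_arrays=OUTPUT_ARRAYS,
        input_shapes={INPUT_ARRAY: [1, 300, 300, 3]},
    )
    converter.allow_custom_ops = True  # the postprocess op is a custom op
    with open(out_path, "wb") as f:
        f.write(converter.convert())
```

Exporting with the wrong script, or forgetting allow_custom_ops, are two common ways to end up with a model that "detects nothing".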

How to install tensorflow on coral dev board?

北慕城南 Posted on 2020-05-28 11:55:30
Question: How do I install TensorFlow on the Coral Dev Board? I am getting errors following this on the Coral Dev Board, such as compile.sh not found, etc. Please give a detailed explanation.

Answer 1: It is really not going to be possible to help if you don't give details on what you've done or what errors you ran into while trying to install it. However, since the objective is to install TensorFlow on the board, you can just do this using this pre-built package: $ wget https://github.com/lhelontra/tensorflow-on-arm/releases/download/v2.0.0

TensorFlow Lite conversion

青春壹個敷衍的年華 Posted on 2020-01-25 06:59:08
Question: I'm working with the Raspberry Pi 4 to create an image detection model. I need to turn the model into a lite version, since I'm installing it on a card called JeVois. I have TensorFlow 1.13.1 for the Raspberry Pi. My problem is the following: I finished the training stage and took the following steps to export the model in lite format: python3 export_inference_graph.py --input_type image_tensor --pipeline_config_path training/ssd_mobilenet_v1.config --trained_checkpoint_prefix training
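Note that export_inference_graph.py produces a graph meant for full TensorFlow; for an SSD TFLite conversion, export_tflite_ssd_graph.py is the usual first step. Either way, before copying the file to the JeVois card it is worth running the converted model once on the Pi with a dummy frame. A sketch, with the model path as an assumption:

```python
# Sketch: load the converted .tflite and run one dummy frame to verify it
# loads and produces sensibly shaped outputs. Model path is an assumption.
import numpy as np

def dummy_input(shape, dtype):
    # A zero-filled frame matching the interpreter's expected input spec.
    return np.zeros(shape, dtype=dtype)

def smoke_test(model_path="model.tflite"):
    import tensorflow as tf  # tf.lite.Interpreter exists in TF 1.13+
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    interpreter.set_tensor(inp["index"], dummy_input(inp["shape"], inp["dtype"]))
    interpreter.invoke()
    for out in interpreter.get_output_details():
        print(out["index"], out["shape"], interpreter.get_tensor(out["index"]).shape)
```

If this already prints empty or nonsensical output shapes, the problem is in the export/conversion, not in the deployment target.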

Calculation operations with the parameters of a TFLite quantized model

為{幸葍}努か Posted on 2019-12-31 05:15:07
Question: I am trying to implement image classification in hardware using the quantized MobileNetV2 model taken from here. To do that, I first need to reproduce the inference process from beginning to end to make sure I understand the calculations/operations that are performed on the data. The first target is the Conv function. I can see how it is being calculated, but there are several arguments passed to this function whose origin I would like to understand: output_offset,
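In TFLite's quantization scheme each tensor is modeled as real ≈ scale × (q − zero_point), so output_offset is simply the output tensor's zero point, and the int32 conv accumulator is rescaled by the real multiplier M = S_in·S_w / S_out, stored as a fixed-point output_multiplier plus a power-of-two output_shift. A pure-Python sketch of how that pair is derived and applied (the scales are made-up example values, and the rounding is simplified relative to TFLite's saturating helpers):

```python
import math

def quantize_multiplier(real_multiplier):
    # Decompose M = M0 * 2^shift with M0 in [0.5, 1), then store M0 as a
    # Q31 fixed-point int32 -- this pair is (output_multiplier, output_shift).
    m0, shift = math.frexp(real_multiplier)
    q = int(round(m0 * (1 << 31)))
    if q == (1 << 31):  # rounding pushed M0 up to exactly 1.0
        q //= 2
        shift += 1
    return q, shift

def rescale(acc, q, shift):
    # Apply acc * M using only integer arithmetic (simplified version of
    # TFLite's rounding-doubling-high-mul + rounding-divide-by-POT).
    prod = (acc * q + (1 << 30)) >> 31  # rounding Q31 multiply
    if shift >= 0:
        return prod << shift
    return (prod + (1 << (-shift - 1))) >> -shift

# Example scales (made up): input, weights, output.
s_in, s_w, s_out = 0.02, 0.004, 0.1
M = s_in * s_w / s_out               # the real multiplier, typically < 1
q, shift = quantize_multiplier(M)
acc = 12345                          # an int32 convolution accumulator
out = rescale(acc, q, shift)         # ~ acc * M; output_offset is added next
```

The final quantized output is then out + output_offset, clamped to the uint8/int8 range; that is where the remaining arguments in the Conv function come from.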