tensorflow-lite

How to reduce the size of a TFLite model, or download and set it programmatically?

只谈情不闲聊 submitted on 2021-02-07 13:21:08
Question: Okay, so in my app I am trying to implement face recognition using a FaceNet model converted to TFLite, which weighs in at about 93 MB; this model considerably increases the size of my APK, so I am trying to find alternate ways to deal with it. The first I can think of is to compress the model in some way and then uncompress it when the app is installed. Another way is to upload the model to a server and, after it is downloaded, load it in my application. However I do not seem

TensorFlow Lite C++ API example for inference

▼魔方 西西 submitted on 2021-02-06 09:10:51
Question: I am trying to get a TensorFlow Lite example to run on a machine with an ARM Cortex-A72 processor. Unfortunately, I wasn't able to deploy a test model due to the lack of examples on how to use the C++ API. I will try to explain what I have achieved so far. Create the tflite model: I have created a simple linear regression model and converted it; it should approximate the function f(x) = 2x - 1. I got this code snippet from some tutorial, but I am unable to find it anymore. import
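The original tutorial snippet is not recoverable, but model creation for this kind of example typically looks like the following. This is a hedged reconstruction, not the original code; the training data simply samples f(x) = 2x - 1.

```python
import numpy as np
import tensorflow as tf

# Samples of f(x) = 2x - 1 to fit a one-neuron linear model.
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=np.float32)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=np.float32)

model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer="sgd", loss="mean_squared_error")
model.fit(xs, ys, epochs=500, verbose=0)

# Convert the trained Keras model to a flatbuffer for the C++ interpreter.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting `model.tflite` can then be loaded on the target board with the C++ `tflite::FlatBufferModel` / `tflite::Interpreter` classes.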


How to use smart reply custom ops in python or tfjs?

可紊 submitted on 2021-01-29 09:49:17
Question: I'm trying to run the smart reply TFLite model in Python or tfjs, but it uses custom ops. Please refer to https://github.com/tensorflow/examples/tree/master/lite/examples/smart_reply/android/app/libs/cc. So how do I build that custom op separately and use it in Python or tfjs? Source: https://stackoverflow.com/questions/59644961/how-to-use-smart-reply-custom-ops-in-python-or-tfjs

Unable to convert YOLOv4 to tflite

时间秒杀一切 submitted on 2021-01-29 09:23:56
Question: I'm trying to use YOLOv4 in my Android project, but I'm having problems with the conversion. The code from https://pypi.org/project/yolov4/ has worked well on Google Colab, though it had some problems with my CUDA version in a Jupyter notebook. I got a conversion error from: yolo.save_as_tflite("yolov4.tflite") The error was long and I am not sure which part I should paste here. Can someone recommend an alternative method to convert a tflite version of YOLOv4 (preferably working on Colab)? Answer 1: If you
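When a direct conversion fails on unsupported ops, one common workaround is to run the TFLite converter yourself and allow unsupported ops to fall back to TensorFlow kernels (Select TF Ops). This is a hedged sketch: how you obtain the underlying Keras model from the yolov4 package varies between versions, so `keras_model` below is a stand-in.

```python
import tensorflow as tf

def convert_with_fallback(keras_model, out_path="yolov4.tflite"):
    """Convert a Keras model to TFLite, letting unsupported ops fall back
    to the TensorFlow kernels bundled via the Flex delegate."""
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,   # prefer native TFLite ops
        tf.lite.OpsSet.SELECT_TF_OPS,     # fall back to TF ops otherwise
    ]
    tflite_model = converter.convert()
    with open(out_path, "wb") as f:
        f.write(tflite_model)
    return out_path
```

Note that a model converted with Select TF Ops needs the Flex delegate AAR on the Android side, which increases app size compared to a builtins-only model.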

tensorflow-trained SSD model not working after converting to tensorflow-lite for raspi

梦想的初衷 submitted on 2021-01-29 09:22:43
Question: System information: Laptop: Linux Ubuntu, TensorFlow 1.15.0; Raspi: Raspberry Pi 4, tflite-runtime 2.5.0, tensorflow-estimator 1.14.0, Coral Edge TPU. Hello everybody, I am stuck getting my trained model running on the Raspi. I trained the ssd_mobilenet_v2_coco model from the TensorFlow 1 model zoo with my own custom dataset on Google Cloud, using this config file where I made a few changes: model { ssd { num_classes: 3 image_resizer { fixed_shape_resizer { height: 720 width: 1280 } } feature_extractor { type:

Converting pretrained model from tfhub to tflite

你。 submitted on 2021-01-29 06:26:53
Question: I'm trying to convert openimages_v4/ssd/mobilenet_v2 to tflite using: $ pip3 install tensorflow==2.4.0 $ tflite_convert --saved_model_dir=openimages_v4_ssd_mobilenet_v2_1 --output_file=/tmp/openimages_v4_ssd_mobilenet_v2_1.tflite but it gives this error: <stacktrace snipped ..> RuntimeError: MetaGraphDef associated with tags {'serve'} could not be found in SavedModel. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: `saved_model_cli` available_tags: [set()]

TFLite Inference on video input

老子叫甜甜 submitted on 2021-01-28 19:26:42
Question: I have an SSD tflite detection model that I am running with Python on a desktop computer. For now, my script below takes a single image as input for inference, and it works fine: # Load TFLite model and allocate tensors. interpreter = tf.lite.Interpreter(model_path="model.tflite") interpreter.allocate_tensors() img_resized = Image.open(file_name) input_data = np.expand_dims(img_resized, axis=0) input_data = (np.float32(input_data) - input_mean) / input_std input_details = interpreter.get

How to convert from Tensorflow.js (.json) model into Tensorflow (SavedModel) or Tensorflow Lite (.tflite) model?

我怕爱的太早我们不能终老 submitted on 2021-01-28 12:16:22
Question: I have downloaded a pre-trained PoseNet model for TensorFlow.js (tfjs) from Google, so it's a JSON file. However, I want to use it on Android, so I need the .tflite model. Although someone has 'ported' a similar model from tfjs to tflite here, I have no idea what model (there are many variants of PoseNet) they converted. I want to do the steps myself. Also, I don't want to run some arbitrary code someone uploaded in a file on Stack Overflow: Caution: Be careful with untrusted code—TensorFlow