How to do batching with TensorFlow Lite?


Question


I have a custom CNN model, and I have converted it to .tflite format and deployed it on my Android app. However, I can't figure out how to do batching during inference with TensorFlow Lite.

From this Google doc, it seems you have to set your model's input format. However, that doc uses a code example based on the Firebase API, which I'm not planning to use.

To be more specific:

I want to run inference on multiple 100x100x3 images at once, so the input shape is Nx100x100x3.

Question:

How do I do this with TF Lite?


Answer 1:


You can just call the resizeInput API (Java) or ResizeInputTensor API (if you're using C++).

For example, in Java:

interpreter.resizeInput(tensorIndex, new int[] {numBatch, 100, 100, 3});
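
For a fuller picture, here is a minimal end-to-end sketch in Java. The batch size of 4, the 10-class output width, the model filename, and input tensor index 0 are all placeholder assumptions for your model:

import org.tensorflow.lite.Interpreter;

import java.io.File;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class BatchedInference {
    public static void main(String[] args) {
        int numBatch = 4;     // placeholder batch size
        int numClasses = 10;  // placeholder output width for this model

        try (Interpreter interpreter = new Interpreter(new File("model.tflite"))) {
            // Resize input tensor 0 from [1, 100, 100, 3] to [numBatch, 100, 100, 3].
            interpreter.resizeInput(0, new int[] {numBatch, 100, 100, 3});

            // Input buffer: numBatch float32 images in NHWC layout (4 bytes per float).
            ByteBuffer input = ByteBuffer
                    .allocateDirect(numBatch * 100 * 100 * 3 * 4)
                    .order(ByteOrder.nativeOrder());
            // ... fill `input` with your pixel data here ...

            // The output must also carry the batch dimension.
            float[][] output = new float[numBatch][numClasses];

            // run() (re)allocates tensors as needed and executes the whole batch in one call.
            interpreter.run(input, output);
        }
    }
}

Note that resizing triggers tensor reallocation, so if the batch size is fixed it's better to resize once up front rather than before every inference call.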

Let us know if you have any problems batching in TensorFlow Lite.



Source: https://stackoverflow.com/questions/52783747/how-to-do-batching-with-tensorflow-lite
