Question
I have a custom CNN model that I converted to .tflite format and deployed in my Android app. However, I can't figure out how to batch inference with TensorFlow Lite.
From this Google doc, it seems you have to set the input format of your model. However, that doc uses a code example based on the Firebase API, which I'm not planning to use.
To be more specific:
I want to run inference on multiple 100x100x3 images at once, so the input shape is Nx100x100x3.
Question:
How to do this with TF lite?
Answer 1:
You can just call the resizeInput API (Java) or ResizeInputTensor API (if you're using C++).
For example, in Java:
interpreter.resizeInput(tensorIndex, new int[]{numBatch, 100, 100, 3});
Let us know if you have any problems batching in TensorFlow Lite.
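To make the resize call concrete, here is a minimal sketch of end-to-end batched inference with the TensorFlow Lite Java API. It assumes a float model with a single input of shape [1, 100, 100, 3] and a single classification output; `modelBuffer` and `numClasses` are placeholders you would supply from your own app:

```java
import org.tensorflow.lite.Interpreter;

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.MappedByteBuffer;

public class BatchInference {

    // Run N images through a model whose default input shape is [1, 100, 100, 3].
    // `modelBuffer` is the memory-mapped .tflite file and `numClasses` is the
    // size of the model's output vector per image -- both are assumptions here.
    public static float[][] runBatch(MappedByteBuffer modelBuffer,
                                     float[][][][] images,  // N x 100 x 100 x 3
                                     int numClasses) {
        int n = images.length;
        Interpreter interpreter = new Interpreter(modelBuffer);
        try {
            // Resize input tensor 0 to carry the whole batch; the interpreter
            // re-allocates its tensors before the next run.
            interpreter.resizeInput(0, new int[]{n, 100, 100, 3});

            // Pack all images into one direct ByteBuffer (4 bytes per float).
            ByteBuffer input = ByteBuffer
                    .allocateDirect(n * 100 * 100 * 3 * 4)
                    .order(ByteOrder.nativeOrder());
            for (float[][][] img : images)
                for (float[][] row : img)
                    for (float[] pixel : row)
                        for (float channel : pixel)
                            input.putFloat(channel);
            input.rewind();

            // The output gains the same leading batch dimension.
            float[][] output = new float[n][numClasses];
            interpreter.run(input, output);
            return output;
        } finally {
            interpreter.close();
        }
    }
}
```

Note that resizing happens per interpreter, not per call: if your batch size varies between requests, call resizeInput again with the new N before running.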
Source: https://stackoverflow.com/questions/52783747/how-to-do-batching-with-tensorflow-lite