tensorflow-lite - using tflite Interpreter to get an image in the output

Submitted by 折月煮酒 on 2019-12-06 15:50:08

Question


I am trying to use the workflow of the TensorFlow for Poets 2 TFLite tutorial, https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2-tflite/#6

But instead of image classification, I am trying to do style transfer. This means that both the input and the output of my network are images (whereas in the original example the input is an image and the output is a list of scores).

One of my many problems is getting the processed output image from the tflite inference:

After loading the tflite model, I have the tflite Interpreter tflite. Using this Interpreter, I run the inference:

tflite.run(imgData, Out_imgData);

where imgData and Out_imgData are ByteBuffers, created in the same way as in the TensorFlow for Poets 2 TFLite tutorial, https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2-tflite/#6.
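
For context, both buffers follow the codelab's allocation pattern, roughly like this (a minimal sketch; the 224×224 size and 3-channel float32 layout are assumptions, substitute the model's actual input/output shape):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Assumed tensor shape: 1 x 224 x 224 x 3 float32 (4 bytes per value).
final int IMG_SIZE = 224;  // placeholder; use the model's real size
final int NUM_BYTES = 4 * IMG_SIZE * IMG_SIZE * 3;

// Direct, native-order buffers, as the TFLite Interpreter expects.
ByteBuffer imgData = ByteBuffer.allocateDirect(NUM_BYTES);
imgData.order(ByteOrder.nativeOrder());

ByteBuffer Out_imgData = ByteBuffer.allocateDirect(NUM_BYTES);
Out_imgData.order(ByteOrder.nativeOrder());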

Now I have my inference output as the ByteBuffer Out_imgData.

I can't find an example where the inference output is an image. Please help me convert the float ByteBuffer Out_imgData to a bitmap image, or point me to an example.
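
For reference, one possible direction is a helper along these lines (a minimal sketch, not from the tutorial: it assumes the output tensor holds width × height float32 RGB values in [0, 1], read in row-major order, and the helper name is hypothetical; the scaling must change for models that emit [0, 255] or [-1, 1]):

import android.graphics.Bitmap;
import android.graphics.Color;
import java.nio.ByteBuffer;

// Hypothetical helper: turn a float32 RGB ByteBuffer into a Bitmap.
Bitmap floatBufferToBitmap(ByteBuffer buffer, int width, int height) {
    buffer.rewind();
    int[] pixels = new int[width * height];
    for (int i = 0; i < pixels.length; i++) {
        // Read one pixel (R, G, B), clamp to [0, 1], and scale to 8 bits.
        int r = Math.round(Math.min(Math.max(buffer.getFloat(), 0f), 1f) * 255f);
        int g = Math.round(Math.min(Math.max(buffer.getFloat(), 0f), 1f) * 255f);
        int b = Math.round(Math.min(Math.max(buffer.getFloat(), 0f), 1f) * 255f);
        pixels[i] = Color.rgb(r, g, b);
    }
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bitmap.setPixels(pixels, 0, width, 0, 0, width, height);
    return bitmap;
}

Filling an int[] and calling setPixels once is preferable to calling setPixel per pixel, since it crosses into the Bitmap's native storage a single time.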

Visual problem description: using the tflite Interpreter in Python, I get the following output image: [image]


Answer 1:


I faced a similar problem in my segmentation project: almost all output pixels were 255 during inference with the tflite file, but everything was fine during inference with the exported model. A long search for a solution led me to this related issue. It says that the problem is in the batch normalization layers. I removed them, and my outputs became normal, but the network's quality fell dramatically without batch norm. I tried replacing tf.layers.batch_normalization with tf.keras.layers.BatchNormalization and with tf.contrib.layers.batch_norm, but the result was the same. Finally, I solved the problem by implementing my own batch normalization like this:

def my_moments(input_tensor):
    # Per-channel mean and variance over the batch, height, and width axes,
    # built only from ops that the tflite converter supports.
    mean = tf.reduce_mean(input_tensor, axis=[0, 1, 2])
    dev = input_tensor - mean
    dev = dev * dev
    dev = tf.reduce_mean(dev, axis=[0, 1, 2])
    return mean, dev

def my_bn(input_tensor):
    # Learnable per-channel scale (mu) and shift (beta).
    mu = tf.Variable(tf.ones(input_tensor.shape[3]))
    beta = tf.Variable(tf.zeros(input_tensor.shape[3]))
    mean, dev = my_moments(input_tensor)
    # Normalize, then scale and shift; the 0.001 keeps the denominator nonzero.
    return beta + mu * (input_tensor - mean) / (tf.sqrt(dev) + 0.001)

Note that this is not a literal implementation of batch norm (no moving average is used here), because only training mode was required for my project. Also note that we cannot use tf.nn.moments to calculate the mean and variance, because it is not supported by tflite (so we need to implement our own moments function). After replacing batch normalization with the functions above, I was able to train my network, export it to tflite, and run inference with tflite correctly.



Source: https://stackoverflow.com/questions/53141167/tensorflow-lite-using-tflite-interpreter-to-get-an-image-in-the-output
