tensorflow lite model gives very different accuracy value compared to python model

自闭症网瘾萝莉.ら submitted on 2019-12-03 01:52:27

I have run into the same problem. It seems to me that the accuracy gap is mainly caused by a failure to detect overlapping objects, though I couldn't figure out which part of the code is at fault.

This question is answered here, check it out it might help:

https://stackoverflow.com/a/58583602/11517841

As mentioned in the shared answer, doing some pre-processing on the image before it is fed to "interpreter.invoke()" solves the issue, if that was the problem in the first place.

To elaborate on that here is a block quote from the shared link:

The code below is what I meant by pre-processing:

```python
import cv2
import numpy as np

test_image = cv2.imread(file_name)
# Note: interpolation must be passed by keyword; passing cv2.INTER_AREA
# positionally would be treated as the (unused) dst argument.
test_image = cv2.resize(test_image, (299, 299), interpolation=cv2.INTER_AREA)
test_image = np.expand_dims(test_image / 255, axis=0).astype(np.float32)

interpreter.set_tensor(input_tensor_index, test_image)
interpreter.invoke()

# output() and result are helpers defined earlier in the linked answer:
# output() returns the output tensor, result maps indices to labels.
digit = np.argmax(output()[0])
# print(digit)
prediction = result[digit]
```

As you can see, two crucial pre-processing steps are applied to the image once it is read with "imread()":

i) The image should be resized to the "input_height" and "input_width" of the input tensor that was used during training. In my case (Inception-v3) this was 299 for both "input_height" and "input_width". (Read the model's documentation for this value, or look for the variable in the script you used to train or retrain the model.)
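Rather than hard-coding 299, you can read the expected size out of the TFLite model itself via the interpreter's input details. A minimal sketch, where the dict below is a hypothetical stand-in for what `tf.lite.Interpreter.get_input_details()[0]` returns for an Inception-v3 model:

```python
import numpy as np

# Stand-in for a real interpreter; on a real model you would call:
#   input_details = interpreter.get_input_details()[0]
# The "shape" array is assumed to use the usual TFLite image layout
# [batch, height, width, channels].
input_details = {"shape": np.array([1, 299, 299, 3])}

_, input_height, input_width, _ = input_details["shape"]
print(input_height, input_width)  # 299 299
```

This keeps the resize target in sync with the model even if you swap in a different architecture.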

ii) The next command in the above code is:

```python
test_image = np.expand_dims(test_image / 255, axis=0).astype(np.float32)
```

I got this from the general formula in the model code:

```python
test_image = np.expand_dims((test_image - input_mean) / input_std, axis=0).astype(np.float32)
```

Reading the documentation revealed that for my architecture input_mean = 0 and input_std = 255.
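To make the effect of those two values concrete, here is a self-contained sketch of the normalization step, using a synthetic image in place of `cv2.imread()`'s output (the 128-valued array and the file-free setup are illustrative assumptions):

```python
import numpy as np

# Values from the model's documentation; input_mean = 0 and input_std = 255
# map uint8 pixels in [0, 255] into floats in [0.0, 1.0].
input_mean = 0.0
input_std = 255.0

# A synthetic 299x299 3-channel image standing in for cv2.imread()'s output.
test_image = np.full((299, 299, 3), 128, dtype=np.uint8)

# Normalize and add the batch dimension the interpreter expects.
batch = np.expand_dims((test_image - input_mean) / input_std, axis=0).astype(np.float32)
print(batch.shape)  # (1, 299, 299, 3)
```

If your architecture documents different values (e.g. input_mean = 127.5, input_std = 127.5 for some MobileNet variants), the same formula applies with those numbers substituted.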

Hope this helps.
