TensorFlow Model Quantization 4 -- pb to tflite (uint8 quantization) Summary
1. Experiment environment: tensorflow-gpu 1.15 + CUDA 10.0

I have written about fp16 and int8 quantization before; see: 龟龟: Tensorflow模型量化实践2--量化自己训练的模型 (zhuanlan.zhihu.com). This time I noticed that uint8 quantization involves some extra parameter settings, so I decided to go through the whole process again from the beginning.

2. Model to be quantized:

An ssdlite_mobilenet_v2 model trained with the tensorflow-object-detection API and exported as frozen_inference_graph.pb.

3. Getting the input and output nodes

Parse frozen_inference_graph.pb to obtain its input and output node information. The code is as follows:

```python
""" code by zzg """
import os
import tensorflow as tf

os.environ["CUDA_VISIBLE_DEVICES"] = "0"
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

with tf.Session(config=config) as sess:
    with open('frozen_inference_graph_resnet.pb', 'rb') as f:
        # Load the serialized GraphDef and import it into the default graph
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')
    # Print every operation so the input and output node names can be identified
    for op in tf.get_default_graph().get_operations():
        print(op.name)
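```

For a detection graph, the full op list printed above can be very long. The helper below is a minimal sketch that narrows the search (the function name `list_candidate_io_nodes` and the reuse of frozen_inference_graph_resnet.pb are my own assumptions, not from the original post): Placeholder ops are usually the graph inputs, and nodes that no other node consumes are usually the outputs.

```python
import tensorflow as tf

def list_candidate_io_nodes(pb_path):
    """Print likely input/output node names of a frozen graph (hypothetical helper)."""
    graph_def = tf.GraphDef()
    with open(pb_path, 'rb') as f:
        graph_def.ParseFromString(f.read())

    # Collect every node name that appears as an input to some other node
    consumed = set()
    for node in graph_def.node:
        for inp in node.input:
            # Strip control-dependency prefix (^name) and port suffix (name:0)
            consumed.add(inp.lstrip('^').split(':')[0])

    inputs = [n.name for n in graph_def.node if n.op == 'Placeholder']
    outputs = [n.name for n in graph_def.node if n.name not in consumed]

    print('candidate inputs :', inputs)
    print('candidate outputs:', outputs)

list_candidate_io_nodes('frozen_inference_graph_resnet.pb')
```

For a frozen graph exported with the object detection API, the input is typically `image_tensor` and the outputs are typically `detection_boxes`, `detection_scores`, `detection_classes`, and `num_detections`, but it is worth confirming this against the printed list.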