Base64 images with Keras and Google Cloud ML

front-end · 2 answers · 1719 views

Asked by 上瘾入骨i on 2020-12-17 01:37

I'm predicting image classes using Keras. It works in Google Cloud ML (GCML), but for efficiency I need to change it to pass base64 strings instead of JSON arrays. Related documentation

2 Answers

    Answered by 独厮守ぢ on 2020-12-17 02:23

    First of all, I use tf.keras, but that should not be a big problem. Here is an example of how you can read a base64-encoded JPEG:

    import tensorflow as tf
    from tensorflow.keras.layers import Input, Lambda

    def preprocess_and_decode(img_str, new_shape=[299, 299]):
        # decode_base64 expects web-safe (URL-safe) base64 input
        img = tf.io.decode_base64(img_str)
        img = tf.image.decode_jpeg(img, channels=3)
        # resize_images is the TF 1.x API; in TF 2.x use tf.image.resize
        img = tf.image.resize_images(img, new_shape, method=tf.image.ResizeMethod.BILINEAR, align_corners=False)
        # if you need to squeeze your input range to [0,1] or [-1,1], do it here
        return img

    # the model takes a batch of base64 strings, one per example
    InputLayer = Input(shape=(1,), dtype="string")
    OutputLayer = Lambda(lambda img: tf.map_fn(lambda im: preprocess_and_decode(im[0]), img, dtype="float32"))(InputLayer)
    base64_model = tf.keras.Model(InputLayer, OutputLayer)
    

    The code above creates a model that takes a JPEG of any size, resizes it to 299x299, and returns it as a 299x299x3 tensor. This model can be exported directly to a saved_model and used for Cloud ML Engine serving. It is a little bit silly, since the only thing it does is convert base64 to a tensor.
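    One detail worth checking on the client side: tf.io.decode_base64 expects *web-safe* base64, i.e. the URL-safe alphabet with `-` and `_` in place of `+` and `/`. A minimal sketch of preparing the string with only the standard library (the function name `to_web_safe_b64` is mine, just for illustration):

```python
import base64

def to_web_safe_b64(raw_bytes):
    # Standard base64 uses "+" and "/"; tf.io.decode_base64 wants the
    # URL-safe alphabet ("-" and "_") instead.
    return base64.urlsafe_b64encode(raw_bytes).decode("ascii")

# Example: the same bytes encoded in both alphabets
payload = b"\xfb\xff\xfe"                      # bytes whose encoding differs
standard = base64.b64encode(payload).decode()  # "+//+" — uses "+" and "/"
web_safe = to_web_safe_b64(payload)            # "-__-" — uses "-" and "_"
```

    If you send standard base64 instead, decode_base64 will fail on any string containing `+` or `/`.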

    If you need to feed the output of this model into the input of an existing trained and compiled model (e.g. inception_v3), you have to do the following:

    base64_input = base64_model.input
    final_output = inception_v3(base64_model.output)
    new_model = tf.keras.Model(base64_input,final_output)
    

    This new_model can be saved. It takes a base64-encoded JPEG and returns the classes identified by the inception_v3 part.
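    For completeness, here is a sketch of what the client-side request body could look like. This assumes the exported signature takes one plain string per instance, as in the model above; since base64 decoding happens *inside* the graph, each instance is just the web-safe base64 string, not the `{"b64": ...}` wrapper (which Cloud ML applies only to input aliases ending in `_bytes`). The helper name `make_instance` is hypothetical:

```python
import base64
import json

def make_instance(jpeg_bytes):
    # The model decodes base64 in-graph, so the instance is simply the
    # web-safe base64 string of the raw JPEG bytes.
    return base64.urlsafe_b64encode(jpeg_bytes).decode("ascii")

# Hypothetical request body for the online prediction REST API;
# b"\xff\xd8\xff" is just the JPEG magic-number prefix as a stand-in.
body = json.dumps({"instances": [make_instance(b"\xff\xd8\xff")]})
```

    You would then POST this body to the model's predict endpoint (or write the instances newline-delimited to a file for `gcloud ml-engine predict --json-instances`).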
