Serving Keras Models With TensorFlow Serving

刺人心 2020-12-28 23:49

Right now we are successfully able to serve models using TensorFlow Serving. We have used the following method to export the model and host it with TensorFlow Serving.

3 Answers
  •  Happy的楠姐
    2020-12-29 00:29

    I have recently written this blog post, which explains how to save a Keras model and serve it with TensorFlow Serving.

    TL;DR: Saving a pretrained InceptionV3 model:

    import os
    import tensorflow as tf
    from tensorflow import keras  # TF 1.x-style export (simple_save / get_session)
    
    # Load a pretrained InceptionV3 model with ImageNet weights
    inception_model = keras.applications.inception_v3.InceptionV3(weights='imagenet')
    
    # Define a destination path for the model
    MODEL_EXPORT_DIR = '/tmp/inception_v3'
    MODEL_VERSION = 1
    MODEL_EXPORT_PATH = os.path.join(MODEL_EXPORT_DIR, str(MODEL_VERSION))
    
    # We'll need to create an input mapping, and name each of the input tensors.
    # In the inception_v3 Keras model, there is only a single input and we'll name it 'image'
    input_names = ['image']
    name_to_input = {name: t_input for name, t_input in zip(input_names, inception_model.inputs)}
    
    # Save the model to the MODEL_EXPORT_PATH
    # Note: via the 'name_to_input' mapping, the names defined here will also be used when querying the service later
    tf.saved_model.simple_save(
        keras.backend.get_session(),
        MODEL_EXPORT_PATH,
        inputs=name_to_input,
        outputs={t.name: t for t in inception_model.outputs})
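
    The exported SavedModel can be sanity-checked before serving it. The following is a minimal sketch, not part of the original post, that loads the model back with the TF 1.x loader API and prints the default serving signature so you can confirm the 'image' input name:

    import tensorflow as tf

    # Load the SavedModel with the 'serve' tag (the tag simple_save writes)
    # and print the default serving signature to see the input/output names.
    with tf.Session(graph=tf.Graph()) as sess:
        meta_graph = tf.saved_model.loader.load(sess, ['serve'], MODEL_EXPORT_PATH)
        print(meta_graph.signature_def['serving_default'])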
    

    Then start a TensorFlow Serving Docker container:

    1. Copy the saved model to the host directory that will be mounted into the container (source=/tmp/inception_v3 in this example).

    2. Run the Docker container:

    docker run -d -p 8501:8501 --name keras_inception_v3 \
        --mount type=bind,source=/tmp/inception_v3,target=/models/inception_v3 \
        -e MODEL_NAME=inception_v3 -t tensorflow/serving
    
    3. Verify that there is network access to the TensorFlow Serving service (a sample client request is sketched below). To get the container's local Docker IP (172.*.*.*) for testing, run:
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' keras_inception_v3
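
    Once the container is up, the model can be queried over TensorFlow Serving's REST API on port 8501. The sketch below is illustrative rather than part of the original answer; it assumes the service is reachable at localhost:8501 and uses a hypothetical local file 'cat.jpg'. The 'image' key in the request must match the input name chosen at export time.

    import json
    import numpy as np
    import requests
    from tensorflow import keras

    # Preprocess an image the way InceptionV3 expects: 299x299, scaled to [-1, 1]
    img = keras.preprocessing.image.load_img('cat.jpg', target_size=(299, 299))
    x = keras.applications.inception_v3.preprocess_input(
        keras.preprocessing.image.img_to_array(img))

    # The 'image' key matches the input name defined in 'name_to_input' at export time
    payload = {'instances': [{'image': x.tolist()}]}
    response = requests.post(
        'http://localhost:8501/v1/models/inception_v3:predict',
        data=json.dumps(payload))

    # Decode the returned probabilities into human-readable ImageNet labels
    predictions = np.array(response.json()['predictions'])
    print(keras.applications.inception_v3.decode_predictions(predictions, top=3))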
    
