Training and Predicting with instance keys

甜味超标 2020-12-08 16:59

I am able to train my model and use ML Engine for prediction, but my results don't include any identifying information. This works fine when submitting one row at a time for prediction, but with many rows there is no way to match the returned predictions back to the original inputs.

2 Answers
  • 2020-12-08 17:23

    UPDATE: In version 1.3 the contrib estimators (tf.contrib.learn.DNNClassifier, for example) were changed to inherit from the core estimator class tf.estimator.Estimator, which, unlike its predecessor, hides the model function as a private class member, so you'll need to replace estimator.model_fn in the solution below with estimator._model_fn.

    Josh's answer points you to the Flowers example, which is a good solution if you want to use a custom estimator. If you want to stick with a canned estimator (e.g. tf.contrib.learn.DNNClassifier), you can wrap it in a custom estimator that adds support for keys. (Note: I think it's likely that canned estimators will gain key support when they move into core.)

    KEY = 'key'
    def key_model_fn_gen(estimator):
        def _model_fn(features, labels, mode, params):
            # Pull the key out of the features so the wrapped estimator never sees it.
            key = features.pop(KEY, None)
            # On TF 1.3+ use estimator._model_fn instead (see UPDATE above).
            model_fn_ops = estimator.model_fn(
                features=features, labels=labels, mode=mode, params=params)
            # key is a Tensor (or None), so test against None rather than
            # relying on truthiness.
            if key is not None:
                model_fn_ops.predictions[KEY] = key
                # This line makes it so the exported SavedModel will also
                # require (and return) a key.
                model_fn_ops.output_alternatives[None][1][KEY] = key
            return model_fn_ops
        return _model_fn
    
    my_key_estimator = tf.contrib.learn.Estimator(
        model_fn=key_model_fn_gen(
            tf.contrib.learn.DNNClassifier(model_dir=model_dir...)
        ),
        model_dir=model_dir
    )
    

    my_key_estimator can then be used exactly like your DNNClassifier, except that it will expect a feature named 'key' from its input_fns (prediction, evaluation, and training).
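
    For illustration, here is a minimal sketch of a training input_fn that carries the key along as an extra feature (the 'age' feature and the constant data are hypothetical, not part of the original answer):

    def train_input_fn():
        features = {
            'age': tf.constant([25, 32], dtype=tf.int64),  # hypothetical feature
            KEY: tf.constant([1, 2], dtype=tf.int64),      # per-row key rides along
        }
        labels = tf.constant([0, 1], dtype=tf.int64)
        return features, labels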

    EDIT2: You will also need to add the corresponding input tensor to the prediction input function of your choice. For example, a new JSON serving input fn would look like:

    def json_serving_input_fn():
        inputs = {}  # ... build input_dict as before
        # tf.placeholder takes the dtype first; the shape is a keyword argument.
        inputs[KEY] = tf.placeholder(tf.int64, shape=[None])
        features = {}  # ... build feature dict from input_dict as before
        return tf.contrib.learn.InputFnOps(features, None, inputs)
    

    (This differs slightly between 1.2 and 1.3: in 1.3, tf.contrib.learn.InputFnOps is replaced with tf.estimator.export.ServingInputReceiver, and padding tensors to rank 2 is no longer necessary.)
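
    For reference, a minimal sketch of the 1.3 equivalent, assuming hypothetical model inputs (only the key placeholder comes from the original answer):

    def json_serving_input_fn():
        # Hypothetical model inputs; replace with your own placeholders.
        inputs = {
            'age': tf.placeholder(tf.int64, shape=[None]),
            KEY: tf.placeholder(tf.int64, shape=[None]),
        }
        # In 1.3 the placeholders can be passed through as features directly.
        return tf.estimator.export.ServingInputReceiver(inputs, inputs)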

    Then ML Engine will send a tensor named "key" with your prediction request, which will be passed to your model and carried through alongside your predictions.

    EDIT3: Modified key_model_fn_gen to support ignoring missing key values. EDIT4: Added key for prediction

  • 2020-12-08 17:26

    Great question. The Cloud ML Engine flowers sample does this by using the tf.identity operation to pass a string straight through from input to output. Here are the relevant lines during graph construction.

    keys_placeholder = tf.placeholder(tf.string, shape=[None])
    inputs = {
        'key': keys_placeholder,
        'image_bytes': tensors.input_jpeg
    }
    
    # To extract the id, we need to add the identity function.
    keys = tf.identity(keys_placeholder)
    outputs = {
        'key': keys,
        'prediction': tensors.predictions[0],
        'scores': tensors.predictions[1]
    }
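
    Those inputs and outputs dicts then become the exported serving signature. As a rough sketch of how that wiring could look with the TF 1.x SavedModel APIs (the export_dir and sess variables are assumed here; the flowers sample's actual export code differs in its details):

    # Build a serving signature from the inputs/outputs dicts above.
    signature = tf.saved_model.signature_def_utils.predict_signature_def(
        inputs=inputs, outputs=outputs)
    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
    builder.add_meta_graph_and_variables(
        sess,
        tags=[tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                signature,
        })
    builder.save()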
    

    For batch prediction you need to insert "key": "some_key_value" into each of your instance records (see the sketch after the JSON example below). For online prediction you would query the above graph with a JSON request like:

    {"instances": [
        {"key": "first_key", "image_bytes": {"b64": ...}},
        {"key": "second_key", "image_bytes": {"b64": ...}}
    ]}
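
    For batch prediction, the same records go in a newline-delimited JSON input file, one instance per line (the b64 payloads are elided here):

    {"key": "first_key", "image_bytes": {"b64": ...}}
    {"key": "second_key", "image_bytes": {"b64": ...}}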
    