tensorflow-serving

RaggedTensor request to TensorFlow Serving fails

旧时模样 submitted on 2021-02-15 02:46:22
Question: I've created a TensorFlow model that uses RaggedTensors. The model works fine, and when calling model.predict I get the expected results. input = tf.ragged.constant([[[-0.9984272718429565, -0.9422321319580078, -0.27657580375671387, -3.185823678970337, -0.6360141634941101, -1.6579184532165527, -1.9000954627990723, -0.49169546365737915, -0.6758883595466614, -0.6677696704864502, -0.532067060470581], [-0.9984272718429565, -0.9421600103378296, 2.2048349380493164, -1.273996114730835, -0
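A minimal sketch (not the asker's model) of the local-prediction path described above, assuming a Keras model with a single ragged input of 11-dimensional feature vectors; the layer choice and values are illustrative only:

import tensorflow as tf

# Hypothetical model with a ragged input of shape (batch, None, 11), matching the
# shape of the tf.ragged.constant shown in the question.
inp = tf.keras.Input(shape=(None, 11), ragged=True, name="args_0")
x = tf.keras.layers.LSTM(4)(inp)          # any layer that consumes ragged sequences
out = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inp, out)

# Local prediction with a ragged batch works, as the question reports; it is the
# serving request (not shown here) that fails.
ragged_input = tf.ragged.constant([[[0.0] * 11, [0.1] * 11],
                                   [[0.2] * 11]], ragged_rank=1)
print(model.predict(ragged_input))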

TFX Pipeline Error While Executing TFMA: AttributeError: 'NoneType' object has no attribute 'ToBatchTensors'

本秂侑毒 submitted on 2021-02-11 12:21:02
Question: Basically I only reused code from the iris utils and iris pipeline examples, with a minor change to the serving input: def _get_serve_tf_examples_fn(model, tf_transform_output): model.tft_layer = tf_transform_output.transform_features_layer() feature_spec = tf_transform_output.raw_feature_spec() print(feature_spec) feature_spec.pop(_LABEL_KEY) @tf.function def serve_tf_examples_fn(*args): parsed_features = {} for arg in args: parsed_features[arg.name.split(":")[0]] = arg print(parsed_features) transformed
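For reference, a sketch of the standard _get_serve_tf_examples_fn pattern from the TFX iris/penguin utils that the question says it modified (the asker's variant replaces the serialized-example argument with *args); _LABEL_KEY comes from that example, not from the truncated snippet above:

import tensorflow as tf

def _get_serve_tf_examples_fn(model, tf_transform_output):
    """Returns a function that parses serialized tf.Example records and applies TFT."""
    model.tft_layer = tf_transform_output.transform_features_layer()

    @tf.function
    def serve_tf_examples_fn(serialized_tf_examples):
        feature_spec = tf_transform_output.raw_feature_spec()
        feature_spec.pop(_LABEL_KEY)  # the label is not provided at serving time
        parsed_features = tf.io.parse_example(serialized_tf_examples, feature_spec)
        transformed_features = model.tft_layer(parsed_features)
        return model(transformed_features)

    return serve_tf_examples_fn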

TensorFlow Serving custom GPU op cannot find dependency when compiling

十年热恋 submitted on 2021-02-10 22:18:09
Question: I followed the guides on making a custom GPU op for TensorFlow and was able to build the shared library. For tensorflow-serving I adapted the required paths, but I get an error when building: ERROR: /home/g360/Documents/eduardss/serving/tensorflow_serving/custom_ops/CUSTOM_OP/BUILD:32:1: undeclared inclusion(s) in rule '//tensorflow_serving/custom_ops/CUSTOM_OP:CUSTOM_OP_ops_gpu': this rule is missing dependency declarations for the following files included by 'tensorflow_serving/custom_ops/CUSTOM_OP/cc/magic_op.cu
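Not a verified fix, but Bazel's "undeclared inclusion" error for a .cu file usually means the CUDA headers are not declared as dependencies of the rule. A hypothetical BUILD (Starlark) sketch of the kind of stanza involved; target and file names are copied from the error message, the dependency labels are the ones TensorFlow normally provides, and the actual rule macro in your tree may differ:

# Hypothetical sketch only; cc_library shown for illustration.
cc_library(
    name = "CUSTOM_OP_ops_gpu",
    srcs = ["cc/magic_op.cu.cc"],
    deps = [
        "@org_tensorflow//tensorflow/core:framework_headers_lib",
        "@local_config_cuda//cuda:cuda_headers",  # declares the CUDA include paths
        "@local_config_cuda//cuda:cudart_static",
    ],
    alwayslink = 1,
)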

{ "error": "inputs is a plain value/list, but expecting an object as multiple input tensors required as per tensorinfo_map" }

懵懂的女人 submitted on 2021-02-09 07:00:36
Question: I am using TensorFlow Serving to deploy my model. My tensorinfo map is: saved_model_cli show --dir /export/1/ --tag_set serve --signature_def serving_default The given SavedModel SignatureDef contains the following input(s): inputs['length_0'] tensor_info: dtype: DT_INT32 shape: (-1) name: serving_default_length_0:0 inputs['length_1'] tensor_info: dtype: DT_INT32 shape: (-1) name: serving_default_length_1:0 inputs['length_2'] tensor_info: dtype: DT_INT32 shape: (-1) name: serving_default
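The error text itself points at the fix: with multiple named inputs, the REST request must map each input name to a value rather than sending a bare list. A sketch of what such a request might look like for the signature above, using the TF Serving REST API's columnar "inputs" format; the values, model name, and port are placeholders:

import json
import requests  # assumes the model is served via the TF Serving REST API

# Each key must match an input name from the SignatureDef shown by saved_model_cli.
payload = {
    "signature_name": "serving_default",
    "inputs": {                # an object, not a plain value/list
        "length_0": [3],       # DT_INT32, shape (-1): one value per example
        "length_1": [5],
        "length_2": [2],
        # ... plus the remaining inputs in the (truncated) tensorinfo map
    },
}

# Hypothetical model name and port; adjust to your deployment.
resp = requests.post("http://localhost:8501/v1/models/my_model:predict",
                     data=json.dumps(payload))
print(resp.json())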

How can I convert an ONNX model to a TensorFlow SavedModel?

感情迁移 submitted on 2021-02-07 06:56:43
Question: I am trying to use tf-serving to deploy my torch model. I have exported my torch model to ONNX. How can I generate the pb model for tf-serving? Answer 1: Use the onnx/onnx-tensorflow converter tool as a TensorFlow backend for ONNX. Install onnx-tensorflow: pip install onnx-tf Convert using the command line tool: onnx-tf convert -t tf -i /path/to/input.onnx -o /path/to/output.pb Alternatively, you can convert through the Python API. import onnx from onnx_tf.backend import prepare onnx_model =
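The Python-API snippet in the answer is cut off above; a sketch of the usual onnx-tf flow it appears to be heading toward, based on the onnx-tensorflow documentation (paths are the placeholders already used in the CLI example):

import onnx
from onnx_tf.backend import prepare

onnx_model = onnx.load("/path/to/input.onnx")   # load the exported ONNX graph
tf_rep = prepare(onnx_model)                    # wrap it in a TensorFlow representation
tf_rep.export_graph("/path/to/output.pb")       # write the TensorFlow model

Note that, depending on the onnx-tf version, export_graph writes either a frozen .pb graph or a SavedModel directory; TF Serving expects the SavedModel layout.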

Tensor("args_0:0", shape=(28, 28, 1), dtype=float32)

我是研究僧i submitted on 2021-01-29 13:20:15
Question: I was trying to execute the code below in Google Colab for learning purposes. I got this message when I executed the following code: Tensor("args_0:0", shape=(28, 28, 1), dtype=float32) def normalize(images, labels): print(images) images = tf.cast(images, tf.float32) print(images) images /= 255 print(images) return images, labels I am trying to understand what this message means, but I am not able to. I tried searching the web, but couldn't find many resources. Can anyone say what
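For context, a small runnable sketch of why that line appears: when normalize is traced as part of a tf.data map, Python's print() runs once at trace time and shows the symbolic placeholder Tensor("args_0:0", ...), while tf.print shows actual values at runtime. The dataset below is a stand-in, not the asker's:

import tensorflow as tf

def normalize(images, labels):
    # print() runs only while the function is traced into a graph, so it shows a
    # symbolic Tensor such as Tensor("args_0:0", shape=(28, 28, 1), dtype=float32).
    print(images)
    tf.print(tf.shape(images))  # tf.print executes at runtime and shows real values
    images = tf.cast(images, tf.float32) / 255.0
    return images, labels

# Stand-in dataset with the same per-element shape as in the question.
ds = tf.data.Dataset.from_tensor_slices(
    (tf.zeros([4, 28, 28, 1], tf.float32), tf.zeros([4], tf.int64))
).map(normalize)

for img, lbl in ds.take(1):
    print(img.shape, lbl.numpy())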

How to input multiple features for TensorFlow model inference

和自甴很熟 submitted on 2021-01-29 12:59:09
Question: I'm trying to test model serving. I'm following this example: "https://www.tensorflow.org/beta/guide/saved_model". The example works, but in my case I have multiple input features. loaded = tf.saved_model.load(export_path) infer = loaded.signatures["serving_default"] print(infer.structured_input_signature) => ((), {'input1': TensorSpec(shape=(None, 1), dtype=tf.int32, name='input1'), 'input2': TensorSpec(shape=(None, 1), dtype=tf.int32, name='input2')}) In the example, for a single input
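A sketch of how a multi-input serving_default signature is usually called from Python, using the two input names shown in the printed structured_input_signature; export_path and the input values are placeholders:

import tensorflow as tf

export_path = "path/to/saved_model"  # placeholder, as in the question
loaded = tf.saved_model.load(export_path)
infer = loaded.signatures["serving_default"]

# Concrete signature functions take their inputs as keyword arguments keyed by the
# names in structured_input_signature; shapes here are (batch, 1) int32.
result = infer(input1=tf.constant([[1]], dtype=tf.int32),
               input2=tf.constant([[2]], dtype=tf.int32))
print(result)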