TensorFlow 1 Session.run is taking too much time to embed a sentence using Universal Sentence Encoder

Asked by Deadly on 2020-07-23 06:35:37

Question


Using TensorFlow with a Flask REST API.

How can I reduce the time taken by session.run?

I am using TF 1/2 inside the REST API; instead of using TensorFlow Serving, I run the model directly on my server.

I have tried TensorFlow 1 and 2.

TensorFlow 1 takes too much time.

TensorFlow 2 does not even return the vectors for the text.

In TensorFlow 1, initialization takes 2-4 seconds and session.run takes 5-8 seconds, and the time keeps increasing as I keep sending requests.

TensorFlow 1

import time

import tensorflow.compat.v1 as tfo
import tensorflow_hub as hub
tfo.disable_eager_execution()

module_url = "https://tfhub.dev/google/universal-sentence-encoder-qa/3"
# Import the Universal Sentence Encoder's TF Hub module
embed = hub.Module(module_url)

def convert_text_to_vector(text):
    # Compute a representation for each message, showing various lengths supported.
    try:
        #text = "qwerty" or ["qwerty"]
        if isinstance(text, str):
            text = [text]
        with tfo.Session() as session:
            t_time = time.time()
            session.run([tfo.global_variables_initializer(), tfo.tables_initializer()])
            m_time = time.time()
            message_embeddings = session.run(embed(text))
            vector_array = message_embeddings.tolist()[0]
        return vector_array
    except Exception as err:
        raise Exception(str(err))

TensorFlow 2

It gets stuck at vector_array = embedding_fn(text).

import tensorflow as tf
import tensorflow_hub as hub
module_url = "https://tfhub.dev/google/universal-sentence-encoder-qa/3"
embedding_fn = hub.load(module_url)

@tf.function
def convert_text_to_vector(text):
    try:
        #text = ["qwerty"]
        vector_array = embedding_fn(text)
        return vector_array
    except Exception as err:
        raise Exception(str(err))

Answer 1:


For the TensorFlow 2 version I made a few corrections. Basically, I followed the example from the Universal Sentence Encoder module that you linked.

import tensorflow as tf
import tensorflow_hub as hub
import numpy as np
module_url = "https://tfhub.dev/google/universal-sentence-encoder-qa/3"
embedding_fn = hub.load(module_url)

@tf.function
def convert_text_to_vector(text):
  try:
      vector_array = embedding_fn.signatures['question_encoder'](
          tf.constant(text))
      return vector_array['outputs']
  except Exception as err:
      raise Exception(str(err))

### run the function
vector = convert_text_to_vector(['is this helpful ?'])
print(vector.shape)  # shape is an attribute, not a method



Answer 2:


from flask import Flask, jsonify, request
from flask_restplus import Api, Resource
from werkzeug.utils import cached_property

import tensorflow as tf
import tensorflow_hub as hub
module_url = "https://tfhub.dev/google/universal-sentence-encoder-qa/3"
embedding_fn = hub.load(module_url)


app = Flask(__name__)

@app.route('/embedding', methods=['POST'])
def entry_point():
    # Flask view functions take no positional args; read the JSON body instead.
    args = request.get_json(force=True) or {}
    vectors = []
    if args.get("text"):
        text_term = args.get("text")
        if isinstance(text_term, str):
            text_term = [text_term]
        # Convert the returned tensor to a plain list so it is JSON-serializable.
        vectors = convert_text_to_vector(text_term).numpy().tolist()
    return jsonify(vectors)



@tf.function
def convert_text_to_vector(text):
    try:
        vector_array = embedding_fn.signatures['question_encoder'](tf.constant(text))
        return vector_array['outputs']
    except Exception as err:
        raise Exception(str(err))


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, debug=True)

"""
 ----- Requirements.txt ----
flask-restplus==0.13.0
Flask==1.1.1
Werkzeug==0.15.5
tensorboard==2.2.2
tensorboard-plugin-wit==1.6.0.post3
tensorflow==2.2.0
tensorflow-estimator==2.2.0
tensorflow-hub==0.8.0
tensorflow-text==2.2.1
"""


Source: https://stackoverflow.com/questions/62890024/tensorflow-1-session-run-is-taking-too-much-time-to-embed-sentence-using-univers
