tensorflow2.0

'Error While Encoding with Hub.KerasLayer' while using TFF

依然范特西╮ submitted on 2020-07-23 06:51:24
Question: An error is generated while training a federated model that uses hub.KerasLayer. The error details and stack trace are given below. The complete code is available in the gist https://gist.github.com/aksingh2411/60796ee58c88e0c3f074c8909b17b5a1. Help and suggestions in this regard would be appreciated. Thanks.

from tensorflow import keras

def create_keras_model():
    encoder = hub.load("https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1")
    return tf.keras.models.Sequential([ hub…
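For context, here is a minimal sketch of the kind of setup being described (it is not the poster's gist and not a fix for the specific error): a hub.KerasLayer text embedding inside a Keras model, wrapped for federated training with tff.learning.from_keras_model. The input/label specs and layer sizes are assumptions for illustration only.

import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_federated as tff

EMBEDDING_URL = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1"

def create_keras_model():
    # The hub layer maps a batch of raw strings to 20-dim embeddings.
    return tf.keras.Sequential([
        hub.KerasLayer(EMBEDDING_URL, input_shape=[], dtype=tf.string, trainable=False),
        tf.keras.layers.Dense(16, activation='relu'),
        tf.keras.layers.Dense(1, activation='sigmoid'),
    ])

def model_fn():
    # TFF expects the Keras model to be constructed inside model_fn,
    # not captured from the enclosing scope.
    keras_model = create_keras_model()
    return tff.learning.from_keras_model(
        keras_model,
        input_spec=(tf.TensorSpec(shape=[None], dtype=tf.string),     # assumed feature spec
                    tf.TensorSpec(shape=[None, 1], dtype=tf.int32)),  # assumed label spec
        loss=tf.keras.losses.BinaryCrossentropy(),
        metrics=[tf.keras.metrics.BinaryAccuracy()])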

TensorFlow 1 Session.run is taking too much time to embed sentences using the Universal Sentence Encoder

Deadly submitted on 2020-07-23 06:35:37
Question: Using TensorFlow with a Flask REST API, how can I reduce the time spent in session.run? I am using TF 1/2 inside the REST API itself; instead of serving the model, I run it on my server. I have tried both TensorFlow 1 and 2. TensorFlow 1 takes too much time, and TensorFlow 2 does not even return the vectors for the text. With TensorFlow 1, initialising takes 2-4 seconds and session.run takes 5-8 seconds, and the time keeps increasing as I keep hitting the API with requests.

tensorflow 1
import tensorflow.compat.v1 as tfo…
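A pattern that avoids most of this cost (a sketch under assumptions: TF 2 with tensorflow_hub, the Universal Sentence Encoder v4 module URL, and a hypothetical "sentences" JSON field) is to load the encoder once at process start-up and reuse it for every request, instead of re-initialising a graph or session per call:

import tensorflow_hub as hub
from flask import Flask, request, jsonify

app = Flask(__name__)

# Loaded a single time when the server starts; this is the expensive step.
encoder = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

@app.route("/embed", methods=["POST"])
def embed():
    sentences = request.get_json()["sentences"]   # e.g. {"sentences": ["hello world"]}
    vectors = encoder(sentences)                  # fast once the model is resident in memory
    return jsonify({"embeddings": vectors.numpy().tolist()})

if __name__ == "__main__":
    app.run()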

Converting a list of unequally shaped arrays to Tensorflow 2 Dataset: ValueError: Can't convert non-rectangular Python sequence to Tensor

我们两清 submitted on 2020-07-22 14:14:12
Question: I have tokenized data in the form of a list of unequally shaped arrays:

array([array([1179, 6, 208, 2, 1625, 92, 9, 3870, 3, 2136, 435, 5, 2453, 2180, 44, 1, 226, 166, 3, 4409, 49, 6728, ... 10, 17, 1396, 106, 8002, 7968, 111, 33, 1130, 60, 181, 7988, 7974, 7970])], dtype=object)

with their respective targets:

Out[74]: array([0, 0, 0, ..., 0, 0, 1], dtype=object)

I'm trying to transform them into a padded tf.data.Dataset(), but it won't let me convert the unequal shapes to a tensor. I get…
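One way around this error is to go through a RaggedTensor, pad it to a rectangular tensor, and only then build the tf.data.Dataset. A minimal sketch follows; `sequences` and `targets` are small stand-ins for the arrays shown above:

import numpy as np
import tensorflow as tf

# Stand-ins for the variable-length token arrays and the labels from the question.
sequences = [np.array([1179, 6, 208, 2]), np.array([10, 17, 1396])]
targets = np.array([0, 1], dtype=np.int32)

# RaggedTensor accepts unequal row lengths; to_tensor() pads every row with zeros
# up to the length of the longest sequence.
ragged = tf.ragged.constant([s.tolist() for s in sequences])
padded = ragged.to_tensor(default_value=0)

dataset = tf.data.Dataset.from_tensor_slices((padded, targets)).batch(32)

An alternative is to keep the sequences variable-length and let the pipeline pad each batch on the fly with Dataset.padded_batch, which avoids padding everything to the single longest sequence in the corpus.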

tf.function input_signature for distributed dataset in tensorflow 2.0

∥☆過路亽.° submitted on 2020-07-22 09:32:12
Question: I am trying to build a distributed custom training loop in TensorFlow 2.0, but I can't figure out how to annotate the autograph tf.function signature in order to avoid retracing. I have tried DatasetSpec and various combinations of TensorSpec tuples, but I get all sorts of errors. My question: is it possible to specify a tf.function input signature that accepts batched distributed datasets? Minimal reproducing code:

import tensorflow as tf
from tensorflow import keras
import numpy as np…
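For reference, here is a sketch of the standard distributed custom-training pattern under assumptions (MirroredStrategy, a toy Dense model, random data): the tf.function wraps strategy.run and is traced only once per element structure and dtype, so an explicit input_signature is often unnecessary; on newer TF releases the distributed dataset also exposes element_spec, which can be passed as the signature if retracing must be ruled out entirely.

import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
GLOBAL_BATCH = 16

with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])
    optimizer = tf.keras.optimizers.SGD()
    loss_obj = tf.keras.losses.MeanSquaredError(
        reduction=tf.keras.losses.Reduction.NONE)

x = np.random.randn(64, 8).astype("float32")
y = np.random.randn(64, 1).astype("float32")
dist_dataset = strategy.experimental_distribute_dataset(
    tf.data.Dataset.from_tensor_slices((x, y)).batch(GLOBAL_BATCH))

def step_fn(inputs):
    features, labels = inputs
    with tf.GradientTape() as tape:
        per_example_loss = loss_obj(labels, model(features, training=True))
        loss = tf.nn.compute_average_loss(per_example_loss,
                                          global_batch_size=GLOBAL_BATCH)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

@tf.function  # on newer TF: @tf.function(input_signature=[dist_dataset.element_spec])
def distributed_train_step(inputs):
    # Note: strategy.run is called experimental_run_v2 on TF 2.0/2.1.
    per_replica_losses = strategy.run(step_fn, args=(inputs,))
    return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)

for batch in dist_dataset:
    distributed_train_step(batch)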

Heroku: tensorflow 2.2.1 too large for deployment

那年仲夏 submitted on 2020-07-22 03:54:51
Question: I'm trying to deploy a Keras project to Heroku, but pushing to the repository's master branch is problematic for me, as the following error is reported every time I try:

remote: -----> Compressing...
remote: ! Compiled slug size: 836M is too large (max is 500M).
remote: ! See: http://devcenter.heroku.com/articles/slug-size
remote:
remote: ! Push failed
remote: Verifying deploy...
remote:
remote: ! Push rejected to ...

I figured this is due to the tensorflow requirement being way too…
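A workaround that is often enough on its own (a sketch, assuming the project only needs CPU inference on Heroku) is to depend on the CPU-only TensorFlow wheel, which is several hundred megabytes smaller than the full package and usually brings the slug back under the 500M limit:

# requirements.txt (sketch): swap the full GPU-enabled package for the CPU-only build
# tensorflow==2.2.1    <- remove
tensorflow-cpu         # pin this to whatever 2.x release the project actually needs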

How to fix the “OperatorNotAllowedInGraphError” error in TensorFlow 2.0

允我心安 submitted on 2020-07-20 07:48:45
Question: I'm learning TensorFlow 2.0 from the official tutorials. I can understand the result of the code below:

def square_if_positive(x):
    return [i ** 2 if i > 0 else i for i in x]

square_if_positive(range(-5, 5))
# result [-5, -4, -3, -2, -1, 0, 1, 4, 9, 16]

But if I change the input to a tensor instead of plain Python values, like this:

def square_if_positive(x):
    return [i ** 2 if i > 0 else i for i in x]

square_if_positive(tf.range(-5, 5))

I get the error below!

OperatorNotAllowedInGraphError Traceback (most recent call…
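A vectorized version sidesteps the error. This sketch assumes the comprehension was being traced inside a tf.function (as in the autograph tutorial), which is what forbids Python iteration over a symbolic Tensor; tf.where expresses the same per-element logic without a Python loop:

import tensorflow as tf

@tf.function
def square_if_positive(x):
    # Element-wise: square where x > 0, otherwise keep x unchanged.
    return tf.where(x > 0, x ** 2, x)

print(square_if_positive(tf.range(-5, 5)))
# tf.Tensor([-5 -4 -3 -2 -1  0  1  4  9 16], shape=(10,), dtype=int32)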

Failed to load TensorBoard

倖福魔咒の submitted on 2020-07-16 10:40:19
Question: ERROR: Failed to launch TensorBoard (exited with 1). Contents of stderr:

Traceback (most recent call last):
  File "/home/arshad/anaconda3/bin/tensorboard", line 10, in <module>
    sys.exit(run_main())
  File "/home/arshad/anaconda3/lib/python3.7/site-packages/tensorboard/main.py", line 58, in run_main
    default.get_plugins() + default.get_dynamic_plugins(),
  File "/home/arshad/anaconda3/lib/python3.7/site-packages/tensorboard/default.py", line 110, in get_dynamic_plugins
    for entry_point in pkg_resources.iter…

How to get gradients in TF 2.2 Eager?

安稳与你 submitted on 2020-07-16 06:58:32
Question: model.total_loss has been deprecated in eager mode, so the code below no longer works. How else can I fetch gradients? Works in TF 2.1/2.0:

import tensorflow as tf
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras import backend as K

ipt = Input((16,))
out = Dense(16)(ipt)
model = Model(ipt, out)
model.compile('adam', 'mse')

x = y = np.random.randn(32, 16)
model.train_on_batch(x, y)

grad_tensors = model.optimizer.get_gradients…
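In eager TF 2.2 the usual replacement is tf.GradientTape. A minimal sketch against the same toy model follows; the explicit mean-squared-error call is assumed to mirror the compiled 'mse' loss:

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

ipt = Input((16,))
out = Dense(16)(ipt)
model = Model(ipt, out)
model.compile('adam', 'mse')

x = y = np.random.randn(32, 16).astype("float32")

with tf.GradientTape() as tape:
    preds = model(x, training=True)                    # forward pass recorded on the tape
    loss = tf.reduce_mean(tf.keras.losses.mse(y, preds))

grads = tape.gradient(loss, model.trainable_weights)
print([g.shape for g in grads])                        # one gradient per trainable weight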