tensorflow2.0

Tensorflow2 Tensorboard is not working in jupyter (static ip)

Submitted by 余生颓废 on 2020-06-28 06:07:45
Question: I want to see the TensorFlow graph and weights using TensorBoard in a Jupyter notebook, but it is not working. I use Jupyter Notebook on a remote server.

    %load_ext tensorboard
    %tensorboard --logdir logs

It says the response takes too long at the static IP. How can I solve it?

Answer 1: Reading this GitHub issue, you can find that specifying the host manually when launching TensorBoard apparently does the trick. Instead of %tensorboard --logdir {logs_base_dir}, run %tensorboard --logdir {logs_base_dir} -
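A minimal sketch of the manual-host workaround; the answer above is truncated, so assuming the flag it refers to is TensorBoard's --bind_all (or --host) option is a guess, although both flags do exist in TensorBoard:

    # Run inside Jupyter on the remote server
    %load_ext tensorboard
    # --bind_all serves TensorBoard on all network interfaces, so a browser
    # on your local machine can reach it via the server's static IP
    %tensorboard --logdir logs --bind_all

You can then open http://<server-static-ip>:6006 in your local browser, assuming that port is open in the server's firewall.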

Tensorflow pad sequence feature column

Submitted by 只谈情不闲聊 on 2020-06-28 04:37:55
Question: How do you pad sequences in a feature column, and what does dimension mean in feature_column? I am using TensorFlow 2.0 and implementing an example of text summarization. I am pretty new to machine learning, deep learning, and TensorFlow. I came across feature_column and found it useful, as I think feature columns can be embedded in the processing pipeline of the model. In a classic scenario, where not using feature_column, I can pre-process the text, tokenize it, convert it into a sequence of numbers and
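For reference, the classic (non-feature_column) pipeline the question alludes to typically looks like this sketch; the texts variable and the maxlen value are illustrative assumptions:

    import tensorflow as tf

    texts = ["the cat sat on the mat", "a dog barked"]  # hypothetical corpus
    tokenizer = tf.keras.preprocessing.text.Tokenizer()
    tokenizer.fit_on_texts(texts)
    sequences = tokenizer.texts_to_sequences(texts)
    # pad every sequence to the same length so they can be batched
    padded = tf.keras.preprocessing.sequence.pad_sequences(
        sequences, maxlen=8, padding="post")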

Save model wrapped in Keras

Submitted by 偶尔善良 on 2020-06-27 14:57:20
Question: Sorry for my naive question, but I am trying to save my Keras model, in which I use the TFBertModel() function as a hidden layer. To do that I use the save() function provided by the tf.keras package. But I got this error:

    ---------------------------------------------------------------------------
    NotImplementedError                       Traceback (most recent call last)
    <ipython-input-13-3b315f7219da> in <module>()
    ----> 1 model.save('model_weights.h5')

    8 frames
    /tensorflow-2.1.0/python3.6/tensorflow_core/python
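A commonly suggested workaround for this NotImplementedError with models containing custom or subclassed layers (such as TFBertModel) is to avoid the HDF5 format; whether it resolves this exact case is an assumption:

    # Option 1: save in TensorFlow's SavedModel format instead of HDF5
    model.save('saved_model_dir', save_format='tf')

    # Option 2: persist only the weights and rebuild the architecture in code
    model.save_weights('model_weights.h5')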

TensorFlow 2.0: How to update tensors?

Submitted by 旧巷老猫 on 2020-06-27 12:59:05
Question: In TensorFlow 1.x, to update a tensor I would use tf.scatter_update to update only the relevant part of the tensor. How can we do the same thing in TF 2.0?

Answer 1: You can use tf.tensor_scatter_nd_update():

    import tensorflow as tf
    import numpy as np

    tensor = tf.convert_to_tensor(np.ones((2, 2)), dtype=tf.float32)
    indices = tf.constant([[0, 0]])
    updates = tf.constant([0.0])
    tf.tensor_scatter_nd_update(tensor, indices, updates).numpy()
    # array([[0., 1.],
    #        [1., 1.]], dtype=float32)

Source: https:/

Import tensorflow module is slow in tensorflow 2

Submitted by 时间秒杀一切 on 2020-06-27 07:22:15
Question: Related: Import TensorFlow contrib module is slow in TensorFlow 1.2.1. Also: What can cause the TensorFlow import to be so slow? I am using an SSD and importing TensorFlow. I have a 4 GHz, 8-core PC with 16 GB RAM (Processor: AMD FX(tm)-8350 Eight-Core Processor, 4000 MHz, 4 Core(s), 8 Logical Processor(s)). TensorFlow takes 10-12 seconds to import. Is there any way to selectively import parts of TensorFlow? Would a RAM disk help? Is there any more work being done on stuff like this or: Slow to
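One partial mitigation (an assumption on my part, not from the original answers): defer the import so the 10-12 second cost is paid on first use rather than at program startup. This does not make the import itself faster, but it can keep scripts that only sometimes need TensorFlow responsive:

    def train_model(data):
        # Imported lazily: the ~10 s TensorFlow import cost is only paid
        # the first time this function is called, not at script startup.
        import tensorflow as tf
        model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
        model.compile(optimizer="adam", loss="mse")
        return model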

How to compute gradient of output wrt input in Tensorflow 2.0

Submitted by 早过忘川 on 2020-06-25 12:21:55
Question: I have a trained TensorFlow 2.0 model (from tf.keras.Sequential()) that takes an input layer with 26 columns (X) and produces an output layer with 1 column (Y). In TF 1.x I was able to calculate the gradient of the output with respect to the input with the following:

    model = load_model('mymodel.h5')
    sess = K.get_session()
    grad_func = tf.gradients(model.output, model.input)
    gradients = sess.run(grad_func, feed_dict={model.input: X})[0]

In TF2, when I try to run tf.gradients(), I get the error:
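The standard TF2 replacement for this pattern is tf.GradientTape; a minimal sketch (the dtype and the shape of X are assumptions based on the question's description of 26 input columns):

    import tensorflow as tf

    model = tf.keras.models.load_model('mymodel.h5')
    X_t = tf.convert_to_tensor(X, dtype=tf.float32)  # X: (n_samples, 26)
    with tf.GradientTape() as tape:
        tape.watch(X_t)            # watch a non-variable input tensor
        Y_pred = model(X_t)
    gradients = tape.gradient(Y_pred, X_t)  # dY/dX, shape (n_samples, 26)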

How do you apply layer normalization in an RNN using tf.keras?

Submitted by 谁都会走 on 2020-06-25 03:49:08
Question: I would like to apply layer normalization to a recurrent neural network using tf.keras. In TensorFlow 2.0, there is a LayerNormalization class in tf.layers.experimental, but it's unclear how to use it within a recurrent layer like LSTM at each time step (as it was designed to be used). Should I create a custom cell, or is there a simpler way? For example, applying dropout at each time step is as easy as setting the recurrent_dropout argument when creating an LSTM layer, but there is no
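One common approach (a sketch, not the original answer) is to wrap a plain RNN cell in a custom cell that applies tf.keras.layers.LayerNormalization before the activation, then hand that cell to tf.keras.layers.RNN:

    import tensorflow as tf

    class LNSimpleRNNCell(tf.keras.layers.Layer):
        def __init__(self, units, activation="tanh", **kwargs):
            super().__init__(**kwargs)
            self.state_size = units
            self.output_size = units
            # inner cell runs without activation; it is applied after the norm
            self.cell = tf.keras.layers.SimpleRNNCell(units, activation=None)
            self.layer_norm = tf.keras.layers.LayerNormalization()
            self.activation = tf.keras.activations.get(activation)

        def call(self, inputs, states):
            outputs, new_states = self.cell(inputs, states)
            norm_outputs = self.activation(self.layer_norm(outputs))
            return norm_outputs, [norm_outputs]

    # hypothetical usage: sequences of length 20 with 10 features each
    model = tf.keras.Sequential([
        tf.keras.layers.RNN(LNSimpleRNNCell(32), input_shape=(20, 10)),
        tf.keras.layers.Dense(1),
    ])

An LSTM variant requires normalizing inside the gate computations, which is more involved; TensorFlow Addons later shipped a ready-made tfa.rnn.LayerNormLSTMCell for exactly this purpose.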