tensorflow

Gradients do not exist for variables?

Submitted by 久未见 on 2020-12-13 12:15:01
Question: How can I fix the error shown below? The input and output shapes are supposed to consist of 1 or -1. Here is my code:

    #Data Input
    main_input=Input(shape=(2*N_c),name='main_input')
    encoding_x=Dense(2*N_c,activation='relu',name='input_layer')(main_input)
    #Channel Input
    # channel_input=Input(shape=(4,),dtype='complex64',name='channel_input')
    channel_input = Lambda(set_channel2)(encoding_x)
    padded_channel = Lambda(z_padding,name='ppading_layerddddd')(channel_input)
    ffted_channel = Lambda(ffting,name
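The preview above is cut off, but warnings like "Gradients do not exist for variables" generally mean that some operation between the loss and those variables severed the gradient path: tf.stop_gradient, casts to integer or complex dtypes, or other non-differentiable ops inside Lambda layers are typical culprits. A minimal, hypothetical sketch (not the asker's model) showing how a cut path yields a None gradient:

```python
import tensorflow as tf

# x feeds two paths: one differentiable, one cut by tf.stop_gradient
# (standing in for a non-differentiable op inside a Lambda layer).
x = tf.Variable(2.0)
with tf.GradientTape(persistent=True) as tape:
    y = tf.square(x)                    # gradient flows back to x
    z = tf.square(tf.stop_gradient(x))  # gradient path severed here

gy = tape.gradient(y, x)  # tf.Tensor(4.0)
gz = tape.gradient(z, x)  # None: no gradient exists for x on this path
del tape
```

When Keras finds such a None gradient for a trainable variable, it emits the "gradients do not exist" warning, so the usual fix is to rewrite the offending op in a differentiable form.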

How to freeze a device-specific saved model?

Submitted by 霸气de小男生 on 2020-12-13 09:41:20
Question: I need to freeze saved models for serving, but some saved models are device-specific. How can I solve this?

    with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
        sess.run(tf.tables_initializer())
        tf.saved_model.loader.load(sess, [tag_constants.SERVING], saved_model_dir)
        inference_graph_def = tf.get_default_graph().as_graph_def()
        for node in inference_graph_def.node:
            node.device = ''
        frozen_graph_path = os.path.join(frozen_dir, 'frozen_inference_graph.pb')
        output_keys = ['ToInt64
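The core of the usual fix is exactly what the snippet starts to do: clear every node's device field before freezing, so the graph no longer insists on the device it was exported from. A small self-contained sketch (using a toy graph in place of the loaded SavedModel):

```python
import tensorflow as tf

tf1 = tf.compat.v1

# Toy stand-in for the loaded SavedModel graph, with a pinned device.
g = tf1.Graph()
with g.as_default():
    with tf1.device('/device:GPU:0'):
        x = tf1.placeholder(tf.float32, [None, 2], name='x')
        y = tf1.identity(x, name='y')

graph_def = g.as_graph_def()
assert any(node.device for node in graph_def.node)  # device pins present

# Clear the pins so the frozen graph can be loaded on any machine;
# combined with allow_soft_placement=True this makes serving portable.
for node in graph_def.node:
    node.device = ''
```

After this loop the GraphDef can be passed to convert_variables_to_constants and written out as usual.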

How to see the tensor value of a layer output in Keras

Submitted by 点点圈 on 2020-12-13 09:27:21
Question: I have a Seq2Seq model, and I am interested in printing out the matrix value of the encoder's output on each iteration. For example, if the matrix in the encoder has dimension (?, 20), there are 5 epochs, and each epoch has 10 iterations, I would like to see 10 matrices of dimension (?, 20) per epoch. I have gone through several links, such as here, but they still do not print out the matrix values. With this code, as mentioned in the above link:

    import keras.backend as K
    k_value = K.print_tensor
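One approach that does yield concrete values (sketched here on a tiny hypothetical model rather than the asker's Seq2Seq) is to build a second Model that shares the inputs but outputs the intermediate layer; calling predict on it returns the encoder matrix as a plain NumPy array, which a callback could do after every batch:

```python
import numpy as np
from tensorflow import keras

# Tiny stand-in model; 'encoder_out' plays the role of the encoder output.
inp = keras.Input(shape=(4,))
hidden = keras.layers.Dense(20, name='encoder_out')(inp)
out = keras.layers.Dense(1)(hidden)
model = keras.Model(inp, out)

# Probe model: same input, but exposes the intermediate layer's output.
probe = keras.Model(inp, model.get_layer('encoder_out').output)

x = np.random.rand(3, 4).astype('float32')
enc_values = probe.predict(x, verbose=0)  # NumPy array of shape (3, 20)
print(enc_values.shape)
```

Unlike K.print_tensor, which only prints when the tensor is actually evaluated inside the graph, the probe model forces an evaluation and hands the values back directly.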

Why am I getting zero accuracy in my Keras binary classification model?

Submitted by 大兔子大兔子 on 2020-12-13 04:50:51
Question: I have a Keras Sequential model that takes its inputs from CSV files. When I run the model, its accuracy remains zero even after 20 epochs. I have gone through these two Stack Overflow threads (zero-accuracy-training and why-is-the-accuracy-for-my-keras-model-always-0), but nothing solved my problem. Since my model does binary classification, I think it should not behave like a regression model, which would make the accuracy metric ineffective. Here is the model:

    def preprocess(*fields):
        return tf.stack(fields[:-1]), tf
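Since the question is truncated, here is a hedged baseline on toy data showing the combination that makes 'accuracy' meaningful for binary classification: a single sigmoid output unit, binary_crossentropy loss, and 0/1 float labels. Accuracy stuck at exactly zero very often means one of those three is missing (e.g. a linear output head, or labels outside {0, 1}):

```python
import numpy as np
from tensorflow import keras

# Toy separable data; labels must be 0/1 for binary_crossentropy.
x = np.random.rand(200, 4).astype('float32')
y = (x.sum(axis=1) > 2.0).astype('float32')

model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(8, activation='relu'),
    # One sigmoid unit; a linear (regression-style) head here is a
    # classic cause of a permanently zero accuracy metric.
    keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
hist = model.fit(x, y, epochs=5, verbose=0)
print(hist.history['accuracy'][-1])
```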

Why is a ValueError thrown by keras.models.model_from_config() in the Keras-to-TensorFlow exporting example?

Submitted by 别等时光非礼了梦想. on 2020-12-13 04:20:37
Question: The Keras website has this article about exporting Keras models to core TensorFlow. However, the step new_model = model_from_config(config) throws an error:

    Traceback (most recent call last):
      File "/home/hal9000/tf_serving_experiments/sndbx.py", line 38, in <module>
        new_model = model_from_config(config)
      File "/home/hal9000/keras2env/local/lib/python2.7/site-packages/keras/models.py", line 304, in model_from_config
        return layer_module.deserialize(config, custom_objects=custom_objects)
      File "
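The traceback is cut off, but a frequent cause of this ValueError is passing model.get_config() (the bare config) to model_from_config, which expects a dict carrying both 'class_name' and 'config'. One workaround (shown on a hypothetical toy model) is to round-trip through JSON, which preserves the class name:

```python
from tensorflow import keras
from tensorflow.keras.models import model_from_json

model = keras.Sequential([keras.Input(shape=(3,)),
                          keras.layers.Dense(2)])

# to_json() embeds the model's class_name, so deserialization knows
# whether to rebuild a Sequential or a functional Model.
json_cfg = model.to_json()
new_model = model_from_json(json_cfg)
print(type(new_model).__name__)  # Sequential
```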

PyTorch equivalent features in TensorFlow?

Submitted by 跟風遠走 on 2020-12-13 03:37:48
Question: I was recently reading some PyTorch code and came across the loss.backward() and optimizer.step() functions. Are there any equivalents of these in TensorFlow/Keras?

Answer 1: The TensorFlow equivalent of loss.backward() is tf.GradientTape(). TensorFlow provides the tf.GradientTape API for automatic differentiation, i.e. computing the gradient of a computation with respect to its input variables. TensorFlow "records" all operations executed inside the context of a tf.GradientTape onto a "tape". TensorFlow then
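To make the correspondence concrete, a minimal sketch (toy loss, not from the original answer) pairing each PyTorch call with its TensorFlow counterpart:

```python
import tensorflow as tf

w = tf.Variable(3.0)
opt = tf.keras.optimizers.SGD(learning_rate=0.1)

with tf.GradientTape() as tape:
    loss = tf.square(w)               # loss = w^2

grads = tape.gradient(loss, [w])      # ~ loss.backward(): dloss/dw = 2w = 6.0
opt.apply_gradients(zip(grads, [w]))  # ~ optimizer.step(): w -= 0.1 * 6.0
print(float(w))                       # ~ 2.4
```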
