TensorFlow 2.0

Replacing placeholders in TensorFlow v2

倖福魔咒の submitted on 2020-01-09 23:51:10
Question: For my project, I need to convert a directed graph into a TensorFlow implementation of the graph, as if it were a neural network. In TensorFlow v1 I could define all of my inputs as placeholders, generate the dataflow graph for the outputs with a breadth-first search of the graph, and then feed in my inputs using a feed_dict. However, TensorFlow v2.0 has done away with placeholders entirely. How would I make a tf.function for each graph that…
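
A minimal sketch of the TF2 replacement: the arguments of a tf.function play the role of placeholders, and calling the function with tensors replaces the feed_dict. The small graph below (w*x + b) is a hypothetical stand-in for the poster's generated dataflow graph:

```python
import tensorflow as tf

# In TF2, a tf.function's arguments play the role of TF1 placeholders.
@tf.function
def graph_fn(x, w, b):
    # Hypothetical dataflow graph: out = w * x + b
    return tf.add(tf.multiply(w, x), b)

# Instead of feeding a feed_dict, call the function with tensors
# (an input_signature can fix shapes/dtypes up front, as a placeholder would).
out = graph_fn(tf.constant([1.0, 2.0]), tf.constant(3.0), tf.constant(0.5))
print(out.numpy())  # [3.5 6.5]
```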

How do I find the derivative of a custom model in Keras? [duplicate]

蹲街弑〆低调 submitted on 2020-01-06 08:06:16
Question: This question already has an answer here: How do I get the gradient of a keras model with respect to its inputs? (1 answer). Closed yesterday. I have a custom model that takes an arbitrary "hidden model" as an input and wraps it in another tensor that treats the output of the hidden model as a return and computes the implied output by adding 1 and multiplying it by the original data: class Model(tf.keras.Model): def __init__(self, hidden_model): super(Model, self).__init__(name='') self…
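
A hedged sketch of the usual TF2 approach: run the forward pass under a tf.GradientTape that explicitly watches the input tensor. The tiny hidden model and the (hidden(x) + 1) * x wrapper below are assumptions standing in for the poster's classes:

```python
import tensorflow as tf

# Hypothetical stand-in for the "hidden model" described above.
hidden_model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])

x = tf.constant([[0.1, 0.2, 0.3]])
with tf.GradientTape() as tape:
    tape.watch(x)                      # watch the input, not just trainable weights
    y = (hidden_model(x) + 1.0) * x    # assumed form of the wrapper's "implied output"
dy_dx = tape.gradient(y, x)            # derivative of the output w.r.t. the input
print(dy_dx)
```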

How to get activation values from a Tensor for a Keras model?

旧时模样 submitted on 2020-01-06 06:57:22
Question: I am trying to access the activation values of the nodes in a layer. l0_out = model.layers[0].output print(l0_out) print(type(l0_out)) Tensor("fc1_1/Relu:0", shape=(None, 10), dtype=float32) <class 'tensorflow.python.framework.ops.Tensor'> I've tried several different ways of eval() and K.function without success. I've also tried every method in this post: Keras, How to get the output of each layer? How can I work with this object? MODEL: just using something everyone is familiar with. import…
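
A minimal sketch of one common way to get concrete activation values in TF2: build a second tf.keras.Model that maps the original inputs to the layer's output and call it on real data. The toy model and the layer name 'fc1' are assumptions:

```python
import numpy as np
import tensorflow as tf

# Assumed toy model; layer names and sizes are illustrative.
inputs = tf.keras.Input(shape=(4,))
fc1 = tf.keras.layers.Dense(10, activation='relu', name='fc1')(inputs)
outputs = tf.keras.layers.Dense(3, activation='softmax')(fc1)
model = tf.keras.Model(inputs, outputs)

# A symbolic layer output cannot be eval()'d directly in TF2; instead build a
# sub-model from the original inputs to the layer's output and call it on data.
activation_model = tf.keras.Model(inputs=model.input,
                                  outputs=model.get_layer('fc1').output)
acts = activation_model(np.random.rand(2, 4).astype('float32'))
print(acts.numpy())   # concrete ReLU activation values, shape (2, 10)
```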

TensorFlow 2.0: Packing numerical features of a dataset together in a functional way

非 Y 不嫁゛ submitted on 2020-01-06 05:41:10
Question: I am trying to reproduce the TensorFlow tutorial code from here, which is supposed to download a CSV file and preprocess the data (up to combining the numerical data together). The reproducible example goes as follows: import tensorflow as tf print("TF version is: {}".format(tf.__version__)) # Download data train_url = "https://storage.googleapis.com/tf-datasets/titanic/train.csv" test_url = "https://storage.googleapis.com/tf-datasets/titanic/eval.csv" train_path = tf.keras.utils.get_file("train.csv", train…
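
A hedged sketch of the packing step written as a plain function passed to dataset.map, mirroring the tutorial's PackNumericFeatures class. The column names are the Titanic CSV's numeric columns, and train_data is assumed to be the tf.data.Dataset built earlier in the tutorial with make_csv_dataset:

```python
import tensorflow as tf

NUMERIC_FEATURES = ['age', 'n_siblings_spouses', 'parch', 'fare']  # Titanic columns

def pack_numeric(features, label):
    # Pop each numeric column out of the feature dict, cast it to float32,
    # and stack the columns into a single tensor under one 'numeric' key.
    numeric = [tf.cast(features.pop(name), tf.float32) for name in NUMERIC_FEATURES]
    features['numeric'] = tf.stack(numeric, axis=-1)
    return features, label

# Assuming train_data is the dataset produced by make_csv_dataset above:
# packed_train_data = train_data.map(pack_numeric)
```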

Bounding hyperparameter optimization with a TensorFlow bijector chain in GPflow 2.0

我们两清 submitted on 2020-01-06 05:27:09
Question: While doing GP regression in GPflow 2.0, I want to set hard bounds on the lengthscale (i.e. limit the lengthscale optimization range). Following this thread (Setting hyperparameter optimization bounds in GPflow 2.0), I constructed a TensorFlow Bijector chain (see the bounded_lengthscale function below). However, the bijector chain below does not prevent the model from optimizing outside the intended bounds. What do I need to change to make the bounded_lengthscale function put hard bounds on…
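
For reference, a hedged sketch of a sigmoid-style bijector chain that keeps a GPflow 2.0 lengthscale inside (low, high). The constrained gpflow.Parameter has to replace the kernel's lengthscale parameter rather than only having a value assigned to the old one. tfb.AffineScalar is used here, which was current in TFP releases of that period; newer releases expose Shift/Scale or Sigmoid(low, high) instead:

```python
import tensorflow as tf
import tensorflow_probability as tfp
import gpflow

tfb = tfp.bijectors

def bounded_lengthscale(low, high, initial):
    """Return a gpflow.Parameter whose unconstrained value is mapped into
    (low, high) by sigmoid-then-affine, so optimization cannot leave the interval."""
    affine = tfb.AffineScalar(shift=tf.cast(low, tf.float64),
                              scale=tf.cast(high - low, tf.float64))
    logistic = tfb.Chain([affine, tfb.Sigmoid()])  # forward: low + (high-low)*sigmoid(x)
    return gpflow.Parameter(initial, transform=logistic, dtype=tf.float64)

# The constrained Parameter must replace the kernel's parameter object, e.g.:
# kernel = gpflow.kernels.RBF()
# kernel.lengthscale = bounded_lengthscale(0.1, 10.0, 1.0)
```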

AttributeError: The layer has never been called and thus has no defined input shape

ぃ、小莉子 submitted on 2020-01-04 04:08:25
Question: I'm trying to build an autoencoder in TensorFlow 2.0 by creating three classes: Encoder, Decoder and AutoEncoder. Since I don't want to set input shapes manually, I'm trying to infer the output shape of the decoder from the encoder's input_shape. import os import shutil import numpy as np import tensorflow as tf from tensorflow.keras import Model from tensorflow.keras.layers import Dense, Layer def mse(model, original): return tf.reduce_mean(tf.square(tf.subtract(model(original), original)))…
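
A minimal sketch of one workaround, assuming a toy Encoder: input_shape is only defined after the layer has recorded an inbound call, so calling it once on a symbolic tf.keras.Input makes the shape available for sizing the decoder (the sizes below are illustrative, not the poster's values):

```python
import tensorflow as tf
from tensorflow.keras.layers import Dense, Layer

class Encoder(Layer):
    def __init__(self, latent_dim=32):
        super(Encoder, self).__init__()
        self.hidden = Dense(64, activation='relu')
        self.latent = Dense(latent_dim, activation='relu')

    def call(self, x):
        return self.latent(self.hidden(x))

encoder = Encoder()
# Reading encoder.input_shape now would raise the AttributeError from the title.
# Calling the layer once on a symbolic Input records its input shape.
inputs = tf.keras.Input(shape=(784,))     # assumed original feature size
_ = encoder(inputs)
print(encoder.input_shape)                # (None, 784), now defined
decoder_output = Dense(encoder.input_shape[-1], activation='sigmoid')
```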

Issue with AdamOptimizer

大兔子大兔子 submitted on 2019-12-25 00:28:53
Question: I'm using a simple network and I'm trying to use AdamOptimizer to minimize the loss in a Q-learning context. Here is the code: ### DATASET IMPORT from DataSet import * ### NETWORK state_size = STATE_SIZE stack_size = STACK_SIZE action_size = ACTION_SIZE learning_rate = LEARNING_RATE hidden_tensors = HIDDEN_TENSORS gamma = GAMMA import tensorflow as tf import numpy as np class NNetwork: def __init__(self, name='NNetwork'): # Initialisations self.state_size = state_size self.action_size =…
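
Since the excerpt is cut off before the actual error, here is only a hedged sketch of the TF2 idiom for minimizing a squared TD error with Adam (tf.keras.optimizers.Adam plus GradientTape); the network shape and constants are stand-ins for the values imported from DataSet:

```python
import numpy as np
import tensorflow as tf

STATE_SIZE, ACTION_SIZE, LEARNING_RATE = 4, 2, 1e-3   # illustrative constants

q_network = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(STATE_SIZE,)),
    tf.keras.layers.Dense(ACTION_SIZE)                # one Q-value per action
])
optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE)

states = np.random.rand(8, STATE_SIZE).astype('float32')
targets = np.random.rand(8, ACTION_SIZE).astype('float32')  # assumed TD targets

with tf.GradientTape() as tape:
    q_values = q_network(states)
    loss = tf.reduce_mean(tf.square(targets - q_values))     # squared TD error
grads = tape.gradient(loss, q_network.trainable_variables)
optimizer.apply_gradients(zip(grads, q_network.trainable_variables))
```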

How to apply Layer Normalisation in LSTMCell

妖精的绣舞 submitted on 2019-12-24 19:00:25
Question: I want to apply layer normalisation to a recurrent neural network while using tf.compat.v1.nn.rnn_cell.LSTMCell. There is a LayerNormalization class, but how should I apply it inside an LSTMCell? I am using tf.compat.v1.nn.rnn_cell.LSTMCell because I want to use a projection layer. How should I achieve normalisation in this case? class LM(tf.keras.Model): def __init__(self, hidden_size=2048, num_layers=2): super(LM, self).__init__() self.hidden_size = hidden_size self.num_layers = num_layers self.lstm…
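
One option, assuming TensorFlow Addons is acceptable, is its layer-normalised LSTM cell; it has no built-in projection like the compat.v1 cell, so this sketch appends a Dense layer as a stand-in for num_proj:

```python
import tensorflow as tf
import tensorflow_addons as tfa

# LSTM cell with built-in layer normalisation, wrapped in a Keras RNN layer.
cell = tfa.rnn.LayerNormLSTMCell(units=2048)
rnn = tf.keras.layers.RNN(cell, return_sequences=True)
project = tf.keras.layers.Dense(512)           # stand-in for the num_proj projection

x = tf.random.normal((4, 10, 128))             # (batch, time, features), illustrative
outputs = project(rnn(x))
print(outputs.shape)                           # (4, 10, 512)
```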

TensorFlow profiling in TF2.0

北战南征 submitted on 2019-12-24 10:02:22
Question: I am trying to visualize the performance of tf.data.Datasets using TF2.0 (Beta). I found examples of how to use the profiler in older versions of TensorFlow. How is profiling done in TF2.0? I could use tf.compat.v1, but the procedure does not seem to be straightforward. I want to measure memory consumption (device-placement wise) and the timeline. The examples below explain profiling with TF1.x: Can I measure the execution time of individual operations with TensorFlow? Understanding tensorflow profiling
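
A hedged sketch of the TF 2.0 (beta-era) tracing API: tf.summary.trace_on with profiler=True records op timings and memory while a tf.function runs, and trace_export writes them for TensorBoard's Profile tab. The dataset, log directory and step function below are illustrative:

```python
import tensorflow as tf

logdir = '/tmp/tf2_profile'                    # assumed output directory
writer = tf.summary.create_file_writer(logdir)

dataset = tf.data.Dataset.from_tensor_slices(tf.random.normal((1024, 10))).batch(32)

@tf.function
def step(batch):
    return tf.reduce_sum(tf.square(batch))

# Record graph and profiler data for a few steps, then inspect the timeline,
# per-op times and memory in TensorBoard's Profile tab.
tf.summary.trace_on(graph=True, profiler=True)
for batch in dataset.take(5):
    step(batch)
with writer.as_default():
    tf.summary.trace_export(name='dataset_profile', step=0, profiler_outdir=logdir)
```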

What does experimental in TensorFlow mean?

五迷三道 submitted on 2019-12-24 01:17:22
Question: In the TensorFlow 2.0 APIs, there is a module tf.experimental. Such a name also appears in other places, like tf.data.experimental. I would just like to know what the motivation for designing these modules is. Answer 1: tf.experimental indicates that the class/method in question is in early development, incomplete, or, less commonly, not up to standards. It's a collection of contributions which haven't yet been integrated with mainline TensorFlow, but are still available as part of the open-source release for users to test…
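
A small illustration of the point (the specific constant is an added example, not from the answer): tf.data.experimental.AUTOTUNE lived in the experimental namespace in TF 2.0 and was later also exposed as stable tf.data.AUTOTUNE:

```python
import tensorflow as tf

# Names incubate under *.experimental and may be promoted, changed or removed
# in later releases once they stabilise.
ds = tf.data.Dataset.range(10)
ds = ds.prefetch(tf.data.experimental.AUTOTUNE)   # experimental constant in TF 2.0
# In newer releases the same constant is also available as tf.data.AUTOTUNE.
```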