tensorflow2.0

How to graph tf.keras model in Tensorflow-2.0?

末鹿安然 submitted on 2019-12-09 17:27:25
Question: I upgraded to TensorFlow 2.0 and there is no tf.summary.FileWriter("tf_graphs", sess.graph). I looked through some other StackOverflow questions on this, and they say to use tf.compat.v1.summary etc. Surely there must be a way to graph and visualize a tf.keras model in TensorFlow version 2. What is it? I'm looking for a TensorBoard output like the one below. Thank you!

Answer 1: According to the docs, you can use TensorBoard to visualize graphs once your model has been trained. First,
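The answer is cut off above. For reference, a minimal sketch of the usual TF 2.x approach it is describing, using the built-in TensorBoard callback (the toy model and the "tf_graphs" log directory name are assumptions carried over from the question):

    import numpy as np
    import tensorflow as tf

    # Toy model, just to have a graph to log.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
        tf.keras.layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy')

    # The callback writes the graph during training (write_graph=True is the default).
    tb = tf.keras.callbacks.TensorBoard(log_dir="tf_graphs")

    X = np.random.randn(32, 4)
    Y = np.random.randint(0, 2, (32, 1))
    model.fit(X, Y, epochs=1, callbacks=[tb])

    # Then inspect the Graphs tab with: tensorboard --logdir tf_graphs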

ValueError: Tried to convert 'y' to a tensor and failed. Error: None values not supported

江枫思渺然 submitted on 2019-12-08 02:17:28
DOESN'T WORK:

    from tensorflow.python.keras.layers import Input, Dense
    from tensorflow.python.keras.models import Model
    from tensorflow.python.keras.optimizers import Nadam
    import numpy as np

    ipt = Input(shape=(4,))
    out = Dense(1, activation='sigmoid')(ipt)
    model = Model(ipt, out)
    model.compile(optimizer=Nadam(lr=1e-4), loss='binary_crossentropy')

    X = np.random.randn(32, 4)
    Y = np.random.randint(0, 2, (32, 1))
    model.train_on_batch(X, Y)

WORKS: remove .python from the imports above. What's the deal, and how do I fix it?

ADDITIONAL INFO: CUDA 10.0.130, cuDNN 7.4.2, Python 3.7.4, Windows 10
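For reference, a sketch of the working variant described above, importing everything from the public tf.keras namespace instead of the private tensorflow.python path:

    from tensorflow.keras.layers import Input, Dense
    from tensorflow.keras.models import Model
    from tensorflow.keras.optimizers import Nadam
    import numpy as np

    ipt = Input(shape=(4,))
    out = Dense(1, activation='sigmoid')(ipt)
    model = Model(ipt, out)
    model.compile(optimizer=Nadam(lr=1e-4), loss='binary_crossentropy')

    X = np.random.randn(32, 4)
    Y = np.random.randint(0, 2, (32, 1))
    model.train_on_batch(X, Y)  # runs without the ValueError

A likely explanation, hedged: tensorflow.python is TensorFlow's internal implementation namespace, and its keras copy can differ from the public tf.keras; mixing objects from both (a Model from one lineage, an optimizer from another) can produce exactly this kind of tensor-conversion error.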

Unable to import Keras(from TensorFlow 2.0) in PyCharm

邮差的信 submitted on 2019-12-07 00:10:09
Question: I have just installed the stable version of TensorFlow 2.0 (released on October 1st, 2019) in PyCharm. The problem is that the keras package is unavailable. The actual error is: "cannot import name 'keras' from tensorflow". I installed the CPU version via pip install tensorflow==2.0.0, then uninstalled it and installed the GPU version via pip install tensorflow-gpu==2.0.0. Neither of the above versions of TensorFlow was working properly (could not import
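The question is cut off above. For what it's worth, a common workaround when an IDE fails to resolve Keras under TF 2.0 (assuming the package itself installed cleanly) is to go through the tf namespace rather than a top-level keras import:

    import tensorflow as tf
    from tensorflow import keras  # often works at runtime even when PyCharm flags it

    print(tf.__version__)           # should print 2.0.0
    model = tf.keras.Sequential()   # accessing keras via the tf namespace

PyCharm's static analysis can lag behind TensorFlow's lazy module loading, so a red underline in the editor does not always mean the import fails at runtime.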

Tensorflow 2.0: minimize a simple function

好久不见. submitted on 2019-12-06 13:17:42
    import tensorflow as tf
    import numpy as np

    x = tf.Variable(2, name='x', trainable=True, dtype=tf.float32)
    with tf.GradientTape() as t:
        t.watch(x)
        log_x = tf.math.log(x)
        y = tf.math.square(log_x)

    opt = tf.optimizers.Adam(0.5)
    # train = opt.minimize(lambda: y, var_list=[x])  # FAILS

    @tf.function
    def f(x):
        log_x = tf.math.log(x)
        y = tf.math.square(log_x)
        return y

    yy = f(x)
    train = opt.minimize(lambda: yy, var_list=[x])  # ALSO FAILS

Yields ValueError: No gradients provided for any variable: ['x:0']. This looks like the examples they partially give. I'm not sure if this is a bug with eager or 2.0 or
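Both attempts fail for the same reason: the lambda returns a tensor that was computed before minimize() was called, so no gradient can flow back to x. A minimal sketch of the usual fix, computing the loss inside the callable passed to minimize:

    import tensorflow as tf

    x = tf.Variable(2.0, name='x', trainable=True)
    opt = tf.optimizers.Adam(0.5)

    def loss():
        # Recomputed on every call, so minimize() can trace gradients to x.
        return tf.math.square(tf.math.log(x))

    for _ in range(100):
        opt.minimize(loss, var_list=[x])

    print(x.numpy())  # converges toward 1.0, where log(x)^2 is minimal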

This model has not yet been built error on model.summary()

一笑奈何 submitted on 2019-12-06 01:47:29
Question: I have a Keras model defined as follows:

    class ConvLayer(Layer):
        def __init__(self, nf, ks=3, s=2, **kwargs):
            self.nf = nf
            self.grelu = GeneralReLU(leak=0.01)
            self.conv = Conv2D(filters=nf, kernel_size=ks, strides=s,
                               padding="same", use_bias=False, activation="linear")
            super(ConvLayer, self).__init__(**kwargs)

        def rsub(self):
            return -self.grelu.sub

        def set_sub(self, v):
            self.grelu.sub = -v

        def conv_weights(self):
            return self.conv.weight[0]

        def build(self, input_shape):
            # No weight to
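The snippet is cut off above. The error in the title usually means summary() was called before the model had been given an input shape. A minimal sketch of the standard fix, assuming the ConvLayer blocks are assembled into a model object (cut off above) and an MNIST-style input:

    # Either build the model explicitly before asking for a summary...
    model.build(input_shape=(None, 28, 28, 1))  # leading None is the batch dimension
    model.summary()

    # ...or run one batch through it, which builds it implicitly.
    import numpy as np
    model.predict(np.zeros((1, 28, 28, 1), dtype='float32'))
    model.summary()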

Batch Normalization doesn't have gradient in tensorflow 2.0?

依然范特西╮ submitted on 2019-12-04 12:42:00
I am trying to make a simple GAN to generate digits from the MNIST dataset. However, when I get to training (which is custom) I get an annoying warning that I suspect is the cause of it not training like I'm used to. Keep in mind this is all in TensorFlow 2.0, using its default eager execution.

GET THE DATA (not that important):

    (train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
    train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
    train_images = (train_images - 127.5) / 127.5  # Normalize the images to [-1, 1]
    BUFFER_SIZE =
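The question is cut off above. The warning the title refers to (gradients not existing for a BatchNormalization layer's variables) is generally benign when it names moving_mean and moving_variance: those statistics are updated by moving averages during the forward pass, not by gradients. A sketch of a custom GAN training step that only applies gradients to the trainable variables (generator, discriminator, the loss helper, and the dimension names are assumed stand-ins for the question's setup):

    noise = tf.random.normal([BATCH_SIZE, noise_dim])  # both names assumed

    with tf.GradientTape() as tape:
        generated = generator(noise, training=True)
        loss = generator_loss(discriminator(generated, training=True))  # hypothetical loss fn

    # trainable_variables excludes moving_mean/moving_variance, so no
    # gradient-less pairs are handed to the optimizer.
    grads = tape.gradient(loss, generator.trainable_variables)
    generator_optimizer.apply_gradients(zip(grads, generator.trainable_variables))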

Tensorflow 2.0 Keras is training 4x slower than 2.0 Estimator

笑着哭i submitted on 2019-12-04 00:58:39
We recently switched to Keras for TF 2.0, but when we compared it to the DNNClassifier Estimator on 2.0, we experienced around 4x slower speeds with Keras. I cannot for the life of me figure out why this is happening. The rest of the code for both is identical: both use an input_fn() that returns the same tf.data.Dataset, and both use identical feature_columns. I have been struggling with this problem for days now. Any help would be greatly appreciated. Thank you.

Estimator code:

    estimator = tf.estimator.DNNClassifier(
        feature_columns = feature_columns,
        hidden_units = [64, 64],
        activation_fn =
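The snippet ends mid-definition. For context, a sketch of what the Keras counterpart in such a comparison typically looks like (layer sizes taken from the snippet; n_classes, the optimizer, and the loss are assumptions):

    model = tf.keras.Sequential([
        tf.keras.layers.DenseFeatures(feature_columns),  # consumes the same feature_columns
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(n_classes, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    model.fit(input_fn(), epochs=10)  # same tf.data.Dataset as the Estimator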

Custom Neural Network Implementation on MNIST using Tensorflow 2.0?

十年热恋 submitted on 2019-12-03 07:56:24
Question: I tried to write a custom implementation of a basic neural network with two hidden layers on the MNIST dataset using TensorFlow 2.0 beta, but I'm not sure what went wrong here: my training loss and accuracy seem stuck at about 1.5 and around 85% respectively. But if I build the same thing using Keras, I get very low training loss and accuracy above 95% with just 8-10 epochs. I believe that maybe I'm not updating my weights or something? So do I need to assign the new weights which I compute in backprop
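The question is cut off above. On the weight-update point, a minimal sketch of a manual gradient step in TF 2.x (the parameter names W1, b1, etc., the forward() helper, and learning_rate are hypothetical stand-ins for the question's variables):

    with tf.GradientTape() as tape:
        logits = forward(x_batch)  # hypothetical forward pass through both hidden layers
        loss = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y_batch, logits=logits))

    params = [W1, b1, W2, b2, W3, b3]
    grads = tape.gradient(loss, params)
    for p, g in zip(params, grads):
        p.assign_sub(learning_rate * g)  # in-place update; plain rebinding (p = p - ...)
                                         # would replace the tf.Variable with a tensor

The key detail is assign_sub: it mutates the existing tf.Variable, so the same variable keeps accumulating updates across steps and stays visible to the tape on the next iteration.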