tensorflow2.0

How to Get Reproducible Results (Keras, Tensorflow):

Submitted by 孤人 on 2020-06-13 05:34:09
Question: To make the results reproducible I've read more than 20 articles and added as many of the recommended functions to my script as I could ... but failed. In the official source I read there are 2 kinds of seeds - global and operation-level. Maybe the key to solving my problem is setting the operation-level seed, but I don't understand where to apply it. Would you please help me to achieve reproducible results with tensorflow (version > 2.0)? Thank you very much.
from keras.models import Sequential
from keras.layers import
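A minimal sketch of seeding in TF 2.x (an illustration, not the asker's script), assuming a CPU run - GPU kernels such as cuDNN can still be nondeterministic. The global seed is set with tf.random.set_seed, while an operation-level seed is just the seed= argument of an individual random op:

    import os
    import random
    import numpy as np
    import tensorflow as tf

    os.environ["PYTHONHASHSEED"] = "0"  # fix Python hash randomization
    random.seed(42)                     # Python's built-in RNG
    np.random.seed(42)                  # NumPy RNG
    tf.random.set_seed(42)              # TensorFlow global seed

    # Operation-level seed: passed directly to a single random op.
    noise = tf.random.uniform([3], seed=1)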

TypeError: An op outside of the function building code is being passed a Graph tensor

Submitted by 我与影子孤独终老i on 2020-06-11 17:36:19
Question: I am getting the following exception:
TypeError: An op outside of the function building code is being passed a "Graph" tensor. It is possible to have Graph tensors leak out of the function building context by including a tf.init_scope in your function building code. For example, the following function will fail:
@tf.function
def has_init_scope():
    my_constant = tf.constant(1.)
    with tf.init_scope():
        added = my_constant * 2
The graph tensor has name: conv2d_flipout/divergence_kernel:0 which also
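The conv2d_flipout name suggests a TensorFlow Probability layer whose divergence tensor is created outside the traced training function. A workaround often suggested for TF 2.0/2.1 (an assumption here, since the model code is cut off) is to stop Keras from wrapping the train step in a tf.function when compiling:

    # Assumes `model` is the already-built Keras model from the question.
    # experimental_run_tf_function=False (TF 2.0/2.1 only) disables the tf.function
    # wrapping of the train/test/predict steps, which avoids the leaked Graph tensor.
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        experimental_run_tf_function=False,
    )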

Function call stack: keras_scratch_graph Error

Submitted by 筅森魡賤 on 2020-06-10 10:44:38
Question: I am reimplementing a text2speech project. I am facing a Function call stack: keras_scratch_graph error in the decoder part. The network architecture is from the Deep Voice 3 paper. I am using Keras from TF 2.0 on Google Colab. Below is the code for the decoder Keras model.
y1 = tf.ones(shape=(16, 203, 320))
def Decoder(name="decoder"):
    # Decoder Prenet
    din = tf.concat((tf.zeros_like(y1[:, :1, -hp.mel:]), y1[:, :-1, -hp.mel:]), 1)
    keys = K.Input(shape=(180, 256), batch_size=16, name="keys")
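keras_scratch_graph failures on Colab GPUs are frequently reported together with GPU memory exhaustion rather than a logic error in the model. A commonly suggested mitigation (an assumption, not a confirmed fix for this decoder) is to enable GPU memory growth before building the model:

    import tensorflow as tf

    # Allocate GPU memory on demand instead of reserving it all up front;
    # this avoids some out-of-memory failures that surface as keras_scratch_graph.
    for gpu in tf.config.experimental.list_physical_devices("GPU"):
        tf.config.experimental.set_memory_growth(gpu, True)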

How to use windows created by the Dataset.window() method in TensorFlow 2.0?

Submitted by 时间秒杀一切 on 2020-06-10 02:54:47
Question: I'm trying to create a dataset that will return random windows from a time series, along with the next value as the target, using TensorFlow 2.0. I'm using Dataset.window(), which looks promising:
import tensorflow as tf
dataset = tf.data.Dataset.from_tensor_slices(tf.range(10))
dataset = dataset.window(5, shift=1, drop_remainder=True)
for window in dataset:
    print([elem.numpy() for elem in window])
Outputs:
[0, 1, 2, 3, 4]
[1, 2, 3, 4, 5]
[2, 3, 4, 5, 6]
[3, 4, 5, 6, 7]
[4, 5, 6, 7, 8]
[5, 6
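Each element produced by Dataset.window() is itself a nested dataset, so it has to be flattened back into plain tensors before it can feed a model. A minimal sketch, assuming a window size of 5 with the last element of each window used as the target:

    import tensorflow as tf

    window_size = 5
    dataset = tf.data.Dataset.from_tensor_slices(tf.range(10))
    dataset = dataset.window(window_size, shift=1, drop_remainder=True)
    # Collapse each nested window dataset into one tensor of length window_size ...
    dataset = dataset.flat_map(lambda window: window.batch(window_size))
    # ... then split that tensor into (inputs, target).
    dataset = dataset.map(lambda window: (window[:-1], window[-1]))
    for x, y in dataset:
        print(x.numpy(), "->", y.numpy())  # e.g. [0 1 2 3] -> 4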

I am getting an error that I can't figure out when I run my neural network in Keras as soon as I introduce a class weight

Submitted by 大兔子大兔子 on 2020-06-09 05:39:26
Question:
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv1d (Conv1D)              (None, 35, 32)            96
_________________________________________________________________
batch_normalization (BatchNo (None, 35, 32)            128
_________________________________________________________________
dropout (Dropout)            (None, 35, 32)            0
______________________________________________________________
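The preview is cut off before the actual traceback, so the cause cannot be pinned down from what is shown; for reference, this is the usual way class_weight is passed to fit, as a dict mapping every integer class label to a weight (x_train, y_train and the weights below are placeholder assumptions):

    # Hypothetical data and weights; the dict keys must cover every class in y_train.
    class_weights = {0: 1.0, 1: 5.0}
    model.fit(x_train, y_train, epochs=10, class_weight=class_weights)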

In tensorflow, for custom layers that need arguments at instantiation, does the get_config method need overriding?

Submitted by 别说谁变了你拦得住时间么 on 2020-06-01 05:53:08
Question: Ubuntu 20.04, Tensorflow 2.2.0, Tensorboard 2.2.1. I have read that one needs to reimplement the get_config method in order for a custom layer to be serializable. I have a custom layer that accepts arguments in its __init__. It uses another custom layer that consumes arguments in its __init__ as well. Without Tensorboard callbacks I can: use them in a model both in eager mode and graph form; run tf.saved_model.save and it executes without a glitch; load the thus saved model using tf
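For a custom layer whose __init__ takes arguments, the usual pattern is to return those arguments from get_config so Keras can rebuild the layer on load. A minimal sketch with an illustrative units argument (the layer and argument names are not the asker's):

    import tensorflow as tf

    class MyBlock(tf.keras.layers.Layer):
        def __init__(self, units, **kwargs):
            super().__init__(**kwargs)
            self.units = units
            self.dense = tf.keras.layers.Dense(units)

        def call(self, inputs):
            return self.dense(inputs)

        def get_config(self):
            # Include every __init__ argument so from_config() can re-create the layer.
            config = super().get_config()
            config.update({"units": self.units})
            return config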

How can I combine ImageDataGenerator with TensorFlow datasets in TF2?

Submitted by 时间秒杀一切 on 2020-05-29 05:00:44
Question: I have a TF dataset to classify cats and dogs:
import tensorflow_datasets as tfds
SPLIT_WEIGHTS = (8, 1, 1)
splits = tfds.Split.TRAIN.subsplit(weighted=SPLIT_WEIGHTS)
(raw_train, raw_validation, raw_test), metadata = tfds.load(
    'cats_vs_dogs', split=list(splits), with_info=True, as_supervised=True)
In the example they use some image augmentation with a map function. I was wondering if that could also be done with the nice ImageDataGenerator class such as described here:
from tensorflow.keras
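ImageDataGenerator and tf.data pipelines do not plug into each other directly; one alternative is to express the augmentation as tf.image ops inside a map call on the dataset. A minimal sketch over the raw_train split loaded above (the target size and the particular augmentations are illustrative assumptions, not from the tutorial):

    import tensorflow as tf

    IMG_SIZE = 160  # assumed target size

    def augment(image, label):
        image = tf.cast(image, tf.float32) / 255.0
        image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
        image = tf.image.random_flip_left_right(image)
        image = tf.image.random_brightness(image, max_delta=0.1)
        return image, label

    train = raw_train.map(augment).shuffle(1000).batch(32)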