tensorflow2.0

Keras not training on entire dataset

Submitted by 别说谁变了你拦得住时间么 on 2020-05-21 07:16:59
Question: So I've been following Google's official TensorFlow guide and trying to build a simple neural network using Keras. But when it comes to training the model, it does not use the entire dataset (with 60000 entries) and instead uses only 1875 entries for training. Any possible fix?

import tensorflow as tf
from tensorflow import keras
import numpy as np

fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
train_images =
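A likely explanation (not from the original page, but a common cause of this exact symptom): Keras reports steps per epoch, not samples, and with the default batch_size of 32 the 60000 Fashion-MNIST samples give 60000 / 32 = 1875 steps, so no data is being skipped. A minimal sketch, assuming the standard setup from the question:

import tensorflow as tf
from tensorflow import keras

(train_images, train_labels), _ = keras.datasets.fashion_mnist.load_data()
train_images = train_images / 255.0  # normalize as in the official guide

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10),
])
model.compile(optimizer='adam',
              loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

# The progress bar shows "1875/1875" because 60000 samples / batch size 32 = 1875 steps;
# every sample is still used in each epoch.
model.fit(train_images, train_labels, epochs=1, batch_size=32)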

Save the model in gpflow 2

Submitted by 倖福魔咒の on 2020-05-17 07:46:27
Question: I am trying to save a GPflow model (in GPflow version 2.0).

model = gpflow.models.VGP((X, Y_data), kernel=kernel, likelihood=likelihood, num_latent_gps=1)

Since the gpflow package no longer has a saver module, could anyone help me with an alternative way?

Answer 1: There are different ways of saving a GPflow model, and the right one depends on your use case. You can either use TensorFlow's checkpointing (saving the trained weights) or use TensorFlow's SavedModel format (saving weights and
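A minimal sketch of the checkpointing route mentioned in the answer. GPflow 2 models are tf.Modules, so TensorFlow's own checkpoint machinery applies directly; the data, kernel, and likelihood below are hypothetical stand-ins for the question's X, Y_data, kernel, and likelihood:

import numpy as np
import tensorflow as tf
import gpflow

X = np.random.rand(20, 1)                      # hypothetical training inputs
Y_data = np.sin(X) + 0.1 * np.random.randn(20, 1)
kernel = gpflow.kernels.SquaredExponential()
likelihood = gpflow.likelihoods.Gaussian()

model = gpflow.models.VGP((X, Y_data), kernel=kernel,
                          likelihood=likelihood, num_latent_gps=1)

ckpt = tf.train.Checkpoint(model=model)
manager = tf.train.CheckpointManager(ckpt, directory="./gpflow_ckpts", max_to_keep=3)

manager.save()  # write the trained weights to disk
# Later: rebuild the model the same way, then restore the weights into it.
ckpt.restore(manager.latest_checkpoint)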

Build tensorflow pip package on 2 different platforms (Ubuntu, OSX Catalina)

Submitted by 删除回忆录丶 on 2020-05-17 06:35:08
Question: I have been trying for a while to build tensorflow from source on linux GCP Ubuntu 18.04, and I finally managed to do so successfully. I built the pip package using:

./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

It generates a linux wheel which is incompatible with macOS; I test-installed it with pip and it worked just fine on Ubuntu. So I compressed the tensorflow folder and downloaded it on my MacBook. When I run the command above again on my MacBook, the
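Not part of the original question text, but useful context for why the Ubuntu-built wheel fails on the MacBook: pip only installs wheels whose platform tags the current interpreter supports, and a linux_x86_64/manylinux wheel carries no macosx tag, so the package has to be rebuilt from source on the Mac itself. A small sketch, assuming the packaging library is installed (pip install packaging):

from packaging.tags import sys_tags

# List the first few wheel tags this interpreter accepts. On macOS these are
# macosx_* tags, so a wheel built on Ubuntu (tagged linux_x86_64) is rejected.
for tag in list(sys_tags())[:5]:
    print(tag)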

Predictions from a model become very small. The loss is either 0 or a positive constant

Submitted by 一曲冷凌霜 on 2020-05-17 06:06:39
Question: I am implementing the following architecture in Tensorflow: Dual Encoder LSTM (https://i.stack.imgur.com/ZmcsX.png). During the first few iterations the loss remains 0.6915, but after that, as you can see in the output below, no matter how many iterations I run, the loss keeps varying between -0.0 and a positive constant depending on the hyperparams. This is happening because the predictions of my model become very small (close to zero) or very high (close to 1), so the model cannot be trained.
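A common cause of this exact pattern (an assumption here, since the model code is not shown) is computing the cross-entropy by hand as log(sigmoid(x)), which degenerates once the predictions saturate. Keeping raw logits and using tf.nn.sigmoid_cross_entropy_with_logits is numerically stable; a minimal sketch:

import tensorflow as tf

logits = tf.constant([-30.0, 0.0, 30.0])  # hypothetical saturated scores from the dual encoder
labels = tf.constant([0.0, 1.0, 1.0])

# Hand-rolled cross-entropy: sigmoid(30) rounds to exactly 1.0 in float32, so
# log(1 - probs) becomes log(0) = -inf and the loss degenerates to -0.0 / nan,
# reproducing the "-0.0 or a constant" symptom.
probs = tf.sigmoid(logits)
unstable = -(labels * tf.math.log(probs) + (1 - labels) * tf.math.log(1 - probs))

# Stable version, computed directly from the logits.
stable = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
print(unstable.numpy())  # [-0.      0.6931  nan]
print(stable.numpy())    # [~0.     0.6931  ~0.]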

How to avoid defining target tensors in Tensorflow 2 for CTC loss model?

Submitted by 别等时光非礼了梦想. on 2020-05-16 22:05:13
Question: I am trying to use tf.distribute.MirroredStrategy() for multi-GPU training in Tensorflow 2, on a model with CTC loss. The problem is that the model needs target_tensors defined in order to compile. What can be the cause of that? Is there some workaround to compile the model without defining target_tensors? If I do not pass the targets I get the following:

TypeError: Value passed to parameter 'indices' has DataType float32 not in list of allowed values: uint8, int32, int64

The model is defined using
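One widely used workaround (an assumption here, not taken from this page, with hypothetical shapes throughout): feed the labels in as extra inputs, compute the CTC loss inside the graph via a Lambda layer, and give compile a pass-through loss, so Keras never needs target_tensors at all:

import tensorflow as tf
from tensorflow.keras import layers, Model, backend as K

def ctc_lambda(args):
    y_pred, labels, input_len, label_len = args
    return K.ctc_batch_cost(labels, y_pred, input_len, label_len)

features = layers.Input(shape=(100, 64), name="features")          # hypothetical
labels = layers.Input(shape=(20,), dtype="int32", name="labels")   # hypothetical max label length
input_len = layers.Input(shape=(1,), dtype="int32", name="input_len")
label_len = layers.Input(shape=(1,), dtype="int32", name="label_len")

x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(features)
y_pred = layers.Dense(30, activation="softmax")(x)                 # hypothetical vocab size + blank

loss_out = layers.Lambda(ctc_lambda, name="ctc")([y_pred, labels, input_len, label_len])
model = Model([features, labels, input_len, label_len], loss_out)

# The Lambda output already is the loss, so the compiled loss just passes it
# through; fit() then takes dummy zeros as y, and no target_tensors are needed.
model.compile(optimizer="adam", loss=lambda y_true, y_pred: y_pred)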

Why does tf.executing_eagerly() return False in TensorFlow 2?

Submitted by 眉间皱痕 on 2020-05-16 22:03:42
Question: Let me explain my setup. I am using TensorFlow 2.1, the Keras version shipped with TF, and TensorFlow Probability 0.9. I have a function get_model that creates (with the functional API) and returns a model using Keras and custom layers. In the __init__ method of these custom layers A, I call a method A.m, which executes the statement print(tf.executing_eagerly()), but it returns False. Why? To be more precise, this is roughly my setup:

def get_model():
    inp = Input(...)
    x = A(...)(inp)
    x =
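This behavior is reproducible outside Keras too (a general TF 2 sketch, not the asker's code): whenever TensorFlow is tracing Python code into a graph, whether via tf.function or while the functional API builds a model, tf.executing_eagerly() returns False even though TF 2 is eager at the top level:

import tensorflow as tf

print(tf.executing_eagerly())  # True: TF 2 executes eagerly at the top level

@tf.function
def traced():
    # This print runs while the function is being traced into a graph,
    # and graph tracing is not eager execution.
    print("while tracing:", tf.executing_eagerly())
    return tf.constant(0)

traced()  # prints: while tracing: False

Functional-API model construction traces custom layers in the same way, which is the most likely reason the print inside A.m reports False.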

Understanding device allocation, parallelism (tf.while_loop) and tf.function in tensorflow

Submitted by 最后都变了- on 2020-05-16 06:45:44
Question: I'm trying to understand parallelism on GPU in tensorflow, as I need to apply it to uglier graphs.

import tensorflow as tf
from datetime import datetime

with tf.device('/device:GPU:0'):
    var = tf.Variable(tf.ones([100000], dtype=tf.dtypes.float32), dtype=tf.dtypes.float32)

@tf.function
def foo():
    return tf.while_loop(c, b, [i], parallel_iterations=1000)  # tweak

@tf.function
def b(i):
    var.assign(tf.tensor_scatter_nd_update(var, tf.reshape(i, [-1, 1]), tf.constant([0], dtype=tf.dtypes.float32)))
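The snippet above is truncated, so here is a self-contained variant of the same experiment (the condition c, the loop bound, and the returned loop variable are hypothetical stand-ins), runnable on CPU as well. Note that a body which assigns to a shared variable creates a dependency chain between iterations, so parallel_iterations mainly helps when iterations are independent:

import tensorflow as tf
from datetime import datetime

var = tf.Variable(tf.ones([100000], dtype=tf.float32))

def c(i):
    return i < 1000  # hypothetical loop bound

def b(i):
    # Zero out element i of var. The assign makes each iteration depend on the
    # previous one, which largely serializes them regardless of parallel_iterations.
    var.assign(tf.tensor_scatter_nd_update(
        var, tf.reshape(i, [-1, 1]), tf.constant([0.0])))
    return i + 1

@tf.function
def foo():
    return tf.while_loop(c, b, [tf.constant(0)], parallel_iterations=1000)  # tweak

start = datetime.now()
foo()
print("elapsed:", datetime.now() - start)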