tensorflow2.0

Tensorflow 2.0 model using tf.function very slow and is recompiling every time the train count changes. Eager runs about 4x faster

五迷三道 · submitted on 2019-12-02 21:05:16
I have models built from uncompiled Keras code and am trying to run them through a custom training loop. The TF 2.0 eager (by default) code runs in about 30 s on a CPU (laptop). When I create a Keras model with tf.function-wrapped call methods, it runs much, much slower and appears to take a very long time to start, particularly the "first" time. For example, with the tf.function code the initial training run on 10 samples takes 40 s, and the follow-up on 10 samples takes 2 s. On 20 samples, the initial run takes 50 s and the follow-up takes 4 s. The first training run on 1 sample takes 2 s and the follow-up takes …
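
The symptom described, a large one-time cost that recurs whenever the sample count changes, matches tf.function retracing: a new graph is traced for every previously unseen input shape. A minimal sketch of the usual fix (the model, shapes, and optimizer below are illustrative, not taken from the question) pins an input_signature with a free batch dimension so all sample counts share one traced graph:

```python
import tensorflow as tf

# Hypothetical toy model and loss, for illustration only.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
optimizer = tf.keras.optimizers.SGD()
loss_fn = tf.keras.losses.MeanSquaredError()

@tf.function(input_signature=[
    tf.TensorSpec(shape=[None, 4], dtype=tf.float32),  # batch size left free
    tf.TensorSpec(shape=[None, 1], dtype=tf.float32),
])
def train_step(x, y):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# Different sample counts now reuse the same traced graph
# instead of triggering a fresh trace each time.
train_step(tf.random.normal([10, 4]), tf.random.normal([10, 1]))
train_step(tf.random.normal([20, 4]), tf.random.normal([20, 1]))
```

With the signature fixed, moving from 10 to 20 samples reuses the existing concrete function, so only the very first call pays the tracing cost.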

How can I maximize the GPU usage of TensorFlow 2.0 from R (with the Keras library)?

流过昼夜 · submitted on 2019-12-02 07:32:11
Question: I use R with Keras and TensorFlow 2.0 on the GPU. After connecting a second monitor to my GPU, I receive this error during a deep learning script. I concluded that the GPU is short of memory, and a solution seems to be this code: import tensorflow as tf; from keras.backend.tensorflow_backend import set_session; config = tf.ConfigProto(); config.gpu_options.allow_growth = True # dynamically grow the memory used on the GPU; config.log_device_placement = True # to log device placement (on which …
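
Note that the ConfigProto/set_session snippet quoted in the question is TF 1.x API and does not exist in TensorFlow 2.0. A minimal sketch of the TF 2.x equivalent is below, in Python; the assumption that the R tensorflow package exposes the same calls through its tf$config$... object should be checked against the installed version:

```python
import tensorflow as tf

# TF 2.x replacement for config.gpu_options.allow_growth = True:
# enable memory growth per GPU before any tensors are allocated.
gpus = tf.config.experimental.list_physical_devices("GPU")
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

# TF 2.x replacement for config.log_device_placement = True:
# log which device each op runs on.
tf.debugging.set_log_device_placement(True)
```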

What is the difference between tf.keras and tf.python.keras?

六眼飞鱼酱① · submitted on 2019-12-01 21:05:21
I've run into serious incompatibility problems for the same code run with one vs. the other, e.g. getting the value of a tensor, compiling a model, saving an optimizer. Looking into the GitHub source, the modules and their imports look fairly identical, and tf.keras even imports from tf.python.keras. In tutorials I see both being used from time to time. As an example, the code below will fail with tf.python.keras. What's the deal? What is the difference, and when should I use one or the other? from tensorflow.keras.layers import Input, Dense from tensorflow.keras.models import Model from tensorflow.keras …
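
The question's failing example is truncated above, so the following is an illustrative sketch of the distinction rather than the original code: tensorflow.keras is the public, supported API, while tensorflow.python.keras is an internal module whose classes are not guaranteed to be the same objects, so mixing the two namespaces can break in surprising ways.

```python
# Public API: the documented, supported entry point.
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

inputs = Input(shape=(4,))
outputs = Dense(1)(inputs)
model = Model(inputs, outputs)  # works as documented

# Internal path: not a supported entry point. Objects here may differ from
# their tf.keras counterparts, so mixing namespaces can fail:
# from tensorflow.python.keras.models import Model  # avoid
```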

Optimise function for many pseudodata realisations in TensorFlow 2

浪子不回头ぞ · submitted on 2019-12-01 00:24:48
My end goal is to simulate likelihood ratio test statistics; however, the core problem I am having is that I do not understand how to get TensorFlow 2 to perform many optimizations for different data inputs. Here is my attempt; hopefully it gives you an idea of what I am trying to do: import tensorflow as tf import tensorflow_probability as tfp from tensorflow_probability import distributions as tfd import numpy as np # Bunch of independent Poisson distributions that we want to combine poises0 = [tfp.distributions.Poisson(rate = 10) for i in range(5)] # Construct joint distributions joint0 = tfd …
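
One standard way to fit many pseudodata realisations at once in TensorFlow 2 is to give the fit parameter a leading batch dimension, so a single gradient-descent loop optimizes all realisations in parallel. The sketch below is a minimal illustration under assumed names and shapes (n_realisations, n_bins, log_rates are not from the question):

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Simulate many independent pseudodata sets of Poisson counts.
n_realisations, n_bins = 1000, 5
data = tfd.Poisson(rate=10.0).sample([n_realisations, n_bins])

# One fit parameter per realisation, via a leading batch dimension.
log_rates = tf.Variable(tf.zeros([n_realisations, 1]))
optimizer = tf.keras.optimizers.Adam(learning_rate=0.1)

@tf.function
def step():
    with tf.GradientTape() as tape:
        # Independent Poisson bins; the Poisson batch shape broadcasts to
        # [n_realisations, n_bins], then Independent folds the bins into
        # the event so log_prob returns one value per realisation.
        dist = tfd.Independent(
            tfd.Poisson(log_rate=log_rates + tf.zeros([1, n_bins])),
            reinterpreted_batch_ndims=1)
        nll = -dist.log_prob(data)   # shape [n_realisations]
        # Summing is safe: each realisation's gradient stays independent.
        loss = tf.reduce_sum(nll)
    grads = tape.gradient(loss, [log_rates])
    optimizer.apply_gradients(zip(grads, [log_rates]))
    return loss

for _ in range(200):
    step()
```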

Why is TensorFlow 2 much slower than TensorFlow 1?

ぐ巨炮叔叔 · submitted on 2019-11-26 11:51:36
Question: It's been cited by many users as the reason for switching to PyTorch, but I've yet to find a justification / explanation for sacrificing the most important practical quality, speed, for eager execution. Below is code benchmarking performance, TF1 vs. TF2, with TF1 running anywhere from 47% to 276% faster. My question is: what is it, at the graph or hardware level, that yields such a significant slowdown? I'm looking for a detailed answer; I am already familiar with the broad concepts. Relevant Git …
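
The question's benchmark code is truncated above; a minimal sketch of how such a comparison is typically set up in TF2 (the toy model, shapes, and iteration counts are assumptions) is to time the same train step eagerly and wrapped in tf.function, which isolates eager-execution overhead from graph execution:

```python
import time
import tensorflow as tf

# Hypothetical toy model and data, for illustration only.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dense(1),
])
optimizer = tf.keras.optimizers.SGD()
x = tf.random.normal([256, 64])
y = tf.random.normal([256, 1])

def train_step(x, y):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(model(x, training=True) - y))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

def bench(fn, iters=100):
    fn(x, y)  # warm-up call (includes tracing for the tf.function case)
    start = time.time()
    for _ in range(iters):
        fn(x, y)
    return time.time() - start

print("eager:       %.3fs" % bench(train_step))
print("tf.function: %.3fs" % bench(tf.function(train_step)))
```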