tensorflow2.0

Tensorflow 2.0 on Windows (SSE2): How do you stop 'ImportError: DLL load failed: The specified module could not be found.' when importing tensorflow

Submitted by 有些话、适合烂在心里 on 2020-04-30 06:25:53
Question: System information: Windows 10; Python 3.6.4 and 3.7.0 (default version); Tensorflow 1.11.0; Conda 4.7.12; pip 19.3.1. I was trying to get Tensorflow 2.0 running on Windows 10, but my computer doesn't support the AVX/AVX2 instruction set, so I used a wheel supporting SSE2 instead, which I installed with pip. I then proceeded to attempt to run the Python code from the MNIST Basic classification tutorial on Tensorflow's official website: from __future__ import absolute_import, division, print
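The two usual culprits behind this ImportError on Windows are a missing Microsoft Visual C++ runtime and a wheel built for CPU instructions the machine lacks. A minimal diagnostic sketch (the helper name and hint strings are illustrative, not from the original post):

```python
import ctypes.util
import platform

def diagnose_tf_import():
    """Collect hints for the common Windows 'DLL load failed' causes:
    a missing MSVC runtime, or a wheel built for unsupported CPU ops."""
    hints = []
    if platform.system() == "Windows":
        # TensorFlow >= 1.6 wheels require the Visual C++ 2015+ runtime (msvcp140.dll).
        if ctypes.util.find_library("msvcp140") is None:
            hints.append("Install the Microsoft Visual C++ 2015-2019 redistributable.")
    try:
        import tensorflow  # noqa: F401 -- only testing that the DLLs load
    except ImportError as err:
        hints.append(f"Import failed: {err}. If the CPU lacks AVX, a non-AVX "
                     "(SSE2) community wheel matching the Python version is needed.")
    return hints
```

Note that a wheel built for one Python minor version (e.g. cp36) silently fails to import under another, so with both 3.6.4 and 3.7.0 installed it is worth confirming which interpreter pip installed into.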

Tensorboard Graph with custom training loop does not include my Model

Submitted by 强颜欢笑 on 2020-04-18 05:48:58
Question: I have created my own training loop as shown in the TF 2 migration guide. I am currently able to see the graph for only the ---VISIBLE--- section of the code below. How do I make my model (defined in the ---NOT VISIBLE--- section) visible in TensorBoard? If I were not using a custom training loop, I could have gone with the documented model.fit approach: model.fit(..., callbacks=[keras.callbacks.TensorBoard(log_dir=logdir)]). In TF 1, the approach used to be quite straightforward: tf.compat.v1
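With a custom loop, TF 2 can record the graph of a tf.function-decorated step via tf.summary.trace_on / trace_export; since the model is called inside the traced step, it appears in the TensorBoard graph. A sketch under that assumption (the model, loss, and data below are placeholders, not the asker's code):

```python
import tensorflow as tf

# Hypothetical model, loss, and data; only the tracing calls matter here.
model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
loss_fn = tf.keras.losses.MeanSquaredError()
optimizer = tf.keras.optimizers.SGD()
writer = tf.summary.create_file_writer("logs/graph_demo")

@tf.function  # graph tracing works on tf.function-decorated steps
def train_step(x, y):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

x = tf.random.normal((4, 3))
y = tf.random.normal((4, 10))

tf.summary.trace_on(graph=True)   # start recording graph ops
loss = train_step(x, y)           # the first call traces the function
with writer.as_default():
    tf.summary.trace_export(name="train_step_graph", step=0)
```

The export must happen after the first call of the tf.function, since that call is what builds the concrete graph.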

ValueError: No gradients provided for any variable - Tensorflow 2.0, Keras

Submitted by 孤街醉人 on 2020-04-18 01:08:11
Question: I am trying to implement a simple sequence-to-sequence model using Keras. However, I keep seeing the following ValueError: ValueError: No gradients provided for any variable: ['simple_model/time_distributed/kernel:0', 'simple_model/time_distributed/bias:0', 'simple_model/embedding/embeddings:0', 'simple_model/conv2d/kernel:0', 'simple_model/conv2d/bias:0', 'simple_model/dense_1/kernel:0', 'simple_model/dense_1/bias:0']. Other questions like "ValueError: No gradients provided for any variable
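This error typically means the loss is disconnected from the trainable variables, e.g. it was computed outside the GradientTape context or through a non-differentiable op such as argmax or a round trip through NumPy. A minimal working pattern for comparison (toy model and data, not the asker's seq2seq code):

```python
import tensorflow as tf

# Toy regression; the key point is that the forward pass and the loss
# are both computed *inside* the GradientTape context.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
opt = tf.keras.optimizers.SGD(0.1)
x = tf.constant([[1.0], [2.0]])
y = tf.constant([[2.0], [4.0]])

with tf.GradientTape() as tape:
    pred = model(x)                              # recorded on the tape
    loss = tf.reduce_mean(tf.square(pred - y))   # differentiable loss

grads = tape.gradient(loss, model.trainable_variables)
# If any entry here is None, that variable is not connected to the loss.
opt.apply_gradients(zip(grads, model.trainable_variables))
```

When using model.compile/fit instead, the same symptom appears if the chosen loss returns integers or detaches from the graph.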

Best practice to write code compatible to both TensorFlow 1 and 2

Submitted by 拥有回忆 on 2020-04-17 22:56:06
Question: The official guide explains how to migrate TF 1 code to TF 2. That is, however, not what I want. I want my code to run fine on both TF 1 and TF 2 (and I only want the non-eager mode). I also slowly want to use some of the new features, but in an optional way. (E.g. the user could pass an option like --use-fancy-new-tf2-feature, which would only work with TF 2. That's fine.) And maybe after one or two years, I would slowly drop the TF 1 support. But I definitely need this transition
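One common approach is to alias tensorflow.compat.v1 and disable eager execution on TF 2, so a single graph/session code path runs under both major versions. A sketch of that pattern (the placeholder graph is just an illustration):

```python
import tensorflow as tf

# One code path for both TF 1 and TF 2, graph (non-eager) mode only.
tf_major = int(tf.__version__.split(".")[0])

if tf_major >= 2:
    import tensorflow.compat.v1 as tf1
    tf1.disable_eager_execution()   # keep the TF1-style graph/session model
else:
    tf1 = tf

graph = tf1.Graph()
with graph.as_default():
    a = tf1.placeholder(tf1.float32, shape=())
    b = a * 2.0
with tf1.Session(graph=graph) as sess:
    result = sess.run(b, feed_dict={a: 21.0})
```

TF 2-only features can then be gated behind a `tf_major >= 2` check, matching the optional --use-fancy-new-tf2-feature idea in the question.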

How to train a model on multi gpus with tensorflow2 and keras?

Submitted by 岁酱吖の on 2020-04-16 04:06:39
Question: I have an LSTM model that I want to train on multiple GPUs. I transformed the code to do this, and in nvidia-smi I could see that it is using all the memory of all the GPUs and each GPU is utilized at around 40%, BUT the estimated training time per batch was almost the same as with 1 GPU. Can someone please guide me and tell me how I can train properly on multiple GPUs? My code: import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import
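A frequent cause of "same per-batch time on N GPUs" is keeping the single-GPU batch size: MirroredStrategy splits each batch across replicas, so the global batch size should be scaled by the replica count. A sketch with a toy LSTM (shapes and hyperparameters are illustrative):

```python
import tensorflow as tf

# Data-parallel training with MirroredStrategy. Key point: scale the
# *global* batch size with the number of replicas, otherwise each GPU
# receives a fraction of the batch and per-step time barely improves.
strategy = tf.distribute.MirroredStrategy()
per_replica_batch = 64
global_batch = per_replica_batch * strategy.num_replicas_in_sync

with strategy.scope():  # variables and optimizer created under the scope
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(32, input_shape=(10, 8)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

x = tf.random.normal((512, 10, 8))
y = tf.random.normal((512, 1))
model.fit(x, y, batch_size=global_batch, epochs=1, verbose=0)
```

For LSTMs specifically, cuDNN-backed layers (the default in TF 2 when constraints are met) matter far more for throughput than adding GPUs to a small model.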

Tensorboard for custom training loop in Tensorflow 2

Submitted by ≯℡__Kan透↙ on 2020-04-16 02:29:31
Question: I want to create a custom training loop in TensorFlow 2 and use TensorBoard for visualization. Here is an example I've created based on the TensorFlow documentation: import tensorflow as tf import datetime import os os.environ["CUDA_VISIBLE_DEVICES"] = "0" # which gpu to use mnist = tf.keras.datasets.mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)) test_dataset = tf.data
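The TF 2 pattern for this is a tf.summary file writer plus Keras metric objects: accumulate the metric per batch, then write a scalar summary each step. A minimal sketch with a stand-in loss (the real loop would compute it from the model):

```python
import datetime
import os
import tensorflow as tf

# Log scalar metrics from a custom training loop with tf.summary.
logdir = os.path.join("logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
writer = tf.summary.create_file_writer(logdir)
train_loss = tf.keras.metrics.Mean("train_loss")

for step in range(3):                      # stand-in for iterating real batches
    loss = tf.constant(1.0 / (step + 1))   # hypothetical per-batch loss value
    train_loss(loss)                       # accumulate the running mean
    with writer.as_default():
        tf.summary.scalar("loss", train_loss.result(), step=step)
```

Pointing TensorBoard at the logs directory then shows the scalar curve; the same writer can also receive tf.summary.image or histogram calls from inside the loop.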

Tensorflow Datasets with string inputs do not preserve data type

Submitted by 只愿长相守 on 2020-04-14 06:17:15
Question: All reproducible code below is run on Google Colab with TF 2.2.0-rc2. Adapting the simple example from the documentation for creating a dataset from a simple Python list: import numpy as np import tensorflow as tf tf.__version__ # '2.2.0-rc2' np.version.version # '1.18.2' dataset1 = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset1: print(element) print(type(element.numpy())) we get the result tf.Tensor(1, shape=(), dtype=int32) <class 'numpy.int32'> tf.Tensor(2, shape=(),
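The string case behaves analogously: elements come back as tf.string tensors, and .numpy() on them yields Python bytes rather than str, which is likely the "not preserved" type the question is about. A short sketch of that behavior:

```python
import tensorflow as tf

# Elements of a string dataset are tf.string tensors; .numpy() returns
# Python bytes, which must be decoded back to str explicitly.
dataset = tf.data.Dataset.from_tensor_slices(["apple", "banana"])
decoded = []
for element in dataset:
    assert element.dtype == tf.string        # TF-side dtype is tf.string
    raw = element.numpy()                    # Python-side value is bytes
    decoded.append(raw.decode("utf-8"))      # bytes -> str
```

This is deliberate: TF strings are arbitrary byte sequences with no attached encoding, so the round trip to str is left to the caller.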