tensorflow

Memory leak with TensorFlow

蹲街弑〆低调 submitted on 2021-02-07 11:24:46
Question: I have a memory leak with TensorFlow. I referred to "Tensorflow : Memory leak even while closing Session?" to address my issue, and I followed the advice in the answer, which seemed to solve the problem. However, it does not work here. In order to reproduce the memory leak, I have created a simple example. First, I use this function (which I got here: "How to get current CPU and RAM usage in Python?") to check the memory use of the Python process:

def memory():
    import os
    import psutil
    pid =
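For reference, a complete version of such a memory-reporting helper might look like the sketch below; the excerpt is cut off at "pid =", so everything past that point is an assumption based on the usual psutil pattern, not the asker's original code.

import os
import psutil

def memory():
    # Report the resident set size of the current Python process in MiB.
    pid = os.getpid()
    process = psutil.Process(pid)
    mem_mib = process.memory_info().rss / (1024 ** 2)
    print('memory use: %.1f MiB' % mem_mib)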

Multithreading - How to use CPU as much as possible?

霸气de小男生 submitted on 2021-02-07 11:08:46
Question: I'm currently implementing a custom TensorFlow op (a custom data fetcher) in C++ in order to speed up my TensorFlow model. Since my model doesn't use the GPU much, I believe I can reach maximal performance by running multiple worker threads concurrently. The problem is that even though I have enough workers, my program doesn't utilize all CPU cores. On my development machine (4 physical cores) it uses about 90% user time and 4% sys time with 4 worker threads and tf.ConfigProto(inter_op
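The excerpt breaks off inside tf.ConfigProto(inter_op; a typical TF1-style configuration for tuning the two session thread pools looks like the sketch below (the thread counts are illustrative assumptions):

import tensorflow as tf

# inter_op controls how many independent ops may run in parallel;
# intra_op controls the threads used inside a single op's kernel.
config = tf.ConfigProto(
    inter_op_parallelism_threads=4,
    intra_op_parallelism_threads=4,
)

with tf.Session(config=config) as sess:
    pass  # run the graph containing the custom op here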

Is there any way to load a local dataset folder directly from Google Drive into Google Colab?

会有一股神秘感。 submitted on 2021-02-07 10:59:31
Question: I couldn't load a custom data folder from Google Drive into Google Colab, even though I mounted Google Drive. Instead of the MNIST dataset, I want to load my own image dataset folder. I have tried the PyDrive wrapper, but I need a simpler solution. Suppose I have a dataset of images inside Google Drive; how do I load it into Google Colab?

from google.colab import drive
drive.mount('/content/gdrive')

then

with open('/content/gdrive/My Drive/foo.txt', 'w') as f:
    f.write('Hello Google Drive!')
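Once the drive is mounted, an image folder can be read like any local directory; a sketch follows (the path, image size, and batch size below are placeholder assumptions):

import tensorflow as tf

# Placeholder path: point this at your own folder under My Drive.
DATA_DIR = '/content/gdrive/My Drive/my_image_dataset'

# Stream images from the mounted folder, one subdirectory per class.
gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1.0 / 255)
dataset = gen.flow_from_directory(
    DATA_DIR, target_size=(128, 128), batch_size=32
)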

How to translate an MLP neural network from TensorFlow to PyTorch

自古美人都是妖i submitted on 2021-02-07 10:45:26
Question: I have built an MLP neural network using TensorFlow, stated as follows:

model_mlp = Sequential()
model_mlp.add(Dense(units=35, input_dim=train_X.shape[1], kernel_initializer='normal', activation='relu'))
model_mlp.add(Dense(units=86, kernel_initializer='normal', activation='relu'))
model_mlp.add(Dense(units=86, kernel_initializer='normal', activation='relu'))
model_mlp.add(Dense(units=10, kernel_initializer='normal', activation='relu'))
model_mlp.add(Dense(units=1))

I want to
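A PyTorch module equivalent to the Keras model above might be sketched as follows; n_features stands in for train_X.shape[1], and the class name is an illustrative assumption:

import torch.nn as nn

class MLP(nn.Module):
    # Same layer sizes as the Keras model: n_features -> 35 -> 86 -> 86 -> 10 -> 1
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 35), nn.ReLU(),
            nn.Linear(35, 86), nn.ReLU(),
            nn.Linear(86, 86), nn.ReLU(),
            nn.Linear(86, 10), nn.ReLU(),
            nn.Linear(10, 1),
        )

    def forward(self, x):
        return self.net(x)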

Tensorflow: How to prefetch data on the GPU from CPU tf.data.Dataset (from_generator)

你说的曾经没有我的故事 submitted on 2021-02-07 10:15:38
Question: I am struggling with the following. I am creating a tf.data.Dataset using the from_generator method. I perform these actions on the CPU, as I don't want to overload my GPU memory. The dataset consists of tuples containing a fixed-length tf.bool 1-D mask (tf.Tensor) and a variable-size tf.float 2-D matrix (tf.Tensor). The loss function is decorated with the following decorator, so I would not assume the variable size is the problem: @tf.function(experimental_relax_shapes=True)
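One way to overlap the CPU-side pipeline with GPU compute is tf.data.experimental.prefetch_to_device; a sketch follows, where the generator name, shapes, and buffer size are illustrative assumptions:

import tensorflow as tf

def my_generator():
    # Hypothetical stand-in for the asker's generator of
    # (fixed-length bool mask, variable-size float matrix) tuples.
    yield ([True, False] * 32, [[0.0] * 16])

dataset = tf.data.Dataset.from_generator(
    my_generator,
    output_types=(tf.bool, tf.float32),
    output_shapes=(tf.TensorShape([64]), tf.TensorShape([None, 16])),
)
# Copy upcoming elements to the GPU while the current step runs.
dataset = dataset.apply(
    tf.data.experimental.prefetch_to_device('/gpu:0', buffer_size=2)
)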

How to run parallel map_fn when eager execution is enabled

自闭症网瘾萝莉.ら submitted on 2021-02-07 09:25:32
Question: Consider the following TensorFlow code snippet:

import time
import numpy as np
import tensorflow as tf

def fn(i):
    # do some junk work
    for _ in range(100):
        i ** 2
    return i

n = 1000
n_jobs = 8
stuff = np.arange(1, n + 1)
eager = False

t0 = time.time()
if eager:
    tf.enable_eager_execution()
res = tf.map_fn(fn, stuff, parallel_iterations=n_jobs)
if not eager:
    with tf.Session() as sess:
        res = sess.run(res)
        print(sum(res))
else:
    print(sum(res))
dt = time.time() - t0
print("(eager=%s) Took %ims" %
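For what it's worth, in eager mode tf.map_fn executes its iterations sequentially; in TF 2.x one workaround is to wrap the call in tf.function so it runs as a graph, where parallel_iterations can apply. A sketch (the wrapper name and iteration count are illustrative assumptions):

import tensorflow as tf

@tf.function  # builds a graph, where parallel_iterations can take effect
def mapped(x):
    return tf.map_fn(lambda i: i ** 2, x, parallel_iterations=8)

result = mapped(tf.range(1, 1001))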

How to use TensorBoard and summary operations with the tf.layers module

只愿长相守 submitted on 2021-02-07 09:18:08
Question: I have followed the TensorFlow Layers tutorial to create a CNN for MNIST digit classification using TensorFlow's tf.layers module. Now I'm trying to learn how to use TensorBoard from "TensorBoard: Visualizing Learning". Perhaps that tutorial hasn't been updated recently, because it says its example code is a modification of the Layers tutorial's and links to it, but the code is completely different: it manually defines a single-hidden-layer fully-connected network. The TensorBoard tutorial shows how
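Summary ops attach to tf.layers outputs like any other tensors; a TF1-style sketch follows (the layer sizes, summary names, and log directory are illustrative assumptions):

import tensorflow as tf

features = tf.placeholder(tf.float32, [None, 784])
labels = tf.placeholder(tf.int64, [None])

# tf.layers outputs are ordinary tensors, so summaries attach directly.
hidden = tf.layers.dense(features, units=128, activation=tf.nn.relu)
logits = tf.layers.dense(hidden, units=10)
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

tf.summary.histogram('hidden_activations', hidden)
tf.summary.scalar('loss', loss)

merged = tf.summary.merge_all()
writer = tf.summary.FileWriter('/tmp/logdir', tf.get_default_graph())
# In the training loop: s = sess.run(merged, feed_dict=...);
# then writer.add_summary(s, global_step)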
