tensorflow-gpu

How to predict a specific image using MNIST

只愿长相守 submitted on 2019-12-24 19:29:10
Question: I am new to TensorFlow, and I think I got the right answer, but I am missing something minimal that I can't find online. I hope someone can send me a reference or point me to what I am missing.

    import tensorflow as tf
    import numpy as np
    from tensorflow.examples.tutorials.mnist import input_data

    batch_size = 128
    test_size = 256

    def init_weights(shape):
        return tf.Variable(tf.random_normal(shape, stddev=0.01))

    def model(X, w, w2, w3, w4, w_o, p_keep_conv, p_keep_hidden):
        l1a = tf.nn.relu(tf.nn
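A minimal sketch of predicting a single image with a model like the one above. It assumes the network has already been trained inside a session sess, that the final logits tensor is named py_x, and that X, p_keep_conv, and p_keep_hidden are the input and dropout placeholders; those names are assumptions, since the question's code is cut off before they appear.

    # Hypothetical names: sess, X, py_x, p_keep_conv, p_keep_hidden come from a
    # trained model of the shape sketched in the question.
    mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
    single_image = mnist.test.images[0].reshape(-1, 28, 28, 1)  # batch of one image

    prediction = tf.argmax(py_x, 1)
    predicted_digit = sess.run(prediction,
                               feed_dict={X: single_image,
                                          p_keep_conv: 1.0,    # disable dropout at inference
                                          p_keep_hidden: 1.0})
    print("Predicted digit:", predicted_digit[0])

The key point is to keep the input rank the same as during training (a batch dimension of 1) and to turn dropout off by feeding keep probabilities of 1.0.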

ImportError: libcudnn.so.6: cannot open shared object file: No such file or directory

时光毁灭记忆、已成空白 submitted on 2019-12-24 19:25:06
Question: I get the following error while importing TensorFlow.

    >>> import tensorflow
    Traceback (most recent call last):
      File "/home/jarvis/anaconda3/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 41, in <module>
        from tensorflow.python.pywrap_tensorflow_internal import *
      File "/home/jarvis/anaconda3/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
        _pywrap_tensorflow_internal = swig_import_helper()
      File "/home/jarvis/anaconda3/lib
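A quick way to confirm whether the dynamic loader can actually find libcudnn.so.6 is to try loading it directly; this is only a diagnostic sketch (assuming a Linux system), not a fix.

    import ctypes

    try:
        # Succeeds only if libcudnn.so.6 is on the loader's search path
        # (e.g. in LD_LIBRARY_PATH or ldconfig's cache).
        ctypes.CDLL("libcudnn.so.6")
        print("libcudnn.so.6 found")
    except OSError as e:
        print("libcudnn.so.6 not found:", e)

If the load fails, the usual remedy is installing cuDNN 6 for the CUDA version this TensorFlow build expects and making its library directory visible to the loader.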

Convert tensorflow tensor of ASCII codes to string

本秂侑毒 submitted on 2019-12-23 21:40:13
Question: I am quite new to TensorFlow, and I couldn't find what I wanted on tensorflow.org or in online discussions. I have a tensor of ASCII codes that I would like to convert to a string (each tensor is a word). In NumPy I could just iterate and use chr(), but a tensor object is not iterable. Is there a function that operates on the whole tensor, or a method that does not require evaluation? Thanks! Python 3.6.1 and TensorFlow 1.2.1

Source: https://stackoverflow.com/questions/45593967/convert-tensorflow-tensor-of
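One pattern that keeps the conversion inside the graph (a sketch, assuming the codes are plain ASCII in the range 0-127 and held in an int32 tensor): build a constant table of single-character strings, gather into it, and join the result.

    import tensorflow as tf

    # Lookup table mapping an ASCII code to the corresponding one-character string.
    ascii_table = tf.constant([chr(i) for i in range(128)])

    codes = tf.constant([104, 101, 108, 108, 111], dtype=tf.int32)  # "hello"
    chars = tf.gather(ascii_table, codes)   # tensor of single-character strings
    word = tf.reduce_join(chars)            # scalar string tensor

    with tf.Session() as sess:
        print(sess.run(word))  # b'hello'

The string is still only materialized when the tensor is evaluated, but no Python-side iteration over the tensor is needed.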

Dataset API for TensorFlow: Variable-sized Input

余生颓废 submitted on 2019-12-23 10:06:39
Question: I have my entire dataset in memory as a list of tuples, where each tuple corresponds to a batch of fixed size N, i.e. (x[i], label[i], length[i]):

x[i]: NumPy array of shape [N, W, F]; there are N examples, each with W timesteps, and all timesteps have a fixed number of features F.
label[i]: class for each example in the batch; shape [N,].
length[i]: number of timesteps (W) for each example in the batch; shape [N,].

Main problem: Across the batches, W
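A sketch of one way to feed such data (assuming TensorFlow 1.4+ and that data is the in-memory list of (x, label, length) tuples described above; the feature count F and the dummy batches are made up for illustration): Dataset.from_generator with None in the timestep dimension lets W vary from batch to batch.

    import tensorflow as tf
    import numpy as np

    F = 8  # features per timestep (assumed for the sketch)

    # Dummy stand-in for the in-memory list described in the question:
    # two batches of N=4 examples, with a different W per batch.
    data = [(np.random.rand(4, 10, F).astype(np.float32),
             np.zeros(4, np.int32), np.full(4, 10, np.int32)),
            (np.random.rand(4, 7, F).astype(np.float32),
             np.ones(4, np.int32), np.full(4, 7, np.int32))]

    def gen():
        for x, label, length in data:
            yield x, label, length

    dataset = tf.data.Dataset.from_generator(
        gen,
        output_types=(tf.float32, tf.int32, tf.int32),
        output_shapes=(tf.TensorShape([None, None, F]),  # [N, W, F], W free to vary
                       tf.TensorShape([None]),            # [N]
                       tf.TensorShape([None])))           # [N]

    iterator = dataset.make_one_shot_iterator()
    x_batch, label_batch, length_batch = iterator.get_next()

Because each yielded element is already a full batch, no extra batching step is needed and each batch keeps its own W.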

Is there any way to use tensorflow-gpu with Intel(R) HD Graphics 520?

瘦欲@ submitted on 2019-12-22 03:43:24
Question: I am working on my master's project, which uses Keras with the TensorFlow backend. I have Intel(R) HD Graphics 520, so I am not able to use tensorflow-gpu. The CPU version is working fine. Is there any way to use tensorflow-gpu with the Intel(R) HD Graphics 520?

Answer 1: TensorFlow GPU support needs the Nvidia CUDA and cuDNN packages installed. For GPU-accelerated training you will need a dedicated GPU; Intel onboard graphics can't be used for that purpose. You can see the full requirements for tensorflow-gpu
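A quick way to check which devices the installed TensorFlow build can actually see (a diagnostic sketch; on a machine with only Intel integrated graphics it will report no GPU devices):

    from tensorflow.python.client import device_lib

    devices = device_lib.list_local_devices()
    gpus = [d.name for d in devices if d.device_type == "GPU"]
    print("Visible GPU devices:", gpus or "none")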

gRPC causes training to pause in an individual worker (distributed TensorFlow, synchronised)

非 Y 不嫁゛ submitted on 2019-12-22 01:01:00
Question: I am trying to train a model synchronously in a distributed fashion for data parallelism. There are 4 GPUs in my machine. Each GPU should run a worker that trains on a separate, non-overlapping subset of the data (between-graph replication). The main data file is split into 16 smaller TFRecord files, and each worker is supposed to process 4 different files. The problem is that training freezes independently and at different times in each worker process. They freeze at some point. One of the 'ps'
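For context, a minimal between-graph replication skeleton of the kind the question describes (a sketch only: the host:port values, task indices, and single parameter server are made up, since the question does not show its cluster definition):

    import tensorflow as tf

    # One parameter server and four workers, one worker per GPU (placeholder addresses).
    cluster = tf.train.ClusterSpec({
        "ps":     ["localhost:2220"],
        "worker": ["localhost:2221", "localhost:2222",
                   "localhost:2223", "localhost:2224"],
    })

    job_name, task_index = "worker", 0  # would differ per process
    server = tf.train.Server(cluster, job_name=job_name, task_index=task_index)

    if job_name == "ps":
        server.join()  # the parameter server only serves variables
    else:
        with tf.device(tf.train.replica_device_setter(
                worker_device="/job:worker/task:%d" % task_index,
                cluster=cluster)):
            # Each worker builds its own copy of the graph here;
            # variables are placed on the ps job and shared over gRPC.
            pass

In this layout every variable read and update travels through the ps task over gRPC, which is where such stalls are usually investigated first.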

Very low GPU usage during training in Tensorflow

怎甘沉沦 submitted on 2019-12-20 20:02:12
Question: I am trying to train a simple multi-layer perceptron for a 10-class image classification task, which is part of an assignment for the Udacity Deep Learning course. To be more precise, the task is to classify letters rendered in various fonts (the dataset is called notMNIST). The code I ended up with looks fairly simple, but no matter what, I always get very low GPU usage during training. I measure the load with GPU-Z and it shows just 25-30%. Here is my current code:

    graph = tf.Graph()
    with
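For reference, a generic single-hidden-layer perceptron for 10-class classification of flattened 28x28 images (a sketch, not the questioner's code; the hidden size and learning rate are arbitrary):

    import tensorflow as tf

    image_size, num_labels, hidden_units = 28 * 28, 10, 1024

    graph = tf.Graph()
    with graph.as_default():
        x = tf.placeholder(tf.float32, shape=[None, image_size])
        y = tf.placeholder(tf.float32, shape=[None, num_labels])

        # Hidden layer with ReLU activation.
        w1 = tf.Variable(tf.truncated_normal([image_size, hidden_units], stddev=0.1))
        b1 = tf.Variable(tf.zeros([hidden_units]))
        hidden = tf.nn.relu(tf.matmul(x, w1) + b1)

        # Output layer producing the 10-class logits.
        w2 = tf.Variable(tf.truncated_normal([hidden_units, num_labels], stddev=0.1))
        b2 = tf.Variable(tf.zeros([num_labels]))
        logits = tf.matmul(hidden, w2) + b2

        loss = tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
        optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

A network this small often cannot keep a GPU busy on its own, so utilization in the 25-30% range is not unusual for this kind of model.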

First tf.session.run() performs dramatically differently from later runs. Why?

让人想犯罪 __ submitted on 2019-12-20 12:39:10
Question: Here's an example to clarify what I mean:

First session.run(): first run of a TensorFlow session
Later session.run(): later runs of a TensorFlow session

I understand TensorFlow is doing some initialization here, but I'd like to know where in the source this manifests. This occurs on CPU as well as GPU, but the effect is more prominent on GPU. For example, in the case of an explicit Conv2D operation, the first run has a much larger quantity of Conv2D operations in the GPU stream. In fact, if I
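A small sketch that makes the effect visible by timing the same session.run() call repeatedly (any op would do; a convolution is used here only because the question mentions Conv2D, and the tensor sizes are arbitrary):

    import time
    import tensorflow as tf

    x = tf.random_normal([32, 128, 128, 3])
    kernel = tf.random_normal([3, 3, 3, 64])
    conv = tf.nn.conv2d(x, kernel, strides=[1, 1, 1, 1], padding="SAME")

    with tf.Session() as sess:
        for i in range(5):
            start = time.time()
            sess.run(conv)
            print("run %d: %.3f s" % (i, time.time() - start))
    # The first iteration typically takes noticeably longer than the rest:
    # graph pruning/optimization, memory allocation, and (on GPU) kernel
    # selection happen lazily on the first execution of a fetch.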

How to run Tensorflow Estimator on multiple GPUs with data parallelism

社会主义新天地 submitted on 2019-12-20 08:59:57
Question: I have a standard TensorFlow Estimator with some model and want to run it on multiple GPUs instead of just one. How can this be done using data parallelism? I searched the TensorFlow docs but did not find an example; only sentences saying that it would be easy with Estimator. Does anybody have a good example using tf.learn.Estimator? Or a link to a tutorial or so?

Answer 1: I think tf.contrib.estimator.replicate_model_fn is a cleaner solution. The following is from tf.contrib.estimator
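A sketch of how replicate_model_fn is typically wired in (assuming a TensorFlow 1.x release where this contrib symbol exists; my_model_fn, the dense model, and the optimizer choice are illustrative, not taken from the answer above):

    import tensorflow as tf

    def my_model_fn(features, labels, mode):
        logits = tf.layers.dense(features["x"], 10)
        loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
        # The optimizer must be wrapped in TowerOptimizer so gradients
        # computed on each GPU tower are aggregated correctly.
        optimizer = tf.contrib.estimator.TowerOptimizer(
            tf.train.AdamOptimizer(1e-3))
        train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

    # replicate_model_fn splits each input batch across the available GPUs
    # (data parallelism) and merges the resulting EstimatorSpecs.
    estimator = tf.estimator.Estimator(
        model_fn=tf.contrib.estimator.replicate_model_fn(my_model_fn))

The rest of the Estimator workflow (input_fn, train, evaluate) stays unchanged, which is why the answer calls this the cleaner route.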

Multi-threading in the Dataset API

寵の児 submitted on 2019-12-20 03:15:57
Question: TL;DR: how do I ensure that data is loaded in a multi-threaded manner when using the Dataset API in TensorFlow 1.4?

Previously I did something like this with my images on disk:

    filename_queue = tf.train.string_input_producer(filenames)
    image_reader = tf.WholeFileReader()
    _, image_file = image_reader.read(filename_queue)
    imsize = 120
    image = tf.image.decode_jpeg(image_file, channels=3)
    image = tf.image.convert_image_dtype(image, dtype=tf.float32)
    image_r = tf.image.resize_images(image, [imsize,
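The rough Dataset-API equivalent of that queue pipeline, with parallel decoding (a sketch assuming TensorFlow 1.4, where map() accepts num_parallel_calls; the filename list, thread count, and batch size are placeholders, while the JPEG decoding and the target size of 120 follow the question's snippet):

    import tensorflow as tf

    imsize = 120

    def parse_image(filename):
        image_file = tf.read_file(filename)
        image = tf.image.decode_jpeg(image_file, channels=3)
        image = tf.image.convert_image_dtype(image, dtype=tf.float32)
        return tf.image.resize_images(image, [imsize, imsize])

    filenames = ["img_0.jpg", "img_1.jpg"]  # placeholder list of image paths
    dataset = (tf.data.Dataset.from_tensor_slices(filenames)
               .map(parse_image, num_parallel_calls=4)  # decode on 4 threads
               .batch(32)
               .prefetch(1))  # overlap preprocessing with training

    images = dataset.make_one_shot_iterator().get_next()

num_parallel_calls controls how many map invocations run concurrently, which replaces the thread configuration that string_input_producer and the queue runners used to handle.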