tensorflow

TensorFlow failed to create a NewWriteableFile when retraining Inception

故事扮演 · submitted on 2021-02-08 12:59:13
Question: I am following this tutorial: https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/?utm_campaign=chrome_series_machinelearning_063016&utm_source=gdev&utm_medium=yt-desc#4 I am running this part of the code:

    python retrain.py \
      --bottleneck_dir=bottlenecks \
      --how_many_training_steps=500 \
      --model_dir=inception \
      --summaries_dir=training_summaries/basic \
      --output_graph=retrained_graph.pb \
      --output_labels=retrained_labels.txt \
      --image_dir=flower_photos

Here is the error that…
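The truncated error is TensorFlow's "Failed to create a NewWriteableFile" message. A common cause (an assumption here, since the traceback is cut off) is that one of the output directories passed to retrain.py does not exist or is not writable. A minimal pre-flight sketch:

```python
import os

# Directory flags taken from the retrain.py command above; creating them
# up front avoids NewWriteableFile failures caused by missing output paths.
for d in ("bottlenecks", "inception", "training_summaries/basic"):
    os.makedirs(d, exist_ok=True)  # no-op if the directory already exists
    assert os.access(d, os.W_OK), f"{d} is not writable"
```

Run this from the same working directory as retrain.py, since all the flags use relative paths.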

module 'tensorflow' has no attribute 'logging'

感情迁移 · submitted on 2021-02-08 12:17:58
Question: I'm trying to run TensorFlow code in v2.0 and I'm getting the following error: AttributeError: module 'tensorflow' has no attribute 'logging'. I don't want to simply remove it from the code. Why was this code removed? What should I do instead? Answer 1: tf.logging was for Logging and Summary Operations, and in TF 2.0 it has been removed in favor of the open-source absl-py, and to keep the main tf.* namespace to functions that will be used more often. In TF 2, lesser-used functions are gone or…
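For new code, the absl-py logger (which ships as a TensorFlow dependency) is the intended replacement the answer refers to. A minimal sketch:

```python
# tf.logging is gone in TF 2.x; absl-py's logger is the suggested stand-in.
from absl import logging

logging.set_verbosity(logging.INFO)
logging.info("training step %d complete", 500)
```

For old scripts you cannot rewrite, many TF 2.x releases also keep the legacy API under `tf.compat.v1.logging`, so a one-line shim at the top of the file (`tf.logging = tf.compat.v1.logging`) may be enough.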

[TensorFlow usage notes 2]: how to fix the broken input_data.py code in TensorFlow

帅比萌擦擦* · submitted on 2021-02-08 12:03:39
Reference articles: (1) [TensorFlow usage notes 2]: how to fix the broken input_data.py code in TensorFlow (2) https://www.cnblogs.com/joelwang/p/10690226.html Noting this down for future reference. Source: oschina Link: https://my.oschina.net/u/4432649/blog/4950052

Tensorflow 2.0: How can I fully customize a Tensorflow training loop like I can with PyTorch?

假装没事ソ · submitted on 2021-02-08 11:42:17
Question: I used to use TensorFlow a lot, but moved over to PyTorch because it was just a lot easier to debug. The nice thing I found with PyTorch is that I have to write my own training loop, so I can step through the code and find errors. I can fire up pdb and check the tensor shapes and transformations, etc., without difficulty. In TensorFlow I was using the model.fit() function all the time, so any error message I got was like 6 pages of C code where the error message did not give me any…
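The PyTorch-style loop the question asks for exists in TF 2.x via tf.GradientTape: you write the forward pass, loss, and update yourself and can break anywhere. A minimal sketch with toy data (the model and data here are illustrative assumptions, not from the question):

```python
import tensorflow as tf

# Toy regression data: the target is the sum of the three input features.
x = tf.random.normal((64, 3))
y = tf.reduce_sum(x, axis=1, keepdims=True)

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
loss_fn = tf.keras.losses.MeanSquaredError()

losses = []
for step in range(50):
    with tf.GradientTape() as tape:
        pred = model(x)            # set a pdb breakpoint here and inspect
        loss = loss_fn(y, pred)    # pred.shape, loss.numpy(), etc.
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    losses.append(float(loss))
```

Because everything runs eagerly, a plain `pdb.set_trace()` inside the loop works just like in PyTorch; wrap the step in `@tf.function` only after debugging, since graph tracing hides Python-level breakpoints.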

Run parallel op with different inputs and same placeholder

送分小仙女□ · submitted on 2021-02-08 11:34:40
Question: I need to calculate more than one accuracy at the same time, concurrently.

    correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))

This piece of code is the same as the MNIST example in the TensorFlow tutorial, but instead of having:

    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))

I have two placeholder…
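One way to evaluate several accuracies at once is to build one accuracy op per model, share the same placeholders, and fetch them in a single sess.run call; TensorFlow then runs the independent ops within one graph evaluation. A sketch using the TF1 compat API (the tiny two-model graph below is an illustrative assumption):

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Shared placeholders, two sets of weights, two accuracy ops.
x = tf.placeholder(tf.float32, [None, 4])
y_ = tf.placeholder(tf.float32, [None, 2])

y1 = tf.matmul(x, tf.Variable(tf.zeros([4, 2])))
y2 = tf.matmul(x, tf.Variable(tf.ones([4, 2])))

def accuracy(logits):
    correct = tf.equal(tf.argmax(logits, 1), tf.argmax(y_, 1))
    return tf.reduce_mean(tf.cast(correct, tf.float32))

acc1, acc2 = accuracy(y1), accuracy(y2)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    feed = {x: np.random.rand(8, 4),
            y_: np.eye(2)[np.random.randint(0, 2, size=8)]}
    a1, a2 = sess.run([acc1, acc2], feed_dict=feed)  # one call, both results
```

Passing a list of fetches to sess.run evaluates everything with a single feed of the placeholders, which is usually what "concurrently" needs in practice.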

How can I pull/push data between gpu and cpu in tensorflow

☆樱花仙子☆ · submitted on 2021-02-08 11:27:55
Question: I used a temporary tensor to store data in my customized GPU-based op. For debugging purposes, I want to print the data of this tensor with a traditional printf inside C++. How can I pull this GPU-based tensor to the CPU and then print its contents? Thank you very much. Answer 1: If by temporary you mean allocate_temp instead of allocate_output, there is no way of fetching the data on the Python side. I usually return the tensor itself during debugging so that a simple sess.run fetches the result. Otherwise,…
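On the Python side, the general pull-to-host pattern (a sketch adjacent to the answer's "return the tensor and fetch it" advice, not the C++ printf route) is to place an identity op on the CPU; in eager TF 2.x it looks like:

```python
import tensorflow as tf

# Pick a GPU if one is visible; otherwise fall back to the CPU.
device = "/GPU:0" if tf.config.list_physical_devices("GPU") else "/CPU:0"
with tf.device(device):
    t = tf.constant([[1.0, 2.0], [3.0, 4.0]])

with tf.device("/CPU:0"):
    host_copy = tf.identity(t)  # forces a device-to-host copy when t is on GPU

print(host_copy.numpy())        # .numpy() hands you host-resident data
```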

Why Anaconda has separate packages for Tensorflow with and without GPU, and should I use conda or pip?

一个人想着一个人 · submitted on 2021-02-08 11:20:51
Question: Anaconda has different packages for TensorFlow with and without GPU support. In particular, to install TensorFlow with GPU support, you run:

    conda install tensorflow-gpu

while for the non-GPU version, you install:

    conda install tensorflow

Checking the version of the installed package shows that conda installs TensorFlow 2.1, but as of today the latest version of TensorFlow is 2.3. Furthermore, as can be seen in the TensorFlow official documentation, the latest version can be installed…
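Whichever installer you pick, it is worth verifying afterwards which build you actually got and whether it can see a GPU (the exact versions mentioned above depend on when you install):

```python
import tensorflow as tf

# Prints the installed build and any GPUs TensorFlow can use; an empty
# list means a CPU-only build or missing CUDA drivers.
print(tf.__version__)
print(tf.config.list_physical_devices("GPU"))
```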

TypeError when trying to predict labels with argmax

落花浮王杯 · submitted on 2021-02-08 11:20:25
Question: I have successfully followed this transfer learning tutorial to make my own classifier with two classes, "impressionism" and "modernism". Now I am trying to get a label for my test image, applying advice from this thread:

    y_prob = model.predict(new_image)
    y_prob

gives this output:

    array([[3.1922062e-04, 9.9968076e-01]], dtype=float32)

    y_classes = y_prob.argmax(axis=-1)
    y_classes

gives this output:

    array([1])

    # create a list containing the class labels
    labels = ['modernism', 'impressionism']
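The TypeError itself is cut off above, but a common cause (an assumption here) is indexing the Python label list with the whole NumPy array that argmax returns rather than a scalar int. A self-contained sketch reproducing the thread's arrays:

```python
import numpy as np

# The probabilities and labels quoted in the question above.
y_prob = np.array([[3.1922062e-04, 9.9968076e-01]], dtype=np.float32)
labels = ['modernism', 'impressionism']

y_classes = y_prob.argmax(axis=-1)     # array([1])
predicted = labels[int(y_classes[0])]  # take one element, cast to int
print(predicted)                       # prints "impressionism"
```

`labels[y_classes]` fails because a list cannot be indexed by an array; extracting the single element first makes it an ordinary list lookup.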