tensorboard

Keras - monitoring quantities with TensorBoard during training

Submitted by 孤人 on 2019-12-08 02:06:10
Question: With TensorFlow it is possible to monitor quantities during training using tf.summary. Is it possible to do the same with Keras? Could you include an example by modifying the code at https://github.com/fchollet/keras/blob/master/examples/variational_autoencoder.py and monitoring the KL loss (defined at line 53)? Thank you in advance!

Answer 1: Have you tried the TensorBoard callback? [1]

tensorboard = keras.callbacks.TensorBoard(log_dir='./logs', histogram_freq=1, write_graph=True, write_images
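For reference, the quantity the question wants to monitor is the KL term of the VAE loss. As a minimal, framework-free sketch (numpy instead of Keras, with hypothetical z_mean/z_log_var inputs), this is the per-batch value one would log alongside the reconstruction loss:

```python
import numpy as np

def kl_loss(z_mean, z_log_var):
    # KL divergence between N(z_mean, exp(z_log_var)) and N(0, 1),
    # summed over latent dimensions and averaged over the batch.
    per_sample = -0.5 * np.sum(
        1 + z_log_var - np.square(z_mean) - np.exp(z_log_var), axis=-1)
    return float(np.mean(per_sample))

# When the mean is 0 and the log-variance is 0, the posterior equals
# the prior and the KL term is exactly 0.
print(kl_loss(np.zeros((4, 2)), np.zeros((4, 2))))  # 0.0
```

In a Keras model, exposing this value as a named loss or metric is what makes it show up as a separate curve in the TensorBoard callback's scalar tab.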

in add_summary for value in summary.value: AttributeError: 'Tensor' object has no attribute 'value'

Submitted by ≡放荡痞女 on 2019-12-07 21:09:54
Question: This is a very basic TensorBoard scalar log:

import numpy as np
import tensorflow as tf

a = np.arange(10)
x = tf.convert_to_tensor(a, dtype=tf.float32)
x_summ = tf.summary.scalar("X", x)
writer = tf.summary.FileWriter('/tmp/logdir')
writer.add_summary(x_summ)

However, I get an error in add_summary: for value in summary.value: AttributeError: 'Tensor' object has no attribute 'value'. Any solution for this? The TensorFlow documentation says a ValueError is raised when the summary tensor has a wrong
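For context, add_summary expects the serialized Summary produced by evaluating the summary op in a session (sess.run(x_summ)), not the op itself, and tf.summary.scalar additionally requires a rank-0 value, whereas np.arange(10) is a length-10 vector. A TensorFlow-free sketch of the reduction step (the scalar one would actually run and log per iteration):

```python
import numpy as np

a = np.arange(10)

# A scalar summary needs a single number per logging step, so reduce
# the array first (mean here; max or min are also common choices).
value_to_log = float(np.mean(a))

assert np.ndim(value_to_log) == 0  # rank-0, suitable for a scalar summary
print(value_to_log)  # 4.5
```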

tensorflow conv2d memory consumption explain?

Submitted by 蓝咒 on 2019-12-07 20:33:29
output = tf.nn.conv2d(input, weights, strides=[1,3,3,1], padding='VALID')

My input has shape 200x225x225x1 and weights is 15x15x1x64, so the output has shape 200x71x71x64, since (225 - 15)/3 + 1 = 71. TensorBoard shows that this operation consumes 768 MB in total (see pic below). Taking into account the sizes of the input (38.6 MB), weights (0.06 MB), and output (246.2 MB), the total memory consumption should not exceed 300 MB. So where does the rest of the memory consumption come from? Although I'm not able to reproduce your graph and values based on the information provided, it's possible that
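The question's size estimates can be rechecked directly. A sketch of the arithmetic, assuming float32 tensors (4 bytes per element) and MB meaning MiB (2**20 bytes); the three named tensors indeed sum to well under 300 MB, so the remaining ~480 MB would have to come from temporaries the convolution allocates internally (for example an im2col-style workspace), though the answer above is cut off before saying so:

```python
def mib(shape, bytes_per_elem=4):
    # Size of a float32 tensor of the given shape, in MiB.
    n = 1
    for d in shape:
        n *= d
    return n * bytes_per_elem / 2**20

inp = mib((200, 225, 225, 1))   # ~38.6 MiB
w = mib((15, 15, 1, 64))        # ~0.06 MiB
out = mib((200, 71, 71, 64))    # ~246 MiB

print(round(inp, 1), round(w, 2), round(inp + w + out, 1))
```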

Tensorboard TypeError: __init__() got an unexpected keyword argument 'serialized_options'

Submitted by 冷暖自知 on 2019-12-07 18:35:18
Question: I am using tensorflow version 1.3.0 and tensorboard version 1.10.0. I just updated my tensorboard version, and after the update, when I try to start tensorboard, I get the following error message:

Traceback (most recent call last):
  File "c:\users\sztaki_user\anaconda3\envs\tensorflow\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\sztaki_user\anaconda3\envs\tensorflow\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\Sztaki
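The 'serialized_options' keyword is accepted only by newer protobuf code, so this error typically signals a version gap: TensorBoard 1.10 ships protobuf-generated files that the much older stack installed with TensorFlow 1.3 cannot load. As a rough, hypothetical sketch of the sanity check (this is a heuristic I am assuming, not an official compatibility rule; the usual fix is to install a TensorBoard from the same release series as TensorFlow):

```python
def parse(version):
    # Compare only the (major, minor) release series, e.g. "1.10.0" -> (1, 10).
    return tuple(int(p) for p in version.split(".")[:2])

def compatible(tf_version, tb_version):
    # Heuristic: flag setups where TensorBoard is newer than TensorFlow.
    return parse(tb_version) <= parse(tf_version)

print(compatible("1.3.0", "1.10.0"))   # False: TB 1.10 is too new for TF 1.3
print(compatible("1.10.0", "1.10.0"))  # True
```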

Tensorboard histograms to matplotlib

Submitted by 本秂侑毒 on 2019-12-07 13:21:21
Question: I would like to "dump" the TensorBoard histograms and plot them via matplotlib, to get plots better suited to a scientific paper. I managed to hack my way through the Summary file using tf.train.summary_iterator and dump the histogram that I wanted (a tensorflow.core.framework.summary_pb2.HistogramProto object). By doing that, and implementing what the JavaScript code does with the data (https://github.com/tensorflow/tensorboard/blob/c2fe054231fe77f3a5b05dbc519f713d2e738d1c
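As a sketch of the conversion step, assuming the HistogramProto's bucket_limit (right edge of each bucket) and bucket (count per bucket) fields have already been extracted: the counts can be turned into edge/width arrays that matplotlib's bar() accepts. The values below are hypothetical, and matplotlib itself is omitted so the sketch stays self-contained:

```python
import numpy as np

# Hypothetical values pulled from a HistogramProto.
bucket_limit = [0.1, 0.2, 0.4, 0.8]
bucket = [5.0, 12.0, 7.0, 1.0]

# Reconstruct left edges and widths. The first bucket's left edge is not
# stored in the proto, so mirror the first width as an approximation.
rights = np.asarray(bucket_limit)
widths = np.diff(rights, prepend=rights[0] - (rights[1] - rights[0]))
lefts = rights - widths

# These arrays can then be fed to matplotlib:
#   plt.bar(lefts, bucket, width=widths, align='edge')
print(lefts, widths)
```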

Number of tensors mismatch for embeddings in the TensorBoard callback of Keras

Submitted by 岁酱吖の on 2019-12-07 09:21:19
Question: I am using the CIFAR-10 dataset, so there are 10000 test images. I successfully created a .tsv file containing the metadata: the test set's labels (as human-readable text, not indexes), one on each of the 10000 rows. However, in TensorBoard, when I open the embedding tab, I get this error: Number of tensors (16128) do not match the number of lines in metadata (10000). But I would expect the embeddings to be computed on the test set, which has length 10000, as in the .tsv file I made
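Whatever produced the extra rows, the projector's constraint itself is simple: the embedding tensor's first dimension must equal the number of metadata lines. A small sanity check one can run before launching TensorBoard (function and variable names here are hypothetical, not part of the Keras API):

```python
import numpy as np

def check_embedding_metadata(embeddings, metadata_lines):
    # The projector requires exactly one metadata row per embedded vector.
    n_vectors = embeddings.shape[0]
    n_labels = len(metadata_lines)
    if n_vectors != n_labels:
        raise ValueError(
            "Number of tensors (%d) do not match the number of lines in "
            "metadata (%d)" % (n_vectors, n_labels))
    return n_vectors

labels = ["label_%d" % i for i in range(10000)]
emb_ok = np.zeros((10000, 64), dtype=np.float32)
print(check_embedding_metadata(emb_ok, labels))  # 10000
```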

Tensorflow recommended system specifications?

Submitted by 妖精的绣舞 on 2019-12-07 05:42:17
Question: I am getting started with the installation of Tensorflow on my RHEL 6.5 box, but it turns out that Tensorflow needs glibc >= 2.17, and the default glibc on RHEL 6.5 is 2.12. I was wondering if anybody could help me with minimum/recommended system specifications for TensorFlow? Answer 1: The TensorFlow requirements are listed here, but they do not recommend a particular operating system or glibc version. The best-supported operating systems are Ubuntu 14.04 64-bit and Mac OS X 10.10 (Yosemite) and

launching tensorboard from google cloud datalab

Submitted by ε祈祈猫儿з on 2019-12-07 03:53:58
Question: I need help launching TensorBoard from TensorFlow running on Datalab. My code is the following (everything is on Datalab):

import tensorflow as tf

with tf.name_scope('input'):
    print("X_np")
    X_np = tf.placeholder(tf.float32, shape=[None, num_of_features], name="input")
with tf.name_scope('weights'):
    print("W is for weights & - 15 number of diseases")
    W = tf.Variable(tf.zeros([num_of_features, 15]), name="W")
with tf.name_scope('biases'):
    print("b")
    # todo: automate for more diseases

tensorboard with numpy array

Submitted by 女生的网名这么多〃 on 2019-12-06 19:01:14
Question: Can someone give an example of how to use TensorBoard to visualize a numpy array's values? There is a related question here, but I don't really get it: Tensorboard logging non-tensor (numpy) information (AUC). For example, if I have:

for i in range(100):
    foo = np.random.rand(3, 2)

how can I keep tracking the distribution of foo with TensorBoard over the 100 iterations? Can someone give a code example? Thanks. Answer 1: For simple (scalar) values, you can use this recipe:

summary_writer = tf.train.SummaryWriter(FLAGS
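For the distribution side, a hedged sketch: TensorFlow 1.x let you log full distributions by filling a tf.HistogramProto inside a tf.Summary, and all the statistics that proto wants (min, max, count, sum, sum of squares, bucket counts) can be computed with numpy alone. TensorFlow itself is omitted below so the sketch stays self-contained:

```python
import numpy as np

def histogram_stats(values, bins=10):
    # The fields a TF HistogramProto expects, computed purely with numpy.
    values = np.asarray(values).ravel()
    counts, edges = np.histogram(values, bins=bins)
    return {
        "min": float(values.min()),
        "max": float(values.max()),
        "num": int(values.size),
        "sum": float(values.sum()),
        "sum_squares": float(np.sum(values ** 2)),
        "bucket_limit": edges[1:].tolist(),  # right edge of each bucket
        "bucket": counts.tolist(),
    }

np.random.seed(0)
for i in range(100):
    foo = np.random.rand(3, 2)
    stats = histogram_stats(foo)
    # In TF 1.x these fields would be copied into a tf.HistogramProto and
    # written with writer.add_summary(..., global_step=i) each iteration.
print(stats["num"])  # 6
```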

Tensorboard: No graph definition files were found.

Submitted by ≯℡__Kan透↙ on 2019-12-06 16:29:37
In my Python code I execute:

train_writer = tf.summary.FileWriter(TBOARD_LOGS_DIR)
train_writer.add_graph(sess.graph)

I can see a 1.6 MB file created in E:\progs\tensorboard_logs (and no other file), but when I then execute tensorboard --logdir=E:\progs\tensorboard_logs, it loads but says "No graph definition files were found." when I click on Graph. Additionally, running tensorboard --inspect --logdir=E:\progs\tensorboard_logs displays:

Found event files in: E:\progs\tensorboard_logs
These tags are in E:\progs\tensorboard_logs:
audio -
histograms -
images -
scalars -
Event statistics for E:\progs