tensorboard

How to log validation loss and accuracy using tf-slim

旧时模样 · submitted on 2019-12-06 14:56:49
Is there any way to log the validation loss and accuracy to TensorBoard when using tf-slim? When I was using Keras, the following code did this for me:

```python
model.fit_generator(generator=train_gen(), validation_data=valid_gen(), ...)
```

The model then evaluates the validation loss and accuracy after each epoch, which is very convenient. But how can I achieve this with tf-slim? The following approach uses primitive TensorFlow, which is not what I want:

```python
with tf.Session() as sess:
    for step in range(100000):
        sess.run(train_op, feed_dict={X: X_train, y: y_train})
        if n % batch_size * batches
```
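One common workaround (a sketch, not an official tf-slim API): run a periodic evaluation pass yourself and write the result as a `Summary` proto through the same `FileWriter` that TensorBoard reads. `compute_validation_loss` and the log directory below are placeholders, not names from the question.

```python
# TF 1.x sketch: manually push a validation metric into TensorBoard.
import tensorflow as tf

writer = tf.summary.FileWriter("/tmp/logs/validation")
for step in range(0, 100000, 1000):
    # ... training happens elsewhere ...
    val_loss_value = compute_validation_loss()  # placeholder for your eval pass
    summary = tf.Summary(value=[
        tf.Summary.Value(tag="val_loss", simple_value=val_loss_value)
    ])
    writer.add_summary(summary, global_step=step)
writer.flush()
```

Because the proto is built by hand, this works regardless of whether training runs through `slim.learning.train` or a plain session loop.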

Calculate/Visualize Tensorflow Keras Dense model layer relative connection weights w.r.t output classes

倾然丶 夕夏残阳落幕 · submitted on 2019-12-06 14:54:51
Here is my TensorFlow Keras model (you can ignore the Dropout layers if they make things tough):

```python
import tensorflow as tf

optimizers = tf.keras.optimizers
Sequential = tf.keras.models.Sequential
Dense = tf.keras.layers.Dense
Dropout = tf.keras.layers.Dropout
to_categorical = tf.keras.utils.to_categorical

model = Sequential()
model.add(Dense(256, input_shape=(20,), activation="relu"))
model.add(Dropout(0.1))
model.add(Dense(256, activation="relu"))
model.add(Dropout(0.1))
model.add(Dense(256, activation="relu"))
model.add(Dropout(0.1))
model.add(Dense(3, activation="softmax"))
adam = optimizers.Adam(lr
```
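One rough way to estimate relative connection weights from inputs to output classes (an assumption about what is wanted, not from the question): chain the absolute weight matrices of the Dense layers, ignoring activations, biases, and dropout. This gives a crude per-input, per-class attribution that matches the model's layer shapes above.

```python
import numpy as np

def relative_connection_weights(weights):
    """Chain absolute weight matrices to estimate each input feature's
    relative contribution to each output class. Activations, biases and
    dropout are ignored -- this is only a rough attribution."""
    acc = np.abs(weights[0])
    for w in weights[1:]:
        acc = acc @ np.abs(w)
    # normalise so each output class column sums to 1
    return acc / acc.sum(axis=0, keepdims=True)

# random stand-ins shaped like the model's Dense kernels (20->256->256->256->3)
rng = np.random.default_rng(0)
shapes = [(20, 256), (256, 256), (256, 256), (256, 3)]
ws = [rng.standard_normal(size=s) for s in shapes]
rel = relative_connection_weights(ws)
print(rel.shape)  # (20, 3): one relative weight per (input, class) pair
```

With a real model, the kernels can be pulled via `[layer.get_weights()[0] for layer in model.layers if layer.get_weights()]`.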

Converting .tflite to .pb

人走茶凉 · submitted on 2019-12-06 13:17:46
Problem: How can I convert a .tflite (serialized flat buffer) to .pb (frozen model)? The documentation only covers the one-way conversion. Use case: I have a model that was trained and converted to .tflite, but unfortunately I do not have details of the model and I would like to inspect the graph. How can I do that?

I don't think there is a way to restore a tflite model back to pb, as some information is lost in conversion. I found an indirect way to get a glimpse of what is inside a tflite model: read back each of its tensors.

```python
interpreter = tf.contrib.lite.Interpreter(model_path=model_path)
```
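A sketch of that inspection idea (assuming TF 1.x, where the class lived under `tf.contrib.lite`; later releases expose it as `tf.lite.Interpreter`). The file name is a placeholder:

```python
import tensorflow as tf

interpreter = tf.contrib.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
for detail in interpreter.get_tensor_details():
    # tensor names, shapes and dtypes are recoverable; the original
    # training graph is not
    print(detail["index"], detail["name"], detail["shape"], detail["dtype"])
```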

Tensorboard TypeError: __init__() got an unexpected keyword argument 'serialized_options'

一个人想着一个人 · submitted on 2019-12-06 12:01:38
I am using tensorflow version 1.3.0 and tensorboard version 1.10.0. I just updated my tensorboard version, and after the update, when I try to start tensorboard I get the following error message:

```
Traceback (most recent call last):
  File "c:\users\sztaki_user\anaconda3\envs\tensorflow\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\sztaki_user\anaconda3\envs\tensorflow\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\Sztaki_user\Anaconda3\envs\tensorflow\Scripts\tensorboard.exe\__main__.py", line 5, in <module>
  File "c:\users
```
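This particular error is commonly reported when the installed protobuf package is older than what the newer TensorBoard expects (`serialized_options` was added in protobuf 3.6). Two commonly suggested remedies, offered as assumptions rather than taken from this thread:

```
# Either upgrade protobuf to a release that knows serialized_options...
pip install --upgrade protobuf

# ...or keep the dashboard at a release matching TensorFlow 1.3
pip uninstall tensorboard
pip install tensorflow-tensorboard
```

Mixing a TF 1.3-era environment with TensorBoard 1.10 is the underlying version skew either way.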

Graph visualisation is not showing in TensorBoard for seq2seq model

僤鯓⒐⒋嵵緔 · submitted on 2019-12-06 07:01:34
Question: I built a seq2seq model using the seq2seq.py library provided with TensorFlow. Before training anything, I wanted to visualize the graph of my untrained model in TensorBoard, but it does not display. Below is a minimal example to reproduce my problem. Does anybody have an idea why this does not work? Can you only visualize a graph of a model after it has been trained?

```python
import tensorflow as tf
import numpy as np
from tensorflow.models.rnn import rnn_cell
from tensorflow.models.rnn import
```
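For reference, a graph does not need to be trained to show up: in TF 1.x you can write it to the event file before any training step by passing `sess.graph` to the writer. A minimal sketch (the log directory is arbitrary; in the very old release the question uses, the class was called `tf.train.SummaryWriter`):

```python
import tensorflow as tf

a = tf.placeholder(tf.float32, shape=[None], name="input")
b = tf.reduce_sum(a, name="total")

with tf.Session() as sess:
    # writing the graph requires no training and no sess.run calls
    writer = tf.summary.FileWriter("/tmp/graph_only", sess.graph)
    writer.close()
```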

Tensorboard error: 'Tensor' object has no attribute 'value'

ぐ巨炮叔叔 · submitted on 2019-12-06 02:51:08
Question: My goal: add arbitrary text to TensorBoard. My code:

```python
text = "muh teeeext"
summary = tf.summary.text("Muh taaaag", tf.convert_to_tensor(text))
writer.add_summary(summary)
```

My error:

```
File xxx, line xxx, in xxx
    writer.add_summary(summary)
  File "/home/xxx/.local/lib/python3.5/site-packages/tensorflow/python/summary/writer/writer.py", line 123, in add_summary
    for value in summary.value:
AttributeError: 'Tensor' object has no attribute 'value'
```

Answer 1: writer.add_summary(summary) is a tensor. the
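A likely fix, consistent with where the truncated answer is heading: `tf.summary.text` returns a summary *op*, while `add_summary` expects the evaluated `Summary` protobuf, so the op has to be run through a session first. A TF 1.x sketch (log directory is a placeholder):

```python
import tensorflow as tf

text_op = tf.summary.text("Muh taaaag", tf.convert_to_tensor("muh teeeext"))
with tf.Session() as sess:
    writer = tf.summary.FileWriter("/tmp/text_logs")
    summary_proto = sess.run(text_op)  # evaluate the op -> Summary protobuf
    writer.add_summary(summary_proto, global_step=0)
    writer.close()
```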

Tensorboard not found as magic function in jupyter

蹲街弑〆低调 · submitted on 2019-12-06 02:18:10
I want to run TensorBoard in Jupyter using the latest tensorflow 2.0.0a0, with tensorboard version 1.13.1 and Python 3.6. Using

```
%tensorboard --logdir {logs_base_dir}
```

I get the error:

```
UsageError: Line magic function `%tensorboard` not found
```

Do you have an idea what the problem could be? It seems that all versions are up to date and the command seems correct too. Thanks.

The extension needs to be loaded first:

```
%load_ext tensorboard.notebook
%tensorboard --logdir {logs_base_dir}
```

UPDATE: For newer TF versions (tensorflow>=1.14.0 & tensorflow != 2.0.0a0, i.e. newer than the TF2 alpha) use %load_ext

Tensorboard: File system scheme gs not implemented

﹥>﹥吖頭↗ · submitted on 2019-12-05 20:06:48
I am not able to connect TensorBoard to my Google Cloud Platform bucket, as I am facing the following error. Commands that I am running:

```
gcloud auth application-default login
tensorboard --logdir=gs://mybucket_which_contains_train_and_eval_directories
```

Stacktrace:

```
Exception in thread Reloader:
Traceback (most recent call last):
  File "c:\python\python35\lib\threading.py", line 914, in _bootstrap_inner
    self.run()
  File "c:\python\python35\lib\threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)
  File "c:\python\python35\lib\site-packages\tensorboard\backend\application.py", line 327, in
```
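Not from this thread, but the usual cause of "File system scheme gs not implemented": TensorBoard delegates file access to TensorFlow, and the installed build lacks the GCS filesystem plugin. Commonly suggested remedies (treat the exact packages as assumptions for your version):

```
# install a full TensorFlow wheel, which registers the gs:// filesystem
pip install --upgrade tensorflow

# or, as a workaround, copy the logs locally and point TensorBoard there
gsutil cp -r gs://mybucket_which_contains_train_and_eval_directories /tmp/logs
tensorboard --logdir=/tmp/logs
```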

Event files in Google Tensorflow

这一生的挚爱 · submitted on 2019-12-05 18:36:20
I am using TensorFlow to build a neural network, and I would like to show the training results on TensorBoard. So far everything works fine, but I have a question about the event files for TensorBoard. I notice that every time I run my Python script, it generates a different event file. And when I run my local server using

```
$ python /usr/local/lib/python2.7/dist-packages/tensorflow/tensorboard/tensorboard.py --logdir=/home/project/tmp/
```

it shows an error if there is more than one event file. This is annoying, since whenever I run my local server, I have to delete all previous
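A common pattern that avoids deleting old event files (not from this thread, but standard practice): give every training run its own timestamped subdirectory under the log directory, so TensorBoard treats each run as a separate line instead of choking on mixed event files.

```python
import os
import tempfile
from datetime import datetime

def make_run_dir(base_logdir):
    """Create and return a fresh, timestamped run directory."""
    run_name = datetime.now().strftime("run-%Y%m%d-%H%M%S")
    run_dir = os.path.join(base_logdir, run_name)
    os.makedirs(run_dir, exist_ok=True)
    return run_dir

# hypothetical base directory for illustration
base = os.path.join(tempfile.gettempdir(), "project_logs")
run_dir = make_run_dir(base)
print(run_dir)
```

Pointing `--logdir` at the parent directory then shows all runs side by side, each selectable in the TensorBoard UI.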

What's the right way to write summaries so they don't overlap on TensorBoard when restoring a model

爱⌒轻易说出口 · submitted于 2019-12-05 13:41:29
I wrote a CNN image classifier with TensorFlow and use TensorBoard to monitor the training. However, when I stop and restore from a checkpoint, there are overlaps like this (screenshot omitted). I followed the instructions in the TensorBoard README to write a SessionStatus.START message to the summary file, but it doesn't seem to work. This is my code:

```python
summary_writer.add_session_log(SessionLog(status=SessionLog.START), global_step=step)
```

I don't know if it's an answer, but if you put the global_step variable into a TensorFlow variable (to store it with your data) and then, when you restore the model, the global_step variable
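A sketch of the suggestion in that partial answer (TF 1.x; the checkpoint path is a placeholder): keep `global_step` as a graph variable so the checkpoint saves and restores it, then tag every summary with that step so a resumed run continues at the right x-position instead of overlapping from step 0.

```python
import tensorflow as tf

global_step = tf.train.get_or_create_global_step()
# pass it to the optimizer so it increments on every train step, e.g.
# train_op = optimizer.minimize(loss, global_step=global_step)

with tf.Session() as sess:
    saver = tf.train.Saver()
    saver.restore(sess, tf.train.latest_checkpoint("/tmp/ckpts"))
    step = sess.run(global_step)  # resumes at the saved step, not 0
    # summary_writer.add_summary(summary, global_step=step)
```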