How to use TensorBoard and summary operations with the tf.layers module


Question


I have followed the TensorFlow Layers tutorial to create a CNN for MNIST digit classification using TensorFlow's tf.layers module. Now I'm trying to learn how to use TensorBoard from TensorBoard: Visualizing Learning. Perhaps this tutorial hasn't been updated recently, because it says its example code is a modification of that tutorial's and links to it, but the code is completely different: it manually defines a single-hidden-layer fully-connected network.

The TensorBoard tutorial shows how to use tf.summary to attach summaries to a layer: it creates operations on the layer's weight tensor, which is directly accessible because the layer was defined manually, and attaches tf.summary objects to those operations. To do the same with tf.layers and its tutorial code, I believe I'd have to:

  1. Modify the Layers tutorial's example code to use the non-functional interface (Conv2D instead of conv2d and Dense instead of dense) to create the layers
  2. Use the layer objects' trainable_weights property to get the weight tensors and attach tf.summary objects to those (see the sketch after this list)
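
Concretely, a minimal sketch of those two steps might look like this (the placeholder shape and the conv1_layer name are just illustrative, not from the tutorial):

import tensorflow as tf

input_layer = tf.placeholder(tf.float32, [None, 28, 28, 1])  # MNIST-shaped input

# Step 1: object-oriented interface (Conv2D instead of conv2d)
conv1_layer = tf.layers.Conv2D(
    filters=32,
    kernel_size=[5, 5],
    padding="same",
    activation=tf.nn.relu,
    name="conv1")
conv1 = conv1_layer(input_layer)  # the layer's variables are created by this call

# Step 2: attach a summary to each of the layer's trainable weights
for var in conv1_layer.trainable_weights:
    tf.summary.histogram(var.op.name, var)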

Is that the best way to use TensorBoard with tf.layers, or is there a way that's more directly compatible with tf.layers and the functional interface? If so, is there an updated official TensorBoard tutorial? It would be nice if the documentation and tutorials were more unified.


Answer 1:


You should be able to use the output of your tf.layers call to get the activations. Taking the first convolutional layer of the linked layers tutorial:

# Convolutional Layer #1
conv1 = tf.layers.conv2d(
    inputs=input_layer,
    filters=32,
    kernel_size=[5, 5],
    padding="same",
    activation=tf.nn.relu)

You could do:

tensor_name = conv1.op.name
tf.summary.histogram(tensor_name + '/activation', conv1)

Not sure if this is the best way, but I believe it is the most direct way of doing what you want.
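
Whichever variant you pick, the summaries only show up in TensorBoard after they are evaluated and written to disk. A minimal sketch of that plumbing, continuing from the snippet above (the log directory, train_op, and feed dict stand in for your own training setup):

merged = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter("/tmp/mnist_logs", sess.graph)  # log dir is arbitrary
    for step in range(1000):
        # train_op and feed are placeholders for your model's training op and inputs
        summary, _ = sess.run([merged, train_op], feed_dict=feed)
        writer.add_summary(summary, step)
    writer.close()

Then point TensorBoard at it with tensorboard --logdir /tmp/mnist_logs.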

Hope this helps!




Answer 2:


You can use something like this:

with tf.name_scope('dense2'):
    preds = tf.layers.dense(inputs=dense1, units=12,
                            activation=tf.nn.sigmoid, name="dense2")

    # tf.layers.dense(name="dense2") puts its variables under the
    # "dense2" variable scope, so this fetches [kernel, bias]
    d2_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'dense2')

    tf.summary.histogram("weights", d2_vars[0])
    tf.summary.histogram("biases", d2_vars[1])
    tf.summary.histogram("activations", preds)



Answer 3:


Another option is to use tf.layers.Dense instead of tf.layers.dense (note the lowercase d versus the capital D).

The pattern for Dense is:

hidden_units = 64  # example width
x = tf.placeholder(tf.float32, shape=[None, 100])
dlayer = tf.layers.Dense(units=hidden_units)
y = dlayer(x)

With dlayer kept as an intermediate object, you're able to do:

k = dlayer.kernel
b = dlayer.bias
k_and_b = dlayer.weights

Note that dlayer.kernel does not exist until you call y = dlayer(x): the layer's variables are only created when it is first applied to an input.

Things work the same way for other layers, such as convolutional ones; check their attributes with any available auto-completion.
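
For instance, a sketch of the same pattern for a convolutional layer, with summaries attached to the exposed variables (shapes and names here are illustrative):

import tensorflow as tf

input_layer = tf.placeholder(tf.float32, [None, 28, 28, 1])

conv_layer = tf.layers.Conv2D(
    filters=32,
    kernel_size=[5, 5],
    padding="same",
    activation=tf.nn.relu)
conv_out = conv_layer(input_layer)  # conv_layer.kernel / .bias exist after this call

tf.summary.histogram("conv_kernel", conv_layer.kernel)
tf.summary.histogram("conv_bias", conv_layer.bias)
tf.summary.histogram("conv_activations", conv_out)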



Source: https://stackoverflow.com/questions/49201832/how-to-use-tensorboard-and-summary-operations-with-the-tf-layers-module
