Question
I have the following example code to test BasicRNNCell. I'd like to get its internal matrix so that I can compute the values of output_res and newstate_res with my own code, to make sure that I can reproduce them.
In the TensorFlow source code, it says output = new_state = act(W * input + U * state + B). Does anybody know how I can get W and U? (I tried to access cell._kernel, but it is not available.)
$ cat ./main.py
#!/usr/bin/env python
# vim: set noexpandtab tabstop=2 shiftwidth=2 softtabstop=-1 fileencoding=utf-8:
import tensorflow as tf
import numpy as np
batch_size = 4
vector_size = 3
inputs = tf.placeholder(tf.float32, [batch_size, vector_size])
num_units = 2
state = tf.zeros([batch_size, num_units], tf.float32)
cell = tf.contrib.rnn.BasicRNNCell(num_units=num_units)
output, newstate = cell(inputs = inputs, state = state)
X = np.zeros([batch_size, vector_size])
#X = np.ones([batch_size, vector_size])
with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  output_res, newstate_res = sess.run([output, newstate], feed_dict={inputs: X})
  print(output_res)
  print(newstate_res)
  sess.close()
$ ./main.py
[[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]]
[[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]]
Answer 1:
Short answer: You're right that you're after cell._kernel. Here's some code to get the kernel (and the bias) using the variables property, which is available on most TensorFlow RNNs:
import tensorflow as tf
import numpy as np
batch_size = 4
vector_size = 3
inputs = tf.placeholder(tf.float32, [batch_size, vector_size])
num_units = 2
state = tf.zeros([batch_size, num_units], tf.float32)
cell = tf.contrib.rnn.BasicRNNCell(num_units=num_units)
output, newstate = cell(inputs=inputs, state=state)
print("Output of cell.variables is a list of Tensors:")
print(cell.variables)
kernel, bias = cell.variables
X = np.zeros([batch_size, vector_size])
with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  output_, newstate_, k_, b_ = sess.run(
      [output, newstate, kernel, bias], feed_dict={inputs: X})
  print("Output:")
  print(output_)
  print("New State == Output:")
  print(newstate_)
  print("\nKernel:")
  print(k_)
  print("\nBias:")
  print(b_)
That outputs:
Output of cell.variables is a list of Tensors:
[<tf.Variable 'basic_rnn_cell/kernel:0' shape=(5, 2) dtype=float32_ref>,
<tf.Variable 'basic_rnn_cell/bias:0' shape=(2,) dtype=float32_ref>]
Output:
[[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]]
New State == Output:
[[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]]
Kernel:
[[ 0.41417515 -0.64997244]
[-0.40868729 -0.90995187]
[ 0.62134564 -0.88962835]
[-0.35878009 -0.25680023]
[ 0.35606658 -0.83596271]]
Bias:
[ 0. 0.]
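If the variables property isn't convenient, the same two variables can also be looked up by name in the graph's trainable-variable collection. This is a sketch, assuming the default variable scope, so the names are exactly the ones printed above:
# Look up the cell's variables by the names shown in the output above.
kernel = [v for v in tf.trainable_variables()
          if v.name == 'basic_rnn_cell/kernel:0'][0]
bias = [v for v in tf.trainable_variables()
        if v.name == 'basic_rnn_cell/bias:0'][0]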
Long answer: You also ask how to get W and U. Let me copy the implementation of call and discuss where W and U are.
def call(self, inputs, state):
  """Most basic RNN: output = new_state = act(W * input + U * state + B)."""
  gate_inputs = math_ops.matmul(
      array_ops.concat([inputs, state], 1), self._kernel)
  gate_inputs = nn_ops.bias_add(gate_inputs, self._bias)
  output = self._activation(gate_inputs)
  return output, output
It doesn't look like there's a W and a U, but they are there. Essentially, the first vector_size rows of the kernel are W and the next num_units rows of the kernel are U. Maybe it's helpful to see the element-wise math in LaTeX:
\mathrm{output}_{m,k} = \mathrm{act}\Big( \sum_{i=1}^{v} X_{m,i} W_{i,k} + \sum_{j=1}^{n} H_{m,j} U_{j,k} + B_k \Big), \qquad m = 1, \dots, b, \quad k = 1, \dots, n
I'm using m as a generic batch index, v as vector_size, n as num_units, and b as batch_size. Also [ ; ] denotes concatenation, so the kernel is [W ; U] with shape (v + n, n). Since TensorFlow is batch-major, implementations usually use right-multiply matrices, which is why W and U appear to the right of the input X and the state H here.
And since this is a very basic RNN, output == new_state: the "history" for the next iteration is simply the output of the current iteration.
Source: https://stackoverflow.com/questions/47965256/internal-variables-in-basicrnncell