In many publicly available neural network implementations written in TensorFlow, regularization terms are added to the loss value manually.

I tested tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES) and tf.losses.get_regularization_loss() with a single l2_regularizer in the graph, and found that they return the same value. Judging by the magnitude of that value, I assume the regularization constant has already been applied to it, via the scale parameter passed to tf.contrib.layers.l2_regularizer.
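That assumption matches how the regularizer is defined: tf.contrib.layers.l2_regularizer(scale) contributes scale * tf.nn.l2_loss(w), i.e. scale * sum(w**2) / 2, so the entries in the REGULARIZATION_LOSSES collection already include the scale factor. A minimal NumPy sketch of that arithmetic (no TensorFlow required; the function name is mine, chosen for illustration):

```python
import numpy as np

def l2_regularization(weights, scale):
    # Mirrors scale * tf.nn.l2_loss(w): scale * sum(w**2) / 2
    return scale * np.sum(np.square(weights)) / 2.0

w = np.array([1.0, -2.0, 3.0])  # sum of squares = 14
reg = l2_regularization(w, scale=0.1)
print(reg)  # ~0.7: the scale factor is already folded into the value
```

If that 0.7 matches what tf.losses.get_regularization_loss() reports for the same weights and scale, then no extra reg_constant multiplier is needed when adding the term to the loss.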