In tensorflow 1.4, I found two functions that do batch normalization and they look the same: `tf.layers.batch_normalization` and `tf.contrib.layers.batch_norm`.
Which function should I use? Which one is more stable?
Just to add to the list, there are several more ways to do batch norm in tensorflow:
- `tf.nn.batch_normalization` is a low-level op. The caller is responsible for handling the `mean` and `variance` tensors themselves (a minimal sketch of this appears after the list).
- `tf.nn.fused_batch_norm` is another low-level op, similar to the previous one. The difference is that it's optimized for 4D input tensors, which is the usual case in convolutional neural networks. `tf.nn.batch_normalization` accepts tensors of any rank greater than 1.
- `tf.layers.batch_normalization` is a high-level wrapper over the previous ops. The biggest difference is that it takes care of creating and managing the running mean and variance tensors, and calls a fast fused op when possible. Usually, this should be the default choice for you.
- `tf.contrib.layers.batch_norm` is the early implementation of batch norm, from before it graduated to the core API (i.e., `tf.layers`). Its use is not recommended because it may be dropped in future releases.
- `tf.nn.batch_norm_with_global_normalization` is another deprecated op. Currently it delegates the call to `tf.nn.batch_normalization`, but it is likely to be dropped in the future.
- Finally, there's also the Keras layer `keras.layers.BatchNormalization`, which in the case of the tensorflow backend invokes `tf.nn.batch_normalization`.
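For illustration, here is a minimal sketch of the low-level path, where the caller computes the batch statistics themselves. The shapes and variable names (`x`, `beta`, `gamma`) are assumptions made up for the example, not from the original post:

```python
import tensorflow as tf

# Hypothetical shapes/names, for illustration only.
x = tf.placeholder(tf.float32, shape=[None, 64])    # a batch of activations
beta = tf.Variable(tf.zeros([64]))                  # learnable offset
gamma = tf.Variable(tf.ones([64]))                  # learnable scale

# The caller computes mean/variance; the op only applies the normalization.
mean, variance = tf.nn.moments(x, axes=[0])         # per-feature batch stats
y = tf.nn.batch_normalization(x, mean, variance,
                              offset=beta, scale=gamma,
                              variance_epsilon=1e-3)
```

Keeping running averages of `mean` and `variance` for use at inference time is also the caller's responsibility here, which is exactly what the high-level wrapper automates.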
As shown in the docs, `tf.contrib` is a contribution module containing volatile or experimental code. Once a function matures, it is moved out of this module into the core API. Right now both versions exist side by side for compatibility with earlier releases.
So the core version, `tf.layers.batch_normalization`, is the recommended one.
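To make that concrete, here is a minimal sketch of the recommended wrapper in TF 1.x graph mode; the shapes, the stand-in loss, and the optimizer are my own illustrative choices, not from the answer. The one documented subtlety worth showing is that the running statistics are updated through the `UPDATE_OPS` collection:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 64])    # hypothetical input
is_training = tf.placeholder(tf.bool)               # True for training batches

# The layer creates and manages the running mean/variance itself and
# dispatches to the fast fused op when possible.
h = tf.layers.batch_normalization(x, training=is_training)
loss = tf.reduce_mean(tf.square(h))                 # stand-in loss

# The moving averages are updated by ops placed in the UPDATE_OPS
# collection; they must run together with the train step.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
```

At inference time, feed `is_training=False` so the layer uses the accumulated running averages instead of the batch statistics.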
Source: https://stackoverflow.com/questions/48001759/what-is-right-batch-normalization-function-in-tensorflow