I'm quite confused about whether to use tf.nn.dropout or tf.layers.dropout.
Many MNIST CNN examples seem to use tf.nn.dropout, with keep_prob as one of the parameters.
Apart from the answers from @nikpod and @Salvador Dali:
During training, tf.nn.dropout scales the retained activations by 1./keep_prob, while tf.layers.dropout scales them by 1./(1 - rate). Since rate = 1 - keep_prob, the two scalings are equivalent.
During evaluation, you can set keep_prob to 1, which is equivalent to setting training to False.
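A minimal sketch of the equivalence, assuming TensorFlow 1.x (where both APIs exist); the placeholder names and the 0.5 rate are just illustrative:

    import tensorflow as tf  # assumes TF 1.x

    x = tf.placeholder(tf.float32, shape=[None, 128])
    keep_prob = tf.placeholder(tf.float32)   # feed 0.5 for training, 1.0 for evaluation
    is_training = tf.placeholder(tf.bool)    # feed True for training, False for evaluation

    # tf.nn.dropout: keeps each unit with probability keep_prob and
    # scales the kept units by 1. / keep_prob.
    drop_nn = tf.nn.dropout(x, keep_prob=keep_prob)

    # tf.layers.dropout: drops each unit with probability rate (= 1 - keep_prob),
    # scales the kept units by 1. / (1 - rate), and is a no-op when training=False.
    drop_layers = tf.layers.dropout(x, rate=0.5, training=is_training)

So feeding keep_prob=1.0 to the first version plays the same role as feeding training=False to the second: both turn dropout off at evaluation time.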