Why input is scaled in tf.nn.dropout in tensorflow?

自闭症患者 2021-01-30 13:31

I can't understand why dropout works like this in TensorFlow. The CS231n notes say that "dropout is implemented by only keeping a neuron active with some probability …"

4 Answers
  •  没有蜡笔的小新
    2021-01-30 13:52

    If you keep reading the CS231n notes, the difference between standard dropout and inverted dropout is explained.

    Since we want to leave the forward pass at test time untouched (and tweak our network only during training), tf.nn.dropout directly implements inverted dropout: at training time it zeroes units and scales the kept values up by 1/keep_prob, so no scaling is needed at test time.
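    To make the scaling concrete, here is a minimal NumPy sketch of inverted dropout (an illustration of the technique, not TensorFlow's actual implementation; the function name and arguments are my own):

    ```python
    import numpy as np

    def inverted_dropout(x, keep_prob, training=True, rng=None):
        """Inverted dropout: scale the kept activations by 1/keep_prob
        during training so the test-time forward pass is the identity."""
        if not training:
            return x  # test time: no mask, no scaling
        rng = rng or np.random.default_rng()
        mask = rng.random(x.shape) < keep_prob  # keep each unit with prob keep_prob
        return x * mask / keep_prob             # scale survivors up

    x = np.ones((1000, 100))
    out = inverted_dropout(x, keep_prob=0.5)
    # Kept units become 1/0.5 = 2.0 and dropped units become 0.0,
    # so the expected value of each activation stays ~1.0.
    ```

    Because the scaling happens at training time, the expected activation matches the test-time activation, which is exactly why the test-time forward pass can be left untouched.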
