using leaky relu in Tensorflow

Submitted by 孤者浪人 on 2019-12-19 05:11:22

Question


How can I change G_h1 = tf.nn.relu(tf.matmul(z, G_W1) + G_b1) to leaky ReLU? I have tried looping over the tensor using max(value, 0.01*value), but I get TypeError: Using a tf.Tensor as a Python bool is not allowed.

I also tried to find the source code for relu on the TensorFlow GitHub so that I could modify it into a leaky ReLU, but I couldn't find it.


Answer 1:


You could write one based on tf.nn.relu, something like:

def lrelu(x, alpha):
  return tf.nn.relu(x) - alpha * tf.nn.relu(-x)
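As a quick sanity check of this formulation (sketched in plain NumPy rather than TensorFlow so it runs anywhere), relu(x) - alpha * relu(-x) reproduces the usual piecewise definition max(x, alpha * x) for 0 <= alpha <= 1:

```python
import numpy as np

def lrelu_via_relu(x, alpha):
    # Leaky ReLU built from two plain ReLUs, mirroring the answer above.
    relu = lambda v: np.maximum(v, 0.0)
    return relu(x) - alpha * relu(-x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(lrelu_via_relu(x, 0.2))   # [-0.4 -0.1  0.   0.5  2. ]
print(np.maximum(x, 0.2 * x))   # identical values
```

For negative inputs relu(x) is zero and -alpha * relu(-x) contributes the alpha * x slope; for positive inputs only the first term fires, so the two forms agree.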

EDIT

TensorFlow 1.4 now has a native tf.nn.leaky_relu.




Answer 2:


If alpha < 1 (it should be), you can use tf.maximum(x, alpha * x).
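A short NumPy illustration (np.maximum behaves like tf.maximum elementwise) of why the alpha < 1 condition matters: with alpha > 1, max(x, alpha * x) picks alpha * x on the positive side and x on the negative side, which is no longer a leaky ReLU:

```python
import numpy as np

x = np.array([-1.0, 2.0])
print(np.maximum(x, 0.2 * x))  # [-0.2  2. ]  correct: leaks negatives, keeps positives
print(np.maximum(x, 1.5 * x))  # [-1.   3. ]  wrong: scales positives, keeps negatives
```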




Answer 3:


A leaky ReLU function has been included since release 1.4.0-rc1 as tf.nn.leaky_relu.

Documentation page: https://www.tensorflow.org/versions/master/api_docs/python/tf/nn/leaky_relu



Source: https://stackoverflow.com/questions/45307072/using-leaky-relu-in-tensorflow
