How to implement the ReLU function in NumPy

野性不改 2020-12-02 09:53

I want to make a simple neural network that uses the ReLU function. Can someone give me a clue about how I can implement the function using NumPy?

9 Answers
  •  忘掉有多难
    2020-12-02 10:03

    There are a few ways to do it.

    >>> x = np.random.random((3, 2)) - 0.5
    >>> x
    array([[-0.00590765,  0.18932873],
           [-0.32396051,  0.25586596],
           [ 0.22358098,  0.02217555]])
    >>> np.maximum(x, 0)
    array([[ 0.        ,  0.18932873],
           [ 0.        ,  0.25586596],
           [ 0.22358098,  0.02217555]])
    >>> x * (x > 0)
    array([[-0.        ,  0.18932873],
           [-0.        ,  0.25586596],
           [ 0.22358098,  0.02217555]])
    >>> (abs(x) + x) / 2
    array([[ 0.        ,  0.18932873],
           [ 0.        ,  0.25586596],
           [ 0.22358098,  0.02217555]])
    

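    Since the question mentions a neural network, you will likely want the activation wrapped in a function, along with its gradient for backpropagation. A minimal sketch using the np.maximum approach above (the names relu and relu_grad are just illustrative):

    import numpy as np
    
    def relu(x):
        # Element-wise max(x, 0); works on scalars and arrays alike
        return np.maximum(x, 0)
    
    def relu_grad(x):
        # Subgradient of ReLU: 1 where x > 0, else 0 (the value at x == 0 is a convention)
        return (x > 0).astype(x.dtype)
    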
    Timing the three methods with the following code:

    import numpy as np
    
    # Large test array centred on zero, so roughly half the entries are negative
    x = np.random.random((5000, 5000)) - 0.5
    
    # %timeit is an IPython/Jupyter magic, so run these in an IPython session
    print("max method:")
    %timeit -n10 np.maximum(x, 0)
    
    print("multiplication method:")
    %timeit -n10 x * (x > 0)
    
    print("abs method:")
    %timeit -n10 (abs(x) + x) / 2
    

    We get:

    max method:
    10 loops, best of 3: 239 ms per loop
    multiplication method:
    10 loops, best of 3: 145 ms per loop
    abs method:
    10 loops, best of 3: 288 ms per loop
    

    So the multiplication method seems to be the fastest.
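
    If you are free to overwrite x, np.maximum also accepts an out argument, so the result can be written in place without allocating a new array; on large arrays this is often faster still, though it is worth timing on your own setup:

    # In-place variant: overwrites x rather than allocating a new array
    np.maximum(x, 0, out=x)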
