Numpy Pure Functions for performance, caching

悲哀的现实 2020-11-28 14:06

I'm writing some moderately performance-critical code in numpy. This code will be in the innermost loop of a computation whose run time is measured in hours. A quick ca…

4 Answers
  •  无人及你
    2020-11-28 14:35

    Just expanding on my comment, here is a comparison between your sigmoid computed through np.vectorize and using numpy operations directly:

    In [1]: import numpy as np

    In [2]: x = np.random.normal(size=10000)

    In [3]: sigmoid = np.vectorize(lambda x: 1.0 / (1.0 + np.exp(-x)))

    In [4]: %timeit sigmoid(x)
    10 loops, best of 3: 63.3 ms per loop

    In [5]: %timeit 1.0 / (1.0 + np.exp(-x))
    1000 loops, best of 3: 250 us per loop
    

    As you can see, not only does vectorize make it much slower, but you can also calculate 10000 sigmoids in 250 microseconds (that is, 25 nanoseconds each). A single dictionary look-up in Python is slower than that, let alone all the other code needed to get the memoization in place.
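    If you want to sanity-check the dictionary look-up claim in the same session, a quick measurement could look like this (the cache dict below is purely illustrative):

    In [6]: cache = {k: 0.0 for k in range(10000)}

    In [7]: %timeit cache[5000]

    On typical hardware a single look-up lands in the tens of nanoseconds, which is already in the same ballpark as the 25 ns per element above, before you even account for hashing the input and storing the result.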

    The only way I can think of to optimize this further is to write a sigmoid ufunc for numpy, which would implement the whole operation in C. That way you wouldn't have to apply each intermediate step of the sigmoid (negation, exp, addition, division) to the entire array, even though numpy performs each of those steps very quickly.
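    For what it's worth, SciPy already ships such a compiled sigmoid ufunc as scipy.special.expit; assuming SciPy is available, a minimal sketch comparing it to the plain-numpy expression is:

    import numpy as np
    from scipy.special import expit  # logistic sigmoid, implemented as a compiled ufunc

    x = np.random.normal(size=10000)

    # Pure-numpy expression: allocates temporaries for -x, np.exp(-x) and 1.0 + np.exp(-x)
    y_numpy = 1.0 / (1.0 + np.exp(-x))

    # expit loops over the array once in compiled code, so no intermediate arrays are created
    y_expit = expit(x)

    assert np.allclose(y_numpy, y_expit)

    Because the whole computation happens in one pass of compiled code, this gets you the effect of a hand-written ufunc without writing any C yourself.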
