numba

How do I speed up profiled NumPy code - vectorizing, Numba?

谁都会走 posted on 2019-12-10 17:46:20
Question: I am running a large Python program to optimize portfolio weights for (Markowitz) portfolio optimization in finance. When I profile the code, 90% of the run time is spent calculating the portfolio return, which is done millions of times. What can I do to speed up my code? I have tried vectorizing the calculation of returns, which made the code slower (from 1.5 ms to 3 ms), and using the autojit function from Numba, which made no difference. See the example below - any suggestions?

import numpy as np
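
The excerpt cuts off right after the import, so here is a minimal sketch of what a jit-compiled portfolio-return loop could look like; the function and array names are illustrative and are not the asker's code.

import numpy as np
from numba import njit

@njit
def portfolio_return(weights, returns):
    # returns: (n_periods, n_assets) array of per-period asset returns
    # weights: (n_assets,) array of portfolio weights
    n_periods, n_assets = returns.shape
    total = 0.0
    for t in range(n_periods):
        r = 0.0
        for a in range(n_assets):
            r += weights[a] * returns[t, a]
        total += r
    return total / n_periods

weights = np.full(10, 0.1)
rets = np.random.normal(0.0005, 0.01, size=(1000, 10))
print(portfolio_return(weights, rets))  # first call triggers compilation

Whether a compiled loop like this beats a vectorized NumPy expression depends on the array sizes and how often the function is called.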

Numba: cell vars are not supported

拜拜、爱过 posted on 2019-12-10 15:37:30
Question: I'd like to use numba to speed up this function:

from numba import jit

@jit
def rownowaga_numba(u, v):
    wymiar_x = len(u)
    wymiar_y = len(u[1])
    f = [[[0 for j in range(wymiar_y)] for i in range(wymiar_x)] for k in range(9)]
    cx = [0., 1., 0., -1., 0., 1., -1., -1., 1.]
    cy = [0., 0., 1., 0., -1., 1., 1., -1., -1.]
    w = [4./9, 1./9, 1./9, 1./9, 1./9, 1./36, 1./36, 1./36, 1./36]
    for i in range(wymiar_x):
        for j in range(wymiar_y):
            for k in range(9):
                up = u[i][j]
                vp = v[i][j]
                udot = (up**2 + vp**2)
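
The "cell vars are not supported" error comes from the nested list comprehensions, which close over the local variables; a sketch of a workaround, assuming u and v are 2D NumPy arrays, that preallocates a NumPy array instead (the loop body is left at the same point where the excerpt cuts off):

import numpy as np
from numba import njit

@njit
def rownowaga_numba(u, v):
    wymiar_x, wymiar_y = u.shape
    f = np.zeros((9, wymiar_x, wymiar_y))   # replaces the nested list comprehensions
    cx = np.array([0., 1., 0., -1., 0., 1., -1., -1., 1.])
    cy = np.array([0., 0., 1., 0., -1., 1., 1., -1., -1.])
    w = np.array([4./9, 1./9, 1./9, 1./9, 1./9, 1./36, 1./36, 1./36, 1./36])
    for i in range(wymiar_x):
        for j in range(wymiar_y):
            for k in range(9):
                up = u[i, j]
                vp = v[i, j]
                udot = up**2 + vp**2
                # ... remaining update of f[k, i, j], cut off in the excerpt
    return f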

convert float to string numba python numpy array

妖精的绣舞 posted on 2019-12-10 14:58:42
Question: I am running a @nb.njit function within which I am trying to put a float, converted to a string, into a string array.

import numpy as np
import numba as nb

@nb.njit(nogil=True)
def func():
    my_array = np.empty(6, dtype=np.dtype("U20"))
    my_array[0] = np.str(2.35646)
    return my_array

if __name__ == '__main__':
    a = func()
    print(a)

I am getting the following error:

numba.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
Invalid use of Function(<class 'str'>) with argument(s) of type(s):
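
In nopython mode, str()/np.str() cannot be applied to a float (at least in the Numba versions of that era), so one common workaround is to keep only the numeric work inside the jitted function and do the string conversion outside it; a minimal sketch:

import numpy as np
import numba as nb

@nb.njit(nogil=True)
def compute():
    # do only the numeric work in nopython mode
    values = np.empty(6)
    values[0] = 2.35646
    return values

if __name__ == '__main__':
    values = compute()
    # convert to strings in ordinary Python, where str(float) is available
    my_array = np.array([str(v) for v in values], dtype="U20")
    print(my_array)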

Python Fast Implementation of Convolution/Cross-correlation of 3D arrays

岁酱吖の posted on 2019-12-10 11:55:20
Question: I'm working on calculating convolutions (cross-correlations) of 3D images. Due to the nature of the problem, FFT-based approximations of convolution (e.g. scipy's fftconvolve) are not desired, and the "direct sum" is the way to go. The images are ~(150, 150, 150) in size, and the largest kernels are ~(40, 40, 40) in size. The images are periodic (they have periodic boundary conditions, or need to be padded with the same image). Since ~100 such convolutions have to be done for one analysis, the speed of the ...
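
A sketch of one possible Numba-compiled direct cross-correlation with periodic wrapping, assuming float image and kernel arrays (illustrative, not the asker's code); parallelising the outer loop with prange is optional but usually worthwhile at these sizes:

import numpy as np
from numba import njit, prange

@njit(parallel=True)
def xcorr3d_periodic(image, kernel):
    nx, ny, nz = image.shape
    kx, ky, kz = kernel.shape
    out = np.zeros_like(image)
    for i in prange(nx):
        for j in range(ny):
            for k in range(nz):
                acc = 0.0
                for a in range(kx):
                    for b in range(ky):
                        for c in range(kz):
                            # wrap the indices to implement the periodic boundary condition
                            acc += image[(i + a) % nx, (j + b) % ny, (k + c) % nz] * kernel[a, b, c]
                out[i, j, k] = acc
    return out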

Why is passing a list (of length n) to a numba nopython function an O(n) operation

微笑、不失礼 posted on 2019-12-10 11:35:18
Question: This is only a question to satisfy my curiosity; I'm not actually planning on using lists as arguments for a numba function. But I was wondering why passing a list to a numba function seems to be an O(n) operation, while it's an O(1) operation for pure-Python functions. Some simple example code:

import numba as nb

@nb.njit
def take_list(lst):
    return None

take_list([1, 2, 3])  # warmup

And the timings:

for size in [10, 100, 1000, 10000, 100000, 1000000]:
    lst = [0]*size
    print(len(lst))
    %timeit take_list(lst)
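
The O(n) behaviour comes from the fact that a Python list has to be unboxed element by element into Numba's internal representation when it crosses into nopython mode (and reflected back on return), whereas a NumPy array is handed over as a reference to its existing buffer. A sketch of the constant-overhead alternative (timings will vary by machine):

import numpy as np
import numba as nb
import timeit

@nb.njit
def take_arg(x):
    return None

take_arg(np.zeros(3))  # warmup for the float64[:] signature

arr = np.zeros(1_000_000)
# the array is passed by reference, so the call time should not grow with its size
print(timeit.timeit(lambda: take_arg(arr), number=1000))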

Install numba 0.30.1 on Ubuntu 16.04 LTS

三世轮回 posted on 2019-12-10 11:04:28
Question: How do I install the current version (0.30.1) of numba for Python 3 on Ubuntu 16.04 LTS? My version of Python is 3.5.2, and I have a barebones install of Ubuntu (server edition, I think).

Answer 1: Okay, so after a couple of hours of figuring things out, I've decided that this is painful enough to share so that others don't have to figure it out themselves. First, set up the basics: install Python 3, Git and g++:

sudo apt install python3 git g++

Then get pip for Python 3 and NumPy:

sudo apt install python3-pip
pip3 install numpy
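
After the install, a quick sanity check that numba imports and actually compiles (not part of the original answer, just a minimal verification):

import numba
from numba import njit

@njit
def add(a, b):
    return a + b

print(numba.__version__)  # should report 0.30.1
print(add(1, 2))          # forces a compilation; should print 3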

How to use Numba to perform multiple integration in SciPy with an arbitrary number of variables and parameters?

僤鯓⒐⒋嵵緔 posted on 2019-12-10 04:35:19
Question: I'd like to use Numba to decorate the integrand of a multiple integral so that it can be called by SciPy's nquad function as a LowLevelCallable. Ideally, the decorator should allow for an arbitrary number of variables and an arbitrary number of additional parameters from nquad's args argument. This is built off an excellent Q&A from earlier this year, but extended to the case of multiple variables and parameters. As an example, suppose the following multiple integral with N variables ...
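
For reference, the basic plumbing for a fixed number of variables and no extra parameters looks roughly like the sketch below (numba.cfunc compiled against the double(int, double*) signature that quad/nquad accept as a LowLevelCallable); generalising this to arbitrary N and args is what the question is about:

import numpy as np
import scipy.integrate as si
from scipy import LowLevelCallable
from numba import cfunc, types, carray

# quad/nquad accept a LowLevelCallable with the C signature double f(int n, double *xx)
c_sig = types.double(types.intc, types.CPointer(types.double))

@cfunc(c_sig)
def integrand(n, xx):
    x = carray(xx, (n,))  # view the raw pointer as a length-n array
    return np.exp(-(x[0]**2 + x[1]**2))

result, error = si.nquad(LowLevelCallable(integrand.ctypes),
                         [[0.0, 1.0], [0.0, 1.0]])
print(result, error)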

Improve min/max downsampling

和自甴很熟 posted on 2019-12-09 11:20:53
Question: I have some large arrays (~100 million points) that I need to plot interactively. I am currently using Matplotlib. Plotting the arrays as-is gets very slow and is a waste, since you can't visualize that many points anyway. So I made a min/max decimation function that I tied to the 'xlim_changed' callback of the axis. I went with a min/max approach because the data contains fast spikes that I do not want to miss by just stepping through the data. There are more wrappers that crop to the x ...
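
A sketch of what a Numba-compiled min/max decimation step might look like: the data are split into bins and each bin contributes its minimum and maximum, so fast spikes survive the downsampling. The function name and bin handling are illustrative, not the asker's implementation.

import numpy as np
from numba import njit

@njit
def minmax_downsample(x, y, n_bins):
    # assumes y.size >= n_bins; any tail shorter than a full bin is dropped
    n = y.size
    bin_size = n // n_bins
    out_x = np.empty(2 * n_bins)
    out_y = np.empty(2 * n_bins)
    for b in range(n_bins):
        lo = b * bin_size
        hi = lo + bin_size
        y_min = y[lo]
        y_max = y[lo]
        for i in range(lo + 1, hi):
            if y[i] < y_min:
                y_min = y[i]
            if y[i] > y_max:
                y_max = y[i]
        out_x[2 * b] = x[lo]
        out_x[2 * b + 1] = x[hi - 1]
        out_y[2 * b] = y_min
        out_y[2 * b + 1] = y_max
    return out_x, out_y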

Finding nearest neighbours of a triangular tessellation

旧街凉风 posted on 2019-12-09 07:46:15
Question: I have a triangular tessellation like the one shown in the figure. Given N triangles in the tessellation, I have an N x 3 x 3 array which stores the (x, y, z) coordinates of all three vertices of each triangle. My goal is to find, for each triangle, the neighbouring triangles sharing the same edge. The intricate part of the whole setup is that I do not repeat the neighbour count. That is, if triangle j was already counted as a neighbour of triangle i, then triangle i should not be again ...
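
One common approach (a sketch, not necessarily the asker's eventual solution): key every undirected edge by its sorted pair of vertex coordinates and map it to the triangles that contain it; any edge shared by two triangles yields one neighbour pair, recorded only once so the count is not repeated. Rounding the coordinates to build hashable keys assumes shared vertices have (near-)identical coordinates.

import numpy as np
from collections import defaultdict

def edge_neighbours(triangles, decimals=9):
    # triangles: (N, 3, 3) array of vertex coordinates
    edge_to_tris = defaultdict(list)
    for t in range(len(triangles)):
        verts = [tuple(np.round(v, decimals)) for v in triangles[t]]
        for a, b in ((0, 1), (1, 2), (2, 0)):
            edge = tuple(sorted((verts[a], verts[b])))
            edge_to_tris[edge].append(t)
    neighbour_pairs = set()
    for tris in edge_to_tris.values():
        if len(tris) == 2:
            i, j = tris
            neighbour_pairs.add((min(i, j), max(i, j)))  # each pair is recorded only once
    return neighbour_pairs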

When is numba effective?

断了今生、忘了曾经 posted on 2019-12-08 18:06:30
I know numba creates some overhead, and in some situations (non-intensive computation) it becomes slower than pure Python. But what I don't know is where to draw the line. Is it possible to use the order of algorithm complexity to figure out where? For example, for adding two arrays (~O(n)) shorter than 5, pure Python is faster with this code:

import numpy as np
import numba

def sum_1(a, b):
    result = 0.0
    for i, j in zip(a, b):
        result += (i + j)
    return result

@numba.jit('float64(float64[:], float64[:])')  # note: the result is a scalar, so the return type is float64, not float64[:]
def sum_2(a, b):
    result = 0.0
    for i, j in zip(a, b):
        result += (i + j)
    return result

# try 100
a = np.linspace(1.0, 2.0, 5)
b = np.linspace(1.0, 2.0, 5)  # truncated in the excerpt; completed here to mirror a
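
One way to locate the crossover empirically, rather than from complexity arguments alone, is to time both versions over a range of sizes and see where the jitted function's fixed per-call overhead stops dominating; a sketch (it assumes sum_1 and sum_2 from the question above are already defined, and the numbers will vary by machine and Numba version):

import timeit
import numpy as np

# assumes sum_1 and sum_2 from the question above are defined in this session
for n in (5, 50, 500, 5000, 50000):
    a = np.linspace(1.0, 2.0, n)
    b = np.linspace(1.0, 2.0, n)
    sum_2(a, b)  # make sure compilation has already happened before timing
    t_py = timeit.timeit(lambda: sum_1(a, b), number=1000)
    t_nb = timeit.timeit(lambda: sum_2(a, b), number=1000)
    print(n, t_py, t_nb)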