numba

Improve performance of a for loop in Python (possibly with numpy or numba)

Submitted by 旧时模样 on 2019-12-04 14:17:01
I want to improve the performance of the for loop in this function.

import numpy as np
import random

def play_game(row, n=1000000):
    """Play the game! This game is a kind of random walk.

    Arguments:
        row (int[]): row index to use in the p matrix for each step in
            the walk. The length of this array is the same as n.
        n (int): number of steps in the random walk
    """
    p = np.array([[0.499, 0.499, 0.499],
                  [0.099, 0.749, 0.749]])
    X0 = 100
    Y0 = X0 % 3
    X = np.zeros(n)
    tempX = X0
    Y = Y0
    for j in range(n):
        tempX = X[j] = tempX + 2 * (random.random() < p.item(row.item(j), Y)) - 1
        Y = tempX % 3
    return np.r_

Numba: calling jit with explicit signature using arguments with default values

Submitted by 孤街醉人 on 2019-12-04 10:15:36
I'm using numba to speed up some functions containing loops over numpy arrays. Everything is fine and dandy: I can use jit, and I have learned how to define the signature. Now I tried using jit on a function with optional arguments, e.g.:

from numba import jit
import numpy as np

@jit(['float64(float64, float64)', 'float64(float64, optional(float))'])
def fun(a, b=3):
    return a + b

This works, but if instead of optional(float) I use optional(float64) it doesn't (same thing with int or int64). I lost an hour trying to figure this syntax out (actually, a friend of mine found this solution by chance because

Why is Cython so much slower than Numba when iterating over NumPy arrays?

Submitted by 怎甘沉沦 on 2019-12-04 08:37:11
Question: When iterating over NumPy arrays, Numba seems dramatically faster than Cython. What Cython optimizations am I possibly missing? Here is a simple example.

Pure Python code:

import numpy as np

def f(arr):
    res = np.zeros(len(arr))
    for i in range(len(arr)):
        res[i] = arr[i]**2
    return res

arr = np.random.rand(10000)
%timeit f(arr)

out: 4.81 ms ± 72.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Cython code (within Jupyter):

%load_ext cython
%%cython
import numpy as np
cimport numpy as np

numba - guvectorize barely faster than jit

Submitted by 人盡茶涼 on 2019-12-04 03:19:12
I was trying to parallelize a Monte Carlo simulation that operates on many independent datasets. I found that numba's parallel guvectorize implementation was barely 30-40% faster than the numba jit implementation. I found these comparable topics (1, 2) on Stack Overflow, but they do not really answer my question: in the first case, the implementation is slowed down by a fallback to object mode, and in the second case the original poster did not use guvectorize properly. Neither of these problems applies to my code. To make sure there was no problem with my code, I created this very simple

cProfile adds significant overhead when calling numba jit functions

Submitted by 北城以北 on 2019-12-04 00:05:03
Compare a pure-Python no-op function with a no-op function decorated with @numba.jit, that is:

import numba

@numba.njit
def boring_numba():
    pass

def call_numba(x):
    for t in range(x):
        boring_numba()

def boring_normal():
    pass

def call_normal(x):
    for t in range(x):
        boring_normal()

If we time this with %timeit, we get the following:

%timeit call_numba(int(1e7))
792 ms ± 5.51 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit call_normal(int(1e7))
737 ms ± 2.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

All perfectly reasonable; there's a small overhead for the numba function
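The profiling harness the title refers to can be reproduced with the standard library alone (a sketch; exact timings are machine-dependent, and the point of the question is that the same harness around a jitted callee inflates per-call cost far more):

```python
import cProfile
import io
import pstats

def boring_normal():
    pass

def call_normal(x):
    for t in range(x):
        boring_normal()

# Profile the pure-Python loop and capture the stats report as a string.
profiler = cProfile.Profile()
profiler.enable()
call_normal(10000)
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats('cumulative').print_stats(5)
report = stream.getvalue()
```

cProfile hooks every Python-level call event, which is why wrapping a compiled (numba) function in it measures mostly dispatch bookkeeping rather than the function itself.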

Create multiple columns in Pandas Dataframe from one function

Submitted by 这一生的挚爱 on 2019-12-03 17:31:09
Question: I'm a Python newbie, so I hope my two questions are clear and complete. I posted the actual code and a test data set in CSV format below. I've been able to construct the following code (mostly with help from StackOverflow contributors) to calculate the implied volatility of an option contract using the Newton-Raphson method. The process calculates vega when determining the implied volatility. Although I'm able to create a new DataFrame column for implied volatility using the Pandas
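One standard pattern for the "multiple columns from one function" part of the title (a sketch with made-up column names, not the question's actual data): have the per-row function return a `pd.Series`, and assign both resulting columns at once.

```python
import pandas as pd

# Hypothetical data: 'price' and 'strike' are illustrative names only.
df = pd.DataFrame({'price': [10.0, 12.0], 'strike': [11.0, 11.0]})

def two_outputs(row):
    # Returning a Series makes apply(axis=1) produce one column per key.
    diff = row['price'] - row['strike']
    ratio = row['price'] / row['strike']
    return pd.Series({'diff': diff, 'ratio': ratio})

df[['diff', 'ratio']] = df.apply(two_outputs, axis=1)
```

This computes both values in a single pass per row, so shared intermediate work (like vega in the question's Newton-Raphson loop) is not repeated per column.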

Using numba for cosine similarity between a vector and rows in a matix

Submitted by Anonymous (unverified) on 2019-12-03 09:14:57
Question: Found this gist using numba for fast computation of cosine similarity.

import numba
import numpy as np

@numba.jit(target='cpu', nopython=True)
def fast_cosine(u, v):
    m = u.shape[0]
    udotv = 0
    u_norm = 0
    v_norm = 0
    for i in range(m):
        if np.isnan(u[i]) or np.isnan(v[i]):
            continue
        udotv += u[i] * v[i]
        u_norm += u[i] * u[i]
        v_norm += v[i] * v[i]
    u_norm = np.sqrt(u_norm)
    v_norm = np.sqrt(v_norm)
    if (u_norm == 0) or (v_norm == 0):
        ratio = 1.0
    else:
        ratio = udotv / (u_norm * v_norm)
    return ratio

Results look promising (500 ns vs. 200 µs without the jit decorator in

Why this numba code is 6x slower than numpy code?

Submitted by 别等时光非礼了梦想. on 2019-12-03 05:26:56
Is there any reason why the following code runs in 2 s,

def euclidean_distance_square(x1, x2):
    return -2*np.dot(x1, x2.T) + np.expand_dims(np.sum(np.square(x1), axis=1), axis=1) + np.sum(np.square(x2), axis=1)

while the following numba code runs in 12 s?

@jit(nopython=True)
def euclidean_distance_square(x1, x2):
    return -2*np.dot(x1, x2.T) + np.expand_dims(np.sum(np.square(x1), axis=1), axis=1) + np.sum(np.square(x2), axis=1)

My x1 is a matrix of dimension (1, 512) and x2 is a matrix of dimension (3000000, 512). It is quite weird that numba can be so much slower. Am I using it wrong? I really need

Numba code slower than pure python

Submitted by ↘锁芯ラ on 2019-12-03 05:06:54
Question: I've been working on speeding up a resampling calculation for a particle filter. As Python has many ways to speed things up, I thought I'd try them all. Unfortunately, the numba version is incredibly slow. As numba should result in a speed-up, I assume this is an error on my part. I tried 4 different versions: Numba, Python, Numpy, Cython. The code for each is below:

import numpy as np
import scipy as sp
import numba as nb
from cython_resample import cython_resample

@nb.autojit
def numba_resample(qs,

Error installing Numba on OS X

Submitted by Anonymous (unverified) on 2019-12-03 01:31:01
Question: I'm unable to install Numba (via pip) on my OS X system. I'm using:

Python: 2.7.11 (Homebrew)
pip: 8.1.1
setuptools: 20.6.7
OS X: 10.11.4 (x86_64)
Xcode: 7.3
Xcode CLT: 7.3.0.0.1.1457485338
Clang: 7.3 build 703

and have installed the prerequisites (I think) with

brew install llvm
git clone https://github.com/numba/llvmlite
cd llvmlite
LLVM_CONFIG=/usr/local/opt/llvm/bin/llvm-config python setup.py install
cd ..
rm -rf llvmlite

and also tried

brew install llvm
brew link --force llvm  # later: brew unlink llvm
cd /usr/local/Cellar/llvm/X.X.X