numba

TypingError: Failed in nopython mode pipeline (step: nopython frontend)

爷,独闯天下 submitted on 2021-02-10 03:39:48
Question: I am trying to write my first function using numba jit. I have a pandas dataframe that I need to iterate through, computing the root mean square over each window of 350 points. Since a plain Python for loop is quite slow, I decided to try numba jit. The code is:

    @jit(nopython=True)
    def find_rms(data, length):
        res = []
        for i in range(length, len(data)):
            interval = np.array(data[i-length:i])
            interval = np.power(interval, 2)
            sum = interval.sum()
            resI = sum/length
            resI = np.sqrt(res)
            res.appennd(resI)
        return res
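A minimal corrected sketch of what the loop appears to intend (my reading, not the poster's final code): it assumes the data is passed as a plain NumPy array (e.g. the hypothetical df["col"].to_numpy()), since nopython mode cannot type pandas objects, and it fixes the appennd typo and the np.sqrt(res) slip by writing into a preallocated output array.

    import numpy as np
    from numba import jit

    @jit(nopython=True)
    def find_rms(data, length):
        # one RMS value per full window of `length` samples
        out = np.empty(len(data) - length)
        for i in range(length, len(data)):
            window = data[i - length:i]
            out[i - length] = np.sqrt(np.sum(window ** 2) / length)
        return out

    # usage (hypothetical column name): values = df["col"].to_numpy(); rms = find_rms(values, 350)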

How do I pass in calculated values to a list sort using numba.jit in python?

丶灬走出姿态 submitted on 2021-02-08 17:02:37
Question: I am trying to sort a list using a custom key within a numba-jit function in Python. Simple custom keys work; for example, I know that I can sort by the absolute value using something like this:

    import numba

    @numba.jit(nopython=True)
    def myfunc():
        mylist = [-4, 6, 2, 0, -1]
        mylist.sort(key=lambda x: abs(x))
        return mylist  # [0, -1, 2, -4, 6]

However, in the following more complicated example, I get an error that I do not understand:

    import numba
    import numpy as np

    @numba.jit(nopython=True) …
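One common workaround (my own hedged sketch, not taken from this question or its answers) is to avoid capturing computed values in the sort key altogether and instead sort indirectly with np.argsort, which nopython mode supports:

    import numpy as np
    import numba

    @numba.jit(nopython=True)
    def sort_by_key(values, keys):
        order = np.argsort(keys)   # indices that sort the precomputed keys
        return values[order]       # values reordered by those keys

    vals = np.array([-4.0, 6.0, 2.0, 0.0, -1.0])
    print(sort_by_key(vals, np.abs(vals)))  # [ 0. -1.  2. -4.  6.]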

Access GPU hardware specifications in Python?

随声附和 submitted on 2021-02-08 08:31:35
Question: I want to access various NVIDIA GPU specifications using Numba or a similar Python CUDA package: information such as available device memory, L2 cache size, memory clock frequency, etc. From reading this question, I learned I can access some of the information (but not all) through Numba's CUDA device interface.

    from numba import cuda

    device = cuda.get_current_device()
    attribs = [s for s in dir(device) if s.isupper()]
    for attr in attribs:
        print(attr, '=', getattr(device, attr))

Output on a …
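As a small hedged addition (not part of the question): free and total device memory, which the attribute dump above does not include, can be read from the current CUDA context:

    from numba import cuda

    # query free/total memory on the active device, in bytes
    free_bytes, total_bytes = cuda.current_context().get_memory_info()
    print('free:', free_bytes, 'total:', total_bytes)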

python: fastest way to compute euclidean distance of a vector to every row of a matrix?

左心房为你撑大大i submitted on 2021-02-07 10:53:47
Question: Consider this Python code, where I try to compute the Euclidean distance of a vector to every row of a matrix. It's very slow compared to the best Julia version I can find using Tullio.jl: the Python version takes 30 s but the Julia version only takes 75 ms. I am sure I am not doing the best in Python. Are there faster solutions? Numba and numpy solutions welcome.

    import numpy as np

    # generate
    a = np.random.rand(4000000, 128)
    b = np.random.rand(128)
    print(a.shape)
    print(b.shape)

    def lin_norm …
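The poster's lin_norm is cut off above, so the following is only a hedged sketch of the kind of numba approach usually suggested for this shape of problem, assuming a is an (n, d) float64 array and b is a (d,) vector:

    import numpy as np
    from numba import njit, prange

    @njit(parallel=True, fastmath=True)
    def row_distances(a, b):
        n, d = a.shape
        out = np.empty(n)
        for i in prange(n):           # parallel loop over rows
            acc = 0.0
            for j in range(d):
                diff = a[i, j] - b[j]
                acc += diff * diff
            out[i] = np.sqrt(acc)
        return out

    # dist = row_distances(a, b)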

Difference between @cuda.jit and @jit(target='gpu')

百般思念 submitted on 2021-02-07 09:13:00
Question: I have a question on working with Python CUDA libraries from Continuum's Accelerate and numba packages. Is using the decorator @jit with target = gpu the same as @cuda.jit?

Answer 1: No, they are not the same, although the eventual compilation path through PTX into assembler is. The @jit decorator is the general compiler path, which can be optionally steered onto a CUDA device. The @cuda.jit decorator is effectively the low-level Python CUDA kernel dialect which Continuum Analytics have developed.
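A brief illustration of the two styles (mine, not from the answer; it needs a CUDA-capable device, and the function names add_cpu / add_gpu are hypothetical): @jit compiles an ordinary function, while @cuda.jit defines a kernel that is launched over an explicit grid of threads.

    import numpy as np
    from numba import cuda, jit

    @jit(nopython=True)
    def add_cpu(x, y):
        return x + y

    @cuda.jit
    def add_gpu(x, y, out):
        i = cuda.grid(1)          # absolute thread index
        if i < out.size:
            out[i] = x[i] + y[i]

    x = np.arange(32, dtype=np.float32)
    y = np.arange(32, dtype=np.float32)
    out = np.zeros_like(x)
    add_gpu[1, 32](x, y, out)     # launch 1 block of 32 threads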

Efficient implementation of pairwise distances computation between observations for mixed numeric and categorical data

送分小仙女□ submitted on 2021-02-07 04:07:15
Question: I am working on a data science project in which I have to compute the Euclidean distance between every pair of observations in a dataset. Since I am working with very large datasets, I have to use an efficient implementation of pairwise distance computation (both in terms of memory usage and computation time). One solution is to use the pdist function from Scipy, which returns the result in a 1D array, without duplicate instances. However, this function is not able to deal with categorical …
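A hedged sketch of one possible direction (my own, not an answer from this thread): if the numeric columns and the integer-encoded categorical columns are passed as separate arrays, a numba loop can fill a SciPy-style condensed distance array, with categorical features contributing simple 0/1 mismatch terms.

    import numpy as np
    from numba import njit, prange

    @njit(parallel=True)
    def mixed_pdist(num, cat):
        n = num.shape[0]
        out = np.empty(n * (n - 1) // 2)
        for i in prange(n):
            base = i * n - (i * (i + 1)) // 2 - i - 1  # offset into the condensed array
            for j in range(i + 1, n):
                d = 0.0
                for k in range(num.shape[1]):          # squared numeric differences
                    diff = num[i, k] - num[j, k]
                    d += diff * diff
                for k in range(cat.shape[1]):          # 0/1 categorical mismatches
                    if cat[i, k] != cat[j, k]:
                        d += 1.0
                out[base + j] = np.sqrt(d)
        return out

    # num = np.random.rand(500, 3); cat = np.random.randint(0, 4, (500, 2))
    # d = mixed_pdist(num, cat)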

numba eager compilation? What's the pattern?

馋奶兔 submitted on 2021-02-05 09:13:58
Question: I looked into eager compilation on numba's website and couldn't figure out how to specify the types. The example they use is this:

    from numba import jit, int32

    @jit(int32(int32, int32))
    def f(x, y):
        # A somewhat trivial example
        return x + y

    # source: http://numba.pydata.org/numba-doc/latest/user/jit.html#eager-compilation

As you can see, it takes two variables as input and returns one single variable; all of them should be int32. One way to understand the decorator is that @jit(int32(int32, int32 …
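For illustration (hedged, mine rather than the question's): the part before the parentheses is the return type and the part inside lists the argument types; a list of signatures compiles several specializations eagerly, and array arguments are written with slice-like syntax such as float64[:] for a 1-D float64 array.

    from numba import jit, int32, float64

    # two eagerly compiled specializations of the same function
    @jit([int32(int32, int32), float64(float64, float64)], nopython=True)
    def add(x, y):
        return x + y

    # a 1-D float64 array in, a scalar float64 out
    @jit(float64(float64[:]), nopython=True)
    def mean1d(a):
        return a.sum() / a.size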