Difference between @cuda.jit and @jit(target='gpu')
Question: I have a question about working with the Python CUDA libraries from Continuum's Accelerate and Numba packages. Is using the decorator @jit with target='gpu' the same as @cuda.jit?

Answer 1: No, they are not the same, although the eventual compilation path (through PTX down to assembler) is. The @jit decorator is the general-purpose compiler path, which can optionally be steered onto a CUDA device. The @cuda.jit decorator is effectively the low-level Python CUDA kernel dialect which Continuum Analytics has developed.
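To make the distinction concrete, below is a minimal sketch of the kernel-dialect side using Numba's documented @cuda.jit API (cuda.grid for thread indexing, and the [blocks, threads] launch configuration). The @jit(target='gpu') spelling from the question comes from older Accelerate/Numba releases and has since been removed from Numba, so only the @cuda.jit form is shown; the kernel name add_kernel and the array sizes are illustrative choices, not anything from the original answer.

    from numba import cuda
    import numpy as np

    @cuda.jit
    def add_kernel(x, y, out):
        # Each thread computes one element; cuda.grid(1) returns the
        # absolute thread index within the 1D launch grid.
        i = cuda.grid(1)
        if i < x.size:
            out[i] = x[i] + y[i]

    n = 100000
    x = np.ones(n, dtype=np.float32)
    y = np.ones(n, dtype=np.float32)
    out = np.zeros(n, dtype=np.float32)

    # Kernels are launched with an explicit [blocks, threads] configuration,
    # mirroring CUDA C's <<<blocks, threads>>> syntax.
    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    add_kernel[blocks, threads_per_block](x, y, out)

Note how the kernel itself returns nothing and writes results through an output argument, and how the caller must choose the grid dimensions: this is what makes @cuda.jit a low-level kernel dialect rather than a general compiler entry point like @jit.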