cython

Cython relative import error, even when doing absolute import

痞子三分冷 posted on 2020-01-15 05:00:07
Question: I'm having trouble in Cython (with Python 3.5) with importing between modules in a single package. The error I'm getting is SystemError: Parent module '' not loaded, cannot perform relative import, even though I'm apparently doing absolute imports. Below is the simple test setup I'm using. It works fine as a pure-Python version (.py rather than .pyx, with no compilation), but not when compiled through Cython. Note that I'm not actually using any Cython language features in the below …
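
One commonly suggested remedy for this error (an assumption on my part; the excerpt cuts off before the asker's actual setup.py) is to give each Extension its full dotted name, so the compiled module is registered under its parent package at import time. A minimal sketch with hypothetical package and module names:

    # setup.py (sketch; "mypkg", "a" and "b" are hypothetical names)
    from distutils.core import setup
    from distutils.extension import Extension
    from Cython.Build import cythonize

    extensions = [
        # full dotted names rather than bare module names, so the compiled
        # modules know which package they belong to
        Extension("mypkg.a", ["mypkg/a.pyx"]),
        Extension("mypkg.b", ["mypkg/b.pyx"]),
    ]

    setup(name="mypkg", packages=["mypkg"], ext_modules=cythonize(extensions))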

Cython: understanding a typed memoryview with an indirect_contiguous memory layout

廉价感情. posted on 2020-01-15 01:21:48
Question: I want to understand more about Cython's awesome typed memoryviews and the memory layout indirect_contiguous. According to the documentation, indirect_contiguous is used when "the list of pointers is contiguous". There's also an example usage:

    # contiguous list of pointers to contiguous lists of ints
    cdef int[::view.indirect_contiguous, ::1] b

So please correct me if I'm wrong, but I assume a "contiguous list of pointers to contiguous lists of ints" means something like the array created by the …
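
In plain C terms, that layout is a contiguous array of pointers, each pointing to a contiguous buffer of ints. Below is a hedged Cython sketch of just the memory shape (not code from the question; actually wrapping such storage in a memoryview additionally requires an object exposing the buffer protocol with this layout):

    from libc.stdlib cimport malloc, free

    cdef int n_rows = 3, n_cols = 4, i, j
    # contiguous list of pointers ...
    cdef int **data = <int **>malloc(n_rows * sizeof(int *))
    for i in range(n_rows):
        # ... each pointing to a contiguous list of ints
        data[i] = <int *>malloc(n_cols * sizeof(int))
        for j in range(n_cols):
            data[i][j] = i * n_cols + j

    # clean up
    for i in range(n_rows):
        free(data[i])
    free(data)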

numpy faster than numba and cython, how to improve numba code

我是研究僧i posted on 2020-01-14 10:06:02
Question: I have a simple example here to help me understand using Numba and Cython. I am new to both Numba and Cython. I've tried my best to incorporate all the tricks to make Numba fast and, to some extent, the same for Cython, but my NumPy code is almost 2x faster than Numba (for float64), and more than 2x faster when using float32. Not sure what I am missing here. I was thinking perhaps the problem isn't the coding anymore but more about the compiler and such, which I'm not very familiar with. I've gone through …
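
The excerpt cuts off before the actual code, so the following is only a hedged sketch of the usual Numba knobs worth checking in this kind of comparison (the function body is a hypothetical stand-in, not the asker's code):

    import numpy as np
    from numba import njit, prange

    @njit(fastmath=True, parallel=True, cache=True)
    def scaled_sum(a, b):
        # fused element-wise loop; prange spreads iterations across threads
        out = np.empty_like(a)
        for i in prange(a.shape[0]):
            out[i] = 2.0 * a[i] + b[i]
        return out

    a = np.random.rand(10**6)
    b = np.random.rand(10**6)
    scaled_sum(a, b)  # the first call pays the JIT compilation cost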

Cython unable to find shared object file

只愿长相守 posted on 2020-01-13 16:29:13
Question: I am trying to link to my own C library from Cython, following the directions I've found on the web, including this answer: Using Cython To Link Python To A Shared Library. I am running IPython through Spyder. My setup.py looks like this:

    from distutils.core import setup
    from distutils.extension import Extension
    from Cython.Build import cythonize
    import numpy as np

    setup(
        ext_modules = cythonize(
            [Extension("*", ["*.pyx"],
                       libraries=["MyLib"],
                       extra_compile_args=["-fopenmp", "-O3"],
                       extra…
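
A "cannot find shared object file" error at import time usually means the dynamic loader can't locate libMyLib.so, even though linking succeeded. A hedged sketch of one common fix (the library path below is hypothetical): bake an rpath into the extension with runtime_library_dirs, alongside library_dirs for link time.

    # sketch of the Extension entry; "/path/to/mylib" is a hypothetical location
    Extension("*", ["*.pyx"],
              libraries=["MyLib"],
              library_dirs=["/path/to/mylib"],          # where to find it at link time
              runtime_library_dirs=["/path/to/mylib"],  # where to find it at run time
              extra_compile_args=["-fopenmp", "-O3"],
              extra_link_args=["-fopenmp"])

Alternatively, exporting LD_LIBRARY_PATH before starting IPython achieves the same at run time.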

Cython in Jupyter Notebook

雨燕双飞 posted on 2020-01-13 08:29:42
Question: I am getting errors when loading a Cython file in Jupyter Notebook. Any ideas?

    %load_ext Cython
    import numpy as np
    cimport numpy as np
    import cython

Just a simple error message:

    File "<ipython-input-3-7e39dc7f561b>", line 5
        cimport numpy as np
        ^
    SyntaxError: invalid syntax

Answer 1: After reading the docs, I used two separate cells. The first one is just:

    %load_ext Cython

Then my import statements:

    %%cython
    import numpy as np
    cimport numpy as np
    import cython

Source: https://stackoverflow.com

How to speed up Pandas multilevel dataframe sum?

不想你离开。 posted on 2020-01-13 06:29:34
Question: I am trying to speed up the sum of several big multilevel dataframes. Here is a sample:

    df1 = mul_df(5000, 30, 400)  # mul_df creates a big multilevel dataframe
    # let df2, df3, df4 = df1, df1, df1 to minimize the memory usage;
    # they could also be mul_df(5000, 30, 400)
    df2, df3, df4 = df1, df1, df1

    In [12]: timeit df1+df2+df3+df4
    1 loops, best of 3: 993 ms per loop

I am not satisfied with the 993 ms. Is there any way to speed it up? Can Cython improve the performance? If yes, how to write the Cython …
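
Before reaching for Cython, one standard trick is to skip pandas' per-operand index alignment by summing the underlying NumPy arrays. A hedged sketch, assuming (as the sample implies) that df1..df4 share an identical index and columns:

    import pandas as pd

    # sum the raw arrays, then rebuild the frame with the shared labels
    total = pd.DataFrame(
        df1.values + df2.values + df3.values + df4.values,
        index=df1.index,
        columns=df1.columns,
    )

Label alignment is a large part of what df1+df2+df3+df4 does, so this can be considerably faster when the labels are known to match.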

Filtering a NumPy Array: what is the best approach?

拜拜、爱过 posted on 2020-01-13 05:19:05
Question: Suppose I have a NumPy array arr that I want to element-wise filter, e.g. I want to get only the values below a certain threshold value k. There are a couple of methods, e.g.:

- Using generators: np.fromiter((x for x in arr if x < k), dtype=arr.dtype)
- Using boolean mask slicing: arr[arr < k]
- Using np.where(): arr[np.where(arr < k)]
- Using np.nonzero(): arr[np.nonzero(arr < k)]
- Using a Cython-based custom implementation(s)
- Using a Numba-based custom implementation(s)

Which is the fastest? What …
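
The excerpt ends before any benchmarks, but the pure-NumPy candidates from the list above are easy to time; a hedged sketch (numbers will vary with machine, array size, and dtype):

    import timeit
    import numpy as np

    arr = np.random.rand(1_000_000)
    k = 0.5

    t_mask = timeit.timeit(lambda: arr[arr < k], number=100)
    t_where = timeit.timeit(lambda: arr[np.where(arr < k)], number=100)
    t_iter = timeit.timeit(
        lambda: np.fromiter((x for x in arr if x < k), dtype=arr.dtype),
        number=10,  # the generator version is far slower, so fewer repeats
    )
    print(t_mask / 100, t_where / 100, t_iter / 10)  # seconds per call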

Pandas installation on Mac OS X: ImportError (cannot import name hashtable)

我是研究僧i posted on 2020-01-12 13:42:46
Question: I would like to build pandas from source rather than use a package manager because I am interested in contributing. The first time I tried to build pandas, these were the steps I took:

1) Created the virtualenv: mkvirtualenv --no-site-packages pandas
2) Activated the virtualenv
3) Installed Anaconda CE. However, this was installed in ~/anaconda.
4) Cloned pandas
5) Built the C extensions in place:

    (pandas)ems ~/.virtualenvs/pandas/localrepo/pandas> ~/anaconda/bin/python setup.py build_ext --inplace

…
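
Judging from the steps above (an inference on my part, not something the excerpt states), the extensions were built with ~/anaconda/bin/python while the import presumably happens under the virtualenv's interpreter; "cannot import name hashtable" typically means pandas' Cython extensions were not built for the Python doing the importing. A hedged sketch of the fix:

    # run inside the activated virtualenv, from the pandas source root,
    # so the build uses the same interpreter that will import pandas
    python setup.py build_ext --inplace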

Cython: Create memoryview without NumPy array?

江枫思渺然 posted on 2020-01-12 03:30:16
Question: Since I find memoryviews handy and fast, I try to avoid creating NumPy arrays in Cython and to work with the views of the given arrays. However, sometimes it cannot be avoided: not altering an existing array but creating a new one. In upper-level functions this is not noticeable, but in often-called subroutines it is. Consider the following function:

    #@cython.profile(False)
    @cython.boundscheck(False)
    @cython.wraparound(False)
    @cython.nonecheck(False)
    cdef double [:] vec_eq(double [:] v1, int [:] v2, …
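
One way to get a fresh memoryview without touching NumPy is Cython's built-in cython.view.array; a minimal hedged sketch (the function name and size parameter are hypothetical, not from the excerpt):

    from cython cimport view

    cdef double[:] make_vec(Py_ssize_t n):
        # view.array allocates and owns a contiguous buffer of n doubles,
        # so no NumPy array (or import) is needed
        cdef double[:] out = view.array(shape=(n,), itemsize=sizeof(double),
                                        format="d")
        return out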