cython

How do I force usage of long doubles with Cython?

青春壹個敷衍的年華 submitted on 2019-12-22 10:27:12

Question: I apologize in advance for my poor knowledge of C: I code in Python and have written a few modules with Cython, using the standard C functions for a large speed increase. However, I need a range higher than 1e308 (yes, you read that right), which is the ceiling I currently hit using the type double complex and the functions cexp and cabs. I tried the functions cexpl and cabsl, and declared my variables to be of type long double complex, but I still encounter overflows after
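When long double support is unreliable, a common workaround for magnitudes beyond 1e308 is to track logarithms of magnitudes instead of the magnitudes themselves; since |exp(z)| = exp(Re z), the log of the magnitude never overflows. A minimal pure-Python sketch of the idea (not the asker's code):

```python
import math

def log_abs_exp(re, im):
    # log(|exp(z)|) for z = re + im*j is simply re:
    # |exp(z)| = exp(Re z), so no overflow even when Re z >> 709.
    return re

def log_sum_exp(log_terms):
    """Stable log(sum(exp(t) for t in log_terms)) without overflow."""
    m = max(log_terms)
    return m + math.log(sum(math.exp(t - m) for t in log_terms))

# exp(800) overflows a double (max ~1.8e308), but its log is representable.
terms = [800.0, 799.0]
print(log_sum_exp(terms))  # ~ 800.3133
```

The same trick ports directly to a Cython nogil loop using only double-precision libm calls, sidestepping the cexpl/cabsl portability question entirely.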

cython lambda1 vs. <lambda>

谁说我不能喝 submitted on 2019-12-22 09:57:41

Question: I have found that on my PC a certain method is represented as <cyfunction <lambda> at 0x06DD02A0>, while on a CentOS server it is <cyfunction lambda1 at 0x1df3050>. I believe this is the cause of a very obscure downstream error in a different package. Why is it different? What does it mean? Can I turn one into the other? Details: I see this when looking at pandas.algos._return_false. Both the PC and the server have Python 2.7.6, the same version of pandas (0.14.1), and Cython 0.20.2. The PC is

Profiling the GIL

冷暖自知 submitted on 2019-12-22 09:56:21

Question: Is there a way to profile a Python process's usage of the GIL? Basically, I want to find out what percentage of the time the GIL is held. The process is single-threaded. My motivation is that I have some code written in Cython which uses nogil. Ideally, I would like to run it in a multi-threaded process, but to know whether that could be a good idea, I need to know whether the GIL is free a significant amount of the time. I found this related question, from 8 years ago. The sole
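Short of a real GIL profiler, a crude back-of-the-envelope proxy is to time a workload with one thread versus two: if the GIL is held essentially all of the time, two threads take roughly twice as long as one, while nogil sections pull the ratio back toward 1x. A minimal sketch of that measurement (timings are noisy; this is a heuristic, not a profiler):

```python
import threading
import time

def cpu_bound(n=200_000):
    # Pure-Python arithmetic holds the GIL for its whole duration.
    s = 0
    for i in range(n):
        s += i * i
    return s

def run_threads(k):
    """Wall-clock time to run cpu_bound in k threads concurrently."""
    threads = [threading.Thread(target=cpu_bound) for _ in range(k)]
    t0 = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - t0

one = run_threads(1)
two = run_threads(2)
# Ratio near 2 => GIL-bound; ratio near 1 => threads overlapped (nogil).
print(f"1 thread: {one:.3f}s  2 threads: {two:.3f}s  ratio: {two/one:.2f}")
```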

Handling C++ arrays in Cython (with numpy and pytorch)

依然范特西╮ submitted on 2019-12-22 09:46:24

Question: I am trying to use Cython to wrap a C++ library (fastText, if it's relevant). The C++ library's classes load a very large array from disk. My wrapper instantiates a class from the C++ library to load the array, then uses Cython memoryviews and numpy.asarray to turn the array into a numpy array, then calls torch.from_numpy to create a tensor. The problem is how to handle deallocating the memory for the array. Right now, I get "pointer being freed was not allocated" when the program exits
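Errors like "pointer being freed was not allocated" usually mean two owners tried to free the same memory, or the view outlived the object that owned the buffer. The usual remedy is to give the array exactly one owner and make every view keep that owner alive (numpy exposes this as the array's .base). A pure-Python analog of the ownership pattern, relying on CPython's reference counting (the class names are illustrative):

```python
import weakref

class BigBuffer:
    """Stands in for the C++ object that owns the large array;
    its destructor is the single place the memory is freed."""
    def __init__(self, n):
        self.data = bytearray(n)

class ArrayView:
    """View that keeps a reference to its owner, like numpy's .base.
    While any view exists, the owner (and its memory) stays alive,
    so the memory is freed exactly once, by the owner."""
    def __init__(self, owner):
        self.base = owner
        self.mv = memoryview(owner.data)

owner = BigBuffer(1024)
ref = weakref.ref(owner)
view = ArrayView(owner)
del owner                  # view.base still holds the buffer alive
assert ref() is not None
del view                   # last reference gone
assert ref() is None       # owner collected -> memory freed once
```

In the Cython wrapper the same idea means: do not free the C++ array yourself if numpy.asarray's result (or the torch tensor) still references it; instead keep the wrapper object alive as the view's base and free only in its __dealloc__.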

Not able to convert Numpy array to OpenCV Mat in Cython when trying to write c++ wrapper function

十年热恋 submitted on 2019-12-22 08:35:15

Question: I am trying to implement cv::cuda::warpPerspective in Python 2; there is a very helpful post about how to do that here: link. I followed the instructions described in that post; however, I got a Segmentation fault (core dumped) error. I was able to locate the error in the GpuWrapper.pyx file, line 11: pyopencv_to(<PyObject*> _src, src_mat). It seems that it fails to convert the numpy array to an OpenCV Mat. I am not sure what is wrong or how to fix it. The Python script that got Segmentation fault (core
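A frequent cause of segfaults in numpy-to-Mat conversion is handing the C++ side a buffer that is not dense row-major (or has an unexpected dtype), so the C++ code walks past the real data. A stdlib-only sketch of the kind of precondition check worth running before the pointer crosses into C++ (the function name is illustrative):

```python
def check_buffer(obj):
    """Refuse buffers that C++ code expecting dense row-major data
    (such as a cv::Mat wrapper) could misread and crash on."""
    mv = memoryview(obj)
    if not mv.c_contiguous:
        raise ValueError("buffer must be C-contiguous")
    return mv

img = bytearray(640 * 480 * 3)   # stand-in for an 8-bit BGR image
mv = check_buffer(img)
print(mv.nbytes)  # 921600 bytes, safe to hand to the C++ side
```

With numpy the equivalent guard is passing the array through numpy.ascontiguousarray (and pinning the dtype) before the pyopencv_to call.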

cython: relative cimport beyond main package is not allowed

穿精又带淫゛_ submitted on 2019-12-22 08:17:21

Question: I am trying to use explicit relative imports in Cython. From the release notes it seems relative imports should work after Cython 0.23, and I'm using 0.23.4 with Python 3.5. But I get this strange error that I cannot find many references to. The error comes only from the cimport:

driver.pyx:4:0: relative cimport beyond main package is not allowed

The directory structure is:

myProject/
    setup.py
    __init__.py
    test/
        driver.pyx
        other.pyx
        other.pxd

It seems like I'm probably messing up in setup.py
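Relative cimports only resolve inside a directory that Cython recognizes as a package, and the build must be run from above that package so the module compiles under its full dotted path. A non-runnable build sketch of a layout that typically avoids the error (this assumes the package really is meant to be named test; adjust names to taste):

```python
# Layout sketch:
#
# myProject/
#     setup.py
#     test/
#         __init__.py      <- makes test/ a package Cython can see
#         driver.pyx       <- can now use: from . cimport other
#         other.pyx
#         other.pxd
#
# setup.py -- run from myProject/ so driver compiles as test.driver:
from setuptools import setup
from Cython.Build import cythonize

setup(ext_modules=cythonize(["test/driver.pyx", "test/other.pyx"]))
```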

Cython - copy constructors

淺唱寂寞╮ submitted on 2019-12-22 08:11:02

Question: I've got a C library that I'm trying to wrap in Cython. One of the classes I'm creating contains a pointer to a C structure. I'd like to write a copy constructor that would create a second Python object pointing to the same C structure, but I'm having trouble, as the pointer cannot be converted into a Python object. Here's a sketch of what I'd like to have:

cdef class StructName:
    cdef c_libname.StructName* __structname
    def __cinit__(self, other = None):
        if not other:
            self.__structname = c
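The sharing semantics the asker wants can be expressed without passing the raw pointer through Python at all: the "copy constructor" takes the other wrapper object and reads the pointer field directly. A pure-Python analog of that pattern (the names are illustrative, not the asker's library; in real Cython one often adds a @staticmethod cdef factory instead, since def arguments cannot be raw pointers):

```python
class CStruct:
    """Stands in for the underlying C structure."""
    def __init__(self):
        self.value = 0

class StructName:
    """Wrapper whose copy constructor shares the same underlying
    structure instead of duplicating it, mirroring two Python
    objects holding one C pointer."""
    def __init__(self, other=None):
        if other is None:
            self._struct = CStruct()          # fresh allocation
        else:
            self._struct = other._struct      # share the same "pointer"

a = StructName()
b = StructName(a)          # copy: points at the same structure
a._struct.value = 42
print(b._struct.value)     # 42 -- both wrappers see the same data
```

Note that shared ownership raises the same question as the fastText entry above: only one wrapper may free the structure, so a flag or refcount on the C side is usually needed.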

Spark with Cython

微笑、不失礼 submitted on 2019-12-22 07:54:24

Question: I recently wanted to use Cython with Spark, for which I followed the following reference. I wrote the programs as described, but I am getting:

TypeError: fib_mapper_cython() takes exactly 1 argument (0 given)

spark-tools.py:

def spark_cython(module, method):
    def wrapped(*args, **kwargs):
        global cython_function_
        try:
            return cython_function_(*args, **kwargs)
        except:
            import pyximport
            pyximport.install()
            cython_function_ = getattr(__import__(module), method)
            return cython_function_(
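A "takes exactly 1 argument (0 given)" error from a wrapper like this usually means either the fallback call after the import forgot to forward *args, or the wrapped function was invoked too early (rdd.map(wrapped()) instead of rdd.map(wrapped)). A corrected lazy-import wrapper in the same shape, with the pyximport step swapped for a plain import so it runs anywhere (the name is illustrative):

```python
def lazy_import(module, method):
    """Resolve module.method on first call and cache it, mirroring
    the spark_cython wrapper but with a plain import in place of
    pyximport.install()."""
    cache = {}
    def wrapped(*args, **kwargs):
        if "fn" not in cache:
            cache["fn"] = getattr(__import__(module), method)
        return cache["fn"](*args, **kwargs)   # always forward the args
    return wrapped

sqrt = lazy_import("math", "sqrt")
print(sqrt(9.0))   # 3.0 -- the function is wrapped, not called,
                   # until it actually receives its argument
```

With Spark, the key point is to pass wrapped itself to rdd.map(...); calling it with no arguments at definition time reproduces exactly the TypeError above.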

Python dictionaries vs C++ std:unordered_map (cython) vs cythonized python dict

偶尔善良 submitted on 2019-12-22 06:47:12

Question: I was trying to measure the performance of Python dictionaries, cythonized Python dictionaries, and a cythonized C++ std::unordered_map, doing only an init procedure. Since the cythonized C++ code is compiled, I thought it should be faster than the pure Python version. I ran a test using 4 different scenario/notation options:

1. Cython C++ code using std::unordered_map and Cython book notation (defining a pair and using the insert method)
2. Cython C++ code using std::unordered_map and Python notation (map
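For comparisons like this, the pure-Python baseline is worth measuring carefully first, since CPython's dict is itself a highly optimized C hash table and often beats a wrapped std::unordered_map once boxing overhead is counted. A minimal timing harness of the kind used for such a baseline (sizes and repeat counts are arbitrary choices):

```python
import timeit

N = 10_000

def init_dict():
    # Scenario: plain insertion loop, the shape Cython would compile.
    d = {}
    for i in range(N):
        d[i] = i
    return d

def init_comprehension():
    # Scenario: dict comprehension, usually the fastest pure-Python form.
    return {i: i for i in range(N)}

loop_t = min(timeit.repeat(init_dict, number=100, repeat=3))
comp_t = min(timeit.repeat(init_comprehension, number=100, repeat=3))
print(f"loop: {loop_t:.4f}s  comprehension: {comp_t:.4f}s")
```

Taking the minimum of several repeats, as timeit's documentation suggests, reduces scheduling noise; the same harness can then time the cythonized variants for a like-for-like comparison.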