cpython

Python C Module - Malloc fails in specific version of Python

时光毁灭记忆、已成空白 submitted on 2019-12-08 17:16:54
Question: I'm writing a Python module to perform IO in an O_DIRECT context. One of the limitations of O_DIRECT is that you must read into a buffer aligned on a 4096-byte boundary for 2.4 and 2.5 kernels, while 2.6 and up will accept any multiple of 512. The obvious memory-allocation candidate for this is posix_memalign(void **memptr, size_t alignment, size_t size). In my code, I allocate an area like so:

    char *buffer = NULL;
    int mem_ret = posix_memalign((void**)&buffer, alignment, size);
    if (!buffer) {
        PyErr
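The excerpt stops mid-check, but the check itself is the usual pitfall here: posix_memalign reports failure through its return value and does not promise to leave *memptr NULL on failure. A minimal sketch of the conventional pattern (names are illustrative, not the asker's actual module code):

    #define _POSIX_C_SOURCE 200112L
    #include <errno.h>
    #include <stdlib.h>
    #include <string.h>

    /* posix_memalign signals failure via its return code, so test rc,
     * not the pointer. */
    static void *alloc_aligned(size_t alignment, size_t size)
    {
        void *buffer = NULL;
        int rc = posix_memalign(&buffer, alignment, size);
        if (rc != 0) {        /* rc is an errno value: EINVAL or ENOMEM */
            errno = rc;       /* posix_memalign does not set errno itself */
            return NULL;
        }
        memset(buffer, 0, size);  /* touch the pages; O_DIRECT reads overwrite */
        return buffer;            /* release with free() */
    }

Checking the return code rather than the pointer also distinguishes EINVAL (alignment not a power-of-two multiple of sizeof(void*)) from ENOMEM.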

C Python API extension is ignoring open(errors="ignore") and keeps throwing the encoding exception anyway

不羁的心 submitted on 2019-12-08 02:18:13
Question: Given a file /myfiles/file_with_invalid_encoding.txt containing invalid UTF-8, such as:

    parse this correctly
    Føö»BÃ¥r
    also parse this correctly

I am calling the built-in Python open function from the C API, as in this minimal example (excluding the C Python setup boilerplate):

    const char* filepath = "/myfiles/file_with_invalid_encoding.txt";
    PyObject* iomodule = PyImport_ImportModule( "builtins" );
    if( iomodule == NULL ) {
        PyErr_PrintEx(100);
        return;
    }
    PyObject* openfunction = PyObject_GetAttrString(
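The excerpt is cut off before the call itself, but the usual way to forward errors="ignore" through the C API is a keyword dict handed to PyObject_Call; passing only positional arguments is the common way the option gets lost. A hedged sketch (the function name is illustrative):

    #include <Python.h>

    /* Call builtins.open(filepath, "r", errors="ignore") from C. */
    static PyObject *open_ignoring_errors(const char *filepath)
    {
        PyObject *builtins = PyImport_ImportModule("builtins");
        if (builtins == NULL) return NULL;

        PyObject *openfunc = PyObject_GetAttrString(builtins, "open");
        Py_DECREF(builtins);
        if (openfunc == NULL) return NULL;

        PyObject *args = Py_BuildValue("(ss)", filepath, "r");
        PyObject *kwargs = Py_BuildValue("{s:s}", "errors", "ignore");
        PyObject *file = NULL;
        if (args && kwargs)
            file = PyObject_Call(openfunc, args, kwargs);

        Py_XDECREF(args);
        Py_XDECREF(kwargs);
        Py_DECREF(openfunc);
        return file;  /* NULL with an exception set on failure */
    }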

Why and where does Python intern strings when executing `a = 'python'`, when the source code does not show that?

独自空忆成欢 submitted on 2019-12-07 12:10:59
Question: I am trying to learn the string-interning mechanism used in CPython's string object implementation. But in both PyObject *PyString_FromString(const char *str) and PyObject *PyString_FromStringAndSize(const char *str, Py_ssize_t size), Python interns strings only when the size is 0 or 1:

    PyObject *
    PyString_FromString(const char *str)
    {
        fprintf(stdout, "creating %s\n", str); ------------ [1]
        //...
        //creating...
        /* share short strings */
        if (size == 0) {
            PyObject *t = (PyObject *)op;
            PyString
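The interning of 'python' does not happen in these constructors at all: in CPython 2's codeobject.c, PyCode_New interns every string constant made up purely of identifier characters when the compiler builds a code object, which is why the constant in a = 'python' comes out interned. It uses the same public call shown in this sketch (Python 2 C API, matching the excerpt):

    #include <Python.h>

    /* Explicit interning; PyCode_New applies this same call to
     * identifier-like string constants at compile time. */
    static PyObject *intern_example(void)
    {
        PyObject *s = PyString_FromString("python");
        if (s != NULL)
            PyString_InternInPlace(&s);  /* s becomes the canonical copy */
        return s;
    }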

Does CPython's garbage collection do compaction?

时间秒杀一切 submitted on 2019-12-07 11:17:48
Question: I was talking with a friend, comparing languages, and he mentioned that Java's automated memory management is superior to Python's because Java's does compaction while Python's does not - and hence for long-running servers, Python is a poor choice. Without getting into which is better or worse, is his claim true - does CPython's garbage collector not compact memory, and thus do long-running Python processes get more and more fragmented over time? I know that running CPython's garbage collector is
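On the compaction point the claim is true: CPython never moves objects. The reasoning step the excerpt leaves implicit is that the C API (and id(), which exposes an object's address) guarantees addresses stay fixed, so a compacting collector is impossible by design. A tiny illustrative sketch:

    #include <Python.h>

    /* C code holds a raw PyObject* address for as long as it likes; a
     * compacting collector would have to rewrite every such pointer,
     * which the CPython C API makes impossible. */
    static long read_it(void)
    {
        PyObject *obj = PyLong_FromLong(42);
        long value = PyLong_AsLong(obj);  /* dereferences the fixed address */
        Py_DECREF(obj);
        return value;
    }

Fragmentation is instead mitigated by pymalloc, which (in modern CPython) serves small objects from size-classed pools inside larger arenas and can return wholly empty arenas to the OS.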

CPython sources - how to build a STATIC python26.lib?

青春壹個敷衍的年華 submitted on 2019-12-07 04:38:41
Question: I'm trying to compile my hello.pyx file to an exe using Cython. The first step was to compile hello.pyx into a hello.cpp file with the command "cython --cplus --embed hello.pyx"; the --embed option means "Generate a main() function that embeds the Python interpreter." I'm trying to create an independent exe with no dependencies. In hello.cpp I have an #include "Python.h", so I'm downloading the Python sources from here: http://www.python.org/download/releases/2.6.6/, choosing the Gzipped source tarball
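For orientation, the main() that --embed generates boils down to something like the following simplified sketch (not Cython's verbatim output; the real one also initializes and imports the compiled hello module):

    #include <Python.h>

    int main(int argc, char **argv)
    {
        Py_Initialize();
        PySys_SetArgv(argc, argv);
        PyRun_SimpleString("print 'hello from the embedded interpreter'");
        Py_Finalize();
        return 0;
    }

Linking this against a static python26.lib, rather than the import library for python26.dll, is what removes the runtime DLL dependency the asker is after.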

Possible to execute Python bytecode from a script?

筅森魡賤 submitted on 2019-12-07 04:29:38
Question: Say I have a running CPython session. Is there a way to run the data (bytes) from a pyc file directly (without having the data on disk necessarily, and without having to write a temporary pyc file)? An example script to show a simple use case:

    if foo:
        data = read_data_from_somewhere()
    else:
        data = open("bar.pyc", 'rb').read()
    assert(type(data) is bytes)
    code = bytes_to_code(data)
    # call a method from the loaded code
    code.call_function()

The exact use isn't important, but generating code
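The digest cuts off, but the standard answer at the embedding level is that a pyc file is just a small header followed by a marshalled code object. A hedged C sketch (run_pyc_bytes and the header parameter are illustrative; the header length varies by version, e.g. 8 bytes on Python 2 and 16 bytes on Python 3.7+):

    #include <Python.h>
    #include <marshal.h>

    static PyObject *run_pyc_bytes(const char *data, Py_ssize_t len,
                                   Py_ssize_t header)
    {
        /* Skip the pyc header, then unmarshal the code object it carries. */
        PyObject *code = PyMarshal_ReadObjectFromString(data + header,
                                                        len - header);
        if (code == NULL)
            return NULL;
        if (!PyCode_Check(code)) {
            Py_DECREF(code);
            PyErr_SetString(PyExc_ValueError, "pyc payload is not a code object");
            return NULL;
        }
        PyObject *globals = PyDict_New();
        PyDict_SetItemString(globals, "__builtins__", PyEval_GetBuiltins());
        PyObject *result = PyEval_EvalCode(code, globals, globals);  /* Py3 signature */
        Py_DECREF(code);
        Py_DECREF(globals);
        return result;
    }

In pure Python the same round trip is marshal.loads on the bytes after the header, then exec on the resulting code object.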

Slice endpoints invisibly truncated

隐身守侯 submitted on 2019-12-07 02:16:43
Question:

    >>> class Potato(object):
    ...     def __getslice__(self, start, stop):
    ...         print start, stop
    ...
    >>> sys.maxint
    9223372036854775807
    >>> x = sys.maxint + 69
    >>> print x
    9223372036854775876
    >>> Potato()[123:x]
    123 9223372036854775807

Why doesn't the call to __getslice__ respect the stop I passed in, instead silently substituting 2^63 - 1? Does it mean that implementing __getslice__ for your own syntax will generally be unsafe with longs? I can do whatever I need with __getitem__ anyway, I'm just
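The clamp happens before __getslice__ is ever called: the interpreter must squeeze both bounds into C Py_ssize_t values, and out-of-range longs are pinned to the limits. A simplified sketch of the conversion CPython 2 performs in _PyEval_SliceIndex (ceval.c), not a verbatim copy:

    #include <Python.h>

    /* With a NULL exception argument, PyNumber_AsSsize_t clamps
     * out-of-range values to PY_SSIZE_T_MIN/PY_SSIZE_T_MAX instead of
     * raising an OverflowError. */
    static Py_ssize_t clamp_bound(PyObject *bound)
    {
        return PyNumber_AsSsize_t(bound, NULL);
        /* sys.maxint + 69  ->  PY_SSIZE_T_MAX == 9223372036854775807 */
    }

__getitem__ receives a slice object instead, and a slice stores the original Python integers unmodified, which is why it is the safe route for arbitrarily large bounds.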

Why can't I access builtins if I use a custom dict as a function's globals?

99封情书 submitted on 2019-12-07 01:46:51
Question: I have a dict subclass like this:

    class MyDict(dict):
        def __getitem__(self, name):
            return globals()[name]

This class can be used with eval and exec without issues:

    >>> eval('bytearray', MyDict())
    <class 'bytearray'>
    >>> exec('print(bytearray)', MyDict())
    <class 'bytearray'>

But if I instantiate a function object with the types.FunctionType constructor, the function can't access any builtins:

    import types
    func = lambda: bytearray
    func_copy = types.FunctionType(func.__code__, MyDict(), func._
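The difference is that eval and exec plant a "__builtins__" key in the globals mapping before running code, while the raw function constructor does not, and the interpreter's builtins lookup bypasses an overridden __getitem__. Seeding the key by hand restores builtins; a hedged C sketch using PyFunction_New, the C face of types.FunctionType (code_object is assumed to be a code object you already hold):

    #include <Python.h>

    static PyObject *function_with_builtins(PyObject *code_object)
    {
        PyObject *globals = PyDict_New();
        if (globals == NULL)
            return NULL;
        /* PyEval_GetBuiltins returns a borrowed reference. */
        PyDict_SetItemString(globals, "__builtins__", PyEval_GetBuiltins());
        PyObject *func = PyFunction_New(code_object, globals);
        Py_DECREF(globals);
        return func;
    }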

Embedding CPython: how do you construct Python callables to wrap C callback pointers?

大兔子大兔子 submitted on 2019-12-06 11:52:53
Question: Suppose I am embedding the CPython interpreter into a larger program written in C. The C component of the program occasionally needs to call functions written in Python, supplying callback functions to them as arguments. Using the CPython extending and embedding APIs, how do I construct a Python "callable" object that wraps a C pointer-to-function, so that I can pass that object to Python code and have the Python code successfully call back into the C code? Note: this is a revised version of
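One standard recipe: describe the C entry point in a PyMethodDef and wrap it with PyCFunction_New, yielding an ordinary Python callable. A sketch with illustrative names:

    #include <Python.h>

    static PyObject *my_callback(PyObject *self, PyObject *args)
    {
        long n;
        if (!PyArg_ParseTuple(args, "l", &n))
            return NULL;
        /* ...dispatch into the surrounding C program here... */
        return PyLong_FromLong(n * 2);
    }

    static PyMethodDef callback_def = {
        "my_callback", my_callback, METH_VARARGS,
        "Python-callable wrapper around a C callback"
    };

    /* After Py_Initialize():
     *     PyObject *callable = PyCFunction_New(&callback_def, NULL);
     * `callable` can be handed to any Python function as an argument. */

To bind a particular C function pointer chosen at run time, wrap the pointer in a PyCapsule and pass the capsule as PyCFunction_New's second argument; it arrives as self inside the wrapper.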

When does CPython garbage collect?

瘦欲@ submitted on 2019-12-06 08:07:00
If my understanding is correct, in CPython objects are deleted as soon as their reference count reaches zero. If you have reference cycles that become unreachable, that logic will not work, but on occasion the interpreter will try to find them and delete them (and you can trigger this manually by calling gc.collect()). My question is: when do these interpreter-triggered cycle-collection steps happen? What kinds of events trigger them? I am most interested in the CPython case, but would love to hear how this differs in PyPy or other Python implementations. The GC runs periodically based on the
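In CPython the cycle collector is driven by allocation counts, not by time: generation 0 is collected once the number of container allocations minus deallocations crosses a threshold (700 by default), and each older generation runs every 10th collection of the generation below it. A small sketch reading the thresholds and forcing a run from C, using only documented APIs:

    #include <Python.h>

    static void inspect_gc(void)
    {
        PyObject *gc = PyImport_ImportModule("gc");
        PyObject *thresholds =
            gc ? PyObject_CallMethod(gc, "get_threshold", NULL) : NULL;
        /* thresholds defaults to (700, 10, 10) */
        Py_ssize_t collected = PyGC_Collect();  /* force a full collection */
        (void)collected;
        Py_XDECREF(thresholds);
        Py_XDECREF(gc);
    }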