shared-memory

What special powers does ashmem have?

最后都变了 - Submitted on 2019-12-03 04:29:02
Can someone explain why ashmem was created? I'm browsing through mm/ashmem.c right now. As near as I can tell, the kernel treats ashmem as file-backed memory that can be mmap'd. But then why go to the trouble of implementing ashmem? It seems the same functionality could be achieved by mounting a RAM fs and then using filemap/mmap to share memory. I'm sure ashmem can do fancier things -- from looking at the code, it seems to have something to do with pinning/unpinning pages? Ashmem allows processes that are not related by ancestry to share memory maps by name, which are …

Placing Python objects in shared memory

蹲街弑〆低调 - Submitted on 2019-12-03 02:07:29
Is there a Python module that would let me place instances of non-trivial user classes into shared memory? By that I mean allocating directly in shared memory, as opposed to pickling into and out of it. multiprocessing.Value and multiprocessing.Array won't work for my use case, as they only seem to support primitive types and arrays thereof. The only thing I've found so far is POSH, but it hasn't changed in eight years, which suggests it's either super-stable or out of date. Before I invest time in trying to get it to work, I'd like to know if there are alternatives I haven't …
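One standard-library option worth knowing about is a multiprocessing Manager: the instance lives in a server process and other processes work through proxies, rather than being allocated directly in shared memory. This is a sketch, not the questioner's solution; the Counter class and all names here are illustrative.

```python
from multiprocessing import Process
from multiprocessing.managers import BaseManager

class Counter:
    """A stand-in for a 'non-trivial user class' (illustrative only)."""
    def __init__(self):
        self.value = 0
    def add(self, n):
        self.value += n
    def get(self):
        return self.value

class MyManager(BaseManager):
    pass

# The manager hosts Counter instances in a server process; callers
# receive proxies, so the whole object is never pickled per access.
MyManager.register("Counter", Counter)

def worker(counter):
    counter.add(5)  # mutates the instance held by the manager process

if __name__ == "__main__":
    with MyManager() as manager:
        counter = manager.Counter()
        p = Process(target=worker, args=(counter,))
        p.start()
        p.join()
        print(counter.get())  # prints 5
```

Note the trade-off: every method call is an IPC round-trip to the manager process, so this is convenient rather than fast; it is not the zero-copy placement the question asks for.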

Shared memory file in PHP

偶尔善良 - Submitted on 2019-12-03 00:13:17
I use openssl_pkcs7_sign and openssl_pkcs7_encrypt to create encrypted data. These functions only accept file names. I would like to store the temporary files in shared memory to improve performance. I understand that on Linux I can use file_put_contents('/dev/shm/xxx', data), but that is not possible on Windows. Is there a portable way to do this in PHP? Would the shmop_ functions help here? Thanks. PS: Or is there a way to make these functions accept data strings? PS2: Please do not suggest invoking /usr/bin/openssl from PHP; it is not portable. Since Windows 2000, the shmop (previously shm_) functions are …

Understanding in details the algorithm for inversion of a high number of 3x3 matrixes

梦想的初衷 - Submitted on 2019-12-02 22:35:48
Question: I am following up on this original post: PyCuda code to invert a high number of 3x3 matrixes. The code suggested as an answer is:

$ cat t14.py
import numpy as np
import pycuda.driver as cuda
from pycuda.compiler import SourceModule
import pycuda.autoinit

# kernel
kernel = SourceModule("""
__device__ unsigned getoff(unsigned &off){
    unsigned ret = off & 0x0F;
    off >>= 4;
    return ret;
}

// in-place is acceptable (i.e. out == in)
// T = float or double only
const int block_size = 288;
typedef double T …
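The CUDA kernel in the excerpt is cut off; whatever GPU implementation one ends up with, a CPU reference is useful for validating it. NumPy's np.linalg.inv accepts an (N, 3, 3) stack directly, so a batched reference is one line. The function name below is my own, not from the original post.

```python
import numpy as np

def invert_3x3_batch(mats):
    """CPU reference for batched 3x3 inversion: np.linalg.inv inverts an
    (N, 3, 3) stack in one call, handy for checking a CUDA kernel's
    output against. (Illustrative helper, not from the original post.)"""
    mats = np.asarray(mats, dtype=np.float64)
    assert mats.shape[-2:] == (3, 3)
    return np.linalg.inv(mats)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    batch = rng.standard_normal((64, 3, 3))
    inv = invert_3x3_batch(batch)
    # A @ inv(A) should be (numerically) the identity for every matrix.
    prod = np.einsum("bij,bjk->bik", batch, inv)
    assert np.allclose(prod, np.eye(3), atol=1e-6)
```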

Setting a value in the debugger of a shared section

爷,独闯天下 - Submitted on 2019-12-02 20:29:45
Question: I have the following code in a DLL:

#pragma data_seg("ABC")
__declspec(dllexport) char abc[2000] = { 0 };
#pragma data_seg()
#pragma comment(linker, "-section:ABC,rws")

I have the following code in an executable:

extern "C" __declspec(dllimport) char abc[];
char *abcPtr = abc;
#define iVar0 (*(long *)(abcPtr))

int main() {
    printf("Value: %d %p\n", iVar0, &iVar0);
    iVar0 = 66;
    printf("Value: %d %p\n", iVar0, &iVar0);
    char buffer[256];
    scanf_s("%s", buffer, 256);
}

When I run the first instance …

Can I somehow share an asynchronous queue with a subprocess?

我的梦境 - Submitted on 2019-12-02 17:19:05
I would like to use a queue to pass data from a parent process to a child process launched via multiprocessing.Process. However, since the parent process uses Python's new asyncio library, the queue methods need to be non-blocking. As far as I understand, asyncio.Queue is made for inter-task communication and cannot be used for inter-process communication. Also, I know that multiprocessing.Queue has the put_nowait() and get_nowait() methods, but I actually need coroutines that would still block the current task (but not the whole process). Is there some way to create coroutines that wrap …
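One common pattern (a sketch, not necessarily the answer the original thread settled on) is to wrap the blocking multiprocessing.Queue calls in loop.run_in_executor, which yields awaitables that suspend only the awaiting task while a worker thread blocks on the queue. The aput/aget helper names are my own.

```python
import asyncio
import multiprocessing

# Helper coroutines (my own names, not a library API): run the blocking
# Queue calls in the default thread-pool executor so only the awaiting
# task is suspended, never the whole event loop.
async def aput(queue, item):
    loop = asyncio.get_running_loop()
    await loop.run_in_executor(None, queue.put, item)

async def aget(queue):
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, queue.get)

def child(inq, outq):
    outq.put(inq.get() * 2)  # runs in the subprocess

async def main():
    inq, outq = multiprocessing.Queue(), multiprocessing.Queue()
    p = multiprocessing.Process(target=child, args=(inq, outq))
    p.start()
    await aput(inq, 21)        # other tasks keep running meanwhile
    result = await aget(outq)  # suspends this task, not the loop
    p.join()
    return result

if __name__ == "__main__":
    print(asyncio.run(main()))  # prints 42
```

The cost is one thread per in-flight queue operation; for a handful of queues that is usually acceptable, and it needs no third-party dependency.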

Parallelise python loop with numpy arrays and shared-memory

给你一囗甜甜゛ - Submitted on 2019-12-02 17:10:08
I am aware of several questions and answers on this topic, but haven't found a satisfactory answer to this particular problem: what is the easiest way to do a simple shared-memory parallelisation of a Python loop in which numpy arrays are manipulated through numpy/scipy functions? I am not looking for the most efficient way; I just want something simple to implement that doesn't require a significant rewrite when the loop is not run in parallel, much like what OpenMP provides in lower-level languages. The best answer I've seen in this regard is this one, but it is a rather clunky way that …
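A minimal sketch of one OpenMP-like approach, assuming Python 3.8+ for multiprocessing.shared_memory (which did not exist when the question was asked): workers attach to a single buffer by name and write their slices in place, so the array is never copied per process. All names below are illustrative.

```python
import numpy as np
from multiprocessing import Pool, shared_memory

def worker(args):
    """Attach to the shared buffer by name and fill one slot in place
    (np.sqrt stands in for an arbitrary numpy/scipy function)."""
    name, shape, i = args
    shm = shared_memory.SharedMemory(name=name)
    arr = np.ndarray(shape, dtype=np.float64, buffer=shm.buf)
    arr[i] = np.sqrt(i)
    shm.close()

def parallel_fill(n, workers=4):
    # 8 bytes per float64 element; the segment is shared, not copied.
    shm = shared_memory.SharedMemory(create=True, size=n * 8)
    arr = np.ndarray((n,), dtype=np.float64, buffer=shm.buf)
    with Pool(workers) as pool:
        pool.map(worker, [(shm.name, (n,), i) for i in range(n)])
    out = arr.copy()  # copy out before releasing the segment
    shm.close()
    shm.unlink()
    return out

if __name__ == "__main__":
    assert np.allclose(parallel_fill(16), np.sqrt(np.arange(16)))
```

The serial version of the loop is just `for i in range(n): arr[i] = np.sqrt(i)`, so switching parallelism on and off requires little rewriting, which is what the question asks for.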

POSIX shared memory vs mapped files

一个人想着一个人 - Submitted on 2019-12-02 16:41:30
Having learnt a bit about the subject, can anyone tell me what the real difference is between POSIX shared memory (shm_open) and POSIX mapped files (mmap)? Both seem to be backed by tmpfs (mounted at /dev/shm), rather than the older System V IPC mechanism. So is there any advantage of using a mapped file over shared memory? Thanks. Answer: The distinction is not always clear. Shared memory can be implemented via memory-mapped files. An excellent write-up on this can be found here (as applied to C/C++ programming). Answer: My understanding is that shared memory is built on top of mapped files, but This Page seems to indicate that the …
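The contrast can be demonstrated from Python, where multiprocessing.shared_memory.SharedMemory wraps shm_open and the mmap module maps a regular file. This sketch assumes Linux, where POSIX shared-memory objects appear under /dev/shm; it is an illustration of the two mechanisms, not code from the thread.

```python
import mmap
import tempfile
from multiprocessing import shared_memory

# 1) POSIX shared memory: lives purely in tmpfs (visible as
#    /dev/shm/<name> on Linux) and vanishes after unlinking.
shm = shared_memory.SharedMemory(create=True, size=4096)
shm.buf[:5] = b"hello"
assert bytes(shm.buf[:5]) == b"hello"
shm.close()
shm.unlink()  # the shm_unlink() equivalent

# 2) Mapped regular file: identical mmap mechanics, but the pages
#    are backed by a real file that can outlive the process.
with tempfile.NamedTemporaryFile() as f:
    f.truncate(4096)
    with mmap.mmap(f.fileno(), 4096) as m:
        m[:5] = b"world"
        assert m[:5] == b"world"
```

Once mapped, both give you the same thing: a range of bytes shared between any processes that map the same object; the practical difference is naming, lifetime, and whether the pages can be written back to durable storage.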

How to use shared memory between kernel call of CUDA?

谁说我不能喝 - Submitted on 2019-12-02 13:51:37
Question: I want to use shared memory across kernel calls of one kernel. Can I use shared memory between kernel calls? Answer 1: No, you can't. Shared memory has thread-block life-cycle: a variable stored in it is accessible to all the threads belonging to one thread block, and only during one __global__ function invocation. Answer 2: Try page-locked memory, though the speed will be much slower than graphics memory: cudaHostAlloc(void **ptr, size_t size, cudaHostAllocMapped); then pass the ptr to the kernel code. Answer 3 …