shared-memory

Share variables/memory between all PHP processes

Question: Is it possible to share variables and arrays between all PHP processes without duplicating them? With memcached, I believe PHP duplicates the memory in use: after $array = $memcache->get('array'); $array holds a copy of the value stored in memcached. So my idea is that there could be a static variable, defined once and shared between all processes. Answer 1: By default it's simply not possible. Every solution will always copy the content into the current scope, because otherwise there is no way to access it. …

Do forked child processes use the same semaphore?

Let's say I create a semaphore. If I fork a bunch of child processes, will they all still use that same semaphore? Also, suppose I create a struct with semaphores inside and then fork. Do all the child processes still use those same semaphores? If not, would storing that struct (and its semaphores) in shared memory allow the child processes to use the same semaphores? I'm really confused about how my forked child processes can use the same semaphores. bdonlan: Let's say I create a semaphore. If I fork a bunch of child processes, will they all still use that same semaphore? If you are using a SysV IPC …

Pointers inside shared memory segment

Question: I've been trying this for hours and googling everything I can think of, but I'm going crazy. I have a struct: typedef struct { int rows; int collumns; int* mat; char* IDs_row; } mem; I don't know the sizes of the int* (a matrix) and the char* until later. When I do, I create the shared memory like this: mem *ctrl; int size = (2 + ((i-1)*num_cons))*sizeof(int) + i*26*sizeof(char); // I have the real size now shmemid = shmget(KEY, size, IPC_CREAT | 0666); if (shmemid < 0) { perror("Ha fallado …

sharing memory between two applications

I have two Windows applications written by two different people: one in C++ and one in C#. I need a way to share data in RAM between them: one writes the data and the other only reads it. What should I use to make this as fast and effective as possible? Thanks. Answer 1: You can use Memory-Mapped Files. Here is an article describing how to use them. Answer 2: Use a Windows File Mapping Object, which allows you to share memory between processes. Answer 3: You can use Named Pipes. A named pipe is a named, one-way or duplex pipe for communication between the pipe server and …

How to use POSIX semaphores on forked processes in C?

I want to fork multiple processes and then use a semaphore on them. Here is what I tried: sem_init(&sem, 1, 1); /* semaphore, pshared, value */ . . . if(pid != 0){ /* parent process */ wait(NULL); /* wait for all child processes */ printf("\nParent: All children have exited.\n"); . . /* clean up semaphores */ sem_destroy(&sem); exit(0); } else{ /* child process */ sem_wait(&sem); /* P operation */ printf(" Child(%d) is in critical section.\n",i); sleep(1); *p += i%3; /* increment *p by 0, 1 or 2 based on i */ printf(" Child(%d) new value of *p=%d.\n",i,*p); sem_post(&sem); /* V operation */ exit(0 …

want to efficiently overcome mismatch between key types in a map in Boost.Interprocess shared memory

Question: I'm creating a map (from string to string in this example) in shared memory using Boost.Interprocess. During retrieval, the compiler seems to force me to allocate memory in the managed segment just to (unnecessarily) hold the query term. I'd like to look up values in the shared map more efficiently, by matching the map's keys against instances that already live in non-shared memory, without this extra allocation. But it refuses to compile if I …

Shared-memory IPC synchronization (lock-free)

Consider the following scenario. Requirements: an Intel x64 server (multiple CPU sockets => NUMA); Ubuntu 12, GCC 4.6; two processes sharing large amounts of data over (named) shared memory; a classical producer-consumer scenario; memory arranged in a circular buffer (with M elements). Program sequence (pseudo code): Process A (producer): int bufferPos = 0; while( true ) { if( isBufferEmpty( bufferPos ) ) { writeData( bufferPos ); setBufferFull( bufferPos ); bufferPos = ( bufferPos + 1 ) % M; } } Process B (consumer): int bufferPos = 0; while( true ) { if( isBufferFull( bufferPos ) ) { readData( …

Do pthread mutexes work across threads if in shared memory?

Question: I found this: Fast interprocess synchronization method. I used to believe that a pthread mutex can only be shared between two threads in the same address space. The question and answers there seem to imply: if I have two separate processes A and B with a shared memory region M, I can put a pthread mutex in M, lock it in A, lock it in B, unlock it in A, and B will no longer block on the mutex. Is this correct? Can pthread mutexes be shared between two separate processes? Edit: I'm using C++, on Mac OS X.

Share Large, Read-Only Numpy Array Between Multiprocessing Processes

I have a 60 GB SciPy array (a matrix) that I must share between 5+ multiprocessing Process objects. I've seen numpy-sharedmem and read this discussion on the SciPy list. There seem to be two approaches: numpy-sharedmem, or using a multiprocessing.RawArray() and mapping NumPy dtypes to ctypes. numpy-sharedmem seems to be the way to go, but I've yet to see a good reference example. I don't need any kind of locks, since the array (actually a matrix) will be read-only. Due to its size, I'd like to avoid a copy. It sounds like the correct method is to create the only copy of the array as a …
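One copy-free route on fork-based platforms (Linux), which matches the read-only requirement exactly: create the array once in the parent and let Pool workers inherit its pages copy-on-write, so nothing is pickled or physically duplicated as long as the workers only read. A sketch (BIG, row_sum, and run_demo are illustrative names; this relies on the 'fork' start method and does not apply under 'spawn'):

```python
import numpy as np
from multiprocessing import Pool

# Created once in the parent; fork()ed workers inherit the pages
# copy-on-write, so the read-only data is never duplicated and is
# never pickled through the Pool's argument pipe.
BIG = np.arange(12, dtype=np.float64).reshape(4, 3)

def row_sum(i):
    # workers index into the inherited array; only the small
    # integer index and the float result cross process boundaries
    return float(BIG[i].sum())

def run_demo():
    with Pool(2) as pool:
        return pool.map(row_sum, range(4))

if __name__ == "__main__":
    print(run_demo())   # [3.0, 12.0, 21.0, 30.0]
```

For a 60 GB matrix the same pattern holds: pass indices (or slices described by indices) to the workers, never the array itself.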

How to combine Pool.map with Array (shared memory) in Python multiprocessing?

I have a very large (read-only) array of data that I want processed by multiple processes in parallel. I like the Pool.map function and would like to use it to calculate functions on that data in parallel. I saw that one can use the Value or Array class to share memory between processes, but when I try this I get RuntimeError: 'SynchronizedString objects should only be shared between processes through inheritance' when using the Pool.map function. Here is a simplified example of what I am trying to do: from sys import stdin from multiprocessing import Pool, Array def …
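The RuntimeError means the Array may not travel as a Pool.map argument; it must reach the workers "through inheritance", i.e. at worker creation time, which Pool exposes via its initializer/initargs hook. A sketch of that recipe (init, double, and run_demo are illustrative names):

```python
from multiprocessing import Pool, Array

shared = None  # filled in per-worker by init()

def init(arr):
    # runs once in each worker at startup; the Array arrives through
    # process creation, which is the sharing mode Pool supports
    global shared
    shared = arr

def double(i):
    # map() now passes only small indices, not the shared data
    return shared[i] * 2

def run_demo():
    # lock=False: read-only use needs no synchronization wrapper
    arr = Array('i', [1, 2, 3, 4], lock=False)
    with Pool(2, initializer=init, initargs=(arr,)) as pool:
        return pool.map(double, range(4))

if __name__ == "__main__":
    print(run_demo())   # [2, 4, 6, 8]
```

The same structure works with a RawArray wrapped by numpy.frombuffer for large numeric data.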