shared-memory

Do forked child processes use the same semaphore?

那年仲夏 submitted on 2019-11-26 08:28:41
Question: Let's say I create a semaphore. If I fork a bunch of child processes, will they all still use that same semaphore? Also, suppose I create a struct with semaphores inside and then fork. Do all the child processes still use those same semaphores? If not, would storing that struct (with its semaphores) in shared memory allow the child processes to use the same semaphores? I'm really confused about how my forked child processes can use the same semaphores.

Answer 1: Let's say I create a semaphore. If I fork a bunch
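
A minimal sketch of the usual answer (assuming Linux/POSIX; error handling omitted): an unnamed semaphore shared across fork() must be initialized with pshared = 1 and must itself live in shared memory, for example a MAP_SHARED anonymous mapping, so the children operate on the same object rather than on a copy:

    #include <semaphore.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* MAP_SHARED | MAP_ANONYMOUS pages are shared with forked children. */
        sem_t *sem = mmap(NULL, sizeof(sem_t), PROT_READ | PROT_WRITE,
                          MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        sem_init(sem, 1, 1);            /* pshared = 1: cross-process */

        if (fork() == 0) {              /* child */
            sem_wait(sem);
            printf("child %d in critical section\n", (int)getpid());
            sem_post(sem);
            _exit(0);
        }
        wait(NULL);                     /* parent */
        sem_destroy(sem);
        munmap(sem, sizeof(sem_t));
        return 0;
    }

The same applies to a struct containing semaphores: put the whole struct in the shared mapping and the semaphores inside it are shared too.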

Why do I need a memory barrier?

为君一笑 submitted on 2019-11-26 08:15:50
Question: C# 4 in a Nutshell (highly recommended, btw) uses the following code to demonstrate the concept of MemoryBarrier (assuming A and B were run on different threads):

    class Foo
    {
        int _answer;
        bool _complete;

        void A()
        {
            _answer = 123;
            Thread.MemoryBarrier();     // Barrier 1
            _complete = true;
            Thread.MemoryBarrier();     // Barrier 2
        }

        void B()
        {
            Thread.MemoryBarrier();     // Barrier 3
            if (_complete)
            {
                Thread.MemoryBarrier(); // Barrier 4
                Console.WriteLine(_answer);
            }
        }
    }

They mention that Barriers 1 & 4 prevent
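
Not part of the excerpt, but for context: Barriers 1 and 4 keep the write and read of _answer ordered relative to _complete, while Barriers 2 and 3 keep _complete from going stale. A common shorthand sketch for the same pattern uses volatile, which C# defines as release semantics on writes and acquire semantics on reads:

    class Foo
    {
        int _answer;
        volatile bool _complete;   // volatile write = release, read = acquire

        void A()
        {
            _answer = 123;
            _complete = true;      // release: _answer is published first
        }

        void B()
        {
            if (_complete)         // acquire: sees the _answer written before _complete
                Console.WriteLine(_answer);
        }
    }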

How to use POSIX semaphores on forked processes in C?

南笙酒味 submitted on 2019-11-26 07:21:19
Question: I want to fork multiple processes and then use a semaphore on them. Here is what I tried:

    sem_init(&sem, 1, 1);   /* semaphore*, pshared, value */
    . . .
    if (pid != 0) {         /* parent process */
        wait(NULL);         /* wait all child processes */
        printf("\nParent: All children have exited.\n");
        . .
        /* cleanup semaphores */
        sem_destroy(&sem);
        exit(0);
    }
    else {                  /* child process */
        sem_wait(&sem);     /* P operation */
        printf(" Child(%d) is in critical section.\n", i);
        sleep(1);
        *p += i % 3;        /* increment *p by 0,
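
A minimal sketch of the usual fix (an illustration, not the excerpted answer): sem_init with pshared = 1 only works if the sem_t itself lives in memory the processes share, so either place it in a shared mapping as in the first entry above, or sidestep the issue with a named semaphore from sem_open, which lives in the kernel (link with -pthread):

    #include <fcntl.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* A named semaphore: every process that opens "/mysem"
         * operates on the same kernel object. */
        sem_t *sem = sem_open("/mysem", O_CREAT, 0644, 1);

        for (int i = 0; i < 4; i++) {
            if (fork() == 0) {                  /* child */
                sem_wait(sem);                  /* P operation */
                printf("Child(%d) is in critical section.\n", i);
                sleep(1);
                sem_post(sem);                  /* V operation */
                _exit(0);
            }
        }
        while (wait(NULL) > 0)
            ;                                   /* wait all children */
        sem_close(sem);
        sem_unlink("/mysem");                   /* remove the name */
        return 0;
    }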

sharing memory between two applications

心已入冬 submitted on 2019-11-26 06:47:13
Question: I have two different Windows applications (two different people writing the code). One is written in C++ and the other in C#. I need some way to share data in RAM between them. One must write data and the other just reads the written data. What should I use to make it most effective and fast? Thanks.

Answer 1: You can use Memory Mapped Files. Here is an article describing how to use them.

Answer 2: Use a Windows File Mapping Object, which allows you to share memory between processes.

Answer 3:
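
A minimal sketch of the C# reading side (assuming the C++ writer created a named mapping, here called "MySharedMemory", via CreateFileMapping/MapViewOfFile; the name and data layout are illustrative):

    using System;
    using System.IO.MemoryMappedFiles;

    class Reader
    {
        static void Main()
        {
            // Open the named mapping created by the C++ writer.
            using (var mmf = MemoryMappedFile.OpenExisting("MySharedMemory"))
            using (var view = mmf.CreateViewAccessor())
            {
                int value = view.ReadInt32(0);  // whatever the writer stored at offset 0
                Console.WriteLine(value);
            }
        }
    }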

Shared-memory IPC synchronization (lock-free)

早过忘川 submitted on 2019-11-26 04:45:30
Question: Consider the following scenario. Requirements: Intel x64 server (multiple CPU sockets => NUMA), Ubuntu 12, GCC 4.6; two processes sharing large amounts of data over (named) shared memory; classical producer-consumer scenario; memory arranged in a circular buffer (with M elements). Program sequence (pseudo code):

Process A (Producer):

    int bufferPos = 0;
    while (true) {
        if (isBufferEmpty(bufferPos)) {
            writeData(bufferPos);
            setBufferFull(bufferPos);
            bufferPos = (bufferPos + 1) % M;
        }
    }
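
A minimal sketch (an assumption, since the excerpt cuts off before the answer) of the lock-free synchronization such a ring buffer needs: each slot's full flag is written with release semantics by the producer and read with acquire semantics by the consumer, so payload writes are ordered before the flag becomes visible, even across sockets. In C++11 (GCC 4.6 with -std=c++0x):

    #include <atomic>
    #include <cstddef>
    #include <cstring>

    struct Slot {
        std::atomic<bool> full;   // must live inside the shared mapping itself
        char data[4096];
    };

    // Producer: fill a slot, then publish it.
    void produce(Slot* s, const char* src, std::size_t n) {
        while (s->full.load(std::memory_order_acquire)) { /* spin: slot busy */ }
        std::memcpy(s->data, src, n);                     // write payload first
        s->full.store(true, std::memory_order_release);   // then set the flag
    }

    // Consumer: take a published slot, then release it.
    void consume(Slot* s, char* dst, std::size_t n) {
        while (!s->full.load(std::memory_order_acquire)) { /* spin: slot empty */ }
        std::memcpy(dst, s->data, n);                     // read payload
        s->full.store(false, std::memory_order_release);  // mark slot empty
    }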

Share Large, Read-Only Numpy Array Between Multiprocessing Processes

拥有回忆 submitted on 2019-11-26 03:03:25
Question: I have a 60GB SciPy array (matrix) I must share between 5+ multiprocessing Process objects. I've seen numpy-sharedmem and read this discussion on the SciPy list. There seem to be two approaches: numpy-sharedmem, or using a multiprocessing.RawArray() and mapping NumPy dtypes to ctypes. Now, numpy-sharedmem seems to be the way to go, but I've yet to see a good reference example. I don't need any kind of locks, since the array (actually a matrix) will be read-only. Now, due to its size, I
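
A minimal sketch of the RawArray approach the question mentions (assuming Linux's fork start method; the shape and worker are illustrative): the buffer is allocated in shared memory once, wrapped as an ndarray, and forked workers inherit it with no copy and, since it is read-only, no lock:

    import multiprocessing as mp
    import numpy as np

    def make_shared_array(shape, dtype=np.float64):
        # Allocate a shared ctypes buffer and wrap it as a NumPy array.
        raw = mp.RawArray('d', int(np.prod(shape)))   # 'd' = C double
        return np.frombuffer(raw, dtype=dtype).reshape(shape)

    arr = make_shared_array((1000, 1000))
    arr[:] = np.random.rand(1000, 1000)   # fill once, in the parent

    def row_sum(i):
        # 'arr' is reached via fork inheritance, not pickled per task.
        return arr[i].sum()

    if __name__ == '__main__':
        with mp.Pool(5) as pool:
            print(pool.map(row_sum, range(10)))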

Combine Pool.map with shared memory Array in Python multiprocessing

生来就可爱ヽ(ⅴ<●) submitted on 2019-11-26 02:30:09
Question: I have a very large (read-only) array of data that I want to be processed by multiple processes in parallel. I like the Pool.map function and would like to use it to calculate functions on that data in parallel. I saw that one can use the Value or Array class to share memory data between processes. But when I try to use this I get a RuntimeError: 'SynchronizedString objects should only be shared between processes through inheritance' when using the Pool.map function. Here is a simplified
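
A minimal sketch of the usual workaround (names illustrative): don't pass the Array through Pool.map's task arguments, which pickles it and raises that RuntimeError; hand it to each worker once through the Pool initializer, which counts as inheritance:

    import multiprocessing as mp

    shared = None   # set in each worker by the initializer

    def init_worker(arr):
        global shared
        shared = arr          # inherited once per worker, not pickled per task

    def process(i):
        with shared.get_lock():        # a synchronized Array carries its own lock
            shared[i] *= 2
        return shared[i]

    if __name__ == '__main__':
        arr = mp.Array('i', range(10))     # shared array of C ints
        with mp.Pool(4, initializer=init_worker, initargs=(arr,)) as pool:
            print(pool.map(process, range(10)))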

Shared memory in multiprocessing

橙三吉。 submitted on 2019-11-26 01:47:19
Question: I have three large lists. The first contains bitarrays (module bitarray 0.8.0) and the other two contain arrays of integers.

    l1 = [bitarray 1, bitarray 2, ..., bitarray n]
    l2 = [array 1, array 2, ..., array n]
    l3 = [array 1, array 2, ..., array n]

These data structures take quite a bit of RAM (~16GB total). If I start 12 sub-processes using:

    multiprocessing.Process(target=someFunction, args=(l1, l2, l3))

does this mean that l1, l2 and l3 will be copied for each sub-process, or will the sub-processes
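
Not part of the excerpt, but the crux of this question: under Linux's fork start method nothing is copied up front; the child reaches the parent's lists through copy-on-write pages (though CPython's reference-count updates gradually dirty those pages). A minimal sketch with an illustrative stand-in list:

    import multiprocessing as mp

    big = list(range(10_000_000))   # stands in for l1/l2/l3

    def child(data):
        # With 'fork', 'data' is the parent's list reached through
        # copy-on-write pages; nothing is pickled or copied eagerly.
        print(len(data), data[0])

    if __name__ == '__main__':
        mp.set_start_method('fork')   # make the Linux assumption explicit
        p = mp.Process(target=child, args=(big,))
        p.start()
        p.join()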

Shared-memory objects in multiprocessing

一个人想着一个人 submitted on 2019-11-26 00:24:51
Question: Suppose I have a large in-memory numpy array and a function func that takes this giant array as input (together with some other parameters). func with different parameters can be run in parallel. For example:

    def func(arr, param):
        # do stuff to arr, param

    # build array arr

    pool = Pool(processes = 6)
    results = [pool.apply_async(func, [arr, param]) for param in all_params]
    output = [res.get() for res in results]

If I use the multiprocessing library, then that giant array will be copied for
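
A minimal sketch of one common answer (assuming Linux's fork start method; names illustrative): build arr at module level before creating the Pool and pass only param per task, so workers reach the array through fork inheritance instead of pickling it on every call:

    import numpy as np
    from multiprocessing import Pool

    arr = np.random.rand(1000, 1000)   # built once, before the Pool forks

    def func(param):
        # 'arr' is a module-level global inherited via fork; treat it as read-only.
        return arr.sum() * param

    if __name__ == '__main__':
        all_params = [0.5, 1.0, 2.0]
        with Pool(processes=6) as pool:
            results = [pool.apply_async(func, [p]) for p in all_params]
            output = [res.get() for res in results]
        print(output)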

How to use shared memory with Linux in C

[亡魂溺海] submitted on 2019-11-25 23:36:32
Question: I have a bit of an issue with one of my projects. I have been trying to find a well-documented example of using shared memory with fork(), but with no success. Basically the scenario is that when the user starts the program, I need to store two values in shared memory: current_path, which is a char *, and file_name, which is also a char *. Depending on the command arguments, a new process is kicked off with fork(), and that process needs to read and modify the current_path variable stored in shared
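
A minimal sketch under those requirements (assuming POSIX shared memory; the name, size, and paths are illustrative, error handling is omitted, and older glibc needs -lrt): reserve a fixed-size buffer for current_path in a named mapping created before fork(), so parent and child see each other's writes:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define SHM_NAME "/demo_paths"
    #define PATH_LEN 4096

    int main(void) {
        /* Named POSIX shared memory: any process that opens it sees the same bytes. */
        int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
        ftruncate(fd, PATH_LEN);
        char *current_path = mmap(NULL, PATH_LEN, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);
        strcpy(current_path, "/home/user");

        if (fork() == 0) {                          /* child modifies the path */
            strcat(current_path, "/project");
            _exit(0);
        }
        wait(NULL);
        printf("parent sees: %s\n", current_path);  /* /home/user/project */
        munmap(current_path, PATH_LEN);
        close(fd);
        shm_unlink(SHM_NAME);
        return 0;
    }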