shared-memory

Same memory address for different processes

爷,独闯天下 submitted on 2019-12-11 05:51:07
Question: I just can't figure out why this code works the way it does (rather than the way I'd expect):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>
    #include <sys/types.h>

    int main() {
        int buffer;
        int *address = &buffer;
        if (fork() == 0) {
            *address = 27;
            printf("Address %ld stores %d\n", (long)address, *address);
            exit(0);
        }
        wait(NULL);
        printf("Address %ld stores %d\n", (long)(&buffer), buffer);
        return 0;
    }

Why does the system store different values even though they're pointed to by the same address?
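The short explanation is that fork() gives the child its own copy-on-write address space, so the same virtual address refers to different physical memory in each process. A minimal sketch of how to make the write genuinely shared, using an anonymous shared mapping (this assumes mmap with MAP_SHARED | MAP_ANONYMOUS; some older systems spell the flag MAP_ANON):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/wait.h>

    int main() {
        // One int backed by a shared anonymous mapping: after fork(),
        // parent and child see the same physical memory, not COW copies.
        int *shared = (int *)mmap(NULL, sizeof(int),
                                  PROT_READ | PROT_WRITE,
                                  MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (shared == MAP_FAILED) { perror("mmap"); return 1; }
        *shared = 0;
        if (fork() == 0) {
            *shared = 27;          // now visible to the parent
            _exit(0);
        }
        wait(NULL);
        printf("Parent sees %d\n", *shared);  // prints 27
        munmap(shared, sizeof(int));
        return 0;
    }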

Creating and accessing shared memory in C

￣綄美尐妖づ submitted on 2019-12-11 05:26:22
Question: I have a problem that I don't really know how to approach, and I was hoping you could tell me how to deal with it. I need to allocate N buffers in shared memory, each initialized to 0. Then I must fork N/2 child processes. Each child (i) writes value (i) into buffer (i) and sleeps for one second. It then reads the value back from that buffer, and if the value changed in the meantime it displays a message. The child then moves i-positions N/2
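A minimal sketch of the setup, assuming POSIX mmap is acceptable: all N buffers live in a single anonymous shared mapping created before the fork loop (N is a placeholder, and only the write/sleep/re-read step is shown):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/wait.h>

    enum { N = 8 };  // placeholder; assumed even, per the exercise

    int main() {
        // N zero-initialized int buffers in one shared anonymous mapping.
        int *buf = (int *)mmap(NULL, N * sizeof(int),
                               PROT_READ | PROT_WRITE,
                               MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }
        for (int i = 0; i < N; ++i) buf[i] = 0;

        for (int i = 0; i < N / 2; ++i) {
            if (fork() == 0) {
                buf[i] = i;                 // child i writes into buffer i
                sleep(1);
                if (buf[i] != i)            // changed by someone else?
                    printf("child %d: buffer %d changed to %d\n", i, i, buf[i]);
                _exit(0);
            }
        }
        for (int i = 0; i < N / 2; ++i) wait(NULL);
        return 0;
    }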

std::mutex in shared memory not working

随声附和 submitted on 2019-12-11 04:45:08
Question: I have a scenario where a shared memory area is accessed exclusively by two different processes. When I launch the processes, the first one successfully locks the mutex, updates the memory, and unlocks the mutex. But when the second process tries to lock it, it deadlocks, waiting forever for the mutex to be unlocked. The two processes attempt the lock about 10 seconds apart. I am using std::mutex. Please tell me what I am missing. Answer 1: A std::mutex is only specified to synchronize threads within a single process; placing one in shared memory does not make it usable across processes. Use a pthread_mutex_t initialized with the PTHREAD_PROCESS_SHARED attribute (or boost::interprocess::interprocess_mutex) instead.
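For cross-process locking, the usual building block is a pthread_mutex_t marked PTHREAD_PROCESS_SHARED and placed inside the shared region. A minimal sketch of the one-time initialization, assuming a SharedBlock struct that the creating process places in the segment:

    #include <pthread.h>

    // Lives inside the shared memory segment (illustrative layout).
    struct SharedBlock {
        pthread_mutex_t mutex;
        int data;
    };

    // Called once, by whichever process creates the segment.
    void init_shared_mutex(SharedBlock *blk) {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        // The crucial part: allow the mutex to be used across processes.
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init(&blk->mutex, &attr);
        pthread_mutexattr_destroy(&attr);
    }

    // Any process mapping the segment can then do:
    //   pthread_mutex_lock(&blk->mutex);
    //   ... touch blk->data ...
    //   pthread_mutex_unlock(&blk->mutex);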

Segmentation fault with Array in multiprocessing.sharedctypes

馋奶兔 submitted on 2019-12-11 03:49:53
Question: I allocate a multiprocessing.sharedctypes.Array in order to share it among processes. Wrapping this program is a generator that yields results computed from that array. Even before the array is used for any parallel computation, I encounter a segmentation fault after 2 iterations, which I suspect is caused by the deallocation mechanism between C and Python. I can reproduce the error using the following simple code snippet:

    import numpy as np
    from multiprocessing import Pool, sharedctypes

    def

Message passing between two programs

此生再无相见时 submitted on 2019-12-11 03:35:56
Question: I currently have two standalone C++ programs, a master and a slave. The master writes some data to shared memory using boost::interprocess and then launches the slave, which reads from that memory. What I would like is to have the slave running constantly, with the master sending it a message whenever the memory has been written and is ready to be read. The only way I can think of to achieve this is for the slave to constantly check the shared memory
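Since the memory is already managed with boost::interprocess, one sketch is to put an interprocess_mutex and interprocess_condition next to the data so the slave blocks instead of polling (the SharedState struct and function names below are illustrative; the master would construct the struct inside the segment, e.g. via managed_shared_memory::construct):

    #include <boost/interprocess/sync/interprocess_mutex.hpp>
    #include <boost/interprocess/sync/interprocess_condition.hpp>
    #include <boost/interprocess/sync/scoped_lock.hpp>

    namespace bip = boost::interprocess;

    struct SharedState {               // constructed in the shared segment
        bip::interprocess_mutex     mutex;
        bip::interprocess_condition cond;
        bool data_ready = false;
    };

    // Master, after writing the data:
    void notify(SharedState *s) {
        bip::scoped_lock<bip::interprocess_mutex> lock(s->mutex);
        s->data_ready = true;
        s->cond.notify_one();
    }

    // Slave, looping forever:
    void wait_for_data(SharedState *s) {
        bip::scoped_lock<bip::interprocess_mutex> lock(s->mutex);
        while (!s->data_ready)         // guard against spurious wakeups
            s->cond.wait(lock);
        s->data_ready = false;         // consume the notification
    }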

Python - OSError 24 (Too many open files) and shared memory

ぃ、小莉子 submitted on 2019-12-11 03:13:52
Question: I ran into a problem where an OSError 24 ("Too many open files") exception was raised by my Python script on Mac OS X, and I had no idea what could have caused it. lsof -p showed about 40-50 lines, and my ulimit was 1200 (I checked it with resource.getrlimit(resource.RLIMIT_NOFILE), which returned the tuple (1200, 1200)), so I was nowhere near the limit. My script spawned a number of subprocesses and also allocated shared memory segments. The exception occurred while allocating shared
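One descriptor-accounting detail that often matters here: a POSIX shared memory segment only needs its file descriptor between shm_open and mmap; the mapping stays valid after the descriptor is closed. A minimal C++ sketch of that pattern (the function name is illustrative, and this assumes POSIX shm rather than whatever mechanism the Python script used):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    void *map_segment(const char *name, size_t size) {
        // name should start with '/' for portability, e.g. "/my-seg".
        int fd = shm_open(name, O_CREAT | O_RDWR, 0600); // consumes one fd
        if (fd < 0) return nullptr;                      // EMFILE lands here
        ftruncate(fd, size);
        void *p = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        close(fd);   // the mapping survives; the descriptor is released
        return p == MAP_FAILED ? nullptr : p;
    }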

QSharedMemory is not getting deleted on application crash

杀马特。学长 韩版系。学妹 submitted on 2019-12-11 00:29:34
Question: I am implementing an application in Qt C++ in which I use QSharedMemory to prevent multiple instances of the application. The relevant code in main.cpp is as follows:

    QSharedMemory sharedMemory;
    sharedMemory.setKey(SM_INSTANCE_KEY);
    if (!sharedMemory.create(1)) {
        QMessageBox::warning(0, "Console",
            "An instance of this application is already running!");
        exit(0); /* Exit, already a process is running */
    }

On opening the application, I can see that a shared memory segment has been created
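On Unix, a segment left behind by a crashed instance makes create() fail from then on. A commonly used workaround, sketched below, is to attach() and immediately detach() before creating: detaching destroys a stale segment once no live process is attached. (This is not guaranteed race-free if two instances start at the same moment.)

    QSharedMemory sharedMemory(SM_INSTANCE_KEY);

    // If a previous instance crashed, the segment may still exist.
    // attach()+detach() releases it when no live process holds it.
    if (sharedMemory.attach())
        sharedMemory.detach();

    if (!sharedMemory.create(1)) {
        QMessageBox::warning(0, "Console",
            "An instance of this application is already running!");
        exit(0);
    }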

Is OpenMP atomic write needed if other threads read only the shared data?

不羁岁月 submitted on 2019-12-10 23:43:45
Question: I have an OpenMP parallel loop in C++ in which all threads access a shared array of doubles. Each thread writes only to its own partition of the array; two threads never write to the same entry. Each thread also reads from partitions written by the other threads. It does not matter whether the data has been updated yet by the owning thread, as long as each double read is either the old or the new value (and not an invalid value resulting from reading a half-written double). Do I need to use atomic writes and reads here?
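For reference, the pattern the question describes can be written with explicit atomic accesses, pairing each cross-thread store with #pragma omp atomic write and each load with #pragma omp atomic read so that no half-written double is ever observed. A minimal sketch (the partitioning scheme and the computed values are placeholders):

    #include <omp.h>

    void update(double *a, int n) {
    #pragma omp parallel
        {
            int tid = omp_get_thread_num();
            int nth = omp_get_num_threads();
            // Each thread writes only its own slots...
            for (int i = tid; i < n; i += nth) {
                double v = i * 0.5;          // placeholder computation
                #pragma omp atomic write
                a[i] = v;
            }
            // ...but may read slots written by any thread.
            double x;
            #pragma omp atomic read
            x = a[(tid + 1) % n];
            (void)x;
        }
    }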

Anonymous shared memory?

人走茶凉 submitted on 2019-12-10 20:45:55
Question: Is there a POSIX-y way of allocating shared memory that's not tied to a specific filename? I.e. memory that is shared between processes only by passing SCM_RIGHTS messages via UNIX domain sockets? Source: https://stackoverflow.com/questions/16560401/anonymous-shared-memory
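One portable sketch: shm_open followed by an immediate shm_unlink leaves a live descriptor with no filename attached, which can then be passed to other processes via SCM_RIGHTS (on Linux, memfd_create provides this directly; the fixed name below is a throwaway, and a real implementation would randomize it and retry on EEXIST):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int make_anon_shm(size_t size) {
        const char *name = "/anon-shm-tmp";   // throwaway, unlinked below
        int fd = shm_open(name, O_CREAT | O_EXCL | O_RDWR, 0600);
        if (fd < 0) return -1;
        shm_unlink(name);   // no filename remains; only the fd refers to it
        if (ftruncate(fd, size) < 0) { close(fd); return -1; }
        return fd;          // send this over a UNIX socket via SCM_RIGHTS
    }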

Is shmid returned by shmget() unique across processes?

落爺英雄遲暮 submitted on 2019-12-10 19:29:20
Question: This is something I can't quite figure out: if you call shmget() on Linux with the same key but in different processes, will you get back the same shmid or not? Is the shmid an ephemeral value, like a file descriptor number, or something you can persist across invocations? Answer 1: Yes, you will receive the same shmid. Shared memory descriptors are kernel-level, not process-level. ipcs -m lists shared memory segments. From man shmctl: A successful IPC_INFO or SHM_INFO operation returns the index of the highest used entry in the kernel's internal array recording information about all shared memory segments.
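A quick sketch of the implication, assuming a key derived with ftok from a file both processes can see: every process that runs this gets back the same system-wide id for the same key.

    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <stdio.h>

    int main() {
        // Any process running this derives the same key from the same file,
        // so shmget() returns the same kernel-level segment id in each one.
        key_t key = ftok("/tmp/shm-key-file", 'A');   // file must exist
        int shmid = shmget(key, 4096, IPC_CREAT | 0600);
        printf("shmid = %d\n", shmid);
        return 0;
    }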