shared-memory

garbage collection of shared data in multiprocessing via fork

Submitted by a 夏天 on 2019-12-11 17:50:05
Question: I am doing some multiprocessing on Linux, and I am using shared memory that is currently not explicitly passed to the child processes (i.e. not via an argument). The official Python multiprocessing programming guidelines, in the "Explicitly pass resources to child processes" section, say: "On Unix using the fork start method, a child process can make use of a shared resource created in a parent process using a global resource. However, it is better to pass the object as an argument to the constructor for the child process." …
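A minimal sketch of the pattern the guideline recommends: the shared array is created in the parent and handed to each child as an argument rather than reached through a module-level global. The worker function, sizes, and values here are illustrative, not taken from the question.

    import multiprocessing as mp

    def worker(shared_arr, index, value):
        # The child receives the shared resource explicitly, so the dependency
        # (and the lifetime of the resource) stays visible in the parent's code.
        shared_arr[index] = value

    if __name__ == "__main__":
        arr = mp.Array('d', 10, lock=True)        # created in the parent
        procs = [mp.Process(target=worker, args=(arr, i, float(i))) for i in range(10)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print(list(arr))                          # every slot was written by a child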

Right way to share an OpenCV video frame as a NumPy array between multiprocessing processes

Submitted by 早过忘川 on 2019-12-11 17:33:56
Question: I want to share my captured frame in OpenCV with my multiprocessing subprocess, but video_capture.read() creates a new object and doesn't write into my NumPy array, which I am going to share by wrapping it with multiprocessing.Array(). Here is the code: ret, frame = video_capture.read() shared_array = mp.Array(ctypes.c_uint16, frame.shape[0] * frame.shape[1], lock=False) while True: b = np.frombuffer(shared_array) ret, b = video_capture.read() But the buffer b gets overridden by the read() …
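The loop in the excerpt rebinds b to the brand-new array returned by read() instead of writing into the shared buffer. A sketch of one way to keep the shared memory updated, assuming 8-bit three-channel BGR frames (hence c_uint8 and frame.size rather than the c_uint16 two-dimensional buffer in the excerpt):

    import ctypes
    import multiprocessing as mp
    import numpy as np
    import cv2

    video_capture = cv2.VideoCapture(0)
    ret, frame = video_capture.read()             # grab one frame to learn shape and dtype

    shared_array = mp.Array(ctypes.c_uint8, frame.size, lock=False)
    shared_frame = np.frombuffer(shared_array, dtype=np.uint8).reshape(frame.shape)

    while True:
        ret, frame = video_capture.read()
        if not ret:
            break
        # Copy into the existing view; rebinding a name (b = ...) would leave
        # the shared buffer untouched.
        np.copyto(shared_frame, frame)

A child process that is handed shared_array and the frame shape can build the same np.frombuffer view and see each copied frame.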

Printing the same physical address in a C program

Submitted by 限于喜欢 on 2019-12-11 16:45:21
Question: Is there a way to print the same physical address in these programs (while using the shared memory concept) rather than printing different logical addresses? My reason for wanting to print the same physical address: ... /* It's optional to read this, since I have provided a lot of information that is not central to the question */ In my lab, I have two programs: one stores a string in physical memory via shared memory, and the other prints the same string by accessing that shared memory. Program 1: …

How to make sure child process finishes copying data into shared memory before join() is called?

Submitted by ↘锁芯ラ on 2019-12-11 16:11:48
Question: I am using multiprocessing.Process to load some images and store them in shared memory as explained here. The problem is that sometimes my code crashes due to a huge memory spike at completely random times. I just had an idea of what might be causing this: the process has not had enough time to copy the contents of the image into the shared memory in RAM by the time join() is called. To test my hypothesis I added time.sleep(0.015) after calling join() on each of my processes, and this has already …
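For what it's worth, join() only returns after the child process has exited, so any data the child copied into a multiprocessing shared buffer is already there when join() comes back. A small sketch of that ordering, with illustrative names and sizes:

    import multiprocessing as mp
    import numpy as np

    def load_image(shared_arr, shape):
        # Stand-in for loading a real image: fill the shared buffer, then exit.
        img = np.frombuffer(shared_arr, dtype=np.uint8).reshape(shape)
        img[:] = 255

    if __name__ == "__main__":
        shape = (480, 640, 3)
        buf = mp.Array('B', int(np.prod(shape)), lock=False)
        p = mp.Process(target=load_image, args=(buf, shape))
        p.start()
        p.join()                                  # returns only after the child exits
        view = np.frombuffer(buf, dtype=np.uint8).reshape(shape)
        assert view.max() == 255                  # the child's writes are visible here

If adding a sleep after join() changes the behaviour, the cause is probably elsewhere (for example the peak memory the children allocate while running), since join() is already a completion barrier.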

Why is there a sudden spike in memory usage when using multiprocessing.Process and shared memory?

Submitted by 蓝咒 on 2019-12-11 15:25:43
Question: I am running a Python (python3) script that spawns (using fork, not spawn) lots of processes through multiprocessing.Process (e.g. 20–30 of them) at the same time. I make sure all of these processes finish (.join()) and don't become zombies. However, even though I am running the same code with the same random seed, my job crashes due to a huge spike in memory usage at completely random times (memory usage goes up to a random value between 30 GB and 200 GB from the requested 14 GB all of …
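A layout that usually keeps fork-based workers cheap is sketched below: the shared buffer is allocated once before forking, so children write into inherited pages instead of building large private copies. Names and sizes are illustrative, and this does not claim to reproduce or fix the exact spike described.

    import multiprocessing as mp
    import numpy as np

    N_WORKERS, CHUNK = 20, 1_000_000

    # One allocation in the parent, before fork; children inherit this mapping.
    shared = mp.RawArray('d', N_WORKERS * CHUNK)

    def fill_chunk(worker_id):
        view = np.frombuffer(shared, dtype=np.float64)
        # Each child touches only its own slice and allocates no large
        # private arrays of its own.
        view[worker_id * CHUNK:(worker_id + 1) * CHUNK] = worker_id

    if __name__ == "__main__":
        procs = [mp.Process(target=fill_chunk, args=(i,)) for i in range(N_WORKERS)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()                              # all children reaped; no zombies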

Share variable (data from file) among multiple Python scripts without loading duplicates

Submitted by 时光毁灭记忆、已成空白 on 2019-12-11 12:54:33
Question: I would like to load a big matrix contained in matrix_file.mtx. This load must happen only once. Once the variable matrix is loaded into memory, I would like many Python scripts to share it without duplicates, in order to have a memory-efficient multi-script program in bash (or Python itself). I can imagine some pseudocode like this: # Loading and sharing script: import share matrix = open("matrix_file.mtx","r") share.send_to_shared_ram(matrix, as_variable('matrix')) # Shared matrix …
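One concrete way to get the behaviour this pseudocode asks for, assuming Python 3.8+ and a matrix that fits in a NumPy array, is multiprocessing.shared_memory: one script publishes the data under a name, and every other script attaches to it without reloading the file. The block name "matrix" and the loader are illustrative.

    # publish_matrix.py - run once; keeps the shared block alive while it waits
    import numpy as np
    from multiprocessing import shared_memory

    matrix = np.loadtxt("matrix_file.mtx")   # stand-in loader; use whatever parses the file
    shm = shared_memory.SharedMemory(name="matrix", create=True, size=matrix.nbytes)
    np.ndarray(matrix.shape, dtype=matrix.dtype, buffer=shm.buf)[:] = matrix

    # Every other script attaches by name (shape and dtype agreed on out of band):
    #   shm = shared_memory.SharedMemory(name="matrix")
    #   view = np.ndarray(shape, dtype=matrix.dtype, buffer=shm.buf)

    input("Matrix published as 'matrix'; press Enter to unlink it...")
    shm.close()
    shm.unlink()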

POSIX Shared Memory Sync Across Processes C++/C++11

Submitted by 孤街醉人 on 2019-12-11 11:26:21
Question: Problem (in short): I'm using POSIX shared memory and have so far used only POSIX semaphores, and I need to control multiple readers and multiple writers. I need help with what variables/methods I can use to control access within the limitations described below. I've found an approach that I want to implement, but I'm unsure of what methodology I can use to implement it with POSIX shared memory. What I've found: https://stackoverflow.com/a/28140784 This link has the algorithm I'd like to use …
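The counting idea in the linked answer (readers increment a counter behind a mutex, the first reader locks writers out and the last one lets them back in) is language-agnostic. As a neutral illustration only, not the C++/POSIX implementation the question asks for, here is that pattern with Python multiprocessing primitives, assuming the fork start method on Linux:

    import multiprocessing as mp
    import time

    write_lock = mp.Semaphore(1)          # held by a writer, or by the group of readers
    count_lock = mp.Lock()                # protects the reader count
    reader_count = mp.Value('i', 0, lock=False)

    def reader(i):
        with count_lock:
            reader_count.value += 1
            if reader_count.value == 1:   # first reader locks writers out
                write_lock.acquire()
        time.sleep(0.01)                  # ... read the shared segment here ...
        with count_lock:
            reader_count.value -= 1
            if reader_count.value == 0:   # last reader lets writers back in
                write_lock.release()

    def writer(i):
        with write_lock:                  # exclusive access for the writer
            time.sleep(0.01)              # ... write to the shared segment here ...

    if __name__ == "__main__":
        procs = [mp.Process(target=reader, args=(i,)) for i in range(4)]
        procs += [mp.Process(target=writer, args=(i,)) for i in range(2)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()

Note that this simple variant can starve writers if readers keep arriving.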

Size of the Boost shared vector keeps fluctuating

Submitted by 雨燕双飞 on 2019-12-11 11:09:36
Question: I'm using a Boost-based shared vector for IPC in my application. In the application where I'm trying to read the shared memory, the size of the memory, m_size, or vector->size, keeps fluctuating between 2 (i.e. the number of vectors I'm sharing) and 0. I have no idea why this is happening. Maybe it's a synchronization issue? But even if that is the case, the size of the memory should not drop to 0, as the reader is just reading whatever is there in the memory. It may be invalid (i.e. old data), but …

shmget size limit issue

Submitted by 落爺英雄遲暮 on 2019-12-11 11:03:45
Question: I have this snippet of code: if ((shmid = shmget(key, 512, IPC_CREAT | 0666)) < 0) { perror("shmget"); exit(1); } Whenever I set the number any higher than 2048, I get an error that says: shmget: Invalid argument However, when I run cat /proc/sys/kernel/shmall, I get 4294967296. Does anybody know why this is happening? Thanks in advance! Answer 1: The comment from Jerry is correct, even if cryptic if you haven't played with this stuff: "What about this: EINVAL: ... a segment with given key …

Shared memory of std::string gives segmentation fault (Linux)

Submitted by 冷暖自知 on 2019-12-11 10:47:25
Question: I am currently trying to put structures in shared memory between two processes on Linux. I have no problem sharing a bool or an int, but when I try to share a string, std::string or char, I get a segmentation fault. Right now my code is: #include <iostream> #include <sys/types.h> //shmat #include <sys/shm.h> #include <sys/stat.h> //open #include <fcntl.h> #include <unistd.h> //close using namespace std; struct Prises{ int numero; int transactionId; bool reservation; bool charge; bool …