shared-memory

Shared Memory Segment in Operating System

吃可爱长大的小学妹 submitted on 2019-12-05 20:11:45
Where does shared memory belong? Is it owned by each individual process, like the stack and the heap, so that other programs cannot access it, just as they cannot access another program's stack? Or is it a common segment of memory that can be used by any number of processes? The figure below shows my question diagrammatically.

Figure 1:

-----------------      -----------------      -----------------
|     stack     |      |     stack     |      |     stack     |
|               |      |               |      |               |
|  Shared m/y   | ---> |  Shared m/y   | <--- |  Shared m/y   |
|               |      |               |      |               |
|     heap      |      |     Heap      |      |     Heap      |
|               |      |               |      |               |
| Data segment  |      | Data segment  |      | Data segment  |
|               |      |               |      |               |
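For concreteness, here is a minimal sketch of the second interpretation (POSIX shm_open/mmap, assuming Linux or another POSIX system; the segment name and size are made up). The shared segment is a kernel-managed object that each process maps into its otherwise private address space:

// Parent creates and writes a shared segment; the forked child sees the write
// because both processes map the same kernel object with MAP_SHARED.
#include <sys/mman.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    const char* name = "/demo_shm";                  // illustrative segment name
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600); // error checks omitted for brevity
    ftruncate(fd, 4096);                             // size the segment to one page
    char* mem = static_cast<char*>(
        mmap(nullptr, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));

    if (fork() == 0) {                               // child process
        sleep(1);                                    // crude wait for the parent's write
        std::printf("child sees: %s\n", mem);
        return 0;
    }
    std::strcpy(mem, "hello from parent");           // parent writes into the shared page
    wait(nullptr);
    shm_unlink(name);                                // remove the segment name
    return 0;
}

(Compile as C++; on some Linux systems you also need to link with -lrt.)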

Locking mechanisms for shared-memory consistency

梦想与她 submitted on 2019-12-05 19:00:13
Question: I'm developing a mechanism for interchanging data between two or more processes using shared memory on Linux. The problem is that some level of concurrency control is required to maintain data integrity on the shared memory itself, and since I'm expecting that sooner or later one of my processes could be killed or crash, common lock mechanisms don't work: they could leave the memory in a "locked" state right after the process dies, making the other processes hang while waiting for the lock to be released. So, doing some
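One commonly suggested remedy for exactly this failure mode is a process-shared robust mutex stored inside the shared segment itself: if the owner dies while holding it, the next locker gets EOWNERDEAD instead of blocking forever and can repair the data. A minimal sketch, assuming POSIX threads with robust-mutex support and an illustrative SharedRegion layout:

#include <pthread.h>
#include <cerrno>

struct SharedRegion {                        // assumed layout placed inside the shm segment
    pthread_mutex_t lock;
    int data;
};

void init_lock(SharedRegion* r) {            // run once by the creating process
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutexattr_setrobust(&attr, PTHREAD_MUTEX_ROBUST);
    pthread_mutex_init(&r->lock, &attr);
    pthread_mutexattr_destroy(&attr);
}

void increment(SharedRegion* r) {
    int rc = pthread_mutex_lock(&r->lock);
    if (rc == EOWNERDEAD) {                  // previous owner died while holding the lock
        r->data = 0;                         // restore whatever invariants apply here
        pthread_mutex_consistent(&r->lock);  // mark the mutex usable again
    }
    r->data++;                               // critical section
    pthread_mutex_unlock(&r->lock);
}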

Function that multiprocesses another function

吃可爱长大的小学妹 submitted on 2019-12-05 18:06:46
I'm performing analyses of time series of simulations. Basically, the same tasks are done for every time step. As there is a very high number of time steps, and as the analysis of each of them is independent, I wanted to create a function that can multiprocess another function. The latter will have arguments and return a result. Using a shared dictionary and the concurrent.futures library, I managed to write this:

import concurrent.futures as Cfut

def multiprocess_loop_grouped(function, param_list, group_size, Nworkers, *args):
    # function : function that is running in parallel
    # param_list :

php: delete shared memory on windows

拟墨画扇 submitted on 2019-12-05 14:01:57
This code:

shmop_delete();
shmop_close();

doesn't delete shared memory. An experiment:

$shmid = @shmop_open(1234, 'a', 0, 0);
var_dump($shmid);

yields bool(false), of course. But:

$shmid = shmop_open(5678, 'c', 0644, 10);
...
shmop_delete($shmid);
shmop_close($shmid);
...
$shmid = @shmop_open(5678, 'a', 0, 0);
var_dump($shmid);

yields int(157). Why is it not deleted yet? How can I delete shared memory? I'm running Apache on Windows 7.

SHM is not natively available on Windows, so PHP tries to emulate it in its "thread safe resource manager" (TSRM) by using Windows File Mappings internally, which is an

How to use shm pixmap with xcb?

五迷三道 submitted on 2019-12-05 11:10:48
I am trying to learn how to use shared memory pixmaps with the xcb library. Do any of you have experience with this and want to share example code and/or information? That would be very helpful. Thanks.

After some research I found out how to use shared memory pixmaps in xcb. Here is my test code:

#include <stdlib.h>
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <xcb/xcb.h>
#include <xcb/shm.h>
#include <xcb/xcb_image.h>

#define WID 512
#define HEI 512

int main(){
    xcb_connection_t* connection;
    xcb_window_t window;
    xcb_screen_t* screen;
    xcb_gcontext_t gcontext;
    xcb_generic_event

Are memory-mapped files thread safe

我的梦境 submitted on 2019-12-05 11:08:20
I was wondering whether you could do multithreaded writes to a single file by using memory-mapped files, making sure that two threads never write to the same area (e.g. by interleaving fixed-size records), thus alleviating the need for synchronization at the application level, i.e. without using critical sections or mutexes in my code. However, after googling for a bit, I'm still not sure. This link from Microsoft says: First, there is an obvious savings of resources because both processes share both the physical page of memory and the page of hard disk storage used to back the memory
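As an illustration of the interleaved fixed-size-record idea (not taken from the linked article), here is a sketch in which two threads write only to disjoint records of one POSIX mapping, so the writes themselves need no application-level lock; the file name and record size are made up:

#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstring>
#include <thread>

constexpr size_t kRecord  = 64;     // assumed fixed record size
constexpr size_t kRecords = 1024;

int main() {
    int fd = open("records.bin", O_CREAT | O_RDWR, 0600);   // illustrative file
    ftruncate(fd, kRecord * kRecords);
    char* base = static_cast<char*>(mmap(nullptr, kRecord * kRecords,
                                         PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));

    // Each thread owns every second record, so their byte ranges never overlap.
    auto writer = [base](size_t first) {
        for (size_t i = first; i < kRecords; i += 2)
            std::memset(base + i * kRecord, 'A' + static_cast<int>(first), kRecord);
    };
    std::thread t0(writer, 0), t1(writer, 1);
    t0.join(); t1.join();

    munmap(base, kRecord * kRecords);
    close(fd);
    return 0;
}

This only removes locking for the writes themselves; deciding up front which thread owns which record is still the application's job.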

How to use shared memory on Java threads?

。_饼干妹妹 submitted on 2019-12-05 10:40:45
I am implementing a multi-threaded program in Java, where each thread is of type class Node extends Thread. All these classes generate certain values which will be used by other classes. From main it's easy to get the values from the started threads, but from within the threads themselves, how can I get the values held by other threads?

//Start the threads from a list of objects
for (int i = 0; i < lnode.size(); i++) {
    lnode.get(i).start();
}

Thanks.

If you do something like:

class MyThreadRunnable implements Runnable {
    List<String> strings;
    MyThreadRunnable(List<String> strings) {
        this.strings =

Mapping non-contiguous blocks from a file into contiguous memory addresses

时光怂恿深爱的人放手 submitted on 2019-12-05 10:38:12
Question: I am interested in the prospect of using memory-mapped I/O, preferably exploiting the facilities in boost::interprocess for cross-platform support, to map non-contiguous system-page-size blocks of a file into a contiguous address space in memory. A simplified concrete scenario: I have a number of 'plain-old-data' structures, each of a fixed length (less than the system page size). These structures are concatenated into a (very long) stream, with the type and location of the structures determined by the
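Setting boost::interprocess aside, the mechanism usually used for this is to reserve one contiguous address range and then map each page-aligned file block over it at a fixed address. A POSIX-level sketch of that idea (plain mmap with MAP_FIXED rather than boost; error handling trimmed), assuming every requested offset is page-aligned:

#include <sys/types.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>
#include <vector>

// Map one page-sized block for each entry in 'offsets' so they appear
// back-to-back starting at the returned address. Returns nullptr on failure.
char* map_blocks_contiguously(int fd, const std::vector<off_t>& offsets) {
    const size_t page  = static_cast<size_t>(sysconf(_SC_PAGESIZE));
    const size_t total = page * offsets.size();

    // Reserve the whole window first so the fixed mappings cannot collide
    // with anything else already living in the address space.
    char* base = static_cast<char*>(mmap(nullptr, total, PROT_NONE,
                                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
    if (base == MAP_FAILED) return nullptr;

    for (size_t i = 0; i < offsets.size(); ++i) {
        void* slot = base + i * page;                      // next slot in the window
        if (mmap(slot, page, PROT_READ, MAP_SHARED | MAP_FIXED,
                 fd, offsets[i]) == MAP_FAILED) {
            munmap(base, total);
            return nullptr;
        }
    }
    return base;   // blocks at offsets[0], offsets[1], ... are now contiguous here
}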

boost removing managed_shared_memory when process is attached

不想你离开。 submitted on 2019-12-05 07:10:26
Question: I have two processes. Process 1 creates a boost managed_shared_memory segment and process 2 opens this segment. Process 1 is then restarted, and the start of process 1 has the following:

struct vshm_remove {
    vshm_remove() {
        boost::interprocess::shared_memory_object::remove("VMySharedMemory");
    }
    ~vshm_remove() {
        boost::interprocess::shared_memory_object::remove("VMySharedMemory");
    }
} vremover;

I understand that when process 1 starts or ends, the remove method will be called on my shared memory
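For context, a hedged sketch of the two sides follows (not the asker's full code; the "Counter" object and the 64 KiB size are illustrative). On POSIX systems shared_memory_object::remove works like unlink: it deletes the segment's name, while a process that is already attached keeps a valid mapping until its own managed_shared_memory object is destroyed; only a later open_only by that name would fail:

#include <boost/interprocess/managed_shared_memory.hpp>
#include <iostream>

namespace bip = boost::interprocess;

void process1_create() {
    bip::shared_memory_object::remove("VMySharedMemory");   // drop any stale segment
    bip::managed_shared_memory segment(bip::create_only, "VMySharedMemory", 65536);
    segment.construct<int>("Counter")(0);                   // a named object in the segment
}

void process2_attach() {
    bip::managed_shared_memory segment(bip::open_only, "VMySharedMemory");
    auto res = segment.find<int>("Counter");                 // pair of pointer and count
    if (res.first)
        std::cout << "counter = " << *res.first << '\n';
    // If process 1 calls remove() now, this existing mapping stays usable;
    // only the name disappears, so a fresh open_only would fail.
}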

Cannot append items to multiprocessing shared list

主宰稳场 submitted on 2019-12-05 06:28:20
I'm using multiprocessing to create a sub-process for my application, and I share a dictionary between the process and the sub-process. Example of my code:

Main process:

from multiprocessing import Process, Manager

manager = Manager()
shared_dict = manager.dict()
p = Process(target=mysubprocess, args=(shared_dict,))
p.start()
p.join()
print shared_dict

My sub-process:

def mysubprocess(shared_dict):
    shared_dict['list_item'] = list()
    shared_dict['list_item'].append('test')
    print shared_dict

In both cases the printed value is: {'list_item': []} What could be the problem? Thanks.

Manager.dict will