shared-memory

Trouble reading from Memory Mapped File

Submitted by Deadly on 2019-12-05 06:15:01
I am trying to implement a memory-mapped file (MMF) within my application (specifically a Windows service), and then use a C# form to read from the MMF that the service writes to. Unfortunately I cannot seem to get the form to read anything from the MMF; more importantly, it seems that the form never finds the MMF created by the service. Below are code snippets that outline what I'm doing. Can anyone see what I am doing wrong, or point me in a better direction?

Service:

    private MemoryMappedFile mmf = MemoryMappedFile.CreateOrOpen("AuditStream", 1024 * 1024);
    private Mutex mutex = new Mutex(false,

Share SciPy Sparse Array Between Process Objects

Submitted by 混江龙づ霸主 on 2019-12-05 05:32:53
I've recently been learning Python multiprocessing and have run into a roadblock. I have a large sparse SciPy array (CSC format) that I need to share, read-only, between 5 worker processes. I've read this and this (numpy-shared), but those approaches seem to apply only to dense arrays. How would I share a scipy.sparse.csc_matrix() without copying (or with minimal copying) between 5 multiprocessing Process objects? Even the numpy-shared method seems to require copying the entire array, and even then I can't just convert a scipy.sparse into an mp.Array(). Could anyone help point me in the right
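The excerpt is cut off above, but a CSC matrix is fully described by its data, indices and indptr arrays, so one possible approach (not from the original thread) is to place those three arrays in shared memory and rebuild a csc_matrix view around them in each worker. A rough sketch of that idea, using a Pool with an initializer rather than raw Process objects; the helper names are made up:

    import numpy as np
    import scipy.sparse as sp
    from multiprocessing import Pool, RawArray

    _shared = {}  # populated in each worker by the initializer

    def _to_raw(arr):
        # One copy into shared memory, done once in the parent.
        raw = RawArray(np.ctypeslib.as_ctypes_type(arr.dtype), arr.size)
        np.frombuffer(raw, dtype=arr.dtype)[:] = arr
        return raw, arr.dtype

    def _init(data, indices, indptr, shape):
        # Rebuild a csc_matrix around the shared buffers -- no further copying.
        _shared['m'] = sp.csc_matrix(
            (np.frombuffer(data[0], dtype=data[1]),
             np.frombuffer(indices[0], dtype=indices[1]),
             np.frombuffer(indptr[0], dtype=indptr[1])),
            shape=shape)

    def work(col):
        return _shared['m'][:, col].sum()

    if __name__ == '__main__':
        m = sp.random(1000, 1000, density=0.01, format='csc')
        args = (_to_raw(m.data), _to_raw(m.indices), _to_raw(m.indptr), m.shape)
        with Pool(processes=5, initializer=_init, initargs=args) as pool:
            print(pool.map(work, range(10)))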

How to get list of open posix shared memory segments in FreeBSD

Submitted by 吃可爱长大的小学妹 on 2019-12-05 05:16:15
In Linux I can get a list of open POSIX shared memory segments by listing the /dev/shm directory. How do I programmatically get a list of all open POSIX shared memory segments in FreeBSD? Assume the segments were opened with shm_open and that I don't know even part of the name that was passed as the first argument of shm_open.

You can't. See the comment in /sys/kern/uipc_shm.c:

    * TODO:
    *
    * (2) Need to export data to a userland tool via a sysctl. Should ipcs(1)
    *     and ipcrm(1) be expanded or should new tools to manage both POSIX
    *     kernel semaphores and POSIX shared memory be written?
    *
    * (3) Add support for

Using shared memory under Windows. How to pass different data

Submitted by 一个人想着一个人 on 2019-12-05 04:51:32
Question: I am currently trying to implement some interprocess communication using the Windows CreateFileMapping mechanism. I know that I need to create a file mapping object with CreateFileMapping first, and then create a pointer to the actual data with MapViewOfFile. The example then puts data into the mapfile by using CopyMemory. In my application I have an image buffer (1 MB large) which I want to send to another process. So now I acquire a pointer to the image and then copy the whole image buffer into
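The excerpt breaks off above, but to illustrate the named-mapping idea in this section's single example language: on Windows, Python's mmap module wraps the same CreateFileMapping/MapViewOfFile machinery, so a minimal sketch of a writer and a reader sharing a length-prefixed buffer could look as follows. The tag name "ImageShare" and the sizes are made up, the snippet is Windows-only, and in practice the two halves would live in separate processes:

    import mmap
    import struct

    MAP_NAME = "ImageShare"          # made-up tag; becomes the mapping's name
    MAP_SIZE = 4 + 1024 * 1024       # 4-byte length header + 1 MB payload

    # "Writer" side: fileno=-1 backs the mapping with the paging file, and
    # tagname turns it into a named mapping other processes can open.
    writer = mmap.mmap(-1, MAP_SIZE, tagname=MAP_NAME)
    image = b"\x7f" * (1024 * 1024)  # stand-in for the real image buffer
    writer[:4] = struct.pack("I", len(image))
    writer[4:4 + len(image)] = image

    # "Reader" side (normally a separate process): the same tagname maps
    # the same memory, and the header says how much payload to read.
    reader = mmap.mmap(-1, MAP_SIZE, tagname=MAP_NAME)
    size = struct.unpack("I", reader[:4])[0]
    data = reader[4:4 + size]
    assert data == image

The length header is one simple way to "pass different data" through the same mapping: the reader only consumes as many bytes as the writer declared.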

ftok() collisions

Submitted by 萝らか妹 on 2019-12-05 02:59:01
I am using ftok() to generate identifiers for shared memory segments used by a C application. I am having problems where, on one box, I am getting collisions with the identifiers used by root. I can fix it in this instance by hacking the code, but I would like a more robust solution. The application is installed into its own logical volume, and the path supplied to ftok is the binaries directory for the application (within that LV). The IDs supplied start at 1 and there are usually half a dozen or so. I've tracked down that ftok will do something like this: (id & 0xff) << 24 | (st.st_dev & 0xff
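The quoted construction is roughly what glibc does: only the low 8 bits of proj_id, the low 8 bits of the device number, and the low 16 bits of the inode number go into the key, which is why unrelated paths collide so easily. A small Python sketch (the path is just a stand-in for the application's binaries directory) that reproduces that construction so clashes can be checked ahead of time:

    import os

    def ftok_like(pathname, proj_id):
        # Roughly what glibc's ftok() does: low 16 bits of the inode,
        # low 8 bits of the device, low 8 bits of proj_id. Distinct
        # paths can therefore still produce the same key.
        st = os.stat(pathname)
        return ((proj_id & 0xFF) << 24) | ((st.st_dev & 0xFF) << 16) | (st.st_ino & 0xFFFF)

    # "/tmp" stands in for the application's binaries directory;
    # proj_ids 1..6 match the half dozen IDs mentioned above.
    for proj_id in range(1, 7):
        print(hex(ftok_like("/tmp", proj_id)))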

Determine how many times file is mapped into memory

Submitted by 核能气质少年 on 2019-12-05 02:23:50
Question: Is it possible to get the total number of memory mappings of a specific file descriptor in Linux? For clarity, here is a small example of how I open/create the memory map:

    int fileDescriptor = open(mapname, O_RDWR | O_CREAT | O_EXCL, 0666);
    if (fileDescriptor < 0)
        return false;

    // Map semaphore
    memorymap = mmap(NULL, sizeof(mapObject), PROT_READ | PROT_WRITE, MAP_SHARED, fileDescriptor, 0);
    close(fileDescriptor);

The memory map is used by multiple processes. I have access to the code base of the
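The question is cut off above, but one common way to see how many times a file is currently mapped is to walk /proc/<pid>/maps for every process and count the entries whose pathname matches the backing file. A rough Python sketch of that approach (not from the original answer; it needs permission to read other processes' maps files, and the file name used here is made up):

    import os

    def count_mappings(path):
        # Count /proc/<pid>/maps entries that reference `path` across all
        # processes we are allowed to inspect. A single process can map
        # the same file more than once, so this counts mappings, not processes.
        target = os.path.realpath(path)
        total = 0
        for pid in filter(str.isdigit, os.listdir('/proc')):
            try:
                with open('/proc/%s/maps' % pid) as maps:
                    total += sum(1 for line in maps if line.rstrip().endswith(target))
            except (PermissionError, FileNotFoundError):
                continue  # process exited, or we lack permission; skip it
        return total

    # "/dev/shm/mapObject" is a made-up name standing in for `mapname` above.
    print(count_mappings('/dev/shm/mapObject'))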

Shared memory vs. Go channel communication

Submitted by 拈花ヽ惹草 on 2019-12-04 23:58:52
One of Go's slogans is "Do not communicate by sharing memory; instead, share memory by communicating." I am wondering whether Go allows two different Go-compiled binaries running on the same machine to communicate with one another (i.e. client-server), and how fast that would be in comparison to boost::interprocess in C++. All the examples I've seen so far only illustrate communication between routines within the same program. A simple Go example (with separate client and server code) would be much appreciated! One of the first things I thought of when I read this was Stackless Python. The channels in Go

command to check status of message queue and shared memory in linux?

Submitted by 筅森魡賤 on 2019-12-04 21:10:46
Question: Sorry to ask such a silly question, as I am a noob in Unix. What are the Unix commands to find shared memory segments and message queues, and how do I kill them?

Answer 1: ipcs(1) provides information on the IPC facilities, and ipcrm(1) can be used to remove IPC objects from the system.

List shared memory segments: ipcs -m
List message queues: ipcs -q
Remove shared memory segment created with shmkey: ipcrm -M key
Remove shared memory segment identified by shmid: ipcrm -m id
Remove message queue created with

Python 2.6: Process local storage while using multiprocessing.Pool

Submitted by 假如想象 on 2019-12-04 19:28:26
I'm attempting to build a Python script that runs a pool of worker processes (using multiprocessing.Pool) across a large set of data. I want each process to have a unique object that gets used across multiple executions within that process. Pseudo code:

    def work(data):
        # connection should be unique per process
        connection.put(data)
        print 'work done with connection:', connection

    if __name__ == '__main__':
        pPool = Pool()  # pool of 4 processes
        datas = [1..1000]
        for process in pPool:
            # this is the part I'm asking about -- how do I really do this?
            process.connection = Connection(conargs)
        for data in datas:
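A common way to get one object per worker process (a sketch of one approach, not necessarily what the original answer proposed) is to create it in the Pool's initializer and store it in a module-level global: the initializer runs exactly once in each worker process, so every task executed by that process reuses the same object. Here Connection and conargs are stand-ins for the real per-process resource:

    import os
    from multiprocessing import Pool

    connection = None  # one per worker process, set by the initializer

    class Connection(object):
        # Stand-in for the real per-process resource; conargs is whatever
        # the real constructor needs.
        def __init__(self, conargs):
            self.conargs = conargs
        def put(self, data):
            print('pid %d put %r via %r' % (os.getpid(), data, self.conargs))

    def init_worker(conargs):
        global connection
        connection = Connection(conargs)  # runs once in each worker process

    def work(data):
        connection.put(data)  # every task in this worker reuses the same object
        return data

    if __name__ == '__main__':
        pool = Pool(4, initializer=init_worker, initargs=('conargs-placeholder',))
        pool.map(work, range(10))
        pool.close()
        pool.join()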

Python fork(): passing data from child to parent

Submitted by 走远了吗. on 2019-12-04 19:23:22
Question: I have a main Python process and a bunch of workers created by the main process using os.fork(). I need to pass large and fairly involved data structures from the workers back to the main process. What existing libraries would you recommend for that? The data structures are a mix of lists, dictionaries, numpy arrays, custom classes (which I can tweak), and multi-layer combinations of the above. Disk I/O should be avoided. If I could also avoid creating copies of the data -- for example by
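The excerpt ends mid-sentence, but one minimal baseline for getting a picklable structure from a fork()ed child back into the parent is an os.pipe() plus pickle. This does copy the data through the pipe, so it is only a sketch of the simplest option (not from the original answer), with child_work standing in for the real worker:

    import os
    import pickle

    def child_work():
        # Build the result in the child; anything picklable works
        # (dicts, lists, numpy arrays, tweakable custom classes, ...).
        return {'status': 'done', 'values': list(range(5))}

    read_fd, write_fd = os.pipe()
    pid = os.fork()

    if pid == 0:  # child: serialize the result and write it to the pipe
        os.close(read_fd)
        payload = pickle.dumps(child_work(), protocol=pickle.HIGHEST_PROTOCOL)
        with os.fdopen(write_fd, 'wb') as w:
            w.write(payload)
        os._exit(0)
    else:         # parent: read everything back and unpickle
        os.close(write_fd)
        with os.fdopen(read_fd, 'rb') as r:
            result = pickle.loads(r.read())
        os.waitpid(pid, 0)
        print(result)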