shared-memory

Why does the compiler optimize away shared memory reads due to strncmp() even if the volatile keyword is used?

好久不见 · Submitted on 2019-12-22 09:57:09
Question: Here is a program foo.c that writes data to shared memory. #include <stdio.h> #include <stdlib.h> #include <errno.h> #include <string.h> #include <stdint.h> #include <unistd.h> #include <sys/ipc.h> #include <sys/shm.h> int main() { key_t key; int shmid; char *mem; if ((key = ftok("ftok", 0)) == -1) { perror("ftok"); return 1; } if ((shmid = shmget(key, 100, 0600 | IPC_CREAT)) == -1) { perror("shmget"); return 1; } printf("key: 0x%x; shmid: %d\n", key, shmid); if ((mem = shmat(shmid, NULL, 0))

Function that multiprocesses another function

江枫思渺然 · Submitted on 2019-12-22 08:40:42
Question: I'm performing analyses of time series of simulations. Basically, it's doing the same tasks for every time step. As there is a very high number of time steps, and as the analysis of each of them is independent, I wanted to create a function that can multiprocess another function. The latter will have arguments, and return a result. Using a shared dictionary and the concurrent.futures library, I managed to write this: import concurrent.futures as Cfut def multiprocess_loop_grouped(function,

Are memory-mapped files thread-safe?

╄→尐↘猪︶ㄣ · Submitted on 2019-12-22 07:05:04
Question: I was wondering whether you could do multithreaded writes to a single file by using memory-mapped files and making sure that two threads don't write to the same area (e.g. by interleaving fixed-size records), thus alleviating the need for synchronization at the application level, i.e. without using critical sections or mutexes in my code. However, after googling for a bit, I'm still not sure. This link from Microsoft says: First, there is an obvious savings of resources because both

Shared map with boost::interprocess

可紊 · Submitted on 2019-12-22 06:29:58
Question: I have a simple requirement that might be tough to solve. I did find some leads like this or this, but I can't seem to readily use them. The former doesn't even translate into buildable code for me. I am not experienced enough with Boost to just write this on my own, but it seems to me this might be a common requirement. I have also come across Interprocess STL Map, but I have not yet been able to assemble it into working code. I am thinking boost::interprocess is the way to go here, unless I want to

How to use shared memory on Java threads?

我的梦境 · Submitted on 2019-12-22 05:17:06
Question: I am implementing a multi-threaded program in Java, where each thread is of type class Node extends Thread. All these classes generate certain values which will be used by other classes. For main it's easy to get the values from the generated threads, but from within a thread itself, how can I get the values of other threads? //Start the threads from a list of objects for (int i = 0; i < lnode.size(); i++) { lnode.get(i).start(); } thanks Answer 1: If you do something like: class

Are lock-free atomics address-free in practice?

拟墨画扇 · Submitted on 2019-12-22 04:49:11
Question: Boost.Interprocess is a wonderful library that simplifies the use of shared memory amongst different processes. It provides mutexes, condition variables, and semaphores, which allow for synchronization when writing to and reading from the shared memory. However, in some situations these (relatively) performance-intensive synchronization mechanisms are not necessary: atomic operations suffice for my use case, and will likely give much better performance. Unfortunately, Boost.Interprocess does

Share SciPy Sparse Array Between Process Objects

坚强是说给别人听的谎言 · Submitted on 2019-12-22 04:41:02
Question: I've recently been learning Python multiprocessing, and have run into a roadblock. I have a large sparse SciPy array (CSC format) that I need to share read-only between 5 worker processes. I've read this and this (numpy-shared), but these seem to apply only to dense types. How would I share a scipy.sparse.csc_matrix() without copying (or with minimal copying) between 5 multiprocessing Process objects? Even the numpy-shared method seems to require copying the entire array, and even

Shared memory vs. Go channel communication

寵の児 · Submitted on 2019-12-22 01:34:00
Question: One of Go's slogans is Do not communicate by sharing memory; instead, share memory by communicating. I am wondering whether Go allows two different Go-compiled binaries running on the same machine to communicate with one another (i.e. client-server), and how fast that would be in comparison to boost::interprocess in C++. All the examples I've seen so far only illustrate communication between routines of the same program. A simple Go example (with separate client and server code) would be much

Allocating a user defined struct in shared memory with boost::interprocess

ε祈祈猫儿з · Submitted on 2019-12-21 21:40:23
Question: I am trying to use boost::interprocess to allocate a very simple data structure in shared memory, but I cannot quite figure out how to use the boost::interprocess allocators to perform the memory allocations/deallocations within the shared memory segment, which I allocate as follows: using namespace boost::interprocess; shared_memory_object::remove("MySharedMem"); mSharedMemory = std::make_unique<managed_shared_memory>( open_or_create, "MySharedMem", 65536); I previously asked a similar question

What happens when two processes are trying to access a critical section with semaphore = 0?

别来无恙 · Submitted on 2019-12-21 21:27:10
Question: In my code I do the following initialization: struct PipeShm myPipe = { .init = 0 , .flag = FALSE , .mutex = NULL , .ptr1 = NULL , .ptr2 = NULL , .status1 = -10 , .status2 = -10 , .semaphoreFlag = FALSE }; int initPipe() { if (!myPipe.init) { myPipe.mutex = mmap (NULL, sizeof *myPipe.mutex, PROT_READ | PROT_WRITE,MAP_SHARED | MAP_ANONYMOUS, -1, 0); if (!sem_init (myPipe.mutex, 1, 0)) // semaphore is initialized to 0 { myPipe.init = TRUE; } else perror ("initPipe"); } return 1; // always