shared-memory

Structures and vectors in Boost Shared Memory

Question: I am new to Boost. I have the following structure and want to store it in shared memory using Boost:

struct InData{ int x,y,h,w; char* lbl; };

In turn, this structure will be stored in a vector. Most examples cover int or string data types for vectors. I would appreciate an example of how to store a user-defined data type in Boost shared memory.

Answer 1: You can easily store UDTs in there; Boost's interprocess allocator will do the magic for you. However, storing raw pointers is not
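A minimal sketch of what the answer describes, assuming hypothetical names ("MySharedMemory", "InDataVector") and replacing the raw char* with an interprocess string so the label's storage also lives inside the managed segment:

#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/containers/vector.hpp>
#include <boost/interprocess/containers/string.hpp>
#include <boost/interprocess/allocators/allocator.hpp>
#include <string>

namespace bip = boost::interprocess;

// Allocators that allocate out of the shared-memory segment.
using SegmentManager = bip::managed_shared_memory::segment_manager;
using CharAlloc      = bip::allocator<char, SegmentManager>;
using ShmString      = bip::basic_string<char, std::char_traits<char>, CharAlloc>;

// Shared-memory-friendly version of InData: no raw pointer, the label is a
// string whose buffer is also allocated inside the segment.
struct InData {
    int x, y, h, w;
    ShmString lbl;
    InData(int x_, int y_, int h_, int w_, const char* s, const CharAlloc& a)
        : x(x_), y(y_), h(h_), w(w_), lbl(s, a) {}
};

using InDataAlloc = bip::allocator<InData, SegmentManager>;
using ShmVector   = bip::vector<InData, InDataAlloc>;

int main() {
    bip::managed_shared_memory seg(bip::open_or_create, "MySharedMemory", 65536);
    ShmVector* v = seg.find_or_construct<ShmVector>("InDataVector")(
        InDataAlloc(seg.get_segment_manager()));
    v->emplace_back(1, 2, 3, 4, "label", CharAlloc(seg.get_segment_manager()));
}

The important point is that every allocation the container makes, including the string's character buffer, goes through the segment's allocator, so another process that opens "MySharedMemory" and looks up "InDataVector" sees the same data.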

Linux shared memory: shmget() vs mmap()?

In this thread the OP is advised to use mmap() instead of shmget() to get shared memory in Linux. I visited this page and this page to get some documentation, but the second one gives an obscure example regarding mmap(). Being almost a newbie, and needing to share some information (in text form) between two processes, should I use the shmget() method or mmap()? And why?

Sergey L.: Both methods are viable. The mmap method is a little more restrictive than shmget, but easier to use. shmget is the old System V shared memory model and has the widest support. mmap/shm_open is the new POSIX
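For the POSIX route the answer mentions, a small sketch of shm_open() plus mmap() (the segment name "/demo_shm" and the 4 KiB size are placeholders; link with -lrt on older glibc):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstring>
#include <cstdio>

int main() {
    const char* name = "/demo_shm";            // any other process can shm_open the same name
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd == -1) { std::perror("shm_open"); return 1; }
    ftruncate(fd, 4096);                       // size the segment before mapping it

    char* p = static_cast<char*>(
        mmap(nullptr, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    if (p == MAP_FAILED) { std::perror("mmap"); return 1; }

    std::strcpy(p, "hello from process A");    // plain memory writes, visible to the other process

    munmap(p, 4096);
    close(fd);
    // shm_unlink(name);                       // call once, when the segment is no longer needed
    return 0;
}

The shmget()/shmat() equivalent needs a key (for example from ftok()) instead of a name, but the usage part is the same: once attached, it behaves as ordinary memory.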

boost::interprocess scoped_allocator AND Containers of containers NOT in shared memory

I have a question similar to my earlier ones, boost::interprocess Containers of containers NOT in shared memory and How do I create a boost interprocess vector of interprocess containers, but this time I would like to use my class, which uses a scoped_allocator, both on the heap and in shared memory. The solution to my first question was to use a template class with an allocator type. In my second question it turned out that using a scoped_allocator together with a container of containers within shared memory makes life easier. Now I would like to have both; is this possible? Attached is an example with a
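One way to get both, sketched with boost::container::scoped_allocator_adaptor and a class template parameterised on the allocator (the names and the int element type are illustrative, not the question's actual code):

#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/allocators/allocator.hpp>
#include <boost/container/vector.hpp>
#include <boost/container/scoped_allocator.hpp>
#include <memory>

namespace bip = boost::interprocess;
namespace bc  = boost::container;

// A vector of vectors whose allocator is a template parameter. Wrapping the
// outer allocator in scoped_allocator_adaptor makes the outer container pass
// its allocator down to every inner vector automatically.
template <class Alloc>
struct Data {
    using InnerAlloc = typename std::allocator_traits<Alloc>::template rebind_alloc<int>;
    using Inner      = bc::vector<int, InnerAlloc>;
    using OuterAlloc = bc::scoped_allocator_adaptor<
        typename std::allocator_traits<Alloc>::template rebind_alloc<Inner>>;
    using Outer      = bc::vector<Inner, OuterAlloc>;
};

int main() {
    // Heap instance: the same type, instantiated with std::allocator.
    Data<std::allocator<int>>::Outer on_heap;
    on_heap.emplace_back();        // inner vector is built with the heap allocator

    // Shared-memory instance: instantiated with the interprocess allocator.
    bip::managed_shared_memory seg(bip::open_or_create, "ScopedAllocDemo", 65536);
    using ShmAlloc = bip::allocator<int, bip::managed_shared_memory::segment_manager>;
    using ShmOuter = Data<ShmAlloc>::Outer;
    ShmOuter* in_shm = seg.find_or_construct<ShmOuter>("outer")(
        ShmAlloc(seg.get_segment_manager()));
    in_shm->emplace_back();        // scoped allocator propagates the segment allocator
}

The heap variant and the shared-memory variant are just two instantiations of the same template, which is the combination the question asks for.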

Memory model spec in pthreads

Question: Are there any guarantees on when a memory write in one thread becomes visible in other threads using pthreads? Compared to Java, the Java language specification has a section that specifies the interaction of locks and memory, which makes it possible to write portable multi-threaded Java code. Is there a corresponding pthreads spec? Sure, you can always go and make shared data volatile, but that is not what I'm after. If this is platform dependent, is there a de facto standard? Or should another
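There is: POSIX covers this in its "Memory Synchronization" section, which lists functions (pthread_mutex_lock(), pthread_mutex_unlock(), pthread_cond_wait(), pthread_create(), pthread_join(), and others) that synchronize memory with respect to other threads. A minimal illustration of what that buys you (names are made up):

#include <pthread.h>
#include <cstdio>

// POSIX's "Memory Synchronization" rules guarantee that writes made before
// pthread_mutex_unlock() are visible to any thread that later acquires the
// same mutex with pthread_mutex_lock().
static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static int shared_value = 0;            // only ever touched while holding mtx

void* writer(void*) {
    pthread_mutex_lock(&mtx);
    shared_value = 42;                  // ordinary write, no volatile needed
    pthread_mutex_unlock(&mtx);         // publishes the write
    return nullptr;
}

void* reader(void*) {
    pthread_mutex_lock(&mtx);           // acquiring the lock makes prior writes visible
    std::printf("seen: %d\n", shared_value);
    pthread_mutex_unlock(&mtx);
    return nullptr;
}

int main() {
    pthread_t a, b;
    pthread_create(&a, nullptr, writer, nullptr);
    pthread_create(&b, nullptr, reader, nullptr);
    pthread_join(a, nullptr);
    pthread_join(b, nullptr);
}

The scheduling order of the two threads is not guaranteed; what is guaranteed is that whatever value the reader observes under the lock is fully propagated, so locking, not volatile, is the portable mechanism.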

Is there a way in PHP to use persistent data as in Java EE? (sharing objects between PHP threads) without session nor cache/DB

Is there a way in PHP to use "out of session" variables, which would not be loaded/unloaded at every connection, like in a Java server? Please excuse the lack of precision; I can't figure out how to express it properly. The main idea would be to have something like this:

<?php
...
// $variablesAlreadyLoaded is kind of "static" and shared between all PHP threads.
// No need to initialize/load/instantiate it.
$myVar = $variablesAlreadyLoaded['aConstantValueForEveryone'];
...
?>

I already did things like this using shmop and other weird things, but if there is a "clean" way to do this

Does madvise(___, ___, MADV_DONTNEED) instruct the OS to lazily write to disk?

Question: Hypothetically, suppose I want to perform sequential writes to a potentially very large file. If I mmap() a gigantic region and madvise(MADV_SEQUENTIAL) on that entire region, then I can write to the memory relatively efficiently. I have gotten this to work just fine. Now, in order to free up various OS resources as I am writing, I occasionally perform a munmap() on small chunks of memory that have already been written to. My concern is that munmap() and msync() will block my thread
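For context, a rough sketch of the pattern being described: map a large file, hint sequential access, write, then tell the kernel that each finished chunk is no longer needed. The path and sizes are made up, and the use of MADV_DONTNEED in place of per-chunk munmap() is an illustrative assumption; on Linux, for a MAP_SHARED file mapping, MADV_DONTNEED drops the mapping's pages while dirty data still reaches the file through normal writeback.

#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstring>

int main() {
    const size_t file_size  = 256ull << 20;      // 256 MiB, arbitrary
    const size_t chunk_size = 16ull  << 20;      // release finished 16 MiB chunks

    int fd = open("/tmp/bigfile", O_CREAT | O_RDWR | O_TRUNC, 0644);
    ftruncate(fd, file_size);                    // size the backing file first

    char* base = static_cast<char*>(
        mmap(nullptr, file_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    madvise(base, file_size, MADV_SEQUENTIAL);   // hint: strictly sequential access

    for (size_t off = 0; off < file_size; off += chunk_size) {
        std::memset(base + off, 'x', chunk_size);        // "write" one chunk
        madvise(base + off, chunk_size, MADV_DONTNEED);  // done with this chunk's pages
    }

    munmap(base, file_size);
    close(fd);
}

msync() with MS_ASYNC is the explicit, non-blocking way to request writeback of a finished chunk, which is the alternative the question is weighing against unmapping.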

How to perform reduction on a huge 2D matrix along the row direction using cuda? (max value and max value's index for each row)

I'm trying to implement a reduction along the row direction of a 2D matrix. I'm starting from code I found on Stack Overflow (thanks a lot, Robert!): thrust::max_element slow in comparison cublasIsamax - More efficient implementation? The above link shows a custom kernel that performs a reduction on a single row. It divides the input row into many rows, and each row has 1024 threads. It works very well. For the 2D case, everything's the same except that now there's a y grid dimension, so each block's y dimension is still 1. The problem is that when I try to write data into shared memory within

Dynamically create a list of shared arrays using python multiprocessing

Question: I'd like to share several NumPy arrays between different child processes with Python's multiprocessing module. I'd like the arrays to be separately lockable, and I'd like the number of arrays to be determined dynamically at runtime. Is this possible? In this answer, J.F. Sebastian lays out a nice way to use NumPy arrays in shared memory with multiprocessing. The array is lockable, which is what I want. I would like to do something very similar, except with a variable number of

Share variables/memory between all PHP processes

Is it possible to share variables and arrays between all PHP processes without duplicating them? Using memcached, I think PHP duplicates the used memory: after $array = $memcache->get('array'); the variable $array contains a copy from memcached. So my idea is that there could be a static variable that is already defined and shared between all processes.

By default it's simply not possible. Every solution will always copy the content into the current scope, because if it didn't, there would be no way to access it. I don't know what exactly you want to do, but maybe you can do that "outside", for example as a gearman job, and

How do I synchronize access to shared memory in LynxOS/POSIX?

I am implementing two processes on a LynxOS SE (POSIX-conformant) system that will communicate via shared memory. One process will act as a "producer" and the other as a "consumer". In a multi-threaded system my approach would be to use a mutex and condvar (condition variable) pair, with the consumer waiting on the condvar (with pthread_cond_wait) and the producer signalling it (with pthread_cond_signal) when the shared memory is updated. How do I achieve this in a multi-process, rather than multi-threaded, architecture? Is there a LynxOS/POSIX way to create a condvar/mutex pair that
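The usual POSIX answer, sketched under the assumption that the platform supports the _POSIX_THREAD_PROCESS_SHARED option: place the mutex and condition variable themselves in the shared segment and initialise them with the PTHREAD_PROCESS_SHARED attribute (the segment name and layout below are illustrative):

#include <pthread.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>

// One block of shared memory holding the synchronisation objects and the data.
struct SharedBlock {
    pthread_mutex_t mtx;
    pthread_cond_t  cv;
    int             data_ready;
    char            payload[256];
};

// Producer creates and initialises the segment once.
SharedBlock* create_block() {
    int fd = shm_open("/prodcons_demo", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, sizeof(SharedBlock));
    SharedBlock* blk = static_cast<SharedBlock*>(
        mmap(nullptr, sizeof(SharedBlock),
             PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));

    pthread_mutexattr_t ma;
    pthread_mutexattr_init(&ma);
    pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);  // usable across processes
    pthread_mutex_init(&blk->mtx, &ma);

    pthread_condattr_t ca;
    pthread_condattr_init(&ca);
    pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);   // usable across processes
    pthread_cond_init(&blk->cv, &ca);

    blk->data_ready = 0;
    return blk;
}

// Consumer maps the same segment (shm_open without O_CREAT, then mmap)
// and waits exactly as it would in the multi-threaded case.
void consume(SharedBlock* blk) {
    pthread_mutex_lock(&blk->mtx);
    while (!blk->data_ready)                   // guard against spurious wakeups
        pthread_cond_wait(&blk->cv, &blk->mtx);
    // ... read blk->payload ...
    blk->data_ready = 0;
    pthread_mutex_unlock(&blk->mtx);
}

If process-shared condition variables are not available on the target, a common fallback is a POSIX named semaphore (sem_open) for the producer-to-consumer signal.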