shared-memory

Shared memory and IPC [closed]

*爱你&永不变心* submitted on 2019-12-04 06:29:39
I was reading a tutorial about shared memory and found the following statement: "If a process wishes to notify another process that new data has been inserted into the shared memory, it will have to use signals, message queues, pipes, sockets, or other types of IPC." So what is the main advantage of using shared memory plus another type of IPC for notification only, instead

Using shared memory and how to correctly deallocate a segment with IPC_RMID

删除回忆录丶 submitted on 2019-12-04 06:26:18
Question: I have 2 applications running on my Linux box, a server and a client. The server and client examples I am working with are from Dave Marshall's examples. Everything works well, but when I try this in my background process and I want to extend my original segment (perhaps due to an application upgrade in the future), I either have to change my key or somehow issue the shmctl(shmid, IPC_RMID, 0) call in my app. Since my app cannot exit gracefully and I cannot set this right at the beginning after

How and when to use /dev/shm for efficiency?

走远了吗. submitted on 2019-12-04 06:07:30
How is /dev/shm more efficient than writing the file on the regular file system? As far as I know, /dev/shm is also a space on the HDD, so the read/write speeds are the same. My problem is, I have a 96GB file and only 64GB RAM (+ 64GB swap). Then, multiple threads from the same process need to read small random chunks of the file (about 1.5MB). Is /dev/shm a good use case for this? Will it be faster than opening the file in read-only mode from /home and then handing it over to the threads to read the required random chunks? You don't use /dev/shm. It exists so that the POSIX C library
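The premise in the question is worth correcting with a sketch: on Linux, /dev/shm is a tmpfs mount, so its contents live in RAM (spilling to swap only under memory pressure), not on the HDD. The file name below is illustrative, not from the question.

```c
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

/* Files under /dev/shm live on tmpfs: reads and writes hit RAM,
 * not the disk. */
int devshm_demo(void) {
    char path[] = "/dev/shm/demoXXXXXX";  /* illustrative name */
    int fd = mkstemp(path);
    if (fd < 0)
        return -1;
    unlink(path);                  /* storage freed when fd/mapping go away */
    if (ftruncate(fd, 4096) != 0) {
        close(fd);
        return -1;
    }
    int *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    if (p == MAP_FAILED)
        return -1;
    p[0] = 42;                     /* this write never touches the HDD */
    int v = p[0];
    munmap(p, 4096);
    return v;
}
```

For the 96GB-file scenario, note that copying the file into tmpfs would consume that much RAM+swap; mmap()-ing the original file read-only and letting the page cache keep the hot chunks is the more usual approach.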

Share memory areas between celery workers on one machine

烂漫一生 submitted on 2019-12-04 05:29:05
I want to share small pieces of information between my worker nodes (for example cached authorization tokens, statistics, ...) in Celery. If I create a global inside my tasks file, it's unique per worker (my workers are processes and have a lifetime of 1 task/execution). What is the best practice? Should I save the state externally (DB), or create an old-fashioned shared memory (could be difficult because of the different pool implementations in Celery)? Thanks in advance! I finally found a decent solution - core Python multiprocessing Manager: from multiprocessing import Manager manag = Manager

Locking mechanisms for shared-memory consistency

怎甘沉沦 submitted on 2019-12-04 02:59:33
I'm developing a mechanism for interchanging data between two or more processes using shared memory on Linux. The problem is that some level of concurrency control is required to maintain data integrity in the shared memory itself, and since I'm expecting that at some time or another my process could be killed or crash, common lock mechanisms don't work because they could leave the memory in a "locked" state right after dying, making other processes hang waiting for the lock to be released. So, doing some research I've found that System V semaphores have a flag called SEM_UNDO which can revert the lock
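A sketch of the SEM_UNDO behavior the question is asking about (function names are mine): the child takes the lock and dies without releasing it, and the kernel's undo adjustment is what lets the parent acquire the lock afterwards instead of deadlocking.

```c
#include <sys/ipc.h>
#include <sys/sem.h>
#include <sys/wait.h>
#include <unistd.h>

union semun { int val; };          /* caller must define this on Linux */

static int sem_adj(int semid, int delta) {
    /* SEM_UNDO: the kernel records an undo adjustment and applies it
     * automatically if the process exits (or is killed) mid-lock. */
    struct sembuf op = { 0, (short)delta, SEM_UNDO };
    return semop(semid, &op, 1);
}

int undo_demo(void) {
    int semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
    if (semid < 0)
        return -1;
    union semun arg = { .val = 1 };
    semctl(semid, 0, SETVAL, arg);

    pid_t pid = fork();
    if (pid == 0) {                /* child takes the lock... */
        sem_adj(semid, -1);
        _exit(0);                  /* ...and dies without releasing it */
    }
    waitpid(pid, NULL, 0);
    int ok = (sem_adj(semid, -1) == 0);  /* would deadlock without SEM_UNDO */
    sem_adj(semid, +1);
    semctl(semid, 0, IPC_RMID);
    return ok;
}
```

One caveat: SEM_UNDO protects against a dead lock *holder*, but it cannot repair shared data that the holder had half-written when it died; that still needs application-level validation.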

Garbage collector in Ruby 2.2 provokes unexpected CoW

旧城冷巷雨未停 submitted on 2019-12-04 00:46:23
Question: How do I prevent the GC from provoking copy-on-write when I fork my process? I have recently been analyzing the garbage collector's behavior in Ruby, due to some memory issues that I encountered in my program (I run out of memory on my 60-core 0.5 TB machine even for fairly small tasks). For me this really limits the usefulness of Ruby for running programs on multicore servers. I would like to present my experiments and results here. The issue arises when the garbage collector runs during

Mapping non-contiguous blocks from a file into contiguous memory addresses

ε祈祈猫儿з submitted on 2019-12-04 00:22:46
I am interested in the prospect of using memory-mapped IO, preferably exploiting the facilities in boost::interprocess for cross-platform support, to map non-contiguous system-page-size blocks in a file into a contiguous address space in memory. A simplified concrete scenario: I have a number of 'plain-old-data' structures, each of a fixed length (less than the system page size). These structures are concatenated into a (very long) stream, with the type and location of structures determined by the values of those structures that precede them in the stream. I'm aiming to minimize latency and

C - fork() and sharing memory

不羁岁月 submitted on 2019-12-04 00:18:27
I need my parent and child process to both be able to read and write the same variable (of type int) so it is "global" between the two processes. I'm assuming this would use some sort of cross-process communication, with one variable on one process being updated. I did a quick Google search, and IPC and various techniques came up, but I don't know which is the most suitable for my situation. So which technique is best, and could you provide a link to a beginner's tutorial for it? Thanks. sum1stolemyname: Since you are mentioning fork(), I assume that you are living on a *nix system. From Unix.com: The

IPC mechanisms concepts

◇◆丶佛笑我妖孽 submitted on 2019-12-03 23:09:35
I want to understand these IPC mechanism concepts in an OS: Shared Memory, Message System, Sockets, RPC, RMI. How do different operating systems implement these, specifically the Android operating system? IPC (inter-process communication) in an OS is a large topic, so I don't think we can cover all of it here. Some low-level stuff: the IPC mechanism discussed here is at the lowest level; all other inter-CPU IPC mechanisms use it as the base. For example, a TCP/IP connection through the ARM11 processor to another processor ends up going through this IPC mechanism. Diagnostic messages are

Determine how many times file is mapped into memory

亡梦爱人 submitted on 2019-12-03 21:02:41
Is it possible to get the total number of memory maps on a specific file descriptor in Linux? For clarity, here is a small example of how I open/create the memory map: int fileDescriptor = open(mapname, O_RDWR | O_CREAT | O_EXCL, 0666); if(fileDescriptor < 0) return false; //Map Semaphore memorymap = mmap(NULL, sizeof(mapObject), PROT_READ | PROT_WRITE, MAP_SHARED, fileDescriptor, 0); close(fileDescriptor); The memory map is used by multiple processes. I have access to the code base of the other processes that are going to use this memory map. How can I get in a 100% correct way how many
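One Linux-specific approach is to count the mapping entries in /proc/*/maps; the sketch below (names and temp-file setup are mine) only scans the calling process's own table, and a system-wide count would iterate over /proc/&lt;pid&gt;/maps for every pid the same way. Note this counts mappings, not the state the asker may ultimately want, and it is inherently racy against concurrent mmap/munmap in other processes.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Count how many mappings of `path` the current process has by
 * scanning /proc/self/maps (Linux-specific). */
int count_own_mappings(const char *path) {
    FILE *maps = fopen("/proc/self/maps", "r");
    if (!maps)
        return -1;
    char line[512];
    int n = 0;
    while (fgets(line, sizeof line, maps))
        if (strstr(line, path))
            n++;
    fclose(maps);
    return n;
}

int mapping_demo(void) {
    char path[] = "/tmp/mapcountXXXXXX";  /* illustrative temp file */
    int fd = mkstemp(path);
    if (fd < 0)
        return -1;
    if (ftruncate(fd, 4096) != 0)
        return -1;
    void *a = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
    void *b = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
    int n = count_own_mappings(path);     /* each view is its own line */
    munmap(a, 4096);
    munmap(b, 4096);
    close(fd);
    unlink(path);
    return n;
}
```

For a robust "how many users" count across cooperating processes, a shared reference counter (incremented under a lock at attach, decremented at detach) is usually more dependable than scanning /proc.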