mmap

Mapping multiple data arrays to arbitrary fixed memory addresses

浪子不回头ぞ · submitted 2019-12-25 14:25:09
Question: I'm working on a program on a 64-bit Linux machine that needs to map multiple data arrays, of arbitrary length, to fixed memory addresses over which I have no control. I thought mmap() with MAP_FIXED and MAP_ANONYMOUS was the way to go, for example: mmap((void *) 0x401000, 0x18e, PROT_NONE, MAP_ANONYMOUS | MAP_FIXED, -1, 0); However, every time I call this function it returns MAP_FAILED. I've set fd to -1, which I know some systems require, and the address is a multiple of my page size …

How to get memory address from shm_open?

前提是你 · submitted 2019-12-25 00:21:56
Question: I want to share memory, using a file descriptor, with another process created via fork. The problem is that I get different address regions from mmap. I want mmap to return the same address value; only in that case can I be sure that I am really sharing the memory. It is probably possible to pass the MAP_FIXED flag to mmap, but how do I get a memory address from shm_open? Is it possible to share memory via shm_open at all? Maybe shmget must be used instead? This is the minimal working example: …

Why does mmap(2) with PROT_WRITE only require a readable fd?

别来无恙 · submitted 2019-12-24 15:33:46
Question: From the POSIX (IEEE Std 1003.1-2008) section on mmap: "The file descriptor fildes shall have been opened with read permission, regardless of the protection options specified." Why is that? It seems like a descriptor opened O_WRONLY and mapped with PROT_WRITE (and not PROT_READ) shouldn't be problematic with respect to permissions, right? Answer 1: But the next line states that: "If PROT_WRITE is specified, the application shall ensure that it has opened the file descriptor fildes with write permission" …

How can I detect the error when a Boost memory-mapped file allocates more disk space than is free on the HDD?

ぃ、小莉子 · submitted 2019-12-24 14:52:42
Question: In my modelling code I use Boost memory-mapped files to allocate large-ish arrays on disk. It works well, but I couldn't find a way to detect the situation in which I allocate an array larger than the free space on the disk drive. For example, the following code will execute happily (assuming I have less than 8E9 bytes of free space on my HDD): boost::iostreams::mapped_file_params file_params; file_params.path = path; file_params.mode = std::ios::in | std::ios::out; file_params.new_file_size = static …

Raspberry Pi ffmpeg video4linux2, v4l2 mmap no such device

江枫思渺然 · submitted 2019-12-24 07:58:08
Question: On my Raspberry Pi I've installed ffmpeg. First I type uv4l --driver raspicam --auto-video_nr --width 640 --height 480 --encoding jpeg to start the driver. Then I check that the device is registered: ls -la /dev/video* returns video0, so it is. Then I run the server: ffmpeg -v verbose -r 5 -s 640x480 -f video4linux2 -i /dev/video0 http://localhost/webcam.ffm — the camera lights up for a while, then turns off, and I get an error like the one below: [video4linux2, v4l2] mmap: …

Is mmap the best way to communicate between processes?

淺唱寂寞╮ · submitted 2019-12-24 07:30:08
Question: I use a file to communicate between a Python and a Ruby script. But we have mmap, so here are my questions: Can I do the same thing (communicate between processes) with mmap? What advantage does mmap give us over a physical file? Speedup? What would be the easiest way to communicate between two processes? What would be the fastest? Answer 1: One advantage of mmap over a physical file is indeed speedup, but almost anything is going to be faster than a physical file! The …

Concurrently writing to file while reading it out using mmap

烂漫一生 · submitted 2019-12-24 05:31:13
Question: The situation is this: a large buffer of data (large enough to exceed reasonable RAM consumption) is being generated by the program. The program concurrently serves a websocket, which lets a web client view a small subset of this buffer. To support the first goal, the file is written using standard methods (I use portable C stdio fopen and fwrite because they have been shown to be faster than various "pure C++" methods; that detail doesn't matter here). Data gets appended to the file; stdio …

When changing the file length do I need to remap all associated MappedByteBuffers?

匆匆过客 · submitted 2019-12-24 05:02:53
Question: I have a small, simple storage system accessible through memory-mapped files. Since I need to address more than 2 GB of space, I need a list of MappedByteBuffers, each with a fixed size such as 2 GB (I'm using less, for various reasons). Then everything is relatively simple: one buffer maps a certain region, say 1 GB; when I need more, I map a new MappedByteBuffer (the file grows automatically); when I need still more, a third buffer is mapped, and so on. This just worked. But then I read in the Java NIO …

NumPy mmap: “ValueError: Size of available data is not a multiple of data-type size.”

自古美人都是妖i · submitted 2019-12-24 02:42:31
Question: I'm trying to get data from "data.txt" into a NumPy array and plot it with matplotlib. This is what each line of the data looks like: "1" 11.658870417634 4.8159509459201 — and there are about ten million lines. I'm trying to get it into a memory map, but I keep getting this error: ValueError: Size of available data is not a multiple of data-type size. Here is the code I am using: import numpy import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt datatype=[('index',numpy.int), ('floati' …