mmap

Why can we mmap a file but exceed the file size?

一世执手 submitted on 2019-12-11 16:26:43
Question: For example:

fd = ::open("/test.txt", O_RDONLY, 0);
struct stat buf;
fstat(fd, &buf);
char* addr = (char*)::mmap(NULL, buf.st_size + 10, PROT_READ, MAP_PRIVATE | MAP_POPULATE, fd, 0);

Notice that I mapped the file size + 10 here, but it still works. Why doesn't the system apply any check? Is it dangerous? Thanks.

Answer 1: The signature of mmap is:

void *mmap(void *addr, size_t length, int prot, int flags, int fd, off_t offset);

To quote Michael Kerrisk: The length argument specifies the size of the mapping in bytes.
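A minimal sketch (not from the original post) of why the over-long mapping appears to work: mmap() rounds length up to a whole page, so a small overshoot past EOF lands in the zero-filled tail of the last page, while touching pages that lie entirely beyond EOF raises SIGBUS. The file path here is hypothetical.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("/tmp/test.txt", O_RDONLY);   /* hypothetical path */
    if (fd < 0) return 1;

    struct stat st;
    fstat(fd, &st);

    /* Map 10 bytes more than the file actually holds. */
    size_t len = (size_t)st.st_size + 10;
    char *addr = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
    if (addr == MAP_FAILED) return 1;

    /* Bytes between EOF and the end of the last mapped page read as 0
       (assumes st_size is not an exact multiple of the page size;
       otherwise this very access would raise SIGBUS). */
    printf("byte past EOF: %d\n", addr[st.st_size]);

    munmap(addr, len);
    close(fd);
    return 0;
}
```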

Why isn't mmap closing the associated file (getting PermissionError: [WinError 32])?

本秂侑毒 submitted on 2019-12-11 15:33:54
Question: While experimenting with some of the code from the Reading Binary Data into a Mutable Buffer section of the O'Reilly website, I added a line at the end to remove the test file that was created. However, this always results in the following error: PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'data'. I don't understand this behavior, because the with memory_map(test_filename) as m: block should implicitly close the associated file, but
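A sketch of the likely culprit, assuming the Cookbook-style memory_map helper the recipe uses: mmap.mmap() duplicates the file descriptor internally, so the fd returned by os.open() must be closed explicitly, or the file stays open on Windows and os.remove() fails with WinError 32. This is a reconstruction with the fix added, not the recipe's exact code.

```python
import mmap
import os

def memory_map(filename, access=mmap.ACCESS_WRITE):
    size = os.path.getsize(filename)
    fd = os.open(filename, os.O_RDWR)
    try:
        return mmap.mmap(fd, size, access=access)
    finally:
        # Safe to close: mmap holds its own duplicate of fd. Without
        # this close, the original descriptor leaks and Windows keeps
        # the file "in use" even after the with-block closes the map.
        os.close(fd)
```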

Is there a penalty for accesses to virtual addresses which are mapped to the same physical address?

情到浓时终转凉″ submitted on 2019-12-11 11:48:05
Question: Given the separation between the virtual addresses that processes manipulate and the physical addresses that represent an actual location in memory, you can play some interesting tricks, such as creating a circular buffer without a discontinuity at the beginning/end of the allocated space. I would like to know whether such mapping tricks carry a penalty for data read or write access in the case that access to the physical page is mostly through the same virtual mapping but only occasionally through the
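For context, a sketch of the circular-buffer trick the question alludes to (sometimes called a "magic ring buffer"): the same physical pages are mapped twice, back to back, so reads and writes that run off the end wrap around with no modulo arithmetic. This uses the Linux-specific memfd_create(); error handling is trimmed.

```c
#define _GNU_SOURCE
#include <sys/mman.h>
#include <unistd.h>

char *ring_alloc(size_t size) {          /* size must be page-aligned */
    int fd = memfd_create("ring", 0);
    ftruncate(fd, size);

    /* Reserve 2*size of contiguous address space, then overlay both
       halves with the same file pages. */
    char *base = mmap(NULL, 2 * size, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    mmap(base,        size, PROT_READ | PROT_WRITE,
         MAP_SHARED | MAP_FIXED, fd, 0);
    mmap(base + size, size, PROT_READ | PROT_WRITE,
         MAP_SHARED | MAP_FIXED, fd, 0);

    close(fd);
    return base;  /* base[i] and base[i + size] alias the same byte */
}
```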

Reasonable valid start address for an mmap address hint so as to be guaranteed that things work

99封情书 submitted on 2019-12-11 10:42:16
Question: In one of our assignments we are required to build a distributed shared memory between 2 machines. I was using a paging-based technique such that the base addresses are different on the two machines, but there is a linked-list test case that almost mandates that both address ranges be the same. mmap's fixed address using MAP_FIXED causes the slave machine to crash (because the stack of the reply-server thread was getting overwritten). I figured that creating an address that is guaranteed
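A sketch of one safer approach, assuming Linux 4.17 or later: pass the desired base with MAP_FIXED_NOREPLACE, which fails with EEXIST instead of silently clobbering whatever already occupies that range (such as the thread stack that crashed the slave). The hint address below is arbitrary and hypothetical.

```c
#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    void *hint = (void *)0x7f0000000000;  /* hypothetical shared base */
    void *p = mmap(hint, 1 << 20, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE,
                   -1, 0);
    if (p == MAP_FAILED)
        perror("mmap");     /* EEXIST: something already lives there */
    else
        printf("mapped at %p\n", p);
    return 0;
}
```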

mmap() device memory into user space

怎甘沉沦 submitted on 2019-12-11 09:02:45
Question: Say we do an mmap() system call and map some PCIe device memory (like GPU memory) into user space; the application can then access that memory region on the device without any OS overhead. Data can be copied from the file-system buffer directly to device memory without any other copy. The above statement must be wrong... Can anyone tell me where the flaw is? Thanks! Answer 1: For a normal device, what you have said is correct. If the GPU memory behaves differently for reads/writes, they might do this. We
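A sketch of what such a user-space mapping looks like on Linux, using the sysfs resource file that exposes a device's BAR; the device path and BAR size below are hypothetical, and this bypasses the kernel driver entirely, which is the "no OS overhead" situation the question describes.

```c
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    /* resource0 corresponds to the device's first BAR. */
    int fd = open("/sys/bus/pci/devices/0000:01:00.0/resource0", O_RDWR);
    if (fd < 0) return 1;

    size_t bar_size = 4096;               /* assumed; read from sysfs */
    volatile uint32_t *regs = mmap(NULL, bar_size,
                                   PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
    if (regs == MAP_FAILED) return 1;

    uint32_t v = regs[0];                 /* uncached MMIO read */
    (void)v;
    munmap((void *)regs, bar_size);
    close(fd);
    return 0;
}
```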

ARM Linux userspace GPIO operations using the mmap /dev/mem approach (able to write to GPIO registers, but failing to read from them)

那年仲夏 submitted on 2019-12-11 08:54:05
Question: Kernel version 3.12.30-AM335x-PD15.1.1 by PHYTEC. If I use the /sys/class/gpio way, I can see that the button input pin (gpio103 of the AM3359) value changes from 0 to 1. Following this exercise http://elinux.org/EBC_Exercise_11b_gpio_via_mmap and executing the command below to read GPIO pins using the /dev/mem approach: `devmem2 0x481ae13c` (the base of GPIO bank 3, which is 0x481ae000, plus the 0x13c dataout offset), I get the output below regardless of the button position. /dev/mem opened Memory mapped at
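A sketch, assuming the AM335x TRM register layout: for an input pin the sampled level lives in GPIO_DATAIN (offset 0x138), while 0x13c is GPIO_DATAOUT and only reflects values software wrote, which would explain a constant reading. Bank base 0x481ae000 is taken from the question.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define GPIO3_BASE   0x481ae000u   /* GPIO bank 3, as in the question */
#define GPIO_DATAIN  0x138         /* assumed input-level register    */
#define GPIO_PAGE    4096

int main(void) {
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) return 1;

    volatile uint8_t *gpio = mmap(NULL, GPIO_PAGE,
                                  PROT_READ | PROT_WRITE, MAP_SHARED,
                                  fd, GPIO3_BASE);
    if (gpio == MAP_FAILED) return 1;

    uint32_t datain = *(volatile uint32_t *)(gpio + GPIO_DATAIN);
    printf("gpio103 level: %u\n", (datain >> 7) & 1); /* 103 = 3*32+7 */

    munmap((void *)gpio, GPIO_PAGE);
    close(fd);
    return 0;
}
```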

mmap returning ENOMEM

僤鯓⒐⒋嵵緔 submitted on 2019-12-11 08:53:03
Question: I read every post linked to this topic and didn't find anything that solved my problem. I'm trying to map 900 MB MAP_SHARED, backed by an underlying file. We have 2 GB of RAM and only 2 GB of virtual memory available (1 GB reserved for the kernel + 1 GB reserved for the hypervisor). ptr = mmap(NULL, size, PROT_READ, MAP_SHARED, fd, 0); I suspected a lack of available virtual memory, so I tried "sysctl -w vm.overcommit_memory=1", but it failed with that too. I made a little loop (below) to
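The asker's loop is cut off in this digest; the following is a hypothetical probe in the same spirit, halving the request until mmap() succeeds to find the largest contiguous region the address space can still provide. On a 32-bit VM with only 2 GB of usable virtual memory, ENOMEM usually signals address-space exhaustion or fragmentation rather than a shortage of RAM.

```c
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    size_t size = 900u * 1024 * 1024;  /* 900 MB, as in the question */
    while (size > 0) {
        void *p = mmap(NULL, size, PROT_READ,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p != MAP_FAILED) {
            printf("largest contiguous mapping: %zu bytes\n", size);
            munmap(p, size);
            return 0;
        }
        size /= 2;  /* shrink until the address space can satisfy it */
    }
    return 1;
}
```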

memset/memcpy on mmap region fails

走远了吗. submitted on 2019-12-11 08:51:58
Question: I'm trying to load a statically linked program from another one and execute it. My steps are: parse the ELF; parse the segments from the program headers; for each PT_LOAD, load it; jump to the starting address. If elf_bytes is the mmap'ed ELF file, loading a PT_LOAD segment is load(&p, elf_bytes + p.p_offset). The load function: int load(const Elf64_Phdr *phdr, const void *elf_bytes_for_phdr) { fprintf(stderr, "loading phdr of type %x from 0x%x to +=%zu bytes\n", phdr->p_type, phdr->p_vaddr, phdr
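A sketch of a working load(), reusing the question's signature: the usual reason memset/memcpy faults here is mapping the segment without PROT_WRITE, or not page-aligning p_vaddr before mmap(). The page size is assumed to be 4096; a real loader would also mprotect() to the segment's final permissions afterwards.

```c
#include <elf.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>

static int load(const Elf64_Phdr *phdr, const void *seg_bytes) {
    const uintptr_t pagesz = 4096;             /* assumed page size */
    uintptr_t start = phdr->p_vaddr & ~(pagesz - 1); /* page-align  */
    size_t len = (phdr->p_vaddr + phdr->p_memsz) - start;

    /* Map writable first so memcpy/memset can't fault. */
    void *m = mmap((void *)start, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    if (m == MAP_FAILED) return -1;

    memcpy((void *)(uintptr_t)phdr->p_vaddr, seg_bytes, phdr->p_filesz);
    /* Zero the bss tail: p_memsz may exceed p_filesz. */
    memset((char *)(uintptr_t)phdr->p_vaddr + phdr->p_filesz, 0,
           phdr->p_memsz - phdr->p_filesz);
    return 0;
}
```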

multiprocessing.RawArray operation

半世苍凉 submitted on 2019-12-11 08:48:20
Question: I read that a RawArray can be shared between processes without being copied, and I wanted to understand how that is possible in Python. I saw in sharedctypes.py that a RawArray is constructed from a BufferWrapper from heap.py, then zeroed with ctypes.memset. BufferWrapper is made of an Arena object, which itself is built on an mmap (or 100 mmaps on Windows; see line 40 in heap.py). I read that the mmap system call is what actually allocates the memory on Linux/BSD, and the Python module uses
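A sketch of the no-copy sharing in action, assuming the default fork start method on Linux/BSD: the RawArray lives in an anonymous mmap, so the forked child inherits the very same shared pages rather than receiving a pickled copy.

```python
from multiprocessing import Process, RawArray

def worker(arr):
    arr[0] = 42            # writes straight into the shared mapping

if __name__ == "__main__":
    shared = RawArray("i", 10)     # ten C ints, zero-initialized
    p = Process(target=worker, args=(shared,))
    p.start()
    p.join()
    print(shared[0])               # 42: the parent sees the change
```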

Excessive synchronizing of a memory-mapped file in an Apache module

强颜欢笑 submitted on 2019-12-11 05:21:15
Question: I am currently working on an Apache module that uses a large mmap'ed file to share data between processes. The file is created at start-up and removed when the server shuts down (I may choose to keep it at a later stage). I have implemented this using the Apache APR libraries, and it works well, at least for smaller files. When the size of the memory-mapped file increases, however (there is still enough RAM to cache it while the server is running), the system at times virtually grinds to a halt as it
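A sketch of one common mitigation, not specific to APR: flush small dirty ranges asynchronously as you go, so the kernel never accumulates a huge backlog of dirty mapped pages to write back at once, which is the usual cause of such stalls. The helper name is hypothetical.

```c
#include <stddef.h>
#include <sys/mman.h>

/* Hypothetical helper: call after updating a region of the shared
   mapping. MS_ASYNC schedules writeback without blocking the worker;
   MS_SYNC would stall until the data reaches the disk. */
static void flush_region(void *addr, size_t len) {
    msync(addr, len, MS_ASYNC);
}
```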