mmap

mmap: memory-mapped files

Submitted by 心不动则不痛 on 2019-12-04 17:57:42

Introduction: Memory-mapping a file uses the operating system's virtual memory to access the data on the filesystem directly, instead of going through the normal I/O functions. Memory mapping usually improves I/O performance because it needs no separate system call for each access and copies no data between buffers: the kernel and the user application both access the memory directly. A memory-mapped file can be treated as a mutable string or as a file-like object, depending on what you need. A mapped file supports the ordinary file API methods such as close, flush, read, readline, seek, tell and write; it also supports the string API, offering slicing and methods such as find.

Reading a file: the mmap function creates a memory-mapped file. Its first argument is a file descriptor, which may come from the fileno method of a file object or from os.open; the caller is responsible for opening the file before invoking mmap and for closing it when it is no longer needed. The second argument is the size, in bytes, of the portion of the file to map. If the size is 0, the entire file is mapped; if the size is greater than the current size of the file, the file is extended. Note: Windows does not support a zero-length mapping. Both platforms support an optional access argument: ACCESS_READ means read-only access; ACCESS_WRITE means "write-through", where assignments into memory are written directly to the file; ACCESS_COPY means copy-on-write, where changes stay in memory and are not written back to the file.

A minimal read example (the filename is a placeholder):

    import mmap

    # Map the whole file (length 0) read-only; 'sample.txt' must already exist.
    with open('sample.txt', 'rb') as f:
        m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        print(m.read(10))  # the first 10 bytes of the file
        m.close()

Linux: how to check the largest contiguous address range available to a process

Submitted by 天大地大妈咪最大 on 2019-12-04 16:55:53

I want to enter the pid at the command line and get back the largest contiguous address space that has not been reserved. Any ideas? Our 32-bit app, running on 64-bit RHEL 5.4, craps out after running for a while, say 24 hours. At that point it is only up to 2.5 GB of memory use, but we get out-of-memory errors. We think it is failing to mmap large files because the app's memory space is fragmented. I wanted to go out to the production servers and just test that theory.

Slightly nicer version of my above comment:

    #!perl -T
    use warnings;
    use strict;

    scalar(@ARGV) > 0 or die "Use: $0 <pid>";
    my $pid
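The same idea as a minimal C sketch: walk /proc/<pid>/maps and report the largest gap between consecutive mappings. The 0xffffffff ceiling assumes a 32-bit process like the one described above and would need adjusting for other address-space layouts.

    #include <stdio.h>

    /* Largest unmapped gap in a process address space, from /proc/<pid>/maps. */
    int main(int argc, char *argv[])
    {
        if (argc != 2) {
            fprintf(stderr, "Use: %s <pid>\n", argv[0]);
            return 1;
        }

        char path[64];
        snprintf(path, sizeof path, "/proc/%s/maps", argv[1]);
        FILE *maps = fopen(path, "r");
        if (!maps) { perror("fopen"); return 1; }

        unsigned long long prev_end = 0, best = 0, start, end;
        char line[512];
        while (fgets(line, sizeof line, maps)) {
            if (sscanf(line, "%llx-%llx", &start, &end) != 2)
                continue;
            if (start - prev_end > best)     /* gap before this mapping */
                best = start - prev_end;
            prev_end = end;
        }
        fclose(maps);

        if (0xffffffffULL - prev_end > best) /* gap after the last mapping */
            best = 0xffffffffULL - prev_end;

        printf("largest contiguous free range: %llu bytes (%llu MiB)\n",
               best, best >> 20);
        return 0;
    }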

Setting an fmemopen'ed file descriptor to be the standard input for a child process

Submitted by 匆匆过客 on 2019-12-04 12:09:31

I have an fmemopen file descriptor (pointing to a buffer in the parent) in Linux, and I would like to be able, in C, to set this file descriptor as the standard input for a child process (whose code I have no access to). Is this possible? If so, how do I do it? I would like to avoid having to write to disk if at all possible.

This is not possible. Inheriting stdin/out/err is based purely on file descriptors, not stdio FILE streams. Since fmemopen does not create a file descriptor, it cannot become a new process's stdin/out/err or be used for inter-process communication in any way.
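The standard in-memory alternative is a pipe: the parent writes the buffer into the pipe's write end and the child receives the read end as its stdin. A minimal sketch, assuming the buffer is smaller than the pipe capacity so the single write() cannot block:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        const char buf[] = "hello from the parent\n";  /* the in-memory data */
        int fds[2];

        if (pipe(fds) < 0) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }

        if (pid == 0) {                      /* child */
            dup2(fds[0], STDIN_FILENO);      /* the read end becomes stdin */
            close(fds[0]);
            close(fds[1]);
            execlp("cat", "cat", (char *)NULL);  /* stand-in for the real child */
            perror("execlp");
            _exit(127);
        }

        close(fds[0]);                       /* parent keeps only the write end */
        write(fds[1], buf, sizeof buf - 1);  /* must fit the pipe capacity */
        close(fds[1]);                       /* EOF on the child's stdin */
        waitpid(pid, NULL, 0);
        return 0;
    }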

Why mmap cannot allocate memory?

Submitted by China☆狼群 on 2019-12-04 10:51:27

Question: I ran the program with root privilege but it keeps complaining that mmap cannot allocate memory. Code snippet is below:

    #define PROTECTION (PROT_READ | PROT_WRITE)
    #define LENGTH (4*1024)

    #ifndef MAP_HUGETLB
    #define MAP_HUGETLB 0x40000
    #endif

    #define ADDR (void *) (0x0UL)
    #define FLAGS (MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB)

    int main (int argc, char *argv[]) {
        ...
        // allocate a buffer with the same size as the LLC using huge pages
        buf = mmap(ADDR, LENGTH, PROTECTION, FLAGS, 0, 0);
        if (buf == MAP_FAILED) {
            perror("mmap");
            exit(1);
        }
        ...
    }
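A common cause of this ENOMEM is an empty huge-page pool: MAP_HUGETLB only draws from pages reserved in advance (for example via /proc/sys/vm/nr_hugepages). A small sketch to inspect the pool before mapping; it only reads /proc/meminfo:

    #include <stdio.h>
    #include <string.h>

    /* Print the kernel's huge-page counters. If HugePages_Free is 0,
     * an mmap() with MAP_HUGETLB fails with ENOMEM. */
    int main(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        if (!f) { perror("fopen"); return 1; }

        char line[256];
        while (fgets(line, sizeof line, f))
            if (strncmp(line, "HugePages", 9) == 0)
                fputs(line, stdout);

        fclose(f);
        return 0;
    }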

A close look at mmap: what it is, why it exists, and how to use it (repost)

Submitted by 这一生的挚爱 on 2019-12-04 08:43:50

Contents: mmap basic concepts; how mmap memory mapping works; differences between mmap and regular file I/O; a summary of mmap's advantages; mmap-related functions; details of using mmap.

mmap basic concepts: mmap is a method of memory-mapping files. It maps a file (or another object) into a process's address space, establishing a one-to-one correspondence between the file's location on disk and a range of virtual addresses in the process's virtual address space. Once this mapping exists, the process can read and write that stretch of memory through ordinary pointers, and the system automatically writes the dirty pages back to the corresponding file on disk, so the file is updated without calling read, write, or other system calls. In the other direction, modifications the kernel makes to this region are reflected directly in user space, which makes it possible to share files between different processes.

A process's virtual address space is made up of multiple virtual memory areas. A virtual memory area is a homogeneous interval of the virtual address space, that is, a contiguous range of addresses with the same properties. The text segment (code), initialized data segment, BSS segment, heap, stack, and memory mappings are each independent virtual memory areas, and the address space that serves memory mappings sits in the free region between the heap and the stack.

The Linux kernel uses the vm_area_struct structure to represent a single virtual memory area. Because virtual memory areas of different kinds differ in function and internal mechanism, a process uses a separate vm_area_struct for each type of area, and the vm_area_struct structures are linked together with lists or tree structures so that the process can reach them quickly.
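A minimal sketch of the pattern described above: map a file MAP_SHARED and edit it through a pointer, leaving the write-back of dirty pages to the kernel. The filename is a placeholder.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("data.bin", O_RDWR);   /* placeholder: an existing file */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0 || st.st_size < 5) { close(fd); return 1; }

        /* MAP_SHARED: stores through p dirty the page cache, and the kernel
         * writes those dirty pages back to the file on its own. */
        char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        close(fd);                           /* the mapping stays valid */

        memcpy(p, "HELLO", 5);               /* edit the file through a pointer */

        msync(p, st.st_size, MS_SYNC);       /* optionally force write-back now */
        munmap(p, st.st_size);
        return 0;
    }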

How is GPIO mapped in memory?

Submitted by 妖精的绣舞 on 2019-12-04 06:50:11

I have recently been browsing the GPIO driver for the Pi 2. I found that user-space Pi 2 GPIO libraries (like RPi.GPIO 0.5.11 for Python) use /dev/mem for the BCM2708 (which begins at 0x20000000, with GPIO beginning at 0x200000 relative to that) to mmap a user-space memory region in order to handle GPIO. But I found that drivers/gpio in the Linux source tree is designed to be handled through /sys/class/gpio/*, and I found nothing like I/O port mapping such as request_io_region and __io_remap. My questions are: how is GPIO for the BCM2708 mapped in memory? Is there another driver? And can I handle GPIO just by reading and writing /sys/class/gpio/*?
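For orientation, a rough sketch of what the /dev/mem approach in those libraries looks like, assuming the BCM2708 peripheral base of 0x20000000 cited in the question (the Pi 2's BCM2836 moved the base to 0x3F000000, so verify it for the actual board):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define PERI_BASE  0x20000000u             /* BCM2708; 0x3F000000 on a Pi 2 */
    #define GPIO_BASE  (PERI_BASE + 0x200000)  /* GPIO register block */
    #define BLOCK_SIZE 4096

    int main(void)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);   /* requires root */
        if (fd < 0) { perror("open /dev/mem"); return 1; }

        volatile uint32_t *gpio = mmap(NULL, BLOCK_SIZE, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, GPIO_BASE);
        if (gpio == MAP_FAILED) { perror("mmap"); return 1; }
        close(fd);

        /* GPLEV0 (offset 0x34) holds the current level of GPIO pins 0-31. */
        printf("GPIO levels 0-31: 0x%08x\n", gpio[0x34 / 4]);

        munmap((void *)gpio, BLOCK_SIZE);
        return 0;
    }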

How to Disable Copy-on-write and zero filled on demand for mmap()

Submitted by 笑着哭i on 2019-12-04 05:22:53

I am implementing the cp (file copy) command using mmap(). For that I mapped the source file in MAP_PRIVATE mode (as I just want to read it) and the destination file in MAP_SHARED mode (as I have to write back the changed contents of the destination file). While doing this I observed a performance penalty due to lots of minor page faults, which occur for two reasons: 1) zero-fill-on-demand while calling mmap(MAP_PRIVATE) for the source file, and 2) copy-on-write while calling mmap(MAP_SHARED) for the destination file. Is there any way to disable zero-fill-on-demand and copy-on-write? Thanks, Harish

There is MAP_POPULATE, a flag that populates (prefaults) the mapping's page tables up front; for a file mapping it causes read-ahead on the file, so later accesses do not block on page faults.
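A hedged sketch of an mmap-based copy that uses MAP_POPULATE (Linux-specific) on the source, so the read side is prefaulted instead of faulting in page by page:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        if (argc != 3) { fprintf(stderr, "Use: %s <src> <dst>\n", argv[0]); return 1; }

        int src = open(argv[1], O_RDONLY);
        int dst = open(argv[2], O_RDWR | O_CREAT | O_TRUNC, 0644);
        if (src < 0 || dst < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(src, &st) < 0) { perror("fstat"); return 1; }
        if (st.st_size == 0) return 0;               /* nothing to copy */
        if (ftruncate(dst, st.st_size) < 0) { perror("ftruncate"); return 1; }

        /* MAP_POPULATE prefaults the source pages and starts read-ahead. */
        char *in  = mmap(NULL, st.st_size, PROT_READ,
                         MAP_PRIVATE | MAP_POPULATE, src, 0);
        char *out = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                         MAP_SHARED, dst, 0);
        if (in == MAP_FAILED || out == MAP_FAILED) { perror("mmap"); return 1; }

        memcpy(out, in, st.st_size);   /* the kernel writes the dirty pages back */

        munmap(in, st.st_size);
        munmap(out, st.st_size);
        close(src);
        close(dst);
        return 0;
    }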

Registering Mapped Linux Character Device Memory with cudaHostRegister Results in Invalid Argument

Submitted by 大城市里の小女人 on 2019-12-04 05:22:25

Question: I'm trying to boost DMA <-> CPU <-> GPU data transfer by: 1. mapping my (proprietary) device's Linux-kernel-allocated memory to user space; 2. registering the latter (the mapped memory) with CUDA using the cudaHostRegister API function. While user-space allocated memory that is mapped to my device's DMA and then registered with CUDA via cudaHostRegister works just fine, trying to register "kmalloced" memory results in an "invalid argument" error returned by cudaHostRegister.
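For reference, a minimal sketch of the path the question reports as working: mmap the device node, then pin the resulting mapping with cudaHostRegister. The device path and length are placeholders; cudaHostRegister expects a page-aligned address and size, which mmap provides.

    #include <cuda_runtime.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = 1 << 20;                    /* placeholder: 1 MiB buffer */
        int fd = open("/dev/mydev", O_RDWR);     /* placeholder device node */
        if (fd < 0) { perror("open"); return 1; }

        /* The driver's mmap handler exposes its DMA buffer to user space. */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Pin the mapping so the GPU can DMA directly to and from it. */
        cudaError_t err = cudaHostRegister(p, len, cudaHostRegisterDefault);
        if (err != cudaSuccess) {
            fprintf(stderr, "cudaHostRegister: %s\n", cudaGetErrorString(err));
            return 1;
        }

        /* ... use the buffer with cudaMemcpy/cudaMemcpyAsync ... */

        cudaHostUnregister(p);
        munmap(p, len);
        close(fd);
        return 0;
    }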

How to memory map (mmap) a linux block device (e.g. /dev/sdb) in Java?

Submitted by 依然范特西╮ on 2019-12-04 05:13:10

I can read/write a Linux block device with Java using java.nio. The following code works:

    Path fp = FileSystems.getDefault().getPath("/dev", "sdb");
    FileChannel fc = null;
    try {
        fc = FileChannel.open(fp, EnumSet.of(StandardOpenOption.READ, StandardOpenOption.WRITE));
    } catch (Exception e) {
        System.out.println("Error opening file: " + e.getMessage());
    }
    ByteBuffer buf = ByteBuffer.allocate(50);
    try {
        if (fc != null)
            fc.write(buf);
    } catch (Exception e) {
        System.out.println("Error writing to file: " + e.getMessage());
    }

However, memory mapping does not work. The following code fails:

Can mmap and gzip collaborate?

Submitted by 杀马特。学长 韩版系。学妹 on 2019-12-04 03:56:11

I'm trying to figure out how to use mmap with a gzip-compressed file. Is that even possible?

    import mmap
    import os
    import gzip

    filename = r'C:\temp\data.gz'
    file = gzip.open(filename, "rb+")
    size = os.path.getsize(filename)
    file = mmap.mmap(file.fileno(), size)
    print file.read(8)

The output data is compressed.

Well, not the way you want. mmap() can be used to access the gzipped file if the compressed data is what you want: mmap() is a system call for mapping disk blocks into RAM, almost as if you were adding swap. You can't map the uncompressed data into RAM with mmap(), as it is not on the disk.