mmap

How to portably extend a file accessed using mmap()

余生颓废 submitted on 2019-11-28 16:20:54
Question: We're experimenting with changing SQLite, an embedded database system, to use mmap() instead of the usual read() and write() calls to access the database file on disk, using a single large mapping for the entire file. Assume that the file is small enough that we have no trouble finding space for this in virtual memory. So far so good. In many cases using mmap() seems to be a little faster than read() and write(), and in some cases much faster. Resizing the mapping in order to commit a write
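A minimal sketch of one portable way to do this (the file name and the grow_mapping helper are made up here for illustration; error handling is abbreviated): extend the file with ftruncate(), drop the old view with munmap(), and map the file again at its new size. On Linux alone, mremap() could grow the mapping in place, but that call is not portable.

/* Sketch only: grow a file and re-establish a single whole-file mapping.
 * Assumes POSIX; error handling is abbreviated. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

static void *grow_mapping(int fd, void *old, size_t old_size, size_t new_size)
{
    if (old != NULL && munmap(old, old_size) != 0)   /* drop the old view first */
        return MAP_FAILED;
    if (ftruncate(fd, (off_t)new_size) != 0)         /* extend the file itself */
        return MAP_FAILED;
    return mmap(NULL, new_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}

int main(void)
{
    int fd = open("demo.db", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    void *base = grow_mapping(fd, NULL, 0, 4096);       /* initial one-page mapping */
    if (base == MAP_FAILED) { perror("map"); return 1; }
    memcpy(base, "hello", 5);

    base = grow_mapping(fd, base, 4096, 8192);          /* grow the file and remap */
    if (base == MAP_FAILED) { perror("grow"); return 1; }

    munmap(base, 8192);
    close(fd);
    return 0;
}

Note that after remapping, the base address may change, so any pointers into the old mapping have to be recomputed from the new base.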

Why mmap() is faster than sequential IO? [duplicate]

橙三吉。 submitted on 2019-11-28 15:59:05
Possible Duplicate: mmap() vs. reading blocks. I heard (read it on the internet somewhere) that mmap() is faster than sequential IO. Is this correct? If so, why is it faster? mmap() is not reading sequentially. mmap() has to fetch the data from disk itself, the same as read() does. The mapped area is not sequential, so no DMA (?). So mmap() should actually be slower than read() from a file? Which of my assumptions above are wrong? Tony Delroy: I heard (read it on the internet somewhere) that mmap() is faster than sequential IO. Is this correct? If so, why is it faster? It can be - there are pros
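As a rough illustration of the two access styles being compared (a sketch only; the file name and buffer size are arbitrary): a read() loop copies every block from the page cache into a user buffer, while mmap() lets the process touch the cached pages directly, at the cost of a page fault on first access.

/* Sketch: sum the bytes of a file two ways. Assumes POSIX; file name is arbitrary. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void)
{
    int fd = open("data.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

    /* 1) read(): each block is copied from the page cache into buf. */
    unsigned long sum1 = 0;
    char buf[65536];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        for (ssize_t i = 0; i < n; i++)
            sum1 += (unsigned char)buf[i];

    /* 2) mmap(): the cached pages are mapped into this address space;
     *    no extra copy, but each first touch can take a page fault. */
    unsigned char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    unsigned long sum2 = 0;
    for (off_t i = 0; i < st.st_size; i++)
        sum2 += p[i];

    printf("read: %lu  mmap: %lu\n", sum1, sum2);
    munmap(p, st.st_size);
    close(fd);
    return 0;
}

Which variant is actually faster depends on the access pattern, readahead, and fault overhead, which is exactly what the answers discuss.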

POJ-2502 (Dijkstra application + shortest path)

∥☆過路亽.° submitted on 2019-11-28 10:39:19
Subway, POJ-2502. Besides the subway stations that are directly connected, every other pair of points in the graph also gets an edge, weighted by walking speed. Remember that the final answer has to be rounded to the nearest integer, otherwise the submission is judged wrong.

#include <iostream>
#include <cstdio>
#include <cstring>
#include <algorithm>
#include <string>
#include <vector>
#include <queue>
#include <cmath>
#include <map>
using namespace std;
typedef pair<int, int> p;
const int INF = 0x3f3f3f3f;
int sx, sy, ex, ey;
struct edge {
    int to;
    double cost;
    int next;
};
struct node {
    double dis;
    int to;
    node() {}
    node(int a, int b) : dis(a), to(b) {}
    bool operator<(const node& t) const { return dis > t.dis; }
};
edge ma[1500005];
int head[220];
int top;        // points to the head node of each adjacency list
double d[220];
int tn;         // number of nodes
int n = 220;    // maximum number of nodes
map<pair

Is there a memory mapping api on windows platform, just like mmap() on linux?

隐身守侯 submitted on 2019-11-28 09:39:48
Is there an API to do memory mapping, just like mmap() on Linux? Ignacio Vazquez-Abrams: File mapping. File mapping is the association of a file's contents with a portion of the virtual address space of a process. The system creates a file mapping object (also known as a section object) to maintain this association. A file view is the portion of virtual address space that a process uses to access the file's contents. File mapping allows the process to use both random input and output (I/O) and sequential I/O. It also allows the process to work efficiently with a large data file, such as a
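A minimal sketch of the Win32 counterpart (the file name is made up and error handling is abbreviated): CreateFile opens the file, CreateFileMapping creates the section object, and MapViewOfFile returns a pointer that plays the role of mmap()'s return value.

/* Sketch: map an existing, non-empty file read/write on Windows.
 * The file name is arbitrary and error handling is abbreviated. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE hFile = CreateFileA("demo.bin", GENERIC_READ | GENERIC_WRITE,
                               0, NULL, OPEN_EXISTING,
                               FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE) { printf("CreateFile failed\n"); return 1; }

    /* The section object; a size of 0,0 means "map the whole file". */
    HANDLE hMap = CreateFileMappingA(hFile, NULL, PAGE_READWRITE, 0, 0, NULL);
    if (hMap == NULL) { printf("CreateFileMapping failed\n"); CloseHandle(hFile); return 1; }

    /* The view: roughly the counterpart of the pointer mmap() returns. */
    char *view = MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0);
    if (view == NULL) { printf("MapViewOfFile failed\n"); return 1; }

    view[0] = 'X';               /* read and write straight through the pointer */

    UnmapViewOfFile(view);       /* counterpart of munmap() */
    CloseHandle(hMap);
    CloseHandle(hFile);
    return 0;
}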

Shared memory

谁说我不能喝 submitted on 2019-11-28 08:10:17
Shared memory is arguably the most useful form of inter-process communication, and it is also the fastest kind of IPC. Two different processes A and B sharing memory means that the same block of physical memory is mapped into the address spaces of both A and B. Process A sees process B's updates to the data in the shared memory immediately, and vice versa. Because several processes share the same memory region, some synchronization mechanism is needed; either mutexes or semaphores will do. An obvious benefit of communicating through shared memory is efficiency: the processes read and write the memory directly, without any copying of data. Mechanisms such as pipes and message queues require four copies of the data between kernel space and user space, while shared memory needs only two [1]: one from the input file into the shared memory region, and one from the shared memory region to the output file. In practice, processes sharing memory do not unmap the region after exchanging a little data and then rebuild it for the next exchange; instead the shared region is kept until communication is finished, so the data stays in shared memory and is not written back to the file. The contents of shared memory are usually written back to the file only when the mapping is removed. Communication through shared memory is therefore very efficient. By default, a child process created with fork does not share a memory region with its parent. This can be verified with a program in which both parent and child add 1 to a global variable named count; the program is as follows:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include
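A minimal sketch of the point being tested (this is not the original program, whose listing is cut off above): an ordinary global is copy-on-write, so each process increments its own copy after fork(), while a counter placed in a MAP_SHARED | MAP_ANONYMOUS mapping created before fork() is incremented in the same physical memory by both.

/* Sketch: plain global vs. shared anonymous mapping across fork(). */
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>

int count = 0;   /* ordinary global: each process gets its own copy after fork() */

int main(void)
{
    /* Counter placed in shared memory before the fork, so both processes see it. */
    int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) { perror("mmap"); return 1; }
    *shared = 0;

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {      /* child */
        count++;         /* bumps only the child's private copy */
        (*shared)++;     /* bumps the shared page */
        return 0;
    }

    wait(NULL);          /* child has finished its increments */
    count++;
    (*shared)++;
    printf("global count = %d, shared count = %d\n", count, *shared);
    /* typically prints 1 and 2: the child's increment of the global is lost */
    munmap(shared, sizeof(int));
    return 0;
}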

Invalid argument for read-write mmap?

自闭症网瘾萝莉.ら submitted on 2019-11-28 07:28:46
I'm getting -EINVAL for some reason, and it's not clear to me why. Here's where I open and attempt to mmap the file:

if ((fd = open(argv[1], O_RDWR)) < 0) {
    fprintf(stderr, "Failed to open %s: %s\n", argv[1], strerror(errno));
    return 1;
}
struct stat statbuf;
if (fstat(fd, &statbuf)) {
    fprintf(stderr, "stat failed: %s\n", strerror(errno));
    return 1;
}
char* fbase = mmap(NULL, statbuf.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
if (fbase == MAP_FAILED) {
    fprintf(stderr, "mmap failed: %s\n", strerror(errno));
    return 1;
}

EDIT: I should add, the error is occurring in the mmap. Turns out

Sharing memory between processes through the use of mmap()

本秂侑毒 submitted on 2019-11-28 06:53:48
I'm on Linux 2.6. I have an environment where two processes simulate (using shared memory) the exchange of data through a simple implementation of the message-passing model. I have a client process (forked from the parent, which is the server) which writes a struct (message) to a memory-mapped region created (after the fork) with: message *m = mmap(NULL, sizeof(message), PROT_READ|PROT_WRITE, MAP_SHARED|MAP_ANONYMOUS, -1, 0) This pointer is then written to a queue (in the form of a linked list) in another shared memory area which is common to the server and client process (because it was created prior
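For context, an anonymous MAP_SHARED region is shared only with processes forked after the mapping already exists, and a pointer is meaningful only inside the address space that created it. A minimal sketch of the usual pattern (the message layout and pool size are made up here): allocate the whole shared area, messages included, before fork(), and pass offsets or indices into that area instead of raw pointers.

/* Sketch: a pool of messages in shared memory created before fork();
 * processes exchange array indices, not pointers. Layout is illustrative. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>

typedef struct {
    int  used;
    char text[64];
} message;

#define POOL_SIZE 16

int main(void)
{
    /* Mapped before fork(), so both processes see the same physical pages. */
    message *pool = mmap(NULL, POOL_SIZE * sizeof(message),
                         PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (pool == MAP_FAILED) { perror("mmap"); return 1; }
    memset(pool, 0, POOL_SIZE * sizeof(message));

    pid_t pid = fork();
    if (pid == 0) {
        /* Client: fill slot 0; the "queue" entry would be the index 0, not &pool[0]. */
        snprintf(pool[0].text, sizeof pool[0].text, "hello from client");
        pool[0].used = 1;
        return 0;
    }

    wait(NULL);                       /* crude synchronization for the sketch */
    if (pool[0].used)
        printf("server read: %s\n", pool[0].text);
    munmap(pool, POOL_SIZE * sizeof(message));
    return 0;
}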

Do I need to keep a file open after calling mmap on it?

喜你入骨 submitted on 2019-11-28 06:41:50
I have a program that maps quite a few (hundreds) of sizable files, 10-100 MB each. I need them all mapped at the same time. At the moment I am calling open followed by mmap at the beginning of the program, followed by munmap and close at the end. Often I have to raise the open-files limit by running ulimit -n before running the program. The question is: do I actually need to keep the files open, or can I open, mmap, close, do some large data processing, then munmap when I'm finished? The man pages of mmap do not seem terribly clear to me on this one. unwind: No, at least not on Linux; it's fine to close the
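A minimal sketch of the open / mmap / close pattern being asked about (the file name is arbitrary): POSIX specifies that closing the descriptor does not unmap the region, so the mapping remains valid until munmap().

/* Sketch: the mapping outlives the file descriptor. File name is arbitrary. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void)
{
    int fd = open("big.dat", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    close(fd);   /* the descriptor is no longer needed; no pressure on ulimit -n */

    /* ... large data processing through `data` ... */
    printf("first byte: %d\n", data[0]);

    munmap(data, st.st_size);
    return 0;
}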

numpy vs. multiprocessing and mmap

旧街凉风 submitted on 2019-11-28 05:56:07
I am using Python's multiprocessing module to process large numpy arrays in parallel. The arrays are memory-mapped using numpy.load(mmap_mode='r') in the master process. After that, multiprocessing.Pool() forks the process (I presume). Everything seems to work fine, except I am getting lines like: AttributeError("'NoneType' object has no attribute 'tell'",) in `<bound method memmap.__del__ of memmap([ 0.57735026, 0.57735026, 0.57735026, 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ], dtype=float32)>` ignored in the unittest logs. The tests pass fine, nevertheless. Any idea what's going on there?

Virtual address space and malloc

假装没事ソ submitted on 2019-11-28 05:36:28
1. Memory layout. So mmap is really much like the heap: both are, in effect, dynamic memory allocation. Strictly speaking, though, the mmap region does not belong to the heap; rather, it competes with the heap for virtual address space. An important concept here is lazy allocation: the physical mapping for an address is established only when that address is actually accessed, and this is the basic idea of Linux memory management. When a user requests memory, the Linux kernel only assigns it a linear region (that is, virtual memory) and does not allocate actual physical memory; only when the user touches that memory does the kernel hand out concrete physical pages, and only then is precious physical memory consumed. The kernel frees physical pages by releasing the linear region, finding the physical pages that correspond to it, and freeing them all. 2. How malloc works and memory fragmentation. The following example illustrates how memory allocation works. Case 1: for a malloc of less than 128K, memory is allocated with brk, pushing _edata toward higher addresses (only virtual space is allocated, with no physical memory behind it and therefore no initialization; the first read or write triggers a page fault, at which point the kernel allocates the corresponding physical memory and sets up the mapping in the virtual address space), as shown in the figure below: 1. When the process starts, the initial layout of its (virtual) memory space is as shown in Figure 1. The mmap memory-mapped files sit between the heap and the stack (for example libc-2.2.93.so and other data files); for simplicity the memory-mapped files are omitted. The _edata pointer (defined inside glibc) points to the highest address of the data segment. 2. After the process calls A = malloc(30K), the memory space looks like Figure 2:
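A small sketch of the behaviour described above (the 30K / 512K sizes are illustrative, and the 128K boundary is glibc's default M_MMAP_THRESHOLD, which can be tuned): a small allocation normally comes from the brk-managed heap, below the current program break, while a large one is served from a separate anonymous mmap region.

/* Sketch: observe where glibc places small vs. large allocations.
 * Assumes Linux/glibc defaults; the 128K threshold may differ. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <unistd.h>

int main(void)
{
    uintptr_t brk_before = (uintptr_t)sbrk(0);   /* current program break */

    char *small = malloc(30 * 1024);    /* < 128K: usually extends the heap via brk */
    char *large = malloc(512 * 1024);   /* >= 128K: usually a private anonymous mmap */

    uintptr_t brk_after = (uintptr_t)sbrk(0);

    printf("program break moved by %lu bytes\n",
           (unsigned long)(brk_after - brk_before));
    printf("small (30K):  %p  %s\n", (void *)small,
           (uintptr_t)small < brk_after ? "inside the brk heap" : "outside the brk heap");
    printf("large (512K): %p  %s\n", (void *)large,
           (uintptr_t)large < brk_after ? "inside the brk heap" : "outside the brk heap (mmap region)");

    free(small);
    free(large);   /* an mmap-backed block is returned to the kernel right away */
    return 0;
}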