mmap

how can I detect whether a specific page is mapped in memory?

Deadly Submitted on 2019-11-29 05:56:23
I would like to detect whether or not a specific page has already been mapped in memory. The goal is to be able to perform this check before calling mmap with a fixed memory address. The following code illustrates what happens in that case by default: mmap silently remaps the original memory pages.

#include <sys/mman.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    int page_size;
    void *ptr;

    page_size = getpagesize();
    /* NB: portable code passes fd = -1 with MAP_ANONYMOUS; Linux ignores it */
    ptr = mmap(0, 10 * page_size, PROT_READ | PROT_WRITE,
               MAP_PRIVATE | MAP_ANONYMOUS, 0, 0);
    if (ptr == MAP_FAILED) {
        printf("map1 failed\n");
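A common way to answer this question (not shown in the excerpt above) is mincore(2), which fails with ENOMEM when the queried range is not mapped. Below is a minimal sketch along those lines; the helper name page_is_mapped is ours, not from the question:

#include <sys/mman.h>
#include <unistd.h>
#include <errno.h>
#include <stdio.h>
#include <stdint.h>

/* Returns 1 if the page containing addr is mapped, 0 if not, -1 on error. */
static int page_is_mapped(void *addr)
{
    unsigned char vec;
    long page_size = sysconf(_SC_PAGESIZE);
    /* mincore requires a page-aligned address, so round down first */
    void *page = (void *)((uintptr_t)addr & ~((uintptr_t)page_size - 1));
    if (mincore(page, (size_t)page_size, &vec) == 0)
        return 1;                     /* mapped (vec also says if resident) */
    return errno == ENOMEM ? 0 : -1;  /* ENOMEM means the range is unmapped */
}

int main(void)
{
    int x;
    printf("stack page mapped: %d\n", page_is_mapped(&x));
    return 0;
}

On Linux 4.17 and later, mmap with MAP_FIXED_NOREPLACE avoids the problem entirely: it fails with EEXIST instead of silently clobbering an existing mapping.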

Memory Management

佐手、 Submitted on 2019-11-29 05:40:01
How is memory managed? To let every process believe it has exclusive use of memory, and to give every process a consistent view of memory, the operating system abstracts physical memory and disk into virtual memory. Virtual and physical memory are both divided into fixed-size pages for management (paging); in virtual memory these are called pages, and in physical memory, page frames. Each process has its own independent virtual address space: user code accesses virtual addresses, which the memory management unit (MMU) on the CPU chip translates into physical addresses via the page table before memory can actually be accessed.

Page table: every process has its own page table (kept in memory) recording the mapping from virtual pages to physical pages. Page tables may be multi-level, trading time for space (in practice, multi-level address translation is not much slower than a single-level table).

Why paging? Loading programs into memory one after another as whole units causes memory fragmentation. Segmentation came later, storing a program by its segments and thereby reducing fragmentation, but plenty still remains. Hence paging, which splits programs into much smaller pages (typically 4 KB) to manage memory. The finer granularity adds some overhead, but in practice the benefits outweigh the cost.

Hardware interaction: to access data through a virtual address, the MMU first checks its TLB cache; on a miss, it walks the page table in memory. Once translation to a physical address succeeds, the L1 cache is consulted for the data, then the L2 cache (and possibly an L3 cache), and finally main memory.

When physical memory runs short: pages not in use are swapped out to the swap partition on disk.

Memory space layout
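To make the translation step above concrete, here is a purely illustrative C snippet (the address is made up) that splits a virtual address into a virtual page number and a page offset, assuming the 4 KB pages mentioned above:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t vaddr = 0x7ffd1234abcdULL;   /* hypothetical virtual address */
    uint64_t page_size = 4096;            /* 4 KB pages, as described */
    uint64_t vpn = vaddr / page_size;     /* virtual page number */
    uint64_t off = vaddr % page_size;     /* offset within the page */
    printf("vpn=0x%llx offset=0x%llx\n",
           (unsigned long long)vpn, (unsigned long long)off);
    return 0;
}

The MMU performs exactly this split in hardware; the page table maps the virtual page number to a physical frame number, and the offset is carried over unchanged.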

Delete / Insert Data in mmap'ed File

倖福魔咒の Submitted on 2019-11-29 05:14:46
Question: I am working on a script in Python that maps a file for processing using mmap(). The task requires me to change the file's contents by replacing data, adding data into the file at an offset, and removing data from within the file (not whiting it out). Replacing data works great as long as the old data and the new data have the same number of bytes:

VDATA = mmap.mmap(f.fileno(), 0)
start = 10
end = 20
VDATA[start:end] = "0123456789"

However, when I try to remove data (replacing the range with "") or
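The underlying limitation is that a memory mapping has a fixed length, so slice assignments must preserve size; inserting or deleting bytes means resizing the file and moving the tail yourself. The question is about Python, but the mechanics are easiest to show in C. A minimal sketch of insertion, assuming a hypothetical file data.bin and minimal error handling: grow the file, map the new size, and shift the tail right:

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    const char *path = "data.bin";        /* hypothetical file */
    int fd = open(path, O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);
    size_t old_len = st.st_size, ins_len = 10, pos = 10;

    /* 1. grow the file so the inserted bytes have somewhere to go */
    if (ftruncate(fd, old_len + ins_len) < 0) { perror("ftruncate"); return 1; }

    /* 2. map the file at its new length */
    char *p = mmap(NULL, old_len + ins_len, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* 3. shift the tail right, then write the new bytes (placeholder) */
    memmove(p + pos + ins_len, p + pos, old_len - pos);
    memset(p + pos, '0', ins_len);

    munmap(p, old_len + ins_len);
    close(fd);
    return 0;
}

Deletion is the mirror image: memmove the tail left, then ftruncate the file shorter after unmapping.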

mmap slower than getline?

给你一囗甜甜゛ Submitted on 2019-11-29 03:58:55
Question: I face the challenge of reading/writing large files (gigabytes) line by line. In many forum entries and sites (including a number of SO posts), mmap was suggested as the fastest option for reading/writing files. However, when I implement my code with both readline and mmap techniques, mmap is the slower of the two. This is true for both reading and writing. I have been testing with files ~600 MB in size. My implementations parse line by line and then tokenize the line. I will present file input only. Here is
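For reference, the usual way to make the mmap side competitive is to avoid per-line copies and scan the mapping in place with memchr, optionally hinting the kernel with madvise. A minimal C sketch of our own (not the poster's code) that counts lines this way:

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <string.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
    int fd = open(argv[1], O_RDONLY);
    struct stat st;
    if (fd < 0 || fstat(fd, &st) < 0) { perror(argv[1]); return 1; }
    if (st.st_size == 0) { close(fd); return 0; }

    char *buf = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }
    madvise(buf, st.st_size, MADV_SEQUENTIAL);  /* hint: sequential access */

    size_t lines = 0;
    for (char *p = buf, *end = buf + st.st_size; p < end; ) {
        char *nl = memchr(p, '\n', end - p);    /* find next newline in place */
        if (!nl) break;
        lines++;
        p = nl + 1;
    }
    printf("%zu lines\n", lines);
    munmap(buf, st.st_size);
    close(fd);
    return 0;
}

If the per-line work instead copies each line out of the mapping, mmap loses its advantage and getline often wins, which matches what the poster observed.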

Android NDK mmap call broken on 32-bit devices after upgrading to Lollipop

别来无恙 Submitted on 2019-11-29 03:58:31
I'm trying to grab 784 MiB of memory. Yes, I know that is a lot for a 32-bit phone, but the following call worked before Android 5.0:

mmap(0, 0x31000000, PROT_NONE, MAP_ANON | MAP_SHARED, -1, 0);

However, on three different devices from different manufacturers, upgrading to Android 5.0 has broken this. I assume this is some change in memory allocation functionality in 5.0; maybe different flags need to be passed in? Here's the error message returned in logcat:

E/libc﹕ mmap fail (pid 9994, tid 10125, size 822083584, flags 0x21, errno 12(Out of memory))

fadden: At the point where the mmap() fails
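errno 12 (ENOMEM) for an anonymous mapping of this size in a 32-bit process usually points at virtual address space exhaustion or fragmentation rather than physical RAM. One way to investigate, sketched below as our own diagnostic (not from the thread), is to binary-search for the largest contiguous reservation mmap will still grant; PROT_NONE reservations consume address space but no physical memory, so the probe is cheap:

#include <sys/mman.h>
#include <stdio.h>

static size_t largest_mapping(void)
{
    size_t lo = 0, hi = (size_t)1 << 31;   /* 2 GiB upper bound */
    while (hi - lo > (1 << 20)) {          /* stop at 1 MiB resolution */
        size_t mid = lo + (hi - lo) / 2;
        void *p = mmap(NULL, mid, PROT_NONE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            hi = mid;                      /* too big: shrink the guess */
        } else {
            munmap(p, mid);
            lo = mid;                      /* fits: try something larger */
        }
    }
    return lo;
}

int main(void)
{
    printf("largest contiguous mapping: ~%zu MiB\n", largest_mapping() >> 20);
    return 0;
}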

Does mmap or malloc allocate RAM?

时光总嘲笑我的痴心妄想 Submitted on 2019-11-29 03:08:49
Question: I know this is probably a stupid question, but I've been looking for a while and can't find a definitive answer. If I use mmap or malloc (in C, on a Linux machine), does either one allocate space in RAM? For example, if I have 2 GB of RAM and wanted to use all available RAM, could I just use a malloc/memset combo, mmap, or is there another option I don't know of? I want to write a series of simple programs that can run simultaneously and keep all RAM used in the process to force swap to be used,
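The short answer is that both reserve virtual address space, and physical RAM is only committed page by page on first touch (demand paging). A minimal sketch of the memset approach using mmap directly; the 1 GiB size is arbitrary:

#include <sys/mman.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    size_t len = (size_t)1 << 30;               /* 1 GiB, chosen arbitrarily */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    /* Up to this point RSS barely grows (check VmRSS in /proc/self/status). */
    memset(p, 1, len);                          /* touch every page: now resident */
    getchar();                                  /* pause so RSS can be inspected  */
    munmap(p, len);
    return 0;
}

Running several such processes until their combined resident size exceeds physical RAM will push the kernel into swapping, which is exactly the experiment the poster describes.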

POSIX Shared Memory

只愿长相守 Submitted on 2019-11-29 01:59:12
Preface: Several inter-process communication mechanisms (pipes, FIFOs, message queues) share the trait of communicating through the kernel (assuming POSIX message queues are implemented in the kernel; the POSIX standard does not mandate how they are implemented). Writing to a pipe, FIFO, or message queue copies data from the process into the kernel, and reading from these IPC channels copies it from the kernel back into the process. This style of IPC therefore usually needs two copies between process and kernel: the communication must be relayed by the kernel.

Shared memory is also a form of IPC, and the fastest one currently available. It works by mapping the same memory region into the address spaces of the different processes sharing it, so those processes no longer need the kernel to communicate; they simply operate on the shared region directly. Unlike other IPC mechanisms, shared memory requires the user to perform synchronization explicitly.

Overview of the mmap family: mmap maps a file or device into the calling process's address space. Once a file is mapped, the process can read and write it directly through that range of virtual addresses, with no further need for read, write, and other system calls, which considerably improves efficiency and code simplicity. The main purposes of mmap are: (1) memory-mapped I/O on regular files, which enables communication between unrelated processes; (2) anonymous memory mappings, for communication between related processes; (3) memory-mapping a POSIX shared memory object created with shm_open, for communication between unrelated processes.
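A minimal sketch of the third use listed above: creating and mapping a POSIX shared memory object. The object name /demo_shm is made up for illustration, and older glibc needs -lrt at link time:

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <string.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* create (or open) a named shared memory object */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(p, "hello from writer");   /* ordinary stores, no write() syscall */

    munmap(p, 4096);
    close(fd);
    /* shm_unlink("/demo_shm") once the object is no longer needed */
    return 0;
}

An unrelated process can open the same object with shm_open("/demo_shm", O_RDWR, 0) and mmap it to see the data; as the text notes, any synchronization (for example a POSIX semaphore) is the user's responsibility.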

Redis Basics

ⅰ亾dé卋堺 Submitted on 2019-11-29 01:04:07
Basic types: String, hash, list, set, sorted set (zset).

Installation: follow the installation steps in the README.

Architecture: Redis is a single-process, single-threaded server, yet it handles a great many concurrent requests. How is it so fast? When we connect with multiple redis-cli clients, each connection first reaches the Linux kernel, which provides the epoll system call; the Redis server then drives its I/O by calling epoll in the kernel.

What is epoll? In the Linux kernel, a client connects via a socket. In the early days, a thread or process would read a file descriptor with read(fd) (<unistd.h>), and because sockets were blocking at that time, the machine could not service other connections while one read was pending. This was the early BIO (blocking I/O) era. The kernel then evolved: a socket's fd could be made non-blocking, but with, say, 1000 fds the user process would have to poll the kernel 1000 times per round. So the kernel introduced the select call, implementing multiplexed NIO. In a later iteration, the kernel added mmap support, opening a virtual shared space that user processes can call into.

mmap? In the mmap shared space, a red-black tree plus a linked list are used (the shared space is not zero-copy
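As a concrete reference for the epoll calls mentioned above, here is a minimal sketch of our own that watches a single descriptor (stdin); an event-driven server such as Redis registers its many socket fds in the same way:

#include <sys/epoll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int ep = epoll_create1(0);                  /* create the epoll instance */
    if (ep < 0) { perror("epoll_create1"); return 1; }

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = 0 };
    if (epoll_ctl(ep, EPOLL_CTL_ADD, 0, &ev) < 0) {   /* register stdin */
        perror("epoll_ctl");
        return 1;
    }

    struct epoll_event out[8];
    int n = epoll_wait(ep, out, 8, 5000);       /* block up to 5 s for events */
    printf("%d fd(s) ready\n", n);
    close(ep);
    return 0;
}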

How does mmap work?

僤鯓⒐⒋嵵緔 Submitted on 2019-11-29 00:59:48
Question: I am working on programs in Linux that need to mmap a file from the hard drive, but I have a question: what can make it fail? For instance, if all free memory regions are fragmented into chunks of only 200 MB each, but I want to mmap a file into 1000 MB of memory, will it succeed? And another question: are there any tools in Linux for reclaiming or defragmenting memory, like the built-in defragmenter in Windows XP? Thanks.

Answer 1: mmap() uses addresses outside your program's heap area, so heap fragmentation isn't a problem, except
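The key point in the answer is that mmap needs contiguous free virtual address space, not contiguous physical memory. You can inspect a process's layout yourself: each line of /proc/self/maps is one mapping, and the gaps between lines are free virtual ranges a large mmap could still use. A small sketch of our own:

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/self/maps", "r");
    if (!f) { perror("/proc/self/maps"); return 1; }
    char line[512];
    while (fgets(line, sizeof line, f))   /* one mapping per line */
        fputs(line, stdout);
    fclose(f);
    return 0;
}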

mmap( ) vs read( )

冷暖自知 Submitted on 2019-11-29 00:48:25
Question: I'm writing a bulk ID3 tag editor in C. ID3 tags are usually at the beginning of an mp3-encoded file, although older (version 1) tags are at the end. The app is designed to accept a directory and a frame ID list from the command line, then recurse through the directory structure updating all the ID3 tags it finds. The user may additionally choose to remove all older (version 1) tags. Another option is to simply display the current tags without performing an update. The directory might contain 2 files
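Since the question weighs mmap() against read(), note that for a small fixed-size region like an ID3v1 tag (the last 128 bytes of the file, beginning with "TAG"), plain pread() is usually the simpler tool and no slower. A rough sketch of our own, not the poster's code:

#include <sys/stat.h>
#include <fcntl.h>
#include <string.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) { fprintf(stderr, "usage: %s file.mp3\n", argv[0]); return 1; }
    int fd = open(argv[1], O_RDONLY);
    struct stat st;
    if (fd < 0 || fstat(fd, &st) < 0 || st.st_size < 128) return 1;

    char tag[128];
    /* ID3v1 tag occupies the final 128 bytes of the file */
    if (pread(fd, tag, sizeof tag, st.st_size - 128) != (ssize_t)sizeof tag)
        return 1;
    printf("ID3v1 tag %s\n", memcmp(tag, "TAG", 3) == 0 ? "present" : "absent");
    close(fd);
    return 0;
}

mmap() tends to pay off when the same large file region is revisited many times; for one-shot reads of small headers and footers across thousands of files, the syscall approach avoids the per-file map/unmap overhead.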