mmap

Understanding Linux Memory Cache and Buffers

谁说胖子不能爱 submitted on 2019-11-27 13:14:06
On Linux systems we often use the free command to check memory usage. On a RHEL6 system, the output of free looks roughly like this:

```
[root@tencent64 ~]# free
             total       used       free     shared    buffers     cached
Mem:     132256952   72571772   59685180          0    1762632   53034704
-/+ buffers/cache:   17774436  114482516
Swap:      2101192        508    2100684
```

The default unit is KB; my server has 128 GB of RAM, so the numbers look large. This is a command nearly everyone who has used Linux knows, yet the more common a command is, the smaller the proportion of people who truly understand it. Understanding of this output generally falls into a few levels:

- No understanding. This person's first reaction is: "Good grief, 70-odd GB of memory used, but I'm hardly running any large programs. Why? Linux is such a memory hog!"
- Thinks they understand. This person typically concludes: "By my professional judgment, only about 17 GB is actually in use, and plenty of memory is still available. buffers/cache is large, which means some processes have been reading and writing files, but that's fine; that memory counts as free."
- Truly understands. Ironically, this person's reaction makes them sound like they know Linux the least. Their reaction is: free
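The second reading above comes from the -/+ buffers/cache line: used minus buffers minus cached (72571772 - 1762632 - 53034704 = 17774436 KB, about 17 GB) is what applications really occupy, and free plus buffers plus cached (59685180 + 1762632 + 53034704 = 114482516 KB) is what is effectively available. A minimal sketch of the same arithmetic against /proc/meminfo, which is where free gets its numbers (the field names are real; the program itself is an illustration, not part of the original article):

```c
#include <stdio.h>
#include <string.h>

/* Read one numeric field (in kB) from /proc/meminfo, e.g. "Buffers". */
static long meminfo_kb(const char *field)
{
    char line[256];
    long val = -1;
    size_t len = strlen(field);
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f)
        return -1;
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, field, len) == 0 && line[len] == ':') {
            sscanf(line + len + 1, "%ld", &val);
            break;
        }
    }
    fclose(f);
    return val;
}

int main(void)
{
    long total   = meminfo_kb("MemTotal");
    long freekb  = meminfo_kb("MemFree");
    long buffers = meminfo_kb("Buffers");
    long cached  = meminfo_kb("Cached");

    /* The two numbers on free's "-/+ buffers/cache" line: */
    printf("really used:      %ld kB\n", total - freekb - buffers - cached);
    printf("really available: %ld kB\n", freekb + buffers + cached);
    return 0;
}
```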

Efficiently reading a very large text file in C++

China☆狼群 submitted on 2019-11-27 11:59:12
I have a very large text file (45 GB). Each line of the text file contains two space-separated 64-bit unsigned integers, as shown below.

```
4624996948753406865 10214715013130414417
4305027007407867230 4569406367070518418
10817905656952544704 3697712211731468838
... ...
```

I want to read the file and perform some operations on the numbers. My code in C++:

```cpp
void process_data(string str)
{
    vector<string> arr;
    boost::split(arr, str, boost::is_any_of(" \n"));
    do_some_operation(arr);
}

int main()
{
    unsigned long long int read_bytes = 45 * 1024 * 1024;
    const char* fname = "input.txt";
    ifstream fin(fname, ios::in
```
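One common way to speed this up is to skip per-line string splitting entirely: map the file and parse the digits in place, with no intermediate copies. A hedged sketch in plain C (do_pair stands in for the question's do_some_operation; it assumes a well-formed file of whitespace-separated decimal integers):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Placeholder for the question's do_some_operation. */
static void do_pair(unsigned long long a, unsigned long long b)
{
    (void)a; (void)b;
}

/* Bounds-checked parse of one decimal integer; skips any separators. */
static const char *parse_u64(const char *p, const char *end,
                             unsigned long long *out)
{
    unsigned long long v = 0;
    while (p < end && (*p < '0' || *p > '9'))
        p++;
    if (p == end)
        return NULL;
    while (p < end && *p >= '0' && *p <= '9') {
        v = v * 10 + (unsigned long long)(*p - '0');
        p++;
    }
    *out = v;
    return p;
}

int main(void)
{
    struct stat sb;
    int fd = open("input.txt", O_RDONLY);
    if (fd < 0 || fstat(fd, &sb) < 0) { perror("input.txt"); return 1; }

    const char *base = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }
    close(fd);  /* the mapping stays valid after close */

    const char *cur = base, *end = base + sb.st_size;
    unsigned long long a, b;
    while ((cur = parse_u64(cur, end, &a)) && (cur = parse_u64(cur, end, &b)))
        do_pair(a, b);

    munmap((void *)base, sb.st_size);
    return 0;
}
```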

Examining mmaped addresses using GDB

你说的曾经没有我的故事 submitted on 2019-11-27 11:54:21
Question: I'm using the driver I posted at Direct Memory Access in Linux to mmap some physical RAM into a userspace address. However, I can't use GDB to look at any of the addresses; i.e., x 0x12345678 (where 0x12345678 is the return value of mmap) fails with the error "Cannot access memory at address 0x12345678". Is there any way to tell GDB that this memory can be viewed? Alternatively, is there something different I can do in the mmap (either the call or the implementation of foo_mmap there) that will
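For context: GDB reads the debuggee's memory via ptrace, and ptrace cannot reach pages a driver has remapped as IO memory unless the driver's vm_operations_struct supplies an access method. A hedged kernel-side sketch of one known fix (generic_access_phys and vm_iomap_memory are real kernel symbols, used by /dev/mem; foo_mmap and FOO_PHYS_BASE are placeholders standing in for the driver in the question):

```c
#include <linux/mm.h>

/* Letting ptrace (and therefore GDB's x command) read the mapping:
   provide an .access handler; generic_access_phys is the one /dev/mem uses. */
static const struct vm_operations_struct foo_vm_ops = {
    .access = generic_access_phys,
};

static int foo_mmap(struct file *filp, struct vm_area_struct *vma)
{
    vma->vm_ops = &foo_vm_ops;
    /* FOO_PHYS_BASE is a placeholder physical address for this sketch */
    return vm_iomap_memory(vma, FOO_PHYS_BASE, vma->vm_end - vma->vm_start);
}
```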

mmap

梦想与她 submitted on 2019-11-27 10:25:44
The mmap system call lets processes share memory by mapping the same regular file. Once a regular file is mapped into a process's address space, the process can access the file just like ordinary memory.

Header: sys/mman.h

```c
void *mmap(void *start, size_t length, int prot, int flags, int fd, off_t offset);
```

Parameters:

- start: starting address of the mapping
- length: length of the mapping
- prot: desired memory protection flags
  - PROT_EXEC: pages may be executed
  - PROT_READ: pages may be read
  - PROT_WRITE: pages may be written
  - PROT_NONE: pages may not be accessed
- flags: the type of the mapped object
  - MAP_FIXED
  - MAP_SHARED: share the mapping with all other processes that map the same object
  - MAP_PRIVATE: create a private copy-on-write mapping; writes to the region do not affect the original file
  - MAP_ANONYMOUS: anonymous mapping, not associated with any file
- fd: if MAP_ANONYMOUS is set, this should be -1 for compatibility
- offset: offset into the mapped object at which the mapping starts

A minimal usage sketch follows the source link below.

Source: https://www.cnblogs.com/yangxingsha/p/11359429.html
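Here is that sketch: the parameters above in use, mapping a regular file MAP_SHARED so that a write through the pointer lands in the file and is visible to other mappers. (The file name datafile is made up for the example, and the file is assumed to already exist and be at least a page long; touching the mapping past EOF would raise SIGBUS.)

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("datafile", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* Map the first 4096 bytes, readable and writable, shared with
       every other process that maps the same file. */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    memcpy(p, "hello", 5);  /* written back to the file, seen by other mappers */

    munmap(p, 4096);
    close(fd);
    return 0;
}
```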

Why mmap() is faster than sequential IO? [duplicate]

◇◆丶佛笑我妖孽 submitted on 2019-11-27 09:25:28
Question (possible duplicate of: mmap() vs. reading blocks): I heard (read it on the internet somewhere) that mmap() is faster than sequential IO. Is this correct? If yes, then why is it faster?

- mmap() is not reading sequentially.
- mmap() has to fetch from the disk itself, the same as read() does.
- The mapped area is not sequential - so no DMA (?).

So mmap() should actually be slower than read() from a file? Which of my assumptions above are wrong?

Answer 1: I
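For intuition, a hedged sketch of the two approaches side by side (error handling omitted for brevity). Both trigger the same disk reads into the page cache; the difference is that the read() version pays one syscall and one copy into a user buffer per block, while the mmap() version touches the page-cache pages directly:

```c
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Sequential read(): one syscall plus one copy into buf per 64 KB block. */
static uint64_t sum_read(const char *path)
{
    char buf[64 * 1024];
    uint64_t sum = 0;
    ssize_t n;
    int fd = open(path, O_RDONLY);
    while ((n = read(fd, buf, sizeof buf)) > 0)
        for (ssize_t i = 0; i < n; i++)
            sum += (unsigned char)buf[i];
    close(fd);
    return sum;
}

/* mmap(): page faults pull the same data into the page cache,
   but there is no copy into a user buffer and no per-block syscall. */
static uint64_t sum_mmap(const char *path)
{
    struct stat sb;
    uint64_t sum = 0;
    int fd = open(path, O_RDONLY);
    fstat(fd, &sb);
    unsigned char *p = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    for (off_t i = 0; i < sb.st_size; i++)
        sum += p[i];
    munmap(p, sb.st_size);
    close(fd);
    return sum;
}
```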

Why doesn't Python's mmap work with large files?

落花浮王杯 submitted on 2019-11-27 07:11:47
[Edit: This problem applies only to 32-bit systems. If your computer, your OS, and your Python implementation are 64-bit, then mmap-ing huge files works reliably and is extremely efficient.]

I am writing a module that, amongst other things, allows bitwise read access to files. The files can potentially be large (hundreds of GB), so I wrote a simple class that lets me treat the file like a string and hides all the seeking and reading. At the time I wrote my wrapper class I didn't know about the mmap module. On reading the documentation for mmap I thought "great - this is just what I needed, I'll
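On a 32-bit build the whole file simply cannot fit in the address space, so the usual workaround is to map one window at a time through the offset argument, which must be page-aligned. A hedged C sketch of the idea (the same windowing scheme a 32-bit Python wrapper would need; huge.bin and the 5 GB target offset are made up, and on a 32-bit build you would compile with -D_FILE_OFFSET_BITS=64 so off_t is 64-bit):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define WINDOW (64 * 1024 * 1024)  /* 64 MB view into the file */

int main(void)
{
    int fd = open("huge.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    long pagesize = sysconf(_SC_PAGESIZE);
    off_t wanted = (off_t)5 * 1024 * 1024 * 1024;  /* byte 5 GB into the file */
    off_t base   = wanted - (wanted % pagesize);   /* mmap offset must be page-aligned */

    char *win = mmap(NULL, WINDOW, PROT_READ, MAP_PRIVATE, fd, base);
    if (win == MAP_FAILED) { perror("mmap"); return 1; }

    printf("byte at 5 GB: %d\n", win[wanted - base]);  /* index within the window */

    munmap(win, WINDOW);
    close(fd);
    return 0;
}
```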

LINUX - mmap()

谁说胖子不能爱 submitted on 2019-11-27 03:53:32
Memory-mapping functions: https://blog.csdn.net/qq_33611327/article/details/81738195 Source: https://www.cnblogs.com/wangqiwen-jer/p/11343071.html

Mmap() an entire large file

浪子不回头ぞ submitted on 2019-11-27 02:52:12
I am trying to "mmap" a binary file (~8 GB) using the following code (test.c).

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

#define handle_error(msg) \
    do { perror(msg); exit(EXIT_FAILURE); } while (0)

int main(int argc, char *argv[])
{
    const char *memblock;
    int fd;
    struct stat sb;

    fd = open(argv[1], O_RDONLY);
    fstat(fd, &sb);
    printf("Size: %lu\n", (uint64_t)sb.st_size);

    memblock = mmap(NULL, sb.st_size, PROT_WRITE, MAP_PRIVATE, fd, 0);
    if (memblock == MAP_FAILED)
        handle_error("mmap");

    for
```
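Two things in that listing commonly bite: %lu is the wrong printf format for uint64_t on many platforms, and on a 32-bit build an 8 GB mapping cannot fit in the address space at all, whatever flags are used. A hedged sketch of the usual adjustments for a 64-bit build (mirrors the question's code; not the asker's final version):

```c
/* build 64-bit, e.g.: gcc -m64 test.c
   (adding -D_FILE_OFFSET_BITS=64 helps fstat on 32-bit, but the 8 GB
   mapping itself still cannot fit in a 32-bit address space) */
#include <inttypes.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat sb;
    if (fstat(fd, &sb) < 0) { perror("fstat"); return 1; }
    printf("Size: %" PRIu64 "\n", (uint64_t)sb.st_size);  /* portable uint64_t format */

    /* PROT_READ is enough for scanning; PROT_WRITE|MAP_PRIVATE also works on a
       read-only fd, but forces copy-on-write pages the scan does not need. */
    const char *memblock = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (memblock == MAP_FAILED) { perror("mmap"); return 1; }

    /* ... scan memblock[0 .. sb.st_size-1] ... */

    munmap((void *)memblock, sb.st_size);
    close(fd);
    return 0;
}
```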

IPC - mmap

限于喜欢 submitted on 2019-11-27 02:29:28
1. Prototypes:

```c
void *mmap(void *addr, size_t length, int prot, int flags, int fd, off_t offset);
void *mmap64(void *addr, size_t length, int prot, int flags, int fd, off64_t offset);
int munmap(void *addr, size_t length);
```

```c
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>
#include <stdio.h>
#include <sys/mman.h>   /* for mmap()/munmap() */

#define file "./tmp"
#define SIZE 1024

int main()
{
    int fd = -1;
    int ret = 0;
    char *ptr = NULL;
    int pid = -1;

    if ((fd = open("/dev/zero", O_RDWR)) < 0) {
        perror("open");
        ret = -1;
        goto __end__;
    }
    if ((ptr = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0)) == MAP_FAILED) {
        perror("mmap");
        ret = -1;
```
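The listing cuts off before the payoff, so here is a self-contained hedged sketch of where this classic pattern goes (my illustration, not the original author's continuation): /dev/zero mapped MAP_SHARED before fork() gives parent and child a genuinely shared page.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define SIZE 1024

int main(void)
{
    int fd = open("/dev/zero", O_RDWR);
    char *ptr = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (ptr == MAP_FAILED) { perror("mmap"); return 1; }
    close(fd);  /* the mapping keeps working after close */

    if (fork() == 0) {                 /* child: write into the shared page */
        strcpy(ptr, "hello from child");
        _exit(0);
    }
    wait(NULL);                        /* parent: wait, then read the child's write */
    printf("parent reads: %s\n", ptr);
    munmap(ptr, SIZE);
    return 0;
}
```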

Do I need to keep a file open after calling mmap on it?

依然范特西╮ submitted on 2019-11-27 01:32:10
Question: I have a program that maps quite a few (hundreds) of sizable files, 10-100 MB each. I need them all mapped at the same time. At the moment I am calling open followed by mmap at the beginning of the program, followed by munmap and close at the end. Often I have to raise the open-file limit with ulimit -n before running the program. The question is: do I actually need to keep the files open, or can I open, mmap, close, do some large data processing, and then munmap when I'm finished? The man pages of mmap
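POSIX answers this directly: mmap() adds its own reference to the file, and closing the descriptor does not unmap the region. A hedged sketch of the open-mmap-close pattern, which keeps only one descriptor in flight at a time no matter how many regions stay mapped:

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map a whole file read-only and close the fd immediately; the mapping
   remains valid until munmap (closing does not unmap it, per POSIX). */
static void *map_file(const char *path, size_t *len_out)
{
    struct stat sb;
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return NULL;
    if (fstat(fd, &sb) < 0) { close(fd); return NULL; }

    void *p = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);  /* no longer counts against ulimit -n */
    if (p == MAP_FAILED)
        return NULL;

    *len_out = sb.st_size;
    return p;
}
```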