mmap

Trap all accesses to an address range (Linux)

心已入冬 Submitted on 2019-12-03 00:23:11
Background: I'm writing a framework to enable co-simulation of RTL running in a simulator and unmodified host software. The host software is written to control actual hardware and typically works in one of two ways:

- Read/write calls through a driver
- Memory-mapped access using mmap

The former case is pretty straightforward: write a library that implements the same read/write calls as the driver and link against it when running a simulation. This works wonderfully, and I can run unmodified production software as stimulus for my RTL simulations. The second case is turning out to be far …
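One way to handle the mmap case, not described in the excerpt but a common trick, is to map the "register" range with PROT_NONE so that every access faults, and to service the faults in a SIGSEGV handler where a co-simulation shim could forward the access to the simulator. A minimal sketch: the 4 KiB anonymous window stands in for the real device mapping, and the handler only logs the first access before opening the page up (fprintf/mprotect inside a signal handler are not strictly async-signal-safe, which is fine for a demo but not for production).

#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static char *window;                 /* stand-in for the device register window */
static size_t window_len = 4096;

static void on_fault(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)ctx;
    char *addr = (char *)si->si_addr;
    if (addr >= window && addr < window + window_len) {
        /* A real co-simulation shim would forward this access to the RTL
         * simulator here; we just log it and make the page accessible so
         * the faulting instruction can complete when it is retried. */
        fprintf(stderr, "trapped access at %p\n", (void *)addr);
        mprotect(window, window_len, PROT_READ | PROT_WRITE);
        return;
    }
    signal(SIGSEGV, SIG_DFL);        /* unrelated fault: crash normally */
}

int main(void)
{
    struct sigaction sa = { .sa_sigaction = on_fault, .sa_flags = SA_SIGINFO };
    sigaction(SIGSEGV, &sa, NULL);

    /* PROT_NONE: any load or store into this range raises SIGSEGV. */
    window = mmap(NULL, window_len, PROT_NONE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (window == MAP_FAILED) { perror("mmap"); return 1; }

    volatile int *reg = (int *)window;
    *reg = 42;                       /* first access is trapped by the handler */
    printf("read back %d\n", *reg);
    return 0;
}

A fuller implementation would re-protect the page after each access (for example by single-stepping) so that every read and write is trapped, not just the first.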

Shortest Path: Floyd

Anonymous (unverified) Submitted on 2019-12-03 00:19:01
#include <bits/stdc++.h>
using namespace std;

const int inf = 100000000;   // "infinity" for vertex pairs that are not yet connected

void floyd(int mmap[][1200], int n, int m)
{
    int a, b, c, i, j, k;
    // Initialization
    for (i = 1; i <= n; i++) {
        for (j = 1; j <= n; j++) {
            if (i == j) {
                mmap[i][j] = 0;      // a vertex is at distance 0 from itself
            } else {
                mmap[i][j] = inf;    // all other pairs start out unreachable
            }
        }
    }
    // Read the m edges
    for (i = 1; i <= m; i++) {
        scanf("%d %d %d", &a, &b, &c);
        if (mmap[a][b] > c) {        // keep the smallest weight if an edge appears more than once
            mmap[a][b] = c;
            mmap[b][a] = c;          // undirected graph
        }
    }
    // Core of the Floyd algorithm
    for (k = 1; k <= n; k++) {
        for (i = 1; i <= n; i++) {
            for (j = 1; j <= n; j++) {
                if (mmap[i][j] > mmap[i][k] + mmap[k][j]) {
                    mmap[i][j] = mmap[i][k] + mmap[k][j];
                }
            }
        }
    }
    // Answer one query
    scanf("%d %d", &a, &b);
    printf("%d\n", mmap[a][b]);
}

Memory Management

Anonymous (unverified) Submitted on 2019-12-02 23:57:01
To let every process believe it has exclusive use of memory, and to give every process a consistent view of memory, the operating system abstracts physical memory and the disk into virtual memory. Both virtual and physical memory are split and managed in fixed-size pages (paging): the units of virtual memory are called pages, those of physical memory page frames. Each process has its own independent virtual address space. User code accesses virtual addresses, which the memory management unit (MMU) on the CPU translates into physical addresses via the page table before memory can actually be accessed.

Page table: every process has its own page table (kept in memory) that stores the mapping from virtual pages to physical pages. Page tables can be multi-level, trading time for space (in practice, multi-level translation is not much slower than a single-level table).

If programs were simply loaded into memory one after another, memory fragmentation would appear. Segmentation was later introduced, storing a program segment by segment to reduce fragmentation, but plenty remained. Hence paging, which manages memory in even smaller pages (typically 4 KB). The finer granularity adds some overhead, but in practice the benefits outweigh the cost.

Accessing data through a virtual address: the MMU first consults its TLB cache; on a miss it walks the page table in memory. Once the address has been translated into a physical one, the L1 cache is checked for the data; on a miss the L2 cache (and possibly an L3 cache) is checked, and finally main memory.

When physical memory runs short: pages that are not in use are swapped out to the swap partition on disk.

Contains: program code and static variables (initialized from the executable, with a fixed size …
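Not part of the original post, but the demand-paging behaviour described above is easy to observe from user space: an anonymous mmap hands out virtual pages immediately, while physical page frames are only assigned when a page is first touched. A minimal sketch (Linux, C) using mincore to show which pages are resident:

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);      /* typically 4096 bytes */
    size_t len = 8 * (size_t)page;
    unsigned char vec[8];

    /* Anonymous mapping: virtual pages exist right away, but physical
     * frames are only allocated on the first access (page fault). */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    mincore(p, len, vec);
    printf("resident pages before touching: ");
    for (int i = 0; i < 8; i++) printf("%d", vec[i] & 1);
    printf("\n");

    p[0] = 1;                /* touch page 0: page fault, frame allocated */
    p[3 * page] = 1;         /* touch page 3 */

    mincore(p, len, vec);
    printf("resident pages after touching:  ");
    for (int i = 0; i < 8; i++) printf("%d", vec[i] & 1);
    printf("\n");

    munmap(p, len);
    return 0;
}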

Shmem vs tmpfs vs mmap [closed]

被刻印的时光 ゝ Submitted on 2019-12-02 23:44:58
Does someone know how well the following 3 compare in terms of speed: shared memory, tmpfs (/dev/shm), mmap (/dev/shm)? Thanks! Read about tmpfs here. The following is copied from that article, explaining the relation between shared memory and tmpfs in …
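For context (my note, not from the linked article): on Linux, POSIX shared memory objects created with shm_open live on the same tmpfs mount as files you create under /dev/shm yourself, so the three options differ mostly in API rather than in the underlying memory. A rough sketch showing the two ways of obtaining a tmpfs-backed mapping; the names /demo_shm and /dev/shm/demo_file are made up for the example, and older glibc needs -lrt for shm_open:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t len = 4096;

    /* POSIX shared memory: the object appears as /dev/shm/demo_shm. */
    int fd1 = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd1, len);
    char *a = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd1, 0);

    /* Plain file on the tmpfs mount: same backing mechanism, opened by path. */
    int fd2 = open("/dev/shm/demo_file", O_CREAT | O_RDWR, 0600);
    ftruncate(fd2, len);
    char *b = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd2, 0);

    strcpy(a, "via shm_open");
    strcpy(b, "via /dev/shm file");
    printf("%s / %s\n", a, b);

    munmap(a, len); munmap(b, len);
    close(fd1); close(fd2);
    shm_unlink("/demo_shm");
    unlink("/dev/shm/demo_file");
    return 0;
}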

Android Performance Optimization: Fundamentals

Anonymous (unverified) Submitted on 2019-12-02 23:36:01
A process's address space runs from 0 to 4 GB. The stack (push/pop) is managed by the operating system and mainly holds function return addresses, function arguments, local variables and the like, so it does not need to be large, typically a few MB. Use of the heap is under the programmer's control, through calls such as malloc, new, free and delete; the heap provides the memory a program needs for its more complex work, so it is comparatively large, typically a few hundred MB up to several GB.

Processes on Android:

(1) Native processes: Linux processes implemented in C/C++ that contain no Dalvik VM instance. The programs under /system/bin/ run as native processes, e.g. /system/bin/surfaceflinger, /system/bin/rild and procrank.

(2) Java processes: Linux processes that instantiate a Dalvik VM, whose entry point is a Java main function. The host process of the Dalvik instance is a Linux process created with the fork() system call, so every Java process on Android is really a Linux process that additionally contains a Dalvik VM instance; as a result, memory allocation in a Java process is more complex than in a native process. As figure 3 shows, most applications in the Android system are Java processes, such as Launcher, InCallUI, Contact, SystemUI and so on.

IPC: mmap Shared Mapping Region

Anonymous (unverified) Submitted on 2019-12-02 23:34:01
Copyright notice: wangdassye https://blog.csdn.net/weixin_43136315/article/details/90513411

mmap shared memory (Shared Memory): maps a region of memory that other processes can also access; the shared region is created by one process, but several processes can access it.

Advantages:
Disadvantages:

Related functions.

Create a mapping region:
void *mmap(void *addr, size_t length, int prot, int flags, int fd, off_t offset);

Release a mapping region:
int munmap(void *addr, size_t length);

Writer side (the excerpt breaks off inside main; see the completed sketch after this listing):

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <string.h>

typedef struct _student {
    int sid;
    char sname[20];
} student;

int main(int argc, char *argv[])
{
    if (argc != 2) {
        printf("./a.out …
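The original writer code is cut off after the argc check. Below is a rough completion of what such a writer typically looks like, my own sketch rather than the author's code: size the file passed on the command line with ftruncate, map it MAP_SHARED, and keep writing a student record into the mapping so a reader process mapping the same file can watch it change.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <string.h>

typedef struct _student {
    int sid;
    char sname[20];
} student;

int main(int argc, char *argv[])
{
    if (argc != 2) {
        printf("./a.out <filename>\n");      /* usage message; exact wording assumed */
        exit(1);
    }

    /* Open/create the backing file and make it large enough for one record. */
    int fd = open(argv[1], O_RDWR | O_CREAT, 0664);
    if (fd < 0) { perror("open"); exit(1); }
    ftruncate(fd, sizeof(student));

    /* Create the shared mapping; MAP_SHARED makes writes visible to readers. */
    student *p = mmap(NULL, sizeof(student), PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); exit(1); }
    close(fd);                               /* the mapping stays valid after close */

    /* Keep updating the record so a reader mapping the same file sees changes. */
    int i = 0;
    while (1) {
        p->sid = i;
        snprintf(p->sname, sizeof(p->sname), "student-%d", i);
        i++;
        sleep(1);
    }

    munmap(p, sizeof(student));              /* not reached in this demo */
    return 0;
}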

Overlapping pages with mmap (MAP_FIXED)

扶醉桌前 Submitted on 2019-12-02 23:23:36
For obscure reasons that are not relevant to this question, I need to resort to MAP_FIXED in order to obtain a page close to where the text section of libc lives in memory. Before reading mmap(2) (which I should have done in the first place), I was expecting to get an error if I called mmap with MAP_FIXED and a base address overlapping an already-mapped area. However, that is not the case. For instance, here is part of /proc/maps for a certain process:

7ffff7299000-7ffff744c000 r-xp 00000000 08:05 654098   /lib/x86_64-linux-gnu/libc-2.15.so

which, after making the following mmap call …
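Not from the original question, but the behaviour it describes is easy to reproduce: with plain MAP_FIXED the kernel silently discards whatever was mapped at the requested address, while the newer MAP_FIXED_NOREPLACE flag (Linux 4.17+, exposed with _GNU_SOURCE) fails with EEXIST instead. A small sketch:

#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 4096;

    /* First mapping, at an address chosen by the kernel. */
    char *a = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (a == MAP_FAILED) { perror("mmap"); return 1; }
    a[0] = 'x';

    /* MAP_FIXED at the very same address does not fail: it silently
     * replaces the existing pages, and the old contents are gone. */
    char *b = mmap(a, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    printf("MAP_FIXED returned %p (requested %p), old byte is now %d\n",
           (void *)b, (void *)a, b[0]);

#ifdef MAP_FIXED_NOREPLACE
    /* Since Linux 4.17, MAP_FIXED_NOREPLACE refuses to clobber an existing
     * mapping and fails with EEXIST instead. */
    void *c = mmap(a, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE, -1, 0);
    if (c == MAP_FAILED)
        printf("MAP_FIXED_NOREPLACE: %s\n", strerror(errno));
#endif

    munmap(a, len);
    return 0;
}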

(Repost) /dev/mem is not that simple

Anonymous (unverified) Submitted on 2019-12-02 22:56:40
When exploiting the remap_pfn_range() validation vulnerability, being familiar with the Linux kernel address-space layout matters a great deal; this article helps in understanding it. See the PoC for CVE-2013-2506: https://github.com/hiikezoe/libfb_mem_exploit

References:
http://unix.stackexchange.com/questions/5124/what-does-the-virtual-kernel-memory-layout-in-dmesg-imply?noredirect=1&lq=1
http://unix.stackexchange.com/questions/4929/what-are-high-memory-and-low-memory-on-linux?rq=1
http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory/
http://unix.stackexchange.com/questions/218507/kernel-address-space-layout
http://www.cnblogs.com/bizhu/archive/2012/10/09/2717303.html

The kernel_phys_address used in the PoC is obtained by reading the /proc/iomem device …
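As a side note (mine, not part of the repost): /proc/iomem lists physical memory resources, including the range labelled "Kernel code", which is the kind of physical address the PoC is after. A minimal sketch that scans for that line; it has to be run as root, since unprivileged reads show the ranges zeroed on modern kernels:

#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/iomem", "r");
    if (!f) { perror("fopen"); return 1; }

    char line[256];
    while (fgets(line, sizeof(line), f)) {
        if (strstr(line, "Kernel code")) {
            unsigned long start, end;
            /* Lines look like "  01000000-01a0afff : Kernel code". */
            if (sscanf(line, " %lx-%lx", &start, &end) == 2)
                printf("kernel code: 0x%lx - 0x%lx\n", start, end);
        }
    }
    fclose(f);
    return 0;
}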

Docker memory limit causes SLUB unable to allocate with large page cache

≯℡__Kan透↙ Submitted on 2019-12-02 21:35:11
Given a process that creates a large Linux kernel page cache via mmap'd files, running in a Docker container (cgroup) with a memory limit causes kernel slab allocation errors:

Jul 18 21:29:01 ip-10-10-17-135 kernel: [186998.252395] SLUB: Unable to allocate memory on node -1 (gfp=0x2080020)
Jul 18 21:29:01 ip-10-10-17-135 kernel: [186998.252402] cache: kmalloc-2048(2412:6c2c4ef2026a77599d279450517cb061545fa963ff9faab731daab2a1f672915), object size: 2048, buffer size: 2048, default order: 3, min order: 0
Jul 18 21:29:01 ip-10-10-17-135 kernel: [186998.252407] node 0: slabs: 135, objs: 1950, free …
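For reference, a rough reconstruction (mine, not the reporter's code) of the kind of workload being described: a process that maps a big file and reads every page, so the resulting page cache is charged against the container's memory cgroup, e.g. one started with docker run --memory=... . The file path is a placeholder.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    const char *path = argc > 1 ? argv[1] : "/data/largefile";   /* placeholder path */

    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    long page = sysconf(_SC_PAGESIZE);
    volatile char sum = 0;
    for (off_t off = 0; off < st.st_size; off += page)
        sum += p[off];               /* each touch pulls another page into the cache */

    printf("touched %lld bytes (checksum byte %d)\n",
           (long long)st.st_size, (int)sum);
    munmap(p, st.st_size);
    close(fd);
    return 0;
}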