mmap

C Delete last n characters from file

不打扰是莪最后的温柔 submitted on 2019-12-05 06:14:37
I need to delete the last n characters from a file using C code. At first I tried using '\b', but that returns a segmentation fault. I have seen interesting answers to similar questions here and here, but I would prefer to use the mmap function for this, if possible. I know it would be simpler to truncate the file by creating a temp file and writing characters to it up to some offset of the original file. The problem is that I don't seem to understand how to use the mmap function to do this; I can't see what parameters I need to pass to it, especially address, length and offset. From
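
Not from the original post: a minimal C sketch of the usual approach. Dropping the last n bytes only needs ftruncate(); the mmap() call is included purely to illustrate the parameters the question asks about (address, length, offset). The file name and n are placeholders.

    /* Sketch: shrink a file by n bytes. ftruncate() alone is enough;
     * mmap() is shown only to illustrate addr/length/offset. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "data.txt";   /* placeholder file name */
        size_t n = 5;                    /* number of trailing bytes to drop */

        int fd = open(path, O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* addr = NULL lets the kernel pick the address, length = whole file,
         * offset = 0 maps from the start (the offset must be page-aligned). */
        char *map = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) { perror("mmap"); return 1; }

        /* ... inspect or modify map[0 .. st.st_size - n - 1] here ... */

        munmap(map, st.st_size);
        if (ftruncate(fd, st.st_size - (off_t)n) < 0) { perror("ftruncate"); return 1; }
        close(fd);
        return 0;
    }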

Does valgrind memcheck support checking mmap

白昼怎懂夜的黑 submitted on 2019-12-05 05:38:35
I am trying valgrind for detecting memory leaks. It works well for heap leaks (i.e., memory allocated with malloc or new). However, does it support checking for mmap leaks on Linux? Thanks. Chang: Not directly; it's very hard to debug. Take a look at valgrind.h: VALGRIND_MALLOCLIKE_BLOCK should be put immediately after the point where a heap block -- that will be used by the client program -- is allocated. It's best to put it at the outermost level of the allocator if possible; for example, if you have a function my_alloc() which calls internal_alloc(), and the client request is put inside internal_alloc(
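
Not part of the original answer: a sketch of how the client requests from valgrind.h mentioned above can wrap an mmap-based allocator so that memcheck tracks those regions like heap blocks. The my_alloc()/my_free() names are hypothetical.

    /* Sketch: wrap an mmap-based allocator with Valgrind client requests. */
    #include <stddef.h>
    #include <sys/mman.h>
    #include <valgrind/valgrind.h>

    void *my_alloc(size_t size)
    {
        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return NULL;
        /* Tell memcheck to treat this region like a heap block:
         * no red zones, and anonymous mappings are zero-filled. */
        VALGRIND_MALLOCLIKE_BLOCK(p, size, 0, 1);
        return p;
    }

    void my_free(void *p, size_t size)
    {
        /* Mark the block as freed before actually unmapping it. */
        VALGRIND_FREELIKE_BLOCK(p, 0);
        munmap(p, size);
    }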

How would I design and implement a non-blocking memory mapping module for node.js

余生颓废 submitted on 2019-12-05 05:33:33
There exists the mmap module for node.js: https://github.com/bnoordhuis/node-mmap/ As the author Ben Noordhuis notes, accessing mapped memory can block, which is why he no longer recommends it and discontinued it. So I wonder: how would I design a non-blocking memory mapping module for node.js? Threading, fibers, something else? Obviously this also raises the question of whether the blocking in node.js would then just happen elsewhere instead of in the request handler. josh3736: When talking about implementing some native facility in a non-blocking fashion, the first place to look is libuv. It is how node's core modules
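
Not from the original answer: a rough C sketch of the libuv approach it points to. A hypothetical native addon could push the page-faulting reads from a mapped region onto libuv's thread pool with uv_queue_work; the struct and callback names here are assumptions, not node-mmap's actual code.

    #include <string.h>
    #include <uv.h>

    typedef struct {
        uv_work_t req;      /* libuv work request (must stay alive until done) */
        const char *mapped; /* base of the mmap'ed region */
        size_t offset;
        size_t length;
        char *out;          /* buffer filled on the worker thread */
    } read_job_t;

    static void do_read(uv_work_t *req)          /* runs on a thread-pool thread */
    {
        read_job_t *job = (read_job_t *)req->data;
        /* Touching the mapped pages may block on page faults, but only this
         * worker thread blocks, not the event loop. */
        memcpy(job->out, job->mapped + job->offset, job->length);
    }

    static void read_done(uv_work_t *req, int status)  /* back on the event loop */
    {
        (void)req; (void)status;
        /* Hand job->out back to JavaScript here (e.g. resolve a promise). */
    }

    static void schedule_read(uv_loop_t *loop, read_job_t *job)
    {
        job->req.data = job;
        uv_queue_work(loop, &job->req, do_read, read_done);
    }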

Java map / nio / NFS issue causing a VM fault: “a fault occurred in a recent unsafe memory access operation in compiled Java code”

為{幸葍}努か submitted on 2019-12-05 02:32:08
I have written a parser class for a particular binary format (nfdump, if anyone is interested) which uses java.nio's MappedByteBuffer to read through files of a few GB each. The binary format is just a series of headers and mostly fixed-size binary records, which are fed out to the caller by calling nextRecord(), which advances the state machine and returns null when it's done. It performs well. It works on a development machine. On my production host it can run for a few minutes or hours, but always seems to throw "java.lang.InternalError: a fault occurred in a recent unsafe memory access

Java mmap fails on Android with “mmap failed: ENOMEM (Out of memory)”

给你一囗甜甜゛ submitted on 2019-12-05 01:20:41
Memory mapping a large file on Android in Java works well. But when mapping more than ~1.5 GB in total, even across multiple mapping calls, it fails with: mmap failed: ENOMEM (Out of memory) See the full discussion here. Note: it does not fail on a server Linux. android:largeHeap="true" is enabled for the application. The following Java code is called a few hundred times, requesting ~1 MB per call: ByteBuffer buf = raFile.getChannel().map(allowWrites ? FileChannel.MapMode.READ_WRITE : FileChannel.MapMode.READ_ONLY, offset, byteCount); to avoid requesting one large contiguous memory chunk, which
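
Not from the original post: the same chunking idea expressed as a C sketch, mapping a large file as many ~1 MB windows instead of one huge contiguous mapping. The chunk size and file name are placeholders; each window still consumes process address space, so the total is still bounded by what the platform allows.

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #define CHUNK (1 << 20)   /* 1 MiB, a multiple of the page size */

    int main(void)
    {
        int fd = open("bigfile.bin", O_RDONLY);   /* placeholder file name */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        size_t nchunks = ((size_t)st.st_size + CHUNK - 1) / CHUNK;
        void **maps = calloc(nchunks, sizeof *maps);

        for (size_t i = 0; i < nchunks; i++) {
            off_t off = (off_t)i * CHUNK;         /* offsets stay page-aligned */
            size_t len = (size_t)(st.st_size - off) < CHUNK
                             ? (size_t)(st.st_size - off) : CHUNK;
            maps[i] = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, off);
            if (maps[i] == MAP_FAILED) { perror("mmap"); return 1; }
        }
        /* ... use the windows, then munmap each one and free(maps) ... */
        return 0;
    }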

Python mmap 'Permission denied' on Linux

喜你入骨 submitted on 2019-12-05 00:18:24
I have a really large file I'm trying to open with mmap, and it's giving me 'Permission denied'. I've tried different flags and modes with os.open, but it's just not working for me. What am I doing wrong?

    >>> import os, mmap
    >>> mfd = os.open('BigFile', 0)
    >>> mfile = mmap.mmap(mfd, 0)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    mmap.error: [Errno 13] Permission denied
    >>>

(Using the built-in open() works as in the Python docs example, but it seems to open more than one handle to the file, in both read and write mode. All I need for the mmap.mmap method is the file number, so
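
Not part of the original post: a C sketch of what is most likely the same failure mode. A descriptor opened read-only cannot back a writable MAP_SHARED mapping, and mmap fails with EACCES ("Permission denied"); a read-only mapping of the same descriptor succeeds. The file name is a placeholder.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("BigFile", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* Requesting write access on a read-only descriptor: fails with EACCES. */
        void *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
            perror("mmap read/write on O_RDONLY fd");

        /* A read-only mapping of the same descriptor works. */
        p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
            perror("mmap read-only");
        else
            munmap(p, st.st_size);

        close(fd);
        return 0;
    }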

Why does munmap need a length parameter?

一笑奈何 submitted on 2019-12-05 00:15:51
I was wondering: why does the size of the mapped region have to be passed as a parameter, given that there can't be more than one mapping starting at the same address (can there?)? Why doesn't the Linux kernel record the start address and length together, instead of making the userspace program remember them? I mean, why not just use the start address as the primary key for the kernel's bookkeeping? One can map, say, 5 pages and later unmap one of them. The information about which pages to unmap is passed as an address and a length, where the length is a multiple of the page size. You can munmap a subrange of memory
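
Not from the original answer: a minimal C sketch of exactly the situation it describes, mapping five pages and then unmapping only the middle one. The length argument is what makes unmapping a subrange possible.

    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        long page = sysconf(_SC_PAGESIZE);
        size_t len = 5 * (size_t)page;

        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Unmap only page 2 (the third page); pages 0,1 and 3,4 stay mapped,
         * so the single mmap call has now become two separate mappings. */
        if (munmap(p + 2 * page, (size_t)page) < 0) { perror("munmap"); return 1; }

        p[0] = 'a';              /* still valid */
        p[4 * page] = 'b';       /* still valid */
        /* Touching p[2 * page] would now fault. */

        munmap(p, 2 * (size_t)page);
        munmap(p + 3 * page, 2 * (size_t)page);
        return 0;
    }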

Ten Questions on Linux Virtual Memory Management (glibc), Part 2

僤鯓⒐⒋嵵緔 submitted on 2019-12-04 23:41:07
Copyright notice: this is an original article by Chen Furong (陈福荣); please credit the source when republishing. Original link: https://www.qcloud.com/community/article/184 Source: Tencent Cloud Community (腾云阁) https://www.qcloud.com/community

Continued from the previous part: Ten Questions on Linux Virtual Memory Management (glibc), Part 1

5. Is memory that you free really released (returned to the OS)?

All of the previous examples share a serious problem: the allocated memory is never released, which is a memory leak. In principle, all memory allocated with malloc/new must be released with free/delete. But is free'd memory really released? The following example makes this clear.

Initial state: as shown in figure (1), the system has already allocated four blocks A, B, C and D, where A, B and D were allocated on the heap and C was allocated with mmap. For simplicity, the figure omits file-mapped regions such as shared libraries.

E = malloc(100k): allocates 100 KB. Since this is less than 128 KB, it is served from the heap; the remaining heap space is insufficient, so the heap top (brk) pointer is extended.

free(A): releases A's memory. In glibc this merely marks the block as available, leaving a hole (fragmentation) in the heap; the memory is not actually returned to the OS. If an allocation of 40 KB or less is requested later, this space can be reused, with the remainder becoming a new, smaller fragment.

free(C): C is larger than 128 KB and was allocated with mmap. Releasing C calls
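
Not part of the original article: a small C sketch of the behaviour described above, assuming glibc's default mmap threshold of 128 KB (tunable via mallopt(M_MMAP_THRESHOLD, ...)). A small block comes from the heap and free() only marks it reusable, while a large block is mmap'ed and free() unmaps it immediately; glibc's malloc_stats() lets you watch this from the outside.

    #include <malloc.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        void *small = malloc(100 * 1024);   /* < 128 KB: served from the heap (brk)  */
        void *large = malloc(200 * 1024);   /* > 128 KB: served by a private mmap    */

        puts("--- after allocating 100 KB and 200 KB ---");
        malloc_stats();                      /* glibc: prints arena/mmap usage to stderr */

        free(small);   /* only marked free inside the heap; brk is usually not lowered */
        free(large);   /* the mmap'ed block is munmap'ed and returned to the OS now    */

        puts("--- after freeing both ---");
        malloc_stats();

        /* malloc_trim(0) asks glibc to hand free space at the top of the heap
         * back to the OS. */
        malloc_trim(0);
        return 0;
    }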

Several ways of inter-process communication in Linux C

元气小坏坏 submitted on 2019-12-04 22:03:08
1. Pipe: the pipe() function
   The simplest mechanism to implement; in practice it is a ring buffer inside the kernel.
   Used for communication between related processes (parent/child, siblings).
   Unidirectional: data can only be read from the read end and written to the write end.
   int fds[2];
   pipe(fds); // output parameter: fds[0] is the read descriptor (like stdin), fds[1] is the write descriptor (like stdout)
2. Named pipe (FIFO): the mkfifo() function
   A basic Linux file type.
   Can be used between unrelated processes.
   Multiple readers and multiple writers are allowed.
   mkfifo("test", 0777); // create a named pipe
   int fd1 = open("test", O_WRONLY); write(fd1, buf, strlen(buf)); // one process writes
   int fd2 = open("test", O_RDONLY); read(fd2, buf, sizeof(buf)); // another process reads
3. Plain file: the open() function
   A child created by fork shares file descriptors with its parent.
   Multiple processes can open the same file.
4. Shared memory mapping: the mmap() function (see the sketch after this list)
   Creates a shared memory region backed by a file.
   The processes are not required to be related.
   void *mmap(void *addr, size_t len, int prot, int flags, int fd, off_t offset);
     addr: start address of the mapping; pass NULL and the kernel chooses one
     len
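
Not part of the original article: a minimal sketch for item 4, using a file-backed MAP_SHARED mapping as shared memory. Here the two processes are related via fork(), but two unrelated processes opening the same file would work the same way. The file name is a placeholder and error handling is abbreviated.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("shared.dat", O_RDWR | O_CREAT, 0644);
        ftruncate(fd, 4096);                        /* give the file one page */

        char *shm = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (shm == MAP_FAILED) { perror("mmap"); return 1; }
        close(fd);                                  /* the mapping stays valid */

        if (fork() == 0) {                          /* child writes */
            strcpy(shm, "hello from the child");
            return 0;
        }
        wait(NULL);                                 /* parent waits, then reads */
        printf("parent read: %s\n", shm);
        munmap(shm, 4096);
        return 0;
    }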

Linux/perl mmap performance

这一生的挚爱 submitted on 2019-12-04 18:17:45
Question: I'm trying to optimize handling of large datasets using mmap. A dataset is in the gigabyte range. The idea was to mmap the whole file into memory, allowing multiple processes to work on the dataset concurrently (read-only). It isn't working as expected, though. As a simple test I simply mmap the file (using Perl's Sys::Mmap module, using the "mmap" sub, which I believe maps directly to the underlying C function) and have the process sleep. When doing this, the code spends more than a minute
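
Not from the original post: roughly the C-level pattern that Sys::Mmap wraps. Each process opens the file and creates its own read-only MAP_SHARED mapping, and the kernel shares the page-cache pages between them; the madvise() hint and the page-stride loop are illustrative assumptions, not something the poster wrote.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : "dataset.bin";  /* placeholder */
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        const char *data = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (data == MAP_FAILED) { perror("mmap"); return 1; }

        /* Optional hint: tell the kernel we will scan the file sequentially. */
        madvise((void *)data, st.st_size, MADV_SEQUENTIAL);

        /* Touch every page once; each access may fault a page in from disk. */
        size_t sum = 0;
        for (off_t i = 0; i < st.st_size; i += 4096)
            sum += (unsigned char)data[i];

        printf("touched all pages, sample sum = %zu\n", sum);
        munmap((void *)data, st.st_size);
        close(fd);
        return 0;
    }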